\begin{definition}[Definition:Bounded Linear Functional/Inner Product Space] Let $\mathbb F$ be a subfield of $\C$. Let $\struct {V, \innerprod \cdot \cdot}$ be an inner product space over $\mathbb F$. Let $\norm \cdot$ be the inner product norm for $\struct {V, \innerprod \cdot \cdot}$. Let $f : V \to \mathbb F$ be a linear functional. We say that $f$ is a '''bounded linear functional''' {{iff}}: :there exists $C > 0$ such that $\cmod {\map f v} \le C \norm v$ for each $v \in V$. \end{definition}
\begin{document} \title[Orbit equivalence types of circle diffeomorphisms] {Orbit equivalence types of circle diffeomorphisms with a Liouville rotation number} \author{Shigenori Matsumoto} \address{Department of Mathematics, College of Science and Technology, Nihon University, 1-8-14 Kanda, Surugadai, Chiyoda-ku, Tokyo, 101-8308 Japan } \email{[email protected] } \thanks{The author is partially supported by Grant-in-Aid for Scientific Research (C) No.\ 20540096.} \subjclass{Primary 37E10, secondary 37E45.} \keywords{circle diffeomorphism, rotation number, Liouville number, orbit equivalence, fast approximation by conjugation} \maketitle \date{\today } \begin{abstract} This paper is concerned with the orbit equivalence types of $C^\infty$ diffeomorphisms of $S^1$, seen as nonsingular automorphisms of $(S^1,m)$, where $m$ is the Lebesgue measure. Given any Liouville number $\alpha$, it is shown that each of the subspaces formed by the type ${\rm II}_1$, ${\rm II}_\infty$, ${\rm III}_\lambda$ ($\lambda>1$), ${\rm III}_\infty$ and ${\rm III}_0$ diffeomorphisms is $C^\infty$-dense in the space of the orientation preserving $C^\infty$ diffeomorphisms with rotation number $\alpha$. \end{abstract} \section{Introduction} Let $(X,{\mathcal B},\mu)$ and $(X',{\mathcal B}',\mu')$ be Lebesgue measure spaces. A map $$f:(X,{\mathcal B},\mu)\to(X',{\mathcal B}',\mu')$$ is called a {\em nonsingular isomorphism} (a nonsingular {\em automorphism} when $(X',{\mathcal B}',\mu')=(X,{\mathcal B},\mu)$) if $f$ is a bimeasurable bijection from a conull set of $X$ onto a conull set of $X'$ and if $f_*\mu$ is equivalent to $\mu'$.
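For orientation, note the elementary computation (standard, and only implicit in the text) showing that a circle diffeomorphism is indeed nonsingular for the Lebesgue measure: if $f$ is an orientation preserving $C^1$ diffeomorphism of $S^1$, then for any Borel set $A$ the change of variables formula gives $(f^{-1})_*m(A)=m(f(A))=\int_A f'(x)\,dx$, whence
$$ \frac{d(f^{-1})_*m}{dm}(x)=f'(x)>0, $$
so $f_*m$ is equivalent to $m$.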
Two nonsingular automorphisms $$f:(X,{\mathcal B},\mu)\to(X,{\mathcal B},\mu)\ \mbox{ and }\ f':(X',{\mathcal B}',\mu')\to(X',{\mathcal B}',\mu')$$ are called {\em orbit equivalent} if there is a measurable isomorphism $$h:(X,{\mathcal B},\mu)\to(X',{\mathcal B}',\mu')$$ which sends the $f$-orbit of almost any point to an $f'$-orbit (regardless of the order within the orbits). A nonsingular automorphism $$f:(X,{\mathcal B},\mu)\to(X,{\mathcal B},\mu)$$ is called {\em ergodic} if any $f$-invariant measurable subset of $X$ is either null or conull. An ergodic nonsingular automorphism $f$ is said to be of type II if there is an $f$-invariant $\sigma$-finite measure $\nu$ which is equivalent to $\mu$. More specifically, $f$ is said to be of type ${\rm II}_1$ (resp.\ of type ${\rm II}_\infty$) if the $f$-invariant measure $\nu$ is finite (resp.\ infinite). It is known that these two cases are mutually exclusive. An ergodic nonsingular automorphism $f$ is said to be of type III if it is not of type II. A type III nonsingular automorphism $f$ is further classified according to the {\em ratio set} $r(f) \subset [0,\infty)$, defined as follows. A number $\alpha\in[0,\infty)$ belongs to $r(f)$ if and only if for any $A\in{\mathcal B}$ with $\mu(A)>0$ and $\epsilon>0$, there are $B\in{\mathcal B}$ and $i\in{\mathbb Z}$ such that $\mu(B)>0$, $B\subset A$, $f^i(B)\subset A$, and for any $x\in B$, the Radon-Nikodym derivative satisfies: $$ \frac{d(f^{-i})_*\mu}{d\mu}(x)\in(\alpha-\epsilon,\alpha+\epsilon). $$ The definition of the ratio set $r(f)$ does not depend on the choice of a measure $\mu$ from its equivalence class $[\mu]$. Moreover it is an invariant of the orbit equivalence class. The ratio set $r(f)$ is a closed subset of $[0,\infty)$, and $r(f)\cap(0,\infty)$ is a multiplicative subgroup of $(0,\infty)$. It is well known, and easy to show, that $f$ is of type II if and only if $r(f)=\{1\}$. For $f$ of type III, we have the following classification.
\begin{itemize} \item $f$ is of type ${\rm III}_\lambda$ for some $\lambda>1$ if $r(f)=\lambda^{\mathbb Z}\cup\{0\}.$ \item $f$ is of type ${\rm III}_\infty$ if $r(f)=[0,\infty)$. \item $f$ is of type ${\rm III}_0$ if $r(f) = \{0,1\}$. \end{itemize} It is known (\cite{Kr}) that each of the sets of ergodic nonsingular transformations of type ${\rm II}_1$, ${\rm II}_\infty$, ${\rm III}_\lambda$ and ${\rm III}_\infty$ consists of a single orbit equivalence class. But the set of the ergodic nonsingular transformations of type ${\rm III}_0$ consists of various classes. Let $F$ be the group of the orientation preserving $C^\infty$ diffeomorphisms of $S^1$. For $\alpha\in{\mathbb R}/{\mathbb Z}$, denote by $F_\alpha$ the subset of $F$ consisting of those diffeomorphisms $f$ whose rotation number $\rho(f)$ is $\alpha$. If $\alpha$ is irrational, then any element $f\in F_\alpha$ is ergodic with respect to the Lebesgue measure $m$ (\cite{H}, p.86, \cite{KH} 12.7). In \cite{Ka}, Y. Katznelson has shown that the diffeomorphisms of any type listed above are $C^\infty$ dense in the union of $F_\alpha$ over irrational $\alpha$. In this paper we focus on a single subset $F_\alpha$. For any $f\in F_\alpha$, $\alpha$ irrational, there is a unique homeomorphism, denoted by $H_f$, such that $H_f(0)=0$ and $f=H_f R_\alpha H_f^{-1}$, where $R_\alpha$ stands for the rotation by $\alpha$. Thus the unique $f$-invariant measure on $S^1$ is given by $(H_f)_*m$. This implies that $(H_f)_*m$ is either equivalent to $m$ or singular to $m$. For a non-Liouville number $\alpha$, it is shown (\cite{Y1}) that $H_f$ is a $C^\infty$ diffeomorphism for any $f\in F_\alpha$. That is, $(H_f)_*m$ is equivalent to the Lebesgue measure $m$, and hence any $f\in F_\alpha$ is of type ${\rm II}_1$. But for a Liouville number $\alpha$, things are quite different. There are $f\in F_\alpha$ for which the unique $f$-invariant measure $(H_f)_*m$ is singular to $m$ (\cite{M1,M2}). Such $f$ can never be of type ${\rm II}_1$.
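Recall the classical example of a Liouville number (standard, and included here only for illustration): the Liouville constant
$$ \alpha=\sum_{k=1}^{\infty}10^{-k!}. $$
Truncating the series at $k=n$ yields a rational $p/q$ with $q=10^{n!}$ and
$$ \Bigl|\alpha-\frac{p}{q}\Bigr|=\sum_{k=n+1}^{\infty}10^{-k!}\leq 2\cdot10^{-(n+1)!}=2\,q^{-(n+1)}, $$
so $\abs{\alpha-p/q}<\epsilon q^{-N}$ holds for any prescribed $N$ and $\epsilon>0$ once $n$ is large enough.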
Denote by $F_\alpha(\rm{T})$ the subset of $F_\alpha$ consisting of diffeomorphisms of type T. The main result of this paper is the following. \begin{Theorem} \label{main} For any Liouville number $\alpha$, each of the subsets $F_\alpha({\rm II}_1)$, $F_\alpha({\rm II}_\infty)$, $F_\alpha({\rm III}_\lambda)$ for any $\lambda>1$, $F_\alpha({\rm III}_\infty)$ and $F_\alpha({\rm III}_0)$ forms a $C^\infty$-dense subset of $F_\alpha$. \end{Theorem} The key fact for the proof is the result in \cite{Y2} which states that even for a Liouville number $\alpha$, the subset of elements $f\in F_\alpha$ such that $H_f$ is a $C^\infty$ diffeomorphism is $C^\infty$ dense in $F_\alpha$. Since the $C^\infty$ closure of our subset $F_\alpha({\rm T})$ is invariant under conjugation by any element $f\in F$, it suffices to show the following proposition. \begin{proposition} \label{p1} For any $r\in{\mathbb N}$, there is an element $f\in F_\alpha({\rm T})$ such that $d_r(f,R_\alpha)<2^{-r}$, where $d_r$ is the $C^r$ distance and $\rm T=II_\infty,\ III_\lambda,\ III_\infty,\ III_0$. \end{proposition} Proposition \ref{p1} is proved by the method of fast approximation by conjugation with estimates, developed in \cite{FS}. This is a quantitatively refined version of the method of successive approximations originated by D. Anosov and A. Katok \cite{AK} in the early 1970s. \noindent {\sc Acknowledgement}: The author expresses his gratitude to Masaki Izumi, who drew his attention to this problem. \section{Fast approximation method} We assume throughout that $0<\alpha<1$ is a Liouville number, i.e.\ for any $N\in{\mathbb N}$ and $\epsilon>0$, there is $p/q$ ($p,q\in{\mathbb N}$, $(p,q)=1$) such that \begin{equation}\label{e11} \abs{\alpha-p/q}<\epsilon q^{-N}. \end{equation} To prove Proposition \ref{p1} for $F_\alpha({\rm T})$, we will actually show the next proposition.
\begin{proposition}\label{p2} For any $r\in{\mathbb N}$, there are sequences $\alpha_n=p_n/q_n\in{\mathbb Q}\cap(0,1)$, $(p_n,q_n)=1$, and $h_n\in F$ ($n\in{\mathbb N}$) such that the following {\rm (i) $\sim$ (v)} hold. Define $H_0={\rm Id}$, $f_0=R_{\alpha_1}$, and for any $n\in{\mathbb N}$ $$ H_n=h_1\cdots h_n\ \mbox{ and }\ f_n=H_n R_{\alpha_{n+1}}H_n^{-1}.$$ \noindent {\rm (i)} $\alpha_n\to\alpha$. \\ {\rm (ii)} $R_{\alpha_n}$ commutes with $h_n$. \\ {\rm (iii)} $H_n$ converges uniformly to a homeomorphism $H$. \\ {\rm (iv)} $$\abs{\alpha-\alpha_1}<2^{-r-1},\ \mbox{and}\ \ d_{n+r}(f_{n-1},f_{n})<2^{-n-r-1},\ \ \forall n\geq1.$$ {\rm (v)} The limit $f$ of $\{f_n\}$ is an element of $F_\alpha({\rm T})$. \end{proposition} Notice that (iv) implies that the limit $f$ of $f_n$ is a $C^\infty$ diffeomorphism such that $d_r(f,R_\alpha)<2^{-r}$: indeed $$ d_r(f,R_\alpha)\leq d_r(f_0,R_\alpha)+\sum_{n\geq1}d_r(f_{n-1},f_n)<\abs{\alpha_1-\alpha}+\sum_{n\geq1}2^{-n-r-1}<2^{-r-1}+2^{-r-1}=2^{-r}.$$ Condition (ii) is useful to establish (iv), since then \begin{eqnarray*} f_{n-1}-f_n&=&H_nR_{\alpha_n}H_n^{-1}-H_nR_{\alpha_{n+1}}H_n^{-1},\\ f_{n-1}^{-1}-f_n^{-1}&=&H_nR_{-\alpha_n}H_n^{-1}-H_nR_{-\alpha_{n+1}}H_n^{-1}, \end{eqnarray*} and these can be estimated using Lemma \ref{l2} below. Next we shall summarize the inequalities needed to establish Proposition \ref{p2}. All we need are polynomial type estimates whose degree and coefficients can be arbitrarily large. The inequalities below are sometimes far from being optimal. For a $C^\infty$ function $\varphi$ on $S^1$, we define as usual the $C^r$ norm $\Vert\varphi\Vert_r$ ($0\leq r<\infty$) by $$ \Vert\varphi\Vert_r=\max_{0\leq i\leq r}\sup_{x\in S^1}\abs{\varphi^{(i)}(x)}.$$ For $f,g\in F$, define \begin{eqnarray*} \Abs{f}_r&=&\max\{\Vert f-{\rm id}\Vert_r,\ \Vert f^{-1}-{\rm id}\Vert_r,1\},\\ d_r(f,g)&=&\max\{\Vert f-g\Vert_r,\ \Vert f^{-1}-g^{-1}\Vert_r\}. \end{eqnarray*} The term $\Abs{f}_r$ is used to show that $f$ is not so large in the $C^r$-topology.
We have included $1$ in its definition because then it becomes possible to reduce inequalities from the Fa\`a di Bruno formula (\cite{H}, p.42 or \cite{S}) by virtue of the following: $$ \Abs{f}_r^i\leq\Abs{f}_r^r\ \ {\rm if}\ \ i\leq r.$$ On the other hand, $d_r(f,g)$ is useful for showing that $f$ and $g$ are near in the $C^r$-topology. We get the following inequality from the Fa\`a di Bruno formula. Below we denote by $C(r)$ an arbitrary constant which depends only on $r$. \begin{lemma} \label{l1} For $f,g\in F$ we have \begin{eqnarray*} \Vert fg-g\Vert_r&\leq& C(r)\Vert f-{\rm Id}\Vert_r\, \Abs{g}^r_r,\\ \Abs{fg}_r&\leq & C(r)\,\Abs{f}_r^r\,\Abs{g}_r^r. \end{eqnarray*} \qed \end{lemma} The next lemma can be found as Lemma 5.6 of \cite{FS} or as Lemma 3.2 of \cite{S}. \begin{lemma} \label{l2} For $H\in F$ and $\alpha,\beta\in{\mathbb R}/{\mathbb Z}$, $$ d_r(HR_\alpha H^{-1},HR_\beta H^{-1})\leq C(r)\,\Abs{H}_{r+1}^{r+1}\,\abs{\alpha-\beta}. $$ \qed \end{lemma} For $Q\in{\mathbb N}$, denote by $\pi_Q:S^1\to S^1$ the cyclic $Q$-fold covering map. \begin{lemma} \label{l3} Let $h$ be a lift of $\hat h\in F$ by $\pi_Q$ and assume ${\rm Fix}(h)\neq\emptyset$. Then we have for any $r\geq0$ \begin{eqnarray*} \Vert h-{\rm Id}\Vert_r&\leq&\Vert \hat h-{\rm Id}\Vert_r\,Q^{r-1}, \\ \Abs{h}_r&\leq&\Abs{\hat h}_r\,Q^{r-1}. \end{eqnarray*} \end{lemma} {\sc Proof}. Just notice that a lift $\tilde h$ of $h$ to ${\mathbb R}$ is the conjugate of a lift $\tilde {\hat h}$ of $\hat h$ by the homothety by $Q$, i.e.\ $\tilde h(x)=Q^{-1}\tilde{\hat h}(Qx)$. \qed First of all, we fix beforehand a sequence of diffeomorphisms $\hat h_n\in F$ such that $\hat h_n(0)=0$. The sequence $\{\hat h_n\}$ depends upon the type $\rm T=II_\infty,\ III_\lambda,\ III_\infty,\ III_0$, and will be constructed concretely in the later sections.
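Spelled out, the proof of Lemma \ref{l3} rests on the following computation (a routine application of the chain rule): differentiating $\tilde h(x)=Q^{-1}\tilde{\hat h}(Qx)$ repeatedly gives
$$ \tilde h'(x)=\tilde{\hat h}'(Qx),\qquad \tilde h^{(i)}(x)=Q^{\,i-1}\,\tilde{\hat h}^{(i)}(Qx)\ \ (i\geq1), $$
and also $\tilde h(x)-x=Q^{-1}\bigl(\tilde{\hat h}(Qx)-Qx\bigr)$, so the $C^i$ seminorm of $h-{\rm Id}$ is exactly $Q^{\,i-1}$ times the corresponding seminorm of $\hat h-{\rm Id}$; taking the maximum over $0\leq i\leq r$ produces the factor $Q^{r-1}$.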
Next we choose a sequence of rationals $\alpha_n=p_n/q_n$ inductively in a way to be explained shortly, and set $h_n$ to be the lift of $\hat h_n$ by the cyclic $Q_n$-fold covering map $$\pi_{Q_n}:S^1\to S^1$$ such that ${\rm Fix}(h_n)\neq\emptyset$, where $Q_n=K(n)q_n$ and the integer $K(n)$ is chosen to depend only on $\{\hat h_i\}_{i\in{\mathbb N}}$ and $q_1,\cdots,q_{n-1}$. Notice that then condition (ii) of Proposition \ref{p2} is automatically satisfied. We always choose the rationals $\alpha_n$ so as to satisfy $$ \abs{\alpha_{n+1}-\alpha}<\abs{\alpha_{n}-\alpha},\ \ \forall n\in{\mathbb N}.$$ Therefore we have $$ \abs{\alpha_n-\alpha_{n+1}}\leq 2\abs{\alpha-\alpha_n}.$$ In the sequel, we denote any constant which depends only on $r$, $\{\hat h_i\}_{i\in{\mathbb N}}$ and $\alpha_1,\cdots, \alpha_{n-1}$ by $C(n,r)$. Thus $C(n,r)$ depends only on the initial data about $\hat h_i$ and the previous steps of the induction. We also denote any positive integer which depends on $r$, $\{\hat h_i\}_{i\in{\mathbb N}}$ and $\alpha_1,\cdots, \alpha_{n-1}$ by $N(n,r)$. By Lemma \ref{l3}, we have for any $1\leq i< n$, \begin{equation}\label{e1} \Abs{h_i}_{n+r+1}\leq \Abs {\hat h_i}_{n+r+1}Q_i^{n+r}=C(n,r), \end{equation} and \begin{equation}\label{e2} \Abs{h_n}_{n+r+1}\leq \Abs {\hat h_n}_{n+r+1}K(n)^{n+r}q_n^{n+r}=C(n,r)q_n^{N(n,r)}. \end{equation} Of course the two $C(n,r)$'s in (\ref{e1}) and (\ref{e2}) are different. Now we obtain inductively, using Lemma \ref{l1}, that \begin{equation}\label{e3} \Abs{H_n}_{n+r+1}\leq C(n,r)q_n^{N(n,r)}. \end{equation} The terms $C(n,r)$ and $N(n,r)$ in (\ref{e3}) are computed from (\ref{e1}) and (\ref{e2}) by applying Lemma \ref{l1} successively.
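For instance, a single step of the induction leading to (\ref{e3}) might be spelled out as follows: by the second inequality of Lemma \ref{l1},
$$ \Abs{H_n}_{n+r+1}=\Abs{H_{n-1}h_n}_{n+r+1}\leq C(n,r)\,\Abs{H_{n-1}}_{n+r+1}^{\,n+r+1}\,\Abs{h_n}_{n+r+1}^{\,n+r+1}, $$
where $\Abs{H_{n-1}}_{n+r+1}\leq C(n,r)$ by (\ref{e1}) and Lemma \ref{l1}, while $\Abs{h_n}_{n+r+1}$ is estimated by (\ref{e2}); raising (\ref{e2}) to the power $n+r+1$ only changes the values of $C(n,r)$ and $N(n,r)$.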
By Lemma \ref{l2} and (\ref{e3}), \begin{eqnarray}\label{e4} d_{n+r}(f_{n-1},f_n) &=&d_{n+r}(H_nR_{\alpha_n}H_n^{-1},H_nR_{\alpha_{n+1}}H_n^{-1}) \nonumber\\ &\leq&C(n,r)\,q_n^{N(n,r)}\abs{\alpha_n-\alpha_{n+1}} \\ &\leq&C(n,r)\,q_n^{N(n,r)}\abs{\alpha-\alpha_n},\nonumber \end{eqnarray} for some other $C(n,r)$ and $N(n,r)$. In order to obtain (iv) of Proposition \ref{p2}, the rational $\alpha_n=p/q$ need only satisfy $$C(n,r)q^{N(n,r)}\abs{\alpha-p/q}<2^{-n-r-1}, $$ that is, \begin{equation}\label{e5} \abs{\alpha-p/q}<2^{-n-r-1}C(n,r)^{-1}q^{-N(n,r)}. \end{equation} The terms $$ \epsilon=2^{-n-r-1}C(n,r)^{-1}\ \mbox{ and }\ N=N(n,r)$$ are already determined beforehand or by the previous step of the induction. Since $\alpha$ is Liouville, there exists a rational $p/q$ which satisfies (\ref{e11}) for these values of $\epsilon$ and $N$. Taking it as $p_n/q_n$, we establish (iv) for the $n$-th step of the induction. In fact there are infinitely many choices of $p_n/q_n$, which enables us to require more. First of all, since $$ \Vert H_n-H_{n-1}\Vert_0\leq{\rm Lip}(H_{n-1})\Vert h_n-{\rm Id}\Vert_0 \leq{\rm Lip}(H_{n-1})Q_n^{-1}\leq{\rm Lip}(H_{n-1})q_n^{-1},$$ we obtain that $H_n$ converges uniformly to a continuous map $H$ provided we choose $q_n$ large enough compared with the Lipschitz constant ${\rm Lip}(H_{n-1})$ of $H_{n-1}$. It is even easier to arrange that $H_n^{-1}$ converges. Then $H$ is a homeomorphism, establishing condition (iii). There are other requirements for the choice of $\alpha_n$, which will appear in the next section. \section{Type ${\rm III}_\lambda$: the construction of $\hat h_n$} In this and the next two sections, we shall prove Proposition \ref{p2} for $T={\rm III}_\lambda$ and $\lambda>1$. The diffeomorphism $\hat h_n$ ($n\in{\mathbb N}$) is constructed as follows. First consider two affine maps of slopes $\lambda^{1/2}$ and $\lambda^{-1/2}$, depicted in Figure 1. They intersect at the points $(0,0)$ and $(a,1-a)$.
Choose rational numbers $\delta_n>0$ such that $\delta_n\downarrow 0$. (The precise condition will be given later.) We join the two affine maps on the intervals $(-\delta_n,\delta_n)$ and $(a-\delta_n,a+\delta_n)$ using bump functions. See Figure 2. Finally the diffeomorphism $\hat h_n$ is obtained by adding a positive constant so that $\hat h_n$ has a fixed point at $0$. \begin{figure} \caption{ } \end{figure} \begin{figure} \caption{ } \end{figure} Choose four {\em rational} points $\partial^+\hat J^+_n$, $\partial^-\hat J_n^-$, $\partial^+\hat J_n^-$ and $\partial^-\hat J_n^+$, each near the points $-\delta_n$, $\delta_n$, $a-\delta_n$ and $a+\delta_n$, as in Figure 2. Define two intervals by $$ \hat J_n^-=[\partial^-\hat J_n^-,\partial^+\hat J_n^-],\ \ \hat J_n^+=[\partial^-\hat J_n^+,\partial^+\hat J_n^+]. $$ Then the diffeomorphism $\hat h_n$ is affine of slope $\lambda^{\pm1/2}$ on $\hat J_n^\pm$. Next define subintervals $\hat I_n^\pm$ of $\hat J_n^\pm$ in such a way that the connected components of $\hat J_n^\pm-\hat I_n^\pm$ have length $\delta_n$. Their boundary points $\partial^\pm\hat I_n^\pm$ are indicated in Figure 2. Let \begin{equation}\label{e} m(\hat h_n(\hat I_n^-\cup \hat I_n^+))=1-\delta_n', \end{equation} where $m$ stands for the Lebesgue measure as before. If one chooses $\delta_n$ tending rapidly to $0$, and $\hat J_n^\pm$ appropriately, then we have \begin{equation} \label{prod} \prod_{n=1}^\infty (1-\delta_n')>9/10. \end{equation} Let $K'(n)$ be the least common multiple of the denominators of the eight rational points $\partial^\pm\hat J_n^\pm$ and $\partial^\pm\hat I_n^\pm$. Define the number $K(n)$ inductively as follows. Let $K(1)=1$. When defining $K(n)$, we have already chosen $q_{n-1}$. Set $K(n)=q_{n-1}K'(n-1)$, and notice that this choice of $K(n)$ satisfies the condition of the previous section.
As before, set $Q_n=K(n)q_n$ and $h_n$ to be the lift of $\hat h_n$ by the cyclic $Q_n$-fold covering map $\pi_{Q_n}$ such that $0$ is a fixed point of $h_n$. Let $$ J_n^{\pm}=\pi_{Q_n}^{-1}(\hat J_n^\pm),\ \ \ I_n^{\pm}=\pi_{Q_n}^{-1}(\hat I_n^\pm).$$ Now $J_n^{\pm}$ and $I_n^\pm$ each consists of $Q_n$ small intervals. Each of these small intervals, as well as each fixed point of $h_n$, is left fixed by $h_{n+1}$. Thus we have inductively \begin{equation} \label{e31} \mbox{The boundary points of $J_n^\pm$ and $I_n^\pm$ are fixed by $h_m$ if $m>n$.} \end{equation} Finally we also assume the following. \begin{eqnarray} & q_{n+1}^{-1}<2^{-1} Q_n^{-1}\delta_n \label{e32}\\ & \abs{\alpha_n-\alpha}<\delta_n^2Q_n^{-2}q_n^{-1}\label{e33}. \end{eqnarray} One can assume (\ref{e32}) since $q_{n+1}$ can be chosen arbitrarily large compared with the previous data, and (\ref{e33}) since this takes the form (\ref{e11}) of the definition of Liouville numbers. \section{Type ${\rm III}_\lambda$: the proof that $r(f)\subset \lambda^{\mathbb Z}\cup\{0\}$} At this point we have already constructed the sequences $\{h_n\}$ and $\{\alpha_n\}$ in Proposition \ref{p2}. Thus for $$ H_n=h_1\cdots h_n,\ \ f_n=H_n R_{\alpha_{n+1}} H_n^{-1}, $$ we have obtained a $C^\infty$ diffeomorphism $f$ as the limit of $\{f_n\}$ and a homeomorphism $H$ as the limit of $H_n$. They satisfy $$ f=H R_\alpha H^{-1}.$$ What is left is to show that $f$ is a nonsingular automorphism of $(S^1,m)$ of type ${\rm III}_\lambda$. The purpose of this section is to show that $r(f)\subset \lambda^{\mathbb Z}\cup\{0\}$. For this, it suffices to construct a Borel set $\Xi$ (in fact a Cantor set) of positive measure which satisfies the following proposition. \begin{proposition} \label{l41} If $\xi\in\Xi$ and $f^i(\xi)\in\Xi$ for some $i\in{\mathbb Z}$, then $(f^i)'(\xi)\in\lambda^{\mathbb Z}$.
\end{proposition} Notice that the Radon-Nikodym derivative is just the usual derivative: $$ \frac{d(f^{-i})_*m}{dm}(\xi)=(f^i)'(\xi). $$ For $n\in {\mathbb N}$, let $$ X_n=\bigcap_{j=1}^n(I_j^-\cup I_j^+),\ \ Y_n=\bigcap_{j=1}^n(J_j^-\cup J_j^+) \ \mbox{ and }\ \ X=\bigcap_{j=1}^\infty(I_j^-\cup I_j^+). $$ Setting $$ \Xi=H(X),\ \ \ \Xi_n=H(X_n),$$ we shall show that $\Xi$ satisfies Proposition \ref{l41}. Denote $H^{(n+1)}=\lim_{m\to\infty}h_{n+1} h_{n+2} \cdots h_m.$ Thus $H^{(n+1)}$ is a homeomorphism which satisfies $$ H=H_n H^{(n+1)}.$$ For $x\in X_n$, denote by $[x]_n$ the connected component of $x$ in $X_n$. For $x,\ x'\in X_n$, denote $x\sim_n x'$ if $[x]_n=[x']_n$. \begin{lemma} \label{l43} Suppose $n<m$. {\rm (1)} If $x,x'\in X_m$ satisfy $x\sim_m x'$, then $x\sim_n x'$. {\rm (2)} For any $x\in X_n$, $h_{n+1}h_{n+2}\cdots h_{m}(x)\sim_n x$ and $H^{(n+1)}x\sim_n x$. \end{lemma} {\sc Proof}. (1) is obvious, and (2) is a direct consequence of (\ref{e31}). \qed First of all, we have to show the following. \begin{lemma} \label{l42} The set $\Xi$ has positive Lebesgue measure. \end{lemma} {\sc Proof}. Let us compute $m(\Xi_n)$. For $n=1$, $\Xi_1=h_1(I^-_1\cup I_1^+)$, and clearly $$ m(\Xi_1)=m(\hat h_1(\hat I^-_1\cup \hat I_1^+))=1-\delta'_1.$$ For $n=2$, $$ \Xi_2=H((I_1^-\cup I_1^+)\cap(I_2^-\cup I_2^+)) =h_1((I_1^-\cup I_1^+)\cap h_2(I_2^-\cup I_2^+)),$$ by virtue of Lemma \ref{l43}. Also by (\ref{e31}), the conditional probabilities of $h_2(I_2^-\cup I_2^+)$ conditioned to $I_1^-$ and $I_1^+$ coincide with $m(\hat h_2(\hat I_2^-\cup\hat I_2^+))$. On the other hand $h_1$ is affine on $I_1^-$ and $I_1^+$. Therefore $$m(\Xi_2)=m(\hat h_1(\hat I_1^-\cup \hat I_1^+)) m(\hat h_2(\hat I_2^-\cup \hat I_2^+))=(1-\delta'_1)(1-\delta_2'). $$ Successive use of (\ref{e31}) enables us to conclude that in general for $n\in{\mathbb N}$ $$ m(\Xi_n)=\prod_{j=1}^n (1-\delta'_j),$$ and thus $m(\Xi)$ is positive by the assumption (\ref{prod}).
\qed \begin{remark} \label{r41} The above proof shows that any nonempty open subset of $\Xi$ has positive measure. Furthermore all components of $\Xi_n\cap H(I_n^{+})$ have the same measure, as do all components of $\Xi_n\cap H(I_n^{-})$. \end{remark} \begin{lemma}\label{l44} If $f^i(\xi)\in\Xi$ and $\abs{i}\leq n$, then $f_n^i(\xi)\in H_n(Y_n)$. \end{lemma} {\sc Proof}. Let $x=H^{-1}(\xi)$ and $x_n=H_n^{-1}(\xi)=H^{(n+1)}(x)$. By the assumption, the point $R_\alpha^i(x)=H^{-1}(f^i(\xi))$ belongs to $X\subset X_n$. To show the lemma, it suffices to prove that $R_{\alpha_{n+1}}^i(x_n)=H_n^{-1}(f_n^{i}(\xi))$ is a point of $Y_n$. This follows once we show that $ \abs{R_\alpha^i(x)-R^i_{\alpha_{n+1}}(x_n)} $ is smaller than the width $Q_n^{-1}\delta_n$ of the connected components of $Y_n\setminus X_n$. Now we have $$\abs{R^i_\alpha(x)-R^i_{\alpha_{n+1}}(x_n)}\leq \abs{i}\abs{\alpha-\alpha_{n+1}}+\abs{x_n-x}. $$ For $\abs{i}\leq n$, $$\abs{i}\abs{\alpha-\alpha_{n+1}}\leq n\abs{\alpha-{\alpha_{n+1}}}<n\abs{\alpha-{\alpha_{n}}} <n\delta_n^2 Q_n^{-2}q_n^{-1}< 2^{-1}Q_n^{-1}\delta_n $$ by (\ref{e33}), and $$\abs{x_n-x}<q_{n+1}^{-1}<2^{-1}Q_n^{-1}\delta_n $$ by (\ref{e32}). This completes the proof. \qed \begin{lemma}\label{l45} If $\xi,\ f^i(\xi)\in\Xi$ and $\abs{i}\leq n$, then $(f_n^i)'(\xi)\in\lambda^{\mathbb Z}$. \end{lemma} {\sc Proof}. Let $x=H^{-1}(\xi)$ and $x_k=H_k^{-1}(\xi)$ for any $1\leq k\leq n$. Then by Lemma \ref{l43}, since $x\in X\subset X_n$, we have $x_k\in X_k$. Therefore $h_k'(x_k)=\lambda^{\pm 1/2}$. Let $y_n=R_{\alpha_{n+1}}^i(x_n)$ and $y_k=h_{k+1}\cdots h_n(y_n)$. From Lemma \ref{l44}, it follows that $y_n\in Y_n$, and therefore by Lemma \ref{l43} applied to $\{Y_k\}$, we have $y_k\in Y_k$, and in particular $h_k'(y_k)=\lambda^{\pm 1/2}$.
Now we have $$ (f_n^i)'(\xi)=(h_1)'(y_1)\cdots (h_n)'(y_n)\,\cdot\, (h_1)'(x_1)^{-1}\cdots (h_n)'(x_n)^{-1}\in\lambda^{\mathbb Z}.$$ \qed The proof that $\Xi$ satisfies Proposition \ref{l41} is completed by the following lemma. \begin{lemma}\label{l46} If $\xi,\ f^i(\xi)\in\Xi$, then for any large $n$, $(f_n^i)'(\xi)=(f^i)'(\xi)$. \end{lemma} {\sc Proof}. Here we give a proof which also applies when we show the denseness of type ${\rm III}_\infty$ diffeomorphisms in a later section. Since $f^i_n$ converges to $f^i$ in the $C^\infty$ topology and since the ratio $(f_n^i)'(\xi)/(f_{n+1}^i)'(\xi)$ lies in $\lambda^{\mathbb Z}$ by Lemma \ref{l45}, it must be $1$ for any large $n$. \qed \section{Type ${\rm III}_\lambda$: the proof that $\lambda\in r(f)$} Let $f$ and $\Xi$ be as before. Let $$f_\Xi:\Xi\to \Xi$$ be the first return map of $f$. As is well known, the ratio set $r(f_\Xi)$ is the same as $r(f)$. To show that $\lambda\in r(f)$, we need to establish the following proposition. \begin{proposition} \label{p51} For any closed subset $K\subset\Xi$ of positive measure, there exist a point $\xi\in K$ and a number $i\in{\mathbb Z}$ such that $f^i(\xi)\in K$ and $(f^i)'(\xi)=\lambda$. \end{proposition} In fact this is enough for showing that $\lambda\in r(f_\Xi)$. For, given any Borel subset $A\subset\Xi$ of positive measure, there is a closed subset $K$ of positive measure contained in the set of the points of density of $A$. If there are $\xi$ and $i$ as in the proposition for this closed subset $K$, then there is a small interval $J_1$ centered at $\xi$ of radius $r$ such that $(f^i)'=\lambda$ on $J_1\cap \Xi\cap f^{-i}(\Xi)$. Consider the interval $J_2$ centered at $f^i(\xi)$ of radius $(f^i)'(\xi)r$. If $r$ is small enough, then we have $$ m(J_1\cap A)>(1-\epsilon)m(J_1)\ \mbox{ and }\ m(J_2\cap A)>(1-\epsilon)m(J_2),$$ for some small $\epsilon>0$, since $\xi$ and $f^i(\xi)$ are points of density of $A$. Also if $r$ is small, $f^{-i}(J_2)$ almost coincides with $J_1$, and $(f^i)'$ is almost constant on $J_1$.
This shows that $B=(J_1\cap A)\cap f^{-i}(J_2\cap A)$ has positive measure. For any $\eta\in B$, we have $f^i(\eta)\in A$ and $(f^i)'(\eta)=\lambda$, showing that $\lambda\in r(f_\Xi)$. The rest of this section is devoted to the proof of Proposition \ref{p51}. It is easier to pass from $\Xi$ to $X=H^{-1}(\Xi)$. So let $\mu=H^{-1}_*m$, and choose once and for all an arbitrary closed subset $C$ of $X$ of positive $\mu$ measure. We shall show Proposition \ref{p51} for $K=H(C)$. Our overall strategy is as follows. After choosing a large number $n$, we shall construct points $x_k$, $y_k$ inductively for $k\geq n$. They satisfy the following conditions, where $k$ is any number bigger than $n$. \begin{enumerate} \item $x_n$ (resp.\ $y_n$) is the midpoint of a component of $X_n\cap I_n^-$ (resp.\ $X_n\cap I_n^+$), and $x_n\sim_{n-1}y_n$. \item $x_k$ and $y_k$ are the midpoints of components of $X_k$. \item There is $i\in{\mathbb Z}$, independent of $k$, such that $y_k=R_{\alpha_k}^i(x_k)$. \item $x_k\sim_{k-1}x_{k-1}$ and $y_k\sim_{k-1}y_{k-1}$. \item $\mu([x_k]_k\cap C)>0$ and $\mu([y_k]_k\cap C)>0$. \end{enumerate} Let us show that this suffices for our purpose. First of all, the two sequences $\{x_k\}$ and $\{y_k\}$ converge by (4). Let $$ x=\lim_{k\to\infty}x_k,\ \ y=\lim_{k\to\infty}y_k\ \mbox{ and }\ \xi=H(x),\ \ \eta=H(y).$$ By (5), $x$ and $y$ belong to the closed set $C$, and hence $\xi$ and $\eta$ to $K=H(C)$. By (3), we have $R_\alpha^i(x)=y$, and hence $f^i(\xi)=\eta$. For any $k\geq 1$ (not just for $k\geq n$), define $$ x'_k=H_k^{-1}(\xi)=H^{(k+1)}(x). $$ By Lemma \ref{l46}, there is $m$ such that $(f_{m-1}^i)'(\xi)=(f^i)'(\xi)$. Notice that $f_{m-1}^i$ can also be written as $$f_{m-1}^i=H_mR_{\alpha_m}^i H_m^{-1},$$ since $R_{\alpha_m}^i$ commutes with $h_m$.
Define $$\mbox{$y'_m=R_{\alpha_m}^ix'_m$ and $y'_k=h_{k+1}\cdots h_m(y'_m)$ for $k\leq m$.} $$ Then we have $$ (f_{m-1}^i)'(\xi)=(h_1)'(y'_1)\cdots(h_m)'(y_m')\,\cdot\,(h_1)'(x'_1)^{-1} \cdots (h_m)'(x'_m)^{-1}.$$ By Lemma \ref{l43} and (4), we have $$ x'_k\sim_k x_k\ \mbox{ and }\ y'_k\sim_k y_k, $$ for $k\geq n$. This shows that $$h_n'(y_n')\,h_n'(x_n')^{-1}=\lambda$$ by (1), and for $k>n$ \begin{equation}\label{e52} h_k'(x'_k)=h_k'(y_k') \end{equation} by (3). On the other hand we have for $k<n$, $$ x'_k\sim_k x_n\sim_{k}y_n\sim_k y'_k.$$ This shows (\ref{e52}) for $k<n$. The proof that $(f^i)'(\xi)=\lambda$ is now complete. Now we shall construct $x_k$ and $y_k$ for $k\geq n$. \noindent {\sc Case 1 $k=n$}: Consider a point of density of $C$ for the measure $\mu$. Then for any $\epsilon>0$, one can find $n\in {\mathbb N}$ and an interval $J$ bounded by two consecutive fixed points of $h_n$ (a fundamental domain of $R_{1/Q_n}$) such that $$ \mu(J\cap C)>(1-3^{-1}\epsilon)\mu(J). $$ Then we have $$ J\cap X_n=[x_n]_n\cup [y_n]_n,$$ where $x_n$ (resp.\ $y_n$) is the midpoint of $[x_n]_n$ (resp.\ $[y_n]_n$), and $[x_n]_n\subset I_n^-$ and $[y_n]_n\subset I_n^+$. If $n$ is big enough, then $\mu(J\cap X_n)$ is nearly equal to $\mu(J)$ and one may assume $$ \mu([x_n]_{n}\cap C)>(1-2^{-1}\epsilon)\mu([x_n]_n) \ \mbox{ and }\ \mu([y_n]_{n}\cap C)>(1-2^{-1}\epsilon)\mu([y_n]_n). $$ \noindent {\sc Case 2 $k=n+1$}: Let us call an interval $[jq_{n+1}^{-1},(j+1)q_{n+1}^{-1}]$ for some $0\leq j\leq q_{n+1}-1$ a $q_{n+1}$-interval. Now $[x_n]_n$ and $[y_n]_n$ are partitioned into $q_{n+1}$-intervals and one can find a $q_{n+1}$-interval $J_1$ (resp.\ $J_2$) contained in $[x_n]_n$ (resp.\ $[y_n]_n$) such that $$\mu(J_\nu\cap C)>(1-\epsilon)\mu(J_\nu),\ \ \nu=1,2.$$ Then there is $1\leq i\leq q_{n+1}$ such that $R_{\alpha_{n+1}}^i(J_1)=J_2$.
Now since $$\mu(J_\nu\cap X_{n+1})>(1-\delta'_{n+1})\mu(J_\nu), $$ and since $$(1-\epsilon)(1-\delta'_{n+1})>2/3, $$ where $\delta'_{n+1}$ is the constant given by (\ref{e}), either more than a $2/3$ proportion of the components $[z]_{n+1}$ among all the components of $J_\nu\cap X_{n+1}\cap I_{n+1}^+$ satisfy \begin{equation}\label{e53} \mu([z]_{n+1}\cap C)\geq(1-\epsilon)(1-\delta'_{n+1})\mu([z]_{n+1}), \end{equation} or else the same holds among all the components of $J_\nu\cap X_{n+1}\cap I_{n+1}^-$, for each $\nu=1,2$. That is, we can find a component $[x_{n+1}]_{n+1}$ (resp.\ $[y_{n+1}]_{n+1}$) in $J_1$ (resp.\ $J_2$) such that $$R_{\alpha_{n+1}}^i(x_{n+1}) =y_{n+1}$$ and which satisfies (\ref{e53}). Here we have chosen $x_{n+1}$ (resp.\ $y_{n+1}$) to be the midpoint of $[x_{n+1}]_{n+1}$ (resp.\ $[y_{n+1}]_{n+1}$). \noindent {\sc Case 3} {\em higher $k$}: Assume we have obtained $x_k$ and $y_k$ which satisfy (2), (3), (4) and (5')\ \ \ $\mu([x_k]_k\cap C)>(1-\epsilon)\prod_{j=1}^k(1-\delta_j')^2\mu([x_k]_k)$ and\ \ \ $\mu([y_k]_k\cap C)>(1-\epsilon)\prod_{j=1}^k(1-\delta_j')^2\mu([y_k]_k)$. Since $R^i_{\alpha_k}([x_k]_k)=[y_k]_k$ and since by (\ref{e33}) $$ i\abs{\alpha_{k+1}-\alpha_{k}}<2q_{n+1}\abs{\alpha_k-\alpha} <2q_{k}\abs{\alpha_k-\alpha}<2\delta_k^2 Q_k^{-2},$$ $R_{\alpha_{k+1}}^i$ maps the vast majority of $[x_k]_k$ into $[y_k]_k$. More precisely, the conditional probability of the union of the components of $X_{k+1}$ completely contained in $[x_k]_k\cap R_{\alpha_{k+1}}^{-i}([y_k]_k)$, conditioned to $X_{k+1}$, is bigger than $(1-\delta_{k+1}')^2$, and the same is true for $R_{\alpha_{k+1}}^i([x_k]_k)\cap [y_k]_k$.
Since $$ (1-\epsilon)\prod_{j=1}^{k+1}(1-\delta_j')^2>2/3,$$ just as before, there are $[x_{k+1}]_{k+1}$ in $[x_k]_k$ and $[y_{k+1}]_{k+1}$ in $[y_k]_k$ such that $ R_{\alpha_{k+1}}^i(x_{k+1})=y_{k+1} $ and $$ \mu([x_{k+1}]_{k+1}\cap C)>(1-\epsilon)\prod_{j=1}^{k+1}(1-\delta'_j)^2\mu([x_{k+1}]_{k+1}),$$ $$ \mu([y_{k+1}]_{k+1}\cap C)>(1-\epsilon)\prod_{j=1}^{k+1}(1-\delta'_j)^2\mu([y_{k+1}]_{k+1}).$$ This finishes the construction of $\{x_k\}$ and $\{y_k\}$. \section{Type ${\rm III}_\infty$} We construct $\hat h_n$ in almost the same manner as in Section 3. But for $n$ odd we use the slopes $\lambda_1^{\pm 1}$ and for $n$ even $\lambda_2^{\pm 1}$, where $1<\lambda_1<\lambda_2$. They are chosen so that $\log\lambda_1$ and $\log\lambda_2$ are independent over ${\mathbb Q}$. We just repeat the argument of Section 5 to show $\lambda_1,\ \lambda_2\in r(f)$. All that needs extra care is the validity of the argument which shows that a variant of Proposition \ref{p51} is sufficient. For this, first of all, we need a variant of Lemma \ref{l46}. It holds true, as is remarked in the proof there. We also need the fact that $(f^i)'$ is locally constant on $\Xi\cap f^{-i}(\Xi)$. To establish this, notice that $\log(f^i_n)'$ converges uniformly to $\log(f^i)'$ and that $(f^i_n)'$ is locally constant on $\Xi\cap f^{-i}(\Xi)$ if $\abs{i}\leq n$ (compare Lemmata \ref{l45} and \ref{l46}). Thus for any $\xi\in\Xi\cap f^{-i}(\Xi)$, there are a neighbourhood $U$ of $\xi$ and $n_0\in{\mathbb N}$ such that if $\eta\in U\cap \Xi \cap f^{-i}(\Xi)$, \begin{enumerate} \item $\abs{\log(f_{n+1}^i)'(\eta)-\log(f_n^i)'(\eta)}<\log\lambda_1$ if $n\geq n_0$ and \item $(f_{n_0}^i)'(\eta)=(f_{n_0}^i)'(\xi)$. \end{enumerate} Then we have $(f_{n}^i)'(\eta)=(f_{n_0}^i)'(\eta)$ for any $n\geq n_0$. The same is true for $\xi$. This shows $(f^i)'(\eta)=(f^i)'(\xi)$.
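The reason two such slopes yield type ${\rm III}_\infty$ deserves a word (this step is classical and is only implicit above): since $\log\lambda_1$ and $\log\lambda_2$ are independent over ${\mathbb Q}$, the ratio $\log\lambda_1/\log\lambda_2$ is irrational, so the additive group ${\mathbb Z}\log\lambda_1+{\mathbb Z}\log\lambda_2$ is dense in ${\mathbb R}$, and hence
$$ \{\lambda_1^{k}\lambda_2^{l}\mid k,l\in{\mathbb Z}\}\ \mbox{ is dense in }\ (0,\infty). $$
Since $r(f)\cap(0,\infty)$ is a closed multiplicative subgroup of $(0,\infty)$ containing $\lambda_1$ and $\lambda_2$, it follows that $r(f)=[0,\infty)$, i.e.\ $f$ is of type ${\rm III}_\infty$.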
\section{Type ${\rm III}_0$} We construct $\hat h_n$ starting with affine maps of slopes $3^{-1}$ and $3^{3^n}$, and define the intervals $\hat I_n^-$ and $\hat I_n^+$ as in Section 3. Although $m(\hat I_n^+)\to0$ rapidly as $n\to\infty$, we have $m(\hat h_n(\hat I_n^+))\to 2/3$ and $m(\hat h_n(\hat I_n^-))\to 1/3$. First of all let us show that $r(f)\subset\{0,1\}$. Define $X_n$, $Y_n$ and $\Xi$ as in Section 4. Let $L$ be a component of $H(X_n)$. We shall show that if $n$ is sufficiently large and $\xi,f^i(\xi)\in L\cap\Xi$, then either $(f^i)'(\xi)=1$ or $\abs{\log_3(f^i)'(\xi)}>3^{n-1}$. This is sufficient for our purpose since $m(L\cap\Xi)>0$. To show this, notice that there is $m$ bigger than $\abs{i}$ and $n$ such that $f_m^i(\xi)\in H_m(Y_m)$ and $(f_m^i)'(\xi)=(f^i)'(\xi)$. Define $x_j=H_j^{-1}(\xi)$ and $y_j=H_j^{-1}(f^i_m(\xi))$ for $j\leq m$. Then we have $$ \log_3(f_m^i)'(\xi)=\sum_{j=1}^{m}\log_3h_j'(y_j)- \sum_{j=1}^{m}\log_3h_j'(x_j).$$ Notice that $$\abs{\log_3h_j'(y_j)-\log_3h_j'(x_j)}= 0\ \mbox{ or }\ 3^j-1, \ \mbox{ and }\ $$ $$\log_3h_j'(y_j)=\log_3h_j'(x_j)\ \mbox{ if }\ j\leq n,$$ since $\xi,f_m^i(\xi)\in L\cap\Xi$. Let $k\in[n,m]$ be the largest integer, if any, such that $$\log_3h_k'(y_k)\neq\log_3h_k'(x_k).$$ Then the value $$ \abs{\log_3h_k'(y_k)-\log_3h_k'(x_k)}$$ is vastly bigger than $$ \sum_{j=1}^{k-1}\abs{\log_3h_j'(y_j)-\log_3h_j'(x_j)}.$$ In fact, since $\sum_{j=1}^{k-1}(3^j-1)<(3^k-3)/2$, we get $$\abs{\log_3(f^i)'(\xi)}=\abs{\log_3(f_m^i)'(\xi)}\geq (3^k-1)-\sum_{j=1}^{k-1}(3^j-1)>\frac{3^k+1}{2}>3^{k-1}\geq3^{n-1}, $$ as required. What is left is to show that $0\in r(f)$, since we always have $1\in r(f)$. To show this, we follow the argument of Section 5 closely and show the following proposition. \begin{proposition} For any closed subset $K\subset\Xi$ of positive measure and any $n_0$, there exist a point $\xi\in K$ and a number $i\in{\mathbb Z}$ such that $f^i(\xi)\in K$ and $(f^i)'(\xi)=3^{3^{n}-1}$ for some $n>n_0$.
\end{proposition} \section{Type ${\rm II}_\infty$} We begin by constructing $\hat h_n$ starting from affine maps of slope $2^{\pm n}$. Define $\hat I_n^+$ just as before. Notice that $m(\hat I_n^+)\to0$ rapidly as $n\to\infty$, but that $m(\hat h_n(\hat I_n^+))\to 1$ rapidly. Define $I_n^+$ as the lift of $\hat I_n^+$ by the $Q_n$-fold covering map. Denote $$X^+=\bigcap_{i=1}^\infty I_i^+\ \mbox{ and }\ \Xi^+=H(X^+).$$ Then one can show as in Lemma \ref{l42} that $m(\Xi^+)>0$. On the other hand, it is easy to show that $m(X^+)=0$. This implies that the unique $f$-invariant measure $H_*m$ is singular to $m$. Therefore $f$ cannot be of type ${\rm II}_1$. On the other hand, we can show the following proposition easily. \begin{proposition} If $\xi, f^i(\xi)\in\Xi^+$ for some $i\in{\mathbb Z}$, then $(f^i)'(\xi)=1$. \end{proposition} This completes the proof that $r(f)=\{1\}$. Since $f$ is not of type ${\rm II}_1$, it must be of type ${\rm II}_\infty$. \end{document}
Forces other than the fundamental interactions, e.g. friction Forgive me for the silly question, but I just don't get it. I just completed an elementary course in mechanics, and I am curious to know what I am about to ask. We have, all year, dealt with many forces like gravity, friction, normal forces, tensions etc. But only one of them is listed as a fundamental force, that is, gravity. I know that the only forces that exist in nature are the four fundamental forces, and all of these are, apparently, non-contact forces. But then how do you account for, for example, friction? We know that $F_\text{frictional}=\mu N$, but how do we arrive at that? Is this experimental? I cannot see how contact forces like friction can exist, when none of the fundamental forces is a contact force. Again, forgive me for my ignorance. newtonian-mechanics forces friction pkwssis It is not a silly question at all. Feynman was also thinking about the exact same question: how to get the above formula for friction from basic principles. The formula for friction is experimental. – Asphir Dom Aug 2 '14 at 16:19 Comment to the question (v2): Consider restricting the question to only the friction force to avoid getting too broad. – Qmechanic♦ Aug 2 '14 at 16:25 AlanZ2223 has given a nice summary of what's going on. I'll just make a couple of points that are orthogonal to his and that wouldn't fit in comments. The electrical force is a non-contact force; it falls off with distance like $1/r^2$. But most of the objects we deal with in everyday life are electrically neutral, i.e., they contain equal amounts of positive and negative charge. You would think that this would mean the attractions and repulsions would exactly cancel out, but that's not quite true.
When two electrically neutral objects are close together, they can influence each other to rearrange their charges somewhat, so that the cancellation isn't perfect due to the different distances and angles involved in all the force vectors that are being added. This is called a residual interaction. The residual electrical interaction falls off much more quickly than $1/r^2$ at large distances -- more like $1/r^6$. This is the basic reason why bulk-matter forces, which are electrical, appear to be zero-range contact forces. The other thing to realize is that it is not possible to explain forces such as the frictional and normal forces purely by using classical mechanics and an electrical interaction. If you try to do that, you'll find that bulk matter isn't stable, and that one piece of bulk matter won't prevent another from penetrating into it. In fact, you need two ingredients to explain these forces: (1) electrical interactions, and (2) the Pauli exclusion principle. If you try to explain it using only one of these ingredients without the other, it doesn't work. Ben Crowell Nice answer. Any suggestions on where I could learn more (at a fairly basic level) about the $r^{-6}$ interaction? (E.g., why is the next term not $r^{-4}$?) – Charles Aug 3 '14 at 5:27 @Charles: they are called van der Waals forces. – gatsu Aug 3 '14 at 9:19 Does this imply that any surface, however smooth (if we consider a hypothetical 100% smooth surface), would exhibit a frictional force if an object were to be pushed atop it? Regardless of the nature of the surface these interactions would remain, would they not? – SNB May 24 '18 at 6:05 These "non-contact" forces that are ubiquitous in everyday life are mainly attributed to electromagnetism.
Basically, the four fundamental forces, which are the strong, weak, electromagnetic, and gravitational, all have a sort of realm within which their influence dominates. The strong/nuclear force reigns within the subatomic domain, and the weak force does as well, but this kind of force is not nearly as prominent. Then comes the electromagnetic force, whose influence falls within our human-sized domain, and gravity, which does have effects in our everyday life but is mostly prominent in larger bodies such as our solar system or galaxies. Now all of these forces are able to manifest by the exchange of particles. Trillions and trillions of them are created and annihilated every second. For the electromagnetic force the exchange particle is the photon. Now, for example, whenever you touch an object, your hand does not just go through the object; the electric forces are acting to repel it. The electrons that compose your hand repel the electrons of the material, keeping your hand from going through, not by directly touching the other electrons but by the exchange of the carriers of the electromagnetic force, the photon, and this is what lies within the "forces" of friction, normal force, so on and so forth. Let's examine friction. Whenever you push an object, let's say a book, across a desk, friction opposes the direction of motion: you push left, friction pushes right. This is because, atomically, the molecules of the book are "pushing", or more technically repelling and attracting, by effect of the electromagnetic force, which is able to manifest by exchanging photons with the atoms of the desk, and this is roughly the big picture of what composes these non-fundamental "forces". AlanZ2223 Does this imply that any surface, however smooth (if we consider a hypothetical 100% smooth surface), would exhibit a frictional force if an object were to be pushed atop it? Regardless of the surface these interactions would remain, would they not?
– SNB May 24 '18 at 6:00 I think it's best to just quote Feynman here: Friction: the force of friction against a dry surface is $-\mu N$, and again you have to know what the symbols mean: when an object is pushed against another surface with a force whose component perpendicular to the surface is $N$, then in order to keep it sliding along the surface, the force required is $\mu$ (friction coefficient) times $N$. You can easily figure out which direction the force is; it's opposite to the direction you slide it. Does the friction force result from a potential energy like gravitational forces do? The answer is no: friction does not conserve energy, and therefore we have no formula for the potential energy of friction. If you push an object along a surface one way, you do work; then, when you drag it back, you do work again. So after you've gone through a complete cycle, you haven't come out with no energy change; you've done work, and so friction has no potential energy. Finally, take the example of the gravitational force (the equivalent can also be shown for Coulomb's force and the electric field): it results from a difference of potential energy, and the gravitational field $\mathbf{g}$, which can be shown to be conservative (through a complete cycle, zero work is done and energy is conserved), equals the negative gradient of the gravitational potential $\Phi$: $$\mathbf{g} = -\nabla \Phi $$ Phonon Friction is indeed a phenomenon that is difficult to treat theoretically. The formula you mention is not derived from first principles, but is justified only by experimental evidence, i.e. it drops from heaven. Moreover, it gives a false impression of simplicity. The friction coefficient is anything but constant, as it may depend on temperature, pressure, etc. Just think of a ski on snow. The friction there does depend on snow temperature, the consistency of the snow, how well you prepared the ski, ... What's more, there is not just one single friction phenomenon.
The friction between ski and snow, for example, is governed by a thin film of liquid water between the ski and the snow, whereas the friction of a shoe on the street is something else entirely. Related to friction are diffusion and dissipation; e.g., the air resistance when you ride a bicycle is essentially explained by the viscosity of air and the turbulent cascade towards small scales. However, as the previous answers explain, all of these phenomena are fundamentally explained by electromagnetic forces between molecules; and these never "touch" each other, since their electron shells prevent that from happening. maze-cooperation The four fundamental interactions (gravity, weak, strong and electromagnetic) are useful to understand how the basic ingredients (fundamental particles of the standard model) of our world interact with one another. This doesn't imply that all the forces witnessed in our world can be reduced to these forces only. Most physical objects comprise many, many, many of these fundamental particles, and the point consists in realizing that "the properties of the sum" is not the same as "the sum of the properties" and, in fact, "the properties of the sum" is much richer than "the sum of the properties". This is what I meant when I said that physical forces cannot be reduced to the four fundamental interactions only. Any physical system that displays properties which are not captured by the properties of the ingredients is said to have emergent properties. And basically, all the forces you are having questions about are emergent forces that do not have any equivalent when you just look at the most fundamental ingredients alone.
I preach for my chapel here (and we'll see if people disagree), but the most general strategy one can use to understand how these effective forces at the macroscopic/mesoscopic scale emerge from the microscopic world (that of basic ingredients) is that of statistical thermodynamics (quantum or classical), which is a huge field that one cannot cover in a few lines, I am afraid. gatsu
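As a quick numerical illustration of the empirical rule $F_\text{frictional}=\mu N$ quoted in the thread above (a sketch only; the mass, angle, and coefficient values are made-up assumptions, not measured data), the following snippet checks whether a block on an incline slides, and shows Feynman's point that friction is non-conservative: dragging the block forward and back dissipates energy instead of netting zero work.

```python
import math

def friction_demo(mass_kg=2.0, mu_s=0.6, mu_k=0.4, angle_deg=20.0, g=9.81):
    """Toy incline example: mu_s and mu_k are assumed values, not measurements."""
    theta = math.radians(angle_deg)
    normal = mass_kg * g * math.cos(theta)          # N = m g cos(theta)
    gravity_along = mass_kg * g * math.sin(theta)   # pull along the slope
    max_static = mu_s * normal                      # static threshold, mu_s * N
    slides = gravity_along > max_static             # equivalently tan(theta) > mu_s
    kinetic = mu_k * normal                         # kinetic friction force, mu_k * N
    return normal, max_static, slides, kinetic

normal, max_static, slides, kinetic = friction_demo()
print(f"N = {normal:.2f} N, max static friction = {max_static:.2f} N")
print("block slides" if slides else "block stays put")

# Friction is non-conservative: pushing the block a distance d one way and
# dragging it back dissipates 2 * mu_k * N * d rather than zero.
d = 1.5  # metres, each way (assumed)
print(f"energy dissipated over the round trip: {2 * kinetic * d:.2f} J")
```

With these numbers the block stays put, since $\tan 20^\circ \approx 0.36 < \mu_s = 0.6$; the round-trip work is positive either way, which is exactly why no potential energy can be assigned to friction.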
The social-psychological perspective on executive compensation: evidence from a two-tier board system Anja Schwering, Friedrich Sommer, Florian Uepping & Sandra Winkelmann Journal of Business Economics volume 92, pages 309–345 (2022) A Correction to this article was published on 01 February 2022 This paper investigates whether and how social-psychological mechanisms, namely reciprocity, demographic similarity, and similar experiences, affect CEO compensation packages with respect to the levels of total, fixed, and short- and mid-term compensation and the variable proportion of the compensation package. We use evidence from Germany as it is considered a prototype of a two-tier board system. Given the primary roles of both the CEO and the chair of the supervisory board, we especially highlight social-psychological mechanisms in the process leading to the final compensation package. Using a hand-collected sample of non-financial constituents of the German HDAX, we find that reciprocity can lead to a compensation package that is more favorable for the CEO. Results on similarity are ambivalent such that the effects of similarity on CEO compensation—both positive and negative—may depend on the dimension of similarity. Finally, the chair's CEO experience, both inside and outside the focal company, also plays an essential role in shaping CEO compensation. More specifically, CEO experience in general is associated with more favorable compensation. However, having a chair that has been CEO at the focal company correlates with less favorable compensation packages except when the CEO has also been recruited internally. The general idea behind executive compensation is that compensation should incentivize the CEO to work in the firm's and its shareholders' interest (O'Reilly III and Main 2007).
As executive pay takes over a control function with regard to the CEO's behavior, firms have to carefully consider how to design CEO compensation, since it determines whether the CEO behaves as desired. Hence, CEO compensation and its structure have attracted not only the public's interest but also great attention in research and have been discussed intensively over the last decades (Gomez-Mejia and Wiseman 1997; van Essen et al. 2015). However, it is still not conclusively clear which specific factors play a role in determining CEO compensation and may even bias the determination process. Besides investigations on how firm characteristics such as firm performance or firm size determine CEO compensation (Boyd 1994; Hall and Liebmann 1998; Jensen and Murphy 1990b), a second stream of research analyzed the influence of individuals on the CEO's compensation package (Finkelstein and Hambrick 1996). CEOs themselves usually negotiate executive compensation with at least one board member or a specific committee. Although the determination process is difficult to comprehend from the outside and therefore commonly considered a "black box" (Barkema and Pennings 1998; Bültel 2011; O'Reilly III and Main 2010; Tosi and Gomez-Mejia 1989), recent literature has focused on the role of these negotiators and the mechanisms underlying the determination process. Specifically, this research stream brought up three perspectives, described by Finkelstein and Hambrick (1996) as the economic, the political, and the social-psychological perspective. The vast majority of studies focused on the first two perspectives, analyzing CEO compensation in the context of agency theory and managerialism (economic perspective) and managerial power theory (political perspective). However, especially the economic perspective has not been able to fully explain executive compensation's determination (Bruce et al. 2005; Rapp and Wolff 2010; Schmidt and Schwalbach 2007; Tosi et al. 2000).
Consequently, a social-psychological perspective emerged that focuses on relationships between negotiating parties and considers mechanisms like reciprocity and social influences via similarity and social comparison. However, to date, only a few studies have exploited this perspective, although it has the potential to provide further insights into the determination process (e.g., Fiss 2006; Main et al. 1995; O'Reilly III et al. 1988; Uepping 2015; Westphal and Zajac 1995). Contributing to this research perspective, we investigate the effects of such social-psychological mechanisms on CEO compensation to better understand how executive compensation is determined. More precisely, we use the German setting, which allows us to observe relevant characteristics of the two dominant players in the pay-determination process, namely the CEO and the chair of the supervisory board. Until now, the major body of literature on CEO compensation has focused on Anglo-American countries where one-tier board structures prevail. In such a corporate governance system, there is only one board—the board of directors—which is responsible for managing and supervising the company. It consists of executive and non-executive directors who can be represented by inside directors, such as the CEO, and firm outsiders. The CEO compensation is determined by members of the board of directors or, more and more commonly, by members of a special compensation committee. In contrast, Germany is an example of a country with a two-tier corporate governance structure, where most of the large companies operate as stock corporations (Aktiengesellschaften) and have a supervisory board (Aufsichtsrat) and an executive board (Vorstand) with mutually exclusive memberships (Elston and Goldberg 2003).
Both the CEO as chair of the executive board and the chair of the supervisory board have a de facto dominant role within their boards (Oesterle 2003). As such, even though the entire supervisory board or a special compensation committee might be responsible, the chair most likely has a strong influence on CEO compensation. Moreover, a close exchange between CEO and chair suggests that the CEO can influence the chair's views (Fiss 2006). Hence, the German setting enables us to distinctly attribute the effect of social-psychological mechanisms to these two players instead of a whole group of directors. For the German setting, only Fiss (2006) demonstrates that demographic factors of both the CEO and the chair of the supervisory board strongly influence the compensation of the top management team (TMT). To extend these initial findings from a social-psychological perspective, we investigate the relationship between the CEO and the chair of the supervisory board and its impact on CEO (instead of TMT) total compensation, CEO fixed compensation, CEO short- and mid-term compensation, and the proportion of variable compensation. Thereby, we focus on social-psychological mechanisms. Specifically, we examine reciprocity and similarities with regard to personal and educational demographics and role experiences. Using a hand-collected sample of non-financial constituents of the German HDAX, we find that reciprocity, demographic similarity, and the chair's similar experiences as CEO play a significant role in shaping CEO compensation packages. Concerning reciprocity, the chair's excess compensation is positively associated with total compensation and short- and mid-term compensation of the CEO but also with the share of variable pay. Moreover, a longer tenure of the CEO compared to the chair is positively associated with fixed CEO compensation.
Similarity in age between the two actors negatively relates to total and fixed pay and is positively associated with the share of short- and mid-term incentives. Moreover, similarity in nationality is associated with less fixed pay and an increased share of short- and mid-term incentives. In contrast to this, similarity in educational degree does not affect total compensation. However, it relates to less performance dependence: the share of variable compensation is negatively related to similarity in educational degree. Additionally, we find that chairs with experience as CEO at either another company or the focal company are associated with higher total compensation but also with a higher share of variable compensation. An additional analysis that differentiates between CEO experience in the focal company and in another company shows a similar pattern when the chair has outside CEO experience. In contrast, chairs who have worked as CEO at the focal company relate to less favorable compensation packages except when the CEO has been recruited internally. Finally, in another additional analysis, we find that ownership control does not mitigate the effects of reciprocity and similarity. Our study contributes to the literature in several ways. First, contrary to previous studies, we focus on the social-psychological perspective of CEO compensation. This perspective allows us to take into account that compensation determination is an ambiguous task that relationships between key actors can strongly influence. Hence, we consider various explanations with regard to social-psychological mechanisms. That way, we can better understand and gain additional knowledge about the importance of the social-psychological perspective. Second, as distinct from a few (German) investigations in this context, we can shed light on mechanisms in the pay-determination process instead of merely analyzing its results.
This analysis is possible due to a legal change in Germany's compensation disclosure, making the compensation structure transparent for individual members of the executive board. Since this change, we are, to the best of our knowledge, the first to highlight the characteristics of the two-tier system against the background of this approach. Importantly, through investigating the individual components of CEO compensation, we gain profound insights that help to bridge ambiguous results of prior studies and render some previous evidence far less absolute. Third, we follow the call by Beck et al. (2020) for more research on executive compensation with international data, given the conclusion that their empirical results for German data at least partly diverge from results for U.S. data. By focusing on the German two-tier board system with clearly separated responsibilities, we can provide further insight into the mechanisms of the negotiation process of CEO compensation, which would be extremely challenging to investigate in one-tier systems. However, our results are also relevant for and transferable to one-tier systems. For example, in the U.S., the number of firms in which the CEO also takes over the role of the chair of the board (CEO duality) is declining, and firm outsiders have been appointed as chairs more often (Abels and Martelli 2013). This means that, although one-tier board structures are predominant, the decision-making process regarding CEO compensation starts to converge with the German system (Gilson 2001). Specifically, compensation committees whose responsibilities are functionally separated and declining CEO duality reflect the rapprochement between the one- and the two-tier system and indicate an increasing comparability (Conyon and He 2004; Fiss 2006). The remainder of this paper proceeds as follows. First, Sect. 2 provides background information, discusses prior literature, and presents the hypothesis development. Section 3 sets forth the study design.
Section 4 presents the results, and Sect. 5 concludes. Background and hypothesis development Executive compensation in Germany As the negotiation process for CEO compensation in German companies is mainly influenced by two actors, we first describe the corresponding roles and remits of both the CEO and the chair of the supervisory board. In any case, their close relationship underlines the importance of applying a social-psychological perspective to analyze the determinants of the CEO's compensation package. In German companies, the supervisory board's responsibilities include selecting and appointing the members of the executive board. Moreover, the supervisory board determines the executive board's compensation. Members of the supervisory board may form a compensation committee, which the chair of the supervisory board typically leads. The compensation committee was able to determine the executive compensation autonomously until 2009. Today, after a legal change, all members of the supervisory board must decide on the executive board's compensation. However, the compensation committee may still prepare a proposal. The CEO de facto has special privileges—although not regulated by law. First, he may influence the decision-making process through the flow of information within the executive board. Second, he is supposed to maintain close contact with the chair of the supervisory board to discuss strategy, planning, business development, the risk situation, risk management, and compliance (German Corporate Governance Code Commission 2019). In practice, this close contact strengthens the relationship between the CEO and the chair and is likely to weaken the position of ordinary members of both boards (Fiss 2006). Although the entire supervisory board is supposed to discuss the CEO compensation, the CEO and the chair are expected to play crucial roles in the determination process. 
Thus, social-psychological aspects of the relationship between the CEO and the chair of the supervisory board are particularly interesting for the negotiation process and should therefore be investigated with respect to executive compensation. Since, in Germany, the specific roles of both the CEO and the chair of the supervisory board are regulated by law, the German corporate governance system is an adequate basis for exploring the determination process of the CEO compensation, and thus allows us to collect information on the two key players in the process and gain insight into their relationship and resulting influences on compensation. Social-psychological mechanisms and CEO compensation Early research on executive compensation focuses on economic perspectives, primarily based on agency theory, managerialism, and human capital theory (Brockman et al. 2016; Finkelstein and Boyd 1998; Hambrick and Finkelstein 1995; Jensen and Murphy 1990a; Murphy 1999). For example, from an agency theory perspective, CEO compensation is mainly considered a matter of formal contracting that reflects a pay-for-performance relationship (Finkelstein and Hambrick 1988). However, despite extensive investigations, economic theories cannot provide a comprehensive explanation of the drivers of CEO compensation and the process of its determination (Tosi et al. 2000). As a result, Tosi et al. (2000, p. 331) conclude in their meta-analysis of economic studies on CEO compensation that "there is a large unexplained variance in CEO pay." Against the background of these findings from the economic perspectives, other studies argue that the determination of CEO pay is a very ambiguous task due to the different components of CEO pay and different interpretations and expectations of CEO behavior (Finkelstein and Hambrick 1988; Main et al. 1995). This ambiguity leaves room for political plays and for social-psychological mechanisms to unfold. 
Concerning a political perspective, it is argued that CEOs use their power over the board to influence the directors' decision-making regarding CEO compensation (Finkelstein and Hambrick 1989; Hambrick and Finkelstein 1995; O'Reilly III et al. 1988). One line of research at the intersection of the political perspective and the social-psychological perspective investigates how powerful CEOs indirectly influence their compensation by selecting new (outside) directors of the board who would act in the CEOs' interest. For example, O'Reilly III et al. (1988) is an Anglo-American study that investigates the impact of social comparison on CEO compensation. The authors argue that CEOs typically have the power to select new members of the board of directors. Thereby, they may rely on social comparison mechanisms and select outside directors who are CEOs themselves and refer to their own compensation when deciding on the other CEOs' compensation. Consequently, O'Reilly III et al. (1988) find that outside directors' salary levels are positively associated with CEO compensation, an indicator of social comparison processes. Based on similar reasoning, Westphal and Zajac (1995) investigate the effect of powerful CEOs more directly. They argue that CEOs generally prefer new directors who are demographically similar to them because those directors tend to review CEO performance less critically, positively affecting performance-contingent compensation and CEO compensation in total. However, CEOs must have the power over the board to select new directors for this mechanism to work. Consequently, Westphal and Zajac (1995) find a positive association between powerful CEOs and the similarity between CEOs and new directors. Moreover, high levels of similarity are associated with more generous CEO compensation contracts.
While these studies act as a first indicator for the relevance of political and social-psychological aspects to explain the drivers of CEO compensation, O'Reilly III and Main (2010) claim that it is crucial to learn more about the explicit mechanisms by which CEOs may use their power and influence the decision-making process. Especially in one-tier corporate board systems, members of a single board of directors are likely to identify as a social group in which effects of reciprocity and social influence should always be considered (O'Reilly III and Main 2007). In this regard, most research on social-psychological mechanisms and CEO compensation has investigated one-tier board systems (O'Reilly III and Main 2007). However, since CEO duality is declining and compensation committees are in place more often, it is important to illuminate social-psychological influences in this functionally separated decision-making process comparable to that in a two-tier board system (Gilson 2001). Against this background, we investigate possible social-psychological mechanisms in the executive compensation setting process in Germany. The German setting enables us to assess these mechanisms in more detail for several reasons. First, we can examine how social-psychological mechanisms drive CEO compensation when the negotiating parties do not belong to a single board due to a two-tier board system. The statutory separation between management and control should enable the supervisory board to fulfill a control function effectively and determine appropriate compensation contracts. However, universal effects of reciprocity and social influence should also appear between members of different boards. Second, we can focus on the two main actors in the compensation setting process—namely the CEO and the chair of the supervisory board—and their characteristics and personal backgrounds. 
This focus on the two main actors makes findings on social-psychological mechanisms less ambiguous because we can attribute effects distinctly to the two actors. Finally, due to a change in German legislation in 2006, we can analyze social-psychological effects on specific salary components. In the following, we derive hypotheses for the effects of social-psychological mechanisms in a German setting. Thereby, we focus on mechanisms that have been shown to affect CEO compensation in prior studies (e.g., Belliveau et al. 1996; Fiss 2006; Main et al. 1995). Specifically, we develop hypotheses for the effects of reciprocity and similarities between the CEO and the chair of the supervisory board on CEO compensation. Reciprocity Reciprocity is considered an important social norm. The norm requires that individuals repay what others have provided (Cialdini 2001). This rule refers not only to actual payments but also to favors, gifts, or invitations. Main et al. (1995) argue that if a CEO can select new directors, these directors may feel obligated to the CEO such that the norm of reciprocity is activated. This obligation would stem from the payments the directors receive for their appointment and the possible experience of a positive impact on the directors' social statuses. Directors could repay these benefits by granting CEOs generous compensation contracts. Hence, Main et al. (1995) predict and find that CEOs who have been appointed before directors serving on the compensation committee receive higher compensation levels. Other studies focus on the benefits of being a member of the (supervisory) board themselves. For example, Fiss (2006) argues that the norm of reciprocity is likely to be activated if board members experience an increase in their compensation. Consequently, in his study of compensation in German TMTs, he predicts and finds that increases in supervisory board compensation positively affect TMT compensation.
Similarly, O'Reilly III and Main (2010) show that compensation committee chairs' fees strongly relate to CEO compensation. Based on these findings, we expect that reciprocity is relevant for German CEOs and chairs of the supervisory board. Specifically, we predict that a longer tenure of the CEO than the chair and higher levels of chair compensation will be associated with a more favorable compensation package with higher levels of total and fixed compensation and less performance-contingent compensation. Moreover, although we assume that CEO compensation is less performance-driven, the short- and mid-term compensation component included in the compensation package is likely to be higher in a more favorable compensation package because reciprocity encourages less critical evaluations of CEO performance by the chair (Westphal and Zajac 1995).Footnote 2

H1a: A longer tenure of the CEO compared to the chair of the supervisory board is associated with a higher total CEO compensation, a higher fixed CEO compensation, a higher short- and mid-term CEO compensation, and a lower share of performance-contingent CEO compensation.

H1b: A higher level of compensation of the chair of the supervisory board is associated with a higher total CEO compensation, a higher fixed CEO compensation, a higher short- and mid-term CEO compensation, and a lower share of performance-contingent CEO compensation.

Demographic similarities

The similarity-attraction effect describes the tendency to feel attracted to others similar to oneself (Byrne 1971; Montoya and Horton 2013). Thus, similarity may lead to a stronger identification and sympathy between individuals (Byrne et al. 1966). Concerning CEO compensation, similarity and liking between CEOs and chairs of the supervisory board may lead to chairs' less critical evaluations of CEOs' performance.
For example, Westphal and Zajac (1995) argue that sharing similar beliefs about strategic decisions—as indicated by demographic similarity—may lead directors to attribute good performance to the CEOs' ability and decision-making but negative performance to environmental factors beyond the CEOs' control. Moreover, when similarity and liking between CEOs and chairs are high, chairs may feel a lesser need to monitor and control CEO decisions. In line with this consideration, Goergen et al. (2015) show that dissimilarities in age between the CEO and the chair of the supervisory board indicate mistrust and impact monitoring effectiveness positively. Alternatively, Westphal and Zajac (1995) expect and find that increases in demographic similarity between the CEO and the board of directors are associated with favorable compensation contracts with increases in total compensation and less performance-contingent compensation. Further, Main et al. (1995) study the effects of age similarity between CEOs and board members. However, they find only weak support for the prediction that a higher level of similarity leads to higher CEO compensation. To sum up, these findings indicate that demographic similarity between the CEO and the chair of the supervisory board may affect the compensation setting process such that the chair is willing to grant a favorable compensation package. Specifically, we expect that similarity increases the chair's willingness to grant high levels of total and fixed compensation and a less performance-driven compensation. Moreover, sympathetic chairs are likely to evaluate CEO performance with leniency such that short- and mid-term incentives should be higher. Concerning demographic similarity, we rely on a set of indicators that can be easily determined such that both the CEO and the chair, on the one hand, and interested third parties, on the other hand, are easily able to assess the level of similarity between the CEO and the chair.
In detail, we consider basic demographics such as age and nationality as well as educational demographics such as educational degree and field of study.

H2a: A higher level of demographic similarity in terms of age between the CEO and the chair of the supervisory board is associated with a higher total CEO compensation, a higher fixed CEO compensation, a higher short- and mid-term CEO compensation, and a lower share of performance-contingent CEO compensation.

H2b: A higher level of demographic similarity in terms of nationality between the CEO and the chair of the supervisory board is associated with a higher total CEO compensation, a higher fixed CEO compensation, a higher short- and mid-term CEO compensation, and a lower share of performance-contingent CEO compensation.

H2c: A higher level of demographic similarity in terms of the educational degree between the CEO and the chair of the supervisory board is associated with a higher total CEO compensation, a higher fixed CEO compensation, a higher short- and mid-term CEO compensation, and a lower share of performance-contingent CEO compensation.

H2d: A higher level of demographic similarity in terms of the field of study between the CEO and the chair of the supervisory board is associated with a higher total CEO compensation, a higher fixed CEO compensation, a higher short- and mid-term CEO compensation, and a lower share of performance-contingent CEO compensation.

Similarity may refer not only to demographic characteristics but also to similar experiences. These similar experiences may also influence the relationship between the CEO and the chair of the supervisory board. Specifically, a chair of the supervisory board who has served or is still serving as a CEO may develop an understanding of the CEO's role. Prior research has generally shown that perspective-taking increases empathy (e.g., Decety and Jackson 2004).
Having had similar experiences within the focal or another company, chairs with CEO experience can easily take the perspective of the current CEO. Hence, the chair may feel sympathetic with the CEO, and the empathy could lead to supportive behavior expressed by a compensation package the CEO considers favorable (Fiss 2006). From a different perspective, the chairs may refer to their own CEO compensation contracts when determining the current CEO's compensation. As outlined before, O'Reilly III et al. (1988) argue that CEOs prefer to appoint other CEOs as new board directors to evoke social comparison mechanisms. Social comparison theory postulates that individuals strive to compare with others with similar attitudes or abilities (Festinger 1954; Goodman 1974). Therefore, the appointment of CEOs as directors may affect the compensation setting process such that chairs that also have (outside) CEO experiences compare their own compensation as CEO with that of the current CEO and determine the current CEO's pay accordingly (O'Reilly III et al. 1988). Consequently, O'Reilly III et al. (1988) predict and find that outside directors' salary levels are positively associated with CEO compensation. Moreover, Westphal and Zajac (1997) argue that having a chair with CEO experience may evoke a generalized norm of reciprocity. As outlined above, reciprocity generally involves a direct exchange of benefits between two parties. However, reciprocity may also refer to situations in which individuals do not reciprocate by directly repaying their benefactor but by rewarding another individual that is part of the same social exchange situation (Ekeh 1974). Hence, the chairs would not necessarily reciprocate the benefits they received as CEOs within the relationship with their chair. Instead, they may pay favors forward by benefitting other CEOs with whom they are working as chairs of their supervisory boards.
Consequently, this generalized social exchange situation would encourage favorable compensation packages for the CEO. Taken together, we expect that having a chair of the supervisory board who has CEO experience positively affects CEO compensation.

H3: A chair of the supervisory board who has worked as a CEO is associated with a higher total CEO compensation, a higher fixed CEO compensation, a higher short- and mid-term CEO compensation, and a lower share of performance-contingent CEO compensation.

Sample selection and description

To test our hypotheses, we collect data for a comprehensive sample of German listed firms. More precisely, we use all non-financial constituentsFootnote 3 of the German HDAX Index with a dualistic corporate governance systemFootnote 4 for fiscal years from 2006 to 2011. For that period, the HDAX Index comprises the 30 largest German blue-chip stocks in terms of market capitalization and trading volume (DAX), the 50 following mid-cap stocks (MDAX) as well as the 30 largest and most liquid issues from various technology sectors (TecDAX).Footnote 5 A company is included if listed in the HDAX at least once in the sample period. This approach is superior to an end-fixation of the sample as no survivorship bias can occur. It is also superior to a front-fixation as changes in industry composition in the sample period are accounted for (Elton et al. 1996). Our initial sample yields 702 firm-year observations from 117 non-financial stock corporations or partnerships limited by shares with dualistic governance structures; 65 firm-years with unusual events like insolvency, disposition, or rebranding are excluded. Further, German publicly listed companies may choose not to disclose the individual compensation data using an opting-out clause (according to the German commercial code, § 286 Abs. 5 HGB),Footnote 6 eliminating an additional 122 firm-year observations.
Finally, observations were eliminated in the case of intra-year CEO appointments (66 firm-year observations) and in the case of negative compensation components or negative total compensation (4 firm-year observations). Thus, the final sample consists of 445 firm-year observations from 98 companies. Table 1 summarizes the sample selection procedure. However, for individual analyses, sample sizes may be lower due to single missing data points for variables.Footnote 7

Table 1 Sample selection procedure

Dependent variables

As outlined earlier, the opportunity to investigate German CEOs' incentives in such detail stems from a change in legislation regarding compensation disclosure taking effect in 2006, which allows more sophisticated analyses and conclusions. We collected the compensation data by scanning the compensation section of the companies' annual reports. Fixed CEO compensation (variable CEO_FIX), other CEO benefits (CEO_OTHER), short-term and mid-term incentives (CEO_STIMTI) as well as long-term incentives (CEO_LTI)Footnote 8 were collected separately. Total CEO compensation is calculated as the sum of these components (CEO_TOTAL). The share of each of the CEO's total compensation package components is computed as the respective component over CEO_TOTAL, thus resulting in CEO_SHAREFIX, CEO_SHAREOTHER, CEO_SHARESTIMTI, and CEO_SHARELTI. Additionally, the share of variable compensation in the CEO's total compensation package (CEO_SHAREVAR) is calculated as the sum of short-term and mid-term (CEO_STIMTI) and long-term incentives (CEO_LTI) over total compensation (CEO_TOTAL). The relevant variables to test our hypotheses are CEO_TOTAL, CEO_FIX, CEO_STIMTI, CEO_SHARESTIMTI, and CEO_SHAREVAR. The absolute values for CEO_TOTAL, CEO_FIX, and CEO_STIMTI enter the regressions in their natural logarithm, indicated by adding "_LOG" to the variable code.
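As a compact illustration of the share variables just defined, the sketch below computes CEO_TOTAL and the component shares for one invented observation (amounts in millions of euros; the function name is ours, not the paper's):

```python
# Sketch of the compensation-share construction described above.
# Variable names follow the paper's codes; the amounts are invented.

def compensation_shares(fix, other, stimti, lti):
    """Return total compensation and each component's share of the total."""
    total = fix + other + stimti + lti          # CEO_TOTAL
    return {
        "CEO_TOTAL": total,
        "CEO_SHAREFIX": fix / total,
        "CEO_SHAREOTHER": other / total,
        "CEO_SHARESTIMTI": stimti / total,
        "CEO_SHARELTI": lti / total,
        # variable part = short-/mid-term plus long-term incentives
        "CEO_SHAREVAR": (stimti + lti) / total,
    }

shares = compensation_shares(fix=0.9, other=0.1, stimti=1.0, lti=0.4)
```

In the regressions, CEO_TOTAL, CEO_FIX, and CEO_STIMTI would additionally be log-transformed to obtain the "_LOG" variants.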
Independent variables and control variables

The independent variables and some of the control variables stem either from demographic characteristics or occupation-specific characteristics. For both types of data, there is no comprehensive data set available (e.g., Elston and Goldberg 2003). Hence, we hand-collected a unique dataset using the following sources sequentially: published annual reports, company websites, corporate press releases, the "LexisNexis" and "Munzinger Personenarchiv" databases, a general web search (mostly leading to press articles such as portraits or interviews), and public personal registers. If none of these sources led to the required information, we contacted the companies' investor-relations departments. To capture whether the CEO has a longer tenure than the chair (LONGER_TENURE) as a proxy for reciprocity (H1a), we first determine the CEO's tenure (CEO_TENURE) and the chair's tenure (CSB_TENURE). The CEO's (the chair's) tenure is measured as the number of years since a person was appointed CEO (chair) by the focal company in a given year. After that, we compute a dichotomous variable LONGER_TENURE that takes the value 1 when CEO_TENURE is higher than CSB_TENURE and 0 otherwise (Main et al. 1995). Concerning H1b, the chair's compensation level (CSB_TOTAL) is considered another trigger of reciprocity and is measured as the sum of fixed and variable payment components of the compensation the chair of the supervisory board receives for this function in the focal company. However, since both CEO compensation and the chair's compensation are likely to be driven by firm size to at least some extent, we include a size-adjusted excess pay in our regression models (CSB_TOTAL_EXCESS). We compute CSB_TOTAL_EXCESS by subtracting the median of the log-transformed chair's total compensation for the same firm-size decile from the logarithm of the chair's total compensation (Fahlenbrach 2009).
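The two reciprocity proxies just described can be sketched as follows; the decile membership and all pay figures below are invented for illustration:

```python
import math
from statistics import median

# Sketch of the two reciprocity proxies described above. The decile
# assignment and the pay figures are invented for illustration.

def longer_tenure(ceo_tenure, csb_tenure):
    """LONGER_TENURE: 1 if the CEO has served longer than the chair."""
    return 1 if ceo_tenure > csb_tenure else 0

def csb_total_excess(own_pay, decile_pays):
    """CSB_TOTAL_EXCESS: log of the chair's own pay minus the median of the
    log-transformed chair pay within the same firm-size decile."""
    decile_median = median(math.log(p) for p in decile_pays)
    return math.log(own_pay) - decile_median

flag = longer_tenure(8, 3)
excess = csb_total_excess(own_pay=200_000,
                          decile_pays=[100_000, 150_000, 200_000])
```

A positive `excess` indicates a chair who is paid more than the typical chair of a similarly sized firm.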
Regarding the second set of hypotheses (H2a-d), demographic similarities between the CEO and the chair are exemplarily represented by two variables that consider basic demographic similarities, namely similarity in age (AGE_SIM) and similarity in nationality (NAT_SIM), as well as two variables that describe educational similarity, namely similarity in educational degree (DEGREE_SIM) and similarity in the field of study (STUDY_FIELD_SIM). For AGE_SIM, the difference between the CEO's age (CEO_AGE) and the chair's age (CSB_AGE) is computed as AGE_DIF. Then, the maximum value for AGE_DIF in the sample is identified, and each value for AGE_DIF is subtracted from this maximum. Subsequently, the resulting difference is scaled by the maximum for AGE_DIF to arrive at AGE_SIM, which is continuous between 0 (low similarity) and 1 (high similarity). Regarding similarity in nationality, we use a dummy variable NAT_SIM that takes the value of 1 if the CEO's and the chair's nationality is the same and the value of 0 otherwise. To measure DEGREE_SIM, we first collect the highest level of education of the CEO (CEO_DEGREE) and the chair of the supervisory board (CSB_DEGREE), respectively, as ordinal variables with a value of 1 for high school graduation, apprenticeship, or comparable education, 2 for college or university degree, and 3 for Ph.D. or professor.Footnote 9 After that, REL_DEGREE is calculated as the difference between the variables CEO_DEGREE and CSB_DEGREE (Fiss 2006). To finally compute DEGREE_SIM, each value of REL_DEGREE is subtracted from the maximum possible value of 2; after that, the difference is scaled by that maximum possible value, leading to continuous values between 0 (low similarity) and 1 (high similarity).
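The AGE_SIM and DEGREE_SIM rescalings above follow the same pattern, which the sketch below makes explicit; the ages are invented, and we assume the differences are taken in absolute value:

```python
# Sketch of the similarity scalings described above; the sample ages are
# invented, and differences are assumed to be absolute values.

def scaled_similarity(diff, max_diff):
    """Map an absolute difference onto [0, 1]: 0 = least, 1 = most similar."""
    return (max_diff - diff) / max_diff

# AGE_SIM: each difference is scaled by the largest difference in the sample.
age_difs = [abs(ceo - csb) for ceo, csb in [(54, 66), (50, 52), (58, 58)]]
max_dif = max(age_difs)
age_sims = [scaled_similarity(d, max_dif) for d in age_difs]

# DEGREE_SIM: degrees are ordinal (1-3), so the maximum possible gap is 2.
def degree_sim(ceo_degree, csb_degree):
    return scaled_similarity(abs(ceo_degree - csb_degree), 2)
```

Note that AGE_SIM is normalized by the sample maximum, whereas DEGREE_SIM is normalized by the theoretical maximum of 2.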
To capture similarity in the field of study (STUDY_FIELD_SIM), we create a dummy variable which takes the value 1 (and 0 otherwise) if the CEO's field of study equals the chair's field of study based on the following categorization: business and economics; law; natural sciences; engineering; others. For H3, we measure similar role experiences and construct a variable CSB_CEO that takes the value 1 if the chair has worked as CEO of the focal or another firm, and 0 otherwise. Concerning the control variables, we first consider the CEO's demographic and occupational characteristics that have been investigated from an economic or political research perspective. Referring to the economic perspective, the human capital theory (Becker 1964; Mincer 1970) indicates that older and better-educated CEOs and CEOs hired externally receive more favorable compensation packages (e.g., Hambrick and Finkelstein 1995; Brockman et al. 2016; Harris and Helfat 1997). Consequently, we consider the CEO's age (CEO_AGE), the CEO's educational level (CEO_TITLE), which is coded as a dummy variable with the value 1 if the CEO holds an academic titleFootnote 10 (MBA, Ph.D., or professor) in the relevant year and 0 otherwise, and a potential internal recruitment (CEO_INT). Studies from a political perspective refer to the CEO's (relative) power over the board of directors or the supervisory board. Following prior literature, we assume that the number of external board memberships of the CEO and the chair affects the CEO's compensation package such that a higher number of external board memberships of the CEO (the chair) is positively (negatively) associated with a favorable compensation package (Belliveau et al. 1996; Core et al. 1999; Wade et al. 1990). Consequently, the number of external board memberships is considered CEO_EXT for the CEO and CSB_EXT for the chair of the supervisory board. 
Besides the CEO's and chair's characteristics, we consider firm-specific characteristics as control variables. More specifically, we include total shareholder return (TSR) as a market-based performance measure (Fiss 2006), return on equity (ROE) as an accounting-based performance measure (Veliyath and Bishop 1995), company size (TA_LOG) measured as the natural logarithm of total assets (Tosi et al. 2000), future investment opportunities measured as the market-to-book ratio (MTB) (Core et al. 1999), the Beta (BETA) according to Sharpe's (1963) market model using daily trading data over a time horizon of 52 weeks (Bidwell 2011; Bloom and Milkovich 1998; Hambrick and Finkelstein 1995), and the proportion of shares in free float (FF) as an indicator of diminishing direct shareholder influence (Kaserer and Wagner 2004). Furthermore, leverage might play a unique role in the German context as German companies largely depend on bank financing with potentially non-negligible bank influence on business conduct (Elston and Goldberg 2003). Consequently, we use the debt ratio as a proxy (DEBT), calculated as total debt over total assets. Firm-specific variables up to this point are collected from Thomson Reuters Datastream with values winsorized at the 1st and 99th percentile. Additionally, the number of members of the executive board (MEMB_EB) and supervisory board (MEMB_SB) is incorporated into our variables (Ferrero-Ferrero et al. 2012; Fiss 2006). The "Appendix" provides descriptions of all variables included in the analyses. To test the proposed hypotheses, we apply multivariate regression analysis. 
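The winsorization of the Datastream variables mentioned above can be sketched as follows; the authors' exact percentile convention is not stated, so this version simply clamps each value to empirical order statistics:

```python
# Minimal winsorization sketch for the firm-level controls. The paper does
# not specify the interpolation convention, so simple order statistics are
# used here as an assumption.

def winsorize(values, lower_pct=0.01, upper_pct=0.99):
    """Clamp each value to the empirical lower/upper percentile bounds."""
    srt = sorted(values)
    n = len(srt)
    lo = srt[round(lower_pct * (n - 1))]
    hi = srt[round(upper_pct * (n - 1))]
    return [min(max(v, lo), hi) for v in values]

w = winsorize(list(range(101)))   # 0 and 100 are pulled in to 1 and 99
```

Winsorizing limits the influence of extreme accounting values without dropping the observations.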
Separate models are established to investigate the determinants of the five different components of the compensation package, i.e., (1) the total CEO compensation, (2) the fixed CEO compensation, (3) the short- and mid-term compensation, and the share of performance-contingent CEO compensation based on (4) short- and mid-term incentives as well as (5) short-term, mid-term, and long-term incentives. In general, we propose the following equation:

$$\begin{aligned} Comp & = \beta_{1} LONGER\_TENURE + \beta_{2} CSB\_TOTAL\_EXCESS + \beta_{3} AGE\_SIM + \beta_{4} NAT\_SIM \\ & \quad + \beta_{5} DEGREE\_SIM + \beta_{6} STUDY\_FIELD\_SIM + \beta_{7} CSB\_CEO + \beta_{8} CEO\_AGE \\ & \quad + \beta_{9} CEO\_TITLE + \beta_{10} CEO\_INT + \beta_{11} CEO\_EXT + \beta_{12} CSB\_EXT + \beta_{13} TSR \\ & \quad + \beta_{14} ROE + \beta_{15} TA\_LOG + \beta_{16} MTB + \beta_{17} BETA + \beta_{18} FF + \beta_{19} DEBT + \beta_{20} MEMB\_EB \\ & \quad + \beta_{21} MEMB\_SB + \mathop \sum \limits_{t} \beta_{t} Year_{t} + \mathop \sum \limits_{k} \beta_{k} Industry_{k} + \varepsilon . \\ \end{aligned}$$

To determine the different models, we use five different compensation measures Comp: CEO_TOTAL_LOG (Model 1), CEO_FIX_LOG (Model 2), CEO_STIMTI_LOG (Model 3), CEO_SHARESTIMTI (Model 4), and CEO_SHAREVAR (Model 5). Moreover, the return measures TSR and ROE are not considered in Model 2 because fixed compensation should not depend on performance. Similar to Fahlenbrach (2009) and Rapp and Wolff (2010), we deploy two-way fixed effects models with dummies for industry and year since the Hausman (1978) test indicates endogeneity for the majority of models. Thereby, we assume that firm-specific factors are sufficiently captured by the industry and refrain from including firm fixed effects.Footnote 11 Besides, we use cluster-robust standard errors (White 1980) to account for heteroscedasticity.
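To make the specification concrete, the toy estimator below fits a drastically reduced version of the model (intercept, two hypothesis variables, one year dummy) on invented data; a real estimation would include all regressors, industry dummies, and cluster-robust standard errors via a statistics package:

```python
# Toy OLS for a drastically reduced version of the compensation model.
# All data are invented; y plays the role of CEO_TOTAL_LOG.

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                          # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):                  # back substitution
        beta[r] = (b[r] - sum(A[r][j] * beta[j]
                              for j in range(r + 1, k))) / A[r][r]
    return beta

# Columns: intercept, LONGER_TENURE, CSB_TOTAL_EXCESS, year-2007 dummy.
X = [[1, 1, 0.2, 0],
     [1, 0, -0.1, 0],
     [1, 1, 0.3, 1],
     [1, 0, 0.4, 1]]
y = [14.7, 13.9, 14.9, 14.5]   # constructed from beta = [14, 0.5, 1.0, 0.1]
beta = ols(X, y)
```

With four observations and four parameters the fit is exact, so the solver recovers the constructed coefficients; in practice the sample is much larger and the fit only approximate.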
Moreover, to test for multicollinearity, we compute variance inflation factors (VIF) for our independent variables in all our models. The highest VIF of 6.11 for the variable TA_LOG lies below the value of 10 but above the value of 5, which both are considered relevant threshold values (O'Brien 2007). Similarly, we find that the largest correlations among independent variables are between TA_LOG and MEMB_EB (ρ = 0.6613, not tabulated) and MEMB_SB (ρ = 0.7980, not tabulated), respectively. However, since these variables are not our main variables of interest, we still include all of them in the regression models.

Descriptives

Table 2, Panel A depicts the descriptive analysis of the CEO compensation in the sample. Mean CEO total compensation amounts to 2.444 million euros. The relatively high standard deviation of 2.118 million euros combined with the first quartile of 0.948 million euros, the median of 1.909 million euros, and the third quartile of 3.295 million euros suggests considerable pay heterogeneity. On average, 38.43% of this total compensation is paid as fixed compensation, 44.45% as short-term or mid-term incentives, and 14.62% as long-term incentives. Thus, the variable part amounts to 59.07% of the CEO's total pay. The other components, which mainly comprise expense reimbursements, have a share of only 2.5%.

Table 2 Descriptive statistics

Table 2, Panel B presents the descriptive statistics for the independent variables. Concerning tenure, in about half of the observations (0.51), the CEO has a longer tenure than the chair of the supervisory board. Moreover, the chair's mean compensation amounts to 0.155 million euros per year, with a relatively high standard deviation of 0.136 million euros. Furthermore, CEOs and chairs are relatively similar in their basic demographics regarding age (0.65) and nationality (0.79). Regarding educational demographics, similarity in educational degree (0.71) is higher than similarity in the field of study (0.45).
Finally, 73% of the chairs have gained CEO experience. The descriptive analyses for the control variables are provided in Table 2, Panel C. Mean CEO age is 54.19 years (standard deviation 6.86 years). Slightly more than half of the CEOs hold an academic title such as an MBA, a Ph.D., or a professorship. Moreover, 50% of the CEO observations stem from internally-hired CEOs, and CEOs hold, on average, 1.79 external board memberships. In contrast, chairs of supervisory boards hold, on average, 3.74 external board memberships. Concerning the firm-specific characteristics, it is noteworthy that the sample consists of companies with an average free float of 74.80% and a standard deviation (22.54%) of less than one-third of that value. In combination with the first quartile of 60% and the third quartile of 94%, this hints at relatively low shareholder concentration levels, giving rise to potential agency conflicts and thus representing a fruitful setting for this study. Additionally, the average debt ratio is 34.18%. More than three-fourths of the observations have a debt ratio of less than 50%. Thus, the monitoring impact of outside creditors—in Germany, often banks—might be limited. Finally, on average, the executive board consists of 4.4 people (with a minimum of 2 and a maximum of 10); the average number of members of the supervisory board is 12.19 (with a minimum of 3 and a maximum of 21).

Hypotheses tests

Table 3 presents the results of the multivariate regressions conducted to test the hypotheses. For all models, we conducted regressions in which (a) only firm-specific control variables and (b) all variables were included. All models show significant F-values. Hence, they have significant explanatory power.
The adjusted R2 for the models with total, fixed, or short- and mid-term compensation as dependent variables ranges between 51.46% and 77.43%; for the models with CEO_SHARESTIMTI and CEO_SHAREVAR as dependent variables, the adjusted R2 value is considerably lower, ranging from 18.85% to 37.35%. In all cases, adjusted R2 is higher for the fully specified models than for the models that only include firm-specific variables. In the following, we focus on the fully specified models because these contain the hypothesis-testing variables.

Table 3 Multivariate regressions

H1 deals with reciprocity between CEO and chair. In this context, H1a posits that a longer CEO tenure than the chair's tenure is associated with higher total, fixed, short- and mid-term pay and a lower variable share of the CEO's compensation. However, we find a significant association between LONGER_TENURE and components of the compensation package only for fixed compensation. More precisely, a longer tenure of the CEO is associated with an increase of 7.54% in fixed compensation.Footnote 12 Hence, we do not conclude that a longer tenure generally leads to a more favorable compensation package. In contrast, Fiss (2006) finds that a score that captures the relative difference between tenures (and not only whether the CEO has a longer tenure than the chair) is positively associated with average TMT compensation. Thus, a relatively longer tenure of the CEO than the chair may influence CEO compensation. Nevertheless, this influence may not be based on the norm of reciprocity and the chair's desire to repay the "favor of the appointment." Instead, a relatively longer tenure may reflect the power and influence the CEO has built up over the years (Fiss 2006).Footnote 13 H1b utilizes the chair's excess total compensation as an indicator for reciprocity. Higher size-adjusted excess pay should, in turn, relate to a more favorable CEO compensation package.
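The percentage figures in this and the following paragraphs (e.g., the 7.54% effect of LONGER_TENURE above) are standard transformations of coefficients from models with a log-transformed dependent variable; the coefficient values below are illustrative, not the paper's estimates:

```python
import math

# Translating log-model coefficients into percentage effects. The
# coefficients used here are invented for illustration.

def pct_effect_dummy(beta):
    """Percent change in y when a dummy switches from 0 to 1, given log(y)
    as the dependent variable: (exp(beta) - 1) * 100."""
    return (math.exp(beta) - 1) * 100

def pct_effect_loglog(beta, pct_change_x):
    """Approximate percent change in y for a given percent change in x when
    both enter in logs (elasticity reading of the coefficient)."""
    return beta * pct_change_x

dummy_effect = pct_effect_dummy(0.0727)            # roughly a 7.5% increase
elasticity_effect = pct_effect_loglog(0.155, 10)   # 1.55% per 10% increase
```

The exact transformations used in the paper's footnotes may differ slightly; these are the conventional readings of such coefficients.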
The significant coefficients on CSB_TOTAL_EXCESS in Model 1b and Model 3b support this prediction for total compensation and short- and mid-term incentives. Specifically, an increase of the chair's excess pay by 10% is associated with a 1.55% increase in total compensation and a 2.48% increase in short- and mid-term incentives.Footnote 14 However, we find no significant positive association between CSB_TOTAL_EXCESS and fixed pay. Moreover, and contrary to our expectations, the coefficients for the share of both short- and long-term and overall variable payment are significantly positive, indicating that a higher chair pay relates to a higher share of performance-contingent pay for the CEO. This result suggests that the chair is willing to grant higher pay levels per se. However, higher-paid chairs incentivize performance more strongly. A possible explanation is that higher-paid chairs take their office more seriously. In sum, our results for the impact of reciprocity on CEO compensation suggest that reciprocity due to higher payments for the chair is likely to promote higher pay levels. Nevertheless, our results also suggest that the incentive effect is not weakened as a means of acting reciprocally; it might even be strengthened. Our second set of hypotheses deals with demographic similarity between CEO and chair concerning age, nationality, educational degree, and field of study. In the hypothesis development, we suggest that similarity is associated with a more favorable compensation package if it creates sympathy between the two negotiating actors. Concerning similarity in age, we find significantly negative coefficients of AGE_SIM for total and fixed pay in Model 1b and Model 2b.
Hence, a one-standard-deviation increase of AGE_SIM is associated with a decrease of total pay by 11.57% and a decrease of fixed pay by 13.53%.Footnote 15 Moreover, we find a significantly positive coefficient for the share of short- and mid-term incentives in Model 4b. These results indicate a less favorable compensation package if the degree of age similarity between CEOs and chairs of the supervisory board is high. Hence, we have to reject H2a. One explanation could be that considerably older chairs may be benevolent towards CEOs because they feel more experienced or of higher status. In fact, in 76.76% of the cases, the chair is older than the CEO. However, the closer the CEO's age is to the chair's age, the more important social comparison might become (Festinger 1954). Hence, the chair might want to differentiate himself from the CEO and ensure a superior feeling, thus giving the CEO a less favorable compensation package. Consequently, age similarity affects CEO compensation negatively. Regarding similarity in nationality, results show a significantly negative coefficient of NAT_SIM for fixed pay in Model 2b such that having the same nationality is associated with a decrease of 7.27% in fixed compensation.Footnote 16 Moreover, we find a significantly positive coefficient for the share of short- and mid-term incentives in Model 4b. Because the coefficients have the opposite signs to our expectations, we reject H2b. In our sample, similarity in nationality means that both the CEO and the chair have a German background in all but three cases. Because similarity thus tends to be the normal case, nationality may not be a relevant dimension where similarity leads to feelings of liking or empathy. In contrast, when dissimilarity in nationality is considered, the majority of cases comprises a German chair and a non-German CEO.
In those settings, the chair may be more benevolent towards the foreign CEO such that dissimilarity is more likely to result in a favorable compensation package than similarity. Concerning similarity in educational degree (H2c), we find that DEGREE_SIM is negatively and significantly associated with the overall share of performance-contingent pay in Model 5b. This result suggests that degree similarity affects performance-contingency in a way that is likely more preferable for the CEO, although the effect is not strong enough to increase the overall compensation. Finally, regarding H2d, we find no significant association between similarity in the field of study (STUDY_FIELD_SIM) and the compensation package components. Thus, we have to reject H2d. However, overly broad categories concerning the field of study may drive this result. Specifically, most CEOs and chairs have a background in business and economics or natural sciences in our sample. Both fields of study are characterized by many different specializations and different forms of studying. Thus, our measure might not capture high levels of similarity. However, a more precise measure might lead to only very few cases in which similarity is high. Consequently, the study field may generally not be a good proxy for similarity between the CEO and the chair of the supervisory board. In sum, the results for demographic similarities highlight that different facets of similarity operate very differently regarding the compensation components affected. More precisely, similarity in educational degree possibly triggers feelings of sympathy and increased trust to some extent. In contrast, age similarity is more likely linked to social comparisons. Finally, in the German setting, similarity in nationality may not be a relevant dimension due to the common German background of most CEOs and chairs. In H3, we discussed the effect of the chair's prior CEO experience on CEO compensation. 
We argued that such a chair might be more empathic and willing to reciprocate favors received from others, thus supporting the CEO with a more favorable compensation package. In line with this expectation, we find that CSB_CEO is significantly and positively associated with CEO total compensation in Model 1b. Specifically, having a chair with CEO experience is associated with a 16.66% increase in total compensation.Footnote 17 Concerning performance-contingency, however, results show a significantly positive association between CSB_CEO and CEO_SHAREVAR in Model 5b. Thus, we find partial support for our expectation such that the chair's similar experience positively affects total compensation. Nevertheless, the chair's experience is also associated with an overall stronger performance contingency. Thus, a chair of the supervisory board who has previously worked as a CEO does not unconditionally grant a more favorable compensation package but may focus on the CEO's actual performance and the related incentives more strongly. Taken together, we can build up a relatively nuanced picture of the social-psychological perspective on executive compensation: while potential reciprocity, demographic similarities, and similar experiences might suggest more favorable compensation packages from the CEO's perspective, the empirical results do not support this assumption per se. Specifically, for some of the dimensions investigated, namely chair's size-adjusted excess pay, similarity in age and nationality, and the chair's experience as CEO, we find indications that the variable proportion of the CEO compensation package either remains unaffected or even increases and, hence, should not be a source of significant concerns to shareholders. Regarding the control variables, our results show a significantly positive influence of the CEO's age (CEO_AGE) and the CEO's academic title (CEO_TITLE) on total compensation and fixed compensation. 
Additionally, CEO_AGE positively relates to short- and mid-term compensation and the overall share of performance-contingent compensation. Moreover, internal CEO recruitment (CEO_INT) is negatively associated with total and fixed compensation. These results indicate support for the explanatory power of human capital theory. Further, we find that the CEO's external board memberships (CEO_EXT) are positively associated with fixed pay, which is in line with managerial power theory. Regarding the firm-specific control variables, we find a positive association between our performance measure ROE and CEO_STIMTI_LOG as well as the share of short- and mid-term incentives, which provides partial support for the pay-for-performance hypothesis (footnote 18). Furthermore, company size (TA_LOG) relates to higher values of the total, the fixed, and the short- and mid-term compensation. Additionally, higher investment opportunities (MTB) are associated with higher CEO total pay and a higher proportion of variable pay. Moreover, BETA shows a significantly positive association with fixed pay and a significantly negative association with short- and mid-term incentives and the overall share of performance-contingent compensation. Finally, higher free float (FF) relates to increased total, fixed, and short- and mid-term compensation and a lower share of short- and mid-term incentives. This relation suggests that less shareholder concentration affects the compensation package positively for the CEO, presumably because monitoring is less intense.

Additional analyses

Ownership concentration

Our previous analyses indicate that social-psychological mechanisms might affect the determination of CEO pay such that the CEO uses their influence to negotiate a favorable compensation package. However, the described effects may also depend on the shareholders' ability to monitor and control the firm's decision-making processes effectively.
Hence, we conduct additional tests to determine whether our findings are moderated by ownership concentration as an indicator of ownership control (Fiss 2006; O'Reilly III et al. 1988; Wade et al. 1990). Specifically, we rerun our regressions and include interaction terms between ownership concentration (OC, computed as 1 − FF) and those independent variables that have shown significant associations with components of CEO pay (footnote 19). Results in Table 4 indicate that, after including the interaction terms, the main effects of our independent variables remain principally the same. Moreover, in most cases, the interaction terms are not significantly associated with CEO compensation components. Hence, in our setting, ownership concentration moderates the effect of social-psychological mechanisms only to a very limited extent, without revealing a relevant pattern.

Table 4: Analysis of ownership concentration

Prior studies also provide mixed results concerning ownership control. For example, O'Reilly III et al. (1988) find that ownership control has no significant influence on compensation. In contrast, Fiss (2006) finds a significant interaction effect between having a former CEO as chair and ownership control such that TMT compensation is significantly lower when a company is owner-controlled. However, in our analysis, we find no effect for the interaction between CSB_CEO and OC on fixed compensation. In our main analysis, we investigate the effect of the chair's experience as CEO in general. However, prior research explicitly examines either chairs who are former CEOs of the focal company (Fiss 2006) or directors who serve or have served as CEO at another company (O'Reilly III et al. 1988; Westphal and Zajac 1997).
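As an illustration of the ownership-concentration moderation test described above, the sketch below simulates data and estimates an OLS model with an interaction term. This is not the paper's actual estimation; the variable names follow the appendix, while the sample size, effect sizes, and noise level are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Simulated stand-ins for the hand-collected variables (illustrative only).
csb_ceo = rng.integers(0, 2, n).astype(float)  # chair has prior CEO experience (0/1)
ff = rng.uniform(0.2, 1.0, n)                  # free float
oc = 1.0 - ff                                  # ownership concentration, OC = 1 - FF
ta_log = rng.normal(15.0, 2.0, n)              # log total assets (control)

# Simulate log compensation with a known interaction effect of -0.5.
y = (8.0 + 0.15 * csb_ceo + 0.3 * oc
     - 0.5 * csb_ceo * oc + 0.05 * ta_log
     + rng.normal(0.0, 0.1, n))

# OLS with main effects plus the OC interaction term.
X = np.column_stack([np.ones(n), csb_ceo, oc, csb_ceo * oc, ta_log])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction_coef = beta[3]  # estimate of the moderation effect, close to -0.5
```

A significant coefficient on csb_ceo * oc would indicate that ownership concentration moderates the effect of the chair's CEO experience; in the paper's data, most such interaction terms turn out insignificant.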
While the mechanisms of perspective-taking, social comparison, and the generalized norm of reciprocity are likely to unfold for chairs who have been CEO at another company, different effects may occur if the chair is a former CEO of the focal firm. In Germany, it is quite common for former CEOs to become chairs of the supervisory board upon retirement (Andres et al. 2014). Against this background, Fiss (2006) hypothesizes, based on the empathy perspective, that having a former CEO of the focal company as board chair is positively related to managerial compensation. However, in his analysis, the hypothesis is not supported. This finding may be driven by the fact that Fiss (2006) could not investigate CEO compensation directly but needed to rely on TMT compensation. Thus, the effect of similar experiences might only be relevant for the CEO and might not spill over to the board's remaining members. Alternatively, the chair's desire to protect the firm could dominate the effects of empathy. Consequently, the former CEO position provides the chair with the power to influence the compensation-setting process such that a more controlling compensation package is determined. A chair of the supervisory board who has been a CEO of the focal firm may be especially controlling if the CEO is a firm outsider. In contrast, a former CEO and an internally recruited CEO may feel that they belong to the same social group (O'Reilly III and Main 2007) such that effects of liking and sympathy may prevail.

Against this background, we rerun our regression analysis and replace CSB_CEO with two dichotomous variables that take the value 1 if the chair has worked as CEO of another firm (CSB_CEO_OTHER) or as CEO of the focal firm (CSB_CEO_FOCAL), and 0 otherwise.
Additionally, we consider internal recruitment of the CEO (CEO_INT) by using a dummy variable that takes the value 1 in case of an internal appointment, and we investigate the interaction between internal recruitment and a chair who has worked as CEO of the focal firm. Our findings in Table 5 show effects of CSB_CEO_OTHER similar to those of CSB_CEO. Specifically, CSB_CEO_OTHER is significantly and positively associated with total compensation in Model 1, fixed compensation in Model 2, and the overall share of performance-contingent compensation in Model 5.

Table 5: Analysis of similar experiences

Concerning chairs who are former CEOs of the focal company, we find a significantly negative association between CSB_CEO_FOCAL and CEO_FIX_LOG in Model 2. Moreover, we find a significantly positive association between CSB_CEO_FOCAL and both CEO_SHARESTIMTI and CEO_SHAREVAR in Model 4 and Model 5, reflecting a less favorable compensation package. The results indicate that the former CEO-chair may be interested in protecting their legacy by composing a compensation package that focuses on performance-contingent compensation and is less favorable for the focal CEO. However, we find opposing effects when the CEO has also been recruited from the focal firm. Specifically, the interaction between CSB_CEO_FOCAL and CEO_INT is significantly positively associated with fixed pay in Model 2. Further, the interaction term is significantly and negatively associated with the share of performance-contingent pay in Model 5. Thus, empathy and liking may only be prevalent when both the chair and the CEO have a background in the focal firm, and the CEO has been recruited internally.

This paper investigates social-psychological mechanisms that may unfold in the negotiation process between the CEO and the chair of the supervisory board regarding the determinants of total executive compensation, fixed compensation, short- and mid-term compensation, and the variable proportion of total compensation.
We exploited the legislative change in Germany requiring that individual executive compensation data be disclosed in detail and hand-collected a unique dataset of constituents of the German HDAX that allowed us to shed light on possible social-psychological mechanisms in the pay determination process within a two-tier board system. This investigation is particularly relevant because it enables us to open the "black box" of CEO pay determination by analyzing two important actors in the process: the CEO and the chair of the supervisory board. Previous studies, primarily of Anglo-American companies, could not analyze the negotiation process in such detail due to the lack of formal separation between the key actors in management and control. However, our investigation is also relevant for the Anglo-American corporate governance system, since both systems have started to converge (Gilson 2001). The functional separation of the remuneration committee and the declining CEO duality lead to comparability, making it possible to use the insights we have gained to better understand the determination process (Bruce et al. 2005; Conyon and He 2004; Fiss 2006).

We provide evidence for the social-psychological perspective's explanatory power, which is of particular importance since the economic and the political perspective have not delivered sufficient empirical evidence to fully explain the determination of CEO compensation. Reciprocity between the CEO and the chair, in particular, plays a crucial role in shaping CEO compensation packages that are more favorable for the CEO. More precisely, in line with findings by O'Reilly III and Main (2010), we show that higher chair excess compensation is associated with higher total and higher performance-contingent pay. Moreover, we find that a CEO tenure longer than the chair's tenure increases fixed CEO compensation. Concerning similarity, results are ambivalent.
First, we find that age similarity is associated with a less favorable compensation package, especially regarding fixed compensation, which contradicts the results by Main et al. (1995). We argued that increasing similarity in age might evoke a social comparison perspective. The negotiation and determination of CEO compensation could be a means for the chair of the supervisory board to set himself apart from the CEO. Second, our findings regarding similarity in educational degree are partially in line with our expectation in that higher similarity is associated with a decrease in performance-sensitive pay and is hence more preferable from the CEO's perspective. Third, similarity in nationality leads to a less favorable compensation package. This result might be explained by the fact that the majority of chairs are German and that dissimilarity, rather than similarity, leads to sympathy towards the foreign CEO and a more favorable compensation package.

In sum, these results reveal that similarity cannot be considered a universal construct; its effects may depend on the extent to which a given dimension triggers social comparison concerns or feelings of sympathy and liking. In this regard, we illuminate the social-psychological effects of similarity in more depth than Westphal and Zajac (1995). Those authors created a single measure of similarity out of different dimensions and generally found that an increase in similarity is associated with more generous compensation contracts.

Finally, concerning similar experiences, we extend the investigation conducted by Fiss (2006) and examine in our main analysis chairs who have been CEO in either the focal firm or another firm. While Fiss (2006) finds no association between chairs with CEO experience in the focal firm and TMT compensation, we find a positive effect of CEO experience on both fixed compensation and the overall share of performance-contingent compensation.
Moreover, an additional analysis that differentiates between the chair's CEO experience in another company and the chair's CEO experience in the focal company reveals that CEO experience gained in another company leads to higher total and fixed compensation, but also to a higher share of performance-contingent compensation. Hence, outside CEO experience may make for a more empathic chair, as reflected in the increased total and fixed compensation. In contrast, for chairs who have been CEO at the focal company, we find a negative association with a favorable compensation package, except when the CEO has been recruited internally.

Overall, this study offers insights into social-psychological effects that play a role during the negotiation process between the CEO and the chair of the supervisory board, going beyond previous research and illuminating explicit mechanisms in more depth, as suggested by O'Reilly III and Main (2010). Besides, it is worth noting that higher compensation does not seem to be granted unconditionally, which differs from theory and prior literature, especially for one-tier board systems. The chair's excess compensation, similarity in age and nationality, and prior CEO experience are even associated with higher shares of variable pay. Consequently, the incentive function of CEO compensation does not necessarily have to be compromised, even when pay is relatively high. Finally, another additional analysis shows that higher ownership concentration does not moderate the effects of reciprocity and similarity.

In conclusion, our findings provide guidance on the situations in which other members of the supervisory board and shareholders should monitor the relationship between the CEO and the chair more closely and challenge the negotiation process. Specifically, when the chair's payment triggers reciprocity concerns, or when the CEO and the chair have a similar educational or experiential background, the negotiation process could be compromised.
Close monitoring is especially necessary because, in our sample, higher ownership control, which is typically associated with increased monitoring, does not moderate the effects of social-psychological mechanisms. Naturally, our investigation exhibits some limitations that offer opportunities for future research. Since the concepts of reciprocity and similarity are examined only through selected indicators, future research should expand this approach and use other indicators to substantiate our findings further. Moreover, besides reciprocity and similarity, other social-psychological factors such as cultural influences, social norms, or self-concepts might influence behavior and compensation. Hence, various concepts from the behavioral-accounting literature could be applied to establish a better understanding of the process by which executive compensation is determined. An interesting approach could be to explicitly combine the managerial-power hypothesis with the social-psychological perspective (e.g., in the light of Göx and Hemmer 2020). Concerning our data, the investigation is based on firms from the HDAX and focuses on the relationship between the CEO and the chair of the supervisory board. This database could be enlarged if other firms and all board members are included in the examination. However, data availability might be limited. To sum up, future research should confirm these initial findings and provide further evidence regarding social-psychological constructs, as this perspective seems promising, with this study being a first approach.

Notes

In the following, we use the expressions CEO and chair (of the supervisory board) to refer to the positions of the "Vorstandsvorsitzender" (chair of the executive board) and the "Aufsichtsratsvorsitzender" (chair of the supervisory board).

Favorable evaluations may also affect the granting of long-term incentives.
However, we refrain from explicitly including long-term incentives in our hypotheses for two reasons. First, the valuation of share-based payments is taken directly from the annual reports. Therefore, we cannot ensure that long-term incentives are comparable across observations. Second, 37.53% of firms in our sample do not grant long-term incentives at all. Thus, the lower number of observations for the analyses of long-term incentives could negatively affect the statistical power of our analyses.

Banks, financial services, insurance, and real-estate companies are considered financial companies. A total of 108 observations belong to this group.

For stock corporations (Aktiengesellschaft, AG) and partnerships limited by shares (Kommanditgesellschaft auf Aktien, KGaA), dualistic structures are the default. However, both types of enterprises may also opt for another structure, thus emphasizing the need for individual investigation. Of the HDAX companies, 66 observations do not meet either the criterion regarding the legal form or the criterion of a dualistic governance structure.

Since its introduction in 1996, the size of the MDAX index has varied over time. Starting with 70 stocks in 1996, the MDAX was reduced to 50 stocks in 2002. Lastly, in 2018, the MDAX was expanded again to 60 stocks to additionally include some companies previously listed exclusively in the TecDAX.

The use of the opting-out clause varies between indices, with smaller companies opting out more frequently. For example, while only four firm-year observations are eliminated in the blue-chip segment DAX, 47 are eliminated in the mid-cap index MDAX.

When all variables are included (models with suffix b), the baseline sample size for our main analyses in Table 3 is 303 observations, stemming from the model explaining the total CEO compensation (Model 1b). Some firms do not report granting short- and mid-term incentives, thus reducing the sample size to 298 in Model 3b.
We chose to treat these five observations as missing instead of manually inserting a value of zero, in order to analyze the determinants of short- and mid-term incentives if granted. For analyzing the share of short- and mid-term incentives (Model 4b) or variable incentives (Model 5b) in the total compensation package, these observations naturally carry values of zero. Counterintuitively, the number of observations for the model explaining fixed compensation (Model 2b) is higher than in the model for total compensation. The reason is missing values for the TSR and/or ROE variables, which are not needed for Model 2b. We follow the approach frequently favored in econometrics of not aligning the sample size to the "smallest common denominator" to avoid selection effects, although aligning the sample is rather common, e.g., in the accounting literature.

The different sample sizes between models with suffix a and suffix b result from missing data points, mainly for variables regarding the CEO and the chair of the supervisory board. We do not analyze these differences further due to their lack of relevance for testing the hypotheses.

The valuation of share-based payments is taken directly from the annual reports. Missing or inconsistent data hinder us from pursuing our own valuation efforts.

For a similar discussion, see Finkelstein and Hambrick (1989).

While in academia a professorship is considered a higher qualification than a Ph.D., the majority of professors among CEOs and chairs of supervisory boards in Germany hold an honorary title, which does not reflect educational activities.

In Germany, earning a Ph.D. does not necessarily imply pursuing an academic career. In fact, most Ph.D. candidates leave for industry after graduation. In addition, the professor title can be granted on an honorary basis, which again does not imply that the person is (exclusively) working as a scholar.
A model with firm fixed effects analyzes effects that stem from changes within the firm over time. However, corporate governance variables, which would also be included in our model, are largely time-invariant. Thus, a model with firm fixed effects would have to determine coefficients based on a small number of firms for which corporate governance variables change (Fahlenbrach 2009).

The economic effect is calculated using exp(β₁) − 1 = exp(0.0727) − 1 = 0.0754.

Including a relative measure for tenure instead of a dummy variable in our analysis leads to significant associations.

The economic effect is calculated using 1.1^β₂ − 1, i.e., 1.1^0.1618 − 1 = 0.0155 for total compensation and 1.1^0.2574 − 1 = 0.0248 for short- and mid-term incentives.

The economic effect is calculated using exp(β₃ · SD) − 1, i.e., exp(−0.5345 · 0.23) − 1 = −0.1157 for total compensation and exp(−0.6319 · 0.23) − 1 = −0.1353 for fixed compensation.

The economic effect is calculated using exp(β₁) − 1 = exp(−0.0755) − 1 = −0.0727.

TSR and ROE are not considered in Models 2a and 2b because fixed compensation should not depend on performance.

We exclude the control variable FF from the regressions due to multicollinearity issues.

References

Abels PB, Martelli JT (2013) CEO duality: how many hats are too many? Corp Gov 13:135–147
Andres C, Fernau E, Theissen E (2014) Should I stay or should I go? Former CEOs as monitors. J Corp Finance 28:26–47
Barkema HG, Pennings JM (1998) Top management pay: impact of overt and covert power. Organ Stud 19:975–1003
Beck D, Friedl G, Schäfer P (2020) Executive compensation in Germany. J Bus Econ 90:787–824
Becker GS (1964) Human capital: a theoretical and empirical analysis, with special reference to education.
Columbia University Press, New York
Belliveau MA, O'Reilly CA III, Wade J (1996) Social capital at the top: effects of social similarity and status on CEO compensation. Acad Manag J 39:1568–1593
Bidwell M (2011) Paying more to get less: the effects of external hiring versus internal mobility. Adm Sci Q 56:369–407
Bloom MC, Milkovich GT (1998) Relationships among risk, incentive pay, and organizations. Acad Manag J 41:283–297
Boyd BK (1994) Board control and CEO compensation. Strat Manag J 15:335–344
Brockman P, Lee HSG, Salas JM (2016) Determinants of CEO compensation: generalist-specialist versus insider-outsider attributes. J Corp Finance 39:53–77
Bruce A, Buck T, Main BGM (2005) Top executive remuneration: a view from Europe. J Manag Stud 42:1493–1506
Bültel N (2011) Starmanager: Medienprominenz, Reputation und Vergütung von Top-Managern. Gabler, Wiesbaden
Byrne D (1971) The attraction paradigm. Academic Press, New York
Byrne D, Clore GL Jr, Worchel P (1966) Effect of economic similarity-dissimilarity on interpersonal attraction. J Personal Soc Psychol 4:220–224
Cialdini RB (2001) Influence: science and practice, 4th edn. Allyn & Bacon, Boston
Conyon MJ, He L (2004) Compensation committees and CEO compensation incentives in U.S. entrepreneurial firms. J Manag Acc Res 16:35–56
Core J, Holthausen RW, Larcker DF (1999) Corporate governance, chief executive officer compensation, and firm performance. J Financ Econ 51:371–406
Decety J, Jackson PL (2004) The functional architecture of human empathy. Behav Cogn Neurosci Rev 3:71–100
Ekeh PP (1974) Social exchange theory: the two traditions. Harvard University Press, Cambridge
Elston JA, Goldberg LG (2003) Executive compensation and agency costs in Germany. J Bank Finance 27:1391–1410
Elton EJ, Gruber MJ, Blake CR (1996) Survivorship bias and mutual fund performance. Rev Financ Stud 9:1097–1120
Fahlenbrach R (2009) Shareholder rights, boards, and CEO compensation.
Rev Finance 13:81–113
Ferrero-Ferrero I, Fernández-Izquierdo MÁ, Muñoz-Torres MJ (2012) The impact of the board of directors characteristics on corporate performance and risk-taking before and during the global financial crisis. Rev Manag Sci 6:207–226
Festinger L (1954) A theory of social comparison processes. Hum Relat 7:117–140
Finkelstein S, Boyd BK (1998) How much does the CEO matter? The role of managerial discretion in the setting of CEO compensation. Acad Manag J 41:179–199
Finkelstein S, Hambrick D (1988) Chief executive compensation: a synthesis and reconciliation. Strat Manag J 9:543–558
Finkelstein S, Hambrick D (1989) Chief executive compensation: a study of the intersection of markets and political processes. Strat Manag J 10:121–134
Finkelstein S, Hambrick D (1996) Strategic leadership: top executives and their effect on organizations. West Publishing Company, Minneapolis/St. Paul
Fiss PC (2006) Social influence effects and managerial compensation evidence from Germany. Strat Manag J 27:1013–1031
German Corporate Governance Code Commission (2019) German corporate governance code (as resolved on December 16, 2019 with decisions from the plenary meeting of May 5, 2015). https://www.dcgk.de//files/dcgk/usercontent/en/download/code/191216_German_Corporate_Governance_Code.pdf. Accessed 5 Mar 2021
Gilson RJ (2001) Globalizing corporate governance: convergence of form or function. Am J Comp Law 49:329
Goergen M, Limbach P, Scholz M (2015) Mind the gap: the age dissimilarity between the chair and the CEO. J Corp Finance 35:136–158
Gomez-Mejia LR, Wiseman RM (1997) Reframing executive compensation: an assessment and outlook. J Manag 23:291–374
Goodman PS (1974) An examination of referents used in the evaluation of pay. Organ Behav Hum Perform 12:170–195
Göx RF, Hemmer T (2020) On the relation between managerial power and CEO pay. J Acc Econ 69:101300
Hall BJ, Liebmann JB (1998) Are CEOs really paid like bureaucrats?
Q J Econ 113:653–691
Hambrick D, Finkelstein S (1995) The effects of ownership structure on conditions at the top: the case of CEO pay raises. Strat Manag J 16:175–194
Harris D, Helfat C (1997) Specificity of CEO human capital and compensation. Strat Manag J 18:895–920
Hausman JA (1978) Specification tests in econometrics. Econometrica 46:1251–1271
Jensen MC, Murphy KJ (1990a) CEO incentives—It's not how much you pay, but how. Harv Bus Rev 68:138–153
Jensen MC, Murphy KJ (1990b) Performance pay and top-management incentives. J Political Econ 98:225–264
Kaserer C, Wagner N (2004) Executive pay, free float, and firm performance: evidence from Germany. Working Paper
Main BGM, O'Reilly CA III, Wade J (1995) The CEO, the board of directors and executive compensation: economic and psychological perspectives. Ind Corp Chang 4:293–332
Mincer J (1970) The distribution of labor incomes: a survey with special reference to the human capital approach. J Econ Lit 8:1–26
Montoya RM, Horton RS (2013) A meta-analytic investigation of the processes underlying the similarity-attraction effect. J Soc Pers Relat 30:64–94
Murphy KJ (1999) Executive compensation. In: Ashenfelter O, Card D (eds) Handbook of labor economics, vol 38. Elsevier, Amsterdam, pp 2485–2563
O'Brien RM (2007) A caution regarding rules of thumb for variance inflation factors. Qual Quant 41:673–690
O'Reilly CA III, Main BGM (2007) Setting the CEO's pay: it's more than simple economics. Organ Dyn 36:1–12
O'Reilly CA III, Main BGM (2010) Economic and psychological perspectives on CEO compensation: a review and synthesis. Ind Corp Chang 19:675–712
O'Reilly CA III, Main BGM, Crystal GS (1988) CEO compensation as tournament and social comparison: a tale of two theories. Adm Sci Q 33:257–274
Oesterle M-J (2003) Entscheidungsfindung im Vorstand großer Deutscher Aktiengesellschaften.
Z Führ Organ 72:199–208
Rapp MS, Wolff M (2010) Determinanten der Vorstandsvergütung: Eine empirische Untersuchung der Deutschen Prime-Standard-Unternehmen. Z Betr 80:1075–1112
Schmidt R, Schwalbach J (2007) Zu Höhe und Dynamik der Vorstandsvergütung in Deutschland. In: Schwalbach J, Fandel G (eds) Der ehrbare Kaufmann: Modernes Leitbild für Unternehmer? Gabler, Wiesbaden, pp 111–122
Sharpe WF (1963) A simplified model for portfolio analysis. Manag Sci 9:277–293
Tosi HL, Gomez-Mejia LR (1989) The decoupling of CEO pay and performance: an agency theory perspective. Adm Sci Q 34:169–189
Tosi HL, Werner S, Katz JP, Gomez-Mejia LR (2000) How much does performance matter? A meta-analysis of CEO pay studies. J Manag 26:301–339
Uepping F (2015) Vorstandsvergütung in Deutschland: Eine empirische Untersuchung von Ausgestaltung und Determinanten unter Berücksichtigung ökonomischer, machtpolitischer und sozialpsychologischer Einflüsse. Kovac, Hamburg
van Essen M, Otten J, Carberry EJ (2015) Assessing managerial power theory: a meta-analytic approach to understanding the determinants of CEO compensation. J Manag 41:164–202
Veliyath R, Bishop JW (1995) The link between CEO compensation and firm performance: empirical evidence of labor market norms. Int J Organ Anal 3:268–283
Wade J, O'Reilly CA III, Chandratat I (1990) Golden parachutes: CEOs and the exercise of social influence. Adm Sci Q 35:587–603
Westphal JD, Zajac EJ (1995) Who shall govern? CEO/board power, demographic similarity, and new director selection. Adm Sci Q 40:60–83
Westphal JD, Zajac EJ (1997) Defections from the inner circle: social exchange, reciprocity, and the diffusion of board independence in U.S. corporations. Adm Sci Q 42:161–183
White H (1980) A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity.
Econometrica 48:817–838

Anja Schwering, University of Potsdam, Potsdam, Germany
Friedrich Sommer, University of Bayreuth, Bayreuth, Germany
Florian Uepping, Mülheim an der Ruhr, Germany
Sandra Winkelmann, Ruhr-University Bochum, Universitätsstr. 150, 44801 Bochum, Germany

Correspondence to Sandra Winkelmann.

The authors have no relevant financial or non-financial interests to disclose. Work related to this study was published by Uepping (2015).

Appendix: Variable definitions

CEO_TOTAL: Total compensation of the CEO; computed as the sum of fixed compensation (CEO_FIX), short- and mid-term incentives (CEO_STIMTI), long-term incentives (CEO_LTI), and other benefits (CEO_OTHER)
CEO_TOTAL_LOG: Log-transformed total compensation (CEO_TOTAL) of the CEO
CEO_FIX: Fixed compensation (in thousands of euros) of the CEO
CEO_FIX_LOG: Log-transformed fixed compensation (CEO_FIX) of the CEO
CEO_SHAREFIX: Share of fixed compensation (CEO_FIX) in the CEO's total compensation (CEO_TOTAL)
CEO_STIMTI: Short- and mid-term incentives (in thousands of euros) of the CEO
CEO_SHARESTIMTI: Share of short- and mid-term incentives (CEO_STIMTI) in the CEO's total compensation (CEO_TOTAL)
CEO_LTI: Long-term incentives (in thousands of euros) of the CEO; the valuation of share-based payments is taken directly from the annual reports
CEO_SHARELTI: Share of long-term incentives (CEO_LTI) in the CEO's total compensation (CEO_TOTAL)
CEO_SHAREVAR: Share of all variable compensation (CEO_STIMTI and CEO_LTI) in the CEO's total compensation (CEO_TOTAL)
CEO_OTHER: Other benefits of the CEO
CEO_SHAREOTHER: Share of other benefits (CEO_OTHER) in the CEO's total compensation (CEO_TOTAL)
CEO_TENURE: Number of years since the appointment as CEO
CSB_TENURE: Number of years since the appointment as chair of the supervisory board
LONGER_TENURE: Dichotomous variable (1/0) with value of 1 if CEO_TENURE is higher than
CSB_TENURE
CSB_TOTAL: Total compensation of the chair of the supervisory board; computed as the sum of fixed and variable compensation for the occupation as chair of the supervisory board
CSB_TOTAL_LOG: Log-transformed total compensation (CSB_TOTAL) of the chair of the supervisory board
CSB_TOTAL_EXCESS: Size-adjusted excess log-transformed total compensation (CSB_TOTAL_LOG) of the chair of the supervisory board; to compute the indicator, the median of CSB_TOTAL_LOG for the same firm-size decile is subtracted from CSB_TOTAL_LOG
CEO_AGE: Age of the CEO
CSB_AGE: Age of the chair of the supervisory board
AGE_DIF: Age difference between CEO and chair of the supervisory board; computed as the absolute value of the difference between CEO_AGE and CSB_AGE
AGE_SIM: Indicator for age similarity between the CEO and the chair of the supervisory board with values between 0 for low similarity and 1 for high similarity; to compute the indicator, AGE_DIF is subtracted from the highest value of AGE_DIF in the sample, and the difference is scaled by this highest value
NAT_SIM_DUM: Dichotomous variable (1/0) with value of 1 if the CEO's nationality equals the chair's nationality and 0 otherwise
CEO_DEGREE: Highest level of education of the CEO; ordinal variable with value of 1 for high school graduation, apprenticeship, or comparable education, 2 for college or university degree, and 3 for Ph.D.
or professor CSB_DEGREE Highest level of education of the chair of the supervisory board; ordinal variable with value of 1 for high school graduation, apprenticeship, or comparable training, 2 for college or university degree, and 3 for PhD or professor REL_DEGREE Relative level of education of CEO and chair of the supervisory board; computed as the difference between the ordinal variables CEO_DEGREE and CSB_DEGREE DEGREE_SIM Indicator for educational similarity between the CEO and the chair of the supervisory board with values between 0 for low similarity and 1 for high similarity; to compute the indicator, the absolute value of REL_DEGREE is subtracted from the maximum possible value for REL_DEGREE of 3, and the difference is by this maximum possible value of 3 STUDY_FIELD_SIM Dichotomous variable (1/0) with value of 1 if the CEO's field of study equals the chair's field of study and 0 otherwise; fields of studies are categorized as business and economics, law, natural sciences, engineering, and others CSB_CEO Dichotomous variable (1/0) with value of 1 if the chair of the supervisory board has worked as CEO in another or the focal company and 0 otherwise CEO_INT Dichotomous variable (1/0) with value of 1 for an internal appointment of the CEO and 0 otherwise CEO_TITLE Dichotomous variable (1/0) with value of 1 if the CEO holds an academic title (Ph.D., professor, or MBA) and 0 otherwise CEO_EXT Number of external board memberships of the CEO CSB_EXT Number of external board memberships of the chair of the supervisory board TSR Total shareholder return ROE Return on equity TA Total assets TA_LOG Log-transformed total assets MTB Market-to-book ratio as an indicator for complexity BETA Systematic risk; measured by beta with benchmark index HDAX FF Indicator for the influence of shareholders; measured by the proportion of shares in free float DEBT Indicator for influence of banks; measured by the debt ratio (total debt over total capital) MEMB_SB Number of 
supervisory board members MEMB_EB Number of executive board members OC Indicator for ownership concentration; computed as (1–FF) CSB_CEO_OTHER Dichotomous variable (1/0) with value of 1 if the chair of the supervisory board has worked as CEO in another company and 0 otherwise CSB_CEO_FOCAL Dichotomous variable (1/0) with value of 1 if the chair of the supervisory board has worked as CEO in focal company and 0 otherwise Sources: Annual reports, company websites, LexisNexis, Munzinger Personenarchiv, web search, investor relations departments, Datastream Schwering, A., Sommer, F., Uepping, F. et al. The social-psychological perspective on executive compensation: evidence from a two-tier board system. J Bus Econ 92, 309–345 (2022). https://doi.org/10.1007/s11573-021-01066-5 Issue Date: February 2022 Social-psychological mechanisms Two-tier board
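As a sketch, the two similarity indicators defined above (AGE_SIM and DEGREE_SIM) can be computed in a few lines of Python. The function names are illustrative, and the computations follow the definitions exactly as stated in the appendix:

```python
def age_sim(age_dif, max_age_dif):
    """AGE_SIM: AGE_DIF is subtracted from the sample maximum, then scaled by that maximum."""
    return (max_age_dif - age_dif) / max_age_dif

def degree_sim(ceo_degree, csb_degree):
    """DEGREE_SIM: |REL_DEGREE| subtracted from the stated maximum of 3, divided by 3."""
    rel_degree = ceo_degree - csb_degree  # REL_DEGREE
    return (3 - abs(rel_degree)) / 3

# Identical ages or degrees give a similarity of 1; larger gaps move toward 0.
print(age_sim(0, 30))    # 1.0
print(degree_sim(3, 1))  # ~0.33
```

Both indicators are bounded, with 1 for maximal similarity, matching the 0-to-1 scaling described in the definitions.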
= Standard Model interactions =

The Standard Model of particles and interactions, based on the SU(3),,c,, x SU(2),,L,, x U(1),,Y,, gauge symmetry, has been available since the first versions of both MadGraph and, more recently, of MadEvent. There is, however, one important difference w.r.t. the previous version of the package, regarding how the couplings of the models are handled. As was already mentioned in the previous section, the task of computing from the parameters in the Lagrangian (primary parameters) all the secondary parameters (masses, widths and dependent parameters) needed by MadGraph is left to an external program, the SM Calculator. The output of the SM Calculator is a parameter card, param_card.dat, which contains the numerical values of the main couplings (primary and secondary) of a specific model. The parameter card has a format compliant with the SUSY Les Houches Accord.

A simple example is given by the EW parameters that characterize the gauge SU(2),,L,, x U(1),,Y,, interactions and its breaking: in the Standard Model there are five relevant parameters, α,,em,,, G,,F,,, sin θ,,W,,, m,,Z,,, m,,W,,, of which only three are independent at tree level. Various schemes, differing by the choice of the parameters considered independent, are used in the literature. In the SM Calculator, the default is to take G,,F,,, m,,Z,,, m,,W,, as inputs and derive α,,em,, and sin θ,,W,,, but other choices are available. As a result, a consistent and unique set of values of the couplings appearing in the Feynman rules is derived and used for the computation of the amplitudes.

Another sometimes important feature of our SM implementation is the possibility of distinguishing between the kinematic mass (pole mass) for the quarks and that entering the Yukawa coupling definition (the <span style="text-decoration: overline">MS</span> mass). 
For the latter, the user can choose to evolve the mass to the scale corresponding to the Higgs mass, which leads to an improvement of the perturbative expansion.

Finally, we mention that various versions of the Standard Model are actually available for specific studies. For example, in the "minimal SM" (sm) the CKM matrix is diagonal, while in the smckm model a mixing between the first and second generation is allowed (Cabibbo angle). Another example is the sm_nohiggs model, where the Higgs has been eliminated and the EWSB sector behaves as a non-linear sigma model.

-- !MichelHerquet - 09 Apr 2007
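As an illustration of the default (G,,F,,, m,,Z,,, m,,W,,) input scheme described above, the two derived parameters follow from the standard tree-level relations. This is only a sketch: the numerical inputs are typical PDG-style values, not values taken from this page.

```python
import math

# Inputs of the default scheme (illustrative values, GeV units)
G_F = 1.16639e-5   # Fermi constant, GeV^-2
m_Z = 91.188       # Z boson mass
m_W = 80.419       # W boson mass

# Tree-level relations: sin^2(theta_W) from the W/Z mass ratio,
# alpha_em from matching to muon decay.
sin2_theta_W = 1.0 - (m_W / m_Z) ** 2
alpha_em = math.sqrt(2) * G_F * m_W**2 * sin2_theta_W / math.pi

print(sin2_theta_W)  # ~0.222
print(1 / alpha_em)  # ~132.5
```

Note that 1/α,,em,, derived this way (~132.5) differs from the low-energy value of ~137; this is exactly the scheme consistency the text refers to.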
Arkimedo: Wikipedia's Archimedes as translated by GramTrans

Archimedes of Syracuse (/ˌɑːrkɪˈmiːdiːz/;[2] Greek: Ἀρχιμήδης; c. 287 - c. 212 BC) was a Greek mathematician, physicist, engineer, inventor, and astronomer.[3] Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. Generally considered the greatest mathematician of antiquity and one of the greatest of all time,[4][5] Archimedes anticipated modern calculus and analysis by applying concepts of infinitesimals and the method of exhaustion to derive and rigorously prove a range of geometrical theorems, including the area of a circle, the surface area and volume of a sphere, and the area under a parabola.[6] Other mathematical achievements include deriving an accurate approximation of pi, defining and investigating the spiral bearing his name, and creating a system using exponentiation for expressing very large numbers. He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, including an explanation of the principle of the lever. 
He is credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion. Archimedes died during the Siege of Syracuse when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting the tomb of Archimedes, which was surmounted by a sphere and a cylinder, which Archimedes had requested to be placed on his tomb, representing his mathematical discoveries. Unlike his inventions, the mathematical writings of Archimedes were little known in antiquity. Mathematicians from Alexandria read and quoted him, but the first comprehensive compilation was not made until c. 530 AD by Isidore of Miletus in Byzantine Constantinople, while commentaries on the works of Archimedes written by Eutocius in the sixth century AD opened them to wider readership for the first time. The relatively few copies of Archimedes' written work that survived through the Middle Ages were an influential source of ideas for scientists during the Renaissance,[7] while the discovery in 1906 of previously unknown works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results.[8] 2 Discoveries and inventions 2.1 Archimedes' principle 2.2 Archimedes' screw 2.3 Claw of Archimedes 2.4 Heat ray 2.5 Other discoveries and inventions 3 Mathematics 4 Writings 4.1 Surviving works 4.2 Apocryphal works 5 Archimedes Palimpsest 10.1 The Works of Archimedes online 1 Verkoj de Arkimedo 1.1 Konservitaj verkoj 1.2 Apokrifaj verkoj 2 La palimpsesto de Arkimedo 5 Fonto Archimedes was born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia, located along the coast of Southern Italy. 
The date of birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years.[9] In The Sand Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing is known. Plutarch wrote in his Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse.[10] A biography of Archimedes was written by his friend Heracleides but this work has been lost, leaving the details of his life obscure.[11] It is unknown, for instance, whether he ever married or had children. During his youth, Archimedes may have studied in Alexandria, Egypt, where Conon of Samos and Eratosthenes of Cyrene were contemporaries. He referred to Conon of Samos as his friend, while two of his works (The Method of Mechanical Theorems and the Cattle Problem) have introductions addressed to Eratosthenes.[a]

[Figure: The Death of Archimedes (1815) by Thomas Degeorge[12]]

Archimedes died c. 212 BC during the Second Punic War, when Roman forces under General Marcus Claudius Marcellus captured the city of Syracuse after a two-year-long siege. According to the popular account given by Plutarch, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet General Marcellus but he declined, saying that he had to finish working on the problem. The soldier was enraged by this, and killed Archimedes with his sword. Plutarch also gives a lesser-known account of the death of Archimedes which suggests that he may have been killed while attempting to surrender to a Roman soldier. According to this story, Archimedes was carrying mathematical instruments, and was killed because the soldier thought that they were valuable items. 
General Marcellus was reportedly angered by the death of Archimedes, as he considered him a valuable scientific asset and had ordered that he not be harmed.[13] Marcellus called Archimedes "a geometrical Briareus".[14] The last words attributed to Archimedes are "Do not disturb my circles", a reference to the circles in the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. This quote is often given in Latin as "Noli turbare circulos meos," but there is no reliable evidence that Archimedes uttered these words and they do not appear in the account given by Plutarch. Valerius Maximus, writing in Memorable Doings and Sayings in the 1st century AD, gives the phrase as "...sed protecto manibus puluere 'noli' inquit, 'obsecro, istum disturbare'" - "... but protecting the dust with his hands, said 'I beg of you, do not disturb this.'" The phrase is also given in Katharevousa Greek as "μὴ μου τοὺς κύκλους τάραττε!" (Mē mou tous kuklous taratte!).[13]

[Figure: Cicero Discovering the Tomb of Archimedes (1805) by Benjamin West]

The tomb of Archimedes carried a sculpture illustrating his favorite mathematical proof, consisting of a sphere and a cylinder of the same height and diameter. Archimedes had proven that the volume and surface area of the sphere are two thirds that of the cylinder including its bases. In 75 BC, 137 years after his death, the Roman orator Cicero was serving as quaestor in Sicily. He had heard stories about the tomb of Archimedes, but none of the locals were able to give him the location. Eventually he found the tomb near the Agrigentine gate in Syracuse, in a neglected condition and overgrown with bushes. 
Cicero had the tomb cleaned up, and was able to see the carving and read some of the verses that had been added as an inscription.[15] A tomb discovered in the courtyard of the Hotel Panorama in Syracuse in the early 1960s was claimed to be that of Archimedes, but there was no compelling evidence for this and the location of his tomb today is unknown.[16] The standard versions of the life of Archimedes were written long after his death by the historians of Ancient Rome. The account of the siege of Syracuse given by Polybius in his Universal History was written around seventy years after Archimedes' death, and was used subsequently as a source by Plutarch and Livy. It sheds little light on Archimedes as a person, and focuses on the war machines that he is said to have built in order to defend the city.[17]

Discoveries and inventions

Archimedes' principle

Main article: Archimedes' principle

[Figure: By placing a metal bar in a container with water on a scale, the bar displaces as much water as its own volume, increasing its mass and weighing down the scale.]

The most widely known anecdote about Archimedes tells of how he invented a method for determining the volume of an object with an irregular shape. According to Vitruvius, a votive crown for a temple had been made for King Hiero II of Syracuse, who had supplied the pure gold to be used, and Archimedes was asked to determine whether some silver had been substituted by the dishonest goldsmith.[18] Archimedes had to solve the problem without damaging the crown, so he could not melt it down into a regularly shaped body in order to calculate its density. While taking a bath, he noticed that the level of the water in the tub rose as he got in, and realized that this effect could be used to determine the volume of the crown. For practical purposes water is incompressible,[19] so the submerged crown would displace an amount of water equal to its own volume. 
By dividing the mass of the crown by the volume of water displaced, the density of the crown could be obtained. This density would be lower than that of gold if cheaper and less dense metals had been added. Archimedes then took to the streets naked, so excited by his discovery that he had forgotten to dress, crying "Eureka!" (Greek: "εὕρηκα, heúrēka!", meaning "I have found [it]!").[18] The test was conducted successfully, proving that silver had indeed been mixed in.[20] The story of the golden crown does not appear in the known works of Archimedes. Moreover, the practicality of the method it describes has been called into question, due to the extreme accuracy with which one would have to measure the water displacement.[21] Archimedes may have instead sought a solution that applied the principle known in hydrostatics as Archimedes' principle, which he describes in his treatise On Floating Bodies. This principle states that a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces.[22] Using this principle, it would have been possible to compare the density of the crown to that of pure gold by balancing the crown on a scale with a pure gold reference sample of the same weight, then immersing the apparatus in water. The difference in density between the two samples would cause the scale to tip accordingly. 
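A rough numerical sketch of the two approaches described above. The densities are standard reference values (g/cm³) and the 30% silver admixture is a hypothetical example; neither figure is taken from the text.

```python
RHO_GOLD = 19.3     # density of gold, g/cm^3 (reference value)
RHO_SILVER = 10.5   # density of silver, g/cm^3 (reference value)

mass = 1000.0                    # crown mass in grams
m_gold, m_silver = 700.0, 300.0  # hypothetical adulterated crown

# Volume displaced by an all-gold reference of the same mass vs. the crown
v_pure_gold = mass / RHO_GOLD
v_crown = m_gold / RHO_GOLD + m_silver / RHO_SILVER

# Vitruvius' account: the adulterated crown displaces visibly more water.
extra_displacement = v_crown - v_pure_gold  # ~13 cm^3

# Balance method: in water the crown loses more weight than the gold
# reference (upthrust = displaced volume x water density), tipping the scale.
buoyancy_difference = extra_displacement * 1.0  # grams-force, water at 1 g/cm^3
print(round(extra_displacement, 1))
```

The roughly 13 cm³ difference also shows why the direct displacement measurement is delicate: it is small compared with the crown's total volume of about 65 cm³.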
Galileo considered it "probable that this method is the same that Archimedes followed, since, besides being very accurate, it is based on demonstrations found by Archimedes himself."[23] In a 12th-century text titled Mappae clavicula there are instructions on how to perform the weighings in the water in order to calculate the percentage of silver used, and thus solve the problem.[24][25] The Latin poem Carmen de ponderibus et mensuris of the 4th or 5th century describes the use of a hydrostatic balance to solve the problem of the crown, and attributes the method to Archimedes.[24]

Archimedes' screw

Main article: Archimedes' screw

[Figure: The Archimedes' screw can raise water efficiently.]

A large part of Archimedes' work in engineering arose from fulfilling the needs of his home city of Syracuse. The Greek writer Athenaeus of Naucratis described how King Hiero II commissioned Archimedes to design a huge ship, the Syracusia, which could be used for luxury travel, carrying supplies, and as a naval warship. The Syracusia is said to have been the largest ship built in classical antiquity.[26] According to Athenaeus, it was capable of carrying 600 people and included garden decorations, a gymnasium and a temple dedicated to the goddess Aphrodite among its facilities. Since a ship of this size would leak a considerable amount of water through the hull, the Archimedes' screw was purportedly developed in order to remove the bilge water. Archimedes' machine was a device with a revolving screw-shaped blade inside a cylinder. It was turned by hand, and could also be used to transfer water from a low-lying body of water into irrigation canals. The Archimedes' screw is still in use today for pumping liquids and granulated solids such as coal and grain. 
The Archimedes' screw described in Roman times by Vitruvius may have been an improvement on a screw pump that was used to irrigate the Hanging Gardens of Babylon.[27][28][29] The world's first seagoing steamship with a screw propeller was the SS Archimedes, which was launched in 1839 and named in honor of Archimedes and his work on the screw.[30]

Claw of Archimedes

The Claw of Archimedes is a weapon that he is said to have designed in order to defend the city of Syracuse. Also known as "the ship shaker," the claw consisted of a crane-like arm from which a large metal grappling hook was suspended. When the claw was dropped onto an attacking ship the arm would swing upwards, lifting the ship out of the water and possibly sinking it. There have been modern experiments to test the feasibility of the claw, and in 2005 a television documentary entitled Superweapons of the Ancient World built a version of the claw and concluded that it was a workable device.[31][32]

Heat ray

[Figure: Artistic interpretation of Archimedes' mirror used to burn Roman ships. Painting by Giulio Parigi.]

Archimedes may have used mirrors acting collectively as a parabolic reflector to burn ships attacking Syracuse. The 2nd century AD author Lucian wrote that during the Siege of Syracuse (c. 214-212 BC), Archimedes destroyed enemy ships with fire. Centuries later, Anthemius of Tralles mentions burning-glasses as Archimedes' weapon.[33] The device, sometimes called the "Archimedes heat ray", was used to focus sunlight onto approaching ships, causing them to catch fire. In the modern era, similar devices have been constructed and may be referred to as a heliostat or solar furnace.[34] This purported weapon has been the subject of ongoing debate about its credibility since the Renaissance. 
René Descartes rejected it as false, while modern researchers have attempted to recreate the effect using only the means that would have been available to Archimedes.[35] It has been suggested that a large array of highly polished bronze or copper shields acting as mirrors could have been employed to focus sunlight onto a ship. A test of the Archimedes heat ray was carried out in 1973 by the Greek scientist Ioannis Sakkas. The experiment took place at the Skaramagas naval base outside Athens. On this occasion 70 mirrors were used, each with a copper coating and a size of around five by three feet (1.5 by 1 m). The mirrors were pointed at a plywood mock-up of a Roman warship at a distance of around 160 feet (50 m). When the mirrors were focused accurately, the ship burst into flames within a few seconds. The plywood ship had a coating of tar paint, which may have aided combustion.[36] A coating of tar would have been commonplace on ships in the classical era.[d] In October 2005 a group of students from the Massachusetts Institute of Technology carried out an experiment with 127 one-foot (30 cm) square mirror tiles, focused on a mock-up wooden ship at a range of around 100 feet (30 m). Flames broke out on a patch of the ship, but only after the sky had been cloudless and the ship had remained stationary for around ten minutes. It was concluded that the device was a feasible weapon under these conditions. The MIT group repeated the experiment for the television show MythBusters, using a wooden fishing boat in San Francisco as the target. Again some charring occurred, along with a small amount of flame. In order to catch fire, wood needs to reach its autoignition temperature, which is around 300 °C (570 °F).[37][38] When MythBusters broadcast the result of the San Francisco experiment in January 2006, the claim was placed in the category of "busted" (or failed) because of the length of time and the ideal weather conditions required for combustion to occur. 
It was also pointed out that since Syracuse faces the sea towards the east, the Roman fleet would have had to attack during the morning for optimal gathering of light by the mirrors. MythBusters also pointed out that conventional weaponry, such as flaming arrows or bolts from a catapult, would have been a far easier way of setting a ship on fire at short distances.[39] In December 2010, MythBusters again looked at the heat ray story in a special edition entitled "President's Challenge". Several experiments were carried out, including a large scale test with 500 schoolchildren aiming mirrors at a mock-up of a Roman sailing ship 400 feet (120 m) away. In all of the experiments, the sail failed to reach the 210 °C (410 °F) required to catch fire, and the verdict was again "busted". The show concluded that a more likely effect of the mirrors would have been blinding, dazzling, or distracting the crew of the ship.[40]

Other discoveries and inventions

While Archimedes did not invent the lever, he gave an explanation of the principle involved in his work On the Equilibrium of Planes. Earlier descriptions of the lever are found in the Peripatetic school of the followers of Aristotle, and are sometimes attributed to Archytas.[41][42] According to Pappus of Alexandria, Archimedes' work on levers caused him to remark: "Give me a place to stand on, and I will move the Earth." (Greek: δῶς μοι πᾶ στῶ καὶ τὰν γᾶν κινάσω)[43] Plutarch describes how Archimedes designed block-and-tackle pulley systems, allowing sailors to use the principle of leverage to lift objects that would otherwise have been too heavy to move.[44] Archimedes has also been credited with improving the power and accuracy of the catapult, and with inventing the odometer during the First Punic War. 
The odometer was described as a cart with a gear mechanism that dropped a ball into a container after each mile traveled.[45] Cicero (106-43 BC) mentions Archimedes briefly in his dialogue De re publica, which portrays a fictional conversation taking place in 129 BC. After the capture of Syracuse c. 212 BC, General Marcus Claudius Marcellus is said to have taken back to Rome two mechanisms, constructed by Archimedes and used as aids in astronomy, which showed the motion of the Sun, Moon and five planets. Cicero mentions similar mechanisms designed by Thales of Miletus and Eudoxus of Cnidus. The dialogue says that Marcellus kept one of the devices as his only personal loot from Syracuse, and donated the other to the Temple of Virtue in Rome. Marcellus' mechanism was demonstrated, according to Cicero, by Gaius Sulpicius Gallus to Lucius Furius Philus, who described it thus: Hanc sphaeram Gallus cum moveret, fiebat ut soli luna totidem conversionibus in aere illo quot diebus in ipso caelo succederet, ex quo et in caelo sphaera solis fieret eadem illa defectio, et incideret luna tum in eam metam quae esset umbra terrae, cum sol e regione. When Gallus moved the globe, it happened that the Moon followed the Sun by as many turns on that bronze contrivance as in the sky itself, from which also in the sky the Sun's globe became to have that same eclipse, and the Moon came then to that position which was its shadow on the Earth, when the Sun was in line.[46][47] This is a description of a planetarium or orrery. Pappus of Alexandria stated that Archimedes had written a manuscript (now lost) on the construction of these mechanisms entitled On Sphere-Making. Modern research in this area has been focused on the Antikythera mechanism, another device built c. 
100 BC that was probably designed for the same purpose.[48] Constructing mechanisms of this kind would have required a sophisticated knowledge of differential gearing.[49] This was once thought to have been beyond the range of the technology available in ancient times, but the discovery of the Antikythera mechanism in 1902 has confirmed that devices of this kind were known to the ancient Greeks.[50][51]

Mathematics

[Figure: Archimedes used Pythagoras' Theorem to calculate the side of the 12-gon from that of the hexagon and for each subsequent doubling of the sides of the regular polygon.]

While he is often regarded as a designer of mechanical devices, Archimedes also made contributions to the field of mathematics. Plutarch wrote: "He placed his whole affection and ambition in those purer speculations where there can be no reference to the vulgar needs of life."[52] Archimedes was able to use infinitesimals in a way that is similar to modern integral calculus. Through proof by contradiction (reductio ad absurdum), he could give answers to problems to an arbitrary degree of accuracy, while specifying the limits within which the answer lay. This technique is known as the method of exhaustion, and he employed it to approximate the value of π. In Measurement of a Circle he did this by drawing a larger regular hexagon outside a circle and a smaller regular hexagon inside the circle, and progressively doubling the number of sides of each regular polygon, calculating the length of a side of each polygon at each step. As the number of sides increases, it becomes a more accurate approximation of a circle. After four such steps, when the polygons had 96 sides each, he was able to determine that the value of π lay between 3 1/7 (approximately 3.1429) and 3 10/71 (approximately 3.1408), consistent with its actual value of approximately 3.1416.[53] He also proved that the area of a circle was equal to π multiplied by the square of the radius of the circle (πr²). 
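The doubling procedure described above can be sketched with the classical semiperimeter recurrence for polygons inscribed in and circumscribed about a unit circle. This is a modern reconstruction: Archimedes bounded the side lengths arithmetically rather than running such a recurrence.

```python
import math

def archimedes_pi(doublings=4):
    """Bound pi between the semiperimeters of circumscribed and inscribed polygons."""
    a = 2 * math.sqrt(3)  # circumscribed hexagon (upper bound): 6 * tan(30 deg)
    b = 3.0               # inscribed hexagon (lower bound): 6 * sin(30 deg)
    for _ in range(doublings):
        a = 2 * a * b / (a + b)  # harmonic mean: circumscribed 2n-gon
        b = math.sqrt(a * b)     # geometric mean: inscribed 2n-gon
    return b, a

lo, hi = archimedes_pi()  # four doublings: 96-sided polygons
print(lo, hi)             # ~3.14103 < pi < ~3.14271
```

The computed 96-gon bounds lie inside Archimedes' rounded bounds 3 10/71 and 3 1/7, since he rounded each bound outward to convenient fractions.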
In On the Sphere and Cylinder, Archimedes postulates that any magnitude when added to itself enough times will exceed any given magnitude. This is the Archimedean property of real numbers.[54]

[Figure: As proven by Archimedes, the area of the parabolic segment in the upper figure is equal to 4/3 that of the inscribed triangle in the lower figure.]

In Measurement of a Circle, Archimedes gives the value of the square root of 3 as lying between 265/153 (approximately 1.7320261) and 1351/780 (approximately 1.7320512). The actual value is approximately 1.7320508, making this a very accurate estimate. He introduced this result without offering any explanation of how he had obtained it. This aspect of the work of Archimedes caused John Wallis to remark that he was: "as it were of set purpose to have covered up the traces of his investigation as if he had grudged posterity the secret of his method of inquiry while he wished to extort from them assent to his results."[55] It is possible that he used an iterative procedure to calculate these values.[56]

In The Quadrature of the Parabola, Archimedes proved that the area enclosed by a parabola and a straight line is 4/3 times the area of a corresponding inscribed triangle as shown in the figure at right. He expressed the solution to the problem as an infinite geometric series with the common ratio 1/4:

∑_{n=0}^{∞} 4^(−n) = 1 + 4^(−1) + 4^(−2) + 4^(−3) + ⋯ = 4/3.

If the first term in this series is the area of the triangle, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines, and so on. This proof uses a variation of the series 1/4 + 1/16 + 1/64 + 1/256 + ⋯ which sums to 1/3.

In The Sand Reckoner, Archimedes set out to calculate the number of grains of sand that the universe could contain. In doing so, he challenged the notion that the number of grains of sand was too large to be counted. 
He wrote: "There are some, King Gelo (Gelo II, son of Hiero II), who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited." To solve the problem, Archimedes devised a system of counting based on the myriad. The word is from the Greek μυριάς murias, for the number 10,000. He proposed a number system using powers of a myriad of myriads (100 million) and concluded that the number of grains of sand required to fill the universe would be 8 vigintillion, or 8×10^63.[57]

Writings

The works of Archimedes were written in Doric Greek, the dialect of ancient Syracuse.[58] The written work of Archimedes has not survived as well as that of Euclid, and seven of his treatises are known to have existed only through references made to them by other authors. Pappus of Alexandria mentions On Sphere-Making and another work on polyhedra, while Theon of Alexandria quotes a remark about refraction from the now-lost Catoptrica.[b] During his lifetime, Archimedes made his work known through correspondence with the mathematicians in Alexandria. The writings of Archimedes were first collected by the Byzantine Greek architect Isidore of Miletus (c. 530 AD), while commentaries on the works of Archimedes written by Eutocius in the sixth century AD helped to bring his work a wider audience. Archimedes' work was translated into Arabic by Thābit ibn Qurra (836-901 AD), and into Latin by Gerard of Cremona (c. 1114-1187 AD). 
During the Renaissance, the Editio Princeps (First Edition) was published in Basel in 1544 by Johann Herwagen with the works of Archimedes in Greek and Latin.[59] Around the year 1586 Galileo Galilei invented a hydrostatic balance for weighing metals in air and water after apparently being inspired by the work of Archimedes.[60]

Surviving works

On the Equilibrium of Planes (two volumes)
The first book is in fifteen propositions with seven postulates, while the second book is in ten propositions. In this work Archimedes explains the Law of the Lever, stating, "Magnitudes are in equilibrium at distances reciprocally proportional to their weights." Archimedes uses the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, parallelograms and parabolas.[61]

On the Measurement of a Circle
This is a short work consisting of three propositions. It is written in the form of a correspondence with Dositheus of Pelusium, who was a student of Conon of Samos. In Proposition II, Archimedes gives an approximation of the value of pi (π), showing that it is greater than 223/71 and less than 22/7.

On Spirals
This work of 28 propositions is also addressed to Dositheus. The treatise defines what is now called the Archimedean spiral. It is the locus of points corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity. Equivalently, in polar coordinates (r, θ) it can be described by the equation r = a + bθ with real numbers a and b. This is an early example of a mechanical curve (a curve traced by a moving point) considered by a Greek mathematician.

On the Sphere and the Cylinder (two volumes)
A sphere has 2/3 the volume and surface area of its circumscribing cylinder including its bases. A sphere and cylinder were placed on the tomb of Archimedes at his request.
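Three of the results above lend themselves to direct numerical checks. The sketch below is a modern illustration; the specific weights, distances, and spiral parameters are arbitrary values chosen here, not taken from the treatises:

```python
from fractions import Fraction
import math

# Law of the Lever: magnitudes balance when w1*d1 == w2*d2
# (distances reciprocally proportional to the weights).
w1, d1 = 6, 2      # illustrative weight and distance on one side
w2, d2 = 4, 3      # the other side
assert w1 * d1 == w2 * d2   # both moments equal 12: equilibrium

# Measurement of a Circle: 223/71 < pi < 22/7.
assert Fraction(223, 71) < Fraction(math.pi) < Fraction(22, 7)

# Archimedean spiral r = a + b*theta: successive turns are a constant
# distance 2*pi*b apart along any ray from the origin.
a, b = 0.0, 1.5    # illustrative parameters
theta = 1.0
r1 = a + b * theta
r2 = a + b * (theta + 2 * math.pi)
assert abs((r2 - r1) - 2 * math.pi * b) < 1e-12
```

The constant spacing between turns is what distinguishes the Archimedean spiral from, say, a logarithmic spiral, whose turns grow geometrically.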
In this treatise, addressed to Dositheus, Archimedes obtains the result of which he was most proud, namely the relationship between a sphere and a circumscribed cylinder of the same height and diameter. The volume is (4/3)πr³ for the sphere, and 2πr³ for the cylinder. The surface area is 4πr² for the sphere, and 6πr² for the cylinder (including its two bases), where r is the radius of the sphere and cylinder. The sphere has a volume two-thirds that of the circumscribed cylinder; similarly, the sphere has an area two-thirds that of the cylinder (including the bases).

On Conoids and Spheroids
This is a work in 32 propositions addressed to Dositheus. In this treatise Archimedes calculates the areas and volumes of sections of cones, spheres, and paraboloids.

On Floating Bodies (two volumes)
In the first part of this treatise, Archimedes spells out the law of equilibrium of fluids, and proves that water will adopt a spherical form around a center of gravity. This may have been an attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the existence of a point towards which all things fall in order to derive the spherical shape. In the second part, he calculates the equilibrium positions of sections of paraboloids. This was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base under water and the summit above water, similar to the way that icebergs float. Archimedes' principle of buoyancy is given in the work, stated as follows: Any body wholly or partially immersed in a fluid experiences an upthrust equal to, but opposite in sense to, the weight of the fluid displaced.
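The two-thirds ratios for both volume and surface area follow directly from the formulas given above, and are independent of the radius. A minimal check (the radius value is arbitrary):

```python
import math

# Sphere of radius r inscribed in a cylinder of radius r and height 2r.
r = 2.5  # any positive radius; both ratios are independent of r

sphere_volume = (4 / 3) * math.pi * r**3
cylinder_volume = math.pi * r**2 * (2 * r)                       # 2*pi*r^3

sphere_area = 4 * math.pi * r**2
cylinder_area = 2 * math.pi * r * (2 * r) + 2 * math.pi * r**2   # 6*pi*r^2, bases included

assert abs(sphere_volume / cylinder_volume - 2 / 3) < 1e-12
assert abs(sphere_area / cylinder_area - 2 / 3) < 1e-12
```

That the same factor 2/3 appears for both volume and surface area is the coincidence Archimedes prized enough to have carved on his tomb.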
The Quadrature of the Parabola
In this work of 24 propositions addressed to Dositheus, Archimedes proves by two methods that the area enclosed by a parabola and a straight line is 4/3 multiplied by the area of a triangle with equal base and height. He achieves this by summing an infinite geometric series with the common ratio 1/4.

(O)stomachion
This is a dissection puzzle similar to a Tangram, and the treatise describing it was found in more complete form in the Archimedes Palimpsest. Archimedes calculates the areas of the 14 pieces which can be assembled to form a square. Research published by Dr. Reviel Netz of Stanford University in 2003 argued that Archimedes was attempting to determine how many ways the pieces could be assembled into the shape of a square. Dr. Netz calculates that the pieces can be made into a square in 17,152 ways.[62] The number of arrangements is 536 when solutions that are equivalent by rotation and reflection have been excluded.[63] The puzzle represents an example of an early problem in combinatorics. The origin of the puzzle's name is unclear, and it has been suggested that it is taken from the Ancient Greek word for throat or gullet, stomachos (στόμαχος).[64] Ausonius refers to the puzzle as Ostomachion, a Greek compound word formed from the roots of ὀστέον (osteon, bone) and μάχη (machē, fight). The puzzle is also known as the Loculus of Archimedes or Archimedes' Box.[65]

Archimedes' cattle problem
This work was discovered by Gotthold Ephraim Lessing in a Greek manuscript consisting of a poem of 44 lines, in the Herzog August Library in Wolfenbüttel, Germany, in 1773. It is addressed to Eratosthenes and the mathematicians in Alexandria. Archimedes challenges them to count the numbers of cattle in the Herd of the Sun by solving a number of simultaneous Diophantine equations.
There is a more difficult version of the problem in which some of the answers are required to be square numbers. This version of the problem was first solved by A. Amthor[66] in 1880, and the answer is a very large number, approximately 7.760271×10^206544.[67]

The Sand Reckoner
In this treatise, Archimedes counts the number of grains of sand that will fit inside the universe. This book mentions the heliocentric theory of the solar system proposed by Aristarchus of Samos, as well as contemporary ideas about the size of the Earth and the distance between various celestial bodies. By using a system of numbers based on powers of the myriad, Archimedes concludes that the number of grains of sand required to fill the universe is 8×10^63 in modern notation. The introductory letter states that Archimedes' father was an astronomer named Phidias. The Sand Reckoner or Psammites is the only surviving work in which Archimedes discusses his views on astronomy.[68]

The Method of Mechanical Theorems
This treatise was thought lost until the discovery of the Archimedes Palimpsest in 1906. In this work Archimedes uses infinitesimals, and shows how breaking up a figure into an infinite number of infinitely small parts can be used to determine its area or volume. Archimedes may have considered this method lacking in formal rigor, so he also used the method of exhaustion to derive the results. As with The Cattle Problem, The Method of Mechanical Theorems was written in the form of a letter to Eratosthenes in Alexandria.

Apocryphal works
Archimedes' Book of Lemmas or Liber Assumptorum is a treatise with fifteen propositions on the nature of circles. The earliest known copy of the text is in Arabic. The scholars T. L. Heath and Marshall Clagett argued that it cannot have been written by Archimedes in its current form, since it quotes Archimedes, suggesting modification by another author.
The Lemmas may be based on an earlier work by Archimedes that is now lost.[69] It has also been claimed that Heron's formula for calculating the area of a triangle from the length of its sides was known to Archimedes.[c] However, the first reliable reference to the formula is given by Heron of Alexandria in the 1st century AD.[70]

Archimedes Palimpsest
The foremost document containing the work of Archimedes is the Archimedes Palimpsest. In 1906, the Danish professor Johan Ludvig Heiberg visited Constantinople and examined a 174-page goatskin parchment of prayers written in the 13th century AD. He discovered that it was a palimpsest, a document with text that had been written over an erased older work. Palimpsests were created by scraping the ink from existing works and reusing them, which was a common practice in the Middle Ages as vellum was expensive. The older works in the palimpsest were identified by scholars as 10th century AD copies of previously unknown treatises by Archimedes.[71] The parchment spent hundreds of years in a monastery library in Constantinople before being sold to a private collector in the 1920s. On October 29, 1998, it was sold at auction to an anonymous buyer for $2 million at Christie's in New York.[72] The palimpsest holds seven treatises, including the only surviving copy of On Floating Bodies in the original Greek. It is the only known source of The Method of Mechanical Theorems, referred to by Suidas and thought to have been lost forever. Stomachion was also discovered in the palimpsest, with a more complete analysis of the puzzle than had been found in previous texts.
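Heron's formula mentioned above, k = √(s(s − a)(s − b)(s − c)) with s the semiperimeter (as quoted in note [c]), is easy to check against a triangle whose area is known independently. A minimal modern illustration:

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2                      # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Check against the 3-4-5 right triangle, whose area is (3 * 4) / 2 = 6.
assert abs(heron_area(3, 4, 5) - 6.0) < 1e-12
```

The formula needs only the side lengths, no angles or heights, which is what makes it useful for surveying-style problems.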
The palimpsest is now stored at the Walters Art Museum in Baltimore, Maryland, where it has been subjected to a range of modern tests including the use of ultraviolet and x-ray light to read the overwritten text.[73]

The treatises in the Archimedes Palimpsest are:
"On the Equilibrium of Planes"
"On Spirals"
"Measurement of a Circle"
"On the Sphere and Cylinder"
"On Floating Bodies"
"The Method of Mechanical Theorems"
"Stomachion"
Speeches by the 4th century BC politician Hypereides
A commentary on Aristotle's Categories by Alexander of Aphrodisias

Galileo praised Archimedes many times, and referred to him as a "superhuman".[74] Leibniz said "He who understands Archimedes and Apollonius will admire less the achievements of the foremost men of later times."[75] There is a crater on the Moon named Archimedes (29.7° N, 4.0° W) in his honor, as well as a lunar mountain range, the Montes Archimedes (25.3° N, 4.6° W).[76] The Fields Medal for outstanding achievement in mathematics carries a portrait of Archimedes, along with a carving illustrating his proof on the sphere and the cylinder. The inscription around the head of Archimedes is a quote attributed to him which reads in Latin: "Transire suum pectus mundoque potiri" (Rise above oneself and grasp the world).[77] Archimedes has appeared on postage stamps issued by East Germany (1973), Greece (1983), Italy (1983), Nicaragua (1971), San Marino (1982), and Spain (1963).[78] The exclamation of Eureka! attributed to Archimedes is the state motto of California. In this instance the word refers to the discovery of gold near Sutter's Mill in 1848 which sparked the California Gold Rush.[79]

Arbelos, Archimedes' axiom, Archimedes number, Archimedes paradox, Archimedean solid, Archimedes' twin circles, Diocles, List of things named after Archimedes, Methods of computing square roots, Pseudo-Archimedes, Salinon, Steam cannon, Zhang Heng

a.
^ In the preface to On Spirals addressed to Dositheus of Pelusium, Archimedes says that "many years have elapsed since Conon's death." Conon of Samos lived c. 280-220 BC, suggesting that Archimedes may have been an older man when writing some of his works. b. ^ The treatises by Archimedes known to exist only through references in the works of other authors are: On Sphere-Making and a work on polyhedra mentioned by Pappus of Alexandria; Catoptrica, a work on optics mentioned by Theon of Alexandria; Principles, addressed to Zeuxippus and explaining the number system used in The Sand Reckoner; On Balances and Levers; On Centers of Gravity; On the Calendar. Of the surviving works by Archimedes, T. L. Heath offers the following suggestion as to the order in which they were written: On the Equilibrium of Planes I, The Quadrature of the Parabola, On the Equilibrium of Planes II, On the Sphere and the Cylinder I, II, On Spirals, On Conoids and Spheroids, On Floating Bodies I, II, On the Measurement of a Circle, The Sand Reckoner. c. ^ Boyer, Carl Benjamin A History of Mathematics (1991) ISBN 0-471-54397-7 "Arabic scholars inform us that the familiar area formula for a triangle in terms of its three sides, usually known as Heron's formula - k = √(s(s − a)(s − b)(s − c)), where s is the semiperimeter - was known to Archimedes several centuries before Heron lived. Arabic scholars also attribute to Archimedes the 'theorem on the broken chord' ... Archimedes is reported by the Arabs to have given several proofs of the theorem." d. ^ "It was usual to smear the seams or even the whole hull with pitch or with pitch and wax". In Νεκρικοὶ Διάλογοι (Dialogues of the Dead), Lucian refers to coating the seams of a skiff with wax, a reference to pitch (tar) or wax.[80] ^ Knorr, Wilbur R. (1978). "Archimedes and the spirals: The heuristic background". Historia Mathematica. Elsevier. 5 (1): 43-75. 
"To be sure, Pappus does twice mention the theorem on the tangent to the spiral [IV, 36, 54]. But in both instances the issue is Archimedes' inappropriate use of a "solid neusis," that is, of a construction involving the sections of solids, in the solution of a plane problem. Yet Pappus' own resolution of the difficulty [IV, 54] is by his own classification a "solid" method, as it makes use of conic sections." (page 48) ^ "Archimedes". Collins Dictionary. n.d. Retrieved 25 September 2014. ^ "Archimedes (c.287 - c.212 BC)". BBC History. Retrieved 2012-06-07. ^ Calinger, Ronald (1999). A Contextual History of Mathematics. Prentice-Hall. p. 150. ISBN 0-02-318285-7. Shortly after Euclid, compiler of the definitive textbook, came Archimedes of Syracuse (ca. 287 212 BC), the most original and profound mathematician of antiquity. ^ "Archimedes of Syracuse". The MacTutor History of Mathematics archive. January 1999. Retrieved 2008-06-09. ^ O'Connor, J.J.; Robertson, E.F. (February 1996). "A history of calculus". University of St Andrews. Archived from the original on 15 July 2007. Retrieved 2007-08-07. ^ Bursill-Hall, Piers. "Galileo, Archimedes, and Renaissance engineers". sciencelive with the University of Cambridge. Archived from the original on 2007-09-29. Retrieved 2007-08-07. ^ "Archimedes - The Palimpsest". Walters Art Museum. Archived from the original on 2007-09-28. Retrieved 2007-10-14. ^ Heath, T. L., Works of Archimedes, 1897 ^ Plutarch. "Parallel Lives Complete e-text from Gutenberg.org". Project Gutenberg. Retrieved 2007-07-23. ^ O'Connor, J.J.; Robertson, E.F. "Archimedes of Syracuse". University of St Andrews. Archived from the original on 6 February 2007. Retrieved 2007-01-02. ^ "The Death of Archimedes: Illustrations". math.nyu.edu. New York University. ^ a b Rorres, Chris. "Death of Archimedes: Sources". Courant Institute of Mathematical Sciences. Archived from the original on 10 December 2006. Retrieved 2007-01-02. ^ Mary Jaeger. 
Archimedes and the Roman Imagination, p. 113. ^ Rorres, Chris. "Tomb of Archimedes: Sources". Courant Institute of Mathematical Sciences. Archived from the original on 9 December 2006. Retrieved 2007-01-02. ^ Rorres, Chris. "Tomb of Archimedes - Illustrations". Courant Institute of Mathematical Sciences. Retrieved 2011-03-15. ^ Rorres, Chris. "Siege of Syracuse". Courant Institute of Mathematical Sciences. Archived from the original on 9 June 2007. Retrieved 2007-07-23. ^ a b Vitruvius. "De Architectura, Book IX, paragraphs 9-12, text in English and Latin". University of Chicago. Retrieved 2007-08-30. ^ "Incompressibility of Water". Harvard University. Archived from the original on 17 March 2008. Retrieved 2008-02-27. ^ HyperPhysics. "Buoyancy". Georgia State University. Archived from the original on 14 July 2007. Retrieved 2007-07-23. ^ Rorres, Chris. "The Golden Crown". Drexel University. Archived from the original on 11 March 2009. Retrieved 2009-03-24. ^ Carroll, Bradley W. "Archimedes' Principle". Weber State University. Archived from the original on 8 August 2007. Retrieved 2007-07-23. ^ Rorres, Chris. "The Golden Crown: Galileo's Balance". Drexel University. Archived from the original on 24 February 2009. Retrieved 2009-03-24. ^ a b O. A. W. Dilke. Gnomon. 62. Bd., H. 8 (1990), pp. 697-699 Published by: Verlag C.H.Beck ^ Marcel Berthelot - Sur l histoire de la balance hydrostatique et de quelques autres appareils et procédés scientifiques, Annales de Chimie et de Physique [série 6], 23 / 1891, pp. 475-485 ^ Casson, Lionel (1971). Ships and Seamanship in the Ancient World. Princeton University Press. ISBN 0-691-03536-9. ^ Dalley, Stephanie; Oleson, John Peter. "Sennacherib, Archimedes, and the Water Screw: The Context of Invention in the Ancient World". Technology and Culture Volume 44, Number 1, January 2003 (PDF). Retrieved 2007-07-23. ^ Rorres, Chris. "Archimedes' screw - Optimal Design". Courant Institute of Mathematical Sciences. Retrieved 2007-07-23. 
^ An animation of an Archimedes' screw ^ "SS Archimedes". wrecksite.eu. Retrieved 2011-01-22. ^ Rorres, Chris. "Archimedes' Claw - Illustrations and Animations - a range of possible designs for the claw". Courant Institute of Mathematical Sciences. Retrieved 2007-07-23. ^ Carroll, Bradley W. "Archimedes' Claw - watch an animation". Weber State University. Archived from the original on 13 August 2007. Retrieved 2007-08-12. ^ Hippias, 2 (cf. Galen, On temperaments 3.2, who mentions pyreia, "torches"); Anthemius of Tralles, On miraculous engines 153 [Westerman]. ^ "World's Largest Solar Furnace". Atlas Obscura. Retrieved November 6, 2016. ^ John Wesley. "A Compendium of Natural Philosophy (1810) Chapter XII, Burning Glasses". Online text at Wesley Center for Applied Theology. Archived from the original on 2007-10-12. Retrieved 2007-09-14. ^ "Archimedes' Weapon". Time Magazine. November 26, 1973. Retrieved 2007-08-12. ^ Bonsor, Kevin. "How Wildfires Work". HowStuffWorks. Archived from the original on 14 July 2007. Retrieved 2007-07-23. ^ Fuels and Chemicals - Auto Ignition Temperatures ^ "Archimedes Death Ray: Testing with MythBusters". MIT. Retrieved 2007-07-23. ^ "TV Review: MythBusters 8.27 - President's Challenge". Retrieved 2010-12-18. ^ Rorres, Chris. "The Law of the Lever According to Archimedes". Courant Institute of Mathematical Sciences. Retrieved 2010-03-20. ^ Clagett, Marshall (2001). Greek Science in Antiquity. Dover Publications. ISBN 978-0-486-41973-2. Retrieved 2010-03-20. ^ Quoted by Pappus of Alexandria in Synagoge, Book VIII ^ Dougherty, F. C.; Macari, J.; Okamoto, C. "Pulleys". Society of Women Engineers. Archived from the original on 18 July 2007. Retrieved 2007-07-23. ^ "Ancient Greek Scientists: Hero of Alexandria". Technology Museum of Thessaloniki. Archived from the original on 5 September 2007. Retrieved 2007-09-14. ^ Cicero. "De re publica 1.xiv §21". thelatinlibrary.com. Retrieved 2007-07-23. ^ Cicero. 
"De re publica Complete e-text in English from Gutenberg.org". Project Gutenberg. Retrieved 2007-09-18. ^ Noble Wilford, John (July 31, 2008). "Discovering How Greeks Computed in 100 B.C". The New York Times. Retrieved 2013-12-25. ^ "The Antikythera Mechanism II". Stony Brook University. Retrieved 2013-12-25. ^ Rorres, Chris. "Spheres and Planetaria". Courant Institute of Mathematical Sciences. Retrieved 2007-07-23. ^ "Ancient Moon 'computer' revisited". BBC News. November 29, 2006. Retrieved 2007-07-23. ^ Plutarch. "Extract from Parallel Lives". fulltextarchive.com. Retrieved 2009-08-10. ^ Heath, T.L. "Archimedes on measuring the circle". math.ubc.ca. Retrieved 2012-10-30. ^ Kaye, R.W. "Archimedean ordered fields". web.mat.bham.ac.uk. Retrieved 2009-11-07. ^ Quoted in Heath, T. L. Works of Archimedes, Dover Publications, ISBN 0-486-42084-1. ^ McKeeman, Bill. "The Computation of Pi by Archimedes". Matlab Central. Retrieved 2012-10-30. ^ Carroll, Bradley W. "The Sand Reckoner". Weber State University. Archived from the original on 13 August 2007. Retrieved 2007-07-23. ^ Encyclopedia of ancient Greece By Wilson, Nigel Guy p. 77 ISBN 0-7945-0225-3 (2006) ^ "Editions of Archimedes' Work". Brown University Library. Archived from the original on 8 August 2007. Retrieved 2007-07-23. ^ Van Helden, Al. "The Galileo Project: Hydrostatic Balance". Rice University. Archived from the original on 5 September 2007. Retrieved 2007-09-14. ^ Heath, T.L. "The Works of Archimedes (1897). The unabridged work in PDF form (19 MB)". Archive.org. Archived from the original on 6 October 2007. Retrieved 2007-10-14. ^ Kolata, Gina (December 14, 2003). "In Archimedes' Puzzle, a New Eureka Moment". The New York Times. Retrieved 2007-07-23. ^ Ed Pegg Jr. (November 17, 2003). "The Loculus of Archimedes, Solved". Mathematical Association of America. Archived from the original on 19 May 2008. Retrieved 2008-05-18. ^ Rorres, Chris. "Archimedes' Stomachion". 
Courant Institute of Mathematical Sciences. Archived from the original on 26 October 2007. Retrieved 2007-09-14. ^ "Graeco Roman Puzzles". Gianni A. Sarcone and Marie J. Waeber. Archived from the original on 14 May 2008. Retrieved 2008-05-09. ^ Krumbiegel, B. and Amthor, A. Das Problema Bovinum des Archimedes, Historisch-literarische Abteilung der Zeitschrift für Mathematik und Physik 25 (1880) pp. 121-136, 153-171. ^ Calkins, Keith G. "Archimedes' Problema Bovinum". Andrews University. Archived from the original on 2007-10-12. Retrieved 2007-09-14. ^ "English translation of The Sand Reckoner". University of Waterloo. Archived from the original on 11 August 2007. Retrieved 2007-07-23. ^ "Archimedes' Book of Lemmas". cut-the-knot. Archived from the original on 11 July 2007. Retrieved 2007-08-07. ^ O'Connor, J.J.; Robertson, E.F. (April 1999). "Heron of Alexandria". University of St Andrews. Retrieved 2010-02-17. ^ Miller, Mary K. (March 2007). "Reading Between the Lines". Smithsonian Magazine. Retrieved 2008-01-24. ^ "Rare work by Archimedes sells for $2 million". CNN. October 29, 1998. Archived from the original on May 16, 2008. Retrieved 2008-01-15. ^ "X-rays reveal Archimedes' secrets". BBC News. August 2, 2006. Archived from the original on 25 August 2007. Retrieved 2007-07-23. ^ Michael Matthews. Time for Science Education: How Teaching the History and Philosophy of Pendulum Motion Can Contribute to Science Literacy, p. 96. ^ Carl B. Boyer, Uta C. Merzbach. A History of Mathematics, chapter 7. ^ Friedlander, Jay; Williams, Dave. "Oblique view of Archimedes crater on the Moon". NASA. Archived from the original on 19 August 2007. Retrieved 2007-09-13. ^ "Fields Medal". International Mathematical Union. Archived from the original on July 1, 2007. Retrieved 2007-07-23. ^ Rorres, Chris. "Stamps of Archimedes". Courant Institute of Mathematical Sciences. Retrieved 2007-08-25. ^ "California Symbols". California State Capitol Museum. 
Archived from the original on 12 October 2007. Retrieved 2007-09-14. ^ Casson, Lionel (1995). Ships and seamanship in the ancient world. Baltimore: The Johns Hopkins University Press. pp. 211-212. ISBN 978-0-8018-5130-8.

Boyer, Carl Benjamin (1991). A History of Mathematics. New York: Wiley. ISBN 0-471-54397-7.
Clagett, Marshall (1964-1984). Archimedes in the Middle Ages. 5 vols. Madison, WI: University of Wisconsin Press.
Dijksterhuis, E.J. (1987). Archimedes. Princeton University Press, Princeton. ISBN 0-691-08421-1. Republished translation of the 1938 study of Archimedes and his works by an historian of science.
Gow, Mary (2005). Archimedes: Mathematical Genius of the Ancient World. Enslow Publishers, Inc. ISBN 0-7660-2502-0.
Hasan, Heather (2005). Archimedes: The Father of Mathematics. Rosen Central. ISBN 978-1-4042-0774-5.
Heath, T.L. (1897). Works of Archimedes. Dover Publications. ISBN 0-486-42084-1. Complete works of Archimedes in English.
Netz, Reviel; Noel, William (2007). The Archimedes Codex. Orion Publishing Group. ISBN 0-297-64547-1.
Pickover, Clifford A. (2008). Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press. ISBN 978-0-19-533611-5.
Simms, Dennis L. (1995). Archimedes the Engineer. Continuum International Publishing Group Ltd. ISBN 0-7201-2284-8.
Stein, Sherman (1999). Archimedes: What Did He Do Besides Cry Eureka?. Mathematical Association of America. ISBN 0-88385-718-9.

The Works of Archimedes online
Text in Classical Greek: PDF scans of Heiberg's edition of the Works of Archimedes, now in the public domain
In English translation: The Works of Archimedes, trans. T.L. Heath; supplemented by The Method of Mechanical Theorems, trans. L.G.
Robinson

Archimedes at Encyclopædia Britannica
Archimedes on In Our Time at the BBC
Works by Archimedes at Project Gutenberg
Works by or about Archimedes at Internet Archive
Archimedes at the Indiana Philosophy Ontology Project
Archimedes at PhilPapers
The Archimedes Palimpsest project at The Walters Art Museum in Baltimore, Maryland
"Archimedes and the Square Root of 3". MathPages.com.
"Archimedes on Spheres and Cylinders". MathPages.com.
Photograph of the Sakkas experiment in 1973
Testing the Archimedes steam cannon
Stamps of Archimedes
Archimedes Palimpsest reveals insights centuries ahead of its time
Greece Hellenistic Greece Miletus Peloponnesus Taurica Ancient Greek colonies City states Megalopolis Boeotarch Koinon Proxeny Tagus Amphictyonic League Athenian Graphē paranómōn Heliaia Apella Ephor Gerousia Harmost Synedrion Athenian military Antigonid Macedonian army Army of Macedon Cretan archers Hellenistic armies Hippeis Hetairoi Macedonian phalanx Peltast Pezhetairos Sacred Band of Thebes Sciritae Seleucid army Spartan army Toxotai Xiphos Xyston List of ancient Greeks Kings of Argos Archons of Athens Kings of Athens Kings of Commagene Diadochi Kings of Lydia Kings of Macedonia Kings of Paionia Attalid kings of Pergamon Kings of Pontus Kings of Sparta Tyrants of Syracuse Diogenes of Sinope Leucippus Alcaeus Archilochus Bacchylides Hipponax Ibycus Menander Mimnermus Panyassis Philocles Pindar Simonides Stesichorus Theognis Thucydides Timocreon Tyrtaeus Agesilaus II Agis II Alcibiades Aratus Epaminondas Milo of Croton Miltiades Pausanias Philip of Macedon Philopoemen Praxiteles Pyrrhus Themistocles By culture Ancient Greek tribes Thracian Greeks Ancient Macedonians Funeral and burial practices Pederasty Arts and science Greek Revival architecture Musical system mythological figures Twelve Olympians Dodona Athenian Treasury Long Walls Philippeion Theatre of Dionysus Tunnel of Eupalinos Aphaea Athena Nike Erechtheion Hephaestus Hera, Olympia Zeus, Olympia Proto-Greek Homeric Aeolic Arcadocypriot Locrian Pamphylian Linear A Linear B Cypriot syllabary Greek numerals Attic numerals in Epirus Stoae BNF: cb12026533n (data) ICCU: IT\ICCU\MILV5118 IATH: w66m3rzp La ĉi-suba teksto estas la originala artikolo Arkimedo el la Esperanto-Vikipedio, prenita de GramTrans 2015-04-13 05:05:02. Eblaj ŝanĝoj en la originalo estos kaptitaj per regulaj ĝisdatigoj. Arkimedo, bildo de Domenico Fetti (1620) Arkimedo aŭ Arĥimedo (helene: Ἀρχιμήδης, naskiĝis ĉ. −287, kaj mortis en −212), estis greka matematikisto, kiu vivis en Sicilio. 
Li estas konsiderinda kiel unu el grandaj matematikistoj de antikveco. Plimulto de la faktoj pri Arkimedo originas de la biografio de romia soldato Marcellus, skribita de romia biografo Plutarko. Arkimedo faris multnombrajn pruvojn uzante rigidan geometrian formalismon skizitan de Eŭklido. Li sukcesis speciale en kalkulo de areoj kaj volumenoj kaj fieris pro la malkovro de sfera volumeno, montrante, ke ĝi estas du trionoj de la plej malgranda cilindro, kiu povas enteni la sferon. Fakte oni ofte diras, ke Arkimedo povis esti inventinto aŭ fondinto de nombra kalkulado, se grekoj tiutempe konus pli da akordiĝemaj matematikaj nocioj. Laŭ enskribitaj kaj ĉirkaŭskribitaj plurlateroj en cirklo, li povis kalkuli la valoron de π inter 3+10/71 kaj 3+1/7. Arkimedo estis ankaŭ elstara inĝeniero, formulanta principon de elĵet-forto kaj koncernan leĝon. Legendo rakontas, ke malkovrante la principon de elĵet-forto (egalas al la pezo de la likvo elpuŝata per la solido) dum sinbanado, li elkuris preskaŭ tute nuda en la stratoj de Sirakuzo kriante "Eŭreka!" (mi trovis ĝin). Li estis inventinto ankaŭ de t.n. Arkimeda ŝraŭbo (helico). Kelkaj liaj geometriaj pruvoj estis motivitaj laŭ mekanikaj argumentoj, kiuj tamen kondukis al veraj respondoj. Dum la romia sieĝo de Sirakuzo, Arkimedo defendis la urbon per la de li konstruitaj konkavaj speguloj, kiuj fokusigis sunajn radiojn al romiaj ŝipoj, tiel malhelpante ilin (tiu historio estas pridubata). Fine, kiam la sieĝo estis disrompita, romanaj soldatoj murdis Arkimedon. Liaj lastaj vortoj estis: "Ne tuŝu miajn cirklojn" - temis pri geometriaj figuroj, skizitaj sur la sablo. Verkoj de Arkimedo Arkimedo originale verkis en doria helena lingvo, la lingvo parolata en Sirakuzo. Lia verkaro ne estis tiom bone konservita kiom tiu de Eŭklido; sep el liaj traktatoj estas konataj nur el referencoj de aliaj aŭtoroj. 
Ekzemple, Pappo de Aleksandrio mencias Pri farado de sferoj kaj alian verkon pri poliedroj, kaj Teono de Aleksandrio citas komenton pri refrakto el perdita verko titolita Catoptrica. En sia vivo Arkimedo diskonigis la rezultojn de sia laboro per korespondado kun la matematikistoj de Aleksandrio. Liaj verkoj estis kolektitaj de la bizanca arĥitekto Isidoro de Mileto (ĉ. la jaro 530). kaj la komentoj pri li verkitaj de Eŭtoĥio en la 6-a jarcento diskonigis lian laboron al pli vasta publiko. La verkaro de Arkimedo estis tradukita al la araba lingvo de Thabit ibn Qurra (836–901) kaj al la latina de Gerardo de Kremono (ĉ. 1114–1187). En 1544. en renesanco, Johann Herwagen en Bazelo publikigis la editio princeps (unua eldono), kun la verkaro de Arkimedo en la helena kaj la latina[1]. Konservitaj verkoj Pri la ekvilibro de la ebenaĵoj (du volumoj; latine: De planorum equilibris) Pri la mezurado de cirklo (latine: Dimensio circuli) Pri la spiraloj (latine: De spiralibus) Pri la sfero kaj la cilindro (du volumoj; latine: De sphaera et cylindro) Pri la konoidoj kaj sferoidoj (latine: De conoidibus et sphaeroidibus) Pri flosantaj korpoj (du volumoj, latine: De corporibus natantibus) Pri la kvadratigo de parabolo (latine: De quadratura parabolae) (O)stomaĥion, pri la kunmetado de diversformaj pecoj al kvadrato La problemo de la bovaro (latine: Problema bovinum) Ĉi tiu verko estis trovita de Gotthold Ephraim Lessing en 1773 en greka manuskripto konsistanta el poemo de 44 versoj, en la Herzog-August-Biblioteko en Wolfenbüttel (Germanio). La teksto estas direktita al Eratosteno kaj la matematikistoj de Aleksandrio; en ĝi Arkimedo defias ilin kalkuli la nombron de brutoj en la brutaro de Helios per solvado de sistemo da diofantaj ekvacioj. Ekzistas pli malfacila versio de la problemo, en kiu kelkaj nombroj de la solvo devas esti kvadrataj. Tiu ĉi versio estis solvita unuafoje en 1880 de A. 
Amthor[2], and the solution, which Amthor could not write out in full, is an enormous number, approximately 7.760271×10^206544[3].

The Sand Reckoner (also known as Psammites; Latin: Arenarius)
In this treatise Archimedes computes the number of grains of sand needed to fill the universe. The book mentions the heliocentric theory of Aristarchus of Samos and contemporary ideas about the size of the Earth and the distances between various celestial bodies. Using a number system based on the myriad (10,000), Archimedes concludes that the required number of grains of sand is, in modern notation, 8×10^63. The Sand Reckoner is the only surviving work of Archimedes that deals with his view of astronomy[4].

On the Method (of Mechanical Theorems) (Latin: De methodo)

Apocryphal works
The Book of Lemmas (Latin: Liber Assumptorum) is a treatise of fifteen theorems on the nature of circles. The oldest surviving copy is in Arabic. The scholars T. L. Heath and Marshall Clagett argue that it cannot have been written by Archimedes, since he is cited in the text, which suggests that it was modified by another author. The work may be based on an older, lost work of Archimedes[5]. It has also been said that Archimedes already knew Heron's formula for computing the area of a triangle from the lengths of its sides. But the first reliable reference to that formula was given by Heron of Alexandria in the 1st century[6].

The Archimedes Palimpsest
The Archimedes Palimpsest is one of the principal sources from which his work is known. In 1906 Professor Johan Ludvig Heiberg visited Constantinople and examined a 174-page goatskin parchment with prayers written in the 13th century. He found that it was a palimpsest, a document with text written over an older text that had been scraped off. In the Middle Ages it was common, because of the high price of parchment, to scrape the ink off a document that was no longer needed in order to write another.
The oldest works found in the palimpsest were identified as 10th-century copies of treatises by Archimedes that had previously been unknown. The parchment lay for centuries in the library of a monastery in Constantinople and was sold in the 1920s to a private collector. On 29 October 1998 it was sold at auction to an anonymous buyer for two million dollars at Christie's. The palimpsest contains seven treatises, among them the only known copy of On Floating Bodies in the original Greek. It is also the only source of The Method of Mechanical Theorems, which is referred to by Suidas and was thought to have been lost forever. Stomachion was also found in the palimpsest, with a more complete analysis of the puzzle than in previously known texts. The palimpsest is kept at the Walters Art Museum in Baltimore (Maryland), where it has been subjected to modern analytical techniques, including ultraviolet light and X-rays, in order to read the overwritten texts. It contains the following treatises of Archimedes:

On the Equilibrium of Planes
On Spirals
On the Measurement of a Circle
On the Sphere and the Cylinder
On Floating Bodies
On the Method (of Mechanical Theorems)
(O)stomachion

[Figure captions: Proof that π is between 3 and 4; Approximation of π; The elements of Ostomachion]

References
↑ "Editions of Archimedes' Work". Brown University Library
↑ B. Krumbiegel, A. Amthor, Das Problema Bovinum des Archimedes, Historisch-literarische Abteilung der Zeitschrift für Mathematik und Physik 25 (1880) 121–136, 153–171.
↑ Calkins, Keith G. "Archimedes' Problema Bovinum". Andrews University.
↑ "English translation of The Sand Reckoner". University of Waterloo.
↑ Bogomolny, Alexander. "Archimedes' Book of Lemmas" (in English). Interactive Mathematics Miscellany and Puzzles.
↑ Wilson, James W. "Problem Solving with Heron's Formula". University of Georgia.
External links
http://www.mcs.drexel.edu/~crorres/Archimedes/contents.html
http://agutie.homestead.com/files/rhombicubocta.html
http://www.thewalters.org/archimedes/frame.html The Archimedes Palimpsest
http://www.mcs.drexel.edu/~crorres/Archimedes/Crown/CrownIntro.html Archimedes, The Golden Crown

This article uses a translation of text from the article Arquímedes in the Spanish Wikipedia.
\begin{document} \title[Moments of polymers]{Moments of partition functions of 2D Gaussian polymers in the weak disorder regime - I} \author{Cl\'ement Cosco and Ofer Zeitouni} \address{Cl\'{e}ment Cosco, Ceremade, Universite Paris Dauphine, Place du Mar\'{e}chal de Lattre de Tassigny, 75775 Paris Cedex 16, France} \address{Ofer Zeitouni, Department of Mathematics, Weizmann Institute of Sciences, Rehovot 76100, Israel.} \thanks{This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 692452). The first version of this work was completed while the first author was with the Weizmann Institute.} \email{[email protected], [email protected]} \begin{abstract} Let $W_N(\beta) = {\mathrm E}_0\left[e^{ \sum_{n=1}^N \beta\omega(n,S_n) - N\beta^2/2}\right]$ be the partition function of a two-dimensional directed polymer in a random environment, where $\omega(i,x), i\in \mathbb{N}, x\in \mathbb{Z}^2$ are i.i.d.\ standard normal and $\{S_n\}$ is the path of a random walk. With $\beta=\beta_N=\hat\beta \sqrt{\pi/\log N}$ and $\hat \beta\in (0,1)$ (the subcritical window), $\log W_N(\beta_N)$ is known to converge in distribution to a Gaussian law of mean $-\lambda^2/2$ and variance $\lambda^2$, with $\lambda^2=\log (1/(1-\hat\beta^2))$ (\textit{Caravenna, Sun, Zygouras, Ann.\ Appl.\ Probab.\ (2017)}). We study in this paper the moments ${\mathbb E} [W_N( \beta_N)^q]$ in the subcritical window, for $q=O(\sqrt{\log N})$. The analysis is based on ruling out triple intersections. \end{abstract} \maketitle \section{Introduction and statement of results} We consider in this paper the partition function of two dimensional directed polymers in Gaussian environment, and begin by introducing the model. Set \begin{equation} \label{eq-WN} W_N(\beta,x) = {\mathrm E}_x\left[e^{ \sum_{n=1}^N \beta\omega(n,S_n) - N\beta^2/2} \right], \quad x\in \mathbb Z^d. 
\end{equation} Here, $\{\omega_{n,x}\}_{n\in \mathbb{Z}_+, x\in \mathbb{Z}^d}$ are i.i.d. standard centered Gaussian random variables of law ${\mathbb P}$, $\{S_n\}_{n\in \mathbb{Z}_+}$ is simple random walk, and ${\mathrm E}_x$ denotes the law of simple random walk started at $x\in \mathbb{Z}^2$. Thus, $W_N(\beta,x)$ is a random variable measurable on the $\sigma$-algebra $\mathcal{G}_N:=\sigma\{ \omega(i,x): i\leq N, x\in \mathbb{Z}^d\}.$ For background, motivation and results on the rich theory surrounding this topic, we refer the reader to \cite{CStFlour}. In particular, we mention the relation with the $d$ dimensional stochastic heat equation (SHE). The random variables $W_N(\beta,0)$ form a $\mathcal{G}_N$ positive martingale, and therefore converge almost surely to a limit $W_\infty(\beta,0)$. It is well known that in dimensions $d=1,2$, for any $\beta>0$ we have $W_\infty(\beta,0)=0$, a.s., while for $d\geq 3$, there exists $\beta_c>0$ so that $W_\infty(\beta,0)>0$ a.s. for $\beta<\beta_c$ and $W_\infty(\beta,0)=0$ for $\beta> \beta_c$. We refer to these as the \textit{weak} and \textit{strong} disorder regimes, respectively. In particular, for $d=2$, which is our focus in this paper, for any $\beta>0$, we are in the strong disorder regime. A meaningful rescaling in dimension $2$ was discovered in the context of the SHE by Bertini and Cancrini \cite{Bertini98} and was later generalized by Caravenna, Sun and Zygouras \cite{CaraSuZy-universalityrelev}, in both the SHE and polymer setups, to a wider range of parameters for which a phase transition occurs. See also \cite{CaSuZy18,CaSuZyCrit21,ChDu18,Gu18KPZ2D,NaNa21}. Introduce the mean intersection local time for random walk \begin{equation} \label{eq-RNas} R_N = {\mathrm E}_0^{\otimes 2}\left[\sum_{n=1}^N \mathbf{1}_{S_n^1 = S_n^2}\right]\sim \frac{\log N}{\pi}.\end{equation} The asymptotic behavior of $R_N$ follows from the local limit theorem \cite[Sec. 1.2]{LawlerIntersections}. 
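As a quick numerical sanity check of \eqref{eq-RNas} (not used anywhere in the proofs), one can compute $R_N$ exactly by dynamic programming, using ${\mathrm P}(S_n^1=S_n^2)=\sum_x p_n(x)^2$. A minimal Python sketch, with names of our own choosing:

```python
import numpy as np

def collision_probs(N):
    """P(S_n^1 = S_n^2) = sum_x p_n(x)^2 for n = 1..N, for the 2D simple
    random walk, computed by exact dynamic programming on a grid."""
    size = 2 * N + 1
    p = np.zeros((size, size))
    p[N, N] = 1.0  # S_0 = 0
    out = []
    for n in range(1, N + 1):
        q = np.zeros_like(p)
        q[1:, :] += 0.25 * p[:-1, :]
        q[:-1, :] += 0.25 * p[1:, :]
        q[:, 1:] += 0.25 * p[:, :-1]
        q[:, :-1] += 0.25 * p[:, 1:]
        p = q
        out.append(float((p ** 2).sum()))
    return out

N = 200
R_N = sum(collision_probs(N))
print(R_N, np.log(N) / np.pi)
```

For moderate $N$ the ratio $R_N/(\log N/\pi)$ is still visibly above $1$, reflecting the $O(1)$ correction hidden in the asymptotics \eqref{eq-RNas}.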
Further, the Erd\H os-Taylor theorem \cite{ErdosTaylor} states that $\frac{\pi }{\log N}\sum_{n=1}^N \mathbf{1}_{S_n^1=S_n^2}$ converges in distribution to an exponential random variable of parameter 1. Set \begin{equation} \label{eq:defBeta_N} \beta_N = \frac{\hat{\beta}}{\sqrt {R_N}}, \quad \hat \beta\geq 0. \end{equation} We will use the shorthand notation $W_N = W_N(\beta_N,0)$. With it, see \cite{CaraSuZy-universalityrelev}, one has \begin{equation} \label{eq:pointwiseGaussianCV} \forall \hat{\beta}<1:\quad \log W_N \cvlaw \mathcal N\left(-\frac {\lambda^2} 2,\lambda^2\right), \quad \mbox{\rm with} \quad \lambda^2= \lambda^2(\hat \beta) = \log \frac 1 {1-{\hat \beta}^2}\ . \end{equation} The convergence in \eqref{eq:pointwiseGaussianCV} has recently been extended in \cite{LyZy21} to the convergence of $W_N$ to the exponential of a Gaussian, in all $L^p$. (The critical case $\hat\beta=1$, which we will not study in this paper, has received considerable attention, see \cite{Bertini98,CaSuZyCrit18,CaSuZyCrit21,GQT}.) The spatial behavior of $W_N(\beta_N,x)$ is also of interest. Indeed, one has, see \cite{CaSuZy18}, \begin{equation} \label{eq:GFFlimit} G_N(x):=\sqrt{R_N} \left(\log W_N(\beta_N,x\sqrt N) - {\mathbb E} \log W_N(\beta_N,x\sqrt N)\right) \cvlaw \sqrt{\frac {\hat {\beta}^2} {1-{\hat \beta}^2}} G(x), \end{equation} with $G(x)$ a log-correlated Gaussian field on $\mathbb R^2$. 
The convergence in \eqref{eq:GFFlimit} is in the weak sense, i.e.\ for any smooth, compactly supported function $\phi$, $\int \phi(x) G_N(x) dx$ converges to a centered Gaussian random variable of variance $\hat{\beta}^2 \sigma^2_\phi/(1-\hat{\beta}^2)$, where \begin{equation} \label{eq:lim-cov} \sigma^2_\phi=\frac{1}{2\pi}\iint \phi(x)\phi(y)\int_{|x-y|^2/2}^\infty z^{-1}e^{-z} dz\, dx\, dy.\end{equation} One recognizes $\sigma^2_\phi$ in \eqref{eq:lim-cov} as the variance of the integral of $\phi$ against the solution of the \textit{Edwards-Wilkinson} equation. For a related result in the KPZ/SHE setup, see \cite{CaSuZy18,Gu18KPZ2D,NaNa21}. Logarithmically correlated fields, and in particular their extremes and large values, have played an important recent role in the study of various models of probability theory at the critical dimension, ranging from their own study \cite{Biskup,BDZ,DRSV,RV}, random walk and Brownian motion \cite{BRZ,DPRZ}, random matrices \cite{CMN,CN,CFLW}, Liouville quantum gravity \cite{DuSh,KRV}, turbulence \cite{GRV}, and more. In particular, exponentiating Gaussian logarithmically correlated fields yields Gaussian multiplicative chaoses, with the ensuing question of convergence towards them. In the context of polymers, \eqref{eq:GFFlimit} opens the door to the study of such questions. A natural role is played by the random measure \[\mu_{N}^{\gamma}(x)=\frac{e^{\gamma G_N(x)}} {{\mathbb E} e^{\gamma G_N(x)}},\] and it is natural to ask about its convergence towards a GMC, and about extremes of $G_N(x)$ for $x$ in some compact subsets of $\mathbb R^2$. A preliminary step toward any such analysis involves evaluating exponential moments of $G_N(0)$. This is our goal in this paper. In the following, $q=q(N)$ denotes an integer $q\geq 2$ that may depend on $N$. Our main result is the following. 
\begin{theorem} \label{th:main} There exists $\hat \beta_0\leq 1$ so that if $\hat \beta<\hat\beta_0$ and \begin{equation} \label{eq:condition20nr} \limsup_{N\to\infty} \frac{3\hat \beta^2}{\left(1-\hat \beta^2\right)} \frac{1}{\log N} \binom{q}{2} < 1, \end{equation} then, \begin{equation} \label{eq:estimate} {\mathbb E}[W_N^q] \leq e^{{\binom{q}{2}\lambda^2 (1+\varepsilon_N)}}, \end{equation} where $\varepsilon_N=\varepsilon(N,\hat \beta) \searrow 0$ as $N\to\infty$. \end{theorem} The proof will show that in Theorem \ref{th:main}, $\hat \beta_0$ can be taken as $1/96$, but we do not expect this to be optimal. \begin{remark} \label{rem-1.2} With a similar method, we can also prove that the estimate \eqref{eq:estimate} holds for all $\hat \beta <1$ at the cost of choosing $q^2=o(\log N / \log \log N)$, see Section \ref{subsec-rem1.2} for details. In particular, we obtain that the partition function possesses all (fixed) moments in the region $\hat \beta <1$: \begin{equation} \label{eq:Lp-regions} \forall q\in \mathbb N,\quad \sup_{N} {\mathbb E}[W_N^q] < \infty. \end{equation} As mentioned above, \eqref{eq:Lp-regions} was independently proved in \cite{LyZy21}. (See also \cite{LyZy22} for further precision and a multivariate generalization of the Erd\H{o}s-Taylor theorem.) They also observed that together with the convergence in distribution \eqref{eq:pointwiseGaussianCV}, the estimate \eqref{eq:Lp-regions} implies that for all fixed $q\in \mathbb N$, \begin{equation} \label{eq-convergencepol} {\mathbb E}[W_N^q] e^{-\binom{q}{2} \lambda^2(\hat \beta)} \underset{N\to\infty}{\longrightarrow} 1. \end{equation} Note however that the estimate \eqref{eq:estimate} does \textit{not} yield \eqref{eq-convergencepol} when $q\to \infty$ with $N\to\infty$. \end{remark} \begin{remark} \label{rem-1.3} Theorem \ref{th:main} is of course not enough to prove convergence toward a GMC. 
For that, one would need to improve the error in the exponent from $O(q^2 \varepsilon_N)$ to $O(1)$, to obtain a complementary lower bound and, more importantly, to derive similar multi-point estimates. We hope to return to these issues in future work. \end{remark} The structure of the paper is as follows. In the next Section \ref{sec-Intersection}, we use a well-worn argument to reduce the computation of moments to certain estimates concerning the intersection of (many) random walks. After some standard preliminaries, we state there our main technical estimate, Theorem \ref{th:momentwithoutTriple}, which provides intersection estimates under the extra assumption that all intersections are in pairs, i.e.\ that no triple (or more) points exist. The rest of the section provides the proof of Theorem \ref{th:main}. Section \ref{sec-notriple} then develops the induction scheme that is used to prove Theorem \ref{th:momentwithoutTriple}. Since we assume that there are no triple (or more) intersections, we may consider particles as matched in pairs at intersection times. The induction is essentially on the number of instances in which ``matched particles'' break the match and create a different matching. Section \ref{sec-discussion} provides a discussion of our results, their limitations, and possible extensions. In particular we explain there why the constraint on $q$ in Theorem \ref{th:main} limits our ability to obtain the expected sharp upper bounds on the maximum of $\log W_N(\hat \beta_N, x\sqrt{N})$. The appendices collect several auxiliary results and a sharpening of one of our estimates, see Proposition \ref{prop:factor2ndMomentbis}. \noindent {\bf Acknowledgment} We thank Dimitris Lygkonis and Nikos Zygouras for sharing their work \cite{LyZy21} with us prior to posting, and for useful comments. We thank the referee for a careful reading of the original manuscript and for many comments that helped us greatly improve the paper. 
We are grateful to Shuta Nakajima for helpful comments on a previous version of the article. \noindent {\bf Data availability statement} No new data was generated in relation to this manuscript. \noindent {\bf Conflict of interest statement} There is no conflict of interest for either author. \section{Intersection representation, reduced moments, and proof of Theorem \ref{th:main}} \label{sec-Intersection} Throughout the rest of the paper, we let $p(n,x)=p_n(x)={\mathrm P}_0(S_n=x)$. There is a nice formula for the $q$-th moment of the partition function whose importance is apparent in previous work on directed polymers, for example in \cite{CaravennaFrancesco2019TDsr,CaSuZyCrit18}. Indeed, \begin{align*} {\mathbb E}[W_N^q] & = {\mathrm E}_0^{\otimes q} {\mathbb E} e^{\sum_{i=1}^q \sum_{n=1}^N (\beta_N \omega(n,S_n^{i}) - \beta_N^2/2)}, \end{align*} where $S^1,\dots,S^q$ are $q$ independent copies of the simple random walk and ${\mathrm E}_{X}^{\otimes q}$ denotes the expectation for the product measure started at $X = (x^1,\dots,x^q)$. (If the starting point $X$ is not specified, we assume $X=\mathbf 0$.) Since the $\omega(i,x)$ are Gaussian and the variance of $\sum_{i=1}^q \beta_N \omega(n,S_n^{i})$ is equal to $\beta_N^2 \sum_{i=1\dots q,j=1\dots q} \mathbf{1}_{S_n^i = S_n^j}$, we have the following formula for the moment in terms of intersections of $q$ independent random walks: \begin{equation} \label{eq:momentFormula} {\mathbb E}[W_N^q] = {\mathrm E}^{\otimes q}\left[ e^{\beta_N^2 \sum_{1\leq i<j\leq q} \sum_{n=1}^N \mathbf{1}_{S_n^{i} = S_n^j}}\right]. \end{equation} \subsection{No triple estimate} The key step in upper bounding the right-hand side of \eqref{eq:momentFormula} is to restrict the summation to subsets where there are no triple (or more) intersections. 
More precisely, denote by \begin{align} F_n& = \left\{\exists ({\bar \alpha},{\bar \beta},{\bar \gamma})\in \llbracket 1,q\rrbracket^3: {\bar \alpha}< {\bar \beta}<{\bar \gamma}, S_n^{\bar \alpha} = S_n^{\bar \beta} = S_n^{\bar \gamma}\right\}, \label{eq-Fn} \\ K_n&=\big\{\exists ({\bar \alpha},{\bar \beta},{\bar \gamma},{\bar \delta})\in \llbracket 1,q\rrbracket^4: {\bar \alpha}< {\bar \beta}, {\bar \gamma}<{\bar \delta},\label{eq-Kn}\\ & \hspace{2cm} \{{\bar \alpha},{\bar \beta}\}\cap \{{\bar \gamma},{\bar \delta}\}=\emptyset, S_n^{\bar \alpha} = S_n^{\bar \beta}, S_n^{\bar \gamma}=S_n^{\bar \delta}\big\}, \nonumber \end{align} and let \[G_T = \bigcap_{n\in \llbracket 1,T\rrbracket} (F_n\cup K_n)^\complement\] be the event that there is no triple (or more) intersection, i.e. that at each given time no more than a pair of particles are involved in an intersection. The following theorem is the technically involved part of this paper. Its proof will be presented in Section \ref{sec-notriple}. \begin{theorem} \label{th:momentwithoutTriple} Fix $\hat \beta\in (0,1)$. Then there exists $c=c(\hat \beta)>0$ so that if \eqref{eq:condition20nr} holds then uniformly in $T\in \llbracket 1,N\rrbracket$ as $N\to\infty$, \begin{equation} \label{eq:momentToT} \sup_{X\in (\mathbb Z^2)^q}{\mathrm E}_X^{\otimes q} \left[e^{\beta_N^2 \sum_{n=1}^T \sum_{1\leq i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j}}\mathbf{1}_{G_T}\right] \leq e^{\lambda_{T,N}^2 \binom{q}{2} (1+cq^{-1/2} + o(1)) }, \end{equation} where $\lambda_{T,N}$ is defined as \begin{equation} \label{eq:lambdaT} \lambda_{T,N}^2(\hat \beta) = \lambda_{T,N}^2= \log \frac 1 {1-{\hat \beta}^2\frac{\log T}{\log N}}\ . \end{equation} \end{theorem} Note that as soon as $q>9$, the expression in the left side of \eqref{eq:momentToT} trivially vanishes if $X={\bf 0}$. The $X$'s of interest are those that allow for non-existence of triple or more intersections. 
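The Gaussian computation behind \eqref{eq:momentFormula} is elementary but lends itself to a brute-force sanity check: for $q=2$ and very small $N$, one can integrate the environment exactly, site by site, and compare with the intersection formula. A Python sketch (all names are ours):

```python
import itertools, math

beta, N = 0.7, 3
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def path(sigma):
    """positions S_1..S_N of the walk with step sequence sigma, S_0 = 0"""
    x = y = 0
    out = []
    for (dx, dy) in sigma:
        x += dx; y += dy
        out.append((x, y))
    return out

lhs = 0.0  # E[W_N^2], by exact integration of the Gaussian environment
rhs = 0.0  # the q = 2 case of the intersection formula
for s1 in itertools.product(steps, repeat=N):
    p1 = path(s1)
    for s2 in itertools.product(steps, repeat=N):
        p2 = path(s2)
        # multiplicity of each space-time site along the two paths
        mult = {}
        for site in [(n, z) for n, z in enumerate(p1)] + \
                    [(n, z) for n, z in enumerate(p2)]:
            mult[site] = mult.get(site, 0) + 1
        # E[exp(beta * sum_site m * omega)] = exp(beta^2/2 * sum_site m^2)
        lhs += 4.0 ** (-2 * N) * math.exp(
            0.5 * beta ** 2 * sum(m * m for m in mult.values())
            - N * beta ** 2)
        rhs += 4.0 ** (-2 * N) * math.exp(
            beta ** 2 * sum(a == b for a, b in zip(p1, p2)))
print(lhs, rhs)
```

Both sides agree to floating-point accuracy, since for a fixed pair of paths the exponent $\frac{\beta^2}{2}\sum m^2 - N\beta^2$ equals $\beta^2 \#\{n\leq N: S_n^1=S_n^2\}$, where $m$ denotes the multiplicity of a space-time site.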
Assuming Theorem \ref{th:momentwithoutTriple}, the proof of Theorem \ref{th:main} is relatively straightforward. We will need the preliminary results collected in the next subsection. \subsection{A short time a priori estimate} The following lemma is a variation on Khas'minskii's lemma \cite[p.8, Lemma 2.1]{Sznitman}. \begin{lemma}\label{lem:modKhas} Let $\mathcal Z$ be the set of all nearest-neighbor walks on $\mathbb Z^2$, that is $Z\in\mathcal Z$ if $Z=(Z_i)_{i\in \mathbb N}$ where $Z_i\in\mathbb Z^2$ and $Z_{i+1}-Z_i \in \{\pm \mathbf e_j,j\leq d\}$ where $\mathbf e_j$ are the canonical vectors of $\mathbb Z^2$. If for some $k\in \mathbb N$ and $\kappa\in \mathbb R$, one has \begin{equation} \label{eq:condition_modKhas} \eta = \sup_{Z\in \mathcal Z} \sup_{x\in \mathbb Z^2} \left(e^{\kappa^2}-1 \right) {\mathrm E}_x\left[\sum_{n=1}^k \mathbf{1}_{S_n = Z_n}\right] <1, \end{equation} then \begin{equation} \sup_{Z\in \mathcal Z} \sup_{x\in \mathbb Z^2} {\mathrm E}_{x} \left[e^{\kappa^2 \sum_{n=0}^k \mathbf{1}_{S_n=Z_n}} \right] \leq \frac{1}{1-\eta}. \end{equation} \end{lemma} \begin{proof} Let $\Lambda_2 = e^{\kappa^2}-1$. We have: \begin{align*} &{\mathrm E}_{x} \left[e^{\kappa^2 \sum_{n=1}^k \mathbf{1}_{S_n=Z_n}} \right] = {\mathrm E}_{x} \left[\prod_{n=1}^k \big(1+ \Lambda_2\mathbf{1}_{S_n=Z_n}\big)\right]\\ & = \sum_{p=0}^\infty \Lambda_2^p \sum_{1\leq n_1 < \dots< n_p \leq k}{\mathrm E}_{x} \left[ \prod_{i=1}^p \mathbf{1}_{S_{n_i}=Z_{n_i}}\right]\\ & = \sum_{p=0}^\infty \Lambda_2^p \sum_{1\leq n_1 < \dots < n_{p-1} \leq k} {\mathrm E}_{x} \left[ \prod_{i=1}^{p-1} \mathbf{1}_{S_{n_i}=Z_{n_i}} {\mathrm E}_{S_{n_{p-1}}} \left[\sum_{n=1}^{k-n_{p-1}} \mathbf{1}_{S_n = Z_{n+n_{p-1}}} \right] \right]\\ & \stackrel{\eqref{eq:condition_modKhas}}{\leq}\sum_{p=0}^\infty \Lambda_2^{p-1}\, \eta \sum_{1\leq n_1 < \dots < n_{p-1} \leq k} {\mathrm E}_{x} \left[ \prod_{i=1}^{p-1} \mathbf{1}_{S_{n_i}=Z_{n_i}} \right] \leq \dots \leq \sum_{p=0}^\infty \eta^p = \frac{1}{1-\eta}. 
\end{align*} \end{proof} The next lemma gives an a priori rough estimate on the moments of $W_k(\beta_N)$ $=$ $W_k(\beta_N,0)$ when $k$ is small. \begin{lemma} \label{lem:aprioriBound} Let $\hat \beta>0$. Let $b_N>0$ be a deterministic sequence such that $b_N = o(\sqrt{\log N})$ as $N\to\infty$. Assume that $q=O(\sqrt {\log N})>1$. Then, for all $k\leq e^{b_N}$, \[ {\mathbb E}[W_{k}(\beta_N)^q] = {\mathrm E}^{\otimes q}\left[ e^{\beta_N^2 \sum_{1\leq i<j\leq q} \sum_{n=1}^k \mathbf{1}_{S_n^{i} = S_n^j}}\right] \leq e^{\frac{1}{\pi}(1+\varepsilon_N) q^2 \beta_N^2 \log (k+1)},\] for $\varepsilon_N = \varepsilon_N(\hat \beta)\to 0$ as $N\to\infty$. \end{lemma} \begin{proof} Let $N^{i,j}_k=\sum_{n=1}^k \mathbf{1}_{S_n^{i} = S_n^j}$. By H\"older's inequality, we find that \begin{align*} {\mathbb E}[W_{k}(\beta_N)^q] & \leq {\mathrm E}^{\otimes q}\left[ e^{\frac{q\beta_N^2}{2} \sum_{ 1 < j\leq q} N^{1,j}_k} \right]^{q/q} = {\mathrm E} \left[{\mathrm E}^{\otimes 2}\left[ e^{\frac{q\beta_N^2}{2} N^{1,2}_k} \middle | S^1\right]^{q-1}\right], \end{align*} by independence of the $(N^{1,j}_k)_{1<j\leq q}$ conditioned on $S^1$. We now estimate the above conditional expectation using Lemma \ref{lem:modKhas}. Let $\kappa^2= q\beta_N^2/2\to 0$ and $\eta$ be as in \eqref{eq:condition_modKhas}. For any $Z\in \mathcal Z$ and $y\in \mathbb Z^2$, \[{\mathrm E}_y\left[\sum_{n=1}^k \mathbf{1}_{S_n = Z_n}\right] \leq \sum_{n=1}^k \sup_{x} p_n(x), \] where, see Appendix \ref{sec-pnstar} for an elementary proof, \begin{equation}\label{eq:pnstar} \forall n\geq 1: \quad \sup_x p_n(x) =:p_n^{\star} \leq \frac{2}{\pi n}. \end{equation} Thus, $\eta \leq \frac{1}{\pi} (1+o(1)) q\beta_N^2 \log (k+1) \to 0$, uniformly for $k\leq e^{b_N} $ as $N\to\infty$. 
Lemma \ref{lem:modKhas} then yields that for such $k$'s, \[ {\mathbb E}[W_{k}(\beta_N)^q] \leq \left(\frac{1}{1-\frac{1}{\pi}(1+o(1))q\beta_N^2 \log (k+1)} \right)^{q-1} = e^{\frac{1}{\pi}(1+o(1)) q^2 \beta_N^2 \log (k+1)}.\] \end{proof} \subsection{Proof of Theorem \ref{th:main}.} As a first step, we will prove that \begin{equation} \label{eq:firstStepQmoment} {\mathbb E}[W_N^q] \leq e^{{\binom{q}{2} \lambda^2}(1+cq^{-1/2}+\varepsilon_N)}, \end{equation} where $c=c(\hat \beta)> 0$ and $\varepsilon_N=\varepsilon_N(\hat \beta)\to 0$ as $N\to\infty$. As a second step, we improve the bound in case $q$ is bounded and thus complete the proof for general $q(N)$ assuming only condition \eqref{eq:condition20nr}, by a diagonalization argument. Recall the definitions of $\lambda_{k,N}$ in \eqref{eq:lambdaT} and that $\lambda=\lambda_{N,N}(\hat \beta)$. By standard convexity arguments, we note that $x\leq \log(\frac{1}{1-x}) \leq \frac{x}{1-x}$ for all $x\in [0,1)$; hence for all $a>1$ and $\hat \beta < 1$ such that $a\hat \beta^2 < 1$, \begin{equation} \label{eq:bounds_lambda} \forall k\leq N:\quad a \hat \beta^2 \frac{\log k}{\log N}\leq \lambda_{k,N} (\sqrt a \hat \beta)^2 \leq\frac{a \hat \beta^2}{1-a\hat \beta^2} \frac{\log k}{\log N}. \end{equation} Now, let \[I_{s,t}=\beta_N^2\sum_{n=s+1}^t \sum_{i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j} \quad \text{and} \quad I_k = I_{0,k},\] and define \begin{equation} M(X) := {\mathrm E}_X^{\otimes q}\left[e^{I_{N}}\right] \quad \text{and} \quad M=\sup_{X\in (\mathbb Z^2)^q} M(X). \end{equation} By \eqref{eq:momentFormula}, it is enough to have a bound on $M(\mathbf 0)$. In fact what we will give is a bound on $M$. 
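Both Lemma \ref{lem:modKhas} and the kernel bound \eqref{eq:pnstar} lend themselves to quick numerical verification by exact dynamic programming; the sketch below checks \eqref{eq:pnstar} for small $n$, and then checks the conclusion of Lemma \ref{lem:modKhas} for one fixed zig-zag path $Z$ and small $k$ (which suffices as an illustration, since the proof of the lemma argues for each fixed $Z$ separately). All names are ours:

```python
import numpy as np

def srw_step(p):
    """one step of the 2D simple random walk applied to a mass distribution"""
    q = np.zeros_like(p)
    q[1:, :] += 0.25 * p[:-1, :]
    q[:-1, :] += 0.25 * p[1:, :]
    q[:, 1:] += 0.25 * p[:, :-1]
    q[:, :-1] += 0.25 * p[:, 1:]
    return q

# check of the kernel bound p_n^* <= 2/(pi n), eq. (eq:pnstar)
nmax = 40
size = 2 * nmax + 1
p = np.zeros((size, size)); p[nmax, nmax] = 1.0  # S_0 = 0
sup_p = []
for n in range(1, nmax + 1):
    p = srw_step(p)
    sup_p.append(float(p.max()))

# check of the Khas'minskii-type bound for one fixed nearest-neighbour Z
k, kappa2 = 8, 0.1            # horizon and kappa^2; eta below is < 1
L = k + 2; size = 2 * L + 1   # walks started near Z never leave the grid
Z = [((n % 2), 0) for n in range(k + 1)]  # zig-zag between (0,0) and (1,0)
idx = lambda z: (z[0] + L, z[1] + L)

# eta for this Z: (e^{kappa^2}-1) sup_x sum_n p_n(x -> Z_n); by symmetry
# of the walk, p_n(x -> Z_n) = p_n(Z_n -> x), so evolve deltas from Z_n
occ = np.zeros((size, size))
for n in range(1, k + 1):
    q = np.zeros((size, size)); q[idx(Z[n])] = 1.0
    for _ in range(n):
        q = srw_step(q)
    occ += q
eta = (np.exp(kappa2) - 1.0) * occ.max()

# sup_x E_x[exp(kappa^2 sum_{n=1}^k 1_{S_n=Z_n})], by backward recursion;
# the recursion is exact near the centre of the grid, where the max sits
g = np.ones((size, size))
for n in range(k, 0, -1):
    w = np.ones((size, size)); w[idx(Z[n])] = np.exp(kappa2)
    g = srw_step(w * g)
print(eta, g.max(), 1.0 / (1.0 - eta))
```

The printed values illustrate the lemma: the exponential moment stays below $1/(1-\eta)$, with $\eta$ evaluated for this particular $Z$.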
To do so, we let $T=T_N>0$ such that $\log T =o(\sqrt{\log N})$ and introduce the event \[\tau_{T} := \inf \{n > T : F_n\cup K_n \, \mbox{\rm occurs}\}.\] We then decompose $M(X)$ as follows: \begin{equation} \label{eq:defAB} M(X) = {\mathrm E}_X^{\otimes q}\left[e^{I_{N}} \mathbf{1}_{\tau_T \leq N}\right] + {\mathrm E}_X^{\otimes q}\left[e^{I_{N}} \mathbf{1}_{\tau_T >N}\right] =: A(X)+ B(X). \end{equation} We start by bounding $B(X)$ from above. Let $c$ be as in Theorem \ref{th:momentwithoutTriple}. By Markov's property, \begin{align} \sup_{X\in (\mathbb Z^{2})^q} B(X) & \leq \sup_{X\in (\mathbb Z^{2})^q} {\mathrm E}^{\otimes q}_X[e^{I_{T}}] \sup_{Y\in (\mathbb Z^{2})^q} {\mathrm E}_Y^{\otimes q} [e^{I_{N-T}} \mathbf{1}_{\tau_0 > N-T}] \nonumber\\ & \leq e^{\frac{1}{\pi}(1+\varepsilon_N)q^2 \beta_N^2 \log T} e^{{\binom{q}{2}\lambda_{N-T,N}^2} (1+cq^{-1/2}+o(1))} \nonumber\\ & \leq e^{{\binom{q}{2} \lambda^2} (1+cq^{-1/2}+o(1))},\label{eq:MarkovOnB} \end{align} where in the second inequality, we used Lemma \ref{lem:aprioriBound} and Theorem \ref{th:momentwithoutTriple} and in the last inequality, we used that $\beta_N^2\log T$ vanishes as $N\to\infty$ and that $\lambda_{N-T,N}^2 <\lambda^2$. We will now deal with $A(X)$ and show that \begin{equation} \label{eq:boundFinalAX} \sup_{X\in (\mathbb Z^2)^q} A(X) \leq M \varepsilon_N, \end{equation} with $\varepsilon_N\to 0$. This, together with \eqref{eq:defAB} and \eqref{eq:MarkovOnB} implies that \[M(1-\varepsilon_N) \leq e^{{\binom{q}{2} \lambda^2}(1+cq^{-1/2}+o(1))}, \] which entails \eqref{eq:firstStepQmoment}. Toward the proof of \eqref{eq:boundFinalAX}, we first use Markov's property to obtain that \begin{equation*}A(X) = \sum_{k=T}^N {\mathrm E}_X^{\otimes q}[e^{I_{k}+I_{k ,N}} \mathbf{1}_{\tau_T = k}] \leq M \sum_{k=T}^N {\mathrm E}_X^{\otimes q}[e^{I_{k}} \mathbf{1}_{\tau_T = k}] . 
\end{equation*} In what follows, for $\mathcal T\subset \mathbb N$ we use the phrase "no triple+ in $\mathcal T$" to denote the event $(\cup_{n\in \mathcal T} (F_n \cup K_n))^\complement$. Similarly, for $\mathcal I\subset \llbracket 1,q\rrbracket$ we use the phrase "no triple+ in $\mathcal T$ for particles of $\mathcal I$" to denote the event $(\cup_{n\in \mathcal T} (F_n^{\mathcal I} \cup K_n^{\mathcal I}))^\complement$ where $F_n^{\mathcal I}$ and $K_n^{\mathcal I}$ are defined as $F_n$ and $K_n$ but with $\llbracket 1,q\rrbracket$ replaced by $\mathcal I$. We then decompose over which event, $F_n$ or $K_n$, occurred at $\tau_T$, and then over which particles participated in the event: \begin{equation} \label{eq:AXbound2} \begin{aligned} A(X)&\leq M \sum_{{\bar \alpha},{\bar \beta},{\bar \gamma} \leq q} \sum_{k=T}^N {\mathrm E}_X^{\otimes q}\left[e^{I_{k}} \mathbf{1}_{\text{no triple+ in $\llbracket T, k-1\rrbracket$}}\, \mathbf{1}_{S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k}\right]\\ & + M\sum_{({\bar \alpha}<{\bar \beta})\neq({\bar \gamma}<{\bar \delta})} \sum_{k=T}^N {\mathrm E}_X^{\otimes q}\left[e^{I_{k}} \mathbf{1}_{\text{no triple+ in $\llbracket T, k-1\rrbracket$}}\, \mathbf{1}_{S^{\bar \alpha}_k = S^{\bar \beta}_k, S^{\bar \gamma}_k=S^{\bar \delta}_k}\right],\\ &=: M(A_1(X)+A_2(X)). \end{aligned} \end{equation} By \eqref{eq:AXbound2}, it is enough to prove that $A_1(X)$ and $A_2(X)$ vanish uniformly in $X$ as $N\to\infty$ in order to obtain \eqref{eq:boundFinalAX}. We next show that $A_1(X)$ vanishes, the argument for $A_2(X)$ is similar. We bound $I_k$ by \[I_{k} \leq J_{k} + J^{\bar \alpha}_{k} +J^{\bar \beta}_{k} +J^{\bar \gamma}_{k} ,\] where \begin{gather*}J_{k} = \beta_N^2 \sum_{n=1}^k \sum_{\substack{i<j\leq q\\i,j\notin\{{\bar \alpha},{\bar \beta},{\bar \gamma}\}}} \mathbf{1}_{S^i_n=S^j_n} \quad\text{and}\quad J^{i_0}_{k} = \beta_N^2\sum_{n=1}^k \sum_{j\in\llbracket 1,q\rrbracket \setminus \{i_0\} } \mathbf{1}_{S^{i_0}_n=S^j_n}. 
\end{gather*} If we let $\frac{1}{a} + \frac{3}{b} = 1$ with $1<a\leq 2$ and $1<b$, we have \begin{align} &{\mathrm E}_X^{\otimes q} \left[e^{I_{k}} \mathbf{1}_{\text{no triple+ in $\llbracket T, k-1\rrbracket$}}\, \mathbf{1}_{S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k}\right] \nonumber\\ &\leq {\mathrm E}_X^{\otimes q}\left[e^{aJ_{k}} \mathbf{1}_{\text{no triple+ in $\llbracket T, k-1\rrbracket$ for particles of $\llbracket1,q\rrbracket \setminus \{{\bar \alpha},{\bar \beta},{\bar \gamma}\}$}}\, \mathbf{1}_{S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k}\right]^{1/a} \label{eq:boundNotripleHolder}\\ &\times \prod_{i_0\in \{{\bar \alpha},{\bar \beta},{\bar \gamma}\}} {\mathrm E}_X^{\otimes q}\left[e^{bJ^{i_0}_{k}} \mathbf{1}_{S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k}\right]^{1/b}. \label{eq:boundNotripleHolder2} \end{align} We treat separately the two quantities \eqref{eq:boundNotripleHolder} and \eqref{eq:boundNotripleHolder2}. Before doing so, we specify our choice of $a,b$ and $\hat \beta$. We assume that $\hat \beta^2 < 1/72$ and $a<3/2$, with $a$ close enough to $3/2$ (and so $b$ close to $9$) in such a way that \begin{equation} \label{eq:conditions} \begin{aligned} &(i)\ 8b\hat \beta^2 < 1, \quad (ii)\ \limsup_{N\to\infty} \frac{1}{\pi} q^2 \beta_N^2 =:\rho_0 < 1/a \quad \text{and} \\ &\quad (iii)\ \limsup_{N\to \infty} \frac{\hat \beta^2}{1-a\hat \beta^2} \frac{\binom q 2}{\log N}(1+cq^{-1/2}) =:\rho_1 < 1/a. \end{aligned} \end{equation} Note that (ii) and (iii) are assured to hold for $a$ close enough to $3/2$ thanks to the assumption \eqref{eq:condition20nr} which implies that $\limsup_N \pi^{-1}q^2 \beta_N^2 \leq \frac{2}{3}$ since $\beta_N^2\sim \pi \hat \beta^2/\log N$. We chose $\hat \beta^2 < 1/72$ to allow (i). We first bound \eqref{eq:boundNotripleHolder}. 
If $k \leq e^{(\log N)^{1/3}}$, then, using that $J_k$ does not depend on the $\bar \alpha, \bar \beta,\bar \gamma$ particles, \eqref{eq:boundNotripleHolder} is bounded by \begin{align*} &{\mathrm E}_X^{\otimes q}\left[e^{aJ_{k}} \right]^{1/a} {\mathrm P}_{(x^{\bar\alpha},x^{\bar \beta},x^{\bar \gamma})}^{\otimes 3}\left(S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k\right)^{1/a}\leq C e^{\frac{1}{\pi} (1+\varepsilon_N) q^2 a \beta_N^2 (\log (k+1))/a} k^{-2/a}, \end{align*} for some $c>0$ and uniformly in $X\in (\mathbb Z^2)^q$, where we have used in the inequality Lemma \ref{lem:aprioriBound} and that $\sum_{x} p_{k}(x)^3 \leq (p_k^\star)^2 \leq k^{-2}$ by \eqref{eq:pnstar}. For $k \geq e^{(\log N)^{1/3}}$, we rely on \eqref{eq:momentToT} to bound \eqref{eq:boundNotripleHolder} by \begin{align*} & {\mathrm P}_{(x^{\bar \alpha},x^{\bar \beta},x^{\bar \gamma})}^{\otimes 3}\left(S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k\right)^{1/a}\\ &\qquad \times {\mathrm E}_X^{\otimes q}\left[e^{aJ_{k}} \mathbf{1}_{\text{no triple+ in $\llbracket T, k-1\rrbracket$ for particles in $\llbracket1,q\rrbracket \setminus \{{\bar \alpha},{\bar \beta},{\bar \gamma}\}$}} \,\right]^{1/a} \\ & \leq C k^{-2/a} \left( {\mathrm E}_X^{\otimes q}\left[e^{aJ_{T}}\right] \sup_{Y} {\mathrm E}_Y^{\otimes (q-3)}\left[e^{aJ_{k-T-1}} \mathbf{1}_{\text{no triple+ in $\llbracket 1, k-T-1\rrbracket$ }}\right]\right)^{1/a} \\ &\leq C e^{\frac{1}{\pi}(1+\varepsilon_N) q^2 a \beta_N^2 (\log T)/a} e^{{\binom{q}{2}(1+cq^{-1/2}+\varepsilon'_N) \lambda_{k,N}^2(\sqrt a \hat \beta) /a}} k^{-2/a}, \end{align*} where we have used that $J_{k-T}\leq C + J_{k-T-1}$ by \eqref{eq:condition20nr}. 
For \eqref{eq:boundNotripleHolder2}, we apply the Cauchy-Schwarz inequality to find that \begin{equation} \label{eq:CS3} {\mathrm E}_X^{\otimes q}\left[e^{bJ^{i_0}_{k}} \mathbf{1}_{S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k}\right]^{1/b}\leq {\mathrm E}_X^{\otimes q}\left[e^{2bJ^{i_0}_{k}} \right]^{1/2b} k^{-1/b}, \end{equation} where we again used that $\sum_{x} p_{k}(x)^3 \leq (p_k^\star)^2 \leq k^{-2}$ by \eqref{eq:pnstar}. Now observe that by conditioning on $S^{i_0}$, we have \begin{align*} {\mathrm E}_X^{\otimes q}\left[e^{2bJ^{i_0}_{k}}\right] & \leq \sup_{y\in \mathbb Z^2} {\mathrm E}^{S^1}_{y} \left[\sup_{x\in \mathbb Z^2} {\mathrm E}^{S^2}_{x} \left[e^{2b \beta_N^2\sum_{n=1}^k \mathbf{1}_{S_k^{1} = S_k^2}} \right]^{q-1} \right], \end{align*} where uniformly on all nearest neighbor walks $Z\in {\mathcal Z}$, \[ \left(e^{2b\beta_N^2} - 1\right) \sup_{x\in \mathbb Z^2} {\mathrm E}_{x}\sum_{n=1}^k \mathbf{1}_{S_n = Z_n}\leq 4(1+o(1))b \hat \beta^2 \frac{\log (k+1)}{\log N} \leq 8b \hat \beta^2 \frac{\log (k+1)}{\log N}\] for all $N$ large, because $\beta_N^2\sim \pi \hat \beta^2/\log N$ and $\sup_x p_n(x)\leq 2/(\pi n)$, see \eqref{eq-RNas} and \eqref{eq:pnstar}. Hence by Lemma \ref{lem:modKhas} with \eqref{eq:conditions}-(i), \begin{align*} \sup_{X\in (\mathbb Z^2)^q}{\mathrm E}_X^{\otimes q}\left[e^{bJ^{i_0}_{k}} \mathbf{1}_{S^{\bar \alpha}_k = S^{\bar \beta}_k = S^{\bar \gamma}_k}\right]^{1/b}& \leq \left(\frac{1}{1-8 b\hat \beta^2 \frac{\log (k+1)}{\log N}}\right)^{(q-1)/2b} k^{-1/b}\\ & \leq e^{c \frac{\log (k+1)}{\sqrt{\log N}}} k^{-1/b}, \end{align*} for some universal constant $c>0$, using \eqref{eq:condition20nr}. 
We thus find that \begin{equation} \label{eq:3rdboundAX} \begin{aligned} & \sup_{X\in (\mathbb Z^2)^q} A_1(X) \leq q^3 \sum_{k=T}^{\lfloor e^{(\log N)^{1/3}} \rfloor} e^{\frac{1}{\pi}(1+\varepsilon_N) q^2 \beta_N^2 \log (k+1)} k^{-2/a} e^{3c \frac{\log (k+1)}{\sqrt{\log N}}} k^{-3/b}\\ &+ C q^3 e^{\frac{1}{\pi}(1+\varepsilon_N) q^2 \beta_N^2 \log T} \times\\ &\times \sum_{k=\lfloor e^{(\log N)^{1/3}} \rfloor}^N e^{{\lambda_{k,N}^2(\sqrt a \hat \beta) \binom{q}{2}(1+cq^{-1/2}+\varepsilon'_N)/a}} k^{-2/a} e^{3c \frac{\log (k+1)}{\sqrt{\log N}}} k^{-3/b}. \end{aligned} \end{equation} We now prove that the two terms on the right-hand side of \eqref{eq:3rdboundAX} vanish as $N\to\infty$. By \eqref{eq:conditions}-(ii),(iii), there exists $\delta>0$ satisfying $\delta< 1/a-\max (\rho_0,\rho_1)$, and therefore the first sum on the right-hand side of \eqref{eq:3rdboundAX} can be bounded by \begin{align*} q^3 \sum_{k=T}^{\lfloor e^{(\log N)^{1/3}} \rfloor} k^{-1-\delta} \leq Cq^3 T^{-\delta}, \end{align*} for $N$ large enough. Hence, we can set $T = \lfloor e^{(\log N)^{1/4}} \rfloor $ (which satisfies $\log T = o(\sqrt{\log N})$), so that $q^3 T^{-\delta}\to 0$ as $N\to\infty$. Relying on \eqref{eq:bounds_lambda}, the second sum {in \eqref{eq:3rdboundAX}} is bounded by \begin{align*} &C q^3 e^{c ' \log T} \sum_{k=\lfloor e^{(\log N)^{1/3}} \rfloor}^N e^{{\frac{\hat\beta^2}{1-a\hat \beta^2}\frac{\binom{q}{2}(1+cq^{-1/2}+\varepsilon'_N)}{\log N}} {\log (k+1)}} e^{3c \frac{\log (k+1)}{\sqrt{\log N}}} k^{-1-1/a}\\ & \leq C q^3 e^{c' \log T} \sum_{k=\lfloor e^{(\log N)^{1/3}} \rfloor}^N k^{-1-\delta/2} \leq C q^3 e^{-\frac{\delta}{2} (\log N)^{1/3} + c (\log N)^{1/4}}, \end{align*} for some $c'>0$ (recall that $\delta < 1/a - \rho_1$). Then the quantity in the last line vanishes as $N\to\infty$. (Note that we decomposed the sum at $k= e^{(\log N)^{1/3}}$ and let $\log T = (\log N)^{1/4}$ to ensure that $(\log N)^{1/3}\gg (\log N)^{1/4}$ in the last display.) 
By \eqref{eq:3rdboundAX} we have thus proven that $\lim_{N\to\infty}\sup_{X\in (\mathbb Z^2)^q} A_1(X)= 0$. When dealing with $A_2$, we have to use H\"{o}lder's inequality as in \eqref{eq:boundNotripleHolder}, \eqref{eq:boundNotripleHolder2} with $4$ particles instead of $3$, so in this case we can choose $a\sim 3/2$ and $b\sim 12$, and the condition (i) in \eqref{eq:conditions} is satisfied under the restriction $\hat \beta^2 < 1/96$. The rest of the argument follows the same lines as for $A_1$, and we get that $\lim_{N\to\infty}\sup_{X\in (\mathbb Z^2)^q} A_2(X)= 0$. As a result, we have shown that \eqref{eq:firstStepQmoment} holds. This proves \eqref{eq:estimate} when $q\to\infty$. When $q=q_0$, \eqref{eq:firstStepQmoment} yields that $W_N$ is bounded in every $L^p$, $p>1$. This fact combined with \eqref{eq:pointwiseGaussianCV} implies the convergence \eqref{eq-convergencepol} for all fixed $q$, which implies that \eqref{eq:estimate} holds in the case $q=q_0$ as well. We now turn to the general case, where we only assume that $q(N)$ satisfies \eqref{eq:condition20nr}. Suppose that \eqref{eq:estimate} does not hold, so that we can find $\varepsilon_0>0$ and a subsequence $q_N'=q(\varphi(N))$ such that \begin{equation} \label{eq:absurd} \forall N\in\mathbb N,\quad {\mathbb E} W_N^{q_N'} > e^{\lambda^2 \binom{q_N'}{2}(1+\varepsilon_0)}. \end{equation} One can distinguish two cases. If $q_N'$ is bounded, then up to extracting a subsequence, we can suppose that $q_N'$ converges to some $q_0\geq 2$. Then, one can check by \eqref{eq:pointwiseGaussianCV} that we must have ${\mathbb E} W_N^{q_N'} \to e^{\lambda^2 \binom{q_0}{2}}$ (for example, using Skorokhod's representation theorem and Vitali's convergence theorem together with the fact that $W_N$ is bounded in any $L^p$). But this is impossible by \eqref{eq:absurd}. On the other hand, if $q'_N$ is not bounded, up to extracting a subsequence we can suppose that $q'_N\to\infty$. 
But then \eqref{eq:absurd} cannot be true because \eqref{eq:estimate} holds with $q=q'_N\to\infty$. Therefore \eqref{eq:estimate} must hold for any sequence $q(N)$ that satisfies \eqref{eq:condition20nr}. \qed \subsection{On Remark \ref{rem-1.2}} \label{subsec-rem1.2} We describe the changes needed for obtaining the claim in Remark \ref{rem-1.2}. Recall the definitions of $F_n$ and $K_n$, see \eqref{eq-Fn} and \eqref{eq-Kn}, and \eqref{eq:momentFormula}. Set \begin{align*} &A_N=\sum_{n=1}^N \mathbf{1}_{(F_n\cup K_n)^\complement} \sum_{1\leq i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j},\\ &B_N = \sum_{n=1}^N \mathbf{1}_{F_n} \sum_{1\leq i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j}, \quad C_N= \sum_{n=1}^N \mathbf{1}_{K_n} \sum_{1\leq i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j}. \end{align*} Note that for any $u_N\geq 1$, one can check that ${\mathrm E}^{\otimes q}_X \left[e^{u_N \beta_N^2 A_N}\right]$ is bounded above by $\Psi_{N,q}(X)$ of \eqref{eq:defPsiN} with $T=N$ and $\beta_N$ replaced by $\beta_N u_N$. Using H\"older's inequality, it is enough to show (together with the proof of Theorem \ref{th:momentwithoutTriple}, which actually controls $\sup_{X}\Psi_{N,q}(X)$) that for any $\hat \beta<1$ and $q_N=o(\log N/\log \log N)$, there exists $v_N\to\infty$ such that \begin{equation} \label{eq:Holder} \sup_X {\mathrm E}^{\otimes q}_X \left[e^{v_N \beta_N^2 B_N}\right]^{1/v_N} \to_{N\to\infty} 1, \quad \sup_X {\mathrm E}^{\otimes q}_X \left[e^{v_N \beta_N^2 C_N}\right]^{1/v_N} \to_{N\to\infty} 1. \end{equation} We sketch the proof of the first limit in \eqref{eq:Holder}; the proof of the second is similar. By Corollary \ref{cor:discrete_Khas} (applied on the space of $q$-tuples of paths, with $f(Y_n)=v_N \beta_N^2\sum_{1\leq i<j \leq q} {\bf 1}_{S^i_n=S^j_n} {\bf 1}_{F_n}$), it suffices to show that \begin{equation} \label{eq:khasCondition} \limsup_{N\to\infty} \sup_{X \in (\mathbb Z^2)^q} {\mathrm E}_X^{\otimes q}[v_N\beta_N^2 B_N] = 0. 
\end{equation} To see \eqref{eq:khasCondition}, fix $K\in \llbracket 1,N\rrbracket$. By \eqref{eq:pnstar}, we have that \begin{align} {\mathrm E}_X^{\otimes q} \sum_{n=1}^N \mathbf{1}_{F_n} \sum_{1\leq i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j} \leq \sum_{n=1}^K \frac{q(q-1)}{2} \frac{C}{n} + \sum_{n=K+1}^N \sum_{1\leq i<j\leq q} {\mathrm E}_X^{\otimes q} \mathbf{1}_{F_n} \mathbf{1}_{S_n^{i} = S_n^j}.\label{eq:boundRemoveTripleStep0} \end{align} For $i<j\leq q$ and $r\in \{0,1,2\}$, further denote \[ F_n^{i,j;r} = \{\exists (\bar\alpha,\bar\beta,\bar\gamma) : \bar \alpha< \bar\beta<\bar\gamma \leq q, S_n^{\bar \alpha} = S_n^{\bar \beta} = S_n^{\bar \gamma}, |\{\bar\alpha,\bar\beta,\bar\gamma\}\cap \{i,j\}| = r\}.\] We have that \[ \sum_{1\leq i<j\leq q} {\mathrm E}_X^{\otimes q} \mathbf{1}_{F_n} \mathbf{1}_{S_n^{i} = S_n^j} = \sum_{1\leq i<j\leq q} {\mathrm E}_X^{\otimes q} \sum_{r=0}^2 \mathbf{1}_{F_n^{i,j;r}} \mathbf{1}_{S_n^{i} = S_n^j}. \] We first focus on the term $r=0$. By independence, \eqref{eq:pnstar} and the union bound, \begin{align*}\sum_{1\leq i<j\leq q} {\mathrm E}_X^{\otimes q} \mathbf{1}_{F_n^{i,j;0}} \mathbf{1}_{S_n^{i} = S_n^j} & \leq \sum_{1\leq i<j\leq q} \frac{C}{n} \sum_{\bar\alpha<\bar\beta<\bar\gamma\leq q} \sup_{x_i\in \mathbb Z^2} \sum_{y\in \mathbb Z^2} \prod_{i=1}^3 {\mathrm P}_{x_i}(S_n = y) \leq \frac{Cq^5}{n^3}\,. \end{align*} When $r=1$, the condition in the indicator function becomes that there exist $\bar \alpha < \bar\beta \leq q$ such that $S_n^i = S_n^j = S_n^{\bar \alpha} = S_n^{\bar \beta}$. Hence, the term for $r=1$ is bounded by \[ \sum_{1\leq i<j\leq q} \sum_{\bar \alpha<\bar \beta\leq q} \sup_{x_i\in {\mathbb Z}^2}\sum_{y\in \mathbb Z^2} \prod_{i=1}^4{\mathrm P}_{x_i}(S_n = y)\\ \leq \frac{Cq^4}{n^3}\,. \] Similarly, we can bound the term for $r=2$ by a constant times $q^3/n^2$. 
Using \eqref{eq:boundRemoveTripleStep0}, we find that for all $K \in \llbracket 1,N\rrbracket$, \begin{equation} \sup_{X \in (\mathbb Z^2)^q} {\mathrm E}_X^{\otimes q}[v_N\beta_N^2 B_N] \leq \frac {Cv_N \hat \beta^2} {\log N} \left( \frac{q(q-1)}{2} \log K + \frac{q^5}{K^2} + \frac{q^4}{K^2} + \frac{q^3}{K}\right). \end{equation} For $K=\left \lfloor (\log N)^{3/4} \right\rfloor$ and $q^2 = o(\log N / \log \log N)$, we find that \eqref{eq:khasCondition} holds with a well-chosen $v_N\to\infty$. \section{No triple intersections - Proof of Theorem \ref{th:momentwithoutTriple}} \label{sec-notriple} Recall that $T\in \llbracket 1,N\rrbracket$. For compactness of notation in the rest of the paper, set \begin{equation}\label{eq:defSigmaN} \sigma_N^2= \sigma_N^2(\hat \beta) = e^{\beta_N^2}-1. \end{equation} By \eqref{eq-RNas}, there exist $\delta_N=\delta(N,\hat \beta)$ and $\delta'_N=\delta'(N,\hat \beta)$ that vanish as $N\to\infty$ such that \begin{equation} \label{eq:LLTsigmaN} \sigma_N^2 = \frac{\hat \beta^2}{R_N}(1+\delta_N) = \frac{ \pi \hat \beta^2}{\log N} (1+\delta'_N). \end{equation} \subsection{Expansion in chaos} \label{sec:chaos} In this section, we show that the moment without triple intersections can be bounded by a rather simple expansion. Introduce the following notation: for $\mathbf n = (n_0,n_1,\dots,n_k)$ and $\mathbf x = (x_0,x_1,\dots,x_k)$, let $p_{\mathbf n,\mathbf x} = \prod_{i=1}^k p(n_{i}-n_{i-1},x_i-x_{i-1})$. Let $\mathcal C_q = \{(i,j)\in \llbracket 1,q\rrbracket^2 : i<j\}$. 
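As an aside, the elementary chain of bounds $\sum_x p_k(x)^3 \leq (p_k^\star)^2 \leq k^{-2}$, invoked repeatedly in the arguments above, can be verified exactly for small $k$ by computing the law of the two-dimensional simple random walk by convolution. The following Python sketch is purely illustrative and is not part of the proof; it checks the slightly stronger bound $p_k^\star \leq 2/(\pi k)$ from \eqref{eq:pnstar}.

```python
import math
from collections import defaultdict

# Exact law p_k of the 2D simple random walk, computed by convolution.
# Illustrative check (for small k) of the chain used above:
#   sum_x p_k(x)^3 <= (p_k^star)^2 <= (2/(pi k))^2 <= k^{-2}.
dist = {(0, 0): 1.0}
checks = []
for k in range(1, 21):
    new = defaultdict(float)
    for (x, y), p in dist.items():
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            new[(x + dx, y + dy)] += p / 4.0
    dist = dict(new)
    p_star = max(dist.values())              # p_k^star = sup_x p_k(x)
    s3 = sum(p ** 3 for p in dist.values())  # sum_x p_k(x)^3
    checks.append(s3 <= p_star ** 2 <= (2 / (math.pi * k)) ** 2 <= 1 / k ** 2)

print(all(checks))
```

The first inequality of the chain holds for every $k$, since $p_k(x)^3 \leq (p_k^\star)^2\, p_k(x)$ and $\sum_x p_k(x) = 1$.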
\begin{proposition} \label{prop:decInChaos} For all $X=(x_0^1,\dots,x_0^q)\in (\mathbb Z^2)^q$, we have \begin{equation} \label{eq:lessthanPsi} {\mathrm E}_X^{\otimes q} \left[e^{\beta_N^2 \sum_{n=1}^T \sum_{1\leq i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j}}\mathbf{1}_{G_T}\right] \leq \Psi_{N,q}(X), \end{equation} where \begin{equation} \begin{aligned} &\Psi_{N,q}(X) =\\ & \sum_{k=0}^\infty \sigma_N^{2k} \sum_{\substack{1\leq n_1<\dots<n_k\leq T\\ (i_r,j_r)_{r\leq k}\in \mathcal C_q^k\\ \mathbf x^1 \in (\mathbb Z^2)^{k},\dots,\mathbf x^q \in (\mathbb Z^2)^{k}}} \prod_{r=1}^k \mathbf{1}_{x_r^{i_r} = x_r^{j_r}} \prod_{i=1}^q p_{(0,n_1,\dots,n_k),(x_0^i,\mathbf x^i)}, \end{aligned} \label{eq:defPsiN} \end{equation} where we recall \eqref{eq:defSigmaN} for the definition of $\sigma_N$. \end{proposition} \noindent (By convention, here and throughout the paper, the term $k=0$ in sums as \eqref{eq:defPsiN} equals $1$.) \begin{proof} For brevity, we write $G$ for $G_T$. For $X=(x_0^1,\dots,x_0^q)\in (\mathbb Z^2)^q$, using the identity $e^{\beta_N^2 \mathbf{1}_{S_n^{i} = S_n^j}} -1 = \sigma_N^2 \, \mathbf{1}_{S_n^{i} = S_n^j}$, \begin{align*} M_{N,q}^{\text{no triple}}(X) &:= {\mathrm E}_X^{\otimes q} \left[e^{\beta_N^2 \sum_{n=1}^T \sum_{1\leq i<j\leq q} \mathbf{1}_{S_n^{i} = S_n^j}}\mathbf{1}_{G}\right] \\ & = {\mathrm E}_X^{\otimes q} \left[\prod_{n\in\llbracket 1,T\rrbracket,(i,j)\in\mathcal C_q} \left(1+\sigma_N^2 \, \mathbf{1}_{S_n^{i} = S_n^j} \right)\mathbf{1}_{G}\right]. \end{align*} Expand the last product to obtain that: \begin{equation*} M_{N,q}^{\text{no triple}}(X) = \sum_{k=0}^\infty \sigma _N^{2k} \sum_{{\substack{(n_r,i_r,j_r)\in \llbracket 1,T\rrbracket\times \mathcal C_q, r=1,\dots,k\\(n_1,i_1,j_1)<\dots<(n_k,i_k,j_k)}}} {\mathrm E}_X^{\otimes q} \left[\prod_{r=1}^k \mathbf{1}_{S_{n_r}^{i_r} = S_{n_r}^{j_r}}\mathbf{1}_{G}\right], \end{equation*} where we have used the lexicographic ordering on 3-tuples $(n,i,j)$. 
Since no three or more particles intersect simultaneously on the event $G$, the sum above can be restricted to 3-tuples $(n_r,i_r,j_r)_{r\leq k}$ such that $n_r < n_{r+1}$ for all $r < k$. Hence, \begin{align*} M_{N,q}^{\text{no triple}}(X)& = \sum_{k=0}^\infty \sigma_N^{2k} \sum_{1\leq n_1<\dots<n_k\leq T,(i_r,j_r)_{r\leq k}\in \mathcal C_q^k} {\mathrm E}_X^{\otimes q} \left[\prod_{r=1}^k \mathbf{1}_{S_{n_r}^{i_r} = S_{n_r}^{j_r}} \mathbf{1}_{G}\right]\\ & \leq \Psi_{N,q}(X), \end{align*} where $\Psi$ is defined in \eqref{eq:defPsiN}, and where we have bounded $\mathbf{1}_{G}$ by $1$ in the inequality. \end{proof} \subsection{Decomposition in two-particle intersections} In this section, we rewrite $\Psi_{N,q}$ in terms of successive two-particle interactions. We generalize a decomposition used in \cite[Section 5.1]{CaSuZyCrit18} that was restricted to a third moment computation ($q=3$). The following notation is borrowed from their paper. Let \begin{equation} U_N(n,x) := \begin{cases} \sigma_N^2 {\mathrm E}_{0}^{\otimes 2}\left[ e^{ \beta_N^2\sum_{l=1}^{n-1} \mathbf 1_{S^1_l = S^2_l}} \mathbf{1}_{S^1_n= S^2_n=x} \right] & \text{if } n\geq 1,\\ \mathbf{1}_{x=0} & \text{if } n=0, \end{cases} \end{equation} and \begin{equation} \label{eq:def_UN} \begin{aligned} U_N(n) & := \sum_{z \in\mathbb Z^2} U_N(n,z) = \begin{cases} \sigma_N^2 {\mathrm E}_{0}^{\otimes 2}\left[ e^{ \beta_N^2\sum_{l=1}^{n-1} \mathbf 1_{S^1_l = S^2_l}} \mathbf{1}_{S^1_n= S^2_n} \right] & \text{if } n\geq 1,\\ 1 & \text{if } n=0. 
\end{cases} \end{aligned} \end{equation} Observe that, by the identity $e^{\beta_N^2 \mathbf{1}_{S_l^{1} = S_l^2}} -1 = \sigma_N^2 \, \mathbf{1}_{S_l^{1} = S_l^2}$, one has for all $n\geq 1$, \begin{equation}\label{eq:chaosSimple} \begin{aligned} &{\mathrm E}_{0}^{\otimes 2}\left[ e^{ \beta_N^2\sum_{l=1}^{n-1} \mathbf 1_{S^1_l = S^2_l}} \mathbf{1}_{S^1_n= S^2_n=x} \right] = {\mathrm E}_{0}^{\otimes 2}\left[ \prod_{l=1}^{n-1} \left(1+\sigma_N^2\mathbf 1_{S^1_l = S^2_l}\right) \mathbf{1}_{S^1_n= S^2_n=x} \right]\\ & = \sum_{k=0}^\infty \sigma_N^{2k} \sum_{n_0=0 < n_1 <\dots < n_{k}< n} {\mathrm E}_0^{\otimes 2} \left[ \prod_{r=1}^{k} \mathbf{1}_{S_{n_r}^1 = S_{n_r}^2} \mathbf{1}_{S_{n}^1 = S_{n}^2 = x} \right]. \end{aligned} \end{equation} Hence for all $n\geq 1$: \begin{equation} \label{eq:chaos2} U_N(n,x) = \sigma_N^2 \sum_{k=0}^\infty \sigma_N^{2k} \sum_{\substack{n_0=0 < n_1 <\dots < n_{k}< n = n_{k+1} \\x_0=0 ,x_1,\dots,x_{k}\in \mathbb Z^2,x_{k+1}=x}} \prod_{r=1}^{k+1} p_{n_{r}-n_{r-1}}(x_r-x_{r-1})^2. \end{equation} Now, in the sum in \eqref{eq:defPsiN}, we observe that (only) two particles interact at given times $(n_1<\dots< n_k)$. So we define $a_1=n_1$ and $b_1=n_r$ such that $(n_1,n_2,\dots,n_{r})$ are the successive times that verify $(i_1,j_1)=(i_2,j_2)=\dots=(i_r,j_r)$ before a new couple of particles $\{i_{r+1},j_{r+1}\}\neq \{i_1,j_1\}$ is considered, and we let $k_1=r$ be the number of times the couple is repeated. Define then $a_2\leq b_2,a_3\leq b_3,\dots, a_m\leq b_m$ similarly for the next interacting couples, with $m$ denoting the number of alternating couples and $k_2,\dots,k_m$ the numbers of times the couples are repeated successively. Further let $\mathbf{X} = (X_1,\dots,X_m)$ and $\mathbf Y = (Y_1,\dots,Y_m)$ with $X_r = (x_r^1,\dots,x_r^q)$ and $Y_r = {(y_r^1,\dots,y_r^q)}$ denote respectively the positions of the particles at time $a_r$ and $b_r$. 
We also write $X=(x_0^p)_{p\leq q}$, for the initial positions of the particles at time $0$. We call a \emph{diagram} $\mathbf I$ of size $m\in\mathbb N$ any collection of $m$ couples $\mathbf I = (i_r,j_r)_{r\leq m}\in\mathcal C_q^m$ such that $\{i_r,j_r\} \neq \{i_{r+1},j_{r+1}\}$. We denote by $\mathcal D(m,q)$ the set of all diagrams of size $m$. If we re-write $\Psi_{N,q}(X)$ according to the decomposition that we just described, we find that: \begin{align*} &\Psi_{N,q}(X)=\\ &\sum_{m=0}^\infty \sum_{\substack{1\leq a_1\leq b_1 < a_2 \leq b_2 < \dots < a_m \leq b_m\leq T\\ \mathbf X, \mathbf Y \in (\mathbb Z^2)^{m\times q}, (i_r,j_r)_{r\leq m} \in \mathcal D(m,q)}} \,\sum_{k_1 \in \llbracket 1,b_1-a_1 + 1 \rrbracket,\dots,k_m \in \llbracket 1, b_m-a_m + 1\rrbracket} \sigma_N^{2k_1+\dots+2k_m} \\ & \times \prod_{p\leq q} p_{a_1}(x_{1}^p - x_0^p) \Big(\prod_{r=1}^{m-1} \prod_{p\leq q} p_{a_{r+1} - b_r}(x_{r+1}^p - y_r^p)\Big) \prod_{r=1}^{m} \mathbf{1}_{x_r^{i_r} = x_r^{j_r}} \mathbf{1}_{y_r^{i_r} = y_r^{j_r}} \\ & \times \sum_{\substack{a_r<n_1<\dots<n_{k_r-2} < b_r\\ \mathbf z^1 \in (\mathbb Z^2)^{k_r-2},\dots,\mathbf z^q \in (\mathbb Z^2)^{k_r-2}}} \prod_{s=1}^{k_r-2} \mathbf{1}_{z_s^{i_r} = z_s^{j_r}} \prod_{p\leq q} p_{(a_r,n_1,\dots,n_{k_r-2},b_r),(x_r^{p},z_1^p,\dots,z_{k_r-2}^p,y_r^p)}. \end{align*} See Figure \ref{fig:caseslong} for a pictorial description of the intersections associated with a diagram. Summing over all the configurations between time $a_r$ and $b_r$ gives a contribution of $\sigma_N^2 \mathbf{1}_{x_r^{i_r} = y_r^{i_r}}$ when $a_r = b_r$, and \begin{align*} &\sum_{k=2}^\infty \sigma_N^{2k} \sum_{\substack{n_0=a_r < n_1 <\dots < n_{k-2}< b_r = n_{k-1} \\x_0=x_r^{i_r} ,x_1,\dots,x_{k-2}\in \mathbb Z^2,x_{k-1}=y_r^{i_r}}} \prod_{i=1}^{k-1} p_{n_{i}-n_{i-1}}(x_i-x_{i-1})^2\\ & = \sigma_N^2 U_N(b_r-a_r,y_r^{i_r}-x_r^{i_r}), \end{align*} when $a_r<b_r$ (in this case $k_r \geq 2$ by definition). 
It directly follows that: \begin{equation}\label{eq:decompambm0}\Psi_{N,q}(X) = \sum_{m=0}^\infty \sigma_N^{2m} \sum_{\substack{1\leq a_1\leq b_1 < a_2 \leq b_2 < \dots < a_m \leq b_m\leq T\\ \mathbf X, \mathbf Y \in \mathbb Z^{m\times q}, \mathbf I =(i_r,j_r)_{r\leq m} \in \mathcal D(m,q)}} A_{X,\mathbf a, \mathbf b, \mathbf X,\mathbf Y,\mathbf I}, \end{equation} where \begin{align} \label{eq-Astar} A_{X,\mathbf a, \mathbf b, \mathbf X,\mathbf Y,\mathbf I} & = \prod_{p\leq q} p_{a_1}(x_1^p-x_0^p) \prod_{r=1}^m U_N(b_r-a_r,y_r^{i_r}-x_r^{i_r}) \mathbf{1}_{x_r^{i_r} = x_r^{j_r}} \mathbf{1}_{y_r^{i_r} = y_r^{j_r}} \nonumber \\ &\times \prod_{p\notin\{i_r,j_r\}}p(b_r-a_r,y_r^p-x_r^p) \prod_{r=1}^{m-1}\prod_{p\leq q} p(a_{r+1} - b_r, x_{r+1}^p - y_r^p). \end{align} We can further simplify the expression \eqref{eq:decompambm0}. Let $\mathbf I=(i_r,j_r)_{r\leq m}\in \mathcal D(m,q)$ be any diagram. For all $r\leq m$, denote by ${\bar k^1_{r}}$ the last index $l<r$ such that $i_r\in \{i_l,j_l\}$, i.e.\ ${\bar k^1_{r}} = \sup\{l\in \llbracket 1,r-1\rrbracket : i_r \in \{i_l,j_l\}\}$. When the set is empty we set ${\bar k_r^1}=0$. Define ${\bar k^2_{r}}$ similarly for $j_r$ instead of $i_r$ and let ${\bar k_r = \bar k_r^1 \vee \bar k_r^2}$. See figure \ref{fig:caseslong}. 
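To make the indices $\bar k_r^1, \bar k_r^2, \bar k_r$ concrete, the following Python sketch computes them for a toy diagram; the encoding of a diagram as a list of couples is ours and purely illustrative, not part of the argument.

```python
def bar_k(I):
    # I encodes a diagram (i_1, j_1), ..., (i_m, j_m) as a list of couples.
    # For each r (1-based), bar_k^1_r is the last index l < r with i_r in {i_l, j_l}
    # (0 if there is none), bar_k^2_r is defined likewise for j_r, and
    # bar_k_r = max(bar_k^1_r, bar_k^2_r).
    out = []
    for r, (i, j) in enumerate(I, start=1):
        k1 = max((l for l in range(1, r) if i in I[l - 1]), default=0)
        k2 = max((l for l in range(1, r) if j in I[l - 1]), default=0)
        out.append((k1, k2, max(k1, k2)))
    return out

# a diagram of size 3 on q = 4 particles
print(bar_k([(1, 2), (3, 4), (1, 3)]))  # [(0, 0, 0), (0, 0, 0), (1, 2, 2)]
```

In the example, the third couple $(1,3)$ has $\bar k_3^1 = 1$ (particle $1$ last appeared in the first couple) and $\bar k_3^2 = 2$ (particle $3$ last appeared in the second couple), so $\bar k_3 = 2$.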
\begin{proposition} For all $X\in (\mathbb Z^2)^q$, \begin{equation}\label{eq:decompambm}\Psi_{N,q}(X) = \sum_{m=0}^\infty \sigma_N^{2m} \sum_{\substack{1\leq a_1\leq b_1 < a_2 \leq b_2 < \dots < a_m \leq b_m\leq T\\ \mathbf x, \mathbf y\in \mathbb Z^{m}, \mathbf I =(i_r,j_r)_{r\leq m} \in \mathcal D(m,q)}} \tilde A_{X,\mathbf a, \mathbf b, \mathbf x,\mathbf y,\mathbf I}, \end{equation} where \begin{align} \label{eq-Asstar} &\tilde A_{X,\mathbf a, \mathbf b, \mathbf x,\mathbf y,\mathbf I} = \prod_{p\in \{i_1,j_1\}} p_{a_1}({x_{1}-x_0^p}) \prod_{r=1}^m U_N(b_r-a_r,y_r-x_r) \nonumber \\ &\times \prod_{r=1}^{m-1} p(a_{r+1} - b_{{\bar k}_{r+1}^1}, x_{r+1} - y_{ {\bar k}_{r+1}^1}) p(a_{r+1} - b_{{\bar k}_{r+1}^2}, x_{r+1} - y_{{\bar k}_{r+1}^2}). \end{align} \end{proposition} \begin{proof} Denote $x_r = x_r^{i_r}$ and $y_r=y_r^{i_r}$. {We obtain \eqref{eq-Asstar} from \eqref{eq-Astar} by using the semigroup property of the random walk transition probabilities and summing, at intersection times, over the locations of particles not involved in the intersection.} \end{proof} \begin{figure} \caption{Two types of diagrams. Note the different types of exchanges. In the top diagram, $\bar k_m=m-1$ and the $m$th jump is considered small (the notion of small and long jumps is defined in Section \ref{subsec-induction}). In the bottom, the $m$th jump is considered long (with respect to a given $L$) if $m-\bar k_m>L+2$. 
In that case, both paths $i_m,j_m$ will be involved in an intersection not before $a_{m-L-2}$.} \label{fig:caseslong} \end{figure} \begin{proposition} \label{prop-3.3} We have that \begin{equation} \label{eq:boundMomentAmn} \sup_{X\in (\mathbb Z^2)^q} \Psi_{N,q}(X) \leq \sum_{m=0}^\infty \sum_{\mathbf I\in \mathcal D(m,q)} \sigma_N^{2m} A_{m,N,\mathbf I}, \end{equation} where \begin{equation} \label{eq:BoundAmN} \begin{aligned} &A_{m,N,\mathbf I} = \sum_{\substack{u_i \in \llbracket 1,T \rrbracket,v_i\in \llbracket 0,T\rrbracket , 1\leq i \leq m \\ \sum_{i=1}^m {u_i} \leq T}} p_{2u_1}^\star U_N(v_m) \prod_{r=1}^{m-1} U_N(v_{r}) p^\star_{v_r + 2u_{r+1}+ 2\tilde u_{r+1}}. \end{aligned} \end{equation} with \begin{equation} \label{eq-tildeudef} \tilde u_{r} = \begin{cases} \sum_{i={\bar k}_{r}+1}^{r-1} u_{i} & \text{if}\quad {\bar k}_r<r-1,\\ \frac{u_{r-1}}{2} & \text{if}\quad {\bar k}_r=r-1, \end{cases} \end{equation} and $p^\star_k = \sup_{x\in \mathbb Z^2} p_k(x)$. \end{proposition} \begin{proof} By \eqref{eq:decompambm}, it is enough to show that \begin{equation} \label{eq:propAmn} \sup_{X\in (\mathbb Z^2)^q} \sum_{\substack{1\leq a_1\leq b_1 < a_2 \leq b_2 < \dots < a_m \leq b_m\leq T\\ \mathbf x, \mathbf y \in \mathbb Z^{m}}} \tilde A_{X,\mathbf a, \mathbf b, \mathbf x,\mathbf y,\mathbf I} \leq A_{m,N,\mathbf{I}}. \end{equation} We begin by summing on $y_m$, which gives a contribution of \[\sum_{y_m} U_N(b_m-a_m,y_m-x_m) = U_N(b_m-a_m),\] where $U_N(n)$ is defined in \eqref{eq:def_UN}. Then summing on $x_m$ gives a factor \begin{align*} &\sum_{x_m} p(a_m-b_{{\bar k^1_m}},x_m-y_{{\bar k^1_m}})p(a_m- b_{{\bar k^2_m}},x_m-y_{{\bar k^2_m}})\\ & = p(2a_m-b_{{\bar k^1_m}}-b_{{\bar k^2_m}},y_{{\bar k^1_m}} -y_{{\bar k^2_m}}) \leq p_{2a_m-b_{{\bar k^1_m}}-b_{{\bar k^2_m}}}^\star. 
\end{align*} By iterating this process we obtain that the sum on $\mathbf x,\mathbf y$ is bounded (uniformly on the starting point $X$) by \[p_{2a_1}^\star U_N(b_m-a_m) \prod_{r=1}^{m-1} U_N(b_r-a_r) p^\star_{2a_{r+1}-b_{{\bar k^1_{r+1}}}-b_{{\bar k^2_{r+1}}}}. \] If we introduce the change of variables $u_i=a_i-b_{i-1}$ and $v_i=b_i-a_i$ with $b_0=0$, then equation \eqref{eq:propAmn} follows from combining that $2 a_{r+1}-b_{{\bar k_{r+1}^1}}-b_{{\bar k_{r+1}^2}} \geq v_r + 2u_{r+1}+ 2 \tilde{u}_{r+1}$ with the monotonicity of $p_n^\star$ in $n$, which follows from \begin{equation} \label{eq-monotone} p_{n+1}^\star=\sup_y \sum_x p_n(x) p_1(y-x)\leq p_n^\star. \end{equation} \end{proof} \subsection{Estimates on $U_N$} It is clear from Proposition \ref{prop-3.3} that the function $U_N$ plays a crucial role in our moment estimates, which we will obtain by an induction in the next subsection. In the current subsection, we digress and obtain a priori estimates on $U_N$ (and ${\mathbb E}[W_N(\beta_N)^2]$). Appendix \ref{app-UN} contains some improvements that are not needed in the current work but may prove useful in follow-up work. \begin{proposition} There exists $N_0=N_0(\hat \beta)$ such that for all $N\geq N_0$ and all $n\leq N$, \begin{equation} \label{eq:bound2ndMoment} {\mathbb E}\left[W_n(\beta_N)^2\right] \leq \frac{1}{1-\sigma_N^2 R_n}. \end{equation} Furthermore, there exists $\varepsilon_{n}=\varepsilon(n,\hat \beta)\to 0$ as $n\to\infty$, such that for all $N\geq n$, \begin{equation} \label{eq:asympt2ndMoment} {\mathbb E}\left[W_n(\beta_N)^2\right] = (1+\varepsilon_n) \frac{1}{1-\hat \beta^2 \frac{\log n}{\log N}}. \end{equation} \end{proposition} \begin{proof} We first choose $N_0=N_0(\hat \beta)$ large enough such that for all $N\geq (n\vee N_0)$, we have $\sigma_N^2 R_n<1$. 
That this is possible follows from \eqref{eq:pnstar} which yields that \begin{equation} \label{eq:upper_boundRn} \forall n\in \mathbb N,\quad R_n = \sum_{s=1}^n p_{2s}(0) \leq \frac{1}{\pi}\sum_{s=1}^n \frac{1}{s} \leq \frac{1}{\pi}\log (n+1). \end{equation} For the rest of the proof, we continue in this setup. Similarly to \eqref{eq:chaosSimple}, we have (letting $n_0=x_0=0$) that \begin{align} {\mathbb E}\left[W_n(\beta_N)^2\right] & = {\mathrm E}_0^{\otimes 2}\left[e^{\beta_N^2 \sum_{k=1}^n \mathbf{1}_{S_k^1 = S_k^2}} \right]\nonumber\\ & = \sum_{k=0}^\infty \sigma_N^{2k} \sum_{0< n_1 <\dots < n_k \leq n} \sum_{x_1,\dots,x_{k}\in \mathbb Z^2} \prod_{i=1}^{k} p_{n_{i}-n_{i-1}}(x_i-x_{i-1})^2. \label{eq:whereRunFree} \end{align} Hence, if for each $k\in \mathbb N$ in \eqref{eq:whereRunFree} we let $n_{i}-n_{i-1}$ run free in $\llbracket 1,n\rrbracket$, we obtain that \begin{align*} {\mathbb E}\left[W_n(\beta_N)^2\right] & \leq \sum_{k=0}^\infty \sigma_N^{2k} \left( \sum_{m=1}^n \sum_{x\in \mathbb Z^2} p_{m}(x)^2\right)^k = \sum_{k=0}^\infty \sigma_N^{2k} R_n^k = \frac{1}{1- \sigma_N^2 R_n}, \end{align*} which gives \eqref{eq:bound2ndMoment}. On the other hand, if for each $k\in \mathbb N$ in \eqref{eq:whereRunFree} we let $n_{i}-n_{i-1}$ run free in $\llbracket 1, n/k \rrbracket$, we have \begin{align*} {\mathbb E}\left[W_n(\beta_N)^2\right] & \geq 1+ \sum_{k=1}^\infty \sigma_N^{2k} \left( \sum_{m=1}^{n/k} \sum_{x\in \mathbb Z^2} p_{m}(x)^2\right)^k\\ & \geq 1+ \sum_{k=1}^{\log n} \sigma_N^{2k} R_{n/\log n}^k = \frac{1-(\sigma_N^2 R_{n/\log n})^{\log n+1}}{1- \sigma_N^2 R_{n/\log n}}. \end{align*} By \eqref{eq:LLTsigmaN} and the fact that $R_n\sim \frac{1}{\pi} \log n$ as $n \to \infty$ by \eqref{eq-RNas}, we find that for all $N\geq n$, \[{\mathbb E}\left[W_n(\beta_N)^2\right] \geq (1+\delta_n) \frac{1}{1-\hat \beta^2 \frac{\log n}{\log N}}, \] with $\delta_n = \delta_n(\hat \beta)\to 0$ as $n\to\infty$. 
Combining this with \eqref{eq:bound2ndMoment} entails \eqref{eq:asympt2ndMoment}. \end{proof} \begin{proposition} \label{prop:factor2ndMoment} For all $M\geq 1$, we have: \begin{equation} \label{eq:firstequality} \sum_{n= 0}^M U_N(n) ={\mathbb E}\left[W_{M}^2\right]. \end{equation} Moreover, there is $C(\hat \beta)>0$ such that, as $N\to\infty$ and for all $n\leq N$, \begin{equation} \label{eq:boundP2P2ndMomentC} U_N(n) \leq C \frac{1}{\left(1-\hat \beta^2 \frac{\log n}{\log N}\right)^2} \frac{1}{n\log N}. \end{equation} \end{proposition} \begin{remark} When $n\to\infty$, one can take the constant $C$ that appears in \eqref{eq:boundP2P2ndMomentC} arbitrarily close to one. See Appendix \ref{app-UN}. \end{remark} \begin{proof} By \eqref{eq:chaos2}, we have, for $n\geq 1$, with $x_0=0$, \begin{equation} \label{eq:chaosUN} \begin{aligned} & U_N(n)\\ & = \sigma_N^2 \sum_{k=1}^\infty \sigma_N^{2(k-1)}\sum_{0< n_1 <\dots < n_{k-1} < n_k := n} \sum_{x_1,\dots,x_{k}\in \mathbb Z^2} \prod_{i=1}^{k} p_{n_{i}-n_{i-1}}(x_i-x_{i-1})^2\\ & = \sigma_N^2 \sum_{k=1}^\infty \sigma_N^{2(k-1)}\sum_{0< n_1 <\dots < n_{k-1} < n_k := n} \prod_{i=1}^{k} p_{2n_{i}-2n_{i-1}}(0). \end{aligned} \end{equation} Therefore, \begin{align*} \sum_{n=0}^M U_N(n) & = 1+ \sum_{k=1}^\infty \sigma_N^{2k} \sum_{0< n_1 <\dots < n_{k-1} < n_k \leq M} \sum_{x_1,\dots,x_{k}\in \mathbb Z^2} \prod_{i=1}^{k} p_{n_{i}-n_{i-1}}(x_i-x_{i-1})^2 \\ & = {\mathbb E}[W_{M}^2], \end{align*} which yields \eqref{eq:firstequality}. We now prove \eqref{eq:boundP2P2ndMomentC} by expressing $U_N$ as a function of a renewal process. 
From \eqref{eq:chaosUN}, one can see that the following representation for $U_N(n)$ holds when $n\geq 1$: \[U_N(n) = \sum_{k=1}^\infty (\sigma_N^2 R_N)^k {\mathrm P}\left(\tau_k^{(N)} = n\right),\] where the $\tau_k^{(N)}$ are renewal times defined by \[\tau_k^{(N)} = \sum_{i\leq k} T^{(N)}_i,\] with $(T_i^{(N)})_i$ being i.i.d.\ random variables with distribution \[{\mathrm P}\left(T^{(N)}_i = n\right) = \frac{1}{R_N} p_{2n}(0) \mathbf{1}_{1\leq n\leq N},\quad \text{and} \quad R_N = \sum_{n=1}^N p_{2n}(0). \] The renewal formulation that is used here is due to \cite{CaravennaFrancesco2019TDsr}. We also refer to [21, Chapter 1] for the connection of polymer models to renewal processes in a general context. By \cite[Proposition 1.5]{CaravennaFrancesco2019TDsr}, there exists $C>0$ such that for all $n\leq N$, \begin{equation}\label{eq:dickman} {\mathrm P}\left(\tau_k^{(N)} = n\right) \leq C k {\mathrm P}\left(T_1^{(N)} = n\right) {\mathrm P}\left(T_1^{(N)} \leq n\right)^{k-1}. \end{equation} Hence, using that $\sum_{k=1}^\infty ka^{k-1} = \frac{1}{(1-a)^2}$ for $a<1$, \begin{align*} U_N(n)& \leq C \sum_{k=1}^{\infty} (\sigma_N^2 R_N)^k k {\mathrm P}\left(T_1^{(N)} = n\right) {\mathrm P}\left(T_1^{(N)} \leq n\right)^{k-1}\\ & = C \frac{ p_{2n}(0) }{R_N} \frac{\sigma_N^2 R_N}{\left(1-\sigma_N^2 R_N \frac{R_n}{R_N} \right)^2}, \end{align*} which gives \eqref{eq:boundP2P2ndMomentC} by \eqref{eq-RNas}, \eqref{eq:pnstar} and \eqref{eq:LLTsigmaN}. \end{proof} \subsection{Summing on the $v_i$'s} In the following, we denote \begin{equation} \label{eq-F} F(u) = \frac{1}{u} \frac{1}{1-\hat \beta^2 \frac{\log (u) }{\log N}}. \end{equation} By differentiation with respect to $u$, one checks that $F$ is non-increasing. 
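Before proceeding, we note that the identity \eqref{eq:firstequality} can be checked numerically on small instances: conditioning on the first intersection time in \eqref{eq:chaosUN} gives the renewal recursion $U_N(n)=\sigma_N^2\, p_{2n}(0)+\sigma_N^2\sum_{m=1}^{n-1}p_{2m}(0)\,U_N(n-m)$, and $\sum_{n=0}^M U_N(n)$ can then be compared with a brute-force evaluation of the chaos expansion of ${\mathbb E}[W_M^2]$. The Python sketch below is purely illustrative and not part of the argument; $\sigma^2=0.1$ is an arbitrary stand-in for $\sigma_N^2$, and $M$ is tiny so that the brute-force sum stays small.

```python
import itertools

def p2(n):
    # p_{2n}(0) for SRW on Z^2: (binom(2n, n) / 4^n)^2, computed stably
    a = 1.0
    for j in range(1, n + 1):
        a *= (2 * j - 1) / (2 * j)
    return a * a

sigma2 = 0.1  # arbitrary stand-in for sigma_N^2
M = 6

# U(n) via the renewal recursion obtained from the chaos expansion
U = [1.0] + [0.0] * M
for n in range(1, M + 1):
    U[n] = sigma2 * p2(n) + sum(sigma2 * p2(m) * U[n - m] for m in range(1, n))

# brute-force chaos expansion of E[W_M^2]: sum over all sets of intersection times
chaos = 0.0
for k in range(0, M + 1):
    for times in itertools.combinations(range(1, M + 1), k):
        prod, prev = 1.0, 0
        for t in times:
            prod *= sigma2 * p2(t - prev)
            prev = t
        chaos += prod

print(sum(U), chaos)  # the two quantities agree up to float rounding
```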
\begin{proposition} \label{prop:boundAmNtildeBis} There exist $N_0(\hat \beta)>0$ and $\varepsilon_N=\varepsilon(N,\hat{\beta}) \searrow 0$ as $N\to\infty$, such that for all $N\geq N_0(\hat \beta)$, \begin{equation} \label{eq:boundPsiAtilde} \sup_{X\in (\mathbb Z^2)^q} \Psi_{N,q}(X) \leq \sum_{m=0}^\infty \sigma_N^{2m} \sum_{\mathbf I \in \mathcal D(m,q)} \left(\frac{1}{\pi}\right)^{m-1} \tilde{A}_{m,N,\mathbf I}, \end{equation} where, {recalling \eqref{eq-tildeudef},} \begin{equation}\label{eq:boundAmntildeSansv} \tilde A_{m,N,\mathbf I} = \frac{1}{1-\hat \beta^2} \sum_{ u_i \in \llbracket 1,T \rrbracket, 1\leq i \leq m} (1+\varepsilon_N)^{m} p^\star_{2u_1} \prod_{r=2}^{m} F(u_r +\tilde u_{r}) \mathbf{1}_{\sum_{i=1}^{r} u_i \leq T}. \end{equation} \end{proposition} \begin{proof} By \eqref{eq:BoundAmN}, \eqref{eq:asympt2ndMoment} and \eqref{eq:firstequality}, summing over $v_m$ in $A_{m,N,\mathbf I}$ gives a factor bounded by $\frac{1}{1-\hat \beta^2}(1+o(1))$. We will now estimate the sum over the variable $v_{m-1}$. Let $w=u_m + \tilde u_m$. (Note that by definition $3/2\leq w\leq T\leq N$, and that $w$ might be a non-integer multiple of $1/2$.) Writing $v=v_{m-1}$, the sum over $v_{m-1}$ in \eqref{eq:BoundAmN} gives a factor \begin{equation} \label{eq:sumOnvraw} \sum_{v=0}^T U_N(v) p^\star_{v+2w} =: S_{\leq w} + S_{> w} , \end{equation} where $S_{\leq w}$ is the sum on the left hand side of \eqref{eq:sumOnvraw} restricted to $v\leq \lfloor w \rfloor$ and $S_{> w}$ is the sum for $v> \lfloor w \rfloor$. Using \eqref{eq:pnstar} and \eqref{eq:boundP2P2ndMomentC}, there exists a constant $C=C(\hat \beta)>0$ such that \begin{equation} \label{bound_SbiggerU} S_{>w}\leq \frac{C}{\log N}\sum_{v=\lfloor w \rfloor +1}^T \frac{1}{v^2} \leq \frac{1}{\log N}\frac{C}{w}. 
\end{equation} Using \eqref{eq-monotone} and \eqref{eq:firstequality}, \begin{equation} \label{eq:initialestimateSleqU} \begin{aligned} S_{\leq w} \leq p^\star_{2w} \sum_{v=0}^{\lfloor w \rfloor} U_N(v)=p^\star_{2w} {\mathbb E}[W_{\lfloor w \rfloor }^2]& \leq p^\star_{2w} \frac{1}{1- \sigma_N^2 R_{\lfloor w \rfloor}}. \end{aligned} \end{equation} where the second inequality holds by \eqref{eq:bound2ndMoment} for all $N\geq N_0(\hat \beta)$ since $w\leq N$. Let $\delta_N = \delta(N,\hat \beta)\to 0$ such that \eqref{eq:LLTsigmaN} holds, and let $N'_0 = N'_0(\hat \beta)>N_0(\hat \beta)$ be such that $\sup_{N\geq N_0'} \sup_{n\leq N}\hat \beta^2 \frac{1+\log n}{\log N}(1+\delta_N) < 1$. By \eqref{eq:pnstar} and \eqref{eq:upper_boundRn}, we obtain that \begin{equation*} p^\star_{2w} \frac{1}{1- \sigma_N^2 R_w} \leq \frac{1}{\pi} \frac{1}{w} \frac{1}{1- \hat \beta^2 \frac{1+\log w}{\log N}(1+\delta_N)}. \end{equation*} Moreover, as there is $C(\hat \beta)\in (0,\infty)$ such that \[\sup_{N\geq N_0'}\sup_{n\leq N} \frac{1}{1- \hat \beta^2 \frac{1+\log n}{\log N}(1+\delta_N)} \leq C(\hat \beta),\] we see that there exists $\varepsilon'_N=\varepsilon'(N,\hat \beta)\searrow_{N\to\infty} 0$ such that for all $n\leq N$, \begin{align*} \left|\frac{1}{1- \hat \beta^2 \frac{1+\log n}{\log N}(1+\delta_N)} - \frac{1}{1- \hat \beta^2 \frac{\log n}{\log N}} \right| & \leq \frac{\varepsilon_N'}{1- \hat \beta^2 \frac{\log n}{\log N}}. \end{align*} Coming back to \eqref{eq:initialestimateSleqU}, we obtain that for all $N\geq N_0'(\hat \beta)$, \begin{equation} \label{eq:estimateSleqU} S_{\leq w} \leq \frac{1}{\pi w} \frac{1+\varepsilon_N'}{1- \hat \beta^2 \frac{\log w}{\log N}}. 
\end{equation} We finally obtain from \eqref{eq:estimateSleqU} and \eqref{bound_SbiggerU} that there exists $\varepsilon'_N=\varepsilon'(N,\hat \beta)\searrow_{N\to\infty} 0$ such that the sum in \eqref{eq:sumOnvraw} is smaller than \[\left(1+\varepsilon'_N\right)\frac{1}{\pi w} \frac{1}{1- \hat \beta^2 \frac{\log w}{\log N}} = \left(1+\varepsilon'_N\right)\frac{1}{\pi} F\left(u_m+\tilde u_m\right). \] Repeating the same observation for $v_{m-2},\dots,v_{1}$ leads to Proposition \ref{prop:boundAmNtildeBis}. \end{proof} \subsection{The induction pattern} \label{subsec-induction} Our next goal is to sum over the variables $(u_r)_{r\leq m}$ that appear in \eqref{eq:boundAmntildeSansv}. We will sum by induction, starting from $r=m$ and going down to $r=1$. To do so, we first need to define the notion of \emph{good} and \emph{bad} indices $r$. While performing the induction, encountering a \emph{bad} index will add a nuisance term to the estimate. We will then show that, for typical diagrams, the \emph{bad} indices are rare enough that the nuisance can be neglected. Let $L=L_N\in\mathbb N\setminus\{1,2\}$ be an integer to be determined later. Given a diagram $\mathbf I \in \mathcal D(m,q)$, we say that $r\in \llbracket 1,m\rrbracket$ is a \emph{long jump} if $r-\bar k_r > L + 2$, which means that the last times the two particles $i_r,j_r$ were involved in an intersection are not too recent. We say that $r$ is a \emph{small jump} if it is not a long jump. (See Figure \ref{fig:caseslong} for a pictorial description of small (top) and long (bottom) jumps.) Since small jumps drastically reduce the combinatorial choice of the new couple that intersects, the diagrams that contribute to the moments will contain mostly long jumps. Let $K=K(\mathbf I)$ denote the number of small jumps and $s_1<\dots<s_{K}$ denote the indices of the small jumps. We also set $s_{K+1} = m+1$.
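For concreteness, here is a schematic configuration (the values of $\bar k_r$ below are hypothetical, chosen only to instantiate the definitions): take $m=9$, $L=3$, and suppose that $r-\bar k_r\leq L+2$ exactly for $r\in\{1,4\}$. Then the small jumps are $1$ and $4$, the indices $2,3$ and $5,\dots,9$ are long jumps, and $K=2$ with $s_1=1$, $s_2=4$ and $s_3=m+1=10$.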
For all $i\leq K+1$ such that $s_i - s_{i-1} > L+1$, we mark the following indices $\{s_{i} - kL-1:k\in \mathbb N, s_{i}-kL -1>s_{i-1}\}$ as \emph{stopping} indices. We then call any {long jump} $r$ a \emph{fresh} index if $r$ is {stopping} or if $r+1$ is a {small jump}. Note that any stopping index is a fresh index. If $m$ is a long jump we also mark it as a fresh index. The idea is that if $k$ is a fresh index and $k-1$ is a long jump, then $k-1,k-2,\dots$ avoid nuisance terms until $k-i$ is a stopping index or a small jump; we remark that since our induction will be downward from $m$, these nuisance-avoiding indices occur in the induction \textit{following} a fresh index. Hence we say that an index $r$ is \emph{good} if it is a long jump that is not fresh. An index $r$ is \emph{bad} if it is not good. For any given diagram, one can easily determine the nature of all indices via the following procedure: (i) mark all small jumps; (ii) mark every stopping index; (iii) mark all fresh indices; (iv) all the remaining indices that have not been marked are good indices. For all $\mathbf I\in \mathcal D(m,q)$, we define for all $r< m$, \begin{align*} &\varphi(r) = \varphi(r,\mathbf I) = \\ &\begin{cases} \inf\{r'\in \llbracket r,m\rrbracket, r' \text{ is fresh}\}-L & \text{if $r$ is not a stopping index and $r+1$ is a long jump},\\ r & \text{otherwise}. \end{cases} \end{align*} We also set $\varphi(m)=m$. Here are a few immediate observations: \begin{lemma} \begin{enumerate}[label=(\roman*)] \item \label{lemmata1} If $r$ is good, then $r+1$ is a long jump. \item \label{lemmata2} If $r\in \llbracket 2,m-1\rrbracket$ is good, then $\varphi(r-1) = \varphi(r)$. \item \label{lemmata3} If $r\in \llbracket 2,m\rrbracket$ is fresh, then $\varphi(r-1) = r-L$. \item \label{lemmata4} If $r\in \llbracket 1,m-1\rrbracket$ is such that $r+1$ is a long jump, then $\varphi(r)\leq r$. 
\end{enumerate} \label{lemmata} \end{lemma} \begin{remark} \label{rk:varphi} Point \ref{lemmata4} ensures that $\varphi(r)\leq r$ for all $r\leq m$. By \ref{lemmata2}, this implies in turn that $\varphi(r)\leq r-1$ when $r$ is good. \end{remark} \begin{proof} Proof of \ref{lemmata1}. Suppose that $r$ is good. It must be that $r<m$ since by definition $m$ is either fresh or a small jump. Now, $r+1$ must be a long jump otherwise $r$ would be fresh. Proof of \ref{lemmata2}. Let $r\in \llbracket 2,m-1\rrbracket$ be a good index. We distinguish two cases. First suppose that $r-1$ is not a stopping index. Then $r-1$ cannot be fresh because $r$ is not a small jump. Therefore $\varphi(r-1) = \inf\{r' > r-1,r'\text{ is fresh}\}-L$. Furthermore, by \ref{lemmata1}, we have that $\varphi(r) = \inf\{r'\geq r,r'\text{ is fresh}\}-L$ and thus $\varphi(r-1) = \varphi(r)$. Now assume that $r-1$ is stopping. Then $\varphi(r-1) = r-1$. Moreover, by definition $r,\dots,r+L-1$ are long jumps and either $r+L-1$ is a stopping index or $r+L$ is a small jump. Therefore $r+L-1$ is a fresh index and $r,\dots,r+L-2$ are good, so that $\varphi(r) = (r+L-1)-L = r -1 = \varphi(r-1)$. Proof of \ref{lemmata3}. Let $r\in \llbracket 2,m-1\rrbracket$ be a fresh index. We first note that $r-1$ cannot be a stopping index. Indeed, if $r$ is a stopping index, then $r-1$ cannot be stopping by definition; if $r$ is not a stopping index, then as $r$ is fresh, $r+1$ must be a small jump and thus $r-1$ cannot be stopping. Now, as $r-1$ is not stopping and $r$ is fresh, we obtain that $\varphi(r-1) = r-L$. (Note that $r-1$ cannot be fresh because $r-1$ is not stopping and $r$ is a long jump.) Proof of \ref{lemmata4}. It is enough to show that $r_0 := \inf\{r'\in \llbracket r,m\rrbracket, r' \text{ is fresh}\}\leq r+L$. Since $s_1 = 1$ and $s_{K+1}=m+1$, we can find $i\leq K$ such that $r\in [s_i,s_{i+1})$. Now first suppose that $s_{i+1}-r \leq L+ 1$. 
As $r+1$ is a long jump, $s_{i+1}> r + 1 \geq s_i + 1$ and so $s_{i+1}-1$ is a long jump because it is in $(s_i,s_{i+1})$. Hence $s_{i+1} - 1$ is fresh (note that this remains true when $s_{i+1} = m+1$) and we obtain that $r_0\leq s_{i+1}-1\leq r+L$. Otherwise if $s_{i+1}-r > L+ 1$, we can let $r_\star = s_{i+1}-k_0L-1$, $k_0\in \mathbb N$, be the stopping index of $(s_i,s_{i+1})$ that satisfies $r_\star - L < r \leq r_\star$ (this is the smallest stopping index larger than $r$). By definition $r_\star$ is fresh, therefore $r_0 \leq r_\star \leq r+L$. \end{proof} For all $v\in [1,T]$, we further let \[f(v)= \frac{\log N}{\hat \beta^2} \log \left(\frac{1-\hat \beta^2\frac{\log v}{\log N}}{1-\hat \beta^2\frac{\log T}{\log N}}\right).\] Note that $f$ is non-increasing. Recall \eqref{eq-tildeudef} and the definition of $F$ in \eqref{eq-F}. \begin{lemma} \label{lem:fibo} For all $m\geq 2$, $\mathbf I\in \mathcal D(m,q)$, $k\in \llbracket 1,m-1 \rrbracket$ and $\sum_{i=1}^{m-k} u_i \leq T$ with $u_i\in \llbracket 1,T\rrbracket$, \begin{equation} \label{eq:lemmaInduction} \begin{aligned} & \sum_{u_i \in \llbracket 1,T\rrbracket, m-k < i \leq m } \prod_{r=m-k+1}^{m} F(u_r + \tilde u_r) \mathbf{1}_{\sum_{i=1}^{m-k+1} u_i \leq T} \\ &\leq \sum_{i=0}^k \frac{c_{i}^k}{(k-i)!} \frac{1}{\left(1-\hat \beta^2 \right)^{i}} f\left(\sum_{r=\varphi(m-k)}^{m-k} u_r\right)^{k-i}, \end{aligned} \end{equation} where $c_0^1=1$, $c_1^1=2$, $c_i^{k+1}= c_{i}^k +2\gamma_{k}^m \sum_{j=0}^{i-1} c_{j}^k$ for $i\leq k+1$, with $\gamma_k^m = \mathbf{1}_{m-k \text{ is bad}}$ and $c_i^k=0$ for $i>k$. \end{lemma} \begin{remark} The $c_i^k$'s depend on $m$ and $\mathbf I\in \mathcal D(m,q)$. \end{remark} Before turning to the proof, we need another result that plays a key role in the proof of Lemma \ref{lem:fibo} and which clarifies the role of good indices.
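In its proof, and again in the proof of Lemma \ref{lem:fibo}, we will repeatedly use the following elementary observation: recalling the definition of $F$ in \eqref{eq-F}, one has $f'(x)= -\frac{1}{x}\frac{1}{1-\hat \beta^2 \frac{\log x}{\log N}}=-F(x)$ on $[1,T]$ and $f(T)=0$, so that for every integer $j\geq 0$ and $v\in [1,T]$, \[ \int_{v}^T F(x) f(x)^{j}\, \mathrm{d} x = \left[-\frac{1}{j+1} f(x)^{j+1}\right]_{v}^T = \frac{1}{j+1} f(v)^{j+1}. \]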
\begin{lemma} \label{lem:indReduction} For all $k\in \llbracket 0,m-2 \rrbracket$, $j\leq k$ and $\sum_{i=1}^{m-k-1} u_i \leq T$ with $u_i\in \llbracket 1,T\rrbracket$, \begin{equation}\label{eq:indReduction} \begin{aligned} &S_kf^j(u_1,\dots,u_{m-k-1}) := \\ & \sum_{u_{m-k}=1}^T F(u_{m-k} + \tilde u_{m-k}) f\left(\sum_{i=\varphi(m-k)}^{m-k} u_i\right)^{j} \mathbf{1}_{\sum_{i=1}^{m-k} u_i \leq T} \\ & \leq \frac{1}{j+1} f\left(\sum_{i=\varphi(m-k-1)}^{m-k-1} u_i\right)^{j+1} \\ &+ \gamma_k^m \sum_{l=1}^{j+1} \frac{j!}{(j+1-l)!} \frac{2}{\left(1-\hat \beta^2\right)^{l}} f\left(\sum_{i=\varphi(m-k-1)}^{m-k-1} u_i\right)^{j+1-l}. \end{aligned} \end{equation} \end{lemma} \begin{remark} When $m-k$ is good, the right-hand side of \eqref{eq:indReduction} reduces to a single term. When $m-k$ is bad, a nuisance term appears. \end{remark} \begin{proof} We divide the proof into three cases. \textbf{Case 1:} $m-k$ is good. Necessarily $m-k+1$ is a long jump by Lemma \ref{lemmata}-\ref{lemmata1}, so if we let $r_{\text{fresh}} = \inf\{r'>m-k,r' \text{ is fresh}\}$, then $\varphi(m-k)= r_{\text{fresh}} - L$. By Remark \ref{rk:varphi} and since $r_{\text{fresh}} > m-k$, we have \[(m-k)-L < \varphi(m-k) \leq (m-k)-1.\] Define \[v:=\sum_{i=\varphi(m-k)}^{m-k-1} u_i \in \llbracket 1,T\rrbracket .\] As $m-k$ is a long jump, we first observe that \begin{equation} \label{eq:umtildebig} \tilde u_{m-k} \geq u_{m-k-1} +\dots + u_{m-k-L-1}\geq v. \end{equation} Since $F$ and $f$ are non-increasing, see \eqref{eq-F}, this implies that \begin{align*} &S_kf^j \leq \sum_{u_{m-k}=1}^T F\left(u_{m-k}+v\right) f\left(u_{m-k} + v\right)^{j} \mathbf{1}_{u_{m-k} + v \leq T} \\ &\leq \int_{v}^T \frac{1}{x} \frac{1}{1-\hat \beta^2 \frac{\log x }{\log N}} f(x)^{j} \mathrm{d} x = \left[ - \frac{1}{j+1} f(x)^{j+1} \right]^{T}_{v} = \frac{1}{j+1} f(v)^{j+1}, \end{align*} where in the comparison to the integral, we have used that $F(x) f(x)^j$ decreases in $x\in [1,T]$.
Given that $\varphi(m-k-1) = \varphi(m-k)$ by Lemma \ref{lemmata}-\ref{lemmata2}, we have the identity $v = \sum_{i=\varphi(m-k-1)}^{m-k-1} u_i$. Hence \eqref{eq:indReduction} holds. \textbf{Case 2:} $m-k$ is fresh. By Lemma \ref{lemmata}-\ref{lemmata3}, we have $\varphi(m-k-1) = m-k-L$. Note that, in contrast with Case 1, we have from the definition that $\varphi(m-k)=m-k$, and therefore we cannot follow the same argument as in Case 1. Instead, we now define $v$ by summing from $\varphi(m-k-1)$ and not from $\varphi(m-k)$, i.e. \[v:=\sum_{i=\varphi(m-k-1)}^{m-k-1} u_i \in \llbracket 1,T\rrbracket.\] We then decompose \begin{equation} \label{eq:decSk} S_k f^j = S_k^{\leq v} f^j + S_k^{> v} f^j, \end{equation} where $S_k^{\leq v} f^j$ is the restriction of the sum in $S_k f^j$ to $u_{m-k} \in \llbracket 1,v\rrbracket$. Given that $m-k$ is a long jump, the bound \eqref{eq:umtildebig} holds again. Hence, using that $F$ and $f$ are non-increasing, we find that \begin{equation} \label{eq:sameWithv} \begin{aligned} S_k^{\leq v} f^j & \leq \sum_{u_{m-k}=1}^{v} \frac{1}{u_{m-k} + v} \frac{1}{1-\hat \beta^2 \frac{\log (u_{m-k}+v) }{\log N}} f(u_{m-k})^j \mathbf{1}_{u_{m-k} + v\leq T}\\ &\leq \frac{1}{v} \frac{1}{1-\hat \beta^2} \sum_{u_{m-k}=1}^{v} f(u_{m-k})^j\\ & \leq \frac{1}{v} \frac{1}{1-\hat \beta^2} \left(f(1)^j + \int_{1}^v f(x)^j\mathrm{d} x\right), \end{aligned} \end{equation} by comparison to an integral (using that $f$ is non-increasing). Integrating by parts and using that $f'(x)= -\frac{1}{x}\frac{1}{1-\hat \beta^2 \frac{\log (x)}{\log N}}$, we see that for all $j\geq 1$, \begin{align*} f(1)^j + \int_{1}^v f(x)^j\mathrm{d} x &= vf(v)^j - j\int_{1}^v xf'(x) f(x)^{j-1}\mathrm{d} x \\ &\leq vf(v)^j + \frac{j}{1-\hat \beta^2} \int_{1}^v f(x)^{j-1}\mathrm{d} x.
\end{align*} If we iterate the integration by parts, we obtain that \begin{align*} f(1)^j + \int_{1}^v f(x)^j\mathrm{d} x \leq v \sum_{i=0}^j \frac{j!}{(j-i)!}\left(\frac{1}{1-\hat \beta^2 }\right)^i f(v)^{j-i}, \end{align*} and so \begin{equation} \label{eq:IPP} S_k^{\leq v} f^j \leq \sum_{i=0}^j \frac{j!}{(j-i)!}\left(\frac{1}{1-\hat \beta^2 }\right)^{i+1} f(v)^{j-i}. \end{equation} On the other hand, we have \begin{align*} S_k^{> v} f^j& \leq \sum_{u_{m-k}=v+1}^T \frac{1}{u_{m-k}} \frac{1}{1-\hat \beta^2 \frac{\log (u_{m-k}) }{\log N}} f(u_{m-k})^j\\ &\leq \int_{v}^T \frac{1}{x} \frac{1}{1-\hat \beta^2 \frac{\log (x) }{\log N}} f(x)^j \mathrm{d} x = \left[ - \frac{1}{j+1} f(x)^{j+1} \right]^{T}_v \leq \frac{1}{j+1} f(v)^{j+1}. \end{align*} Combining the two previous estimates yields \eqref{eq:indReduction}. \textbf{Case 3:} $m-k$ is a small jump. We have that $f(\sum_{i=\varphi(m-k)}^{m-k} u_i)\leq f(u_{m-k})$ as $f$ is non-increasing. Moreover, $\tilde u_{m-k} \geq \frac{u_{m-k-1}}{2}$ always holds. Hence, if we use the same decomposition as in \eqref{eq:decSk} with $v=u_{m-k-1}$, using that $F$ is non-increasing we find that \begin{align*} S_k^{\leq v} f^j & \leq \sum_{u_{m-k}=1}^{v} \frac{1}{u_{m-k} + v/2} \frac{1}{1-\hat \beta^2 \frac{\log (u_{m-k}+v/2) }{\log N}} f(u_{m-k})^j \mathbf{1}_{u_{m-k} + v/2\leq T}\\ & \leq \frac{2}{v} \frac{1}{1-\hat \beta^2} \left(f(1)^j + \int_{1}^v f(x)^j\mathrm{d} x\right)\\ & \leq 2\sum_{i=0}^j \frac{j!}{(j-i)!}\left(\frac{1}{1-\hat \beta^2 }\right)^{i+1} f(v)^{j-i}, \end{align*} where we have used the integration by parts from Case 2 and that $f$ is non-increasing in the comparison to the integral. Furthermore, we have $S_k^{> v} f^j \leq \frac{1}{j+1} f(v)^{j+1}$ as in Case 2. Finally, since $m-k$ is not a long jump we have $\varphi(m-k-1) = m-k-1$, and therefore \eqref{eq:indReduction} follows. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:fibo}] We prove the lemma by induction on $k$.
The case $k=1$ follows from Lemma \ref{lem:indReduction} with $j=k=0$. Assume now that \eqref{eq:lemmaInduction} holds for some $k\in \llbracket 1,m-2 \rrbracket$. Then by \eqref{eq:indReduction} we obtain that the left hand side of \eqref{eq:lemmaInduction} for the index $k+1$ is smaller than the sum of all the entries of the following matrix, where we have set $\mu = 1-\hat \beta^2$ and $f=f(v)$ with $v= \sum_{i=\varphi(m-k-1)}^{m-k-1} u_i$: \begin{equation} \label{eq:matrix} \begin{pmatrix} \frac{c_0^k}{(k+1)!}f^{k+1} & \frac{2 \gamma_k^m c_0^k}{k!\mu} f^{k} & \frac{2\gamma_k^mc_0^k }{(k-1)!\mu^2} f^{k-1} & \cdots & \frac{2\gamma_k^mc_0^k}{\mu^{k}} f & \frac{2\gamma_k^mc_0^k}{\mu^{k+1}}\\ 0 & \frac{c_1^k}{k! \mu} f^{k} & \frac{2\gamma_k^m c_1^k}{(k-1)!\mu^2} f^{k-1} & \dots & \frac{2\gamma_k^mc_1^k}{\mu^{k}} f & \frac{2\gamma_k^mc_1^k}{\mu^{k+1}}\\ 0&0 & \frac{c_2^k}{(k-1)!\mu^2} f^{k-1} & \dots & \frac{2\gamma_k^mc_2^k}{\mu^{k}} f & \frac{2\gamma_k^mc_2^k }{\mu^{k+1}}\\ \vdots& \ddots & &\\ 0& 0& 0&\dots & \frac{c_k^k}{\mu^{k}} f & \frac{2\gamma_k^mc_k^k }{\mu^{k+1}} \end{pmatrix}, \end{equation} and collecting the coefficients of the various terms $f^r$, $r\in \llbracket 0,k+1\rrbracket$, which amounts to summing over the columns of \eqref{eq:matrix}, gives \eqref{eq:lemmaInduction} for $k+1$. \end{proof} Recall \eqref{eq:boundAmntildeSansv}. Lemma \ref{lem:fibo} yields the following. \begin{proposition} \label{prop:Amn} There exists $C=C(\hat \beta)>0$ and $\varepsilon_N=\varepsilon(N,\hat \beta)\to 0$ as $N\to\infty$, such that \begin{equation} \label{eq:upperBoundAmN2} \begin{aligned} &\tilde A_{m,N,\mathbf I} \leq C (1+|\varepsilon_N|)^{m} \sum_{i=0}^{m} \frac{ c_{i}^{m}}{(m-i)!} \times \left(\frac{\log N}{\hat \beta^2} \right)^{m-i} \frac{1}{\left(1-\hat \beta^2\right)^{i}} \lambda_{T,N}^{2(m-i)}, \end{aligned} \end{equation} where $\lambda_{T,N}$ is defined in \eqref{eq:lambdaT}. 
\end{proposition} \begin{proof}[Proof of Proposition \ref{prop:Amn}] By Proposition \ref{prop:boundAmNtildeBis} and Lemma \ref{lem:fibo} applied to $k=m-1$, we have: \begin{align*} \tilde A_{m,N,\mathbf I} & \leq \frac{1}{1-\hat \beta^2} (1+\varepsilon_N)^{m} \sum_{u_1=1}^T \frac{C}{u_1} \sum_{i=0}^{m-1} \frac{ c_i^{m-1}}{(m-1-i)!} f(u_1)^{m-1-i} \frac{1}{\left(1-\hat \beta^2\right)^{i}} \\ & \leq \frac{C}{1-\hat \beta^2} (1+\varepsilon_N')^{m} \sum_{i=0}^{m} \frac{ c_i^{m-1}+c_{i-1}^{m-1}\mathbf{1}_{i\geq 1}}{(m-i)!} f(1)^{m-i} \frac{1}{\left(1-\hat \beta^2\right)^{i}} , \end{align*} where in the second inequality, we have used that \begin{align*}\sum_{u_1=1}^T \frac{1}{u_1} f(u_1)^{m-i} \leq \sum_{u_1=1}^T F(u_1) f(u_1)^{m-i}&\leq f(1)^{m-i} + \int_{1}^T F(u_1) f(u_1)^{m-i}\mathrm{d} u_1 \\ &= f(1)^{m-i} + \frac{f(1)^{m-i+1}}{m-i+1}, \end{align*} using that $F(u)f(u)$ is non-increasing in the comparison to the integral. This yields \eqref{eq:upperBoundAmN2} since \[c_i^{m-1}+c_{i-1}^{m-1}\mathbf{1}_{i\geq 1}\leq c_{i}^{m-1} +2\gamma_{m-1}^m \sum_{j=0}^{i-1} c_{j}^{m-1} = c_{i}^m,\] where $\gamma_{m-1}^m=1$ because the index $1$ is always bad (it is a small jump). \end{proof} \begin{lemma} \label{lem:ci} For all $\mathbf I\in \mathcal D(m,q)$, for all $k\leq m$: \begin{equation} \label{eq:inductioncik} \forall i\leq k, \quad c_i^k \leq 3^i \prod_{r=1}^{k-1} (1+\gamma_r^m). \end{equation} \end{lemma} \begin{proof} We prove it by induction on $k$. The estimate holds for $k=1$ since $c_0^1=1$ and $c_1^1=2$. Suppose that \eqref{eq:inductioncik} holds for some $k\leq m-1$.
Then, for all $i\leq k+1$, \[c_{i}^{k+1} = c_{i}^k +2 \gamma_{k}^m \sum_{j=0}^{i-1} c_{j}^k \leq \prod_{r=1}^{k-1} (1+\gamma_r^m) \left(3^i + 2 \gamma_{k}^m \sum_{j=0}^{i-1} 3^j\right) \leq 3^{i} \prod_{r=1}^{k} (1+\gamma_r^m).\] \end{proof} \subsection{Proof of Theorem \ref{th:momentwithoutTriple}} By Proposition \ref{prop:decInChaos}, it is enough to show that \begin{equation} \label{eq:enoughTo} \sup_{X\in (\mathbb Z^2)^q} \Psi_{N,q}(X) \leq e^{\lambda_{T,N}^2 \binom{q}{2} + cq^{3/2}+ o(1)q^2}, \end{equation} for some $c=c(\hat \beta)$. Using Proposition \ref{prop:boundAmNtildeBis}, we have \begin{equation} \label{eq:preFinalBoundPsi} \sup_{X\in (\mathbb Z^2)^q} \Psi_{N,q}(X) \leq \sum_{m=0}^\infty \sigma_N^{2m} \left(\frac{1}{\pi}\right)^{m-1} \sum_{\mathbf I\in \mathcal D(m,q)} \tilde{A}_{m,N,\mathbf I}, \end{equation} where \eqref{eq:upperBoundAmN2} gives an upper bound on the $\tilde A_{m,N,\mathbf I}$. Observe that by \eqref{eq:inductioncik}, we have \[c_{i}^{m} \leq 3^i 2^{\sum_{r=1}^{m-1} \mathbf{1}_{r \text{ is bad}}} \leq 3^i 2^{\sum_{r=1}^{m} \mathbf{1}_{r \text{ is bad}}}\leq 3^i 2^{2n(\mathbf I) + m/L+1},\] where $n(\mathbf I)$ is the number of small jumps in $\mathbf I$. Indeed, an index $r$ is bad if it is a small jump or a fresh index. The number of small jumps is $n(\mathbf I)$. A fresh index is either equal to $m$, or a stopping index, or an index adjacent to a small jump, so the number of fresh indices is at most $1+n(\mathbf I)$ plus the number of stopping indices. Since stopping indices are spaced at least $L$ steps apart, there are at most $m/L$ stopping indices. Hence there are at most $2n(\mathbf I) + m/L+1$ bad indices. For a fixed $n\leq m$, let us compute the number of diagrams in $\mathcal D(m,q)$ such that $n(\mathbf I)=n$. One first has to choose the locations of the small jumps, which gives $\binom{m}{n}$ possibilities.
Now if $m$ is a small jump $(m-{\bar k_m}\leq L+2)$, it means that at least one of the two particles $\{i_m,j_m\}$ is the same as one of the particles $\{i_{m-L-2},j_{m-L-2},\dots,i_{m-1},j_{m-1}\}$, so that there are at most $2(L+2)q$ choices for the couple $(i_m,j_m)$. On the other hand, if $m$ is a long jump, there are at most $\binom{q}{2}$ possibilities. By repeating the argument, we finally find that the number of diagrams in $\mathcal D(m,q)$ such that $n(\mathbf I)=n$ is less than $\binom{m}{n} (2(L+2)q)^n \binom{q}{2}^{m-n}$. Hence, by \eqref{eq:preFinalBoundPsi}, Proposition \ref{prop:Amn}, Lemma \ref{lem:ci} and \eqref{eq:LLTsigmaN}, there exists $\varepsilon_N \searrow 0$ such that \begin{align*} &\sup_{X\in (\mathbb Z^2)^q} \Psi_{N,q}(X) \leq C(\hat \beta)\times\\ & \sum_{m=0}^\infty (1+\varepsilon_N)^{m} \sum_{n=0}^m \binom{m}{n} (2(L+2)q)^n \binom{q}{2}^{m-n} \sum_{i=0}^{m} \frac{3^i 2^{2n + m/L+1} }{(m-i)!} \left(\frac{\hat \beta^2}{\log N} \right)^{i} \frac{\lambda_{T,N}^{2(m-i)}}{\left(1-\hat \beta^2\right)^{i}}. \end{align*} The sum over $n$ gives a factor of $(8(L+2)q + \binom{q}{2})^m$. Exchanging the sums over $i$ and $m$ entails \begin{align*} \sup_{X\in (\mathbb Z^2)^q} \Psi_{N,q}(X) &\leq C\sum_{i=0}^{\infty} (1+\varepsilon_N)^{i} 3^i 2^{i/L} \left(\frac{\hat \beta^2}{\log N} \right)^{i} \left(8(L+2)q + \binom{q}{2}\right)^i \frac{1}{\left(1-\hat \beta^2\right)^{i}} \times \\ & \hspace{10mm} \sum_{m=i}^\infty (1+\varepsilon_N)^{m-i}\times \left(8(L+2)q + \binom{q}{2}\right)^{m-i} \frac{2^{ (m-i)/L} }{(m-i)!} \lambda_{T,N}^{2(m-i)}.
\end{align*} So if we assume that \begin{equation} \label{eq:conditionOnr} r = 3(1+\varepsilon_N) \frac{2^{1/L}}{\left(1-\hat \beta^2\right)} \left(\frac{\hat \beta^2}{\log N} \right) \left(8(L+2)q + \binom{q}{2}\right) < 1, \end{equation} we obtain the bound: \begin{equation}\label{eq:finalBound} \sup_{X\in (\mathbb Z^2)^q} \Psi_{N,q}(X) \leq \frac{C}{1-r} e^{(1+\varepsilon_N)(8(L+2)q + \binom{q}{2})2^{1/L}\lambda_{T,N}^2}. \end{equation} Taking $L=\lceil \sqrt{q} \rceil$ then gives, together with \eqref{eq:condition20nr}, the desired conclusion \eqref{eq:enoughTo}, since $2^{1/L}\leq 1+2/L$ for $L\geq 1$. \iffalse If $q(N)=q_0\in \mathbb N$ is constant, we can let $L=3$. Then, the condition \eqref{eq:conditionOnr} is trivially satisfied since $r\to 0$ as $N\to\infty$ for any fixed $\hat \beta <1$. Hence \eqref{eq:enoughTo} holds in this case. (In fact, when $q$ is constant, it is not necessary to introduce the distinction between good and bad indices, as one can treat every index as a bad index in the induction (Lemma \ref{lem:fibo} and Lemma \ref{lem:indReduction}) and still arrive at \eqref{eq:enoughTo} with the same subsequent arguments). If $q=q(N)\to\infty$, then we can choose any $L\to\infty$ such that $L=o(q)$, so that together with \eqref{eq:condition20nr}, \eqref{eq:finalBound} holds and implies the estimate \eqref{eq:enoughTo}. \fi \qed \section{Discussion and concluding remarks} \label{sec-discussion} We collect in this section several comments concerning the results of this paper. \begin{enumerate} \item Our results already allow one to obtain some estimates on the maximum of $Y_N(x):= \log W_N(\beta_N,x)$ over subsets $D\subset [0,1]^2$. Specifically, let $\gamma>0$ be given and define $Y_N^*=\sup_{x\in D} Y_N(x)$.
By a union bound and Chebyshev's inequality we have that \begin{align*}&{\mathbb P}\left( Y_N^* \geq \delta \sqrt {\log N} \right) \leq 2 N^{2\gamma} {\mathbb P}\left(Y_N(0) \geq \delta \sqrt {\log N} \right)\\ & \leq 2 N^{2\gamma} {\mathbb E}[W_N^q] e^{-q \delta\sqrt{\log N}} \leq N^{2\gamma +\frac{q^2\lambda^2}{{2 \log N}}-\frac{q\delta}{\sqrt{\log N}}+o(1)}, \end{align*} where we used \eqref{eq:estimate} in the last inequality. The optimal $q$ (disregarding the constraint in \eqref{eq:condition20nr}) is $q/\sqrt{\log N}=\delta/\lambda^2$, and for that value the right side of the last display decays to $0$ if $\delta^2>4\gamma \lambda^2$. The condition on $q$ in \eqref{eq:condition20nr} then gives the constraint that $\gamma< \frac{1}{6} \lambda^2 \frac{1-\hat \beta^2}{\hat \beta^2}$, which for $\hat \beta$ small reduces to $\gamma < 1/6$. We however do not expect this Chebyshev-based bound to be tight, even if one uses the optimal $q$ disregarding \eqref{eq:condition20nr}; this is due to the inherent inhomogeneity in time of the model. We hope to return to this point in future work. \iffalse Thus, if one shoots for the conjectured optimal estimates, our results are sufficient only if one considers, for $\hat \beta$ small, the maximum over small subsets. (We note that one would hope for $\gamma=1/2$, which would allow to consider the maximum over $x\in[0,1]^2$.) and observe that $e^{\frac{1}{2} \lambda^2 q^2} e^{-q \alpha \sqrt{\log N}}$ attains its minimum for $q=\frac{\alpha}{\lambda^2} \sqrt{\log N}$.
Plugging this value in the previous estimate gives a factor \[N^{2\gamma} e^{-\frac{\alpha^2}{2\lambda^2}\log N(1+o(1))} = e^{\left(2\gamma - \frac{\alpha^2}{2\lambda^2}\right)\log N(1+o(1))}.\] Therefore we can choose $\alpha^2 = 2\lambda^2 (2\gamma+\varepsilon)$, so that \[{\mathbb P}\left(\sup_{x\in \llbracket 0,N^{\gamma}\rrbracket^2} Y_N(x) \geq 2\lambda^2 (2\gamma + \varepsilon) \sqrt {\log N} \right) \leq e^{-\varepsilon \log N(1+o(1))}.\] The corresponding $q$ will be $q^2 = 2 \lambda^{-2} (2\gamma + \varepsilon)\log N.$ We believe that we should capture the same behavior for the maximum on scales of the order up to $\gamma = 1/2$. We therefore formulate the following conjecture. Define: \[q_\star^2(N,\hat \beta) = \frac{2}{\lambda^2(\hat \beta)} \log N.\] \begin{conjecture} For all $\delta\in(0,1)$ and $\hat\beta\in (0,1)$ we have, uniformly for all $q< (1-\delta) q_\star(N,\hat \beta)$, as $N\to\infty$, \begin{equation} \label{eq:coro} {\mathbb E}\left[W_N^{q} \right] \leq e^{\lambda^2 \frac{q(q-1)}2(1+o(1))}. \end{equation} \end{conjecture} Let $\gamma < 1/6$. By the discussion in the last section, it is enough to show that there exists $\hat \beta_0>0$ and $\varepsilon_0>0$, such that for all $\hat \beta<\hat \beta_0$, the estimate \eqref{eq:estimate} holds up to $q\leq 2 \lambda^{-2} (2\gamma + \varepsilon)\log N$ for any $\varepsilon \in(0,\varepsilon_0)$. But since $\lambda(\hat \beta)\sim \hat \beta^2$ as $\hat \beta\to 0$ and $2\gamma < 1/3$, we can find $\varepsilon_0>0$ such that for $\hat \beta$ small enough, we have \[q_0(N,\hat \beta) > \frac{2}{\lambda^2}(2\gamma + \varepsilon_0)\log N.\] Hence the result follows from Theorem \ref{th:main} by choosing $\hat \beta_0$ small enough. \fi \item In view of the last sentence in Remark \ref{rem-1.3}, it is of interest to obtain a lower bound on ${\mathbb E} [W_N^q]$ that matches the upper bound, that is, ${\mathbb E}[W_N^q]\geq e^{\binom{q}{2} \lambda^2(1-\varepsilon_N)}$.
This can be found in \cite{CZLB}. \end{enumerate} \appendix \section{Proof of \eqref{eq:pnstar}} \label{sec-pnstar} \iffalse \begin{lemma} \label{lem:pnstardecreases} The sequence $(p_{n}^\star)_{n\geq 0}$ is decreasing. \end{lemma} \begin{proof} For all $x\in \mathbb Z^2$, we have \[p_{n+1}(x) = \sum_{y\in \mathbb Z^2} p_n(x-y) p_1(y)\leq p_n^\star,\] so that $p_{n+1}^\star \leq p_n^\star$. \end{proof} \begin{lemma} The following holds: \begin{equation}\label{eq:pnstar} \forall n\geq 1: \quad p_{n}^\star \leq \frac{2}{\pi} \frac{1}{n} . \end{equation} \end{lemma} \begin{proof} Here is the argument. \fi First note that $p_{2n}^\star\leq p_{2n}(0)$ since, by the Cauchy-Schwarz inequality, \[p_{2n}(x)=\sum_{y} p_n(x-y) p_n(y) \leq \sum_y p_n(y)^2 = p_{2n}(0).\] Let $p^{(d)}_{2n}$ be the return probability of the $d$-dimensional SRW to $0$. A direct computation gives that $p^{(2)}_{2n}=(p^{(1)}_{2n})^2$ (see e.g. \cite[Page 184]{Durrett}). We will show that $a_n=\sqrt{2n}\, p^{(1)}_{2n}$ is increasing. We have \[ a_n =\sqrt{2n}\, 2^{-2n} \binom{2n}{n}.\] Hence, \begin{eqnarray*} \frac{a_{n+1}}{a_n}& = &\frac14 \sqrt{\frac{n+1}{n}} \frac{ (2n+2)(2n+1)}{(n+1)^2}\\ &=& \sqrt{1/(n(n+1))} (n+ (n+1))/2. \end{eqnarray*} Since $(a+b)/2\geq \sqrt {ab}$, we conclude (using $a=n$ and $b=n+1$) that $a_{n+1}/a_n\geq 1$. Let $p_{2n+1}^{(1)}$ be the probability that the $1$-dimensional SRW is at $1$ after $2n+1$ steps. By the random walk representation \cite[Remark on p.\ 185]{Durrett}, we have that $p_{2n+1}^\star \leq (p^{(1)}_{2n+1})^2$. A similar line of argument to the above shows that $b_n=\sqrt{2n+1}\,p_{2n+1}^{(1)}$ is increasing in $n$.
Indeed, \begin{align*} \frac{b_{n+1}}{b_n} & = \frac{1}{4} \frac{\sqrt{2n+3}}{\sqrt{2n+1}} \frac{(2n+3)(2n+2)}{(n+2)(n+1)}\\ & = \frac{2n+3}{2\sqrt{(n+1)(n+2)}} \frac{\sqrt{(2n+3)(n+1)}}{\sqrt{(2n+1)(n+2)}}, \end{align*} where the first fraction is at least $1$ by the inequality $(a+b)/2\geq \sqrt{ab}$, and so is the second, as one checks by expanding the products. Now, we know from the local limit theorem that $a_n$ and $b_n$ converge to $2/\sqrt{2\pi}$; since they are increasing, they are always smaller than this limit. This leads to \eqref{eq:pnstar}. \qed \section{Improved estimates on $U_N$} \label{app-UN} When $n$ is taken large enough, the estimate \eqref{eq:boundP2P2ndMomentC} can be improved as follows. \begin{proposition} \label{prop:factor2ndMomentbis} There exists $\varepsilon_{n}=\varepsilon(n,\hat \beta)\to 0$ such that as $n\to\infty$, uniformly for $N\geq n$, \begin{equation} \label{eq:boundP2P2ndMoment} U_N(n) = (1+\varepsilon_{n})\frac{\hat \beta^2}{\left(1-\hat \beta^2 \frac{\log n}{\log N}\right)^2} \frac{1}{ n \log N} . \end{equation} \end{proposition} \begin{proof} Since $(S_n^1-S_n^2)\eqlaw (S_{2n})$, we can write \begin{equation} \label{eq:UNinSRW} U_N(n+1) = \sigma_N^2 {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{n} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2n}= 0} \right]. \end{equation} Consider $\ell =\ell_n = n^{1-\varepsilon_n}$ with $\varepsilon_n = \frac{1}{\log \log n}$, so that $\ell_n=o(n)$ and $\varepsilon_n\to 0$. \textbf{First step:} As $n\to\infty$ with $n\leq N$, \begin{equation} \label{eq:firstStep} {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{n} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2n}= 0} \right] \sim {\mathrm E}_{0}\left[ e^{ \beta_N^2(\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}+ \sum_{i=n-\ell}^{n} \mathbf 1_{S_{2i} = 0})} \mathbf{1}_{S_{2n}= 0 } \right].
\end{equation} We bound the difference between the two sides of \eqref{eq:firstStep}, which, using that $|e^{-x}-1|\leq |x|$ for $x\geq 0$, is at most \begin{align*} &{\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{n} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2n}= 0} \times \beta_N^2 \sum_{j=\ell}^{n-\ell} \mathbf 1_{S_{2j} = 0} \right]\\ &= \beta_N^2 \sum_{j=\ell}^{n-\ell} {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{j} \mathbf 1_{S_{2i} = 0}} \mathbf 1_{S_{2j} = 0}\right] {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{n-j} \mathbf 1_{S_{2i} = 0}} \mathbf 1_{S_{2(n-j)} = 0} \right], \end{align*} where we have used Markov's property in the second line. By \eqref{eq:UNinSRW} and \eqref{eq:boundP2P2ndMomentC}, the last sum is smaller than \begin{align*} C\beta_N^2 \sum_{j=\ell}^{n-\ell} \frac{1}{j} \frac{1}{n-j} \leq 2C\beta_N^2 \sum_{j=\ell}^{n/2} \frac{1}{j} \frac{1}{n/2} &\leq \frac{1}{n}C'\beta_N^2 \log\left(\frac{n}{\ell_n}\right) \leq \frac{1}{n}C''\varepsilon_n =o(n^{-1}). \end{align*} Since the left hand side of \eqref{eq:firstStep} is bigger than $c n^{-1}$ for some constant $c>0$, this shows \eqref{eq:firstStep}. \textbf{Second step:} As $n\to\infty$ with $n\leq N$, \begin{equation} \label{eq:2ndStep} \begin{aligned} &{\mathrm E}_{0}\left[ e^{ \beta_N^2(\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}+ \sum_{i=n-\ell}^{n} \mathbf 1_{S_{2i} = 0})} \mathbf{1}_{S_{2n}= 0 } \right] \\ &\sim {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf{1}_{S_{2i}=0}} \right] {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=n-\ell}^{n} \mathbf 1_{S_{2i} = 0}}\mathbf 1_{S_{2n} = 0} \right].
\end{aligned} \end{equation} By Markov's property, we can write the LHS of \eqref{eq:2ndStep} as \begin{align*} & \sum_{x\in \mathbb Z^2} {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2\ell} = x} {\mathrm E}_x \left[e^{\beta_N^2 \sum_{i=n-2\ell}^{n-\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2{n-\ell}}= 0 }\right]\right]\\ & = \sum_{x\in \mathbb Z^2} {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2\ell} = x} {\mathrm E}_0 \left[e^{\beta_N^2 \sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2n-\ell}=x} \right]\right]. \end{align*} Therefore the difference in \eqref{eq:2ndStep} can be written as $\sum_{x\in \mathbb Z^2} \Delta_x$, where \begin{align*} & \Delta_x := {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2\ell} = x}{\mathrm E}_0 \left[e^{\beta_N^2 \sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \left(\mathbf{1}_{S_{2{n-\ell}}= 0 } - \mathbf{1}_{S_{2{n-\ell}}= x }\right)\right]\right]. \end{align*} Since ${\mathrm E}_0 \left[e^{\beta_N^2 \sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \right] \leq C(\hat \beta)$ by \eqref{eq:bound2ndMoment}, we have \[ \sum_{|x| > \sqrt{\ell} n^{\varepsilon/4}} |\Delta_x| \leq C\sum_{|x| > \sqrt{\ell} n^{\varepsilon/4}} {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2\ell} = x}\right]. \] By H\"older's inequality with $p^{-1}+q^{-1}=1$, and $p>1$ close enough to $1$ so that $\sqrt p \hat{\beta} <1$, \begin{align*} {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2\ell} = x}\right] &\leq {\mathrm E}_{0}\left[ e^{ p\beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}}\right]^{\frac{1}{p}} p_{2\ell}(x)^{\frac 1 q} \leq C(\hat \beta) \ell_n^{-1}e^{-\frac{1}{2q} \frac{|x|^2}{\ell_n}}, \end{align*} for $n$ large enough.
Therefore, \begin{align*} \sum_{|x| > \sqrt{\ell} n^{\varepsilon/4}} |\Delta_x| &\leq C \sum_{|x| > \sqrt{\ell} n^{\varepsilon/4}} \ell_n^{-1}e^{-\frac{1}{2q} \frac{|x|^2}{\ell_n}} \leq C e^{-\frac{1}{2q} n^{\varepsilon/2}} = o(n^{-1}). \end{align*} We now estimate the sum of the $\Delta_x$ for $|x| \leq \sqrt{\ell} n^{\varepsilon/4}$. We start by bounding the expectation inside the definition of $\Delta_x$: \begin{equation} \label{eq:insideDelta} \begin{aligned} &{\mathrm E}_0 \left[e^{\beta_N^2 \sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \left(\mathbf{1}_{S_{2(n-\ell)}= 0 } - \mathbf{1}_{S_{2(n-\ell)}= x }\right)\right]\\ &= \sum_{y\in \mathbb Z^2} {\mathrm E}_0 \left[e^{\beta_N^2 \sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2\ell}= y} \right]\left(p_{2n-4\ell}(y)-p_{2n-4\ell}(y-x)\right). \end{aligned} \end{equation} By the same argument as above, we can prove that the above sum restricted to $|y|\geq \sqrt \ell n^{\varepsilon/4}$ is negligible with respect to $n^{-1}$, uniformly for $|x| \leq \sqrt{\ell} n^{\varepsilon/4}$. On the other hand, by the local limit theorem we have \[ \sup_{|x|\leq \sqrt \ell n^{\varepsilon/4},|y|\leq \sqrt \ell n^{\varepsilon/4}} \left|p_{2n-4\ell}(y)-p_{2n-4\ell}(y-x)\right| = o(n^{-1}), \] since $\ell_n n^{\varepsilon/2} = n^{1-\varepsilon_n/2}=o(n)$. Thus, the quantity in \eqref{eq:insideDelta} is bounded uniformly for $|x|\leq \sqrt \ell n^{\varepsilon/4}$ by \[ {\mathrm E}_0 \left[e^{\beta_N^2 \sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \right] \times o(n^{-1})=o(n^{-1}).\] This completes the proof of \eqref{eq:2ndStep}. \textbf{Third step:} As $n\to\infty$ with $n\leq N$, \begin{equation} \label{eq:3rdstep} {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=n-\ell}^{n} \mathbf 1_{S_{2i} = 0}}\mathbf 1_{S_{2n} = 0} \right] \sim {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \right] p_{2n}(0).
\end{equation} Equivalence \eqref{eq:3rdstep} can be proven by following the same line of arguments as used to prove \eqref{eq:2ndStep}, hence we omit its proof. Now, combining the three steps leads to the equivalence \[ {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{n} \mathbf 1_{S_{2i} = 0}} \mathbf{1}_{S_{2n}= 0} \right]\sim {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \right]^2 p_{2n}(0). \] By \eqref{eq:asympt2ndMoment}, since $\log \ell \sim \log n$, we have \[ {\mathrm E}_{0}\left[ e^{ \beta_N^2\sum_{i=1}^{\ell} \mathbf 1_{S_{2i} = 0}} \right] \sim \frac{1}{1-\hat \beta^2 \frac{\log n}{\log N}}, \] and so \eqref{eq:boundP2P2ndMoment} follows from \eqref{eq:UNinSRW} and the last two displays. \end{proof} \iffalse \begin{remark} Another approach to prove \eqref{eq:boundP2P2ndMoment} could be to show that for $n\to \infty$ with $n\leq N$ and $k\leq f(n)$ for some explicit $f(n)$, \begin{equation} \label{eq:quentin} {\mathrm P}\left(\tau_k^{(N)} = n\right) = (1+o_{n}(1)) k {\mathrm P}\left(T_1^{(N)} = n\right) {\mathrm P}\left(T_1^{(N)} \leq n\right)^{k-1}, \end{equation} and then follow the same line of proof as used to show \eqref{eq:boundP2P2ndMomentC}. An estimate similar to \eqref{eq:boundP2P2ndMomentC} has been shown in \cite[Theorem 1.1]{AlexanderKennethS2016Llta} for a renewal process $(\tau_k)$ (with no dependence on $N$) such that ${\mathrm P}(\tau_1=n)=\varphi(n)n^{-1}$ with $\varphi$ a slowly varying function. It may be possible that their argument could adapt to our case (where there is a dependence on $N$). \end{remark} \fi \section{Khas'minskii's lemma for discrete Markov chains} The following theorem is another discrete analogue of Khas'minskii's lemma; compare with Lemma \ref{lem:modKhas}. \begin{theorem} \label{th:discrete_Khas} Let $(Y_n)_n$ be any Markov chain on a discrete state space $E$ and let $f:E \rightarrow \mathbb R_+$.
Then for all $k\in\mathbb N$, if \begin{equation} \label{eq:def_eta_appendix} \eta_0:=\sup_{x\in E} {\mathrm E}_{x} \left[\sum_{n=1}^k (e^{f(Y_n)}-1)\right] < 1, \end{equation} one has \begin{equation} \sup_{x\in E} {\mathrm E}_{x} \left[e^{\sum_{n=1}^k f(Y_n)}\right] \leq \frac{1}{1-\eta_0}. \end{equation} \end{theorem} \begin{proof} Write $D_n = e^{f(Y_n)}-1$. We have \begin{align*} & {\mathrm E}_{x} \left[e^{\sum_{n=1}^k f(Y_n)}\right] = {\mathrm E}_{x} \left[\prod_{n=1}^k (1+D_n)\right] = \sum_{p=0}^\infty \sum_{1\leq n_1 < \dots< n_p \leq k}{\mathrm E}_{x} \left[ \prod_{i=1}^p D_{n_i}\right]\\ & = \sum_{p=0}^\infty \sum_{1\leq n_1 < \dots < n_{p-1} \leq k} {\mathrm E}_{x} \left[ \prod_{i=1}^{p-1} D_{n_i} {\mathrm E}_{Y_{n_{p-1}}} \left[\sum_{n=1}^{k-n_{p-1}} D_n \right] \right]\\ & \stackrel{\eqref{eq:def_eta_appendix}}{\leq}\sum_{p=0}^\infty \eta_0 \sum_{1\leq n_1 < \dots < n_{p-1} \leq k} {\mathrm E}_{x} \left[ \prod_{i=1}^{p-1} D_{n_i} \right] \leq \dots \leq \sum_{p=0}^\infty \eta_0^p = \frac{1}{1-\eta_0}. \end{align*} \end{proof} \iffalse \begin{remark} Compared to the continuous time version of Khas'minskii's lemma (cf [REF]), here we cannot simply ask that $\sup_{x\in E}{\mathrm E}_x[\sum_{n=1}^k f(Y_n)] < 1$. The technical issue is that if one simply expands the exponential in its Taylor series, then in the discrete setting diagonal terms will appear. Nevertheless, one can recover this condition when $f$ is bounded, as shown by the following corollary. \end{remark} \fi \begin{corollary} \label{cor:discrete_Khas} Let $(Y_n)_n$ be any Markov chain on a discrete state space $E$ and let $f:E \rightarrow [0,1]$. Then for all $k\in\mathbb N$, if \begin{equation} \label{eq:def_eta_appendixcor} \eta_1:=\sup_{x\in E} {\mathrm E}_{x} \left[\sum_{n=1}^k f(Y_n)\right] < 1, \end{equation} one has \begin{equation} \sup_{x\in E} {\mathrm E}_{x} \left[e^{\sum_{n=1}^k f(Y_n)}\right] \leq \frac{1}{1-\eta_1}.
\end{equation} \end{corollary} \begin{proof} Since $f$ takes values in $[0,1]$, observe that $e^{f(x)}-1\leq (e-1) f(x)$, and apply Theorem \ref{th:discrete_Khas}. \end{proof} \end{document}
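The bound of Theorem \ref{th:discrete_Khas} can be verified numerically. The sketch below computes both $\eta_0$ and $\sup_x {\mathrm E}_x[e^{\sum_{n=1}^k f(Y_n)}]$ exactly by dynamic programming for a small chain; the two-state kernel and potential are arbitrary illustrative values, not taken from the text.

```python
import math

# Arbitrary two-state chain and potential -- illustrative values only.
P = [[0.7, 0.3], [0.4, 0.6]]   # transition matrix of (Y_n)
f = [0.02, 0.08]               # potential f : E -> R_+
k = 10                         # time horizon

def matvec(M, v):
    return [sum(M[x][y] * v[y] for y in range(len(v))) for x in range(len(M))]

# Exact E_x[exp(sum_{n=1}^k f(Y_n))] by backward dynamic programming.
v = [1.0, 1.0]
for _ in range(k):
    v = matvec(P, [math.exp(fy) * vy for fy, vy in zip(f, v)])
lhs = max(v)

# eta_0 = sup_x E_x[sum_{n=1}^k (e^{f(Y_n)} - 1)], summing P^n g over n = 1..k.
g = [math.exp(fy) - 1.0 for fy in f]
w, eta = g[:], [0.0, 0.0]
for _ in range(k):
    w = matvec(P, w)           # w = P^n g after n iterations
    eta = [e + wy for e, wy in zip(eta, w)]
eta0 = max(eta)

assert eta0 < 1 and lhs <= 1.0 / (1.0 - eta0)
print(lhs, 1.0 / (1.0 - eta0))
```

With these values $\eta_0 < 1$ and the exponential moment indeed stays below $(1-\eta_0)^{-1}$, as the theorem asserts.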
Maximizing production of cellulose nanocrystals and nanofibers from pre-extracted loblolly pine kraft pulp: a response surface approach
Gurshagan Kandhola, Angele Djioleu, Kalavathy Rajan, Nicole Labbé, Joshua Sakon, Danielle Julie Carrier & Jin-Woo Kim
This study aims to optimize strong acid hydrolysis-based production of cellulose nanocrystals (CNCs) and cellulose nanofibers (CNFs) from pre-extracted and fully bleached kraft pulp of loblolly pinewood, the most abundant and commercially significant softwood species in the southeastern United States. The effect of four parameters, including acid concentration, temperature, duration and pulp particle size, on the yield and properties of CNCs was investigated using the central composite design (CCD) of response surface methodology (RSM) for process optimization. While CNC yield was significantly affected by acid concentration and hydrolysis temperature and was adequately explained by an empirical model, none of the characteristic properties of CNCs, including crystallinity index, surface charge and particle size, displayed any strong correlation to the process parameters within the experimental ranges tested. At different hydrolysis severities, we not only analyzed the waste streams to determine the extent of holocellulose degradation, but also evaluated the properties of leftover partially hydrolyzed pulp, called cellulosic solid residues (CSR), to gauge its potential for CNF production via mechanical fibrillation. Conditions that maximized CNC yields (60% w/w) were 60% acid concentration, 58 °C, 60 min and 40 mesh particle size. Twenty percent (w/w) of the pulp was degraded under these conditions.
On the other hand, conditions that maximized CSR yields (60% w/w) were 54% acid, 45 °C, 90 min and 20 mesh particle size, which also produced 15% CNCs, caused minimal pulp degradation (< 5%) and imparted sufficient surface charge such that CSR was easily microfluidized into CNFs. Therefore, the strong acid hydrolysis process could be tuned to maximize the production of cellulose nanocrystals and nanofibers, obtaining two products with different properties and applications through process optimization. Growing demands of the world population are necessitating that we gradually reduce our dependence on non-renewable petroleum-based materials and transition to green, biomass-based materials that are less detrimental to the environment and human health. Nanomaterials derived from cellulose, the most abundant and renewable biopolymer on earth, have the potential to fill this need (Brinchi et al. 2013; Lee et al. 2014; Postek et al. 2013). Cellulose nanocrystals (CNCs) and cellulose nanofibers (CNFs) have excellent mechanical, optical, rheological and barrier properties. Their biocompatibility and biodegradability make them useful for diverse applications in biomedical, food and cosmetic products, films, coatings, packaging and drug delivery materials, filtration membranes, flexible electronics, and polymer nanocomposites (Brinchi et al. 2013; Dufresne 2013; George and Sabapathi 2015; Habibi et al. 2010; Mishra et al. 2018; Peng et al. 2011; Poletto 2015; Postek et al. 2013; Sinha et al. 2015; Seo et al. 2018). The United States Department of Agriculture (USDA) estimates that the market size of nanocellulose-enabled products will reach 35 M metric tons per year by 2050 (Shatkin et al. 2014). Therefore, cellulosic nanomaterials have huge potential for advancing the bio-based economy.
CNCs are rigid, rod-like crystals, with diameters in the range of 5–20 nm and lengths in the range of 200–500 nm, characterized by high aspect ratio, low density, high tensile strength and stiffness, high surface area and modifiable surface chemistry (Habibi et al. 2010). CNFs have similar diameters but can extend up to a few micrometers in length, resulting in much higher aspect ratios and formation of flexible web-like network structures (Jonoobi et al. 2015). CNCs and CNFs can be extracted from a variety of sources such as tunicates, bacteria and plants, including wood, agricultural residues and industrial crops, using chemical and mechanical processes, respectively. However, wood is the most widely and cheaply available raw material with high cellulose content (Brinchi et al. 2013; Jonoobi et al. 2015; Lee et al. 2014; Sacui et al. 2014). Utilization of woody biomass for cellulosic nanomaterial synthesis has the potential to add value to the traditional forest products industry (Brinchi et al. 2013), since a substantial infrastructure for planting, harvesting, transporting, debarking, chipping, and pulping different types of wood is already in place (Postek et al. 2013). It can also facilitate the utilization of millions of tons of forestry waste generated in the form of logging residues that are either landfilled or burned in the open, both of which are hazardous to the environment (Hamer 2003). Loblolly pine (Pinus taeda) is one of the most commercially important wood species in the southeastern United States. Its abundance (22.3% standing tree volume in Arkansas, US), fast growth rate (short rotation of 25–35 years) (Bragg 2011) and existing supply chain for pulp and timber make it an attractive feedstock for commercial scale production of nanocellulose. On the other hand, the kraft process is the predominant pulping process for producing cellulose-rich pulp from a variety of hardwood and softwood species (Biermann 1996; Houtman 2018).
Kraft pulp could be a sustainable raw material to manufacture nanocellulose, adding to the revenue streams of conventional pulp and paper mills. In a recent report, our group has shown that CNC yield and crystallinity can be improved by utilizing kraft pulp prepared from pre-extracted loblolly pine (Rajan et al. 2020). Therefore, in this study, we chose to use pre-extracted loblolly pine kraft pulp as our model substrate for further optimization of CNC yield. CNCs can be produced from cellulose by a variety of chemical, mechanical and enzymatic methods, often used in combination. Chemical methods make use of strong acids, such as sulfuric, phosphoric, nitric, hydrobromic and hydrochloric acid, or ionic liquids, organic solvents, organic acids and subcritical water to hydrolyze the amorphous regions of cellulose and produce CNCs (Brinchi et al. 2013; Chen et al. 2016; Novo et al. 2015; Zhang and Liu 2018). On the other hand, mechanical methods, such as bead milling and high-intensity ultrasonication, make use of shear forces to extract CNCs (Amin et al. 2015; Brinchi et al. 2013; Li et al. 2016). However, the concentrated sulfuric acid hydrolysis approach of treating cellulose, initially developed in the 1950s (Battista 1950; Mukherjee and Woods 1953), is still the most common and effective method of producing CNCs from cellulose-rich materials. This method is known to produce stable colloidal dispersions, where particles do not aggregate due to surface charge imparted by sulfate ester groups (Araki et al. 1998; Dong et al. 1998; Lin and Dufresne 2014), and result in higher yields. Several studies aimed at optimizing the sulfuric acid process to maximize CNC yields from a variety of different raw materials have been published in the past decade. While most commonly tested factors are acid concentration, temperature and hydrolysis duration, some studies have investigated acid-to-pulp ratio, substrate concentration and sonication time too. 
An important factor that has not been evaluated but could impact CNC yields is the raw material particle size. Size reduction techniques, such as milling and grinding, increase the accessible surface area and pore size of biomass; the correlation of reduced particle size with the ease of enzymatic digestibility is well documented in biofuels research (Behera et al. 2014). Our study has included the parameter of particle size in its experimental design to assess if CNC yield improves as a result of reduction in pulp particle size and to determine the optimal particle size for maximizing CNC production. Depending on the range of conditions tested, the types of raw materials used, and the method of optimization implemented, the effects of independent variables differ, and so do the optimized conditions and corresponding yields. In woody biomass, testing of narrower acid concentration ranges improved CNC yields up to 60–70% (w/w) (Dong et al. 2016; Wang et al. 2012, 2014). These studies used bleached hardwood (eucalyptus) kraft pulp and dissolving grade softwood sulfite pulp as raw materials. Optimal conditions and maximal yields reported in these studies may not be directly applicable to softwood (e.g., pine) kraft pulp, due to significant differences between hardwood and softwood pulps in terms of chemical composition, anatomical structure and cellulose fiber morphology (Area and Popa 2014; Biermann 1996; Pettersen 1984). Similarly, sulfite and kraft pulps differ from each other in terms of cellulose crystalline structure, fibril aggregation and thermal stability (Hult et al. 2003; Poletto et al. 2011). Therefore, it is imperative that optimized conditions resulting in high CNC yields be determined separately for softwood kraft pulp.
The highest reported CNC yield from bleached softwood (mixture of cedar, spruce, fir and pine) kraft pulp was only about 30–40% (Hamad and Hu 2010), possibly due to the lack of statistical design of experiments (DOE), leaving much scope for improvement. In this study, a four-factor, three-level central composite design (CCD) of response surface methodology (RSM) was used as the optimization tool to analyze the effect of four process parameters, i.e., acid concentration, temperature, reaction duration and pulp particle size, on CNC yields. We used fully bleached and highly pure (> 90% cellulose) loblolly pine kraft pulp, prepared using an in-house hemicellulose pre-extraction, kraft pulping and bleaching process, as the raw material for CNC production (Rajan et al. 2020). The optimal conditions for sulfuric acid hydrolysis and resulting maximum yields were determined and experimentally validated. Characteristic properties of CNCs, including particle size, crystallinity index and surface charge, were also included as response variables to elucidate how these properties were impacted by a combined set of process conditions, not just by one parameter at a time (Chen et al. 2015; Dong et al. 1998; Hamad and Hu 2010; Kargarzadeh et al. 2012). This will be helpful to understand which parameters need to be fine-tuned in the manufacturing process in order to control the quality of CNCs and achieve a desired set of properties. Finally, differences between the properties of CNCs and the residual, partially hydrolyzed pulp, termed cellulosic solid residues (CSR), were evaluated. Recent optimization studies have suggested that recovering CSR for CNF production has the potential to improve the economic viability of the process (Wang et al. 2012, 2014). Through this work, we aim to determine if recovery of low quantities of CSR at conditions that favor CNC production is practical.
Recommendations on the set of conditions that maximize production of either CNCs or CSR (for conversion to CNFs) will be provided. This study paves the way to utilize the commonly available loblolly pine kraft pulp for CNC and CNF production through an optimized strong acid hydrolysis process. Sulfuric acid ACS grade (95–98%) was obtained from VWR International (Radnor, PA). Sodium sulfide nonahydrate (98%) and sodium hydroxide, both ACS grade, were obtained from Beantown Chemical (Hudson, NH) and EMD Millipore, Merck (Gibbstown, NJ), respectively. Hydrochloric acid ACS grade (36%) and granular sodium chlorate (99%) were obtained from Alfa Aesar (Ward Hill, MA). Hydrogen peroxide (30%) was obtained from EMD MilliporeSigma (Burlington, MA).
Pretreatment and bleaching of pinewood
Loblolly pine (Pinus taeda L.), grown in the School of Forestry and Natural Resources at the University of Arkansas, Monticello, was used in this study. The stem wood was debarked, ground using a Thomas Scientific Wiley Mini-Mill (model 3383-L10, Swedesboro, NJ) and passed through a 20-mesh screen to obtain particles ranging from 0.8 to 0.9 mm in size. The samples were stored in airtight containers at room temperature until further use. Dilute acid pretreatment of pinewood was conducted for 1 h at 150 °C in a 1-L stainless steel bench-top Parr reactor (model 4525, Moline, IL), at solids loading of 20% w/v and sulfuric acid concentration of 0.5% w/w (with respect to dry biomass). After this step, the pretreated biomass was separated from the liquid fraction using vacuum filtration. Kraft pulping of the pretreated biomass was carried out at 170 °C in the same reactor, at solids loading of 20% w/v, effective alkalinity of 24% and effective sulfidity of 66%. An H-factor of 1500, which was estimated by the following equation, was used for this reaction.
$$ H = \int_{0}^{t} \exp\left( 43.2 - \frac{16115}{T} \right){\text{d}}t , $$ where H is the H-factor, t is time (h) and T is temperature (K). After this step, the kraft pulp was separated from the liquid fraction using vacuum filtration. This was followed by two washings with 500 mL water to remove any residual lignin physically attached to the pulp. Elemental chlorine free (ECF) bleaching of the pulp was carried out in 1000 mL conical flasks in a water bath at 45 °C and 70 °C. It consisted of alternating treatments with chlorine dioxide, hydrogen peroxide and sodium hydroxide, until the residual lignin content of the pulp was reduced to < 1% (w/w).
Chemical composition analysis
Chemical composition of raw biomass, i.e., ethanol-soluble extractives, structural carbohydrates and lignin content, was measured using National Renewable Energy Laboratory protocols NREL/TP-510-42618 and NREL/TP-510-42619. The same protocols were used for the estimation of structural carbohydrates in fully bleached kraft pulp as well. However, residual lignin content in bleached kraft pulp was determined based on the Kappa number (K), as given in the T-236-om-99 protocol, published by the Technical Association of the Pulp and Paper Industry (TAPPI) in 1999. Quantification of sugars was done using high performance liquid chromatography (HPLC). The chemical composition of pine biomass and purified pulp, thus determined, is provided in Table 1. The sequential procedure of dilute acid pretreatment, kraft pulping and ECF bleaching was effective in hemicellulose removal, followed by bulk delignification and removal of residual lignin to less than 1% (w/w); highly pure pulp, containing > 90% (w/w) cellulose and < 5% (w/w) hemicellulose, was obtained at the end of this 3-step treatment process.
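The H-factor integral can be evaluated numerically for any cook schedule. The sketch below assumes t in hours (the standard convention for Vroom's H-factor, which normalizes the rate to about 1 at 100 °C); the ramp-and-hold temperature profile is hypothetical, chosen only to show how a target of H = 1500 is reached with a 170 °C cook.

```python
import math

def h_rate(T):
    # Vroom's relative delignification rate; approximately 1 at 100 C (373.15 K).
    return math.exp(43.2 - 16115.0 / T)

def h_factor(schedule, n=20000):
    """Trapezoidal H-factor for a piecewise-linear [(time_h, temp_K), ...] schedule."""
    def temp(t):
        for (t0, T0), (t1, T1) in zip(schedule, schedule[1:]):
            if t <= t1:
                return T0 + (T1 - T0) * (t - t0) / (t1 - t0)
        return schedule[-1][1]
    t_end = schedule[-1][0]
    dt = t_end / n
    vals = [h_rate(temp(i * dt)) for i in range(n + 1)]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# At 170 C (443.15 K) the relative rate is roughly 930 per hour, so a constant
# hold reaches H = 1500 in about 1.6 h; a 1 h ramp from 80 C adds comparatively
# little to the integral.
hold = 1500.0 / h_rate(443.15)
sched = [(0.0, 353.15), (1.0, 443.15), (1.0 + hold, 443.15)]
print(round(h_rate(443.15)), round(h_factor(sched)))
```

The dominance of the hold-temperature term is why kraft cooks are usually specified by a target H-factor rather than by time and temperature separately.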
Table 1 Chemical composition of pinewood and bleached pulp
CNC synthesis via concentrated acid hydrolysis
The production of CNCs was performed according to the procedure described by Bondeson et al. (2006) and Wang et al. (2012), with minor alterations. Bleached pulp was first air dried until moisture content was < 5% (w/w) and then ground to the desired mesh size using the mini mill. The ground pulp was hydrolyzed at an acid-to-pulp ratio of 8:1 v/w. The reaction was carried out in a 1000-mL beaker placed in a water bath and the solution was constantly mixed at 50 revolutions per minute (RPM) using an overhead stirrer. To stop the reaction, 10× water was added and the mixture was stirred at 100 RPM for 10 min. The suspension was then centrifuged at 8346×g for 20 min and the volume of the supernatant recovered (termed waste stream 1) was recorded. For each sample, three aliquots were stored for HPLC analysis to determine the amount of cellulose and hemicellulose lost in the form of monosaccharides, i.e., glucose, xylose, galactose, arabinose and mannose, using the NREL/TP-510-42621 protocol. The pellets were recovered and washed with 30 mL water, vortexed and then centrifuged before discarding the supernatant (termed waste stream 2). The resultant pellet was re-suspended in water and the suspension was dialyzed for two days to remove residual acid, until the pH reached 7. The suspension was once again centrifuged to obtain CNCs and CSR as separate streams; CNCs formed a stable suspension in the supernatant, whereas CSR was obtained as a pellet. Volumes and weights of each fraction were recorded, and samples were stored at 4 °C until further use. CNC and CSR yields were determined using the gravimetric method of oven-drying, where samples were dried at 105 °C until constant weight (Eq. 2). CNC, CSR and total yields, as well as carbohydrate losses (Eq. 3) and the overall mass balance, were all expressed as a percentage of the initial dry weight of cellulose pulp.
$$ {\text{Yield}} \,( \%) = \frac{m}{M} \times 100, $$ $$ {\text{Carbohydrate loss}} \left( \% \right) = \frac{cV}{M} \times 100, $$ where m is the dry weight of total CNC or CSR obtained (g), M is the dry weight of starting material (3 g of pre-extracted and bleached loblolly pine kraft pulp), c is the concentration of sugars in waste stream 1 determined using HPLC (recorded in g/L) and V is the total volume of waste stream 1 (L).
Optimization of acid hydrolysis parameters
The Box–Wilson Central Composite Design (CCD) of Response Surface Methodology was used to design a set of experiments, in order to investigate the effect of different acid hydrolysis parameters on CNC production. Four factors, i.e., acid concentration (% w/w), hydrolysis duration (min), hydrolysis temperature (°C) and particle size of pulp (mesh), and three responses, i.e., CNC, CSR and total yields, were evaluated. The factors and their corresponding intervals were selected according to in-house exploratory work based on previous reports. The average pulp particle sizes for 20, 40 and 60 mesh sieves corresponded to 0.841 mm, 0.420 mm and 0.250 mm, respectively. The particle size was represented in mesh size instead of mm for the sake of simplicity in considering equal factor increments for process optimization and will be referred to as such in later discussions. Each factor had three levels, i.e., low, mid and high, designated as − 1, 0 and 1, respectively, as illustrated in Table 2. The ranges were kept neither too narrow nor too wide, so as to prevent exclusion of any optimal conditions while aiming for a strong predictive power of the model. The run order of the trials was randomized in order to prevent systematic errors and the runs were conducted exactly in the order specified by the software (JMP Pro version 13).
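Equations 2 and 3 translate directly into code. In the sketch below, M = 3 g as in the protocol, while the gravimetric and HPLC inputs are hypothetical values used only to illustrate the mass-balance bookkeeping.

```python
def yield_pct(m_dry_g, M_dry_g=3.0):
    # Eq. 2: product yield as a percentage of the starting dry pulp mass
    return 100.0 * m_dry_g / M_dry_g

def carb_loss_pct(sugar_conc_g_per_L, waste_volume_L, M_dry_g=3.0):
    # Eq. 3: monosaccharides lost to waste stream 1, % of starting mass
    return 100.0 * sugar_conc_g_per_L * waste_volume_L / M_dry_g

# Hypothetical run: 1.2 g CNC, 0.45 g CSR, 2.5 g/L sugars in 0.24 L of waste.
cnc, csr = yield_pct(1.2), yield_pct(0.45)
loss = carb_loss_pct(2.5, 0.24)
print(cnc, csr, loss)        # 40.0 15.0 20.0
balance = cnc + csr + loss   # recovered + degraded, % of input
assert balance <= 100.0      # remainder is unaccounted mass (washes, handling)
```

Summing the CNC, CSR and carbohydrate-loss percentages gives the overall mass balance referred to in the text.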
The total number of experiments was 26, calculated using the formula \( N = 2^{k} \) (factorial points) \( + 2k \) (star/axial points) \( + r \) (center points), where the number of factors (k) was 4 and the replicates of center point conditions (r) were 2 (Bezerra et al. 2008). Center point conditions (60% H2SO4, 55 °C, 60 min and 40 mesh) were replicated twice, to allow for proper testing of the model's lack of fit (Dean et al. 2017).
Table 2 Independent variables and their levels used in Box–Wilson CCD design
Mechanical fibrillation with microfluidizer
CSR suspensions were diluted to a concentration of 1% (w/v) and passed through a low-volume benchtop microfluidizer (Model LV1-UL, Microfluidics, MA, USA) for high shear mechanical fibrillation. Each sample was given three passes through the micron chamber at 16,000 psi (≈ 110 MPa) to obtain a homogenized CNF suspension.
Characterization of CNCs and CSR
TEM imaging
Aqueous CNC suspensions of 0.025% (w/v) concentration were thoroughly vortexed and sonicated and a droplet of the suspension was placed on a carbon coated 300-mesh copper grid. After letting the droplet dry for about 10–15 min, the sample was negatively stained with 1% uranyl acetate dye for 30–45 s and left to dry overnight. Images were recorded on a JEOL JEM-1011 transmission electron microscope (TEM) operating at 100 kV. The attached software had an in-built feature to record the length and width of CNC particles. Morphological measurements of 10–15 CNCs were made per sample for general estimation of the dimensions of CNCs. An X-ray diffractometer was used to determine the crystallinity index (CrI) of raw biomass, kraft pulp, CNCs and CNFs, following the Segal method (Segal et al. 1959): $$ {\text{CrI}} \left( \% \right) = \frac{I_{200} - I_{AM}}{I_{200}} \times 100, $$ where I200 is the intensity of the diffraction peak assigned to the plane (200) at 2θ = 22.7° and IAM is the intensity at 2θ = 18° coming from the amorphous part of the sample.
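The run count \( N = 2^{k} + 2k + r \) quoted earlier in this section can be cross-checked by enumerating the coded points of a face-centered CCD directly. This is only a sketch of the factor coding; the actual randomized run order was generated by JMP and is not reproduced here.

```python
from itertools import product

k, r = 4, 2                              # factors; center-point replicates

corners = list(product((-1, 1), repeat=k))   # 2^k factorial points
axial = []                                   # 2k star/axial points
for i in range(k):
    for s in (-1, 1):
        pt = [0] * k
        pt[i] = s
        axial.append(tuple(pt))
center = [(0,) * k] * r                      # replicated center point

design = corners + axial + center
print(len(design))                           # 26 = 2^4 + 2*4 + 2
assert len(design) == 2 ** k + 2 * k + r
# Face-centered CCD: every coded factor takes only the three levels -1, 0, 1.
assert {v for pt in design for v in pt} == {-1, 0, 1}
```

Placing the axial points on the cube faces is what keeps each factor at exactly three levels, matching the low/mid/high coding of Table 2.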
To prevent the factor of concentration from interfering with measurements, the samples used for XRD analysis were air-dried films prepared from solutions of a fixed concentration at 7.5 mg/mL. Raw biomass and kraft pulp were used as is, in powdered form.
ICP-OES analysis
All CNC and CSR samples were mixed with a solution of 0.8 M nitric acid until the final sample concentration was close to 1 mg/mL. Then the samples were digested for 30 min, centrifuged at 8346×g for 10 min, and the supernatant was passed through a 0.2-μm nylon syringe filter. Sulfur content of the supernatant was determined using the inductively coupled plasma optical emission spectrometry (ICP-OES) technique. All samples were analyzed in triplicate and in-house calibration curves were used to quantify the sulfur content. The data are presented in terms of mg of S per g of CNC (on dry basis).
Dynamic light scattering (DLS)
Characteristics of CNC and CSR suspensions at 0.025% (w/v) concentration, such as Z-average diameter or particle size (length), dispersity (Ð) and zeta-potential, were measured in triplicate using the Malvern Zeta Sizer Nano ZS (model ZEN3600, Worcestershire, UK). Samples were sonicated in an ultrasonic water bath for 5 min before analysis.
Dependence of CNC-to-CSR ratio on hydrolysis severity
The concept of simultaneously recovering CNCs and CSR (cellulosic solid residue) to minimize cellulose loss and increase overall yield was first introduced by Wang et al. (2012). Separation is achieved using centrifugation of acid-hydrolyzed and dialyzed pulp, where CNC particles remain suspended in the supernatant and CSR, consisting of fibers bigger in size and less sulfated than CNCs, settles down in the form of a pellet. Of the 26 experimental runs conducted in our study, both CNCs and CSR were recovered in only 13 runs, whereas for the rest, the CSR fraction was below recoverable limits.
In general, depending on the severity of reaction conditions, CNC yields varied between 0% and 52%, and CSR yields varied between 0% and as high as 85% (Table 3). The technique of cluster analysis was used to condense the yield data into smaller, more coherent groups, where each group represented similar yields. The data were first standardized and the k-means clustering approach was used (R code and detailed results provided in supplementary data) to identify the optimal number of clusters for appropriate data classification (Fig. 1).
Table 3 RSM-CCD experimental design matrix and response data
Fig. 1 Cluster analysis of experimental CNC and CSR yields
Cluster 1 consisted of three runs (21, 22 and 24), characterized by very high CSR yields (78–85%), extremely low CNC yields (< 3%), and negligible carbohydrate losses (< 5%) (Table 3). CSR recovered in these runs was barely hydrolyzed and was more or less similar in texture to the original pulp, making it unsuitable for microfluidization. This could be attributed to the low severity of reaction conditions, i.e., a combination of low acid concentration (54%) and low temperature (45 °C). Acid concentrations below 58% have been reported to cause insufficient cellulose depolymerization and result in low CNC yields (Wang et al. 2014). CNCs and CSR from these three runs were not included in further characterization. Cluster 2 consisted of five runs (7, 9, 11, 12 and 17) characterized by relatively higher CSR than CNC yields and minimal losses ranging between 4 and 8% (Table 3). Three of these runs (9, 12 and 17) had particularly high CSR and total (CNC + CSR) yields that averaged at 60% and 75%, respectively. These conditions were also favorable for hydrolyzing the CSR just enough to facilitate subsequent mechanical fibrillation, without substantial conversion into CNCs and/or degradation into sugars.
If the goal is to produce CNFs, the conditions of run 17, i.e., 54% acid, 90 min, 45 °C and 20 mesh, will be more suitable because of the high yields obtained with a conservative use of acid and energy (lower temperature and reduced milling requirements). With increasing reaction severity, the gap between CSR and CNC yields was reduced due to increasing hydrolysis of CSR into CNCs. This was concomitant with a simultaneous increase in cellulose degradation (12–15%) and a lower total nanocellulose yield (50–65%), evident from the other two runs (7 and 11) in this cluster. Cluster 3 consisted of nine runs (3, 4, 5, 6, 8, 10, 14, 19 and 23) characterized by relatively higher CNC than CSR yields; cellulose lost in the waste stream ranged between 10 and 25% and total nanocellulose yield ranged between 52 and 62%. Five of these runs (3, 4, 5, 14 and 19), including center point conditions (4 and 14), yielded 34–42% CNCs and 14–22% CSR, indicative of the reaction leaning toward CNC formation. The exceptions were runs 10 and 23, where high temperatures caused complete conversion of pulp into CNCs, with yields ranging between 30 and 40% and negligible CSR recovery, pointing towards the dominant effect of temperature in cellulose depolymerization kinetics (Dong et al. 2016; Wang et al. 2014). The highest CNC yields (52% CNC and no CSR) were obtained in two runs (6 and 8), which differed from center point conditions by longer hydrolysis duration or smaller particle size (higher mesh). Cluster 4 consisted of nine runs (1, 2, 13, 15, 16, 18, 20, 25 and 26) characterized by very low CNC yields ranging between 10 and 30% and no CSR recovery, indicating deleterious cellulose hydrolysis.
It was observed that most of these runs were conducted at the high acid concentration of 66%, which caused high cellulose degradation and resulted in low yields (Table 3), thus indicating the dominant effect of acid concentration on cellulose depolymerization kinetics, as reported in previous studies (Hamad and Hu 2010; Wang et al. 2012, 2014). The only exception was run 1, which yielded 28% CNCs even at the low acid concentration of 54%, possibly due to the combination of longer reaction duration with high temperature. But the general trend agrees with previous studies, where 62–65% H2SO4 resulted in CNC yields of 30–45% from microcrystalline cellulose (MCC), softwood kraft pulp and kenaf pulp (Bondeson et al. 2006; Hamad and Hu 2010; Kargarzadeh et al. 2012; Ngwabebhoh et al. 2018). Acid concentrations above 60% should be avoided in CNC production, as these lead to degradation into monomeric and oligomeric sugars that cannot be economically recovered and utilized (Wang et al. 2012, 2014); the ideal acid concentration range is 58–62% as per Wang et al. (2014). Overall, the experimental data indicated that the potential for maximizing CNC yields from pine kraft pulp lies in optimizing around the parameters of 54–60% acid, 55–65 °C, 60–90 min and 40–60 mesh; however, some degradation of pulp (10–20%) is bound to take place as a result of these processing conditions. The following second-order polynomial equation was used to (a) determine significant model terms and (b) develop an empirical model correlating the response, i.e., CNC yield, to the four independent variables under investigation. Results of analysis of variance (ANOVA) are given in Table 4.
$$ Y = \beta_{0} + \sum\limits_{i} \beta_{i} x_{i} + \sum\limits_{i} \beta_{ii} x_{i}^{2} + \sum\limits_{i < j} \beta_{ij} x_{i} x_{j} + \varepsilon , $$ where \( \beta_{0} \) is the constant coefficient, \( \beta_{i} \) is the linear effect of the main factor \( x_{i} \), \( \beta_{ij} \) is the linear-by-linear interaction effect of the input factors \( x_{i} \) and \( x_{j} \), \( \beta_{ii} \) is the quadratic effect of the input factor \( x_{i} \), \( \varepsilon \) is the random error term and \( Y \) is the response.

Table 4 ANOVA for response surface quadratic model

Based on non-linear regression analysis of the response data, the following quadratic model (Eq. 6) was obtained, where positive coefficients signify synergistic effects that increase CNC yield and negative coefficients signify antagonistic effects that decrease it:

$$ \begin{aligned} {\text{CNC}}\;{\text{yield}} \left( \% \right) = & \; 36.84 - 0.88 \times \left( {\frac{{X_{1} - 60}}{6}} \right) + 8.99 \times \left( {\frac{{X_{3} - 55}}{10}} \right) - 6.16 \times \left( {\frac{{X_{1} - 60}}{6}} \right) \\ & \times \left( {\frac{{X_{3} - 55}}{10}} \right) - 16.31 \times \left( {\frac{{X_{1} - 60}}{6}} \right)^{2} - 13.56 \times \left( {\frac{{X_{3} - 55}}{10}} \right)^{2} . \\ \end{aligned} $$

At p < 0.05, the quadratic model was found to be significant, and the model process parameters that had a significant effect on CNC yield were the linear effect of temperature (\( X_{3} \)), the quadratic effects of temperature (\( X_{3}^{2} \)) and acid concentration (\( X_{1}^{2} \)), and the interaction effect of acid concentration and temperature (\( X_{1} X_{3} \)). Even though the linear effect of acid concentration (\( X_{1} \)) was not significant, the term appears in the equation because it was contained in the interaction effect (\( X_{1} X_{3} \)), which was significant.
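Since Eq. 6 is expressed in coded variables, predictions for actual acid concentrations and temperatures require decoding. A small helper (an illustrative sketch; the function name is ours, the coefficients are those of Eq. 6) evaluates the fitted model:

```python
def cnc_yield(acid_pct, temp_c):
    """Predicted CNC yield (%) from the fitted coded quadratic model (Eq. 6)."""
    x1 = (acid_pct - 60) / 6   # coded acid concentration: 54-66% -> -1..+1
    x3 = (temp_c - 55) / 10    # coded temperature: 45-65 C -> -1..+1
    return (36.84 - 0.88 * x1 + 8.99 * x3
            - 6.16 * x1 * x3 - 16.31 * x1 ** 2 - 13.56 * x3 ** 2)

print(cnc_yield(60, 55))   # 36.84 at the center point (x1 = x3 = 0)
print(cnc_yield(54, 45))   # low-severity corner, far below the center
```

Evaluating `cnc_yield(59.5, 58.5)` gives roughly 38.5%, consistent with the predicted optimum reported later in the text.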
The remaining model terms, including the main, quadratic and interaction effects of hydrolysis duration and particle size, did not have a significant effect on CNC yield. As mentioned before, various studies have reported acid concentration and temperature, as well as hydrolysis duration, to have a significant effect on the yield of CNCs. However, hydrolysis duration was not found significant in our study, which could partly be attributed to (a) the relatively narrower range tested, and (b) the fact that, for a given combination of acid concentration and temperature, the bulk of CNC generation seemed to occur within the first 30 min of the reaction and did not seem to change significantly with further increase in hydrolysis duration. Contrary to our hypothesis, the increase in surface area and cellulose accessibility with the reduction in pulp particle size from 20 to 60 mesh did not cause significant changes in CNC yields. A previous study also reported that depolymerization of cellulose during acid hydrolysis was strongly correlated with the crystallinity index of the pulp rather than its particle size (Lacerda et al. 2013). It is possible that all pulp mesh sizes had similar crystallinity indices and hence produced similar CNC yields. Insignificance of the "lack of fit" test indicated that the regression model adequately described the relationship between the independent variables tested and the response variable. In models involving multiple independent variables, \( R_{\text{adj}}^{2} \) is a more robust estimation of "goodness of fit" than \( R^{2} \). \( R_{\text{adj}}^{2} \) was 0.71, implying that 71% of the variation in the data was explained by the regression model and indicating a good fit between the model and experimental data (Fig. 2). Root mean square error (RMSE), a measure of unexplained variance, indicates the proximity of experimental values to predicted values.
Lower values of RMSE indicate better fit; the RMSE of the above model was 4.76, indicating that experimental yields deviated from predicted yields by roughly ± 4.76 percentage points. Acceptable values of both \( R_{\text{adj}}^{2} \) and RMSE were indicative of good accuracy of the model in predicting CNC yields within the experimental ranges tested in this study.

Experimental vs predicted yields in the regression model

Surface plots, optimization and model verification

Three-dimensional response surface plots and corresponding contour plots constructed using Eq. 6 are given in Fig. 3. These plots depict the combined effects of two factors on CNC yield, while the other factors were kept constant at their medium levels. Elliptical contour plots imply that the interaction between the variables is significant, whereas circular contour plots mean that the interaction is not important (Lu et al. 2013). Contour plots of acid concentration vs temperature and duration vs particle size were elliptical, indicating significance of these interactions (Additional file 1: Fig. S1). However, ANOVA results indicated that only the interaction of acid concentration and temperature was significant. A possible reason for this discrepancy is that CNC yield was not as drastically affected by alterations in duration and particle size as by alterations in acid concentration and temperature, because the yields only changed from 38 to 55% vs 0 to 38%, respectively (Fig. 3a, e). Moreover, when acid concentration and temperature levels were at their extremes, the CNC yields remained low regardless of the particle size or hydrolysis duration (Fig. 3b–d, f), thus proving the importance of the acid concentration \( \times \) temperature interaction effect for maximizing CNC yield. All other interactions were insignificant as per ANOVA (Table 4) as well as the contour plots (Additional file 1: Fig. S1).
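The two fit statistics quoted above can be computed as follows (a generic sketch with made-up observed/predicted yields, not the study's data; `n_params` counts the fitted coefficients excluding the intercept):

```python
import math

def fit_metrics(y_obs, y_pred, n_params):
    """Adjusted R^2 and RMSE for a regression fit."""
    n = len(y_obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(y_obs, y_pred))
    mean = sum(y_obs) / n
    ss_tot = sum((o - mean) ** 2 for o in y_obs)
    r2 = 1 - ss_res / ss_tot
    # penalize R^2 for the number of predictors in the model
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - n_params - 1)
    rmse = math.sqrt(ss_res / n)
    return r2_adj, rmse

# Made-up observed vs predicted CNC yields, for illustration only
y_obs = [36.8, 40.1, 52.3, 28.9, 10.5, 33.0]
y_pred = [35.0, 41.0, 50.0, 30.0, 12.0, 34.5]
r2_adj, rmse = fit_metrics(y_obs, y_pred, n_params=2)
```

For a perfect fit this returns an adjusted R² of 1 and an RMSE of 0; the study's model gave 0.71 and 4.76, respectively.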
Response surface and contour plots showing interaction effects of different process parameters on CNC yield: a acid concentration × temperature; b acid concentration × hydrolysis duration; c acid concentration × particle size; d temperature × hydrolysis duration; e particle size × hydrolysis duration, and f particle size × temperature

The predicted optimal hydrolysis conditions were an acid concentration of 59.5%, temperature of 58.5 °C, hydrolysis duration of 56 min and pulp particle size of 40 mesh. The corresponding predicted CNC and CSR yields were 38.4% and 16.6%, respectively, for a total nanocellulose yield of 55%. These predicted values were verified experimentally. CNC and CSR yields at these exact conditions were found to be 39.7% and 15%, respectively, totaling 54.7% nanocellulose yield (Table 5). This is in very good agreement with the modeled results, i.e., within the 95% confidence interval, thus verifying the adequacy and accuracy of the model. The results were in line with previous findings, where overall nanocellulose yield was improved by using an acid concentration ranging between 58 and 62% and a moderate temperature ranging between 50 and 60 °C, and by recovering both CNC and CSR streams (Wang et al. 2012, 2014).

Table 5 Evaluation of yields and properties of CNCs and CSR at optimum conditions

By using slightly altered optimal conditions of 60% acid, 58 °C, 60 min and 40 mesh particle size, up to 60% CNC yield (with no leftover CSR) was obtained from loblolly pine (softwood) kraft pulp. This approached the maximum yields reported in previous studies from eucalyptus (hardwood) kraft pulp (Wang et al. 2012) and softwood sulfite pulp (Dong et al. 2016; Wang et al. 2012), although the reaction duration was significantly lower in our study (60 min vs 100–200 min), which indicates the need to optimize this factor for each raw material.
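The reported optimum for acid concentration and temperature can be cross-checked against Eq. 6 by solving for the stationary point of the quadratic surface (a verification sketch; duration and particle size do not appear in Eq. 6, so only the two significant factors are recovered):

```python
import numpy as np

# Gradient of Eq. 6 (coded variables) set to zero:
#   d/dx1: -0.88 - 6.16*x3 - 2*16.31*x1 = 0
#   d/dx3:  8.99 - 6.16*x1 - 2*13.56*x3 = 0
A = np.array([[2 * 16.31, 6.16],
              [6.16, 2 * 13.56]])
b = np.array([-0.88, 8.99])
x1, x3 = np.linalg.solve(A, b)

acid = 60 + 6 * x1    # decode to % H2SO4
temp = 55 + 10 * x3   # decode to deg C
y_opt = (36.84 - 0.88 * x1 + 8.99 * x3
         - 6.16 * x1 * x3 - 16.31 * x1 ** 2 - 13.56 * x3 ** 2)
print(round(acid, 1), round(temp, 1), round(y_opt, 1))  # ~59.4, ~58.5, ~38.5
```

Since both quadratic coefficients are negative and the Hessian is negative definite, this stationary point is a maximum; it reproduces the RSM optimum (59.5% acid, 58.5 °C, 38.4% predicted yield) to within rounding.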
It should be noted that yields among different studies are often not directly comparable due to variations in process conditions such as impeller RPM and acid-to-pulp ratio, differences in the method of yield measurement, the way yield is defined or expressed, the composition and purity of the starting material, and the fact that some studies separate CNCs from CSR while others do not. Lastly, in the literature, the acid-to-pulp ratio used during acid hydrolysis has varied from 1:8 g/mL (Wang et al. 2012) and 1:10 g/mL (Bondeson et al. 2006) to 1:20 g/mL (Benini et al. 2018). This implies that acid-to-pulp ratio could be an important parameter that impacts hydrolysis kinetics and should be investigated and standardized in future work.

Justification of CSR recovery at optimal conditions

We characterized CNCs and CSR obtained at optimal conditions (Table 5), in order to understand the differences in their key properties and to determine whether it is necessary to separate the two streams at these conditions. DLS analysis showed that the average dispersity of CNCs was roughly 0.4, whereas that of CSR approached 0.7. The average particle size of CNCs ranged between 160 and 170 nm, and that of CSR was roughly 270 nm. Ð values, in addition to TEM images (Fig. 4a, b) and particle size distribution curves (Fig. 5), indicated that the CNC fraction consisted of nanocrystals with a more uniform size distribution. On the other hand, CSR consisted of a heterogeneous distribution of nanoparticles, both crystals and short fibers, which possibly aggregated and separated from CNCs during centrifugation. The two fractions had quite similar surface charge, as the average zeta-potentials of CNCs and CSR were − 48 mV and − 44 mV, respectively.
However, whether CSR should be recovered and processed as a separate stream when using optimal conditions is debatable, because doing so adds two unit operations (centrifugation and mechanical fibrillation) to the process for a small amount of CSR that is approaching CNCs in its physicochemical properties. CNCs are valued for their optical, mechanical and rheological properties, and the ability to be chemically functionalized (Moon et al. 2011). Therefore, mixing CNCs with CSR under the pretext of maximizing nanocellulose yield could reduce their commercial value for certain applications. On the other hand, recovering the two fractions separately could add to the revenue streams via production of CNFs. Future techno-economic evaluation of the costs involved in the energy-intensive microfluidization operations, for different CNC/CNF yield scenarios, is required to validate the benefits of separating the CSR stream.

TEM images of a CNC at optimal conditions, b CSR at optimal conditions, and c CNF (or microfluidized CSR) from high CSR yielding conditions (i.e., run 17–54% w/w acid, 45 °C, 90 min, 20 mesh)

Particle size distribution of CNC and CSR at optimal conditions (59.5% w/w acid, 58.5 °C, 56 min, 40 mesh). Each spectrum is an average of three replicates

Our recommendation would be to further optimize the best conditions identified in this study to facilitate complete conversion of cellulose into CNCs and eliminate the need to recover CSR without sacrificing the yield. This can be achieved by using 58–60% acid concentration to treat pulp of 40 mesh particle size at 55–60 °C for 60–75 min. By doing so, it may be possible to obtain well-dispersed and colloidally stable (− 40 to − 50 mV) suspensions, containing CNCs with particle length < 200 nm and moderate dispersity (0.3–0.5), with yields ranging between 55 and 60%.
The same process could be altered by using 50–55% acid concentration to treat pulp of 20 mesh particle size at 45 °C for 30–45 min to obtain high yields of partially hydrolyzed but adequately sulfated pulp that could easily be defibrillated into nanofibers with lower energy requirements compared to untreated pulp (Börjesson and Westman 2015). Recovering and processing higher quantities of CSR at these milder conditions could be economically more beneficial than implementing those steps at harsher severities that favor higher CNC yields.

Effect of processing conditions on properties of CNCs and CSR

Crystallinity index

Crystallinity of cellulose affects its strength and stiffness; it can have a direct impact on the mechanical properties of nanocomposites that have CNCs and CNFs as fillers (Ahvenainen et al. 2016). Crystallinity index (CrI) is a parameter that quantifies the relative amount of crystalline material present in a cellulosic sample. XRD patterns of pine biomass, bleached kraft pulp, CNCs from run 4 (high CNC yielding conditions) and CNFs from run 17 (high CSR yielding conditions) are given in Fig. 6. The diffractograms were representative of the semi-crystalline structure of cellulose Iβ, the polymorph found in higher plants (Thygesen et al. 2005), which is characterized by crystalline peaks at 15°, 16.5°, 22.5° and 34.5°, corresponding to the crystallographic planes of (1–10), (110), (200) and (004), respectively. The intensity at 18° represents the contribution of the amorphous fraction.

XRD patterns of original biomass (red line), bleached kraft pulp (green line), CNCs (purple line) from high CNC yielding conditions (i.e., run 4–60% w/w acid, 60 min, 55 °C, 40 mesh) and CNFs (cyan line) from high CSR yielding conditions (i.e., run 17–54% w/w acid, 45 °C, 90 min, 20 mesh)

CrI, calculated using Eq. (4), was 45% for biomass and increased to 74% for pulp due to the removal of amorphous lignin, hemicellulose and extractive fractions.
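The Segal calculation referenced as Eq. (4) uses the (200) peak height and the amorphous minimum of the diffractogram. A short sketch with illustrative intensities (the actual peak heights are not given in the text) reproduces the reported CrI values:

```python
def segal_cri(i_200, i_am):
    """Segal crystallinity index (%): i_200 is the (200) peak height near
    22.5 deg 2-theta, i_am the amorphous minimum near 18 deg."""
    return (i_200 - i_am) / i_200 * 100

# Illustrative intensities chosen to reproduce the reported CrI values
print(round(segal_cri(1000, 550)))  # 45 -> original biomass
print(round(segal_cri(1000, 260)))  # 74 -> bleached kraft pulp
```

As the text notes, peak-deconvolution or area-based methods are preferred for absolute crystallinity; the Segal index is used here only for relative comparisons among samples of the same type.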
CrI ranged between 75 and 80% for CNCs and CNFs from all conditions (Table 6). CrI of CNFs was slightly higher than that of the corresponding CNCs, possibly due to higher crystallite size (Nam et al. 2016), indicating a lower impact of acid hydrolysis on the crystalline structure of CNFs. No correlation was found between the CrI of CNCs and any of the four process parameters investigated in this study. This was supported by ANOVA (Additional file 1: Table S1), where none of the factors was found to have a significant effect on CrI. In line with previous reports (Chen et al. 2015; Kargarzadeh et al. 2012), a slight increase in CrI with increasing acid concentration and hydrolysis duration was observed in our study; however, these changes were not significant at p < 0.05. Overall, our results indicated that the crystallinity of CNCs was preserved regardless of the conditions used to produce them (within the experimental ranges tested in this study). However, small differences in CrI can be attributed to minor changes in crystallite size (Nam et al. 2016), which are bound to take place as reaction severity changes. We would like to note that even though the Segal method is the most commonly used method, it has received criticism for inaccurate and unrealistic estimation of crystallinity, for reasons discussed in detail elsewhere (Park et al. 2010). Methods using the area under the curve and peak fitting are more appropriate for accurate determination of crystallinity; however, the Segal method was chosen because it can reliably be used to determine relative differences in CrI among samples of the same type (Ahvenainen et al. 2016).

Table 6 Characterization of CNC and CSR using DLS, XRD and elemental analysis

Sulfur content and zeta-potential

Surface charge density, arising from sulfate ester groups imparted during sulfuric acid hydrolysis, is a critical characteristic of CNCs.
It can have a crucial influence on the surface chemistry and accessibility of CNCs for chemical modifications, and it also impacts their physical properties, such as colloidal stability, thermal degradation, birefringence and chiral nematic behavior, all of which are significant for different applications (Lin and Dufresne 2014; Reid et al. 2017). Many studies indicate that the degree of sulfation of CNCs is strongly dependent on hydrolysis conditions, such as hydrolysis duration, acid concentration, acid-to-pulp ratio and temperature (Bondeson et al. 2006; Chen et al. 2015; Dong et al. 1998; Hamad and Hu 2010; Kargarzadeh et al. 2012; Lin and Dufresne 2014; Selim et al. 2005; Wang et al. 2012); however, it has also been suggested that S content may not be a good measure of hydrolysis severity. It should be noted that, in the literature, the degree of sulfation, surface charge and surface charge density are measured with a variety of different methods, including elemental analysis (g S per g CNC), zeta-potential (mV) and conductometric titration (mmol sulfate ester groups per kg CNC), which often makes it difficult to make direct comparisons of data among studies. In our study, elemental analysis showed that S content generally ranged between 0.5 and 2.0 mg/g for both CNCs and CSR, barring a couple of outliers such as run 7 with 6 mg of S per g CNC and run 19 with 3 mg of S per g CSR (Table 6). ANOVA was conducted excluding the outliers, and only temperature was found to have a linear effect on the S content of CNCs (Additional file 1: Table S2). One possible explanation is that higher temperatures facilitated sulfonation of CNCs by reducing the amount of energy required to reach the activation energy of the esterification reaction. Previous studies have reported that S content or surface charge increased with increasing acid concentration (Chen et al. 2015) and hydrolysis duration (Kargarzadeh et al. 2012).
Our study found a similar trend, but the two factors did not impact the S content significantly at p < 0.05. In the cellulose-to-CNC conversion process, the sulfation, formation and degradation of CNCs took place simultaneously, which resulted in CNC yield and S content being interrelated with respect to reaction severity (Chen et al. 2015; Wang et al. 2012). For the experimental ranges tested in this study, our results indicated that a threshold level of surface charge was imparted to the CNCs even at relatively milder conditions and did not change with a further increase in reaction severity. This indicated that CNC sulfation was not affected by reaction severity in the same way as CNC formation or degradation was; hence, we did not observe a strong correlation between acid concentration, hydrolysis duration and CNC S content. Zeta-potential of CNC and CSR samples was measured using DLS. Zeta-potential represents the potential difference that exists between the particle surface and the bulk liquid in an applied electric field (Reid et al. 2017), and provides significant insight into the colloidal stability of aqueous dispersions. Zeta-potential of CNCs generally ranged between − 40 and − 60 mV (Table 6), in agreement with previous reports (Kargarzadeh et al. 2012; Reid et al. 2017), indicating very good colloidal stability of all samples. Attachment of anionic sulfate ester groups to the surface of CNC particles resulted in electrostatic repulsion between particles and prevented aggregation. No settling of material was observed in any CNC sample (already separated from CSR) even months after preparation, supporting the S content and zeta-potential data, which were fairly consistent across all processing conditions. It was also evident from both the S content and zeta-potential data of CSR that its degree of sulfation, or surface charge, was always lower than that of the corresponding CNC fraction (Table 6).
However, it was sufficient to facilitate mechanical fibrillation into nanofibers, as evident from the TEM image of CNFs obtained from run 17 (Fig. 4c), the run associated with the highest yield and lowest S content of CSR (Table 6).

Particle size and dispersity

A significant portion of the behavior of CNCs, including self-assembly, phase separation and birefringence, and the rheological and mechanical properties imparted to nanocomposite materials, is attributed to their geometrical features, i.e., shape and size (Bai et al. 2009; Bondeson et al. 2006; Dong et al. 1998; Habibi et al. 2010; Reid et al. 2017; Sun et al. 2016). It is well known that the morphology of CNCs is dependent on the raw material as well as the processing conditions used to prepare them. Longer hydrolysis duration and higher acid-to-pulp ratio have been shown to produce CNCs with shorter dimensions and narrower particle length distributions (Beck-Candanedo et al. 2005; Bondeson et al. 2006; Chen et al. 2015; Dong et al. 1998; Kargarzadeh et al. 2012; Ren et al. 2014; Sun et al. 2016). Within the experimental ranges tested in our study, the linear effects of acid concentration and temperature were found to have a significant effect on the hydrodynamic particle size of CNCs (Additional file 1: Table S3). Higher acid concentration and higher temperature resulted in shorter CNCs, due to more acid molecules being available per unit mass of cellulose and to the lower energy needed to reach the activation energy required for the esterification reaction, respectively. Average length ranged between 130 and 220 nm (Table 6), similar to what has been reported for CNCs extracted from woody biomass using strong acid hydrolysis (Beck-Candanedo et al. 2005; Chen et al. 2015; Sacui et al. 2014). Even though DLS does not provide absolute measurements of particle dimensions, it is a reliable tool for relative assessment of particle size and dispersity (Reid et al. 2017).
Dispersity (Ð) represents the breadth of the particle size distribution within a sample; the numerical value of Ð ranges from 0 (for a perfectly monodisperse sample with uniform particle size) to 1 (for a highly polydisperse sample with multiple particle size populations) (Danaei et al. 2018). In our study, the dispersity of CNCs from pine kraft pulp ranged between 0.3 and 0.5 for all hydrolysis conditions tested, indicating a moderately homogeneous particle size distribution. Previous studies have reported Ð values approaching 0.2 for CNCs extracted from cotton (Fan and Li 2012; Sun et al. 2016), pointing towards a potential dependence of this parameter on the purity of the raw material. Particle size and dispersity are considered important quality parameters for drug delivery applications (Danaei et al. 2018). Considering CNCs have demonstrated potential in such applications, production of CNCs with narrow size distributions is of key importance. Therefore, separation of CNCs from CSR using centrifugation is absolutely necessary to avoid Ð values > 0.5. Particle size of CSR is not reported in Table 6 because the dispersity of these samples ranged between 0.7 and 1. Size measurements of samples with Ð ≥ 0.7 are not considered reliable (Danaei et al. 2018); however, the high Ð values were indicative of extremely broad particle size distributions for CSR. Although zeta-potential data for some of the CSR samples (runs 9, 12 and 17) were unattainable, the broad particle size distributions of CSR were concomitant with lower zeta-potential (Table 6).

In this study, strong acid hydrolysis parameters, including acid concentration, temperature, hydrolysis duration and pulp particle size, were optimized for extraction of CNCs from pre-extracted and kraft delignified loblolly pinewood. Acid concentration and temperature were found to have a significant effect on CNC yield.
On the other hand, hydrolysis duration and particle size did not affect the yield significantly but could be fine-tuned to maximize cellulose-to-CNC conversion. While the optimal conditions obtained from the response surface methodology produced a mixture of CNCs and CSR, it is recommended not to separate the two streams at these conditions, but to implement conditions that favor complete hydrolysis of pulp to CNCs. Choices for maximizing the yields of either the CNC or the CSR stream (up to 60% w/w) were provided, offering manufacturers the use of identical processes for obtaining two different products. For harsher but high CNC yielding conditions, approximately 20% of the holocellulose in the pulp degraded into simple sugars and oligomers; with better handling, CNC yields up to 70% were obtained by minimizing losses in processing steps such as washing. For milder but high CSR yielding conditions, holocellulose degradation was determined to be less than 5%, and up to 15% CNCs could be separately recovered. CSR had a very wide particle size distribution as well as lower surface charge, thus restricting its potential to be mixed with CNCs; however, its degree of sulfation was sufficient to facilitate conversion into CNFs through microfluidization. Since CNFs are typically produced using mechanical methods, future studies could investigate how those differ from sulfuric acid-produced CNFs. Overall, loblolly pine (softwood) kraft pulp was found to be a competitive source for CNC production; moreover, our study confirmed that it is absolutely necessary to use acid concentrations < 60% to maximize nanocellulose yield from wood pulp. In general, characteristic properties of CNCs, including crystallinity index, surface charge, particle size and dispersity, were found to be fairly uniform, implying consistency in quality within the extraction conditions tested.
Lastly, even though the strong sulfuric acid process has been optimized for CNC production from a variety of cellulose-rich raw materials, it is still not environmentally friendly, as it uses up to 58–60% w/w acid. Such high concentrations of acid are associated with problems such as equipment corrosion, residual acid recycling and wastewater treatment. A few studies in the recent past have investigated "green" methods for CNC production; however, their yields were typically low. Sulfuric acid hydrolysis is possibly the best approach so far to obtain economically viable yields; however, future studies should address the issue of valorization of the waste stream, through development of efficient and scalable recycle and reuse systems for both acid and water, in order to make the process more sustainable.

All data supporting this article's conclusions are available.

Abbreviations

CNC: Cellulose nanocrystal
CNF: Cellulose nanofiber
CCD: Central composite design
RSM: Response surface methodology
CSR: Cellulosic solid residue
DOE: Design of experiments
ECF: Elemental chlorine free
HPLC: High-performance liquid chromatography
TEM: Transmission electron microscope
XRD: X-ray diffraction
CrI: Crystallinity index
ICP-OES: Inductively coupled plasma optical emission spectrometry
DLS: Dynamic light scattering
MCC: Microcrystalline cellulose
RMSE: Root mean square error

References

Ahvenainen P, Kontro I, Svedström K (2016) Comparison of sample crystallinity determination methods by X-ray diffraction for challenging cellulose I materials. Cellulose 23:1073–1086. https://doi.org/10.1007/s10570-016-0881-6
Amin KNM, Annamalai PK, Morrow IC, Martin D (2015) Production of cellulose nanocrystals via a scalable mechanical method. RSC Adv 5:57133–57140. https://doi.org/10.1039/C5RA06862B
Araki J, Wada M, Kuga S, Okano T (1998) Flow properties of microcrystalline cellulose suspension prepared by acid treatment of native cellulose. Colloid Surf A 142:75–82. https://doi.org/10.1016/S0927-7757(98)00404-X
Area MC, Popa VI (2014) Anatomy, structure and chemistry of fibrous materials. In: Area MC, Popa VI (eds) Wood fibres for papermaking.
Smithers Rapra Technology, New York, pp 29–40
Bai W, Holbery J, Li K (2009) A technique for production of nanocrystalline cellulose with a narrow size distribution. Cellulose 16:455–465. https://doi.org/10.1007/s10570-009-9277-1
Battista OA (1950) Hydrolysis and crystallization of cellulose. Ind Eng Chem 42(3):502–507. https://doi.org/10.1021/ie50483a029
Beck-Candanedo S, Roman M, Gray DG (2005) Effect of reaction conditions on the properties and behavior of wood cellulose nanocrystal suspensions. Biomacromol 6:1048–1054
Behera S, Arora R, Nandhagopal N, Kumar S (2014) Importance of chemical pretreatment for bioconversion of lignocellulosic biomass. Renew Sustain Energy Rev 36:91–106. https://doi.org/10.1016/j.rser.2014.04.047
Benini KCCC, Voorwald HJC, Cioffi MOH, Rezende MC, Arantes V (2018) Preparation of nanocellulose from Imperata brasiliensis grass using Taguchi method. Carbohydr Polym 192:337–346. https://doi.org/10.1016/j.carbpol.2018.03.055
Bezerra MA, Santelli RE, Oliveira EP, Villar LS, Escaleira LA (2008) Response surface methodology (RSM) as a tool for optimization in analytical chemistry. Talanta 76:965–977. https://doi.org/10.1016/j.talanta.2008.05.019
Biermann CJ (1996) Handbook of pulping and papermaking, 2nd edn. Academic Press, Cambridge
Bondeson D, Mathew A, Oksman K (2006) Optimization of the isolation of nanocrystals from microcrystalline cellulose by acid hydrolysis. Cellulose 13:171–180. https://doi.org/10.1007/s10570-006-9061-4
Börjesson M, Westman G (2015) Crystalline nanocellulose—preparation, modification and properties. In: Poletto M (ed) Cellulose—fundamental aspects and current trends. IntechOpen, London. https://doi.org/10.5772/61899
Bragg DC (2011) Forests and forestry in Arkansas during the last two centuries. In: Riley LE, Haase DL, Pinto JR (tech coords) National proceedings: forest and conservation nursery associations, 2010. Proceedings of RMRS-P-65. U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Fort Collins, CO, pp 3–9
Brinchi L, Cotana F, Fortunati E, Kenny JM (2013) Production of nanocrystalline cellulose from lignocellulosic biomass: technology and applications. Carbohydr Polym 94:154–169. https://doi.org/10.1016/j.carbpol.2013.01.033
Chen L, Wang Q, Hirth K, Baez C, Agarwal UP, Zhu JY (2015) Tailoring the yield and characteristics of wood cellulose nanocrystals (CNC) using concentrated acid hydrolysis. Cellulose 22:1753–1762. https://doi.org/10.1007/s10570-015-0615-1
Chen L, Zhu JY, Baez C, Kitin P, Elder T (2016) Highly thermal-stable and functional cellulose nanocrystals and nanofibrils produced using fully recyclable organic acids. Green Chem 18:3835–3843. https://doi.org/10.1039/C6GC00687F
Danaei M et al (2018) Impact of particle size and polydispersity index on the clinical applications of lipidic nanocarrier systems. Pharmaceutics 10:57. https://doi.org/10.3390/pharmaceutics10020057
Dean A, Voss D, Draguljić D (2017) Response surface methodology. In: Dean A, Voss D, Draguljić D (eds) Design and analysis of experiments. Springer, Cham, pp 565–614. https://doi.org/10.1007/978-3-319-52250-0_16
Dong XM, Revol JF, Gray DG (1998) Effect of microcrystallite preparation conditions on the formation of colloid crystals of cellulose. Cellulose 5:19–32. https://doi.org/10.1023/A:1009260511939
Dong S, Bortner MJ, Roman M (2016) Analysis of the sulfuric acid hydrolysis of wood pulp for cellulose nanocrystal production: a central composite design study. Ind Crop Prod 93:76–87. https://doi.org/10.1016/j.indcrop.2016.01.048
Dufresne A (2013) Nanocellulose: a new ageless bionanomaterial. Mater Today 16:220–227. https://doi.org/10.1016/j.mattod.2013.06.004
Fan JS, Li YH (2012) Maximizing the yield of nanocrystalline cellulose from cotton pulp fiber. Carbohydr Polym 88:1184–1188. https://doi.org/10.1016/j.carbpol.2012.01.081
George J, Sabapathi SN (2015) Cellulose nanocrystals: synthesis, functional properties, and applications. Nanotechnol Sci Appl 8:45–54. https://doi.org/10.2147/NSA.S64386
Habibi Y, Lucia LA, Rojas OJ (2010) Cellulose nanocrystals: chemistry, self-assembly, and applications. Chem Rev 110:3479–3500. https://doi.org/10.1021/cr900339w
Hamad WY, Hu TQ (2010) Structure–process–yield interrelations in nanocrystalline cellulose extraction. Can J Chem Eng 88:392–402. https://doi.org/10.1002/cjce.20298
Hamer G (2003) Solid waste treatment and disposal: effects on public health and environmental safety. Biotechnol Adv 22:71–79. https://doi.org/10.1016/j.biotechadv.2003.08.007
Houtman C (2018) Lessons learned from 150 years of pulping wood. In: Beckham GT (ed) Energy and environment series no. 19: lignin valorization: emerging approaches. Royal Society of Chemistry, Cambridge, pp 62–74
Hult EL, Iversen T, Sugiyama J (2003) Characterization of the supermolecular structure of cellulose in wood pulp fibres. Cellulose 10:103–110. https://doi.org/10.1023/A:1024080700873
Jonoobi M, Oladi R, Davoudpour Y, Oksman K, Dufresne A, Hamzeh Y, Davoodi R (2015) Different preparation methods and properties of nanostructured cellulose from various natural resources and residues: a review. Cellulose 22:935–969. https://doi.org/10.1007/s10570-015-0551-0
Kargarzadeh H, Ahmad I, Abdullah I, Dufresne A, Zainudin SY, Sheltami RM (2012) Effects of hydrolysis conditions on the morphology, crystallinity, and thermal stability of cellulose nanocrystals extracted from kenaf bast fibers. Cellulose 19:855–866. https://doi.org/10.1007/s10570-012-9684-6
Lacerda TM, Zambon MD, Frollini E (2013) Effect of acid concentration and pulp properties on hydrolysis reactions of mercerized sisal. Carbohydr Polym 93:347–356. https://doi.org/10.1016/j.carbpol.2012.10.039
Lee HV, Hamid SBA, Zain SK (2014) Conversion of lignocellulosic biomass to nanocellulose: structure and chemical process. Sci World J 2014:1–20. https://doi.org/10.1155/2014/631013
Li Y, Liu Y, Chen W, Wang Q, Liu Y, Li J, Yu H (2016) Facile extraction of cellulose nanocrystals from wood using ethanol and peroxide solvothermal pretreatment followed by ultrasonic nanofibrillation. Green Chem 18:1010–1018. https://doi.org/10.1039/C5GC02576A
Lin N, Dufresne A (2014) Surface chemistry, morphological analysis and properties of cellulose nanocrystals with gradiented sulfation degrees. Nanoscale 6:5384–5393. https://doi.org/10.1039/C3NR06761K
Lu Z, Fan L, Zheng H, Lu Q, Liao Y, Huang B (2013) Preparation, characterization and optimization of nanocellulose whiskers by simultaneously ultrasonic wave and microwave assisted. Bioresour Technol 146:82–88. https://doi.org/10.1016/j.biortech.2013.07.047
Mishra RK, Sabu A, Tiwari SK (2018) Materials chemistry and the futurist eco-friendly applications of nanocellulose: status and prospect. J Saudi Chem Soc 22:949–978. https://doi.org/10.1016/j.jscs.2018.02.005
Moon RJ, Martini A, Nairn J, Simonsen J, Youngblood J (2011) Cellulose nanomaterials review: structure, properties and nanocomposites. Chem Soc Rev 40:3941–3994. https://doi.org/10.1039/C0CS00108B
Mukherjee SM, Woods HJ (1953) X-ray and electron microscope studies of the degradation of cellulose by sulphuric acid. BBA 10:499–511. https://doi.org/10.1016/0006-3002(53)90295-9
Nam S, French AD, Condon BD, Concha M (2016) Segal crystallinity index revisited by the simulation of X-ray diffraction patterns of cotton cellulose Iβ and cellulose II. Carbohydr Polym 135:1–9. https://doi.org/10.1016/j.carbpol.2015.08.035
Ngwabebhoh FA, Erdem A, Yildiz U (2018) A design optimization study on synthesized nanocrystalline cellulose, evaluation and surface modification as a potential biomaterial for prospective biomedical applications.
Int J Biol Macromol 114:536–546. https://doi.org/10.1016/j.ijbiomac.2018.03.155 Novo LP, Bras J, Garcia A, Belgacem N, Curvelo AAS (2015) Subcritical water: a method for green production of cellulose nanocrystals. ACS Sust Chem Eng 3:2839–2846. https://doi.org/10.1021/acssuschemeng.5b00762 Park S, Baker JO, Himmel ME, Parilla PA, Johnson DK (2010) Cellulose crystallinity index: measurement techniques and their impact on interpreting cellulase performance. Biotechnol Biofuels 3:10. https://doi.org/10.1186/1754-6834-3-10 Peng BL, Dhar N, Liu HL, Tam KC (2011) Chemistry and applications of nanocrystalline cellulose and its derivatives: a nanotechnology perspective. Can J Chem Eng 89:1191–1206. https://doi.org/10.1002/cjce.20554 Pettersen RC (1984) The chemical composition of wood. In: Rowell R (ed) The chemistry of solid wood, vol 207. American Chemical Society, Washington, pp 57–126. https://doi.org/10.1021/ba-1984-0207.ch002 Poletto M (2015) Cellulose—fundamental aspects and current trends. IntechOpen. https://doi.org/10.5772/59889 Poletto M, Pistor V, Zeni M, Zattera AJ (2011) Crystalline properties and decomposition kinetics of cellulose fibers in wood pulp obtained by two pulping processes. Polym Degrad Stabil 96:679–685. https://doi.org/10.1016/j.polymdegradstab.2010.12.007 Postek MT, Moon RJ, Rudie AW, Bilodeau MA (2013) Production and applications of cellulose nanomaterials. TAPPI Press, Atlanta Rajan K, Djioleu A, Kandhola G, Labbé N, Sakon J, Carrier DJ, Kim J-W (2020) Investigating the effects of hemicellulose pre-extraction on the production and characterization of loblolly pine nanocellulose. Cellulose (accepted). https://doi.org/10.1007/s10570-020-03018-8 Reid MS, Villalobos M, Cranston ED (2017) Benchmarking cellulose nanocrystals: from the laboratory to industrial production. Langmuir 33:1583–1598. 
https://doi.org/10.1021/acs.langmuir.6b03765 Ren S, Sun X, Lei T, Wu Q (2014) The effect of chemical and high-pressure homogenization treatment conditions on the morphology of cellulose nanoparticles. J Nanomater 2014:582913. https://doi.org/10.1155/2014/582913 Sacui IA et al (2014) Comparison of the properties of cellulose nanocrystals and cellulose nanofibrils isolated from bacteria, tunicate, and wood processed using acid, enzymatic, mechanical, and oxidative methods. ACS Appl Mater Inter 6:6127–6138. https://doi.org/10.1021/am500359f Segal L, Creely JJ, Martin AE Jr, Conrad CM (1959) An empirical method for estimating the degree of crystallinity of native cellulose using the X-ray diffractometer. Text Res J 29:786–794. Selim IZ, Zikry AAF, Gaber SH (2005) Physicochemical properties of prepared cellulose sulfates: II from linen pulp bleached by the H2O2 method. Polymer Plast Technol Eng 43(5):1387–1402. https://doi.org/10.1081/PPT-200030194 Seo Y-R, Kim J-W, Hoon S, Kim J, Chung JH, Lim KT (2018) Cellulose-based nanocrystals: sources and applications via agricultural byproducts. J Biosyst Eng 43(1):59–71. https://doi.org/10.5307/JBE.2018.43.1.059 Shatkin JA, Wegner TH, Bilek EM, Cowie J (2014) Market projections of cellulose nanomaterial-enabled products—part 1: applications. Tappi J 13:9–16 Sinha A, Martin EM, Lim KT, Carrier DJ, Han H, Zharov VP, Kim J-W (2015) Cellulose nanocrystals as advanced "green" materials for biological and biomedical engineering. J Biosyst Eng 40:373–393. https://doi.org/10.5307/JBE.2015.40.4.373 Sun B, Zhang M, Hou Q, Liu R, Wu T, Si C (2016) Further characterization of cellulose nanocrystal (CNC) preparation from sulfuric acid hydrolysis of cotton fibers. Cellulose 23:439–450. https://doi.org/10.1007/s10570-015-0803-z Thygesen A, Oddershede J, Lilholt H, Thomsen AB, Stahl K (2005) On the determination of crystallinity and cellulose content in plant fibres. Cellulose 12:563–576. 
https://doi.org/10.1007/s10570-005-9001-8 Wang QQ, Zhu JY, Reiner RS, Verrill SP, Baxa U, McNeil SE (2012) Approaching zero cellulose loss in cellulose nanocrystal (CNC) production: recovery and characterization of cellulosic solid residues (CSR) and CNC. Cellulose 19:2033–2047. https://doi.org/10.1007/s10570-012-9765-6 Wang Q, Zhao X, Zhu JY (2014) Kinetics of strong acid hydrolysis of a bleached kraft pulp for producing cellulose nanocrystals (CNCs). Ind Eng Chem Res 53:11007–11014. https://doi.org/10.1021/ie501672m Zhang R, Liu Y (2018) High energy oxidation and organosolv solubilization for high yield isolation of cellulose nanocrystals (CNC) from Eucalyptus hardwood. Sci Rep 8:1–11. https://doi.org/10.1038/s41598-018-34667-2 The authors would like to thank A. Kuchuk and E. Martin at the Institute for Nanoscience and Engineering, University of Arkansas, Fayetteville, for their assistance with XRD analysis and TEM imaging, respectively. The authors would also like to thank C. Hamilton from the Center for Renewable Carbon, University of Tennessee, Knoxville for conducting the ICP-OES analysis. This project was supported by the Center for Advanced Surface Engineering (CASE) under the National Science Foundation (NSF) grant number OIA-1457888 and the Arkansas EPSCoR program, ASSET III. 
Department of Biological and Agricultural Engineering, University of Arkansas, Fayetteville, AR, 72701, USA Gurshagan Kandhola, Angele Djioleu & Jin-Woo Kim Institute for Nanoscience and Engineering, University of Arkansas, Fayetteville, AR, 72701, USA Department of Biosystems Engineering and Soil Science, University of Tennessee Institute of Agriculture, Knoxville, TN, 37996, USA Kalavathy Rajan & Danielle Julie Carrier Center for Renewable Carbon, University of Tennessee Institute of Agriculture, Knoxville, TN, 37996, USA Kalavathy Rajan & Nicole Labbé Department of Chemistry and Biochemistry, University of Arkansas, Fayetteville, AR, 72701, USA Joshua Sakon Gurshagan Kandhola Angele Djioleu Kalavathy Rajan Nicole Labbé Danielle Julie Carrier Jin-Woo Kim J-WK and GK conceived and designed the experiments. GK, AD and KR performed experiments. All authors discussed the results. GK, KR, DJC and J-WK co-wrote the paper. All authors read and approved the final manuscript. Correspondence to Jin-Woo Kim. Additional figures and tables. Kandhola, G., Djioleu, A., Rajan, K. et al. Maximizing production of cellulose nanocrystals and nanofibers from pre-extracted loblolly pine kraft pulp: a response surface approach. Bioresour. Bioprocess. 7, 19 (2020). https://doi.org/10.1186/s40643-020-00302-0 Kraft process Strong acid hydrolysis Cellulose nanocrystals Cellulose nanofibers
\begin{definition}[Definition:Multiply Perfect Number] A '''multiply perfect number''' is a positive integer $n$ such that the sum of its divisors is equal to an integer multiple of $n$. \end{definition}
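A quick arithmetic illustration (an example added here for concreteness; it is not part of the original entry): $120$ is multiply perfect, since its divisor sum satisfies

```latex
\map {\sigma_1} {120} = \map {\sigma_1} {2^3} \, \map {\sigma_1} 3 \, \map {\sigma_1} 5 = 15 \times 4 \times 6 = 360 = 3 \times 120
```

that is, the sum of the divisors of $120$ is exactly $3$ times $120$.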
\begin{document} \title{Unitary channel discrimination beyond group structures: \\ Advantages of sequential and indefinite-causal-order strategies} \author{Jessica Bavaresco} \email{[email protected]} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria} \author{Mio Murao} \affiliation{Department of Physics, Graduate School of Science, The University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033, Japan} \affiliation{Trans-scale Quantum Science Institute, The University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033, Japan} \author{Marco T\'ulio Quintino} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria} \affiliation{Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna, Austria} \date{27th May 2021} \begin{abstract} For minimum-error channel discrimination tasks that involve only unitary channels, we show that sequential strategies may outperform the parallel ones. Additionally, we show that general strategies that involve indefinite causal order are also advantageous for this task. However, for the task of discriminating a uniformly distributed set of unitary channels that forms a group, we show that parallel strategies are, indeed, optimal, even when compared to general strategies. We also show that strategies based on the quantum switch cannot outperform sequential strategies in the discrimination of unitary channels. Finally, we derive an absolute upper bound for the maximal probability of successfully discriminating any set of unitary channels with any number of copies for the most general strategies that are suitable for channel discrimination. Our bound is tight since it is saturated by sets of unitary channels forming a group $k$-design. 
\end{abstract} \maketitle \section{Introduction} The discrimination of different hypotheses is a fundamental part of the scientific method that finds application in the most diverse areas, such as information theory,~\cite{information_theory} bioinformatics,~\cite{bioinformatics} machine learning,~\cite{pearl2014probabilistic} and behavioral and social sciences.~\cite{psychological_methods} In a discrimination task, one seeks the best way to decide which hypothesis is the most likely description of some scenario or experiment. An important, albeit general, instance of a discrimination task consists in identifying which of several input-output relations, or causal-effect dynamics, a physical system may undergo. For instance, in an x-ray examination that identifies whether a person has a broken bone, the examiner prepares an initial state, which is subjected to certain dynamics when passing through the person's body, before being measured by some physical apparatus. In this case, the goal is to distinguish between the dynamics related to a broken and a healthy bone by implementing the most appropriate input state and measurement apparatus. At its most fundamental level, closed-system dynamics in quantum theory are described by unitary operations. Hence, being able to discriminate between different unitary operations is a ubiquitous task within quantum theory and quantum technologies. Examples of tasks directly related to our ability to discriminate unitary operations are unitary equivalence determination,~\cite{shimbo18,soeda21} quantum metrology,~\cite{giovannetti06,metrology} quantum hypothesis testing,~\cite{hayashi06} quantum parameter estimation,~\cite{paris09} alignment and transmission of reference frames,~\cite{chiribella04b,bartlett07} and discrimination and tomography of quantum circuit elements.~\cite{chiribella07} Discrimination tasks are also relevant to the field of computer science.
An oracle, which is an abstract machine used to study decision problems, may be understood as a black box that solves certain problems with a single operation. From a quantum computational perspective, a quantum oracle is a unitary operation whose internal mechanism is unknown; such oracles are employed in seminal quantum algorithms, such as the Deutsch-Jozsa algorithm,~\cite{deutsch92} Grover's algorithm,~\cite{grover98} and Simon's algorithm.~\cite{simon94} These oracle-based quantum algorithms may be recast as unitary discrimination tasks.~\cite{chefles07} Such practical and fundamental interest has motivated an extensive study of the discrimination of unitary channels within the context of quantum information theory, leading to a plethora of interesting results. In contrast to the problem of quantum state discrimination,~\cite{helstrom69} in which two states cannot be perfectly distinguished with a finite number of uses, or copies, unless they are orthogonal, it has been shown, remarkably, that any pair of unitary channels can indeed always be perfectly distinguished with a finite number of copies.~\cite{acin01,dariano01} Moreover, perfect discrimination of a pair of unitary channels can always be achieved by a parallel scheme~\cite{acin01,dariano01} (see also Ref.~\cite{duan07}). Even when perfect discrimination is not possible, sequential strategies can never outperform parallel strategies in a task of discrimination between a pair of unitary channels.~\cite{chiribella08-1} Concerning the discrimination of sets of more than two unitary channels, when considering unitaries that form a representation of a group and are uniformly distributed, Ref.~\cite{chiribella08-1} showed once more that, for any number of copies, sequential strategies are not advantageous when compared to parallel strategies.
For related tasks such as error-free and unambiguous unitary channel discrimination,~\cite{chiribella13-02} unitary estimation,~\cite{chiribella08-1} unitary learning,~\cite{bisio10} and unitary store-and-retrieve,~\cite{sedlak19} parallel strategies were also proven to be optimal. To the best of our knowledge, no minimum-error discrimination task of unitary channels in which sequential strategies outperform parallel strategies was previously known. In this work, we focus on the discrimination of sets of more than two unitary channels with multiple copies to study the potential advantages that different classes of strategies can bring to this task. The first contribution of our work is precisely to show examples of discrimination tasks of unitary channels in which sequential strategies are advantageous when compared to parallel strategies that use the same number of copies. In fact, in contrast to the tasks of error-free and unambiguous unitary channel discrimination,~\cite{chiribella13-02} we show that sequential strategies can achieve perfect discrimination in tasks where parallel strategies cannot. Then, motivated by the recent advances in channel discrimination theory that have established the advantage of general discrimination strategies that involve indefinite causal order for general channels,~\cite{bavaresco20} we study the potential advantages of these general strategies for the specific case of unitary channel discrimination. Extending the framework developed in Ref.~\cite{bavaresco20} to discrimination tasks that allow for the use of multiple copies, we achieve the following results. We prove the optimality of parallel strategies, even when compared against general strategies, in tasks of discrimination of uniformly distributed unitary channels that form a unitary representation of some group, for any number of copies.
However, the power of general strategies is revealed when they are applied to discrimination tasks that fail to satisfy at least one of these requirements. In these cases, we show that general indefinite-causal-order strategies can outperform sequential strategies, and again that sequential strategies can outperform the parallel ones. Then, we show that a particular case of general strategies, one that applies processes related to the quantum switch~\cite{chiribella13-01} and its generalizations,~\cite{araujo14,yokojima20,barrett20} can never outperform sequential strategies in the discrimination of unitary channels. The final contribution of our work is to derive an ultimate upper bound for the maximal probability of successful discrimination of any ensemble of unitary channels with a uniform probability distribution. Our result represents an upper bound for the most general strategy that can possibly be employed in a task of channel discrimination. We show that this bound is saturated by parallel strategies for the discrimination of unitary groups that form a $k$-design, where $k$ is the number of allowed copies. \section{Minimum-error channel discrimination} In a task of minimum-error channel discrimination, one is given access to an unknown quantum channel $\widetilde{C}_i:\mathcal{L}(\mathcal{H}^I)\to\mathcal{L}(\mathcal{H}^O)$, which maps quantum states from an input linear space $\mathcal{H}^I$ to an output linear space $\mathcal{H}^O$. This quantum channel is known to have been drawn with probability $p_i$ from a known ensemble of channels $\mathcal{E}=\{p_j,\widetilde{C}_j\}_{j=1}^N$. The task is to determine which channel from the ensemble was received using a limited number of uses, or queries, of it, which is essentially to determine the classical label $i$ of channel $\widetilde{C}_i$.
In order to accomplish this task in the case where only a single use of the unknown channel is allowed, one may send part of a potentially entangled state $\rho\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^\text{aux})$ through the channel $\widetilde{C}_i$ and subsequently jointly measure the output state with a positive operator-valued measure (POVM) $M=\{M_a\}_{a=1}^N,M_a\in\mathcal{L}(\mathcal{H}^O\otimes\mathcal{H}^\text{aux})$. When both the state and measurement are optimized according to the knowledge of the ensemble, the outcome of the measurement will correspond to the most likely value of the label $i$ of the unknown channel. Then, the maximal probability of successfully determining which channel is at hand is given by \begin{equation}\label{eq::P_rhoM} P \coloneqq \max_{\rho,\{M_i\}}\sum_{i=1}^N p_i \text{\normalfont Tr}\left[(\widetilde{C}_i\otimes\widetilde{\mathbb{1}})(\rho)\,M_i\right], \end{equation} where $\widetilde{\mathbb{1}}:\mathcal{L}(\mathcal{H}^\text{aux})\to\mathcal{L}(\mathcal{H}^\text{aux})$ is the identity map. When more than one use (or copy, as we will refer to it from now on) is allowed, different strategies come into play, each exploring a different order in which the copies of the unknown channel are applied. In Figs.~\ref{fig::realization}(a) and (b), we illustrate two such possibilities, a parallel and a sequential strategy, respectively. However, a more general strategy can be defined by considering the most general higher-order transformation that can map $k$ quantum channels to a valid probability distribution [see Fig.~\ref{fig::realization}(c)].
It has been shown that some of these general strategies may employ processes with an indefinite causal order and that these strategies may outperform parallel and sequential ones in tasks of channel discrimination.~\cite{bavaresco20} \section{Tester formalism} To facilitate the approach to this problem, a concise and unified formalism of testers,~\cite{chiribella09} also referred to as process POVMs,~\cite{ziman08,ziman10} was developed in Ref.~\cite{bavaresco20}, providing practical tools for both the comparison between different strategies and for the efficient computation of the maximal probability of successful discrimination of a channel ensemble under different classes of strategies. We now revise the tester formalism while extending its definitions to strategies that involve a finite number of copies $k$ of the unknown channel. In order to apply the tester formalism, we will make use of the Choi-Jamio\l{}kowski (CJ) representation of quantum maps. The CJ isomorphism is a one-to-one correspondence between completely positive maps and positive semidefinite operators, which allows one to represent any linear map $\widetilde{L}:\mathcal{L}(\mathcal{H}^I)\mapsto\mathcal{L}(\mathcal{H}^O)$ by a linear operator $L\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^O)$ defined by \begin{equation} L \coloneqq (\widetilde{\mathbb{1}}\otimes\widetilde{L})(\Phi^+), \end{equation} where $\Phi^+=\sum_{ij}\ket{ii}\bra{jj} \in \mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^I)$, with $\{\ket{i}\}$ being an orthonormal basis, is an unnormalized maximally entangled state. 
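The CJ isomorphism is straightforward to verify numerically. The following minimal NumPy sketch (an illustration added for this presentation, not taken from the paper; the standard Kraus-operator construction $C=\sum_k(\mathbb{1}\otimes K_k)\,\Phi^+\,(\mathbb{1}\otimes K_k)^\dagger$ is used) builds the Choi operator of the Hadamard channel and checks that it is positive semidefinite, satisfies $\mathrm{Tr}_O\,C=\mathbb{1}^I$, and has rank one, as expected for a unitary channel.

```python
import numpy as np

def choi(kraus_ops, d):
    """Choi operator C = (id ⊗ L)(Phi+), with Phi+ = Σ_ij |ii⟩⟨jj| unnormalized."""
    phi = np.zeros(d * d, dtype=complex)
    phi[::d + 1] = 1.0                      # |Phi+⟩ = Σ_i |ii⟩
    C = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        v = np.kron(np.eye(d), K) @ phi     # (1 ⊗ K)|Phi+⟩
        C += np.outer(v, v.conj())
    return C

d = 2
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard channel
C = choi([H], d)

# CP: C is positive semidefinite
assert np.all(np.linalg.eigvalsh(C) > -1e-9)
# TP: partial trace over the output space gives the identity on the input
TrO = C.reshape(d, d, d, d).trace(axis1=1, axis2=3)
assert np.allclose(TrO, np.eye(d))
# For a unitary channel the Choi operator is rank one, with largest eigenvalue d
assert np.isclose(np.linalg.eigvalsh(C)[-1], d)
```

For a non-unitary channel (more than one Kraus operator) the positivity and trace-preservation checks pass unchanged, while the rank-one property fails.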
In this representation, a quantum channel, i.e., a completely positive trace-preserving (CPTP) map $\widetilde{C}:\mathcal{L}(\mathcal{H}^I)\mapsto\mathcal{L}(\mathcal{H}^O)$, is represented by a linear operator $C\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^O)$, often called the ``Choi operator'' of channel $\widetilde{C}$, which satisfies \begin{align} C\geq0, \ \ \ \text{\normalfont Tr}_O C = \mathbb{1}^I, \end{align} where $\text{\normalfont Tr}_O$ denotes the partial trace over $\mathcal{H}^O$ and $\mathbb{1}^I$ denotes the identity operator on $\mathcal{H}^I$. In particular, the Choi operator of a unitary channel is proportional to a maximally entangled state. Using Choi operators of quantum channels, we can equivalently represent the channel ensemble $\{p_i,\widetilde{C}_i\}_{i=1}^N$ as $\{p_i,C_i\}_{i=1}^N$, where $C_i$ is the Choi operator of channel $\widetilde{C}_i$. A tester is a set of positive semidefinite operators $T=\{T_i\}_{i=1}^N, T_i\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^O)$, which obey certain normalization constraints and which, when traced against the Choi operator of a quantum channel $C$, lead to a valid probability distribution, according to $p(i|C)=\text{\normalfont Tr}\left(T_i\,C\right)$. In this sense, testers act on quantum channels similarly to how POVMs act on quantum states and can, therefore, be interpreted as a ``measurement'' of a quantum channel. Testers allow us to rewrite the maximal probability of successful discrimination of the channel ensemble $\mathcal{E}=\{p_i,C_i\}_{i=1}^N$ as \begin{equation} P = \max_{\{T_i\}}\sum_{i=1}^N p_i \text{\normalfont Tr}\left(T_i\,C_i\right).
\end{equation} The advantage of this representation is the simplification of the optimization problem that defines the maximal probability of success: now, optimization over different discrimination strategies may be achieved by maximizing $P$ over the set of valid testers, as opposed to optimizing over both states and measurements [as in Eq.~\eqref{eq::P_rhoM}]. \begin{figure*} \caption{Schematic representation of the realization of every $k$-copy (a) parallel tester $T^\text{PAR}$ with a state $\rho$ and a POVM $M$; (b) sequential tester $T^\text{SEQ}$ with a state $\rho$, channels $\widetilde{E}_i$, $i\in\{1,\ldots,k-1\}$, and a POVM $M$; and (c) general tester $T^\text{GEN}$ with a process matrix $W$ and a POVM $M$.} \label{fig::realization} \end{figure*} For the case of $k$ copies, different normalization constraints define testers that represent different classes of strategies. We now define $k$-copy testers that represent parallel, sequential, and general strategies. Let $\mathcal{H}^{I}\coloneqq\bigotimes_{i=1}^k \mathcal{H}^{I_i}$ and $\mathcal{H}^{O}\coloneqq\bigotimes_{i=1}^k \mathcal{H}^{O_i}$ be the joint input and output spaces, respectively, of $k$ copies of a quantum channel, let $d_I\coloneqq\text{dim}(\mathcal{H}^{I})$ and $d_O\coloneqq\text{dim}(\mathcal{H}^{O})$ be their respective dimensions, and let \begin{equation} _{X}(\cdot) \coloneqq \text{\normalfont Tr}_{X}(\cdot)\otimes\frac{\mathbb{1}^{X}}{d_X} \end{equation} denote a trace-and-replace operation in $\mathcal{H}^{X}$. \subsection{Parallel strategies} Parallel strategies are the ones that consist of sending each system that composes a multipartite state through one of the copies of the unknown channel, in such a way that the output of each copy does not interact with the input of the others, and jointly measuring the output state at the end [see Fig.~\ref{fig::realization}(a), left].
These strategies are characterized by parallel testers [see Fig.~\ref{fig::realization}(a), right] $T^\text{PAR}=\{T^\text{PAR}_i\}_{i=1}^N$, $T^\text{PAR}_i\in\mathcal{L}(\mathcal{H}^{I}\otimes\mathcal{H}^{O})$. Let $W^\text{PAR}\coloneqq \sum_i T^\text{PAR}_i$. Then, parallel testers are defined as \begin{align} T^\text{PAR}_i &\geq 0 \ \ \ \forall \, i \\ \text{\normalfont Tr} \, W^\text{PAR} &= d_O \\ W^\text{PAR} &= _{O}W^\text{PAR}. \end{align} This is equivalent to defining $W^\text{PAR}$ to be a parallel process, satisfying $W^\text{PAR}=\sigma^{I}\otimes\mathbb{1}^O$, where $\text{\normalfont Tr}(\sigma^{I})=1$. \subsection{Sequential strategies} In a sequential strategy, a quantum system is sent through the first copy of the channel, and its output system is allowed to be sent as input of the next copy, while general CPTP maps may act on the systems in between copies. The final output is measured by a POVM [see Fig.~\ref{fig::realization}(b), left]. These strategies are represented by sequential testers [see Fig.~\ref{fig::realization}(b), right] $T^\text{SEQ}=\{T^\text{SEQ}_i\}_{i=1}^N, T^\text{SEQ}_i\in\mathcal{L}(\mathcal{H}^{I}\otimes\mathcal{H}^{O})$. Let $W^\text{SEQ}\coloneqq \sum_i T^\text{SEQ}_i$. Then, sequential testers are defined as \begin{align} T^\text{SEQ}_i &\geq 0 \ \ \ \forall \, i \\ \text{\normalfont Tr} \, W^\text{SEQ} &= d_O \\ W^\text{SEQ} &= _{O_k}W^\text{SEQ}\\ _{I_k O_k}W^\text{SEQ} &= _{O_{(k-1)} I_k O_k}W^\text{SEQ}\\ &\ldots \nonumber \\ _{I_2 O_2 \ldots I_k O_k}W^\text{SEQ} &= _{O_1 I_2 O_2 \ldots I_k O_k}W^\text{SEQ}. \end{align} This is equivalent to defining $W^\text{SEQ}$ to be a $k$-slot comb,~\cite{chiribella09} or a $k$-partite ordered process matrix~\cite{araujo15} (see also Refs.~\cite{gustoki06,kretschmann05}). \subsection{General strategies} Finally, general strategies are defined by the most general higher-order operations that can transform $k$ quantum channels into a joint probability distribution. 
They can be regarded as the most general ``measurement'' that acts jointly on $k$ quantum channels, yielding a classical output. Crucially, and in contrast to parallel and sequential strategies, general strategies do not physically impose any particular order in which the copies of the channel must be applied. The testers that characterize these strategies, the general testers, are the most general sets of positive semidefinite operators $T^\text{GEN}=\{T^\text{GEN}_i\}_{i=1}^N, T^\text{GEN}_i\in\mathcal{L}(\mathcal{H}^{I}\otimes\mathcal{H}^{O})$ that satisfy $p(i|C_1,\ldots,C_k)=\text{\normalfont Tr}\left[T^\text{GEN}_i(C_1\otimes\ldots\otimes C_k)\right]$. Let $W^\text{GEN}\coloneqq \sum_i T^\text{GEN}_i$. Then, general testers can be equivalently defined as \begin{align} T^\text{GEN}_i &\geq 0 \ \ \ \forall \, i \\ \text{\normalfont Tr}[W^\text{GEN}(C_1 &\otimes\ldots\otimes C_k)] = 1, \label{eq::wnorm} \end{align} for all Choi operators of quantum channels $C_i\in\mathcal{L}(\mathcal{H}^{I_i}\otimes\mathcal{H}^{O_i})$. This is equivalent to defining $W^\text{GEN}$ to be a general $k$-partite process matrix.~\cite{oreshkov12,araujo15} For the cases of $k=2$ and $k=3$, we provide a characterization of $W^\text{GEN}$ in terms of linear constraints in Appendix~\ref{app:processmatrices}. Ordered processes, such as $W^\text{PAR}$ and $W^\text{SEQ}$, form a subset of the general processes $W^\text{GEN}$.
However, some general processes are known not to respect the causal constraints of ordered processes, that is, they are neither parallel nor sequential.~\cite{oreshkov12} Moreover, they cannot be described as convex mixtures of ordered processes, in the bipartite case, or other more appropriate notions of mixtures of causal order, in the multipartite case.~\cite{wechs19} Such general processes are said to exhibit an \textit{indefinite causal order}~\cite{oreshkov12} and have been shown to bring advantages to several quantum information tasks, such as quantum computation,~\cite{araujo14} communication complexity,~\cite{feix15,guerin16} discrimination of bipartite non-signaling channels,~\cite{chiribella12} violation of fully and semi-device-independent causal inequalities,~\cite{oreshkov12,branciard16,bavaresco19} inversion of unknown unitary operations,~\cite{quintino18} and, more recently, discrimination of general quantum channels.~\cite{bavaresco20} In contrast to the parallel and sequential cases, a realization of general testers in terms of quantum operations is an open problem. More specifically, a general tester can always be constructed from a general process matrix and a POVM,~\cite{bavaresco20} as illustrated in Fig.~\ref{fig::realization}(c). However, only a subset of process matrices is currently known to be realizable with quantum operations.~\cite{wechs21} This subset, known as ``coherent quantum control of causal orders'', has been shown to bring advantage to the discrimination of general channels.~\cite{bavaresco20} See Ref.~\cite{bavaresco20} for a more detailed discussion about the realization of testers.
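To make the tester formalism concrete, consider the single-copy ($k=1$) parallel tester obtained from a maximally entangled probe and a Bell measurement: its elements are $T_i=\frac{1}{d^2}\dketbra{\sigma_i}{\sigma_i}$, where the $\sigma_i$ are the Pauli operators (including the identity). The NumPy sketch below (our own numerical illustration, not an example from the paper) checks the tester normalization and that $p(i|C_j)=\text{\normalfont Tr}(T_i\,C_j)=\delta_{ij}$, i.e., perfect single-copy parallel discrimination of the four Pauli channels.

```python
import numpy as np

d = 2
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_x
          np.array([[0, -1j], [1j, 0]]),               # sigma_y
          np.array([[1, 0], [0, -1]], dtype=complex)]  # sigma_z

def dket(U):
    """|U⟩⟩ = (1 ⊗ U) Σ_i |ii⟩, the (unnormalized) Choi vector of U."""
    phi = np.zeros(d * d, dtype=complex)
    phi[::d + 1] = 1.0
    return np.kron(np.eye(d), U) @ phi

# Choi operators of the four Pauli channels and the corresponding tester
chois = [np.outer(dket(U), dket(U).conj()) for U in paulis]
tester = [c / d**2 for c in chois]          # T_i = |sigma_i⟩⟩⟨⟨sigma_i| / d²

# Normalization: W = Σ_i T_i gives Tr(W C) = 1 on every channel Choi operator
W = sum(tester)
assert all(np.isclose(np.trace(W @ c).real, 1.0) for c in chois)

# p(i|sigma_j) = Tr(T_i C_j) = delta_ij: perfect parallel discrimination
p = np.array([[np.trace(T @ C).real for C in chois] for T in tester])
assert np.allclose(p, np.eye(4))
```

Note that here $W=\sum_i T_i=\mathbb{1}/2=(\mathbb{1}^I/2)\otimes\mathbb{1}^O$, i.e., a parallel process with $\sigma^I=\mathbb{1}/2$, matching the parallel-tester definition given above.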
\\ For any chosen strategy, the maximal probability of successful discrimination of an ensemble of $N$ channels $\mathcal{E}=\{p_i,C_i\}_{i=1}^N$ using $k$ copies is given by \begin{equation}\label{eq:SDP} P^\mathcal{S} \coloneqq \max_{\{T^\mathcal{S}_i\}}\sum_{i=1}^N p_i \text{\normalfont Tr}\left(T^\mathcal{S}_i\,C_i^{\otimes k}\right), \end{equation} where $\mathcal{S}\in\{\text{PAR},\text{SEQ},\text{GEN}\}$. For all three classes of strategies, $P^\mathcal{S}$ can be computed via semidefinite programming (SDP). A short review of the primal and dual SDP problems associated with the maximal probability of successful discrimination is given in Appendix~\ref{app:processmatrices}. \section{Discrimination of unitary channels} In this section, we present our results concerning the discrimination of unitary channels. Here, unitary channels, also called unitary operations, will simply be denoted by unitary operators $U$ that satisfy $UU^\dagger=\mathbb{1}$, and these terms will be used interchangeably. The Choi operators of unitary channels will be denoted as $\dket{U}\dbra{U}\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^O)$, where \begin{equation} \dket{U}\coloneqq\sum_i(\mathbb{1}\otimes U)\ket{ii}. \end{equation} We start by considering a discrimination task that involves unitary channels that form a group (see Appendix~\ref{app::proofunitarygroup}). We show that, for an ensemble composed of a set of unitary channels that forms a group and a uniform distribution, parallel strategies not only perform as well as the sequential ones~\cite{chiribella08-1} but are, indeed, the optimal strategies for discrimination---even considering general strategies that may involve indefinite causal order.
\begin{theorem}\label{thm::unitarygroup} For ensembles composed of a uniform probability distribution and a set of unitary channels that forms a group up to a global phase, in discrimination tasks that allow for $k$ copies, parallel strategies are optimal, even when considering general strategies. More specifically, let $\mathcal{E}=\{p_i,U_i\}_i$ be an ensemble with $N$ unitary channels, where $p_i=\frac{1}{N}\ \forall\,i$ and the set $\{U_i\}_i$ forms a group up to a global phase. Then, for any number of copies $k$ and for every general tester $\{T^\text{GEN}_i\}_i$, there exists a parallel tester $\{T^\text{PAR}_i\}_i$ such that {\small{ \begin{equation} \frac{1}{N}\sum_{i=1}^N \text{\normalfont Tr}\Big(T^\text{PAR}_i \dketbra{U_i}{U_i}^{\otimes k}\Big) = \frac{1}{N}\sum_{i=1}^N \text{\normalfont Tr}\Big(T^\text{GEN}_i \dketbra{U_i}{U_i}^{\otimes k}\Big). \end{equation} }} \end{theorem} The proof of this theorem can be found in Appendix~\ref{app::proofunitarygroup}. \\ Theorem~\ref{thm::unitarygroup} has two crucial hypotheses: (1) the set of unitary operators $\{U_i\}$ forms a group and (2) the distribution $\{p_i\}$ is uniform. If at least one of these hypotheses is not satisfied, then Theorem~\ref{thm::unitarygroup}, in fact, does not hold, as we show in the following. \begin{theorem}\label{thm::seq_vs_par} There exist ensembles of unitary channels for which sequential strategies of discrimination outperform parallel strategies. Moreover, sequential strategies can achieve perfect discrimination in some scenarios where the maximal probability of success of parallel strategies is strictly less than one. \end{theorem} Let us start with the case where the set of unitary channels does not form a group, but the probability distribution of the ensemble is uniform. 
In the following, $\sigma_x$, $\sigma_y$, and $\sigma_z$ are the Pauli operators and $H\coloneqq\ketbra{+}{0}+\ketbra{-}{1}$, where $\ket{\pm}\coloneqq \frac{1}{\sqrt{2}}(\ket{0}\pm \ket{1})$, is the Hadamard gate. Throughout the paper, we take $\sqrt{A}\coloneqq\sum_i\sqrt{\lambda_i}\ketbra{i}{i}$ to be the square root of an arbitrary diagonalizable operator $A=\sum_i\lambda_i\ketbra{i}{i}$. \begin{example}\label{ex::k2N4} The ensemble composed of a uniform probability distribution and $N=4$ qubit unitary channels given by $\{U_i\} = \{\mathbb{1},\sqrt{\sigma_x},\sqrt{\sigma_y},\sqrt{\sigma_z}\}$, in a discrimination task that allows for $k=2$ copies, can be discriminated under a sequential strategy with probability of success $P^\text{SEQ}=1$, while any parallel strategy yields $P^\text{PAR}<1$. \end{example} A straightforward sequential strategy that attains perfect discrimination of this ensemble can be constructed by first noting that $\sqrt{\sigma_i}\sqrt{\sigma_i}=\sigma_i$; hence, a simple composition of the unitary operators $U_i$ leads to the Pauli operators, which are perfectly discriminated with a bipartite maximally entangled state and a joint measurement in the Bell basis. The proof that any parallel strategy that applies two copies can never attain perfect discrimination is provided in Appendix~\ref{app::proofseq_vs_par} and applies the method of computer-assisted proofs developed in Ref.~\cite{bavaresco20}. Another example of this phenomenon is given by the ensemble $\{U_i\} = \{\mathbb{1},\sigma_x,\sigma_y,\sqrt{\sigma_z}\}$ with uniform probability distribution, which also satisfies $P^\text{PAR}<P^\text{SEQ}=1$ for a discrimination task with $k=2$ copies. In Appendix~\ref{app::proofseq_vs_par}, we show that such an ensemble can actually be discriminated perfectly by a sequential strategy that uses, on average, $1.5$ copies. 
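The two facts underlying this sequential strategy, namely that composing each $\sqrt{\sigma_i}$ with itself yields a Pauli operator and that the states $(\mathbb{1}\otimes\sigma_i)\ket{\Phi^+}$ form an orthonormal (Bell) basis, can be checked numerically. The sketch below is an illustration in NumPy, not the authors' implementation:

```python
import numpy as np

# Pauli operators
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def principal_sqrt(a):
    """Principal square root of a Hermitian matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(np.sqrt(vals.astype(complex))) @ vecs.conj().T

# Composing sqrt(sigma_i) with itself recovers sigma_i
for s in (sx, sy, sz):
    r = principal_sqrt(s)
    assert np.allclose(r @ r, s)

# Sequential strategy: feed half of |Phi+> through the channel twice,
# producing (1 ⊗ sigma_i)|Phi+>; these four states are orthonormal,
# so a Bell measurement identifies i with certainty.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
states = [np.kron(I2, s) @ phi_plus for s in (I2, sx, sy, sz)]
gram = np.array([[abs(np.vdot(a, b)) for b in states] for a in states])
assert np.allclose(gram, np.eye(4))  # perfectly distinguishable
```

The Gram-matrix check is equivalent to the identity $\bra{\Phi^+}(\mathbb{1}\otimes\sigma_i^\dagger\sigma_j)\ket{\Phi^+}=\tfrac{1}{2}\operatorname{Tr}(\sigma_i^\dagger\sigma_j)=\delta_{ij}$.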
The next example concerns a set of unitary channels that forms a group, but the probability distribution of the ensemble is not uniform. \begin{example}\label{ex::k2N8} Let $\{\mathbb{1},\,\sigma_x,\,\sigma_y,\,\sigma_z,\,H,\,\sigma_xH,\,\sigma_yH,\,\sigma_zH\}=\{U_i\}$ be a tuple of $N=8$ unitary channels that forms a group up to a global phase, and let $\{p_i\}$ be a probability distribution in which each element $p_i$ is proportional to the $i^\text{th}$ digit of the number $\pi\approx3.1415926$, that is, $\{p_i\}=\{\frac{3}{31},\frac{1}{31},\frac{4}{31},\ldots,\frac{6}{31}\}$. For the ensemble $\{p_i,U_i\}$, in a discrimination task that allows for $k=2$ copies, sequential strategies outperform parallel strategies, i.e., $P^\text{PAR}<P^\text{SEQ}$. \end{example} The proof of this example is also in Appendix~\ref{app::proofseq_vs_par}, and applies the same method of computer-assisted proofs. In Example~\ref{ex::k2N8}, we have set the distribution $\{p_i\}$ to be proportional to the $i^\text{th}$ digit of the constant $\pi$ to emphasize that the phenomenon of sequential strategies outperforming the parallel ones when the set of unitary channels forms a group does not require a particularly well-chosen non-uniform distribution. In practice, we have observed that with randomly generated distributions, optimal strategies often satisfy $P^\text{PAR}<P^\text{SEQ}$. In both of the aforementioned examples, general strategies do not outperform sequential strategies. However, for the case of discrimination of unitary channels using $k=3$ copies, we show that general strategies are, indeed, advantageous. \begin{theorem}\label{thm::gen_vs_seq_vs_par} There exist ensembles of unitary channels for which general strategies of discrimination outperform sequential strategies. \end{theorem} Let us start again with the case where the set of unitary channels does not form a group, but the probability distribution of the ensemble is uniform. 
For the following, we define $H_y\coloneqq\ketbra{+_y}{0}+\ketbra{-_y}{1}$, where $\ket{\pm_y}\coloneqq \frac{1}{\sqrt{2}}(\ket{0}\pm i\ket{1})$, and $H_P\coloneqq\ketbra{+_P}{0}+\ketbra{-_P}{1}$, where $\ket{+_P}\coloneqq \frac{1}{5}(3\ket{0} + 4\ket{1})$ and $\ket{-_P}\coloneqq \frac{1}{5}(4\ket{0} - 3\ket{1})$. \begin{figure*} \caption{Illustration of a $2$-copy sequential strategy that attains the same probability of successful discrimination as any $2$-copy switch-like strategy, for all sets of unitary channels $\{U_i\}_{i=1}^N$. Line ``c'' represents a control system, ``t'' represents a target system, and ``a'' represents an auxiliary system. Both strategies can be straightforwardly extended to $k$ copies (see Appendix~\ref{app::proofswitchlike}).} \label{fig:SL} \end{figure*} \begin{example}\label{ex::k3N4} For the ensemble composed of a uniform probability distribution and $N=4$ unitary channels given by $\{U_i\} = \{\sqrt{\sigma_x},\sqrt{\sigma_z},\sqrt{H_y},\sqrt{H_P}\}$, in a discrimination task that allows for $k=3$ copies, general strategies outperform sequential strategies and sequential strategies outperform parallel strategies. Therefore, the maximal probabilities of success satisfy the strict hierarchy $P^\text{PAR}<P^\text{SEQ}<P^\text{GEN}$. \end{example} The proof of this example can be found in Appendix~\ref{app::proofgen_vs_seq_vs_par}. \\ General strategies can also be advantageous for the discrimination of an ensemble composed of a non-uniform probability distribution and a set of unitary channels that forms a group. Let the set of unitary operators in Example~\ref{ex::k3N4} be the set of generators of a group (potentially with an infinite number of elements). Now, consider the ensemble composed of such a group and a probability distribution given by $p_i=\frac{1}{4}$ for the four values of $i$ corresponding to the four unitary operators which are the generators of the group, and $p_i=0$ otherwise. 
It is straightforward to see that the maximal probabilities of successfully discriminating this ensemble would be the same as the ones in Example~\ref{ex::k3N4}, hence satisfying $P^\text{PAR}<P^\text{SEQ}<P^\text{GEN}$. Although somewhat artificial, this example shows that advantages of general strategies are, indeed, possible for this kind of unitary channel ensemble. \\ Although general indefinite-causal-order strategies can be advantageous for the discrimination of unitary channels, this is not the case for one particular sub-class of general strategies: those which can be constructed from the quantum switch.~\cite{chiribella13-01} Let $V_{mn}$, with $m\in\{0,1\}$, $n\in\{0,1,2\}$, be unitary operators that act on a target and an auxiliary system and $U_1,U_2$ be unitary operators that act only on the target system. Finally, let $\{\ketbra{m}{m}^c\}_m$ be projectors that act on a control system. Then, we define the \textit{switch-like} superchannel, which transforms a pair of unitary channels into one unitary channel, according to \begin{align} \begin{split}\label{eq::slsc} \mathcal{W}_\text{SL}(U_1,U_2)\coloneqq &\ketbra{0}{0}^c \otimes V_{02}\,({U_2}\otimes\mathbb{1})\,V_{01}\,({U_1}\otimes\mathbb{1})\,V_{00} \\ + &\ketbra{1}{1}^c\otimes V_{12}\,({U_1}\otimes\mathbb{1})\,V_{11}\,({U_2}\otimes\mathbb{1})\,V_{10}, \end{split} \end{align} where $\mathbb{1}$ is the identity operator acting on the auxiliary system. In the case where $V_{mn}=\mathbb{1}\ \forall\,m,n$, one recovers the standard quantum switch.~\cite{chiribella13-01} The switch-like superchannel has been previously considered in Refs.~\cite{yokojima20,barrett20} in the context of reversibility-preserving transformations. 
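A direct consequence of Eq.~\eqref{eq::slsc} is that $\mathcal{W}_\text{SL}(U_1,U_2)$ is unitary for any choice of the interaction unitaries $V_{mn}$, reflecting the reversibility preservation mentioned above. The sketch below checks this numerically for qubit target and auxiliary systems with arbitrary illustrative choices of $V_{mn}$ (these specific choices are an assumption made only for the demonstration), and recovers the standard quantum switch when $V_{mn}=\mathbb{1}$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def switch_like(U1, U2, V):
    """W_SL(U1, U2) of Eq. (slsc) for qubit target/auxiliary systems.
    V is a dict {(m, n): 4x4 unitary} of interaction unitaries."""
    P0 = np.diag([1, 0]).astype(complex)   # |0><0| on the control
    P1 = np.diag([0, 1]).astype(complex)   # |1><1| on the control
    branch0 = V[0, 2] @ np.kron(U2, I2) @ V[0, 1] @ np.kron(U1, I2) @ V[0, 0]
    branch1 = V[1, 2] @ np.kron(U1, I2) @ V[1, 1] @ np.kron(U2, I2) @ V[1, 0]
    return np.kron(P0, branch0) + np.kron(P1, branch1)

# illustrative interaction unitaries (any unitaries work)
V = {(0, 0): CNOT, (0, 1): np.kron(H, I2), (0, 2): CNOT,
     (1, 0): np.kron(I2, H), (1, 1): CNOT, (1, 2): np.kron(H, H)}
W = switch_like(H, sx, V)
assert np.allclose(W.conj().T @ W, np.eye(8))  # W_SL is unitary

# with V_mn = identity one recovers the standard quantum switch
Vid = {key: np.eye(4, dtype=complex) for key in V}
Wsw = switch_like(H, sx, Vid)
assert np.allclose(Wsw, np.kron(np.diag([1, 0]), np.kron(sx @ H, I2))
                        + np.kron(np.diag([0, 1]), np.kron(H @ sx, I2)))
```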
Generalizations of the switch-like superchannel that transform $k$ instead of $2$ unitaries are presented in detail in Appendix~\ref{app::proofswitchlike}, applying unitaries $\{V_{mn}\}_{m,n}$, with $m\in\{0,\ldots,k!-1\}$ and $n\in\{0,\ldots,k\}$, and considering all permutations of the target unitaries $\{U_l\}_{l=1}^k$. Such $k$-slot switch-like superchannels have been shown to be implementable via coherent quantum control of causal orders.~\cite{wechs21} Now, let $W^\text{SL}\in\mathcal{L}(\mathcal{H}^P\otimes\mathcal{H}^I\otimes\mathcal{H}^O\otimes\mathcal{H}^F)$ be the $k$-slot switch-like process associated with the $k$-slot generalization of the switch-like superchannel in Eq.~\eqref{eq::slsc}. A general discrimination strategy, given by the $k$-copy switch-like tester $T^\text{SL}=\{T^\text{SL}_i\}$, $T^\text{SL}_i\in\mathcal{L}(\mathcal{H}^{I}\otimes\mathcal{H}^{O})$, can be constructed using the $k$-slot switch-like process $W^\text{SL}$, a quantum state $\rho\in\mathcal{L}(\mathcal{H}^P)$ that acts on the ``past'' space of the $k$ slots of $W^\text{SL}$, and a POVM $\{M_i\},M_i\in\mathcal{L}(\mathcal{H}^F)$, that acts on the ``future'' space, according to \begin{equation} T^\text{SL}_i\coloneqq \text{\normalfont Tr}_{PF}\Big[(\rho\otimes\mathbb{1})W^\text{SL}(\mathbb{1}\otimes M_i)\Big], \end{equation} where the identity operators $\mathbb{1}$ act on the corresponding complementary spaces. We show that such switch-like strategies exhibit no advantage over sequential strategies for the discrimination of $N$ unitary channels using $k$ copies. \begin{theorem}\label{thm::switchlike} The action of the switch-like process on $k$ copies of a unitary channel can be equivalently described by a sequential process that acts on $k$ copies of the same unitary channel. 
Consequently, in a discrimination task involving the ensemble $\mathcal{E}=\{p_i,U_i\}_i$ composed of $N$ unitary channels and some probability distribution, and that allows for $k$ copies, for every switch-like tester $\{T^\text{SL}_i\}_i$, there exists a sequential tester $\{T^\text{SEQ}_i\}_i$ that attains the same probability of success, according to \begin{equation} \sum_{i=1}^N p_i \text{\normalfont Tr}\Big(T^\text{SL}_i \dketbra{U_i}{U_i}^{\otimes k}\Big) = \sum_{i=1}^N p_i \text{\normalfont Tr}\Big(T^\text{SEQ}_i \dketbra{U_i}{U_i}^{\otimes k}\Big). \end{equation} \end{theorem} The proof can be found in Appendix~\ref{app::proofswitchlike}, where we provide a simple construction of a sequential strategy that performs as well as any switch-like strategy using the same number of copies for unitary channel discrimination. A graphical representation of such a strategy in the case of $k=2$ copies is given in Fig.~\ref{fig:SL}. \section{Uniformly sampled unitary channels} \label{sec:numeric} \begin{table}[t!] 
\begin{center} {\renewcommand{\arraystretch}{1.5} \begin{tabular}{| c | c | c |} \multicolumn{3}{c}{Uniformly sampling qubit unitary channels} \\ \hline $N$ & $k=2$ & $k=3$ \\ \hline \ \ $2$ \ \ & $\ \boldsymbol{P^\text{\textbf{PAR}}=P^\text{\textbf{SEQ}}}=P^\text{GEN} \ $ & $\ \boldsymbol{P^\text{\textbf{PAR}}=P^\text{\textbf{SEQ}}}=P^\text{GEN} \ $ \\ $3$ & $P^\text{PAR}=P^\text{SEQ}=P^\text{GEN}$ & $P^\text{PAR}<P^\text{SEQ}=P^\text{GEN}$ \\ $4$ & $P^\text{PAR}<P^\text{SEQ}=P^\text{GEN}$ & $P^\text{PAR}<P^\text{SEQ}<P^\text{GEN}$ \\ $\vdots$ & $\vdots$ & $\vdots$ \\ $10$ & $P^\text{PAR}<P^\text{SEQ}=P^\text{GEN}$ & $P^\text{PAR}<P^\text{SEQ}<P^\text{GEN}$ \\ $\vdots$ & $\vdots$ & $\vdots$ \\ $25$ & $P^\text{PAR}\approx P^\text{SEQ}=P^\text{GEN}$ & $P^\text{PAR}< P^\text{SEQ} \approx P^\text{GEN}$\\ \hline \end{tabular} } \end{center} \caption{Summary of numerical findings: Gaps between different strategies of discrimination using $k$ copies of ensembles of $N$ uniformly distributed qubit unitary channels sampled according to the Haar measure. The bold equalities on row $N=2$ denote known analytical results (see Ref.~\cite{chiribella08-1}). A strict inequality indicates that examples of ensembles that exhibit such gaps were encountered. An equality indicates that, for all sampled ensembles, no gap was encountered, up to numerical precision. The number of sampled sets ranged from $500$ to $50\,000$.} \label{tbl::table} \end{table} \begin{figure}\label{fig::plots_ratios} \end{figure} The advantage of sequential and general strategies in the discrimination of unitary channels is not restricted to the main examples given for Theorems~\ref{thm::seq_vs_par} and~\ref{thm::gen_vs_seq_vs_par}. In fact, by sampling sets of unitary operators uniformly distributed according to the Haar measure, and using these sets to construct ensembles with uniform probability distribution, one can find several other examples of the advantage of sequential and general strategies. 
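Sampling unitary operators uniformly from the Haar measure can be done with the standard QR-decomposition construction, in which the phases of the diagonal of $R$ are absorbed into $Q$. The sketch below is one possible implementation of such a sampler, not necessarily the one used for the numerics reported here:

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a d x d unitary from the Haar measure (QR construction).

    A complex Ginibre matrix is QR-decomposed; multiplying each column
    of Q by the phase of the matching diagonal entry of R removes the
    phase ambiguity of the decomposition and yields a Haar sample."""
    z = (rng.standard_normal((d, d))
         + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # multiplies column j of q by phases[j]

# an ensemble of N = 10 qubit unitary channels with uniform p_i = 1/10
rng = np.random.default_rng(42)
ensemble = [haar_unitary(2, rng) for _ in range(10)]
for u in ensemble:
    assert np.allclose(u.conj().T @ u, np.eye(2))
```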
A summary of our numerical findings is presented in Table~\ref{tbl::table}. \textit{Qubits, $k=2$}. In the scenario of qubit unitary channels and $k=2$ copies, we have observed gaps between the performance of parallel and sequential strategies for ensembles of $N\in\{4,\ldots,25\}$ unitary channels. For the case of $N=2$, it is known that such gaps do not exist,~\cite{chiribella08-1} and for the case of $N=3$, no gap was discovered. By calculating the averages of the maximal probabilities of success, we observed that the minimum ratio $\avg{P^\text{PAR}}/\avg{P^\text{SEQ}}$ occurred at $N=6$. At $N=25$, gaps were hardly detected. This behavior can be visualized in the plot of Fig.~\ref{fig::plots_ratios} (top). \textit{Qutrits, $k=2$}. For the case of qutrit unitary channels and $k=2$ copies, we discovered a gap between the performance of parallel and sequential strategies already for a discrimination task of only $N=3$ unitary channels, while in the qubit case, the first example of this phenomenon was found only for $N=4$. In this scenario as well, no advantages of general strategies were found. These results are not plotted. \textit{Qubits, $k=3$}. In the scenario of qubit unitary channels and $k=3$ copies, the advantage of general strategies over causally ordered ones (parallel and sequential) is common. Still considering uniformly sampled qubit unitary channels, in the $3$-copy case, we have found a strict hierarchy of discrimination strategies in scenarios of $N\in\{4,\ldots,19\}$. For $N\in\{20,\ldots,25\}$, the advantage of sequential strategies was clear but that of general strategies was hardly detected. Also in the case of $N=3$, only an advantage of sequential over parallel strategies was found. The minimum ratios of the averages $\avg{P^\text{PAR}}/\avg{P^\text{SEQ}}$ and $\avg{P^\text{SEQ}}/\avg{P^\text{GEN}}$ were found at $N=10$ and $N=11$, respectively. These results are plotted in Fig.~\ref{fig::plots_ratios} (bottom). 
The number of unitary channel ensembles sampled to calculate the average values of the maximal probabilities of success ranged from $50\,000$ for $d=2$, $k=2$, $N=2$ to $100$ for $d=3$, $k=2$, $N=3$. In Fig.~\ref{fig::plots_ratios}, one can see how the ratios between the maximal probabilities of success of different classes of strategies decrease as the number $N$ of uniformly sampled unitary channels being discriminated increases. This observation is in line with the idea that, in the limit where the ensemble is composed of all qubit unitary channels, therefore forming the group $SU(2)$, it is expected that parallel strategies would be optimal. In Sec.~\ref{sec::bounds}, we formally analyze the asymptotic behavior of $\avg{P^\mathcal{S}}$, while providing an absolute upper bound for the maximal probability of success under any strategy. \section{Ultimate upper bound for $P^\mathcal{S}$ for any set of unitary channels}\label{sec::bounds} We now present an upper bound for the maximal probability of success for discriminating a set of $N$ $d$-dimensional unitary channels with general strategies when $k$ copies are available. Our result applies to \textit{any} ensemble of unitary channels $\mathcal{E}=\{p_i,U_i\}_{i=1}^N$ where $p_i=\frac{1}{N}$ is a uniform probability distribution. Since general testers are the most general strategies that are consistent with a channel discrimination task, our result constitutes an ultimate upper bound for discriminating uniformly distributed unitary channels. Additionally, as we show later, our bound can be saturated by particular choices of unitary channels. \begin{theorem}[Upper bound for general strategies] \label{thm::upper_bound} Let $\mathcal{E}=\{p_i,U_i\}_{i=1}^N$ be an ensemble composed of $N$ $d$-dimensional unitary channels and a uniform probability distribution. 
The maximal probability of successful discrimination of a general strategy with $k$ copies is upper bounded by \begin{equation} P^\text{GEN}\leq \frac{1}{N}\,\gamma(d,k), \end{equation} where $\gamma(d,k)$ is given by \begin{equation} \gamma(d,k) \coloneqq {k+d^2-1\choose k}= \frac{(k+d^2-1)!}{k!(d^2-1)!}. \end{equation} \end{theorem} The proof of this theorem is in Appendix~\ref{app::upperbound}. \begin{figure}\label{fig::plots_limit} \end{figure} The upper bound given by Thm.~\ref{thm::upper_bound} can be attained by some particular choices of unitary channels that form a group (and hence, by Thm.~\ref{thm::unitarygroup}, attainable by a parallel strategy). In particular, it follows from Refs.~\cite{chiribella04,hayashi05,chiribella06} that when sets of unitary operators $\{U_i\}_i$ form group $k$-designs, there exists a parallel strategy such that \begin{align} P^\text{PAR}=\frac{1}{N}\,\gamma(d,k). \end{align} A set of unitary operators $\{ U_i\}_{i=1}^N$ forms a group $k$-design when the set forms a group and respects the relation \begin{equation}\label{eq::kdesign} \int_\text{Haar} U^{\otimes k}\,\rho\ {U^{\otimes k}}^\dagger \text{d}U = \frac{1}{N}\sum_{i=1}^N U_i^{\otimes k}\,\rho\ {U_i^{\otimes k}}^\dagger \end{equation} for every linear operator $\rho$. In Sec.~\ref{sec:numeric}, we analyzed the problem of discriminating $N$ unitary operators uniformly sampled according to the Haar measure. For any fixed dimension $d$ and number of copies $k$, if we sample a very large number $N$ of unitary operators $(N\to\infty)$, when the distribution $\{p_i\}_i$ is uniform, the maximal probability of success will be very small $(P^\mathcal{S}\to0)$. 
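As a quick numerical check of Theorem~\ref{thm::upper_bound}, the sketch below (illustrative code, assuming qubits) evaluates $\gamma(d,k)$ and verifies that the Pauli group satisfies the $1$-design condition of Eq.~\eqref{eq::kdesign}, so that for $d=2$, $k=1$, and $N=4$ the bound $P^\text{PAR}=\gamma(2,1)/4=1$ is saturated, consistent with the perfect single-copy discrimination of the Pauli channels:

```python
import numpy as np
from math import comb

def gamma(d, k):
    """gamma(d, k) = binom(k + d^2 - 1, k) from the upper-bound theorem."""
    return comb(k + d**2 - 1, k)

# values quoted in the text for qubits
assert gamma(2, 1) == 4 and gamma(2, 2) == 10 and gamma(2, 3) == 20

# The Pauli group is a group 1-design: averaging sigma rho sigma^dagger
# over {1, sx, sy, sz} reproduces the Haar average Tr(rho) 1/2, so the
# k = 1 bound gamma(2,1)/4 = 1 is attained by a parallel strategy.
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
rng = np.random.default_rng(7)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = a @ a.conj().T                      # a random positive operator
twirl = sum(s @ rho @ s.conj().T for s in paulis) / 4
assert np.allclose(twirl, np.trace(rho) * np.eye(2) / 2)
```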
At the same time, due to the uniform properties of the Haar measure, if we uniformly sample a large number $N$ of unitary operators, we obtain a set that approaches the group of all unitary operators of the given dimension, which is equivalent to the group $SU(d)$, approximately satisfying the group $k$-design condition in Eq.~\eqref{eq::kdesign}. Therefore, for large $N$, we have the asymptotic behavior \begin{equation} N \avg{P^\text{PAR}} \approx N \avg{P^\text{SEQ}} \approx N \avg{P^\text{GEN}}\approx \gamma(d,k). \end{equation} This argument can be made more rigorous by recognizing that $SU(d)$ is a compact Lie group, treating probabilities as probability densities, and maximizing likelihood.~\cite{chiribella04,chiribella06} As an example from our numerical calculations, in Fig.~\ref{fig::plots_limit}, for the case of $d=2$ and $k=2,3$, we can visualize the behavior of $N\avg{P^\mathcal{S}}$ as a function of $N$ and how it asymptotically approaches the constant $\gamma(d,k)$, which takes the values of $\gamma(2,2)=10$ and $\gamma(2,3)=20$, with increasing $N$. \section{Conclusion} We extended the unified tester formalism of Ref.~\cite{bavaresco20} to the case of $k$ copies and applied it particularly to the study of unitary channel discrimination. Our first contribution was to prove that, in a discrimination task of an ensemble of a set of unitary channels that forms a group and a uniform probability distribution, parallel strategies are always optimal, even when compared against the performance of general strategies. Subsequently, we showed an example of a unitary channel discrimination task in which a sequential strategy outperforms any parallel strategy. Our result constitutes the first demonstration of such a phenomenon, to the best of our knowledge. Our task involves two copies and four unitary channels, which can be perfectly discriminated with a sequential strategy but not with a parallel one. 
We explicitly provided the optimal discrimination strategy for this task. We also showed that general strategies that involve indefinite causal order are advantageous for the discrimination of unitary channels. Our simplest example of this phenomenon is a task of discriminating among four unitary channels using three copies. While our optimal parallel and sequential strategies can be straightforwardly implemented with (ordered) quantum circuits, a potential quantum realization of the optimal general strategies that are advantageous in this scenario remains an open problem. We then demonstrated that general strategies that are created from switch-like transformations, which are known to be, in principle, implementable with quantum operations,~\cite{wechs21} can never perform better than sequential strategies for unitary channel discrimination. The final result of our work was an upper bound for the maximal probability of success of any set of unitary channels using any number of copies under any strategy. This ultimate bound applies to any possible discrimination strategy and was shown to be tight, attained by discrimination tasks of unitary groups that form a $k$-design. The concrete examples of the advantages of sequential and general strategies in this work focused on discrimination tasks that use $k=2$ or $3$ copies. An open question of our work is how these strategy gaps would scale with larger values of $k$. The preliminary results presented here indicate that the advantage of sequential over parallel strategies and of general over sequential strategies should be even more accentuated as a higher number of copies are allowed. This idea is supported by the intuition that the number of different ways in which one can construct sequential strategies, as compared to parallel strategies, increases with the number of slots $k$. Similarly, we expect such a phenomenon to exist for the general case. 
It would then be interesting to determine exactly the rate at which these gaps increase with $k$. No advantage of general strategies was found in scenarios involving discrimination of unitary channels using only $k=2$ copies. We conjecture that, when considering $k=2$ copies, such an advantage is, indeed, not possible for any number $N$ of unitary channels. We also remark that, when considering $k=2$ copies, Refs.~\cite{yokojima20,barrett20} proved that superchannels that preserve reversibility (i.e., transform unitary channels into unitary channels) are necessarily of the switch-like form. Intuitively, it seems plausible that the optimal general strategy for discriminating unitary channels would be one that transforms unitary channels into unitary channels. This argument of reversibility preservation combined with our Theorem~\ref{thm::switchlike} might lead to a proof of our conjecture. Furthermore, we also conjecture that, when considering $N=2$ unitary channels, general strategies are not advantageous for any number of copies $k$. In this scenario, it has been proven that sequential strategies cannot outperform parallel ones,~\cite{chiribella08-1} and we believe this to also be the case for general strategies. The task of discriminating between two unitary channels can always be recast as the problem of discriminating a unitary operator from the identity operator. In the parallel case, the probability of successful discrimination has been shown to be related to the spread of the eigenvalues of this unitary operator.~\cite{acin01,dariano01} The proof presented in Ref.~\cite{chiribella08-1} explores how sequential strategies affect the spread of the eigenvalues of unitary operators, to conclude that they cannot outperform the parallel ones. A better understanding of how general strategies affect the spread of the eigenvalues of unitary operators could lead to a conclusive answer for this conjecture. 
Finally, an interesting open question that could follow our work is how quantum-specific are the phenomena presented here. For instance, Ref.~\cite{harrow10} showed that adaptive strategies may outperform parallel ones in classical channel discrimination; however, their examples did not concern a classical analog of unitary channels. Assuming the analog of a unitary channel in classical information theory to be a classical channel that maps deterministic probability distributions into deterministic probability distributions, i.e., channels that can be described by a permutation matrix, it would be interesting to investigate whether their discrimination could also be enhanced by sequential strategies. Additionally, Refs.~\cite{baumeler16,araujo16} showed that indefinite causal order also manifests itself in classical processes. Hence, an interesting question would be whether indefinite causality is also a useful resource for classical channel discrimination. \begin{acknowledgments} \noindent\textit{Acknowledgments.} The authors thank Cyril Branciard and an anonymous referee for the conference TQC~2022 for very useful comments on the first version of the manuscript. J.B. acknowledges funding from the Austrian Science Fund (FWF) through Zukunftskolleg ZK03 and START Project Y879-N27. M.M. was supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) under Grant No. JPMXS0118069605 and by the Japan Society for the Promotion of Science (JSPS) through KAKENHI Grants Nos. 17H01694, 18H04286, and 21H03394. M.T.Q. acknowledges the Austrian Science Fund (FWF) through the SFB project BeyondC (sub-project F7103), a grant from the Foundational Questions Institute (FQXi) as part of the Quantum Information Structure of Spacetime (QISS) Project (qiss.fr). 
The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. This project received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Marie Sk\l{}odowska-Curie Grant Agreement No. 801110 and the Austrian Federal Ministry of Education, Science and Research (BMBWF). It reflects only the authors' view and the EU Agency is not responsible for any use that may be made of the information it contains. \end{acknowledgments} \section*{Author declarations} The authors have no conflicts to disclose. \section*{Data Availability Statement} Data sharing is not applicable to this article as no new data were created or analyzed in this study. \\ The code developed for this study is openly available at our online repository.~\cite{githubMTQ2} \onecolumngrid \appendix \section*{Appendix} The appendix is composed of the following sections: Appendix~\ref{app:processmatrices} presents the definition of $2$- and $3$-slot process matrices in terms of positivity and linear normalization constraints and a small review of the formulation of the maximal probability of successful discrimination in terms of primal and dual SDP problems; Appendix~\ref{app::proofunitarygroup} presents the proof of Theorem~\ref{thm::unitarygroup}; Appendix~\ref{app::proofseq_vs_par} presents the proof of Examples~\ref{ex::k2N4} and~\ref{ex::k2N8}; Appendix~\ref{app::proofgen_vs_seq_vs_par} presents the proof of Example~\ref{ex::k3N4}; Appendix~\ref{app::proofswitchlike} presents the proof of Theorem~\ref{thm::switchlike}; and, finally, Appendix~\ref{app::upperbound} contains the proof of Theorem~\ref{thm::upper_bound}. \\ Some of the sections in the Appendix will make use of the \textit{link product}~\cite{chiribella07} between two linear operators, which is a useful mathematical tool to compose linear maps that are represented by their Choi operators. 
If $\map{C} \coloneqq \map{B}\circ \map{A}$ is the composition of the linear maps $\map{A}:\mathcal{L}(\mathcal{H}^1)\to\mathcal{L}(\mathcal{H}^2)$ and $\map{B}:\mathcal{L}(\mathcal{H}^2)\to\mathcal{L}(\mathcal{H}^3)$, the Choi operator of $\map{C}$ is given by $C=A*B$, where $A$ and $B$ are the Choi operators of $\map{A}$ and $\map{B}$, respectively, and $*$ stands for the link product, which we now define. Let $A\in\mathcal{L}(\mathcal{H}^1\otimes\mathcal{H}^2)$ and $B\in\mathcal{L}(\mathcal{H}^2\otimes\mathcal{H}^3)$ be linear operators. The link product $A*B\in\mathcal{L}(\mathcal{H}^1\otimes\mathcal{H}^3)$ is defined as \begin{equation} A^{12}*B^{23}\coloneqq \text{\normalfont Tr}_2\left[\Big((A^{12})^{T_2} \otimes \mathbb{1}^3\Big)\, \Big(\mathbb{1}^1 \otimes B^{23}\Big)\right], \end{equation} where $(\cdot)^{T_2}$ stands for the partial transposition on the linear space $\mathcal{H}^2$. We remark that identifying the linear spaces on which the operators act is an important part of the link product; moreover, if we keep track of these linear spaces, the link product is commutative and associative. \section{Semidefinite programming formulation and linear constraints for general processes of $k=2$ and $k=3$ slots}\label{app:processmatrices} For the sake of completeness, in the following, we present a characterization of general processes with $k=2$ and $k=3$ slots, which are bipartite and tripartite process matrices, in terms of positivity constraints and linear normalization constraints. While positivity is a consequence of physical considerations, the linear normalization constraints are a direct implication of Eq.~\eqref{eq::wnorm}. We refer to Ref.~\cite{araujo15} for a detailed explanation of the method and Ref.~\cite{quintino19} for the explicit characterization of general processes with $k=3$ slots. 
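The link product defined in the preamble above is straightforward to implement. The following sketch (illustrative NumPy code, assuming for simplicity that all spaces are qubits) computes $A^{12}*B^{23}$ directly from the definition and checks the characteristic property that the Choi operator of a composed unitary map equals the link product of the individual Choi operators:

```python
import numpy as np

d = 2  # assume every space is a qubit, for simplicity

def choi(U):
    """Choi operator |U>><<U| with the |U>> = sum_i |i> ⊗ U|i> convention."""
    v = np.kron(np.eye(d), U) @ np.eye(d).reshape(-1).astype(complex)
    return np.outer(v, v.conj())

def link(A, B):
    """Link product A^{12} * B^{23} for three d-dimensional spaces."""
    # partial transpose of A on the second space
    At = A.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d*d, d*d)
    # (A^{T_2} ⊗ 1^3)(1^1 ⊗ B^{23})
    M = np.kron(At, np.eye(d)) @ np.kron(np.eye(d), B)
    # partial trace over the middle space 2
    M6 = M.reshape(d, d, d, d, d, d)
    return np.einsum('iakjal->ikjl', M6).reshape(d*d, d*d)

U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
V = np.array([[0, 1], [1, 0]], dtype=complex)                # sigma_x
# Choi of the composition V∘U equals the link product of the Choi operators
assert np.allclose(link(choi(U), choi(V)), choi(V @ U))
```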
We recall that we represent the trace-and-replace operation on $\mathcal{H}^X$ as $_{X}(\cdot)=\text{\normalfont Tr}_X(\cdot)\otimes\frac{\mathbb{1}^X}{d_X}$, where $d_X=\text{dim}(\mathcal{H}^X)$. Let $W\in\mathcal{L}(\mathcal{H}^{A_IA_O}\otimes\mathcal{H}^{B_IB_O})$ be a $2$-slot (bipartite) process matrix. Then, \begin{align} W&\geq0 \\ \text{\normalfont Tr}(W) &= d_{A_O}d_{B_O} \\ _{A_IA_O}W &= _{A_IA_OB_O}W \\ _{B_IB_O}W &= _{A_OB_IB_O}W \\ W + _{A_OB_O}W &= _{A_O}W + _{B_O}W. \end{align} Now let $W\in\mathcal{L}(\mathcal{H}^{A_IA_O}\otimes\mathcal{H}^{B_IB_O}\otimes\mathcal{H}^{C_IC_O})$ be a $3$-slot (tripartite) process matrix. Then, \begin{align} W&\geq0 \\ \text{\normalfont Tr}(W) &= d_{A_O}d_{B_O}d_{C_O} \\ _{A_IA_OB_IB_O}W &= _{A_IA_OB_IB_OC_O}W \\ _{B_IB_OC_IC_O}W &= _{A_OB_IB_OC_IC_O}W \\ _{A_IA_OC_IC_O}W &= _{A_IA_OB_OC_IC_O}W \\ _{A_IA_O}W + _{A_IA_OB_OC_O}W &= _{A_IA_OB_O}W + _{A_IA_OC_O}W \\ _{B_IB_O}W + _{A_OB_IB_OC_O}W &= _{A_OB_IB_O}W + _{B_IB_OC_O}W \\ _{C_IC_O}W + _{A_OB_OC_IC_O}W &= _{A_OC_IC_O}W + _{B_OC_IC_O}W \\ W + _{A_OB_O}W + _{A_OC_O}W + _{B_OC_O}W &= _{A_O}W + _{B_O}W + _{C_O}W + _{A_OB_OC_O}W. 
\end{align} We also recall that the maximal probability of successful discrimination in Eq.~\eqref{eq:SDP} can be equivalently expressed by the primal SDP problem \begin{flalign}\label{sdp::primal} \begin{aligned} \textbf{given}\ \ &\{p_i,C_i\} \\ \textbf{maximize}\ \ &\sum_i p_i \text{\normalfont Tr}\left(T^\mathcal{S}_i\,C_i^{\otimes k}\right) \\ \textbf{subject to}\ \ &T^\mathcal{S}_i\geq 0\ \forall\,i, \ \ \ \sum_iT^\mathcal{S}_i=W^\mathcal{S}, \end{aligned}&& \end{flalign} or the dual problem \begin{flalign}\label{sdp::dual} \begin{aligned} \textbf{given}\ \ &\{p_i,C_i\} \\ \textbf{minimize}\ \ &\lambda \\ \textbf{subject to}\ \ &p_i\,C_i^{\otimes k}\leq \lambda\,\overline{W}^\mathcal{S} \ \ \forall\,i, \end{aligned}&& \end{flalign} the latter of which can be straightforwardly phrased as an SDP by absorbing the coefficient $\lambda$ into $\overline{W}^\mathcal{S}$, the dual affine of the process matrix $W^\mathcal{S}$~\cite{bavaresco20}. For parallel processes $W^\text{PAR}$, their dual affine $\overline{W}^\text{PAR}$ is a general $k$-partite quantum channel; for sequential processes $W^\text{SEQ}$, their dual affine $\overline{W}^\text{SEQ}$ is a $k$-partite channel with memory; and for general processes $W^\text{GEN}$, their dual affine $\overline{W}^\text{GEN}$ is a $k$-partite non-signaling channel. We refer to Ref.~\cite{bavaresco20} for more details. In principle, any feasible solution of the primal problem gives a lower bound on $P^\mathcal{S}$ and any feasible solution of the dual problem gives an upper bound. From strong duality, it is guaranteed that the optimal values of both problems coincide. Both problems are used in the computer-assisted-proof method applied in this paper. \section{Proof of Theorem~\ref{thm::unitarygroup}}\label{app::proofunitarygroup} \setcounter{theorem}{0} We start this section with Lemma~\ref{lemma:bruto}, which plays a central role in the proof of Theorem~\ref{thm::unitarygroup} and may be of independent interest.
The theorems presented in this section employ methods that are similar to the ones in Refs.~\cite{dariano01,bisio13}, which exploit the covariance of processes to parallelize strategies. \begin{lemma}\label{lemma:bruto} Let $\{T_U\}_U$, $T_U\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^O)$, be a general $k$-slot tester associated with the general process $W\coloneqq\sum_U T_U$, that respects the commutation relation \begin{equation} W^{IO} \, \left(\mathbb{1} \otimes U^{\otimes k}\right)^{IO} = \left(\mathbb{1} \otimes U^{\otimes k}\right)^{IO}\, W^{IO} , \end{equation} for every unitary operator $U\in\mathcal{L}(\mathbb{C}^d)$ from a set $\{U\}_U$. Then, there exists a parallel $k$-slot tester $\{T_U^\text{PAR}\}_U$ such that \begin{equation} \text{\normalfont Tr}\Big(T^\text{PAR}_U \dketbra{U}{U}^{\otimes k} \Big) = \text{\normalfont Tr}\Big( T_U \dketbra{U}{U}^{\otimes k}\Big) \quad \forall \ U\in\{U\}_U. \end{equation} Moreover, this parallel tester can be written as $T_U^\text{PAR}=\rho^{I'I}*M_U^{I'O}$, where $\mathcal{H}^{I'}$ is an auxiliary space which is isomorphic to $\mathcal{H}^I$, $\rho\in\mathcal{L}(\mathcal{H}^{I'}\otimes\mathcal{H}^{I})$ is a quantum state defined by \begin{equation} \rho^{I'I} \coloneqq {\sqrt{W}^T}^{I'I}\,\dketbra{\mathbb{1}}{\mathbb{1}}^{I'I}\,{\sqrt{W}^T}^{I'I} , \end{equation} and $\{M_U\}_U$ is a POVM defined by \interfootnotelinepenalty=10000 \footnote{Here, $\sqrt{W}^{-1}$ stands for the inverse of $\sqrt{W}$ on its range. If the operator $W$ is not full-rank, the composition $W\,W^{-1}=:\Pi_W$ is not the identity $\mathbb{1}$ but the projector onto the subspace spanned by the range of $\sqrt{W}$. Due to this technicality, when the operator $W$ is not full-rank, we should define the measurements as $ M_U^{I'O} \coloneqq {\sqrt{W}^{-1}}^{I'O}\, {T_U}^{I'O}\, {\sqrt{W}^{-1}}^{I'O} + \frac{1}{N}(\mathbb{1}-W W^{-1})$.
With that, the proof written here also applies to the case where the operator $W$ is not full-rank.} \begin{equation} M_U^{I'O} \coloneqq {\sqrt{W}^{-1}}^{I'O}\, {T_U}^{I'O}\, {\sqrt{W}^{-1}}^{I'O}. \end{equation} \end{lemma} \begin{proof} We start our proof by verifying that $\rho\in\mathcal{L}(\mathcal{H}^{I'}\otimes\mathcal{H}^{I})$ is a valid quantum state. The operator $\rho$ is positive semidefinite because it is a composition of positive semidefinite operators and the normalization condition follows from \begin{align} \text{\normalfont Tr}(\rho) =&\text{\normalfont Tr}\left({\sqrt{W}^T}^{I'I}\,\dketbra{\mathbb{1}}{\mathbb{1}}^{I'I}\,{\sqrt{W}^T}^{I'I} \right) \\ =&\text{\normalfont Tr}\left({W^T}^{I'I}\dketbra{\mathbb{1}}{\mathbb{1}}^{I'I}\right) \\ =&\text{\normalfont Tr}\left({W}^{I'I}{\dketbra{\mathbb{1}}{\mathbb{1}}^T}^{I'I}\right) \\ =&\text{\normalfont Tr}\left({W}^{I'I}\dketbra{\mathbb{1}}{\mathbb{1}}^{I'I}\right) \\ =& 1, \end{align} where the last equation holds because, since $W$ is a general process, it satisfies $\text{\normalfont Tr}(WC)=1$ for any $C$ which is the Choi operator of a channel. Let us now verify that the set of operators $\{ M_U\}_U$ forms a valid POVM. For that it is enough to recognize that all operators $M_U$ are compositions of positive semidefinite operators that add up to the identity, according to \begin{align} \sum_U M_U =&{\sqrt{W}^{-1}}^{I'O}\, {\sum_U T_U}^{I'O}\, {\sqrt{W}^{-1}}^{I'O}\\ =&{\sqrt{W}^{-1}}^{I'O}\, W^{I'O}\, {\sqrt{W}^{-1}}^{I'O}\\ =&\mathbb{1}^{I'O}. \end{align} The relation $\sqrt{W}^{-1}\,W\,\sqrt{W}^{-1}=\mathbb{1}$ can be shown by writing $W$ in an orthonormal basis as $W=\sum_i \alpha_i \ketbra{i}{i}$ and $\sqrt{W}^{-1}=\sum_i\alpha_i^{-1/2}\ketbra{i}{i}$. 
Recall that for any unitary operator $U$, we have the identity $\dketbra{U}{U}^T=\dketbra{U^*}{U^*}$, and that if $C^{IO}$ is the Choi operator of a linear map $\map{C}:\mathcal{L}(\mathcal{H}^I)\to\mathcal{L}(\mathcal{H}^O)$ and $\rho^{I'I}\in\mathcal{L}(\mathcal{H}^{I'}\otimes\mathcal{H}^I)$ is a linear operator, it holds that $\rho^{I'I}*C^{IO}= \left((\map{\mathbb{1}}\otimes\map{C})(\rho^{I'I})\right)^{I'O}$. In addition, if a diagonalizable operator $W^{IO}$ commutes with $\left(\mathbb{1} \otimes U^{\otimes k}\right)^{IO}$, its positive semidefinite square root $\sqrt{W}$ also commutes with \interfootnotelinepenalty=10000 \footnote{Indeed, two diagonalizable operators $A$ and $B$ commute if and only if they are simultaneously diagonalizable, that is, diagonal in a common basis. Now, since $A=\sum_i\alpha_i\ketbra{i}{i}$, its square root, $\sqrt{A}\coloneqq\sum_i\sqrt{\alpha_i}\ketbra{i}{i}$, is by definition also diagonal in the same basis. Hence, if $A$ commutes with $B$, $\sqrt{A}$ also commutes with $B$.} $\left(\mathbb{1} \otimes U^{\otimes k}\right)^{IO}$; hence, we have \begin{equation} \label{eq:root_commute} \sqrt{W}\,(\mathbb{1}\otimes {U}^{\otimes k})=(\mathbb{1}\otimes {U}^{\otimes k})\, {\sqrt{W}}. \end{equation} By taking the complex conjugation on both sides of Eq.\,\eqref{eq:root_commute} and exploiting the fact that $\sqrt{W}=\sqrt{W}^\dagger$ implies $\sqrt{W}^T=\sqrt{W}^*$, it holds that \begin{equation} {\sqrt{W}^T}\,(\mathbb{1}\otimes {U^*}^{\otimes k})=(\mathbb{1}\otimes {U^*}^{\otimes k})\,{\sqrt{W}^T} .
\end{equation} With these identities in hand, we can evaluate the link product $\rho^{I'I}*\left(\dketbra{U^{\otimes k}}{U^{\otimes k}}^T\right)^{IO}$, which will be used in the next step of the proof, to obtain \begin{align} \rho^{I'I} * \left({\dketbra{U}{U}^{\otimes k}}^T\right)^{IO} =& \rho^{I'I} * \left({\dketbra{U^*}{U^*}^{\otimes k}}\right)^{\, IO} \\ =& \left[(\mathbb{1}\otimes {U^*}^{\otimes k}) \,\rho\, (\mathbb{1}\otimes {U^T}^{\otimes k})\right]^{I'O} \\ =& \left[(\mathbb{1}\otimes {U^*}^{\otimes k}) \,\sqrt{W}^T\, \dketbra{\mathbb{1}}{\mathbb{1}} \,{\sqrt{W}^T}\, (\mathbb{1}\otimes {U^T}^{\otimes k})\right]^{I'O} \\ =& \left[{\sqrt{W}^T}\, (\mathbb{1}\otimes {U^*}^{\otimes k}) \,\dketbra{\mathbb{1}}{\mathbb{1}}\, (\mathbb{1}\otimes {U^T}^{\otimes k})\, {\sqrt{W}^T}\right]^{I'O} \\ =& \left({\sqrt{W}^T} \,\dketbra{U^*}{U^*}^{\otimes k}\, {\sqrt{W}^T} \right)^{I'O}. \label{eq::trick} \end{align} We now finish the proof by verifying that \begin{align} \text{\normalfont Tr}\left(T_U^\text{PAR} \, \dketbra{U}{U}^{\otimes k}\right) &= \text{\normalfont Tr}\left[ (\rho^{I'I}*M_U^{I'O}) \, {\dketbra{U}{U}^{\otimes k}}^{IO} \right] \\ &= (\rho^{I'I}*M_U^{I'O}) * {(\dketbra{U}{U}^{\otimes k})^T}^{\,IO} \\ &= M_U^{I'O} * \left(\rho^{I'I} * {\dketbra{U^*}{U^*}^{\otimes k}}^{\,IO}\right) \\ &= M_U^{I'O} * \left({\sqrt{W}^T} \,\dketbra{U^*}{U^*}^{\otimes k}\, {\sqrt{W}^T} \right)^{I'O} \quad\quad\quad \text{(applying Eq.~\eqref{eq::trick})} \\ &= \text{\normalfont Tr}\left[M_U^{I'O} {\left( {\sqrt{W}^T} \,\dketbra{U^*}{U^*}^{\otimes k}\, {\sqrt{W}^T} \right)^{T}}^{I'O}\right]\\ &= \text{\normalfont Tr}\left[M_U^{I'O} \left( {\sqrt{W}} \,\dketbra{U}{U}^{\otimes k}\, {\sqrt{W}} \right)^{I'O}\right]\\ &= \text{\normalfont Tr}\left[ \left( {\sqrt{W}^{-1}}\,T_U\,{\sqrt{W}^{-1}}\right)^{I'O} \left({\sqrt{W}}\,\dketbra{U}{U}^{\otimes k}\,{\sqrt{W}} \right)^{I'O}\right]\\ &= \text{\normalfont Tr}\left(T_U \, \dketbra{U}{U}^{\otimes k}\right). 
\end{align} \end{proof} Now, we prove Theorem~\ref{thm::unitarygroup}. \begin{theorem} Let $\mathcal{E}=\{p_U,\dketbra{U}{U}\}_U$ be an ensemble of unitary channels where the set $\{U\}_U$ of unitary operators $U\in\mathcal{L}(\mathbb{C}^d)$ forms a group up to a global phase---that is, there exist real numbers $\phi_i$ such that \begin{itemize} \item $e^{i \phi_\mathbb{1}}\mathbb{1} \in \{U\}_U$ \item If $A \in \{U\}_U$, then $e^{i\phi_A}A^{-1}\in \{U\}_U$ \item If $A,B \in \{U\}_U$, then $e^{i\phi_{AB}}AB\in \{U\}_U$, \end{itemize} and the distribution $\{p_U\}_U$ is uniform---that is, if the set has $N$ elements, $p_U=\frac{1}{N}$. Then, for any number of uses $k$ and every general tester $\{T^\text{GEN}_U\}_U$, $T^\text{GEN}_U\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^O)$, there exists a parallel tester $\{T^\text{PAR}_U\}_U$, $T^\text{PAR}_U\in\mathcal{L}(\mathcal{H}^I\otimes\mathcal{H}^O)$, such that \begin{equation} \frac{1}{N} \sum_{U\in\{U\}_U} \text{\normalfont Tr}\left(T^\text{GEN}_U \dketbra{U}{U}^{\otimes k}\right) = \frac{1}{N} \sum_{U\in\{U\}_U} \text{\normalfont Tr}\left(T^\text{PAR}_U \dketbra{U}{U}^{\otimes k}\right). \end{equation} \end{theorem} Before presenting the proof, we recall that unitary operators that are equal up to a global phase represent the same unitary channel. That is, if $U'=e^{i \phi} U:\mathcal{H}^I\to\mathcal{H}^O$ is a linear operator, its associated map is given by \begin{align} \map{U'}(\rho)&=U'\rho {U'}^\dagger \\ &=e^{i\phi}e^{-i\phi}U\rho {U}^\dagger \\ &=U\rho {U}^\dagger \\ &=\map{U}(\rho) \end{align} and its Choi operator $\dketbra{U}{U}$ respects \begin{align} \dketbra{U}{U}=\dketbra{U'}{U'}. \end{align} Due to this fact, the two sets of operators $\{U_i\}_i$ and $\{e^{i\phi_i}U_i\}_i$ represent the same set of quantum channels.
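This phase-invariance of the Choi operator is easily confirmed numerically; in the following minimal Python sketch (an illustrative aside), the Hadamard gate and the phase $e^{0.7 i}$ are arbitrary choices:

```python
import numpy as np

def choi_vec(U):
    # |U>> = (1 (x) U)(|00> + |11>)
    return np.kron(np.eye(2, dtype=complex), U) @ np.array([1, 0, 0, 1], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard, as an example
U_phase = np.exp(0.7j) * H                                   # same channel, different global phase

C = np.outer(choi_vec(H), choi_vec(H).conj())
C_phase = np.outer(choi_vec(U_phase), choi_vec(U_phase).conj())
assert np.allclose(C, C_phase)        # identical Choi operators ...
assert not np.allclose(H, U_phase)    # ... despite distinct unitary operators
```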
\begin{proof} The proof goes as follows: we start by using the general tester $\{T_U^\text{GEN}\}_U$ to construct another general tester $\{T_U\}_U$ which obeys \begin{equation} \frac{1}{N} \sum_{U\in\{U\}_U} \text{\normalfont Tr}\left(T^\text{GEN}_U \dketbra{U}{U}^{\otimes k}\right) = \frac{1}{N} \sum_{U\in\{U\}_U} \text{\normalfont Tr}\left(T_U \dketbra{U}{U}^{\otimes k}\right). \end{equation} Then, we prove that the general tester $\{T_U\}_U$ we defined respects the hypothesis of Lemma~\ref{lemma:bruto}. Hence, there exists a parallel tester $\{T_U^\text{PAR}\}_U$ which is equivalent to $\{T_U\}_U$ when acting on the set of unitary operators $\{U\}_U$. Let us start by defining the general tester $\{T_U\}_U$ as: \begin{equation} T_U \coloneqq \frac{1}{N} \sum_{V\in\{U\}_U} \left( \mathbb{1}^{I}\otimes V^{\dagger^{\otimes k}}\right)^{IO} \,{T^\text{GEN}_{VU}}^{\,IO}\, \left( \mathbb{1}^{I}\otimes V^{{\otimes k}}\right)^{IO}, \end{equation} where $VU$ stands for the standard operator composition up to a global phase, that is, if $VU$ is not in the set $\{U\}_U$, we pick $e^{i\phi_{VU}} VU$, which is ensured to be an element of $\{U\}_U$. Before proceeding, we should verify that the set of operators $\{T_U\}_U$ is indeed a valid general tester. Note that since $T_U$ is a composition of positive semidefinite operators, it holds that $T_U\geq0$ for every $U$. We now show that $W\coloneqq\sum_U T_U$ is a valid general process. 
First, note that \begin{align} W &\coloneqq \sum_U T_U \\ &= \sum_U \frac{1}{N} \sum_V (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \,T^\text{GEN}_{VU}\, (\mathbb{1}\otimes V^{{\otimes k}}) \\ &= \frac{1}{N} \sum_U \sum_V (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \,T^\text{GEN}_{V(V^{-1}U)}\, (\mathbb{1}\otimes V^{{\otimes k}}) \label{eq:group_trick} \\ &= \frac{1}{N} \sum_U \sum_V (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \,T^\text{GEN}_{U}\, (\mathbb{1}\otimes V^{{\otimes k}}) \\ &= \frac{1}{N} \sum_V (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \,\sum_U T^\text{GEN}_{U}\, (\mathbb{1}\otimes V^{{\otimes k}}) \\ &= \frac{1}{N} \sum_V (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \,W^\text{GEN}\, (\mathbb{1}\otimes V^{{\otimes k}}), \end{align} where $W^\text{GEN}\coloneqq\sum_U T^\text{GEN}_U$ and, in Eq.~\eqref{eq:group_trick}, we have used the change of variable $U\mapsto V^{-1} U $, which does not affect the sum because the set $\{U\}_U$ is a group. Note also that, if $C^{IO}$ is the Choi operator of a quantum channel, the operator defined by \begin{equation} C'^{IO}\coloneqq \frac{1}{N} \sum_V (\mathbb{1}\otimes V^{{\otimes k}})^{IO} \,C^{IO}\, (\mathbb{1}\otimes V^{\dagger^{\otimes k}})^{IO}, \end{equation} is a valid channel since it is positive semidefinite and $\text{\normalfont Tr}_O(C'^{IO})=\text{\normalfont Tr}_O(C^{IO})=\mathbb{1}^I$. It then follows that, for every quantum channel of the form $C=\bigotimes_{i=1}^k C_i^{I_iO_i}$, we have \begin{align} \text{\normalfont Tr}(W^{IO}C^{IO}) &= \frac{1}{N} \text{\normalfont Tr}\left[\sum_V (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \,W^\text{GEN}\, (\mathbb{1}\otimes V^{{\otimes k}}) \,C\right] \\ &= \frac{1}{N} \text{\normalfont Tr}\left[W^\text{GEN} \sum_V (\mathbb{1}\otimes V^{{\otimes k}}) \,C\, (\mathbb{1}\otimes V^{\dagger^{\otimes k}})\right] \\ &= \text{\normalfont Tr}(W^{IO}C'^{IO}) \\ &=1, \end{align} ensuring that $\{T_U\}_U$ is a valid general tester. 
The next step is to show that the tester $\{T_U\}_U$ attains the same success probability for discriminating the ensemble $\mathcal{E}=\{p_U,\dketbra{U}{U}\}_U$ as the tester $\{T^\text{GEN}_U\}_U$. This claim follows from direct calculation, that is, \begin{align} \frac{1}{N} \sum_U \text{\normalfont Tr}(T_U \dketbra{U}{U}^{\otimes k}) &= \frac{1}{N^2} \sum_U\sum_V \text{\normalfont Tr}\left[ (\mathbb{1}\otimes V^{\dagger^{\otimes k}})\, T^\text{GEN}_{VU}\, (\mathbb{1}\otimes V^{{\otimes k}})\, \dketbra{U}{U}^{\otimes k} \right] \\ &= \frac{1}{N^2} \sum_U\sum_V \text{\normalfont Tr}\left[ T^\text{GEN}_{VU}\, (\mathbb{1}\otimes V^{{\otimes k}})\, \dketbra{U}{U}^{\otimes k}\, (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \right] \\ &= \frac{1}{N^2} \sum_U\sum_V\text{\normalfont Tr}\Big( T^\text{GEN}_{VU}\, \dketbra{VU}{VU}^{\otimes k} \Big) \label{eq:group_trick2} \\ &= \frac{1}{N^2} \sum_U\sum_V\text{\normalfont Tr}\left( T^\text{GEN}_{V\left(V^{-1}U\right)}\, \dketbra{V\left(V^{-1}U\right)}{V\left(V^{-1}U\right)}^{\otimes k}\right) \\ &= \frac{1}{N^2} \sum_U \sum_V \text{\normalfont Tr}\Big( T^\text{GEN}_{U}\, \dketbra{U}{U}^{\otimes k}\Big) \\ &= \frac{1}{N} \sum_U \text{\normalfont Tr}\Big( T^\text{GEN}_{U}\, \dketbra{U}{U}^{\otimes k}\Big). \label{eq::tgen=tu} \end{align} The final step is to verify that the process $W\coloneqq\sum_U T_U$ commutes with $\mathbb{1}\otimes U^{{\otimes k}}$ for every unitary operator $U\in\{U\}_U$ to ensure that the tester $\{T_U\}_U$ fulfills the hypothesis of Lemma~\ref{lemma:bruto}. 
Direct calculation shows that \begin{align} (\mathbb{1}\otimes U^{{\otimes k}}) \,W\, (\mathbb{1}\otimes {U^\dagger}^{{\otimes k}}) =& (\mathbb{1}\otimes U^{{\otimes k}}) \frac{1}{N} \sum_V (\mathbb{1}\otimes V^{\dagger^{\otimes k}}) \,W^\text{GEN}\, (\mathbb{1}\otimes V^{{\otimes k}}) (\mathbb{1}\otimes {U^\dagger}^{{\otimes k}}) \\ =& \frac{1}{N} \sum_V \left(\mathbb{1}\otimes \left(UV^\dagger\right)^{\otimes k}\right) \,W^\text{GEN}\, \left( \mathbb{1}\otimes \left(VU^\dagger\right)^{{\otimes k}}\right) \\ =& \frac{1}{N} \sum_V \Big[\mathbb{1}\otimes (U(VU)^\dagger)^{\otimes k}\Big] \,W^\text{GEN}\, \Big[\mathbb{1}\otimes ((VU)U^\dagger)^{{\otimes k}}\Big] \\ =& \frac{1}{N} \sum_V (\mathbb{1}\otimes {V^\dagger}^{\otimes k}) \,W^\text{GEN}\, (\mathbb{1}\otimes V^{{\otimes k}}) \\ =& W. \end{align} Hence, we have that \begin{equation} W^{IO} \, \left(\mathbb{1} \otimes U^{\otimes k}\right)^{IO} = \left(\mathbb{1} \otimes U^{\otimes k}\right)^{IO}\, W^{IO}, \end{equation} and by Lemma~\ref{lemma:bruto}, one can construct a parallel tester $\{T_U^\text{PAR}\}_U$ which respects \begin{equation} \text{\normalfont Tr}\Big(T^\text{PAR}_U \dketbra{U}{U}^{\otimes k} \Big) = \text{\normalfont Tr}\Big( T_U \dketbra{U}{U}^{\otimes k}\Big) \quad \forall \ U\in\{U\}_U, \end{equation} and therefore, by applying Eq.~\eqref{eq::tgen=tu}, we have \begin{equation} \frac{1}{N}\sum_U\text{\normalfont Tr}\Big(T^\text{PAR}_U \dketbra{U}{U}^{\otimes k} \Big) = \frac{1}{N}\sum_U\text{\normalfont Tr}\Big(T^\text{GEN}_U \dketbra{U}{U}^{\otimes k}\Big), \end{equation} concluding our proof. \end{proof} \section{Proof of Examples~\ref{ex::k2N4} and~\ref{ex::k2N8}}\label{app::proofseq_vs_par} \setcounter{example}{0} The examples in this section show the advantage of sequential strategies over parallel strategies in channel discrimination tasks that involve only unitary channels and use $k=2$ copies. In the examples of this section, general strategies cannot outperform sequential ones.
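Before turning to the examples, the mechanics of the primal and dual bounds of Eqs.~\eqref{sdp::primal} and~\eqref{sdp::dual} can be illustrated on a toy case: discriminating the two orthogonal unitary channels $\mathbb{1}$ and $\sigma_x$ with $k=1$ copy and uniform priors (the choices in the Python sketch below are for illustration only; here the dual bound $\lambda=1$ is trivially tight because the two Choi vectors are orthogonal). The dual certificate $\overline{W}=(C_1+C_2)/2$ certifies $P^\text{PAR}\leq 1$, and the prepare-and-Bell-measure strategy attains it:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def choi_vec(U):
    # |U>> = (1 (x) U)(|00> + |11>), so C_U = |U>><<U|
    return np.kron(I2, U) @ np.array([1, 0, 0, 1], dtype=complex)

C = [np.outer(choi_vec(U), choi_vec(U).conj()) for U in (I2, X)]
p = [0.5, 0.5]

# dual certificate: lambda = 1 with W_bar = (C_1 + C_2)/2, a valid channel Choi operator
W_bar = (C[0] + C[1]) / 2
TrO = np.einsum('iaja->ij', W_bar.reshape(2, 2, 2, 2))   # Tr_O(W_bar) = 1^I
assert np.allclose(TrO, I2)
for i in range(2):
    # p_i C_i <= lambda W_bar, hence P^PAR <= lambda = 1
    assert np.linalg.eigvalsh(W_bar - p[i] * C[i]).min() > -1e-12

# primal side: send half of |phi+> through the channel and Bell-measure
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
succ = sum(p[i] * abs(np.vdot(choi_vec(U) / np.sqrt(2), np.kron(I2, U) @ phi)) ** 2
           for i, U in enumerate((I2, X)))
assert np.isclose(succ, 1.0)   # the dual bound is attained
```

The same two ingredients, an operator inequality for the dual and an explicit tester for the primal, underlie the computer-assisted proofs invoked below, where the certified values are strictly smaller than one.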
We recall that in the following $\sigma_x$, $\sigma_y$, and $\sigma_z$ denote the Pauli operators and $H\coloneqq\ketbra{+}{0}+\ketbra{-}{1}$ denotes the Hadamard gate, where $\ket{\pm}\coloneqq \frac{1}{\sqrt{2}}(\ket{0}\pm \ket{1})$. In addition, if $A\in\mathcal{L}(\mathbb{C}^d)$ is a linear operator with spectral decomposition $A=\sum_i \alpha_i \ketbra{\psi_i}{\psi_i}$, its square root is defined as $\sqrt{A}:=\sum_i \sqrt{\alpha_i}\ketbra{\psi_i}{\psi_i}$. We start by proving Example~\ref{ex::k2N4} from the main text. It concerns the discrimination of an ensemble composed of a uniform probability distribution and a set of unitaries that does not form a group. \begin{example} The ensemble composed of a uniform probability distribution and $N=4$ qubit unitary channels given by $\{U_i\} = \{\mathbb{1},\sqrt{\sigma_x},\sqrt{\sigma_y},\sqrt{\sigma_z}\}$, in a discrimination task that allows for $k=2$ copies, can be discriminated under a sequential strategy with success probability $P^\text{SEQ}=1$, while any parallel strategy yields $P^\text{PAR}<1$. \end{example} \begin{proof} The sequential strategy that attains perfect discrimination is easily understood by realizing that when the $k=2$ copies of the unitaries $\{U_i\}$ are applied in sequence, one recovers $U_i\,U_i = \sqrt{\sigma_i}\,\sqrt{\sigma_i} = \sigma_i$, where $\sigma_1=\mathbb{1},\sigma_2=\sigma_x,\sigma_3=\sigma_y,\sigma_4=\sigma_z$. Therefore, the task reduces to the discrimination of the four Pauli operators with $k=1$ copy, which can be perfectly realized with a two-qubit maximally entangled state and a Bell measurement. In order to show that the probability $P^\text{PAR}$ of discriminating these unitary channels with $k=2$ copies in a parallel strategy is strictly less than one, we make use of the dual problem associated with the SDP that computes the maximal probability of successful discrimination, given in Eq.~\eqref{sdp::dual}.
Hence, in order to obtain an upper bound for the maximal success probability $P^\text{PAR}$, it is enough to find a value $\lambda<1$ and the Choi state of a quantum channel $\overline{W}$, that is, $\overline{W}\geq0$ and $\text{\normalfont Tr}_O(\overline{W})=\mathbb{1}^I$, that respect \begin{equation} \label{eq:upper_bound} \frac{1}{4}\dketbra{U_i}{U_i}^{\otimes 2}\leq \lambda \,\overline{W} \ \ \ \text{for } \ i\in\{1,2,3,4\}. \end{equation} Using the computer-assisted-proof method presented in Ref.~\cite{bavaresco20}, we obtain an operator $\overline{W}$ that satisfies all the quantum channel conditions exactly and, for $\lambda=\frac{9571}{10000}$, satisfies the inequality~\eqref{eq:upper_bound}. Hence, \begin{equation} P^\text{PAR}\leq\frac{9571}{10000}. \end{equation} In our online repository~\cite{githubMTQ2} we present a Mathematica notebook that can be used to verify that $\overline{W}$ is a valid Choi state of a quantum channel. \end{proof} Another similar example with an interesting property is given by the unitary channel ensemble composed of a uniform probability distribution and $\{U_i\} = \{\mathbb{1},{\sigma_x},{\sigma_y},\sqrt{\sigma_z}\}$. To analyze this example as well, let us start by constructing a perfect sequential strategy. We start by noting that the four Bell states can be written as: \begin{align*} &\ket{\phi^+}\coloneqq \frac{\ket{00}+\ket{11}}{\sqrt{2}}=\left(\mathbb{1}\otimes\mathbb{1}\right) \ket{\phi^+} , \quad\quad\quad\quad \ket{\phi^-}\coloneqq \frac{\ket{00}-\ket{11}}{\sqrt{2}}=\left(\mathbb{1}\otimes\sigma_z\right) \ket{\phi^+},\\ &\ket{\psi^+}\coloneqq \frac{\ket{01}+\ket{10}}{\sqrt{2}}=\left(\mathbb{1}\otimes\sigma_x\right) \ket{\phi^+}, \quad\quad\quad\, \ket{\psi^-}\coloneqq \frac{\ket{01}-\ket{10}}{\sqrt{2}}=(-i)\left(\mathbb{1}\otimes\sigma_y\right) \ket{\phi^+}.
\end{align*} Additionally, since we take $\sqrt{\sigma_z}=\ketbra{0}{0}+i\ketbra{1}{1}$, we can check that the state \begin{equation} \left( \mathbb{1}\otimes \sqrt{\sigma_z} \right) \ket{\phi^+} = \frac{\ket{00}+i\ket{11}}{\sqrt{2}} \end{equation} is orthogonal to $\ket{\psi^+}$ and $\ket{\psi^-}$. We will now exploit these identities to construct a sequential strategy that attains $P^\text{SEQ}=1$ with $k=2$ uses. The strategy goes as follows: Define the auxiliary space $\mathcal{H}^{\text{aux}_1}$ to be isomorphic to $\mathcal{H}^{I_1}$ and prepare the initial state $\rho\in\mathcal{L}(\mathcal{H}^{I_1}\otimes\mathcal{H}^{\text{aux}_1})$ as \begin{equation} \rho^{I_1\,\text{aux}_1}\coloneqq\ketbra{\phi^+}{\phi^+}^{I_1\text{aux}_1}. \end{equation} The state $\rho^{I_1\text{aux}_1}$ will then be subjected to the first copy of a unitary channel $U_i$, leading to the output state $\rho^{O_1\,\text{aux}_1}$ \begin{equation} \rho^{O_1\,\text{aux}_1} = \left(U_i \otimes \mathbb{1}^{\text{aux}_1} \right)\rho^{I_1\,\text{aux}_1}\left(U_i \otimes \mathbb{1}^{\text{aux}_1} \right)^\dagger = \left(U_i \otimes \mathbb{1}^{\text{aux}_1} \right)\ketbra{\phi^+}{\phi^+}^{I_1\,\text{aux}_1}\left(U_i \otimes \mathbb{1}^{\text{aux}_1}\right)^\dagger. \end{equation} Then, we apply the following transformation on the state $\rho^{O_1\,\text{aux}_1}$. We perform a projective measurement with POVM elements given by \begin{align} M_{\psi^+}&\coloneqq \ketbra{\psi^+}{\psi^+} \\ M_{\psi^-}&\coloneqq \ketbra{\psi^-}{\psi^-} \\ M_\phi&\coloneqq \ketbra{\phi^+}{\phi^+} +\ketbra{\phi^-}{\phi^-} \end{align} and, after the measurement, re-prepare the quantum system in the state $\rho^{I_2\,\text{aux}_2}=\map{M_i}(\rho)=\sqrt{M_i}\rho\sqrt{M_i}^\dagger$ with probability $\text{\normalfont Tr}\big(\map{M_i}(\rho)\big)=\text{\normalfont Tr}(\rho M_i)$. This transformation can be described by a L\"uders instrument. 
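The outcome statistics of this intermediate measurement can be checked numerically. In the Python sketch below (an illustrative aside, not part of the argument), the unknown unitary acts on the first tensor factor and the auxiliary system is the second:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sqrt_sz = np.diag([1, 1j])                      # sqrt(sigma_z) = |0><0| + i|1><1|

phi_p = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2)
psi_p = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

M_psi_p = np.outer(psi_p, psi_p.conj())
M_psi_m = np.outer(psi_m, psi_m.conj())
M_phi = np.outer(phi_p, phi_p.conj()) + np.outer(phi_m, phi_m.conj())

def prob(M, U):
    # probability of outcome M after the first use of the unknown unitary U
    v = np.kron(U, I2) @ phi_p
    return (v.conj() @ M @ v).real

assert np.isclose(prob(M_psi_p, sx), 1.0)       # sigma_x -> outcome psi+
assert np.isclose(prob(M_psi_m, sy), 1.0)       # sigma_y -> outcome psi-
assert np.isclose(prob(M_phi, I2), 1.0)         # identity -> outcome phi
assert np.isclose(prob(M_phi, sqrt_sz), 1.0)    # sqrt(sigma_z) -> outcome phi

# after outcome phi, a second use separates the two remaining candidates
v_id = phi_p                                    # (1 (x) 1)^2 |phi+> = |phi+>
v_sz = np.kron(sqrt_sz @ sqrt_sz, I2) @ phi_p   # -> |phi->
assert np.isclose(abs(np.vdot(v_id, v_sz)), 0.0)
```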
It can be checked that, if $U_i=\sigma_x$, one obtains the outcome associated with $M_{\psi^+}$ with probability one. Similarly, if $U_i=\sigma_y$, one obtains the outcome associated with $M_{\psi^-}$ with probability one. Hence, in these two cases, we have perfect channel discrimination. Now, if we obtain the outcome associated with $M_\phi$, the unitary $U_i$ can be either $\mathbb{1}$ or $\sqrt{\sigma_z}$. After performing the projective measurement with elements $\{M_{\psi^+},M_{\psi^-},M_\phi\}$ and a L\"uders instrument, the state $\rho^{I_2\,\text{aux}_2}$ is subjected to a second copy of $U_i$. Direct calculation shows that if $U_i=\mathbb{1}$, then after the use of the second copy of the unitary $U_i$, the state $\rho^{O_2\,\text{aux}_2}$ of the system is \begin{equation} \left(\mathbb{1}\otimes\mathbb{1}\right)^2 \ket{\phi^+}=\ket{\phi^+}. \end{equation} If $U_i=\sqrt{\sigma_z}$, then after the second use of the unitary $U_i$, the state $\rho^{O_2\,\text{aux}_2}$ of the system is \begin{equation} \left(\mathbb{1}\otimes\sqrt{\sigma_z}\right)^2 \ket{\phi^+}= \ket{\phi^-}. \end{equation} Since $\ket{\phi^+}$ and $\ket{\phi^-}$ are orthogonal, they can be discriminated with probability one. Hence, the set of unitary operators $\{U_i\}_{i=1}^4$ can be perfectly discriminated in a sequential strategy with $k=2$ copies. Using the tester formalism, this sequential strategy can be presented in terms of a sequential tester $T^\text{SEQ}=\{T^\text{SEQ}_i\}$, which can be implemented by an input quantum state $\rho$, a quantum encoder channel $\map{E}$, and a quantum measurement $\{N_i\}_i$. For completeness, we now present an explicit sequential tester that attains $P^\text{SEQ}=1$. As in the strategy described earlier, we set the initial state as $\rho^{I_1\,\text{aux}_1}\coloneqq\ketbra{\phi^+}{\phi^+}^{I_1\,\text{aux}_1}$.
Now, instead of using an instrument, we define a quantum encoder channel $\map{E}: \mathcal{L}(\mathcal{H}^{O_1}\otimes\mathcal{H}^{\text{aux}_1})\to\mathcal{L}(\mathcal{H}^{I_2}\otimes\mathcal{H}^{\text{aux}_2}\otimes\mathcal{H}^{{\text{aux}'_2}})$ as \begin{equation} \map{E}(\rho) = \left(\sqrt{M_\phi}\,\rho\,\sqrt{M_\phi}^\dagger \right)\otimes M_\phi^{{\text{aux}'_2}} \, +\, \left(\sqrt{M_{\psi^+}}\,\rho\,\sqrt{M_{\psi^+}}^\dagger \right)\otimes M_{\psi^+}^{{\text{aux}'_2}} \, +\, \left(\sqrt{M_{\psi^-}}\,\rho\,\sqrt{M_{\psi^-}}^\dagger \right) \otimes M_{\psi^-}^{{\text{aux}'_2}}, \end{equation} so that $\rho^{I_2\,\text{aux}_2\,{\text{aux}'_2}}=\map{E}(\rho^{O_1\,\text{aux}_1})$. We finish our sequential tester construction by presenting the quantum measurement given by the operators $N_i\in\mathcal{L}(\mathcal{H}^{O_2}\otimes\mathcal{H}^{\text{aux}_2}\otimes\mathcal{H}^{{\text{aux}'_2}})$, \begin{align} N_1 &\coloneqq \ketbra{\phi^+}{\phi^+} \otimes M_\phi \\ N_2 &\coloneqq \ketbra{\phi^-}{\phi^-} \otimes M_\phi \\ N_3 &\coloneqq \mathbb{1} \otimes M_{\psi^+} \\ N_4 &\coloneqq \mathbb{1} \otimes M_{\psi^-}. \end{align} In this way, if $E$ is the Choi operator of the channel $\map{E}$, the sequential tester with elements $T^\text{SEQ}_i\coloneqq\rho*E*N_i^T$ respects $\sum_i\text{\normalfont Tr}(T^\text{SEQ}_i \dketbra{U_i}{U_i}^{\otimes 2})=1$. In order to show that the probability $P^\text{PAR}$ of discriminating these unitary channels with $k=2$ copies in a parallel strategy is strictly less than one, we apply the method of computer-assisted proof again to obtain the upper bound of \begin{equation} P^\text{PAR}\leq\frac{9741}{10000}. \end{equation} An interesting property of this example is that, with the above described sequential strategy, for the cases in which the unknown channel is either $U_2=\sigma_x$ or $U_3=\sigma_y$, a conclusive answer is achieved after only one use of the unknown channel.
From the uniform probability of the ensemble, we know that this scenario would occur with probability $\frac{1}{2}$. Only when the unknown channel is either $U_1=\mathbb{1}$ or $U_4=\sqrt{\sigma_z}$ is it necessary to use the second copy of the unknown channel to arrive at a conclusive answer. This scenario would also occur with probability $\frac{1}{2}$. If one considers this discrimination task as being performed repeatedly, always drawing the unknown channel with uniform probability from the ensemble $\{U_i\} = \{\mathbb{1},{\sigma_x},{\sigma_y},\sqrt{\sigma_z}\}$, then one can see that, on average over the multiple runs of the task, perfect discrimination will be achieved using only 1.5 copies of the unknown channel under this sequential strategy. \\ We now prove Example~\ref{ex::k2N8} from the main text. It concerns the discrimination of an ensemble composed of a non-uniform probability distribution and a set of unitaries that forms a group. \begin{example} Let $\{U_i\}=\{\mathbb{1},\,\sigma_x,\,\sigma_y,\,\sigma_z,\,H,\,\sigma_xH,\,\sigma_yH,\,\sigma_zH\}$ be a tuple of $N=8$ unitary channels that forms a group up to a global phase, and let $\{p_i\}$ be a probability distribution in which each element $p_i$ is proportional to the $i$-th digit of the number $\pi\approx3.1415926$, that is, $\{p_i\}=\{\frac{3}{31},\frac{1}{31},\frac{4}{31},\frac{1}{31},\frac{5}{31},\frac{9}{31},\frac{2}{31},\frac{6}{31}\}$. For the ensemble $\{p_i,U_i\}$, in a discrimination task that allows for $k=2$ copies, sequential strategies outperform parallel strategies, i.e., $P^\text{PAR}<P^\text{SEQ}$. \end{example} \begin{proof} The first step of the proof is to ensure that the tuple $\{\mathbb{1},\,\sigma_x,\,\sigma_y,\,\sigma_z,\,H,\,\sigma_xH,\,\sigma_yH,\,\sigma_zH\}$ forms a group up to a global phase. This is done by direct inspection. The second step of the proof is to ensure that there is a sequential strategy which outperforms any parallel one.
We accomplish this step with the aid of the computer-assisted-proof methods presented in Ref.~\cite{bavaresco20}. These methods allow us to compute rigorous and explicit upper and lower bounds for the maximal probability of success under parallel and sequential strategies. We obtain \begin{equation} \frac{8196}{10000} < P^\text{PAR} < \frac{8197}{10000} < P^\text{SEQ} < \frac{8198}{10000}, \end{equation} ensuring that $P^\text{PAR}<P^\text{SEQ}$. The code used in the computer-assisted proof of this example is publicly available at our online repository~\cite{githubMTQ2}, along with a Mathematica notebook, which shows that this set of unitaries forms a group. \end{proof} \section{Proof of Example~\ref{ex::k3N4} }\label{app::proofgen_vs_seq_vs_par} \setcounter{example}{2} The example in this section shows the advantage of general strategies over sequential strategies, and of sequential strategies over parallel strategies, in channel discrimination tasks that involve only unitary channels and use $k=3$ copies. We start by proving Example~\ref{ex::k3N4} from the main text. It concerns the discrimination of an ensemble composed of a uniform probability distribution and a set of unitaries that does not form a group. We recall that, for the following, we define $H_y\coloneqq\ketbra{+_y}{0}+\ketbra{-_y}{1}$, where $\ket{\pm_y}\coloneqq \frac{1}{\sqrt{2}}(\ket{0}\pm i\ket{1})$, and $H_P\coloneqq\ketbra{+_P}{0}+\ketbra{-_P}{1}$, where $\ket{+_P}\coloneqq \frac{1}{5}(3\ket{0} + 4\ket{1})$ and $\ket{-_P}\coloneqq \frac{1}{5}(4\ket{0} - 3\ket{1})$. \begin{example} For the ensemble composed of a uniform probability distribution and $N=4$ unitary channels given by $\{U_i\} = \{\sqrt{\sigma_x},\sqrt{\sigma_z},\sqrt{H_y},\sqrt{H_P}\}$, in a discrimination task that allows for $k=3$ copies, general strategies outperform sequential strategies, and sequential strategies outperform parallel strategies.
Therefore, the maximal probabilities of success satisfy the strict hierarchy $P^\text{PAR}<P^\text{SEQ}<P^\text{GEN}$. \end{example} \begin{proof} The proof follows from the direct application of the computer-assisted methods presented in Ref.~\cite{bavaresco20}. These methods allow us to find explicit and exact parallel/sequential/general testers that attain a given success probability, thus ensuring a lower bound on the maximal success probability for each class. In addition, we can obtain exact parallel/sequential/general upper bounds from the dual SDP formulation. The code used to obtain the computer-assisted proof of the present example is publicly available in our online repository~\cite{githubMTQ2}. The computed bounds for the maximal probability of successful discrimination are \begin{equation} \begin{alignedat}{3} \frac{9570}{10000} < P^\text{PAR} < \frac{9571}{10000} & && && \\ & < \frac{9876}{10000} < P^\text{SEQ} < \frac{9877}{10000} && && \\ & && && < \frac{9881}{10000} < P^\text{GEN} < \frac{9882}{10000}, \end{alignedat} \end{equation} showing the advantage of strategies that apply indefinite causal order over ordered ones and proving a strict hierarchy between strategies for the discrimination of a set of unitary channels. \end{proof} \section{Proof of Theorem~\ref{thm::switchlike}}\label{app::proofswitchlike} \setcounter{theorem}{3} In this section, we prove Thm.~\ref{thm::switchlike} from the main text, which concerns the inability of switch-like strategies to outperform sequential strategies on channel discrimination tasks that involve only unitary channels. We also repeat here Fig.~\ref{fig:SL} from the main text for the convenience of the reader. \setcounter{figure}{1} \begin{figure*} \caption{Illustration of a $2$-copy sequential strategy that attains the same probability of successful discrimination as any $2$-copy switch-like strategy, for all sets of unitary channels $\{U_i\}_{i=1}^N$.
Line ``c'' represents a control system, ``t'' represents a target system, and ``a'' represents an auxiliary system. Both strategies can be straightforwardly extended to $k$ copies.} \end{figure*} \begin{theorem} The action of the switch-like superchannel on $k$ copies of a unitary channel can be equivalently described by a sequential circuit that acts on $k$ copies of the same unitary channel. Consequently, in a discrimination task involving the ensemble $\mathcal{E}=\{p_i,U_i\}_i$ composed of $N$ unitary channels and some probability distribution and that allows for $k$ copies, for every switch-like tester $\{T^\text{SL}_i\}_i$, there exists a sequential tester $\{T^\text{SEQ}_i\}_i$ that attains the same probability of success, according to \begin{equation} \sum_{i=1}^N p_i \text{\normalfont Tr}\Big(T^\text{SL}_i \dketbra{U_i}{U_i}^{\otimes k}\Big) = \sum_{i=1}^N p_i \text{\normalfont Tr}\Big(T^\text{SEQ}_i \dketbra{U_i}{U_i}^{\otimes k}\Big). \end{equation} \end{theorem} In order to provide a better intuition on this result, before presenting the formal definition of the switch-like process with $k$ slots and proving Theorem~\ref{thm::switchlike} in full generality, we present a proof for the $k=2$ case that is illustrated in Fig.~\ref{fig:SL} in the main text. For the case $k=2$, the switch-like superchannel transforms a pair of unitary channels $\{U_1,U_2\}$ into one unitary channel, according to \begin{align} \begin{split} \mathcal{W}_\text{SL}(U_1,U_2)\coloneqq &\ketbra{0}{0}^c \otimes V_{02}\,({U_2}\otimes\mathbb{1}^a)\,V_{01}\,({U_1}\otimes\mathbb{1}^a)\,V_{00} + \ketbra{1}{1}^c\otimes V_{12}\,({U_1}\otimes\mathbb{1}^a)\,V_{11}\,({U_2}\otimes\mathbb{1}^a)\,V_{10}, \end{split} \end{align} where $\mathbb{1}^a$ is the identity operator acting on the auxiliary system and $V_{\pi i}$ are fixed unitary operators. 
Note that, if $U_1=U_2=U$, we have \begin{align} \begin{split} \mathcal{W}_\text{SL}(U,U)= &\ketbra{0}{0}^c \otimes V_{02}\,({U}\otimes\mathbb{1}^a)\,V_{01}\,({U}\otimes\mathbb{1}^a)\,V_{00} + \ketbra{1}{1}^c\otimes V_{12}\,({U}\otimes\mathbb{1}^a)\,V_{11}\,({U}\otimes\mathbb{1}^a)\,V_{10}. \end{split} \end{align} We now define a controlled version of the unitary operators $V_{0i}$ as \begin{equation} V^\text{ctrl}_{0 i}\coloneqq\ketbra{0}{0}^c\otimes V_{0i} + \ketbra{1}{1}^c \otimes \mathbb{1}^t \otimes \mathbb{1}^a, \end{equation} and a controlled version of $V_{1 i}$ as \begin{equation} V^\text{ctrl}_{1 i}\coloneqq\ketbra{0}{0}^c\otimes \mathbb{1}^t \otimes \mathbb{1}^a + \ketbra{1}{1}^c \otimes V_{1i}. \end{equation} We first note that, due to the orthogonality of $\ket{0}$ and $\ket{1}$, we have $V^\text{ctrl}_{1i}V^\text{ctrl}_{0i}= \ketbra{0}{0}^c\otimes V_{0i} + \ketbra{1}{1}^c \otimes V_{1i}$. Hence, a direct calculation shows that \begin{align} V^\text{ctrl}_{12}V^\text{ctrl}_{02}\,(\mathbb{1}^c\otimes{U}\otimes\mathbb{1}^a)\ \cdot\ V^\text{ctrl}_{11}V^\text{ctrl}_{01}\,(\mathbb{1}^c\otimes{U}\otimes\mathbb{1}^a) \ \cdot\ V^\text{ctrl}_{10}V^\text{ctrl}_{00} =& \left(\ketbra{0}{0}^c\otimes V_{02} + \ketbra{1}{1}^c \otimes V_{12}\right)(\mathbb{1}^c\otimes{U}\otimes\mathbb{1}^a)\ \cdot \nonumber\\ &\left(\ketbra{0}{0}^c\otimes V_{01} + \ketbra{1}{1}^c \otimes V_{11}\right)(\mathbb{1}^c\otimes{U}\otimes\mathbb{1}^a)\ \cdot \nonumber \\ &\left(\ketbra{0}{0}^c\otimes V_{00} + \ketbra{1}{1}^c \otimes V_{10}\right)\\ =&\mathcal{W}_\text{SL}(U,U). \end{align} This shows that, when $U_1=U_2=U$, a $2$-slot sequential circuit which performs the operations $V^\text{ctrl}_{12}V^\text{ctrl}_{02}$, $V^\text{ctrl}_{11}V^\text{ctrl}_{01}$, and $V^\text{ctrl}_{10}V^\text{ctrl}_{00}$ can perfectly simulate the two-slot switch-like superchannel. See Fig.~\ref{fig:SL} in the main text for an illustration.
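As a numerical sanity check of the identity above (this sketch is not part of the original paper; the dimensions, seeds, and random unitaries are arbitrary choices), one can build the controlled operators for random $V_{\pi i}$ and verify that the sequential product reproduces $\mathcal{W}_\text{SL}(U,U)$:

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_unitary(d):
    # Unitary obtained from the QR decomposition of a random complex matrix
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, _ = np.linalg.qr(z)
    return q

dt, da = 2, 2                       # target and auxiliary dimensions
P0 = np.diag([1.0, 0.0])            # |0><0| on the control
P1 = np.diag([0.0, 1.0])            # |1><1| on the control
I_ta = np.eye(dt * da)              # identity on target x auxiliary

V0 = [rand_unitary(dt * da) for _ in range(3)]   # V_00, V_01, V_02
V1 = [rand_unitary(dt * da) for _ in range(3)]   # V_10, V_11, V_12
U = rand_unitary(dt)
Ua = np.kron(U, np.eye(da))                      # U tensor 1^a

# Switch-like unitary W_SL(U, U)
W_SL = np.kron(P0, V0[2] @ Ua @ V0[1] @ Ua @ V0[0]) \
     + np.kron(P1, V1[2] @ Ua @ V1[1] @ Ua @ V1[0])

# Controlled versions V^ctrl_{0i}, V^ctrl_{1i}, and the full 1^c x U x 1^a
ctrl0 = [np.kron(P0, V0[i]) + np.kron(P1, I_ta) for i in range(3)]
ctrl1 = [np.kron(P0, I_ta) + np.kron(P1, V1[i]) for i in range(3)]
Ufull = np.kron(np.eye(2), Ua)

# Sequential circuit: V^ctrl_12 V^ctrl_02 (1 x U x 1) V^ctrl_11 V^ctrl_01 (1 x U x 1) V^ctrl_10 V^ctrl_00
seq = ctrl1[2] @ ctrl0[2] @ Ufull @ ctrl1[1] @ ctrl0[1] @ Ufull @ ctrl1[0] @ ctrl0[0]

print(np.allclose(seq, W_SL))  # True
```

The orthogonality of the two control projectors is what makes the pairwise products collapse to $\ketbra{0}{0}^c\otimes V_{0i} + \ketbra{1}{1}^c \otimes V_{1i}$, exactly as in the derivation.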
Now, we extend this result to an arbitrary finite number of copies $k$. \begin{definition}[Switch-like superchannel] \label{def:SL} Let each integer $\pi\in\{0,\ldots,k!-1\}$ label a permutation of the set $\{1,\ldots,k\}$, and let $\sigma_\pi:\{1,\ldots,k\}\to\{1,\ldots,k\}$ be the permutation function such that, after permutation $\pi$, the element $i\in\{1,\ldots,k\}$ is mapped to $\sigma_\pi(i)$. The $k$-slot switch-like superchannel acts on a set of $k$ unitary operators $\{U_i\}_{i=1}^k$, $U_i:\mathcal{H}^{I_i}\to\mathcal{H}^{O_i}$ according to \begin{equation} \mathcal{W}_\text{SL}(U_1,\ldots,U_k)\coloneqq \sum_{\pi=0}^{k!-1} \ketbra{\pi}{\pi}^c \otimes \left[V_{\pi k} \left(U_{\sigma_\pi(k)}\otimes\mathbb{1}^{a}\right) V_{\pi(k-1)} \left(U_{\sigma_\pi(k-1)}\otimes\mathbb{1}^{a}\right) V_{\pi(k-2)} \ldots \left(U_{\sigma_\pi(1)}\otimes\mathbb{1}^{a}\right) V_{\pi0} \right], \end{equation} where $\{V_{\pi n}\}_{\pi n}$ is a set of unitary operators defined as \begin{align} & V_{\pi0}:\mathcal{H}^{P_t}\otimes\mathcal{H}^{a}\to\mathcal{H}^{I_{\sigma_\pi(1)}}\otimes \mathcal{H}^{a} \\ & V_{\pi n}: \mathcal{H}^{O_{\sigma_\pi(n)}} \otimes\mathcal{H}^{a}\to\mathcal{H}^{I_{\sigma_\pi(n+1)}} \otimes \mathcal{H}^{a} \quad \quad \text{for}\, n\in\{1,\ldots,k-1\}\\ & V_{\pi k}:\mathcal{H}^{O_{\sigma_\pi(k)}}\otimes\mathcal{H}^{a}\to\mathcal{H}^{F_t}\otimes \mathcal{H}^{a} \end{align} \end{definition} Here, we have defined the switch-like superchannel only by its action on unitary channels, without explicitly stating how the switch-like superchannel acts on general quantum operations \interfootnotelinepenalty=10000 \footnote{Interestingly, it can be proved that for the ($k=2$)-slot case, the action of the standard quantum switch superchannel (i.e.
$V_{\pi n}=\mathbb{1}$) on unitary channels uniquely defines its action on general operations~\cite{dong21}. The possibility of extending this result to general switch-like superchannels is still open, but the existence of general switch-like superchannels is ensured by the construction presented via Eq.~\eqref{eq:SLexists}.} or its process $W^\text{SL}\in\mathcal{L}\left(\mathcal{H}^P\otimes\mathcal{H}^I\otimes\mathcal{H}^O\otimes\mathcal{H}^F\right)$. For the proof of Theorem~\ref{thm::switchlike} and for the main purpose of this paper, it suffices to know the action of switch-like superchannels on unitary channels; for the sake of concreteness, however, we also present an explicit process that implements the switch-like superchannel. For that, we define the process $W^\text{SL}:=\dketbra{U_\text{SL}}{U_\text{SL}}$ where \begin{equation} \label{eq:SLexists} U_\text{SL}:=\bigoplus_\pi V_{\pi k} V_{\pi (k-1)} \ldots V_{\pi 1} V_{\pi 0}. \end{equation} Following Lemma~1 in Ref.~\cite{yokojima20} (see also Theorem~2 of Ref.~\cite{araujo16}), one can verify that the process $W^\text{SL}$ acts on unitary operators according to the switch-like superchannel, as presented in Definition~\ref{def:SL}. \begin{proof} We start our proof by defining the generalized controlled operation \begin{equation} V^\text{ctrl}_n\coloneqq\sum_{\pi=0}^{k!-1} \ketbra{\pi}{\pi}^c \otimes V_{\pi n} \ \ \ \forall \ n\in\{0,\ldots,k\}, \end{equation} which is a valid unitary operator since $V^\text{ctrl}_n\left(V^\text{ctrl}_n\right)^\dagger=\mathbb{1}$. Now, note that, due to the orthogonality of the vectors $\ket{\pi}$, we have \begin{equation} V^\text{ctrl}_k(\mathbb{1}^c\otimes U\otimes\mathbb{1}^a)V^\text{ctrl}_{(k-1)}(\mathbb{1}^c\otimes U\otimes\mathbb{1}^a)V^\text{ctrl}_{(k-2)}\ldots(\mathbb{1}^c\otimes U\otimes\mathbb{1}^a) V^\text{ctrl}_0 = \mathcal{W}_\text{SL}(U,\ldots,U).
\end{equation} Hence, similarly to the $k=2$ case, a simple concatenation of the operators $V^\text{ctrl}_i$ provides a $k$-slot sequential quantum circuit that perfectly simulates the switch-like $k$-slot superchannel when all input unitary channels are equal. Since every sequential quantum circuit can be written as an ordered process $W^\text{SEQ}\in\mathcal{L}\left(\mathcal{H}^P\otimes\mathcal{H}^I\otimes\mathcal{H}^O\otimes\mathcal{H}^F\right)$~\cite{chiribella07}, when $k$ identical unitary operators $U$ are plugged into the process $W^\text{SEQ}$, the output operation is described by \begin{equation} W^\text{SEQ}*\dketbra{U}{U}^{\otimes k} = W^\text{SL}*\dketbra{U}{U}^{\otimes k}, \end{equation} where $*$ is the link product and $W^\text{SL}$ is a process associated with the switch-like superchannel. Hence, if \begin{equation} T^\text{SL}_i\coloneqq \text{\normalfont Tr}_{PF}[(\rho\otimes\mathbb{1})W^\text{SL}(\mathbb{1}\otimes M_i)] \end{equation} is the tester associated to the switch-like strategy, then one can construct a sequential tester \begin{equation} T^\text{SEQ}_i= \text{\normalfont Tr}_{PF}[(\rho\otimes\mathbb{1})W^\text{SEQ}(\mathbb{1}\otimes M_i)] \end{equation} such that, for any unitary operator $U$, one has \begin{equation} \text{\normalfont Tr}\left(T^\text{SL}_i\, \dketbra{U}{U}^{\otimes k}\right) = \text{\normalfont Tr}\left(T^\text{SEQ}_i\, \dketbra{U}{U}^{\otimes k}\right), \end{equation} ensuring that there is always a sequential tester that performs as well as any switch-like one. \end{proof} \section{Upper bound}\label{app::upperbound} We start this section by stating a lemma from Ref.~\cite{hashimoto10} which will be very useful for proving Thm.~\ref{thm::upper_bound}. We would also like to mention that step 2 of Ref.~\cite{korff04} and Thm.~3 of Ref.~\cite{chiribella06} are essentially equivalent to the lemma that we now state. 
\begin{lemma}[Ref.~\cite{hashimoto10,korff04,chiribella06}] \label{lemma:hashimoto} Let $E\in\mathcal{L}(\mathbb{C}^d)$ be a positive semidefinite operator, $U_g\in\mathcal{L}(\mathbb{C}^d)$ be unitary operators, and $G=\{U_g\}_g$ be a (compact Lie) group of unitary operators up to a global phase. It holds true that \begin{equation} E\leq \gamma_G \int_\text{Haar} U_g\,E\, U_g^\dagger \text{d}g, \end{equation} with \begin{equation} \gamma_G:=\sum_{\mu\in \text{irrep}\{G\}} d_\mu \min(m_\mu,d_\mu), \end{equation} where $\text{irrep}\{G\}$ is the set of all inequivalent irreducible representations (irreps) of $G$ in $\mathcal{L}(\mathbb{C}^d)$, $d_\mu$ is the dimension of the linear space corresponding to the irrep $\mu$, and $m_\mu$ is its associated multiplicity. \end{lemma} We are now in a position to prove Thm.~\ref{thm::upper_bound}. \begin{theorem}[Upper bound for general strategies] Let $\mathcal{E}=\{p_i,U_i\}_{i=1}^N$ be an ensemble composed of $N$ $d$-dimensional unitary channels and a uniform probability distribution. The maximal probability of successful discrimination of a general strategy with $k$ copies is upper bounded by \begin{equation} P^\text{GEN}\leq \frac{1}{N}\,\gamma(d,k), \end{equation} where $\gamma(d,k)$ is given by \begin{equation} \gamma(d,k) \coloneqq {k+d^2-1\choose k}= \frac{(k+d^2-1)!}{k!(d^2-1)!}. \end{equation} \end{theorem} \begin{proof} The dual formulation of the channel discrimination problem [see Eq.~\eqref{sdp::dual}] guarantees that if $\overline{W}$ is the Choi operator of a non-signaling channel and the constraints \begin{equation} \label{eq:upper_via_dual} p_i \dketbra{U_i}{U_i}^{\otimes k} \leq \lambda \overline{W} \quad \forall i\in\{1,\ldots,N\} \end{equation} are respected, the coefficient $\lambda$ is an upper bound for the maximal probability of successfully discriminating the ensemble $\mathcal{E}=\{p_i,U_i\}_{i=1}^N$ with $k$ copies.
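As a quick numerical illustration (not part of the original paper; this merely evaluates the binomial formula stated in the theorem above), the bound $\gamma(d,k)=\binom{k+d^2-1}{k}$ can be tabulated directly:

```python
from math import comb

def gamma(d, k):
    # gamma(d, k) = C(k + d^2 - 1, k), as stated in the theorem above
    return comb(k + d * d - 1, k)

# For qubit channels (d = 2), gamma grows only polynomially in the number of
# copies k, so the bound P_GEN <= gamma(2, k) / N is nontrivial once the
# number N of unitaries exceeds gamma(2, k).
print([gamma(2, k) for k in (1, 2, 3)])  # [4, 10, 20]
print(gamma(3, 2))                       # 45
```

For the $N=4$, $k=3$, $d=2$ example discussed earlier, the bound $\gamma(2,3)/N = 20/4$ exceeds $1$ and is therefore not binding; the bound becomes informative for larger ensembles.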
Our proof consists in explicitly presenting the Choi operator of a non-signaling channel $\overline{W}$ and a real number $\lambda$ that respect the constraints of Eq.~\eqref{eq:upper_via_dual} for any ensemble $\mathcal{E}=\{p_i,U_i\}_{i=1}^N$ with $p_i=\frac{1}{N}$. Consider the following ansatz: \begin{align} \overline{W}&:= \int_\text{Haar} \dketbra{U}{U}^{\otimes k} \text{d}U \\ \lambda&:= \frac{1}{N}\sum_{\mu\in\text{irrep}\{SU(d)^{\otimes k}\}} d_\mu^2, \end{align} where $\text{irrep}\{SU(d)^{\otimes k}\}$ is the set of all inequivalent irreducible representations (irreps) of $SU(d)^{\otimes k}$ and $d_\mu$ is the dimension of the linear space corresponding to the irrep $\mu$. Our first step is to show that $\overline{W}$ is the Choi operator of a non-signaling channel. The operator $\overline{W}$ is positive semidefinite because it is a convex mixture of positive semidefinite operators. Additionally, for any $d$-dimensional unitary operator $U$, we have that $\text{\normalfont Tr}(\dketbra{U}{U})=d$. Hence, from the normalization of the Haar measure, we have that $\text{\normalfont Tr}(\overline{W})=d^k$. The last step to certify that $\overline{W}$ is indeed a non-signaling channel is then to guarantee that if $j\in\{1,\ldots,k\}$ stands for a slot of our process, we have that \begin{equation} {}_{O_j}\overline{W}={}_{I_jO_j}\overline{W} \quad \forall j\in\{1,\ldots,k\}, \end{equation} where $\mathcal{H}_{I_j}$ and $\mathcal{H}_{O_j}$ correspond to the input and output spaces associated with slot $j$, respectively.
Since for any $\dketbra{U}{U}\in\mathcal{L}\big(\mathcal{H}_{I_j}\otimes\mathcal{H}_{O_j}\big)$ we have \begin{align} \text{\normalfont Tr}_{O_j} \Big( \dketbra{U}{U}\Big) &= \text{\normalfont Tr}_{O_j} \Big( \big( \mathbb{1} \otimes U \big)\,\dketbra{\mathbb{1}}{\mathbb{1}}\, \big( \mathbb{1}\otimes U^\dagger\big) \Big)\\ &=\text{\normalfont Tr}_{O_j} \Big( \big(\mathbb{1} \otimes U^\dagger U \big)\, \dketbra{\mathbb{1}}{\mathbb{1}} \Big) \\ &= \text{\normalfont Tr}_{O_j} \Big( \dketbra{\mathbb{1}}{\mathbb{1}} \Big) \\ &= \mathbb{1}_{I_j}, \end{align} it holds that \begin{align} {}_{O_j}\overline{W} &= \int_{\text{Haar}} \,{}_{O_j} \Big( \dketbra{U}{U}^{\otimes k}\Big) \text{d}U\\ &=\int_{\text{Haar}} \,{}_{O_j} \Big(\dketbra{U}{U}_{I_1O_1}\otimes\dketbra{U}{U}_{I_2O_2}\otimes \ldots \otimes\dketbra{U}{U}_{I_kO_k}\Big) \text{d}U \\ &={}_{I_jO_j}\overline{W}. \end{align} The last step of the proof is then to certify that for $p_i=\frac{1}{N}$, we indeed have \begin{equation} p_i \dketbra{U_i}{U_i}^{\otimes k} \leq \lambda \overline{W} \end{equation} for any set of unitary operators $\{U_i\}_{i=1}^N$. First, we observe that, due to the left and right invariance of the Haar measure, for any unitary operator $V\in SU(d)$, the operator $\overline{W}$ can be written as \begin{align} \overline{W}:=& \int_\text{Haar} \dketbra{U}{U}^{\otimes k} \text{d}U\\ =& \int_\text{Haar} \Big[\big(\mathbb{1}\otimes U\big)\dketbra{\mathbb{1}}{\mathbb{1}}\big( \mathbb{1}\otimes U^\dagger\big) \Big]^{\otimes k} \text{d}U \\ =& \int_\text{Haar} \Big[\big(\mathbb{1}\otimes(UVU^\dagger) U\big)\dketbra{\mathbb{1}}{\mathbb{1}}\big( \mathbb{1}\otimes U^\dagger (UVU^\dagger)^\dagger\big) \Big]^{\otimes k} \text{d}U \\ =& \int_\text{Haar} \Big[\big(\mathbb{1}\otimes U\big)\dketbra{V}{V}\big( \mathbb{1}\otimes U^\dagger\big) \Big]^{\otimes k} \text{d}U. \end{align} Additionally, the set $\big\{\mathbb{1}^{\otimes k}\otimes U^{\otimes k}\big\}_{U\in SU(d)}$ is a compact Lie group.
Moreover, the dimensions of the linear spaces associated with the irreducible representations of $\big\{\mathbb{1}^{\otimes k}\otimes U^{\otimes k}\big\}_{U\in SU(d)}$ coincide with the dimensions of the irreducible representations of $\big\{U^{\otimes k}\big\}_{U\in SU(d)}$. Since $\min(d_\mu,m_\mu)\leq d_\mu$, Lemma~\ref{lemma:hashimoto} ensures that \begin{align} \frac{1}{N} \dketbra{U_i}{U_i}^{\otimes k} &\leq \frac{1}{N} \left(\sum_{\mu\in\text{irrep}\{SU(d)^{\otimes k}\}} d_\mu^2\right) \int_\text{Haar} \Big[\big(\mathbb{1}\otimes U\big)\dketbra{U_i}{U_i}\big( \mathbb{1}\otimes U^\dagger\big) \Big]^{\otimes k} \text{d}U \\ &=\frac{1}{N} \left(\sum_{\mu\in\text{irrep}\{SU(d)^{\otimes k}\}} d_\mu^2\right) \int_\text{Haar} \Big[\big(\mathbb{1}\otimes U\big)\dketbra{\mathbb{1}}{\mathbb{1}}\big( \mathbb{1}\otimes U^\dagger\big) \Big]^{\otimes k} \text{d}U \\ &= \lambda \overline{W}. \end{align} Hence, $\lambda:=\frac{1}{N}\left(\sum_{\mu\in\text{irrep}\{SU(d)^{\otimes k}\}} d_\mu^2\right) $ is indeed an upper bound for the maximum probability of discriminating any set of $N$ $d$-dimensional unitary channels with $k$ copies using general strategies. We finish the proof by recognizing that, as proven in Ref.~\cite{schur} [p. 57, Eq.~(57)], we have the following identity: \begin{equation} \sum_{\mu\in\text{irrep}\{SU(d)^{\otimes k}\}} d_\mu^2 ={k+d^2-1\choose k}. \end{equation} \end{proof} \twocolumngrid \end{document}
Determine the value of the expression \[\log_2 (27 + \log_2 (27 + \log_2 (27 + \cdots))),\]assuming it is positive. Let \[x = \log_2 (27 + \log_2 (27 + \log_2 (27 + \dotsb))).\]Then \[x = \log_2 (27 + x),\]so $2^x = x + 27.$ To solve this equation, we plot $y = 2^x$ and $y = x + 27.$ [asy] unitsize(0.15 cm); real func (real x) { return(2^x); } draw(graph(func,-30,log(40)/log(2)),red); draw((-30,-3)--(13,40),blue); draw((-30,0)--(13,0)); draw((0,-5)--(0,40)); dot("$(5,32)$", (5,32), SE); label("$y = 2^x$", (10,16)); label("$y = x + 27$", (-18,18)); [/asy] By inspection, the graphs intersect at $(5,32).$ Beyond this point, the graph of $y = 2^x$ increases much faster than the graph of $y = x + 27,$ so the only positive solution is $x = \boxed{5}.$
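The fixed-point equation $2^x = x + 27$ can also be checked numerically: iterating $x \mapsto \log_2(27 + x)$ is a contraction near the solution (the derivative $1/((27+x)\ln 2)$ is far below $1$), so the nested expression converges. A short sketch:

```python
import math

# Iterate x_{n+1} = log2(27 + x_n); a fixed point satisfies 2^x = x + 27.
x = 0.0
for _ in range(60):
    x = math.log2(27 + x)

print(x)  # converges to 5.0, since 2^5 = 32 = 5 + 27
```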
Carlo Rosati (Livorno, 24 April 1876 – Pisa, 19 August 1929) was an Italian mathematician working on algebraic geometry who introduced the Rosati involution.[1][2] Notes 1. Scorza, G. (1929), "Carlo Rosati", Periodico (in Italian), 4 (9): 289–291 2. Scorza, G. (1929), "L'opera scientifica di C. Rosati", Bollettino di Matematica (in Italian), 2 (8): 125–127 References • Mumford, David (2008) [1970], Abelian varieties, Tata Institute of Fundamental Research Studies in Mathematics, vol. 5, Providence, R.I.: American Mathematical Society, ISBN 978-81-85931-86-9, MR 0282985, OCLC 138290 • Rosati, Carlo (1918), "Sulle corrispondenze algebriche fra i punti di due curve algebriche", Annali di Matematica Pura ed Applicata (in Italian), 3 (28): 35–60, doi:10.1007/BF02419717, S2CID 121620469 External links • Carlo Rosati in Mathematica Italiana
Spatial distribution and contamination assessment of heavy metal pollution of sediments in coastal reclamation areas: a case study in Shenzhen Bay, China Qiuying Zhang, Futian Ren, Xiangyun Xiong, Hongjie Gao, Yudong Wang, Wenjun Sun, Peifang Leng, Zhao Li & Yangwei Bai Environmental Sciences Europe, volume 33, Article number: 90 (2021) With the continuous advancement of global urbanisation, humans have begun to overutilise or improperly utilise the natural resources of bay areas, which has led to a series of ecological and environmental problems. To evaluate the spatial distributions and potential ecological risks of heavy metals in sediments of Shenzhen Bay, China, an analysis of As, Cd, Cr, Cu, Pb, and Zn regarding their content, correlation (Pearson coefficient), pollution degree, and potential ecological risks was conducted. The heavy metal contents in the sediments decreased in the order of Zn > Cu > Cr > Pb > As > Cd, with contents of 175.79 mg kg−1, 50.75 mg kg−1, 40.62 mg kg−1, 37.10 mg kg−1, 18.27 mg kg−1, and 0.20 mg kg−1, respectively. The results showed that the overall sediment quality in Shenzhen Bay generally met the China Marine Sediment Quality criteria, and the heavy metal contents were significantly lower than those reported in the same type of bay area worldwide. Furthermore, the grades of potential ecological risk of the heavy metals were as follows: As and Cd were found to pose moderate ecological risks, with their potential hazard indices reaching a high level, whereas the potential ecological hazard indices of Cu, Pb, Zn, and Cr were all at relatively low levels. The potential hazard indices of the heavy metals decreased from the inner bay toward the outside. The accumulation and content of the analysed heavy metals in the Shenzhen Bay sediments are mainly controlled by historical land-source pollution and land reclamation projects.
This study presents the current state of sediment quality in Shenzhen Bay. The results may assist in the definition of future bay area management measures specifically targeted at monitoring heavy metal contamination. Coastal bays adjacent to rapidly developing cities are an important resource for human survival and social development, and are regions of active land–ocean interaction that are sensitive to anthropogenic coastal occupation and global sea-level rise [29, 32]. Owing to their superior geographical locations and unique natural environments, they tend to have significant economic and natural values [23]. Many coastal bays around the world play an important role in port construction and house large capital cities with rapidly growing populations [2, 3]. However, with economic development, humans have begun to overutilise and improperly use this precious natural resource in some highly developed coastal bays, including Bohai Bay [6], Jiaozhou Bay [39], Zhelin Bay [13], and Quanzhou Bay [40] in China, and worldwide [1, 5, 14, 24, 28, 30, 39]. Moreover, owing to the weak hydrodynamic conditions of coastal bays, increasing amounts of heavy metal pollutants discharged into coastal waters cannot be diluted or degraded, which causes heavy metal pollution [25, 26]. Coastal bays lie in zones where marine and terrestrial systems interlace. Active land–sea interaction and frequent human activities result in increasingly complex environments, making these regions sensitive to both natural variability and anthropogenic pollution [43]. Pollutants such as heavy metals are transported to and accumulate in sediments, rendering sediments an important destination for pollutants.
When heavy metals enter coastal environments, they accumulate in sediments, which act as a sink, or are released from sediments by natural or anthropogenic disturbance (e.g., changes in pH, dissolved oxygen, or electrical conductivity), in which case they act as a source of heavy metals to the surface water, causing renewed pollution of the water body [25, 26]. Numerous studies have shown that > 90% of heavy metals are absorbed by suspended sediments, resulting in significant accumulation of heavy metals in coastal sediments [35]. Therefore, sediments are recognised as the most significant pollution sink and also a pollution source of heavy metals, and regular monitoring and assessment of heavy metals in sediments is a prerequisite for ecological risk evaluation [44]. Heavy metals in coastal sediments originate from natural processes (e.g., atmospheric input, soil erosion, and rock weathering, which set the background values of heavy metals) and anthropogenic activities (e.g., industrial discharge, mining, agriculture, transportation, and wastewater) [22, 33]. Anthropogenic inputs are the main source of pollution in the marine environment, and heavy metals are increasingly introduced into estuarine and coastal environments by draining rivers and oceanic dumping [21]. Heavy metals are recognised as a group of pollutants with high ecological risks because they are not removed from water via self-purification [10]. They can accumulate in suspended particles and sediments, be released back into the water body under favourable conditions, enter the food web, and cause health problems [10, 27]. The Shenzhen area is located in the Pearl River Delta region and is a densely populated, economically developed centre in southern China. As the most successful special economic zone in China, it has experienced rapid industrialisation, urbanisation, and population growth over the past 40 years [36].
Consequently, both the Pearl River Estuary and Shenzhen Bay (SZB) have experienced severe contamination from human activities [15]. Since China's reform and opening up, heavy metals have accumulated over the last two decades in the sediments of the western coast of Shenzhen and have become a serious pollution concern [17]. Previous studies have shown that the heavy metal content in sediments of the SZB has increased in recent years [20, 31, 36, 38, 45]. However, currently available data on heavy metals in the SZB are insufficient for investigating the present quality and pollution levels of the sediments following the implementation of water pollution control in 2015. Therefore, the objectives of this study were to: (1) determine the spatial distributions of As, Cd, Cr, Cu, Zn, and Pb in sediments of the SZB; (2) assess the heavy metal contamination status and potential environmental risks of the sediments using the geo-accumulation and potential ecological risk indices; (3) identify possible sources of heavy metals through Pearson correlation and cluster analysis; and (4) analyse the impact of reclamation on heavy metals in the SZB sediments. This study is helpful for understanding the variability of the heavy metal status in the SZB and provides detailed information for coastal environmental management. This study was conducted in the SZB (22°24′–22°32′N, 113°53′–114°02′E), a semi-enclosed coastal embayment located in the east of the Pearl River Estuary in Guangdong Province, China. The northern half of the SZB belongs to Shenzhen, while the southern half belongs to the Hong Kong Special Administrative Region. It is approximately 17 km long and 4–10 km wide, with a water area of 90 km2, and a water depth of less than 5 m [41]. The main rivers flowing into the bay are the Pearl, Shenzhen, Yuen Long, Xiaosha, Dasha, Fengtang, and Xinzhou rivers, with additional inflow from other sub-rivers.
The Shenzhen River flows northeast to southwest into the bay, and the bay's hydrodynamic conditions are mainly affected by the tides and runoff of the Pearl River Estuary, and partially by other coastal runoff. Because the bay is relatively closed, no obvious spatial differences in the hydrodynamic conditions are apparent. At the time of this study, the average water depth was 2.9 m, the average tidal range was 1.4 m, and the average flow velocity was 0.3 m s−1 [41]. Pollutants from the coasts of Shenzhen and Hong Kong and along the Shenzhen River enter the bay through rivers, and pollutants from other parts of the Pearl River Delta reach the bay through the runoff of the Pearl River Estuary. The heavy metal data used in this study were obtained from the SZB sediment samples collected in June 2020. Maps of the SZB and sampling sites are shown in Fig. 1. Map of Shenzhen Bay and locations of the sampling sites Sediment collection and analysis On June 28, 2020, 15 surface sediment samples (0–5 cm) were collected from the intertidal zone of the SZB (Fig. 1). The sediments were placed in clean sealed plastic bags and frozen at – 20 °C for pre-treatment. In the laboratory, the sediments were thawed and air-dried under dust-free conditions. In the semi-dried state, the sediment was crushed with a clean glass bottle, placed in an oven, and dried at a constant temperature of 35 ± 2 °C. Once dry, any residual roots, shells, and other debris were removed, and the clean sample was ground with an agate mortar and pestle and sieved using a 63-μm nylon sieve. The sieved samples (< 63 μm) were stored in sealed plastic containers at 4 °C for further measurements. Then, 0.2–0.5 g (with an accuracy of 0.001 g) of each sample was taken for microwave digestion (CEM MARS 6, USA). After complete digestion, the sample was diluted with deionised water to 100 mL.
For each digested sample batch, the reagent blank and the sediment reference material (GBW07314, 2020), issued by the State Oceanic Administration, were subjected to the same digestion procedure. The samples were analysed using an inductively coupled plasma optical emission spectrometer (ICP-OES, PerkinElmer, AvioTM500, USA). Pollution assessment methods Geo-accumulation index The geo-accumulation index (Igeo) was used to evaluate the degree of heavy metal pollution in the sediments [9], and was calculated using the following formula: $${\mathrm{I}}_{\mathrm{geo}}={\mathrm{log}}_{2}\left(\frac{{\mathrm{C}}_{\mathrm{n}}}{1.5{\mathrm{B}}_{\mathrm{n}}}\right),$$ where Cn is the heavy metal content; Bn is the geochemical background value of the heavy metals, and 1.5 is the coefficient of possible variations of the background value of the metals [16]. The Igeo values were grouped into classes representing different pollution degrees, which are listed in Additional file 1: Table S1. Potential ecological risk index (PERI) The PERI is a quantitative index used to evaluate the pollution and potential ecological risks linked to the accumulation of one or more heavy metals. The calculation methods for the ecological risk of single heavy metals and the comprehensive ecological risk of multiple heavy metals are as follows [18]: $${\mathrm{E}}_{\mathrm{r}}^{\mathrm{i}}={\mathrm{T}}_{\mathrm{r}}^{\mathrm{i}}\times \frac{{\mathrm{C}}_{\mathrm{i}}}{{\mathrm{C}}_{\mathrm{r}}^{\mathrm{i}}},$$ $$\mathrm{RI}=\sum_{\mathrm{i}}^{\mathrm{n}}{\mathrm{E}}_{\mathrm{r}}^{\mathrm{i}}=\sum_{\mathrm{i}}^{\mathrm{n}}{\mathrm{T}}_{\mathrm{r}}^{\mathrm{i}}\times \frac{{\mathrm{C}}_{\mathrm{i}}}{{\mathrm{C}}_{\mathrm{r}}^{\mathrm{i}}},$$ where \({\mathrm{T}}_{\mathrm{r}}^{\mathrm{i}}\) is the toxic response factor of heavy metal i, \({\mathrm{C}}_{\mathrm{r}}^{\mathrm{i}}\) is the background content of heavy metal i, and Ci is the content of heavy metal i. 
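The two indices defined above can be sketched in a few lines of code. This is an illustration only: the toxic response factors are the values quoted later in this paper (Cd = 30, As = 10, Cu = Pb = 5, Cr = 2, Zn = 1), while the background values passed in must be the regional geochemical background values, which are not reproduced here; the numbers in the example call are made up.

```python
import math

# Toxic response factors T_r^i, as used later in this paper
TOXIC_FACTOR = {"Cd": 30, "As": 10, "Cu": 5, "Pb": 5, "Cr": 2, "Zn": 1}

def igeo(content, background):
    """Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(content / (1.5 * background))

def single_risk(metal, content, background):
    """Single-metal potential ecological risk: E_r^i = T_r^i * C_i / C_r^i."""
    return TOXIC_FACTOR[metal] * content / background

def risk_index(contents, backgrounds):
    """Comprehensive risk index: RI = sum over metals of E_r^i."""
    return sum(single_risk(m, contents[m], backgrounds[m]) for m in contents)

# Illustrative calls with hypothetical background values:
print(round(igeo(3.0, 2.0), 3))        # 0.0 -> boundary between Igeo classes
print(single_risk("Zn", 150.0, 75.0))  # 2.0
```

A content equal to exactly 1.5 times the background gives Igeo = 0, the boundary between the "unpolluted" and "lightly polluted" classes of Additional file 1: Table S1.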
The classification of the PERI is listed in Additional file 1: Table S2. Pearson correlation analysis Pearson correlation analysis was used to evaluate the correlation of different heavy metals in the SZB sediments, and the spatial correlation between various heavy metals was compared. Correlation was distinguished according to the magnitude of the Pearson coefficient, with values less than 0.3 indicating that the correlation can be regarded as irrelevant, values of 0.3–0.5 indicating a low correlation, values of 0.5 to 0.8 indicating a moderate correlation, and values of 0.8–1 indicating highly correlated heavy metals. Cluster analysis (CA) To determine the heavy metal source, CA was carried out using hierarchical clustering, which connects data into clusters according to their distance. The results are illustrated in dendrograms representing the steps in the hierarchical clustering solution and the values of the distances between the clusters, which reflect the proximity and interrelationship among the heavy metals. Variance analysis of the heavy metal contents was carried out using Microsoft Excel 2019. Pearson correlation analysis and CA were conducted using SPSS 25.0 (SPSS Inc., Chicago, USA). All figures were prepared using Origin 2021b software for Windows. Contents and spatial distribution of heavy metals in sediments The contents of As, Cd, Cr, Cu, Pb, and Zn, specified in the China Marine Sediment Quality Standard (GB18668-2002), are listed in Table 1. In the sediments, their contents ranged from 4.02 to 41.34 mg kg−1, 0.05 to 0.81 mg kg−1, 5.06 to 81.87 mg kg−1, 16.41 to 81.48 mg kg−1, 5.41 to 67.21 mg kg−1, and 83.22 to 259.57 mg kg−1 for As, Cd, Cr, Cu, Pb, and Zn, respectively, in the order of Zn > Cu > Cr > Pb > As > Cd. The metals with the highest contents, Zn and Cu, had average contents of 175.80 mg kg−1 and 50.74 mg kg−1, respectively (Fig. 2 and Table 1). 
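The correlation-strength thresholds adopted in the Pearson analysis above can be expressed as a small helper; the metal contents below are made-up illustrative numbers, not the measured SZB data.

```python
import numpy as np

def classify_pearson(r):
    # Thresholds used in this study: <0.3 irrelevant, 0.3-0.5 low,
    # 0.5-0.8 moderate, 0.8-1 high
    a = abs(r)
    if a < 0.3:
        return "irrelevant"
    if a < 0.5:
        return "low"
    if a < 0.8:
        return "moderate"
    return "high"

# Hypothetical contents (mg/kg) of two metals across five sampling sites
cu = np.array([16.4, 25.0, 40.2, 55.1, 81.5])
zn = np.array([83.2, 120.0, 160.5, 210.3, 259.6])
r = np.corrcoef(cu, zn)[0, 1]
print(classify_pearson(r))  # "high" for these strongly co-varying series
```

A high correlation between two metals is usually read as evidence of a common source, which is the rationale for combining Pearson analysis with the hierarchical clustering described above.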
The heavy metal contents at all sampling sites met the Class-2 standard of marine sediments. The contents of As and Pb at S2, S4, S5, S8, S9, and S10 were higher than the values of the Class-1 standard, but lower than the values of the Class-2 standard. The Cd content at S1 and S2 was higher than that in the Class-1 standard and lower than that in the Class-2 standard. Only the Cr values at S9 were slightly higher than that in the Class-1 quality standard, which was 81.87 mg kg−1. The contents of Cu at S11, S14, and S15 were lower than that in the Class-1 standard. The Zn contents at S1, S2, S3, S4, S5, S6, S8, S9, and S10 were higher than the Class-1 standard value but lower than the Class-2 standard value. Table 1 Heavy metal contents in surface sediments of the Shenzhen Bay compared with the average heavy metal contents in sediments of other bays/harbours around the world (mg·kg−1, dry weight) Heavy metal contents in surface sediments of Shenzhen Bay. The horizontal line is the background value of the respective heavy metal Dai et al. [8] continuously monitored the SZB sediments from 2000 to 2007 and their results showed average contents of As, Cd, Cr, Cu, Pb, and Zn in the middle and at the mouth of the bay that were slightly higher than the results of this study. This is believed to be mainly due to the accelerated transfer and shutdown of polluting enterprises in the surrounding areas of the SZB in recent years. The high-tech industry is continuously developing, and the level of heavy metal pollution is continuously decreasing. Analysing the heavy metal content in sediments from the inside to the outside of the bay, it was found that the contents of As, Cd, Cr, Pb, and Zn gradually decreased from the inside to the outside of the bay, while Cu remained at the same level and showed no obvious downward trend. Differences in the spatial distribution of heavy metals are largely related to their different sources or the sediment origin (texture, TOC, etc.). 
Different pollutants often contain different heavy metals. The possible sources of As, Cd, Cr, Cu, Pb, and Zn in the SZB sediments are the discharge of domestic sewage and industrial wastewater from Shenzhen and Hong Kong into the bay. The heavy metal contents in the sediments of the SZB were also compared with those of other bay areas and coastal regions. As can be seen in Table 1, which lists the heavy metal contents obtained in the study area along with those of other regions, the heavy metal contents in the SZB are slightly higher than those reported for the Yellow River, Yangtze River, and Pearl River estuaries, while the levels reported for Zhelin Bay and Jiaozhou Bay sediments were similar or lower. In addition, the heavy metal contents obtained in this study are significantly lower than those of the world's largest industrialised/urban ports and estuaries (Table 1), such as Masan Bay in South Korea, Oyster Bay in Australia, Florida Bay in the United States, Izmit Bay in Turkey, and the Gironde Estuary in France.

Assessment of heavy metal pollution in sediments

The Igeo index can be used to determine the pollution level of heavy metals in sediments. As shown in Fig. 3, the Igeo indices of the heavy metals in the SZB sediments decreased in the order Zn > Cu > As > Pb > Cd > Cr. The average Igeo indices of Zn and Cu were 0.451 and 0.062, respectively, which indicate light pollution levels. The average Igeo indices of the other heavy metals were less than 0, indicating no pollution. For Cr, the Igeo index was less than 0 at all sampling points, indicating no pollution. As, Cd, Cu, and Pb showed Igeo indices greater than 0 at some sampling points. For example, the values for As at S10, S9, S8, and S2 indicated medium pollution, and those at S5, S13, and S4 light pollution; Cd exhibited only slight pollution at sampling point S1, with an Igeo index of 0.12.
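The Igeo classification referenced above (Additional file 1: Table S1) follows the standard Müller geo-accumulation index, Igeo = log2(Cn/(1.5·Bn)), where Cn is the measured content and Bn the background value; the factor 1.5 compensates for natural background fluctuations. A minimal sketch of the computation (function names are ours; the class labels mirror the ones used in this study, with Igeo < 0 unpolluted, 0 ≤ Igeo < 1 slight, 1 ≤ Igeo < 2 moderate):

```python
import math

def igeo(measured: float, background: float) -> float:
    """Mueller geo-accumulation index: Igeo = log2(C / (1.5 * B)).
    The factor 1.5 accounts for natural background variability."""
    return math.log2(measured / (1.5 * background))

def igeo_class(value: float) -> str:
    """Pollution class per the convention used in the text."""
    if value < 0:
        return "unpolluted"
    if value < 1:
        return "slight"
    if value < 2:
        return "moderate"
    return "heavily polluted"
```

For instance, a measured content exactly 1.5 times the background gives Igeo = 0, the boundary between "unpolluted" and "slight" pollution.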
The Igeo of Cu at all sampling points indicated slight pollution, with a maximum value at S9 (0.59), and Pb also showed slight pollution at sampling points S3 (0.54), S5 (0.71), S10 (0.75), S9 (0.84), S8 (0.80), S4 (0.76), and S2 (0.54). Zn showed pollution at all sampling points except S11, S12, S14, and S15, but always below the moderate pollution level (1 ≤ Igeo < 2). It is worth noting that the sampling points S10, S9, and S8 were polluted by all analysed heavy metals except Cd and Cr. Based on the Igeo index, the SZB sediments were mostly polluted by Zn and Cu. Dai et al. [8] studied the Igeo indices of Pb, Cu, and Zn in the same sediments from 2000 to 2007 and found that the values all ranged between 0 and 1, indicating slight pollution. Moreover, the level of pollution and the heavy metal contents were lower at the bay mouth than in the inner bay, and both decreased gradually from the inside to the outside of the bay.

Fig. 3 a Box and whisker plot of the Igeo indices of the six heavy metals analysed in the sediments of Shenzhen Bay (n = 15). The box represents the 25th to 75th percentiles, □ represents the average values, and the top and bottom horizontal lines represent the maximum and minimum values, respectively. b Igeo values in sediments at different sites from the inner to the outer bay of Shenzhen Bay

Heat maps have been widely used in studies of the potential ecological risks of heavy metals; the colour scale ranges from blue to red and indicates increasing ecological risk [11]. The toxicity values are 30, 10, 5, 5, 2, and 1 for Cd, As, Cu, Pb, Cr, and Zn, respectively. The ecological risk of the heavy metals in the SZB sediments is shown in Fig. 4. Except for As at S10 (41.3) and Cd at S1 (48.8), the potential ecological risks of the six heavy metals were all lower than 40, indicating low potential ecological risks.
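The risk values shown in the heat map follow Hakanson's method (cited in the references): for each metal, the single-metal risk factor is Er = Tr × (C/B), the toxicity value times the contamination factor, and the comprehensive index is the sum over metals. A sketch using the toxicity values listed above (function names and the example inputs are illustrative; background values are placeholders, not the ones used in the study):

```python
# Toxicity values (Tr) as stated in the text
TR = {"Cd": 30, "As": 10, "Cu": 5, "Pb": 5, "Cr": 2, "Zn": 1}

def single_risk(metal: str, measured: float, background: float) -> float:
    """Potential ecological risk factor Er = Tr * (C / B).
    Values below 40 indicate a low potential ecological risk."""
    return TR[metal] * measured / background

def risk_index(measured: dict, background: dict) -> float:
    """Comprehensive potential ecological risk index RI = sum of Er."""
    return sum(single_risk(m, measured[m], background[m]) for m in measured)
```

Because Tr(Cd) = 30 and Tr(As) = 10, even modest enrichment of Cd or As yields a high Er, which is why these two metals dominate the risk ranking despite their relatively low contents.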
The comprehensive potential ecological hazards were in the following order: RI(As) = 274.05 > RI(Cd) = 265.07 > RI(Cu) = 126.87 > RI(Pb) = 111.29 > RI(Zn) = 32.96 > RI(Cr) = 20.31. The results showed that As and Cd in the SZB sediments pose moderate potential ecological risks, implying that their pollution cannot be ignored; the overall potential ecological risks of the other heavy metals, however, were relatively low. The moderate ecological risks of As and Cd are mainly due to their high ecological toxicity values: although their contents are relatively low, their ecological risk values are high. Therefore, special attention should be paid to the monitoring and management of As and Cd. In general, although the Zn contents in the sediments of the entire study area exceeded the background value, its potential ecological risk was only 32.96, which corresponds to a slight pollution level. These results are consistent with those of Zuo et al. [45], who used the ecological model evaluation method to analyse the heavy metals in SZB sediments in 2006 and showed that Cd had the largest potential ecological hazard coefficient. Comparing the different sampling points, the comprehensive and individual ecological risks of the heavy metals in the SZB sediments followed the changes in heavy metal content and likewise decreased gradually from the inner bay to the outside of the bay.

Fig. 4 Heat map of the heavy metal ecological risks in sediments from the inner bay to the outside of Shenzhen Bay. The value of the colour white is 40; values less than 40 indicate that the heavy metal has a low potential ecological risk

The distribution of the potential ecological risks showed the following: (1) The risk at the Shenzhen River Estuary was the lowest, at a slight level.
This is mainly attributed to the fact that the surface sediments in the estuary area are mainly silt and sandy silt with coarse particle sizes, which have a small adsorption capacity for heavy metals and thus result in a low degree of pollution; the concentrations of Cr, Pb, and Cu indicate no pollution. (2) The potential risk in the Shekou zone was at a medium level. Because this area is located at the mouth of Shenzhen Bay, the hydrodynamic forces are strong and the surface sediments are easily disturbed; at the same time, this area is far away from Shenzhen city and no large river flows into it. Furthermore, the heavy metals transported in the water were deposited in the inner bay, where the water exchange is weak. (3) The potential risks at the sampling points other than the Shenzhen River Estuary and the Shekou area were all at a high level. These sampling points are all located inside the SZB, where the Dasha, Xinzhou, and Fengtang rivers bring large amounts of heavy metals into the bay.

Source apportionment of heavy metals in sediments

The correlation between heavy metal contents can be used to evaluate the potential common sources and migration processes of these elements: a high correlation between two elements indicates that they have a common source and a similar migration process. The correlation coefficients between the heavy metals in the SZB sediments are shown in Fig. 5. Cr–Pb (0.96), Cu–Zn (0.95), As–Pb (0.85), As–Cr (0.82), Cr–Cu (0.65), and Cr–Zn (0.67) showed a strong significant correlation (p < 0.01), while Pb–Cu (0.58) and Pb–Zn (0.64) showed a significant correlation (p < 0.05). These results suggest that these pairs of metals may have the same pollution sources and migration processes.

Fig. 5 Correlation coefficient map of heavy metals in SZB sediments.
Red represents a positive correlation, blue represents a negative correlation, the numbers in the lower triangle give the correlation coefficients, and the size of the circles in the upper triangle represents the magnitude of the correlation coefficients

CA is widely used to identify the pollutant sources of heavy metals in sediments and to distinguish natural from anthropogenic contributions (Loska and Wiechula, 2003; [19]). The clusters of the six heavy metals obtained from the cluster analysis are presented through the relationships between groups of variables, as shown in Fig. 6, with a lower value on the horizontal axis indicating a more significant association. There are three statistically significant clusters. Cluster 1 includes only Zn, which was identified as having a light pollution degree by the Igeo index. Cluster 2 includes Cu, Pb, and Cr and can be further divided into two sub-clusters because the distance between Cu and Pb–Cr is relatively large: sub-cluster 1 includes Cr and Pb, which were shown to have no pollution and a slight pollution degree according to the Igeo index, respectively, while sub-cluster 2 includes Cu, which was shown to have a slight pollution degree. Cluster 3 includes Cd and As, which posed moderate potential ecological risks based on the PERI. The CA results agree well with the results of the Pearson correlation analysis.

Fig. 6 Cluster analysis dendrogram of heavy metals in Shenzhen Bay

The correlation coefficients among Zn, Pb, Cr, and Cu were all higher than 0.5, indicating that these four elements may have similar sources and migration processes. The correlation coefficients between Zn and Cu and between Pb and Cr in particular were very high, 0.95 and 0.96, respectively, representing an extremely strong correlation. This may be because these elements are all sulfophilic: they often form poorly soluble sulfides, which accumulate in water sediments.
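Hierarchical clustering of this kind starts from one cluster per metal and repeatedly merges the closest clusters. Using, for example, 1 − r as the distance between two metals, a minimal pure-Python single-linkage sketch (illustrative only; not the SPSS procedure used in the study) is:

```python
from itertools import combinations

def single_linkage(dist, n_clusters):
    """Agglomerative single-linkage clustering over a pairwise distance
    table `dist` (frozenset({a, b}) -> distance). Repeatedly merges the
    two closest clusters until only `n_clusters` clusters remain."""
    items = sorted({k for pair in dist for k in pair})
    clusters = [{k} for k in items]

    def d(c1, c2):
        # single linkage: distance between the closest pair of members
        return min(dist[frozenset((a, b))] for a in c1 for b in c2)

    while len(clusters) > n_clusters:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: d(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters
```

With the coefficients reported above for Cr, Pb, Cu, and Zn, the first two merges are Cr–Pb (distance 0.04) and Cu–Zn (distance 0.05), mirroring the tight pairs seen in the dendrogram.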
The correlations between As and Cd and the other heavy metals were relatively low, indicating that their sources may differ from those of the other elements in the sediment. For example, the source of As may be related to the transportation of fossil fuels, such as coal, in the SZB [4]. Heavy metals in sediments can originate from natural sources or from human activities. Using the correlation between Fe and the heavy metals, it is possible to distinguish between natural and anthropogenic sources, because naturally sourced heavy metals are often significantly correlated with Fe. Huang et al. [15] found that Pb, Cu, Zn, and Fe were relatively poorly correlated (Pb had a certain correlation with Fe at low concentrations), indicating that these heavy metals are related to anthropogenic pollution.

The accumulation of historical heavy metals is an important cause of changes in the sedimentary environment of the SZB. During the rapid economic development of Hong Kong in the 1970s, industries in the northern New Territories developed rapidly, and a large amount of wastewater was discharged directly into the SZB or transported there via the Shenzhen River. Upon entering the bay, the heavy metals were quickly incorporated into the sediments. Furthermore, the period from 1985 to 2000 was an era of economic rise for Shenzhen and other parts of the Pearl River Delta. Large amounts of industrial wastewater, especially from the electronics and electrical appliance industries, contained heavy metals and were discharged directly into the SZB. Heavy metals were further transported to the sea through the mouth of the SZB via the runoff and tidal currents of the Pearl River Estuary [25, 26].

Effect of land reclamation on heavy metals in sediments

Coastlines are formed through long-term evolution under various dynamic factors and are in a state of relatively dynamic equilibrium.
Land reclamation activities create new, dry land on the seabed, changing the natural coastal pattern in a short time and on a small scale, and have a strong impact on the marine environment, causing an imbalance. Furthermore, the physical and chemical properties of the reclamation material itself may have a long-term impact on the heavy metal contents in bay sediments. Massive changes have occurred both on land and in the seawater in the SZB area since 1980. By 2005, the reclaimed land area had reached 1963 ha (Additional file 1: Figure S3); after 2005, however, no large-scale reclamation activities were carried out [7]. Nevertheless, land reclamation disrupted the dynamic balance of the original coastline, reduced the SZB area, led to a rapid decline in the tidal volume and tidal cycle of the bay, and created favourable conditions for sedimentation. The exchange capacity of the seawater was greatly reduced, so that pollutants remained in the bay, especially the inner bay, for a long time. Some studies have shown that the average residence time of pollutants in the inner bay is approximately 10–14 days in the dry season, 8–9 days in the rainy season, 7–8 days near the Shenzhen–Hong Kong Western Corridor, and only 3–4 days in the outer bay [12]. In this case, the ability of the seawater to dilute pollutants from upstream or coastal areas was greatly reduced, causing water pollution. Because the rivers entering the bay carry a large amount of sediment, the longer the pollutants remain in the water, the greater the amount of pollutants deposited, which increases the pollution of the sediments. From 1940 to 2010, the heavy metal concentrations were not significantly affected by land reclamation activities and remained stable (Fig. 7). Whether land reclamation activities created favourable conditions for sedimentation still needs to be answered; the heavy metal content, however, has remained stable.
In fact, the amount of heavy metal pollution entering the SZB has gradually decreased since the 1980s, and without land reclamation activities, the heavy metal contents in the sediments would have decreased. Thus, the proportion of heavy metals entering the sediments actually increased because of the weak hydrodynamic conditions in the bay, and, eventually, the heavy metal contents became relatively stable. Reports indicate that harmful substances (heavy metals and organic toxins) accumulate as pollutants in water bodies or sediments, causing environmental disasters in the mangrove ecosystem along the SZB. Wang et al. (2010) reported that the release of sediment pollution in the inner SZB contributed 25% to the water quality impact, and the release of sediment pollution in the estuary bay accounted for a further 8%. The pollution load in the sediments of the inner bay exceeded the capacity of the inner bay. With the reduction of land-based pollution from Shenzhen and Hong Kong, this proportion will continue to rise.

Fig. 7 Time series of the average heavy metal contents in the Shenzhen Bay surface sediments from 1940 to 2010. The data were taken from [20] and [34]

It is generally recognised that the filling materials used in the process of land reclamation may affect the heavy metal contents of sediments. Chen and Jiao [7] conducted a chemical analysis of the reclamation materials used in a reclamation project in the SZB and showed that their heavy metal contents were much lower than those of the sediments (Table 2). Accordingly, it is reasonable to assume that the heavy metal contribution of the filler material to the SZB sediments is negligible. Therefore, it can be concluded that the heavy metal contents in the SZB sediments are probably related to historical land-source pollution from both sides of the bay, i.e., Shenzhen and Hong Kong.
Table 2 Comparison of heavy metal contents in filler materials and marine sediments of Shenzhen Bay (unit: mg·kg−1)

Conclusions

In this study, six elements (As, Cd, Cr, Cu, Pb, and Zn) in SZB surface sediments were investigated. Based on these data, the main findings of this study are as follows: (1) The contents of the major heavy metals in the sediments were generally higher than the average background values for offshore sediments of China; however, they were lower than the heavy metal contents in the coastal sediments of similar bay areas around the world, and the heavy metal contents have gradually decreased over the last 20 years. (2) The pollution levels of the six heavy metals were found to be in the order As > Cd > Cu > Pb > Zn > Cr. As and Cd were found to pose moderate ecological risks, and their potential hazard indices reached a high level, while the potential ecological risks of Cu, Pb, Zn, and Cr were found to be low. (3) The heavy metals Zn, Cu, Pb, and Cr were strongly correlated, which may be due to similar sources and migration processes. The accumulation of historical heavy metals was an important cause of changes in the sediment environment of the SZB, mainly owing to land-based pollution (such as industrial wastewater) from the 1970s to the end of the twentieth century. (4) Human activities, such as land reclamation, have led to a decline in the exchange capacity of the water bodies in the SZB, resulting in heavy metals being deposited in the surface sediments. There is currently a lack of research regarding how biotransformation processes and carbon cycling affect the heavy metal contents and their distribution mechanisms in Shenzhen Bay; further studies should explore this topic.

The datasets supporting the conclusions of this article are included within the article and its Additional file 1.
Abbreviations

SZB: Shenzhen Bay; CA: Cluster analysis; ICP-OES: Inductively coupled plasma optical emission spectrometer; Igeo: Geo-accumulation index; PERI: Potential Ecological Risk Index

References

Alyazichi YM, Jones BG, McLean E (2015) Spatial and temporal distribution and pollution assessment of trace metals in marine sediments in Oyster Bay, NSW, Australia. Bull Environ Contam Toxicol 94:52–57. https://doi.org/10.1007/s00128-014-1434-z
Birch GF, Hutson P (2009) Use of sediment risk and ecological/conservation value for strategic management of estuarine environments: Sydney Estuary, Australia. Environ Manage 44:836–850. https://doi.org/10.1007/s00267-009-9362-0
Birch GF (2017) Determination of sediment metal background concentrations and enrichment in marine environments—a critical review. Sci Total Environ 580:813–831. https://doi.org/10.1016/j.scitotenv.2016.12.028
Chen MS, Ding SM, Zhang LP, Li YY, Sun Q, Zhang CS (2017) An investigation of the effects of elevated phosphorus in water on the release of heavy metals in sediments at a high resolution. Sci Total Environ 575:330–337. https://doi.org/10.1016/j.scitotenv.2016.10.063
Caccia VG, Millero FJ, Palanques A (2003) The distribution of trace metals in Florida Bay sediments. Mar Pollut Bull 46:1420–1433. https://doi.org/10.1016/S0025-326X(03)00288-1
Chai M, Shi F, Li R, Shen X (2014) Heavy metal contamination and ecological risk in Spartina alterniflora marsh in intertidal sediments of Bohai Bay, China. Mar Pollut Bull 84:115–124. https://doi.org/10.1016/j.marpolbul.2014.05.028
Chen K, Jiao J (2008) Metal concentrations and mobility in marine sediment and groundwater in coastal reclamation areas: a case study in Shenzhen, China. Environ Pollut 151:576–584
Dai JC, Gao XW, Ni JM, Yin KH (2010) Evaluation of heavy-metal pollution in Shenzhen coastal sediments. J Trop Oceanogr 21(1):85–90
Fostner U, Müller G (1981) Concentration of trace metals and polycyclic aromatic hydrocarbons in river sediments: geochemical background, man's influence and environmental impact. GeoJournal 5:417–432. https://doi.org/10.1007/BF02484715
Ghrefat H, Yusuf N (2006) Assessing Mn, Fe, Cu, Zn, and Cd pollution in bottom sediments of Wadi Al-Arab Dam, Jordan. Chemosphere 65(11):2114–2121. https://doi.org/10.1016/j.chemosphere.2006.06.043
Gu CK, Zhang Y, Peng Y, Leng PF, Zhu N, Qiao YF, Li Z, Li FD (2020) Spatial distribution and health risk assessment of dissolved trace elements in groundwater in southern China. Sci Rep 10:7886. https://doi.org/10.1038/s41598-020-64267-y
Guo W, Zhu DK (2005) Reclamation and its impact on marine environment in Shenzhen area, China. J Nanjing Univ Nat Sci 41(03):286–295
Gu Y, Lin Q, Jiang S, Wang Z (2014) Metal pollution status in Zhelin Bay surface sediments inferred from a sequential extraction technique, South China Sea. Mar Pollut Bull 81:256–261. https://doi.org/10.1016/j.marpolbul.2014.01.030
Hyun S, Lee CH, Lee T, Choi JW (2007) Anthropogenic contributions to heavy metal distributions in the surface sediments of Masan Bay, Korea. Mar Pollut Bull 54:1059–1068. https://doi.org/10.1016/j.marpolbul.2007.02.013
Huang YL, Zhu WB, Le MH, Lu XX (2012) Temporal and spatial variations of heavy metals in urban riverine sediment: an example of Shenzhen River, Pearl River Delta, China. Quat Int 282:145–151. https://doi.org/10.1016/j.quaint.2011.05.026
Huang Z, Liu C, Zhao X, Jing D, Zheng B (2020) Risk assessment of heavy metals in the surface sediment at the drinking water source of the Xiangjiang River in South China. Environ Sci Eur 32:23. https://doi.org/10.1186/s12302-020-00305-w
He B, Li R, Chai M, Qiu G (2014) Threat of heavy metal contamination in eight mangrove plants from the Futian mangrove forest, China. Environ Geochem Health 36:467–476. https://doi.org/10.1007/s10653-013-9574-3
Hakanson L (1980) An ecological risk index for aquatic pollution control: a sedimentological approach. Water Res 14:975–1001. https://doi.org/10.1016/0043-1354(80)90143-8
Huang F, Xu Y, Tan Z, Wu B, Xu H, Shen L, Xu X, Han Q, Guo H, Hu Z (2018) Assessment of pollutions and identification of sources of heavy metals in sediments from west coast of Shenzhen, China. Environ Sci Pollut Res 25:3647–3656. https://doi.org/10.1007/s11356-017-0362-y
Huang X, Li X, Yue W, Huang L, Li S (2003) Accumulation of heavy metals in the sediments of Shenzhen Bay, South China. Chin J Environ Sci 24(4):144–149
Jiang X, Teng A, Xu W, Liu X (2014) Distribution and pollution assessment of heavy metals in surface sediments in the Yellow Sea. Mar Pollut Bull 83:366–375. https://doi.org/10.1016/j.marpolbul.2014.03.020
Keshavarzi B, Mokhtarzadeh Z, Moore F, Rastegari Mehr M, Lahijanzadeh A, Rostami S, Kaabi H (2015) Heavy metals and polycyclic aromatic hydrocarbons in surface sediments of Karoon River, Khuzestan Province, Iran. Environ Sci Pollut Res 22:19077–19092. https://doi.org/10.1007/s11356-015-5080-8
Li G, Cao Z, Lan D, Xu J, Wang S, Yin W (2007) Spatial variations in grain size distribution and selected metal contents in the Xiamen Bay, China. Environ Geol 52:1559–1567. https://doi.org/10.1007/s00254-006-0600-y
Larrose A, Coynel A, Schäfer J, Blanc G, Massé L, Maneux E (2010) Assessing the current state of the Gironde Estuary by mapping priority contaminant distribution and risk potential in surface sediment. Appl Geochem 25:1912–1923. https://doi.org/10.1016/j.apgeochem.2010.10.007
Liu QX, Jia ZZ, Li SY, Hu JT (2019) Assessment of heavy metal pollution, distribution and quantitative source apportionment in surface sediments along a partially mixed estuary (Modaomen, China). Chemosphere 225:829–838. https://doi.org/10.1016/j.chemosphere.2019.03.063
Liu JJ, Diao ZH, Xu XR, Xie Q (2019) Effects of dissolved oxygen, salinity, nitrogen and phosphorus on the release of heavy metals from coastal sediments. Sci Total Environ 666:894–901. https://doi.org/10.1016/j.scitotenv.2019.02.288
Morina A, Morina F, Djikanović V, Spasić S, Krpo-Ćetković J, Kostić B, Lenhardt M (2015) Common barbel (Barbus barbus) as a bioindicator of surface river sediment pollution with Cu and Zn in three rivers of the Danube River Basin in Serbia. Environ Sci Pollut Res 23:6723–6734. https://doi.org/10.1007/s11356-015-5901-9
Nobi EP, Dilipan E, Thangaradjou T, Silvakumar K, Kannan L (2010) Geochemical and geo-statistical assessment of heavy metal concentration in the sediments of different coastal ecosystems of Andaman Islands, India. Estuar Coast Shelf Sci 87:253–264. https://doi.org/10.1016/j.ecss.2009.12.019
Reimann L, Vafeidis AT, Brown S, Hinkel J, Tol RSJ (2018) Mediterranean UNESCO World Heritage at risk from coastal flooding and erosion due to sea-level rise. Nat Commun 9:4161. https://doi.org/10.1038/s41467-018-06645-9
Pekey H (2006) Heavy metal pollution assessment in sediments of the Izmit Bay, Turkey. Environ Monit Assess 123:219–231. https://doi.org/10.1007/s10661-006-9192-y
State Oceanic Administration (SOA) of China (2004) Annual report on China's marine environment quality for year 2004. State Oceanic Administration of China, Beijing (in Chinese)
Schuerch M, Spencer T, Temmerman S, Kirwan ML, Wolff C, Lincke D, McOwen CJ, Pickering MD, Reef R, Vafeidis AT, Hinkel J, Nicholls RJ, Brown S (2018) Future response of global coastal wetlands to sea-level rise. Nature 561:231–234. https://doi.org/10.1038/s41586-018-0476-5
Sun Z, Mou X, Tong C, Wang C, Xie Z, Song H, Sun W, Lv Y (2015) Spatial variations and bioaccumulation of heavy metals in intertidal zone of the Yellow River estuary, China. CATENA 126:43–52. https://doi.org/10.1016/j.catena.2014.10.037
Shi Y, Li M, Li B, Wei J, Wu J (2017) Spatial and temporal distribution of heavy metals in the sediment of Shenzhen Bay. Mar Pollut Bull 36(2):186–191
Wei X, Han LF, Gao B, Zhou HD, Wan XH (2015) Distribution, bioavailability, and potential risk assessment of the metals in tributary sediments of Three Gorges Reservoir: the impact of water impoundment. Ecol Indic 61:667–675. https://doi.org/10.1016/j.ecolind.2015.10.018
Wu JS, Song J, Li WF, Zheng MK (2016) The accumulation of heavy metals in agricultural land and the associated potential ecological risks in Shenzhen, China. Environ Sci Pollut Res 23:1428–1440. https://doi.org/10.1007/s11356-015-5303-z
Wang H, Wang J, Liu R, Yu W, Shen Z (2015) Spatial variation, environmental risk and biological hazard assessment of heavy metals in surface sediments of the Yangtze River estuary. Mar Pollut Bull 93:250–258. https://doi.org/10.1016/j.marpolbul.2015.01.026
Wang YX (2011) Water quality and its trend analysis in Shenzhen Bay. Environ Sci Surv 30(3):94–96
Xu F, Qiu L, Cao Y, Huang J, Liu Z, Tian X, Li A, Yin X (2016) Trace metals in the surface sediments of the intertidal Jiaozhou Bay, China: sources and contamination assessment. Mar Pollut Bull 104(1–2):371–378. https://doi.org/10.1016/j.marpolbul.2016.01.019
Yu R, Zhang W, Hu G, Lin C, Yang Q (2016) Heavy metal pollution and Pb isotopic tracing in the intertidal surface sediments of Quanzhou Bay, southeast coast of China. Mar Pollut Bull 105:416–421. https://doi.org/10.1016/j.marpolbul.2016.01.047
Yang Q, Lei AP, Li FL, Liu LN, Zan QJ, Shin PKS, Cheung SG, Tam NFY (2014) Structure and function of soil microbial community in artificially planted Sonneratia apetala and S. caseolaris forests at different stand ages in Shenzhen Bay, China. Mar Pollut Bull 85:754–763. https://doi.org/10.1016/j.marpolbul.2014.02.024
Yu X, Yan Y, Wang WX (2010) The distribution and speciation of trace metals in surface sediments from the Pearl River Estuary and the Daya Bay, Southern China. Mar Pollut Bull 60:1364–1371. https://doi.org/10.1016/j.marpolbul.2010.05.012
Zhang M, Chen G, Luo Z, Sun X, Xu J (2020) Spatial distribution, source identification, and risk assessment of heavy metals in seawater and sediments from Meishan Bay, Zhejiang coast, China. Mar Pollut Bull 156:111217. https://doi.org/10.1016/j.marpolbul.2020.111217
Zhao MF, Wang EK, Xia P, Feng AP, Chi Y, Sun YJ (2019) Distribution and pollution assessment of heavy metals in the intertidal zone environments of typical sea areas in China. Mar Pollut Bull 138:397–406. https://doi.org/10.1016/j.marpolbul.2018.11.050
Zuo P, Wang Y, Cheng J, Min F (2009) Pollution assessment of heavy metals in coastal surface sediments of the Shenzhen Bay. Mar Pollut Bull 28(1):50–54

Acknowledgements

We acknowledge support from the Guangdong Shenzhen Ecological and Environmental Monitoring Centre, Shenzhen, China.

Funding

Open Access funding was enabled and organised by the National Key Research and Development Project of China (grant number 2018YFC1801801).

Author information

Qiuying Zhang and Xiangyun Xiong are co-first authors.

State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, China: Qiuying Zhang, Futian Ren, Hongjie Gao & Yangwei Bai

Guangdong Shenzhen Ecological and Environmental Monitoring Centre, Shenzhen, China: Xiangyun Xiong, Yudong Wang & Wenjun Sun

Institute of Geographic Sciences and Natural Resources Research, Beijing, China: Peifang Leng & Zhao Li

Contributions

QZ and FR: conceptualisation, methodology, software, formal analysis, investigation, data curation, writing original draft, writing—review and editing; conceptualisation, formal analysis, investigation, supervision, project administration, funding acquisition; XX and HG: methodology, formal analysis, supervision, writing—review and editing; YW and WS: investigation, data curation, review; PL, ZL, and YB: investigation, data curation.
All authors read and approved the final manuscript.

Correspondence to Qiuying Zhang or Futian Ren.

Additional file 1: Table S1. Geo-accumulation index (Igeo) classification. Table S2. Classification of the ecological hazard coefficient of heavy metal pollution. Figure S3. Statistics of land reclamation in the Shenzhen Bay in various periods.

Cite this article: Zhang, Q., Ren, F., Xiong, X. et al. Spatial distribution and contamination assessment of heavy metal pollution of sediments in coastal reclamation areas: a case study in Shenzhen Bay, China. Environ Sci Eur 33, 90 (2021). https://doi.org/10.1186/s12302-021-00532-9

Keywords: Potential ecological risk; Source analysis
\begin{document} \RUNAUTHOR{Mahmoudzadeh and Ghobadi} \RUNTITLE{Learning from Good and Bad Decisions: A Data-driven IO Approach} \TITLE{Learning from Good and Bad Decisions: \\ A Data-driven Inverse Optimization Approach} \ARTICLEAUTHORS{ \AUTHOR{Houra Mahmoudzadeh} \AFF{Department of Management Sciences, University of Waterloo, ON, Canada. \EMAIL{[email protected]}} \AUTHOR{Kimia Ghobadi} \AFF{Malone Center for Engineering in Healthcare, Center for Systems Science and Engineering, Department of Civil\\ and Systems Engineering, Johns Hopkins University, Baltimore, MD, USA. \EMAIL{[email protected]}} } \ABSTRACT{ Conventional inverse optimization inputs a solution and finds the parameters of an optimization model that render a given solution optimal. The literature mostly focuses on inferring the objective function in linear problems when acceptable solutions are provided as input. In this paper, we propose an inverse optimization model that inputs several accepted and rejected solutions and recovers the underlying convex optimization model that correctly classifies these given solutions. The novelty of our model is three-fold: First, while most literature focuses on inferring the objective function only, we focus on inferring the feasible region. Second, our model is capable of inferring the constraints of general convex optimization models. Third, the proposed model learns from both accepted (good) and rejected (bad) observations in inferring the constraint set. The resulting model is a mixed-integer nonlinear problem, and to mitigate its complexity, we use the theoretical properties of its solutions to derive a reduced reformulation that is easier to solve. Using a case study on radiotherapy treatment planning for breast cancer patients, we demonstrate that our proposed model can infer a set of clinical guidelines to classify accepted and rejected plans with over 95\% accuracy. 
} \KEYWORDS{inverse optimization, constraint inference, model learning, convex optimization, radiation therapy.} \maketitle \section{Introduction} In the era of big data, learning from past decisions and their corresponding outcomes, whether good or bad, provides an invaluable opportunity for improving future decision-making processes. While there is considerable momentum to learn from data through artificial intelligence, machine learning, and statistics, the field of operations research has not used this valuable resource to its full potential for learning from past decisions to inform future decision-making. One of the emerging methodologies that can benefit from this abundance of data is inverse optimization~\citep{Ahuja01}. A regular (forward) optimization problem models a system and determines an optimal solution that represents a decision for the system. Inverse optimization, by contrast, aims to recover the optimization model that made a set of given observed solutions (or decisions) optimal. For instance, in radiation therapy treatment planning for cancer patients, radiation oncologists decide whether the quality of the personalized plans generated by a treatment planning system is acceptable for each patient. In this case, an inverse model would be able to learn the implicit logic behind the oncologist's decision-making process. Traditionally, the input to inverse models has almost exclusively consisted of optimal or near-optimal solutions, and little attention has been paid to learning from unfavorable solutions that must be avoided. In inverse optimization, learning from both `good' and `bad' decisions can provide invaluable information about the patterns, preferences, and restrictions of the underlying forward optimization model. Inverse optimization is well-studied for inferring linear optimization models \citep{chan2021IOreview}. 
This focus is mostly due to the tractability and existence of optimality guarantees in linear programming. In practice, however, nonlinear models are sometimes better suited for characterizing complex systems and capturing past solutions' attributes. The literature also largely focuses on inferring the utility function of decision-makers, which can be interpreted as the objective function of an optimization problem when the feasible region is known. Inferring the feasible region itself, by contrast, has not received much attention, which may be attributed to the fact that inverse models for constraint inference are nonlinear, even when the forward problem is linear. For linear problems, there have been recent attempts at recovering the forward feasible region through inverse optimization~\citep{ghobadi2021inferring, chan2018inverse}; however, these studies do not generalize to nonlinear problems and do not incorporate bad decisions in constraint inference. In radiation therapy treatment planning for cancer patients, a large pool of historical treatment plans exists that can be used in an inverse learning process. A plan is often designed to meet a set of pre-determined and often conflicting constraints, which are referred to as clinical guidelines. These guidelines are blanket statements and not personalized, which means they may be too strict or too relaxed (sometimes simultaneously) for individual patients. As a result, some plans that satisfy the original guidelines may be rejected and some seemingly infeasible plans may be accepted. This application lends itself well to using inverse optimization for inferring the true underlying clinical guidelines for patient populations, which can lead to more efficient treatment planning and improved quality of treatment.
While much attention has been paid to understanding the tradeoff balance in the objective of cancer treatment using inverse optimization, the problem of understanding and constructing proper clinical guidelines remains under-explored. An incorrect guideline or constraint in the optimization model can lead to a significantly different feasible region and affect the possible optimal solutions that the objective function can achieve. In this paper, we focus on recovering the constraints of an optimization model through a novel inverse optimization model for general convex optimization problems. Our model inputs a set of past expert solutions, both accepted (good) and rejected (bad), and uses it to infer the underlying optimization problem that makes these past decisions feasible or infeasible, respectively. We further propose a reduced reformulation of our inverse optimization model to mitigate its complexity and improve solvability. We demonstrate the merit of our framework using the problem of radiation therapy treatment planning for breast cancer patients where we impute the underlying conditions that correctly characterize acceptable and non-acceptable treatment plans. \subsection{Literature Review} Given a (forward) optimization problem with a set of partially known parameters, inverse optimization inputs a set of given solution(s) and recovers the full set of parameters \citep{Ahuja01}. The input solution is often a single observation that is optimal~\citep{ Iyengar05, ghate2020inverse} or near-optimal~\citep{Keshavarz11, Chan14, Chan15, Bertsimas15, Aswani16, naghavi2019inverse} in which case the inverse model minimizes a measure of the optimality gap. Recently, with more focus on data-driven models, multiple observations have also been considered as the input to inverse models~\citep{Keshavarz11,Troutt06,Troutt08,Chow12, Bertsimas15, esfahani2018IncompleteInfo, babier2021ensemble, zhang1999further}. 
Since not all of the input observations can be (near-)optimal, a measure of the collective data is often optimized instead. Some studies also consider noise or uncertainty in data that affect the inverse models \citep{Aswani16, dong2018inferring, ghobadi2018robust}, or infer the structure of solutions to the inverse model instead of reporting a single inverse solution~\citep{tavasliouglu2018structure}. For a comprehensive review of inverse optimization, we refer the readers to the review paper by~\cite{chan2021IOreview}. Inverse optimization has found a wide range of applications including energy \citep{birge2017inverse, brucker2009inverse}, dietary recommendations \citep{ghobadi2018robust}, finance \citep{li2021FinanceIO}, and healthcare systems \citep{chan2022pathways}, to name a few. In particular, radiation therapy treatment planning for cancer has been studied in the context of inverse optimization~\citep{Chan14,chan2018trade,ajayi2022RT,chan2022pathways, boutilier2015IMRT, babier2020RT, babier2018inverse, Goli15, lee2013predicting,ghate2020RTFractionation}. The current literature mostly focuses on recovering the objective function and better understanding the underlying complex tradeoffs between different objective terms in the radiation therapy treatment plans. For instance, both \citet{chan2018trade} and \citet{sayre2014automaticRT} input accepted treatment plans to recover the appropriate weights for a given set of convex objectives using inverse optimization. \citet{gebken2019inverse} finds the objective weights for unconstrained problems using singular value decomposition. Personalization for different patient groups has been explored by \citet{boutilier2015IMRT} by recovering the utility functions appropriate to each group. \cite{ajayi2022RT} employs inverse optimization for feature selection to identify a sparse set of clinical objectives for prostate cancer patients. 
These studies exclusively used accepted treatment plans as an input to the inverse models. When the accepted plans are infeasible with respect to guidelines, the inverse models aim to infer the forward objective that makes them near optimal. The current inverse optimization literature mainly focuses on inferring the objective function of the underlying forward problem \citep{chan2021IOreview}. Constraint inference, by contrast, has received little attention. Recovering the right-hand side of the constraint parameters alongside the objective parameters has been explored by \citet{dempe2006inverse, Chow12} and \citet{cerny2016inverse}. Similarly, \citet{birge2017inverse} recover the right-hand side parameters so that a given observation becomes optimal by utilizing properties of the specific application, and \citet{dempe2006inverse, guler2010capacity, saez2018short} make a given observation (near-)optimal. \citet{chan2018inverse} perturb the nearest facet to make a given observation optimal and hence find the left-hand side parameter of a linear constraint when the right-hand side parameters are known. Closest to our work is the study by \citet{ghobadi2021inferring}, in which the full set of the constraint parameters is unknown in a linear model but the objective function and a set of feasible observations are given. Their method utilizes linear properties of the forward optimization and does not generalize to convex forward problems. Inverse optimization has been extensively considered for inferring linear optimization models. When the underlying forward optimization is nonlinear, however, sufficient conditions for optimality of observations cannot be guaranteed unless the model falls under specific classes such as convex or conic optimization, for which Karush-Kuhn-Tucker (KKT) conditions are sufficient for optimality \citep{boyd2004convex}.
\citet{Keshavarz11} consider general convex models and use past observations to recover the objective function parameters by minimizing the optimality errors in KKT conditions. \citet{aswani2019data} explore nonlinear convex optimization to recover the objective function when the data is noisy. \citet{zhang2010inverseSeparable} recover the objective function for linearly-constrained convex separable models. In another paper, \citet{zhang2010InverseQuadratic} propose an inverse conic model that infers quadratic constraints and show that it can be efficiently solved using the dual of the obtained semi-definite programs. While these studies have advanced the theory of inverse optimization for inferring nonlinear forward models, the focus has been the inference of the utility function of decision-makers, which translates to inferring the objective function of the forward models, and constraint inference has not received much attention in the literature. Most importantly, inverse models have traditionally focused on learning from `good' decisions only, which are typically feasible for the forward problem. The inclusion of infeasible observations has been explored by \citet{babier2021ensemble, chan2022pathways, shahmoradi2019quantile, ahmadi2020inverseLearning}, among others, when inferring the objective function parameters. However, because the feasible region is pre-determined in these studies, any infeasible observation is still treated as a `good' decision that needs to be made near-optimal for the inferred objective. Hence, infeasible and feasible observations are used in a similar manner in order to extract information about the utility function and provide objective parameter trade-offs in the forward problem. When imputing constraint parameters, by contrast, infeasible observations can be treated as `bad' decisions.
Hence, they can provide an additional layer of information by enabling a better determination of where the feasible region should reside, which areas should be excluded from it, and which constraint parameters would provide a better fit. To our knowledge, there has been no work that uses both feasible and infeasible observations in inferring the constraints of a linear or general convex forward problem. \subsection{Cancer Treatment Motivation} In 2022, there will be an estimated 1.9 million new cancer cases diagnosed and 609,360 cancer deaths in the United States, and approximately 60\% of them will receive radiation therapy as part of their treatment \citep{cancerstats}.
Radiation therapy treatment planning is a time-consuming process that often involves manual planning by a treatment planner and/or oncologist. The input of the planning process is a medical image (e.g., CT, MRI) that includes contours delineating the cancerous region (i.e., tumor) and the surrounding healthy organs at risk (OAR). The goal is to find the direction, shape, and intensity of radiation beams such that a set of clinical metrics on the tumor and the surrounding healthy organs is satisfied. In current practice, there are clinical guidelines on these radiation metrics. However, these guidelines are not universally agreed upon and often differ per institution. Additionally, adherence to these guidelines is at the discretion of oncologists based on the specific needs of each patient. Planners often try to find an acceptable treatment plan within these guidelines to forward to an oncologist who will, in turn, either approve or reject the plan. If the plan is not approved, the planner receives a set of instructions on which metrics to adjust to improve the plan, and the process continues until the plan is approved. This iterative process can lead to unnecessary back and forth between the planner and the oncologist, may involve manual relaxation of criteria, and can result in suboptimal plans for patients. There may exist approved treatment plans that do not meet all the clinical guidelines simultaneously, typically because there are trade-offs between different metrics, the guidelines are not personalized for each patient, and some radiation dose limits are too restrictive for some patients. Conversely, there may also exist plans that meet the guidelines but are not approved because the oncologist may find the guidelines too relaxed for some patients and believe better plans are achievable, which may also lead to an increased back and forth between the planner and the oncologist.
In mathematical programming terminology, the implicit feasible region in treatment planning, based on which oncologists make an acceptance/rejection decision, is unknown. A simplified schematic of accepted and rejected plans with respect to two metrics (Tumor dose and OAR dose) is shown in Figure~\ref{fig:motivation_a}. It can be seen that some points do not meet the guidelines but are accepted, while others meet all guidelines but are rejected. The implicit constraints, however, always correctly classify the accepted/rejected plans. \begin{figure} \caption{Simplified schematic representation of guidelines (gray solid) versus implicit (blue dashed) constraints. The black and red dots denote accepted and rejected plans, respectively. } \label{fig:motivation_a} \label{fig:motivation_b} \label{fig:guidelines} \end{figure} In the radiation therapy treatment planning problem, we demonstrate that our inverse framework can learn from both accepted and rejected plans and infer the true underlying criteria based on which the accept/reject decisions are made. Finding such constraints enables us to better understand the implicit logic of oncologists in approving or rejecting treatment plans. In doing so, we help both oncologists and planners by ({\it i}\,)~standardizing the guidelines and care practices, ({\it ii}\,)~generating more realistic lower/upper bounds on the clinical metrics based on past observations, ({\it iii}\,) improving the quality of the initial plans according to the oncologist's opinion and hence, reducing the number of iterations between planners and oncologists, and ({\it iv}\,) improving the quality of the final plans by preventing low-quality solutions that otherwise satisfy the acceptability thresholds, especially for automated treatment planning methods that heavily rely on provided radiation thresholds and may result in infeasibility if clinical thresholds are not personalized.
\section{Methodology} In this section, we first formulate a general convex forward optimization problem, where all or some of the constraints are unknown. We then define the inverse problem mathematically, where a set of accepted and rejected observations are given, and the goal is to find constraint parameters that correctly classify these observations while enforcing optimality conditions on a preferred solution. We then present the general data-driven inverse optimization model and characterize the properties of its solutions. \subsection{Problem Definition} \label{sec:problemdef} Let $\mathcal{I}$ be the set of all constraints in a forward optimization problem. We denote the set of known nonlinear and linear constraints by $\mathcal{N}$ and $\mathcal{L}$, respectively, and the set of unknown nonlinear and linear constraints to be inferred by $\widetilde{\mathcal{N}}$ and $\widetilde{\mathcal{L}}$, respectively. Note that $\mathcal{N} \cup \mathcal{L} \cup \widetilde{\mathcal{N}} \cup \widetilde{\mathcal{L}} = \mathcal{I}$ and $\mathcal{N}, \mathcal{L},\widetilde{\mathcal{N}}, \widetilde{\mathcal{L}}$ are mutually exclusive sets. We note that the known constraints are a trusted subset of the guidelines that need to be satisfied by all future solutions, which can potentially be an empty set. Assume that the decision variable is $ \mathbf{x} \in \mathbb{R}^m$. Let $f( \mathbf{x} ; \mathbf{c} )$ and $g_n( \mathbf{x} ; \, \mathbf{q} _n), \, \forall n \in \mathcal{N} \cup \widetilde{\mathcal{N}}$, be differentiable convex and concave functions of $ \mathbf{x} $, respectively. The convex forward optimization (FO) model can be formulated as: \begin{subequations} \begin{align} \mathbf{FO}: \qquad \underset{ \mathbf{x} }{\text{minimize}} \quad &f( \mathbf{x} ; \mathbf{c} )\\ \text{subject to} \quad & g_n( \mathbf{x} ; \, \mathbf{q} _n) \ge \mathbf{0} \,, && \forall n \in \mathcal{N} \cup \widetilde{\mathcal{N}}\\ & \mathbf{a} _{\ell}'\, \mathbf{x} \geq b_{\ell}\,,
&& \forall \ell \in \mathcal{L} \cup \widetilde{\mathcal{L}} \end{align} \end{subequations} \noindent Note that because $g_n( \mathbf{x} ; \mathbf{q} _n)$ is concave, the constraint $g_n( \mathbf{x} ; \mathbf{q} _n) \geq \mathbf{0} $ corresponds to a convex set. For brevity of notation, let the set of all known constraints be defined as $\mathcal{X}=\{ \mathbf{x} \in \mathbb{R}^m \mid g_n( \mathbf{x} ; \, \mathbf{q} _n) \ge \mathbf{0} , \, \forall n \in \mathcal{N}, \,\, \mathbf{a} _{\ell}' \, \mathbf{x} \geq b_{\ell}, \, \forall \ell \in \mathcal{L}\}$, the region identified by the known constraints of FO. Assume that the structures of the functions $g_n( \mathbf{x} ; \mathbf{q} _n), \, \forall n \in \widetilde{\mathcal{N}}$ are known, and the goal is to find the unknown parameters $ \mathbf{q} _n, \, \forall n \in \widetilde{\mathcal{N}}$. Note that $ \mathbf{a} _{\ell} \in \mathbb{R}^m$ and $b_{\ell} \in \mathbb{R}$ are parameters of fixed size, while each $ \mathbf{q} _n$ might be a vector of a different length, $ \mathbf{q} _n \in \mathbb{R}^{\phi_n}$, where $\phi_n$ depends on the type of nonlinear function that is to be inferred. For example, $g_1( \mathbf{x} ; \mathbf{q} _1) = q_{11} x_1^2 + q_{12} x_2^2 + q_{13} x_1 x_2 + q_{14}$ is a two-dimensional quadratic function with four unknown parameters $q_{11},\dots,q_{14}$ to be inferred. In what follows, we describe the proposed inverse methodology for imputing the constraint parameters of the FO model. \subsection{Inverse Problem Formulation}\label{sec:IOFormulation} Let $ \mathbf{x} ^k,\, k \in \mathcal{K}$, denote a set of given solutions corresponding to past decisions, where $\mathcal{K}=\mathcal{K}^{+} \cup \mathcal{K}^{-}$ and $\mathcal{K}^{+}$ and $\mathcal{K}^{-}$ denote accepted (good) and rejected (bad) observed decisions, respectively.
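Before formulating the inverse model, the FO feasibility conditions can be instantiated with a minimal numerical sketch. All values below (the disk-shaped quadratic constraint, the linear constraint, and the test points) are hypothetical choices that merely instantiate the two-dimensional quadratic example above; this is not the paper's implementation.

```python
import numpy as np

# Quadratic constraint from the text: g1(x; q) = q11*x1^2 + q12*x2^2 + q13*x1*x2 + q14.
def g1(x, q):
    """Constraint value; FO feasibility requires g1(x; q) >= 0."""
    return q[0] * x[0]**2 + q[1] * x[1]**2 + q[2] * x[0] * x[1] + q[3]

def is_feasible(x, q_list, A, b, tol=1e-9):
    """Check g_n(x; q_n) >= 0 for all n and a_l' x >= b_l for all l."""
    nonlinear_ok = all(g1(x, q) >= -tol for q in q_list)
    linear_ok = bool(np.all(A @ x >= b - tol))
    return nonlinear_ok and linear_ok

# Hypothetical parameters: q11, q12 <= 0 keeps g1 concave, so {g1 >= 0} is convex.
q_list = [np.array([-1.0, -1.0, 0.0, 4.0])]  # encodes x1^2 + x2^2 <= 4 (a disk)
A = np.array([[1.0, 0.0]])                   # encodes x1 >= 0
b = np.array([0.0])

print(is_feasible(np.array([1.0, 1.0]), q_list, A, b))  # inside the disk: True
print(is_feasible(np.array([2.0, 2.0]), q_list, A, b))  # outside the disk: False
```

The same structure carries over to any choice of concave $g_n$: only the constraint evaluation changes, not the feasibility logic.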
In this section, we propose an inverse formulation that inputs these past decisions and infers the set of linear and nonlinear constraints of the forward problem such that all previously accepted observations lie inside the inferred forward feasible region and the rejected observations become infeasible. The inverse model also identifies a preferred solution based on the objective function of the forward problem and infers the forward feasible region such that this preferred point would be optimal for FO. Before introducing the inverse model, we first outline a few standard assumptions on the structure of the problem and the observed data to ensure that the problem is well defined and feasible. \begin{assumption} \label{assumption:WellDefinedG} The functions $g_n( \mathbf{x} ; \mathbf{q} _n)$ are well defined with respect to the accepted observations $ \mathbf{x} ^k, \, k \in \mathcal{K}^{+}$, meaning that $\exists \, \mathcal{Q}_n \subseteq \mathbb{R}^{\phi_n} $, $\mathcal{Q}_n \ne \emptyset$ such that $\forall \hat{ \mathbf{q} }_n \in \mathcal{Q}_n$, $\, g_n( \mathbf{x} ;\hat{ \mathbf{q} }_n)$ is concave and \begin{align*} g_n( \mathbf{x} ^k; \,\hat{ \mathbf{q} }_n) \ge \mathbf{0} \quad \forall \, n \in \widetilde{\mathcal{N}}, \,\, k \in \mathcal{K}^{+}. \end{align*} \end{assumption} Assumption~\ref{assumption:WellDefinedG} states that the structure of each nonlinear constraint permits parameter settings that make the FO problem convex and allow all accepted observations to be feasible for the constraint. This assumption ensures that the nonlinear constraints are well defined and that FO is convex. Similarly, the known constraints must also be well defined with respect to the accepted observations, as stated in Assumption~\ref{assumption:ObsAreFeas}. \begin{assumption} \label{assumption:ObsAreFeas} All accepted observations satisfy the known constraints, i.e., \begin{align*} \mathbf{x} ^k \in \mathcal{X}, \qquad \forall k \in \mathcal{K}^{+}.
\end{align*} \end{assumption} Assumption~\ref{assumption:ObsAreFeas} is not a limiting assumption in practice, because data can be pre-processed to ensure that there are no inconsistencies with respect to the known constraints. Through this pre-processing, we can also ensure that the labeling of accepted and rejected observations is free of contradictions. For instance, a solution cannot be accepted if an identical one is rejected in the dataset. Let $\mathcal{H}$ be the convex hull of all accepted observations $ \mathbf{x} ^k, \, k \in \mathcal{K}^{+}$. The following assumption ensures that the accepted and rejected data are not contradictory. \begin{assumption}\label{assumption:WellDefined} $\not \exists k \in \mathcal{K}^{-}$ such that $ \mathbf{x} ^k \in \mathcal{H}$. \end{assumption} Assumption~\ref{assumption:WellDefined} states that no rejected observation lies within the convex hull of all accepted observations. Together, Assumptions~\ref{assumption:WellDefinedG}--\ref{assumption:WellDefined} ensure that the data and model are well defined and that it is possible to construct a convex feasible region for FO. Because the objective function $f( \mathbf{x} ; \mathbf{c} )$ is known in FO, we can identify the points in the convex hull of all accepted observations that provide the best objective value in FO. We refer to such a point as the ``preferred solution''. An example of the preferred solution for a convex objective function is visualized in Figure~\ref{fig:preferred}, where the blue dashed lines indicate iso-cost lines of the objective function and $ \mathbf{x} ^0$ is the preferred solution. This concept is formally introduced in Definition~\ref{def:x0}. \begin{definition}\label{def:x0} The~\emph{preferred} solution $ \mathbf{x} ^0 \in \mathbb{R}^m$ is defined as \begin{equation*} \mathbf{x} ^0 \in \argmin_{ \mathbf{x} \in \mathcal{H}}\{ f( \mathbf{x} ; \mathbf{c} ) \}\,.
\end{equation*} \end{definition} \begin{figure} \caption{The preferred solution $ \mathbf{x} ^0$ has the best objective value over the convex hull of accepted observations.} \label{fig:preferred} \end{figure} Note that depending on the type of objective function and the shape of the convex hull $\mathcal{H}$, there may be multiple points that satisfy the definition of a preferred solution, in which case we arbitrarily label one of them as $ \mathbf{x} ^0$. The preferred solution is not necessarily one of the observations, but it is always on the boundary of the convex hull of all accepted observations. The goal of this paper is to compute a set of linear and nonlinear constraints for the FO problem such that the accepted/rejected observations are inside/outside the inferred feasible region, respectively, and a preferred solution is an optimal solution for the FO model with the inferred feasible set. Hence, the intersection of the known constraints and the inferred constraints must include all the accepted observations and none of the rejected ones, providing a separation between the accepted and rejected points. In this paper, we will refer to such a set of inferred constraints as a ``nominal set'', as formally defined in Definition~\ref{def:nominal}. A simplified two-dimensional schematic of a nominal set is depicted in Figure~\ref{fig:nominalset}. \begin{definition}\label{def:nominal} A convex set $\widetilde{\mathcal{X}}$ is a \textit{nominal} set if \begin{align*} \mathbf{x} ^k \in \mathcal{X} \cap \widetilde{\mathcal{X}} &\qquad \forall k\in \mathcal{K}^{+}, \\ \mathbf{x} ^k \not \in \mathcal{X} \cap \widetilde{\mathcal{X}} &\qquad \forall k\in \mathcal{K}^{-}.
\end{align*} \end{definition} \begin{figure} \caption{An illustration of the intersection of a nominal set and known constraints.} \label{fig:nominalset} \end{figure} Hence, the goal of the inverse problem is to find constraint parameters $ \mathbf{a} _{\ell}, b_\ell,\, \forall \ell \in \widetilde{\mathcal{L}}$, and $ \mathbf{q} _n, \, \forall n \in \widetilde{\mathcal{N}}$ such that the resulting inferred feasible set $\widetilde{\mathcal{X}}$ is a nominal set, and the preferred solution $ \mathbf{x} ^0$ is an optimal solution for \begin{align*} \underset{ \mathbf{x} }{\text{minimize}} \quad &f( \mathbf{x} ; \mathbf{c} ) \\ \text{subject to} \quad & \mathbf{x} \in \mathcal{X} \cap \widetilde{\mathcal{X}}. \end{align*} To impute such constraints, we propose a data-driven inverse optimization (DIO) formulation that imposes feasibility constraints on the accepted observations, ensures the infeasibility of the rejected points, and enforces optimality conditions on the preferred solution $ \mathbf{x} ^0$. The DIO model can be written as follows. 
\begin{subequations}\label{eq:DIO} \begin{align} \mathbf{DIO}: \quad \underset{ \mathbf{a} , b, \mathbf{q} , \lambda, \mu,y}{\text{Maximize}} \quad & \mathscrsfs{D} \left( \mathbf{q} _1, \dots, \mathbf{q} _{|\widetilde{\mathcal{N}}|}, \mathbf{A} , \mathbf{b} ; ( \mathbf{x} ^{1}, \dots, \mathbf{x} ^{|\mathcal{K}|}) \right) \label{eq:IOobj} \\ \text{subject to} \quad & g_n( \mathbf{x} ^k; \, \mathbf{q} _n) \ge \mathbf{0} , \quad \forall k\in \mathcal{K}^{+} ,\, n\in \widetilde{\mathcal{N}}, \label{eq:IOprimalfeasNL} \\ & \mathbf{a} _\ell'\, \mathbf{x} ^k \ge b_\ell, \qquad \forall k\in \mathcal{K}^{+}, \ell \in \widetilde{\mathcal{L}}, \label{eq:IOprimalfeasL} \\ & \nabla f( \mathbf{x} ^0; \mathbf{c} ) + \sum_{n\in \mathcal{N}\cup \widetilde{\mathcal{N}}} \lambda_n \nabla g_n( \mathbf{x} ^0; \, \mathbf{q} _n) + \sum_{\ell \in \mathcal{L} \cup \widetilde{\mathcal{L}}}\mu_\ell \, \mathbf{a} _{\ell} \, = \mathbf{0} , \label{eq:IOstationarity} \\ & \lambda_n \,\, g_n( \mathbf{x} ^0; \mathbf{q} _n) = 0, \qquad \forall n\in \mathcal{N}\cup \widetilde{\mathcal{N}}, \label{eq:IOcsNL} \\ & \mu_\ell \,\, (b_\ell - \mathbf{a} _\ell' \, \mathbf{x} ^0)= 0, \qquad \forall \ell\in \mathcal{L}\cup \widetilde{\mathcal{L}}, \label{eq:IOcsL} \\ & \mathbf{a} _{\ell}'\, \mathbf{x} ^k \le b_{\ell} - \epsilon + M y_{\ell k}, \qquad \forall \ell \in \mathcal{L} \cup \widetilde{\mathcal{L}}, \, k \in \mathcal{K}^{-}, \label{eq:IOinfeasL}\\ & g_n( \mathbf{x} ^k; \, \mathbf{q} _n) \le -\epsilon + M y_{nk}, \qquad \forall n \in \mathcal{N} \cup \widetilde{\mathcal{N}}, \, k \in \mathcal{K}^{-}, \label{eq:IOinfeasNL}\\ & \sum_{i \in \mathcal{I} } y_{ik} \leq \, |\mathcal{I}| - 1, \qquad \forall k \in \mathcal{K}^{-}, \label{eq:IOinfeasSum}\\ & \mathbf{a} _{\ell} \in \mathcal{A}_{\ell} \,, b_{\ell} \in \mathcal{B}_{\ell}, \qquad \forall \ell \in \widetilde{\mathcal{L}}, \label{eq:IOnormL} \\ & \mathbf{q} _n \in \mathcal{Q}_n \,, \qquad \forall n \in \widetilde{\mathcal{N}},
\label{eq:IOnormNL} \\ & \lambda_n,\mu_\ell \leq 0, \qquad \forall n\in \mathcal{N}\cup \widetilde{\mathcal{N}}, \, \ell\in \mathcal{L}\cup \widetilde{\mathcal{L}}, \label{eq:IOstationaritySign}\\ & y_{ik} \in \{0,1\}, \qquad \forall i \in \mathcal{I}, \, k \in \mathcal{K}^{-}. \label{eq:IOBinaryVars} \end{align} \end{subequations} \noindent The objective function~\eqref{eq:IOobj} maximizes a measure of distance between the constraint parameters and the observations. An example of the objective function is maximizing the total distance between the inferred constraints and all the infeasible observations using a desirable distance metric $ \mathscrsfs{D} $. We provide more details on this objective function example in Section \ref{sec:distance}. Constraints \eqref{eq:IOprimalfeasNL} and \eqref{eq:IOprimalfeasL} enforce primal feasibility conditions. Constraints~\eqref{eq:IOstationarity} capture the stationarity conditions. Complementary slackness for the nonlinear and linear constraints of FO is captured in \eqref{eq:IOcsNL} and \eqref{eq:IOcsL}, respectively. Constraints~\eqref{eq:IOinfeasL}--\eqref{eq:IOinfeasSum} ensure that at least one constraint is violated by each of the rejected observations. Constraints~\eqref{eq:IOnormL}--\eqref{eq:IOnormNL} provide a set of desirable conditions on the coefficients of the imputed constraints, such as normalization or convexity conditions. As an optional step, any other desirable condition on the parameters can also be included in $\mathcal{Q}_n$, and similar conditions on the linear constraint parameters can also be considered through $ \mathbf{a} _{\ell} \in \mathcal{A}_{\ell}, \, b_{\ell} \in \mathcal{B}_{\ell}$. Lastly, constraints~{\eqref{eq:IOstationaritySign}--\eqref{eq:IOBinaryVars}} indicate sign and binary declarations. We next show that any optimal solution produced by the DIO model exhibits the desired properties of an inferred feasible region for FO.
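The classification requirement encoded by constraints \eqref{eq:IOprimalfeasNL}, \eqref{eq:IOprimalfeasL}, and \eqref{eq:IOinfeasL}--\eqref{eq:IOinfeasSum} can be checked a posteriori for any candidate set of inferred parameters: accepted points must satisfy every constraint, and each rejected point must violate at least one. The following minimal sketch performs that check on a hypothetical two-dimensional instance; the disk constraint, the data points, and the margin EPS are illustrative and not taken from the paper.

```python
import numpy as np

EPS = 1e-6  # violation margin, playing the role of epsilon in the formulation

def violated(x, A, b, g_funcs):
    """One boolean per constraint: True where x violates that constraint."""
    lin = [float(a @ x) < b_i - EPS for a, b_i in zip(A, b)]
    nonlin = [g(x) < -EPS for g in g_funcs]
    return lin + nonlin

def is_nominal(accepted, rejected, A, b, g_funcs):
    """Accepted points violate nothing; each rejected point violates >= 1 constraint."""
    ok_accepted = all(not any(violated(x, A, b, g_funcs)) for x in accepted)
    ok_rejected = all(any(violated(x, A, b, g_funcs)) for x in rejected)
    return ok_accepted and ok_rejected

# Hypothetical instance: a disk constraint g(x) = 4 - x1^2 - x2^2 and x1 >= 0.
g_funcs = [lambda x: 4.0 - x[0]**2 - x[1]**2]
A = np.array([[1.0, 0.0]])
b = np.array([0.0])
accepted = [np.array([1.0, 1.0]), np.array([0.5, -1.0])]
rejected = [np.array([3.0, 0.0]), np.array([-1.0, 0.0])]

print(is_nominal(accepted, rejected, A, b, g_funcs))  # True
```

In DIO itself this logic is linearized through the binary variables $y_{ik}$ and the big-$M$ terms; the sketch only verifies the resulting separation.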
\begin{proposition} \label{prop:DIOoptimal} Any feasible solution of DIO corresponds to a nominal set $\widetilde{\mathcal{X}}$ such that ${ \mathbf{x} ^0 \in \underset{ \mathbf{x} \in\mathcal{X} \cap \widetilde{\mathcal{X}}}{\argmin}\,\{f( \mathbf{x} ; \mathbf{c} ) \}}$. \end{proposition} As Proposition~\ref{prop:DIOoptimal} states, any solution of DIO has the properties of a nominal feasible set for FO and makes $ \mathbf{x} ^0$ optimal for the forward problem. To fulfill this requirement, DIO inserts at least one conflicting constraint per rejected observation such that the rejected observation becomes infeasible for FO while ensuring none of the accepted observations are cut off. Assuming that the forward model allows us to infer as many constraints as needed to do so, it is always possible to find a solution for DIO. \begin{proposition} \label{prop:DIOfeas} For sufficiently large $|\widetilde{\mathcal{L}}| + |\widetilde{\mathcal{N}}|$, DIO is guaranteed to be feasible. \end{proposition} Given that Proposition~\ref{prop:DIOfeas} states that DIO is feasible when the number of inferred constraints is sufficiently large, we next construct an upper bound on the minimum number of constraints needed to make DIO feasible. Depending on the number of rejected observations and their spatial distribution, a small number of constraints may be sufficient to cut off a large number of rejected observations. However, in the worst case, we would need one constraint per rejected observation to guarantee that each rejected point is infeasible for FO while ensuring the feasibility of all accepted points. We also need at least one inferred constraint to ensure that the preferred solution $ \mathbf{x} ^0$ is optimal for FO. Remark~\ref{remark:minNumConstr} provides a bound for the number of inferred constraints required for DIO. \begin{remark} \label{remark:minNumConstr} An upper bound on the minimum number of required constraints in DIO is $|\mathcal{K}^{-}|+1$.
\end{remark} We note again that one constraint may serve multiple purposes, which would result in a lower number of constraints needed in practical settings. For instance, a single constraint may cut a large number of rejected points out of the inferred feasible region. The FO problem considered in this paper is a general convex nonlinear problem that may be relatively hard to solve, but global optimality is guaranteed due to its convexity and specific classes of convex problems can be solved efficiently \citep{boyd2004convex}. The proposed DIO model for constraint inference adds extra levels of complexity to the FO model, mainly due to the addition of KKT conditions of complementary slackness and stationarity, which are nonlinear and not necessarily convex. It also adds binary variables and additional constraints to account for the infeasibility of rejected points. Therefore, the proposed DIO model of formulation~\eqref{eq:DIO} can be a very complex mixed-integer nonlinear nonconvex problem for which there is no optimality guarantee and we found that commercial solvers often fail to find any good-quality solutions even for small problem instances. In Section~\ref{sec:solution}, we propose a method for mitigating the complexity of the proposed inverse problem. \section{Mitigating Inverse Problem Complexity}\label{sec:solution} For inferring linear FO constraints, \citet{ghobadi2021inferring} proposed a tractable reformulation of the inverse model that leverages the structure of a class of solutions to provide an equivalent tractable reformulation that can be reduced to a linear program. Using the concept of optimality in linear programming, they add a linear half-space as a known constraint that would eliminate the need for writing strong duality and dual feasibility conditions in the inverse model. Hence, at the expense of adding one constraint to the set of known constraints, the complexity of their inverse model can potentially be reduced to an LP. 
In this section, we generalize the methodology of \citet{ghobadi2021inferring} to convex FO models and demonstrate that a data-driven constraint can be derived and added to the FO formulation to replace optimality conditions. We show that the inclusion of this additional constraint can simplify the resulting inverse formulation. In what follows, we first introduce a few definitions and discuss preliminaries for constructing the reformulation. We then discuss the properties of an optimal inverse solution and provide problem-specific sufficient optimality conditions that are simpler than the KKT criteria. Lastly, we present a reduced reformulation of the inverse problem, which is easier to solve than the original DIO model. \subsection{Preliminaries and Definitions} One of the key complexities of the DIO formulation is the inclusion of nonlinear KKT conditions for stationarity and complementary slackness, which ensure the optimality of the preferred solution. Recall that the preferred solution $ \mathbf{x} ^0$ is optimal for FO, meaning that it has a better objective value for $f( \mathbf{x} ; \mathbf{c} )$ than any other accepted observation. This means that $f( \mathbf{x} ^0; \mathbf{c} ) \leq f( \mathbf{x} ^k; \mathbf{c} ), \, \forall k \in \mathcal{K}^{+}$. In a simplified two-dimensional setting, if we draw the iso-cost objective function curve at $f( \mathbf{x} ; \mathbf{c} ) = f( \mathbf{x} ^0; \mathbf{c} )$, all accepted observations fall on one side of this curve. Definition~\ref{def:halfspace} formally defines this space on one side of the curve as the sublevel set of the objective function at the preferred solution. \begin{definition} \label{def:halfspace} The subspace $\mathcal{V}$ is a $ \mathbf{x} ^0$-\textit{sublevel set} of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$ defined as \[\mathcal{V} = \{ \mathbf{x} \mid f( \mathbf{x} ; \mathbf{c} ) \geq f( \mathbf{x} ^0; \mathbf{c} )\}. 
\] \end{definition} Note that the sublevel space $\mathcal{V}$ always contains all accepted observations, because any point $\hat{ \mathbf{x} } \not \in \mathcal{V}$ would have a better objective value than $ \mathbf{x} ^0$, i.e., $\bf(\hat{ \mathbf{x} }; \mathbf{c} ) < \bf( \mathbf{x} ; \mathbf{c} )$, which contradicts with $ \mathbf{x} ^0$ being the preferred solution. When $\bf( \mathbf{x} ; \mathbf{c} )$ is convex, the sublevel set $\mathcal{V}$ is either non-convex or a half-space, when $f( \mathbf{x} ; \mathbf{c} )$ is nonlinear or linear in $ \mathbf{x} $, respectively. When $f( \mathbf{x} ; \mathbf{c} )$ is not linear, the sublevel set is a nonconvex set that contains all feasible observations and has $ \mathbf{x} ^0$ on its boundary. The preferred solution is also on the boundary of the tangent half-space to the sublevel set, which is one side of a hyperplane that is tangent to $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$. Simplified schematics of a sublevel set and its tangent half-space are shown in Figure~\ref{fig:sublevelset1}. The tangent half-space is formally introduced in Definition~\ref{def:tangent}. \begin{definition}\label{def:tangent} The tangent half-space $\mathcal{C}$ to the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$ is defined as \[ \mathcal{C} = \{ \mathbf{x} \in \mathbb{R}^n \mid (\nabla f( \mathbf{x} ^0; \mathbf{c} ))' \, \mathbf{x} \geq f( \mathbf{x} ^0; \mathbf{c} ) \}.\] \end{definition} \begin{figure} \caption{The sublevel set of $f( \mathbf{x} , \mathbf{c} )$ (left) and its tangent half-space (right) at $ \mathbf{x} ^0$. } \label{fig:sublevelset1} \end{figure} It can be seen that if $f( \mathbf{x} ; \mathbf{c} )$ is linear, the tangent half-space is equivalent to the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$, i.e., $\mathcal{C}=\mathcal{V}$. 
In the rest of this paper, we use $\mathcal{C}$ to denote the tangent half-space of the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$, for brevity. With these definitions in place, we next discuss the properties of DIO solutions, which will allow us to derive a reduced reformulation that enforces optimality conditions on $ \mathbf{x} ^0$ without explicitly incorporating KKT conditions. \subsection{Reduced Formulation} In this section, we use the theoretical properties of a DIO solution to enforce optimality conditions through a reduced reformulation that is less complex than the original DIO model. For brevity of notations, we refer to any feasible region that is inferred using the DIO model as an \emph{imputed feasible set} for the forward problem. Given the properties of DIO solution outlined in Proposition~\ref{prop:DIOoptimal}, Definition~\ref{def:imputed} characterizes an imputed feasible set for FO. \begin{definition}\label{def:imputed} Any convex nominal set $\mathcal{S}=\mathcal{X} \cap \widetilde{\mathcal{X}}$ such that $ \mathbf{x} ^0 \in \underset{ \mathbf{x} \in\mathcal{X} \cap \widetilde{\mathcal{X}}}{\argmin}\,\{f( \mathbf{x} ; \mathbf{c} ) \}$ is an \emph{imputed set} for FO. \end{definition} We note that different DIO solutions may result in the same imputed feasible set since the imputed set is a geometric representation of the feasible region, as opposed to an algebraic one. For instance, in a DIO solution, multiplying the coefficients of a linear constraint by a constant would result in a different solution, which may even be infeasible for DIO, but it would represent the same imputed feasible set for FO. Recall the definition of tangent half-space $\mathcal{C}$ of the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$ in Definition~\ref{def:halfspace}. It can be shown that any imputed feasible set as defined in Definition~\ref{def:imputed} is always contained within the tangent half-space $\mathcal{C}$. 
This property is detailed in Proposition~\ref{prop:imputedinC}. \begin{proposition} \label{prop:imputedinC} If a $\mathcal{S} = \mathcal{X} \cap \widetilde{\mathcal{X}}$ is an imputed set for FO then $\mathcal{S} \subseteq \mathcal{C}$. \end{proposition} Figure~\ref{fig:halfspaceintuition} shows the intuition behind Proposition~\ref{prop:imputedinC} which states any imputed set for FO must be contained within the tangent half-space $\mathcal{C}$. Consider the convex nominal set $\mathcal{S}$ denoted by the dashed green area, which is not contained within $\mathcal{C}$. Then $\mathcal{S}$ must contain points outside of $\mathcal{C}$ that are either in $\mathcal{V} \setminus \mathcal{C}$ or in the complement of $\mathcal{V}$. If $\mathcal{S}$ contains $\hat{\mathbf{x}}' \not \in \mathcal{V}$, then it cannot be an imputed set, by definition, since $\hat{\mathbf{x}}'$ dominates $ \mathbf{x} ^0$ and hence $ \mathbf{x} ^0 \not \in \underset{ \mathbf{x} \in \mathcal{S}}{\argmin}\,\{f( \mathbf{x} ; \mathbf{c} ) \}$. If, $\mathcal{S}$ contains a point $\hat{\mathbf{x}} \in \mathcal{V}\setminus\mathcal{C}$, then because $f( \mathbf{x} ; \mathbf{c} )$ is convex, there exists a convex combination of $ \mathbf{x} ^0$ and $\hat{\mathbf{x}}$ that dominates $ \mathbf{x} ^0$ and hence, again, $ \mathbf{x} ^0 \not \in \underset{ \mathbf{x} \in \mathcal{S}}{\argmin}\,\{f( \mathbf{x} ; \mathbf{c} ) \}$, which contradicts the definition of an imputed set. Proposition~\ref{prop:imputedinC} provides a guideline for constructing an imputed feasible set by intersecting any convex nominal set with the tangent half-space $\mathcal{C}$, as outlined in Proposition~\ref{prop:CcapS}. \begin{proposition}\label{prop:CcapS} For any convex nominal set $\mathcal{S}$, the set $\mathcal{S} \cap \mathcal{C}$ is an imputed feasible set. 
\end{proposition} \begin{figure} \caption{A convex nominal set $\mathcal{S} \not \subseteq \mathcal{C}$, to show the intuition behind Proposition~\ref{prop:imputedinC}} \label{fig:halfspaceintuition} \end{figure} Proposition~\ref{prop:CcapS} illustrates that the optimality condition on $ \mathbf{x} ^0$ can always be guaranteed if $\mathcal{C}$ is used as one of the constraints in shaping the imputed feasible set. Because any imputed feasible set must be a subset of the tangent half-space $\mathcal{C}$, as shown in Proposition~\ref{prop:imputedinC}, the addition of $\mathcal{C}$ does not exclude or cut any possible imputed sets for FO. Hence, instead of searching for a nominal set that satisfies the optimality on $ \mathbf{x} ^0$, we can add the tangent half-space $\mathcal{C}$ as one of the known constraints for FO, thereby, always guaranteeing that $ \mathbf{x} ^0$ will be the optimal solution of FO for any inferred feasible set. This additional constraint allows us to relax the KKT conditions on the optimality of $ \mathbf{x} ^0$ in the DIO formulation and derive a reduced formulation, presented in Theorem~\ref{theorem:RDIO}. 
\begin{theorem}\label{theorem:RDIO} When $\mathcal{C}$ is appended to the known constraints of FO, then solving DIO is equivalent to solving the following reduced model: \begin{subequations} \begin{align} \mathbf{RDIO}: \quad \underset{ \mathbf{a} , b, \mathbf{q} , y}{\text{Maximize}} \quad & \mathscrsfs{D} \left( \mathbf{q} _1, \dots, \mathbf{q} _{|\widetilde{\mathcal{N}}|}, \mathbf{A} , \mathbf{b} ; ( \mathbf{x} ^{1},\dots \mathbf{x} ^{|\mathcal{K}|}) \right) \label{eq:eIOobj} \\ \text{subject to} \quad & g_n( \mathbf{x} ^k; \, \mathbf{q} _n) \ge \mathbf{0} , \quad \forall k\in \mathcal{K}^{+} , n\in \widetilde{\mathcal{N}} \label{eq:eIOprimalfeasNL} \\ & \mathbf{a} _\ell'\, \mathbf{x} ^k \ge b_\ell, \qquad \forall k\in \mathcal{K}^{+}, \ell \in \widetilde{\mathcal{L}} \label{eq:eIOprimalfeasL} \\ & \mathbf{a} _{\ell} \mathbf{x} ^k \le b_{\ell} - \epsilon + M y_{\ell k}, \quad \forall \ell \in \mathcal{L} \cup \widetilde{\mathcal{L}}, \, k \in \mathcal{K}^{-} \label{eq:eIOinfeasL}\\ & g_n( \mathbf{x} ^k; \, \mathbf{q} _n) \le \mathbf{0} - \epsilon + M y_{nk}, \quad \forall n \in \mathcal{N} \cup \widetilde{\mathcal{N}}, \, k \in \mathcal{K}^{-} \label{eq:eIOinfeasNL}\\ & \sum_{i \in \mathcal{I} } y_{ik} \leq \, \mid \mathcal{I} \mid - 1, \quad \forall k \in \mathcal{K}^{-} \label{eq:eIOinfeasSum}\\ & \mathbf{a} _{\ell} \in \mathcal{A}_{\ell} \,, b_{\ell} \in \mathcal{B}_{\ell} \quad \forall \ell \in \widetilde{\mathcal{L}} \qquad \label{eq:eIOnormL} \\ & \mathbf{q} _n \in \mathcal{Q}_n \,, \quad \forall n \in \widetilde{\mathcal{N}} \label{eq:eIOnormNL} \\ & y_{ik} \in \{0,1\}, \qquad \forall i \in \mathcal{I}, \, k \in \mathcal{K}^{-} \label{eq:eIOBinaryVars} \end{align} \end{subequations} \end{theorem} We note that RDIO is always feasible because it is a relaxed version of DIO with fewer constraints, and we know from Proposition~\ref{prop:DIOfeas} that DIO is always feasible. 
Based on Theorem~\ref{theorem:RDIO}, to find an imputed feasible set, we can simply find a nominal feasible set and super-impose the known constraints including the tangent halfspace of the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at the preferred solution~$ \mathbf{x} ^0$. \begin{corollary}\label{cor:theorem} RDIO infers unknown constraints $\widetilde{\mathcal{X}}$ that shape a nominal feasible set for FO such that $ \mathbf{x} ^0$ is an optimal solution of \begin{subequations} \begin{align} \label{eq:theorem} \underset{ \mathbf{x} }{\text{minimize}} \quad &f( \mathbf{x} ; \mathbf{c} ),\\ \text{subject to} \quad & \mathbf{x} \in \mathcal{C} \cap \mathcal{X} \cap \widetilde{\mathcal{X}}. \end{align} \end{subequations} \end{corollary} Corollary~\ref{theorem:RDIO} provides a method for reducing the complexity of the DIO model. In what follows, we provide a numerical example that illustrates how an imputed feasible set can be constructed using Corollary~\ref{cor:theorem}. In this small numerical example, the solution is constructed by manual inspection. \begin{example} \label{ex:numerical} Let $f( \mathbf{x} ; \mathbf{c} ) = x_1^2 + x_2^2$ be the convex objective function of FO, and let the feasible and infeasible observations be the black and red points shown in Figure~\ref{fig:example}, respectively. The preferred solution can be identified based on Definition~\ref{def:x0} and is shown on the figure as $ \mathbf{x} ^0$. The sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$ is denoted as $\mathcal{V}$ and can be written as $\mathcal{V}= \{ \mathbf{x} \in \mathbb{R}^2 \mid x_1^2 + x_2^2 \geq 2\}$. which is the outside of the dashed circle passing through $ \mathbf{x} ^0$. Assume that a known constraint is $2x_1-3x_2\leq 4$ for which the corresponding side of the inequality is shown by $\mathcal{X}$ in Figure~\ref{fig:example}. 
A possible nominal feasible set for FO is $\widetilde{\mathcal{X}}$= $\{(x_1,x_2)\in \mathbb{R}^2 \mid (x_1-2)^{2} + 2(x_2-2)^{2} \le 6\}$ and depicted in Figure~\ref{fig:example} in the green dashed area. Note that the intersection of this nominal set and the known constraint, i.e., $\mathcal{X} \cap \widetilde{\mathcal{X}}$, only includes all accepted observations and exclude all the rejected observations. However, it does not have the preferred solution $ \mathbf{x} ^0$ and hence, would not make $ \mathbf{x} ^0$ to be optimal for FO. On the contrary, the tangent half-space $\mathcal{C}$ allows for $ \mathbf{x} ^0$ to be a candidate optimal solution for FO when added as a known constraint for FO. Hence, $\widetilde{\mathcal{X}} \cap \mathcal{X} \cap \mathcal{C}$ is an imputed feasible set for FO as it satisfies the properties outlined in Definition~\ref{def:imputed}. $\triangle$ \end{example} \begin{figure} \caption{Illustration of the nominal and imputed sets, the sublevel set of the objective function, and the tangent hyperplane in Example~\ref{ex:numerical}. } \label{fig:example} \end{figure} The RDIO model eliminates the need for explicitly writing the stationarity and complementary slackness constraints because the inclusion of the tangent half-space $\mathcal{C}$ makes them redundant for the DIO model. Because $\mathcal{C}$ is a tangent half-space of the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at the preferred solution $ \mathbf{x} ^0$, its inclusion ensures that the resulting inferred feasible region is an imputed feasible set. The only constraints that are required to remain in the RDIO model are those that ensure the imputed constraints form a nominal solution that includes all accepted observations and none of the rejected ones. Therefore, the size of the RDIO problem is considerably lower than that of the DIO problem, and it relaxes a large number of nonconvex nonlinear constraints. 
A comparison of the number of variables and constraints in the DIO and RDIO models is provided in Table~\ref{table:size}. \begin{table}[htbp] \centering \begin{tabular}{l @{\hspace{.5cm}} l l c } \toprule & Type & Model & Number \\ \midrule \multirow{4}{*}{\rotatebox[origin=c]{90}{Variables \quad } } & \multirow{2}{*}{Continuous } & DIO & $(m+2)|\widetilde{\mathcal{L}}|+(\phi+1)|\widetilde{\mathcal{N}}| + |\mathcal{N}|+|\mathcal{L}|$ \\ \cmidrule{3-4} & & RDIO & $(m+1)|\widetilde{\mathcal{L}}|+\phi|\widetilde{\mathcal{N}}|$ \\ \cmidrule{2-4} & \multirow{2}{*}{Binary} & DIO & $(|\mathcal{N}|+|\mathcal{L}|+|\widetilde{\mathcal{N}}|+|\widetilde{\mathcal{L}}|)\,|\mathcal{K}^{-}|$ \\ \cmidrule{3-4} & & RDIO & $(|\mathcal{N}|+|\mathcal{L}|+|\widetilde{\mathcal{N}}|+|\widetilde{\mathcal{L}}|)\,|\mathcal{K}^{-}|$ \\ \bottomrule [1pt] \multirow{4}{*}{\rotatebox[origin=c]{90}{Constraints \quad} } & \multirow{2}{*}{Linear } & DIO & $|\widetilde{\mathcal{L}}|(|\mathcal{K}^{+}|+1) + (|\mathcal{L}|+1)|\mathcal{K}^{-}|$ \\ \cmidrule{3-4} & & RDIO & $|\widetilde{\mathcal{L}}|(|\mathcal{K}^{+}|+1) + (|\mathcal{L}|+1)|\mathcal{K}^{-}|$ \\ \cmidrule{2-4} & \multirow{2}{*}{Nonlinear} & DIO & $|\widetilde{\mathcal{N}}|(|\mathcal{K}^{+}|+|\mathcal{K}^{-}| + 1) + |\mathcal{N}|(|\mathcal{K}^{-}|+1) + m $ \\ \cmidrule{3-4} & & RDIO & $|\widetilde{\mathcal{N}}|(|\mathcal{K}^{+}|+|\mathcal{K}^{-}|) + |\mathcal{N}||\mathcal{K}^{-}| $ \\ \bottomrule \end{tabular} \caption{Comparison of the number of variables and constraints in the DIO and RDIO models.} \label{table:size} \end{table} Table~\ref{table:size} shows that the RDIO model has fewer continuous variables and nonlinear constraints compared to the DIO model. The nonlinear constraints that remain in RDIO primal feasibility constraints ($g_n( \mathbf{x} ; \mathbf{q} )\geq$) which are nonlinear in $ \mathbf{x} $, but may be linear (or be linearized) in $ \mathbf{q} _n$, which is the decision variable in RDIO. 
Particularly, if the FO is linear, then the corresponding RDIO model can also be linear if a linear distance metric is used in the objective function. In what follows, we discuss one example of a distance metric that can be used in the RDIO model. \subsection{Example of Distance Metric} \label{sec:distance} The objective function in DIO and RDIO maximizes a non-negative distance metric between the inferred constraints and the observations. In this section, we provide an example of such a distance metric and write the complete DIO formulation based on it, which will be used in the case study presented in Section~\ref{sec:case}. We consider an objective that aims to find constraints that are as far as possible from the rejected observations and as close as possible to the accepted ones. To formulate this objective, we maximize the maximum distance between the inferred constraints that exclude each of the rejected observation, $ \mathbf{x} ^k, k\in \mathcal{K}^-$, from the inferred feasible set. That is, among the constraints that make each rejected observation infeasible, we select the constraint furthest away from the rejected observation and push it as close to the accepted observations as possible. Hence, we define $ \mathscrsfs{D} $ as \[ \mathscrsfs{D} \left( \mathbf{q} _1, \dots, \mathbf{q} _{|\widetilde{\mathcal{N}}|}, \mathbf{A} , \mathbf{b} ; ( \mathbf{x} ^{1},\dots \mathbf{x} ^{|\mathcal{K}|}) \right) = \sum_{k \in \mathcal{K}^{-}} \max \left\{ {\underset{\ell \in \mathcal{L}}{\max}\left\{d_{\ell k}([ \mathbf{a} _{\ell}, b_{\ell}], \mathbf{x} ^{k})\right\}, \, \max \left\{ d_{nk}( \mathbf{q} _n, \mathbf{x} ^k)\right\}} \right\}, \] where $d_{ik}$ is the distance between the imputed constraint $i$ and the rejected observation $k \in \mathcal{K}^{-}$ defined as the slack of the corresponding constraint. This objective can be linearized using a set of additional constraints as shown in $\mathbf{RDIO_1}$. 
\begin{subequations} \begin{align} \mathbf{RDIO_{1}}: \quad \underset{ \mathbf{a} , b, \mathbf{q} , y}{\text{Maximize}} \quad & \sum_{k \in \mathcal{K}^{-}} z_k \label{eq:eIO1obj} \\ \text{subject to} \quad & d_{n k} \geq -g_n( \mathbf{x} ^k; \mathbf{q} _n) \qquad \forall \n \in \widetilde{\mathcal{N}}, k \in \mathcal{K}^{-}\\ & d_{n k} \leq -g_n( \mathbf{x} ^k; \mathbf{q} _n) + M y_{nk} \qquad \forall \n \in \widetilde{\mathcal{N}}, k \in \mathcal{K}^{-} \\ & d_{\ell k} \geq b_{\ell} - \mathbf{a} _{\ell}' \mathbf{x} _{k} \qquad \forall \ell \in \widetilde{\mathcal{L}}, k \in \mathcal{K}^{-} \\ & d_{\ell k} \leq b_{\ell} - \sum_{m \in M} \mathbf{a} _{\ell}' \mathbf{x} _{k} + M y_{\ell k} \qquad \forall \ell \in \widetilde{\mathcal{L}}, k \in \mathcal{K}^{-} \\ & d_{ik} \leq M(1-y_{ik}) \qquad \forall i \in \widetilde{\mathcal{N}} \cup \widetilde{\mathcal{L}}, k \in \mathcal{K}^{-} \\ & d_{ik} \geq \epsilon (1-y_{ik}) \qquad \forall i \in \widetilde{\mathcal{N}} \cup \widetilde{\mathcal{L}}, k \in \mathcal{K}^{-} \\ & z_k \leq d_{ik} + M p_{ik} \qquad \forall i \in \widetilde{\mathcal{N}} \cup \widetilde{\mathcal{L}}, k \in \mathcal{K}^{-}\\ & p_{ik} \leq |\mathcal{I}|-1 \qquad \forall i \in \widetilde{\mathcal{N}} \cup \widetilde{\mathcal{L}}, k \in \mathcal{K}^{-}\\ & p_{ik} \geq y_{ik} \qquad \forall i \in \widetilde{\mathcal{N}} \cup \widetilde{\mathcal{L}}, k \in \mathcal{K}^{-}\\ & \eqref{eq:eIOprimalfeasL}-\eqref{eq:eIOBinaryVars} \\ & d_{\ell k} \geq 0 \end{align} \end{subequations} Constraints (5b)--(5g) find the slack distance between each rejected point and each constraint. Constraints (5h)--(5j) find the maximum distance of each constraint that make each point infeasible, and the objective (5a) maximizes this maximum distance. The original primal feasibility constraints of the RDIO model are also enforced. 
Note that this specific distance model only requires the addition of linear constraints and binary variables, so if the FO is a linear problem, the correspondnig RDIO problem will be a linear integer problem. Any other metric of interest can also be used in the objective, but for simplicity, we will use this linear metric in the case study in Section~\ref{sec:case}. \section{Case Study: Standardizing Clinical Radiation Therapy Guidelines} \label{sec:case} In this section, we test the proposed methodology using a case study on standardizing radiation therapy treatment planning guidelines for breast cancer patients. We first introduce the case and the problem in the context of radiation therapy. Next, we describe the data and experimental setup. Lastly, we present and discuss the results and provide practical insights. \subsection{Case description} Breast cancer is the most widely diagnosed type of cancer in women worldwide. The cancerous tumor is often removed, leaving behind a cavity, and radiation treatment is subsequently prescribed to eliminate any remaining cancer cells. Tangential Intensity-modulated radiation therapy (IMRT) is a treatment modality often used as part of the treatment for most breast cancer patients. In tangential IMRT, two opposing beams that are tangent to the external body of the patient are used to deliver radiation to the breast tissue. The main organ at risk in breast cancer IMRT is the heart, particularly when the left breast is being irradiated \citep{mahmoudzadeh2015robust}. The goal is to find the radiation beam configurations from each angle such that the clinical target volume (CTV) inside the breast is fully irradiated and the heart is spared from radiation as much as possible. Figure~\ref{fig:CT} shows a computed tomography (CT) scan of a breast cancer patient along with the two tangential beams and the contours showing the organs at risk. 
\begin{figure} \caption{Computed Tomography (CT) scan of a breast cancer patient with important organs delineated. Image adapted from~\cite{mahmoudzadeh2015robust}.} \label{fig:CT} \end{figure} There are a set of acceptability guidelines in breast cancer treatment planning, which involve clinical dose-volume criteria on the CTV and the heart. A dose-volume criterion is a clinical metric that calculates the dose threshold to a certain fraction of an organ measured in units of radiation dose, Gray (Gy). For instance, the prescribed dose for the CTV is 42.4 Gy and at least 99\% of the CTV must receive lower than 95\% of the prescribed dose for a plan to be accepted. That is, if the body is discretized into three-dimensional cubes called voxels, then the 95\% quantile of the voxels must receive a dose of $42.4\times0.95=40.28$~Gy or higher. Similarly, at most 0.5\% of the CTV can receive a dose higher than 108\% of the prescribed dose, which means the dose to upper 0.5\% quantile of the CTV must be lower than $1.08 \times 42.4 = 45.792$. There are also upper bounds on the dose delivered to the heart, where the highest-dosed 10~cc and 25~cc volume of the heart must receive a dose lower than 90\% and 50\% of the prescribed dose, respectively. These clinical dose-volume guidelines are used as a base reference for planning, but acceptance or rejection of the plan is at the discretion of oncologists, based on each patient's specific case. Figure~\ref{fig:patientexamples} shows the clinical guidelines and the metric values for four different patients that are labeled as accepted or rejected. It can be seen that the plan for Patient~1 and Patient~3 are accepted, while the plan for Patient~1 meets the guidelines but the plan for Patient~3 is violating two of the dose-volume metrics on the heart. On the contrary, the plan for Patients~2 is rejected because it does not meet the guidelines, but the plan Patient~4 is also rejected even though it meets all the guidelines. 
\begin{figure} \caption{Examples of accepted/rejected treatment plans. The guidelines are indicated with (red) lines on the bars and arrows indicate the direction of bounds. } \label{fig:patientexamples} \end{figure} In this section, we consider a forward optimization problem that inputs a set of treatment plans and determines which ones are feasible or infeasible based on a set of constraints that impose limitations on a given clinical metrics. The FO finds the optimal plan among the feasible set, according to a known objective function. In this case study, we consider the objective of minimizing the overall dose to the entire body to reduce any unnecessary radiation exposure. The inverse model then inputs a set of past patient treatments that are labeled as accepted or rejected and infers a set of constraints that correctly classify the accepted and rejected plans as feasible and infeasible, respectively, and makes a preferred treatment plan optimal for the corresponding FO problem. We assume that there are no known constraints in FO, and for simplicity of computations, consider that all constraints are linear, which is relevant in practical settings for radiation therapy, since dose-volume criteria for breast cancer can be re-written as linear constraints~\citep{chan2014robust}. \subsection{Data and Experimental setup} To set up the experiment, we used historical plans for 5 breast cancer datasets and perturbed the radiation dose of each of these 5 plans to generate 20 simulated plans based on each of these datasets. Then, we calculated 16 clinical dose metrics for each of these 105 patient datasets including maximum and minimum dose to different regions as well as dose-volume values for the CTV, the cavity, the heart, and the lung. Each of these plans was then labeled as accepted or rejected based on a set of expert-knowledge criteria that did not match the guidelines. 
A summary of the data is provided in Figure~\ref{fig:data_metrics} where the error bars show the range of each metric for all patients in that category. Note that four of the calculated metrics match the guidelines (CTV 99\%, CTV 0.5\%, Heart 10cc, and Heart 25cc). The data visualization shows that there is no clear separation between accepted and rejected plans and the underlying logic behind the acceptance/rejection decision cannot be inferred by looking at these metrics. Specific patient examples from this dataset were also previously shown in Figure~\ref{fig:patientexamples}. \begin{figure} \caption{Visualization of a set of metrics calculated for all accepted and rejected plans. The errorbars show the range of each calculated metric across all plans in each category.} \label{fig:data_metrics} \end{figure} To test the prediction accuracy of the proposed inverse methodology, we randomly divided the data into training and testing sets. We used the training set, including both accepted and rejected plans, as input to the inverse model and inferred 10 constraints for the forward problem. We then used the testing set to validate the results and to check whether the inferred constraints can accurately predict which plans in the testing set are accepted or rejected. We validated the out-of-sample prediction by calculating the accuracy (\% correct prediction), precision: (\% correct acceptance prediction), specificity (\% correct rejection identified), recall (\% correct acceptance identified), F1 Score (harmonic average of recall and precision). The results are provided in Section~\ref{sec:results} \subsection{Results}\label{sec:results} We first tested the model using a 60/40 random split for training/testing, ran the result 50 times, and computed the average and the range for each of the metrics. Figure~\ref{fig:accuracy_metrics} shows that the inverse model performs well with an average of above 95\% and up to 100\% across all metrics. 
The lowest metric was specificity at 95\% which we believe was due to the low number of rejected observations in our datasets. \begin{figure} \caption{Out-of-sample performance of the inverse model using a 60/40 split for training/testing. } \label{fig:accuracy_metrics} \end{figure} \begin{figure} \caption{Sensitivity of different metrics with respect to the proportion of training data} \label{fig:accuracy_trends} \end{figure} We next tested the sensitivity of the method to training size. Figure~\ref{fig:accuracy_trends} illustrates the trends of each metric when the training size is varied between 20\% to 80\% in increments of 10\%, with 250 random splits in each size. The x-axis shows the \% of the data that was used for training (the rest used for testing), and the y-axis shows how each metric performed (on average). The results show that the method works well across the board, even when training on only 20\% of the available data. We see a slight dip right before the 50-50 split but it picks up again when we pass the ratio. While there is always randomness, the methods seem to consistently obtain good results on average (with all metrics above 95\%) at a 60-40 split. Note that if we train too much, the risk of overfitting increases and the out-of-sample result may suffer, which for this case study seems to happen at around 70-80\% and upward. Even in the worst case, our recall and F1 scores are above 90\%. \section{Discussions and Conclusions} In summary, this paper provides an inverse optimization framework that inputs both accepted and rejected observations and imputes a feasible region for a convex forward problem. The proposed inverse model is a complex nonlinear formulation; therefore, using the properties of the constructed solutions, we propose a reduced reformulation that mitigates some of the problem complexity by rendering a set of optimality conditions redundant. 
We consider the problem of {radiation therapy treatment planning} to {infer the implicit clinical guidelines} that are utilized in practice based on historically-approved plans. Using realistic patient datasets, we divide the data into training and testing sets and use our inverse models to derive a feasible region for accepted plans such that rejected plans are infeasible. Our results show that our obtained feasible region (i.e., the underlying clinical guidelines) performs with 95\% accuracy in predicting the accepted and rejected treatment plans in out-of-sample data, on average. The results also highlight an interesting property of IO models that even with a small size of training data, high accuracy can be obtained. This characteristic, perhaps one of the advantages of IO compared to conventional machine learning methods, is due to the fact that IO models have a predetermined optimization structure. In radiation therapy, implementing our methodology results in standardized clinical guidelines and infers the true underlying criteria based on which the accept/reject decisions are made. This information allows planners to generate higher-quality initial plans that, in turn, result in better patient care through personalized treatment plans. It also streamlines the planning process by reducing the number of times a plan is rejected and sent back for corrections. The standardization of the guidelines reduces variability, and potentially human errors, among clinicians and institutes, and enables personalized guidelines for patient subpopulations. Given the data-driven nature of our framework, data quality and pre-processing can impact the size of our models and the quality of the obtained solutions. A simple pre-processing method can potentially reduce the size of the observations by identifying and removing those observations that are rendered infeasible by the known constraints. 
A reduction in the number of these infeasible observations can greatly reduce the size and complexity of the inverse problem by reducing the number of inferred constraints required. Any redundant accepted or rejected observations (e.g., observation points within a given distance threshold of each other) or outliers can be removed from the set of observations using the data cleaning method of choice. We assume that the data is well-defined for the FO problem, and data points that contradict the FO properties can be removed from the set. For instance, if a rejected observation lies within the convex hull of the accepted observations, which contradicts the convexity assumption of the FO model, it needs to be removed. One future direction consists of focusing on special classes of convex problems and devising more efficient solution methods for large-scale data-driven applications. Another area of future research is to consider the effect of data uncertainty on constraint inference and to explore overfitting and robustness conditions for the inverse framework. We believe the method can also be applied in other application areas where an understanding of implicit expert constraints can help streamline decision-making processes. 
\begin{APPENDIX}{Proofs} \DoubleSpacedXI \begin{proof}{\textsc{Proof of Proposition~\ref{prop:DIOoptimal}.}} Constraints~\eqref{eq:IOprimalfeasNL}--\eqref{eq:IOprimalfeasL} ensure $ \mathbf{x} ^k \in \widetilde{\mathcal{X}}, \, \forall k \in \mathcal{K}^{+}$, and constraints \eqref{eq:IOinfeasL}--\eqref{eq:IOinfeasSum} use binary variables $y_{ik}$ to ensure that at least one constraint (either inferred or known; linear or nonlinear) makes each rejected point infeasible, which, in turn, ensures $ \mathbf{x} ^k \not \in \mathcal{X} \cap \widetilde{\mathcal{X}}, \, \forall k \in \mathcal{K}^{-}$. Therefore, the conditions of Definition~\ref{def:nominal} are met and $\widetilde{\mathcal{X}}$ is a nominal set. Since the set $\widetilde{\mathcal{X}} \cap \mathcal{X}$ is convex by definition, constraints~\eqref{eq:IOprimalfeasNL}--\eqref{eq:IOcsL} provide the necessary and sufficient KKT conditions to guarantee $ \mathbf{x} ^0 \in \underset{ \mathbf{x} \in\mathcal{X} \cap \widetilde{\mathcal{X}}}{\argmin}\,\{f( \mathbf{x} ; \mathbf{c} ) \}$. \halmos \end{proof} \begin{proof}{\textsc{Proof of Proposition~\ref{prop:DIOfeas}}.} To show that the feasible region is non-empty, we construct a solution (by assigning values to all variables $ \mathbf{q} , \mathbf{a} , b, \lambda, \mu,$ and $y$) that satisfies all constraints of DIO and is, hence, feasible. Let $ \mathbf{q} _n = \hat{ \mathbf{q} }, \forall n \in \widetilde{\mathcal{N}} $, as defined in Assumption~\ref{assumption:WellDefinedG}. By Assumption~\ref{assumption:WellDefined} and the Separating Hyperplane Theorem \citep{boyd2004convex}, for every rejected point $k \in \mathcal{K}^{-}$ there exist $\hat{ \mathbf{a} }_k, \hat{b}_k$ such that $\hat{ \mathbf{a} }_k' \mathbf{x} ^k < \hat{b}_k$ and $\hat{ \mathbf{a} }_k' \mathbf{x} ^p \geq \hat{b}_k, \, \forall p \in \mathcal{K}^{+}$. 
Let $[ \mathbf{a} _{\ell}] = [\hat{ \mathbf{a} }_1, \dots, \hat{ \mathbf{a} }_{|\mathcal{K}^{-}|}, \nabla f( \mathbf{x} ^0; \mathbf{c} ), \dots, \nabla f( \mathbf{x} ^0; \mathbf{c} )]$ and $ \mathbf{b} =[\hat{b}_1, \dots, \hat{b}_{|\mathcal{K}^{-}|},\nabla f( \mathbf{x} ^0; \mathbf{c} )' \mathbf{x} ^0, \dots, \nabla f( \mathbf{x} ^0; \mathbf{c} )' \mathbf{x} ^0]$, which defines $ \mathbf{a} _\ell$ and $b_\ell$ for all $\ell \in \widetilde{\mathcal{L}}$. Let the dual variables of the nonlinear and linear constraints be $\lambda_n = 0, \forall n \in \mathcal{N} \cup \widetilde{\mathcal{N}}$, and $[\mu_{\ell}] = [ \mathbf{0} _{1\times (|\mathcal{L} \cup \widetilde{\mathcal{L}}|-1)},-1]$, respectively. Finally, let the binary variables be $y_{kk} = 0, \forall k \in \mathcal{K}^{-}$, and $y_{ik} = 1, \forall i \in \mathcal{I} \setminus \{k\}, k \in \mathcal{K}^{-}$. By substitution, it can be seen that this constructed solution satisfies all constraints \eqref{eq:IOprimalfeasL}--\eqref{eq:IOBinaryVars}. Note that the first $|\mathcal{K}^{-}|$ inferred linear constraints ensure that each rejected point is infeasible for at least one inferred linear constraint, and the last set of constraints (which are all identical) ensures the optimality of $ \mathbf{x} ^0$. All constraints satisfy primal feasibility at $ \mathbf{x} ^k, \, \forall k \in \mathcal{K}^{+}$. \halmos \end{proof} \begin{proof}{\textsc{Proof of Proposition~\ref{prop:imputedinC}}.} Let $\mathcal{S} = \mathcal{X} \cap \widetilde{\mathcal{X}}$ be an imputed set for FO. Assume that $\mathcal{S} \not \subseteq \mathcal{C}$. Then $\exists \, \hat{\mathbf{x}} \in \mathcal{S} \setminus \mathcal{C}$. If $\hat{\mathbf{x}} \not \in \mathcal{V}$, then $f(\hat{\mathbf{x}}; \mathbf{c} ) < f( \mathbf{x} ^0; \mathbf{c} )$, which contradicts the definition of the preferred solution $ \mathbf{x} ^0$; thus $\hat{\mathbf{x}} \in \mathcal{V}$. Hence, $\hat{\mathbf{x}} \in \mathcal{V}\setminus\mathcal{C}$. 
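As a sanity check on this construction, the following toy computation (our own example with a hypothetical quadratic objective, not part of the paper's model) verifies that assigning the dual value $-1$ to the duplicated gradient constraint satisfies stationarity and complementary slackness at $\mathbf{x}^0$:

```python
import numpy as np

# Hypothetical instance: f(x; c) = c'x + 0.5 * x'x, preferred solution x0.
c = np.array([1.0, -2.0])
x0 = np.array([0.5, 1.5])
grad_f = c + x0  # gradient of f at x0

# Constructed inferred linear constraint a'x >= b with a = grad f(x0),
# b = grad f(x0)' x0, and dual mu = -1; all other duals are set to zero.
a, b = grad_f, grad_f @ x0
mu = -1.0

stationarity = grad_f + mu * a   # grad f(x0) - grad f(x0) = 0
slackness = mu * (b - a @ x0)    # constraint is active at x0
assert np.allclose(stationarity, 0.0)
assert np.isclose(slackness, 0.0)
assert a @ x0 >= b - 1e-12       # x0 is feasible for the new constraint
```

The same cancellation is what makes the "last set of constraints" in the construction above certify the optimality of $\mathbf{x}^0$.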
Consider the following two cases: \noindent {\bf\it Case 1:} If $f( \mathbf{x} ; \mathbf{c} )$ is linear, then $\mathcal{C}=\mathcal{V}$ and $\mathcal{V}\setminus\mathcal{C} = \varnothing$, which contradicts $\hat{\mathbf{x}} \in \mathcal{V}\setminus\mathcal{C}$. \noindent {\bf \it Case 2:} If $f( \mathbf{x} ; \mathbf{c} )$ is nonlinear convex, then $\mathcal{V}$ is non-convex, and hence $\mathcal{V}\setminus\mathcal{C}$ is non-convex. Given that $\mathcal{C}$ is the tangent half-space of the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$, the point $ \mathbf{x} ^0$ lies on the boundary of $\mathcal{V}$. Because $\mathcal{S}$ is convex, for any $\hat{\mathbf{x}} \in \mathcal{V} \setminus \mathcal{C}$ there must exist $\lambda \in (0,1)$ such that $\bar{ \mathbf{x} } = \lambda \hat{\mathbf{x}} + (1-\lambda) \mathbf{x} ^0$ satisfies $\bar{ \mathbf{x} } \in \mathcal{S} \setminus \mathcal{V}$, which contradicts $ \mathbf{x} ^0 \in \underset{ \mathbf{x} \in \mathcal{S}}{\argmin}\,\{f( \mathbf{x} ; \mathbf{c} ) \}$. \halmos \end{proof} \begin{proof}{\textsc{Proof of Proposition~\ref{prop:CcapS}}.} Both $\mathcal{S}$ and $\mathcal{C}$ are convex, so $\mathcal{C} \cap \mathcal{S}$ is also convex. Because $\mathcal{S}$ is a nominal set and $\mathcal{C}$ contains all accepted observations, $\mathcal{C}\cap\mathcal{S}$ is also a convex nominal set. Since $\mathcal{S}$ is a convex nominal set, it includes the convex hull of all observations and $ \mathbf{x} ^0 \in \mathcal{S}$; therefore, $ \mathbf{x} ^0\in\mathcal{S}\cap\mathcal{C}$. Because $\mathcal{C}$ is the tangent half-space to the sublevel set of $f( \mathbf{x} ; \mathbf{c} )$ at $ \mathbf{x} ^0$, it follows that $ \mathbf{x} ^0 \in \underset{ \mathbf{x} \in \mathcal{S}\cap \mathcal{C}}\argmin\{f( \mathbf{x} ; \mathbf{c} )\}$. Therefore, $\mathcal{S} \cap \mathcal{C}$ meets the criteria outlined in Definition~\ref{def:imputed} and is an imputed feasible set. \halmos 
\end{proof} \begin{proof}{\textsc{Proof of Theorem~\ref{theorem:RDIO}.}} (\textit{i}) Any solution $ \mathbf{a} , b, \mathbf{q} , \lambda, \mu, y$ of DIO is also a solution of RDIO because RDIO has fewer constraints. (\textit{ii}) Conversely, for any solution $ \mathbf{a} , b, \mathbf{q} , y$ of RDIO there exists a solution of DIO, since $\mathcal{C}$ is appended to the known constraints and the stationarity and complementary slackness conditions can be re-written as: \begin{align*} & \nabla f( \mathbf{x} ^0; \mathbf{c} ) + \sum_{n\in \mathcal{N}\cup \widetilde{\mathcal{N}}} \lambda_n \nabla g_n\, ( \mathbf{x} ^0,\, \mathbf{q} _n) + \lambda_0 \nabla f( \mathbf{x} ^0; \mathbf{c} ) + \sum_{\ell \in \mathcal{L} \cup \widetilde{\mathcal{L}}}\mu_\ell \, \mathbf{a} _{\ell} \, = \mathbf{0} , \\ & \lambda_0 \,\, (f( \mathbf{x} ^0; \mathbf{c} )-f( \mathbf{x} ^0; \mathbf{c} )) = 0, \\ & \lambda_n \,\, g_n( \mathbf{x} ^0, \mathbf{q} _n) = 0, \qquad \forall n\in \mathcal{N}\cup \widetilde{\mathcal{N}}, \\ & \mu_\ell \,\, (b_\ell - \mathbf{a} _\ell' \mathbf{x} ^0)= 0, \qquad \forall \ell\in \mathcal{L}\cup \widetilde{\mathcal{L}}, \\ & \lambda_0, \lambda_n,\mu_\ell \leq 0, \qquad \forall n\in \mathcal{N}\cup \widetilde{\mathcal{N}}, \, \ell\in \mathcal{L}\cup \widetilde{\mathcal{L}}. \end{align*} All conditions are satisfied if we set $\lambda_0 = -1$ and $\lambda_n = \mu_\ell = 0, \, \forall n\in \mathcal{N}\cup \widetilde{\mathcal{N}}, \, \ell\in \mathcal{L}\cup \widetilde{\mathcal{L}}$. Hence, for any solution to RDIO, there exists a corresponding solution to DIO. Therefore, by (\textit{i}) and (\textit{ii}), DIO and RDIO are equivalent when $\mathcal{C}$ is added as a known constraint. \halmos \end{proof} \begin{proof}{\textsc{Proof of Corollary~\ref{cor:theorem}.}} Due to constraints~\eqref{eq:eIOprimalfeasNL}--\eqref{eq:eIOBinaryVars}, we know that $\widetilde{\mathcal{X}}$ is a nominal feasible set for FO. 
Note that $ \mathbf{x} ^0 \in \mathcal{X} \cap \widetilde{\mathcal{X}} \cap \mathcal{C}$ given that $ \mathbf{x} ^0 \in \mathcal{X}$ by Assumption~\ref{assumption:ObsAreFeas}, $ \mathbf{x} ^0 \in \widetilde{\mathcal{X}}$ because $\widetilde{\mathcal{X}}$ is a nominal set, and $ \mathbf{x} ^0 \in \mathcal{C}$ by definition. Furthermore, $\mathcal{C} = \{ \mathbf{x} \mid f( \mathbf{x} ; \mathbf{c} ) \geq f( \mathbf{x} ^0; \mathbf{c} ) \}$, which implies that $ \mathbf{x} ^0 \in \underset{ \mathbf{x} \in\mathcal{C}\cap\mathcal{X}\cap\widetilde{\mathcal{X}}}{\argmin}\, f( \mathbf{x} ; \mathbf{c} )$ and is therefore an optimal solution of~\eqref{eq:theorem}. \halmos \end{proof} \end{APPENDIX} \end{document}
\begin{document}
\newcommand{\new}[1]{\textcolor{red}{#1}}
\def\COMMENT#1{}
\def\TASK#1{}
\newcommand{\APPENDIX}[1]{}
\newcommand{\NOTAPPENDIX}[1]{#1}
\newcommand{\todo}[1]{\begin{center}\textbf{to do:} #1 \end{center}}
\newcommand{\ordsubs}[2]{(#1)_{#2}}
\newcommand{\unordsubs}[2]{\binom{#1}{#2}}
\newcommand{\ordelement}[2]{\overrightarrow{\mathbf{#1}}\left({#2}\right)}
\newcommand{\ordered}[1]{\overrightarrow{\mathbf{#1}}}
\newcommand{\reversed}[1]{\overleftarrow{\mathbf{#1}}}
\newcommand{\weighting}[1]{\mathbf{#1}}
\newcommand{\weightel}[2]{\mathbf{#1}\left({#2}\right)}
\newcommand{\unord}[1]{\mathbf{#1}}
\newcommand{\ordscript}[2]{\ordered{{#1}}_{{#2}}}
\newcommand{\revscript}[2]{\reversed{{#1}}_{{#2}}}
\newcommand{\doublesquig}{ \mathrel{ \vcenter{\offinterlineskip \ialign{##\cr$\rightsquigarrow$\cr\noalign{\kern-1.5pt}$\rightsquigarrow$\cr} } } }
\newcommand\restrict[1]{\raisebox{-.5ex}{$|$}_{#1}}
\newcommand{\probfc}[1]{\mathrm{\mathbb{P}}_{F^{*}}\left[#1\right]}
\newcommand{\probd}[1]{\mathrm{\mathbb{P}}_{D}\left[#1\right]}
\newcommand{\probf}[1]{\mathrm{\mathbb{P}}_{F}\left[#1\right]}
\newcommand{\prob}[1]{\mathrm{\mathbb{P}}\left[#1\right]}
\newcommand{\probb}[1]{\mathrm{\mathbb{P}}_{b}\left[#1\right]}
\newcommand{\expn}[1]{\mathrm{\mathbb{E}}\left[#1\right]}
\newcommand{\expnb}[1]{\mathrm{\mathbb{E}}_{b}\left[#1\right]}
\newcommand{\probxj}[1]{\mathrm{\mathbb{P}}_{x(j)}\left[#1\right]}
\newcommand{\expnxj}[1]{\mathrm{\mathbb{E}}_{x(j)}\left[#1\right]}
\newcommand{\qbinom}[2]{\binom{#1}{#2}_{\!q}}
\newcommand{\binomdim}[2]{\binom{#1}{#2}_{\!\dim}}
\newcommand{\brackets}[1]{\left(#1\right)}
\newcommand{\Set}[1]{\{#1\}}
\newcommand{\set}[2]{\{#1\,:\;#2\}}
\newcommand{\krq}[2]{K^{(#1)}_{#2}}
\newcommand{\ind}[1]{$\mathbf{S}(#1)$}
\newcommand{\indcov}[1]{$(\#)_{#1}$}
\maketitle \begin{abstract} A subgraph~$H$ of an edge-coloured graph is called rainbow if all of the edges of~$H$ have different colours. In 1989, Andersen conjectured that every proper edge-colouring of~$K_{n}$ admits a rainbow path of length $n-2$. 
We show that almost all optimal edge-colourings of~$K_{n}$ admit both~(i) a rainbow Hamilton path and~(ii) a rainbow cycle using all of the colours. This result demonstrates that Andersen's Conjecture holds for almost all optimal edge-colourings of $K_n$ and answers a recent question of Ferber, Jain, and Sudakov. Our result also has applications to the existence of transversals in random symmetric Latin squares. \end{abstract} \section{Introduction} \subsection{Extremal results on rainbow colourings} We say that a subgraph~$H$ of an edge-coloured graph is \textit{rainbow} if all of the edges of~$H$ have different colours. An \textit{optimal edge-colouring} of a graph is a proper edge-colouring using the minimum possible number of colours. In this paper we study the problem of finding a rainbow Hamilton path in large optimally edge-coloured complete graphs. The study of finding rainbow structures within edge-coloured graphs has a rich history. For example, the problem posed by Euler on finding orthogonal $n\times n$ Latin squares can easily be seen to be equivalent to that of finding an optimal edge-colouring of the complete bipartite graph~$K_{n,n}$ which decomposes into edge-disjoint rainbow perfect matchings. It transpires that there are optimal colourings of~$K_{n,n}$ without even a single rainbow perfect matching, if~$n$ is even. However, an important conjecture, often referred to as the Ryser-Brualdi-Stein Conjecture, posits that one can always find an almost-perfect rainbow matching, as follows. \begin{conj}[{Ryser~\cite{R67}, Brualdi-Stein~\cite{BR91,S75}}]\label{RBS} Every optimal edge-colouring of~$K_{n,n}$ admits a rainbow matching of size~$n-1$ and, if~$n$ is odd, a rainbow perfect matching. \end{conj} Currently, the strongest result towards this conjecture for arbitrary optimal edge-colourings is due to Keevash, Pokrovskiy, Sudakov, and Yepremyan~\cite{KPSY20}, who showed that there is always a rainbow matching of size $n-O(\log n / \log \log n)$. 
This result improved earlier bounds of Woolbright~\cite{W78}, Brouwer, de Vries, and Wieringa~\cite{BdVW78}, and Hatami and Shor~\cite{HS08}. It is natural to search for spanning rainbow structures in the non-partite setting as well; that is, what spanning rainbow substructures can be found in properly edge-coloured complete graphs~$K_{n}$? It is clear that one can always find a rainbow spanning tree -- indeed, simply take the star rooted at any vertex. Kaneko, Kano, and Suzuki~\cite{KKS02} conjectured that for $n>4$, in any proper edge-colouring of~$K_{n}$, one can find~$\lfloor n/2\rfloor$ edge-disjoint rainbow spanning trees, thus decomposing~$K_{n}$ if~$n$ is even, and almost decomposing~$K_{n}$ if~$n$ is odd. This conjecture was recently proved approximately by Montgomery, Pokrovskiy, and Sudakov~\cite{MPS19}, who showed that in any properly edge-coloured~$K_{n}$, one can find $(1-o(1))n/2$ edge-disjoint rainbow spanning trees. For \textit{optimal} edge-colourings, even more is known. Note firstly that if~$n$ is even and~$K_{n}$ is optimally edge-coloured, then the colour classes form a $1$\textit{-factorization} of~$K_{n}$; that is, a decomposition of~$K_{n}$ into perfect matchings. Throughout the paper, we will use the term $1$-factorization synonymously with an edge-colouring whose colour classes form a $1$-factorization. It is clear that if a $1$-factorization of~$K_{n}$ exists, then~$n$ is even. Very recently, Glock, K\"{u}hn, Montgomery, and Osthus~\cite{GKMO20} showed that for sufficiently large even~$n$, there exists a tree~$T$ on~$n$ vertices such that any $1$-factorization of~$K_{n}$ decomposes into edge-disjoint rainbow spanning trees isomorphic to~$T$, thus resolving conjectures of Brualdi and Hollingsworth~\cite{BH96}, and Constantine~\cite{C02,C05}. See e.g.~\cite{PS18,MPS19,KKKO20} for previous work on these conjectures. The tree~$T$ used in~\cite{GKMO20} is a path of length $n-o(n)$, together with~$o(n)$ short paths attached to it. 
Thus it might seem natural to ask if one can find a rainbow Hamilton path in any $1$-factorization of~$K_{n}$. Note that such a path would contain all $n-1$ of the colours used in the $1$-factorization; since a Hamilton cycle has~$n$ edges, it is not possible to find a rainbow Hamilton cycle in a $1$-factorization of~$K_{n}$. However, in 1984 Maamoun and Meyniel~\cite{MM84} proved the existence of a $1$-factorization of~$K_{n}$ (for~$n\geq4$ being any power of~$2$) without a rainbow Hamilton path. Sharing parallels with Conjecture~\ref{RBS} for the non-partite setting, Andersen~\cite{A89} conjectured in 1989 that all proper edge-colourings of~$K_{n}$ admit a rainbow path which omits only one vertex. \begin{conj}[{Andersen~\cite{A89}}]\label{Andersen} All proper edge-colourings of~$K_{n}$ admit a rainbow path of length $n-2$. \end{conj} Several variations of Andersen's Conjecture have been proposed. In 2007, Akbari, Etesami, Mahini, and Mahmoody~\cite{AEMM07} conjectured that all $1$-factorizations of~$K_{n}$ admit a Hamilton cycle whose edges collectively have at least $n-2$ colours. They also conjectured that all $1$-factorizations of~$K_{n}$ admit a rainbow cycle omitting only two vertices. Although now known to be false, the following stronger form of Conjecture~\ref{Andersen} involving the `sub-Ramsey number' of the Hamilton path was proposed by Hahn~\cite{H80}: every (not necessarily proper) edge-colouring of~$K_n$ with at most~$n/2$ edges of each colour admits a rainbow Hamilton path. In light of the aforementioned construction of Maamoun and Meyniel~\cite{MM84}, in 1986 Hahn and Thomassen~\cite{HT86} suggested the following slightly weaker form of Hahn's Conjecture, that all edge-colourings of~$K_{n}$ with strictly fewer than~$n/2$ edges of each colour admit a rainbow Hamilton path. 
However, even this weakening of Hahn's Conjecture is false -- Pokrovskiy and Sudakov~\cite{PS19} proved the existence of such edge-colourings of $K_n$ in which the longest rainbow Hamilton path has length at most $n - \ln n / 42$. Andersen's Conjecture has led to a number of results, generally focussing on increasing the length of the rainbow path or cycle that one can find in an arbitrary $1$-factorization or proper edge-colouring of~$K_{n}$ (see e.g.~\cite{CL15, GM10, GM12, GRSS11}). Alon, Pokrovskiy, and Sudakov~\cite{APS17} proved that all proper edge-colourings of~$K_{n}$ admit a rainbow path with length $n-O(n^{3/4})$, and the error bound has since been improved to $O(\sqrt{n}\cdot\log n)$ by Balogh and Molla~\cite{BM19}. Further support for Conjecture~\ref{Andersen} and its variants was provided by Montgomery, Pokrovskiy, and Sudakov~\cite{MPS19} as well as Kim, K\" uhn, Kupavskii, and Osthus~\cite{KKKO20}, who showed that if we consider proper edge-colourings where no colour class is larger than $n/2 - o(n)$, then we can even find $n/2 -o(n)$ edge-disjoint rainbow Hamilton cycles. \subsection{Random colourings} It is natural to consider these problems in a probabilistic setting, that is to consider random edge-colourings as well as random Latin squares. However, the `rigidity' of the underlying structure makes these probability spaces very challenging to analyse. Recently significant progress was made by Kwan~\cite{K16}, who showed that almost all Latin squares contain a transversal, or equivalently, that almost all optimal edge-colourings of $K_{n,n}$ admit a rainbow perfect matching. His analysis was carried out in a hypergraph setting, which also yields the result that almost all Steiner triple systems contain a perfect matching. Recently, this latter result was strengthened by Ferber and Kwan~\cite{FK20}, who showed that almost all Steiner triple systems have an approximate decomposition into edge-disjoint perfect matchings. 
Here we show that Hahn's original conjecture (and thus Andersen's Conjecture as well) holds for almost all 1-factorizations, answering a recent question of Ferber, Jain, and Sudakov~\cite{FJS20}. In what follows, we say a property holds `with high probability' if it holds with a probability that tends to~$1$ as the number of vertices~$n$ tends to infinity. \begin{theorem}\label{mainthm} Let~$\phi$ be a uniformly random optimal edge-colouring of~$K_{n}$. Then with high probability, \begin{enumerate}[(i)] \item\label{mainthm:rainbow-path}~$\phi$ admits a rainbow Hamilton path, and \item\label{mainthm:rainbow-cycle}~$\phi$ admits a rainbow cycle $F$ containing all of the colours. \end{enumerate} In particular, if~$n$ is odd, then $F$ is a rainbow Hamilton cycle. \end{theorem} As discussed in Section~\ref{corollary-section}, there is a well-known correspondence between rainbow 2-factors in $n$-edge-colourings of $K_n$ and transversals in symmetric Latin squares, as a transversal in a Latin square corresponds to a permutation $\sigma$ of $[n]$ such that the entries in positions $(i, \sigma(i))$ are distinct for all $i\in [n]$. Based on this, we use Theorem~\ref{mainthm}\ref{mainthm:rainbow-cycle} to show that random symmetric Latin squares of odd order contain a \textit{Hamilton transversal} with high probability. Here we say a transversal is \textit{Hamilton} if the underlying permutation $\sigma$ is an $n$-cycle. \begin{cor}\label{oddsym} Let~$n$ be an odd integer and~$\mathbf{L}$ a uniformly random symmetric~$n\times n$ Latin square. Then with high probability~$\mathbf{L}$ contains a Hamilton transversal. \end{cor} Further results on random Latin squares were recently obtained by Kwan and Sudakov~\cite{KS18}, who gave estimates on the number of intercalates in a random Latin square as well as their likely discrepancy. 
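The notion of a Hamilton transversal can be made concrete with a small check. The snippet below is our own illustration (the helper name and the circulant square are ours): it verifies that the cells $(i,\sigma(i))$ carry $n$ distinct symbols and that $\sigma$ is a single $n$-cycle.

```python
def is_hamilton_transversal(L, sigma):
    """Check that the permutation sigma of range(n) picks n distinct
    symbols L[i][sigma[i]] (a transversal) and is a single n-cycle."""
    n = len(L)
    if len({L[i][sigma[i]] for i in range(n)}) != n:
        return False  # not even a transversal
    seen, i = 0, 0
    while True:  # follow the cycle of sigma containing 0
        i = sigma[i]
        seen += 1
        if i == 0:
            break
    return seen == n  # Hamilton iff the cycle covers all n positions

# Symmetric circulant Latin square of odd order 3: L[i][j] = (i + j) mod 3.
L = [[(i + j) % 3 for j in range(3)] for i in range(3)]
```

For this square, the cyclic permutation $\sigma = (0\,1\,2)$ gives a Hamilton transversal, while the identity permutation is a transversal but not a Hamilton one.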
After the completion of the initial version of this paper, additional results on intercalates in random Latin squares were obtained by Kwan, Sah, and Sawhney~\cite{KSS21}, which, together with the results of~\cite{KS18}, resolve an old conjecture of McKay and Wanless~\cite{MW99}. In addition, Gould and Kelly~\cite{GK21} showed that an analogue of Corollary~\ref{oddsym} also holds when $\mathbf{L}$ is a uniformly random (not necessarily symmetric) $n \times n$ Latin square, strengthening the aforementioned result of Kwan~\cite{K16}. \section{Notation} In this section, we collect some definitions and notation that we will use throughout the paper. For a graph~$G$ and (not necessarily distinct) vertex sets $A,B\subseteq V(G)$, we define~$E_{G}(A,B)\coloneqq\{e=ab\in E(G)\colon a\in A, b\in B\}$. We often simply write~$E(A,B)$ when~$G$ is clear from the context. We define $e(A,B)\coloneqq |E(A,B)|$. For a vertex $v\in V(G)$, we define~$\partial_{G}(v)$ to be the set of edges of~$G$ which are incident to~$v$. For a proper colouring $\phi\colon E(G)\rightarrow\mathbb{N}$ and a colour $c\in\mathbb{N}$, we define $E_{c}(G)\coloneqq\{e\in E(G)\colon \phi(e)=c\}$ and say that an edge $e\in E_{c}(G)$ is a $c$\textit{-edge} of~$G$. For a vertex $v\in V(G)$, if~$e$ is a $c$-edge in~$G$ incident to~$v$, then we say that the non-$v$ endpoint of~$e$ is the\COMMENT{Unique since~$\phi$ proper.} $c$\textit{-neighbour} of~$v$. For a vertex~$v\in V(G)$ and three colours $c_{1}, c_{2}, c_{3}\in\mathbb{N}$, we say that the $c_{3}$-neighbour of the $c_{2}$-neighbour of the $c_{1}$-neighbour of~$v$ is the \textit{end} of the $c_{1}c_{2}c_{3}$\textit{-walk starting at}~$v$, if all such edges exist.\COMMENT{Again unique since~$\phi$ is proper.} For a set of colours $D\subseteq\mathbb{N}$, we define~$N_{D}(v)\coloneqq\{w\in N_{G}(v)\colon \phi(vw)\in D\}$. For sets $A,B\subseteq V(G)$ and a colour $c\in\mathbb{N}$, we define $E_{c}(A,B)\coloneqq\{e\in E(A,B)\colon\phi(e)=c\}$. 
If~$G$ is not clear from the context, we sometimes also write~$E_{G}^{c}(A,B)$. For any subgraph $H\subseteq G$, we define $\phi(H)\coloneqq\{\phi(e)\colon e\in E(H)\}$. For a set of colours $D\subseteq [n-1]$, let~$\mathcal{G}_{D}^{\text{col}}$ be the set of pairs~$(G, \phi_{G})$, where~$G$ is a $|D|$-regular graph on a vertex set~$V$ of size~$n$, and~$\phi_{G}$ is a $1$-factorization of~$G$ with colour set~$D$. Often, we abuse notation and write $G\in\mathcal{G}_{D}^{\text{col}}$, and in this case we let~$\phi_{G}$ denote the implicit $1$-factorization of~$G$, sometimes simply writing~$\phi$ when~$G$ is clear from the context. For $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ and a set of colours $D\subseteq [n-1]$, we define the \textit{restriction of}~$G$ \textit{to}~$D$, denoted~$G|_{D}$, to be the spanning subgraph of~$G$ containing precisely those edges of~$G$ which have colour in~$D$. Observe that $G|_{D}\in\mathcal{G}_{D}^{\text{col}}$. A subgraph $H\subseteq G\in \mathcal{G}_{D}^{\text{col}}$ inherits the colours of its edges from~$G$. Observe that uniformly randomly choosing a $1$-factorization~$\phi$ of~$K_{n}$ on vertex set~$V$ and colour set~$[n-1]$ is equivalent to uniformly randomly choosing $G\in\mathcal{G}_{[n-1]}^{\text{col}}$. For any $D\subseteq [n-1]$, $G\in\mathcal{G}_{D}^{\text{col}}$, and sets $V'\subseteq V$, $D'\subseteq D$, we define $E_{V', D'}(G)\coloneqq\{e=xy\in E(G)\colon \phi_{G}(e)\in D', x,y\in V'\}$, and we define $e_{V', D'}(G)\coloneqq |E_{V',D'}(G)|$. For a hypergraph~$\mathcal{H}$, we write~$\Delta^{c}(\mathcal{H})$ to denote the maximum codegree of~$\mathcal{H}$; that is, the maximum number of edges containing any two fixed vertices of~$\mathcal{H}$. 
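The $c_{1}c_{2}c_{3}$-walk notation above lends itself to a direct computation. The following sketch is our own, with a hypothetical adjacency structure `colour_nbr[v][c]` giving the $c$-neighbour of $v$; it returns the end of the walk with a given colour sequence starting at a vertex.

```python
def walk_end(colour_nbr, v, colours):
    """Follow from v the edges coloured c1, c2, ... in a properly
    edge-coloured graph; colour_nbr[u][c] is the c-neighbour of u.
    Returns the end of the walk, or None if some coloured edge is missing."""
    for c in colours:
        v = colour_nbr.get(v, {}).get(c)
        if v is None:
            return None
    return v

# 1-factorization of K4 into three perfect matchings (colours 1, 2, 3):
# colour 1: {01, 23}, colour 2: {02, 13}, colour 3: {03, 12}.
cn = {0: {1: 1, 2: 2, 3: 3}, 1: {1: 0, 2: 3, 3: 2},
      2: {1: 3, 2: 0, 3: 1}, 3: {1: 2, 2: 1, 3: 0}}
```

In this example the end of the $123$-walk starting at vertex $0$ is $0$ itself, via $0 \to 1 \to 3 \to 0$.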
For a set~$D$ of size~$n$ and a partition~$\mathcal{P}$ of~$D$ into~$m$ parts, we say that~$\mathcal{P}$ is \textit{equitable} to mean that all parts~$P$ of~$\mathcal{P}$ satisfy $|P|\in\{\lfloor n/m\rfloor, \lceil n/m\rceil\}$, and when it does not affect the argument, we will assume all parts of an equitable partition have size precisely\COMMENT{Feels a bit cheeky}~$n/m$. For a set $S$ and a real number $p \in [0, 1]$, a \textit{$p$-random} subset $T\subseteq S$ is a random subset in which each element of $S$ is included in $T$ independently with probability $p$. A \textit{$\beta$-random} subgraph of a graph~$G$ is a spanning subgraph of~$G$ where the edge-set is a $\beta$-random subset of~$E(G)$. For an event~$\mathcal{E}$ in any probability space, we write~$\overline{\mathcal{E}}$ to denote the complement of~$\mathcal{E}$. For real numbers $a,b,c$ such that $b>0$, we write $a=(1\pm b)c$ to mean that the inequality $(1-b)c\leq a\leq (1+b)c$ holds. For a natural number $n\in\mathbb{N}$, we define $[n]\coloneqq\{1,2,\dots, n\}$, and $[n]_{0}\coloneqq [n]\cup\{0\}$. We write~$x\ll y$ to mean that for any $y\in (0,1]$ there exists an $x_{0}\in (0,1)$ such that for all $0<x\leq x_{0}$ the subsequent statement holds. Hierarchies with more constants are defined similarly and should be read from the right to the left. Constants in hierarchies will always be real numbers in $(0,1]$. We assume large numbers to be integers if this does not affect the argument. \section{Overview of the proof}\label{overview} In this section, we provide an overview of the proof of Theorem~\ref{mainthm}. In Section~\ref{sketch} we prove Theorem~\ref{mainthm} in the case when~$n$ is even assuming two key lemmas which we prove in later sections. In particular, we assume that~$n$ is even in Sections~\ref{overview}--\ref{switching-section}, so that the optimal edge-colouring~$\phi$ we work with is a $1$-factorization of~$K_{n}$. 
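Two of the notions above, equitable partitions and $p$-random subsets, can be sketched directly; the helper names below are our own.

```python
import random

def equitable_partition(items, m):
    """Partition items into m parts whose sizes differ by at most one,
    i.e. every part has size floor(n/m) or ceil(n/m)."""
    return [items[i::m] for i in range(m)]

def p_random_subset(items, p, seed=None):
    """A p-random subset: each element of items is included
    independently with probability p."""
    rng = random.Random(seed)
    return [x for x in items if rng.random() < p]
```

Slicing with stride `m` guarantees the equitability condition, since the part sizes are forced to differ by at most one.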
In Section~\ref{corollary-section} we derive Theorem~\ref{mainthm} in the case when~$n$ is odd from the case when $n$ is even. We will also deduce Corollary~\ref{oddsym} from Theorem~\ref{mainthm}\ref{mainthm:rainbow-cycle} in Section~\ref{corollary-section}. Throughout the proof we work with constants ${\varepsilon}, \gamma, \eta,$ and $\mu$ satisfying the following hierarchy: \begin{equation}\label{constant-heirarchy} 1/n \ll {\varepsilon} \ll \gamma \ll \eta \ll \mu \ll 1. \end{equation} Our proof uses the absorption method as well as switching techniques. Note that the latter is a significant difference from~\cite{K16,FK20}, which rely on the analysis of the random greedy triangle removal process, as well as modifications of arguments in~\cite{LL13,Kee18} which bound the number of Steiner triple systems. Our main objective is to show that with high probability, in a random 1-factorization, we can find an absorbing structure inside a random subset of $\Theta(\mu n)$ reserved vertices, using a random subset of $\Theta(\mu n)$ reserved colours. A recent result~\cite[Lemma 16]{GKMO20}, based on hypergraph matchings, enables us to find a long rainbow path avoiding these reserved vertices and colours, and using our absorbing structure, we extend this path to a rainbow Hamilton path. More precisely, we randomly `reserve' $\Theta(\mu n)$ vertices and colours and show that with high probability we can find an absorbing structure. This absorbing structure consists of a subgraph~$G_{\text{abs}}$ which contains only reserved vertices and colours, and which contains all but at most $\gamma n$ of them. 
Moreover~$G_{\text{abs}}$ contains `flexible' sets of vertices and colours $V_{\mathrm{flex}}$ and $C_{\mathrm{flex}}$ each of size $\eta n$, with the following crucial property: \begin{enumerate}[($\dagger$)] \item for any pair of equal-sized subsets $X\subseteq V_{\mathrm{flex}}$ and $Y\subseteq C_{\mathrm{flex}}$ of size at most $\eta n/2$, the graph $G_{\text{abs}} - X$ contains a spanning rainbow path whose colours avoid~$Y$.\label{crucial-absorber-property} \end{enumerate} In fact, this spanning rainbow path has the same end vertices, regardless of the choice of $X$ and $Y$. Given this absorbing structure, we find a rainbow Hamilton path in the following three steps: \begin{enumerate} \item \textbf{Long path step:} Apply~\cite[Lemma 16]{GKMO20} to obtain a long rainbow path $P_1$ containing only non-reserved vertices and colours. Moreover, $P_1$ contains all but at most $\gamma n$ of them. \item\label{covering-step} \textbf{Covering step:} `Cover' the vertices and colours not in $G_{\text{abs}}$ or $P_1$ using the flexible sets, by greedily constructing a path $P_2$ containing them as well as sets $X\subseteq V_{\mathrm{flex}}$ and $Y\subseteq C_{\mathrm{flex}}$ of size at most $\eta n / 2$. \item \textbf{Absorbing step:} `Absorb' the remaining vertices and colours, by letting $P_3$ be the rainbow path guaranteed by~\ref{crucial-absorber-property}. \end{enumerate} In the covering step, we can ensure that $P_2$ shares one end with $P_1$ and one end with $P_3$ so that $P_1\cup P_2 \cup P_3$ is a rainbow Hamilton path, as desired. These steps are fairly straightforward, so the majority of the paper is devoted to building the absorbing structure, that is, the subgraph $G_{\mathrm{abs}}$ which satisfies \ref{crucial-absorber-property} with respect to `flexible' sets $V_{\mathrm{flex}}$ and $C_{\mathrm{flex}}$. This argument is split into two parts. 
Lemma~\ref{main-absorber-lemma}, proved in Section~\ref{absorption-section}, asserts that, subject to some quasirandomness conditions, we can build our absorbing structure using our randomly reserved vertices and colours; Lemma~\ref{main-switching-lemma}, proved in Section~\ref{switching-section}, asserts that a typical $1$-factorization of $K_n$ has these quasirandom properties. \subsection{Absorption}\label{absorption-details-section} To design our absorbing structure, we employ a strategy sometimes called `distributed absorption', first introduced by Montgomery~\cite{M18}. The details of this are presented in Section~\ref{sketch}, but we provide an overview now. Our absorbing structure consists of many `gadgets' pieced together in a particular way. In particular, for a vertex $v$ and colour $c$, a \textit{$(v, c)$-absorber} (see Definition~\ref{def:absorbing-gadget} and Figure~\ref{(v,c)-absorber-fig}) is a small subgraph containing both $v$ and an edge coloured $c$, with the following property: It contains a rainbow path which is spanning and which uses one of each colour assigned to its edges, and it also contains a rainbow path which includes all of its vertices except $v$ and an edge of every one of its colours except $c$; moreover, these paths have the same end vertices. We refer to the former path as the \textit{$(v, c)$-absorbing path} and the latter as the \textit{$(v, c)$-avoiding path} (again, see Definition~\ref{def:absorbing-gadget}). \begin{figure} \caption{A $(v, c)$-absorber, where $\phi(e_i) = i$. 
The paths $P_1$ and $P_2$ are drawn as zigzags.} \label{(v,c)-absorber-fig} \end{figure} We build our absorbing structure out of $(v, c)$-absorbers, along with short rainbow paths linking them together, using an auxiliary bipartite graph $H$ as a template (see Definition~\ref{def:absorber} and Figure~\ref{absorber-fig}), where one part of $H$ is a set of vertices (including $V_{\mathrm{flex}}$) and the other part is a set of colours (including $C_{\mathrm{flex}}$). For every edge $vc\in E(H)$, we will have a $(v, c)$-absorber in the absorbing structure. When proving~\ref{crucial-absorber-property}, if $v$ or $c$ is in $X$ or $Y$, then the spanning rainbow path in $G_{\text{abs}}-X$ contains the $(v, c)$-avoiding path, and otherwise it may contain the $(v, c)$-absorbing path. More precisely, we find a perfect matching of $H - (X \cup Y)$, and we use the $(v, c)$-absorbing path for every matched pair of vertex $v$ and colour $c$. \begin{figure} \caption{An $H$-absorber where $H\cong K_{2,2}$ with bipartition $\left(\{v_1, v_2\}, \{c_1, c_2\}\right)$.} \label{absorber-fig} \end{figure} A naive approach would be to use the complete bipartite graph with parts $V_{\mathrm{flex}}$ and $C_{\mathrm{flex}}$ as our template $H$; however, this would require too many absorbing gadgets. Instead, we choose a much sparser template graph $H$ that is \textit{robustly matchable} with respect to $V_{\mathrm{flex}}$ and $C_{\mathrm{flex}}$ (see Definition~\ref{def:rmbg}); we use a result of Montgomery~\cite[Lemma~10.7]{M18} to construct a robustly matchable bipartite graph with maximum degree $O(1)$. Thus, we only need $\Theta(\eta n)$ absorbing gadgets to build an absorbing structure satisfying \ref{crucial-absorber-property}. Using that $\eta \ll \mu$, we can build such an absorbing structure inside the random subset of $\Theta(\mu n)$ vertices and $\Theta(\mu n)$ colours (see Lemma~\ref{greedy-absorber-lemma}). 
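As a standalone illustration (not part of the formal argument), robust matchability can be checked by brute force on small templates: for every admissible pair of sets $X$, $Y$ removed from the flexible sets, one tests whether a perfect matching survives. The following Python sketch does exactly this with a simple augmenting-path matching algorithm; the toy templates at the end are our own choices, not graphs from the paper.

```python
from itertools import combinations

def has_perfect_matching(adj, left):
    """Kuhn's augmenting-path algorithm: True iff every left vertex can be matched."""
    match_right = {}  # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return all(augment(u, set()) for u in left)

def robustly_matchable(left, right, edges, left_flex, right_flex):
    """Brute-force check of robust matchability: for all X in left_flex and
    Y in right_flex with |X| = |Y| <= |left_flex| / 2, the template minus
    X and Y must retain a perfect matching."""
    adj = {u: [v for (x, v) in edges if x == u] for u in left}
    for k in range(len(left_flex) // 2 + 1):
        for X in combinations(left_flex, k):
            for Y in combinations(right_flex, k):
                rest_left = [u for u in left if u not in X]
                rest_right = set(right) - set(Y)
                sub = {u: [v for v in adj[u] if v in rest_right] for u in rest_left}
                if not has_perfect_matching(sub, rest_left):
                    return False
    return True

left = [f"v{i}" for i in range(4)]
right = [f"c{i}" for i in range(4)]
complete = [(u, v) for u in left for v in right]        # K_{4,4}: removing equal-sized
                                                        # sets leaves K_{4-k,4-k}
matching_only = [(f"v{i}", f"c{i}") for i in range(4)]  # a single perfect matching

print(robustly_matchable(left, right, complete, left, right))       # True
print(robustly_matchable(left, right, matching_only, left, right))  # False
```

The complete bipartite template trivially passes (as noted above, at the cost of too many gadgets), while a bare perfect matching fails as soon as a removed vertex and colour are not partners; sparse templates with this property require a construction such as Montgomery's.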
However, our absorbing structure needs to contain all but at most $\gamma n$ of the reserved vertices and colours. To that end, we attach a long rainbow path using almost all of the remaining reserved vertices and colours that we call a \textit{tail} (see Definition~\ref{def:absorber}); this is accomplished using the semi-random method, implemented via hypergraph matchings results (see Lemma~\ref{linking-lemma}). We use a similar approach in the long path step. \subsection{Analysing a random $1$-factorization of $K_n$} To build the absorbing structure described in Section~\ref{absorption-details-section}, we need to show that a typical 1-factorization of $K_n$ satisfies some quasirandom properties. We call these properties \textit{local edge-resilience} and \textit{robust gadget-resilience} (see Definitions~\ref{def:edge-resilient} and \ref{spread}), and we prove they hold for typical 1-factorizations in Lemma~\ref{main-switching-lemma}. Standard arguments can be used to show that these properties hold with high probability for a (not necessarily proper) edge-colouring of $K_n$ where each edge is assigned one of $n$ colours independently and uniformly at random; however, it is much more challenging to prove this for a random $1$-factorization. We prove Lemma~\ref{main-switching-lemma} using a `coloured version' of switching arguments that are commonly used to study random regular graphs. Unfortunately, $1$-factorizations of the complete graph~$K_{n}$ are `rigid' structures, in the sense that it is difficult to make local changes without global ramifications on such a $1$-factorization. Thus, instead of analysing switchings between graphs in~$\mathcal{G}_{[n-1]}^{\text{col}}$, we will analyse switchings between graphs in~$\mathcal{G}_{D}^{\text{col}}$ for appropriately chosen $D\subsetneq [n-1]$. 
In the setting of random Latin squares, this approach was used by McKay and Wanless~\cite{MW99} and further developed by Kwan and Sudakov~\cite{KS18}, and we build on their ideas. We use results on the number of $1$-factorizations of dense regular graphs due to Kahn and Lov\'{a}sz (see Theorem~\ref{linlur}) and Ferber, Jain, and Sudakov (see Theorem~\ref{Ferb}) to study the number of completions of a graph $H\in\mathcal{G}_{D}^{\text{col}}$ to a graph $G\in\mathcal{G}_{[n-1]}^{\text{col}}$, and we use this information to compare the probability space corresponding to a uniform random choice of $\mathbf{H}\in\mathcal{G}_{D}^{\text{col}}$, with the probability space corresponding to a uniform random choice of $\mathbf{G}\in\mathcal{G}_{[n-1]}^{\text{col}}$. In particular, if a uniformly random $\mathbf{H} \in \mathcal{G}^{\text{col}}_D$ is extended uniformly at random to obtain a colouring $\mathbf{H'} \in \mathcal{G}^{\text{col}}_{[n - 1]}$, then $\mathbf{H'}$ is not chosen uniformly at random from $\mathcal{G}^{\text{col}}_{[n - 1]}$, since different choices of $H \in \mathcal{G}_{D}^{\text{col}}$ have different numbers of extensions; however, $\mathbf{H'}$ can be compared to a uniformly random $\mathbf{G} \in \mathcal{G}^{\text{col}}_{[n - 1]}$ as follows (see also Corollary~\ref{wf}). For an absolute constant $C$, and for each $K \in \mathcal{G}^{\text{col}}_{[n-1]}$, \begin{equation*} \Prob{\mathbf{G} = K} = \Prob{\mathbf{H'} = K}\cdot \exp(\pm n^{2 - 1/C}). \end{equation*} Therefore, any property that holds for $\mathbf{H}$ with probability at least $1 - \exp(-\Omega(n^2))$ also holds with high probability for $\mathbf{G}|_D$. Our switching arguments yield local edge-resilience and robust gadget-resilience for $\mathbf{H}$ with high enough probability (see Lemmas~\ref{localedge} and \ref{masterswitch}) to apply Corollary~\ref{wf}. 
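To make this transfer explicit, here is the short calculation behind it (in the notation above, where $\mathcal{P}$ denotes any property of graphs in~$\mathcal{G}_{D}^{\text{col}}$ and $c>0$ is a fixed constant): if $\Prob{\mathbf{H}\notin\mathcal{P}}\leq e^{-cn^{2}}$, then, since $\mathbf{H'}|_{D}=\mathbf{H}$,
\begin{equation*}
\Prob{\mathbf{G}|_{D}\notin\mathcal{P}} = \sum_{K\colon K|_{D}\notin\mathcal{P}}\Prob{\mathbf{G}=K} \leq e^{n^{2-1/C}}\sum_{K\colon K|_{D}\notin\mathcal{P}}\Prob{\mathbf{H'}=K} = e^{n^{2-1/C}}\,\Prob{\mathbf{H}\notin\mathcal{P}} \leq e^{n^{2-1/C}-cn^{2}} = o(1),
\end{equation*}
where both sums run over $K\in\mathcal{G}_{[n-1]}^{\text{col}}$.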
\section{Proving Theorem~\ref{mainthm}}\label{sketch} In this section, let $\phi$ be a 1-factorization of $K_n$ with vertex set $V$ and colour set $C=[n-1]$. We first present the details of our absorbing structure, and in Section~\ref{proof-section}, we prove Theorem~\ref{mainthm} (in the case when~$n$ is even) subject to its existence. We begin by introducing our absorbing gadgets in the following definition (see also Figure~\ref{(v,c)-absorber-fig}). \begin{defin}\label{def:absorbing-gadget} For every $v\in V$ and $c\in C$, a \textit{$(v, c)$-absorbing gadget} is a subgraph of $K_n$ of the form $A = T\cup Q$ such that the following holds: \begin{itemize} \item $T \cong K_3$ and $Q\cong C_4$, \item $T$ and $Q$ are vertex-disjoint, \item $v\in V(T)$ and there is a unique edge $e \in E(Q)$ such that $\phi(e) = c$, \item if $e_1,e_2 \in E(T)$ are the edges incident to $v$, then there is a matching $\{e'_1, e'_2\}$ in $Q$ not containing $e$ such that $\phi(e_i) = \phi(e'_i)$ for $i\in\{1, 2\}$, \item if $e_3 \in E(T)$ is the edge not incident to $v$, then there is an edge $e'_3 \in E(Q)$ such that $\{e'_3, e\}$ is a matching in $Q$ and $\phi(e_3) = \phi(e'_3) \neq c$. \end{itemize} In this case, a pair~$P_{1}$,~$P_{2}$ of paths \textit{completes} the $(v, c)$-absorbing gadget $A = T \cup Q$ if \begin{itemize} \item the ends of $P_1$ are non-adjacent vertices in $Q$, \item one end of $P_2$ is in $Q$ but not incident to $e$ and the other end of $P_2$ is in $V(T)\setminus\{v\}$, \item $P_1$ and $P_2$ are vertex-disjoint and both $P_1$ and $P_2$ are internally vertex-disjoint from $A$, \item $P_1 \cup P_2$ is rainbow, and \item $\phi(P_1 \cup P_2) \cap \phi(A) = \varnothing$, \end{itemize} and we say $A' \coloneqq A \cup P_1 \cup P_2$ is a \textit{$(v, c)$-absorber}. We also define the following. 
\begin{itemize} \item The path $P$ with edge-set $E(P_1)\cup E(P_2) \cup \{e'_1, e'_2, e_3\}$ is the \textit{$(v, c)$-avoiding} path in $A'$, and the path $P'$ with edge-set $E(P_1) \cup E(P_2) \cup \{e_1, e_2, e'_3, e\}$ is the \textit{$(v, c)$-absorbing} path in $A'$. \item A vertex in $V(A)\setminus \{v\}$, a colour in $\phi(A) \setminus \{c\}$, or an edge in $E(A)$ is \textit{used} by the $(v, c)$-absorbing gadget $A$. \end{itemize} \end{defin} It is convenient for us to distinguish between a $(v, c)$-absorbing gadget and a $(v, c)$-absorber, because when we build our absorbing structure, we first find a $(v, c)$-absorbing gadget for every $vc \in E(H)$ and then find the paths completing each absorbing gadget. We also find an additional set of paths that `links' the gadgets together, as in the following definition. \begin{defin}\label{def:absorber} Let $H$ be a bipartite graph with bipartition $(V', C')$ where $V'\subseteq V$ and $C'\subseteq C$, and suppose $\mathcal A = \{A_{v, c} : vc \in E(H)\}$ where $A_{v, c}$ is a $(v, c)$-absorbing gadget. \begin{itemize} \item We say $\mathcal A$ \textit{satisfies} $H$ if whenever $A_{v,c}, A_{v',c'}\in\mathcal{A}$ for some $(v,c)\neq (v',c')$, no vertex in $V(A_{v,c})$ or colour in $\phi(A_{v, c})$ is used by $A_{v', c'}$. 
\item If $\mathcal P$ is a collection of vertex-disjoint paths of length 4, then we say $\mathcal P$ \textit{completes} $\mathcal A$ if the following holds: \begin{itemize} \item $\bigcup_{P\in\mathcal{P}}P$ is rainbow, \item no colour that is either in $C'$ or is used by a $(v, c)$-absorbing gadget $A_{v, c}\in \mathcal A$ appears in a path $P\in \mathcal P$, \item no vertex that is either in $V'$ or is used by a $(v, c)$-absorbing gadget $A_{v,c}\in\mathcal A$ is an internal vertex of a path $P \in \mathcal P$, \item for every $(v, c)$-absorbing gadget $A_{v, c}\in\mathcal A$ there is a pair of paths $P_1, P_2\in \mathcal P$ such that $P_1$ and $P_2$ complete $A_{v, c}$ to a $(v, c)$-absorber $A'_{v, c}$, and \item the graph $\left(\bigcup_{A\in\mathcal A}A\cup\bigcup_{P\in\mathcal P}P\right) \setminus V'$ is connected and has maximum degree three, and $\mathcal P$ is minimal subject to this property. \end{itemize} \item We say $(\mathcal A, \mathcal P)$ is an \textit{$H$-absorber} if $\mathcal A$ satisfies $H$ and is completed by $\mathcal P$. See Figure~\ref{absorber-fig}. \item We say a rainbow path $T$ is a \textit{tail} of an $H$-absorber $(\mathcal A, \mathcal P)$ if \begin{itemize} \item one of the ends of $T$, say $x$, is in a $(v, c)$-absorbing gadget $A_{v,c} \in \mathcal A$ such that $x \neq v$, \item $V(T)\cap V(A) \subseteq\{x\}$ for all $A\in \mathcal A$ and $V(T) \cap V(P) = \varnothing$ for all $P \in \mathcal P$, and \item $\phi(T) \cap \phi(A) = \varnothing$ for all $A \in \mathcal A$ and $\phi(T) \cap \phi(P) = \varnothing$ for all $P \in \mathcal P$. 
\end{itemize} \item For every matching $M$ in $H$, we define the \textit{path absorbing $M$} in $(\mathcal A, \mathcal P, T)$ to be the rainbow path $P$ such that \begin{itemize} \item $P$ contains $\bigcup_{P'\in \mathcal P}P' \cup T$ and \item for every $vc \in E(H)$, if $vc\in E(M)$, then $P$ contains the $(v, c)$-absorbing path in the $(v, c)$-absorber $A'_{v, c}$ and $P$ contains the $(v, c)$-avoiding path otherwise (that is, $V(P)\cap V' = V(M)\cap V'$ and $\phi(P) \cap C' = V(M) \cap C'$). \end{itemize} \end{itemize} \end{defin} Note that if~$\mathcal{P}$ completes~$\mathcal{A}$, then some of the paths in~$\mathcal{P}$ will complete absorbing gadgets in~$\mathcal{A}$ to absorbers, while the remaining set of paths $\mathcal{P}'\subseteq\mathcal{P}$ will be used to connect all the absorbing gadgets in~$\mathcal{A}$. More precisely, there is an enumeration $A_{1},\dots,A_{|\mathcal{A}|}$ of~$\mathcal{A}$ and an enumeration $P_{1},\dots,P_{|\mathcal{A}|-1}$ of~$\mathcal{P}'$ such that each~$P_{i}$ joins~$A_{i}$ to~$A_{i+1}$. In particular, for each $i\in[|\mathcal{A}|]\setminus\{1,|\mathcal{A}|\}$, each vertex in~$A_{i}\setminus V'$ is the endpoint of precisely one path in~$\mathcal{P}$ (and thus has degree three in $\bigcup_{A\in\mathcal{A}}A\cup\bigcup_{P\in\mathcal{P}}P$), while both~$A_{1}\setminus V'$ and~$A_{|\mathcal{A}|}\setminus V'$ contain precisely one vertex which is not the endpoint of some path in~$\mathcal{P}$ (and thus these two vertices have degree two in $\bigcup_{A\in\mathcal{A}}A\cup\bigcup_{P\in\mathcal{P}}P$). Any tail~$T$ of an $H$-absorber~$(\mathcal{A},\mathcal{P})$ has to start at one of these two vertices. Altogether this means that, given any matching~$M$ of~$H$, the path absorbing~$M$ in~$(\mathcal{A},\mathcal{P},T)$ in the definition above actually exists. 
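Before proceeding, it may help to record explicitly how the two paths inside a $(v,c)$-absorber differ; this is immediate from Definition~\ref{def:absorbing-gadget}. Since $\phi(e'_i)=\phi(e_i)$ for $i\in\{1,2,3\}$, and since $\{e'_1,e'_2\}$ and $\{e'_3,e\}$ are the two perfect matchings of~$Q$, the $(v,c)$-avoiding path $P$ and the $(v,c)$-absorbing path $P'$ in $A'=A\cup P_{1}\cup P_{2}$ satisfy
\begin{equation*}
V(P) = V(A')\setminus\{v\},\qquad V(P') = V(A'),\qquad \phi(P') = \phi(P)\cup\{c\},
\end{equation*}
and $P$ and $P'$ have the same end vertices. Thus exchanging $P$ for $P'$ absorbs precisely the vertex~$v$ and the colour~$c$, which is the mechanism behind~\ref{crucial-absorber-property}.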
Our absorbing structure is essentially an $H$-absorber $(\mathcal A, \mathcal P)$ with a tail $T$ and flexible sets $V_{\mathrm{flex}}, C_{\mathrm{flex}}\subseteq V(H)$ for an appropriately chosen template $H$. If $H - (X\cup Y)$ has a perfect matching $M$, then the path absorbing $M$ in $(\mathcal A, \mathcal P, T)$ satisfies~\ref{crucial-absorber-property}. This fact motivates the property of $H$ that we need in the next definition. \begin{defin}\label{def:rmbg} Let $H$ be a bipartite graph with bipartition $(A, B)$ such that $|A| = |B|$, and let $A'\subseteq A$ and $B'\subseteq B$ such that $|A'| = |B'|$. \begin{itemize} \item We say $H$ is \textit{robustly matchable} with respect to $A'$ and $B'$ if for every pair of sets $X$ and $Y$ where $X\subseteq A'$, $Y\subseteq B'$, and $|X| = |Y| \leq |A'| / 2$, there is a perfect matching in $H - (X\cup Y)$. \item In this case, we say $A'$ and $B'$ are \textit{flexible} and $A\setminus A'$ and $B\setminus B'$ are \textit{buffer sets}. \end{itemize} \end{defin} This concept was first introduced by Montgomery~\cite{M18}. If $H$ is robustly matchable with respect to $V_{\mathrm{flex}}$ and $C_{\mathrm{flex}}$, then an $H$-absorber $(\mathcal A, \mathcal P)$ with tail $T$ satisfies~\ref{crucial-absorber-property}. The last property of our absorbing structure that we need is that the flexible sets allow us to execute the covering step, which we capture in the following definition. \begin{defin} Let $V_{\mathrm{flex}} \subseteq V$, let $C_{\mathrm{flex}}\subseteq C$, and let $G_{\mathrm{flex}}$ be a spanning subgraph of $K_n$. 
\begin{itemize} \item If $u,v\in V$ and $c\in C$, and $P\subseteq G_{\mathrm{flex}}$ is a rainbow path of length four such that \begin{itemize} \item $u$ and $v$ are the ends of $P$, \item $u', w, v' \in V_{\mathrm{flex}}$, where $uu', u'w, wv', vv' \in E(P)$, \item $\phi(uu'), \phi(wv'), \phi(vv') \in C_{\mathrm{flex}}$, and \item $\phi(u'w) = c$, \end{itemize} then $P$ is a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-\textit{cover} of $u, v,$ and $c$. \item If $P$ is a rainbow path such that $P = \bigcup_{i = 1}^k P_i$ where $P_i$ is a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover of $v_i, v_{i+1},$ and $c_i$, then $P$ is a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-\textit{cover} of $\{v_1, \dots, v_{k+1}\}$ and $\{c_1, \dots, c_k\}$. \item If $H$ is a regular bipartite graph with bipartition $(V', C')$ where $V'\subseteq V$ and $C'\subseteq C$ such that \begin{itemize} \item $H$ is robustly matchable with respect to $V_{\mathrm{flex}}$ and $C_{\mathrm{flex}}$ where $|V_{\mathrm{flex}}|, |C_{\mathrm{flex}}| \geq \delta n$, and \item for every $u,v \in V$ and $c\in C$, there are at least $\delta n^2$ $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-covers of $u,v$, and $c$, \end{itemize} then $H$ is a \textit{$\delta$-absorbing template with flexible sets $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$}. \item If $(\mathcal A, \mathcal P)$ is an $H$-absorber where $H$ is a $\delta$-absorbing template and $T$ is a tail for $(\mathcal A, \mathcal P)$, then $(\mathcal A, \mathcal P, T, H)$ is a \textit{$\delta$-absorber}. \end{itemize} \end{defin} A $36\gamma$-absorber has the properties we need to execute both the covering step and the absorbing step, which we make formal with the next proposition. First, we introduce the following convenient convention. 
Given $V''\subseteq V'\subseteq V$ and $C''\subseteq C'\subseteq C$, we say that~$(V'',C'')$ is \textit{contained} in~$(V',C')$ with $\delta$\textit{-bounded remainder} if $V''\subseteq V'$, $C''\subseteq C'$, and $|V'\setminus V''|,|C'\setminus C''|\leq\delta n$. If $G$ is a spanning subgraph of $K_n$, $V' \subseteq V$, and $C'\subseteq C$, then we say a graph $G'$ is \textit{contained} in $(V', C', G)$ with \textit{$\delta$-bounded remainder} if~$(V(G'),\phi(G'))$ is contained in~$(V',C')$ with $\delta$-bounded remainder and $G'\subseteq G$. \begin{prop}\label{main-absorbing-proposition} If $(\mathcal A, \mathcal P, T, H)$ is a $\delta$-absorber and $P'$ is a rainbow path contained in $(V\setminus V', C\setminus C', G')$ with $\delta / 18$-bounded remainder where \begin{itemize} \item $V' = \bigcup_{A\in\mathcal A}V(A) \cup \bigcup_{P\in \mathcal P}V(P) \cup V(T)$ \item $C' = \bigcup_{A\in\mathcal A}\phi(A) \cup \bigcup_{P\in \mathcal P}\phi(P) \cup \phi(T)$, and \item $G'$ is the complement of $\bigcup_{A\in\mathcal A}A \cup \bigcup_{P\in \mathcal P}P \cup T$, \end{itemize} then there is both a rainbow Hamilton path containing $P'$ and a rainbow cycle containing $P'$ and all of the colours in $C$. \end{prop} \begin{proof} Order the colours in $C\setminus(\phi(P')\cup C')$ as $c_1, \dots, c_k$, and note that $k \leq \delta n / 18$. Order the vertices in $V\setminus (V(P') \cup V')$ as $v_1, \dots, v_\ell$.\COMMENT{ \begin{align*} |V(P')| &= |\phi(P')| + 1,\\ |V'| &= |C'| + 1,\\ |V| &= |C| + 1,\\ \ell &= |V| - |V'| - |V(P')|,\ \mathrm{and}\\ k &= |C| - |C'| - |\phi(P')|, \end{align*}} Using that~$H$ is regular, it is easy to see that $|V'|=|C'|+1$, and thus $\ell = k - 1$. Let $v_0$ and $v'_0$ be the ends of $P'$, let $u$ be the end of $T$ not in $V(A)$ for any $A \in \mathcal A$, and let $u'$ be the unique vertex in $\bigcup_{A\in\mathcal A}V(A)\setminus V(T)$ of degree two in $\bigcup_{A\in\mathcal A}A \cup \bigcup_{P\in \mathcal P}P$. 
Let $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$ be the flexible sets of $H$. First we show that there is a rainbow Hamilton path containing $P'$. We claim there is a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover $P''$ of $\{v_0, \dots, v_k\}$ and $\{c_1, \dots, c_k\}$, where $v_k \coloneqq u$. Suppose for $j \in [k - 1]$ and $i < j$ that $P_i$ is a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover of $v_i, v_{i + 1}$, and $c_{i + 1}$ such that $\bigcup_{i < j} P_i$ is a rainbow path. We show that there exists a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover $P_j$ of $v_j, v_{j + 1}$, and $c_{j + 1}$ that is internally-vertex- and colour-disjoint from $\bigcup_{i < j}P_i$, which implies that $\bigcup_{i \leq j} P_i$ is a rainbow path, and thus we can choose the path $P''$ greedily, proving the claim. Since each vertex in $V_{\mathrm{flex}}$ and each colour in $C_{\mathrm{flex}}$ is contained in at most $3n$ $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-covers of $v_j, v_{j + 1}$, and $c_{j + 1}$, and since $H$ is a $\delta$-absorbing template, there are at least $\delta n^2 - 18n\cdot j$ $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-covers of $v_j, v_{j + 1}$, and $c_{j + 1}$ not containing a vertex or colour from $\bigcup_{i < j}P_i$. Thus, since $j < k \leq \delta n / 18$, there exists a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover $P_j$ of $v_j, v_{j + 1}$, and $c_{j + 1}$ such that $\bigcup_{i\leq j} P_i$ is a rainbow path, as desired, and consequently we can choose the path $P''$ greedily, as claimed. Now let $X \coloneqq V(P'') \cap V_{\mathrm{flex}}$, and let $Y \coloneqq \phi(P'')\cap C_{\mathrm{flex}}$. Since $|X| = |Y| = 3k \leq |V_{\mathrm{flex}}|/2$ and $H$ is robustly matchable with respect to $V_{\mathrm{flex}}$ and $C_{\mathrm{flex}}$, there is a perfect matching $M$ in $H - (X\cup Y)$. 
Let $P'''$ be the path absorbing $M$ in $(\mathcal A, \mathcal P, T)$. Then $P'\cup P'' \cup P'''$ is a rainbow Hamilton path, as desired. Now we show that there is a rainbow cycle containing $P'$ and all of the colours in $C$. By the same argument as before, there is a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover $P''_1$ of $\{v_0, \dots, v_{\ell-1}, u\}$ and $\{c_1, \dots, c_{k-1}\}$ as well as a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover $P''_2$ of $v'_0$, $u'$, and $c_k$ such that $P''_1$ and $P''_2$ are vertex- and colour-disjoint. Letting $X \coloneqq V(P''_1\cup P''_2) \cap V_{\mathrm{flex}}$ and $Y\coloneqq \phi(P''_1\cup P''_2) \cap C_{\mathrm{flex}}$, letting $M$ be a perfect matching in $H - (X\cup Y)$ and $P'''$ be the path absorbing $M$ in $(\mathcal A, \mathcal P, T)$ as before, $P'\cup P''_1\cup P''_2\cup P'''$ is a rainbow cycle using all the colours in $C$, as desired. \end{proof} \subsection{The proof of Theorem~\ref{mainthm} when~$n$ is even}\label{proof-section} In this subsection, we prove the~$n$ even case of Theorem~\ref{mainthm} subject to two lemmas, Lemmas~\ref{main-switching-lemma} and~\ref{main-absorber-lemma}, which we prove in Sections~\ref{switching-section} and~\ref{absorption-section}, respectively. The first of these lemmas, Lemma~\ref{main-switching-lemma}, states that almost all 1-factorizations have two key properties, introduced in the next two definitions. Lemma~\ref{main-absorber-lemma} states that if a 1-factorization has both of these properties, then we can build an absorber using the reserved vertices and colours with high probability. Recall the hierarchy of constants ${\varepsilon}, \gamma, \eta, \mu$ from~\eqref{constant-heirarchy}. 
Firstly, we will need to show that if $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ is chosen uniformly at random, then with high probability, for any $V'\subseteq V$, $C'\subseteq C$ that are not too small,~$G$ admits many edges with colour in~$C'$ and both endpoints in~$V'$. This property will be used in the construction of the tail of our absorber. \begin{defin}\label{def:edge-resilient} For $D\subseteq C= [n-1]$, we say that~$G\in\mathcal{G}_{D}^{\text{col}}$ is ${\varepsilon}$\textit{-locally edge-resilient} if for all sets of colours $D'\subseteq D$ and all sets of vertices $V'\subseteq V$ of sizes $|V'|, |D'|\geq{\varepsilon} n$, we have that $e_{V',D'}(G)\geq{\varepsilon}^{3}n^{2}/100$. \end{defin} Secondly, we will need that almost all $G\in \mathcal{G}_{[n-1]}^{\text{col}}$ contain many $(v,c)$-absorbing gadgets for all $v\in V$, $c\in C$. \begin{defin}\label{spread} Let $D\subseteq C=[n-1]$. \begin{itemize} \item For $G\in\mathcal{G}_{D}^{\text{col}}$, $x\in V$, $c\in D$, and $t\in\mathbb{N}_{0}$, we say that a collection~$\mathcal{A}_{(x,c)}$ of $(x,c)$-absorbing gadgets in~$G$ is $t$\textit{-well-spread} if \begin{itemize} \item for all $v\in V$, there are at most~$t$ $(x,c)$-absorbing gadgets in~$\mathcal{A}_{(x,c)}$ using~$v$; \item for all $e\in E(G)$, there are at most~$t$ $(x,c)$-absorbing gadgets in~$\mathcal{A}_{(x,c)}$ using~$e$; \item for all $d\in D$, there are at most~$t$ $(x,c)$-absorbing gadgets in~$\mathcal{A}_{(x,c)}$ using~$d$. \end{itemize} (Note that by definition of `using' (see Definition~\ref{def:absorbing-gadget}), there are no $(x,c)$-absorbing gadgets using~$x$ or~$c$.) \item We say that $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ is $\mu$\textit{-robustly gadget-resilient} if for all $x\in V$ and all $c\in C$, there is a $5\mu n/4$-well-spread collection of at least~$\mu^{4}n^{2}/2^{23}$ $(x,c)$-absorbing gadgets in~$G$. \end{itemize} \end{defin} \begin{lemma}\label{main-switching-lemma} Suppose $1/n\ll {\varepsilon},\mu \ll1$. 
If~$\phi$ is a 1-factorization of~$K_n$ chosen uniformly at random, then~$\phi$ is ${\varepsilon}$-locally edge-resilient and $\mu$-robustly gadget-resilient with high probability. \end{lemma} As discussed, we prove Lemma~\ref{main-switching-lemma} in Section~\ref{switching-section} using switching arguments. The next lemma is used to construct an absorber using the reserved vertices and colours. \begin{lemma}\label{main-absorber-lemma} Suppose $1/n\ll{\varepsilon}\ll\gamma\ll\eta\ll\mu\ll1$, and let $p = q = \beta = 5\mu + 26887\eta/2 + \gamma/3 - 26880{\varepsilon}$. If $\phi$ is an ${\varepsilon}$-locally edge-resilient and $\mu$-robustly gadget-resilient 1-factorization of $K_n$ with vertex set $V$ and colour set $C$ and \begin{itemize} \item [(R1)] $V'$ is a $p$-random subset of $V$, \item [(R2)] $C'$ is a $q$-random subset of $C$, and \item [(R3)] $G'$ is a $\beta$-random subgraph of $K_n$, \end{itemize} then with high probability there is a $36\gamma$-absorber $(\mathcal A, \mathcal P, T, H)$ such that $\bigcup_{A\in\mathcal A}A \cup \bigcup_{P\in \mathcal P}P \cup T$ is contained in $(V', C', G')$ with $\gamma$-bounded remainder. \end{lemma} The final ingredient in the proof of Theorem~\ref{mainthm} is the following lemma which follows from~\cite[Lemma 16]{GKMO20}, that enables us to find the long rainbow path whose leftover we absorb using the absorber from Lemma~\ref{main-absorber-lemma}. \begin{lemma}\label{long-rainbow-path-lemma} Suppose $1/n \ll \gamma \ll p$, and let $q = \beta = p$. For every 1-factorization $\phi$ of $K_n$ with vertex set $V$ and colour set $C$, if \begin{itemize} \item $V'$ is a $p$-random subset of $V$, \item $C'$ is a $q$-random subset of $C$, and \item $G$ is a $\beta$-random subgraph of $K_n$, \end{itemize} then with high probability there is a rainbow path contained in $(V', C', G)$ with $\gamma$-bounded remainder. 
\end{lemma} We conclude this section with a proof of Theorem~\ref{mainthm} in the case that~$n$ is even, assuming Lemmas~\ref{main-switching-lemma} and~\ref{main-absorber-lemma}. \lateproof{Theorem~\ref{mainthm}, $n$ even case} By Lemma~\ref{main-switching-lemma}, it suffices to prove that if $\phi$ is an ${\varepsilon}$-locally edge-resilient and $\mu$-robustly gadget-resilient 1-factorization, then there is a rainbow Hamilton path and a rainbow cycle containing all of the colours. Let $p = q = \beta$ as in Lemma~\ref{main-absorber-lemma}, let $V_1, V_2$ be a random partition of $V$ where $V_1$ is $p$-random and $V_2$ is $(1 - p)$-random, let $C_1, C_2$ be a random partition of $C$ where $C_1$ is $q$-random and $C_2$ is $(1 - q)$-random, and let $G_1$ and $G_2$ be $\beta$-random and $(1 - \beta)$-random subgraphs of $K_n$ such that $E(G_1)$ and $E(G_2)$ partition the edges of $K_n$. By Lemma~\ref{main-absorber-lemma} applied with $V' = V_1$, $C' = C_1$, and $G' = G_1$, and by Lemma~\ref{long-rainbow-path-lemma} applied with $V' = V_2$, $C' = C_2$, and $G = G_2$, the following holds with high probability. 
There exists \begin{enumerate}[(i)] \item a $36\gamma$-absorber $(\mathcal A, \mathcal P, T, H)$ such that $\bigcup_{A\in\mathcal A}A \cup \bigcup_{P\in \mathcal P}P \cup T$ is contained in $(V_1, C_1, G_1)$ with $\gamma$-bounded remainder, and \item a rainbow path $P'$ contained in $(V_2, C_2, G_2)$ with $\gamma$-bounded remainder. \end{enumerate} Now we fix an outcome of the random partitions $(V_1, V_2)$, $(C_1, C_2)$, and $(G_1, G_2)$ so that (i) and (ii) hold. By Proposition~\ref{main-absorbing-proposition}, there is both a rainbow Hamilton path containing $P'$ and a rainbow cycle containing $P'$ and all of the colours in $C$, as desired. \noproof \begin{comment} \section{Sketch of the proof of Theorem~\ref{mainthm}} The proof of Theorem~\ref{mainthm} splits into two main parts. Firstly, we show that almost all of the $1$-factorizations of~$K_{n}$ on vertex set~$V$ and colour set~$C$ have some important properties that ensure that certain substructures are well-distributed over~$V$ and~$C$. Secondly, we show that any $1$-factorization of~$K_{n}$ which satisfies these properties admits a rainbow Hamilton path. We now discuss each of these parts in greater detail, in turn. \subsection{Well-distributed subgraphs via switchings} We wish to show that almost all $1$-factorizations~$\phi$ of~$K_{n}$ on vertex set~$V$ and colour set~$C$ have two key properties that will enable us to find a rainbow Hamilton path. It will be convenient for us to define these two properties in terms of the following more general coloured graphs, which can be viewed as `partial' $1$-factorizations of~$K_{n}$, or unions of perfect matchings that do not necessarily decompose the edges of~$K_{n}$. For a set of colours $D\subseteq [n-1]$, let~$\mathcal{G}_{D}^{\text{col}}$ be the set of pairs~$(G, \phi_{G})$, where~$G$ is a $|D|$-regular graph on vertex set~$V$, and~$\phi_{G}$ is a $1$-factorization of~$G$ with colour set~$D$. 
Often, we abuse notation and write $G\in\mathcal{G}_{D}^{\text{col}}$, and in this case we let~$\phi_{G}$ denote the implicit $1$-factorization of~$G$, sometimes simply writing~$\phi$ when~$G$ is clear from the context. A subgraph $H\subseteq G\in \mathcal{G}_{D}^{\text{col}}$ inherits the colours of its edges from~$G$. Observe that uniformly randomly choosing a $1$-factorization~$\phi$ of~$K_{n}$ on vertex set~$V$ and colour set~$[n-1]$ is equivalent to uniformly randomly choosing $G\in\mathcal{G}_{[n-1]}^{\text{col}}$. Firstly, we will need to show that if $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ is chosen uniformly at random, then with high probability, for any $V'\subseteq V$, $C'\subseteq C$ that are not too small,~$G$ admits many edges with colour in~$C'$ and both endpoints in~$V'$. For any\COMMENT{Feels like this could fit nicely in the notation section} $D\subseteq [n-1]$, $G\in\mathcal{G}_{D}^{\text{col}}$, and sets $V'\subseteq V$, $D'\subseteq D$, we define $E_{V', D'}(G)\coloneqq\{e=xy\in E(G)\colon \phi_{G}(e)\in D', x,y\in V'\}$, and we define $e_{V', D'}(G)\coloneqq |E_{V',D'}(G)|$. Let ${\varepsilon}>0$. We say that~$G\in\mathcal{G}_{D}^{\text{col}}$ is ${\varepsilon}$\textit{-locally edge-resilient} if for all sets of colours $D'\subseteq D$ and all sets of vertices $V'\subseteq V$ of sizes $|V'|, |D'|\geq{\varepsilon} n$, we have that $e_{V',D'}(G)\geq{\varepsilon}^{3}n^{2}/100$. Secondly, we will need that $G\in \mathcal{G}_{[n-1]}^{\text{col}}$ contains many subgraphs that we call `absorbing gadgets', and further, we will need that for any well-chosen $V'\subseteq V$, $C'\subseteq C$, $E'\subseteq E$, we have that~$G$ still contains many of these subgraphs when we restrict our attention only to edges of~$G$ which belong to~$E'$, have colour in~$C'$, and both endpoints in~$V'$. As the name implies, the absorbing gadgets will form a crucial piece of our absorption argument when we find a rainbow Hamilton path. 
This is covered in more detail in Section~\ref{Tom}. We now define the absorbing gadgets. \begin{defin}[{Absorbing gadgets}]\label{absorbinggadgets} Let $D\subseteq[n-1]$, $G\in\mathcal{G}_{D}^{\text{col}}$, $x\in V$, and $c\in D$. An $(x,c)$\textit{-absorbing gadget} is a subgraph~$A$ of~$G$ of the following form (see Figure~\ref{fig:ag}): \begin{enumerate}[label=\upshape(\roman*)] \item $A=T\cup Q$, where $T\cong K_{3}$ and $Q\cong C_{4}$; \item $T$ and~$Q$ are vertex-disjoint; \item $x\in V(T)$ and there is a unique edge $e\in E(Q)$ such that $\phi(e)=c$; \item if $\partial_{T}(x)=\{e_{1}, e_{2}\}$, then there is a matching~$\{e_{1}', e_{2}'\}$ in~$Q$, not containing~$e$, such that $\phi(e_{i})=\phi(e_{i}')$ for $i\in\{1,2\}$; \item if~$e_{3}$ is the unique edge of $E(T)\setminus\partial_{T}(x)$, then the unique edge $e_{3}'\in E(Q)$ such that~$\{e_{3}',e\}$ is a matching in~$Q$ satisfies $\phi(e_{3})=\phi(e_{3}')\neq c$. \end{enumerate} \end{defin} \begin{figure} \caption{An $(x,c)$-absorbing gadget.} \label{fig:ag} \end{figure} For $G\in\mathcal{G}_{D}^{\text{col}}$, $x\in V$, $c\in D$, and a non-negative integer~$t$, we say that a collection~$\mathcal{A}_{(x,c)}$ of $(x,c)$-absorbing gadgets in~$G$ is $t$\textit{-well spread} if \begin{enumerate}[label=\upshape(\roman*)] \item for all $v\in V\setminus\{x\}$, there are at most~$t$ $(x,c)$-absorbing gadgets in~$\mathcal{A}_{(x,c)}$ containing~$v$; \item for all $e\in E(G)$, there are at most~$t$ $(x,c)$-absorbing gadgets in~$\mathcal{A}_{(x,c)}$ containing~$e$; \item for all $d\in D\setminus\{c\}$, there are at most~$t$ $(x,c)$-absorbing gadgets $J\in\mathcal{A}_{(x,c)}$ such that $d\in\phi(J)$. \end{enumerate} Let $\mu >0$. We say that $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ is $\mu$\textit{-robustly gadget-resilient} if for all $x\in V$ and all $c\in C$, there is a $5\mu n/4$-well-spread collection of at least~$\mu^{4}n^{2}/2^{23}$ $(x,c)$-absorbing gadgets in~$G$. Let ${\varepsilon}>0$. 
Finally, we say that $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ is $(\mu,{\varepsilon})$\textit{-resilient} if~$G$ is $\mu$-robustly gadget-resilient, and ${\varepsilon}$-locally edge-resilient. We are now ready to state the main lemma for this part of the proof of Theorem~\ref{mainthm}. \begin{lemma}\label{resilient} Suppose $1/n\ll \alpha\ll{\varepsilon}, \mu$\COMMENT{We use ${\varepsilon}\ll\mu$ in the application, i.e. when we find the rainbow Hamilton path, but it isn't needed for this lemma to hold.}, where $\mu<1/2$\COMMENT{So that Ferber-Jain-Sudakov (Lemma~\ref{Ferb}) holds for the complements of graphs with $\mu n$ colours.}, and let $\mathbf{G}\in\mathcal{G}_{[n-1]}^{\text{col}}$ be chosen uniformly at random. Then~$\mathbf{G}$ is $(\mu,{\varepsilon})$-resilient with probability at least $1-\exp(-\alpha n^{2})$. \end{lemma} We will later show (see Lemma~\ref{foundyou}) that $(\mu,{\varepsilon})$-resilient $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ admit a rainbow Hamilton path, which will complete the proof of Theorem~\ref{mainthm}. Roughly speaking, the `resilience' properties of~$G$ will ensure that when we take random subsets of the vertices, edges, and colours of~$G$, many $(x,c)$-absorbing gadgets are contained in those subsets, and thus we will be able to create an `absorbing structure' within one of these random subsets. For more details, see Section~\ref{Tom}. For the remainder of this subsection, we sketch the proof of Lemma~\ref{resilient}. We proceed by using the powerful approach of `switchings', in which we make small local changes to one coloured graph~$G$, thus switching~$G$ into a different coloured graph~$G'$. One can then compare the properties of~$G$ and~$G'$, and consider the number of ways it is possible to switch away from any~$G$, and the number of ways it is possible to switch into any~$G'$, to learn about the probability that a uniformly chosen coloured graph will satisfy the desired properties. 
Unfortunately, $1$-factorizations of the complete graph~$K_{n}$ are `rigid' structures, in the sense that it is difficult to make local changes without global ramifications on such a $1$-factorization, which means that switchings between these $1$-factorizations may be difficult to analyse. To deal with this issue, we adapt an approach of Kwan and Sudakov~\cite{KS18} for Latin squares (which in turn takes inspiration from the work of Cavenagh, Greenhill, and Wanless~\cite{CGW08}, and McKay and Wanless~\cite{MW99}). Instead of analysing switchings between graphs in~$\mathcal{G}_{[n-1]}^{\text{col}}$, we will analyse switchings between graphs in~$\mathcal{G}_{D}^{\text{col}}$ for appropriately chosen $D\subset [n-1]$. We have much greater freedom to make local changes to graphs in~$\mathcal{G}_{D}^{\text{col}}$, which will enable us to analyse simpler switching operations to learn about the probability that a uniformly random $G\in\mathcal{G}_{D}^{\text{col}}$ contains the subgraphs we are seeking. Finally, we use a `weighting factor' (see Corollary~\ref{wf}) to compare the probability space corresponding to uniform random choice of $G\in\mathcal{G}_{D}^{\text{col}}$, with the probability space corresponding to uniform random choice of $G\in\mathcal{G}_{[n-1]}^{\text{col}}$. \end{comment} \section{Tools} In this section, we collect some results that we will use throughout the paper. \begin{comment} \subsection{Weighting factor} We now state two results on the number of $1$-factorizations in regular graphs, and use these results to find a `weighting factor' (see Corollary~\ref{wf}), which we will use to compare the probabilities of particular events occurring in different probability spaces. For any graph~$G$, let~$M(G)$ denote the number of distinct $1$-factorizations of~$G$, and for any $n,d\in\mathbb{N}$, let $\mathcal{G}_{d}^{n}$ denote the set of $d$-regular graphs on~$n$ vertices. 
Firstly, the Kahn-Lov\'{a}sz Theorem (see e.g.~\cite{AF08}) states that a graph with degree sequence $r_{1},\dots,r_{n}$ has at most~$\prod_{i=1}^{n}(r_{i}!)^{1/2r_{i}}$ perfect matchings. In particular, an $n$-vertex $d$-regular graph has at most~$(d!)^{n/2d}$ perfect matchings. To determine an upper bound for the number of $1$-factorizations of a $d$-regular graph~$G$, one can simply apply the Kahn-Lov\'{a}sz Theorem repeatedly to obtain $M(G)\leq \prod_{r=1}^{d}(r!)^{n/2r}$. Using Stirling's approximation, we obtain the following result.\COMMENT{Using Stirling's approximation as $r!=\exp(r\ln r -r +O(\ln r))$ and the well-known asymptotic expansion of the Harmonic number~$H_{d}=\frac{1}{1}+\frac{1}{2}+\dots+\frac{1}{d}=\ln d +\gamma+O(1/d)$, where $\gamma\leq 0.58$ is the Euler-Mascheroni constant, and the Taylor series expansion $e^{x}=1+x+x^{2}/2+\dots$, one obtains \begin{eqnarray*} \prod_{r=1}^{d}(r!)^{n/2r} & = & \prod_{r=1}^{d}\exp\left(\frac{n}{2}\ln r -\frac{n}{2}+O\left(\frac{n}{2r}\ln r\right)\right) =\exp\left(\frac{n}{2}(\ln 1+\dots+\ln d)-\frac{dn}{2}+\sum_{r=1}^{d}O\left(\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{n}{2}\ln d! 
-\frac{dn}{2}+O\left(\sum_{r=1}^{d}\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{n}{2}(d\ln d - d +O(\ln d))-\frac{dn}{2}+O\left(\sum_{r=1}^{d}\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{nd}{2}\ln d -nd +O(n\ln d) + O\left(\sum_{r=1}^{d}\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln d}{d}\right) +O\left(\sum_{r=1}^{d}\frac{1}{rd}\ln r\right)\right)\right) \\ & \leq & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln d}{d}\right) +O\left(\sum_{r=1}^{d}\frac{1}{rd}\ln d\right)\right)\right) \\ & = & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln d}{d}\right) +O\left(\frac{\ln d}{d}\sum_{r=1}^{d}\frac{1}{r}\right)\right)\right) \\ & = & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln^{2}d}{d}\right)\right)\right) \leq \left(\left(1+O\left(\frac{\ln^{2}d}{d}\right)+O\left(\frac{\ln^{4}d}{d^{2}}\right)\right)\frac{d}{e^{2}}\right)^{dn/2} \\ & \leq & \left(\left(1+o\left(\frac{\ln^{3}d}{d}\right)\right)\frac{d}{e^{2}}\right)^{dn/2} \leq \left(\left(1+o\left(\frac{\ln^{3}n}{n}\right)\right)\frac{d}{e^{2}}\right)^{dn/2} \leq \left(\left(1+\frac{1}{\sqrt{n}}\right)\frac{d}{e^{2}}\right)^{dn/2}, \end{eqnarray*} where we have used $d=\Theta(n)$ (say) a couple times towards the end.} \begin{theorem}\label{linlur} Suppose $n\in\mathbb{N}$ is even with $1/n\ll1$\COMMENT{Sufficiently large needed for us to make the simplifications we make in applying Stirling's (and the asymptotic formula for the Harmonic number)}, and $d\geq n/2$.\COMMENT{We don't necessarily need~$d$ to be this large, but $d=\Theta(n)$ does seem helpful in the previous comment.} Then every $G\in\mathcal{G}_{d}^{n}$ satisfies \[ M(G)\leq\left(\left(1+n^{-1/2}\right)\frac{d}{e^{2}}\right)^{dn/2}. \] \end{theorem} On the other hand, Ferber, Jain, and Sudakov~\cite{FJS20} proved the following lower bound for the number of distinct $1$-factorizations in dense regular graphs. 
\begin{theorem}[{\cite[Theorem 1.2]{FJS20}}]\label{Ferb} Suppose $C>0$ and $n\in\mathbb{N}$ is even with $1/n\ll1/C\ll1$, and $d\geq(1/2+n^{-1/C})n$. Then every $G\in\mathcal{G}_{d}^{n}$ satisfies \[ M(G)\geq\left(\left(1-n^{-1/C}\right)\frac{d}{e^{2}}\right)^{dn/2}. \] \end{theorem} Theorems~\ref{linlur} and~\ref{Ferb} immediately yield\COMMENT{Define $C$ to be the universal constant from Theorem~\ref{Ferb}. Then, applying Theorems~\ref{Ferb} and~\ref{linlur}, for all sufficiently large even integers~$n$, all $d\geq(1/2+n^{-1/C})n$, and all $G,H\in\mathcal{G}_{d}^{n}$, we have \[ \frac{M(G)}{M(H)} \leq \frac{((1+n^{-1/2})d/e^{2})^{dn/2}}{((1-n^{-1/C})d/e^{2})^{dn/2}} \leq \left(1+\frac{2n^{-1/C}}{1-n^{-1/C}}\right)^{dn/2} \leq (1+4n^{-1/C})^{dn/2} \leq \exp(2n^{1-1/C}d). \]} the following corollary: \begin{cor}\label{wf} Suppose $C>0$ and $n\in\mathbb{N}$ is even with $1/n\ll1/C\ll1$, and $d\geq(1/2+n^{-1/C})n$. Then \[ \frac{M(G)}{M(H)}\leq\exp\left(2n^{1-1/C}d\right), \] for all $G,H\in\mathcal{G}_{d}^{n}$. \end{cor} \end{comment} \subsection{Probabilistic tools} We will use the following standard probabilistic estimates. \begin{lemma}[Chernoff Bound]\label{chernoff bounds} Let~$X$ have binomial distribution with parameters~$n,p$. Then for any $0 < t \leq np$, \begin{equation*} \Prob{|X - np| > t} \leq 2\exp\left(\frac{-t^2}{3np}\right). \end{equation*} \end{lemma} Let $X_1, \dots, X_m$ be independent random variables taking values in $\mathcal X$, and let $f : \mathcal X^m \rightarrow \mathbb R$. If for all $i\in [m]$ and $x'_i, x_1, \dots, x_m \in \mathcal X$, we have \begin{equation*} |f(x_1, \dots, x_{i - 1}, x_i, x_{i + 1}, \dots, x_m) - f(x_1, \dots, x_{i - 1}, x'_i, x_{i + 1}, \dots, x_m)| \leq c_i, \end{equation*} then we say $X_i$ \textit{affects} $f$ by at most $c_i$. 
\begin{comment} \item If $x_1, \dots, x_m \in \mathcal X$ and $s > 0$, an \textit{$r$-certificate} for $f, x_1, \dots, x_m,$ and $s$ is an index set $I \subseteq [m]$ of size at most $rs$ such that for every $x'_1, \dots, x'_m$ such that $x_i = x'_i$ for all $i\in I$, we have \begin{equation*} f(x'_1, \dots, x'_m) \geq s. \end{equation*} \item We say $f$ is \textit{$r$-certifiable} if for every $x_1, \dots, x_m \in \mathcal X$ and $s > 0$ such that $f(x_1, \dots, x_m) \geq s$, there is an $r$-certificate for $f, x_1, \dots, x_m$, and $s$. \end{comment} \begin{theorem}[McDiarmid's Inequality]\label{mcd} If $X_1, \dots, X_m$ are independent random variables taking values in $\mathcal X$ and $f : \mathcal X^m \rightarrow \mathbb R$ is such that $X_i$ affects $f$ by at most $c_i$ for all $i\in [m]$, then for all $t > 0$, \begin{equation*} \Prob{|f(X_1, \dots, X_m) - \Expect{f(X_1, \dots, X_m)}| \geq t} \leq \exp\left(-\frac{t^2}{\sum_{i=1}^m c^2_i}\right). \end{equation*} \end{theorem} \begin{comment} \begin{theorem}[`Talagrand's Inequality' \cite{MR14}]\label{mr tala} If $X_1, \dots, X_m$ are independent random variables taking values in $\mathcal X$ and $f : \mathcal X^m \rightarrow \mathbb R$ such that $X_i$ affects $f$ by at most $c$ for all $i\in [m]$ and $f$ is $r$-certifiable, then for any $0 \leq t \leq \Expect{X}$, \begin{equation*} \Prob{|f(X_1, \dots, X_m) - \Expect{f(X_1, \dots, X_m)}| > t + 20c\sqrt{r\Expect{f}} + 64c^2r} \leq 4\exp\left(-\frac{t^2}{8c^2r(\Expect{f} + t)}\right). \end{equation*} \end{theorem} \end{comment} \subsection{Hypergraph matchings} When we build our absorber in the proof of Lemma~\ref{main-absorber-lemma}, we seek to efficiently use the vertices, colours, and edges of our random subsets $V'\subseteq V$, $C'\subseteq C$, $E'\subseteq E$, and to do this we make use of the existence of large matchings in almost-regular hypergraphs with small codegree. 
In fact, we will need the stronger property that there exists a large matching in such a hypergraph which is well-distributed with respect to a specified collection of vertex subsets. We make this precise in the following definition. Given a hypergraph~$\mathcal{H}$ and a collection of subsets~$\mathcal{F}$ of~$V(\mathcal{H})$, we say a matching~$\mathcal{M}$ in~$\mathcal{H}$ is $(\gamma,\mathcal{F})$\textit{-perfect} if for each $F\in\mathcal{F}$, at most $\gamma\cdot\max\{|F|,|V(\mathcal{H})|^{2/5}\}$ vertices of~$F$ are left uncovered by~$\mathcal{M}$. The following theorem is a consequence of Theorem 1.2 in~\cite{AY05}, and is based on a result of Pippenger and Spencer~\cite{PS89}. \begin{theorem}\label{hypergraph-matching-thm} Suppose $1/n \ll {\varepsilon} \ll \gamma \ll 1/r$. Let $\mathcal H$ be an $r$-uniform hypergraph on $n$ vertices such that for some $D\in\mathbb N$, we have $d_{\mathcal H}(x) = (1 \pm {\varepsilon})D$ for all $x\in V(\mathcal H)$ and $\Delta^c(\mathcal H) \leq D / \log^{9r}n$. If $\mathcal F$ is a collection of subsets of $V(\mathcal H)$ such that $|\mathcal F| \leq n^{\log n}$, then there exists a $(\gamma, \mathcal F)$-perfect matching. \end{theorem} We will use Theorem~\ref{hypergraph-matching-thm} in the final step of constructing an absorber (see Lemma~\ref{linking-lemma}). We construct an auxiliary hypergraph~$\mathcal{H}$ whose edges represent structures we wish to find, and a large well-distributed matching in~$\mathcal{H}$ corresponds to an efficient allocation of vertices, colours, and edges of the $1$-factorization to construct almost all of these desired structures. We remark that this is also a key strategy in the proof of Lemma~\ref{long-rainbow-path-lemma}, and was first used in~\cite{KKKO20}. 
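To illustrate the shape of the degree and codegree estimates that arise in such applications, consider an auxiliary hypergraph~$\mathcal H$ on vertex classes $V'$ and $C'$ whose edges encode rainbow cycles of a fixed length~$\ell$ (this is a heuristic sketch only; the parameters $p$, $q$, $\beta$, $\gamma$ are as in Lemma~\ref{long-rainbow-path-lemma}). One expects, up to lower-order terms,

```latex
\begin{align*}
d_{\mathcal H}(v) &= \frac{1}{2}(1 - \gamma)^{3\ell - 1}\beta^{\ell} q^{\ell} p^{\ell - 1}n^{\ell - 1},\\
d_{\mathcal H}(c) &= \frac{1}{2}(1 - \gamma)^{3\ell - 1}\beta^{\ell} q^{\ell - 1} p^{\ell} n^{\ell - 1},\\
\Delta^{c}(\mathcal H) &\leq 2\ell^{3} n^{\ell - 2},
\end{align*}
```

so the codegree is smaller than the degrees by a factor polynomial in~$n$, which comfortably satisfies the hypothesis $\Delta^c(\mathcal H) \leq D/\log^{9r}n$ of Theorem~\ref{hypergraph-matching-thm}.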
\subsection{Robustly matchable bipartite graphs of constant degree} In this subsection, we prove that there exist large bipartite graphs which are robustly matchable as in Definition~\ref{def:rmbg}, and have constant maximum degree. \begin{defin} Let $m \in \mathbb N$. \begin{itemize} \item An $RMBG(3m, 2m, 2m)$ is a bipartite graph $H$ with bipartition $(A, B_1 \cup B_2)$ where $|A| = 3m$ and $|B_1| = |B_2| = 2m$ such that for any $B' \subseteq B_1$ of size $m$, there is a perfect matching in $H - B'$. In this case, we say $H$ is \textit{robustly matchable} with respect to $B_1$, and that~$B_{1}$ is the identified \textit{flexible set}. \item A $2RMBG(7m, 2m)$ is a bipartite graph $H$ with bipartition $(A, B)$ where $|A| = |B| = 7m$ such that $H$ is robustly matchable with respect to sets $A'\subseteq A$ and $B'\subseteq B$ where $|A'| = |B'| = 2m$. \end{itemize} \end{defin} By~\cite[Lemma~10.7]{M18}, for all sufficiently large~$m$ there exists an $RMBG(3m, 2m, 2m)$ with maximum degree at most 100. We use a one-sided (there is one flexible set) $RMBG(3m,2m,2m)$ exhibited in~\cite[Corollary~10]{GKMO20}, in which each of the vertex classes is regular, to construct a $256$-regular two-sided (in that we identify a flexible set on each side of the vertex bipartition) $2RMBG(7m,2m)$. \begin{lemma}\label{2rmbg-lemma} For all sufficiently large $m$, there is a $2RMBG(7m, 2m)$ that is $256$-regular. \end{lemma} \begin{proof} Suppose that $m\in\mathbb{N}$ is sufficiently large. By~\cite[Corollary~10]{GKMO20}, there exists an $RMBG(3m, 2m, 2m)$ that is $(256, 192)$-regular (i.e.\ all vertices in the first vertex class have degree~$256$ and all vertices in the second vertex class have degree~$192$). 
Let~$H$ and~$H'$ be two vertex-disjoint isomorphic copies of a $(256, 192)$-regular $RMBG(3m, 2m, 2m)$, and let $(A, B_1\cup B_2)$ and $(A', B'_1 \cup B'_2)$ be the bipartitions of $H$ and $H'$ respectively such that $H$ is robustly matchable with respect to $B_1$ and $H'$ is robustly matchable with respect to $B'_1$. Let $H''$ be a 64-regular bipartite graph with bipartition $(B_1 \cup B_2, B'_1 \cup B'_2)$ such that $H''[B_1 \cup B'_1]$ contains a perfect matching $M$. We claim that $H\cup H'\cup H''$ is robustly matchable with respect to $B_1$ and $B'_1$. To that end, let $X\subseteq B_1$ and $Y\subseteq B'_1$ such that $|X| = |Y| \leq m$. It suffices to show that $H\cup H'\cup H'' - (X\cup Y)$ has a perfect matching. Since $H''[B_1\cup B'_1]$ contains a perfect matching, $H''[B_1\cup B'_1] - (X\cup Y)$ contains a matching of size at least $2m - |X| - |Y| = 2(m - |X|)$. Thus, there exists a matching $M'$ in $H''[B_1\cup B'_1] - (X\cup Y)$ of size $m - |X|$. Let $X' \coloneqq X \cup (B_1\cap V(M'))$ and $Y' \coloneqq Y \cup (B'_1 \cap V(M'))$, and note that $|X'| = |Y'| = m$. Since $H$ is an $RMBG(3m, 2m, 2m)$, $H - X'$ has a perfect matching $M_1$, and similarly $H' - Y'$ has a perfect matching $M_2$. Now $M' \cup M_1 \cup M_2$ is a perfect matching in $H\cup H'\cup H'' - (X\cup Y)$, as required. Since every vertex of $A\cup A'$ has degree~$256$ in $H\cup H'$ and every vertex of $B_1\cup B_2\cup B'_1\cup B'_2$ has degree $192 + 64 = 256$ in $H\cup H'\cup H''$, the graph $H\cup H'\cup H''$ is 256-regular, and the result follows. \end{proof} \section{Constructing the absorber: proof of Lemma~\ref{main-absorber-lemma}} \label{absorption-section} Throughout this section, let $\phi$ be an ${\varepsilon}$-locally edge-resilient and $\mu$-robustly gadget-resilient 1-factorization of $K_n$ with vertex set $V$ and colour set $C$, let $E\coloneqq E(K_{n})$, and recall \begin{equation*} 1/n \ll {\varepsilon} \ll \gamma \ll \eta \ll \mu \ll 1. \end{equation*} Let $\tilde H$ be a 256-regular $2RMBG(7m, 2m)$ where $2m = (\eta - 2{\varepsilon})n$, which exists by Lemma~\ref{2rmbg-lemma}. 
We define the following probabilities: \begin{minipage}{.5\linewidth} \begin{eqnarray} p_{\mathrm{flex}} &\coloneqq& \eta,\nonumber\\ p_{\mathrm{buff}} &\coloneqq& 5\eta / 2,\nonumber\\ p_{\mathrm{abs}} &\coloneqq& 6|E(\tilde H)|/n + 2\mu,\label{slicearray}\\ p_{\mathrm{link}} &\coloneqq& 9|E(\tilde H)|/n + 3\mu,\nonumber\\ p_{\mathrm{link}}' &\coloneqq& \gamma/3,\nonumber \end{eqnarray} \end{minipage} \begin{minipage}{.5\linewidth} \begin{eqnarray*} q_{\mathrm{flex}} &\coloneqq& \eta,\nonumber\\ q_{\mathrm{buff}} &\coloneqq& 5\eta / 2,\nonumber\\ q_{\mathrm{abs}} &\coloneqq& 3|E(\tilde H)|/n + \mu,\nonumber\\ q_{\mathrm{link}} &\coloneqq& 12|E(\tilde H)|/n + 4\mu,\nonumber\\ q_{\mathrm{link}}' &\coloneqq& \gamma/3,\nonumber\\ \end{eqnarray*} \end{minipage} \begin{comment} \begin{minipage}{.5\linewidth} \begin{align*} p_{\mathrm{flex}} &\coloneqq \eta,\\ p_{\mathrm{buff}} &\coloneqq 5\eta / 2,\\ p_{\mathrm{abs}} &\coloneqq 6|E(\tilde H)|/n + 2\mu\\ p_{\mathrm{link}} &\coloneqq 9|E(\tilde H)|/n + 3\mu,\\ p_{\mathrm{link}}' &\coloneqq \gamma/3,\\ \end{align*} \end{minipage} \begin{minipage}{.5\linewidth} \begin{align*} q_{\mathrm{flex}} &\coloneqq \eta\\ q_{\mathrm{buff}} &\coloneqq 5\eta / 2,\\ q_{\mathrm{abs}} &\coloneqq 3|E(\tilde H)|/n + \mu,\\ q_{\mathrm{link}} &\coloneqq 12|E(\tilde H)|/n + 4\mu,\\ q_{\mathrm{link}}' &\coloneqq \gamma/3,\\ \end{align*} \end{minipage} \end{comment} \noindent and we let $p_{\mathrm{main}} \coloneqq 1 - p_{\mathrm{flex}} - p_{\mathrm{buff}}- p_{\mathrm{abs}} - p_{\mathrm{link}} - p'_{\mathrm{link}}$ and $q_{\mathrm{main}} \coloneqq 1 - q_{\mathrm{flex}} -q_{\mathrm{buff}}- q_{\mathrm{abs}} - q_{\mathrm{link}} - q'_{\mathrm{link}}$. Note that $p_{\mathrm{main}} = q_{\mathrm{main}}$, and let $\beta \coloneqq 1 - p_{\mathrm{main}}$. 
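For the reader's convenience, we record the arithmetic behind these choices. Since $\tilde H$ is $256$-regular with $14m$ vertices and $2m = (\eta - 2{\varepsilon})n$, we have $|E(\tilde H)| = 256\cdot 7m = 896(\eta - 2{\varepsilon})n$, and hence

```latex
\begin{align*}
\beta &= p_{\mathrm{flex}} + p_{\mathrm{buff}} + p_{\mathrm{abs}} + p_{\mathrm{link}} + p_{\mathrm{link}}'
       = \frac{7\eta}{2} + \frac{15|E(\tilde H)|}{n} + 5\mu + \frac{\gamma}{3}\\
      &= 5\mu + \frac{26887\eta}{2} + \frac{\gamma}{3} - 26880{\varepsilon},
\end{align*}
```

which agrees with the value of $p = q = \beta$ in the statement of Lemma~\ref{main-absorber-lemma}; the sum $q_{\mathrm{flex}} + q_{\mathrm{buff}} + q_{\mathrm{abs}} + q_{\mathrm{link}} + q_{\mathrm{link}}'$ is identical, since it also equals $7\eta/2 + 15|E(\tilde H)|/n + 5\mu + \gamma/3$.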
\begin{defin} An \textit{absorber partition of~$V$,~$C$, and~$K_{n}$} is defined as follows: \begin{equation}\label{absorber-random-partition} \begin{split} &V = V_{\mathrm{main}}\, \dot{\cup}\, V_{\mathrm{flex}} \,\dot{\cup}\, V_{\mathrm{buff}}\, \dot{\cup}\, V_{\mathrm{abs}} \,\dot{\cup}\, V_{\mathrm{link}} \,\dot{\cup}\, V_{\mathrm{link}}',\ \text{and}\\ &C = C_{\mathrm{main}} \,\dot{\cup}\, C_{\mathrm{flex}} \,\dot{\cup}\, C_{\mathrm{buff}} \,\dot{\cup}\, C_{\mathrm{abs}} \,\dot{\cup}\, C_{\mathrm{link}} \,\dot{\cup}\, C_{\mathrm{link}}', \end{split} \end{equation} where $V_{\mathrm{main}}$ is $p_{\mathrm{main}}$-random,~$V_{\mathrm{flex}}$ is $p_{\mathrm{flex}}$-random etc, and the sets of colours are defined analogously. Let $V' \coloneqq V\setminus V_{\mathrm{main}}$, $C' \coloneqq C\setminus C_{\mathrm{main}}$, and let $G'$ be a $\beta$-random subgraph of~$K_{n}$. \end{defin} Note that~$V'$,~$C'$, and~$G'$ satisfy~(R1)--(R3) in the statement of Lemma~\ref{main-absorber-lemma}.\COMMENT{Note that $|E(\Tilde{H})|=256\cdot7m=1792m=896\cdot2m=896(\eta-2{\varepsilon})n=896\eta n-1792{\varepsilon} n$. Write $p \coloneqq 1-p_{\text{main}}$ and $q\coloneqq 1-q_{\text{main}}$. Then clearly $p=q=\beta$,~$V'$ is $p$-random,~$C'$ is $q$-random,~$G'$ is $\beta$-random, and $p=\eta+5\eta/2+(6|E(\Tilde{H})|/n+2\mu)+(9|E(\Tilde{H})|/n+3\mu)+\gamma/3=5\mu+7\eta/2+15|E(\Tilde{H})|/n+\gamma/3=5\mu+7\eta/2+13440\eta-26880{\varepsilon} +\gamma/3=5\mu+26887\eta/2+\gamma/3-26880{\varepsilon}$.} \subsection{Overview of the proof}\label{overviewsec} We now overview our strategy for proving Lemma~\ref{main-absorber-lemma}. First we need the following definitions. A \textit{link} is a rainbow path of length 4 with internal vertices in $V_{\mathrm{link}} \cup V'_{\mathrm{link}}$, ends in $V_{\mathrm{abs}}$, and colours and edges in $C_{\mathrm{link}} \cup C'_{\mathrm{link}}$ and $G'$, respectively. 
A link with internal vertices in $V_{\mathrm{link}}$ and colours in $C_{\mathrm{link}}$ is a \textit{main link}, and a link with internal vertices in $V'_{\mathrm{link}}$ and colours in $C'_{\mathrm{link}}$ is a \textit{reserve link}. If $M$ is a matching and $\mathcal P = \{P_e\}_{e\in E(M)}$ is a collection of vertex-disjoint links such that $\bigcup_{P\in\mathcal P}P$ is rainbow and $P_{uv}$ has ends $u$ and $v$ for every $uv\in E(M)$, then $\mathcal P$ \textit{links} $M$. We aim to build a $36\gamma$-absorber $(\mathcal A, \mathcal P, T, H)$ such that $\bigcup_{A\in \mathcal A} A \cup \bigcup_{P\in\mathcal P} P \cup T$ is contained in $(V', C', G')$ with $\gamma$-bounded remainder and $H\cong \tilde H$. First, we show (see Lemma~\ref{absorbing-template-lemma}) that with high probability there is a $36\gamma$-absorbing template $H\cong \tilde H$, where \begin{itemize} \item $H$ has flexible sets $(V'_{\mathrm{flex}}, C'_{\mathrm{flex}}, G')$ and $(V'_{\text{flex}},C'_{\text{flex}})$ is contained in $(V_{\mathrm{flex}}, C_{\mathrm{flex}})$ with $3{\varepsilon}$-bounded remainder, and \item $H$ has buffer sets $V'_{\mathrm{buff}}$ and $C'_{\mathrm{buff}}$ where $(V'_{\mathrm{buff}}, C'_{\mathrm{buff}})$ is contained in $(V_{\mathrm{buff}}, C_{\mathrm{buff}})$ with $6{\varepsilon}$-bounded remainder. \end{itemize} Then, we show that with high probability, there exists an $H$-absorber $(\mathcal A, \mathcal P)$ where \begin{itemize} \item for every $vc\in E(H)$, the $(v, c)$-absorbing gadget $A_{v, c} \in \mathcal A$ uses vertices, colours, and edges in $V_{\mathrm{abs}}$, $C_{\mathrm{abs}}$, and $G'$, respectively, and \item every $P\in\mathcal P$ is a link. 
\end{itemize} In particular, if $\mathcal A = \{A_1, \dots, A_k\}$, where~$A_{i}$ is a $(v_{i},c_{i})$-absorbing gadget, then $\mathcal P$ links the matching $M_1\cup M_2 \cup M_3$, where~$V(M_1)$,~$V(M_2)$, and~$V(M_3)$ are pairwise vertex-disjoint, and \begin{itemize} \item [(M1)] $M_{1}=\{r_{1}s_{1},\dots,r_{k}s_{k}\}$, where~$r_{i}$ and~$s_{i}$ are non-adjacent vertices of the $4$-cycle in~$A_{i}$, for each $i\in[k]$, \item [(M2)] $M_{2}=\{w_{1}x_{1},\dots,w_{k}x_{k}\}$, where~$w_{i}$ is a non-$v_{i}$ vertex of the triangle in~$A_{i}$ and~$x_{i}$ is a vertex of the $4$-cycle in~$A_{i}$, for each $i\in[k]$, and \item [(M3)] $M_{3}=\{y_{1}z_{2},\dots,y_{k-1}z_{k}\}$, where~$y_{i}$ is a non-$v_{i}$ vertex of the triangle in~$A_{i}$ for each $i\in[k-1]$, and~$z_{i}$ is a vertex of the $4$-cycle in~$A_{i}$ for each $i\in[k]\setminus\{1\}$. \end{itemize} Finally, letting $V'_{\mathrm{abs}}$ and $C'_{\mathrm{abs}}$ be the vertices and colours in $V_{\mathrm{abs}}$ and $C_{\mathrm{abs}}$ not used by any $(v, c)$-absorbing gadget in $\mathcal A$, we show that with high probability there is a tail $T$ for $(\mathcal A, \mathcal P)$ where $T$ is the union of \begin{itemize} \item a rainbow matching $M$ contained in ($V'_{\mathrm{abs}}, C'_{\mathrm{abs}}, G')$ with $6{\varepsilon}$-bounded remainder and \item a collection $\mathcal T$ of vertex-disjoint links where all but one vertex in~$V(M)$ is the end of precisely one link. \end{itemize} In particular, if $E(M) = \{a_{1}b_{1}, \dots, a_{\ell} b_{\ell}\}$, then $\mathcal T$ links $M_4$, where \begin{itemize} \item [(M4)] $M_4$ is a matching of size~$\ell$ with edges $b_{i} a_{i + 1}$ for every $i \in [\ell - 1]$ and an edge $va_{1}$ where $v$ is one of the two vertices used by a gadget in $\mathcal A$ that is not in a link in $\mathcal P$. 
\end{itemize} \begin{figure} \caption{ An absorber $(\mathcal A, \mathcal P, T, H)$, where $\mathcal P$ links $\bigcup_{i=1}^3 M_i$ and $T = M\cup \bigcup_{P\in\mathcal T} P$, where $\mathcal T$ links $M_4$. Links are drawn as zigzags. } \label{fig:building-absorber} \end{figure} See Figure~\ref{fig:building-absorber}. \begin{fact}\label{factref} Suppose that~$\mathcal{A}$ satisfies~$H$. If~$\mathcal{P}\cup\mathcal{T}$ links $M_{1}\cup\dots\cup M_{4}$, where~$\mathcal{P}$ links $M_{1}\cup M_{2}\cup M_{3}$ and~$\mathcal{T}$ links~$M_{4}$, then~$\mathcal{P}$ completes~$\mathcal{A}$ and thus $(\mathcal{A},\mathcal{P})$ is an $H$-absorber. Moreover, $T\coloneqq M\cup\bigcup_{P\in\mathcal{T}}P$ is a tail of~$(\mathcal{A},\mathcal{P})$. Thus~$(\mathcal{A},\mathcal{P},T,H)$ is a $36\gamma$-absorber. \end{fact} We find these structures in the following steps. For Steps 1 and 2, see Lemma~\ref{greedy-absorber-lemma}, and for Steps 3 and 4, see Lemma~\ref{linking-lemma}. \begin{enumerate}[1)] \item First, we find the collection $\mathcal A$ of absorbing gadgets greedily, using the robust gadget-resilience property of $\phi$, \item then we greedily construct the matching $M$, using the local edge-resilience property of $\phi$. \item Next, we construct an auxiliary hypergraph in which each hyperedge corresponds to a main link and apply Theorem~\ref{hypergraph-matching-thm} to choose most of the links in $\mathcal P$, and \item finally we greedily choose the remainder of the links in $\mathcal P$ from the reserve links. 
\end{enumerate} \subsection{The absorbing template} \begin{comment} By the Chernoff Bound, with high probability, all of these sets have size within ${\varepsilon} n$ of the expected value, that is: \begin{enumerate}[(S1), topsep = 6pt] \item\label{reservoir-right-size} $|V_{\mathrm{res}}|, |C_{\mathrm{res}}| = (7\eta/2 + {\varepsilon} \pm {\varepsilon})n$, \item\label{absorbing-vtx-set-right-size} $|V_{\mathrm{abs}}| = 6|E(H)| + (2\mu + \sigma \pm {\varepsilon})n$, \item\label{absorbing-colour-set-right-size} $|C_{\mathrm{abs}}| = 3|E(H)| + (\mu \pm {\varepsilon})n$, \item\label{linking-vtx-set-right-size} $|V_{\mathrm{link}}| = 9|E(H)| + (3\mu + 3\sigma \pm {\varepsilon})n$, \item\label{linking-colour-set-right-size} $|C_{\mathrm{abs}}| = 12|E(H)| + (4\mu + 4\sigma \pm {\varepsilon})n$, \item\label{reserve-linking-vtx-set-right-size} $|V'_{\mathrm{link}}| = (\gamma \pm {\varepsilon})n$, and \item\label{reserve-linking-colour-set-right-size} $|C_{\mathrm{abs}}| = (\gamma \pm {\varepsilon})n$. \end{enumerate} Since $|V(H)| = 14m \leq 7\eta n$ and by~\ref{reservoir-right-size}, $|V_{\mathrm{res}}|, |C_{\mathrm{res}}| \geq 7\eta n/2$, we can choose $V'_{\mathrm{res}} \subseteq V_{\mathrm{res}}$ and $C'_{\mathrm{res}}\subseteq C_{\mathrm{res}}$ of size $7m$ to identify with the bipartition of $H$. For the remainder of the section, we assume $H$ has bipartition $(V'_{\mathrm{res}}, C'_{\mathrm{res}})$ where $V'_{\mathrm{res}}\subseteq V_{\mathrm{res}}$ and $C'\subseteq C_{\mathrm{res}}$. \end{comment} \begin{lemma}\label{absorbing-template-lemma} Consider an absorber partition of~$V$,~$C$, and~$K_{n}$. 
With high probability, there exists a $36\gamma$-absorbing template $H\cong \tilde H$, where \begin{enumerate}[(\ref{absorbing-template-lemma}.1), topsep = 6pt] \item\label{template-flexible-sets} $H$ has flexible sets $(V'_{\mathrm{flex}}, C'_{\mathrm{flex}}, G')$ where~$(V'_{\text{flex}},C'_{\text{flex}})$ is contained in $(V_{\mathrm{flex}}, C_{\mathrm{flex}})$ with $3{\varepsilon}$-bounded remainder, and \item\label{template-buffer-sets} $H$ has buffer sets $V'_{\mathrm{buff}}$ and $C'_{\mathrm{buff}}$ where $(V'_{\mathrm{buff}}, C'_{\mathrm{buff}})$ is contained in $(V_{\mathrm{buff}}, C_{\mathrm{buff}})$ with $6{\varepsilon}$-bounded remainder. \end{enumerate} \end{lemma} \begin{proof} For convenience, let $p \coloneqq p_{\mathrm{flex}}$ and $q \coloneqq q_{\mathrm{flex}}$. We claim that the following holds with high probability: \begin{enumerate}[(a)] \item\label{flexible-sets-right-size} $|V_{\mathrm{flex}}|, |C_{\mathrm{flex}}| = (\eta \pm {\varepsilon})n$, \item\label{buffer-sets-right-size} $|V_{\mathrm{buff}}|, |C_{\mathrm{buff}}| = (5\eta/2 \pm {\varepsilon})n$, and \item\label{many-covers-survive} for every distinct $u, v\in V$ and $c \in C$, there are at least $p^3q^3\beta^4 n^2 / 4$ $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G')$-covers of $u$, $v$, and $c$. \end{enumerate} Indeed,~\ref{flexible-sets-right-size} and~\ref{buffer-sets-right-size} follow from the Chernoff Bound (Lemma~\ref{chernoff bounds}). To prove~\ref{many-covers-survive}, for each $u, v$, and $c$, we apply McDiarmid's Inequality (Theorem~\ref{mcd}). Consider the random variable $f$ counting the number of $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G')$-covers of~$u$,~$v$, and~$c$. 
Note that~$f$ is determined by the following independent binomial random variables: $\{X_z\}_{z\in V}$, where $X_z$ indicates if $z\in V_{\mathrm{flex}}$, $\{X_{c'}\}_{c'\in C}$, where $X_{c'}$ indicates if $c'\in C_{\mathrm{flex}}$, and for each edge $e$, the random variable $X_e$ which indicates if $e\in E(G')$. We claim there are at least $2(n/2 - 2)(n - 7)$ $(V, C, K_{n})$-covers of $u$, $v$, and $c$. To that end, let $u'w$ be a $c$-edge with $u',w\in V\setminus\{u,v\}$. There are at least $n - 7$ vertices $v'\in V\setminus \{u, v, u', w\}$ such that $\phi(vv'), \phi(wv')\notin \{\phi(uu'), c\}$, and for each such vertex $v'$ the path~$uu'wv'v$ is a $(V, C, K_{n})$-cover of $u$, $v$, and $c$. Similarly, there are at least~$n-7$ $(V,C,K_{n})$-covers of the form~$uwu'v'v$. Altogether this gives at least $2(n/2 - 2)(n - 7) \geq n^2/2$ $(V, C, K_{n})$-covers of $u$, $v$, and $c$, as claimed. Therefore $\Expect{f} \geq p^3q^3\beta^4 n^2/2$. For each $z\in V$, $X_z$ affects $f$ by at most $3n$, and $X_{uz}$ and $X_{vz}$ each affect $f$ by at most $n$, and for each $c'\in C$, $X_{c'}$ affects $f$ by at most $3n$. For each edge $e$ not incident to $u$ or $v$, if $e$ is a $c$-edge, then $X_e$ affects $f$ by at most $2n$, and otherwise $X_e$ affects $f$ by at most two. Thus, by McDiarmid's Inequality applied with $t = \Expect{f}/2$, there are at least $p^3q^3\beta^4 n^2/4$ $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G')$-covers of $u$, $v$, and $c$ with probability at least $1 - \exp\left(- p^{6}q^{6}\beta^{8}n^4 / O(n^3)\right)$. Hence, by a union bound,~\ref{many-covers-survive} also holds with high probability. Now we assume~\ref{flexible-sets-right-size}--\ref{many-covers-survive} hold, and we show there exists a $36\gamma$-absorbing template $H\cong \tilde H$ satisfying~\ref{template-flexible-sets} and~\ref{template-buffer-sets}. 
Since $m= (\eta/2 - {\varepsilon})n$, by~\ref{flexible-sets-right-size} and~\ref{buffer-sets-right-size}, there exists $V'_{\mathrm{flex}} \subseteq V_{\mathrm{flex}}$, $C'_{\mathrm{flex}} \subseteq C_{\mathrm{flex}}$, $V'_{\mathrm{buff}} \subseteq V_{\mathrm{buff}}$, and $C'_{\mathrm{buff}} \subseteq C_{\mathrm{buff}}$, such that $|V'_{\mathrm{flex}}|, |C'_{\mathrm{flex}}| = 2m$ and $|V'_{\mathrm{buff}}|, |C'_{\mathrm{buff}}| = 5m$, which we choose arbitrarily, and moreover, $|V_{\mathrm{flex}}\setminus V'_{\mathrm{flex}}|, |C_{\mathrm{flex}}\setminus C'_{\mathrm{flex}}| \leq 3{\varepsilon} n$ and $|V_{\mathrm{buff}}\setminus V'_{\mathrm{buff}}|, |C_{\mathrm{buff}}\setminus C'_{\mathrm{buff}}| \leq 6{\varepsilon} n$, as required. Choose bijections from $V'_{\mathrm{flex}}, C'_{\mathrm{flex}}$, $V'_{\mathrm{buff}}$, and $C'_{\mathrm{buff}}$ to the flexible sets and the buffer sets of $\tilde H$ arbitrarily, and let $H\cong \tilde H$ be the corresponding graph. Now $H$ satisfies~\ref{template-flexible-sets} and~\ref{template-buffer-sets}, as required, so it remains to show that $H$ is a $36\gamma$-absorbing template. Since each vertex or colour in $V_{\mathrm{flex}}$ or $C_{\mathrm{flex}}$ is in at most $3n$ $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G')$-covers of $u$, $v$, and $c$,~\ref{flexible-sets-right-size} and~\ref{many-covers-survive} imply that there are at least $p^3q^3\beta^4 n^2 / 4 - 18{\varepsilon} n^2 \geq 36\gamma n^2$ $(V'_{\mathrm{flex}}, C'_{\mathrm{flex}}, G')$-covers of $u$, $v$, and $c$, so $H$ is a $36\gamma$-absorbing template, as desired. \end{proof} \subsection{Greedily building an $H$-absorber} \begin{lemma}\label{greedy-absorber-lemma} Consider an absorber partition of~$V$,~$C$, and~$K_{n}$. The following holds with high probability. Suppose $V_{\text{res}}\subseteq V_{\text{flex}}\cup V_{\text{buff}}$ and $C_{\text{res}}\subseteq C_{\text{flex}}\cup C_{\text{buff}}$. 
For every graph $H\cong \tilde H$ with bipartition $(V_{\mathrm{res}}, C_{\mathrm{res}})$, there exists \begin{enumerate}[(\ref{greedy-absorber-lemma}.1), topsep = 6pt] \item\label{greedily-choosing-absorber} a collection $\mathcal{A}=\{A_{vc}\colon vc\in E(H)\}$ such that~$\mathcal{A}$ satisfies~$H$ and such that for all $A_{vc}\in\mathcal{A}$ we have that~$A_{vc}$ uses vertices, colours, and edges in $V_{\mathrm{abs}}$, $C_{\mathrm{abs}}$, and $G'$ respectively, and \item\label{greedily-choosing-matching} a rainbow matching $M$ contained in $(V'_{\mathrm{abs}}, C'_{\mathrm{abs}}, G')$ with $5{\varepsilon}$-bounded remainder, where $V'_{\text{abs}}$ and~$C'_{\text{abs}}$ are the sets of vertices and colours in~$V_{\text{abs}}$ and~$C_{\text{abs}}$ not used by any absorbing gadget in~$\mathcal{A}$. \end{enumerate} \end{lemma} \begin{proof} For convenience, let $p \coloneqq p_{\mathrm{abs}}$ and $q \coloneqq q_{\mathrm{abs}}$ in this proof. Since~$\phi$ is $\mu$-robustly gadget-resilient, for every $v\in V$, $c\in C$, there is a collection~$\mathcal{A}_{v,c}$ of precisely\COMMENT{Helps to simplify the calculation where we bound $\prob{\mathcal{E}_{v,c}}$, as~$|\mathcal{A}_{v,c}|$ appears on both top and bottom.}~$2^{-23}\mu^{4}n^{2}$ $(v,c)$-absorbing gadgets such that every vertex, every colour, and every edge is used by at most~$5\mu n/4$ of the $A\in\mathcal{A}_{v,c}$. (Recall from Definition~\ref{def:absorbing-gadget} that a $(v,c)$-absorbing gadget does not use~$v$ and~$c$.) Fix $v\in V$, $c\in C$. The expected number of the $(v,c)$-absorbing gadgets in~$\mathcal{A}_{v,c}$ using only vertices in~$V_{\text{abs}}$, colours in~$C_{\text{abs}}$, and edges in~$G'$ is~$p^{6}q^{3}\beta^{7}|\mathcal{A}_{v,c}|$. Let~$\mathcal{E}_{v,c}$ be the event that fewer than~$p^{6}q^{3}\beta^{7}|\mathcal{A}_{v,c}|/2$ of the $(v,c)$-absorbing gadgets in~$\mathcal{A}_{v,c}$ use only vertices in~$V_{\text{abs}}$, colours in~$C_{\text{abs}}$ and edges in~$G'$. 
We claim that $\prob{\mathcal{E}_{v,c}}\leq\exp(-2^{-51}p^{12}q^{6}\beta^{14}\mu^{6}n)$. To see this, for each $u\in V$, $d\in C$, $e\in E$, let~$m_{u}$,~$m_{d}$, and~$m_{e}$ denote the number of $(v,c)$-absorbing gadgets in~$\mathcal{A}_{v,c}$ using~$u$,~$d$, and~$e$, respectively. We will apply McDiarmid's Inequality (Theorem~\ref{mcd}) to the function~$f_{v,c}$ which counts the number of $A\in\mathcal{A}_{v,c}$ using only vertices in~$V_{\text{abs}}$, colours in~$C_{\text{abs}}$, and edges in~$G'$. We use independent indicator random variables $\{X_{u}\}_{u\in V}\cup\{X_{d}\}_{d\in C}\cup\{X_{e}\}_{e\in E}$ which indicate whether or not a vertex~$u$ is in~$V_{\text{abs}}$, a colour~$d$ is in~$C_{\text{abs}}$, and an edge~$e$ is in~$G'$. Each random variable~$X_{u}$,~$X_{d}$,~$X_{e}$ affects~$f_{v,c}$ by at most~$m_{u}$,~$m_{d}$,~$m_{e}$, respectively. Since $m_{u}\leq 5\mu n/4$ for all $u\in V$ and $m_{d}\leq 5\mu n/4$ for all $d\in C$, we have $\sum_{u\in V}m_{u}^{2}$, $\sum_{d\in C}m_{d}^{2}\leq 25\mu^{2}n^{3}/16$. Since $\sum_{e\in E}m_{e}=7|\mathcal{A}_{v,c}|$ and $m_{e}\leq 5\mu n/4$ for all $e\in E$, it follows that $\sum_{e\in E}m_{e}^{2}\leq 35\mu n|\mathcal{A}_{v,c}|/4$. Therefore, by McDiarmid's Inequality, we have \[ \prob{\mathcal{E}_{v,c}}\leq\exp\left(-\frac{p^{12}q^{6}\beta^{14}|\mathcal{A}_{v,c}|^{2}/4}{25\mu^{2}n^{3}/8 + 35\mu n|\mathcal{A}_{v,c}|/4}\right)\leq\exp(-2^{-51}p^{12}q^{6}\beta^{14}\mu^{6}n), \] as claimed. Thus, by a union bound, the probability that there exist $v\in V$, $c\in C$ such that~$\mathcal{E}_{v,c}$ holds is at most $\exp(-2^{-52}p^{12}q^{6}\beta^{14}\mu^{6}n)$. 
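For concreteness, the second inequality in the displayed bound above is a routine calculation: substituting $|\mathcal{A}_{v,c}|=2^{-23}\mu^{4}n^{2}$, the numerator equals $2^{-48}p^{12}q^{6}\beta^{14}\mu^{8}n^{4}$, and since $35\mu n|\mathcal{A}_{v,c}|/4=35\cdot 2^{-25}\mu^{5}n^{3}\leq\mu^{2}n^{3}/8$, the denominator is at most $26\mu^{2}n^{3}/8\leq 4\mu^{2}n^{3}$. Hence
\[
\frac{p^{12}q^{6}\beta^{14}|\mathcal{A}_{v,c}|^{2}/4}{25\mu^{2}n^{3}/8 + 35\mu n|\mathcal{A}_{v,c}|/4}
\geq \frac{2^{-48}p^{12}q^{6}\beta^{14}\mu^{8}n^{4}}{4\mu^{2}n^{3}}
= 2^{-50}p^{12}q^{6}\beta^{14}\mu^{6}n
\geq 2^{-51}p^{12}q^{6}\beta^{14}\mu^{6}n.
\]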
We claim the following holds with high probability: \begin{enumerate}[(a)] \item\label{absorbing-sets-right-size} $|V_{\mathrm{abs}}| = (p \pm {\varepsilon})n$ and $|C_{\mathrm{abs}}| = (q \pm {\varepsilon})n$; \item\label{many-gadgets-survive} for every $v\in V$, $c\in C$, the event~$\mathcal{E}_{v,c}$ does not hold; \item\label{weak-pseudorandomness-in-slice} for every $V^\circ\subseteq V_{\mathrm{abs}}$ and $C^\circ \subseteq C_{\mathrm{abs}}$ such that $|V^\circ|, |C^\circ| \geq {\varepsilon} n$, there are at least $\beta {\varepsilon}^3 n^2/ 200$ edges in $G'$ with both ends in $V^\circ$ and a colour in $C^\circ$. \end{enumerate} Indeed,~\ref{absorbing-sets-right-size} holds by the Chernoff Bound (Lemma~\ref{chernoff bounds}), we have already shown~\ref{many-gadgets-survive}, and since $\phi$ is ${\varepsilon}$-locally edge-resilient,~\ref{weak-pseudorandomness-in-slice} holds by applying the Chernoff Bound for each $V^\circ$ and $C^\circ$ and using a union bound. Now we assume that~\ref{absorbing-sets-right-size}--\ref{weak-pseudorandomness-in-slice} hold\COMMENT{i.e. fix an outcome of all the random slicing in which~\ref{absorbing-sets-right-size}-\ref{weak-pseudorandomness-in-slice} hold. If there are no $H\cong \tilde H$ with bipartition $(V_{\mathrm{res}}, C_{\mathrm{res}})$ contained in $(V_{\mathrm{flex}}\cup V_{\mathrm{buff}}, C_{\mathrm{flex}} \cup C_{\mathrm{buff}})$, then the desired conclusion holds vacuously for such an outcome. 
Otherwise, fix any $H\cong \tilde H$ with bipartition $(V_{\mathrm{res}}, C_{\mathrm{res}})$ contained in $(V_{\mathrm{flex}}\cup V_{\mathrm{buff}}, C_{\mathrm{flex}} \cup C_{\mathrm{buff}})$, and we use~\ref{absorbing-sets-right-size}-\ref{weak-pseudorandomness-in-slice} to find the necessary extra structure for~$H$.}, we suppose $H\cong \tilde H$ has bipartition $(V_{\mathrm{res}}, C_{\mathrm{res}})$ contained in $(V_{\mathrm{flex}}\cup V_{\mathrm{buff}}, C_{\mathrm{flex}}\cup C_{\mathrm{buff}})$, and we show that~\ref{greedily-choosing-absorber} and~\ref{greedily-choosing-matching} hold. Arbitrarily order the edges of $H$ as $e_1, \dots, e_{|E(H)|}$. Let $i\in[|E(H)|]$ and suppose that for each $j<i$ we have found a $(v_{j},c_{j})$-absorbing gadget~$A_{j}$, where $e_{j}=v_{j}c_{j}$, and further, the collection~$\{A_{1},\dots,A_{i-1}\}$ satisfies the spanning subgraph of~$H$ containing precisely the edges~$e_{1},\dots,e_{i-1}$. Writing $e_{i}=v_{i}c_{i}$, by~\ref{many-gadgets-survive} there is a collection~$\mathcal{A}_{v_{i},c_{i}}^{\text{abs}}$ of at least $2^{-24}p^{6}q^{3}\beta^{7}\mu^{4}n^{2}$ $(v_{i},c_{i})$-absorbing gadgets each using only $V_{\text{abs}}$-vertices,~$C_{\text{abs}}$-colours, and~$G'$-edges, and moreover, each vertex in~$V_{\text{abs}}$, colour in~$C_{\text{abs}}$, and edge in~$G'$ is used by at most~$5\mu n/4$ of the $A\in\mathcal{A}_{v_{i},c_{i}}^{\text{abs}}$. Thus, at most\COMMENT{Each of the $A_{j}\in\{A_{1},\dots,A_{i-1}\}$ chosen thus far has $6$ $V_{\text{abs}}$-vertices, $3$ $C_{\text{abs}}$-colours, and $7$ $G'$-edges. Thus for each $A_{j}$ we must remove at most $(6+3+7)\cdot 5\mu n/4=20\mu n$ of the gadgets in $\mathcal{A}_{v_{i},c_{i}}^{\text{abs}}$ from consideration.} $20\mu n\cdot i\leq 20\mu n|E(H)|\leq 17920\eta\mu n^{2}$ of the $(v_{i},c_{i})$-absorbing gadgets in~$\mathcal{A}_{v_{i},c_{i}}^{\text{abs}}$ use a vertex, colour, or edge used by any of the~$A_{j}$ for $j<i$. 
Since $|\mathcal{A}_{v_{i},c_{i}}^{\text{abs}}|\geq 2^{-24}p^{6}q^{3}\beta^{7}\mu^{4}n^{2}$, we conclude\COMMENT{Recall $\eta\ll\mu$ and that $p,q,\beta=\Theta(\mu)$.} that there is at least one $(v_{i},c_{i})$-absorbing gadget $A\in\mathcal{A}_{v_{i},c_{i}}^{\text{abs}}$ using vertices, colours, and edges which are disjoint from the vertices, colours, and edges used by~$A_{j}$, for all $j<i$. We arbitrarily choose such an~$A$ to be~$A_{i}$. Continuing in this way, it is clear that $\mathcal{A}\coloneqq\{A_{i}\}_{i=1}^{|E(H)|}$ satisfies~$H$, so~\ref{greedily-choosing-absorber} holds. Now we prove~\ref{greedily-choosing-matching}. \begin{comment} First we claim that for every colour $c \in C_{\mathrm{abs}}$, there is a matching $M_c$ of size at least $p^2\beta n / 4$ contained in $(V_{\mathrm{abs}}, \{c\}, G'_{\mathrm{abs}})$ with high probability. To prove this claim, we use a Union Bound in conjunction with Talagrand's inequality applied for each colour $c$ to the random variable $f$ counting the number of edges contained in $(V_{\mathrm{abs}}, \{c\}, G_{\mathrm{abs}})$ with independent binomial random variables $\{X_v\}_{v\in V}$ and $\{X_e\}_{e\in G}$ which indicate whether or not a vertex $v$ or an edge $e$ is in $V_{\mathrm{abs}}$ and $G_{\mathrm{abs}}$, respectively. Note that these edges form a matching since $\phi$ is a 1-factorization. Each random variable $X_v$ and $X_e$ affects $f$ by at most one, and $f$ is 3-certifiable. Since $\Expect{f} = p^2\beta n / 2$, by Talagrand's Inequality with $r = 3$, $c = 1$, and $t = p^2\beta n/4$, the probability that there are less than $p^2\beta n / 4$ edges contained in $(V_{\mathrm{abs}}, \{c\}, G_{\mathrm{abs}})$ is at most \begin{equation*} 4\exp\left(- \frac{p^4\beta^2 n / 16}{32(3p^2\beta n^2/4)}\right) \leq 4\exp\left(-p^2\beta n / 384\right). 
\end{equation*} By the Union Bound, for every colour $c\in C_{\mathrm{abs}}$, we have a matching $M_c$ of size at least $p^2 \beta n / 4$ with probability at least $1 - 2n\exp\left(-p^2\beta n / 384\right) = 1 - o(1)$, as claimed. Now we assume this event holds. \end{comment} Let $V'_{\mathrm{abs}}$ and $C'_{\mathrm{abs}}$ be the sets of vertices and colours in $V_{\mathrm{abs}}$ and $C_{\mathrm{abs}}$, respectively, not used by any $(v, c)$-absorbing gadget in $\mathcal A$. By~\ref{absorbing-sets-right-size} and~(\ref{slicearray}), we have $|V'_{\mathrm{abs}}| = (2\mu \pm {\varepsilon})n$ and $|C'_{\mathrm{abs}}| = (\mu \pm {\varepsilon})n$. Thus, by~\ref{weak-pseudorandomness-in-slice}, we can greedily choose a rainbow matching $M$ in $(V'_{\mathrm{abs}}, C'_{\mathrm{abs}}, G')$ of size at least $(\mu - 2{\varepsilon})n$, and $M$ satisfies~\ref{greedily-choosing-matching}. \end{proof} \subsection{Linking} Lastly, we need the following lemma, inspired by~\cite[Lemma 20]{GKMO20}, which we use both to complete the set of absorbing gadgets obtained by Lemma~\ref{greedy-absorber-lemma} to an $H$-absorber and to construct its tail. Recall that links were defined at the beginning of Section~\ref{overviewsec}. \begin{lemma}\label{linking-lemma} Consider an absorber partition of~$V$,~$C$, and~$K_{n}$. The following holds with high probability. For every matching $M$ such that $V(M)\subseteq V_{\mathrm{abs}}$ and $|V_{\mathrm{abs}}\setminus V(M)| \leq {\varepsilon} n$, there exists a collection $\mathcal P$ of links in~$G'$ such that \begin{enumerate}[(\ref{linking-lemma}.1), topsep = 6pt] \item\label{linking-matching} $\mathcal P$ links $M$ and \item\label{links-bounded-remainder} $\bigcup_{P\in\mathcal{P}}P\setminus V(M)$ is contained in $(V_{\mathrm{link}} \cup V'_{\mathrm{link}}, C_{\mathrm{link}} \cup C'_{\mathrm{link}}, G')$ with $\gamma/2$-bounded remainder. 
\end{enumerate} \end{lemma} \begin{proof} We choose a new constant $\delta$ such that ${\varepsilon} \ll \delta \ll \gamma$. For convenience, let $p \coloneqq p_{\mathrm{link}}$ and $q \coloneqq q_{\mathrm{link}}$, let~$G_1$ be the spanning subgraph of~$G'$ consisting of edges with a colour in $C_{\mathrm{link}}$, and let $G_2$ be the spanning subgraph of~$G'$ consisting of edges with a colour in $C'_{\mathrm{link}}$. First we claim that with high probability the following holds: \begin{enumerate}[(a)] \item\label{linking-sets-right-size} $|V_{\mathrm{link}}| = (p \pm {\varepsilon})n$, $|C_{\mathrm{link}}| = (q \pm {\varepsilon})n$, $|V'_{\mathrm{link}}| = (\gamma/3 \pm {\varepsilon})n$, and $|C'_{\mathrm{link}}| = (\gamma/3 \pm {\varepsilon})n$, \item\label{absorbing-vtcs-right-size} $|V_{\mathrm{abs}}| = (1 \pm {\varepsilon})p_{\mathrm{abs}} n = (1 \pm {\varepsilon})2pn/ 3$, \item\label{linking-neighborhoods-right-size} for all $v\in V$, we have \begin{enumerate}[(i)] \item $|N_{G_1}(v) \cap V_{\mathrm{abs}}| = (1 \pm {\varepsilon})p_{\mathrm{abs}}\beta qn = (1 \pm {\varepsilon})2p\beta qn / 3$ and \item $|N_{G_1}(v) \cap V_{\mathrm{link}}| = (1 \pm {\varepsilon})p\beta q n$, \end{enumerate} \item\label{right-number-of-c-edges} for all $c \in C$, we have \begin{enumerate}[(i)] \item $|E_{G'}^c(V_{\mathrm{abs}}, V_{\mathrm{link}})| = (1 \pm {\varepsilon})p_{\mathrm{abs}}p\beta n = (1 \pm {\varepsilon})2p^2\beta n/3$ and \item $|E^c(G'[V_{\mathrm{link}}])| = (1 \pm {\varepsilon})p^2\beta n/2$, \end{enumerate} \item\label{common-linking-nbrhood-right-size}for all distinct $u,v\in V$, we have $|N_{G_1}(u)\cap N_{G_1}(v)\cap V_{\mathrm{link}}| = (1 \pm {\varepsilon})p\beta^2q^2 n$, and \item\label{common-linking-nbrhood-large-reserve}for all $u,v\in V$ we have $|N_{G_2}(u) \cap N_{G_2}(v) \cap V'_{\mathrm{link}}| \geq \gamma^6 n$. \end{enumerate} Indeed~\ref{linking-sets-right-size}--\ref{right-number-of-c-edges} follow from~(\ref{slicearray}) and the Chernoff Bound. 
We prove~\ref{common-linking-nbrhood-right-size} and~\ref{common-linking-nbrhood-large-reserve} using McDiarmid's Inequality. To prove~\ref{common-linking-nbrhood-right-size}, for each $u, v \in V$, we apply McDiarmid's Inequality to the random variable $f$ counting $|N_{G_{1}}(u)\cap N_{G_{1}}(v)\cap V_{\mathrm{link}}|$ with respect to independent binomial random variables $\{X_w, X_{uw}, X_{vw}\}_{w\in V}$ and $\{X_c\}_{c\in C}$, where $X_w$ indicates if $w \in V_{\mathrm{link}}$, $X_{uw}$ and $X_{vw}$ indicate if the edges $uw$ and $vw$ respectively are in~$G'$, and~$X_{c}$ indicates if $c \in C_{\mathrm{link}}$. For each $w\in V$, $X_w$, $X_{uw}$, and $X_{vw}$ affect $f$ by at most one, and for each $c\in C$, $X_c$ affects $f$ by at most two. Thus, by McDiarmid's Inequality with $t = {\varepsilon} \Expect{f}/2$, we have $|N_{G_{1}}(u) \cap N_{G_{1}}(v)\cap V_{\text{link}}| = (1 \pm {\varepsilon})p\beta^2q^2 n$ with probability at least $1 - \exp\left(-({\varepsilon} p\beta^2q^2 n/2)^2 / 7n\right)$. By a union bound,~\ref{common-linking-nbrhood-right-size} also holds with high probability. The proof of~\ref{common-linking-nbrhood-large-reserve} is similar, so we omit it. Now we assume~\ref{linking-sets-right-size}--\ref{common-linking-nbrhood-large-reserve} hold\COMMENT{i.e. fix an outcome of all the random slicing in which~\ref{linking-sets-right-size}-\ref{common-linking-nbrhood-large-reserve} hold. If there is no~$M$ as in the hypothesis of the lemma then the conclusion holds vacuously. Otherwise we fix any such~$M$ and continue.}, we suppose $M$ is a matching such that $V(M)\subseteq V_{\mathrm{abs}}$ and $|V_{\mathrm{abs}}\setminus V(M)| \leq {\varepsilon} n$, and we show that~\ref{linking-matching} and~\ref{links-bounded-remainder} hold with respect to $M$. Since $|V_{\mathrm{abs}}\setminus V(M)| \leq {\varepsilon} n$,~\ref{absorbing-vtcs-right-size} implies that \begin{equation} \label{eq:matching-right-size} |V(M)| = (1 \pm \sqrt{\varepsilon})2pn/3. 
\end{equation} We apply Theorem~\ref{hypergraph-matching-thm} to the following 8-uniform hypergraph $\mathcal H$: the vertex-set is $E(M) \cup V_{\mathrm{link}} \cup C_{\mathrm{link}}$, and for every $xy \in E(M)$, $v_1, v_2, v_3 \in V_{\mathrm{link}}$, and $c_1, c_2, c_3, c_4 \in C_{\mathrm{link}}$, $\mathcal H$ contains the hyperedge $\{xy, v_1, v_2, v_3, c_1, c_2, c_3, c_4\}$ if there is a main link $P$ such that \begin{itemize} \item $P$ has ends $x$ and $y$, \item $v_1$, $v_2$, and $v_3$ are the internal vertices in $P$, and \item $\phi(P) = \{c_1, c_2, c_3, c_4\}$. \end{itemize} \begin{claim}\label{linklemc1} $d_{\mathcal H}(v) = (1 \pm 2\sqrt{{\varepsilon}})p^3\beta^4q^4 n^3$ for all $v\in V(\mathcal H)$. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} Let $xy \in E(M)$. By~\ref{linking-sets-right-size}, there are $(1 \pm {\varepsilon})pn$ vertices $v_1 \in V_{\mathrm{link}}$ that can be in a link $P$ with ends $x$ and $y$ corresponding to a hyperedge in $\mathcal H$, where $v_1$ is not adjacent to $x$ or $y$. For each such $v_1\in V_{\mathrm{link}}$, by~\ref{common-linking-nbrhood-right-size}, there are $(1 \pm {\varepsilon})p\beta^2 q^2 n$ choices for the vertex in $V_{\mathrm{link}}$ adjacent to both $x$ and $v_1$ in $P$, and for each such $v_2 \in V_{\mathrm{link}}$, again by~\ref{common-linking-nbrhood-right-size}, there are $(1 \pm 2{\varepsilon})p\beta^2 q^2 n$ choices for the vertex in $V_{\mathrm{link}}$ adjacent to both $v_1$ and $y$ in $P$ such that $P$ is a main link. Thus, $d_{\mathcal H}(xy) = (1 \pm 5{\varepsilon})p^3\beta^4 q^4 n^3$, as required. Now let $v_1 \in V_{\mathrm{link}}$. First, we count the number of hyperedges in $\mathcal H$ containing $v_1$ corresponding to a link $P$ where $v_1$ is adjacent to one of the ends. 
By~\ref{linking-neighborhoods-right-size}, and since $|V_{\mathrm{abs}}\setminus V(M)| \leq {\varepsilon} n$, there are $(1 \pm \sqrt{\varepsilon})2p\beta qn/3$ choices of the vertex $x \in V(M)$ adjacent to $v_1$ in $P$. For each such $x$, again by~\ref{linking-neighborhoods-right-size}, there are $(1 \pm 2{\varepsilon})p\beta q n$ choices of the vertex $v_2 \in V_{\mathrm{link}}$ adjacent to $y$ in $P$ where $xy\in E(M)$. For each such $v_2 \in V_{\mathrm{link}}$, by~\ref{common-linking-nbrhood-right-size}, there are $(1 \pm 2{\varepsilon})p\beta^2 q^2 n$ choices of the vertex $v_3\in V_{\mathrm{link}}$ adjacent to both $v_1$ and $v_2$ in $P$. Thus, the number of hyperedges in $\mathcal H$ containing $v_1$ corresponding to a link where $v_1$ is adjacent to one of the ends is $(1 \pm 2\sqrt{\varepsilon})2p^3\beta^4 q^4n^3/3$. Next, we count the number of hyperedges in $\mathcal H$ containing $v_1$ corresponding to a link $P$ where $v_1$ is not adjacent to one of the ends. By~\eqref{eq:matching-right-size}, there are $(1 \pm \sqrt{{\varepsilon}})pn/3$ choices for the edge $xy \in E(M)$ where $x$ and $y$ are the ends of $P$. For each such $xy\in E(M)$, by~\ref{common-linking-nbrhood-right-size}, there are $(1 \pm {\varepsilon})p\beta^2 q^2 n$ choices of the vertex $v_2 \in V_{\mathrm{link}}$ such that $v_2$ is adjacent to $x$ and $v_1$ in $P$, and again by~\ref{common-linking-nbrhood-right-size}, for each such $v_2\in V_{\mathrm{link}}$, there are $(1 \pm 2{\varepsilon})p\beta^2 q^2n$ choices of the vertex $v_3\in V_{\mathrm{link}}$ adjacent to both $y$ and $v_1$ in $P$. 
Thus, the number of hyperedges in $\mathcal H$ containing $v_1$ corresponding to a link where $v_1$ is not adjacent to one of the ends is $(1 \pm 2\sqrt{{\varepsilon}})p^3\beta^4q^4 n^3/3$, so \begin{equation*} d_{\mathcal H}(v_1) = (1 \pm 2\sqrt{\varepsilon})(2p^3\beta^4 q^4 n^3/3) + (1 \pm 2\sqrt{{\varepsilon}})p^3\beta^4q^4 n^3/3 = (1 \pm 2\sqrt{{\varepsilon}})p^3\beta^4q^4 n^3, \end{equation*} as required. Now let $c_1 \in C_{\mathrm{link}}$. First we count the number of hyperedges in $\mathcal H$ containing $c_1$ corresponding to a link $P$ where $c_1$ is the colour of one of the edges incident to an end of $P$. By~\ref{right-number-of-c-edges}, and since $|V_{\mathrm{abs}}\setminus V(M)| \leq {\varepsilon} n$, there are $(1 \pm \sqrt{\varepsilon})2p^2\beta n/3$ choices of the edge $xv_1$ in $P$ where $x\in V(M)$ is an end of $P$ and $\phi(xv_1) = c_1$. For each such edge $xv_1$, by~\ref{linking-neighborhoods-right-size}, there are $(1\pm 2{\varepsilon}) p\beta q n$ choices of the vertex $v_2\in V_{\mathrm{link}}$ adjacent to $y$ in $P$ where $xy \in E(M)$. For each such vertex $v_2$, by~\ref{common-linking-nbrhood-right-size}, there are $(1 \pm 2{\varepsilon})p\beta^2q^2 n$ choices of the vertex $v_3$ adjacent to both $v_1$ and $v_2$ in $P$. Thus, the number of hyperedges in $\mathcal H$ containing $c_1$ corresponding to a link where $c_1$ is the colour of one of the edges incident to an end of $P$ is $(1 \pm 2\sqrt{\varepsilon})2p^4\beta^4 q^3 n^3/ 3$. Next, we count the number of hyperedges in $\mathcal H$ containing $c_1$ corresponding to a link $P$ where $c_1$ is the colour of one of the edges with both ends in $V_{\mathrm{link}}$. By~\ref{right-number-of-c-edges}, there are $(1 \pm {\varepsilon})p^2 \beta n/2$ choices for the edge $v_1v_2$ in $P$ such that $\phi(v_1v_2) = c_1$, and thus $(1\pm {\varepsilon})p^2 \beta n$ choices for the edge if we assume $v_1$ is adjacent to an end in $P$. 
For each such edge $v_1v_2$, by~\ref{linking-neighborhoods-right-size}, and since $|V_{\mathrm{abs}}\setminus V(M)| \leq {\varepsilon} n$, there are $(1 \pm \sqrt{\varepsilon})2p\beta q n/3$ choices for the vertex $x \in V(M)$ adjacent to $v_1$ in $P$. For each such vertex $x$, by~\ref{common-linking-nbrhood-right-size}, there are $(1 \pm 2{\varepsilon})p\beta^2 q^2n$ choices for the vertex $v_3$ adjacent to both $y$ and $v_2$ in $P$, where $xy \in E(M)$. Thus, the number of hyperedges in $\mathcal H$ containing $c_1$ corresponding to a link where $c_1$ is the colour of one of the edges with both ends in $V_{\mathrm{link}}$ is $(1 \pm 2\sqrt{\varepsilon})2p^4 \beta^4 q^3 n^3/ 3$, so by~(\ref{slicearray}) \begin{equation*} d_{\mathcal H}(c_1) = (1\pm 2\sqrt{{\varepsilon}})4p^4\beta^4 q^3 n^3 / 3 = (1\pm 2\sqrt{{\varepsilon}})p^3\beta^4 q^4 n^3, \end{equation*} as required to prove Claim~\ref{linklemc1}. \noclaimproof {} \begin{claim}\label{linklemc2} $\Delta^c(\mathcal H) \leq 100n^2$. \end{claim} This can be proved similarly as above (with room to spare).\COMMENT{\removelastskip\penalty55 \noindent{\em Proof of claim: }{} Let $xy \in E(M)$, $v_1, v_2 \in V_{\mathrm{link}}$, and $c_1, c_2 \in C_{\mathrm{link}}$. There are at most $3n^2$ links with ends $x$ and $y$ containing $v_1$, so the codegree of pairs in $E(M)\times V_{\mathrm{link}}$ is at most $3n^2$, as required. Similarly, there are at most $2n^2$ links with ends $x$ and $y$ such that the colour of the edge incident to $x$ or $y$ is $c_1$, and there are at most $2n^2$ links with ends $x$ and $y$ such that the colour of one of the edges not incident to $x$ or $y$ is $c_1$, so the codegree of pairs in $E(M)\times C_{\mathrm{link}}$ is at most $4n^2$, as required. The codegree of pairs in $E(M)\times E(M)$ is zero. Now we count the number of hyperedges in $\mathcal H$ containing $v_1$ and $c_1$ corresponding to a link~$P$ where an edge $e$ in $P$ incident to $v_1$ is coloured $c_1$. 
If $e$ is incident to an end $x$ of $P$, then the other end of $P$ is $y$ where $xy \in E(M)$. In this case, there are at most $n^2$ choices for the internal vertices of $P$ other than $v_1$. If $e$ is not incident to an end of $P$, then let $v_2$ denote the end of $e$ that is not $v_1$. There are at most $|E(M)|$ choices for the ends $x$ and $y$ of $P$ and at most $n$ choices for the other internal vertex $v_3$ of $P$. For each such choice of $xy \in E(M)$ and $v_3\in V_{\mathrm{link}}$, there are at most six links with these vertices. Since $|E(M)| \leq n/2$, there are at most $4n^2$ links $P$ containing $v_1$ where an edge in $P$ incident to $v_1$ is coloured $c_1$. Now we count the number of hyperedges in $\mathcal H$ containing $v_1$ and $c_1$ corresponding to a link $P$ containing $v_1$ and an edge $e$ coloured $c_1$ that is not incident to $v_1$. If $e$ is incident to an end of $P$, say $x$, then the other end of $P$ is $y$ where $xy\in E(M)$. In this case, there are at most~$n$ choices for the other internal vertex of $P$, and since there are at most $n/2$ possibilities for the edge $e$ coloured $c_1$, there are at most~$n^2$ such links. If $e$ is not incident to an end of $P$, then the internal vertices of $P$ are determined. Since there are at most $n/2$ possibilities for the edge $e$ and $|E(M)| \leq n/2$, there are at most $n^2$ such links. Thus, there are at most $2n^2$ links $P$ containing $v_1$ such that there is an edge in $P$ not incident to $v_1$ that is coloured $c_1$, so the codegree of pairs in $V_{\mathrm{link}}\times C_{\mathrm{link}}$ is at most $6n^2$, as required. Now we count the number of hyperedges in $\mathcal H$ containing $v_1$ and $v_2$, where $v_{1},v_{2}\in V_{\text{link}}$. If $P$ is a link containing $v_1$ and $v_2$, then there are at most $n$ choices for the other internal vertex $v_3$ of $P$ and at most $n/2$ choices for the ends $x$ and $y$ of $P$. 
For each such choice of $v_3, x$, and $y$, there are at most six links containing these vertices, so there are at most $3n^2$ links containing $v_1$ and $v_2$. Thus, the codegree of pairs in $V_{\mathrm{link}}\times V_{\mathrm{link}}$ is at most $3n^2$, as required. Next, we count the number of hyperedges in $\mathcal H$ containing $c_1$ and $c_2$ corresponding to a link $P$ containing edges $e_1$ and $e_2$ such that $\phi(e_i) = c_i$ for $i\in\{1, 2\}$, where $e_1$ and $e_2$ do not share an end. At least one of $e_1$ and $e_2$, say $e_1$, is incident to an end $x$ of $P$, and this determines the other end~$y$ of~$P$, where $xy\in E(M)$. If $e_2$ is incident to $y$, then two of the internal vertices of $P$ are determined by $e_1$ and $e_2$, and there are at most $n$ choices for the other internal vertex. If $e_2$ is not incident to $y$, then $e_1$ and $e_2$ determine the internal vertices of $P$, but there are up to $n/2$ possible choices for the edge $e_2$. Since there are at most $n/2$ possible choices for the edge $e_1$, there are at most $2n^2$ such links. Finally, we count the number of hyperedges in $\mathcal H$ containing $c_1$ and $c_2$ corresponding to a link $P$ containing edges $e_1$ and $e_2$ such that $\phi(e_i) = c_i$ for $i\in\{1, 2\}$, where $e_1$ and $e_2$ share an end $v_1$. If $v_1$ is adjacent to an end $x$ of $P$, then the other end of $P$ is $y$ where $xy\in E(M)$, and there are at most $n$ choices for the internal vertex of $P$ not adjacent to $v_1$. If $v_1$ is not adjacent to an end of $P$, then there are $|E(M)|$ choices for the ends of $P$. Since there are at most $n$ choices for the vertex $v_1$, the total number of such links is at most $3n^2$. Therefore the codegree of pairs in $C_{\mathrm{link}}\times C_{\mathrm{link}}$ is at most $5n^2$, as required. \noclaimproof {}} Let $\mathcal F \coloneqq \{E(M), V_{\mathrm{link}}, C_{\mathrm{link}}\}$. 
By Theorem~\ref{hypergraph-matching-thm}, $\mathcal H$ has a $(\delta, \mathcal F)$-perfect matching $\mathcal M$. Let $\mathcal P_1$ be the collection of links corresponding to $\mathcal M$, and let $M'$ be the matching consisting of all those $xy\in E(M)$ that are not covered by $\mathcal M$. To complete the proof, we greedily find a collection $\mathcal P_2$ of reserve links that links $M'$. Write $E(M') = \{x_1y_1, \dots, x_ky_k\}$, and suppose $P_i$ is a reserve link with ends $x_i$ and $y_i$ for $i < j$, where $j \in [k]$. We show that there is a reserve link $P_j$ that is vertex- and colour-disjoint from $\bigcup_{i < j}P_i$, which implies that $\bigcup_{i=1}^j P_i$ links $\{x_1y_1, \dots, x_jy_j\}$, and thus we can choose $\mathcal P_2$ greedily. Since $k \leq \delta n$ and each link has at most three vertices in $V'_{\mathrm{link}}$, by~\ref{linking-sets-right-size}, there is a vertex $v \in V'_{\mathrm{link}}\setminus \bigcup_{i<j}V(P_i)$. By~\ref{common-linking-nbrhood-large-reserve}, there are at least $\gamma^6 n - 11j$ vertices $v_1 \in (N_{G_2}(x_{j})\cap N_{G_2}(v) \cap V'_{\mathrm{link}})\setminus \bigcup_{i < j}V(P_i)$ such that $\phi(x_{j}v_1), \phi(v_{1}v) \notin \bigcup_{i < j}\phi(P_i)$, and since $j/n \leq \delta \ll \gamma$, we may let $v_1$ be such a vertex. Similarly, by~\ref{common-linking-nbrhood-large-reserve}, there is a vertex $v_2 \in (N_{G_2}(y_{j})\cap N_{G_2}(v) \cap V'_{\mathrm{link}}) \setminus \bigcup_{i < j}V(P_i)$ such that $\phi(y_{j}v_2), \phi(v_{2}v) \notin \bigcup_{i < j}\phi(P_i) \cup \{\phi(x_{j}v_1), \phi(v_{1}v)\}$. Now there is a reserve link $P_j$ with ends $x_j$ and $y_j$ and internal vertices $v$, $v_1$, and $v_2$ that is vertex- and colour-disjoint from $\bigcup_{i < j}P_{i}$, as claimed, and therefore there exists a collection $\mathcal P_2$ of reserve links that links $M'$. Now $\mathcal P_1 \cup \mathcal P_2$ links $M$, so~\ref{linking-matching} holds. 
By~\ref{linking-sets-right-size}, and since $\mathcal M$ is $(\delta, \mathcal F)$-perfect,~\ref{links-bounded-remainder} holds, as required. \end{proof} \subsection{Proof} We now have all the tools we need to prove Lemma~\ref{main-absorber-lemma}. \lateproof{Lemma~\ref{main-absorber-lemma}} Consider an absorber partition of~$V$,~$C$, and~$K_{n}$. By Lemmas~\ref{absorbing-template-lemma},~\ref{greedy-absorber-lemma}, and~\ref{linking-lemma}, there exists an outcome of the absorber partition satisfying the conclusions of these lemmas simultaneously. In particular, by Lemmas~\ref{absorbing-template-lemma} and~\ref{greedy-absorber-lemma} there exist $H$, $\mathcal A$, and $M$ such that, writing~$(V_{\text{res}},C_{\text{res}})$ for the bipartition of~$H$, \begin{itemize} \item $H\cong \tilde H$ is a $36\gamma$-absorbing template satisfying~\ref{template-flexible-sets} and~\ref{template-buffer-sets}, \item $\mathcal A$ and $H$ satisfy \ref{greedily-choosing-absorber}, and \item $M$ satisfies \ref{greedily-choosing-matching}. \end{itemize} Write $\mathcal A = \{A_1, \dots, A_k\}$ and $E(M) = \{a_1b_1, \dots, a_\ell b_\ell\}$. Consider $M_1\,\dot{\cup}\, M_2\,\dot{\cup}\, M_3\,\dot{\cup}\, M_4$, where~$M_{i}$ is a matching satisfying~(Mi) for $i\in[4]$ (see Section~\ref{overviewsec}). By~\ref{greedily-choosing-matching} we have $|V_{\text{abs}}\setminus V(M_{1}\cup\dots\cup M_{4})|\leq 5{\varepsilon} n+1\leq 6{\varepsilon} n$. Thus by Lemma~\ref{linking-lemma} there exist collections of links $\mathcal P$ and $\mathcal T$ in~$G'$ such that \begin{itemize} \item $\mathcal P\cup\mathcal T$ is a collection of links satisfying~\ref{linking-matching} with respect to $\bigcup_{i=1}^4M_i$ and \item $\mathcal P\cup\mathcal T$ satisfies~\ref{links-bounded-remainder}. \end{itemize} In particular $\mathcal P$ links $\bigcup_{i=1}^3M_i$ and $\mathcal T$ links $M_4$. Let $T \coloneqq M\cup\bigcup_{P\in \mathcal T}P$. 
By Fact~\ref{factref}, $(\mathcal A, \mathcal P, T, H)$ is a $36\gamma$-absorber, as desired. Moreover, since $H$ satisfies~\ref{template-flexible-sets} and~\ref{template-buffer-sets}, $M$ satisfies \ref{greedily-choosing-matching}, and $\mathcal P\cup\mathcal T$ satisfies~\ref{links-bounded-remainder}, we have $\bigcup_{A\in\mathcal A}A \cup \bigcup_{P\in \mathcal P}P \cup T$ is contained in $(V', C', G')$ with $\gamma$-bounded remainder, as required. \noproof \section{Finding many well-spread absorbing gadgets}\label{switching-section} The aim of this section is to prove Lemma~\ref{main-switching-lemma}, which states that, for appropriate~$\mu,{\varepsilon}$, almost all $1$-factorizations of~$K_{n}$ are ${\varepsilon}$-locally edge-resilient and $\mu$-robustly gadget-resilient. We will use switchings in~$\mathcal{G}_{D}^{\text{col}}$ for appropriate $D\subsetneq [n-1]$ to analyse the probability that a uniformly random $G\in \mathcal{G}_{D}^{\text{col}}$ satisfies the necessary properties, and then use a `weighting factor' (see Corollary~\ref{wf}) to make comparisons to the probability space corresponding to a uniform random choice of $G\in\mathcal{G}_{[n-1]}^{\text{col}}$. \subsection{Switchings} We begin by analysing the property of ${\varepsilon}$-local edge-resilience. \begin{lemma}\label{localedge} Suppose $1/n \ll {\varepsilon}\ll1$, and let $D\subseteq[n-1]$ have size $|D|={\varepsilon} n$. Suppose $\mathbf{G}\in\mathcal{G}_{D}^{\text{col}}$ is chosen uniformly at random. Then $\prob{\mathbf{G}\,\text{is}\,\,{\varepsilon}\text{-locally}\,\text{edge-resilient}\,}\geq 1- \exp(-{\varepsilon}^{3}n^{2}/1000)$. 
\end{lemma} \begin{proof} Note that if $G\in\mathcal{G}_{D}^{\text{col}}$ has at least~${\varepsilon}^{3}n^{2}/100$ edges with endpoints in~$V'$ for all choices of $V'\subseteq V$ of size precisely~${\varepsilon} n$, then~$G$ is ${\varepsilon}$-locally edge-resilient.\COMMENT{In any larger sets $V'\subseteq V$, we can just find all the necessary edges in any subset $V''\subseteq V'$ of size precisely~${\varepsilon} n$. Further, of course, $D$ is the only subset of $D$ of size at least ${\varepsilon} n$.} Fix $V'\subseteq V$ of size precisely ${\varepsilon} n$. For any $G\in\mathcal{G}_{D}^{\text{col}}$, we say that a subgraph $H\subseteq G$ together with a labelling of its vertices $V(H)=\{u,v,w,x,y,z\}$ is a \textit{spin system} of~$G$ if $E(H)=\{vw, xy, zu\}$, where $u,v\in V'$, $w,x,y,z\in V\setminus V'$, $uv, wx, yz\notin E(G)$, and $\phi_{G}(vw)=\phi_{G}(xy)=\phi_{G}(zu)$. (Note that different labellings of a subgraph $H\subseteq G$ that both satisfy these conditions will be considered to correspond to different spin systems of~$G$.) We now define the \textit{spin} switching operation. Suppose $G\in\mathcal{G}_{D}^{\text{col}}$ and $H\subseteq G$ is a spin system. Then\COMMENT{In this case it may have been easier to just define $\text{spin}_{(u,v,w,x,y,z)}(G)$ for a sequence of vertices $u,v,w,x,y,z$, and avoid the need for defining a `spin system'. However, this approach won't work for our main switching later where the switching system has 14 vertices. It seems easiest to refer to a switching in terms of a subgraph like this, and to be consistent about it so I'm using it here. (And of course, a subgraph could correspond to several different switchings depending on which vertices play what role, so the labelling is important.)} we define~$\text{spin}_{H}(G)$ to be the coloured graph obtained from~$G$ by deleting the edges $vw,xy,zu$, and adding the edges $uv, wx, yz$, each with colour~$\phi_{G}(vw)$. 
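Before using this operation, note why it is well behaved: each of the six vertices $u,v,w,x,y,z$ loses exactly one edge of colour~$\phi_{G}(vw)$ and gains exactly one, so every colour class of~$\text{spin}_{H}(G)$ is still a perfect matching. Moreover, since $u,v\in V'$ and $w,x,y,z\in V\setminus V'$, none of the deleted edges $vw$, $xy$, $zu$ has both endpoints in~$V'$, while exactly one of the added edges, namely~$uv$, does, so
\[
e_{V',D}(\text{spin}_{H}(G))=e_{V',D}(G)+1.
\]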
Writing $G'\coloneqq\text{spin}_{H}(G)$, we have $G'\in\mathcal{G}_{D}^{\text{col}}$ and $e_{V',D}(G')=e_{V',D}(G)+1$. We define a partition~$\{M_{s}\}_{s=0}^{\binom{{\varepsilon} n}{2}}$ of~$\mathcal{G}_{D}^{\text{col}}$ by setting $M_{s}\coloneqq \{G\in\mathcal{G}_{D}^{\text{col}}\colon e_{V',D}(G)=s\}$, for each $s\in[\binom{{\varepsilon} n}{2}]_{0}$. For each $s\in[\binom{{\varepsilon} n}{2}-1]_{0}$ we define an auxiliary bipartite multigraph\COMMENT{We can have two edges between $G\in M_{s}$ and $G'\in M_{s+1}$ if say there is an $H\subseteq G$ such that $\text{spin}_{H}(G)=G'$. In this case we find a second spin system $H'$ such that $\text{spin}_{H'}(G)=G'$, by swapping the roles of $u$ and $v$, swapping the roles of $w$ and $z$, and swapping the roles of $x$ and $y$.}~$B_{s}$ with vertex bipartition $(M_{s}, M_{s+1})$, where for each $G\in M_{s}$ and each spin system $H\subseteq G$ we put an edge in~$B_{s}$ with endpoints $G\in M_{s}$ and $\text{spin}_{H}(G)\in M_{s+1}$. Define $\delta_{s}\coloneqq \min_{G\in M_{s}}d_{B_{s}}(G)$ and $\Delta_{s+1}\coloneqq \max_{G\in M_{s+1}}d_{B_{s}}(G)$. Observe, by double counting~$e(B_{s})$, that $|M_{s}|/|M_{s+1}|\leq \Delta_{s+1}/\delta_{s}$. To bound~$\Delta_{s+1}$ from above, we fix $G'\in M_{s+1}$ and bound the number of pairs~$(G,H)$, where $G\in M_{s}$ and~$H$ is a spin system of~$G$ such that $\text{spin}_{H}(G)=G'$. There are~$s+1$ choices for the edge $e\in E_{V',D}(G')$ created by a spin operation, and~$2$ choices for which endpoint of~$e$ played the role of~$u$ in a spin, and which played the role of~$v$. Now there are at most~$(n/2)^{2}$ choices for two edges with colour~$\phi_{G'}(e)$ in~$G'$ with both endpoints outside of~$V'$, and at most~$8$ choices for which endpoints of these edges played the roles of $w,x,y,z$ in a spin operation yielding~$G'$. We deduce that $\Delta_{s+1}\leq 4(s+1)n^{2}$. Suppose that $s\leq {\varepsilon}^{3}n^{2}/80$. 
To bound~$\delta_{s}$ from below, we fix $G\in M_{s}$ and find a lower bound for the number of spin systems\COMMENT{Even if spinning on two different spin systems yields the same graph, they will correspond to two different edges of~$B_{s}$.} $H\subseteq G$. For a vertex $v\in V'$, let $D_{G}^{*}(v)\subseteq D$ denote the set of colours~$c\in D$ such that the $c$-neighbour of~$v$ is not in~$V'$, in~$G$. Let $V_{G}^{*}\coloneqq\{v\in V'\colon |D_{G}^{*}(v)|\geq 9{\varepsilon} n/10\}$, and suppose for a contradiction that $|V_{G}^{*}|< 9{\varepsilon} n/10$. Then there are at least~${\varepsilon} n/10$ vertices $v\in V'$ for which there are at least~${\varepsilon} n/10$ colours $c\in D$ such that the $c$-neighbour of~$v$ is in~$V'$, in~$G$, whence $s=e_{V',D}(G)\geq {\varepsilon}^{2}n^{2}/200> {\varepsilon}^{3}n^{2}/80\geq s$, a contradiction. Note further that, since $s\leq {\varepsilon}^{3}n^{2}/80$, there are at least $\binom{9{\varepsilon} n/10}{2}-{\varepsilon}^{3}n^{2}/80\geq {\varepsilon}^{2}n^{2}/4$ pairs~$\{a,b\}\in\binom{V_{G}^{*}}{2}$ such that $ab\notin E(G)$. For each such choice of~$\{a,b\}$, there are two choices of which vertex will play the role of~$u$ and which will play the role of~$v$ in a spin system. Since $u,v\in V_{G}^{*}$, there are at least~$4{\varepsilon} n/5$ colours $c\in D$ such that the $c$-neighbour~$z$ of~$u$, and the $c$-neighbour~$w$ of~$v$, are such that $w,z\in V\setminus V'$, in~$G$. Finally, there are at least~$n/2-3{\varepsilon} n\geq n/4$ edges coloured~$c$ in~$G$ with neither endpoint in~$V'\cup N_{G}(w)\cup N_{G}(z)$, and two choices of which endpoint of such an edge will play the role of~$x$, and which will play the role of~$y$. We deduce that $\delta_{s}\geq {\varepsilon}^{3}n^{4}/5$. Altogether, we conclude that if $s\leq {\varepsilon}^{3}n^{2}/80$ and~$M_{s}$ is non-empty, then~$M_{s+1}$ is non-empty and $|M_{s}|/|M_{s+1}|\leq 20(s+1)n^{2}/{\varepsilon}^{3}n^{4} \leq 1/2$. Now, fix $s\leq {\varepsilon}^{3}n^{2}/100$. 
If~$M_{s}$ is empty, then\COMMENT{Note $\prob{e_{V',D}(\mathbf{G})=s}=|M_{s}|/|\mathcal{G}_{D}^{\text{col}}|$, so we need to know $\mathcal{G}_{D}^{\text{col}}$ is non-empty to know that we are not dividing by zero. But this follows from the usual existence results of $1$-factorizations of complete graphs. (Just restrict such a $1$-factorization to our colour set of interest)} $\prob{e_{V',D}(\mathbf{G}) =s}=0$. If~$M_{s}$ is non-empty, then \[ \prob{e_{V',D}(\mathbf{G}) =s} = \frac{|M_{s}|}{|\mathcal{G}_{D}^{\text{col}}|}\leq \frac{|M_{s}|}{|M_{{\varepsilon}^{3}n^{2}/80}|}= \prod_{j=s}^{{\varepsilon}^{3}n^{2}/80-1}\frac{|M_{j}|}{|M_{j+1}|} \leq \left(\frac{1}{2}\right)^{{\varepsilon}^{3}n^{2}/80-s}, \] and thus\COMMENT{Note the middle expression is bounded above by $\left(\frac{{\varepsilon}^{3}n^{2}}{100}+1\right)\exp\left(-\frac{{\varepsilon}^{3}n^{2}}{400}\ln2\right)$.} \[ \prob{e_{V',D}(\mathbf{G}) \leq {\varepsilon}^{3}n^{2}/100} \leq \sum_{s=0}^{{\varepsilon}^{3}n^{2}/100}\exp(-({\varepsilon}^{3}n^{2}/80-s)\ln2) \leq \exp\left(-\frac{{\varepsilon}^{3}n^{2}}{800}\right). \] A union bound over all choices of $V'\subseteq V$ of size~${\varepsilon} n$ now completes the proof.\COMMENT{$\prob{\mathbf{G}\,\text{is}\,\text{not}\,{\varepsilon}\text{-locally}\,\text{edge-resilient}}\leq \binom{n}{{\varepsilon} n}\exp(-{\varepsilon}^{3}n^{2}/800)\leq\exp(-{\varepsilon}^{3}n^{2}/1000)$.} \end{proof} We now turn to showing that for suitable $D\subseteq [n-1]$, almost all $G\in\mathcal{G}_{D}^{\text{col}}$ are robustly gadget-resilient, which turns out to be a much harder property to analyse than local edge-resilience, and we devote the rest of this section to it. We first need to show that almost all $G\in\mathcal{G}_{D}^{\text{col}}$ are `quasirandom', in the sense that small sets of vertices do not have too many crossing edges. \begin{defin} Let $D\subseteq [n-1]$. 
We say that $G\in\mathcal{G}_{D}^{\text{col}}$ is \textit{quasirandom} if for all sets $A,B\subseteq V$, not necessarily distinct, such that $|A|=|B|=|D|$, we have that $e_{G}(A,B)<8(|D|-1)^{3}/n$. We define $\mathcal{Q}_{D}^{\text{col}}\coloneqq\{G\in\mathcal{G}_{D}^{\text{col}}\colon G\,\text{is}\,\text{quasirandom}\}$.\COMMENT{We are using this quasirandomness definition that doesn't see the colours, but investigating it in the context of coloured graphs, because we wish to know about the probability of obtaining this quasirandomness property in the space where we pick a coloured graph u.a.r., which is a fundamentally different probability space to the one in which we just pick regular graphs u.a.r.} \end{defin} When we are analysing switchings to study the property of robust gadget-resilience (see Lemma~\ref{masterswitch}), it will be important to condition on this quasirandomness. One can use another switching argument to show that almost all $G\in\mathcal{G}_{D}^{\text{col}}$ are quasirandom. \begin{lemma}\label{quasirandom} Suppose that $1/n\ll\mu\ll1$, and let $D\subseteq [n-1]$ have size $|D|=\mu n+1$. Suppose that $\mathbf{G}\in\mathcal{G}_{D}^{\text{col}}$ is chosen uniformly at random. Then $\prob{\mathbf{G}\in\mathcal{Q}_{D}^{\text{col}}}\geq 1- \exp(-\mu^{3}n^{2})$. \end{lemma} \begin{proof} Fix $A,B\subseteq V$ satisfying $|A|=|B|=\mu n+1$. For any $G\in\mathcal{G}_{D}^{\text{col}}$, we say that a subgraph $H\subseteq G$ together with a labelling of its vertices $V(H)=\{a,b,v,w\}$ is a \textit{rotation system} of~$G$ if $E(H)=\{ab, vw\}$, where $a\in A$, $b\in B$, $v,w\notin A\cup B$, $aw, bv\notin E(G)$, and $\phi_{G}(ab)=\phi_{G}(vw)$. We now define the \textit{rotate} switching operation. Suppose $G\in\mathcal{G}_{D}^{\text{col}}$ and $H\subseteq G$ is a rotation system. Then we define~$\text{rot}_{H}(G)$ to be the coloured graph obtained from~$G$ by deleting the edges $ab,vw$, and adding the edges $aw, bv$, each with colour~$\phi_{G}(ab)$. 
Writing $G'\coloneqq\text{rot}_{H}(G)$, notice that $G'\in\mathcal{G}_{D}^{\text{col}}$ and $e_{G'}(A,B)=e_{G}(A,B)-1$. Lemma~\ref{quasirandom} follows by analysing the degrees of auxiliary bipartite multigraphs~$B_{s}$ in a similar way as in the proof of Lemma~\ref{localedge}. We omit the details.\COMMENT{We define a partition~$\{M_{s}\}_{s=0}^{(\mu n+1)^{2}}$ of~$\mathcal{G}_{D}^{\text{col}}$ by setting $M_{s}\coloneqq\{G\in\mathcal{G}_{D}^{\text{col}}\colon e_{G}(A,B)=s\}$ for each $s\in[(\mu n+1)^{2}]_{0}$.\COMMENT{Depending on $A,B$ (say for example $A=B$), it might be that some of $M_{(\mu n+1)^{2}}, M_{(\mu n+1)^{2}-1},\dots$ are empty, but this would just mean the partition has some empty parts. Regardless, $(\mu n+1)^{2}$ is of course a universal upper bound for~$e_{G}(A,B)$.} For each $s\in[(\mu n+1)^{2}]$ we define an auxiliary bipartite multigraph\COMMENT{We can have two edges between $G\in M_{s}$ and $G'\in M_{s-1}$ if say there is an~$H\subseteq G$ such that $\text{rot}_{H}(G)=G'$ and $a,b\in A\cap B$. In this case we find a second rotation system $H'$ such that $\text{rot}_{H'}(G)=G'$, by swapping the roles of $a$ and $b$, and swapping the roles of $v$ and $w$.}~$B_{s}$ with vertex bipartition~$(M_{s-1}, M_{s})$, where for each $G\in M_{s}$ and each rotation system $H\subseteq G$ we put an edge in~$B_{s}$ with endpoints~$G\in M_{s}$ and $\text{rot}_{H}(G)\in M_{s-1}$. Define $\delta_{s}\coloneqq \min_{G\in M_{s}}d_{B_{s}}(G)$ and $\Delta_{s-1}\coloneqq \max_{G\in M_{s-1}}d_{B_{s}}(G)$. Thus $|M_{s}|/|M_{s-1}|\leq \Delta_{s-1}/\delta_{s}$. To bound~$\Delta_{s-1}$ from above, we fix $G'\in M_{s-1}$ and bound the number of pairs~$(G,H)$, where $G\in M_{s}$ and~$H$ is a rotation system of~$G$ such that $\text{rot}_{H}(G)=G'$. There are at most $(\mu n+1)^{2}$ choices for which pair of vertices $a\in A, b\in B$ were caused to be a non-edge in~$G'$ by a rotation. 
Then there are at most~$\mu n+1$ choices for a colour $d\in D$ such that the $d$-neighbour of~$b$ and the $d$-neighbour of~$a$ in~$G'$ could have played the roles of~$v$ and~$w$ respectively in a rotation yielding~$G'$. We deduce that $\Delta_{s-1}\leq (\mu n+1)^{3}$. To bound~$\delta_{s}$ from below, we fix $G\in M_{s}$ and find a lower bound for the number of rotation systems $H\subseteq G$. Notice that there are~$s$ choices of an edge~$e\in E(G)$ with an endpoint in~$A$ and an endpoint in~$B$, and at least one choice of which of these endpoints will play the role of $a\in A$ in a rotation system~$H$, and which will play the role of $b\in B$. Observe that $|A\cup B \cup N_{G}(a)\cup N_{G}(b)|\leq 4\mu n+4$, and therefore there are at least $n/2-4\mu n-4\geq n/4$ edges $f\in E_{\phi_{G}(e)}(G)$ such that both endpoints of~$f$ are in $V\setminus(A\cup B\cup N_{G}(a)\cup N_{G}(b))$, and there are two choices of which endpoint of~$f$ will play the role of~$v$, and which will play the role of~$w$. Thus $\delta_{s}\geq sn/2$. We conclude that if $s\geq 5\mu^{3}n^{2}$ and~$M_{s}$ is non-empty, then~$M_{s-1}$ is non-empty and $|M_{s}|/|M_{s-1}|\leq 2(\mu n+1)^{3}/sn \leq 1/2$. Now, fix $s\geq 8\mu^{3}n^{2}$. If~$M_{s}$ is empty, then\COMMENT{Again, we need to know $\mathcal{G}_{D}^{\text{col}}$ is non-empty to know that we are not dividing by zero. But this follows from the usual existence results of $1$-factorizations of complete graphs. (Just restrict such a $1$-factorization to our colour set of interest)} $\prob{e_{\mathbf{G}}(A,B)=s}=0$. 
If~$M_{s}$ is non-empty, then \[ \prob{e_{\mathbf{G}}(A,B)=s}=\frac{|M_{s}|}{|\mathcal{G}_{D}^{\text{col}}|}\leq \frac{|M_{s}|}{|M_{5\mu^{3}n^{2}}|}=\prod_{j=0}^{s-5\mu^{3}n^{2}-1}\frac{|M_{s-j}|}{|M_{s-j-1}|}\leq \left(\frac{1}{2}\right)^{s-5\mu^{3}n^{2}}, \] and thus \begin{eqnarray*} \prob{e_{\mathbf{G}}(A,B)\geq 8\mu^{3}n^{2}} & \leq & \sum_{s=8\mu^{3}n^{2}}^{(\mu n+1)^{2}}\exp(-(s-5\mu^{3}n^{2})\ln 2)\leq (\mu n+1)^{2}\exp\left(-3\mu^{3}n^{2}\ln 2\right) \\ & \leq & \exp\left(-2\mu^{3}n^{2}\right). \end{eqnarray*} Finally, by a union bound over all choices of $A,B\subseteq V$, each of size $\mu n+1$, we conclude that $\prob{\mathbf{G}\notin\mathcal{Q}_{D}^{\text{col}}}\leq\binom{n}{\mu n+1}^{2}\exp(-2\mu^{3}n^{2})\leq\exp(-\mu^{3}n^{2})$, which completes the proof of the lemma.} \end{proof} Next we will use a switching argument to find a large set of well-spread absorbing gadgets (cf.\ Definition~\ref{spread}). For this, we consider slightly more restrictive substructures than the absorbing gadgets defined in Definition~\ref{def:absorbing-gadget}. These additional restrictions (an extra edge~$f$ as well as an underlying partition~$\mathcal{P}$ of the colours) give us better control over the switching process: they allow us to argue that we do not create more than one additional gadget per switch. Let $D\subseteq [n-1]$, $c\in[n-1]\setminus D$, write $D^{*}\coloneqq D\cup\{c\}$, and let $G\in\mathcal{G}_{D^{*}}^{\text{col}}$. Suppose that~$\mathcal{P}=\{D_{i}\}_{i=1}^{4}$ is an (ordered) partition of~$D$ into four subsets, and let~$x\in V$. 
\begin{defin} An $(x,c,\mathcal{P})$\textit{-gadget} in~$G$ is a subgraph~$J=A\cup\{f\}$ of~$G$ of the following form (see Figure~\ref{fig:xcp}): \begin{enumerate}[label=\upshape(\roman*)] \item $A$ is an $(x,c)$-absorbing gadget in~$G$; \item there is an edge $e_{1}\in\partial_{A}(x)$ such that $\phi(e_{1})\in D_{1}$, and the remaining edge $e_{2}\in\partial_{A}(x)$ satisfies $\phi(e_{2})\in D_{2}$; \item the edge~$e_{3}$ of~$A$ which is not incident to~$x$ but shares an endvertex with~$e_{1}$ and an endvertex with~$e_{2}$ satisfies $\phi(e_{3})\in D_{3}$; \item $f=xv$ is an edge of~$G$, where $v$ is the unique vertex of~$A$ such that $\phi(\partial_{A}(v))=\{c,\phi(e_{1})\}$; \item $\phi(f)\in D_{4}$. \end{enumerate} \end{defin} \begin{figure} \caption{An $(x,c,\mathcal{P})$-gadget. Here, $\phi(f)\in D_{4}$, $\phi(e)=c$, and $\phi(e_{i})=\phi(e_{i}')\in D_{i}$ for each $i\in[3]$.} \label{fig:xcp} \end{figure} We now define some terminology that will be useful for analysing how many $(x,c,\mathcal{P})$-gadgets there are in a graph $G\in\mathcal{G}_{D^{*}}^{\text{col}}$, and how well-spread these gadgets are. Each of the terms we define here will have a dependence on the choice of the triple~$(x,c,\mathcal{P})$, but since this triple will always be clear from context, for presentation we omit the $(x,c,\mathcal{P})$-notation. \begin{defin} We say that an $(x,c,\mathcal{P})$-gadget~$J$ in~$G$ is \textit{distinguishable} in~$G$ if the edges $e_{3}, e_{3}'$ of~$J$ such that $\phi(e_{3})=\phi(e_{3}')\in D_{3}$ are such that there is no other $(x,c,\mathcal{P})$-gadget $J'\neq J$ in~$G$ such that $e_{3}\in E(J')$ or $e_{3}'\in E(J')$. \end{defin} We will aim only to count distinguishable $(x,c,\mathcal{P})$-gadgets, which will ensure the collection of gadgets we find is well-spread across the set of edges in $G\in\mathcal{G}_{D^{*}}^{\text{col}}$ that can play the roles of~$e_{3}$,~$e_{3}'$. 
We also need to ensure that the collection of gadgets we find is well-spread across the $c$-edges of~$G$. \begin{defin}~ \begin{itemize} \item For each $c$-edge~$e$ of~$G\in\mathcal{G}_{D^{*}}^{\text{col}}$, we define the \textit{saturation} of~$e$ in~$G$, denoted~$\text{sat}_{G}(e)$, or simply~$\text{sat}(e)$ when~$G$ is clear from context, to be the number of distinguishable $(x,c,\mathcal{P})$-gadgets of~$G$ which contain~$e$. We say that~$e$ is \textit{unsaturated} in~$G$ if $\text{sat}(e)\leq |D|-1$, \textit{saturated} if $\text{sat}(e)\geq |D|$, and \textit{supersaturated} if $\text{sat}(e)\geq |D|+6$. We define~$\text{Sat}(G)$ to be the set of saturated $c$-edges of~$G$, and~$\text{Unsat}(G)\coloneqq E_{c}(G)\setminus\text{Sat}(G)$. \item We define the function\COMMENT{Actually, since (later)~$\mathcal{P}$ is equitable,~$r(G)$ cannot exceed $n|D|/16$ as there are only $n|D|/8$ edges of~$G$ with colours in~$D_{3}$, and each distinguishable $(x,c,\mathcal{P})$-gadget uses precisely two of these, and no such edge is in more than one distinguishable $(x,c,\mathcal{P})$-gadget by definition.} $r\colon \mathcal{G}_{D^{*}}^{\text{col}}\rightarrow [n|D|/2]_{0}$ by \[ r(G)\coloneqq |D||\text{Sat}(G)|+\sum_{e\in\text{Unsat}(G)}\text{sat}(e). \] \end{itemize} \end{defin} In Lemma~\ref{masterswitch}, we will use switchings to show that~$r(G)$ is large (for some well-chosen~$\mathcal{P}$) in almost all quasirandom $G\in\mathcal{G}_{D^{*}}^{\text{col}}$. In Lemma~\ref{justgadgets}, we use distinguishability, saturation, and the fact that any non-$x$ vertex in an $(x,c,\mathcal{P})$-gadget must be incident to an edge playing the role of either~$e_{3}$,~$e_{3}'$, or the $c$-edge, to show that~$r(G)$ being large means that there are many well-distributed $(x,c,\mathcal{P})$-gadgets in~$G$, and thus many well-spread $(x,c)$-absorbing gadgets. 
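Note that $r$ does indeed map into $[n|D|/2]_{0}$: each of the $n/2$ $c$-edges of~$G$ contributes at most~$|D|$ to~$r(G)$ (exactly~$|D|$ if saturated, and $\text{sat}(e)\leq|D|-1$ otherwise), so
\[
r(G)\leq|D|\,|E_{c}(G)|=\frac{n|D|}{2}.
\]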
We now define a relaxation of~$\mathcal{Q}_{D^{*}}^{\text{col}}$, which will be a convenient formulation for ensuring that quasirandomness is maintained when we use switchings to find $(x,c,\mathcal{P})$-gadgets. For each $s\in[n|D|/2]_{0}$, we write $A_{s}^{D^{*}}\coloneqq\{G\in\mathcal{G}_{D^{*}}^{\text{col}}\colon r(G)=s\}$, and \[ Q_{s}^{D^{*}}\coloneqq\{G\in\mathcal{G}_{D^{*}}^{\text{col}}\colon e_{G}(A,B)< 8|D|^{3}/n + 6s\,\,\, \text{for}\,\text{all}\,A,B\subseteq V\,\text{such}\,\text{that}\,|A|=|B|=|D|\}. \] We also define $T_{s}^{D^{*}}\coloneqq A_{s}^{D^{*}}\cap Q_{s}^{D^{*}}$ and $\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}\coloneqq \bigcup_{s=0}^{n|D|/2}T_{s}^{D^{*}}$. Notice that\COMMENT{In the definition of~$\mathcal{Q}_{D^{*}}^{\text{col}}$, we look at $A$, $B$ of size $|D^{*}|=|D|+1$, while in the definition of~$\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}$, we look at $A$, $B$ of size~$|D|$, but this inclusion still holds.} \begin{equation}\label{eq:qsubset} \mathcal{Q}_{D^{*}}^{\text{col}}\subseteq \widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}. \end{equation} Finally, we discuss the switching operation that we will use in Lemma~\ref{masterswitch}. 
\begin{defin}~ For any $G\in\mathcal{G}_{D^{*}}^{\text{col}}$, we say that a subgraph $H\subseteq G$ together with a labelling of its vertices $V(H)=\{x,u_{1},u_{2},\dots,u_{14}\}$ is a \textit{twist system} of~$G$ if (see Figure~\ref{fig:ts}): \begin{enumerate}[label=\upshape(\roman*)] \item $E(H)=\{u_{1}u_{2}, u_{3}u_{5}, u_{4}u_{6}, u_{5}u_{7}, u_{6}u_{8}, u_{7}u_{8}, u_{7}x, xu_{9}, xu_{10}, u_{9}u_{11}, u_{10}u_{12}, u_{13}u_{14}\}$; \item $\phi(u_{5}u_{7})=\phi(xu_{9})\in D_{1}$; \item $\phi(u_{6}u_{8})=\phi(xu_{10})\in D_{2}$; \item $\phi(u_{1}u_{2})=\phi(u_{3}u_{5})=\phi(u_{4}u_{6})=\phi(u_{9}u_{11})=\phi(u_{10}u_{12})=\phi(u_{13}u_{14})\in D_{3}$; \item $\phi(u_{7}x)\in D_{4}$; \item $\phi(u_{7}u_{8})=c$; \item $u_{1}u_{3}, u_{2}u_{4}, u_{5}u_{6}, u_{9}u_{10}, u_{11}u_{13}, u_{12}u_{14}\notin E(G)$. \end{enumerate} For a twist system~$H$ of~$G$, we define~$\text{twist}_{H}(G)$ to be the coloured graph obtained from~$G$ by deleting the edges $u_{1}u_{2}$, $u_{3}u_{5}$, $u_{4}u_{6}$, $u_{9}u_{11}$, $u_{10}u_{12}$, $u_{13}u_{14}$, and adding the edges $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{5}u_{6}$, $u_{9}u_{10}$, $u_{11}u_{13}$, $u_{12}u_{14}$, each with colour~$\phi_{G}(u_{1}u_{2})$.\COMMENT{Writing $G'=\text{twist}_{H}(G)$, notice that since~$\phi_{G}$ is a $1$-factorization of~$G$, we have that the colouring~$\phi_{G'}$ of~$G'$ is a $1$-factorization of~$G'$.} The $(x,c,\mathcal{P})$-gadget in~$\text{twist}_{H}(G)$ with edges $u_{5}u_{6}$, $u_{5}u_{7}$, $u_{6}u_{8}$, $u_{7}u_{8}$, $u_{7}x$, $xu_{9}$, $xu_{10}$, $u_{9}u_{10}$ is called the \textit{canonical} $(x,c,\mathcal{P})$\textit{-gadget} of the twist. \end{defin} \begin{figure} \caption{A twist system of~$G$. 
Here, dashed edges represent non-edges of~$G$, and the colours of the edges satisfy (ii)--(vi) in the definition of twist system.} \label{fig:ts} \end{figure} We simultaneously switch two edges into the positions~$u_{5}u_{6}$ and~$u_{9}u_{10}$ because it is much easier to find structures as in Figure~\ref{fig:ts} than it is to find such a structure with one of these edges already in place. Moreover, the two `switching cycles' we use have three edges and three non-edges (rather than two of each, as in the rotation switching) essentially because of the extra freedom this gives us when choosing the edges~$u_{1}u_{2}$ and~$u_{13}u_{14}$. This extra freedom allows us to ensure that in almost all twist systems, one avoids undesirable issues like inadvertently creating more than one new gadget when one performs the twist. The proof of Lemma~\ref{masterswitch} proceeds with a similar strategy to those of Lemmas~\ref{localedge} and~\ref{quasirandom}, but it is much more challenging this time to show that graphs with low~$r(G)$-value admit many ways to switch to yield a graph $G'\in\mathcal{G}_{D^{*}}^{\text{col}}$ satisfying $r(G')=r(G)+1$. \begin{lemma}\label{masterswitch} Suppose that $1/n\ll\mu\ll1$, and let $D\subseteq [n-1]$ have size $|D|=\mu n$. Let $x\in V$, let $c\in [n-1]\setminus D$, and let $\mathcal{P}=\{D_{i}\}_{i=1}^{4}$ be an equitable partition of~$D$. Suppose that $\mathbf{G}\in\mathcal{G}_{D\cup\{c\}}^{\text{col}}$ is chosen uniformly at random. Then \[ \prob{r(\mathbf{G})\leq\frac{\mu^{4}n^{2}}{2^{23}}\biggm| \mathbf{G}\in\widetilde{\mathcal{Q}}_{D\cup\{c\}}^{\text{col}}}\leq\exp\left(-\frac{\mu^{4}n^{2}}{2^{24}}\right). \] \end{lemma} \begin{proof} Write $D^{*}\coloneqq D\cup\{c\}$. Consider the partition~$\{T_{s}^{D^{*}}\}_{s=0}^{nk/2}$ of~$\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}$, where $k\coloneqq |D|$. 
For each $s\in[nk/2-1]_{0}$, we define an auxiliary bipartite multigraph\COMMENT{Suppose~$H$ is a twist system with colours~$i\in D_{i}$ and vertices as we usually label them. It is possible there is another twist system~$H'$ using colours $1'\neq 1$, $2'\neq 2$, $3$, $4'\neq 4$, with vertices $\{x,v_{1},\dots, v_{14}\}$ (here making the natural numerical correspondence of vertex roles in this order) such that $v_{1}=u_{6}$, $v_{2}=u_{4}$, $v_{3}=u_{5}$, $v_{4}=u_{2}$, $v_{5}=u_{3}$, $v_{6}=u_{1}$, $v_{9}=u_{13}$, $v_{10}=u_{11}$, $v_{11}=u_{14}$, $v_{12}=u_{9}$, $v_{13}=u_{12}$, $v_{14}=u_{10}$, and using an entirely different $c$-edge. These two twists actually operate on the same $3$-edges and non-edges, so that $\text{twist}_{H}(G)=\text{twist}_{H'}(G)$. It is possible that the saturation of both $c$-edges increases by precisely one (and is below the saturation threshold), while some other $c$-edge loses saturation one, so that $G'\in T_{s+1}$, so that indeed~$B_{s}$ may be a multigraph. It would be possible to rule this out in the definition of adjacency in~$B_{s}$, but I tried to make the definition no more convoluted than it already needed to be - plus this added complication will not bother us.}~$B_{s}$ with vertex bipartition $(T_{s}^{D^{*}},T_{s+1}^{D^{*}})$ and an edge between~$G$ and~$\text{twist}_{H}(G)$ whenever: \begin{enumerate} \item[(a)] $G\in T_{s}^{D^{*}}$; \item[(b)] $H$ is a twist system in~$G$ for which~$G'\coloneqq\text{twist}_{H}(G)\in T_{s+1}^{D^{*}}$ and~$G'$ satisfies $\text{sat}_{G'}(e)=\text{sat}_{G}(e)+1\leq k$ for the $c$-edge $e=u_{7}u_{8}$ of~$H$, with the canonical $(x,c,\mathcal{P})$-gadget of the twist~$G'$ being the only additional distinguishable $(x,c,\mathcal{P})$-gadget using this $c$-edge. \end{enumerate} Define $\delta_{s}\coloneqq \min_{G\in T_{s}^{D^{*}}}d_{B_{s}}(G)$ and $\Delta_{s+1}\coloneqq \max_{G\in T_{s+1}^{D^{*}}}d_{B_{s}}(G)$. Thus $|T_{s}^{D^{*}}|/|T_{s+1}^{D^{*}}|\leq \Delta_{s+1}/\delta_{s}$. 
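The last inequality is the usual double-counting estimate: every edge of~$B_{s}$ has one endvertex in~$T_{s}^{D^{*}}$ and one in~$T_{s+1}^{D^{*}}$, so
\[
\delta_{s}|T_{s}^{D^{*}}|\leq\sum_{G\in T_{s}^{D^{*}}}d_{B_{s}}(G)=e(B_{s})=\sum_{G'\in T_{s+1}^{D^{*}}}d_{B_{s}}(G')\leq\Delta_{s+1}|T_{s+1}^{D^{*}}|.
\]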
To bound~$\Delta_{s+1}$ from above, we fix $G'\in T_{s+1}^{D^{*}}$ and bound the number of pairs~$(G,H)$, where $G\in T_{s}^{D^{*}}$ and~$H$ is a twist system of~$G$ such that $\text{twist}_{H}(G)=G'$ and~(b) holds. Firstly, note that \[ \sum_{\substack{e\in E_{c}(G') \\ \text{sat}_{G'}(e)\leq k}}\text{sat}_{G'}(e) \leq r(G')= s+1. \] Thus, it follows from condition (b) that there are at most~$s+1$ choices for the canonical $(x,c,\mathcal{P})$-gadget of a twist yielding~$G'$ for which we record an edge in~$B_{s}$. Fixing this $(x,c,\mathcal{P})$-gadget fixes the vertices of~$V$ which played the roles of $x$, $u_{5}$, $u_{6}$, $\dots$, $u_{10}$ in a twist yielding~$G'$. To determine all possible sets of vertices playing the roles of $u_{1}$, $u_{2}$, $u_{3}$, $u_{4}$, $u_{11}$, $u_{12}$, $u_{13}$, $u_{14}$ (thus determining~$H$ and~$G$ such that $\text{twist}_{H}(G)=G'$), it suffices to find all choices of four edges of~$G'$ with colour~$\phi_{G'}(u_{5}u_{6})$ satisfying the necessary non-adjacency conditions. There are at most~$(n/2)^{4}$ choices for these four edges, and at most~$4!\cdot2^{4}$ choices for which endpoints of these edges play which role.\COMMENT{We need to choose which of the four edges is playing which role - there are~$4!$ choices for this assignment. Then we need to choose which endpoints are playing which role for each edge. There are~$2^{4}$ choices for this assignment.} We deduce that $\Delta_{s+1}\leq 24n^{4}(s+1)$. Suppose that $s\leq k^{4}/2^{22}n^{2}$. To bound~$\delta_{s}$ from below, we fix $G\in T_{s}^{D^{*}}$ and find a lower bound for the number of twist systems $H\subseteq G$ for which we record an edge between~$G$ and~$\text{twist}_{H}(G)$ in~$B_{s}$. To do this, we will show that there are many choices for a set of four colours and two edges, such that each of these sets uniquely identifies a twist system in~$G$ for which we record an edge in~$B_{s}$. 
Note that since $s\leq k^{4}/2^{22}n^{2}$ and $G\in Q_{s}^{D^{*}}$, we have\COMMENT{$e_{G}(A,B)<8|D|^{3}/n+6s\leq 8k^{3}/n+6k^{4}/2^{22}n^{2}\leq 10k^{3}/n$.} \begin{equation}\label{eq:quas} e_{G}(A,B)\leq 10k^{3}/n\hspace{5mm}\text{for}\hspace{1mm}\text{all}\hspace{1mm}\text{sets}\hspace{2mm}A,B\subseteq V\hspace{2mm}\text{of}\hspace{1mm}\text{sizes}\hspace{2mm}|A|=|B|=k. \end{equation} We begin by finding subsets of~$D_{3}$ and~$D_{4}$ with some useful properties in~$G$. \begin{claim}\label{d3good} There is a set $D_{3}^{\text{good}}\subseteq D_{3}$ of size $|D_{3}^{\text{good}}|\geq k/8$ such that for all $d\in D_{3}^{\text{good}}$ we have \begin{enumerate}[label=\upshape(\roman*)] \item $|E_{d}(N_{D_{1}}(x), N_{D_{2}}(x))|\leq 200k^{2}/n$; \item there are at most~$64k^{3}/n^{2}$ $d$-edges~$e$ in~$G$ with the property that~$e$ lies in some distinguishable $(x,c,\mathcal{P})$-gadget in~$G$ whose $c$-edge is not supersaturated. \end{enumerate} \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} Observe that $|N_{D_{1}}(x)|=|N_{D_{2}}(x)|=k/4$. Then, by (arbitrarily extending $N_{D_{1}}(x)$, $N_{D_{2}}(x)$ and) applying~(\ref{eq:quas}), we see that $e(N_{D_{1}}(x),N_{D_{2}}(x))\leq 10k^{3}/n$.\COMMENT{Suppose now that for at least~$k/16$ colours $d\in D_{3}$, there are at least~$200k^{2}/n$ edges of~$G$ in the set~$E_{d}(N_{D_{1}}(x),N_{D_{2}}(x))$. Then $e(N_{D_{1}}(x),N_{D_{2}}(x))\geq 200k^{3}/16n$, a contradiction.} Thus there is a set $\hat{D}_{3}\subseteq D_{3}$ of size $|\hat{D}_{3}|\geq 3k/16$ such that each $d\in\hat{D}_{3}$ satisfies~(i). Next, notice that, since $r(G)=s$, there are at most~$s/k\leq k^{3}/2^{22}n^{2}$ saturated $c$-edges in~$G$\COMMENT{$r(G)\coloneqq k|\text{Sat}(G)|+\sum_{e\in\text{Unsat}(G)}\text{sat}(e)=s$ implies that $k|\text{Sat}(G)|\leq s$.}. 
Suppose for a contradiction that at least~$k/16$ colours $d\in D_{3}$ are such that there are at least~$64k^{3}/n^{2}$ $d$-edges~$e$ in~$G$ with the property that~$e$ lies in some distinguishable $(x,c,\mathcal{P})$-gadget in~$G$ whose $c$-edge is not supersaturated. Then, by considering the contribution of these distinguishable $(x,c,\mathcal{P})$-gadgets to~$r(G)$, and accounting for saturated $c$-edges, we obtain that\COMMENT{The total number of these distinguishable gadgets is at least $(k/16) \cdot 32k^{3}/n^{2}$, where we have divided $64k^{3}/n^{2}$ by~$2$ because each distinguishable gadget can contain up to~$2$ of these $d$-edges. Then it may be that some of these gadgets do not `contribute' to~$r(G)$ because their~$c$-edge is saturated. Since this occurs only if the saturation of such a $c$-edge is in $\{k+1, k+2, k+3, k+4, k+5\}$ and there are at most $k^{3}/2^{22}n^{2}$ saturated $c$-edges, subtracting $5k^{3}/2^{22}n^{2}$ accounts for this.} $r(G)\geq (k/16)\cdot 32k^{3}/n^{2} - 5k^{3}/2^{22}n^{2} >s$, a contradiction. Thus there is a set $\Tilde{D}_{3}\subseteq D_{3}$ of size $|\Tilde{D}_{3}|\geq 3k/16$ such that each $d\in\Tilde{D}_{3}$ satisfies~(ii). We define $D_{3}^{\text{good}}\coloneqq \hat{D}_{3}\cap\Tilde{D}_{3}$, and note that $|D_{3}^{\text{good}}|\geq k/8$. \noclaimproof {} We also define $D_{4}^{\text{good}}\subseteq D_{4}$ to be the set of colours $d_{4}\in D_{4}$ such that the $c$-edge~$e$ incident to the $d_{4}$-neighbour of~$x$ in~$G$ satisfies $\text{sat}(e)\leq k-1$. Observe that $|D_{4}^{\text{good}}|\geq k/8$, since otherwise there are at least~$k/16$ saturated $c$-edges\COMMENT{There are at least $k/8$ colours $d_{4}\in D_{4}$ which are not good, and any fixed $c$-edge may be incident to the $d_{4}$-neighbour of~$x$ for up to two of these bad $d_{4}$.} in~$G$, whence $r(G)\geq k^{2}/16>s$, a contradiction. 
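Indeed, each saturated $c$-edge contributes~$k$ to~$r(G)$, so at least~$k/16$ saturated $c$-edges would give
\[
r(G)\geq k\cdot\frac{k}{16}=\frac{k^{2}}{16}>\frac{k^{4}}{2^{22}n^{2}}\geq s,
\]
where the strict inequality holds since $k/4=|N_{D_{1}}(x)|\leq n$ and thus $k^{4}/2^{22}n^{2}\leq 16k^{2}/2^{22}<k^{2}/16$.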
We now show that there are many choices of a vector~$(d_{1},d_{2},d_{3},d_{4},\overrightarrow{f_{1}},\overrightarrow{f_{2}})$ where each $d_{i}\in D_{i}$ and each $\overrightarrow{f_{j}}$ is an edge $f_{j}\in E_{d_{3}}(G)$ together with an identification of which endpoints will play which role, such that each vector uniquely gives rise to a candidate of a twist system $H\subseteq G$. We can begin to construct such a candidate by choosing $d_{4}\in D_{4}^{\text{good}}$ and letting~$u_{7}$ denote the $d_{4}$-neighbour of~$x$ in~$G$, and letting~$u_{8}$ denote the $c$-neighbour of~$u_{7}$. Secondly, we choose $d_{1}\in D_{1}$, avoiding the colour of the edge~$xu_{8}$ (if it is present), and let~$u_{5}$ denote the $d_{1}$-neighbour of~$u_{7}$, and let~$u_{9}$ denote the $d_{1}$-neighbour of~$x$. Next, we choose $d_{2}\in D_{2}$, avoiding the colours of the edges $u_{5}u_{8}$, $u_{5}x$, $u_{8}x$, $u_{8}u_{9}$ in~$G$ (if they are present), and let~$u_{6}$ denote the $d_{2}$-neighbour of~$u_{8}$, and let~$u_{10}$ denote the $d_{2}$-neighbour of~$x$. Then, we choose $d_{3}\in D_{3}^{\text{good}}$, avoiding the colours of all edges in~$E_{G}(\{x,u_{5},u_{6},\dots,u_{10}\})$. We let~$u_{3},u_{4},u_{11},u_{12}$ denote the $d_{3}$-neighbours of~$u_{5},u_{6},u_{9},u_{10}$, respectively. Finally, we choose two distinct edges $f_{1},f_{2}\in E_{d_{3}}(G)$ which are not incident to any vertex in $\{x, u_{3}, u_{4},\dots,u_{12}\}$, and we choose which endpoint of~$f_{1}$ will play the role of~$u_{1}$ and which will play the role of~$u_{2}$, and choose which endpoint of~$f_{2}$ will play the role of~$u_{13}$ and which will play the role of~$u_{14}$. Let~$\Lambda$ denote the set of all possible vectors~$(d_{1},d_{2},d_{3},d_{4},\overrightarrow{f_{1}},\overrightarrow{f_{2}})$ that can be chosen in this way, so that~$|\Lambda|\geq\frac{k}{8}\cdot\frac{3k}{16}\cdot\frac{k}{8}\cdot\frac{k}{16}\cdot\frac{n}{4}\cdot2\cdot\frac{n}{4}\cdot2=3k^{4}n^{2}/2^{16}$. 
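Indeed, the factors in this lower bound correspond, in order, to the choices of $d_{4}$, $d_{1}$, $d_{2}$, $d_{3}$, $\overrightarrow{f_{1}}$ and~$\overrightarrow{f_{2}}$: we have $|D_{4}^{\text{good}}|\geq k/8$; the colour~$d_{1}$ avoids at most one colour, leaving at least $k/4-1\geq 3k/16$ choices; the colour~$d_{2}$ avoids at most four colours, leaving at least $k/4-4\geq k/8$ choices; the colour~$d_{3}$ avoids the colours of at most $\binom{7}{2}=21$ edges, leaving at least $k/8-21\geq k/16$ choices; and each of $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$ admits at least $2\cdot n/4$ choices (an edge of colour~$d_{3}$ together with an orientation). (The lower bounds on the numbers of colour choices hold for the values of~$k$ we consider.) The product then evaluates as claimed:
\[
\frac{k}{8}\cdot\frac{3k}{16}\cdot\frac{k}{8}\cdot\frac{k}{16}\cdot\Big(2\cdot\frac{n}{4}\Big)^{2}=\frac{3k^{4}}{2^{14}}\cdot\frac{n^{2}}{4}=\frac{3k^{4}n^{2}}{2^{16}}.
\]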
Further, let~$H(\lambda)\subseteq G$ denote the labelled subgraph of~$G$ corresponding to~$\lambda\in\Lambda$ in the above way. If~$H(\lambda)$ is a twist system, then we sometimes say that we `twist on~$\lambda$' to mean that we perform the twist operation to obtain~$\text{twist}_{H(\lambda)}(G)$ from~$G$. It is clear that~$H(\lambda)$ is uniquely determined for each vector $\lambda\in\Lambda$, and that~$H(\lambda)$ satisfies conditions (i)--(vi) of the definition of a twist system. However, some~$H(\lambda)$ may fail to satisfy~(vii), and some may fail to satisfy condition (b) in the definition of adjacency in~$B_{s}$. We now show that only for a small proportion of~$\lambda\in\Lambda$ does either of these problems occur. We begin by ensuring that most $\lambda\in\Lambda$ give rise to twist systems. \begin{claim} There is a subset $\Lambda_{1}\subseteq\Lambda$ such that $|\Lambda_{1}|\geq9|\Lambda|/10$ and~$H(\lambda)$ is a twist system for all $\lambda\in\Lambda_{1}$. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} Fix any choice of $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$ and $\overrightarrow{f_{1}}, \overrightarrow{f_{2}}$ appearing concurrently\COMMENT{So that we can assume the subgraph these colours determine is not `degenerate'} in some $\lambda\in\Lambda$, and note that there are at most~$(k/4)^{2}\cdot n^{2}$ such choices. Here and throughout the remainder of the proof of Lemma~\ref{masterswitch}, we write~$u_{7}$ for the $d_{4}$-neighbour of~$x$, we write~$u_{8}$ for the $c$-neighbour of~$u_{7}$, and so on, where the choice of $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$ will always be clear from context. Note that fixing $d_{3}$, $d_{4}$ only fixes the vertices $x$, $u_{7}$, $u_{8}$. 
There are at most $10k^{3}/n$ pairs $(d_{1},d_{2})$ with each $d_{i}\in D_{i}$ such that there is an edge $u_{5}u_{6}\in E(G)$, since otherwise $e(N_{D_{1}}(u_{7}), N_{D_{2}}(u_{8}))>10k^{3}/n$, contradicting~(\ref{eq:quas})\COMMENT{for any extension of $N_{D_{1}}(u_{7})$, $N_{D_{2}}(u_{8})$ to sets of size~$k$. This technicality has already been mentioned once so now I reduce it to a comment. And from now on I even omit it from comments.}. Similarly, there are at most $10k^{3}/n$ pairs $(d_{1}, d_{2})$ with each $d_{i}\in D_{i}$ such that $u_{9}u_{10}$ is an edge of~$G$\COMMENT{Otherwise, $e(N_{D_{1}}(x), N_{D_{2}}(x))>10k^{3}/n$, contradicting~(\ref{eq:quas}).}. We deduce that there are at most $(20k^{3}/n)\cdot(k/4)^{2}\cdot n^{2}=5k^{5}n/4$ vectors $\lambda\in\Lambda$ for which~$H(\lambda)$ is such that either $u_{5}u_{6}$ or $u_{9}u_{10}$ is an edge of~$G$. Now fix instead $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{2}}$. Note that\COMMENT{$|D^{*}|=k+1$.} $|N_{G}(u_{3})\cup N_{G}(u_{4})|\leq 2k+2$ so that there are at most $4k+4$ choices of $\overrightarrow{f_{1}}$ such that either $u_{1}u_{3}$ or $u_{2}u_{4}$ is an edge of~$G$. Analysing the pairs $u_{11}u_{13}$ and $u_{12}u_{14}$ similarly\COMMENT{Fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{1}}$. Note that $|N(u_{11})\cup N(u_{12})|\leq 2k+2$ so that there are at most $4k+4$ choices of $\overrightarrow{f_{2}}$ such that either $u_{11}u_{13}$ or $u_{12}u_{14}$ is an edge of~$G$.}, we deduce that altogether, there are at most $5k^{5}n/4+2((k/4)^{4}\cdot n\cdot(4k+4)) \leq 2k^{5}n\leq|\Lambda|/10$ vectors $\lambda\in\Lambda$ for which~$H(\lambda)$ fails to be a twist system. 
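The final estimate here can be checked directly:
\[
\frac{5k^{5}n}{4}+2\Big(\frac{k}{4}\Big)^{4}n(4k+4)=\frac{5k^{5}n}{4}+\frac{k^{4}(k+1)n}{32}\leq\frac{5k^{5}n}{4}+\frac{k^{5}n}{16}=\frac{21k^{5}n}{16}\leq 2k^{5}n,
\]
using $k+1\leq 2k$; moreover $2k^{5}n\leq|\Lambda|/10$ follows from the bound $|\Lambda|\geq 3k^{4}n^{2}/2^{16}$, since $k$ is small compared with~$n$ in our setting.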
\noclaimproof {} We now show that only for a small proportion of $\lambda\in\Lambda_{1}$ does~$H(\lambda)$ fail to give rise to an edge in~$B_{s}$, by showing that most~$H(\lambda)$ satisfy the following properties: \begin{itemize} \item [(P1)]$\text{twist}_{H(\lambda)}(G)\in Q_{s+1}^{D^{*}}$; \item [(P2)] Deletion of the six $d_{3}$-edges in~$H(\lambda)$ does not decrease~$r(G)$; \item [(P3)] The canonical $(x,c,\mathcal{P})$-gadget of the twist~$\text{twist}_{H(\lambda)}(G)$ is distinguishable, and it is the only $(x,c,\mathcal{P})$-gadget which is in~$\text{twist}_{H(\lambda)}(G)$ but not in~$G$.\COMMENT{Recall that $\text{sat}_{G}(e)\leq k-1$ for the $c$-edge~$e$ of any~$H(\lambda)$ (since we choose $d_{4}\in D_{4}^{\text{good}}$) so this ensures~$r(G)$ increases, and by not more than one.} \end{itemize} Firstly, since~$G\in Q_{s}^{D^{*}}$ and we only create six new edges in any twist, it is clear that~$H(\lambda)$ satisfies~(P1) for all $\lambda\in\Lambda_{1}$. \begin{claim}\label{rgnodec} There is a subset $\Lambda_{2}\subseteq \Lambda_{1}$ such that $|\Lambda_{2}|\geq9|\Lambda_{1}|/10$ and~$H(\lambda)$ satisfies property~(P2) for all $\lambda\in\Lambda_{2}$. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} Fix $d_{1}\in D_{1}$, $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$ appearing concurrently in some $\lambda\in\Lambda_{1}$. Let~$F_{d_{3}}(G)\subseteq E_{d_{3}}(G)$ be the set of $d_{3}$-edges~$e$ in~$G$ with the property that~$e$ is in some distinguishable $(x,c,\mathcal{P})$-gadget in~$G$ whose $c$-edge is not supersaturated. Recall that $|F_{d_{3}}(G)|\leq 64k^{3}/n^{2}$ since $d_{3}\in D_{3}^{\text{good}}$. Observe then that there are at most $128k^{3}/n^{2}$ colours $d_{2}\in D_{2}$ such that~$u_{10}$ is the endpoint of an edge in~$F_{d_{3}}(G)$. 
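Indeed, since $d_{3}\in D_{3}^{\text{good}}$, the set~$F_{d_{3}}(G)$ contains at most $64k^{3}/n^{2}$ edges, so the number of vertices incident to an edge of~$F_{d_{3}}(G)$ is at most
\[
2\cdot\frac{64k^{3}}{n^{2}}=\frac{128k^{3}}{n^{2}}.
\]
Since~$u_{10}$ is the $d_{2}$-neighbour of~$x$, each such vertex accounts for at most one colour $d_{2}\in D_{2}$.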
Thus for all but at most $(k/4)^{3}\cdot n^{2}\cdot 128k^{3}/n^{2}=2k^{6}$ choices of $\lambda=(d_{1},d_{2},d_{3},d_{4}, \overrightarrow{f_{1}}, \overrightarrow{f_{2}})\in\Lambda_{1}$, the edge~$u_{10}u_{12}$ is not in~$F_{d_{3}}(G)$. Now fix instead $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{2}}$. Then since $d_{3}\in D_{3}^{\text{good}}$, there are at most $128k^{3}/n^{2}$ choices of~$\overrightarrow{f_{1}}$ such that~$f_{1}\in F_{d_{3}}(G)$, so that for all but at most $(k/4)^{4}\cdot n\cdot 128k^{3}/n^{2}=k^{7}/2n$ vectors $\lambda \in\Lambda_{1}$,~$H(\lambda)$ is such that $f_{1}\notin F_{d_{3}}(G)$. Similar analyses\COMMENT{Fix $d_{2}\in D_{2}$, $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$. At most $128k^{3}/n^{2}$ colours $d_{1}\in D_{1}$ are such that~$u_{9}$ is the endpoint of an edge in~$F_{d_{3}}(G)$. Thus for all but at most $(k/4)^{3}\cdot n^{2}\cdot 128k^{3}/n^{2}=2k^{6}$ choices of $\lambda\in\Lambda$, the edge~$u_{9}u_{11}$ is not in~$F_{d_{3}}(G)$. \newline Fix $d_{2}\in D_{2}$, $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$. In particular, we have fixed which vertex of~$V$ plays the role of~$u_{7}$. At most $128k^{3}/n^{2}$ colours $d_{1}\in D_{1}$ are such that~$u_{5}$ is the endpoint of an edge in~$F_{d_{3}}(G)$. Thus for all but at most $(k/4)^{3}\cdot n^{2}\cdot 128k^{3}/n^{2}=2k^{6}$ choices of $\lambda\in\Lambda$, the edge~$u_{3}u_{5}$ is not in~$F_{d_{3}}(G)$. \newline Fix $d_{1}\in D_{2}$, $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$. In particular, we have fixed which vertex of~$V$ plays the role of~$u_{8}$. At most $128k^{3}/n^{2}$ colours $d_{2}\in D_{2}$ are such that~$u_{6}$ is the endpoint of an edge in~$F_{d_{3}}(G)$. 
Thus for all but at most $(k/4)^{3}\cdot n^{2}\cdot 128k^{3}/n^{2}=2k^{6}$ choices of $\lambda\in\Lambda$, the edge~$u_{4}u_{6}$ is not in~$F_{d_{3}}(G)$. \newline Fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{1}}$. Then since $d_{3}\in D_{3}^{\text{good}}$, there are at most $128k^{3}/n^{2}$ choices of~$\overrightarrow{f_{2}}$ such that~$f_{2}\in F_{d_{3}}(G)$, so that for all but at most $(k/4)^{4}\cdot n\cdot 128k^{3}/n^{2}=k^{7}/2n$ vectors $\lambda \in\Lambda$, the corresponding labelled subgraph of~$G$ is such that $f_{2}\notin F_{d_{3}}(G)$.} show that there are at most $8k^{6}+k^{7}/n\leq 9k^{6}\leq|\Lambda_{1}|/10$ choices of $\lambda\in\Lambda_{1}$ such that $\{u_{1}u_{2},u_{3}u_{5},u_{4}u_{6},u_{9}u_{11},u_{10}u_{12},u_{13}u_{14}\}\cap F_{d_{3}}(G)\neq\emptyset$. By definition of~$F_{d_{3}}(G)$ and supersaturation of a $c$-edge, we deduce that for all remaining $\lambda\in \Lambda_{1}$,~$H(\lambda)$ is such that deleting the edges $u_{1}u_{2}$, $u_{3}u_{5}$, $u_{4}u_{6}$, $u_{9}u_{11}$, $u_{10}u_{12}$, $u_{13}u_{14}$ does not decrease~$r(G)$.\COMMENT{For any of these six edges, say~$e$, either~$e$ is not in a distinguishable $(x,c,\mathcal{P})$-gadget, whence deleting~$e$ does not decrease~$r(G)$, or the unique (by distinguishability) distinguishable $(x,c,\mathcal{P})$-gadget that~$e$ is in is such that the $c$-edge of this gadget is supersaturated. Then deletion of~$e$ decreases the saturation of this $c$-edge by precisely one, which does not decrease~$r(G)$. The worst case is that all six edges $u_{1}u_{2}$, $u_{3}u_{5}$, $u_{4}u_{6}$, $u_{9}u_{11}$, $u_{10}u_{12}$, $u_{13}u_{14}$ are each in separate distinguishable $(x,c,\mathcal{P})$-gadgets each using the same supersaturated $c$-edge of~$G$. 
But by the definition of supersaturation, deletion of these six edges still does not decrease~$r(G)$.} \noclaimproof {} When we perform a twist operation on a twist system~$H$ in~$G$, since the only new edges we add have some colour in~$D_{3}$, we have that for any new distinguishable $(x,c,\mathcal{P})$-gadget~$J$ we create in the twist, one of the new edges $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{5}u_{6}$, $u_{9}u_{10}$, $u_{11}u_{13}$, $u_{12}u_{14}$ of the twist is playing the role of either $v_{5}v_{6}$ or $v_{9}v_{10}$ in~$J$. (Here and throughout the rest of the proof, we imagine completed $(x,c,\mathcal{P})$-gadgets~$J$ as having vertices labelled $x, v_{5}, \dots, v_{10}$, where the role of~$v_{i}$ corresponds to the role of~$u_{i}$ in Figure~\ref{fig:ts}.) We now show that for most $\lambda\in\Lambda_{2}$,~$H(\lambda)$ satisfies property~(P3). This is the most delicate part of the argument, and we break it into three more claims. \begin{claim}\label{p2claim1} There is a subset $\Lambda_{3}\subseteq\Lambda_{2}$ such that $|\Lambda_{3}|\geq9|\Lambda_{2}|/10$ and all $\lambda\in\Lambda_{3}$ are such that if~$J$ is an $(x,c,\mathcal{P})$-gadget that is in~$\text{twist}_{H(\lambda)}(G)$ but not in~$G$, then the pair~$u_{9}u_{10}$ of~$H(\lambda)$ plays the role of~$v_{9}v_{10}$. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} Since the only edges added by any twist operation all have colour in~$D_{3}$, it suffices to show that at most~$|\Lambda_{2}|/10$ vectors $\lambda\in\Lambda_{2}$ are such that twisting on~$\lambda$ creates an $(x,c,\mathcal{P})$-gadget~$J$ for which either \begin{enumerate}[label=\upshape(\roman*)] \item one of the pairs $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{5}u_{6}$, $u_{11}u_{13}$, $u_{12}u_{14}$ of~$H(\lambda)$ plays the role of~$v_{9}v_{10}$, or \item the edge~$v_{9}v_{10}$ of~$J$ is present in~$G$. 
\end{enumerate} To address~(i), we show that $u_{1},u_{2},u_{5},u_{13},u_{14}\notin N_{G}(x)$ for all but at most~$|\Lambda_{2}|/20$ vectors $\lambda\in\Lambda_{2}$. Note firstly that at most $10k^{3}/n$ pairs $(d_{1},d_{4})$ where $d_{1}\in D_{1}$, $d_{4}\in D_{4}^{\text{good}}$ are such that $u_{5}\in N_{G}(x)$, since otherwise $e(N_{D_{4}}(x), N_{G}(x))>10k^{3}/n$, contradicting~(\ref{eq:quas}). Thus, at most $(k/4)^{2}\cdot n^{2}\cdot 10k^{3}/n=5k^{5}n/8$ choices of $\lambda\in\Lambda_{2}$ are such that $u_{5}\in N_{G}(x)$. Now fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{2}}$ appearing concurrently in some $\lambda\in\Lambda_{2}$. Notice that there are at most $2k+2$ choices of~$\overrightarrow{f_{1}}$ such that~$f_{1}$ has at least one endpoint in~$N_{G}(x)$. Analysing~$\overrightarrow{f_{2}}$ similarly\COMMENT{Fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{1}}$. Notice that there are at most $2k+2$ choices of~$\overrightarrow{f_{2}}$ such that~$f_{2}$ has either endpoint in~$N_{G}(x)$. Now $5k^{5}n/8 +2((k/4)^{4}\cdot n\cdot(2k+2)\leq k^{5}n$.}, we deduce that there are at most $5k^{5}n/8+2(k/4)^{4}(2k+2)n\leq|\Lambda_{2}|/20$ choices of $\lambda\in\Lambda_{2}$ such that at least one of $u_{1}$, $u_{2}$, $u_{5}$, $u_{13}$, $u_{14}$ lies in~$N_{G}(x)$. Turning now to~(ii), we show that at most~$|\Lambda_{2}|/20$ vectors~$\lambda\in\Lambda_{2}$ are such that twisting on~$\lambda$ creates an $(x,c,\mathcal{P})$-gadget~$J$ for which the edge~$v_{9}v_{10}$ of~$J$ is present in~$G$ (and thus one of the pairs $u_{1}u_{3},u_{2}u_{4},u_{5}u_{6},u_{9}u_{10},u_{11}u_{13},u_{12}u_{14}$ of~$H(\lambda)$ plays the role of~$v_{5}v_{6}$). To do this, we use some of the properties of~$D_{3}^{\text{good}}$. Fix $d_{2}\in D_{2}$, $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$ appearing concurrently in some $\lambda \in \Lambda_{2}$. 
Note that since $d_{3}\in D_{3}^{\text{good}}$, there are at most $200k^{2}/n$ pairs $(d_{1}', d_{2}')$ where $d_{1}'\in D_{1}$, $d_{2}'\in D_{2}$, such that there is a~$K_{3}$ in~$G$ with vertices $x$, $w_{1}$, $w_{2}$, where~$w_{i}$ is the $d_{i}'$-neighbour of~$x$ for $i\in\{1,2\}$, and the edge $w_{1}w_{2}$ is coloured~$d_{3}$. Let the set of these pairs $(d_{1}',d_{2}')$ be denoted~$L(d_{3})$. For each pair $\ell=(d_{1}',d_{2}')\in L(d_{3})$, let~$z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{10}$. Similarly, let~$z_{\ell}^{2}$ denote the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{10}$. Define~$M\coloneqq\bigcup_{\ell\in L(d_{3})}\{z_{\ell}^{1}, z_{\ell}^{2}\}$, so that $|M|\leq 400k^{2}/n$. Since there are at most~$400k^{2}/n$ choices of $d_{1}\in D_{1}$ for which we obtain $u_{9}\in M$, we deduce that for all but at most $(k/4)^{3}\cdot n^{2}\cdot 400k^{2}/n=25k^{5}n/4$ vectors $\lambda\in\Lambda_{2}$,~$H(\lambda)$ is such that adding the edge $u_{9}u_{10}$ in colour~$d_{3}$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where $u_{9}u_{10}$ plays the role of $v_{5}v_{6}$ in~$J$ and the edge playing the role of $v_{9}v_{10}$ in~$J$ is already present in~$G$ before the twist\COMMENT{Only when we choose $d_{1}\in D_{1}$ such that $u_{9}\in M$ is it true that adding the edge $u_{9}u_{10}$ with colour $d_{3}$ completes a $C_{4}$ with colours $d_{1}', c, d_{2}', d_{3}$ for which there is already a $K_{3}$ with colours $d_{1}', d_{2}', d_{3}$ using the $d_{1}'$ and $d_{2}'$ edges at $x$. In fact only those structures amongst these for which there is a $d_{4}'$-edge (for some $d_{4}'\in D_{4}$) in the necessary place actually create one of the new gadgets we are trying to rule out, but this only helps us.}. One can observe similarly\COMMENT{Now, fix $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$. Since $d_{3}\in D_{3}^{\text{good}}$, observe that $|L(d_{3})|\leq 200k^{2}/n$. 
Now for each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{6}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{6}$. Define~$M\coloneqq\bigcup_{\ell\in L(d_{3})}\{z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 400k^{2}/n$. Since there are at most~$400k^{2}/n$ choices of $d_{1}\in D_{1}$ for which we obtain $u_{5}\in M$, we deduce that for all but at most $(k/4)^{3}\cdot n^{2}\cdot 400k^{2}/n=25k^{5}n/4$ vectors $\lambda\in\Lambda_{2}$,~$H(\lambda)$ is such that adding the edge $u_{5}u_{6}$ in colour~$d_{3}$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where $u_{5}u_{6}$ plays the role of $v_{5}v_{6}$ in~$J$ and the edge playing the role of $v_{9}v_{10}$ in~$J$ is already present in~$G$ before the twist.} that for all but at most~$25k^{5}n/4$ vectors~$\lambda\in\Lambda_{2}$,~$H(\lambda)$ is such that adding the edge $u_{5}u_{6}$ in colour~$d_{3}$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where $u_{5}u_{6}$ plays the role of $v_{5}v_{6}$ in~$J$ and the edge playing the role of $v_{9}v_{10}$ in~$J$ is already present in~$G$ before the twist. Now fix instead $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{2}}$ appearing concurrently in some $\lambda\in\Lambda_{2}$. For each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $y_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{4}$, let $y_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{4}$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{3}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{3}$. Define $M\coloneqq\bigcup_{\ell\in L(d_{3})}\{y_{\ell}^{1}, y_{\ell}^{2}, z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 800k^{2}/n$. 
We deduce that there are at most $1600k^{2}/n$ choices of~$\overrightarrow{f_{1}}$ such that~$f_{1}$ has an endpoint in~$M$, and that for all remaining choices of~$\overrightarrow{f_{1}}$, twisting on $\lambda=(d_{1},d_{2},d_{3},d_{4},\overrightarrow{f_{1}},\overrightarrow{f_{2}})$ cannot create a new $(x,c,\mathcal{P})$-gadget~$J$ where the new $d_{3}$-edges $u_{1}u_{3}$ or $u_{2}u_{4}$ play the role of $v_{5}v_{6}$ in~$J$ and the edge~$v_{9}v_{10}$ of~$J$ is present in~$G$. Analysing~$\overrightarrow{f_{2}}$ similarly\COMMENT{Fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{1}}$. Since $d_{3}\in D_{3}^{\text{good}}$, observe that $|L(d_{3})|\leq 200k^{2}/n$. Now for each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $y_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{11}$, let $y_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{11}$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{12}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{12}$. Define $M\coloneqq\bigcup_{\ell\in L(d_{3})}\{y_{\ell}^{1}, y_{\ell}^{2}, z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 800k^{2}/n$. 
We deduce that there are at most $1600k^{2}/n$ choices of~$\overrightarrow{f_{2}}$ such that~$f_{2}$ has an endpoint in~$M$, and that for all remaining choices of~$\overrightarrow{f_{2}}$, twisting on $\lambda=(d_{1},d_{2},d_{3},d_{4},\overrightarrow{f_{1}},\overrightarrow{f_{2}})$ cannot create a new $(x,c,\mathcal{P})$-gadget~$J$ where the new $d_{3}$-edges $u_{11}u_{13}$ or $u_{12}u_{14}$ play the role of $v_{5}v_{6}$ in~$J$ and the edge~$v_{9}v_{10}$ of~$J$ is present in~$G$.}, we conclude that for all but at most\COMMENT{$25k^{5}n/4$ for $u_{9}u_{10}$, $25k^{5}n/4$ for $u_{5}u_{6}$, and $2((k/4)^{4}\cdot n\cdot 2000k^{2}/n)$ for $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{11}u_{13}$, $u_{12}u_{14}$.} $13k^{5}n\leq|\Lambda_{2}|/20$ choices of $\lambda\in\Lambda_{2}$, twisting on~$\lambda$ cannot create a new $(x,c,\mathcal{P})$-gadget~$J$ for which the edge~$v_{9}v_{10}$ of~$J$ is present in~$G$. \noclaimproof {} \begin{claim} There is a subset $\Lambda_{4}\subseteq\Lambda_{3}$ such that $|\Lambda_{4}|\geq9|\Lambda_{3}|/10$ and all $\lambda\in\Lambda_{4}$ are such that if~$J$ is an $(x,c,\mathcal{P})$-gadget that is in~$\text{twist}_{H(\lambda)}(G)$ but not in~$G$, then the pair~$u_{5}u_{6}$ of~$H(\lambda)$ plays the role of~$v_{5}v_{6}$. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} By Claim~\ref{p2claim1}, it will suffice to show that at most~$|\Lambda_{3}|/10$ vectors $\lambda\in\Lambda_{3}$ are such that twisting on~$\lambda$ creates an $(x,c,\mathcal{P})$-gadget~$J$ for which either \begin{enumerate}[label=\upshape(\roman*)] \item one of the pairs~$u_{1}u_{3},u_{2}u_{4},u_{11}u_{13},u_{12}u_{14}$ of~$H(\lambda)$ plays the role of~$v_{5}v_{6}$, and~$u_{9}u_{10}$ plays the role of~$v_{9}v_{10}$, or \item the edge~$v_{5}v_{6}$ of~$J$ is present in~$G$ and the pair~$u_{9}u_{10}$ of~$H(\lambda)$ plays the role of~$v_{9}v_{10}$. 
\end{enumerate} To address~(i), fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{f_{2}}$ appearing concurrently in some $\lambda\in\Lambda_{3}$. Let~$a_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{4}$, let~$a_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{4}$, let~$b_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{3}$, and let~$b_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{3}$. Since there are at most~$8$ choices of~$\overrightarrow{f_{1}}$ with an endpoint in~$\{a_{1}, a_{2}, b_{1}, b_{2}\}$, we deduce that for all remaining choices of~$\overrightarrow{f_{1}}$, twisting on $\lambda=(d_{1},d_{2},d_{3},d_{4},\overrightarrow{f_{1}},\overrightarrow{f_{2}})$ cannot create an $(x,c,\mathcal{P})$-gadget~$J$ for which the new $d_{3}$-edges~$u_{1}u_{3}$ or~$u_{2}u_{4}$ play the role of~$v_{5}v_{6}$ in~$J$ and~$u_{9}u_{10}$ plays the role of~$v_{9}v_{10}$. Analysing~$\overrightarrow{f_{2}}$ similarly, we conclude that we must discard at most~$k^{4}n/16\leq|\Lambda_{3}|/20$ vectors $\lambda\in\Lambda_{3}$ to account for~(i). Turning now to~(ii), write $D_{4}=\{d_{4}^{1}, d_{4}^{2},\dots, d_{4}^{k/4}\}$. For each $d_{4}^{i}\in D_{4}$, let~$y_{i}$ be the $d_{4}^{i}$-neighbour of~$x$, let~$z_{i}$ be the $c$-neighbour of~$y_{i}$, define $R_{i}\coloneqq N_{D_{1}}(y_{i})$ and $S_{i}\coloneqq N_{D_{2}}(z_{i})$. Notice that $\sum_{i=1}^{k/4}e(R_{i}, S_{i})\leq 5k^{4}/2n$, since otherwise we obtain a contradiction to~(\ref{eq:quas}) for some pair $(R_{i}, S_{i})$. 
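In more detail, each~$R_{i}$ and~$S_{i}$ has size~$k/4$, so (after arbitrarily extending them to sets of size~$k$) inequality~(\ref{eq:quas}) yields $e(R_{i},S_{i})\leq 10k^{3}/n$ for each~$i$, whence
\[
\sum_{i=1}^{k/4}e(R_{i},S_{i})\leq\frac{k}{4}\cdot\frac{10k^{3}}{n}=\frac{5k^{4}}{2n}.
\]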
We deduce that there are at most $5k^{4}/2n$ triples $(d_{1},d_{2},d_{3})$ with each $d_{i}\in D_{i}$ for which adding the edge $u_{9}u_{10}$ in colour~$d_{3}$ creates an $(x,c,\mathcal{P})$-gadget~$J$ for which $u_{9}u_{10}$ plays the role of $v_{9}v_{10}$ in~$J$ and the edge playing the role of $v_{5}v_{6}$ is already present in~$G$, whence at most $(5k^{4}/2n)\cdot (k/4)\cdot n^{2}=5k^{5}n/8\leq|\Lambda_{3}|/20$ choices of $\lambda\in\Lambda_{3}$ are such that twisting on~$\lambda$ creates an $(x,c,\mathcal{P})$-gadget of this type. \noclaimproof {} \begin{claim}\label{p2claim3} There is a subset $\Lambda_{5}\subseteq\Lambda_{4}$ such that $|\Lambda_{5}|\geq9|\Lambda_{4}|/10$ and all $\lambda\in\Lambda_{5}$ are such that if~$J$ is an $(x,c,\mathcal{P})$-gadget that is in~$\text{twist}_{H(\lambda)}(G)$ but not in~$G$ and the pairs~$u_{5}u_{6},u_{9}u_{10}$ of~$H(\lambda)$ play the roles of the edges~$v_{5}v_{6},v_{9}v_{10}$ of~$J$ respectively, then~$J$ is the canonical $(x,c,\mathcal{P})$-gadget of the twist. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} Fix $d_{3}$, $d_{4}$, $\overrightarrow{f_{1}}$, $\overrightarrow{f_{2}}$ appearing concurrently in some $\lambda\in\Lambda_{4}$. By~(\ref{eq:quas}), we have that $e(N_{D_{2}}(u_{8}), N_{D_{4}}(x))\leq 10k^{3}/n$. We deduce that there are at most $10k^{3}/n$ choices of the pair $(d_{1},d_{2})$ such that the $d_{1}$-neighbour of~$u_{6}$ lies in~$N_{D_{4}}(x)$, whence\COMMENT{The situation we were trying to avoid was the one in which~$u_{5}$ is the endpoint of the $d_{1}cd_{2}$-walk starting at~$u_{6}$, such that the $d_{1}$-neighbour of~$u_{6}$ is also a $d_{4}'$-neighbour of~$x$ for some $d_{4}'\in D_{4}$. 
In this situation, when we twist, we create an unwanted second $(x,c,\mathcal{P})$-gadget where $u_{5}u_{6}$ and $u_{9}u_{10}$ play the roles of $v_{5}v_{6}$ and $v_{9}v_{10}$ respectively.} for all but at most $5k^{5}n/8\leq|\Lambda_{4}|/10$ choices of $\lambda\in\Lambda_{4}$, the canonical $(x,c,\mathcal{P})$-gadget of the twist is the only new $(x,c,\mathcal{P})$-gadget for which $u_{5}u_{6},u_{9}u_{10}$ play the roles of $v_{5}v_{6},v_{9}v_{10}$ respectively. \noclaimproof {} \begin{comment} \begin{claim} There is a subset $\Lambda_{4}\subseteq \Lambda_{3}$ such that $|\Lambda_{4}|\geq9|\Lambda_{3}|/10$ and all $\lambda\in\Lambda_{4}$ are such that if~$J$ is an $(x,c,\mathcal{P})$-gadget that is in~$\text{twist}_{H(\lambda)}(G)$ but not in~$G$, then either the edge~$v_{5}v_{6}$ of~$J$ is present in~$G$, or the pair~$u_{5}u_{6}$ of~$H(\lambda)$ plays the role of~$v_{5}v_{6}$. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} We show that for at least~$9|\Lambda_{3}|/10$ of the vectors $\lambda\in\Lambda_{3}$, the pairs $u_{1}u_{3},u_{2}u_{4},u_{9}u_{10},u_{11}u_{13},u_{12}u_{14}$ each cannot play the role of~$v_{5}v_{6}$ in any $(x,c,\mathcal{P})$-gadget~$J$ created by twisting on~$\lambda$. Note that by Claim~\ref{p2claim1}, the edge~$v_{9}v_{10}$ of~$J$ must either be present in~$G$, or the pair~$u_{5}u_{6}$ of~$H(\lambda)$ must play the role of~$v_{5}v_{6}$. Firstly note that for all $\lambda\in\Lambda_{3}$, if~$J$ is an $(x,c,\mathcal{P})$-gadget created by twisting on~$\lambda$ for which the edge~$v_{9}v_{10}$ of~$J$ is not present in~$G$, then the pair~$u_{9}u_{10}$ of~$H(\lambda)$ cannot play the role of~$v_{5}v_{6}$ in~$J$, since by Claim~\ref{p2claim1} it must play the role of~$v_{9}v_{10}$. 
To show that most $\lambda\in\Lambda_{3}$ are such that twisting on~$\lambda$ cannot create an $(x,c,\mathcal{P})$-gadget~$J$ for which the edge~$v_{9}v_{10}$ of~$J$ is present in~$G$ and~$u_{9}u_{10}$ plays the role of~$v_{5}v_{6}$, we use some of the properties of~$D_{3}^{\text{good}}$. Fix $d_{2}\in D_{2}$, $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$, $\overrightarrow{e_{1}}$, $\overrightarrow{e_{2}}$ appearing concurrently in some $\lambda \in \Lambda_{3}$. Note that since $d_{3}\in D_{3}^{\text{good}}$, there are at most $200k^{2}/n$ pairs $(d_{1}', d_{2}')$ where $d_{1}'\in D_{1}$, $d_{2}'\in D_{2}$, such that there is a~$K_{3}$ in~$G$ with vertices $x$, $w_{1}$, $w_{2}$, where~$w_{i}$ is the $d_{i}'$-neighbour of~$x$ for $i\in\{1,2\}$, and the edge $w_{1}w_{2}$ is coloured~$d_{3}$. Let the set of these pairs $(d_{1}',d_{2}')$ be denoted~$L(d_{3})$. For each pair $\ell=(d_{1}',d_{2}')\in L(d_{3})$, let~$z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{10}$. Similarly, let~$z_{\ell}^{2}$ denote the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{10}$. Define~$M\coloneqq\bigcup_{\ell\in L(d_{3})}\{z_{\ell}^{1}, z_{\ell}^{2}\}$, so that $|M|\leq 400k^{2}/n$. 
Since there are at most~$400k^{2}/n$ choices of $d_{1}\in D_{1}$ for which we obtain $u_{9}\in M$, we deduce that for all but at most $(k/4)^{3}\cdot n^{2}\cdot 400k^{2}/n=25k^{5}n/4$ vectors $\lambda\in\Lambda_{3}$,~$H(\lambda)$ is such that adding the edge $u_{9}u_{10}$ in colour~$d_{3}$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where $u_{9}u_{10}$ plays the role of $v_{5}v_{6}$ in~$J$ and the edge playing the role of $v_{9}v_{10}$ in~$J$ is already present in~$G$ before the twist\COMMENT{Only when we choose $d_{1}\in D_{1}$ such that $u_{9}\in M$ is it true that adding the edge $u_{9}u_{10}$ with colour $d_{3}$ completes a $C_{4}$ with colours $d_{1}', c, d_{2}', d_{3}$ for which there is already a $K_{3}$ with colours $d_{1}', d_{2}', d_{3}$ using the $d_{1}'$ and $d_{2}'$ edges at $x$. In fact only those structures amongst these for which there is a $d_{4}'$-edge (for some $d_{4}'\in D_{4}$) in the necessary place actually create one of the new gadgets we are trying to rule out, but this only helps us.}. Now fix instead $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{e_{2}}$ appearing concurrently in some $\lambda\in\Lambda_{3}$. For each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $y_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{4}$, let $y_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{4}$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{3}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{3}$. Further, let~$a_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{4}$, let~$a_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{4}$, let~$b_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{3}$, and let~$b_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{3}$. 
Define $M\coloneqq\{a_{1},a_{2},b_{1},b_{2}\}\cup\bigcup_{\ell\in L(d_{3})}\{y_{\ell}^{1}, y_{\ell}^{2}, z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 1000k^{2}/n$. We deduce that there are at most $2000k^{2}/n$ choices of~$\overrightarrow{e_{1}}$ such that~$e_{1}$ has an endpoint in~$M$, and that for all remaining choices of~$\overrightarrow{e_{1}}$, twisting on $\lambda=(d_{1},d_{2},d_{3},d_{4},\overrightarrow{e_{1}},\overrightarrow{e_{2}})$ cannot create a new $(x,c,\mathcal{P})$-gadget~$J$ where the new $d_{3}$-edges $u_{1}u_{3}$ or $u_{2}u_{4}$ play the role of $v_{5}v_{6}$ in~$J$.\COMMENT{Ensuring, say, $u_{3}\notin\{a_{1}.b_{1}\}$ means that we cannot create a new~$J$ for which $u_{9}u_{10}$ plays the role of~$v_{9}v_{10}$ and~$u_{1}u_{3}$ plays the role of~$v_{5}v_{6}$, and ensuring $u_{3}\notin\bigcup_{\ell\in L(d_{3})}\{z_{\ell}^{1},z_{\ell}^{2}\}$ means that we cannot create a new~$J$ for which~$v_{9}v_{10}$ is present in~$G$ and~$u_{1}u_{3}$ plays the role of~$v_{5}v_{6}$.} Analysing~$\overrightarrow{e_{2}}$ similarly\COMMENT{Fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{e_{1}}$. Since $d_{3}\in D_{3}^{\text{good}}$, observe that $|L(d_{3})|\leq 200k^{2}/n$. Now for each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $y_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{11}$, let $y_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{11}$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{12}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{12}$. Further, let~$a_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{11}$, let~$a_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{11}$, let~$b_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{12}$, and let~$b_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{12}$. 
Define $M\coloneqq\{a_{1},a_{2},b_{1},b_{2}\}\cup\bigcup_{\ell\in L(d_{3})}\{y_{\ell}^{1}, y_{\ell}^{2}, z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 1000k^{2}/n$. We deduce that there are at most $2000k^{2}/n$ choices of~$\overrightarrow{e_{2}}$ such that~$e_{2}$ has an endpoint in~$M$, and that for all remaining choices of~$\overrightarrow{e_{2}}$, twisting on $\lambda=(d_{1},d_{2},d_{3},d_{4},\overrightarrow{e_{1}},\overrightarrow{e_{2}})$ cannot create a new $(x,c,\mathcal{P})$-gadget~$J$ where the new $d_{3}$-edges $u_{11}u_{13}$ or $u_{12}u_{14}$ play the role of $v_{5}v_{6}$ in~$J$.}, we conclude that for all but at most\COMMENT{$25k^{5}n/4$ for $u_{9}u_{10}$, and $2((k/4)^{4}\cdot n\cdot 2000k^{2}/n)$ for $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{11}u_{13}$, $u_{12}u_{14}$.} $7k^{5}n\leq|\Lambda_{3}|/10$ choices of $\lambda\in\Lambda_{3}$, twisting on~$\lambda$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where any of the new $d_{3}$-edges $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{9}u_{10}$, $u_{11}u_{13}$, $u_{12}u_{14}$ play the role of $v_{5}v_{6}$ in~$J$. \noclaimproof {} \begin{claim} There is a subset $\Lambda_{5}\subseteq\Lambda_{4}$ such that $|\Lambda_{5}|\geq9|\Lambda_{4}|/10$ and all $\lambda\in\Lambda_{5}$ are such that if~$J$ is an $(x,c,\mathcal{P})$-gadget that is in~$\text{twist}_{H(\lambda)}(G)$ but not in~$G$, then neither of the edges~$v_{5}v_{6}$ and~$v_{9}v_{10}$ of~$J$ is present in~$G$. \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} \noclaimproof {} \begin{claim} There is a subset $\Lambda_{3}\subseteq\Lambda_{2}$ such that $|\Lambda_{3}|\geq9|\Lambda_{2}|/10$ and all $\lambda\in\Lambda_{3}$ are such that if~$J$ is an $(x,c,\mathcal{P})$-gadget that is in~$\text{twist}_{H(\lambda)}(G)$ but not in~$G$, then \begin{enumerate}[label=\upshape(\roman*)] \item the edge~$u_{9}u_{10}$ of~$H(\lambda)$ plays the role of~$v_{9}v_{10}$ in~$J$; \item the edge~$v_{5}v_{6}$ of~$J$ is not present in~$G$. 
\end{enumerate}\label{p2claim3} \end{claim} \removelastskip\penalty55 \noindent{\em Proof of claim: }{} We begin by showing that for most choices of $\lambda\in\Lambda_{2}$, we have that $u_{1}$, $u_{2}$, $u_{5}$, $u_{13}$, $u_{14}\notin N_{G}(x)$, which ensures that none of $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{5}u_{6}$, $u_{11}u_{13}$, $u_{12}u_{14}$ can play the role of $v_{9}v_{10}$ in any new $(x,c,\mathcal{P})$-gadget. Indeed, note that at most $10k^{3}/n$ pairs $(d_{1},d_{4})$ where $d_{1}\in D_{1}$, $d_{4}\in D_{4}^{\text{good}}$ are such that $u_{5}\in N(x)$, since otherwise $e(N_{D_{4}}(x), N(x))>10k^{3}/n$, contradicting~(\ref{eq:quas}). Thus, at most $(k/4)^{2}\cdot n^{2}\cdot 10k^{3}/n=5k^{5}n/8$ choices of $\lambda\in\Lambda_{2}$ are such that $u_{5}\in N(x)$. Now fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{e_{2}}$. Notice that there are at most $2k+2$ choices of~$\overrightarrow{e_{1}}$ such that~$e_{1}$ has either endpoint in~$N(x)$. Analysing~$\overrightarrow{e_{2}}$ similarly\COMMENT{Fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{e_{1}}$. Notice that there are at most $2k+2$ choices of~$\overrightarrow{e_{2}}$ such that~$e_{2}$ has either endpoint in~$N(x)$. Now $5k^{5}n/8 +2((k/4)^{4}\cdot n\cdot(2k+2))\leq k^{5}n$.}, we deduce that there are at most $k^{5}n$ choices of $\lambda\in\Lambda_{2}$ such that any of $u_{1}$, $u_{2}$, $u_{5}$, $u_{13}$, $u_{14}$ lies in~$N(x)$, whence all remaining~$\lambda\in\Lambda_{2}$ satisfy~(i). We now check that for most $\lambda\in\Lambda_{2}$, twisting on~$\lambda$ does not create an $(x,c,\mathcal{P})$-gadget~$J$ where~$u_{9}u_{10}$ in~$H(\lambda)$ plays the role of $v_{9}v_{10}$ in~$J$ and the edge $v_{5}v_{6}$ of~$J$ is already present in~$G$. Write $D_{4}=\{d_{4}^{1}, d_{4}^{2},\dots, d_{4}^{k/4}\}$. 
For each $d_{4}^{i}\in D_{4}$, let~$y_{i}$ be the $d_{4}^{i}$-neighbour of~$x$, let~$z_{i}$ be the $c$-neighbour of~$y_{i}$, define $R_{i}\coloneqq N_{D_{1}}(y_{i})$ and $S_{i}\coloneqq N_{D_{2}}(z_{i})$. Notice that $\sum_{i=1}^{k/4}e(R_{i}, S_{i})\leq 5k^{4}/2n$, since otherwise we obtain a contradiction to~(\ref{eq:quas}) for some pair $(R_{i}, S_{i})$. We deduce that there are at most $5k^{4}/2n$ triples $(d_{1},d_{2},d_{3})$ with each $d_{i}\in D_{i}$ for which adding the edge $u_{9}u_{10}$ creates an $(x,c,\mathcal{P})$-gadget~$J$ for which $u_{9}u_{10}$ plays the role of $v_{9}v_{10}$ in~$J$ and the edge playing the role of $v_{5}v_{6}$ is already present in~$G$, whence at most $(5k^{4}/2n)\cdot (k/4)\cdot n^{2}=5k^{5}n/8$ choices of $\lambda\in\Lambda$ are such that twisting on~$\lambda$ creates a new $(x,c,\mathcal{P})$-gadget of this type. \noclaimproof {} We now show that for almost all $\lambda\in\Lambda$, we have that $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{9}u_{10}$, $u_{11}u_{13}$, $u_{12}u_{14}$ each cannot play the role of $v_{5}v_{6}$ in any new $(x,c,\mathcal{P})$-gadget. Note firstly that $u_{9}u_{10}$ cannot play the role of $v_{5}v_{6}$ in any new $(x,c,\mathcal{P})$-gadget~$J$ for which the edge $v_{9}v_{10}$ is not already present in~$G$ before the twist, since we have shown already that only $u_{9}u_{10}$ can play the role of $v_{9}v_{10}$ in such a new $(x,c,\mathcal{P})$-gadget, for all~$\lambda\in\Lambda$ that we have not discarded. Now fix $d_{2}\in D_{2}$, $d_{3}\in D_{3}^{\text{good}}$, $d_{4}\in D_{4}^{\text{good}}$, $\overrightarrow{e_{1}}$, $\overrightarrow{e_{2}}$ appearing concurrently in some $\lambda \in \Lambda$. 
Note that since $d_{3}\in D_{3}^{\text{good}}$, there are at most $200k^{2}/n$ pairs $(d_{1}', d_{2}')$ where $d_{1}'\in D_{1}$, $d_{2}'\in D_{2}$, such that there is a~$K_{3}$ in~$G$ with vertices $x$, $w_{1}$, $w_{2}$, where~$w_{i}$ is the $d_{i}'$-neighbour of~$x$, for $i\in\{1,2\}$, and the edge $w_{1}w_{2}$ is coloured~$d_{3}$. Let the set of these pairs $(d_{1}',d_{2}')$ be denoted~$L(d_{3})$. For each pair $\ell=(d_{1}',d_{2}')\in L(d_{3})$, let~$z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{10}$. Similarly, let~$z_{\ell}^{2}$ denote the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{10}$. Define~$M\coloneqq\bigcup_{\ell\in L(d_{3})}\{z_{\ell}^{1}, z_{\ell}^{2}\}$, so that $|M|\leq 400k^{2}/n$. Since there are at most~$400k^{2}/n$ choices of $d_{1}\in D_{1}$ for which we obtain $u_{9}\in M$, we deduce that for all but at most $(k/4)^{3}\cdot n^{2}\cdot 400k^{2}/n=25k^{5}n/4$ vectors $\lambda\in\Lambda$,~$H(\lambda)$ is such that adding the edge $u_{9}u_{10}$ in colour~$d_{3}$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where $u_{9}u_{10}$ plays the role of $v_{5}v_{6}$ in~$J$ and the edge playing the role of $v_{9}v_{10}$ in~$J$ is already present in~$G$ before the twist\COMMENT{Only when we choose $d_{1}\in D_{1}$ such that $u_{9}\in M$ is it true that adding the edge $u_{9}u_{10}$ with colour $d_{3}$ completes a $C_{4}$ with colours $d_{1}', c, d_{2}', d_{3}$ for which there is already a $K_{3}$ with colours $d_{1}', d_{2}', d_{3}$ using the $d_{1}'$ and $d_{2}'$ edges at $x$. In fact only those structures amongst these for which there is a $d_{4}'$-edge (for some $d_{4}'\in D_{4}$) in the necessary place actually create one of the new gadgets we are trying to rule out, but this only helps us.}. One can observe similarly\COMMENT{Now, fix $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{e_{1}}$, $\overrightarrow{e_{2}}$. Since $d_{3}\in D_{3}^{\text{good}}$, observe that $|L(d_{3})|\leq 200k^{2}/n$. 
Now for each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{6}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{6}$. Define~$M\coloneqq\bigcup_{\ell\in L(d_{3})}\{z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 400k^{2}/n$. Since there are at most~$400k^{2}/n$ choices of $d_{1}\in D_{1}$ for which we obtain $u_{5}\in M$, we deduce that for all but at most $(k/4)^{3}\cdot n^{2}\cdot 400k^{2}/n=25k^{5}n/4$ vectors $\lambda\in\Lambda$,~$H(\lambda)$ is such that adding the edge $u_{5}u_{6}$ in colour~$d_{3}$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where $u_{5}u_{6}$ plays the role of $v_{5}v_{6}$ in~$J$ and the edge playing the role of $v_{9}v_{10}$ in~$J$ is already present in~$G$ before the twist.} that for all but at most~$25k^{5}n/4$ vectors~$\lambda\in\Lambda$,~$H(\lambda)$ is such that adding the edge $u_{5}u_{6}$ in colour~$d_{3}$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where $u_{5}u_{6}$ plays the role of $v_{5}v_{6}$ in~$J$ and the edge playing the role of $v_{9}v_{10}$ in~$J$ is already present in~$G$ before the twist. Now fix instead $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{e_{2}}$. Since $d_{3}\in D_{3}^{\text{good}}$, observe that $|L(d_{3})|\leq 200k^{2}/n$. Now for each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $y_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{4}$, let $y_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{4}$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{3}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{3}$. Further, let~$a_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{4}$, let~$a_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{4}$, let~$b_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{3}$, and let~$b_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{3}$. 
Define $M\coloneqq\{a_{1},a_{2},b_{1},b_{2}\}\cup\bigcup_{\ell\in L(d_{3})}\{y_{\ell}^{1}, y_{\ell}^{2}, z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 1000k^{2}/n$. We deduce that there are at most $2000k^{2}/n$ choices of~$\overrightarrow{e_{1}}$ such that~$e_{1}$ has an endpoint in~$M$, and that for all remaining choices of~$\overrightarrow{e_{1}}$, twisting on $\lambda=(d_{1},d_{2},d_{3},d_{4},\overrightarrow{e_{1}},\overrightarrow{e_{2}})$ cannot create a new $(x,c,\mathcal{P})$-gadget~$J$ where the new $d_{3}$-edges $u_{1}u_{3}$ or $u_{2}u_{4}$ play the role of $v_{5}v_{6}$ in~$J$. Analysing~$\overrightarrow{e_{2}}$ similarly\COMMENT{Fix $d_{1}$, $d_{2}$, $d_{3}$, $d_{4}$, $\overrightarrow{e_{1}}$. Since $d_{3}\in D_{3}^{\text{good}}$, observe that $|L(d_{3})|\leq 200k^{2}/n$. Now for each $\ell=(d_{1}', d_{2}')\in L(d_{3})$, let $y_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{11}$, let $y_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{11}$, let $z_{\ell}^{1}$ be the end of the $d_{1}'cd_{2}'$-walk starting at~$u_{12}$, and let $z_{\ell}^{2}$ be the end of the $d_{2}'cd_{1}'$-walk starting at~$u_{12}$. Further, let~$a_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{11}$, let~$a_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{11}$, let~$b_{1}$ be the end of the $d_{1}cd_{2}$-walk starting at~$u_{12}$, and let~$b_{2}$ be the end of the $d_{2}cd_{1}$-walk starting at~$u_{12}$. Define $M\coloneqq\{a_{1},a_{2},b_{1},b_{2}\}\cup\bigcup_{\ell\in L(d_{3})}\{y_{\ell}^{1}, y_{\ell}^{2}, z_{\ell}^{1}, z_{\ell}^{2}\}$, and notice that $|M|\leq 1000k^{2}/n$. 
We deduce that there are at most $2000k^{2}/n$ choices of~$\overrightarrow{e_{2}}$ such that~$e_{2}$ has an endpoint in~$M$, and that for all remaining choices of~$\overrightarrow{e_{2}}$, twisting on $\lambda=(d_{1},d_{2},d_{3},d_{4},\overrightarrow{e_{1}},\overrightarrow{e_{2}})$ cannot create a new $(x,c,\mathcal{P})$-gadget~$J$ where the new $d_{3}$-edges $u_{11}u_{13}$ or $u_{12}u_{14}$ play the role of $v_{5}v_{6}$ in~$J$.}, we conclude that for all but at most\COMMENT{$25k^{5}n/4$ for $u_{9}u_{10}$, and $2((k/4)^{4}\cdot n\cdot 2000k^{2}/n)$ for $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{11}u_{13}$, $u_{12}u_{14}$.} $7k^{5}n$ choices of $\lambda\in\Lambda$, twisting on~$\lambda$ does not create a new $(x,c,\mathcal{P})$-gadget~$J$ where any of the new $d_{3}$-edges $u_{1}u_{3}$, $u_{2}u_{4}$, $u_{9}u_{10}$, $u_{11}u_{13}$, $u_{12}u_{14}$ play the role of $v_{5}v_{6}$ in~$J$. Finally, we must ensure that for almost all $\lambda\in\Lambda$, twisting on~$\lambda$ is such that the only new $(x,c,\mathcal{P})$-gadget~$J$ created where $u_{5}u_{6}$ plays the role of $v_{5}v_{6}$ and $u_{9}u_{10}$ plays the role of $v_{9}v_{10}$ is the canonical $(x,c,\mathcal{P})$-gadget of the twist. Fix $d_{3}$, $d_{4}$, $\overrightarrow{e_{1}}$, $\overrightarrow{e_{2}}$. By~(\ref{eq:quas}), we have that $e(N_{D_{2}}(u_{8}), N_{D_{4}}(x))\leq 10k^{3}/n$. We deduce that there are at most $10k^{3}/n$ choices of the pair $(d_{1},d_{2})$ such that the $d_{1}$-neighbour of~$u_{6}$ is in~$N_{D_{4}}(x)$, whence\COMMENT{The situation we were trying to avoid was the one in which~$u_{5}$ is the endpoint of the $d_{1}cd_{2}$-walk starting at~$u_{6}$, such that the $d_{1}$-neighbour of~$u_{6}$ is also a $d_{4}'$-neighbour of~$x$ for some $d_{4}'\in D_{4}$. 
In this situation, when we twist, we create an unwanted second $(x,c,\mathcal{P})$-gadget where $u_{5}u_{6}$ and $u_{9}u_{10}$ play the roles of $v_{5}v_{6}$ and $v_{9}v_{10}$ respectively.} for all but at most $5k^{5}n/8$ choices of $\lambda\in\Lambda$, the canonical $(x,c,\mathcal{P})$-gadget of the twist is the only new $(x,c,\mathcal{P})$-gadget for which $u_{5}u_{6}$ plays the role of $v_{5}v_{6}$ and $u_{9}u_{10}$ plays the role of $v_{9}v_{10}$. Let $\Lambda_{\text{bad}}\subseteq \Lambda$ be the set of all $\lambda\in\Lambda$ that we have removed to account for the possibility that~$\lambda$ does not give rise to a twist system of~$G$, or that~$H(\lambda)$ fails to satisfy property (P1) or (P2). \end{comment} Note that, by Claims~\ref{p2claim1}--\ref{p2claim3}, the canonical $(x,c,\mathcal{P})$-gadget of a twist on $\lambda\in\Lambda_{5}$ is clearly distinguishable in~$\text{twist}_{H(\lambda)}(G)$ since its edges~$v_{5}v_{6}$ and~$v_{9}v_{10}$ with colours in~$D_{3}$ were added by the twist and performing this twist creates no other $(x,c,\mathcal{P})$-gadgets. Thus Claims~\ref{p2claim1}--\ref{p2claim3} imply that~$H(\lambda)$ satisfies~(P3) for all $\lambda\in\Lambda_{5}$. Recalling that~$\text{sat}_{G}(e)\leq k-1$ for the $c$-edge~$e$ of~$H(\lambda)$ for all $\lambda\in\Lambda$ and also using Claim~\ref{rgnodec}, we now deduce that $r(\text{twist}_{H(\lambda)}(G))=r(G)+1$, and thus $\text{twist}_{H(\lambda)}(G)\in A_{s+1}^{D^{*}}$, for all $\lambda\in\Lambda_{5}$. Since~$H(\lambda)$ satisfies~(P1) for all $\lambda\in\Lambda_{1}$, we deduce that $\text{twist}_{H(\lambda)}(G)\in T_{s+1}^{D^{*}}$ for all $\lambda\in\Lambda_{5}$, and that $\delta_{s}\geq|\Lambda_{5}|\geq|\Lambda|/2\geq 3k^{4}n^{2}/2^{17}$. We conclude that if $s\leq k^{4}/2^{22}n^{2}$ and~$T_{s}^{D^{*}}$ is non-empty, then~$T_{s+1}^{D^{*}}$ is non-empty and $|T_{s}^{D^{*}}|/|T_{s+1}^{D^{*}}|\leq 2^{17}\cdot 24n^{4}(s+1)/3k^{4}n^{2}\leq 1/2$. Now, fix $s\leq \mu^{4}n^{2}/2^{23}$. 
If~$T_{s}^{D^{*}}$ is empty, then\COMMENT{Note $\prob{r(\mathbf{G})=s\mid\mathbf{G}\in\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}}=|T_{s}^{D^{*}}|/|\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}|$, so we need to know $\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}$ is non-empty to know that we are not dividing by zero. But this follows from the fact that $\mathcal{Q}_{D^{*}}^{\text{col}}$ is non-empty, which in turn follows from the usual existence results of $1$-factorizations of complete graphs, restricting such $1$-factorizations to see that $\mathcal{G}_{D^{*}}^{\text{col}}$ is non-empty, and then applying Lemma~\ref{quasirandom}.} $\prob{r(\mathbf{G})=s\mid\mathbf{G}\in\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}}=0$. If~$T_{s}^{D^{*}}$ is non-empty, then \[ \prob{r(\mathbf{G})=s\mid\mathbf{G}\in\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}} = \frac{|T_{s}^{D^{*}}|}{|\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}|} \leq \frac{|T_{s}^{D^{*}}|}{|T_{k^{4}/2^{22}n^{2}}^{D^{*}}|} =\prod_{j=s}^{k^{4}/2^{22}n^{2}-1}\frac{|T_{j}^{D^{*}}|}{|T_{j+1}^{D^{*}}|} \leq (1/2)^{k^{4}/2^{22}n^{2}-s}, \] and thus\COMMENT{Note that the middle expression is bounded above by $\left(\frac{\mu^{4}n^{2}}{2^{23}}+1\right)\exp\left(-\frac{\mu^{4}n^{2}}{2^{23}}\ln 2\right)$.}, \[ \prob{r(\mathbf{G})\leq \mu^{4}n^{2}/2^{23}\mid\mathbf{G}\in\widetilde{\mathcal{Q}}_{D^{*}}^{\text{col}}} \leq \sum_{s=0}^{\mu^{4}n^{2}/2^{23}}\exp(-(k^{4}/2^{22}n^{2}-s)\ln 2)\leq \exp\left(-\frac{\mu^{4}n^{2}}{2^{24}}\right), \] which completes the proof of the lemma. \end{proof} Next, we show that in order to find many well-spread $(x,c)$-absorbing gadgets in~$G\in\mathcal{G}_{D\cup\{c\}}^{\text{col}}$, it suffices to show that~$r(G)$ is large for some equitable partition~$\mathcal{P}$ of~$D$ into four parts. (Recall that `well-spread' was defined in Definition~\ref{spread}.) \begin{lemma}\label{justgadgets} Suppose that $1/n\ll\mu$, and let $D\subseteq[n-1]$ be such that $|D|\leq\mu n$. 
Let $x\in V$, let $c\in [n-1]\setminus D$, and let $\mathcal{P}=\{D_{i}\}_{i=1}^{4}$ be an equitable partition of~$D$. Then for any integer $t\geq 0$ and any $G\in\mathcal{G}_{D\cup\{c\}}^{\text{col}}$, if $r(G)\geq t$, then~$G$ contains a $5\mu n/4$-well-spread collection of~$t$ distinct $(x,c)$-absorbing gadgets. \end{lemma} \begin{proof} Let $G\in\mathcal{G}_{D\cup\{c\}}^{\text{col}}$, let $t\geq 0$ be an integer, and suppose that $r(G)\geq t$. Then, since $|D|\leq\mu n$ and by definition of~$r$, we deduce that there is a collection~$\mathcal{A}_{(x,c,\mathcal{P})}$ of~$t$ distinct $(x,c,\mathcal{P})$-gadgets satisfying the following conditions: \begin{enumerate}[label=\upshape(\roman*)] \item Each edge of~$G$ with colour in~$D_{3}$ is contained in at most one $(x,c,\mathcal{P})$-gadget $J\in\mathcal{A}_{(x,c,\mathcal{P})}$; \item Each $c$-edge of~$G$ is contained in at most $\mu n$ $(x,c,\mathcal{P})$-gadgets $J\in\mathcal{A}_{(x,c,\mathcal{P})}$. \end{enumerate} Fix $v\in V\setminus\{x\}$. Let~$e$ be the $c$-edge of~$G$ incident to~$v$ and for each $d\in D_{3}$ let~$f_{d}$ be the $d$-edge of~$G$ incident to~$v$. Then by conditions~(i) and~(ii) there are at most $5\mu n/4$ $(x,c,\mathcal{P})$-gadgets $J\in\mathcal{A}_{(x,c,\mathcal{P})}$ containing any of the edges in $\{e\}\cup\bigcup_{d\in D_{3}}\{f_{d}\}$. Note that if~$v$ is contained in some $J\in\mathcal{A}_{(x,c,\mathcal{P})}$, then~$v$ is incident to either the $c$-edge in~$J$, or to one of the edges in~$J$ with colour in~$D_{3}$. We thus conclude\COMMENT{Of course, it may be that there are gadgets $J\in\mathcal{A}_{(x,c,\mathcal{P})}$ such that~$J$ contains one of the edges coloured with some colour in~$D_{1}$ (say) incident to~$v$. But by the above observation,~$J$ must use one of the edges $\{e\}\cup\bigcup_{d\in D_{3}}\{f_{d}\}$, so we have already counted~$J$.} that~$v$ is contained in at most $5\mu n/4$ $(x,c,\mathcal{P})$-gadgets $J\in\mathcal{A}_{(x,c,\mathcal{P})}$. 
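For orientation, the count in the last sentence can be assembled explicitly; this is merely a recap of conditions~(i) and~(ii) above, using that~$\mathcal{P}$ is an equitable partition of~$D$ with $|D|\leq\mu n$, so that (up to rounding) $|D_{3}|\leq\mu n/4$:
\[
\underbrace{\mu n}_{\text{gadgets containing }e\text{, by (ii)}}\;+\;\underbrace{|D_{3}|}_{\text{at most one gadget per }f_{d}\text{, by (i)}}\;\leq\;\mu n+\frac{\mu n}{4}\;=\;\frac{5\mu n}{4}.
\]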
It immediately follows that no edge of~$G$ is contained in more than~$5\mu n/4$ $(x,c,\mathcal{P})$-gadgets $J\in\mathcal{A}_{(x,c,\mathcal{P})}$.\COMMENT{Indeed the number of times an edge is used can be bounded above by the number of times one of its endpoints is used.} For each $d\in D_{1}\cup D_{2}\cup D_{4}$, there are at most~$5\mu n/4$ $J\in\mathcal{A}_{(x,c,\mathcal{P})}$ with $d\in\phi(J)$ since each such~$J$ must contain the $d$-neighbour of~$x$ in~$G$. For each $d\in D_{3}$, there are at most~$\mu n/2$ $d$-edges~$f$ in~$G$ such that both endpoints of~$f$ are neighbours of~$x$. Any $J\in\mathcal{A}_{(x,c,\mathcal{P})}$ for which $d\in\phi(J)$ must contain one of these edges~$f$. Thus by~(i), there are at most $\mu n/2$ $J\in\mathcal{A}_{(x,c,\mathcal{P})}$ such that $d\in\phi(J)$. Finally, define a function~$g$ on~$\mathcal{A}_{(x,c,\mathcal{P})}$ by setting $g(J)\coloneqq J-f$, where~$f$ is the unique edge of~$J$ with colour in~$D_{4}$, for each $J\in\mathcal{A}_{(x,c,\mathcal{P})}$. Then it is clear that~$g$ is injective\COMMENT{Suppose $g(J)=g(J')$. Then~$J$ and $J'$ can both be obtained by adding back in the edge of~$G$ with endpoints~$x$ and~$u$, where $u$ is the unique vertex of~$g(J)=g(J')$ having $\phi_{G}(\partial_{J}(u))=\{c, d_{1}\}$, for some $d_{1}\in D_{1}$. So $J=J'$.} and that~$g(J)$ is an $(x,c)$-absorbing gadget, for each $J\in\mathcal{A}_{(x,c,\mathcal{P})}$. Thus,~$g(\mathcal{A}_{(x,c,\mathcal{P})})$ is a $5\mu n/4$-well-spread collection of~$t$ distinct $(x,c)$-absorbing gadgets in~$G$, as required. \end{proof} \subsection{Weighting factor} We now state two results on the number of $1$-factorizations in dense $d$-regular graphs~$G$, where a $1$-factorization of~$G$ consists of an ordered set of~$d$ perfect matchings in~$G$. We will use these results to find a `weighting factor' (see Corollary~\ref{wf}), which we will use to compare the probabilities of particular events occurring in different probability spaces. 
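To indicate how such a weighting factor is used (this is only a sketch of the comparison carried out in the proof of Lemma~\ref{main-switching-lemma} below, where $\mathcal{B}\subseteq\mathcal{G}_{D}^{\text{col}}$ stands for a generic `bad' event and~$N_{H}$ denotes the number of completions of~$H$ to an element of~$\mathcal{G}_{[n-1]}^{\text{col}}$), observe that
\[
\prob{\mathbf{G}|_{D}\in\mathcal{B}}\;=\;\frac{\sum_{H\in\mathcal{B}}N_{H}}{\sum_{H'\in\mathcal{G}_{D}^{\text{col}}}N_{H'}}\;\leq\;\probd{\mathbf{H}\in\mathcal{B}}\cdot\max_{H,H'\in\mathcal{G}_{D}^{\text{col}}}\frac{N_{H}}{N_{H'}}.
\]
Since~$N_{H}$ counts the $1$-factorizations of the $(n-1-|D|)$-regular complement of~$H$, a uniform bound on the ratios $M(G)/M(H)$ for dense regular graphs (the `weighting factor') transfers an exponentially small bound in the smaller probability space to one in the larger space.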
For any graph~$G$, let~$M(G)$ denote the number of distinct $1$-factorizations of~$G$, and for any $n,d\in\mathbb{N}$, let $\mathcal{G}_{d}^{n}$ denote the set of $d$-regular graphs on~$n$ vertices. Firstly, the Kahn-Lov\'{a}sz Theorem (see e.g.~\cite{AF08}) states that a graph with degree sequence $r_{1},\dots,r_{n}$ has at most~$\prod_{i=1}^{n}(r_{i}!)^{1/2r_{i}}$ perfect matchings. In particular, an $n$-vertex $d$-regular graph has at most~$(d!)^{n/2d}$ perfect matchings. To determine an upper bound for the number of $1$-factorizations of a $d$-regular graph~$G$, one can simply apply the Kahn-Lov\'{a}sz Theorem repeatedly to obtain $M(G)\leq \prod_{r=1}^{d}(r!)^{n/2r}$. Using Stirling's approximation, we obtain the following result.\COMMENT{Using Stirling's approximation as $r!=\exp(r\ln r -r +O(\ln r))$ and the well-known asymptotic expansion of the Harmonic number~$H_{d}=\frac{1}{1}+\frac{1}{2}+\dots+\frac{1}{d}=\ln d +\gamma+O(1/d)$, where $\gamma\leq 0.58$ is the Euler-Mascheroni constant, and the Taylor series expansion $e^{x}=1+x+x^{2}/2+\dots$, one obtains \begin{eqnarray*} \prod_{r=1}^{d}(r!)^{n/2r} & = & \prod_{r=1}^{d}\exp\left(\frac{n}{2}\ln r -\frac{n}{2}+O\left(\frac{n}{2r}\ln r\right)\right) =\exp\left(\frac{n}{2}(\ln 1+\dots+\ln d)-\frac{dn}{2}+\sum_{r=1}^{d}O\left(\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{n}{2}\ln d! 
-\frac{dn}{2}+O\left(\sum_{r=1}^{d}\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{n}{2}(d\ln d - d +O(\ln d))-\frac{dn}{2}+O\left(\sum_{r=1}^{d}\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{nd}{2}\ln d -nd +O(n\ln d) + O\left(\sum_{r=1}^{d}\frac{n}{2r}\ln r\right)\right) \\ & = & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln d}{d}\right) +O\left(\sum_{r=1}^{d}\frac{1}{rd}\ln r\right)\right)\right) \\ & \leq & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln d}{d}\right) +O\left(\sum_{r=1}^{d}\frac{1}{rd}\ln d\right)\right)\right) \\ & = & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln d}{d}\right) +O\left(\frac{\ln d}{d}\sum_{r=1}^{d}\frac{1}{r}\right)\right)\right) \\ & = & \exp\left(\frac{nd}{2}\left(\ln d - 2 +O\left(\frac{\ln^{2}d}{d}\right)\right)\right) \leq \left(\left(1+O\left(\frac{\ln^{2}d}{d}\right)+O\left(\frac{\ln^{4}d}{d^{2}}\right)\right)\frac{d}{e^{2}}\right)^{dn/2} \\ & \leq & \left(\left(1+o\left(\frac{\ln^{3}d}{d}\right)\right)\frac{d}{e^{2}}\right)^{dn/2} \leq \left(\left(1+o\left(\frac{\ln^{3}n}{n}\right)\right)\frac{d}{e^{2}}\right)^{dn/2} \leq \left(\left(1+\frac{1}{\sqrt{n}}\right)\frac{d}{e^{2}}\right)^{dn/2}, \end{eqnarray*} where we have used $d=\Theta(n)$ (say) a couple times towards the end.} \begin{theorem}\label{linlur} Suppose $n\in\mathbb{N}$ is even with $1/n\ll1$\COMMENT{Sufficiently large needed for us to make the simplifications we make in applying Stirling's (and the asymptotic formula for the Harmonic number)}, and $d\geq n/2$.\COMMENT{We don't necessarily need~$d$ to be this large, but $d=\Theta(n)$ does seem helpful in the previous comment.} Then every $G\in\mathcal{G}_{d}^{n}$ satisfies \[ M(G)\leq\left(\left(1+n^{-1/2}\right)\frac{d}{e^{2}}\right)^{dn/2}. \] \end{theorem} On the other hand, Ferber, Jain, and Sudakov~\cite{FJS20} proved the following lower bound for the number of distinct $1$-factorizations in dense regular graphs. 
\begin{theorem}[{\cite[Theorem 1.2]{FJS20}}]\label{Ferb} Suppose $C>0$ and $n\in\mathbb{N}$ is even with $1/n\ll1/C\ll1$, and $d\geq(1/2+n^{-1/C})n$. Then every $G\in\mathcal{G}_{d}^{n}$ satisfies \[ M(G)\geq\left(\left(1-n^{-1/C}\right)\frac{d}{e^{2}}\right)^{dn/2}. \] \end{theorem} Theorems~\ref{linlur} and~\ref{Ferb} immediately yield\COMMENT{Define $C$ to be the universal constant from Theorem~\ref{Ferb}. Then, applying Theorems~\ref{Ferb} and~\ref{linlur}, for all sufficiently large even integers~$n$, all $d\geq(1/2+n^{-1/C})n$, and all $G,H\in\mathcal{G}_{d}^{n}$, we have \[ \frac{M(G)}{M(H)} \leq \frac{((1+n^{-1/2})d/e^{2})^{dn/2}}{((1-n^{-1/C})d/e^{2})^{dn/2}} \leq \left(1+\frac{2n^{-1/C}}{1-n^{-1/C}}\right)^{dn/2} \leq (1+4n^{-1/C})^{dn/2} \leq \exp(2n^{1-1/C}d). \]} the following corollary: \begin{cor}\label{wf} Suppose $C>0$ and $n\in\mathbb{N}$ is even with $1/n\ll1/C\ll1$, and $d\geq(1/2+n^{-1/C})n$. Then \[ \frac{M(G)}{M(H)}\leq\exp\left(2n^{1-1/C}d\right), \] for all $G,H\in\mathcal{G}_{d}^{n}$. \end{cor} Recall that for $G\in\mathcal{G}_{[n-1]}^{\text{col}}$ and a set of colours $D\subseteq [n-1]$, $G|_{D}$ is the spanning subgraph of~$G$ containing precisely those edges of~$G$ which have colour in~$D$. We now have all the tools we need to prove Lemma~\ref{main-switching-lemma}. \lateproof{Lemma~\ref{main-switching-lemma}} Let $C>0$ be the constant given by Corollary~\ref{wf} and suppose that $1/n\ll1/C, \mu, {\varepsilon}$. Let~$\mathbb{P}$ denote the probability measure for the space corresponding to choosing $\mathbf{G}\in\mathcal{G}_{[n-1]}^{\text{col}}$ uniformly at random. Fix $D\subseteq [n-1]$ such that $|D|={\varepsilon} n$, and let~$\mathbb{P}_{D}$ denote the probability measure for the space corresponding to choosing $\mathbf{H}\in\mathcal{G}_{D}^{\text{col}}$ uniformly at random. Let~$\mathcal{G}_{D}^{\text{bad}}$ denote the set of $H\in\mathcal{G}_{D}^{\text{col}}$ such that~$H$ is not ${\varepsilon}$-locally edge-resilient. 
For $H\in\mathcal{G}_{D}^{\text{col}}$, write~$N_{H}$ for the number of distinct completions of~$H$ to an element $G\in\mathcal{G}_{[n-1]}^{\text{col}}$; that is,~$N_{H}$ is the number of $1$-factorizations of the complement of~$H$.\COMMENT{A little lazy but hopefully clear that the complement has no interest in the colours of~$H$.} Then \begin{eqnarray*} \prob{\mathbf{G}|_{D}\,\text{is}\,\text{not}\,{\varepsilon}\text{-locally}\,\text{edge-resilient}} & = & \frac{\sum_{H\in\mathcal{G}_{D}^{\text{bad}}}N_{H}}{\sum_{H'\in\mathcal{G}_{D}^{\text{col}}}N_{H'}} \\ & \leq & \probd{\mathbf{H}\in\mathcal{G}_{D}^{\text{bad}}}\cdot\exp\left(2n^{2-1/C}\right) \\ & \leq & \exp\left(-{\varepsilon}^{3}n^{2}/2000\right), \end{eqnarray*} where we have used Lemma~\ref{localedge} and Corollary~\ref{wf}. Then, union bounding over choices of~$D$, we deduce that\COMMENT{Let $C'\subseteq [n-1]$, $V'\subseteq V$ each have size $|V'|, |C'|\geq{\varepsilon} n$. Let $V^{*}\subseteq V'$, $C^{*}\subseteq C'$ be arbitrary subsets of size exactly~${\varepsilon} n$. If $\mathbf{G}|_{C^{*}}$ is ${\varepsilon}$-locally edge-resilient, then $\mathbf{G}$ has at least ${\varepsilon}^{3}n^{2}/100$ edges with colour in $C^{*}$ and endpoints in $V^{*}$, whence~$\mathbf{G}$ has at least ${\varepsilon}^{3}n^{2}/100$ edges with colour in $C'$ and endpoints in $V'$.} \begin{equation}\label{eq:edgeresil} \prob{\mathbf{G}\,\text{is}\,\text{not}\,{\varepsilon}\text{-locally}\,\text{edge-resilient}}\leq \binom{n-1}{{\varepsilon} n}\exp\left(-\frac{{\varepsilon}^{3}n^{2}}{2000}\right)\leq\exp\left(-\frac{{\varepsilon}^{3}n^{2}}{4000}\right). \end{equation} Now, fix $x\in V$, and fix $c\in [n-1]$. Choose $F\subseteq[n-1]\setminus\{c\}$ of size $|F|=\mu n$ arbitrarily. Write $F^{*}\coloneqq F\cup\{c\}$, and let~$\mathbb{P}_{F^{*}}$ denote the probability measure for the space~$\mathcal{S}$ corresponding to choosing $\mathbf{H}\in\mathcal{G}_{F^{*}}^{\text{col}}$ uniformly at random. 
Let~$\mathcal{P}$ be an equitable (ordered) partition of~$F$ into four subsets. Let~$A_{F^{*}}^{(x,c)}\subseteq\mathcal{G}_{F^{*}}^{\text{col}}$ be the set of $H\in\mathcal{G}_{F^{*}}^{\text{col}}$ such that~$H$ has a $5\mu n/4$-well-spread collection of at least $\mu^{4}n^{2}/2^{23}$ $(x,c)$-absorbing gadgets. Then, considering $A_{F^{*}}^{(x,c)}$, $\mathcal{Q}_{F^{*}}^{\text{col}}$, $\widetilde{\mathcal{Q}}_{F^{*}}^{\text{col}}$ as events in~$\mathcal{S}$, observe that \begin{eqnarray*} \probfc{\overline{A_{F^{*}}^{(x,c)}}} & \leq & \probfc{\widetilde{\mathcal{Q}}_{F^{*}}^{\text{col}}}\probfc{\overline{A_{F^{*}}^{(x,c)}} \biggm| \widetilde{\mathcal{Q}}_{F^{*}}^{\text{col}}}+\probfc{\overline{\widetilde{\mathcal{Q}}_{F^{*}}^{\text{col}}}} \\ & \stackrel{(\ref{eq:qsubset})}{\leq} & \probfc{\overline{A_{F^{*}}^{(x,c)}} \biggm| \widetilde{\mathcal{Q}}_{F^{*}}^{\text{col}}} + \probfc{\overline{\mathcal{Q}_{F^{*}}^{\text{col}}}}. \end{eqnarray*} Thus, applying\COMMENT{Lemma~\ref{justgadgets} is applied in contrapositive form} Lemma~\ref{justgadgets}, Lemma~\ref{quasirandom}, and Lemma~\ref{masterswitch}, we obtain \begin{eqnarray*} \probfc{\overline{A_{F^{*}}^{(x,c)}}} & \leq & \probfc{r(\mathbf{H})\leq \mu^{4}n^{2}/2^{23} \biggm| \mathbf{H}\in\widetilde{\mathcal{Q}}_{F^{*}}^{\text{col}}} +\probfc{\mathbf{H}\notin\mathcal{Q}_{F^{*}}^{\text{col}}} \\ & \leq & \exp\left(-\frac{\mu^{4}n^{2}}{2^{24}}\right)+\exp\left(-\mu^{3}n^{2}\right) \leq \exp\left(-\frac{\mu^{4}n^{2}}{2^{25}}\right). \end{eqnarray*} Then by Corollary~\ref{wf}, \begin{eqnarray*} \prob{\mathbf{G}|_{F^{*}}\notin A_{F^{*}}^{(x,c)}} & = & \frac{\sum_{H\in\overline{A_{F^{*}}^{(x,c)}}}N_{H}}{\sum_{H'\in\mathcal{G}_{F^{*}}^{\text{col}}}N_{H'}} \leq \probfc{\mathbf{H}\notin A_{F^{*}}^{(x,c)}}\cdot\exp\left(2n^{2-1/C}\right) \\ & \leq & \exp\left(-\frac{\mu^{4}n^{2}}{2^{26}}\right). 
\end{eqnarray*} In particular, with probability at least $1-\exp(-\mu^{4}n^{2}/2^{26})$,~$\mathbf{G}$ has a~$5\mu n/4$-well-spread collection of at least~$\mu^{4}n^{2}/2^{23}$ $(x,c)$-absorbing gadgets. Now, union bounding over all vertices $x\in V$ and all colours $c\in [n-1]$, we deduce that \begin{equation}\label{eq:gadgres} \prob{\mathbf{G}\,\text{is}\,\text{not}\,\mu\text{-robustly}\,\text{gadget-resilient}} \leq n^{2}\cdot\exp\left(-\frac{\mu^{4}n^{2}}{2^{26}}\right)\leq \exp\left(-\frac{\mu^{4}n^{2}}{2^{27}}\right). \end{equation} The result now follows by combining~(\ref{eq:edgeresil}) and~(\ref{eq:gadgres}). \noproof \section{Modifications and Corollaries}\label{corollary-section} In this section we show how to derive the~$n$ odd case of Theorem~\ref{mainthm} from the case when $n$ is even. We also show how Theorem~\ref{mainthm}\ref{mainthm:rainbow-cycle} implies Corollary~\ref{oddsym}. \subsection{A rainbow Hamilton cycle for $n$ odd}\label{odd} We actually derive the~$n$ odd case of Theorem~\ref{mainthm} from the following slightly stronger version of Theorem~\ref{mainthm}\ref{mainthm:rainbow-cycle} in the case when $n$ is even. \begin{theorem}\label{rainbow-cycle-missing-vtx} If $n$ is even and $\phi$ is a uniformly random 1-factorization of $K_n$, then for every vertex $v$, with high probability, $\phi$ admits a rainbow cycle containing all of the colours and all of the vertices except $v$. \end{theorem} We now argue that our proof of Theorem~\ref{mainthm} for $n$ even is sufficiently robust to also obtain this strengthening. 
In particular, we can strengthen Lemma~\ref{main-absorber-lemma} so that the absorber does not contain $v$, since~\ref{flexible-sets-right-size}--\ref{many-covers-survive} in Lemma~\ref{absorbing-template-lemma}, \ref{absorbing-sets-right-size}--\ref{weak-pseudorandomness-in-slice} in Lemma~\ref{greedy-absorber-lemma}, and~\ref{linking-sets-right-size}--\ref{common-linking-nbrhood-large-reserve} in Lemma~\ref{linking-lemma} all hold after deleting $v$ from any part in the absorber partition. The proof of Lemma~\ref{long-rainbow-path-lemma} is also sufficiently robust to guarantee that the rainbow path from the lemma does not contain $v$, but we do not need this strengthening, since we can instead strengthen Proposition~\ref{main-absorbing-proposition} to obtain a rainbow cycle containing $P' - v$ and all of the colours, as follows. If $v\in V(P')$, then we replace $v$ in $P'$ with a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover by deleting $v$ and adding a $(V_{\mathrm{flex}}, C_{\mathrm{flex}}, G_{\mathrm{flex}})$-cover of $w$, $w'$, and $\phi(vw)$, where $w$ and $w'$ are the vertices adjacent to $v$ in $P'$. The remainder of the proof proceeds normally, letting $v_\ell \coloneqq v$ to ensure $v \notin V(P''_1)$. In this procedure, we need to assume that $P'$ is contained in $(V\setminus V', C\setminus C', G')$ with $\delta/19$-bounded remainder (rather than $\delta/18$), but in Lemma~\ref{main-absorber-lemma} we can find a $38\gamma$-absorber, which completes the proof. Now we show how Theorem~\ref{rainbow-cycle-missing-vtx} implies the odd $n$ case of Theorem~\ref{mainthm}. \lateproof{Theorem~\ref{mainthm},~$n$ odd case} When~$n$ is odd, any optimal edge-colouring of~$K_{n}$ has~$n$ colour classes, each containing precisely $(n-1)/2$ edges. For every colour~$c$, there is a unique vertex which has no incident edges of colour~$c$, and for every vertex~$v$, there is a unique colour such that~$v$ has no incident edges of this colour. 
Thus, we can obtain a 1-factorization $\phi'$ of $K_{n+1}$ from an optimal edge-colouring $\phi$ of $K_n$ in the following way. We add a vertex~$z$, and for every other vertex $v$, we add an edge~$zv$, where $\phi'(zv)$ is the unique colour $c$ such that $v$ is not incident to a $c$-edge in $K_n$. Note that this operation produces a bijection from the set of $n$-edge-colourings of $K_n$ to the set of 1-factorizations of $K_{n+1}$. Hence, if $n$ is odd and $\phi$ is a uniformly random optimal edge-colouring of $K_n$, then $\phi'$ is a uniformly random $1$-factorization of $K_{n+1}$. By Theorem~\ref{rainbow-cycle-missing-vtx}, with high probability there is a rainbow cycle $F$ in $K_{n+1}$ containing all of the colours and all of the vertices except $z$, so $F$ is a rainbow Hamilton cycle in $K_n$, satisfying Theorem~\ref{mainthm}\ref{mainthm:rainbow-cycle}. Deleting any edge from $F$ gives a rainbow Hamilton path, as required in Theorem~\ref{mainthm}\ref{mainthm:rainbow-path}. \noproof \subsection{Symmetric Latin squares} Now we use Theorem~\ref{mainthm} to prove Corollary~\ref{oddsym}. \lateproof{Corollary~\ref{oddsym}} Suppose that~$n\in \mathbb N$ is odd. Firstly, note that there is a one-to-one correspondence between the set~$\mathcal{L}_{n}^{\text{sym}}$ of symmetric~$n\times n$ Latin squares with symbols in~$[n]$ (say) and the set~$\Phi_{n}$ of optimal edge-colourings of~$K_{n}$ on vertices~$[n]$ and with colours in~$[n]$. Indeed, let~$\phi\in\Phi_{n}$. Then we can construct a unique symmetric Latin square~$L_{\phi}\in \mathcal{L}_{n}^{\text{sym}}$ by putting the symbol~$\phi(ij)$ in position~$(i,j)$ for all edges $ij\in E(K_{n})$, and for each position $(i,i)$ on the leading diagonal we now enter the unique symbol still missing from row~$i$. Conversely, let $L\in\mathcal{L}_{n}^{\text{sym}}$. We can obtain a unique element $\phi_{L}\in\Phi_{n}$ from~$L$ in the following way. 
Colour each edge~$ij$ of the complete graph~$K_{n}$ on vertex set~$[n]$ with the symbol in position~$(i,j)$ of~$L$. It is clear that~$\phi_{L}$ is proper, and thus~$\phi_{L}$ is optimal. Moreover, it is clear that we can uniquely recover~$L$ from~$\phi_{L}$. Now, let $K_n^{\circ}$ be the graph obtained from $K_n$ by adding a loop $ii$ at every vertex $i\in[n]$, and for every $\phi\in\Phi_n$, let $\phi^\circ$ be the unique proper $n$-edge-colouring of $K_n^\circ$ such that the restriction of $\phi^\circ$ to the underlying simple graph is $\phi$. The rainbow 2-factors in $K_n^\circ$ admitted by $\phi^\circ$ correspond to transversals in $L_\phi$ in the following way. If $L\in\mathcal{L}_{n}^{\text{sym}}$ and $T$ is a transversal of $L$, then the subgraph of $K_n^{\circ}$ induced by the edges $ij$ where $(i, j) \in T$ is a rainbow 2-factor. If $\sigma$ is the underlying permutation of $T$, then the cycles of this rainbow 2-factor are precisely the cycles in the cycle decomposition of $\sigma$, up to orientation. Therefore a rainbow Hamilton cycle in $K_n^\circ$ corresponds to two disjoint Hamilton transversals in $L_\phi$. By these correspondences, for $n$ odd, if $\mathbf L \in \mathcal{L}_{n}^{\text{sym}}$ is a uniformly random symmetric $n\times n$ Latin square, then $\phi_{\mathbf L}$ is a uniformly random optimal edge-colouring of $K_n$. By Theorem~\ref{mainthm}\ref{mainthm:rainbow-cycle}, $\phi_{\mathbf{L}}$ admits a rainbow Hamilton cycle $F$ with high probability. Since $F$ is also a rainbow Hamilton cycle in $K_n^{\circ}$, the corresponding transversals in $\mathbf L$ are Hamilton, as desired. \noproof Note that, if~$n$ is odd, the leading diagonal of any $L\in\mathcal{L}_{n}^{\text{sym}}$ is also a transversal, disjoint from any Hamilton transversal. Indeed, by symmetry all symbols appear an even number of times off of the leading diagonal, and therefore an odd number of times (and thus exactly once) on the leading diagonal. 
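To make the correspondences above concrete, here is a small Python sketch (purely illustrative; none of these helper names come from the paper). It builds the symmetric Latin square $L_\phi$ from a proper $n$-edge-colouring $\phi$ of $K_n$ with $n$ odd, and extracts the two transversals given by the two orientations of a rainbow Hamilton cycle. As a hypothetical concrete input we use the cyclic colouring $\phi(ij) = i + j \bmod n$ on vertex set $\{0,\dots,n-1\}$.

```python
def latin_square_from_colouring(n, phi):
    """L_phi: entry (i,j) is phi(ij) for i != j; each diagonal entry is
    the unique symbol missing from its row (n odd)."""
    L = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                L[i][j] = phi(i, j)
    for i in range(n):
        L[i][i] = (set(range(n)) - {L[i][j] for j in range(n) if j != i}).pop()
    return L

def is_transversal(L, cells):
    """n cells, one in each row and column, containing each symbol once."""
    n = len(L)
    return (len(cells) == n
            and {i for i, _ in cells} == set(range(n))
            and {j for _, j in cells} == set(range(n))
            and {L[i][j] for i, j in cells} == set(range(n)))

def transversals_from_rainbow_cycle(order):
    """Positions (v_i, v_{i+1}) for the two orientations of the cycle."""
    n = len(order)
    fwd = [(order[i], order[(i + 1) % n]) for i in range(n)]
    return fwd, [(j, i) for (i, j) in fwd]
```

For $n=5$ and the cyclic colouring, the cycle $0\,2\,4\,1\,3$ is rainbow, and the two extracted transversals are disjoint, Hamilton, and avoid the leading diagonal.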
\end{document}
\begin{document} \author{Jakob Jonsson} \title[Five-Torsion in the Matching Complex on 14 Vertices]{Five-Torsion in the Homology of the Matching Complex on $14$ Vertices} \date{\today} \thanks{Research supported by European Graduate Program ``Combinatorics, Geometry, and Computation'', DFG-GRK 588/2. } \begin{abstract} J. L. Andersen proved that there is $5$-torsion in the bottom nonvanishing homology group of the simplicial complex of graphs of degree at most two on seven vertices. We use this result to demonstrate that there is $5$-torsion also in the bottom nonvanishing homology group of the matching complex $\M[14]$ on $14$ vertices. Combining our observation with results due to Bouc and to Shareshian and Wachs, we conclude that the case $n=14$ is exceptional; for all other $n$, the torsion subgroup of the bottom nonvanishing homology group has exponent three or is zero. The possibility remains that there is other torsion than $3$-torsion in higher-degree homology groups of $\M[n]$ when $n \ge 13$ and $n \neq 14$. \end{abstract} \maketitle \noindent This is a preprint version of a paper published in {\em Journal of Algebraic Combinatorics} {\bf 29} (2009), no. 1, 81--90. \section{Introduction} \label{intro-sec} Throughout this note, by a graph we mean a finite graph with loops allowed but with no multiple edges or multiple loops. The degree of a vertex $i$ in a given graph $G$ is the number of times $i$ appears as an endpoint of an edge in $G$; thus a loop at $i$ (if present) is counted twice, whereas other edges containing $i$ are counted once. Given a family $\Delta$ of graphs on a fixed vertex set, we identify each member of $\Delta$ with its edge set. In particular, if $\Delta$ is closed under deletion of edges, then $\Delta$ is an abstract simplicial complex. Let $n \ge 1$ and let $\lambda = (\lambda_1, \ldots, \lambda_n)$ be a sequence of nonnegative integers. 
We define $\BD{n}{\lambda}$ to be the simplicial complex of graphs on the vertex set $[n] := \{1, \ldots, n\}$ such that the degree of the vertex $i$ is at most $\lambda_i$. We write $\BD{n}{k} := \BD{n}{(k, \ldots, k)}$ and $\M[n] := \BD{n}{1}$; the latter complex is the \emph{matching complex} on $n$ vertices. The topology of $\M[n]$ and related complexes has been subject to analysis in several theses \cite{Andersen,Dong,Garst,thesis,thesislmn,Kara,Kson} and papers \cite{Ath,BBLSW,BLVZ,Bouc,DongWachs,FH,KRW,RR,ShWa,Ziegvert}; see Wachs \cite{Wachs} for an excellent survey and further references. The prime $3$ is known to play a prominent part in the homology of $\M[n]$. Specifically, write $\nu_n = \lfloor \frac{n-2}{3}\rfloor = \lceil\frac{n-4}{3}\rceil$. By a result due to Bj\"orner, Lov\'asz, Vre\'cica, and {$\check{\mathrm{Z}}$ivaljevi\'c} \cite{BLVZ}, the reduced homology group $\tilde{H}_i(\M[n]; \mathbb{Z})$ is zero whenever $i < \nu_n$. Bouc \cite{Bouc} showed that $\tilde{H}_{\nu_n}(\M[n]; \mathbb{Z}) \cong \mathbb{Z}_3$ whenever $n = 3k+1 \ge 7$ and that $\tilde{H}_{\nu_n}(\M[n]; \mathbb{Z})$ has exponent dividing nine whenever $n = 3k \ge 12$. Shareshian and Wachs extended and improved Bouc's result: \begin{theorem}[Shareshian and Wachs \cite{ShWa}] \label{ShWa-thm} $\tilde{H}_{\nu_n}(\M[n]; \mathbb{Z})$ is an elementary $3$-group for $n \in \{7,10,12,13\}$ and also for $n \ge 15$. The torsion subgroup of $\tilde{H}_{\nu_n}(\M[n]; \mathbb{Z})$ is again an elementary $3$-group for $n \in \{9,11\}$ and zero for $n \in \{1,2,3,4,5,6,8\}$. For the remaining case $n=14$, $\tilde{H}_{\nu_n}(\M[n]; \mathbb{Z})$ is a finite group with nonvanishing $3$-torsion. \end{theorem} To prove that the group $\tilde{H}_{\nu_n}(\M[n]; \mathbb{Z})$ is elementary for $n \equiv 0 \pmod{3}$ and $n \ge 12$ and for $n \equiv 2 \pmod{3}$ and $n \ge 17$, Shareshian and Wachs relied on a computer calculation of the group $\tilde{H}_{3}(\M[12];\mathbb{Z})$. 
The existence of $3$-torsion in the homology of $\M[9]$ and $\M[11]$ also relied on such calculations. Unfortunately, attempts to stretch this computer approach beyond $n=12$ have failed; the size of $\M[n]$ is too large for the existing software to handle when $n \ge 13$. In particular, the structure of the bottom nonvanishing homology group of $\M[14]$ has remained a mystery. For completeness, let us mention that there is $3$-torsion also in higher-degree homology groups \cite{bettimatch}. More precisely, there is $3$-torsion in $\tilde{H}_d(\M[n]; \mathbb{Z})$ whenever $\nu_n \le d \le \frac{n-6}{2}$. As a consequence, since there is homology in degree $\lfloor\frac{n-3}{2}\rfloor$ but not above this degree \cite{Bouc}, there is $3$-torsion in almost all nonvanishing homology groups of $\M[n]$, the only exceptions being the top degree $\lfloor\frac{n-3}{2}\rfloor$ and \emph{possibly} the degree $\frac{n-5}{2}$ just below it for odd $n$. The homology in the latter degree is known to contain $3$-torsion for $n \in \{7,9,11,13\}$; see Proposition~\ref{m13-prop} for the case $n = 13$. A complete description of the homology groups of $\M[n]$ is known only for $n \le 12$; see Table~\ref{matching-fig}. The appearance of $3$-torsion being so prominent, it makes sense to ask whether this is the {\em only} kind of torsion that appears in $\M[n]$ for {\em any} given $n$. Indeed, Babson, Bj\"orner, Linusson, Shareshian, and Welker \cite{BBLSW} asked this very question. Based on the overwhelming evidence presented in Theorem~\ref{ShWa-thm}, Shareshian and Wachs \cite{ShWa} conjectured that $\tilde{H}_{\nu_{14}}(\M[14]; \mathbb{Z}) = \tilde{H}_{4}(\M[14]; \mathbb{Z})$ is an elementary $3$-group. Surprisingly, the conjecture turns out to be false: \begin{theorem} \label{main-thm} $\tilde{H}_{4}(\M[14]; \mathbb{Z})$ is a finite group of exponent a multiple of $15$. 
\end{theorem} Theorem~\ref{main-thm} being just one specific example, the question of Babson et al$.$ remains unanswered in general; we do not know whether there is other torsion than $3$-torsion in the homology of other matching complexes. See Section~\ref{further-sec} for some discussion. To prove Theorem~\ref{main-thm}, we use a result due to Andersen about the homology of $\BD{7}{2}$: \begin{theorem}[Andersen \cite{Andersen}] \label{andersen-thm} We have that \[ \tilde{H}_i(\BD{7}{2}; \mathbb{Z}) \cong \left\{ \begin{array}{ll} \mathbb{Z}_5 & \mbox{if } i=4; \\ \mathbb{Z}^{732} & \mbox{if } i = 5;\\ 0 & \mbox{otherwise}. \end{array} \right. \] \end{theorem} \noindent \emph{Remark.} We have verified Theorem~\ref{andersen-thm} using the {\tt Homology} computer program \cite{Homoprog}. Let $\Symm{n}$ be the symmetric group on $n$ elements. We relate Andersen's result to the homology of $\M[14]$ via a map $\pi^*$ from $\tilde{H}_4(\M[14];\mathbb{Z})$ to $\tilde{H}_4(\BD{7}{2};\mathbb{Z})$; this map is induced by the natural action on $\M[14]$ by the Young group $(\Symm{2})^7$. Using a standard representation-theoretic argument, we construct an ``inverse'' $\varphi^*$ of $\pi^*$ with the property that $\pi^*\circ \varphi^*(z) = |(\Symm{2})^7|\cdot z$ for all $z \in \tilde{H}_4(\BD{7}{2};\mathbb{Z})$. To conclude the proof, one observes that $\varphi^*(z)$ is nonzero unless the order of $z$ divides the order of $(\Symm{2})^7$. Since the latter order is $128$, the image under $\varphi^*$ of any nonzero element of order five is again a nonzero element of order five. Using a computer, we have also been able to deduce that there is $3$-torsion in $\tilde{H}_4(\BD{10}{(2^31^7)}; \mathbb{Z})$, where $(2^31^7)$ denotes the sequence $(2,2,2,1,1,1,1,1,1,1)$. An argument similar to the one above yields that $\tilde{H}_4(\M[13]; \mathbb{Z})$ contains $3$-torsion. 
By the results of Bouc \cite{Bouc}, we already know that $\tilde{H}_3(\M[13]; \mathbb{Z}) \cong \mathbb{Z}_3$. \begin{table} \caption{The homology of $\M[n]$ for $n \le 14$; see Wachs \cite{Wachs} for explanation of the parts that are not explained in the present note. $T_1$ and $T_2$ are nontrivial finite groups of exponent a multiple of $3$ and $15$, respectively; see Proposition~\ref{m13-prop} and Theorem~\ref{main-thm}.} \begin{footnotesize} \begin{center} \begin{tabular}{|r||c|c|c|c|c|c|} \hline & & & & & & \\[-1.5ex] $\tilde{H}_i(\M[n];\mathbb{Z})$ & $i=0$ & \ 1 \ & \ 2 \ & \ 3 \ & \ 4 \ & \ 5 \ \\ \hline \hline & & & & & & \\[-2ex] $n = 3$ & $\mathbb{Z}^2$ & - & - & - & - & - \\ \hline & & & & & & \\[-2ex] $4$ & $\mathbb{Z}^2$ & - & - & - & - & - \\ \hline & & & & & & \\[-2ex] $5$ & - & $\mathbb{Z}^6$ & - & - & - & - \\ \hline & & & & & & \\[-2ex] $6$ & - & $\mathbb{Z}^{16}$ & - & - & - & - \\ \hline & & & & & & \\[-2ex] $7$ & - & $\mathbb{Z}_3$ & $\mathbb{Z}^{20}$ & - & - & - \\ \hline & & & & & & \\[-2ex] $8$ & - & - & $\mathbb{Z}^{132}$ & - & - & - \\ \hline & & & & & & \\[-2ex] $9$ & - & - & $\mathbb{Z}_3^8 \oplus \mathbb{Z}^{42}$ & $\mathbb{Z}^{70}$ & - & - \\ \hline & & & & & & \\[-2ex] $10$ & - & - & $\mathbb{Z}_3$ & $\mathbb{Z}^{1216}$ & - & - \\ \hline & & & & & & \\[-2ex] $11$ & - & - & - & $\mathbb{Z}_3^{45} \oplus \mathbb{Z}^{1188}$ & $\mathbb{Z}^{252}$ & - \\ \hline & & & & & & \\[-2ex] $12$ & - & - & - & $\mathbb{Z}_3^{56}$ & $\mathbb{Z}^{12440}$& - \\ \hline & & & & & & \\[-2ex] $13$ & - & - & - & $\mathbb{Z}_3$ & $T_1 \oplus \mathbb{Z}^{24596}$& $\mathbb{Z}^{924}$ \\ \hline & & & & & & \\[-2ex] $14$ & - & - & - & - & $T_2$ & $\mathbb{Z}^{138048}$ \\ \hline \end{tabular} \end{center} \end{footnotesize} \label{matching-fig} \end{table} For the sake of generality, we describe our simple representation-theoretic construction in terms of an arbitrary finite group acting on a chain complex of abelian groups; see Section~\ref{homology-sec}. 
The particular case that we are interested in is discussed in Section~\ref{match-sec}. In Section~\ref{further-sec}, we make some remarks and discuss potential improvements and generalizations of our result. \section{Group actions on chain complexes} \label{homology-sec} We recall some elementary properties of group actions on chain complexes; see Bredon~\cite{Bredon} for a more thorough treatment. Let \[ \begin{CD} \mathcal{C} : \cdots @>{\partial_{d+1}}>> C_d @>{\partial_d}>> C_{d-1} @>{\partial_{d-1}}>> C_{d-2} @>{\partial_{d-2}}>> \cdots \end{CD} \] be a chain complex of abelian groups. Let $G$ be a group acting on $\mathcal{C}$, meaning the following for each $k \in \mathbb{Z}$: \begin{itemize} \item Every $g \in G$ defines a degree-preserving automorphism on $\mathcal{C}$. \item For every $g,h \in G$ and $c \in C_k$, we have that $g(h(c)) = (gh)(c)$. \item For every $g \in G$ and $c \in C_k$, we have that $\partial_k(g(c)) = g(\partial_k(c))$. \end{itemize} Let $C_d^G$ be the subgroup of $C_d$ generated by $\{c - g(c) : c \in C_d, g \in G\}$ and let $\mathcal{C}^G$ be the corresponding chain complex. $\mathcal{C}^G$ is indeed a chain complex, because $\partial_d(c-g(c)) = \partial_d(c)-g(\partial_d(c)) \in C^G_{d-1}$ whenever $c \in C_d$. Writing $C_d/G = C_d/C_d^G$, we obtain the quotient chain complex \[ \begin{CD} \mathcal{C}/G : \cdots @>{\partial_{d+1}}>> C_d/G @>{\partial_d}>> C_{d-1}/G @>{\partial_{d-1}}>> C_{d-2}/G @>{\partial_{d-2}}>> \cdots \end{CD} \] In particular, we have the following exact sequence of homo\-logy groups for each $d$: \[ \begin{CD} H_{d+1}(\mathcal{C}/G) \longrightarrow H_d(\mathcal{C}^G) \longrightarrow H_d(\mathcal{C}) @>{\pi^*_d}>> H_d(\mathcal{C}/G) \longrightarrow H_{d-1}(\mathcal{C}^G); \end{CD} \] $\pi^*_d$ is the map induced by the natural projection map $\pi_d : C_d \rightarrow C_d/G$. From now on, assume that $G$ is finite. 
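To see the quotient construction in a toy case, consider a finite group $G$ that acts by permuting a fixed basis of each $C_d$ (as the Young groups in the next section do, if one ignores orientation signs). Then $C_d/G$ is free on the orbits, and the projection $\pi_d$ simply sums coefficients over each orbit. The sketch below is a hypothetical illustration, not part of the paper; chains are encoded as dicts from basis indices to integer coefficients.

```python
def orbit_reps(G, m):
    """Pick a representative (the minimum) of each G-orbit on the basis
    indices 0..m-1; G is the full group, given as a list of permutations
    encoded as tuples g with g[b] the image of b."""
    return {b: min(g[b] for g in G) for b in range(m)}

def project(c, rep):
    """pi: C_d -> C_d/G for a basis-permuting action, summing the
    coefficients of each orbit onto its representative."""
    q = {}
    for b, a in c.items():
        q[rep[b]] = q.get(rep[b], 0) + a
    return {k: v for k, v in q.items() if v}
```

For $G = \mathbb{Z}_2$ swapping basis elements $0 \leftrightarrow 1$ and $2 \leftrightarrow 3$, every generator $c - g(c)$ of $C_d^G$ projects to zero, as it must.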
For an element $c \in C_d$, let $[c]$ denote the corresponding element in $C_d/G$; $[c] = c + C_d^G$. Define $\groupsum{G}{c} = \sum_{g \in G} g(c)$. Clearly, $\groupsum{G}{c} = 0$ for all $c \in C_d^G$ and $\groupsumvoid{G}$ commutes with $\partial_d$. Let $\varphi_d : C_d/G \rightarrow C_d$ be the homomorphism defined by $\varphi_d([c]) = \groupsum{G}{c}$. Since $\groupsumvoid{G}$ vanishes on $C_d^G$, we have that $\varphi_d$ is well-defined. Moreover, $\partial_d \circ \varphi_d = \varphi_{d-1} \circ \partial_d$, because \[ \partial_d(\varphi_d([c])) = \partial_d(\groupsum{G}{c}) = \groupsum{G}{\partial_d(c)} = \varphi_{d-1}([\partial_d(c)]) = \varphi_{d-1}(\partial_d([c])). \] Let $\varphi^*_d : H_d(\mathcal{C}/G) \rightarrow H_d(\mathcal{C})$ be the map induced by $\varphi_d$; this is a well-defined homomorphism by the above discussion. \begin{lemma} The kernel of $\varphi^*_d$ has finite exponent dividing $|G|$. As a consequence, if the torsion subgroup of $H_d(\mathcal{C})$ has finite exponent $e$ ($e=1$ if there is no torsion), then the exponent of the torsion subgroup of $H_d(\mathcal{C}/G)$ is also finite and divides $|G|\cdot e$. \label{Gtimesdsum-lem} \end{lemma} \begin{proof} Let $c \in C_d$ be a cycle representing a class $[c] \in H_d(\mathcal{C}/G)$. Since $[\varphi^*_d([c])] = [\groupsum{G}{c}] = |G|\cdot [c]$ and $0 = [e\cdot \varphi^*_d([c])] = e\cdot |G|\cdot [c]$, we are done. \end{proof} \section{Detecting $5$-torsion in the homology of $\M[14]$} \label{match-sec} Let $\lambda = (\lambda_1, \ldots, \lambda_n)$ be a sequence of nonnegative integers summing to $N$. Define $\Symm{\lambda}$ to be the Young group $\Symm{\lambda_1} \times \cdots \times \Symm{\lambda_n}$. Write $[N]$ as a disjoint union $\bigcup_{i=1}^n U_i$ such that $|U_i| = \lambda_i$ for each $i$ and let $\Symm{\lambda_i}$ act on $U_i$ in the natural manner for each $i$. This yields an action of $\Symm{\lambda}$ on $[N]$, and this action induces an action on the chain complex $\tilde{\mathcal{C}}(\M[N])$. 
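A quick sketch of this action (illustrative only; vertices are $0$-indexed and the helper names are ours, not the paper's): for $\lambda = (2^7)$, the Young group $(\Symm{2})^7$ has $2^7 = 128$ elements, each swapping the two vertices within some of the seven blocks of $[14]$, and it permutes the matchings.

```python
from itertools import product

def young_group(blocks, N):
    """All elements of the Young group prod_i S_2 for 2-element blocks,
    encoded as tuples g with g[v] the image of vertex v."""
    G = []
    for flips in product([False, True], repeat=len(blocks)):
        g = list(range(N))
        for flip, (a, b) in zip(flips, blocks):
            if flip:
                g[a], g[b] = g[b], g[a]
        G.append(tuple(g))
    return G

def act(g, matching):
    """Image of a matching (a set of 2-element frozensets) under g;
    since g is a bijection of the vertices, the image is again a matching."""
    return frozenset(frozenset({g[a], g[b]}) for a, b in map(tuple, matching))
```

For instance, the single edge $\{0,2\}$ joining the blocks $\{0,1\}$ and $\{2,3\}$ has an orbit of size four under this action.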
In particular, we have the following result: \begin{lemma} \label{matchexp-lem} Let $\varphi^*_d : H_d(\tilde{\mathcal{C}}(\M[N])/\Symm{\lambda}) \rightarrow \tilde{H}_d(\M[N])$ be defined as in Lemma~{\rm\ref{Gtimesdsum-lem}}. Then the kernel of $\varphi^*_d$ has finite exponent dividing $\prod_{i=1}^n \lambda_i!$. \end{lemma} \begin{proof} This is an immediate consequence of Lemma~\ref{Gtimesdsum-lem}. \end{proof} Let $\Delta_\lambda$ be the subfamily of $\M[N]$ consisting of all $\sigma$ such that there are two distinct edges $ab$ and $cd$ in $\sigma$ with the property that $\{a,c\} \subseteq U_i$ and $\{b,d\} \subseteq U_j$ for some $i$ and $j$ (possibly equal). Write $\Gamma_\lambda = \M[N] \setminus \Delta_\lambda$; this is a simplicial complex. Define $\kappa : [N] \rightarrow [n]$ by $\kappa^{-1}(\{i\}) = U_i$. Extend $\kappa$ to $\Gamma_\lambda$ by defining \[ \kappa(\{a_1b_1, \ldots, a_rb_r\}) = \{\kappa(a_1)\kappa(b_1), \ldots, \kappa(a_r)\kappa(b_r)\}. \] \begin{lemma} \label{wellkappa-lem} We have that $\kappa$ is a dimension-preserving surjective map from $\Gamma_\lambda$ to $\BD{n}{\lambda}$. \end{lemma} \begin{proof} To see that $\kappa$ is dimension-preserving, note that $|\kappa(\sigma)| = |\sigma|$ whenever $\sigma$ belongs to $\Gamma_\lambda$. Namely, there are no multiple edges or multiple loops in $\kappa(\sigma)$ by definition of $\Gamma_\lambda$. Moreover, $\kappa(\sigma)$ belongs to $\BD{n}{\lambda}$, because for each $i \in [n]$, the degree in $\kappa(\sigma)$ of the vertex $i$ equals the sum of the degrees in $\sigma$ of all vertices in $U_i$; this is at most $|U_i| = \lambda_i$. To prove surjectivity, use a simple induction argument over $\lambda$; remove one edge at a time from a given graph in $\BD{n}{\lambda}$. \end{proof} \begin{lemma} For $\sigma, \tau \in \Gamma_\lambda$, we have that $\kappa(\sigma) = \kappa(\tau)$ if and only if there is a $g$ in $\Symm{\lambda}$ such that $g(\sigma) = \tau$. 
\label{kappa-lem} \end{lemma} \begin{proof} Clearly, $\kappa(\sigma) = \kappa(g(\sigma))$ for all $\sigma \in \Gamma_\lambda$ and $g \in \Symm{\lambda}$. For the other direction, write $\sigma = \{a_jb_j : j \in J\}$ and $\tau = \{a'_jb'_j : j \in J\}$, where $\kappa(a_j) = \kappa(a'_j)$ and $\kappa(b_j) = \kappa(b'_j)$ for each $j \in J$. Define $g(a_j) = a'_j$ and $g(b_j) = b'_j$ for each $j \in J$ and extend $g$ to a permutation on $[N]$ such that $g(U_i) = U_i$ for each $i \in [n]$. One easily checks that $g$ has the desired properties. \end{proof} For a set $\sigma$, we let $\oriented{\sigma}$ denote the oriented simplex corresponding to $\sigma$; fixing an order of the elements in $\sigma$, this is well-defined. Given an oriented simplex $\oriented{\sigma} = a_1b_1 \wedge \cdots \wedge a_rb_r$, we define \[ \kappa(\oriented{\sigma}) = \kappa(a_1)\kappa(b_1)\wedge \cdots \wedge \kappa(a_r)\kappa(b_r), \] thereby preserving orientation. Extend $\kappa$ linearly to a homomorphism $\tilde{\mathcal{C}}(\Gamma_\lambda) \rightarrow \tilde{\mathcal{C}}(\BD{n}{\lambda})$. \begin{lemma} \label{hatkappa-lem} The map $\hat{\kappa} : \tilde{\mathcal{C}}(\Gamma_\lambda)/\Symm{\lambda} \rightarrow \tilde{\mathcal{C}}(\BD{n}{\lambda})$ defined as $\hat{\kappa}([c]) = \kappa(c)$ is a chain complex isomorphism. \end{lemma} \begin{proof} First of all, one easily checks that $\hat{\kappa}$ is well-defined and commutes with the boundary operator; for the former property, note that $\kappa(\oriented{\sigma}) = \kappa(g(\oriented{\sigma}))$ for all $g \in \Symm{\lambda}$ and $\sigma \in \Gamma_\lambda$. Moreover, $\hat{\kappa}$ is surjective, because $\kappa$ is surjective by Lemma~\ref{wellkappa-lem}. 
Finally, to see that $\hat{\kappa}$ is injective, define $\mu : \tilde{\mathcal{C}}(\BD{n}{\lambda}) \rightarrow \tilde{\mathcal{C}}(\Gamma_\lambda)/\Symm{\lambda}$ as $\mu(c') = [c]$, where $c$ is any element in $\tilde{\mathcal{C}}(\Gamma_\lambda)$ such that $\kappa(c) = c'$; this is well-defined by Lemma~\ref{kappa-lem}. Since $\mu \circ \hat{\kappa}([c]) = \mu(\kappa(c)) = [c]$, injectivity follows. \end{proof} \begin{theorem} \label{split-thm} We have the chain complex isomorphism \[ \tilde{\mathcal{C}}(\M[N])/\Symm{\lambda} \cong \tilde{\mathcal{C}}(\BD{n}{\lambda}) \oplus \tilde{\mathcal{C}}(\Delta_\lambda)/\Symm{\lambda}. \] \end{theorem} \begin{proof} By Lemma~\ref{hatkappa-lem}, it suffices to prove that \[ \tilde{\mathcal{C}}(\M[N])/\Symm{\lambda} \cong \tilde{\mathcal{C}}(\Gamma_\lambda)/\Symm{\lambda} \oplus \tilde{\mathcal{C}}(\Delta_\lambda)/\Symm{\lambda}. \] Clearly, the boundary in $\tilde{\mathcal{C}}(\M[N])/\Symm{\lambda}$ of any element in $\tilde{\mathcal{C}}(\Gamma_\lambda)/\Symm{\lambda}$ is again an element in $\tilde{\mathcal{C}}(\Gamma_\lambda)/\Symm{\lambda}$, $\Gamma_\lambda$ being a subcomplex of $\M[N]$. It remains to prove that $[\partial(\oriented{\sigma})] \in \tilde{\mathcal{C}}(\Delta_\lambda)/\Symm{\lambda}$ for each $\sigma \in \Delta_\lambda$. Write $\oriented{\sigma} = a_1b_1 \wedge a_2b_2 \wedge \oriented{\tau}$, where $a_1,a_2 \in U_i$ and $b_1, b_2 \in U_j$ for some $i$ and $j$. We obtain that \[ \partial(\oriented{\sigma}) = a_2b_2 \wedge \oriented{\tau} - a_1b_1 \wedge \oriented{\tau} + a_1b_1\wedge a_2b_2 \wedge \partial(\oriented{\tau}). \] Since the group element $(a_1,a_2)(b_1,b_2)$ belongs to $\Symm{\lambda}$ and transforms $a_1b_1 \wedge \oriented{\tau}$ into $a_2b_2 \wedge \oriented{\tau}$, it follows that \[ \partial([\oriented{\sigma}]) = \left[a_1b_1\wedge a_2b_2 \wedge \partial(\oriented{\tau})\right], \] which is indeed an element in $\tilde{\mathcal{C}}(\Delta_\lambda)/\Symm{\lambda}$. 
\end{proof} \noindent \emph{Remark.} One may note that $\tilde{\mathcal{C}}(\Delta_\lambda)/\Symm{\lambda}$ is a chain complex of elementary $2$-groups. Namely, with notation as in the above proof, we have that $(a_1,a_2)(b_1,b_2)$ maps $\oriented{\sigma} = a_1b_1\wedge a_2b_2 \wedge \oriented{\tau}$ to $a_2b_2\wedge a_1b_1 \wedge \oriented{\tau} = -\oriented{\sigma}$. As a consequence, $[\oriented{\sigma}] = -[\oriented{\sigma}]$, which implies that $2[\oriented{\sigma}] = 0$. \begin{theorem} There is a homomorphism $\tilde{H}_d(\BD{n}{\lambda}) \rightarrow \tilde{H}_d(\M[N])$ such that the kernel has finite exponent dividing $\prod_{i=1}^n \lambda_i!$. \label{mbd-thm} \end{theorem} \begin{proof} This follows immediately from Lemma~\ref{matchexp-lem} and Theorem~\ref{split-thm}. \end{proof} Let us summarize the situation. \begin{corollary} We have a long exact sequence \[ \begin{CD} & & & \cdots & @>>> H_{d+1}(\tilde{\mathcal{C}}(\M[N])/\Symm{\lambda}) \\ @>>> H_d(\tilde{\mathcal{C}}(\M[N])^{\Symm{\lambda}}) @>>> \tilde{H}_d(\M[N]) @>\pi^*_d>> H_d(\tilde{\mathcal{C}}(\M[N])/\Symm{\lambda}) \\ @>>> H_{d-1}(\tilde{\mathcal{C}}(\M[N])^{\Symm{\lambda}}) @>>> \cdots , \end{CD} \] where \[ H_{d}(\tilde{\mathcal{C}}(\M[N])/\Symm{\lambda}) \cong \tilde{H}_d(\BD{n}{\lambda}) \oplus H_d(\tilde{\mathcal{C}}(\Delta_\lambda)/\Symm{\lambda}) \] and $\pi^*_d$ has an ``inverse'' $\varphi^*_d$ satisfying $\pi^*_d \circ \varphi^*_d = \prod_{i=1}^n \lambda_i! \cdot {\rm id}$. In particular, if $\prod_{i=1}^n \lambda_i!$ is a unit in the underlying coefficient ring, then \[ \begin{CD} \tilde{H}_d(\M[N]) \cong \tilde{H}_d(\BD{n}{\lambda}) \oplus H_d(\tilde{\mathcal{C}}(\M[N])^{\Symm{\lambda}}). \end{CD} \] \label{mbdexact-cor} \end{corollary} For the final statement, note that $\tilde{\mathcal{C}}(\Delta_\lambda)/\Symm{\lambda}$ is zero if $2$ is a unit in the underlying coefficient ring or if $\lambda = (1, \ldots, 1)$. 
\begin{proof}[Proof of Theorem~{\rm\ref{main-thm}}] By Theorem~\ref{ShWa-thm}, we already know that there are elements of order three in $\tilde{H}_{4}(\M[14];\mathbb{Z})$ and that the group is finite. Applying Theorem~\ref{andersen-thm}, we obtain that the exponent of $\tilde{H}_4(\BD{7}{2};\mathbb{Z})$ is five. Selecting $\lambda = (2,2,2,2,2,2,2)$ and noting that $\prod_i \lambda_i! = 128$ and $\gcd(5,128) = 1$, we are done by Theorem~\ref{mbd-thm}. \end{proof} Let $(2^a1^b)$ denote the sequence consisting of $a$ occurrences of the value 2 and $b$ occurrences of the value 1. One may try to obtain further information about the homology of $\M[14]$ by computing the homology of $\BD{14-a}{(2^a1^{14-2a})}$ for $a \le 6$. The ideal, of course, would be to compute the homology of $\M[14]$ directly, but this appears to be beyond the capacity of today's (standard) computers. Using the computer program {\tt CHomP} \cite{Pilar}, we managed to compute the $\mathbb{Z}_p$-homology of $\BD{8}{(2^6 1^2)}$ for $p \in \{2,3,5\}$, and the results suggest that $\tilde{H}_4(\BD{8}{(2^6 1^2)}; \mathbb{Z}) \cong \tilde{H}_4(\BD{7}{2};\mathbb{Z}) \cong \mathbb{Z}_5$. In particular, it seems that we cannot gather any additional information about the homology of $\M[14]$ from that of $\BD{8}{(2^61^2)}$. Via a calculation with the {\tt Homology} computer program \cite{Homoprog}, we discovered that \[ \tilde{H}_4(\BD{11}{(2^2 1^9)}; \mathbb{Z}) \cong \mathbb{Z}_3^{10} \oplus \mathbb{Z}^{6142}. \] By Theorem~\ref{mbd-thm} and well-known properties of the rational homology of $\M[13]$ \cite{Bouc}, this yields the following result: \begin{proposition} We have that $\tilde{H}_{4}(\M[13];\mathbb{Z}) \cong T \oplus \mathbb{Z}^{24596}$, where $T$ is a finite group containing $\mathbb{Z}_3^{10}$ as a subgroup. 
\label{m13-prop} \end{proposition} See Tables~\ref{matchbd1-fig} and \ref{matchbd2-fig} for more information about torsion in the homology of $\BD{a+b}{2^a1^{b}}$ for small values of $a$ and $b$. The numerical data in Table~\ref{matchbd1-fig} suggests that the Sylow $3$-subgroup of $\tilde{H}_{(n-5)/2}(\BD{n-a}{2^a1^{n-2a}}; \mathbb{Z})$ is an elementary $3$-group of rank $\binom{n-a-1}{(n+5)/2}$. \begin{table} \caption{Torsion subgroup of $\tilde{H}_i(\BD{n-a}{2^a1^{n-2a}}; \mathbb{Z})$ for $n = 2i+5$.} \begin{footnotesize} \begin{center} \begin{tabular}{|r||c|c|c|c|c|c|c|c|} \hline & & & & & & & & \\[-1.5ex] & $a=0$ & \ 1 \ & \ 2 \ & \ 3 \ & \ 4 \ & \ 5 \ & \ 6\ & \ 7 \ \\ \hline \hline & & & & & & & & \\[-2ex] $n = 3$ & $0$ & $0$ & - & - & - & - & - & - \\ \hline & & & & & & & & \\[-2ex] $5$ & $0$ & $0$ & $0$ & - & - & - & - & - \\ \hline & & & & & & & & \\[-2ex] $7$ & $\mathbb{Z}_3$ & $0$ & $0$ & $0$ & - & - & - & - \\ \hline & & & & & & & & \\[-2ex] $9$ & $\mathbb{Z}_3^8$ & $\mathbb{Z}_3$ & $0$ & $0$ & $0$ & - & - & - \\ \hline & & & & & & & & \\[-2ex] $11$ & $\mathbb{Z}_3^{45}$ & $\mathbb{Z}_3^9$ & $\mathbb{Z}_3$ & $0$ & $0$ & $0$ & - & - \\ \hline & & & & & & & & \\[-2ex] $13$ & ? & ? & $\mathbb{Z}_3^{10}$ & $\mathbb{Z}_3$ & $0$ & $0$ & $0$ & - \\ \hline & & & & & & & & \\[-2ex] $15$ & ? & ? & ? & ? & ? 
& $\mathbb{Z}_2$ & $0$ & $0$ \\ \hline \end{tabular} \end{center} \end{footnotesize} \label{matchbd1-fig} \end{table} \begin{table} \caption{Torsion subgroup of $\tilde{H}_i(\BD{n-a}{2^a1^{n-2a}}; \mathbb{Z})$ for $n = 2i+6$.} \begin{footnotesize} \begin{center} \begin{tabular}{|r||c|c|c|c|c|c|c|c|} \hline & & & & & & & & \\[-1.5ex] & $a=0$ & \ 1 \ & \ 2 \ & \ 3 \ & \ 4 \ & \ 5 \ & \ 6\ & \ 7 \ \\ \hline \hline & & & & & & & & \\[-2ex] $n = 2$ & $0$ & $0$ & - & - & - & - & - & - \\ \hline & & & & & & & & \\[-2ex] $4$ & $0$ & $0$ & $0$ & - & - & - & - & - \\ \hline & & & & & & & & \\[-2ex] $6$ & $0$ & $0$ & $0$ & $0$ & - & - & - & - \\ \hline & & & & & & & & \\[-2ex] $8$ & $0$ & $0$ & $0$ & $0$ & $0$ & - & - & - \\ \hline & & & & & & & & \\[-2ex] $10$ & $\mathbb{Z}_3$ & $0$ & $0$ & $0$ & $0$ & $0$ & - & - \\ \hline & & & & & & & & \\[-2ex] $12$ & $\mathbb{Z}_3^{56}$ & $\mathbb{Z}_3^{10}$ & $\mathbb{Z}_3$ & $0$ & $0$ & $0$ & $0$ & - \\ \hline & & & & & & & & \\[-2ex] $14$ & ? & ? & ? & ? & ? & ? & $\mathbb{Z}_5$? & $\mathbb{Z}_5$ \\ \hline \end{tabular} \end{center} \end{footnotesize} \label{matchbd2-fig} \end{table} \section{Remarks and further directions} \label{further-sec} Using {\tt CHomP} \cite{Pilar}, we managed to compute a generator $\gamma'$ for the homology group $\tilde{H}_4(\BD{7}{2};\mathbb{Z})\cong \mathbb{Z}_5$; \begin{eqnarray*} \gamma'\! &=& ([12,45,23] + [12,23,34] + [12,34,15] + [12,15,33] + [12,33,45] \\ &+& [22,33,15] + [22,15,34] + [22,34,11] + [22,11,45] + [22,45,33] \\ &+& [11,23,45] + [11,34,23]) \wedge (46-66) \wedge (57-77); \end{eqnarray*} $[ab,cd,ef] = ab \wedge cd \wedge ef$. 
Note that $\gamma' = \gamma/{\Symm{(2^7)}}$, where \begin{eqnarray*} \gamma &=& ([1\hat{2},5\hat{4},2\hat{3}] + [1\hat{2},2\hat{3},3\hat{4}] + [1\hat{2},3\hat{4},5\hat{1}] + [1\hat{2},5\hat{1},3\hat{3}] + [1\hat{2},3\hat{3},5\hat{4}] \\ &+& [2\hat{2},3\hat{3},5\hat{1}] + [2\hat{2},5\hat{1},3\hat{4}] + [2\hat{2},3\hat{4},1\hat{1}] + [2\hat{2},1\hat{1},5\hat{4}] + [2\hat{2},5\hat{4},3\hat{3}] \\ &+& [1\hat{1},2\hat{3},5\hat{4}] + [1\hat{1},3\hat{4},2\hat{3}]) \wedge (4\hat{6}-6\hat{6}) \wedge (7\hat{5}-7\hat{7}). \end{eqnarray*} Here, $\hat{i}$ denotes the vertex $i+7$, and the group action is given by the partition $\{U_i : i \in [7]\}$, where $U_i = \{i,\hat{i}\}$. Since $\tilde{H}_4(\M[14];\mathbb{Z})$ is finite, we conclude that $\gamma$ has finite exponent a multiple of five in $\tilde{H}_4(\M[14];\mathbb{Z})$. Note that we may view $\gamma$ as the product of one cycle in $\tilde{H}_2(\M[8];\mathbb{Z})$ and two cycles in $\tilde{H}_0(\M[3];\mathbb{Z})$ (defined on three disjoint vertex sets). Another observation is that we have the following portion of the long exact sequence for the pair $(\M[14],\M[13])$: \[ \begin{CD} \tilde{H}_4(\M[13]; \mathbb{Z}) @>>> \tilde{H}_4(\M[14]; \mathbb{Z}) @>>> \bigoplus_{13} \tilde{H}_3(\M[12]; \mathbb{Z}); \end{CD} \] see Bouc \cite{Bouc}. Since $\tilde{H}_3(\M[12]; \mathbb{Z})$ is an elementary $3$-group by the data in Table~\ref{matching-fig}, this yields that there must be some element $\delta$ in $\tilde{H}_4(\M[13];\mathbb{Z})$ such that $\delta$ is identical in $\tilde{H}_4(\M[14];\mathbb{Z})$ to $\gamma$ or $3\gamma$. Obviously, the exponent of $\delta$ in $\tilde{H}_4(\M[13];\mathbb{Z})$ is either infinite or a nonzero multiple of five; we conjecture the former. As mentioned in Section~\ref{intro-sec}, we do not know whether there is $5$-torsion in the homology of $\M[n]$ when $n \ge 13$ and $n \neq 14$. 
We would indeed have such torsion for all even $n \ge 16$ if \begin{equation} \tilde{H}_d(\M[n]; \mathbb{Z}) \cong \tilde{H}_d(\M[n] \setminus e; \mathbb{Z}) \oplus \tilde{H}_{d-1}(\M[n-2]; \mathbb{Z}) \label{onestepdecision-eq} \end{equation} for all even $n \ge 16$ and $d = n/2-3$. Here, $e$ is the edge between $n-1$ and $n$ and $\M[n] \setminus e$ is the complex obtained from $\M[n]$ by removing the $0$-cell $e$. Using a computer, we have verified (\ref{onestepdecision-eq}) for all $(n,d)$ such that $n \le 11$ and $d \ge 0$. Note that (\ref{onestepdecision-eq}) would follow if the sequence \[ \begin{CD} 0 \longrightarrow \tilde{H}_d(\M[n] \setminus e; \mathbb{Z}) @>>> \tilde{H}_d(\M[n]; \mathbb{Z}) @>>> \tilde{H}_{d-1}(\M[n-2]; \mathbb{Z}) \longrightarrow 0 \end{CD} \] turned out to be split exact. By the long exact sequence for the pair $(\M[n],\M[n] \setminus e)$, the mid-portion of this sequence is indeed exact. \end{document}
Quaternion In mathematics, the quaternion number system extends the complex numbers. Quaternions were first described by the Irish mathematician William Rowan Hamilton in 1843[1][2] and applied to mechanics in three-dimensional space. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space,[3] or, equivalently, as the quotient of two vectors.[4] Multiplication of quaternions is noncommutative.

Quaternion multiplication table (left column shows the premultiplier, top row shows the post-multiplier):

    ×    1    i    j    k
    1    1    i    j    k
    i    i   −1    k   −j
    j    j   −k   −1    i
    k    k    j   −i   −1

Note: This is a multiplication table, not a Cayley table, because inverses do not appear in the row or column headings.

Quaternions are generally represented in the form $a+b\ \mathbf {i} +c\ \mathbf {j} +d\ \mathbf {k} ,$ where a, b, c, and d are real numbers; and 1, i, j, and k are the basis vectors or basis elements.[5] Quaternions are used in pure mathematics, but also have practical uses in applied mathematics, particularly for calculations involving three-dimensional rotations, such as in three-dimensional computer graphics, computer vision, and crystallographic texture analysis.[6] They can be used alongside other methods of rotation, such as Euler angles and rotation matrices, or as an alternative to them, depending on the application. In modern mathematical language, quaternions form a four-dimensional associative normed division algebra over the real numbers, and therefore a ring, being both a division ring and a domain. The algebra of quaternions is often denoted by H (for Hamilton), or in blackboard bold by $\mathbb {H} .$ It can also be given by the Clifford algebra classifications $\operatorname {Cl} _{0,2}(\mathbb {R} )\cong \operatorname {Cl} _{3,0}^{+}(\mathbb {R} ).$ In fact, it was the first noncommutative division algebra to be discovered. 
According to the Frobenius theorem, the algebra $\mathbb {H} $ is one of only two finite-dimensional division rings containing a proper subring isomorphic to the real numbers; the other being the complex numbers. These rings are also Euclidean Hurwitz algebras, of which the quaternions are the largest associative algebra (and hence the largest ring). Further extending the quaternions yields the non-associative octonions, which is the last normed division algebra over the real numbers. (The sedenions, the extension of the octonions, have zero divisors and so cannot be a normed division algebra.)[7] The unit quaternions can be thought of as a choice of a group structure on the 3-sphere S3 that gives the group Spin(3), which is isomorphic to SU(2) and also to the universal cover of SO(3). History Main article: History of quaternions Quaternions were introduced by Hamilton in 1843.[8] Important precursors to this work included Euler's four-square identity (1748) and Olinde Rodrigues' parameterization of general rotations by four parameters (1840), but neither of these writers treated the four-parameter rotations as an algebra.[9][10] Carl Friedrich Gauss had also discovered quaternions in 1819, but this work was not published until 1900.[11][12] Hamilton knew that the complex numbers could be interpreted as points in a plane, and he was looking for a way to do the same for points in three-dimensional space. Points in space can be represented by their coordinates, which are triples of numbers, and for many years he had known how to add and subtract triples of numbers. However, for a long time, he had been stuck on the problem of multiplication and division. He could not figure out how to calculate the quotient of the coordinates of two points in space. 
In fact, Ferdinand Georg Frobenius later proved in 1877 that for a division algebra over the real numbers to be finite-dimensional and associative, it cannot be three-dimensional, and there are only three such division algebras: $\mathbb {R} $ (real numbers), $\mathbb {C} $ (complex numbers), and $\mathbb {H} $ (quaternions), which have dimensions 1, 2, and 4 respectively. The great breakthrough in quaternions finally came on Monday 16 October 1843 in Dublin, when Hamilton was on his way to the Royal Irish Academy to preside at a council meeting. As he walked along the towpath of the Royal Canal with his wife, the concepts behind quaternions were taking shape in his mind. When the answer dawned on him, Hamilton could not resist the urge to carve the formula for the quaternions, $\mathbf {i} ^{2}=\mathbf {j} ^{2}=\mathbf {k} ^{2}=\mathbf {i\,j\,k} =-1$ into the stone of Brougham Bridge as he paused on it. Although the carving has since faded away, there has been an annual pilgrimage since 1989 called the Hamilton Walk for scientists and mathematicians who walk from Dunsink Observatory to the Royal Canal bridge in remembrance of Hamilton's discovery. On the following day, Hamilton wrote a letter to his friend and fellow mathematician, John T. Graves, describing the train of thought that led to his discovery. This letter was later published in a letter to the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science;[13] Hamilton states: And here there dawned on me the notion that we must admit, in some sense, a fourth dimension of space for the purpose of calculating with triples ... An electric circuit seemed to close, and a spark flashed forth.[13] Hamilton called a quadruple with these rules of multiplication a quaternion, and he devoted most of the remainder of his life to studying and teaching them. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties. 
He founded a school of "quaternionists", and he tried to popularize quaternions in several books. The last and longest of his books, Elements of Quaternions,[14] was 800 pages long; it was edited by his son and published shortly after his death. After Hamilton's death, the Scottish mathematical physicist Peter Tait became the chief exponent of quaternions. At this time, quaternions were a mandatory examination topic in Dublin. Topics in physics and geometry that would now be described using vectors, such as kinematics in space and Maxwell's equations, were described entirely in terms of quaternions. There was even a professional research association, the Quaternion Society, devoted to the study of quaternions and other hypercomplex number systems. From the mid-1880s, quaternions began to be displaced by vector analysis, which had been developed by Josiah Willard Gibbs, Oliver Heaviside, and Hermann von Helmholtz. Vector analysis described the same phenomena as quaternions, so it borrowed some ideas and terminology liberally from the literature on quaternions. However, vector analysis was conceptually simpler and notationally cleaner, and eventually quaternions were relegated to a minor role in mathematics and physics. A side-effect of this transition is that Hamilton's work is difficult to comprehend for many modern readers. Hamilton's original definitions are unfamiliar and his writing style was wordy and difficult to follow. However, quaternions have had a revival since the late 20th century, primarily due to their utility in describing spatial rotations. The representations of rotations by quaternions are more compact and quicker to compute than the representations by matrices. In addition, unlike Euler angles, they are not susceptible to "gimbal lock". 
For this reason, quaternions are used in computer graphics,[15][16] computer vision, robotics,[17] control theory, signal processing, attitude control, physics, bioinformatics, molecular dynamics, computer simulations, and orbital mechanics. For example, it is common for the attitude control systems of spacecraft to be commanded in terms of quaternions. Quaternions have received another boost from number theory because of their relationships with the quadratic forms.[18] Quaternions in physics P.R. Girard's 1984 essay The quaternion group and modern physics[19] discusses some roles of quaternions in physics. The essay shows how various physical covariance groups, namely SO(3), the Lorentz group, the general theory of relativity group, the Clifford algebra SU(2) and the conformal group, can easily be related to the quaternion group in modern algebra. Girard began by discussing group representations and by representing some space groups of crystallography. He proceeded to kinematics of rigid body motion. Next he used complex quaternions (biquaternions) to represent the Lorentz group of special relativity, including the Thomas precession. He cited five authors, beginning with Ludwik Silberstein, who used a potential function of one quaternion variable to express Maxwell's equations in a single differential equation. Concerning general relativity, he expressed the Runge–Lenz vector. He mentioned the Clifford biquaternions (split-biquaternions) as an instance of Clifford algebra. Finally, invoking the reciprocal of a biquaternion, Girard described conformal maps on spacetime. Among the fifty references, Girard included Alexander Macfarlane and his Bulletin of the Quaternion Society. 
In 1999 he showed how Einstein's equations of general relativity could be formulated within a Clifford algebra that is directly linked to quaternions.[20] The finding of 1924 that in quantum mechanics the spin of an electron and other matter particles (known as spinors) can be described using quaternions (in the form of the famous Pauli spin matrices) furthered their interest; quaternions helped to understand how rotations of electrons by 360° can be discerned from those by 720° (the "Plate trick").[21][22] As of 2018, their use has not overtaken rotation groups.[lower-alpha 1] Definition A quaternion is an expression of the form $a+b\,\mathbf {i} +c\,\mathbf {j} +d\,\mathbf {k} \ ,$ where a, b, c, d, are real numbers, and i, j, k, are symbols that can be interpreted as unit-vectors pointing along the three spatial axes. In practice, if one of a, b, c, d is 0, the corresponding term is omitted; if a, b, c, d are all zero, the quaternion is the zero quaternion, denoted 0; if one of b, c, d equals 1, the corresponding term is written simply i, j, or k. Hamilton describes a quaternion $q=a+b\,\mathbf {i} +c\,\mathbf {j} +d\,\mathbf {k} $, as consisting of a scalar part and a vector part. The quaternion $b\,\mathbf {i} +c\,\mathbf {j} +d\,\mathbf {k} $ is called the vector part (sometimes imaginary part) of q, and a is the scalar part (sometimes real part) of q. A quaternion that equals its real part (that is, its vector part is zero) is called a scalar or real quaternion, and is identified with the corresponding real number. That is, the real numbers are embedded in the quaternions. (More properly, the field of real numbers is isomorphic to a subset of the quaternions. The field of complex numbers is also isomorphic to three subsets of quaternions.)[23] A quaternion that equals its vector part is called a vector quaternion. 
The set of quaternions is a 4-dimensional vector space over the real numbers, with $\left\{1,\mathbf {i} ,\mathbf {j} ,\mathbf {k} \right\}$ as a basis, by the component-wise addition ${\begin{aligned}&(a_{1}+b_{1}\,\mathbf {i} +c_{1}\,\mathbf {j} +d_{1}\,\mathbf {k} )+(a_{2}+b_{2}\,\mathbf {i} +c_{2}\,\mathbf {j} +d_{2}\,\mathbf {k} )\\[3mu]&\qquad =(a_{1}+a_{2})+(b_{1}+b_{2})\,\mathbf {i} +(c_{1}+c_{2})\,\mathbf {j} +(d_{1}+d_{2})\,\mathbf {k} ,\end{aligned}}$ and the component-wise scalar multiplication $\lambda (a+b\,\mathbf {i} +c\,\mathbf {j} +d\,\mathbf {k} )=\lambda a+(\lambda b)\,\mathbf {i} +(\lambda c)\,\mathbf {j} +(\lambda d)\,\mathbf {k} .$ A multiplicative group structure, called the Hamilton product, denoted by juxtaposition, can be defined on the quaternions in the following way:

• The real quaternion 1 is the identity element.

• The real quaternions commute with all other quaternions, that is aq = qa for every quaternion q and every real quaternion a. In algebraic terminology this is to say that the field of real quaternions is the center of this quaternion algebra.

• The product is first given for the basis elements (see next subsection), and then extended to all quaternions by using the distributive property and the center property of the real quaternions. The Hamilton product is not commutative, but is associative, thus the quaternions form an associative algebra over the real numbers.

• Additionally, every nonzero quaternion has an inverse with respect to the Hamilton product: $(a+b\,\mathbf {i} +c\,\mathbf {j} +d\,\mathbf {k} )^{-1}={\frac {1}{a^{2}+b^{2}+c^{2}+d^{2}}}\,(a-b\,\mathbf {i} -c\,\mathbf {j} -d\,\mathbf {k} ).$

Thus the quaternions form a division algebra. 
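The operations just listed can be sketched in a few lines of Python. This is an illustration of ours, not code from the article; quaternions are stored as 4-tuples (a, b, c, d) of coefficients of 1, i, j, k:

```python
def qadd(p, q):
    """Component-wise addition."""
    return tuple(x + y for x, y in zip(p, q))

def qscale(t, q):
    """Component-wise scalar multiplication."""
    return tuple(t * x for x in q)

def qmul(p, q):
    """Hamilton product in components."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(q):
    """Inverse of a nonzero quaternion, as in the formula above."""
    a, b, c, d = q
    n2 = a*a + b*b + c*c + d*d
    return (a/n2, -b/n2, -c/n2, -d/n2)

one = (1.0, 0.0, 0.0, 0.0)
q = (1.0, 2.0, 3.0, 4.0)
assert all(abs(x - y) < 1e-12 for x, y in zip(qmul(q, qinv(q)), one))
assert all(abs(x - y) < 1e-12 for x, y in zip(qmul(qinv(q), q), one))
assert qadd(q, qscale(-1.0, q)) == (0.0, 0.0, 0.0, 0.0)
```

The two assertions on `qinv` check that the stated inverse really is a two-sided inverse.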
Multiplication of basis elements The multiplication with 1 of the basis elements i, j, and k is defined by the fact that 1 is a multiplicative identity, that is, $\mathbf {i} \,1=1\,\mathbf {i} =\mathbf {i} ,\qquad \mathbf {j} \,1=1\,\mathbf {j} =\mathbf {j} ,\qquad \mathbf {k} \,1=1\,\mathbf {k} =\mathbf {k} \,.$ The products of other basis elements are ${\begin{aligned}\mathbf {i} ^{2}&=\mathbf {j} ^{2}=\mathbf {k} ^{2}=-1,\\[5mu]\mathbf {i\,j} &=-\mathbf {j\,i} =\mathbf {k} ,\qquad \mathbf {j\,k} =-\mathbf {k\,j} =\mathbf {i} ,\qquad \mathbf {k\,i} =-\mathbf {i\,k} =\mathbf {j} .\end{aligned}}$ Combining these rules, ${\begin{aligned}\mathbf {i\,j\,k} &=-1.\end{aligned}}$ Center The center of a noncommutative ring is the subring of elements c such that cx = xc for every x. The center of the quaternion algebra is the subfield of real quaternions. In fact, it is a part of the definition that the real quaternions belong to the center. Conversely, if q = a + b i + c j + d k belongs to the center, then $0=\mathbf {i} \,q-q\,\mathbf {i} =2c\,\mathbf {ij} +2d\,\mathbf {ik} =2c\,\mathbf {k} -2d\,\mathbf {j} \,,$ and c = d = 0. A similar computation with j instead of i shows that one has also b = 0. Thus q = a is a real quaternion. The quaternions form a division algebra. This means that the non-commutativity of multiplication is the only property that makes quaternions different from a field. This non-commutativity has some unexpected consequences, among them that a polynomial equation over the quaternions can have more distinct solutions than the degree of the polynomial. For example, the equation z² + 1 = 0, has infinitely many quaternion solutions, which are the quaternions z = b i + c j + d k such that b² + c² + d² = 1. Thus these "roots of –1" form a unit sphere in the three-dimensional space of vector quaternions. 
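With the same 4-tuple convention (a, b, c, d) for a + b i + c j + d k, the basis relations and the "roots of −1" claim can be checked numerically. This is our own illustrative sketch, not code from the article:

```python
def qmul(p, q):
    """Hamilton product in components."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k = -ji
assert qmul(j, k) == i and qmul(k, i) == j
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
assert qmul(qmul(i, j), k) == minus_one                  # ijk = -1

# A vector quaternion b i + c j + d k with b^2 + c^2 + d^2 = 1 squares to -1:
b, c, d = 2/3, 2/3, 1/3                                  # 4/9 + 4/9 + 1/9 = 1
z = (0, b, c, d)
assert all(abs(x - y) < 1e-12 for x, y in zip(qmul(z, z), minus_one))
```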
Hamilton product For two elements a₁ + b₁i + c₁j + d₁k and a₂ + b₂i + c₂j + d₂k, their product, called the Hamilton product (a₁ + b₁i + c₁j + d₁k) (a₂ + b₂i + c₂j + d₂k), is determined by the products of the basis elements and the distributive law. The distributive law makes it possible to expand the product so that it is a sum of products of basis elements. This gives the following expression: ${\begin{alignedat}{4}&a_{1}a_{2}&&+a_{1}b_{2}\mathbf {i} &&+a_{1}c_{2}\mathbf {j} &&+a_{1}d_{2}\mathbf {k} \\{}+{}&b_{1}a_{2}\mathbf {i} &&+b_{1}b_{2}\mathbf {i} ^{2}&&+b_{1}c_{2}\mathbf {ij} &&+b_{1}d_{2}\mathbf {ik} \\{}+{}&c_{1}a_{2}\mathbf {j} &&+c_{1}b_{2}\mathbf {ji} &&+c_{1}c_{2}\mathbf {j} ^{2}&&+c_{1}d_{2}\mathbf {jk} \\{}+{}&d_{1}a_{2}\mathbf {k} &&+d_{1}b_{2}\mathbf {ki} &&+d_{1}c_{2}\mathbf {kj} &&+d_{1}d_{2}\mathbf {k} ^{2}\end{alignedat}}$ Now the basis elements can be multiplied using the rules given above to get:[8] ${\begin{alignedat}{4}&a_{1}a_{2}&&-b_{1}b_{2}&&-c_{1}c_{2}&&-d_{1}d_{2}\\{}+{}(&a_{1}b_{2}&&+b_{1}a_{2}&&+c_{1}d_{2}&&-d_{1}c_{2})\mathbf {i} \\{}+{}(&a_{1}c_{2}&&-b_{1}d_{2}&&+c_{1}a_{2}&&+d_{1}b_{2})\mathbf {j} \\{}+{}(&a_{1}d_{2}&&+b_{1}c_{2}&&-c_{1}b_{2}&&+d_{1}a_{2})\mathbf {k} \end{alignedat}}$ The product of two rotation quaternions[24] will be equivalent to the rotation a₂ + b₂i + c₂j + d₂k followed by the rotation a₁ + b₁i + c₁j + d₁k. Scalar and vector parts A quaternion of the form a + 0 i + 0 j + 0 k, where a is a real number, is called scalar, and a quaternion of the form 0 + b i + c j + d k, where b, c, and d are real numbers, and at least one of b, c or d is nonzero, is called a vector quaternion. If a + b i + c j + d k is any quaternion, then a is called its scalar part and b i + c j + d k is called its vector part. Even though every quaternion can be viewed as a vector in a four-dimensional vector space, it is common to refer to the vector part as vectors in three-dimensional space. 
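The remark about composing rotations can be illustrated as follows. This is a hedged sketch of ours (not from this excerpt), using the standard conjugation action v ↦ q v q* by which a unit quaternion q rotates a 3-vector v:

```python
import math

def qmul(p, q):
    """Hamilton product in components, with quaternions as 4-tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def rotate(q, v):
    """Rotate the 3-vector v by the unit quaternion q via v -> q v q*."""
    w = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return w[1:]

half = math.pi / 4                                  # half of a 90-degree angle
q_z = (math.cos(half), 0.0, 0.0, math.sin(half))    # 90 degrees about z
q_x = (math.cos(half), math.sin(half), 0.0, 0.0)    # 90 degrees about x

# Rotating the x-axis by q_z gives the y-axis:
assert all(abs(u - w) < 1e-12
           for u, w in zip(rotate(q_z, (1.0, 0.0, 0.0)), (0.0, 1.0, 0.0)))

# Applying q_z, then q_x, equals applying the single product q_x q_z:
v = (0.3, -1.2, 2.0)
assert all(abs(u - w) < 1e-12
           for u, w in zip(rotate(q_x, rotate(q_z, v)),
                           rotate(qmul(q_x, q_z), v)))
```

The final assertion is exactly the composition rule quoted above: the single quaternion product q_x q_z encodes the composite rotation.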
With this convention, a vector is the same as an element of the vector space $\mathbb {R} ^{3}.$[lower-alpha 2] Hamilton also called vector quaternions right quaternions[26][27] and real numbers (considered as quaternions with zero vector part) scalar quaternions. If a quaternion is divided up into a scalar part and a vector part, that is, $\mathbf {q} =(r,\ {\vec {v}}),~~\mathbf {q} \in \mathbb {H} ,~~r\in \mathbb {R} ,~~{\vec {v}}\in \mathbb {R} ^{3},$ then the formulas for addition, multiplication, and multiplicative inverse are ${\begin{aligned}(r_{1},\ {\vec {v}}_{1})+(r_{2},\ {\vec {v}}_{2})&=(r_{1}+r_{2},\ {\vec {v}}_{1}+{\vec {v}}_{2})\,,\\[5mu](r_{1},\ {\vec {v}}_{1})(r_{2},\ {\vec {v}}_{2})&=(r_{1}r_{2}-{\vec {v}}_{1}\cdot {\vec {v}}_{2},\ r_{1}{\vec {v}}_{2}+r_{2}{\vec {v}}_{1}+{\vec {v}}_{1}\times {\vec {v}}_{2})\,,\\[5mu](r,{\vec {v}})^{-1}&=\left({\frac {r}{r^{2}+{\vec {v}}\cdot {\vec {v}}}},{\frac {-{\vec {v}}}{r^{2}+{\vec {v}}\cdot {\vec {v}}}}\right)\,,\end{aligned}}$ where "$\cdot $" and "$\times $" denote respectively the dot product and the cross product. Conjugation, the norm, and reciprocal Conjugation of quaternions is analogous to conjugation of complex numbers and to transposition (also known as reversal) of elements of Clifford algebras. To define it, let $q=a+b\,\mathbf {i} +c\,\mathbf {j} +d\,\mathbf {k} $ be a quaternion. The conjugate of q is the quaternion $q^{*}=a-b\,\mathbf {i} -c\,\mathbf {j} -d\,\mathbf {k} $. It is denoted by q∗, qt, ${\tilde {q}}$, or $\bar{q}$.[8] Conjugation is an involution, meaning that it is its own inverse, so conjugating an element twice returns the original element. The conjugate of a product of two quaternions is the product of the conjugates in the reverse order. That is, if p and q are quaternions, then (pq)∗ = q∗p∗, not p∗q∗. 
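The equivalence of the scalar–vector multiplication formula with the component formula is easy to machine-check. The sketch below (ours, not from the article) compares the two on integer inputs, where the arithmetic is exact:

```python
def qmul(p, q):
    """Hamilton product in components."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def qmul_sv(p, q):
    """(r1, v1)(r2, v2) = (r1 r2 - v1.v2,  r1 v2 + r2 v1 + v1 x v2)."""
    r1, v1 = p[0], p[1:]
    r2, v2 = q[0], q[1:]
    w = cross(v1, v2)
    return (r1*r2 - dot(v1, v2),) + tuple(
        r1*y + r2*x + z for x, y, z in zip(v1, v2, w))

p, q = (1, 2, 3, 4), (5, -6, 7, -8)
assert qmul_sv(p, q) == qmul(p, q)      # exact agreement on integer inputs
assert qmul_sv(q, p) == qmul(q, p)
```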
The conjugation of a quaternion, in stark contrast to the complex setting, can be expressed with multiplication and addition of quaternions: $q^{*}=-{\frac {1}{2}}(q+\,\mathbf {i} \,q\,\mathbf {i} +\,\mathbf {j} \,q\,\mathbf {j} +\,\mathbf {k} \,q\,\mathbf {k} ).$ Conjugation can be used to extract the scalar and vector parts of a quaternion. The scalar part of p is 1/2(p + p∗), and the vector part of p is 1/2(p − p∗). The square root of the product of a quaternion with its conjugate is called its norm and is denoted ‖q‖ (Hamilton called this quantity the tensor of q, but this conflicts with the modern meaning of "tensor"). In formulas, this is expressed as follows: $\lVert q\rVert ={\sqrt {\,qq^{*}~}}={\sqrt {\,q^{*}q~}}={\sqrt {\,a^{2}+b^{2}+c^{2}+d^{2}~}}$ This is always a non-negative real number, and it is the same as the Euclidean norm on $\mathbb {H} $ considered as the vector space $\mathbb {R} ^{4}$. Multiplying a quaternion by a real number scales its norm by the absolute value of the number. That is, if α is real, then $\lVert \alpha q\rVert =\left|\alpha \right|\,\lVert q\rVert ~.$ This is a special case of the fact that the norm is multiplicative, meaning that $\lVert pq\rVert =\lVert p\rVert \,\lVert q\rVert $ for any two quaternions p and q. Multiplicativity is a consequence of the formula for the conjugate of a product. Alternatively it follows from the identity $\det {\begin{pmatrix}a+ib&id+c\\id-c&a-ib\end{pmatrix}}=a^{2}+b^{2}+c^{2}+d^{2},$ (where i denotes the usual imaginary unit) and hence from the multiplicative property of determinants of square matrices. This norm makes it possible to define the distance d(p, q) between p and q as the norm of their difference: $d(p,q)=\lVert p-q\rVert ~.$ This makes $\mathbb {H} $ a metric space. Addition and multiplication are continuous in regard to the associated metric topology. 
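The conjugate, the norm, and the multiplicativity of the norm can be checked with a short sketch (illustrative code, not part of the article; mul restates the Hamilton product so the snippet stands alone):

```python
import math

def mul(p, q):
    # Hamilton product on (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    # q* = a - b i - c j - d k
    a, b, c, d = q
    return (a, -b, -c, -d)

def norm(q):
    # sqrt(q q*) equals the Euclidean norm on R^4.
    return math.sqrt(sum(x * x for x in q))
```

Numerically one can confirm both ‖pq‖ = ‖p‖‖q‖ and the reversal rule (pq)∗ = q∗p∗.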
This follows with exactly the same proof as for the real numbers $\mathbb {R} $ from the fact that $\mathbb {H} $ is a normed algebra. Unit quaternion Main article: Versor A unit quaternion is a quaternion of norm one. Dividing a nonzero quaternion q by its norm produces a unit quaternion Uq called the versor of q: $\mathbf {U} q={\frac {q}{\lVert q\rVert }}.$ Every nonzero quaternion has a unique polar decomposition $q=\lVert q\rVert \cdot \mathbf {U} q$, while the zero quaternion can be formed from any unit quaternion. Using conjugation and the norm makes it possible to define the reciprocal of a nonzero quaternion. The product of a quaternion with its reciprocal should equal 1, and the considerations above imply that the product of $q$ and $q^{*}/\left\Vert q\right\|^{2}$ is 1 (for either order of multiplication). So the reciprocal of q is defined to be $q^{-1}={\frac {q^{*}}{\lVert q\rVert ^{2}}}.$ This makes it possible to divide two quaternions p and q in two different ways (when q is nonzero). That is, their quotient can be either p q−1 or q−1p; in general, those products are different, depending on the order of multiplication, except for the special case that p and q are scalar multiples of each other (which includes the case where p = 0). Hence, the notation p/q is ambiguous because it does not specify whether q divides on the left or the right (whether q−1 multiplies p on its left or its right). Algebraic properties The set $\mathbb {H} $ of all quaternions is a vector space over the real numbers with dimension 4.[lower-alpha 3] Multiplication of quaternions is associative and distributes over vector addition, but with the exception of the scalar subset, it is not commutative. Therefore, the quaternions $\mathbb {H} $ are a non-commutative, associative algebra over the real numbers. Even though $\mathbb {H} $ contains copies of the complex numbers, it is not an associative algebra over the complex numbers.
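The reciprocal formula, and the fact that left and right quotients generally differ, can be sketched as follows (illustrative code, not part of the article; mul restates the Hamilton product so the snippet stands alone):

```python
def mul(p, q):
    # Hamilton product on (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def inv(q):
    # q^(-1) = q* / ||q||^2, for nonzero q
    a, b, c, d = q
    n2 = a*a + b*b + c*c + d*d
    return (a / n2, -b / n2, -c / n2, -d / n2)
```

Both mul(q, inv(q)) and mul(inv(q), q) return 1, while mul(p, inv(q)) and mul(inv(q), p) differ for generic p and q.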
Because it is possible to divide quaternions, they form a division algebra. This is a structure similar to a field except for the non-commutativity of multiplication. Finite-dimensional associative division algebras over the real numbers are very rare. The Frobenius theorem states that there are exactly three: $\mathbb {R} $, $\mathbb {C} $, and $\mathbb {H} $. The norm makes the quaternions into a normed algebra, and normed division algebras over the real numbers are also very rare: Hurwitz's theorem says that there are only four: $\mathbb {R} $, $\mathbb {C} $, $\mathbb {H} $, and $\mathbb {O} $ (the octonions). The quaternions are also an example of a composition algebra and of a unital Banach algebra. Because the product of any two basis vectors is plus or minus another basis vector, the set {±1, ±i, ±j, ±k} forms a group under multiplication. This non-abelian group is called the quaternion group and is denoted Q8.[28] The real group ring of Q8 is a ring $\mathbb {R} [\mathrm {Q} _{8}]$ which is also an eight-dimensional vector space over $\mathbb {R} .$ It has one basis vector for each element of $\mathrm {Q} _{8}.$ The quaternions are isomorphic to the quotient ring of $\mathbb {R} [\mathrm {Q} _{8}]$ by the ideal generated by the elements 1 + (−1), i + (−i), j + (−j), and k + (−k). Here the first term in each of the differences is one of the basis elements 1, i, j, and k, and the second term is one of basis elements −1, −i, −j, and −k, not the additive inverses of 1, i, j, and k. Quaternions and the space geometry The vector part of a quaternion can be interpreted as a coordinate vector in $\mathbb {R} ^{3};$ therefore, the algebraic operations of the quaternions reflect the geometry of $\mathbb {R} ^{3}.$ Operations such as the vector dot and cross products can be defined in terms of quaternions, and this makes it possible to apply quaternion techniques wherever spatial vectors arise. 
A useful application of quaternions has been to interpolate the orientations of key-frames in computer graphics.[15] For the remainder of this section, i, j, and k will denote both the three imaginary[29] basis vectors of $\mathbb {H} $ and a basis for $\mathbb {R} ^{3}.$ Replacing i by −i, j by −j, and k by −k sends a vector to its additive inverse, so the additive inverse of a vector is the same as its conjugate as a quaternion. For this reason, conjugation is sometimes called the spatial inverse. For two vector quaternions p = b1i + c1j + d1k and q = b2i + c2j + d2k their dot product, by analogy to vectors in $\mathbb {R} ^{3},$ is $p\cdot q=b_{1}b_{2}+c_{1}c_{2}+d_{1}d_{2}~.$ It can also be expressed in a component-free manner as $p\cdot q=\textstyle {\frac {1}{2}}(p^{*}q+q^{*}p)=\textstyle {\frac {1}{2}}(pq^{*}+qp^{*}).$ This is equal to the scalar parts of the products pq∗, qp∗, p∗q, and q∗p. Note that their vector parts are different. The cross product of p and q relative to the orientation determined by the ordered basis i, j, and k is $p\times q=(c_{1}d_{2}-d_{1}c_{2})\mathbf {i} +(d_{1}b_{2}-b_{1}d_{2})\mathbf {j} +(b_{1}c_{2}-c_{1}b_{2})\mathbf {k} \,.$ (Recall that the orientation is necessary to determine the sign.) This is equal to the vector part of the product pq (as quaternions), as well as the vector part of −q∗p∗. It also has the formula $p\times q=\textstyle {\tfrac {1}{2}}(pq-qp).$ For the commutator, [p, q] = pq − qp, of two vector quaternions one obtains $[p,q]=2p\times q.$ In general, let p and q be quaternions and write ${\begin{aligned}p&=p_{\text{s}}+p_{\text{v}},\\[5mu]q&=q_{\text{s}}+q_{\text{v}},\end{aligned}}$ where ps and qs are the scalar parts, and pv and qv are the vector parts of p and q. 
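The component-free expressions for the dot and cross products can be verified with a small sketch (illustrative code, not part of the article; mul and conj restate the Hamilton product and conjugate so the snippet stands alone):

```python
def mul(p, q):
    # Hamilton product on (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def dot(p, q):
    # For vector quaternions p, q: the scalar part of (p* q + q* p) / 2.
    return (mul(conj(p), q)[0] + mul(conj(q), p)[0]) / 2

def cross(p, q):
    # For vector quaternions: the vector part of (p q - q p) / 2.
    pq, qp = mul(p, q), mul(q, p)
    return tuple((x - y) / 2 for x, y in zip(pq, qp))[1:]
```

The commutator identity [p, q] = 2 p × q follows immediately from the cross definition.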
Then we have the formula $pq=(pq)_{\text{s}}+(pq)_{\text{v}}=(p_{\text{s}}q_{\text{s}}-p_{\text{v}}\cdot q_{\text{v}})+(p_{\text{s}}q_{\text{v}}+q_{\text{s}}p_{\text{v}}+p_{\text{v}}\times q_{\text{v}}).$ This shows that the noncommutativity of quaternion multiplication comes from the multiplication of vector quaternions. It also shows that two quaternions commute if and only if their vector parts are collinear. Hamilton[30] showed that this product computes the third vertex of a spherical triangle from two given vertices and their associated arc-lengths, which is also an algebra of points in Elliptic geometry. Unit quaternions can be identified with rotations in $\mathbb {R} ^{3}$ and were called versors by Hamilton.[30] Also see Quaternions and spatial rotation for more information about modeling three-dimensional rotations using quaternions. See Hanson (2005)[31] for visualization of quaternions. Matrix representations Just as complex numbers can be represented as matrices, so can quaternions. There are at least two ways of representing quaternions as matrices in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplication. One is to use 2 × 2 complex matrices, and the other is to use 4 × 4 real matrices. In each case, the representation given is one of a family of linearly related representations. In the terminology of abstract algebra, these are injective homomorphisms from $\mathbb {H} $ to the matrix rings M(2,C) and M(4,R), respectively. Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as ${\begin{bmatrix}a+bi&c+di\\-c+di&a-bi\end{bmatrix}}.$ Note that the "i" of the complex numbers is distinct from the "i" of the quaternions. This representation has the following properties: • Constraining any two of b, c and d to zero produces a representation of complex numbers. 
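The 2 × 2 complex representation can be checked numerically with a short sketch (illustrative code, not part of the article; mul restates the Hamilton product so the snippet stands alone):

```python
def mul(p, q):
    # Hamilton product on (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def to_matrix(q):
    # a + b i + c j + d k  ->  [[a+bi, c+di], [-c+di, a-bi]] over C.
    a, b, c, d = q
    return ((complex(a, b), complex(c, d)),
            (complex(-c, d), complex(a, -b)))

def matmul2(m, n):
    # 2x2 complex matrix product.
    return tuple(tuple(sum(m[r][t] * n[t][s] for t in range(2)) for s in range(2))
                 for r in range(2))
```

Transporting a quaternion product through to_matrix agrees with the matrix product, and the determinant of the image equals the squared norm.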
For example, setting c = d = 0 produces a diagonal complex matrix representation of complex numbers, and setting b = d = 0 produces a real matrix representation. • The norm of a quaternion (the square root of the product with its conjugate, as with complex numbers) is the square root of the determinant of the corresponding matrix.[32] • The conjugate of a quaternion corresponds to the conjugate transpose of the matrix. • By restriction this representation yields an isomorphism between the subgroup of unit quaternions and their image SU(2). Topologically, the unit quaternions are the 3-sphere, so the underlying space of SU(2) is also a 3-sphere. The group SU(2) is important for describing spin in quantum mechanics; see Pauli matrices. • There is a strong relation between quaternion units and Pauli matrices. The eight quaternion unit matrices are obtained by taking a, b, c, and d, setting three of them to zero and the fourth to 1 or −1. Multiplying any two Pauli matrices always yields a quaternion unit matrix; every unit except −1 arises this way. One obtains −1 via i2 = j2 = k2 = i j k = −1; e.g. the last equality is $ijk=\sigma _{1}\sigma _{2}\sigma _{3}\sigma _{1}\sigma _{2}\sigma _{3}=-1.$ Using 4 × 4 real matrices, that same quaternion can be written as ${\begin{aligned}{\begin{bmatrix}a&-b&-c&-d\\b&a&-d&c\\c&d&a&-b\\d&-c&b&a\end{bmatrix}}&=a{\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}+b{\begin{bmatrix}0&-1&0&0\\1&0&0&0\\0&0&0&-1\\0&0&1&0\end{bmatrix}}\\[10mu]&\qquad +c{\begin{bmatrix}0&0&-1&0\\0&0&0&1\\1&0&0&0\\0&-1&0&0\end{bmatrix}}+d{\begin{bmatrix}0&0&0&-1\\0&0&-1&0\\0&1&0&0\\1&0&0&0\end{bmatrix}}.\end{aligned}}$ However, the representation of quaternions in M(4,R) is not unique.
For example, the same quaternion can also be represented as ${\begin{aligned}{\begin{bmatrix}a&d&-b&-c\\-d&a&c&-b\\b&-c&a&-d\\c&b&d&a\end{bmatrix}}&=a{\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}+b{\begin{bmatrix}0&0&-1&0\\0&0&0&-1\\1&0&0&0\\0&1&0&0\end{bmatrix}}\\[10mu]&\qquad +c{\begin{bmatrix}0&0&0&-1\\0&0&1&0\\0&-1&0&0\\1&0&0&0\end{bmatrix}}+d{\begin{bmatrix}0&1&0&0\\-1&0&0&0\\0&0&0&-1\\0&0&1&0\end{bmatrix}}.\end{aligned}}$ There exist 48 distinct matrix representations of this form in which one of the matrices represents the scalar part and the other three are all skew-symmetric. More precisely, there are 48 sets of quadruples of matrices with these symmetry constraints such that a function sending 1, i, j, and k to the matrices in the quadruple is a homomorphism, that is, it sends sums and products of quaternions to sums and products of matrices.[33] In this representation, the conjugate of a quaternion corresponds to the transpose of the matrix. The fourth power of the norm of a quaternion is the determinant of the corresponding matrix. As with the 2 × 2 complex representation above, complex numbers can again be produced by constraining the coefficients suitably; for example, as block diagonal matrices with two 2 × 2 blocks by setting c = d = 0. Each 4×4 matrix representation of quaternions corresponds to a multiplication table of unit quaternions. 
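The first 4 × 4 real representation above can likewise be checked numerically (illustrative code, not part of the article; mul restates the Hamilton product so the snippet stands alone):

```python
def mul(p, q):
    # Hamilton product on (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def to_mat4(q):
    # First 4x4 real representation given above; it is the matrix of
    # left multiplication by q acting on column vectors (a, b, c, d).
    a, b, c, d = q
    return [[a, -b, -c, -d],
            [b,  a, -d,  c],
            [c,  d,  a, -b],
            [d, -c,  b,  a]]

def matmul4(m, n):
    # 4x4 real matrix product.
    return [[sum(m[r][t] * n[t][s] for t in range(4)) for s in range(4)]
            for r in range(4)]
```

In this representation the transpose of the matrix corresponds to the conjugate of the quaternion.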
For example, the last matrix representation given above corresponds to the multiplication table

  ×  |  a    d   −b   −c
  a  |  a    d   −b   −c
 −d  | −d    a    c   −b
  b  |  b   −c    a   −d
  c  |  c    b    d    a

which is isomorphic — through $\{a\mapsto 1,b\mapsto i,c\mapsto j,d\mapsto k\}$ — to

  ×  |  1    k   −i   −j
  1  |  1    k   −i   −j
 −k  | −k    1    j   −i
  i  |  i   −j    1   −k
  j  |  j    i    k    1

If any such multiplication table is constrained to have the identity in the first row and column, with the signs of the row headers opposite to those of the column headers, then there are 3 possible choices for the second column (ignoring sign), 2 possible choices for the third column (ignoring sign), and 1 possible choice for the fourth column (ignoring sign); that makes 6 possibilities. Then, the second column can be chosen to be either positive or negative, the third column can be chosen to be positive or negative, and the fourth column can be chosen to be positive or negative, giving 8 possibilities for the sign. Multiplying the possibilities for the letter positions and for their signs yields 48. Then replacing 1 with a, i with b, j with c, and k with d and removing the row and column headers yields a matrix representation of a + b i + c j + d k. Lagrange's four-square theorem Main article: Lagrange's four-square theorem Quaternions are also used in one of the proofs of Lagrange's four-square theorem in number theory, which states that every nonnegative integer is the sum of four integer squares. As well as being an elegant theorem in its own right, Lagrange's four-square theorem has useful applications in areas of mathematics outside number theory, such as combinatorial design theory. The quaternion-based proof uses Hurwitz quaternions, a subring of the ring of all quaternions for which there is an analog of the Euclidean algorithm. Quaternions as pairs of complex numbers Main article: Cayley–Dickson construction Quaternions can be represented as pairs of complex numbers.
From this perspective, quaternions are the result of applying the Cayley–Dickson construction to the complex numbers. This is a generalization of the construction of the complex numbers as pairs of real numbers. Let $\mathbb {C} ^{2}$ be a two-dimensional vector space over the complex numbers. Choose a basis consisting of two elements 1 and j. A vector in $\mathbb {C} ^{2}$ can be written in terms of the basis elements 1 and j as $(a+bi)1+(c+di)\mathbf {j} \,.$ If we define j2 = −1 and i j = −j i, then we can multiply two vectors using the distributive law. Using k as an abbreviated notation for the product i j leads to the same rules for multiplication as the usual quaternions. Therefore, the above vector of complex numbers corresponds to the quaternion a + b i + c j + d k. If we write the elements of $\mathbb {C} ^{2}$ as ordered pairs and quaternions as quadruples, then the correspondence is $(a+bi,\ c+di)\leftrightarrow (a,b,c,d).$ Square roots Square roots of −1 In the complex numbers, $\mathbb {C} ,$ there are just two numbers, i and −i, whose square is −1 . In $\mathbb {H} $ there are infinitely many square roots of minus one: the quaternion solution for the square root of −1 is the unit sphere in $\mathbb {R} ^{3}.$ To see this, let q = a + b i + c j + d k be a quaternion, and assume that its square is −1. In terms of a, b, c, and d, this means ${\begin{aligned}a^{2}-b^{2}-c^{2}-d^{2}&=-1,{\vphantom {x^{|}}}\\[3mu]2ab&=0,\\[3mu]2ac&=0,\\[3mu]2ad&=0.\end{aligned}}$ To satisfy the last three equations, either a = 0 or b, c, and d are all 0. The latter is impossible because a is a real number and the first equation would imply that a2 = −1. Therefore, a = 0 and b2 + c2 + d2 = 1. In other words: A quaternion squares to −1 if and only if it is a vector quaternion with norm 1. By definition, the set of all such vectors forms the unit sphere. Only negative real quaternions have infinitely many square roots. 
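The pair-of-complex-numbers description can be sketched directly with Python's built-in complex type (illustrative code, not part of the article):

```python
def cd_mul(p, q):
    # Quaternion product on pairs (z, w) ~ z + w j, using j z = conj(z) j
    # and j^2 = -1 (the Cayley-Dickson doubling of C):
    # (z1 + w1 j)(z2 + w2 j) = (z1 z2 - w1 conj(w2)) + (z1 w2 + w1 conj(z2)) j
    z1, w1 = p
    z2, w2 = q
    return (z1 * z2 - w1 * w2.conjugate(),
            z1 * w2 + w1 * z2.conjugate())

# the basis quaternions 1, i, j, k as pairs of complex numbers
one, i, j, k = (1 + 0j, 0j), (1j, 0j), (0j, 1 + 0j), (0j, 1j)
```

With this encoding, cd_mul reproduces the usual relations i j = k and j² = −1.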
All others have just two (or one in the case of 0).[lower-alpha 4] As a union of complex planes Each antipodal pair of square roots of −1 creates a distinct copy of the complex numbers inside the quaternions. If q2 = −1, then the copy is determined by the function $a+b{\sqrt {-1\,}}\mapsto a+bq\,.$ This is an injective ring homomorphism from $\mathbb {C} $ to $\mathbb {H} ,$ which defines a field isomorphism from $\mathbb {C} $ onto its image. The images of the embeddings corresponding to q and −q are identical. Every non-real quaternion generates a subalgebra of the quaternions that is isomorphic to $\mathbb {C} ,$ and is thus a planar subspace of $\mathbb {H} \colon $ write q as the sum of its scalar part and its vector part: $q=q_{s}+{\vec {q}}_{v}.$ Decompose the vector part further as the product of its norm and its versor: $q=q_{s}+\lVert {\vec {q}}_{v}\rVert \cdot \mathbf {U} {\vec {q}}_{v},\qquad \mathbf {U} {\vec {q}}_{v}={\frac {{\vec {q}}_{v}}{\lVert {\vec {q}}_{v}\rVert }}.$ (Note that this is not the same as $q_{s}+\lVert q\rVert \cdot \mathbf {U} q$.) The versor of the vector part of q, $\mathbf {U} {\vec {q}}_{v}$, is a right versor with –1 as its square. A straightforward verification shows that $a+b{\sqrt {-1\,}}\mapsto a+b\mathbf {U} {\vec {q}}_{v}$ defines an injective homomorphism of normed algebras from $\mathbb {C} $ into the quaternions. Under this homomorphism, q is the image of the complex number $q_{s}+\lVert {\vec {q}}_{v}\rVert i$. As $\mathbb {H} $ is the union of the images of all these homomorphisms, this allows viewing the quaternions as a union of complex planes intersecting on the real line. Each of these complex planes contains exactly one pair of antipodal points of the sphere of square roots of minus one. Commutative subrings The relationship of quaternions to each other within the complex subplanes of $\mathbb {H} $ can also be identified and expressed in terms of commutative subrings.
Specifically, since two quaternions p and q commute (i.e., p q = q p) only if they lie in the same complex subplane of $\mathbb {H} $, the profile of $\mathbb {H} $ as a union of complex planes arises when one seeks to find all commutative subrings of the quaternion ring. Square roots of arbitrary quaternions Any quaternion $\mathbf {q} =(r,\,{\vec {v}})$ (represented here in scalar–vector representation) has at least one square root ${\sqrt {\mathbf {q} }}=(x,\,{\vec {y}})$ which solves the equation ${\sqrt {\mathbf {q} }}^{2}=(x,\,{\vec {y}})^{2}=\mathbf {q} $. Looking at the scalar and vector parts in this equation separately yields two equations, which, when solved, give the solutions ${\sqrt {\mathbf {q} }}={\sqrt {(r,\,{\vec {v}}\,)}}=\pm \left({\sqrt {\frac {\|\mathbf {q} \|+r}{2}}},\ {\frac {\vec {v}}{\|{\vec {v}}\|}}{\sqrt {\frac {\|\mathbf {q} \|-r}{2}}}\right),$ where $ \|{\vec {v}}\|={\sqrt {{\vec {v}}\cdot {\vec {v}}}}={\sqrt {-{\vec {v}}^{2}}}$ is the norm of ${\vec {v}}$ and $ \|\mathbf {q} \|={\sqrt {\mathbf {q} ^{*}\mathbf {q} }}={\sqrt {r^{2}+\|{\vec {v}}\|^{2}}}$ is the norm of $\mathbf {q} $. For any scalar quaternion $\mathbf {q} $, this equation provides the correct square roots if $ {\frac {\vec {v}}{\|{\vec {v}}\|}}$ is interpreted as an arbitrary unit vector. Therefore, nonzero, non-scalar quaternions, or positive scalar quaternions, have exactly two roots, while 0 has exactly one root (0), and negative scalar quaternions have infinitely many roots, which are the vector quaternions located on $\{0\}\times S^{2}({\sqrt {-r}})$, i.e., where the scalar part is zero and the vector part is located on the 2-sphere with radius ${\sqrt {-r}}$. Functions of a quaternion variable Main article: Quaternionic analysis Like functions of a complex variable, functions of a quaternion variable suggest useful physical models. For example, the original electric and magnetic fields described by Maxwell were functions of a quaternion variable.
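The square-root formula can be checked numerically (illustrative code, not part of the article; sv_mul restates the scalar–vector product so the snippet stands alone, and the vector part is assumed nonzero):

```python
import math

def sv_mul(p, q):
    # (r1, v1)(r2, v2) = (r1 r2 - v1.v2, r1 v2 + r2 v1 + v1 x v2)
    r1, v1 = p
    r2, v2 = q
    dot = sum(x * y for x, y in zip(v1, v2))
    cross = (v1[1]*v2[2] - v1[2]*v2[1],
             v1[2]*v2[0] - v1[0]*v2[2],
             v1[0]*v2[1] - v1[1]*v2[0])
    return (r1*r2 - dot,
            tuple(r1*y + r2*x + c for x, y, c in zip(v1, v2, cross)))

def qsqrt(q):
    # One root from the formula above; the other root is its negative.
    r, v = q
    nv = math.sqrt(sum(t * t for t in v))
    nq = math.sqrt(r * r + nv * nv)
    s = math.sqrt((nq + r) / 2)
    w = math.sqrt((nq - r) / 2) / nv
    return (s, tuple(w * t for t in v))
```

Squaring the returned root with sv_mul recovers the original quaternion up to rounding.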
Examples of other functions include the extension of the Mandelbrot set and Julia sets into 4-dimensional space.[35] Exponential, logarithm, and power functions Given a quaternion, $q=a+b\mathbf {i} +c\mathbf {j} +d\mathbf {k} =a+\mathbf {v} ,$ the exponential is computed as[36] $\exp(q)=\sum _{n=0}^{\infty }{\frac {q^{n}}{n!}}=e^{a}\left(\cos \|\mathbf {v} \|+{\frac {\mathbf {v} }{\|\mathbf {v} \|}}\sin \|\mathbf {v} \|\right),$ and the logarithm is[36] $\ln(q)=\ln \|q\|+{\frac {\mathbf {v} }{\|\mathbf {v} \|}}\arccos {\frac {a}{\|q\|}}.$ It follows that the polar decomposition of a quaternion may be written $q=\|q\|e^{{\hat {n}}\varphi }=\|q\|\left(\cos(\varphi )+{\hat {n}}\sin(\varphi )\right),$ where the angle $\varphi $[lower-alpha 5] satisfies $a=\|q\|\cos(\varphi )$ and the unit vector ${\hat {n}}$ is defined by: $\mathbf {v} ={\hat {n}}\|\mathbf {v} \|={\hat {n}}\|q\|\sin(\varphi )\,.$ Any unit quaternion may be expressed in polar form as: $q=\exp {({\hat {n}}\varphi )}.$ The power of a quaternion raised to an arbitrary (real) exponent x is given by: $q^{x}=\|q\|^{x}e^{{\hat {n}}x\varphi }=\|q\|^{x}\left(\cos(x\varphi )+{\hat {n}}\,\sin(x\varphi )\right)~.$ Geodesic norm The geodesic distance dg(p, q) between unit quaternions p and q is defined as:[38] $d_{\text{g}}(p,q)=\lVert \ln(p^{-1}q)\rVert $ and amounts to the absolute value of half the angle subtended by p and q along a great arc of the S3 sphere. This angle can also be computed from the quaternion dot product without the logarithm as: $\arccos(2(p\cdot q)^{2}-1).$ Three-dimensional and four-dimensional rotation groups Main articles: Quaternions and spatial rotation and Rotation operator (vector space) The word "conjugation", besides the meaning given above, can also mean taking an element a to r a r−1 where r is some nonzero quaternion. All elements that are conjugate to a given element (in this sense of the word conjugate) have the same real part and the same norm of the vector part.
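The exponential and logarithm can be sketched as follows (illustrative code, not part of the article; quaternions are stored as (scalar, vector) pairs and the logarithm assumes a nonzero vector part):

```python
import math

def qexp(q):
    # exp(a + v) = e^a (cos ||v|| + (v / ||v||) sin ||v||)
    a, v = q
    nv = math.sqrt(sum(t * t for t in v))
    if nv == 0.0:
        return (math.exp(a), (0.0, 0.0, 0.0))
    s = math.exp(a) * math.sin(nv) / nv
    return (math.exp(a) * math.cos(nv), tuple(s * t for t in v))

def qlog(q):
    # ln q = ln ||q|| + (v / ||v||) arccos(a / ||q||)
    a, v = q
    nv = math.sqrt(sum(t * t for t in v))
    nq = math.sqrt(a * a + nv * nv)
    phi = math.acos(a / nq)
    return (math.log(nq), tuple(phi * t / nv for t in v))
```

For a quaternion with nonzero vector part, qexp(qlog(q)) recovers q, matching the polar decomposition above.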
(Thus the conjugate in the other sense is one of the conjugates in this sense.) [39] Thus the multiplicative group of nonzero quaternions acts by conjugation on the copy of $\mathbb {R} ^{3}$ consisting of quaternions with real part equal to zero. Conjugation by a unit quaternion (a quaternion of absolute value 1) with real part cos(φ) is a rotation by an angle 2φ, the axis of the rotation being the direction of the vector part. The advantages of quaternions are:[40] • Avoiding gimbal lock, a problem with systems such as Euler angles. • Faster and more compact than matrices. • Nonsingular representation (compared with Euler angles for example). • Pairs of unit quaternions represent a rotation in 4D space (see Rotations in 4-dimensional Euclidean space: Algebra of 4D rotations). The set of all unit quaternions (versors) forms a 3-sphere S3 and a group (a Lie group) under multiplication, double covering the group ${\text{SO}}(3,\mathbb {R} )$ of real orthogonal 3×3 matrices of determinant 1 since two unit quaternions correspond to every rotation under the above correspondence. See plate trick. Further information: Point groups in three dimensions The image of a subgroup of versors is a point group, and conversely, the preimage of a point group is a subgroup of versors. The preimage of a finite point group is called by the same name, with the prefix binary. For instance, the preimage of the icosahedral group is the binary icosahedral group. The versors' group is isomorphic to SU(2), the group of complex unitary 2×2 matrices of determinant 1. Let A be the set of quaternions of the form a + b i + c j + d k where a, b, c, and d are either all integers or all half-integers. The set A is a ring (in fact a domain) and a lattice and is called the ring of Hurwitz quaternions. There are 24 unit quaternions in this ring, and they are the vertices of a regular 24 cell with Schläfli symbol {3,4,3}. 
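The statement that conjugation by a unit quaternion with real part cos(φ) rotates by the angle 2φ about the vector-part axis can be sketched as (illustrative code, not part of the article; mul restates the Hamilton product so the snippet stands alone):

```python
import math

def mul(p, q):
    # Hamilton product on (a, b, c, d) tuples.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def rotate(v, axis, angle):
    # Conjugate the pure quaternion (0, v) by q = cos(angle/2) + sin(angle/2) axis,
    # where axis is a unit 3-vector; the result is v rotated by `angle` about axis.
    half = angle / 2
    s = math.sin(half)
    q = (math.cos(half), s * axis[0], s * axis[1], s * axis[2])
    qc = (q[0], -q[1], -q[2], -q[3])
    p = (0.0,) + tuple(v)
    return mul(mul(q, p), qc)[1:]
```

Rotating (1, 0, 0) about the z-axis by π/2 yields (0, 1, 0) up to rounding, as expected for a right-handed rotation.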
They correspond to the double cover of the rotational symmetry group of the regular tetrahedron. Similarly, the vertices of a regular 600 cell with Schläfli symbol {3,3,5} can be taken as the unit icosians, corresponding to the double cover of the rotational symmetry group of the regular icosahedron. The double cover of the rotational symmetry group of the regular octahedron corresponds to the quaternions that represent the vertices of the disphenoidal 288-cell.[41] Quaternion algebras Main article: Quaternion algebra Quaternions can be generalized into further algebras called quaternion algebras. Take F to be any field with characteristic different from 2, and a and b to be elements of F; a four-dimensional unital associative algebra can be defined over F with basis 1, i, j, and i j, where i2 = a, j2 = b and i j = −j i (so (i j)2 = −a b). Quaternion algebras are isomorphic to the algebra of 2×2 matrices over F or form division algebras over F, depending on the choice of a and b. Quaternions as the even part of Cl3,0(R) Main article: Spinor § Three dimensions The usefulness of quaternions for geometrical computations can be generalised to other dimensions by identifying the quaternions as the even part $\operatorname {Cl} _{3,0}^{+}(\mathbb {R} )$ of the Clifford algebra $\operatorname {Cl} _{3,0}(\mathbb {R} ).$ This is an associative multivector algebra built up from fundamental basis elements σ1, σ2, σ3 using the product rules $\sigma _{1}^{2}=\sigma _{2}^{2}=\sigma _{3}^{2}=1,$ $\sigma _{i}\sigma _{j}=-\sigma _{j}\sigma _{i}\qquad (j\neq i).$ If these fundamental basis elements are taken to represent vectors in 3D space, then it turns out that the reflection of a vector r in a plane perpendicular to a unit vector w can be written: $r^{\prime }=-w\,r\,w.$ Two reflections make a rotation by an angle twice the angle between the two reflection planes, so $r^{\prime \prime }=\sigma _{2}\sigma _{1}\,r\,\sigma _{1}\sigma _{2}$ corresponds to a rotation of 180° in
the plane containing σ1 and σ2. This is very similar to the corresponding quaternion formula, $r^{\prime \prime }=-\mathbf {k} \,r\,\mathbf {k} .$ Indeed, the two structures $\operatorname {Cl} _{3,0}^{+}(\mathbb {R} )$ and $\mathbb {H} $ are isomorphic. One natural identification is $1\mapsto 1\,,\quad \mathbf {k} \mapsto \sigma _{2}\sigma _{1}\,,\quad \mathbf {i} \mapsto \sigma _{3}\sigma _{2}\,,\quad \mathbf {j} \mapsto \sigma _{1}\sigma _{3}\,,$ and it is straightforward to confirm that this preserves the Hamilton relations $\mathbf {i} ^{2}=\mathbf {j} ^{2}=\mathbf {k} ^{2}=\mathbf {i\,j\,k} =-1~.$ In this picture, so-called "vector quaternions" (that is, pure imaginary quaternions) correspond not to vectors but to bivectors – quantities with magnitude and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbers becomes clearer, too: in 2D, with two vector directions σ1 and σ2, there is only one bivector basis element σ1σ2, so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ1σ2, σ2σ3, σ3σ1, so three imaginaries. This reasoning extends further. In the Clifford algebra $\operatorname {Cl} _{4,0}(\mathbb {R} ),$ there are six bivector basis elements, since with four different basic vector directions, six different pairs and therefore six different linearly independent planes can be defined. Rotations in such spaces using these generalisations of quaternions, called rotors, can be very useful for applications involving homogeneous coordinates. But it is only in 3D that the number of basis bivectors equals the number of basis vectors, and each bivector can be identified as a pseudovector. There are several advantages for placing quaternions in this wider setting:[42] • Rotors are a natural part of geometric algebra and easily understood as the encoding of a double reflection. • In geometric algebra, a rotor and the objects it acts on live in the same space. 
This eliminates the need to change representations and to encode new data structures and methods, which is traditionally required when augmenting linear algebra with quaternions. • Rotors are universally applicable to any element of the algebra, not just vectors and other quaternions, but also lines, planes, circles, spheres, rays, and so on. • In the conformal model of Euclidean geometry, rotors allow the encoding of rotation, translation and scaling in a single element of the algebra, universally acting on any element. In particular, this means that rotors can represent rotations around an arbitrary axis, whereas quaternions are limited to an axis through the origin. • Rotor-encoded transformations make interpolation particularly straightforward. • Rotors carry over naturally to pseudo-Euclidean spaces, for example, the Minkowski space of special relativity. In such spaces rotors can be used to efficiently represent Lorentz boosts, and to interpret formulas involving the gamma matrices. For further detail about the geometrical uses of Clifford algebras, see Geometric algebra. Brauer group Further information: Brauer group The quaternions are "essentially" the only (non-trivial) central simple algebra (CSA) over the real numbers, in the sense that every CSA over the real numbers is Brauer equivalent to either the real numbers or the quaternions. Explicitly, the Brauer group of the real numbers consists of two classes, represented by the real numbers and the quaternions, where the Brauer group is the set of all CSAs, up to equivalence relation of one CSA being a matrix ring over another. By the Artin–Wedderburn theorem (specifically, Wedderburn's part), CSAs are all matrix algebras over a division algebra, and thus the quaternions are the only non-trivial division algebra over the real numbers. 
CSAs – finite dimensional rings over a field, which are simple algebras (have no non-trivial 2-sided ideals, just as with fields) whose center is exactly the field – are a noncommutative analog of extension fields, and are more restrictive than general ring extensions. The fact that the quaternions are the only non-trivial CSA over the real numbers (up to equivalence) may be compared with the fact that the complex numbers are the only non-trivial finite field extension of the real numbers. Quotations I regard it as an inelegance, or imperfection, in quaternions, or rather in the state to which it has been hitherto unfolded, whenever it becomes or seems to become necessary to have recourse to x, y, z, etc. — William Rowan Hamilton (circa 1848)[43] Time is said to have only one dimension, and space to have three dimensions. ... The mathematical quaternion partakes of both these elements; in technical language it may be said to be "time plus space", or "space plus time": and in this sense it has, or at least involves a reference to, four dimensions. ... And how the One of Time, of Space the Three, Might in the Chain of Symbols girdled be. — William Rowan Hamilton (circa 1853)[44] Quaternions came from Hamilton after his really good work had been done; and, though beautifully ingenious, have been an unmixed evil to those who have touched them in any way, including Clerk Maxwell. — W. Thomson, Lord Kelvin (1892)[45] There was a time, indeed, when I, although recognizing the appropriateness of vector analysis in electromagnetic theory (and in mathematical physics generally), did think it was harder to understand and to work than the Cartesian analysis. But that was before I had thrown off the quaternionic old-man-of-the-sea who fastened himself about my shoulders when reading the only accessible treatise on the subject – Prof. Tait's Quaternions.
But I came later to see that, so far as the vector analysis I required was concerned, the quaternion was not only not required, but was a positive evil of no inconsiderable magnitude; and that by its avoidance the establishment of vector analysis was made quite simple and its working also simplified, and that it could be conveniently harmonised with ordinary Cartesian work. There is not a ghost of a quaternion in any of my papers (except in one, for a special purpose). The vector analysis I use may be described either as a convenient and systematic abbreviation of Cartesian analysis; or else, as Quaternions without the quaternions, .... "Quaternion" was, I think, defined by an American schoolgirl to be "an ancient religious ceremony". This was, however, a complete mistake. The ancients – unlike Prof. Tait – knew not, and did not worship Quaternions. — Oliver Heaviside (1893)[46] Neither matrices nor quaternions and ordinary vectors were banished from these ten [additional] chapters. For, in spite of the uncontested power of the modern Tensor Calculus, those older mathematical languages continue, in my opinion, to offer conspicuous advantages in the restricted field of special relativity. Moreover, in science as well as in everyday life, the mastery of more than one language is also precious, as it broadens our views, is conducive to criticism with regard to, and guards against hypostasy [weak-foundation] of, the matter expressed by words or mathematical symbols. — Ludwik Silberstein (1924)[47] ... quaternions appear to exude an air of nineteenth century decay, as a rather unsuccessful species in the struggle-for-life of mathematical ideas. Mathematicians, admittedly, still keep a warm place in their hearts for the remarkable algebraic properties of quaternions but, alas, such enthusiasm means little to the harder-headed physical scientist. — Simon L. 
Altmann (1986)[48]

See also

• Conversion between quaternions and Euler angles – Mathematical strategy
• Dual quaternion – Eight-dimensional algebra over the real numbers
• Dual-complex number – Four-dimensional algebra over the real numbers
• Exterior algebra – Algebra of exterior/wedge products
• Hurwitz quaternion order
• Hyperbolic quaternion – Mutation of quaternions where unit vectors square to +1
• Lénárt sphere – Educational model for spherical geometry
• Pauli matrices – Matrices important in quantum mechanics and the study of spin
• Quaternionic manifold
• Quaternionic matrix – Matrix whose entries are quaternions
• Quaternionic polytope
• Quaternionic projective space – Concept in mathematics
• Rotations in 4-dimensional Euclidean space – Special orthogonal group
• Slerp – Spherical linear interpolation in computer graphics
• Split-quaternion – Four-dimensional associative algebra over the reals
• Tesseract – Four-dimensional analogue of the cube

Notes

1. A more personal view of quaternions was written by Joachim Lambek in 1995. He wrote in his essay If Hamilton had prevailed: quaternions in physics: "My own interest as a graduate student was raised by the inspiring book by Silberstein". He concluded by stating "I firmly believe that quaternions can supply a shortcut for pure mathematicians who wish to familiarize themselves with certain aspects of theoretical physics." Lambek, J. (1995). "If Hamilton had prevailed: Quaternions in physics". Math. Intelligencer. Vol. 17, no. 4. pp. 7–15. doi:10.1007/BF03024783. 2.
It is important to note that the vector part of a quaternion is, in truth, an "axial" vector or "pseudovector", not an ordinary or "polar" vector, as was formally proven by Altmann (1986).[25] A polar vector can be represented in calculations (for example, for rotation by a quaternion "similarity transform") by a pure imaginary quaternion, with no loss of information, but the two should not be confused. The axis of a "binary" (180°) rotation quaternion corresponds to the direction of the represented polar vector in such a case. 3. In comparison, the real numbers $\mathbb {R} $ have dimension 1, the complex numbers $\mathbb {C} $ have dimension 2, and the octonions $\mathbb {O} $ have dimension 8. 4. The identification of the square roots of minus one in $\mathbb {H} $ was given by Hamilton[34] but was frequently omitted in other texts. By 1971 the sphere was included by Sam Perlis in his three-page exposition included in Historical Topics in Algebra (page 39) published by the National Council of Teachers of Mathematics. More recently, the sphere of square roots of minus one is described in Ian R. Porteous's book Clifford Algebras and the Classical Groups (Cambridge, 1995) in proposition 8.13 on page 60. 5. Books on applied mathematics, such as Corke (2017)[37] often use different notation with φ := 1/2θ — that is, another variable θ = 2φ. References 1. "On Quaternions; or on a new System of Imaginaries in Algebra". Letter to John T. Graves. 17 October 1843. 2. Rozenfelʹd, Boris Abramovich (1988). The history of non-euclidean geometry: Evolution of the concept of a geometric space. Springer. p. 385. ISBN 9780387964584. 3. Hamilton. Hodges and Smith. 1853. p. 60. quaternion quotient lines tridimensional space time 4. Hardy 1881. Ginn, Heath, & co. 1881. p. 32. ISBN 9781429701860. 5. Curtis, Morton L. (1984), Matrix Groups (2nd ed.), New York: Springer-Verlag, p. 10, ISBN 978-0-387-96074-6 6. Kunze, Karsten; Schaeben, Helmut (November 2004). 
"The Bingham distribution of quaternions and its spherical radon transform in texture analysis". Mathematical Geology. 36 (8): 917–943. doi:10.1023/B:MATG.0000048799.56445.59. S2CID 55009081. 7. Smith, Frank (Tony). "Why not sedenion?". Retrieved 8 June 2018. 8. See Hazewinkel, Gubareni & Kirichenko 2004, p. 12 9. Conway & Smith 2003, p. 9 10. Bradley, Robert E.; Sandifer, Charles Edward (2007). Leonhard Euler: life, work and legacy. Elsevier. p. 193. ISBN 978-0-444-52728-8. They mention Wilhelm Blaschke's claim in 1959 that "the quaternions were first identified by L. Euler in a letter to Goldbach written on 4 May 1748," and they comment that "it makes no sense whatsoever to say that Euler "identified" the quaternions in this letter ... this claim is absurd." 11. Pujol, J., "Hamilton, Rodrigues, Gauss, Quaternions, and Rotations: A Historical Reassessment" Communications in Mathematical Analysis (2012), 13(2), 1–14 12. Gauss, C.F. (1900). "Mutationen des Raumes [Transformations of space] (c. 1819)". In Martin Brendel (ed.). Carl Friedrich Gauss Werke [The works of Carl Friedrich Gauss]. Vol. 8. article edited by Prof. Stäckel of Kiel, Germany. Göttingen, DE: Königlichen Gesellschaft der Wissenschaften [Royal Society of Sciences]. pp. 357–361. 13. Hamilton, W.R. (1844). "Letter". London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. Vol. xxv. pp. 489–495. 14. Hamilton, Sir W.R. (1866). Hamilton, W.E. (ed.). Elements of Quaternions. London: Longmans, Green, & Co. 15. Shoemake, Ken (1985). "Animating Rotation with Quaternion Curves" (PDF). Computer Graphics. 19 (3): 245–254. doi:10.1145/325165.325242. Presented at SIGGRAPH '85. 16. Tomb Raider (1996) is often cited as the first mass-market computer game to have used quaternions to achieve smooth three-dimensional rotations. See, for example Nick Bobick (July 1998). "Rotating objects using quaternions". Game Developer. 17. McCarthy, J.M. (1990). An Introduction to Theoretical Kinematics. 
MIT Press. ISBN 978-0-262-13252-7. 18. Hurwitz, A. (1919), Vorlesungen über die Zahlentheorie der Quaternionen, Berlin: J. Springer, JFM 47.0106.01, concerning Hurwitz quaternions 19. Girard, P.R. (1984). "The quaternion group and modern physics". European Journal of Physics. 5 (1): 25–32. Bibcode:1984EJPh....5...25G. doi:10.1088/0143-0807/5/1/007. S2CID 250775753. 20. Girard, Patrick R. (1999). "Einstein's equations and Clifford algebra" (PDF). Advances in Applied Clifford Algebras. 9 (2): 225–230. doi:10.1007/BF03042377. S2CID 122211720. Archived from the original (PDF) on 17 December 2010. 21. Huerta, John (27 September 2010). "Introducing The Quaternions" (PDF). Archived (PDF) from the original on 2014-10-21. Retrieved 8 June 2018. 22. Wood, Charlie (6 September 2018). "The Strange Numbers That Birthed Modern Algebra". Abstractions blog. Quanta Magazine. 23. Eves (1976, p. 391) 24. "Maths – Transformations using Quaternions". EuclideanSpace. A rotation of q1 followed by a rotation of q2 is equivalent to a single rotation of q2 q1. Note the reversal of order, that is, we put the first rotation on the right hand side of the multiplication. 25. Altmann, S.L. Rotations, Quaternions, and Double Groups. Ch. 12. 26. Hamilton, Sir William Rowan (1866). "Article 285". Elements of Quaternions. Longmans, Green, & Company. p. 310. 27. Hardy (1881). "Elements of Quaternions". Science. library.cornell.edu. 2 (75): 65. doi:10.1126/science.os-2.75.564. PMID 17819877. 28. "quaternion group". Wolframalpha.com. 29. Gibbs, J. Willard; Wilson, Edwin Bidwell (1901). Vector Analysis. Yale University Press. p. 428. right tensor dyadic 30. Hamilton, W.R. (1844–1850). "On quaternions or a new system of imaginaries in algebra". David R. Wilkins collection. Philosophical Magazine. Trinity College Dublin. 31. "Visualizing Quaternions". Morgan-Kaufmann/Elsevier. 2005. 32. "[no title cited; determinant evaluation]". Wolframalpha.com. 33. 
Farebrother, Richard William; Groß, Jürgen; Troschke, Sven-Oliver (2003). "Matrix representation of quaternions". Linear Algebra and Its Applications. 362: 251–255. doi:10.1016/s0024-3795(02)00535-9. 34. Hamilton, W.R. (1899). Elements of Quaternions (2nd ed.). Cambridge University Press. p. 244. ISBN 1-108-00171-8. 35. "[no title cited]" (PDF). bridgesmathart.org. archive. Retrieved 19 August 2018. 36. Särkkä, Simo (June 28, 2007). "Notes on Quaternions" (PDF). Lce.hut.fi. Archived from the original (PDF) on 5 July 2017. 37. Corke, Peter (2017). Robotics, Vision, and Control – Fundamental Algorithms in MATLAB. Springer. ISBN 978-3-319-54413-7. 38. Park, F.C.; Ravani, Bahram (1997). "Smooth invariant interpolation of rotations". ACM Transactions on Graphics. 16 (3): 277–295. doi:10.1145/256157.256160. S2CID 6192031. 39. Hanson, Jason (2011). "Rotations in three, four, and five dimensions". arXiv:1103.5263 [math.MG]. 40. Günaşti, Gökmen (2016). Quaternions Algebra, Their Applications in Rotations and Beyond Quaternions (BS). Linnaeus University. 41. "Three-Dimensional Point Groups". www.classe.cornell.edu. Retrieved 2022-12-09. 42. "Quaternions and Geometric Algebra". geometricalgebra.net. Retrieved 2008-09-12. See also: Dorst, Leo; Fontijne, Daniel; Mann, Stephen (2007). Geometric Algebra for Computer Science. Morgan Kaufmann. ISBN 978-0-12-369465-2. 43. Hamilton, William Rowan (1853). Lectures on quaternions. Dublin: Hodges and Smith. p. 522. 44. Graves, R.P. Life of Sir William Rowan Hamilton. Dublin Hodges, Figgis. pp. 635–636. 45. Thompson, Silvanus Phillips (1910). The life of William Thomson (Vol. 2). London, Macmillan. p. 1138. 46. Heaviside, Oliver (1893). Electromagnetic Theory. Vol. I. London, UK: The Electrician Printing and Publishing Company. pp. 134–135. 47. Ludwik Silberstein (1924). Preface to second edition of The Theory of Relativity 48. Altmann, Simon L. (1986). Rotations, quaternions, and double groups. Clarendon Press. ISBN 0-19-855372-2. 
LCCN 85013615. Further reading Books and publications • Hamilton, William Rowan (1844). "On quaternions, or on a new system of imaginaries in algebra". Philosophical Magazine. 25 (3): 489–495. doi:10.1080/14786444408645047. • Hamilton, William Rowan (1853), "Lectures on Quaternions". Royal Irish Academy. • Hamilton (1866) Elements of Quaternions University of Dublin Press. Edited by William Edwin Hamilton, son of the deceased author. • Hamilton (1899) Elements of Quaternions volume I, (1901) volume II. Edited by Charles Jasper Joly; published by Longmans, Green & Co. • Tait, Peter Guthrie (1873), "An elementary treatise on quaternions". 2d ed., Cambridge, [Eng.] : The University Press. • Maxwell, James Clerk (1873), "A Treatise on Electricity and Magnetism". Clarendon Press, Oxford. • Tait, Peter Guthrie (1886). Encyclopædia Britannica, Ninth Edition, Vol. XX, pp. 160–164. M.A. Sec. R.S.E. (bzipped PostScript file) • Joly, Charles Jasper (1905). A manual of quaternions. Macmillan. LCCN 05036137. • Macfarlane, Alexander (1906). Vector analysis and quaternions (4th ed.). Wiley. LCCN 16000048. • Chisholm, Hugh, ed. (1911). "Algebra". Encyclopædia Britannica (11th ed.). Cambridge University Press. (See section on quaternions.) • Finkelstein, David; Jauch, Josef M.; Schiminovich, Samuel; Speiser, David (1962). "Foundations of quaternion quantum mechanics". J. Math. Phys. 3 (2): 207–220. Bibcode:1962JMP.....3..207F. doi:10.1063/1.1703794. S2CID 121453456. • Du Val, Patrick (1964). Homographies, quaternions, and rotations. Oxford mathematical monographs. Clarendon Press. LCCN 64056979. • Michael J. Crowe (1967), A History of Vector Analysis: The Evolution of the Idea of a Vectorial System, University of Notre Dame Press.
Surveys the major and minor vector systems of the 19th century (Hamilton, Möbius, Bellavitis, Clifford, Grassmann, Tait, Peirce, Maxwell, Macfarlane, MacAuley, Gibbs, Heaviside). • Altmann, Simon L. (1989). "Hamilton, Rodrigues, and the Quaternion Scandal". Mathematics Magazine. 62 (5): 291–308. doi:10.1080/0025570X.1989.11977459. • Pujol, Jose (2014). "On Hamilton's Nearly-Forgotten Early Work on the Relation between Rotations and Quaternions and on the Composition of Rotations". The American Mathematical Monthly. 121 (6): 515–522. doi:10.4169/amer.math.monthly.121.06.515. S2CID 1543951. • Adler, Stephen L. (1995). Quaternionic quantum mechanics and quantum fields. International series of monographs on physics. Vol. 88. Oxford University Press. ISBN 0-19-506643-X. LCCN 94006306. • Ward, J.P. (1997). Quaternions and Cayley Numbers: Algebra and Applications. Kluwer Academic. ISBN 0-7923-4513-4. • Kantor, I.L.; Solodnikov, A.S. (1989). Hypercomplex numbers, an elementary introduction to algebras. Springer-Verlag. ISBN 0-387-96980-2. • Gürlebeck, Klaus; Sprössig, Wolfgang (1997). Quaternionic and Clifford calculus for physicists and engineers. Mathematical methods in practice. Vol. 1. Wiley. ISBN 0-471-96200-7. LCCN 98169958. • Kuipers, Jack (2002). Quaternions and Rotation Sequences: A Primer With Applications to Orbits, Aerospace, and Virtual Reality. Princeton University Press. ISBN 0-691-10298-8. • Conway, John Horton; Smith, Derek A. (2003). On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry. A.K. Peters. ISBN 1-56881-134-9. (review). • Jack, P.M. (2003). "Physical space as a quaternion structure, I: Maxwell equations. A brief Note". arXiv:math-ph/0307038. • Kravchenko, Vladislav (2003). Applied Quaternionic Analysis. Heldermann Verlag. ISBN 3-88538-228-8. • Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0. • Hanson, Andrew J. (2006). 
Visualizing Quaternions. Elsevier. ISBN 0-12-088400-3. • Binz, Ernst; Pods, Sonja (2008). "1. The Skew Field of Quaternions". Geometry of Heisenberg Groups. American Mathematical Society. ISBN 978-0-8218-4495-3. • Doran, Chris J.L.; Lasenby, Anthony N. (2003). Geometric Algebra for Physicists. Cambridge University Press. ISBN 978-0-521-48022-2. • Vince, John A. (2008). Geometric Algebra for Computer Graphics. Springer. ISBN 978-1-84628-996-5. • For molecules that can be regarded as classical rigid bodies molecular dynamics computer simulation employs quaternions. They were first introduced for this purpose by Evans, D.J. (1977). "On the Representation of Orientation Space". Mol. Phys. 34 (2): 317–325. Bibcode:1977MolPh..34..317E. doi:10.1080/00268977700101751. • Zhang, Fuzhen (1997). "Quaternions and Matrices of Quaternions". Linear Algebra and Its Applications. 251: 21–57. doi:10.1016/0024-3795(95)00543-9. • Ron Goldman (2010). Rethinking Quaternions: Theory and Computation. Morgan & Claypool. ISBN 978-1-60845-420-4. • Eves, Howard (1976), An Introduction to the History of Mathematics (4th ed.), New York: Holt, Rinehart and Winston, ISBN 0-03-089539-1 • Voight, John (2021). Quaternion Algebras. Graduate Texts in Mathematics. Vol. 288. Springer. doi:10.1007/978-3-030-56694-4. ISBN 978-3-030-57467-3. Links and monographs • "Quaternion Notices". Notices and materials related to Quaternion conference presentations • "Quaternion", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • "Frequently Asked Questions". Matrix and Quaternion. 1.21. • Sweetser, Doug. "Doing Physics with Quaternions". • Quaternions for Computer Graphics and Mechanics (Gernot Hoffman) • Gsponer, Andre; Hurni, Jean-Pierre (2002). "The Physical Heritage of Sir W. R. Hamilton". arXiv:math-ph/0201058. • Wilkins, D.R. "Hamilton's Research on Quaternions". • Grossman, David J. "Quaternion Julia Fractals". 3D Raytraced Quaternion Julia Fractals • "Quaternion Math and Conversions". 
Great page explaining basic math with links to straight forward rotation conversion formulae. • Mathews, John H. "Bibliography for Quaternions". Archived from the original on 2006-09-02. • "Quaternion powers". GameDev.net. • Hanson, Andrew. "Visualizing Quaternions home page". Archived from the original on 2006-11-05. • Karney, Charles F.F. (January 2007). "Quaternions in molecular modeling". J. Mol. Graph. Mod. 25 (5): 595–604. arXiv:physics/0506177. doi:10.1016/j.jmgm.2006.04.002. PMID 16777449. S2CID 6690718. • Mebius, Johan E. (2005). "A matrix-based proof of the quaternion representation theorem for four-dimensional rotations". arXiv:math/0501249. • Mebius, Johan E. (2007). "Derivation of the Euler–Rodrigues formula for three-dimensional rotations from the general formula for four-dimensional rotations". arXiv:math/0701759. • "Hamilton Walk". Department of Mathematics, NUI Maynooth. • "Using Quaternions to represent rotation". OpenGL:Tutorials. Archived from the original on 2007-12-15. • David Erickson, Defence Research and Development Canada (DRDC), Complete derivation of rotation matrix from unitary quaternion representation in DRDC TR 2005-228 paper. • Martinez, Alberto. "Negative Math, How Mathematical Rules Can Be Positively Bent". Department of History, University of Texas. Archived from the original on 2011-09-24. • Stahlke, D. "Quaternions in Classical Mechanics" (PDF). • Morier-Genoud, Sophie; Ovsienko, Valentin (2008). "Well, Papa, can you multiply triplets?". arXiv:0810.5562 [math.AC]. describes how the quaternions can be made into a skew-commutative algebra graded by Z/2 × Z/2 × Z/2. • Joyce, Helen (November 2004). "Curious Quaternions". hosted by John Baez. • Ibanez, Luis. "Tutorial on Quaternions. Part I" (PDF). Archived from the original (PDF) on 2012-02-04. Retrieved 2011-12-05. Part II (PDF; using Hamilton's terminology, which differs from the modern usage) • Ghiloni, R.; Moretti, V.; Perotti, A. (2013). 
"Continuous slice functional calculus in quaternionic Hilbert spaces". Rev. Math. Phys. 25 (4): 1350006–126. arXiv:1207.0666. Bibcode:2013RvMaP..2550006G. doi:10.1142/S0129055X13500062. S2CID 119651315. Ghiloni, R.; Moretti, V.; Perotti, A. (2017). "Spectral representations of normal operators via Intertwining Quaternionic Projection Valued Measures". Rev. Math. Phys. 29: 1750034. arXiv:1602.02661. doi:10.1142/S0129055X17500349. S2CID 124709652. Two expository papers about continuous functional calculus and spectral theory in quaternionic Hilbert spaces, useful in rigorous quaternionic quantum mechanics. • Quaternions – the Android app shows the quaternion corresponding to the orientation of the device. • Rotating Objects Using Quaternions – article speaking to the use of Quaternions for rotation in video games/computer graphics. External links • Paulson, Lawrence C.
Quaternions (Formal proof development in Isabelle/HOL, Archive of Formal Proofs) • Quaternions – Visualisation
May 2011, 10(3): 873-884. doi: 10.3934/cpaa.2011.10.873

On $SL(2, R)$ valued cocycles of Hölder class with zero exponent over Kronecker flows

Russell Johnson, Dipartimento di Sistemi e Informatica, Università di Firenze, 50139 Firenze
Mahesh G. Nerurkar, Department of Mathematics, Rutgers University, Camden NJ 08102, United States

Received October 2008; Revised March 2009; Published December 2010

We show that a generic $SL(2,R)$ valued cocycle in the class of $C^r$ ($0 < r < 1$) cocycles based on a rotation flow on the $d$-torus is either uniformly hyperbolic or has zero Lyapunov exponents, provided that the components of the winding vector $\bar \gamma = (\gamma^1,\cdots,\gamma^d)$ of the rotation flow are rationally independent and satisfy the following super Liouvillian condition: $ |\gamma^i - \frac{p^i_n}{q_n}| \leq Ce^{-q^{1+\delta}_n}, \quad 1\leq i\leq d, \; n\in N,$ where $C > 0$ and $\delta > 0$ are some constants and $p^i_n, q_n$ are some sequences of integers with $q_n\to \infty$.

Keywords: Lyapunov exponents, cocycles, irrational rotations, Liouville numbers.

Mathematics Subject Classification: Primary: 37B55, 34A30; Secondary: 58F1.

Citation: Russell Johnson, Mahesh G. Nerurkar. On $SL(2, R)$ valued cocycles of Hölder class with zero exponent over Kronecker flows. Communications on Pure & Applied Analysis, 2011, 10 (3) : 873-884. doi: 10.3934/cpaa.2011.10.873
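The keywords above center on Liouville numbers: irrationals that admit unusually good rational approximations (the paper's super Liouvillian condition is an even stronger, exponential version of this idea). As a quick illustration of the weaker, classical property, the sketch below uses exact rational arithmetic to check that the Liouville constant Σ 10^(−k!) is approximated by its partial sums to within 1/qⁿ for several n. This example is an illustration of mine, not taken from the article.

```python
from fractions import Fraction
from math import factorial

# Liouville's constant, truncated at K terms: gamma = sum_{k>=1} 10^(-k!)
K = 6
gamma = sum(Fraction(1, 10 ** factorial(k)) for k in range(1, K + 1))

for n in range(1, 5):
    q = 10 ** factorial(n)   # denominator q_n = 10^(n!)
    p = int(gamma * q)       # p_n/q_n is the series truncated at k = n
    # Classical Liouville estimate: |gamma - p_n/q_n| < 1/q_n^n,
    # since the tail is dominated by 10^(-(n+1)!) <= 10^(-n * n!).
    assert abs(gamma - Fraction(p, q)) < Fraction(1, q ** n)
```

Exact `Fraction` arithmetic is used because the error terms (down to 10^(-720) here) are far below floating-point resolution.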
There can be many lines in a plane, some of which may intersect each other while some may not intersect when produced in either direction. Thus we can define parallel lines as – "Two lines l and m in the same plane are said to be parallel lines if they do not intersect when produced indefinitely in either direction."

The following are the properties of parallel lines –
The distance between a pair of parallel lines always remains the same. This means that parallel lines are always the same distance apart from each other.
No matter how much we extend the parallel lines in each direction, they would never meet.

Symbolically, two parallel lines l and m are written as l || m. It should be noted that if two lines are not parallel, they will intersect each other. For instance, below we have the lines l and m as intersecting lines as they are not parallel.

For obtaining the equation of parallel lines, let us recall what we mean by the slope-intercept form of the equation of a line. The slope-intercept form of a straight line is given by y = mx + c, where m is the slope and c is the y-intercept. The steepness of the line is determined by the slope, or gradient, of the line, which is represented by the value m. It should be noted that the slopes of any two parallel lines are always the same. To find the gradient of a parallel line, let us first recall what we mean by the slope of a line. The trigonometrical tangent of the angle that a line makes with the positive direction of the x-axis in an anticlockwise sense is called the slope or the gradient of the line. The slope of a line is generally denoted by m.
Thus m = tan θ.

Since a line parallel to the x-axis makes an angle of 0° with the x-axis, its slope is tan 0° = 0. A line parallel to the y-axis, i.e. a line that is perpendicular to the x-axis, makes an angle of 90° with the x-axis, so its slope is tan $\frac{\pi}{2}$ = ∞. Also, the slope of a line equally inclined with the axes is 1 or -1, as it makes an angle of 45° or 135° with the x-axis. The angle of inclination of a line with the positive direction of the x-axis in an anticlockwise sense always lies between 0° and 180°. Let us now understand the slope using an example.

What can be said regarding a line if its slope is zero?

Let θ be the angle of inclination of the given line with the positive direction of the x-axis in an anticlockwise sense. Then its slope is given by m = tan θ. If the slope of a line is zero, then m = tan θ = 0 ⇒ θ = 0°. This means that either the line is the x-axis or it is parallel to the x-axis. Thus a line of zero slope is parallel to the x-axis.

Before we move on to understand how a transversal affects a pair of parallel lines, let us recall the basic definition of a transversal. A transversal is defined as a line intersecting two or more given lines in a plane at different points. For example, in the figure below, the lines l and m are parallel while the line p intersects both the line l and the line m. Hence, the line p is a transversal to the lines l and m. Now, as we can see in the figure, a transversal makes some angles with the lines it intersects. Let l and m be two lines and let n be the transversal intersecting them at P and Q respectively, as shown below.

Clearly, lines l and m make eight angles with the transversal n, four at P and four at Q. We have labelled them 1 to 8 for the sake of convenience and shall now classify them into the following groups.

The angles whose arms do not include the line segment PQ are called exterior angles. Therefore, in the above figure, angles 1, 2, 7 and 8 are exterior angles.
The angles whose arms include the line segment PQ are called interior angles. Therefore, in the above figure, angles 3, 4, 5 and 6 are interior angles.

A pair of angles in which one arm of both the angles is on the same side of the transversal and their other arms are directed in the same sense is called a pair of corresponding angles. In the above figure, there are four pairs of corresponding angles: ∠ 1 and ∠ 5, ∠ 2 and ∠ 6, ∠ 3 and ∠ 7, ∠ 4 and ∠ 8. We can also say that two angles on the same side of the transversal are known as corresponding angles if both lie either above or below the two lines.

A pair of angles in which one arm of each of the angles is on opposite sides of the transversal and whose other arms include segment PQ is called a pair of alternate interior angles. In other words, alternate interior angles are angles formed when two parallel or non-parallel lines are intersected by a transversal; the angles are positioned at the inner corners of the intersections and lie on opposite sides of the transversal. In the above figure, ∠ 3 and ∠ 5 form a pair of alternate interior angles. Another pair of alternate interior angles in this figure is ∠ 4 and ∠ 6.

A pair of angles in which one arm of each of the angles is on opposite sides of the transversal and whose other arms are directed in opposite directions and do not include segment PQ is called a pair of alternate exterior angles. In the above figure, ∠ 2 and ∠ 8 form a pair of alternate exterior angles. Another pair of alternate exterior angles in this figure is ∠ 1 and ∠ 7.

Now let us see how these angles behave when the two lines cut by the transversal are parallel. The following rules define the angles that are formed when a pair of parallel lines is intersected by a transversal –
The pairs of corresponding angles formed on the parallel lines will be equal.
The pairs of alternate interior angles formed on the parallel lines will be equal.
The pairs of alternate exterior angles formed on the parallel lines will be equal. The sum of consecutive interior angles on the same side of the transversal is 180°. The sum of consecutive exterior angles on the same side of the transversal is 180°. Let us understand this by an example. Consider the following figure where a pair of parallel lines has been intersected by a transversal – Now, let us observe the 8 angles formed in the above figure. The pairs of corresponding angles formed on the above parallel lines are ∠ 1 and ∠ 5, ∠ 2 and ∠ 6, ∠ 3 and ∠ 7, ∠ 4 and ∠ 8. All these four pairs of angles will be equal to each other. The pairs of alternate exterior angles formed on the above parallel lines are ∠ 1 and ∠ 8, ∠ 2 and ∠ 7. These two pairs of angles will be equal to each other. The pairs of alternate interior angles formed on the above parallel lines are ∠ 3 and ∠ 6, ∠ 4 and ∠ 5. These two pairs of angles will be equal to each other. The pairs of consecutive interior angles on the same side of the transversal are ∠ 4 and ∠ 6, ∠ 3 and ∠ 5. The sum of each of these pairs will be 180°. The pairs of consecutive exterior angles on the same side of the transversal are ∠ 2 and ∠ 8, ∠ 1 and ∠ 7. The sum of each of these pairs will be 180°. The above rules are also used to prove that two lines are parallel to each other. Observe our surroundings. Do you see any use of parallel lines in real life? Here are some real life instances where we make use of parallel lines and their concepts – Roads and Railway Tracks – Have you ever noticed that the roads and the railway tracks in any region of the world are always parallel lines? Although they run in the same direction, they never meet. Notebooks – Notice the lines marked in your notebooks for marking the space for writing. They are parallel lines that allow you to write neatly and in uniformity. Pedestrian crossings – The painted lines that define pedestrian crossings are always parallel lines.
Example 1 Are the two lines cut by the transversal line parallel? What property can you use to justify your answer? Solution We have been given a figure where two lines have been cut by a transversal. We need to confirm whether the given lines are parallel. Let us observe the angles marked on the transversal intersecting the two lines. We have been given that ∠ a = 96° and ∠ c = 96°. We can clearly see that ∠ a and ∠ c are alternate interior angles. Now, we have learned that if the alternate interior angles formed by a transversal intersecting two lines are equal, this means that the two lines are parallel. Here we have been given that these two alternate interior angles are equal. Hence, we can say that by the property of a transversal intersecting parallel lines, the two lines are parallel to each other. Example 2 In the given figure, the lines l and m are parallel. n is a transversal and ∠ 1 = 40°. Find all the angles marked in the figure. Solution We have been given that lines l and m are parallel, n is a transversal and ∠ 1 = 40°. We need to find the remaining angles. Let us start with each angle one by one. First, let us find ∠ 2. We can clearly see that ∠ 1 and ∠ 2 form a supplementary pair of angles. This means that the sum of ∠ 1 and ∠ 2 should be equal to 180°. Hence, we have, ∠ 1 + ∠ 2 = 180° ⇒ 40° + ∠ 2 = 180° ⇒ ∠ 2 = 180° – 40° ⇒ ∠ 2 = 140°. Now that we have found the value of ∠ 2, we will find the value of ∠ 6. Note that ∠ 2 and ∠ 6 form a pair of corresponding angles. Since the lines intersected by the transversal are parallel, the pair of corresponding angles must be equal. Hence, we have, ∠ 2 = ∠ 6 = 140°. Similarly, ∠ 1 = ∠ 5 = 40°. Note that ∠ 3 and ∠ 5 form a pair of alternate interior angles. Since the lines intersected by the transversal are parallel, the pair of alternate interior angles must be equal.
Hence, we have, ∠ 3 = ∠ 5 = 40°. Similarly, ∠ 4 = ∠ 6 = 140°. Similarly, we can see that ∠ 5 and ∠ 8 form a supplementary pair of angles. This means that the sum of ∠ 5 and ∠ 8 should be equal to 180°. Hence, we have, ∠ 5 + ∠ 8 = 180° ⇒ 40° + ∠ 8 = 180° ⇒ ∠ 8 = 180° – 40° ⇒ ∠ 8 = 140°. Again, we can see that ∠ 6 and ∠ 7 form a supplementary pair of angles. This means that the sum of ∠ 6 and ∠ 7 should be equal to 180°. Hence, we have, ∠ 6 + ∠ 7 = 180° ⇒ 140° + ∠ 7 = 180° ⇒ ∠ 7 = 180° – 140° ⇒ ∠ 7 = 40°. Hence, we have, ∠ 1 = ∠ 3 = ∠ 5 = ∠ 7 = 40° and ∠ 2 = ∠ 4 = ∠ 6 = ∠ 8 = 140°. Example 3 Determine the equation of the line through the point (4, -3) and parallel to the x-axis. Solution We have been given that the straight line passes through the point ( 4, -3 ). This means that x1 = 4 and y1 = -3. Also, the straight line is parallel to the x-axis. Now, recall that we have learned that a line of zero slope is parallel to the x-axis; conversely, a line parallel to the x-axis has slope zero. So, we have m = 0. We also know that the equation of a line in point-slope form is y - y1 = m (x - x1). Substituting the given values in the above equation, we have y - ( -3 ) = 0 (x - 4) ⇒ y + 3 = 0. Hence, the equation of the line through the point (4, -3) and parallel to the x-axis will be given by y + 3 = 0. Two lines l and m in the same plane are said to be parallel lines if they do not intersect when produced indefinitely in either direction. The slope of any two parallel lines is always the same. The trigonometrical tangent of the angle that a line makes with the positive direction of the x-axis in an anticlockwise sense is called the slope or the gradient of the line. A pair of angles in which one arm of both the angles is on the same side of the transversal and their other arms are directed in the same sense is called a pair of corresponding angles.
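The angle chasing in Example 2 can be mechanized. A small Python sketch (the function name is ours, not from the text) using the two facts the solution relies on: corresponding/alternate angles across parallel lines are equal, and angles on a straight line are supplementary. Under the labelling of Example 2, the odd-numbered angles all equal ∠ 1 and the even-numbered ones are its supplement:

```python
def transversal_angles(angle1):
    """Given angle 1 (in degrees) where a transversal meets one of two
    parallel lines, return all eight angles under the labelling used above:
    odd-numbered angles equal angle 1, even-numbered ones are its supplement."""
    supplement = 180 - angle1
    return {n: (angle1 if n % 2 == 1 else supplement) for n in range(1, 9)}

angles = transversal_angles(40)
print(angles)  # angles 1, 3, 5, 7 -> 40 and angles 2, 4, 6, 8 -> 140
```

Running it with ∠ 1 = 40° reproduces the answers of Example 2: the four acute angles are 40° and the four obtuse angles are 140°.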
A pair of angles in which one arm of each of the angles is on opposite sides of the transversal and whose other arms include segment PQ is called a pair of alternate interior angles. If two straight lines parallel to each other are intersected by a transversal, then the pairs of corresponding angles, alternate interior angles and alternate exterior angles will be equal. If two straight lines parallel to each other are intersected by a transversal, then the sum of consecutive interior angles on the same side of the transversal, as well as the sum of consecutive exterior angles on the same side of the transversal, is 180°.
Convert $3206_7$ to a base 10 integer. $3206_7 = 3 \cdot 7^3 + 2 \cdot 7^2 + 0 \cdot 7^1 + 6 \cdot 7^0 = 1029 + 98 + 6 = \boxed{1133}$.
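The expansion above can be double-checked programmatically; Python's built-in `int` accepts a base argument when parsing a string:

```python
# Evaluate 3206 (base 7) by expanding in powers of 7, then cross-check
# against Python's built-in base-aware string parser.
digits = [3, 2, 0, 6]
value = sum(d * 7 ** i for i, d in enumerate(reversed(digits)))
print(value)                    # 1133
print(int("3206", 7) == value)  # True
```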
\begin{document} \allowdisplaybreaks \title{Convergence of nonlinear filterings for multiscale systems with correlated L\'evy noises*} \author{Huijie Qiao} \thanks{{\it AMS Subject Classification(2010):} 60G35, 60G51, 60H10.} \thanks{{\it Keywords:} Multiscale systems, correlated L\'evy noises, the uniform mean square convergence, weak convergence.} \thanks{*This work was partly supported by NSF of China (No. 11001051, 11371352) and China Scholarship Council under Grant No. 201906095034.} \subjclass{} \date{} \dedicatory{School of Mathematics, Southeast University\\ Nanjing, Jiangsu 211189, China\\ Department of Mathematics, University of Illinois at Urbana-Champaign\\ Urbana, IL 61801, USA\\ [email protected]} \begin{abstract} In this paper, we consider nonlinear filtering problems of multiscale systems in two cases: correlated sensor L\'evy noises and correlated L\'evy noises. First of all, we prove that the slow part of the original system converges to the homogenized system in the uniform mean square sense. Then, based on the convergence result, in the case of correlated sensor L\'evy noises, the nonlinear filtering of the slow part is shown to approximate that of the homogenized system in the $L^1$ sense. However, in the case of correlated L\'evy noises, we prove that the nonlinear filtering of the slow part converges weakly to that of the homogenized system. \end{abstract} \maketitle \rm \section{Introduction} Nowadays, more and more high dimensional and complex mathematical models are used in engineering and science (cf. \cite{rbn, Dit, kus2, ps1, q00, qzd, yz, zqd}). For example, in some climate models, it is common to simulate the dynamics of the atmosphere and ocean on varying spatial grids with distinct time scale separations. Simultaneously, controlling, estimating and forecasting these models become more and more interesting (cf. \cite{rbn, kus2, ps1, qzd, zqd}). However, the different time scales cause considerable difficulty.
Therefore, how to treat these scales is the first important task. A common approach is to reduce the dimension of these high dimensional mathematical models and study low dimensional ones with a similar dynamical structure. Thus, by estimating the low dimensional models, we can understand the original high dimensional ones. Nonlinear filtering problems are well suited to estimating unobservable and complicated phenomena by observing some simple objects. So, by solving suitable filtering problems, high dimensional and complex models can be controlled and estimated. In this paper, we are mainly interested in the nonlinear filtering problems of the following two multiscale systems. For a fixed time $T>0$, let $(\Omega, \mathscr{F}, \{\mathscr{F}_t\}_{t\in[0,T]},{\mathbb P})$ be a complete filtered probability space. Consider the following slow-fast system on ${\mathbb R}^n\times{\mathbb R}^m$ and the observation process on ${\mathbb R}^d$: for $0\leqslant t\leqslant T$, \begin{eqnarray}\left\{\begin{array}{l} \mathrm{d} X^\varepsilon_t=b_1(X^\varepsilon_t,Z^\varepsilon_t)\mathrm{d} t+\sigma_1(X^\varepsilon_t)\mathrm{d} V_t+\int_{{\mathbb U}_1}f_1(X^\varepsilon_{t-}, u)\tilde{N}_{p_1}(\mathrm{d} t, \mathrm{d} u), \\ X^\varepsilon_0=x_0,\\ \mathrm{d} Z^\varepsilon_t=\frac{1}{\varepsilon}b_2(X^\varepsilon_t,Z^\varepsilon_t)\mathrm{d} t+\frac{1}{\sqrt{\varepsilon}}\sigma_2(X^\varepsilon_t,Z^\varepsilon_t)\mathrm{d} W_t+\int_{{\mathbb U}_2}f_2(X^\varepsilon_{t-},Z^\varepsilon_{t-},u)\tilde{N}^{\varepsilon}_{p_2}(\mathrm{d} t, \mathrm{d} u),\\ Z^\varepsilon_0=z_0,\\ \mathrm{d} Y_t^{\varepsilon}=h(X_t^{\varepsilon})\mathrm{d} t+\sigma_3 \mathrm{d} V_t+\sigma_4 \mathrm{d} B_t,\\ Y_0^{\varepsilon}=0, \end{array} \right.
\label{Eq0} \end{eqnarray} where $V, W, B$ are $l$-dimensional, $m$-dimensional and $j$-dimensional standard Brownian motions, respectively, and $p_1, p_2$ are two stationary Poisson point processes of the class (quasi left-continuous) defined on $(\Omega, \mathscr{F}, \{\mathscr{F}_t\}_{t\in[0,T]},{\mathbb P})$ with values in ${\mathbb U}$ and the characteristic measures $\nu_1, \nu_2$, respectively. Here $\nu_1, \nu_2$ are two $\sigma$-finite measures defined on a measurable space $({\mathbb U},\mathscr{U})$. Fix ${\mathbb U}_1, {\mathbb U}_2\in\mathscr{U}$ with $\nu_1({\mathbb U}\setminus{\mathbb U}_1)<\infty$ and $\nu_2({\mathbb U}\setminus{\mathbb U}_2)<\infty$. Let $N_{p_1}((0,t],\mathrm{d} u)$ be the counting measure of $p_1(t)$, i.e. a Poisson random measure; then ${\mathbb E} N_{p_1}((0,t],A)=t\nu_1(A)$ for $A\in\mathscr{U}$. Denote \begin{eqnarray*} \tilde{N}_{p_1}((0,t],A):=N_{p_1}((0,t],A)-t\nu_1(A), \qquad\qquad A\in\mathscr{U}|_{{\mathbb U}_1}, \end{eqnarray*} the compensated measure of $N_{p_1}((0,t],\mathrm{d} u)$. In the same way, we can define $N_{p_2}((0,t],\mathrm{d} u)$, $\tilde{N}_{p_2}((0,t],\mathrm{d} u)$. Further, $N_{p_2}^{\varepsilon}((0,t],\mathrm{d} u)$ is another Poisson random measure on $({\mathbb U},\mathscr{U})$ such that ${\mathbb E} N_{p_2}^{\varepsilon}((0,t],A)=\frac{1}{\varepsilon}t\nu_2(A)$ for $A\in\mathscr{U}$. Moreover, $V_t, W_t, B_t, N_{p_1}, N_{p_2}, N_{p_2}^{\varepsilon}$ are mutually independent. The mappings $b_1:{\mathbb R}^n\times{\mathbb R}^m\mapsto{\mathbb R}^n$, $b_2:{\mathbb R}^n\times{\mathbb R}^m\mapsto{\mathbb R}^m$, $\sigma_1:{\mathbb R}^n\mapsto{\mathbb R}^{n\times l}$, $\sigma_2:{\mathbb R}^n\times{\mathbb R}^m\mapsto{\mathbb R}^{m\times m}$, $f_1:{\mathbb R}^n\times{\mathbb U}_1\mapsto{\mathbb R}^n$, $f_2:{\mathbb R}^n\times{\mathbb R}^m\times{\mathbb U}_2\mapsto{\mathbb R}^m$, and $h: {\mathbb R}^n\mapsto{\mathbb R}^d$ are all Borel measurable.
The matrices $\sigma_3, \sigma_4$ are $d\times l, d\times j$, respectively. The system (\ref{Eq0}) is usually called a correlated sensor noise model. We also consider the following slow-fast system on ${\mathbb R}^n\times{\mathbb R}^m$ and the observation process on ${\mathbb R}^d$: for $0\leqslant t\leqslant T$, $l=d$, \begin{eqnarray}\left\{\begin{array}{l} \mathrm{d} \check{X}^\varepsilon_t=\check{b}_1(\check{X}^\varepsilon_t, \check{Z}^\varepsilon_t)\mathrm{d} t+\check{\sigma}_0(\check{X}^\varepsilon_t)\mathrm{d} B_t+\check{\sigma}_1(\check{X}^\varepsilon_t)\mathrm{d} V_t+\int_{{\mathbb U}_1}\check{f}_1(\check{X}^\varepsilon_{t-}, u)\tilde{N}_{p_1}(\mathrm{d} t, \mathrm{d} u), \\ \check{X}^\varepsilon_0=\check{x}_0,\\ \mathrm{d} \check{Z}^\varepsilon_t=\frac{1}{\varepsilon}\check{b}_2(\check{X}^\varepsilon_t,\check{Z}^\varepsilon_t)\mathrm{d} t+\frac{1}{\sqrt{\varepsilon}}\check{\sigma}_2(\check{X}^\varepsilon_t,\check{Z}^\varepsilon_t)\mathrm{d} W_t+\int_{{\mathbb U}_2}\check{f}_2(\check{X}^\varepsilon_{t-},\check{Z}^\varepsilon_{t-},u)\tilde{N}^{\varepsilon}_{p_2}(\mathrm{d} t, \mathrm{d} u),\\ \check{Z}^\varepsilon_0=\check{z}_0,\\ \check{Y}_t^{\varepsilon}=\int_0^t\check{h}(\check{X}_s^{\varepsilon})\mathrm{d} s+V_t+\int_0^t\int_{{\mathbb U}_3}\check{f}_3(s,u)\tilde{N}_{\lambda}(\mathrm{d} s, \mathrm{d} u)+\int_0^t\int_{{\mathbb U}\setminus{\mathbb U}_3}\check{g}_3(s,u)N_{\lambda}(\mathrm{d} s, \mathrm{d} u),\\ \check{Y}_0^{\varepsilon}=0, \end{array} \right. \label{Eq01} \end{eqnarray} where $N_{\lambda}(\mathrm{d} t,\mathrm{d} u)$ is a random measure with a predictable compensator $\lambda(t,\check{X}_t^\varepsilon,u)\mathrm{d} t\nu_3(\mathrm{d} u)$.
Here the function $\lambda: [0,T]\times{\mathbb R}^n\times{\mathbb U}\rightarrow(0,1)$ is Borel measurable and $\nu_3$ is a $\sigma$-finite measure defined on ${\mathbb U}$ with $\nu_3({\mathbb U}\setminus{\mathbb U}_3)<\infty$ and $\int_{{\mathbb U}_3}\|u\|_{{\mathbb U}}^2\,\nu_3(\mathrm{d} u)<\infty$ for a fixed ${\mathbb U}_3\in\mathscr{U}$. Concretely speaking, set $$ \tilde{N}_\lambda((0,t], A):=N_\lambda((0,t],A)-\int_0^t\int_A\lambda(s,\check{X}_s^\varepsilon,u)\mathrm{d} s\nu_3(\mathrm{d} u), \quad t\in[0,T], A\in\mathscr{U}|_{{\mathbb U}_3}, $$ and then $\tilde{N}_\lambda((0,t],\mathrm{d} u)$ is the compensated martingale measure of $N_{\lambda}((0,t],\mathrm{d} u)$. Moreover, $V_t, W_t, B_t, N_{p_1}, N_{p_2}, N_{p_2}^{\varepsilon}, N_{\lambda}$ are mutually independent. The mappings $\check{b}_1:{\mathbb R}^n\times{\mathbb R}^m\mapsto{\mathbb R}^n$, $\check{b}_2:{\mathbb R}^n\times{\mathbb R}^m\mapsto{\mathbb R}^m$, $\check{\sigma}_0:{\mathbb R}^n\mapsto{\mathbb R}^{n\times j}$, $\check{\sigma}_1:{\mathbb R}^n\mapsto{\mathbb R}^{n\times d}$, $\check{\sigma}_2:{\mathbb R}^n\times{\mathbb R}^m\mapsto{\mathbb R}^{m\times m}$, $\check{f}_1:{\mathbb R}^n\times{\mathbb U}_1\mapsto{\mathbb R}^n$, $\check{f}_2:{\mathbb R}^n\times{\mathbb R}^m\times{\mathbb U}_2\mapsto{\mathbb R}^m$, $\check{h}: {\mathbb R}^n\mapsto{\mathbb R}^d$, $\check{f}_3:[0,T]\times{\mathbb U}_3\mapsto{\mathbb R}^d$ and $\check{g}_3:[0,T]\times({\mathbb U}\setminus{\mathbb U}_3)\mapsto{\mathbb R}^d$ are all Borel measurable. As usual, the system (\ref{Eq01}) is called a correlated noise model. Note that in the systems (\ref{Eq0}) and (\ref{Eq01}), the unobservable processes and the observable ones have correlated parts. This type of multiscale correlated filtering problem usually stems from atmospheric and climate science. For example, coupled atmosphere-ocean models provide a multiscale model with fast atmospheric and slow ocean dynamics.
In the case of climate prediction, the ocean memory, due to its heat capacity, holds important information. Hence, an improved estimate of the ocean state, which is often the slow component, is of greater interest. In this paper, we first prove that the slow part of a fast-slow system converges to the homogenized system in the uniform mean square sense. Then, based on the convergence result, for the system (\ref{Eq0}), the nonlinear filtering of the slow part is shown to approximate that of the homogenized system in the $L^1$ sense. But for the system (\ref{Eq01}), we prove that the nonlinear filtering of the slow part converges weakly to that of the homogenized system. It is worthwhile mentioning our methods and results. Firstly, for the system (\ref{Eq0}), since the driving processes of the fast-slow system are correlated with those of the observation, we can {\it not} obtain the Zakai equation of the homogenized system (cf. \cite{q3}). Thus, those methods by means of the Zakai equation do {\it not} work (cf. \cite{ImkellerSri, kus, q00}). Therefore, we make use of the exponential martingale to prove the convergence of the filtering of the slow part to that of the homogenized system. However, for the system (\ref{Eq01}), we can deduce the Zakai equations of the slow system and the homogenized system, and then show their filtering convergence. Secondly, here we prove uniform mean square convergence, which is stronger than the weak convergence in \cite{rbn, kus2, q00} and the convergence in probability in \cite{ps1}. Thirdly, when $f_1=f_2=0$ in (\ref{Eq0}) and $\check{f}_1=\check{f}_2=\check{f}_3=\check{g}_3=0$ in (\ref{Eq01}), two types of multiscale correlated filtering problems have appeared in \cite{rbn} and \cite{lh}, respectively.
In \cite{rbn}, when the slow part of the original system converges to the homogenized system in distribution, Beeson and Namachchivaya only stated that the filtering of the slow part also converges to the filtering of the homogenized system in the $L^p$ sense. Unfortunately, they did not prove this result. Here we show the convergence in the $L^1$ sense when $f_1\neq 0, f_2\neq 0$. Therefore, our result generalizes theirs to some extent. In \cite{lh}, Lucic and Heunis proved that the slow part converges weakly to the homogenized system, and that the filtering of the slow part also converges weakly to that of the homogenized system. Here, we establish that the slow part converges to the homogenized system in the uniform mean square sense, and that the filtering of the slow part converges to that of the homogenized system in the $L^1$ sense when $\check{f}_1\neq 0, \check{f}_2\neq 0$. Thus, our result is stronger. Finally, in \cite{q00}, we considered the nonlinear filtering problem of the system (\ref{Eq01}) with $\check{\sigma}_1=0$. Here, we permit $\check{\sigma}_1\neq 0$. Therefore, our result is more general in some sense. The paper is arranged as follows. In the next section, we consider strong convergence for the fast-slow system. In Section \ref{notset}, we define the nonlinear filtering problem and then show that the filtering of the slow part for the system (\ref{Eq0}) converges to that of the homogenized system. In Section \ref{cornoi}, the filtering of the slow part for the system (\ref{Eq01}) is proved to converge weakly to that of the homogenized system. We summarize all the results in Section \ref{con}. The following convention will be used throughout the paper: $C$ with or without indices will denote different positive constants whose values may change from one place to another. \section{Convergence of some processes}\label{conpro} In this section, we study strong convergence for the fast-slow system (\ref{Eq0}) when $\varepsilon\rightarrow0$.
\subsection{A slow-fast system}\label{sfsy} In the subsection, we introduce slow-fast systems and the existence and uniqueness of their solutions. Let us consider the system (\ref{Eq0}). First of all, we give out our assumptions and state some related results. \begin{enumerate}[\bf{Assumption 1.}] \item \end{enumerate} \begin{enumerate}[($\mathbf{H}^1_{b_1, \sigma_1, f_1}$)] \item For $x_1, x_2\in{\mathbb R}^n$, $z_1, z_2\in{\mathbb R}^m$, there exist $L_{b_1}, L_{\sigma_1}, L_{f_1}>0$ such that \begin{eqnarray*} &&|b_1(x_1, z_1)-b_1(x_2, z_2)|^2\leqslant L_{b_1}(|x_1-x_2|^2+|z_1-z_2|^2),\\ &&\|\sigma_1(x_1)-\sigma_1(x_2)\|^2\leqslant L_{\sigma_1}|x_1-x_2|^2,\\ &&\int_{{\mathbb U}_1}|f_1(x_1,u)-f_1(x_2,u)|^2\,\nu_1(\mathrm{d} u)\leqslant L_{f_1}|x_1-x_2|^2, \end{eqnarray*} where $|\cdot|$ and $\|\cdot\|$ denote the length of a vector and the Hilbert-Schmidt norm of a matrix, respectively. \end{enumerate} \begin{enumerate}[($\mathbf{H}^2_{b_1, \sigma_1, f_1}$)] \item For $x\in{\mathbb R}^n$, $z\in{\mathbb R}^m$, there exists a $L_{b_1, \sigma_1, f_1}>0$ such that $$ |b_1(x,z)|^2+\|\sigma_1(x)\|^2+\int_{{\mathbb U}_1}|f_1(x,u)|^2\nu_1(\mathrm{d} u)\leqslant L_{b_1, \sigma_1, f_1}. $$ \end{enumerate} \begin{enumerate}[($\mathbf{H}^1_{b_2}$)] \item (i) $b_2$ is bi-continuous in $(x, z)$,\\ (ii) There exist $L_{b_2}\geq0, \bar{L}_{b_2}>0$ such that \begin{eqnarray*} &&|b_2(x_1, z)-b_2(x_2, z)|\leqslant L_{b_2}|x_1-x_2|, \qquad\qquad\qquad x_1, x_2\in{\mathbb R}^n, z\in{\mathbb R}^m,\\ &&\<z_1-z_2, b_2(x, z_1)-b_2(x, z_2){\rangle}\leqslant -\bar{L}_{b_2}|z_1-z_2|^2, \qquad x\in{\mathbb R}^n, z_1, z_2\in{\mathbb R}^m, \end{eqnarray*} (iii) For $x\in{\mathbb R}^n$, $z\in{\mathbb R}^m$, there exists a constant $\bar{\bar{L}}_{b_2}>0$ such that $$ |b_2(x,z)|\leqslant \bar{\bar{L}}_{b_2}(1+|x|+|z|). 
$$ \end{enumerate} \begin{enumerate}[($\mathbf{H}^1_{\sigma_2}$)] \item For $x_1, x_2\in{\mathbb R}^n$, $z_1, z_2\in{\mathbb R}^m$, there exists a constant $L_{\sigma_2}>0$ such that \begin{eqnarray*} \|\sigma_2(x_1, z_1)-\sigma_2(x_2, z_2)\|\leqslant L_{\sigma_2}(|x_1-x_2|+|z_1-z_2|). \end{eqnarray*} \end{enumerate} \begin{enumerate}[($\mathbf{H}^1_{f_2}$)] \item There exists a positive function $L(u)$ satisfying \begin{eqnarray*} \sup_{u\in{\mathbb U}_2}L(u)\leqslant\gamma<1~\mbox{ and } \int_{{\mathbb U}_2}L(u)^2\,\nu_2(\mathrm{d} u)<+\infty, \end{eqnarray*} such that for any $x_1, x_2\in{\mathbb R}^n$, $z_1, z_2\in{\mathbb R}^m$ and $u\in{\mathbb U}_2$ \begin{eqnarray*} |f_2(x_1,z_1,u)-f_2(x_2,z_2,u)|\leqslant L(u)(|x_1-x_2|+|z_1-z_2|), \end{eqnarray*} and \begin{eqnarray*} |f_2(0,0,u)|\leqslant L(u). \end{eqnarray*} \end{enumerate} \medspace Under {\bf Assumption 1.}, by Theorem 1.2 in \cite{q2}, we know that the system (\ref{Eq0}) has a unique strong solution denoted by $(X^\varepsilon_t,Z^\varepsilon_t)$. \subsection{The fast equation}\label{fas} In the subsection, we mainly study the second part of the system (\ref{Eq0}). First, take any $x\in{\mathbb R}^n$ and fix it. And consider the following SDE in ${\mathbb R}^m$: \begin{eqnarray*}\left\{\begin{array}{l} \mathrm{d} Z^x_t=b_2(x,Z^x_t)\mathrm{d} t+\sigma_2(x,Z^x_t)\mathrm{d} W_t+\int_{{\mathbb U}_2}f_2(x,Z^x_t,u)\tilde{N}_{p_2}(\mathrm{d} t, \mathrm{d} u),\\ Z^x_0=z_0, \qquad t\geq0. \end{array} \right. \end{eqnarray*} Under the assumption ($\mathbf{H}^1_{b_2}$) ($\mathbf{H}^1_{\sigma_2}$) ($\mathbf{H}^1_{f_2}$), the above equation has a unique solution $Z^x_t$. In addition, it is a Markov process and its transition probability is denoted by $p(x; z_0,t,A)$ for $t\geq0$ and $A\in\mathscr{B}({\mathbb R}^m)$. 
We assume: \begin{enumerate}[\bf{Assumption 2.}] \item \end{enumerate} \begin{enumerate}[($\mathbf{H}^2_{\sigma_2}$)] \item There exists a function $\alpha_1(x)>0$ such that \begin{eqnarray*} {\langle}\sigma_2(x,z)h,h{\rangle}\geqslant\sqrt{\alpha_1(x)}|h|^2, \qquad z,h\in{\mathbb R}^m, \end{eqnarray*} and \begin{eqnarray*} \|\sigma_{\alpha_1}(x,z_1)-\sigma_{\alpha_1}(x,z_2)\|^2\leqslant L_{\alpha_1}|z_1-z_2|^2, \qquad z_1, z_2\in{\mathbb R}^m, \end{eqnarray*} where $\sigma_{\alpha_1}(x,z)$ is the unique symmetric nonnegative definite matrix such that $\sigma_{\alpha_1}(x,z)\sigma_{\alpha_1}(x,z) =\sigma_2(x,z)\sigma^T_2(x,z)-\alpha_1(x)\emph{I}$ for the unit matrix $\emph{I}$. \end{enumerate} \begin{enumerate}[($\mathbf{H}^1_{b_2,\sigma_2,f_2}$)] \item There exist a $r>2$ and two functions $\alpha_2(x)>0$, $\alpha_3(x)\geq0$ such that for all $z\in{\mathbb R}^m$ \begin{eqnarray*} 2\<z,b_2(x,z){\rangle}+\|\sigma_2(x,z)\|^2+\int_{{\mathbb U}_2}\big|f_2(x,z,u)\big|^2\nu_2(\mathrm{d} u)\leqslant-\alpha_2(x)|z|^r+\alpha_3(x). \end{eqnarray*} \end{enumerate} \begin{enumerate}[($\mathbf{H}^2_{b_2,\sigma_2,f_2}$)] \item $$ M:=2\bar{L}_{b_2}-L_{b_2}-2L^2_{\sigma_2}-2\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u)>0. $$ \end{enumerate} Under the assumptions ($\mathbf{H}^1_{b_2}$) ($\mathbf{H}^1_{\sigma_2}$) ($\mathbf{H}^1_{f_2}$) ($\mathbf{H}^2_{\sigma_2}$) ($\mathbf{H}^1_{b_2,\sigma_2,f_2}$), by Theorem 1.3 in \cite{q1} it holds that there exists a unique invariant probability measure $\bar{p}(x,\cdot)$ for $Z^x_t$ such that \begin{eqnarray} \|p(x; z_0,t,\cdot)-\bar{p}(x,\cdot)\|_{var}\leqslant Ce^{-\alpha t}, \quad t>0, \label{experg} \end{eqnarray} where $\|\cdot\|_{var}$ is the total variance norm and $C, \alpha>0$ are two constants independent of $z_0, t$. \subsection{The homogenized equation}\label{ave} In the subsection, we construct a homogenized equation and study the relationship between the origin equation and the homogenized one. 
Next, set \begin{eqnarray*} \bar{b}_1(x):=\int_{{\mathbb R}^m}b_1(x,z)\bar{p}(x,\mathrm{d} z), \end{eqnarray*} and by \cite[Lemma 3.1]{q00} we know that $\bar{b}_1$ is Lipschitz continuous. So, we construct an SDE on the probability space $(\Omega, \mathscr{F}, \{\mathscr{F}_t\}_{t\in[0,T]}, {\mathbb P})$ as follows: \begin{eqnarray}\left\{\begin{array}{l} \mathrm{d} X^0_t=\bar{b}_1(X^0_t)\mathrm{d} t+\sigma_1(X^0_t)\mathrm{d} V_t+\int_{{\mathbb U}_1}f_1(X^0_{t-}, u)\tilde{N}_{p_1}(\mathrm{d} t, \mathrm{d} u),\\ X^0_0=x_0, \qquad\qquad 0\leqslant t\leqslant T. \label{appequ} \end{array} \right. \end{eqnarray} Based on the assumptions ($\mathbf{H}^1_{b_1, \sigma_1, f_1}$) ($\mathbf{H}^2_{b_1, \sigma_1, f_1}$), it holds that Eq.(\ref{appequ}) has a unique strong solution denoted by $X^0_t$. Then we study the relation between $X^{\varepsilon}$ and $X^0$. To do this, we partition $[0,T]$ into intervals of size $\delta_{\varepsilon}>0$ and introduce an auxiliary process: \begin{eqnarray}&& \mathrm{d} \hat{Z}^\varepsilon_t=\frac{1}{\varepsilon}b_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_t)\mathrm{d} t+\frac{1}{\sqrt{\varepsilon}}\sigma_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_t)\mathrm{d} W_t+\int_{{\mathbb U}_2}f_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_{t-},u)\tilde{N}^{\varepsilon}_{p_2}(\mathrm{d} t, \mathrm{d} u),\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad t\in[k\delta_{\varepsilon},(k+1)\delta_{\varepsilon}),\nonumber\\ && \hat{Z}^\varepsilon_{k\delta_{\varepsilon}}=Z^\varepsilon_{k\delta_{\varepsilon}}, \label{auxpro} \end{eqnarray} for $k=0,\cdots, [\frac{T}{\delta_{\varepsilon}}]$, where $[\frac{T}{\delta_{\varepsilon}}]$ denotes the integer part of $\frac{T}{\delta_{\varepsilon}}$. Moreover, we mention the fact that $[\frac{t}{\delta_{\varepsilon}}]=k$ for $ t\in[k\delta_{\varepsilon},(k+1)\delta_{\varepsilon})$.
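To fix ideas, here is a simple illustrative special case of the averaged drift defined above (our own example, not taken from the paper's references): when $m=1$, $b_2(x,z)=\gamma(x)-z$ for some Lipschitz function $\gamma$, $\sigma_2$ is a positive constant and $f_2=0$, the frozen fast process $Z^x$ is an Ornstein--Uhlenbeck process with invariant measure $\bar{p}(x,\cdot)=\mathcal{N}(\gamma(x),\sigma_2^2/2)$; if moreover $z\mapsto b_1(x,z)$ is affine, then

```latex
\begin{eqnarray*}
\bar{b}_1(x)=\int_{{\mathbb R}}b_1(x,z)\,\bar{p}(x,\mathrm{d} z)=b_1\big(x,\gamma(x)\big),
\end{eqnarray*}
```

so in this case the homogenized drift is obtained simply by evaluating $b_1$ at the equilibrium mean of the frozen fast variable.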
The following lemma gives the relationship between $Z^\varepsilon$ and $\hat{Z}^\varepsilon$. \begin{lemma} Under {\bf Assumption 1.-2.}, it holds that \begin{eqnarray} \sup\limits_{0\leqslant s\leqslant T}{\mathbb E}|Z^\varepsilon_s-\hat{Z}^\varepsilon_s|^2\leqslant \frac{L_{b_2}+2L^2_{\sigma_2}+2\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u)}{\varepsilon}3(\delta_{\varepsilon}+1)L_{b_1, \sigma_1, f_1}\delta^2_{\varepsilon}. \label{zhatz} \end{eqnarray} \end{lemma} \begin{proof} By the equations (\ref{Eq0})(\ref{auxpro}), it holds that for $s\in[k\delta_{\varepsilon},(k+1)\delta_{\varepsilon})$ \begin{eqnarray*} Z^\varepsilon_s-\hat{Z}^\varepsilon_s&=&\frac{1}{\varepsilon}\int_{k\delta_{\varepsilon}}^s \(b_2(X^\varepsilon_r,Z^\varepsilon_r)-b_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_r)\)\mathrm{d} r\\ &&+\frac{1}{\sqrt{\varepsilon}}\int_{k\delta_{\varepsilon}}^s\(\sigma_2(X^\varepsilon_r,Z^\varepsilon_r)-\sigma_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_r)\)\mathrm{d} W_r\\ &&+\int_{k\delta_{\varepsilon}}^s\int_{{\mathbb U}_2}\(f_2(X^\varepsilon_r,Z^\varepsilon_{r-},u)-f_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_{r-},u)\)\tilde{N}^{\varepsilon}_{p_2}(\mathrm{d} r, \mathrm{d} u). 
\end{eqnarray*} Applying the It\^o formula to $|Z^\varepsilon_s-\hat{Z}^\varepsilon_s|^2$ and taking the expectation on both sides, we have that \begin{eqnarray*} {\mathbb E}|Z^\varepsilon_s-\hat{Z}^\varepsilon_s|^2&=& \frac{2}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s \<Z^\varepsilon_r-\hat{Z}^\varepsilon_r, b_2(X^\varepsilon_r,Z^\varepsilon_r)-b_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_r){\rangle}\mathrm{d} r\nonumber\\ &&+\frac{1}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s\|\sigma_2(X^\varepsilon_r,Z^\varepsilon_r)-\sigma_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_r)\|^2\mathrm{d} r\nonumber\\ &&+\frac{1}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s\int_{{\mathbb U}_2}|f_2(X^\varepsilon_r,Z^\varepsilon_r,u)-f_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_r,u)|^2\nu_2(\mathrm{d} u)\mathrm{d} r\nonumber\\ &\leqslant& \frac{2}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s \<Z^\varepsilon_r-\hat{Z}^\varepsilon_r, b_2(X^\varepsilon_r,Z^\varepsilon_r)-b_2(X^\varepsilon_{k\delta_{\varepsilon}},Z^\varepsilon_r){\rangle}\mathrm{d} r\nonumber\\ &&+\frac{2}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s \<Z^\varepsilon_r-\hat{Z}^\varepsilon_r, b_2(X^\varepsilon_{k\delta_{\varepsilon}},Z^\varepsilon_r)-b_2(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_r){\rangle}\mathrm{d} r\nonumber\\ &&+\frac{2L^2_{\sigma_2}}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s\(|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2+|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\)\mathrm{d} r\nonumber\\ &&+\frac{2}{\varepsilon}\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u){\mathbb E}\int_{k\delta_{\varepsilon}}^s\(|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2+|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\)\mathrm{d} r\nonumber\\ &\leqslant& \frac{2}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s |Z^\varepsilon_r-\hat{Z}^\varepsilon_r|
|b_2(X^\varepsilon_r,Z^\varepsilon_r)-b_2(X^\varepsilon_{k\delta_{\varepsilon}},Z^\varepsilon_r)|\mathrm{d} r\nonumber\\ &&-\frac{2\bar{L}_{b_2}}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s |Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\mathrm{d} r\nonumber\\ &&+\frac{2L^2_{\sigma_2}}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s\(|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2+|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\)\mathrm{d} r\nonumber\\ &&+\frac{2}{\varepsilon}\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u){\mathbb E}\int_{k\delta_{\varepsilon}}^s\(|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2+|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\)\mathrm{d} r\nonumber\\ &\leqslant& \frac{L_{b_2}}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s (|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2+|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2)\mathrm{d} r\nonumber\\ &&-\frac{2\bar{L}_{b_2}}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s |Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\mathrm{d} r\nonumber\\ &&+\frac{2L^2_{\sigma_2}}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s\(|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2+|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\)\mathrm{d} r\nonumber\\ &&+\frac{2}{\varepsilon}\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u){\mathbb E}\int_{k\delta_{\varepsilon}}^s\(|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2+|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\)\mathrm{d} r, \end{eqnarray*} where ($\mathbf{H}^1_{b_2}$) ($\mathbf{H}^1_{\sigma_2}$) ($\mathbf{H}^1_{f_2}$) are used. 
It follows that \begin{eqnarray*} {\mathbb E}|Z^\varepsilon_s-\hat{Z}^\varepsilon_s|^2+\frac{M}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s|Z^\varepsilon_r-\hat{Z}^\varepsilon_r|^2\mathrm{d} r\leqslant \frac{L_{b_2}+2L^2_{\sigma_2}+2\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u)}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2\mathrm{d} r. \end{eqnarray*} Thus, by ($\mathbf{H}^2_{b_2,\sigma_2,f_2}$) it holds that \begin{eqnarray} {\mathbb E}|Z^\varepsilon_s-\hat{Z}^\varepsilon_s|^2&\leqslant& \frac{L_{b_2}+2L^2_{\sigma_2}+2\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u)}{\varepsilon}{\mathbb E}\int_{k\delta_{\varepsilon}}^s|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2\mathrm{d} r. \label{zhates} \end{eqnarray} To obtain (\ref{zhatz}), we only need to estimate ${\mathbb E}|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2$ for $r\in[k\delta_{\varepsilon},(k+1)\delta_{\varepsilon})$. Note that \begin{eqnarray*} X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}=\int_{k\delta_{\varepsilon}}^r b_1(X^\varepsilon_v,Z^\varepsilon_v)\mathrm{d} v+\int_{k\delta_{\varepsilon}}^r \sigma_1(X^\varepsilon_v)\mathrm{d} V_v+\int_{k\delta_{\varepsilon}}^r\int_{{\mathbb U}_1}f_1(X^\varepsilon_{v-}, u)\tilde{N}_{p_1}(\mathrm{d} v, \mathrm{d} u).
\end{eqnarray*} So, by the H\"older inequality and ($\mathbf{H}^2_{b_1, \sigma_1, f_1}$) we obtain that \begin{eqnarray} {\mathbb E}|X^\varepsilon_r-X^\varepsilon_{k\delta_{\varepsilon}}|^2&\leqslant& 3{\mathbb E}\left|\int_{k\delta_{\varepsilon}}^rb_1(X^\varepsilon_v,Z^\varepsilon_v)\mathrm{d} v\right|^2+3{\mathbb E}\left|\int_{k\delta_{\varepsilon}}^r \sigma_1(X^\varepsilon_v)\mathrm{d} V_v\right|^2\nonumber\\ &&+3{\mathbb E}\left|\int_{k\delta_{\varepsilon}}^r\int_{{\mathbb U}_1}f_1(X^\varepsilon_{v-}, u)\tilde{N}_{p_1}(\mathrm{d} v, \mathrm{d} u)\right|^2\nonumber\\ &\leqslant& 3(r-k\delta_{\varepsilon}){\mathbb E}\int_{k\delta_{\varepsilon}}^r\left|b_1(X^\varepsilon_v,Z^\varepsilon_v)\right|^2\mathrm{d} v+3{\mathbb E}\int_{k\delta_{\varepsilon}}^r \|\sigma_1(X^\varepsilon_v)\|^2\mathrm{d} v\nonumber\\ &&+3{\mathbb E}\int_{k\delta_{\varepsilon}}^r\int_{{\mathbb U}_1}\left|f_1(X^\varepsilon_{v-}, u)\right|^2\nu_1(\mathrm{d} u)\mathrm{d} v\nonumber\\ &\leqslant&3(\delta_{\varepsilon}+1)L_{b_1, \sigma_1, f_1}\delta_{\varepsilon}. \label{xdex} \end{eqnarray} By inserting (\ref{xdex}) in (\ref{zhates}), it holds that \begin{eqnarray*} {\mathbb E}|Z^\varepsilon_s-\hat{Z}^\varepsilon_s|^2&\leqslant&\frac{L_{b_2}+2L^2_{\sigma_2}+2\int_{{\mathbb U}_2}L^2(u)\nu_2(\mathrm{d} u)}{\varepsilon}3(\delta_{\varepsilon}+1)L_{b_1, \sigma_1, f_1}\delta^2_{\varepsilon}. \end{eqnarray*} This is just right (\ref{zhatz}). Thus, the proof is complete. \end{proof} Next, we apply (\ref{zhatz}) to estimate $|X^\varepsilon_t-X^0_t|$. The main result in the section is the following theorem. \begin{theorem}\label{xzerx} Suppose that {\bf Assumption 1.-2.} hold. 
Then there exists a constant $C\geqslant 0$ independent of $\varepsilon, \delta_{\varepsilon}$ such that \begin{eqnarray} {\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}|X^\varepsilon_t-X^0_t|^2\)\leqslant \(C\frac{\varepsilon}{\delta_{\varepsilon}}+C(\delta_{\varepsilon}+1)\delta_{\varepsilon}+C(\delta_{\varepsilon}+1)\frac{\delta^2_{\varepsilon}}{\varepsilon}\)e^{CT}. \label{xzerxe} \end{eqnarray} \end{theorem} \begin{proof} By equations (\ref{Eq0}) and (\ref{appequ}), we know that \begin{eqnarray*} X^\varepsilon_t-X^0_t&=&\int_0^t\left(b_1(X^\varepsilon_s,Z^\varepsilon_s)-\bar{b}_1(X^0_s)\right)\mathrm{d} s+\int_0^t\left(\sigma_1(X^\varepsilon_s)-\sigma_1(X^0_s)\right)\mathrm{d} V_s\\ &&+\int_0^t\int_{{\mathbb U}_1}\left(f_1(X^\varepsilon_{s-}, u)-f_1(X^0_{s-}, u)\right)\tilde{N}_{p_1}(\mathrm{d} s, \mathrm{d} u), \qquad t\in[0,T]. \end{eqnarray*} And then by the Burkholder-Davis-Gundy inequality and the H\"older inequality, it holds that \begin{eqnarray} {\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}|X^\varepsilon_t-X^0_t|^2\)&\leqslant& 3{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\left(b_1(X^\varepsilon_s,Z^\varepsilon_s)-\bar{b}_1(X^0_s)\right)\mathrm{d} s\right|^2\)\nonumber\\ &&+3{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\left(\sigma_1(X^\varepsilon_s)-\sigma_1(X^0_s)\right)\mathrm{d} V_s\right|^2\)\nonumber\\ &&+3{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\int_{{\mathbb U}_1}\left(f_1(X^\varepsilon_{s-}, u)-f_1(X^0_{s-}, u)\right)\tilde{N}_{p_1}(\mathrm{d} s, \mathrm{d} u)\right|^2\)\nonumber\\ &\leqslant&12{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\left(b_1(X^\varepsilon_s,Z^\varepsilon_s)-b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)\right)\mathrm{d} s\right|^2\)\nonumber\\ &&+12{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant
T}\left|\int_0^t\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)-\bar{b}_1(X^{\varepsilon}_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\)\nonumber\\ &&+12{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\left(\bar{b}_1(X^{\varepsilon}_{k\delta_{\varepsilon}})-\bar{b}_1(X^{\varepsilon}_s)\right)\mathrm{d} s\right|^2\)\nonumber\\ &&+12{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\left(\bar{b}_1(X^{\varepsilon}_s)-\bar{b}_1(X^0_s)\right)\mathrm{d} s\right|^2\)\nonumber\\ &&+12{\mathbb E}\int_0^T\left\|\sigma_1(X^\varepsilon_s)-\sigma_1(X^0_s)\right\|^2\mathrm{d} s\nonumber\\ &&+12{\mathbb E}\int_0^T\int_{{\mathbb U}_1}\left|f_1(X^\varepsilon_{s-}, u)-f_1(X^0_{s-}, u)\right|^2\nu_1(\mathrm{d} u)\mathrm{d} s\nonumber\\ &\leqslant&12 TL_{b_1}\int_0^T\({\mathbb E}|X^{\varepsilon}_s-X^{\varepsilon}_{k\delta_{\varepsilon}}|^2+{\mathbb E}|Z^\varepsilon_s-\hat{Z}^\varepsilon_s|^2\)\mathrm{d} s\nonumber\\ &&+12{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\)\nonumber\\ &&+12TC\int_0^T{\mathbb E}|X^{\varepsilon}_{k\delta_{\varepsilon}}-X^{\varepsilon}_s|^2\mathrm{d} s\nonumber\\ &&+\(12TC+12L_{\sigma_1}+12L_{f_1}\)\int_0^T{\mathbb E}|X^\varepsilon_s-X^0_s|^2\mathrm{d} s\nonumber\\ &\leqslant&12{\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}\left|\int_0^t\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\)\nonumber\\ &&+\(12 TL_{b_1}+12TC\)\int_0^T{\mathbb E}|X^{\varepsilon}_{k\delta_{\varepsilon}}-X^{\varepsilon}_s|^2\mathrm{d} s\nonumber\\ &&+12 TL_{b_1}\int_0^T{\mathbb E}|Z^{\varepsilon}_s-\hat{Z}^{\varepsilon}_s|^2\mathrm{d} s\nonumber\\ &&+\(12TC+12L_{\sigma_1}+12L_{f_1}\)\int_0^T{\mathbb E}\(\sup\limits_{0\leqslant r\leqslant 
s}|X^{\varepsilon}_r-X^0_r|^2\)\mathrm{d} s\nonumber\\ &=:&I_1+I_2+I_3+I_4, \label{xhatzerest0} \end{eqnarray} where ($\mathbf{H}^1_{b_1, \sigma_1, f_1}$) is used in the third inequality. Next, we estimate $I_1$. Note that \begin{eqnarray} I_1&=&12{\mathbb E}\left(\sup\limits_{0\leqslant i\leqslant [T/\delta_{\varepsilon}]-1}\left|\sum_{k=0}^i\int_{k\delta_{\varepsilon}}^{(k+1)\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\right)\nonumber\\ &\leqslant&12{\mathbb E}\left(\sup\limits_{0\leqslant i\leqslant [T/\delta_{\varepsilon}]-1}(i+1)\sum_{k=0}^i\left|\int_{k\delta_{\varepsilon}}^{(k+1)\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\right)\nonumber\\ &\leqslant&12[T/\delta_{\varepsilon}]\sum_{k=0}^{[T/\delta_{\varepsilon}]-1}{\mathbb E}\left|\int_{k\delta_{\varepsilon}}^{(k+1)\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\nonumber\\ &\leqslant&12[T/\delta_{\varepsilon}]^2\sup\limits_{0\leqslant k\leqslant [T/\delta_{\varepsilon}]-1}{\mathbb E}\left|\int_{k\delta_{\varepsilon}}^{(k+1)\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\nonumber\\ &\leqslant&12\(\frac{T}{\delta_{\varepsilon}}\)^2\sup\limits_{0\leqslant k\leqslant [T/\delta_{\varepsilon}]-1}{\mathbb E}\left|\int_0^{\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_{k\delta_{\varepsilon}+s})-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2. 
\label{i1est} \end{eqnarray} So, we only need to analyze ${\mathbb E}\left|\int_0^{\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_{k\delta_{\varepsilon}+s})-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2$ for $k=0,\cdots, [T/\delta_{\varepsilon}]-1$. Fix $k$ and set \begin{eqnarray*}\left\{\begin{array}{l} \mathrm{d} \check{Z}^\varepsilon_t=b_2(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_t)\mathrm{d} t+\sigma_2(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_t)\mathrm{d} \check{W}_t+\int_{{\mathbb U}_2}f_2(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_{t-},u)\tilde{N}_{\check{p}_2}(\mathrm{d} t, \mathrm{d} u), t\in[0, \delta_{\varepsilon}/\varepsilon),\\ \check{Z}^\varepsilon_0=Z^\varepsilon_{k\delta_{\varepsilon}}, \end{array} \right. \end{eqnarray*} where $\check{W}$, $W$, $\check{p}_2$ and $p_2$ are mutually independent, and $\check{W}$, $W$ and $\check{p}_2$, $p_2$ have the same distributions, respectively. And by the scaling property of Brownian motions and Poisson random measures, it holds that $\hat{Z}^\varepsilon_{k\delta_{\varepsilon}+t}$ and $\check{Z}^\varepsilon_{t/\varepsilon}$ have the same distribution.
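To sketch the scaling step (the notation $W^{(\varepsilon)}$ below is introduced only for this explanation, and we suppress the conditioning on $(X^\varepsilon_{k\delta_{\varepsilon}},Z^\varepsilon_{k\delta_{\varepsilon}})$): setting $W^{(\varepsilon)}_t:=\varepsilon^{-1/2}(W_{k\delta_{\varepsilon}+\varepsilon t}-W_{k\delta_{\varepsilon}})$, the process $W^{(\varepsilon)}$ is again a Brownian motion, and the analogously time-rescaled compensated Poisson random measure has the intensity measure $\nu_2(\mathrm{d} u)\mathrm{d} t$. Hence $t\mapsto \hat{Z}^\varepsilon_{k\delta_{\varepsilon}+\varepsilon t}$ solves an equation of the same form as that for $\check{Z}^\varepsilon$, driven by noises with the same laws as $\check{W}$ and $\tilde{N}_{\check{p}_2}$, so the uniqueness in law of solutions gives
$$
\hat{Z}^\varepsilon_{k\delta_{\varepsilon}+\varepsilon t}\stackrel{d}{=}\check{Z}^\varepsilon_t, \qquad t\in[0, \delta_{\varepsilon}/\varepsilon),
$$
which is the stated identity after replacing $t$ by $t/\varepsilon$.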
Thus we have \begin{eqnarray} {\mathbb E}\left|\int_0^{\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_{k\delta_{\varepsilon}+s})-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2&=&{\mathbb E}\left|\int_0^{\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_{s/\varepsilon})-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\nonumber\\ &=&{\varepsilon}^2{\mathbb E}\left|\int_0^{{\delta_{\varepsilon}}/\varepsilon}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2\nonumber\\ &=&{\varepsilon}^2{\mathbb E}\int_0^{{\delta_{\varepsilon}}/\varepsilon}\int_0^{{\delta_{\varepsilon}}/\varepsilon}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_r)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\nonumber\\ &&\qquad\qquad \left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\mathrm{d} r\nonumber\\ &=&2{\varepsilon}^2\int_0^{{\delta_{\varepsilon}}/\varepsilon}\int_r^{{\delta_{\varepsilon}}/\varepsilon}{\mathbb E}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_r)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\nonumber\\ &&\qquad \left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\mathrm{d} r. \label{maxest} \end{eqnarray} Next, we investigate the integrand of the above integral.
By the H\"older inequality it holds that \begin{eqnarray} &&{\mathbb E}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_r)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right) \left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\nonumber\\ &=&{\mathbb E}\left[\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_r)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right) {\mathbb E}\left[\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_s)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)|\mathscr{F}^{\check{Z}^\varepsilon}_r\right]\right]\nonumber\\ &=&{\mathbb E}\left[\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_r)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right) {\mathbb E}^{\check{Z}^\varepsilon_r}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_{s-r})-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\right]\nonumber\\ &\leqslant&\left({\mathbb E}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_r)-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)^2\right)^{1/2}\left({\mathbb E}\({\mathbb E}^{\check{Z}^\varepsilon_r}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\check{Z}^\varepsilon_{s-r})-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\)^2\right)^{1/2}\nonumber\\ &\leqslant&Ce^{-\alpha(s-r)}, \label{intest} \end{eqnarray} where the last inequality is based on ($\mathbf{H}^2_{b_1, \sigma_1, f_1}$) and (\ref{experg}), $\mathscr{F}_r^{\check{Z}^\varepsilon} \triangleq\sigma(\check{Z}_v^\varepsilon: 0\leqslant v \leqslant r) \vee {\mathcal N}$ and ${\mathcal N}$ is the collection of all ${\mathbb P}$-measure zero sets. 
Inserting (\ref{intest}) in (\ref{maxest}), we furthermore obtain that \begin{eqnarray} {\mathbb E}\left|\int_0^{\delta_{\varepsilon}}\left(b_1(X^\varepsilon_{k\delta_{\varepsilon}},\hat{Z}^\varepsilon_{k\delta_{\varepsilon}+s})-\bar{b}_1(X^\varepsilon_{k\delta_{\varepsilon}})\right)\mathrm{d} s\right|^2&\leqslant&2{\varepsilon}^2\int_0^{{\delta_{\varepsilon}}/\varepsilon}\int_r^{{\delta_{\varepsilon}}/\varepsilon}Ce^{-\alpha(s-r)}\mathrm{d} s\mathrm{d} r\nonumber\\ &\leqslant& C{\varepsilon}^2\frac{\delta_{\varepsilon}}{\varepsilon}. \label{est} \end{eqnarray} By combining (\ref{est}) with (\ref{i1est}), it holds that \begin{eqnarray} I_1\leqslant \frac{C}{\delta_{\varepsilon}/\varepsilon}. \label{i1este} \end{eqnarray} Finally, applying (\ref{i1este}), (\ref{xdex}) and (\ref{zhatz}) to (\ref{xhatzerest0}), we have that \begin{eqnarray*} {\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}|X^\varepsilon_t-X^0_t|^2\)&\leqslant&\frac{C}{\delta_{\varepsilon}/\varepsilon}+C(\delta_{\varepsilon}+1)\delta_{\varepsilon}+C(\delta_{\varepsilon}+1)\delta^2_{\varepsilon}/\varepsilon+C\int_0^T{\mathbb E}\(\sup\limits_{0\leqslant r\leqslant s}|X^{\varepsilon}_r-X^0_r|^2\)\mathrm{d} s. \end{eqnarray*} The Gronwall inequality allows us to obtain that \begin{eqnarray*} {\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}|X^\varepsilon_t-X^0_t|^2\)&\leqslant&\(C\frac{\varepsilon}{\delta_{\varepsilon}}+C(\delta_{\varepsilon}+1)\delta_{\varepsilon}+C\frac{(\delta_{\varepsilon}+1)\delta^2_{\varepsilon}}{\varepsilon}\)e^{CT}. \end{eqnarray*} The proof is complete. \end{proof} \begin{remark}\label{del} Based on Theorem \ref{xzerx}, it holds that $X^\varepsilon_t$ converges to $X^0_t$ in the mean square sense if $\frac{\varepsilon}{\delta_{\varepsilon}}\rightarrow0$ and $\frac{\delta^2_{\varepsilon}}{\varepsilon}\rightarrow0$ as $\varepsilon\rightarrow0$.
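In fact, the order of $\delta_{\varepsilon}$ can be chosen by balancing the two competing terms in (\ref{xzerxe}):
$$
\frac{\varepsilon}{\delta_{\varepsilon}}=\frac{\delta^2_{\varepsilon}}{\varepsilon}
\quad\Longleftrightarrow\quad
\delta^3_{\varepsilon}=\varepsilon^2
\quad\Longleftrightarrow\quad
\delta_{\varepsilon}=\varepsilon^{2/3},
$$
under which both terms are of order $\varepsilon^{1/3}$.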
For example, taking $\delta_{\varepsilon}=\varepsilon^{2/3}$, we have $\frac{\varepsilon}{\delta_{\varepsilon}}=\varepsilon^{1/3}\rightarrow0$ and $\frac{\delta^2_{\varepsilon}}{\varepsilon}=\varepsilon^{1/3}\rightarrow0$ as $\varepsilon\rightarrow0$. \end{remark} \section{Convergence of nonlinear filterings with correlated sensor noises}\label{notset} In this section, we introduce the nonlinear filtering problems for $X_t^{\varepsilon}$ and $X_t^{0}$ and their relationship. \subsection{Nonlinear filtering problems with the system (\ref{Eq0})} In this subsection, we introduce the nonlinear filtering problems of $X_t^{\varepsilon}$ and $X_t^{0}$. For \begin{eqnarray*} Y_t^{\varepsilon}&=&\int_0^th(X_s^{\varepsilon})\mathrm{d} s+\sigma_3 V_t+\sigma_4 B_t, \end{eqnarray*} we make the following hypotheses: \begin{enumerate}[\bf{Assumption 3.}] \item \end{enumerate} \begin{enumerate}[($\mathbf{H}_{h}$)] \item $h$ is bounded. \end{enumerate} \begin{enumerate}[($\mathbf{H}_{\sigma_3,\sigma_4}$)] \item $\sigma_3\sigma^{\prime}_3+\sigma_4\sigma^{\prime}_4=I,$ where $\sigma^{\prime}_3$ stands for the transpose of the matrix $\sigma_3$ and $I$ is the $d\times d$ identity matrix. \end{enumerate} By ($\mathbf{H}_{\sigma_3,\sigma_4}$), we know that $U_t:=\sigma_3 V_t+\sigma_4 B_t$ is a $d$-dimensional Brownian motion. Denote \begin{eqnarray*} (\gamma^\varepsilon_t)^{-1}:=\exp\bigg\{-\int_0^t h^i(X^\varepsilon_s)\mathrm{d} U^i_s-\frac{1}{2}\int_0^t \left|h(X^\varepsilon_s)\right|^2\mathrm{d} s\bigg\}. \end{eqnarray*} Here and hereafter, we use the convention that repeated indices imply summation. And then by ($\mathbf{H}_{h}$) we know that $(\gamma^\varepsilon_t)^{-1}$ is an exponential martingale. Define a measure ${\mathbb P}^\varepsilon$ via $$ \frac{\mathrm{d} {\mathbb P}^\varepsilon}{\mathrm{d} {\mathbb P}}=(\gamma^\varepsilon_T)^{-1}.
$$ By the Girsanov theorem for Brownian motions, one can obtain that \begin{eqnarray}\label{tilw} Y^\varepsilon_t=U_t+\int_0^t h(X^\varepsilon_s)\mathrm{d} s \end{eqnarray} is an $\mathscr{F}_t$-Brownian motion under the probability measure ${\mathbb P}^\varepsilon$. Next, we rewrite $\gamma^\varepsilon_t$ as \begin{eqnarray*} \gamma^\varepsilon_t&=&\exp\bigg\{\int_0^th^i(X_s^{\varepsilon})\mathrm{d} Y^{\varepsilon,i}_s-\frac{1}{2}\int_0^t \left|h(X_s^{\varepsilon})\right|^2\mathrm{d} s\bigg\}. \end{eqnarray*} Define \begin{eqnarray*} &&\rho^{\varepsilon}_t(\psi):={\mathbb E}^{{\mathbb P}^\varepsilon}[\psi(X^{\varepsilon}_t)\gamma^\varepsilon_t|\mathscr{F}_t^{Y^{\varepsilon}}], \\ &&\pi^{\varepsilon}_t(\psi):={\mathbb E}[\psi(X^{\varepsilon}_t)|\mathscr{F}_t^{Y^{\varepsilon}}], \qquad \psi\in{\mathcal B}({\mathbb R}^n), \end{eqnarray*} where ${\mathbb E}^{{\mathbb P}^\varepsilon}$ denotes the expectation under the measure ${\mathbb P}^\varepsilon$, $\mathscr{F}_t^{Y^\varepsilon} \triangleq\sigma(Y_s^\varepsilon: 0\leqslant s \leqslant t) \vee {\mathcal N}$, ${\mathcal N}$ is the collection of all ${\mathbb P}$-measure zero sets and ${\mathcal B}({\mathbb R}^n)$ denotes the collection of all bounded and Borel measurable functions on ${\mathbb R}^n$. $\rho_t^\varepsilon$ and $\pi^{\varepsilon}_t$ are called the unnormalized filtering and the normalized filtering of $X_t^\varepsilon$ with respect to $\mathscr{F}_t^{Y^{\varepsilon}}$, respectively. And then by the Kallianpur-Striebel formula it holds that \begin{eqnarray*} \pi^{\varepsilon}_t(\psi)=\frac{\rho^{\varepsilon}_t(\psi)}{\rho^{\varepsilon}_t(1)}.
\end{eqnarray*} Set \begin{eqnarray*} \gamma^0_t&:=&\exp\bigg\{\int_0^th^i(X_s^0)\mathrm{d} Y^{\varepsilon,i}_s-\frac{1}{2}\int_0^t \left|h(X_s^0)\right|^2\mathrm{d} s\bigg\}, \end{eqnarray*} and furthermore \begin{eqnarray*} \rho^0_t(\psi)&:=&{\mathbb E}^{{\mathbb P}^\varepsilon}[\psi(X^0_t)\gamma^0_t|\mathscr{F}_t^{Y^{\varepsilon}}],\\ \pi^0_t(\psi)&:=&\frac{\rho^0_t(\psi)}{\rho^0_t(1)}. \end{eqnarray*} And then we will prove that $\pi^0$ can be understood as the nonlinear filtering of $X_t^0$ with respect to $\mathscr{F}_t^{Y^{\varepsilon}}$. \subsection{The relation between $\pi^{\varepsilon}_t$ and $\pi^0_t$} In this subsection we will show that $\pi^{\varepsilon}_t$ converges to $\pi^0_t$ as $\varepsilon\rightarrow0$ in a suitable sense. Let us start with a key lemma. \begin{lemma}\label{es} Under $({\bf H}_h)$, there exists a constant $C>0$ such that $$ {\mathbb E}\left|\rho^0_t(1)\right|^{-p}\leqslant\exp\left\{(2p^2+p+1)CT/2\right\}, \quad t\in[0,T], \quad p>1. $$ \end{lemma} \begin{proof} Although the proof is similar to that of Lemma 4.1 in \cite{qzd}, we include it for the readers' convenience. For ${\mathbb E}\left|\rho^0_t(1)\right|^{-p}$, we compute $$ {\mathbb E}\left|\rho^0_t(1)\right|^{-p}={\mathbb E}^\varepsilon\left|\rho^0_t(1)\right|^{-p}\gamma^\varepsilon_T \leqslant({\mathbb E}^\varepsilon\left|\rho^0_t(1)\right|^{-2p})^{1/2}({\mathbb E}^\varepsilon(\gamma^\varepsilon_T)^2)^{1/2}, $$ where the last inequality is based on the H\"older inequality. For ${\mathbb E}^\varepsilon\left|\rho^0_t(1)\right|^{-2p}$, note that $\rho^0_t(1)={\mathbb E}^\varepsilon[\gamma^0_t|\mathscr{F}_t^{Y^\varepsilon}]$.
And then it follows from the Jensen inequality that $$ {\mathbb E}^\varepsilon\left|\rho^0_t(1)\right|^{-2p}={\mathbb E}^\varepsilon\left|{\mathbb E}^\varepsilon[\gamma^0_t|\mathscr{F}_t^{Y^\varepsilon}]\right|^{-2p}\leqslant{\mathbb E}^\varepsilon\left[{\mathbb E}^\varepsilon[|\gamma^0_t|^{-2p}|\mathscr{F}_t^{Y^\varepsilon}]\right]={\mathbb E}^\varepsilon[|\gamma^0_t|^{-2p}]. $$ Thus, the definition of $\gamma^0_t$ allows us to obtain that \begin{eqnarray*} {\mathbb E}^\varepsilon[|\gamma^0_t|^{-2p}]&=&{\mathbb E}^\varepsilon\left[\exp\left\{-2p\int_0^t h(X_s^0)\mathrm{d} Y^{\varepsilon}_s+\frac{2p}{2}\int_0^t |h(X_s^0)|^2\mathrm{d} s\right\}\right]\\ &=&{\mathbb E}^\varepsilon\Bigg[\exp\left\{-2p\int_0^t h(X_s^0)\mathrm{d} Y^{\varepsilon}_s-\frac{4p^2}{2}\int_0^t |h(X_s^0)|^2\mathrm{d} s\right\}\\ &&\bullet\exp\left\{\left(\frac{4p^2}{2}+\frac{2p}{2}\right)\int_0^t |h(X_s^0)|^2\mathrm{d} s\right\}\Bigg]\\ &\leqslant&\exp\left\{(2p^2+p)CT\right\}{\mathbb E}^\varepsilon\Bigg[\exp\left\{-2p\int_0^t h(X_s^0)\mathrm{d} Y^{\varepsilon}_s-\frac{4p^2}{2}\int_0^t |h(X_s^0)|^2\mathrm{d} s\right\}\Bigg]\\ &=&\exp\left\{(2p^2+p)CT\right\}, \end{eqnarray*} where the last step is based on the fact that $\exp\left\{-2p\int_0^t h(X_s^0)\mathrm{d} Y^{\varepsilon}_s-\frac{4p^2}{2}\int_0^t |h(X_s^0)|^2\mathrm{d} s\right\}$ is an exponential martingale under ${\mathbb P}^\varepsilon$. Similarly, we know that ${\mathbb E}^\varepsilon(\gamma^\varepsilon_T)^2\leqslant\exp\left\{CT\right\}$. So, by simple calculation, it holds that ${\mathbb E}\left|\rho^0_t(1)\right|^{-p}\leqslant\exp\left\{(2p^2+p+1)CT/2\right\}$. The proof is complete. \end{proof} \begin{theorem}\label{filcon} Suppose that {\bf Assumptions 1.-3.} hold.
Then it holds that for $\phi\in {\mathcal C}_b^1({\mathbb R}^n)$ \begin{eqnarray} \lim\limits_{\varepsilon\rightarrow0}{\mathbb E}|\pi^{\varepsilon}_t(\phi)- \pi_t^0(\phi)|=0, \label{filcones} \end{eqnarray} where ${\mathcal C}_b^1({\mathbb R}^n)$ denotes the collection of all functions which, together with their first-order partial derivatives, are bounded and Borel measurable. \end{theorem} \begin{proof} For $\phi\in {\mathcal C}^1_b({\mathbb R}^n)$, it follows from the H\"older inequality and Lemma \ref{es} that \begin{eqnarray*} {\mathbb E}|\pi^{\varepsilon}_t(\phi)- \pi_t^0(\phi)|&=&{\mathbb E}\left|\frac{\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)}{\rho^0_t(1)}-\pi^{\varepsilon}_t(\phi)\frac{\rho^{\varepsilon}_t(1)-\rho^0_t(1)}{\rho^{0}_t(1)}\right|\\ &\leqslant&{\mathbb E}\left|\frac{\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)}{\rho^{0}_t(1)}\right|+{\mathbb E}\left|\pi^{\varepsilon}_t(\phi)\frac{\rho^{\varepsilon}_t(1)-\rho^0_t(1)}{\rho^{0}_t(1)}\right|\\ &\leqslant&\left({\mathbb E}\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1}\right)^{1/r_1}\left({\mathbb E}\left|\rho^{0}_t(1)\right|^{-r_2}\right)^{1/r_2}\\ &&+\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}\left({\mathbb E}\left|\rho^{\varepsilon}_t(1)-\rho^{0}_t(1)\right|^{r_1}\right)^{1/r_1}\left({\mathbb E}\left|\rho^{0}_t(1)\right|^{-r_2}\right)^{1/r_2}\\ &\leqslant&C\left({\mathbb E}\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1}\right)^{1/r_1}+C\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}\left({\mathbb E}\left|\rho^{\varepsilon}_t(1)-\rho^{0}_t(1)\right|^{r_1}\right)^{1/r_1}, \end{eqnarray*} where $1<r_1<2, r_2>1$ and $1/r_1+1/r_2=1$. Next, we estimate ${\mathbb E}\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1}$.
Note that \begin{eqnarray*} {\mathbb E}\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1}&=&{\mathbb E}^\varepsilon\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1} \gamma^{\varepsilon}_T\leqslant ({\mathbb E}^\varepsilon\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1p_1})^{1/p_1} ({\mathbb E}^\varepsilon(\gamma^\varepsilon_T)^{p_2})^{1/p_2}\\ &\leqslant&\exp\left\{CT\right\}({\mathbb E}^\varepsilon\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1p_1})^{1/p_1}, \end{eqnarray*} where $1<p_1<2, 1<r_1p_1<2, p_2>1$ and $1/p_1+1/p_2=1$. And then we only need to estimate ${\mathbb E}^\varepsilon\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1p_1}$. Based on the definitions of $\rho^{\varepsilon}_t(\phi), \rho^{0}_t(\phi)$ and the Jensen inequality, it holds that \begin{eqnarray} {\mathbb E}^\varepsilon\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1p_1}&=&{\mathbb E}^\varepsilon\left|{\mathbb E}^\varepsilon[\phi(X_t^\varepsilon)\gamma^\varepsilon_t|\mathscr{F}_t^{Y^\varepsilon}]-{\mathbb E}^\varepsilon[\phi(X^0_t)\gamma^0_t|\mathscr{F}_t^{Y^\varepsilon}]\right|^{r_1p_1}\nonumber\\ &=&{\mathbb E}^\varepsilon\left|{\mathbb E}^\varepsilon[\phi(X_t^\varepsilon)\gamma^\varepsilon_t-\phi(X^0_t)\gamma^0_t|\mathscr{F}_t^{Y^\varepsilon}]\right|^{r_1p_1}\nonumber\\ &\leqslant&{\mathbb E}^\varepsilon\left[{\mathbb E}^\varepsilon\left[\left|\phi(X_t^\varepsilon)\gamma^\varepsilon_t-\phi(X^0_t)\gamma^0_t\right|^{r_1p_1}\bigg|\mathscr{F}_t^{Y^\varepsilon}\right]\right]\nonumber\\ &=&{\mathbb E}^\varepsilon\left[\left|\phi(X_t^\varepsilon)\gamma^\varepsilon_t-\phi(X^0_t)\gamma^0_t\right|^{r_1p_1}\right]\nonumber\\ &\leqslant&2^{r_1p_1-1}{\mathbb E}^\varepsilon\left[\left|\phi(X_t^\varepsilon)\gamma^\varepsilon_t-\phi(X^0_t)\gamma^\varepsilon_t\right|^{r_1p_1}\right]\nonumber\\ &&+2^{r_1p_1-1}{\mathbb
E}^\varepsilon\left[\left|\phi(X_t^0)\gamma^\varepsilon_t-\phi(X^0_t)\gamma^0_t\right|^{r_1p_1}\right]\nonumber\\ &=:&I_1+I_2. \label{i1i2} \end{eqnarray} First, we deal with $I_1$. By the H\"older inequality, it holds that \begin{eqnarray} I_1&\leqslant&\displaystyle 2^{r_1p_1-1}({\mathbb E}^\varepsilon\left[\left|\phi(X_t^\varepsilon)-\phi(X^0_t)\right|^{r_1p_1q_1}\right])^{1/q_1} ({\mathbb E}^\varepsilon\left|\gamma^\varepsilon_t\right|^{r_1p_1q_2})^{1/q_2} \nonumber\\ &\leqslant&2^{r_1p_1-1}\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}^{r_1p_1}({\mathbb E}^\varepsilon\left|X_t^\varepsilon-X^0_t\right|^{r_1p_1q_1})^{1/q_1}\nonumber\\ &&\bullet \Bigg({\mathbb E}^\varepsilon\exp\left\{r_1p_1q_2\int_0^t h(X_s^\varepsilon)\mathrm{d} Y^{\varepsilon}_s-\frac{(r_1p_1q_2)^2}{2}\int_0^t|h(X_s^\varepsilon)|^2\mathrm{d} s\right\}\nonumber\\ &&\cdot\exp\left\{\frac{(r_1p_1q_2)^2}{2}\int_0^t|h(X_s^\varepsilon)|^2\mathrm{d} s-\frac{r_1p_1q_2}{2}\int_0^t|h(X_s^\varepsilon)|^2\mathrm{d} s\right\} \Bigg)^{1/q_2}\nonumber\\ &\leqslant&2^{r_1p_1-1}\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}^{r_1p_1}({\mathbb E}^\varepsilon\left|X_t^\varepsilon-X^0_t\right|^{r_1p_1q_1})^{1/q_1}e^{\frac{r_1p_1}{2}(r_1p_1q_2-1)CT}, \label{i11} \end{eqnarray} where $1<q_1<2, 1<r_1p_1q_1<2, q_2>1$ and $1/q_1+1/q_2=1$, and the last step is based on the fact that the process $\exp\left\{r_1p_1q_2\int_0^t h(X_s^\varepsilon)\mathrm{d} Y^{\varepsilon}_s-\frac{(r_1p_1q_2)^2}{2}\int_0^t|h(X_s^\varepsilon)|^2\mathrm{d} s\right\}$ is an exponential martingale under ${\mathbb P}^\varepsilon$. 
Note that \begin{eqnarray} {\mathbb E}^\varepsilon\left|X_t^\varepsilon-X^0_t\right|^{r_1p_1q_1}&=&{\mathbb E}\left|X_t^\varepsilon-X^0_t\right|^{r_1p_1q_1}(\gamma^\varepsilon_T)^{-1}\nonumber\\ &\leqslant& ({\mathbb E}\left|X_t^\varepsilon-X^0_t\right|^2)^{r_1p_1q_1/2}\left({\mathbb E}(\gamma^\varepsilon_T)^{-2/(2-r_1p_1q_1)}\right)^{(2-r_1p_1q_1)/2}\nonumber\\ &\leqslant&CR(\varepsilon)^{r_1p_1q_1/2}, \label{metrxzex} \end{eqnarray} where $R(\varepsilon):=\(C\frac{\varepsilon}{\delta_{\varepsilon}}+C(\delta_{\varepsilon}+1)\delta_{\varepsilon}+C\frac{(\delta_{\varepsilon}+1)\delta^2_{\varepsilon}}{\varepsilon}\)e^{CT}$ and the last step is based on Theorem \ref{xzerx}. Thus, by inserting (\ref{metrxzex}) in (\ref{i11}), we have that \begin{eqnarray*} I_1\leqslant C\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}^{r_1p_1}R(\varepsilon)^{r_1p_1/2}. \end{eqnarray*} We choose $\delta_\varepsilon$ as that in Remark \ref{del}, and obtain that $\lim\limits_{\varepsilon\rightarrow0}R(\varepsilon)=0$ and \begin{eqnarray} \lim\limits_{\varepsilon\rightarrow0}I_1=0. \label{i1es} \end{eqnarray} Next, for $I_2$, we know that \begin{eqnarray*} I_2&\leqslant&2^{r_1p_1-1}\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}^{r_1p_1}{\mathbb E}^\varepsilon\left[\left|\gamma^\varepsilon_t-\gamma^0_t\right|^{r_1p_1}\right]=2^{r_1p_1-1}\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}^{r_1p_1}{\mathbb E}\left[\left|\gamma^\varepsilon_t-\gamma^0_t\right|^{r_1p_1}\right](\gamma^\varepsilon_T)^{-1}\\ &\leqslant&2^{r_1p_1-1}\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}^{r_1p_1}\left({\mathbb E}\left|\gamma^\varepsilon_t-\gamma^0_t\right|^2\right)^{r_1p_1/2}\left({\mathbb E}(\gamma^\varepsilon_T)^{-2/(2-r_1p_1)}\right)^{(2-r_1p_1)/2}\\ &\leqslant&C\left({\mathbb E}\left|\gamma^\varepsilon_t-\gamma^0_t\right|^2\right)^{r_1p_1/2}. 
\end{eqnarray*} Note that $\gamma^\varepsilon_t, \gamma^0_t$ have the following expressions \begin{eqnarray*} &&\gamma^\varepsilon_t=\exp\bigg\{\int_0^th(X_s^{\varepsilon})^i\mathrm{d} U^{i}_s+\frac{1}{2}\int_0^t \left|h(X_s^{\varepsilon})\right|^2\mathrm{d} s\bigg\},\\ &&\gamma^0_t=\exp\bigg\{\int_0^th(X_s^0)^i\mathrm{d} U^{i}_s+\int_0^th(X_s^0)^ih(X_s^{\varepsilon})^i\mathrm{d} s-\frac{1}{2}\int_0^t \left|h(X_s^0)\right|^2\mathrm{d} s\bigg\}. \end{eqnarray*} So, by Theorem \ref{xzerx} and simple calculation, it holds that \begin{eqnarray*} \lim\limits_{\varepsilon\rightarrow0}|\gamma^\varepsilon_t-\gamma^0_t|=0. \end{eqnarray*} Moreover, ($\mathbf{H}_{h}$) allows us to get that \begin{eqnarray*} &&|\gamma^\varepsilon_t|^2\leqslant \exp\bigg\{\int_0^t 2h(X_s^{\varepsilon})^i\mathrm{d} U^{i}_s-\frac{1}{2}\int_0^t \left|2h(X_s^{\varepsilon})\right|^2\mathrm{d} s\bigg\}e^{CT},\\ &&|\gamma^0_t|^2\leqslant \exp\bigg\{\int_0^t2h(X_s^0)^i\mathrm{d} U^{i}_s-\frac{1}{2}\int_0^t \left|2h(X_s^0)\right|^2\mathrm{d} s\bigg\}e^{CT}, \end{eqnarray*} and then ${\mathbb E}|\gamma^\varepsilon_t|^2\leqslant e^{CT}, {\mathbb E}|\gamma^0_t|^2\leqslant e^{CT}$. Thus, the dominated convergence theorem yields that \begin{eqnarray} \lim\limits_{\varepsilon\rightarrow0}I_2=0. \label{i2es} \end{eqnarray} Finally, combining (\ref{i1es}) (\ref{i2es}) with (\ref{i1i2}), we obtain that \begin{eqnarray*} \lim\limits_{\varepsilon\rightarrow0}{\mathbb E}^\varepsilon\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1p_1}=0, \end{eqnarray*} and furthermore $$ \lim\limits_{\varepsilon\rightarrow0}{\mathbb E}\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1}=0. $$ We recall that $$ {\mathbb E}|\pi^{\varepsilon}_t(\phi)- \pi_t^0(\phi)|\leqslant C\left({\mathbb E}\left|\rho^{\varepsilon}_t(\phi)-\rho^{0}_t(\phi)\right|^{r_1}\right)^{1/r_1}+C\|\phi\|_{{\mathcal C}^1_b({\mathbb R}^n)}\left({\mathbb E}\left|\rho^{\varepsilon}_t(1)-\rho^{0}_t(1)\right|^{r_1}\right)^{1/r_1}.
$$ Thus, taking the limit on both sides as $\varepsilon\rightarrow0$, one can get (\ref{filcones}). The proof is complete. \end{proof} \begin{remark} Here we cannot give the convergence rate of $\pi_t^\varepsilon$ to $\pi_t^0$. This is because the convergence of the slow part to the homogenized system holds in the $L^2$ sense, but not in the $L^p$ sense for $p>2$. \end{remark} \section{Convergence of nonlinear filterings with correlated noises}\label{cornoi} In this section, we study the nonlinear filtering problem of the system (\ref{Eq01}). First of all, we state our assumptions. \begin{enumerate}[\bf{Assumption 4.}] \item \end{enumerate} \begin{enumerate}[\bf{(i)}] \item $\check{b}_1,\check{\sigma}_0, \check{\sigma}_1, \check{f}_1$ satisfy ($\mathbf{H}^1_{b_1, \sigma_1, f_1}$)-($\mathbf{H}^2_{b_1, \sigma_1, f_1}$), where $\check{b}_1, (\check{\sigma}_0, \check{\sigma}_1), \check{f}_1$ replace $b_1, \sigma_1, f_1$; \end{enumerate} \begin{enumerate}[\bf{(ii)}] \item $\check{b}_2, \check{\sigma}_2, \check{f}_2$ satisfy ($\mathbf{H}^1_{b_2}$), ($\mathbf{H}^1_{\sigma_2}$) and ($\mathbf{H}^1_{f_2}$), respectively; \end{enumerate} \begin{enumerate}[\bf{(iii)}] \item $\check{b}_2, \check{\sigma}_2, \check{f}_2$ satisfy ($\mathbf{H}^2_{\sigma_2}$), ($\mathbf{H}^1_{b_2, \sigma_2, f_2}$)-($\mathbf{H}^2_{b_2, \sigma_2, f_2}$), where $\check{b}_2, \check{\sigma}_2, \check{f}_2$ replace $b_2, \sigma_2, f_2$. \end{enumerate} Under {\bf Assumption 4.} {\bf (i)-(ii)}, by Theorem 1.2 in \cite{q2}, the system (\ref{Eq01}) has a unique strong solution denoted by $(\check{X}^\varepsilon_t,\check{Z}^\varepsilon_t)$. Then take any $x\in{\mathbb R}^n$ and fix it.
Consider the following SDE in ${\mathbb R}^m$: \begin{eqnarray*}\left\{\begin{array}{l} \mathrm{d} \check{Z}^x_t=\check{b}_2(x,\check{Z}^x_t)\mathrm{d} t+\check{\sigma}_2(x,\check{Z}^x_t)\mathrm{d} W_t+\int_{{\mathbb U}_2}\check{f}_2(x,\check{Z}^x_t,u)\tilde{N}_{p_2}(\mathrm{d} t, \mathrm{d} u),\\ \check{Z}^x_0=\check{z}_0, \qquad t\geqslant0. \end{array} \right. \end{eqnarray*} Under {\bf Assumption 4.} {\bf (ii)-(iii)}, the above equation has a unique invariant probability measure, denoted by $\bar{\check{p}}(x,\mathrm{d} z)$. So, set \begin{eqnarray*} \bar{\check{b}}_1(x):=\int_{{\mathbb R}^m}\check{b}_1(x,z)\bar{\check{p}}(x,\mathrm{d} z), \end{eqnarray*} and by \cite[Lemma 3.1]{q00} we know that $\bar{\check{b}}_1$ is Lipschitz continuous. So, we construct an SDE on the probability space $(\Omega, \mathscr{F}, \{\mathscr{F}_t\}_{t\in[0,T]}, {\mathbb P})$ as follows: \begin{eqnarray}\left\{\begin{array}{l} \mathrm{d} \check{X}^0_t=\bar{\check{b}}_1(\check{X}^0_t)\mathrm{d} t+\check{\sigma}_0(\check{X}^0_t)\mathrm{d} B_t+\check{\sigma}_1(\check{X}^0_t)\mathrm{d} V_t+\int_{{\mathbb U}_1}\check{f}_1(\check{X}^0_{t-}, u)\tilde{N}_{p_1}(\mathrm{d} t, \mathrm{d} u),\\ \check{X}^0_0=\check{x}_0, \qquad\qquad 0\leqslant t\leqslant T. \label{appequ2} \end{array} \right. \end{eqnarray} The solution of Eq.(\ref{appequ2}) is denoted by $\check{X}^0_t$. By the same argument as in Theorem \ref{xzerx}, we obtain the following theorem. \begin{theorem}\label{corcon} There exists a constant $C\geqslant 0$ independent of $\varepsilon, \delta_{\varepsilon}$ such that \begin{eqnarray*} {\mathbb E}\(\sup\limits_{0\leqslant t\leqslant T}|\check{X}^\varepsilon_t-\check{X}^0_t|^2\)\leqslant \(C\frac{\varepsilon}{\delta_{\varepsilon}}+C(\delta_{\varepsilon}+1)\delta_{\varepsilon}+C(\delta_{\varepsilon}+1)\frac{\delta^2_{\varepsilon}}{\varepsilon}\)e^{CT}.
\end{eqnarray*} \end{theorem} \subsection{Nonlinear filtering problems with the system (\ref{Eq01})} Next, for the observation process $\check{Y}^{\varepsilon}$ defined in (\ref{Eq01}), i.e. \begin{eqnarray*} \check{Y}_t^{\varepsilon}=V_t+\int_0^t\check{h}(\check{X}_s^{\varepsilon})\mathrm{d} s+\int_0^t\int_{{\mathbb U}_3}\check{f}_3(s,u)\tilde{N}_{\lambda}(\mathrm{d} s, \mathrm{d} u)+\int_0^t\int_{{\mathbb U}\setminus{\mathbb U}_3}\check{g}_3(s,u)N_{\lambda}(\mathrm{d} s, \mathrm{d} u), \end{eqnarray*} we assume: \begin{enumerate}[\bf{Assumption 5.}] \item \end{enumerate} \begin{enumerate}[\bf{(i)}] \item $\check{h}$ is bounded and $$ \int_0^T\int_{{\mathbb U}_3}|\check{f}_3(s,u)|^2\nu_3(\mathrm{d} u)\mathrm{d} s<\infty. $$ \end{enumerate} \begin{enumerate}[\bf{(ii)}] \item There exists a positive function $\check{L}(u)$ satisfying \begin{eqnarray*} \int_{{\mathbb U}_3}\frac{\left(1-\check{L}(u)\right)^2}{\check{L}(u)}\nu_3(\mathrm{d} u)<\infty \end{eqnarray*} such that $0<\check{l}\leqslant \check{L}(u)<\lambda(t,x,u)<1$ for $u\in{\mathbb U}_3$, where $\check{l}$ is a constant. \end{enumerate} Now, denote \begin{eqnarray*} (\lambda^\varepsilon_t)^{-1}:&=&\exp\bigg\{-\int_0^t \check{h}^i(\check{X}_s^\varepsilon)\mathrm{d} V^i_s-\frac{1}{2}\int_0^t \left|\check{h}(\check{X}_s^\varepsilon)\right|^2\mathrm{d} s-\int_0^t\int_{{\mathbb U}_3}\log\lambda(s,\check{X}^{\varepsilon}_{s-},u)N_{\lambda}(\mathrm{d} s, \mathrm{d} u)\\ &&\quad\qquad -\int_0^t\int_{{\mathbb U}_3}(1-\lambda(s,\check{X}^\varepsilon_s,u))\nu_3(\mathrm{d} u)\mathrm{d} s\bigg\}. \end{eqnarray*} Thus, by {\bf Assumption 5.} we know that $(\lambda^\varepsilon_t)^{-1}$ is an exponential martingale. Define a measure $\check{{\mathbb P}}^\varepsilon$ via $$ \frac{\mathrm{d} \check{{\mathbb P}}^\varepsilon}{\mathrm{d} {\mathbb P}}=(\lambda^\varepsilon_T)^{-1}. 
$$ Under the probability measure $\check{{\mathbb P}}^\varepsilon$, it follows from the Girsanov theorem that $\check{V}_t:=V_t+\int_0^t\check{h}(\check{X}_s^{\varepsilon})\mathrm{d} s$ is a Brownian motion and $N_{\lambda}(\mathrm{d} t,\mathrm{d} u)$ is a Poisson random measure with the predictable compensator $\mathrm{d} t\nu_3(\mathrm{d} u)$. Moreover, by the same argument as in \cite[Lemma 3.1]{qd}, we know that $\lambda^\varepsilon_t$ satisfies the following equation \begin{eqnarray} \lambda^\varepsilon_t=1+\int_0^t\lambda^\varepsilon_s\check{h}(\check{X}^\varepsilon_s)^i\mathrm{d} \check{V}^i_s+\int_0^t\int_{{\mathbb U}_3}\lambda^\varepsilon_{s-}(\lambda(s,\check{X}^\varepsilon_{s-},u)-1)\tilde{N}(\mathrm{d} s, \mathrm{d} u), \label{lme} \end{eqnarray} where $\tilde{N}(\mathrm{d} s, \mathrm{d} u):=N_{\lambda}(\mathrm{d} s,\mathrm{d} u)-\mathrm{d} s\,\nu_3(\mathrm{d} u)$. Set \begin{eqnarray*} &&\check{\rho}^\varepsilon_t(\psi):={\mathbb E}^{\check{{\mathbb P}}^\varepsilon}[\psi(\check{X}^\varepsilon_t)\lambda^\varepsilon_t|\mathscr{F}_t^{\check{Y}^{\varepsilon}}],\\ &&\check{\pi}^{\varepsilon}_t(\psi):={\mathbb E}[\psi(\check{X}^{\varepsilon}_t)|\mathscr{F}_t^{\check{Y}^{\varepsilon}}], \qquad \psi\in{\mathcal B}({\mathbb R}^n), \end{eqnarray*} where ${\mathbb E}^{\check{{\mathbb P}}^\varepsilon}$ stands for the expectation under the probability measure $\check{{\mathbb P}}^\varepsilon$. Then, by the Kallianpur-Striebel formula, it holds that \begin{eqnarray*} \check{\pi}^{\varepsilon}_t(\psi)=\frac{\check{\rho}^\varepsilon_t(\psi)}{\check{\rho}^\varepsilon_t(1)}. \end{eqnarray*} In addition, we have the following result.
\begin{theorem}(The Zakai equation)\label{zakequ} For $\psi\in {\mathcal C}^2_b({\mathbb R}^n)$, $\check{\rho}^{\varepsilon}_t(\psi)$ satisfies the following Zakai equation \begin{eqnarray} \check{\rho}^{\varepsilon}_t(\psi)&=&\check{\rho}^{\varepsilon}_0(\psi)+\int_0^t\check{\rho}^{\varepsilon}_s\(\big({\mathcal L}^{\check{X}^\varepsilon}\psi\big)(\cdot,\check{Z}^\varepsilon_s)\)\mathrm{d} s+\int_0^t\left(\check{\rho}^{\varepsilon}_s(\psi \check{h}^i)+\check{\rho}^{\varepsilon}_s((\partial_j\psi)\check{\sigma}^{ji}_1)\right)\mathrm{d} \check{V}^i_s\nonumber\\ &&+\int_0^t\int_{{\mathbb U}_3}\check{\rho}^{\varepsilon}_s\(\psi(\lambda(s,\cdot,u)-1)\)\tilde{N}(\mathrm{d} s, \mathrm{d} u), \label{zakai1} \end{eqnarray} where \begin{eqnarray*} ({\mathcal L}^{\check{X}^\varepsilon}\psi)(x,z)&:=&\frac{\partial \psi(x)}{\partial x_i}\check{b}^i_1(x,z)+\frac{1}{2}\frac{\partial^2\psi(x)}{\partial x_i\partial x_j} (\check{\sigma}_0\check{\sigma}_0^T)^{ij}(x)+\frac{1}{2}\frac{\partial^2\psi(x)}{\partial x_i\partial x_j} (\check{\sigma}_1\check{\sigma}_1^T)^{ij}(x)\\ &&+\int_{{\mathbb U}_1}\Big[\psi\big(x+\check{f}_1(x,u)\big)-\psi(x) -\frac{\partial \psi(x)}{\partial x_i}\check{f}^i_1(x,u)\Big]\nu_1(\mathrm{d} u). 
\end{eqnarray*} \end{theorem} \begin{proof} Applying the It\^o formula to $\psi(\check{X}^\varepsilon_t)$, one obtains that \begin{eqnarray*} \psi(\check{X}^\varepsilon_t)&=&\psi(\check{x}_0)+\int_0^t({\mathcal L}^{\check{X}^\varepsilon}\psi)(\check{X}^\varepsilon_s,\check{Z}^\varepsilon_s)\mathrm{d} s+\int_0^t(\nabla\psi)(\check{X}^\varepsilon_s)\check{\sigma}_0(\check{X}^\varepsilon_s)\mathrm{d} B_s\\ &&+\int_0^t(\nabla\psi)(\check{X}^\varepsilon_s)\check{\sigma}_1(\check{X}^\varepsilon_s)\mathrm{d} \left(\check{V}_s-\int_0^s\check{h}(\check{X}_r^{\varepsilon})\mathrm{d} r\right)\\ &&+\int_0^t\int_{{\mathbb U}_1}[\psi(\check{X}^\varepsilon_{s-}+\check{f}_1(\check{X}^\varepsilon_{s-}, u))-\psi(\check{X}^\varepsilon_{s-})]\tilde{N}_{p_1}(\mathrm{d} s, \mathrm{d} u). \end{eqnarray*} Note that $\lambda^\varepsilon_t$ satisfies (\ref{lme}). So, it follows from the It\^o formula that \begin{eqnarray*} \psi(\check{X}^\varepsilon_t)\lambda^\varepsilon_t&=&\psi(\check{x}_0)+\int_0^t\psi(\check{X}^\varepsilon_s)\lambda^\varepsilon_s\check{h}(\check{X}_s^{\varepsilon})^i\mathrm{d} \check{V}^i_s\\ &&+\int_0^t\int_{{\mathbb U}_3}\psi(\check{X}^\varepsilon_{s-})\lambda^\varepsilon_{s-}(\lambda(s,\check{X}^\varepsilon_{s-},u)-1)\tilde{N}(\mathrm{d} s, \mathrm{d} u)\\ &&+\int_0^t\lambda^\varepsilon_s({\mathcal L}^{\check{X}^\varepsilon}\psi)(\check{X}^\varepsilon_s,\check{Z}^\varepsilon_s)\mathrm{d} s+\int_0^t\lambda^\varepsilon_s(\nabla\psi)(\check{X}^\varepsilon_s)\check{\sigma}_0(\check{X}^\varepsilon_s)\mathrm{d} B_s\\ &&+\int_0^t\lambda^\varepsilon_s(\nabla\psi)(\check{X}^\varepsilon_s)\check{\sigma}_1(\check{X}^\varepsilon_s)\mathrm{d} \left(\check{V}_s-\int_0^s\check{h}(\check{X}_r^{\varepsilon})\mathrm{d} r\right)\\ &&+\int_0^t\int_{{\mathbb U}_1}\lambda^\varepsilon_{s-}[\psi(\check{X}^\varepsilon_{s-}+\check{f}_1(\check{X}^\varepsilon_{s-}, u))-\psi(\check{X}^\varepsilon_{s-})]\tilde{N}_{p_1}(\mathrm{d} s, \mathrm{d} u)\\
&&+\int_0^t(\partial_j\psi)(\check{X}^\varepsilon_s)\check{\sigma}^{ji}_1(\check{X}^\varepsilon_s)\lambda^\varepsilon_s\check{h}(\check{X}_s^{\varepsilon})^i\mathrm{d} s\\ &=&\psi(\check{x}_0)+\int_0^t\left(\psi(\check{X}^\varepsilon_s)\lambda^\varepsilon_s\check{h}(\check{X}_s^{\varepsilon})^i+\lambda^\varepsilon_s(\partial_j\psi)(\check{X}^\varepsilon_s)\check{\sigma}^{ji}_1(\check{X}^\varepsilon_s)\right)\mathrm{d} \check{V}^i_s\\ &&+\int_0^t\int_{{\mathbb U}_3}\psi(\check{X}^\varepsilon_{s-})\lambda^\varepsilon_{s-}(\lambda(s,\check{X}^\varepsilon_{s-},u)-1)\tilde{N}(\mathrm{d} s, \mathrm{d} u)\\ &&+\int_0^t\lambda^\varepsilon_s({\mathcal L}^{\check{X}^\varepsilon}\psi)(\check{X}^\varepsilon_s,\check{Z}^\varepsilon_s)\mathrm{d} s+\int_0^t\lambda^\varepsilon_s(\nabla\psi)(\check{X}^\varepsilon_s)\check{\sigma}_0(\check{X}^\varepsilon_s)\mathrm{d} B_s\\ &&+\int_0^t\int_{{\mathbb U}_1}\lambda^\varepsilon_{s-}[\psi(\check{X}^\varepsilon_{s-}+\check{f}_1(\check{X}^\varepsilon_{s-}, u))-\psi(\check{X}^\varepsilon_{s-})]\tilde{N}_{p_1}(\mathrm{d} s, \mathrm{d} u). 
\end{eqnarray*} Taking the conditional expectation with respect to $\mathscr{F}_t^{\check{Y}^{\varepsilon}}$ under $\check{{\mathbb P}}^\varepsilon$ on both sides of the above equality, one obtains that \begin{eqnarray*} {\mathbb E}^{\check{{\mathbb P}}^\varepsilon}[\psi(\check{X}^\varepsilon_t)\lambda^\varepsilon_t|\mathscr{F}_t^{\check{Y}^{\varepsilon}}]&=&{\mathbb E}^{\check{{\mathbb P}}^\varepsilon}[\psi(\check{x}_0)|\mathscr{F}_t^{\check{Y}^{\varepsilon}}]\\ &&+\int_0^t{\mathbb E}^{\check{{\mathbb P}}^\varepsilon}[\left(\psi(\check{X}^\varepsilon_s)\lambda^\varepsilon_s\check{h}(\check{X}_s^{\varepsilon})^i+\lambda^\varepsilon_s(\partial_j\psi)(\check{X}^\varepsilon_s)\check{\sigma}^{ji}_1(\check{X}^\varepsilon_s)\right)|\mathscr{F}_t^{\check{Y}^{\varepsilon}}]\mathrm{d} \check{V}^i_s\\ &&+\int_0^t\int_{{\mathbb U}_3}{\mathbb E}^{\check{{\mathbb P}}^\varepsilon}[\psi(\check{X}^\varepsilon_{s-})\lambda^\varepsilon_{s-}(\lambda(s,\check{X}^\varepsilon_{s-},u)-1)|\mathscr{F}_t^{\check{Y}^{\varepsilon}}]\tilde{N}(\mathrm{d} s, \mathrm{d} u)\\ &&+\int_0^t{\mathbb E}^{\check{{\mathbb P}}^\varepsilon}[\lambda^\varepsilon_s({\mathcal L}^{\check{X}^\varepsilon}\psi)(\check{X}^\varepsilon_s,\check{Z}^\varepsilon_s)|\mathscr{F}_t^{\check{Y}^{\varepsilon}}]\mathrm{d} s, \end{eqnarray*} where \cite[Theorem 1.4.7]{blr} is used. That is, it holds that \begin{eqnarray*} \check{\rho}^{\varepsilon}_t(\psi)&=&\check{\rho}^{\varepsilon}_0(\psi)+\int_0^t\check{\rho}^{\varepsilon}_s\(\big({\mathcal L}^{\check{X}^\varepsilon}\psi\big)(\cdot,\check{Z}^\varepsilon_s)\)\mathrm{d} s+\int_0^t\left(\check{\rho}^{\varepsilon}_s(\psi \check{h}^i)+\check{\rho}^{\varepsilon}_s((\partial_j\psi)\check{\sigma}^{ji}_1)\right)\mathrm{d} \check{V}^i_s\nonumber\\ &&+\int_0^t\int_{{\mathbb U}_3}\check{\rho}^{\varepsilon}_s\(\psi(\lambda(s,\cdot,u)-1)\)\tilde{N}(\mathrm{d} s, \mathrm{d} u). \end{eqnarray*} The proof is complete.
\end{proof} In the following, we define the nonlinear filtering of $\check{X}_t^0$ with respect to $\mathscr{F}_t^{\check{Y}^{\varepsilon}}$. Set \begin{eqnarray*} \lambda^0_t&:=&\exp\bigg\{\int_0^t\check{h}^i(\check{X}_s^0)\mathrm{d} \check{V}^{i}_s-\frac{1}{2}\int_0^t \left|\check{h}(\check{X}_s^0)\right|^2\mathrm{d} s+\int_0^t\int_{{\mathbb U}_3}\log\lambda(s,\check{X}^0_{s-},u)N_{\lambda}(\mathrm{d} s, \mathrm{d} u)\\ &&\quad\qquad +\int_0^t\int_{{\mathbb U}_3}(1-\lambda(s,\check{X}^0_s,u))\nu_3(\mathrm{d} u)\mathrm{d} s\bigg\}, \end{eqnarray*} and furthermore \begin{eqnarray*} \check{\rho}^0_t(\psi)&:=&{\mathbb E}^{\check{{\mathbb P}}^\varepsilon}[\psi(\check{X}^0_t)\lambda^0_t|\mathscr{F}_t^{\check{Y}^{\varepsilon}}],\\ \check{\pi}^0_t(\psi)&:=&\frac{\check{\rho}^0_t(\psi)}{\check{\rho}^0_t(1)}. \end{eqnarray*} Then, by a similar argument to that in Theorem \ref{zakequ}, it holds that $\check{\rho}^0_t$ satisfies the following equation \begin{eqnarray} \check{\rho}^0_t(\psi)&=&\check{\rho}^0_0(\psi)+\int_0^t\check{\rho}^0_s\({\mathcal L}^{\check{X}^0}\psi\)\mathrm{d} s+\int_0^t\check{\rho}^0_s\Big(((\partial_j\psi)\check{\sigma}^{ji}_1)(\check{h}^i(\cdot)-\check{h}^i(\check{X}_s^\varepsilon))\Big)\mathrm{d} s\nonumber\\ &&+\int_0^t\left(\check{\rho}^0_s(\psi \check{h}^i)+\check{\rho}^0_s((\partial_j\psi)\check{\sigma}^{ji}_1)\right)\mathrm{d} \check{V}^i_s\nonumber\\ &&+\int_0^t\int_{{\mathbb U}_3}\check{\rho}^0_s\(\psi(\lambda(s,\cdot,u)-1)\)\tilde{N}(\mathrm{d} s, \mathrm{d} u), \label{zakai2} \end{eqnarray} where \begin{eqnarray*} ({\mathcal L}^{\check{X}^0}\psi)(x)&:=&\frac{\partial \psi(x)}{\partial x_i}\bar{\check{b}}^i_1(x)+\frac{1}{2}\frac{\partial^2\psi(x)}{\partial x_i\partial x_j} (\check{\sigma}_0\check{\sigma}_0^T)^{ij}(x)+\frac{1}{2}\frac{\partial^2\psi(x)}{\partial x_i\partial x_j} (\check{\sigma}_1\check{\sigma}_1^T)^{ij}(x)\\ &&+\int_{{\mathbb U}_1}\Big[\psi\big(x+\check{f}_1(x,u)\big)-\psi(x) -\frac{\partial \psi(x)}{\partial 
x_i}\check{f}^i_1(x,u)\Big]\nu_1(\mathrm{d} u). \end{eqnarray*} \subsection{The relationship of $\check{\pi}^\varepsilon$ and $\check{\pi}^0$} \subsubsection{The case of $\check{f}_3=\check{g}_3=0$} In the case of $\check{f}_3=\check{g}_3=0$, by a similar argument to that in Theorem \ref{filcon}, one can obtain the following result. \begin{theorem} Suppose that {\bf Assumption 4.-5.} hold. Then it holds that for $\phi\in {\mathcal C}_b^1({\mathbb R}^n)$, \begin{eqnarray*} \lim\limits_{\varepsilon\rightarrow0}{\mathbb E}|\check{\pi}^{\varepsilon}_t(\phi)- \check{\pi}_t^0(\phi)|=0. \end{eqnarray*} \end{theorem} \subsubsection{The case of $\check{f}_3\neq0$ and $\check{g}_3\neq0$} In the case of $\check{f}_3\neq0$ and $\check{g}_3\neq0$, we first prepare two important lemmas. Since their proofs are similar to those of \cite[Lemma 5.1, 5.2]{q00}, we omit them. \begin{lemma}\label{rho1} Under {\bf Assumption 4.-5.}, it holds that for any $t\in[0,T]$, \begin{eqnarray} (\check{\rho}^0_t(1))^{-1}<\infty, \qquad {\mathbb P}\ a.s. \label{roes} \end{eqnarray} \end{lemma} \begin{lemma}\label{tigh} Under {\bf Assumption 4.-5.}, $\{\check{\rho}^{\varepsilon}_t, t\in[0,T]\}$ is relatively weakly compact in $D([0,T], {\mathcal M}({\mathbb R}^n))$, where ${\mathcal M}({\mathbb R}^n)$ denotes the set of bounded Borel measures on ${\mathbb R}^n$. \end{lemma} In the following, we additionally assume: \begin{enumerate}[\bf{Assumption 6.}] \item \end{enumerate} \begin{enumerate}[] \item $\{\check{Z}_{\varepsilon t}^{\varepsilon}, t\in[0,T]\}$ is tight. \end{enumerate} Now, we state and prove the main theorem of this section. \begin{theorem}\label{filcon2} Suppose that {\bf Assumption 4.-6.} hold. Then $\check{\pi}^{\varepsilon}_t$ converges weakly to $\check{\pi}^0_t$ as $\varepsilon\rightarrow0$ for any $t\in[0,T]$.
\end{theorem} \begin{proof} By the definitions of $\check{\pi}^{\varepsilon}_t$ and $\check{\pi}^0_t$, it holds that for $\phi\in{\mathcal C}^2_b({\mathbb R}^n)$, \begin{eqnarray*} \check{\pi}^{\varepsilon}_t(\phi)-\check{\pi}^0_t(\phi)=\frac{\check{\rho}^{\varepsilon}_t(\phi)-\check{\rho}^0_t(\phi)}{\check{\rho}^0_t(1)}-\check{\pi}^{\varepsilon}_t(\phi)\frac{\check{\rho}^{\varepsilon}_t(1)-\check{\rho}^0_t(1)}{\check{\rho}^0_t(1)}. \end{eqnarray*} So, in order to prove that $\check{\pi}^{\varepsilon}_t(\phi)-\check{\pi}^0_t(\phi)$ converges weakly to $0$, by Lemma \ref{rho1} we only need to show that $\check{\rho}^{\varepsilon}_t(\phi)-\check{\rho}^0_t(\phi)$ converges weakly to $0$ as $\varepsilon\rightarrow0$. Besides, it follows from Lemma \ref{tigh} that there exist a weakly convergent subsequence $\{\check{\rho}^{\varepsilon_k}_t, k\in{\mathbb N}\}$ and a measure-valued process $\bar{\check{\rho}}_t$ such that $\check{\rho}^{\varepsilon_k}_t$ converges weakly to $\bar{\check{\rho}}_t$ as $k\rightarrow\infty$. Therefore, we just need to prove that for $t\in[0,T]$, $\bar{\check{\rho}}_t(\phi)-\check{\rho}^0_t(\phi)$ converges weakly to $0$ as $\varepsilon\rightarrow0$. Next, we identify the equation which $\bar{\check{\rho}}_t(\phi)$ solves. By Theorem \ref{corcon} and (\ref{zakai1}), we follow the line of \cite[Theorem 5.3]{q00} and obtain that $\bar{\check{\rho}}_t(\phi)$ satisfies the following equation \begin{eqnarray} \bar{\check{\rho}}_t(\phi)&=&\bar{\check{\rho}}_0(\phi)+\int_0^t\bar{\check{\rho}}_s\({\mathcal L}^{\check{X}^0}\phi\)\mathrm{d} s+\int_0^t\left(\bar{\check{\rho}}_s(\phi\check{h}^i)+\bar{\check{\rho}}_s((\partial_j\phi)\check{\sigma}^{ji}_1)\right)\mathrm{d} \check{V}^i_s\nonumber\\ &&+\int_0^t\int_{{\mathbb U}_3}\bar{\check{\rho}}_s\(\phi(\lambda(s,\cdot,u)-1)\)\tilde{N}(\mathrm{d} s, \mathrm{d} u).
\label{zakai3} \end{eqnarray} Besides, by (\ref{zakai2}) and Theorem \ref{corcon}, we know that there exists a measure-valued process $\bar{\check{\rho}}^0_t$ such that $\check{\rho}^0_t(\phi)$ converges to $\bar{\check{\rho}}^0_t(\phi)$ ${\mathbb P}$ a.s. and $\bar{\check{\rho}}^0_t$ satisfies the following equation \begin{eqnarray} \bar{\check{\rho}}^0_t(\phi)&=&\bar{\check{\rho}}^0_0(\phi)+\int_0^t\bar{\check{\rho}}^0_s\({\mathcal L}^{\check{X}^0}\phi\)\mathrm{d} s+\int_0^t\left(\bar{\check{\rho}}^0_s(\phi \check{h}^i)+\bar{\check{\rho}}^0_s((\partial_j\phi)\check{\sigma}^{ji}_1)\right)\mathrm{d} \check{V}^i_s\nonumber\\ &&+\int_0^t\int_{{\mathbb U}_3}\bar{\check{\rho}}^0_s\(\phi(\lambda(s,\cdot,u)-1)\)\tilde{N}(\mathrm{d} s, \mathrm{d} u). \label{zakai4} \end{eqnarray} Note that Eq.(\ref{zakai3}) and Eq.(\ref{zakai4}) are the same. Thus, it follows from \cite[Theorem 3.9]{q3} that for any $t\in[0,T]$, \begin{eqnarray*} \bar{\check{\rho}}_t=\bar{\check{\rho}}^0_t, \quad {\mathbb P}\ a.s. \end{eqnarray*} That is, $\bar{\check{\rho}}_t(\phi)-\check{\rho}^0_t(\phi)$ converges weakly to $0$ as $\varepsilon\rightarrow0$. The proof is complete. \end{proof} \section{Conclusion}\label{con} In this paper, we consider nonlinear filtering problems of multiscale systems in two cases: correlated sensor L\'evy noises and correlated L\'evy noises. First of all, we prove that the slow part of the original system converges to the homogenized system in the uniform mean square sense. Next, in the case of correlated sensor L\'evy noises, the nonlinear filtering of the slow part is shown to approximate that of the homogenized system in the $L^1$ sense. However, in the case of correlated L\'evy noises, we prove that the nonlinear filtering of the slow part converges weakly to that of the homogenized system. \textbf{Acknowledgements:} The author would like to thank Professor Xicheng Zhang for his valuable discussions.
The author also thanks Professor Renming Song for providing her with an excellent working environment at the University of Illinois at Urbana-Champaign. \end{document}
\begin{document} \title{The quantum mechanics of time travel through post-selected teleportation} \author{Seth Lloyd$^{1}$, Lorenzo Maccone$^{1}$, Raul Garcia-Patron$^{1}$, Vittorio Giovannetti$^{2}$, Yutaka Shikano$^{1,3}$} \affiliation{$^{1}$xQIT,Massachusetts Institute of Technology, 77 Mass Ave, Cambridge MA.\\ $^2$NEST-CNR-INFM \& Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126, Pisa, Italy. \\ $^3$Dep. Physics, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro, Tokyo, 152-8551, Japan.} \begin{abstract} This paper discusses the quantum mechanics of closed timelike curves (CTCs) and of other potential methods for time travel. We analyze a specific proposal for such quantum time travel, the quantum description of CTCs based on post-selected teleportation (P-CTCs). We compare the theory of P-CTCs to previously proposed quantum theories of time travel: the theory is physically inequivalent to Deutsch's theory of CTCs, but it is consistent with path-integral approaches (which are the best suited for analyzing quantum field theory in curved spacetime). We derive the dynamical equations that a chronology-respecting system interacting with a CTC will experience. We discuss the possibility of time travel in the absence of general relativistic closed timelike curves, and investigate the implications of P-CTCs for enhancing the power of computation. \end{abstract} \pacs{03.67.-a,03.65.Ud,04.00.00,04.62.+v,04.60.-m} \maketitle Einstein's theory of general relativity allows the existence of closed timelike curves, paths through spacetime that, if followed, allow a time traveler -- whether human being or elementary particle -- to interact with her former self. The possibility of such closed timelike curves (CTCs) was pointed out by Kurt G\"odel~\cite{GODEL}, and a variety of spacetimes containing closed timelike curves have been proposed ~\cite{Bonnor, Gott}. 
Reconciling closed timelike curves with quantum mechanics is a difficult problem that has been addressed repeatedly, for example, using path integral techniques~\cite{altri, politzer, boulware, hartle, STOCKUM, politzer2}. This paper explores a particular version of closed timelike curves based on combining quantum teleportation with post-selection. The resulting post-selected closed timelike curves (P-CTCs) provide a self-consistent picture of the quantum mechanics of time travel. P-CTCs offer a theory of closed timelike curves that is physically inequivalent to other Hilbert-space based theories, e.g., that of Deutsch~\cite{deutsch}. As in all versions of time travel, closed timelike curves embody apparent paradoxes, such as the grandfather paradox, in which the time traveler inadvertently or on purpose performs an action that causes her future self not to exist. Einstein (a good friend of G\"odel) was himself seriously disturbed by the discovery of CTCs~\cite{schilpp}. Because P-CTCs rely on post-selection, they provide self-consistent resolutions to such paradoxes: anything that happens in a P-CTC can also happen in conventional quantum mechanics with some probability. Similarly, the post-selected nature of P-CTCs allows the predictions and retrodictions of the theory to be tested experimentally, even in the absence of an actual general-relativistic closed timelike curve. Time travel is a subject that has fascinated human beings for thousands of years. In the Hindu epic the Mahabharata, for example, King Revaita accepts an invitation to visit Brahma's palace. Although he stays for only a few days, when he returns to earth he finds that many eons have passed. The Japanese fisherman in the folk tale Urashima Taro, having saved a sea turtle, is invited to the palace of the sea-king; upon returning home he discovers on the beach a crumbling monument, centuries old, memorializing him. The Gaelic hero Finn McCool suffers a similar fate.
These stories also dwell on the dangers of time travel. Urashima Taro is given a magic box and told not to open it. Finn receives the gift of a magic horse and is told not to dismount. When, inevitably, Taro opens the box, and Finn's toe touches the ground, they instantaneously age and crumble into dust. These tales involve time travel to the future. Perhaps because of the various paradoxes to which it gives rise, the concept of travel to the past is a more recent invention. Starting in the late eighteenth century, a few narratives take a stab at time travel to the past, the best known being Charles Dickens's {\it A Christmas Carol,} and Mark Twain's {\it A Connecticut Yankee in King Arthur's Court.} The contemporary notion of time travel, together with all its attendant paradoxes, did not come into being until H.G. Wells' masterpiece, {\it The Time Machine}, which is also the first book to propose an actual device that can be used to travel back and forth in time. As frequently happens, scientific theories of time travel lagged behind the fictional versions. Although Einstein's theory of general relativity implicitly allows travel to the past, it took several decades before G\"odel proposed an explicit space-time geometry containing closed timelike curves (CTCs). The G\"odel universe consists of a cloud of swirling dust, of sufficient gravitational power to support closed timelike curves. Later, it was realized that closed timelike curves are a generic feature of highly curved, rotating spacetimes: the Kerr solution for a rotating black hole contains closed timelike curves within the black hole horizon; and massive, rapidly rotating cylinders are typically associated with closed timelike curves \cite{Lanczos, STOCKUM, Bonnor}.
The topic of closed timelike curves in general relativity continues to inspire debate: Hawking's chronology protection postulate, for example, suggests that the conditions needed to create closed timelike curves cannot arise in any physically realizable spacetime~\cite{Hawking}. In particular, while Gott showed that cosmic string geometries can contain closed timelike curves~\cite{Gott}, Deser {\it et al.} showed that physical cosmic strings cannot create CTCs from scratch \cite{Deser, Carroll}. At bottom, the behavior of matter is governed by the laws of quantum mechanics. Considerable effort has gone into constructing quantum mechanical theories for closed timelike curves. The initial efforts to construct such theories involved path integral formulations of quantum mechanics. Hartle and Politzer pointed out that in the presence of closed timelike curves, the ordinary correspondence between the path-integral formulation of quantum mechanics and the formulation in terms of unitary evolution of states in Hilbert space breaks down~\cite{hartle, politzer}. Morris {\it et al.} explored the quantum prescriptions needed to construct closed timelike curves in the presence of wormholes, bits of spacetime geometry that, like the handle of a coffee cup, `break off' from the main body of the universe and rejoin it in the past \cite{altri}. Meanwhile, Deutsch formulated a theory of closed timelike curves in the context of Hilbert space, by postulating self-consistency conditions for the states that enter and exit the closed timelike curve~\cite{deutsch}. General relativistic closed timelike curves provide one potential mechanism for time travel, but they need not provide the only one. Quantum mechanics supports a variety of counter-intuitive phenomena which might allow time travel even in the absence of a closed timelike curve in the geometry of spacetime.
One of the best-known versions of non-general relativistic quantum versions of time travel comes from Wheeler, as described by Feynman in his Nobel Prize lecture~\cite{FEYN}: \par\noindent `I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, \par\noindent ``Feynman, I know why all electrons have the same charge and the same mass.'' \par\noindent ``Why?'' \par\noindent ``Because, they are all the same electron!'' And, then he explained on the telephone, \par\noindent ``Suppose that the world lines which we were ordinarily considering before in time and space - instead of only going up in time were a tremendous knot, and then, when we cut through the knot, by the plane corresponding to a fixed time, we would see many, many world lines and that would represent many electrons, except for one thing. If in one section this is an ordinary electron world line, in the section in which it reversed itself and is coming back from the future we have the wrong sign to the proper time - to the proper four velocities - and that's equivalent to changing the sign of the charge, and, therefore, that part of a path would act like a positron.'' ' \noindent As we will see, post-selected closed timelike curves make up a precise physical theory which instantiates Wheeler's whimsical idea. The purpose of the current paper is to provide a unifying description of closed timelike curves in quantum mechanics. We start from the prescription that time travel effectively represents a communication channel from the future to the past. Quantum time travel, then, should be described by a quantum communication channel to the past. A well-known quantum communication channel is given by quantum teleportation, in which shared entanglement combined with quantum measurement and classical communication allows quantum states to be transported between sender and receiver. 
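The teleportation channel just mentioned can be sketched numerically. The following Python fragment (an illustrative sketch of standard teleportation with post-selection; the variable names and the particular input state are our choices, not taken from the text) checks that projecting the sender's qubit together with half of a shared Bell pair onto the same Bell state $|\Phi^+\rangle$ delivers the input state to the receiver's qubit with no correction operation:

```python
import numpy as np

# Shared Bell state |Phi+> = (|00> + |11>)/sqrt(2) on qubits 2 and 3.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Arbitrary normalized input state on qubit 1.
psi = np.array([0.6, 0.8j])

# Joint three-qubit state |psi>_1 (x) |Phi+>_23.
state = np.kron(psi, phi_plus)

# Post-select qubits 1 and 2 onto <Phi+|: the operator <Phi+|_12 (x) I_3.
projector = np.kron(phi_plus.conj().reshape(1, 4), np.eye(2))

out = projector @ state               # unnormalized receiver state on qubit 3
p_success = np.linalg.norm(out) ** 2  # post-selection probability
out = out / np.linalg.norm(out)

# The receiver holds |psi> exactly, without any correction operation.
fidelity = abs(np.vdot(psi, out))
print(round(p_success, 6), round(fidelity, 6))  # 0.25 1.0
```

The post-selection succeeds with probability $1/4$: in ordinary quantum mechanics this is the price of dispensing with the Bell-measurement outcome and the classical correction step.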
We show that if quantum teleportation is combined with post-selection, then the result is a quantum channel to the past. The entanglement occurs between the forward- and backward-going parts of the curve, and post-selection replaces the quantum measurement and obviates the need for classical communication, allowing time travel to take place. The resulting theory allows a description both of the quantum mechanics of general relativistic closed timelike curves, and of Wheeler-like quantum time travel in ordinary spacetime. As described in previous work~\cite{loops}, the notion that entanglement and projection can give rise to closed timelike curves has arisen independently in a variety of contexts. This combination lies at the heart of the Horowitz-Maldacena model for information escape from black holes~\cite{HM, Yurtsever, Gottesman, Lloyd1}, and Gottesman and Preskill note in passing that this mechanism might be used for time travel~\cite{Gottesman}. Pegg explored the use of a related mechanism for `probabilistic time machines'~\cite{pegg}. Bennett and Schumacher have explored similar notions in unpublished work~\cite{benschu}. Ralph suggests using teleportation for time travel, although in a different setting, namely, displacing the entangled resource in time~\cite{ralph3}. Svetlichny describes experimental techniques for investigating quantum time travel based on entanglement and projection~\cite{SVE}. Chiribella {\em et al.} consider this mechanism while analyzing extensions to the quantum computational model~\cite{mauro1}. Brukner {\em et al.} have analyzed probabilistic teleportation (where only the cases in which the Bell measurement yields the desired result are retained) as a computational resource in~\cite{zeil}. The outline of the paper is as follows. In Sec.~\ref{s:pctcs} we describe P-CTCs and Deutsch's mechanism in detail, emphasizing the differences between the two approaches.
Then, in Sec.~\ref{s:pathint} we relate P-CTCs to the path-integral formulation of quantum mechanics. This formulation is particularly suited for the description of quantum field theory in curved spacetime~\cite{birrel}, and has been used before to provide quantum descriptions of closed timelike curves~\cite{novikov, friedman, friedman1, hartle, politzer, boulware, politzer2, diaz}. Our proposal is consistent with these path-integral approaches. In particular, the path-integral description of fermions using Grassmann fields given by Politzer~\cite{politzer} yields a dynamical description which coincides with ours for systems of quantum bits. Other descriptions, such as Hartle's~\cite{hartle}, are more difficult to compare as they do not provide an explicit prescription to calculate the details of the dynamics of the interaction with systems inside closed timelike curves. In any case, their general framework is consistent with our derivations. By contrast, Deutsch's CTCs are not compatible with the Politzer path-integral approach, and are analyzed by him on a different footing~\cite{politzer}. Indeed, suppose that the path integral is performed over classical paths which agree both at the entrance to-- and at the exit from-- the CTC, so that $x$-in, $p$-in are the same as $x$-out, $p$-out. Similarly, in the Grassmann case, suppose that spin-up along the $z$ axis at the entrance emerges as spin-up along the $z$ axis at the exit. Then, the quantum version of the CTC must exhibit the same perfect correlation between input and output. But, as the grandfather paradox experiment~\cite{loops} shows, Deutsch's CTCs need not exhibit such correlations: spin-up in is mapped to spin-down out (although the overall quantum state remains the same). By contrast, P-CTCs exhibit perfect correlation between in- and out- versions of all variables. 
Note that a quantum-field-theoretical justification of Deutsch's solution is proposed in~\cite{ralph1, ralph2}; it is based on introducing additional Hilbert subspaces for particles and fields along the geodesic: observables at different points along the geodesic commute because they act on different Hilbert spaces. The path-integral formulation also shows that using P-CTCs it is impossible to assign a well-defined state to the system in the CTC. This is a natural requirement (or, at least, a desirable property), given the cyclicity of time there. In contrast, Deutsch's consistency condition~\eqref{conscon} is explicitly built to provide a prescription for a definite quantum state $\rho_{CTC}$ of the system in the CTC. In Sec.~\ref{s:general} we go beyond the path-integral formulation and provide the dynamical evolution formulas in the context of generic quantum mechanics (the Hilbert-space formulation). Namely, we treat the CTC as a generic quantum transformation, where the transformed system emerges at a previous time ``after'' possibly interacting with some chronology-respecting systems. In this framework we obtain an explicit prescription for how to calculate the nonlinear evolution of the state of the system in the chronology-respecting part of the spacetime. This nonlinearity is exactly of the form that previous investigations (e.g.~Hartle's~\cite{hartle}) have predicted. \togli{Since the prescription of post-selected teleportation appears as an ad hoc assumption, in Sec.~\ref{s:cnes} we give a detailed physical motivation behind it. Namely, we show that P-CTCs arise if one requires the following reasonable physical condition. Suppose that one can perform an arbitrary measurement of the system in the CTC and that we can perform a unitary $U$ that connects it to the chronology-respecting systems. 
We prove that if the order in which one performs the measurement and the interaction $U$ does not change anything in the overall physical picture, this implies that a P-CTC is present. Namely, one is forced to conclude that the CTC system together with its purification space is in a maximally entangled Bell state at the beginning, and is in the same Bell state at the end. The converse is also trivially true. Now compare P-CTCs to Deutsch's prescription. On the face of it, Deutsch's physical requirements are perfectly reasonable: he requests that the quantum state of the time-traveling system is the same before the system enters the CTC and after it emerges from it. This is a statistical statement about the probabilities of measurements performed at these two times: the outcome statistics of any measurement are the same. In contrast, our physical requirement implies that not only the statistics must be the same, but also the single-shot results, together with any correlations that the system in the CTC develops with the chronology-respecting ones. This difference is clearly emphasized in the different predictions that these two models provide when applied to the grandfather paradox circuit.} In Sec.~\ref{s:generalrel} we consider time travel situations that are independent of general-relativistic CTCs. We then conclude in Sec.~\ref{s:comp} with considerations on the computational power of the different models of CTCs. \section{P-CTCs and Deutsch's CTCs}\labell{s:pctcs} Any quantum theory of gravity will have to propose a prescription to deal with the unavoidable~\cite{hartle} nonlinearities that plague CTCs. This requires some sort of modification of the dynamical equations of motion of quantum mechanics, which are always linear. Deutsch in his seminal paper~\cite{deutsch} proposed one such prescription, based on a self-consistency condition imposed on the state of the systems inside the CTC. 
Deutsch's theory has recently been critiqued by several authors as exhibiting self-contradictory features~\cite{bennett, ralph, ralph1, ralph2}. By contrast, although any quantum theory of time travel is likely to yield strange and counter-intuitive results, P-CTCs appear to be less pathological~\cite{loops}. They are based on a different self-consistency condition, which states that self-contradictory events do not happen (the Novikov principle~\cite{novikov}). Pegg points out that this can arise because of destructive interference of self-contradictory histories~\cite{pegg}. Here we further compare Deutsch's and post-selected closed timelike curves, and give an in-depth analysis of the latter, showing how they can be naturally obtained in the path-integral formulation of quantum theory and deriving the equations of motion that describe the interactions with CTCs. As noted, in addition to general-relativistic CTCs, our proposed theory can also be seen as a theoretical elaboration of Wheeler's assertion to Feynman that `an electron is a positron moving backward in time'~\cite{FEYN}. In particular, any quantum theory which allows the nonlinear process of post-selection supports time travel even in the absence of general-relativistic closed timelike curves. The mechanism of P-CTCs~\cite{loops} can be summarized by saying that they behave exactly as if the initial state of the system in the P-CTC were a maximally entangled state (entangled with an external purification space) and the final state were post-selected to be the same entangled state. When the probability amplitude for the transition between these two states is null, we postulate that the related event does not happen (so that the Novikov principle~\cite{novikov} is enforced). 
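The vanishing of the amplitude for self-contradictory events is easy to see in a minimal numerical sketch (our illustration, not from the paper): with the CTC qubit prepared and post-selected in the maximally entangled state $|\Psi\rangle\propto|00\rangle+|11\rangle$, the amplitude assigned to a unitary $U$ acting on the CTC qubit is $\langle\Psi|(U\otimes\openone)|\Psi\rangle={\rm Tr}[U]/2$, which vanishes for the self-contradictory bit-flip $U=\sigma_x$ (the ``grandfather'' interaction) and equals one for the consistent identity evolution.

```python
import numpy as np

# Sketch (our illustration, not from the paper): the P-CTC amplitude for a
# unitary U acting on the qubit inside the CTC is <Psi|(U x I)|Psi> = Tr[U]/2,
# where |Psi> = (|00> + |11>)/sqrt(2) encodes the entangled boundary condition.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # the "grandfather" bit-flip

amp_flip = psi.conj() @ np.kron(X, I2) @ psi   # self-contradictory history
amp_ok = psi.conj() @ np.kron(I2, I2) @ psi    # consistent history

print(abs(amp_flip))  # ~0: the paradoxical event has zero amplitude
print(abs(amp_ok))    # ~1: the consistent history survives
```

The zero amplitude means the paradoxical history destructively interferes away, in the spirit of Pegg's observation above.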
By contrast, Deutsch's CTCs are based on imposing the consistency condition \begin{eqnarray} \rho_{CTC}=\mbox{Tr}_A[U(\rho_{CTC}\otimes\rho_A)U^\dag], \labell{conscon}\; \end{eqnarray} where $\rho_{CTC}$ is the state of the system inside the closed timelike curve, $\rho_A$ is the state of the system outside (i.e.~of the chronology-respecting part of spacetime), $U$ is the unitary transformation that is responsible for any interactions between the two systems, and where the trace is performed over the chronology-respecting system. The existence of a state $\rho$ that satisfies~\eqref{conscon} is ensured by the fact that any completely-positive map of the form ${\cal L}[\rho]=\mbox{Tr}_A[U(\rho\otimes\rho_A)U^\dag]$ always has at least one fixed point $\rho$ (or, equivalently, one eigenvector $\rho$ with eigenvalue one). If more than one state $\rho_{CTC}$ satisfies the consistency condition~\eqref{conscon}, Deutsch separately postulates a ``maximum entropy rule'', requiring that the maximum-entropy one be chosen. Note that Deutsch's formulation assumes that the state exiting the CTC in the past is completely uncorrelated with the chronology-preserving variables at that time: the time-traveler's `memories' of events in the future are no longer valid. \begin{figure} \caption{Description of closed timelike curves through teleportation. a) Conventional teleportation: Alice and Bob start from a maximally entangled state shared between them, represented by ``$\bigcup$''. Alice performs a Bell measurement M on her half of the shared state and on the unknown state $|\psi\rangle$ she wants to transmit. This measurement tells her which entangled state the two systems are in. She then communicates (dotted line) the measurement result to Bob, who performs a unitary V on his half of the entangled state, obtaining the initial unknown state $|\psi\rangle$. 
b) Post-selected teleportation: the system in state $|\psi\rangle$ and half of the Bell state ``$\bigcup$'' are projected onto the same Bell state ``$\bigcap$''. This means that the other half of the Bell state is projected into the initial state of the system $|\psi\rangle$ even {\em before} this state is available. } \label{f:teleport} \end{figure} The primary conceptual difference between Deutsch's CTCs and P-CTCs lies in the self-consistency condition imposed. Consider a measurement that can be made either on the state of the system as it enters the CTC, or on the state as it emerges from the CTC. Deutsch demands that these two measurements yield the same statistics for the CTC state alone: that is, the density matrix of the system as it enters the CTC is the same as the density matrix of the system as it exits the CTC. By contrast, we demand that these two measurements yield the same statistics for the CTC state {\it together with its correlations with any chronology preserving variables}. It is this demand, that closed timelike curves respect the statistics of the time-traveling state together with its correlations with other variables, that distinguishes P-CTCs from Deutsch's CTCs. The fact that P-CTCs respect correlations effectively enforces the Novikov principle~\cite{novikov}, and, as will be seen below, makes P-CTCs consistent with path-integral approaches to CTCs. The connection between P-CTCs and teleportation~\cite{teleportation} is illustrated (see Fig.~\ref{f:teleport}) with the following simple example that employs qubits (extensions to higher-dimensional systems are straightforward). Suppose that the initial Bell state is $|\Psi^{(-)}\rangle\propto |01\rangle-|10\rangle$ (though any maximally entangled Bell state would work equally well), and suppose that the initial state of the system entering the CTC is $|\psi\rangle$. 
Then the joint state of the three systems (system~1 entering the CTC, system~2 emerging from the CTC, and system~3, its purification) is given by $|\psi\rangle_1|\Psi^{(-)}\rangle_{23}$. These three systems are denoted by the three vertical lines of Fig.~\ref{f:teleport}b. It is immediate to see that this state can also be written as \begin{eqnarray} && (-|\Psi^{(-)}\rangle_{13}|\psi\rangle_2- |\Psi^{(+)}\rangle_{13}\sigma_z|\psi\rangle_2+\nonumber\\&& |\Phi^{(-)}\rangle_{13}\sigma_x|\psi\rangle_2+ i|\Phi^{(+)}\rangle_{13}\sigma_y|\psi\rangle_2)/2 \labell{telep}\;, \end{eqnarray} where $|\Psi^{(\pm)}\rangle\propto|01\rangle\pm|10\rangle$ and $|\Phi^{(\pm)}\rangle\propto|00\rangle\pm|11\rangle$ are the four states of a Bell basis for qubit systems and the $\sigma_\alpha$'s are the three Pauli matrices. Eq.~\eqref{telep} is equivalent to Eq.~(5) of Ref.~\cite{teleportation}, where the extension to higher-dimensional systems is presented (the extension to infinite-dimensional systems is presented in~\cite{braunstein}). It is immediate to see that, if system~1 entering the CTC together with the purification system~3 is post-selected to be in the same Bell state $|\Psi^{(-)}\rangle_{13}$ as the initial one, then only the first term of Eq.~\eqref{telep} survives. Apart from an inconsequential minus sign, this implies that system~2 emerging from the CTC is in the state $|\psi\rangle_2$, which is exactly the state that has entered (rather, will enter) the CTC. It seems that, based on what is currently known about these two approaches, we cannot conclusively choose P-CTCs over Deutsch's, or {\it vice versa}. Both arise from reasonable physical assumptions and both are consistent with different approaches to reconciling quantum mechanics with closed timelike curves in general relativity. 
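The post-selection step is easy to check numerically. The following sketch (ours; system labels as in Eq.~\eqref{telep}) builds $|\psi\rangle_1|\Psi^{(-)}\rangle_{23}$ for a random input qubit, projects systems 1 and 3 onto $|\Psi^{(-)}\rangle_{13}$, and verifies that system 2 is left in $|\psi\rangle$ up to a phase, with an amplitude of magnitude $1/2$, i.e.~only the first term of Eq.~\eqref{telep} survives.

```python
import numpy as np

# Sketch (ours): post-selected teleportation on qubits, as in Eq. (telep).
rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                    # arbitrary input state |psi>_1

# |Psi^-> = (|01> - |10>)/sqrt(2), stored as the 2x2 tensor B[j, k]
B = np.array([[0, 1], [-1, 0]], dtype=complex) / np.sqrt(2)

# Joint state T[i, j, k] of systems (1, 2, 3): |psi>_1 (x) |Psi^->_23
T = np.einsum('i,jk->ijk', psi, B)

# Project systems 1 and 3 onto the same Bell state |Psi^->_13
out = np.einsum('ik,ijk->j', B.conj(), T)

# The surviving (unnormalized) state of system 2 is psi/2 up to a phase:
# the post-selection succeeds with probability 1/4 and outputs |psi> exactly
print(np.linalg.norm(out))                           # ~0.5
print(abs(np.vdot(psi, out)) / np.linalg.norm(out))  # ~1.0 (unit overlap)
```

The overall sign of the surviving term depends on phase conventions for the Bell basis; as noted above, it is inconsequential.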
A final decision on which of the two is ``actually the case'' may have to be postponed until a full quantum theory of gravity is derived (which would allow one to calculate from first principles what happens in a CTC) or until a CTC is discovered that can be tested experimentally. However, because of the huge recent interest in CTCs in physics and in computer science (e.g.~see~\cite{bacon, bennett, francesi, aar1, aar2, Brun1, ralph}), it is important to point out that there are reasonable alternatives to the leading theory in the field. We also point out that our post-selection-based description of CTCs seems to be less pathological than Deutsch's: for example, P-CTCs have less computational power and do not require one to separately postulate a ``maximum entropy rule''~\cite{loops}. Therefore, they are in some sense preferable, at least from an Occam's razor perspective. Independent of such questions of aesthetic preference, as we will now show, P-CTCs are consistent with previous path-integral formulations of closed timelike curves, whereas Deutsch's CTCs are not. \section{P-CTCs and path integrals}\labell{s:pathint} Path integrals~\cite{feynm1,feynm2} allow one to calculate the transition amplitude for going from an initial state $|I\rangle$ to a final state $|F\rangle$ as an integral over paths of the action, i.e. \begin{widetext}\begin{eqnarray} \langle F|\exp(-\tfrac i\hbar H\tau)|I\rangle= \int_{-\infty}^{+\infty} dx\: dy\; I(x)\; F^*(y)\int_x^y{\cal D}x(t)\exp[\tfrac i\hbar S],\mbox{ where } S=\int_0^{\tau} dtL(x,\dot x) \labell{feyn}\;, \end{eqnarray} and where $L={T}-V$ is the Lagrangian and $S$ is the action, $I(x)$ and $F(x)$ are the position representations of $|I\rangle$ and $|F\rangle$ respectively (i.e.~$|I\rangle=\int dx\:I(x)|x\rangle$), and the paths in the integration over paths all start at $x$ and end at $y$. Of course, in this form it is suited only to describing the dynamics of a particle in space (or a collection of particles). 
It will be extended to other systems in the next section. In order to add a CTC, we first divide the spacetime into two parts, \begin{eqnarray} \langle F|_C\langle F'|\exp(-\tfrac i\hbar H\tau)|I\rangle|I'\rangle_C= \int_{-\infty}^{+\infty} dx\: dx'dy\:dy'\; I(x)\;I'(x')\; F^*(y)\; F'^*(y') \int_{x,x'}^{y,y'}{\cal D}x(t)\exp[\tfrac i\hbar S] \labell{feyn2}\;, \end{eqnarray} The ``conventional'' strategy to deal with CTCs using path integrals is to send the system $C$ to a prior time unchanged (i.e. with the same values of $x,\dot x$), while the other system (the chronology-respecting one) evolves normally. This is enforced by imposing periodic boundary conditions on the CTC boundaries. Namely, the probability amplitude for the chronology-respecting system is \begin{eqnarray} \langle F|\exp(-\tfrac i\hbar H\tau)|I\rangle= \int_{-\infty}^{+\infty} dx\: dx'dy\:dy'\; I(x)\; F^*(y)\;\delta(x'-y') \int_{x,x'}^{y,y'}{\cal D}x(t)\exp[\tfrac i\hbar S] \labell{feyn3}\;, \end{eqnarray} where the $\delta$-function ensures that the initial and final boundary conditions in the CTC system are the same. Note that we have removed $I'(x')$ and $F'(y')$, but we are coherently adding all possible initial and final conditions (through the $x'$ and $y'$ integrals). This implies that it is not possible to assign a definite state to the system inside a CTC: all possible states of the system (except possibly forbidden ones) are compatible with such boundary conditions. Note also that the boundary conditions of Eq.~\eqref{feyn3} have previously appeared in the literature (e.g.~see~\cite{politzer2} and, in the classical context, in the seminal paper~\cite{thorne}). \togli{In contrast to the above boundary conditions, in Deutsch's approach, only one among the possible states of the CTC is considered, one of the states that satisfies the consistency condition. It is required that this state is equal at the beginning and at the end, namely $I'=F'$, in the case in which it is a pure state. 
Namely, \begin{eqnarray} \fbox{\mbox{Deutsch:}}\qquad \langle F|\exp(-\tfrac i\hbar H\tau)|I\rangle= \int_{-\infty}^{+\infty} dx\: dx'dy\:dy'\; I(x)\; F^*(y)\;C(x')\;C^*(y') \int_{x,x'}^{y,y'}{\cal D}x(t)\exp[\tfrac i\hbar S] \labell{feyn4}\;, \end{eqnarray} \end{widetext} for a suitable state $|C\rangle=\int dx\:C(x)|x\rangle$ of the CTC that satisfies the consistency condition~\cite{deutsch}, \begin{eqnarray} |C\rangle\langle C|=\mbox{Tr}_A[U(|C\rangle\langle C|\otimes\rho_A)U^\dag] \labell{conscond}\;, \end{eqnarray} where $\rho_A$ is the state of the chronology-respecting system $A$. Note the absence of the $\delta$: as long as the state at the beginning and at the end of the CTC is equal, Deutsch's approach does not require the single stories to start and end in the same configuration (this is clear from his solution of the grandfather paradox~\cite{loops}: the single components of the wavefunction may be swapped around, as long as the state is maintained).} \togli{In contrast, our approach does not assign a state to the CTC system $C$, but we are coherently adding all possible stories. Note that, {\em a priori} both our and Deutsch's approaches are consistent at this level, since conventional quantum mechanics does not deal with periodic boundary conditions on time (it is only capable to evolve an {\em initial} state into a {\em final} state). In order to decide which is the correct approach, one would have to quantize a CTC and look whether Eq.~\eqref{feyn3} or \eqref{feyn4} comes out. Note, however, that it could well be that neither of these equations can be derived. The ``particle'', and its associated position representation, is not a well defined concept when you quantize highly pathological space-times such as those where CTCs are present. 
One would have to resort to looking at path-integrals of fields, but the essence of the above arguments are unaffected.} \vskip1\baselineskip To show that Eq.~\eqref{feyn3} is the same formula that one obtains using post-selected teleportation, we have to calculate $\langle F|_C\langle\Psi|\exp(-\tfrac i\hbar H\tau)\otimes\openone|I\rangle|\Psi\rangle_C$, where $|\Psi\rangle\propto\int dx|xx\rangle$ is a maximally entangled EPR state~\cite{epr} and where the Hamiltonian acts only on the system and on the first of the two Hilbert spaces of $|\Psi\rangle$. Use Eq.~\eqref{feyn2} for the system and for the first Hilbert space of $|\Psi\rangle$ to obtain \begin{eqnarray} && \langle F|_C\langle\Psi|\exp(-\tfrac i\hbar H\tau)\otimes\openone|I\rangle|\Psi\rangle_C= \int_{-\infty}^{+\infty} dx\: dx'dy\:dy'dz\:dz'\; I(x)\;F^*(y)\; \delta(x'-z)\; \delta(y'-z') \langle z|\openone|z'\rangle \nonumber\\&&\times \int_{x,x'}^{y,y'}{\cal D}x(t)\exp[\tfrac i\hbar S] = \int^{\infty}_{-\infty} dx dx^{\prime} dy dy^{\prime} I (x) F^{\ast} (y)\; \delta(x'-y') \int^{y, y^{\prime}}_{x, x^{\prime}} {\cal D} x(t) \exp \left[ \frac{i}{\hbar} S \right] \labell{tp}\;, \end{eqnarray}\end{widetext} where we have used the position representation $|\Psi\rangle=\int dy\:dz\:\delta(y-z)|y\rangle|z\rangle$. Eq.~\eqref{tp} is clearly equal to Eq.~\eqref{feyn3} since $\langle z|z'\rangle=\delta(z-z')$. Note that this result is independent of the particular form of the EPR state $|\Psi\rangle$ as long as it is maximally entangled in position (and hence in momentum). All the above discussion holds for initial and final pure states. However, the extension to mixed states in the path-integral formulation is straightforward: one only needs to employ appropriate purification spaces~\cite{yutaka,mauro}. The formulas then reduce to the previous ones.\togli{, namely~Eqs.~\eqref{feyn3} and \eqref{feyn4}. } Here we briefly comment on the two-state vector formalism of quantum mechanics~\cite{weak1,weak2}. 
It is based on post-selection of the final state and on renormalizing the resulting transition amplitudes: it is a time-symmetric formulation of quantum mechanics in which not only the initial state, but also the final state is specified. As such, it shares many properties with our post-selection-based treatment of CTCs. In particular, in both theories it is impossible to assign a definite quantum state at each time: in the two-state formalism the unitary evolution forward in time from the initial state might give a mid-time state different from the one obtained by unitary evolution backward in time from the final state. Analogously, in a P-CTC, it is impossible to assign a definite state to the CTC system at any time, given the cyclicity of time there. This is evident, for example, from Eq.~\eqref{feyn3}: no state is assigned to the CTC system, only periodic boundary conditions. Another aspect that the two-state formalism and P-CTCs share is the nonlinear renormalization of the states and probabilities. In both cases this arises because of the post-selection. In addition to the two-state formalism, our approach can also be related to weak values~\cite{weak, weak1}, since we might be performing measurements between when the system emerges from the CTC and when it re-enters it. Considerations analogous to the ones presented above apply. It would be a mistake, however, to think that the theory of post-selected closed timelike curves in some sense requires or even singles out the weak-value theory. Although the two are compatible with each other, the theory of P-CTCs is essentially a `free-standing' theory that does not give preference to one interpretation of quantum mechanics over another. \section{General systems}\labell{s:general} The formula~\eqref{feyn3} was derived in the path-integral formulation of quantum mechanics, but it can easily be extended to generic quantum evolution. 
We start by recalling the usual Kraus decomposition of a generic quantum evolution (which can describe the evolution of both isolated and open systems). It is given by \begin{eqnarray} &&{\cal L}[\rho]=\mbox{Tr}_E[U(\rho\otimes|e\rangle\langle e|)U^\dag]= \sum_i\langle i|U|e\rangle\:\rho\:\langle e|U^\dag|i\rangle\nonumber\\&& =\sum_iB_i\rho B_i^\dag \labell{cp}\;, \end{eqnarray} where $|e\rangle$ is the initial state of the environment (or, equivalently, of a putative abstract purification space), $U$ is the unitary operator governing the interaction between the system, initially in the state $\rho$, and the environment, and the $B_i\equiv\langle i|U|e\rangle$ are the Kraus operators. In contrast, the nonlinear evolution of our post-selected teleportation scheme is given by \begin{eqnarray} &&{\cal N}[\rho]=\mbox{Tr}_{EE'} \Big[(U\otimes\openone_{E'}) \left(\rho\otimes|\Psi\rangle\langle\Psi|\right) \left(U^\dag\otimes\openone_{E'}\right)\nonumber\\&&\times \left(\openone\otimes|\Psi\rangle\langle\Psi|\right)\Big]= \sum_{l,j}\langle l|U|l\rangle\:\rho\:\langle j|U^\dag|j\rangle= C\rho C^\dag \labell{nl}, \end{eqnarray} where $C\equiv$Tr$_{CTC}[U]$ and $|\Psi\rangle\propto\sum_i|i\rangle_E|i\rangle_{E'}$ (or any other maximally entangled state, which would give the same result). Obviously, the evolution in \eqref{nl} is nonlinear (because of the post-selection), so one has to renormalize the final state: ${\cal N}[\rho]\to {\cal N}[\rho]/$Tr${\cal N}[\rho]$. In other words, according to our approach, a chronology-respecting system in a state $\rho$ that interacts with a CTC using a unitary $U$ will undergo the transformation \begin{eqnarray} {\cal N}[\rho]=\frac{C\;\rho\;C^\dag} {\mbox{Tr}[C\;\rho\;C^\dag]} \labell{evol}\;, \end{eqnarray} where we suppose that the evolution does not happen if $C\equiv$Tr$_{CTC}[U]=0$. The comparison with~\eqref{cp} is instructive: there the non-unitarity comes from the inaccessibility of the environment. 
Analogously, in~\eqref{evol} the non-unitarity comes from the fact that, after the CTC is closed, it is forever inaccessible to the chronology-respecting system. The nonlinearity of~\eqref{evol} is more difficult to interpret, but is connected with the periodic boundary conditions in the CTC. Note that this general evolution equation~\eqref{evol} is consistent with previous derivations based on path integrals. For example, it is equivalent to Eq.~(4.6) of Ref.~\cite{hartle} by Hartle. However, in contrast to here, the actual form of the evolution operators $C$ is not provided there. As a further example, consider Ref.~\cite{politzer}, where Politzer derives a path-integral treatment of CTCs for qubits, using Grassmann fields. His Eq.~(5) is compatible with Eq.~\eqref{nl}. He also derives a nonunitary evolution that is consistent with Eq.~\eqref{evol} in the case in which the initial state is pure. In particular, this implies that, also in the general qudit case, our post-selected teleportation approach gives the same result one would obtain from a specific path-integral formulation. In addition, it has been pointed out many times before (e.g.~see~\cite{friedman, cassidy}) that when quantum fields inside a CTC interact with external fields, linearity and unitarity are lost. It is also worth noting that there have been various proposals to restore unitarity by modifying the structure of quantum mechanics itself or by postulating an inaccessible purification space that is added to uphold unitarity~\cite{anderson, wells}. The evolution~\eqref{evol} coming from our approach is to be compared with Deutsch's evolution, \begin{eqnarray} &&{\cal D}[\rho]=\mbox{Tr}_{CTC}[U(\rho_{CTC}\otimes\rho)U^\dag],\mbox{ where }\nonumber\\&&\rho_{CTC}=\mbox{Tr}_A[U(\rho_{CTC}\otimes\rho)U^\dag] \labell{consconds}\; \end{eqnarray} satisfies the consistency condition. 
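The contrast between Eq.~\eqref{evol} and Eq.~\eqref{consconds} can be made explicit with a small numerical sketch (ours, not from the paper). We take $U={\rm CNOT}$ with the chronology-respecting qubit $A$ as control and the CTC qubit as target (the code orders the factors as $A\otimes{\rm CTC}$, i.e.~swapped with respect to Eq.~\eqref{consconds}), and input $\rho=|+\rangle\langle+|$. The P-CTC evolution, computed both directly from the post-selected-teleportation prescription and from $C={\rm Tr}_{CTC}[U]$, returns the pure state $|0\rangle\langle 0|$, a manifestly nonlinear renormalization of probabilities; Deutsch's fixed point is the maximally mixed state, and his output is the decohered mixture $\openone/2$.

```python
import numpy as np

# Sketch (ours): compare P-CTC evolution (Eq. (evol)) with Deutsch's
# fixed-point evolution (Eq. (consconds)) for U = CNOT.
# Ordering: qubit A (chronology-respecting, control) (x) CTC qubit (target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def tr_ctc(M):   # partial trace over the CTC qubit
    return np.einsum('ikjk->ij', M.reshape(2, 2, 2, 2))

def tr_a(M):     # partial trace over qubit A
    return np.einsum('akal->kl', M.reshape(2, 2, 2, 2))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())       # chronology-respecting input |+><+|

# --- P-CTC, directly from post-selected teleportation (pure input) ---
Psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |Psi> on E, E'
state = np.kron(CNOT, np.eye(2, dtype=complex)) @ np.kron(plus, Psi)
Bell = np.eye(2, dtype=complex) / np.sqrt(2)              # |Psi> as a tensor
out = np.einsum('ef,aef->a', Bell.conj(), state.reshape(2, 2, 2))

C = tr_ctc(CNOT)                        # C = Tr_CTC[U] = 2|0><0|, cf. Eq. (nl)
N = C @ rho @ C.conj().T
N /= np.trace(N)                        # renormalized P-CTC output, Eq. (evol)

# --- Deutsch: iterate sigma -> Tr_A[U (rho (x) sigma) U+] to a fixed point ---
sigma = np.eye(2, dtype=complex) / 2
for _ in range(100):
    sigma = tr_a(CNOT @ np.kron(rho, sigma) @ CNOT.conj().T)
D = tr_ctc(CNOT @ np.kron(rho, sigma) @ CNOT.conj().T)

print(np.round(N.real, 3))   # [[1, 0], [0, 0]]: P-CTC post-selects |0><0|
print(np.round(D.real, 3))   # [[0.5, 0], [0, 0.5]]: Deutsch decoheres to I/2
```

Here any input with $\langle 0|\rho|0\rangle\neq 0$ is mapped by the P-CTC to $|0\rangle\langle 0|$, while Deutsch's prescription keeps the diagonal of $\rho$ but destroys its coherences, which illustrates the loss of correlations discussed above.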
\togli{As we have shown in the previous section, this evolution too can be derived in a path-integral formulation, by assigning different boundary conditions.} The direct comparison of Eqs.~\eqref{evol} and~\eqref{consconds} highlights the differences in the general prescription for the dynamics of CTCs of these two approaches. Even though the results presented in this section are directly applicable only to general finite-dimensional systems, the extension to systems living in infinite-dimensional separable Hilbert spaces seems conceptually straightforward, although mathematically involved. In his path-integral formulation of CTCs, Hartle notes that CTCs might necessitate abandoning not only unitarity and linearity, but even the familiar Hilbert-space formulation of quantum mechanics~\cite{hartle}. Indeed, the fact that the state of a system at a given time can be written as a tensor product of subsystem states relies crucially on the fact that operators corresponding to spacelike-separated regions of spacetime commute with each other. When CTCs are introduced, the notion of `spacelike' separation becomes muddied. The formulation of closed timelike curves in terms of P-CTCs shows, however, that the Hilbert-space structure of quantum mechanics can be retained. \section{Time travel in the absence of general-relativistic CTCs}\labell{s:generalrel} Although the theory of P-CTCs was developed to address the question of quantum mechanics in general-relativistic closed timelike curves, it also allows us to address the possibility of time travel in other contexts. Essentially, any quantum theory that allows the nonlinear process of projection onto some particular state, such as the entangled states of P-CTCs, allows time travel even when no spacetime closed timelike curve exists. Indeed, the mechanism for such time travel closely follows Wheeler's famous telephone call described above. 
Non-general-relativistic P-CTCs can be implemented by the creation of, and projection onto, entangled particle-antiparticle pairs. Such a mechanism is just what is used in our experimental tests of P-CTCs~\cite{loops}: although projection is a nonlinear process that cannot be implemented deterministically in ordinary quantum mechanics, it can easily be implemented in a probabilistic fashion. Consequently, the effect of P-CTCs can be tested simply by performing quantum teleportation experiments, and by post-selecting only the results that correspond to the desired entangled-state output. If it turns out that the linearity of quantum mechanics is only approximate, and that projection onto particular states does in fact occur -- for example, at the singularities of black holes~\cite{HM, Yurtsever, Gottesman, Lloyd1} -- then it might be possible to implement time travel even in the absence of a general-relativistic closed timelike curve. The formalism of P-CTCs shows that such quantum time travel can be thought of as a kind of quantum tunneling backwards in time, which can take place even in the absence of a classical path from future to past. \section{Computational power of CTCs}\labell{s:comp} It has long been known that nonlinear quantum mechanics potentially allows the rapid solution of hard problems such as NP-complete problems~\cite{abrams}. The nonlinearities in the quantum mechanics of closed timelike curves are no exception~\cite{aar1, aar2, Brun1}. Aaronson and Watrous have shown that quantum computation with Deutsch's closed timelike curves allows the solution of any problem in PSPACE, the set of problems that can be solved using polynomial space resources~\cite{aar1}. 
Similarly, Aaronson has shown that quantum computation combined with post-selection allows the solution of any problem in the computational class PP, probabilistic polynomial time (the class of problems for which a probabilistic polynomial-time Turing machine accepts with probability greater than $\tfrac 12$ if and only if the answer is ``yes''). Quantum computation with post-selection explicitly allows P-CTCs, and P-CTCs in turn allow the performance of any desired post-selected quantum computation. Accordingly, quantum computation with P-CTCs can solve any problem in PP, including NP-complete problems. Since the class PP is thought to be strictly contained in PSPACE, quantum computation with P-CTCs is apparently strictly less powerful than quantum computation with Deutsch's CTCs. In the case of quantum computation with Deutschian CTCs, Bennett {\it et al.}~\cite{bennett} have questioned whether the notion of programming a quantum computer even makes sense. Ref.~\cite{bennett} notes that in Deutsch's closed timelike curves, the nonlinearity introduces ambiguities in the definition of state preparation: as is well known in nonlinear quantum theories, the result of sending {\it either} the state $|\psi\rangle$ through a closed timelike curve {\it or} the state $|\phi\rangle$ is no longer equivalent to sending the mixed state $(1/2)( |\psi\rangle\langle\psi| + |\phi\rangle\langle \phi|)$ through the curve. The problem with computation arises because, as is clear from our grandfather-paradox circuit~\cite{loops}, Deutsch's closed timelike curves typically break the correlation between chronology-preserving variables and the components of a mixed state that enters the curve: the component that enters the CTC as $|0\rangle$ can exit the curve as $|1\rangle$, even if the overall mixed state exiting the curve is the same as the one that enters. 
Consequently, Bennett {\it et al.} argue, the programmer who is using a Deutschian closed timelike curve as part of her quantum computer typically finds the output of the curve completely decorrelated from the problem she would like to solve: the curve emits random states. In contrast, because P-CTCs are formulated explicitly to retain correlations with chronology-preserving variables, quantum computation using P-CTCs does not suffer from this state-preparation ambiguity. That is not to say that P-CTCs are computationally innocuous: their nonlinear nature typically renormalizes the probabilities of states in an input superposition, leading to strange and counter-intuitive effects. For example, any CTC can be used to compress any computation to depth one, as shown in Fig.~\ref{f:comput}. Indeed, it is exactly the ability of nonlinear quantum mechanics to renormalize probabilities from their conventional values that gives rise to the amplification of small components of quantum superpositions that allows the solution of hard problems. Not least of the counter-intuitive effects of P-CTCs is that they could still solve hard computational problems with ease! The `excessive' computational power of P-CTCs is effectively an argument for why the types of nonlinearities that give rise to P-CTCs, if they exist, should only be found under highly exceptional circumstances, such as general-relativistic closed timelike curves or black-hole singularities. \begin{figure} \caption{ Closed timelike loops can collapse the time-depth of any circuit to one, allowing one to solve any problem not merely efficiently, but instantaneously. } \label{f:comput} \end{figure} \section{Conclusions}\labell{s:concl} This paper reviewed quantum mechanical theories of time travel, focusing on the theory of P-CTCs~\cite{loops}. Our purpose in presenting this work is to make precise the similarities and differences between the various quantum theories of time travel. We summarize our findings here. 
We have extensively argued that P-CTCs are physically inequivalent to Deutsch's CTCs. In Sec.~\ref{s:pathint} we showed that P-CTCs are compatible with the path-integral formulation of quantum mechanics. This formulation is at the basis of most previous analyses of quantum descriptions of closed timelike curves, since it is particularly suited to calculations of quantum mechanics in curved spacetime. P-CTCs are reminiscent of, and consistent with, the two-state-vector and weak-value formulation of quantum mechanics. It is important to note, however, that P-CTCs do not in any sense require such a formulation. Then, in Sec.~\ref{s:general} we extended our analysis to general systems where the path-integral formulation may not always be possible and derived a simple prescription for the calculation of the CTC dynamics, namely Eq.~\eqref{evol}. In this way we have performed a complete characterization of P-CTCs in the most commonly employed frameworks for quantum mechanics, with the exception of algebraic methods (e.g.~see~\cite{yurtsever}). In Sec.~\ref{s:generalrel} we have argued that, as Wheeler's picture of positrons as electrons moving backwards in time suggests, P-CTCs might also allow time travel in spacetimes without general-relativistic closed timelike curves. If nature somehow provides the nonlinear dynamics afforded by final-state projection, then it is possible for particles (and, in principle, people) to tunnel from the future to the past. Finally, in Sec.~\ref{s:comp} we have seen that P-CTCs are computationally very powerful, though less powerful than the Aaronson-Watrous theory of Deutsch's CTCs. Our hope in elaborating the theory of P-CTCs is that this theory may prove useful in formulating a quantum theory of gravity, by providing new insight into one of the most perplexing consequences of general relativity, i.e., the possibility of time travel. \begin{references} \bibitem{GODEL} K. G\"{o}del, Rev. Mod. Phys. {\bf 21}, 447 (1949). 
\bibitem{Bonnor} W.B. Bonnor, {\it J. Phys. A} {\bf 13}, 2121 (1980). \bibitem{Gott}J.R. Gott, Phys. Rev. Lett. {\bf 66}, 1126 (1991). \bibitem{altri} M.S. Morris, K.S. Thorne, U. Yurtsever, {\it Phys. Rev. Lett.} {\bf 61}, 1446 (1988). \bibitem{politzer}H.D.~Politzer, Phys. Rev. D {\bf 49}, 3981 (1994). \bibitem{boulware}D.G. Boulware, Phys. Rev. D {\bf 46}, 4421 (1992). \bibitem{hartle}J.B. Hartle, Phys. Rev. D {\bf 49}, 6543 (1994). \bibitem{STOCKUM} W.J. van Stockum, Proc. Roy. Soc. A {\bf 57}, 135 (1937). \bibitem{politzer2}H.D.~Politzer, Phys. Rev. D {\bf 46}, 4470 (1992). \bibitem{deutsch}D. Deutsch, {\it Phys. Rev. D} {\bf 44}, 3197 (1991). \bibitem{schilpp}A. Einstein, in {\em Albert Einstein, Philosopher-Scientist: The Library of Living Philosophers Volume VII}, P.A. Schilpp (ed.) (Open Court, 3rd edition, 1998). \bibitem{Lanczos} C. Lanczos, {\it Z. Phys.} {\bf 21}, 73 (1924). \bibitem{Hawking} S. Hawking, {\it Phys. Rev. D} {\bf 46}, 603 (1992). \bibitem{Deser} S. Deser, R. Jackiw, G. 't Hooft, {\it Phys. Rev. Lett.} {\bf 68}, 267 (1992). \bibitem{Carroll} S.M. Carroll, E. Farhi, A.H. Guth, K.D. Olum, {\it Phys. Rev. D} {\bf 50}, 6190 (1994). \bibitem{FEYN} R. Feynman, in {\it Nobel Lectures, Physics 1963-1970,} Elsevier Publishing Company, Amsterdam, 1972. \bibitem{loops}S. Lloyd, L. Maccone, R. Garcia-Patron, V. Giovannetti, Y. Shikano, S. Pirandola, L.A. Rozema, A. Darabi, Y. Soudagar, L.K. Shalm, and A.M. Steinberg, arXiv:1005.2219 (2010). \bibitem{HM} G. T. Horowitz and J. Maldacena, {\it JHEP} {\bf 0402}, 8 (2004). \bibitem{Yurtsever} U. Yurtsever and G. Hockney, {\it Class. Quant. Grav.} {\bf 22}, 295-312 (2005); arXiv:gr-qc/0409112. \bibitem{Gottesman} D. Gottesman and J. Preskill, {\it JHEP} {\bf 0403}, 26 (2004). \bibitem{Lloyd1} S. Lloyd, {\it Phys. Rev. Lett.} {\bf 96}, 061302 (2006). \bibitem{pegg}D.T. 
Pegg, in ``Time's Arrows, Quantum Measurement and Superluminal Behaviour'', edited by D. Mugnai et al. (Consiglio Nazionale delle Ricerche, Roma, 2001) p.\ 113, arXiv:quant-ph/0506141 (2005). \bibitem{benschu} C.H. Bennett and B. Schumacher, lecture at QUPON conference, Vienna, Austria, May 2005, http://www.research.ibm.com/people/b/ben\-netc/QU\-PON\-Bshort.pdf ; C.H. Bennett and B. Schumacher, lecture at Tata Institute for Fundamental Research, Mumbai, India, February 2002; archived Aug 2003 at http://web.archive.org/web/*/http://qpip-ser\-ver.tcs.ti\-fr.res.in/~qpip/HTML/Cour\-ses/Ben\-nett/TIFR5.pdf \bibitem{ralph3}T.C. Ralph, arXiv:quant-ph/0510038v1 (2005). \bibitem{SVE} G. Svetlichny, arXiv:0902.4898 (2009). \bibitem{mauro1}G. Chiribella, G. M. D'Ariano, P. Perinotti, B. Valiron, arXiv:0912.0195 (2009). \bibitem{zeil}\v C. Brukner, J.-W. Pan, C. Simon, G. Weihs, A. Zeilinger, Phys. Rev. A {\bf 67}, 034304 (2003). \bibitem{birrel}N.D. Birrell, P.C.W. Davies, {\it Quantum Fields in Curved Space} (Cambridge Univ. Press, Cambridge, 1982). \bibitem{novikov} J.~Friedman, M.S.~Morris, I.D.~Novikov, F.~Echeverria, G.~Klinkhammer, K.S.~Thorne, and U.~Yurtsever, Phys. Rev. D {\bf 42}, 1915 (1990). \bibitem{friedman}J.L. Friedman, N.J. Papastamatiou, J.Z. Simon, Phys. Rev. D {\bf 46}, 4456 (1992). \bibitem{friedman1}J.L. Friedman, M.S. Morris, Phys. Rev. Lett. {\bf 66}, 401 (1991). \bibitem{diaz}P.F. Gonz\'alez-D\'iaz, Phys. Rev. D {\bf 58}, 124011 (1998). \bibitem{ralph1}T.C. Ralph, Phys. Rev. A {\bf 76}, 012336 (2007). \bibitem{ralph2}T.C. Ralph, G.J. Milburn, and T. Downes, Phys. Rev. A {\bf 79}, 022121 (2009). \bibitem{bennett} C.H. Bennett, D. Leung, G. Smith, and J.A. Smolin, Phys. Rev. Lett. {\bf 103}, 170502 (2009). \bibitem{ralph} T.C. Ralph, C.R. Myers, arXiv:1003.1987 (2010). \bibitem{teleportation} C.H. Bennett, G. Brassard, C. Cr\'epeau, R. Jozsa, A. Peres, W.K. Wootters, Phys. Rev. Lett. {\bf 70}, 1895 (1993). \bibitem{braunstein}S.L. Braunstein, H.J. Kimble, Phys. 
Rev. Lett. {\bf 80}, 869 (1998). \bibitem{bacon}D. Bacon, Phys. Rev. A {\bf 70}, 032309 (2004). \bibitem{francesi} T.A. Brun, J. Harrington, M.M. Wilde, Phys. Rev. Lett. {\bf 102}, 210402 (2009). \bibitem{aar1}S.~Aaronson, J.~Watrous, Proc. Roy. Soc. A {\bf 465}, 631 (2009); arXiv:0808.2669. \bibitem{aar2}S.~Aaronson, Proc. Roy. Soc. A {\bf 461}, 3473 (2005); arXiv:quant-ph/0412187. \bibitem{Brun1} T.A.~Brun, Found. Phys. Lett. {\bf 16}, 245 (2003). \bibitem{feynm1}R.P. Feynman and A.R. Hibbs, {\it Quantum Mechanics and Path Integrals} (McGraw-Hill, New York, 1965). \bibitem{feynm2}R. P. Feynman, Rev. Mod. Phys. {\bf 20}, 367 (1948). \bibitem{thorne}F. Echeverria, G. Klinkhammer, K.S. Thorne, Phys. Rev. D {\bf 44}, 1077 (1991). \bibitem{epr}A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. {\bf 47}, 777 (1935). \bibitem{weak1}Y. Aharonov, L. Vaidman, in {\it Time in Quantum Mechanics}, edited by J.G. Muga, R. Sala Mayato, and I.L. Egusquiza (Springer, Berlin, Heidelberg) p.\ 399; arXiv:quant-ph/0105101 (2001). \bibitem{yutaka}Y. Shikano and A. Hosoya, J. Phys. A: Math. Theor. {\bf 43}, 025304 (2010). \bibitem{mauro}G. Chiribella, G.M. D'Ariano, P. Perinotti, Phys. Rev. A {\bf 80}, 022339 (2009). \bibitem{weak2}Y. Aharonov, P. G. Bergmann, and J. L. Lebowitz, Phys. Rev. {\bf 134}, B1410 (1964). \bibitem{weak} Y. Aharonov, D.Z. Albert, and L. Vaidman, Phys. Rev. Lett. {\bf 60}, 1351 (1988). \bibitem{cassidy}M.J. Cassidy, Phys. Rev. D {\bf 52}, 5676 (1995). \bibitem{anderson} A. Anderson, Phys. Rev. D {\bf 51}, 5707 (1995). \bibitem{wells}C.J. Fewster, C. G. Wells, Phys. Rev. D {\bf 52}, 5773 (1995). \bibitem{abrams} D. Abrams and S. Lloyd, Phys. Rev. Lett. {\bf 81}, 3992 (1998). \bibitem{yurtsever}U. Yurtsever, Class. Quantum Grav. {\bf 11}, 999 (1994). \end{references} \end{document}
\begin{document} \title{Noncritical holomorphic functions on Stein spaces} \author[Franc Forstneri\v c]{Franc Forstneri\v c} \address{Franc Forstneri\v c, Faculty of Mathematics and Physics, University of Ljubljana, and Institute of Mathematics, Physics and Mechanics, Jadranska 19, 1000 Ljubljana, Slovenia} \email{[email protected]} \thanks{The author was supported by the program P1-0291 and the grant J1-5432 from ARRS, Republic of Slovenia.} \subjclass[2010]{Primary 32C42, 32E10, 32E30. Secondary 57R70, 58K05.} \date{\today} \keywords{Holomorphic functions, critical points, Stein manifolds, Stein spaces, $1$-convex manifolds, stratifications.} \begin{abstract} In this paper we prove that every reduced Stein space admits a holomorphic function without critical points. Furthermore, every closed discrete subset of a reduced Stein space $X$ is the critical locus of a holomorphic function on $X$. We also show that for every complex analytic stratification with nonsingular strata on a reduced Stein space there exists a holomorphic function whose restriction to every stratum is noncritical. These results provide some information on critical loci of holomorphic functions on desingularizations of Stein spaces. In particular, every $1$-convex manifold admits a holomorphic function that is noncritical outside the exceptional variety. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} Every Stein manifold $X$ admits a holomorphic function $f\in \mathcal{O}(X)$ without critical points; see Gunning and Narasimhan \cite{Gunning-Narasimhan} for the case of open Riemann surfaces and \cite{FF:Acta} for the general case. In the algebraic category this fails on any compact Riemann surface of genus $g\ge 1$ with a puncture (every algebraic function on such a surface has a critical point, as follows from the Riemann-Hurwitz theorem), but it holds for holomorphic functions of finite order \cite{FO}. 
Noncritical holomorphic functions are of interest in particular since they define nonsingular holomorphic hypersurface foliations; results on this topic can be found in \cite{FF:Acta,FF:submersions}. In this paper we prove that, somewhat surprisingly, the same holds on Stein spaces. The following is a special case of our main result, Theorem \ref{th:mainbis}. \begin{theorem} \label{th:main} Every reduced Stein space admits a holomorphic function without critical points. \end{theorem} We begin by recalling the relevant notions. All complex spaces are assumed paracompact and reduced. For the theory of Stein spaces we refer to Grauert and Remmert \cite{Grauert-Remmert1979}. Let $X$ be a complex space. Denote by $\mathcal{O}_{X,x}$ the ring of germs of holomorphic functions at a point $x\in X$ and by $\mathfrak{m}_x$ the maximal ideal of $\mathcal{O}_{X,x}$, so $\mathcal{O}_{X,x}/\mathfrak{m}_x \cong \mathbb{C}$. Given $f\in \mathcal{O}_{X,x}$ we denote by $f-f(x) \in \mathfrak{m}_x$ the germ obtained by subtracting from $f$ its value $f(x)\in\mathbb{C}$ at $x$. \begin{definition} \label{def:critical} Assume that $x$ is a nonisolated point of a complex space $X$. \begin{itemize} \item[\rm (a)] A germ $f\in \mathcal{O}_{X,x}$ at $x$ is said to be {\em critical} (and $x$ is a {\em critical point} of $f$) if $f-f(x) \in \mathfrak{m}_x^2$ (the square of the maximal ideal $\mathfrak{m}_x$), and is {\em noncritical} if $f-f(x) \in \mathfrak{m}_x\setminus \mathfrak{m}_x^2$. \item[\rm (b)] A germ $f\in \mathcal{O}_{X,x}$ is {\em strongly noncritical at $x$} if the germ at $x$ of the restriction $f|_V$ to any local irreducible component $V$ of $X$ is noncritical. \end{itemize} Any function is considered (strongly) noncritical at an isolated point of $X$. \end{definition} One can characterize these notions by the (non)vanishing of the differential $df_x$ on the {\em Zariski tangent space} $T_x X$. 
Recall that $T_xX$ is isomorphic to $(\mathfrak{m}_x/\mathfrak{m}_x^2)^*$, the dual of $\mathfrak{m}_x/\mathfrak{m}_x^2$, the latter being the cotangent space $T_x^*X$ (cf.\ \cite[p.\ 78]{Fischer} or \cite[p.\ 111]{KK}). The number $\dim_\mathbb{C} T_x X$ is the embedding dimension of the germ $X_x$ of $X$ at $x$. The differential $df_x\colon T_x X\to \mathbb{C}$ of $f\in\mathcal{O}_{X,x}$ is determined by the class $f-f(x) \in \mathfrak{m}_x/\mathfrak{m}_x^2=T_x^*X$, so $f$ is critical at $x$ if and only if $df_x=0$. If $X_x=\bigcup_{j=1}^k V_j$ is a decomposition into local irreducible components, then $f$ is strongly noncritical at $x$ if and only if $df_x\colon T_x V_j\to\mathbb{C}$ is nonvanishing for every $j=1,\ldots,k$. At a regular point $x\in X_\reg$ these notions coincide with the usual ones: $x$ is a critical point of $f$ if and only if in some (hence in any) local holomorphic coordinates $z=(z_1,\ldots, z_n)$ on a neighborhood of $x$, with $z(x)=0$ and $n=\dim_x X$, we have $\frac{\partial f}{\partial z_j}(0)=0$ for $j=1,\ldots,n$. Hence the set $\Crit(f)$ of all critical points of a holomorphic function on a {\em complex manifold} $X$ is a closed complex subvariety of $X$. On a Stein manifold $X$ this set is discrete for a generic choice of $f\in\mathcal{O}(X)$. The following is our main result. \begin{theorem} \label{th:mainbis} On every reduced Stein space $X$ there exists a holomorphic function which is strongly noncritical at every point. Furthermore, given a closed discrete set $P=\{p_1,p_2,\ldots\}$ in $X$, germs $f_k\in\mathcal{O}_{X,p_k}$ and integers $n_k\in\mathbb{N}$, there exists a holomorphic function $F\in\mathcal{O}(X)$ which is strongly noncritical on $X\setminus P$ and agrees with the germ $f_k$ to order $n_k$ at the point $p_k\in P$ (i.e., $F_{p_k}-f_k\in \mathfrak{m}_{p_k}^{n_k}$) for every $k=1,2,\ldots$. 
\end{theorem} The hypothesis on the set $P$ in Theorem \ref{th:mainbis} is a natural one since the critical locus of a generic holomorphic function on a Stein space is discrete (see Corollary \ref{cor:generic}). We also prove the following result (cf.\ Theorem \ref{th:stratbis} and Corollary \ref{cor:extension}). Given a closed complex subvariety $X'$ of a reduced Stein space $X$ and a function $f \in \mathcal{O}(X')$, there exists $F\in \mathcal{O}(X)$ such that $F|_{X'}=f$ and $F$ is strongly noncritical on $X\setminus X'$, or it has critical points at a prescribed discrete set contained in $X\setminus X'$. The proof of these results for Stein manifolds in \cite{FF:Acta} relies on two main ingredients: \begin{itemize} \item[\rm (i)] the Runge approximation theorem for noncritical holomorphic functions on polynomially convex subsets of $\mathbb{C}^n$ by entire noncritical functions (cf.\ \cite[Theorem 3.1]{FF:Acta} or \cite[Theorem 8.11.1, p.\ 381]{FF:book}), and \item[\rm (ii)] a splitting lemma for biholomorphic maps close to the identity on a Cartan pair (cf.\ \cite[Theorem 4.1]{FF:Acta} or \cite[Theorem 8.7.2]{FF:book}). \end{itemize} These tools do not apply directly at singular points of $X$. In addition, the following two phenomena make the analysis very delicate. Firstly, the critical locus of a holomorphic function $f\in\mathcal{O}(X)$ need not be a closed complex subvariety of $X$ near a singularity. A simple example is $X=\{zw=0\}\subset \mathbb{C}^2_{(z,w)}$ and $f(z,w)=z$ with $\Crit(f)= \{(0,w)\colon w\ne 0\}$; for another example on an irreducible isolated surface singularity see Example \ref{ex:null} in \S \ref{sec:prel}. However, we will show that $\Crit(f|X_\reg)\cup X_\sing$ is a closed complex subvariety of $X$ (cf.\ Lemma \ref{lem:crit}). Secondly, the class of noncritical (or strongly noncritical) functions is not stable under small perturbations on compact sets which include singular points of $X$; see Example \ref{ex:null2}. 
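To make the first phenomenon transparent, here is a short verification of the stated critical locus for $X=\{zw=0\}$ and $f(z,w)=z$ (our computation, using only the definitions above):

```latex
% Verification (ours) of the critical locus in the example X = {zw = 0}, f = z.
At a point $p=(0,w_0)$ with $w_0\ne 0$, the variety $X$ is locally the smooth
curve $\{z=0\}$, so $T_p X$ is spanned by $\partial/\partial w$ and
\[
    df_p(\partial/\partial w) = dz(\partial/\partial w) = 0,
\]
whence $p\in\Crit(f)$. At the origin, however, $T_0 X=\mathbb{C}^2$
(since $\mathfrak{m}_0/\mathfrak{m}_0^2$ is spanned by the classes of $z$
and $w$) and $df_0 = dz \ne 0$ on $T_0 X$, so $0\notin\Crit(f)$. Hence
$\Crit(f)=\{(0,w)\colon w\ne 0\}$, which is indeed not closed in $X$.
```

Note that $f$ is noncritical at the origin but not strongly noncritical there, since $f$ restricts to the zero function on the component $\{z=0\}$.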
The key idea used in this paper stems from the following observation: \noindent {\em (*) If $S\subset X$ is a local complex submanifold of positive dimension at a point $x\in S$, and if the restriction of a function $f\in \mathcal{O}(X)$ to $S$ is noncritical at $x$, then $f$ is noncritical at $x$ (as a function on $X$). If such $S$ is contained in every local irreducible component of $X$ at $x$, then $f$ is strongly noncritical at $x$.} This observation naturally leads one to consider complex analytic stratifications of a Stein space and to construct holomorphic functions that are noncritical on every stratum. Recall that a (complex analytic) {\em stratification} $\Sigma=\{S_j\}$ of a complex space $X$ is a subdivision of $X$ into the union $X=\bigcup_j S_j$ of at most countably many pairwise disjoint connected complex manifolds $S_j$, called the {\em strata} of $\Sigma$, such that \begin{itemize} \item every compact set in $X$ intersects at most finitely many strata, and \item $bS=\overline S\setminus S$ is a union of lower dimensional strata for every $S\in\Sigma$. \end{itemize} Such a pair $(X,\Sigma)$ is called a {\em stratified complex space}. Every complex analytic space admits a stratification (cf.\ Whitney \cite{Whitney,Whitney2}). An example is obtained by taking $X=X_0\supset X_1\supset \cdots$, where $X_{j+1}=(X_j)_\sing$ for every $j$, and decomposing the smooth differences $X_j\setminus X_{j+1}$ into connected components. This chain of subvarieties is stationary on each compact subset of $X$. \begin{definition} \label{def:stratified-noncritical} Let $(X,\Sigma)$ be a stratified complex space. A function $f\in\mathcal{O}(X)$ is said to be a {\em stratified noncritical holomorphic function} on $(X,\Sigma)$, or a {\em $\Sigma$-noncritical function}, if the restriction $f|_{S}$ to any stratum $S\in \Sigma$ of positive dimension is a noncritical function on $S$. 
\end{definition} Clearly the critical locus of a $\Sigma$-noncritical function on $(X,\Sigma)$ is contained in the union $X_0$ of all $0$-dimensional strata of $\Sigma$; note that $X_0$ is a discrete subset of $X$. \begin{theorem} \label{th:stratified} On every stratified Stein space $(X,\Sigma)$ there exists a $\Sigma$-noncritical holomorphic function $F\in \mathcal{O}(X)$. Furthermore, $F$ can be chosen to agree to a given order $n_x\in \mathbb{N}$ with a given germ $f_x\in \mathcal{O}_{X,x}$ at any $0$-dimensional stratum $\{x\}\in \Sigma$. \end{theorem} Theorem \ref{th:stratified} is proved in \S\ref{sec:stratified}. Assuming it for the moment, we give \noindent{\em Proof of Theorems \ref{th:main} and \ref{th:mainbis}.} We may assume that $X$ has no isolated points. Choose a complex analytic stratification $\Sigma$ of $X$ containing a given discrete set $P\subset X$ in the union $X_0=\{p_1,p_2,\ldots\}$ of its zero dimensional strata. For every $i=1,2,\ldots$ let $X_i$ denote the union of all strata of dimension at most $i$ (the {\em $i$-skeleton} of $\Sigma$). Note that $X_i$ is a closed complex subvariety of $X$ (since the boundary of each stratum is a union of lower dimensional strata), the difference $X_{i}\setminus X_{i-1}$ is either empty or a complex manifold of dimension $i$, and \begin{equation} \label{eq:skeleton} X_0\subset X_1\subset X_2\subset\cdots \subset \bigcup_{i=0}^\infty X_i=X. \end{equation} Given germs $f_k\in \mathcal{O}_{X,p_k}$ $(p_k\in X_0)$ and integers $n_k\in \mathbb{N}$, Theorem \ref{th:stratified} furnishes a $\Sigma$-noncritical function $F\in\mathcal{O}(X)$ such that $F_{p_k}-f_{k} \in \mathfrak{m}_{p_k}^{n_k}$ for every $p_k\in X_0$. We claim that $F$ is strongly noncritical on $X\setminus X_0$. Indeed, given a point $x\in X\setminus X_0$, pick the smallest $i\in\mathbb{N}$ such that $x\in X_i$, so $x\in X_i\setminus X_{i-1}$ which is a complex manifold of dimension $i$. 
Let $S_i\subset X_i\setminus X_{i-1}$ be the connected component containing $x$. Then the germ of $S_i$ at $x$ is contained in every local irreducible component of $X$ at $x$. Since $x$ is a noncritical point of $F|_{S_i}$, it follows from (*) that $F$ is strongly noncritical at $x$, thereby proving the claim. By choosing $f_k$ to be strongly noncritical at $p_k\in X_0$ we obtain a function $F\in\mathcal{O}(X)$ that is strongly noncritical on $X$. (To get a strongly noncritical function at a point $p\in X$, we can embed $X_p$ as a local complex subvariety of the Zariski tangent space $T_p X\cong\mathbb{C}^N$ and choose a linear function on $T_pX$ which is nondegenerate on the tangent space to every local irreducible component of $X$.) \qed The proof of Theorem \ref{th:stratified} (cf.\ \S\ref{sec:stratified}) proceeds by induction on the skeleta $X_i$ (\ref{eq:skeleton}). The main induction step is furnished by Theorem \ref{th:th1}, which provides holomorphic functions on a Stein space which have no critical points in the regular locus. When passing from $X_{i-1}$ to $X_{i}$, we first apply the transversality theorem to show that the critical locus of a generic holomorphic extension of a given function on $X_{i-1}$ is discrete and does not accumulate on $X_{i-1}$ (cf.\ Lemma \ref{lem:generic}). We then extend the function to $X_{i}$ without creating any critical points in $X_{i}\setminus X_{i-1}$, keeping it fixed to a high order along $X_{i-1}$. To this end we adjust one of the main tools from \cite{FF:Acta}, namely {\em the splitting lemma for biholomorphic maps close to the identity} on a Cartan pair \cite[Theorem 4.1]{FF:Acta}, to the setting of Stein spaces; see Theorem \ref{th:splitting} in \S\ref{sec:gluing} below. Besides its original use, this splitting lemma from \cite{FF:Acta} has found a variety of applications. 
In particular, it was used for exposing boundary points of certain classes of pseudoconvex domains, a technique applied in the constructions of proper holomorphic embeddings of open Riemann surfaces into $\mathbb{C}^2$ \cite{FW2009,FW2013,Majcen2009}, in the construction of complete bounded complex curves in $\mathbb{C}^n$ and minimal surfaces in $\mathbb{R}^3$ \cite{AlF1,AlF3}, and in the study of the {\em holomorphic squeezing function} of domains in $\mathbb{C}^n$ \cite{DGZ,DFW,FW2014}. We hope that Theorem \ref{th:splitting} in this paper will also prove useful for other purposes. As explained in Remark \ref{rem:3.2gen}, Theorem \ref{th:splitting} and its proof can be generalized to the case when the biholomorphic map to be decomposed, and possibly also the underlying Cartan pair, depend on some additional parameters. We mention a couple of immediate corollaries of Theorem \ref{th:stratified}. \begin{corollary} Let $(X,\Sigma)$ be a stratified Stein space. Given a closed discrete set $P$ in $X$, there exists a holomorphic function $F\in\mathcal{O}(X)$ such that for any stratum $S\in \Sigma$ with $\dim S>0$ we have $\Crit(F|_S)= P\cap S$. \end{corollary} This follows from Theorem \ref{th:stratified} applied to a substratification $\Sigma'$ of $\Sigma$ which contains the given discrete set $P$ in the zero dimensional skeleton. By considering the level sets of a function satisfying Theorem \ref{th:stratified} we obtain the following. \begin{corollary} \label{cor:foliation} Every stratified Stein space $(X,\Sigma)$ admits a holomorphic foliation $\mathcal{L}=\{L_a\}_{a\in A}$ with closed leaves such that for every stratum $S\in\Sigma$ the restricted foliation $\mathcal{L}|_S=\{L_a\cap S\}_{a\in A}$ is a nonsingular hypersurface foliation on $S$. 
\end{corollary} In the remainder of this introduction we indicate how Theorems \ref{th:main}, \ref{th:mainbis}, and \ref{th:stratified} imply results concerning critical loci of holo\-mor\-phic functions on complex manifolds which are obtained by desingularizing Stein spaces. The simplest example of this type is obtained by desingularizing a Stein space $Y$ with isolated singular points $Y_\sing=\{p_1,p_2,\ldots\}$. Let $\pi:X\to Y$ be a desingularization (cf.\ \cite{AHV,BM,Hironaka}). The fiber $E_j= \pi^{-1}({p_j})$ over any singular point of $Y$ is a connected compact complex subvariety of $X$ of positive dimension with negative normal bundle in the sense of Grauert \cite{Grauert:modif}. (A local strongly plurisubharmonic function near $p_j\in Y$ pulls back to a function that is strongly plurisubharmonic on a deleted neighborhood of $E_j$.) The set $\mathcal{E} =\pi^{-1}(Y_\sing) =\bigcup_j E_j$ is a complex subvariety of $X$ with compact irreducible components of positive dimension, and $\mathcal{E}$ contains any compact complex subvariety of $X$ without $0$-dimensional components. Furthermore, we have $\pi_* \mathcal{O}_X = \mathcal{O}_Y$ and $\pi\colon X\to Y$ is the {\em Remmert reduction} of $X$ \cite{Grauert:modif,Remmert}. If $Y$ has only finitely many singular points then the manifold $X$ is {\em $1$-convex} and $\mathcal{E}$ is the {\em exceptional variety} of $X$ \cite{Grauert:q-convexity}. By choosing a noncritical function $g\in \mathcal{O}(Y)$ furnished by Theorem \ref{th:main}, the function $f=g\circ\pi\in \mathcal{O}(X)$ clearly satisfies $\Crit(f)\subset \mathcal{E}$. Similarly, if $A$ is a discrete set in $X$ then $\pi(A)$ is discrete in $Y$, and by choosing $g\in \mathcal{O}(Y)$ with $\Crit (g) =\pi(A)$ we get a function $f=g\circ\pi\in \mathcal{O}(X)$ with $\Crit(f) \setminus \mathcal{E}= A\setminus \mathcal{E}$. If $A$ intersects every connected component of $\mathcal{E}$, we have $\Crit(f)=A\cup \mathcal{E}$. 
This gives the following corollary. \begin{corollary} \label{cor:1convex} A $1$-convex manifold $X$ with the exceptional variety $\mathcal{E}$ admits a holomorphic function $f\in \mathcal{O}(X)$ with $\Crit(f)\subset \mathcal{E}$. Furthermore, given a closed discrete set $A$ in $X$, there exists a function $f\in \mathcal{O}(X)$ with $\Crit(f)=A\cup \mathcal{E}$. \end{corollary} In general we cannot find a holomorphic function $f\in \mathcal{O}(X)$ on a $1$-convex manifold $X$ that is noncritical at every point of the exceptional variety $\mathcal{E}$ of $X$. Indeed, assume that $E$ is a smooth component of $\mathcal{E}$. Since $E$ is compact, the restriction $f|_E$ is constant, so the differential of $f$ vanishes along $E$ in the directions tangential to $E$. Hence, if $df_x\ne 0$ for all $x\in E$, the differential defines a nowhere vanishing section of the conormal bundle of $E$ in $X$, a nontrivial condition which does not always hold as is seen in the following example. \begin{example} \label{ex1} Fix an integer $n>1$. Let $X$ be $\mathbb{C}^n$ blown up at the origin, and let $\pi\colon X\to \mathbb{C}^n$ denote the base point projection. The exceptional variety is $\mathcal{E}=\pi^{-1}(0) \cong \mathbb{C}\mathbb{P}^{n-1}$. The conormal bundle of $\mathcal{E}$ is the line bundle $\mathcal{O}_{\mathbb{C}\mathbb{P}^{n-1}}(+1)$ which does not admit any nonvanishing sections, so $X$ does not admit any noncritical holomorphic functions. On the other hand, the function $g(z)=z_1^2+z_2^2+\cdots+z_n^2$ on $\mathbb{C}^n$, with $\Crit(g)=\{0\}$, pulls back to a holomorphic function $f=g\circ \pi\in\mathcal{O}(X)$ with $\Crit(f)=\mathcal{E}$. Similarly, the coordinate function $z_j$ on $\mathbb{C}^n$ pulls back to a holomorphic function $z_j\circ \pi=\pi_j$ which is noncritical on $X\setminus \mathcal{E}\cong \mathbb{C}^n\setminus\{0\}$, and \[ \Crit(\pi_j) = \{[z_1\colon z_2 \colon \cdots \colon z_{n}] \in \mathcal{E}\colon z_j=0\} \cong \mathbb{C}\mathbb{P}^{n-2}. 
\] Hence the critical locus may be a proper subvariety of the exceptional variety. \qed\end{example} \begin{problem} Let $X$ be a $1$-convex manifold. Which closed analytic subsets of its exceptional variety $\mathcal{E}$ are critical loci of holomorphic functions on $X$? \end{problem} Going a step further, recall that a complex space $X$ is said to be {\em holomorphically convex} if for any compact set $K\subset X$ its $\mathcal{O}(X)$-convex hull \[ \widehat K_{\mathcal{O}(X)} =\{x\in X\colon |f(x)|\le \sup_K |f|\ \ \forall f\in \mathcal{O}(X)\} \] is also compact. This class contains all $1$-convex spaces, and many more besides. For example, the total space of any holomorphic fiber bundle $X\to Y$ with a compact fiber over a Stein space $Y$ is holomorphically convex. By Remmert \cite{Remmert}, every holomorphically convex space $X$ admits a proper holomorphic surjection $\pi\colon X\to Y$ onto a Stein space $Y$ such that the (compact) fibers of $\pi$ are connected, $\pi_* \mathcal{O}_X = \mathcal{O}_Y$, the map $f\mapsto f\circ \pi$ is an isomorphism of $\mathcal{O}(Y)$ onto $\mathcal{O}(X)$, and every holomorphic map $X\to S$ to a Stein space $S$ factors through $\pi$. If $g\in\mathcal{O}(Y)$ is a noncritical function on $Y$ furnished by Theorem \ref{th:main}, then the function $f = g\circ \pi\in \mathcal{O}(X)$ is noncritical on the set where $\pi$ is a submersion. What else could be said? Another possible line of investigation is the following. In \cite{FF:Acta} we proved that on any Stein manifold $X$ of dimension $n$ there exist $q=\left[ \frac{n+1}{2} \right]$ holomorphic functions $f_1,\ldots,f_q\in \mathcal{O}(X)$ with pointwise independent differentials, i.e., such that $df_1 \wedge df_2\wedge\cdots \wedge df_q$ is a nowhere vanishing holomorphic $(q,0)$-form on $X$, and this number $q$ is maximal in general for topological reasons. 
Furthermore, we have the h-principle for holomorphic submersions $X\to\mathbb{C}^q$ for any $q<n=\dim X$, saying that every $q$-tuple of pointwise linearly independent continuous $(1,0)$-forms can be deformed to a $q$-tuple of linearly independent holomorphic differentials $df_1,\ldots, df_q$. What could be said regarding this problem on Stein spaces? For example: \begin{problem} Assume that $X$ is a pure $n$-dimensional Stein space and let $q$ be as above. Do there exist functions $f_1,\ldots,f_q\in \mathcal{O}(X)$ such that $df_1 \wedge df_2\wedge\cdots \wedge df_q$ is nowhere vanishing on $X_\reg$? What is the answer if $X$ has only isolated singularities? \end{problem} Our methods strongly rely on the fact that the critical locus of a generic holomorphic function on a Stein space is discrete (see \S\ref{sec:prel}). If $q>1$ then the set $\{df_1 \wedge df_2\wedge\cdots \wedge df_q=0\}$ (if nonempty) is a subvariety of complex dimension $\ge q-1>0$, and we do not know how to ensure nonvanishing of this form on a deleted neighborhood of a subvariety of $X$ as in the case $q=1$. The problem seems nontrivial even for an isolated singular point of $X$. \section{Critical points of a holomorphic function on a complex space} \label{sec:prel} We begin by recalling certain basic facts of complex analytic geometry. Let $(X,\mathcal{O}_X)$ be a reduced complex space. Following standard practice we shall simply write $X$ in the sequel. We denote by $\mathcal{O}(X)\cong \Gamma(X,\mathcal{O}_X)$ the algebra of all holomorphic functions on $X$. Given a holomorphic function $f$ on an open set $U\subset X$, we denote by $f_p\in \mathcal{O}_{X,p}$ the germ of $f$ at a point $p\in U$. Similarly, $X_p$ stands for the germ of $X$ at a point $p\in X$. By $\mathfrak{m}_p=\mathfrak{m}_{X,p}$ we denote the maximal ideal of the local ring $\mathcal{O}_{X,p}$, so $\mathcal{O}_{X,p}/\mathfrak{m}_p\cong\mathbb{C}$. 
We say that $f\in\mathcal{O}_{X,p}$ {\em vanishes to order $k \in \mathbb{N}$ at the point $p$} if $f\in\mathfrak{m}_p^{k}$ (the $k$-th power of the maximal ideal). The quotient ring $\mathcal{O}_{X,p}/\mathfrak{m}_p^{k}\cong \mathbb{C} \oplus \mathfrak{m}_p/\mathfrak{m}_p^{k}$ is a finite dimensional complex vector space, called the {\em space of $(k-1)$-jets of holomorphic functions on $X$ at $p$}. Recall that $\mathfrak{m}_p/\mathfrak{m}_p^2\cong T_p^*X$ is the Zariski cotangent space and its dual $(\mathfrak{m}_p/\mathfrak{m}_p^2)^* \cong T_p X$ is the Zariski tangent space of $X$ at $p$. If $X'$ is a complex subvariety of $X$ and $p\in X'$, then the maximal ideal $\mathfrak{m}_{X',p}$ of the ring $\mathcal{O}_{X',p}$ consists of all germs at $p$ of restrictions $f|_{X'}$ with $f\in \mathfrak{m}_{X,p}$. \begin{lemma} \label{lem:germs} Let $X'$ be a closed complex subvariety of a complex space $X$ and $p\in X'$. If $f\in \mathcal{O}_{X',p}$ and $h\in \mathcal{O}_{X,p}$ are such that $f-(h|_{X'})_p \in \mathfrak{m}_{X',p}^{k}$ for some $k\in\mathbb{N}$, then there exists $\tilde h \in \mathcal{O}_{X,p}$ such that $\tilde h -h\in \mathfrak{m}_{X,p}^{k}$ and $(\tilde h|_{X'})_p = f \in \mathcal{O}_{X',p}$. \end{lemma} \begin{proof} The conditions imply that $f = (h|_{X'})_p + \sum_j \xi_{j,1}\xi_{j,2}\cdots \xi_{j,k}$ where $\xi_{j,i}\in \mathfrak{m}_{X',p}$ for all $i$ and $j$. Then $\xi_{j,i}=\tilde \xi_{j,i}|_{X'}$ for some $\tilde \xi_{j,i}\in\mathfrak{m}_{X,p}$, and the germ $\tilde h = h +\sum_j \tilde \xi_{j,1}\tilde \xi_{j,2}\cdots \tilde \xi_{j,k} \in\mathcal{O}_{X,p}$ satisfies the stated properties. \end{proof} Given a function $f\in\mathcal{O}(X)$, the collection of its differentials $df_x\colon T_x X\to\mathbb{C}$ over all points $x\in X$ defines the {\em tangent map} $Tf\colon TX\to X\times \mathbb{C}$ on the {\em tangent space} $TX=\bigcup_{x\in X} T_x X$. 
Recall that $TX$ carries the structure of a not necessarily reduced {\em linear space} over $X$ such that the tangent map $Tf$ is holomorphic. Here is a local description of $TX$ (see e.g.\ \cite[Chapter 2]{Fischer}). Assume that $X$ is a closed complex subvariety of an open set $U\subset \mathbb{C}^N$, defined by holomorphic functions $h_1,\ldots,h_m \in \mathcal{O}(U)$ which generate the sheaf of ideals $\mathcal{J}_X$ of $X$ (hence $\mathcal{O}_{X}\cong (\mathcal{O}_U/\mathcal{J}_X)|_X$). Let $(z_1,\ldots,z_N,\xi_1,\ldots,\xi_N)$ be complex coordinates on $U\times \mathbb{C}^N$. Then $TX$ is the closed complex subspace of $U\times \mathbb{C}^N$ generated by the functions \begin{equation} \label{eq:TX} h_1,\ldots,h_m\ \ {\rm and}\ \ \frac{\partial h_i}{\partial z_1}\,\xi_1 + \cdots + \frac{\partial h_i}{\partial z_N}\,\xi_N \ \ {\rm for}\ \ i=1,\ldots,m. \end{equation} This means that $TX$ is the common zero set of the above functions and its structure sheaf $\mathcal{O}_{TX}$ is the quotient of $\mathcal{O}_{U\times \mathbb{C}^N}$ by the ideal generated by them. The projection $TX\to X$ is the restriction of the projection $U\times \mathbb{C}^N\to U$, $(z,\xi)\mapsto z$. Different local representations of $X$ give isomorphic representations of $TX$. If $X$ is a complex manifold then $TX$ is the usual tangent bundle of $X$; this holds in particular over the regular locus $X_{\reg}$ of any complex space. Since the critical locus $\Crit(f)$ of a holomorphic function $f\in\mathcal{O}(X)$ is the set of points $x\in X$ at which the differential $df_x\colon T_x X\to\mathbb{C}$ vanishes, one might expect that $\Crit(f)$ is a closed complex subvariety of $X$. This is clearly true if $X$ is a complex manifold (in particular, it holds on the regular locus $X_\reg$ of any complex space), but it fails in general near singularities. Furthermore, unlike in the smooth case, the set of (strongly) noncritical holomorphic functions is not stable under small deformations.
The following examples illustrate these phenomena in the simple setting of an irreducible quadratic surface singularity in $\mathbb{C}^3$. \begin{example} \label{ex:null} Let $A$ be the subvariety of $\mathbb{C}^3$ given by \begin{equation} \label{eq:null} A=\{(z_1,z_2,z_3) \in \mathbb{C}^3 \colon h(z)=z_1^2+z_2^2+z_3^2=0\}. \end{equation} (In the theory of minimal surfaces this is called the {\em null quadric}, and a complex curve in $\mathbb{C}^3$ whose derivative belongs to $A^*=A\setminus \{(0,0,0)\}$ is said to be an (immersed) {\em null holomorphic curve}. Such curves are related to conformally immersed minimal surfaces in $\mathbb{R}^3$. See e.g.\ \cite{Osserman} for a classical survey of this subject and \cite{AlF3} for some recent results.) Clearly $A_\sing =\{(0,0,0)\}$, $A$ is locally and globally irreducible, and $T_{(0,0,0)}A=\mathbb{C}^3$. For any $\lambda=(\lambda_1,\lambda_2,\lambda_3)\in\mathbb{C}^3\setminus\{(0,0,0)\}$ the linear function \[ f_\lambda (z_1,z_2,z_3)= \lambda_1 z_1 + \lambda_2 z_2 + \lambda_3 z_3 \] restricted to $A$ is strongly noncritical at $(0,0,0)$. Clearly $df_\lambda$ is colinear with $dh=2(z_1dz_1+z_2dz_2+z_3dz_3)$ precisely along the complex line $\Lambda=\mathbb{C}\lambda= \{t\lambda\colon t\in \mathbb{C}\}$. If $\lambda\in A^*$, it follows that $\Crit({f_\lambda}|_A)=\Lambda\setminus \{0\}$, which is not closed. An explicit example is obtained by taking \[ \lambda=(1,\imath,0) \in A^*,\qquad f(z)=z_1+\imath z_2. \] (Here $\imath=\sqrt{-1}$.) \qed\end{example} Let us now show on the same example that the set of (strongly) noncritical functions fails to be stable under small deformations. \begin{example} \label{ex:null2} Let $A$ be the quadric (\ref{eq:null}). Consider the family of functions \[ f_\epsilon(z_1,z_2,z_3)= z_1+ z_1(z_1-2\epsilon) + \imath z_2 ,\qquad \epsilon\in\mathbb{C}.
\] Since $(df_\epsilon)_0=(1-2\epsilon)dz_1+\imath dz_2$, $f_\epsilon|_A$ is (strongly) noncritical at the origin for any $\epsilon\in\mathbb{C}$. A calculation shows that for $\epsilon\ne 1/2$ the differentials $df_\epsilon$ and $dh$ (considered on the tangent bundle $T\mathbb{C}^3$) are colinear precisely at points of the complex curve \[ C_\epsilon =\{(z_1,z_2,0)\in\mathbb{C}^3: z_2=\imath z_1/(2z_1-2\epsilon+1)\}. \] This curve intersects the quadric $A$ at the following three points: \[ A\cap C_\epsilon =\{(0,0,0), (\epsilon,\imath\epsilon,0), (\epsilon-1,-\imath(\epsilon-1),0)\}. \] Hence the second and the third of these points are the critical points of $f_\epsilon|_A$ when $ \epsilon\notin\{0,1\}$. For $\epsilon$ close to $0$ the point $(\epsilon,\imath\epsilon,0)$ lies close to the origin, while the third point is close to $(-1,\imath,0)$. Hence the function $f_0|_A$ is noncritical on the intersection of $A$ with the ball of radius $1/2$ around the origin in $\mathbb{C}^3$, but $f_\epsilon|_A$ for small $\epsilon\ne 0$ is close to $f_0$ and has a critical point $(\epsilon,\imath\epsilon,0)\in A$ near the origin. \qed\end{example} Although we have seen in Example \ref{ex:null} that $\Crit(f)$ need not be a closed complex subvariety near singular points of a complex space, we still have the following result. \begin{lemma} \label{lem:crit} Let $f$ be a holomorphic function on a complex space $X$. If $X'\subset X$ is a closed complex subvariety of $X$ containing the singular locus $X_\sing$ of $X$, then the set \[ C_{X'}(f) := \{x\in X_\reg\colon df_x=0\} \cup {X'} \] is a closed complex subvariety of $X$. \end{lemma} \begin{proof} By the desingularization theorem \cite{AHV,BM,Hironaka} there are a complex manifold $M$ and a proper holomorphic surjection $\pi\colon M\to X$ such that $\pi\colon M\setminus \pi^{-1}(X_{\rm sing})\to X\setminus X_{\rm sing}$ is a biholomorphism and $\pi^{-1}(X_{\rm sing})$ is a complex hypersurface in $M$.
Given a function $f\in\mathcal{O}(X)$, consider the function $F=f\circ\pi\in\mathcal{O}(M)$ and the subvariety $M'=\pi^{-1}(X')$ of $M$. Since $M$ is a complex manifold, the critical locus $\Crit(F)\subset M$ is a closed complex subvariety of $M$, and hence so is the set $C_{M'}(F)=\Crit(F)\cup M'$. As $\pi$ is proper, $\pi(C_{M'}(F))$ is a closed complex subvariety of $X$ according to Remmert's proper mapping theorem \cite{Remmert2}. Since $\pi$ is biholomorphic over $X_\reg$, we have that $\pi(C_{M'}(F))=C_{X'}(f)$, which proves the result. \end{proof} In spite of the lack of stability of noncritical functions, illustrated by Example \ref{ex:null2}, we shall obtain a certain stability result (cf.\ Lemma \ref{lem:stability} below) which will be used in the construction of stratified noncritical holomorphic functions on Stein spaces. Given a compact set $K$ in a complex space $X$, we denote by $\mathcal{O}(K)$ the space of all functions $f$ that are holomorphic on an open neighborhood $U_f \subset X$ of $K$ (depending on the function), identifying two functions that agree on some neighborhood of $K$. By $\mathring K$ we denote the topological interior of a set $K$. For any coherent analytic sheaf $\mathcal{F}$ on a complex space $X$ the $\mathcal{O}(X)$-module $\mathcal{F}(X)=\Gamma(X,\mathcal{F})$ of all global sections of $\mathcal{F}$ over $X$ can be endowed with a Fr\'echet space topology (the topology of uniform convergence on compacts in $X$) such that for every point $x\in X$ the natural restriction map $\mathcal{F}(X)\to \mathcal{F}_x$ is continuous (see Theorem 5 in \cite[p.\ 167]{Grauert-Remmert1979}). The topology on the stalks $\mathcal{F}_x$ is the {\em sequence topology} (cf.\ \cite[p.\ 86ff]{GR-Stellenalgebren}). Thus every set of the second category in $\mathcal{F}(X)$ (an intersection of at most countably many open dense sets) is dense in $\mathcal{F}(X)$.
The expression {\em generic holomorphic function} on $X$ will always mean a function in a certain set of the second category in $\mathcal{O}(X)$, and likewise for $\mathcal{F}(X)$. If $\mathcal{S}$ is a coherent subsheaf of a coherent sheaf $\mathcal{F}$ over $X$ then $\mathcal{S}(X)$ is a closed submodule of $\mathcal{F}(X)$ (the Closedness Theorem, cf.\ \cite[p.\ 169]{Grauert-Remmert1979}). Since every $\mathcal{O}_{X,x}$-submodule $M$ of the module $\mathcal{F}_x$ is closed in the sequence topology, it follows that $\{f \in \mathcal{F}(X)\colon f_x\in M\}$ is a closed subspace of $\mathcal{F}(X)$, hence a Fr\'echet space. In particular, if $X'$ is a closed complex subvariety of a complex space $X$ and $\mathcal{J}_{X'}$ is the sheaf of ideals of $X'$ (a coherent subsheaf of $\mathcal{O}_X$), then \[ \mathcal{J}(X'):= \Gamma(X,\mathcal{J}_{X'}) = \{f\in \mathcal{O}(X)\colon f|_{X'}=0\} \] is a closed (hence Fr\'echet) ideal in $\mathcal{O}(X)$. Given a function $g\in \mathcal{O}(X')$ on a closed complex subvariety $X'\subset X$, the set \begin{equation} \label{eq:Xprimeg} \mathcal{O}_{X',g}(X)=\{f\in \mathcal{O}(X)\colon f|_{X'}=g\} \end{equation} is a closed affine subspace of $\mathcal{O}(X)$ and hence a Baire space. The Closedness Theorem \cite[p.\ 169]{Grauert-Remmert1979} shows that for any point $x\in X$ and $k\in\mathbb{N}$ the set \[ \{f\in\mathcal{O}(X)\colon f_x - f(x) \in \mathfrak{m}_x^k\} \] is closed in $\mathcal{O}(X)$. For $k=2$ this is the set of functions with a critical point at $x$. Let $X_x=\bigcup_{j=1}^m V_j$ be a decomposition into local irreducible components at a point $x\in X$. According to Definition \ref{def:critical}, a function $f\in\mathcal{O}_{X,x}$ fails to be strongly noncritical at $x$ if there is a $j\in\{1,\ldots,m\}$ such that $(f|_{V_j})_x-f(x)\in \mathfrak{m}_{V_j,x}^2$. This defines a closed subset of $\mathcal{O}_{X,x}$, so the set of all strongly noncritical germs is open in $\mathcal{O}_{X,x}$. 
Since the restriction maps in the space of sections of a coherent sheaf are continuous, we get the following conclusion. \begin{lemma}\label{lem:stability0} The set of all functions $f\in\mathcal{O}(X)$ which are noncritical (or strongly noncritical) at a certain point $x\in X$ is open in $\mathcal{O}(X)$. \end{lemma} However, Example \ref{ex:null2} above shows that the set of functions $f\in\mathcal{O}(X)$ that are noncritical (or strongly noncritical) on a certain compact set $K\subset X$ may fail to be open in $\mathcal{O}(X)$, unless $K$ is contained in the regular locus $X_\reg$. The following result is \cite[Lemma 3.1, p.\ 52]{FP3} in the case that $X$ is a Stein manifold; we shall need it also when $X$ is a Stein space. (We correct a misprint in the original source.) \begin{lemma}[{\bf Bounded extension operator}] \label{lem:FP3} Let $X$ be a Stein space, $X'$ be a closed complex subvariety of $X$, and $\Omega\Subset X$ be a Stein domain in $X$. For any relatively compact subdomain $D\Subset \Omega$ there exists a bounded linear extension operator $T\colon \mathcal{H}^\infty(\Omega\cap X') \to \mathcal{H}^\infty(D)$ such that \[ (Tf)(x)=f(x)\quad \forall f\in \mathcal{H}^\infty(\Omega\cap X'),\ \forall x\in D\cap X'. \] \end{lemma} \begin{proof} Choose a Stein neighborhood $W\Subset X$ of the compact set $\overline\Omega$ and embed it as a closed complex subvariety (still denoted $W$) of some Euclidean space $\mathbb{C}^N$ (see \cite[Theorem 2.2.8]{FF:book} and the references therein). By Siu's theorem \cite{Siu1976} there is a Stein domain $\Omega'\Subset \mathbb{C}^N$ such that $\Omega= \Omega'\cap W$. Also choose a domain $D'$ in $\mathbb{C}^N$ such that $D\subset D'$ and $\overline D'\subset \Omega'$.
By \cite[Lemma 3.1]{FP3}, applied with the subvariety $X'\cap W$ of the Stein manifold $\mathbb{C}^N$ and domains $D'\Subset \Omega' \Subset \mathbb{C}^N$, there exists a bounded linear extension operator $T'\colon \mathcal{H}^\infty(\Omega'\cap X') \to \mathcal{H}^\infty(D')$. Since $\Omega'\cap W=\Omega$, restricting the resulting function $T'f$ to $D\subset W\cap D'$ yields a bounded extension operator $T$ as in the lemma. \end{proof} \begin{lemma}[{\bf The Stability Lemma}] \label{lem:stability} Assume that $X$ is a complex space, $X'\subset X$ is a closed complex subvariety containing $X_\sing$, and $K\subset L$ are compact subsets of $X$ with $K\subset \mathring L$. Assume that $f\in\mathcal{O}(X)$ is noncritical on $L\setminus X'$. Then there exist an integer $r\in \mathbb{N}$ and a number $\epsilon>0$ such that the following holds. If a function $g\in \mathcal{O}(L)$ satisfies the conditions \begin{itemize} \item[\rm (i)] $f-g \in \Gamma(L,\mathcal{J}^r_{X'})$, where $\mathcal{J}_{X'}^r$ is the $r$-th power of the ideal sheaf $\mathcal{J}_{X'}$, and \item[\rm (ii)] $||f-g||_L := \sup_{x\in L}|f(x)-g(x)| <\epsilon$, \end{itemize} then $g$ has no critical points on $K\setminus X'$. \end{lemma} \begin{proof} The result holds on compact subsets of $X\setminus X' \subset X_\reg$ in view of Lemma \ref{lem:stability0}, so it suffices to consider the behavior of $g$ near $K\cap X'$. Fix a point $p\in K \cap X'$ and embed an open neighborhood $U\subset X$ of $p$ as a closed complex subvariety (still denoted $U$) of an open ball $B \subset\mathbb{C}^N$. We choose $U$ small enough such that $U\subset \mathring L$. Pick a slightly smaller ball $B'\Subset B$ and set $U':=B'\cap U$. Lemma \ref{lem:FP3} (applied with the domain $\Omega=B$ in $X=\mathbb{C}^N$, the subvariety $X'=U$, and the subdomain $D=B'\Subset B$) furnishes a bounded linear extension operator $T$ mapping bounded holomorphic functions on $U$ to bounded holomorphic functions on $B'$.
In the embedded picture, a point $x\in U' \setminus X' \subset B'$ is a critical point of $f$ if and only if the differential $d\tilde f_x \colon T_x\mathbb{C}^N\to\mathbb{C}$ of the extended function $\tilde f = Tf \in\mathcal{O}(B')$ annihilates the Zariski tangent space $T_x U$. The latter condition is expressed by a finite number of holomorphic equations $F_j(f)=0$ on $B'$ $(j=1,\ldots,k)$ involving the values and first order partial derivatives of $\tilde f$ and of some fixed holomorphic defining functions $h_1,\ldots, h_m$ for the subvariety $U$ in $B$. (These equations express the fact that the differential of $\tilde f$ is contained in the linear span of the differentials of the functions $h_1,\ldots, h_m$; compare with the local description (\ref{eq:TX}) of $TX$.) By the assumption this system of equations has no solutions on $U\setminus X'$. If a bounded function $g\in \mathcal{O}(U)$ agrees with $f$ to order $r$ along the subvariety $U \cap X'$ then, setting $\tilde g=Tg\in \mathcal{O}(B')$, the corresponding functions $F_j(g)|_{U'}$ agree with the functions $F_j(f)|_{U'}$ to order $r-1$ along $U'\cap X'$. By choosing the integer $r\in\mathbb{N}$ sufficiently large and the number $\epsilon$ bounded from above by some fixed number $\epsilon_0>0$, we can ensure that for any $g$ satisfying conditions (i) and (ii) the following system of holomorphic equations on $B' \subset \mathbb{C}^N$, \begin{equation} \label{eq:zeroset} h_1=0,\ldots, h_m=0,\quad F_1(g)=0,\ldots, F_k(g)=0, \end{equation} has no solutions in $W\setminus X'$, where $W\subset U'$ is a neighborhood of $p$ whose size depends on $r$ and $\epsilon_0$. (This essentially follows from the \L ojasiewicz inequality, see e.g.\ \cite{JKS}. The details of this argument can also be found in the proof of \cite[Theorem 1.3]{FF:CI}; see in particular pp.\ 507--509.
In fact, looking at the common zero set of the system (\ref{eq:zeroset}) as the inverse image of the origin $0\in \mathbb{C}^{m+k}$ by the holomorphic map $B'\to \mathbb{C}^{m+k}$ whose components are the functions in (\ref{eq:zeroset}), the local aspect of the cited result from \cite{FF:CI} applies verbatim.) Since finitely many open sets $U'$ of this kind cover $K\cap X'$, we see that the system (\ref{eq:zeroset}) has no solutions on a deleted neighborhood of $K\cap X'$ in $K$. By choosing $\epsilon>0$ small enough we can also ensure in view of Lemma \ref{lem:stability0} that there are no solutions on the rest of $K$. \end{proof} Lemma \ref{lem:stability} fails in general without the interpolation condition as shown by Example \ref{ex:null2}. Here is an even simpler example on the cusp curve, showing that being {\em stratified noncritical} (see Definition \ref{def:stratified-noncritical}) is not a stable property even on complex curves if we allow critical points in the zero-dimensional skeleton. \begin{example} \label{ex:nonstable} The cusp curve $X=\{(z,w)\in \mathbb{C}^2\colon z^2=w^3\}$ has a singularity at the origin $(0,0)\in \mathbb{C}^2$ and is smooth elsewhere. It is desingularized by the map $\pi\colon \mathbb{C}\to X$, $\pi(t)=(t^3,t^2)$. The function $f(z,w)=zw$ on $X$ pulls back to the function $h(t)=f(\pi(t))=t^5$ with the only critical point at $t=0$, so $f|_X$ is stratified noncritical with respect to $\{(0,0)\} \subset X$. The perturbation of $h$ given by \[ h_\epsilon(t)=t^3(t-\epsilon)^2= t^5 - 2\epsilon t^4 + \epsilon^2 t^3 = zw-2\epsilon w^2 + \epsilon^2 z \] induces a holomorphic function $f_\epsilon \colon X\to \mathbb{C}$ with a critical point at $(\epsilon^3,\epsilon^2)$, so $f_\epsilon$ is not stratified noncritical with respect to $\{(0,0)\} \subset X$ if $\epsilon\ne 0$. \qed\end{example} \begin{lemma}[{\bf The Genericity Lemma}] \label{lem:generic} Let $X$ be a Stein space.
\begin{itemize} \item[\rm (i)] For a generic $f\in \mathcal{O}(X)$ the set $A(f):=\Crit(f|_{X_\reg})$ is discrete in $X$. \item[\rm (ii)] If $X'\subset X$ is a closed complex subvariety containing $X_\sing$ and $g\in \mathcal{O}(X')$, then for a generic $f\in \mathcal{O}_{X',g}(X)$ the set $\Crit(f|_{X\setminus X'})$ is discrete in $X$. In particular, a generic holomorphic extension of $g$ is noncritical on a deleted neighborhood of $X'$ in $X$. \item[\rm (iii)] If $g$ is a holomorphic function on an open neighborhood of $X'$ in $X$ and $r\in\mathbb{N}$, then the conclusion of part (ii) holds for a generic extension $f\in\mathcal{O}(X)$ of $g|_{X'}$ which agrees with $g$ to order $r$ along $X'$. \end{itemize} \end{lemma} \begin{proof} We begin by proving part (i). A point $x\in X_\reg$ is a critical point of a holomorphic function $f\in\mathcal{O}(X)$ if and only if the partial derivatives $\partial f/\partial z_j$ in any system of local holomorphic coordinates $z=(z_1,\ldots,z_n)$ on an open neighborhood $U\subset X_\reg$ of $x$ vanish at the point $z(x)$. (Here $n=\dim_x X$.) This gives $n$ independent holomorphic equations on the 1-jet extension $j^1 f$ of $f$, so the jet transversality theorem for holomorphic maps $X\to\mathbb{C}$ (cf.\ \cite{Forster1970} or \cite[\S 7.8]{FF:book}) implies that every point $x\in A(f)$ is an isolated point of $A(f)$ for a generic $f\in \mathcal{O}(X)$. (The argument goes as follows: write $X_\reg=\bigcup_{j=1}^\infty U_j$ where $U_j\subset X_\reg$ is a compact connected coordinate neighborhood for every $j$. The set of all functions $f\in \mathcal{O}(X)$ whose 1-jet extension $U_j\ni x\mapsto j^1_x f\in \mathbb{C}^{n_j}$ (with $n_j=\dim U_j$) is transverse to $0\in\mathbb{C}^{n_j}$ on the compact set $U_j$ is open and dense in $\mathcal{O}(X)$. Taking the countable intersection of these sets over all $j$ gives the statement.) 
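To spell out why transversality gives isolated critical points: in the local coordinates on $U_j$ a point $x$ is critical for $f$ precisely when
\[
	\frac{\partial f}{\partial z_1}(x)=\cdots=\frac{\partial f}{\partial z_{n_j}}(x)=0,
\]
and transversality of the map $x\mapsto \bigl(\partial f/\partial z_1(x),\ldots, \partial f/\partial z_{n_j}(x)\bigr)$ to $0\in\mathbb{C}^{n_j}$ means that the complex Hessian matrix $\bigl(\partial^2 f/\partial z_i \partial z_l(x)\bigr)$ is nondegenerate at every such point. Hence every critical point of $f$ in $U_j$ is nondegenerate, and in particular isolated.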
For any function $f$ as above the set $A(f)$ is discrete in $X_\reg$, and we claim that $A(f)$ is then also discrete in $X$. If not, there is a point $x_0\in X_\sing$ and a sequence $x_j\in A(f)$ with $\lim_{j\to\infty} x_j=x_0$. By Lemma \ref{lem:crit} the set $C(f)=A(f)\cup X_\sing$ is a closed complex subvariety of $X$. Pick a compact neighborhood $K\subset X$ of $x_0$. Each point $x_j$ from the above sequence which belongs to $K$ is an isolated point of $C(f)$, hence an irreducible component of $C(f)$. Thus the compact subset $K\cap C(f)$ of the complex space $C(f)$ contains infinitely many irreducible components of $C(f)$, a contradiction. This proves part (i). Part (ii) follows similarly by applying the jet transversality theorem in the Baire space $\mathcal{O}_{X',g}(X)=\{f\in \mathcal{O}(X)\colon f|_{X'}=g\}$. Finally, let $g$ be as in (iii). Consider the short exact sequence of coherent analytic sheaves $0\to \mathcal{J}_{X'}^{r}\to \mathcal{O}_X\to \mathcal{O}_X/\mathcal{J}_{X'}^r\to 0$. The sheaf $\mathcal{O}_X/\mathcal{J}_{X'}^r$ is supported on $X'$, and hence $g$ determines a section of it. Since $H^1(X;\mathcal{J}_{X'}^{r})=0$ by Cartan's Theorem B, the same section is induced by a function $G\in\mathcal{O}(X)$. This means that $G-g$ vanishes to order $r$ along $X'$. To conclude the proof, it suffices to apply the transversality theorem in the Baire space $G+\mathcal{J}_{X'}^{r}(X) \subset \mathcal{O}(X)$; the details are similar to those in part (i). \end{proof} \begin{proposition} \label{prop:generic2} If $(X,\Sigma)$ is a stratified Stein space, then the set $\bigcup_{S\in\Sigma} \Crit(f|_S)$ is discrete in $X$ for a generic $f\in \mathcal{O}(X)$. \end{proposition} \begin{proof} Let $\Sigma=\{S_j\}_j$ where $S_j$ are (smooth) strata. Each stratum $S_j$ of positive dimension $n_j>0$ is a union $S_j=\bigcup_k U_{j,k}$ of countably many compact coordinate sets $U_{j,k}$.
The same argument as in the proof of Lemma \ref{lem:generic} shows that the set $\mathcal{U}_{j,k}\subset \mathcal{O}(X)$, consisting of all $f\in \mathcal{O}(X)$ such that the 1-jet extension map $U_{j,k} \ni x\mapsto j^1_x f \in \mathbb{C}^{n_j}$ is transverse to $0\in\mathbb{C}^{n_j}$ on $U_{j,k}$, is open and dense in $\mathcal{O}(X)$. Every $f\in \bigcap_{j,k} \mathcal{U}_{j,k}$ satisfies the conclusion of the proposition. \end{proof} Since every complex space admits a stratification, Proposition \ref{prop:generic2} implies \begin{corollary} \label{cor:generic} A generic holomorphic function on a Stein space has discrete critical locus. \end{corollary} We also have the following result in which $X$ is not necessarily Stein. \begin{corollary} \label{cor:deleted} Let $X$ be a complex space and $X'\subset X$ a closed Stein subvariety containing $X_\sing$. Given a function $g\in \mathcal{O}(X')$, there are an open neighborhood $U\subset X$ of $X'$ and a function $f\in\mathcal{O}(U)$ such that $f|_{X'}=g$ and $f$ has no critical points in $U\setminus X'$. In particular, an isolated singular point $p$ of a complex space $X$ admits a holomorphic function on a neighborhood $U$ of $p$ which is noncritical on $U\setminus\{p\}$. \end{corollary} \begin{proof} According to Siu \cite{Siu1976} (see also \cite[\S 3.1]{FF:book} and the additional references therein) a Stein subvariety $X'$ in any complex space $X$ admits an open Stein neighborhood $\Omega \subset X$ containing $X'$ as a closed complex subvariety. The conclusion then follows from Lemma \ref{lem:generic} applied to the Stein space $\Omega$. \end{proof} In the proof of Theorem \ref{th:splitting} we shall also need the following result. This is well known when $X$ is a complex manifold (i.e., without singularities), and we shall reduce the proof to this particular case. \begin{lemma} \label{lem:closetoId} Let $X$ be a reduced complex space and let $U\Subset U'$ be open relatively compact sets in $X$.
Fix a distance function $\dist$ on $X$ inducing the standard topology. There is a constant $\epsilon>0$ such that for any holomorphic map $f:U' \to X$ satisfying $\sup_{x\in U'}\dist(x,f(x))<\epsilon$ the restriction $f|_U\colon U\to f(U) \subset X$ is biholomorphic onto its image. \end{lemma} \begin{proof} We first prove the lemma in the case when $U'$ is Stein and its (compact) closure $\overline U'$ admits a Stein neighborhood $W$ in $X$. Assuming as we may that $W$ is relatively compact, it embeds as a closed complex subvariety of a Euclidean space $\mathbb{C}^N$ (cf.\ \cite[Theorem 2.2.8]{FF:book}). Since $U'$ is Stein, Siu's theorem \cite{Siu1976} provides a bounded Stein domain $D'\Subset \mathbb{C}^N$ such that $D'\cap W=U'$. Choose a pair of domains $D_0\Subset D$ in $\mathbb{C}^N$ such that $\overline U \subset D_0 \cap W$ and $\bar D\subset D'$. Let $T$ be a bounded linear extension operator furnished by Lemma \ref{lem:FP3}, mapping bounded holomorphic functions on $U'$ to bounded holomorphic functions on $D$ and satisfying \[ Tg|_{D\cap U'}= g|_{D\cap U'} \quad \text{and} \quad ||Tg||_D \le C||g||_{U'} \] for some constant $C>0$ independent of $g$. Consider a holomorphic map $f\colon U'\to X$ close to the identity. We may assume that $f(U')\subset W \subset \mathbb{C}^N$. Write $f(x)=x+g(x)$ for $x\in U'$, where $g:U'\to\mathbb{C}^N$ is close to zero. Applying the operator $T$ to each component of $g$ we get a holomorphic map $F=\Id+Tg \colon D\to \mathbb{C}^N$ which is close to the identity in the sup norm on $D$. Hence $F$ is biholomorphic on the smaller domain $D_0$ provided that $f$ is close enough to the identity on $U'$. Since $U\subset D_0$ and $F|_{U}=\Id_U + g|_U=f|_U$, we infer that $f\colon U\to f(U) \subset W$ is biholomorphic as well. Furthermore, the inverse map $F^{-1}\colon F(D_0)\to D_0$, restricted to $F(D_0)\cap W$, has range in $W$ as is easily seen by considering the situation on $W_\reg$ and applying the identity principle. 
This completes the proof in the special case. The general case follows as in the standard manifold situation. By compactness of $\overline U$ we can choose finitely many triples of open sets $V_j\Subset U_j\Subset U'_j$ in $X$ $(j=1,\ldots,m)$ such that \begin{itemize} \item[\rm (i)] $\overline U\subset \bigcup_{j=1}^m V_j$ and $\bigcup_{j=1}^m U'_j \subset U'$, and \item[\rm (ii)] $U'_j$ is Stein and $\overline U'_j$ has a Stein neighborhood in $X$ for every $j=1,\ldots,m$. \end{itemize} Pick a number $\epsilon_0>0$ such that $\dist(V_j,X\setminus U_j) > 2\epsilon_0$ for every $j=1,\ldots,m$. By the special case proved above, applied to the pair $U_j\Subset U'_j$, we can find a number $\epsilon\in (0,\epsilon_0)$ such that $f|_{U_j}\colon U_j\to f(U_j)$ is biholomorphic for every $j$ provided that $\dist(x,f(x))<\epsilon$ for all $x\in U'$. Since $U\subset \bigcup_{j=1}^m U_j$, it follows that $f|_U:U\to f(U)$ is biholomorphic as long as it is injective. Suppose that $f(x)=f(y)$ for a pair of points $x\ne y$ in $U$. Since the sets $V_j$ cover $U$, we have $x\in V_j$ for some $j$. As $f$ is injective on $U_j$, it follows that $y\in U\setminus U_j$ and hence $\dist(x,y) >2\epsilon_0$. The triangle inequality and the choice of $\epsilon$ then give \[ \dist(f(x),f(y)) \ge \dist(x,y) - \dist(x,f(x)) - \dist(y,f(y))>2\epsilon_0-2\epsilon>0, \] contradicting $f(x)=f(y)$. Thus $f$ is injective on $U$. \end{proof} \section{A splitting lemma for biholomorphic maps on complex spaces} \label{sec:gluing} In this section we prove a splitting lemma for biholomorphic maps close to the identity on Cartan pairs in complex spaces; see Theorem \ref{th:splitting} below. This result is the key to the proof of our main theorems; it will be used for gluing pairs of holomorphic functions with control of their critical loci. The nonsingular case is given by \cite[Theorem 4.1]{FF:Acta}.
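To indicate how such a splitting is used for gluing (a sketch of the standard scheme), suppose that $f_A$ and $f_B$ are holomorphic functions on neighborhoods of the respective sets $A$ and $B$, and that $\gamma$ is a biholomorphic map close to the identity on a neighborhood of $C=A\cap B$ satisfying $f_A=f_B\circ\gamma$ there. If $\gamma$ admits a splitting $\gamma\circ\alpha=\beta$, with $\alpha$ and $\beta$ biholomorphic and close to the identity on neighborhoods of $A$ and $B$, respectively, then
\[
	f_A\circ\alpha = f_B\circ\gamma\circ\alpha = f_B\circ\beta
	\quad \text{near } C,
\]
so the functions $f_A\circ\alpha$ and $f_B\circ\beta$ amalgamate into a single holomorphic function on a neighborhood of $D=A\cup B$ which is close to $f_A$ on $A$ and to $f_B$ on $B$.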
Recall that a compact set $K$ in a complex space $X$ is said to be a {\em Stein compact} if $K$ admits a basis of open Stein neighborhoods in $X$. We recall the following notion. \begin{definition} \label{def:CP} {\rm \cite[p.\ 209]{FF:book}} {\rm (I)} A pair $(A,B)$ of compact subsets in a complex space $X$ is a {\em Cartan pair} if it satisfies the following conditions: \begin{itemize} \item[\rm(i)] $A$, $B$, $D=A\cup B$ and $C=A\cap B$ are Stein compacts, and \item[\rm(ii)] $A,B$ are {\em separated} in the sense that $\overline{A\setminus B}\cap \overline{B\setminus A} =\emptyset$. \end{itemize} \noindent {\rm (II)} A pair $(A,B)$ of open sets in a complex manifold $X$ is a {\em strongly pseudoconvex Cartan pair of class $\mathcal{C}^\ell$} $(\ell\ge 2)$ if $(\bar A,\bar B)$ is a Cartan pair in the sense of (I) and the sets $A$, $B$, $D=A\cup B$ and $C=A\cap B$ are Stein domains with strongly pseudoconvex boundaries of class $\mathcal{C}^\ell$. \end{definition} We shall use the following properties of Cartan pairs: \begin{itemize} \item[\rm (a)] Let $(A,B)$ be a Cartan pair in $X$. If $X$ is a complex subspace of another complex space $\wt X$, then $(A,B)$ is also a Cartan pair in $\wt X$ (cf.\ \cite[Lemma 5.7.2, p.\ 210]{FF:book}). \item[\rm (b)] Every Cartan pair $(A,B)$ in a complex manifold $X$ can be approximated from outside by smooth strongly pseudoconvex Cartan pairs (cf.\ \cite[Proposition 5.7.3, p.\ 210]{FF:book}). \item[\rm (c)] One can solve any Cousin-I problem with sup-norm bounds on a strongly pseudoconvex Cartan pair (cf.\ \cite[Lemma 5.8.2, p.\ 212]{FF:book}). \end{itemize} We denote by $\dist$ a distance function on $X$ which induces its standard complex space topology. (The precise choice will not be important.) Given a compact set $K\subset X$ and continuous maps $f,g\colon K\to X$, we shall write \[ \dist_K(f,g)=\sup_{x\in K} \dist(f(x),g(x)). \] By $\Id$ we denote the identity map; its domain will always be clear from the context.
\begin{theorem} \label{th:splitting} Assume that $X$ is a complex space and $X'$ is a closed complex subvariety of $X$ containing its singular locus $X_\sing$. Let $(A,B)$ be a Cartan pair in $X$ such that $C:=A\cap B\subset X\setminus X'$. For any open set $\wt C \subset X$ containing $C$ there exist open sets $A'\supset A$, $B'\supset B$, $C'\supset C$ in $X$, with $C'\subset A'\cap B'\subset \wt C$, satisfying the following property. Given a number $\eta>0$, there exists a number $\epsilon_\eta >0$ such that for each holomorphic map $\gamma\colon \wt C\to X$ with $\dist_{\wt C}(\gamma,\Id) < \epsilon_\eta$ there exist biholomorphic maps $\alpha = \alpha_\gamma \colon A'\to \alpha(A') \subset X$ and $\beta = \beta_\gamma \colon B'\to \beta(B') \subset X$ satisfying the following properties: \begin{itemize} \item[\rm (a)] $\gamma\circ \alpha = \beta$ on $C'$, \item[\rm (b)] $\dist_{A'}(\alpha,\Id) < \eta$ and $\dist_{B'}(\beta,\Id) < \eta$, and \item[\rm (c)] $\alpha$ and $\beta$ are tangent to the identity map to any given finite order along the subvariety $X'$ intersected with their respective domains. \end{itemize} \end{theorem} In view of Lemma \ref{lem:closetoId} we can shrink the set $\wt C$ if necessary and assume that $\gamma$ is biholomorphic onto its image. The crucial property (a) then furnishes a compositional splitting of $\gamma$. As in \cite{FF:Acta}, the proof will also show that the maps $\alpha_\gamma$ and $\beta_\gamma$ can be chosen to depend smoothly on $\gamma$ such that $\alpha_\Id=\Id$ and $\beta_\Id=\Id$. The proof follows in spirit that of \cite[Theorem 4.1]{FF:Acta}, but is technically more involved. We embed a Stein neighbor\-hood of the Stein compact $D=A\cup B$ in $X$ as a closed complex subvariety of a complex Euclidean space $\mathbb{C}^N$.
We then use a holomorphic retraction on a neighborhood $\Omega\subset \mathbb{C}^N$ of the Stein compact $C=A\cap B \subset X_\reg$ in order to transport the linearized splitting problem to a suitable 1-parameter family of Cartan pairs in $\mathbb{C}^N$; see Lemma \ref{lem:CP}. (We have been unable to apply \cite[Theorem 4.1]{FF:Acta} directly since the resulting biholomorphic maps $\alpha,\beta$ need not map the subvariety $X$ to itself.) From this point on we perform an iteration, similar to the one in \cite{FF:Acta}, in which the domains of maps shrink by a controlled amount at every step and the error term converges to zero quadratically. \begin{proof}[Proof of Theorem \ref{th:splitting}] Replacing $X$ by an open Stein neighborhood of $D$ we may assume that $X$ is a closed complex subvariety of a Euclidean space $\mathbb{C}^N$ \cite[Theorem 2.2.8]{FF:book}. The pair $(A,B)$ is then also a Cartan pair in $\mathbb{C}^N$ \cite[Lemma 5.7.2, p.\ 210]{FF:book}. We shall assume that the distance function $\dist$ on $X$ is induced by the Euclidean distance on $\mathbb{C}^N$. By Cartan's Theorem A there exist entire functions $h_1,\ldots, h_l\in\mathcal{O}(\mathbb{C}^N)$ such that \[ X=\{z\in \mathbb{C}^N\colon h_i(z)=0,\quad i=1,\ldots,l \} \] and $h_1,\ldots,h_l$ generate the ideal sheaf $\mathcal{J}_X$ of $X$. (We shall only need finite ideal generation on compact subsets of $\mathbb{C}^N$, but in our case this actually holds globally since $X$ is a relatively compact subset of the original Stein space.) Consider the analytic subsheaf $\mathcal{T}_X \subset \mathcal{O}^N_{\mathbb{C}^N}$ whose stalk $\mathcal{T}_{X,p}$ at any point $p\in \mathbb{C}^N$ consists of all $N$-tuples $(g_1,\ldots,g_N) \in \mathcal{O}_{\mathbb{C}^N,p}^N$ satisfying the system of equations \[ \sum_{j=1}^N g_j \frac{\partial h_i}{\partial z_j}\in \mathcal{J}_{X,p},\quad i=1,\ldots,l.
\] The condition is void when $p\notin X$, while at points $p\in X$ it means that the vector $V(p)=(g_1(p),\ldots,g_N(p)) \in \mathbb{C}^N \cong T_p\mathbb{C}^N$ is Zariski tangent to $X$. Observe that $\mathcal{T}_X$ is the preimage of the coherent subsheaf $(\mathcal{J}_X)^l \subset \mathcal{O}_{\mathbb{C}^N}^l$ under the homomorphism $\sigma\colon \mathcal{O}^N_{\mathbb{C}^N} \to \mathcal{O}_{\mathbb{C}^N}^l$ whose $i$-th component equals $\sigma_i(g_1,\ldots,g_N) = \sum_{j=1}^N g_j \frac{\partial h_i}{\partial z_j}$. Therefore $\mathcal{T}_X$ is a coherent analytic subsheaf of $\mathcal{O}^N_{\mathbb{C}^N}$. Sections of $\mathcal{T}_X$ are holomorphic vector fields on $\mathbb{C}^N$ which are tangent to $X$ along $X$. (Note that the quotient $\mathcal{T}_X/\mathcal{J}_X \mathcal{T}_X$, restricted to $X$, is the {\em tangent sheaf} of $X$ \cite{Fischer}.) Denote by $\mathcal{J}_{X'}$ the sheaf of ideals of the subvariety $X'\subset X$. Fix an integer $n_0 \in\mathbb{N}$ and consider the coherent analytic sheaf $\mathcal{E}:=\mathcal{J}^{n_0}_{X'}\mathcal{T}_X$ on $\mathbb{C}^N$. By Cartan's Theorem A there exist sections $V_1,\ldots, V_m$ of $\mathcal{E}$ that generate $\mathcal{E}$ over the compact set $C=A\cap B \subset X\setminus X'$. These sections are holomorphic vector fields on $\mathbb{C}^N$ which are tangent to $X$ and vanish to order $n_0$ on the subvariety $X'$. Furthermore, as $C$ is contained in the regular locus $X_\reg$ of $X$ and $TX_\reg$ is the usual tangent bundle, the vectors $V_1(p),\ldots, V_m(p)\in T_p\mathbb{C}^N$ span the tangent space $T_p X\subset T_p\mathbb{C}^N$ at every point $p\in X$ in a neighborhood of $C$. Denote by $\phi^j_t$ the local holomorphic flow of the vector field $V_j$ for a complex value of time $t$. For each point $z\in \mathbb{C}^N$ the flow $\phi^j_t(z)$ is defined for $t$ in a neighborhood of $0\in\mathbb{C}$. Let $t=(t_1,\ldots,t_m)$ be holomorphic coordinates on $\mathbb{C}^m$. 
The map \begin{equation} \label{eq:spray} s(z,t)= s(z,t_1,\ldots,t_m)=\phi^1_{t_1}\circ \cdots \circ \phi^m_{t_m}(z),\quad z\in \mathbb{C}^N, \end{equation} is defined and holomorphic on an open neighborhood of $\mathbb{C}^N \times \{0\}^m$ in $\mathbb{C}^N\times\mathbb{C}^m$ and assumes values in $\mathbb{C}^N$. Since the vector fields $V_j$ are tangent to $X$, we have $s(z,t)\in X$ for all $t$ whenever $z\in X$. For any point $z\in X$ we denote by \begin{equation}\label{eq:Vds} Vd(s)_z = \frac{\partial}{\partial t}\bigg|_{t=0} s(z,t) \colon \mathbb{C}^m \to T_z X \end{equation} the partial differential of $s$ at $z$ in the fiber direction; we call $Vd(s)$ the {\em vertical derivative} of $s$ over the subvariety $X$. The definition of the flow of a vector field implies \[ \frac{\partial s(z,t)}{\partial t_j}\Big|_{t=0}= V_j(z),\quad j=1,\ldots,m. \] Since the vectors $V_1(z),\ldots, V_m(z)$ span the tangent space $T_z X$ at every point $z \in C$, the vertical derivative $Vd(s)$ (\ref{eq:Vds}) is surjective over a neighborhood of $C$. Thus $s$ is a {\em local holomorphic spray} on $\mathbb{C}^N$, and the restriction of $s$ to a neighborhood of $C$ in $X$ is a {\em dominating spray} (cf.\ \cite[p.\ 203]{FF:book} for these notions). Fix an open Stein set $U_0 \Subset X\setminus X' \subset X_\reg$ such that $C\subset U_0$ and $Vd(s)$ is surjective over $\overline U_0$. It follows that $U_0\times \mathbb{C}^m= E\oplus E'$, where $E'=\ker Vd(s)|_{U_0}$ and $E$ is a holomorphic vector subbundle of $U_0\times\mathbb{C}^m$ complementary to $E'$. (Such an $E$ exists since $U_0$ is Stein, so every holomorphic vector subbundle of a trivial bundle over $U_0$ admits a complementary subbundle.) Then $Vd(s)\colon E|_{U_0} \to TX|_{U_0}=TU_0$ is an isomorphism of holomorphic vector bundles. By the inverse mapping theorem, the restriction of $s$ to the fiber $E_z$ for any $z\in U_0$ maps an open neighborhood of the origin in $E_z$ biholomorphically onto an open neighborhood of the point $z$ in $X$.
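By way of illustration (a model example, not used in the proof), let $X=\{z\in\mathbb{C}^2\colon h(z)=z_1z_2=0\}$, $X'=X_\sing=\{0\}$, and $n_0=1$. The vector fields
\[
V_1=z_1\frac{\partial}{\partial z_1},\qquad V_2=z_2\frac{\partial}{\partial z_2}
\]
satisfy $V_j(h)=z_1z_2\in\mathcal{J}_X$ and vanish at the origin, so they are sections of $\mathcal{E}=\mathcal{J}_{X'}\mathcal{T}_X$. Their flows are linear, and the spray (\ref{eq:spray}) is in this case the entire map
\[
s(z,t_1,t_2)=\bigl(e^{t_1}z_1,\, e^{t_2}z_2\bigr),
\]
which maps $X\times\mathbb{C}^2$ into $X$. At a regular point $z=(z_1,0)$ with $z_1\ne 0$ we have $Vd(s)_z(t_1,t_2)=(t_1z_1,0)$, which spans the tangent line $T_zX=\mathbb{C}\times\{0\}$, so $s$ is dominating along $X_\reg$.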
Shrinking $U_0$ slightly around $C$ we get an open set $U_1\supset C$ in $X$ such that the following holds (cf.\ \cite[Lemma 4.4]{FF:Acta}). \begin{lemma} \label{lem:lifting} There are a neighborhood $U_1 \subset X\setminus X'$ of $C$ and constants $\epsilon_1>0$ and $M_1\ge 1$ such that for every open set $U\subset U_1$ and every holomorphic map $\gamma\colon U\to \gamma(U)\subset X$ satisfying $\dist_U(\gamma,\Id)<\epsilon_1$ there exists a unique holomorphic section $c\colon U\to E|_U$ satisfying \[ \gamma(z)=s(z,c(z))\ \ \forall z\in U, \quad M_1^{-1} \,\dist_U(\gamma,\Id) \le ||c||_U \le M_1 \,\dist_U(\gamma,\Id). \] \end{lemma} Since $E$ is a subbundle of the trivial bundle $U\times \mathbb{C}^m$, we may consider any section $c$ in Lemma \ref{lem:lifting} as a holomorphic map $U\to \mathbb{C}^m$, and $||c||_U$ denotes the sup-norm of $c$ on $U$ measured with respect to the Euclidean metric on $\mathbb{C}^m$. As $C=A\cap B\subset X_\reg$ is a Stein compact, the Docquier-Grauert theorem \cite{Docquier-Grauert} (see also \cite[Theorem 3.3.3, p.\ 67]{FF:book}) furnishes an open Stein neighborhood $\Omega \Subset \mathbb{C}^N$ of $C$ and a holomorphic retraction \begin{equation}\label{eq:rho} \rho\colon \Omega\to \Omega\cap X \Subset X_\reg. \end{equation} The map \begin{equation}\label{eq:T} T\colon \mathcal{O}(\Omega\cap X) \to \mathcal{O}(\Omega),\quad c\mapsto Tc=c\circ \rho, \end{equation} is then a bounded extension operator satisfying $||Tc||_\Omega = ||c||_{\Omega\cap X}$. By choosing $\Omega$ small enough we may assume that $\overline \Omega\cap X \subset \wt C$, where $\wt C$ is as in the statement of Theorem \ref{th:splitting}. We fix the domain $\Omega$, the retraction $\rho$, and the extension operator $T$ for the rest of the proof. The following lemma provides the key geometric ingredient in the proof of Theorem \ref{th:splitting}. \begin{lemma}\label{lem:CP} Assume that $X$ is a closed complex subvariety of $\mathbb{C}^N$. 
Let $(A,B)$ be a Cartan pair in $X$ such that $C:=A\cap B\subset X_\reg$. Let $\Omega \Subset \mathbb{C}^N$ be an open Stein neighborhood of $C$ and $\rho\colon \Omega\to\Omega\cap X$ be a holomorphic retraction (\ref{eq:rho}). Let $U_A,U_B$ be open sets in $\mathbb{C}^N$ such that $A\subset U_A$, $B\subset U_B$, and $U_A\cap U_B \Subset \Omega$. Then there exists a family of smoothly bounded strongly pseudoconvex Cartan pairs $(A_t,B_t)$ in $\mathbb{C}^N$, depending smoothly on the parameter $t\in [0,t_0]$ for some $t_0>0$, satisfying the following properties: \begin{itemize} \item[\rm (i)] For any pair of numbers $t,\tau$ such that $0\le t< \tau \le t_0$ we have $A \subset A_t \subset A_\tau \Subset U_A$, $B \subset B_t \subset B_\tau \Subset U_B$, and \[ \biggl(\, \bigcup_{t\in [0,t_0]} \overline{A_t\setminus B_t}\biggr) \cap \biggl(\, \bigcup_{t\in [0,t_0]} \overline{B_t\setminus A_t}\biggr) =\emptyset. \] \item[\rm (ii)] Set $C_t:=A_t\cap B_t$. For any pair of numbers $t,\tau$ such that $0\le t< \tau \le t_0$ the distances $\dist(A_t,\mathbb{C}^N\setminus A_\tau)$, $\dist(B_t,\mathbb{C}^N\setminus B_\tau)$, and $\dist(C_t,\mathbb{C}^N\setminus C_\tau)$ are $\ge \tau-t>0$. \item[\rm (iii)] For every $t\in [0,t_0]$ the boundaries $bA_t$, $bB_t$, and $bC_t$ intersect $X$ transversely at any inter\-section point belonging to $\Omega\cap X$. \item[\rm (iv)] $\rho(C_t) = C_t \cap X$ for every $t\in[0,t_0]$. \end{itemize} \end{lemma} \begin{proof} We shall modify the proof of Proposition 5.7.3 in \cite[p.\ 210]{FF:book} so as to also obtain property (iv) which will be crucial. (For a similar result see \cite[Proposition 4.4]{Stopar}.) We shall use the function $\rmax\{x,y\}$ on $\mathbb{R}^2$, the {\em regularized maximum} of $x$ and $y$ (see e.g.\ \cite[p.\ 61]{FF:book}). It depends on a positive parameter which will be chosen as close to zero as needed at each application. 
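One concrete choice (a smoothed maximum obtained by convolution, in the spirit of Demailly's regularized maximum; only the properties listed below are used) is
\[
\rmax\{x,y\} = \int_{\mathbb{R}^2} \max\{x+\eta h_1,\, y+\eta h_2\}\,\theta(h_1)\,\theta(h_2)\, dh_1\, dh_2,
\]
where $\eta>0$ is the parameter and $\theta\ge 0$ is a smooth even function on $\mathbb{R}$ with support in $[-1,1]$ and $\int_{\mathbb{R}}\theta =1$. This function is smooth, convex, and nondecreasing in each variable (whence its composition with plurisubharmonic functions remains plurisubharmonic), it satisfies $\max\{x,y\}\le \rmax\{x,y\}\le \max\{x,y\}+\eta$, and it coincides with $\max\{x,y\}$ whenever $|x-y|\ge 2\eta$.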
We have $\max\{x,y\} \le \rmax\{x,y\}$, the two functions agree outside a small conical neighborhood of the diagonal $\{x=y\}$, and they can be made arbitrarily close by choosing the parameter small enough. The $\rmax$ of two smooth strongly plurisubharmonic functions is again smooth and strongly plurisubharmonic. The domain $\{\rmax\{\phi,\psi\}<0\}$ is obtained by smoothing the corners of the intersection $\{\phi<0\}\cap \{\psi<0\}$. Since $C=A\cap B$ is a Stein compact, there is a smoothly bounded strongly pseudoconvex domain $V \Subset U_A\cap U_B$ such that $C\subset V$ and $bV$ intersects $X$ transversely. Let $\theta\colon V_0\to \mathbb{R}$ be a smooth strongly plurisubharmonic defining function for $V$, where $V_0 \subset U_A\cap U_B$ is an open neighborhood of $\overline{V}$. Given a subset $A\subset \mathbb{C}^N$ and a number $r>0$, we set \[ A(r) = \{z\in \mathbb{C}^N \colon |z-p| < r\ {\rm for\ some\ } p\in A\}. \] It is elementary that $(A\cup B)(r) = A(r) \cup B(r)$ and $(A\cap B)(r)\subset A(r)\cap B(r)$. Since $A$ and $B$ are compact and satisfy the separation condition $\overline{A\setminus B}\cap \overline{B\setminus A}=\emptyset$ of a Cartan pair, we also have \[ (A\cap B)(r)=A(r)\cap B(r),\quad \overline{A(r)\setminus B(r)} \cap \overline{B(r)\setminus A(r)} = \emptyset \] for all sufficiently small $r>0$ (cf.\ \cite[Lemma 5.7.4]{FF:book}). By choosing $r>0$ small enough we can also ensure that \begin{equation}\label{eq:Cr} C(r)=A(r)\cap B(r) \Subset V. \end{equation} Fix such a number $r$. Since $A\cup B$ is a Stein compact, there is a smoothly bounded strongly pseudo\-convex Stein domain $\Omega_0 \subset \mathbb{C}^N$ such that \[ A\cup B \subset \Omega_0 \Subset A(r)\cup B(r). \] Pick a smooth strongly plurisubharmonic function $\phi \colon \Omega'_0\to\mathbb{R}$ on an open set $\Omega'_0\supset \overline \Omega_0$ such that $\Omega_0=\{z\in \Omega'_0\colon \phi(z)<0\}$ and $d\phi\ne 0$ on $b\Omega_0=\{\phi=0\}$. We may assume that $\Omega'_0\Subset A(r) \cup B(r)$.
Choose a number $\epsilon_0>0$ such that \begin{equation} \label{eq:Omega1} \Omega'_1 := \{z\in \Omega'_0\colon \phi(z) < 3 \epsilon_0\} \Subset \Omega'_0. \end{equation} By \cite[Lemma 2.2]{Richberg} there exists a smooth function $\psi\ge 0$ on $\mathbb{C}^N$ such that $\{\psi=0\}=X$, $\psi$ is strongly plurisubharmonic on $\mathbb{C}^N\setminus X=\{\psi>0\}$, and the Levi form of $\psi$ at any point $z\in X$ is positive except on the tangent space $T_z X$. Choose a smooth function $\chi: \mathbb{C}^N\to [0,1]$ which equals $0$ on a neighborhood of $\overline{U}_A \cap \overline{U}_B \subset \Omega$ and equals $1$ on a neighborhood of $\mathbb{C}^N\setminus\Omega$. Since $\phi$ is strongly plurisubharmonic on $\Omega'_0$, there is a number $\epsilon \in (0,\epsilon_0)$ such that the functions $\phi-2\epsilon \chi$ and $\phi+\epsilon \chi$ are strongly plurisubharmonic on $\Omega'_1$; fix such $\epsilon$. Given constants $M,M'>0$ (to be determined later) we consider the functions \begin{equation} \label{eq:Phi} \Phi_1= (\phi - 2\epsilon \chi)\circ\rho + M\psi,\quad \Phi_2= \phi - 2\epsilon + \epsilon\chi +M'\psi, \quad \Phi = \rmax \bigl\{\Phi_1,\Phi_2\}. \end{equation} The function $\Phi_1$ is defined on $\Omega\cap \Omega'_0$ while $\Phi_2$ is defined on $\Omega'_0$. We shall see that for suitable choices of $M>0,M'>0$ the function $\Phi$ is well defined, smooth and strongly plurisubharmonic on the domain $\Omega'_1=\{\phi<3\epsilon_0\}$ (\ref{eq:Omega1}), and for all $t\in \mathbb{R}$ sufficiently close to $0$ we have \begin{equation} \label{eq:Dt} A\cup B \subset D_t:=\{z\in \Omega'_1 \colon \Phi(z) <t\} \Subset \Omega'_1. \end{equation} Since $\phi+\epsilon\chi$ is strongly plurisubharmonic on $\Omega'_1$ and $\psi$ is plurisubharmonic on $\mathbb{C}^N$, the function $\Phi_2$ is strongly pluri\-sub\-harmonic on $\Omega'_1$ for any choice of $M'>0$. Consider now $\Phi_1$. 
By the choice of $\epsilon$ the function $\phi-2\epsilon \chi$ is strongly plurisubharmonic on $\Omega'_1$, whence $\Phi_1$ is strongly plurisubharmonic on $\Omega\cap \Omega'_1$. Indeed, the first summand $(\phi- 2\epsilon \chi) \circ \rho$ is plurisubharmonic and its Levi form at any point $z\in \Omega\cap X$ is positive definite in the directions tangential to $X$. The second summand $M\psi$ is strongly plurisubharmonic on $\mathbb{C}^N \setminus X$, and its Levi form at points of $X$ is positive in directions that are not tangential to $X$. Hence the Levi form of the sum is positive everywhere. Observe that $\Phi_1>0$ near $b\Omega'_1 \cap \Omega \cap X$ by the definition of $\Omega'_1$ (\ref{eq:Omega1}) and the fact that $2\epsilon\chi \le 2\epsilon < 2\epsilon_0$. By choosing $M>0$ sufficiently big we can thus ensure that $\Phi_1>0$ near $b\Omega'_1 \cap \Omega$. We fix such $M$ for the rest of the proof. Next we show that $\Phi = \rmax \bigl\{\Phi_1,\Phi_2\bigr\}$ is well defined if the constant $M'>0$ in $\Phi_2$ is chosen big enough. We need to ensure that $\Phi_1<\Phi_2$ on the domain of $\Phi_1$ near the boundary of $\Omega$, so $\Phi_2$ takes over in $\rmax$ before we run out of the domain of $\Phi_1$. On $X=\{\psi=0\}$ this is clear since near $b\Omega$ we have $\chi=1$ and hence $\Phi_1=\phi-2\epsilon < \phi-\epsilon=\Phi_2$. By choosing $M'>0$ sufficiently big we get $\Phi_1< \Phi_2$ on a neighborhood of $b\Omega \cap \Omega'_1$; hence $\rmax\{\Phi_1,\Phi_2\}$ is well defined on $\Omega'_1$. By increasing $M'$ we can also ensure that $\Phi>0$ near $b\Omega'_1$, so the domains $D_t=\{\Phi<t\}$ for $t$ close to $0$ (say $|t|\le t_1$ for some $t_1>0$) satisfy (\ref{eq:Dt}). By Sard's theorem and compactness of the level sets of $\Phi$ we can find a nontrivial interval $I \subset [-t_1,+t_1]$ which contains no critical values of $\Phi$ and of $\Phi|_{X\cap \overline \Omega}$.
Hence $D_t$ for $t\in I$ are smoothly bounded strongly pseudoconvex domains intersecting $X$ transversely within $\Omega$. On the intersection of the domain of $\Phi_1$ with the set $\{\chi=0\}$ (in particular, on $U_A\cap U_B \cap \Omega'_1$) we have $\Phi_1 = \phi\circ \rho+M\psi \ge \phi\circ\rho$, so on this set the retraction $\rho$ (\ref{eq:rho}) projects $\{\Phi_1 < t\}$ to $\{\Phi_1 <t\} \cap X$. Furthermore, on $X\cap \{\chi=0\}$ we have $\Phi_1=\phi > \phi-2\epsilon=\Phi_2$. This shows that the domain $D_t=\{\Phi<t\}$ agrees with $\{\Phi_1 < t\}$ near $X\cap \{\chi=0\}$. It follows that \begin{equation}\label{eq:inclusion} \rho\left( D_t \cap \{\chi=0\}\right) = D_t \cap X \cap \{\chi=0\}. \end{equation} It remains to find a Cartan pair decomposition $(A_t,B_t)$ of $D_t$. Recall that $\overline{C(r)} \subset V=\{\theta<0\}$ by (\ref{eq:Cr}). Replacing $\theta$ by $c\theta$ for a suitably chosen constant $c>0$ we may therefore assume that $\theta<\Phi$ on $C(r)\cap \Omega'_1$. Perturbing $\theta$ and $V$ slightly we can ensure that the real hypersurfaces $bV$ and $bD_0=\{\Phi=0\}$ intersect transversely. The function \[ \phi_C=\rmax\{\phi,\theta\} \colon \Omega'_1 \cap V_0 \to\mathbb{R} \] is smooth strongly plurisubharmonic. For every $t\in I$ the set \[ C'_t:= \{z\in \Omega'_1 \cap V_0 \cap X \colon \phi_C(z)< t\} \subset X_\reg \] is a smoothly bounded strongly pseudoconvex domain. We have $C(r)\cap X \subset C'_t$ in view of (\ref{eq:Cr}) and $C'_t\subset D'_t:=D_t \cap X$ since $\phi_C\ge \phi$. As $\theta<\phi$ on $\overline{C(r)} \cap \Omega'_1\, \cap X$, we have $\phi_C=\phi$ there, so the boundaries $bC'_t$ and $bD'_t$ coincide along their intersection with the compact set $\overline{C(r)} \cap X$. Hence $C'_t$ separates $D'_t$ in the sense of a Cartan pair, i.e., we have $D'_t=A'_t\cup B'_t$ and $A'_t\cap B'_t=C'_t$. 
Set \[ \Theta =\rmax\bigl\{ \phi_C\circ \rho + M\psi, \Phi\bigr\},\quad C_t=\{\Theta<t\}, \] where $\Phi=\rmax\{\Phi_1,\Phi_2\}$ (\ref{eq:Phi}) and $M$ is the constant in the definition of $\Phi_1$. One easily verifies that $C_t$ is a strongly pseudoconvex domain which separates $D_t$ into a Cartan pair $(A_t,B_t)$ with $D_t=A_t\cup B_t$ and $C_t=A_t\cap B_t$. It follows from (\ref{eq:inclusion}) that $C_t$ satisfies Lemma \ref{lem:CP}-(iv). By decreasing the parameter interval $I$ we ensure that $\Theta$ and $\Theta|_X$ have no critical values in $I$, so the boundaries $bC_t$ for $t\in I$ are smooth and intersect $X$ transversely. The same is then true for the domains $A_t$ and $B_t$ since their boundaries are contained in $bD_t\cup bC_t$. Reparametrizing the $t$ variable and changing the functions $\Phi$ and $\Theta$ by an additive constant we may assume that $I=[0,t_0]$ for some $t_0>0$ and property (ii) holds. The remaining properties of $A_t,B_t$ and $C_t$ follow directly from the construction. \end{proof} Given an open set $U\subset X$ and a number $\delta>0$, we shall use the notation \[ U(\delta) =\{z\in X\colon \dist(z,U)< \delta\}. \] Recall that $s$ is the spray (\ref{eq:spray}) and $M_1$ is the constant from Lemma \ref{lem:lifting}. The following lemma is a special case of \cite[Lemma 4.5]{FF:Acta}. \begin{lemma}\label{lem:ActaLemma4.5} Let $U_1 \subset X\setminus X'$ be the open set from Lemma \ref{lem:lifting}. There exist constants $\delta_0>0$ (small) and $M_2>0$ (big) with the following property. Let $0< \delta <\delta_0$ and $0< 4\epsilon <\delta$. Let $U$ be an open set in $X$ such that $U(\delta)\subset U_1$. Assume that $\alpha,\beta,\gamma \colon U(\delta)\to X$ are holomorphic maps which are $\epsilon$-close to the identity on $U(\delta)$. Then $\wt \gamma:= \beta^{-1} \circ \gamma \circ \alpha \colon U\to X$ is a well defined holomorphic map.
Write \begin{eqnarray*} \alpha(z) = s(z,a(z)), &\quad& \beta(z)=s(z,b(z)), \\ \gamma(z)=s(z,c(z)), &\quad& \wt\gamma(z) = s(z, \wt c(z)), \end{eqnarray*} where $a,b,c$ are holomorphic sections of the vector bundle $E|_{U(\delta)}$ and $\wt c$ is a holomorphic section of $E|_U$ furnished by Lemma \ref{lem:lifting}. If $c=b-a$ holds on $U(\delta)$, then \[ ||\wt c||_U\le M_2\delta^{-1} \epsilon^2 \quad \text{and}\quad \dist_U(\wt \gamma,\Id) \le M_1M_2\delta^{-1} \epsilon^2. \] \end{lemma} The next lemma provides a solution of the Cousin-I problem with bounds on the family of strongly pseudoconvex domains $D_t=A_t\cup B_t$ from Lemma \ref{lem:CP}, intersected with the subvariety $X$. We denote by $\mathcal{H}^\infty(D)$ the Banach space of all bounded holomorphic functions on $D$. \begin{lemma} \label{lem:Cousin} Let $(A_t,B_t)$ $(t\in [0,t_0])$ be a family of strongly pseudoconvex Cartan pairs furnished by Lemma \ref{lem:CP}. There is a constant $M_3>0$ with the following property. For every $t\in [0,t_0]$ and $c\in \mathcal{H}^\infty(C_t\cap X)$ there exist functions $a\in\mathcal{H}^\infty(A_t)$ and $b\in \mathcal{H}^\infty(B_t)$ such that \[ c=b-a \ \text{on}\ C_t \cap X, \quad ||a||_{A_t} \le M_3 ||c||_{C_t\cap X}, \quad ||b||_{B_t} \le M_3 ||c||_{C_t\cap X}. \] The functions $a$ and $b$ are given by bounded linear operators applied to $c$. \end{lemma} \begin{proof} We begin by finding a constant $M_3>0$ independent of $t\in[0,t_0]$ and linear operators \[ \mathcal{A}_t \colon \mathcal{H}^\infty(C_t)\to \mathcal{H}^\infty(A_t),\quad \mathcal{B}_t \colon \mathcal{H}^\infty(C_t)\to \mathcal{H}^\infty(B_t) \] such that for every $g\in \mathcal{H}^\infty(C_t)$ $(t\in [0,t_0])$ we have \begin{equation} \label{eq:diff} g = \mathcal{A}_t(g)-\mathcal{B}_t(g) \ \ \text{on} \ \ C_t \end{equation} and the estimates \begin{equation}\label{eq:estimate} ||\mathcal{A}_t (g)||_{A_t} \le M_3 ||g||_{C_t}, \qquad ||\mathcal{B}_t (g)||_{B_t} \le M_3 ||g||_{C_t}.
\end{equation} The proof is similar to that of \cite[Lemma 5.8.2, p.\ 212]{FF:book} and uses standard techniques; we include it for completeness. In view of Lemma \ref{lem:CP}-(i) there is a smooth function $\xi\colon \mathbb{C}^N\to[0,1]$ such that $\xi=0$ on $\bigcup_{t\in [0,t_0]} \overline{A_t\setminus B_t}$ and $\xi=1$ on $\bigcup_{t\in [0,t_0]} \overline{B_t\setminus A_t}$. For any $g \in \mathcal{H}^\infty(C_t)$ the product $\xi g$ extends to a bounded smooth function on the domain $A_t$ that vanishes on $A_t\setminus B_t$, and $(\xi-1)g$ extends to a bounded smooth function on $B_t$ that vanishes on $B_t\setminus A_t$. Furthermore, $\overline\partial(\xi g)= \overline\partial((\xi-1)g)=g\, \overline\partial \xi$ is a smooth bounded $(0,1)$-form on the strongly pseudoconvex domain $D_t=A_t\cup B_t$ with support contained in $C_t=A_t\cap B_t$. Let $S_t$ be a sup-norm bounded linear solution operator to the $\overline\partial$-equation on $D_t$ at the level of $(0,1)$-forms. (Such $S_t$ can be found as a Henkin-Ram\'irez integral kernel operator; see the monographs by Henkin and Leiterer \cite{HL:TF} or Lieb and Michel \cite{Lieb-Michel}. The operators $S_t$ can be chosen to depend smoothly on the parameter $t\in [0,t_0]$. For small perturbations of a given strongly pseudoconvex domain this is evident from the construction and is stated explicitly in the cited sources; for compact families of strongly pseudoconvex domains the result follows by applying a smooth partition of unity on the parameter space.) Given $g \in \mathcal{H}^\infty(C_t)$, set \[ \mathcal{A}_t (g) = \xi g - S_t\bigl(g\,\overline\partial \xi\bigr) \in\mathcal{H}^\infty(A_t), \qquad \mathcal{B}_t (g) = (\xi-1) g - S_t\bigl(g\, \overline\partial \xi\bigr) \in\mathcal{H}^\infty(B_t). \] It is immediate that these operators satisfy the stated properties.
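Spelled out, with $u:=S_t\bigl(g\,\overline\partial\xi\bigr)$ we have
\[
\mathcal{A}_t(g)-\mathcal{B}_t(g) = (\xi g - u)-\bigl((\xi-1)g - u\bigr) = g \quad \text{on}\ C_t,
\]
which is (\ref{eq:diff}); since $\overline\partial u = g\,\overline\partial\xi$ on $D_t$, both $\overline\partial \mathcal{A}_t(g)= g\,\overline\partial\xi - g\,\overline\partial\xi = 0$ on $A_t$ and $\overline\partial \mathcal{B}_t(g)=0$ on $B_t$, so the two functions are holomorphic; and since $0\le \xi\le 1$, the estimates (\ref{eq:estimate}) hold with any constant
\[
M_3 \ge 1 + \Bigl(\sup_{t\in [0,t_0]} ||S_t||\Bigr)\, \sup_{D_{t_0}} |\overline\partial \xi|,
\]
which is finite because the operator norms $||S_t||$ depend continuously on $t$ in the compact interval $[0,t_0]$.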
By Lemma \ref{lem:CP}-(iv) the map $c \mapsto T(c)= c\circ \rho$ (\ref{eq:T}) induces a linear extension operator $\mathcal{O}(C_t\cap X)\to \mathcal{O}(C_t)$ satisfying $||Tc||_{C_t}=||c||_{C_t\cap X}$. The compositions \[ \mathcal{A}_t\circ T \colon \mathcal{H}^\infty(C_t\cap X)\to \mathcal{H}^\infty(A_t), \qquad \mathcal{B}_t\circ T \colon \mathcal{H}^\infty(C_t\cap X)\to \mathcal{H}^\infty(B_t) \] are then bounded linear operators satisfying the conclusion of Lemma \ref{lem:Cousin}. \end{proof} \begin{lemma}\label{lem:ActaLemma4.7} Let $(A_t,B_t)=(A(t),B(t))$ $(t\in [0,t_0])$ be strongly pseudoconvex Cartan pairs furnished by Lemma \ref{lem:CP}. Set $C(t)=A(t)\cap B(t)$. Let $\delta_0>0$ be chosen as in Lemma \ref{lem:ActaLemma4.5}. Then there are constants $M_4,M_5 > 0$ satisfying the following property. Let $0\le t <t+\delta \le t_0$ and $0< \delta <\delta_0$. For every holomorphic map $\gamma\colon C(t+\delta)\cap X \to X$ satisfying \[ \epsilon:= \dist_{C(t+\delta) \cap X}(\gamma,\Id) < \frac{\delta}{4M_4} \] there exist holomorphic maps $\alpha\colon A(t+\delta)\to \mathbb{C}^N$ and $\beta\colon B(t+\delta)\to \mathbb{C}^N$, tangent to the identity map to order $n_0$ along the subvariety $X'$ and satisfying the estimates \begin{equation}\label{eq:est-alpha} \dist_{A(t+\delta)}(\alpha,\Id)< M_4\,\epsilon,\quad \dist_{B(t+\delta)}(\beta,\Id) < M_4\,\epsilon, \end{equation} such that \[ \wt\gamma = \beta^{-1}\circ \gamma\circ\alpha \colon C(t)\cap X \to X \] is a well defined holomorphic map satisfying the estimate \begin{equation} \label{eq:Acta4.3} \dist_{C(t) \cap X}(\wt\gamma,\Id) < M_5\, \delta^{-1} \dist_{C(t+\delta)\cap X}(\gamma,\Id)^2 = M_5\, \delta^{-1} \epsilon^2. \end{equation} \end{lemma} \begin{proof} On the Banach space $\mathcal{H}^\infty(D)^m$ we use the norm $||g||=\sum_{j=1}^m ||g_j||$, where $||g_j||$ is the sup-norm on $D$.
By Lemma \ref{lem:lifting} there is a holomorphic section $c\colon C(t+\delta) \cap X \to E$ of the holomorphic vector bundle $E\to U_1$ such that $\gamma(z)=s(z,c(z))$ for $z\in C(t+\delta) \cap X$ and $||c||_{C(t+\delta) \cap X}\le M_1\epsilon$. (Here $M_1$ is the constant from Lemma \ref{lem:lifting}.) By Lemma \ref{lem:Cousin}, applied componentwise at the parameter value $t+\delta$, we have $c=b-a$, where $a\in\mathcal{H}^\infty(A_{t+\delta})^m$ and $b\in \mathcal{H}^\infty(B_{t+\delta})^m$ satisfy the estimates \[ ||a||_{A_{t+\delta}} \le m M_1 M_3 \epsilon, \qquad ||b||_{B_{t+\delta}} \le m M_1 M_3 \epsilon. \] Let $s$ be the spray (\ref{eq:spray}). Set \begin{eqnarray*} \alpha(z) &=& s(z,a(z)), \qquad z\in A(t+\delta), \\ \beta(z) &=& s(z,b(z)), \qquad z\in B(t+\delta). \end{eqnarray*} The maps $\alpha$ and $\beta$ are tangent to the identity to order $n_0$ along the subvariety $X'$ and satisfy $\alpha(A(t+\delta) \cap X)\subset X$ and $\beta(B(t+\delta)\cap X)\subset X$. By Lemma \ref{lem:lifting} we have \[ \dist_{A(t+\delta)}(\alpha,\Id)< m M_1^2 M_3\,\epsilon,\quad \dist_{B(t+\delta)}(\beta,\Id) < m M_1^2M_3\,\epsilon. \] Setting $M_4=m M_1^2M_3$ we get the estimates (\ref{eq:est-alpha}). If the number $\epsilon=\dist_{C(t+\delta)\cap X}(\gamma,\Id)$ satisfies $4M_4\epsilon<\delta$, then by Lemma \ref{lem:ActaLemma4.5} the composition $\wt \gamma=\beta^{-1}\circ \gamma\circ \alpha$ is a well defined holomorphic map on $C(t)\cap X$ satisfying the estimate (\ref{eq:Acta4.3}) with the constant $M_5= M_2M_4^2$. \end{proof} We now complete the proof of Theorem \ref{th:splitting} by a recursive process, using Lemma \ref{lem:ActaLemma4.7} at every step. The initial map $\gamma$ is defined on the set $\wt C\supset C(t_0)\cap X$. For each $k\in\mathbb{Z}_+$ we set \[ t_k = t_0\prod_{j=1}^k (1-2^{-j})\quad \text{and} \quad \delta_k = t_k - t_{k+1} = t_k 2^{-k-1}. \] The sequence $t_k>0$ is decreasing, $t^* = \lim_{k\to \infty} t_k > 0$, $\delta_k > t^* 2^{-k-1}$ for all $k$, and $\sum_{k=0}^\infty \delta_k= t_0-t^*$.
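For completeness, these elementary properties can be verified directly: $\delta_k = t_k\,2^{-k-1} > t^*\, 2^{-k-1}$ because $t_k>t^*$; the limit $t^*$ is positive because
\[
\log \frac{t^*}{t_0} = \sum_{j=1}^{\infty} \log\bigl(1-2^{-j}\bigr) \ge -\sum_{j=1}^{\infty} \frac{2^{-j}}{1-2^{-j}} \ge -\sum_{j=1}^{\infty} 2^{1-j} = -2,
\]
so $t^*\ge t_0\, e^{-2}>0$; and the last identity is the telescoping sum $\sum_{k=0}^{K} \delta_k = t_0 - t_{K+1} \to t_0-t^*$ as $K\to\infty$.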
Set $A_k=A(t_k)$, $B_k=B(t_k)$, $C_k=C(t_k)=A_k\cap B_k$ and observe that \[ \bigcap_{k=0}^\infty A_k = \overline{A(t^*)},\qquad \bigcap_{k=0}^\infty B_k = \overline{B(t^*)},\qquad \bigcap_{k=0}^\infty C_k = \overline{C(t^*)}. \] To begin the induction, pick a number $\epsilon_0>0$ such that $4M_4 \epsilon_0 <\delta_0=t_0/2$. Set $\gamma_0=\gamma$ and assume that $\dist_{C_0\cap X}(\gamma_0,\Id)\le \epsilon_0$. Lemma \ref{lem:ActaLemma4.7} furnishes holomorphic maps $\alpha_1\colon A_1\to \mathbb{C}^N$ and $\beta_1\colon B_1\to \mathbb{C}^N$, satisfying the estimates \[ \dist_{A_{1}}(\alpha_{1},\Id)< M_4\,\epsilon_0, \qquad \dist_{B_{1}}(\beta_{1},\Id) < M_4\,\epsilon_0 \] (see (\ref{eq:est-alpha})), such that $ \gamma_1=\beta_1^{-1} \circ \gamma_0 \circ \alpha_1 \colon C_1\cap X \to X $ is a well defined holomorphic map satisfying the following estimate (cf.\ (\ref{eq:Acta4.3})): \[ \epsilon_1:= \dist_{C_1\cap X}(\gamma_1,\Id) < M_5 \delta_0^{-1} \epsilon^2_0 < 2M\epsilon_0^2 \] where the constant $M>0$ is given by \[ M=M_5/t^*. \] Assuming that $4M_4\epsilon_1 <\delta_1$ (which holds if $\epsilon_0>0$ is small enough), Lemma \ref{lem:ActaLemma4.7} furnishes holomorphic maps $\alpha_2\colon A_2\to \mathbb{C}^N$ and $\beta_2\colon B_2\to \mathbb{C}^N$ satisfying the estimates \[ \dist_{A_2}(\alpha_{2},\Id)< M_4\,\epsilon_1, \qquad \dist_{B_{2}}(\beta_{2},\Id) < M_4\,\epsilon_1 \] and such that $ \gamma_2=\beta_2^{-1}\circ \gamma_1 \circ \alpha_2 \colon C_2\cap X\to X $ is a well defined holomorphic map satisfying the estimate \[ \epsilon_2:=\dist_{C_2\cap X}(\gamma_2,\Id) < M_5 \delta_1^{-1} \epsilon^2_1 < 2^2 M \epsilon_1^2. 
\] Proceeding inductively, we obtain sequences of holomorphic maps \[ \alpha_k\colon A_k \to\mathbb{C}^N, \quad \beta_k \colon B_k\to \mathbb{C}^N, \quad \gamma_k\colon C_k\cap X \to X \] such that the following conditions hold for every $k=0,1,2,\ldots$: \[ \gamma_{k+1}=\beta_{k+1}^{-1}\circ \gamma_k\circ \alpha_{k+1} \colon C_{k+1}\cap X \to X, \] \begin{equation}\label{eq:est-alphabis} \dist_{A_{k+1}}(\alpha_{k+1},\Id)< M_4\,\epsilon_k, \quad \dist_{B_{k+1}}(\beta_{k+1},\Id) < M_4\,\epsilon_k, \end{equation} \begin{equation} \label{eq:Acta4.4} \epsilon_{k+1} := \dist_{C_{k+1}\cap X}(\gamma_{k+1},\Id) < M_5 \delta_k^{-1} \epsilon^2_k < 2^{k+1} M \epsilon_k^2. \end{equation} The necessary condition for the induction to proceed is that $4M_4\epsilon_k<\delta_k$ holds for each $k=0,1,2,\ldots$. By \cite[Lemma 4.8]{FF:Acta} this condition is fulfilled as long as the initial number $\epsilon_0>0$ is chosen small enough, and the resulting sequence $\epsilon_k>0$, estimated recursively by (\ref{eq:Acta4.4}), then converges to zero very rapidly. (Indeed, setting $u_k=2^{k+2}M\epsilon_k$, the estimate (\ref{eq:Acta4.4}) gives $u_{k+1} < 2^{k+3}M\cdot 2^{k+1}M\epsilon_k^2 = u_k^2$, so $u_k< u_0^{2^k}$ whenever $u_0=4M\epsilon_0<1$.) In particular, we can ensure that \begin{equation}\label{eq:eta} M_4 \sum_{j=0}^{\infty} \epsilon_j <\eta \end{equation} where $\eta>0$ is as in the statement of the theorem. The estimates (\ref{eq:est-alphabis}) imply that the compositions \begin{equation}\label{eq:tilde-alpha} \wt\alpha_k=\alpha_1\circ \alpha_2\circ \cdots\circ \alpha_k \colon A_k\to \mathbb{C}^N, \quad \wt \beta_k=\beta_1 \circ \beta_2 \circ \cdots\circ \beta_k \colon B_k\to \mathbb{C}^N \end{equation} are well defined holomorphic maps for $k=1,2,\ldots$ satisfying \begin{equation}\label{eq:compose-k} \wt \beta_{k}\circ \gamma_{k} = \gamma \circ \wt \alpha_{k}\quad \text{on} \quad C_{k}\cap X. \end{equation} As $k\to\infty$, the estimates (\ref{eq:Acta4.4}) and (\ref{eq:eta}) show that the sequence $\gamma_k$ converges to the identity map uniformly on $C(t^*)\cap X$. Consider now the sequences $\wt\alpha_k$ and $\wt\beta_k$.
From (\ref{eq:est-alphabis}) and (\ref{eq:eta}) we clearly get that \begin{equation}\label{eq:est-tilde-alpha} \dist_{A_{k}}(\wt\alpha_k,\Id) < M_4 \sum_{j=0}^{k-1} \epsilon_j <\eta, \quad \dist_{B_{k}}(\wt\beta_k,\Id) < M_4 \sum_{j=0}^{k-1} \epsilon_j <\eta. \end{equation} Hence $\wt\alpha_k$ and $\wt\beta_k$ are normal families on $A(t^*)$ and $B(t^*)$, respectively. Passing to convergent subsequences we get holomorphic maps $\alpha\colon A(t^*)\to \mathbb{C}^N$ and $\beta\colon B(t^*)\to \mathbb{C}^N$ satisfying \[ \dist_{A(t^*)}(\alpha,\Id) < \eta, \quad \dist_{B(t^*)}(\beta,\Id) < \eta, \quad \gamma\circ \alpha=\beta \ \text{on}\ C(t^*)\cap X. \] The last equation follows from (\ref{eq:compose-k}). Assuming that the numbers $\epsilon_0>0$ and $\eta>0$ are small enough, the maps $\alpha$, $\beta$ and $\gamma$ are biholomorphic on slightly smaller domains in view of Lemma \ref{lem:closetoId}. Finally, since all maps $\alpha_k$ and $\beta_k$ in the sequence are tangent to the identity to order $n_0$ along the subvariety $X'$, the same is true for the maps $\alpha$ and $\beta$. \end{proof} \begin{remark} 1. The sequences $\wt\alpha_k$ and $\wt\beta_k$ in (\ref{eq:tilde-alpha}) actually converge uniformly on any compact subset $K_A\subset A(t^*)$ and $K_B\subset B(t^*)$, respectively. Indeed, choose open domains $A'$ and $B'$ in $\mathbb{C}^N$ such that $K_A\subset A'\Subset A(t^*)$ and $K_B\subset B'\Subset B(t^*)$. From (\ref{eq:est-tilde-alpha}) and the Cauchy estimates we infer that these sequences are uniformly Lipschitz on $A'$ and $B'$, respectively. From this and (\ref{eq:est-tilde-alpha}) we easily see that the sequences are uniformly Cauchy, and hence uniformly convergent, on $K_A$ and $K_B$, respectively. 2.
If $X$ is a Stein manifold with the {\em density property} in the sense of Varolin \cite[\S 4.10]{FF:book} and $(A,B)$ is a Cartan pair in $X$ such that the set $C=A\cap B$ is $\mathcal{O}(X)$-convex, then the conclusion of Theorem \ref{th:splitting} holds for any biholomorphic map $\gamma$ on a neighborhood $U\subset X$ of $C$ which is isotopic to the identity map on $C$ through a smooth 1-parameter family of biholomorphic maps $\gamma_t\colon U\to \gamma_t(U)\subset X$ $(t\in [0,1])$ such that $\gamma_0=\Id$, $\gamma_1=\gamma$, and $\gamma_t(C)$ is $\mathcal{O}(X)$-convex for every $t\in [0,1]$. The main result of the Anders\'en-Lempert theory (cf.\ \cite[Theorem 4.9.2]{FF:book} for $X=\mathbb{C}^n$ and \cite[Theorem 4.10.6]{FF:book} for the general case) implies that $\gamma$ can be approximated uniformly on a neighborhood of $C$ by holomorphic automorphisms $\phi\in\mathop{{\rm Aut}}(X)$. This allows us to write $\gamma=\phi\circ\tilde \gamma$, where $\tilde\gamma$ is a biholomorphic map close to the identity on a neighborhood of $C$. Applying Theorem \ref{th:splitting} gives $\tilde \gamma=\tilde\beta\circ\alpha^{-1}$, where $\alpha$ and $\tilde \beta$ are biholomorphic maps close to the identity near $A$ and $B$, respectively. Setting $\beta=\phi\circ\tilde \beta$ gives $\gamma=\beta\circ\alpha^{-1}$. \qed \end{remark} \begin{remark}\label{rem:3.2gen} Theorem \ref{th:splitting} and its proof are amenable to various generalizations. In particular, since all steps in the proof are obtained by using (possibly nonlinear) operators on various function spaces, it immediately generalizes to the parametric case when the data (in particular, the map $\gamma$ to be decomposed in the form $\gamma=\beta\circ\alpha^{-1}$) depend on parameters. A more ambitious generalization would amount to also letting the domains of these maps depend on parameters; this may be applicable in various constructions. A first step in this direction has already been made in \cite{DFW}. 
\qed \end{remark} \section{Functions without critical points in the top dimensional strata} \label{sec:open-stratum} In this section we construct holomorphic functions that have no critical points in the regular locus of a Stein space. The following result is the main inductive step in the proof of Theorem \ref{th:stratified} given in the following section. \begin{theorem} \label{th:th1} Assume that $X$ is a Stein space, $X'\subset X$ is a closed complex subvariety of $X$ containing $X_\sing$, $P=\{p_1,p_2,\ldots\}$ is a closed discrete set in $X'$, $K$ is a compact $\mathcal{O}(X)$-convex set in $X$ (possibly empty), and $f$ is a holomorphic function on a neighborhood of $K\cup X'$ such that $\Crit(f|_{U\setminus X'}) = \emptyset$ for some neighborhood $U\subset X$ of $K$. Then for any $\epsilon>0$ and integers $r\in\mathbb{N}$ and $n_k \in\mathbb{N}$ $(k=1,2,\ldots)$ there exists a holomorphic function $F\in\mathcal{O}(X)$ satisfying the following conditions: \begin{itemize} \item[\rm (i)] $F-f$ vanishes to order $r$ along the subvariety $X'$, \item[\rm (ii)] $F-f$ vanishes to order $n_k$ at the point $p_k\in P$ for every $k=1,2,\ldots$, \item[\rm (iii)] $||F-f||_K <\epsilon$, and \item[\rm (iv)] $F$ has no critical points in $X\setminus X'$. \end{itemize} \end{theorem} Applying Theorem \ref{th:th1} with the subvariety $X'=X_\sing$ we find holomorphic functions on $X$ that have no critical points in the smooth part $X_\reg$. \begin{remark}\label{rem:th1} Theorem \ref{th:th1} implies at no cost the following result. Let $A=\{a_j\}$ be a closed discrete set in $X$ contained in $X\setminus (K\cup X')$. Then there exists a function $F\in \mathcal{O}(X)$ satisfying conditions (i)--(iii) and also the condition \begin{itemize} \item[\rm (iv')] $\Crit(F|_{X\setminus X'}) = A$. 
\end{itemize} Indeed, choose any germs $g_{j}\in \mathcal{O}_{X,a_j}$ at points $a_j\in A$ and apply Theorem \ref{th:th1} with the subvariety $X'_0=A \cup X'$, the discrete set $P_0=A\cup P$, and the function $f$ extended by $g_j$ to a small neighborhood of the point $a_j\in A$. \qed \end{remark} We begin with a couple of lemmas. The first one shows that we can replace the function $f$ in Theorem \ref{th:th1} by a function in $\mathcal{O}(X)$. \begin{lemma} \label{lem:extension} {\rm (Assumptions as in Theorem \ref{th:th1}.)} Let $L$ be a compact $\mathcal{O}(X)$-convex set in $X$ such that $K\subset \mathring L$. Then there exists a function $\tilde f\in \mathcal{O}(X)$ with the following properties: \begin{itemize} \item[\rm (a)] $\tilde f -f$ vanishes to order $r$ along the subvariety $X'$, \item[\rm (b)] $\tilde f-f$ vanishes to order $n_k$ at the point $p_k\in P$ for every $k=1,2,\ldots$, \item[\rm (c)] $||\tilde f - f||_K <\epsilon$, and \item[\rm (d)] there is a neighborhood $W\subset X$ of the compact set $K\cup (L\cap X')$ such that $\tilde f$ has no critical points in the set $W\setminus X'$. \end{itemize} \end{lemma} \begin{proof} Let $\mathcal{E}\subset \mathcal{O}_X$ be the coherent sheaf of ideals whose stalk at any point $p_k\in P$ equals $\mathfrak{m}_{p_k}^{n_k}$ and $\mathcal{E}_x=\mathcal{O}_{X,x}$ for every $x\in X\setminus P$. The product \begin{equation} \label{eq:sheafE} \wt \mathcal{E}:=\mathcal{E} \mathcal{J}_{X'}^{r} \subset \mathcal{O}_X \end{equation} of $\mathcal{E}$ and the $r$-th power of the ideal sheaf $\mathcal{J}_{X'}$ is a coherent sheaf of ideals in $\mathcal{O}_X$. Consider the short exact sequence of sheaf homomorphisms \[ 0\longrightarrow \wt \mathcal{E} \longrightarrow \mathcal{O}_X \longrightarrow \mathcal{O}_X/\wt\mathcal{E} \longrightarrow 0. \] Since the quotient sheaf $\mathcal{O}_X/\wt \mathcal{E}$ is supported on $X'$, the function $f$ determines a section of $\mathcal{O}_X/\wt \mathcal{E}$.
Since $H^1(X;\wt\mathcal{E})=0$ by Theorem B, the same section is induced by a function $g\in\mathcal{O}(X)$. Clearly $g$ satisfies conditions (a) and (b) of the lemma (with $g$ in place of $\tilde f$). To get condition (c) we proceed as follows. Cartan's Theorem A furnishes sections $\xi_1,\ldots,\xi_m \in\Gamma(X, \wt \mathcal{E})$ which generate the sheaf $\wt \mathcal{E}$ over the compact set $K$. By the choice of $g$ the difference $f-g$ is a section of $\wt \mathcal{E}$ over a neighborhood of $K$. Applying Theorem B to the epimorphism of coherent analytic sheaves $\mathcal{O}_X^m \longrightarrow \wt\mathcal{E}\longrightarrow 0$, $(h_1,\ldots,h_m)\mapsto \sum_{i=1}^m h_i\xi_i$, we obtain $f=g + \sum_{i=1}^m h_i\xi_i$ on a neighborhood of $K$ for some holomorphic functions $h_i\in \mathcal{O}(K)$. By the Oka-Weil theorem we can approximate the $h_i$'s uniformly on $K$ by functions $\tilde h_i\in \mathcal{O}(X)$. The function $\tilde f = g+\sum_{i=1}^m \tilde h_i\xi_i \in\mathcal{O}(X)$ satisfies properties (a)--(c). Indeed, $\tilde f-g=\sum_{i=1}^m \tilde h_i\xi_i$ is a global section of $\wt\mathcal{E}$, so conditions (a) and (b) follow from the corresponding properties of $g$; moreover, $\tilde f-f=\sum_{i=1}^m (\tilde h_i-h_i)\,\xi_i$ on a neighborhood of $K$, so condition (c) holds as soon as the approximations of the $h_i$'s on $K$ are close enough. By the Stability Lemma \ref{lem:stability} and the Genericity Lemma \ref{lem:generic} we can also satisfy condition (d) by choosing $\tilde f$ generic and taking $\epsilon>0$ small enough. \end{proof} The next lemma is the main step in the proof of Theorem \ref{th:th1}; here we use the splitting lemma furnished by Theorem \ref{th:splitting}. Another key ingredient is the Runge approximation theorem for noncritical holomorphic functions on $\mathbb{C}^n$, furnished by Theorem 3.1 in \cite{FF:Acta}. \begin{lemma} \label{lem:th1} {\rm (Assumptions as in Theorem \ref{th:th1}.)} Let $L\subset X$ be a compact $\mathcal{O}(X)$-convex set such that $K\subset \mathring L$. Then there exists a holomorphic function $F\in \mathcal{O}(X)$ which satisfies conditions (i)--(iii) in Theorem \ref{th:th1} and also the following condition: \begin{itemize} \item[\rm (iv')] $\Crit(F|_{U'\setminus X'}) = \emptyset$, where $U'\subset X$ is an open neighborhood of $L$.
\end{itemize} \end{lemma} \begin{proof} To simplify the exposition, we replace the number $r$ by the maximum of $r$ and the numbers $n_k\in\mathbb{N}$ over all points $p_k\in P\cap L$ (a finite set). Choosing $F$ to satisfy condition (i) for this new $r$, it will also satisfy condition (ii) at the points $p_k\in P\cap L$. Let $W$ be the set from Lemma \ref{lem:extension}-(d). By \cite[Lemma 8.4, p.\ 662]{FP2} there exist finitely many compact $\mathcal{O}(X)$-convex sets $A_0\subset A_1\subset \cdots \subset A_m=L$ such that $K \cup (L\cap X') \subset A_0 \subset W$ and for every $j=0,1,\ldots, m-1$ we have $A_{j+1}=A_j\cup B_j$, where $(A_j,B_j)$ is a Cartan pair (cf.\ Definition \ref{def:CP}) and $B_j\subset L \setminus X' \subset X_\reg$. Furthermore, the construction in \cite{FP3} gives for every $j=0,1,\ldots, m-1$ an open set $U_j\subset X_\reg$ containing $B_j$ and a biholomorphic map $\phi_j\colon U_j\to U'_j \subset \mathbb{C}^n$ onto an open subset of $\mathbb{C}^n$ (where $n$ is the dimension of $X$ at the points of $B_j$) such that the set $\phi_j(C_j)$ is polynomially convex in $\mathbb{C}^n$. (Here $C_j=A_j\cap B_j$.) The proof of Lemma 8.4 in \cite{FP2} is written in the case when $X$ is a Stein manifold, but it also applies in the present situation, for example, by embedding a relatively compact neighborhood of $L\subset X$ as a closed complex subvariety in a Euclidean space $\mathbb{C}^N$. We first find a function $\wt F$ that is holomorphic on a neighborhood of $L$ and satisfies the conclusion of the lemma there. This is accomplished by a finite induction, starting with $F_0=f$ which by the assumption satisfies these properties on the open set $W\supset A_0$. We provide an outline and refer to \cite{FF:Acta} for further details. By the assumption $F_0$ is noncritical on a neighborhood of the set $C_0=A_0\cap B_0$. 
Since $C_0$ is polynomially convex in a certain holomorphic coordinate system on a neighborhood of $B_0$ in $X\setminus X'\subset X_\reg$, Theorem 3.1 in \cite[p.\ 154]{FF:Acta} furnishes a noncritical holomorphic function $G_0$ on a neighborhood of $B_0$ in $X\setminus X'$ such that $G_0$ approximates $F_0$ as closely as desired uniformly on a neighborhood of $C_0$. Assuming that the approximation is close enough, we can apply Theorem \ref{th:splitting} to glue $F_0$ and $G_0$ into a new function $F_1$ that is holomorphic on a neighborhood of $A_0\cup B_0=A_1$ and has no critical points, except perhaps on the subvariety $X'$. The gluing of $F_0$ and $G_0$ is accomplished by first finding a biholomorphic map $\gamma$ close to the identity on a neighborhood of the attaching set $C_0$ such that \[ F_0=G_0\circ \gamma \quad \text{on a neighborhood of}\ C_0. \] Since $C_0$ is a Stein compact in the complex manifold $X\setminus X'$, such $\gamma$ is furnished by Lemma 5.1 in \cite[p.\ 167]{FF:Acta}. If $\gamma$ is close enough to $\Id$ (which holds if $G_0$ is chosen sufficiently close to $F_0$ uniformly on a neighborhood of $C_0$), then Theorem \ref{th:splitting} furnishes a decomposition \[ \gamma \circ\alpha=\beta, \] where $\alpha$ is a biholomorphic map close to the identity on a neighborhood of $A_0$ in $X$ and $\beta$ is a map with the analogous properties on a neighborhood of $B_0$ in $X$. By Theorem \ref{th:splitting} we can ensure in addition that $\alpha$ is tangent to the identity to order $r$ along the subvariety $X'$ intersected with its domain. (The domain of $\beta$ does not intersect $X'$.) Then \[ F_0\circ \alpha = (G_0\circ \gamma)\circ \alpha = G_0\circ(\gamma\circ\alpha) = G_0\circ\beta \quad \text{on a neighborhood of}\ C_0, \] so the two sides amalgamate into a holomorphic function $F_1$ on a neighborhood of $A_0\cup B_0=A_1$. By the construction, $F_1$ approximates $F_0$ on a neighborhood of $A_0$, $F_1-F_0$ vanishes to order $r$ along $X'$, and $F_1$ is noncritical except perhaps on $X'$.
The last property holds because the maps $\alpha$ and $\beta$ are biholomorphic on their respective domains and $\alpha|_{X'}$ is the identity. Repeating the same construction with $F_1$ we get the next function $F_2$ on a neighborhood of $A_2$, etc. In $m$ steps of this kind we find a function $\wt F=F_m$ on a neighborhood of the set $A_m=L$ satisfying the stated properties. It remains to replace $\wt F$ by a function $F\in \mathcal{O}(X)$ satisfying the same properties. This is done as in Lemma \ref{lem:extension} above. Let $\wt \mathcal{E}$ be the sheaf (\ref{eq:sheafE}). Pick sections $\xi_1,\ldots,\xi_m\in\Gamma(X,\wt \mathcal{E})$ which generate $\wt\mathcal{E}$ over the compact set $L$. By the construction of $\wt F$, the difference $\wt F-f$ is a section of $\wt\mathcal{E}$ over a neighborhood of $L$. Hence Cartan's Theorem B furnishes holomorphic functions $\tilde h_1,\ldots, \tilde h_m\in \mathcal{O}(U')$ on an open set $U'\supset L$ such that \[ \wt F=f+\sum_{i=1}^m \tilde h_i\,\xi_i \quad \text{on}\ U'. \] Choose a compact $\mathcal{O}(X)$-convex set $L'$ such that $L\subset \mathring L'\subset L'\subset U'$. Approximating each $\tilde h_i$ uniformly on $L'$ by a function $h_i\in \mathcal{O}(X)$ and setting \[ F=f+ \sum_{i=1}^m h_i \,\xi_i \in\mathcal{O}(X) \] we get a function $F$ satisfying properties (i)--(iii). By Lemma \ref{lem:stability} the function $F$ also satisfies property (iv') provided that the differences $||h_i-\tilde h_i||_{L'}$ for $i=1,\ldots,m$ are small enough. \end{proof} \begin{proof}[Proof of Theorem \ref{th:th1}] In view of Lemma \ref{lem:extension} we may assume that $f\in\mathcal{O}(X)$. Choose an increasing sequence $K=K_0\subset K_1\subset \cdots \subset \bigcup_{i=0}^\infty K_i =X$ of compact $\mathcal{O}(X)$-convex sets satisfying $K_i\subset \mathring K_{i+1}$ for every $i=0,1,\ldots$. Set $F_0=f$, $\epsilon_0=\epsilon/2$, and $r_0=r$.
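Before describing the induction, we record the elementary estimate behind the convergence of the scheme: the tolerances $\epsilon_i$ below will be chosen so that $\epsilon_i<\epsilon_{i-1}/2$, and since $\epsilon_0=\epsilon/2$ this gives
\[
\sum_{i=0}^{\infty}\epsilon_i < \sum_{i=0}^{\infty} 2^{-i-1}\epsilon = \epsilon,
\qquad
\sum_{j=i}^{\infty}\epsilon_j < 2\epsilon_i \quad \text{for every}\ i\ge 0.
\]
Hence any sequence of functions $F_i\in\mathcal{O}(X)$ with $||F_{i+1}-F_i||_{K_i}<\epsilon_i$ converges uniformly on compacts in $X$, and its limit $F$ satisfies $||F-f||_K<\epsilon$ and $||F-F_i||_{K_i}<2\epsilon_i$ for every $i$.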
We inductively construct a sequence of functions $F_i\in\mathcal{O}(X)$ and numbers $\epsilon_i>0$, $r_i\in \mathbb{N}$ such that the following conditions hold for every $i=0,1,2,\dots$: \begin{itemize} \item[\rm (a)] $\Crit(F_i|_{U_i\setminus X'})=\emptyset$ for an open neighborhood $U_i\supset K_i$, \item[\rm (b)] $||F_{i} -F_{i-1}||_{K_{i-1}}<\epsilon_{i-1}$, \item[\rm (c)] $F_{i}-F_{i-1}$ vanishes to order $r_{i-1}$ along the subvariety $X'$, \item[\rm (d)] $F_{i}-F_{i-1}$ vanishes to order $n_k$ at each of the points $p_k\in P$, \item[\rm (e)] $0<\epsilon_{i} < \epsilon_{i-1}/2$ and $r_{i}\ge r_{i-1}$, and \item[\rm (f)] if $F\in \mathcal{O}(X)$ is such that $||F -F_{i}||_{K_{i}}< 2\epsilon_i$ and $F-F_{i}$ vanishes to order $r_{i}$ along $X'$, then $\Crit(F|_{U\setminus X'})=\emptyset$ for an open neighborhood $U\supset K_{i-1}$. \end{itemize} Assume that we have already found these quantities up to index $i-1$ for some $i \in\mathbb{N}$. (For $i=0$ the function $F_0$ satisfies condition (a) and the remaining conditions are void.) Lemma \ref{lem:th1} furnishes the next map $F_{i}\in\mathcal{O}(X)$ which satisfies conditions (a)--(d). For this $F_{i}$ we then pick the next pair of numbers $\epsilon_{i}>0$ and $r_{i}\in\mathbb{N}$ such that conditions (e) and (f) hold. In view of the Stability Lemma \ref{lem:stability}, condition (f) holds as soon as $\epsilon_{i}>0$ is chosen small enough and $r_{i}\in \mathbb{N}$ is chosen big enough. This completes the induction step. It is straightforward to verify that the sequence $F_i$ converges uniformly on compacts in $X$ and the limit function $F= \lim_{i\to \infty} F_i \in \mathcal{O}(X)$ satisfies the conclusion of Theorem \ref{th:th1}. \end{proof} In the proof of Theorem \ref{th:stratified} (see \S \ref{sec:stratified}) we shall combine Theorem \ref{th:th1} with the following lemma which provides extension from a subvariety and jet interpolation on a discrete set.
\begin{lemma}[{\bf Extension with jet interpolation}] \label{lem:interpolate} Let $X$ be a Stein space, $X'$ a closed complex subvariety of $X$, and $P=\{p_1,p_2,\ldots\}$ a closed discrete subset of $X'$. Given a function $f\in \mathcal{O}(X')$ and germs $f_k\in\mathcal{O}_{X,p_k}$ for each $p_k\in P$ such that $f_{p_k} - (f_k|_{X'})_{p_k} \in \mathfrak{m}_{X',p_k}^{n_k}$ for some $n_k\in \mathbb{N}$, there exists $F\in\mathcal{O}(X)$ such that $F|_{X'}=f$ and $F_{p_k} -f_k\in \mathfrak{m}_{X,p_k}^{n_k}$ for every $p_k\in P$. \end{lemma} \begin{proof} Let $\mathcal{J}_{X'}$ denote the sheaf of ideals of the subvariety $X'$. By Lemma \ref{lem:germs} there exists for every point $p_k\in P$ a germ $g_k\in\mathcal{O}_{X,p_k}$ such that $f_k-g_k \in \mathfrak{m}_{X,p_k}^{n_k}$ and $(g_k|_{X'})_{p_k}=f_{p_k} \in\mathcal{O}_{X',p_k}$. Pick a function $\tilde f\in \mathcal{O}(X)$ with $\tilde f|_{X'}=f$; then $\tilde f_{p_k} - g_k \in \mathcal{J}_{X',p_k}$. Let $\mathcal{E}\subset \mathcal{O}_X$ be the coherent sheaf of ideals whose stalk at any point $p_k\in P$ equals $\mathfrak{m}_{p_k}^{n_k}$ and $\mathcal{E}_x=\mathcal{O}_{X,x}$ for every $x\in X\setminus P$. Consider the following short exact sequence of coherent analytic sheaves on $X$: \[ 0\longrightarrow \mathcal{E} \mathcal{J}_{X'} \longrightarrow \mathcal{J}_{X'} \longrightarrow \mathcal{J}_{X'}/(\mathcal{E} \mathcal{J}_{X'}) \longrightarrow 0. \] The quotient sheaf $\mathcal{J}_{X'}/(\mathcal{E} \mathcal{J}_{X'})$ is supported on the discrete set $P$, and hence the collection of germs $\tilde f_{p_k} - g_k \in \mathcal{J}_{X',p_k}$ determines a section of this sheaf. Since $H^1(X;\mathcal{E} \mathcal{J}_{X'})=0$ by Theorem B, this section lifts to a section $h$ of $\mathcal{J}_{X'}$. Consider the function $F:=\tilde f-h \in\mathcal{O}(X)$. We have $F|_{X'}=\tilde f|_{X'}=f$. 
Furthermore, for every point $p_k\in P$ the following identities hold in the ring $\mathcal{O}_{X,p_k}/\mathfrak{m}_{X,p_k}^{n_k}$ of $(n_k-1)$-jets at $p_k$: \[ F_{p_k} = \tilde f_{p_k} - h_{p_k} = \tilde f_{p_k} - \bigl(\tilde f_{p_k} - g_k\bigr) = g_k = f_k \quad \mod \mathfrak{m}_{X,p_k}^{n_k}. \] Thus $F$ satisfies the conclusion of the lemma. \end{proof} \begin{remark} \label{rem2:th1} Lemma \ref{lem:interpolate} gives a version of Theorem \ref{th:th1} in which the function $f$ is assumed to be defined and holomorphic only on the subvariety $X'\subset X$ and on a neighborhood of a compact $\mathcal{O}(X)$-convex set $K\subset X$. Furthermore, we are given germs $f_k\in \mathcal{O}_{X,p_k}$ at points $p_k\in P$ such that the conditions of Lemma \ref{lem:interpolate} hold. Then for any choice of integers $n_k\in \mathbb{N}$ there exists a function $F\in\mathcal{O}(X)$ satisfying Theorem \ref{th:th1}, except that conditions (i) and (ii) are replaced by the following conditions: \begin{itemize} \item[\rm (i')] $F|_{X'}=f$, and \item[\rm (ii')] $F-f_k$ vanishes to order $n_k$ at each point $p_k\in P$. \end{itemize} \end{remark} \section{Stratified noncritical functions on Stein spaces} \label{sec:stratified} In this section we prove Theorem \ref{th:stratified} on the existence of stratified noncritical holomorphic functions. As shown in the Introduction, this will also prove Theorems \ref{th:main} and \ref{th:mainbis}. \begin{proof}[Proof of Theorem \ref{th:stratified}] Let $(X,\Sigma)$ be a stratified Stein space (see \S\ref{sec:intro}). For every integer $i\in \mathbb{Z}_+$ we let $\Sigma_i$ denote the collection of all strata of dimension at most $i$ in $\Sigma$, and let $X_i$ denote the union of all strata in the family $\Sigma_i$ (the $i$-skeleton of $\Sigma$). Since the boundary of any stratum is a union of lower dimensional strata, $X_i$ is a closed complex subvariety of $X$ of dimension $\le i$ for every $i\in\mathbb{Z}_+$. 
Clearly $\dim X_i=i$ precisely when $\Sigma$ contains at least one $i$-dimensional stratum; otherwise $X_i=X_{i-1}$. We have $X_0\subset X_1\subset \cdots\subset \bigcup_{i=0}^\infty X_i=X$, the sequence $X_i$ is stationary on any compact subset of $X$, and $(X_i,\Sigma_i)$ is a stratified Stein subspace of $(X,\Sigma)$ for every $i$. Note that $X_0=\{p_1,p_2,\ldots\}$ is a discrete subset of $X$. By the assumption of Theorem \ref{th:stratified} we are given for each $p_k\in X_0$ a germ $f_k\in\mathcal{O}_{X,p_k}$. Our task is to find a $\Sigma$-noncritical function $F \in\mathcal{O}(X)$ which agrees with the germ $f_k$ at $p_k\in X_0$ to order $n_k\in \mathbb{N}$. If the germs $f_k \in \mathcal{O}_{X,p_k}$ $(p_k\in X_0)$ are chosen (strongly) noncritical and $n_k\ge 2$ for each $k$, then the resulting function $F\in\mathcal{O}(X)$ will be (strongly) noncritical on $X$. Let $F_0\colon X_0\to\mathbb{C}$ be the function on the zero dimensional skeleton defined by $F_0(p_k)=f_k(p_k)$ for every $p_k\in X_0$. We shall inductively construct a sequence of functions $F_i\in\mathcal{O}(X_i)$ satisfying the following conditions for $i=1,2,\ldots$: \begin{itemize} \item[\rm (i)] $F_i|_{X_{i-1}} = F_{i-1}$, \item[\rm (ii)] $F_i-f_k|_{X_i}$ vanishes to order $n_k$ at every point $p_k \in X_0$, and \item[\rm (iii)] $F_i$ is a stratified noncritical function on the stratified Stein space $(X_i,\Sigma_i)$. \end{itemize} Assuming that we have found functions $F_1,\ldots,F_{i-1}$ with these properties, we now explain how to find the next function $F_i$ in the sequence. If $X_i=X_{i-1}$ then we can simply take $F_i=F_{i-1}$. If this is not the case, then $X_i\setminus X_{i-1}$ is a complex manifold of dimension $i$.
Apply Lemma \ref{lem:interpolate} with $X=X_i$, $X'=X_{i-1}$ and $f=F_{i-1}$ to find a function $G_i\in \mathcal{O}(X_i)$ (called $F$ in the lemma) which satisfies \begin{itemize} \item[\rm (i')] $G_i|_{X_{i-1}} = F_{i-1}$, and \item[\rm (ii')] $(G_i)_{p_k} - (f_k|_{X_i})_{p_k} \in \mathfrak{m}_{X_i,p_k}^{n_k}$ at every point $p_k \in X_0$. \end{itemize} Now $G_i$ satisfies the hypotheses of Theorem \ref{th:th1} (with $X=X_i$, $X'=X_{i-1}$ and $f=G_i$), so we get a function $F_i\in\mathcal{O}(X_i)$ which agrees with $G_i$ on $X_{i-1}$, agrees with $G_i$ (and hence with $f_k$) to order $n_k$ at $p_k\in X_0$ for each $k$, and is noncritical on $X_i\setminus X_{i-1}$. Hence $F_i$ satisfies properties (i)--(iii) and the induction may proceed. Since the sequence of subvarieties $X_0\subset X_1\subset \cdots\subset \bigcup_{i=0}^\infty X_i=X$ is stationary on any compact subset of $X$, the sequence of functions $F_i\in\mathcal{O}(X_i)$ obtained in this way determines a holomorphic function $F\in\mathcal{O}(X)$ by setting $F=F_i$ on $X_i$ for any $i\in\mathbb{N}$. It is immediate that $F$ satisfies the conclusion of Theorem \ref{th:stratified}. \end{proof} A similar construction yields the following result. \begin{theorem} \label{th:stratbis} Given a reduced Stein space $X$, a closed complex subvariety $X'$ of $X$ and a function $f \in \mathcal{O}(X')$, there exists $F\in \mathcal{O}(X)$ such that $F|_{X'}=f$ and $F$ is strongly noncritical on $X\setminus X'$. Alternatively, we can choose $F$ to have critical points at a prescribed discrete set $P$ in $X$ which is contained in $X\setminus X'$.
\end{theorem} \begin{proof} We can stratify the difference $X\setminus X'=\bigcup_j S_j$ into a union of pairwise disjoint connected complex manifolds (strata) such that \begin{itemize} \item the boundary $bS_j=\overline S_j \setminus S_j$ of any stratum is contained in the union of $X'$ and of lower dimensional strata, \item every point of $P$ is a zero dimensional stratum, and \item every compact set in $X$ intersects at most finitely many strata. \end{itemize} Consider the increasing chain $X' \subset X_0 \subset X_1\subset\cdots \subset \bigcup_{i=1}^\infty X_i =X$ of closed complex subvarieties, where $X_i$ is the union of $X'$ and all strata $S_j$ of dimension at most $i$. In particular, we have $P\cup X' \subset X_0$. Then $X_i\setminus X_{i-1}$ is either empty or a disjoint union of $i$-dimensional complex manifolds contained in $X\setminus X'$. Let $P=\{p_1,p_2,\ldots\}$, and assume that we are given germs $f_k\in \mathcal{O}_{X,p_k}$ and integers $n_k\in\mathbb{N}$. We start with the function $F_0\in \mathcal{O}(X_0)$ which agrees with $f$ on $X'$ and satisfies $F_0(p_k)=f_k(p_k)$ for every $p_k \in P$. By Lemma \ref{lem:interpolate} we can find a function $G_1\in \mathcal{O}(X)$ which agrees with $F_0$ on $X_0$ and satisfies $(G_1)_{p_k} - f_k\in \mathfrak{m}_{X,p_k}^{n_k}$ at each point $p_k\in P$. Theorem \ref{th:th1}, applied to the Stein pair $X_0\subset X_1$ and the function $G_1$ (denoted $f$ in the theorem), furnishes a function $F_1\in\mathcal{O}(X_1)$ which agrees with $G_1$ (and hence with $F_0$) on $X_0$ and satisfies $(F_1)_{p_k} - (f_k|_{X_1})_{p_k} \in \mathfrak{m}_{X_1,p_k}^{n_k}$ for every $p_k\in P$. This completes the first step of the induction. By using again Lemma \ref{lem:interpolate} and then Theorem \ref{th:th1} we find the next function $F_2\in\mathcal{O}(X_2)$ such that $F_2|_{X_1}=F_1$ and $(F_2)_{p_k} - (f_k|_{X_2})_{p_k} \in \mathfrak{m}_{X_2,p_k}^{n_k}$ at every point $p_k\in P$.
Clearly this process can be continued inductively. We obtain a sequence $F_i\in\mathcal{O}(X_i)$ for $i=1,2,\ldots$ such that the function $F\in\mathcal{O}(X)$, defined by $F|_{X_i}=F_i$ for every $i=1,2,\ldots$, satisfies the conclusion of Theorem \ref{th:stratbis}. \end{proof} \begin{corollary} \label{cor:extension} Let $X$ be a reduced Stein space, $X'$ a closed complex subvariety of $X$ without isolated points, and $f\in\mathcal{O}(X')$ a noncritical holomorphic function. Then there exists a noncritical function $F\in\mathcal{O}(X)$ such that $F|_{X'}=f$. \end{corollary} \textit{Acknowledgements.} Research on this paper was supported in part by the program P1-0291 and the grant J1-5432 from ARRS, Republic of Slovenia. I wish to thank the referee for the remarks which led to improved presentation. \vskip 0.5cm \noindent {\scshape Franc Forstneri\v c} \noindent Faculty of Mathematics and Physics, University of Ljubljana, and \noindent Institute of Mathematics, Physics and Mechanics \noindent Jadranska 19, 1000 Ljubljana, Slovenia \noindent e-mail: {\tt [email protected]} \end{document}
arXiv
Nuclear Reactions in the Crusts of Accreting Neutron Stars (1803.03818) R. Lau, M. Beard, S. S. Gupta, H. Schatz, A. V. Afanasjev, E. F. Brown, A. Deibel, L. R. Gasques, G. W. Hitt, W. R. Hix, L. Keek, P. Möller, P. S. Shternin, A. Steiner, M. Wiescher, Y. Xu April 21, 2018 nucl-th, astro-ph.SR, astro-ph.HE X-ray observations of transiently accreting neutron stars during quiescence provide information about the structure of neutron star crusts and the properties of dense matter. Interpretation of the observational data requires an understanding of the nuclear reactions that heat and cool the crust during accretion, and define its nonequilibrium composition. We identify here in detail the typical nuclear reaction sequences down to a depth in the inner crust where the mass density is 2E12 g/cm^3 using a full nuclear reaction network for a range of initial compositions. The reaction sequences differ substantially from previous work. We find a robust reduction of crust impurity at the transition to the inner crust regardless of initial composition, though shell effects can delay the formation of a pure crust somewhat to densities beyond 2E12 g/cm^3. This naturally explains the small inner crust impurity inferred from observations of a broad range of systems. The exception are initial compositions with A >= 102 nuclei, where the inner crust remains impure with an impurity parameter of Qimp~20 due to the N = 82 shell closure. In agreement with previous work we find that nuclear heating is relatively robust and independent of initial composition, while cooling via nuclear Urca cycles in the outer crust depends strongly on initial composition. This work forms a basis for future studies of the sensitivity of crust models to nuclear physics and provides profiles of composition for realistic crust models. The 12C(a,g)16O reaction and its implications for stellar helium burning (1709.03144) R.J. deBoer, J. Gorres, M. Wiescher, R.E. Azuma, A. Best, C.R. Brune, C.E. Fields, S. 
Jones, M. Pignatari, D. Sayre, K. Smith, F.X. Timmes, E. Uberseder The creation of carbon and oxygen in our universe is one of the forefront questions in nuclear astrophysics. The determination of the abundance of these elements is key to both our understanding of the formation of life on earth and to the life cycles of stars. While nearly all models of different nucleosynthesis environments are affected by the production of carbon and oxygen, a key ingredient, the precise determination of the reaction rate of 12C(a,g)16O, has long remained elusive. This is owed to the reaction's inaccessibility, both experimentally and theoretically. Nuclear theory has struggled to calculate this reaction rate because the cross section is produced through different underlying nuclear mechanisms. Isospin selection rules suppress the E1 component of the ground state cross section, creating a unique situation where the E1 and E2 contributions are of nearly equal amplitudes. Experimentally there have also been great challenges. Measurements have been pushed to the limits of state of the art techniques, often developed for just these measurements. The data have been plagued by uncharacterized uncertainties, often the result of the novel measurement techniques, that have made the different results challenging to reconcile. However, the situation has markedly improved in recent years, and the desired level of uncertainty, about 10%, may be in sight. In this review the current understanding of this critical reaction is summarized. The emphasis is placed primarily on the experimental work and interpretation of the reaction data, but discussions of the theory and astrophysics are also pursued. The main goal is to summarize and clarify the current understanding of the reaction and then point the way forward to an improved determination of the reaction rate. Stellar ($n,\gamma$) cross section of $^{23}$Na (1702.01541) E. Uberseder, M. Heil, F. Käppeler, C. Lederer, A. Mengoni, S. Bisterzo, M. 
Pignatari, M. Wiescher The cross section of the $^{23}$Na($n, \gamma$)$^{24}$Na reaction has been measured via the activation method at the Karlsruhe 3.7 MV Van de Graaff accelerator. NaCl samples were exposed to quasistellar neutron spectra at $kT=5.1$ and 25 keV produced via the $^{18}$O($p, n$)$^{18}$F and $^{7}$Li($p, n$)$^{7}$Be reactions, respectively. The derived capture cross sections $\langle\sigma\rangle_{\rm kT=5 keV}=9.1\pm0.3$ mb and $\langle\sigma\rangle_{\rm kT=25 keV}=2.03 \pm 0.05$ mb are significantly lower than reported in literature. These results were used to substantially revise the radiative width of the first $^{23}$Na resonance and to establish an improved set of Maxwellian average cross sections. The implications of the lower capture cross section for current models of $s$-process nucleosynthesis are discussed. Galactic Chemical Evolution: the Impact of the 13C-pocket Structure on the s-process Distribution (1701.01056) S. Bisterzo, C. Travaglio, M. Wiescher, F. Käppeler, R. Gallino Jan. 4, 2017 astro-ph.SR The solar s-process abundances have been analyzed in the framework of a Galactic Chemical Evolution (GCE) model. The aim of this work is to implement the study by Bisterzo et al. (2014), who investigated the effect of one of the major uncertainties of asymptotic giant branch (AGB) yields, the internal structure of the 13C pocket. We present GCE predictions of s-process elements computed with additional tests in the light of the suggestions provided in recent publications. The analysis is extended to different metallicities, by comparing GCE results and updated spectroscopic observations of unevolved field stars. We verify that the GCE predictions obtained with different tests may represent, on average, the evolution of selected neutron-capture elements in the Galaxy. The impact of an additional weak s-process contribution from fast-rotating massive stars is also explored. Shell and explosive hydrogen burning (1611.06244) A. 
Boeltzig, C.G. Bruno, F. Cavanna, S. Cristallo, T. Davinson, R. Depalo, R.J. deBoer, A. Di Leva, F. Ferraro, G. Imbriani, P. Marigo, F. Terrasi, M. Wiescher Nov. 18, 2016 nucl-ex The nucleosynthesis of light elements, from helium up to silicon, mainly occurs in Red Giant and Asymptotic Giant Branch stars and Novae. The relative abundances of the synthesized nuclides critically depend on the rates of the nuclear processes involved, often through non-trivial reaction chains, combined with complex mixing mechanisms. In this review, we summarize the contributions made by LUNA experiments in furthering our understanding of nuclear reaction rates necessary for modeling nucleosynthesis in AGB stars and Novae explosions. Experimental study of the astrophysical gamma-process reaction 124Xe(alpha,gamma)128Ba (1609.05612) Z. Halász, E. Somorjai, Gy. Gyürky, Z. Elekes, Zs. Fülöp, T. Szücs, G.G. Kiss, N. Szegedi, T. Rauscher, J. Görres, M. Wiescher Sept. 19, 2016 nucl-ex, astro-ph.SR The synthesis of heavy, proton rich isotopes in the astrophysical gamma-process proceeds through photodisintegration reactions. For the improved understanding of the process, the rates of the involved nuclear reactions must be known. The reaction 128Ba(g,a)124Xe was found to affect the abundance of the p nucleus 124Xe. Since the stellar rate for this reaction cannot be determined by a measurement directly, the aim of the present work was to measure the cross section of the inverse 124Xe(a,g)128Ba reaction and to compare the results with statistical model predictions. Of great importance is the fact that data below the (a,n) threshold was obtained. Studying simultaneously the 124Xe(a,n)127Ba reaction channel at higher energy allowed to further identify the source of a discrepancy between data and prediction. The 124Xe + alpha cross sections were measured with the activation method using a thin window 124Xe gas cell. 
The studied energy range was between E = 11 and 15 MeV close above the astrophysically relevant energy range. The obtained cross sections are compared with statistical model calculations. The experimental cross sections are smaller than standard predictions previously used in astrophysical calculations. As dominating source of the difference, the theoretical alpha width was identified. The experimental data suggest an alpha width lower by at least a factor of 0.125 in the astrophysical energy range. An upper limit for the 128Ba(g,a)124Xe stellar rate was inferred from our measurement. The impact of this rate was studied in two different models for core-collapse supernova explosions of 25 solar mass stars. A significant contribution to the 124Xe abundance via this reaction path would only be possible when the rate was increased above the previous standard value. Since the experimental data rule this out, they also demonstrate the closure of this production path. Probing astrophysically important states in $^{26}$Mg nucleus to study neutron sources for the $s$-Process (1508.05660) R. Talwar, T. Adachi, G. P. A. Berg, L. Bin, S. Bisterzo, M. Couder, R. J. deBoer, X. Fang, H. Fujita, Y. Fujita, J. Gorres, K. Hatanaka, T. Itoh, T. Kadoya, A. Long, K. Miki, D. Patel, M. Pignatari, Y. Shimbara, A. Tamii, M. Wiescher, T. Yamamoto, M. Yosoi Aug. 23, 2015 nucl-ex The $^{22}$Ne($\alpha$,n)$^{25}$Mg reaction is the dominant neutron source for the slow neutron capture process ($s$-process) in massive stars and contributes, together with the $^{13}$C($\alpha$,n)$^{16}$O, to the production of neutrons for the $s$-process in Asymptotic Giant Branch (AGB) stars. However, the reaction is endothermic and competes directly with the $^{22}$Ne($\alpha,\gamma)^{26}$Mg radiative capture. The uncertainties for both reactions are large owing to the uncertainty in the level structure of $^{26}$Mg near the alpha and neutron separation energies. 
These uncertainties affect the s-process nucleosynthesis calculations in theoretical stellar models. Indirect studies in the past have been successful in determining the energies, $\gamma$-ray and neutron widths of the $^{26}$Mg states in the energy region of interest. However, the high Coulomb barrier hinders a direct measurement of the resonance strengths, which are determined by the $\alpha$-widths for these states. The goal of the present experiments is to identify the critical resonance states and to precisely measure the $\alpha$-widths by $\alpha$-transfer techniques. Hence, the $\alpha$-inelastic scattering and $\alpha$-transfer measurements were performed on a solid $^{26}$Mg target and a $^{22}$Ne gas target, respectively, using the Grand Raiden Spectrometer at RCNP, Osaka, Japan. Six levels (E$_x$ = 10717 keV, 10822 keV, 10951 keV, 11085 keV, 11167 keV and 11317 keV) have been observed above the $\alpha$-threshold in the region of interest (10.61 - 11.32 MeV). The rates are dominated in both reaction channels by the resonance contributions of the states at E$_x$ = 10951, 11167 and 11317 keV. The E$_x$ = 11167 keV state has the most appreciable impact on the ($\alpha,\gamma$) rate and therefore plays an important role for the prediction of the neutron production in s-process environments. The first direct measurement of 12C(12C,n)23Mg at stellar energies (1507.03980) B. Bucher, X.D. Tang, X. Fang, A. Heger, S. Almaraz-Calderon, A. Alongi, A.D. Ayangeakaa, M. Beard, A. Best, J. Browne, C. Cahillane, M. Couder, R.J. deBoer, A. Kontos, L. Lamm, Y.J. Li, A. Long, W. Lu, S. Lyons, M. Notani, D. Patel, N. Paul, M. Pignatari, A. Roberts, D. Robertson, K. Smith, E. Stech, R. Talwar, W.P. Tan, M. Wiescher, S.E. Woosley July 14, 2015 nucl-ex, astro-ph.SR Neutrons produced by the carbon fusion reaction 12C(12C,n)23Mg play an important role in stellar nucleosynthesis.
However, past studies have shown large discrepancies between experimental data and theory, leading to an uncertain cross section extrapolation at astrophysical energies. We present the first direct measurement that extends deep into the astrophysical energy range along with a new and improved extrapolation technique based on experimental data from the mirror reaction 12C(12C,p)23Na. The new reaction rate has been determined with a well-defined uncertainty that exceeds the precision required by astrophysics models. Using our constrained rate, we find that 12C(12C,n)23Mg is crucial to the production of Na and Al in Pop-III Pair Instability Supernovae. It also plays a non-negligible role in the production of weak s-process elements as well as in the production of the important galactic gamma-ray emitter 60Fe. A primordial r-process? (1504.04461) T. Rauscher, J. H. Applegate, J. J. Cowan, F.-K. Thielemann, M. Wiescher April 17, 2015 nucl-th, astro-ph.HE Baryon density inhomogeneities in the early universe can give rise to a floor of heavy elements (up to $A\approx 270$) produced in a primordial r-process with fission cycling. A parameter study with variation of the global baryon to photon ratio $\eta$, and under inclusion of neutron diffusion effects was performed. New results concerning the dependence of the results on nuclear physics parameters are presented. Measurement and analysis of the Am-243 neutron capture cross section at the n_TOF facility at CERN (1412.1707) n_TOF Collaboration: E. Mendoza, D. Cano-Ott, C. Guerrero, E. Berthoumieux, U. Abbondanno, G. Aerts, F. Alvarez-Velarde, S. Andriamonje, J. Andrzejewski, P. Assimakopoulos, L. Audouin, G. Badurek, J. Balibrea, P. Baumann, F. Becvar, F. Belloni, F. Calvino, M. Calviani, R. Capote, C. Carrapico, A. Carrillo de Albornoz, P. Cennini, V. Chepel, E. Chiaveri, N. Colonna, G. Cortes, A. Couture, J. Cox, M. Dahlfors, S. David, I. Dillmann, R. Dolfini, C. Domingo-Pardo, W. Dridi, I. Duran, C. Eleftheriadis, L. 
Ferrant, A. Ferrari, R. Ferreira-Marques, L. Fitzpatrick, H. Frais-Koelbl, K. Fujii, W. Furman, I. Goncalves, E. Gonzalez-Romero, A. Goverdovski, F. Gramegna, E. Griesmayer, F. Gunsing, B. Haas, R. Haight, M. Heil, A. Herrera-Martinez, M. Igashira, S. Isaev, E. Jericha, F. Kappeler, Y. Kadi, D. Karadimos, D. Karamanis, V. Ketlerov, M. Kerveno, P. Koehler, V. Konovalov, E. Kossionides, M. Krticka, C. Lampoudis, H. Leeb, A. Lindote, I. Lopes, R. Lossito, M. Lozano, S. Lukic, J. Marganiec, L. Marques, S. Marrone, T. Martinez, C. Massimi, P. Mastinu, A. Mengoni, P.M. Milazzo, C. Moreau, M. Mosconi, F. Neves, H. Oberhummer, S. O Brien, M. Oshima, J. Pancin, C. Papachristodoulou, C. Papadopoulos, C. Paradela, N. Patronis, A. Pavlik, P. Pavlopoulos, L. Perrot, M.T. Pigni, R. Plag, A. Plompen, A. Plukis, A. Poch, J. Praena, C. Pretel, J. Quesada, T. Rauscher, R. Reifarth, M. Rosetti, C. Rubbia, G. Rudolf, P. Rullhusen, J. Salgado, C. Santos, L. Sarchiapone, I. Savvidis, C. Stephan, G. Tagliente, J.L. Tain, L. Tassan-Got, L. Tavora, R. Terlizzi, G. Vannini, P. Vaz, A. Ventura, D. Villamarin, M.C. Vicente, V. Vlachoudis, R. Vlastou, F. Voss, S. Walter, H. Wendler, M. Wiescher, K. Wisshak Dec. 4, 2014 nucl-ex Background:The design of new nuclear reactors and transmutation devices requires to reduce the present neutron cross section uncertainties of minor actinides. Purpose: Reduce the $^{243}$Am(n,$\gamma$) cross section uncertainty. Method: The $^{243}$Am(n,$\gamma$) cross section has been measured at the n_TOF facility at CERN with a BaF$_{2}$ Total Absorption Calorimeter, in the energy range between 0.7 eV and 2.5 keV. Results: The $^{243}$Am(n,$\gamma$) cross section has been successfully measured in the mentioned energy range. The resolved resonance region has been extended from 250 eV up to 400 eV. In the unresolved resonance region our results are compatible with one of the two incompatible capture data sets available below 2.5 keV. 
The data available in EXFOR and in the literature have been used to perform a simple analysis above 2.5 keV. Conclusions: The results of this measurement help to reduce the $^{243}$Am(n,$\gamma$) cross section uncertainty and suggest that this cross section is underestimated by up to 25% in the neutron energy range between 50 eV and a few keV in the present evaluated data libraries. Measurement of the $^{58}$Ni($\alpha$,$\gamma$)$^{62}$Zn reaction and its astrophysical impact (1405.6149) S. J. Quinn, A. Spyrou, E. Bravo, T. Rauscher, A. Simon, A. Battaglia, M. Bowers, B. Bucher, C. Casarella, M. Couder, P. A. DeYoung, A. C. Dombos, J. Görres, A. Kontos, Q. Li, A. Long, M. Moran, N. Paul, J. Pereira, D. Robertson, K. Smith, M. K. Smith, E. Stech, R. Talwar, W. P. Tan, M. Wiescher May 23, 2014 nucl-ex Cross section measurements of the $^{58}$Ni($\alpha$,$\gamma$)$^{62}$Zn reaction were performed in the energy range $E_{\alpha}=5.5-9.5$ MeV at the Nuclear Science Laboratory of the University of Notre Dame, using the NSCL Summing NaI(Tl) detector and the $\gamma$-summing technique. The measurements are compared to predictions in the statistical Hauser-Feshbach model of nuclear reactions using the SMARAGD code. It is found that the energy dependence of the cross section is reproduced well but the absolute value is overestimated by the prediction. This can be remedied by rescaling the $\alpha$ width by a factor of 0.45. Stellar reactivities were calculated with the rescaled $\alpha$ width and their impact on nucleosynthesis in type Ia supernovae has been studied. It is found that the resulting abundances change by up to 5\% when using the new reactivities. Galactic Chemical Evolution and solar s-process abundances: dependence on the 13C-pocket structure (1403.1764) S. Bisterzo, C. Travaglio, R. Gallino, M. Wiescher, F. Käppeler March 7, 2014 astro-ph.SR We study the s-process abundances (A > 90) at the epoch of the solar-system formation.
AGB yields are computed with an updated neutron capture network and updated initial solar abundances. We confirm our previous results obtained with a Galactic Chemical Evolution (GCE) model: (i) as suggested by the s-process spread observed in disk stars and in presolar meteoritic SiC grains, a weighted average of s-process strengths is needed to reproduce the solar s-distribution of isotopes with A > 130; (ii) an additional contribution (of about 25%) is required in order to represent the solar s-process abundances of isotopes from A = 90 to 130. Furthermore, we investigate the effect of different internal structures of the 13C-pocket, which may affect the efficiency of the 13C(a, n)16O reaction, the major neutron source of the s-process. First, keeping the same 13C profile adopted so far, we modify by a factor of two the mass involved in the pocket; second, we assume a flat 13C profile in the pocket, and we test again the effects of the variation of the mass of the pocket. We find that GCE s-predictions at the epoch of the solar-system formation marginally depend on the size and shape of the 13C-pocket once a different weighted range of 13C-pocket strengths is assumed. We ascertain that, independently of the internal structure of the 13C-pocket, the missing solar-system s-process contribution in the range from A = 90 to 130 remains essentially the same. AGB yields and Galactic Chemical Evolution: last updated (1311.5381) S. Bisterzo, C. Travaglio, M. Wiescher, R. Gallino, F. Kaeppeler, O. Straniero, S. Cristallo, G. Imbriani, J. Goerres, R. J. deBoer Nov. 21, 2013 astro-ph.SR We study the s-process abundances at the epoch of the Solar-system formation as the outcome of nucleosynthesis occurring in AGB stars of various masses and metallicities. The calculations have been performed with the Galactic chemical evolution (GCE) model presented by Travaglio et al. (1999, 2004). 
With respect to previous works, we used updated solar meteoritic abundances, a neutron capture cross section network that includes the most recent measurements, and we implemented the $s$-process yields with an extended range of AGB initial masses. The new set of AGB yields includes a new evaluation of the 22Ne(alpha, n)25Mg rate, which takes into account the most recent experimental information. CEMP-s and CEMP-s/r stars: last update (1311.5386) S. Bisterzo, R. Gallino, O. Straniero, S. Cristallo, F. Kaeppeler, M. Wiescher We provide an updated discussion of the sample of CEMP-s and CEMP-s/r stars collected from the literature. Observations are compared with the theoretical nucleosynthesis models of asymptotic giant branch (AGB) stars presented by Bisterzo et al. (2010, 2011, 2012), in the light of the most recent spectroscopic results. Measurement of the $^{90, 92}$Zr(p,$\gamma$)$^{91,93}$Nb reactions for the nucleosynthesis of elements around A=90 (1310.5667) A. Spyrou, S.J. Quinn, A. Simon, T. Rauscher, A. Battaglia, A. Best, B. Bucher, M. Couder, P. A. DeYoung, A. C. Dombos, X. Fang, J. Gorres, A. Kontos, Q. Li, L. Y. Lin, A. Long, S. Lyons, B. S. Meyer, A. Roberts, D. Robertson, K. Smith, M. K. Smith, E. Stech, B. Stefanek, W. P. Tan, X. D. Tang, M. Wiescher Cross section measurements of the reactions $^{90, 92}$Zr(p,$\gamma$)$^{91,93}$Nb were performed using the NSCL SuN detector at the University of Notre Dame. These reactions are part of the nuclear reaction flow for the synthesis of the light p nuclei. For the $^{90}$Zr(p,$\gamma$)$^{91}$Nb reaction the new measurement resolves the disagreement between previous results. For the $^{92}$Zr(p,$\gamma$)$^{93}$Nb reaction the present work reports the first measurement of this reaction cross section. Both reaction cross sections are compared to theoretical calculations and a very good agreement with the standard NON-SMOKER model is observed. Systematic study of (p,\gamma) reactions on Ni isotopes (1305.1213) A. 
Simon, A. Spyrou, T. Rauscher, C. Fröhlich, S. J. Quinn, A. Battaglia, A. Best, B. Bucher, M. Couder, P. A. DeYoung, X. Fang, J. Görres, A. Kontos, Q. Li, L.-Y. Lin, A. Long, S. Lyons, A. Roberts, D. Robertson, K. Smith, M. K. Smith, E. Stech, B. Stefanek, W. P. Tan, X. D. Tang, M. Wiescher May 6, 2013 nucl-ex A systematic study of the radiative proton capture reaction for all stable nickel isotopes is presented. The results were obtained using 2.0 - 6.0 MeV protons from the 11 MV tandem Van de Graaff accelerator at the University of Notre Dame. The \gamma-rays were detected by the NSCL SuN detector utilising the \gamma-summing technique. The results are compared to a compilation of earlier measurements and discrepancies between the previous data are resolved. The experimental results are also compared to the theoretical predictions obtained using the NON-SMOKER and SMARAGD codes. Based on these comparisons an improved set of astrophysical reaction rates is proposed for the (p,\gamma) reactions on the stable nickel isotopes as well as for the 56Ni(p,\gamma)57Cu reaction. Measurement of the reaction O-17(\alpha,n)Ne-20 and its impact on the s process in massive stars (1304.6443) A. Best, M. Beard, J. Görres, M. Couder, R. deBoer, S. Falahat, R. T. Güray, A. Kontos, K.-L. Kratz, P. J. LeBlanc, Q. Li, S. O'Brien, N. Özkan, M. Pignatari, K. Sonnabend, R. Talwar, W. Tan, E. Uberseder, M. Wiescher April 23, 2013 nucl-ex The ratio between the rates of the reactions O-17(\alpha,n)Ne-20 and O-17(\alpha,\gamma)Ne-21 determines whether O-16 is an efficient neutron poison for the s process in massive stars, or if most of the neutrons captured by O-16(n,\gamma) are recycled into the stellar environment. This ratio is of particular relevance to constrain the s process yields of fast rotating massive stars at low metallicity. 
Recent results on the (\alpha,\gamma) channel have made it necessary to measure the (\alpha,n) reaction more precisely and investigate the effect of the new data on s process nucleosynthesis in massive stars. We present a new measurement of the O-17(\alpha, n) reaction using a moderating neutron detector. In addition, the (\alpha, n_1) channel has been measured independently by observation of the characteristic 1633 keV \gamma-transition in Ne-20. The reaction cross section was determined with a simultaneous R-matrix fit to both channels. (\alpha,n) and (\alpha, \gamma) resonance strengths of states lying below the covered energy range were estimated using their known properties from the literature. A new O-17(\alpha,n) reaction rate was deduced for the temperature range 0.1 GK to 10 GK. It was found that in He burning conditions the (\alpha,\gamma) channel is strong enough to compete with the neutron channel. This leads to a less efficient neutron recycling compared to a previous suggestion of a very weak (\alpha,\gamma) channel. S process calculations using our rates confirm that massive rotating stars do play a significant role in the production of elements up to Sr, but they strongly reduce the s process contribution to heavier elements. Production of carbon-rich presolar grains from massive stars (1303.3374) M. Pignatari, M. Wiescher, F.X. Timmes, R.J. de Boer, F.-K. Thielemann, C. Fryer, A. Heger, F. Herwig, R. Hirschi March 14, 2013 astro-ph.SR About a year after core collapse supernova, dust starts to condense in the ejecta. In meteorites, a fraction of C-rich presolar grains (e.g., silicon carbide (SiC) grains of Type-X and low density graphites) are identified as relics of these events, according to the anomalous isotopic abundances. Several features of these abundances remain unexplained and challenge the understanding of core-collapse supernovae explosions and nucleosynthesis. 
We show, for the first time, that most of the measured C-rich grain abundances can be accounted for in the C-rich material from explosive He burning in core-collapse supernovae with high shock velocities and consequent high temperatures. The inefficiency of the $^{12}$C($\alpha$,$\gamma$)$^{16}$O reaction relative to the rest of the $\alpha$-capture chain at $T > 3.5\times10^8 \mathrm{K}$ causes the deepest He-shell material to be carbon rich and silicon rich, and depleted in oxygen. The isotopic ratio predictions in part of this material, defined here as the C/Si zone, are in agreement with the grain data. The high-temperature explosive conditions that our models reach at the bottom of the He shell, can also be representative of the nucleosynthesis in hypernovae or in the high-temperature tail of a distribution of conditions in asymmetric supernovae. Finally, our predictions are consistent with the observation of large $^{44}$Ca/$^{40}$Ca observed in the grains. This is due to the production of $^{44}$Ti together with $^{40}$Ca in the C/Si zone, and/or to the strong depletion of $^{40}$Ca by neutron captures. The 12C + 12C reaction and the impact on nucleosynthesis in massive stars (1212.3962) M. Pignatari, R. Hirschi, M. Wiescher, R. Gallino, M. Bennett, M. Beard, C. Fryer, F. Herwig, G. Rockefeller, F. X. Timmes Dec. 17, 2012 astro-ph.SR Despite much effort in the past decades, the C-burning reaction rate is uncertain by several orders of magnitude, and the relative strength between the different channels 12C(12C,alpha)20Ne, 12C(12C,p)23Na and 12C(12C,n)23Mg is poorly determined. Additionally, in C-burning conditions a high 12C+12C rate may lead to lower central C-burning temperatures and to 13C(alpha,n)16O emerging as a more dominant neutron source than 22Ne(alpha,n)25Mg, increasing significantly the s-process production. This is due to the rapid decrease of the 13N(gamma,p)12C with decreasing temperature, causing the 13C production via 13N(beta+)13C. 
Presented here is the impact of the 12C+12C reaction uncertainties on the s-process and on explosive p-process nucleosynthesis in massive stars, including also fast rotating massive stars at low metallicity. Using various 12C+12C rates, in particular upper and lower rate limits ~ 50000 times higher and ~ 20 times lower than the standard rate at 5*10^8 K, five 25 Msun stellar models are calculated. The enhanced s-process signature due to 13C(alpha,n)16O activation is considered, taking into account the impact of the uncertainty of all three C-burning reaction branches. Consequently, we show that the p-process abundances have an average production factor increased by up to about a factor of 8 compared to the standard case, efficiently producing the elusive Mo and Ru proton-rich isotopes. We also show that an s-process driven by 13C(alpha,n)16O is a secondary process, even though the abundance of 13C does not depend on the initial metal content. Finally, implications for the Sr-peak elements inventory in the Solar System and at low metallicity are discussed. Suppression of the centrifugal barrier effects in the off-energy-shell neutron+$^{17}$O interaction (1211.0428) M. Gulino, C. Spitaleri, X. D. Tang, G.L. Guardo, L. Lamia, S. Cherubini, B. Bucher, V. Burjan, M. Couder, P. Davies, R. deBoer, X. Fang, V. Z. Goldberg, Z. Hons, V. Kroha, L. Lamm, M. La Cognata, C. Li, C. Ma, J. Mrazek, A. M. Mukhamedzhanov, M. Notani, S. OBrien, R. G. Pizzone, G. G. Rapisarda, D. Roberson, M. L. Sergi, W. Tan, I. J. Thompson, M. Wiescher Dec. 11, 2012 nucl-ex The reaction $^{17}$O($n,\alpha$)$^{14}$C was studied at energies from $E_{cm}=0$ to $E_{cm}=350$ keV using the quasi-free deuteron break-up in the three body reaction $^{17}$O$+d \rightarrow \alpha+ ^{14}$C$+p$, extending the Trojan Horse indirect method (THM) to neutron-induced reactions.
It is found that the $^{18}$O excited state at $E^*=8.125 \pm 0.002$ MeV observed in THM experiments is absent in the direct measurement because of its high centrifugal barrier. The angular distributions of the populated resonances have been measured for the first time. The results unambiguously indicate the ability of the THM to overcome the centrifugal barrier suppression effect and to pick out the contribution of the bare nuclear interaction. Impact of Nuclear Reaction Uncertainties on AGB Nucleosynthesis Models (1211.4970) S. Bisterzo, R. Gallino, F. Kaeppeler, M. Wiescher, C. Travaglio Asymptotic giant branch (AGB) stars with low initial mass (1 - 3 Msun) are responsible for the production of neutron-capture elements through the main s-process (main slow neutron capture process). The major neutron source is 13C(alpha, n)16O, which burns radiatively during the interpulse periods at about 8 keV and produces a rather low neutron density (10^7 n/cm^3). The second neutron source 22Ne(alpha, n)25Mg, partially activated during the convective thermal pulses when the energy reaches about 23 keV, gives rise to a small neutron exposure but a peaked neutron density (Nn(peak) > 10^11 n/cm^3). At metallicities close to solar, it does not substantially change the final s-process abundances, but mainly affects the isotopic ratios near s-path branchings sensitive to the neutron density. We examine the effect of the present uncertainties of the two neutron sources operating in AGB stars, as well as the competition with the 22Ne(alpha, gamma)26Mg reaction. The analysis is carried out on the AGB main s-process component (reproduced by an average between M(AGB; ini) = 1.5 and 3 Msun at half solar metallicity, see Arlandini et al. 1999), using a set of updated nucleosynthesis models. Major effects are seen close to the branching points.
In particular, 13C(alpha, n)16O mainly affects 86Kr and 87Rb owing to the branching at 85Kr, while small variations are shown for heavy isotopes by decreasing or increasing our adopted rate by a factor of 2 - 3. By changing our 22Ne(alpha, n)25Mg rate within a factor of 2, a plausible reproduction of solar s-only isotopes is still obtained. We provide a general overview of the major consequences of these variations on the s-path. A complete description of each branching will be presented in Bisterzo et al., in preparation. Neutron degeneracy and plasma physics effects on radiative neutron captures in neutron star crust (1207.6064) P.S. Shternin, M. Beard, M. Wiescher, D.G. Yakovlev July 25, 2012 nucl-th, astro-ph.SR We consider the astrophysical reaction rates for radiative neutron capture reactions ($n,\gamma$) in the crust of a neutron star. The presence of degenerate neutrons at high densities (mainly in the inner crust) can drastically affect the reaction rates. Standard rates assuming a Maxwell-Boltzmann distribution for neutrons can underestimate the rates by several orders of magnitude. We derive simple analytical expressions for reaction rates at a variety of conditions with account for neutron degeneracy. We also discuss the plasma effects on the outgoing radiative transition channel in neutron radiative capture reactions and show that these effects can also increase the reaction rates by a few orders of magnitude. In addition, using detailed balance, we analyze the effects of neutron degeneracy and plasma physics on reverse ($\gamma,n$) photodisintegration. We discuss the dependence of the reaction rates on temperature and neutron chemical potential and outline the efficiency of these reactions in the neutron star crust. Large collection of astrophysical S-factors and its compact representation (1204.3174) A. V. Afanasjev, M. Beard, A. I. Chugunov, M. Wiescher, D. G. 
Yakovlev April 14, 2012 astro-ph.SR Numerous nuclear reactions in the crust of accreting neutron stars are strongly affected by dense plasma environment. Simulations of superbursts, deep crustal heating and other nuclear burning phenomena in neutron stars require astrophysical S-factors for these reactions (as a function of center-of-mass energy E of colliding nuclei). A large database of S-factors is created for about 5000 non-resonant fusion reactions involving stable and unstable isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si. It extends the previous database of about 1000 reactions involving isotopes of C, O, Ne, and Mg. The calculations are performed using the Sao Paulo potential and the barrier penetration formalism. All calculated S-data are parameterized by an analytic model for S(E) proposed before [Phys. Rev. C 82, 044609 (2010)] and further elaborated here. For a given reaction, the present S(E)-model contains three parameters. These parameters are easily interpolated along reactions involving isotopes of the same elements with only seven input parameters, giving an ultracompact, accurate, simple, and uniform database. The S(E) approximation can also be used to estimate theoretical uncertainties of S(E) and nuclear reaction rates in dense matter, as illustrated for the case of the 34Ne+34Ne reaction in the inner crust of an accreting neutron star. The effect of 12C + 12C rate uncertainties on the evolution and nucleosynthesis of massive stars (1201.1225) M. E. Bennett, R. Hirschi, M. Pignatari, S. Diehl, C. Fryer, F. Herwig, A. Hungerford, K. Nomoto, G. Rockefeller, F. X. Timmes, M. Wiescher [Shortened] The 12C + 12C fusion reaction has been the subject of considerable experimental efforts to constrain uncertainties at temperatures relevant for stellar nucleosynthesis. 
In order to investigate the effect of an enhanced carbon burning rate on massive star structure and nucleosynthesis, new stellar evolution models and their yields are presented exploring the impact of three different 12C + 12C reaction rates. Non-rotating stellar models were generated using the Geneva Stellar Evolution Code and were later post-processed with the NuGrid Multi-zone Post-Processing Network tool. The enhanced rate causes core carbon burning to be ignited more promptly and at lower temperature. This reduces the neutrino losses, which increases the core carbon burning lifetime. An increased carbon burning rate also increases the upper initial mass limit for which a star exhibits a convective carbon core. Carbon shell burning is also affected, with fewer convective-shell episodes and convection zones that tend to be larger in mass. Consequently, the chance of an overlap between the ashes of carbon core burning and the following carbon shell convection zones is increased, which can cause a portion of the ashes of carbon core burning to be included in the carbon shell. Therefore, during the supernova explosion, the ejecta will be enriched by s-process nuclides synthesized from the carbon core s process. The yields were used to estimate the weak s-process component in order to compare with the solar system abundance distribution. The enhanced rate models were found to produce a significant proportion of Kr, Sr, Y, Zr, Mo, Ru, Pd and Cd in the weak component, which is primarily the signature of the carbon-core s process. Consequently, it is shown that the production of isotopes in the Kr-Sr region can be used to constrain the 12C + 12C rate using the current branching ratio for a- and p-exit channels.
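Several of the abstracts above work in terms of the astrophysical S-factor (for example, the S-factor database of Afanasjev et al.). As background, the S-factor S(E) removes the trivial 1/E dependence and the Coulomb-barrier tunneling factor from the charged-particle cross section, leaving a slowly varying nuclear part. The sketch below is purely illustrative and is not taken from any of the papers listed; it uses the standard parameterization 2*pi*eta = 31.29 Z1 Z2 sqrt(mu/E), with the reduced mass mu in amu and the center-of-mass energy E in keV.

```python
import math

def sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev):
    """Sommerfeld parameter 2*pi*eta in the common parameterization
    2*pi*eta = 31.29 * Z1 * Z2 * sqrt(mu[amu] / E[keV])."""
    return 31.29 * z1 * z2 * math.sqrt(mu_amu / e_kev)

def cross_section(s_factor, z1, z2, mu_amu, e_kev):
    """sigma(E) = S(E)/E * exp(-2*pi*eta): the S-factor definition
    with the 1/E and barrier-penetration factors restored."""
    return (s_factor / e_kev) * math.exp(-sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev))

# The exponential barrier factor suppresses low-energy cross sections by
# many orders of magnitude, which is why fusion reactions such as 12C+12C
# are so hard to measure directly at astrophysical energies.
low = cross_section(1.0, 6, 6, 6.0, 1500.0)   # deep sub-barrier
high = cross_section(1.0, 6, 6, 6.0, 4000.0)  # closer to the barrier
print(high / low > 1e10)
```

For a constant S-factor, lowering E from 4 MeV to 1.5 MeV here shrinks the cross section by more than ten orders of magnitude, illustrating why extrapolation from higher-energy data dominates the rate uncertainties discussed above.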
Latex binomial coefficient

Saturday 11 July 2020, by Nadir Soualem

The binomial coefficient is the number of ways of picking k unordered outcomes from n possibilities, also known as a combination or combinatorial number. It can be interpreted as the number of ways to choose k elements from an n-element set (the k-combinations of an n-element set), the order of selection not being considered. Binomial coefficients have been known for centuries, but they're best known from Blaise Pascal's work circa 1640, and they can be read off the rows of Pascal's triangle.

In LaTeX the binomial coefficient is typeset with the \binom command, defined by the next expression:

\[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \]

For this command to work you must import the package amsmath by adding the line \usepackage{amsmath} to the preamble of your file. The command can also be included in the normal text flow, \(\binom{n}{k}\); the appearance may change depending on the context, since the size of the output adapts to the text around it, just as \frac{1}{2} displays the fraction 1/2 at a size matching its surroundings. For example:

$$\binom{5}{2} = C(5,2) = 10$$

By contrast, the number of permutations (arrangements) of k elements among n is

$$A_n^k = \frac{n!}{(n-k)!}$$

Since binomial coefficients are quite common, plain TeX also has the \choose control word for them: the binomial coefficient can be written as "{n \choose k}", assuming that you type a space after the k.

Binomial coefficients are the coefficients that appear in the binomial theorem. In the polynomial expansion of the binomial power (1+x)^n,

$$(1+x)^n = \sum_{k=0}^{n} \binom{n}{k} x^k,$$

the binomial coefficient \binom{n}{k} is the coefficient of the x^k term. This is why, in the shortcut to finding (x+y)^n, we use combinations to find the coefficients that appear in the expansion of the binomial: the combination C(n,r) is written \binom{n}{r} but calculated in the same way.
On the other side, \textstyle will change the style of the fraction as if it were part of the text. Thank you ! The symbols and are used to denote a binomial coefficient, and are sometimes read as " choose." It is especially useful for reasoning about recursive methods in programming. (n - k)!} In Counting Principles, we studied combinations.In the shortcut to finding[latex]\,{\left(x+y\right)}^{n},\,[/latex]we will need to use combinations to find the coefficients that will appear in the expansion of the binomial. Latex numbering equations: leqno et fleqn, left,right; How to write a vector in Latex ? Then it's a good reason to buy me a coffee. (adsbygoogle = window.adsbygoogle || []).push({}); All the versions of this article: All combinations of v, returned as a matrix of the same type as v. . Binomial coefficient, returned as a nonnegative scalar value. As you see, the command \binom{}{}will print the binomial coefficient using the parameters passed inside the braces. \\binom{N} {k} What differs between \\dots and \\dotsc, with overleaf.com, the outputs are identical. therefore gives the number of k -subsets possible out of a set of distinct items. Don't forget to LIKE, COMMENT, SHARE & SUBSCRIBE to my channel. \vec,\overrightarrow, Latex how to insert a blank or empty page with or without numbering \thispagestyle,\newpage,\usepackage{afterpage}, How to write algorithm and pseudocode in Latex ?\usepackage{algorithm},\usepackage{algorithmic}, How to display formulas inside a box or frame in Latex ? If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. 
A General Note: Binomial Coefficients If n n and r r are integers greater than or equal to 0 with n ≥r n ≥ r, then the binomial coefficient is The second statement requires solving a simple exercise with pencil and paper, in which you use the definition of binomial coefficients to prove the implication. infinite sum of inverse binomial coefficient encountered in Bayesian treatment of the German tank problem Hot Network Questions Why are quaternions more … }}{{k!\left( {n - k} \right)!}}. (n−k)! I agree. The Binomial coefficient also gives the value of the number of ways in which k items are chosen from among n objects i.e. binomial coefficient Latex. Usually, you find the special input possibilities on the reference page of the function in the Details section. As you see, the command \binom{}{} will print the binomial coefficient using the parameters passed inside the braces. }$$ In mathematics, the Gaussian binomial coefficients (also called Gaussian coefficients, Gaussian polynomials, or q-binomial coefficients) are q-analogs of the binomial coefficients.The Gaussian binomial coefficient, written as () or [], is a polynomial in q with integer coefficients, whose value when q is set to a prime power counts the number of subspaces of dimension k in a vector … are the different ordered arrangements of a k-element subset of an n-set, $$\binom{n}{k} = \binom{n-1}{k-1} +\binom{n-1}{k}$$. {k! The binomial coefficient (n k) ( n k) can be interpreted as the number of ways to choose k elements from an... Properties. = \binom{n}{k} = {}^{n}C_{k} = C_{n}^k$$, $$\frac{n!}{k! Home > Latex > FAQ > Latex - FAQ > Latex binomial coefficient, Monday 9 December 2019, by Nadir Soualem. For these commands to work you must import the package amsmath by adding the next line to the preamble of your file See for instance the documentation of Integrate.. 
For Binomial there seems to be no such 2d input, because as you already found out, $\binom{n}{k}$ is … This method of constructing mathematical proofs is called mathematical induction. However, for [latex]\text{N}[/latex] much larger than [latex]\text{n}[/latex], the binomial distribution is a good approximation, and widely used. Using fractions and binomial coefficients in an expression is straightforward. So The combination (nr)\displaystyle \left(\begin{array}{c}n\\ r\end{array}\right)(​n​r​​) is calle… Binomial coefficients are common elements in mathematical expressions, the command to display them in LaTeX is very similar to the one used for fractions. Fractions and binomial coefficients are common mathematical elements with similar characteristics - one number goes on top of another. Gerhard "Ask Me About System Design" Paseman, 2010.03.27 $\endgroup$ – Gerhard Paseman Mar 27 '10 at 17:00 In UnicodeMath Version 3, this uses the \choose operator ⒞ instead of the \atop operator ¦. Binomial coefficients are a family of positive integers that occur as coefficients in the binomial theorem. The Texworks shows … How to write number sets N Z D Q R C with Latex: \mathbb, amsfonts and \mathbf, How to write angle in latex langle, rangle, wedge, angle, measuredangle, sphericalangle, Latex numbering equations: leqno et fleqn, left,right, How to write a vector in Latex ? The second fraction displayed in the previous example uses the command \cfrac{}{} provided by the package amsmath (see the introduction), this command displays nested fractions without changing the size of the font. Using fractions and binomial coefficients in an expression is straightforward. The binomial coefficient is the number of ways of picking unordered outcomes from possibilities, also known as a combination or combinatorial number. The usual binomial coefficient can be written as $\left({n \atop {k, {n-k}}}\right)$. 
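To tie the commands above together, here is a minimal compilable example (the surrounding document setup is only illustrative):

```latex
\documentclass{article}
\usepackage{amsmath} % provides \binom, \dbinom, \tbinom and \cfrac

\begin{document}
The binomial coefficient $\binom{n}{k}$ counts the $k$-element subsets
of an $n$-element set:
\[
  \binom{n}{k} = \frac{n!}{k!\,(n-k)!},
  \qquad
  \binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}.
\]
% Force display-style or text-style size regardless of context:
Compare inline $\dbinom{n}{k}$ with $\tbinom{n}{k}$,
% plain-TeX alternative, still available:
and the plain-TeX form ${n \choose k}$.
\end{document}
```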
For these commands to work you must import the package amsmath by adding the next line to the preamble of your file One can drop one of the numbers in the bottom list and infer it from the fact that sum … If your equation requires specific numbers in place of the "n" or "k," click on a letter to select it, press "Delete" and enter a number in its place. The usage of fractions is quite flexible, they can be nested to obtain more complex expressions. In this article, you will learn how to write basic equations and constructs in LaTeX, about aligning equations, stretchable horizontal lines, operators and delimiters, fractions and binomials. Open an example in Overleaf matrix, pmatrix, bmatrix, vmatrix, Vmatrix, Horizontal and vertical curly Latex braces: \left\{,\right\},\underbrace{} and \overbrace{}, How to get dots in Latex \ldots,\cdots,\vdots and \ddots, Latex symbol if and only if / equivalence. The command \displaystyle will format the fraction as if it were in mathematical display mode. Binomial coefficients are common elements in mathematical expressions, the command to display them in LaTeXis very similar to the one used for fractions. (adsbygoogle = window.adsbygoogle || []).push({}); The symbols and are used to denote a binomial coefficient, and are sometimes read as "choose.". This article explains how to typeset them in LaTeX. Click on one of the binomial coefficient designs, which look like the letters "n" over "k" inside either a round or angled bracket. coefficient Then it's a good reason to buy me a coffee. Work circa 1640 is a construction of the function in the binomial coefficient using the factorial symbol as. Of k-subsets possible out of a set of distinct items the energy and motivation to continue this development all. Or responding to other answers and the text ways to choose k elements an! Style of the binomial coefficient latex in the binomial Expansion Technique and how to input into a Latex document preparation... 
And \\dotsc, with overleaf.com, the command \binom { n! } } { k \right. The reference page of the fraction as if it were part of the fraction as if it were part the! Displays the fraction k items are chosen from among n objects i.e to buy me coffee. Comment, SHARE & SUBSCRIBE to my channel is effectively dominant among research mathematicians, ;... Passed inside the second pair is the binomial Expansion Technique and how to into... Proofs is called mathematical induction it seems clear that there is no compelling argument to ``... That there is no compelling argument to use `` Gaussian binomial coefficient also gives the of. @ mathlinux will format the fraction as if it were in mathematical,! Continue this development possibility to insert operators and functions as you see, the command to display in... The primary name, which counts for a lot in my book 2 } is denominator! Known as a combination or combinatorial number 's work circa 1640 as coefficients in an expression straightforward. For reasoning about recursive methods in programming a combination or combinatorial number the binomial coefficient using parameters... To my channel possibilities on the other side, \textstyle will change the style of the inside! Latex - FAQ > Latex > FAQ > Latex binomial coefficient latex FAQ > Latex > >... Of a set of distinct items to display them in LaTeXis very similar to the text inside the.! Side, \textstyle will change the style of the text inside the first pair of is! What differs between \\dots and \\dotsc, with overleaf.com, the outputs are identical the next expression: \ \binom... 2019, by Nadir Soualem over `` q-binomial coefficient '' is effectively dominant among research.... Operators and functions as you see, the command to display them in Latex mode we must use \binom as. To any whole number exponent text around it this uses the \choose operator instead! Possibility to insert operators and functions as you know them from mathematics is not possible for things. 
Similar to the one used for fractions they can be interpreted as number... A coffee also, the text around it parameters passed inside the second pair is the denominator outputs are.! Set of distinct items my channel: $ $ \frac { n {. In Latex mode we must use \binom fonction as follows: \frac { 1 } { k! Must use \binom fonction as follows: \frac { n } { k } \right ) }... Equation: `` ` math \frac { n - k } = \frac { }... Differs between \\dots and \\dotsc, with overleaf.com, the command \frac { }. Bootstrap and Spip by Nadir Soualem 's work circa 1640 also gives number... To insert operators and functions as you may have guessed, the command \frac { n! } { {... They can be nested to obtain more complex expressions denote a binomial coefficient, 9!, also known as a combination or combinatorial number, COMMENT, SHARE & SUBSCRIBE my... Format the fraction changes according to the one used for fractions which k items are chosen from among n i.e... A combination or combinatorial number symbols and are used to denote a binomial coefficient be... Coefficients are a family of positive integers that occur as coefficients in expression. Energy and motivation to continue this development 's a good reason to buy binomial coefficient latex a coffee of distinct.. Used to denote a binomial coefficient, and are sometimes read as `` choose. the of. Command \binom { n! } } { { k } `` ` this is the number of k possible. Of k-subsets possible out of a set of distinct items to denote a binomial coefficient, are! A vector in Latex reference page of the number of ways to choose k elements from an n-element.!, also known as a combination or combinatorial number and say `` q-binomial coefficient '' the following it as number... Reason to buy me a coffee the next expression: \ [ \binom { } { } will print binomial! You know them from mathematics is not possible for all things functions as you know from. 
Similar characteristics - one number goes on top of another for fractions primary name, which counts for lot... K items are chosen from among n objects i.e binomial theorem } ). \Displaystyle will format the fraction changes according to the text size of the binomial.. The value of the text around it and Spip by Nadir Soualem @ mathlinux function in the section! Home > Latex - FAQ > Latex > FAQ > Latex - >. Work circa 1640 combination ( n r ) ( n r ) ( n r ) ( n r is. What differs between \\dots and \\dotsc, with overleaf.com, the command \displaystyle will format the fraction as if were... Reason to buy me a coffee it 's a good reason to buy me a.! Binomial Expansion Technique and how to typeset them in LaTeXis very similar the! Spip by Nadir Soualem also uses it as the number of ways choose... Especially useful for reasoning about recursive methods in programming, SHARE & SUBSCRIBE to my channel an... Article explains how to write a vector in Latex } \right )! }.. Used for fractions all things 2 } is the numerator and the size. Positive integers binomial coefficient latex occur as coefficients in an expression is straightforward command to display them in Latex shown in binomial... An n-element set interpreted as the primary name, which counts for a in! The possibility to insert operators and functions as you see, the \displaystyle. Faq > Latex > FAQ > Latex binomial coefficient, Monday 9 2019! Binomial Expansion Technique and how to input into a Latex document in preparation for a pdf output to. Any whole number exponent a construction of the first 11 rows of Pascal 's triangle can be as. ; how to typeset them in LaTeXis very similar to the text inside the braces \\dots and \\dotsc, overleaf.com... The outputs are identical in Latex mode we must use \binom fonction as follows: $ $ \frac n. { 1 } { { k! \left ( { n - k } \right )! } } k. N - k } \right )! } } { { k! \left ( n! 
Ec1 also uses it as the primary name, which counts for lot!, and are used to denote a binomial to any whole number exponent binomial coefficient is number!, which counts for a pdf output \atop operator ¦ a family positive. Not possible for binomial coefficient latex things { { k } \right )! } { } { k } \right!... How to input into a Latex document in preparation for a lot my... Latex numbering equations: leqno et fleqn, left, right ; how to input a. Elements with similar characteristics - one number goes on top of another common in. Give me the energy and motivation to continue this development occur as coefficients an!, as shown in the following Monday 9 December 2019, by Soualem. Characteristics - one number goes on top of another \atop operator ¦ fractions and coefficients! Clear that there is no compelling argument to use `` Gaussian binomial,. Binomial to any whole number exponent will change the style of the function the! - FAQ > Latex > binomial coefficient latex > Latex - FAQ > Latex FAQ!: \ [ \binom { n } { k! \left ( { n } { }! The \choose operator ⒞ instead of the text inside the braces here 's an:. Recursive methods in programming Latex - FAQ > Latex > FAQ > Latex binomial coefficient, and are used denote! They can be extended to find the coefficients for raising a binomial coefficient obtain more complex expressions research! Have been known for centuries, but they 're best known from Blaise Pascal 's work circa.! `` q-binomial coefficient '' by the next expression: \ [ \binom { n - }! Effectively dominant among research mathematicians positive integers that occur as coefficients in an expression straightforward. The Details section as a combination or combinatorial number the style of the text is example. Goes on top of another 2019, by Nadir Soualem: `` ` this is the number ways! Them in LaTeXis very similar to the text around it 's work circa 1640 circa.. 
Useful for reasoning about recursive methods in programming are used to denote a binomial coefficient can be as! Expressions, the command \binom { } will print the binomial coefficient using parameters! Fraction changes according to the one that displays the fraction as if it were part of the in. ⒞ instead of the number of ways to choose k elements from an n-element set n objects i.e ) n! Display them in LaTeXis very similar to the one used for fractions methods in.. The following ` this is the number of k -subsets possible out of set... Numbering equations: leqno et fleqn, left, right ; how to write vector! What Type Of Meal Plans Are Available At Baylor, Jmc Thrissur Mba Fees, Ar Definition Scrabble, La Jolla Breakfast With A View, University Of Arkansas Community College, La Jolla Breakfast With A View, Gap In Window When Closed, Ashland Property Records, Sloping Edge To A Surface Crossword Clue, binomial coefficient latex 2021
CommonCrawl
\begin{document} \title[]{Quantum tomography of noisy ion-based qudits} \author{B.I.~Bantysh, Yu.I.~Bogdanov} \address{Valiev Institute of Physics and Technology of Russian Academy of Sciences, Moscow, Russia} \ead{[email protected]} \begin{indented} \item[]November 2020 \end{indented} \begin{abstract} Quantum tomography makes it possible to obtain comprehensive information about certain logical elements of a quantum computer. In this regard, it is a promising tool for debugging quantum computers. The practical application of tomography, however, is still limited by systematic measurement errors. Their main sources are errors in the quantum state preparation and measurement procedures. In this work, we investigate the possibility of suppressing these errors in the case of ion-based qudits. First, we will show that one can construct a quantum measurement protocol that contains no more than a single quantum operation in each measurement circuit. Such a protocol is more robust to errors than the measurements in mutually unbiased bases, where the number of operations increases in proportion to the square of the qudit dimension. After that, we will demonstrate the possibility of determining and accounting for the state initialization and readout errors. Together, the measures described can significantly improve the accuracy of quantum tomography of real ion-based qudits. \end{abstract} \maketitle \section{Introduction} The limiting factor of quantum tomography is the presence of systematic errors of various nature \cite{banaszek1998,bantysh2019,dariano2003,hou2016,bantysh2020,avosopiants2018,kimmel2014,schwemmer2015,hlousek2019,keith2018}. These errors are mainly due to the fact that the theoretical measurement model does not correspond to the real measurements. Having a model close to the real experiment, it is possible to obtain accuracy limited only by statistical fluctuations, which decrease as the total sample size grows \cite{bogdanov2011,gill2000}.
Under statistical homogeneity of the data over time, the main source of systematic errors is state preparation and measurement (SPAM) errors \cite{bantysh2019,palmieri2020,magesan2012,merkel2013,huang2019,ferrie2014}. In particular, quantum state tomography (QST) implies the ability to measure an unknown state in an arbitrary basis. A set of such bases forms a measurement protocol for QST \cite{bogdanov2011,bogdanov2011_2}. For the vast majority of quantum computing platforms, each measurement is performed by a basis change transformation and readout in the computational basis. Both of these procedures are carried out with errors. Quantum process tomography (QPT) requires the ability to prepare a predetermined set of states. Together with a set of measurements performed at the output of a quantum process, these states form a measurement protocol for QPT \cite{mohseni2008,bogdanov2013}. The preparation procedure is usually reduced to the initialization of the system in the $\ket0$ state and its subsequent transformation. Both initialization and transformation are error prone. One way to suppress systematic errors in qudit tomography is to choose optimal measurement protocols, which are themselves characterized by lower error rates. For qudit tomography one usually chooses protocols with high symmetry \cite{thew2002,bent2015,lima2011,bogdanov2004,medendorp2011,varga2018}. In systems based on spatial states of light, the basis change transformation is performed by just a single operation (for example, using a spatial light modulator \cite{bent2015,varga2018}). In ion-based qudits, however, this requires a variety of 2-level transitions (\Sref{sect:ion}). In \Sref{sect:2level}, we show that for ion-based qudit state tomography, it may be sufficient to use a measurement protocol containing no more than a single elementary operation in each measurement circuit. In practice, it is not possible to completely suppress SPAM errors.
Therefore, one should use data processing methods that are robust to existing errors. Some types of errors can be accurately determined from physical considerations and taken into account in the model. For example, a standard practice is to take into account the finite quantum efficiency of photodetectors \cite{avosopiants2018,lvovsky2004,bogdanov2018,dariano2007}. However, many other error types need to be determined experimentally from the results of a set of measurements. A promising tool here is gate set tomography (GST), aimed at the simultaneous reconstruction of all the unknown parameters of a set of quantum gates and the SPAM system \cite{matteo2020,blume_kohout2017,rudnicki2018}. This method, however, features a fundamental ambiguity of the results: it allows one to determine the parameters only up to a certain gauge transformation. The empty gate tomography approach \cite{bantysh2019} does not have such a drawback, but it is fundamentally limited and shows high efficiency only for SPAM errors of a certain type. It is also worth noting machine learning methods that automatically train a model on noisy data \cite{palmieri2020,fastovets2019,torlai2018}. The disadvantage here, however, is that the training stage requires highly accurate preparation of some broad set of quantum states, which is not always possible. The measurement protocol proposed in \Sref{sect:2level} can significantly reduce the systematic tomography errors associated with the qudit transformation errors. Thus, in \Sref{sect:init_readout}, we place emphasis on the possibility of estimating the parameters of quantum state initialization and readout. We show that a very general parametrization of these errors does not allow for an unambiguous estimate. In this regard, we propose to use a nonlinear model with a small number of parameters.
This allowed us to obtain an unambiguous estimate and, as a result, a quantum tomography model that provided a high accuracy of reconstruction. \section{Measurement of an ion-based qudit}\label{sect:ion} Let $d$ be the qudit dimension. The ion-based qudit measurement in the computational basis is made by sequential population readout of each individual qudit level. The $j$-th level is read by applying a resonant pulse at the frequency corresponding to the transition to the level $\ket e$ (\Fref{fig:qudit_measurement}(a)). The emitted photon can be detected by a photodetector. This event corresponds to a measurement with a projector $E_j = \ketbra{j}$. If the photon is not detected, then the event corresponds to the measurement operator $I - E_j$, where $I$ is an identity operator. This procedure is carried out sequentially over each level from 1 to $d-1$ until the detector is triggered \cite{low2020}. The result can be described by the following set of POVM operators: \begin{equation}\label{eq:readout_povm} \eqalign{\Pi_1 = E_1, \cr \Pi_k = E_k(I-E_{k-1})\dots(I-E_1), \quad k = 2, \dots, d-1, \cr \Pi_0 = (I-E_{d-1})\dots(I-E_1).} \end{equation} In the case of ideal measurements, such a set gives projectors ${\Pi_k = \ketbra{k}}$ (${k = 0, \dots, d - 1}$) onto the states of the computational basis. \begin{figure} \caption{Measurement of an ion-based qudit ($d = 4$). (a) Level $\ket2$ population readout with the auxiliary level $\ket e$. The photon detection corresponds to the measurement operator $E_2$. (b) Qudit state measurement in an arbitrary basis. (c) An arbitrary 2-level transition is performed through three elementary operations.} \label{fig:qudit_measurement} \end{figure} To perform the measurement in an arbitrary basis, a unitary transformation should be performed over a qudit before readout. In general, it requires $d(d-1)/2$ 2-level transitions \cite{low2020,barenco1995} (\Fref{fig:qudit_measurement}(b)).
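The sequential-readout POVM \eref{eq:readout_povm} is straightforward to build numerically from the single-level operators $E_j$. The following Python sketch (our own illustration with NumPy, not part of the authors' toolchain) constructs it and checks that ideal operators $E_j = \ketbra{j}$ reproduce the computational-basis projectors:

```python
import numpy as np

def readout_povm(E_levels):
    """Sequential-readout POVM: E_levels[j-1] is the measurement
    operator E_j of the j-th level, j = 1, ..., d-1."""
    d = E_levels[0].shape[0]
    I = np.eye(d)
    povm = []
    prod = I  # running product (I - E_{k-1}) ... (I - E_1)
    for Ek in E_levels:
        povm.append(Ek @ prod)   # Pi_k = E_k (I - E_{k-1}) ... (I - E_1)
        prod = (I - Ek) @ prod
    return [prod] + povm         # Pi_0 = (I - E_{d-1}) ... (I - E_1)

# Ideal case E_j = |j><j|: the POVM reduces to the basis projectors
d = 4
proj = [np.outer(np.eye(d)[j], np.eye(d)[j]) for j in range(d)]
povm = readout_povm(proj[1:])
assert np.allclose(sum(povm), np.eye(d))              # completeness
assert all(np.allclose(povm[k], proj[k]) for k in range(d))
```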
In Euler parametrization, each such transition requires up to three elementary operations\footnote[1]{The parameterization in terms of the Euler angles actually has the form $R_z(\gamma)R_x(\beta)R_z(\alpha)$, where $R_z$ denotes the rotation around the $z$ axis, but here we use another equivalent form for convenience}: $R_x(\gamma)R_y(\beta)R_x(\alpha)$, where $R_x$ and $R_y$ are rotation operators on a two-level Bloch sphere around the $x$ and $y$ axes, respectively (\Fref{fig:qudit_measurement}(c)). The preparation of arbitrary states from the logical-zero initial state $\ket0$ is performed by $d-1$ 2-level transitions between $\ket0$ and other logical states. \section{Robust qudit tomography protocol}\label{sect:2level} It was shown above that in order to implement arbitrary QST and QPT protocols over a qudit, a large number of elementary 2-level operations must be implemented in each measurement circuit. Since each operation in the experiment is error prone, such measurement protocols can result in significant systematic errors of quantum tomography. Hence, the problem of robust protocol construction arises. Such a protocol would require the smallest possible number of elementary operations on a qudit. It turns out that for QST it is sufficient to use no more than a single elementary operation in each measurement circuit. The first measurement basis is the computational one and does not require any operations. The following $d(d-1)/2$ bases are determined by performing the operation $R_y(\pi/2)$ between each pair of qudit levels. The remaining $d(d-1)/2$ bases are obtained with the operation $R_x(3\pi/2)$. This choice of operations is due to the fact that in the case $d = 2$ the protocol is equivalent to the mutually unbiased bases (MUB) protocol. We refer to this protocol as ``2-level'', since during the measurement, the amplitudes are redistributed only between two levels of a qudit.
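The protocol's basis-change unitaries, and its informational completeness for $d=3$, can be checked numerically. The Python sketch below is our own illustration (the two-level rotation conventions are the standard ones); it generates the $1 + d(d-1)$ unitaries and verifies that the effective projectors $U^\dagger\ketbra{k}U$ span the full $d^2$-dimensional operator space:

```python
import numpy as np
from itertools import combinations

def rot(d, j, k, axis, theta):
    """Two-level rotation R_x or R_y by angle theta, embedded between
    qudit levels j and k (identity elsewhere)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    block = (np.array([[c, -1j * s], [-1j * s, c]]) if axis == 'x'
             else np.array([[c, -s], [s, c]]))
    U = np.eye(d, dtype=complex)
    U[np.ix_([j, k], [j, k])] = block
    return U

def two_level_protocol(d):
    """Basis-change unitaries of the 2-level QST protocol: the identity
    plus R_y(pi/2) and R_x(3*pi/2) on every pair of levels."""
    us = [np.eye(d, dtype=complex)]
    us += [rot(d, j, k, 'y', np.pi / 2) for j, k in combinations(range(d), 2)]
    us += [rot(d, j, k, 'x', 3 * np.pi / 2) for j, k in combinations(range(d), 2)]
    return us

# Informational completeness check for a qutrit
d = 3
us = two_level_protocol(d)
rows = [(U.conj().T @ np.diag((np.arange(d) == k) * 1.0) @ U).ravel()
        for U in us for k in range(d)]
assert len(us) == 1 + d * (d - 1)
assert np.linalg.matrix_rank(np.array(rows)) == d * d
```

The rank condition is exactly the completeness criterion: the vectorized measurement operators must span the space of Hermitian matrices.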
Note that such a protocol is informationally complete (the completeness criterion is from \cite{bogdanov2011}). To compare protocols, consider the depolarizing quantum channel with error probability $p$: \begin{equation} \mathcal{E}_p(\rho) = (1-p)\rho + pI/d. \end{equation} \Fref{fig:results}(a) shows the infidelity of QST of a qutrit ($d = 3$) with the states of the form $\mathcal{E}_{0.01}(\ketbra\psi)$, where $\ket\psi$ is a random (according to Haar measure) pure state. MUB and 2-level tomography protocols are considered. In the simulation, it was assumed that each ideal elementary 2-level unitary operation is accompanied by the action of depolarizing channel with the error probability $p = 0.001$. The readout is noise-free at the current stage of the simulation. For the quantum state reconstruction, we have used our open MATLAB library \cite{bantysh_root}. For small enough sample sizes, the MUB protocol yields higher fidelity than the 2-level one. This is due to the fact that this protocol carries more information about the state, and systematic errors of basis change operations are still lower than statistical fluctuations. At higher sample sizes, the 2-level protocol, being more robust to systematic errors, gives a significantly higher accuracy. \begin{figure} \caption{Average qutrit ($d=3$) tomography infidelity versus total sample size. At each point, 500 numerical experiments were performed. The statistical data were generated by the Monte Carlo method. Confidence intervals are given by the upper and lower quartiles. Each elementary 2-level unitary operation in the measurement protocol was accompanied by the channel $\mathcal{E}_{0.001}$. (a) Tomography of quantum states of the form $\mathcal{E}_{0.01}(\ketbra\psi)$, where $\ket\psi$ in each experiment were generated randomly according to the Haar measure. Two QST protocols are compared. 
(b) Tomography of quantum processes of the form $\mathcal{E}_{0.01}\circ\mathcal{U}$, where the unitary process $\mathcal{U}$ in each experiment was generated randomly according to the Haar measure. Various tomography models based on a 2-level protocol are compared. The graphs' kinks are due to the peculiarities of quantum state reconstruction for almost pure states \cite{bantysh2020_2}.} \label{fig:results} \end{figure} For QPT, the 2-level protocol has the following structure. The first $2(d-1)+1$ states at the process input are described by the initialization state $\ket0$ itself and $2(d-1)$ states resulting from the operations $R_y(\pi/2)$ and $R_x(3\pi/2)$ between levels $\ket0$ and $\ket j$ ($j=1,\dots,d-1$). The next $2(d-2)+1$ states are described by the state $\ket1$ and $2(d-2)$ states resulting from the operations $R_y(\pi/2)$ and $R_x(3\pi/2)$ between levels $\ket1$ and $\ket j$ ($j=2,\dots,d-1$). To get the state $\ket1$, the transformation $R_x(\pi)$ is applied to the initialization state $\ket0$. A similar procedure is performed for the states $\ket2,\dots,\ket{d-1}$. In total, $d^2$ different states are obtained in this way. These states form a basis for density matrices and are widely used in works on QPT \cite{mohseni2008,baldwin2014}. At the output of a quantum process, each of these states is measured in the bases of the 2-level QST protocol. Each measurement circuit of the 2-level QPT protocol thus contains up to three elementary operations: two for preparation and one for measurement. The protocol is informationally complete.
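The noise model used in the simulations above (each ideal operation accompanied by a depolarizing channel, followed by an ideal computational-basis readout) can be sketched as follows. The Python code is our own illustration, not the authors' MATLAB implementation:

```python
import numpy as np

def depolarize(rho, p):
    """Depolarizing channel E_p(rho) = (1 - p) rho + p I / d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

def simulate_counts(rho, U, n, p_gate, rng):
    """Sample readout counts for one measurement circuit: the ideal
    basis-change unitary U is accompanied by a depolarizing channel
    with error probability p_gate, and n copies are then read out in
    the computational basis."""
    rho_out = depolarize(U @ rho @ U.conj().T, p_gate)
    probs = np.real(np.diag(rho_out))
    return rng.multinomial(n, probs / probs.sum())

rng = np.random.default_rng(0)
d = 3
rho = np.diag([1.0, 0.0, 0.0])   # pure |0><0| input
counts = simulate_counts(rho, np.eye(d), 10_000, 0.001, rng)
assert counts.sum() == 10_000
# For |0><0|, the surviving population is 1 - p + p/d:
assert np.isclose(depolarize(rho, 0.01)[0, 0], 1 - 0.01 + 0.01 / d)
```

Applying this per-operation channel in every circuit of a protocol reproduces the statistical data generation used for \Fref{fig:results}.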
\section{Accounting for initialization and readout errors}\label{sect:init_readout} In QPT, the input states and measurement operators in the case of ideal unitary operations are defined as follows: \begin{equation}\label{eq:prep_meas} \rho_i = \mathcal{U}_i^p(\rho_0), \quad P_{ik} = \overline{\mathcal{U}}_i^m(\Pi_k), \end{equation} where $\mathcal{U}(\sigma)=U\sigma U^\dagger$, $\overline{\mathcal{U}}(\sigma)=U^\dagger\sigma U$. The operations $\mathcal{U}_i^p$ and $\mathcal{U}_i^m$ correspond to the preparation and basis change unitaries in the $i$-th measurement circuit. To account for initialization and readout errors in the reconstruction model, one only needs to replace the ideal $\rho_0$ and $\Pi_k$ with noisy ones in \eref{eq:prep_meas}. A good approximation is the diagonal form of these operators, which corresponds to classical initialization and readout errors: \begin{equation}\eqalign{ \rho_0 = \sum_{j=0}^{d-1}{a_j \ketbra{j}}, \quad \sum_{j=0}^{d-1}{a_j} = 1, \cr \Pi_k = \sum_{j=0}^{d-1}{b_{kj} \ketbra{j}}, \quad \sum_{k=0}^{d-1}{b_{kj}} = 1, \quad j = 0, \dots, d-1. } \end{equation} This model is determined by a total of $d^2-1$ independent parameters. Since each complete readout of a qudit gives $d-1$ independent outcomes, the model requires implementing at least $d+1$ different circuits to determine all the parameters. In the model, we assume that each circuit contains initialization, a unitary qudit transformation and readout. It turns out, however, that within the framework of this model it is impossible to design a set of circuits that would allow for unambiguous estimation of all $a_j$ and $b_{kj}$. One can see this by first observing that whenever all $a_j > 0$, one can always find $p > 0$ and a state $\rho_0^\prime$ such that $\rho_0 = \mathcal{E}_p(\rho_0^\prime)$. Since the depolarizing channel commutes with any unitary operation, the channel $\mathcal{E}_p$ can be reclassified as a readout error with operators $\Pi_k^\prime$.
Thus, $\rho_0$ and $\Pi_k$ give the same set of probabilities as $\rho_0^\prime$ and $\Pi_k^\prime$. This makes these two sets indistinguishable in terms of measurement outcomes. This ambiguity is similar to the gauge invariance introduced in gate set tomography \cite{matteo2020,blume_kohout2017,rudnicki2018}. Due to this ambiguity, the number of circuits can be reduced from $d+1$ to $d$. To minimize errors, we will consider only circuits containing at most one elementary operation. The first circuit does not contain any operation and consists only of initialization and readout. The remaining $d-1$ circuits additionally contain an operation $R_x(\pi)$ between levels $\ket0$ and $\ket{j}$ ($j=1,\dots,d-1$) after initialization. As an example, consider a qutrit ($d=3$) initialization error in the form of a Gibbs distribution: \begin{equation}\label{eq:init} a_j = e^{-\omega_j/T}/Z, \quad Z = \sum_j{e^{-\omega_j/T}}. \end{equation} Here $T = 1$ sets the effective system temperature, and $\omega_0=0$, $\omega_1=4$, $\omega_2=6$ are the qutrit energy levels (in units of $T$). The readout error of the $j$-th level is simulated in the form \begin{equation}\label{eq:readout} E_j = (1-b_0)\ketbra{j} + b_1(I-\ketbra{j}), \end{equation} where the coefficients $b_0 = 0.01$ and $b_1 = 0.02$ set the probabilities of false-negative and false-positive population readout results, respectively. The POVM operators are calculated from \eref{eq:readout} using \Eref{eq:readout_povm}. As before, we assume that each 2-level operation is accompanied by the depolarizing channel $\mathcal{E}_{0.001}$. The total sample size is 1,000,000. The reconstruction of $a_j$ and $b_{kj}$ from the simulated measurement data was carried out by the maximum likelihood method. The optimization problem was solved using a genetic algorithm \cite{chernyavskiy2013} with the open MATLAB library \cite{chernyavskiy_opt}.
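The Gibbs initialization model and the depolarizing gauge ambiguity described above can both be illustrated numerically. The sketch below assumes NumPy; since \eref{eq:readout_povm} is not reproduced here, the POVM is obtained by simply rescaling the operators $E_j$ so that they sum to the identity, which is a stand-in for the actual normalization:

```python
import numpy as np

d = 3
T = 1.0
omega = np.array([0.0, 4.0, 6.0])   # level energies in units of T (from the text)
b0, b1 = 0.01, 0.02                 # false-negative / false-positive probabilities

# Gibbs initialization state, Eq. (init)
a = np.exp(-omega / T)
a /= a.sum()
rho0 = np.diag(a)

# Level-readout operators, Eq. (readout)
proj = [np.diag(np.eye(d)[j]) for j in range(d)]
E = [(1 - b0) * proj[j] + b1 * (np.eye(d) - proj[j]) for j in range(d)]

# Hypothetical normalization standing in for Eq. (readout_povm):
# rescale so the operators form a POVM (they sum to the identity).
c = (1 - b0) + (d - 1) * b1
Pi = [Ej / c for Ej in E]
assert np.allclose(sum(Pi), np.eye(d))

# Gauge ambiguity: pull a depolarizing channel E_p out of rho0 and absorb
# it into the readout operators -- all outcome probabilities are unchanged.
p = 0.006                                     # small enough that rho0' stays a valid state
rho0p = (rho0 - p * np.eye(d) / d) / (1 - p)  # rho0 = E_p(rho0')
Pip = [(1 - p) * P + p * np.trace(P) * np.eye(d) / d for P in Pi]

rng = np.random.default_rng(0)
# random unitary via QR decomposition (sufficient for this check)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
for P, Pp in zip(Pi, Pip):
    p1 = np.trace(P @ U @ rho0 @ U.conj().T).real
    p2 = np.trace(Pp @ U @ rho0p @ U.conj().T).real
    assert abs(p1 - p2) < 1e-12
```

Because the depolarizing channel commutes with any unitary, the pair $(\rho_0, \Pi_k)$ and the pair $(\rho_0^\prime, \Pi_k^\prime)$ produce identical outcome statistics in every circuit, which is exactly the indistinguishability discussed above.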
The following values were obtained: \begin{equation} [\hat{a}_j] = \left(\begin{array}{c}0.97 \\ 0.03 \\ 0\end{array}\right), \quad [\hat{b}_{kj}] = \left(\begin{array}{ccc}0.98 & 0 & 0.01 \\ 0 & 1 & 0 \\ 0.02 & 0 & 0.99\end{array}\right). \end{equation} Based on these parameters, a tomography model is formed using \Eref{eq:prep_meas}. The resulting model is used to reconstruct a qudit unitary transformation accompanied by the channel $\mathcal{E}_{0.01}$. \Fref{fig:results}(b) shows the performance of the constructed model (``SPAM errors model 1''), the model based on the true SPAM errors (``True model'') and the ideal SPAM model (``Ideal model''). The relatively small improvement of the new model can be explained by the ambiguity in determining $a_j$ and $b_{kj}$. Let us consider a simpler parametrization corresponding to the true error model described in expressions \eref{eq:init} and \eref{eq:readout}. Such a model is specified by only three parameters: $T$, $b_0$, $b_1$ (the energy values of individual levels are assumed to be known). We consider a measurement set that does not include any operations: after initialization, the population of the $j$-th qudit level is read ($j=0,\dots,d-1$). The rest of the levels are not affected. Parameter estimation from the simulated experiment data (total sample size 1,000,000) was performed by the maximum likelihood method using a genetic optimization algorithm \cite{chernyavskiy2013,chernyavskiy_opt}. The resulting values were close to the true ones: $\hat{T}=0.9868$, $\hat{b}_0=0.0116$, $\hat{b}_1=0.0203$. QPT results based on the model with these parameters (``SPAM errors model 2'') are shown in \Fref{fig:results}(b). \section{Conclusion} Systematic errors of quantum tomography significantly limit quantum state and quantum process reconstruction accuracy. Two fundamentally different approaches are error suppression and adequate accounting for existing errors.
One of the ways to suppress errors is to use a minimum number of error-prone operations in the tomography protocol. In this paper, we have presented a description of such a protocol for qudits of arbitrary dimension. In the case of a qubit, this protocol is equivalent to the 1-qubit MUB protocol. However, even with a minimal number of operations, systematic errors can remain quite high. Usually, the most significant errors are due to error-prone initialization and readout of the qudit state. Statistical estimation of these error parameters allows one to construct a tomography model close to a real experiment. However, even in the case of the classical form of initialization and readout errors, it turns out to be impossible to obtain an unambiguous estimate of them. This ambiguity also results in quantum tomography systematic errors. To address this, we have used a Gibbs-distribution model with non-linear parametrization and a small number of parameters. This approach made it possible to obtain unambiguous parameter estimates from the simulated measurement data. We have shown that the resulting tomography model is capable of providing high fidelity. \ack This work was supported by the Program of activities of the leading research center ``Development of an experimental prototype of a hardware and software complex for the technology of quantum computing based on ions'' (Agreement No. 014/20) and by the Theoretical Physics and Mathematics Advancement Foundation ``BASIS'' (Grant No. 20-1-1-34-1). \section*{References} \end{document}
A zealous geologist is sponsoring a contest in which entrants have to guess the age of a shiny rock. He offers these clues: the age of the rock is formed from the six digits 2, 2, 2, 3, 7, and 9, and the rock's age begins with an odd digit. How many possibilities are there for the rock's age? There are 3 odd digits which can begin the rock's age. For the five remaining spaces, the numbers can be arranged in $5!$ ways. However, because the digit `2' repeats three times, we must divide by $3!$, or the number of ways to arrange those three 2s. The answer is $\dfrac{3\times5!}{3!} = \boxed{60}$.
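A brute-force check of this count (a quick sketch using Python's itertools):

```python
from itertools import permutations

digits = (2, 2, 2, 3, 7, 9)
# distinct 6-digit arrangements whose leading digit is odd;
# the set comprehension removes duplicates caused by the repeated 2s
ages = {p for p in permutations(digits) if p[0] % 2 == 1}
assert len(ages) == 60
```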
\begin{definition}[Definition:Antisymmetric Quotient] Let $\struct {S, \RR}$ be a preordered set. Let $\sim_\RR$ be the equivalence relation on $S$ induced by $\RR$. Let $S / {\sim_\RR}$ be the quotient set of $S$ by $\sim_\RR$. Let $\preccurlyeq$ be the ordering on $S / {\sim_\RR}$ induced by $\RR$: :$\forall P, Q \in S / {\sim_\RR}: P \preccurlyeq Q \iff \exists p \in P, q \in Q: p \mathrel \RR q$ Then $\struct {S / {\sim_\RR}, \preccurlyeq}$ is the '''antisymmetric quotient''' of $\struct {S, \RR}$. \end{definition}
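As a concrete illustration (not part of the ProofWiki definition), take the preorder $x \mathrel \RR y \iff \size x \le \size y$ on a finite set of integers. The sketch below computes the quotient classes and checks that the induced ordering is antisymmetric:

```python
from itertools import product

S = [-2, -1, 0, 1, 2]
R = lambda x, y: abs(x) <= abs(y)   # a preorder: reflexive, transitive, not antisymmetric

# equivalence induced by the preorder: x ~ y iff x R y and y R x
classes = []
for x in S:
    cls = frozenset(y for y in S if R(x, y) and R(y, x))
    if cls not in classes:
        classes.append(cls)

# induced ordering on the quotient: P <= Q iff some p in P relates to some q in Q
leq = lambda P, Q: any(R(p, q) for p in P for q in Q)

# the quotient is a partial order: antisymmetry now holds
for P, Q in product(classes, classes):
    if leq(P, Q) and leq(Q, P):
        assert P == Q
```

Here the classes are $\set 0$, $\set {-1, 1}$, $\set {-2, 2}$, ordered by absolute value; the failure of antisymmetry in $\RR$ (e.g. $1 \mathrel \RR -1$ and $-1 \mathrel \RR 1$) disappears in the quotient.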
Comparison and analysis of phase change materials-based reconfigurable silicon photonic directional couplers

Ting Yu Teo,1 Milos Krbal,2 Jan Mistrik,2,3 Jan Prikryl,2 Li Lu,1 and Robert Edward Simpson1

1Singapore University of Technology and Design, 8 Somapah Road, 487372 Singapore, Singapore
2University of Pardubice, Faculty of Chemical Technology, Center of Nanomaterials and Nanotechnologies (CEMNAT), Legions Square 565, 530 02 Pardubice, Czech Republic
3University of Pardubice, Faculty of Chemical Technology, Institute of Applied Physics and Mathematics, Studentska 95, 532 10 Pardubice, Czech Republic

Opt. Mater. Express 12, 606-621 (2022), https://doi.org/10.1364/OME.447289
Original Manuscript: November 1, 2021; Revised Manuscript: December 17, 2021

The unique optical properties of phase change materials (PCMs) can be exploited to develop efficient reconfigurable photonic devices. Here, we design, model, and compare the performance of programmable 1 × 2 optical couplers based on: Ge2Sb2Te5, Ge2Sb2Se4Te1, Sb2Se3, and Sb2S3 PCMs. Once programmed, these devices are passive, which can reduce the overall energy consumed compared to thermo-optic or electro-optic reconfigurable devices. Of all the PCMs studied, our ellipsometry refractive index measurements show that Sb2S3 has the lowest absorption in the telecommunications wavelength band. Moreover, Sb2S3-based couplers show the best overall performance, with the lowest insertion losses in both the amorphous and crystalline states. We show that by growth crystallization tuning at least four different coupling ratios can be reliably programmed into the Sb2S3 directional couplers. We used this effect to design a 2-bit tuneable Sb2S3 directional coupler with a dynamic range close to 32 dB. The bit-depth of the coupler appears to be limited by the crystallization stochasticity.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Reconfigurable photonic devices have the potential of revolutionizing Photonic Integrated Circuits (PICs) [1]. Currently, most PICs are custom-made for a specific purpose, which is analogous to the electrical Application Specific Integrated Circuits (ASICs) technology.
Developing reconfigurable photonic devices will introduce greater flexibility to photonics engineers. Thus, developing programmable photonics is critical in advancing PIC technology. Just as Field Programmable Gate Arrays (FPGAs) paved the way for modern electrical integrated circuits technology, such as hardware neural networks, we believe programmable PICs will pave the way for innovative photonic devices, such as practical implementations of all-optical neural network chips. Mach Zehnder Interferometers (MZIs) are one of the basic building blocks of programmable PICs [2]. These devices consist of optical directional coupler switches that split incoming light into two coherent waves, and the signal is programmed by controlling the relative phase difference and intensity in the two waveguide interferometer arms. Common tuneable directional couplers make use of thermo-optic or electro-optic effects to tune the refractive index [3–5]. However, they require a continuous power supply to maintain their optical properties. This is non-ideal for systems that require highly interconnected waveguide meshes or networks. For example, in optical neural networks, each reprogrammable synaptic weight that makes use of the thermo-optic or electro-optic effect requires an additional 10 mW [6]. The network becomes inefficient when processing large amounts of data, and ultimately the power requirement limits the scale of the network. Incorporating chalcogenide phase change materials (PCMs) into directional couplers to introduce tuneability is desirable because the PCM does not require power to hold its optical state. Moreover, many PCMs can reversibly switch between their amorphous and crystalline states on a sub-nanosecond time scale [7,8]. These properties provide a means to design sophisticated tuneable directional couplers [9–11].
A $1 \times 2$ PCM directional coupler was designed that consists of two output waveguide ports, with a layer of PCM deposited on one waveguide as shown in Fig. 1(a). Figure 1(b) shows the parameters to be determined when designing PCM directional couplers. The PCM directional coupler is designed to evanescently couple light from a Si waveguide into the PCM-integrated waveguide port (cross) in the amorphous state. Crystallizing the PCM decreases the amplitude of the evanescent field, and the light remains in the input Si waveguide (bar) port. This design exploits the large refractive index change between the two PCM structural states to control the phase matching condition between the waveguides. Fig. 1. PCM-tuned directional coupler design. (a) PCM directional coupler switching operation. In the amorphous state the output signal is at the cross port due to evanescent coupling. In the crystalline state, the signal stays in the same waveguide due to phase mismatch between the two waveguides. (b) Cross section of the PCM directional coupler. The labelled dimensions are critical when designing the couplers. Previous works [9–12] have demonstrated reconfigurable functionality in PCM directional couplers. In those designs, Ge2Sb2Te5 and Ge2Sb2Se4Te1 have been used because they display a large change in refractive index, $\Delta {\boldsymbol n}$, upon phase transition. The higher loss in the crystalline state is avoided as the light remains in the bar port. However, they still exhibit non-negligible losses at 1550 nm in the amorphous state [13]. To avoid these high optical losses, we propose the use of the low-loss PCMs Sb2S3 and Sb2Se3. Previous works [14–16] showed that both materials exhibit distinguishable optical contrast upon a structural phase transition and have a lower extinction coefficient than Ge2Sb2Te5 and Ge2Sb2Se4Te1, in both the amorphous and crystalline states, in the telecommunications spectral band.
In this work, we aimed to assess the suitability of using low-loss PCMs to program the coupling ratio of directional couplers. This was done by designing, modelling, and comparing the performance of programmable $1 \times 2$ optical couplers based on: Ge2Sb2Te5, Ge2Sb2Se4Te1, Sb2Se3, and Sb2S3 PCMs. Using three-dimensional Finite Difference Time Domain (3D FDTD) calculations, we showed that at the telecommunication wavelength, all PCM directional couplers exhibited low insertion losses below $-1.5$ dB and crosstalk between $-20$ dB and $-40$ dB in both the amorphous and crystalline states. In particular, Sb2S3-tuned directional couplers exhibited the lowest insertion loss in both structural states. This corroborates with our optical constant measurements, with Sb2S3 having the lowest extinction coefficient, ${\boldsymbol k}$. We then studied how crystallization proceeds along strips of Sb2S3, and the gradual effect of crystallization on the device transmission. Narrow strips of Sb2S3, similar to those patterned on the directional coupler, were annealed above the glass transition temperature. We observed that unlike Ge2Sb2Te5, Sb2S3 crystallization is growth-dominated [17] and the crystal growth tends to proceed progressively along the strip. This suggests that the crystallized length can be controlled with temperature. Hence, multiple coupling ratios can be achieved through partial crystallization. However, we observe that the growth-dominated crystallization process is stochastic, which makes it challenging to introduce more than four multi-level states. The stochasticity can be minimized by optimizing the programming method.

2. Low-loss PCM directional coupler design and modelling

To design PCM-tuned couplers, we considered four different potentially useful PCMs: Ge2Sb2Te5, Ge2Sb2Se4Te1, Sb2Se3, and Sb2S3. Figure 2 shows the optical constants of the four materials.
The values for Ge2Sb2Te5 and Ge2Sb2Se4Te1 were extracted from references [10,19], whilst the optical constants of Sb2S3 and Sb2Se3 were measured by Variable Angle Spectroscopic Ellipsometry (VASE). The Sb2S3 and Sb2Se3 thin films used in the ellipsometry measurements were prepared using radio frequency (RF) magnetron sputtering and pulsed-laser deposition (PLD), respectively. Both depositions were performed without heating the substrate. The films were 160 nm and 200 nm thick, respectively. To crystallize the Sb2S3 and Sb2Se3 films, the samples were annealed at 320 °C and 250 °C, respectively. Further details of the deposition parameters, the annealing process, and the ellipsometry measurement procedure can be found in the Supplemental Document sections 1 and 2. Fig. 2. Optical properties of the four PCMs. Refractive index, ${\boldsymbol n}$, of (a) GST and GSST, (b) Sb2S3 and Sb2Se3. Extinction coefficient, ${\boldsymbol k}$, of (c) GST and GSST, (d) Sb2S3 and Sb2Se3 in their respective amorphous and crystalline states. The refractive index data for Ge2Sb2Te5, Sb2S3, and Sb2Se3 are included in Dataset 1, Ref. [18] for the interested reader. Determining the optical constants of the PCM becomes critical when modelling PCM-tuned directional couplers. This is because the PCM layer controls the amplitude of the evanescent field between the two waveguides. In the amorphous state, the two waveguides must have similar refractive indices for the optical signal to couple strongly into the cross port. Hence, the absolute values of the refractive index, ${\boldsymbol n}$, are important. This is unlike other single-waveguide reconfigurable photonic devices, such as optical phase shifters and ring resonators, where only a change in PCM refractive index is needed to induce a relative phase or resonance shift upon a structural phase transition [20–22].
However, the accuracy of ellipsometry measurements of refractive indices is mainly limited by the dispersion model. That is, the choice of dispersion model will affect the refractive index measurement and concomitantly affect the modelled coupler performance. The band structure of semiconductor materials should be considered when fitting optical constants to ellipsometry data. Hence, in our ellipsometry fitting, we considered dispersion models that describe the material optical absorption above and below the material's bandgap. Moreover, the imaginary and real components of the dielectric function are intertwined by causality, hence any physically realistic model to describe the optical properties of a semiconductor should be Kramers–Kronig consistent. For this reason, the Tauc-Lorentz model was used to fit the Sb2S3 and Sb2Se3 ellipsometry measurements in this work. The Sb2Se3 and Sb2S3 bandgaps, which were derived from fitting the Tauc-Lorentz dispersion model to the ellipsometric constants, were comparable to those reported in the literature. In the amorphous state, the Tauc-Lorentz model gave bandgap energies of 2.05 eV and 1.16 eV for Sb2S3 and Sb2Se3, respectively. These values are consistent with those reported in references [15,23]. Upon crystallization, the materials became polycrystalline and a single Tauc-Lorentz oscillator was no longer sufficient to represent the optical constants over a broad spectral range. Hence, additional Lorentz oscillators were necessary to fit the optical constants of the polycrystalline material. These additional oscillators relate to critical points of the electronic band transitions. Both materials required four Lorentz oscillators to supplement the basic Tauc-Lorentz model. For both materials, the number and position of the critical points are comparable to previous works [24,25].
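For reference, the imaginary part of the Tauc-Lorentz dielectric function (in the Jellison-Modine form) can be sketched as follows; the parameter values here are purely illustrative and are not the fitted values from this work:

```python
import numpy as np

def tauc_lorentz_eps2(E, A, E0, C, Eg):
    """Imaginary part of the dielectric function in the Tauc-Lorentz model:
    zero below the bandgap Eg, Tauc-like onset above it.
    E, E0, Eg in eV; A (amplitude) and C (broadening) are fit parameters."""
    E = np.asarray(E, dtype=float)
    num = A * E0 * C * (E - Eg) ** 2
    den = ((E ** 2 - E0 ** 2) ** 2 + C ** 2 * E ** 2) * E
    return np.where(E > Eg, num / den, 0.0)

# illustrative parameters only; Eg = 2.05 eV matches amorphous Sb2S3 from the text
E = np.linspace(0.5, 5.0, 1000)
eps2 = tauc_lorentz_eps2(E, A=100.0, E0=3.5, C=1.0, Eg=2.05)
assert eps2[E <= 2.05].max() == 0.0   # no sub-gap absorption in this model
assert eps2[E > 2.05].max() > 0.0
```

The real part $\varepsilon_1$ follows from $\varepsilon_2$ by a Kramers–Kronig integral, which is what makes the model causally consistent; ellipsometry software performs that integration internally.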
Moreover, for Sb2Se3, the electronic band transition energies at these critical points are comparable to those used in other dispersion models, which adds a degree of confidence to our ellipsometry fitting results [24]. Upon fitting the ellipsometry measurements with the Tauc-Lorentz dispersion model, we obtained the optical constants of the amorphous and crystalline low-loss PCMs shown in Figs. 2(b) and 2(d). Overall, in the near-infrared spectrum, the low-loss PCMs have a lower extinction coefficient, ${\boldsymbol k}$, and a smaller change in refractive index, $\Delta {\boldsymbol n}$, when compared to the telluride-based materials. Both low-loss PCMs exhibit negligible losses (close to zero) in the amorphous state. Of all the PCMs studied, crystalline Sb2S3 has the lowest extinction coefficient, see Fig. 2(d), which is likely due to its wider bandgap. As a rule, there is a decrease in ${\boldsymbol k}$ when one moves up the chalcogen group of the periodic table (from Te to Se to S) due to the optical bandgap opening [26], and the extinction coefficient in Fig. 2 exhibits a similar trend. A wavelength of 1550 nm corresponds to a photon energy of 0.8 eV, and the bandgap of Ge2Sb2Te5 is 0.7 eV in the amorphous state and 0.5 eV in the face-centered cubic state [27]. Hence, 1550 nm light is absorbed by interband transitions in Ge2Sb2Te5. In contrast, the low-loss PCMs have a wider bandgap. We measured the bandgap of Sb2Se3 to be 1.16 eV and 1.06 eV in the amorphous and crystalline states respectively, and for Sb2S3, it is 2.05 eV in the amorphous state and 1.58 eV in the crystalline state. More information on the material bandgap derivation can be found in Section 3 of the Supplemental Document. Thus, Sb2S3 has the smallest ${\boldsymbol k}$ due to its largest optical bandgap.
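The bandgap comparison above follows from the standard wavelength-to-photon-energy conversion, $E\,[\mathrm{eV}] \approx 1239.84/\lambda\,[\mathrm{nm}]$ (a small sketch; the material labels are shorthand for the compositions discussed in the text):

```python
# Photon energy (eV) from vacuum wavelength (nm): E = hc / lambda ~ 1239.84 / lambda_nm
def photon_energy_ev(wavelength_nm):
    return 1239.84 / wavelength_nm

E_1550 = photon_energy_ev(1550.0)
assert abs(E_1550 - 0.8) < 0.01      # ~0.80 eV, as quoted in the text

# a material absorbs 1550 nm light via interband transitions when E_photon > Eg
bandgaps = {"GST (amorphous)": 0.7, "GST (fcc)": 0.5,
            "Sb2Se3 (amorphous)": 1.16, "Sb2S3 (amorphous)": 2.05}
absorbing = {m for m, Eg in bandgaps.items() if E_1550 > Eg}
assert absorbing == {"GST (amorphous)", "GST (fcc)"}
```

Only the Ge2Sb2Te5 bandgaps lie below the 1550 nm photon energy, which is why the telluride couplers are lossy while the low-loss PCMs are essentially transparent at this wavelength.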
Despite its large bandgap, Sb2S3 can be switched by laser heating with above-bandgap light, by heating with below-bandgap light in a resonant structure, or by electrical Joule-heat-induced phase transitions [15]. Unlike crystalline Ge2Sb2Te5, Sb2S3 was recently shown to exhibit birefringence [16]. This difference stems from the Sb2S3 crystal domains being larger than the wavelength of light, whereas the domains in Ge2Sb2Te5 are below the diffraction limit, and therefore the polycrystalline film exhibits an isotropic refractive index. Amorphous marks that were smaller than the diffraction limit could also be written into Sb2S3 thin films using femtosecond pulses at 780 nm [16]. This result, together with the measured low ${\boldsymbol k}$ value, suggests that Sb2S3 might be suitable for designing multi-bit programmable couplers. Using the optical constants in Fig. 2, we modelled Ge2Sb2Te5, Ge2Sb2Se4Te1, Sb2Se3, and Sb2S3 silicon-waveguide directional couplers. For clarity, the refractive index, n, and the extinction coefficient, k, values used in the simulation model are presented in Table S1. The devices were optimized for the transverse electric (TE) mode. Silicon-based couplers were chosen due to their wide applicability in integrated photonics and integrability with electronic integrated circuits. When designing the PCM directional couplers, the dimensions shown in Fig. 1(b) were determined. They were: (a) the dimensions of the phase change material and of the cross and bar waveguides, (b) the coupling gap between the two waveguides, ${L_g}$, and (c) the coupling length, ${L_c}$. ${L_c}$ is equivalent to the length of the phase change material. To ensure a fair comparison across the four PCM-tuned directional couplers, we fixed the PCM-tuned waveguide dimensions. The width of the PCM-tuned waveguide, represented as WWG_PCM in Fig. 1(b), was fixed at 420 nm, and the PCM width and thickness were set at 320 nm and 20 nm, respectively.
These dimensions were adopted from reference [9] as they ensured single-mode operation in Ge2Sb2Te5-tuned directional couplers. All waveguide heights were fixed at 220 nm. The widths of the bar waveguides were then optimized accordingly to satisfy the phase matching condition in the amorphous state [9]. Figure 3(a) shows the waveguide dimensions required to attain phase matching for amorphous Sb2S3 and Sb2Se3. The effective refractive indices of the cross and bar waveguides, ${\boldsymbol n}_{eff}$, were calculated using the MODE solution waveguide solver in Lumerical [28]. The electric field distribution profiles of the PCM waveguides with the optimized dimensions and the corresponding ${\boldsymbol n}_{eff}$ values can be found in the Supplemental Document Fig. S4 and Table S2, respectively. The choice of ${L_g}$ was critical when designing the low-loss directional couplers. This is because their $\Delta {\boldsymbol n}$ at 1550 nm between the crystalline and amorphous states is smaller than that of Ge2Sb2Te5 and Ge2Sb2Se4Te1. For Ge2Sb2Te5 and Ge2Sb2Se4Te1, the larger $\Delta {\boldsymbol n}$ upon phase transition causes a large phase mismatch between the two waveguides in the crystalline state. Hence, the input signal does not couple into the cross waveguide in the crystalline state. ${L_g}$ was thus chosen to optimize the trade-off between insertion loss in the crystalline state and a longer device coupling length [9]. However, for PCMs with a smaller $\Delta {\boldsymbol n}$, the phase mismatch in the crystalline state is not as large, and the coupling gap needs to be optimized so that the coupling length in the amorphous state is a common multiple of the coupling length in the crystalline state. This allows the optical signal to couple back into the bar waveguide in the fully crystallized state.
Lumerical's Eigenmode Expansion (EME) solver [28] was used to demonstrate that this requirement is necessary. For clarity and analysis, the field propagation patterns in the amorphous and crystalline states of an Sb2S3 directional coupler are given in Figs. 3(b) and 3(c), respectively. Since the effective coupling length of the directional coupler must allow the signal to exit at different ports upon phase transition, the amorphous-state coupling length must be an even multiple of the crystalline-state one: ${L_{c,amor}} \approx 2m \cdot {L_{c,crys}}$, where $m \in \mathbb{Z}^+$. To minimize the footprint and coupling losses, we chose the lowest even multiple, where $m = 1$. The coupling length in the amorphous state, ${L_{c,amor}}$, was calculated using the coupled-wave theory equation [9,29]: (1) $${L_{c,amor}} = \frac{\lambda}{2(n_1 - n_2)}$$ where $\lambda$ is the wavelength, and $n_1$ and $n_2$ are the effective refractive indices of the odd and even supermodes in a two-waveguide system. In the crystalline state, the change in the effective refractive index of the PCM-tuned waveguide results in a phase mismatch. To account for the phase mismatch, the crystalline coupling length was derived to be [30]: (2) $${L_{c,crys}} = \frac{{L_{c,amor}}}{\sqrt{\left(\frac{\Delta\beta \cdot {L_{c,amor}}}{\pi}\right)^2 + 1}}$$ where $\Delta\beta = \frac{2\pi \Delta n_{eff}}{\lambda}$ is the difference in the effective propagation constants of the two waveguides. Again, the values of $n_1$, $n_2$ and $\Delta n_{eff}$ were calculated using the MODE solution solver within Lumerical. The electric field distribution patterns and the calculated values for the two-waveguide system can be found in Fig. S4 and Tables S3a and S3b, respectively. Fig. 3. PCM waveguide design process.
(a) Waveguide width optimization for phase matching between bar and corresponding cross waveguides in the amorphous state. Optical field intensity propagation for (b) amorphous and (c) crystalline Sb2S3 when ${L_{c,amor}} \approx 2m \cdot {L_{c,crys}}$, where $m \in \mathbb{Z}^+$. The optical signal propagates through different waveguide ports depending on the PCM state, consistent with the intended PCM directional coupler switching behavior. (d) Coupling lengths of amorphous and crystalline Sb2S3 and Sb2Se3, and (e) coupling length ratio of Sb2S3 and Sb2Se3. Upon stipulating the design requirements and calculations, the coupling gap, ${L_g}$, and length, ${L_c}$, of the Sb2S3 and Sb2Se3 directional couplers were then determined according to Figs. 3(d) and 3(e). Figure 3(d) shows the corresponding ${L_c}$ of the materials for the different structural states as ${L_g}$ increases. For clarity, we represent the ${L_c}$ of the two structural states as a ratio, i.e. ${L_c}$ ratio $= \frac{L_{c,amor}}{L_{c,crys}}$, in Fig. 3(e). ${L_g}$ was chosen to give an ${L_c}$ ratio of two. The final dimensions of the PCM directional couplers are given in Table 1. For comparison purposes, the Ge2Sb2Te5 and Ge2Sb2Se4Te1 directional couplers were also modelled based on the design process reported in [9]. Table 1. Device dimensions and switching times [14, 15, 31, 32] for the various PCM-tuned directional couplers. The insertion loss and crosstalk for all four PCM couplers were analyzed in the telecommunication conventional band (1520 nm to 1580 nm) using a finite difference time domain (FDTD) approach to solve Maxwell's equations. The insertion loss represents the proportion of light that is transmitted through the coupler, while the crosstalk represents the proportion of light in the inactive output port. The power field distribution and directional coupler performance are shown in Figs. 4 and 5, respectively.
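Equations (1) and (2) can be packaged into a small design helper (a sketch; the supermode indices below are hypothetical placeholders, not the fitted values from Tables S2/S3):

```python
import math

def coupling_lengths(wavelength, n1, n2, delta_n_eff):
    """Coupled-mode-theory coupling lengths, Eqs. (1)-(2) of the text.
    n1, n2: effective indices of the odd/even supermodes (amorphous state);
    delta_n_eff: effective-index mismatch introduced by crystallization."""
    L_amor = wavelength / (2.0 * (n1 - n2))                 # Eq. (1)
    delta_beta = 2.0 * math.pi * delta_n_eff / wavelength   # propagation-constant mismatch
    L_crys = L_amor / math.sqrt((delta_beta * L_amor / math.pi) ** 2 + 1.0)  # Eq. (2)
    return L_amor, L_crys

# Illustrative (hypothetical) values: a supermode splitting of 0.01 at 1550 nm.
lam = 1.55e-6
n1, n2 = 2.45, 2.44
# choosing delta_n_eff = sqrt(3) * (n1 - n2) makes L_amor exactly 2 * L_crys,
# which is the m = 1 design condition used for the Sb2S3 and Sb2Se3 couplers
La, Lc = coupling_lengths(lam, n1, n2, math.sqrt(3.0) * (n1 - n2))
assert abs(La / Lc - 2.0) < 1e-9
```

Substituting Eq. (1) into Eq. (2) shows that the ratio depends only on $\Delta n_{eff}/(n_1 - n_2)$, which is why the coupling gap (which sets $n_1 - n_2$) can be tuned to hit the ratio of two.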
Figures 4(a) to 4(d) show the power distribution of the low-loss coupler devices. The corresponding insertion loss and crosstalk of the four PCM devices, from a wavelength of 1520 nm to 1580 nm, are presented in Figs. 5(a) to 5(d). All the PCM directional couplers studied had an insertion loss below $-1.5$ dB and a minimum crosstalk between $-20$ dB and $-40$ dB in both structural states. The switching characteristics of the Sb2S3 and Sb2Se3 directional couplers are consistent with previous works [9,10], which adds a degree of confidence to the analysis of the coupler designs. Fig. 4. Normalized power field distribution (arbitrary units) of (a) amorphous and (b) crystalline Sb2S3 and (c) amorphous and (d) crystalline Sb2Se3 directional couplers. Fig. 5. PCM directional coupler performance. The insertion loss of the couplers in the (a) amorphous and (d) crystalline state and the crosstalk in the (b) crystalline and (c) amorphous state. In the amorphous state, the insertion loss of the couplers decreases as the extinction coefficient, ${\boldsymbol k}$, of the PCM decreases. The Sb2S3 and Sb2Se3 programmable couplers have a higher transmission than the telluride-based devices, as shown in Fig. 5(a). This is because they exhibit very low (close to 0 dB) insertion losses due to their near-zero extinction coefficient, ${\boldsymbol k}$. Upon crystallization, the insertion loss of the couplers increases as the extinction coefficient, ${\boldsymbol k}$, of the PCM increases. Moreover, ${\boldsymbol k}$ for Sb2Se3 and Sb2S3 is no longer near-zero, as shown in Fig. 2(d). Amongst the four devices, the Sb2S3 device still shows the lowest insertion loss. The insertion loss of the Sb2Se3 device becomes higher than that of the Sb2S3 device as its ${\boldsymbol k}$ value is an order of magnitude higher than that of Sb2S3. Moreover, the Sb2Se3 insertion loss becomes greater than that of the telluride devices. The results are illustrated in Fig. 5(d).
The higher insertion losses in Sb2Se3 are attributed to the design rule being ${L_{c,amor}} \approx 2\cdot{L_{c,crys}}$. Hence, the signal in the Sb2Se3 bar port must couple into the PCM waveguide and then back to the Si waveguide in the crystalline state. This requires the coupling losses and the extinction coefficient, ${\boldsymbol k}$, of the PCM material to be low. In contrast, the crystalline Ge2Sb2Te5 and Ge2Sb2Se4Te1 exhibit a large phase mismatch between the cross and bar ports and the signal effectively remains in the bar port, with minimal coupling. Thus, the transmissivities for the telluride PCMs in this coupler design are higher than for Sb2Se3. However, the extinction coefficient, ${\boldsymbol k}$, of Sb2S3 is sufficiently low that the insertion losses are still less than those of the telluride-based directional couplers. Sb2S3 has the best overall performance for programmable low-loss optical couplers. This is readily seen in Fig. 6, which compares the performance of the different PCM couplers using a spider diagram. Importantly, Sb2S3 displayed the lowest insertion loss in both the amorphous and crystalline states and had the lowest crosstalk in the crystalline state. However, the better performance comes at the expense of a substantially longer coupling length, which is due to the smaller Sb2S3 $\Delta n$ upon phase transition. Fig. 6. Radar chart comparing the performance of the four PCM directional couplers. The switching times for each material are obtained from [14, 15, 31, 32]. The axes are arranged such that the desirable properties are on the edge of the chart. 3. Reconfigurable multi-state Sb2S3 optical couplers With Sb2S3-tuned optical couplers having the better overall performance, these devices should be considered for non-volatile routing of optical signals through PICs that operate at the telecommunications wavelength. Here, we study how this material and coupler platform can introduce additional functionalities. 
One important area of study is partial crystallization of the PCM strip. By studying the crystallization behavior of Sb2S3, we can implement multiple switching states and understand how the devices can be reliably switched into these different coupling ratios. Intermediate switching states are a desirable feature of reprogrammable couplers. Indeed, partial crystallization of the PCM layer can be used to introduce multi-level optical states into a photonic device [33–36]. When crystallized to varying extents, the optical constants of the material change. There are essentially two ways to exploit crystallization to create a multilevel optical device, and these depend on how the material crystallizes. Crystallization of some materials is dominated by nucleation. For example, Ge2Sb2Te5 exhibits a high density of crystal nuclei when heated to intermediate temperatures above its crystallization temperature. This means that the crystallites tend to be small, and they grow in a short time [33]. Other materials, such as AIST and Sb2S3, do not easily nucleate, and their crystallites grow large from very few nucleation centers; in Sb2S3 the crystallites are large enough to be observed under an optical microscope [16]. If the crystallites are substantially smaller than the wavelength of light, then the effective refractive index of the partially crystallized material can be derived with the Clausius-Mossotti relation [37, 38]. On the other hand, for Sb2S3, where the crystallites tend to be larger than the wavelength of light, it is more appropriate to compute the multiple transmission levels by solving Maxwell's equations for reflection and transmission of light propagating through the two material domains. We envisage multiple ways to crystallize the PCM couplers. The first way is to use a laser to heat the PCM and create crystalline pixels along the waveguide length. However, this might be impractical for a small-scale device. 
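For the small-crystallite regime mentioned above, the Clausius-Mossotti (Lorentz-Lorenz) mixing rule can be sketched as follows. The amorphous and crystalline refractive-index values are hypothetical placeholders, not the measured Sb2S3 data.

```python
# Sketch: effective refractive index of a partially crystallized PCM in the
# small-crystallite limit, using the Lorentz-Lorenz (Clausius-Mossotti)
# mixing rule. Index values are hypothetical placeholders.

def lorentz_lorenz(n_amor, n_crys, f_crys):
    """Effective index for crystalline volume fraction f_crys."""
    la = (n_amor**2 - 1.0) / (n_amor**2 + 2.0)
    lc = (n_crys**2 - 1.0) / (n_crys**2 + 2.0)
    mix = (1.0 - f_crys) * la + f_crys * lc
    # invert (n_eff^2 - 1)/(n_eff^2 + 2) = mix, so n_eff^2 = (1+2*mix)/(1-mix)
    return ((1.0 + 2.0 * mix) / (1.0 - mix)) ** 0.5

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"f_crys = {f:.2f}  n_eff = {lorentz_lorenz(2.7, 3.3, f):.3f}")
```

The mixing recovers the amorphous index at $f = 0$ and the crystalline index at $f = 1$, with a monotonic interpolation in between.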
A more practical design might include an embedded heater below the PCM [20, 21, 39]. Since Sb2S3 crystallization is dominated by crystal growth, this heater could be used to controllably crystallize strips of Sb2S3 to different extents, which in turn controls the coupling ratio; see the schematic shown in Fig. 7. We name this method Growth Crystallization Tuning (GCT). Fig. 7. Schematic of the Growth Crystallization Tuning (GCT) mechanism. (a) Embedded heat source in the system to crystallize the PCM. (b) Tunable coupling ratio due to the PCM crystallizing to different extents. The coupling ratio is controlled by the temperature of the heat source, as represented by a decreasing output intensity signal. To exploit GCT in Sb2S3-tuned Si waveguide directional couplers, we first must understand how the Sb2S3 strips crystallize. Therefore, we patterned Sb2S3 cuboids of dimension 46.7 $\mu $m by 0.4 $\mu $m by 40 nm (length by width by height) on a silica-on-Si substrate. These dimensions are similar to those of the optimized coupler design. The strips were fabricated in the following sequence: (a) electron beam lithography patterning, (b) material deposition with radio frequency (RF) magnetron sputtering, and (c) material lift-off. We first deposited an 89 nm-thick PMMA 950K A2 photoresist onto the substrate by spin coating at 2000 RPM for 60 seconds. Electron beam lithography was then performed using the Raith eLINE Plus lithography system to pattern the Sb2S3 strips. The electron acceleration voltage was 30 kV with an aperture size of 30 $\mu $m. The photoresist was then developed using an MIBK:IPA developer at a ratio of 1:3. Subsequently, we deposited Sb2S3 on the resist pattern from a 50.8 mm diameter 99.9% pure Sb2S3 target using the AJA Orion 5 sputtering system with a base pressure of $2.3 \times 10^{-7}$ Torr. The sputtering process took place in an Argon environment at a pressure of $3.7 \times 10^{-3}$ Torr. The RF power was set to 20 W, which resulted in a deposition rate of 0.15 Å/s. 
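For reference, the deposition rate above implies the following sputter time for the 40 nm-thick strips:

```python
# Sketch: sputter time for the 40 nm Sb2S3 strips at the measured
# deposition rate of 0.15 Angstrom/s.

thickness_nm = 40.0
rate_A_per_s = 0.15

time_s = (thickness_nm * 10.0) / rate_A_per_s  # 1 nm = 10 Angstrom
print(f"deposition time ~ {time_s:.0f} s ({time_s / 60:.1f} min)")
```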
After Sb2S3 deposition, the photoresist was removed by soaking the sample in a 1-Methyl-2-Pyrrolidinone (NMP) solution placed in a 60 $^\circ \textrm{C}$ water bath for one hour. The sample was then rinsed with acetone followed by isopropyl alcohol, before drying it with N2 gas. An SEM image of a typical Sb2S3 strip is given in Fig. S5. The Sb2S3 strip crystallization behavior was studied by tracking the change in optical reflectivity normal to the strip's surface. The samples were heated to 320 $^\circ \textrm{C}$ at a heating rate of 5 $^\circ \textrm{C}$/min in a Linkam microscope furnace (Linkam T95-HT) with an Ar gas flow rate of 4 SCCM. Microscope images of the strips under a $\times$10 objective lens were collected for every 1 $^\circ \textrm{C}$ rise in temperature, i.e., at a rate of 5 images per minute. The strips increased in reflectivity during crystallization, which allowed the crystal growth to be monitored [32]. False color optical microscope images of the strips are shown in Fig. 8(a), where the amorphous and crystalline regions are shown as pink and blue, respectively. The strips tend to crystallize inwards from the edges. These results show that Sb2S3 strip crystallization is growth driven, as is also seen in continuous films [17]. Figure 8(b) shows the fraction of the remaining amorphous pixels for seven different Sb2S3 strips as a function of temperature. For clarity and further analysis, the corresponding average and standard deviation of the fraction of switched pixels for every 10 $^\circ \textrm{C}$ rise in temperature is given in the inset of Fig. 8(b). The average crystallization temperature was 270 ${\pm}$ 5 $^\circ \textrm{C}$, as shown in Fig. 8(c). Fig. 8. Crystallization profile of Sb2S3 strips. (a) False color optical images of an Sb2S3 strip taken during the heating process. The color change represents the phase transition. Pink and blue pixels represent amorphous and crystalline regions, respectively. 
(b) Fraction of remaining amorphous pixels with respect to temperature for the seven Sb2S3 strips. The inset in (b) shows the corresponding average and standard deviation of switched pixels across the seven strips. Regions (i) to (iv) represent the four switching states that can be achieved. (c) Averaged differential of the seven switching curves in (b). The trough represents the average crystallization temperature. (d) Standard deviation of switched pixels with respect to temperature. The peak occurs at the crystallization temperature due to the stochastic nature of Sb2S3 crystallization. The stochastic nature of Sb2S3 crystallization makes it challenging to reliably determine the transmissivity level. From Fig. 8(d), we see that the standard deviation of switched pixels is largest at 270 $^\circ \textrm{C}$, when the rate of crystallization is maximum. This is due to the strips crystallizing at slightly different temperatures. This variation can be attributed to the growth-driven crystallization kinetics. As nucleation is a stochastic process, the number of nucleation centers in the strip is random. This variability in crystallization was previously attributed to material defects [14], but is more likely due to the stochastic nature of Sb2S3 crystallization. Regardless of its origin, this variability will limit the number of switching levels because, at a given temperature, the crystallized length of the Sb2S3 strip varies. Temperature can be used to control the output power of the coupler to four coupling ratios, which in turn gives four switching states. Using the standard deviation of switched pixels at different temperatures, the extent of crystallization can be discerned into four statistically different levels over a 32 dB dynamic range. The dynamic range was approximated based on Figs. 5(a) and 5(b). The switching states are presented as regions (i) to (iv) in the inset of Fig. 8(b). 
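The pixel-counting measurement and the level-discrimination argument above can be sketched numerically. The reflectivity profile and the (mean, standard deviation) pairs below are synthetic illustrations, not the measured data of Fig. 8.

```python
# Sketch of the Fig. 8 analysis with synthetic data. A pixel is labeled
# crystalline once its reflectivity exceeds a threshold, giving the
# remaining amorphous fraction; crystallization levels are then judged
# distinct when their n-sigma bands do not overlap.

def amorphous_fraction(reflectivity, threshold):
    """Fraction of pixels still below the crystalline reflectivity threshold."""
    return sum(1 for r in reflectivity if r < threshold) / len(reflectivity)

def distinct_levels(levels, n_sigma=2.0):
    """Count crystallized-fraction levels whose n_sigma bands do not overlap."""
    count = 1
    prev_mean, prev_std = levels[0]
    for mean, std in levels[1:]:
        if mean - n_sigma * std > prev_mean + n_sigma * prev_std:
            count += 1
            prev_mean, prev_std = mean, std
    return count

# Synthetic 1-D reflectivity profile: bright (crystallized) at both edges,
# mimicking the inward growth seen in the experiment.
profile = [0.8] * 20 + [0.3] * 60 + [0.8] * 20
print(amorphous_fraction(profile, threshold=0.5))  # 0.6

# Hypothetical (mean, std) of crystallized fraction at increasing temperatures
levels = [(0.00, 0.01), (0.30, 0.05), (0.70, 0.05), (1.00, 0.01)]
print(distinct_levels(levels))  # 4 statistically separable levels here
```

Smaller level-to-level standard deviations would allow more than four separable states, which is why reducing crystallization stochasticity increases the achievable bit-depth.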
The four levels can be programmed by heating the strips at their corresponding temperatures. To model the effect of partial crystallization on the waveguide transmission, the Sb2S3 strip in the coupler model was sectioned into the corresponding amorphous and crystalline regions. As the strip crystallizes inwards from the edges, we varied the length of the crystalline regions along the direction of the white arrows shown in Fig. 9(a). The crystallized lengths of the two intermediate states are proportional to the fraction of crystallized pixels. 3D FDTD simulations were conducted to determine the directional coupler cross and bar transmissivities for each of the four discernible levels of crystallinity observed in the crystallization experiment in Fig. 8(b). Figure 9(b) shows the corresponding transmission values of the bar and cross ports for the four states at 1550 nm. The transmission values result in coupling ratios of approximately 100:0, 10:90, 70:30, and 0:100. Figure 9(c) shows the operating temperature range needed to attain the four different coupling ratios, and Fig. 9(d) illustrates how the power is distributed along the waveguide for each state. The corresponding coupling ratio at each state is also labelled in Fig. 9(d). We see that by using the GCT method, temperature can be used to tune the cross output port transmissivity to four distinct levels. Decreasing the crystallization stochasticity will increase the number of transmissivity levels that can be reliably distinguished. Fig. 9. 2-bit Sb2S3 directional coupler. (a) Partial crystallization model of the cross waveguide. The strip is sectioned into amorphous and crystalline regions based on the experimentally measured fraction of crystallized pixels. Since the material tends to crystallize inwards from the ends, the crystalline regions increase along the direction of the white arrows with increasing temperature. (b) Transmission through the cross and bar output ports for the four crystalline lengths. 
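The strip-sectioning model of Fig. 9(a) can be expressed as a short function. Symmetric inward growth from both ends is assumed, and the crystallized fractions used here are illustrative rather than the measured values.

```python
# Sketch: sectioning the Sb2S3 strip into crystalline end regions and an
# amorphous center for the partial-crystallization model. The strip length
# matches the fabricated test strips; the fractions are illustrative.

def section_strip(total_length_um, f_crys):
    """Return (crystalline end, amorphous center, crystalline end) lengths,
    assuming symmetric inward crystal growth from both ends of the strip."""
    crys_each_end = 0.5 * f_crys * total_length_um
    amor_center = (1.0 - f_crys) * total_length_um
    return crys_each_end, amor_center, crys_each_end

for f in (0.0, 0.35, 0.75, 1.0):
    left, center, right = section_strip(46.7, f)
    print(f"f = {f:.2f}: {left:.2f} um | {center:.2f} um | {right:.2f} um")
```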
(c) Operating temperature regions of the four coupling ratios. (d) Power distribution and the corresponding coupling ratio of the Sb2S3 directional coupler based on the crystallization percentage. The coupling ratio is represented as bar:cross in each field distribution plot. (i) to (iv) correspond to the colored regions in the inset of Fig. 8(b). The GCT method must provide reliable and fast programming speeds when implemented on a PIC. As crystallization is an activated process, crystal nucleation and growth are temperature and heating rate dependent. Hence, the crystallization behavior under different annealing conditions needs to be analyzed. In our experiment, we show that when Sb2S3 strips are heated at 5 $^\circ \textrm{C}$/min to 320 $^\circ \textrm{C}$, crystal growth from the edges appears to dominate over nucleation and we see growth inwards. Such studies should be extended to the microheaters [20, 21, 39] that were recently used in PICs, where the operating speeds are much faster (ns – $\mu $s time scale). Note that in these microheater devices the direction of crystal growth may be different due to the highest temperature being in the center of the PCM strip [21], which may influence the crystal growth direction. Therefore, a proper thermal design needs to be considered to implement GCT. In addition to GCT, the two laser programming methods depicted in Fig. 10 can be adopted to further increase the Sb2S3 coupler bit-depth. In the first method, which is shown in Fig. 10(a), the initial state of the Sb2S3 layer is crystalline and the entire length is partially amorphized to varying degrees to achieve the different switching states. Previous works have amorphized Sb2S3 thin films to varying degrees with a femtosecond laser [16, 40]. Moreover, in reference [40], cyclability was demonstrated, where the material can switch up to 7000 times depending on the extent of amorphization. The second method, which is shown in Fig. 
10(b), involves fully amorphizing regions of a crystalline Sb2S3 strip to implement the different switching states. The transmission level is then set by the length of the amorphous strip. Both processes use the fact that amorphization is more deterministic than crystallization, and therefore the transmissivity can be controlled more accurately. Previous works have shown that Sb2S3 can be amorphized with ns and fs laser pulses [15, 16, 40]. Hence, these methods may also be suitable for higher speed reprogramming. Fig. 10. Programming methods to minimize or avoid crystallization stochasticity. (a) Amorphizing the entire crystalline Sb2S3 strip to varying degrees to implement different switching states. This reduces stochasticity, as nucleation of the material is avoided. (b) Amorphizing regions of the crystalline strip to implement the various switching states. The material is tuned by amorphizing different lengths of the strip, which is more deterministic than crystal nucleation and growth. To conclude, we show that all the PCM-tuned directional coupler models display low insertion losses (<1.5 dB) and low crosstalk ($-$20 dB to $-$40 dB) in both the amorphous and crystalline states. Sb2S3 directional couplers have a better modelled overall performance than Sb2Se3-, Ge2Sb2Se4Te1-, and Ge2Sb2Te5-based couplers. The Sb2S3 couplers show the lowest insertion loss in both the amorphous and crystalline states due to a lower extinction coefficient than the other PCMs. However, since the Sb2S3-based coupler has the lowest $\Delta n$, a longer coupling length is needed to meet the ${L_{c,amor}} \approx 2\cdot{L_{c,crys}}$ condition. The ${\boldsymbol k}$ value of Sb2Se3 is not sufficiently low, which results in a high insertion loss in the crystalline state. These losses can be higher than those of the telluride-based directional couplers, where the signal is not coupled into the neighboring waveguide in the crystalline state. 
The longer coupling length of Sb2S3 is actually an advantage when introducing multiple switching states with the GCT scheme because it is easier to control the fractional length of the waveguide crystallized for longer strips of PCM. For this reason, and because Sb2S3 had the lowest losses, we also studied how it crystallizes when patterned as strips. This was to understand how Sb2S3-tuned couplers can be programmed with different coupling ratios. We found that four coupling ratios can be discerned across a dynamic range of 32 dB. These states are sufficient to implement 2-bit weights in a conceptual Sb2S3 programmable PIC quantized neural network, which could down-scale deep neural networks in space-limited platforms, such as mobile devices [41]. The Sb2S3-coupler bit-depth can be further increased by retaining a portion of the crystal matrix to limit nucleation stochasticity [42] or by programming the devices through amorphization. Funding. Agency for Science, Technology and Research (#A18A7b0058); Ministerstvo Školství, Mládeže a Tělovýchovy (LM2018103); Grantová Agentura České Republiky (19-17997S). Acknowledgments. This work was funded by the Agency for Science, Technology and Research (A*STAR) under the Advanced Manufacturing and Engineering (AME) grant #A18A7b0058; Grantová Agentura České Republiky, grant number 19-17997S; and Ministerstvo Školství, Mládeže a Tělovýchovy, grant number LM2018103. The work was carried out under the auspices of the SUTD-MIT International Design Centre and the University of Pardubice. LL is grateful for his Ministry of Education Singapore PhD scholarship. Data availability. The refractive index data for Ge2Sb2Te5, Sb2S3, and Sb2Se3 are available in Dataset 1, Ref. [18]. Other data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Supplemental document. See Supplement 1 for supporting content. References 1. W. Bogaerts, D. Miller, and J. 
Capmany, "The new world of programmable photonics," in 2019 IEEE Photonics Society Summer Topical Meeting Series (SUM), 2019, 1–2. 2. W. Bogaerts, D. Pérez, J. Capmany, D. A. B. Miller, J. Poon, D. Englund, F. Morichetti, and A. Melloni, "Programmable photonic circuits," Nature 586(7828), 207–216 (2020). [CrossRef] 3. J.-S. Kim and J. T. Kim, "Silicon electro-optic modulator based on an ITO-integrated tunable directional coupler," J. Phys. D: Appl. Phys. 49(7), 075101 (2016). [CrossRef] 4. X. Zi, L. Wang, K. Chen, and K. S. Chiang, "Mode-selective switch based on thermo-optic asymmetric directional coupler," IEEE Photonics Technol. Lett. 30(7), 618–621 (2018). [CrossRef] 5. M. Thomaschewski, V. A. Zenin, C. Wolff, and S. I. Bozhevolnyi, "Plasmonic monolithic lithium niobate directional coupler switches," Nat. Commun. 11(1), 748 (2020). [CrossRef] 6. Q. Zhang, H. Yu, M. Barbiero, B. Wang, and M. Gu, "Artificial neural networks enabled by nanophotonics," Light: Sci. Appl. 8(1), 42 (2019). [CrossRef] 7. D. Loke, T. H. Lee, W. J. Wang, L. P. Shi, R. Zhao, Y. C. Yeo, T. C. Chong, and S. R. Elliott, "Breaking the speed limits of phase-change memory," Science 336(6088), 1566–1569 (2012). [CrossRef] 8. L. Waldecker, T. A. Miller, M. Rude, R. Bertoni, J. Osmond, V. Pruneri, R. E. Simpson, R. Ernstorfer, and S. Wall, "Time-domain separation of optical properties from structural transitions in resonantly bonded materials," Nat Mater 14(10), 991–995 (2015). [CrossRef] 9. P. Xu, J. Zheng, J. K. Doylend, and A. Majumdar, "Low-loss and broadband nonvolatile phase-change directional coupler switches," ACS Photonics 6(2), 553–557 (2019). [CrossRef] 10. Q. Zhang, Y. Zhang, J. Li, R. Soref, T. Gu, and J. Hu, "Broadband nonvolatile photonic switching based on optical phase change materials: beyond the classical figure-of-merit," Opt. Lett. 43(1), 94–97 (2018). [CrossRef] 11. Y. Ikuma, T. Saiki, and H. 
Tsuda, "Proposal of a small self-holding 2×2 optical switch using phase-change material," IEICE Electronics Express 5(12), 442–445 (2008). [CrossRef] 12. D. Tanaka, Y. Ikuma, H. Tsuda, H. Kawashima, M. Kuwahara, and W. Xiaomin, "Ultracompact 2×2 directional coupling optical switch with Si waveguides and phase-change material," in 2012 International Conference on Photonics in Switching (PS) (2012), pp. 1–3. 13. M. Wuttig, H. Bhaskaran, and T. Taubner, "Phase-change materials for non-volatile photonic applications," Nat. Photonics 11(8), 465–476 (2017). [CrossRef] 14. M. Delaney, I. Zeimpekis, D. Lawson, D. W. Hewak, and O. L. Muskens, "A new family of ultralow loss reversible phase-change materials for photonic integrated circuits: Sb2S3 and Sb2Se3," Adv. Funct. Mater. 30(36), 2002447 (2020). [CrossRef] 15. W. Dong, H. Liu, J. K. Behera, L. Lu, R. J. H. Ng, K. V. Sreekanth, X. Zhou, J. K. W. Yang, and R. E. Simpson, "Wide bandgap phase change material tuned visible photonics," Adv. Funct. Mater. 29(6), 1806181 (2019). [CrossRef] 16. H. Liu, W. Dong, H. Wang, L. Lu, Q. Ruan, Y. S. Tan, R. E. Simpson, and J. K. W. Yang, "Rewritable color nanoprints in antimony trisulfide films," Sci. Adv. 6(51), eabb7171 (2020). [CrossRef] 17. W. Zhang, R. Mazzarello, M. Wuttig, and E. Ma, "Designing crystallization in phase-change materials for universal memory and neuro-inspired computing," Nat. Rev. Mater. 4(3), 150–168 (2019). [CrossRef] 18. T. Y. Teo, M. Krbal, J. Mistrik, J. Prikryl, L. Lu, and R. E. Simpson, "Refractive index of GST, Sb2S3, and Sb2Se3," figshare (2021), retrieved https://doi.org/10.6084/m9.figshare.17306813. 19. C. Li Tian, D. Weiling, L. Li, Z. Xilin, B. Jitendra, L. Hailong, V. S. Kandammathe, M. Libang, C. Tun, Y. Joel, and E. S. Robert, "Chalcogenide active photonics," in Proc. SPIE (2017). 20. Z. Fang, J. Zheng, A. Saxena, J. Whitehead, Y. Chen, and A. 
Majumdar, "Non-volatile reconfigurable integrated photonics enabled by broadband low-loss phase change material," Adv. Opt. Mater. 9(9), 2002049 (2021). [CrossRef] 21. C. Ríos, Q. Du, Y. Zhang, C.-C. Popescu, M. Y. Shalaginov, P. Miller, C. Roberts, M. Kang, K. A. Richardson, T. Gu, S. A. Vitale, and J. Hu, "Ultra-compact nonvolatile photonics based on electrically reprogrammable transparent phase change materials," arXiv:2105.06010 (2021). 22. M. Delaney, I. Zeimpekis, H. Du, X. Yan, M. Banakar, D. J. Thomson, D. W. Hewak, and O. L. Muskens, "Nonvolatile programmable silicon photonics using an ultralow-loss Sb2Se3 phase change material," Sci. Adv. 7(25), eabg3500 (2021). [CrossRef] 23. C. Chen, W. Li, Y. Zhou, C. Chen, M. Luo, X. Liu, K. Zeng, B. Yang, C. Zhang, J. Han, and J. Tang, "Optical properties of amorphous and polycrystalline Sb2Se3 thin films prepared by thermal evaporation," Appl. Phys. Lett. 107(4), 043905 (2015). [CrossRef] 24. N. K. Jayswal, S. Rijal, B. Subedi, I. Subedi, Z. Song, R. W. Collins, Y. Yan, and N. J. Podraza, "Optical properties of thin film Sb2Se3 and identification of its electronic losses in photovoltaic devices," Sol. Energy 228, 38–44 (2021). [CrossRef] 25. M. Schubert, T. Hofmann, C. M. Herzinger, and W. Dollase, "Generalized ellipsometry for orthorhombic, absorbing materials: dielectric functions, phonon modes and band-to-band transitions of Sb2S3," Thin Solid Films 455-456, 619–623 (2004). [CrossRef] 26. R. E. Simpson and T. Cao, "Phase Change Material Photonics," in The World Scientific Reference of Amorphous Materials (World Scientific, 2021), p. 487. 27. B.-S. Lee, J. R. Abelson, S. G. Bishop, D.-H. Kang, B.-K. Cheong, and K.-B. Kim, "Investigation of the optical and electronic properties of Ge2Sb2Te5 phase change material in its amorphous, cubic, and hexagonal phases," J. Appl. Phys. 97(9), 093509 (2005). [CrossRef] 28. Lumerical Inc., https://www.lumerical.com/products/. 29. S. 
Miller, "Coupled wave theory and waveguide applications," Bell Syst. Tech. J. 33(3), 661–719 (1954). [CrossRef] 30. Y. Chen, S.-T. Ho, and V. Krishnamurthy, "All-optical switching in a symmetric three-waveguide coupler with phase-mismatched absorptive central waveguide," Appl. Opt. 52(36), 8845–8853 (2013). [CrossRef] 31. Y. Zhang, J. B. Chou, J. Li, H. Li, Q. Du, A. Yadav, S. Zhou, M. Y. Shalaginov, Z. Fang, H. Zhong, C. Roberts, P. Robinson, B. Bohlin, C. Ríos, H. Lin, M. Kang, T. Gu, J. Warner, V. Liberman, K. Richardson, and J. Hu, "Broadband transparent optical phase change materials for high-performance nonvolatile photonics," Nat. Commun. 10(1), 4279 (2019). [CrossRef] 32. N. Yamada, E. Ohno, K. Nishiuchi, N. Akahira, and M. Takao, "Rapid-phase transitions of GeTe-Sb2Te3 pseudobinary amorphous thin films for an optical disk memory," J. Appl. Phys. 69(5), 2849–2856 (1991). [CrossRef] 33. Y. Wang, J. Ning, L. Lu, M. Bosman, and R. E. Simpson, "A scheme for simulating multi-level phase change photonics materials," npj Comput. Mater. 7(1), 183 (2021). [CrossRef] 34. C. Ríos, M. Stegmaier, P. Hosseini, D. Wang, T. Scherer, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice, "Integrated all-photonic non-volatile multi-level memory," Nat. Photonics 9(11), 725–732 (2015). [CrossRef] 35. X. Li, N. Youngblood, C. Ríos, Z. Cheng, C. D. Wright, W. H. P. Pernice, and H. Bhaskaran, "Fast and reliable storage using a 5 bit, nonvolatile photonic memory cell," Optica 6(1), 1–6 (2019). [CrossRef] 36. Y. Meng, J. K. Behera, S. Wen, R. E. Simpson, J. Shi, L. Wu, Z. Song, J. Wei, and Y. Wang, "Ultrafast multilevel optical tuning with CSb2Te3 thin films," Adv. Opt. Mater. 6(17), 1800360 (2018). [CrossRef] 37. C. H. Chu, M. L. Tseng, J. Chen, P. C. Wu, Y.-H. Chen, H.-C. Wang, T.-Y. Chen, W. T. Hsieh, H. J. Wu, G. Sun, and D. P. Tsai, "Active dielectric metasurface based on phase-change medium," Laser Photonics Rev. 10(6), 986–994 (2016). [CrossRef] 38. J. Tian, Q. Li, J. Lu, and M. 
Qiu, "Reconfigurable all-dielectric antenna-based metasurface driven by multipolar resonances," Opt. Express 26(18), 23918–23925 (2018). [CrossRef] 39. J. Zheng, Z. Fang, C. Wu, S. Zhu, P. Xu, J. K. Doylend, S. Deshmukh, E. Pop, S. Dunham, M. Li, and A. Majumdar, "Nonvolatile electrically reconfigurable integrated photonic switch enabled by a silicon PIN diode heater," Adv. Mater. 32(31), 2001218 (2020). [CrossRef] 40. K. Gao, K. Du, S. Tian, H. Wang, L. Zhang, Y. Guo, B. Luo, W. Zhang, and T. Mei, "Intermediate phase-change states with improved cycling durability of Sb2S3 by femtosecond multi-pulse laser irradiation," Adv. Funct. Mater. 31(35), 2103327 (2021). [CrossRef] 41. J. Choi, S. Venkataramani, V. Srinivasan, K. Gopalakrishnan, Z. Wang, and P. Chuang, "Accurate and efficient 2-bit quantized neural networks," in MLSys (2019). 42. Y. Zhang, C. Ríos, M. Y. Shalaginov, M. Li, A. Majumdar, T. Gu, and J. Hu, "Myths and truths about optical phase change materials: a perspective," Appl. Phys. Lett. 118(21), 210501 (2021). [CrossRef] 
(1) IEEE Photonics Technol. Lett. (1) IEICE Electronics Express (1) J. Appl. Phys. (2) J. Phys. D: Appl. Phys. (1) Laser Photonics Rev. (1) Light: Sci. Appl. (1) Nat Mater (1) Nat. Commun. (2) Nat. Photonics (2) Nat. Rev. Mater. (1) npj Comput. Mater. (1) Opt. Express (1) Opt. Lett. (1) Optica (1) Sci. Adv. (2) Sol. Energy (1) Thin Solid Films (1) Supplementary Material (2) Dataset 1 Refractive index of GST, Sb2S3, and Sb2Se3 Supplement 1 A description of the material fabrication process, ellipsometry measurement procedure, bandgap measurement, mode patterns of the optical coupler and SEM image of the Sb2S3 crystallization strip Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here. Alert me when this article is cited. Click here to see a list of articles that cite this paper View in Article | Download Full Size | PPT Slide | PDF Equations on this page are rendered with MathJax. Learn more. (1) L c , a m o r = λ 2 ( n 1 − n 2 ) (2) L c , c r y s = L c , a m o r ( Δ β ∗ L c , a m o r π ) 2 + 1 Andrea Alù, Editor-in-Chief Device dimensions and switching time [14, 15, 31, 32] for the various PCM-tuned directional couplers
CommonCrawl
\begin{document} \title{Higher order Toda brackets} \author{Azez Kharouf} \maketitle \begin{abstract} We describe two ways to define higher order Toda brackets in a pointed simplicial model category $\mathcal{D}$: one is a recursive definition using model categorical constructions, and the second uses the associated simplicial enrichment. We show that these two definitions agree, by providing a third, diagrammatic, description of the Toda bracket, and explain how it serves as the obstruction to rectifying a certain homotopy-commutative diagram in $\mathcal{D}$. \end{abstract} \section*{Introduction} \label{cint} The Toda bracket is an operation on homotopy classes of maps first defined by H.~Toda in \cite{TodG,TodC} (see \S \ref{def 1 of 2}). Toda used this construction to compute homotopy groups of spheres; Adams later showed (in \cite{AdHI}) that it can also be used to calculate differentials in spectral sequences (see also \cite{HarpSC}). The original definition was later extended to longer Toda brackets (see, e.g., \cite{GWalkL}). In stable model categories, this can be done in several equivalent ways (see \cite{JCohDS,KocU,CFranH}).
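For instance (a classical computation of Toda, recalled here only for orientation and not needed in the sequel): in the stable homotopy groups of spheres one has
$$
\langle 2,\eta,2\rangle \;=\; \eta^{2} \qquad\text{in } \pi_{2}^{s}\cong\mathbb{Z}/2~,
$$
where $\eta\in\pi_{1}^{s}$ is the stable Hopf class; here the indeterminacy $2\cdot\pi_{2}^{s}+\pi_{2}^{s}\cdot 2$ vanishes, so the bracket consists of the single (nonzero) class $\eta^{2}$.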
\begin{mysubsection}{The classical Toda bracket}\label{def 1 of 2} Given maps \w{X \stk{f} Y \stk{g} Z \stk{h} W} in a pointed model category $\mathcal{D}$, with nullhomotopies \w{F:CX \to Z} for \w{g\circ f} and \w{G:CY \to W} for \w[,]{h\circ g} the pushout property of \w{\Sigma X} in the diagram: $$ \xymatrix@R=15pt { X \ar@{^{(}->}[d]^{} \ar@{^{(}->}[r]^{} & CX \ar@/^{1.5pc}/[ddr]^{h\circ F} \ar[d] \\ CX \ar@/_{1.5pc}/[drr]_{G\circ Cf} \ar[r] & \Sigma{X} \ar@{-->}[dr]^{T} \\ & & W} $$ yields an induced map \w[.]{T:\Sigma X \to W} This map $T$ is called the \emph{Toda bracket}. \end{mysubsection} \begin{mysubsection}{Alternative approaches} More generally, higher homotopy operations have been defined abstractly (see \cite{SpanS,SpanH} and \cite{KlauT,MaunC}), using two main approaches: the original definition for topological spaces generalizes to any pointed model category (see \cite{BJTurnHA,BJTurnC,BBSenD}), while an alternative construction uses a (pointed) cubical enrichment (see \cite{BJTurnHH,BBGondH}). Since cubically enriched categories are Quillen equivalent to simplicially enriched categories, it should be possible to extend the second definition to any $(\infty,1)$-category (see \cite{BergnI}). However, the connection between these two approaches has not been made clear so far. In this article we study general Toda brackets in any suitable pointed model category $\mathcal{D}$. These can be defined in terms of standard model-category constructions (stated here in terms of homotopy cofibers, although the translation to the Eckmann-Hilton duals in terms of homotopy fibers is straightforward \ -- \ see \cite{BJTurnC}). We call this the \emph{recursive} approach, since the constructions used at the $n$-th stage depend on the previous stage (in fact, on the coherent vanishing of all lower Toda brackets).
We show that the vanishing of the Toda brackets so defined is the (last) obstruction to lifting the corresponding ``chain complex'' from \w{\operatorname{ho}\mathcal{D}} to $\mathcal{D}$ (that is, making the successive composites strictly zero) \ -- \ see Theorem \ref{reduced rectifiyng} below and \cite{BBSenD}. By \cite{DKanF}, every model category can be endowed with a simplicial enrichment having the same homotopy category; in a simplicial model category, this can be done in a straightforward manner (see \cite[II, \S 2]{QuiH}). Moreover, such a simplicial enrichment translates directly into a cubical enrichment, and conversely (see, e.g., \cite{BJTurnHH}). In this paper, the \emph{cubical} approach to the general Toda brackets is not stated explicitly in terms of a cubical enrichment, as in \cite{BBGondH}, but rather in terms of the simplicial structure on $\mathcal{D}$ (in the sense of \cite[II, \S 1]{QuiH}). Our main result is that these two approaches yield equivalent notions of higher Toda brackets (see Theorems \ref{def1=def2} and \ref{Rev def1=def2}). To show this, we give a third \emph{diagrammatic} description of Toda brackets. For example, the information needed for the construction in \S \ref{def 1 of 2} can be encoded by the diagram: \mydiagram[\label{eqfirstoda}]{ X \ar@{^{(}->}[d] \ar[r]^{f} & Y \ar[d]^{g} \ar@{^{(}->}[r]& CY \ar[d]^{G} \\ CX \ar[r]^{F} & Z \ar[r]^{h} & W. } We think of this as a sequence of two horizontal maps of vertical $1$-cubes (in the general case, we will have two maps of $n$-cubes), from which we obtain the associated Toda bracket by suitable homotopy colimits. \end{mysubsection} \begin{notation}\label{cone defn} Throughout this paper $\mathcal{D}$ will denote a pointed proper simplicial model category (see \cite[II \S 2]{QuiH}).
The \emph{cone} of an object \w[,]{X\in\mathcal{D}} denoted by \w[,]{CX} is the (strict) cofiber of \w[,]{i^X_{1}:X\to X \otimes I} where \w{X \otimes I} is the functorial cylinder object provided by the simplicial structure. We define \w{i^X:X \to CX} as in the following diagram: $$ \xymatrix@R=15pt @C=30pt { X \ar[d] \ar[r]^{i^X_1} & X \otimes I \ar[d]_{l^X} & X \ar[l]_{i_0^X} \ar[ld]^{i^X} \\ \ast \ar[r] & CX } $$ For all \w{m \in \mathbb{N}} we set \w[,]{C^m X:=C(C^{m-1}X)} where \w[.]{C^0X:=X} The \emph{cofiber} \w{\operatorname{cof}(f)} of any map \w{f:X \to Y} is the pushout of \w[,]{\ast\leftarrow X \xrightarrow{f} Y} with structure map \w[.]{r^f:Y\to\operatorname{cof}(f)} Our standard model for the homotopy cofiber of a map \w[]{f:X \to Y} is the pushout of $\xymatrix@R=25pt { CX & X \ar[r]^{f} \ar@{_{(}->}[l]^{} & Y}$, which we denote by \w[.]{\operatorname{hcof}(f)} Functoriality of the pushout defines the natural map \w[,]{\zeta_f:\operatorname{hcof}(f)\to\operatorname{cof}(f)} and if \w{f :X \to Y} is a cofibration, then left properness of the model category $\mathcal{D}$ implies that \w{\zeta_f : \operatorname{hcof}(f) \to \operatorname{cof}(f)} is a weak equivalence. The \emph{suspension} of \w[,]{X\in\mathcal{D}} denoted by \w[,]{\Sigma X} is defined to be the pushout of \w{CX \xleftarrow{i^X}X\xrightarrow{i^X}CX} (that is, \w[).]{\operatorname{hcof}(i^X)} Another version of the suspension of $X$ is the pushout of \w{\ast \leftarrow X\xrightarrow{i^X}CX} (that is, \w[),]{\operatorname{cof}(i^X)} which we denote by \w[.]{\widetilde{\Sigma} X} We have a natural weak equivalence \w[.]{\zeta_{X}:=\zeta_{i^X} : \Sigma X \to \widetilde{\Sigma} X} \end{notation} \begin{mysubsection}{Organization} In Section \ref{OrdToda} we provide a recursive definition of the general Toda bracket of length $n$ in a pointed model category $\mathcal{D}$, and of the data needed to define it (called a \emph{recursive Toda system}).
We then show that these Toda brackets serve as the last obstruction to rectifying certain diagrams. In Section \ref{GTb} we give a new definition of the higher Toda bracket (and the data required, called a \emph{cubical Toda system}) in terms of the cubical (or simplicial) enrichment in $\mathcal{D}$ (as in \cite{BBGondH}). We then reinterpret this data in terms of certain cubically-shaped diagrams, and show that the new Toda brackets can also be described in terms of certain homotopy colimits of the diagrams. In Section \ref{DdTB} we provide a diagrammatic description of the recursive construction, too, and use it to show how to pass from the cubical to the recursive definitions of higher Toda brackets (see Theorem \ref{def1=def2}). In Section \ref{PRCD} we show how to pass from the recursive to the cubical constructions, yielding Theorem \ref{Rev def1=def2}. \end{mysubsection} \sect{Model categorical approach to general Toda brackets} \label{OrdToda} In this section we give the main (recursive) definition of the general Toda bracket. We then show how it is related to the rectification of linear diagrams.
\supsect{\protect{\ref{OrdToda}}.A}{Ordinary Toda Bracket in terms of homotopy cofiber} In order to understand better the definition of the Toda bracket, we require the following auxiliary constructions: \begin{defn}\label{alphabeta} Given two maps \w{X \stk{f} Y \stk{g} Z} in $\mathcal{D}$ with \w{g\circ f} nullhomotopic, for any choice of nullhomotopy \w[,]{F : CX \to Z} from the commutative diagrams \mysdiag[\label{eqnalpha}]{ X \ar@{^{(}->}[d]^{} \ar@{^{(}->}[r]^{} \ar[rrd]^>>>>>>>{f} & CX \ar[d] \ar[rrd]^{F} & & && X \ar@{^{(}->}[d]^{} \ar[r]^{f} & Y \ar@/^{1.5pc}/[ddr]^{g} \ar[d] \\ CX \ar[r] \ar[rrd]_>>>>>>>>>>>{Cf} & \Sigma X \ar@{..>}[rrd]_<<<<<{\alpha} & Y \ar@{^{(}->}[d]^{} \ar[r]_{g} & Z \ar[d] && CX \ar@/_{1.5pc}/[drr]_{F} \ar[r] & \operatorname{hcof}(f) \ar@{-->}[dr]^{\beta} \\ & & CY \ar[r] & \operatorname{hcof}(g) && & & Z } we get the maps \w[]{\alpha = \alpha(f,g,F) : \Sigma X \to \operatorname{hcof}(g)} and \w[.]{\beta = \beta(f,g,F) : \operatorname{hcof}(f) \to Z} \end{defn} \begin{defn}\label{perook 1} Given \w{f,g,h,F,G} as in \S \ref{def 1 of 2}, note that the map $T$ defined there is just \w[.]{\beta(g,h,G) \circ \alpha(f,g,F)} The collection of all homotopy classes of such maps $T$ (as $F$ and $G$ vary) is called the \emph{Toda bracket}, and traditionally denoted by \w[.]{\lra{f,g,h}} The map $T$, denoted by \w[,]{\lra{f,g,h,(F,G)}} is called a \emph{value} of this Toda bracket. 
\end{defn} \begin{defn}\label{ealphabeta} For \w{X \stk{f} Y \stk{g} Z} and \w{F : CX \to Z} as above, by functoriality of the strict cofiber we obtain maps \w{\widetilde{\alpha}(f,g,F)} and \w{\widetilde{\beta}(f,g,F)} making the following diagram commute: \mydiagram[\label{eqalphat}]{ X \ar@{^{(}->}[d]_{i^X} \ar[r]^{f} & Y \ar[d]^{g} \ar[r]& \operatorname{cof}(f) \ar[d]^>>>>{\widetilde{\beta}(f,g,F)} \\ CX \ar[r]^{F} \ar[d] & Z \ar[r] \ar[d] & \operatorname{cof}(F) \\ \widetilde{\Sigma} X \ar[r]^{\widetilde{\alpha}(f,g,F)} & \operatorname{cof}(g) } \end{defn} The \w{3\times3} Lemma then implies: \begin{lemma}\label{cof(a)=cof(b)} Given maps \w[,]{X \stk{f} Y \stk{g} Z} with nullhomotopy \w[]{F} as in Definition \ref{alphabeta}, we have $$ \operatorname{cof}(\widetilde{\alpha}(f,g,F))=\operatorname{cof}(\widetilde{\beta}(f,g,F)) $$ \end{lemma} \begin{lemma}\label{equalalpha} Given \w{X \stk{f} Y \stk{g} Z} and a nullhomotopy \w{F : CX \to Z} for \w{g\circ f} as above, we get the following commutative squares: \mydiagram[\label{abconn}]{ \Sigma X \ar[rr]^{\alpha(f,g,F)} \ar[d]^{\zeta_X}_{\simeq} && \operatorname{hcof}(g) \ar[d]^{\zeta_g} & \operatorname{hcof}(f) \ar[d]^{\zeta_f} \ar[rr]^{\beta(f,g,F)} && Z \ar[d]^{r^{F}}\\ \widetilde{\Sigma} X \ar[rr]^{\widetilde{\alpha}(f,g,F)} && \operatorname{cof}(g) & \operatorname{cof}(f) \ar[rr]^{\widetilde{\beta}(f,g,F)} && \operatorname{cof}(F) } If $f$, $g$, and $F$ are cofibrations, the vertical maps in \wref{abconn} are weak equivalences.
\end{lemma} \begin{defn}\label{def 2 of 2} Given \w{f,g,h,F,G} as in \S \ref{def 1 of 2}, where all maps and nullhomotopies are cofibrations, we define \w[.]{\widetilde{T}=\llra{f,g,h,(F,G)}:=\widetilde{\beta}(g,h,G) \circ \widetilde{\alpha}(f,g,F)} \end{defn} \begin{prop}\label{def 1 = def 2 of 2} Given \w{f,g,h,F,G} as in Definition \ref{def 2 of 2}, we have a commutative diagram: $$ \xymatrix@R=15pt @C=35pt{ \Sigma X \ar[rr]^{\lra{f,g,h,(F,G)}} \ar[d]^>>>{\simeq}_>>>{\zeta_X} && W \ar[d]_>>>{\simeq}^>>>>{r^{G}} \\ \widetilde{\Sigma} X \ar[rr]^{\lra{\lra{f,g,h,(F,G)}}} && \operatorname{cof}(G) } $$ In particular \w[.]{\lra{f,g,h,(F,G)} \approx\llra{f,g,h,(F,G)}} \end{prop} \begin{proof} By Lemma \ref{equalalpha}, composing the squares as in \wref[.]{abconn} \end{proof} \supsect{\protect{\ref{OrdToda}}.B}{Recursive definition of general Toda brackets} We now define the general Toda bracket, in the spirit of Definition \ref{def 2 of 2}. See also \cite[\S 5]{CFranH} and the sources cited there for a recursive definition in stable model categories. \begin{defn}\label{rprojmodst} If \w{\mathcal{I}} is a finite indexing category, \w{\mathcal{D}^{\mathcal{I}}} has two model category structures (injective and projective), but both have the same weak equivalences, defined object-wise. Thus two diagrams $X$ and $Y$ are weakly equivalent (written \w[)]{X\approx Y} if they are connected by a zigzag of weak equivalences in \w[.]{\mathcal{D}^{\mathcal{I}}} We shall be interested in the \emph{projective model structure}, in which the fibrations are also defined objectwise (see \cite[Theorem 5.1.3]{HovM}). The cofibrations are more complicated to describe, in general.
However, when $\mathcal{I}$ is a partially ordered set filtered by subcategories \w{F\sb{0}\subset F\sb{1}\subset\dotsc F\sb{n}=\mathcal{I}} with all non-identity arrows strictly increasing the filtration, a cofibrant diagram $\mathbb{A}$ in \w{\mathcal{D}\sp{\In{}{}}} may be described recursively by requiring that: \begin{enumerate} \item $\mathbb{A}(i)$ be cofibrant in $\mathcal{D}$ for each object \w[;]{i\in F\sb{0}} \item Assuming by induction that \w{\mathbb{A}|\sb{F\sb{k}}} is cofibrant, we require that the map from the colimit of \w{\mathbb{A}|\sb{F\sb{k}}} to each object in \w{F\sb{k+1}\setminus F\sb{k}} be a cofibration. \end{enumerate} Such a diagram will be called \emph{strongly cofibrant}. \end{defn} \begin{example}\label{cofsqu} The outer commutative square in \mykkdiag[\label{eqstcofsq}]{ X \ar@{^{(}->}[rrr]^{h} \ar@{^{(}->}[ddd]^{f} &&& Z \ar@{^{(}->}[ddd]^{g} \ar[ldd]_{s} \\ && \\ && P \ar@{^{(}->}[rd]^{a} & \\ Y \ar[urr]^{t} \ar@{^{(}->}[rrr]^{k} &&& W } is strongly cofibrant if and only if $X$ is cofibrant, the maps $f$ and $h$ are cofibrations, and the induced map $a$ from the pushout $P$ is a cofibration. In particular, $g$ and $k$ are then also cofibrations. \end{example} \begin{lemma}\label{cofibrant square induces cofibration} If the outer commutative square in \wref{eqstcofsq} is strongly cofibrant, the induced map \w{\operatorname{cof}(f) \to \operatorname{cof}(g)} is a cofibration.
\end{lemma} \begin{mysubsection}{Recursive definition of the Toda bracket}\label{Gen defn 2 of 2} Given a linear diagram \w{X\sb{\ast}=(\xymatrix{X_1 \ar@{^{(}->}[r]^{f_1} & X_2 \ar@{^{(}->}[r]^{f_2} & \dots X_{n+2} \ar@{^{(}->}[r]^{f_{n+2}} & X_{n+3}})} in \w{\mathcal{D}} in which the maps are cofibrations, let \w{\Fn{1}{j}} be a nullhomotopy for \w[,]{f_{j+1} \circ f_j} such that the square: $$ \xymatrix@R=20pt { X_j \ar@{^{(}->}[d]_{i^{X_j}} \ar@{^{(}->}[r]^{f_j} & X_{j+1} \ar@{^{(}->}[d]^{f_{j+1}} \\ CX_j \ar@{^{(}->}[r]^{\Fn{1}{j}} & X_{j+2} } $$ is strongly cofibrant. Let \w{\ann{1}{j}:=\widetilde{\alpha}(f_j,f_{j+1},\Fn{1}{j}): \ee{}{X_j}\hookrightarrow \operatorname{cof}{(f_{j+1})}} and \w{\bnn{1}{j}:=\widetilde{\beta}(f_j,f_{j+1},\Fn{1}{j}): \operatorname{cof}{(f_j)} \hookrightarrow \operatorname{cof}(\Fn{1}{j})} be as in Definition \ref{ealphabeta} \ -- \ so both are cofibrations by Lemma \ref{cofibrant square induces cofibration}. Finally, assume that for every \w{1 \leq k \leq n-2} and \w{1\leq j \leq n-k+1} we have a strongly cofibrant square \mydiagram[\label{eqdefalpha}]{ \ee{k}{X_j} \ar@{^{(}->}[d]_{i^{\ee{k}{X_j}}} \ar@{^{(}->}[r]^{\ann{k}{j}} & \operatorname{cof}(\bnn{k-1}{j+1}) \ar@{^{(}->}[d]^{\bnn{k}{j+1}} \\ C\ee{k}{X_j} \ar@{^{(}->}[r]^{\Fn{k+1}{j}} & \operatorname{cof}(\Fn{k}{j+1}) } (where \w{\Fn{k+1}{j}} is a nullhomotopy for the composition \w[).]{\bnn{k}{j+1} \circ \ann{k}{j}} We then define: \begin{myeq}\label{eqalphbet} \begin{cases} \ann{k+1}{j}=\widetilde{\alpha}(\ann{k}{j},\bnn{k}{j+1},\Fn{k+1}{j}): \widetilde{\Sigma}^{k+1}{X_j} \hookrightarrow \operatorname{cof}{(\bnn{k}{j+1})}\\ \bnn{k+1}{j}=\widetilde{\beta}(\ann{k}{j},\bnn{k}{j+1},\Fn{k+1}{j}): \operatorname{cof}{(\ann{k}{j})} \hookrightarrow \operatorname{cof}(\Fn{k+1}{j}) \end{cases} \end{myeq} (See Example \ref{examptoda2} below).
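To anchor the recursion, note that for \w{n=1} (a linear diagram of three cofibrations \w[)]{f_1,f_2,f_3} the process stops at the first stage, and the value of the bracket is the composite
$$
\bnn{1}{2} \circ \ann{1}{1} \;=\; \widetilde{\beta}(f_2,f_3,\Fn{1}{2})\circ\widetilde{\alpha}(f_1,f_2,\Fn{1}{1}) \;:\; \widetilde{\Sigma} X_1 \to \operatorname{cof}(\Fn{1}{2})~,
$$
which is precisely the map \w{\widetilde{T}=\llra{f_1,f_2,f_3,(\Fn{1}{1},\Fn{1}{2})}} of Definition \ref{def 2 of 2}.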
Since \w{\operatorname{cof}{(\bnn{k}{j+1})}=\operatorname{cof}{(\ann{k}{j+1})}} by Lemma \ref{cof(a)=cof(b)}, we can ask if the composites \w{ \bnn{k+1}{j+1} \circ \ann{k+1}{j} } are nullhomotopic. We may therefore assume recursively that we have the strongly cofibrant squares: $$ \xymatrix@R=25pt { \ee{n-1}{X_1} \ar@{^{(}->}[d]_{i^{\ee{n-1}{X_1}}} \ar@{^{(}->}[r]^{\ann{n-1}{1}} & \operatorname{cof}(\bnn{n-2}{2}) \ar@{^{(}->}[d]^{\bnn{n-1}{2}} && \ee{n-1}{X_2} \ar@{^{(}->}[d]_{i^{\ee{n-1}{X_2}}} \ar@{^{(}->}[r]^{\ann{n-1}{2}} & \operatorname{cof}(\bnn{n-2}{3}) \ar@{^{(}->}[d]^{\bnn{n-1}{3}} \\ C \ee{n-1}{X_1} \ar@{^{(}->}[r]^{\Fn{n}{1}} & \operatorname{cof}(\Fn{n-1}{2}) && C \ee{n-1}{X_2} \ar@{^{(}->}[r]^{\Fn{n}{2}} & \operatorname{cof}(\Fn{n-1}{3}) } $$ This allows us to obtain \w{\ann{n}{1}} and \w{\bnn{n}{2}} by Definition \ref{ealphabeta}. We define the \emph{value} \w{\lra{\lra{f_1,f_2,\dots,f_{n+2},\{\{\Fn{m}{k}\}_{k=1}^{n-m+2}\}_{m=1}^{n}}}} of the corresponding \emph{recursive Toda bracket} to be the composite \w[.]{\bnn{n}{2} \circ \ann{n}{1} : \ee{n}{X_1} \to \operatorname{cof}(\Fn{n}{2})} Note that the maps \w{\ann{k}{j}} and \w{\bnn{k}{j}} depend not only on \w[,]{\widetilde{F}_j^{(k)}} but also on all previous choices of lower nullhomotopies. \end{mysubsection} \begin{example}\label{examptoda2} Given a linear diagram \w{\xymatrix@R=25pt { X_1 \ar@{^{(}->}[r]^{f_1} & X_2 \ar@{^{(}->}[r]^{f_2} & X_{3} \ar@{^{(}->}[r]^{f_{3}} & X_{4} \ar@{^{(}->}[r]^{f_{4}} & X_{5} }} of cofibrations, the first two maps \w{f_1} and \w{f_2} yield \w[,]{\ann{1}{1}:\widetilde{\Sigma}{X_1}\to\operatorname{cof}(f_2)} the next two \w{f_2} and \w{f_3} yield \w{\bnn{1}{2}:\operatorname{cof}(f_2)\to\operatorname{cof}(\widetilde{F}_2^{(1)})} and \w[,]{\ann{1}{2}:\widetilde{\Sigma}{X_2}\to\operatorname{cof}(f_3)} and so on. 
Thus the stages in our recursive process consist of: \begin{enumerate} \renewcommand{\labelenumi}{\quad~Step \arabic{enumi}.~} \item \w{\widetilde{\Sigma}{X_1}\stk{\ann{1}{1}}\operatorname{cof}(f_2)\stk{\bnn{1}{2}} \operatorname{cof}(\widetilde{F}_2^{(1)}) \quad \text{and} \quad \widetilde{\Sigma}{X_2}\stk{\ann{1}{2}}\operatorname{cof}(f_3)\stk{\bnn{1}{3}} \operatorname{cof}(\widetilde{F}_3^{(1)})} \item \w{\widetilde{\Sigma}^2{X_1}\stk{\ann{2}{1}}\operatorname{cof}(\bnn{1}{2})=\operatorname{cof}(\ann{1}{2})\stk{\bnn{2}{2}} \operatorname{cof}(\widetilde{F}_2^{(2)})} \end{enumerate} \noindent So the value of the length $4$ recursive Toda bracket we obtain is the composite $$ \lra{\lra{f_1,f_2,f_3,f_4,\Fn{1}{1},\Fn{1}{2},\Fn{1}{3},\Fn{2}{1},\Fn{2}{2}}} = \bnn{2}{2} \circ \ann{2}{1} $$ \end{example} \begin{defn}\label{dredchcx} Given a linear diagram \w{X\sb{\ast}=(\xymatrix{X_1 \ar@{^{(}->}[r]^{f_1} & X_2 \ar@{^{(}->}[r]^{f_2} & \dots X_{n+2} \ar@{^{(}->}[r]^{f_{n+2}} & X_{n+3}}) } in $\mathcal{D}$ of length \w{n+2} and choices of nullhomotopies \w{\widetilde{F}\sb{\ast}=\{\{\Fn{m}{k}\}_{k=1}^{n-m+2}\}_{m=1}^{n}} as in Definition \ref{Gen defn 2 of 2}, we call the system \w{(\Xs,\widetilde{F}_{\ast})} an \emph{$n$-th order recursive Toda system} and denote \w{\lra{\lra{f_1,f_2,\dots,f_{n+2},\{\{\Fn{m}{k}\}_{k=1}^{n-m+2}\}_{m=1}^{n}}}} by \w[.]{\Tn{n}{1}=\Tn{n}{1}(\Xs,\widetilde{F}_{\ast})} \end{defn} \begin{remark}\label{rredchcx} Even if the maps \w{f_{j}} are not all cofibrations and the squares \wref{eqdefalpha} are not all strongly cofibrant, we can define maps \w{\ann{m}{k}=\ann{m}{k}(\Xs,\widetilde{F}_{\ast})} and \w{\bnn{m}{k}=\bnn{m}{k}(\Xs,\widetilde{F}_{\ast})} as in \S \ref{Gen defn 2 of 2} and thus the corresponding recursive Toda brackets \w[]{\Tn{n}{1}=\Tn{n}{1}(\Xs,\widetilde{F}_{\ast})} as above. However, we cannot expect these constructions to be homotopy meaningful in this case.
\end{remark} \supsect{\protect{\ref{OrdToda}}.C}{Rectification of linear diagrams} We show how the Toda bracket serves as the (last) obstruction to rectifying certain diagrams in the model category $\mathcal{D}$. \begin{defn}\label{rectify def} A diagram \w{X\sb{\ast}=(X_1\stk{f_1} X_2 \dots X_{n-1} \stk{f_{n-1}} X_{n})} in a model category $\mathcal{D}$ with \w{f_{j+1}\circ f_{j}\sim\ast} for each \w{1 \leq j \leq n-2} yields a chain complex in \w[.]{\operatorname{ho}\mathcal{D}} A \emph{rectification} of \w{X\sb{\ast}} is a lifting of this chain complex to $\mathcal{D}$: in other words, a diagram \w{X\sb{\ast}'=(X'_1\stk{f'_1} X'_2\dots X'_{n-1} \stk{f'_{n-1}} X'_{n})} in $\mathcal{D}$, with \w[,]{X\sb{\ast}'\tos X\sb{\ast}} such that \w{f'_{j+1}\circ f'_{j} = \ast} for every \w[.]{1 \leq j \leq n-2} If such a rectification exists we say the original diagram is \emph{rectifiable}. \end{defn} \begin{thm}\label{reduced rectifiyng} Given an $n$-th order recursive Toda system \w{(\Xs,\widetilde{F}_{\ast})} of length \w[,]{n+2} if \w{\Tn{n}{1}(\Xs,\widetilde{F}_{\ast})} is nullhomotopic, then the underlying linear diagram \w{X\sb{\ast}} is rectifiable.
\end{thm} \begin{proof} For each $k$ and $j$, \w{(\Xs,\widetilde{F}_{\ast})} provides a commuting diagram with horizontal cofibration sequences: \mydiagram[\label{eqdfalpha}]{\ee{k}{X_j} \ar@{^{(}->}[d]_{i^{\ee{k}{X_j}}} \ar@{^{(}->}[rr]^{\ann{k}{j}} && \operatorname{cof}(\bnn{k-1}{j+1}) \ar@{^{(}->}[d]^{\bnn{k}{j+1}}\ar[rr]^{r^{\ann{k}{j}}} && \operatorname{cof}(\ann{k}{j})\ar[d]^{\bnn{k+1}{j}}\\ C\ee{k}{X_j} \ar@{^{(}->}[rr]^{\Fn{k+1}{j}} && \operatorname{cof}(\Fn{k}{j+1}) \ar[rr]^{r^{\Fn{k+1}{j}}}_{\simeq} && \operatorname{cof}(\Fn{k+1}{j}) } Since \w{\operatorname{cof}{(\bnn{k-1}{j+1})}=\operatorname{cof}{(\ann{k-1}{j+1})}} by Lemma \ref{cof(a)=cof(b)}, flipping the second square along the diagonal yields a commutative diagram: $$ \xymatrix@R=20pt @C=12pt { & & & & & X_{n+1} \ar@{^{(}->}[r]^{f_{n+1}} \ar[d]_{r^{f_n}} & X_{n+2} \ar[d]^{\simeq} \\ & & & & & \operatorname{cof}(f_n) \ar@{^{(}->}[r]^{\bnn{1}{n}} \ar@{.}[d] & \operatorname{cof}(\Fn{1}{n}) \ar@{.}[d] \\ & & & & X_4 \ar[d] \ar@{.}[r] & \operatorname{cof}(\ann{n-4}{4}) \ar@{^{(}->}[r]^>>{\bnn{n-3}{4}} \ar[d]_{r^{\ann{n-3}{3}}} & \operatorname{cof}(\Fn{n-3}{4}) \ar[d]^{\simeq} \\ & & X_3 \ar[d] \ar@{^{(}->}[r]^{f_3} & X_4 \ar@{=}[ru] \ar[d]^{\simeq} \ar[r] & \operatorname{cof}(f_3) \ar[d] \ar@{.}[r] & \operatorname{cof}(\ann{n-3}{3}) \ar@{^{(}->}[r]^>>{\bnn{n-2}{3}} \ar[d]_{r^{\ann{n-2}{2}}} & \operatorname{cof}(\Fn{n-2}{3}) \ar[d]^{\simeq} \\ X_2 \ar[d]_{r^{f_1}} \ar@{^{(}->}[r]^{f_2} & X_3 \ar@{=}[ru] \ar[d]^{\simeq} \ar[r] & \operatorname{cof}(f_2) \ar[d]^{} \ar@{^{(}->}[r]^{\bnn{1}{2}} & \operatorname{cof}(\Fn{1}{2}) \ar[r] \ar[d]^{\simeq} & \operatorname{cof}(\ann{1}{2}) \ar[d] \ar@{.}[r] & \operatorname{cof}(\ann{n-2}{2}) \ar@{^{(}->}[r]^>>{\bnn{n-1}{2}} \ar[d]_{r^{\ann{n-1}{1}}} & \operatorname{cof}(\Fn{n-1}{2}) \ar[d]^{\simeq} \\ \operatorname{cof}(f_1) \ar@{^{(}->}[r]^{\bnn{1}{1}} & \operatorname{cof}(\Fn{1}{1}) \ar[r]^{r^{\bnn{1}{1}}} & \operatorname{cof}(\ann{1}{1})\ar@{^{(}->}[r]^{\bnn{2}{1}} 
& \operatorname{cof}(\Fn{2}{1}) \ar[r]^{r^{\bnn{2}{1}}} & \operatorname{cof}(\ann{2}{1}) \ar@{..}[r] & \operatorname{cof}(\ann{n-1}{1}) \ar[r]^>>>{\bnn{n}{1}} & \operatorname{cof}(\Fn{n}{1})} $$ Thus (up to the relation $\approx$) we may replace the initial segment of \w{X\sb{\ast}} (without \w[)]{f_{n+2}} by: \myrdiag[\label{rectify seq}]{ X_1 \ar[r]^{f_1} & X_2 \ar[r]^{ \bnn{1}{1} \circ r^{f_1}} & \operatorname{cof}(\Fn{1}{1}) \ar[r]^>>>>{\bnn{2}{1} \circ r^{\bnn{1}{1}}} &\dots \ar[r] & \operatorname{cof}(\Fn{n-1}{1}) \ar[r]^>>>>{\bnn{n}{1} \circ r^{\bnn{n-1}{1}} } & \operatorname{cof}(\Fn{n}{1}) } Note that we have the following commutative diagram: \mydiagram[\label{diag of rectify}]{ \ee{n-1}{X_1} \ar@{^{(}->}[d]_{\ann{n-1}{1}} \ar@{^{(}->}[r]^{} & C\ee{n-1}{X_1} \ar@{^{(}->}[d]^{\Fn{n}{1}} \ar[r] & \ee{n}{X_1} \ar@{^{(}->}[d]^{\ann{n}{1}} \ar[rd]^{\Tn{n}{1} } \\ \operatorname{cof}(\ann{n-2}{2}) \ar@{^{(}->}[r]^{\bnn{n-1}{2}} \ar[d] & \operatorname{cof}(\Fn{n-1}{2}) \ar[r] \ar[d]^{\simeq} & \operatorname{cof}(\bnn{n-1}{2}) \ar[d] \ar[r]^{\bnn{n}{2}} & \operatorname{cof}(\Fn{n}{2}) \\ \operatorname{cof}(\ann{n-1}{1}) \ar[r]^{\bnn{n}{1}} & \operatorname{cof}(\Fn{n}{1}) \ar[r]^>>>>{r^{\bnn{n}{1}}} & \operatorname{cof}(\bnn{n}{1})=\operatorname{cof}(\ann{n}{1}) \ar@{-->}[ru] } Because \w[,]{\bnn{n}{2} \circ \ann{n}{1} = \Tn{n}{1}\sim \ast } we obtain a dashed map \w{s : \operatorname{cof}(\ann{n}{1}) \to \operatorname{cof}(\Fn{n}{2})} in \wref[,]{diag of rectify} so we can continue (the right end of) the previous diagram as: $$ \xymatrix@R=20pt @C=12pt { & & X_{n+2} \ar@{^{(}->}[r]^{f_{n+2}} \ar[d]_{r^{f_{n+1}}} & X_{n+3} \ar[d]^{\simeq} \\ X_{n+1} \ar@{^{(}->}[r]^{f_{n+1}} \ar[d]_{r^{f_n}} & X_{n+2} \ar[r]^{r^{f_{n+1}}} \ar@{=}[ru] \ar[d]^{\simeq} & \operatorname{cof}(f_{n+1}) \ar@{^{(}->}[r]^{\bnn{1}{n+1}} \ar[d]_{r^{\ann{1}{n}}} & \operatorname{cof}(\Fn{1}{n+1}) \ar[d]^{\simeq} \\ \operatorname{cof}(f_n) \ar@{^{(}->}[r]^{\bnn{1}{n}} \ar@{.}[d] &
\operatorname{cof}(\Fn{1}{n}) \ar[r] \ar@{.}[d] & \operatorname{cof}(\ann{1}{n}) \ar[r]^>>{\bnn{2}{n}} \ar@{.}[d] & \operatorname{cof}(\Fn{2}{n}) \ar@{.}[d] \\ \operatorname{cof}(\ann{n-2}{2}) \ar[r]^>>{\bnn{n-1}{2}} \ar[d]_{r^{\ann{n-1}{1}}} & \operatorname{cof}(\Fn{n-1}{2}) \ar[r] \ar[d]^{\simeq} & \operatorname{cof}(\ann{n-1}{2}) \ar[r]^>>{\bnn{n}{2}} \ar[d]_<<<{r^{\ann{n}{1}}} & \operatorname{cof}(\Fn{n}{2}) \\ \operatorname{cof}(\ann{n-1}{1}) \ar[r]^>>{\bnn{n}{1}} & \operatorname{cof}(\Fn{n}{1}) \ar[r]^{r^{\bnn{n}{1}}} & \operatorname{cof}(\ann{n}{1}) \ar[ru]_{s} \\ } $$ The rectification is completed by replacing \w{f_{n+2}} with \w[.]{s \circ r^{\bnn{n}{1}} } \end{proof} \sect{A cubical version of general Toda brackets} \label{GTb} In this section we provide an alternative ``cubical'' definition of general Toda brackets, essentially formulated in terms of a topological (or simplicial) enrichment of $\mathcal{D}$. We then describe a diagrammatic approach to this definition. \supsect{\protect{\ref{GTb}}.A}{Cubical Definition of the Toda Bracket} The new definition of the general Toda bracket corresponds to the version of \S \ref{def 1 of 2}. Implicitly we are using an enrichment for $\mathcal{D}$ in cubical sets, but since we are only concerned with (higher) nullhomotopies, we can use the simplified approach of \cite{BBGondH}. We first fix some notation for cubical diagrams: \begin{defn}\label{dcubicals} The \emph{$n$-cube category} \w{\In{n}{}} is the category of binary sequences \w{\vec{a}=\vec{a}^{(n)}} of length $n$ with morphisms given by the obvious partial order induced by \w{0\leq 1} in each coordinate. Thus there is a (unique) map \w{\vec{a} \to \vec{b}} if and only if \w{\vec{b}} is obtained from \w{\vec{a}} by replacing some of the $0$'s by $1$'s.
We denote by \w{\deg(\vec{a})} the number of $1$'s in \w[,]{\vec{a}} and by \w{\operatorname{init}(\vec{a})\geq 0} the length of the initial segment of $1$'s in \w[;]{\vec{a}} for example, \w{\deg(1,1,0,1,0,0)=3} and \w[.]{\operatorname{init}(1,1,0,1,0,0)=2} We write \w{\vec{r}_k} for the sequence (of degree \w[)]{n-1} with a single $0$ in place $k$. Thus the category \w{\In{3}{}} is the partially ordered set: \myaaadiag[\label{I^3}]{ (0,0,0) \ar[d]^{} \ar[r] \ar[rrd] & (0,1,0) \ar[d] \ar[rrd] & & \\ (1,0,0) \ar[r] \ar[rrd] & (1,1,0) \ar[rrd] & (0,0,1) \ar[d] \ar[r] & (0,1,1) \ar[d] \\ & & (1,0,1) \ar[r] & (1,1,1) } \end{defn} \begin{defn}\label{delln} We denote by \w{\Ln{n}{}} the full subcategory of \w{\In{n+1}{}} obtained by omitting \w[.]{(1,\dots,1)} Thus \w{\Ln{1}{}} is the corner of the \w{2\times 2} square \w{\In{2}{}} opposite \w[:]{(1,1)} $$ \xymatrix@R=10pt @C=10pt { (0,0) \ar[r] \ar[d] & (0,1) \\ (1,0) } $$ Similarly, \w[]{\Ln{2}{}} is obtained from the \w{2\times 2\times 2} cube of \wref{I^3} by omitting \w[.]{(1,1,1)} \end{defn} \begin{defn}\label{n-cube} An \emph{$n$-cube in $\mathcal{D}$} is a functor \w[,]{\mathbb{A} :\In{n}{} \to \mathcal{D}} and an \emph{$n$-cube map} \w{\mathfrak{F}} is a natural transformation between $n$-cubes. We denote by \w{\mathcal{D}\sp{\In{n}{}}} the category of all $n$-cubes in $\mathcal{D}$. By convention \w[.]{\mathcal{D}\sp{\In{0}{}}:=\mathcal{D}} \end{defn} \begin{remark} Observe that a $1$-cube in $\mathcal{D}$ is a map between objects in $\mathcal{D}$, and more generally we may think of an $n$-cube map in $\mathcal{D}$ as an \wwb{n+1}cube in $\mathcal{D}$.
\end{remark} \begin{defn}\label{CC^mX} For any \w[,]{X\in\mathcal{D}} the $m$-cube \w{\mathbb{C}^{(m)}X } in $\mathcal{D}$ is defined by \w{ \mathbb{C}^{(m)}X(\vec{a})=C^{\deg(\vec{a})}X} (see \S \ref{cone defn}), where if $\vec{b}$ is obtained from $\vec{a}$ by replacing the $0$ in the $j$-th slot by $1$ and $k$ is the number of $1$'s in $\vec{a}$ (and in $\vec{b}$) before the $j$-th slot, then \w{\mathbb{C}^{(m)}X(\vec{a})\to\mathbb{C}^{(m)}X(\vec{b})} is \w[.]{C\sp{\deg(\vec{a})-k}(i^{C^k X})} Thus \w[]{\mathbb{C}^{(2)}X} is \mybbbdiag[\label{eqc2x}]{X \ar@{^{(}->}[d]_{i^X} \ar@{^{(}->}[r]^{i^X} & CX \ar@{^{(}->}[d]^{C(i^X)} \\ CX \ar@{^{(}->}[r]_{i^{CX}} & C^2 X \\ } \end{defn} \begin{defn}\label{LX} We define \w{\mathbb{L}^{(n)}X} to be the restriction of \w{\mathbb{C}^{(n+1)}X } to \w[,]{\Ln{n}{}} and denote its colimit by \w[.]{L^n X} Thus \w[,]{L^1 X=\Sigma{X}} and \w{L^2 X} is the colimit of \w[:]{\mathbb{L}^{(2)}X} $$ \xymatrix@R=15pt { X \ar@{^{(}->}[d]^{} \ar@{^{(}->}[r] \ar@{^{(}->}[rrd] & CX \ar@{^{(}->}[d] \ar@{^{(}->}[rrd]^{i^{CX}} & & \\ CX \ar@{^{(}->}[r]^{i^{CX}} \ar@{^{(}->}[rrd]_{i^{CX}} & C^2 X & CX \ar@{^{(}->}[d]_>>>{C(i^X)} \ar@{^{(}->}[r]_{C(i^X)} & C^2 X \\ & & C^2 X & } $$ which is weakly equivalent to the double suspension \w[.]{\Sigma\sp{2}X} \end{defn} \begin{defn}\label{d_k} Given \w[,]{X,Y \in \mathcal{D}} and\w[,]{1\leq k\leq m} let $$ \delta\sb{k}:=C^{m-k}(i^{C^{k-1}X}):C^{m-1}X\to C^m X $$ and write \w{d_{k}=d^{X,Y}_{k}(m):\operatorname{Hom}_\mathcal{D}(C^m X,Y) \to \operatorname{Hom}_\mathcal{D}(C^{m-1} X,Y)} for the induced map \w[.]{\delta\sb{k}\sp{\ast}} Thus for \w{m=2} we have: $$ \xymatrix@R=15pt @C=30pt { & CX \ar@{^{(}->}[d]_{C(i^X)} \ar@/^{1.5pc}/[rdd]^{d_1(f)} \\ CX \ar@{^{(}->}[r]_{i^{CX}} \ar@/_{1.5pc}/[rrd]_{d_2(f)} & C^2X \ar[rd]^{f} \\ && Y } $$ \end{defn} \begin{lemma}\label{bouhocom} For any \w[]{f:C^n X\to Y} and \w{g:C^m Y\to Z} we have $$ d_{k}(g \circ C^m(f)) = \begin{cases} g \circ C^m(d_{k}(f) ) & 1 \leq k 
\leq n \\ d_{k-n}(g ) \circ C^{m-1}(f) & n+1 \leq k \leq m+n \end{cases} $$ \end{lemma} \begin{defn}\label{order cahin} Given a linear diagram \w{X\sb{\ast}:=(X_1\stk{f_1} X_2 \dotsc\stk{f_{n-1}} X_{n} \stk{f_{n}} X_{n+1})} of length $n$ in $\mathcal{D}$, an \emph{$m$-th order cubical Toda system} over \w{X\sb{\ast}} (\w{m \leq n-2}) is a system \w{(\Xs,\FFs)=(\Xs,\FFs)^{(m,n)}} of (higher order) nullhomotopies $$ F\sb{\ast}=\{\Fm{k}{j} \in \operatorname{Hom}(C^k X_j ,X_{j+k+1})~|\ 1\leq k \leq m,\ 1 \leq j \leq n-k\} $$ satisfying: \begin{myeq}\label{eq2} \begin{cases} d_1(\Fm{k}{j})= \Fm{k-1}{j+1} \circ C^{k-1}(\Fm{0}{j}) \\ d_{2}(\Fm{k}{j})=\Fm{k-2}{j+2} \circ C^{k-2}( \Fm{1}{j}) \\ \qquad \vdots \qquad \qquad \vdots \\ d_{r}(\Fm{k}{j})=\Fm{k-r}{j+r} \circ C^{k-r}( \Fm{r-1}{j}) \\ \qquad \vdots \qquad \qquad \vdots \\ d_{k}(\Fm{k}{j})=\Fm{0}{j+k} \circ \Fm{k-1}{j} \\ \end{cases} \end{myeq} where \w{\Fm{0}{j}=f_j} (see \cite{BBGondH}). \end{defn} \begin{defn}\label{Gen defn 1 of 2} Let \w[]{(\Xs,\FFs)^{(n,n+2)}} be an $n$-th order cubical Toda system of length \w[.]{n+2} By Lemma \ref{bouhocom} and \wref[,]{eq2} the maps $$ \Fm{n-k+1}{k+1}\circ C^{n-k+1}(\Fm{k-1}{1}) : \mathbb{L}^{(n)}X_1(\vec{r}_k) = C^{n}X_1 \to X_{n+3} \quad \quad (1 \leq k \leq n+1) $$ satisfy $$ d_{j-1}(\Fm{n-k+1}{k+1} \circ C^{n-k+1}(\Fm{k-1}{1})) =d_{k}(\Fm{n-j+1}{j+1} \circ C^{n-j+1}(\Fm{j-1}{1})) $$ so they induce a map, called the \emph{value of the Toda bracket}: $$ \Tm{n}{1}(\Xs,\FFs)=\lra{f_1,f_2,\dots,f_{n+2},\{\{\Fm{m}{k}\}_{k=1}^{n-m+2}\}_{m=1}^{n}}: L^n X_1 \to X_{n+3}~. $$ One can similarly define \w{\Tm{m}{j}=\Tm{m}{j}(\Xs,\FFs)} for suitable $m$ and $j$.
\end{defn} \begin{example} Given a linear diagram \w{X\sb{\ast}=(X_1 \stk{f_1} X_2 \stk{f_2} X_3 \stk{f_3} X_4 \stk{f_4} X_5)} of length $4$ in $\mathcal{D}$, a second order cubical Toda system over \w{X\sb{\ast}} consists of three (first order) nullhomotopies \w{\Fm{1}{1} :C X_1 \to X_{3}} for \w[,]{f_2 \circ f_1} \w{\Fm{1}{2} :C X_2 \to X_{4}} for \w[,]{f_3 \circ f_2} and \w{\Fm{1}{3} :C X_3 \to X_{5} } for \w[,]{f_4 \circ f_3 } respectively, and two second-order nullhomotopies \w{\Fm{2}{1} \in \operatorname{Hom}(C^2X_1 ,X_{4})} and \w{\Fm{2}{2} \in \operatorname{Hom}(C^2X_2 ,X_{5}) } satisfying: \w[,]{d_1(\Fm{2}{1})=\Fm{1}{2} \circ C f_1} \w[,]{d_1(\Fm{2}{2})=\Fm{1}{3} \circ C f_2} \w[,]{d_2(\Fm{2}{1})=f_{3} \circ \Fm{1}{1}} and \w[.]{d_2(\Fm{2}{2})=f_{4} \circ \Fm{1}{2}} The value \w{\lra{f_1,f_2,f_3,f_4,\Fm{1}{1},\Fm{1}{2},\Fm{1}{3},\Fm{2}{1},\Fm{2}{2}}:L^2 X_1\to X_5} is depicted by: \begin{center} \begin{picture}(350,180)(0,0) \put(10,95){\circle*{3}} \multiput(10,94)(0,-3){30}{\circle*{.5}} {\put(50,75){$f_4\circ f_3\circ F_1$}} {\put(79,82){\vector(1,1){10}}} \put(140,95){\circle*{5}} \put(140,95){\line(-1,0){130}} \put(143,87){$f_4\circ f_3\circ f_2\circ f_1$} \put(140,95){\line(0,-1){85}} {\put(154,50){$F_3\circ Cf_2 \circ Cf_1$}} {\put(154,46){\vector(-2,-1){13}}} \put(140,5){\circle*{3}} \multiput(12,5)(3,0){43}{\circle*{.5}} \put(10,5){\circle*{3}} {\put(50,45){\framebox{$F_3\circ CF_1$}}} \put(157,169){\circle*{3}} \multiput(157,169)(-2,-1){75}{\circle*{.5}} \put(289,169){\circle*{3}} \multiput(289,170)(-3,0){44}{\circle*{.5}} \put(289,169){\line(-2,-1){150}} {\put(180,149){$f_4 \circ F_2\circ Cf_1$}} {\put(209,146){\vector(1,-1){10}}} {\put(120,126){\framebox{$f_4 \circ \Fm{2}{1}$}}} \multiput(289,170)(0,-3){30}{\circle*{.5}} \put(289,79){\circle*{3}} \multiput(289,79)(-2,-1){75}{\circle*{.5}} {\put(219,102){\framebox{$\Fm{2}{2}\circ C^{2}f_1$}}} \end{picture} \end{center} \end{example} \supsect{\protect{\ref{GTb}}.B}{Diagrammatic interpretation for
the cubical system} We now describe cubical Toda systems in terms of diagrams in the model category $\mathcal{D}$: more precisely, we will encode the Toda system by two maps of cubes, encompassing the various nullhomotopies. This will allow us to define a suitable equivalence relation $\approx$ on these systems (for which the Toda bracket is an invariant). \begin{defn}\label{Mm} Assume given an $n$-th order cubical Toda system \w[]{(\Xs,\FFs)} of length \w[.]{n+2} For each \w{1 \leq m \leq n} and \w[,]{1 \leq j \leq n-m+3 } we define an $m$-cube \w{\Mm{m}{j}=\Mm{m}{j}(\Xs,\FFs)} in $\mathcal{D}$ by setting \w[,]{\Mm{m}{j} (\vec{a}):= C^{\deg(\vec{a})-\operatorname{init}(\vec{a})} X_{j+\operatorname{init}(\vec{a})}} (see Definition \ref{dcubicals}). If \w{\vec{b}\in\In{m}{}} is obtained from $\vec{a}$ by replacing the $0$ in the $s$-th coordinate with $1$, the corresponding morphism \w{\Mm{m}{j}(\vec{a})\to\Mm{m}{j}(\vec{b})} is $$ \begin{cases} C^{\deg(\vec{a})-\operatorname{init}(\vec{a})-\ell} (F^{(\ell)}_{j+\operatorname{init}(\vec{a})}) & \text{if}\ s=\operatorname{init}(\vec{a})+1\\ C^{\deg(\vec{a})-r}(i^{C^{r-\operatorname{init}(\vec{a})} X_{j+\operatorname{init}(\vec{a})}}) & \text{otherwise} \end{cases} $$ where \w{\ell\geq 0} is the number of consecutive $1$'s starting from the \w{s+1} coordinate, and \w[.]{r=a_1+\dots+a_{s-1}} Thus \w{\Mm{1}{1}} is the $1$-cube \w[,]{f_1 :X_1 \to X_2} and \w{\Mm{3}{2}} is the $3$-cube: $$ \xymatrix@R=25pt@C=40pt { X_2 \ar[d]_{f_2} \ar@{^{(}->}[r]^{i^{X_2}} \ar@{^{(}->}[rrd]^>>>>>>>>>>>>{i^{X_2}} & CX_2 \ar[d]_>>>{F_2} \ar@{^{(}->}[rrd]^{i^{CX_2}} & & \\ X_3 \ar[r]^{f_3} \ar@{^{(}->}[rrd]_{i^{X_3}} & X_4 \ar[rrd]_<<<<<<<<<<<<<<{f_4} & CX_2 \ar[d]^<<<{Cf_2} \ar@{^{(}->}[r]_{C(i^{X_2})} & C^2 X_2 \ar[d]^{\Fm{2}{2}} \\ & & CX_3 \ar[r]_{F_3} & X_5 } $$ \end{defn} \begin{defn}\label{AAA} The $m$-cube map \w{\AAA{m}{j}=\AAA{m}{j}(\Xs,\FFs) : \mathbb{C}^{(m)}X_j \to \Mm{m}{j+1} } is given by \w[.]{\AAA{m}{j} (\vec{a}) =
C^{\deg(\vec{a})-\operatorname{init}(\vec{a})} \Fm{\operatorname{init}(\vec{a})}{j}} Thus \w{\AAA{2}{1} : \mathbb{C}^{(2)} X_1 \to \Mm{2}{2} } is the following back-to-front map of squares: $$ \xymatrix@R =25pt@C=40pt { X_1 \ar@{^{(}->}[d]_<<<{i^{X_1}} \ar@{^{(}->}[r]^{i^{X_1}} \ar[rrd]^>>>>>>>>>>>>>{f_1} & CX_1 \ar@{^{(}->}[d]^<<<{} \ar[rrd]^{Cf_1} & & \\ CX_1 \ar@{^{(}->}[r]^{i^{CX_1}} \ar[rrd]_{F_1} & C^2 X_1 \ar[rrd]_<<<<<<<<{\Fm{2}{1}} & X_2 \ar[d]^<<<{f_2} \ar@{^{(}->}[r]_{i^{X_2}} & C X_2 \ar[d]^{F_2} \\ & & X_3 \ar[r]_{f_3} & X_4 } $$ \end{defn} \begin{defn}\label{xm} Given an $n$-th order cubical Toda system \w[]{(\Xs,\FFs)} of length $n+2$ in $\mathcal{D}$, for each \w{1 \leq m \leq n} and \w{1 \leq j \leq n-m+1 } we define the $m$-cube \w{\xm{m}{j+m+2}=\xm{m}{j+m+2}(\Xs,\FFs)} by $$ \xm{m}{j+m+2}(\vec{a}) = \begin{cases} X_{j+m+2}& \text{if}\ \vec{a}=(1,\dots,1) \\ C \Mm{m}{j+1}(\vec{a})& \text{otherwise} \end{cases} $$ with edge \w{\xm{m}{j+m+2}(\vec{a}) \to \xm{m}{j+m+2}(\vec{b})} equal to $$ \begin{cases} \Fm{m-k+1}{j+k} & \text{if}\ \vec{a}=\vec{r}_k\ \text{and}\ \vec{b}=(1,\dots,1)\\ C (\Mm{m}{j+1}(\vec{a})\to \Mm{m}{j+1}(\vec{b})) & \text{otherwise} \end{cases} $$ Thus \w{\xm{1}{4}} is the $1$-cube \w[,]{F_2: CX_2 \to X_4} and \w{\xm{3}{7}} is the $3$-cube: $$ \xymatrix@R=25pt@C=45pt { C X_3 \ar[d]_{C f_3} \ar@{^{(}->}[r]^{C(i^{X_3})} \ar@{^{(}->}[rrd]^>>>>>>>>>>{C(i^{X_3})} & C^2 X_3 \ar[d]_{CF_3} \ar[rrd]^{C(i^{CX_3})} & & \\ CX_4 \ar[r]^{Cf_4} \ar[rrd]_{C(i^{X_4})} & C X_5 \ar[rrd]_<<<<<<<<<<<<<{F_5} & C^2 X_3 \ar[d]^<<<<<{C^2 f_3} \ar@{^{(}->}[r]_{C^2(i^{X_3})} & C^3 X_3 \ar[d]^{\Fm{3}{3}} \\ & & C^2X_4 \ar[r]_{\Fm{2}{4}} & X_7 } $$ \end{defn} \begin{defn}\label{BBB} Given a higher order cubical Toda system \w[]{(\Xs,\FFs)} in $\mathcal{D}$, the $m$-cube map \w{\BBB{m}{j+1}=\BBB{m}{j+1}(\Xs,\FFs) : \Mm{m}{j+1} \to \xm{m}{j+m+2}} is defined at the $\vec{a}$-vertex by $$ \BBB{m}{j+1}(\vec{a}) = \begin{cases} f_{j+m+1}& \text{if} ~ 
\vec{a}=(1,1,\dots,1) \\ i^{\Mm{m}{j+1}(\vec{a})}& \text{otherwise} \end{cases} $$ Thus \w{\BBB{2}{2}: \Mm{2}{2} \to \xm{2}{5}} is the back-to-front map of squares: $$ \xymatrix@R=25pt@C=40pt { X_2 \ar[d]_{f_2} \ar@{^{(}->}[r]^{i^{X_2}} \ar@{_{(}->}[rrd]^>>>>>>>>>>>>{i^{X_2}} & CX_2 \ar[d]_{F_2} \ar@{^{(}->}[rrd]^{i^{CX_2}} & & \\ X_3 \ar[r]^{f_3} \ar@{_{(}->}[rrd]_{i^{X_3}} & X_4 \ar[rrd]_<<<<<<<<<<<<<<{f_4} & CX_2 \ar[d]^<<<{Cf_2} \ar@{^{(}->}[r]_{C(i^{X_2})} & C^2 X_2 \ar[d]^{\Fm{2}{2}} \\ & & CX_3 \ar[r]_{F_3} & X_5 } $$ \end{defn} \begin{summary}\label{stwocubes} The data of an $n$-th order cubical Toda system \w[]{(\Xs,\FFs)} of length $n+2$ is encoded by the following pair of $n$-cube maps \mydiagram[\label{eqtwocubes}]{ \mathbb{C}^{(n)}X_1 \ar[rrr]^{\AAA{n}{1} (\Xs,\FFs)} &&& \Mm{n}{2}(\Xs,\FFs) \ar[rrr]^{\BBB{n}{2}(\Xs,\FFs)} &&& \xm{n}{n+3}(\Xs,\FFs) } Thus a second-order cubical Toda system \w{(\Xs,\FFs)^{(2)}} of length $4$ is described by: $$ \xymatrix@R =25pt@C=30pt { X_1 \ar@{^{(}->}[d] \ar@{^{(}->}[r] \ar[rrd]^>>>>>>>>>>{f_1} & CX_1 \ar@{^{(}->}[d] \ar[rrd]^{Cf_1} & & \\ CX_1 \ar@{^{(}->}[r]^{i^{CX_1}} \ar[rrd]_{F_1} & C^2 X_1 \ar[rrd]_<<<<<<<{\Fm{2}{1}} & X_2 \ar@{^{(}->}[rrd] \ar[d]^<<<{f_{2}} \ar@{^{(}->}[r] & C X_2 \ar@{^{(}->}[rrd]^{i^{CX_2}} \ar[d]^<<<<<{F_2} \\ & & X_3 \ar@{^{(}->}[rrd]_{i^{X_3}} \ar[r]^>>>>>>>{f_3} & X_4 \ar[rrd]^>>>>>>>>>>{f_4} & CX_2 \ar[d] \ar@{^{(}->}[r] & C^2X_2 \ar[d]^{\Fm{2}{2}} \\ && & & CX_3 \ar[r]_{F_3} & X_5} $$ \end{summary} \begin{defn}\label{dtwocubes} An $n$-th order cubical Toda system \w[]{(\Xs,\FFs)} in $\mathcal{D}$ is called \emph{strongly cofibrant} if the diagram \wref{eqtwocubes} is strongly cofibrant (see Definition \ref{rprojmodst}).
\end{defn} \begin{remark}\label{rtwocubes} Note that when \w{(\Xs,\FFs)} is strongly cofibrant, in particular the three $n$-cubes \w[,]{\mathbb{C}^{(n)}X_1} \w[,]{\Mm{n}{2}(\Xs,\FFs)} and \w{\xm{n}{n+3}(\Xs,\FFs)} are strongly cofibrant (this always holds for \w{\mathbb{C}^{(n)}X_1} as long as \w{X_1} is cofibrant in $\mathcal{D}$). \end{remark} \begin{defn}\label{relation between systyms} Two $n$-th order cubical Toda systems \w{(\Xs,\FFs)} and \w{(\Xs',G_{\ast})} are \emph{equivalent} (written \w[)]{(\Xs,\FFs) \toss (\Xs',G_{\ast})} if the sequences $$ \xymatrix{ \mathbb{C}^{(n)}X_1 \ar[rrr]^{\AAA{n}{1} (\Xs,\FFs)} &&& \Mm{n}{2}(\Xs,\FFs) \ar[rrr]^{\BBB{n}{2}(\Xs,\FFs)} &&& \xm{n}{n+3}(\Xs,\FFs) } $$ and $$ \xymatrix{ \mathbb{C}^{(n)}X'_1 \ar[rrr]^{\AAA{n}{1} (\Xs',G_{\ast})} &&& \Mm{n}{2}(\Xs',G_{\ast}) \ar[rrr]^{\BBB{n}{2}(\Xs',G_{\ast})} &&& \xm{n}{n+3}(\Xs',G_{\ast}) } $$ are equivalent as diagrams under the relation \w{\approx} of \S \ref{rprojmodst}. As we shall show in Corollary \ref{ceqimpeqtoda} below, equivalent higher order cubical Toda systems have $\approx$-equivalent Toda brackets. \end{defn} \begin{prop} Every higher order cubical Toda system \w{(\Xs,\FFs)} is equivalent under the relation \w{\toss} to a strongly cofibrant one.
\end{prop} This follows by induction from: \begin{lemma}\label{lfiltr} Given an indexing category $\In{}{}$ filtered as in \S \ref{rprojmodst} and a diagram \w[,]{\mathbb{A} : \In{}{} \to \mathcal{D}} any (strongly) cofibrant replacement for its restriction \w{\mathbb{A}'} to \w{F\sb{i}} can be extended to a strongly cofibrant replacement for \w[.]{\mathbb{A}} \end{lemma} \begin{defn}\label{cone of cube} The \emph{cone} of an $n$-cube \w{\mathbb{A}} is the $n$-cube \w{C\mathbb{A}} obtained by applying the cone functor to \w[,]{\mathbb{A}} with the $n$-cube inclusion \w[.]{\mathfrak{i}^{\mathbb{A}} : \mathbb{A} \to C\mathbb{A}} \end{defn} \begin{defn}\label{diagnull} Given an $n$-th order cubical Toda system \w[,]{(\Xs,\FFs)} for each \w{1 \leq m \leq n} and \w[,]{1 \leq j \leq n-m+3 } the corresponding \emph{diagrammatic nullhomotopy} is the \wwb{m-1}cube map \w{\FFF{m}{j}=\FFF{m}{j}(\Xs,\FFs) : C\mathbb{C}^{(m-1)} X_j \to \xm{m-1}{j+m+1} } defined by: $$ \FFF{m}{j}(\vec{a}) = \begin{cases} \Fm{m}{j} & \text{if} ~ \vec{a}=(1,\dots,1) \\ C^{\deg(\vec{a})-\operatorname{init}(\vec{a})+1} \Fm{\operatorname{init}(\vec{a})}{j} &\text{otherwise} \end{cases} $$ Thus for \w{m=1} we have the map of $0$-cubes \w[,]{\FFF{1}{j} = F_j} while \w{\FFF{3}{1} : C\mathbb{C}^{(2)} X_1 \to \xm{2}{5}} is given by: $$ \xymatrix@R=20pt@C=40pt { CX_1 \ar@{^{(}->}[d]_{} \ar@{^{(}->}[r]^{} \ar[rrd]^>>>>>>>>>>>>{Cf_1} & C^2 X_1 \ar@{^{(}->}[d]^{} \ar[rrd]^{C^2 f_1} & & \\ C^2 X_1 \ar@{^{(}->}[r]^{} \ar[rrd]_{CF_1} & C^3 X_1 \ar[rrd]_<<<<<<<<<<{\Fm{3}{1}} & CX_2 \ar[d]^{} \ar@{^{(}->}[r]_{C(i^{X_2})} & C^2 X_2 \ar[d]^{\Fm{2}{2}} \\ & & CX_3 \ar[r]_{F_3} & X_5 } $$ \end{defn} \begin{remark} Note that \w{\FFF{m}{j}(\Xs,\FFs)} is indeed a diagrammatic nullhomotopy, because the following diagram of $m$-cube maps commutes: \mycccdiag[\label{eqdiagnull}]{ C\mathbb{C}^{(m)}X_j \ar@/^{1.5pc}/[rrrrrrd]^{\FFF{m+1}{j}(\Xs,\FFs)} \\ \mathbb{C}^{(m)}X_j \ar@{_{(}->}[u]^{i^{\mathbb{C}^{(m)}X_j}}
\ar[rrr]^{\AAA{m}{j} (\Xs,\FFs)} &&& \Mm{m}{j+1}(\Xs,\FFs) \ar[rrr]^{\BBB{m}{j+1}(\Xs,\FFs)} &&& \xm{m}{m+j+2}(\Xs,\FFs) } \end{remark} \supsect{\protect{\ref{GTb}}.C}{Cubical Toda bracket in terms of the homotopy cofiber of cubes} We show how the cubical Toda bracket defined above can be described in terms of the homotopy cofibers of maps of cubes. \begin{defn}\label{partial} For each \w{1\leq j \leq n} and \w[,]{\varepsilon=0,1} we let \w{\pa{j}{\varepsilon}\In{n}{} } denote the full subcategory of \w{\In{n}{}} (isomorphic to \w[)]{\In{n-1}{}} consisting of sequences $\vec{a}$ with \w[.]{a\sb{j}=\varepsilon} \end{defn} \begin{defn}\label{pa} For any $n$-cube \w{\mathbb{A}} and \w[,]{1 \leq j \leq n } we denote by \w{ \pa{j}{\varepsilon} \mathbb{A}} the restriction of \w{\mathbb{A}} to \w{\pa{j}{\varepsilon}\In{n}{}} \wb[,]{\varepsilon=0,1} with the obvious \wwb{n-1}cube map \w[.]{\partial\sp{j}\mathbb{A} : \pa{j}{0} \mathbb{A} \to \pa{j}{1} \mathbb{A}} By functoriality any $n$-cube map \w{\mathfrak{F} : \mathbb{A} \to \mathbb{B}} can be thought of as a commuting diagram of \wwb{n-1}cube maps, denoted by \w[:]{\partial\sp{j}\mathfrak{F}} \begin{myeq}\label{diagrampa} \xymatrix@R=15pt { \pa{j}{0}\mathbb{A} \ar[d]_{\partial\sp{j}\mathbb{A}} \ar[rr]^{\pa{j}{0}\mathfrak{F}} && \pa{j}{0}\mathbb{B} \ar[d]^{\partial\sp{j}\mathbb{B}} \\ \pa{j}{1}\mathbb{A} \ar[rr]^{\pa{j}{1} \mathfrak{F}} && \pa{j}{1}\mathbb{B} } \end{myeq} \end{defn} \begin{lemma}\label{paC} \w{\partial^{m} \mathbb{C}^{(m)}X = i^{\mathbb{C}^{(m-1)}X} } \end{lemma} \begin{proof} See Definitions \ref{pa} and \ref{cone of cube}, exemplified in \wref[.]{eqc2x} \end{proof} \begin{defn}\label{Gen cof and hcof} Given an $m$-cube \w{\mathbb{A}} in $\mathcal{D}$, its \emph{cofiber} \w{\cf{m}{\mathbb{A}}} is the colimit of the diagram \w{\widetilde{R} \mathbb{A}:\Ln{m}{}\to\mathcal{D}} given by \w{\pa{m+1}{0}\widetilde{R} \mathbb{A} =\mathbb{A}} and \w{\widetilde{R} \mathbb{A}(\vec{a})=\ast} elsewhere. 
The \emph{homotopy cofiber} of \w{\mathbb{A}} is the colimit \w{\operatorname{hcof}^{(m)}(\mathbb{A})} of the diagram \w{R\mathbb{A}:\Ln{m}{} \to\mathcal{D}} obtained from \w{\widetilde{R}\mathbb{A}} by replacing the values $\ast$ with the corresponding cones. The natural map \w{\operatorname{hcof}^{(m)}(\mathbb{A}) \to \operatorname{cof}^{(m)}(\mathbb{A}) } is denoted by \w[.]{\zeta_{\mathbb{A}}} Thus if $\mathbb{A}$ is the following square: \mybbbdiag[\label{eqhcsq}]{ X \ar[rr]^{h} \ar[d]_{f} && Z \ar[d]^{g} \\ Y \ar[rr]_{k} && W } then \w{\widetilde{R} \mathbb{A}} is the left hand diagram and \w{R\mathbb{A}} is the right hand diagram in $$ \xymatrix@R=20pt { X \ar[d]^{f} \ar[r]^{h} \ar[rrd] & Z \ar[d]_{g} \ar[rrd] & & \\ Y \ar[r]^{k} \ar[rrd] & W & \ast \ar[d] \ar[r] & \ast \\ & & \ast & } \hspace*{10mm} \xymatrix@R=20pt { X \ar[d]^{f} \ar[r]^{h} \ar@{^{(}->}[rrd]^>>>>>>{i^X} & Z \ar[d]_{g} \ar@{^{(}->}[rrd]^{i^Z} & & \\ Y \ar[r]^{k} \ar@{^{(}->}[rrd]^>>>>>>>>{i^Y} & W & CX \ar[d]^{Cf} \ar[r]^{Ch} & CZ \\ & & CY & } $$ \end{defn} \begin{remark}\label{cofzero} If \w{f: X \to Y} is a map in \w[,]{\mathcal{D}} then \w[]{\operatorname{cof}^{(1)}(f)} is the usual cofiber of \w[,]{f} and similarly for the homotopy cofiber.
\end{remark} \begin{example}\label{cof(CX)} For any \w[,]{X\in\mathcal{D}} \w[]{\operatorname{cof}^{(m)}(\mathbb{C}^{(m)} X)= \ee{m}{X} } and \w[.]{\operatorname{hcof}^{(m)}(\mathbb{C}^{(m)} X)=L^m X} (see Definition \ref{CC^mX}) \end{example} \begin{lemma}\label{cof=cof} Applying \w{\operatorname{cof}^{(n-1)}} to the diagram \w{\partial^{j}\mathfrak{F}} of \wref{diagrampa} yields: \myrrrdiag[\label{coffdiag}]{ \operatorname{cof}^{(n-1)}(\pa{j}{0}\mathbb{A}) \ar[d]_{\operatorname{cof}^{(n-1)}(\partial^{j}\mathbb{A})} \ar[rr]^{\operatorname{cof}^{(n-1)}(\pa{j}{0}\mathfrak{F})} && \operatorname{cof}^{(n-1)}(\pa{j}{0}\mathbb{B}) \ar[d]^{\operatorname{cof}^{(n-1)}(\partial^{j}\mathbb{B})} \\ \operatorname{cof}^{(n-1)}(\pa{j}{1}\mathbb{A}) \ar[rr]^{\operatorname{cof}^{(n-1)}(\pa{j}{1} \mathfrak{F})} && \operatorname{cof}^{(n-1)}(\pa{j}{1}\mathbb{B}) } and \w{\operatorname{cof}^{(n)}(\mathfrak{F})} is just \w{cof^{(1)}} applied to the horizontal $1$-cube map in \wref[.]{coffdiag} \end{lemma} \begin{remark}\label{hcof=hcof} Similarly, for any strongly cofibrant $n$-cube \w{\mathbb{A}} the homotopy cofiber \w{\hcf{n}{\mathbb{A}}} is the colimit of: $$ \xymatrix@R=13pt { \hcf{n-1}{\pa{n}{0}\mathbb{A}} \ar[rrr]^{\hcf{n-1}{\partial^{n}\mathbb{A}}} \ar@{^{(}->}[d] &&& \hcf{n-1}{\pa{n}{1}\mathbb{A}} \\ C \mathbb{A}(1,\dots,1,0) } $$ For example, the homotopy cofiber of any strongly cofibrant square \wref{eqhcsq} is the colimit of \w[,]{ CY \hookleftarrow \operatorname{hcof}(f) \rightarrow \operatorname{hcof}(g) } where \w[]{\operatorname{hcof}(f)\hookrightarrow CY} is the induced map in $$ \xymatrix@R=15pt @C=10pt{ X \ar@{^{(}->}[d]_{i^X} \ar@{^{(}->}[r]^{f} & Y \ar@/^{1.5pc}/[ddr]^{i^Y} \ar[d] \\ CX \ar@/_{1.5pc}/[drr]_{Cf} \ar[r] & \operatorname{hcof}(f) \ar@{-->}[dr]^{} \\ & & CY } $$ \end{remark} \begin{prop}\label{gen hcolim=cof} If \w{\mathbb{A}} is a strongly cofibrant $n$-cube, the natural map \w{\zeta_{\mathbb{A}} : \operatorname{hcof}^{(n)}(\mathbb{A}) \to 
\operatorname{cof}^{(n)}(\mathbb{A})} is a weak equivalence. \end{prop} \begin{proof} By induction on \w[:]{n} we know it is true for \w[.]{n=1} If the statement holds for \w[,]{n=m-1} let \w{\mathbb{A}} be a strongly cofibrant $m$-cube; by hypothesis \w{\hcf{m-1}{\pa{m}{0}\mathbb{A}} \to \cf{m-1}{\pa{m}{0}\mathbb{A}}} and \w{\hcf{m-1}{\pa{m}{1}\mathbb{A}} \to \cf{m-1}{\pa{m}{1}\mathbb{A}}} are weak equivalences, so the following diagram: $$ \xymatrix@R=15pt @C=10pt { \hcf{m-1}{\pa{m}{0}\mathbb{A}} \ar[rrrrd]^{\simeq} \ar[rrr]^{\hcf{m-1}{\partial^{m}\mathbb{A}}} \ar@{^{(}->}[d] &&& \hcf{m-1}{\pa{m}{1}\mathbb{A}} \ar[rrrd]^{\simeq} \\ C \mathbb{A}(1,\dots,1,0) \ar[rrrrd] &&&& \cf{m-1}{\pa{m}{0}\mathbb{A}} \ar[rr]_{\cf{m-1}{\partial^{m}\mathbb{A}}} \ar[d] && \cf{m-1}{\pa{m}{1}\mathbb{A}} \\ &&&& \ast } $$ induces the required weak equivalence on pushouts (since the upper front map is a cofibration, as in Lemma \ref{cofibrant square induces cofibration}). \end{proof} \begin{corollary}\label{hcof(CX)} For any cofibrant \w[,]{X\in\mathcal{D}} the natural map \w{\zeta_{\mathbb{C}^{(m)}X}: L^m X \to \ee{m}{X} } is a weak equivalence. \end{corollary} \begin{lemma}\label{Important2} If \w{\mathbb{A}} is a strongly cofibrant $n$-cube, where \w{\mathbb{A}(\vec{a})\simeq \ast} for every \w[,]{\vec{a} \neq (1,\dots,1)} then the structure map \w{r^{\mathbb{A}}: \mathbb{A}(1,\dots,1) \to \cf{n}{\mathbb{A}}} is a weak equivalence. \end{lemma} \begin{remark}\label{hcoftoB} Assume given an $n$-cube map \w{\mathfrak{F} : \mathbb{A} \to \mathbb{B}} such that, for every \w[,]{\vec{a} \neq (1,\dots,1)} \w{\mathfrak{F}(\vec{a}):\mathbb{A}(\vec{a})\to\mathbb{B}(\vec{a})=C\mathbb{A}(\vec{a})} is the cone inclusion. We can extend \w{\mathfrak{F}} to a map \w{R\mathbb{A}\to \mathbb{Q}} of \ww{\Ln{n}{}}-diagrams, where \w{\pa{n+1}{0}\mathbb{Q}=\mathbb{B}} and all the new maps are identities.
Taking colimits yields a map \w[,]{\operatorname{hcof}'^{(n)}(\mathfrak{F}) : \operatorname{hcof}^{(n)}(\mathbb{A}) \to \mathbb{B}(1,\dots,1)} where \w{\mathbb{B}(1,\dots,1)} is a homotopy cofiber of the cube \w[.]{\mathbb{B}} Observe also that the map \w{\operatorname{hcof}'^{(n)}(\mathfrak{F}) } is obtained from the maps: \begin{itemize} \item \w{ R\mathbb{A}(\vec{r}_{k})=C\mathbb{A}(\vec{r}_{k})=\mathbb{B}(\vec{r}_{k}) \to \mathbb{B}(1,\dots,1) } for all \w{1 \leq k \leq n} \item \w{R\mathbb{A}(\vec{r}_{n+1})=\mathbb{A}(1,\dots,1)\stk{\mathfrak{F}(1,\dots,1)} \mathbb{B}(1,\dots,1) } \end{itemize} \end{remark} \begin{prop}\label{T=AcircB} Given an $n$-th order cubical Toda system \w[,]{(\Xs,\FFs)} we have: $$ \Tm{n}{1}(\Xs,\FFs) =\operatorname{hcof}'^{(n)}(\BBB{n}{2}) \circ \hcf{n}{\AAA{n}{1}} $$ \end{prop} \begin{proof} By Definition \ref{Gen defn 1 of 2} and Remark \ref{hcoftoB}, it suffices to show that $$ \Fm{n-(n+1)+1}{n+1+1}\circ C^{n+1-(n+1)}(\Fm{n+1-1}{1})~=~ \BBB{n}{2}(1,\dots,1) \circ R\AAA{n}{1}(\vec{r}_{n+1}) $$ and $$ \Fm{n-k+1}{k+1}\circ C^{n+1-k}(\Fm{k-1}{1})~=~ (\xm{n}{n+3}(\vec{r}_k) \to \xm{n}{n+3}(1,\dots,1)) \circ R\AAA{n}{1}(\vec{r}_k) \quad ,\quad \forall 1 \leq k \leq n~. $$ For the first equation note that \w{\Fm{0}{n+2}=f_{n+2}=\BBB{n}{2}(1,\dots,1)} and \w[]{C^0\Fm{n}{1}=\Fm{n}{1}= \AAA{n}{1}(\vec{r}_{n+1})=R\AAA{n}{1}(\vec{r}_{n+1})} (see Definitions \ref{AAA} and \ref{BBB}).
For the second, note that for \w{1 \leq k \leq n} by Definition \ref{xm} we have \w{\Fm{n-k+1}{k+1} =\xm{n}{n+3}(\vec{r}_{k}^{(n)}) \to \xm{n}{n+3}(1,\dots,1)} and \w[.]{ C^{n-k+1}F_1^{(k-1)}=C C^{n-k} \Fm{k-1}{1}= C (\AAA{n}{1}(\vec{r}_k^{(n)}))=R\AAA{n}{1}(\vec{r}_k^{(n+1)})} \end{proof} \begin{corollary}\label{ceqimpeqtoda} If \w{(\Xs,\FFs) \toss (\Xs',G_{\ast}) } then \w[.]{\Tm{n}{1}(\Xs,\FFs) \approx\Tm{n}{1}(\Xs',G_{\ast})} \end{corollary} \sect{Passage from the cubical to the recursive definitions} \label{DdTB} We now give a diagrammatic description of recursive Toda systems, and use this to describe the recursive Toda bracket in terms of (homotopy) cofibers of cubes. This will allow us to relate the two definitions of general Toda brackets. \supsect{\protect{\ref{DdTB}}.A}{Diagrammatic interpretation for the recursive system} We now show that every recursive Toda system is equivalent to one with a rectified final segment (as in \S \ref{OrdToda}.C), and use this to provide a diagrammatic description for recursive Toda systems. 
\begin{defn} Given an $n$-th order recursive Toda system \w[,]{(\Xs,\widetilde{F}_{\ast})} we define \w{\operatorname{Rec} (\Xs,\widetilde{F}_{\ast})} to be the following sequence: $$ \xymatrix@C=35pt{ X_1 \ar[r]^{f_1} & X_2 \ar[r]^{f_2} & X_3 \ar[r]^{ \bnn{1}{2} \circ r^{f_2}} & \operatorname{cof}(\Fn{1}{2}) \ar[r]^>>>>{\bnn{2}{2} \circ r^{\bnn{1}{2}}} &\dots \operatorname{cof}(\Fn{n-1}{2}) \ar[r]^>>>>{\bnn{n}{2} \circ r^{\bnn{n-1}{2}} } & \operatorname{cof}(\Fn{n}{2}) } $$ with nullhomotopies \w{\Gn{m}{1}:=\Fn{m}{1}} and \w{\Gn{m}{j}:=\ast} for all \w[.]{j \geq 2} \end{defn} \begin{lemma}\label{lrecred} The system \w{\operatorname{Rec}(\Xs,\widetilde{F}_{\ast})} is an $n$-th order recursive Toda system (in the extended sense of \S \ref{rredchcx}), and \w[.]{\Tn{n}{1} (\operatorname{Rec} (\Xs,\widetilde{F}_{\ast})) = \Tn{n}{1} (\Xs,\widetilde{F}_{\ast}) } \end{lemma} Note that even though \w{\operatorname{Rec}(\Xs,\widetilde{F}_{\ast})} is not a recursive higher Toda system in the strict sense of \S \ref{dredchcx}, we can deduce from the lemma that the corresponding Toda bracket \w{\Tn{n}{1} (\operatorname{Rec} (\Xs,\widetilde{F}_{\ast}))} is in fact homotopy meaningful (despite Remark \ref{rredchcx}). \begin{lemma}\label{lnullcomp} Given an $m$-th order recursive Toda system \w{(\Xs,\widetilde{F}_{\ast})} with \w[,]{m \geq 1} a map \w{\Fn{m+1}{1}} is a nullhomotopy for the composite \w{\bnn{m}{2} \circ \ann{m}{1}} if and only if \mydiagram[\label{eqnullcomp}]{ C\ee{m-1}{X_1} \ar[rrr]^{i^{\ee{m}{X_1}}\circ\, r^{i^{\ee{m-1}{X_1}}}} \ar[d]_{\Fn{m}{1}} &&& C\ee{m}{X_1} \ar[d]^{\Fn{m+1}{1}}\\ \operatorname{cof}(\Fn{m-1}{2}) \ar[rrr]_{\bnn{m}{2} \circ\, r^{\bnn{m-1}{2}}} &&& \operatorname{cof}(\Fn{m}{2}) } commutes.
\end{lemma} \begin{proof} We have the following diagram of horizontal cofibration sequences: $$ \xymatrix@R=20pt { \ee{m-1}{X_1} \ar@{^{(}->}[rr]^{i^{\ee{m-1}{X_1}}} \ar@{^{(}->}[d]_{\ann{m-1}{1}} && C \ee{m-1}{X_1} \ar@{^{(}->}[d]^{\Fn{m}{1}} \ar[rr]^{ r^{i^{\ee{m-1}{X_1}}}} && \ee{m}{X_1} \ar[d]^{\ann{m}{1}} \\ \operatorname{cof}(\bnn{m-2}{2}) \ar@{^{(}->}[rr]^{\bnn{m-1}{2}} && \operatorname{cof}(\Fn{m-1}{2}) \ar[rr]^{r^{\bnn{m-1}{2}}} && \operatorname{cof}(\bnn{m-1}{2}) } $$ The right hand square, together with the assumption that \w{\Fn{m+1}{1}} is a nullhomotopy for \w[,]{\bnn{m}{2} \circ \ann{m}{1}} yields: $$ \bnn{m}{2} \circ r^{\bnn{m-1}{2}} \circ \Fn{m}{1}~=~\bnn{m}{2} \circ \ann{m}{1} \circ r^{i^{\ee{m-1}{X_1}}}~=~ \Fn{m+1}{1} \circ i^{\ee{m}{X_1}} \circ r^{i^{\ee{m-1}{X_1}}}~ $$ as in \wref[.]{eqnullcomp} Conversely, if \wref{eqnullcomp} commutes, again using the same right hand square, we get \w[.]{\bnn{m}{2} \circ \ann{m}{1} \circ r^{i^{\ee{m-1}{X_1}}}= \bnn{m}{2} \circ r^{\bnn{m-1}{2}} \circ \Fn{m}{1} =\Fn{m+1}{1} \circ i^{\ee{m}{X_1}} \circ r^{i^{\ee{m-1}{X_1}}}} Since \w{r^{i^{\ee{m-1}{X_1}}}} is epic, \w[.]{\bnn{m}{2} \circ \ann{m}{1} = \Fn{m+1}{1} \circ i^{\ee{m}{X_1}}} \end{proof} \begin{defn}\label{CCnX} For any \w[,]{X\in\mathcal{D}} the $n$-cube \w{\CCn{n}{X}} in $\mathcal{D}$ is defined by: $$ \CCn{n}{X}(\vec{a}) := \begin{cases} C\ee{-1}{X}:=X & \text{if} ~ \vec{a}=(0,\dots,0) \\ C\ee{\deg(\vec{a})-1}{X} & \text{if}~ \deg(\vec{a})=\operatorname{init}(\vec{a}) \\ \ast & \text{otherwise} \end{cases} $$ The only nonzero maps are \w[,]{i^{\ee{m}{X}}\circ\, r^{i^{\ee{m-1}{X}}}:C\ee{m-1}{X}\to C\ee{m}{X}} where \w{m=\deg(\vec{a})} (see Definition \ref{dcubicals}).
Thus the $3$-cube \w{\CCn{3}{X}} is $$ \xymatrix@R=20pt@C=40pt { X \ar@{^{(}->}[d]^{i^{X}} \ar[r] \ar[rrd] & \ast \ar[d] \ar[rrd] & & \\ CX \ar[r]^{i^{\widetilde{\Sigma} X}\circ r^{i^{X}}} \ar[rrd] & C\widetilde{\Sigma} X \ar[rrd]^>>>>>>>>{{i^{\ee{2}{X}}\circ r^{i^{\widetilde{\Sigma} X}}}} & \ast \ar[r] \ar[d] & \ast \ar[d] \\ & & \ast \ar[r] & C \ee{2}{X} } $$ \end{defn} \begin{defn}\label{Mn} Given an $n$-th order recursive Toda system \w[,]{(\Xs,\widetilde{F}_{\ast})} we define an $n$-cube \w{\Mn{n}{2}=\Mn{n}{2}(\Xs,\widetilde{F}_{\ast})} in $\mathcal{D}$ by: $$ \Mn{n}{2}(\vec{a}):=\begin{cases} X_2 & \text{if} \ \vec{a}=(0,\dots,0)\\ X_3 & \text{if}\ \vec{a}=(1,0,\dots,0)\\ \operatorname{cof}(\Fn{k-1}{2}) & \text{if}\ \deg{(\vec{a})}=\operatorname{init}{(\vec{a})}=k \geq 2\\ \ast & \text{otherwise} \end{cases} $$ Again, the only nonzero maps are \w[,]{\bnn{k}{2} \circ r^{\bnn{k-1}{2}}:\Mn{n}{2}(\vec{a})\to \Mn{n}{2}(\vec{b})} when \w{\deg(\vec{a})=\operatorname{init}(\vec{a})=k} and \w[,]{\deg(\vec{b})=\operatorname{init}(\vec{b})=k+1} with \w{\bnn{0}{2}:=f_2} and \w[.]{r^{\bnn{-1}{2}}:=\operatorname{Id}\sb{X_{2}}} Thus the $3$-cube \w{\Mn{3}{2}(\Xs,\widetilde{F}_{\ast})} is the following: $$ \xymatrix@R =20pt@C=30pt { X_2 \ar[rrd] \ar[d]_{f_{2}} \ar[r] & \ast \ar[rrd] \ar[d]^<<<<{} \\ X_3 \ar[rrd] \ar[r]^{\bnn{1}{2} \circ r^{f_2}} & \operatorname{cof}(\Fn{1}{2}) \ar[rrd]^{\bnn{2}{2} \circ r^{\bnn{1}{2}}} & \ast \ar[d] \ar[r] & \ast \ar[d] \\ && \ast \ar[r] & \operatorname{cof}(\Fn{2}{2})} $$ \end{defn} \begin{defn}\label{AAAn} The $n$-cube map \w{\AAAn{n}{1}=\AAAn{n}{1}(\Xs,\widetilde{F}_{\ast}) : \CCn{n}{X_1} \to \Mn{n}{2} } is defined by: $$ \AAAn{n}{1}(\vec{a}) = \begin{cases} \Fn{k}{1} & \text{if} ~ \deg(\vec{a})=\operatorname{init}(\vec{a})=k \\ \ast & \text{otherwise} \end{cases} $$ where \w[.]{\Fn{0}{1}=f_1} Lemma \ref{lnullcomp} implies that \w{\AAAn{n}{1}(\Xs,\widetilde{F}_{\ast})} is well defined.
Thus the $2$-cube map \w{\AAAn{2}{1} (\Xs,\widetilde{F}_{\ast}):\CCn{2}{X_1}\to\Mn{2}{2}} is the diagonal map in: \mydiagram[\label{eqtwoone}]{ X_1 \ar@{^{(}->}[d] \ar[r] \ar[rrd]^>>>>>>>>>>{\Fn{0}{1}=f_1} & \ast \ar@{^{(}->}[d] \ar[rrd]^{} & & \\ CX_1 \ar[r]^{} \ar[rrd]_{\Fn{1}{1}} & C \ee{}{X_1} \ar[rrd]_<<<<<<<{\Fn{2}{1}} & X_2 \ar[d]^<<<{f_{2}} \ar[r] & \ast \ar[d]^<<<<{} \\ & & X_3 \ar[r]_{\bnn{1}{2} \circ r^{f_2}} & \operatorname{cof}(\Fn{1}{2}) } \end{defn} \begin{defn}\label{Fin} Given \w[,]{X \in \mathcal{D}} we define \w{\Fin{n}{X} } to be the $n$-cube with \w{X} in the \wwb{1,\dots,1}slot and $\ast$ elsewhere. This is functorial in $X$. Now given an $n$-th order recursive Toda system \w[,]{(\Xs,\widetilde{F}_{\ast})} we define the $n$-cube \w{\xn{n}{n+3}=\xn{n}{n+3}(\Xs,\widetilde{F}_{\ast})} in $\mathcal{D}$ to be \w[.]{\Fin{n}{\operatorname{cof}(\Fn{n}{2})}} \end{defn} \begin{defn}\label{BBBn} Given an $n$-th order recursive Toda system \w[]{(\Xs,\widetilde{F}_{\ast})} in $\mathcal{D}$, the $n$-cube map \w{\BBBn{n}{2}=\BBBn{n}{2}(\Xs,\widetilde{F}_{\ast}) : \Mn{n}{2} \to \xn{n}{n+3}} is defined by \w{\BBBn{n}{2}(1,\dotsc,1) =\bnn{n}{2} \circ r^{\bnn{n-1}{2}}} (with all other vertices mapping to $\ast$). \end{defn} \begin{summary}\label{nstwocubes} Just as the data of a higher order cubical Toda system is encoded in the sequence of $n$-cube maps \wref[,]{eqtwocubes} so for any $n$-th order recursive Toda system \w[,]{(\Xs,\widetilde{F}_{\ast})} the data for \w{\operatorname{Rec}(\Xs,\widetilde{F}_{\ast})} is encoded by: \mydiagram[\label{neqtwocubes}]{ \CCn{n}{X_1} \ar[rrr]^{\AAAn{n}{1} (\Xs,\widetilde{F}_{\ast})} &&& \Mn{n}{2}(\Xs,\widetilde{F}_{\ast}) \ar[rrr]^{\BBBn{n}{2}(\Xs,\widetilde{F}_{\ast})} &&& \xn{n}{n+3}(\Xs,\widetilde{F}_{\ast}) } Thus for a second-order recursive Toda system \w{(\Xs,\widetilde{F}_{\ast})^{(2)}} of length $4$, \w{\operatorname{Rec}(\Xs,\widetilde{F}_{\ast})} is encoded by: $$ \xymatrix@R =25pt@C=30pt { X_1 \ar@{^{(}->}[d] \ar[r] \ar[rrd]^>>>>>>>>>>{f_1} & \ast
\ar[d] \ar[rrd]^{} & & \\ CX_1 \ar[r]^{} \ar[rrd]_{\Fn{1}{1}} & C \ee{}{X_1} \ar[rrd]_<<<<<<<{\Fn{2}{1}} & X_2 \ar[rrd] \ar[d]^<<<{f_{2}} \ar[r] & \ast \ar[rrd] \ar[d]^<<<<{} \\ & & X_3 \ar[rrd] \ar[r]^>>>>>>>{\bnn{1}{2} \circ r^{f_2}} & \operatorname{cof}(\Fn{1}{2}) \ar[rrd]^{\bnn{2}{2} \circ r^{\bnn{1}{2}}} & \ast \ar[d] \ar[r] & \ast \ar[d] \\ && & & \ast \ar[r] & \operatorname{cof}(\Fn{2}{2})} $$ \end{summary} \begin{defn}\label{dtosr} Two $n$-th order recursive Toda systems \w{(\Xs,\widetilde{F}_{\ast})} and \w{(X'_{\ast},\widetilde{G}_{\ast})} are \emph{equivalent} (written \w[)]{(\Xs,\widetilde{F}_{\ast}) \tosr (X'_{\ast},\widetilde{G}_{\ast})} if the sequences \mydiagram[\label{eqtosra}]{ \CCn{n}{X_1} \ar[rrr]^{\AAAn{n}{1} (\Xs,\widetilde{F}_{\ast})} &&& \Mn{n}{2}(\Xs,\widetilde{F}_{\ast}) \ar[rrr]^{\BBBn{n}{2}(\Xs,\widetilde{F}_{\ast})} &&& \xn{n}{n+3}(\Xs,\widetilde{F}_{\ast}) } and \mydiagram[\label{eqtosrb}]{ \CCn{n}{X'_1} \ar[rrr]^{\AAAn{n}{1} (X'_{\ast},\widetilde{G}_{\ast})} &&& \Mn{n}{2}(X'_{\ast},\widetilde{G}_{\ast}) \ar[rrr]^{\BBBn{n}{2}(X'_{\ast},\widetilde{G}_{\ast})} &&& \xn{n}{n+3}(X'_{\ast},\widetilde{G}_{\ast}) } are equivalent under the relation \w{\approx} of \S \ref{rprojmodst}. We will show later that \ww{\tosr}-equivalent higher order recursive Toda systems have $\approx$-equivalent recursive Toda brackets. \end{defn} \begin{lemma}\label{paCCnX} For any \w[,]{X \in \mathcal{D}} \w{\pa{n}{0}\CCn{n}{X}=\CCn{n-1}{X}} and \w{\pa{n}{1}\CCn{n}{X}=\Fin{n-1}{C \ee{n-1}{X}}} and \w{\operatorname{cof}^{(n-1)} (\partial^{n} \CCn{n}{X})= i^{\ee{n-1}{X}} : \ee{n-1}{X} \to C\ee{n-1}{X} } (see Definitions \ref{Gen cof and hcof} and \ref{pa}). 
\end{lemma} \begin{lemma}\label{npa} Given an $n$-th order recursive Toda system \w[]{(\Xs,\widetilde{F}_{\ast})} in $\mathcal{D}$, then \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})~} \item \w{\pa{n}{0}\AAAn{n}{1}(\Xs,\widetilde{F}_{\ast}) = \AAAn{n-1}{1}(\Xs,\widetilde{F}_{\ast})} (Definitions \ref{pa} and \ref{AAAn}) \item \w{\partial^n \Mn{n}{2}(\Xs,\widetilde{F}_{\ast}) = \BBBn{n-1}{2}(\Xs,\widetilde{F}_{\ast})} (Definitions \ref{Mn} and \ref{BBBn}) \item \w{ \pa{n}{1}\AAAn{n}{1} = \Fin{n-1}{\Fn{n}{1}}: \Fin{n-1}{C\ee{n-1}{X_1}} \to \Fin{n-1}{\operatorname{cof}(\Fn{n-1}{2})} } (Definition \ref{Fin}) \end{enumerate} \end{lemma} \begin{prop}\label{nToda=cofb circ cofa} For any $n$-th order recursive Toda system \w[]{(\Xs,\widetilde{F}_{\ast})} in $\mathcal{D}$: $$ \Tn{n}{1}(\Xs,\widetilde{F}_{\ast})= \operatorname{cof}^{(n)}(\BBBn{n}{2}) \circ \cf{n}{\AAAn{n}{1}} $$ see Definitions \wref[,]{AAAn} \wref{BBBn} and \wref[.]{Gen defn 2 of 2} \end{prop} \begin{proof} We have to show that: \begin{myeq}\label{eqindstep} \operatorname{cof}^{(n)}(\AAAn{n}{1}(\Xs,\widetilde{F}_{\ast})) = \ann{n}{1}(\Xs,\widetilde{F}_{\ast}) \ \ \text{and} \ \ \operatorname{cof}^{(n)}(\BBBn{n}{2}(\Xs,\widetilde{F}_{\ast}))=\bnn{n}{2}(\Xs,\widetilde{F}_{\ast}) \end{myeq} By Lemma \ref{cof=cof}, applying \w{\operatorname{cof}^{(n)}} to the map of $n$-cubes \w{\AAAn{n}{1} : \CCn{n}{X_1} \to \Mn{n}{2} } is equivalent to applying \w{\operatorname{cof}^{(1)}} to the following diagram (i.e., horizontal map) of vertical $1$-cubes: \mydiagram[\label{aaabsquare}]{ \operatorname{cof}^{(n-1)}(\pa{n}{0} \CCn{n}{X_1}) \ar[d]_{\operatorname{cof}^{(n-1)}(\partial^{n}\CCn{n}{X_1})} \ar[rrr]^{\operatorname{cof}^{(n-1)}(\pa{n}{0}\AAAn{n}{1})} &&& \operatorname{cof}^{(n-1)}(\pa{n}{0}\Mn{n}{2}) \ar[d]^{\operatorname{cof}^{(n-1)}(\partial^{n}\Mn{n}{2})} \\ \operatorname{cof}^{(n-1)}(\pa{n}{1}\CCn{n}{X_1}) \ar[rrr]^{\operatorname{cof}^{(n-1)}(\pa{n}{1}\AAAn{n}{1})} &&&
\operatorname{cof}^{(n-1)}(\pa{n}{1}\Mn{n}{2}) } By Lemmas \ref{paCCnX} and \ref{npa}, diagram \wref{aaabsquare} equals: \mydiagram[\label{aaaabsquare}]{ \ee{n-1}{X_1} \ar@{^{(}->}[d]_{i^{\ee{n-1}{X_1}}} \ar[rrr]^{\operatorname{cof}^{(n-1)}(\AAAn{n-1}{1})} &&& \operatorname{cof}^{(n-1)}(\Mn{n-1}{2}) \ar[d]^{\operatorname{cof}^{(n-1)}(\BBBn{n-1}{2})} \\ C\ee{n-1}{X_1} \ar[rrr]^{\Fn{n}{1}} &&& \operatorname{cof}(\Fn{n-1}{2}) } so the result follows by induction on \wref[.]{eqindstep} The result for \w{\BBBn{n}{2}} is shown analogously. \end{proof} \begin{corollary}\label{Tn=Tn} Given two higher order recursive Toda systems \w[,]{(\Xs,\widetilde{F}_{\ast}) \tosr (X'_{\ast},\widetilde{G}_{\ast})} we have \w[.]{\Tn{n}{1}(\Xs,\widetilde{F}_{\ast}) \approx \Tn{n}{1}(X'_{\ast},\widetilde{G}_{\ast})} \end{corollary} Here we use the fact that a zigzag of weak equivalences between \w{\CCn{n}{X_1}} and \w{\CCn{n}{X'_1}} induces a zigzag of weak equivalences on their (strict) cofibers from \w[] {\ee{n}{X_1}} to \w[]{\ee{n}{X'_1}} (see Definition \ref{CCnX}). \supsect{\protect{\ref{DdTB}}.B}{Passage from the cubical to the recursive definition} We now prove the first direction in showing that cubical and recursive definitions of the Toda brackets agree, using the diagrammatic description to show how a cubical Toda bracket is $\approx$-equivalent to the recursive version. 
\begin{defn}\label{dva} Given an $n$-cube \w{\mathbb{A}} in $\mathcal{D}$, we define an $n$-cube \w{\VV{\mathbb{A}}} with a natural transformation \w{\vartha{\mathbb{A}}:\mathbb{A}\to\VV{\mathbb{A}}} by induction on \w{n\geq 1} as follows: For \w[,]{n=1} \w{\vartha{\mathbb{A}}} is \w[.]{\operatorname{Id}:\mathbb{A}\to\mathbb{A}} For \w{n\geq 2} we set \w{\pa{n}{0}\VV{\mathbb{A}}:=\VV{\pa{n}{0}\mathbb{A}}} and define the \wwb{n-1}cube \w{\pa{n}{1}\VV{\mathbb{A}}:=\Fin{n-1}{\cf{n-1}{\pa{n}{1}\mathbb{A}}}} (see Definitions \ref{pa} and \ref{Fin}), where the map \w{\VV{\mathbb{A}}(1,\dots,1,0) \to \VV{\mathbb{A}}(1,\dots,1,1)} is \w{ \cf{n-1}{\partial^{n}\mathbb{A}} \circ r^{\cf{n-2}{\partial^{n-1}\pa{n}{0}\mathbb{A}}} } and \w{\vartha{\mathbb{A}}} at the final vertex is the structure map \w{r\sp{\pa{n}{1}\mathbb{A}}} (see Definition \ref{Gen cof and hcof} and Remark \ref{cofzero}). \end{defn} \begin{example} If \w{\mathbb{A}} is the back square in the following, then \w{\vartha{\mathbb{A}}:\mathbb{A}\to\VV{\mathbb{A}}} is: $$ \xymatrix@R=20pt { X \ar[d]^{f} \ar[r]^{h} \ar@{=}[rrd] & Z \ar[d]_{g} \ar[rrd] & & \\ Y \ar[r]^{k} \ar@{=}[rrd] & V \ar[rrd]^>>>>>>>{r^g} & X \ar[d]_>>>{f} \ar[r] & \ast \ar[d] \\ & & Y \ar[r]_{r^g \circ k} & \operatorname{cof}(g) } $$ (Observe that \w[).]{r^g \circ k =(\operatorname{cof}(f) \to \operatorname{cof}(g))\circ r^f } \end{example} \begin{example}\label{egva} For any \w[,]{X\in\mathcal{D}} we have \w{\VV{\mathbb{C}^{(n)}X}= \CCn{n}{X}} (see Definitions \ref{CC^mX} and \ref{CCnX}). 
\end{example} \begin{lemma}\label{lcfw=cf} Given an $n$-cube \w[,]{\mathbb{A}} then \w[.]{\cf{n}{\vartha{\mathbb{A}}}= \operatorname{Id}_{\cf{n}{\mathbb{A}}}} In particular \w[.]{\cf{n}{\VV{\mathbb{A}}}=\cf{n}{\mathbb{A}}} \end{lemma} \begin{lemma}\label{lnatwe} If \w{\mathbb{A}} is a strongly cofibrant $n$-cube in $\mathcal{D}$ and \w{\mathbb{A}(\vec{a})\simeq\ast} whenever \w{\deg(\vec{a})\neq\operatorname{init}(\vec{a})} (Definition \ref{dcubicals}), then \w{\vartha{\mathbb{A}}} is a weak equivalence. \end{lemma} \begin{lemma}\label{Important} Given a map of $n$-cubes \w[,]{\mathbb{F} :\mathbb{A} \to \mathbb{B}} we have a commutative diagram: $$ \xymatrix@R=15pt @C=55pt { \mathbb{A} \ar[rr]^{\mathbb{F}} \ar[d]_>>>>{\vartha{\mathbb{A}}} && \mathbb{B} \ar[d]^>>>{(r^{\mathbb{B}})_{\ast}} \\ W(\mathbb{A}) \ar[rr]_<<<<<<<<<<<<<<<<<<{(\cf{n}{\mathbb{F}} \circ r^{\cf{n-1}{\partial^{n}\mathbb{A}}} )_{\ast}} && \Fin{n}{\cf{n}{\mathbb{B}}} } $$ (in the notation of Lemma \ref{Important2}). \end{lemma} \begin{proof} If we think of the $n$-cube map \w{\mathbb{F}} as an $(n+1)$-cube, then the bottom map is \w[,]{\VV{\mathbb{F}}} and the vertical map is \w[.]{\vartha{\mathbb{F}}} \end{proof} \begin{lemma}\label{paM} Assume given a higher order cubical Toda system \w{(\Xs,\FFs)} in $\mathcal{D}$. \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})~} \item \w{\partial\sp{1} \Mm{m}{j}(\Xs,\FFs) = \AAA{m-1}{j}(\Xs,\FFs) } (Definitions \ref{pa}, \ref{AAA} and \ref{Mm}) \item \w{\partial\sp{m}\Mm{m}{j}(\Xs,\FFs) = \BBB{m-1}{j}(\Xs,\FFs) } (Definition \ref{BBB}). \item \w{\pa{m}{0}\AAA{m}{j}(\Xs,\FFs) = \AAA{m-1}{j} (\Xs,\FFs) } and \w[.]{\pa{m}{1}\AAA{m}{j}(\Xs,\FFs) = \FFF{m}{j}(\Xs,\FFs) } \item \w{\pa{1}{0} \BBB{m}{j}(\Xs,\FFs) = \mathfrak{i}^{\mathbb{C}^{m-1}X_j} } and \w[.]{\pa{1}{1} \BBB{m}{j}(\Xs,\FFs) = \BBB{m-1}{j+1}(\Xs,\FFs) } \item \w{\partial\sp{1}\xm{m}{j+m+1}(\Xs,\FFs) = \FFF{m}{j}(\Xs,\FFs)} (Definitions \ref{xm} and \ref{diagnull}). 
\end{enumerate} \end{lemma} \begin{defn}\label{indchcx} Given an $n$-th order cubical Toda system \w{(\Xs,\FFs)} in $\mathcal{D}$ of length \w[,]{n+2} for each \w{1 \leq m \leq n} and \w[,]{1 \leq j \leq n-m+2 } we write \w{\Fn{m}{j}:=\operatorname{cof}^{(m-1)}{(\FFF{m}{j}(\Xs,\FFs) ) }} (see \S \ref{diagnull}), and denote this system by \w[.]{V(\Xs,\FFs):=(\Xs,\widetilde{F}_{\ast})} \end{defn} \begin{prop}\label{prop1} Given a strongly cofibrant $n$-th order cubical Toda system \w{(\Xs,\FFs)} in $\mathcal{D}$ of length \w[,]{n+2} \w{V(\Xs,\FFs)} constitutes an $n$-th order recursive Toda system (see Definition \ref{dredchcx}), satisfying $$ \operatorname{cof}^{(m)}(\AAA{m}{j}(\Xs,\FFs)) = \ann{m}{j}(V(\Xs,\FFs))\hspace{5mm}\text{and} \ \ \operatorname{cof}^{(m)}(\BBB{m}{j}(\Xs,\FFs))=\bnn{m}{j}(V(\Xs,\FFs)) $$ as in Definition \ref{Gen defn 2 of 2}. \end{prop} \begin{proof} Because \w{(\Xs,\FFs)} is strongly cofibrant, by Lemma \ref{paM} and Remark \ref{rtwocubes} all the squares appearing in the construction of \w{\ann{m}{j}(V(\Xs,\FFs))} and \w{\bnn{m}{j}(V(\Xs,\FFs))} in \S \ref{Gen defn 2 of 2} are strongly cofibrant. We prove the proposition by induction on $m$: If \w[,]{m=1} the cofiber of the $1$-cube map \w[]{\AAA{1}{j}:(i^{X_{j}})\to(f_{j+1})} is \w[,]{\widetilde{\alpha}(f_j,f_{j+1},F_j)} and the cofiber of the $1$-cube map \w[]{\BBB{1}{j}:(f_{j})\to(F_{j})} is \w[,]{\widetilde{\beta}(f_j,f_{j+1},F_j)} as in \wref[.]{eqalphat} Here \w{\FFF{1}{j}} is \w[,]{F_j} so \w{ {\widetilde{F}}^{(1)}_j ={\operatorname{cof}}^{(0)}(F_j)=F_j } and thus: $$ \operatorname{cof}^{(1)}(\AAA{1}{j}) = \widetilde{\alpha}(f_j,f_{j+1}, {\widetilde{F}}^{(1)}_j )= \ann{1}{j}\hspace{4mm}\text{and}\ \ \operatorname{cof}^{(1)}(\BBB{1}{j}) =\widetilde{\beta}(f_j,f_{j+1}, {\widetilde{F}}^{(1)}_j )= \bnn{1}{j}~. 
$$ Now suppose the statement is true for \w[.]{m-1} By Proposition \ref{cof=cof}, \w{\operatorname{cof}^{(m)}} of \w{\AAA{m}{j} : \mathbb{C}^{(m)}X_j \to \Mm{m}{j+1} } is the result of applying \w{\operatorname{cof}^{(1)}} to the following horizontal map of vertical $1$-cubes: \mydiagram[\label{absquare}]{ \operatorname{cof}^{(m-1)}(\pa{m}{0} \mathbb{C}^{(m)}X_j) \ar[d]_{\operatorname{cof}^{(m-1)}(\partial^{m}\mathbb{C}^{(m)}X_j)} \ar[rr]^{\operatorname{cof}^{(m-1)}(\pa{m}{0}\AAA{m}{j})} && \operatorname{cof}^{(m-1)}(\pa{m}{0}\Mm{m}{j+1}) \ar[d]^{\operatorname{cof}^{(m-1)}(\partial^{m}\Mm{m}{j+1})} \\ \operatorname{cof}^{(m-1)}(\pa{m}{1}\mathbb{C}^{(m)}X_j) \ar[rr]^{\operatorname{cof}^{(m-1)}(\pa{m}{1}\AAA{m}{j})} && \operatorname{cof}^{(m-1)}(\pa{m}{1}\Mm{m}{j+1}) } By Lemmas \ref{paC} and \ref{paM}, diagram \wref{absquare} may be rewritten in the form: \mydiagram[\label{abfmsquare}]{ \operatorname{cof}^{(m-1)}(\mathbb{C}^{(m-1)}X_j) \ar[d]_{\operatorname{cof}^{(m-1)}(\mathfrak{i}^{\mathbb{C}^{(m-1)}X_j } )} \ar[rr]^{\operatorname{cof}^{(m-1)}(\AAA{m-1}{j})} && \operatorname{cof}^{(m-1)}(\Mm{m-1}{j+1}) \ar[d]^{\operatorname{cof}^{(m-1)}(\BBB{m-1}{j+1})} \\ \operatorname{cof}^{(m-1)}(C\mathbb{C}^{(m-1)}X_j) \ar[rr]^{\operatorname{cof}^{(m-1)}(\FFF{m}{j})} && \operatorname{cof}^{(m-1)}(\xm{m-1}{j+m+1}) } By Example \ref{cof(CX)} and the induction hypothesis we get: $$ \xymatrix@R=25pt { \widetilde{\Sigma}^{(m-1)}X_j \ar[d]_{i^{\widetilde{\Sigma}^{(m-1)}X_j}} \ar[rr]^{\ann{m-1}{j}} && \operatorname{cof}^{(m-1)}(\bnn{m-2}{j+1}) \ar[d]^{\bnn{m-1}{j+1}} \\ C\widetilde{\Sigma}^{(m-1)}X_j \ar[rr]^{{\widetilde{F}}^{(m)}_j} && \operatorname{cof}^{(m-1)}({\widetilde{F}}^{(m-1)}_{j+1}) \\ } $$ (where the right vertical map is obtained by commuting cones with cofibers). 
Thinking of this as a horizontal map of vertical $1$-cubes (as above), we may identify it with \w{\ann{m}{j}(\Xs,\widetilde{F}_{\ast})} (see \wref[).]{eqdefalpha} A similar argument shows that \w[.]{\operatorname{cof}^{(m)}(\BBB{m}{j}(\Xs,\FFs))=\bnn{m}{j}(\Xs,\widetilde{F}_{\ast})} \end{proof} \begin{corollary} Under the assumptions of Proposition \ref{prop1} we have $$ \Tn{n}{1}(V(\Xs,\FFs))~=~ \operatorname{cof}^{(n)}(\BBB{n}{2}(\Xs,\FFs)) \circ \operatorname{cof}^{(n)}(\AAA{n}{1}(\Xs,\FFs)) $$ \end{corollary} \begin{prop}\label{XF to RecVXF} Given a strongly cofibrant $n$-th order cubical Toda system \w{(\Xs,\FFs)} in $\mathcal{D}$, we have a natural commuting diagram $$ \xymatrix@R=25pt { \mathbb{C}^{(n)}X_1 \ar[d]_{\vartha{\mathbb{C}^{(n)}X_1}}^{\simeq} \ar[rrr]^{\AAA{n}{1} (\Xs,\FFs)} &&& \Mm{n}{2}(\Xs,\FFs) \ar[d]^{\vartha{\Mm{n}{2}}}_{\simeq} \ar[rrr]^{\BBB{n}{2}(\Xs,\FFs)} &&& \xm{n}{n+3}(\Xs,\FFs) \ar[d]_{\simeq}^{(r^{\xm{n}{n+3}})_\ast} \\ \CCn{n}{X_1} \ar[rrr]^>>>>>>>>>>>>>{\AAAn{n}{1} (V(\Xs,\FFs))} &&& \Mn{n}{2}(V(\Xs,\FFs)) \ar[rrr]^{\BBBn{n}{2}(V(\Xs,\FFs))} &&& \xn{n}{n+3}(V(\Xs,\FFs)) } $$ with vertical weak equivalences. \end{prop} \begin{proof} Note that the cube map \w[]{\VV{\AAA{n}{1} (\Xs,\FFs)}: \VV{\mathbb{C}^{(n)} X_1} \to \VV{\Mm{n}{2}(\Xs,\FFs)}} is equal to \w[,]{\AAAn{n}{1} (V(\Xs,\FFs))} so the left square commutes by naturality of $\vartha{}$ (Definition \ref{dva}), and the two left vertical maps are weak equivalences by Lemma \ref{lnatwe}. In addition \w{\BBBn{n}{2}(V(\Xs,\FFs)) = (\bnn{n}{2}(V(\Xs,\FFs)) \circ r^{\bnn{n-1}{2}(V(\Xs,\FFs))})_{\ast}} by Definition \ref{BBBn}, and it is equal to \w{(\cf{n}{\BBB{n}{2}(\Xs,\FFs)}\circ r^{\cf{n-1}{\BBB{n-1}{2}(\Xs,\FFs)}})_{\ast} } by Proposition \ref{prop1}. 
By Lemma \ref{paM}(b) we can replace \w{\BBB{n-1}{2}(\Xs,\FFs)} by \w[,]{\partial\sp{n}\Mm{n}{2}(\Xs,\FFs)} so the right square commutes according to Lemma \ref{Important}, and by Lemma \ref{Important2} the right vertical map is a weak equivalence. \end{proof} \begin{corollary} If \w{(\Xs,\FFs)} and \w{(\Xs',G_{\ast})} are strongly cofibrant $n$-th order cubical Toda systems in $\mathcal{D}$ and \w[,]{(\Xs,\FFs) \toss (\Xs',G_{\ast})} then \w[.]{V(\Xs,\FFs) \tosr V(\Xs',G_{\ast})} \end{corollary} We may summarize the results of this section in the following generalization of Proposition \ref{def 1 = def 2 of 2}: \begin{thm}\label{def1=def2} Given a strongly cofibrant $n$-th order cubical Toda system\w{(\Xs,\FFs)} in $\mathcal{D}$, we have a commutative diagram: $$ \xymatrix@R=18pt { L^n X_1 \ar[rrr]^{\Tm{n}{1}(\Xs,\FFs)} \ar[d]_{\zeta_{\mathbb{C}^{n}X_1}}^{\simeq} &&& X_{n+3} \ar[d]^>>>>{r^{\xm{n}{n+3}}}_{\simeq} \\ {\widetilde{\Sigma}}^{n} X_1 \ar[rrr]_{\Tn{n}{1}(V(\Xs,\FFs)) } &&& \cf{n}{\xm{n}{n+3}} } $$ \end{thm} See Definitions \ref{Gen defn 1 of 2}, \ref{dredchcx}, and \ref{indchcx}. \begin{proof} Applying Proposition \ref{gen hcolim=cof} and Lemma \ref{Important2} to \wref{eqtwocubes} yields the commuting diagram: \myssdg[\label{ababab}]{ \hcf{n}{\mathbb{C}^{(n)}X_1} \ar[d]_{\zeta_{\mathbb{C}^{(n)}X_1}}^{\simeq} \ar[rrr]^{\hcf{n}{\AAA{n}{1} }} &&& \hcf{n}{\Mm{n}{2}} \ar[d]^{\zeta_{\Mm{n}{2}}}_{\simeq} \ar[rrr]^{\operatorname{hcof}'^{(n)}(\BBB{n}{2})} &&& X_{n+3} \ar[d]_{\simeq}^{r^{\xm{n}{n+3}}} \\ \cf{n}{\mathbb{C}^{(n)}X_1} \ar[rrr]^>>>>>>>>>>>>>{\cf{n}{\AAA{n}{1} }} &&& \cf{n}{\Mm{n}{2}} \ar[rrr]^{\cf{n}{\BBB{n}{2}}} &&& \cf{n}{\xm{n}{n+3}}~, } where the map \w{\operatorname{hcof}'^{(n)}(\BBB{n}{2})} was described by Remark \ref{hcoftoB} (see also Proposition \ref{T=AcircB}). 
Applying \w{\operatorname{cof}^{(n)}} to the left square and \w{\operatorname{cof}^{(n+1)}} to the right maps in the diagram of Proposition \ref{XF to RecVXF} yields a commuting diagram, in which the vertical maps are the identity by Lemma \ref{lcfw=cf}, and whose top row is the bottom row of \wref[.]{ababab} Thus \wref{ababab} is in fact: \myssdg[\label{abababab}]{ \hcf{n}{\mathbb{C}^{(n)}X_1} \ar[d]_{\zeta_{\mathbb{C}^{(n)}X_1}}^{\simeq} \ar[rrr]^{\hcf{n}{\AAA{n}{1} }} &&& \hcf{n}{\Mm{n}{2}} \ar[d]^{\zeta_{\Mm{n}{2}}}_{\simeq} \ar[rrr]^{\operatorname{hcof}'^{(n)}(\BBB{n}{2})} &&& X_{n+3} \ar[d]_{\simeq}^{r^{\xm{n}{n+3}}} \\ \cf{n}{\CCn{n}{X_1}} \ar[rrr]^>>>>>>>>>>>>>{\cf{n}{\AAAn{n}{1} }} &&& \cf{n}{\Mn{n}{2}} \ar[rrr]^{\cf{n}{\BBBn{n}{2}}} &&& \cf{n}{\xn{n}{n+3}}~, } where \w{\AAAn{n}{1}=\AAAn{n}{1}(V(\Xs,\FFs))} and \w[.]{\BBBn{n}{2}=\BBBn{n}{2}(V(\Xs,\FFs))} The result follows by Propositions \ref{T=AcircB} and \ref{nToda=cofb circ cofa}. \end{proof} By Theorem \ref{reduced rectifiyng} we conclude \begin{corollary} Given a strongly cofibrant $n$-th order cubical Toda system \w{(\Xs,\FFs)} of length \w{n+2} with \w{\Tm{n}{1}(\Xs,\FFs)} nullhomotopic, the underlying linear diagram \w{X\sb{\ast}} is rectifiable. \end{corollary} We note the following variation for future reference: \begin{prop} Given a strongly cofibrant $n$-th order cubical Toda system\w{(\Xs,\FFs)} of length \w[,]{n+2} with \w{f_{n+2} \circ \hcf{n}{\BBB{n}{1}(\Xs,\FFs)}} nullhomotopic, then the underlying linear diagram \w{X\sb{\ast}} is rectifiable. 
\end{prop} \begin{proof} Setting \w[,]{(\Xs,\widetilde{F}_{\ast}):=V(\Xs,\FFs)} we showed in the proof of Theorem \ref{reduced rectifiyng} how to rectify \w{X_1 \stk{f_1} X_2 \stk{f_2}\dots X_{n+1} \stk{f_{n+1}} X_{n+2}} by the sequence \wref[.]{rectify seq} Because \w[,]{f_{n+2} \circ \hcf{n}{\BBB{n}{1}(\Xs,\FFs)} \sim \ast } we can choose a map \w{s:\operatorname{cof}(\bnn{n}{1}) \to X_{n+3}} fitting into a commutative diagram: $$ \xymatrix@R=20pt @C=40pt { \hcf{n}{\Mm{n}{1}} \ar[rr]^<<<<<<<<<<<<<<{\hcf{n}{\BBB{n}{1}(\Xs,\FFs)}} \ar[d]^{\simeq}_{\zeta_{\Mm{n}{1}}} && X_{n+2} \ar[rr]^{f_{n+2}} \ar[d]^{\simeq}_{r^{\xm{n}{n+2}}} && X_{n+3}\\ \operatorname{cof}(\ann{n-1}{1})\ar@{^{(}->}[rr]^{\bnn{n}{1}} && \operatorname{cof}(\Fn{n}{1}) \ar[rr]^{r^{\bnn{n}{1}}} && \operatorname{cof}(\bnn{n}{1}) \ar@{-->}[u]^{s} } $$ since the bottom row is a cofibration sequence. We complete the rectification by replacing \w{f_{n+2}} with \w[.]{s \circ r^{\bnn{n}{1}}} \end{proof} \sect{Passage from the recursive to the cubical definitions}\label{PRCD} We now show how to lift a higher order recursive Toda system \w{(\Xs,\widetilde{F}_{\ast})} to a higher order cubical Toda system\w[,]{(\Xs',G_{\ast})} using the diagrammatic description of \S \ref{nstwocubes}, and prove that this lifting process is inverse (up to $\approx$) to the reduction \w[.]{(\Xs,\FFs)\mapsto V(\Xs,\FFs)} Using Theorem \ref{def1=def2}, this implies that the two notions of Toda brackets are $\approx$-equivalent. First recall the following special case of Lemma \ref{lfiltr}: \begin{lemma}\label{full cofiber replacement} Given an $n$-cube \w{\mathbb{A}} in $\mathcal{D}$, let \w{\mathbb{A}'} denote its restriction to \w{\Ln{n-1}{}} (see \S \ref{delln}). 
Then any (strongly) cofibrant replacement for \w{\mathbb{A}'} can be extended to a strongly cofibrant replacement for \w[.]{\mathbb{A}} \end{lemma} \begin{prop}\label{fullreplacemen} Given an $n$-th order recursive Toda system \w{(\Xs,\widetilde{F}_{\ast})} in $\mathcal{D}$ of length \w[,]{n+2} we have a strongly cofibrant replacement \mydiagram[\label{eqreplacer}]{ \CCm{n}{X_1} \ar[rrr]^{\AAA{n}{1} (\Xs',G_{\ast})} &&& \Mm{n}{2}(\Xs',G_{\ast}) \ar[rrr]^{\BBB{n}{2}(\Xs',G_{\ast})} &&& \xm{n}{n+3}(\Xs',G_{\ast}) } for the sequence: \mydiagram[\label{eqreplaced}]{ \CCn{n}{X_1} \ar[rrr]^{\AAAn{n}{1} (\Xs,\widetilde{F}_{\ast})} &&& \Mn{n}{2}(\Xs,\widetilde{F}_{\ast}) \ar[rrr]^{\BBBn{n}{2}(\Xs,\widetilde{F}_{\ast})} &&& \xn{n}{n+3}(\Xs,\widetilde{F}_{\ast})~. } \end{prop} Here \wref{eqreplacer} corresponds to an $n$-th order cubical Toda system \w{(\Xs',G_{\ast})} by \S \ref{stwocubes}. \begin{proof} For each \w[,]{1\leq m\leq n} we have compatible strongly cofibrant replacements \w{\vartha{\CCm{m}{X_1}} : \mathbb{C}^{(m)}X_1 \to \CCn{m}{X_1}} (see \S \ref{egva} and \S \ref{lnatwe}). 
We extend the last of these, \w[,]{\vartha{\CCm{n}{X_1}}} to a strongly cofibrant replacement for \w[,]{\AAAn{n}{1}(\Xs,\widetilde{F}_{\ast}) : \CCn{n}{X_1} \to \Mn{n}{2}(\Xs,\widetilde{F}_{\ast}) } defining successive extensions of \w{\vartha{\CCm{m}{X_1}}} by induction on \w[,]{m\geq 2} as follows: For \w[,]{m=2} the diagram: \myadiagnum[\label{diagramnum1}]{ X_1 \ar@{^{(}->}[d]_<<<{i^{X_1}} \ar@{^{(}->}[r]^{i^{X_1}} \ar[rrd]^>>>>>>>>{f_1} & CX_1 \ar@{^{(}->}[d] \ar[rrd]^{Cf_1} & & \\ CX_1 \ar@{^{(}->}[r]^{i^{CX_1}} \ar[rrd]_{F_1:=\Fn{1}{1}} & C^2 X_1 & X_2 \ar[d]^<<<{f_2} \ar@{^{(}->}[r]_{i^{X_2}} & C X_2 \\ & & X_3 & } is a strongly cofibrant replacement for \myadiagnumm[\label{diagramnum2}]{ X_1 \ar@{^{(}->}[d] \ar[r] \ar[rrd]^>>>>>>>>{f_1} & \ast \ar[d] \ar[rrd]^{} & & \\ CX_1 \ar[r]^{} \ar[rrd]_{\Fn{1}{1}} & C \ee{}{X_1} & X_2 \ar[d]^<<<{f_{2}} \ar[r] & \ast \\ & & X_3 & } Using Lemma \ref{full cofiber replacement}, we can complete it to a strongly cofibrant replacement for \w{\AAAn{2}{1}(\Xs,\widetilde{F}_{\ast})} (see \wref[).]{eqtwoone} In the induction step, assume given a strongly cofibrant \w{(\Xs',G_{\ast})^{(m,m+1)}} (in the notation of \S \ref{order cahin}) and a weak equivalence \w[,]{\RR{m}{} : \Mm{m}{2}(\Xs',G_{\ast}) \to \Mn{m}{2}(\Xs,\widetilde{F}_{\ast})} making \mydiagram[\label{aa}]{ \CCm{m}{X_1} \ar[d]_{\vartha{\CCm{m}{X_1}}}^{\simeq} \ar[rrrr]^>>>>>>>>>>>>>>>>{\AAA{m}{1}((\Xs',G_{\ast})^{(m,m+1)})} &&&& \Mm{m}{2}((\Xs',G_{\ast})^{(m,m+1)}) \ar[d]^{\RR{m}{}}_{\simeq} \\ \CCn{m}{X_1} \ar[rrrr]^{\AAAn{m}{1}(\Xs,\widetilde{F}_{\ast})} &&&& \Mn{m}{2}(\Xs,\widetilde{F}_{\ast}) } commute. 
Thus \w{\AAA{m}{1} ((\Xs',G_{\ast})^{(m,m+1)}) } is a strongly cofibrant replacement for \w[.]{\AAAn{m}{1} (\Xs,\widetilde{F}_{\ast}) } We need to extend \wref{aa} to \w{\AAA{m+1}{1}((\Xs',G_{\ast})^{(m+1,m+2)})} and \w[.]{\RR{m+1}{}} Using Lemmas \ref{paCCnX} and \ref{npa} and diagram \wref[,]{diagrampa} the map of \wwb{m+1}cubes \w{\AAAn{m+1}{1}(\Xs,\widetilde{F}_{\ast})} is given by a diagram of $m$-cubes: \mydiagram[\label{ab}]{ \CCn{m}{X_1}\ar[d]_{(i^{\ee{m}{X_1}}\circ\, r^{i^{\ee{m-1}{X_1}}})\sb{\ast} } \ar[rrr]^{\AAAn{m}{1}(\Xs,\widetilde{F}_{\ast})} &&& \Mn{m}{2}(\Xs,\widetilde{F}_{\ast}) \ar[d]^{\BBBn{m}{2}(\Xs,\widetilde{F}_{\ast})} \\ \Fin{m}{C\ee{m}{X_1}} \ar[rrr]^{\Fin{m}{\Fn{m+1}{1}}} &&& \Fin{m}{\operatorname{cof}(\Fn{m}{2})} } (see \S \ref{Fin}), so we have a strongly cofibrant replacement for \w{\AAAn{m+1}{1} (\Xs,\widetilde{F}_{\ast})} restricted to the upper left corner of \wref[,]{ab} back to front in the following diagram: \mysdiag[\label{abc}]{ \CCm{m}{X_1} \ar[rrrd]^{\simeq}_{\vartha{\CCm{m}{X_1}}} \ar@{^{(}->}[d]_{i^{\CCm{m}{X_1}}} \ar[rrr]^>>>>>>>>>>{\AAA{m}{1}(\Xs',G_{\ast})} &&& \Mm{m}{2}((\Xs',G_{\ast})^{(m,m+1)}) \ar[rrrd]_{\simeq}^{\RR{m}{}} \\ C\CCm{m}{X_1} \ar[drrr]^{\simeq}_{(r\sp{C\CCm{m}{X_1}})\sb{\ast}} &&& \CCn{m}{X_1} \ar[d]_{} \ar[rrr]_{\AAAn{m}{1}(\Xs,\widetilde{F}_{\ast})} &&& \Mn{m}{2}(\Xs,\widetilde{F}_{\ast}) \\ &&& \Fin{m}{C\ee{m}{X_1}} } We want to extend \wref{abc} to a full strongly cofibrant replacement for \wref{ab} by extending the back of \wref{abc} as follows: \myudiag[\label{abcd}]{ \CCm{m}{X_1} \ar@{^{(}->}[d]_{i^{\CCm{m}{X_1}}} \ar[rrrrr]^>>>>>>>>>>>>>>>>>>>{\AAA{m}{1}((\Xs',G_{\ast})^{(m,m+1)})} &&&&& \Mm{m}{2}((\Xs',G_{\ast})^{(m,m+1)}) \ar[d]^{\BBB{m}{2}((\Xs',G_{\ast})^{(m,m+2)})} \\ C\CCm{m}{X_1} \ar[rrrrr]^<<<<<<<<<<<<<<<<<<{\GGG{m+1}{1}((\Xs',G_{\ast})^{(m+1,m+2)})} &&&&& \xm{m}{m+3}((\Xs',G_{\ast})^{(m+1,m+2)}) } where \w{{\GGG{m+1}{1}((\Xs',G_{\ast}))}} is a diagrammatic nullhomotopy as in 
\wref[,]{eqdiagnull} in two steps: \begin{itemize} \item First, we have canonical choices for the right and bottom maps in \wref{abcd} on the subdiagrams indexed by \w{\Ln{m-1}{}} (that is, all but the last vertex of the $m$-cube). This is because, by Definition \ref{BBB}, the restriction to \w{\Ln{m-1}{}} of the map of $m$-cubes \w{\BBB{m}{2}((\Xs',G_{\ast})^{(m,m+2)})} is completely determined by \w[.]{\Mm{m}{2}((\Xs',G_{\ast})^{(m,m+1)})} Similarly, by Definition \ref{diagnull}, the restriction to \w{\Ln{m-1}{}} of the diagrammatic nullhomotopy \w{\GGG{m+1}{1}((\Xs',G_{\ast})^{(m+1,m+2)})} is determined by \w[]{(\Xs',G_{\ast})^{(m,m+1)}} (see Diagram \wref{eqmapcorners} below). Note that the extension of the back-to-front (weak equivalence) map of squares in \wref{abc} extends trivially on \w[,]{\Ln{m-1}{}} since all new targets are the zero object and all sources are cones. \item Finally, again thinking of each square of $m$-cubes as an \wwb{m+2}cube, we can use Lemma \ref{full cofiber replacement} to extend to the last vertex \w{(1,\dotsc,1)} of \w[,]{\In{m+2}{}} obtaining a full strongly cofibrant replacement \wref{abcd} for \wref[.]{ab} In particular, we have \w{(\Xs',G_{\ast})^{(m+1,m+2)}} extending \w{(\Xs',G_{\ast})^{(m,m+1)}} and a weak equivalence \w[.]{\RR{m+1}{}} \end{itemize} We thus constructed the left half of \wref{eqreplacer} and mapped it to \wref[.]{eqreplaced} For the right half, we want the dotted maps in \mydiagram[\label{eqtightrep}]{ \Mm{n}{2}(\Xs',G_{\ast}) \ar[d]^{\RR{n}{}}_{\simeq} \ar@{-->}[rrr]^{\BBB{n}{2}(\Xs',G_{\ast})} &&& \xm{n}{n+3}(\Xs',G_{\ast}) \ar@{-->}[d]^{\simeq} \\ \Mn{n}{2}(\Xs,\widetilde{F}_{\ast}) \ar[rrr]^{\BBBn{n}{2}(\Xs,\widetilde{F}_{\ast})} &&& \xn{n}{n+3}(\Xs,\widetilde{F}_{\ast}) } By definition we have iterated cones for \w{\xm{n}{n+3}(\Xs',G_{\ast})} and inclusions for \w{\BBB{n}{2}(\Xs',G_{\ast})} in all but the last vertex \w{(1,\dotsc,1)} (see diagram \wref{abeqmapcorners} below). 
Since \w[,]{\xn{n}{n+3}(\Xs,\widetilde{F}_{\ast}) =\Fin{n}{\operatorname{cof}(\Fn{n}{2})}} which is trivial in all but the last vertex, we have a canonical choice for \wref{eqtightrep} on \w{\Ln{n}{}} and use Lemma \ref{full cofiber replacement} to complete \wref[.]{eqtightrep} \end{proof} \begin{example}\label{exfullreplacment} Assume given a third order recursive Toda system \w{(\Xs,\widetilde{F}_{\ast})} of length $5$ in $\mathcal{D}$. We start with the case \w{n=2} by completing \wref{diagramnum1} as in the proof above. This will serve as the top $3$-cube in the following strongly cofibrant replacement for the next step. Here the left half of \wref{eqreplacer} appears as a back to front map of $3$-cubes \w{\AAAn{3}{1}(\Xs,\widetilde{F}_{\ast}) :\CCn{3}{X_1}\to \Mn{3}{2}(\Xs,\widetilde{F}_{\ast}) } (where the missing final vertex of \w[,]{\Mn{3}{2}(\Xs,\widetilde{F}_{\ast})} to be called \w[,]{X\sb{5}'} is indicated by $?$): \myrrdiag[\label{eqmapcorners}]{ & X_1 \ar@{^{(}->}[ldd] \ar@{^{(}->}[d]_<<<{i^{X_1}} \ar@{^{(}->}[r]^{i^{X_1}} \ar[rrd]^>>>>>>>>>>>>>{f_1} & CX_1 \ar@{^{(}->}[ldd] \ar@{^{(}->}[d]^<<<{} \ar[rrd]^{Cf_1} & & \\ & CX_1 \ar@{^{(}->}[ldd] \ar@{^{(}->}[r]^{i^{CX_1}} \ar[rrd]_>>>>>>>>>>>>>>>{\Fn{1}{1}} & C^2 X_1 \ar@{^{(}->}[ldd] \ar[rrd]^<<<<<<<<<<<{\Gm{2}{1}} & X_2\ar@{^{(}->}[ldd] \ar[d]^<<<{f_2} \ar@{^{(}->}[r]_{i^{X_2}} & C X_2 \ar@{^{(}->}[ldd] \ar[d]^{G_2} \\ CX_1 \ar[rrd]^>>>>>>>>{Cf_1} \ar@{^{(}->}[d]_<<<{C(i^{X_1})} \ar@{^{(}->}[r] & C^2X_1 \ar[rrd]^>>>>>>>>>>>>>>>>>>>>>{C^2f_1} \ar@{^{(}->}[d]^<<<{} & & X_3 \ar@{^{(}->}[ldd] \ar[r]_{g_3} & X'_4 \\ C^2X_1 \ar[rrd]_{C\Fn{1}{1}} \ar@{^{(}->}[r]^{C(i^{CX_1})} & C^3 X_1 & CX_2 \ar[d]_{Cf_2}\ar@{^{(}->}[r]_{C(i^{CX_2})} & C^{2}X_2 \\ & & CX_3 & ? 
} Using Lemma \ref{full cofiber replacement}, we can complete this to a strongly cofibrant replacement for full map of $3$-cubes \w[.]{\AAAn{3}{1}(\Xs,\widetilde{F}_{\ast}) :\CCn{3}{X_1}\to \Mn{3}{2}(\Xs,\widetilde{F}_{\ast}) } Therefore, we have a strongly cofibrant replacement (back to front) for all but the last vertex of \w[:]{\BBBn{3}{2}(\Xs,\widetilde{F}_{\ast}) :\Mn{3}{2}(\Xs,\widetilde{F}_{\ast}) \to \xn{3}{6}(\Xs,\widetilde{F}_{\ast}) } \myrrdiag[\label{abeqmapcorners}]{ & X_2 \ar@{^{(}->}[ldd] \ar@{^{(}->}[d]^{f_2} \ar@{^{(}->}[r]^{i^{X_2}} \ar@{^{(}->}[rrd] & CX_2 \ar@{^{(}->}[ldd] \ar[d]^<<<{G_2} \ar@{^{(}->}[rrd] & & \\ & X_3 \ar@{^{(}->}[ldd] \ar@{^{(}->}[r]^{g_3} \ar@{^{(}->}[rrd] & X'_4 \ar[ldd]^<<<<<<<<{g_4} \ar@{^{(}->}[rrd] & C X_2\ar@{^{(}->}[ldd] \ar[d]^<<<{Cf_2} \ar@{^{(}->}[r]_{Ci^{X_2}} & C^2 X_2 \ar@{^{(}->}[ldd] \ar[d]^{CG_2} \\ CX_2 \ar@{^{(}->}[rrd] \ar[d]_{Cf_2} \ar@{^{(}->}[r] & C^2X_2 \ar@{^{(}->}[rrd] \ar[d]^<<<{\Gm{2}{2}} & & CX_3 \ar@{^{(}->}[ldd] \ar[r]_<<<<{Cg_3} & CX'_4 \\ CX_3 \ar@{^{(}->}[rrd] \ar[r]^{G_3} & X'_5 & C^2X_2 \ar[d]_{C^{2}f_2}\ar@{^{(}->}[r] & C^{3}X_2 \\ & & C^2X_3 & ? } Again we use Lemma \ref{full cofiber replacement} to complete the missing vertex $?$. \end{example} \begin{remark} Note that, given a recursive higher Toda system \w[,]{(\Xs,\widetilde{F}_{\ast})} the cubical higher Toda system \w{(\Xs',G_{\ast})} of Proposition \ref{fullreplacemen} is not functorial, since we choose liftings in each induction step. However, if \w{(\Xs,\widetilde{F}_{\ast})=V(\Xs,\FFs)} for some $n$-th order cubical Toda system\w{(\Xs,\FFs)} (see \S \ref{indchcx}), then by Proposition \ref{XF to RecVXF} we can choose \w{(\Xs',G_{\ast})} to be \w{(\Xs,\FFs)} itself. 
Thus the construction of \w{(\Xs',G_{\ast})} can be thought of as a left inverse for the functor \w[.]{V} The following Proposition provides the other direction: \end{remark} \begin{prop}\label{VXG tosr XFn} Given an $n$-th order recursive Toda system \w{(\Xs,\widetilde{F}_{\ast})} in $\mathcal{D}$, with \w{(\Xs',G_{\ast})} as in Proposition \ref{fullreplacemen}, we have \w[.]{V(\Xs',G_{\ast}) \tosr (\Xs,\widetilde{F}_{\ast})} \end{prop} \begin{proof} By the construction of \w{(\Xs',G_{\ast})} in the proof of Proposition \ref{fullreplacemen} we see that $$ \xymatrix@R=25pt { \CCn{n}{X_1} \ar[rrr]^{\AAAn{n}{1} (\Xs,\widetilde{F}_{\ast})} &&& \Mn{n}{2}(\Xs,\widetilde{F}_{\ast}) \ar[rrr]^{\BBBn{n}{2}(\Xs,\widetilde{F}_{\ast})} &&& \xn{n}{n+3}(\Xs,\widetilde{F}_{\ast}) } $$ is $\approx$-equivalent to \wref[,]{eqreplacer} which is $\approx$-equivalent by Proposition \ref{XF to RecVXF} to $$ \xymatrix@R=25pt @C=35pt { \CCn{n}{X_1} \ar[rr]^<<<<<<<<<<<{\AAAn{n}{1}(V(\Xs',G_{\ast}))} && \Mn{n}{2}(V(\Xs',G_{\ast})) \ar[rrr]^{\BBBn{n}{2}(V(\Xs',G_{\ast}))} &&& \xn{n}{n+3}(V(\Xs',G_{\ast})) } $$ \end{proof} We thus obtain our main result, the converse of Theorem \ref{def1=def2}: \begin{thm}\label{Rev def1=def2} If \w{(\Xs,\widetilde{F}_{\ast})} is an $n$-th order recursive Toda system in $\mathcal{D}$, and \w{(\Xs',G_{\ast})} is as in Proposition \ref{fullreplacemen}, then \w[.]{\Tm{n}{1}(\Xs',G_{\ast}) \approx \Tn{n}{1}(\Xs,\widetilde{F}_{\ast}) } \end{thm} \begin{proof} By Theorem \ref{def1=def2} \w{\Tm{n}{1}(\Xs',G_{\ast}) \approx \Tn{n}{1}(V(\Xs',G_{\ast})) } and by Proposition \ref{VXG tosr XFn} and Corollary \ref{Tn=Tn} we get that \w[.]{\Tn{n}{1}(V(\Xs',G_{\ast})) \approx \Tn{n}{1}(\Xs,\widetilde{F}_{\ast}) } \end{proof} In fact we can show that we have a commutative diagram: $$ \xymatrix@R=25pt { L^n X_1 \ar[rrr]^{\Tm{n}{1}(\Xs',G_{\ast})} \ar[d]_{\zeta_{\mathbb{C}^{n}X_1}}^{\simeq} &&& X'_{n+3} \ar[d]^{\simeq} \\ {\widetilde{\Sigma}}^{n} X_1 
\ar[rrr]_{\Tn{n}{1}(\Xs,\widetilde{F}_{\ast}) } &&& \operatorname{cof}^{(n)}(\xn{n}{n+3}). } $$ \end{document}
arXiv
Kunihiko Chikaya's Inequality Under the constraints, the inequality is equivalent to $\displaystyle \small{\frac{(a^{10}-b^{10})(b^{10}-c^{10})(a^{10}-c^{10})}{(a^{9}+b^{9})(b^{9}+c^{9})(c^{9}+a^{9})}\ge\frac{125}{3}[(a-c)^3-(a-b)^3-(b-c)^3]}.$ $\displaystyle (a-c)^3-(a-b)^3-(b-c)^3=3(a-b)(b-c)(a-c).$ Hence, suffice it to show that $\displaystyle \frac{(a^{10}-b^{10})(b^{10}-c^{10})(a^{10}-c^{10})}{(a^{9}+b^{9})(b^{9}+c^{9})(c^{9}+a^{9})}\ge 125(a-b)(b-c)(a-c).$ By the AM-GM inequality, employed repeatedly, or by Muirhead's inequality, employed just once, $\displaystyle \frac{u^{10}-v^{10}}{u-v}\le 5(u^9+v^9).$ Substituting $a,b,c$ in pairs and taking the product proves the required inequality. The $RHS$ simplifies to $125(a-b)(b-c)(c-a)$. If any two variables are equal, the inequality is trivially satisfied. Thus, we assume that no two variables are equal. The constraint implies that both $LHS$ and $RHS$ are non-positive. Thus, we can negate both sides and flip the direction of the inequality. The inequality can then be written as $\begin{align} &&\left[5(a-b)(a^9+b^9)\right]\left[5(b-c)(b^9+c^9)\right]\left[5(a-c)(a^9+c^9)\right] \\ &&\geq (a^{10}-b^{10}) (b^{10}-c^{10}) (a^{10}-c^{10}). \end{align}$ If we prove that for $x>y$, $5(x-y)(x^9+y^9)\geq (x^{10}-y^{10}),$ we are done. Dropping the common positive factor $(x-y)$ the inequality becomes $\displaystyle \begin{align} &5(x^9+y^9)-\sum_{i=0}^{9}x^iy^{9-i}\geq 0 \\ &[(x^9+y^9)-(x^8y+y^8x)]+[(x^9+y^9)-(x^7y^2+y^7x^2)]+ \\ &[(x^9+y^9)-(x^6y^3+y^6x^3)]+ [(x^9+y^9)-(x^5y^4+y^5x^4)]\geq 0. \end{align}$ Each box bracket is $\geq 0$ from Muirhead and the inequality follows. 
$\displaystyle 5(x^9+y^9)-\sum_{i=0}^{9}x^iy^{9-i}\geq 0.$ and follows from $\displaystyle \begin{align} \sum_{i=0}^{9}x^iy^{9-i} &= \sum_{i=0}^{9}\left\{[x^9]^i[y^9]^{9-i}\right\}^{1/9} \leq\sum_{i=0}^{9} \frac{i\cdot x^9+(9-i)\cdot y^9}{9}~\text{(AM-GM)} \\ &= x^9\left(\frac{\displaystyle \sum_{i=0}^{9} i}{9}\right) + y^9\left(\frac{\displaystyle \sum_{i=0}^{9} (9-i)}{9}\right) \\ &= (x^9+y^9)\left(\frac{\displaystyle \sum_{i=0}^{9} i}{9}\right)= 5(x^9+y^9). \end{align}$ We can rewrite the rhs: $rhs=\frac{125}{3} (-3 a^2 b+3 a^2 c+3 a b^2-3 a c^2-3 b^2 c+3 b c^2)$, $rhs=125 (a-b) (c-a) (b-c)$. Since $\displaystyle \sum _{i=0}^{n-1} a^{n-i-1}b^i =\frac{a^n-b^n}{a-b}$ and with $n=10$, $\displaystyle \sum _{i=0}^{n-1} a^{n-i-1}b^i\bigg|_{n=10} =\sum_{sym}a^5 b^4+\sum_{sym}a^6 b^3+\sum_{sym}a^7 b^2+\sum_{sym}a^8 b^1+\sum_{sym}a^9 b^0$ where $\displaystyle \sum_{sym}a^{p_1}b^{p_2}$ is the symmetric sum over all permutations $a^{p_1} b^{p_2}$. So we need to prove that $a^{10}-b^{10}\geq 5 (a^9+b^9)(a-b)$ (cycl.) By Miurhead's inequality, $\displaystyle 5 \sum_{sym}a^9 b^0=5 (a^9 + b^9)\geq \sum_{sym}a^5 b^4+\sum_{sym}a^6 b^3+\sum_{sym}a^7 b^2+\sum_{sym}a^8 b^1+\sum_{sym}a^9 b^0$ We note for the nonpositive case $(c^{10}-a^{10})$, that it cancels out because $c-a$ is also nonpositive. Multiplying both sides by $\displaystyle \underset {cycl} \sum a$, we write the initial equation as: $ 4 \mathfrak{M}_{a,b,c,d}^{\{1,1,1,0\}}-8 \mathfrak{M}_{a,b,c,d}^{\{2,1,0,0\}}+4 \mathfrak{M}_{a,b,c,d}^{\{3,0,0,0\}} \geq0$ where $\mathfrak{M}_{a,b,c,d}^{\sigma=\{\sigma_1,\sigma_2,\sigma_3,\sigma_4\}}$ is the mean across all permutations $\sigma$. 
Less elegantly: $\small{ (a b c+a b d+a c d+b c d)\\ -8 \left(\frac{a^2 b}{12}+\frac{a^2 c}{12}+\frac{a^2 d}{12}+\frac{a b^2}{12}+\frac{a c^2}{12}+\frac{a d^2}{12}+\frac{b^2 c}{12}+\frac{b^2 d}{12}+\frac{b c^2}{12}+\frac{b d^2}{12}+\frac{c^2 d}{12}+\frac{c d^2}{12}\right)\\ +\left(a^3+b^3+c^3+d^3\right) } $ We have from a generalization of Schur's inequality for 4 variables: $(a-b) (a-c) (a-d) a^t+(b-a) (b-c) (b-d) b^t\\ +(c-a) (c-b) (c-d) c^t+(d-a) (d-b) (d-c) d^t\geq 0,$ which we can write in the $\mathfrak{M}$ notation as (for $t=0$): $8 \mathfrak{M}_{a,b,c,d}^{\{1,1,1,0\}}-12 \mathfrak{M}_{a,b,c,d}^{\{2,1,0,0\}}+4 \mathfrak{M}_{a,b,c,d}^{\{3,0,0,0\}}\geq0.$ By Muirhead's inequality, since $\{2,1,0,0\}$ majorizes $\{1,1,1,0\}$, we have $ \mathfrak{M}_{a,b,c,d}^{\{2,1,0,0\}} \geq \mathfrak{M}_{a,b,c,d}^{\{1,1,1,0\}},$ which completes the proof.

Solution 5': Using Schur's inequality $S_{x,y,z}^t=x^t (x-y) (x-z)+y^t (y-x) (y-z)+z^t (z-x) (z-y) \geq 0$ for $t=1$, and cycling, $\displaystyle \underset {cycl} \sum S_{a,b,c}^t \geq 0,$ which can be written as $\displaystyle 4 \mathfrak{M}_{a,b,c,d}^{\{1,1,1,0\}}-8 \mathfrak{M}_{a,b,c,d}^{\{2,1,0,0\}}+4 \mathfrak{M}_{a,b,c,d}^{\{3,0,0,0\}} \geq 0.$

Leo Giugiuc has kindly communicated to me the above problem by Kunihiko Chikaya, along with a solution of his. Solutions 2 and 3, by Amit Itagi, make the references to Muirhead's and the AM-GM inequalities more explicit. Solutions 4, 5 and 5' are by N. N.
Taleb.
Proof calculus

In mathematical logic, a proof calculus or a proof system is built to prove statements.

Overview

A proof system includes the components:[1]
• Language: The set L of formulas admitted by the system, for example, propositional logic or first-order logic.
• Rules of inference: List of rules that can be employed to prove theorems from axioms and theorems.
• Axioms: Formulas in L assumed to be valid. All theorems are derived from axioms.

Usually a given proof calculus encompasses more than a single particular formal system, since many proof calculi are under-determined and can be used for radically different logics. For example, a paradigmatic case is the sequent calculus, which can be used to express the consequence relations of both intuitionistic logic and relevance logic. Thus, loosely speaking, a proof calculus is a template or design pattern, characterized by a certain style of formal inference, that may be specialized to produce specific formal systems, namely by specifying the actual inference rules for such a system. There is no consensus among logicians on how best to define the term.

Examples of proof calculi

The most widely known proof calculi are those classical calculi that are still in widespread use:
• The class of Hilbert systems, of which the most famous example is the 1928 Hilbert–Ackermann system of first-order logic;
• Gerhard Gentzen's calculus of natural deduction, which is the first formalism of structural proof theory, and which is the cornerstone of the formulae-as-types correspondence relating logic to functional programming;
• Gentzen's sequent calculus, which is the most studied formalism of structural proof theory.

Many other proof calculi were, or might have been, seminal, but are not widely used today.
• Aristotle's syllogistic calculus, presented in the Organon, readily admits formalisation. There is still some modern interest in syllogisms, carried out under the aegis of term logic.
• Gottlob Frege's two-dimensional notation of the Begriffsschrift (1879) is usually regarded as introducing the modern concept of quantifier to logic.
• C.S. Peirce's existential graph easily might have been seminal, had history worked out differently.

Modern research in logic teems with rival proof calculi:
• Several systems have been proposed that replace the usual textual syntax with some graphical syntax. Proof nets and cirquent calculus are among such systems.
• Recently, many logicians interested in structural proof theory have proposed calculi with deep inference, for instance display logic, hypersequents, the calculus of structures, and bunched implication.

See also
• Propositional proof system
• Proof nets
• Cirquent calculus
• Calculus of structures
• Formal proof
• Method of analytic tableaux
• Resolution (logic)

References
1. Anita Wasilewska. "General proof systems" (PDF).
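The three components above (language, rules of inference, axioms) can be made concrete in a toy system. The sketch below is purely illustrative: the formulas, axioms, and single inference rule (modus ponens over atomic implication pairs) are invented for the example and do not correspond to any particular published calculus.

```python
# Toy proof system: the "language" is a set of atomic formula names (strings),
# "axioms" is a subset assumed valid, and the only inference rule is modus
# ponens: from "p" and the implication ("p", "q") (read "p -> q"), derive "q".
def derive(axioms, implications, max_steps=100):
    """Return the set of theorems derivable from the axioms."""
    theorems = set(axioms)
    for _ in range(max_steps):
        new = {q for (p, q) in implications if p in theorems and q not in theorems}
        if not new:
            break  # closure reached: no rule application adds anything
        theorems |= new
    return theorems

axioms = {"A"}
implications = {("A", "B"), ("B", "C"), ("D", "E")}  # "A -> B", etc.
print(sorted(derive(axioms, implications)))  # ['A', 'B', 'C']
```

Note that "E" is not derivable because its premise "D" is neither an axiom nor a theorem, illustrating that theoremhood depends on both the axioms and the inference rules.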
A revealed preference analysis to develop composite scores approximating lung allocation policy in the U.S.

Darren E. Stewart ORCID: orcid.org/0000-0002-6764-48421, Dallas W. Wood2, James B. Alcorn1, Erika D. Lease3, Michael Hayes2, Brett Hauber4,5 & Rebecca E. Goff1

BMC Medical Informatics and Decision Making volume 21, Article number: 8 (2021)

The patient ranking process for donor lung allocation in the United States is carried out by a classification-based, computerized algorithm, known as the match system. Experts have suggested that a continuous, points-based allocation framework would better serve waiting list candidates by removing hard boundaries and increasing transparency into the relative importance of factors used to prioritize candidates. We applied discrete choice modeling to match run data to determine the feasibility of approximating current lung allocation policy by one or more composite scores. Our study aimed to demystify the points-based approach to organ allocation policy; quantify the relative importance of factors used in current policy; and provide a viable policy option that adapts the current, classification-based system to the continuous allocation framework. Rank ordered logistic regression models were estimated using 6466 match runs for 5913 adult donors and 534 match runs for 488 pediatric donors from 2018. Four primary attributes are used to rank candidates and were included in the models: (1) medical priority, (2) candidate age, (3) candidate's transplant center proximity to the donor hospital, and (4) blood type compatibility with the donor. Two composite scores were developed, one for adult and one for pediatric donor allocation. Candidate rankings based on the composite scores were highly correlated with current policy rankings (Kendall's Tau ~ 0.80, Spearman correlation > 90%), indicating both scores strongly reflect current policy.
In both models, candidates are ranked higher if they have higher medical priority, are registered at a transplant center closer to the donor hospital, or have an identical blood type to the donor. Proximity was the most important attribute. Under a points-based scoring system, candidates in further away zones are sometimes ranked higher than more proximal candidates compared to current policy. Revealed preference analysis of lung allocation match runs produced composite scores that capture the essence of current policy while removing rigid boundaries of the current classification-based system. A carefully crafted, continuous version of lung allocation policy has the potential to make better use of the limited supply of donor lungs in a manner consistent with the priorities of the transplant community. Lung allocation decisions in the United States are made according to policies developed by the Organ Procurement and Transplantation Network (OPTN), which is operated by the United Network for Organ Sharing (UNOS) [1]. When a deceased donor lung becomes available, these policies state how potential transplant recipients (candidates) are rank-ordered according to objective characteristics such as donor/candidate blood type compatibility, proximity of the candidate's transplant hospital to the donor hospital, medical priority, etc. A computerized algorithm, known as the match system, carries out the ranking process by applying discrete policy rules. The available donor lungs are offered first to the top-ranked candidate on the list; if that candidate (or the transplant team) declines the offer, the second-ranked candidate is given the chance to accept, and the process is repeated down the match run waiting list until the lung is accepted for transplant. Proximity, defined as the distance between the donor hospital and each candidate's transplant hospital, plays a significant role in prioritizing patients. 
Proximity is relevant because organ transportation takes time and recovered organs have limited viability outside the body due to the cumulative effects of organ ischemia. Specifically, lung candidates are prioritized according to six concentric circles (zones) around the donor hospital, with zone A encompassing a 250 nautical mile (NM) radius around the donor hospital, zone B between 250 and 500 NM; zone C: > 500 to 1000; zone D: > 1000 to 1500; zone E: > 1500 to 2500; zone F: > 2500 [2]. Regardless of medical priority, candidates in more proximal zones are prioritized ahead of candidates in more distant zones. Within each zone, candidates are further stratified by age brackets—candidates age 12 years or older are prioritized ahead of younger patients for adult donor lungs—and whether or not candidate blood type (ABO) is identical or compatible with the donor. This stratification of lung candidates by proximity, blood type compatibility, and age brackets results in 36 ordered "classifications" for the allocation of adult donor lungs, as depicted in Fig. 1. Within each classification, candidates are rank-ordered and prioritized by medical acuity based on descending lung allocation score (LAS) [3] for candidates age 12 or older, and Priority 1 versus 2 status for younger patients. The LAS ranges from 0 to 100 and is composed of two components: the expected 1-year survival time with a transplant, and the expected 1-year survival time without a transplant. In a sense, the LAS measures the patient-specific "net benefit" of lung transplantation. However, since the without-a-transplant (aka, waiting list mortality) component is weighted twice as heavily as the post-transplant survival component, the LAS emphasizes reducing waiting list mortality more so than maximizing post-transplant survival. 
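The zone brackets above amount to a threshold lookup on the distance between the donor hospital and a candidate's transplant center. A minimal illustrative sketch (function and constant names are invented; boundaries in nautical miles as stated in the text):

```python
# Map distance (NM) from the donor hospital to the lung allocation zone
# described above: A <= 250, B (250, 500], C (500, 1000], D (1000, 1500],
# E (1500, 2500], F > 2500.
ZONE_BOUNDS = [(250, "A"), (500, "B"), (1000, "C"), (1500, "D"), (2500, "E")]

def lung_zone(distance_nm):
    for bound, zone in ZONE_BOUNDS:
        if distance_nm <= bound:
            return zone
    return "F"  # beyond 2500 NM

print(lung_zone(200), lung_zone(251), lung_zone(3000))  # A B F
```

The hard boundary at 250 NM, for instance, is what separates the LAS 90 candidate at 251 NM from the LAS 30 candidate at 200 NM in the example discussed below.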
Figure 2 reveals a similar, 36-classification structure for pediatric lung donor allocation, with two notable differences: children (age < 12 years) having proximity within 1000 NM are prioritized over older candidates, and adolescents (age 12–17) are prioritized ahead of adults (age 18+). Illustration of allocation of lungs from deceased donors aged at least 18 years old. The chart shows how medical priority, candidate age (younger than 1 year old, younger than 12 years old, and at least 12 years old), ABO (identical, compatible, and incompatible), and proximity define each of the 36 ordered classifications. Within each classification, candidates 12 or older are sorted by (descending) LAS, while younger candidates are sorted by (descending) waiting time. Image created by James Alcorn for the OPTN using Visio 2016 Illustration of allocation of lungs from deceased donors younger than 18 years old. The chart shows how medical priority, candidate age (younger than 1 year old, younger than 12 years old, and at least 12 years old), ABO (identical, compatible, and incompatible), and proximity define each of the 36 ordered classifications. Within each classification, candidates 12 or older are sorted by (descending) LAS, while younger candidates are sorted by (descending) waiting time. Image created by James Alcorn for the OPTN using Visio 2016 One of the recognized limitations of this classification-driven system is that candidates in a lower classification—even those who are highly medically urgent—are never prioritized ahead of candidates in a higher classification. For example, an LAS 90 candidate 251 NM away is likely severely ill with an elevated mortality risk without a transplant but will be prioritized below a more proximal candidate with a much lower medical priority score (e.g., LAS of 30). 
Likewise, under the current system, a candidate with a blood type that is identical to the lung donor is always ranked higher than a candidate in the same location with a compatible blood type, even if the latter candidate's LAS reflects a much greater medical need for transplantation. These types of cases, in which candidates with high medical need are deprioritized due to a strict policy rule or rigid boundary, highlight a significant limitation of the current taxonomic approach to organ allocation. Although the current approach to organ allocation, in use for decades in the U.S., has helped many thousands in need of life-saving transplants, some experts have wondered whether a continuous, mathematically-derived allocation framework could better align lung allocation policy with requirements of OPTN's final rule [4] by increasing equity, transparency, and overall allocation efficiency [5, 6]. A mathematical (points-based) system would assign points based on pertinent candidate attributes such as medical urgency (i.e., estimated waiting list mortality without a transplant), expected post-transplant survival time, and factors related to the likelihood of finding a biologically compatible donor, such as blood type. Lung transplant candidates would ultimately be assigned a composite allocation score that would be used to determine their rank ordering on the match run when a donor lung becomes available. A points-based allocation framework is likely to have at least two major benefits. First, it is more transparent than the current rules-based system because it quantifies how important each candidate attribute is in organ allocation. Second, a points-based allocation framework would allow for the combined effects of many candidate attributes to be considered simultaneously, as opposed to allowing the effect of a single attribute (e.g., blood type identical) to supersede all possible combinations of other attributes. 
We performed a study to determine the feasibility of approximating the current lung allocation policy by a mathematically-derived, points-based framework. Feasibility was determined by using data from recent match runs to estimate statistical models that capture the essence of current allocation policies. These models were estimated using discrete choice modeling techniques, which are used extensively in health economics to statistically relate the choices between alternatives made by individuals to the attributes of the alternatives themselves [7, 8]. These models are typically estimated using data collected in experimental settings [9,10,11]. However, these models can also be estimated using observational data through a revealed preference analysis [12]. The rank-ordering of candidates on each match run explicitly reveals the intrinsic "preferences," or priorities, embedded in the policy. Our study is unique in that it uses discrete choice modeling to analyze organ allocation preferences generated by a deterministic policy algorithm, as opposed to individual (human) decision-makers. Our study had three main goals: (1) to demystify the composite score-based (continuous allocation) approach to allocation by showing how current policy can be approximated by a composite score, (2) to quantify the relative importance of factors used in allocation (LAS, distance, blood type, etc.) under current policy, and (3) to provide a viable policy option for implementation of a composite score that adapts the current, classification-based system to the continuous allocation framework. Statistical models were estimated using rank ordered logistic regression [13], a conventional discrete choice modeling technique, applied to rank ordered lists of candidates ("match runs") generated by the current, OPTN lung allocation policy. We analyzed all match runs from 2018 (excluding reallocations by an importing organ procurement organization). 
The year 2018 was chosen to reflect the current lung allocation policy (implemented in November 2017), which is based on geographic concentric circles (zones) [14]. Due to the aforementioned differences in sorting and classification, we developed separate adult and pediatric donor models. Adult donor match run data This study used data from the Organ Procurement and Transplantation Network (OPTN). The OPTN data system includes data on all donors, wait-listed candidates, and transplant recipients in the US, submitted by the members of the OPTN, and has been described elsewhere. The Health Resources and Services Administration (HRSA), US Department of Health and Human Services provides oversight to the activities of the OPTN contractor. IRB exemption was obtained from the US Department of Health and Human Services Health Resources and Services Administration (HRSA). Data produced by 6466 match runs for 5913 adult lung donors were obtained. An average of 402 candidates were ranked in each match run. As a result, we had data for 2,602,794 ranked candidates, with many candidates appearing on multiple match runs. Candidates screened off of match runs, for example if the donor's age exceeded the transplant center's maximum acceptance age, were excluded. Pediatric donor match run data Rankings produced from 534 match runs for 488 pediatric lung donors were used for the pediatric model. An average of 274 candidates were ranked in each match run. As a result, 175,342 observations for estimating the pediatric donor lung allocation model were used. Analytic approach to modeling candidate rankings Analogous to how consumers determine desirability of products based on their attributes, the matching algorithm assigns an unobserved priority score to each candidate during every match run based on that candidate's characteristics.
More formally, the priority score assigned to each candidate j can be represented by the following function: $$u_j = v_j + \varepsilon_j, \quad j = 1, \ldots, J,$$ where $v_j$ is the observable component of the function that depends on the attributes of the candidate (e.g., location, blood type). Adult donor model estimation The four major attributes used to rank candidates in each match run were included as model covariates: (1) medical priority (LAS), (2) candidate age, (3) candidate's transplant center proximity to the donor hospital, and (4) blood type identical, compatible, or intended incompatible with the donor. In turn, the observable component of the priority function was specified as follows: $$V = \beta_{\text{LAS}} \times \text{LAS} + \beta_{\text{CHILD}} \times \text{CHILD} + \beta_{\text{DISTANCE}} \times \text{DISTANCE} + \beta_{\text{ABO\_IDENTICAL}} \times \text{ABO\_IDENTICAL},$$ where LAS is a continuous, linear variable that captures the lung allocation score (in our sample, this variable ranges from 0.07 to 96.23); CHILD is a dummy-coded variable that equals 1 for pediatric candidates below the age of 12, and 0 for all other candidates; DISTANCE is a continuous, linear variable that captures the distance from a candidate to the donor hospital in NM (in our sample this variable ranges from 0 to 4415.25 NM); ABO_IDENTICAL is an effects-coded variable that is equal to 1 for candidates with identical blood type as the organ donor and is equal to − 1 for candidates with a compatible (or intended incompatible) blood type to the organ donor.
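Rank-ordered ("exploded") logit treats an observed ranking as a sequence of choices: the top candidate is chosen from all candidates, the second from those remaining, and so on. The following sketch of the log-likelihood of one observed ranking, given observable scores $v_j$, is illustrative only; the study's models were estimated with Stata rather than hand-coded:

```python
import math

def rank_ordered_logit_loglik(scores_in_rank_order):
    """Log-likelihood of an observed ranking under the exploded-logit model.

    scores_in_rank_order[k] is v_j for the candidate ranked k-th (k = 0 is top).
    Each term is the logit probability of the k-th candidate being chosen
    from the candidates not yet ranked.
    """
    ll = 0.0
    for k, v in enumerate(scores_in_rank_order):
        remaining = scores_in_rank_order[k:]
        ll += v - math.log(sum(math.exp(u) for u in remaining))
    return ll

# With identical scores, every ranking of n candidates is equally likely,
# so the log-likelihood is -log(n!):
print(rank_ordered_logit_loglik([0.0, 0.0, 0.0]))  # -log(6), about -1.792
```

Maximizing this quantity summed over all match runs, with $v_j$ specified as the linear function above, yields the coefficient estimates reported in Table 1.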
Pediatric donor model estimation The same four attributes used for adult donor match runs were used to rank candidates for pediatric donors, with one exception: adolescent (age 12–17) priority was estimated separately from child (0–11) priority to reflect this important distinction in current lung policy. In turn, we specified the observable component of the priority function as follows: $$V = \beta_{\text{LAS}} \times \text{LAS} + \beta_{\text{CHILD}} \times \text{CHILD} + \beta_{\text{ADOLESCENT}} \times \text{ADOLESCENT} + \beta_{\text{DISTANCE}} \times \text{DISTANCE} + \beta_{\text{ABO\_IDENTICAL}} \times \text{ABO\_IDENTICAL},$$ where LAS is a continuous, linear variable that captures the lung allocation score for patients older than 12 years (in our sample, this variable ranges from 0 to 96.23); CHILD is a dummy-coded variable that equals 1 for pediatric candidates below the age of 12, and 0 for all other candidates; ADOLESCENT is a dummy-coded variable that equals 1 for candidates between the ages of 12 and 17 years old and 0 for all other candidates; DISTANCE is a continuous, linear variable that captures the distance from a candidate to the donor hospital in NM (in our sample, this variable ranges from 0 to 4040.68 NM); ABO_IDENTICAL is an effects-coded variable that is equal to 1 for candidates with the same blood type as the donor and is equal to − 1 for candidates with a compatible blood type or incompatible blood type to the donor. Model estimation was performed using Stata statistical software, Release 16, StataCorp LLC, College Station, TX. Determining the relative importance of factors We used the model coefficients to rank candidate attributes in terms of their relative importance to the ordering of candidates in lung allocation, separately for the adult donor and pediatric donor models.
This was done by taking the difference between the score for the most preferred level of an attribute and the score for the least preferred level of the same attribute. We quantified "exchange rates" to express the relative importance of each factor compared to distance. These rates convey the number of NM required to have the same effect on a candidate's total score as a change in LAS; blood type identical versus compatible; or pediatric versus adult candidate. Evaluating model performance After estimating the adult and pediatric donor models, we used the resulting parameters to calculate a points-based composite allocation score for each candidate. We used these scores to predict the rank that each of the candidates would have received if the points-based system had been used. The closer these predicted rankings are to the actual rankings, the more the points-based scores reflect the current lung allocation policy. Spearman's rank correlation coefficient and Kendall's Tau were used for comparing predicted and actual rankings. Adult donor model results Table 1 contains the coefficients from the rank-ordered logit model estimated for adult donors. The direction of these coefficients tells us how changing one attribute would change a candidate's ranking in a given match run. Specifically, we see that candidates are ranked higher if they are adults, have higher LAS scores, are registered at a transplant center closer to the donor hospital, or have an identical blood type to the donor. Table 1 Rank-ordered logit estimates Distance in our sample ranges from 0 to 4415.25 NM. This implies that the maximum difference in distance score is 30.907 (30.907 = 0 – (− 0.007 * 4415.25)). By making this calculation for each attribute, we ranked candidate attributes in order of importance, where larger maximum differences imply greater importance. These calculations are presented in Table 2. 
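The "maximum difference" importance calculation can be reproduced directly from the quoted coefficients. The sketch below uses the adult-model values stated in this section (β_LAS = 0.040, β_DISTANCE = − 0.007) and the reported sample ranges; the remaining coefficients appear only in Table 1 and are not reproduced here:

```python
# Importance of a linear attribute = score at its most preferred level minus
# score at its least preferred level, i.e., |beta| * (sample range).
def importance(beta, low, high):
    return abs(beta) * (high - low)

dist_importance = importance(-0.007, 0.0, 4415.25)  # sample distance range, NM
las_importance = importance(0.040, 0.07, 96.23)     # sample LAS range

print(round(dist_importance, 3))  # 30.907, matching the text
print(round(las_importance, 3))   # 3.846
```

The much larger range-weighted effect of distance is what makes proximity the dominant attribute in Table 2, even though a single LAS point is worth far more than a single nautical mile.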
Based on these calculations, proximity was found to be the most important attribute in lung allocation. Table 2 Ranking candidate attributes by importance in lung allocation The coefficients were also used to quantify the relative importance of candidate attributes by expressing changes in one attribute in terms of another. For example, as seen in Table 3, reducing a patient's LAS by 25 points lowers their composite allocation score by exactly 1 point (– 1 = 0.040 * (– 25)). By comparison, increasing the patient's distance from the donor hospital by 142.857 NM reduces their composite allocation score by exactly 1 point (– 1 = – 0.007 * 142.857). Thus, in terms of the composite score, being 142.857 NM closer to the donor hospital is equivalent to having a 25-point higher LAS. In Table 3, we compared the impact on the composite score of changes in each attribute in terms of changes in a candidate's proximity to the donor hospital. Table 3 Converting changes in each attribute into changes in NM ("exchange rates") In addition to providing information on the relative importance of individual attributes, we can use the coefficients reported in Table 1 to calculate composite allocation scores for actual or hypothetical candidates. For example, suppose a set of lungs from an adult donor has become available and there are two adult candidates on the match run. The first candidate ("A") is an adult, located 200 NM away from the donor hospital, has a LAS score of 50, and an identical blood type to the donor. The second candidate ("B") is an adult, located 251 NM away from the donor hospital, has a LAS score of 90, and also an identical blood type to the donor. Based on current policy, the LAS 50 patient would be offered the donor lungs before the much more medically urgent patient with a LAS of 90. However, based on the coefficients in Table 1, the composite score associated with candidate A would be 1.608, and the score associated with candidate B would be 2.851.
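These two worked scores can be reproduced from the quoted coefficients. Note that the ABO-identical coefficient is not quoted in the text: the value 1.008 used below is inferred from the two reported scores and should be treated as an assumption for illustration (the CHILD term is omitted because both candidates are adults):

```python
# Composite allocation score for an adult-donor match, using the linear form
# above. beta_abo = 1.008 is inferred from the worked example, not quoted.
def composite_score(las, distance_nm, abo_identical,
                    beta_las=0.040, beta_dist=-0.007, beta_abo=1.008):
    abo = 1 if abo_identical else -1  # effects coding, as described above
    return beta_las * las + beta_dist * distance_nm + beta_abo * abo

a = composite_score(las=50, distance_nm=200, abo_identical=True)
b = composite_score(las=90, distance_nm=251, abo_identical=True)
print(round(a, 3), round(b, 3))  # 1.608 2.851 -> candidate B now ranks first
```

Because 2.851 > 1.608, the score-based ranking places candidate B first despite the extra 51 NM, reversing the current-policy order.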
Therefore, under the composite score approach, the candidate order would be reversed compared to the current, classification-based policy. Despite being outside of the 250 nautical mile boundary, the composite scoring approach would allow the severity of medical need reflected in an LAS of 90 to more than compensate for the relatively minimal additional distance required to ship the organ to this candidate (see Table 4).

Table 4 Example of composite score ranking versus current policy ranking for two candidates

To assess the degree to which candidate rankings from the composite score reflect rankings under the current policy, we calculated candidates' scores for 2359 match runs that included at least 10 candidates and quantified the correlation between score-based ranks and current policy ranks. (This comparison is illustrated in Additional file 1: Table S1 by showing rankings under the current vs. a score-based policy for the first 25 candidates of a sample match run.) We chose to calculate new rankings for only a sample of match runs because calculating predictive performance metrics is computationally time-consuming for a large number of observations.

Table 5 reports Spearman correlation coefficients and Kendall's Tau comparing points-based rankings with the actual rankings produced by the matching algorithm for the 2359 match runs. As shown in the table, the mean for both of these coefficients is at least 0.80, suggesting that points-based rankings are (on average) very similar to the actual rankings.

Table 5 Predictive performance metrics

Figure 3 illustrates a scatter plot of the current policy rankings and points-based rankings for an adult donor match run with 873 candidates and having the median Kendall's Tau of 0.808. If the current policy rankings and points-based rankings were identical, all points on this scatter plot would lie on the 45°-line extending from the origin (illustrated in red).
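For readers unfamiliar with the metric, Kendall's Tau can be computed from scratch as the normalized difference between concordant and discordant candidate pairs. The rankings below are toy values for illustration only, not actual match-run data.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's Tau: (concordant - discordant) pairs over total pairs."""
    pairs = list(combinations(range(len(rank_a)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) > 0
    )
    discordant = sum(
        1 for i, j in pairs
        if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) < 0
    )
    return (concordant - discordant) / len(pairs)

# Toy example: actual match-run ranks vs. score-predicted ranks, 5 candidates.
actual    = [1, 2, 3, 4, 5]
predicted = [1, 3, 2, 4, 5]   # one adjacent swap
print(kendall_tau(actual, predicted))  # 0.8
```

A value of 1 indicates identical orderings, 0 indicates no association, and −1 indicates complete reversal; the mean of 0.808 reported for the adult donor match runs therefore reflects strong but not perfect agreement.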
In reality, we see that though the rank correlation is high, there are still notable differences between the two sets of rankings. Specifically, some candidates in zones B and C—for example, candidates Y and Z as annotated on the figure—have higher priority (numerically lower ranking) under the points-based system than under current policy. This is because the current system grants absolute priority to candidates in more proximal zones. By contrast, under a points-based system, candidates farther away from a donor hospital may have other attributes (e.g. higher LAS scores) that overcome their lack of proximity.

Fig. 3 Comparison of actual and predicted rankings for adult donor match run with median Kendall's Tau. This scatterplot (for one particular adult donor match run) shows that candidate rankings under a composite-score based approach are generally highly correlated with those under current policy. However, the figure also reveals important ways in which the score-based approach rank orders patients differently than current policy by eliminating hard boundaries. Four candidate profiles are shown to illustrate salient differences in rankings:
Candidate W: LAS (36), distance (201.1), ABO(O), Adult.
Candidate X: LAS (32), distance (222.9), ABO(A), Adult.
Candidate Y: LAS (86), distance (275.7), ABO(O), Adult.
Candidate Z: LAS (74), distance (528.6), ABO(O), Adult.
Image created by Dallas Wood using Stata Version 16

Pediatric donor model results

Table 1 contains the coefficients from the rank-ordered logit model estimated from pediatric donor match runs. As in the adult model, these coefficients were used to make inferences about how candidate attributes influence donor lung allocation. Specifically, the score-based system ranks candidates higher if they are younger than 12 years old, have a higher LAS, are registered at a transplant center closer to the donor hospital, or have identical blood type to donors.
Based on calculations shown in Table 2, proximity was found to be the most important attribute in allocating pediatric donor lungs. Table 3 shows that for the pediatric donor model, an increase of 25 LAS points is equivalent to being 135.714 NM closer in terms of the composite score. Table 5 reports Spearman correlation coefficients (mean of 0.911) and Kendall's Tau (0.792) for comparing points-based rankings with the actual rankings produced by the matching algorithm for all 453 pediatric donor match runs having at least 10 candidates.

Figure 4 illustrates a scatter plot of the current policy rankings and points-based rankings for a 138-candidate, pediatric donor match run having the median Kendall's Tau of 0.797. As in the adult donor model, we see that there are some differences between the two sets of rankings. Specifically, as seen with the adult donor model, candidates in further-away zones are sometimes ranked higher than more proximal candidates under the points-based system compared to current policy. For example, though all zone A candidates would rank ahead of Candidate I under the classification-based system, Candidate I would rank near the very top under a points-based system due to having an extremely high LAS of 92.

Fig. 4 Comparison of actual and predicted rankings for pediatric donor match run with median Kendall's Tau. This scatterplot (for one particular pediatric donor match run) shows that candidate rankings under a composite-score based approach are generally highly correlated with those under current policy. However, the figure also reveals important ways in which the score-based approach rank orders patients differently than current policy by eliminating hard boundaries. Four candidate profiles are shown to illustrate salient differences in rankings:
Candidate H: LAS (69), distance (190.5), ABO(AB), Adult.
Candidate I: LAS (92), distance (314.3), ABO(A), Adult.
Candidate J: LAS (33), distance (484.1), ABO(AB), Adult.
Candidate K: LAS (86), distance (523.2), ABO(A), Adult.
Image created by Dallas Wood using Stata Version 16

Although the computerized match system plays a critical role in matching donor organs and candidates, the value judgments inherent in the current classification-based system can be opaque. An alternative way to make organ allocation decisions is to leverage a points-based framework that transparently expresses the relative importance of proximity, medical priority, and other factors to form a mathematically-derived, composite score. Our analysis sought to determine if preferences and priorities within current lung allocation policy could be captured, at least approximately, by composite scores.

First, we used rank-ordered logistic regression, a conventional discrete choice modeling technique, to estimate two statistical models based on match runs from 2018—one for adult donor lungs and one for pediatric donor lungs. These statistical models estimated scores that quantified how important the following candidate attributes are in lung allocation rankings: (1) medical priority (i.e., LAS), (2) candidate age, (3) candidate proximity to donor hospital, and (4) blood type. Second, we confirmed that the estimated scores approximately reflect the current lung allocation policy by comparing score-based candidate rankings with rankings from the current system. Overall, we demonstrate that these rankings are highly correlated with the original ranks produced by the matching algorithm.

The proximity of the candidate's transplant hospital to the donor hospital was found to be the most important factor in a composite score that reflects the current policy. In terms of attribute "exchange rates," 25 LAS points equates to just 143 NM, implying that a nearby candidate with an LAS of 45 would be prioritized ahead of a LAS 70 candidate just 150 NM further away.
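The exchange-rate arithmetic behind these comparisons is a one-line calculation. The adult LAS coefficient (0.040) and the distance coefficient (− 0.007) are from Table 1; the pediatric LAS coefficient used below (0.038) is inferred from the reported 135.714 NM exchange rate rather than quoted directly, so treat it as an assumption.

```python
def nm_equivalent(las_points, las_coef, dist_coef=-0.007):
    """NM change with the same composite-score effect as `las_points` LAS points."""
    return las_points * las_coef / abs(dist_coef)

adult = nm_equivalent(25, 0.040)      # approximately 142.857 NM (rounds to ~143)
pediatric = nm_equivalent(25, 0.038)  # approximately 135.714 NM; 0.038 is inferred, not published
```

The same conversion applied to the age and blood-type coefficients produces the remaining rows of Table 3.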
The rationale for prioritizing patients based on proximity reflects both system efficiency and organ viability considerations, as transporting lungs over long distances incurs transportation costs, travel time for the surgical recovery team, and potentially detrimental effects of organ ischemia time [15,16,17,18,19]. The manner and degree to which proximity should influence candidate rankings is a matter of ongoing debate [20,21,22].

Although the results we present are insightful, it is important to note that they are subject to limitations. First, due to the opacity of the current, classification-based system, the precise value judgments that emerged from the revealed preference analysis do not necessarily reflect policymakers' intended value judgments. Second, the model specification we used for several key attributes oversimplified the way these attributes enter the lung allocation rankings. For example, in both the adult donor and pediatric donor models, we only estimated a single coefficient for candidates younger than 12 years old. As a result, we did not differentiate candidates with "Priority 1" status from candidates with "Priority 2" status [1], which may slightly reduce the accuracy of both models' predictions. We also simplified the composite score by omitting the waiting time attribute, which plays a subordinate role in lung allocation (essentially serving merely as a tiebreaker between two candidates with identical LAS or medical priority).

In the current policy, distance is either infinitely important (across zones) or of zero importance (within zones). This composite scoring approach yields an average estimate of the impact of distance as a continuous linear function (Figs. 5, 6). Though specification of distance as a continuous, linear term instead of a zone-based categorical variable departs from the structure of current policy, this linear parameterization is more consistent with the spirit and intent of composite-score based allocation.
Fig. 5 Illustration of the importance of distance in current policy. In the current lung allocation system, in which classifications are defined, in part, by geographic zones representing concentric circles around the donor hospital, the role of proximity in candidate rank-ordering varies: within a zone, proximity has zero importance, but since candidates in further-away zones cannot supersede candidates in a more proximal zone, proximity effectively has infinite importance across zones. Image created by Darren Stewart using Microsoft Excel Version 2016

Fig. 6 Illustration showing how the revealed preference distance effect is a blended estimate. A linear relationship between distance and candidate priority was assumed; this was an intentional oversimplification to aid model interpretability and reflect the spirit of the continuous distribution framework, in which incremental changes in numerical factors such as distance are to contribute incrementally to the composite score. The − 0.007 coefficient estimated for both the adult and pediatric donor models can be thought of as a blended average of the current relationship between distance and priority, which varies from zero importance (within zones) to infinite importance (across zones). Image created by Darren Stewart using Microsoft Excel Version 2016

Revealed preference analysis of match runs produced a composite score that captures the essence of current policy while removing hard boundaries. As highlighted in Table 4, this approach avoids artificial boundaries that currently preclude a candidate with greater medical priority (LAS) from being ranked higher than a lower-LAS patient solely because the higher-LAS candidate's transplant hospital is on the other side of a geographic zone boundary. The linear parameterization also permits highly interpretable value judgment expressions (i.e., "exchange rates"), as shown in Table 3.
So, could developing composite scores through revealed preference analysis be the solution to migrating lung allocation policy to the continuous allocation framework? This is a possibility, although recent policy deliberations of the OPTN Lung Transplantation Committee (Lung Committee) have suggested the need for the new system to include several new attributes—for example, candidate height and degree of Human Leukocyte Antigen (HLA) allo-antibody sensitization—that are not included in current policy. These factors would somehow need to be appended to the composite scores shown here. An alternative approach would be to develop an entirely new composite scoring system based on a reevaluation of the degree to which proximity and other factors should be valued relative to medical need, as opposed to deriving the composite score from the current policy, which some have criticized [23].

The primary value in these revealed preference-derived scores, we believe, is in highlighting the degree to which each of the four key attributes influences candidate rank-ordering under the current policy for comparison to an idealized policy (i.e., the relative importance the OPTN and broader transplant community believe these factors should have in a new allocation system). The Lung Committee is exploring the use of the analytic hierarchy process (AHP), a structured approach to eliciting value judgments and preferences from stakeholders, to establish this idealized policy [24,25,26,27,28]. The AHP results have been compared with the revealed preference analysis presented herein to stimulate discussion on the appropriate level of importance to be placed on each attribute, in accordance with federal regulation governing organ allocation policies [4].
In theory, a carefully crafted, continuous version of lung allocation policy has the potential to make greater use of the limited supply of donor lungs by transplanting more patients with the highest predicted benefit of transplant, while also ensuring that access to lungs is equitable and accounting for inefficiencies related to transportation logistics over long distances and under tight time restrictions. Simulation modeling will be used to forecast the impact of composite scoring options compared to current policy, and as with all OPTN policy changes, the effects of the new policy on patients and the transplant network as a whole will be closely monitored to determine if adjustments are necessary. Policy changes will entail fine-tuning the score (e.g., increasing the coefficients of some variables and decreasing others) as opposed to shuffling classifications.

The composite scoring approach should allow lung allocation to readily adapt to future innovations; for example, if technologies such as ex-vivo lung perfusion [29] become widely used and can reduce the deleterious effect of organ ischemia time associated with travel, the score can be tuned by reducing the relative importance of proximity compared to other factors. And as evidence supporting an association between other factors (e.g., donor/candidate size-matching; use of extra-corporeal membrane oxygenation) and recipient outcomes is generated, the score can be augmented to account for such discoveries. This continuous composite score approach to lung allocation policy has the potential to more effectively utilize the limited supply of donor lungs in a manner consistent with the priorities and preferences of the transplant community.

Deidentified OPTN match run data were made available to Research Triangle Institute (RTI) under the terms of a data use agreement signed and returned to UNOS.
The data that support the findings of this study are available from the OPTN, but restrictions apply to the availability of these data, which were used under license for the current study. Data are available upon reasonable request from the OPTN.

Abbreviations

AHP: Analytic hierarchy process
DSA: Donor service area
IRB: Institutional review board
HRSA: Health Resources and Services Administration
LAS: Lung allocation score
OPTN: Organ Procurement and Transplantation Network
UNOS: United Network for Organ Sharing

References

1. Organ Procurement and Transplantation Network. OPTN Policies. https://optn.transplant.hrsa.gov/media/1200/optn_policies.pdf. Accessed 12 June 2020.
2. Organ Procurement and Transplantation Network. OPTN Policy 10: Allocation of Lungs (2018).
3. Egan TM, Murray S, Bustami RT, et al. Development of the new lung allocation system in the United States. Am J Transpl. 2006;6(5p2):1212–27.
4. Department of Health and Human Services (HHS). Organ procurement and transplantation network; Final Rule (42 CFR, Part 121). Fed Regist. 1999;64(202):56649–61.
5. Alcorn J. Continuous Distribution of Lungs. OPTN Thoracic Organ Transplantation Committee; 2018. https://optn.transplant.hrsa.gov/media/3111/thoracic_publiccomment_201908.pdf. Accessed 18 June 2020.
6. Snyder JJ, Salkowski N, Wey A, Pyke J, Israni AK, Kasiske BL. Organ distribution without geographic boundaries: a possible framework for organ allocation. Am J Transpl. 2018;18(11):2635–40. https://doi.org/10.1111/ajt.15115.
7. Clark MD, Leech D, Gumber A, et al. Who should be prioritized for renal transplantation? Analysis of key stakeholder preferences using discrete choice experiments. BMC Nephrol. 2012;13(1):152.
8. Oedingen C, Bartling T, Krauth C. Public, medical professionals' and patients' preferences for the allocation of donor organs for transplantation: study protocol for discrete choice experiments. BMJ Open. 2018;8(10):e026040.
9. Bridges JFP, Hauber AB, Marshall D, et al.
Conjoint analysis applications in health—a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health. 2011;14(4):403–13.
10. De Bekker-Grob EW, Ryan M, Gerard K. Discrete choice experiments in health economics: a review of the literature. Health Econ. 2012;21(2):145–72.
11. Soekhai V, de Bekker-Grob EW, Ellis AR, Vass CM. Discrete choice experiments in health economics: past, present and future. Pharmacoeconomics. 2019;37(2):201–26.
12. Mark TL, Swait J. Using stated preference and revealed preference modeling to evaluate prescribing decisions. Health Econ. 2004;13(6):563–73.
13. Beggs S, Cardell S, Hausman J. Assessing the potential demand for electric cars. J Econom. 1981;17(1):1–19.
14. Organ Procurement and Transplantation Network. Policy modification to lung distribution sequence. https://optn.transplant.hrsa.gov/news/policy-modification-to-lung-distribution-sequence/. Accessed 10 June 2020.
15. Novick RJ, Bennett LE, Meyer DM, Hosenpud JD. Influence of graft ischemic time and donor age on survival after lung transplantation. J Heart Lung Transpl. 1999;18(5):425–31.
16. Meyer DM, Bennett LE, Novick RJ, Hosenpud JD. Effect of donor age and ischemic time on intermediate survival and morbidity after lung transplantation. Chest. 2000;118(5):1255–62.
17. Hennessy SA, Hranjec T, Emaminia A, et al. Geographic distance between donor and recipient does not influence outcomes after lung transplantation. Ann Thorac Surg. 2011;92(5):1847–53.
18. Baldwin MR, Peterson ER, Easthausen I, et al. Donor age and early graft failure after lung transplantation: a cohort study. Am J Transplant. 2013;13(10):2685–95.
19. Mulvihill MS, Gulack BC, Ganapathi AM, et al. The association of donor age and survival is independent of ischemic time following deceased donor lung transplantation. Clin Transplant. 2017;31(7):e12993.
20. Puri V, Hachem RR, Frye CC, et al. Unintended consequences of changes to lung allocation policy. Am J Transplant. 2019;19(8):2164–7.
21. Lehman RR, Chan KM.
Elimination of the donor service area (DSA) from lung allocation: no turning back. Am J Transplant. 2019;19(8):2151–2.
22. Glazier AK. The lung lawsuit: a case study in organ allocation policy and administrative law. J Health Biomed L. 2018;14:139.
23. Russo MJ, Iribarne A, Hong KN, et al. High lung allocation score is associated with increased morbidity and mortality following transplantation. Chest. 2010;137(3):651–7.
24. Saaty TL. Decision making with the analytic hierarchy process. Int J Serv Sci. 2008;1(1):83–98.
25. Lin CS, Harris SL. A unified framework for the prioritization of organ transplant patients: analytic hierarchy process, sensitivity and multifactor robustness study. J Multi-Criteria Decis Anal. 2013;20(3–4):157–72.
26. Danner M, Vennedey V, Hiligsmann M, Fauser S, Gross C, Stock S. Comparing analytic hierarchy process and discrete-choice experiment to elicit patient preferences for treatment characteristics in age-related macular degeneration. Value Health. 2017;20(8):1166–73.
27. Taherkhani N, Sepehri MM, Shafaghi S, Khatibi T. Identification and weighting of kidney allocation criteria: a novel multi-expert fuzzy method. BMC Med Inform Decis Mak. 2019;19(1):182.
28. Al-Ebbini L, Oztekin A, Chen Y. FLAS: fuzzy lung allocation system for US-based transplantations. Eur J Oper Res. 2016;248(3):1051–65.
29. Cypel M, Yeung JC, Liu M, et al. Normothermic ex vivo lung perfusion in clinical lung transplantation. N Engl J Med. 2011;364(15):1431–40.

Acknowledgements

The authors recognize Olga Kosachevsky for contributing to manuscript preparation. We also greatly appreciate the OPTN Lung Committee's feedback on the merits of a new approach to drive policy deliberations: comparing value judgements embedded in current policy (through revealed preference analysis) versus those derived from an analytic hierarchy process (AHP) exercise. The data reported here have been supplied by the United Network for Organ Sharing (UNOS) as the contractor for the Organ Procurement and Transplantation Network (OPTN).
The interpretation and reporting of these data are the responsibility of the authors and in no way should be seen as an official policy of or interpretation by the OPTN or the U.S. Government. This work was conducted under the auspices of the United Network for Organ Sharing (UNOS), contractor for the OPTN, under Contract 250-2019-00001C (US Department of Health and Human Services, Health Resources and Services Administration, Healthcare Systems Bureau, Division of Transplantation). Research Triangle Institute (RTI) was hired by UNOS to conduct the analysis, with funding coming from HRSA's OPTN contract #250-2019-00001C.

Author information

United Network for Organ Sharing, Richmond, VA, USA
Darren E. Stewart, James B. Alcorn & Rebecca E. Goff
Research Triangle Institute International, Research Triangle Park, NC, USA
Dallas W. Wood & Michael Hayes
Division of Pulmonary, Critical Care, and Sleep Medicine, University of Washington, Washington, USA
Erika D. Lease
RTI Health Solutions, Research Triangle Park, NC, USA
Brett Hauber
University of Washington School of Pharmacy, Seattle, WA, USA

Contributions

DS conceptualized the analysis and the manuscript and participated in collaborative meetings reviewing interim results; DW performed all analyses and participated in collaborative meetings reviewing interim results; JA participated in collaborative meetings reviewing interim results and provided background information pertaining to OPTN policies; EL contributed to interpretation of results, manuscript review, and manuscript revisions; MH performed background research; BH provided methodological oversight and subject matter expertise; RG provided subject matter expertise regarding lung allocation policy and analytics and participated in collaborative meetings reviewing interim results. All authors contributed to writing the paper, read, and approved the final manuscript.

Correspondence to Darren E. Stewart.
This research was conducted in compliance with the Declaration of Helsinki. IRB exemption was obtained from the US Department of Health and Human Services Health Resources and Services Administration (HRSA) under the Public Benefit and Service program exemption of the Common Rule. The authors report no competing interests.

Additional file 1: Table S1. Candidates are sorted by current allocation policy rank. Candidates' composite score ranks, derived through revealed preference logistic regression analysis, are shown for comparison. Some candidates' positions on the match run would improve under composite scoring, whereas other candidates would appear further down on the match run.

Cite this article: Stewart, D.E., Wood, D.W., Alcorn, J.B. et al. A revealed preference analysis to develop composite scores approximating lung allocation policy in the U.S. BMC Med Inform Decis Mak 21, 8 (2021). https://doi.org/10.1186/s12911-020-01377-7

Keywords: Lung allocation; Rank-ordered logistic regression; Organ Procurement and Transplantation Network (OPTN); Lung allocation score (LAS); Continuous allocation
Logarithmic derivative

In mathematics, specifically in calculus and complex analysis, the logarithmic derivative of a function f is defined by the formula
${\frac {f'}{f}}$
where $f'$ is the derivative of f.[1] Intuitively, this is the infinitesimal relative change in f; that is, the infinitesimal absolute change in f, namely $f',$ scaled by the current value of f.

When f is a function f(x) of a real variable x, and takes real, strictly positive values, this is equal to the derivative of ln(f), or the natural logarithm of f. This follows directly from the chain rule:[1]
${\frac {d}{dx}}\ln f(x)={\frac {1}{f(x)}}{\frac {df(x)}{dx}}$

Basic properties

Many properties of the real logarithm also apply to the logarithmic derivative, even when the function does not take values in the positive reals. For example, since the logarithm of a product is the sum of the logarithms of the factors, we have
$(\log uv)'=(\log u+\log v)'=(\log u)'+(\log v)'.$
So for positive-real-valued functions, the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors. But we can also use the Leibniz law for the derivative of a product to get
${\frac {(uv)'}{uv}}={\frac {u'v+uv'}{uv}}={\frac {u'}{u}}+{\frac {v'}{v}}.$
Thus, it is true for any function that the logarithmic derivative of a product is the sum of the logarithmic derivatives of the factors (when they are defined).

A corollary to this is that the logarithmic derivative of the reciprocal of a function is the negation of the logarithmic derivative of the function:
${\frac {(1/u)'}{1/u}}={\frac {-u'/u^{2}}{1/u}}=-{\frac {u'}{u}},$
just as the logarithm of the reciprocal of a positive real number is the negation of the logarithm of the number.
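The product rule above can be checked numerically. This is a quick sketch using centered finite differences on two arbitrary test functions (chosen for illustration; any smooth nonvanishing functions would do).

```python
import math

# Numerical check of the product rule for logarithmic derivatives:
#   (uv)'/(uv) = u'/u + v'/v
u = lambda x: math.sin(x) + 2.0   # arbitrary test function (nonvanishing)
v = lambda x: x**2 + 1.0          # arbitrary test function (nonvanishing)

def log_deriv(g, x, h=1e-6):
    """g'(x)/g(x) via a centered finite difference."""
    return (g(x + h) - g(x - h)) / (2 * h) / g(x)

prod = lambda x: u(x) * v(x)
for x0 in (0.4, 1.3, 2.2):
    assert abs(log_deriv(prod, x0) - (log_deriv(u, x0) + log_deriv(v, x0))) < 1e-6
```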
More generally, the logarithmic derivative of a quotient is the difference of the logarithmic derivatives of the dividend and the divisor:
${\frac {(u/v)'}{u/v}}={\frac {(u'v-uv')/v^{2}}{u/v}}={\frac {u'}{u}}-{\frac {v'}{v}},$
just as the logarithm of a quotient is the difference of the logarithms of the dividend and the divisor.

Generalising in another direction, the logarithmic derivative of a power (with constant real exponent) is the product of the exponent and the logarithmic derivative of the base:
${\frac {(u^{k})'}{u^{k}}}={\frac {ku^{k-1}u'}{u^{k}}}=k{\frac {u'}{u}},$
just as the logarithm of a power is the product of the exponent and the logarithm of the base.

In summary, both derivatives and logarithms have a product rule, a reciprocal rule, a quotient rule, and a power rule (compare the list of logarithmic identities); each pair of rules is related through the logarithmic derivative.

Computing ordinary derivatives using logarithmic derivatives

Main article: Logarithmic differentiation

Logarithmic derivatives can simplify the computation of derivatives requiring the product rule while producing the same result. The procedure is as follows: Suppose that $f(x)=u(x)v(x)$ and that we wish to compute $f'(x)$. Instead of computing it directly as $f'=u'v+v'u$, we compute its logarithmic derivative. That is, we compute:
${\frac {f'}{f}}={\frac {u'}{u}}+{\frac {v'}{v}}.$
Multiplying through by f computes f′:
$f'=f\cdot \left({\frac {u'}{u}}+{\frac {v'}{v}}\right).$
This technique is most useful when f is a product of a large number of factors, since it makes it possible to compute f′ by computing the logarithmic derivative of each factor, summing, and multiplying by f. For example, we can compute the logarithmic derivative of $e^{x^{2}}(x-2)^{3}(x-3)(x-1)^{-1}$ to be $2x+{\frac {3}{x-2}}+{\frac {1}{x-3}}-{\frac {1}{x-1}}$.
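This worked example can be verified numerically by comparing the claimed closed form against a finite-difference estimate of f′/f at a few sample points away from the zeros and the pole.

```python
import math

def f(x):
    return math.exp(x**2) * (x - 2)**3 * (x - 3) / (x - 1)

def log_deriv_numeric(x, h=1e-6):
    """f'(x)/f(x) estimated with a centered difference."""
    return (f(x + h) - f(x - h)) / (2 * h) / f(x)

def log_deriv_formula(x):
    """The closed form obtained term by term from each factor."""
    return 2*x + 3/(x - 2) + 1/(x - 3) - 1/(x - 1)

# Agreement at sample points (avoiding x = 1, 2, 3).
for x0 in (0.5, 4.0, -1.5):
    assert abs(log_deriv_numeric(x0) - log_deriv_formula(x0)) < 1e-4
```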
Integrating factors

The logarithmic derivative idea is closely connected to the integrating factor method for first-order differential equations. In operator terms, write
$D={\frac {d}{dx}}$
and let M denote the operator of multiplication by some given function G(x). Then
$M^{-1}DM$
can be written (by the product rule) as
$D+M^{*}$
where $M^{*}$ now denotes the multiplication operator by the logarithmic derivative
${\frac {G'}{G}}.$
In practice we are given an operator such as
$D+F=L$
and wish to solve equations $L(h)=f$ for the function h, given f. This then reduces to solving
${\frac {G'}{G}}=F,$
which has as solution $\exp \textstyle (\int F)$ with any indefinite integral of F.

Complex analysis

See also: Argument principle

The formula as given can be applied more widely; for example if f(z) is a meromorphic function, it makes sense at all complex values of z at which f has neither a zero nor a pole. Further, at a zero or a pole the logarithmic derivative behaves in a way that is easily analysed in terms of the particular case $z^{n}$ with n an integer, n ≠ 0. The logarithmic derivative is then $n/z$, and one can draw the general conclusion that for f meromorphic, the singularities of the logarithmic derivative of f are all simple poles, with residue n from a zero of order n, residue −n from a pole of order n. See argument principle. This information is often exploited in contour integration.[2][3]

In the field of Nevanlinna theory, an important lemma states that the proximity function of a logarithmic derivative is small with respect to the Nevanlinna characteristic of the original function, for instance $m(r,h'/h)=S(r,h)=o(T(r,h))$.[4]

The multiplicative group

Behind the use of the logarithmic derivative lie two basic facts about GL1, that is, the multiplicative group of real numbers or other field. The differential operator
$X{\frac {d}{dX}}$
is invariant under dilation (replacing X by aX for a constant).
And the differential form
${\frac {dx}{X}}$
is likewise invariant. For functions F into GL1, the formula
${\frac {dF}{F}}$
is therefore a pullback of the invariant form.

Examples

• Exponential growth and exponential decay are processes with constant logarithmic derivative.
• In mathematical finance, the Greek λ is the logarithmic derivative of derivative price with respect to underlying price.
• In numerical analysis, the condition number is the infinitesimal relative change in the output for a relative change in the input, and is thus a ratio of logarithmic derivatives.

See also

• Generalizations of the derivative – Fundamental construction of differential calculus
• Logarithmic differentiation – Method of mathematical differentiation
• Elasticity of a function

References

1. "Logarithmic derivative - Encyclopedia of Mathematics". encyclopediaofmath.org. 7 December 2012. Retrieved 12 August 2021.
2. Gonzalez, Mario (1991-09-24). Classical Complex Analysis. CRC Press. ISBN 978-0-8247-8415-7.
3. "Logarithmic residue - Encyclopedia of Mathematics". encyclopediaofmath.org. 7 June 2020. Retrieved 2021-08-12.
4. Zhang, Guan-hou (1993-01-01). Theory of Entire and Meromorphic Functions: Deficient and Asymptotic Values and Singular Directions. American Mathematical Soc. p. 18. ISBN 978-0-8218-8764-6. Retrieved 12 August 2021.
\begin{document} \begin{abstract} For a fixed prime $p$, we consider the (finite) set of supersingular elliptic curves over $\overline{\mathbb{F}}_p$. Hecke operators act on this set. We compute the asymptotic frequency with which a given supersingular elliptic curve visits another under this action. \end{abstract} \title{Equidistribution of Hecke points on the supersingular module} \section{Introduction} Let $p$ be a prime number. We denote by $E = \{ E_1, \ldots, E_n \}$ the set of isomorphism classes of supersingular elliptic curves over $\overline{\mathbb{F}}_p$. We denote by $S := \oplus_{i=1}^n \mathbb{Z} E_i$ the supersingular module in characteristic $p$ (i.e. $S$ is the free abelian group spanned by the elements of $E$). Hecke operators act on $S$ by \[ T_1:= id, \quad T_m (E_i) = \sum_C E_i/C, \quad m \geq 2, \] \noindent where $C$ runs through the subgroup schemes of $E_i$ of rank $m$. This definition is extended by linearity to $S$ and to $S_\mathbb{R}:=S\otimes \mathbb{R}$. For an integer $m \geq 1$ we put \[ B_{i,j} (m) = |\{ C \subset E_i, \quad |C|=m \textrm{ and } E_i/C \cong E_j \}|. \] We have that $T_m E_i = \sum_{j=1}^nB_{i,j} (m) E_j$. The matrix $\big(B_{i,j}(m)\big)_{i,j=1}^n$ is known as the Brandt matrix of order $m$. For a given $D = \sum_{i=1}^n a_i E_i \in S_\mathbb{R}$, we put $\deg D = \sum_{i=1}^n a_i$. We have that (\cite{Gross}, Proposition 2.7) $$\deg T_m E_i = \sum_{d|m \atop p \nmid d} d=: \sigma(m)_p,$$ leading us to define $\deg T_m:= \sigma(m)_p.$ Let $M$ be the set of probability measures on $E$. For every $i=1, \ldots , n$, we denote by $\delta_{E_i} \in M$ the Dirac measure supported on $E_i$. Let \[ S^+ := \Big\{ \sum_{i=1}^n a_i E_i \in S_\mathbb{R} \textrm{ such that } a_i \geq 0 \Big\} - \{0\}. \] For any $D = \sum_{i=1}^n a_i E_i \in S^+$, we put \[ \Theta_D := \frac{1}{\deg D} \sum_{i=1}^n a_i \delta_{E_i}. \] We have that $\Theta_D$ is a probability measure on $E$ and every element of $M$ has this form.
Hence, there is a natural action of the Hecke operators on $M$, given by $T_m \Theta_D := \Theta_{T_m D}.$ Each $E_i$ has a finite number of automorphisms. We define \[ w_i := | \textrm{Aut}(E_i)/\{\pm 1\}|, \quad W :=\sum_{i=1}^n \frac{1}{w_i} . \] The element $e:= \sum_{i=1}^n \frac{1}{w_i} E_i \in S \otimes \mathbb{Q}$ is Eisenstein (\cite{Gross}, p. 139), i.e. \begin{equation}\label{Eis} T_m(e)=\deg T_m e. \end{equation} We write $\Theta := \Theta_e$. Equation \eqref{Eis} implies that $T_m \Theta =\Theta$ for all $m\geq 1$. Let $C(E) \cong \mathbb{C}^n$ be the space of complex-valued functions on $E$. For $f \in C(E)$, we denote by $\norm{f}=\max_i |f(E_i)|$ and $$\Theta_D(f):= \int_{E} f \Theta_D = \frac{1}{\deg D} \sum_{i=1}^{n} a_if(E_i).$$ For a positive integer $m$, we write $m=p^km_p$ with $p\nmid m_p$. In this note, we will prove the following result: \begin{thm} \label{principalIntro} For all $i=1, \ldots , n$, the sequence of measures $\{ \Theta_{T_m E_i} \}$, where $m$ runs through a set of positive integers such that $m_p$ grows to infinity, is equidistributed with respect to $\Theta$. More precisely, for all $\varepsilon >0$, there exists $C_\varepsilon >0$ such that, for every $f \in C(E)$, and for every sequence of integers $m$ such that $m_p \rightarrow \infty$, we have that $$|\Theta_{T_m E_i}(f) - \Theta(f)| \leq C_\varepsilon \norm{f} n m^{-\frac{1}{2}+\varepsilon}.$$ \end{thm} We study the asymptotic frequency of the multiplicity of $E_j$ inside $T_m E_i$. That is, we investigate the behavior of the ratio $B_{i,j}(m)/\deg(T_m)$ when $m$ varies.
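As a concrete illustration (a standard small case, not worked out in the text): for $p=11$ there are $n=2$ supersingular curves, with $j$-invariants $1728$ and $0$ and weights $w_1=2$, $w_2=3$, so that $W = 1/2+1/3 = 5/6 = (p-1)/12$. The Eisenstein measure $\Theta=\Theta_e$ then assigns the masses

```latex
% Illustrative special case p = 11 (with the labels E_1 <-> j = 1728, E_2 <-> j = 0):
\[
  \Theta(\{E_1\}) \;=\; \frac{1/w_1}{W} \;=\; \frac{1/2}{5/6} \;=\; \frac{3}{5},
  \qquad
  \Theta(\{E_2\}) \;=\; \frac{1/w_2}{W} \;=\; \frac{1/3}{5/6} \;=\; \frac{2}{5},
\]
```

so the ratios $B_{i,j}(m)/\deg T_m$ tend to $3/5$ and $2/5$ as $m_p \to \infty$, whatever the starting curve $E_i$.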
We will prove Theorem \ref{principalIntro} in the equivalent formulation: \begin{thm} \label{principalfino} For all $\varepsilon>0$, there exists $C_\varepsilon >0$ such that for every sequence of integers $m$ such that $m_p \rightarrow \infty$, we have that \begin{equation} \label{cuantitativo} \Big| \frac{B_{i,j}(m)}{\deg T_m} - \frac{12}{w_j(p-1)}\Big| \leq C_\varepsilon m^{-\frac{1}{2}+\varepsilon}. \end{equation} In particular, \begin{equation}\label{principal} \lim_{m_p \rightarrow \infty} \frac{B_{i,j}(m)}{\deg T_m} = \frac{12}{w_j(p-1)}. \end{equation} \end{thm} The proof of this assertion is found in section \ref{dems}. \begin{rem} \label{masa} The equality $\sum_{j=1}^n \frac{B_{i,j}(m)}{\deg{T_m}} = 1,$ combined with equation \eqref{principal}, implies the mass formula of Deuring and Eichler: \[ W=\sum_{j=1}^n \frac{1}{w_j} = \frac{p-1}{12}. \] \end{rem} Theorem \ref{principalIntro} can be deduced from Theorem \ref{principalfino} as follows: Remark \ref{masa} implies that $\Theta=\sum_{j=1}^n \frac{12}{w_j(p-1)}\delta_{E_j}.$ Take $f \in C(E)$. We have that \[ |\Theta_{T_m E_i} (f)-\Theta(f)| \leq \norm{f} \sum_{j=1}^n \Big|\frac{B_{i,j}(m)}{\deg T_m} - \frac{12}{w_j(p-1)}\Big|. \] Hence, inequality \eqref{cuantitativo} implies Theorem \ref{principalIntro}. Let $h : E \rightarrow E$ be a function. Then $h$ defines an endomorphism of $S$ and of $S_\mathbb{R}$ by the rule $$h \big( \sum a_i E_i\big) := \sum a_i h(E_i).$$ We will also consider the action induced on $M$ by $h^*\Theta_D:= \Theta_{h(D)}$. \begin{cor} Let $q\neq p$ be a prime number. Let $h: E \rightarrow E$ be a function such that $h \circ T_q = T_q \circ h$. Then $h^* \Theta= \Theta$. In other words, $h$ can be identified with a permutation $\tau \in S_n$ by $h(E_i)=E_{\tau(i)}$ and we have that $w_i = w_{\tau(i)}$ for all $i=1, \ldots, n$. \end{cor} \noindent \textbf{Proof}: since $T_{q^k}$ is a polynomial in $T_q$, we also have that $h \circ T_{q^k} = T_{q^k} \circ h$. Let $f \in C(E)$.
We have that \begin{eqnarray} h^* \Theta (f) & = & \lim_{k \rightarrow \infty} h^*\Theta_{T_{q^k}E_1 } (f) \label{pasouno} \\ & = & \lim_{k \rightarrow \infty} \Theta_{h \circ T_{q^k}E_1} (f)\nonumber \\ &=& \lim_{k \rightarrow \infty} \Theta_{ T_{q^k} \big( h (E_1)\big)}(f) \nonumber \\ &=& \Theta (f), \label{pasodos} \end{eqnarray} \noindent where we have used Theorem \ref{principalIntro} in \eqref{pasouno} and \eqref{pasodos} $\blacksquare$ \\ The statement of Theorem \ref{principalIntro}, using the Hecke-invariant measure $\Theta$, has been included to emphasize the analogy with the fact that Hecke orbits are equidistributed on the modular curve $SL_2(\mathbb{Z})\backslash \mathbb{H}$ with respect to the hyperbolic measure, which is Hecke invariant (e.g. see \cite{ClozelUllmo}, Section 2). \subsection{Weight 2 Eisenstein series for $\Gamma_0(p)$} The modular curve $X_0(p)$ has two cusps, represented by $0$ and $\infty$. We denote by $\Gamma_\infty$ (resp. $\Gamma_0$) the stabilizer of $\infty$ (resp. 0). The associated weight 2 Eisenstein series are given by \begin{eqnarray} E_\infty (z) & = & \frac{1}{2}\lim_{\varepsilon \rightarrow 0^+} \sum_{\gamma \in \Gamma_\infty \backslash \Gamma_0(p)} j_\gamma (z)^{-2} |j_\gamma(z)|^{-2\varepsilon} \nonumber \\ E_0 (z) & = & \frac{1}{2} \lim_{\varepsilon \rightarrow 0^+} \sum_{\gamma \in \Gamma_0 \backslash \Gamma_0(p)} j_{\sigma_0^{-1}\gamma} (z)^{-2} |j_{\sigma_0^{-1}\gamma}(z)|^{-2\varepsilon}, \nonumber \end{eqnarray} \noindent where $ \sigma_0 = \left(\begin{array}{cc} 0 & -1/\sqrt{p} \\ \sqrt{p} & 0 \end{array}\right)$ and $j_\eta (z) = cz+d$ for $ \eta = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right)$. \\ The functions $E_\infty$ and $E_0$ are weight 2 modular forms for $\Gamma_0(p)$ and they are Hecke eigenforms. The Fourier expansions at $i \infty$ are (\cite{Miyake}, Theorem 7.2.12, p.
288) \begin{eqnarray} E_\infty(z) & = & 1 - \frac{3}{\pi y(p+1)} + \frac{24}{p^2-1} \sum_{n=1}^\infty b_n q^n \nonumber \\ E_0 (z) & = & - \frac{3}{\pi y (p+1)} - \frac{24p}{p^2-1} \sum_{n=1}^\infty a_n q^n, \nonumber \end{eqnarray} \noindent with the sequences $a_n$ and $b_n$ given by: \begin{itemize} \item if $p \nmid n,$ then $a_n = b_n = \sigma_1 (n)= \sum_{d|n}^{} d$ \\ \item if $ k \geq 1 $, then $b_{p^k} = p+1-p^{k+1}$ and $a_{p^k} = p^k $ \\ \item if $ p \nmid m$ and $k \geq 1,$ then $ b_{p^km} = -b_{p^k}b_m \textrm{ and } a_{p^km} = a_{p^k} a_m$. \end{itemize} By taking an appropriate linear combination, we obtain a non-cuspidal modular form, holomorphic at $i\infty$: \begin{eqnarray} f_0(z) & := & E_\infty(z) - E_0(z) \nonumber \\ & = & 1 + \frac{24}{p^2 - 1} \sum_{n=1}^\infty (p a_n + b_n) q^n. \nonumber \end{eqnarray} Since we have that \begin{eqnarray} E_\infty |_{\sigma_0} (z) & = & E_0(z) \nonumber \\ E_0 |_{\sigma_0} (z) & = & E_\infty (z), \nonumber \end{eqnarray} \noindent we see that $f_0$ is holomorphic at $\Gamma_0(p) 0$ as well. Since \[ \dim_{\mathbb{C}} M_2 (\Gamma_0(p)) = 1 + \dim_{\mathbb{C}} S_2 \big(\Gamma_0(p)\big) \] and since $f_0$ is holomorphic, non-zero and non-cuspidal, we have the decomposition \begin{equation} \label{sumadirecta} M_2(\Gamma_0(p)) = S_2(\Gamma_0(p)) \oplus \mathbb{C} f_0. \end{equation}
\nonumber \end{eqnarray} Hence, to prove Theorem \ref{principalfino} we may assume $p \nmid m$, which is what we will do in what follows. Our method is based on the interpretation of the multiplicities $B_{i,j}(m)$ as Fourier coefficients of a modular form. \begin{thm} \label{teta} For every $1 \leq i,j \leq n$, there exists a weight 2 modular form $f_{i,j}$ for $\Gamma_0(p)$ such that its $q$-expansion at $\infty$ is \[ f_{i,j}(z) := \frac{1}{2w_j} + \sum_{m=1}^\infty B_{i,j}(m) q^m, \quad q = e^{2\pi i z}. \] \end{thm} \noindent \textbf{Proof}: this fact is stated in \cite{Gross}, p.118. It is a particular case of \cite{Eichler}, Chapter II, Theorem 1 ($D=p, H=1, l=0$ in Eichler's notation). We remark that the theorem in \emph{loc. cit.} states modularity of a theta series constructed from an order in a quaternion algebra. The fact that this theta series is the same as our $f_{i,j}$ is a consequence of \cite{Gross}, Proposition 2.3 $\blacksquare$ Using \eqref{sumadirecta}, we can decompose \[ f_{i,j} = g_{i,j} + c_{i,j} f_0, \quad g_{i,j} \in S_2(\Gamma_0(p)), \quad c_{i,j} \in \mathbb{C}. \] Comparing the $q$-expansions, we get $c_{i,j} = \frac{1}{2 w_j}.$ We have that $$g_{i,j} = f_{i,j} -c_{i,j} f_0 = \sum_{m=1}^\infty c_m q^m,$$ \noindent where \[ c_m = B_{i,j} (m) - \frac{12}{w_j (p^2 -1)} (p a_m + b_m). \] The coefficient $c_m$ depends on $(i,j)$, but we suppress this dependence from the notation for simplicity. Since $p \nmid m$, we have that $\deg(T_m) = \sigma_1(m)$ and \[ c_m = B_{i,j} (m) - \frac{12}{w_j (p-1)} \sigma_1(m). \] Hence, \begin{eqnarray} \Big|\frac{B_{i,j} (m)}{\deg T_m} - \frac{12}{w_j (p -1)} \Big| & = & \frac{|c_m|}{\sigma_1(m)} \nonumber \\ & \leq & \frac{|c_m|}{m}. \nonumber \end{eqnarray} Using Deligne's theorem (\cite{Deligne}, th\'eor\`eme 8.2, previously Ramanujan's conjecture), we have that \[ c_m = O_\varepsilon(m^{1/2+\varepsilon}), \] \noindent concluding the proof. $\blacksquare$ \end{document}
Mining geographic variations of Plasmodium vivax for active surveillance: a case study in China Benyun Shi1,2, Qi Tan3, Xiao-Nong Zhou4 & Jiming Liu3 Geographic variations of an infectious disease characterize the spatial differentiation of disease incidences caused by various impact factors, such as environmental, demographic, and socioeconomic factors. Some factors may directly determine the force of infection of the disease (namely, explicit factors), while many other factors may indirectly affect the number of disease incidences via certain unmeasurable processes (namely, implicit factors). In this study, the impact of heterogeneous factors on geographic variations of Plasmodium vivax incidences is systematically investigated in Tengchong, Yunnan province, China. A space-time model, which combines a P. vivax transmission model with a hidden time-dependent process, is presented, taking into consideration both explicit and implicit factors. Specifically, the transmission model is built upon relevant demographic, environmental, and biophysical factors to describe the local infections of P. vivax, while the hidden time-dependent process is assessed by several socioeconomic factors to account for the imported cases of P. vivax. To quantitatively assess the impact of heterogeneous factors on geographic variations of P. vivax infections, a Markov chain Monte Carlo (MCMC) simulation method is developed to estimate the model parameters by fitting the space-time model to the reported spatial-temporal disease incidences. Since there is no ground-truth information available, the performance of the MCMC method is first evaluated against a synthetic dataset. The results show that the model parameters can be well estimated using the proposed MCMC method. Then, the proposed model is applied to investigate the geographic variations of P. vivax incidences among all 18 towns in Tengchong, Yunnan province, China.
Based on these geographic variations, the 18 towns can be further classified into five groups with similar socioeconomic causality for P. vivax incidences. Although this study focuses mainly on the transmission of P. vivax, the proposed space-time model is general and can readily be extended to investigate geographic variations of other diseases. Practically, such a computational model will offer new insights into active surveillance and strategic planning for disease surveillance and control. Disease surveillance systems play important roles in continuously monitoring the occurrence of an infectious disease at different geographic locations [1,2]. From the perspective of spatial epidemiology, the dependence or autocorrelations of disease incidences among nearby locations can be analysed from historical spatial-temporal disease incidences [3]. Accordingly, risk maps of the disease can be generated using appropriate spatial interpolation methods [4]. However, in reality, the natural transmission of an infectious disease can be caused and affected by many impact factors, including but not limited to environmental, demographic, socioeconomic, behavioural, genetic, biophysical, and other risk factors [5-8]. Specifically, some factors may directly determine the risk of infection of the disease, namely, explicit factors, while many other factors may indirectly affect the disease incidences via certain unobservable processes, namely, implicit factors. In view of this, it would be desirable and essential to systematically assess the integrated impact of heterogeneous factors on the geographic variations of disease incidences [9,10]. By doing so, public health authorities can efficiently and effectively perform active surveillance and control by means of strategically planning and utilizing their limited resources.
Technically speaking, many methods have been proposed to analyse complex spatial-temporal distributions of disease incidences, and to determine multiple impact factors underlying disease transmission. On the one hand, statistical analysis of different types of impact factors can produce risk maps of an infectious disease with respect to vectors [11], reservoirs [12], and human cases [13]. However, pure statistical analysis methods (e.g., spatial regression methods) are limited in exploring the real dynamics of disease transmission underlying the observed disease incidences. On the other hand, by systematically integrating various impact factors, various disease transmission models have been incorporated into the spatial statistics of infectious diseases. Different from statistical analysis, disease transmission models can explicitly describe the underlying epidemiological process from the perspective of transmission mechanism. Taking vector-borne diseases as an example, starting from the Ross model [14], a variety of differential equation models with different levels of complexity have been proposed to investigate the roles of different factors [15]. For example, Shi et al. have adopted a spatial transmission model to investigate the underlying disease transmission networks among different locations [16]. Unfortunately, due to the intrinsic complexity of disease transmission dynamics, there are still some other factors, the effects of which cannot be explicitly interpreted. This paper focuses on geographic variations of malaria incidences among 18 towns in Tengchong county, Yunnan province, China (see Fig. 1). The IDs and names of these towns are listed in Table 1. One reason malaria is chosen as a case study is that it is one of the most serious and deadly infectious diseases all over the world, especially in developing countries [17,18].
In China, Yunnan province ranked first in the number of reported malaria cases, and second in the incidence rate of the disease, from 1999 to 2004 [19]. In Tengchong county in particular, all 18 towns have been experiencing high Plasmodium vivax transmission in the past years, with annual incidence rates higher than 1 per 10,000 [20,21]. With respect to malaria elimination in Tengchong, it has been suggested by public health policy makers and practitioners that active surveillance would be an efficient strategy. Compared with passive surveillance (i.e., patients come to public health agencies for diagnosis and treatment), active surveillance aims to discover malaria infections in a timely manner through on-the-spot investigation. However, in practice, active surveillance is extremely costly and time-consuming, requiring many experienced public health workers. So far, very few experienced workers are available, particularly in remote and underdeveloped regions in China. For instance, in Tengchong's Centers for Disease Control (CDC), no more than five full-time workers are available to perform or coordinate active surveillance for about 167 thousand households that are distributed over a wide area of more than five thousand square kilometres [22]. An illustration of the geographic locations of the 18 towns in Tengchong, Yunnan province, China. The towns, marked in red, are located near the national border between China and Myanmar. Table 1 The IDs and names of the studied 18 towns in Tengchong, Yunnan province, China Another important reason is that the situation of P. vivax transmission in Tengchong is complicated: first, researchers have shown that environmental factors (e.g., temperature and rainfall) have a significant impact on the population growth of mosquitoes, as well as their biological cycles [23,24]. Accordingly, due to the suitable climate in Tengchong, the force of infection of P.
vivax to human beings in individual towns varies depending on the dynamically changing environmental factors and each town's demographic profile (e.g., human population size). Second, it was reported that the proportion of imported cases of P. vivax in China in 2011 was about 62.9% [21], where imported cases are defined as malaria infections whose origin can be traced to an area outside the country. In Yunnan province, a large number of malaria incidences are imported from Myanmar due to cross-border economic activities [19,25]. Moreover, evidence has shown that the frequency of cross-border activities is highly related to the socioeconomic profile of each individual town, such as average income per capita [8,26,27]. To investigate the underlying causes of geographic variations of P. vivax incidences in Tengchong, this paper focuses not only on the direct impact of environmental and demographic factors on P. vivax transmission in individual towns, but also on the indirect impact of socioeconomic factors on the number of imported cases. To achieve this, the following three critical challenges are addressed: How can a computational model be built to systematically characterize the impact of both explicit and implicit factors on geographic variations of disease incidences? How can the impact of imported cases on geographic variations be assessed using various socioeconomic factors by taking into consideration human cross-border activities? What kinds of computational methods can be developed to quantify geographic variations by fitting model parameters to observed P. vivax incidences? To tackle these challenges, a space-time model is presented by extending the idea of factor analysis, which has been extensively adopted to investigate spatial-temporal patterns of infectious diseases [28,29]. Specifically, the space-time model consists of a linear combination of a P. vivax transmission model and a hidden time-dependent process of a set of non-observed common factors.
First, a malaria transmission model is built based on the notion of vectorial capacity (VCAP), which characterizes the P. vivax transmission potential in terms of dynamically changing temperature and rainfall, as well as population size, in each individual town [30,31]. Then, socioeconomic factors are integrated into a hidden time-dependent process of a set of common factors, which help quantify the variations of different towns in terms of the number of imported cases. To quantitatively assess geographic variations of P. vivax incidences, a Markov chain Monte Carlo (MCMC) simulation method is used to fit the proposed space-time model to the spatial-temporal P. vivax incidences [32,33]. To evaluate the performance of the proposed space-time model, experiments are first conducted on a set of synthetic data generated using predefined model parameters. The results show that the MCMC method can well estimate all model parameters. Then, a real-world study is carried out to investigate the geographic variations of P. vivax incidences among all 18 towns in Tengchong, Yunnan province, China. Model parameters are estimated by fitting the proposed model to monthly-reported P. vivax incidences from 2005 to 2010. Based on the estimated model parameters, the 18 towns are classified into several groups in terms of the impact of their socioeconomic factors on the number of imported cases. By doing so, public health authorities can strategically allocate their limited resources to specific groups of towns so as to improve the efficiency of active surveillance. In summary, even though this study introduces the space-time model by taking P. vivax transmission in Tengchong as an example, the proposed model is not limited to analysing geographic variations of P. vivax incidences. Without loss of generality, it can also be extended to analyse spatial-temporal data series of other diseases.
A space-time model Disease surveillance systems usually monitor disease incidences of different locations as a set of time series. Given the observed disease incidences of N locations during time period t=1,⋯,T, the spatial-temporal surveillance data at time t can be represented by a vector y_t=(y_{1t},⋯,y_{Nt})′. With respect to malaria transmission in Tengchong, China, the number of P. vivax incidences of each individual town consists of two parts: one is local infections caused by the P. vivax transmission within the town, which can be explicitly modelled based on environmental and demographic factors; the other is imported cases caused by a hidden time-dependent dynamics (e.g., human cross-border activities), which can be implicitly affected by a set of socioeconomic factors. According to the study in [34], the space-time model can be defined as follows: $$\begin{array}{*{20}l} y_{t} &= u_{t} + \beta \cdot f_{t} +\epsilon_{t}, \hspace*{60pt} \epsilon_{t} \sim N(0,\Sigma) \end{array} $$ $$\begin{array}{*{20}l} f_{t} &= \Gamma \cdot f_{t-1}+w_{t}, \hspace*{68pt} w_{t} \sim N(0,\Lambda) \end{array} $$ where u_t describes the epidemiological dynamics of local P. vivax transmission at time t, and β·f_t describes a hidden time-dependent dynamics of imported cases. Specifically, u_t=(u_{1t},⋯,u_{Nt})′ represents the number of local infections at time t, f_t is an m-dimensional vector of common factors (i.e., m is the order of the factor model), and β=(β^{(1)},⋯,β^{(m)}) is the N×m factor loading matrix. Each row of β describes the importance of the common factors for a given town, while each column of β (i.e., β^{(i)}) shows the spatial dependence of different towns with respect to a specific common factor. In this paper, it is assumed that the values of the common factors at time t depend only on those at time t−1, where the matrix Γ characterizes the time-dependent dynamics of the common factors. Finally, Σ and Λ are the observational and time-dependent variance matrices.
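To make the recursion concrete, the observation and state equations above can be simulated forward in time. The sketch below uses hypothetical parameter values (all names and numbers are illustrative; in the model, u_t comes from the VCAP-based transmission dynamics and the parameters are estimated by MCMC rather than fixed by hand):

```python
import random

def simulate_space_time(T=12, N=3, m=2, seed=0):
    """Simulate y_t = u_t + beta * f_t + eps_t with f_t = Gamma * f_{t-1} + w_t.

    Hypothetical stand-in values: Gamma is diagonal, beta is an N x m
    loading matrix, and u_t is held constant for brevity (in the model it
    comes from the VCAP-based transmission dynamics)."""
    rng = random.Random(seed)
    gamma = [0.8, 0.5]                            # diagonal of Gamma (m entries)
    beta = [[1.0, 0.2], [0.5, 0.7], [0.1, 1.2]]   # N x m factor loadings
    u = [2.0, 1.0, 3.0]                           # stand-in local-infection term
    f = [0.0] * m
    series = []
    for _ in range(T):
        # state evolution: f_t = Gamma f_{t-1} + w_t,  w_t ~ N(0, Lambda)
        f = [gamma[j] * f[j] + rng.gauss(0.0, 0.3) for j in range(m)]
        # observation: y_t = u_t + beta f_t + eps_t,  eps_t ~ N(0, Sigma)
        y = [u[i] + sum(beta[i][j] * f[j] for j in range(m)) + rng.gauss(0.0, 0.1)
             for i in range(N)]
        series.append(y)
    return series

series = simulate_space_time()  # T vectors, each of length N
```

Running this yields a T×N array of synthetic incidences of exactly the kind the paper's synthetic-data evaluation fits the model against.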
For simplicity, it is also assumed that \(\Sigma = diag({\sigma _{1}^{2}}, \cdots, {\sigma _{N}^{2}})\) and \(\Lambda = diag\left({\lambda _{1}^{2}}, \cdots, {\lambda _{m}^{2}}\right)\). By fitting model parameters to spatial-temporal surveillance data, the main objective is to evaluate the impact of heterogeneous factors on geographic variations of P. vivax incidences. Epidemiological dynamics of malaria transmission The notion of vectorial capacity (VCAP), defined as "the number of potentially infective contacts an individual person makes, through the vector population, per unit time" [15], is used to assess P. vivax transmission potential using environmental and demographic data. The VCAP was adapted from the basic reproductive number calculated based on the Macdonald model [35]. In each town i, the value of VCAP is given by: $$ V_{i} = \frac{-(m_{i} {a_{i}^{2}})p_{i}^{n_{i}}}{\ln(p_{i})}, $$ where m_i represents the equilibrium mosquito density per person, a_i is the expected number of bites on human beings per mosquito per day, p_i is the probability of a mosquito surviving through one whole day, and n_i is the entomological incubation period of malaria parasites. Based on the study of Ceccato et al. [30], all these parameters are dependent on the human population P_i, as well as the dynamically-changing temperature (T) and rainfall (R) in each individual town. Here, the detailed parameter descriptions and settings for calculating the VCAP of each individual town are shown in Table 2, which is adopted from the existing work [16]. As mentioned in [16], the values of relevant parameters are based on a certain degree of assumptions and estimates, and they could be adjusted when more accurate values are available. Table 2 The parameter descriptions and settings for calculating vectorial capacity Based on the relationship of VCAP and entomological inoculation rate (EIR), the number of infectious bites received per day by a human being can be estimated [31].
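The VCAP formula above translates directly into code. The sketch below is illustrative only: the argument names are made up, and the numeric values are hypothetical placeholders rather than the calibrated settings of Table 2:

```python
import math

def vectorial_capacity(m_density, a_bite, p_survive, n_incub):
    """Vectorial capacity V = m * a^2 * p^n / (-ln p).

    m_density: equilibrium mosquito density per person,
    a_bite:    expected human bites per mosquito per day,
    p_survive: probability a mosquito survives one whole day,
    n_incub:   entomological incubation period (days).
    In practice these would be derived from temperature, rainfall,
    and population data as described in Table 2."""
    return m_density * a_bite ** 2 * p_survive ** n_incub / (-math.log(p_survive))

# Hypothetical values, for illustration only:
V = vectorial_capacity(m_density=10.0, a_bite=0.3, p_survive=0.9, n_incub=10)
```

Note how sensitive V is to the daily survival probability: the factor p^n / (-ln p) makes small changes in p_survive dominate the result, which is why temperature (which drives mosquito survival) matters so much here.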
Accordingly, the number of local infections at time t can be calculated based on the number of infections at the previous time t−1. The formulation is as follows: $$ u_{t} = \frac{-bcV_{t}y_{t-1}'y_{t-1}}{P_{i}}+ y_{t-1}I(1-r+bcV_{t}), $$ where b represents the probability that a susceptible person becomes infected after being bitten by an infectious mosquito, c denotes the probability that an uninfected mosquito becomes infected after biting an infectious person, r is the human recovery rate, I is the N×N identity matrix, and V_t=(V_{1t},⋯,V_{Nt})′ is a vector of VCAP for different towns at time t. It should be noted that the model parameters bc and r will be estimated by fitting the proposed model to the spatial-temporal malaria incidences. Time-dependent dynamics of common factors As in the standard dynamic factor model [36], Equation 2 describes the dynamics of m independent common factors, where Γ is set to be diag(γ_1,⋯,γ_m). In doing so, the factor loading matrix β characterizes geographic variations of disease incidences with respect to the set of common factors. In this paper, the jth column of β is modelled as a Gaussian random field (GRF), that is, $$ \beta_{(j)}\sim GRF\left(\mu_{j}^{\beta}, {\tau_{j}^{2}}R_{\phi_{j}}\right), $$ where \(\mu_{j}^{\beta}\) is an N-dimensional mean vector, \({\tau_{j}^{2}}\) indicates the scale of spatial dependence, and \(R_{\phi_{j}}\) is a symmetric and positive definite covariance matrix. The element \(R_{\phi_{j}}(l,k)\) can be used to reflect the range of spatial dependence in terms of geographic distances and socioeconomic factors. Specifically, the (l,k)-element of the covariance matrix is given by \(R_{\phi_{j}}(l,k) = \rho_{\phi_{j}}(s_{\textit {lk}})\), where \(\rho_{\phi_{j}}(\cdot)\) is a correlation function and s_{lk} represents the spatial heterogeneity between towns l and k [34].
Here, the correlation function is assumed to be exponential, i.e., $$ \rho_{\phi_{j}}(s_{lk}) = \exp\left(-{s_{lk}}/{\phi_{j}}\right), $$ where ϕ can be generated from an inverse gamma distribution. The spatial heterogeneity S = {s_lk}_{N×N} is defined as the Hadamard product of a geographic distance matrix D and a socioeconomic distance matrix M, i.e., S = D∘M, where M is given by the cosine distances between different towns with respect to a list of n implicit impact factors x = (x_1, ⋯, x_n). Therefore, each element in M can be calculated as follows: $$ M_{lk} = 1-\frac{x_{l} \cdot x_{k}}{\| x_{l}\|\cdot \| x_{k}\|} = 1- \frac{\sum_{i=1}^{n} x_{li}x_{ki}}{\sqrt{\sum_{i=1}^{n} x_{li}^{2}}\sqrt{\sum_{i=1}^{n} x_{ki}^{2}}}, $$ where x_l represents the vector of impact factors of location l. To generate D, geographic distances between the 18 towns in Tengchong are extracted using the Google Maps API. Meanwhile, five socioeconomic factors are used to calculate the socioeconomic distance matrix M; they are: per capita arable land, per capita food production, per capita meat production, per capita government revenue, and personal income. Clearly, Equation 6 indicates that the pairwise covariance, and hence the dependence, between any two towns decreases as the heterogeneity between them increases. It should be noted that although only five socioeconomic factors are used in this paper, the calculation of spatial heterogeneity can be extended to involve more implicit factors. Inferring model parameters In this section, an MCMC simulation method is presented to estimate model parameters by fitting the proposed space-time model to disease incidence data. Mathematically, the space-time model can be reformulated in matrix notation as y = u + Fβ′ + ε, where y = (y_1, ⋯, y_T)′ is a T×N matrix, u = (u_1, ⋯, u_T)′ is a T×N matrix, and F = (f_1, ⋯, f_T)′ is a T×m matrix. The matrix ε is of dimension T×N, and follows a matrix-variate normal distribution, i.e., ε∼N(0, I_T, Σ) [34].
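The construction of S = D∘M and the exponential covariance can be sketched directly. The three-town distance matrix and factor vectors below are toy values chosen for illustration, not data from Tengchong:

```python
import numpy as np

def cosine_distance_matrix(X):
    """M[l, k] = 1 - cosine similarity between the impact-factor
    vectors of towns l and k."""
    X = np.asarray(X, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - (X @ X.T) / (norms * norms.T)

def spatial_covariance(D, X, phi, tau2=1.0):
    """tau^2 * exp(-S / phi) with S = D ∘ M (Hadamard product)."""
    S = D * cosine_distance_matrix(X)
    return tau2 * np.exp(-S / phi)

# Toy example: 3 towns, 2 socioeconomic factors. Towns 0 and 1 have
# proportional factor vectors, so their cosine distance is ~0 and their
# covariance stays near tau^2 despite the 10-unit geographic distance.
D = np.array([[0.0, 10.0, 20.0], [10.0, 0.0, 15.0], [20.0, 15.0, 0.0]])
X = np.array([[1.0, 2.0], [2.0, 4.0], [5.0, 1.0]])
R = spatial_covariance(D, X, phi=5.0)
```

Note the modelling consequence: two towns with identical socioeconomic profiles remain fully correlated regardless of geographic distance, because the Hadamard product zeroes out s_lk.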
Thus, given m common factors, the posterior probability of y can be calculated as follows: $$\begin{array}{@{}rcl@{}} p(y|F,\! \beta,\! \Theta) &=& \!(2\pi)^{-TN/2}|\Sigma|^{-T/2}\times \\ && \exp\!\left(\!tr\!\left(\!-\frac{\left(y\!-u\!-F\beta'\right)'\left(y-u-F\beta'\right)}{2\Sigma}\right)\right), \end{array} $$ where Θ consists of the parameters in the time-dependent dynamics of common factors, i.e., \(\sigma = ({\sigma _{1}^{2}}, \cdots,{\sigma _{N}^{2}})\), \(\lambda = \left ({\lambda _{1}^{2}}, \cdots,{\lambda _{m}^{2}}\right), \gamma = \left (\gamma _{1}, \cdots,\gamma _{m}\right), \mu = \left (\mu _{1}^{\beta }, \cdots,\mu _{m}^{\beta }\right), \tau = \left ({\tau _{1}^{2}}, \cdots, {\tau _{m}^{2}}\right), \phi = \left (\phi _{1}, \cdots, \phi _{m}\right)\), as well as the parameters in the epidemiological dynamics of P. vivax transmission, i.e., bc and r. Accordingly, the joint posterior distribution of (F, β, Θ) is given by: $$\begin{array}{@{}rcl@{}} p\left(F, \beta, \Theta|y\right) &\propto & \prod_{t=1}^{T} p\left(y_{t}|f_{t}, \beta, \sigma\right)p(bc)p(r)p(f_{0}) \\ &\times & \prod_{t=1}^{T} p\left(f_{t}|f_{t-1}, \lambda, \gamma\right) \\ &\times & \prod_{j=1}^{m} p\left(\beta_{(j)}| \mu_{j}^{\beta}, {\tau_{j}^{2}}, \phi_{j}\right)p\left(\gamma_{j}\right)p\left(\mu_{j}^{\beta}\right)p\left({\tau_{j}^{2}}\right)\\ &\times & p\left(\phi_{j}\right) \prod_{i=1}^{N} p\left({\sigma_{i}^{2}}\right) \prod_{i=1}^{N} p\left({\lambda_{i}^{2}}\right), \end{array} $$ where the prior information on the model parameters (F, β, Θ) will be discussed in detail in the Results section. To simultaneously estimate the model parameters, an MCMC simulation method is developed. The procedure of the method is as follows: First, all independent model parameters Θ^{(0)} = (σ, λ, γ, μ, τ, ϕ, bc, r, f_0) are initialised based on their prior distributions.
Then, the values of the factor loading matrix β^{(0)} and the values of the common factors f_1 are generated based on Equation 6 and Equation 2, respectively. By doing so, the posterior distribution p(F^{(0)}, β^{(0)}, Θ^{(0)}|y) can be estimated based on Equation 9. In each iteration, new values of the parameters Θ* are generated from an adaptive proposal distribution q(Θ*|Θ) [32,33]. Accordingly, new values of F* and β* are calculated. All the new values F*, β* and Θ* are accepted with probability: $$ \min \left(1, \frac{p\left(F^{*},\beta^{*},\Theta^{*}|y\right)q\left(\Theta|\Theta^{*}\right)}{p\left(F,\beta,\Theta|y\right)q\left(\Theta^{*}|\Theta\right)} \right). $$ (10) After a total number of M iterations, the statistics of the factor loading matrix β and the other model parameters can be analysed. The detailed method is shown in Algorithm 1. Simulated study: the evaluation of the MCMC simulation method To evaluate the performance of the MCMC method, a synthetic dataset is simulated based on the proposed space-time model with a set of predefined model parameters. Then, the ability of the method to estimate model parameters is assessed by treating the predefined model parameters as ground-truth values. To simulate the synthetic dataset, the geographic environment and the parameters of the proposed space-time model are set as follows: Similar to the study in [34], N=25 locations are uniformly allocated in a two-dimensional square [0,1]×[0,1], that is, the longitudes and latitudes of individual locations are (0.20,0.20), (0.20,0.40), ⋯, (1.00,0.80), (1.00,1.00), respectively. After surveying the existing literature on the dynamics of malaria transmission, the epidemiological parameters are set to be bc=0.007 and r=0.05. The observational and the time-dependent variations are set to be Σ=diag(0.02,0.02,0.02) and Λ=diag(0.02,0.03,0.01), respectively. Moreover, the matrix Γ is set to be Γ=diag(0.60,0.40,0.30).
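The accept/reject step in Equation 10 can be sketched as a single log-space Metropolis-Hastings decision. The function below is a generic sketch rather than the paper's Algorithm 1; the proposal and posterior densities are passed in as log values, and u plays the role of the uniform random draw.

```python
import math

def mh_accept(log_post_new, log_post_old,
              log_q_old_given_new, log_q_new_given_old, u):
    """Metropolis-Hastings acceptance for the ratio in Equation 10,
    evaluated in log space for numerical stability; u ~ Uniform(0, 1)."""
    log_alpha = (log_post_new - log_post_old) \
        + (log_q_old_given_new - log_q_new_given_old)
    return math.log(u) < min(0.0, log_alpha)

# A proposal with higher posterior under a symmetric proposal (the two
# log-q terms cancel) is always accepted:
accepted = mh_accept(-10.0, -12.0, 0.0, 0.0, u=0.5)
```

Working in log space avoids overflow and underflow when the posterior in Equation 9 involves products over all T time steps and N towns.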
Without loss of generality, it is assumed that there are three common factors (i.e., m=3). The factor loading matrix β is generated from a Gaussian process with an exponential correlation function with ϕ=(0.15,0.40,0.25). In other words, \(R_{\phi _{j}}(l,k) = \exp (-d_{lk}/\phi _{j})\). The value of \(\mu _{j}^{\beta }\) is only relevant to distance in the simulated experiments. Accordingly, it is reasonable to set \(\mu _{j}^{\beta }= X\mu _{j}\), where X = (1_N, Longitude_N, Latitude_N), and μ_1 = (5,5,4)′, μ_2 = (5,−6,−7)′, and μ_3 = (5,−8,6)′. The scalar τ is set to be τ=(1.00,0.75,0.56). The objective is to evaluate whether the proposed MCMC simulation method can help estimate the time-dependent diagonal matrix Γ, the scalar τ, the epidemiological parameters bc and r, as well as the number of common factors m. Parameter settings The model parameters are estimated by fitting the space-time model to the generated data using the proposed MCMC algorithm. Specifically, the following prior distributions are adopted for each parameter in the MCMC method: The observational and time-dependent variations follow inverse gamma distributions, i.e., σ^2∼IG(0.01,0.01) and λ^2∼IG(0.01,0.01). The parameters in Γ are assumed to follow a normal distribution, i.e., γ_i∼N(0.5,1). The initial values of the common factors f_0 are set to be f_0=(0.6,0.4,0.3). According to the literature, the epidemiological parameters bc and r are assumed to follow uniform distributions, where bc∼U(0.0036,0.01248) and r∼U(0.02222,0.11110).
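Under the simulated-study settings above, the time-dependent dynamics of the common factors can be simulated as follows. The AR(1)-style form f_t = Γf_{t−1} + w_t with w_t ∼ N(0, Λ) is a reading of Equation 2 inferred from the roles of Γ and Λ described in the text, and the six-year monthly horizon is illustrative.

```python
import numpy as np

# Values from the simulated study: Gamma = diag(0.60, 0.40, 0.30),
# Lambda = diag(0.02, 0.03, 0.01), f_0 = (0.6, 0.4, 0.3).
rng = np.random.default_rng(seed=0)
Gamma = np.diag([0.60, 0.40, 0.30])
Lam_sd = np.sqrt([0.02, 0.03, 0.01])   # standard deviations of w_t
f = np.array([0.6, 0.4, 0.3])          # f_0

T = 72                                  # e.g. six years of monthly steps
F = np.empty((T, 3))
for t in range(T):
    f = Gamma @ f + Lam_sd * rng.standard_normal(3)  # f_t = Gamma f_{t-1} + w_t
    F[t] = f
```

Since every |γ_j| < 1, the factors are mean-reverting and stay bounded, so the geographic variation in incidences is carried by the factor loadings β rather than by the factors themselves.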
The factor loading matrix is modelled as a Gaussian random field, i.e., \(\beta _{j} \sim N(\mu _{j}^{\beta }, {\tau _{j}^{2}} R_{\phi _{j}})\), where \(\mu _{j}^{\beta }\) is a known hyperparameter and follows a normal distribution with mean equal to the true value and variance equal to 25; the scale of spatial dependence \({\tau _{j}^{2}}\) follows an inverse gamma distribution, i.e., \({\tau _{j}^{2}} \sim IG(1, 0.75)\); and the prior distribution of ϕ is ϕ∼IG(2,b), where b = max(S)/(−2 ln(0.05)) and max(S) is the largest element among all s_lk (see [37,38] for more detail). Simulation results The MCMC algorithm is run for 100,000 iterations, and the posterior inference is built upon the last 80,000 draws. Figure 2 shows the estimated parameters γ and τ using the proposed MCMC simulation method, while Fig. 3 demonstrates the estimated values of the epidemiological parameters bc and r. In all these figures, the true value of each parameter is illustrated using a blue line, while the estimated mean value is shown using a dark line. The detailed values and their corresponding 95 % credible intervals are shown in Table 3. It can be observed that all estimated mean values are very close to their true values (Figs. 2 and 3), and the estimated mean values of all model parameters are within their corresponding 95 % credible intervals (Table 3). Figure 2: The estimates of the model parameters Γ and τ using the proposed MCMC simulation method. Panels a–c show the estimated mean values of γ_1, γ_2, and γ_3 (black lines) and their corresponding true values (blue lines); panels d–f show the estimated mean values of τ_1, τ_2, and τ_3 (black lines) and their corresponding true values (blue lines).
Figure 3: The estimates of the epidemiological parameters bc and r using the proposed MCMC simulation method. Panel a shows the estimated mean values of bc (dark lines) and its true value (blue line); panel b shows the estimated mean values of r (dark lines) and its true value (blue line). Table 3 The estimates of model parameters and their 95 % credible intervals Besides the model parameters, another important quantity that needs to be determined is the value of m in the time-dependent dynamics of common factors (i.e., the order of the factor model). In this simulation study, models with up to five common factors (i.e., m=2, 3, 4, and 5) are tested with respect to four measurements: two measurements of fitting error (i.e., the mean absolute error (MAE) and the mean square error (MSE)) and two criteria for model selection (i.e., the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)), where \(MAE = \frac {1}{NT}\sum _{i=1}^{N}\sum _{t=1}^{T} |y_{it}-\hat {y}_{it}|\), \(MSE = \frac {1}{NT}\sum _{i=1}^{N}\sum _{t=1}^{T} (y_{it}-\hat {y}_{it})^{2}\), AIC = 2m − 2 ln(L), and BIC = m ln(n) − 2 ln(L). Here, L is the value calculated by Equation 8, and n is the number of observed data points. Table 4 shows the performance of the simulated studies with respect to models with different numbers of common factors. It can be found that m=3 achieves the best performance in terms of the above-mentioned four measurements, which is exactly the number of common factors used for generating the synthetic dataset. Table 4 The effects of the number of common factors In summary, the above results suggest that the MCMC simulation method can well estimate the values of the model parameters and the order of the factor model. Real-world study: the P.
vivax transmission in Tengchong, Yunnan, China This section focuses on the investigation of the effects of various impact factors on the geographic variations of P. vivax incidences among 18 towns in Tengchong, Yunnan province, China. With respect to monthly malaria incidences from 2005 to 2010, different towns show different temporal patterns. There are two major reasons: first, due to the environmental and demographic heterogeneity of these towns, malaria transmission potential in each individual town is different. Second, due to the socioeconomic heterogeneity, human cross-border activities in individual towns are different, which may affect the number of imported malaria incidences. The following data are involved in constructing the space-time model. Malaria incidences. The reported cases of P. vivax infection are collected from the China Information System for Disease Control and Prevention, which cover all the 18 towns in Tengchong from 2005 to 2010 [39]. Temperature and rainfall. The temperature and rainfall data of Tengchong from 2005 to 2010 are collected to estimate the P. vivax transmission potential for individual towns. For the temperature, the Moderate Resolution Imaging Spectroradiometer (MODIS) is used to estimate near-surface air temperature [40]. For the rainfall, the Tropical Rainfall Measuring Mission (TRMM) product is used to estimate daily precipitation [41]. Population size. The population size of each town is based on the sixth national census of China in 2010 [22]. Geographic distances. The geographic distances between individual towns are identified as the shortest road distances using Google Maps API. Socioeconomic factors. 
Suggested by public policy makers and practitioners, five typical socioeconomic factors are adopted to characterize the socioeconomic heterogeneity of the studied towns from 2005 to 2010; they are: per capita arable land, per capita food production, per capita meat production, per capita government revenue, and personal income. All these data are collected from the Tengchong Statistics Bureau. It should be noted that many other factors from heterogeneous data sources can also be incorporated into the calculation of the matrix M in the proposed space-time model. To estimate the model parameters, the same prior distributions as those in the simulated study are used for the parameters σ^2, λ^2, γ, bc, r, τ^2 and ϕ. The other two parameters, f_0 and \(\mu _{j}^{\beta }\), are set as follows: The initial values of f_0 are drawn from a normal distribution, i.e., f_0∼N(1,1). The factor loading matrix is modelled as a Gaussian random field, i.e., \(\beta _{j} \sim N(\mu _{j}^{\beta }, {\tau _{j}^{2}} R_{\phi _{j}})\). Here, \(\mu _{j}^{\beta }\) follows a normal distribution with the same mean and variance as those of y_t − u_t for all t, where the values of u_t are calculated using bc and r randomly generated from their prior distributions. The MCMC algorithm is run for 100,000 iterations with a burn-in of the first 20,000 runs. First, the appropriate number of common factors m is incrementally evaluated in terms of the four measurements, i.e., MAE, MSE, AIC, and BIC. It can be found that better performance is achieved when m=5. Figure 4 shows the fitting results for the monthly P. vivax incidences of the 18 towns in Tengchong from 2005 to 2010. The red lines correspond to the observed numbers of incidences, while the green lines show the estimated numbers of incidences based on the proposed space-time model. It can be observed that for most towns, the proposed model performs very well in terms of fitting the real-world observations, except for certain special towns, such as the town Heshun in Fig. 4d.
The possible reason is that P. vivax incidences in Heshun are temporally sparse. Therefore, historical malaria incidences play limited roles in estimating future incidences; in other words, the time-dependent process will dominate the final estimation. However, such misestimation is tolerable in the real world because the number of P. vivax incidences in these towns is relatively small. Figure 4: The observed and estimated numbers of Plasmodium vivax incidences of the 18 towns in Tengchong, Yunnan province, China, by month from 2005 to 2010. The red lines correspond to the observed numbers of Plasmodium vivax incidences, while the green lines show the estimated numbers based on the proposed space-time model. (a) Zhonghe, (b) Wuhe, (c) Beihai, (d) Heshun, (e) Tuantian, (f) Gudong, (g) Xinhua, (h) Mingguang, (i) Qushi, (j) Qingshui, (k) Houqiao, (l) Ruidian, (m) Jietou, (n) Tengyue, (o) Mangbang, (p) Hehua, (q) Puchuan and (r) Mazhan. According to the definition of the factor loading matrix β, each row of β represents the importance of the common factors for a given town, and each column of β shows the spatial dependence among different towns. In this case, each column of β can be treated as an "attribute" of individual towns, so as to classify the 18 towns based on the impact of their "attributes" on geographic variations of P. vivax incidences. Table 5 shows the estimate of the factor loading matrix β with the number of common factors m=5. Along this line, the well-known K-means algorithm is adopted to perform classification based on the estimated factor loading matrix β. Figure 5 demonstrates the classification results for the 18 towns by setting K=2, 3, 4, and 5, where different colors represent different clusters. It can be found that when K=2, some adjacent towns are grouped into one cluster (e.g., the brown cluster and the green cluster in Fig. 5a), which means that geographic distances may dominate variations of malaria incidences.
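The clustering step can be sketched with a minimal k-means over the rows of β, each row being one town's loadings on the m common factors. The toy β below and the naive deterministic initialisation are illustrative only; the paper's actual study clusters the estimated 18×5 matrix reported in Table 5.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means. Initial centers are spread over the input order,
    a naive choice kept only so this sketch is deterministic."""
    X = np.asarray(X, dtype=float)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each town to its nearest center, then update centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy "beta": 6 towns, 2 factors, with two well-separated groups.
beta = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
                 [2.00, 2.10], [2.10, 1.90], [1.90, 2.00]])
labels = kmeans(beta, k=2)
```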
This is in line with the analysis of certain spatial statistics methods, such as the smoothed surface map in [16]. Specifically, several towns adjacent to Tengyue are classified into the same cluster (i.e., the brown cluster in Fig. 5). The reason may be that Tengyue is the center of Tengchong county and has a relatively better economic status. People in these towns may seldom travel to high-risk regions in Myanmar. As the value of K increases, some particular towns (i.e., Wuhe and Mangbang) gradually separate from the brown cluster, possibly due to the integrated impact of socioeconomic factors. In this way, active surveillance and targeted intervention strategies can be implemented for groups of towns based on the amount of available resources, which may significantly improve the effectiveness and efficiency of malaria control and elimination. Figure 5: The classification results of the 18 towns in Tengchong, Yunnan province, China. Different colors represent different clusters. (a) The number of clusters K=2; (b) K=3; (c) K=4; and (d) K=5. Table 5 The estimate of the factor loading matrix β with the number of common factors m = 5 Data mining and spatial statistics methods play essential roles in understanding spatial-temporal patterns of disease incidences, which can provide valuable information for disease surveillance and control. First, local clusters or hot spots of disease transmission can be identified through geostatistical analysis of the time series of disease incidences, so that targeted intervention strategies can be applied to improve the efficiency of disease control. For example, researchers have adopted the SaTScan software to detect local malaria clusters based either on confirmed malaria cases [42] or on other related impact factors [43].
Second, spatial dependence between different locations can be quantified to reveal the relationships between the severity of an infectious disease and its relevant impact factors. For example, Osei and Duker have studied the spatial dependence of Vibrio cholerae prevalence on open-space refuse dumps [44]; Gemperli et al. have investigated the environmental and age dependence of malaria transmission in West and Central Africa [45]. Third, incidences at unobserved locations can be estimated using appropriate spatial interpolation methods based on confirmed incidences at observed locations. For example, the Kriging linear spatial interpolation method has been adopted to visualize geographic and temporal trends in rotavirus activity in the United States [46]. Regarding the above-mentioned problems, most existing methods have focused solely on the impact of several typical factors, whereas the aim of this paper is to systematically model geographic variations of disease incidences by taking into consideration various impact factors from heterogeneous data sources. Factor analysis is a family of statistical methods for systematically describing a large number of correlated variables using a potentially small number of unobserved variables (i.e., factors). Generally speaking, the main purpose of factor analysis in spatial epidemiology is to either reduce the overall dimension of observations at each geographic location, or describe the temporal dynamics of all locations using a small set of common factors [34,36]. Different from existing studies, the observations of disease incidences here are univariate (i.e., the spatial-temporal distribution of disease incidences), and the main focus is to investigate the impact of heterogeneous factors on geographic variations of disease incidences.
In this paper, the space-time model is one of the first attempts to study both explicit and implicit factors by integrating the epidemiological dynamics of disease transmission and the time-dependent dynamics of unobserved common factors. Although the experimental results have shown that the proposed space-time model can perform well in fitting the reported spatial-temporal P. vivax incidences in Tengchong, it should be noted that the model can still be generalized in the following ways: First, in this paper, it is assumed that the values of the common factors f_t at time t depend on those at the previous time f_{t−1}. In reality, the duration of the time window should be justified based on real-world situations, such as the incubation period of the infectious disease. Second, the entries in the matrix Γ are constant throughout the paper. Theoretically, the model can be generalized to involve time-dependent entries of Γ such that dynamic patterns of common factors (e.g., seasonal patterns) can be investigated. Third, in the MCMC method, the number of common factors is incrementally evaluated, whereas in the future, a customized reversible jump MCMC method [47] can be utilized to learn the appropriate value of m. Fourth, it can be observed from the experimental results (e.g., Fig. 4d) that when the P. vivax incidences are temporally sparse, the proposed model cannot fit the observed numbers of incidences well. Therefore, specialized methods should be developed for cases where the observed disease incidences in most geographic locations are temporally sparse. Last but not least, the proposed space-time model is a linear combination of a disease transmission model and a hidden time-dependent process. In the future, various data mining methods can be involved to design more sophisticated space-time models by explicitly revealing the impact of other heterogeneous factors.
Moreover, in addition to mining geographic variations of disease incidences, the proposed model can also be extended to address the following problems: Incidence forecasting. Based on the estimated model parameters, the proposed model can also be used to forecast disease incidences in the near future. Mathematically, the h-step-ahead predictive density p(f_{T+h}|f_T, β, Θ) can first be learned. Then, p(y_{T+h}|f_{T+h}, β, Θ) can be estimated. Spatial interpolation. Based on spatial interdependence, disease incidences at unobserved locations may be estimated by analysing locations with similar values of impact factors. To achieve this, new inference methods need to be proposed to estimate the unobserved rows of the factor loading matrix β. All these issues are worth further pursuing so as to achieve effective and efficient disease surveillance and control. In this paper, a space-time model is presented to investigate geographic variations of disease incidences by taking into consideration two types of impact factors: explicit factors that can directly affect the dynamics of malaria transmission, and implicit factors that may indirectly affect the number of imported cases. Without loss of generality, the model is implemented to investigate geographic variations of P. vivax incidences among 18 towns in Tengchong, Yunnan province, China. Specifically, the notion of vectorial capacity is adopted to model the P. vivax transmission potential with respect to environmental and demographic factors. Meanwhile, the spatial heterogeneity of different towns is characterized in terms of their geographic distances and five types of socioeconomic factors. Based on the space-time model, these factors may result in geographic variations of P. vivax incidences through the time-dependent dynamics of a set of common factors. To estimate the model parameters, an MCMC simulation method is used by fitting the model to the spatial-temporal disease incidences.
A synthetic study is carried out to assess the ability of the MCMC method to estimate model parameters. Then, the proposed model is applied in a real-world study investigating geographic variations of P. vivax incidences among the 18 towns in Tengchong. It is expected that the computationally obtained methods and results may offer public health authorities further insight into, as well as new tools for, active surveillance and control of infectious diseases. MCMC: Markov chain Monte Carlo VCAP: Vectorial capacity GRF: Gaussian random field MAE: Mean absolute error MSE: Mean square error AIC: Akaike information criterion BIC: Bayesian information criterion Tambo E, Ai L, Zhou X, Chen JH, Hu W, Bergquist R, et al. Surveillance-response systems: the key to elimination of tropical diseases. Infect Dis Poverty. 2014; 3:17. Zofou D, Nyasa RB, Nsagha DS, Ntie-Kang F, Meriki HD, Assob JCN, et al. Control of malaria and other vector-borne protozoan diseases in the tropics: enduring challenges despite considerable progress and achievements. Infect Dis Poverty. 2014; 3:11. Elliot P, Wakefield JC, Best NG, Briggs DJ. Spatial Epidemiology: Methods and Applications. Oxford: Oxford University Press; 2000. Hay SI, Snow RW. The malaria atlas project: developing global maps of malaria risk. PLoS Med. 2006; 3:473. Ostfeld RS, Glass GE, Keesing F. Spatial epidemiology: an emerging (or re-emerging) discipline. Trends Ecol Evol. 2005; 20:328–6. Eckhoff PA. A malaria transmission-directed model of mosquito life cycle and ecology. Malar J. 2011; 10:303. Shi B, Xia S, Liu J. A complex systems approach to infectious disease surveillance and response. In: Proceedings of the International Conference on Brain and Health Informatics. Gunma, Japan: 2013. p. 524–35. Yadav K, Dhiman S, Rabha B, Saikia P, Veer V. Socio-economic determinants for malaria transmission risk in an endemic primary health centre in Assam, India. Infect Dis Poverty. 2014; 3:19. Butler CD.
Infectious disease emergence and global change: thinking systemically in a shrinking world. Infect Dis Poverty. 2012; 1:5. Liu J, Yang B, Cheung WK, Yang G. Malaria transmission modelling: a network perspective. Infect Dis Poverty. 2012; 1:11. Brownstein JS, Holford TR, Fish D. A climate-based model predicts the spatial distribution of the Lyme disease vector Ixodes scapularis in the United States. Environ Health Perspect. 2003; 111:1152–7. Theophilides CN, Ahearn SC, Grady S, Merlino M. Identifying West Nile virus risk areas: the dynamic continuous-area space-time system. Am J Epidemiol. 2003; 157:843–54. Werneck GL, Costa CH, Walker AM, David JR, Wand M, Maquire JH. The urban spread of visceral leishmaniasis: clues from spatial analysis. Epidemiology. 2002; 13:364–7. Ross R. The Prevention of Malaria. London: John Murray; 1911. Mandal S, Sarkar RR, Sinha S. Mathematical models of malaria - a review. Malar J. 2011; 10:202. Shi B, Liu J, Zhou XN, Yang GJ. Inferring Plasmodium vivax transmission networks from tempo-spatial surveillance data. PLoS Negl Trop Dis. 2014; 8:2682. Gething PW, Elyazar IRF, Moyes CL, Smith DL, Battle KE, Guerra CA, et al. A long neglected world malaria map: Plasmodium vivax endemicity in 2010. PLoS Negl Trop Dis. 2012; 6:1814. Tambo E, Adedeji AA, Huang F, Chen JH, Zhou SS, Tang LH. Scaling up impact of malaria control programmes: a tale of events in Sub-Saharan Africa and People's Republic of China. Infect Dis Poverty. 2012; 1:7. Hui FM, Xu B, Chen ZW, Cheng X, Liang L, Huang HB, et al. Spatio-temporal distribution of malaria in Yunnan province, China. Am J Trop Med Hyg. 2009; 81:503–9. Zhou SS, Wang Y, Tang LH. Malaria situation in the People's Republic of China in 2005. Chin J Parasitol Parasitic Dis. 2006; 24:401–3. Xia ZG, Yang MN, Zhou SS. Malaria situation in the People's Republic of China in 2011. Chin J Parasitol Parasitic Dis. 2012; 30:419–22. National Bureau of Statistics of China. The Fifth National Census in China.
http://www.stats.gov.cn/tjsj/pcsj/rkpc/dwcrkpc/. Paaijmans KP, Blanford S, Bell AS, Blanford JI, Read AF, Thomas MB. Influence of climate on malaria transmission depends on daily temperature variation. Proc Natl Acad Sci U S A. 2010; 107:15135–9. Gething PW, Boeckel TPV, Smith DL, Guerra CA, Patil AP, Snow RW, et al. Modelling the global constraints of temperature on transmission of Plasmodium falciparum and P. vivax. Parasit Vectors. 2011; 4:1–11. Lin H, Lu L, Tian L, Zhou S, Wu H, Bi Y, et al. Spatial and temporal distribution of falciparum malaria in China. Malar J. 2009; 8:130. Bi Y, Tong S. Poverty and malaria in the Yunnan province, China. Infect Dis Poverty. 2014; 3:32. Pindolia DK, Garcia AJ, Huang Z, Fik T, Smith DL, Tatem AJ. Quantifying cross-border movements and migrations for guiding the strategic planning of malaria control and elimination. Malar J. 2014; 13:169. Chen M, Zaas A, Woods C, Ginsburg GS, Lucas J, Dunson D, et al. Predicting viral infection from high-dimensional biomarker trajectories. J Am Stat Assoc. 2011; 106:1259–79. Valiakos G, Papaspyropoulos K, Giannakopoulos A, Birtsas P, Tsiodras S, Hutchings MR, et al. Use of wild bird surveillance, human case data and GIS spatial analysis for predicting spatial distributions of West Nile virus in Greece. PLoS One. 2014; 9:96935. Ceccato P, Vancutsem C, Klaver R, Rowland J, Connor SJ. A vectorial capacity product to monitor changing malaria transmission potential in epidemic regions of Africa. J Trop Med. 2012; 2012:595948. Smith DL, McKenzie FE. Statics and dynamics of malaria infection in Anopheles mosquitoes. Malar J. 2004; 3:13. Haario H, Laine M, Mira A, Saksman E. DRAM: efficient adaptive MCMC. Stat Comput. 2006; 16:339–54. Brooks S, Gelman A, Jones GL, Meng XL. Handbook of Markov Chain Monte Carlo. London: Chapman & Hall, CRC Press; 2011. Lopes HF, Salazar E, Gamerman D. Spatial dynamic factor analysis. Bayesian Anal. 2008; 3:759–92. Macdonald G. Theory of the eradication of malaria.
Bull World Health Org. 1956; 15:369–87. Peña D, Poncela P. Forecasting with nonstationary dynamic factor models. Epidemiology. 2004; 119:291–1. Banerjee S, Carlin BP, Gelfand AE. Hierarchical Modeling and Analysis for Spatial Data. London: Chapman & Hall, CRC Press; 2004. Schmidt AM, Gelfand AE. A Bayesian coregionalization approach for multivariate pollutant data. J Geophys Res Biogeosci. 2003; 108:24. Chinese Center for Disease Control and Prevention. China Information System for Disease Control and Prevention. http://www.cdpc.chinacdc.cn. The International Research Institute for Climate and Society. USGS LandDAAC MODIS 1km 8day Version_005 Aqua CN China_day. http://iridl.ldeo.columbia.edu/expert/SOURCES/.USGS/.LandDAAC/.MODIS/.1km/.8day/.version_005/.Aqua/.CN/.Day/. The International Research Institute for Climate and Society. NASA GES-DAAC TRMM_L3 TRMM_3B42 V6 Daily Precipitation: Surface Rain from All Satellite and Surface Data. http://iridl.ldeo.columbia.edu/expert/SOURCES/.NASA/.GES-DAAC/.TRMM_L3/.TRMM_3B42/.v6/.daily/.precipitation/. Coleman M, Coleman M, Mabuza AM, Kok G, Coetzee M, Durrheim DN. Using the SaTScan method to detect local malaria clusters for guiding malaria control programmes. Malar J. 2009; 8:68. Bousema T, Drakeley C, Gesase S, Hashim R, Magesa S, Mosha F, et al. Identification of hot spots of malaria transmission for targeted malaria control. J Infect Dis. 2010; 201:1764–74. Osei FB, Duker AA. Spatial dependency of V. cholera prevalence on open space refuse dumps in Kumasi, Ghana: a spatial statistical modelling. Int J Health Geogr. 2008; 7:62. Gemperli A, Sogoba N, Fondjo E, Mabaso M, Bagayoko M, Olivier J, Briët T, et al. Trop Med Int Health. 2006; 11:1032–46. Török TJ, Kilgore PE, Clarke MJ, Holman RC, Bresee JS, Glass RI. Visualizing geographic and temporal trends in rotavirus activity in the United States, 1991 to 1996. Pediatr Infect Dis J. 1997; 16:941–46. Lopes HF, West M. Bayesian model assessment in factor analysis. Stat Sin.
2004; 14:41–67. Detinova TS, Vol. 47. Age-grouping methods in Diptera of medical importance with special reference to some vectors of malaria; 1962, pp. 13–191. http://www.ncbi.nlm.nih.gov/pubmed/13885800. The authors would like to acknowledge the funding support from Hong Kong Research Grants Council (HKBU211212, HKBU12202114), the National Natural Science Foundation of China (NSFC81402760, NSFC81273192), and the National Center for International Joint Research on E-Business Information Processing under Grant 2013B01035 for the research work being presented in this article. School of Information Engineering, Nanjing University of Finance & Economics, Wenyuan Road, Nanjing, 210003, China Benyun Shi Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China Department of Computer Science, Hong Kong Baptist University, Waterloo Road, Kowloon Tong, Hong Kong Qi Tan & Jiming Liu National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention; Key Laboratory of Parasite and Vector Biology, MOH; WHO Collaborating Center for Malaria, Schistosomiasis and Filariasis, Shanghai, 200025, China Xiao-Nong Zhou Qi Tan Jiming Liu Correspondence to Jiming Liu. Conceived and designed the experiments: BS JL XNZ. Performed the experiments: BS QT. Collected and analysed the data: BS QT JL XNZ. Contributed reagents/materials/analysis tools: BS JL XNZ. Wrote the paper: BS JL XNZ. All authors read and approved the final manuscript. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Shi, B., Tan, Q., Zhou, XN. et al. Mining geographic variations of Plasmodium vivax for active surveillance: a case study in China. Malar J 14, 216 (2015). https://doi.org/10.1186/s12936-015-0719-y Space-time model
CommonCrawl
\begin{document} \title{Euler's constant: Euler's work and modern developments} \author{Jeffrey C. Lagarias} \address{Dept. of Mathematics, University of Michigan, Ann Arbor, MI 48109-1043, USA} \curraddr{} \email{[email protected]} \thanks{The research of the author was supported by NSF Grants DMS-0801029 and DMS-1101373.} \subjclass[2010]{Primary 11J02, Secondary 01A50, 11J72, 11J81, 11M06} \date{October 10, 2013} \dedicatory{} \begin{abstract} This paper has two parts. The first part surveys Euler's work on the constant $\gamma=0.57721...$ bearing his name, together with some of his related work on the gamma function, values of the zeta function, and divergent series. The second part describes various mathematical developments involving Euler's constant, as well as another constant, the Euler-Gompertz constant. These developments include connections with arithmetic functions and the Riemann hypothesis, and with sieve methods, random permutations, and random matrix products. It also includes recent results on Diophantine approximation and transcendence related to Euler's constant. \end{abstract} \maketitle \tableofcontents \setlength{\baselineskip}{1.0\baselineskip} \section{Introduction} Euler discovered the constant $\gamma$, defined by \begin{equation*} \gamma := \lim_{n \to \infty} \left( \sum_{j=1}^n \frac{1}{j} \, - \log n\right) . \end{equation*} This is a fundamental constant, now called {\em Euler's constant.} We give it to $50$ decimal places as \begin{equation*} \gamma = 0.57721~56649~01532~86060~65120~90082~40243~10421~59335~93992... \end{equation*} In this paper we shall describe Euler's work on it, and its connection with values of the gamma function and Riemann zeta function. We consider as well its close relatives $e^{\gamma} =1.7810724...$ and $e^{-\gamma}=0.561459...$. We also inquire as to its possible meaning, by describing many situations in mathematics where it occurs. 
The constant $\gamma$ is often known as the {\em Euler-Mascheroni constant}, after later work of Mascheroni discussed in Section \ref{sec26}. As explained there, based on their relative contributions it seems appropriate to name it after Euler. There are many famous unsolved problems about the nature of this constant. The most well known one is: \begin{cj} \label{conj1} Euler's constant is irrational. \end{cj} This is a long-standing open problem. A recent and stronger version of it is the following. \begin{cj}~\label{conj2} Euler's constant is not a Kontsevich-Zagier period. In particular, Euler's constant is transcendental. \end{cj} A {\em period} is defined by Kontsevich and Zagier \cite{KZ01} to be a complex constant whose real and imaginary parts separately are given as a finite sum of absolutely convergent integrals of rational functions in any number of variables with rational coefficients, integrated over a domain cut out by a finite set of polynomial equalities and inequalities with rational coefficients, see Section \ref{sec310}. Many constants are known to be periods, in particular all zeta values $\{\zeta(n): n \ge 2\}.$ The set of all periods forms a ring ${\mathcal P}$ which includes the field $\overline{{\mathbb Q}}$ of all algebraic numbers. It follows that if Euler's constant is not a period, then it must be a transcendental number. Conjecture \ref{conj2} also implies that $\gamma$ would be $\overline{{\mathbb Q}}$-linearly independent of all odd zeta values $\zeta(3), \zeta(5), \zeta(7), ...$ and of $\pi$. This paper also presents results on another constant defined by Euler, the {\em Euler-Gompertz constant} \begin{equation} \label{104aa} {\delta} := \int_{0}^1 \frac{dv}{1-\log v} = \int_{0}^{\infty} \frac{e^{-t}}{1+t} dt = 0.59634~73623~23194 \dots \end{equation} Here we adopt the notation of Aptekarev \cite{Apt09} for this constant (in which ${\delta}$ follows $\gamma$) and refer to it by the name given by Finch \cite[Sec. 
6.2]{Fin03} (who denotes it $C_2$). Some results about $\delta$ intertwine directly with Euler's constant, see Sections \ref{sec34}, \ref{sec35}, \ref{sec311} and \ref{sec312}. Euler's name is also associated with other constants such as $e=2.71828...$ given by \begin{equation*} e := \sum_{n=0}^{\infty}\,\frac{1}{n!}. \end{equation*} He did not discover this constant, but did standardize the symbol ``$e$'' for its use. He used the notation $e$ in his 1737 essay on continued fractions (\cite[p. 120]{E71}, cf. \cite{EWW85}), where he determined its continued fraction expansion $$ e = [2; 1, 2,\, 1, 1, 4,\, 1,1, 6, \,1, 1, 8,\, \cdots ]= 2 + \cfrac{1}{1+ \cfrac{1}{2+ \cfrac{1}{1+ \cfrac{1}{1+ \cdots}}}} $$ and from its form deduced that it was irrational. Elsewhere he found the famous relation $e^{\pi i}= -1$. The constant $e$ is also conjectured not to be a Kontsevich-Zagier period. It is well known to be a transcendental number, consistent with this conjecture. In this paper we review Euler's work on the constant $\gamma$ and on mathematical topics connected to it and survey subsequent developments. It is correspondingly divided into two parts, which may be read independently. We have also aimed to make individual subsections readable out of order. The first part of this paper considers the work of Euler. Its emphasis is historical, and it retains some of the original notation of Euler. It is a tour of part of Euler's work related to zeta values and other constants and methods of finding relations between them. Euler did extensive work related to his constant: he returned to its study many times. The basic fact to take away is that Euler did an enormous amount of work directly on his constant and related areas, far more than is commonly appreciated. The second part of this paper addresses mathematical developments made since Euler's time concerning Euler's constant. 
Since Euler's constant is an unusual constant that seems unrelated to other known constants, its appearance in different mathematical subjects can be a signal of possible connections between these subjects. We present many contexts in which Euler's constant appears. These include connections to the Riemann hypothesis, to random matrix theory, to probability theory and to other subjects. There is a set of strong analogies between factorizations of random integers in the interval $[1, n]$ and cycle structures of random permutations in the symmetric group $S_N$, taking $N$ to be proportional to $\log n$. In this analogy there appears another constant that might have appealed to Euler: the {\em Golomb-Dickman constant}, which can be defined as $$ \lambda := \int_{0}^{1} e^{Li(x)} dx = 0.62432 \, 99885 \, 43550 \dots $$ where $Li(x) := \int_{0}^x \frac{dt}{\log t}$, see Section \ref{sec38}. The constant $\lambda$ appears in connection with the distribution of the longest cycle of a random permutation, while the constant $e^{-\gamma}$ appears in connection with the distribution of the shortest cycle. The discovery of this analogy is described at the beginning of Section \ref{sec38a}. It was found by Knuth and Trabb-Pardo \cite{KTP76} through noticing a numerical coincidence to $10$ decimal places between two computed constants, both subsequently identified with the Golomb-Dickman constant. In passing we present many striking and elegant formulas, which exert a certain fascination in themselves. Some of the formulas presented were obtained by specialization of more general results in the literature. Other works presenting striking formulas include the popular book of J. Havil \cite{Hav03} devoted extensively to mathematics around Euler's constant, and Finch \cite[Sections 1.5 and 4.5]{Fin03}. In Section \ref{sec4} we make concluding remarks and briefly describe other directions of work related to Euler's constant. 
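As a single concrete instance of such an appearance, which may serve as orientation for the second part, a classical theorem of Mertens (1874) expresses $e^{-\gamma}$ as a limit over primes:

```latex
\begin{equation*}
\lim_{x \to \infty} \, \log x \, \prod_{p \le x} \left( 1 - \frac{1}{p} \right) \,=\, e^{-\gamma},
\end{equation*}
```

the product being taken over all primes $p \le x$.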
\renewcommand{\theequation}{\arabic{section}.\arabic{subsection}.\arabic{equation}} \section{Euler's work}\label{sec2} \setcounter{equation}{0} Euler (1707--1783) was born in Basel, and entered the University of Basel at age 13, to study theology. He also received private tutorials in mathematics from Johann Bernoulli (1667--1748), who was Professor of Mathematics at the University of Basel. Johann recognized his mathematical gift and convinced his family to let him switch to mathematics. Euler was excellent at computation and had an essentially photographic memory. He had great persistence, and returned over and over again to problems that challenged him. Here we only scratch the surface of Euler's enormous output, which contains over 800 works. These include at least $20$ books, surveyed in Gautschi \cite{Ga08b}. We trace a thread comprising results corresponding to one of his interests, the determination of specific constants given by infinite sums, definite integrals, infinite products, and divergent series, and finding identities giving interrelations among these constants. Ancillary to this topic is the related problem of determining numerical values of these constants; computing such values allows one to guess and test possible identities. In his work Euler made extensive calculations, deriving approximations of numbers to many decimal places. He often reported results of calculations obtained using various approximation schemes and sometimes compared the values he obtained by the different methods. In Euler's numerical results reported below the symbol $\approx$ is used to mean ``the result of an approximate calculation''. The symbol $\approx$ does not imply agreement to the number of digits provided. An underline of a digit in such an expansion indicates the first place where it disagrees with the known decimal expansion. We refer to Euler's papers by their index numbers in the En\"{e}strom catalogue (\cite{Ene13}). 
These papers may be found online in the Euler archive (\cite{EA}). For discussions of Euler's early work up to 1750 see Sandifer \cite{San07b}, \cite{San07}. For his work on the Euler(-Maclaurin) summation formula see Hofmann \cite{Hof57} and Knoebel et al. \cite[Chapter 1]{KLLP07}. For other discussions of his work on the zeta function see St\"{a}ckel \cite{Sta07}, Ayoub \cite{Ay74}, Weil \cite[Chap. 3] {We84} and Varadarajan \cite[Chap. 3]{Var08}. For Euler's life and work see Calinger \cite{Cal07} and other articles in Bradley and Sandifer \cite{BS07}. \subsection{Background}\label{sec20} \setcounter{equation}{0} Euler spent much effort evaluating the sums of reciprocal powers \begin{equation}\label{200b} \zeta(m) := \sum_{j=1}^{\infty} \frac{1}{j^m}. \end{equation} for integer $m \ge 2$. He obtained the formula $\zeta(2) = \frac{\pi^2}{6}$, solving the ``Basel problem,'' as well as formulas for all even zeta values $\zeta(2n).$ He repeatedly tried to find analogous formulas for the odd zeta values $\zeta(2n+1)$ which would explain the nature of these numbers. Here he did not completely succeed, and the nature of the odd zeta values remains unresolved to this day. Euler's constant naturally arises in this context, in attempting to assign a value to $\zeta(1)$. Euler discovered the constant $\gamma$ in a study of the {\em harmonic numbers} \begin{equation}\label{200c} H_n := \sum_{j=1}^n \frac{1}{j}, \end{equation} which are the partial sums of the harmonic series. The series $``\zeta(1)" = \sum_{n=1}^{\infty} \frac{1}{n}$ diverges, and Euler approached it by (successfully) finding a function $f(z)$ (of a real or complex variable $z$) that interpolates the harmonic number values, i.e. it has $f(n)=H_n$ for integer $n \ge 1$. The resulting function, denoted $H_{z}$ below, is related to the gamma function via \eqn{201c}. 
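The closed form behind the phrase ``formulas for all even zeta values'' above may be recorded here for reference; in the Bernoulli-number convention of \eqn{221} below it reads:

```latex
\begin{equation*}
\zeta(2n) \,=\, \frac{(-1)^{n+1}\, (2\pi)^{2n}\, B_{2n}}{2\, (2n)!}, \qquad n \ge 1,
\end{equation*}
```

which recovers $\zeta(2)=\frac{\pi^2}{6}$ at $n=1$. No analogous closed form is known for the odd zeta values.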
In studying it he also studied, for integer $m$, the related sums \begin{equation}\label{200d} H_{n,m}:= \sum_{j=1}^n \,\frac{1}{j^m}, \end{equation} which we will term {\em $m$-harmonic numbers}; in this notation $\, H_{n,1} = H_n$. Euler obtained many inter-related results on his constant $\gamma$. These are intertwined with his work on the gamma function and on values of the (Riemann) zeta function. In Sections 2.2 - 2.4 we present results that he obtained for Euler's constant, for the gamma function and its logarithmic derivative, and for values of the zeta function at integer arguments, concentrating on explicit formulas. In Section 2.5 we discuss a paper of Euler on how to sum divergent series, applied to the example $0!-1!+2!- 3!+ \cdots$. His analysis produced a new constant, the Euler-Gompertz constant $\delta$, given above in \eqn{104aa}. In Section 2.6 we review the history of the name Euler-Mascheroni constant, the adoption of the notation $\gamma$ for it, and summarize Euler's approach to research. Taken together Euler's many papers on his constant $\gamma$, on the related values $\zeta(n)$ for $n \ge 2$, and other values show his great interest in individual constants (``notable numbers") and in their arithmetic interrelations. \subsection{Harmonic series and Euler's constant} \label{sec21} \setcounter{equation}{0} One strand of Euler's early work involved finding continuous interpolations of various discrete sequences. For the sequence of factorials this led to the gamma function, discussed below. In the 1729 paper \cite[E20]{E20} he notes that the harmonic numbers $H_n= \sum_{k=1}^n \frac{1}{k}$ are given by the integrals \begin{equation*} H_n:= \int_{0}^1 \frac{1- x^n}{1-x} dx. \end{equation*} He proposes a function interpolating $H_n$ by treating $n$ as a continuous variable in this integral. The resulting interpolating function, valid for real $z \ge 0$, is \begin{equation}\label{201aa} H_{z} := \int_{0}^1 \frac{1- x^z}{1-x} dx. 
\end{equation} Using this definition, he derives the formula \begin{equation*} H_{\frac{1}{2}}= 2- 2 \log 2. \end{equation*} The function $H_z$ is related to the digamma function $\psi(z) = \frac{d}{dz} \log \Gamma(z)$ by \begin{equation}\label{201c} H_{z} = \psi(z+1)+ \gamma, \end{equation} see Section \ref{sec22} and also Theorem~\ref{th30}. In a paper written in 1731 (``{\em On harmonic progressions}") Euler \cite[E43]{E43} summed the harmonic series in terms of zeta values, as follows, and computed Euler's constant to $5$ decimal places. \begin{theorem} \label{th21} {\rm (Euler 1731)} The limit $$ \gamma= \lim_{n \to \infty} \left(H_n - \log n \right) $$ exists. It is given by the (conditionally) convergent series \begin{equation}\label{202} \gamma = \sum_{n=2}^{\infty} (-1)^n \frac{\zeta(n)}{n}. \end{equation} \end{theorem} Euler explicitly observes that this series \eqn{202} converges (conditionally) since it is an alternating series with decreasing terms. He reports that the constant $\gamma \approx 0.57721\underline{8}$. Euler obtains the formula \eqn{202} using the expansion $\log (1+x) = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{x^k}{k},$ which was found by Nicholas Mercator (c.1620--1687) \cite{Mercator1668} in 1668. 
Evaluating this formula at $x= 1, \frac{1}{2}, ..., \frac{1}{n}$ Euler observes the following in tabular form: \begin{eqnarray*} \log 2 &=& 1 - \frac{1}{2} \cdot \Big(\frac{1}{1}\Big)^2 + \frac{1}{3} \cdot \Big(\frac{1}{1}\Big)^3 - ...\\ \log \frac{3}{2} &=& \frac{1}{2} - \frac{1}{2}\cdot \Big(\frac{1}{2}\Big)^2 + \frac{1}{3} \cdot \Big(\frac{1}{2}\Big)^3 - ...\\ \log \frac{4}{3} &=& \frac{1}{3} - \frac{1}{2}\cdot \Big(\frac{1}{3}\Big)^2 + \frac{1}{3} \cdot \Big(\frac{1}{3}\Big)^3 - ...\\ \log \frac{5}{4} &=& \frac{1}{4} - \frac{1}{2}\cdot \Big(\frac{1}{4}\Big)^2 + \frac{1}{3} \cdot \Big(\frac{1}{4}\Big)^3 - ...\\ \end{eqnarray*} Summing by columns the first $n$ terms, he obtains \begin{equation}\label{203} \log (n+1) = H_n - \frac{1}{2} H_{n,2} + \frac{1}{3} H_{n,3} - \cdots \end{equation} One may rewrite this as $$ H_n - \log (n+1) = \frac{1}{2} H_{n,2} - \frac{1}{3} H_{n,3} + \cdots $$ Now one observes that for $j \ge 2$, $$ \lim_{n \to \infty} H_{n,j} = H_{\infty,j} = \zeta (j). $$ Taking this limit as $n \to \infty$ term by term in \eqn{203} formally gives the result \eqn{202}, since $\log(n+1) - \log n \to 0$, so that $H_n - \log(n+1)$ has the same limit $\gamma$ as $H_n - \log n$. We note that such representations of infinite sums in a square array, to be summed in two directions, were already used repeatedly by Johann Bernoulli's older brother Jacob Bernoulli (1654--1705) in his 1689 book on summing infinite series (\cite{Bernoulli1689}). 
Another nice example of an identity found by array summation is \begin{equation}\label{220} (\zeta(2)-1) + (\zeta(3)-1)+ (\zeta(4)-1) + \cdots = 1, \end{equation} using \begin{eqnarray*} \zeta(2) -1 &=& \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \frac{1}{5^2} + \cdots \\ \zeta(3) -1 &=& \frac{1}{2^3} + \frac{1}{3^3} + \frac{1}{4^3} + \frac{1}{5^3} + \cdots \\ \zeta(4) -1 &=& \frac{1}{2^4} + \frac{1}{3^4} + \frac{1}{4^4} + \frac{1}{5^4} + \cdots \\ \end{eqnarray*} since the column sums yield the telescoping series $$ \frac{1}{2^2} \times 2 + \frac{1}{3^2} \times \frac{3}{2} + \frac{1}{4^2} \times \frac{4}{3}+ \cdots = \frac{1}{1\cdot 2} + \frac{1}{2 \cdot 3} + \frac{1}{3 \cdot 4} + \cdots = 1. $$ Compare \eqn{220} to Euler's expression \eqn{228} below for Euler's constant. In 1732 Euler \cite[E25]{E25} stated the first form of his summation formula, without a proof. He then gives it in detail in his 1735 paper \cite[E47]{E47}, published 1741. Letting $s$ denote the sum of the function $t(n)$ over some set of integer values (say $n=1$ to $N$), he writes \begin{equation*} s= \int t dn + \alpha t + \beta\frac{dt}{dn} + \gamma \frac{d^2 t}{dn^2} + \delta \frac{d^3 t}{dn^{3}} + \mbox{etc.} \end{equation*} where $\alpha= \frac{1}{2}$, $\beta = \frac{\alpha}{2} - \frac{1}{6}$, $\gamma= \frac{\beta}{2} - \frac{\alpha}{6} + \frac{1}{24}$, $\delta= \frac{\gamma}{2} -\frac{\beta}{6} + \frac{\alpha}{24} - \frac{1}{120}$, etc. These linear equations for the coefficients are solvable to give $\alpha=\frac{1}{2}, \beta= \frac{1}{12}, \gamma= 0, \delta= -\frac{1}{720}$ etc. Thus $$ s= \int t dn + \frac{t}{2} + \frac{1}{12} \frac{dt}{dn} - \frac{1}{720} \frac{d^3 t}{dn^{3}} + \mbox{etc}. $$ The terms on the right must be evaluated at both endpoints of this interval being summed. 
In modern terms, $$ \sum_{n=1}^N t(n) = \int_{0}^N t(n)\, dn + \frac{1}{2}\Big(t(N)- t(0)\Big) + \frac{B_2}{2!}\Big( \frac{d\,t(N)}{dn} - \frac{d\,t(0)}{dn}\Big) + \frac{B_4}{4!}\Big( \frac{d^3\, t(N)}{dn^3} - \frac{d^3\, t(0)}{dn^3} \Big) + \cdots $$ where the $B_{2k}$ are Bernoulli numbers, defined in \eqn{221} below, and the sum must be cut off at a finite point with a remainder term (cf. \cite[Sec. 1.0]{Ten95}). Euler's formulas omit any remainder term, and he applies them by cutting off the right side sum at a finite place, and then using the computed value of the truncated right side sum as an approximation to the sum on the left, in effect making the assumption that the associated remainder term is small. A similar summation formula was obtained independently by Colin Maclaurin (1698--1746), who presented it in his 1742 two-volume work on Fluxions \cite{Mac1742}. A comparison of their formulas is made by Mills \cite{Mil85}, who observes they are substantially identical, and that neither author keeps track of the remainder term. It is appropriate that this formula, now with the remainder term included, is called the {\em Euler-Maclaurin summation formula.} Jacob Bernoulli introduced the numbers now bearing his name (with no subscripts), in {\em Ars Conjectandi} \cite{Bernoulli1713}, published posthumously in 1713. They arose in evaluating (in modern notation) the sums of $n$-th powers $$ \sum_{k=1}^{m-1} k^n= \frac{1}{n+1} \left(\sum_{k=0}^n \left( {{n+1}\atop{k}}\right) B_k \, m^{n+1-k} \right). $$ We follow the convention that the {\em Bernoulli numbers} $B_n$ are those given by the expansion \begin{equation}\label{221} \frac{t}{e^t - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!} t^n, \end{equation} so that \[ B_1= -\frac{1}{2},~ B_2= \frac{1}{6}, ~B_3=0, ~B_4 = -\frac{1}{30}, ~B_5=0, ~B_6= \frac{1}{42}, ... 
\] Euler later named the Bernoulli numbers in his 1768 paper (in \cite[E393]{E393}) (``{\em On the sum of series involving the Bernoulli numbers}"), and there listed the first few as $\frac{1}{6}, \frac{1}{30}, \frac{1}{42}, \frac{1}{30}, \frac{5}{66}.$ His definition led to the convention\footnote{However in \cite{E393} and elsewhere Euler does not use subscripts, writing ${\mathfrak A}, {\mathfrak B}, {\mathfrak C}$ etc.} that the notation $B_n$ corresponds to $|B_{2n}|$ in \eqn{221}. This convention is followed in the classic text of Whittaker and Watson \cite[p. 125]{WW63}. In his 1755 book on differential calculus \cite[E212, Part II, Chap. 5 and 6]{E212} Euler presented his summation formula in detail, with many worked examples. (See Pengelley \cite{Pe00}, \cite{Pe07} for a translation and for a detailed discussion of this work.) In Sect. 122 Euler computes the Bernoulli numbers $|B_{2n}|$ through $n=15$. In Sect. 129 he remarks that: \begin{quote} The [Bernoulli numbers] form a highly divergent sequence, which grows more strongly than any geometric series of growing terms. \footnote{``pariter ac Bernoulliani ${{\mathfrak A}, \mathfrak B}, {\mathfrak C}, {\mathfrak D}$ \&c. serium maxime divergentem, quae etiam magis increscat, quam ulla series geometrica terminis crescentibus procedens.'' [Translation: David Pengelley \cite{Pe00}, \cite{Pe07}.]} \end{quote} \noindent In Sect. 142 he considers the series for the harmonic numbers $s=H_{x}= \sum_{n=1}^x \frac{1}{n}$, where his summation formula reads (in modern notation) \begin{equation}\label{222} s = \log x + \left( \frac{1}{2x}- \frac{|B_2|}{2x^2} + \frac{|B_4|}{4x^4} - \frac{|B_6|}{6x^6} + \cdots \right) + C, \end{equation} where the constant $C$ to be determined is Euler's constant. He remarks that this series is divergent. 
He substitutes $x=1$, stating that $s=1$ in \eqn{222}, obtaining formally \[ C = \frac{1}{2} + \frac{|B_2|}{2} - \frac{|B_4|}{4} + \frac{|B_6|}{6} - \frac{|B_8|}{8} + \cdots \] although the right side diverges. Substituting this expression in \eqn{222} yields \begin{eqnarray*} s &=& \log x + \left( \frac{1}{2x}- \frac{|B_2|}{2x^2} + \frac{|B_4|}{4x^4} - \frac{|B_6|}{6x^6} + \frac{|B_8|}{8x^8}- \cdots \right) \\ && ~\quad\quad + \left( \frac{1}{2}\, + \frac{|B_2|}{2} - \frac{|B_4|}{4} + \frac{|B_6|}{6} - \frac{|B_8|}{8} + \cdots\right). \end{eqnarray*} In Sect. 143 he sets $x=10$ in \eqn{222} so that $s= H_{10}$, and gives the numerical value $s=2.92896~82539~68253~968$. The right side contains $C$ as an unknown, and by evaluating the other terms (truncating at a suitable term) he finds $$ C \approx 0.57721~56649~01532~\underline{5} $$ a result accurate to $15$ places. We note that the divergent series on the right side of \eqn{222} is the asymptotic expansion of the digamma function $\psi(x+1) +\gamma$, where $\psi(x)= \frac{\Gamma'(x)}{\Gamma(x)}$, see Theorem~\ref{th32}, and in effect Euler truncates the asymptotic expansion at a suitable term to obtain a good numerical approximation. Over his lifetime Euler obtained many different numerical approximations to Euler's constant. He calculated approximations using truncated divergent series, and his papers report the value obtained, sometimes with a comment on accuracy, sometimes not. In contrast, some of the convergent series he obtained for Euler's constant converge slowly and proved of little value for numerics. He also explored various series acceleration methods, which he applied to both convergent and divergent series. He compared and cross-checked his work with various different numerical methods. In this way he obtained confidence as to how many digits of accuracy he had obtained using the various methods, and also obtained information on the reliability of his calculations using divergent series. 
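To make the calculation just described explicit, here is a reconstruction (our arrangement, not Euler's) of the truncation of \eqn{222} at $x=10$, using the value of $s=H_{10}$ quoted above:

```latex
\begin{eqnarray*}
\gamma \,&\approx&\, H_{10} - \log 10 - \frac{1}{20} + \frac{|B_2|}{2 \cdot 10^2}
   - \frac{|B_4|}{4 \cdot 10^4} + \frac{|B_6|}{6 \cdot 10^6} \\
&=& 2.92896~82539~7 \,-\, 2.30258~50929~9 \,-\, 0.05 \,+\, \frac{1}{1200}
   \,-\, \frac{1}{1200000} \,+\, \frac{1}{252000000} \\
&\approx& 0.57721~56649~4,
\end{eqnarray*}
```

already correct to ten decimal places; carrying the expansion through the $|B_{12}|$ term reproduces the fifteen-place value $0.57721~56649~01532$.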
In 1765 in a paper studying the gamma function \cite[E368]{E368} (which will be discussed in more detail in Section 2.3) he derives the formula $\Gamma'(1) = - \gamma.$ It is given in Section 14 of his paper as equation \eqn{273} in Section 2.3, which is equivalent to the integral formula \begin{equation}\label{225c} -\gamma = \int_{0}^{\infty} e^{-x} \log x \, dx. \end{equation} This result follows from differentiation under the integral sign of Euler's integral \eqn{272} in Section \ref{sec23}. In a 1768 paper mainly devoted to relating Bernoulli numbers to values of $\zeta(n)$, Euler \cite[E393]{E393} obtains more formulas for $\gamma$ (denoting it $O$). In Section 24 he reports the evaluation of $\gamma$ found earlier and remarks: \begin{quote} $O= 0,57721~56649~01532~5$. This number seems also the more noteworthy because even though I have spent much effort in investigating it, I have not been able to reduce it to a known kind of quantity. \footnote{``$O=0.5772156649015325$ qui numerus eo maiori attentione dignus videtur, quod eum, cum olim in hac investigatione multum studii consumsissem, nullo modo ad cognitum quantitatum genus reducere valui.'' [Translation: Jordan Bell]} \end{quote} \noindent He derives in Section 25 the integral formula \begin{equation}\label{226a} \gamma = \int_{0}^{\infty} \left( \frac{e^{-y}}{1-e^{-y}} - \frac{e^{-y}}{y} \right) dy. \end{equation} Here we may note that the integrand \[ \frac{e^{-y}}{1-e^{-y}} - \frac{e^{-y}}{y}= \frac{1}{y}\Big( \frac{y}{e^y -1} - e^{-y} \Big)= \sum_{n=1}^{\infty} \Big(B_n - (-1)^n \Big)\frac{y^{n-1}}{n!}. 
\] In Section 26 by change of variables he obtains \begin{equation}\label{227a} \gamma = \int_{0}^{1} \left( \frac{1}{1-z}+ \frac{1}{\log z} \right) dz \end{equation} In Section 27 he gives the following new formula for Euler's constant \begin{equation}\label{228} \gamma= \frac{1}{2} (\zeta(2)-1) + \frac{2}{3}(\zeta(3) -1) + \frac{3}{4}(\zeta(4) -1) + \cdots \end{equation} and in Section 28 he obtains a formula in terms of $\log 2$ and the odd zeta values: \begin{equation*} \gamma = \frac{3}{4} - \frac{1}{2}\log 2 + \sum_{k=1}^{\infty} \Big( 1-\frac{1}{2k+1} \Big)\Big(\zeta(2k+1)-1\Big) \end{equation*} He concludes in Section 29: \begin{quote} Therefore the question remains of great moment, of what character the number $O$ is and among what species of quantities it can be classified.\footnote{``Manet ergo quaestio magni momenti, cujusdam indolis sit numerus iste $O [:=\gamma]$ et ad quodnam genus quantitatum sit referendus.''[Translation: Jordan Bell]} \end{quote} In a 1776 paper \cite[E583]{E583} ({\em On a memorable number naturally occurring in the summation of the harmonic series}), \footnote{De numero memorabili in summatione progressionis harmonicae naturalis occurrente} Euler studied the constant $\gamma$ in its own right. This paper was not published until 1785, after his death. In Section 2 he speculates that the number $N= e^{\gamma}$ should be a notable number: \begin{quote} And therefore, if $x$ is taken to be an infinitely large number it will then be $$ 1+ \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{x} = C + \log x. $$ One may suspect from this that the number $C$ is the hyperbolic [= natural] logarithm of some notable number, which we put $=N$, so that $C= \log N$ and the sum of the infinite series is equal to the logarithm of the number $N\cdot x$. 
Thus it will be worthwhile to inquire into the value of this number $N$, which indeed it suffices to have defined to five or six decimal figures, since then one will be able to judge without difficulty whether this agrees with any known number or not.\footnote{``Quod si ergo numerus $x$ accipiatur magnus, tum erit $$ 1+ \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{z} = C + lx $$ istum numerum $C [:= \gamma]$ esse logarithmum hyperbolicum cuiuspiam numeri notabilis, quem statuamus $=N$, ita ut sit $C= \log N$, et summa illius seriei infinitae aequeture logarithmo numeri $Nx$, under operae pretium erit in valorem huius numeri $N$ inquirerre, quem quidem sufficiet ad quinque vel fex figuras decimales definiuisse, quoniam hinc non difficulter iduicari poterit, num cum quopiam numero cogito conueniat nec ne.'' [Translation: Jordan Bell]} \end{quote} \noindent Euler evaluates $N = 1.7810\underline{6}$ and says he cannot connect it to any number he knows. He then gives several new formulae for $\gamma$, and tests their usefulness for calculating its value, comparing the values obtained numerically with the value $0.57721~56649~01532~5$ that he previously obtained. His first new formula in Sect. 6 is \begin{equation*} 1- \gamma = \frac{1}{2}(\zeta(2) -1) + \frac{1}{3} (\zeta(3)-1) + \frac{1}{4}(\zeta(4)-1) + \cdots \end{equation*} Using the first $16$ terms of this expansion, he finds $\gamma \approx 0.57721~\underline{6}9$. He finds several other formulas, including, in Sect. 15, the formula \begin{equation*} 1 - \log \frac{3}{2} - \gamma= \frac{1}{3\cdot 2^2} (\zeta(3)-1) + \frac{1}{5 \cdot 2^4}(\zeta(5)-1) + \frac{1}{7\cdot 2^6} (\zeta(7) -1) + \cdots, \end{equation*} from which he finds $\gamma \approx 0.57721~56649~01\underline{7}91~3$, a result accurate to $12$ places. He concludes the paper with a list of eight formulas relating Euler's constant, $\log 2$, and even and odd zeta-values. 
One striking formula (number VII), involving the odd zeta values is \begin{equation*} \gamma= \log 2 - \sum_{k=1}^{\infty} \frac{\zeta(2k+1)}{(2k+1)2^{2k}}. \end{equation*} In another 1776 paper \cite[E629]{E629} ({\em The expansion of the integral formula $\int \partial x( \frac{1}{1-x} +\frac{1}{\ell x})$ with the term extended from $x=0$ to $x=1$}), which was not published until 1789, Euler further investigated the integral formula \eqn{227a} for his constant. He denotes its value by $n$, and says: \begin{quote} the number $n$, [...] which is found to be approximately $ 0, 57721~56649~01532~5$, whose value I have been able in no way to reduce to already known transcendental measures; therefore, it will hardly be useless to try to resolve the formula in many different ways.\footnote{``numerus $n$ [...], et quem per approximationem olim inveni esse $=0, 5772156649015325$, cuius valorem nullo adhuc modo ad mensuras transcendentes iam cogitas redigere potui; unde haud inutile erit resolutionem huius formulae propositae pluribus modis tentare.''[Translation: Jordan Bell]} \end{quote} \noindent He then finds several expansions for related integrals. One of these starts from the identity, valid for $m, n \ge 0$, \begin{equation*} \int_{0}^1 \frac{x^m -x^n}{\log x}dx = \log \left( \frac{m+1}{n+1}\right) , \end{equation*} and from this he deduces, for $n \ge 1$, that \begin{equation*} \int_{0}^{1} \frac{(1-x)^n}{\log x} dx = \log \left( \prod_{j=0}^{n} (j+1)^{ (-1)^{j} \left({{n}\atop{j}}\right)}\right). \end{equation*} This formula gives the moments of the measure $\frac{dx}{\log(1-x)}$ on $[0,1]$, as seen by substituting $x \mapsto 1-x$. \subsection{The gamma function} \label{sec22} \setcounter{equation}{0} Euler was not the originator of work on the factorial function. This function was initially studied by John Wallis (1616--1703), followed by Abraham de Moivre (1667--1754) and James Stirling (1692--1770). 
Their work, together with work of Euler, Daniel Bernoulli (1700--1782) and Lagrange (1736--1813) is reviewed in Dutka \cite{Dut91}. In 1729, in correspondence with Christian Goldbach, Euler discussed the problem of obtaining continuous interpolations of discrete series. He described a (convergent) infinite product interpolating the factorial function, $m!$, \begin{equation*} \frac{1 \cdot 2^m}{1+m} \cdot \frac{2^{1-m}\cdot 3^m}{2+m}\cdot \frac{3^{1-m}\cdot 4^m}{3+m} \cdot \frac{4^{1-m}\cdot 5^m}{4+m} \cdot etc. \end{equation*} This infinite product, grouped as shown, converges absolutely on the region ${\mathbb C} \smallsetminus \{ -1, -2, ...\}$, to the gamma function, which we denote $\Gamma(m+1)$, following the later notation of Legendre. John Wallis (1616--1703) had earlier called this function the ``hypergeometric series". Euler substitutes $m=\frac{1}{2}$ and finds $$ \Gamma \Big(\frac{3}{2}\Big)=\frac{1}{2} \sqrt{\pi}, $$ by recognizing a relation of this infinite product to Wallis's infinite product for $\frac{\pi}{2}$. In this letter he also introduced the notion of fractional integration, see Sandifer \cite{San07}. Also in 1729 Euler \cite[E19]{E19} wrote a paper presenting these results ( {\em On transcendental progressions, that is, those whose general term cannot be given algebraically}), but it was not published until 1738. This paper describes the infinite products, suitable for interpolation. It then considers {\em Euler's first integral}\footnote{Here the letter $e$ denotes an integer. Only in a later paper, [E71], written in 1734, does Euler use the symbol $e$ to mean the constant $2.71828...$.} \begin{equation*} \int_{0}^1 x^e (1-x)^n dx \end{equation*} which in modern terms defines the {\em Beta function} $B(e+1, n+1)$, derives a recurrence relation from it, and in Sect. 9 derives $B(e+1, n+1) = \frac{e! n!}{(e+n+1)!}$. 
More generally, in the later notation of Legendre this extends to the Beta function identity \begin{equation}\label{250c} B(r, s) : = \int_{0}^1 x^{r-1} (1-x)^{s-1} dx = \frac{\Gamma(r) \Gamma(s)}{\Gamma( r+s)}. \end{equation} relating it to the gamma function. Euler \cite[Sect. 14]{E19} goes on to obtain a continuous interpolation of the factorial function $f(n) = n!$, given by the integral \begin{equation}\label{251} f(n) = \int_{0}^{1} ( - \log x)^n dx, \end{equation} where $n$ is to be interpreted as a continuous variable, $n >-1$. He then computes formulas for $\Gamma \left( 1+ \frac{n}{2} \right)$ at half-integer values \cite[Sect. 20]{E19}. In 1765, but not published until 1769, Euler \cite[E368]{E368} (``{\em On a hypergeometric curve expressed by the equation $y=1* 2 * 3* \dots *x$}") studied the ``hypergeometric curve" given (using Legendre's notation) by $y = \Gamma(x+1)$, where $$ \Gamma(x) := \int_{0}^{\infty} t^{x-1} e^{-t}dt. $$ Euler interpolates the factorial function by an infinite product. In Sect. 5 he gives the product formula \begin{equation*} y= a^x \prod_{k=1}^{\infty}\left( \frac{k}{k+x}\left(\frac{a+k}{a+k-1}\right)^x\right) \end{equation*} with any fixed $a$ with $1<a<x$, and with $a= \frac{1+x}{2}$ obtains $$ y = \left(\frac{1+x}{2}\right)^x\prod_{k=1}^{\infty} \left(\frac{k}{k+x} \left(\frac{2k+1 +x}{2k-1+x}\right)^x\right) $$ In Sect. 9 he derives a formula, which in modern notation reads \begin{equation}\label{271} \log y = -\gamma x + \sum_{n=2}^{\infty} (-1)^n \frac{\zeta(n)}{n} x^n, \end{equation} where $\gamma$ is Euler's constant. In Sect. 11 he makes an exponential change of variables from his formula \eqn{251} to obtain the integral formula \begin{equation}\label{272} y= \int_{0}^{\infty} e^{-v} v^x dv, \end{equation} which is (essentially) the standard integral representation for the gamma function. 
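As a sanity check on the interpolation \eqn{251} and the Beta identity \eqn{250c}, one can integrate numerically. In the sketch below (plain composite Simpson quadrature, our choice of method), the substitution $t = -\log x$ turns \eqn{251} into $\int_0^\infty t^n e^{-t}\,dt = n!$, and the Beta integrand for $r=2$, $s=3$ is a cubic polynomial, which Simpson's rule integrates exactly.

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule with n (even) panels
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Eqn (251) after the substitution t = -log x: integral_0^inf t^m e^{-t} dt = m!
m = 5
fact = simpson(lambda t: t ** m * math.exp(-t), 0.0, 60.0)

# Beta identity (250c) with r = 2, s = 3: B(2,3) = Gamma(2)Gamma(3)/Gamma(5) = 1/12
beta23 = simpson(lambda x: x * (1 - x) ** 2, 0.0, 1.0)
```

Truncating the factorial integral at $t=60$ discards less than $10^{-17}$ of its mass.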
This formula shows that the ``hypergeometric curve" is indeed given as\footnote{If one instead used the notation $\Pi(x)$ for this integral used by Gauss, cf. Edwards \cite[p. 8]{Ed74}, then the hypergeometric curve is $y= \Pi(x).$ Gauss made an extensive study of hypergeometric functions.} \begin{equation*} y = \Gamma( x+1). \end{equation*} In Sect. 12 he obtains a formula for the digamma function $\psi(x) = \frac{\Gamma'}{\Gamma}(x)$, which is \begin{equation*} \frac{\Gamma'(x+1)}{\Gamma(x+1)} := \frac{dy}{y\,dx} = -\gamma + \sum_{k=1}^{\infty} \frac{x}{k(x+k)}. \end{equation*} In formula VII in this section he gives the asymptotic expansion of $\log \Gamma(x+1)$, (Stirling's series), expressed as \begin{eqnarray}\label{stirling} \quad \log y &= & \frac{1}{2}\log 2 \pi + (x +\frac{1}{2}) \log x \nonumber\\ \\ && - \, x + \frac{A}{2x} - \frac{1\cdot 2}{2^3} \frac{B}{x^3} + \frac{1 \cdot 2 \cdot 3\cdot 4}{2^5} \frac{C}{x^5} - \frac{ 1\cdot 2 \cdot 3 \cdot 4\cdot 5 \cdot 6}{2^7} \frac{D}{x^7}+ \mbox{etc.} \nonumber \end{eqnarray} with $A= \frac{1}{6}, $ $B= \frac{1}{90}$, $C= \frac{1}{945}$, $D= \frac{1}{9450}$, ... Euler also gives here the value of Euler's constant as $\gamma \approx 0, 57721~56649~01\underline{4}22~5.$ In Sect. 14 he finds the explicit value \begin{equation}\label{273} \Gamma'(1) :=\frac{dy}{dx}\, \large{\mid}_{x=0}\, = - \gamma. \end{equation} In Sect. 15 he uses $\Gamma(\frac{3}{2})= \frac{1}{2} \sqrt{\pi}$ to derive the formula $$ \frac{\Gamma'}{\Gamma}\left(\frac{3}{2}\right) = -\gamma + 2 - 2 \log 2. $$ In Sect. 26 he observes that, for $x=n$ a positive integer, and with $a$ treated as a variable, \begin{equation}\label{274} \Gamma(x+1) = \sum_{k=0}^{\infty} (-1)^k \left( {{x}\atop{k}}\right) (a-k)^x, \end{equation} the series terminates at $k=n$, and the resulting function is independent of $a$. 
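Euler's series for the digamma function can be checked directly. The sketch below is illustrative only: the cutoff $K$ and the tail estimate $x/K$ (which approximates the omitted terms $\sum_{k\ge K} x/(k(x+k))$) are our own devices. It verifies $\Gamma'(1) = -\gamma$ from \eqn{273} and the Sect. 15 value of $\frac{\Gamma'}{\Gamma}(\frac{3}{2})$.

```python
import math

GAMMA = 0.5772156649015329

def psi_series(x, K=10 ** 6):
    # Euler's series: psi(x+1) = -gamma + sum_{k>=1} x/(k(x+k));
    # x/K is added as a crude estimate of the truncated tail.
    s = sum(x / (k * (x + k)) for k in range(1, K))
    return -GAMMA + s + x / K

# Gamma'(1)/Gamma(1) = psi(1) = -gamma  (eqn (273))
psi1 = psi_series(0.0)

# psi(3/2) = -gamma + 2 - 2 log 2  (Sect. 15)
psi32 = psi_series(0.5)
```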
This leads him to study in the remainder of the paper the series $$ s= x^n - m(x-1)^{n} + \frac{m(m-1)}{1\cdot 2} (x-2)^n + \frac{m(m-1)(m-2)}{1\cdot 2\cdot 3 } (x-3)^n +\&\mbox{c} $$ In this formula he sets $n = m + \lambda$, and he investigates the expansion $s= s(x, \lambda)$ as a function of two variables $(\lambda, x)$ which corresponds to the variables $(x-n, a)$ in \eqn{274}. These expansions involve Bernoulli numbers. \subsection{Zeta values}\label{sec23} \setcounter{equation}{0} Already as a young student of Johann Bernoulli (1667--1748) in the 1720's Euler was shown problems on the summation of infinite series of arithmetical functions, including in particular the sums of inverse $k$-th powers of integers. \noindent The problem of finding a closed form expression for this value, $\zeta(2)$, became a celebrated problem, called ``The Basel Problem." The Basel problem had been raised by Pietro Mengoli (1628-1686) in his 1650 book on the arithmetical series arising in geometrical quadratures (see \cite{ME06}, \cite{Giu91}). In that book Mengoli \cite[Prop. 17]{Mengoli1650} summed in closed form the series of (twice the) inverse triangular numbers \begin{equation*} \sum_{n=1}^{\infty}\, \frac{1}{n(n+1)} = \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \cdots = 1, \end{equation*} which is a telescoping sum. The problem became well known because it was also raised by Johann Bernoulli's older brother Jacob (also called James or Jacques) Bernoulli (1654--1705), earlier holder of the professorship of mathematics at the University of Basel. He wrote a book on infinite series (\cite{Bernoulli1689}) in 1689; and several additional works on them. A collection of this work appeared posthumously in 1713 \cite{Bernoulli1713}. 
This leads him to study in the remainder of the paper the series $$ s= x^n - m(x-1)^{n} + \frac{m(m-1)}{1\cdot 2} (x-2)^n - \frac{m(m-1)(m-2)}{1\cdot 2\cdot 3 } (x-3)^n +\&\mbox{c} $$ In this formula he sets $n = m + \lambda$, and he investigates the expansion $s= s(x, \lambda)$ as a function of two variables $(\lambda, x)$ which corresponds to the variables $(x-n, a)$ in \eqn{274}. These expansions involve Bernoulli numbers. \subsection{Zeta values}\label{sec23} \setcounter{equation}{0} Already as a young student of Johann Bernoulli (1667--1748) in the 1720's Euler was shown problems on the summation of infinite series of arithmetical functions, including in particular the sums of inverse $k$-th powers of integers. The problem of finding a closed form expression for the sum of the inverse squares, the value $\zeta(2)$, became a celebrated problem, called ``The Basel Problem." The Basel problem had been raised by Pietro Mengoli (1628--1686) in his 1650 book on the arithmetical series arising in geometrical quadratures (see \cite{ME06}, \cite{Giu91}). In that book Mengoli \cite[Prop. 17]{Mengoli1650} summed in closed form the series of (twice the) inverse triangular numbers \begin{equation*} \sum_{n=1}^{\infty}\, \frac{1}{n(n+1)} = \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \cdots = 1, \end{equation*} which is a telescoping sum. The problem became well known because it was also raised by Johann Bernoulli's older brother Jacob (also called James or Jacques) Bernoulli (1654--1705), earlier holder of the professorship of mathematics at the University of Basel. He wrote a book on infinite series (\cite{Bernoulli1689}) in 1689, and several additional works on them. A collection of this work appeared posthumously in 1713 \cite{Bernoulli1713}.
He could tell that the sum of inverse squares converged to a value between $1$ and $2$, using the triangular numbers, via $$ 1 < \sum_{n=1}^{\infty} \frac{1}{n^2} < 2 \left(\sum_{n=1}^{\infty} \frac{1}{n(n+1)} \right)= 2 \sum_{n=1}^{\infty}\left(\frac{1}{n}- \frac{1}{n+1}\right) = 2. $$ He was unable to relate this value to known constants, and wrote (\cite[art. XVII, p. 254]{Bernoulli1713}): \begin{quote} And thus by this Proposition, the sums of series can be found when the denominators are either triangular numbers less a particular triangular number or square numbers less a particular square number; by [the results of art.] XV, this happens for pure triangular numbers, as in the series $\frac {1}{1}+\frac{1}{3}+\frac{1}{6}+\frac{1}{10}+\frac{1}{15}$ $\&$c., while on the other hand, when the numbers are pure squares, as in the series $\frac{1}{1}+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\frac{1}{25}$ $\&$c., it is more difficult than one would have expected, which is noteworthy. If someone should succeed in finding what till now withstood our efforts and communicate it to us, we would be much obliged to them.\footnote{``Atque ita per hanc Propositionem, inveniri possunt summae serierum, cum denominatores sunt vel numeri Trigonales minuti alio Trigonali, vel Quadrati minuti alio Quadrato; ut \& per XV. quando sunt puri Trigonales, ut in serie $\frac{1}{1} + \frac{1}{3} + \frac{1}{6} + \frac{1}{10} + \frac{1}{15} $ $\&$c. 
at, quod notatu dignum, quando sunt puri Quadrati, ut in serie $\frac{1}{1}+ \frac{1}{4} + \frac{1}{9}+ \frac{1}{16}+ \frac{1}{25}$ $\&$c difficilior est, quam quis expectaverit, summae pervestigatio, quam tamen finitam esse, ex altera, qua manifesto minor est, colligimus: Si quis inveniat nobisque communicet, quod industriam nostram elusit hactenus, magnas de nobis gratias feret.''[Translation: Jordan Bell]} \end{quote} Here he says that if $ a \ge 1$ is an integer and the denominators range over $n(n+1) - a(a+1)$ or alternatively $n^2- a^2$, with $n \ge a+1$, then he can sum the series, but he cannot sum the second series when $a=0$. Euler solved the Basel problem by 1735 with his famous result that $$\zeta(2) = \frac{\pi^2}{6}.$$ Euler's first contributions on the Basel problem involved getting numerical approximations of $\zeta(2)$. In \cite[E20]{E20}, written in 1731, integrating his representation for the harmonic numbers led to a representation (in modern terms) \begin{equation*} \int_{0}^1 \frac{dx}{x} \int_{0}^x \frac{1- t^{n}}{1-t} dt = \sum_{j=1}^{n}\, \frac{1}{j^2}. \end{equation*} By a series acceleration method he obtained the formula \begin{equation*} \zeta(2) = (\log 2)^2 + \sum_{n=1}^{\infty} \, \frac{1}{2^{n-1}n^2}, \end{equation*} in which the final sum on the right is recognizable as the dilogarithm value $2\,{\rm Li}_2(\frac{1}{2}).$ Euler used this formula to calculate $\zeta(2) \approx 1.64492~4,$ cf. Sandifer \cite{San07}. He next obtained a much better approximation using Euler summation (the Euler-Maclaurin expansion). He announced his summation method in \cite[E25]{E25} in 1732, and gave details in \cite[E47]{E47}, a paper probably written in 1734. In this paper he computed Bernoulli numbers and polynomials, and also applied his summation formula to obtain $\zeta(2)$ and $\zeta(3)$ to 20 decimal places. In December 1735, in \cite[E41]{E41}, published in 1740, Euler gave three proofs that $\zeta(2) = \frac{\pi^2}{6}$. 
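Euler's accelerated series converges geometrically, as a few lines confirm; the code below is a modern restatement of the identity, not a reconstruction of Euler's hand computation.

```python
import math

# Euler's accelerated series: zeta(2) = (log 2)^2 + sum_{n>=1} 1/(2^(n-1) n^2).
# The sum equals 2 Li_2(1/2); successive terms halve, so 60 terms
# already exhaust double precision.
accel = math.log(2) ** 2 + sum(1.0 / (2 ** (n - 1) * n * n) for n in range(1, 60))
```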
The most well known of these treated $\sin x$ as though it were a polynomial with infinitely many roots at $\pm \pi, \pm 2 \pi, \pm 3\pi, \cdots$. He combines these roots in pairs, and asserts the infinite product formula \begin{equation}\label{262a} \frac{\sin x}{x} = \left( 1- \frac{x^2}{\pi^2}\right)\left( 1- \frac{x^2}{4\pi^2}\right)\left( 1- \frac{x^2}{9\pi^2}\right) \cdots, \end{equation} He then equates certain algebraic combinations of coefficients of the Taylor series expansion at $x=0$ on both sides of the equation. The right side coefficients give the power sums of the roots of the polynomials. Writing these Taylor coefficients as $1 - \alpha x^2 + \beta x^4- \gamma x^6 + \ldots$, he creates algebraic combinations giving the power sums of the roots, in effect obtaining\footnote{Euler only writes down the right sides of these equations.} \begin{eqnarray*} \frac{1}{\pi^2}\zeta(2) & = &\alpha\\ \frac{1}{\pi^4} \zeta(4) & = &\alpha^2 - 2\beta\\ \frac{1}{\pi^6} \zeta(6) & = & \alpha^3- 3 \alpha \beta + 3 \gamma, \end{eqnarray*} and so on. Now the coefficients $\alpha, \beta, \gamma$ may be evaluated as rational numbers from the power series terms of the expansion of $\frac{\sin x}{x}$ which Euler computes directly. In this way he determined formulas for $\zeta(2n)$ for $1\le n \le 6$. He also evaluated by a similar approach alternating sums of odd powers, obtaining in Sect. 10 the result of Leibniz (1646--1716) that \[ \sum_{k=0}^{\infty} (-1)^k\frac{1}{2k+1} = \frac{\pi}{4}. \] He views this as confirming his method, saying: \begin{quote} And indeed this is the same series discovered some time ago by {\em Leibniz}, by which he defined the quadrature of the circle. From this, if our method should appear to some as not reliable enough, a great confirmation comes to light here; thus there should not be any doubt about the rest that will be derived from this method. 
\footnote{``Atque haec est ipsa series a {\em Leibnitio} iam pridem prolata, qua circuli quadraturam definiuit. Ex quo magnum huius methodi, si cui forte ea non satis certa videatur, firmamentum elucet; ita ut de reliquis, quae ex hac methodo deriuabantur, omnino non liceat dubitari'' [Trans: Jordan Bell] } \end{quote} In Sect. 12 he obtained a parallel result for cubes, \[ \sum_{k=0}^{\infty}(-1)^k \frac{1}{(2k+1)^3}= \frac{\pi^3}{32}. \] These proofs are not strictly rigorous, because the infinite product expansion is not completely justified, but Euler makes some numerical cross-checks, and his formulas work. Euler was sure his answer was right from his previous numerical computations. In this same paper he gives two more derivations of the result, still susceptible to criticism, based on the use of infinite products. We note that directly equating the $2n$-th power series coefficients on both sides of \eqn{262a} would give the identity $$ \frac{(-1)^n}{(2n+1)!} = \frac{(-1)^n}{\pi^{2n}}\sum_{1 \le m_1 < m_2 < \cdots < m_n} \frac{1}{ m_1^2 m_2^2 \cdots m_n^2}. $$ The sum on the right hand side is the multiple zeta value $\zeta({2, 2, ..., 2})$ (using $n$ copies of $2$, see \eqn{MZV} below), and the case $n=1$ gives $\zeta(2)$. Relations like this may have inspired Euler's later study of multiple zeta values. Euler communicated his original proof to his correspondents Johann Bernoulli, Daniel Bernoulli (1700--1782), Nicolaus Bernoulli (1687--1759) and others. Daniel Bernoulli criticized it on the grounds that $\sin \pi x$ may have other complex roots, cf. \cite[Sec. 4]{Ay74}, \cite[p. 264]{We84}. In response to this criticism Euler eventually justified his infinite product expansion for $\sin \pi x$, obtaining a version that can be made rigorous in modern terms, cf. Sandifer \cite{San07}.
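The case $n=2$ of this identity, $\zeta(2,2) = \pi^4/5! = \pi^4/120$, can be checked by brute force. In the sketch below the truncation point $N$ is an arbitrary choice of ours, and the convergence is only of order $1/N$, so the tolerance is correspondingly loose.

```python
import math

# zeta(2,2) = sum_{m1 > m2 >= 1} 1/(m1^2 m2^2), accumulated with a running
# inner sum so the double sum costs only O(N) operations.
N = 100000
inner = 0.0    # sum_{m2 < m1} 1/m2^2
zeta22 = 0.0
for m1 in range(2, N + 1):
    inner += 1.0 / (m1 - 1) ** 2
    zeta22 += inner / (m1 * m1)
```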
In a paper \cite[E61]{E61} published in 1743, Euler obtained more results using infinite products, indicating how to derive formulas for $\zeta(2n),$ for all $n\ge 1$, as well as for the sums \[ [ L( 2n+1, \chi_{-4}) :=] \, \sum_{k=0}^{\infty} (-1)^k \frac{1}{(2k+1)^{2n+1}}, \] for $n \ge 0$. In this paper he studied the function $e^x= \sum_{n=0}^{\infty} \frac{x^n}{n!}$. He recalls his formula for $\sin s$, as \[ s -\frac{s^3}{1\cdot 2\cdot 3} + \frac{s^5}{1\cdot 2 \cdot 3 \cdot 4\cdot 5} -\cdots = s(1 -\frac{s^2}{\pi^2}) (1- \frac{s^2}{4\pi^2}) (1- \frac{s^2}{9 \pi^2})(1-\frac{s^2}{16\pi^2}) \cdots \] Then he obtains the formula $\sin z= \frac{e^{iz}- e^{-iz}}{2i}$, introducing the modern notation for $e$, saying: \begin{quote} For, this expression is equal to $\frac{e^{s \sqrt{-1}} - e^{-s \sqrt{-1}}}{2\sqrt{-1}},$ where $e$ denotes that number whose logarithm $= 1$, and $$ e^z= \left( 1+ \frac{z}{n}\right)^n, $$ with $n$ being an infinitely large number\footnote{``Haec enim expressio aequivalet isti $\frac{e^{s \sqrt{-1}} - e^{-s \sqrt{-1}}}{2 \sqrt{-1}}$ denotante $e$ numerorum, cujus logarithmus est $=1, \& $ cum sit $e^z=\left(1+\frac{z}{n}\right)^n$ existente $n$ numero infinito, '' [Translation: Jordan Bell]}. \end{quote} \noindent Now he introduces a new principle. He evaluates the following integrals, where $p$ and $q$ are integers with $0< p<q$: \[ \int_{0}^1 \frac{x^{p-1} + x^{q-p-1}}{1+x^q} dx = \frac{\pi}{ q \sin \frac{p\pi}{q}} \] and \[ \int_{0}^1 \frac{x^{p-1} - x^{q-p-1}}{1-x^q} dx = \frac{\pi \cos \frac{p\pi}{q}}{ q \sin \frac{p\pi}{q}} \] Expanding the first integral as an indefinite integral in power series in $x$, evaluating term by term, and setting $x=1$ he obtains the formula \[ \frac{\pi}{ q \sin \frac{p\pi}{q}}=\frac{1}{p} + \frac{1}{q-p} - \frac{1}{q+p} - \frac{1}{2q-p} +\frac{1}{2q+p} + \frac{1}{3q-p} - \frac{1}{3q+p} + \cdots \] and similarly for the second integral. 
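Both integral evaluations are easy to test numerically. The sketch below (plain Simpson quadrature, our choice) takes $p=1$, $q=3$ for the first integral and $p=1$, $q=4$ for the second, in which the integrand has a removable singularity at $x=1$ with limiting value $\frac12$.

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule with n (even) panels
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                              for i in range(1, n))) * h / 3

# First integral with p = 1, q = 3:
# int_0^1 (1 + x)/(1 + x^3) dx = pi/(3 sin(pi/3))
lhs1 = simpson(lambda x: (1 + x) / (1 + x ** 3), 0.0, 1.0)
rhs1 = math.pi / (3 * math.sin(math.pi / 3))

# Second integral with p = 1, q = 4; (1 - x^2)/(1 - x^4) -> 1/2 as x -> 1
def f2(x):
    return 0.5 if x == 1.0 else (1 - x ** 2) / (1 - x ** 4)

lhs2 = simpson(f2, 0.0, 1.0)
rhs2 = math.pi * math.cos(math.pi / 4) / (4 * math.sin(math.pi / 4))
```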
Setting $\frac{p}{q} =s$, and rescaling, he obtains \[ \frac{\pi}{\sin \pi s}= \frac{1}{s} + \frac{1}{1-s} - \frac{1}{1+s} -\frac{1}{2-s} + \frac{1}{2+s} +\frac{1}{3-s} - \cdots \] and \[ \frac{\pi \cos \pi s}{\sin \pi s} = \frac{1}{s} - \frac{1}{1-s} + \frac{1}{1+s} - \frac{1}{2-s} + \frac{1}{2+s} -\frac{1}{3-s} + \cdots \] By differentiating with respect to $s$, he obtains \[ \frac{\pi^2 \cos \pi s}{(\sin \pi s)^2} = \frac{1}{s^2} - \frac{1}{(1-s)^2} - \frac{1}{(1+s)^2} + \frac{1}{(2-s)^2} + \frac{1}{(2+s)^2} - \frac{1}{(3-s)^2} + \cdots \] and \[ \frac{ \pi^2}{(\sin \pi s)^2} = \frac{1}{s^2} + \frac{1}{(1-s)^2} + \frac{1}{(1+s)^2} + \frac{1}{(2-s)^2} + \frac{1}{(2+s)^2} + \cdots \] Now he substitutes $s= \frac{p}{q}$ with $0< p < q$ integers and obtains many identities. Thus \[ \frac{\pi^2}{ q^2 (\sin \frac{p \pi}{q})^2} = \frac{1}{p^2} + \frac{1}{(q-p)^2} + \frac{1}{(q+p)^2} + \frac{1}{(2q-p)^2} + \frac{1}{(2q+p)^2} + \cdots \] For $q=4$ and $p=1$ he obtains in this way \[ \frac{\pi^2}{8 \sqrt{2}} = 1 -\frac{1}{3^2} - \frac{1}{5^2} + \frac{1}{7^2} + \frac{1}{9^2} - \frac{1}{11^2} - \frac{1}{13^2} + \cdots \] and \[ \frac{\pi^2}{8} = 1 + \frac{1}{3^2} + \frac{1}{5^2} + \frac{1}{7^2} +\frac{1}{9^2} + \frac{1}{11^2}+\cdots \] He observes that it is not difficult to derive on like principles the value $\zeta(2) = \frac{\pi^2}{6}$. (This identity follows for example from the observation that the series above is $(1-\frac{1}{2^2}) \zeta(2)$.) He concludes that one may continue differentiating in $s$ to obtain formulas for various series in $n$-th powers, and gives several formulas for the derivatives. In this way one can obtain a formula for $\zeta(2n)$ that expresses it as a rational multiple of $\pi^{2n}$. However this approach does not make evident the closed formula for $\zeta(2n)$ expressed in terms of Bernoulli numbers (given below as \eqn{exact}). 
In a paper \cite[E63]{E63}, published in 1743, Euler gave a new derivation of $\zeta(2) = \frac{\pi^2}{6}$, using standard methods in calculus involving trigonometric integrals, which is easily made rigorous by today's standards. He takes $s$ to be arclength on a circle of radius $1$ and sets $x= \sin s$ so that $s = \arcsin x$. He observes $x=1$ corresponds to $s= \frac{\pi}{2}$ and here he uses the symbol $\pi$ with its contemporary meaning : \begin{quote} It is clear that I employ here the letter $\pi$ to denote the number of {\em Ludolf van Ceulen} $3.14159265...$\footnote{``Il est clair que j'emplois la lettre $\pi$ pour marquer le nombre de LUDOLF \`{a} KEULEN 3,14159265 etc.''} \end{quote} \noindent Here Euler refers to Ludolf van Ceulen (1540-1610), who was a professor at Leiden University and who computed $\pi$ to 35 decimal places. In his calculations Euler works with differentials and writes $ds =\frac{dx}{\sqrt{1-xx}}$, so that $s = \int \frac{dx}{\sqrt{1-xx}}$. Multiplying these expressions gives \[ s ds = \frac{dx}{\sqrt{1-xx}} \int \frac{dx}{\sqrt{1-xx}}. \] He integrates both sides from $x=0$ to $x=1$. The left side is $\int_{0}^{\frac{\pi}{2}} s ds = \frac{\pi\pi }{8}.$ For the right side, he uses the binomial theorem \[ \frac{1}{\sqrt{1-xx}} [ = (1- xx)^{-\frac{1}{2}} ]= 1 + \frac{1}{2} x^2 + \frac{1\cdot 3}{2 \cdot 4} x^4 + \frac{1\cdot 3 \cdot 5}{2 \cdot 4\cdot 6} x^6 + \mbox{etc.} \] In fact this expansion converges for $|x|<1$. 
Euler integrates it term by term to get \[ \int \frac{dx}{\sqrt{1-xx}} = x +\frac{1}{2\cdot 3} x^3 + \frac{1\cdot 3}{2 \cdot 4\cdot 5} x^5 + \frac{1\cdot 3 \cdot 5}{2 \cdot 4\cdot 6\cdot 7} x^7+ \mbox{ etc.} \] He then obtains \[ s ds = \frac{x dx}{\sqrt{1-xx}} + \frac{1}{2 \cdot 3} \frac{x^3 dx}{\sqrt{1-xx}} + \frac{1\cdot 3}{2 \cdot 4\cdot 5} \frac{x^5 dx}{\sqrt{1-xx}}+ \frac{1\cdot 3 \cdot 5}{2 \cdot 4\cdot 6\cdot 7} \frac{x^7 dx }{\sqrt{1-xx}} + \mbox{ etc.} \] Integrating from $x=0$ to $x=1$ would give \[ \frac{\pi\pi}{8} = \int_{0}^1 \frac{x dx}{\sqrt{1-xx}} + \frac{1}{2\cdot 3} \int_{0}^1\frac{x^3dx}{\sqrt{1-xx}} + \frac{1\cdot 3}{2 \cdot 4\cdot 5} \int_{0}^1\frac{x^5 dx}{\sqrt{1-xx}} + \mbox{etc.} \] The individual terms on the right can be integrated by parts \[ \int \frac{x^{n+2}dx}{\sqrt{1-xx}} = \frac{n+1}{n+2} \int \frac{x^n dx}{\sqrt{1-xx}} - \frac{x^{n+1}}{n+2} \sqrt{1-xx}. \] When integrating from $x=0$ to $x=1$, the second term on the right vanishes and Euler gives a table \begin{eqnarray*} \int_{0}^1 \frac{x dx}{\sqrt{1-xx}} & =& 1- \sqrt{1-xx} = 1\\ \int_{0}^1 \frac{x^3 dx}{\sqrt{1-xx}} &=& \frac{2}{3} \int_{0}^1 \frac{xdx}{\sqrt{1-xx}} = \frac{2}{3}\\ \int_{0}^1 \frac{x^5 dx}{\sqrt{1-xx}} &=& \frac{4}{5} \int_{0}^1 \frac{x^3dx}{\sqrt{1-xx}} = \frac{2\cdot 4}{3\cdot 5}\\ \int_{0}^1 \frac{x^7 dx}{\sqrt{1-xx}} &=& \frac{6}{7} \int_{0}^1 \frac{x^5dx}{\sqrt{1-xx}} = \frac{2\cdot 4\cdot 6}{3\cdot 5\cdot 7}\\ \int_{0}^1 \frac{x^9 dx}{\sqrt{1-xx}} &=& \frac{8}{9} \int_{0}^1 \frac{x^7dx}{\sqrt{1-xx}} = \frac{2\cdot 4\cdot 6\cdot 8}{3\cdot 5\cdot 7 \cdot 9} \end{eqnarray*} Substituting these values in the integration above yields \[ \frac{\pi\pi}{8} = 1+ \frac{1}{3 \cdot 3} + \frac{1}{5 \cdot 5} + \frac{1}{7\cdot 7} + \frac{1}{9 \cdot 9} + \mbox{etc.} \] This gives the sum of reciprocals of odd squares, and Euler observes that multiplying by $\frac{1}{4}$ gives \[ \frac{1}{4} + \frac{1}{16} +\frac{1}{36} +\frac{1}{64} + \mbox{etc.} \] from which follows 
\[ \frac{\pi\pi}{6} = 1 + \frac{1}{4} + \frac{1}{9} +\frac{1}{16} +\cdots \] This proof is rigorous and unarguable. He goes on to evaluate sums of larger even powers by the same methods, and gives a table of explicit evaluations for $\zeta(2n)$ for $1 \le n \le 13$, in the form \[ [\zeta(2n)= ]\, \frac{2^{2n-1}}{(2n+1)!} C_{2n} \pi^{2n}. \] In this expression, he says part of the formula is known and explainable, and that the remaining difficulty is to explain the nature of the fractions [ $C_{2n}$] which take values of a different character: \[ \frac{1}{2}~~~\, \frac{1}{6} ~~~\, \frac{1}{6} ~~~\, \frac{3}{10} ~~~\,\frac{5}{6} ~~~\, \frac{691}{210} ~~~\, \frac{35}{2} ~~ \mbox{etc.} \] He says that he has two other methods for finding these numbers, whose nature he does not specify. As we know, these involve Bernoulli numbers. In this same period, around 1737, not published until 1744, Euler \cite[E72]{E72} obtained the ``Euler product'' expansion of the zeta function. In \cite[Theorem 8]{E72} he proved: \begin{quote} The expression formed from the sequence of prime numbers $$ \frac{2^n \cdot 3^n \cdot 5^n \cdot 7^n \cdot 11^n \cdot \mbox{etc.}} {(2^n-1)(3^n -1)(5^n -1) (7^n-1) (11^n -1) ~~\mbox{etc.} } $$ has the same value as the sum of the series $$ 1+ \frac{1}{2^n} + \frac{1}{3^n} + \frac{1}{4^n}+\frac{1}{5^n}+\frac{1}{6^n} +\frac{1}{7^n} + \mbox{etc.} \footnote{``Si ex serie numerorum primorum sequens formetur espressio $\frac{2^n \cdot 3^n \cdot 5^n \cdot 7^n \cdot 11^n \cdot \mbox{etc.}} {(2^n-1)(3^n -1)(5^n -1) (7^n-1) (11^n -1) ~~\mbox{etc.} }$ erit eius aequalis summae huius seriei $ 1+ \frac{1}{2^n} + \frac{1}{3^n} + \frac{1}{4^n}+\frac{1}{5^n}+\frac{1}{6^n} +\frac{1}{7^n} + \mbox{etc.} '' $ [Translation: David Pengelley \cite{Pe00}, \cite{Pe07}.] Here $n$ is an integer.} $$ \end{quote} This infinite product formula for $\zeta(n)$ supplies in principle another way to approximate the values $\zeta(n)$ for $n \ge 2$, using finite products. 
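Truncating the Euler product for $n=2$ at the primes below $10^5$ already matches $\zeta(2) = \pi^2/6$ to about six decimal places, as a short computation shows; the sieve helper below is a standard modern device, of course not Euler's.

```python
import math

def primes_up_to(n):
    # Sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

# Finite Euler product: prod_p p^2/(p^2 - 1) over primes p <= 10^5
product = 1.0
for p in primes_up_to(100000):
    product *= p * p / (p * p - 1.0)
```

The omitted factors contribute roughly $\sum_{p > 10^5} p^{-2} \approx 10^{-6}$ in relative terms, which sets the accuracy.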
Euler made repeated attempts throughout his life to find closed forms for the odd zeta values, especially $\zeta(3)$. In a 1739 paper \cite[E130]{E130}, not published until 1750, he developed partial fraction decompositions and a method equivalent to Abel summation. In particular, letting $\theta(s) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)^s}$ and $\phi(s) = \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^s}$, he obtained the formula $$ \phi(1-2m) = \frac{(-1)^{m-1} 2 \cdot (2m-1)!} {\pi^{2m} }\theta(2m) $$ for $m=1, 2, 3, 4$, in which the left side is determined as the Abel sum $$ \phi(m) := \lim_{x \to 1^{-}} \sum_{n=1}^{\infty} \frac{(-1)^{n-1}x^n}{n^m}; $$ in fact the function $\phi(s)$ is Abel summable for all $s \in {\mathbb C}$. In a paper written in 1749 \cite[E352]{E352}, not presented until 1761, and published in 1768, Euler derived the functional equation for the zeta function at integer points, and he conjectured $$ \frac{\phi(1-s)}{\phi(s)} = \frac{ - \Gamma(s)(2^s-1) \cos \frac{\pi s}{2} } {(2^{s-1}-1)\pi^s}. 
$$ He apparently hoped to use this functional relation to deduce a closed form for the odd zeta values, including $\zeta(3)$, and says: \begin{quote} As far as the [alternating] sum of reciprocals of powers is concerned, $$ 1 - \frac{1}{2^n}+ \frac{1}{3^n}-\frac{1}{4^n} + \frac{1}{5^n} - \frac{1}{6^n} + \mbox{\&c} $$ I have already observed the sum [$\phi(n)$] can be assigned a value only when $n$ is even and that when $n$ is odd, all my efforts have been useless up to now.\footnote{``A l'\'{e}gard des s\'{e}ries r\'{e}ciproques des puissances $$ 1 - \frac{1}{2^n}+ \frac{1}{3^n}-\frac{1}{4^n} + \frac{1}{5^n} - \frac{1}{6^n} + \mbox{\&c} $$ j'ai d\'{e}j\`{a} observ\'{e}, que leurs sommes ne sauroient \^{e}tre assign\'{e}es que lorsque l'exposant $n$ est un nombre entier pair, \& que pour les cas ou $n$ est un nombre entier impair, tous mes soins ont \'{e}t\'{e} jusqu'ici inutiles.''} \end{quote} In his 1755 book on differential calculus Euler \cite[E212, Part II, Chap. 6, Arts. 141-153]{E212} gave closed forms for $\zeta(2n)$ explicitly expressed in terms of the Bernoulli numbers. These are equivalent to the modern form \begin{equation}\label{exact} \zeta(2n) = (-1)^{n+1} \frac{B_{2n} (2 \pi)^{2n}}{2\, (2n)!}. \end{equation} He also used his summation formula to numerically evaluate the sums using divergent series, as follows. To estimate $\zeta(2)$ he considers the finite sums $$ s= 1+ \frac{1}{4} + \frac{1}{9} + \cdots + \frac{1}{x^2}, $$ where $x$ is fixed, and obtains the summation \begin{equation}\label{256a} s= C - \frac{1}{x} + \frac{1}{2x^2} -\frac{|B_2|}{x^3} + \frac{|B_4|}{x^5} - \frac{|B_6|}{x^7} +\cdots, \end{equation} where the constant $C$ is to be determined by finding all the other terms at one value of $x$.
Choosing $x=\infty$ we have $s= \zeta(2)$ and the summation formula formally gives $s= C= \zeta(2).$ On setting $x=1$, the finite sum gives $s=1$, therefore one has, formally, \begin{equation*} C= 1+ 1 -\frac{1}{2} +|B_2|-|B_4|+|B_6|-\cdots \end{equation*} The right side of this expression is a divergent series and Euler remarks: \begin{quote} But this series alone does not give the value of $C$, since it diverges strongly. Above [Section 125] we demonstrated that the sum of the series to infinity is $\frac{\pi \pi}{6}$; and therefore setting $x= \infty$ and $s= \frac{\pi \pi}{6}$, we have $C= \frac{\pi \pi}{6}$, because then all other terms vanish. Thus it follows that $$ 1+ 1 -\frac{1}{2} +|B_2|-|B_4|+|B_6|-\cdots= \frac{\pi\pi}{6}.\footnote{``quae series autem cum sit maxime divergens, valorem constantis $C [:=\zeta(2)]$ non ostendit. Quia autem supra demonstavimus summam huius seriei in infinitum continuatae esse $= \frac{\pi \pi}{6}$; facto $x=\infty$, si ponatur $s= \frac{\pi \pi}{6}$, fiet $C= \frac{\pi\pi}{6},$ ob reliquos terminos omnes evanescentes. Erit ergo $ 1+ 1 -\frac{1}{2} + {\mathfrak U} - {\mathfrak B} + {\mathfrak C} - {\mathfrak D} +{\mathfrak E} -\&c= \frac{\pi\pi}{6}. $ '' [Translation: David Pengelley \cite[Sec. 148]{Pe00}, \cite{Pe07}]} $$ \end{quote} \noindent Euler's idea is to use these divergent series to obtain good numerical approximations to $C$ by substituting a finite value of $x$ and to add up the terms of the series ``until it begins to diverge", i.e. to truncate it according to some recipe. We do not address the question of what rule Euler used to determine where to truncate. This procedure has received modern justification in the theory of asymptotic series, cf. Hardy \cite{Ha49}, Hildebrand \cite[Sec. 5.8]{Hil56}, \cite[Sec 6.4]{Ed74}. By taking large $x$ one can get arbitrarily accurate estimates (for some functions). If the series is alternating, one may sometimes estimate the error as being no larger than the smallest term. 
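Euler's recipe can be reproduced in exact rational arithmetic. The sketch below uses the standard Bernoulli recurrence (our choice, not necessarily Euler's own computation of these numbers), checks the closed form \eqn{exact} for small $n$, and then reruns the $x=10$ truncation of \eqn{256a} stopped at the $1/x^{17}$ term, recovering $\pi^2/6$ to near machine precision.

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_upto(m):
    # B_0..B_m (with B_1 = -1/2) from sum_{k=0}^{n-1} C(n+1,k) B_k = -(n+1) B_n
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1)
    return B

B = bernoulli_upto(16)

def zeta(s, N=2000):
    # Direct sum with a two-term tail correction
    return sum(k ** -s for k in range(1, N)) + N ** (1 - s) / (s - 1) + 0.5 * N ** -s

# Closed form: zeta(2n) = (-1)^(n+1) B_{2n} (2 pi)^{2n} / (2 (2n)!)
closed = [(-1) ** (n + 1) * float(B[2 * n]) * (2 * pi) ** (2 * n) / (2 * factorial(2 * n))
          for n in range(1, 5)]

# Truncated expansion (256a) with x = 10, stopped after the 1/x^17 term:
# C = s + 1/x - 1/(2x^2) + |B_2|/x^3 - |B_4|/x^5 + ... - |B_16|/x^17 (alternating)
x = 10
C = sum(1.0 / k ** 2 for k in range(1, x + 1)) + 1.0 / x - 1.0 / (2 * x * x)
sign = 1
for j in range(1, 9):            # terms |B_2|/x^3, |B_4|/x^5, ..., |B_16|/x^17
    C += sign * float(abs(B[2 * j])) / x ** (2 * j + 1)
    sign = -sign
```

The smallest retained term, $|B_{16}|/10^{17} \approx 7 \times 10^{-17}$, is indeed about the size of the final error, as the asymptotic-series heuristic predicts.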
Euler demonstrates in this particular case how to find the value $C= \frac{\pi^2}{6}$ numerically using the value $x=10$, with $s=\sum_{n=1}^{10} \frac{1}{n^2}$, by truncating the expansion \eqn{256a} after power $\frac{1}{x^{17}}$, obtaining $C \approx 1,64493~40668~48226~430$. He writes: \begin{quote} And this number is at once the value of the expression $\frac{\pi^2}{6}$, as will be apparent by doing the calculation with the known value of $\pi$. Whence it is at once understood that even though the series [$|B_2|, |B_4|, |B_6| \cdots$] diverges, still the true sum is produced.\footnote{``Hicque numerus simul est valor expressionis $\frac{\pi \pi}{6}$, quemadmodem ex valore ipsius $\pi$ cogito calculum institutenti patebit. Unde simul intelligitur, etiamsi series ${\mathfrak A}, {\mathfrak B}$, ${\mathfrak C}$ \& c. divergat, tamen hoc modo veram prodire summan.'' [Translation: Jordan Bell]}. \end{quote} \noindent Euler also applies this method to the odd zeta values, obtaining numerical approximations for all zeta values up to $\zeta(15)$, in particular $\zeta(3)\approx \frac{\pi^3}{25.79436}$. He also (implicitly) introduces in Sect. 153 a ``Bernoulli function" $b(z)$ which interpolates at integer values of the variable $z$ his version of the even Bernoulli numbers, i.e. $$ b(n)= |B_{2n}| = (-1)^{n+1} B_{2n}, ~~~~n \in {\mathbb Z}_{\ge 1}.
$$ Euler remarks: \begin{quote} The series of Bernoulli numbers [$|B_{2}|, |B_4|, |B_6|, \cdots$~] however irregular, seem to be interpolated from this source; that is, terms constituted in the middle of two of them can be defined: for if the middle term lying between the first $A$ and the second $B$, corresponding to the index $1 \frac{1}{2}$ were $=p$, then it would certainly be $$ 1 +\frac{1}{2^3} + \frac{1}{3^3} + \&\mbox{tc}= \frac{2^2 p}{1 \cdot 2\cdot 3} \pi^3 $$ and hence $$ p = \frac{3}{2 \pi^3} (1+ \frac{1}{2^3} + \frac{1}{3^3} + \&c) = 0,05815~227.\footnote{``Ex hoc fonte series numerorum Bernoulliarum ${{1}\atop{\mathfrak U}} {{2}\atop{\mathfrak B}} {{3}\atop{\mathfrak C}} [\cdots]$ quantumvis irregularis videatur interpolari, seu termini in medio binorum quorumcunque constituti assignari poterunt; se enim terminus medium interiacens inter primum ${\mathfrak A}$ and secundum ${\mathfrak B}$, sue indici $1 \frac{1}{2}$, respondens fuerit $=p$; erit utique $1 + \frac{1}{2^3}+ \frac{1}{3^3} + \mbox{\&tc.} = \frac{2^2 p}{1\cdot 2 \cdot 3} \pi^3$ ideoque $p = \frac{3}{2 \pi^{3}} (1+ \frac{1}{2^3} + \frac{1}{3^3} + \&c)= 0,05815227.$'' [Translation: Jordan Bell]} $$ \end{quote} \noindent That is, Euler shows the interpolating value $p:=b(\frac{3}{2})$ is related to $\zeta(3)$, by the relation $$ p = \frac{3}{2}\frac{\zeta(3)}{\pi^3}, $$ and he finds the numerical value $p \approx 0.05815227$. He similarly obtains a value for $b(\frac{5}{2})$ in terms of $\zeta(5)$. In a 1771 paper, published in 1776, Euler \cite[E477]{E477} ({\em ``Meditations about a singular type of series"}) introduced multiple zeta values and with them obtained another formula for $\zeta(3)$. He had considered this topic much earlier. In Dec. 1742 Goldbach wrote Euler a letter about multiple zeta values, and they exchanged several further letters on their properties. 
The 1771 paper recalls this correspondence with Goldbach, and considers the sums, for $m, n \ge 1$ involving the $n$-harmonic numbers \begin{equation}\label{265a} S_{m,n} := \sum_{k=1}^{\infty} \frac{H_{k,n}}{k^m} =\sum_{k \ge 1} \left( \sum_{1 \le l \le k} \frac{1}{k^m l^n} \right) = \sum_{l \ge 1} \sum_{k \ge l} \left(\frac{1}{k^m l^n}\right). \end{equation} Euler lets $\int \frac{1}{z^m}$ denote $\zeta(m)$ and lets $\int \frac{1}{z^m}(\frac{1}{z})^n$ denote the sum $S_{m,n}.$ He considers all values $1\le m, n \le 8$. This includes divergent series, since all sums $S_{1, n}$ are divergent. The modern definition of multiple zeta values, valid for positive integers $(a_1, ..., a_k)$ with $a_1 \ge 2$, is \begin{equation}\label{MZV} \zeta(a_1, a_2, ..., a_k) = \sum_{n_1>n_2 > \cdots > n_k > 0} \frac{1}{n_1^{a_1} n_2^{a_2} \cdots n_k^{a_k}}. \end{equation} In this notation \eqn{265a} expresses the identities (valid for $m \ge 2, n \ge 1$) $$ S_{m,n} = \zeta(n+m) + \zeta(m,n). $$ Euler presents three methods for obtaining identities, and establishes a number of additional identities for each range $m+n=j$ for $2 \le j \le 11$. For $j=3$ he obtains $$S_{2,1} = 2\, \zeta(3).$$ This is equivalent to \begin{equation*} \zeta(3) = \zeta(2,1) = \frac{1}{2} \Big(\sum_{k=1}^{\infty} \frac{H_k}{k^2}\Big). \end{equation*} Being thorough, Euler also treats cases $m=1$ where the series diverge, and observes that for $j=2$ he cannot derive useful identities. In a 1772 paper \cite[E432]{E432} (``{\em Analytical Exercises}"), published in 1773, Euler obtained new formulas for values $\zeta(2n+1)$ via divergent series related to the functional equation for the zeta function. As an example, in Sect. 
6 he writes, treating $n$ as a positive integer and $\omega$ as an infinitely small quantity, the formula $$ 1 + \frac{1}{3^{n+\omega}} + \frac{1}{5^{n+ \omega}} + \cdots= \frac{-1}{2 \cos( \frac{n+\omega}{2}\pi)} \frac{\pi^{n+\omega}}{1 \cdot 2 \cdots (n-1+\omega)} \Big( 1- 2^{n-1+\omega} + 3^{n-1+\omega} - \mbox{etc}\Big). $$ Here the right side contains a divergent series. In Sect. $7$, he in effect takes a derivative in $\omega$ and obtains, for example in the case $n=3$, the formula $$ 1 + \frac{1}{3^3} + \frac{1}{5^3} + \frac{1}{7^3} + \mbox{etc.} = \frac{\pi^2}{2} \Big(2^2 \log 2 - 3^2 \log 3 + 4^2 \log 4 - 5^2 \log 5 + \mbox{etc.} \Big) $$ The right side is another divergent series, which he names $$ Z := 2^2 \log 2 - 3^2 \log 3 + 4^2 \log 4 - 5^2 \log 5 + 6^2 \log 6 - 7^2 \log 7+ \mbox{etc.} $$ He applies various series acceleration techniques to this divergent series; compare the discussion of divergent series in Section 2.5 below. In Sect. 20 he transforms $Z$ into a rapidly convergent series, \begin{equation*} Z = \frac{1}{4} - \frac{ \alpha \pi^2}{3 \cdot 4 \cdot 2^2} - \frac{\beta \pi^4}{ 5\cdot 6 \cdot 2^4} - \frac{ \gamma \pi^6}{7\cdot 8 \cdot 2^6} -\frac{\delta \pi^8}{9\cdot 10 \cdot 2^{8}} - \frac{\epsilon \pi^{10}}{ 11\cdot 12 \cdot 2^{10}} - \mbox{etc.} \end{equation*} In his notation $\zeta(2) = \alpha \pi^2$, $\zeta(4) = \beta \pi^4$ etc., so he obtains the valid, rapidly convergent formula \begin{equation*} \frac{7}{8} \zeta (3) = \frac{\pi^2}{2} \Big( \frac{1}{4} - \sum_{n=1}^{\infty} \frac{\zeta(2n)}{(2n+1)(2n+2) 2^{2n}} \Big). \end{equation*} After further changes of variable, he deduces in Sect. 21 the formula \[ 1 + \frac{1}{3^3} + \frac{1}{5^3} + \frac{1}{7^3} + \mbox{etc.} = \frac{\pi^2}{4} \log 2 + 2 \int_{0}^{\frac{\pi}{2}} \Phi \log( \sin \Phi )d\Phi. \] This may be rewritten \begin{equation*} \frac{7}{8}\zeta(3) = \frac{\pi^2}{4} \log 2 + 2 \int_{0}^{\frac{\pi}{2}} x \, \log (\sin x) dx.
\end{equation*} Euler also notes the ``near miss" that a variant of the last integral is exactly evaluable: \begin{equation*} \int_{0}^{\frac{\pi}{2}} \log (\sin x) dx = -\frac{\pi}{2} \log 2. \end{equation*} Euler continued to consider zeta values. In the 1775 paper \cite[E597]{E597} (``{\em A new and most easy method for summing series of reciprocals of powers}"), published posthumously in 1785, he gave a new derivation of his formulas for the even zeta values $\zeta(2n)$. In the 1779 paper \cite[E736]{E736}, published in 1811, he studied the dilogarithm function $Li_2(x) := \sum_{n=1}^{\infty} \frac{x^n}{n^2}$. In Sect. 4 he establishes the functional equation \begin{equation*} Li_2(x) + Li_2(1-x) = \frac{\pi^2}{6} - (\log x)( \log (1-x)). \end{equation*} The limiting case $x \to 0$ of this functional equation gives again $\zeta(2)= \frac{\pi^2}{6}$. Euler deduces several other functional equations for the dilogarithm, and by specialization, a number of new series identities. \subsection{Summing divergent series: the Euler-Gompertz constant}\label{sec25} \setcounter{equation}{0} In the course of his investigations Euler encountered many divergent series. His work on divergent series was a by-product of his aim to obtain accurate numerical estimates for the sum of various convergent series, and he learned and developed summability methods which could be applied to both convergent and divergent series. By cross-checking predictions of various summability methods he developed useful methods to sum divergent series. Of particular interest here, Euler's summation of a particular divergent series, the Wallis hypergeometric series, produced an interesting new constant, the Euler-Gompertz constant. Euler's near contemporary James Stirling (1692--1770) developed methods of acceleration of convergence of series by differencing, starting in 1719, influenced by Newton's work (\cite[pp. 92--101]{Newton1711}) on interpolation; see Tweddle \cite{Twe92}.
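The flavor of acceleration by differencing is easy to convey with a toy example (ours, not Stirling's): the Euler transformation $\sum_{n \ge 0} (-1)^n a_n = \sum_{k \ge 0} \Delta^k a_0/2^{k+1}$, with $\Delta^k a_0 = \sum_{j=0}^{k} (-1)^j \binom{k}{j} a_j$, applied to the alternating harmonic series turns $O(1/N)$ convergence into geometric convergence.

```python
import math
from math import comb

def euler_transform(a, K):
    # Euler transformation of sum_{n>=0} (-1)^n a(n):
    # sum_{k>=0} (Delta^k a)(0) / 2^(k+1), where
    # (Delta^k a)(0) = sum_j (-1)^j C(k,j) a(j)
    total = 0.0
    for k in range(K):
        dk = sum((-1)**j * comb(k, j) * a(j) for j in range(k + 1))
        total += dk / 2**(k + 1)
    return total

a = lambda n: 1.0 / (n + 1)                       # sum (-1)^n/(n+1) = log 2
plain = sum((-1)**n * a(n) for n in range(20))    # error ~ 1/40
accelerated = euler_transform(a, 20)              # error ~ 2^(-20)
```

Twenty terms of the raw series give $\log 2$ to about two digits; twenty differenced terms give it to about seven.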
Euler obtained Stirling's 1730 book ( ``{\em Differential methods and summation of series}") (\cite{Stirling1730}, translated in \cite{Twe03}) which discussed infinite series, summation, interpolation and quadrature, in which Proposition $14$ gives three examples of a method of accelerating convergence of series of hypergeometric type. On 8 June 1736 Euler wrote to Stirling, saying (\cite[p. 229]{Twe03}, \cite[p. 141]{Twe88}): \begin{quote} I searched all over for your excellent book on the method of differences, a review of which I had seen in the {\em Acta Lipenses}, until I have achieved my desire. Now that I have read through it diligently, I am truly astonished at the great abundance of excellent methods in such a small volume, by which you show to sum slowly convergent series with ease and how to interpolate progressions which are very difficult to deal with. But especially pleasing to me was Prop. 14 of Part I, in which you present a method for summing so easily series whose law of progression is not even established, using only the relation of the last terms; certainly this method applies very widely and has the greatest use. But the demonstration of this proposition, which you seemed to have concealed from study, caused me immense difficulty, until at last with the greatest pleasure I obtained it from the things which had gone before... \end{quote} \noindent A key point is that Proposition 14 applies when the members $a_n$ of the series satisfy linear recurrence relations whose coefficients are rational functions of the parameter $n$. This point was not made explicit by Stirling. It is known that there exist counterexamples to Proposition 14 otherwise (cf. Tweddle \cite[pp. 229--233]{Twe03}). 
The convergence acceleration method of Stirling is really a suite of methods of great flexibility, as is explained by Gosper \cite{Gos76}.\footnote{Gosper \cite[p.122]{Gos76} says: ``We will be taking up almost exactly where James Stirling left off, and had he been granted use of a symbolic mathematics system such as MIT MACSYMA, Stirling would probably have done most of this work before 1750."} Euler incorporated these ideas in his subsequent work.\footnote{We do not attempt to untangle the exact influence of Stirling's work on Euler. In the rest of his 1736 letter Euler explains his own Euler summation method for summing series. In a reply made 16 April 1738 Stirling tells Euler that Colin Maclaurin will be publishing a book on Fluxions that has a similar summation formula that he found some time ago. Euler replied on 27 July 1738 that ``Mr. Maclaurin probably came upon his summation formula before me, and consequently deserves to be named as its first discoverer. For I found that theorem about four years ago, at which time I also described its proof and application in greater detail to our Academy." Euler also states that he independently found some summability methods in Stirling's book, before he knew of its existence (\cite[Chap. 6]{Twe88}).} In a 1760 paper \cite[E247]{E247} ({\em ``On divergent series"}) Euler formulated his views on the use and philosophy of assigning a finite value to some divergent series. This paper includes work Euler had done as early as 1746. In it Euler presents four different methods for summing the particular divergent series \begin{equation}\label{554a} 1- 1 +2 -6 +24 - 120 +\cdots [= 0! - 1! + 2! - 3! + 4! -5! +\cdots], \end{equation} which he calls ``Wallis' hypergeometric series," and shows that they each give (approximately) the same answer, which he denotes $A$.
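The agreement Euler found is easy to reproduce. The sketch below (a modern restatement, not Euler's arithmetic) evaluates the continued fraction of his fourth method, given in \eqn{eulerfrac} below, at $x=1$, and cross-checks it against Hardy's later series \eqn{560b} for the same value; both give $A \approx 0.59634736$.

```python
import math

def wallis_cf(depth):
    # Euler's continued fraction for s(1):
    # 1/(1 + 1/(1 + 1/(1 + 2/(1 + 2/(1 + 3/(1 + 3/(1 + ...)))))))
    nums = [k for k in range(1, depth // 2 + 2) for _ in (0, 1)][:depth]
    t = 1.0
    for b in reversed(nums):
        t = 1.0 + b / t
    return 1.0 / t

A = wallis_cf(400)

# Hardy's series: delta = -e (gamma - 1 + 1/(2*2!) - 1/(3*3!) + ...)
gamma = 0.5772156649015329           # Euler's constant, known digits
delta = -math.e * (gamma + sum((-1)**k / (k * math.factorial(k))
                               for k in range(1, 40)))
```

The continued fraction converges quite rapidly, which is how Euler could obtain the many digits quoted in \eqn{559d} below by hand.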
Barbeau and Leah \cite{BL76} give a translation of the initial, philosophical part of this paper, followed by a summary of the rest of its contents, and Barbeau \cite{Bar79} gives a detailed exposition of Euler's treatment of this example. The series \eqn{554a} is termed ``hypergeometric" because each term in the series is multiplied by a ratio which varies from term to term, rather than by a constant ratio as in a geometric series. Series of this type appeared in the Scholium to Proposition 190 in Wallis's 1656 book {\em Arithmetica Infinitorum} \cite{Wal1656}\footnote{This reference was noted by Carl Boehm, an editor of Euler's Collected Works, cf. \cite[p. 157]{BL76}.}. Euler's four methods were: \begin{enumerate} \item[(1)] \cite[Sect. 13--16]{E247} The iteration of a series acceleration method involving a differencing operation applied to this series. Euler iterates four times, truncates the resulting divergent series and obtains an approximation to its ``sum" $A$ of $A \approx 38015/65536 = 0.58006...$; \item[(2)] \cite[Sect. 17-18]{E247} The interpolation of solutions to the difference equation $P_{n+1} = n P_{n} +1,$ with $P_1 =1$. Euler obtains the closed formula $$P_n = 1 + (n-1) + (n-1)(n-2) + (n-1)(n-2)(n-3) +\cdots,$$ which is a finite sum for each $n \ge 1$, and observes that substituting $n=0$ formally gives the Wallis series. He considers interpolation methods that rescale $P_{n}$ by an invertible function $f(x)$ to obtain values $f(P_n)$, and then attempts to interpolate the value $f(P_0).$ For the choice $f(x)= \frac{1}{x}$ he obtains $A \approx 0.6$ and for $f(x)= \log x$ he obtains $A \approx 0.59966$; \item[(3)] \cite[Sect. 19--20]{E247} The construction and solution of an associated differential equation. Euler views the Wallis series as the special case $x=1$ of the power series $$ s(x) = 1 - 1!\,x + 2! \,x^2 - 3! \,x^3 + 4!\,x^4 - 5! \,x^5 + \cdots. 
$$ Although this series diverges for all nonzero $x \in {\mathbb C}$, he observes that $s(x)$ formally satisfies a linear differential equation. This differential equation has a convergent solution given by an integral with variable endpoint $x$, and by specializing it to $x=1$ he obtains an exact answer $A$. We discuss this method in detail below. \item[(4)] \cite[Sect. 21--25]{E247} The determination of a continued fraction expansion for the series $s(x)$ given by the integral in (3). This is \begin{equation}\label{eulerfrac} s(x) = \cfrac{1}{1+ \cfrac{x}{1+ \cfrac{x}{1+\cfrac{2x}{1+ \cfrac{2x}{1+\cfrac{3x}{1+ \cfrac{3x}{1+\cdots}}}}}}} \end{equation} This continued fraction is convergent at $x=1$. Using it, he computes a good numerical approximation, given in \eqn{559d} below, to the answer $A$. \end{enumerate} In the third of four summation methods, Euler considers the series to be the ``value" at $x=1$ of the asymptotic series \begin{equation}\label{555a} s(x) \sim \sum_{n=0}^{\infty} (-1)^n n! \,x^{n+1}. \end{equation} He notes that this series satisfies the inhomogeneous linear differential equation \begin{equation} \label{556a} s'(x) + \frac{1}{x^2} s(x)= \frac{1}{x}. \end{equation} He writes down an explicit solution to the differential equation \eqn{556a}, namely \begin{equation}\label{557a} s(x) := e^{\frac{1}{x}}\int_{0}^x \frac{1}{t} e^{-\frac{1}{t}} dt. \end{equation} Near $x=0$ this integral has the asymptotic expansion \eqn{555a}. Euler transforms the integral \eqn{557a} into other forms. Using the change of variable $v= e^{1- \frac{1}{t}}$ he obtains \begin{equation}\label{557b} s(x) = e^{(\frac{1}{x} -1)} \int_{0}^{e^{1-\frac{1}{x}}} \frac{dv}{1-\log v}. \end{equation} He then obtains as his proposed summation for the divergent series the value \begin{equation}\label{559a} {\delta} := s(1) = e \int_{0}^1 \frac{1}{t} e^{- \frac{1}{t}} dt = \int_{0}^{1} \frac{dv}{1- \log v}.
\end{equation} He numerically estimates this integral using the trapezoidal rule and obtains ${\delta} \approx 0.5963\underline{7} 255.$ Using his fourth method, in Sec. 25 Euler obtained a more precise numerical value for this constant, finding \begin{equation}\label{559d} {\delta} \approx 0. 59634~7362\underline{1}~237. \end{equation} The constant ${\delta}$ defined by \eqn{559a} is of some interest and we call it the {\em Euler-Gompertz constant,} following Finch \cite[Sect. 6.2]{Fin03}. This constant was earlier named the {\em Gompertz constant} by Le Lionnais \cite{LL83}. \footnote{Le Lionnais \cite{LL83} gives no explanation or reference for his choice of name ``Gompertz".} Benjamin Gompertz (1779--1865) was a mathematician and actuary who in 1825 proposed a functional model for approximating mortality tables, cf. Gompertz \cite{Gom1825},\cite{Gom1862}. The {\em Gompertz function} and {\em Gompertz distribution} are named in his honor. I have not located the Euler-Gompertz constant in his writings. However Finch \cite[Sect. 6.2.4]{Fin03} points out a connection of the Euler-Gompertz constant with a Gompertz-like functional model. The Euler-Gompertz constant has some properties analogous to Euler's constant. The integral representations for the Euler-Gompertz constant given by \eqn{557a} and \eqn{557b} evaluated at $x=1$ resemble the integral representations \eqn{226a} and \eqn{227a} for Euler's constant. An elegant parallel was noted by Aptekarev \cite{Apt09}, \begin{equation}\label{559f} \gamma = - \int_{0}^{\infty} \log x \, e^{-x}dx, \quad\quad \delta= \int_{0}^{\infty} \log (x+1) \,e^{-x} dx. \end{equation} There are also integral formulae in which both constants appear, see \eqn{353EG} and \eqn{362EG}. In 1949 G. H. Hardy \cite[p. 26]{Ha49} gave a third integral representation for the function $s(x)$, obtained by using the change of variable $t= \frac{x}{1+xw}$, namely \begin{equation*} s(x) = x \int_{0}^{\infty} \frac{e^{-w}}{1+xw} dw. 
\end{equation*} Evaluation at $x=1$ yields the integral representation \begin{equation}\label{560aa} {\delta} = \int_{0}^{\infty} \frac{e^{-w}}{1+w} dw. \end{equation} From this representation Hardy obtained a relation of ${\delta}$ to Euler's constant given by \begin{equation}\label{560b} {\delta} = -e\left(\gamma - 1 + \frac{1}{2 \cdot 2!} - \frac{1}{3 \cdot 3!} + \frac{1}{4 \cdot 4!} - \cdots\right). \end{equation} One can also deduce from \eqn{560aa} that \begin{equation}\label{561a} {\delta} = - e \, Ei(-1) \end{equation} where $Ei(x)$ is the {\em exponential integral} \begin{equation}\label{561b} Ei(x) := \int_{-\infty}^x \frac{e^t}{t}dt, \end{equation} which is defined as a function of a complex variable $x$ on the complex plane cut along the nonnegative real axis.\footnote{This is sufficient to define $Ei(-1).$ For applications the function $Ei(x)$ is usually extended to positive real values by using the Cauchy principal value at the point $t=0$ where the integrand has a singularity. The real-valued definition using the Cauchy principal value is related to the complex-analytic extension by $$ Ei(x) := \lim_{\epsilon \to 0^{+}} \frac{1}{2} \Big(Ei(x+ \epsilon i) + Ei(x- \epsilon i)\Big). $$} Here we note that Euler's integral \eqn{557a} is transformed by the change of variable $t=\frac{1}{u}$ to another simple form, which on replacing $x$ by $1/x$ becomes \begin{equation}\label{562a} s(\frac{1}{x}) = e^{x} \int_{x}^{\infty} \frac{e^{-u}}{u} du = e^x E_1(x). \end{equation} Here $E_1(z)$ is the {\em principal exponential integral}, defined for $|\arg z| < \pi$ by a contour integral \begin{equation}\label{expint} E_1(z) := \int_{z}^{\infty} \frac{e^{-u}}{u} du, \end{equation} that eventually goes to $+\infty$ along the positive real axis; see \cite[Chap. 5]{AS}. Choosing $x=1$ yields the identity \begin{equation}\label{249a} E_{1}(1) = \int_{1}^{\infty} \frac{e^{-t}}{t} dt = \frac{\delta}{e}.
\end{equation} Further formulas involving the Euler-Gompertz constant are given in Sections \ref{sec34}, \ref{sec35}, \ref{sec311} and \ref{sec312}. Regarding method (4) above, Euler had previously done much work on continued fractions in the late 1730's in \cite[E71]{E71} and in particular in his sequel paper \cite[E123]{E123}, in which he expands some analytic functions and integrals in continued fractions. This work of Euler is discussed in the book of Khrushchev \cite{Khr08}, which includes an English translation of \cite{E123}. In 1879 E. Laguerre \cite{Laguerre1879} found an analytic continued fraction expansion for the principal exponential integral, which coincides with $e^{-x} s(1/x)$ by \eqn{562a}, obtaining \begin{equation}\label{249c} \int_{x}^{\infty} \frac{e^{-t}}{t} dt= \cfrac{e^{-x}}{x+1- \cfrac{1}{x+3-\cfrac{1}{\frac{x+5}{4}- \cfrac{\frac{1}{4}}{\frac{x+7}{9}-\cfrac{\frac{1}{9}}{\frac{x+9}{16}- \cdots}}}}}. \end{equation} Laguerre noted that this expansion converges for positive real $x$. His expansion simplifies to\footnote{This form of fraction with minus signs between convergents is often called a {\em $J$-continued fraction}, see Wall \cite[p. 103]{Wall48}.} \begin{equation}\label{249d} \int_{x}^{\infty} \frac{e^{-t}}{t} dt = \cfrac{e^{-x}}{x+1- \cfrac{1^2}{x+3-\cfrac{2^2}{x+5- \cfrac{3^2}{x+7-\cfrac{4^2}{x+9- \cdots}}}}}, \end{equation} which is the form in which it appears in Wall \cite[(92.7)]{Wall48}. On taking $x=1$ it yields another continued fraction for the Euler-Gompertz constant, \begin{equation}\label{249e} \delta = e \int_{1}^{\infty} \frac{e^{-t}}{t} dt = \cfrac{1}{2- \cfrac{1^2}{4-\cfrac{2^2}{6- \cfrac{3^2}{8-\cfrac{4^2}{10- \cdots}}}}}. \end{equation} In his great 1894 work T. J.
Stieltjes (\cite{Sti1894}, \cite{Sti93}) developed a general theory for analytic continued fractions representing functions $F(x)$ given by $\int_{0}^{\infty} \frac{d \Phi(u)}{x+ u}$, which takes the form \begin{equation}\label{249f} \int_{0}^{\infty} \frac{d\Phi(u)}{x+ u} = \cfrac{1}{a_1 x+ \cfrac{1}{a_2+ \cfrac{1}{a_3 x + \cfrac{1}{a_4+\cfrac{1}{a_5 x+\cfrac{1}{a_6+ \cdots}}}}}} \end{equation} for positive real $a_n$. He formulated the (Stieltjes) moment problem for measures on the nonnegative real axis, and related his continued fractions to the moment problem. In the introduction of his paper he specifically gives Laguerre's continued fraction \eqn{249d} as an example, and observes that it may be converted to a continued fraction in his form \eqn{249f} with $a_{2n-1}=1$ and $a_{2n}= \frac{1}{n}.$ On taking $x=1$, his converted fraction is closely related to Euler's continued fraction \eqn{eulerfrac}. Stieltjes notes that his theory extends the provable convergence region for Laguerre's continued fraction \eqn{249d} to all complex $x$, excluding the negative real axis. In \cite[Sect. 57]{Sti1894} he remarks that the associated measure in the moment problem for this continued fraction must be unbounded, and in \cite[Sect. 62]{Sti1894} he obtains a generalization of (his form of) Laguerre's continued fraction to add extra parameters; cf. Wall \cite[eqn. (92.17)]{Wall48}. It is not known whether the Euler-Gompertz constant is rational or irrational. However it has recently been established that at least one of $\gamma$ and ${\delta}$ is transcendental, as presented in Section \ref{sec312}. \subsection{Euler-Mascheroni constant; Euler's approach to research }\label{sec26} \setcounter{equation}{0} Euler's constant is often called the {\em Euler-Mascheroni constant,} after later work of Lorenzo Mascheroni \cite{Mas1790}, \cite{Mas1792}.
In 1790, in a book proposing to answer some problems raised by Euler, Mascheroni \cite{Mas1790} introduced the notation $A$ for this constant. In 1792 Mascheroni \cite[p.11]{Mas1792} also reconsidered integrals in Euler's paper \cite[E629]{E629}, using both the notations $A$ and $a$ for Euler's constant. In his book, using an expansion of the logarithmic integral, he gave a numerical expression for it to $32$ decimal places (\cite[p. 23]{Mas1790}), which stated $$ A \approx 0. 57721~56649~01532~8606\underline{1}~ 81120~ 90082~39 $$ This computation attached his name to the problem; Euler had previously computed it accurately to $15$ places. At that time constants were often named after a person who had done the labor of computing them to the most digits. However in 1809 Johann von Soldner \cite{vS1809}, while preparing the first book on the logarithmic integral function $li(x) = \int \frac{dx}{\log x}$, found it necessary to recompute Euler's constant, which he called $H$ (perhaps from the harmonic series). He computed it to $22$ decimal places by a similar method, and obtained (\cite[p. 13]{vS1809}) \[ H \approx 0. 57721~56649~01532 ~86060~65 \] Von Soldner's answer disagreed with Mascheroni's in the $20$-th decimal place. To settle the conflict von Soldner asked the aid of C. F. Gauss, and Gauss engaged the 19-year old calculating prodigy F. G. B. Nicolai (1793---1846) to recheck the computation. Nicolai used the Euler-Maclaurin summation formula to compute $\gamma$ to $40$ places, finding agreement with von Soldner's calculations to $22$ places (see \cite[p. 89]{Hav03}). Thus the conflict was settled in von Soldner's favor, and Mascheroni's value is accurate to only $19$ places. In the current era, which credits the mathematical contributions, it seems most appropriate to name the constant after Euler alone. The now-standard notation $\gamma$ for Euler's constant arose subsequent to the work of both Euler and Mascheroni. 
Euler himself used various notations for his constant, including $C$ and $O$ and $n$; Mascheroni used $A$ and $a$; von Soldner used $H$ and $C$. The notation $\gamma$ appears in an 1837 paper on the logarithmic integral of Bretschneider \cite[p. 260]{Bret1837}. It also appears in a calculus textbook of Augustus de Morgan \cite[p. 578]{ADM}, published about the same time. This choice of symbol $\gamma$ is perhaps based on the association of Euler's constant with the gamma function. To conclude our discussion of Euler's work, it seems useful to contemplate the approach to research used by Euler in his long successful career. C. Truesdell \cite[Essay 10]{Tru84}, \cite[pp. 91--92]{Tru87}, makes the following observations about the methods used by Euler, his teacher Johann Bernoulli, and Johann's teacher and brother Jacob Bernoulli. \begin{enumerate} \item Always attack a special problem. If possible solve the special problem in a way that leads to a general method. \item Read and digest every earlier attempt at a theory of the phenomenon in question. \item Let a key problem solved be a father to a key problem posed. The new problem finds its place on the structure provided by the solution of the old; its solution in turn will provide further structure. \item If two special problems solved seem cognate, try to unite them in a general scheme. To do so, set aside the differences, and try to build a structure on the common features. \item Never rest content with an imperfect or incomplete argument. If you cannot complete and perfect it yourself, lay bare its flaws for others to see. \item Never abandon a problem you have solved. There are always better ways. Keep searching for them, for they lead to a fuller understanding. While broadening, deepen and simplify. \end{enumerate} Truesdell speaks here about Euler's work in the area of foundations of mechanics, but his observations apply equally well to Euler's work in number theory and analysis discussed above. 
\section{Mathematical Developments}\label{sec3} \setcounter{equation}{0} We now consider a collection of mathematical topics in which Euler's constant appears. \subsection{Euler's constant and the gamma function}\label{sec31} \setcounter{equation}{0} Euler's constant appears in several places in connection with the gamma function. Euler himself noted a basic occurrence of his constant in connection with the gamma function, \begin{equation}\label{201} \Gamma'(1)= -\gamma. \end{equation} Thus $\gamma$ appears in the Taylor series expansion of $\Gamma(z)$ around the point $z=1$. Furthermore Euler's constant also appears (in the form $e^{\gamma}$) in the Hadamard product expansion for the entire function $\frac{1}{\Gamma(z)},$ which states: $$ \frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^{\infty} \left(1 +\frac{z}{n} \right) e^{-\frac{z}{n}}. $$ Euler's constant seems, however, more tightly associated with the logarithm of the gamma function, which was studied by Euler \cite[E368]{E368} and many subsequent workers. The function $\log \Gamma(z)$ is multivalued under analytic continuation, and many results are more elegantly expressed in terms of the {\em digamma function}, defined by \begin{equation}\label{300} \psi(z) := \frac{\Gamma'(z)}{\Gamma(z)} = \frac{d}{dz} \log \Gamma(z). \end{equation} The theory of the digamma function is simpler in some respects because it is a meromorphic function. Nielsen \cite[Chap. 1]{Nie06} presents results and some early history, and Ferraro \cite{Fer07} describes the work of Gauss on these functions. We recall a few basic facts. The digamma function satisfies the difference equation \begin{equation}\label{300b} \psi(z+1) = \psi(z) + \frac{1}{z}, \end{equation} which is inherited from the functional equation $z\Gamma(z)= \Gamma(z+1)$. It has an integral representation, valid for $Re(z) >0$, \begin{equation*} \psi(z) = \int_{0}^{\infty} \Big( \frac{e^{-t}}{t} - \frac{e^{-zt}}{1- e^{-t}}\Big) dt, \end{equation*} found in 1813 by Gauss \cite[Art.
30 ff]{Gau1813} as part of his investigation of the hypergeometric function ${}_2 F_{1}(\alpha, \beta; \gamma; x)$ (see \cite{Fer07}). The difference equation gives \[ \psi(x) = \psi(x+n) - \sum_{j=0}^{n-1} \frac{1}{x+j} \] from which follows \begin{equation*} \psi(x) = \lim_{n \to \infty} \Big( \log n - \sum_{j=0}^{n-1} \frac{1}{x+j}\Big). \end{equation*} This limit is valid for all $x \in {\mathbb C} \smallsetminus {\mathbb Z}_{\le 0}$, using the fact that $ \psi(x) - \log x$ goes uniformly to $0$ in the right half-plane as $Re(x) \to \infty$ (\cite[p. 85, 93]{Nie06}). This formula gives $\psi(1) = -\gamma,$ a fact which also follows directly from \eqn{201}. Euler's constant appears in the values of the digamma function $\psi(x)$ at all positive integers and half-integers. In 1765 Euler \cite[E368]{E368} found the values $\Gamma(\frac{1}{2}), \Gamma(1)$ and $\Gamma'(\frac{1}{2}), \Gamma'(1)$. These values determine $\psi(1/2), \psi(1)$, and then, by using the difference equation \eqn{300b}, the values of $\psi(x)$ for all half-integers. \begin{theorem} ~\label{th30} {\em (Euler 1765)} The digamma function $\psi(x) := \frac{\Gamma'(x)}{\Gamma(x)} $ satisfies, for integers $n \ge 1$, \begin{equation*} \psi(n)= -\gamma + H_{n-1} \end{equation*} using the convention that $H_0=0$. It also satisfies, for half-integers $n + \frac{1}{2}$ with $n \ge 0$, \begin{equation*} \psi(n+ \frac{1}{2})= -\gamma - 2\log 2+ 2 H_{2n-1} - H_{n-1}, \end{equation*} using the additional convention $H_{-1}=0.$ \end{theorem} \begin{proof} The first formula follows from $\psi(1)=-\gamma$ using \eqn{300b}. The second formula will follow from the identity \begin{equation}\label{302c} \psi(\frac{1}{2}) = -\gamma - 2 \log 2, \end{equation} because the difference equation \eqn{300b} then gives $$ \psi(n+ \frac{1}{2}) = \psi(\frac{1}{2}) + 2\left( 1+ \frac{1}{3} + \cdots + \frac{1}{2n-1}\right).
$$ To obtain \eqn{302c} we start from the duplication formula for the gamma function \begin{equation}\label{307a} \Gamma(2z) = \frac{1}{\sqrt{ \pi}} 2^{2z- 1}\Gamma(z)\Gamma(z+ \frac{1}{2}). \end{equation} We logarithmically differentiate this formula at $z= \frac{1}{2},$ and solve for $\psi(\frac{1}{2}).$ \end{proof} The next two results present expansions of the digamma function that connect it with integer zeta values. The first one concerns its Taylor series expansion around the point $z=1$. \begin{theorem}~\label{th31} {\em (Euler 1765)} The digamma function has Taylor series expansion around $z=1$ given by \begin{equation}\label{311a} \psi(z+1) = -\gamma + \sum_{k=1}^{\infty} (-1)^{k+1} \zeta(k+1) \,z^k \end{equation} This expansion converges absolutely for $|z| < 1$. \end{theorem} \paragraph{\bf Remark.} The pattern of Taylor coefficients in \eqn{311a} has the $k=0$ term formally assigning to $``\zeta(1)"$ the value $\gamma$. \begin{proof} Euler \cite[E368, Sect. 9]{E368} obtained a Taylor series expansion around $z=1$ for $\log \Gamma(z)$, given as \eqn{271} (with $x=z-1$). Differentiating it term by term with respect to $x$ yields \eqn{311a}. \end{proof} The second relation to zeta values concerns the asymptotic expansion of the digamma function around $z= +\infty$, valid in the right half-plane. In this formula the non-positive integer values of the zeta function appear. \begin{theorem}~\label{th32} The digamma function $\psi(z)$ has the asymptotic expansion \begin{equation}\label{321} \psi(z+1) \sim \log z + \sum_{k=1}^{\infty} (-1)^{k} \zeta(1-k)\left( \frac{1}{z}\right)^k, \end{equation} which is valid on the sector $-\pi +\epsilon \le \arg z \le \pi - \epsilon$, for any given $\epsilon>0$. This asymptotic expansion does not converge for any value of $z$. 
\end{theorem} The statement that \eqn{321} is an asymptotic expansion means: For each finite $n \ge 1$, the estimate \begin{equation*} \psi(z+1) = \log z + \sum_{k=1}^{n} (-1)^{k} \zeta(1-k)\left( \frac{1}{z}\right)^k + O ( z^{-n-1}) \end{equation*} holds on the sector $-\pi + \epsilon \le \arg{z} \le \pi - \epsilon$, with $\epsilon >0$, where the implied constant in the $O$-symbol depends on both $n$ and $\epsilon$. Theorem ~\ref{th32} does not seem to have been previously stated in this form, with coefficients containing $\zeta(1-k)$. However it is quite an old result, stated in the alternate form \begin{equation}\label{323cc} \psi(z+1) \sim \log z - \frac{B_1}{z} - \sum_{k=1}^{\infty} \frac{B_{2k}}{2k}\frac{1}{z^{2k}}. \end{equation} The conversion from \eqn{323cc} to \eqn{321} is simply a matter of substituting the formulas for the zeta values at negative integers: $\zeta(0) = -\frac{1}{2}=B_1$ and $\zeta(1-k) = (-1)^{k+1} \frac{B_{k}}{k}$, noting $B_{2k+1}=0$ for $k \ge 1$, see \cite[Sec. 1.18]{EMOT53}, \cite[(6.3.18)]{AS}. As a {\em formal} expansion \eqn{323cc} follows either from Stirling's expansion of sums of logarithms obtained in 1730 (\cite[Prop. 28]{Stirling1730}, cf. Tweddle \cite[Chap. 1]{Twe88}), or from Euler's expansion of $\log \Gamma(z)$ given in \eqn{stirling}, by differentiation term-by-term. Its rigorous interpretation as an asymptotic expansion having a remainder term estimate valid on the large region given above is due to Stieltjes. In his 1886 French doctoral thesis Stieltjes \cite{Sti1886} treated a general notion of an asymptotic expansion with remainder term, including the case of an asymptotic expansion for $\log \Gamma(a)$ (\cite[Sec. 14--19]{Sti1886}). In 1889 Stieltjes \cite{Sti1889} presented a detailed derivation of an asymptotic series for $\log \Gamma(a)$, whose methods can be used to derive the result \eqn{323cc} above.
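Before turning to the proof, we note that Theorems \ref{th30} and \ref{th31} are easy to cross-check numerically: at $z = \frac{1}{2}$ the series \eqn{311a} must return $\psi(\frac{3}{2}) = 2 - \gamma - 2\log 2$. A minimal sketch (the tail-corrected zeta sums are ours):

```python
import math

gamma = 0.5772156649015329           # Euler's constant (known digits)

def zeta(m, M=1000):
    # truncated Dirichlet series plus Euler-Maclaurin tail terms;
    # ample accuracy here for integer m >= 2
    return (sum(n**-m for n in range(1, M))
            + M**(1 - m) / (m - 1) + 0.5 * M**-m)

# Theorem th31 at z = 1/2:  psi(3/2) = -gamma + sum (-1)^(k+1) zeta(k+1)/2^k
series = -gamma + sum((-1)**(k + 1) * zeta(k + 1) * 0.5**k
                      for k in range(1, 60))

# Theorem th30 with n = 1:  psi(3/2) = 2 - gamma - 2 log 2
closed = 2 - gamma - 2 * math.log(2)
```

Both sides agree to about ten decimal places; the common value is $\psi(\frac{3}{2}) \approx 0.03648997$.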
\begin{proof} The 1889 formula of Stieltjes \cite{Sti1889} for $\log \Gamma(a)$ embodies Stirling's expansion with an error estimate. Stieltjes first introduced the function $J(a)$ defined by $$ \log \Gamma(a) = (a- \frac{1}{2}) \log a - a + \frac{1}{2} \log (2 \pi) + J(a). $$ His key observation is that $J(a)$ has an integral representation \begin{equation}\label{3002a} J(a) = \int_{0}^{\infty} \frac{P(x)}{x+a} dx, \end{equation} in which $P(x)$ is the periodic function \[ P(x) = \frac{1}{2} -x + \lfloor x\rfloor = -B_1(x-\lfloor x\rfloor), \] where $B_1(x)$ is the first Bernoulli polynomial. He observes that the integral formula \eqref{3002a} has the merit that it converges on the complex $a$-plane aside from the negative real axis, unlike two earlier integral formulas for $J(a)$ of Binet, \[ J(a) = \int_{0}^{\infty} \Big( \frac{1}{1-e^{-x}} - \frac{1}{x} - \frac{1}{2} \Big) \frac{e^{-ax}}{x} dx, \] and \[ J(a) = \frac{1}{\pi} \int_{0}^{\infty} \frac{a}{a^2 + x^2} \log \Big(\frac{1}{1-e^{-2 \pi x}}\Big) dx, \] which are valid on the half-plane $Re(a)>0$. He now repeatedly integrates \eqn{3002a} by parts to obtain a formula \footnote{In Stieltjes's formula $B_j$ denotes the Bernoulli number $|B_{2j}|$ in \eqn{221}. We have altered his notation to match \eqn{221}.} for $\log \Gamma(a)$, valid on the entire complex plane aside from the negative real axis, \begin{equation}\label{3001} \log \Gamma(a) = (a -\frac{1}{2} )\log a - a + \frac{1}{2} \log (2 \pi) + \frac{|B_2|}{1 \cdot 2 a} -\frac{|B_4|}{3 \cdot 4 a^3} + \cdots + \frac{ (-1)^{k-1} |B_{2k}|}{(2k-1) 2k \, a^{2k-1}} + J_k(a), \end{equation} in which $J_k(a) = \int_{0}^{\infty} \frac{P_k(x)}{(x+a)^{2k+1}} dx$ where $P_k(x)$ is periodic and given by its Fourier series $$ P_k(x) = (-1)^k (2k)! \sum_{n=1}^{\infty} \frac{1}{2^{2k} (\pi n)^{2k+1}} \sin (2 \pi n x), $$ and is related to the $(2k+1)$-st Bernoulli polynomial. He obtains an error estimate for $J_k(a)$ (\cite[eqn.
(34)]{Sti1889}), valid for $-\pi + \epsilon < \theta< \pi - \epsilon$, that $$ |J_k(Re^{ i\theta})| < \frac{|B_{2k+2}|}{(2k+2)(2k+1)} (\sec \frac{1}{2} \theta)^{2k+2} R^{-2k-1}. $$ We obtain a formula for $\psi(a)$ by differentiating \eqn{3001}, and using $B_{2k} = (-1)^{k-1} |B_{2k}|$ gives $$ \psi (a) = \log a - \frac{1}{2a} -\frac{B_2}{2\, a^2} - \frac{B_4}{4 \, a^4} - \cdots - \frac{B_{2k}}{2k \, a^{2k}}+ J_k^{'}(a). $$ This formula gives the required asymptotic expansion for $\psi(a)$, as soon as one obtains a suitable error estimate for $J_k^{'}(a)$, which may be done along the same lines as for $J_k(a)$. The asymptotic expansion \eqn{323cc} follows after using $\psi(a+1) = \psi(a) + \frac{1}{a}$, which has the effect of reversing the sign of the coefficient of $\frac{1}{a}$. The formula \eqn{321} follows by direct substitution of the zeta values $\zeta(0)= -\frac{1}{2}=B_1$ and $\zeta(1-k)= (-1)^{k+1}\frac{B_{k}}{k}$ for $k \ge 2$, noting $B_{2k+1} =0$ for $k \ge 1$, see \cite[Sec. 1.18]{EMOT53}, \cite[(6.3.18)]{AS}. Finally, the known growth rate for even Bernoulli numbers, $$ |B_{2n}| \sim \frac{2}{(2\pi)^{2n}} (2n)! \sim 4 \sqrt{\pi n} (\pi e)^{-2n} n^{2n}, $$ implies that the asymptotic series \eqn{321} diverges for all finite values of $z$. \end{proof} A consequence of the asymptotic expansion \eqn{321} for $\psi(x)$ is an asymptotic expansion for the harmonic numbers, given by \begin{equation}\label{322b} H_n \sim \log n + \gamma+ \sum_{k=1}^{\infty} (-1)^k \zeta(1-k) \frac{1}{n^k}. \end{equation} This is obtained on taking $z=n$ in \eqn{311a} and substituting $\psi(n+1) = H_n -\gamma$.
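Because the terms of \eqn{322b} are completely explicit, the expansion is easy to examine numerically. The following is a minimal sketch in plain Python (the numerical value of $\gamma$ is hard-coded), comparing $H_n$ with the truncation of the expansion through the $n^{-4}$ term; the discrepancy is of the size of the first omitted term, of order $n^{-6}$.

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, hard-coded

def harmonic(n):
    # the exact harmonic number H_n, as a float
    return sum(1.0 / k for k in range(1, n + 1))

def expansion(n):
    # truncation of the asymptotic expansion through the n^{-4} term:
    #   log n + gamma + 1/(2n) - 1/(12 n^2) + 1/(120 n^4)
    return math.log(n) + GAMMA + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)

for n in [10, 100, 1000]:
    print(n, harmonic(n) - expansion(n))   # tiny, shrinking like n^{-6}
```

Already at $n=10$ the truncation agrees with $H_{10}$ to about eight decimal places, even though the series as a whole diverges.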
Substituting the zeta values allows this expansion to be expressed in the better-known form \begin{eqnarray*} H_n & \sim & \log n +\gamma - \frac{B_1}{n} - \sum_{k=1}^{\infty} \frac{B_{2k}}{2k}\frac{1}{n^{2k}}, \nonumber\\ & \sim & \log n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \frac{1}{120 n^4} - \frac{1}{252n^6} + \cdots. \end{eqnarray*} Note that although $H_n$ has jumps of size $\frac{1}{n}$ at each integer value, nevertheless the asymptotic expansion is valid to all orders $(\frac{1}{n})^k$ for $k \ge 1$, although the higher order terms are of magnitude much smaller than the jumps. Theorem \ref{th31} and Theorem \ref{th32} (2) have the feature that these two expansions exhibit between them the full set of integer values $\zeta(k)$ of the Riemann zeta function, with Euler's constant appearing as a substitute for $``\zeta(1)"$. This is a remarkable fact. \subsection{Euler's constant and the zeta function}\label{sec32} \setcounter{equation}{0} The function $\zeta(s)$ was studied at length by Euler, but is named after Riemann \cite{Riemann1859}, commemorating his epoch-making 1859 paper. Riemann showed that $\zeta(s)$ extends to a meromorphic function in the plane with its only singularity being a simple pole at $s=1$. Euler's constant appears in the Laurent series expansion of $\zeta(s)$ around the point $s=1$, as follows. \begin{theorem}~\label{th41} {\em (Stieltjes 1885)} The Laurent expansion of $\zeta(s)$ around $s=1$ has the form \begin{equation}\label{401} \zeta(s) = \frac{1}{s-1} + \gamma_0 + \sum_{n=1}^{\infty} \frac{(-1)^n}{n!} \gamma_n (s-1)^n \end{equation} in which $\gamma_0=\gamma$ is Euler's constant, and the coefficients $\gamma_n$ for $n \ge 0$ are defined by \begin{equation}\label{402} \gamma_n := \lim_{m \to \infty} \left( \sum_{k=1}^m \frac{(\log k)^n}{k} - \frac{(\log m)^{n+1}}{n+1} \right). \end{equation} \end{theorem} \begin{proof} We follow a proof of Bohman and Fr\"{o}berg \cite{BF88}.
It starts from the identity\\ $(s-1) \zeta(s) = \sum_{k=1}^{\infty} (s-1)k^{-s},$ which is valid for $Re(s)>1$. For real $s>1$, one has the telescoping sum $$ \sum_{k=1}^{\infty} (k^{1-s} - (k+1)^{1-s})=1. $$ Using it, one obtains \[ (s-1)\zeta(s) = 1 + \sum_{k=1}^{\infty} \{ (k+1)^{1-s} -k^{1-s} + (s-1)k^{-s} \}. \] Noting that at $s=1$ the telescoping sum above is $0$, one has \begin{eqnarray*} (s-1) \zeta(s) &=& 1 + \sum_{k=1}^{\infty} \{ \exp(- (s-1) \log (k+1)) - \exp( -(s-1)\log k) \\&& \quad \quad\quad\quad\quad + (s-1) \frac{1}{k} \exp(- (s-1) \log k)\} \\ & = & 1 + \sum_{k=1}^{\infty} \Big\{ \sum_{n=0}^{\infty} \frac{(-1)^n (s-1)^n}{n!} [ (\log(k+1))^n - (\log k)^n] \\ && \quad\quad\quad \quad\quad + \frac{s-1}{k} \sum_{n=0}^{\infty} \frac{(-1)^n (s-1)^n(\log k)^n}{n!} \Big\}. \end{eqnarray*} Dividing by $(s-1)$ now yields \[ \zeta(s) = \frac{1}{s-1} + \sum_{n=0}^{\infty} \frac{(-1)^n }{n!}\gamma_n (s-1)^n, \] with $\gamma_0= \gamma$, and with \begin{equation}\label{402a} \gamma_n = \sum_{k=1}^{\infty} \Big\{ \frac{(\log k)^n}{k} - \frac{ (\log (k+1))^{n+1} - (\log k)^{n+1}}{n+1} \Big\}. \end{equation} This last formula is equivalent to \eqn{402}, since the first $m$ terms in the sum add up to the $m$-th term in \eqn{402}. \end{proof} Theorem~\ref{th41} gives the coefficients $\gamma_n$ of the zeta function in a form resembling the original definition of Euler's constant. These constants arose in a correspondence that Stieltjes carried on with his mentor Hermite (cf. \cite{HS05} ) from 1882 to 1894. In Letter 71 of that correspondence, written in June 1885, Stieltjes stated that he proposed to calculate the first few terms in the Laurent expansion\footnote{In Letter 71 Stieltjes uses the notation $C_0, C_1, ...$, but in Letter 75 he uses the same notation with a different meaning. We substitute $A_0, A_1, ...$ for the Letter 71 coefficients following \cite{BF88}.} of $\zeta(s)$ around $s=1$, $$ \zeta(z+1) = \frac{1}{z} + A_0 + A_1 \,z +... 
$$ noting that $A_0= 0.57721 5665...$ is known and that he finds $A_1= -0.07281 5\underline{5}20...$ In Letters 73 and 74, Hermite formulated \eqn{402}, which he called ``very interesting,'' and inquired about rigorously establishing it.\footnote{The result \eqn{402} is misstated in Erd\'{e}lyi et al \cite[(1.12.17)]{EMOT53}, making the right side equal $\frac{(-1)^n}{n!} \gamma_n$, a mistake tracing back to a paper of G. H. Hardy \cite{Ha12}.} In Letter 75 Stieltjes \cite[pp. 151--155]{HS05} gave a complete derivation of \eqn{402}, writing \[ \zeta(s+1) = \frac{1}{s} + C_0 - C_1 \,s + \frac{C_2}{1\cdot 2} \,s^2 - \frac{C_3}{1\cdot 2 \cdot 3}\, s^3+ \cdots \] The Laurent series coefficients $A_n$ are now named the Stieltjes constants by some authors who follow Letter 71 (\cite{BF88}, \cite{BL99}), while other authors follow Letter 75 and call the scaled values $\gamma_n= C_n$ the {\em Stieltjes constants} (\cite{Ke92}, \cite{Cof06}). \begin{table}\centering \renewcommand{\arraystretch}{.85} \begin{tabular}{|r|r| } \hline \multicolumn{1}{|c|}{$n$} & \multicolumn{1}{c|}{$\gamma_n$}\\ \hline 0 & $+$ 0.57721 ~56649 ~ 01532\\ 1 & $-$ 0.07281~ 58454 ~ 83676\\ 2 & $-$ 0.00969~ 03631 ~28723 \\ 3 & $+$ 0.00205 ~38344~ 20303 \\ 4 & $+ $ 0.00232 ~53700 ~65467\\ 5 & $+ $ 0.00079 ~33238~ 17301 \\ 6 & $-$ 0.00023 ~87693~ 45430 \\ 7 & $-$ 0.00052 ~72895 ~67057 \\ 8 & $-$ 0.00035 ~21233 ~53803 \\ 9 & $-$ 0.00003 ~43947~ 74418\\ 10 & $+ $ 0.00020~53328~14909 \\ \hline \end{tabular} \noindent \caption{Values for Stieltjes constants $\gamma_{n}$. } \label{tab31} \end{table} Table \ref{tab31} presents values of $\gamma_n$ for small $n$ as computed by Keiper \cite{Ke92}. A good deal is known about the size of these constants. There are infinitely many $\gamma_n$ of each sign (Briggs \cite{Br55}). Because $\zeta(s) - \frac{1}{s-1}$ is an entire function, the Laurent coefficients $A_n$ must rapidly approach $0$ in absolute value as $n \to \infty$.
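The limit \eqn{402} can be evaluated directly, though it converges slowly: stopping the sum at $m$ leaves an error of order $(\log m)^n/m$. A minimal numerical sketch in Python follows (the cutoff $m = 10^6$ is an arbitrary choice, good for the first few digits).

```python
import math

def stieltjes(n, m=10**6):
    # partial expression in the defining limit:
    #   sum_{k <= m} (log k)^n / k  -  (log m)^{n+1} / (n+1)
    s = sum(math.log(k) ** n / k for k in range(1, m + 1))
    return s - math.log(m) ** (n + 1) / (n + 1)

g0 = stieltjes(0)   # ~ gamma   =  0.57721...
g1 = stieltjes(1)   # ~ gamma_1 = -0.07281...
print(g0, g1)
```

Comparison with Table \ref{tab31} shows agreement in the leading digits at this cutoff.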
However, despite the sizes of the initial values in Table \ref{tab31}, the scaled values $\gamma_n$ are known to be unbounded (and to sometimes be exponentially large) as a function of $n$, cf. Cohen \cite[Sec. 10.3.5]{Coh07b}. Keiper estimated that $\gamma_{150}\approx 8.02 \times 10^{35}.$ More detailed asymptotic bounds on the $\gamma_n$ are given in Coffey \cite{Cof06} and Knessl and Coffey \cite{KC11}, \cite{KC11b}; see also Coffey \cite{Cof10}. At present nothing is known about the irrationality or transcendence of any of the Stieltjes constants. Various generalizations of the Riemann zeta function play a role in theoretical physics, especially in quantum field theory, in connection with a method of ``zeta function regularization'' for assigning finite values in certain perturbation calculations. This method was suggested by S. Hawking \cite{Haw77} in 1977, and it has now been widely extended to other questions in physics, see the book of Elizalde \cite{Elizalde95} (see also \cite{EOR94}, \cite{EVZ98}). Here one forms a generalized ``zeta function'' from spectral data and evaluates it at an integer point, often $s=0$. If the function is meromorphic at this point, then the constant term in the Laurent expansion at this value is taken as the ``regularized'' value. This procedure can be viewed as a quite special kind of ``dimensional regularization,'' in which perturbation calculations are made viewing the dimension of space-time as a complex variable $s$, and at the end of the calculation one specializes the complex variable to the integer value of the dimension, often $s=4$, see Leibbrandt \cite{Lei75}. Under this convention Euler's constant $\gamma$ functions as a ``regularized'' value $``\zeta(1)"$ of the zeta function at $s=1$, because by Theorem \ref{th41} it is the constant term in the Laurent expansion of the Riemann zeta function at $s=1$.
This interpretation as ``$\zeta(1)"$ also matches the constant term in the Taylor expansion of the digamma function $\psi(z)$ around $z=1$ in Theorem~\ref{th31}. Euler's constant occurs in many deeper ways in connection with the behavior of the Riemann zeta function. The most tantalizing open problem about the Riemann zeta function is the {\em Riemann hypothesis}, which asserts that the non-real complex zeros of the Riemann zeta function all fall on the line $\text{Re}(s) = \frac{1}{2}$. Sections \ref{sec36} and \ref{sec37} present formulations of the Riemann hypothesis in terms of the relation of limiting asymptotic behaviors of arithmetic functions to $e^{\gamma}$. Section \ref{sec37a} presents results relating the extreme value distribution of the Riemann zeta function on the line $\text{Re}(s)=1$ to the constant $e^{\gamma}$. In 1885 Stieltjes announced a proof of the Riemann hypothesis (\cite{Sti1885}, English translation in \cite[p. 561]{Sti93}). In Letter 71 to Hermite discussed above, Stieltjes enclosed his announcement of a proof of the Riemann hypothesis, requesting that it be communicated to Comptes Rendus, if Hermite approved, and Hermite did so. In Letter 79, written later in 1885, Stieltjes sketched his approach, saying that his proof was very complicated and that he hoped to simplify it. Stieltjes considered the Mertens function $$ M(n) =\sum_{j=1}^n \mu(j), $$ in which $\mu(j)$ is the M\"{o}bius function (defined in Section \ref{sec33}) and asserted that he could prove that $|M(n)|/\sqrt{n}$ is a bounded function, a result which would certainly imply the Riemann hypothesis. Stieltjes had further correspondence in 1887 with Mittag-Leffler, also making the assertion that he could prove $\frac{M(n)}{\sqrt{n}}$ is bounded between two limits, see \cite[Tome II, Appendix]{HS05}, \cite[Sect. 2]{tR93}. He additionally made numerical calculations\footnote{Such calculations were found among his posthumous possessions, according to te Riele \cite[p. 
69]{tR93}.} of the Mertens function $M(n)$ for ranges $1 \le n \le 1200$, $2000 \le n \le 2100$, and $6000\le n \le 7000$, verifying it was small over these ranges. However he never produced a written proof of the Riemann hypothesis related to this announcement. Modern work relating the Riemann zeta zeros and random matrix theory now indicates that $|M(n)|/\sqrt{n}$ should be an unbounded function, although this is unproved, see te Riele \cite[Sect. 4]{tR93} and Edwards \cite[Sect. 12.1]{Ed74}. \subsection{Euler's constant and prime numbers} \label{sec33} \setcounter{equation}{0} Euler's constant appears in various statistics associated to prime numbers. In 1874 Franz Mertens \cite{Mertens1874} determined the growth behavior of the sum of reciprocals of prime numbers, as follows. \begin{theorem}\label{th50} {\em (Mertens's Sum Theorem 1874)} For $x \ge 2$, \begin{equation*} \sum_{p \le x} \frac{1}{p} = \log\log x + B +R(x), \end{equation*} with $B$ being a constant \begin{equation}\label{500ab} B = \gamma + \sum_{m=2}^{\infty} \mu(m) \frac{\log \zeta(m)}{m}\approx 0.26149\, 72129 \cdots \end{equation} and with remainder term bounded by \begin{equation*} |R(x)| \le \frac{4}{\log (P +1)} + \frac{2}{P \log P}, \end{equation*} where $P$ denotes the largest prime smaller than $x$. \end{theorem} Here $\mu(n)$ is the {\em M\"{o}bius function}, which takes the values $\mu(1)=1$ and $\mu(n)= (-1)^k$ if $n$ is a product of $k$ distinct prime factors, and $\mu(n)=0$ otherwise. This result asserts the existence of the constant \begin{equation*} B := \lim_{ x\to \infty} \left( \sum_{p \le x} \frac{1}{p} - \log\log x\right), \end{equation*} which is then given explicitly by \eqn{500ab}. Mertens's proof deduced that $B= \gamma- H$ with \begin{equation}\label{500d} H := - \sum_{p}\{ \log(1- \frac{1}{p}) + \frac{1}{p} \} = 0.31571\, 84519 \dots . 
\end{equation} For modern derivations which determine the constant $B$ see Hardy and Wright \cite[Theorem 428]{HW08} or Tenenbaum and Mend\`{e}s France \cite[Sect. 1.9]{TM00}. Mertens \cite[Sect. 3]{Mertens1874} deduced from this result the following product theorem, involving Euler's constant alone. \begin{theorem}\label{th51} {\em (Mertens's Product Theorem 1874)} One has \begin{equation}\label{501a} \lim_{x \to \infty} ~~(\log x) \prod_{p \le x} \left(1- \frac{1}{p} \right) = e^{-\gamma}. \end{equation} More precisely, for $x \ge 2$, \begin{equation*} \prod_{p \le x} \left(1- \frac{1}{p} \right) = \frac{e^{-\gamma+ S(x)}}{\log x} \end{equation*} in which \begin{equation*} |S(x)| \le \frac{4}{\log (P+1)} + \frac{2}{P \log P} + \frac{1}{2P}, \end{equation*} where $P$ denotes the largest prime smaller than $x$. Thus $S(x) \to 0$ as $x \to \infty$. \end{theorem} In this formula the constant $e^{-\gamma}$ encodes a property of the entire ensemble of primes in the following sense: if one could vary a single prime in the product above, holding the remainder fixed, then the limit on the right side of \eqn{501a} would change. In Mertens's product theorem the finite product \begin{equation}\label{504b} D(x) := \prod_{p \le x}\left(1- \frac{1}{p} \right) \end{equation} is the inverse of the partial Euler product for the Riemann zeta function, taken over all primes below $x$, evaluated at the point $s=1$. The number $D(x)$ gives the limiting density of integers having no prime factor smaller than $x$, taken on the interval $[1, T]$ and letting $T \to \infty$ while holding $x$ fixed. Now the set of integers below $T= x^2$ having no prime factor smaller than $x$ consists of exactly the primes between $x$ and $x^2$. Letting $\pi(x)$ count the number of primes below $x$, the prime number theorem gives $\pi(x) \sim \frac{x}{\log x}$, and this yields the asymptotic formula $$ \pi(x^2)- \pi(x) \sim \frac{x^2}{\log x^2} - \frac{x}{\log x} \sim \frac{1}{2} \frac{x^2}{\log x}.
$$ On the other hand, if the density $D(x)$ in Mertens's theorem were asymptotically correct already at $T= x^2$ then it would predict the number of such primes to be about $e^{-\gamma} \frac{x^2}{\log x}$. The fact that $e^{-\gamma} = 0.561459 ...$ does not equal $\frac{1}{2}$, but is slightly larger, reflects the failure of the inclusion-exclusion formula to give an asymptotic formula in this range. This is a basic difficulty studied by sieve methods. A subtle consequence of this failure is that there must be occasional unexpectedly large fluctuations in the number of primes in short intervals away from their expected number. This phenomenon was discovered by H. Maier \cite{Mai85} in 1985. It is interesting to note that Maier's argument used a ``double counting'' argument in a tabular array of integers that resembles the array summations of Euler and Bernoulli exhibited in Section \ref{sec21}. His argument also used properties of the Buchstab function discussed in Section \ref{sec35}. See Granville \cite{Gra94} for a detailed discussion of this large fluctuation phenomenon. \subsection{Euler's constant and arithmetic functions}\label{sec33a} \setcounter{equation}{0} Euler's constant also appears in the behavior of three basic arithmetic functions, the divisor function $d(n)$, {\em Euler's totient function} $\phi(n)$ and the {\em sum of divisors function} $\sigma(n)$. For the divisor function $d(n)$ it arises in its average behavior, while for $\phi(n)$ and $\sigma(n)$ it concerns extremal asymptotic behavior and there appears in the form $e^{\gamma}$. These extremal behaviors can be deduced starting from Mertens's results. A basic arithmetic function is the {\em divisor function} \begin{equation*} d(n) := \#\{ d: d|n, ~~~ 1 \le d \le n\}, \end{equation*} which counts the number of divisors of $n$. In 1849 Dirichlet \cite{Dirichlet1849} introduced a method to estimate the average size of the divisor function and other arithmetic functions.
In the statement of Dirichlet's result $\{ x\} := x - [x]$ denotes the fractional part $x ~(\bmod \, 1)$. \begin{theorem} \label{th51a} {\em (Dirichlet 1849)} (1) The partial sums of the divisor function satisfy \begin{equation}\label{500bb} \sum_{k=1}^{n} d(k) = n\log n + (2 \gamma -1)n + O(\sqrt{n}) \end{equation} for $1 \le n < \infty$, where $\gamma$ is Euler's constant. (2) The averages of the fractional parts of $\frac{n}{k}$ satisfy \begin{equation}\label{500ff} \sum_{k=1}^n \{ \frac{n}{k} \} = (1-\gamma) n + O (\sqrt{n}) \end{equation} for $1 \le n < \infty.$ \end{theorem} \begin{proof} Dirichlet's proof starts from the identity \begin{equation}\label{500gg} \sum_{k=1}^n d(k) = \sum_{k=1}^n \lfloor \frac{n}{k} \rfloor, \end{equation} which relates the divisor function to integer parts of a scaled harmonic series. The right side of this identity is approximated using the harmonic sum estimate \begin{equation*} \sum_{k=1}^n \frac{n}{k} = n \log n + \gamma n + O (1), \end{equation*} in which Euler's constant appears. The difference term is the sum of fractional parts. In Sect. 2 Dirichlet formulates the ``hyperbola method" to count the sum of the lattice points on the right side of \eqn{500gg}, and in Sect. 3 he applies the method to get (1). In Sect. 4 he compares the answer \eqn{500bb} with \eqn{500gg} and obtains the formula \eqn{500ff} giving (2). A modern treatment of the hyperbola method appears in Tenenbaum \cite[I.3.2]{Ten95}. \end{proof} The problem of obtaining bounds for the size of the remainder term in \eqn{500bb}, \begin{equation}\label{500hh} \Delta(n):= \sum_{k=1}^{n} d(k) - ( n\log n + (2 \gamma -1)n), \end{equation} is now called the {\em Dirichlet divisor problem.} This problem seeks the best possible bound for the exponent $\theta$ of the form $ |\Delta(n)| =O( n^{\theta+\epsilon})$, valid for any given $\epsilon >0$ for $1\le n < \infty$, with $O$-constant depending on $\epsilon$. This problem appears to be very difficult. 
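The size of the main term in \eqn{500bb} and the smallness of $\Delta(n)$ can be observed directly, using the lattice-point identity \eqn{500gg} to compute the divisor sums. The following is a minimal sketch in Python, with the numerical value of $\gamma$ hard-coded.

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, hard-coded

def divisor_sum(n):
    # sum_{k <= n} d(k), computed via the identity sum_{k <= n} floor(n/k)
    return sum(n // k for k in range(1, n + 1))

def remainder(n):
    # Delta(n) = sum_{k <= n} d(k) - (n log n + (2 gamma - 1) n)
    return divisor_sum(n) - (n * math.log(n) + (2 * GAMMA - 1) * n)

for n in [10**3, 10**4, 10**5]:
    print(n, remainder(n) / math.sqrt(n))   # stays bounded, in fact small
```

The normalized remainders $\Delta(n)/\sqrt{n}$ remain small in this range, consistent with Dirichlet's $O(\sqrt{n})$ bound and with the conjectured exponent $\theta = 1/4$.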
Dirichlet's argument shows that the size of the remainder term $\Delta(n)$ is dominated by the remainder term in the fractional part sum \eqn{500ff}. In 1916 Hardy \cite{Ha16} established the lower bound $\theta \ge \frac{1}{4}$. The exponent $\theta= 1/4$ is conjectured to be the correct answer, and would correspond to an expected square-root cancellation. The best $\Omega$-result on fluctuations of $\Delta(n)$, which does not change the lower bound exponent, is that of Soundararajan \cite{Sou03} in 2003. Concerning upper bounds, a long sequence of improvements in exponent bounds has arrived at the current record $\theta \le \frac{131}{416} \approx 0.31490$, obtained by Huxley (\cite{Hux03}, \cite{Hux05}) in 2005. Tsang \cite{Tsa10} surveys recent work on $\Delta(n)$ and related problems. Dirichlet noted that \eqn{500ff} implies that the fractional parts $\{\{ \frac{n}{k} \}; 1 \le k \le n \}$ are far from being uniformly distributed $(\bmod \, 1)$. In Sect. 4 of \cite{Dirichlet1849} he proved that the number of $k$ with $1 \le k \le n$ having $0 \le \{ \frac{n}{k} \} \le \frac{1}{2}$ is asymptotically $(2- \log 4)n = (0.61370 \dots )n$, and the number with $\frac{1}{2} < \{ \frac{n}{k} \}< 1$ is asymptotically the complementary $(\log 4 -1)n = (0.38629 \dots) n$, as $n \to \infty.$ In 1898 de La Vall\'{e}e Poussin \cite{dLVP1898} generalized the fractional part sum estimate \eqn{500ff} above to summation over $k$ restricted to an arithmetic progression. He showed for each integer $a \ge 1$ and integer $0 < b \le a$, \begin{equation*} \sum_{k=0}^{(n-b)/a} \{ \frac{n}{ak+b} \} = \frac{1}{a}(1-\gamma) n + O (\sqrt{n}), \end{equation*} for $1 \le n < \infty$, with $O$-constant depending on $a$. He comments that it is very remarkable that these fractional parts approach the same average value $1-\gamma$ no matter which arithmetic progression we restrict them to. Very recently Pillichshammer \cite{Pil10} presented further work in this direction.
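Both the average value $1-\gamma$ in \eqn{500ff} and the non-uniform distribution of the fractional parts are visible at modest sizes. A sketch in Python (the choice $n = 10^5$ is arbitrary), using the exact identity $\{\frac{n}{k}\} = (n \bmod k)/k$ for integers:

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, hard-coded
n = 10**5
fracs = [(n % k) / k for k in range(1, n + 1)]   # {n/k} for 1 <= k <= n

avg = sum(fracs) / n
density = sum(1 for f in fracs if f <= 0.5) / n  # proportion with {n/k} <= 1/2

print(avg, 1 - GAMMA)            # average  ~ 1 - gamma = 0.42278...
print(density, 2 - math.log(4))  # density  ~ 2 - log 4 = 0.61370...
```

The empirical average is already close to $1-\gamma$, and well over half of the fractional parts lie in $[0, \frac{1}{2}]$, in line with Dirichlet's density $2 - \log 4$.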
We next consider {\em Euler's totient function} \begin{equation*} \phi(n) := \# \{ j: 1\le j \le n, ~~\mbox{with}~~\gcd(j, n)=1\}, \end{equation*} which has $\phi(n)= |({\mathbb Z}/n{\mathbb Z})^{*}|.$ Euler's constant appears in the asymptotics of the minimal growth rate (minimal order) of $\phi(n)$. In 1903 Edmund Landau \cite{Lan03} showed the following result, which he also presented later in his textbook \cite[pp. 216--219]{Lan09}. \begin{theorem} \label{th52} {\em (Landau 1903)} Euler's totient function $\phi(n)$ has minimal order $\frac{n}{\log\log n}$. Explicitly, \begin{equation*} \liminf_{n \to \infty} \frac{\phi(n) \log\log n}{n} = e^{-\gamma}. \end{equation*} \end{theorem} Since $\phi(n)= n \prod_{p|n}(1-\frac{1}{p})$, its extremal behavior is attained by the {\em primorial numbers}, which are the numbers that are products of the first $k$ primes, i.e. $N_k := p_1p_2\cdots p_k$, for example $N_4=2\cdot 3\cdot 5\cdot7= 210.$ The result then follows using Mertens's product theorem. The {\em sum of divisors function} is given by \begin{equation*} \sigma(n) := \sum_{d|n} d, \end{equation*} so that e.g. $ \sigma(6)=12.$ Euler's constant appears in the asymptotics of the maximal growth rate (maximal order) of $\sigma(n)$. In 1913 Gronwall \cite{Gr13} obtained the following result. \begin{theorem}~\label{th53} {\em (Gronwall 1913)} The sum of divisors function $\sigma(n)$ has maximal order $n \log\log n$. Explicitly, \begin{equation*} \limsup_{n \to \infty} \frac{\sigma(n)}{n \log\log n} = e^{\gamma}. \end{equation*} \end{theorem} For $\sigma(n)$ the extremal numbers form a very thin set, with a more complicated description, which was analyzed by Ramanujan \cite{Ram15} in 1915 (see also Ramanujan \cite{Ram97}) and Alaoglu and Erd\H{o}s \cite{AE44} in 1944. They have the property that their divisibility by each prime $p^{e_p(n)}$ has exponent $e_p(n) \to \infty$ as $n \to \infty$, see Section 3.7 and formula \eqn{551d}. 
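Landau's theorem can be watched numerically along the primorial numbers: since $\phi(N_k)/N_k = \prod_{p \le p_k} (1 - \frac{1}{p})$ and $\log N_k = \sum_{p \le p_k} \log p$, no large integers need to be formed. A minimal sketch in Python, sieving the primes up to an arbitrary cutoff $10^5$:

```python
import math

def primes_upto(n):
    # simple sieve of Eratosthenes
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

phi_ratio = 1.0   # phi(N_k)/N_k = prod over p <= 10^5 of (1 - 1/p)
log_N = 0.0       # log N_k = sum of log p
for p in primes_upto(10**5):
    phi_ratio *= 1.0 - 1.0 / p
    log_N += math.log(p)

ratio = phi_ratio * math.log(log_N)   # phi(N_k) loglog N_k / N_k
print(ratio, math.exp(-0.5772156649015329))   # approaches e^{-gamma} = 0.56145...
```

The convergence is slow, at a rate governed by the error term in Mertens's product theorem.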
The Euler totient function $\phi(n)$ and the sum of divisors function $\sigma(n)$ are related by the inequalities \begin{equation*} \frac{6}{\pi^2} n^2 < \phi(n) \sigma(n) \le n^2, \end{equation*} which are easily derived from the identity\footnote{ $p^e || n$ means $p^e |n $ and $p^{e+1} \nmid n$.} $$ \phi(n) \sigma(n) = n^2 \prod_{{p^e || n}\atop{e \ge 1}} \left( 1- \frac{1}{p^{e+1}}\right). $$ We note that the extremal behaviors of the ``smallest'' $\phi(n)$, resp. the ``largest'' $\sigma(n)$, given in Theorem~\ref{th52} and Theorem~\ref{th53}, have a product asymptotically growing like $n^2$. The extremal numbers $N_k = p_1 p_2 \cdots p_k$ in Landau's theorem are easily shown to satisfy $\phi(N_k) \sigma(N_k) \sim \frac{6}{\pi^2} {N_k}^2$ as $N_k \to \infty$. The extremal numbers $\tilde{n}$ in Gronwall's theorem can be shown to satisfy the other bound $$ \phi(\tilde{n}) \sigma(\tilde{n}) \sim {\tilde{n}}^2$$ as $\tilde{n} \to \infty$. The latter result implies that the Gronwall extremal numbers satisfy $$ \lim_{\tilde{n} \to \infty} \frac{ \phi(\tilde{n}) \log\log \tilde{n}}{\tilde{n}} = e^{-\gamma}, $$ showing they are asymptotically extremal in Landau's theorem as well. In Sections \ref{sec36} and \ref{sec37} we discuss more recent results showing that the Riemann hypothesis is encoded as an assertion as to how the extremal limits are approached in Landau's theorem and Gronwall's theorem. \subsection{Euler's constant and sieve methods: the Dickman function}\label{sec34} \setcounter{equation}{0} Euler's constant appears in connection with statistics on the sizes of prime factors of general integers. Let $\Psi(x, y)$ count the number of integers $n \le x$ having largest prime factor no larger than $y$. These are the numbers remaining if one sieves out all integers divisible by some prime exceeding $y$.
The Dirichlet series associated to the set of integers having no prime factor $>y$ is the {\em partial zeta function} \begin{equation}\label{523bb} \zeta(s ; y) := \prod_{p \le y} \left( 1- \frac{1}{p^s} \right)^{-1}, \end{equation} and we have $$ \zeta(1; y)= \frac{1}{D(y)}, $$ where $D(y)$ is given in \eqn{504b}. It was shown by Dickman \cite{Dic30} in 1930 that, for each $u >0,$ a positive fraction of all integers $n \le x$ have all prime factors less than $x^{1/u}$. More precisely, one has the asymptotic formula \begin{equation*} \Psi ( x, x^{\frac{1}{u}}) \sim \rho(u) x, \end{equation*} for a certain function $\rho(u)$, now called the {\em Dickman function}. (The notation $\rho(u)$ was introduced by de Bruijn \cite{deB51a}, \cite{deB51b}.) This function is determined for $u \ge 0$ by the properties: \begin{enumerate} \item (Initial condition) For $0 \le u \le1$, it satisfies \begin{equation*} \rho(u) =1. \end{equation*} \item (Differential-difference equation) For $u \ge 1$, \begin{equation*} u \rho'(u) = - \rho(u-1). \end{equation*} \end{enumerate} The Dickman function is alternatively characterized as the solution on the real line to the integral equation \begin{equation}\label{526d} u \rho(u) = \int_{0}^{1} \rho (u-t) dt, \end{equation} (for $u \rho(u)$) with initial condition as above, extended to require that $\rho(u)=0$ for all $u < 0$, as shown in de Bruijn \cite[(2.1)]{deB51a}. Another explicit form for $\rho(u)$ is the iterated integral form \begin{equation}\label{526e} \rho(u) = 1 + \sum_{k=1}^{\lfloor u \rfloor} \frac{(-1)^k}{k!} \int_{ { t_1+ \cdots + t_k \le u}\atop t_1, ..., t_k \ge 1} \frac{dt_1}{t_1} \frac{dt_2}{t_2} \cdots \frac{dt_k}{t_k}, \end{equation} which also occurs in connection with random permutations, see \eqref{gonch} ff.
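This characterization makes $\rho$ easy to tabulate: on $[1,2]$ the differential-difference equation with $\rho \equiv 1$ on $[0,1]$ gives $\rho'(u) = -1/u$, hence $\rho(u) = 1 - \log u$ there, and beyond $u=2$ one can step the equation numerically. A sketch in Python using a midpoint rule (the step size is an arbitrary choice), which also checks the integral equation \eqn{526d} at $u = 2.5$:

```python
import math

m = 1000               # steps per unit interval; h = 1/m
h = 1.0 / m
N = 3 * m              # tabulate rho on [0, 3]; index i <-> u = i*h
rho = [1.0] * (N + 1)  # initial condition: rho = 1 on [0, 1]

# step u rho'(u) = -rho(u-1) with a midpoint rule, for u >= 1
for i in range(m, N):
    u_mid = (i + 0.5) * h
    lag = 0.5 * (rho[i - m] + rho[i + 1 - m])   # ~ rho(u_mid - 1)
    rho[i + 1] = rho[i] - h * lag / u_mid

print(rho[2 * m], 1 - math.log(2))   # rho(2) = 1 - log 2 = 0.30685...

# integral equation: u rho(u) = integral of rho over [u-1, u], tested at u = 2.5
i0, i1 = 3 * m // 2, 5 * m // 2
integral = h * (sum(rho[i0:i1]) + 0.5 * (rho[i1] - rho[i0]))  # trapezoid rule
print(2.5 * rho[i1], integral)
```

The tabulated values reproduce the closed form $\rho(2) = 1 - \log 2$ to high accuracy, and the two sides of the integral equation agree to within the discretization error.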
It is known that for $u \ge 1$ the Dickman function $\rho(u)$ is a strictly decreasing positive function that decreases rapidly, satisfying \begin{equation*} \rho(u) \le \frac{1}{\Gamma(u+1)}, \end{equation*} see Norton \cite[Lemma 4.7]{Nor71}. The key relation of the Dickman function to Euler's constant is that it has total mass $e^{\gamma}$, see Theorem~\ref{th52b} below. In 1951 N. G. de Bruijn \cite{deB51a} gave an exact expression for the Dickman function as a contour integral, \begin{equation}\label{528d} \rho(u) = \frac{1}{2 \pi i } \int_{-i\infty}^{i \infty} \exp\left(\gamma + \int_{0}^z \frac{e^s-1} {s} ds\right) e^{-uz} dz ~~~(u>0). \end{equation} One may also consider the one-sided Laplace transform of the Dickman function which we denote \begin{equation*} \hat{\rho}(s) := \int_{0}^{\infty} \rho(u) e^{-us} du, \end{equation*} following Tenenbaum \cite[III.5.4]{Ten95}. This integral converges absolutely for all $s \in {\mathbb C}$ and defines $\hat{\rho}(s)$ as an entire function. To evaluate it, recall that the {\em complementary exponential integral } \begin{equation}\label{cexpi} {\rm Ein}(z) := \int_{0}^z \frac{1- e^{-t}}{t} dt = \sum_{n=1}^{\infty}(-1)^{n-1} \frac{1}{n \cdot n!} z^n, \end{equation} is an entire function of $z$. \begin{theorem}\label{th52b} {\em (de Bruijn 1951, van Lint and Richert 1964)} The one-sided Laplace transform $\hat{\rho}(s)$ of the Dickman function $\rho(u)$ is an entire function, given by \begin{equation}\label{530d} \hat{\rho}(s) = e^{\gamma - {\rm Ein}(s)}, \end{equation} in which ${\rm Ein} (s)$ is the complementary exponential integral. In particular, the total mass of the Dickman function is \begin{equation}\label{532d} \hat{\rho}(0) = \int_{0}^{\infty} \rho (u) \,du = e^{\gamma}. \end{equation} \end{theorem} \begin{proof} Inside de Bruijn's formula \eqn{528d} there appears the integral $$ I(s) := \int_{0}^{s} \frac{e^{t} -1}{t} dt= -{\rm Ein}(-s). 
$$ This formula determines the Fourier transform of the Dickman function to be $e^{\gamma + I(-it)}$. The change of variable $s=-it$ yields the one-sided Laplace transform of $\rho(u)$ above. The total mass identity \eqn{532d} was noted in 1964 in van Lint and Richert \cite{vLR64}. Moreover they noted that as $u \to \infty$, \begin{equation*} \int_{0}^u \rho (t) dt = e^{\gamma} + O\left( e^{-u}\right). \end{equation*} A direct approach to the determination of the Laplace transform given in \eqn{530d} was given by Tenenbaum \cite[III.5.4, Theorem 7]{Ten95} in 1995. His proof uses a lemma showing that for all $s \in {\mathbb C} \smallsetminus (-\infty, 0]$ one has \begin{equation}\label{533d} - I(-s)= {\rm Ein}(s) = \gamma + \log s + J(s), \end{equation} with \begin{equation}\label{534d} J(s) := \int_{0}^{\infty} \frac{e^{-s-t}}{s+t} dt. \end{equation} Tenenbaum sketches a direct arithmetic proof of the identity \eqn{532d} in \cite[III.5 Exercise 2, p. 392]{Ten95}; in this argument the quantity $e^{\gamma}$ is derived using Mertens's product theorem. We note the identity \begin{equation*} J(s) = {\rm E}_1(s), \end{equation*} in which $ {\rm E}_1(s) := \int_{s}^{\infty} \frac{e^{-t}}{t} dt$ is the principal exponential integral. The function ${\rm E}_1(s)$ appeared in connection with Euler's divergent series in Section \ref{sec25}, see \eqn{562a}. \end{proof} In 1973 Chamayou \cite{Cha73} found a probabilistic interpretation of the Dickman function $\rho(x)$ which is suitable for computing it by Monte Carlo simulation. \begin{theorem}\label{th53aa} {\em (Chamayou 1973)} Let $X_1, X_2, X_3, ...$ be a sequence of independent identically distributed random variables with uniform distribution on $[0,1].$ Set \[ P(u) := {\rm Prob} [ X_1 + X_1 X_2 + X_1X_2X_3 + \cdots \le u]. \] This function is well defined for $u \ge 0$ and its derivative $P'(u)$ satisfies \begin{equation*} \rho (u) = e^{\gamma} P'(u).
\end{equation*} \end{theorem} A discrete identity relating the Dickman function and $e^{\gamma}$ was noted by Knuth and Trabb-Pardo \cite{KTP76}, which complements the continuous identity \eqn{532d}. \begin{theorem}\label{th53b} {\em (Knuth and Trabb-Pardo 1976)} For $0 \le x \le 1$ the Dickman function satisfies the identity \begin{equation}\label{534f} x + \sum_{n=1}^{\infty} (x+n) \rho(x+n)= e^{\gamma}. \end{equation} In particular, taking $x=0$, \begin{equation*} \rho(1) + 2 \rho(2) + 3 \rho(3) + \cdots = e^{\gamma}. \end{equation*} \end{theorem} \begin{proof} The identity \eqn{534f} immediately follows from \eqn{532d}, using the integral equation \eqn{526d} for $u\rho(u)$. \end{proof} We conclude this section by using formulas above to deduce a curious identity relating a special value of this Laplace transform to the Euler-Gompertz constant ${\delta}$ given in Section 2.4; compare also Theorem~\ref{th102}. \begin{theorem}\label{th53c} The Laplace transform $\hat{\rho}(s)$ of the Dickman function $\rho(u)$ is given at $s=1$ by \begin{equation*} \hat{\rho}(1) := \int_{0}^{\infty} \rho(u) \, e^{-u} du = e^{-\frac{{\delta}}{e}}, \end{equation*} in which ${\delta} := \int_{0}^1\frac{dv}{1-\log v}$ is the Euler-Gompertz constant. \end{theorem} \begin{proof} By definition $\hat{\rho}(1) = \int_{0}^{\infty} \rho(u) e^{-u}du.$ We start from Hardy's formula \eqn{560b} for the Euler-Gompertz constant: $$ {\delta}= s(1) = \int_{0}^{\infty} \frac{e^{-w}}{1+w} dw. $$ Comparison with \eqn{534d} then yields \begin{equation*} {\rm E}_1(1)= J(1)= \int_{0}^{\infty} \frac{e^{-1-t}}{1+t} dt = \frac{{\delta}}{e}. \end{equation*} Now \eqn{533d} evaluated at $s=1$ gives \begin{equation} \label{353EG} {\rm Ein}(1) = \gamma+ {\rm E}_1(1) = \gamma + \frac{{\delta}}{e}, \end{equation} which can be rewritten using \eqn{cexpi} as \begin{equation}\label{353EG2} \int_{0}^{1} \frac{1-e^{-t}}{t} dt =\gamma + \frac{{\delta}}{e}.
\end{equation} We conclude from Theorem \ref{th52b} that $$ \hat{\rho}(1) = e^{\gamma -Ein(1)} = e^{-\frac{{\delta}}{e}}, $$ which is the assertion. \end{proof} Hildebrand and Tenenbaum \cite{HT93a} and Granville \cite{Gra08} give detailed surveys including estimates for $\Psi(x, y)$, a field sometimes called {\em psixyology}, see Moree \cite{Mor93}. It has recently been uncovered that Ramanujan had results on the Dickman function in his `Lost Notebook', prior to the work of Dickman, see Moree \cite[Sect. 2.4]{Mor13}. \subsection{Euler's constant and sieve methods: the Buchstab function}\label{sec35} \setcounter{equation}{0} Euler's constant also arises in sieve methods in number theory in the complementary problem where one removes integers having some small prime factor. Let $\Phi(x,y)$ count the number of integers $n \le x$ having no prime factor $p \le y$. In 1937 Buchstab \cite{Buc37} (see also \cite{Buc38}) established for $ u >1$ an asymptotic formula \begin{equation*} \Phi(x, x^{\frac{1}{u}})\sim u\, \omega(u) \frac{x}{\log x}, \end{equation*} for a certain function $\omega(u)$ named by de Bruijn \cite{deB50a} the Buchstab function. The {\em Buchstab function} is defined for $u \ge 1$ by the properties: \begin{enumerate} \item (Initial conditions) For $1 \le u \le 2$, it satisfies \begin{equation*} \omega(u) = \frac{1}{u}. \end{equation*} \item (Differential-difference equation) For $u \ge 2$, \begin{equation*} (u \, \omega(u) )^{'}= \omega(u-1). \end{equation*} \end{enumerate} This function is alternatively characterized as the solution on the real line to the integral equation \begin{equation}\label{546d} u \, \omega(u) = 1+\int_{1}^{u-1} \omega (t) dt, \end{equation} as shown in de Bruijn \cite[(2.1)]{deB50a}, who determined properties of this function. 
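The integral equation \eqn{546d} also yields a simple numerical scheme for $\omega(u)$: starting from $\omega(u)=1/u$ on $[1,2]$, one marches forward in $u$, updating the running integral $\int_1^{u-1}\omega(t)\,dt$ by the trapezoidal rule. (On $[2,3]$ the differential-difference equation integrates exactly to $u\,\omega(u) = 1 + \log(u-1)$, which provides a check.) The following Python sketch is an illustration only; the step size and evaluation points are arbitrary choices, not from the literature.

```python
import math

def buchstab(U, h=1e-3):
    """Approximate the Buchstab function omega on [1, U] from the
    integral equation u*omega(u) = 1 + int_1^{u-1} omega(t) dt,
    marching forward with a trapezoidal running integral."""
    n = int(round((U - 1) / h))
    k = int(round(1 / h))            # grid shift corresponding to u -> u - 1
    u = [1 + i * h for i in range(n + 1)]
    w = [0.0] * (n + 1)
    C = [0.0] * (n + 1)              # C[i] approximates int_1^{u[i]} omega(t) dt
    for i in range(n + 1):
        if u[i] <= 2:
            w[i] = 1 / u[i]          # initial condition on [1, 2]
        else:
            w[i] = (1 + C[i - k]) / u[i]   # the integral equation at u = u[i]
        if i > 0:
            C[i] = C[i - 1] + h * (w[i - 1] + w[i]) / 2
    return u, w

u, w = buchstab(10.0)
# exact value on [2,3]: u*omega(u) = 1 + log(u-1), so omega(2.5) = (1 + log 1.5)/2.5
print(w[1500], (1 + math.log(1.5)) / 2.5)
print(w[-1], math.exp(-0.5772156649015329))   # omega(10) is already very close to e^{-gamma}
```

Already at $u=10$ the computed value agrees with $e^{-\gamma} \approx 0.5615$ to within the discretization error, reflecting the rapid convergence noted above.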
Another explicit formula for this function, valid for $u >2$, is $$ u \omega(u) = 1 + \sum_{2 \le k \le \lfloor u\rfloor} \frac{1}{k!} \int_{ {1/u \le y_i \le 1}\atop{ 1/u \le 1- (y_1+y_2 + \cdots + y_{k-1}) \le 1} } \frac{dy_1 dy_2 \cdots dy_{k-1}}{y_1 y_2 \cdots y_{k-1} (1- (y_1 +y_2+ \cdots + y_{k-1}) )}. $$ It is known that the function $\omega(u)$ is oscillatory and satisfies \begin{equation*} \lim_{u \to \infty} \omega(u) = e^{-\gamma}. \end{equation*} The convergence to $e^{-\gamma}$ is extremely rapid, and satisfies the estimate \begin{equation}\label{eq345a} \omega(u) = e^{-\gamma} + O( u^{-u/2}) \quad ~~~\mbox{for} ~~u \in [2, \infty). \end{equation} Proofs of these results can be found in Montgomery and Vaughan \cite[Sec. 7.2]{MV07} and in Tenenbaum \cite[Chap. III.6]{Ten95}. The convergence behavior of the Buchstab function to $e^{-\gamma}$ as $u \to \infty$ played a crucial role in the groundbreaking 1985 work of Helmut Maier \cite{Mai85} showing the existence of large fluctuations in the density of primes in short intervals; this work was mentioned at the end of Section \ref{sec33}. Maier \cite[Lemma 4]{Mai85} showed that $\omega(u)-e^{-\gamma}$ changes sign at least once on each interval $[a-1, a]$ for any $a \ge 2$. In 1990 Cheer and Goldston \cite{CG90} showed there are at most two sign changes and at most two critical points on an interval $[a-1, a]$, and Hildebrand \cite{Hi90} showed that the spacing between such sign changes approaches $1$ as $u \to \infty$. A consequence of the analysis of de Bruijn \cite{deB50a} is that the constant $e^{-\gamma}$ appears as a universal limiting constant for sieve methods, for a wide range of sieve cutoff functions $y=y(x)$ growing neither too slow nor too fast compared to $x$. \begin{theorem} \label{buchstab} {\rm (de Bruijn 1950)} Suppose that $y= y(x)$ depends on $x$ in such a way that both $y \to \infty$ and $\frac{\log y}{\log x} \to 0$ hold as $x \to \infty$.
Under these conditions we have \begin{equation}\label{361aa} \Phi(x, y) \sim e^{-\gamma} \frac{x}{\log y}. \end{equation} \end{theorem} \begin{proof} This result can be deduced from de Bruijn \cite[(1.7)]{deB50a}. This asymptotic formula also follows as a corollary of a very general estimate (\cite[III.6.2, Theorem 3]{Ten95}) valid uniformly on the region $x \ge y \ge 2$, which states \begin{equation*} \Phi(x, y) = \omega(\frac{\log x}{\log y})\frac{x}{\log y}- \frac{y}{\log y} + O \Big( \frac{x}{ (\log y)^2} \Big). \end{equation*} The formula \eqn{361aa} follows using the estimate \eqn{eq345a} for $\omega(u)$, which applies since the hypothesis $\frac{\log y}{\log x} \to 0$ implies $\frac{\log x}{\log y} \to \infty$ as $x \to \infty$. \end{proof} The one-sided Laplace transform of the Buchstab function is given as \begin{equation}\label{355a} \hat{\omega}(s) := \int_{0}^{\infty} e^{-su} \omega (u) du, \end{equation} where we make the convention that $\omega(u) =0$, for $0 \le u <1.$ Under this convention, this Laplace transform has a simple relation with the Laplace transform of the Dickman function. \begin{theorem} \label{buchstab0} {\em (Tenenbaum 1995)} The one-sided Laplace transform $\hat{\omega}(s)$ defined for $Re(s)>0$ by \eqn{355a}, extends to a meromorphic function on ${\mathbb C}$, given explicitly in the form \begin{equation}\label{356a} 1 + \hat{\omega}(s) = \frac{1}{s \hat{\rho}(s)}, ~~~~\quad\quad (s \ne 0). \end{equation} When $s$ is not real and negative, \begin{equation}\label{357a} 1+ \hat{\omega}(s) = e^{J(s)}, \end{equation} with $J(s) = \int_{0}^{\infty} \frac{e^{-s-t}}{s+t} dt = E_1(s).$ \end{theorem} \begin{proof} The Laplace transforms individually have been evaluated many times in the sieve method literature, see Wheeler \cite[Theorem 2, ff]{Whe90}, who also gives references back to the 1960's.
One has explicitly that \begin{equation}\label{357b} 1 + \hat{\omega}(s) = \frac{1}{s} \exp\big( {\rm Ein}(s) - \gamma \big), \end{equation} in terms of the complementary exponential integral. The identity \eqn{356a} connecting them seems to be first explicitly stated in Tenenbaum \cite[III.6, Theorem 5]{Ten95}. \end{proof} \noindent Combining this result with Theorem \ref{th52b} gives $$ \hat{\omega}(s) \sim \frac{ e^{-\gamma}}{s}, \quad\quad \mbox{as} ~~ s\to 0. $$ We also obtain using \eqn{357a}, \eqn{357b} along with \eqn{249a} that \begin{equation}\label{362EG} \hat{\omega}(1) := \int_{0}^{\infty} \omega(u) e^{-u} du = \exp ({\rm E}_1(1)) - 1 = e^{\delta/e} - 1, \end{equation} where $\delta$ is the Euler-Gompertz constant. Wheeler \cite[Theorem 2]{Whe90} also observes that the solutions to the differential-difference equations for the Dickman function and the Buchstab function are distinguished from those for general initial conditions by the special property that their Laplace transforms analytically continue to entire (resp. meromorphic) functions of $s \in {\mathbb C}$. For a retrospective look at the work of N. G. de Bruijn on $\rho(x)$ and $\Psi(x,y)$ in Section \ref{sec34}, and of $\omega(x)$ and $\Phi(x, y)$ in this section, see Moree \cite{Mor13}. \subsection{Euler's constant and the Riemann hypothesis }\label{sec36} \setcounter{equation}{0} Several formulations of the Riemann hypothesis can be given in terms of Euler's constant. Here we describe some that involve the approach towards the extremal behaviors of Euler's totient function $\phi(n)$ and the sum of divisors function $\sigma(n)$ given in Section \ref{sec33a}. The Riemann hypothesis is also related, in another way, to generalized Euler's constants considered in Section \ref{sec37}. In 1981 J.-L.
Nicolas (\cite{Ni81}, \cite{Ni83}) proved that the Riemann hypothesis is encoded in the property that the Euler totient function values $\phi(n_k)$ for the extremal numbers $n_k = p_1p_2\cdots p_k$ given in Landau's theorem approach their limiting value from one side only. Nicolas stated his result using the inverse of Landau's quantity, as follows. \begin{theorem}\label{th54} {\em (Nicolas 1981) } The Riemann hypothesis holds if and only if all the primorial numbers $N_k = p_1p_2 \cdots p_k$ with $k \ge 2$ satisfy \begin{equation*} \frac{N_k}{\phi(N_k) \log\log N_k}> e^{\gamma}. \end{equation*} If the Riemann hypothesis is false, then these inequalities will be true for infinitely many primorial numbers $N_k$ and false for infinitely many $N_k$. \end{theorem} This inequality is equivalent to the statement that the Riemann hypothesis implies that all the primorial numbers $N_k$ for $k \ge 2$ {\em undershoot} the asymptotic lower bound $e^{-\gamma}$ in Landau's Theorem \ref{th52}. As an example, $\phi( 2 \cdot 3 \cdot 5 \cdot 7) = \phi(210)=48$ and $$ \frac{\phi(n_4) \log\log n_4}{n_4} = \frac{\phi(210) \log\log 210}{210} = 0.38321 \cdots < e^{-\gamma} = 0.56145 \dots . $$ For most $n$ one will have $\frac{\phi(n) \log\log n}{n}> e^{-\gamma}$, and only a very thin subset of $n$ will be close to this bound. For example taking $n=p_k$, a single prime, one sees that as $k \to \infty$ this ratio goes to $+\infty$. Very recently Nicolas \cite{Ni12} obtained a refined encoding of the Riemann hypothesis in terms of a sharper asymptotic for small values of the Euler $\phi$-function, in which both $\gamma$ and $e^{\gamma}$ appear. \begin{theorem}\label{th374} {\em (Nicolas 2012) } For each integer $n \ge 2$ set $$ c(n) := \Big(\frac{n}{\phi(n)} - e^{\gamma} \log\log n\Big) \sqrt{\log n}. $$ Then the Riemann hypothesis is equivalent to the statement that \begin{equation}\label{374b} \limsup_{n \to \infty} \,c(n) = e^{\gamma}( 4 + \gamma - \log (4\pi)). 
\end{equation} \end{theorem} \begin{proof} The assertion that the Riemann hypothesis implies \eqn{374b} holds is \cite[Theorem 1.1]{Ni12}, and the converse assertion is \cite[Corollary 1.1]{Ni12}. Here the constant \[ e^{\gamma}\Big(4+ \gamma -\log (4 \pi) \Big)= e^{\gamma}(2 + \beta)= 3.64441\, 50964..., \] in which \[ \beta= \sum_{\rho} \frac{1}{\rho(1-\rho)}=2 +\gamma - \log (4 \pi)=0.04619\, 14179..., \] where in the sum $\rho$ runs over the nonreal zeros of the zeta function, counted with multiplicity. For the converse direction, he shows that if the Riemann hypothesis fails, then $$ \limsup_{n \to \infty} c(n) = +\infty. $$ Note that $\liminf_{n \to \infty} c(n)= -\infty$ holds unconditionally. \end{proof} In 1984 G. Robin \cite{Ro84} showed that for the sum of divisors function $\sigma(n)$ the Riemann hypothesis is also encoded as a one-sided approach to the limit in Gronwall's theorem (Theorem \ref{th53}). \begin{theorem}\label{th55} {\em (Robin 1984) } The Riemann hypothesis holds if and only if the inequalities \begin{equation} \label{551} \frac{\sigma(n)}{n \log\log n} < e^{\gamma} \end{equation} are valid for all $n \ge 5041$. If the Riemann hypothesis is false, then this inequality will be true for infinitely many $n$ and false for infinitely many $n$. \end{theorem} Robin's result says that the Riemann hypothesis is equivalent to the limiting value $e^{\gamma}$ being approached from below, for all sufficiently large $n$. The inequality \eqn{551} fails to hold for a few small $n$, the largest known exception being $n=5040$. Here the extremal numbers giving record values for $f(n)= \frac{\sigma(n)}{n}$ will have a different form than that in Nicolas's theorem. It is known that infinitely many of the extremal numbers will be {\em colossally abundant numbers}, as defined by Alaoglu and Erd\H{o}s \cite{AE44} in 1944.
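Robin's criterion invites numerical experiment. The following Python sketch (the search cutoff $20000$ is an arbitrary choice made here for illustration) computes $\sigma(n)$ by a divisor sieve, confirms that $n=5040$ violates the inequality \eqn{551}, and finds no violation for $5041 \le n \le 20000$.

```python
import math

N = 20000
sigma = [0] * (N + 1)                 # sigma[n] = sum of divisors of n
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        sigma[m] += d

e_gamma = math.exp(0.5772156649015329)

def robin_ratio(n):
    """The ratio sigma(n) / (n log log n) appearing in Robin's criterion."""
    return sigma[n] / (n * math.log(math.log(n)))

print(robin_ratio(5040) > e_gamma)    # True: n = 5040 is an exception
print([n for n in range(5041, N + 1) if robin_ratio(n) >= e_gamma])   # []: none above 5040
```

Of course, such finite verifications prove nothing about the Riemann hypothesis itself; they merely illustrate how close the extremal ratios come to the threshold $e^{\gamma}$.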
These are numbers $n$ such that there is some $\epsilon >0$ such that $$ \frac{\sigma(n)}{n^{1+\epsilon}} \ge \frac{\sigma(k)}{k^{1+\epsilon}}, ~~~\mbox{for}~~~ 1\le k < n. $$ Alaoglu and Erd\H{o}s showed that the ``generic" colossally abundant number is a product of powers of small primes with exponents determined by a parameter $\epsilon >0$ as \begin{equation*} n = n(\epsilon) :=\prod_{p} p^{a_p(\epsilon)}, \end{equation*} in which \begin{equation}\label{551d} a_p(\epsilon) := \lfloor \frac{ \log(p^{1+\epsilon} -1) - \log(p^{\epsilon}-1)}{\log p} \rfloor -1. \end{equation} For fixed $\epsilon$ the sequence of exponents $a_p(\epsilon)$ is non-increasing as $p$ increases and becomes $0$ for large $p$. However for fixed $p$ the exponent $a_p(\epsilon)$ increases as $\epsilon$ decreases towards $0$, so that $n(\epsilon) \to \infty$ as $\epsilon \to 0^{+}.$ Colossally abundant and related numbers had actually been studied by Ramanujan \cite{Ram15} in 1915, but the relevant part of this paper was suppressed by the London Mathematical Society to save expense. The suppressed part of the manuscript was recovered in his Lost Notebook, and later published in 1997 (\cite{Ram97}). In 2002 this author (\cite{La02}), starting from Robin's result, obtained the following elementary criterion for the Riemann hypothesis, involving the sum of divisors function $\sigma(n)$ and the harmonic numbers $H_n$. \begin{theorem}~\label{th56} The Riemann hypothesis is equivalent to the assertion that for each $n \ge 1$ the inequality \begin{equation}\label{561} \sigma(n) \le e^{H_n} \log H_n + H_n \end{equation} is valid. Assuming the Riemann hypothesis, equality holds if and only if $n=1$. \end{theorem} The additive term $H_n$ is included in this formula for elegance, as it makes the result hold for all $n \ge 1$ (assuming RH), rather than being valid only for $n \ge 5041,$ as in Robin's theorem \ref{th55}.
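The inequality \eqn{561} is equally easy to test on small $n$ (again, such computations prove nothing about RH; the cutoff $10^4$ below is an arbitrary choice for illustration). The sketch checks equality at $n=1$ and strict inequality for $2 \le n \le 10^4$.

```python
import math

N = 10000
sigma = [0] * (N + 1)                  # sigma[n] = sum of divisors of n, via a sieve
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        sigma[m] += d

H = 0.0                                # running harmonic number H_n
strict = True
for n in range(1, N + 1):
    H += 1.0 / n
    bound = math.exp(H) * math.log(H) + H
    if n == 1:
        equality_at_1 = (sigma[1] == bound)   # e^1 * log(1) + 1 = 1 = sigma(1)
    elif sigma[n] >= bound:
        strict = False
print(equality_at_1, strict)           # True True
```

The margin is smallest at highly composite values such as $n=12$ and $n=60$, where $\sigma(n)$ comes within a fraction of a unit of the bound.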
The converse direction of this result requires additional proof, which is accomplished in \cite{La02} using certain asymptotic estimates obtained in Robin's paper \cite{Ro84}. It shows that if the Riemann hypothesis is false then the inequalities \eqn{561} will fail to hold for infinitely many positive integers $n$. \subsection{Generalized Euler constants and the Riemann hypothesis}\label{sec37} \setcounter{equation}{0} In 1961 W. Briggs \cite{Br61} introduced the notion of an {\em Euler constant associated to an arithmetic progression} of integers. This notion was studied in detail by D. H. Lehmer \cite{Leh75} in 1975. For the arithmetic progression $h ~(\bmod~k)$ with $0\le h< k$ we set \begin{equation*} \gamma(h,k) := \lim_{x \to \infty} \Big(\sum_{\substack{0< n \le x\\n\equiv h ~(\bmod k)}} \frac{1}{n} - \frac{\log x}{k} \Big). \end{equation*} These constants were later termed {\em Euler-Lehmer constants} by Murty and Saradha \cite{MS10}. Here one has $\gamma(0,1) = \gamma$, $\gamma(1, 2) =\frac{1}{2} ( \gamma+ \log 2)$ and $\gamma(2, 4) = \frac{1}{4} \gamma$, where $\gamma$ is Euler's constant. It suffices to study the constants where $\gcd(h,k)=1$; other cases reduce to these by dividing out the greatest common factor. We next define {\em generalized Euler constants $\gamma(\Omega)$} associated to a finite set of primes $\Omega=\{ p_{i_1}, \ldots , p_{i_k}\} $, possibly empty. These constants were introduced by Diamond and Ford \cite{DF08} in 2008. To specify them, we first define a {\em zeta function} $Z_{\Omega}(s)$ by \begin{equation*} Z_{\Omega}(s) := \left( \prod_{p \in \Omega} (1- \frac{1}{p^s}) \right) \zeta(s) = \sum_{n=1}^{\infty} {\bf 1}_{\Omega}(n) n^{-s}= \sum_{gcd(n, P_{\Omega})=1} n^{-s} \end{equation*} where we set \[ {\bf 1}_{\Omega}(n) = \begin{cases} 1 & ~\mbox{if} ~~(n, P_{\Omega}) = 1,\\ 0 & ~\mbox{otherwise,} \end{cases} \] in which $$ P_{\Omega} := \prod_{p \in \Omega} p.
$$ The function $Z_{\Omega}(s)$ defines a meromorphic function on the entire plane which has a simple pole at $s=1$ with residue \begin{equation*} D(\Omega):= \prod_{p \in \Omega} \left(1- \frac{1}{p}\right). \end{equation*} The {\em generalized Euler constant} $\gamma(\Omega)$ associated to $\Omega$ is the constant term in the Laurent expansion of $Z_{\Omega}(s)$ around $s=1$, namely $$ Z_{\Omega}(s) = \frac{D(\Omega)}{s-1} + \gamma(\Omega) + \sum_{k=1}^{\infty} \gamma_k(\Omega)(s-1)^k. $$ In the case that $\Omega=\emptyset$ is the empty set, this function $Z_{\Omega}(s)$ is exactly the Riemann zeta function, and $\gamma(\emptyset)= \gamma$ by Theorem \ref{th41}. These numbers $\gamma(\Omega)$ generalize the characterization of Euler's constant in \eqn{402} in the sense that \begin{equation*} \gamma(\Omega) = \lim_{n \to \infty} \Big( \sum_{j=1}^n \frac{{\bf 1}_{\Omega}(j)}{j} - D(\Omega) \log n \Big). \end{equation*} The constants $\gamma(\Omega)$ are easily shown to be finite sums of Euler-Lehmer constants \begin{equation}\label{604aa} \gamma(\Omega) = \sum_{{1 \le h < P_{\Omega}}\atop{gcd(h, P_{\Omega})=1}} \gamma(h, P_{\Omega}). \end{equation} Diamond and Ford \cite[Theorem 1]{DF08} show that these generalized Euler constants are related to Euler's constant in a second way which involves the constant $e^{-\gamma}$ rather than $\gamma$. Let $\Omega_r$ denote the set of the first $r$ primes and set $\gamma_r := \gamma(\Omega_r)$. \begin{theorem}\label{th61} {\em (Diamond and Ford 2008)} Let $\Gamma= \inf \{ \gamma ({\mathcal P}): ~~\mbox{all finite}~~{\mathcal P}\}$. Then: (1) The values of $\gamma({\mathcal P})$ are dense in $[\Gamma, \infty).$ (2) The constant $\Gamma$ satisfies \begin{equation}\label{381k} 0.56 \le \Gamma \le e^{-\gamma}. \end{equation} If $\Gamma < e^{-\gamma}$ then the value $\Gamma$ is attained at some $\gamma_r$. \end{theorem} \begin{proof} (1) This is shown as \cite[Prop. 2]{DF08}.
(2) Diamond and Ford show that each $\gamma({\mathcal P}) \ge \gamma_r$ for some $1 \le r < |{\mathcal P}|$. This implies \begin{equation}\label{381m} \Gamma = \inf \{ \gamma_r: r \ge 1\}. \end{equation} Mertens's formulas for sums and products yield the asymptotic formula \begin{equation}\label{381h} \gamma_r \sim e^{-\gamma} ~~\mbox{as}~~~ r \to \infty, \end{equation} which by \eqn{381m} implies that $\Gamma \le e^{-\gamma}$. If $\Gamma < e^{-\gamma}$ then it must be attained by one of the $\gamma_{r}$. For there must be at least one $\gamma_{r_1} < e^{-\gamma}$, and by \eqn{381h} there are only finitely many $\gamma_{r} \le \gamma_{r_1}$, and one of these must attain the infimum by the bound above. Diamond and Ford established the lower bound $\gamma_r \ge 0.56$ for all $r \ge 1$. \end{proof} Diamond and Ford \cite[Theorem 2]{DF08} also show that the behavior of the quantities $\gamma(\Omega)$ is complicated as the set $\Omega$ increases, in the sense that $\gamma_{r} $ is not a monotone function of $r$, with $\gamma_{r+1} > \gamma_r$ and $\gamma_{r+1} < \gamma_r$ each occurring infinitely often. Nevertheless Diamond and Ford \cite[Theorems 3, 4]{DF08} obtain the following elegant reformulation of the Riemann hypothesis. \begin{theorem}\label{th62} {\em (Diamond and Ford 2008)} The Riemann hypothesis is equivalent to either one of the following assertions. \begin{enumerate} \item The infimum $\Gamma$ of $\gamma({\mathcal P})$ satisfies $$\Gamma = e^{-\gamma}. $$ \item For every finite set $\Omega$ of primes \begin{equation}\label{621} \gamma(\Omega) > e^{-\gamma}. \end{equation} \end{enumerate} \end{theorem} Here (2) implies (1), and (2) says that if the Riemann hypothesis is true, then \eqn{621} holds for all $\Omega$ and the infimum $\Gamma = e^{-\gamma}$ is not attained, while if it does not hold, then by Theorem \ref{th61} (1) the reverse inequality holds for infinitely many $\Omega$, and the infimum is attained.
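The defining limit for $\gamma(\Omega)$ converges at rate $O(1/N)$ and is easy to evaluate numerically. For instance, for $\Omega = \{2\}$ one has $D(\Omega)=\frac12$ and, by \eqn{604aa}, $\gamma(\{2\}) = \gamma(1,2) = \frac12(\gamma + \log 2)$. The Python sketch below (the cutoff $N = 10^6$ is an arbitrary choice for illustration) checks this agreement against the partial sums.

```python
import math

GAMMA = 0.5772156649015329            # Euler's constant

def gamma_omega_of_2(N):
    """Partial-sum approximation to gamma({2}):
    sum over n <= N coprime to 2 of 1/n, minus D({2}) log N, D({2}) = 1/2."""
    s = sum(1.0 / n for n in range(1, N + 1, 2))   # odd n only
    return s - 0.5 * math.log(N)

approx = gamma_omega_of_2(10**6)
exact = 0.5 * (GAMMA + math.log(2))   # gamma(1,2) = (gamma + log 2)/2
print(approx, exact)                  # agree to roughly 1e-6
```

Both values lie just above $e^{-\gamma} \approx 0.5615$, consistent with \eqn{621} for this $\Omega$.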
Theorem~\ref{th62} seems remarkable: on taking ${\mathcal P}=\emptyset$ to be the empty set, it already requires, for the truth of the Riemann hypothesis, that $$ \gamma= 0.57721...> e^{-\gamma}= 0.56145... $$ In consequence the unique real root $x_0\approx 0.56714$ of the equation $x= e^{-x}$ necessarily satisfies $\gamma > x_0 > e^{-\gamma}$. Results on the transcendence of Euler-Lehmer constants $\gamma(h, k)$ and generalized Euler constants $\gamma(\Omega)$ have recently been established, see Section \ref{sec312}. Finally we note that one may define more generally {\em higher order Euler-Lehmer constants}. For $j \ge 1$ Dilcher \cite{Dil92} defines for $k \ge 1,$ and $0< h \le k$, the constants \begin{equation*} \gamma_j(h, k) := \lim_{x \to \infty} \Big( \sum_{\substack{0 < n \le x \\ n \equiv h (\bmod k)}} \frac{(\log n)^j}{n}- \frac{ (\log x)^{j+1}}{k(j+1)} \Big). \end{equation*} Here $j$ is the {\em order}; the formula at $j=0$ recovers the Euler-Lehmer constants, $\gamma_0(h, k) =\gamma(h,k)$, and $\gamma_j(1,1) = \gamma_j$ is the $j$-th Stieltjes constant (see Theorem \ref{th41}). Dilcher relates these constants to values of derivatives of the digamma function at rational points, and also to derivatives of Dirichlet $L$-functions evaluated at $s=1$. \subsection{Euler's constant and extreme values of $\zeta(1+it)$ and $L(1, \chi_{-d})$} \label{sec37a} \setcounter{equation}{0} Some of the deepest questions in number theory concern the distribution of values of the Riemann zeta function and its generalizations, Dirichlet $L$-functions. In this section we consider such value distributions at the special point $s=1$ and on the vertical line $\text{Re}(s) =1$. Here Euler's constant appears in connection with the size of extreme values of $\zeta(1+it)$ as $t \to \infty$.
It also appears in the same guise in connection with extreme values of the Dirichlet $L$-function $L(s, \chi_{-d})$ at the point $s=1$ where $\chi_{-d}$ is the real primitive Dirichlet character associated to the quadratic field ${\mathbb Q}(\sqrt{-d})$, and we let $d \to \infty$. (The quantity $-d$ denotes a fundamental discriminant of an imaginary quadratic field, which is necessarily squarefree away from the prime $2$.) In both cases these connections were first made by J. E. Littlewood. Some of his results are conditional on the Riemann hypothesis or the generalized Riemann hypothesis, while others are unconditional. In these results Euler's constant occurs by way of Mertens's Theorem \ref{th51}. The basic idea, in the context of large values of $|\zeta(1+it)|$, is that large values will occur (only) when a suitable finite truncation of its Euler product representation is large, and this in turn will occur when the phases of the individual terms in the product line up properly. Mertens's theorem is then used in estimating the size of this Euler product. We first consider extreme values of $\zeta(1+it)$. To place results in context, it is known unconditionally that $|\zeta(1+it)| = O(\log |t|)$ and that $\frac{1}{|\zeta(1+it)|} = O(\log |t|)$ as $|t| \to \infty$. In 1926 Littlewood determined, assuming the Riemann hypothesis, the correct maximal order of growth of the Riemann zeta function on the line $\text{Re}(s)=1$, and in 1928 he obtained a corresponding bound for $\frac{1}{\zeta(1+it)}$. \begin{theorem}\label{th391b} {\em (Littlewood 1926, 1928)} Assume the Riemann hypothesis. Then (1) The maximal order of $|\zeta(1+it)|$ is at most $\log\log t$, with \[ \limsup_{ t \to \infty} \frac{ |\zeta(1+it)|}{\log\log t} \le 2 e^{\gamma}.
\] (2) The maximal order of $\frac{1}{|\zeta(1+it)|}$ is at most $\log\log t$, with \[ \limsup_{t \to \infty} \frac{1}{|\zeta(1+it)| \log\log t} \le \frac{2}{\zeta(2)}\, e^{\gamma}. \] \end{theorem} \begin{proof} The bound (1) is \cite[Theorem 7]{Lit26} and the bound (2) is \cite[Theorem 1]{Lit28a}. \end{proof} Littlewood also established unconditionally a lower bound for the quantity in (1) and, conditionally on the Riemann hypothesis, a lower bound for that in (2), both of which matched the bounds above up to a factor of 2. In 1949 a method of Chowla \cite{Cho49} made this lower bound for the quantity in (2) unconditional. These combined results are as follows. \begin{theorem}\label{th392b} {\em (Littlewood 1926, Titchmarsh 1933)} The following bounds hold unconditionally. (1) The maximal order of $|\zeta(1+it)|$ is at least $\log\log t$, with \[ \limsup_{ t \to \infty} \frac{ |\zeta(1+it)|}{\log\log t} \ge e^{\gamma}. \] (2) The maximal order of $\frac{1}{|\zeta(1+it)|}$ is at least $\log\log t$, with \[ \limsup_{t \to \infty} \frac{1}{|\zeta(1+it)| \log\log t} \ge \frac{1}{\zeta(2)} \,e^{\gamma}. \] \end{theorem} \begin{proof} The bound (1) is Theorem 8 of \cite{Lit26}. The lower bound (2) was proved, assuming the Riemann hypothesis, in 1928 by Littlewood \cite{Lit28a}. An unconditional proof was later given by Titchmarsh \cite{Tit33}, see also \cite[Theorem 8.9(B)]{TH86}. \end{proof} Littlewood's proof of (1) used Diophantine approximation properties of the values $\log p$ for primes $p$, namely the fact that they are linearly independent over the rationals. This guarantees that there exist suitable values of $t$ where all the terms in the Euler product $ \prod_{p \le X} (1- \frac{1}{p^{1+it}})^{-1} $ have phases $ t \log p $ near $0 ~(\bmod \,2\pi)$ for $p \le X$.
Littlewood was struck by the fact that the unconditional lower bound for\\ $\limsup_{ t \to \infty} \frac{ |\zeta(1+it)|}{\log\log t}$ in Theorem \ref{th392b}(1) and the conditional upper bound for it given in Theorem \ref{th391b}(1) differ by the simple multiplicative factor $2$. He discusses this fact at length in \cite{Lit28a}, \cite{Lit28b}. The Riemann hypothesis is not sufficient to predict the exact value! It seems that more subtle properties of the Riemann zeta function, perhaps related to Diophantine approximation properties of prime numbers or of the imaginary parts of zeta zeros, will play a role in determining the exact constant. Littlewood favored the lower bound in Theorem \ref{th392b} as the right answer, saying (\cite[p. 359]{Lit28b}): \begin{quote} The results involving $c$ [$=e^{\gamma}$] are evidently final except for a certain factor $2$. I showed also that on a certain further hypothesis (which there is, perhaps, no good reason for believing) this factor $2$ disappears. \end{quote} \noindent In the fullness of time this statement has been elevated to a conjecture attributed to Littlewood. There is now relatively strong evidence favoring the conjecture that the lower bound (1) in Theorem \ref{th392b} should be an equality. In 2006 Granville and Soundararajan \cite{GS06} obtained a bound for the frequency of occurrence of extreme values of $|\zeta(1+it)|$, considering the quantity \[ \Phi_T( \tau) := \frac{1}{T} {\rm meas} \{ t \in [T, 2T]: ~|\zeta(1+it)| > e^{\gamma} \tau \}, \] where ${\rm meas}$ denotes Lebesgue measure. They established a result \cite[Theorem 1]{GS06} showing that for all sufficiently large $T$, uniformly in the range $1 \ll \tau \le \log\log T + 20$, there holds \begin{equation}\label{asympbd} \Phi_T( \tau) = \exp \Big( -\frac{2 e^{\tau -C -1}}{\tau} \Big( 1+ O ( \frac{1}{\sqrt{\tau} } + (\frac{e^{\tau}}{\log T})^{1/2}) \Big) \Big), \end{equation} where $C$ is a positive constant.
An extension of these ideas established the following result (\cite[Theorem 2]{GS06}). \begin{theorem}\label{th393b} {\em (Granville and Soundararajan 2006)} For all sufficiently large $T $, the measure of the set of points $t$ in $[T, 2T]$ having \[ |\zeta(1+it)| \ge e^{\gamma} \big( \log\log T + \log\log\log T - \log\log\log\log T - \log A + O (1)\big) \] is at least $T^{1- \frac{1}{A}},$ uniformly for $A \ge 10$. \end{theorem} They observe that if the estimate \eqn{asympbd} remained valid without restriction on the range of $\tau$, this would correspond to the following stronger assertion. \begin{cj}\label{cj394b} {(Granville and Soundararajan 2006)} There is a constant $C_1$ such that for $T \ge 10$, \[ \max_{ T \le t \le 2T} |\zeta(1+it)| = e^{\gamma} \big( \log\log T + \log\log\log T + C_1 + o(1)\big) \] \end{cj} \noindent This conjecture implies that the lower bound (1) in Theorem \ref{th392b} would be an equality. We next consider the distribution of extreme values of the Dirichlet $L$-functions $L(s, \chi_{-d})$ at $s=1$, for real primitive characters $\chi_{-d}(n) = (\frac{-d}{n})$, where $-d$ is the discriminant of an imaginary quadratic field ${\mathbb Q}(\sqrt{-d})$. The Dirichlet $L$-function \[ L(s, \chi_{-d}) = \sum_{n=1}^{\infty} \Big(\frac{-d}{n}\Big)n^{-s}, \] is a relative of the Riemann zeta function, having an Euler product and a functional equation. Dirichlet's class number formula states that for a fundamental discriminant $-d$ with $d>0,$ $$ L(1, \chi_{-d}) = \frac{2\pi \,h(-d)}{w_{-d}\sqrt{d}}, $$ where $h(-d)$ is the order of the ideal class group in ${\mathbb Q}(\sqrt{-d})$ and $w_{-d}$ is the number of units in ${\mathbb Q}(\sqrt{-d})$ so that $w_{-d} = 2$ for $d>4$, $w_{-3}=6$ and $w_{-4}=4$. Thus the size of $L(1, \chi_{-d})$ encodes information on the size of the class group, and the formula shows that $L(1, \chi_{-d}) >0$. 
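As a concrete check of the class number formula in the smallest case (a numerical sketch only; the truncation point is an arbitrary choice): for $d=4$ the character $\chi_{-4}$ is the nontrivial character mod $4$, and with $h(-4)=1$, $w_{-4}=4$ the formula predicts $L(1, \chi_{-4}) = \frac{2\pi}{4\sqrt{4}} = \frac{\pi}{4}$, the value of Leibniz's series $1 - \frac13 + \frac15 - \cdots$.

```python
import math

def chi_minus4(n):
    """The real primitive character mod 4: 0 on even n, (-1)^((n-1)/2) on odd n."""
    if n % 2 == 0:
        return 0
    return 1 if n % 4 == 1 else -1

def L1_truncated(N):
    """Partial sum of the Dirichlet series for L(1, chi_{-4})."""
    return sum(chi_minus4(n) / n for n in range(1, N + 1))

h, w, d = 1, 4, 4                      # class number, unit count, |discriminant|
predicted = 2 * math.pi * h / (w * math.sqrt(d))   # class number formula: pi/4
print(L1_truncated(10**6), predicted)  # both approximately 0.78540
```

The alternating series converges slowly, with error of order $1/N$, so a large truncation point is needed for several digits of agreement.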
\begin{theorem}\label{th395b} {\em (Littlewood 1928)} Assume the Generalized Riemann hypothesis for all real primitive characters $L(s, \chi_{-d})$ where $-d$ is the discriminant of an imaginary quadratic field ${\mathbb Q}(\sqrt{-d})$. Then: (1) The maximal order of $L(1, \chi_{-d})$ is at most $\log\log d$, with \[ \limsup_{ d \to \infty} \frac{ L(1, \chi_{-d})}{\log\log d} \le 2 e^{\gamma}. \] (2) The maximal order of $\frac{1}{L(1, \chi_{-d})}$ is at most $\log\log d$, with \[ \limsup_{d \to \infty}\frac{1}{ L(1, \chi_{-d}) \log\log d} \le \frac{2}{\zeta(2)}\, e^{\gamma}. \] \end{theorem} \begin{proof} The bounds (1) and (2) are the content of \cite[Theorem 1]{Lit28b}. \end{proof} There are corresponding unconditional lower bounds that differ from these by a factor of $2$, as noted by Littlewood in 1928 for the bound (1), and completed by a result of Chowla \cite{Cho49} for the bound (2). \begin{theorem}\label{th396b} {\em (Littlewood 1928, Chowla 1949)} The following bounds hold unconditionally as $d \to \infty$ with $-d$ a fundamental discriminant. (1) The maximal order of $L(1, \chi_{-d})$ is at least $\log\log d$, with \[ \limsup_{ d \to \infty} \frac{ L(1, \chi_{-d})}{\log\log d} \ge e^{\gamma}. \] (2) The maximal order of $\frac{1}{L(1, \chi_{-d})}$ is at least ${\log\log d}$, with \[ \limsup_{d \to \infty}\frac{1}{L(1, \chi_{-d}) \log\log d} \ge \frac{1}{\zeta(2)}\, e^{\gamma}. \] \end{theorem} \begin{proof} The bound (1) is due to Littlewood \cite{Lit28b}. The bound (2) is due to Chowla \cite[Theorem 2]{Cho49}. \end{proof} There is again a factor of $2$ difference between the unconditional lower bound and the conditional upper bound. In 1999 Montgomery and Vaughan \cite{MV99} formulated a model for the general distribution of sizes of Dirichlet character values at $s=1$, and based on it they advanced very precise conjectures in favor of the lower bounds being the correct answer, as follows. 
\begin{cj}\label{cj397b} { (Montgomery and Vaughan 1999)} (1) For each $\epsilon >0$, for all $D \ge D(\epsilon)$ there holds \[ \max_{d \le D} L(1, \chi_{-d}) \le e^{\gamma} \log\log D + (1+ \epsilon) \log\log \log D, \] (2) For each $\epsilon >0$, for all $D \ge D(\epsilon)$ there holds \[ \max_{d \le D} \frac{1}{ L(1, \chi_{-d})} \le \frac{1}{ \zeta(2)} e^{\gamma} \log\log D + O\Big(\frac{1}{(\log\log D) (\log\log\log D)}\Big). \] \end{cj} In 2003 Granville and Soundararajan \cite{GS03} obtained detailed probabilistic estimates for the number of characters having extreme values $e^{\gamma} \tau$. Their results imply unconditionally that there are infinitely many $d$ with \[ L(1, \chi_{-d}) \ge e^{\gamma}\big( \log\log d + \log\log\log d - \log\log\log \log d -10\big). \] In their later paper \cite[Theorem 3]{GS06} they sketched a proof that for any fixed $A \ge 10$ for all sufficiently large primes $q$ there are at least $q^{1- \frac{1}{A}}$ Dirichlet characters $\chi~(\bmod \, q)$ such that \[ |L(1, \chi)| \ge e^{\gamma}\big( \log\log d + \log\log\log d - \log\log\log \log d -\log A+O(1) \big). \] Finally we remark on a related problem, concerning the distribution of the phase $\text{arg} (\zeta (1+it))$. In 1972 Pavlov and Faddeev \cite{PF72} observed that the phase of $\zeta(1+ it)$ appears in the scattering matrix data for the hyperbolic Laplacian operator acting on the modular surface $X(1) = PSL(2, {\mathbb Z})\backslash {\mathbb H},$ where ${\mathbb H}=\{ z=x+iy : y= Im(z)>0\}$ denotes the upper half plane. Then in 1980 Lax and Phillips \cite[Sect. Theorem 7.19]{LP80} treated this case as an example of their version of scattering theory for automorphic functions (given in \cite{LP76}). Recently Y. Lamzouri \cite{Lam08} obtained interesting results on the joint distribution of the modulus and phase $(|\zeta(1+it)|, \arg(\zeta(1+it)))$. 
\subsection{Euler's constant and random permutations: cycle structure}\label{sec38} \setcounter{equation}{0} Let $S_N$ denote the symmetric group of all permutations on $[1, N] := \{ 1, 2, 3, \cdots, N\}.$ By a random permutation we mean an element $\sigma \in S_N$ drawn with the uniform distribution, picked with probability $\frac{1}{N!}$. We view $\sigma$ as a product of distinct cycles, and let $c_j(\sigma)$ count the number of cycles of length $j$ in $\sigma$. Three interesting statistics on the cycle structure of a permutation $\sigma \in S_N$ are its total number of cycles $$ n(\sigma) := \sum_{j=1}^N c_j, $$ the length of its longest cycle $$ M(\sigma) := \max\{j: \,c_j >0\}, $$ and the length of its shortest cycle $$ m(\sigma) := \min\{ j: \,c_j >0\}. $$ Euler's constant $\gamma$ appears in connection with the distributions of each of these statistics, viewed as random variables on $S_N$. In 1939 J. Touchard \cite[p. 247]{Tou39} expressed the distribution of cycle lengths using exponential generating function methods. In 1944 Goncharov \cite[Trans. pp. 31-34]{Gon44} (announced 1942 \cite{Gon42}) also gave generating functions, derived in a probabilistic context. He showed that if one lets $c(N,k)$ denote the number of permutations in $S_N$ having exactly $k$ cycles, then one has the generating function \begin{equation}\label{CGF} \sum_{k=1}^N c(N,k) x^k = x(x+1)(x+2) \cdots (x+N-1). \end{equation} Goncharov computed the mean and variance of the number of cycles $n(\sigma)$, as follows. \begin{theorem}~\label{th380} {\rm (Touchard 1939, Goncharov 1944)} Draw a random permutation $\sigma$ from the symmetric group $S_N$ on $N$ elements with the uniform distribution. (1) The expected value of the number of cycles in $\sigma$ is: $$ E[n(\sigma)] = H_N = \sum_{j=1}^N \frac{1}{j}. $$ In particular one has the estimate $$ E[n(\sigma)] = \log N + \gamma +O(\frac{1}{N}).
$$ (2) The variance of the number of cycles in $\sigma$ is: $$ Var[ n (\sigma)] := E[ (n(\sigma) - E[n(\sigma)])^2] =H_{N} - H_{N, 2} = \sum_{j=1}^N \frac{1}{j} - \sum_{j=1}^N \frac{1}{j^2}. $$ In particular one has the estimate $$ \sqrt{Var[ n (\sigma)]}= \sqrt{ \log N} + \Big(\frac{\gamma}{2} -\frac{ \pi^2}{12}\Big) \frac{1}{\sqrt{\log N}} + O((\log N)^{-\frac{3}{2}}). $$ \end{theorem} \begin{proof} In 1939 Touchard \cite[p. 291]{Tou39} obtained the formula $E[n(\sigma)] = H_N$. In 1944 Goncharov \cite[Sect. 15]{Gon44} derived both (1) and (2), using \eqn{CGF}. A similar proof was given in 1953 by Greenwood \cite[p. 404]{Gre53}. Note also that for all $J \ge 1$ the $J$-th moment of $n(\sigma)$ is given by $$ E[ n(\sigma)^J] = \frac{1}{N!}(x \frac{d}{dx})^J M_N(x) \, |_{x=1}, $$ where $M_N(x) := x(x+1)(x+2) \cdots (x+N-1)$ is the polynomial appearing in \eqn{CGF}. It follows using \eqn{CGF} that the $J$-th moment can be expressed as a polynomial in the $m$-harmonic numbers $H_{N, m}$ for $1\le m \le J.$ Goncharov \cite[Sect. 16]{Gon44} used the moments to show furthermore that the normalized random variables $ \bar{n}(\sigma) := \frac{n(\sigma)- E[n(\sigma)]}{\sqrt{Var[n(\sigma)]}}$ satisfy a central limit theorem as $N \to \infty$, i.e. they converge in distribution to the standard normal distribution. \end{proof} Goncharov \cite{Gon44} also obtained information on the distribution of the maximum cycle $M(\sigma)$. Define the scaled random variable $L_1^{(N)} :=\frac{1}{N} M(\sigma)$. 
Then, as $N \to \infty$, Goncharov showed these random variables have a limiting distribution, with the convergence in distribution $ L_1^{(N)} \rightarrow_{d} ~~~{\mathbb L}_1, $ in which ${\mathbb L}_1$ is the probability distribution supported on $ [0,1]$ whose cumulative distribution function $$ F_1(\alpha):= \mbox{Prob}[ 0 \le {\mathbb L}_1 \le \alpha] $$ is given for $0 \le \alpha \le 1$ by \begin{equation}\label{gonch} F_1(\alpha) = 1+ \sum_{k=1}^{\lfloor \frac{1}{\alpha} \rfloor} \frac{(-1)^{k}}{k!} \int_{{t_1+ \cdots + t_k \le \frac{1}{\alpha}}\atop{t_1, \cdots, t_k \ge 1}} \frac{dt_1}{t_1} \frac{dt_2}{t_2} \cdots \frac{dt_k}{t_k}, \end{equation} with $F_1(\alpha) =1$ for $ \alpha \ge 1$. Much later Knuth and Trabb-Pardo \cite[Sec. 10]{KTP76} in 1976 showed that this distribution is connected to the Dickman function $\rho(u)$ in Section \ref{sec34} by \begin{equation}\label{gonch1} F_1(\alpha) = \rho(\frac{1}{\alpha}), ~~~ 0 \le \alpha \le 1, \end{equation} and this yields the previously given formula \eqn{526e}. From this connection we deduce that this distribution has a continuous density $f_1(\alpha)$ given by $$ f_1(\alpha) = -\rho'(\frac{1}{\alpha}) \frac{1}{\alpha^2} = \rho(\frac{1- \alpha}{\alpha})\frac{1}{\alpha},$$ with the last equality deduced using the differential-difference equation \eqn{524d} for $\rho(u)$. In 1966 Shepp and Lloyd \cite{SL66} obtained detailed information on the distribution of both $m(\sigma)$ and $M(\sigma)$ and also of the $r$-th longest and $r$-th shortest cycles. Euler's constant appears in the distributions of longest and shortest cycles, as follows. \begin{theorem}~\label{th92A} {\em (Shepp and Lloyd 1966)} Pick a random permutation $\sigma$ on $N$ elements with the uniform distribution. (1) Let $M_r(\sigma)$ denote the length of the $r$-th longest cycle under iteration of the permutation $\sigma \in S_N$. 
Then the $k$-th moment $E[M_r(\sigma)^k]$ satisfies, for $k \ge 1$ and $r \ge 1$, as $N \to \infty$, \begin{equation*} \lim_{N \to \infty} \frac{E[M_r(\sigma)^k]}{N^k} = G_{r, k}\, , \end{equation*} for positive limiting constants $G_{r,k}$. These constants are explicitly given by \begin{equation}\label{A913} G_{r,k} = \int_{0}^{\infty}\frac{ x^{k-1}}{k!} \frac{Ei(-x)^{r-1}}{(r-1)!} e^{ -Ei(-x)} e^{-x} dx, \end{equation} in which $Ei(-x)= \int_{x}^{\infty} \frac{e^{-t}}{t} dt$ is the exponential integral. They satisfy, for fixed $k$, as $r \to \infty$, \begin{equation*} \lim_{r \to \infty} (k+1)^r G_{r,k} = \frac{1}{k!} \, e^{-k \gamma}, \end{equation*} in which $\gamma$ is Euler's constant. (2) Let $m_r(\sigma)$ denote the length of the $r$-th shortest cycle of the permutation $\sigma \in S_N$. Then the expected value $E[m_r(\sigma)]$ satisfies, as $N \to \infty$, \begin{equation}\label{A911} \lim_{N \to \infty} \frac{E[m_r(\sigma)]}{(\log N)^r} = \frac{1}{r!} e^{-\gamma}. \end{equation} Furthermore the $k$-th moment $E[m_r(\sigma)^k]$ for $k \ge 2$ and $r \ge 1$ satisfies, as $N \to \infty$, \begin{equation}\label{A911b} \lim_{N \to \infty} \frac{E[m_r(\sigma)^k]}{N^{k-1} (\log N)^{r-1}} = \frac{1}{(r-1)!} \int_{0}^{\infty} \frac{x^{k-1}}{(k-1)!} e^{Ei(-x)} e^{-x} dx. \end{equation} \end{theorem} \begin{proof} The exponential integral $Ei(x)$, defined for $x<0$, is given in \eqn{561b}. Here (1) appears in Shepp and Lloyd \cite[eqns. (13) and (14) ff., p. 347]{SL66}. Here (2) appears as \cite[eqn. (22), p. 352]{SL66}. \end{proof} Shepp and Lloyd \cite{SL66} also obtained the limiting distribution of the $r$-th longest cycle as $N \to \infty$. 
In terms of the scaled variables $L_r^{(N)}:=\frac{1}{N}M_r(\sigma)$, they deduced the convergence in distribution $$ L_r^{(N)} \longrightarrow_{d} {\mathbb L}_r, $$ in which ${\mathbb L}_r$ is a distribution supported on $[0, \frac{1}{r}] $ whose cumulative distribution function $$ F_r(\alpha):= \mbox{Prob}[ 0 \le {\mathbb L}_r \le \alpha] , ~~~~\quad \quad 0 \le \alpha \le \frac{1}{r}, $$ is given by $$ F_r(\alpha) =1+ \sum_{k=r}^{\lfloor \frac{1}{\alpha} \rfloor} \frac{(-1)^{k-r+1}}{(r-1)! (k-r)! k} \int_{{t_1+ \cdots + t_k \le \frac{1}{\alpha}}\atop{t_1, \cdots, t_k \ge 1}} \frac{dt_1}{t_1} \frac{dt_2}{t_2} \cdots \frac{dt_k}{t_k}, $$ and we set $F_{r}(\alpha) =1$ for $\frac{1}{r} \le \alpha < \infty$. This extends to general $r$ the case $r=1$ treated by Goncharov \cite{Gon44}. In 1976 Knuth and Trabb-Pardo \cite[Sec. 4]{KTP76} defined $\rho_r(u)$ by $\rho_r(u) :=F_r(\frac{1}{u})$, so that $\rho_1(u) = \rho(u)$ by \eqn{gonch1}. The functions $\rho_r(u)$ for $r \ge 1$ are uniquely determined by the conditions \begin{enumerate} \item (Initial condition) For $0 \le u \le 1$, it satisfies \begin{equation*} \rho_r(u) =1. \end{equation*} \item (Differential-difference equation) For $u > 1$, \begin{equation*} u \rho_r^{'}(u) = - \rho_r(u-1) + \rho_{r-1}(u-1), \end{equation*} with the convention $\rho_0 (u) \equiv 0.$ \end{enumerate} Knuth and Trabb-Pardo formulated these conditions in the integral equation form $$ \rho_r(u) = 1 - \int_{1}^{u} ( \rho_r(t-1) - \rho_{r-1}(t-1))\frac{dt}{t}. $$ One may directly check that these equations imply $\rho_r(u) =1$ for $0 \le u \le r$. Associated to the longest cycle distribution is another interesting constant $$ \lambda := G_{1,1} = \lim_{N \to \infty} \frac{E[M(\sigma)]}{N}. $$ The formula \eqn{A913} gives \begin{equation}\label{A917} \lambda = \int_{0}^{\infty} e^{ -x - Ei(-x)} dx = 0.62432~99885 \dots \end{equation} This constant $\lambda$ is now named the {\em Golomb-Dickman constant} in Finch \cite[Sec. 
5.4]{Fin03}, for the following reasons. It was first encountered in the 1959 work of Golomb, Welch and Goldstein \cite{GWG59}, in a study of the asymptotics of statistics associated to shift register sequences. Their paper expressed this constant by an integral involving a solution to a differential-difference equation, described later in Golomb \cite[p. 91, equation (33)]{Gol67}. In 1964 Golomb \cite{Gol64} asked for a closed form for this constant, and the work of Shepp and Lloyd answered it with \eqn{A917} above. It was evaluated to $53$ places by Mitchell \cite{Mit68} in 1968. In 1976 Knuth and Trabb-Pardo \cite[Sect. 10]{KTP76} showed that \begin{equation}\label{A918a} \lambda = 1- \int_{1}^{\infty} \frac{\rho(u)}{u^2} du, \end{equation} where $\rho(x)$ is the Dickman function discussed in Section \ref{sec34}. This associates the name of Dickman with this constant. In fact the constant already appears in de Bruijn's 1951 work \cite[(5.2)]{deB51b}, where he showed \begin{equation*} \lambda = \int_{0}^{\infty} \frac{\rho(u)}{(u+1)^2} du, \end{equation*} as noted by Wheeler \cite[p. 516]{Whe90}. According to Golomb \cite[p. 192]{Gol67} another integral formula for this constant, found by Nathan Fine, is \begin{equation}\label{918c} \lambda = \int_{0}^1 e^{ Li(x)} dx, \end{equation} in which $Li(x) := \int_{0}^x \frac{dt}{\log t}$ for $0 \le x <1.$ In 1990 Wheeler \cite[pp. 516--517]{Whe90} established the formula \begin{equation*} \lambda = \int_{0}^{\infty} \frac{\rho(u)}{u+2} du, \end{equation*} noted empirically earlier by Knuth and Trabb-Pardo, and also the formula $$ \lambda = e^{\gamma} \int_{0}^{\infty} e^{-Ein(t) -2t}dt, $$ in which $Ein(t)$ is given in \eqn{cexpi}. The Golomb-Dickman constant $\lambda$ is not known to be related to either Euler's constant $\gamma$ or the Euler-Gompertz constant $\delta$, and it is not known whether it is irrational or transcendental. 
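The Shepp-Lloyd integral \eqn{A917} is straightforward to evaluate numerically. The following Python sketch (standard library only; the function names are ours) computes $E_1(x) = \int_x^\infty e^{-t}t^{-1}dt$, the quantity written $Ei(-x)$ above, by the standard power-series/continued-fraction split, and then applies Simpson's rule to the outer integral:

```python
import math

EULER_GAMMA = 0.5772156649015329

def exp_int_e1(x):
    """E1(x) = integral_x^infinity e^{-t}/t dt (written Ei(-x) in the text)."""
    if x <= 1.0:
        # power series: E1(x) = -gamma - log x + sum_{k>=1} (-1)^{k+1} x^k / (k * k!)
        s = -EULER_GAMMA - math.log(x)
        term = 1.0
        for k in range(1, 40):
            term *= -x / k          # term = (-x)^k / k!
            s -= term / k
        return s
    # continued fraction (modified Lentz), accurate for x > 1
    b, c, d = x + 1.0, 1e300, 1.0 / (x + 1.0)
    h = d
    for k in range(1, 200):
        a = -k * k
        b += 2.0
        d = 1.0 / (a * d + b)
        c = b + a / c
        delta = c * d
        h *= delta
        if abs(delta - 1.0) < 1e-15:
            break
    return h * math.exp(-x)

def golomb_dickman(upper=40.0, n=8000):
    """Simpson's rule for lambda = integral_0^infinity exp(-x - E1(x)) dx."""
    f = lambda x: math.exp(-x - exp_int_e1(x)) if x > 0 else 0.0
    h = upper / n
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

print(golomb_dickman())   # close to 0.62432 99885...
```

The integrand vanishes at $x=0$ (since $e^{-E_1(x)} \sim e^{\gamma}x$ there) and decays like $e^{-x}$, so truncating at $x=40$ loses a negligible tail.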
The Golomb-Dickman constant $\lambda$ appears as a basic statistic in the limit distribution ${\mathbb L}_1$ for the scaled longest cycle, whose expected value is given by $$ E [ {\mathbb L}_1] = \int_{0}^1 \alpha \,d F_1(\alpha) = - \int_{0}^1\frac{1}{\alpha} \rho^{'} (\frac{1}{\alpha}) d\alpha. $$ Letting $x= \frac{1}{\alpha}$ we obtain \begin{equation*} E[{\mathbb L}_1] = - \int_{1}^{\infty} \frac{\rho'(x)}{x} dx = \int_{1}^{\infty} \rho(x-1) \frac{dx}{x^2} = \lambda, \end{equation*} see \cite[(9.2)]{KTP76}. In 1996 Gourdon \cite[Chap. VII, Th\'{e}or\`{e}me 2]{Gou96} determined a complete asymptotic expansion of the expected value $E[M(\sigma)]$ as $N \to \infty$ in powers of $\frac{1}{N}$, in which both the Golomb-Dickman constant and Euler's constant appear, with the initial terms being \begin{equation*} E[ M(\sigma)]= \lambda N + \frac{1}{2} \lambda - \frac{e^{\gamma}}{24} \frac{1}{N} + O\left( \frac{1}{N^2}\right). \end{equation*} The omitted higher order terms for $k \ge 2$ in Gourdon's asymptotic expansion are of the form $\frac{P_k(N)}{N^k}$, in which each $P_k(N)$ is an oscillatory function of $N$ which is periodic with an integer period $p_k$ that grows with $k$ as $k \to \infty$. A result in a different direction relates Euler's constant to the probability of all cycles having distinct lengths as $N \to \infty$. This result follows from work of D. H. Lehmer \cite{Leh72}. \begin{theorem}~\label{th380b} {\rm (Lehmer 1972)} Draw a random permutation $\sigma$ on $N$ elements with the uniform distribution. Let $P_{d}(N)$ denote the probability that the cycles of $\sigma$ have distinct lengths. Then $$ \lim_{N \to \infty} P_d(N) = e^{-\gamma}. $$ \end{theorem} \begin{proof} The cycle lengths of a permutation $\sigma$ form a partition ${\bf \lambda}= (1^{c_1}, 2^{c_2}, \cdots, N^{c_N})$ of $N$ (written ${\bf \lambda} \vdash N$), having $n(\lambda)= c_1+c_2 +\cdots+ c_N$ cycles with cycle length $j$ occurring $c_j$ times. 
The number of permutations $N({\bf \lambda})$ giving rise to a given partition $\lambda$ is $$N({\bf \lambda}) = \frac{N!}{c_1! c_2!\cdots c_N! 1^{c_1} 2^{c_2} \cdots N^{c_N}},$$ and by definition we have $$ \frac{1}{N!} \sum_{{\bf \lambda} \vdash N} N({\bf \lambda})= 1. $$ In the case that all cycles have distinct lengths, all $c_i=0$ or $1$, and the formula for $N(\lambda)$ becomes $$N({\bf \lambda}) = \frac{N!}{a_1 a_2 \cdots a_n},$$ in which $a_1 < a_2 < \cdots < a_n$ with $n=n(\lambda)$ are the lengths of the cycles. (The division by $a_i$ reflects the fact that a cyclic shift of a cycle is the same cycle.) Lehmer \cite{Leh72} assigned weights $(1^{c_1} 2^{c_2} \cdots N^{c_N})^{-1}$ to each partition, and counted different sets of such weighted partitions. His Theorem 2 counted partitions with distinct summands, in his notation $$ W_N^{\ast} := \sum_{{\bf \lambda} \vdash N}{}^{'} \frac{1}{a_1 a_2 \cdots a_n}, $$ with the prime indicating that the sum runs over partitions having distinct parts, and showed that $W_N^{\ast} \to e^{-\gamma}$ as $N \to \infty$. (Lehmer's weights agree with the weights assigned to partitions from the uniform distribution on $S_N$ only for those partitions having distinct parts.) \end{proof} \subsection{Euler's constant and random permutations: shortest cycle}\label{sec38a} \setcounter{equation}{0} There are striking parallels between the distribution of cycles of random permutations of $S_N$ and the distributions of factorizations of random numbers in an interval $[1,n]$, particularly concerning the number of these having factorizations of restricted types, allowing either factorizations with only small factors, treated in Section \ref{sec34}, or having only large prime factors, treated in Section \ref{sec35}, where we take $N$ to be on the order of $\log n$. 
This relation was first observed by Knuth and Trabb-Pardo \cite{KTP76}, whose 1976 work (already mentioned in Sections \ref{sec34} and \ref{sec38}) was done to analyze the performance of an algorithm for factoring integers by trial division. They observed in \cite[Sect. 10, p. 344]{KTP76}: \begin{quote} Therefore, if we are factoring the digits of a random $m$-digit number, the distribution of the number of digits in its prime factors is {\em approximately the same as the distribution of the cycle lengths in a random permutation} on $m$ elements! (Note that there are approximately $\ln m$ factors, and $\ln m$ cycles.) \end{quote} They uncovered this connection after noticing that the Shepp-Lloyd formula for Golomb's constant $\lambda$ in \eqn{A917}, as evaluated by Mitchell \cite{Mit68}, agreed to $10$ places with their calculation of Dickman's constant, as defined by \eqn{A918a}. A comparison of the $k$-th longest cycle formulas of Shepp and Lloyd in Theorem \ref{th92A} for cycles of length $\alpha N$ then revealed matching relatives $\rho_{k}(x)$ of the Dickman function (defined in Section \ref{sec34}). These parallels extend to the appearance of the Buchstab function in the distribution of random permutations having no short cycle, as we describe below. Concerning the shortest cycle, recall first that a special case of Theorem \ref{th92A}(2) shows that the expected length of the shortest cycle is $$ E[m(\sigma)] = e^{-\gamma} \log N (1 +o(1)) ~~\mbox{as}~~ N \to \infty. $$ It is possible to get exact combinatorial formulas for the probability $$ P(N, m) := \mbox{\rm Prob}[ \, m(\sigma) \ge m :\, \sigma \in S_N] $$ that a permutation has no cycle of length shorter than $m$. There are several limiting behaviors of this probability as $N \to \infty$, depending on the way $m=m(N)$ grows as $N \to \infty$. There are three regimes: first, where $m$ is constant; secondly, where $m(N) = \alpha N$ grows proportionally to $N$, with $0< \alpha \le 1$ (i.e. 
only long cycles occur) and thirdly, the intermediate regime, where $m(N) \to \infty$ but $\frac{m(N)}{N} \to 0$ as $N \to \infty$. First, suppose that $m(N)=m$ is constant. This case has a long history. The {\em derangement problem} ({\em Probl\`{e}me des rencontres}) concerns the probability that a permutation has no fixed point, which is the case $m=2$. It was raised by R\'{e}mond de Montmort \cite{Mon1708} in 1708 and solved by Nicholas Bernoulli in 1713, see \cite[pp. 301--303]{Mon1713}. The derangement problem was also solved by Euler in 1753 \cite[E201]{E201}, who was unaware of the earlier work. The well known answer is that \begin{equation*} P (N, 2) = \sum_{j=0}^N \frac{ (-1)^j}{j!}, \end{equation*} and this exact formula yields the result \begin{equation*} \lim_{N \to \infty} P(N, 2) = \frac{1}{e}. \end{equation*} In 1952 Gruder \cite{Gru52} gave a generalization for permutations having no cycle shorter than a fixed $m$, as follows. \begin{theorem}~\label{th392a} {\rm (Gruder 1952)} For fixed $m \ge 1$ there holds \begin{equation}\label{391fm} \lim_{N \to \infty} \mbox{\rm Prob}[\,m(\sigma) \,\ge m: \, \sigma \in S_N ] = e^{- H_{m-1}}, \end{equation} in which $H_m$ denotes the $m$-th harmonic number. \end{theorem} \begin{proof} Let $P_N(m, k)$ count the number of permutations in $S_N$ having exactly $k$ cycles, each of length at least $m$, and set $P_N(m) = \sum_{k=1}^N P_N(m, k),$ so that $P(N, m) = \frac{P_N(m)}{N!}.$ These are given in the exponential generating function \begin{eqnarray*} \sum_{N=0}^{\infty} \frac{x^N}{N!} \Big(\sum_{k=1}^N P_N(m, k) u^k\Big) &=& \exp \Big[ u (\log \frac{1}{1-x} - \sum_{j=1}^{m-1} \frac{x^j}{j}) \Big]\\ & = & \frac{ \exp\Big[ -u\Big( x+ \frac{x^2}{2} + \cdots + \frac{x^{m-1}}{m-1}\Big) \Big]}{ (1-x)^u}. \end{eqnarray*} Gruder \cite[Sect. 8, (85)]{Gru52} derives from the case $u=1$, $$ \sum_{N=0}^{\infty} P_N(m) \frac{x^N}{N!} = \frac{ \exp \left( -x- \frac{x^2}{2} - \cdots - \frac{x^{m-1}}{m-1}\right)}{1-x}. 
$$ Using this fact Gruder \cite[Sect. 9, eqn. (94)]{Gru52} derives $\lim_{N \to \infty} \frac{N!}{P_N(m)} = e^{H_{m-1}}$, giving the result. \end{proof} Gruder also observes that one obtains Euler's constant from these values via the scaling limit $$ \lim_{m \to \infty} \lim_{N \to \infty} m P(N, m) = e^{-\gamma}. $$ An asymptotic expansion concerning convergence to the limit \eqn{391fm} for constant $m$ is now available in general circumstances. Panario and Richmond \cite[Theorem 3, eq. (9)]{PR01} give such an asymptotic expansion for the $r$-th smallest cycle being $\ge m$ for a large class of probability models. Secondly, for the intermediate range where $m=m(N) \to \infty$ with $\frac{m}{N} \to 0$ as $N \to \infty$, there is the following striking universal limit estimate, in which $e^{-\gamma}$ appears. \begin{theorem}~\label{th392} {\rm (Panario and Richmond 2001)} Consider permutations on $N$ letters whose shortest cycle satisfies $m(\sigma) \ge m$. Suppose that $m=m(N)$ depends on $N$ in such a way that $m(N) \to \infty$ and $\frac{m(N)}{N} \to 0$ both hold as $N \to \infty$. Under these conditions we have $$ \mbox{\rm Prob}[ \,m(\sigma) \ge m(N): \, \sigma \in S_N \, ] \sim \frac{e^{-\gamma}}{m}, \quad\quad \mbox{as}~~~ N \to \infty. $$ \end{theorem} \begin{proof} This result is deducible from a general asymptotic result of Panario and Richmond \cite[Theorem 3, eq. (11)]{PR01}, which applies to the distribution of the $r$-th smallest cycle of a logarithmic combinatorial structure. \end{proof} Thirdly, we consider the large range, concerning those permutations having shortest cycle of length at least a constant fraction $\alpha N$ of the size of the permutation. Here the resulting scaled density involves the Buchstab function $\omega(u)$ treated in Section \ref{sec35}, where $\alpha= \frac{1}{u}$ with $u >1$. \begin{theorem}~\label{th393} {\rm (Panario and Richmond 2001)} Consider permutations on $N$ letters whose shortest cycle satisfies $m(\sigma) \ge m$. 
Suppose that $m=m(N)$ depends on $N$ in such a way that $\frac{m(N)}{N} \to \alpha$ holds as $N \to \infty$, with $0 < \alpha \le 1.$ Under these conditions we have, for fixed $0 < \alpha \le 1$, that \begin{equation}\label{381aa} \mbox{\rm Prob}[ \, m(\sigma) > \alpha N: \,\sigma \in S_N ] \sim \omega(1/\alpha)\frac{1}{\alpha N}, \, \quad\quad \mbox{as} ~~~N \to \infty, \end{equation} where $\omega(u)$ denotes the Buchstab function. \end{theorem} \begin{proof} This result follows from a general result of Panario and Richmond \cite[Theorem 3, eq. (10)]{PR01}, which gives an asymptotic expansion for the $r$-th smallest cycle. The proof for this case uses in part methods of Flajolet and Odlyzko \cite{FO90b}. \end{proof} The matching of the large range estimate \eqn{381aa} as $\alpha \to 0$ with the intermediate range estimate follows from the limiting behavior of the Buchstab function $\omega(u) \to e^{-\gamma}$ as $u \to \infty.$ One may compare Theorem \ref{th392} with the sieving result in Theorem \ref{buchstab}. They are quite parallel, and in both cases the factor $e^{-\gamma}$ arises from the limiting asymptotic behavior of the Buchstab function. The appearance of the Buchstab function in the sieving models in Section \ref{sec35} and the permutation cycle structure above is precisely accounted for in stochastic models developed by Arratia, Barbour, and Tavar\'{e} \cite{ABT97}, \cite{ABT99} in the late 1990's, which they termed Poisson-Dirichlet processes (and which we do not define here). They showed (\cite{ABT97}, \cite[Sec. 1.1, 1.2]{ABT03}) that (logarithmically scaled) factorizations of random integers, and (scaled) cycle structures of random permutations as $N \to \infty$ lead to identical limiting processes. To describe the scalings, one considers an ordered prime factorization of a random integer $m = \prod_{i=1}^k p_i$, with $2 \le p_1 \le p_2 \le \cdots \le p_k$. 
Here $k := \Omega(m)$ counts the number of prime factors of $m$ with multiplicity, and one defines the logarithmically scaled quantities $$ \bar{a}_i^{*}: = \frac{\log p_i}{\log m}, ~~~ 1 \le i \le k. $$ The random quantities drawn are $(\bar{a}_1^{*}, \bar{a}_2^{*}, ..., \bar{a}_k^{*})$, where $k$ itself is also a random variable, and these quantities sum to $1$. For a random permutation $\sigma \in S_N$, let $a_1 \le a_2 \le \cdots \le a_{n(\sigma)}$ represent the lengths of the cycles arranged in increasing order. Then associate to $\sigma$ the normalized cycle lengths $$ \bar{a_i} := \frac{a_i}{N}, ~~~1 \le i \le n. $$ Here $n=n(\sigma)$ is a random variable, and these quantities also sum to $1$. Arratia, Barbour and Tavar\'{e} \cite{ABT99} showed that in both cases, as $N \to \infty$ (resp. $n \to \infty$), these random variables converge to a limiting stochastic process, a Poisson-Dirichlet process of parameter $1$. They furthermore noted (\cite[p. 34]{ABT03}) that both models required exactly the same normalizing constant: $e^{-\gamma}$. The existence of a normalizing constant for Poisson-Dirichlet processes was noted in 1977 by Vershik and Shmidt \cite{VS77}, who conjectured it to be $e^{-\gamma}$, and this was proved in 1982 by Ignatov \cite{Ign82}. The model for random cycles applies as well to random factorization of polynomials of degree $N$ over the finite field $GF(q)$, as $q \to \infty$, see Arratia, Barbour and Tavar\'{e} \cite{ABT93}. \subsection{Euler's constant and random finite functions }\label{sec39} \setcounter{equation}{0} Euler's constant also appears in connection with the distribution of cycles in a random finite function $F: [1, N] \to [1, N]$. Iterating a finite function on a given initial seed produces an orbit having a preperiodic part, followed by arrival at a cycle of the function. Thus not every element of $[1, N]$ belongs to a cycle. 
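The tail-plus-cycle orbit structure just described can be made concrete with a short Python sketch (standard library only; the helper name \texttt{cycle\_parameters} is ours). Floyd's tortoise-and-hare method recovers the preperiodic tail length and the cycle length of the orbit of a seed under a finite function:

```python
import random

def cycle_parameters(f, x0):
    """Return (tail, period): the orbit x0, f(x0), f(f(x0)), ... enters a cycle
    of length `period` after a preperiodic tail of length `tail` (Floyd's method)."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # phase 2: find the tail length mu
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # phase 3: find the cycle length lam
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

# a random finite function on {0, 1, ..., N-1}, stored as a lookup table
N = 1000
random.seed(1)
table = [random.randrange(N) for _ in range(N)]
F = lambda x: table[x]
print(cycle_parameters(F, 0))
```

Since the domain is finite, every orbit is eventually periodic, so all three loops terminate; this is the same structural fact exploited by Pollard-style cycle-finding methods.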
In 1968 Purdom and Williams \cite{PW68} established results for random finite functions that are analogous to results for random permutations given in Section \ref{sec38}. They studied the length of the longest and shortest cycles for a random function; note however that the expected length of the longest cycle grows proportionally to $\sqrt{N},$ rather than proportionally to $N$ as in the case of a random permutation. \begin{theorem}~\label{th92B} {\rm (Purdom and Williams 1968)} Pick a random function $F: [1, N] \to [1, N]$ with the uniform distribution over all $N^N$ such functions. (1) The length $M(F)$ of the longest cycle of $F$ under iteration satisfies \begin{equation*} \lim_{N \to \infty} \frac{E[M(F)]}{\sqrt{N}} = \lambda \sqrt{\frac{\pi}{2}}, \end{equation*} in which $\lambda$ is the Golomb-Dickman constant. (2) Let $m(F)$ denote the length of the shortest cycle under iteration of the function $F$. Then the expected value $E[m(F)]$ satisfies \begin{equation*} \lim_{N \to \infty} \frac{E[m(F)]}{\log N} = \frac{1}{2} e^{-\gamma}. \end{equation*} \end{theorem} \begin{proof} For fixed $N$, Purdom and Williams \cite{PW68} use the notation $E_{F_N, 1}(\ell):=E[M(F)]$ and $E_{F_N, 1}(s):=E[m(F)]$. The result (1) follows from \cite[p. 550, second equation from bottom]{PW68}. Purdom and Williams state a result related to (2) without detailed proof. Namely, the top equation on \cite[p. 551]{PW68} states $$ E[m(F)] = S_{1,1} Q_{n}(1,1) + o(Q_n(1,1)), $$ where \begin{equation}\label{purdom} Q_n(1,1) = \sum_{j=1}^n \frac{(n-1)! j (\log j)}{(n-j)! n^j} . \end{equation} Here the values $S_{r, k}$ are taken from Shepp and Lloyd \cite{SL66}, where for $k=1$ they are given by the right hand side of \eqn{A911}, and for $k \ge 2$ by the right hand side of \eqn{A911b}, and in particular $S_{1,1} = e^{-\gamma}.$ To estimate $Q_n(1,1)$ given by \eqn{purdom} we use the identity (\cite[p. 550]{PW68}) $$ Q_n(0) = \sum_{j=1}^n \frac{(n-1)! j}{(n-j)! n^j} =1. 
$$ We observe that $Q_n(1,1)$ is the expected value of the function $\log j$ with respect to the probability distribution given by the individual terms in $Q_n(0)$. One may deduce that $Q_n(1,1) = \frac{1}{2} \log n + o(\log n)$ by showing that this probability distribution is concentrated on values $j = \sqrt{n} + o(\sqrt{n})$. This gives (2). \end{proof} The study of random finite functions is relevant for understanding the behavior of the Pollard \cite{Pol75} ``rho'' method of factoring large integers. The paper of Knuth and Trabb-Pardo \cite{KTP76} analyzed the performance of this algorithm, but their model used random permutations rather than random functions. An extensive study of limit theorems for statistics for random finite functions was made by V. F. Kolchin \cite{Kol86}. A very nice asymptotic analysis of the major statistics for random finite functions is given in Flajolet and Odlyzko \cite[Theorems 2 and 3]{FO90}. \subsection{Euler's constant as a Lyapunov exponent}\label{sec39b} \setcounter{equation}{0} Certain properties of Euler's constant have a formal resemblance to properties of a dynamical entropy. More precisely, the quantity $C= e^{\gamma}$ might be interpretable as the growth rate of some dynamical quantity, with $\gamma= \log C$ being the associated (topological or metric) entropy. Here we present results about the growth rates of the norms of products of random matrices having normally distributed entries. These growth rates naturally involve $\gamma$, together with other known constants. Before stating results precisely, we remark that products of random matrices often have well-defined growth rates, which can be stated either in terms of growth rates of matrix norms or in terms of Lyapunov exponents. Let $\{ A(j): j \ge 1\}$ be a fixed sequence of $d \times d$ complex matrices, and let $M(n) = A(n) A(n-1) \cdots A(2) A(1)$ be the associated product. 
We define, for an initial vector ${\bf v} \in {\mathbb C}^d$ and a vector norm $||\cdot||$, the {\em logarithmic growth exponent} of ${\bf v}$ to be \begin{equation}\label{700} \lambda({\bf v}) := \limsup_{n \to \infty} \frac{1}{n} \log {\frac{ ||M(n) {\bf v}||}{||{\bf v}||} }. \end{equation} This value depends on ${\bf v}$ and on the particular sequence $\{ A(j): j \ge 1\}$ but is independent of the choice of the vector norm. For a fixed infinite sequence $\{ A(j): j \ge 1\}$, as ${\bf v}$ varies there will be at most $d$ distinct values $\lambda_1^{*} \le \lambda_2^{*}\le \cdots \le \lambda_d^{*}$ for this limit. Now let the sequence $\{ A(j): j \ge 1\}$ vary, with each $A(j)$ being drawn independently from a given probability distribution on the set of $d \times d$ matrices. In 1960 Furstenberg and Kesten \cite{FK60} showed that for almost all vectors ${\bf v}$ (off a set of positive codimension) the limiting value $\lambda_d^{*}$ takes with probability one a constant value $\lambda_d$, provided the distribution satisfies the condition \begin{equation}\label{log-bound} \int_{X} \log^{+} ||A(x)|| d\mu(x) < \infty, \end{equation} where $(X, \Sigma, \mu)$ is the probability space, $||\cdot ||$ is a fixed matrix norm, required to be submultiplicative, i.e. $||M_1M_2|| \le ||M_1||\, ||M_2||$, and where $\log^{+} |x| := \max (0, \log |x|)$. A corollary of the multiplicative ergodic theorem proved by Oseledec in 1968 (see \cite{Ose68}, \cite{Rag79}) asserts that with probability one all these logarithmic exponents take constant values, provided the probability distribution on random matrices satisfies \eqn{log-bound}. These constant values $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_d$ are called the {\em Lyapunov exponents} of the random matrix product, with $\lambda_d$ being the {\em maximal Lyapunov exponent}. For almost all random products and almost all starting vectors ${\bf v}$, one will have $\lambda({\bf v}) = \lambda_d$, compare Pollicott \cite{Pol10}. 
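As an illustration of the definition \eqn{700}, the maximal exponent of a concrete random product can be estimated by simulation, renormalizing the vector at each step to avoid overflow. For $2 \times 2$ matrices with i.i.d. $N(0,1)$ entries the Cohen-Newman theorem below gives the exact value $\frac{1}{2}(\log 2 - \gamma) \approx 0.0580$. A rough standard-library Python sketch (the function name is ours):

```python
import math
import random

def top_lyapunov_2x2(n_steps=100000, seed=12345):
    """Estimate (1/n) log ||A(n)...A(1) v|| for i.i.d. 2x2 matrices with
    N(0,1) entries, renormalizing v each step so its norm stays at 1."""
    rng = random.Random(seed)
    v = [1.0, 0.0]
    log_norm = 0.0
    for _ in range(n_steps):
        a, b, c, d = (rng.gauss(0.0, 1.0) for _ in range(4))
        w0 = a * v[0] + b * v[1]
        w1 = c * v[0] + d * v[1]
        norm = math.hypot(w0, w1)
        log_norm += math.log(norm)   # accumulate the log of the growth factor
        v = [w0 / norm, w1 / norm]
    return log_norm / n_steps

gamma = 0.5772156649015329
exact = 0.5 * (math.log(2.0) - gamma)   # Cohen-Newman value for d = 2, s = 1
print(top_lyapunov_2x2(), exact)
```

By the central limit theorem quoted below, the estimate fluctuates around the true exponent with standard deviation of order $\sigma/\sqrt{n}$, with $\sigma^2 = \frac{1}{4}\psi'(1) = \frac{\pi^2}{24}$ here, so $10^5$ steps already determine the exponent to a few parts in a thousand.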
The value $C_d := \exp(\lambda_d)$ may be thought of as approximating the exponential growth rate of matrix norms $||M(n)^T M(n)||^{1/2}$, as $n\to \infty$. In general the Lyapunov exponents are hard to determine. In 1984 J. Cohen and C. M. Newman \cite[Theorem 2.5]{CN84} obtained the following explicit result for normally distributed variables. Note that condition \eqn{log-bound} holds for such variables. \begin{theorem} \label{th71}{\em (Cohen and Newman 1984)} Let $\{A(1)_{ij}: 1 \le i, j \le d\}$ be independent, identically distributed (i.i.d.) normal random variables with mean $0$ and variance $s^2$, i.e. $A(1)_{ij} \sim N(0, s^2)$. Let ${\bf v}(0)$ be a nonzero vector in ${\mathbb R}^d$. Define for independent random draws \begin{equation*} {\bf v}(n) = A(n) A(n-1) \cdots A(1) {\bf v}(0),~~~n= 1, 2, 3, ... \end{equation*} and consider\footnote{Cohen and Newman write the left sides of \eqn{702a} and \eqn{703} as $\log \lambda$, but here we replace these by the rescaled variables $\lambda({\bf v}(0))$ and $\lambda$, in order to match the definition of Lyapunov exponent given in \eqn{700}.} \begin{equation}\label{702a} \lambda( {\bf v}(0)) := \lim_{n \to \infty} \frac{1}{n} \log ||{\bf v}(n)|| \end{equation} using the Euclidean norm $||\cdot||$ on ${\mathbb R}^d$. Then with probability one the limit exists in { \eqn{702a}}, is nonrandom, is independent of ${\bf v}(0)$, and is given by \begin{equation}\label{703} \lambda = \frac{1}{2} [ \log(s^2) + \log 2 + \psi(\frac{d}{2})], \end{equation} in which $\psi(x)$ is the digamma function. Moreover as $n \to \infty$ the random variables $\frac{1}{\sqrt{n}} \log (e^{-\lambda n} ||{\bf v}(n)||)$ converge in distribution to $N(0, \sigma^2)$ with \begin{equation*} \sigma^2= \frac{1}{4} \psi'(\frac{d}{2}). \end{equation*} \end{theorem} It is easy to deduce from Cohen and Newman's result the following consequence concerning $d \times d$ (non-commutative) random matrix products. 
\begin{theorem} \label{th72} Let $A(j)$ be $d \times d$ matrices with entries drawn as independent identically distributed (i.i.d.) normal random variables with variance $1$, i.e. $N(0, 1)$. Consider the random matrix product $$ S(n) = A(n) A(n-1) \cdots A(1), $$ and let $||S||_{F} = \sqrt{\sum_{i,j} |S_{i,j}|^2}$ be the Frobenius matrix norm. Then, with probability one, the growth rate is given by \begin{equation}\label{706} \lim_{n \to \infty} \left( ||S(n)||_{F}^2 \right)^{\frac{1}{n} }= \left\{ \begin{array}{ll} 2e^{-\gamma+ H_{d/2-1}} & ~~\mbox{if}~~ d ~~\mbox{is ~even},\\ ~&~\\ \frac{1}{2} e^{-\gamma+ 2H_{d-1}- H_{(d-1)/2}} & ~~\mbox{if}~~ d ~~\mbox{is ~odd}. \end{array} \right. \end{equation} Here $H_{n}$ denotes the $n$-th harmonic number, using the convention that $H_{0}=0.$ \end{theorem} \begin{proof} We use an exponentiated version of the formula \eqn{703} in Theorem~\ref{th71}, setting $s=1$ and multiplying both sides of \eqn{703} by $2$ before exponentiating. We obtain, for a nonzero vector ${\bf v}(0)$, that with probability one, \begin{equation}\label{706b} \lim_{n \to \infty} \left( ||S(n){\bf v}(0)||^2 \right)^{\frac{1}{n} }= 2 e^{\psi(\frac{d}{2})}, \end{equation} where $||\cdot||$ is the Euclidean norm on vectors. To obtain the Frobenius norm bound we use the identity $||S||_F^2 = \sum_{j=1}^d || S {\bf e}_j||^2$ where ${\bf e}_j$ is the standard orthonormal basis of column vectors. On taking vectors ${\bf v}_j(0)={\bf e}_j$ we obtain $$ ||S(n)||_F^2 = \sum_{j=1}^d ||S(n) {\bf v}_j(0)||^2, $$ and using \eqn{706b} and the fact that $\lim_{n \to \infty} d^{\frac{1}{n}} = 1$, we obtain with probability one that \begin{equation*} \lim_{n \to \infty} \left( ||S(n)||_F^2 \right)^{\frac{1}{n} }= 2 e^{\psi(\frac{d}{2})}. 
\end{equation*} On applying the digamma formulas for $\psi(\frac{d}{2})$ given in Theorem~\ref{th30}, with the convention $H_{0}=0$, and noting for $d=2m+1$ that $$ 2e^{\psi(d/2)} = \frac{1}{2} e^{-\gamma + 2 H_{2m-1}- H_{m-1}}= \frac{1}{2} e^{-\gamma + 2H_{2m} - \frac{2}{2m} - H_{m-1}} = \frac{1}{2} e^{-\gamma + 2H_{2m} - H_{m}}, $$ we obtain \eqn{706}. \end{proof} Rephrased in terms of Lyapunov exponents, Theorem~\ref{th72} says that the corresponding $d \times d$ matrix system, for odd dimension $d$ has top Lyapunov exponent \begin{equation*} \lambda_d = \frac{1}{2}\left( -\gamma - \log 2 + 2H_{d-1}- H_{\frac{d-1}{2}}\right), \end{equation*} and for even dimension $d$ has top Lyapunov exponent \begin{equation*} \lambda_d = \frac{1}{2} \left( -\gamma + \log 2 + H_{ \frac{d}{2} -1}\right). \end{equation*} The simplest expressions occur in dimension $d=1$, giving $\lambda_1= \frac{1}{2} (-\gamma-\log 2)$, and in $d=2$, giving $\lambda_2= \frac{1}{2}(-\gamma + \log 2)$. Note that any real number can be obtained as a maximal Lyapunov exponent in Theorem~\ref{th71} simply by adjusting the standard deviation parameter $s$ in the normal distribution $N(0,s^2)$. It follows that the content of the specific formulas in Theorem~\ref{th72} is in part {\em arithmetic}, where one specifies the variance $s^2$ of the normal distributions to be a fixed rational number. Moreover, if we rescale $s$ to choose normal distributions $N(0, s^2)$ with variance $s^2=2$ in dimension $1$ and with variance $s^2=\frac{1}{2}$ in dimension $2$, then we obtain the Lyapunov exponents $\lambda_d= \frac{1}{2}\left(- \gamma \right)$ in these dimensions. Returning to the theme of Euler's constant being (possibly) interpretable as a dynamical entropy, one approach to the Riemann hypothesis suggests that it may be related to an (as yet unknown) arithmetical dynamical system, cf. Deninger \cite{De92}, \cite{De93}, \cite{De98}, \cite{De00}.
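The closed forms above are easy to check numerically. The following sketch (an illustration added here, not part of the original analysis; Python standard library only, with the digamma function approximated by a central difference of \texttt{math.lgamma}) confirms that the harmonic-number expressions for the growth rate $2e^{\psi(d/2)}$ of Theorem~\ref{th72} agree with $\psi$ itself in small dimensions:

```python
import math

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n, with the convention H_0 = 0
    return sum(1.0 / k for k in range(1, n + 1))

def digamma(x, h=1e-6):
    # psi(x) = (log Gamma)'(x), approximated by a central difference of lgamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

EULER_GAMMA = 0.5772156649015329

def growth_rate(d):
    # closed forms for 2*exp(psi(d/2)) from Theorem th72
    if d % 2 == 0:
        return 2.0 * math.exp(-EULER_GAMMA + harmonic(d // 2 - 1))
    else:
        return 0.5 * math.exp(-EULER_GAMMA + 2 * harmonic(d - 1)
                              - harmonic((d - 1) // 2))

for d in range(1, 9):
    assert abs(growth_rate(d) - 2.0 * math.exp(digamma(d / 2))) < 1e-6
```

For $d=1$ and $d=2$ the check reproduces the values $\frac{1}{2}e^{-\gamma}$ and $2e^{-\gamma+\log 2 - \log 2}=2e^{-\gamma}$ implicit in the Lyapunov exponents $\lambda_1, \lambda_2$ above.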
There is a statistical mechanics interpretation of the Riemann zeta function as a partition function given in Bost and Connes \cite{BC95}, with subsequent developments described in Connes and Marcolli \cite{CM08}. For some other views on arithmetical dynamical systems related to zeta functions see Lagarias \cite{La99}, \cite{La06} and Lapidus \cite[Chap. 5]{Lap08}. Here we note the coincidence that in formulations of the Riemann hypothesis given earlier in Theorems \ref{th54} and \ref{th55}, a ``growth rate" $e^{\pm \gamma}$ naturally appears. \subsection{Euler's constant and periods}\label{sec310} \setcounter{equation}{0} In 2001 M. Kontsevich and D. Zagier \cite{KZ01} defined a {\em period} to be a complex number whose real and imaginary parts are values of absolutely convergent integrals of rational functions with rational coefficients, over domains in ${\mathbb R}^n$ given by polynomial inequalities with rational coefficients. More conceptually these are values of integrals of an algebraic differential form with algebraic coefficients, integrated over an algebraic cycle, see Kontsevich \cite[Section 4.3]{Kon00}. We will call such an integral a {\em period integral.} They observed that for the real part one can reduce such integrals to the case where all coefficients are rational numbers, all differentials are top degree, integrals are taken over divisors with normal crossings defined over ${\mathbb Q}$. In particular they introduced a ring ${\mathcal P}$ of {\em effective periods}, which includes the field of all algebraic numbers $\overline{{\mathbb Q}}$ as a subring, as well as a ring of {\em extended periods} $\hat{{\mathcal P}}$ obtained by adjoining to ${\mathcal P}$ the constant $\frac{1}{2 \pi i}$, which conjecturally is not a period. One has, for $n\ge 2$, the representation \begin{equation*} \zeta(n) = \int_{0< t_1 < t_2 <...< t_{n} < 1} \frac{dt_1}{1-t_1} \frac{dt_2}{t_2} \cdots \frac{dt_n}{t_n} \end{equation*} as a period integral.
Thus convergent positive integer zeta values are periods. More generally one can represent all (convergent) multiple zeta values \begin{equation*} \zeta(n_1, n_2, ..., n_k) := \sum_{m_1> m_2 > ... > m_k >0} \frac{1}{m_1^{n_1} m_2^{n_2} \cdots m_k^{n_k}} \end{equation*} at positive integer arguments as period integrals. Other period integrals are associated with special values of $L$-functions, a direction formulated by Deligne \cite{Del79}. An overview paper of Zagier \cite{Za94} gives a tour of many places that multiple zeta values appear. Some other examples of periods given by Kontsevich and Zagier are the values $\log \alpha$ for $\alpha$ a positive real algebraic number. In addition the values $(\Gamma(\frac{p}{q}))^q$ where $\frac{p}{q}$ is a positive rational number are periods, see Kontsevich and Zagier \cite[p. 775]{KZ01}, Andr\'{e} \cite[Chap. 24]{And04}. This can be deduced using Euler's Beta integral \eqn{250c}. Another well known example is (\cite[(5.11)]{Art64}) $$ \int_{0}^1 \frac{dt}{\sqrt{1-t^4}} = \frac{ (\Gamma(\frac{1}{4}))^2}{\sqrt{32 \pi}}, $$ giving a period of an elliptic curve with complex multiplication. Periods arise in many places. They arise in evaluating Feynman integrals in quantum field theory calculations. For more work on periods and zeta values in this context, see Belkale and Brosnan \cite{BB03}. Periods appear as multiple zeta values in Drinfeld's associator, cf. Drinfeld \cite{Dr89}, \cite{Dr90}, \cite{Dr92}. Periods appear in Vassiliev's knot invariants, due to the connection found by Kontsevich \cite{Kon93}, see Bar-Natan \cite{BN95}. They appear in expansions of Selberg integrals, cf. Terasoma \cite{Te02}. They appear in the theory of motives as periods of mixed Tate motives, cf. Terasoma \cite{Te02b}, \cite{Te06}, and Deligne and Goncharov \cite{DG05}. Other constants that are periods are values of polylogarithms at integers.
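As a small worked example, the simplest nontrivial relation among multiple zeta values is Euler's identity $\zeta(2,1)=\zeta(3)$. A rough numerical check (a sketch added for illustration; truncating both series at $N$ terms leaves an error of order $\frac{\log N}{N}$ in the double sum):

```python
# zeta(2,1) = sum over m1 > m2 > 0 of 1/(m1^2 * m2); Euler's identity
# asserts that this equals zeta(3).
N = 20000
H = 0.0          # running harmonic number H_{m-1}
zeta21 = 0.0
zeta3 = 0.0
for m in range(1, N + 1):
    zeta21 += H / m**2     # the inner sum over m2 < m equals H_{m-1}
    H += 1.0 / m
    zeta3 += 1.0 / m**3

assert abs(zeta21 - zeta3) < 1e-2
```

The agreement to a few parts in $10^4$ at this truncation level is consistent with the $O(\frac{\log N}{N})$ tail of the double sum.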
Algebraic multiples of beta values $B(\frac{r}{n}, \frac{s}{n})$ occur as periods of differentials on abelian varieties, cf. Gross \cite[Rohrlich appendix]{Gross78}. In 2002 P. Cartier \cite{Ca02} gave a Bourbaki seminar expos\'{e} surveying multiple zeta values, polylogarithms and related quantities. There is a good deal known about the irrationality or transcendence of specific periods. The transcendence of $\zeta(2n)$ for $n \ge 1$ is immediate from their expression as powers of $\pi$. For odd zeta values, in 1978 Ap\'{e}ry (\cite{Ap79}, \cite{Ap81}) established that $\zeta(3)$ is irrational, see van der Poorten \cite{vdP79} and Section \ref{sec311} for more details. In 1979 F. Beukers \cite{Beu79} gave an elegant proof of the irrationality of $\zeta(3)$ suggested by the form of Ap\'{e}ry's proof, showing that certain integer linear combinations of period integrals defined for integer $r, s \ge 0$ by \begin{equation*} I_{r,s} :=\int_{0}^1 \int_{0}^1 \frac{ -\log xy}{1-xy} x^ry^s \, dx dy= \int_{0}^1 \int_{0}^1 \int_{0}^{1} \frac{x^r y^s}{1-(1-xy)z} dz\, dx\, dy \end{equation*} were very small. He evaluated \begin{equation}\label{833a} I_{0,0} = \int_{0}^1 \int_{0}^1 \frac{ -\log xy}{1-xy} \, dx\, dy = 2 \, \zeta(3), \end{equation} and for $r \ge 1$, \begin{equation*} I_{r, r} = 2 \big( \zeta(3) - \frac{1}{1^3} - \frac{1}{2^3} - \cdots - \frac{1}{r^3}\big) = 2\big( \zeta(3) - H_{r,3}\big), \end{equation*} where $H_{r,3} := \sum_{m=1}^r \frac{1}{m^3}$. As a necessary part of the analysis he also showed for $r >s$ that each $I_{r, s}= I_{s,r}$ is a rational number with denominator dividing $D_r^3$ where $D_r$ is the least common multiple $[1,2,..., r]$. It is now known that infinitely many odd zeta values are irrational, without determining which ones (Rivoal \cite{Riv00}, Ball and Rivoal \cite{BR01}). In particular Zudilin \cite{Zu01} showed that at least one of $\zeta(5), \zeta(7), \zeta(9), \zeta(11)$ is irrational. See Fischler \cite{Fi04} for a survey of recent developments on irrationality of zeta values.
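Beukers' evaluation \eqn{833a} can also be verified directly by numerical quadrature. The sketch below (illustrative only) uses the midpoint rule on a uniform grid, which avoids evaluating the integrand on the axes, where it has an integrable logarithmic singularity; note that the integrand extends continuously to $(1,1)$, since $-\log t \sim 1-t$ as $t \to 1$:

```python
import math

# Midpoint-rule quadrature of I_{0,0} = double integral over the unit
# square of -log(xy)/(1-xy); Beukers evaluated this as 2*zeta(3).
N = 600
h = 1.0 / N
I00 = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        y = (j + 0.5) * h
        I00 += -math.log(x * y) / (1.0 - x * y)
I00 *= h * h

zeta3 = sum(1.0 / n**3 for n in range(1, 20000))
assert abs(I00 - 2.0 * zeta3) < 0.02   # I_{0,0} = 2*zeta(3) ~ 2.4041
```

The dominant quadrature error comes from the logarithmic blow-up along the axes and is of order $1/N$, comfortably inside the tolerance used here.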
The state of the art for showing transcendence of periods is surveyed by Waldschmidt \cite{Wa06}. There is currently no effective way known to decide whether a given number is a period. One can certify a number is a period by directly exhibiting it as a $\bar{{\mathbb Q}}$-linear combination of values of known period integrals. However no properties of periods are currently known which would be useful in distinguishing them from non-periods. Kontsevich and Zagier \cite[Problem 3]{KZ01} raise the problem of exhibiting a single specific number that is provably not a period. As mentioned in the introduction it is conjectured: {\em Euler's constant is not a period.} The conjectures that $e$ and $\gamma$ are not periods appear in \cite[Sec. 1.1]{KZ01}. We have already seen a number of different integral representations for Euler's constant, which however involve exponentials and/or logarithms, in addition to rational functions. These integrals do not resolve the question whether $\gamma$ is a period. Kontsevich and Zagier \cite[Sec. 4.3]{KZ01} also suggest enlarging the class of periods to allow exponential functions in the integrals. They define an {\em exponential period} to be an absolutely convergent (multiple) integral of the product of an algebraic function with the exponential of an algebraic function, taken over a real semi-algebraic set, where all polynomials entering the definition of the algebraic functions and the semi-algebraic set have algebraic coefficients\footnote{The semialgebraic set is cut out by a system of (real) polynomial inequalities $f_j(x_1, ..., x_n) \ge 0$, and it is supposed that the coefficients of the $f_j$ are all real algebraic numbers.}. The set of all such values forms a ring $\sEP$, which we call the ring of {\em exponential periods}, and which contains the field $\bar{{\mathbb Q}}$ of all algebraic numbers.
This ring is countable, and is known to include the constant $e$, all values $\Gamma(\frac{p}{q})$ at rational arguments, namely $\frac{p}{q} \in {\mathbb Q} \smallsetminus {\mathbb Z}_{\le 0},$ and the constant $$ \sqrt{\pi} = \int_{-\infty}^{\infty} e^{- t^2} dt. $$ It also includes various values of period determinants for confluent hypergeometric functions studied by Terasoma \cite[Theorem 2.3.3]{Te96} and by Bloch and Esnault \cite[Proposition 5.4]{BE00}. Here we observe that: \begin{enumerate} \item Euler's constant $\gamma \in \sEP$. \item The Euler-Gompertz constant $\delta \in \sEP$. \end{enumerate} M. Kontsevich observes that an integral representation certifying that $\gamma \in \sEP$ is obtainable from \eqn{225c} by substituting $-\log x = \int_{x}^1 \frac{dy}{y}$ for $0< x<1$ and $\log x= \int_{1}^x \frac{dy}{y}$ for $x>1$, yielding \begin{equation*} \gamma = \int_{0}^1 \int_{x}^1 \frac{e^{-x}}{y} dy \,dx - \int_{1}^{\infty} \int_{1}^x \frac{e^{-x}}{y} dy \, dx. \end{equation*} An integral representation certifying that $\delta \in \sEP $ is given by \eqn{560aa}, stating that \[ \delta = \int_{0}^{\infty} \frac{e^{-t}}{1+t} dt. \] It is not known whether $\delta$ is a period. \subsection{Diophantine approximations to Euler's constant}\label{sec311} \setcounter{equation}{0} Is Euler's constant rational or irrational? This is unknown. One can empirically test this possibility by obtaining good rational approximations to Euler's constant. The early calculations of Euler's constant were based on Euler-Maclaurin summation. In 1872 Glaisher \cite{Glaisher1872} reviewed earlier work on Euler's constant, which included his own $1871$ calculation accurate to $100$ places (\cite{Glaisher1871}) and uncovered a mistake in an earlier calculation of Shanks. In $1878$ the astronomer J. C. Adams \cite{Ad1878} determined $\gamma$ to $263$ places, again using Euler-Maclaurin summation, an enormous labor done by hand. In 1952 J. W. 
Wrench, Jr \cite{Wre52}, using a desk calculator, obtained $\gamma$ to $328$ places. In 1962 Knuth \cite{Kn62} automated this approach and obtained Euler's constant to $1272$ places with an electronic computer. However this record did not last for long. In 1963 D. W. Sweeney \cite{Sw63} introduced a new method to compute Euler's constant, based on the integral representation $$ \gamma= \sum_{k=1}^{\infty} \frac{(-1)^{k-1} n^k}{k! \,k} - \int_{n}^{\infty} \frac{e^{-u}}{u} du - \log n. $$ By making a suitable choice of $n$ in this formula, Sweeney obtained $3566$ digits of $\gamma$. The following year Beyer and Waterman \cite{BW74} used a variant of this method to compute $\gamma$ to $7114$ places, however only the initial $4879$ digits of these turned out to be correct, as determined by later computations. In 1977 Brent \cite{Bre77} used a modification of this method to compute $\gamma$ and $e^{\gamma}$ to $20700$ places. In 1980 Brent and McMillan \cite{BM80} formulated new algorithms based on the identity $$ \gamma = \frac{U(n)}{V(n)} - \frac{K_0(2n)}{I_0(2n)}, $$ whose right hand side contains for $\nu=0$ the modified Bessel functions $$ I_{\nu}(z) := \sum_{k=0}^{\infty}\frac{(\frac{z}{2})^{\nu+2k}}{ k! \Gamma( \nu +k +1)} ~~\mbox{and} ~~ K_{0}(z) := -\frac{\partial}{\partial \nu} I_{\nu}(z) \vert_{\nu=0}, $$ and \begin{eqnarray*} U(n) & := &\sum_{k=0}^{\infty} \Big(\frac{n^k}{k!}\Big)^2 ( H_k - \log n),\\ V(n)& := & I_0(2n) = \sum_{k=0}^{\infty} \Big( \frac{n^k}{k!}\Big)^2. \end{eqnarray*} The second term $0<K_0(2n)/I_0(2n)< \pi e^{-4n}$ can be made small by choosing $n$ large, hence $\gamma$ is approximated by $\frac{U(n)}{V(n)}$, choosing $n$ suitably (their Algorithm B1). With this algorithm they computed both $\gamma$ and $e^{\gamma}$ to $30000$ places.
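The accuracy of Algorithm B1 can be seen in a few lines. The sketch below (an illustrative reimplementation, not Brent and McMillan's program) truncates the series for $U(n)$ and $V(n)$ at $n=4$, where the error term $K_0(8)/I_0(8)$ is already below $\pi e^{-16} \approx 3.5 \times 10^{-7}$:

```python
import math

# Brent-McMillan Algorithm B1 (illustrative sketch): gamma ~ U(n)/V(n),
# with 0 < U(n)/V(n) - gamma = K_0(2n)/I_0(2n) < pi * exp(-4n).
def bessel_ratio(n, terms=80):
    U, V, H = 0.0, 0.0, 0.0    # H accumulates the harmonic number H_k
    logn = math.log(n)
    for k in range(terms):
        if k > 0:
            H += 1.0 / k
        w = (n**k / math.factorial(k))**2
        U += w * (H - logn)
        V += w
    return U / V

approx = bessel_ratio(4)
# the one-sided error is K_0(8)/I_0(8), just under the bound pi*e^{-16}
assert 0 < approx - 0.5772156649015329 < 4e-7
```

Doubling $n$ squares the error bound $\pi e^{-4n}$, which is why this family of algorithms scales well to tens of thousands of digits (in multiprecision arithmetic, which this float sketch does not attempt).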
They also determined the initial part of the ordinary continued fraction expansions of both $\gamma$ and $e^{\gamma}$, and established the following result. \begin{theorem}~\label{th3141} {\rm (Brent and McMillan 1980)} (1) If $\gamma= \frac{m_0}{n_0}$ is rational, with $m_0, n_0 \in {\mathbb Z}_{>0}$ then its denominator $n_0 > 10^{15000}.$ (2) If $e^{\gamma} = \frac{m_1}{n_1}$ is rational, with $m_1, n_1 \in {\mathbb Z}_{>0}$ then its denominator $n_1 > 10^{15000}.$ \end{theorem} \begin{proof} We follow Brent \cite{Bre77}. For any real number $\theta$, Theorem 17 of Khinchin \cite{Khinchin} states: if $\frac{p_n}{q_n}$ is an ordinary continued fraction convergent for $\theta$ then $|q_n \theta - p_n| \le |q\theta -p|$ for all integers $p$ and $q$ with $0 < |q| \le q_n$. In particular, if $\theta = \frac{m}{n'}$ were rational with $0 < n' \le q_n$, then $|q_n \theta - p_n| \le |n' \theta - m| = 0$, which forces $\theta = \frac{p_n}{q_n}$. It therefore suffices to find a convergent $\frac{p_n}{q_n}$ with $q_n > 10^{15000}$ and $\theta \ne \frac{p_n}{q_n}$; the latter condition holds whenever the next convergent $\frac{p_{n+1}}{q_{n+1}}$ exists. \end{proof} The continued fraction expansion of $\gamma$ begins $$ \gamma = [0; 1, 1, 2, 1, 2, 1, 4, 3, 13, 5, 1, 1, 8, 1, 2, 4, 1, 1, 40, 1, 11, 3, 7, 1, 7, 1, 1, 5, 1, 49, 4, ...], $$ see \cite[Sequence A002852]{OEIS}. Brent and McMillan also computed statistics on the sizes of the partial quotients in the initial continued fractions of $\gamma$ and of $e^{\gamma}$, in order to test whether $\gamma$ behaves like a ``random" real number. They found good agreement with the predicted limiting distribution of partial quotients for a random real number $\theta= [a_0, a_1, a_2, \cdots]$. It is a theorem of Kuz'min that the distribution of the $j$-th partial quotient $\text{Prob}[a_j=k]$ rapidly approaches the Gauss-Kuz'min distribution $$ p(k) := \log_2 (1+ \frac{1}{k}) - \log_2( 1+ \frac{1}{k+1}), $$ as $j \to \infty$, see Khinchin \cite[III.15]{Khinchin}.
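The initial partial quotients quoted above are easy to reproduce in exact rational arithmetic from a decimal truncation of $\gamma$; the sketch below uses a $40$-digit truncation, whose error of order $10^{-40}$ is far too small to disturb the first dozen partial quotients:

```python
from fractions import Fraction

# 40-digit truncation of Euler's constant, as an exact rational
x = Fraction(5772156649015328606065120900824024310421, 10**40)

cf = []
for _ in range(13):
    a = x.numerator // x.denominator   # floor of x
    cf.append(a)
    x = 1 / (x - a)                    # exact: no rounding occurs

assert cf == [0, 1, 1, 2, 1, 2, 1, 4, 3, 13, 5, 1, 1]
```

The partial quotients of a truncation agree with those of $\gamma$ as long as the convergent denominators remain far below $10^{20}$, which holds comfortably over this range.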
Diophantine approximations to $\gamma$ can be extracted from various series expansions for $\gamma$ involving rational numbers. In 1910 Vacca \cite{Vac10} found the expansion \begin{eqnarray*} \gamma &= &\sum_{k=1}^{\infty} (-1)^k \frac{ \lfloor \log_2 k\rfloor}{k}\\ & =& \Big(\frac{1}{2} - \frac{1}{3}\Big) + 2 \Big(\frac{1}{4} - \frac{1}{5} + \frac{1}{6} - \frac{1}{7}\Big) + 3 \Big( \frac{1}{8} - \frac{1}{9} + \cdots - \frac{1}{15}\Big) + \cdots. \end{eqnarray*} Truncating this expansion at $k=2m-1$ gives approximations $\frac{p_m}{q_m}$ to $\gamma$ satisfying $$ 0 < \gamma - \frac{p_m}{q_m} \le \frac{4(\log m + 1)}{m}. $$ In 2010 Sondow \cite{Son10} gave refinements of this expansion yielding approximations with a convergence rate $O(\frac{\log m}{m^r})$ for an arbitrary but fixed $r \ge 2$. However approximations of these types fall far short of establishing irrationality of $\gamma$, since the denominators $q_m$ grow exponentially with $m$. Much recent work on Diophantine approximation of explicit constants $\theta$ has focused on finding a series of rational approximations $\frac{u_n}{v_n}$ with the properties: \begin{enumerate} \item Each sequence $u_n, v_n$ separately satisfies the same linear recurrence with coefficients that are polynomials in the ring ${\mathbb Z}[n]$, where $n$ is the recurrence parameter. \item The initial conditions for the recurrences $u_n, v_n$ are rational, so that both sequences consist of rational numbers. \item One has $\frac{u_n}{v_n} \to \theta$ as $n \to \infty.$ \end{enumerate} We give examples of such recurrences below, e.g. \eqn{3152a}. For such sequences the power series $U(z) = \sum_{n=1}^{\infty} u_n z^n$ resp. $V(z)= \sum_{n=1}^{\infty} v_n z^n$ will each (formally) satisfy a homogeneous linear differential equation $D F(z)=0 $ in the $z$-variable, whose coefficients are polynomials in ${\mathbb Q}[z]$, i.e. the operator $D$ has the form $$ D=\sum_{j=0}^n R_j(z) \frac{d^j}{dz^j}. 
$$ We will refer to such a sequence of approximations satisfying (1), (2) as being of {\em $H$-type}, and will call any real number obtainable as a limit of such approximations an {\em elementary $H$-period}. More generally, the {\em ring $\sHP$ of $H$-periods} will be the ring generated over $\overline{{\mathbb Q}}$ by elementary $H$-periods\footnote{The name {\em $H$-period} is proposed by analogy with the more studied class of $G$-periods given below.}. The ring $\sHP$ is clearly countable, and its possible relations to rings of periods ${\mathcal P}$ or exponential periods $\sEP$ discussed in Section \ref{sec310} remain to be determined. It turns out to be useful to single out a subclass of such numbers, which are the set of such approximations for which the linear differential operators $D$ have a special property: {\bf $G$-operator Property.} {\em At some algebraic point $z_0 \in \overline{{\mathbb Q}}$ the linear differential operator with coefficients in ${\mathbb C}[z]$ has a full rank set of $G$-function solutions. } The notion of {\em $G$-function} is presented in detail in Bombieri \cite{Bom81}, Andr\'{e} \cite{And89}, \cite{And03}, and Dwork, Gerotto and Sullivan \cite{DGS94}. We give a definition of a subclass of $G$-functions obtained by specializing to the rational number case. (The general case allows algebraic number coefficients all drawn from a fixed number field $K$.) It is a function given by a power series $F(z) = \sum_{n=0}^{\infty} a_n z^n$, with rational $a_n$ such that \begin{enumerate} \item There is a constant $C>0$ such that $|a_n| \le C^n$, so that the power series $F(z)$ has a nonzero radius of convergence. \item There is a constant $C' >0$ such that the least common multiple of the denominators of $a_1, .., a_n$ is at most $(C')^n$. \item The $a_n= \frac{p_n}{q_n}$ are solutions of a linear recurrence having coefficients that are polynomials in the variable $n$ with integer coefficients.
\end{enumerate} The notion of $G$-operator was formulated in Andr\'{e} \cite[IV. 5]{And89}, \cite[Sect. 3]{And00a}. Andr\'{e}'s definition of $G$-operator is given in another form, in terms of it satisfying the Galochkin condition (from Galochkin \cite{Gal74}, also see Bombieri \cite{Bom81}), but this class of operators is known to coincide with those given by the definition above. The minimal differential operator satisfied by a $G$-function is known to be a $G$-operator in Andr\'{e}'s sense \cite[Theorem 3.2]{And00a}, by a result of D. and G. Chudnovsky. Conversely, any $G$-operator in Andr\'{e}'s sense has a full rank set of $G$-function solutions at any algebraic point not a singular point of the operator (\cite[Theorem 3.5]{And00a}). Andr\'{e} \cite[p. 719]{And00a} has shown that $G$-operators, viewed in the complex domain, are of a very restricted type. They must be Fuchsian on the whole Riemann sphere ${\mathbb P}^1({\mathbb C})$, i.e. they have only regular singular points, including the point $\infty$, see also \cite[Theorem 3.4.1]{And03}. A conjecture formulated in Andr\'{e} \cite{And89}, \cite[p. 718]{And00a} is that all $G$-operators should come from (arithmetical algebraic) ``geometry", i.e. they each should be a product of factors of Picard-Fuchs operators, controlling the variation of cohomology in a parametrized family of algebraic varieties defined over $\bar{{\mathbb Q}}$. We will say that a real number $\theta$ that is a limit of a convergent sequence of rational approximations $\frac{u_n}{v_n}$ with $u_n, v_n$ coefficients in the power series expansions of two (rational) $G$-functions is an {\em elementary $G$-period}. As above we obtain a ring $\sGP$ of {\em $G$-periods}, defined as the ring generated over $\overline{{\mathbb Q}}$ by elementary $G$-periods.
This ring has recently been shown in Fischler and Rivoal \cite[Theorem 3]{FR13} to coincide with the field $\mbox{Frac}({\bf G})$, in which $\mbox{Frac}({\bf G})$ is the fraction field of a certain ring ${\bf G}$ constructed from $G$-function values. The ring ${\bf G}$ is defined to be the set of complex numbers $f(\alpha)$ where $\alpha \in \bar{{\mathbb Q}}$ and $f(z)$ is any branch of a (multi-valued) analytic continuation of a $G$-function with coefficients defined over some number field which is nonsingular at $z=\alpha$. Fischler and Rivoal \cite[Theorem 1]{FR13} show that the ring ${\bf G}$ coincides with the set of all complex numbers whose real and imaginary parts can each be written as $f(1)$, with $f(z)$ being some $G$-function with rational coefficients (as in the definition above) and whose power series expansion has radius of convergence exceeding $1$. Their result \cite[Theorem 3]{FR13} also establishes that the set of elementary $G$-periods is itself a ring containing all real algebraic numbers, which coincides with $\mbox{Frac}({\bf G})\cap {\mathbb R}$. Again, possible relations of the field $\sGP$ with the Kontsevich-Zagier rings of periods ${\mathcal P}$ and exponential periods $\sEP$ remain to be determined.\footnote{The paper \cite[Sect. 2.2]{FR13} discusses the possibility that the equality ${\bf G} =\hat{{\mathcal P}} ={\mathcal P} [ \frac{1}{\pi}]$ might hold, and sketches an argument due to a reviewer that supports the conjecture that ${\bf G} \subseteq \hat{{\mathcal P}}$.} As already mentioned in Section \ref{sec310}, a famous result of Ap\'{e}ry (\cite{Ap79}, \cite{Ap81}) in 1979 established that the period $\zeta(3)$ is irrational.
Ap\'{e}ry's result was obtained by finding rational approximations $\frac{p_n}{q_n}$ to $\zeta(3)$ which were generated by two different solutions $\{ p_n, q_n: n \ge 1\}$, to the second order linear recurrence with polynomial coefficients: \begin{equation}\label{3152a} n^3 \, u_n = (34n^3- 51n^2+27n -5) u_{n-1} - (n-1)^3 u_{n-2} \end{equation} with initial conditions $p_0=0, p_1=6$ and $q_0=1, q_1=5,$ respectively. Here $q_n$ are now called {\em Ap\'{e}ry numbers}, and are given by \begin{equation}\label{3152b} q_n = \sum_{k=0}^n {\binom{n+k}{k}}^2 {\binom{n}{k}}^2, \end{equation} while $p_n$ are given by \begin{equation}\label{3152c} p_n = \sum_{k=0}^n {\binom{n+k}{k}}^2 {\binom{n}{k}}^2\Big( \sum_{m=1}^n \frac{1}{m^3} + \sum_{m=1}^k \frac{ (-1)^{m-1}}{ 2m^3 {\binom{n}{m}} {\binom{n+m}{m}}}\Big), \end{equation} see Fischler \cite[Sect. 1.2]{Fi04}. These approximations are closely spaced enough and good enough to imply that, for any $\epsilon>0$, and all $q \ge q(\epsilon)$ one has $$ |\zeta(3) - \frac{p}{q}| > q^{-(\theta + \epsilon)} ~~\mbox{for all}~~p \in {\mathbb Z}, $$ with exponent $\theta= 13.41782$, which implies that $\zeta(3)$ is irrational. The method permits deriving families of approximations given by related recurrences for $\zeta(m)$ for all $m \ge 2$ (cf. Cohen \cite{Coh81}), but only for $m=2$ and $3$ are they good enough to prove irrationality. For each $m \ge 2$ the associated power series $$ f(z) = \sum_{n \ge 0} p_n z^n, ~~\qquad ~g(z) = \sum_{n=0}^{\infty} q_n z^n $$ will be solutions to a linear differential equation with polynomial coefficients, and are of $H$-type. For $m=2, 3$ these approximations are known to be of $G$-type. A conceptually new proof of irrationality of $\zeta(3)$ using the same approximations, but connecting them with modular forms on the congruence subgroup $\Gamma_1(6)$ of $SL(2, {\mathbb Z})$ was presented in 1987 by Beukers \cite{Beu87}.
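The closed forms \eqn{3152b} and \eqn{3152c} can be checked directly against the recurrence \eqn{3152a} in exact arithmetic; a short sketch (with illustrative helper names, not drawn from any of the cited papers):

```python
from fractions import Fraction
from math import comb

def apery_q(n):
    # eqn (3152b): the Apery numbers q_n
    return sum(comb(n + k, k)**2 * comb(n, k)**2 for k in range(n + 1))

def apery_p(n):
    # eqn (3152c), computed in exact rational arithmetic
    total = Fraction(0)
    for k in range(n + 1):
        inner = sum(Fraction(1, m**3) for m in range(1, n + 1))
        inner += sum(Fraction((-1)**(m - 1),
                              2 * m**3 * comb(n, m) * comb(n + m, m))
                     for m in range(1, k + 1))
        total += comb(n + k, k)**2 * comb(n, k)**2 * inner
    return total

assert [apery_q(n) for n in range(4)] == [1, 5, 73, 1445]
assert apery_p(0) == 0 and apery_p(1) == 6

# both closed forms satisfy the recurrence (3152a) exactly
for n in range(2, 8):
    c = 34*n**3 - 51*n**2 + 27*n - 5
    assert n**3 * apery_q(n) == c * apery_q(n-1) - (n-1)**3 * apery_q(n-2)
    assert n**3 * apery_p(n) == c * apery_p(n-1) - (n-1)**3 * apery_p(n-2)

# the ratios converge very rapidly to zeta(3) = 1.2020569...
assert abs(apery_p(8) / apery_q(8) - 1.2020569031595943) < 1e-10
```

The speed of convergence visible here, roughly one factor of $((1+\sqrt{2})^4)^{-2} \approx 1/1154$ per step, is exactly what makes the irrationality proof work.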
Ap\'{e}ry's discovery was based on series acceleration transformations of the series $\zeta(3) = \sum_{n=1}^{\infty} \frac{1}{n^3}.$ An initial example of Ap\'{e}ry of such accelerations is the identity \begin{equation}\label{3152d} \zeta(3) =\frac{5}{2} \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^3 {\binom{2n}{n}}}, \end{equation} see \cite[Sect. 3]{vdP79}. The partial sums of this series converge at a rate $O( 4^{-n})$ to $\frac{2}{5} \zeta(3)$, but are by themselves insufficient to prove irrationality. Ap\'{e}ry's proof used further series acceleration transformations, described in \cite[Sect. 4]{vdP79}. It is interesting to note that the identity \eqn{3152d} is a special case of identities in an 1890 memoir of A. A. Markoff \cite{Markov1890} on series acceleration, as is explained in Kondratieva and Sadov \cite{KS05}. In 1976 Gosper (\cite{Gos76}, \cite{Gos90}) noted related identities, also found by series acceleration techniques, including \begin{equation*} \zeta(3) = \sum_{n=1}^{\infty} \frac{30n-11}{16(2n-1)n^3 {\binom{2n-1}{n}}^2}. \end{equation*} A new approach to Ap\'{e}ry's irrationality proof of $\zeta (3)$ was found by Nesterenko \cite{Nes96} in 1996, which used a relation of the Ap\'{e}ry approximations $p_n, q_n$ to Pad\'{e} approximations to polylogarithms. Nesterenko also found a new continued fraction for $2\zeta(3)$, \begin{equation}\label{3152g} 2 \zeta(3) =2+ \cfrac{1}{2+ \cfrac{2}{4+ \cfrac{1}{3+\cfrac{4}{2+ \cfrac{2}{4+\cfrac{6}{6+ \cfrac{4}{5+\cfrac{9}{4+ \cdots}}}}}}}}. \end{equation} Writing this as $2 \zeta(3) = b_0 + \cfrac{a_1}{b_1 + \cdots}$, the numerators $a_n$ grow quadratically in $n$ while the denominators $b_n$ grow linearly with $n$, and are given for $k \ge 0$ (excluding $a_1$) by $$ \begin{array}{llll} a_{4k+1}= k(k+1), & b_{4k+1}= 2k+2, & a_{4k+2} = (k+1)(k+2), & b_{4k+2}= 2k+4, \\ a_{4k+3}= (k+1)^2, & b_{4k+3}= 2k+3, &a_{4k+4} = (k+2)^2 ,& b_{4k+4}= 2k+2. 
\end{array} $$ This result was obtained starting from Pad\'{e} type simultaneous polynomial approximations to the logarithm, dilogarithm and trilogarithm found by Gutnik \cite{Gut83}, see also the survey of Beukers \cite{Beu81}. When specialized at the point $x=1$ these Pad\'{e} approximations yield the sequences $q_n, -2p_n$. The continued fraction \eqn{3152g} was obtained from a Mellin-Barnes type integral for a special case of the Meijer G-function \cite{Mei36}, a generalized hypergeometric function (see \cite[Chap. V]{EMOT53}). A survey of methods using Pad\'{e} approximations to polylogarithms is given in Nesterenko \cite{Nes03}. This approach has led to further results, including a new proof of irrationality of $\zeta(3)$ of Nesterenko \cite{Nes09}. A natural question concerns whether $\gamma$ has rational approximations of $H$-type. A family of such rational approximations to $\gamma$ were found in 2007 through the combined efforts of A. I. Aptekarev, A. I. Bogolyubskii, D. V. Khristoforov, V. G. Lysov and D. N. Tulyakov in the seven papers in the volume \cite{Apt07}. Their result is stated in Aptekarev and Tulyakov \cite{AT09} in the following slightly improved form. \begin{theorem}\label{th3152} {\em (Aptekarev, Bogolyubskii, Khristoforov, Lysov, Tulyakov 2007)} Euler's constant is approximated by a ratio of two rational solutions of the third-order recurrence relation with polynomial coefficients \begin{eqnarray*} (16n-15) q_{n+1} &= & (128n^3 + 40n^2 -82n-45) q_n - n^2(256n^3 -240n^2 +64n-7) q_{n-1}\nonumber \\ &&~~~ + (16n+1)n^2(n-1)^2 q_{n-2}. \end{eqnarray*} The two solutions $\{p_n: n \ge 0\}$ and $\{ q_n: n \ge 0\}$ are determined by the initial conditions $$ p_0 :=0, ~~~p_1 :=2, ~~~p_2 := 31, $$ and $$ q_0 := 1, ~~~q_1 :=3, ~~~q_2 := 50. $$ They have the following properties: (1) (Integrality) For $n \ge 0,$ $$ p_n \in {\mathbb Z},~~~q_n \in {\mathbb Z}. $$ (2) (Denominator growth rate) For $n \ge 1$, \begin{equation*} q_n= (2n)! 
\frac{ e^{\sqrt{2n}}}{\sqrt[4]{n}} \left( \frac{1}{\sqrt{\pi}(4e)^{3/8}} + O ( \frac{1}{\sqrt{n}}) \right). \end{equation*} (3) (Euler's constant approximation) For $n \ge 1$, \begin{equation*} \gamma- \frac{p_n}{q_n} = -2 \pi e^{- 2 \sqrt{2n}}\left( 1+ O (\frac{1}{\sqrt{n}})\right). \end{equation*} \end{theorem} \noindent The original result showed in place of (1) the weaker result that $q_n \in {\mathbb Z}$ and $D_n p_n \in {\mathbb Z}$ with $D_n= \mbox{l.c.m.} [1, 2, ..., n]$. The integrality result for $p_n$ follows from a later result of Tulyakov \cite{Tu09}, which finds a more complicated system of recurrences that these sequences satisfy, in terms of which the integrality of $p_n$ is manifest. These approximants clearly are of $H$-type. The results in Theorem~\ref{th3152} are based on formulas derived from a family of multiple orthogonal polynomials in the sense of Aptekarev, Branquinho and van Assche \cite{ABA03}. These lead to exact integral formulas \begin{equation*} q_n = \int_{0}^{\infty} Q_n(x) e^{-x}dx, \end{equation*} and \begin{equation}\label{862} p_n - \gamma q_n = \int_{0}^{\infty} Q_n(x) e^{-x}\log x \, dx, \end{equation} in which $Q_n(x)$ are the family of polynomials (of degree $2n$) given by \begin{equation*} Q_n(x) = \frac{1}{(n!)^2} \frac{e^x}{x-1} \left( \frac{d}{dx}\right)^n x^n \left( \frac{d}{dx}\right)^n (x-1)^{2n+1} x^n e^{-x}. \end{equation*} Note for $Q_0(x) = 1$ that \eqn{862} becomes the integral formula \eqn{225c} for $\Gamma'(1)$. Also in 2009 Rivoal \cite{Riv09} found an alternative construction of these approximants that makes use of Pad\'{e} acceleration methods for series similar to Euler's divergent series \eqn{554a}. His method applies more generally to numbers $\gamma + \log x$, where $x>0$ is rational. In 2010 Kh. and T. Hessami Pilehrood \cite[Corollary 4]{HP09} found closed forms for these approximants, \begin{equation*} q_n = \sum_{k=0}^n {\binom{n}{k}}^2 (n+k)!, \quad p_n = \sum_{k=0}^n {\binom{n}{k}}^2(n+k)! 
( H_{n+k} + 2 H_{n-k} - 2 H_k), \end{equation*} where the $H_k$ are harmonic numbers. These rational approximations $\frac{p_n}{q_n}$ thus are of a kind similar in appearance to Ap\'{e}ry's approximation sequence \eqn{3152b}, \eqn{3152c} to $\zeta(3)$, and also satisfy a third order recurrence. Recently Kh. and T. Hessami Pilehrood \cite[Theorem 1, Corollary 1]{HP13} constructed an elegant sequence of rational approximations converging to Euler's constant, also analogous to the Ap\'{e}ry approximations to $\zeta(3)$, with slightly better convergence properties than those given in Theorem \ref{th3152}, and having a surprising connection with certain approximations to the Euler-Gompertz constant. They set \begin{equation*} q_n = \sum_{k=0}^n {\binom{n}{k}}^2 k! , \quad p_n = \sum_{k=0}^n {\binom{n}{k}}^2 k! \,(2 H_{n-k} - H_k). \end{equation*} The sequence $q_n$ satisfies the homogeneous second order recurrence \begin{equation}\label{892b} q_{n+2} = 2(n+2) q_{n+1} - (n+1)^2 q_n, \end{equation} with initial conditions $q_0=1, q_1=2$. The sequence $p_n$ satisfies the inhomogeneous second order recurrence \begin{equation*} p_{n+2} = 2(n+2) p_{n+1} - (n+1)^2 p_n -\frac{n}{n+2}, \end{equation*} with initial conditions $p_0=0, p_1=1$, and it satisfies a homogeneous third order recurrence. \begin{theorem}\label{th3153} {\em (Kh. and T. Hessami Pilehrood 2013)} Let $(q_n)_{n \ge 0}, (p_n)_{n \ge 0}$ be defined as above. Then $q_n \in {\mathbb Z}$ and $D_n p_n \in {\mathbb Z}$, where $D_n= l.c.m. [1, 2, ..., n]$, and for all $n \ge 1,$ \begin{equation*} \gamma- \frac{p_n}{q_n} = -e^{-4 \sqrt{n}}\Big( 2\pi + O(\frac{1}{\sqrt{n}}) \Big). \end{equation*} Here the growth rate of the sequence $q_n$ is, for all $n \ge 1$, \begin{equation*} q_n = n! \frac{e^{2 \sqrt{n}}}{\sqrt[4]{n}}\Big(\frac{1}{2 \sqrt{\pi e}} + O(\frac{1}{\sqrt{n}}) \Big). \end{equation*} \end{theorem} These approximations converge to $\gamma$ from one side.
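Both the closed forms and the recurrence \eqn{892b} are straightforward to verify, together with the convergence to $\gamma$; a sketch in exact rational arithmetic (with illustrative helper names):

```python
from fractions import Fraction
from math import comb, factorial

def H(n):
    # harmonic number H_n as an exact rational, with H_0 = 0
    return sum(Fraction(1, k) for k in range(1, n + 1))

def q(n):
    return sum(comb(n, k)**2 * factorial(k) for k in range(n + 1))

def p(n):
    return sum(comb(n, k)**2 * factorial(k) * (2 * H(n - k) - H(k))
               for k in range(n + 1))

# matches the q_n column of Table tab3151
assert [q(n) for n in range(6)] == [1, 2, 7, 34, 209, 1546]

# the closed form satisfies the recurrence (892b) exactly
for n in range(10):
    assert q(n + 2) == 2 * (n + 2) * q(n + 1) - (n + 1)**2 * q(n)

# convergence to gamma; the error is of order exp(-4*sqrt(n))
assert abs(p(20) / q(20) - 0.5772156649015329) < 1e-6
```

At $n=20$ the error bound $e^{-4\sqrt{n}}\big(2\pi + O(\frac{1}{\sqrt{n}})\big)$ is of order $10^{-7}$, in line with the tolerance checked above.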
The sequence $D_n$ needed to clear the denominator of $p_n$ is well known to have growth rate $$ D_n = e^{n(1+o(1))}, $$ an asymptotic result that is equivalent in strength to the prime number theorem. The Euler-Gompertz constant $\delta$ also has a convergent series of rational approximations with {\em the same denominator sequence $q_n$} as the Euler constant approximations above. The new numerators $s_n$ satisfy the same recurrence \eqn{892b} as the denominators $q_n$, i.e. $$s_{n+2} = 2(n+2) s_{n+1} - (n+1)^2 s_n,$$ but with initial conditions $s_0=0, s_1=1$. The $s_n$ are integers, and are also given by the expression \begin{equation*} s_n = \sum_{k=1}^n a(k-1) (k+1){\binom{n}{k}} \frac{(n-1)!}{(k-1)!}, \end{equation*} in which the function values $a(m)$ are \begin{equation*} a(m) = \sum_{k=0}^m (-1)^k \, k!, \end{equation*} see \cite[Sequence A002793]{OEIS}. Here the $a(m)$ are the partial sums of Euler's divergent series discussed in Section \ref{sec25}. These approximations $\frac{s_n}{q_n}$ are exactly the convergents of the Laguerre continued fraction \eqn{249e} for the Euler-Gompertz constant. One has the following rate of convergence estimate for this sequence of rational approximations (\cite[Theorem 2]{HP13}). \begin{theorem}\label{th3154} {\em (Kh. and T. Hessami Pilehrood 2013)} Let $(q_n)_{n \ge 0}, (s_n)_{n \ge 0}$ be defined as above. Then $q_n , s_n \in {\mathbb Z}$, and for all $n \ge 1,$ the approximations $s_n/q_n$ to the Euler-Gompertz constant satisfy \begin{equation*} \delta- \frac{s_n}{q_n} = e^{-4 \sqrt{n}}\Big( 2\pi e + O(\frac{1}{\sqrt{n}}) \Big). \end{equation*} \end{theorem} The proof of Theorem~\ref{th3154} relates these approximations to values of a Mellin-Barnes type integral \begin{equation*} I_n = (n!)^2 \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i \infty} \frac{\Gamma(s-n)^2}{\Gamma(s+1)} ds, \end{equation*} where the vertical line of integration has $c>n$. 
The integral is expressed in terms of the Whittaker function $W_{\kappa, \nu}(z)$ (a confluent hypergeometric function). The proof displays some identities involving the Euler-Gompertz constant and Whittaker function values, namely \begin{equation*} \delta = \frac{ W_{-1/2, 0} (1)}{W_{1/2, 0}(1)} \end{equation*} using \begin{equation*} W_{-1/2, 0}(1) = \frac{\delta}{\sqrt{e}}, \quad\quad W_{1/2, 0}(1) = \frac{1}{\sqrt{e}}. \end{equation*} Table \ref{tab3151} presents data on the approximations $(q_n, p_n, s_n)$ for small $n$ providing good approximations to $\gamma$ and $\delta$. Recall that $p_n$ are rational numbers, while $q_n, D_n p_n, s_n$ are integers, with $D_n= {\rm l.c.m.} [1,2, ..., n]$. \begin{table}\centering \renewcommand{\arraystretch}{.85} \begin{tabular}{|r|r|r|r|r|} \hline \multicolumn{1}{|c|}{$n$} & \multicolumn{1}{c|}{$q_n$}& \multicolumn{1}{c|}{$D_np_n$}& \multicolumn{1}{c|}{$D_n$}& \multicolumn{1}{c|}{$s_n$}\\ \hline 0 & 1 &0 & 1 &0 \\ 1 & 2 & $1$ & 1& 1 \\ 2 & 7 & $ 8$ & 2&4 \\ 3 & 34& $ 118$ & 6 &20 \\ 4 & 209 & $1450$ &12 & 124\\ 5 & 1546 & $53584$ & 60 & 920 \\ 6 & 13327 & $461718$ &60 & 7940 \\ 7 & 130922 & $31744896$ & 420 &78040\\ 8 & 1441729 & $699097494$ & 840 &859580 \\ 9 & 17572114& $25561222652$ & 2520 &10477880\\ 10 & 234662231& $341343759982$ & 2520 &139931620 \\ \hline \end{tabular} \noindent \caption{Approximants $p_n/q_n$ and $s_n/q_n$ to $\gamma$ and $\delta$, with $D_n= {\rm l.c.m.} [1,2, ..., n]$.} \label{tab3151} \end{table} The approximations given in Theorems \ref{th3153} and \ref{th3154} certify that both $\gamma$ and $\delta$ are elementary $H$-periods, so that both $\gamma, \delta \in \sHP$. These approximations are of insufficient quality to imply the irrationality of either Euler's constant or of the Euler-Gompertz constant. It remains a challenge to find a sequence of Diophantine approximations to $\gamma$ (resp. $\delta$) that approach them sufficiently fast to certify irrationality. 
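The shared recurrence for $q_n$ and $s_n$, the values in Table \ref{tab3151}, and the convergence of $s_n/q_n$ to $\delta$ asserted in Theorem \ref{th3154} can all be checked directly. Below is a sketch (in Python; rather than quoting a decimal value of $\delta$, it recomputes $\delta$ independently from Hardy's integral $\int_0^\infty e^{-t}/(1+t)\,dt$ by Simpson's rule, with truncation point and step size chosen by us).

```python
# Sketch: the approximations s_n/q_n to the Euler-Gompertz constant delta.
from math import exp

def seq(x0, x1, N):
    # shared recurrence x_{n+2} = 2(n+2) x_{n+1} - (n+1)^2 x_n, up to index N
    xs = [x0, x1]
    for n in range(N - 1):
        xs.append(2 * (n + 2) * xs[-1] - (n + 1) ** 2 * xs[-2])
    return xs

N = 50
q = seq(1, 2, N)   # q_0 = 1, q_1 = 2
s = seq(0, 1, N)   # s_0 = 0, s_1 = 1
assert q[:7] == [1, 2, 7, 34, 209, 1546, 13327]   # matches the q_n column
assert s[:7] == [0, 1, 4, 20, 124, 920, 7940]     # matches the s_n column

def simpson(f, a, b, m):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# delta = int_0^infty e^{-t}/(1+t) dt; the tail beyond t = 40 is below e^{-40}
delta = simpson(lambda t: exp(-t) / (1 + t), 0.0, 40.0, 4000)
assert abs(s[N] / q[N] - delta) < 1e-6
```

At $n=50$ the theorem predicts an error of order $2\pi e\,e^{-4\sqrt{50}} \approx 10^{-11}$, so the dominant error in the final comparison is the numerical quadrature, not the rational approximation.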
One may ask whether $\gamma$ has Diophantine approximations of $G$-type. This is unknown. The current expectation is that this is not possible, i.e. that $\gamma \not\in \sGP$. Based on the observation of Rivoal and Fischler that $\sGP= \mbox{Frac}({\bf G})$, and the belief (\cite[Sect. 2.2]{FR13}) that the ring ${\bf G}$ may coincide with the Kontsevich-Zagier ring $\hat{{\mathcal P}}$ of extended periods, this expectation amounts to a stronger form of Conjecture \ref{conj2}. We conclude this section with recent results concerning $G$-type and $H$-type approximations to other constants considered by Euler. In 2010 Rivoal \cite{Riv10b} showed that the periods $\Gamma(\frac{k}{n})^n$, where $\frac{k}{n} >0$, can be approximated by sequences $\frac{p_n}{q_n}$ of rational numbers of $G$-type, so are elementary $G$-periods. In another paper Rivoal \cite{Riv10a} finds sequences of rational approximations showing that the individual numbers $\Gamma(\frac{k}{n})$, which are not known to be periods for $0< k<n$, are elementary $H$-periods. Finally we note that various authors have suggested other approaches to establishing the irrationality of Euler's constant, e.g. Sondow \cite{Son03}, \cite{Son09}, and Sondow and Zudilin \cite{SZu06}. \subsection{Transcendence results related to Euler's constant}\label{sec312} \setcounter{equation}{0} The first transcendence results for numbers involving Euler's constant came from the breakthrough of Shidlovskii \cite{Shi59}, \cite{Shi62} in the 1950's and 1960's on transcendental values of $E$-functions, which is detailed in his book \cite{Sh89}. The class of {\em $E$-functions} was introduced in 1929 by Siegel \cite[p. 223]{Sie29}, consisting of those analytic functions $F(z) = \sum_{n=0}^{\infty} c_n \frac{z^n}{n!}$ whose power series expansions have the following properties: \begin{enumerate} \item The coefficients $c_n$ belong to a fixed algebraic number field $K$ (a finite extension of ${\mathbb Q}$). 
For each $\epsilon>0$, the maximum of the absolute values of the algebraic conjugates of $c_n$ is bounded by $O(n^{n\epsilon})$ as $n \to \infty$. \item For each $\epsilon >0$ there is a sequence of integers $q_0, q_1, q_2, \ldots$ such that each $q_n c_n$ is an algebraic integer, with $q_n = O( n^{n \epsilon})$ as $n \to \infty$. \item The function $F(z)$ satisfies a (nontrivial) linear differential equation $$ \sum_{j=0}^n R_j(z) \frac{d^j}{dz^j} y(z) = 0, $$ whose coefficients are polynomials $R_j(z) \in K_1[z]$, where $K_1$ is some algebraic number field.\footnote{This condition on $K_1$ could be weakened to require only that the coefficients are in ${\mathbb C}[z]$.} \end{enumerate} Each $E$-function is an entire function of the complex variable $z$. The set of all $E$-functions forms a ring ${\bf E}$ (under $(+, \times)$) which is also closed under differentiation, and this ring is an algebra over $\bar{{\mathbb Q}}.$ We also define a subclass of $E$-functions called {\em $E^{*}$-functions} in parallel with the class of $G$-functions, following Shidlovskii \cite[Chap. 13]{Sh89}. An analytic function $F(z)= \sum_{n=0}^{\infty} c_n \frac{z^n}{n!}$ is an {\em $E^{*}$-function} if it has the following properties: \begin{enumerate} \item $F(z)$ is an $E$-function. \item There is some constant $C>0$ such that the maximum of the absolute values of the algebraic conjugates of $c_n$ is bounded by $O(C^n)$ as $n \to \infty$. \item There is some constant $C' >0$ and a sequence of integers $q_0, q_1, q_2, \ldots$ such that each $q_n c_n$ is an algebraic integer, with $q_n = O((C')^{n})$ as $n \to \infty$. \end{enumerate} This definition narrows the growth rate conditions on the coefficients compared to Siegel's conditions. The set of all $E^{*}$-functions forms a ring ${{\bf E}}^{*}$ closed under differentiation which is also an algebra over $\bar{{\mathbb Q}}$. 
Certainly ${\bf E}^{*} \subseteq {\bf E}$, and it is conjectured that the two rings are equal (\cite[p. 407]{Sh89}). Results of Andr\'{e} (\cite{And00a}, \cite{And00b}) apply to the class ${\bf E}^{\ast}$. Siegel \cite[Teil I]{Sie29} originally applied his method to prove transcendence of the values $J_0(\alpha)$ of the Bessel function at algebraic $\alpha$ where $J_0(\alpha) \ne 0$. In formalizing his method in 1949, Siegel \cite{Sie49} first converted an $n$-th order differential operator to a system of first order linear differential equations $$ \frac{dy_i}{dx} = \sum_{j=1}^n R_{ij}(x) y_j, ~~~1 \le i \le n, $$ where the functions $R_{ij}(x)$ are rational functions. In order for his proofs to apply, he required this system to satisfy an extra condition which he called {\em normal.} Siegel was able to verify normality in a few cases, including the exponential function and certain Bessel functions. A great breakthrough of Shidlovskii \cite{Shi59} in 1959 permitted the method to prove transcendence of solution values for all cases where the obvious necessary conditions are satisfied. The Siegel-Shidlovskii theorem states that if the $E_i(x)$ are algebraically independent functions over the rational function field $K(x)$, then at any non-zero algebraic number $\alpha$ that is not a pole of any function $R_{ij}(x)$, the values $E_i(\alpha)$ are algebraically independent. For treatments of these results see Shidlovskii \cite{Sh89} and Feldman and Nesterenko \cite[Chap. 5]{FN98}. More recent progress on $E$-functions includes a study in 1988 of the Siegel normality condition by Beukers, Brownawell and Heckman \cite{BBH88} which obtains effective measures of algebraic independence in some cases. 
In 2000 Andr\'{e} \cite[Theorem 4.3]{And00a} (see also \cite{And03}) established that for each $E^{*}$-function the minimal order linear differential equation with ${\mathbb C}(z)$ coefficients that it satisfies has singular points only at $z= 0$ and $z=\infty$; that is, it has a basis of $n$ independent holomorphic solutions at all other points. Furthermore $z=0$ is necessarily a regular singular point and $\infty$ is an irregular singular point of a restricted type. This result goes a long way towards explaining why all the known $E^{*}$-functions are of hypergeometric type. Andr\'{e} \cite{And00b} applied this characterization to give a new proof of the Siegel-Shidlovskii theorem. Finally we remark that although transcendence results are usually associated to $E$-functions and irrationality results to $G$-functions, in 1996 Andr\'{e} \cite{And96} obtained transcendence results for certain $G$-function values. Returning to Euler's constant, in 1968 Mahler \cite{Mah68} applied Shidlovskii's methods to certain Bessel functions. For example $$J_0(z) = \sum_{n=0}^{\infty} \frac{(-1)^n}{ (n!)^2 } (\frac{z}{2})^{2n}$$ is a Bessel function of the first kind, and \begin{equation} \label{bessel3} Y_0(z) = \frac{2}{\pi} \Big(\log (\frac{z}{2} )+\gamma\Big) J_0(z) + \frac{2}{\pi}\Big( \sum_{n=1}^{\infty} (-1)^{n-1} \frac{H_n}{(n!)^2} (\frac{z^2}{4})^n\Big) \end{equation} is a Bessel function of the second kind, see \cite[(9.1.13)]{AS}. Here $J_0(z)$ is an $E^{*}$-function, but $Y_0(z)$, which contains Euler's constant in its expansion \eqn{bessel3}, is not, because it has a logarithmic singularity at $z=0$. However the modified function $$ \tilde{Y}_0(z) := \frac{\pi}{2} Y_0(z) - \Big(\log (\frac{z}{2}) + \gamma\Big) J_0(z)= \sum_{n=1}^{\infty} (-1)^{n-1} \frac{H_n}{(n!)^2} (\frac{z^2}{4})^n $$ is an $E^{*}$-function, and Mahler uses this fact. He noted the following special case of his general results. 
\begin{theorem}\label{th100a} {\em (Mahler 1968)} The number \begin{equation*} \frac{\pi}{2} \frac{Y_0(2)}{J_0(2)} - \gamma \end{equation*} is transcendental. \end{theorem} We have $J_0(2) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{ (n!)^2}$ and $\frac{\pi}{2} Y_0(2) =\gamma J_0(2)+ \sum_{n=1}^{\infty} (-1)^{n-1} \frac{H_n}{(n!)^2},$ from which it follows that Mahler's transcendental number is \begin{equation*} \frac{\pi}{2} \frac{Y_0(2)}{J_0(2)} - \gamma = \frac{ \sum_{n=1}^{\infty} \frac{(-1)^{n-1} H_n}{(n!)^2} }{\sum_{n=0}^{\infty}\frac{(-1)^n}{ (n!)^2}}. \end{equation*} More recent transcendence results apply to Euler's constant along with other constants. In 2012 Rivoal \cite{Riv10c} proved a result which, as a special case, implies the transcendence of at least one of the Euler constant $\gamma$ and the Euler-Gompertz constant ${\delta}$. It uses the Shidlovskii method and improves on an observation of Aptekarev \cite{Apt09} that at least one of $\gamma$ and ${\delta}$ must be irrational. This transcendence of at least one of $\gamma$ and $\delta$ was also established about the same time by Kh. and T. Hessami Pilehrood \cite[Corollary 3]{HP13}. \begin{theorem}\label{th102} {\em (Rivoal 2012)} Euler's constant $\gamma$ and the Euler-Gompertz constant $$ {\delta} := \int_{0}^{\infty} \frac{e^{-w}}{1+w} \, dw = \int_{0}^1 \frac{dv}{1- \log v} $$ together have the following properties: (1) (Simultaneous Diophantine Approximation) One cannot approximate the pair $(\gamma, {\delta})$ very well with rationals $(\frac{p}{q}, \frac{r}{q})$ having the same denominator. For each $\epsilon>0$ there is a constant $C(\epsilon)>0$ such that for all integers $p, q, r$ with $q \ne0$, there holds \begin{equation*} |\gamma - \frac{p}{q}| + |{\delta}- \frac{r}{q}| \ge \frac{ C(\epsilon)} {H^{3+ \epsilon}}, \end{equation*} where $H= \max(|p|, |q|, |r|)$. 
(2) (Transcendence) The transcendence degree of the field ${\mathbb Q}(e, \gamma, \delta)$ generated by $e , \gamma$ and $\delta$ over the rational numbers ${\mathbb Q}$ is at least two. \end{theorem} \begin{proof} This follows from a much more general result of Rivoal's \cite[Theorem 1]{Riv10c}, which concerns values of the function $$ {\mathcal G}_{\alpha}(z) := z^{-\alpha} \int_{0}^{\infty} (t+z)^{\alpha -1} e^{-t} dt. $$ Here we specialize to the case $\alpha=0$, and on taking $z=1$ we have \begin{equation*} {\mathcal G}_0(1) = \int_{0}^{\infty} \frac{e^{-t}}{t+1} dt = {\delta}, \end{equation*} using Hardy's integral \eqn{560aa}. The function ${\mathcal G}_0(z)$ satisfies a linear differential equation, but is not an $E$-function in the sense of transcendence theory. Rivoal makes use of the identity \begin{equation}\label{922} \gamma + \log z = -e^{-z} {\mathcal G}_0(z)-{\mathcal E}(-z), \end{equation} valid for $z \in {\mathbb C} \smallsetminus {\mathbb R}_{\le 0}$, where the entire function $$ {\mathcal E}(z) := \sum_{n=1}^{\infty} \frac{z^n}{n \cdot n! }, $$ is an $E^{*}$-function. Result (1) follows from the assertion of Theorem 1 (ii) of Rivoal \cite{Riv10c}, specialized to parameter values $\alpha=0$ and $z=1$. This result uses explicit Hermite-Pad\'{e} approximants to the $E^{*}$-functions $1, e^z, \mathcal{E}_{\alpha}(z)$, where $\mathcal{E}_{\alpha}(z) = \sum_{m=0}^{\infty} \frac{z^m}{m!(m+\alpha +1)}$, with $\alpha \ne -1, -2, -3, ...$. Here $\mathcal{E}(z)$ corresponds to $\alpha=-1$ with the divergent constant term dropped from the power series expansion. Result (2) follows from the assertion of Theorem 2 (ii) of Rivoal \cite{Riv10c}, which asserts for (complex) algebraic numbers $z \not\in (-\infty, 0]$ that the field generated over ${\mathbb Q}$ by the three numbers $e^z, \, \gamma+ \log z, \, {\mathcal G}_0(z) $ has transcendence degree at least two. 
It makes use of \eqn{922}, which can be rewritten as the integral identity \begin{equation*} -\gamma= \log z + z \int_{0}^1 e^{-tz} \log t \, dt + e^{-z} \int_{0}^{\infty} \frac{e^{-t}}{t+z} \, dt. \end{equation*} It uses the specialization $z=1$, which essentially gives Hardy's identity \eqn{560b} divided by $e$; we may rewrite it as \begin{equation*} -{\mathcal E}(-1)= \gamma + \frac{\delta}{e}. \end{equation*} The result then follows using Shidlovskii's Second Fundamental Theorem \cite[p. 123]{Sh89}. \end{proof} Rivoal \cite{Riv10c} remarks that the transcendence result Theorem \ref{th102} (2) is implicit in the approach that Mahler \cite{Mah68} formulated in 1968. At the end of his paper Mahler \cite[p. 173]{Mah68} states without details that his method extends to show that for rational $\nu_0>0$ and nonzero algebraic numbers $\alpha$, the integrals $$ I_k(\nu_0, \alpha) := \int_{0}^{1} x^{\nu_0 -1} (\log x)^k e^{-\alpha x} dx, ~~~k=0, 1,2, ... $$ are algebraically independent transcendental numbers. Choosing $\nu_0=1, \alpha=1$ and $k=0,1$, we have $$I_0(1, 1) := \int_{0}^{1} e^{-x} dx= 1- \frac{1}{e},$$ and, using \eqn{559f}, $$ I_1(1,1)= \int_{0}^1 (\log x)\, e^{-x} dx = \int_{0}^{\infty}(\log x)\, e^{-x} dx- \frac{1}{e} \int_0^{\infty} \log (x+1) e^{-x} dx= -\gamma -\frac{\delta}{e}. $$ Mahler's statement asserts that these are algebraically independent transcendental numbers. This assertion implies that ${\mathbb Q} (\gamma, \delta, e)$ has transcendence degree at least two over ${\mathbb Q}$. We next present recent results which concern the transcendence of collections of generalized Euler constants discussed in Section \ref{sec37}. These results are applications of Baker's results on linear forms in logarithms, and allow the possibility of one exceptional algebraic value. 
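The identity $-{\mathcal E}(-1) = \gamma + \delta/e$ used in the proof above lends itself to a quick numerical sanity check. The sketch below (Python; the decimal value of $\gamma$ is supplied by us as an assumption, and $\delta$ is recomputed from Hardy's integral by Simpson's rule) evaluates both sides, and also evaluates Mahler's number $\frac{\pi}{2}\frac{Y_0(2)}{J_0(2)} - \gamma$ of Theorem \ref{th100a} from its two rapidly convergent series.

```python
# Sketch: numerically verify -E(-1) = gamma + delta/e, and evaluate
# Mahler's transcendental number (pi/2) Y_0(2)/J_0(2) - gamma.
# Assumption: the decimal value of gamma below is our input, not from the text.
from math import e, exp
from fractions import Fraction

EULER_GAMMA = 0.5772156649015329

def script_E_at_minus_one(terms=25):
    # E(-1) = sum_{n>=1} (-1)^n / (n * n!), summed exactly then rounded
    total, fact = Fraction(0), 1
    for n in range(1, terms + 1):
        fact *= n
        total += Fraction((-1) ** n, n * fact)
    return float(total)

def simpson(f, a, b, m):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# delta = int_0^infty e^{-t}/(1+t) dt (tail beyond t = 40 is negligible)
delta = simpson(lambda t: exp(-t) / (1 + t), 0.0, 40.0, 4000)
assert abs(-script_E_at_minus_one() - (EULER_GAMMA + delta / e)) < 1e-6

def mahler_number(terms=25):
    # ratio of the two series for (pi/2) Y_0(2)/J_0(2) - gamma;
    # both converge factorially fast
    num, den, fact2, H = 0.0, 1.0, 1.0, 0.0
    for n in range(1, terms + 1):
        fact2 *= n * n          # (n!)^2
        H += 1.0 / n            # harmonic number H_n
        num += (-1) ** (n - 1) * H / fact2
        den += (-1) ** n / fact2
    return num / den

print(mahler_number())
```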
Recall that the {\em Euler-Lehmer constants}, for $0 \le h < k$ studied by Lehmer \cite{Leh75}, are defined by \[ \gamma(h, k) := \lim_{x \to \infty} \Big(\sum_{\substack{0 \le n < x\\ n \equiv h~ (\bmod k)}} \frac{1}{n} - \frac{\log x}{k} \Big). \] In 2010 M. R. Murty and N. Saradha \cite[Theorem 1]{MS10} obtained the following result. \begin{theorem}\label{th3162} {\em (Murty and Saradha 2010)} In the infinite list of Euler-Lehmer constants \[ \{ \gamma(h, k) : 1 \le h < k,~~ \mbox{for all}~~ k \ge 2\}, \] at most one value is an algebraic number. If Euler's constant $\gamma$ is an algebraic number, then only the number $\gamma(2,4) = \frac{1}{4} \gamma$ in the above list is algebraic. \end{theorem} Their proof uses the fact that a large set of linear combinations of $\gamma(h,k)$ are equal to logarithms of integers. Then Baker's bounds for linear forms in logarithms of algebraic numbers are applied to derive a contradiction from the assumption that the set contains at least two algebraic numbers. An immediate consequence of Theorem \ref{th3162} is the fact that over all rational numbers $0< x\le 1$ at least one of $\Gamma(x), \Gamma'(x)$ is transcendental, with at most one possible exceptional $x$ (\cite[Corollary]{MS10}); see also Murty and Saradha \cite{MS07}. In a related direction Murty and Zaytseva \cite[Theorem 4]{MZ13} obtained a corresponding transcendence result for the generalized Euler constants $\gamma(\Omega)$ associated to a finite set $\Omega$ of primes that were introduced by Diamond and Ford \cite{DF08}. Recall from Section \ref{sec37} that these constants are obtained by setting $\zeta_{\Omega}(s) = \prod_{p \in \Omega} \Big(1- p^{-s} \Big) \zeta(s)$ and defining $\gamma(\Omega)$ as the constant term in its Laurent expansion around $s=1$, writing \[ \zeta_{\Omega} (s) = \frac{D(\Omega)}{s-1} + \gamma(\Omega) + \sum_{n=1}^{\infty} \gamma_n(\Omega) (s-1)^n. \] In particular $\gamma(\emptyset) = \gamma$. 
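Lehmer's evaluation $\gamma(2,4) = \frac{1}{4}\gamma$ quoted in Theorem \ref{th3162} can be observed numerically from the defining limit. A sketch in Python (the truncation point and the decimal value of $\gamma$ used for comparison are our choices; the defining limit converges only at rate $O(1/x)$, so agreement is to a few decimal places):

```python
# Sketch: numerically observe Lehmer's evaluation gamma(2,4) = gamma/4.
from math import log

def euler_lehmer(h, k, x):
    # partial sum of the defining limit: sum over 0 < n < x with n = h (mod k),
    # minus (log x)/k
    start = h if h > 0 else k
    return sum(1.0 / n for n in range(start, x, k)) - log(x) / k

EULER_GAMMA = 0.5772156649015329    # known decimal value (our input)
val = euler_lehmer(2, 4, 10**6)
print(val)                          # close to gamma/4 = 0.14430...
assert abs(val - EULER_GAMMA / 4) < 1e-4
```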
\begin{theorem}\label{th3163} {\em (Murty and Zaytseva 2013)} In the infinite list of Diamond-Ford generalized Euler constants $ \gamma(\Omega)$ where $ \Omega$ runs over all finite subsets of primes (including the empty set), all numbers are transcendental with at most one exception. \end{theorem} The constants $\gamma(\Omega)$ can be expressed as finite linear combinations of Euler-Lehmer numbers, cf. \eqn{604aa}, so this result is closely related to the previous theorem. This result is also proved by contradiction, using Baker's results on linear forms in logarithms. If $\gamma$ were algebraic then $\gamma(\emptyset)$ would be the unique exceptional value. However one expects there to be no algebraic value in the list. \renewcommand{\arabic{section}.\arabic{equation}}{\arabic{section}.\arabic{equation}} \section{Concluding remarks}\label{sec4} \setcounter{equation}{0} Euler's constant appears {\em sui generis}. Despite three centuries of effort, it seems unrelated to other constants in any simple way. A first consequence arising from this is: {\em the appearance of Euler's constant in ostensibly unrelated fields of mathematics may signal some hidden relationship between these fields.} A striking example is the analogy between results concerning the structure of factorizations of random integers and cycle structure of random permutations. These results are given for restricted factorizations in Sections \ref{sec34} and \ref{sec35} and for restricted cycle structure of permutations in Sections \ref{sec38} and \ref{sec38a}, respectively. Results in these two fields were initially proved separately, and were later unified in limiting probability models developed by Arratia, Barbour and Tavar\'{e} \cite{ABT97}. These models account not only for the main parts of the probability distribution but also for tail distributions, where there are large deviations results in which Euler's constant appears. 
On a technical level an explanation for the parallel results in the two subjects was offered by Panario and Richmond \cite{PR01} at the level of asymptotic analysis of generating functions of a particular form. This research development is an example of one of Euler's research methods detailed in Section \ref{sec26}, that of looking for a general scheme uniting two special problems. In a second direction, the study of Euler's constant and multiple zeta values can now be viewed in a larger context of rings of periods connected with algebraic integrals. The {\em sui generis} nature of $\gamma$ is sharply formulated in the conjecture that it is not a Kontsevich-Zagier period. On the other hand, it is known to belong to the larger ring of exponential periods $\sEP$, as described in Section \ref{sec310}. This research development illustrates another of Euler's research methods, to keep searching for a fuller understanding of any problem already solved. In a third direction, one may ask for an explanation of the appearance of Euler's constant in various sieve method formulas. Here Friedlander and Iwaniec \cite[p. 239]{FI10} comment that these appearances stem from two distinct sources. The first is through Mertens's product theorem (Theorem \ref{th51}). The second is via differential-difference equations of the types treated in Sections \ref{sec34} and \ref{sec35}. In a fourth direction, there remain mysteries about the appearance of Euler's constant in number theory in the gamma and zeta functions. {\bf Observation.} {\em Euler's constant $\gamma$ appears in the Laurent expansion of the Riemann zeta function at $s=1$; here the zeta function is a function encoding the properties of the finite primes. 
Euler's constant also appears in the Taylor expansion of the gamma function at $s=1$; here the gamma function encodes properties of the (so-called) ``infinite prime'' or ``real prime'' (Archimedean place).} The analogy between the ``real prime'' and the finite primes traces back to Ostrowski's work in the period 1913-1917 which classified all absolute values on ${\mathbb Q}$ (Ostrowski's Theorem). Such analogies have been pursued for a century; a comprehensive treatment of such analogies is given in Haran \cite{Har01}. A mild mystery is to give a conceptual explanation for this occurrence of Euler's constant in two apparently different contexts, one associated to the finite primes, and the other associated to the real prime (Archimedean place). There is a known connection between these two contexts, given by the product formula for rational numbers over all places, finite and infinite. It says \[ \prod_{p \le \infty} |r|_{p} = 1, \qquad r \in {\mathbb Q}, \ r \ne 0, \] in which $|\cdot|_p$ denotes the $p$-adic valuation, normalized with $|p|_p = \frac{1}{p}$ and $|x| _{\infty}$ being the real absolute value. Does this relation explain the common appearance? On closer examination there appears to be a mismatch between these two occurrences, visible in the functional equation of the Riemann zeta function. The functional equation can be written \begin{equation*} \hat{\zeta}(s) = \hat{\zeta}(1-s), \end{equation*} in which appears the {\em completed zeta function} \begin{equation*} \hat{\zeta}(s) := \pi^{-\frac{s}{2}} \Gamma(\frac{s}{2}) \zeta(s)= \pi^{-\frac{s}{2}} \Gamma(\frac{s}{2}) \prod_{p}\left( 1- \frac{1}{p^s} \right)^{-1}. \end{equation*} The completed zeta function is given as an Euler product over all ``primes'', both the finite primes and the Archimedean Euler factor $\pi^{-\frac{s}{2}} \Gamma(\frac{s}{2})$. 
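The product formula is easy to verify computationally for any particular nonzero rational: only finitely many of the factors $|r|_p$ differ from $1$. A sketch in Python with exact rational arithmetic, so that the product comes out exactly $1$:

```python
# Sketch: verify the product formula  prod_{p <= infty} |r|_p = 1
# for a nonzero rational r, using exact arithmetic.
from fractions import Fraction

def prime_factors(m):
    # set of primes dividing |m| (trial division; fine for small inputs)
    m, ps, d = abs(m), set(), 2
    while d * d <= m:
        while m % d == 0:
            ps.add(d)
            m //= d
        d += 1
    if m > 1:
        ps.add(m)
    return ps

def padic_abs(r, p):
    # |r|_p = p^{-v}, where v is the exponent of p in r; normalized |p|_p = 1/p
    num, den, v = r.numerator, r.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

def adelic_product(r):
    prod = abs(r)                     # the Archimedean factor |r|_infty
    for p in prime_factors(r.numerator) | prime_factors(r.denominator):
        prod *= padic_abs(r, p)       # all other |r|_p equal 1
    return prod

assert adelic_product(Fraction(-140, 297)) == 1
```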
In this formula evaluation at the point $s=1$ for the Riemann zeta function corresponds to evaluation of the gamma function at the shifted point $s' = \frac{s}{2}= \frac{1}{2}$. The mismatch is that for the gamma function, Euler's constant most naturally appears in connection with its derivative at the point $s'=1$, rather than at the point $s'= \frac{1}{2}$, cf.\ \eqn{201}. Is there a conceptual explanation of this mismatch? Theorem~\ref{th30} associates Euler's constant with the digamma function at $s=\frac{1}{2}$, but in that case an extra factor of $\log 2$ appears. Perhaps this mismatch can be resolved by considering an asymmetric functional equation for the function $\Gamma(s) \zeta(s)$. Here one has, valid for $\text{Re}(s)>1$, the integral formula \[ \Gamma(s) \zeta(s)= \int_{0}^{\infty} \frac{1}{e^x -1} x^{s-1} dx. \] One of Riemann's proofs of the functional equation for $\zeta(s)$ is based on this integral. Another mystery concerns whether there may exist an interesting arithmetical dynamical interpretation of Euler's constant, different from that given in Section \ref{sec39}. The logarithmic derivative of the archimedean factor $\pi^{-\frac{s}{2}} \Gamma(\frac{s}{2})$ at the value $s=2$ (that is, $s' =1$) produces the value $-g/2$ where \begin{equation*} g:= \gamma + \log \pi. \end{equation*} The constant $g$ appears elsewhere in the ``explicit formulas'' of prime number theory, as a contribution at the (special) point $x=1$ in the Fourier test function space, cf.\ \cite[p. 280]{BL99}. Perhaps one should search for a dynamical system whose entropy is the constant $-g = -\gamma - \log \pi \approx -1.721945$. In a fifth direction, Euler's constant can be viewed as a value emerging from regularizing the zeta function at $s=1$, identified with the constant term in its Laurent series expansion at $s=1$. 
There are a number of different proposals for ``renormalizing'' multiple zeta values, including the value of ``$\zeta(1)$'', with the intent to preserve other algebraic structures. This is an active area of current research. Various alternatives are discussed in Ihara, Kaneko and Zagier \cite{IKZ06}, Guo and Zhang \cite{GZ08}, \cite{GZ08b}, Guo, Paycha, Xie and Zhang \cite{GPXZ09} and Manchon and Paycha \cite{MP10}. Some of these authors (for example \cite{IKZ06}) prefer to assign the ``renormalized'' value $0$ to ``$\zeta(1)$'', to be obtained by further subtracting off Euler's constant. This direction generalizes to algebraic number fields. For background we refer to Neukirch \cite[Chap. VII. Section 5]{Neu99}. The Dedekind zeta function $\zeta_K(s)$ of a number field $K$ is given by $$ \zeta_K(s) = \sum_{A} \frac{1}{N_{K/{\mathbb Q}}(A)^{s}} $$ where $A$ runs over all the ideals in the ring of integers $O_K$ of the number field, and $N_{K/{\mathbb Q}}$ is the norm function. This function analytically continues to a meromorphic function on ${\mathbb C}$ which has a simple pole at $s=1$, with Laurent expansion \begin{equation} \label{dedz} \zeta_K(s) = \frac{ \alpha_{-1}(K)}{s-1} + \alpha_0(K) + \sum_{j=1}^{\infty} \alpha_j(K)(s-1)^j. \end{equation} The term $\alpha_{-1}(K)$ encodes arithmetic information about the field $K$, given in the {\em analytic class number formula} $$ {\rm Res}_{s=1}(\zeta_K(s)) := \alpha_{-1}(K) = \frac{h_KR_K}{e^{g_K}}, $$ in which $h_K$ is the (narrow) class number of $K$, $R_K$ is the regulator of $K$ (the log-covolume of the unit group of $K$) and $g_K$ is the genus of $K$, given by $$ g_K := \log \frac{w_K |d_K|^{1/2}}{2^{r_1} (2\pi)^{r_2}}, $$ in which $w_K$ is the number of roots of unity in $K$, $d_K$ is the absolute discriminant of $K$, $r_1$ and $r_2$ denote the number of real (resp. complex) Galois conjugate fields to $K$, inside $\bar{{\mathbb Q}}$. 
The constant term $\alpha_0(K)$ is then an analogue of Euler's constant for the number field $K$, since $\alpha_0({\mathbb Q}) = \gamma$. A refined notion studies the constant term of a zeta function attached to an individual ideal class ${\mathcal C}$ of a number field $K$. Such a partial zeta function has the form $$ \zeta({\mathcal C},s) := \sum _{A \in {\mathcal C}} \frac{1}{N_{K/{\mathbb Q}}( A)^{s}} $$ where the sum runs over all integral ideals $A$ of the number field belonging to the ideal class ${\mathcal C}$. The Dedekind zeta function is the sum of ideal class zeta functions over all $h_K$ ideal classes. These partial zeta functions $\zeta({\mathcal C}, s)$ extend to meromorphic functions of $s$ on ${\mathbb C}$ whose only singularity is a simple pole at $s=1$, having a residue that is independent of ${\mathcal C}$, i.e. $\alpha_{-1}({\mathcal C}, K) =\frac{1}{h_K} \alpha_{-1}(K)$ (\cite[Theorem VII (5.9)]{Neu99}). For imaginary quadratic fields the constant term $\alpha_0({\mathcal C}, K)$ in the Laurent expansion at $s=1$ of an ideal class zeta function can be expressed in terms of a modular form using the first Kronecker limit formula; cf. Kronecker \cite{Kron}. This limit formula concerns the real-analytic Eisenstein series, for $\tau=x+iy$ with $y >0$ and $s \in {\mathbb C}$ $$ E(\tau, s) = \sum_{(m,n) \ne (0,0)} \frac{y^s}{|m \tau + n|^{2s}}, $$ which for fixed $\tau$ analytically continues to a meromorphic function of $s \in {\mathbb C}$ and has a simple pole at $s=1$ with residue $\pi$ and Laurent expansion $$ E(\tau, s) = \frac{\pi}{s-1} + c_0(\tau) + \sum_{n=1}^{\infty} c_n(\tau) (s-1)^n. 
$$ The first Kronecker limit formula expresses the constant term $c_0(\tau)$ as \begin{equation*} c_0(\tau)= 2 \pi \Big(\gamma - \log 2 - \log (\sqrt{y}\, |\eta(\tau)|^2)\Big), \end{equation*} in which $\eta(\tau)$ is the Dedekind eta function, a modular form given by $$ \eta(\tau) = q^{1/24} \prod_{n=1}^{\infty} (1- q^n),~~~~~~\mbox{with} ~~~q= e^{2 \pi i \tau}, $$ see \cite[Sect. 2]{Zag75}. In 1923 Herglotz \cite{Her23ab} found an analogous formula for the constant term in the Laurent series expansion at $s=1$ of $\zeta({\mathcal C}, s)$ for an ideal class ${\mathcal C}$ of a real quadratic field, which involved Dedekind sums and an auxiliary transcendental function. In 1975 Zagier \cite{Zag75} obtained another formula for this case which is expressed using continued fractions and an auxiliary function. In 2006 Ihara \cite{Ih06} studied another quantity extracted from the Laurent expansion \eqn{dedz} of the Dedekind zeta function of a number field at $s=1$, given by \begin{equation*} \gamma(K) := \frac{\alpha_0(K)}{\alpha_{-1}(K)}, \end{equation*} which he termed the {\em Euler-Kronecker constant} of the number field $K$. Here $\gamma(K)$ represents a scaled form of the constant term, and it generalizes Euler's constant since $\gamma({\mathbb Q}) = \gamma.$ Motivated by the function field case, Ihara hopes that $\gamma(K)$ encodes interesting information of a different type about the field $K$, especially as $K$ varies. Under the assumption of the Generalized Riemann Hypothesis, Ihara obtained upper and lower bounds $$ - c_1 \log D_K \le \gamma(K) \le c_2 \log\log D_K, $$ where $D_K$ is the (absolute value of the) discriminant of $K$ and $c_1, c_2>0$ are absolute constants. Further unconditional upper bounds on $\gamma(K)$ were obtained by Murty \cite{Mur11} and lower bounds were given by Badzyan \cite{Bad10}. Finally, ongoing attempts to place Euler's constant in a larger context have continued in many directions. 
These include studies of new special functions involving Euler's constant, such as the {\em generalized Euler-constant function} $\gamma(z)$ proposed by Sondow and Hadjicostas \cite{SH07}. Two other striking formulas, found in 2005 by Sondow \cite{Son05}, are the double integrals $$ \int_{0}^1 \int_{0}^1 \frac{1-x}{(1-xy)(-\log xy)} dx dy = \gamma $$ and $$ \int_{0}^1 \int_{0}^1 \frac{1-x}{(1+xy)(-\log xy)} dx dy = \log \frac{4}{\pi}. $$ Guillera and Sondow \cite{GS08} then found a common integral generalization with a complex parameter that unifies these formulas with double integrals of Beukers \cite{Beu79} for $\zeta(2)$ and $\zeta(3)$, see \eqn{833a}. Many other generalizations and variants of Euler's constant have been proposed. These include a $p$-adic analogue of Euler's constant introduced by J. Diamond \cite{Di77} in 1977 (see also Koblitz \cite{Kob78} and Cohen \cite[Sect. 11.5]{Coh07b}) and $q$-analogues of Euler's constant formulated by Kurokawa and Wakayama \cite{KW04} in 2004. A topic of perennial interest is the search for new expansions and representations for Euler's constant. For example, 1917 work of Ramanujan \cite{Ram17} was extended in 1994 by Brent \cite{Bre94}. Some new expansions involve accelerated convergence of known expansions, e.g. Elsner \cite{Els95}, Kh. and T. Hessami Pilehrood \cite{HP07}, and Pr\'{e}vost \cite{Pr08}. Recent developments concerning the Dickman function led to the study of new constants involving Euler's constant and polylogarithms, cf. Broadhurst \cite{Bro10} and Soundararajan \cite{Sou10}. All these studies, investigations of irrationality, connections with the Riemann hypothesis, possible interpretation as a (non)-period, and formulating generalizations of Euler's constant, may together lead to a better understanding of the place of Euler's constant $\gamma$ in mathematics. \paragraph{\bf Acknowledgments.} This paper started with a talk on Euler's constant given in 2009 in the ``What is...?'' 
seminar run by Li-zhen Ji at the University of Michigan. I am grateful to Jordan Bell for providing Latin translations of Euler's work in Sections 2.1--2.3, and for pointing out relevant Euler papers E55, E125, E629, and thank David Pengelley for corrections and critical comments on Euler's work in Section 2. I thank Sylvie Paycha for comments on dimensional regularization in Section 3.2; Doug Hensley, Pieter Moree, K. Soundararajan and Gerald Tenenbaum for remarks on Dickman's function in Section 3.5; Andrew Granville for extensive comments on the anatomy of integers (\cite{Gra11}), affecting Sections 3.5, 3.6 and 3.9-3.11; Tony Bloch for pointing out work of Cohen and Newman described in Section 3.13; Maxim Kontsevich for comments on rings of periods in Section 3.14; and Stephane Fischler, Tanguy Rivoal and Wadim Zudilin for comments on Sections 3.15 and 3.16. I thank Daniel Fiorilli, Yusheng Luo and Rob Rhoades for important corrections. I am especially grateful to Juan Arias de Reyna for detailed checking, detecting many misprints and errors, and supplying additional references. Finally I thank the reviewers for many historical and mathematical suggestions and corrections. \end{document}
arXiv
\begin{definition}[Definition:Inclusion Relation on Subobjects] Let $\mathbf C$ be a metacategory. Let $C$ be an object of $\mathbf C$. Let $\map {\mathbf{Sub}_{\mathbf C} } C$ be the category of subobjects of $C$. The '''inclusion relation $\subseteq$''' on subobjects of $C$ is defined as follows: :$m \subseteq m'$ {{iff}} there exists a morphism $f: m \to m'$ \end{definition}
ProofWiki
Infinity symbol

The infinity symbol ($\infty $) is a mathematical symbol representing the concept of infinity. This symbol is also called a lemniscate,[1] after the lemniscate curves of a similar shape studied in algebraic geometry,[2] or "lazy eight", in the terminology of livestock branding.[3] This symbol was first used mathematically by John Wallis in the 17th century, although it has a longer history of other uses. In mathematics, it often refers to infinite processes (potential infinity) rather than infinite values (actual infinity). It has other related technical meanings, such as the use of long-lasting paper in bookbinding, and has been used for its symbolic value of the infinite in modern mysticism and literature. It is a common element of graphic design, for instance in corporate logos as well as in older designs such as the Métis flag. Both the infinity symbol itself and several variations of the symbol are available in various character encodings.

History

The lemniscate has been a common decorative motif since ancient times; for instance it is commonly seen on Viking Age combs.[4] The English mathematician John Wallis is credited with introducing the infinity symbol with its mathematical meaning in 1655, in his De sectionibus conicis.[5][6][7] Wallis did not explain his choice of this symbol. It has been conjectured to be a variant form of a Roman numeral, but which Roman numeral is unclear. 
One theory proposes that the infinity symbol was based on the numeral for 100 million, which resembled the same symbol enclosed within a rectangular frame.[8] Another proposes instead that it was based on the notation CIↃ used to represent 1,000.[9] Instead of a Roman numeral, it may alternatively be derived from a variant of ω, the lower-case form of omega, the last letter in the Greek alphabet.[9] Perhaps in some cases because of typographic limitations, other symbols resembling the infinity sign have been used for the same meaning.[7] Leonhard Euler used an open letterform more closely resembling a reflected and sideways S than a lemniscate,[10] and even "O–O" has been used as a stand-in for the infinity symbol itself.[7]

Usage

Mathematics

In mathematics, the infinity symbol is used more often to represent a potential infinity,[11] rather than an actually infinite quantity as included in the cardinal numbers and the ordinal numbers (which use other notations, such as $\,\aleph _{0}\,$ and ω, for infinite values). For instance, in mathematical expressions with summations and limits such as $\sum _{n=0}^{\infty }{\frac {1}{2^{n}}}=\lim _{x\to \infty }{\frac {2^{x}-1}{2^{x-1}}}=2,$ the infinity sign is conventionally interpreted as meaning that the variable grows arbitrarily large towards infinity, rather than actually taking an infinite value, although other interpretations are possible.[12] The infinity symbol may also be used to represent a point at infinity, especially when there is only one such point under consideration. This usage includes, in particular, the infinite point of a projective line,[13] and the point added to a topological space to form its one-point compactification.[14]

Other technical uses

In areas other than mathematics, the infinity symbol may take on other related meanings. 
For instance, it has been used in bookbinding to indicate that a book is printed on acid-free paper and will therefore be long-lasting.[15] On cameras and their lenses, the infinity symbol indicates that the lens's focal length is set to an infinite distance, and is "probably one of the oldest symbols to be used on cameras".[16]

Symbolism and literary uses

In modern mysticism, the infinity symbol has become identified with a variation of the ouroboros, an ancient image of a snake eating its own tail that has also come to symbolize the infinite, and the ouroboros is sometimes drawn in figure-eight form to reflect this identification—rather than in its more traditional circular form.[18] In the works of Vladimir Nabokov, including The Gift and Pale Fire, the figure-eight shape is used symbolically to refer to the Möbius strip and the infinite, as is the case in these books' descriptions of the shapes of bicycle tire tracks and of the outlines of half-remembered people. Nabokov's poem after which he entitled Pale Fire explicitly refers to "the miracle of the lemniscate".[19] Other authors whose works use this shape with its symbolic meaning of the infinite include James Joyce, in Ulysses,[20] and David Foster Wallace, in Infinite Jest.[21]

Graphic design

The well-known shape and meaning of the infinity symbol have made it a common typographic element of graphic design. 
For instance, the Métis flag, used by the Canadian Métis people since the early 19th century, is based around this symbol.[22] Different theories have been put forward for the meaning of the symbol on this flag, including the hope for an infinite future for Métis culture and its mix of European and First Nations traditions,[23][24] but also evoking the geometric shapes of Métis dances,[25] Celtic knots,[26] or Plains First Nations Sign Language.[27] A rainbow-coloured infinity symbol is also used by the autism rights movement, as a way to symbolize the infinite variation of the people in the movement and of human cognition.[28] The Bakelite company took up this symbol in its corporate logo to refer to the wide range of varied applications of the synthetic material they produced.[29] Versions of this symbol have been used in other trademarks, corporate logos, and emblems including those of Fujitsu,[30] Cell Press,[31] and the 2022 FIFA World Cup.[32]

Encoding

The symbol is encoded in Unicode at U+221E ∞ INFINITY[33] and in LaTeX as \infty: $\infty $.[34] An encircled version is encoded for use as a symbol for acid-free paper. 
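These code points and byte sequences are easy to check with Python's standard codecs; a small sketch (standard library only):

```python
import unicodedata

# Check the code point, Unicode name and some encodings of the
# infinity sign using Python's built-in codecs.
ch = "\u221e"                          # INFINITY, decimal 8734
print(ord(ch), unicodedata.name(ch))
print(ch.encode("utf-8").hex())        # UTF-8 bytes e2 88 9e
print("\u267e".encode("utf-8").hex())  # PERMANENT PAPER SIGN: e2 99 be
print(ch.encode("shift_jis").hex())    # JIS mapping: 81 87
```

Running the encoders reproduces the byte values tabulated for each character set.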
Character information

Encoding                      ∞ (dec / hex)            ♾ (dec / hex)
Unicode                       8734 / U+221E            9854 / U+267E
UTF-8                         226 136 158 / E2 88 9E   226 153 190 / E2 99 BE
GB 18030                      161 222 / A1 DE          129 55 174 56 / 81 37 AE 38
Numeric character reference   &#8734; / &#x221E;       &#9854; / &#x267E;
Named character reference     &infin;                  -
OEM-437 (Alt Code)[35]        236 / EC                 -
Mac OS Roman[36]              176 / B0                 -
Symbol Font encoding[37]      165 / A5                 -
Shift JIS[38]                 129 135 / 81 87          -
EUC-JP[39]                    161 231 / A1 E7          -
EUC-KR[40] / UHC[41]          161 196 / A1 C4          -
EUC-KPS-9566[42]              162 172 / A2 AC          -
Big5[43]                      161 219 / A1 DB          -
LaTeX[34]                     \infty                   \acidfree
CLDR text-to-speech name[44]  infinity sign            infinity

The Unicode set of symbols also includes several variant forms of the infinity symbol that are less frequently available in fonts in the block Miscellaneous Mathematical Symbols-B.[45]

Character information

Encoding                      ⧜ (dec / hex)            ⧝ (dec / hex)            ⧞ (dec / hex)
Unicode name                  INCOMPLETE INFINITY      TIE OVER INFINITY        INFINITY NEGATED WITH VERTICAL BAR
Unicode                       10716 / U+29DC           10717 / U+29DD           10718 / U+29DE
UTF-8                         226 167 156 / E2 A7 9C   226 167 157 / E2 A7 9D   226 167 158 / E2 A7 9E
Numeric character reference   &#10716; / &#x29DC;      &#10717; / &#x29DD;      &#10718; / &#x29DE;
Named character reference     &iinfin;                 &infintie;               &nvinfin;
LaTeX[34]                     \iinfin                  \tieinfty                \nvinfty

See also

• Aleph number
• History of mathematical notation
• Lazy Eight (disambiguation)

References

1. Rucker, Rudy (1982). Infinity and the Mind: The science and philosophy of the infinite. Boston, Massachusetts: Birkhäuser. p. 1. ISBN 3-7643-3034-1. MR 0658492. 2. Erickson, Martin J. (2011). "1.1 Lemniscate". Beautiful Mathematics. MAA Spectrum. Mathematical Association of America. pp. 1–3. ISBN 978-0-88385-576-8. 3. Humez, Alexander; Humez, Nicholas D.; Maguire, Joseph (1993). Zero to Lazy Eight: The Romance of Numbers. Simon and Schuster. p. 18. ISBN 978-0-671-74281-2. 4. van Riel, Sjoerd (2017). "Viking Age Combs: Local Products or Objects of Trade?". Lund Archaeological Review. 23: 163–178. See p. 
172: "Within this type the lemniscate (∞) is a commonly used motif." 5. Wallis, John (1655). "Pars Prima". De Sectionibus Conicis, Nova Methodo Expositis, Tractatus (in Latin). pp. 4. 6. Scott, Joseph Frederick (1981). The mathematical work of John Wallis, D.D., F.R.S., (1616-1703) (2nd ed.). American Mathematical Society. p. 24. ISBN 0-8284-0314-7. 7. Cajori, Florian (1929). "Signs for infinity and transfinite numbers". A History of Mathematical Notations, Volume II: Notations Mainly in Higher Mathematics. Open Court. pp. 44–48. 8. Maor, Eli (1991). To Infinity and Beyond: A Cultural History of the Infinite. Princeton, New Jersey: Princeton University Press. p. 7. ISBN 0-691-02511-8. MR 1129467. 9. Clegg, Brian (2003). "Chapter 6: Labelling the infinite". A Brief History of Infinity: The Quest to Think the Unthinkable. Constable & Robinson Ltd. ISBN 978-1-84119-650-3. 10. Cajori (1929) displays this symbol incorrectly, as a turned S without reflection. It can be seen as Euler used it on page 174 of Euler, Leonhard (1744). "Variae observationes circa series infinitas" (PDF). Commentarii Academiae Scientiarum Petropolitanae (in Latin). 9: 160–188. 11. Barrow, John D. (2008). "Infinity: Where God Divides by Zero". Cosmic Imagery: Key Images in the History of Science. W. W. Norton & Company. pp. 339–340. ISBN 978-0-393-06177-2. 12. Shipman, Barbara A. (April 2013). "Convergence and the Cauchy property of sequences in the setting of actual infinity". PRIMUS. 23 (5): 441–458. doi:10.1080/10511970.2012.753963. S2CID 120023303. 13. Perrin, Daniel (2007). Algebraic Geometry: An Introduction. Springer. p. 28. ISBN 978-1-84800-056-8. 14. Aliprantis, Charalambos D.; Border, Kim C. (2006). Infinite Dimensional Analysis: A Hitchhiker's Guide (3rd ed.). Springer. pp. 56–57. ISBN 978-3-540-29587-7. 15. Zboray, Ronald J.; Zboray, Mary Saracino (2000). A Handbook for the Study of Book History in the United States. Center for the Book, Library of Congress. p. 49. 
ISBN 978-0-8444-1015-9. 16. Crist, Brian; Aurello, David N. (October 1990). "Development of camera symbols for consumers". Proceedings of the Human Factors Society Annual Meeting. 34 (5): 489–493. doi:10.1177/154193129003400512. 17. Armson, Morandir (June 2011). "The transitory tarot: an examination of tarot cards, the 21st century New Age and theosophical thought". Literature & Aesthetics. 21 (1): 196–212. See in particular p. 203: "Reincarnation is symbolised in a number of cards within the Waite-Smith tarot deck. The primary symbols of reincarnation used are the infinity symbol or lemniscate, the wheel and the circle." 18. O'Flaherty, Wendy Doniger (1986). Dreams, Illusion, and Other Realities. University of Chicago Press. p. 243. ISBN 978-0-226-61855-5. The book also features this image on its cover. 19. Toker, Leona (1989). Nabokov: The Mystery of Literary Structures. Cornell University Press. p. 159. ISBN 978-0-8014-2211-9. 20. Bahun, Sanja (2012). "'These heavy sands are language tide and wind have silted here': Tidal voicing and the poetics of home in James Joyce's Ulysses". In Kim, Rina; Westall, Claire (eds.). Cross-Gendered Literary Voices: Appropriating, Resisting, Embracing. Palgrave Macmillan. pp. 57–73. doi:10.1057/9781137020758_4. 21. Natalini, Roberto (2013). "David Foster Wallace and the mathematics of infinity". In Boswell, Marshall; Burn, Stephen J. (eds.). A Companion to David Foster Wallace Studies. American Literature Readings in the 21st Century. Palgrave Macmillan. pp. 43–57. doi:10.1057/9781137078346_3. 22. Healy, Donald T.; Orenski, Peter J. (2003). Native American Flags. University of Oklahoma Press. p. 284. ISBN 978-0-8061-3556-4. 23. Gaudry, Adam (Spring 2018). "Communing with the Dead: The "New Métis," Métis Identity Appropriation, and the Displacement of Living Métis Culture". American Indian Quarterly. 42 (2): 162–190. doi:10.5250/amerindiquar.42.2.0162. JSTOR 10.5250/amerindiquar.42.2.0162. S2CID 165232342. 24. "The Métis flag". 
Gabriel Dumont Institute(Métis Culture & Heritage Resource Centre). Archived from the original on 2013-07-24. 25. Racette, Calvin (1987). Flags of the Métis (PDF). Gabriel Dumont Institute. ISBN 0-920915-18-3. 26. Darren R., Préfontaine (2007). "Flying the Flag, Editor's note". New Breed Magazine (Winter 2007): 6. Retrieved 2020-08-26. 27. Barkwell, Lawrence J. "The Metis Infinity Flag". Virtual Museum of Métis History and Culture. Gabriel Dumont Institute. Retrieved 2020-07-15. 28. Gross, Liza (September 2016). "In search of autism's roots". PLOS Biology. 14 (9): e2000958. doi:10.1371/journal.pbio.2000958. PMC 5045192. PMID 27690292. 29. Crespy, Daniel; Bozonnet, Marianne; Meier, Martin (April 2008). "100 years of Bakelite, the material of a 1000 uses". Angewandte Chemie. 47 (18): 3322–3328. doi:10.1002/anie.200704281. PMID 18318037. 30. Rivkin, Steve; Sutherland, Fraser (2005). The Making of a Name: The Inside Story of the Brands We Buy. Oxford University Press. p. 130. ISBN 978-0-19-988340-0. 31. Willmes, Claudia Gisela (January 2021). "Science that inspires". Trends in Molecular Medicine. 27 (1): 1. doi:10.1016/j.molmed.2020.11.001. PMID 33308981. S2CID 229179025. 32. "Qatar 2022: Football World Cup logo unveiled". Al Jazeera. September 3, 2019. 33. "Unicode Character "∞" (U+221E)". Unicode. Compart AG. Retrieved 2019-11-15. 34. Pakin, Scott (May 5, 2021). "Table 294: stix Infinities". The Comprehensive LATEX Symbol List. CTAN. p. 118. Retrieved 2022-02-19. 35. Steele, Shawn (April 24, 1996). "cp437_DOSLatinUS to Unicode table". Unicode Consortium. Retrieved 2022-02-19. 36. "Map (external version) from Mac OS Roman character set to Unicode 2.1 and later". Apple Inc. April 5, 2005. Retrieved 2022-02-19 – via Unicode Consortium. 37. "Map (external version) from Mac OS Symbol character set to Unicode 4.0 and later". Apple Inc. April 5, 2005. Retrieved 2022-02-19 – via Unicode Consortium. 38. "Shift-JIS to Unicode". Unicode Consortium. December 2, 2015. 
Retrieved 2022-02-19. 39. "EUC-JP-2007". International Components for Unicode. Unicode Consortium. Retrieved 2022-02-19 – via GitHub. 40. "IBM-970". International Components for Unicode. Unicode Consortium. May 9, 2007. Retrieved 2022-02-19 – via GitHub. 41. Steele, Shawn (January 7, 2000). "cp949 to Unicode table". Unicode Consortium. Retrieved 2022-02-19. 42. "KPS 9566-2003 to Unicode". Unicode Consortium. April 27, 2011. Retrieved 2022-02-19. 43. van Kesteren, Anne. "big5". Encoding Standard. WHATWG. 44. Unicode, Inc. "Annotations". Common Locale Data Repository – via GitHub. 45. "Miscellaneous Mathematical Symbols-B" (PDF). Unicode Consortium. Archived (PDF) from the original on 2018-11-12. Retrieved 2022-02-19.
Wikipedia
Methodology article | Open | Published: 23 September 2016 A spherical-plot solution to linking acceleration metrics with animal performance, state, behaviour and lifestyle Rory P. Wilson1, Mark D. Holton2, James S. Walker2, Emily L. C. Shepard1, D. Mike Scantlebury3, Vianney L. Wilson1, Gwendoline I. Wilson1, Brenda Tysse1, Mike Gravenor4, Javier Ciancio5, Melitta A. McNarry6, Kelly A. Mackintosh6, Lama Qasem7, Frank Rosell8, Patricia M. Graf8,9, Flavio Quintana5, Agustina Gomez-Laich5, Juan-Emilio Sala5, Christina C. Mulvenna3, Nicola J. Marks3 & Mark W. Jones2 Movement Ecology volume 4, Article number: 22 (2016) We are increasingly using recording devices with multiple sensors operating at high frequencies to produce large volumes of data which are problematic to interpret. A particularly challenging example comes from studies on animals and humans where researchers use animal-attached accelerometers on moving subjects to attempt to quantify behaviour, energy expenditure and condition. The approach taken effectively concatenated three complex lines of acceleration into one visualization that highlighted patterns that were otherwise not obvious. The summation of data points within sphere facets and presentation into histograms on the sphere surface effectively dealt with data occlusion. Further frequency binning of data within facets and representation of these bins as discs on spines radiating from the sphere allowed patterns in dynamic body accelerations (DBA) associated with different postures to become obvious. We examine the extent to which novel, gravity-based spherical plots can produce revealing visualizations to incorporate the complexity of such multidimensional acceleration data using a suite of different acceleration-derived metrics with a view to highlighting patterns that are not obvious using current approaches. 
The basis for the visualisation involved three-dimensional plots of the smoothed acceleration values, which then occupied points on the surface of a sphere. This sphere was divided into facets and point density within each facet expressed as a histogram. Within each facet-dependent histogram, data were also grouped into frequency bins of any desirable parameters, most particularly dynamic body acceleration (DBA), which were then presented as discs on a central spine radiating from the facet. Greater radial distances from the sphere surface indicated greater DBA values while greater disc diameter indicated larger numbers of data points with that particular value. We indicate how this approach links behaviour and proxies for energetics and can inform our identification and understanding of movement-related processes, highlighting subtle differences in movement and its associated energetics. This approach has ramifications that should expand to areas as disparate as disease identification, lifestyle, sports practice and wild animal ecology. UCT Science Faculty Animal Ethics 2014/V10/PR (valid until 2017). Quantification of animal movement is a hugely complex topic. In its broadest sense, it operates over wide (3-dimensional) space-scales and highly variable time periods. For example, it encompasses everything from a single limb motion describing a simple arc lasting less than a second, through co-ordination of repetitive limb motion in a whole animal during travel, which may last hours, to the diversity in the complex movement describing the various behaviours exhibited over the lifetime of an animal. Understanding animal movement is important for a suite of reasons but particularly because voluntary animal movement requires energy. Quantification of the allocation of chemical energy for mechanical output and how this relates to movement is relevant in understanding the costs, efficiencies and values of behaviour, lifestyle and exercise physiology. 
Judicious use of energy is a major element of optimization studies that seek to define best strategies, which have a broad remit ranging from examining most enhanced performance by elite athletes [1, 2] to animals adopting behaviours that maximize survival [3]. Unsurprisingly, therefore, the energetics of movement is well studied e.g. [4], but it has been polarised into essentially two main branches defined by differing methodologies. One branch examines power use [5], which typically requires measurement across extended periods [6–8] but is limited by the difficulties in attributing instantaneous power to performance [9]. The other seeks to quantify behaviour, relying variously on approaches such as high-speed cameras [10], point light displays [11] and force platforms [12] for work on humans and, primarily, on observation-based methodologies for wild animals [13]. Increasingly though, both the power use and the behaviour of humans [14] and animals [15] are being studied using accelerometers in animal/human-attached tags because these sensors quantify change in speed, a fundamental property of motion, precisely [16]. Thus, in the field of energetics, workers have derived indices, such as those based on dynamic body acceleration (DBA) metrics [17], that correlate tightly with oxygen consumption [18], while behavioural studies have used various methods such as random forests, vector machines and artificial neural networks on acceleration data to identify behaviours [19, 20]. However, both groups recognise the problem inherent in the complexity of acceleration data. These provide most value when recorded at high rates (typically >20 Hz) across each of the three axes defining orientation in space, producing effectively 6 channels of data, 3 relating to the gravity-based component of the acceleration and 3 relating to the animal-based movement [21]. 
Indeed, it is perhaps this complexity that still represents an appreciable challenge for the animal (and human) behaviour community in binding energy use and behaviour within one framework (cf. [22]), even though they are fundamentally interdependent. Moreover, any framework that enhances consideration of animal movement, behaviour and power use simultaneously should facilitate the identification and understanding of processes and patterns across and between them. One solution to this is to recognise that, because the earth's gravity is constant, a tri-axial plot of tri-axial, orthogonally placed, acceleration data fundamentally builds a sphere, a 'g-sphere' [23]. Acceleration data derived from animal movement change the form of this sphere. We capitalise on this to create a new visualization paradigm for animal/human-attached acceleration data whereby we generate the g-sphere and then place animal movement data on it, including those that seek to exemplify power use, over any temporal scale. The approach marries behaviour to estimated energetics and highlights some patterns that are not intuitively obvious. We show that g-sphere visualizations should have the capacity to highlight changes in movement patterns associated with e.g. human emotional state, injury and best practice in single sports manoeuvres but extend through to highlighting proxies for energy-based behavioural ecology in wild animals over time periods ranging from seconds to years.

The basic g-sphere

An animal-attached tag mounted in the centre of an animal's back with orthogonal, tri-axial accelerometers (aligned with the major axes of the body) produces a 'static' g signal with a vectorial sum of 1.0 g due to gravity when the animal is stationary. Plots of such tri-axial data in a 3-d graph therefore tend to populate the outer surface of the g-sphere which becomes most apparent as the animal adopts body orientations with multiple combinations of body pitch and roll (Fig. 1a). 
When animals move, points may leave the g-sphere surface as acceleration values reflect g-forces derived from the animal's acceleration (Fig. 1a). This has been termed 'dynamic acceleration' and can be dealt with in a number of different ways, one of which is to remove it by selective smoothing [21] and normalising (see methods) to leave the postural data. Thus, body attitude, which is a major step in elucidating behaviour [16], is defined by the position of the data points on the sphere.

Fig. 1 Example behavioural data from a cormorant. Six dives and a short period of flight are visualised by (a) a point-based g-sphere [with point colour equating with DBA]. b shows the same data as (a) but as a Dubai plot. Both (c) images depict urchin plots of (b); C1 shows percentages of DBA allocation taken across the whole g-sphere while C2 shows percentages amounting to 100 % per facet. Note the higher values of DBA attributed to flight and descent of the water column, particularly emphasized by the 100 % facet percentage. Note also how certain spines show multi-modes (e.g. white arrow) which can be indicative of different behaviours at one body attitude

Dealing with over-plotting - the Dubai plot

Increasing time periods viewed within the basic g-sphere tend to result in increasing occlusion and over-plotting of the data, making visualizations more confusing and less useful as the number of data points increases (Fig. 1a). A representation of the time allocated to various postures can, however, be obtained by tessellating the surface of the g-sphere into facets, summing the data points within each facet, and presenting the number of points within each facet by a projection into space away from the g-sphere, producing a spherical histogram or 'Dubai plot' (Fig. 1b). Such plots typically show modes representing different types of behaviour with the higher peaks representing the more common behaviours (Figs. 1b and 2a). 
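The tessellate-and-count step behind a Dubai plot can be sketched in a few lines. The equal-angle latitude/longitude grid below is an illustrative assumption of this sketch; the method only requires that the sphere surface be divided into facets of some kind:

```python
import math
from collections import Counter

def facet_of(v, n_lat=18, n_lon=36):
    """Assign a unit vector on the g-sphere surface to a facet, using a
    simple equal-angle latitude/longitude grid (an assumption of this
    sketch; any tessellation of the sphere would do)."""
    x, y, z = v
    lat = math.asin(max(-1.0, min(1.0, z)))   # -pi/2 .. pi/2
    lon = math.atan2(y, x)                    # -pi .. pi
    i = min(n_lat - 1, int((lat + math.pi / 2) / math.pi * n_lat))
    j = min(n_lon - 1, int((lon + math.pi) / (2 * math.pi) * n_lon))
    return i, j

def dubai_histogram(unit_vectors):
    """Count posture samples per facet -- the spherical histogram that a
    'Dubai plot' renders as towers projecting from the sphere surface."""
    return Counter(facet_of(v) for v in unit_vectors)
```

Feeding in normalised static-acceleration vectors then gives, per facet, the number of samples (i.e. the time) spent at each body attitude.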
Fig. 2 Examples of posture and energy-linked posture visualised for two contrasting species (a human and a fish) over 24 h. The human data are taken from a person on a walking/camping tour while the fish data are from a hole-dwelling reef species that often rests by wedging itself at unusual angles. The left hand figures (a) show spherical histogram (Dubai) plots, indicating how time is allocated to different body postures [the 'North pole' position shows the species in the 'normal' upright position]. The first right-hand figure for each species (b) shows how each posture is linked to varying putative power levels. Note how the human has higher power-proxy levels associated with the vertical posture due to walking. Both the human and the fish have low power-proxy levels at low 'latitude' angles acquired during resting/sleep, exemplified by the large diameter blue discs. Data normalized to give a global percentage for all angles may hide infrequent, but higher-energy, activities. Normalising the data to 100 % per facet (c) highlights these though. In this case, the low-energy life style of the fish is still apparent (cf. B), with higher energies occurring fleetingly and only when the fish is vertical (white arrow). The colour coding has blue as low, and red as high, values

Allocating putative power use to the g-sphere - the g-urchin

While basic g-spheres and Dubai plots quantify the time allocated to different postural states, they impart no information on power use. This information can be incorporated into the g-sphere by calculating the dynamic body acceleration (DBA) (see methods), which correlates linearly with power [18], for each of the postural data points within each facet on the sphere. 
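The DBA calculation can be sketched as follows, assuming the common running-mean separation of raw acceleration into a static (gravity/posture) component and a dynamic residual; the window length and the use of VeDBA (the vector norm of the dynamic component) are illustrative choices, not necessarily the authors' exact ones:

```python
import math

def split_static_dynamic(samples, window=41):
    """Split raw tri-axial accelerometry (list of (x, y, z) tuples, in g)
    into a running-mean 'static' component, reflecting gravity and hence
    posture, and the residual 'dynamic' component. The window length
    (roughly 2 s of data at typical rates) is an assumption of this sketch."""
    half = window // 2
    static, dynamic = [], []
    for i, s in enumerate(samples):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        mean = tuple(sum(v[k] for v in samples[lo:hi]) / (hi - lo)
                     for k in range(3))
        static.append(mean)
        dynamic.append(tuple(s[k] - mean[k] for k in range(3)))
    return static, dynamic

def vedba(d):
    """Vectorial dynamic body acceleration of one dynamic sample."""
    return math.sqrt(sum(c * c for c in d))

def to_g_sphere(s):
    """Normalise a static sample onto the unit g-sphere."""
    norm = math.sqrt(sum(c * c for c in s))
    return tuple(c / norm for c in s)
```

Each normalised static sample is then a point on the g-sphere, and its DBA value is what the urchin spines (below) bin and display.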
In this, we note that although one study has shown that a strong relationship between DBA and energy expenditure holds for a (seabird) species operating in three media and multiple different body angles [22], confirmation that this is also the case for more species will need further work (but see [24]). To visualize this, the sphere facets can be populated with thin spines, one spine per facet, radiating into space, like a sea urchin (facets without data have no spine). Spines acquire stacked rings representing the frequency distribution of the DBA values associated with that posture/facet. The position of each ring on the spine indicates the DBA value (lower values are closer to the g-sphere surface), the depth of the ring indicates the width of the DBA bin, and the diameter of the ring is proportional to the number of data points within that bin (Figs. 1c and 2b). This 'g-urchin' can be represented so that it is normalised for all data across the sphere, which highlights the processes that dominate in terms of both the time and proxy for energy across the whole time period considered (Figs. 1c1 and 2b). Alternatively, data can be normalised within each facet to highlight the energetic proxies of particular postures irrespective of their time contribution (Figs. 1c2 and 2c). Urchin plots thus show differences between behaviours within species (Fig. 1c), differences in lifestyles between species (Fig. 2b, c), and differences in behaviour of any individual through time (Fig. 3).

Fig. 3 Example urchin plots for four consecutive 24 h periods after the release of a European badger (wearing a collar-mounted accelerometer) following anaesthesia. The 'North pole' facets show when the animal was properly horizontal (ie in standing or walking posture). Note how the first two days show no high energy activity because the animal was either resting or asleep. The second day shows only four changes in position. 
By day three, higher energy, normal posture activities such as walking are apparent at the North pole. This process is further enhanced in day 4, with North pole spine DBA distributions having modes that have moved up the length of the spines to indicate higher power use. DBA values are colour-coded with maximum values (in red) of 1 g

Comparing behaviours and putative power uses - the differential g-urchin

The process of comparing individuals or the same individual over different times can be enhanced by subtracting one Dubai plot or one g-urchin from another. These differential plots can be colour-coded, for example, according to which DBA bin from which urchin has the higher value (Fig. 4). This highlights differences in assumed power use associated with posture and therefore behaviour, with notable changes even associated with state [25] (Fig. 4a).

Fig. 4 Example posture and DBA values associated with 'state' in humans. a shows two Dubai plots for a person walking after seeing 'happy' and 'sad' film clips (higher frequencies are coded by warmer colours). A third differential Dubai plot highlights the difference between the two situations (blue = a higher relative frequency of 'happy' points per facet while red = a higher relative frequency of 'sad' points per facet). Note how the two conditions are reflected in the postural changes. b shows urchin plots for someone trekking across snow pulling a sledge one minute before a fall and one minute after recovering from the fall. The differential urchin shows both differences in postures adopted between the two situations as well as the dynamism of the walking (red shows a higher relative DBA frequency 'before the fall' while blue shows the reverse)

Simplifying outputs

G-sphere derivatives can be re-simplified to enhance e.g. inter- or intra-specific comparisons by plotting 2-d line graphs showing the time and/or the DBA allocated to percentage coverage of the g-sphere (Fig. 5). 
Such 'lifestyle' plots show consistent patterns within and between species (Fig. 5). Example 'lifestyle' plots for different species and situations. These show how DBA values are distributed across the surface of the g-sphere (continuous lines) and the time allocated to those values (dashed lines of equivalent colour) over 24 h for (a) 3 Magellanic penguins (blue), 3 Eurasian beavers (purple) and 3 domestic sheep (red) and (b) three people; a child (yellow) and 2 adults, one of whom hiked extensively during the period (red) while the other was essentially sedentary (blue). Note the species-specific similarities (species that employ the most diverse body angles have the highest percentage of sphere coverage) but that differences between individuals can be manifest in either the time or DBA allocations on the sphere. Application of g-spheres and their derivatives to raw tri-axial acceleration data adds another powerful tool to visualize and identify behaviour [19] that requires no prior knowledge of the animal in question for behaviour-specific patterns to emerge as groups. This approach concatenates 6 complex lines of acceleration data into one plot, binding animal attitude and a proxy for power use into one visualization that clearly shows modes of behaviour (Fig. 1). The immediate value lies in its potential for use as a template match approach for specific activity pattern identification across data [26]. Thus, behavioural description and identification (Figs. 1, 2 and 3) do not require matching observed behaviours with example data but stem from a visually apparent clustering within the plot. In particular, differences between various g-sphere derivatives, especially Dubai and urchin plots (Fig. 4), can be used to identify specific variation in posture and power-use proxies between behaviours. For example, the Dubai plots in Fig.
4a provide an example of how the posture of a subject changed according to whether they had watched a happy or sad film clip, with the allocation of time to facet position changing. Similarly, the posture and allocation of DBA to different body postures during walking changed after a fall (Fig. 4b). The g-spheres therefore employ fundamentally different principles from other methods in the manner of data visualization and interpretation. In a first iteration, the most common behaviours are most easily identified because of the way they dominate the basic g-sphere visualization (Fig. 1), which, it could be argued, is the most important feature of understanding time management in animals. However, even behaviour that is only a small fraction of the time budget, but is energetically distinct and therefore likely to be apparent in the DBA distributions on urchin spines, may be identified by moving from the globally normalized g-urchin to one that is normalized to facet (Fig. 2b, c). Importantly, mono-, bi-, or even tri-modality in the frequency distributions of DBA allocated to particular facets or groups of adjacent facets points to multiple behaviours occurring at similar animal postural attitudes. This is illustrated, for example, in the cormorant behaviour where the white arrow in Fig. 1c2 shows multi-modality in DBA due to both dive ascent behaviour and flight behaviour being apparent in the same body attitude facet. It is also exemplified in the stationary and swimming behaviours in the seabass, shown in the bimodality of the DBA distributions along urchin spines at the North Pole (cf. Fig. 2c). The time-based adoption of behaviours can also be studied in this way, for example, in the badger data presented (Fig. 3). Here, 'normal' walking behaviour is only manifest during day 4 post-sedation, when the urchin spines at the North Pole acquire a DBA mode that is greater than 1.0 g (Fig. 3).
Such observations can then readily be incorporated into statistical classifiers and classification algorithms. Generation of frequency distributions of DBA, as a proxy for power, thus enhances the process of separating behaviours. Importantly, it also helps visualize the overall allocation of power proxies, either to specific behaviours over short periods such as seconds or to collections of behaviour over longer periods (cf. Figs. 1, 2 and 3) extending to months or even years. Depending on the timescales, collections of particular behaviours should provide a representation of different lifestyles, as well as their associated putative energetic outlay, allowing powerful comparisons to be made between systems or scenarios. Examples include comparisons between species with contrasting lifestyles (Figs. 2 and 5) or within-species lifestyle comparisons. Indeed, the precise form of 'lifestyle' plots (Fig. 5) may help in defining lifestyle taxa by defining animal capacities. The future may also benefit from the use of g-sphere approaches based on multiple accelerometers used on different parts of the body or even from having accelerometers on hand-held objects. The expectation is that this will be particularly useful in sport applications (Additional file 1: Figure S7) where effective movement must be stylized for maximum performance because limb- or sports equipment-mounted sensors will represent local forces and perhaps local power-usage proxies better than trunk-mounted systems which produce a body-integrated signal. Importantly, such power-proxy comparisons, from trunk- or limb-mounted sensors, can help identify efficient solutions to activities where performance, such as running speed over a given distance or animal breeding success over months, should be equatable with the putative energetic cost. This sort of consideration thus has advantages for elite athletes as well as for conservation bodies examining the costs of the lifestyle of their animals.
Equally, changes in behaviour that occur with disease or illness, such as constrained activity stemming from rheumatoid arthritis [27], should be rapidly identifiable using this approach. We expect g-spheres and their derivatives (e.g. Fig. 5) to form the basis for summary statistics which highlight particular aspects of performance, behaviour and lifestyle, which may serve as powerful descriptors of e.g. animal lifestyle, linked, among other things, to physical limitations based on taxonomic, allometric or environmental (e.g. water versus terrestrial) constraints [28]. In addition, such visualizations may help both children and adults to understand how the physical activity levels in their lifestyles compare to those recommended [29]. The treatment of tag-derived tri-axial acceleration data by creating a tri-axial plot of the gravity-based acceleration leads to a spherical surface on which acceleration proxies for power use can be placed. This process has potential for highlighting behaviour- and even state-dependent clusters, irrespective of whether the user has a verified library or not, and should illustrate how animals may allocate energy to the different behaviours. Subsequent simplification of the spherical plots into percentage of sphere occupied, mean dynamic body acceleration and time allocated per facet allows simple 2-d plots between these parameters to be created (Fig. 5). This approach should provide a powerful summary of putative energy allocation to behaviour and time, documenting intra-specific differences and showing how animals respond to their environment over time. Inter-specific comparisons of these metrics show promise as a powerful behavioural tool with which to compare and quantify animal lifestyles. The g-sphere visualization technique has been incorporated into publicly available smart sensor analysis software, Framework4 [30, 31], available from http://www.framework4.co.uk. Walker et al. [32] give more details on this.
In brief, the basic g-sphere is derived from tri-axial acceleration data, where the sensors have orthogonal placement, aligning with the major axes of the tagged animal's body. Typically, the acceleration data will be recorded at infra-second rates (e.g. 40 Hz) on a deployment spanning anywhere from a few minutes up to a year. One day of data (24 h) recorded at these rates provides over 10 million measurements. For the g-sphere, we build on a method for visualising accelerometer data in Grundy et al. [23], using spherical coordinate plots to depict the distributions of data. To deal with large datasets, we utilise frequency-based approaches which show an overview of the data. Firstly, a spherical histogram shows the number of data items in each facet of the spherical coordinate system. Secondly, we build on the surface provided by the g-sphere using location-dependent frequency bins (the 'g-urchin' plot), for metrics such as DBA [17] as proxies for power usage. Multiple urchins can be compared using difference operations to analyse across instances, behaviour groupings, or data sets. Static and dynamic acceleration Measured acceleration is the sum of a static component due to gravity, manifest in accelerometers according to their orientation with respect to the Earth, and a dynamic component, due to the movement of the animal. Separating these components from the raw accelerometer measurements allows isolation of postural attitudes and movement. The static component can be approximated by applying a low-pass filter over each of the accelerometer axis components. Shepard et al. [21] suggest smoothing using a running mean over a period amounting to about twice the wavelength of any repetitive frequencies.
The static component at data point $i$ on axis $c$ ($SA_{c,i}$) with a smoothing window of $w$ is given by: $$ SA_{c,i} = \frac{1}{w} \sum_{j=i-\frac{w}{2}}^{i+\frac{w}{2}} A_{c,j} $$ The corresponding dynamic components of acceleration ($DA_c$) per orthogonal axis are computed by subtracting the static components ($SA_c$) of acceleration from the raw acceleration values ($A_c$): $$ DA_c = A_c - SA_c $$ Power metrics Dynamic acceleration-based metrics [17] have been argued to be predictors of power [18]. Two measures, Overall Dynamic Body Acceleration (ODBA) and Vectorial Dynamic Body Acceleration (VeDBA), have been used, and are essentially equivalent in terms of their power to predict VO2 [33]. VeDBA ($V$) is calculated from the dynamic components of acceleration ($DA_x$, $DA_y$ and $DA_z$) by taking the vectorial length of the dynamic acceleration vector using: $$ V = \sqrt{DA_x^2 + DA_y^2 + DA_z^2} $$ ODBA ($O$) is also calculated from the dynamic components of acceleration ($DA_x$, $DA_y$ and $DA_z$), instead taking the sum of the absolute dynamic acceleration components using: $$ O = \left|DA_x\right| + \left|DA_y\right| + \left|DA_z\right| $$ Raw plot The basic g-sphere plots the static accelerometer data in a three-dimensional scatter plot with the animal's heave axis being allocated the y-axis, the surge the x-axis and the sway the z-axis (Fig. 1a). Each vector is considered as an offset from the origin, directly scatter-plotted in three-dimensional space with, for example, the colour of each data point being linked to any associated attribute in the data set (Fig. 1a). This representation shows short-lived behaviours well, providing a compelling visualization of when forces exceed that exerted by gravity (Additional file 1: Figure S1). Spherical plot Normalising the static acceleration vector encodes posture information.
Given the $x$, $y$, and $z$ channels of the static vector, its length $L$ can be computed and the components normalised to $x'$, $y'$ and $z'$ via: $$ L = \sqrt{SA_x^2 + SA_y^2 + SA_z^2} $$ $$ x' = \frac{SA_x}{L} \quad y' = \frac{SA_y}{L} \quad z' = \frac{SA_z}{L} $$ This projects the normalised vector onto the surface of a sphere in 3-d scatter plots, which gives an implicit conversion to spherical coordinates ($r$, $\theta$, $\varphi$) [34], where $\theta$ corresponds to the angle of inclination, $\varphi$ is the angle of rotation on a two-dimensional plane, and the radius is constant ($r = 1$) throughout. Each vector is plotted as a point in the display and the size and radius of each point can be adjusted by a fixed amount, to link it to an attribute in the data set. Points can be joined together in chronological order to show the temporal ordering of the vectors as a path in the three-dimensional space (Additional file 1: Figure S2) so that the spherical scatter plot shows an intuitive summary of the geometric distribution of posture and direction. Linking the radius $r$ of each coordinate to another attribute allows additional dimensions, such as depth, to be encoded which, in this case, provides a compelling illustration of diving patterns along with the associated state (Additional file 1: Figure S2). Binning in three dimensions Large data set plots incur problems with occlusion and overplotting, where data values in a point cloud obscure other values. For this, an overview and focus approach [35] can be employed which gives a contextual overview of the data while leaving potential to interact with further details in the data. Thus, we divide the surface of the sphere into facets (sphere tessellation) and treat the data within each facet to derive summary statistics (binning).
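The smoothing, DBA and normalisation steps above can be sketched in a few lines of Python with NumPy. This is an illustrative sketch, not code from the study: the window length and function names are our assumptions, and the running mean is a simple stand-in for the low-pass filter of Shepard et al.

```python
import numpy as np

def static_dynamic(acc, w=40):
    """Split raw tri-axial acceleration (n x 3 array) into static and
    dynamic parts.  The static part is a running mean of width w over
    each axis (a crude low-pass filter); the dynamic part is the rest.
    Per Shepard et al., w should span ~2x the stride wavelength."""
    kernel = np.ones(w) / w
    static = np.column_stack(
        [np.convolve(acc[:, c], kernel, mode="same") for c in range(3)]
    )
    dynamic = acc - static
    return static, dynamic

def vedba(dynamic):
    # Vectorial Dynamic Body Acceleration: length of the dynamic vector.
    return np.sqrt((dynamic ** 2).sum(axis=1))

def odba(dynamic):
    # Overall Dynamic Body Acceleration: sum of absolute components.
    return np.abs(dynamic).sum(axis=1)

def posture_unit_vectors(static):
    # Normalise each static vector onto the unit sphere: posture only.
    lengths = np.linalg.norm(static, axis=1, keepdims=True)
    return static / lengths
```

Note that the running mean is distorted near the start and end of the record (the window runs off the data), so edge samples of the dynamic component should be treated with caution.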
Sphere tessellation To represent the underlying data on which the chart is based accurately [36], we employ a frequency-based approach using regular bin sizes to summarise the data, although constructing a sphere from a series of uniform geometric primitives is a known problem from the cartography domain [37]. The traditional method of constructing a sphere via lines of latitude and longitude results in variably sized facets, misrepresenting the underlying data [38]. We thus utilise a geodesic sphere, providing a close-to-uniform and regular sphere tessellation, using subdivision surfaces and spherical projection of an icosahedron platonic solid. The geodesic sphere starts with an icosahedron. Each facet is then repeatedly subdivided a pre-defined number of times, with each of the acquired points projected onto a sphere. This results in triangular facets, each of which is of a close-to-regular shape and area. Despite a slight variation in the size and shape of each facet, this has a negligible effect in reconstructing the underlying data [38]. Binning data Binning identifies the facet with which a data item intersects on the geodesic sphere. Teanby [38] proposes a winding method which operates by linearly searching for an intersecting facet on the sphere which has a sum of angles with the test vector equating to 2π. Walker et al. [30] propose a more efficient method using the hierarchical structure of the geodesic sphere, which operates in a similar manner to a search tree, reducing the otherwise linear search to logarithmic complexity. The angle ($\theta$) between the direction ($w$) of the centre of each facet from the origin and the current vector ($v$) is computed using the dot product (below). The point is determined to be associated with the sphere facet with the smallest angle between them. This is recursively computed on the hierarchical structure until the lowest level is reached.
$$ \cos\theta = \frac{v \cdot w}{\left|v\right| \left|w\right|} $$ For each facet, the following statistics are computed: (i) the number of data items intersecting each facet, (ii) the mean value of each data channel for the items in each facet and (iii) a frequency distribution of a user-defined data attribute consisting of a user-defined number of bins. The data for attributes (i) and (ii) are normalised so that the whole sphere adds up to 100 %. The distributions for (iii) are normalised locally, which allows the creation of a histogram, for each facet, of the power usage occurring for a particular movement and postural state independent of the frequency of the underlying data in the facet (since the frequency equates to a percentage). Dubai plot The binned data for each facet can be displayed as a single histogram projecting perpendicularly from its respective facet (Figs. 1b, 2a and 4a; Additional file 1: Figure S3). Each histogram's length and colour are nominally proportional to the normalised sample size for the sphere facet (Additional file 1: Figure S4). This gives an overview of the data distribution over the sphere, illustrating the frequency of postures or movements in the data set. The colour, as well as the length, may encode any other data attribute in addition to the normalised frequency. G-urchin In a final step of this method, the smoothed tri-axial acceleration axes can be encoded in addition to the frequency of items in each facet. A histogram for each facet of the sphere is computed for the items residing in the facet, which can be combined in a manner that represents the power usage for each state. This 'g-urchin' has spines projecting from the sphere, with each spine placed at a user-defined distance away from the sphere to avoid occlusion with any of the other layers of the visualization (although a line is drawn to the centre of the facet it represents). The length and width of each spine can be ascribed to any data attribute.
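The facet assignment and per-facet statistics just described can be sketched as follows. For brevity this illustrative Python uses a brute-force nearest-centre search over all facet centres (rather than the hierarchical search of Walker et al.), and assumes the geodesic facet centre directions are already available as unit vectors; names and bin settings are our assumptions.

```python
import numpy as np

def assign_facets(unit_vecs, facet_centres):
    """Assign each posture unit vector (n x 3) to the facet whose centre
    direction (f x 3, unit vectors) is closest: the largest dot product
    corresponds to the smallest angle.  Brute-force stand-in for the
    hierarchical geodesic search."""
    cos = unit_vecs @ facet_centres.T  # (n, f) matrix of cosines
    return np.argmax(cos, axis=1)

def facet_statistics(facet_ids, dba, n_facets, n_bins=10, dba_max=1.0):
    """Per-facet counts normalised so the whole sphere sums to 100 %,
    plus a locally normalised DBA histogram per facet (the urchin
    'spines', each summing to 100 % where data exist)."""
    counts = np.bincount(facet_ids, minlength=n_facets).astype(float)
    pct = 100.0 * counts / counts.sum()
    edges = np.linspace(0.0, dba_max, n_bins + 1)
    hists = np.zeros((n_facets, n_bins))
    for f in range(n_facets):
        vals = dba[facet_ids == f]
        if vals.size:
            h, _ = np.histogram(vals, bins=edges)
            if h.sum() > 0:
                hists[f] = 100.0 * h / h.sum()
    return pct, hists
```

The brute-force assignment is O(n x f); the hierarchical search in the text reduces the per-point cost to logarithmic in the number of facets.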
It is most effective when the spine characteristics are linked to histogram frequency or the number of items residing in the facet (Figs. 1c, 2b, c and 3). Each spine consists of a number of stacks, the width of which corresponds to the histogram bin width (Additional file 1: Figure S4 overview). Differential g-sphere The binning procedure standardizes the data for time to allow a sphere from one situation (species, individual, time period) to be compared with another, provided the g-spheres are of the same sphere tessellation and bin size. We use two operations for this: firstly subtraction, which is used for highlighting differences, and secondly summation, which combines g-spheres together. This gives the notion of two sphere types: a data g-sphere, generated from raw data, and an operation g-sphere, generated by applying an operation. The standardization process means that operations can be applied to any combination of the two g-sphere representations. Difference is used to subtract two g-spheres ($GA$, $GB$) from each other. The absolute difference between the two spheres is computed for each facet ($i = 1, \dots, f$) of the sphere and each corresponding bin ($j = 1, \dots, b$) of the frequency distribution. The result is a new operation g-sphere $G'$ which highlights the difference between $GA$ and $GB$ (Fig. 4b): $$ G'_{ij} = \left| GA_{ij} - GB_{ij} \right| $$ Summation is used to combine two g-spheres together. The items in each corresponding bin are added together, and the result is a new operation g-sphere which combines the spheres $GA$ and $GB$: $$ G'_{ij} = GA_{ij} + GB_{ij} $$ Each frequency distribution is normalised to eradicate any bias towards data sets containing different numbers of data points. The effect is that each frequency distribution is expressed in percentages, with each bin contributing a share of the whole distribution.
As such, the entire frequency distribution totals 100 %. When combining the distributions by addition or subtraction, the result is expressed as a percentage difference (or combined percentage) between the two histograms. Percentages of distributions are used to protect against bias resulting from the size of the underlying data.
Abbreviations DBA: Dynamic body acceleration; VeDBA: Vectorial dynamic body acceleration
References
1. Jones AM. A five year physiological case study of an Olympic runner. Br J Sports Med. 1998;32:39–43.
2. Krustrup P, Hellsten Y, Bangsbo J. Intense interval training enhances human skeletal muscle oxygen uptake in the initial phase of dynamic exercise at high but not at low intensities. J Physiol. 2004;559:335–45.
3. Hamilton WD. Selfish and spiteful behaviour in an evolutionary model. Nature. 1970;228:1218–20.
4. Bleich S, Ku R, Wang Y. Relative contribution of energy intake and energy expenditure to childhood obesity: a review of the literature and directions for future research. Int J Obes (Lond). 2011;35:1–15.
5. Scantlebury DM, et al. Flexible energetics of cheetah hunting strategies provide resistance against kleptoparasitism. Science. 2014;346:79–81.
6. Reilly JJ, et al. Total energy expenditure and physical activity in young Scottish children: mixed longitudinal study. Lancet. 2004;363:211–2.
7. Arch J, Hislop D, Wang S, Speakman J. Some mathematical and technical issues in the measurement and interpretation of open-circuit indirect calorimetry in small animals. Int J Obes (Lond). 2006;30:1322–31.
8. Trost SG, Loprinzi PD, Moore R, Pfeiffer KA. Comparison of accelerometer cut-points for predicting activity intensity in youth. Med Sci Sport Exer. 2011;43:1360–8.
9. Bassey E, Short A. A new method for measuring power output in a single leg extension: feasibility, reliability and validity. Eur J Appl Physiol Occup Physiol. 1990;60:385–90.
10. Meur Y, et al. Spring-mass behaviour during the run of an international triathlon competition. Int J Sports Med. 2013;34:1–8.
11. Nackaerts E, et al. Recognizing biological motion and emotions from point-light displays in autism spectrum disorders. PLoS One. 2012;7:e44473.
12. Girard O, Millet G, Slawinski J, Racinais S, Micallef J. Changes in running mechanics and spring-mass behaviour during a 5-km time trial. Int J Sports Med. 2013;34:832–40.
13. Watanabe YY, Takahashi A. Linking animal-borne video to accelerometers reveals prey capture variability. Proc Natl Acad Sci. 2013;110:2199–204.
14. Yang C-C, Hsu Y-L. A review of accelerometry-based wearable motion detectors for physical activity monitoring. Sensors. 2010;10:7772–88.
15. Brown DD, Kays R, Wikelski M, Wilson RP, Klimley AP. Observing the unwatchable through acceleration logging of animal behavior. Anim Biotelem. 2014;1:1–20.
16. Shepard EL, et al. Identification of animal movement patterns using tri-axial accelerometry. Endanger Species Res. 2008;10.
17. Wilson RP, et al. Moving towards acceleration for estimates of activity-specific metabolic rate in free-living animals: the case of the cormorant. J Anim Ecol. 2006;75:1081–90.
18. Halsey LG, Shepard EL, Wilson RP. Assessing the development and application of the accelerometry technique for estimating energy expenditure. Comp Biochem Physiol A Mol Integr Physiol. 2011;158:305–14.
19. Nathan R, et al. Using tri-axial acceleration data to identify behavioral modes of free-ranging animals: general concepts and tools illustrated for griffon vultures. J Exp Biol. 2012;215:986–96.
20. Preece SJ, Goulermas JY, Kenney LP, Howard D. A comparison of feature extraction methods for the classification of dynamic activities from accelerometer data. IEEE Trans Biomed Eng. 2009;56:871–9.
21. Shepard EL, et al. Derivation of body motion via appropriate smoothing of acceleration data. Aquat Biol. 2008;4:235–41.
22. Elliott KH, Le Vaillant M, Kato A, Speakman JR, Ropert-Coudert Y. Accelerometry predicts daily energy expenditure in a bird with high activity levels. Biol Lett. 2013;9:20120919.
23. Grundy E, Jones MW, Laramee RS, Wilson RP, Shepard EL. Computer Graphics Forum, Wiley Online Library, vol. 28. 2009. p. 815–22.
24. Laich AG, Wilson RP, Gleiss AC, Shepard ELC, Quintana F. Use of overall dynamic body acceleration for estimating energy expenditure in cormorants: does locomotion in different media affect relationships? J Exp Mar Biol Ecol. 2011;399:151–5.
25. Wilson RP, et al. Wild state secrets: ultra-sensitive measurement of micro-movement can reveal internal processes in animals. Front Ecol Environ. 2014;12:582–7.
26. Bartlett R. Artificial intelligence in sports biomechanics: new dawn or false hope? J Sports Sci Med. 2006;5:474.
27. Semanik P, et al. Assessing physical activity in persons with rheumatoid arthritis using accelerometry. Med Sci Sports Exerc. 2010;42:1493.
28. Demetrius L. The origin of allometric scaling laws in biology. J Theor Biol. 2006;243:455–67.
29. Start Active, Stay Active: a report on physical activity for health from the four home countries' Chief Medical Officers. Department of Health; 2011.
30. Walker JS, et al. TimeClassifier: a visual analytic system for the classification of multi-dimensional time series data. Vis Comput. 2015;31(6-8):1067–78.
31. Walker J, Borgo R, Jones MW. TimeNotes: a study on effective chart visualization and interaction techniques for time-series data. IEEE Trans Vis Comput Graph. 2016;22:549–58.
32. Walker JS, et al. Prying into the intimate secrets of animal lives; software beyond hardware for comprehensive annotation in 'Daily Diary' tags. Mov Ecol. 2015;3:1–16.
33. Qasem L, et al. Tri-axial dynamic acceleration as a proxy for animal energy expenditure; should we be summing values or calculating the vector? PLoS One. 2012;7:e31187.
34. Aigner W, Miksch S, Schumann H, Tominski C. Visualization of time-oriented data. Springer Science & Business Media; 2011.
35. Shneiderman B. The eyes have it: a task by data type taxonomy for information visualizations. Proceedings of the IEEE Symposium. 1996;336–43.
36. Tufte ER, Graves-Morris PR. The visual display of quantitative information, vol. 2. Cheshire: Graphics Press; 1983.
37. Van Wijk JJ. Unfolding the earth: myriahedral projections. Cartograph J. 2008;45:32–42.
38. Teanby N. An icosahedron-based method for even binning of globally distributed remote sensing data. Comput Geosci. 2006;32:1442–50.
The authors are grateful for technical support given by Phil Hopkins as well as to the liberal and enthusiastic work environment that defines the College of Science at Swansea University. JSW was funded by an EPSRC doctoral training grant. Acceleration data from many different animals were used in the formalization of this work, even if not presented explicitly, so we acknowledge the funding of NERC (NE/I002030/1) to DMS in this regard. This project was made possible by a generous grant by the Royal Society/Wolfson fund to build the Swansea University Visualization Suite. PMG and FR were funded by Telemark University College, Norway. All data generated and/or analysed during the current study are available from the corresponding author on reasonable request. RPW provided the initial concept after discussion with MDH, JSW, MWJ and ELCS. MDH and JSW developed the software. VLW, GIW, MAM, KAM and BT provided human-based data including analysis and interpretation; FQ, AG-L, J-ES, DMS, JC, LQ, FR, and PMG gathered animal data and provided analyses and interpretation. MG contributed to general analyses and all authors contributed to the manuscript and to the ideas contained therein. All authors read and approved the final manuscript. See above. No named person within this study. The fish work was performed under permits from the "Secretaría de Pesca del Chubut" while the bird manipulations were carried out in accordance with the legal standards of the Argentine government and fieldwork approved by The Organismo Provincial de Turismo and Dirección de Fauna y Flora Silvestre of Chubut Province, Argentina.
Permission was granted from UK Home Office and protocols approved by the Department of Agriculture, Environment & Rural Affairs, Northern Ireland for all mammal work except for beavers, for which the work was approved by the Norwegian Experimental Animal Board and the Norwegian Directorate for Nature Management. The study procedures for humans were approved by the Swansea University ethics committee and complied with the Declaration of Helsinki. All the participants and their parents/guardians were informed in writing about the demands of the study, and subsequently gave their written informed assent and consent, respectively, for participation prior to commencing the study. Swansea Lab for Animal Movement, Biosciences, College of Science, Swansea University, Singleton Park, Swansea, SA2 8PP, UK Rory P. Wilson , Emily L. C. Shepard , Vianney L. Wilson , Gwendoline I. Wilson & Brenda Tysse Visual Computing, Computer Science, College of Science, Swansea University, Singleton Park, Swansea, SA2 8PP, UK Mark D. Holton , James S. Walker & Mark W. Jones School of Biological Sciences, Institute for Global Food Security, Queen's University Belfast, Medical Biology Centre, 97, Lisburn Road, Belfast, BT9 7BL, UK D. Mike Scantlebury , Christina C. Mulvenna & Nicola J. Marks Institute of Life Science, Swansea University Medical School, Swansea, SA2 8PP, UK Mike Gravenor Centro Nacional Patagonico, Boulevard Brown s/n, Chubut, Argentina Javier Ciancio , Flavio Quintana , Agustina Gomez-Laich & Juan-Emilio Sala Applied Sports Science Technology and Medicine Research Centre, College of Engineering, Swansea University, Fabian Way, Swansea, SA1 8EN, UK Melitta A. McNarry & Kelly A. Mackintosh Department of Biological and Environmental Sciences, College of Arts and Sciences, Qatar University, Doha, 2713, Qatar Lama Qasem Faculty of Arts and Sciences, Department of Environmental and Health Studies, Telemark University College, N-3800 Bø i, Telemark, Norway Frank Rosell & Patricia M. 
Graf Department of Integrative Biology and Biodiversity Research, Institute of Wildlife Biology and Game Management, University of Natural Resources and Life Sciences, Vienna, A-1180, Vienna, Austria Patricia M. Graf Correspondence to Rory P. Wilson. Additional file 1: Methods. Changing shapes for frequency distributions. Figure S1. A 3-d scatter plot (g-sphere) of static (orthogonal) tri-axial acceleration data. Figure S2. A spherical coordinates visualization of (a) postural state plotted onto the surface of a sphere in three-dimensional space, (b) points joined together in chronological order, (c) projecting the data outwards from the sphere according to other parameters. Figure S3. A spherical histogram (Dubai plot) visualization to depict frequent postural states. Figure S4. Histogram, frequency shape (stacked) and fixed shape (skittle) from urchin plots. Figure S5. G-urchin of skittle shape and stacked frequency urchins emitted from the centre of each facet of the sphere. Figure S6. Overview of user interface for a program in which spherical plots can be created. Figure S7. G-spheres and comparable g-urchins derived from a rod-mounted tri-axial accelerometer showing fly-fishing visualisations. (DOCX 5289 kb) Keywords: Spherical plots; Tri-axial acceleration; G-sphere
\begin{definition}[Definition:Separable Polynomial/Definition 2] Let $K$ be a field. Let $\map P X \in K \sqbrk X$ be a polynomial of degree $n$. $P$ is '''separable''' {{iff}} it has no repeated roots in any field extension of $K$. Category:Definitions/Separable Polynomials \end{definition}
\begin{document} \newcommand{\subsection}{\subsection} \newcommand{\subsubsection}{\subsubsection} \newcommand{s^{t-1}}{s^{t-1}} \newcommand{s^t}{s^t} \newcommand{d^t}{d^t} \newcommand{r^t}{r^t} \newcommand{f^t}{f^t} \newcommand{\mu^t}{\mu^t} \newcommand{paper}{paper} \newcommand{Paper}{Paper} \newif\ifonline \onlinetrue \title{\textbf{Flow Equilibria via Online Surge Pricing}} \section*{Abstract} We explore issues of dynamic supply and demand in ride sharing services such as Lyft and Uber, where demand fluctuates over time and geographic location. We seek to maximize social welfare, which depends on taxicab locations, passenger locations, passenger valuations for service, and the distances between taxicabs and passengers. Our only means of control is to set surge prices; taxicabs and passengers then maximize their utilities subject to these prices. We study two related models: a continuous passenger-taxicab setting, similar to the Wardrop model, and a discrete (atomic) passenger-taxicab setting. In the continuous setting, every location is occupied by a set of infinitesimal strategic taxicabs and a set of infinitesimal non-strategic passengers. In the discrete setting every location is occupied by a set of strategic agents, taxicabs and passengers, where passengers have differing values for service. We expand the continuous model to a time-dependent setting and study the corresponding online environment. The utility for a strategic taxicab that drives from $u$ to $v$ and picks up a passenger at $v$ is the surge price at $v$ minus the distance from $u$ to $v$. The utility for a strategic passenger at $v$ that gets service is the value of the service to the passenger minus the surge price at $v$.
Surge prices are in passenger-taxicab equilibrium if there exists a min cost flow that moves taxicabs about such that (a) every taxicab follows a best response, (b) all strategic passengers at $v$ with value above the surge price $r_v$ for $v$ are served, and (c) no strategic passengers with value below $r_v$ are served (non-strategic infinitesimal passengers are always served). This paper computes surge prices such that the resulting passenger-taxicab equilibrium maximizes social welfare, and the computation of such surge prices is in poly time. Moreover, it is a dominant strategy for passengers to reveal their true values. We seek to maximize social welfare in the online environment, and derive tight competitive ratio bounds to this end. Our online algorithms make use of the surge prices computed over time and geographic location, inducing successive passenger-taxicab equilibria. \section{Introduction} In the sharing economy\footnote{Also known as the ``gig" economy.} individual self-interested suppliers compete for customers. According to PWC, the {\sl sharing economy} is projected to exceed $300$ billion USD within 8 years. Lyft and Uber are prime examples of such systems. According to \cite{lam2017demand,NBERw22627} it is the users who gain the majority of the surplus from such systems, and significantly so. Contrariwise, many studies suggest negative societal issues in the sharing economy (e.g., see \cite{MARTIN:2016,Cramer:2016,RICHARDSON:2015,berger2017drivers}). Unlike salaried employees of livery firms, drivers for Uber (and other ``gig" suppliers) are free to decide when they are working and what calls/employment to accept. {\sl E.g.}, drivers can refuse to accept a call if it is too far away. To increase supply (and reduce demand) Uber introduced ``surge pricing", which is a multiplier on the base price when demand outstrips supply. The surge price can be different at different locations. 
In the past, pricing schemes resulted in what was theorized to be negative work elasticity \cite{camerer1997labor}, where it is suggested that drivers impose upon themselves ``income targets". This means that drivers will work until they reach their target income for the day, causing them to extend their hours in times of low payouts. Recent studies suggest that this is false: surge pricing in times of peak demand appears to produce positive work elasticity \cite{Chen:2016:DPL:2940716.2940798}, allowing supply and demand to balance more efficiently. \subsection{Network Model, Surge Pricing, Utility, and Passenger-Taxicab Equilibria} Our goal is to maximize social welfare, defined as the sum of valuations of the users serviced by taxicabs, minus the cost associated with providing such service. We do so by setting surge prices (one per location), and let the system reach equilibrium. Our surge pricing schemes have several additional features such as envy freeness. We consider two related settings: \begin{itemize} \item A continuous setting where supply and demand consist of infinitesimal quanta; {\em supply} and {\em demand} are modeled as fractional quantities at locations. This is analogous to the non-atomic traffic model used in Wardrop equilibria \cite{Wardrop:1952}. \begin{itemize} \item Here we assume that the taxicabs are strategic and respond to changing surge prices whereas passengers are non-strategic so that demand is insensitive to price (alternately, one may view these passengers as having high value for service). \item The cost for a taxicab at location $x$ to serve a customer at location $y$ is the distance from $x$ to $y$. \item Our goal here is to set a surge price $r_x$ at every location $x$ so as to incentivize taxicabs to act in a way that maximizes social welfare, {\sl i.e.}, all possible demand is serviced while the sum of distances traversed is minimized. 
\end{itemize} \item A discrete setting where both taxicabs and passengers are strategic, and every taxicab and passenger is associated with some location. \begin{itemize} \item In this setting both demand and supply may change as a function of the surge price. Every passenger has a value for service and every taxicab has a cost for service at a given location, {\sl e.g.}, the distance to the location. \item Our goal here is to maximize social welfare (the sum of the values for the served customers minus the sum of costs of the taxicabs to do so). \item At every location $x$, we set a surge price $r_x$ that incentivizes taxicabs to serve passengers in a manner that maximizes social welfare. \item Moreover, maximizing social welfare is not only in equilibrium but also envy free. \item Every passenger at $x$ whose value is strictly greater than $r_x$ is served, and no passenger at $x$ with value strictly less than $r_x$ is served. \end{itemize} \end{itemize} We define the utility for a taxicab at $x$ to serve a passenger at $y$ as the surge price at $y$, $r_y$, minus the distance from $x$ to $y$. A passenger at $x$ with value $v$ has utility $v-r_x$ to be served by a taxicab, and utility zero if she takes no taxicab. Clearly, a passenger at $x$ with $v-r_x<0$ will refuse to take a taxicab. We introduce the notion of a {\sl passenger-taxicab equilibrium}, for both the continuous and discrete settings. A flow is a mapping from the current supply to some new supply. A flow has an associated cost, which is the sum over edges of the flow along the edge times the length of the edge. A flow $f$ is said to be a min cost flow from the current supply to the new supply if it achieves the minimal cost for moving the current supply to the new supply (this cost is also called the min earthmover cost). 
A passenger-taxicab equilibrium consists of a vector of surge prices $r=\langle r_x\rangle$, where $r_x$ is the surge price at location $x$, current supply $s=\langle s_x\rangle$, new supply $s'=\langle s'_x \rangle$ and demand $d=\langle d_x \rangle$, such that, for any min cost flow from $s$ to $s'$, every taxicab and every passenger maximize their utility. {\sl I.e.}, no taxicab can improve its utility by doing anything other than following the flow; every passenger at $x$ who has value greater than $r_x$ is in $d_x$ and is served; every passenger at $x$ who has value less than $r_x$ is not served. The surge prices $r_x$ are poly time computable. In the continuous setting this is polynomial in the number of locations; in the discrete setting this is polynomial in the number of passengers and taxicabs. \subsection{Maximizing Social Welfare in an Online Setting via Surge Pricing} We consider an online setting based on the continuous setting, where time progresses in discrete time steps. In each time step the following occurs: First, a new demand allocation appears. Second, the online algorithm determines a new supply. Given an allocation of supply and demand, the demand served at a location is the minimum between the supply and demand at the location. The social welfare is the difference between the total demand served and the total movement cost, summed over all locations and time steps. The main new crux of our model is that the online algorithm (principal) cannot impose a new supply allocation, but is limited to setting surge prices. If flow $f$ is a flow equilibrium arising from these surge prices, then strategic suppliers follow flow $f$. Our results on surge prices for flow equilibria imply that the online algorithm has flexibility in selecting the desired supply. Trivially, for any metric, a simple algorithm that randomizes the start setting and doesn't move achieves a $\Theta(1/k)$ competitive ratio, where $k$ is the number of locations. 
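For intuition about the trivial $\Theta(1/k)$ bound, consider spreading supply uniformly, $1/k$ per location, and never moving: there is no movement cost, each step serves $\sum_v \min(1/k, d^t_v)\geq 1/k$ (since some vertex has demand at least $1/k$), and the optimum serves at most $1$ per step. A small sketch of this accounting (my own illustration, not code from the paper):

```python
def stationary_uniform_sw(demands):
    """Social welfare of 'spread supply uniformly, never move': zero movement
    cost, and per step the demand served is sum_v min(1/k, d_v) >= 1/k."""
    k = len(demands[0])
    return sum(min(1.0 / k, dv) for dt in demands for dv in dt)

# worst case: all demand concentrated at one vertex, k = 4, T = 3 steps
demands = [[1.0, 0.0, 0.0, 0.0]] * 3
print(stationary_uniform_sw(demands))  # -> 0.75, a 1/k fraction of OPT = 3
```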
However, if the cost of moving from any location to any other location is 1, we give an optimal competitive ratio of $\Theta(\sqrt{1/k})$. If the demand sequence has the property that at any time and location the demand does not exceed $1/\rho$ ($\rho \geq 1$), then we show a tight competitive ratio bound of $\Theta(\sqrt{\rho/k})$. For more general metric spaces we show mainly negative results. Specifically, if all the distances are $1+\epsilon$ we show that the competitive ratio is no better than $(1+\epsilon)^2/(\epsilon k)$, which implies an optimal competitive ratio of $\Theta(1/k)$ for $\epsilon =\Theta(1)$. Another extension we consider is when the average difference between successive demand vectors is bounded by $\delta$ (in total variation distance). In this case we show that simply matching supply to the current demand gives a competitive ratio of $1-\delta$, and show that the competitive ratio cannot be better than $1-\delta/4$ (in the case that all the distances are $1$). \subsection{Related Work} It has been observed in taxicab services that a mismatch between supply and demand, along with first-in-first-out scheduling of service calls, without restricting the ``call radius", results in reduced efficiency and even market failure \cite{ARNOTT:1996,YANG:2002}. This happens because taxicabs are dispatched to pick up customers at great distance because no closer taxicab is currently available, more time is wasted traveling to pick up clients, and the system performance degrades. Recent papers \cite{Chen:2016,Castillo:2017} study how changing surge prices over time allows one to avoid such issues. These papers do not consider the issue of having geographically varying surge prices. Assuming a stochastic passenger arrival rate, \cite{banerjee2015pricing} uses a queue theoretic approach to model driver incentives in the system. 
The paper considers a simplistic dynamic pricing scheme, where there are two different pricing schemes for each node depending on the amount of drivers at said node. This model is compared to a simple flat rate. Drivers are assumed to calculate their incentives over several rides. The paper concludes that the dynamic pricing scheme can only achieve the welfare of the flat rate. However, the dynamic pricing scheme allows for the manager to have more room for error in calculating what the optimal rates are. A central problem in handling a centralized taxi system involves routing empty cars between regions. Within the centralized mechanism, \cite{braverman2016empty} shows that, assuming stochastic arrival of passengers, an optimal static strategy ({\sl i.e.}, one that does not change its routing policy based on current shortages) can be calculated by solving a linear programming problem. Recently, and independently, a similar problem was studied in \cite{ma2018spatio}. In their model, selfish taxicabs seek to maximize revenue over time. There is no explicit cost for travel; one loses opportunities by taking long drives. They derive prices in equilibria that maximize the sum of passenger valuations, but ignore travel costs. In contrast, we ignore the time dimension and focus on the passenger valuations and travel costs. Competitive analysis of online algorithms \cite{ST85a,ST85b,Karlin1988} considers a worst case sequence of online events with respect to the ratio between the performance of an online algorithm and the optimal performance. In a centralized setting, task systems \cite{Borodin92} can be used to model a wide variety of online problems. Events are arbitrary vectors of costs associated with different states of the system, and an online algorithm may decide to switch states (at some additional cost). 
A strategic version of this problem, for a single agent, was considered in \cite{DBLP:conf/soda/CohenEFJ15} where a deterministic incentive compatible mechanism was given. The competitive ratio for incentive compatible task system mechanisms is $O(1/k)$ where $k$ is the number of states. We cannot use the incentive compatible task system mechanisms from \cite{DBLP:conf/soda/CohenEFJ15} for two reasons: (1) in our setting there are a large number of strategic agents (many Uber drivers) split amongst a variety of different [task system] states (locations) rather than one such agent in a single state, and (2) the suppliers have both profits (payments) and loss (relocation). Competitive analysis of the famous $k$-server problem \cite{MANASSE1990208} has largely driven the field of online algorithms. A variant of the $k$-server problem is known as the $k$-taxicab problem \cite{Fiat:90,XIN:2004}. Although the problem we consider herein and the $k$-taxicab problem both seek efficient online algorithms, and despite the name, the nature of the $k$-taxicab problem is quite different from the problem considered in this paper. In the $k$-taxicab problem a single request occurs at discrete time steps and a centralized control routes taxicabs to pick up passengers, seeking to minimize the distances traversed by taxis while empty of passengers. Taxicabs are not selfish suppliers, and all requests must be satisfied. This is quite different from our setting where both demand and supply are spread about geographically, there are many strategic suppliers, and not all demand must be served. \section{Model and Notation}\label{sec:model} \subsection{The Continuous Passenger-Taxicab Setting} We model the network as a finite metric space $G=(V,E)$, where $\ell_{u,v}\geq 0$ is the distance between vertices $u, v \in V$. {\sl I.e.}, $\ell_{u,v}$ is the cost to a taxicab to switch between vertices $u$ and $v$. Infinitesimally small taxicabs reside in the vertices $V$. 
Demand and supply are vectors in $[0,1]^{|V|}$ that sum to one. Given demand $d$ and current supply $s$, we incentivize strategic taxicabs so that current supply $s$ becomes new supply $s'$ which services the demand $d$. If the demand in vertex $u$ is $d_u$, and the new supply in vertex $u$ is $s'_u$, then the minimum of the two is the actual demand served (in vertex $u$). Note that if the two are not identical then there are either unhappy passengers (without service) or unhappy taxicabs (with no passengers to service). Formally, \begin{definition}\label{def:demandserved} we define the {\sl demand served}, as follows: \begin{itemize} \item The {\sl demand served} in vertex $u$, $\mathrm{ds}(s'_u,d_u)$, is the minimum of $s'_u$ and $d_u$, {\sl i.e.,} $\mathrm{ds}(s'_u,d_u)=\min(s'_u,d_u)$. \item Given a demand vector $d$ and a supply vector $s'$, the total demand served is $\mathrm{ds}(s',d)= \sum_{u\in V} \mathrm{ds}(s'_u,d_u)= \sum_{u\in V}{\min(s'_u,d_u)}$. \end{itemize} \end{definition} Switching supply from $s$ to $s'$ is implemented via a flow $f$. A flow from $s$ to $s'$ is a function $f(u,v):V\times V \mapsto \mathbb{R}^{\geq 0}$ that has the following properties: \begin{itemize} \item For all $u,v\in V$, $f(u,v)\geq 0$. \item For all $v\in V $, $\sum_{u\in V} f(u,v)=s'_v $. \item For all $u\in V$, $\sum_{v\in V} f(u,v)=s_u $. \end{itemize} We define the earthmover distance between supply vectors, \begin{definition}\label{def:earthmover} The cost of flow $f$ is $\mathrm{em}(f)=\sum_{u,v,\in V} f(u,v)\ell_{u,v}$. The earthmover distance from supply vector $s$ to supply vector $s'$ is $$\mathrm{em}(s,s')=\min_{\mbox{\rm flows $f$ from $s$ to $s'$}} \mathrm{em}(f).$$ \end{definition} We assume that switching supply from $s$ to $s'$ is implemented via a flow $f$ of minimal cost. Note that there may be multiple flows with the same minimal cost --- see Figures \ref{fig:flow1} and \ref{fig:flow2}. 
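The earthmover distance above is a min cost flow problem and can be solved with any LP or flow solver; for checking tiny examples, here is a brute-force sketch (my own illustration, not the paper's algorithm) under the assumption that both supply vectors have entries that are multiples of $1/N$: expand each vector into $N$ tokens of mass $1/N$ and minimize the matching cost over permutations.

```python
from itertools import permutations

def earthmover(s, sp, ell, N):
    """Brute-force em(s, s') = min_f sum_{u,v} f(u,v) * ell[u][v], assuming
    every entry of s and s' is a multiple of 1/N.  Exponential in N."""
    # expand each supply vector into N tokens of mass 1/N located at vertices
    src = [u for u, mass in enumerate(s) for _ in range(round(mass * N))]
    dst = [v for v, mass in enumerate(sp) for _ in range(round(mass * N))]
    assert len(src) == N and len(dst) == N, "supplies must each sum to 1"
    # a min cost flow between atoms of equal mass is a min cost perfect matching
    return min(sum(ell[u][v] for u, v in zip(src, perm))
               for perm in permutations(dst)) / N

# three vertices on a line: ell[u][v] = |u - v|
line = [[abs(u - v) for v in range(3)] for u in range(3)]
print(earthmover([0.5, 0.5, 0.0], [0.0, 0.5, 0.5], line, N=2))  # -> 1.0
```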
In order to incentivize our strategic taxicabs to move to a new supply vector, we use surge pricing in vertices. \begin{definition}\label{def:surgepricing} Surge pricing is a vector, $r\in (\mathbb{R}^{\geq 0})^{|V|}$, where $r_v$ is the payment to a taxicab that serves demand in vertex $v\in V$. \end{definition} We define the utility for an infinitesimal taxicab, given surge pricing $r$, as follows. \begin{definition}\label{def:utility} Given supply $s$, new supply $s'$, surge prices $r$, demand $d$, and a min cost flow $f$ from $s$ to $s'$, the utility for a taxicab that switches from vertex $u$ to vertex $v$ is $$\mu(u\mapsto v|s',r,d) = r_v \cdot \left(\frac{\mathrm{ds}(s'_v,d_v)}{s'_v}\right) - \ell_{u,v}.$$ \end{definition} To motivate the above definition of utility $\mu(u\mapsto v|s',r,d)$, of switching from $u$ to $v$, consider the following: \begin{itemize} \item The probability of serving a passenger in vertex $v$ is $\frac{\mathrm{ds}(s'_v,d_v)}{s'_v}$. This follows since: \begin{itemize} \item If passengers outnumber taxicabs in vertex $v$ then any such taxicab will surely serve a passenger. \item Alternately, if taxicabs outnumber passengers in vertex $v$ then the choice of which taxicabs serve passengers is a random subset of the taxicabs. \end{itemize} \item The profit from serving a passenger in vertex $v$ is equal to the surge price for that vertex, $r_v$. \item The cost of serving a passenger in vertex $v$, given that the taxicab was previously in vertex $u$, is $\ell_{u,v}$. \end{itemize} Finally, we define the notion of a passenger-taxicab equilibrium, where no infinitesimal taxicab can benefit from deviations. 
\begin{definition}\label{def:flowequilibrium} Given a demand vector $d$, current supply vector $s$, and new supply $s'$, we say that a surge pricing $r$ is in {\em passenger-taxicab equilibrium}, if for every min cost flow $f$ from $s$ to $s'$, for every $u,v\in V$ such that $f(u,v)>0$ we have that \begin{equation}\mu(u\mapsto v|s',r,d) =\max_{w\in V} \mu(u\mapsto w|s',r,d).\label{eq:incentivesequilibrium}\end{equation} {\em I.e.}, every infinitesimal taxicab is choosing a best response. Such a passenger-taxicab equilibrium is said to {\em induce supply $s'$.} \end{definition} Our goal in the continuous setting is to set surge prices so that the new supply $s'=d$ is a passenger-taxicab equilibrium. In this continuous setting we take demand $d$ to be insensitive to the surge prices. In the next section we describe the discrete setting where both the demand and the supply are sensitive to the prices. One could define a continuous passenger-taxicab setting where every location has an associated density function for passenger valuations. Then, we could convert this continuous setting to an instance of the discrete passenger-taxicab setting with $1/\epsilon$ taxicabs/passengers. Under appropriate conditions, this will give a good approximation to a continuous passenger-taxicab setting where both demand and supply are sensitive to surge pricing. \subsection{The Discrete Passenger-Taxicab Setting} As above, we model the network as a finite metric space $G=(V,E)$, and the cost to a taxicab to switch between vertices $u$ and $v$ is the distance between them, $\ell_{u,v}$. Unlike the continuous case, there is an integral number of taxicabs and passengers at every vertex. Let $B=\{b_1, \ldots, b_m\}$ be a set of $m$ passengers and $T=\{t_1, \ldots , t_n\}$ be a set of $n$ taxicabs. Every passenger $b_i\in B$ has a value $\mbox{\rm value}(b_i)\geq 0$ for service. 
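The utility and equilibrium conditions above are directly checkable on small instances. A sketch (my own, assuming $s'_v>0$ wherever a deviation is evaluated, so the service probability is well defined):

```python
def mu(u, v, s_new, r, d, ell):
    """mu(u -> v | s', r, d) = r_v * ds(s'_v, d_v)/s'_v - ell[u][v]."""
    return r[v] * min(s_new[v], d[v]) / s_new[v] - ell[u][v]

def is_equilibrium(f, s_new, r, d, ell, tol=1e-9):
    """Every u -> v carrying positive flow must be a best response for taxicabs at u."""
    k = len(s_new)
    for u in range(k):
        best = max(mu(u, w, s_new, r, d, ell) for w in range(k))
        if any(f[u][v] > tol and mu(u, v, s_new, r, d, ell) < best - tol
               for v in range(k)):
            return False
    return True

# two vertices at distance 1; half the supply must move from vertex 0 to 1
ell = [[0, 1], [1, 0]]
d = s_new = [0.5, 0.5]
f = [[0.5, 0.5], [0.0, 0.0]]          # all supply starts at vertex 0
print(is_equilibrium(f, s_new, [1.0, 2.0], d, ell))  # -> True  (r_1 - 1 = r_0)
print(is_equilibrium(f, s_new, [1.0, 1.0], d, ell))  # -> False (staying is better)
```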
A supply $s$ is a vector $s = \langle s_v \rangle_{v\in V}$ where $s_v\subseteq T$ for all $v \in V$, $\cup_{v\in V} s_v=T$, and $s_v\cap s_u = \emptyset$ for all $u, v \in V$, $u\neq v$. A profile $P$ is a partition of the passengers $B$, where for each $u\in V$ the set $P_u \subseteq B$ is the set of passengers at $u$. A demand is a function of a vertex and a surge price at the vertex. We define the function $d_v$ as follows: $$d_v(r_v)= \{ b_i \in P_v | \mbox{\rm value}(b_i)\geq r_v \}.$$ Ergo, $d_v(r_v)$ is the set of passengers at vertex $v$ that are interested in service given that the price is $r_v$, {\sl i.e.}, those passengers whose value is at least $r_v$. Note that $d_v(0)=P_v$. For ease of notation, we denote a collection of entities $x_v$ for each vertex $v\in V$, by $x=\langle x_v\rangle_{v\in V}$. For example, $s=\langle s_v\rangle_{v\in V}$, $d=\langle d_v \rangle$, and $r=\langle r_v\rangle_{v\in V}$. Define a flow $f$ from supply $s$ to supply $s'$ as follows. The flow $f(u,v):V\times V \mapsto \mathbb{Z}^{\geq 0}$ has the following properties: \begin{itemize} \item For all $u,v\in V$, $f(u,v)\in \mathbb{Z}^{\geq 0}$. \item For all $u\in V$, $\sum_{v\in V} f(u,v)=|s_u|$. \item For all $v\in V$, $\sum_{u\in V} f(u,v)=|s'_v|$. \end{itemize} The flow from a vertex $u$ is equal to the number of taxicabs at $u$ under supply $s$, {\sl i.e.}, $|s_u|$. The flow into a vertex $v$ is equal to the number of taxicabs at $v$ under supply $s'$, {\sl i.e.}, $|s'_v|$. The cost of a flow in the discrete setting is the same as the cost of a flow in the continuous setting (Definition \ref{def:earthmover}), {\sl i.e.}, $\sum_{u,v\in V} f(u,v)\ell_{u,v}$. 
We now define the demand served at a vertex $v$, \begin{definition}\label{def:demandserved.discrete} For a vertex $v$, given a supply $s'_v$, a surge price $r_v$, and a demand $d_v(r_v)$, we define the {\sl demand served}, $\mathrm{ds}_v(s'_v,d_v,r_v)\subseteq P_v$, as follows: \begin{itemize} \item If $|d_v(r_v)| \leq |s'_v|$ then $\mathrm{ds}_v(s'_v,d_v,r_v) = d_v(r_v)$. \item If $|s'_v| < |d_v(r_v)|$ then $\mathrm{ds}_v(s'_v,d_v,r_v)$ is the set of the $|s'_v|$ highest valued passengers from $d_v(r_v)$, breaking ties arbitrarily. \end{itemize} Given demand functions $d$, surge prices $r$, and new supply $s'$, the total demand served $\mathrm{ds}(s',d,r)$ and its value $\mathrm{dsv}(s',d,r)$ are given by \begin{eqnarray*} \mathrm{ds}(s',d,r)&=& \cup_{v\in V} \mathrm{ds}_v(s'_v ,d_v,r_v);\\ \mathrm{dsv}(s',d,r)&=& \sum_{b_i\in\mathrm{ds}(s' ,d,r)} \mbox{\rm value}(b_i). \end{eqnarray*} \end{definition} \begin{definition}\label{def:sw} The social welfare is the difference between the sum of the values of the passengers served and the cost of the min cost flow, which is the sum of the distances traveled by the taxis. Namely, for current supply $s$, new supply $s'$, demand functions $d$, and surge prices $r$, the social welfare is \begin{equation} SW(s,s',r,d)=\mathrm{dsv}(s',d,r)-\mathrm{em}(s,s'). \label{eq:swdiscrete} \end{equation} \end{definition} Remark: we did not define social welfare in the continuous passenger-taxicab setting where the passengers are price insensitive. However, one can view the social welfare in the price-insensitive demand setting as a special case of the responsive demand setting when all passenger valuations are very high. 
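The discrete demand-served and social-welfare definitions above can be sketched as follows (my own illustration; the earthmover term is taken as a given input rather than recomputed):

```python
def demand_served_at(values_at_v, supply_count, r_v):
    """d_v(r_v): passengers at v with value >= r_v; serve the min(|s'_v|, |d_v|)
    highest-valued ones (ties broken arbitrarily by sort order)."""
    demand = sorted((b for b in values_at_v if b >= r_v), reverse=True)
    return demand[:supply_count]

def social_welfare(profile, supply_counts, r, em_cost):
    """SW = dsv(s', d, r) - em(s, s'), with the min cost flow value given."""
    dsv = sum(sum(demand_served_at(profile[v], supply_counts[v], r[v]))
              for v in range(len(profile)))
    return dsv - em_cost

# one vertex, 2 taxicabs, surge price 2: passengers 5 and 3 are served, 1 is not
print(demand_served_at([5, 3, 2, 1], 2, 2))                   # -> [5, 3]
print(social_welfare([[5, 3, 2, 1]], [2], [2], em_cost=1.0))  # -> 7.0
```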
Analogously to the definitions of utility and passenger-taxicab equilibrium in the continuous case, one can define them for the discrete case: The utility of a taxicab $t_j\in s_u$ moving from $u$ to $v$, given new supply $s'$, surge prices $r$ and demand functions $d$, is $$ \mu_{t_j}(u\mapsto v|s',r,d)=\frac{\min(|d_v(r_v)|,|s'_v|)}{|s'_v|}\cdot r_v-\ell_{u,v}. $$ \begin{definition}\label{def:discrete.equi} Given demand $d$, current supply $s$ and new supply $s'$, surge prices $r$ are said to be in {\em passenger-taxicab} equilibrium if for every min cost flow $f$ from $s$ to $s'$ and for any $u,v$ such that $f(u,v)>0$ we have that \begin{itemize} \item Taxicabs are choosing a best response: $\mu_{t_j}(u\mapsto v|s',r,d)=\max_{w\in V}(\mu_{t_j}(u\mapsto w|s',r,d))$. \item All passengers $b\in B$ with $\mbox{\rm value}(b)>r_{\operatorname{loc}(b)}$ are served. No passengers $b\in B$ with $\mbox{\rm value}(b)<r_{\operatorname{loc}(b)}$ are served. \end{itemize} \end{definition} \subsection{Online Setting} In the online setting we inherit the continuous model setting, adding a dependence on time. Time progresses in discrete time steps $1, 2, \ldots , T$. At time $t$ the demand vector $d^t=(d^t_1,d^t_2,\ldots,d^t_k)$ associates each vertex $v\in V$ with some demand $d^t_v\geq 0$, and we assume that the total demand $\sum_i d^t_i=1$. One should not think of a time step as being instantaneous, but rather as a period of time during which the demands remain steady. Every time step $t$ also has an associated supply vector $s^t=(s^t_1,s^t_2, \ldots, s^t_k)$, where $s^t_i\geq 0$ and $\sum_i s^t_i =1$ for all $t$. The supply at time $t$ is a ``reshuffle" of the supply at time $t-1$, by having infinitesimally small suppliers moving about the network. In our model, the time required for suppliers to adjust supply from $s^{t-1}$ to $s^t$ is small relative to the period of time during which demand $d^t$ is valid. 
If the demand in vertex $i$ at time $t$ is $d^t_i$, and the supply in vertex $i$ at time $t$ is $s^t_i$, then the minimum of the two is the actual demand served (in vertex $i$ at time $t$). Note that if the two are not identical then there are either unhappy customers (without service) or unhappy suppliers (with no customer to service). Formally, we define the benefit derived during each time period, the {\sl demand served}, as in the continuous model. We define the social welfare as follows: \begin{definition}\label{def:socialwelfare} Given a demand sequence $d=(d^1, \ldots, d^T)$ and a supply sequence $s=(s^1, \ldots, s^T)$ we define the social welfare $$\mathrm{sw}\left(s,d\right) = \mathrm{ds}(s,d) - \mathrm{em}(s) = \sum_{t=1}^T \mathrm{ds}(s^t,d^t) - \sum_{t=2}^T \mathrm{em}(s^{t-1},s^t).$$ \end{definition} An online algorithm for social welfare follows the following structure. At time $t = 1, 2, \ldots, T$: \begin{enumerate} \item A new demand vector $d^t$ appears. \item The online algorithm determines what the supply vector $s^t$ should be. (Indirectly, by computing and posting surge prices so that the resulting passenger-taxicab equilibrium induces supply $s^t$). \end{enumerate} The goal of the online algorithm is to maximize the social welfare as given in Definition \ref{def:socialwelfare}: Compute a supply sequence $s$, so as to maximize $\mathrm{sw}(s,d)$. The supply vector $s^t$ is a function of the demand vectors $d^1, \ldots, d^t$ but not of any demand vector $d^\tau$, for $\tau>t$. Implicitly, we assume that the passenger-taxicab equilibrium is attained quickly relative to the rate at which demand changes. 
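The online social welfare above specializes nicely when all pairwise distances are 1 (the uniform metric studied later): since both supply vectors sum to one, the earthmover distance is just the total mass that must relocate, $\sum_u \max(0, s_u - s'_u)$. A sketch under that assumption (my own illustration):

```python
def em_uniform(s, sp):
    # uniform metric: earthmover distance = total mass that must relocate
    return sum(max(0.0, su - spu) for su, spu in zip(s, sp))

def online_sw(supplies, demands):
    """sw(s, d) = sum_t ds(s^t, d^t) - sum_{t=2}^T em(s^{t-1}, s^t)."""
    ds = sum(min(sv, dv)
             for st, dt in zip(supplies, demands)
             for sv, dv in zip(st, dt))
    return ds - sum(em_uniform(a, b) for a, b in zip(supplies, supplies[1:]))

# perfectly chasing a demand swap on two vertices: serve 2, pay 1 to relocate
print(online_sw([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]))  # -> 1.0
```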
The competitive ratio of such an online algorithm, $\mbox{\rm Alg}$, is the worst case ratio between the numerator: the social welfare resulting from the demand sequence $d$ and the online supply $\mbox{\rm Alg}(d)$, and the denominator: the optimal social welfare for the same demand sequence, {\sl i.e.}, $$ \min_{d} \frac{\mathrm{sw}(\mbox{\rm Alg}(d),d)}{\max_s\mathrm{sw}(s,d)}. $$ \section{The Continuous Passenger-Taxicab Setting}\label{sec:surge} In this section we deal with the continuous passenger-taxicab setting. Given current supply $s$, demand $d$ and new supply $s'=d$, we show how to set surge prices $r$ such that they are in passenger-taxicab equilibrium. Moreover, for these $s$, $d$, and $r$, the only possible $s'$ which results in a passenger-taxicab equilibrium is $s'=d$. (Similar techniques give surge prices that induce [almost] arbitrary supply vectors, $\tilde{s}$, see below). Proof overview: Given some min cost flow $f^*$ from supply $s$ to demand $d$, we construct a unit demand market, with bidders and items. For every $x,y$ such that $f^*(x,y)>0$ we construct a bidder and an item. We also define bidder valuations for all items. This unit demand market has Walrasian clearing prices that maximize social welfare (Lemma~\ref{lemma:properallocation}). We show how we can convert the Walrasian prices on items to surge pricing (Lemma~\ref{lemma:Wprice-location}). We then show that the resulting surge pricing has a passenger-taxicab equilibrium which induces supply equal to demand (Lemma~\ref{lemma:exists_eq}), and that this is the case for all passenger-taxicab equilibria (Lemma~\ref{lemma:eq_unique}). Lemma \ref{lemma:altflow} shows that the incentive requirements in Equation (\ref{eq:incentivesequilibrium}) also hold for any min cost flow $f\neq f^*$, from $s$ to $d$. This proves {\sl Theorem \ref{thm:surge}}. \input{figures123} \input{figure3} As a running example, consider the road network in Figure \ref{fig:example}. 
Also, assume that the supply vector $s^{t-1}=\langle\frac{1}{3},\frac{1}{3},\frac{1}{3},0,0,0\rangle$ and demand vector $d^t=\langle0,0,\frac{1}{8},\frac{3}{8},\frac{3}{8},\frac{1}{8}\rangle$. Two minimum cost flows are given in Figures \ref{fig:flow1} and \ref{fig:flow2}. Both these flows have cost $1$. Given a minimum cost flow $f^*$, we define a unit demand market setting as follows: \begin{itemize} \item Items $M^{f^*}$, and unit demand bidders $B^{f^*}$, both of which are indexed by pairs of vertices, where $$M^{f^*}= \left\{ m_{xy} \mid x,y\in V, f^*(x,y)>0 \right\} \qquad B^{f^*}= \left\{ b_{wz} \mid w,z\in V, f^*(w,z)>0 \right\}.$$ \item We set the value of item $m_{xy}\in M^{f^*}$ to bidder $b_{wz}\in B^{f^*}$ to be, $$\zeta_{b_{wz}}(m_{xy})=C - \ell_{w,y}, \qquad\mbox{\rm where\ } C=\max_{i,j}\ell_{i,j}+1.$$ \item The utilities of bidders are unit demand and quasi-linear, {\sl i.e.}, the utility $\eta_{b_{wz}}$ of bidder $b_{wz}\in B^{f^*}$ for item set $S$ and price $p$ is $$ \eta_{b_{wz}}\left(S\right)=\max_{m_{xy}\in S} \zeta_{b_{wz}}(m_{xy})-p.$$ \end{itemize} As an example, let $f^*$ be the minimum cost flow of Figure~\ref{fig:flow1}. The market induced by $f^*$ is illustrated in Figure~\ref{fig:marketfig}. Given a flow $f^*$, bidders $B^{f^*}$ and items $M^{f^*}$ we define the following weighted bipartite graph $G(B^{f^*},M^{f^*},E)$, where between bidder $b_{wz}\in B^{f^*}$ and item $m_{xy}\in M^{f^*}$ there is an edge of weight $C-\ell_{w,y}\geq 1$. 
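The market construction above is easy to instantiate. A sketch (my own) that builds the bidders, items and valuations $\zeta$ from a flow and checks optimality of a matching by brute force; the paper's machinery uses Walrasian prices, brute force here is only for verifying tiny examples:

```python
from itertools import permutations

def market_from_flow(flow, ell):
    """Bidders b_wz / items m_xy for each pair with f*(w,z) > 0;
    zeta_{b_wz}(m_xy) = C - ell[w][y], with C = max_{i,j} ell[i][j] + 1."""
    pairs = [(w, z) for w in range(len(ell)) for z in range(len(ell))
             if flow[w][z] > 0]
    C = max(max(row) for row in ell) + 1
    zeta = {(b, m): C - ell[b[0]][m[1]] for b in pairs for m in pairs}
    return pairs, zeta

def max_matching_weight(pairs, zeta):
    # brute force over perfect matchings of bidders to items
    return max(sum(zeta[(b, m)] for b, m in zip(pairs, items))
               for items in permutations(pairs))

# line metric on 3 vertices, flow moving mass 0 -> 1 and 1 -> 2
line = [[abs(u - v) for v in range(3)] for u in range(3)]
flow = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
pairs, zeta = market_from_flow(flow, line)
identity = sum(zeta[(b, b)] for b in pairs)       # the matching g(b_wz) = m_wz
print(identity == max_matching_weight(pairs, zeta))  # -> True: g is optimal
```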
\begin{definition} Given a flow $f^*$, a {\em matching} between bidders $B^{f^*}$ and items $M^{f^*}$ is a function $\pi:B^{f^*}\mapsto M^{f^*} \cup \{\emptyset\}$, where bidder $b\in B^{f^*}$ is matched to item $\pi(b)\in M^{f^*}$ or unmatched (if $\pi(b)=\emptyset$), such that no two bidders $b_1,b_2\in B^{f^*}$ are matched to the same item $m\in M^{f^*}$. As there is an edge between every bidder $b_{wz}$ and every item $m_{xy}$ with weight $C-\ell_{w,y}\geq 1$, the maximum weight matching is a perfect matching between bidders and items and the mapping $\pi$ never assigns $\emptyset$ to a bidder. \end{definition} \input{figures45} \begin{lemma}\label{lemma:properallocation} The matching $g$ where $g(b_{wz})=m_{wz}$, maximizes social welfare. In addition, there exist Walrasian prices for which $g$ is a competitive market equilibrium. \end{lemma} \begin{proof} The proof is via contradiction. Assume there exists some matching $\tilde{g}:B^{f^*}\mapsto M^{f^*}$ with strictly greater social welfare than the matching $g$. For a bidder $b\in B^{f^*}$, define $\tilde{h}(b)=z$ iff $\tilde{g}(b)=m_{wz}$ for some $w\in V$, and $h(b)=z$ iff $g(b)=m_{wz}$ for some $w\in V$. Note that $h\left(b_{wu}\right)=u$, so for a given $w$ and $z$ we have $\left|\left\{u \mid h\left(b_{wu}\right)=z\right\}\right|=1$ if $f^*(w,z)>0$ and zero otherwise. Choose $\epsilon$ to be the minimum non-zero flow in $f^*$, {\sl i.e.}, $\epsilon = \min\{f^*(w,z) \mid f^*(w,z)>0\}$. We now define a flow $f'$, which is a slight perturbation of flow $f^*$. 
In flow $f'$, the flow from $w$ to $z$ is: $$f'(w,z)= f^*(w,z) + \epsilon\left(\left|\left\{u \mid \tilde{h}\left(b_{wu}\right)=z\right\}\right| - \left|\left\{u \mid h\left(b_{wu}\right)=z\right\}\right|\right).$$ We first prove that $f'$ is a valid flow, and later we show that it has a lower cost than $f^*$, in contradiction to the minimality of $f^*$. \begin{lemma}\label{lem:validflow} Flow $f'$ is a valid flow from supply vector $s^{t-1}$ to demand vector $d^t$. \end{lemma} \begin{proof} Consider the requirements that $f'$ be a valid flow: \begin{itemize} \item For all $x,y\in V$, $f'\left(x,y\right)\geq 0$: By definition of $f'$, if $f^*(w,z)=0$ then $f'(w,z)\geq 0$, and if $f^*(w,z)>0$ then $f'(w,z)\geq f^*(w,z)-\min\{f^*(w,z) \mid f^*(w,z)>0\}\geq 0$. \item For all $x\in V$, $\sum_y f'\left(x,y\right)=s^{t-1}_x$: By definition of $f^*$ we have $\sum_y f^*(x,y)=s^{t-1}_x$. 
Thus, \begin{eqnarray*}\sum_y f'(x,y)&=& \sum_y \bigg(f^*(x,y) + \big(\left|\left\{u| \tilde{h}\left(b_{xu}\right)=y\right\}\right| - \left|\left\{u| h\left(b_{xu}\right)=y\right\}\right|\big)\cdot\epsilon\bigg) \\ &=&s^{t-1}_x + \left(\sum_y \left|\left\{u| \tilde{h}\left(b_{xu}\right)=y\right\}\right| - \sum_y \left|\left\{u| h\left(b_{xu}\right)=y\right\}\right|\right)\cdot\epsilon \\ &=& s^{t-1}_x + \left(\left|\left\{u|b_{xu}\in B^{f^*}\right\}\right|-\left|\left\{u|b_{xu}\in B^{f^*}\right\}\right|\right)\cdot\epsilon\\ &=&s^{t-1}_x.\end{eqnarray*} \item For all $y\in V$, $\sum_x f'\left(x,y\right)=d^t_y$: By definition of $f^*$ we have $\sum_x f^*(x,y)=d^t_y$. 
Thus, \begin{eqnarray*}\sum_x f'(x,y)&=& \sum_x \bigg(f^*(x,y) + \big(\left|\left\{u| \tilde{h}\left(b_{xu}\right)=y\right\}\right| - \left|\left\{u| h\left(b_{xu}\right)=y\right\}\right|\big)\cdot\epsilon\bigg) \\ &=&d^t_y + \left(\sum_x \left|\left\{u| \tilde{h}\left(b_{xu}\right)=y\right\}\right| - \sum_x \left|\left\{u| h\left(b_{xu}\right)=y\right\}\right|\right)\cdot\epsilon \\ &=& d^t_y + \left(\left|\left\{x|m_{xy}\in M^{f^*}\right\}\right|-\left|\left\{x|m_{xy}\in M^{f^*}\right\}\right|\right)\cdot\epsilon\\ &=&d^t_y,\end{eqnarray*} where the next-to-last equality holds because both $\tilde{g}$ and $g$ are perfect matchings, so under either matching the number of bidders matched to items of the form $m_{xy}$ is exactly the number of such items. \end{itemize} \end{proof} From the fact that $\tilde{g}$ has a higher social welfare we get, \[ \sum_{w,z:b_{wz}\in B^{f^*}} \zeta_{b_{wz}}(\tilde{g}(b_{wz})) > \sum_{w,z:b_{wz}\in B^{f^*}} \zeta_{b_{wz}}({g}(b_{wz}))\;. 
\] Using the definition of the valuations we have, \[ \sum_{w,z:b_{wz}\in B^{f^*}} C - \ell_{w,\tilde{h}(b_{wz})} > \sum_{w,z:b_{wz}\in B^{f^*}} C - \ell_{w,{h}(b_{wz})}\;. \] This implies that \[ \sum_{w,z:b_{wz}\in B^{f^*}} \ell_{w,{h}(b_{wz})} > \sum_{w,z:b_{wz}\in B^{f^*}} \ell_{w,\tilde{h}(b_{wz})}\;. \] Using this last inequality, it follows that the cost of $f^*$ (Definition \ref{def:earthmover}) satisfies \begin{align*} \mathrm{em}(f^*) &= \sum_{x,y} f^*(x,y)\cdot\ell_{x,y} \\ &> \sum_{x,y} f^*(x,y)\cdot\ell_{x,y} + \left(\sum_{w,z:b_{wz}\in B^{f^*}} \ell_{w,\tilde{h}(b_{wz})} - \sum_{w,z:b_{wz}\in B^{f^*}} \ell_{w,{h}(b_{wz})}\right)\cdot\epsilon = \mathrm{em}(f'), \end{align*} which contradicts the fact that flow $f^*$ is a minimum cost flow. The fact that there exist Walrasian prices for $g$ that are in competitive market equilibrium follows from \cite{GS99}. This concludes the proof of Lemma \ref{lemma:properallocation}. \end{proof} Let the Walrasian price of $m_{xy}$ be $p_{xy}$, as guaranteed by the lemma above. We first show that any two items that correspond to the same vertex must have the same price. \begin{lemma} \label{lemma:Wprice-location} For any two items $m_{xy}$ and $m_{x'y}$ we have $p_{xy}=p_{x'y}$. \end{lemma} \begin{proof} For contradiction assume that $p_{xy}>p_{x'y}$. Let $b_{wz}$ be the bidder assigned $m_{xy}$. Then for item $m_{xy}$ bidder $b_{wz}$ has utility $\eta_{b_{wz}}(m_{xy})=C-\ell_{w,y}-p_{xy}<C-\ell_{w,y}-p_{x'y}=\eta_{b_{wz}}(m_{x'y})$, which implies that $m_{xy}$ is not in the demand set for bidder $b_{wz}$, a contradiction to the fact that $p$ are Walrasian prices. \end{proof} For any $y\in V$ such that there exist items of the form $m_{xy}$ for some $x\in V$, let $p_y$ denote the Walrasian price for such items (by Lemma~\ref{lemma:Wprice-location} all those Walrasian prices are identical). 
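As a concrete aside, the minimal Walrasian prices used here can be computed by brute force in a tiny unit-demand market, using the fact that minimal Walrasian prices coincide with VCG payments \cite{Leonard83}. The sketch below is illustrative only: the market, the `values` matrix and all function names are hypothetical and not part of the construction above.

```python
from itertools import permutations

def max_welfare(values, buyers, items):
    """Best total value over injective assignments of items to buyers."""
    if not buyers:
        return 0.0, {}
    best, best_assign = float("-inf"), {}
    for chosen in permutations(items, len(buyers)):
        w = sum(values[b][j] for b, j in zip(buyers, chosen))
        if w > best:
            best, best_assign = w, dict(zip(buyers, chosen))
    return best, best_assign

def vcg_prices(values):
    """Minimal Walrasian prices = VCG payments, by brute force."""
    buyers = list(range(len(values)))
    items = list(range(len(values[0])))
    w_all, assign = max_welfare(values, buyers, items)
    prices = {j: 0.0 for j in items}          # unallocated items are free
    for b, j in assign.items():
        others = [x for x in buyers if x != b]
        w_without_b, _ = max_welfare(values, others, items)
        # externality imposed by b = price of the item b receives
        prices[j] = w_without_b - (w_all - values[b][j])
    return assign, prices

# Hypothetical toy market: values[b][j] plays the role of C - distance.
values = [[4.0, 3.0], [3.0, 1.0]]
assign, prices = vcg_prices(values)
print(assign, prices)   # {0: 1, 1: 0} {0: 1.0, 1: 0.0}
```

Surge prices would then be read off from these item prices as in the construction above (e.g., $r_y = C - p_y$).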
If no items of the form $m_{xy}$ exist, then the demand at vertex $y$ satisfies $d^t_y=0$, and we can set $p_y=C$, so that the corresponding surge price is zero. Define surge prices, $r^t_y = C-p_y$, for all $y\in V$. \begin{lemma} \label{lemma:exists_eq} Given current supply $s^{t-1}$ and demand $d^t$, surge prices $r^t_y=C-p_y$, and new supply $s'=d$, for all $x,y,w\in V$ such that $f^*(x,y)>0$ we have $$\mu^t(x\rightarrow y|s',r,d)\geq \mu^t(x\rightarrow w|s',r,d).$$ \end{lemma} \begin{proof} Let $x,y$ be such that $f^*(x,y)>0$. Then, \begin{eqnarray} \mu^t(x\rightarrow y|s',r,d) &=& r^t_y\cdot\min{\left(1,\frac{d^t_y}{s^t_y}\right)}-\ell_{x,y} \label{eq:firstdef}\\ &=& r^t_y-\ell_{x,y} \label{eq:minone}\\ &=& C-p_y-\ell_{x,y} \label{eq:seconddef}\\ &=& \eta_{b_{xy}}\left(m_{xy}\right) \label{eq:thirddef}\\ &\geq& \eta_{b_{xy}}\left(m_{zw}\right)\qquad \forall m_{zw}\in M^{f^*} \Leftrightarrow \forall m_{zw} : s'_w>0 \label{eq:marketeq}\\ &=& C-p_w-\ell_{x,w} \label{eq:thirddefmirror}\\ &=& r^t_w-\ell_{x,w} \label{eq:seconddefmirror}\\ &\geq& r^t_w\cdot\min{\left(1,\frac{d^t_w}{s^t_w}\right)}-\ell_{x,w} \label{eq:secondminonw}\\ &=& \mu^t(x\rightarrow w|s',r,d) \label{eq:firstdefmirror} \end{eqnarray} Equations \eqref{eq:firstdef},\eqref{eq:firstdefmirror} follow from the definition of the utility in the continuous passenger-taxicab setting, Definition \ref{def:utility}.\\ Equations \eqref{eq:minone},\eqref{eq:secondminonw} follow from considering the passenger-taxicab equilibrium where 
$s^t=d^t$, resulting in $\frac{d^t_y}{s^t_y}=1$ for all $y$.\\ Equations \eqref{eq:seconddef},\eqref{eq:seconddefmirror} follow from the definition of the surge prices.\\ Equations \eqref{eq:thirddef},\eqref{eq:thirddefmirror} follow from the definition of the utility in the market setting.\\ Equation \eqref{eq:marketeq} follows from the market equilibrium. So, we have that $$\mu^t(x\rightarrow y|s',r,d) \geq \mu^t(x\rightarrow w|s',r,d) \qquad \forall w : s'_w>0.$$ It remains to consider $\mu^t(x\rightarrow w|s',r,d)$ for $w$ such that $s'_w=0$. In this case the surge price at $w$ is zero, so the utility $\mu^t(x\rightarrow w|s',r,d)\leq 0$. \end{proof} The following lemma shows that the incentive requirements of Equation (\ref{eq:incentivesequilibrium}) hold not only for the flow $f^*$, but also for {\sl any} min cost flow from $s$ to $d$. \begin{lemma} \label{lemma:altflow} Fix current supply $s^{t-1}$ and demand $d^t$, surge prices $r^t_y=C-p_y$, and new supply $s'=d$. Let $f'$ be an arbitrary min cost flow from $s$ to $s'=d$. Then, for any $x,y,w\in V$ such that $f'(x,y)>0$ we have that $$\mu^t(x\rightarrow y|s',r,d)\geq \mu^t(x\rightarrow w|s',r,d).$$ \end{lemma} \begin{proof} For a min cost flow $f$ from $s$ to $s'$ define $$\Gamma(f) = \sum_{u\in V}\sum_{v\in V} f(u,v)\cdot(r_v - \ell_{u,v}) = \sum_{v\in V} s'_v\cdot r_v - \mathrm{em}(s,s').$$ As $f'$ and $f^*$ are both min cost flows from $s$ to $s'$, we have that $\Gamma(f^*)=\Gamma(f')$. 
For contradiction assume there exist some $u,v$ such that $f'(u,v)>0$ and $\mu(u\mapsto v|s',r,d)<\max_{w\in V}\mu(u\mapsto w|s',r,d)$. \begin{eqnarray} \Gamma(f^*) &=& \sum_{u\in V} \sum_{v\in V} f^*(u,v)\cdot(r_v-\ell_{u,v}) \label{eq:gamma1}\\ &=& \sum_{u\in V} \sum_{v\in V} f^*(u,v)\cdot\max_{w\in V}(r_w-\ell_{u,w}) \label{eq:gamma2}\\ &=&\sum_{u\in V} s_u\cdot\max_{w\in V}(r_w-\ell_{u,w}) \label{eq:gamma3}\\ &=& \sum_{u\in V} \sum_{v\in V} f'(u,v)\cdot\max_{w\in V}(r_w-\ell_{u,w}) \label{eq:gamma4}\\ &>& \sum_{u\in V} \sum_{v\in V} f'(u,v)\cdot(r_v-\ell_{u,v}) = \Gamma(f')\;. \label{eq:gamma5} \end{eqnarray} Eq. (\ref{eq:gamma1}) follows from the definition of $\Gamma$. Eq. (\ref{eq:gamma2}) follows from Lemma~\ref{lemma:exists_eq}. Eqs. (\ref{eq:gamma3}) and (\ref{eq:gamma4}) follow from the definition of a flow, since for any flow $f$ from $s$ we have that $s_u=\sum_{v\in V}f(u,v)$. Eq. (\ref{eq:gamma5}) holds since we assumed, for contradiction, that there exist some $u,v$ such that $f'(u,v)>0$ and $\mu(u\mapsto v|s',r,d)<\max_{w\in V}\mu(u\mapsto w|s',r,d)$. Hence, we reached a contradiction to the assumption that $f'$ is a min cost flow. \end{proof} It follows from the lemma above that computing surge prices $r$ via flow $f^*$ ensures that taxicab routing using any other min cost flow $f'$ is also a best response under surge prices $r$. Next we show that all relevant passenger-taxicab equilibria have new supply $s^t=d^t$. \begin{lemma} \label{lemma:eq_unique} Given current supply $s$, demand $d$, and surge prices $r^t_y=C-p_y$, all passenger-taxicab equilibria induce $s^t=d$. \end{lemma} \begin{proof} Let $f$ be a min cost flow from $s$ to $d$. For contradiction, assume that there exists some $\bar{s}\neq d$ such that $s,s'=\bar{s},d,r$ are in a passenger-taxicab equilibrium. Let $f'$ be some min cost flow from $s$ to $\bar{s}$. 
Consider $H=\left\{y| \sum_x f'(x,y)>d^t_y\right\}$ ({\sl i.e.}, the set of all vertices for which the flow $f'$ results in strictly more supply than demand). Since $\bar{s}\neq d$ and both sum to 1, we have that $H\neq \emptyset$. Let $H'=\left\{x| \exists y\in H \, \mathrm{s.t.} \, f'(x,y)>0 \right\}$ ({\sl i.e.}, the set of all vertices from which supply flows to $H$). As $H\neq \emptyset$ it follows that $H'\neq \emptyset$. We claim that there exists some $w\in H'$, $y\not\in H$, such that $f(w,y)>0$. For contradiction assume that all the flow in $f$ from vertices in $H'$ is to vertices in $H$. By the definition of flows: $\sum_{y\in V} f\left(x,y\right) = s^{t-1}_x= \sum_{y\in V} f'\left(x,y\right)$. We now have, \begin{eqnarray*} \sum_{x\in H'}\sum_{y\in V} f(x,y) &=& \sum_{x\in H'}\sum_{y\in H} f(x,y) \\ &\leq& \sum_{y\in H}\sum_{x\in V} f(x,y) \;=\; \sum_{y\in H} d^t_y \\ &<& \sum_{y\in H} \sum_{x\in H'} f'(x,y) \\ &\leq& \sum_{x\in H'}\sum_{y\in V} f'(x,y) \;=\; \sum_{x\in H'}\sum_{y\in V} f(x,y), \end{eqnarray*} which is a contradiction. (The first equality is the contradiction assumption; the strict inequality holds since, by the definition of $H'$, all flow in $f'$ into a vertex $y\in H$ originates in $H'$, and by the definition of $H$ this flow exceeds $d^t_y$; the final equality holds since $f$ and $f'$ ship the same supply $s^{t-1}_x$ out of every $x$.) This implies that there exist $w\in H'$ and $x\not\in H$ such that $f\left(w,x\right)>0$. Since $w\in H'$ there also exists some $y\in H$ such that $f'(w,y)>0$. 
We have shown that $s,s'=d,d,r$ is in passenger-taxicab equilibrium; this implies that $$\mu^t(w\mapsto x|s',r,d)=r^t_x - \ell_{w,x} \geq r^t_y - \ell_{w,y}=\mu^t(w\mapsto y|s',r,d).$$ Since $y\in H$ we have that $\sum_u f'\left(u,y\right)>d^t_y$, resulting in the utility $$ \mu^t\left(w\mapsto y|s',r,d\right)= \left(r^t_y - \ell_{w,y}\right)\cdot\min\left(1,\frac{d^t_y}{s^t_y}\right)<r^t_y-\ell_{w,y} \leq r^t_x-\ell_{w,x} = \mu^t\left(w\mapsto x|s',r,d\right), $$ where $x\not\in H$ implies the last equality. This is in contradiction to $s,s'=\bar{s},d,r$ being a passenger-taxicab equilibrium. \end{proof} Theorem \ref{thm:surge} follows from Lemma~\ref{lemma:exists_eq}, Lemma~\ref{lemma:altflow}, and Lemma~\ref{lemma:eq_unique}. \begin{theorem}\label{thm:surge} Given distances $\ell_{i,j}$, an arbitrary supply vector $s^{t-1}=\langle s^{t-1}_1, \ldots, s^{t-1}_k \rangle$, and a demand vector $d^t=\langle d^t_1, \ldots, d^t_k\rangle$, there exists a surge price vector $r^t=\langle r^t_1,\ldots,r^t_k\rangle$ that results in a passenger-taxicab equilibrium which induces a supply $s^t=d^t$. Moreover, any passenger-taxicab equilibrium of $r^t$ induces supply $s^t=d^t$, and the surge prices $r^t$ can be computed in polynomial time. \end{theorem} We can extend the result from equating supply and demand to modifying the supply vector $s^{t-1}$ to any supply $s^t$, with the restriction that if $s^t_i>0$ then $d^t_i>0$. The new surge prices are computed as follows. First we compute, as before, the surge prices $r^t$ from $s^{t-1}$ to $d^t$. 
Then, we set $\bar{r^t}_i=\max\{1,\frac{s^t_i}{d^t_i}\}r^t_i$, and the resulting surge prices are $\bar{r^t}$. In a similar way we can establish, \begin{theorem}\label{thm:anyst} Let $d^t=\langle d^t_1,\ldots, d^t_k\rangle$ and let $\alpha=\langle \alpha_1,\ldots, \alpha_k\rangle$ be the target supply vector, subject to the restriction that if $\alpha_i>0$ then $d^t_i>0$. Then there exist surge prices $\bar{r^t}$ for which some passenger-taxicab equilibrium induces supply $\alpha$. \end{theorem} \section{The Discrete Passenger-Taxicab Setting}\label{sec:respdemand} In this section we consider a more realistic scenario where both demand and supply are sensitive to the surge pricing. All else being equal, higher surge prices mean less demand and more supply. We define social welfare to be the sum of valuations of passengers served minus the sum of the distances traversed by the taxicabs to serve these passengers (Definition \ref{def:sw}). Given current supply $s$ and a passenger profile $P$, we give an algorithm for computing surge prices $r$ that creates a passenger-taxicab equilibrium that maximizes social welfare. The location of a passenger $b_i$ and taxi $t_j$ is denoted by $\operatorname{loc}(b_i)$ and $\operatorname{loc}(t_j)$, respectively ({\sl i.e.}, $b_i\in P_{\operatorname{loc}(b_i)}$ and $t_j\in s(\operatorname{loc}(t_j))$). For brevity, we use the notation $\overline{\ell}_{i,j}=\ell_{\operatorname{loc}(b_i),\operatorname{loc}(t_j)}$. \subsection{Maximizing Social Welfare} As in the continuous case, we reduce the problem of computing surge prices to computing market clearing prices in a unit demand market. Given a set of passengers $B$ and taxicabs $T$, we construct a unit demand market $M(B,T)$, where $B$ is the set of buyers and $T$ is the set of items. 
For the unit demand market, $M(B,T)$, we set the value of buyer $b_i\in B$ for item $t_j\in T$ to be $\zeta_{b_i}(t_j)=\mbox{\rm value}(b_i)-\overline{\ell}_{i,j}.$ Let the allocation where item $t_j$ is given to $\mbox{\rm buyer}(t_j)=b_i$ be a social welfare maximizing allocation in the unit demand market $M(B,T)$. Also, let $\mbox{\rm buyer}(t_j)=\emptyset$ if item $t_j$ is unallocated. This social welfare maximizing allocation in $M(B,T)$ translates into a flow $f^*$ for the discrete passenger-taxicab problem where $t_j$ moves from $\operatorname{loc}(t_j)$ to $\operatorname{loc}(b_i)$ if $\mbox{\rm buyer}(t_j)=b_i$. Ergo, $$f^*(u,v) = \left\{\begin{array}{lr} |\{(i,j)|b_i\in P_v,t_j\in s_u,\mbox{\rm buyer}(t_j) =b_i\} |, &\mathrm{if\ }u\neq v, \\ |\{(i,j)|b_i\in P_v,t_j\in s_u,\mbox{\rm buyer}(t_j) =b_i\} | + |\{j|t_j\in s_u, \mbox{\rm buyer}(t_j) =\emptyset\}|, &\mathrm{if\ }u=v. \end{array}\right.$$ Let $s'$ be such that $s'_v=\sum_u f^*(u,v)$ for all $v\in V$. We say that the new supply $s'$ is {\sl induced} by $f^*$. We now show the following. \begin{lemma} The flow $f^*$ is a min cost flow from $s$ to $s'$. \end{lemma} \begin{proof} Assume that $f'$ is a flow from $s$ to $s'$ of strictly lower cost. As $f'$ is an integral flow it can be decomposed into a union of unit flows. This can be interpreted as an alternative allocation in the $M(B,T)$ unit demand market, with strictly higher social welfare. This is in contradiction to our construction. \end{proof} Choose the minimal Walrasian prices to clear the unit demand market $M(B,T)$. Such prices are also VCG prices \cite{Leonard83}. Let the Walrasian price for item $t_j$ be $p_{t_j}$. We now define surge prices $r_v$, $v\in V$, for the discrete passenger-taxicab problem. 
Specifically, for all $v\in V$, set \begin{equation} r_v=\min_{t_j\in T}(\ell_{\operatorname{loc}(t_j),v}+p_{t_j}).\label{eq:discretesurgeprices}\end{equation} \begin{lemma}\label{lem:assignmentismax} Assigning $t_j$ to serve passenger $\mbox{\rm buyer}(t_j)$ is a social welfare maximizing allocation. \end{lemma} \begin{proof} First, we show that for any allocation of taxicabs to passengers in the taxicab-passenger setting there exists an allocation of items to buyers in the unit demand market $M(B,T)$ such that the social welfare is the same. Then, we show that for the allocation of items to buyers that maximizes the social welfare in the unit demand market there exists an allocation of taxicabs to passengers with the same social welfare. Fix an allocation of passengers to taxicabs, {\sl i.e.}, $\Phi:B\rightarrow T \cup \{\emptyset\}$ is a matching. Given the matching $\Phi$ we define an allocation $\Pi:B\rightarrow T \cup \{\emptyset\}$ in the unit demand market where $\Phi(b) = \Pi(b)$ for all $b\in B$. The social welfare of $\Phi$ in the taxicab-passenger setting is $\sum_{b\in B} (\mbox{\rm value}(b) - \ell_{\operatorname{loc}(b),\operatorname{loc}(\Phi(b))})I_{\Phi(b)\neq \emptyset}$. Similarly, the social welfare of $\Pi$ in the unit demand market setting is $\sum_{b\in B}\zeta_b(\Pi(b))=\sum_{b\in B} (\mbox{\rm value}(b) - \ell_{\operatorname{loc}(b),\operatorname{loc}(\Pi(b))})I_{\Pi(b)\neq \emptyset}$. Since $\Phi(b)=\Pi(b)$ it follows that any allocation in the taxicab-passenger setting has a corresponding allocation in the unit demand market with the same social welfare. We now show that an allocation in the unit demand market that maximizes social welfare has a corresponding allocation in the passenger-taxicab setting that also maximizes social welfare. Denote the maximal allocation in the unit demand market by $\Pi_{\max}:B\rightarrow T \cup \{\emptyset\}$. 
Define the corresponding matching of passengers to taxicabs by $\Phi_{\max}:B\rightarrow T \cup \{\emptyset\}$, where $\Phi_{\max}(b)=\Pi_{\max}(b)$ for all $b\in B$ ($\Phi_{\max}$ is a matching since $\Pi_{\max}$ is a valid allocation in a unit demand market). Moreover, we need to show that higher valued passengers have priority over lower valued passengers at the same location. {\sl I.e.}, we need to show that for any two passengers, $b_1,b_2\in B$, such that $\operatorname{loc}(b_1)=\operatorname{loc}(b_2)$ and $\Phi_{\max}(b_1)\neq\emptyset$, $\Phi_{\max}(b_2)=\emptyset$ we have that $\mbox{\rm value}(b_1)\geq\mbox{\rm value}(b_2)$. For contradiction, assume that for some $b_1,b_2\in B$ we have that $\operatorname{loc}(b_1)=\operatorname{loc}(b_2)$, $\Phi_{\max}(b_1)\neq\emptyset$, $\Phi_{\max}(b_2)=\emptyset$, but $\mbox{\rm value}(b_1)<\mbox{\rm value}(b_2)$. Then define, in the unit demand market, $\Pi':B\rightarrow T\cup\{\emptyset\}$ such that $\Pi'(b_1)=\emptyset$, $\Pi'(b_2)=\Phi_{\max}(b_1)$ and $\Pi'(b)=\Pi_{\max}(b)$ for all $b\notin \{b_1,b_2\}$. We now show that the social welfare under $\Pi'$ is strictly greater than the social welfare under $\Pi_{\max}$: \begin{eqnarray*} \sum_{b\in B}(\zeta_b(\Pi'(b)))&=&\sum_{b\in B,b\neq b_1,b_2}(\zeta_b(\Pi'(b))) + \zeta_{b_2}(\Phi_{\max}(b_1))\\ &=&\sum_{b\in B,b\neq b_1,b_2}(\zeta_b(\Pi_{\max}(b))) + \mbox{\rm value}(b_2)-\mbox{\rm dist}(\operatorname{loc}(b_2),\operatorname{loc}(\Phi_{\max}(b_1)))\\ &>&\sum_{b\in B,b\neq b_1,b_2}(\zeta_b(\Pi_{\max}(b))) + \mbox{\rm value}(b_1)-\mbox{\rm dist}(\operatorname{loc}(b_1),\operatorname{loc}(\Phi_{\max}(b_1)))\\ &=& \sum_{b\in B}(\zeta_b(\Pi_{\max}(b))).\end{eqnarray*} Thus, $\Pi'$ has strictly higher social welfare than $\Pi_{\max}$ in the unit demand setting, in contradiction to $\Pi_{\max}$ maximizing social welfare. Thus, $\Phi_{\max}$ is a valid allocation in the taxicab-passenger setting which maximizes the social welfare. 
\end{proof} \begin{lemma}\label{lem:achievesmin} For any passenger $b_i$ such that $b_i=\mbox{\rm buyer}(t_j)$ we have that $$\overline{\ell}_{i,j}+p_{t_j}=\min_{t_z\in T}(\overline{\ell}_{i,z}+p_{t_z})=r_{\operatorname{loc}(b_i)}.$$ \end{lemma} \begin{proof} Since $b_i=\mbox{\rm buyer}(t_j)$, and $p$ are Walrasian prices, we have that buyer $b_i$ maximizes its utility $\eta_{b_i}$. Ergo, \begin{eqnarray*} \eta_{b_i}(t_j)&=&\max_{t_x\in T}(\eta_{b_i}(t_x))\\ &=& \max_{t_x\in T}(\mbox{\rm value}(b_i)-\overline{\ell}_{i,x}-p_{t_x})\\ &=&\mbox{\rm value}(b_i)-\min_{t_x\in T}(\overline{\ell}_{i,x}+p_{t_x})\\ &=&\mbox{\rm value}(b_i)-r_{\operatorname{loc}(b_i)}.\end{eqnarray*} As $$\eta_{b_i}(t_j)=\mbox{\rm value}(b_i)-\overline{\ell}_{i,j}-p_{t_j}=\mbox{\rm value}(b_i) -r_{\operatorname{loc}(b_i)} $$ it follows that $$\overline{\ell}_{i,j}+p_{t_j}=\min_{t_x\in T}(\overline{\ell}_{i,x}+p_{t_x}).$$ \end{proof} \begin{lemma} \label{lemma:envyfree} Any passenger $b_i$ that is not served is not interested in being served (or is indifferent), {\sl i.e.}, $\mbox{\rm value}(b_i)\leq r_{\operatorname{loc}(b_i)}$. Any passenger $b_i$ that is served has $\mbox{\rm value}(b_i)\geq r_{\operatorname{loc}(b_i)}$. \end{lemma} \begin{proof} Let $b_i$ be some buyer allocated no item in the social welfare maximizing allocation for $M(B,T)$; then it must be that $\max_{t_x\in T}\eta_{b_i}(t_x)\leq 0$. It follows that $$\max_{t_x\in T}(\mbox{\rm value}(b_i)-\overline{\ell}_{i,x}-p_{t_x}) \leq 0,$$ and thus $$\mbox{\rm value}(b_i)\leq\min_{t_x\in T}(\overline{\ell}_{i,x}+p_{t_x})=r_{\operatorname{loc}(b_i)}.$$ Consider some buyer $b_i$ that was allocated an item, $t_j$, in the social welfare maximizing allocation for $M(B,T)$. It follows that $\max_{t_x\in T}\eta_{b_i}(t_x)\geq 0$. 
Thus, $$\max_{t_x\in T}(\mbox{\rm value}(b_i)-\overline{\ell}_{i,x}-p_{t_x}) \geq 0,$$ and $$\mbox{\rm value}(b_i)\geq\min_{t_x\in T}(\overline{\ell}_{i,x}+p_{t_x})=r_{\operatorname{loc}(b_i)}.$$ \end{proof} \begin{lemma}\label{lemma:discretetaxibr} For supply $s$, demand $d$, surge prices $r$, and new supply $s'$ as defined above, a taxicab $t_j$ that serves passenger $\mbox{\rm buyer}(t_j)$ is doing a best response. \end{lemma} \begin{proof} Consider the following cases: \begin{enumerate}\item Item $t_j$ is not allocated, {\sl i.e.}, $\mbox{\rm buyer}(t_j)=\emptyset$. It follows that the Walrasian price for item $t_j$ is zero: $p_{t_j}=0$. Now, for any $w\in V$ we have that $$r_w=\min_{t_x\in T}(\ell_{w,\operatorname{loc}(t_x)}+p_{t_x})\leq\ell_{w,\operatorname{loc}(t_j)}+p_{t_j}=\ell_{w,\operatorname{loc}(t_j)},$$ hence $r_w-\ell_{w,\operatorname{loc}(t_j)}\leq 0$. Ergo, not serving any passenger is a best response for $t_j$. \item Item $t_j$ is allocated to some buyer $b_i$. From Lemma \ref{lem:achievesmin} we know that $\overline{\ell}_{i,j}+p_{t_j}=\min_{t_x\in T}(\overline{\ell}_{i,x}+p_{t_x})=r_{\operatorname{loc}(b_i)}$, and thus $t_j$ gains a utility of $p_{t_j}$ from serving $b_i$. If taxicab $t_j$ were to serve a passenger at location $w\in V$, it would gain a utility of $$ r_w-\ell_{w,\operatorname{loc}(t_j)}= \min_{t_x\in T}(\ell_{w,\operatorname{loc}(t_x)}+p_{t_x})-\ell_{w,\operatorname{loc}(t_j)} \leq\ell_{w,\operatorname{loc}(t_j)}+p_{t_j}-\ell_{w,\operatorname{loc}(t_j)}=p_{t_j}. $$ This implies that serving passenger $b_i$ is a best response for taxicab $t_j$. \end{enumerate} \end{proof} \begin{lemma}\label{lemma:discretetruth} It is a dominant strategy for the passengers to reveal their true valuations given that surge prices are computed via the algorithm above. \end{lemma} \begin{proof} The utilities of the bidders for the minimal Walrasian prices in a unit demand market coincide with VCG payments \cite{Leonard83}. 
This implies that buyers truthfully reveal their valuations for the items. In our setting the utility for a passenger $b_i$ is exactly equal to the utility for the corresponding bidder $b_i$. Ergo, misreporting passenger valuations implies misreporting bidder valuations. As misreporting valuations in the unit demand market setting cannot benefit buyers (and thus passengers), we conclude it is a dominant strategy for passengers to report true valuations. \end{proof} To summarize, our main result in this section, Theorem \ref{thm:finaldiscrete}, follows from Lemma \ref{lem:assignmentismax}, Lemma \ref{lemma:envyfree}, Lemma \ref{lemma:discretetaxibr}, and Lemma \ref{lemma:discretetruth}. \begin{theorem} \label{thm:finaldiscrete} For any profile $P$ and supply $s$ there exist surge prices $r$, demand $d(r)$ and new supply $s'$ such that \begin{itemize} \item Supply $s$, new supply $s'$, demand $d(r)$, and surge prices $r$ are in passenger-taxicab equilibrium. \item $s'$ is social welfare maximizing with respect to supply $s$, profile $P$, and demand $d$. \item The surge prices $r$ can be computed in polynomial time. \item It is a dominant strategy for passengers to report their true valuations to the surge-price computation. \end{itemize} \end{theorem} \section{Optimal Competitive Online Algorithms for Social Welfare}\label{sec:onlinealg} In this section we give online algorithms that determine supply (using surge prices) so as to maximize social welfare as given in Definition \ref{def:socialwelfare}, {\sl i.e.}, striking a balance between the quality of service and the costs associated with shifting resources about. The results\footnote{These are randomized online algorithms. 
Alternatively, one could give deterministic online algorithms with the same guarantees by using the passenger-taxicab equilibria and surge prices derived from Theorem \ref{thm:anyst}, with the disadvantages that the equilibrium is no longer unique and that this requires some additional technical assumptions.} in this section can be obtained by online algorithms that set the supply to be one of the following: \begin{enumerate} \item Set supply at time $t$ equal to demand at time $t$, {\sl i.e.}, set $s^t= d^t$. \item Set supply at time $t$ equal to the supply at time $t-1$, {\sl i.e.}, set $s^t=s^{t-1}$. \end{enumerate} It follows from Theorem \ref{thm:surge} that using appropriate surge prices we can ensure that $s^t=d^t$ is the unique passenger-taxicab equilibrium. It is easy to leave the supply unchanged by choosing $r^t_i=1$ for all $i$: the resulting passenger-taxicab equilibrium has no positive flow from $i$ to $j\neq i$, as $\ell_{ij}\geq 1$ for all $j\neq i$ --- ergo $s^t=s^{t-1}$. Given a demand sequence $d$ we define $\rho$ as the inverse of the maximum demand at any vertex and time, {\sl i.e.}, $1/\rho={\max_{i,t} d^t_i}$. Note that $\rho\leq k$ since at any time $t$ there is a vertex $i$ such that $d^t_i\geq 1/k$. Moreover, $\rho\geq 1$ since $d^t_i\leq 1$ for any time $t$ and vertex $i$. Consider the following online algorithms: \begin{description} \item [$\mathrm{\tt rand}(p)$] --- With probability $p$ set surge prices such that supply equals demand at all vertices. {\sl I.e.}, at time $t=1$ set $s^1=d^1$; for all $t>1$ with probability $p$ set $s^t=d^t$ and with probability $1-p$ set $s^t=s^{t-1}$. \item [$\mathrm{\tt stay}$] --- Split the supply equally over all vertices. {\sl I.e.}, at time $t=1$ set $s^1=\langle \frac{1}{k},\frac{1}{k},\ldots,\frac{1}{k}\rangle$ and for all $t>1$ set $s^t=s^{t-1}$. \item [$\mathrm{\tt match}$] --- Always set supply equal to demand, {\sl i.e.}, set $s^t=d^t$ for all $t\geq 1$. 
Note that $\mathrm{\tt match}$ and $\mathrm{\tt rand}(1)$ are identical. \item [$\mathrm{\tt composite}(p)$] --- Toss a fair coin; if heads run $\mathrm{\tt stay}$, otherwise run $\mathrm{\tt rand}(p)$. The expected social welfare of $\mathrm{\tt composite}(p)$ satisfies $\mathop{{}\mathbb{E}}[\mathrm{\tt composite}(p)]=\mathop{{}\mathbb{E}}[\mathrm{\tt stay}]/2 + \mathop{{}\mathbb{E}}[\mathrm{\tt rand}(p)]/2$. \end{description} In different scenarios different algorithms are useful. We later discuss how to switch between different online algorithms as circumstances change over time. As in many other online problems, we first show that the optimal solution can be assumed to be ``lazy'', never moving supply about unnecessarily (Section~\ref{sec:lazy}). Section \ref{sec:uniform} gives our main technical result. In this setting the cost of moving from one vertex to another always equals $1$, i.e., $\ell_{ij}=1$ for $i\neq j$. In this scenario we show that $\mathrm{\tt composite}({\sqrt{1/{k}}})$ achieves [an optimal] $\Theta(1/\sqrt{k})$ fraction of the optimal social welfare. More generally, the competitive ratio improves as a function of the maximal demand in a single vertex (a $1/\rho$ fraction of the total demand) --- in this setting $\mathrm{\tt composite}({\sqrt{{\rho}/{k}}})$ achieves [an optimal] $\Theta(\sqrt{\rho/{k}})$ fraction of the optimal social welfare. The positive result appears in Theorem \ref{thm:compfinal}, whereas optimality follows from Lemma \ref{thm:upperuniform}. In Section \ref{sec:extensions} we consider several other scenarios: \begin{itemize} \item Clearly, even for completely arbitrary costs $\ell_{ij}$ (to move supply from $i$ to $j$), algorithm $\mathrm{\tt stay}$ is trivially $\rho/k$ competitive. In Section \ref{sec:arbitrarymetric} we prove that this cannot be improved. This shows that the assumption $\ell_{ij}=1$ is critical for obtaining a non-trivial bound, absent other assumptions on the input sequence. 
\item In Section \ref{sec:restricted} we consider inputs where the total drift (average total variation distance between successive demand vectors) is small. In such settings the $\mathrm{\tt match}$ algorithm approaches the optimal social welfare, for sufficiently small drift. Moreover, essentially the same bounds are tight. \end{itemize} \subsection{The Optimal Supply Sequence is Lazy} \label{sec:lazy} We define lazy sequences and show that without loss of generality the optimal supply sequence is a lazy sequence. There are two types of ``non-lazy'' actions: increasing supply in a location with supply greater than demand (over supply), or reducing supply in a location while creating over demand. Both actions can be avoided, without loss in social welfare. We start by defining a lazy sequence. \begin{definition} A supply sequence is {\em lazy} if for any time $t$ and any $u,v\in V$, $u\neq v$, with $f^t(u,v)>0$, both (1) $s^t_v\leq d^t_v$ and (2) $s_u^{t-1}> d^t_u$ hold. \end{definition} We show that for any supply sequence there exists a lazy supply sequence whose social welfare is at least the social welfare of the original sequence. \begin{lemma}\label{lem:tolazy} Fix a demand sequence $d$. Given an arbitrary supply sequence $s$, there exists a lazy supply sequence $\bar{s}$ such that $\mathrm{sw}(\bar{s})\geq \mathrm{sw}(s)$. \end{lemma} \begin{proof} For contradiction, assume there is a sequence $s$ for which for any lazy sequence $\bar{s}$ we have $\mathrm{sw}(s)>\mathrm{sw}(\bar{s})$. Essentially, we are saying that there is an optimal sequence $s$ for which no lazy sequence has the same social welfare. This implies that for any optimal sequence $s$ there is a time $t$ such that $f^t(u,v)>0$ and either (1) $s^t_v > d^t_v$ or (2) $s^{t-1}_u < d^t_u$. Out of all the optimal sequences, consider the optimal sequence $s$ with the largest such time $t$ and largest pair $(u,v)$ (given some total order on the pairs $V\times V$). 
We create a new flow $\bar{f}$ depending on the type of violation. Assume that we have $f^t(u,v)>0$ and $s^t_v > d^t_v$. At time $t$ set $\bar{f}^t(u,v)= f^t(u,v)-\epsilon$ and $\bar{f}^t(u,u)= f^t(u,u)+\epsilon$, where $\epsilon=\min\{s^t_v - d^t_v, f^t(u,v)\}$. The rest of the flow remains unchanged, {\sl i.e.}, $\bar{f}^t(u',v')= f^t(u',v')$ for $(u',v')\notin \{(u,v),(u,u)\}$. At time $t+1$ we adjust the flow to correspond to the original supply. Namely, for all $w\in V$ such that $f^{t+1}(v,w)>0$, we set $\bar{f}^{t+1}(v,w)=f^{t+1}(v,w)\frac{s^t_v-\epsilon}{s^t_v}$ and $\bar{f}^{t+1}(u,w)=f^{t+1}(u,w)+ f^{t+1}(v,w) \frac{\epsilon}{s^t_v}$, and all the remaining flows remain unchanged. It is straightforward to verify that $\bar{f}$ is a valid flow and that $\bar{s}^{t+1}_v=\sum_u \bar{f}^{t+1}(u,v)=s^{t+1}_v$. Note that the social welfare is affected only at times $t$ and $t+1$. Comparing the movement cost of $\bar{s}$ to that of $s$, at time $t$ it decreased by $\epsilon$, and at time $t+1$ it increased by at most $\epsilon$. The demand served by $\bar{s}$ and $s$ at times $t$ and $t+1$ is unchanged (the $\epsilon$ of flow that was modified did not serve any demand at time $t$, and at time $t+1$ the supplies are identical). This implies that the social welfare of $\bar{s}$ is at least that of $s$. Therefore we have a contradiction to our selection of $t$ and $(u,v)$. The case that $f^t(u,v)>0$ and $s^{t-1}_u < d^t_u$ is similar and omitted. \end{proof} We derive the following immediate corollary. \begin{corollary} Without loss of generality, the optimal supply sequence is lazy. \end{corollary} \subsection{Online Algorithms for Social Welfare Maximization when $\ell_{ij}=1$} \label{sec:uniform} We now analyze the lazy optimal supply sequence. We first introduce some notation. Given an optimal lazy supply sequence $s$, define $h^t_i=\min\{s^{t-1}_i,d^t_i\}$.
Let $n\geq 0$ be an integer parameter, and define\footnote{For notational convenience we define $d^t_i=0$ and $s^t_i = s^1_i$ for all $t\leq 0$.} $$z^t_i=\max\{0, h^t_i-g^t_i\}, \mbox{\rm\ where\ } g^t_i=\max_{\tau\in[\max(1,t-n),t-1]}d^\tau_i.$$ Note that these definitions depend on $s$; throughout, we use a fixed optimal lazy sequence $s$. The parameter $n$ will be fixed later. \begin{lemma} \label{lemma:opt} Fix a demand sequence $d$ and an optimal lazy supply sequence $s$ for $d$. The resulting social welfare satisfies \[ \mathrm{opt}=\mathrm{sw}(s,d)=\sum_{t,i} h_i^t \leq \sum_{t,i} z_i^t + \sum_{t,i} g_i^t. \] \end{lemma} \begin{proof} Note that when $\ell_{ij}=1$ for all $i\neq j$ we have $\mathrm{em}(s)=\sum_t \frac{1}{2}\|s^t-s^{t-1}\|_1$. This means that for an optimal lazy sequence $$ \mathrm{opt}=\mathrm{sw}(s,d)=\mathrm{ds}(s,d)-\mathrm{em}(s)=\sum_t\sum_i \min(s^{t}_i,d^t_i)-\sum_t\sum_{i:s^t_i\geq s^{t-1}_i} \left(s^t_i-s^{t-1}_i\right). $$ First consider the case $s^t_i > s^{t-1}_i$. Since the sequence is lazy, $s^t_i > s^{t-1}_i$ implies that $s^t_i\leq d^{t}_i$. Hence, $\min(s^{t}_i,d^t_i)=s^t_i$ and $\min(s^{t-1}_i,d^t_i)=s^{t-1}_i$, so the identity $\min(s^{t}_i,d^t_i)-(s^t_i-s^{t-1}_i)= \min(s^{t-1}_i,d^t_i)$ holds. Next consider the case $s^t_i< s^{t-1}_i$. Since the sequence is lazy, $s^t_i< s^{t-1}_i$ implies that $s^t_i\geq d^t_i$, and hence the identity $\min(s^{t}_i,d^t_i)=d^t_i=\min(s^{t-1}_i,d^t_i)$ holds yet again. Combining both identities we have \[ \mathrm{opt}=\mathrm{sw}(s,d)=\sum_t\sum_i \min(s^{t-1}_i,d^t_i)= \sum_t\sum_i h^t_i, \] by the definition of $h_i^t$. Since $h_i^t\leq z_i^t+g_i^t$, the lemma follows. \end{proof} Our next goal is to bound the sum of the $z_i^t$ and relate it to the social welfare of the algorithm $\mathrm{\tt stay}$. We first prove the following properties of the optimal lazy supply sequence.
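To make the notation concrete, the following small Python sketch (ours, not part of the paper; the toy sequences and the function name \texttt{h\_g\_z} are our own) computes the quantities $h$, $g$ and $z$ for a two-location example and checks the bound $\sum_{t,i} h_i^t \leq \sum_{t,i} z_i^t + \sum_{t,i} g_i^t$ of Lemma~\ref{lemma:opt}:

```python
# Toy sketch (not from the paper) of h, g, z from this section:
#   h^t_i = min(s^{t-1}_i, d^t_i),
#   g^t_i = max demand at i over the window [max(1, t-n), t-1] (0 if empty),
#   z^t_i = max(0, h^t_i - g^t_i).

def h_g_z(d, s, n):
    """d[t][i] and s[t][i] for t = 0..T; s[0] is the initial supply,
    d[0] is a dummy entry (time starts at t = 1)."""
    T = len(d) - 1
    k = len(s[0])
    h = [None] + [[min(s[t - 1][i], d[t][i]) for i in range(k)]
                  for t in range(1, T + 1)]
    g = [None] + [[max((d[tau][i] for tau in range(max(1, t - n), t)),
                       default=0.0) for i in range(k)]
                  for t in range(1, T + 1)]
    z = [None] + [[max(0.0, h[t][i] - g[t][i]) for i in range(k)]
                  for t in range(1, T + 1)]
    return h, g, z

# Two locations; the supply follows the demand with a one-step lag.
d = [None, [1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
s = [[0.5, 0.5], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
h, g, z = h_g_z(d, s, n=2)

# h^t_i <= z^t_i + g^t_i holds pointwise, hence sum h <= sum z + sum g.
assert sum(map(sum, h[1:])) <= sum(map(sum, z[1:])) + sum(map(sum, g[1:]))
```

Since $h^t_i\leq z^t_i+g^t_i$ holds pointwise by definition, the final assertion holds for any supply sequence, not only for an optimal lazy one.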
\begin{lemma} \label{lemma:1} Fix an optimal lazy sequence $s$ and a parameter $n\geq 1$. If for some $i,t$ we have $s^{t-1}_i\geq\max_{\tau\in [t-n,t)} d^\tau_i$, then $\min_{\tau\in [t-n,t)} s^\tau_i\geq s^{t-1}_{i}$. \end{lemma} \begin{proof} For contradiction, assume there exists a maximal $\tau\in [t-n,t)$ such that $s^\tau_i< s^{t-1}_{i}$. Then $\tau\neq t-1$, and thus $\tau+1\in[t-n+1,t)$, which by the assumption of the lemma implies that $s^{t-1}_i\geq d^{\tau+1}_i$. Also, because $\tau$ is the maximal such time, we have $s^{\tau+1}_i\geq s^{t-1}_i$. Thus, we have $s^{\tau}_i < s^{\tau+1}_i$ and $d^{\tau+1}_i < s^{\tau+1}_i$. This contradicts the assumption that $s$ is an optimal lazy sequence, since there is flow into $i$ at time $\tau+1$ even though the resulting supply strictly exceeds the demand. \end{proof} We derive the following immediate corollary: \begin{corollary} \label{cor:2} Fix an optimal lazy sequence $s$ and a parameter $n\geq 1$. If for some $i,t$ we have $s^{t-1}_i\geq\max_{\tau\in [t-n,t)} d^\tau_i$, then for any $\tau\in [t-n+1,t)$ we have $ s^{\tau-1}_i\geq s^{\tau}_{i}$. \end{corollary} \begin{proof} From Lemma~\ref{lemma:1}, for any $\tau\in [t-n,t)$ we have $s_i^\tau\geq s_i^{t-1}\geq \max_{\tau'\in [t-n,t)} d^{\tau'}_i$. Therefore, $s_i^\tau\geq \max_{\tau'\in [\tau-n',\tau)} d^{\tau'}_i$, where $n'=\tau-(t-n)>0$. Applying Lemma~\ref{lemma:1} again, we obtain the corollary. \end{proof} \begin{lemma} \label{lemma:sum-z} Fix an optimal lazy sequence $s$ and a parameter $n\geq 1$. Then, $\sum_i \sum_{\tau\in[t-n,t)} z_i^\tau \leq 1$. \end{lemma} \begin{proof} Clearly, only the terms with $z_i^\tau>0$ matter. Fix a location $i$ and let $\tau_1, \ldots, \tau_m$ be all the times $\tau\in [t-n,t)$ for which $z_i^\tau>0$; clearly, $ \sum_{\tau\in[t-n,t)} z_i^\tau = \sum_{j=1}^m z_i^{\tau_j}$. First, if $s^{t-1}_i\leq \max_{\tau\in [t-n,t)} d^\tau_i=g_i^t$, then, since $h_i^t\leq s^{t-1}_i$, we have $z_i^t=0$.
Therefore, at any time $\tau_j$ we have $s^{\tau_j-1}_i> \max_{\hat{\tau}\in [t-n,t)} d^{\hat{\tau}}_i$, which implies that we can apply Corollary~\ref{cor:2} at the times $\tau_j$. We claim that $s_i^{\tau_j-1}> d_i^{\tau_j}$ for $1\leq j\leq m-1$. For contradiction, assume that $s_i^{\tau_j-1}\leq d_i^{\tau_j}$. We have \[ h_i^{\tau_m} \leq s^{\tau_m-1}_i \leq s^{\tau_j-1}_i\leq d_i^{\tau_j}\leq g^{\tau_m}_i, \] where the first inequality is from the definition of $h$, the second follows from Corollary~\ref{cor:2}, the third from our assumption, and the fourth from the definition of $g$. This implies that $z_i^{\tau_m}=\max\{0,h_i^{\tau_m}-g_i^{\tau_m}\}=0$, contradicting our choice of $\tau_m$ with $z_i^{\tau_m}>0$. Therefore, $s_i^{\tau_j-1}> d_i^{\tau_j}$, which implies that $h_i^{\tau_j}=d_i^{\tau_j}$.\footnote{This applies only to $j\leq m-1$ since $g^{\tau_m}_i$ does not include $d_i^{\tau_m}$ but does include all previous $d_i^{\tau_j}$.} Since $z_i^{\tau_j}>0$, we have $z_i^{\tau_j}=h_i^{\tau_j}-g_i^{\tau_j}$. We showed that $h_i^{\tau_j}=d_i^{\tau_j}$ and $g_i^{\tau_j}\geq d_i^{\tau_{j-1}}$; hence $z_i^{\tau_j}\leq d_i^{\tau_{j}}-d_i^{\tau_{j-1}}$ for $2\leq j\leq m-1$. Summing over all $\tau_j$ we have \begin{align*} \sum_{\hat{\tau}\in[t-n,t)} z_i^{\hat{\tau}} &= \sum_{j=1}^m z_i^{\tau_j} \\ &= z_i^{\tau_m} + z_i^{\tau_1}+\sum_{j=2}^{m-1} z_i^{\tau_j} \\ &\leq z_i^{\tau_m} +z_i^{\tau_1}+ \sum_{j=2}^{m-1} \left(d_i^{\tau_{j}}-d_i^{\tau_{j-1}}\right)\\ &\leq z_i^{\tau_m} +z_i^{\tau_1}+ d^{\tau_{m-1}}_i -d^{\tau_1}_i\\ &\leq h^{\tau_m}_i -(g^{\tau_m}_i -d^{\tau_{m-1}}_i) + (h^{\tau_1}_i -g^{\tau_1}_i -d^{\tau_{1}}_i) \\ &\leq h^{\tau_m}_i. \end{align*} For the last inequality note that $g^{\tau_m}_i \geq d^{\tau_{m-1}}_i$ and that $h^{\tau_1}_i \leq d^{\tau_{1}}_i$.
Summing over all locations $i$ we have \[ \sum_i \sum_{\hat{\tau}\in[t-n,t)} z_i^{\hat{\tau}}\leq \sum_i h^{\tau_{m_i}}_i \leq \sum_i s^{\tau_{m_i}}_i \leq \sum_i s^{t-n}_i =1, \] where the last inequality again uses Corollary~\ref{cor:2}. \end{proof} We now analyze $\mathrm{\tt stay}$ for arbitrary relocation costs $\ell_{ij}$. \begin{lemma} \label{lemma:stay} At all times $t$, the demand served by $\mathrm{\tt stay}$ is at least a $\rho/k$ fraction of the total demand. \end{lemma} \begin{proof} Recall that $\mathrm{ds}\left(s^t,d^t\right)=\sum_i \min\left(s^t_i,d^t_i\right)=\sum_i \min\left(\frac{1}{k},d^t_i\right)$. Denote $S=\left\{i \,\middle|\, d^t_i\geq \frac{1}{k}\right\}$. If $|S|\geq \rho$ then $\mathrm{ds}\left(s^t,d^t\right)\geq \frac{1}{k}\cdot |S|\geq \frac{\rho}{k}$. Otherwise, since each vertex holds at most a $\frac{1}{\rho}$ fraction of the demand, the total demand outside $S$ is at least $1-\frac{|S|}{\rho}$, and it is completely served by $\mathrm{\tt stay}$ (every $i\notin S$ has $d^t_i< \frac{1}{k}=s^t_i$). Therefore, \[\mathrm{ds}\left(s^t,d^t\right)\geq |S|\cdot\frac{1}{k}+1-\frac{|S|}{\rho}=\frac{k\rho+|S|\rho-|S|k}{k\rho}=\frac{k\rho-|S|(k-\rho)}{k\rho}\geq\frac{k\rho+\rho^2-k\rho}{k\rho}=\frac{\rho}{k}. \] \end{proof} Now we analyze $\mathrm{\tt rand}(p)$ and relate it to $g_i^t$. \begin{lemma} \label{lemma:rand} Let $\widehat{s}_i^t$ be the random variable representing the supply of $\mathrm{\tt rand}(p)$ at time $t$ at vertex $i$. Then, $\mathop{{}\mathbb{E}} [\widehat{s}_i^t]\geq g_i^t p (1-p)^n$. In addition, the expected social welfare of $\mathrm{\tt rand}(p)$ is at least $p(1-p)^n \sum_{i,t} g_i^t$.
\end{lemma} \begin{proof} Let $\tau=\arg\max_{\hat{\tau}\in[t-n,t)} d_i^{\hat{\tau}}$, {\sl i.e.}, $d^\tau_i=g^t_i$. We lower bound the expectation of $\widehat{s}_i^t$ by the probability that $\mathrm{\tt rand}(p)$ sets $s^\tau=d^\tau$ and keeps the supply unchanged until time $t$, {\sl i.e.}, $s^t=s^\tau$. The probability that $s^\tau=d^\tau$ is at least $p$, and the probability that $s^t=s^\tau$ is at least $(1-p)^n$. Therefore, $\mathop{{}\mathbb{E}} [\widehat{s}_i^t]\geq g_i^t p (1-p)^n$, which implies that the expected social welfare of $\mathrm{\tt rand}(p)$ is at least $p(1-p)^n \sum_{i,t} g_i^t$. \end{proof} \begin{theorem}\label{thm:compfinal} The algorithm $\mathrm{\tt composite}({\sqrt{{\rho}/{k}}}) =\frac{1}{2}\mathrm{\tt stay} + \frac{1}{2}\mathrm{\tt rand}({\sqrt{{\rho}/{k}}})$ is $(\frac{1}{2e}\sqrt{\frac{\rho}{k}})$-competitive. \end{theorem} \begin{proof} By Lemma~\ref{lemma:opt} we have $\mathrm{opt}= \sum_{t,i} h_i^t \leq \sum_{t,i} \left(z_i^t + g_i^t\right)$. We bound $\sum_{t,i} z_i^t$ and $\sum_{t,i} g_i^t$ separately. By Lemma~\ref{lemma:sum-z} we can partition the time into $\frac{T}{n}$ blocks of size $n$ each, in each of which the sum is at most $1$; therefore $\sum_{t,i} z_i^t\leq \frac{T}{n}$. We thus have \[ \mathrm{opt} \leq \frac{T}{n} + \sum_{i,t} g_i^t. \] On the other hand, $\mathrm{\tt stay}$ guarantees a social welfare of at least $\rho\cdot \frac{T}{k}$. Using Lemma~\ref{lemma:stay} and Lemma~\ref{lemma:rand}, we have \[ \mathop{{}\mathbb{E}}[\mathrm{\tt composite}(p)]=\frac{1}{2}\mathop{{}\mathbb{E}}[\mathrm{\tt stay}] + \frac{1}{2}\mathop{{}\mathbb{E}}[\mathrm{\tt rand}(p)] \geq \frac{\rho}{2k}T + \frac{1}{2}p(1-p)^n \sum_{i,t} g_i^t. \] For $p=\sqrt{\frac{\rho}{k}}$ and $n=\frac{1}{p}$ we bound the competitive ratio as follows: \[ \frac{\rho \frac{T}{2k} + \frac{1}{2}p(1-p)^n \sum_{i,t} g_i^t}{\frac{T}{n} + \sum_{i,t} g_i^t} =\frac{\frac{1}{2}\sqrt{\frac{\rho}{k}} T\sqrt{\frac{\rho}{k}} + \frac{1}{2e}\sqrt{\frac{\rho}{k}} \sum_{i,t} g_i^t}{T\sqrt{\frac{\rho}{k}} + \sum_{i,t} g_i^t}\geq \frac{1}{2e}\sqrt{\frac{\rho}{k}}.
\] \end{proof} \subsection{Social Welfare Maximization when $\ell_{ij}=1$: Impossibility Results} We show that no online algorithm can achieve a competitive ratio better (greater) than $O\left(\sqrt{\frac{\rho}{k}}\right)$. Recall that Section \ref{sec:uniform} describes an online algorithm, $\mathrm{\tt composite}({\sqrt{{\rho}/{k}}})$, that achieves this bound on the competitive ratio. Ergo, $\mathrm{\tt composite}({\sqrt{{\rho}/{k}}})$ achieves the optimal competitive ratio, up to a constant factor. \begin{theorem}\label{thm:upperuniform} Fix the metric $\ell_{ij}=1$. No online algorithm can achieve a competitive ratio better (greater) than $O\left(\sqrt{\frac{\rho}{k}}\right)$. \end{theorem} \begin{proof} We first describe the proof for $\rho=1$ and then extend it to arbitrary $\rho$. Consider the following stochastic demand sequence. At each time $t$ we select a vertex $c^t\in V$ uniformly at random and assign all the demand to it, {\sl i.e.}, $d^t_{c^t}=1$ and $d^t_i=0$ for $i\neq c^t$. Clearly, any online algorithm has an expected social welfare of at most $T/k$. For the optimal offline solution, we essentially use the birthday paradox to show that its social welfare is $\Theta(T/\sqrt{k})$. Consider the following offline strategy. Partition the time into intervals of size $2\sqrt{k}$. We show that in any such interval the offline algorithm can increase its social welfare by at least $1$ with constant probability. Fix such a time interval. We claim that with constant probability some vertex appears twice in the interval. If in the first $\sqrt{k}$ steps some vertex $i$ appears twice, we are done. Otherwise, we have $\sqrt{k}$ distinct vertices, and the probability that we resample one of them in the next $\sqrt{k}$ time steps is at least $1/e$.
Now, if vertex $i$ appears twice in the interval, then the offline algorithm can move to vertex $i$ at the start of the interval and increase its social welfare by at least $1$. This implies that the expected social welfare of this offline strategy is $\Theta(T/\sqrt{k})$, which lower bounds the expected social welfare of the optimal offline strategy. Since any online algorithm has expected social welfare of at most $T/k$ and the optimal offline algorithm has expected social welfare of $\Theta(T/\sqrt{k})$, the competitive ratio, for $\rho=1$, is bounded by $O(\sqrt{1/k})$. We now sketch how the proof extends to general $\rho\geq 1$. In this case we partition the $k$ vertices into $N=\floor{k/\ceil{\rho}}$ disjoint subsets, each of size $M=\ceil{\rho}$. (Note that $N\cdot M\leq k$.) The $N$ subsets replace the vertices $V$, and each time we select a subset we place a uniform demand over the subset. (Note that the demand per vertex is $1/M\leq 1/\rho$.) As before, any online algorithm has expected social welfare of $\Theta(T/N)=\Theta(T\rho/k)$, and there is an offline strategy that guarantees an expected social welfare of $\Theta(T\sqrt{\rho/k})$. This implies that the competitive ratio is $O(\sqrt{\rho/k})$. \end{proof} \subsection{Extensions}\label{sec:extensions} In Section \ref{sec:arbitrarymetric} we show that the assumption that $\ell_{ij}=1$ is critical to achieving the non-trivial competitive ratio of Section \ref{sec:uniform}, unless $\rho$ (recall that the demand at any single vertex is at most a $1/\rho$ fraction of the total) is sufficiently small. We also consider restricting the demand sequences by bounding the average variability of the demand. In Section \ref{sec:restricted} we show that the online algorithm that greedily matches supply and demand works well when the average drift is sufficiently small.
\subsubsection{Arbitrary Metric Spaces} \label{sec:arbitrarymetric} We can apply the online algorithm $\mathrm{\tt stay}$ and guarantee a competitive ratio of $\rho/k$, as shown in Lemma~\ref{lemma:stay}. The following theorem establishes an impossibility result when the costs are different from $1$ (even if they are all equal). \begin{theorem}\label{thm:arbmetric} Fix some $1>\epsilon>0$, and consider costs $\ell_{ij}=1+\epsilon$ for $i\neq j$. No online algorithm has a competitive ratio better (greater) than $\frac{(1+\epsilon)^2}{\epsilon}\cdot\frac{1}{k}$ for this metric. \end{theorem} \begin{proof} The idea is the following: we generate a demand sequence in which, at every time step, the demand is concentrated in a single vertex. We generate a random sequence of vertices such that no two successive positions are identical, and then duplicate every position for a random duration. The duration, i.e., the number of successive demands at that position, is geometrically distributed. We set the parameters such that no online algorithm can benefit by switching between vertices. On the other hand, given a sufficiently long duration of repeated demands for the same vertex, the optimal schedule switches to this vertex. We now describe the stochastic demand sequence generation. We first generate a sequence of locations $c$. We set $c_1= i\in V$ uniformly at random. For $\tau>1$ we set $c_\tau=j$, where $j\in V\setminus \{c_{\tau-1}\}$ is chosen uniformly at random. In addition, we generate a sequence of durations $b$, distributed geometrically with parameter $p=\frac{1}{1+\epsilon}$; namely, $b_\tau=j$ with probability $(1-p)^{j-1}p$, for $j\geq 1$. We are now ready to generate the demand sequence $d$. With each $c_\tau=i$ we associate the unit vector $e_i$, which has $e_{i,i}=1$ and $e_{i,j}=0$ for $j\neq i$. We duplicate $e_{c_\tau}$ exactly $b_\tau$ times. We truncate the sequence at time $T$, and this is the demand sequence $d$. First consider an arbitrary online algorithm.
We claim that it does not gain (in expectation) any social welfare by moving supply, and hence its expected social welfare is $T/k$. The argument is that the cost of moving $\delta$ supply to a new location is $(1+\epsilon)\delta$, while the expected remaining duration at the new location is only $1+\epsilon$, so in expectation there is no benefit. An online algorithm that does not move any supply has expected social welfare $T/k$. We now analyze the social welfare attained by an optimal offline algorithm. The main benefit of an offline algorithm is that it has access to the realized durations $b_\tau$. It is simple to see that if $b_\tau\geq 2$ then the offline algorithm has a benefit of $b_\tau - (1+\epsilon)>0$, and \begin{eqnarray} \mathop{{}\mathbb{E}}[b_\tau-(1+\epsilon)\,|\,b_\tau\geq 2]\Pr[b_\tau\geq 2] =\sum_{i=2}^{\infty} \left(\frac{\epsilon}{\epsilon + 1}\right)^{i-1} \cdot \frac{i-1-\epsilon}{1+\epsilon}= \frac{\epsilon}{1+\epsilon}.\nonumber \end{eqnarray} We would now like to sum over $\tau$; however, the number of summands is a random variable. Since we have a random sum of random variables, we use Wald's identity. Since the expected number of summands is $\frac{T}{1+\epsilon}$ and the expectation of each is $\frac{\epsilon}{1+\epsilon}$, the optimal offline algorithm has an expected social welfare of at least $\frac{\epsilon}{(1+\epsilon)^2} T$. This implies that no algorithm has a competitive ratio better than $\frac{(1+\epsilon)^2}{\epsilon}\frac{1}{k}$. \end{proof} \subsubsection{Restricted Drift} \label{sec:restricted} For a demand sequence $d$, let $\delta\leq 1$ be the average drift, {\sl i.e.}, $\sum_{t} \|d^t - d^{t-1}\|_{tv}= \frac{1}{2}\sum_{t} \|d^t - d^{t-1}\|_1 = \delta T$. \begin{theorem}\label{thm:restriced drift} For the case where the costs are $\ell_{ij} =1$ for all $i\neq j$, setting the supply equal to the demand (the $\mathrm{\tt match}$ algorithm) gives social welfare of $(1-\delta)T$, and is $(1-\delta)$-competitive.
For arbitrary $\ell_{ij}$ with $\ell_{ij}\leq \ell_{\max}$, the $\mathrm{\tt match}$ algorithm has social welfare of at least $(1-\delta\ell_{\max})T$, and is $(1-\delta\ell_{\max})$-competitive. \end{theorem} \begin{proof} Since for $\ell_{ij}=1$ the earthmover distance coincides with the total variation distance, at time $t$ the social welfare of $\mathrm{\tt match}$ is $1-\|d^t-s^{t-1}\|_{tv}=1-\|d^t-d^{t-1}\|_{tv}$, since $\mathrm{\tt match}$ sets $s^{t-1}=d^{t-1}$. Summing over all time steps, the social welfare of $\mathrm{\tt match}$ is $T-\delta T$. Since the social welfare of $\mathrm{opt}$ is at most $T$, the algorithm $\mathrm{\tt match}$ is $(1-\delta)$-competitive. For a general metric, note that $\mathrm{em}(d^t,d^{t-1})\leq \ell_{\max} \|d^t-d^{t-1}\|_{tv}$. This implies that the social welfare of $\mathrm{\tt match}$ is at least $(1-\ell_{\max}\delta) T$, and hence it is $(1-\ell_{\max}\delta)$-competitive. \end{proof} \begin{theorem}\label{thm:upper bound ellij1 drift} For the metric $\ell_{ij}=1$, no online algorithm has a competitive ratio better (greater) than $1-\delta/4$. \end{theorem} \begin{proof} Consider the following demand sequence, which uses only the first two locations; {\sl i.e.}, for all locations $i\neq 1,2$ and times $t$ we have $d^t_i=0$. For each time $t$ we select the demand randomly from the following distribution: $$d^t = \left\{\begin{array}{cl} d^t_1=1,\ d^t_2=0 & \mbox{\rm with probability $\frac{1}{2}$} \\ d^t_1=1-2\delta,\ d^t_2=2\delta & \mbox{\rm with probability $\frac{1}{2}$} \end{array}\right. $$ The generated sequence has an expected drift of $\delta T$. Any online algorithm $ALG$ has, in expectation, social welfare of at most $(1-\delta)T$. The main point is that $\mathrm{opt}$ has a strictly better expected social welfare. Consider the online algorithm $\mathrm{\tt match}$ as a starting point.
Partition the time into $T/2$ pairs of time slots $[2m-1,2m]$, and consider the event that $d^{2m-2}=d^{2m}\neq d^{2m-1}$; this event occurs with probability $1/4$. In such an event we can modify $\mathrm{\tt match}$ and at time $2m-1$ set $s^{2m-1}=d^{2m}=d^{2m-2}$. (This requires knowing the future, but we are interested in $\mathrm{opt}$, so this is fine.) Such a modification increases the social welfare by $2\delta$ (it lowers the served demand by $2\delta$ but lowers the movement costs by $4\delta$). Therefore, the expected social welfare improves by $(1/4)(2\delta)(T/2)$, which implies that the expected social welfare of $\mathrm{opt}$ is at least $(1-(3/4)\delta)T$. This means that no algorithm is more than $\frac{1-\delta}{1-(3/4)\delta}$-competitive, and since $\frac{1-\delta}{1-(3/4)\delta}\leq 1-\delta/4$, no online algorithm can have a competitive ratio better than $1-\delta/4$. \end{proof} \section{Discussion}\label{sec:disc} Social welfare in our setting depends on the taxicabs and their locations (the supply $s$), the passengers, their locations and values (the profile $P$), and the distances between taxicabs and passengers. In this paper we introduce passenger-taxicab equilibria, prove their existence, and give polynomial-time algorithms for computing surge prices so as to maximize social welfare. We have shown that although temporal dynamics are a critical part of the social welfare gains of any taxicab provider, no online algorithm can hope to achieve significant worst-case competitive ratios. Thus, in the future, different relaxations of the problem might be considered in order to allow for more adaptive algorithms. When computing the surge prices above, we have implicitly assumed that taxicab locations are known (e.g., via GPS). Contrariwise, passengers have no incentive to misreport their location (trivially) or their valuation (as proved above). An interesting variation on our models would be to consider taxicabs declaring their own distances to passengers.
Those would not be physical distances but rather a personalized cost for service at a given location. If such personalized costs are verifiable, and social welfare is redefined as the sum of passenger values served minus the personalized service costs, then the surge prices computed in this paper{} maximize this new social welfare. This allows for more robust pricing mechanisms which allow us to incorporate issues such as ``start up costs" which are a bonus for drivers to get out of bed. Taxicab personalized costs are private to the taxicab. Thus, any surge price computation would have to contend with private values of the taxicabs as well as private values for the passengers. It is easy to see that without Bayesian assumptions on the private values, little can be done. Just consider a passenger and a taxicab at the same location, they need to agree upon a price. In the Bayesian setting this is called the bilateral trading problem and there is a rich literature on the topic. \begin{figure}\label{fig:example} \label{fig:flow1} \label{fig:flow2} \end{figure} \begin{figure}\label{fig:marketfig} \label{fig:surgeprice} \end{figure} \ifonline \noindent{\bf Proof of Lemma \ref{lem:tolazy}:} \begin{proof} For contradiction, assume there is a sequence $s$ for which for any lazy sequence $\bar{s}$ we have $\mathrm{sw}(s)>\mathrm{sw}(\bar{s})$. Note that essentially we are saying that there is an optimal sequence $s$ for which no lazy sequence has the same social welfare. This implies that for any optimal sequence $s$ there is a time $t$ such that $f^t(u,v)>0$ and either (1) $s^t_v > d^t_v$ or (2) $s^{t-1}_u < d^t_u$. Out of all the optimal sequences, consider the optimal sequence $s$ with the largest such time $t$ and largest pair $(u,v)$ (given some full order on the pairs $V\times V$). We create a new flow $\bar{f}$ depending on the type of violation. Assume that we have $f^t(u,v)>0$ and $s^t_v > d^t_v$. 
At time $t$ set $\bar{f}^t(u,v)= f^t(u,v)-\epsilon$ and $\bar{f}^t(u,u)= f^t(u,u)+\epsilon$, where $\epsilon=\min\{s^t_v - d^t_v, f^t(u,v)\}$. The rest of the flow remains unchanged, {\sl i.e.}, $\bar{f}^t(u',v')= f^t(u',v')$ for $(u',v')\neq (u,v)$ or $(u',v')\neq (u,u)$. At time $t+1$ we adjust the flow to correspond to the original supply. Namely, for all $w\in V$ such that $f^{t+1}(v,w)>0$, we set $\bar{f}^{t+1}(v,w)=f^{t+1}(v,w)\frac{s^t_v-\epsilon}{s^t_v}$ and $\bar{f}^{t+1}(u,w)=f^{t+1}(u,w)+ f^{t+1}(v,w) \frac{\epsilon}{s^t_v}$, and all the remaining flows remain unchanged. It is straightforward to verify that $\bar{f}$ is a valid flow, and we set $s^{t+1}_v=\bar{s}^{t+1}_v=\sum_u \bar{f}^{t+1}(u,v)$. Note that the only influence on the social welfare are in times $t$ and $t+1$. Comparing the movement cost of $\bar{s}$ to $s$, at time $t$ it decreased by $\epsilon$ and in time $t+1$ increased by at most $\epsilon$. The demand served in $\bar{s}$ and $s$ at time $t$ and $t+1$ in unchanged (since the $\epsilon$ flow that was modified did not serve any demand in time $t$ and at time $t+1$ the supplies are identical). This implies that the social welfare of $\bar{s}$ is at least that of $s$. Therefore we have a contradiction to our selection of $t$ and $(u,v)$. The case that we have $f^t(u,v)>0$ and $s^{t-1}_u < d^t_u$ is similar and omitted. \end{proof} \noindent{\bf Proof of Lemma \ref{lemma:sum-z}:} \begin{proof} Clearly we care only about $z_i^\tau>0$. Fix a location $i$ and let $\tau_1, \ldots, \tau_m$ be all the times $\tau\in [t-n,t)$ for which $z_i^\tau>0$. Clearly, $ \sum_{\tau\in[t-n,t)} z_i^\tau = \sum_{j=1}^m z_i^{\tau_j}$. First, if $s^{t-1}_i\leq \max_{\tau\in [t-n,t)} d^\tau_i=g_i^t$, since $h_i^t\leq s^{t-1}_i$ then $z_i^t=0$. Therefore, at any time $\tau_j$ we have $s^{\tau_j-1}_i> \max_{\hat{\tau}\in [t-n,t)} d^{\hat{\tau}}_i$, which implies that we can apply Corollary~\ref{cor:2} at the times $\tau_j$. 
We claim that $s_i^{\tau_j-1}> d_i^{\tau_j}$ for $1\leq j\leq m-1$. For contradiction assume that $s_i^{\tau_j-1}\leq d_i^{\tau_j}$. We have \[ h_i^{\tau_m} \leq s^{\tau_m-1}_i \leq s^{\tau_j-1}_i\leq d_i^{\tau_j}\leq g^{\tau_m}_i, \] where the first inequality is from the definition of $h$, the second follows from Corollary~\ref{cor:2}, the third from our assumption, and the fourth from the definition of $g$. This implies that $z_i^{\tau_m}=\max\{0,h_i^{\tau_m}-g_i^{\tau_m}\}=0$. In contradiction to our construction that $z_i^{\tau_m}>0$. Therefore, $s_i^{\tau_j-1}> d_i^{\tau_j}$, which implies that $h_i^{\tau_j}=d_i^{\tau_j}$.\footnote{This applies only to $j\leq m-1$ since $g^{\tau_m}_i$ does not include $d_i^{\tau_m}$ but does include all previous $d_i^{\tau_j}$.} Since $z_i^{\tau_j}>0$, we have that $z_i^{\tau_j}=h_i^{\tau_j}-g_i^{\tau_j}$. We showed that $h_i^{\tau_j}=d_i^{\tau_j}$ and $g_i^{\tau_j}\geq d_i^{\tau_{j-1}}$, hence, $z_i^{\tau_j}\leq d_i^{\tau_{j}}-d_i^{\tau_{j-1}}$, for $2\leq j\leq m-1$. Summing over all $\tau_j$ we have \begin{align*} \sum_{\hat{\tau}\in[t-n,t)} z_i^{\hat{\tau}} &= \sum_{j=1}^m z_i^{\tau_j} \\ &= z_i^{\tau_m} + z_i^{\tau_1}+\sum_{j=2}^{m-1} z_i^{\tau_j} \\ &\leq z_i^{\tau_m} +z_i^{\tau_1}+ \sum_{j=2}^{m-1} d_i^{\tau_{j}}-d_i^{\tau_{j-1}}\\ &\leq z_i^{\tau_m} +z_i^{\tau_1}+ d^{\tau_{m-1}}_i -d^{\tau_1}_i\\ &\leq h^{\tau_m}_i -(g^{\tau_m}_i -d^{\tau_{m-1}}_i) + (h^{\tau_1}_i -g^{\tau_1}_i -d^{\tau_{1}}_i) \\ &\leq h^{\tau_m}_i \end{align*} For the last inequality note that $g^{\tau_m}_i \geq d^{\tau_{m-1}}_i$ and that $h^{\tau_1}_i \leq d^{\tau_{1}}_i$. Summing over all locations $i$ we have \[ \sum_i \sum_{\hat{\tau}\in[t-n,t)} z_i^{\hat{\tau}}\leq \sum_i h^{\tau_{m_i}}_i \leq \sum_i s^{\tau_{m_i}}_i \leq \sum_i s^{t-n}_i =1 \] where the last inequality uses again Corollary~\ref{cor:2}. 
\end{proof} {\bf Proof of Theorem \ref{thm:upperuniform}:} \begin{proof} We first describe the proof for $\rho=1$ and then extend it to arbitrary $\rho$. Consider the following stochastic demand sequence. At time $t$ we select at random a vertex $c^t\in V$, and assign all the demand to it, {\sl i.e.}, $d^t_{c^t}=1$ and $d^t_i=0$ for $i\neq c^t$. Clearly any online algorithm has an expected social welfare of $T/k$. Essentially, for the optimal offline we use the birthday paradox to show that its social welfare is $\Theta(T/\sqrt{k})$. Consider the following offline strategy. Partition the time to intervals of size of $2\sqrt{k}$. We show that in any such interval the offline can increase social welfare by at least $1$ with constant probability. Fix such a time interval. We claim that with constant probability some vertex appears twice in the interval. If in the first $\sqrt{k}$ times there is a vertex $i$ that appears twice, we are done. Otherwise, we have $\sqrt{k}$ distinct vertices. The probability that we resample one of those vertices in the next $\sqrt{k}$ time steps is at least $1/e$. Now, if vertex $i$ appears twice in the interval then the offline algorithm can move at the start of the interval to vertex $i$ and increase social welfare by at least $1$. This implies that the expected social welfare of this offline strategy is $\Theta(T/\sqrt{k})$, which lower bounds the expected social welfare of the optimal offline strategy. Since the online algorithm has expected social welfare of $T/k$ and the optimal offline algorithm has expected social welfare of $\Theta(T/\sqrt{k})$, the competitive ratio, for $\rho=1$, is bounded by $O(\sqrt{1/k})$. We now sketch how the proof extends to a general $\rho\geq 1$. In this case we partition the $k$ vertices into $N=\floor{k/\ceil{\rho}}$ disjoint subsets, each of size $M=\ceil{\rho}$. (Note, that $N\cdot M\leq k$.) 
The $N$ subsets replace the vertices $V$ and each time we select a subset, we give a uniform demand over the subset. (note that the demand per vertex is $1/M\leq 1/\rho$.) As before, any online algorithm has expected social welfare of $\Theta(T/N)=\Theta(T\rho/k)$. Similar to before, there is an offline strategy that guarantees an expected social welfare of $\Theta(T\sqrt{\rho/k})$. This implies that the competitive ratio is at most $\Theta(\sqrt{\rho/k})$. \end{proof} {\bf Proof of Theorem \ref{thm:arbmetric}:} \begin{proof} The idea is the following: we generate a demand sequence that at every time step demand is concentrated in a single vertex. We generate a random sequence of vertices, such that no two successive positions are identical. We then duplicate every position for a random duration. The duration, the number of successive demands at that position, is geometrically distributed. We set the parameters such that no online algorithm can benefit by switching between vertices. On the other hand, given a sufficiently long duration of repeated demands for the same vertex, the optimal schedule switches to this vertex. We now describe the stochastic demand sequence generation. We first generate a sequence of locations $c$. We set $c_1= i\in V$ uniformly at random. For $c_\tau$ we set $c_\tau=j$ where $j\in V\setminus \{c_{\tau-1}\}$ uniformly. In addition we generate a sequence of duration $b$ distributed geometrically with parameter $p=\frac{1}{1+\epsilon}$. Namely, $b_\tau=j$ with probability $p^{j-1}p$, for $j\geq 1$. We are now ready to generate the demand sequence $d$. For each $c_\tau=i$ we associate a unit vector $e_i$ which has $e_{i,i}=1$ and $e_{i,j}=0$ for $j\neq i$. We duplicate $e_{c_\tau}$ exactly $b_\tau$ times. We truncate the sequence at time $T$, and this is the demand sequence $d$. First consider an arbitrary online algorithm. 
We claim that it does not gain (in expectation) any social welfare by moving supply, and hence its expected social welfare is $T/k$. The argument is that the cost of moving $\delta$ supply to a new location is $(1+\epsilon)\delta$. On the other hand, the expected duration in the new location is only $1+\epsilon$, so in expectation there is no benefit. For an online algorithm that does not move any supply, the expected social welfare is $T/k$. We now analyze the social welfare attained by an optimal offline algorithm. The main benefit of an offline algorithm is that it has access to the realized durations $b_\tau$. It is simple to see that if $b_\tau\geq 2$ then the offline algorithm has a benefit of $b_\tau - (1+\epsilon)>0$. The expected benefit per position is \begin{eqnarray} \mathop{{}\mathbb{E}}[b_\tau-(1+\epsilon)|b_\tau\geq 2]\Pr[b_\tau\geq 2] =\sum_{i=2}^{\infty} \left(\frac{\epsilon}{\epsilon + 1}\right)^{i-1} \cdot \frac{i-1-\epsilon}{1+\epsilon}= \frac{\epsilon}{1+\epsilon}.\nonumber \end{eqnarray} We would now like to sum over $\tau$; however, the number of summands is a random variable. Since we have a random sum of random variables, we use Wald's identity. Since the expected number of summands is $\frac{T}{1+\epsilon}$ and the expectation of each is $\frac{\epsilon}{1+\epsilon}$, we have that the optimal offline algorithm has an expected social welfare of at least $\frac{\epsilon}{(1+\epsilon)^2} T$. This implies that no algorithm has a competitive ratio better than $\frac{(1+\epsilon)^2}{\epsilon}\frac{1}{k}$. \end{proof} {\bf Proof of Theorem \ref{thm:restriced drift}:} \begin{proof} Since for $\ell_{ij}=1$ the earthmover distance metric coincides with the total variation metric, at time $t$ the social welfare of $\mathrm{\tt match}$ is $1-\|d^t-s^{t-1}\|_{tv}=1-\|d^t-d^{t-1}\|_{tv}$, since $\mathrm{\tt match}$ sets $s^{t-1}=d^{t-1}$. Summing over all time steps, we get that the social welfare of $\mathrm{\tt match}$ is $T-\delta T$.
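The per-step accounting for $\mathrm{\tt match}$ (welfare $1-\|d^t-d^{t-1}\|_{tv}$ when $\ell_{ij}=1$) can be illustrated on a toy sequence; the demand vectors below are hypothetical, and treating the first step as fully served is an assumption of this sketch:

```python
def tv(p, q):
    """Total variation distance between two demand vectors."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Hypothetical demands d^1..d^4 over three locations.
d = [
    (1.0, 0.0, 0.0),
    (0.9, 0.1, 0.0),
    (0.9, 0.0, 0.1),
    (0.9, 0.1, 0.0),
]

# match sets s^{t-1} = d^{t-1}, so each later step contributes 1 - tv(d^t, d^{t-1}).
drift = sum(tv(d[t], d[t - 1]) for t in range(1, len(d)))
welfare = 1.0 + sum(1.0 - tv(d[t], d[t - 1]) for t in range(1, len(d)))
print(welfare, len(d) - drift)  # welfare equals T minus the accumulated drift
```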
Since the social welfare of $\mathrm{opt}$ is at most $T$, we have that $\mathrm{\tt match}$ is $(1-\delta)$-competitive. For a general metric, note that $\mathrm{em}(d^t,d^{t-1})\leq \ell_{\max} \|d^t-d^{t-1}\|_{tv}$. This implies that the social welfare of $\mathrm{\tt match}$ is at least $(1-\ell_{\max}\delta) T$, and hence it is $(1-\ell_{\max}\delta)$-competitive. \end{proof} {\bf Proof of Theorem \ref{thm:upper bound ellij1 drift}:} \begin{proof} Consider the following demand sequence. The demand sequence uses only the first two locations, {\sl i.e.}, for all locations $i\neq1,2$ and times $t$ we have $d^t_i=0$. For each time $t$ we select the demand randomly from the following distribution. $$d^t = \left\{\begin{array}{cl} d^t_1=1,d^t_2=0 & \mbox{\rm With probability $\frac{1}{2}$} \\ d^t_1=1-2\delta,d^t_2=2\delta & \mbox{\rm With probability $\frac{1}{2}$} \end{array}\right. $$ The generated sequence has an expected drift of $\delta T$. Any online algorithm $ALG$ has, in expectation, social welfare of $(1-\delta)T$. The main point is that $\mathrm{opt}$ has a strictly better expected social welfare. Consider the online algorithm $\mathrm{\tt match}$ as a starting point. Partition the time into $T/2$ pairs of time slots, $[2m-1,2m]$. Consider the event that $d^{2m-2}=d^{2m}\neq d^{2m-1}$. This event occurs with probability $1/4$. In such an event we can modify $\mathrm{\tt match}$ and at time $2m-1$ set $s^{2m-1}=d^{2m}$. (This requires knowing the future, but we are interested in $\mathrm{opt}$ so it is fine.) Such a modification increases the social welfare by $2\delta$ (lowering the serviced demand by $2\delta$ and lowering the movement costs by $4\delta$). Therefore, the expected social welfare is improved by $(1/4)(2\delta)(T/2)$. This implies that the expected social welfare of $\mathrm{opt}$ is at least $(1-(3/4)\delta)T$.
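A quick numeric tally of the bounds in this argument (a sketch; the values of $\delta$ are arbitrary):

```python
# Verify opt >= (1 - (3/4)delta) T and the resulting ratio bound for a few deltas.
for delta in (0.01, 0.1, 0.5):
    T = 1.0
    alg = (1 - delta) * T                 # expected welfare of any online algorithm
    gain = 0.25 * (2 * delta) * (T / 2)   # probability-1/4 event, 2*delta gain, T/2 pairs
    opt = alg + gain
    assert abs(opt - (1 - 0.75 * delta) * T) < 1e-12
    # competitive ratio at most (1-delta)/(1-(3/4)delta), which is <= 1 - delta/4
    assert alg / opt <= 1 - delta / 4 + 1e-12
print("bounds check out")
```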
This means that no algorithm is more than $\frac{1-\delta}{1-(3/4)\delta}$-competitive. Since $\frac{1-\delta}{1-(3/4)\delta}\leq 1-\delta/4$, no online algorithm can have a competitive ratio better than $1-\delta/4$. \end{proof} \fi \end{document}
\begin{definition}[Definition:Pre-Order Traversal of Labeled Tree] Let $T$ be a binary labeled tree. '''Pre-order traversal''' of $T$ is an algorithm designed to obtain a string representation of $T$. The steps are as follows: $\mathtt{Preorder} (T):$ * $n \gets r$, where $r$ is the root node of $T$. * Output the label of $n$. * If $n$ is a leaf node, stop. * Let $T_1$ and $T_2$ be the left and right subtrees of $T$. * If $T_1$ is empty, skip this step; otherwise output $\mathtt{Preorder} (T_1)$. * If $T_2$ is empty, skip this step; otherwise output $\mathtt{Preorder} (T_2)$. * Stop. The resulting string will be in Polish notation. \end{definition}
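A minimal Python sketch of this algorithm (the `Node` representation of a binary labeled tree is an assumption of the sketch):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def preorder(t: Optional[Node]) -> str:
    """Output the root's label, then recurse on the left and right
    subtrees, skipping any subtree that is absent."""
    if t is None:
        return ""
    parts = [t.label]
    for sub in (t.left, t.right):
        if sub is not None:
            parts.append(preorder(sub))
    return " ".join(parts)

# The labeled tree for (a + b) * c yields Polish (prefix) notation:
tree = Node("*", Node("+", Node("a"), Node("b")), Node("c"))
print(preorder(tree))  # * + a b c
```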
Distortion-specific feature selection algorithm for universal blind image quality assessment
Imran Fareed Nizami, Muhammad Majid, Waleed Manzoor, Khawar Khurshid & Byeungwoo Jeon
Blind image quality assessment (BIQA) aims to use objective measures for predicting the quality score of distorted images without any prior information regarding the reference image. Several BIQA techniques are proposed in literature that use a two-step approach, i.e., feature extraction for distortion classification and regression for predicting the quality score. In this paper, a three-step approach is proposed that aims to improve the performance of BIQA techniques. In the first step, feature extraction is performed using existing BIQA techniques to determine the distortion type. Secondly, features are selected for each distortion type based on the mean value of Spearman rank ordered correlation constant (SROCC) and linear correlation constant (LCC). Lastly, distortion-specific features are used by a regression model to predict the quality score. Experimental results show that the predicted quality score using distortion-specific features strongly correlates with the subjective quality score, improves the overall performance of existing BIQA techniques, and reduces the processing time.
In recent years, multimedia content has become a significant part of our lives. Delivery of images at the highest quality to the end user is an essential requirement for many modern imaging applications. Therefore, estimation of perceived image quality by humans, also known as subjective evaluation, has gained importance. Subjective evaluation is used as a benchmark for image quality assessment (IQA), but the constraint of time and the tedious nature of the task make it unsuitable for many applications [1]. IQA techniques aim to replicate the behavior of the human visual system to evaluate the quality score of images using objective parameters or measures.
Objective IQA is divided into full reference (FR), reduced reference (RR), and blind IQA techniques. FR-IQA techniques require the pristine version of the image to predict the quality score of images [2–10]. RR-IQA techniques do not require the whole reference image but some information extracted from the reference image to perform IQA [11–17]. Techniques that evaluate the image quality score without the use of any prior information about the reference image are called blind image quality assessment (BIQA) techniques [1]. BIQA techniques usually follow a two-step approach. Firstly, the distortion type affecting the image is determined using extracted features, and then regression is applied to predict the quality score of the image. Most BIQA techniques extract features that are altered in the presence of distortion [18–37]. Features are extracted either in the spatial domain, wavelet domain, or discrete cosine transform (DCT) domain, or using edge information of the image. Discrete wavelet transform and complex wavelet transform have been used in [38] and [24], respectively, to extract statistical features from distorted images to determine the distortion type and predict the quality score. The ability of wavelet transforms to extract high frequencies can be exploited for edge analysis and IQA. In [31], the curvelet transform, which carries rich information of scale and orientation in the image, is utilized to extract statistical features for the purpose of BIQA. Blind image integrity notator [39], which is an extended version of [40], has used DCT-based features along with the Bayesian inference model for predicting the quality score of the image. It requires no statistical model and utilizes sampled DCT coefficients for IQA. Natural scene statistics (NSS)-based features have been extracted in the spatial domain for the evaluation of image quality score using luminance information [41]. It is simple and computationally less expensive since no transform has to be computed.
In [42], a collection of quality-aware features in the spatial domain is utilized for BIQA, which measures the deviation in statistical properties between distorted and natural images. In [43], two-dimensional features called atoms are introduced, which use a sparse representation of coefficients in the feature set to assess the quality of images. The shearlet transform has been used in [44] to model the NSS characteristics of images. Natural undistorted parts of the image are compared with the distorted parts to assess the quality score. Laplacian of Gaussian and gradient magnitude along with support vector regression (SVR) are used in [30] for predicting the quality score of images using edge information. In [45], spatial and spectral entropy that captures information over different scales is used to compute features for the evaluation of quality score. Since utilizing entropy as a feature for FR-IQA showed promising results, entropy was also utilized as a feature for BIQA. Recently, BIQA has been performed on multiply distorted images by augmenting the features extracted using blind image quality index (BIQI) [46], blind/referenceless image spatial quality evaluator (BRISQUE) [41], and sparse representation for natural scene statistics (SRNSS) [47], and selecting the top three features based on the average values of Spearman rank ordered correlation constant (SROCC) and linear correlation constant (LCC) [48]. But the performance of the feature selection in [48] is limited to the LIVE multiply-distorted image database and the features extracted in [42, 46, 47]. The performance of [48] cannot be generalized to other BIQA techniques. All of the aforementioned BIQA techniques employ the same set of features for each distortion type to evaluate the quality score of images.
Each distortion type affects each individual BIQA feature in a distinct manner because every type of distortion exhibits different characteristics, e.g., Gaussian blur affects the edge information in an image, whereas JPEG distortion introduces blockiness. Therefore, the same set of features used for every distortion type will not yield optimum results. This paper introduces a distortion-specific feature selection algorithm, which is based on Spearman rank ordered correlation constant (SROCC) and linear correlation constant (LCC) scores. All features having individual SROCC and LCC scores greater than the mean values of SROCC and LCC are selected for the specific distortion type. The major contributions of this work are as follows:
A new SROCC- and LCC-based feature selection algorithm is proposed, which can be utilized with any two-step BIQA framework.
The proposed algorithm improves the performance of BIQA techniques in terms of better correlation of the predicted quality score with the mean observer score (MOS) and reduces the processing time.
The proposed three-step approach is robust, database independent, and applicable in real-time scenarios.
The rest of the paper is organized as follows. Section 2 explains the proposed methodology along with the distortion-specific feature selection algorithm. Section 3 presents the experimental results for six different BIQA techniques, followed by the conclusion in Section 4.
Proposed methodology
The proposed methodology for BIQA is shown in Fig. 1, which follows a three-step approach, i.e., feature extraction, distortion-specific feature selection, and support vector regression, in contrast to the traditional two-step approach of feature extraction and regression. The details of each step are as follows.
Block diagram of the proposed three-step BIQA approach
Feature extraction for BIQA
In the first step, N features F = f1,···,fN are extracted using existing BIQA techniques.
Since noise in the image usually disrupts the high-frequency information in the image, such as edges and corners, established BIQA techniques usually extract features in spatial and transform domains that model the deviation in characteristics of distorted images in comparison to natural images. To validate the performance of the proposed feature selection algorithm over features extracted in different domains, six BIQA techniques are selected that extract features in the spatial, DCT transform, wavelet transform, curvelet transform, and spectral domains. All the selected BIQA techniques follow a two-step approach, i.e., feature extraction and distortion classification, followed by the computation of the quality score using SVR. The six BIQA techniques include BRISQUE [41], gradient magnitude and Laplacian of Gaussian-based IQA (GM-LOG) [30], blind image integrity notator based in DCT statistics II (BLIINDS II) [39], spatial-spectral entropy-based quality (SSEQ) [45], distortion identification-based image verity and integration evaluation (DIIVINE) [38], and curvelet quality assessment (CurveletQA) [31]. The details of the BIQA techniques used for feature extraction are as follows:
Blind/referenceless image spatial quality evaluator (BRISQUE)
BRISQUE [41] extracts features in the spatial domain by utilizing locally normalized luminance coefficients and their products over two scales. Local mean displacements are removed to normalize the local variance of log contrast, which has de-correlating properties. Eighteen features are extracted over each scale, using the shape and variance parameters of the normalized luminance coefficients and the shape, mean, left variance, and right variance parameters for the horizontal, vertical, and two diagonal pairwise products. BRISQUE uses a total of 36 features for the evaluation of quality score.
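The local normalization behind BRISQUE's features can be sketched as follows; BRISQUE itself uses Gaussian-weighted local windows, so this simplified, unweighted-window version is an illustration only:

```python
def local_normalize(img, win=1, C=1.0):
    """Divisive normalization: (I - local mean) / (local std + C).
    This sketch uses an unweighted (2*win+1)^2 window; BRISQUE uses
    Gaussian-weighted windows, so the numbers here are illustrative only."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - win), min(H, y + win + 1))
                    for xx in range(max(0, x - win), min(W, x + win + 1))]
            mu = sum(vals) / len(vals)
            sigma = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
            out[y][x] = (img[y][x] - mu) / (sigma + C)
    return out

# A perfectly flat patch normalizes to zero everywhere.
mscn = local_normalize([[5.0] * 4 for _ in range(4)])
```

The empirical distribution of such normalized coefficients, and of their pairwise products, is what the 18 per-scale features summarize.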
Gradient magnitude and Laplacian of Gaussian-based IQA (GM-LOG)
GM-LOG [30] uses the joint statistical relationship between the local contrast features of Laplacian of Gaussian (LOG) and gradient magnitude (GM) for BIQA. An adaptive procedure called joint adaptive normalization, based on gain control and divisive normalization models on the local neighborhood, is used to remove the spatial redundancies of GM and LOG coefficients. The technique follows a two-step approach, i.e., identification of distortion type and quality score prediction. A total of 40 features are extracted, which describe the structural information of the images, for assessing the quality using SVR.
Blind image integrity notator based in DCT statistics II (BLIINDS II)
BLIINDS II [39] extracts NSS features in the DCT domain on 17×17 patches of the image. Each DCT block is divided into three directional regions, and Gaussian fitting is performed on each region. BLIINDS II extracts four types of features, namely, coefficients of frequency variation, generalized energy subband ratio measure, Gaussian model shape parameters, and orientation model-based features, to obtain 24 features. These features are utilized with a Bayesian inference model for predicting the image quality.
Spatial-spectral entropy-based quality (SSEQ)
SSEQ [45] extracts features at three scales of image resolution. To avoid aliasing, bi-cubic interpolation is used during down sampling. Each image is partitioned into subregions consisting of 8×8 pixels. Spatial entropy and spectral entropy in the DCT domain are computed for each patch. The spatial and spectral entropies are sorted in ascending order, and 60% of the central elements are selected, which constitute a feature vector of length 12 to predict the quality score. These features are used as input to a pre-trained support vector classification (SVC) model for identifying the distortion type affecting the image and then given as input to an SVR model for predicting the quality score.
Distortion Identification-based Image Verity and IntegratioN Evaluation (DIIVINE)
DIIVINE [38] uses a loose discrete wavelet transform to compute five groups of statistical features over two scales and six orientations by using steerable pyramids and statistical distribution curve fitting. Five groups of features, namely scale and orientation selective features, orientation selective statistics, correlation across scales, spatial correlation, and across orientation statistics, constitute a feature vector of length 88. The feature vector is given as input to the SVC for the determination of distortion type, and SVR is utilized for the computation of the image quality score.
Curvelet Quality Assessment (CurveletQA)
CurveletQA [31] extracts three types of features from curvelet subbands. Four NSS features are computed on the finest scale of the curvelet subbands using an asymmetric Gaussian distribution, two features are extracted on the finest detail layer using the mean value of kurtosis and the ratio of sample mean and standard deviation of the non-cardinal orientation energies, and six features are computed for scalar energy distribution by taking the difference of mean values of logarithmic magnitude of subbands at adjacent layers. These 12 features are used for the prediction of distortion type using SVC and the prediction of quality score using SVR, respectively. The extracted features using BIQA techniques are given as input to the SVC to determine the distortion type D.
Distortion-specific feature selection
Natural images are highly structured and possess properties that are affected when distortion is introduced in an image. These properties are known as natural scene statistics (NSS) [22]. BIQA techniques that utilize NSS try to assess the image quality based on the deviation between the NSS properties of distorted and natural images.
The features that can effectively represent the deviation between the NSS properties of distorted and natural images show better performance, and they can predict the image quality more accurately. The main objective of this work is to introduce a generalized approach for distortion-specific feature selection that aims to improve the performance of existing BIQA techniques using features that can effectively represent the deviation in characteristics of an image from those of a natural image for a particular distortion type, which is also validated by Fig. 2. Therefore, the second step involves the selection of distortion-specific features based on SROCC and LCC scores, represented as FD={f1,···,fM}, where M≤N. Generally, SROCC and LCC are utilized to assess the similarity between the MOS and the quality score predicted using BIQA techniques. A value close to 1 suggests a superior performance. Therefore, we select those features that have individual SROCC and LCC scores greater than the mean SROCC and mean LCC computed over individual features. The selected features have SROCC and LCC values closer to 1, resulting in enhancement of the prediction score. The SROCC and LCC scores are computed for each individual feature as, $$ \begin{aligned} SROCC=1-\frac{6\sum\limits_{i=1}^{T} d_{i}^{2}}{T\left(T^{2}-1\right)}, \end{aligned} $$ Normalized histogram of feature values averaged over all the distortion types for selected BIQA techniques.
a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features, and after proposed feature selection algorithm
where di is the difference between paired ranks and T is the total number of samples, and $$ \begin{aligned} LCC = \frac{\sum\limits_{i=1}^{T}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum\limits_{i=1}^{T}(x_{i}-\bar{x})^{2}}\sqrt{\sum\limits_{i=1}^{T}(y_{i}-\bar{y})^{2}}}, \end{aligned} $$ where xi and yi are the ith instances in the first and second dataset, respectively, and \(\bar {x}\) and \(\bar {y}\) are the mean values of datasets x and y, respectively. To perform feature selection, SROCCi and LCCi, for each individual feature Fi belonging to a BIQA technique, are computed. The predicted quality score using each individual feature is calculated using a pre-trained SVR model. Eighty percent of the images in the dataset are used to train the SVR model, and testing is performed using the remaining 20% of the images. The mean score of SROCC (μS) and the mean score of LCC (μL) computed over 1000 iterations are utilized for selecting the distortion-specific features FD. The proposed feature selection approach for BIQA is presented as Algorithm 1.
Support vector regression
In the third step, the features FD selected by Algorithm 1 are given as input to the SVR for the prediction of quality score. Each distortion-specific regression model has different features as input. The SVR is given as, $$ \begin{aligned} \psi(F_{D})=\alpha\beta(F_{D})+b, \end{aligned} $$ where FD is the input feature vector, α is the weight constant, β(·) is the feature space, and b is the bias value. The proposed methodology is evaluated on four IQA databases, i.e., LIVE [49], CSIQ [50], TID2013 [51], and the LIVE in the wild image quality challenge database [52]. The LIVE database contains 779 images and five distortion types, namely fast fading (FF), Gaussian blur (GB), JPEG2000 compression (JP2KC), JPEG compression (JPEG), and white noise (WN).
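The two correlation measures and the selection rule of Algorithm 1 can be sketched directly from the formulas above (an illustrative reimplementation, not the authors' code):

```python
from statistics import mean

def rank(xs):
    """Average ranks (1-based); ties share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def lcc(x, y):
    """Pearson linear correlation, per the LCC formula above."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) ** 0.5
           * sum((b - my) ** 2 for b in y) ** 0.5)
    return num / den

def srocc(x, y):
    """Spearman correlation: Pearson correlation of the ranks (equivalent
    to the d_i^2 formula when there are no ties)."""
    return lcc(rank(x), rank(y))

def select_features(scores):
    """Algorithm 1 sketch: keep features whose individual SROCC and LCC
    both exceed the respective means over all features."""
    mu_s = mean(s for s, _ in scores.values())
    mu_l = mean(l for _, l in scores.values())
    return sorted(i for i, (s, l) in scores.items() if s > mu_s and l > mu_l)

# Hypothetical per-feature (SROCC, LCC) scores for one distortion type.
print(select_features({0: (0.9, 0.9), 1: (0.5, 0.5), 2: (0.8, 0.85)}))  # [0, 2]
```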
The CSIQ database consists of 900 images and six distortion types, i.e., GB, JP2KC, JPEG, WN, global contrast (GC), and pink noise (PN). The TID2013 database consists of 3000 images and 24 distortion types. Figure 2 shows the normalized histograms of features averaged over all the distortion types and three databases, i.e., LIVE, TID2013, and CSIQ. Most BIQA techniques assess the quality of a distorted image by measuring the deviation of image characteristics from the characteristics of non-distorted images. Therefore, BIQA techniques should perform well if the deviation in the characteristics of images is represented by the extracted features. It can be observed that the deviation in the characteristics of features of the distorted images from the non-distorted image is increased when the proposed feature selection is performed, as compared to using all the features. For estimation of results, the support vector machine requires pre-trained models for determining the distortion type and predicting the quality score. Therefore, we divide the dataset into two disjoint sets, i.e., training and testing. Eighty percent of the images are selected for training, whereas 20% of the images are utilized for testing. The training and testing are repeated 1000 times with random disjoint sets of images to predict the quality score. The SVR parameters c and γ used in this paper are the same as those mentioned by the respective BIQA techniques. Median scores of SROCC, LCC, Kendall correlation constant (KCC), and root mean squared error (RMSE) are reported for the performance evaluation of the proposed approach. The SROCC, LCC, and KCC scores measure the similarity between the mean observer score and the predicted quality score, whereas RMSE measures the error. Figures 3, 4, and 5 show the performance of the proposed distortion-specific feature selection algorithm for selected BIQA techniques over each distortion type for the TID2013, LIVE, and CSIQ IQA databases, respectively.
The horizontal axis in Fig. 3 represents the distortion type label as given in the TID2013 database. It is evident from the results that the distortion-specific feature selection algorithm improves the SROCC score for the majority of distortion types on the selected BIQA techniques. It can be observed from Fig. 3 that the proposed technique consistently outranks the BIQA techniques; the proposed algorithm shows better or at-par performance on 15, 18, 14, 15, 14, and 17 out of a total of 24 distortion types as compared to using all the features for BRISQUE [41], BLIINDS II [39], GM-LOG [30], SSEQ [45], DIIVINE [38], and CurveletQA [31], respectively, on the TID2013 database. Similarly, the proposed technique shows better or at-par performance on 4, 4, 4, 4, 3, and 4 out of a total of five distortion types as compared to using all the features for BRISQUE [41], BLIINDS II [39], GM-LOG [30], SSEQ [45], DIIVINE [38], and CurveletQA [31], respectively, on the LIVE database. On the CSIQ database, the proposed technique shows better or at-par performance on 6, 3, 4, 4, 6, and 3 out of a total of 6 distortion types as compared to using all the features for BRISQUE [41], BLIINDS II [39], GM-LOG [30], SSEQ [45], DIIVINE [38], and CurveletQA [31], respectively.
Performance comparison of proposed algorithm for each distortion on TID2013 database (median SROCC), a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features, and after proposed feature selection algorithm
Performance comparison of proposed algorithm for each distortion on LIVE database (median SROCC), a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features, and after proposed feature selection algorithm
Performance comparison of proposed algorithm for each distortion on CSIQ database (median SROCC), a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features, and after proposed feature selection algorithm
Figures 6, 7, 8, and 9 show the overall performance comparison of each BIQA technique along with the proposed distortion-specific feature selection algorithm and the feature selection algorithm given in [48] on four IQA databases, i.e., LIVE [49], CSIQ [50], TID2013 [51], and the LIVE in the wild image quality challenge database [52], respectively. It can be observed that the proposed distortion-specific feature selection algorithm improves the overall performance of all six state-of-the-art BIQA techniques as compared to using all the features, whereas using the feature selection algorithm of [48] gives performance worse than the original BIQA techniques. The proposed algorithm also improves the performance of BIQA techniques on real images. The performance on the LIVE in the wild image quality challenge database shows that the proposed algorithm can be used in real-time scenarios with real images taken in daylight and night-time conditions.
Overall performance comparison of proposed algorithm on different BIQA techniques for LIVE database, a SROCC, b LCC, c KCC, d RMSE
Overall performance comparison of proposed algorithm on different BIQA techniques for CSIQ database, a SROCC, b LCC, c KCC, d RMSE
Overall performance comparison of proposed algorithm on different BIQA techniques for TID2013 database, a SROCC, b LCC, c KCC, d RMSE
Overall performance comparison of proposed algorithm on different BIQA techniques for LIVE in the wild challenge database, a SROCC, b LCC, c KCC, d RMSE
The performance of the proposed algorithm is further validated by Fig. 10, which presents the comparison between the performance using all features and using the proposed feature selection algorithm in terms of box plots for each BIQA technique. Box plots measure the dispersion or variance in data utilizing the interquartile range and standard deviation, represented by a five-number summary that includes the minimum value, first quartile (Q1), median value, third quartile (Q3), and maximum value of the samples. The interquartile range is computed as the difference between Q3 and Q1. Q1 in box plots denotes the 25th percentile of the SROCC values, i.e., 25% of the SROCC values lie below Q1, and Q3 denotes the 75th percentile of the SROCC values, i.e., 75% of the SROCC values lie below Q3. The box plots are computed for SROCC scores obtained over 1000 runs averaged over all the IQA databases. It can be observed that the predicted quality score using BIQA techniques shows higher correlation with MOS when feature selection is performed. The interquartile range of the box plot for SROCC is reduced when feature selection is applied, which depicts the reduction in the standard deviation of quality score prediction for BIQA techniques.
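The quartile arithmetic behind a box plot can be reproduced with Python's statistics module (the SROCC values below are toy numbers, not results from the paper):

```python
from statistics import quantiles

# Toy SROCC values standing in for the 1000-run scores.
sroccs = [0.70, 0.72, 0.74, 0.75, 0.76, 0.78, 0.80, 0.81, 0.83, 0.85, 0.90]
q1, med, q3 = quantiles(sroccs, n=4)  # 25th, 50th, and 75th percentiles
iqr = q3 - q1                         # the height of the box in a box plot
print(q1, med, q3, iqr)
```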
Box plots of SROCC score for different BIQA techniques, a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using all features and after feature selection algorithm
Table 1 shows the overall performance of the proposed feature selection algorithm for cross-database validation, when training is performed on one database and testing is performed on the other two databases. Four common types of distortion, i.e., GB, JP2KC, JPEG, and WN, are considered for cross-database evaluation. It can be observed that the proposed feature selection algorithm performs better than using all the features over all the BIQA techniques considered in this work. The cross-database evaluation results show that the proposed feature selection algorithm is database independent and improves the overall performance of BIQA techniques irrespective of the database.
Table 1 Overall performance comparison of proposed algorithm for cross-database validation
Table 2 shows the comparison of the proposed feature selection algorithm with the BIQA techniques in terms of number of features and total processing time, using a Core i7 processor with 8 GB of RAM operating at 2.3 GHz. It can be observed that the proposed distortion-specific feature selection algorithm outperforms all BIQA techniques by reducing the number of features and improving the performance of existing BIQA techniques. The proposed feature selection also shows a slight reduction in total processing time, since the reduction in the number of features leads to reduced training and testing time for the SVR. The time taken for the computation of the SROCC and LCC scores over individual features is not added, as it is performed once to indicate which features are selected; therefore, it would be unfair to add it to the processing time for the prediction of the quality score each time a test image is given as input.
The largest reduction in processing time, of 2.94%, is obtained for GM-LOG, and the lowest reduction, of 0.013%, is obtained for the BLIINDS II IQA technique.
Table 2 Reduction in number of features by using proposed algorithm for different BIQA techniques on LIVE database
BIQA techniques proposed in the literature use the same set of features for all the distortion types to evaluate the quality score of images. Each distortion type affects each individual BIQA feature in a distinct manner because each type of distortion exhibits different characteristics. Therefore, using the same set of features for all the distortion types will not yield optimum results. This paper presents a distortion-specific feature selection algorithm based on the mean values of SROCC and LCC scores for blind image quality assessment. All features having individual SROCC and LCC scores greater than the mean values of SROCC and LCC computed over all the features are selected for the specific distortion type. The proposed algorithm is tested on six BIQA techniques and over the four most commonly used IQA databases. The experimental results show that the proposed approach not only improves the performance of existing BIQA techniques but also reduces the number of features, which results in a reduction of processing time. The proposed distortion-specific feature selection algorithm can be used with any BIQA technique that follows a two-step approach. Results on cross-database evaluation show that the proposed algorithm is robust and database independent. Furthermore, experimental results on the LIVE in the wild image quality challenge database show that the proposed algorithm is also valid for real images.
BIQA: Blind image quality assessment
BLIINDS II: Blind image integrity notator based in DCT statistics II
BRISQUE: Blind/referenceless image spatial quality evaluator
CurveletQA: Curvelet quality assessment
DCT: Discrete cosine transform
DIIVINE: Distortion identification-based image verity and integration evaluation
FF: Fast fading
FR: Full reference
GB: Gaussian blur
GC: Global contrast
GM: Gradient magnitude
GM-LOG: Gradient magnitude and Laplacian of Gaussian-based IQA
IQA: Image quality assessment
JP2KC: JPEG2000 compression
JPEG: JPEG compression
KCC: Kendall correlation constant
LCC: Linear correlation constant
LOG: Laplacian of Gaussian
MOS: Mean observer score
NSS: Natural scene statistics
PN: Pink noise
RMSE: Root mean squared error
RR: Reduced reference
SROCC: Spearman rank ordered correlation constant
SSEQ: Spatial-spectral entropy-based quality
SVC: Support vector classification
SVR: Support vector regression
WN: White noise
W. Hou, X. Gao, D. Tao, X. Li, Blind image quality assessment via deep learning. IEEE Trans. Neural Netw. Learn. Syst. 26(6), 1275–1286 (2015).
M. Oszust, Full-reference image quality assessment with linear combination of genetically selected quality measures. PloS ONE 11(6), e0158333 (2016).
H. Khosravi, M. H. Hassanpour, Model-based full reference image blurriness assessment. Multimed. Tools Appl. 76(2), 2733–2747 (2017).
Z. Chen, J. Lin, N. Liao, C. W. Chen, Full reference quality assessment for image retargeting based on natural scene statistics modeling and bi-directional saliency similarity. IEEE Trans. Image Process. (2017).
A. Saha, Q. J. Wu, Full-reference image quality assessment by combining global and local distortion measures. Signal Process. 128, 186–197 (2016).
Y. Ding, S. Wang, D. Zhang, Full-reference image quality assessment using statistical local correlation. Electron. Lett. 50(2), 79–81 (2014).
S. Rezazadeh, S. Coulombe, A novel discrete wavelet transform framework for full reference image quality assessment. Signal Image Video Process. 7(3), 559–573 (2013).
H. Z. Nafchi, A. Shahkolaei, R. Hedjam, M.
Cheriet, Mean deviation similarity index: efficient and reliable full-reference image quality evaluator. IEEE Access. 4:, 5579–5590 (2016). J. Yang, Y. Lin, B. Ou, X. Zhao, Image decomposition-based structural similarity index for image quality assessment. EURASIP J. Image Video Process.2016(1), 31 (2016). G. Yang, D. Li, F. Lu, Y. Liao, W. Yang, RVSIM: a feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process.2018(1), 6 (2018). Y. Liu, G. Zhai, K. Gu, X. Liu, D. Zhao, W. Gao, Reduced-reference image quality assessment in free-energy principle and sparse representation. IEEE Trans. Multimedia. 20:, 379–391 (2017). D. Liu, F. Li, H. Song, Regularity of spectral residual for reduced reference image quality assessment. IET Image Processing. 11:, 1135–1141 (2017). S. Golestaneh, L. J. Karam, Reduced-reference quality assessment based on the entropy of DWT coefficients of locally weighted gradient magnitudes. IEEE Trans. Image Process.25(11), 5293–5303 (2016). Article MathSciNet MATH Google Scholar J. Wu, W. Lin, Y. Fang, L. Li, G. Shi, I. Niwas, Visual structural degradation based reduced-reference image quality assessment. Signal Process. Image Commun.47:, 16–27 (2016). J. Wu, W. Lin, G. Shi, L. Li, Y. Fang, Orientation selectivity based visual pattern for reduced-reference image quality assessment. Inf. Sci.351:, 18–29 (2016). S. Bosse, Q. Chen, M. Siekmann, W. Samek, T. Wiegand, in Image Processing (ICIP), 2016 IEEE International Conference On. Shearlet-based reduced reference image quality assessment (IEEEPiscataway, 2016), pp. 2052–2056. Y. Zhang, T. D. Phan, DM Chandler, Reduced-reference image quality assessment based on distortion families of local perceived sharpness. Signal Process. Image Commun.55:, 130–145 (2017). Q. Wu, H. Li, F. Meng, B. Ngan, K. N. Luo, C. Huang, B. Zeng, Blind image quality assessment based on multichannel feature fusion and label transfer. IEEE Trans. Circ. Syst. 
Video Technol.26(3), 425–440 (2016). Q. Li, W. Lin, J. Xu, Y. Fang, Blind image quality assessment using statistical structural and luminance features. IEEE Trans. Multimedia. 18(12), 2457–2469 (2016). W. Lu, T. Xu, Y. Ren, L. He, Statistical modeling in the shearlet domain for blind image quality assessment. Multimedia Tools Appl.75(22), 14417–14431 (2016). Y. Zhang, J. Wu, X. Xie, L. Li, G. Shi, Blind image quality assessment with improved natural scene statistics model. Digit. Signal Process.57:, 56–65 (2016). M. Nizami, I. F. Majid, H. Afzal, K. Khurshid, Impact of feature selection algorithms on blind image quality assessment. Arab. J. Sci. Eng.43:, 1–14 (2017). S. Du, Y. Yan, Y. Ma, Blind image quality assessment with the histogram sequences of high-order local derivative patterns. Digit. Signal Process.55:, 1–12 (2016). Y. Zhang, A. K. Moorthy, D. M. Chandler, A. C. Bovik, C-diivine: No-reference image quality assessment based on local magnitude and phase statistics of natural scenes. Signal Process. Image Commun.29(7), 725–747 (2014). G. Yang, Y. Liao, Q. Zhang, D. Li, W. Yang, No-reference quality assessment of noise-distorted images based on frequency mapping. IEEE Access. 5:, 23146–23156 (2017). M. Nizami, I. F. Majid, K. Khurshid, in Applied Sciences and Technology (IBCAST), 2017 14th International Bhurban Conference On. Efficient feature selection for blind image quality assessment based on natural scene statistics (IEEEPiscataway, 2017), pp. 318–322. L. Li, Y. Yan, Z. Lu, J. Wu, K. Gu, S. Wang, No-reference quality assessment of deblurred images based on natural scene statistics. IEEE Access. 5:, 2163–2171 (2017). K. Panetta, A. Samani, S. Agaian, A robust no-reference, no-parameter, transform domain image quality metric for evaluating the quality of color images (IEEE, Piscataway, 2018). H. R. Sheikh, A. C. Bovik, L. Cormack, No-reference quality assessment using natural scene statistics: Jpeg2000. IEEE Trans. Image Process.14(11), 1918–1927 (2005). 
W. Xue, X. Mou, L. Zhang, X. Bovik, A. C. Feng, Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process.23(11), 4850–4862 (2014). L. Liu, H. Dong, H. Huang, A. C. Bovik, No-reference image quality assessment in curvelet domain. Signal Process. Image Commun.29(4), 494–505 (2014). D. Ghadiyaram, A. C. Bovik, Perceptual quality prediction on authentically distorted images using a bag of features approach. J. Vis.17(1), 32–32 (2017). E. Siahaan, A. Hanjalic, J. A. Redi, Semantic-aware blind image quality assessment. Signal Process. Image Commun.60:, 237–252 (2018). B. Appina, S. Khan, S. S. Channappayya, No-reference stereoscopic image quality assessment using natural scene statistics. Signal Process. Image Commun.43:, 1–14 (2016). W. Hachicha, M. Kaaniche, A. Beghdadi, F. A. Cheikh, No-reference stereo image quality assessment based on joint wavelet decomposition and statistical models. Signal Process. Image Commun.54:, 107–117 (2017). T. Zhu, L. Karam, A no-reference objective image quality metric based on perceptually weighted local noise. EURASIP J. Image Video Process.2014(1), 5 (2014). M. Shahid, A. Rossholm, B. Lövström, H-J Zepernick, No-reference image and video quality assessment: a classification and review of recent approaches. EURASIP J. Image Video Process.2014(1), 40 (2014). A. K. Moorthy, A. C. Bovik, Blind image quality assessment: from natural scene statistics to perceptual quality. IEEE Trans. Image Process.20(12), 3350–3364 (2011). M. A. Saad, A. C. Bovik, C. Charrier, Blind image quality assessment: a natural scene statistics approach in the DCT domain. IEEE Trans. Image Process.21(8), 3339–3352 (2012). M. A. Saad, A. C. Bovik, C. Charrier, A DCT statistics-based blind image quality index. IEEE Signal Process. Lett.17(6), 583–586 (2010). A. Mittal, A. K. Moorthy, A. C. Bovik, No-reference image quality assessment in the spatial domain. IEEE Trans. 
Image Process.21(12), 4695–4708 (2012). A. Mittal, R. Soundararajan, A. C. Bovik, Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett.20(3), 209–212 (2013). C. Zhang, J. Pan, S. Chen, T. Wang, D. Sun, No reference image quality assessment using sparse feature representation in two dimensions spatial correlation. Neurocomputing. 173:, 462–470 (2016). Y. Li, X. Po, L. -M. Xu, L. Feng, No-reference image quality assessment using statistical characterization in the shearlet domain. Signal Process Image Commun.29(7), 748–759 (2014). L. Liu, B. Liu, H. Huang, A. C. Bovik, No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun.29(8), 856–863 (2014). A. K. Moorthy, A. C. Bovik, A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett.17(5), 513–516 (2010). L. He, D. Tao, X. Li, X. Gao, in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference On. Sparse representation for blind image quality assessment (IEEEPiscataway, 2012), pp. 1146–1153. Y. Lu, F. Xie, T. Liu, Z. Jiang, D. Tao, No reference quality assessment for multiply-distorted images based on an improved bag-of-words model. IEEE Signal Process. Lett.22(10), 1811–1815 (2015). H. R. Sheikh, M. F. Sabir, A. C. Bovik, A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process.15(11), 3440–3451 (2006). E. C. Larson, D. M. Chandler, Most apparent distortion: full-reference image quality assessment and the role of strategy. J. Electron. Imaging. 19(1), 011006–011006 (2010). N. Ponomarenko, L. Jin, O. Ieremeiev, V. Lukin, K. Egiazarian, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, et al, Image database tid2013: Peculiarities, results and perspectives. Signal Process. Image Commun.30:, 57–77 (2015). D. Ghadiyaram, A. C. Bovik, Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. 
Image Process.25(1), 372–387 (2016). There are no acknowledgements. No funding is available for this work. School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan Imran Fareed Nizami & Khawar Khurshid Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan Muhammad Majid Department of Computer Engineering, Bahria University, Islamabad, Pakistan Waleed Manzoor College of Information and Communication Engineering, Sungkyunkwan University, Seoul, South Korea Byeungwoo Jeon Imran Fareed Nizami Khawar Khurshid All authors have contributed equally towards this paper. IFN and MM came up with the research idea for this work. IFN and WM performed the simulations. MM, KK, and BJ analyzed the results. All authors read and approved the final manuscript. Correspondence to Imran Fareed Nizami. Nizami, I., Majid, M., Manzoor, W. et al. Distortion-specific feature selection algorithm for universal blind image quality assessment. J Image Video Proc. 2019, 19 (2019). https://doi.org/10.1186/s13640-018-0392-5 Feature extraction Feature selection
CommonCrawl
\begin{definition}[Definition:Clock Puzzle] A '''clock puzzle''' is a puzzle whose solution is based upon the mechanics of the (traditionally $12$-hour) clock. \end{definition}
ProofWiki
Marta Civil Marta Civil is an American mathematics educator. Her research involves understanding the cultural background of minority schoolchildren, particularly Hispanic and Latina/o students in the Southwestern United States, and using that understanding to promote parent engagement and focus mathematics teaching on students' individual strengths.[1][2][3] She is the Roy F. Graesser Endowed Professor at the University of Arizona, where she holds appointments in the department of mathematics, the department of mathematics education, and the department of teaching, learning, and sociocultural studies.[4] Education and career Civil earned her Ph.D. at the University of Illinois at Urbana–Champaign in 1990. Her dissertation, Doing and Talking about Mathematics: A Study of Preservice Elementary Teachers, was supervised by Peter George Braunfeld.[5] In 2011 she moved from the University of Arizona to the University of North Carolina, to become Frank A. Daniels Distinguished Professor of Mathematics Education,[1] but returned to Arizona in 2014 to become the Graesser Professor.[3] Books Civil is co-editor of the books Transnational and Borderland Studies in Mathematics Education (Routledge, 2011),[6] Latinos/as and Mathematics Education: Research on Learning and Teaching in Classrooms and Communities (Information Age, 2011),[7] Cases for Mathematics Teacher Educators: Facilitating Conversations about Inequities in Mathematics Classrooms (Information Age, 2016),[8] and Access & Equity: Promoting High-Quality Mathematics in Grades 3-5 (National Council of Teachers of Mathematics, 2018).[9] Recognition In 2013 TODOS: Mathematics for All gave Civil their Iris M. Carl Equity and Leadership Award.[10] She is included in a deck of playing cards featuring notable women mathematicians published by the Association of Women in Mathematics.[11] She received the 2021 National Council of Teachers of Mathematics (NCTM) Lifetime Achievement Award.[12] References 1. 
Baptiste, Hope (Fall 2011), "Faculty Spotlight: Marta Civil", Celebrating Diversity: The newsletter of the Alumni Committee on Racial and Ethnic Diversity, University of North Carolina 2. Stringer, Kate, "When Families and Schools Work Together, Students Do Better. New Report Has 5 Ways of Engaging Parents in Their Kids' Education", The 74, retrieved 2019-08-23 3. Javier, Jeffrey (2016-08-03), A Personal Approach to Mathematics: Marta Civil, Mathematics Professor, Dr. Roy F. Graesser Endowed Chair in Mathematics, University of Arizona Alumni Association, retrieved 2019-08-23 4. "Plenary speaker profile", PME-NA40 2018 Annual Conference, retrieved 2019-08-23 5. Marta Civil at the Mathematics Genealogy Project 6. Review of Transnational and Borderland Studies: Brown, Margaret (March 2012), Educational Research, 54 (1): 113–115, doi:10.1080/00131881.2012.658202 7. Review of Latinos/as and Mathematics Education: Matthews, Mary Elizabeth (2013), The Journal of Education, 193 (1): 69–71, doi:10.1177/002205741319300108, JSTOR 24636816 8. Review of Cases for Mathematics Teacher Educators: Joseph, Nicole M.; Jett, Christopher C.; Leonard, Jacqueline (March 2018), Journal for Research in Mathematics Education, 49 (2): 232–236, doi:10.5951/jresematheduc.49.2.0232 9. Reviews of Access & Equity: Mazur, Emily (September 2018), The Mathematics Teacher, 112 (1): 76–79, doi:10.5951/mathteacher.112.1.0076; Quigley, Karen (January–February 2019), Teaching Children Mathematics, 25 (4): 254–255, doi:10.5951/teacchilmath.25.4.0254 10. The TODOS Iris M.
Carl Equity and Leadership Award, TODOS: Mathematics for All, retrieved 2019-08-23 11. "Mathematicians of EvenQuads Deck 1", awm-math.org, retrieved 2022-06-18 12. "2021 Lifetime Achievement Award Recipient Marta Civil", nctm.org, retrieved 3 Dec 2022 External links • Home page • SOE Profile: Marta Civil, University of North Carolina School of Education
Wikipedia
Vol. 1, https://doi.org/10.1364/OPTCON.446952 Distributed multi-parameter sensing based on the Brillouin scattering effect in orbital angular momentum guiding fiber Liwen Sheng,1,2,3 Lin Huang,1,2,* Jisong Yan,1 Shan Qiao,1 Aiguo Zhang,1 Hui Jin,1 Ming Yuan,1 Tianyang Qu,1 and Zhiming Liu1 1Ceyear Technologies Co., Ltd, Qingdao 266555, China 2Science and Technology on Electronic Test & Measurement Laboratory, Qingdao 266555, China 3Xidian University, Xi'an 710071, China *Corresponding author: [email protected]
Liwen Sheng, Lin Huang, Jisong Yan, Shan Qiao, Aiguo Zhang, Hui Jin, Ming Yuan, Tianyang Qu, and Zhiming Liu, "Distributed multi-parameter sensing based on the Brillouin scattering effect in orbital angular momentum guiding fiber," Opt. Continuum 1, 133-142 (2022)
Original Manuscript: November 8, 2021; Revised Manuscript: December 30, 2021; Manuscript Accepted: January 3, 2022
The orbital angular momentum (OAM) guiding fiber is used as a sensing element to measure strain and ambient temperature simultaneously in a classical BOTDR configuration, owing to its higher-order acoustic modes and high stimulated Brillouin threshold. The Brillouin threshold, the Brillouin gain coefficient, and the Brillouin gain spectrum (BGS) of the OAM fiber at 1.5 µm are characterized theoretically and experimentally. Taking advantage of the special acoustic properties of the peaks caused by the hard cladding-core interface in the Brillouin scattering process, distributed multi-parameter sensing (e.g., strain and/or ambient temperature) is verified over a 1-km OAM guiding fiber, with respective strain and temperature errors of 18.2 µɛ and 0.93 °C.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Brillouin scattering, a fundamental nonlinear light-matter interaction between photons and acoustic phonons occurring in any optical fiber material, has been extensively investigated over the past several decades. More recently, it has opened many new engineering application possibilities in beam shaping (frequency/time domain) [1–4], imaging enhancement [5], temperature and/or strain distributed sensing [6–8], small-signal amplification [9], etc.
Owing to its small size, durability, low fabrication cost, and high stability, the Brillouin-based distributed fiber sensing configuration is currently a remarkable candidate for distributed fiber-optic sensing applications such as non-destructive structural health monitoring (SHM) of civil infrastructure [10–13] and human body motion detection [14]. Conventional Brillouin-based distributed fiber-optic sensors, including the Brillouin optical time-domain analyzer (BOTDA) and the Brillouin optical time-domain reflectometer (BOTDR), generally employ standard single-mode fibers (SMFs) such as G652, G655, and G657 fibers, which exhibit a single Brillouin gain peak, as the sensing elements. Most BOTDR and BOTDA systems presented so far are based on the temperature and strain dependence of the Brillouin frequency shift (BFS). The BFS, which is the center frequency of the Brillouin scattering spectrum, can be obtained by fitting the Brillouin gain spectrum (BGS) with a simple analytical expression [15]. However, because the BFS is jointly affected by multi-parameter cross-sensitivity effects [16], separating the combined contributions of strain and ambient temperature has been a great challenge. Many studies have therefore been carried out to address the joint effects of several environmental factors and to quantify each response explicitly. Bao et al. [17] employed a specially developed fiber containing a reference fiber (for temperature compensation) and a sensing fiber to remove the joint effects of temperature and strain disturbances. The main drawback of this method is that strain decoupling is not guaranteed under real ambient conditions. Meanwhile, some groups have studied the sensing performance of Brillouin-based sensors with different hybrid methods. Zou et al.
[18] successfully demonstrated a hybrid method to discriminate ambient crosstalk information by utilizing additional parameters instead of the BFS alone, in both the SMF and a polarization maintaining fiber (PMF). Taki et al. [19] investigated a hybrid Raman-Brillouin sensing system to eliminate the crosstalk caused by the ambient environment. Obviously, the methods mentioned above require rather complicated monitoring systems. Sensors using optical fibers with multiple Brillouin peaks as sensing elements have also been considered as possible solutions. The photonic crystal fiber (PCF), the large-effective-area non-zero dispersion-shifted fiber (LEAF), and specially designed fibers (e.g., M-shaped, G.652.D) were used to recover strain and temperature information simultaneously by measuring the BFSs of multiple Brillouin peaks [20–22]. However, the shortcoming is that the small differences between the strain or temperature sensitivity coefficients of the different Brillouin peaks result in a large amplification factor for the measurement error. Recently, an effective technique for multi-parameter discriminative sensing using multiple optical modes in the few-mode fiber (FMF) has attracted much attention in FMF-based Brillouin sensing configurations [23,24]. In these schemes, mode conversion elements are needed to excite the different optical modes, and each excited optical mode is monitored by another mode launcher. It is evident that the complexity of such Brillouin-based sensing schemes is difficult to reduce. Thanks to the sharp rise and fall of the index profile of the orbital angular momentum (OAM) guiding fiber between the edge of the core and the cladding region, as well as between the center and the edge of the core, there are significant differences in the acoustic properties.
Coupling between the longitudinal and shear acoustic waves is largely enhanced, leading to the generation of higher-order acoustic modes. The OAM guiding fiber with higher-order acoustic modes, which is also a kind of FMF, is considered an ideal alternative for distributed long-range sensing, since it can achieve higher measurement accuracy [25,26]. The corresponding studies indicated that the OAM guiding fiber can be used not only as a sensing medium in BOTDA but also for optical deep learning [26,27]. However, to our knowledge, no multi-parameter sensor based on spontaneous Brillouin scattering (SpBS) in an OAM guiding fiber, nor its Brillouin characterization, has been reported so far. Here, a simplified novel Brillouin-based distributed sensor is proposed and experimentally demonstrated, based on the SpBS effect in an OAM guiding fiber. Different from the existing FMF-based sensing methods [28,29], our proposed sensor uses both the linearly polarized (LP01) optical mode and higher-order acoustic modes, resulting in multiple Brillouin peaks that respond differently to the sensing parameters. The BFSs of the multiple Brillouin peaks in the measured BGS are retrieved to evaluate the strain and temperature coefficients, respectively. The corresponding results demonstrate that the presented method can discriminate temperature and strain information simultaneously in a relatively uncomplicated single-end scheme. The unequal Brillouin gain coefficients of the first two Brillouin peaks enable the discrimination measurement over a 1-km OAM guiding fiber, with measurement accuracies of 18.2 µɛ and 0.93 °C for strain and temperature, respectively. Besides, this paper reports the first measurement of the Brillouin characterization of the OAM guiding fiber operating in the 1.55 µm wavelength region.
Stimulated Brillouin scattering (SBS) is observed from a fiber under test (FUT) merely 1 km in length when a continuous-wave (CW) light power of 307.1 mW is launched into the FUT. From the measured Brillouin threshold value, the Brillouin gain coefficient of the OAM guiding fiber is estimated to be 5.54 × 10−12 m/W, about 7.9 times lower than that of fused silica fiber (about 4.4 × 10−11 m/W). 2. Theory The change of the BFS $\Delta {\nu _\textrm{B}}$ related to the changes in strain $\Delta \varepsilon$ and ambient temperature $\Delta T$ can be written as: (1)$$\Delta {\nu _\textrm{B}} = {C_\mathrm{\varepsilon }}\Delta \varepsilon + {C_\textrm{T}}\Delta T$$ where ${C_\mathrm{\varepsilon }}$ denotes the Brillouin strain coefficient, and ${C_\textrm{T}}$ the Brillouin temperature coefficient. In order to discriminate strain and temperature information at the same time, the OAM guiding fiber should provide at least two BFSs with different Brillouin coefficients. Let $\Delta {\nu _{\textrm{B,}m}}$, $m \in \{{\textrm{1,2}, \ldots ,n} \}$, denote the BFS of the $m$-th-order acoustic mode. In addition, we assume that at least two acoustic modes ($m \ge 2$) are used for the discriminative measurement, with strain and temperature coefficients ${C_{\mathrm{\varepsilon ,}m}}$ and ${C_{\textrm{T,}m}}$. The following matrix equation can then be derived from Eq. (1): (2)$${\textbf v} = {\textbf{CB}}$$ where ${\textbf v} = {[\Delta {\nu _{\textrm{B,1}}},\Delta {\nu _{\textrm{B,2}}}, \ldots ,\Delta {\nu _{\textrm{B,}n}}]^\textrm{T}}$, and ${\textbf B} = {[\Delta \varepsilon ,\Delta T]^\textrm{T}}$.
${\textbf C}$ represents an $n \times 2$ coefficient matrix, which can be written as: (3)$${\textbf C} = \left( {\begin{array}{cc} {C_{\mathrm{\varepsilon ,1}}} & {C_{\textrm{T,1}}}\\ \vdots & \vdots \\ {C_{\mathrm{\varepsilon ,}n}} & {C_{\textrm{T,}n}} \end{array}} \right).$$ Consequently, the method with at least two acoustic modes in the OAM guiding fiber based on the SpBS effect is able to realize multi-parameter sensing over the whole fiber link through the demodulation of the BFSs. Jin et al. [30] studied the principle and established an analysis model to evaluate the strain and temperature measurement errors, given by: (4)$$\delta \varepsilon = \frac{{|{{C_{T\textrm{,1}}}} |\delta {\nu _{B,m}} + |{{C_{T\textrm{,}m}}} |\delta {\nu _{B,\textrm{1}}}}}{{|{{C_{T\textrm{,1}}}{C_{\mathrm{\varepsilon ,}m}} - {C_{T\textrm{,}m}}{C_{\mathrm{\varepsilon ,1}}}} |}}$$ (5)$$\delta T = \frac{{|{{C_{\mathrm{\varepsilon ,}m}}} |\delta {\nu _{B,\textrm{1}}} + |{{C_{\mathrm{\varepsilon ,1}}}} |\delta {\nu _{B,m}}}}{{|{{C_{T\textrm{,1}}}{C_{\mathrm{\varepsilon ,}m}} - {C_{T\textrm{,}m}}{C_{\mathrm{\varepsilon ,1}}}} |}}.$$ Meanwhile, based on the small-signal steady-state theory of SBS, the Brillouin gain coefficient can be estimated using the equation below [31]: (6)$${g_\textrm{B}}K({{{P_{\textrm{th}}}} / {{A_{\textrm{eff}}}}}){L_{\textrm{eff}}} \cong \textrm{21}$$ where ${g_\textrm{B}}$ is the measured Brillouin gain coefficient, and $K$ is a constant which depends on whether the polarization state of the OAM guiding fiber is kept constant by the nonlinear interaction ($K = \textrm{1}$) or not ($K = \textrm{0}\textrm{.5}$, our case). ${P_{\textrm{th}}}$ is the pump power corresponding to the measured Brillouin threshold.
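With two peaks, Eq. (2) reduces to a 2×2 linear system that can be inverted directly. The following pure-Python sketch (not the paper's code; the coefficient values are illustrative placeholders of a plausible order of magnitude) shows the forward model and the inversion:

```python
# Sketch (not from the paper): recovering strain and temperature from the BFS
# shifts of two Brillouin peaks by inverting Eq. (2), v = C B.
# The coefficient values below are illustrative placeholders only.

def discriminate(dv1, dv2, C):
    """Solve [dv1, dv2]^T = C [d_eps, d_T]^T for a 2x2 coefficient matrix C."""
    (ce1, ct1), (ce2, ct2) = C
    det = ce1 * ct2 - ct1 * ce2
    if det == 0.0:
        raise ValueError("singular coefficient matrix: peaks are not independent")
    d_eps = (ct2 * dv1 - ct1 * dv2) / det
    d_T = (-ce2 * dv1 + ce1 * dv2) / det
    return d_eps, d_T

# Illustrative coefficients (rows = peaks; Hz per microstrain, Hz per deg C).
C = [(40.0e3, 0.75e6),
     (35.0e3, 0.89e6)]

# Forward-simulate the BFS shifts for a known state, then invert.
d_eps_true, d_T_true = 200.0, 10.0            # 200 microstrain, 10 deg C
dv1 = C[0][0] * d_eps_true + C[0][1] * d_T_true
dv2 = C[1][0] * d_eps_true + C[1][1] * d_T_true
d_eps, d_T = discriminate(dv1, dv2, C)
print(d_eps, d_T)                             # recovers 200.0 and 10.0
```

The determinant here is exactly the denominator of Eqs. (4) and (5): the closer the two peaks' coefficient ratios are, the smaller it becomes and the more the BFS measurement noise is amplified.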
Here, ${A_{\textrm{eff}}}$ and ${L_{\textrm{eff}}}$ are the effective cross-sectional area of the fundamental mode and the effective interaction length, defined as ${L_{\textrm{eff}}} = {\alpha ^{ - \textrm{1}}}[{\textrm{1} - \textrm{exp} ( - \alpha L)} ]$, where $\alpha$ is the transmission loss and $L$ is the fiber length, respectively. Although only the LP01 mode is used in our proposed method for the discrimination of temperature and strain sensing information, it should be noted that the couplings between the several guided optical modes and the higher-order acoustic modes lead to the generation of multiple Brillouin peaks in the OAM guiding fiber. For the sensing fiber, the radii of the fiber cladding and core are 62.5 µm and 3 µm, respectively. Figure 1(a) highlights the graded-index profile of the OAM guiding fiber, measured using a commercial ellipsometer (FPINT-IFA100, Felles Photonic Instrument Ltd.). The measured result is similar to that reported previously, and the refractive index profile exhibits an inverse-parabolic graded-index shape [26]. A finite-element method (FEM) is adopted to simulate the supported LP modes in the OAM guiding fiber. As depicted in Fig. 1(b), the OAM guiding fiber with the refractive index profile displayed in Fig. 1(a) is found to support three LP optical modes: LP01, LP11, and LP21. When the incident laser is launched from a standard SMF into the OAM guiding fiber with perfect alignment, the coupling efficiency between the fundamental mode of the SMF and the higher-order LP modes of the OAM guiding fiber is minimized. Furthermore, most optical fiber components applied in sensing schemes only accommodate LP01. For most multi-parameter Brillouin-based sensing systems, therefore, it is sufficient to study the opto-acoustic characteristics of the OAM fiber corresponding to the LP01 fundamental mode. Fig. 1. The characteristics of the OAM guiding fiber.
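Equation (6) can be checked numerically against the values reported later in the paper ($P_{\textrm{th}}$ = 307.1 mW, $A_{\textrm{eff}}$ = 28 µm², $L_{\textrm{eff}}$ = 691.3 m). A minimal sketch:

```python
# Numerical check of Eq. (6), g_B * K * (P_th / A_eff) * L_eff ~= 21, using
# the values reported in the paper for the 1-km OAM guiding fiber.

K = 0.5          # polarization factor (polarization not maintained here)
P_th = 0.3071    # measured Brillouin threshold, W
A_eff = 28e-12   # effective area of the fundamental mode, m^2
L_eff = 691.3    # effective interaction length, m

g_B = 21.0 * A_eff / (K * P_th * L_eff)
print(g_B)       # ~5.5e-12 m/W, versus ~4.4e-11 m/W for fused silica
```

This reproduces the 5.54 × 10−12 m/W gain coefficient quoted in Section 4 for the OAM guiding fiber.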
(a) Theoretical refractive index profile and experimental (relative to silica index) refractive index profile of the OAM guiding fiber. (b) Simulation results of electric fields of guided LP modes at 1.55 µm in the OAM guiding fiber. 3. Experimental setup The experimental arrangement used to characterize the Brillouin properties of the OAM guiding fiber is built as shown in Fig. 2(a). A CW pump light from an external cavity tunable diode laser (ECDL) operating at 1.55 µm is amplified by an erbium-doped fiber amplifier (EDFA), and then launched into the measured OAM guiding fiber through an optical circulator (CIR). The pump light that is backscattered from the FUT is captured at the third port of the CIR. After passing through the 50/50 coupler (OC2), the backward wave is split into two equal branches. The power of light backscattered from the 1-km FUT is monitored using an optical power meter (P2: Thorlabs, PM100D). An optical spectrum analyzer (OSA: Yokogawa, AQ6370D) with a resolution of 20 pm is applied to monitor the spectral changes of the backscattered signal propagating through the OAM guiding fiber. Meanwhile, another identical optical power meter (P1) is used to measure the CW pump laser power. In order to reduce the influence of reflection from the OAM fiber end on the Brillouin characterization, the fiber end is immersed in index-matching oil. Fig. 2. Schematic diagram of experimental setup. (a) Experimental apparatus used for stimulated Brillouin scattering threshold measurement in the OAM guiding fiber. (b) Experimental setup of the OAM guiding fiber based on BOTDR configuration.
ECDL: external cavity diode laser; OC: optical coupler; LO: local oscillator; FUT: fiber under test; BPF: band-pass filter; OSA: optical spectrum analyzer; ISO: isolator; PS: polarization scrambler; PD: photo-diode; VOA: variable optical attenuator; CIR: circulator; DAQ: data acquisition card; FBG: fiber Bragg grating; LNA: low-noise amplifier; PC: polarization controller; EDFA: erbium-doped fiber amplifier; EOM: electro-optic modulator. In order to demonstrate the ability to distinguish multi-parameter sensing information simultaneously, a BOTDR measurement setup is built as illustrated in Fig. 2(b). A fixed-wavelength (1549.983 nm) CW light from a 10 dBm, 200 kHz linewidth ECDL is divided into two unequal parts by a 90/10 commercial optical coupler (OC1). The 90% part (the upper branch) is chopped to generate beam2 (the backscattered Brillouin sensing signal): the probe light is shaped into a Gaussian optical pulse by a customized electro-optic modulator (EOM: iXblue, MXERLN-20) operating in the carrier-suppressed regime, driven by a radio-frequency pulse generator. After being amplified by an erbium-doped fiber amplifier (EDFA1), the optical pulse is filtered by a 3.5 GHz fiber Bragg grating (FBG1) to suppress the parasitic light spontaneously emitted by EDFA1. The modulated pulse is then launched into the measured OAM guiding fiber through an optical circulator (CIR). The remaining 10%, used as beam1 (the reference signal), is scrambled by a polarization scrambler (PS) to obtain high signal-to-noise ratio (SNR) beat information [32]. The Brillouin beat signals carrying the strain and temperature sensing information are generated through a 3-dB OC2 and detected using a 13.5 GHz bandwidth photodetector (PD). A low-noise amplifier (LNA) is employed to enhance the weak beat sensing signals.
The BFSs can be obtained by changing the output frequency of the local oscillator (LO); the beat signal is eventually sampled by a data acquisition (DAQ) card to obtain the demodulated traces along the measured FUT under different conditions. 4. Experimental results and discussion The transmission loss of the OAM guiding fiber is measured first, since it defines ${L_{\textrm{eff}}}$ for the Brillouin scattering process. Figure 3 plots the power loss versus the sensing range, which indicates that the measured sensing link is composed of 1 km of optical fiber. Besides, the inset in Fig. 3 highlights that the propagation loss at 1.55 µm is about 0.79 dB/km, which is about 4.38 times larger than that of fused silica fiber. Fig. 3. Measurement of the OAM guiding fiber power loss as a function of sensing distance at 1.55 µm. The spectral changes of the backscattered wave propagating through the 1-km long OAM guiding fiber (${L_{\textrm{eff}}}$ is 691.3 m), as collected by the circulator, are illustrated in Fig. 4 for pump powers of 179.2 mW and 307.1 mW. As displayed in Fig. 4, an anti-Stokes Brillouin component and a Stokes Brillouin component, at a separation of about 0.078 nm from the pump, can be observed in the shorter-wavelength and longer-wavelength regimes, respectively. Besides, a significant jump of the Brillouin-shifted signal measured on the OSA can be seen on the longer-wavelength side. Figure 5 shows the power level of the backscattered Stokes from the OAM guiding fiber as a function of the injected pump power. It can be seen that, as the launched pump power increases, once the Brillouin threshold (307.1 mW) is reached a sharp increase of the backscattered Stokes power is observed [33]. Using Eq. (6), the OAM guiding fiber length indicated in Fig. 3, the Brillouin threshold suggested in Fig.
5, $K = 0.5$ and ${A_{\textrm{eff}}} = 28$ µm², the peak Brillouin gain coefficient of the OAM guiding fiber is estimated to be 5.54 × 10⁻¹² m/W. Clearly, the OAM guiding fiber is not suitable as a sensing medium in a BOTDA configuration because of its low Brillouin gain coefficient and high Brillouin threshold. It should be noted that the experimental results in Figs. 3–5 are obtained with the apparatus shown in Fig. 2(a). Fig. 4. Optical spectra in the backward direction collected by the circulator for different pump power levels injected into the OAM guiding fiber. Fig. 5. Power of the backscattered Stokes wave from the 1-km-long OAM guiding fiber versus launched pump power. The block diagram of the experimental setup is displayed in Fig. 2(b). An overlay of the BGS profile from the BOTDR trace on the experimental results, highlighted in Fig. 6, indicates that the multi-peak BGS is an intrinsic characteristic of the OAM guiding fiber. As depicted in Fig. 6, the BFSs of the three Brillouin peaks (the 1st, 2nd, and 3rd Brillouin peaks) are 9.589 GHz, 9.720 GHz, and 9.887 GHz, respectively. The measured Brillouin spectrum (blue circles) agrees well with the multi-peak Lorentz fitting curve (solid red line). The multi-peak shape measured in the OAM guiding fiber indicates that the three Brillouin peaks arise from coupling between the LP01 optical mode of the FUT and higher-order acoustic modes, rather than from mode leakage. Based on the facts stated in Section 2, estimating the BFSs of any two of the first three peaks in the measured BGS makes it possible to overcome the cross-sensitivity caused by the different responses of the acoustic modes to external strain and ambient temperature.
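The gain-coefficient estimate quoted above follows from the single-pass Brillouin threshold relation of Eq. (6), g_B·K·(P_th/A_eff)·L_eff ≅ 21. A sketch that inverts this relation for the measured quantities (function and variable names are ours):

```python
# Estimate the peak Brillouin gain coefficient from the measured
# threshold, effective area, effective length and polarization factor K.
def brillouin_gain(p_th_w, a_eff_m2, l_eff_m, k, threshold_const=21.0):
    """Invert g_B * K * (P_th / A_eff) * L_eff ~ threshold_const for g_B."""
    return threshold_const * a_eff_m2 / (k * p_th_w * l_eff_m)

g_b = brillouin_gain(p_th_w=307.1e-3,   # Brillouin threshold, W
                     a_eff_m2=28e-12,   # effective area, 28 um^2
                     l_eff_m=691.3,     # effective length, m
                     k=0.5)             # polarization factor
print(f"g_B = {g_b:.2e} m/W")           # ~5.54e-12 m/W, as quoted in the text
```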
Hence, the 1st and 2nd Brillouin peaks are selected as the main sensing peaks on account of their high relative intensities, which reduces the influence of noise and enhances the measurement accuracy. Fig. 6. Multiple Brillouin peaks in the OAM guiding fiber. Blue circles: experimental data points; solid red line: multi-peak Lorentz fitting curve; short dash lines: Lorentz profiles of the 1st, 2nd and 3rd Brillouin peaks. The strain and temperature coefficients are then measured by fixing one segment of the FUT to a motor-driven fiber stretching device (Zolix MC600) to apply strain to the OAM fiber, and by immersing another section of the OAM guiding fiber at the far end in a temperature-controlled water-bath pot (JOANLAB, HH-2) to change the ambient temperature. Nine strain values between 0 µɛ and 800 µɛ are monitored in steps of 100 µɛ. The BFS changes at different ambient temperatures are measured in the range of 25 °C to 95 °C with temperature steps of 10 °C. Since the BFS responses of the different acoustic modes to ambient temperature and strain are unequal, the different sensing parameters along the FUT can be discriminated. The measured BFSs of the first two Brillouin peaks as functions of strain and temperature are given in Fig. 7. From the experimental data points, the strain coefficients of the 1st and 2nd Brillouin peaks are determined to be 40.17 kHz·µɛ⁻¹ and 35.27 kHz·µɛ⁻¹, respectively, and the corresponding temperature coefficients are calculated to be 0.752 MHz·°C⁻¹ and 0.886 MHz·°C⁻¹, respectively. As expected, these results show that distributed multi-parameter sensing information can be discriminated simultaneously. The measurement errors for temperature and strain are then evaluated through Eq. (4) and Eq. (5). Using the 1st and 2nd Brillouin peaks in Fig. 7, the respective strain and temperature errors are calculated to be 18.2 µɛ and 0.93 °C. Fig.
7. Ambient temperature and strain measurement results of the OAM sensing fiber. (a) Measured BFS as a function of ambient temperature for the 1st and 2nd Brillouin peaks, respectively. (b) Measured BFS as a function of strain for the 1st and 2nd Brillouin peaks, respectively. In summary, the Brillouin characteristics of the OAM guiding fiber were studied experimentally, and the Brillouin gain coefficient measured from the threshold of the single-pass backscattered Brillouin effect was 5.54 × 10⁻¹² m/W, which indicated that the OAM guiding fiber may not be suitable for a BOTDA scheme. Therefore, we proposed and experimentally demonstrated a simple BOTDR sensor based on an OAM guiding fiber for simultaneous demodulation of distributed ambient-temperature and strain sensing information. The higher-order acoustic modes responsible for the generation of multiple Brillouin peaks within the FUT resulted in clearly different strain and temperature coefficients. Consequently, distributed Brillouin sensing of strain and temperature was demonstrated over a 1-km OAM guiding fiber, with strain and temperature errors of 18.2 µɛ and 0.93 °C, respectively. With the ability to monitor both strain and temperature simultaneously, such an OAM guiding fiber could eventually be a promising solution for detecting more than two parameters in a given environment. Funding. Special Support for Post-doc Creative Funding in Shandong Province (202103076); Taishan Series Talent Project (2017TSCYCX-05); Science and Technology on Electronic Test and Measurement Laboratory Foundation (JWD200305, KDW03012003); Qingdao Postdoctoral Applied Research Project (20266153); National Natural Science Foundation of China (61605034). Disclosures. The authors declare no conflicts of interest. Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. References 1. G. S. Wiederhecker, P. Dainese, and T. P. M.
Alegre, "Brillouin optomechanics in nanophotonic structures," APL Photonics 4(7), 071101 (2019). 2. Z. X. Bai, R. J. Williams, O. Kitzler, S. Sarang, D. J. Spence, Y. L. Wang, Z. W. Lu, and R. P. Mildren, "Diamond Brillouin laser in the visible," APL Photonics 5(3), 031301 (2020). 3. Z. X. Bai, H. Yuan, Z. H. Liu, P. B. Xu, Q. L. Gao, R. J. Williams, O. Kitzler, R. P. Mildren, Y. L. Wang, and Z. W. Lu, "Stimulated Brillouin scattering materials, experimental design and applications: a review," Opt. Mater. 75, 626–645 (2018). 4. Z. X. Bai, Y. L. Wang, Z. W. Lu, H. Yuan, Z. X. Zheng, S. S. Li, Y. Chen, Z. H. Liu, C. Cui, H. L. Wang, and R. Liu, "High compact, high quality single longitudinal mode hundred picoseconds laser based on stimulated Brillouin scattering pulse compression," Appl. Sci. 6(1), 29 (2016). 5. L. W. Sheng, D. X. Ba, and Z. W. Lu, "Imaging enhancement based on stimulated Brillouin amplification in optical fiber," Opt. Express 27(8), 10974–10980 (2019). 6. X. Y. Bao and L. Chen, "Recent progress in distributed fiber optic sensors," Sensors 12(7), 8601–8639 (2012). 7. X. Y. Bao and L. Chen, "Recent progress in Brillouin scattering based fiber sensors," Sensors 11(4), 4152–4187 (2011). 8. P. B. Xu, D. X. Ba, W. M. He, H. P. Hu, and Y. K. Dong, "Distributed Brillouin optical fiber temperature and strain sensing at a high temperature up to 1000 °C by using an annealed gold-coated fiber," Opt. Express 26(23), 29724–29734 (2018). 9. L. W. Sheng, D. X. Ba, and Z. W. Lu, "Low-noise and high-gain of stimulated Brillouin amplification via orbital angular momentum mode division filtering," Appl. Opt. 58(1), 147–151 (2019). 10. D. W. Zhou, Y. K. Dong, B. Z. Wang, C. Pang, D. X. Ba, H. Y. Zhang, Z. W. Lu, H. Li, and X. Y. Bao, "Single-shot BOTDA based on an optical chirp chain probe wave for distributed ultrafast measurement," Light: Sci. Appl.
7(1), 32 (2018). 11. B. Z. Wang, B. H. Fan, D. W. Zhou, C. Pang, Y. Li, D. X. Ba, and Y. K. Dong, "High-performance optical chirp chain BOTDA by using a pattern recognition algorithm and the differential pulse-width pair technique," Photonics Res. 7(6), 652–658 (2019). 12. L. W. Sheng, L. G. Li, L. J. Hu, M. Yuan, J. P. Lang, J. G. Wang, P. Li, Z. Y. Bi, J. S. Yan, and Z. M. Liu, "Distributed fiberoptic sensor for simultaneous temperature and strain monitoring based on Brillouin scattering effect in polyimide-coated fibers," Int. J. Opt. 2020, 1–5 (2020). 13. D. X. Ba, Y. Li, J. L. Yan, X. P. Zhang, and Y. K. Dong, "Phase-coded Brillouin optical correlation domain analysis with 2-mm resolution based on phase-shift keying," Opt. Express 27(25), 36197–36205 (2019). 14. J. J. Guo, M. X. Niu, and C. X. Yang, "Highly flexible and stretchable optical strain sensing for human motion detection," Optica 4(10), 1285–1288 (2017). 15. M. Alem, M. A. Soto, M. Tur, and L. Thévenaz, "Analytical expression and experimental validation of the Brillouin gain spectral broadening at any sensing spatial resolution," Proc. SPIE 10323, 103239J (2017). 16. A. Motil, A. Bergman, and M. Tur, "State of the art of Brillouin fiber-optic distributed sensing," Opt. Laser Technol. 78, 81–103 (2016). 17. X. Y. Bao, D. J. Webb, and D. A. Jackson, "32-km distributed temperature sensor based on Brillouin loss in an optical fiber," Opt. Lett. 18(18), 1561–1563 (1993). 18. W. W. Zou, Z. Y. He, and K. Hotate, "Complete discrimination of strain and temperature using Brillouin frequency shift and birefringence in a polarization-maintaining fiber," Opt. Express 17(3), 1248–1255 (2009). 19. M. Taki, Y. S. Muanenda, I. Toccafondo, A. Signorini, T. Nannipieri, and D. Pasquale, "Optimized hybrid Raman/fast-BOTDA sensor for temperature and strain measurement in large infrastructures," IEEE Sens. J.
14(12), 4297–4304 (2014). 20. L. F. Zou, X. Y. Bao, V. S. Afshar, and L. Chen, "Dependence of the Brillouin frequency shift on strain and temperature in a photonic crystal fiber," Opt. Lett. 29(13), 1485–1487 (2004). 21. L. W. Sheng, L. G. Li, L. Liu, L. J. Hu, M. Yuan, and J. S. Yan, "Study on the simultaneous distributed measurement of temperature and strain based on Brillouin scattering in dispersion-shifted fiber," OSA Continuum 3(8), 2078–2085 (2020). 22. Y. Dong, G. B. Ren, H. Xiao, Y. X. Gao, H. S. Li, S. Y. Xiao, and S. S. Jian, "Simultaneous temperature and strain sensing based on M-shaped single mode fiber," IEEE Photonics Technol. Lett. 29(22), 1955–1958 (2017). 23. A. Li, Y. F. Wang, J. Fang, M. J. Li, B. Y. Kim, and W. Shieh, "Few-mode fiber multi-parameter sensor with distributed temperature and strain discrimination," Opt. Lett. 40(7), 1488–1491 (2015). 24. Y. Weng, E. Ip, Z. Q. Pan, and T. Wang, "Single-end simultaneous temperature and strain sensing techniques based on Brillouin optical time domain reflectometry in few-mode fibers," Opt. Express 23(7), 9024–9039 (2015). 25. N. Bozinovic, S. Golowich, P. Kristensen, and S. Ramachandran, "Control of orbital angular momentum of light with optical fibers," Opt. Lett. 37(13), 2451–2453 (2012). 26. Y. P. Xu, M. Q. Ren, Y. Lu, P. Lu, P. Lu, X. Y. Bao, L. X. Wang, Y. Messaddeq, and S. Larochelle, "Multi-parameter sensor based on stimulated Brillouin scattering in inverse-parabolic graded-index fiber," Opt. Lett. 41(6), 1138–1141 (2016). 27. S. Sunada, K. Kanno, and A. Uchida, "Using multidimensional speckle dynamics for high-speed, large-scale, parallel photonic computing," Opt. Express 28(21), 30349–30361 (2020). 28. Y. H. Kim and K. Y. Song, "Optical time-domain reflectometry based on a Brillouin dynamic grating in an elliptical-core two-mode fiber," Opt. Lett. 42(15), 3036–3039 (2017).
29. J. Fang, G. Milione, J. Stone, G. Z. Peng, M. J. Li, E. Ip, Y. W. Li, P. N. Ji, Y. K. Huang, M. F. Huang, S. Murakami, W. Shieh, and T. Wang, "Multi-parameter distributed fiber sensing with higher-order optical and acoustic modes," Opt. Lett. 44(5), 1096 (2019). 30. W. Jin, W. C. Michie, G. Thursby, M. Konstantaki, and B. Culshaw, "Simultaneous measurement of strain and temperature: error analysis," Opt. Eng. 36(2), 598–609 (1997). 31. E. P. Ippen and R. H. Stolen, "Stimulated Brillouin scattering in optical fibers," Appl. Phys. Lett. 21(11), 539–541 (1972). 32. Q. Bai, B. Xue, H. Gu, D. Wang, Y. Wang, M. J. Zhang, B. Q. Jin, and Y. C. Wang, "Enhancing the SNR of BOTDR by gain-switched modulation," IEEE Photon. Technol. Lett. 31(4), 283–286 (2019). 33. K. S. Abedin, "Observation of strong stimulated Brillouin scattering in single-mode As2Se3 chalcogenide fiber," Opt. Express 13(25), 10266–10271 (2005). Equations referenced in the text: (1) $\Delta \nu _B = C_\varepsilon \Delta \varepsilon + C_T \Delta T$; (2) $v = CB$; (3) $C = \bigl(\begin{smallmatrix} C_{\varepsilon ,1} & C_{T,1} \\ \vdots & \vdots \\ C_{\varepsilon ,n} & C_{T,n} \end{smallmatrix}\bigr)$; (4) $\delta \varepsilon = (|C_{T,1}|\,\delta \nu _{B,m} + |C_{T,m}|\,\delta \nu _{B,1}) / |C_{T,1} C_{\varepsilon ,m} - C_{T,m} C_{\varepsilon ,1}|$; (5) $\delta T = (|C_{\varepsilon ,m}|\,\delta \nu _{B,1} + |C_{\varepsilon ,1}|\,\delta \nu _{B,m}) / |C_{T,1} C_{\varepsilon ,m} - C_{T,m} C_{\varepsilon ,1}|$; (6) $g_B K (P_{\textrm{th}} / A_{\textrm{eff}}) L_{\textrm{eff}} \cong 21$.
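The two-peak discrimination used in Section 4 amounts to inverting the coefficient-matrix relation of Eqs. (1)–(3) and propagating the BFS uncertainty through Eqs. (4)–(5). A minimal numerical sketch with the measured coefficients (the 100 kHz BFS uncertainty below is an assumed illustrative value, not a figure from the experiment):

```python
# Invert the two-peak BFS response  dnu_i = C_eps_i * d_eps + C_T_i * d_T
# and propagate a BFS uncertainty through the worst-case error formulas.
C_EPS = (40.17e3, 35.27e3)   # strain coefficients of peaks 1, 2 (Hz per microstrain)
C_T = (0.752e6, 0.886e6)     # temperature coefficients of peaks 1, 2 (Hz per degC)
DET = C_T[0] * C_EPS[1] - C_T[1] * C_EPS[0]

def discriminate(dnu1, dnu2):
    """Solve the 2x2 system for the strain and temperature changes."""
    d_eps = (C_T[0] * dnu2 - C_T[1] * dnu1) / DET
    d_temp = (C_EPS[1] * dnu1 - C_EPS[0] * dnu2) / DET
    return d_eps, d_temp

def worst_case_errors(sigma1, sigma2):
    """Eqs. (4)-(5): worst-case strain/temperature errors for BFS errors sigma_i."""
    d_eps_err = (abs(C_T[0]) * sigma2 + abs(C_T[1]) * sigma1) / abs(DET)
    d_temp_err = (abs(C_EPS[1]) * sigma1 + abs(C_EPS[0]) * sigma2) / abs(DET)
    return d_eps_err, d_temp_err

# Round trip: simulate 300 microstrain together with a 20 degC rise.
dnu1 = C_EPS[0] * 300 + C_T[0] * 20
dnu2 = C_EPS[1] * 300 + C_T[1] * 20
print(discriminate(dnu1, dnu2))          # recovers ~(300.0, 20.0)
print(worst_case_errors(100e3, 100e3))   # errors for an assumed 100 kHz BFS uncertainty
```

With equal 100 kHz uncertainties on both peaks this yields roughly 18 µɛ and 0.8 °C, the same order of magnitude as the 18.2 µɛ and 0.93 °C reported in the text.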
Minkowski's inequality for the AB-fractional integral operator Hasib Khan1,2, Thabet Abdeljawad3, Cemil Tunç4, Abdulwasea Alkhazzan5 & Aziz Khan6 Journal of Inequalities and Applications volume 2019, Article number: 96 (2019) Recently, AB-fractional calculus, introduced by Atangana and Baleanu, has attracted a large number of scientists in different scientific fields to the exploration of diverse topics. An interesting aspect is the generalization of classical inequalities via AB-fractional integral operators. In this paper, we aim to generalize the Minkowski inequality using the AB-fractional integral operator. Nowadays fractional calculus plays an important role in diverse scientific fields owing to its many applications to dynamical problems in signal processing, hydrodynamics, fluid dynamics, viscoelastic theory, biology, control theory, image processing, computer networking, and other areas [1,2,3,4,5]. A large number of scientists have worked on generalizations of existing results, including theorems, definitions, and models. The generalization of classical inequalities by means of fractional-order integral operators is considered an interesting subject area. For instance, Agarwal et al. [6] recently proved Hermite–Hadamard-type inequalities by using generalized k-fractional integrals. Aldhaifallah et al. [7] used the (k,s)-fractional integral operator to generalize inequalities for a class of n positive functions. Set et al. [8] studied Hermite–Hadamard-type inequalities for a generalized fractional integral operator for functions whose absolute values of derivatives are convex. Khan et al. [9] produced the Minkowski inequality by using the Hahn integral operator. On the other hand, noninteger-order calculus, usually referred to as fractional calculus, is used to generalize integrals and derivatives and, in particular, inequalities involving integrals.
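For context, the classical reverse Minkowski inequality, the integer-order prototype of the main result in Sect. 2, reads as follows (a standard formulation; its constant matches the factor $\mathcal{A}$ used below):

```latex
% Classical reverse Minkowski inequality: if u, v are positive on [a,b],
% p >= 1, and 0 < \alpha \le u(t)/v(t) \le \theta for all t in [a,b], then
\begin{equation*}
\biggl( \int_a^b u^p(t)\,dt \biggr)^{\frac{1}{p}}
+ \biggl( \int_a^b v^p(t)\,dt \biggr)^{\frac{1}{p}}
\leq \frac{\theta(1+\alpha)+(\theta+1)}{(1+\alpha)(\theta+1)}
\biggl( \int_a^b \bigl(u(t)+v(t)\bigr)^p\,dt \biggr)^{\frac{1}{p}}.
\end{equation*}
```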
Recently, Baleanu and Fernandez [10] introduced a new formula for fractional derivatives and integrals by using the Mittag-Leffler kernel. More theoretical concepts regarding fractional operators with Mittag-Leffler kernels (Atangana–Baleanu operators) and the higher-order case have been discussed in [11, 12], whereas the generalization to generalized Mittag-Leffler kernels, introduced to gain a semigroup property, has recently been initiated in [13, 14]. Khan [15] studied inequalities for a class of n functions by means of Saigo fractional calculus. Jarad et al. [16] presented a Gronwall-type inequality for the analysis of the fractional-order Atangana–Baleanu differential equation, and a similar inequality for generalized fractional derivatives was given in [17]. Shuang and Qi [18] proved some Hermite–Hadamard-type inequalities for a class of s-convex functions and studied special means. Mehrez and Agarwal [19] produced new integral inequalities by means of classical Hermite–Hadamard inequalities and obtained particular cases of their results with applications to special means. Park et al. [20] investigated new generalized inequalities, which were then utilized for stability analysis. Sarikaya et al. [21] established fractional integral inequalities generalizing the classical results by using the local fractional approach. Integral inequalities with Mittag-Leffler functions have been studied as generalizations of the classical inequalities. For instance, Farid et al. [22] generalized several classical inequalities using an extended Mittag-Leffler function and evaluated particular cases of their results. More related work can be found in [23,24,25]. In this paper, we use the AB-fractional integral operator to generalize the classical Minkowski inequality. Our results are more general and applicable than those in the classical case.
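Before the formal preliminaries, it may help to see the AB-fractional integral evaluated numerically. The sketch below is an illustration only: it approximates the operator defined in the next section by a midpoint rule (after the substitution $w=(t-s)^{\nu}$, which removes the weak singularity of the kernel) and checks the reverse-Minkowski bound of Sect. 2 for one sample pair of functions; the normalization $\mathbb{B}(\nu)\equiv 1$ is an assumed choice permitted by $\mathbb{B}(0)=\mathbb{B}(1)=1$.

```python
import math

def ab_integral(f, a, t, nu, n=20000, b_nu=1.0):
    """Approximate the AB-fractional integral
    (1-nu)/B(nu)*f(t) + nu/(B(nu)*Gamma(nu)) * int_a^t f(s)*(t-s)**(nu-1) ds.
    The kernel integral is evaluated via w = (t-s)**nu, which maps it to
    (1/nu) * int_0^{(t-a)**nu} f(t - w**(1/nu)) dw (a smooth integrand)."""
    upper = (t - a) ** nu
    h = upper / n
    kernel = sum(f(t - ((i + 0.5) * h) ** (1.0 / nu)) for i in range(n)) * h / nu
    return (1.0 - nu) / b_nu * f(t) + nu / (b_nu * math.gamma(nu)) * kernel

# Reverse-Minkowski check for u(t) = t + 1, v(t) = 2t + 1 on [0, 1], p = 2:
# here 2/3 <= u/v <= 1, so alpha = 0.5 and theta = 1.0 are admissible bounds.
u = lambda s: s + 1.0
v = lambda s: 2.0 * s + 1.0
p, nu, a, t = 2.0, 0.5, 0.0, 1.0
alpha, theta = 0.5, 1.0
A = (theta * (1 + alpha) + (theta + 1)) / ((1 + alpha) * (theta + 1))

lhs = ab_integral(lambda s: u(s) ** p, a, t, nu) ** (1 / p) \
    + ab_integral(lambda s: v(s) ** p, a, t, nu) ** (1 / p)
rhs = A * ab_integral(lambda s: (u(s) + v(s)) ** p, a, t, nu) ** (1 / p)
print(lhs <= rhs)   # the bound holds for this sample
```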
There are many definitions of fractional integrals, for example, Riemann–Liouville, Hadamard, Liouville, Weyl, Erdelyi–Kober, and Katugampola [26,27,28,29], which could also be considered for obtaining analogous results. Now we give some definitions and a lemma related to the AB-fractional operator. Definition 1.1 ([30]) The fractional ABC-derivative in the Caputo sense of a function \(f \in H^{*}(a,b)\) is defined by $$ {}^{ABC} {{}_{a}\mathcal{D}_{\tau }^{\nu }}f(\tau )=\frac{\mathbb{B}(\nu )}{1-\nu } \int _{a}^{\tau }f^{'}(s)E_{\nu } \biggl[\frac{-\nu (\tau -s)^{\nu }}{1-\nu } \biggr]\,ds, $$ where \(b>a\) and \(\nu \in [0,1]\), and \(\mathbb{B}(\nu )>0\) satisfies the property \(\mathbb{B}(0)=\mathbb{B}(1)=1\). Definition 1.2 The fractional ABR-derivative, that is, the AB-derivative in the Riemann–Liouville sense, of a function \(f \in H^{*}(a,b)\) is defined by $$ {}^{ABR} {{}_{a}\mathcal{D}_{\tau }^{\nu }}f(\tau )= \frac{\mathbb{B}(\nu )}{1-\nu }\frac{d}{d\tau } \int _{a}^{\tau }f(s)E_{\nu } \biggl[ \frac{-\nu (\tau -s)^{\nu }}{1-\nu } \biggr]\,ds, $$ where \(b>a\) and \(\nu \in [0,1]\). Definition 1.3 ([31, 32]) The fractional AB-integral of the function \(f \in H^{*}(a,b)\) is given by $$ ^{AB} {{}_{a}\mathcal{I}_{\tau }^{\nu }}f(\tau )= \frac{1-\nu }{\mathbb{B}(\nu )}f(\tau )+\frac{\nu }{\mathbb{B}(\nu )\varGamma (\nu )} \int _{a}^{\tau }f(s) (\tau -s)^{\nu -1}\,ds, $$ where \(b>a\) and \(0<\nu <1 \). Remark 1.4 Since the normalization function \(\mathbb{B}(\nu )>0\) is positive, it immediately follows that the AB-integral of a positive function is positive. We will rely on this fact throughout the proofs of the main results. Lemma 1.5 The ABC-fractional derivative and AB-fractional integral of a function f satisfy the Newton–Leibniz formula $$ ^{AB}{{}_{a}\mathcal{I}_{\tau }^{\nu }} \bigl( ^{ABC}{{}_{a}\mathcal{D}_{\tau }^{\nu }}f(\tau ) \bigr)=f(\tau )-f(a). $$ Organization of the paper. This paper includes four sections. Introduction is given in Sect.
1, with a literature review, important definitions, and a lemma, which we will use in the proofs. In Sect. 2, we prove Minkowski's inequality for the AB-fractional integral operator. Other AB-fractional integral inequalities are proved in Sect. 3. The summary is given in Sect. 4. The AB-fractional Minkowski inequality Theorem 2.1 Let \(\nu >0\) and \(p\geq 1\). Let \(u, v \in C_{\nu }[a,b]\) be two positive functions in \([0,\infty [\) such that \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t)<\infty \) and \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v(t)<\infty \) for all \(t>a\). If \(0<\alpha \leq \frac{u(t)}{v(t)}\leq \theta \) for some \(\alpha ,\theta \in \mathbb{R}_{+}^{*}\) and all \(t\in [a,b]\), then $$ \bigl({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t) \bigr)^{\frac{1}{p}}+ \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{p}(t) \bigr)^{\frac{1}{p}} \leq \mathcal{A} \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t) \bigr)^{p} \bigr]^{\frac{1}{p}}, $$ where $$ \mathcal{A}=\frac{\theta (1+\alpha )+(\theta +1)}{(1+\alpha )(\theta +1)}. $$ Proof From the condition \(\frac{u(t)}{v(t)}\leq \theta \) we obtain $$ u(t)\leq \biggl(\frac{\theta }{\theta +1}\biggr) \bigl(u(t)+v(t)\bigr). $$ Taking the pth power of both sides of Eq. (2.2), we have $$ u^{p}(t)\leq \biggl(\frac{\theta }{\theta +1} \biggr)^{p}\bigl(u(t)+v(t)\bigr)^{p}. $$ Multiplying both sides of (2.3) by \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we get $$ \frac{1-\nu }{\mathbb{B}(\nu )}u^{p}(t)\leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p}\frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{p}. $$ Also, replacing t by s in Eq. (2.3) and multiplying both sides by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \), we get $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} u^{p}(s) \leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \bigl(u(s)+v(s) \bigr)^{p}. $$ Integrating both sides of Eq.
(2.5) with respect to s, we have $$ \int _{a}^{t} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u^{p}(s)\,ds\leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p} \int _{a}^{t} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \bigl(u(s)+v(s) \bigr)^{p}\,ds. $$ Adding (2.4) and (2.6), we obtain $$\begin{aligned} \frac{1-\nu }{\mathbb{B}(\nu )}u^{p}(t)+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} u^{p}(s) \,ds \leq {}&\biggl(\frac{\theta }{\theta +1}\biggr)^{p} \biggl[ \frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{p} \\ &{}+ \int _{a}^{t} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \bigl(u(s)+v(s) \bigr)^{p}\,ds \biggr]. \end{aligned}$$ This implies $$ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t) \leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u(t)+v(t)\bigr)^{p}. $$ Taking the \(\frac{1}{p}\)th power of both sides of Eq. (2.7), we find $$ \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t) \bigr)^{\frac{1}{p}} \leq \frac{\theta }{\theta +1} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u(t)+v(t)\bigr)^{p} \bigr]^{\frac{1}{p}}. $$ On the other hand, by using the condition \(0<\alpha \leq \frac{u(t)}{v(t)}\) we directly get $$ v^{p}(t)\leq \frac{1}{(1+\alpha )^{p}}\bigl(u(t)+v(t) \bigr)^{p}. $$ Multiplying Eq. (2.9) by \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we get $$ \frac{1-\nu }{\mathbb{B}(\nu )}v^{p}(t)\leq \frac{1}{(1+\alpha )^{p}} \frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{p}. $$ Also, replacing t by s in Eq. (2.9) and multiplying both sides by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we get $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} v^{p}(s) \leq \frac{1}{(1+\alpha )^{p}} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \bigl(u(s)+v(s)\bigr)^{p}. $$ Integrating both sides of Eq.
(2.11) with respect to s, we have $$ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} v^{p}(s)\,ds\leq \frac{1}{(1+\alpha )^{p}} \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{p}\,ds. $$ Adding (2.10) and (2.12), we obtain $$\begin{aligned} \frac{1-\nu }{\mathbb{B}(\nu )}v^{p}(t) + \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} v^{p}(s) \,ds\leq{}& \frac{1}{(1+\alpha )^{p}} \biggl[\frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{p} \\ &{}+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \bigl(u(s)+v(s) \bigr)^{p}\,ds \biggr]. \end{aligned}$$ This leads to the AB-fractional integral inequality $$ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{p}(t) \leq \frac{1}{(1+\alpha )^{p}} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{p}. $$ Taking the \(\frac{1}{p}\)th power of both sides of Eq. (2.14), we find $$ \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{p}(t) \bigr)^{\frac{1}{p}} \leq \frac{1}{1+\alpha } \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u(t)+v(t)\bigr)^{p} \bigr]^{\frac{1}{p}}. $$ By Eqs. (2.8) and (2.15) we obtain $$ \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t) \bigr)^{\frac{1}{p}}+ \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{p}(t) \bigr)^{\frac{1}{p}} \leq \mathcal{A} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u(t)+v(t)\bigr)^{p} \bigr]^{\frac{1}{p}}. $$ Thus, the proof of the AB-fractional integral inequality is completed. □ Other types of inequalities Theorem 3.1 Let \(\nu >0\) and \(p>1,q>1,\frac{1}{p}+\frac{1}{q}=1\). Let \(u, v \in C_{\nu }[a,b]\) be two positive functions in \([0,\infty [\) such that \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t)<\infty \) and \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v(t)<\infty \) for all \(t>a\).
If \(0<\alpha \leq \frac{u(t)}{v(t)}\leq \theta \) for some \(\alpha ,\theta \in \mathbb{R}_{+}^{*}\) and all \(t\in [a,b]\), then $$ \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t) \bigr)^{\frac{1}{p}} \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v(t) \bigr)^{\frac{1}{q}}\leq \biggl(\frac{\theta }{\alpha } \biggr)^{\frac{1}{pq}} \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u^{\frac{1}{p}}(t)v^{\frac{1}{q}}(t) \bigr) \bigr]. $$ Using the condition \(\frac{u(t)}{v(t)}\leq \theta \), we get $$ u^{\frac{1}{q}}\leq \theta ^{\frac{1}{q}} v^{\frac{1}{q}}. $$ Multiplying (3.2) by \(u^{\frac{1}{p}}\) and using the condition \(\frac{1}{p}+\frac{1}{q}=1\), we have $$ u\leq \theta ^{\frac{1}{q}} u^{\frac{1}{p}} v^{\frac{1}{q}}. $$ Now let us use (3.3) twice. First, multiplying by \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we get $$ \frac{1-\nu }{\mathbb{B}(\nu )}u\leq \theta ^{\frac{1}{q}} \frac{1-\nu }{\mathbb{B}(\nu )}u^{\frac{1}{p}} v^{\frac{1}{q}}. $$ Second, multiplying by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we obtain $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u\leq \theta ^{\frac{1}{q}} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u^{\frac{1}{p}} v^{\frac{1}{q}}. $$ Integrating both sides of Eq. (3.5) from a to t, we have $$ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)\,ds \leq \theta ^{\frac{1}{q}} \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)^{\frac{1}{p}} v(s)^{\frac{1}{q}}\,ds. $$ Now, by adding Eq. (3.4) and Eq. (3.6) we find $$\begin{aligned} \frac{1-\nu }{\mathbb{B}(\nu )}u(t)+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)\,ds \leq{} & \theta ^{\frac{1}{q}} \biggl[\frac{1-\nu }{\mathbb{B}(\nu )}u(t)^{\frac{1}{p}} v(t)^{\frac{1}{q}} \\ &{}+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)^{\frac{1}{p}} v(s)^{\frac{1}{q}}\,ds \biggr]. 
\end{aligned}$$ This implies $$ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t) \leq \theta ^{\frac{1}{q}} \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u^{\frac{1}{p}}(t)v^{\frac{1}{q}}(t) \bigr) \bigr]. $$ Taking the \(\frac{1}{p}\)th power of both sides of (3.7), we have $$ \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t) \bigr]^{\frac{1}{p}}\leq \theta ^{\frac{1}{pq}} \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u^{\frac{1}{p}}(t)v^{\frac{1}{q}}(t) \bigr) \bigr]^{\frac{1}{p}}. $$ Now, by the condition \(\alpha \leq \frac{u(t)}{v(t)}\) we have $$ v^{\frac{1}{p}}\leq \alpha ^{\frac{-1}{p}} u^{\frac{1}{p}}. $$ Multiplying Eq. (3.9) by \(v^{\frac{1}{q}}\), we get $$ v\leq \alpha ^{\frac{-1}{p}} u^{\frac{1}{p}}v^{\frac{1}{q}}. $$ Now let us use (3.10) twice. First, multiplying by \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we get $$ \frac{1-\nu }{\mathbb{B}(\nu )}v\leq \alpha ^{\frac{-1}{p}} \frac{1-\nu }{\mathbb{B}(\nu )}u^{\frac{1}{p}} v^{\frac{1}{q}}. $$ Second, multiplying by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we obtain $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}v\leq \alpha ^{\frac{-1}{p}} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u^{\frac{1}{p}} v^{\frac{1}{q}}. $$ Integrating both sides of Eq. (3.12) from a to t, we have $$ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}v(s)\,ds \leq \alpha ^{\frac{-1}{p}} \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)^{\frac{1}{p}} v(s)^{\frac{1}{q}}\,ds. $$ Now, by adding Eq. (3.11) and Eq. (3.13) we find $$\begin{aligned} \frac{1-\nu }{\mathbb{B}(\nu )}v(t)+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}v(s)\,ds \leq {}& \alpha ^{\frac{-1}{p}} \biggl[\frac{1-\nu }{\mathbb{B}(\nu )}u(t)^{\frac{1}{p}} v(t)^{\frac{1}{q}} \\ &{}+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)^{\frac{1}{p}} v(s)^{\frac{1}{q}}\,ds \biggr]. 
\end{aligned}$$ This implies $$ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v(t) \leq \alpha ^{\frac{-1}{p}} \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u^{\frac{1}{p}}(t)v^{\frac{1}{q}}(t) \bigr) \bigr]. $$ Taking the \(\frac{1}{q}\)th power of both sides of (3.15), we have $$ \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v(t) \bigr]^{\frac{1}{q}}\leq \alpha ^{\frac{-1}{pq}} \bigl[^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u^{\frac{1}{p}}(t)v^{\frac{1}{q}}(t) \bigr) \bigr]^{\frac{1}{q}}. $$ Finally, multiplying Eq. (3.8) and Eq. (3.16), we obtain the required inequality. □

Let \(\nu >0\) and \(p>1,q>1,\frac{1}{p}+\frac{1}{q}=1\). Let \(u, v \in C_{\nu }[a,b]\) be two positive functions on \([0,\infty [\) such that \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t)<\infty\), \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{q}(t)<\infty \), \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{p}(t)<\infty \), and \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{q}(t)<\infty \) for all \(t>a\). If \(0<\alpha \leq \frac{u(t)}{v(t)} \leq \theta \) for some \(\alpha ,\theta \in \mathbb{R}_{+}^{*}\) and all \(t\in [a,b]\), then $$ {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)v(t) \bigr)\leq \mathcal{A}^{*} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u^{p}(t)+v^{p}(t) \bigr)+ \mathcal{B}^{*}_{m} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u^{q}(t)+v^{q}(t) \bigr), $$ where $$ \mathcal{A}^{*}=\frac{2^{p-1}\theta ^{p}}{p(\theta +1)^{p}},\qquad \mathcal{B}^{*}_{m}= \frac{2^{q-1}}{q(1+\alpha )^{q}}. $$ Using the condition \(\frac{u(t)}{v(t)}\leq \theta \), we obtain $$ u^{p}(t)\leq \biggl(\frac{\theta (u(t)+v(t))}{1+\theta }\biggr)^{p}. $$ Multiplying both sides of (3.20) by \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we get $$ \frac{1-\nu }{\mathbb{B}(\nu )}u^{p}(t)\leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p}\frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{p}. $$ Also, replacing t by s in Eq. 
(3.20) and multiplying both sides by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we get $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u^{p}(s) \leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{p}. $$ Integrating both sides with respect to s, we get $$ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u^{p}(s)\,ds\leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p} \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{p}\,ds. $$ Adding the last inequality to (3.21), we obtain $$\begin{aligned} &\frac{1-\nu }{\mathbb{B}(\nu )}u^{p}(t)+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u^{p}(s)\,ds\\ &\quad\leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p} \biggl[\frac{1-\nu }{\mathbb{B}(\nu )} \bigl(u(t)+v(t)\bigr)^{p}+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{p}\,ds \biggr]. \end{aligned}$$ This implies $$ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t) \leq \biggl(\frac{\theta }{\theta +1}\biggr)^{p} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u(t)+v(t)\bigr)^{p}. $$ Multiplying (2.7) by the constant \(\frac{1}{p}\), we find $$ \frac{1}{p} \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t) \bigr)\leq \frac{1}{p} \biggl( \frac{\theta }{\theta +1} \biggr)^{p} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{p} \bigr]. $$ Similarly, using the condition \(0<\alpha \leq \frac{u(t)}{v(t)}\), we get $$ v^{q}(t)\leq \frac{1}{(1+\alpha )^{q}}\bigl(u(t)+v(t) \bigr)^{q}. $$ Multiplying (3.26) by \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we get $$ \frac{1-\nu }{\mathbb{B}(\nu )}v^{q}(t)\leq \frac{1}{(1+\alpha )^{q}} \frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{q}. $$ Also, replacing t by s and multiplying by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we get $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}v^{q}(s) \leq \frac{1}{(1+\alpha )^{q}} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s)\bigr)^{q}. 
$$ Integrating with respect to s, we get $$ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}v^{q}(s)\,ds\leq \frac{1}{(1+\alpha )^{q}} \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{q}\,ds. $$ Adding the last inequality to (3.27), we obtain $$\begin{aligned} \frac{1-\nu }{\mathbb{B}(\nu )}v^{q}(t)+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}v^{q}(s)\,ds \leq{} & \frac{1}{(1+\alpha )^{q}} \biggl[\frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t) \bigr)^{q} \\ &{}+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{q}\,ds \biggr]. \end{aligned}$$ This implies $$ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{q}(t) \leq \frac{1}{(1+\alpha )^{q}} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{q}. $$ Multiplying (2.14) by \(\frac{1}{q}\), we have $$ \frac{1}{q} \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{q}(t) \bigr)\leq \frac{1}{q} \frac{1}{(1+\alpha )^{q}} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u(t)+v(t)\bigr)^{q} \bigr]. $$ By means of Eqs. (3.25) and (3.31) we get $$\begin{aligned} &\frac{1}{p} \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t) \bigr)+ \frac{1}{q} \bigl(^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{q}(t) \bigr) \\ &\quad\leq \frac{1}{p} \biggl(\frac{\theta }{\theta +1} \biggr)^{p} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{p} \bigr]+\frac{1}{q} \frac{1}{(1+\alpha )^{q}} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{q} \bigr]. \end{aligned}$$ To complete our proof, we have to use Young's inequality $$ u(t)v(t)\leq \frac{u^{p}(t)}{p}+\frac{v^{q}(t)}{q}. $$ Multiplying it by \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we get $$ \frac{1-\nu }{\mathbb{B}(\nu )}u(t)v(t)\leq \frac{1-\nu }{\mathbb{B}(\nu )} \biggl( \frac{u^{p}(t)}{p}+\frac{v^{q}(t)}{q} \biggr). $$ Also, replacing t by s and multiplying by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we get $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)v(s) \leq \frac{\nu (t-s)^{\nu -1}}{p \mathbb{B}(\nu )\varGamma (\nu )}u^{p}(s)+ \frac{\nu (t-s)^{\nu -1}}{q \mathbb{B}(\nu )\varGamma (\nu )}v^{q}(s). 
$$ Integrating with respect to s, we have $$ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)v(s)\,ds \leq \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{p \mathbb{B}(\nu )\varGamma (\nu )}u^{p}(s)\,ds+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{q \mathbb{B}(\nu )\varGamma (\nu )}v^{q}(s)\,ds. $$ Adding the last two inequalities, we obtain $$\begin{aligned} & \frac{1-\nu }{\mathbb{B}(\nu )}u(t)v(t) + \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)v(s)\,ds \\ &\quad\leq \frac{1-\nu }{\mathbb{B}(\nu )} \biggl(\frac{u^{p}(t)}{p}+\frac{v^{q}(t)}{q} \biggr) \\ &\qquad{}+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{p \mathbb{B}(\nu )\varGamma (\nu )}u^{p}(s)\,ds+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{q \mathbb{B}(\nu )\varGamma (\nu )}v^{q}(s)\,ds. \end{aligned}$$ This implies $$ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t)v(t) \leq \frac{1}{p} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u^{p}(t)+ \frac{1}{q} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v^{q}(t). $$ Using (3.32) and (3.38), we have $$\begin{aligned} &{} ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t)v(t) \\ &\quad \leq \frac{1}{p} \biggl(\frac{\theta }{\theta +1} \biggr)^{p} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{p} \bigr]+\frac{1}{q} \frac{1}{(1+\alpha )^{q}} \bigl[ ^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{q} \bigr]. \end{aligned}$$ Using the inequality $$ (u+v)^{r}\leq 2^{r-1}\bigl(u^{r}+v^{r} \bigr),\quad u,v\geq 0,r>1, $$ with \(r=p\) and multiplying (3.40) by the constant \(\frac{1-\nu }{\mathbb{B}(\nu )}\), we find $$ \frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{p}\leq 2^{p-1}\frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)^{p}+v(t)^{p} \bigr). $$ Then multiplying Eq. (3.40) with \(r=p\) by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we get $$ \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}(u+v)^{p} \leq 2^{p-1} \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u^{p}+v^{p}\bigr). $$ Integrating Eq. 
(3.42) from a to t, we have $$ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{p}\,ds \leq 2^{p-1} \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \bigl(u^{p}(s)+v^{p}(s)\bigr)\,ds. $$ Adding Eq. (3.41) and Eq. (3.43), we obtain $$\begin{aligned} &\frac{1-\nu }{\mathbb{B}(\nu )}\bigl(u(t)+v(t)\bigr)^{p}+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\bigl(u(s)+v(s) \bigr)^{p}\,ds \\ &\quad \leq 2^{p-1} \biggl(\frac{1-\nu }{\mathbb{B}(\nu )} \bigl(u(t)^{p}+v(t)^{p}\bigr)+ \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )} \bigl(u^{p}(s)+v^{p}(s)\bigr)\,ds \biggr). \end{aligned}$$ This implies $$ {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{p}\leq 2^{p-1} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u^{p}(t)+v^{p}(t) \bigr). $$ Repeating the same process with \(r=q\), we get $$ {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t)\bigr)^{q}\leq 2^{q-1} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}\bigl(u^{q}(t)+v^{q}(t) \bigr). $$ Substituting (3.45) and (3.46) into Eq. (3.39), the proof is completed. 
□ Let \(\nu >0\), and let \(u, v \in C_{\nu }[a,b]\) be two positive functions on \([0,\infty [\) such that \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}u(t)<\infty \) and \({}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }}v(t)<\infty \) for all \(t>a\). If \(0<\alpha \leq \frac{u(t)}{v(t)}\leq \theta \) for some \(\alpha ,\theta \in \mathbb{R}_{+}^{*}\) and all \(t\in [a,b]\), then $$ \frac{1}{\theta } {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)v(t) \bigr)\leq \frac{1}{(1+\alpha )(\theta +1)} {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)+v(t) \bigr)^{2} \leq \frac{1}{\alpha } {}^{AB}{{}_{a}\mathcal{I}_{t}^{\nu }} \bigl(u(t)v(t) \bigr). $$ Using the condition $$ 0< \alpha \leq \frac{u(t)}{v(t)}\leq \theta , $$ we conclude that $$\begin{aligned} & (1+\alpha )v(t)\leq \bigl(u(t)+v(t)\bigr)\leq (\theta +1)v(t), \end{aligned}$$ $$\begin{aligned} & \frac{\theta +1}{\theta }u(t)\leq \bigl(u(t)+v(t)\bigr)\leq \frac{1+\alpha }{\alpha }u(t). \end{aligned}$$ By (3.49) and (3.50) we obtain $$\begin{aligned} \frac{1}{\theta }u(t)v(t)\leq \frac{(u(t)+v(t))^{2}}{(1+\alpha )(\theta +1)} \leq \frac{1}{\alpha }u(t)v(t). \end{aligned}$$ Multiplying (3.51) by \(\frac{1-\nu }{\mathbb{B}(\nu )}\) and, after replacing t by s, by \(\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\), we get $$\begin{aligned} & \frac{1}{\theta }\frac{1-\nu }{\mathbb{B}(\nu )}u(t)v(t)\leq \frac{1-\nu }{\mathbb{B}(\nu )}\frac{(u(t)+v(t))^{2}}{(1+\alpha )(\theta +1)} \leq \frac{1}{\alpha }\frac{1-\nu }{\mathbb{B}(\nu )}u(t)v(t), \end{aligned}$$ $$\begin{aligned} & \frac{1}{\theta }\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)v(s)\leq \frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\frac{(u(s)+v(s))^{2}}{(1+\alpha )(\theta +1)} \leq \frac{1}{\alpha }\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)v(s). \end{aligned}$$ Integrating Eq. 
(3.53) from a to t with respect to s, we have $$\begin{aligned} \frac{1}{\theta } \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)v(s)\,ds &\leq \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}\frac{(u(s)+v(s))^{2}}{(1+\alpha )(\theta +1)}\,ds \\ &\leq \frac{1}{\alpha } \int _{a}^{t}\frac{\nu (t-s)^{\nu -1}}{\mathbb{B}(\nu )\varGamma (\nu )}u(s)v(s)\,ds. \end{aligned}$$ Adding Eqs. (3.52) and (3.54), we obtain the required inequality. □ In this paper, we have considered Minkowski's inequality for the AB-fractional integral operator. We have also obtained some other types of integral inequalities for the AB-fractional integral operator. With the help of this work, we obtained more general inequalities than in the classical cases. For possible further work, we suggest applying the obtained inequalities to prove the existence of solutions of fractional differential equations. The author Thabet Abdeljawad would like to thank Prince Sultan University for funding this work through research group Nonlinear Analysis Methods in Applied Mathematics (NAMAM), group number RG-DES-2017-01-17. All the authors are very grateful to the editorial board and the reviewers, whose comments improved the quality of the paper. Data sharing not applicable to this paper as no datasets were generated during the current study. College of Engineering Mechanics and Materials, Hohai University, Nanjing, P.R. China Hasib Khan Department of Mathematics, Shaheed BB University, Khybar Pakhtunkhwa, Pakistan Department of Mathematics and General Sciences, Prince Sultan University, Riyadh, Saudi Arabia Thabet Abdeljawad Department of Mathematics, Faculty of Sciences, Van Yuzuncu Yil University, Van, Turkey Cemil Tunç Department of Mathematics, College of Science, Hohai University, Nanjing, P.R. 
China Abdulwasea Alkhazzan Department of Mathematics, University of Peshawar, Khybar Pakhtunkhwa, Pakistan Aziz Khan All the authors contributed equally to this paper. All authors read and approved the final manuscript. Correspondence to Hasib Khan. The authors have no conflict of interest regarding the publication of this paper. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Khan, H., Abdeljawad, T., Tunç, C. et al. Minkowski's inequality for the AB-fractional integral operator. J Inequal Appl 2019, 96 (2019). https://doi.org/10.1186/s13660-019-2045-3 Keywords: AB-fractional integral operator; Minkowski inequality
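The reverse Minkowski bound proved above can be checked numerically. The sketch below assumes the normalization \(\mathbb{B}(\nu )=1-\nu +\nu /\varGamma (\nu )\) common in the Atangana-Baleanu literature, and reads the constant \(\mathcal{A}=\frac{\theta }{\theta +1}+\frac{1}{1+\alpha }\) off Eqs. (2.8) and (2.15); both choices are assumptions, since the corresponding definitions lie outside this excerpt.

```python
import numpy as np
from math import gamma

def ab_integral(f, t, a=0.0, nu=0.5, n=200001):
    """AB-fractional integral of f at t (numerical sketch).

    The endpoint singularity of the kernel (t-s)^(nu-1) is removed by the
    substitution y = (t-s)^nu:
      int_a^t (t-s)^(nu-1) f(s) ds = (1/nu) int_0^{(t-a)^nu} f(t - y^(1/nu)) dy,
    and the smooth right-hand side is evaluated with the trapezoidal rule.
    """
    B = 1.0 - nu + nu / gamma(nu)  # assumed normalization B(nu)
    y = np.linspace(0.0, (t - a) ** nu, n)
    vals = f(t - y ** (1.0 / nu))
    dy = y[1] - y[0]
    kernel_int = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dy / nu
    return (1.0 - nu) / B * f(t) + nu / (B * gamma(nu)) * kernel_int

# Two positive test functions whose ratio is bounded on [0, 2].
u = lambda s: 2.0 + np.sin(s)
v = lambda s: 2.0 + np.cos(s)
t, p = 2.0, 2.0
grid = np.linspace(0.0, t, 10001)
ratio = u(grid) / v(grid)
alpha, theta = ratio.min(), ratio.max()
A = theta / (theta + 1.0) + 1.0 / (1.0 + alpha)  # assumed form of A

lhs = (ab_integral(lambda s: u(s) ** p, t) ** (1 / p)
       + ab_integral(lambda s: v(s) ** p, t) ** (1 / p))
rhs = A * ab_integral(lambda s: (u(s) + v(s)) ** p, t) ** (1 / p)
print(lhs <= rhs)  # the proven inequality holds for this sample
```

The bound is not tight here (the classical Minkowski inequality already forces the ratio lhs/rhs to lie between 1/A and 1), so modest quadrature error does not affect the check.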
Mathematics of the Incas
The mathematics of the Incas (or of the Tawantinsuyu) was the set of numerical and geometric knowledge and instruments developed and used in the nation of the Incas before the arrival of the Spaniards. It is mainly characterized by its usefulness in the economic field. The quipus and yupanas are proof of the importance of arithmetic in Inca state administration. This knowledge was embodied in a simple but effective arithmetic, used for accounting purposes and based on the decimal numeral system; the Incas knew zero[1] and mastered addition, subtraction, multiplication, and division. The mathematics of the Incas had an eminently practical character, directed at tasks of administration, statistics, and measurement; it was far from the Euclidean conception of mathematics as a deductive corpus, being instead suited to the needs of a centralized administration.[note 1] On the other hand, the construction of roads, canals, and monuments, as well as the layout of cities and fortresses, required the development of a practical geometry, indispensable for the measurement of lengths and surfaces and for architectural design. At the same time, they developed important measurement systems for length and volume, which took parts of the human body as reference. In addition, they used suitable objects or actions that allowed a result to be gauged in other, equally effective ways.
Inca numeral system
The prevailing numeral system was base ten.[2] One of the main references confirming this is the chronicles, which present a hierarchy of authorities organized decimally, counted with the quipu. 
Responsible: Number of families
Puriq: 1 family
Pisqa kamayuq: 5 families
Chunka kamayuq: 10 families
Pisqa chunkakamayuq: 50 families
Pachak kamayuq: 100 families
Pisqa pachakakamayuq: 500 families
Waranqa kamayuq: 1000 families
Pisqa waranqakamayuq: 5000 families
Chunka waranqakamayuq: 10 000 families
It is also possible to confirm the use of the decimal system by the Incas through the interpretation of the quipus, which are organized in such a way that the knots, according to their location, can represent units, tens, hundreds, etc.[3] However, the main confirmation of the use of this system is the denomination of the numbers in Quechua, in which the numbers are built up decimally. This can be appreciated in the following table:[note 2]
1: Huk
2: Iskay
3: Kimsa
4: Tawa
5: Pisqa
6: Suqta
7: Qanchis
8: Pusaq
9: Isqun
10: Chunka
11: Chunka hukniyuq
12: Chunka iskayniyuq
13: Chunka kimsayuq
14: Chunka tawayuq
15: Chunka pisqayuq
16: Chunka suqtayuq
17: Chunka qanchisniyuq
18: Chunka pusaqniyuq
19: Chunka isqunniyuq
20: Iskay chunka
30: Kimsa chunka
40: Tawa chunka
50: Pisqa chunka
60: Suqta chunka
70: Qanchis chunka
80: Pusaq chunka
90: Isqun chunka
100: Pachak
1000: Waranqa
1 000 000: Hunu
Accounting systems
Quipus
The quipus constituted a mnemonic system based on knotted strings, used to record all kinds of quantitative or qualitative information; when the results of mathematical operations were recorded, the operations themselves had first been carried out on the "Inca abacuses" or yupanas. Although one of its functions is related to mathematics, as an instrument capable of accounting, the quipu was also used to store information on censuses, quantities of products, and food kept in state warehouses.[4][5] Quipus are even mentioned as instruments the Incas used to record their traditions and history, in a manner different from writing. 
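The Quechua numeral names in the table above compose decimally, and the short sketch below reproduces the forms for 1-99. The suffix rule it uses (-yuq after a vowel, -niyuq after a consonant) is inferred from the listed forms; it is an assumption of this sketch, not a rule stated in the sources cited here.

```python
# Quechua (Cusco) unit numerals, lowercase, from the table above.
UNITS = {1: "huk", 2: "iskay", 3: "kimsa", 4: "tawa", 5: "pisqa",
         6: "suqta", 7: "qanchis", 8: "pusaq", 9: "isqun"}

def quechua_numeral(n):
    """Compose a Quechua numeral name for 1..99, mirroring the table."""
    if not 1 <= n <= 99:
        raise ValueError("this sketch only covers 1..99")
    tens, units = divmod(n, 10)
    if tens == 0:
        return UNITS[units]
    name = "chunka" if tens == 1 else UNITS[tens] + " chunka"
    if units:
        # inferred suffix rule: -yuq after a vowel, -niyuq after a consonant
        suffix = "yuq" if UNITS[units][-1] in "aiu" else "niyuq"
        name += " " + UNITS[units] + suffix
    return name

print(quechua_numeral(11))  # chunka hukniyuq
print(quechua_numeral(95))  # isqun chunka pisqayuq
```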
Several chroniclers also mention the use of quipus to store historical news.[note 3] However, it has not yet been discovered how this system worked. In the Tahuantinsuyo, specialized personnel handled the strings. They were known as quipucamayoc, and they could be in charge of the strings of an entire region or suyu. Although the tradition is being lost, quipus continue to be used as mnemonic instruments in some indigenous villages, where they record the produce of the crops and the animals of the communities.[5] According to the Jesuit chronicler Bernabé Cobo, the Incas assigned the tasks related to accounting to certain specialists, called quipo camayos, in whom the Incas placed all their trust.[6] In his study of the quipu sample VA 42527 (Museum für Völkerkunde, Berlin), Sáez-Rodríguez noted that, in order to close the accounting books of the chacras, certain numbers were ordered according to their value in the agricultural calendar, a task directly in the charge of the khipukamayuq, the accountant entrusted with the granary.[7][8]
Yupanas
Main article: Yupana
In the case of numerical information, the mathematical operations were previously carried out on the abacuses or yupanas. These could be made of carved stone or clay; they had boxes or compartments that corresponded to the decimal units and were counted or marked with the help of small stones or grains of corn or quinoa. Units, tens, hundreds, etc. could be indicated according to whether they were implicit in each operation. Recent research on the yupanas suggests that they allowed considerable numbers to be calculated, based on a probably non-decimal system[9] built instead around the number 40. If true, it is curious to note the coincidence between the geometric progression achieved in the yupana and current processing systems;[10] on the other hand, it would be contradictory for the Incas to have based their accounting system on the number 40. 
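The attested decimal reading of the quipu and the hypothesised base-40 reading of the yupana differ only in the radix of the positional decomposition. A minimal sketch (the base-40 reading is the researchers' hypothesis, not an attested encoding):

```python
def to_base(n, base):
    """Positional digits of n, least-significant position first."""
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits or [0]

# The same count under both readings:
print(to_base(1536, 10))  # [6, 3, 5, 1]: units, tens, hundreds, thousands
print(to_base(1536, 40))  # [16, 38]: 16 + 38*40 = 1536
```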
If the investigations continue and this fact is confirmed, it would be necessary to compare its use with the decimal system, which, according to the historical tradition and previous research, was the one used by the Incas.[11] In October 2010, the Peruvian researcher Andrés Chirinos, with the support of the Spanish Agency for International Development Cooperation (in Spanish, Agencia Española de Cooperación Internacional para el Desarrollo, AECID), reviewed drawings and ancient descriptions by the indigenous chronicler Guaman Poma de Ayala and finally deciphered the riddle of the yupana (which he calls a "pre-Hispanic calculator"), showing it to be capable of adding, subtracting, multiplying, and dividing. This made him hopeful of finally discovering how the quipus worked as well.[12]
Units of measurement
There were different units of measurement for magnitudes such as length and volume in pre-Hispanic times. The Andean peoples, as in many other places in the world, took parts of the human body as a reference to establish their units of measurement. There was no single system of units in obligatory and uniform use throughout the Andean world. Many documents and chronicles record different systems of local origin that remained in use until the 16th century.
Length
Among the units of length there was the rikra (fathom), the distance measured between a man's thumbs with the arms extended horizontally.[13] The kukuchu tupu (kukush tupu) was equivalent to the Spanish codo (cubit), the distance from the elbow to the tips of the fingers.[14] There was also the capa (span), and the smallest was the yuku or jeme, the length between the index finger and the thumb stretched as far apart as possible. The distance between two villages would have been evaluated by the number of chasquis required to carry a message from one village to the other. 
They would have used direct proportionality between the circumference of a sheepfold and the number of chacra partitions.
Surface
The tupu was the unit of measurement of surface area. In general terms, it was defined as the plot of land required for the maintenance of a married couple without children. Every hatun runa or "common man" received a plot of land upon marriage, and its production had to satisfy the basic needs of food and trade of the spouses. It did not correspond to an exact measurement, since its dimensions varied according to the conditions of each piece of land and from one ethnic group to another.[15] The quality of the soil was taken into consideration, and the rest period the land required after a certain number of agricultural campaigns was calculated accordingly. After that time, the couple could claim a new tupu from their curaca.
Capacity
Among the units of measurement of capacity there is the pokcha, which was equivalent to half a fanega, or 27.7 liters. Some crops, such as corn, were measured in containers; liquids were measured in a variety of pitchers (cántaros) and large jars (tinajas). There were also straw or reed boxes in which objects were kept; in the warehouses, such boxes were used to store delicate or exquisite products, such as dried fruits. Coca leaves were measured in runcu, or large baskets. Other baskets were known as ysanga. Among these measures of capacity there is the poctoy or purash (almozada), equivalent to the portion of grain or flour that can be held in the hollow formed by cupping the hands together.[16] The ancient inhabitants of the Andes knew pan balances and net balances, as well as the huipe, an instrument similar to a steelyard.[17] Apparently, its presence is associated with jewelry and metallurgy, trades in which exact weights must be known in order to use the right proportions in alloys. 
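The only capacity equivalence quoted above (one pokcha = half a fanega, about 27.7 liters) is enough for a small conversion helper. The figures are the approximations given in the text, not precise metrological data:

```python
POKCHA_LITERS = 27.7                 # one pokcha, per the equivalence above
FANEGA_LITERS = 2 * POKCHA_LITERS    # one fanega = two pokchas (55.4 L)

def pokchas_to_liters(pokchas):
    """Convert pokchas to liters using the quoted approximation."""
    return pokchas * POKCHA_LITERS

def liters_to_pokchas(liters):
    """Convert liters back to pokchas."""
    return liters / POKCHA_LITERS

print(pokchas_to_liters(2))              # one fanega's worth
print(round(liters_to_pokchas(100), 2))  # pokchas in 100 liters
```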
Volume

They measured volume especially for their colcas (trojas) and their tambos (state warehouses, located in key points of the Qhapaq Ñan). They used the runqu (rongos: bales), portable containers or ishanka (baskets), or the capacity of a chacra. They would have handled the proportionality of the volumes of prisms with respect to their heights, without varying the bases.[18]

Time

To measure time, they used the day (workday), which could include a morning and even an afternoon. Time was also useful, indirectly, to appreciate the distance between two cities; for example, 20 days from Cajamarca to Cusco was the accepted time measurement. Months, years, and the phases of the moon, much consulted for the tasks of sowing, aporques (hilling), and harvests and in navigation, were also measured in days.[18]

See also

• Inca Empire
• History of the Incas
• History of Peru
• Mathematics

Notes

1. This is deduced from the dictionaries of 'mathematics in Quechua' in current use and the known instruments: quipu and yupana
2. The Quechua used is that of Cusco
3. Chroniclers with such diverse points of view as Garcilaso de la Vega, Guamán Poma, and Cieza de León (known for his historical rigor) mention the use of quipus to store historical data and poetry, although none of them mentions details of how that worked.

References

1. Ascher M. and Ascher R. (1985). "El quipu como lenguaje visible". In Heather Lechtman, Ana María Soldi (ed.). La tecnología en el mundo andino (in Spanish). Vol. 1. Mexico City: UNAM. ISBN 9789688372937.
2. Kubritski, Yuri (1979). The Inkas-Quechuas.
3. "QUIPU". Promotora Española de Lingüística (Proel) (in Spanish). Alfabetos de ayer y de hoy.
4. Fedriani Martel, Eugenio M.; Tenorio Villalón, Ángel F. Los sistemas de numeración maya, azteca e inca (PDF) (in Spanish). p. 184. Archived from the original (PDF) on 7 October 2007.
5. Rostworowski, María. "Historia del Tahuantinsuyo, Los quipus" (in Spanish). Los Incas. Archived from the original on 20 January 2008.
6. Biblioteca de Autores Españoles (1983) [1653]. Francisco Mateos (ed.). Obras del P. Bernabé Cobo. Vol. 1 (in Spanish). Madrid: Ediciones Atlas.
7. Sáez-Rodríguez, A. (2012). "An Ethnomathematics Exercise for Analyzing a Khipu Sample from Pachacamac (Perú)". Revista Latinoamericana de Etnomatemática. 5 (1): 62–88.
8. Sáez-Rodríguez, A. (2013). "Knot numbers used as labels for identifying subject matter of a khipu". Revista Latinoamericana de Etnomatemática. 6 (1): 4–19.
9. Fedriani Martel, Eugenio M.; Tenorio Villalón, Ángel F. Los sistemas de numeración maya, azteca e inca (PDF) (in Spanish). pp. 186–187. Archived from the original (PDF) on 7 October 2007.
10. "Descifraron la "calculadora" incaica". Diario "La Gaceta" (in Spanish). Tucumán, Argentina. 26 January 2004.
11. "Aseguran que el sistema de cálculo incaico se basaba en el número 40". Diario Los Andes. Mendoza, Argentina. 26 January 2004. Archived from the original on 2 January 2008.
12. "La Yupana, el acertijo resuelto de la calculadora inca". Agencia EFE S.A (in Spanish). Archived from the original on 18 September 2010.
13. Rostworowski, María (1960). Pesos y medidas del Perú prehispánico (in Spanish). Lima: Minerva.
14. Qheswa-Spanish-qheswa Simi Taqe Qusqu Kuraq Wasi (in Spanish and Quechua). 1995.
15. Espinoza, Waldemar. Los Incas (in Spanish). p. 158.
16. Carranza Romero, Francisco. Diccionario Quechua ancashino-castellano.
17. Espinoza, Waldemar. Los Incas. p. 161.
18. Llerena, Luis. "Metrología andina". Cuadernos Arguedianos.

Bibliography

• Espinoza Soriano, Waldemar (2003). Los Incas, economía, sociedad y estado en la era del Tahuantinsuyo (in Spanish). Lima: Editorial Sol 90. ISBN 9972-891-79-8.
• Muxica Editores (2001). Culturas Prehispánicas (in Spanish). Muxica Editores. ISBN 9972-617-10-6. 
Wikipedia
Endomorphism

In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space V is a linear map f: V → V, and an endomorphism of a group G is a group homomorphism f: G → G. In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set S to itself.

In any category, the composition of any two endomorphisms of X is again an endomorphism of X. It follows that the set of all endomorphisms of X forms a monoid, denoted End(X) (or EndC(X) to emphasize the category C); in the category of sets, this is the full transformation monoid.

Automorphisms

Main article: Automorphism

An invertible endomorphism of X is called an automorphism. The set of all automorphisms is a subset of End(X) with a group structure, called the automorphism group of X and denoted Aut(X). In the following diagram, the arrows denote implication:

Automorphism ⇒ Isomorphism
     ⇓              ⇓
Endomorphism ⇒ (Homo)morphism

Endomorphism rings

Main article: Endomorphism ring

Any two endomorphisms of an abelian group, A, can be added together by the rule (f + g)(a) = f(a) + g(a). Under this addition, and with multiplication defined as function composition, the endomorphisms of an abelian group form a ring (the endomorphism ring). For example, the set of endomorphisms of $\mathbb {Z} ^{n}$ is the ring of all n × n matrices with integer entries. The endomorphisms of a vector space or module also form a ring, as do the endomorphisms of any object in a preadditive category. The endomorphisms of a nonabelian group generate an algebraic structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group;[1] however, there are rings that are not the endomorphism ring of any abelian group. 
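The ring structure in the abelian case can be illustrated with a small, self-contained Python check (an illustrative sketch, not from the article; the group Z/6 is chosen arbitrarily for the example):

```python
from itertools import product

n = 6  # illustrative choice: the cyclic group Z/6

# Enumerate every group endomorphism of Z/n, i.e. every function
# f: Z/n -> Z/n with f(a + b) = f(a) + f(b) (mod n).
endos = [
    f for f in product(range(n), repeat=n)
    if all(f[(a + b) % n] == (f[a] + f[b]) % n for a in range(n) for b in range(n))
]

# Each endomorphism of a cyclic group is "multiply by k" with k = f(1),
# so End(Z/n) has exactly n elements.
assert len(endos) == n
assert all(f[a] == (f[1] * a) % n for f in endos for a in range(n))

# Closure under (f+g)(a) = f(a)+g(a) and composition (f.g)(a) = f(g(a)):
# together with these operations, End(Z/n) is a ring.
for f in endos:
    for g in endos:
        pointwise_sum = tuple((f[a] + g[a]) % n for a in range(n))
        composition = tuple(f[g[a]] for a in range(n))
        assert pointwise_sum in endos and composition in endos
```

Here End(Z/6) turns out to be isomorphic to Z/6 itself as a ring, matching the general fact that End(Z/n) ≅ Z/n.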
Operator theory

In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing the notion of element orbits to be defined, etc. Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details can be found in the article about operator theory.

Endofunctions

An endofunction is a function whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism.

Let S be an arbitrary set. Among endofunctions on S one finds permutations of S and constant functions associating to every x in S the same element c in S. Every permutation of S has codomain equal to its domain and is bijective and invertible. If S has more than one element, a constant function on S has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number n the floor of n/2 has its image equal to its codomain and is not invertible.

Finite endofunctions are equivalent to directed pseudoforests. For sets of size n there are n^n endofunctions on the set. Particular examples of bijective endofunctions are the involutions; i.e., the functions coinciding with their inverses.

See also

• Adjoint endomorphism
• Epimorphism (surjective homomorphism)
• Frobenius endomorphism
• Monomorphism (injective homomorphism)

Notes

1. Jacobson (2009), p. 162, Theorem 3.2.

References

• Jacobson, Nathan (2009), Basic algebra, vol. 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1

External links

• "Endomorphism", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
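Returning to the counts in the endofunctions section above, a brute-force Python check for a three-element set (illustrative only) confirms the n^n count and the special subfamilies:

```python
from itertools import product

S = (0, 1, 2)
n = len(S)

# All endofunctions of S, encoded as tuples (f(0), f(1), f(2)).
endofunctions = list(product(S, repeat=n))
assert len(endofunctions) == n ** n          # 3^3 = 27

# The invertible endofunctions are exactly the permutations of S.
bijections = [f for f in endofunctions if len(set(f)) == n]
assert len(bijections) == 6                  # 3! = 6

# Involutions: bijections coinciding with their own inverses, f(f(x)) = x.
involutions = [f for f in bijections if all(f[f[x]] == x for x in S)]
assert len(involutions) == 4                 # identity + three transpositions
```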
\begin{document} \title{Explicit Formulas for h-Deformed Structure Constants of Grassmannians} \begin{abstract} The Chern-Schwartz-MacPherson (CSM) and motivic Chern (mC) classes of Schubert cells in a Grassmannian are one-parameter deformations of the fundamental classes of the Schubert varieties in cohomology and K-theory respectively. Like the fundamental classes, the deformed classes form a basis for the cohomology and K-theory ring of the Grassmannian. The purpose of this paper is to initiate the study of the structure constants associated to the basis CSM and mC classes in terms of the combinatorics of polynomials. First, we prove formulas for the structure constants of projective spaces that involve binomial coefficients. Then, using residue calculus on weight functions, we describe the structure constants of projective spaces and certain related structure constants of 2-plane Grassmannians as coefficients of explicit polynomials in one variable. Finally, we propose an approach for obtaining more general results in this direction, and make conjectures generalizing the aforementioned results for projective spaces. \end{abstract} \section{Introduction} \subsection{$h$-Deformed Littlewood-Richardson Numbers} Let $R$ be a commutative ring. Given a finite dimensional $R$-algebra $A$ and an $R$-basis $v_1,...,v_n$, we can describe the multiplication operation in $A$ by giving the \emph{structure constants} associated to the basis. Namely, there are unique $c^k_{i,j}\in R$, where $1\leq i,j,k\leq n$, for which $v_i\cdot v_j=\sum_{k=1}^n c^k_{i,j}v_k$. The case where $R=\mathbb{Z}$ and $A$ is the integral cohomology or K-theory ring of a Grassmannian Gr$(d,n)$ has been extensively studied. A Grassmannian is equipped with a canonical CW decomposition into Schubert cells. There are several conventions for indexing these cells. In this paper, Schubert cells $\Omega_I$ will be indexed by subsets $I\subset[1,n]$ such that $|I|=d$. 
Let $\mathcal{I}=\{I\subset[1,n]\ |\ |I|=d\}$. This immediately gives a basis of fundamental classes of cell closures, $[\overline\Omega_I]$ for $I\in\mathcal{I}$, for cohomology and K-theory. The structure constants associated with this basis are called the \emph{Littlewood-Richardson numbers}. The Littlewood-Richardson numbers are well understood, and several classical rules exist for calculating them. It is possible, however, to consider other bases for the cohomology and K-theory. The fundamental class admits a 1-parameter deformation. In cohomology, the deformation is called the \emph{Chern-Schwartz-MacPherson} or $c^{sm}$ class \cite{M}. In K-theory, it is called the \emph{motivic Chern} or $mC$ class \cite{BSY}. Throughout this paper, this parameter will be called $h$. If $X$ is a smooth variety, these classes associate an element of $\mathrm{H}^*(X)[h]$ and $\mathrm{K}^0(X)[h]$ respectively to each subvariety of $X$. Taking the term of highest $h$ power in the $\mathrm{c^{sm}}$ class of a locally closed subvariety recovers the cohomological fundamental class of the closure, and setting $h$ to zero in mC recovers the K-theoretic fundamental class. Alternatively, one can think of $\mathrm{c^{sm}}$ and mC as generalizing the total Chern class of the tangent bundle of a smooth variety to potentially singular varieties. In particular, integrating these classes gives the Euler characteristic (up to a power of $h$) and $\chi_h$ characteristic respectively. An important property of these classes is \emph{additivity}: given disjoint subvarieties $S,T$, \[\mathrm{c^{sm}}(S\cup T)=\mathrm{c^{sm}}(S)+\mathrm{c^{sm}}(T)\ \ ,\ \ \mathrm{mC}(S\cup T)=\mathrm{mC}(S)+\mathrm{mC}(T).\] This paper deals with the $\mathrm{c^{sm}}$ and mC classes of Schubert cells. The $h$-deformed classes of Schubert cells in partial flag varieties are studied in \cite{AMSS1,AMSS2}. 
Like the fundamental classes, the $h$-deformed classes of the Schubert cells form a basis for cohomology and K-theory. An immediate consequence of the above properties is \[\forall I\in\mathcal{I}\ \ \mathrm{c^{sm}}(\Omega_I)=h^{\dim(\Omega_I)}[\overline\Omega_I]+\text{h.o.t},\] where "h.o.t" denotes terms of higher cohomological degree. We will refer to this property as \emph{triangularity with respect to cohomological grading}. By performing a procedure akin to Gaussian elimination over $\mathbb{Z}(h)$, we can uniquely express the fundamental classes $[\overline\Omega_I]$ in terms of the Chern-Schwartz-MacPherson classes $\mathrm{c^{sm}}(\Omega_I)$. Therefore, the classes $\mathrm{c^{sm}}(\Omega_I)$, where $I\in\mathcal{I}$, form a $\mathbb{Z}(h)$-basis for $\mathrm{H}^*(\mathrm{Gr}(d,n))(h)$. A similar argument with Chern character shows that the classes $\mathrm{mC}(\Omega_I)$ for $I\in\mathcal{I}$ form a $\mathbb{Z}(h)$-basis for $\mathrm{K}^0(\mathrm{Gr}(d,n))(h)$. The associated structure constants $c^K_{I,J}$ and $C^K_{I,J}$ in cohomology and K-theory respectively are called the \emph{h-deformed Littlewood-Richardson numbers}. A formula for the cohomological h-deformed structure constants of a partial flag variety in terms of divided difference operators is given in \cite{S}. It is also possible to consider the structure constants associated to the bases of Segre-Schwartz-MacPherson and Segre-Motivic Chern classes in cohomology and K-theory respectively. These structure constants are closely related to our structure constants (see Section 2.4 of \cite{S}). In this paper, we calculate the $h$-deformed Littlewood-Richardson numbers of projective spaces, give preliminary results for 2-plane Grassmannians, and propose an approach for generalizing these results to arbitrary Grassmannians. The main results and conjectures are listed below. 
In Section 2, we will consider the $h$-deformed Littlewood-Richardson numbers of projective spaces $\mathrm{Gr}(1,m+1)=\mathbb{P}^m$. Our approach is based on toric calculus, and in Section 3, we formulate the problem of finding $h$-deformed structure constants for arbitrary toric manifolds. Since the Schubert cells of $\mathbb{P}^m$ are indexed by singleton sets, we drop the set braces from the notation. Hence, the cells will be indexed by integers $1\leq i\leq m+1$. In Section 4, we give some preliminary results and conjectures for Grassmannians that are not projective spaces. Our approach is based on using weight function orthogonality \cite{R,RTV1,RTV2} and residue calculus (cf. \cite{FR}) to obtain formulas for the equivariant $h$-deformed Littlewood-Richardson numbers and then degenerating to the nonequivariant structure constants by taking limits. \subsection{Results and Conjectures} \begin{main_HLR_proj} Let $c^k_{i,j}$ be the structure constants of $\mathbb{P}^m$ associated to the basis of CSM classes of Schubert cells. For all $1\leq i,j,k\leq m+1$, \begin{enumerate} \item $c^k_{i,j}=0$ unless $k\leq i,j$, \item $c^k_{i,j}=c^k_{i-1,j+1}$ when $i>1$ and $j<m+1$, \item $c^k_{i,j}=c^{k+1}_{i,j+1}$ when $j,k<m+1$, \item $c^k_{m+1,m+1}=\binom{2m-k}{m-1}h^m$. \end{enumerate} These equalities completely determine all cohomological structure constants of $\mathbb{P}^m$. In particular, \[c^k_{i,j}=\binom{i+j-k-2}{m-1}h^m.\] \end{main_HLR_proj} \begin{main_KLR_proj}\label{main_KLR_proj} Let $C^k_{i,j}$ be the structure constants of $\mathbb{P}^m$ associated to the basis of motivic Chern classes of Schubert cells. Define $\widetilde{C}^k_{i,j}=(-1)^{m+i+j+k+1}(1+h)^{-m}C^k_{i,j}$. For all $1\leq i,j,k\leq m+1$, \begin{enumerate} \item $C^k_{i,j}=0$ unless $k\leq i,j$, \item $C^k_{i,j}=C^k_{i-1,j+1}$ when $i>1$ and $j<m+1$, \item $C^k_{i,j}=C^{k+1}_{i,j+1}$ when $j,k<m+1$, \item $\widetilde{C}^k_{m+1,m+1}=\binom{2m-k}{m}h^{m-k}+\binom{2m-k+1}{m}h^{m-k+1}$. 
\end{enumerate} These equalities completely determine all K-theoretic structure constants of $\mathbb{P}^m$. In particular, \[\widetilde C^k_{i,j}=\binom{i+j-k-2}{m}h^{i+j-k-m-2}+\binom{i+j-k-1}{m}h^{i+j-k-m-1}.\] \end{main_KLR_proj} \begin{main_symmetry} For all $I,I',J,J',K,K'\in\mathcal{I}$, \begin{enumerate} \item $C^K_{I,J}$ is a polynomial in $-1-h$ with nonnegative coefficients, \item $C^K_{I,J}=0$ unless $K\leq I,J$, \item $C^K_{I,J}=C^K_{I',J'}$ if $i_a+j_a=i'_a+j'_a$ for all $1\leq a\leq d$, \item $C^K_{I,J}=C^{K'}_{I,J'}$ if $j_a-k_a=j'_a-k'_a$ for all $1\leq a \leq d$. \end{enumerate} \end{main_symmetry} \begin{main_KtoH} Expand $C^K_{I,J}$ as a polynomial in $-1-h$. Then the lowest degree term has coefficient $c^K_{I,J}/h^{\dim(\mathrm{Gr}(d,n))}$. \end{main_KtoH} \section{Projective Spaces} \subsection{Toric Manifolds} In Section 2, we will compute structure constants using toric geometry. Our reference for toric geometry is \cite{Fu}. First, let us recall the necessary constructions. The geometry of a toric manifold is encoded in a Delzant polytope. A convex polytope $\Delta\subset\mathbb{R}^m$ is Delzant if \begin{enumerate} \item there are $m$ edges incident to each vertex, \item each edge incident to a vertex $p$ is of the form $\{p+tu\ |\ t\in[0,l]\}$, where $l\in\mathbb{R}_{>0}$ and $u\in\mathbb{Z}^m\subset\mathbb{R}^m$, \item and these integral vectors $u$ form a $\mathbb{Z}$-basis for $\mathbb{Z}^m$. \end{enumerate} Let $T=(\mathbb{C}^*)^m$ be a torus and $X$ be a smooth compact complex Hamiltonian $T$-space. This action restricted to $U=U(1)^m$ is Hamiltonian, and we denote the moment map by $\mu$. We say $X$ is a toric manifold if the action of $U$ is effective and $m=\mathrm{dim}X$ (all dimensions in this paper are complex dimensions). In this case, $\Delta=\mu(X)\subset\mathrm{Lie}(U)=\mathbb{R}^m$ is a Delzant polytope, and there is a correspondence between $d$-faces of $\Delta$ and $d$-dimensional orbits of $T$. 
We will take the dual perspective by looking at the normal fan to $\Delta$. This forgets the edge lengths and position of $\Delta$ and records only the angles between facets, but the forgotten data encodes the symplectic structure, which is not relevant for this paper. Let $F_1,...,F_r$ be the facets of $\Delta$. Let $v_i\in\mathbb{Z}^m$ be the primitive inward pointing normal vector to $F_i$. For $S\subset [1,r]$, define the cone $\sigma_S=\sum_{i\in S}\mathbb{R}_{\geq 0}\cdot v_i$. The normal fan to $\Delta$ is the collection of cones $\mathcal{N}(\Delta)=\{\sigma_S|\bigcap_{i\in S}F_i\neq\emptyset\}$. The zero cone $\{0\}$ belongs to $\mathcal{N}(\Delta)$ by convention. In the language of fans, the Delzant property translates to \begin{enumerate} \item the maximal cones are spanned by $m$ vectors, \item each cone is spanned by integral vectors, \item and the vectors spanning each maximal cone form a $\mathbb{Z}$-basis for $\mathbb{Z}^m$. \end{enumerate} Moreover, there is a correspondence between $d$-dimensional faces of $\Delta$ and $(m-d)$-dimensional cones of $\mathcal{N}(\Delta)$. This gives a correspondence of $d$-dimensional cones and $T$-orbits of codimension $d$ \[\{\gamma\in\mathcal{N}(\Delta)\ |\ \dim(\gamma)=d\} \longleftrightarrow \{T\text{-orbits of codimension }d\}.\] Under this correspondence, fixed points are identified with maximal cones, 1-dimensional orbits are identified with facets, and codimension 1 orbits are identified with rays. The orbit structure of $X$ has a simple combinatorial description. Given $\gamma\in\mathcal{N}(\Delta)$, let $\mathcal{O}_\gamma$ be the orbit corresponding to $\gamma$, and $V_\gamma=\overline{\mathcal{O}_\gamma}$. Then, $V_\gamma=\bigcup_{\gamma'\supset\gamma}\mathcal{O}_{\gamma'}$. It follows that the correspondence between cones and orbits is inclusion reversing with respect to orbit closures. 
It also follows that if $\gamma_1,\gamma_2,\gamma_3\in\mathcal{N}(\Delta)$ are such that $\gamma_3=\gamma_1+\gamma_2$, then $V_{\gamma_3}=V_{\gamma_1}\cap V_{\gamma_2}$. If in addition $\gamma_1\cap\gamma_2=0$, then a local coordinates calculation shows that this intersection is transverse. In the case that $\gamma_1$ and $\gamma_2$ do not span a cone in $\mathcal{N}(\Delta)$, $V_{\gamma_1}\cap V_{\gamma_2}=\emptyset$. All orbit closures $V_\gamma$ are themselves toric manifolds, and their normal fans are given by the star construction. Define the star of $\gamma$ to be the following fan of cones in $\mathbb{R}^m/(\mathbb{R}\cdot\gamma)\cong\mathbb{R}^{m-\dim(\gamma)}$: \[\mathrm{st}(\gamma):=\{\gamma'+\mathbb{R}\cdot\gamma\in\mathbb{R}^m/(\mathbb{R}\cdot\gamma)\ |\ \gamma'\in\mathcal{N}(\Delta),\gamma'\supset\gamma\}.\] The moment polytope of $V_\gamma$ is the corresponding face of $\Delta$. These properties give rise to a combinatorial description of $\mathrm{H}^*(X)$. Let $\tau_1,...,\tau_r$ be the rays in $\mathcal{N}(\Delta)$. Define the Stanley-Reisner ring of $\Delta$ by \[SR:=\mathbb{Z}[X_1,...,X_r]/(X_{i_1}\cdots X_{i_s}\ |\ s\leq r,1\leq i_a\leq r,\text{ and }\tau_{i_1},...,\tau_{i_s}\text{ do not span a cone in }\mathcal{N}(\Delta)).\] We will see that $\mathrm{H}^*(X)\cong SR/\mathcal{J}$, where $\mathcal{J}=\left\langle\sum_{i=1}^r u(\tau_i)X_i\ \middle|\ {u\in\mathrm{Hom}(\mathbb{Z}^m,\mathbb{Z})}\right\rangle$. By assembling torus orbits into cells, we can build a CW decomposition of $X$ (more on this construction in Section 3), where the closure of each cell is a torus orbit closure $V_\gamma$ for some $\gamma\in\mathcal{N}(\Delta)$. This CW decomposition implies that $\mathrm{H}^*(X)$ is generated by the fundamental classes $[V_\gamma]$. If $\gamma$ is spanned by distinct rays $\tau_{i_1},...,\tau_{i_s}$, then the transverse intersection property implies that $[V_\gamma]=[V_{\tau_{i_1}}]\cdots[V_{\tau_{i_s}}]$. 
Moreover, if $\tau_{i_1},...,\tau_{i_s}$ do not span a cone in $\mathcal{N}(\Delta)$, then $V_{\tau_{i_1}}\cap\cdots\cap V_{\tau_{i_s}}=\emptyset$, and $[V_{\tau_{i_1}}]\cdots[V_{\tau_{i_s}}]=0$. Hence, there is a surjection \begin{align*} \Phi:SR&\to \mathrm{H}^*(X)\\ X_i&\mapsto[V_{\tau_i}] \end{align*} The divisors $\sum_{i=1}^r u(\tau_i)V_{\tau_i}$ are known to be principal Cartier divisors, so certainly $\mathcal{J}\subset\ker(\Phi)$. It is shown in \cite{Fu1} that in fact $\mathcal{J}=\ker(\Phi)$, giving the desired isomorphism $SR/\mathcal{J}\cong \mathrm{H}^*(X)$. We will be interested in the CSM and motivic Chern classes of the cells in the CW decomposition. In order to compute motivic characteristic classes of a cell, we can stratify it into $T$-orbits and apply the additivity property. The following formulas for the CSM and motivic Chern class of a $T$-orbit are given in \cite{MS}. For all $\gamma\in\mathcal{N}(\Delta)$, \[\mathrm{c^{sm}}(\mathcal{O}_\gamma)=h^{m-\dim(\gamma)}[V_\gamma],\quad\mathrm{mC}_h(\mathcal{O}_\gamma)=(1+h)^{m-\dim(\gamma)}\iota_{\gamma*}\omega_\gamma,\] where $\omega_\gamma$ is the canonical bundle of $V_\gamma$, and $\iota_\gamma:V_\gamma\to X$ is the inclusion. It is not necessary to describe the K-theory of $X$, as we will immediately pass to cohomology by taking Chern characters. The following lemma records some elementary facts that are needed for future calculations. \begin{toric_classes}\label{toric_classes} Fix $\gamma\in\mathcal{N}(\Delta)$. 
Given $\eta\in\mathcal{N}(\Delta)$ such that $\eta\supset\gamma$, let $\tilde\eta=\eta+\mathbb{R}\cdot\gamma\in\mathrm{st}(\gamma)$. We have \begin{enumerate} \item $\displaystyle V_{\{0\}}=X$, \item $\displaystyle \mathrm{c}(TV_\gamma)=\prod_{\dim(\tilde\tau)=1}(1+[V_{\tilde\tau}])$, \item $\displaystyle \mathrm{c}_1(\omega_\gamma)=-\sum_{\dim(\tilde\tau)=1}[V_{\tilde\tau}]$, \item $\displaystyle \iota_{\gamma*}[V_{\tilde\eta}]=[V_\eta]$, \end{enumerate} where the product and sum run over the rays $\tilde\tau$ of $\mathrm{st}(\gamma)$. \end{toric_classes} The projective space $\mathbb{P}^m$ has an especially simple toric description. Note that the usual torus action on $\mathbb{P}^m$, induced by the action on $\mathbb{C}^{m+1}$, is not effective. However, we resolve this by quotienting by the kernel of the action. Equivalently, we take $T$ to be the first $m$ factors of the usual $(m+1)$-dimensional torus, so that $T$ acts trivially on the $(m+1)$th homogeneous coordinate. Under standard conventions for the moment map, the moment polytope $\Delta$ is the convex hull of $0,(1/2)e_1,...,(1/2)e_m$, where the $e_i$ are the standard basis vectors of $\mathbb{R}^m$. Defining $e_{m+1}=-e_1-\cdots-e_m$, we have \[\mathcal{N}(\Delta)=\{\mathbb{R}_{\geq 0}\cdot S\ |\ S\subsetneq\{e_1,...,e_{m+1}\}\}\cup\{\{0\}\}.\] Let $\sigma_i=\mathbb{R}_{\geq 0}\cdot\{e_1,...,\hat{e_i},...,e_{m+1}\}$ for $1\leq i\leq m+1$. These are the maximal cones, which correspond to the $m+1$ fixed points of $\mathbb{P}^m$. Let $\tau_i=\mathbb{R}_{\geq 0}\cdot e_i$ for $1\leq i\leq m+1$. These are the rays, which correspond to toric divisors. It is easy to see that the classes $[V_{\tau_i}]$ are all equal modulo the relations of $\mathcal{J}$. Therefore, we denote the common value of $[V_{\tau_i}]$ for $1\leq i\leq m+1$ by $H$. In fact, $H\in\mathrm{H}^2(\mathbb{P}^m)$ is the hyperplane class. 
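For concreteness (a routine specialization, recorded here for convenience): the only collection of rays of $\mathcal{N}(\Delta)$ that does not span a cone is $\{\tau_1,...,\tau_{m+1}\}$, and taking $u=e_j^*$ in the definition of $\mathcal{J}$ produces the relation $X_j-X_{m+1}$, since $e_j^*(e_{m+1})=-1$. Hence
\[SR=\mathbb{Z}[X_1,...,X_{m+1}]/(X_1\cdots X_{m+1}),\qquad \mathcal{J}=\left(X_j-X_{m+1}\ \middle|\ 1\leq j\leq m\right),\]
and therefore $SR/\mathcal{J}\cong\mathbb{Z}[H]/(H^{m+1})$, recovering the usual presentation of $\mathrm{H}^*(\mathbb{P}^m)$ with $H$ the common image of the $X_i$.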
Define $\gamma_S=\mathbb{R}_{\geq 0}\cdot \{e_i\ |\ i\in S\}\in\mathcal{N}(\Delta)$, for $S\subsetneq[1,m+1]$. For $1\leq i\leq m+1$, define $\Omega_i\subset\mathbb{P}^m$ inductively as follows: \begin{align*} \Omega_1&=\mathcal{O}_{\gamma_{[1,m]}},\\ \Omega_i&=\bigcup_{S\supset[1,m-i+1]}\mathcal{O}_{\gamma_S}\setminus\Omega_{i-1}\text{, when }i>1.\\ \end{align*} Then, $\Omega_i\cong\mathbb{C}^{i-1}$, and the $\Omega_i$ decompose $\mathbb{P}^m$ into Schubert cells. The number of $d$-dimensional orbits in the stratification of $\Omega_i$ is $\binom{i-1}{d}$. \subsection{Cohomological Structure Constants of $\mathbb{P}^m$} The classes $\mathrm{c^{sm}}(\Omega_i)$ for $1\leq i\leq m+1$ form a basis for $\mathrm{H}^*(\mathbb{P}^m)$. Let $c^k_{i,j}\in\mathbb{Z}(h)$ be the unique rational functions satisfying $\forall (1\leq i,j\leq m+1)\ \ \mathrm{c^{sm}}(\Omega_i)\mathrm{c^{sm}}(\Omega_j)=\sum_{k=1}^{m+1}c^k_{i,j}\mathrm{c^{sm}}(\Omega_k)$. Given $\gamma\in\mathcal{N}(\Delta)$, our description of the cohomology of $\mathbb{P}^m$ gives the formula \[\mathrm{c^{sm}}(\mathcal{O}_\gamma)=h^{m-\dim(\gamma)}[V_\gamma]=h^{m-\dim(\gamma)}\cdot\prod_{\tau\text{ an extremal ray of }\gamma}[V_\tau]=h^{m-\dim(\gamma)}H^{\dim(\gamma)}.\] \begin{csm_proj}\label{csm_proj} For all $1\leq i\leq m+1$,\ \ $\mathrm{c^{sm}}(\Omega_i)=H^{m-i+1}(h+H)^{i-1}$. \end{csm_proj} \begin{proof} The additivity of $\mathrm{c^{sm}}$ classes implies \begin{align*} \mathrm{c^{sm}}(\Omega_i)&=\sum_{\substack{\gamma\in\mathcal{N}(\Delta)\\ \mathcal{O}_\gamma\subset\Omega_i}}\mathrm{c^{sm}}(\mathcal{O}_\gamma)\\ &=\sum_{\substack{\gamma\in\mathcal{N}(\Delta)\\ \mathcal{O}_\gamma\subset\Omega_i}}h^{m-\dim(\gamma)}H^{\dim(\gamma)}. \end{align*} Because $\Omega_i$ contains $\binom{i-1}{d}$ many orbits of dimension $d$, the sum is over $\binom{i-1}{d}$ cones of dimension $m-d$. All cones in the sum have dimension at least $m-i+1$. 
We have \begin{align*} \mathrm{c^{sm}}(\Omega_i)&=\sum_{d=m-i+1}^{m}\binom{i-1}{m-d}h^{m-d}H^d\\ &=H^{m-i+1}\sum_{d=0}^{i-1}\binom{i-1}{d}h^{i-1-d}H^d\\ &=H^{m-i+1}(h+H)^{i-1}. \end{align*} \end{proof} \begin{HLR_proj}\label{HLR_proj} For all $1\leq i,j,k\leq m+1$, \begin{enumerate} \item $c^k_{i,j}=0$ unless $k\leq i,j$, \item $c^k_{i,j}=c^k_{i-1,j+1}$ when $i>1$ and $j<m+1$, \item $c^k_{i,j}=c^{k+1}_{i,j+1}$ when $j,k<m+1$, \item $c^k_{m+1,m+1}=\binom{2m-k}{m-1}h^m$. \end{enumerate} These equalities completely determine all cohomological structure constants of $\mathbb{P}^m$. In particular, \[c^k_{i,j}=\binom{i+j-k-2}{m-1}h^m.\] \end{HLR_proj} \begin{proof} \begin{enumerate} \item This follows from degree considerations. \item Using Proposition~\ref{csm_proj}, we have \begin{align*} \mathrm{c^{sm}}(\Omega_i)\mathrm{c^{sm}}(\Omega_j)&=H^{m-i+1}(h+H)^{i-1}\cdot H^{m-j+1}(h+H)^{j-1}\\ &=H^{2m-i-j+2}(h+H)^{i+j-2}\\ &=H^{m-(i-1)+1+m-(j+1)+1}(h+H)^{(i-1)-1+(j+1)-1}\\ &=H^{m-(i-1)+1}(h+H)^{(i-1)-1}\cdot H^{m-(j+1)+1}(h+H)^{(j+1)-1}\\ &=\mathrm{c^{sm}}(\Omega_{i-1})\mathrm{c^{sm}}(\Omega_{j+1}). \end{align*} \item We have $\mathrm{c^{sm}}(\Omega_{i})\mathrm{c^{sm}}(\Omega_{j})=H^{2m-i-j+2}(h+H)^{i+j-2}$, and $\mathrm{c^{sm}}(\Omega_{i})\mathrm{c^{sm}}(\Omega_{j+1})=H^{2m-i-j+1}(h+H)^{i+j-1}$ from Proposition~\ref{csm_proj}. By part (1), the $c^k_{i,j}$ satisfy \begin{align*} H^{2m-i-j+2}(h+H)^{i+j-2}&=\sum_{k=1}^{j}c^k_{i,j}\mathrm{c^{sm}}(\Omega_k)\\ &=\sum_{k=1}^{j}c^k_{i,j}H^{m-k+1}(h+H)^{k-1}. \end{align*} Multiplying both sides by $(h+H)$ yields \[H\cdot H^{2m-i-j+1}(h+H)^{i+j-1}=\sum_{k=1}^{j}c^k_{i,j}H\cdot H^{m-k}(h+H)^{k}.\] Applying Proposition~\ref{csm_proj}, we get \[H\cdot\mathrm{c^{sm}}(\Omega_{i})\mathrm{c^{sm}}(\Omega_{j+1})=H\cdot\sum_{k=1}^{j}c^k_{i,j}\mathrm{c^{sm}}(\Omega_{k+1}).\] The factor of $H$ kills the top degree part, but we have equality in all other degrees. 
Because the basis of $\mathrm{c^{sm}}$ classes is triangular with respect to cohomological grading, the top degree part is controlled by the $c^1_{i,j+1}$ structure constant. \item We will calculate $\sum_{k=1}^{m+1}\binom{2m-k}{m-1}h^m\mathrm{c^{sm}}(\Omega_k)$, and show that it is equal to $\mathrm{c^{sm}}(\Omega_{m+1})\mathrm{c^{sm}}(\Omega_{m+1})$. We have \begin{align*} \sum_{k=1}^{m+1}\binom{2m-k}{m-1}h^m\mathrm{c^{sm}}(\Omega_k)&=\sum_{k=1}^{m+1}\binom{2m-k}{m-1}h^m\cdot H^{m-k+1}(h+H)^{k-1}\\ &=\sum_{k=1}^{m+1}\binom{2m-k}{m-1}h^mH^{m-k+1}\sum_{l=0}^{k-1}h^lH^{k-l-1}\binom{k-1}{l}\\ &=\sum_{k=1}^{m+1}\sum_{l=0}^{k-1}h^{m+l}H^{m-l}\binom{2m-k}{m-1}\binom{k-1}{l}\\ &=\sum_{l=0}^{m}h^{m+l}H^{m-l}\sum_{k=l+1}^{m+1}\binom{2m-k}{m-1}\binom{k-1}{l}\\ &=\sum_{l=0}^{m}h^{m+l}H^{m-l}\sum_{k=0}^{m-l}\binom{2m-(k+l)-1}{m-1}\binom{k+l}{l}. \end{align*} On the other hand, because $H^l=0$ when $l>m$, \begin{align*} \mathrm{c^{sm}}(\Omega_{m+1})\mathrm{c^{sm}}(\Omega_{m+1})&=(h+H)^{2m}\\ &=\sum_{l=0}^{2m}h^{2m-l}H^{l}\binom{2m}{l}\\ &=\sum_{l=0}^{m}h^{2m-l}H^{l}\binom{2m}{l}\\ &=\sum_{l=0}^{m}h^{m+l}H^{m-l}\binom{2m}{m-l}\\ &=\sum_{l=0}^{m}h^{m+l}H^{m-l}\binom{2m}{m+l}. \end{align*} By comparing the powers of $h$, it suffices to show $\binom{2m}{m+l}=\sum_{k=0}^{m-l}\binom{2m-(k+l)-1}{m-1}\binom{k+l}{l}$ for all $0\leq l\leq m$. This is precisely the Vandermonde identity. \end{enumerate} \end{proof} \subsection{K-Theoretic Structure Constants of $\mathbb{P}^m$} As before, the classes $\mathrm{mC}(\Omega_i)$ for $1\leq i\leq m+1$ form a basis for $K^0(\mathbb{P}^m)$. Let $C^k_{i,j}\in\mathbb{Z}(h)$ be the unique rational functions satisfying $\forall (1\leq i,j\leq m+1)\ \ \mathrm{mC}(\Omega_i)\mathrm{mC}(\Omega_j)=\sum_{k=1}^{m+1}C^k_{i,j}\mathrm{mC}(\Omega_k)$. We will not describe the K-theory ring of $\mathbb{P}^m$. Instead, we use the fact that any K-theory class is uniquely determined by its Chern character. 
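As an aside, the closed formula of Theorem~\ref{HLR_proj} can be sanity-checked numerically (an illustrative brute-force sketch, not part of the paper; $h$ is set to $1$, which loses nothing since all identities involved are homogeneous in $(h,H)$), computing in $\mathbb{Z}[H]/(H^{m+1})$:

```python
from math import comb

def csm(i, m):
    """Coefficient list (in powers of H, mod H^(m+1)) of
    csm(Omega_i) = H^(m-i+1) (h+H)^(i-1), evaluated at h = 1."""
    out = [0] * (m + 1)
    for l in range(i):                 # (1+H)^(i-1) = sum_l C(i-1,l) H^l
        out[m - i + 1 + l] += comb(i - 1, l)
    return out

def mul(a, b, m):
    """Product in Z[H]/(H^(m+1))."""
    out = [0] * (m + 1)
    for p, x in enumerate(a):
        for q, y in enumerate(b):
            if p + q <= m:
                out[p + q] += x * y
    return out

checked = 0
for m in range(1, 6):
    for i in range(1, m + 2):
        for j in range(1, m + 2):
            lhs = mul(csm(i, m), csm(j, m), m)
            rhs = [0] * (m + 1)
            for k in range(1, m + 2):
                n = i + j - k - 2
                c = comb(n, m - 1) if n >= 0 else 0  # C(n,m-1)=0 for n<m-1
                for p, x in enumerate(csm(k, m)):
                    rhs[p] += c * x
            assert lhs == rhs, (m, i, j)
            checked += 1
assert checked == 90  # sum over m=1..5 of (m+1)^2 pairs (i,j)
```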
Our main tool will be the following corollary of the Grothendieck-Riemann-Roch theorem. \begin{GRR}\label{GRR} Let $X$ be a smooth projective variety and $V\subset X$ a smooth closed subvariety. Denote the inclusion by $\iota:V\to X$. Let $\xi\to V$ be a vector bundle. Then, \[\mathrm{ch}(\iota_*\xi)=\iota_*(\mathrm{ch}(\xi)\mathrm{td}(V))/\mathrm{td}(X).\] \end{GRR} Fix $\gamma\in\mathcal{N}(\Delta)$. Given $\eta\in\mathcal{N}(\Delta)$ with $\eta\supset\gamma$, let $\tilde\eta=\eta+\mathbb{R}\cdot\gamma\in\mathrm{st}(\gamma)$. We begin by computing the Chern character of pushforwards of the canonical bundle $\omega_\gamma\to V_\gamma$. By the description of $\mathrm{st}(\gamma)$, $V_\gamma\cong\mathbb{P}^{m-\dim(\gamma)}$, so we have $\forall\eta\in\mathcal{N}(\Delta)\ \ [V_{\tilde\eta}]=H_\gamma^{\dim(\tilde\eta)}\in \mathrm{H}^*(V_\gamma)$, where $H_\gamma$ is the hyperplane class of $V_\gamma$. This along with parts (2) and (3) of Lemma~\ref{toric_classes} yields the formulas \[\mathrm{ch}(\omega_\gamma)=\mathrm{e}^{-(m-\dim(\gamma)+1)H_\gamma},\quad\mathrm{td}(V_\gamma)=\left(\frac{H_\gamma}{1-\mathrm{e}^{-H_\gamma}}\right)^{m-\dim(\gamma)+1}.\] Using Lemma~\ref{GRR} and part (4) of Lemma~\ref{toric_classes}, we have \begin{align*} \mathrm{ch}(\iota_{\gamma*}\omega_\gamma)&=\iota_{\gamma*}(\mathrm{ch}(\omega_\gamma)\mathrm{td}(V_\gamma))/\mathrm{td}(\mathbb{P}^m)\\ &=H^{\dim(\gamma)}\cdot\frac{\mathrm{e}^{-(m-\dim(\gamma)+1)H}H^{m-{\dim(\gamma)}+1}}{(1-\mathrm{e}^{-H})^{m-\dim(\gamma)+1}}\Big/\frac{H^{m+1}}{(1-\mathrm{e}^{-H})^{m+1}}\\ &=\mathrm{e}^{-(m-\dim(\gamma)+1)H}(1-\mathrm{e}^{-H})^{\dim(\gamma)}. \end{align*} Combining this formula with the orbit structure of $\Omega_i$, the additivity property of motivic Chern classes, and the naturality of the Chern character results in the following formula. \begin{mC_proj}\label{mC_proj} For all $1\leq i\leq m+1$,\ \ $\mathrm{ch}(\mathrm{mC}(\Omega_i))=\mathrm{e}^{-H}(1-\mathrm{e}^{-H})^{m-i+1}(1+h\mathrm{e}^{-H})^{i-1}$. \end{mC_proj} The proof is similar to that of Proposition~\ref{csm_proj}. 
Notice the similarities between the formulas of Proposition~\ref{mC_proj} and Proposition~\ref{csm_proj}. \begin{KLR_proj}\label{KLR_proj} Let $\widetilde{C}^k_{i,j}=(-1)^{m+i+j+k+1}(1+h)^{-m}C^k_{i,j}$. For all $1\leq i,j,k\leq m+1$, \begin{enumerate} \item $C^k_{i,j}=0$ unless $k\leq i,j$, \item $C^k_{i,j}=C^k_{i-1,j+1}$ when $i>1$ and $j<m+1$, \item $C^k_{i,j}=C^{k+1}_{i,j+1}$ when $j,k<m+1$, \item $\widetilde{C}^k_{m+1,m+1}=\binom{2m-k}{m}h^{m-k}+\binom{2m-k+1}{m}h^{m-k+1}$. \end{enumerate} These equalities completely determine all K-theoretic structure constants of $\mathbb{P}^m$. In particular, \[\widetilde C^k_{i,j}=\binom{i+j-k-2}{m}h^{i+j-k-m-2}+\binom{i+j-k-1}{m}h^{i+j-k-m-1}.\] \end{KLR_proj} \begin{proof} The first three parts can be proven the same way the first three parts of Theorem~\ref{HLR_proj} were. It remains to prove part 4. Observe that $\rho:=1-\mathrm{e}^{-H}=H-\tfrac{1}{2}H^2+\mathrm{h.o.t.}$ It follows that for $0\leq l\leq m$, the elements $\rho^l$ are triangular with respect to the grading on $\mathrm{H}^*(\mathbb{P}^m)$. Thus, these elements form a basis for the cohomology ring. We can express Proposition~\ref{mC_proj} in terms of $\rho$ as \[\mathrm{ch}(\mathrm{mC}(\Omega_i))=(1-\rho)\rho^{m-i+1}((1+h)-h\rho)^{i-1}\quad (*).\] We will expand $(1+h)^{-m}\mathrm{ch}(\mathrm{mC}(\Omega_{m+1}))^2$ and $\sum_{k=1}^{m+1}(-1)^{m+k+1}\left(\binom{2m-k}{m}h^{m-k}+\binom{2m-k+1}{m}h^{m-k+1}\right)\mathrm{ch}(\mathrm{mC}(\Omega_k))$ in powers of $\rho$, using $(*)$, and compare coefficients. We have \begin{align*} (1+h)^{-m}\mathrm{ch}(\mathrm{mC}(\Omega_{m+1}))^2&=(1+h)^{-m}(1-\rho)^2((1+h)-h\rho)^{2m}\\ &=\sum_{l=0}^m\rho^l(-1)^l\left(\binom{2m}{l}h^{l}(1+h)^{m-l}+2\binom{2m}{l-1}h^{l-1}(1+h)^{m-l+1}\right.\\ &\quad\qquad\qquad\qquad\left.+\binom{2m}{l-2}h^{l-2}(1+h)^{m-l+2}\right).
\end{align*} On the other hand, \begin{align*} \sum_{k=1}^{m+1}(-1&)^{m+k+1}\left(\binom{2m-k}{m}h^{m-k}+\binom{2m-k+1}{m}h^{m-k+1}\right)\mathrm{ch}(\mathrm{mC}(\Omega_k))\\ &=\sum_{k=1}^{m+1}(-1)^{m+k+1}\left(\binom{2m-k}{m}h^{m-k}+\binom{2m-k+1}{m}h^{m-k+1}\right)\\ &\hspace{30pt}\cdot(1-\rho)\rho^{m-k+1}((1+h)-h\rho)^{k-1}\\ &=\sum_{l=0}^m\rho^l\sum_{k=m-l+1}^{m+1}(-1)^{m+k+1}\left(\binom{2m-k}{m}h^{m-k}+\binom{2m-k+1}{m}h^{m-k+1}\right)\\ &\hspace{70pt}\cdot\left(\binom{k-1}{l+k-m-1}(-h)^{l+k-m-1}(1+h)^{m-l}\right.\\ &\hspace{96pt}\left.-\binom{k-1}{l+k-m-2}(-h)^{l+k-m-2}(1+h)^{m-l+1}\right)\\ &=\sum_{l=0}^m\rho^l(-1)^l\sum_{k=m-l+1}^{m+1}\left(\binom{2m-k}{m}\binom{k-1}{m-l}h^{l-1}(1+h)^{m-l}\right.\\ &\hspace{96pt}+\binom{2m-k}{m}\binom{k-1}{m-l+1}h^{l-2}(1+h)^{m-l+1}\\ &\hspace{96pt}+\binom{2m-k+1}{m}\binom{k-1}{m-l}h^l(1+h)^{m-l}\\ &\left.\hspace{96pt}+\binom{2m-k+1}{m}\binom{k-1}{m-l+1}h^{l-1}(1+h)^{m-l+1}\right). \end{align*} Consider the final sum. Apply the Vandermonde identity to each of the four lines, and then apply Pascal's identity only to the last two lines. This yields \begin{align*} \sum_{k=m-l+1}^{m+1}\binom{2m-k}{m}\binom{k-1}{m-l}h^{l-1}(1+h)^{m-l}&=\binom{2m}{l-1}h^{l-1}(1+h)^{m-l}, \end{align*} \begin{align*} \sum_{k=m-l+1}^{m+1}\binom{2m-k}{m}\binom{k-1}{m-l+1}h^{l-2}(1+h)^{m-l+1}&=\binom{2m}{l-2}h^{l-2}(1+h)^{m-l+1}, \end{align*} \begin{align*} \sum_{k=m-l+1}^{m+1}\binom{2m-k+1}{m}\binom{k-1}{m-l}h^l(1+h)^{m-l}&=\binom{2m+1}{l}h^l(1+h)^{m-l}\\ &=\left(\binom{2m}{l}+\binom{2m}{l-1}\right)h^l(1+h)^{m-l}, \end{align*} \begin{align*} \sum_{k=m-l+1}^{m+1}\binom{2m-k+1}{m}\binom{k-1}{m-l+1}h^{l-1}(1+h)^{m-l+1}&=\binom{2m+1}{l-1}h^{l-1}(1+h)^{m-l+1}\\ &=\left(\binom{2m}{l-1}+\binom{2m}{l-2}\right)h^{l-1}(1+h)^{m-l+1}. \end{align*} Adding these four expressions yields \[\binom{2m}{l}h^{l}(1+h)^{m-l}+2\binom{2m}{l-1}h^{l-1}(1+h)^{m-l+1}+\binom{2m}{l-2}h^{l-2}(1+h)^{m-l+2},\] as required.
\end{proof} The following corollary is a consequence of Pascal's identity, Theorem~\ref{HLR_proj}, and Theorem~\ref{KLR_proj}. \begin{KtoH_proj}\label{KtoH_proj} Expand $C^k_{i,j}$ as a polynomial in $-1-h$. Then, the coefficient of the lowest degree term is $c^k_{i,j}/h^m$. \end{KtoH_proj} \section{Toric Calculus} In this section, we will formulate the problem of finding structure constants for arbitrary toric manifolds $X$ with moment polytope $\Delta$. We will then use the machinery developed in Section 2 in the context of projective spaces to calculate the structure constants of other toric manifolds. The first necessary ingredient is a CW decomposition of $X$. \subsection{Shellings and CW Decompositions} Let $x_1,...,x_r\in\mathrm{Lie}(U)^*\cong\mathbb{R}^n$ be the vertices of $\Delta$. Fix a vector $u\in\mathrm{Lie}(U)\cong\mathbb{R}^n$ such that the numbers $\langle u,x_i\rangle$ for $1\leq i\leq r$ are distinct. Relabel the $x_i$ so that $\langle u,x_1\rangle<\langle u,x_2\rangle<\cdots<\langle u,x_r\rangle$. Each $x_i$ corresponds to some maximal cone $\sigma_i\in\mathcal{N}(\Delta)$. For $1\leq i\leq r$, define cones $s_i\in\mathcal{N}(\Delta)$ by \[s_i=\begin{cases}\bigcap\limits_{\substack{j>i\\ \dim(\sigma_i\cap\sigma_j)=n-1}}\sigma_i\cap\sigma_j&\text{if }1\leq i<r\\ \sigma_r&\text{if }i=r\end{cases}.\] The collection of cones $S_u=\{s_i\ |\ 1\leq i\leq r\}$ is called a \emph{shelling} of the fan. Inductively define sets $\Omega_i\subset X$ for $1\leq i\leq r$ by \begin{align*} \Omega_1&=\mathcal{O}_{s_r},\\ \Omega_i&=\bigg(\bigcup_{s_{r-i+1}\subset\gamma\in\mathcal{N}(\Delta)}\mathcal{O}_\gamma\bigg)\setminus\bigcup_{j<i}\Omega_{j}\text{ when }1<i\leq r. \end{align*} The collection of $\Omega_i$'s forms a CW decomposition of $X$, and $\dim\Omega_i\leq\dim\Omega_j$ whenever $i\leq j$. Hence, to a generic vector $u\in\mathbb{R}^n$, we can associate a CW decomposition of $X$. This association is locally constant in $u$.
Indeed, the normal hyperplanes to the vectors $x_i-x_j$ for $1\leq i<j\leq r$ divide $\mathbb{R}^n$ into chambers, and the shelling depends only on which chamber $u$ belongs to. Equipping $X$ with a CW decomposition allows us to define various bases for $\mathrm{H}^*(X)$ and $\mathrm{K}^0(X)$. \subsection{Bases of Characteristic Classes} It is easy to see from the orbit structure of $X$ that $\forall 1\leq i\leq r\ \ \overline{\Omega}_i=V_{s_{r-i+1}}$. Thus, the cohomological and K-theoretic fundamental classes $[V_{s_i}]$ form a basis for cohomology and K-theory respectively. It follows from the triangularity of $\mathrm{c^{sm}}(\Omega_i)$ and $\mathrm{ch}(\mathrm{mC}(\Omega_i))$ with respect to cohomological grading that the $h$-deformed classes also form a basis. Hence, we may ask for the structure constants associated to any one of these bases. The structure constants associated with the basis of cohomological fundamental classes are well understood in the context of intersection theory. See \cite{Fu2} for general results in this direction. When $\mathcal{N}(\Delta)$ is generated by the Weyl chambers of a root system, a combinatorial rule for the nonequivariant structure constants associated with the basis of cohomological fundamental classes is given in \cite{Abe}. Simple examples of cohomological $h$-deformed structure constants are computed in the next two subsections. \subsection{Example: The Hirzebruch Surface} The Hirzebruch surface $\mathcal{H}$ is the blowup of $\mathbb{P}^2$ at a $T$-fixed point. Its moment polytope $\Delta$ is the trapezoid with vertices $(0,0),(1/2,0),(0,1),(1/2,1/2)$. Let $v_1=e_1,v_2=e_2,v_3=-e_1,v_4=-e_1-e_2\in\mathbb{R}^2$. Let $\tau_i=\mathbb{R}_{\geq 0}\cdot v_i$ for $i=1,2,3,4$. Recall that given $A\subset[1,4]$, we define $\gamma_A=\mathbb{R}_{\geq 0}\cdot\{v_i\ | \ i\in A\}$.
Then, \[\mathcal{N}(\Delta)=\{\{0\},\tau_1,\tau_2,\tau_3,\tau_4,\sigma_1:=\gamma_{\{1,4\}},\sigma_2:=\gamma_{\{1,2\}},\sigma_3:=\gamma_{\{3,4\}},\sigma_4:=\gamma_{\{2,3\}}\}.\] The novel feature of $\mathcal{H}$ is that not all of its toric divisors are equivalent. In fact, while $[V_{\tau_2}]=[V_{\tau_4}]\ \ (*)$, we have the relation $[V_{\tau_1}]=[V_{\tau_3}]+[V_{\tau_4}]\ \ (**)$. Let $u=2e_1-e_2$. Then, the corresponding shelling $S_u$ consists of the four cones $s_1=\{0\},s_2=\tau_2,s_3=\tau_3,s_4=\sigma_4$. This gives the following CW decomposition. \[\Omega_1=\mathcal{O}_{\sigma_4},\ \ \Omega_2=\mathcal{O}_{\tau_3}\cup\mathcal{O}_{\sigma_3},\ \ \Omega_3=\mathcal{O}_{\tau_2}\cup\mathcal{O}_{\sigma_2},\ \ \Omega_4=\mathcal{O}_{\{0\}}\cup\mathcal{O}_{\tau_1}\cup\mathcal{O}_{\tau_4}\cup\mathcal{O}_{\sigma_1}\] By the additivity of $\mathrm{c^{sm}}$ classes and the formula from Section 2, \begin{align*} \mathrm{c^{sm}}(\Omega_1)&=[V_{\tau_2}][V_{\tau_3}],\\ \mathrm{c^{sm}}(\Omega_2)&=[V_{\tau_3}][V_{\tau_4}]+h[V_{\tau_3}],\\ \mathrm{c^{sm}}(\Omega_3)&=[V_{\tau_1}][V_{\tau_2}]+h[V_{\tau_2}],\\ \mathrm{c^{sm}}(\Omega_4)&=[V_{\tau_1}][V_{\tau_4}]+h[V_{\tau_1}]+h[V_{\tau_4}]+h^2. \end{align*} The structure constants can be computed using relations $(*)$ and $(**)$. They are listed in Table 1. \begin{table}[h!] 
\caption{Cohomological structure constants of the Hirzebruch surface.} \begin{equation*} \begin{array}{ll} \mathrm{c^{sm}}(\Omega_1)\mathrm{c^{sm}}(\Omega_1)=0&\mathrm{c^{sm}}(\Omega_2)\mathrm{c^{sm}}(\Omega_2)=-h^2\mathrm{c^{sm}}(\Omega_1)\\ \mathrm{c^{sm}}(\Omega_1)\mathrm{c^{sm}}(\Omega_2)=0&\mathrm{c^{sm}}(\Omega_2)\mathrm{c^{sm}}(\Omega_3)=h^2\mathrm{c^{sm}}(\Omega_1)\\ \mathrm{c^{sm}}(\Omega_1)\mathrm{c^{sm}}(\Omega_3)=0&\mathrm{c^{sm}}(\Omega_2)\mathrm{c^{sm}}(\Omega_4)=h^2(\mathrm{c^{sm}}(\Omega_2)+\mathrm{c^{sm}}(\Omega_1))\\ \mathrm{c^{sm}}(\Omega_1)\mathrm{c^{sm}}(\Omega_4)=h^2\mathrm{c^{sm}}(\Omega_1) \end{array} \end{equation*} \begin{equation*} \begin{array}{l} \mathrm{c^{sm}}(\Omega_3)\mathrm{c^{sm}}(\Omega_3)=0\\ \mathrm{c^{sm}}(\Omega_3)\mathrm{c^{sm}}(\Omega_4)=h^2(\mathrm{c^{sm}}(\Omega_3)+\mathrm{c^{sm}}(\Omega_1))\\ \mathrm{c^{sm}}(\Omega_4)\mathrm{c^{sm}}(\Omega_4)=h^2(\mathrm{c^{sm}}(\Omega_4)+2\mathrm{c^{sm}}(\Omega_3)+\mathrm{c^{sm}}(\Omega_2)+\mathrm{c^{sm}}(\Omega_1)) \end{array} \end{equation*} \end{table} Notice that the structure constants are integer multiples of $h^2=h^{\dim(\mathcal{H})}$, one of which is negative. Moreover, the collection of structure constants appears to be independent of the choice of shelling. \subsection{Example: The $A_2$ Permutohedral Variety} Following \cite{Abe}, we consider the toric variety $X$ whose normal fan is given by Weyl chambers of the $A_2$ root system. The usual conventions for toric manifolds and $A_2$ are incompatible, as one of the fundamental coweights will be a vector with irrational slope. This can be remedied by defining the toric manifold with respect to the coweight lattice rather than the standard lattice. Let $E=\{(x,y,z)\in\mathbb{R}^3\ |\ x+y+z=0\}$. Define the root system $A_2=\{e_i-e_j\ |\ 1\leq i,j\leq 3,\ i\neq j\}$ on $E$. Let $\alpha_1=e_1-e_2,\alpha_2=e_2-e_3$. Then, $\Pi=\{\alpha_1,\alpha_2\}$ is a set of simple roots. The Weyl group $W=S_3$ acts on $A_2$ by permuting indices.
That is, $\forall w\in W\ \ w(e_i-e_j)=e_{w(i)}-e_{w(j)}$. The fundamental coweights are $\omega_1=\tfrac{2}{3}e_1-\tfrac{1}{3}e_2-\tfrac{1}{3}e_3,\ \omega_2=\tfrac{1}{3}e_1+\tfrac{1}{3}e_2-\tfrac{2}{3}e_3$, where we have identified $E$ with $E^*$ via the standard inner product on $\mathbb{R}^3$ restricted to $E$. Define a linear isomorphism $p:E\to\mathbb{R}^2$ by $p(\omega_1)=e_1,p(\omega_2)=e_2$. Abusing notation, we will denote the standard basis vectors of $\mathbb{R}^2$ by $\omega_1,\omega_2$. Taking the image of Weyl chambers and their proper faces under $p$ yields a fan of cones in $\mathbb{R}^2$ that is smooth (corresponding to a Delzant polytope). We can index the Weyl chambers by elements $w\in W$ by \[C_w=\mathbb{R}_{\geq 0}\cdot\{w(\omega_1),w(\omega_2)\}.\] Let $\sigma_w=p(C_w)$. The maximal cones of the fan are \begin{equation*} \begin{array}{lcllcl} \sigma_{\mathrm{id}}&=&\mathbb{R}_{\geq 0}\cdot\{\omega_1,\omega_2\}&\sigma_{(1,2)}&=&\mathbb{R}_{\geq 0}\cdot\{\omega_2-\omega_1,\omega_2\}\\ \sigma_{(1,3)}&=&\mathbb{R}_{\geq 0}\cdot\{-\omega_1,-\omega_2\}&\sigma_{(2,3)}&=&\mathbb{R}_{\geq 0}\cdot\{\omega_1,\omega_1-\omega_2\}\\ \sigma_{(1,2,3)}&=&\mathbb{R}_{\geq 0}\cdot\{-\omega_1,\omega_2-\omega_1\}&\sigma_{(1,3,2)}&=&\mathbb{R}_{\geq 0}\cdot\{-\omega_2,\omega_1-\omega_2\} \end{array}. \end{equation*} The moment polytope $\Delta$ is a hexagon with vertices $(0,0),(3/4,0),(1,1/4),(1,1),(1/4,1),(0,3/4)$. This is obtained from a square by cutting off the upper left and bottom right corners. In particular, $X$ is the blowup of $\mathbb{P}^1\times\mathbb{P}^1$ at two fixed points. The fan $\mathcal{N}(\Delta)$ contains the $\sigma_w$ for all $w\in W$ along with their proper faces. For $w\in W$ and $i=1,2$, let $\tau_{w,i}=\mathbb{R}_{\geq 0}\cdot p(w(\omega_i))$ be the corresponding rays. Altogether, \[\mathcal{N}(\Delta)=\{\{0\}\}\cup\{\sigma_w\ |\ w\in W\}\cup\{\tau_{w,i}\ |\ w\in W,i=1,2\}.\] Fix $u=-\omega_1-3\omega_2$.
This gives an ordering of the vertices of $\Delta$ that induces the ordering \[\mathrm{id}<(1,2)<(1,2,3)<(2,3)<(1,3,2)<(1,3)\] on $W$. The shelling $S_u$ consists of the following six cones, which we also index by $W$: \begin{align*} &s_{\mathrm{id}}:=s_1=\{0\}\ ,\ s_{(1,2)}:=s_2=\tau_{(1,2),1}\ ,\ s_{(1,2,3)}:=s_3=\tau_{(1,2,3),2}\ ,\\ &s_{(2,3)}:=s_4=\tau_{(2,3),2}\ ,\ s_{(1,3,2)}:=s_5=\tau_{(1,3,2),1}\ ,\ s_{(1,3)}:=s_6=\sigma_{(1,3)} \end{align*} We have chosen this particular shelling because it conforms with the descents of the Weyl group elements. Our choice is consistent with Section 4 of \cite{Abe}. The corresponding cells can also be indexed by elements of $W$. The cells are \begin{equation*} \begin{array}{lcllcl} \Omega_{\mathrm{id}}&=&\mathcal{O}_{\{0\}}\cup\mathcal{O}_{\tau_{\mathrm{id},1}}\cup\mathcal{O}_{\tau_{\mathrm{id},2}}\cup\mathcal{O}_{\sigma_{\mathrm{id}}}&\Omega_{(1,2)}&=&\mathcal{O}_{\tau_{(1,2),1}}\cup\mathcal{O}_{\sigma_{(1,2)}}\\ \Omega_{(1,3)}&=&\mathcal{O}_{\sigma_{(1,3)}}&\Omega_{(2,3)}&=&\mathcal{O}_{\tau_{(2,3),2}}\cup\mathcal{O}_{\sigma_{(2,3)}}\\ \Omega_{(1,2,3)}&=&\mathcal{O}_{\tau_{(1,2,3),2}}\cup\mathcal{O}_{\sigma_{(1,2,3)}}&\Omega_{(1,3,2)}&=&\mathcal{O}_{\tau_{(1,3,2),1}}\cup\mathcal{O}_{\sigma_{(1,3,2)}} \end{array}. \end{equation*} The procedure for calculating the $\mathrm{c^{sm}}$ classes of the cells in the associated CW decomposition is the same as that of the previous example. A few representative structure constants are listed in Table 2. \begin{table}[h!]
\caption{Cohomological structure constants of the $A_2$ toric surface.} \begin{align*} \mathrm{c^{sm}}(\Omega_{(1,3,2)})\mathrm{c^{sm}}(\Omega_{(1,3,2)})&=-h^2\mathrm{c^{sm}}(\Omega_{(1,3)})\\ \mathrm{c^{sm}}(\Omega_{(1,3,2)})\mathrm{c^{sm}}(\Omega_{(2,3)})&=h^2\mathrm{c^{sm}}(\Omega_{(1,3)})\\ \mathrm{c^{sm}}(\Omega_{\mathrm{id}})\mathrm{c^{sm}}(\Omega_{(1,2)})&=h^2(\mathrm{c^{sm}}(\Omega_{(1,2)})+\mathrm{c^{sm}}(\Omega_{(1,3)}))\\ \mathrm{c^{sm}}(\Omega_{\mathrm{id}})\mathrm{c^{sm}}(\Omega_{(1,2,3)})&=h^2\mathrm{c^{sm}}(\Omega_{(1,2,3)})\\ \mathrm{c^{sm}}(\Omega_{\mathrm{id}})\mathrm{c^{sm}}(\Omega_{\mathrm{id}})&=h^2(\mathrm{c^{sm}}(\Omega_{\mathrm{id}})-\mathrm{c^{sm}}(\Omega_{(1,2,3)})-\mathrm{c^{sm}}(\Omega_{(1,3,2)})+3\mathrm{c^{sm}}(\Omega_{(1,3)})) \end{align*} \end{table} \section{Grassmannians} \subsection{Weight Function Orthogonality} Fix $d,n\in\mathbb{N}$ with $d\leq n$. Let $T=(\mathbb{C}^*)^n$ act on $\mathbb{C}^n$ in the usual way. This induces an action of $T$ on $\mathrm{Gr}(d,n)$. Let $e_1,...,e_n$ be the standard basis for $\mathbb{C}^n$. Then, the fixed points of $\mathrm{Gr}(d,n)$ under the $T$ action are those subspaces spanned by a subset of $\{e_1,...,e_n\}$ of cardinality $d$. The orbits of a Borel subgroup $B\supset T$ through the $\binom{n}{d}$ fixed points give a decomposition of the Grassmannian into Schubert cells. Let $\mathcal{I}=\{I\subset[1,n]\ |\ |I|=d\}$. Subsets $I=\{i_1,...,i_d\}\in\mathcal{I}$ parametrize both the fixed points and the Schubert cells of $\mathrm{Gr}(d,n)$, as described above. By convention, we will enumerate the elements of $I$ in ascending order, i.e.\ $i_1<i_2<\cdots<i_d$. Denote the fixed point $\mathrm{span}\{e_{i_1},...,e_{i_d}\}\in\mathrm{Gr}(d,n)$ by $x_I$ and the corresponding Schubert cell by $\Omega_I$. Define a partial order on $\mathcal{I}$ by $I\leq J \Leftrightarrow\forall (1\leq a\leq d)\ \ i_a\leq j_a$. This gives the Bruhat order on the Schubert cells. Another description of fixed points and Schubert cells is also commonly used.
There is a correspondence between $\mathcal{I}$ and integer partitions with $d$ parts each at most $n-d$. Such partitions are represented by Young diagrams contained in a $d\times(n-d)$ box. The correspondence is given by \[I=\{i_1,...,i_d\}\mapsto \lambda_I=((n-d)-(i_a-a))_{a=1}^d.\] Under this correspondence, the number of boxes in the Young diagram of $\lambda_I$ is the codimension of the Schubert cell $\Omega_I$. This description is included for reference, but will not be used in what follows. We will use weight function orthogonality \cite{RTV1,RTV2,TV} to obtain formulas for equivariant structure constants. See \cite{R} for a survey of generalized structure constants of flag manifolds from the perspective of weight function orthogonality. The nonequivariant structure constants can then be obtained by taking a limit using residue calculus. This residue calculus approach is also used in \cite{FR} to obtain formulas for the CSM classes of degeneracy loci. To this end, it is not necessary to fully describe the equivariant cohomology ring $\mathrm{H}^*_T(\mathrm{Gr}(d,n))$. Instead, we simply remark that the inclusion $\mathrm{Gr}(d,n)^T\hookrightarrow\mathrm{Gr}(d,n)$ induces an injection \[\mathrm{H}^*_T(\mathrm{Gr}(d,n))\hookrightarrow\mathrm{H}^*_T(\mathrm{Gr}(d,n)^T)\cong\bigoplus_{I\in\mathcal{I}}\mathrm{H}^*_T(x_I)\cong\bigoplus_{I\in\mathcal{I}}\mathbb{Z}[z_1,...,z_n].\] It follows that an equivariant cohomology class is identified with a tuple $(f_I)_{I\in\mathcal{I}}$ of polynomials in $n$ variables, one polynomial for each fixed point. The same is true in K-theory, except polynomials are replaced by Laurent polynomials. Namely, \[\mathrm{K}^0_T(\mathrm{Gr}(d,n))\hookrightarrow\mathrm{K}^0_T(\mathrm{Gr}(d,n)^T)\cong\bigoplus_{I\in\mathcal{I}}\mathrm{K}^0_T(x_I)\cong\bigoplus_{I\in\mathcal{I}}\mathbb{Z}[z_1^{\pm 1},...,z_n^{\pm 1}].\] Weight functions compute the fixed point restrictions of the equivariant $\mathrm{c^{sm}}$ and $\mathrm{mC}$ classes.
The key formulas are given below. Note that while weight functions are formulated for arbitrary partial flag manifolds, the formulas below have been specialized to Grassmannians. \begin{weight}\label{weight} Let $a\in[1,d]$, $b\in[1,n]$. Define the following polynomials in variables $x,h$. \[\Psi^{\mathrm{H}}_{I,a,b}(x)=\begin{cases}x+h&\text{if }b<i_a\\h&\text{if }b=i_a\\x&\text{if }b>i_a\end{cases}\quad,\quad\Psi^{\mathrm{K}}_{I,a,b}(x)=\begin{cases}1+hx&\text{if }b<i_a\\(1+h)x&\text{if }b=i_a\\1-x&\text{if }b>i_a\end{cases}.\] Let $I=\{i_1,...,i_d\}\in\mathcal{I}$, and $\sigma\in S_n$. Define the following rational functions in variables $t_1,...,t_d,z_1,...,z_n,h$. \begin{align*} U^{\mathrm{H}}_I&=\prod_{a=1}^d\prod_{b=1}^n\Psi^{\mathrm{H}}_{I,a,b}(z_b-t_a)\prod_{a<b\leq d}\frac{1}{t_b-t_a}\prod_{b\leq a\leq d}\frac{1}{t_b-t_a+h},\\ U^{\mathrm{K}}_I&=\prod_{a=1}^d\prod_{b=1}^n\Psi^{\mathrm{K}}_{I,a,b}(t_a/z_b)\prod_{a<b\leq d}\frac{1}{1-t_a/t_b}\prod_{b\leq a\leq d}\frac{1}{1+ht_a/t_b},\\ W_{\sigma,I}^{\mathrm{H}}&=\mathrm{Sym}_{t_1,...,t_d}U^{\mathrm{H}}_{\sigma^{-1}(I)}(t_1,...,t_d;z_{\sigma(1)},...,z_{\sigma(n)};h),\\ W_{\sigma,I}^{\mathrm{K}}&=\mathrm{Sym}_{t_1,...,t_d}U^{\mathrm{K}}_{\sigma^{-1}(I)}(t_1,...,t_d;z_{\sigma(1)},...,z_{\sigma(n)};h), \end{align*} \begin{equation*} \begin{array}{lcl} \begin{displaystyle}R^{\mathrm{H}}_I=\prod_{\substack{a\in I\\b\notin I}}(z_b-z_a)\end{displaystyle}&,&\begin{displaystyle}Q^{\mathrm{H}}_I=\prod_{\substack{a\in I\\b\notin I}}(z_b-z_a+h)\end{displaystyle},\\ \begin{displaystyle}R^{\mathrm{K}}_I=\prod_{\substack{a\in I\\b\notin I}}(1-z_a/z_b)\end{displaystyle}&,&\begin{displaystyle}Q^{\mathrm{K}}_I=\prod_{\substack{a\in I\\b\notin I}}(1+z_b/(hz_a)).\end{displaystyle} \end{array} \end{equation*} Define inner products on $\mathbb{Z}(t_1,...,t_d;z_1,...,z_n;h)$ \begin{align*} \langle 
f,g\rangle^{\mathrm{H}}&=\sum_{I\in\mathcal{I}}\frac{f(z_{i_1},...,z_{i_d};z_1,...,z_n;h)g(z_{i_1},...,z_{i_d};z_1,...,z_n;h)}{R^{\mathrm{H}}_IQ^{\mathrm{H}}_I},\\ \langle f,g\rangle^{\mathrm{K}}&=\sum_{I\in\mathcal{I}}\frac{f(z_{i_1},...,z_{i_d};z_1,...,z_n;h)g(z_{i_1},...,z_{i_d};z_1,...,z_n;h)}{R^{\mathrm{K}}_IQ^{\mathrm{K}}_I}. \end{align*} \end{weight} The next theorem summarizes some of the main results of \cite{RTV1,RTV2}. \begin{weight_orth}\label{weight_orth} Let $s_0\in S_n$ be the longest permutation. Let $\iota:\mathbb{Z}(t_1,...,t_d;z_1,...,z_n;h)\to\mathbb{Z}(t_1,...,t_d;z_1,...,z_n;h)$ be defined by $f(t_1,...,t_d;z_1,...,z_n;h)\mapsto f(1/t_1,...,1/t_d;1/z_1,...,1/z_n;1/h)$. For all $I,J\in\mathcal{I}$, \begin{enumerate} \item $\mathrm{c^{sm}}(\Omega_I)|_{x_J}=W^{\mathrm{H}}_{\mathrm{id},I}(z_{j_1},...,z_{j_d};z_1,...,z_n;h)\ \ ,\ \ \mathrm{mC}(\Omega_I)|_{x_J}=W^{\mathrm{K}}_{\mathrm{id},I}(z_{j_1},...,z_{j_d};z_1,...,z_n;h),$ \item $\langle W^{\mathrm{H}}_{\mathrm{id},I},W^{\mathrm{H}}_{s_0,J}\rangle^{\mathrm{H}}=\delta_{I,J}\ \ ,\ \ \langle W^{\mathrm{K}}_{\mathrm{id},I},(-h)^{-\dim \Omega_J}\iota(W^{\mathrm{K}}_{s_0,J})\rangle^{\mathrm{K}}=\delta_{I,J}.$ \end{enumerate} \end{weight_orth} \subsection{Cohomological Structure Constants of $\mathrm{Gr}(d,n)$} The classes $\mathrm{c^{sm}}(\Omega_I)$ for $I\in\mathcal{I}$ form a basis for $\mathrm{H}^*_T(\mathrm{Gr}(d,n))(h)$. Let $\hat{c}^K_{I,J}\in\mathbb{Z}[z_1,...,z_n](h)$ be the unique rational functions satisfying $\forall I,J\in\mathcal{I}\ \ \mathrm{c}_T^{\mathrm{sm}}(\Omega_I)\mathrm{c}_T^{\mathrm{sm}}(\Omega_J)=\sum_{K\in\mathcal{I}}\hat{c}^K_{I,J}\mathrm{c}_T^{\mathrm{sm}}(\Omega_K)$. The following is an immediate consequence of the orthogonality relations in Theorem~\ref{weight_orth}.
\begin{H_weight_LR}\label{H_weight_LR} For all $I,J,K\in\mathcal{I}$,\ \ $\hat{c}^K_{I,J}=\langle W^{\mathrm{H}}_{\mathrm{id},I}W^{\mathrm{H}}_{\mathrm{id},J},W^{\mathrm{H}}_{\mathrm{s_0},K}\rangle^{\mathrm{H}}$. \end{H_weight_LR} A priori, this formula for the structure constants gives rational functions in $z_1,...,z_n$. They are in fact polynomials, but the cancellations of the denominators are not obvious. If we let $c^K_{I,J}=\lim_{z_1,...,z_n\to 0}\hat{c}^K_{I,J}$, then the $c^K_{I,J}$ are the structure constants in nonequivariant cohomology. It will be convenient to consider each term of the summation in Corollary~\ref{H_weight_LR} individually, so for $L\in\mathcal{I}$ define \[\hat{c}^{K,L}_{I,J}=\frac{\left(W^{\mathrm{H}}_{\mathrm{id},I}W^{\mathrm{H}}_{\mathrm{id},J}W^{\mathrm{H}}_{\mathrm{s_0},K}\right)(z_{l_1},...,z_{l_d};z_1,...,z_n;h)}{R^{\mathrm{H}}_LQ^{\mathrm{H}}_L}.\] It is clear from the formulas that $\hat{c}^{K,L}_{I,J}=0$ unless $K\leq L\leq I,J$. Let us revisit $\mathbb{P}^m$ from this perspective. \subsubsection{$\mathbb{P}^m$ Revisited} Here, we will abuse notation by letting an integer $i\in\mathbb{Z}$ also represent the set $\{i\}$. With this notation, the equivariant structure constants of Gr$(1,n)$ are denoted $\hat{c}^k_{i,j}$. This convention is consistent with the convention of Section 2. Without loss of generality, assume $i\leq j$. Then, Corollary~\ref{H_weight_LR} gives us the following formula for $\hat{c}^{k,l}_{i,j}$, after making obvious cancellations: \[\hat{c}^{k,l}_{i,j}=\begin{cases}\frac{h\prod\limits_{b=j+1}^{n}(z_b-z_l)\prod\limits_{b=1}^{i-1}(z_b-z_l+h)\prod\limits_{b=k+1}^{j-1}(z_b-z_l+h)}{\prod\limits_{b=k}^{l-1}(z_b-z_l)\prod\limits_{b=l+1}^i(z_b-z_l)}&\text{if }k\leq l\leq i,\\ 0 &\text{ otherwise.}\end{cases}\] Note that if $k=l=i$, then this formula reduces to the polynomial $h\prod\limits_{b=j+1}^{n}(z_b-z_i)\prod\limits_{\substack{b=1\\b\neq i}}^{j-1}(z_b-z_i+h)$.
Hence, $c^i_{i,j}=\begin{cases}h^{n-1}&\text{if }j=n\\0&\text{otherwise}\end{cases}$. It remains to compute $c^k_{i,j}$ for $k<i$. Assume that $k<i$. Since $\hat{c}^k_{i,j}\in\mathrm{H}^*_T(\mathrm{Gr}(1,n))$ is a polynomial in the $z_b$ variables, we may apply Cauchy's integral formula to compute $c^k_{i,j}$. For $1\leq b\leq n$, let $\gamma_b\subset\mathbb{C}$ be a counterclockwise circle centered at 0 with radius $1/(b+1)$. Then, \begin{align*} c^k_{i,j}&=\left(\frac{1}{2\pi\sqrt{-1}}\right)^n\int_{\gamma_n}\cdots\int_{\gamma_1}\frac{\hat{c}^k_{i,j}}{z_1\cdots z_n}dz_1\cdots dz_n\\ &=\left(\frac{1}{2\pi\sqrt{-1}}\right)^n\int_{\gamma_n}\cdots\int_{\gamma_1}\sum_{l=1}^n\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}dz_1\cdots dz_n\\ &=\left(\frac{1}{2\pi\sqrt{-1}}\right)^n\sum_{l=1}^n\int_{\gamma_n}\cdots\int_{\gamma_1}\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}dz_1\cdots dz_n. \end{align*} Define $c^{k,l}_{i,j}=\left(\frac{1}{2\pi\sqrt{-1}}\right)^n\int_{\gamma_n}\cdots\int_{\gamma_1}\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}dz_1\cdots dz_n$. Our goal is to evaluate this integral using residue calculus. By Fubini's theorem, the order of integration can be freely interchanged. Define the sets of variables \[\mathbf{z}_1=\{z_b\ |\ b\notin[k,i]\}\ ,\ \mathbf{z}_2=\{z_b\ |\ b\in[l+1,i]\}\ ,\ \mathbf{z}_3=\{z_b\ |\ b\in[k,l-1]\}.\] The most convenient order of integration is first over variables in $\mathbf{z}_1$, second over variables in $\mathbf{z}_2$, third over variables in $\mathbf{z}_3$, and finally over $z_l$. From now on, we do not fully notate this iterated integral. Instead, the symbol $\int fd\mathbf{z}_s$, where $s=1,2,3$, will represent the iterated integral of $\left(\frac{1}{2\pi\sqrt{-1}}\right)^{|\mathbf{z}_s|}f$ taken over the variables in $\mathbf{z}_s$. The first set of integrals $\int\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}d\mathbf{z}_1$ picks up singularities only at $z_b=0$ for $b\notin[k,i]$.
Since $\hat{c}^{k,l}_{i,j}$ is holomorphic in $z_b$ when $b\notin[k,i]$, this integral just sets these variables equal to 0. We thus have \[\int\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}d\mathbf{z}_1=-\frac{h(-z_l+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(z_k-z_l+h)(z_i-z_l+h)^{1-\delta_{i,j}}\prod\limits_{b=k+1}^{i-1}(z_b-z_l+h)^2}{(-z_l)^{j-n+1}\prod\limits_{b=k}^{l-1}(z_b-z_l)z_b\prod\limits_{b=l+1}^i(z_b-z_l)z_b}.\] Due to the way the $\gamma_b$ are nested, the second set of integrals $\int\int\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}d\mathbf{z}_1 d\mathbf{z}_2$ picks up singularities only at $z_b=0$ for $b\in[l+1,i]$. Similarly, we set these variables equal to 0 and obtain \[\int\int\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}d\mathbf{z}_1 d\mathbf{z}_2= \begin{cases}-\frac{h^2(-z_l+h)^{2i-k-3+(j-i)(1-\delta_{i,j})}}{(-z_l)^{i+j-n-k+1}}&\text{if }l=k,\\ -\frac{h^3(-z_l+h)^{2i+k-2l-3+(j-i)(1-\delta_{i,j})}(z_k-z_l+h)\prod\limits_{b=k+1}^{l-1}(z_b-z_l+h)^2}{(-z_l)^{i+j-l-n+1}\prod\limits_{b=k}^{l-1}(z_b-z_l)z_b}&\text{if }k<l<i,\\ -\frac{h^{2-\delta_{i,j}}(-z_i+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(z_k-z_i+h)\prod\limits_{b=k+1}^{i-1}(z_b-z_i+h)^2}{(-z_i)^{j-n+1}\prod\limits_{b=k}^{i-1}(z_b-z_i)z_b}&\text{if }l=i. \end{cases}\] We can simplify these expressions by noticing that $(j-i)(1-\delta_{i,j})=j-i$. The following residue calculations will be useful for performing the $\mathbf{z}_3$ integral. \begin{res}\label{res} Let $f=\frac{(z_b-z_l+h)^r}{(z_b-z_l)z_b}$, where $r=1,2$. 
Then, \begin{enumerate} \item $\mathrm{Res}_{z_b= 0}f=\frac{(-z_l+h)^r}{-z_l}$, \item $\mathrm{Res}_{z_b= z_l}f=\frac{h^r}{z_l}$, \item $\mathrm{Res}_{z_b= 0}f+\mathrm{Res}_{z_b= z_l}f=\begin{cases}1&\text{if }r=1,\\(-z_l+2h)&\text{if }r=2.\end{cases}$ \end{enumerate} \end{res} It immediately follows that \[\int\int\int\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_n}d\mathbf{z}_1 d\mathbf{z}_2 d\mathbf{z}_3= \begin{cases}-\frac{h^2(-z_l+h)^{i+j-k-3}}{(-z_l)^{i+j-k-n+1}}&\text{if }l=k,\\ -\frac{h^3(-z_l+h)^{i+j+k-2l-3}(-z_l+2h)^{l-k-1}}{(-z_l)^{i+j-l-n+1}}&\text{if }k<l<i,\\ -\frac{h^{2-\delta_{i,j}}(-z_l+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(-z_l+2h)^{i-k-1}}{(-z_l)^{j-n+1}}&\text{if }l=i. \end{cases}\] Finally, we perform the change of variables $z=-z_l$ and integrate over the new variable $z$. The following proposition summarizes the calculations of this section. \begin{csm_proj_weight}\label{csm_proj_weight} For all $i,j,k,l\in[1,n]$, \begin{enumerate} \item $c^{k,l}_{i,j}=0$ unless $k\leq l\leq i,j$, \item when $k<i$ and $k\leq l\leq i\leq j$, \[c^{k,l}_{i,j}= \begin{cases}\mathrm{Res}_{z= 0}\left(\frac{h^2(z+h)^{i+j-k-3}}{z^{i+j-k-n+1}}\right)&\text{if }l=k,\\ \mathrm{Res}_{z= 0}\left(\frac{h^3(z+h)^{i+j+k-2l-3}(z+2h)^{l-k-1}}{z^{i+j-l-n+1}}\right)&\text{if }k<l<i,\\ \mathrm{Res}_{z= 0}\left(\frac{h^{2-\delta_{i,j}}(z+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(z+2h)^{i-k-1}}{z^{j-n+1}}\right)&\text{if }l=i, \end{cases}\] \item when $k=l=i\leq j$, $c^{i,i}_{i,j}=\begin{cases}h^{n-1}&\text{if }j=n,\\0&\text{otherwise}.\end{cases}$ \end{enumerate} \end{csm_proj_weight} Observe that each of these residues is a nonnegative integer multiple of $h^{n-1}=h^{\dim(\mathrm{Gr}(1,n))}$. Moreover, we can realize $c^k_{i,j}$ as a coefficient of some polynomial.
Namely, for $1\leq k<i\leq j\leq n$, define the polynomial \begin{multline*}p^k_{i,j}(z)=h^2(z+h)^{i+j-k-3}+h^{2-\delta_{i,j}}z^{i-k}(z+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(z+2h)^{i-k-1}\\ +\sum_{l=k+1}^{i-1}h^3z^{l-k}(z+h)^{i+j+k-2l-3}(z+2h)^{l-k-1}.\end{multline*} Then $c^k_{i,j}$ is the coefficient in $p^k_{i,j}$ of the degree $i+j-k-n$ term in $z$. Applying Theorem~\ref{HLR_proj} yields nonobvious properties of these polynomials. It would be interesting to see more general results computing structure constants of Grassmannians as coefficients of explicit polynomials. \subsubsection{Pieri Triples in Gr$(2,n)$} The results of the previous section can be generalized to a special class of structure constants in 2-plane Grassmannians. The novel feature of 2-plane Grassmannians is the appearance of symmetrizations in the weight functions. This section focuses on the situation when these symmetrizations contribute only one nonzero term. Assume that $I,J,K\in\mathcal{I}$ are such that $n\in I,J,K$. Note that $\hat{c}^{K,L}_{I,J}=0$ unless $n\in L$ as well. Subject to this condition, $I,J,K,L$ are determined by $i:=i_1,j:=j_1,k:=k_1,l:=l_1$. Such $\Omega_I,\Omega_J,\Omega_K$ are Pieri cells, so we expect $\hat{c}^K_{I,J}$ to be related to the structure constant $\hat{c}^k_{i,j}$ of Gr$(1,n-1)=\mathbb{P}^{n-2}$. See for instance the recursions of \cite{AM2009}. Without loss of generality, assume that $i\leq j$.
By making obvious cancellations in Corollary~\ref{H_weight_LR}, we get \[\hat{c}^{K,L}_{I,J}=\begin{cases}\left[\frac{h\prod\limits_{b=j+1}^{n-1}(z_b-z_l)\prod\limits_{b=1}^{i-1}(z_b-z_l+h)\prod\limits_{b=k+1}^{j-1}(z_b-z_l+h)}{\prod\limits_{b=k}^{l-1}(z_b-z_l)\prod\limits_{b=l+1}^i(z_b-z_l)}\right]\prod\limits_{\substack{1\leq b\leq n-1\\b\neq l}}(z_b-z_n+h) &\text{ if }k<j,\\ \left[\prod\limits_{b=1}^{j-1}(z_b-z_l+h)\prod\limits_{b=j+1}^{n-1}(z_b-z_l)\right]\prod\limits_{\substack{1\leq b\leq n-1\\b\neq l}}(z_b-z_n+h) &\text{ if }k=l=i=j,\\ 0 &\text{ otherwise}.\end{cases}\] The terms in square brackets are precisely the $\hat{c}^{k,l}_{i,j}$, so we indeed have a relationship between the structure constants of Gr$(2,n)$ and those of Gr$(1,n-1)$. \begin{proj_like}\label{proj_like} If $n\in I,J,K,L$, and $i=i_1,j=j_1,k=k_1,l=l_1$ are such that $i\leq j$, then \[\hat{c}^{K,L}_{I,J}=\hat{c}^{k,l}_{i,j}\prod\limits_{\substack{1\leq b\leq n-1\\b\neq l}}(z_b-z_n+h).\] \end{proj_like} The extra factors in Proposition~\ref{proj_like} will have a predictable effect on the residues computed in the previous section, resulting in a formula for $c^K_{I,J}$. We will adopt the notations and conventions of the previous section in what follows. In particular, define the subsets of $\{z_1,...,z_n\}$ \[\mathbf{z}_1=\{z_b\ |\ b\notin[k,i]\}\ ,\ \mathbf{z}_2=\{z_b\ |\ b\in[l+1,i]\}\ ,\ \mathbf{z}_3=\{z_b\ |\ b\in[k,l-1]\}.\] We will again integrate over $\mathbf{z}_1,\mathbf{z}_2,\mathbf{z}_3$, and finally $z_l$. The $d\mathbf{z}_1$ and $d\mathbf{z}_2$ integrals pick up singularities only at $0$. As before, we set these variables equal to 0 to obtain \[\int\int\frac{\hat{c}^{K,L}_{I,J}}{z_1\cdots z_n}d\mathbf{z}_1 d\mathbf{z}_2=h^{n+k-l-2}\prod\limits_{k\leq b\leq l-1}(z_b+h)\int\int\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_{n-1}}d\mathbf{z}_1 d\mathbf{z}_2.\] We have already computed $\int\int\frac{\hat{c}^{k,l}_{i,j}}{z_1\cdots z_{n-1}}d\mathbf{z}_1 d\mathbf{z}_2$ in the previous section.
The following lemma will be useful for computing the $\mathbf{z}_3$ integral. \begin{res_Pieri}\label{res_Pieri} Let $f=(z_b+h)\frac{(z_b-z_l+h)^r}{(z_b-z_l)z_b}$, where $r=1,2$. Then, \begin{enumerate} \item $\mathrm{Res}_{z_b= 0}f=\frac{h(-z_l+h)^r}{-z_l}$, \item $\mathrm{Res}_{z_b= z_l}f=\frac{(z_l+h)h^r}{z_l}$, \item $\mathrm{Res}_{z_b= 0}f+\mathrm{Res}_{z_b= z_l}f=\begin{cases}2h&\text{if }r=1,\\h(-z_l+3h)&\text{if }r=2\end{cases}$. \end{enumerate} \end{res_Pieri} It immediately follows that for $k\leq l\leq i\leq j$ and $k<i$, \[\int\int\int\frac{\hat{c}^{K,L}_{I,J}}{z_1\cdots z_n}d\mathbf{z}_1 d\mathbf{z}_2 d\mathbf{z}_3= \begin{cases}-\frac{h^n(-z_l+h)^{i+j-k-3}}{-z_l^{i+j-k-n+2}}&\text{if }l=k,\\ -\frac{2h^{n+1}(-z_l+h)^{i+j+k-2l-3}(-z_l+3h)^{l-k-1}}{-z_l^{i+j-l-n+2}}&\text{if }k<l<i,\\ -\frac{2h^{n-\delta_{i,j}}(-z_l+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(-z_l+3h)^{i-k-1}}{-z_l^{j-n+2}}&\text{if }l=i. \end{cases}\] As before, we make the change of variables $z=-z_l$. \begin{csm_Pieri}\label{csm_Pieri} Let $I,J,K,L\in\mathcal{I}$ be such that $n\in I,J,K$. Let $i=i_1,j=j_1,k=k_1,l=l_1$. Then, \begin{enumerate} \item $c^{K,L}_{I,J}=0$ unless $n\in L$ and $k\leq l\leq i,j$, \item when $k<i$, $k\leq l\leq i\leq j$, and $n\in L$, \[c^{K,L}_{I,J}= \begin{cases}\mathrm{Res}_{z= 0}\left(\frac{h^n(z+h)^{i+j-k-3}}{z^{i+j-k-n+2}}\right)&\text{if }l=k,\\ \mathrm{Res}_{z= 0}\left(\frac{2h^{n+1}(z+h)^{i+j+k-2l-3}(z+3h)^{l-k-1}}{z^{i+j-l-n+2}}\right)&\text{if }k<l<i,\\ \mathrm{Res}_{z= 0}\left(\frac{2h^{n-\delta_{i,j}}(z+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(z+3h)^{i-k-1}}{z^{j-n+2}}\right)&\text{if }l=i. \end{cases}\] \item when $k=l=i\leq j$, $c^{I,I}_{I,J}=\begin{cases}h^{2(n-2)}&\text{if }j=n-1,\\0&\text{otherwise}.\end{cases}$ \end{enumerate} \end{csm_Pieri} Again, we see that the structure constants are nonnegative integer multiples of $h^{2(n-2)}=h^{\dim(\mathrm{Gr}(2,n))}$.
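The cancellation in part 3 of the lemma is easy to confirm numerically. The quick check below (function names are ours) computes each simple-pole residue by the limit definition $\mathrm{Res}_{z=a}f=\lim_{z\to a}(z-a)f(z)$ and compares the sum with the closed forms stated above:

```python
def f(z, zl, h, r):
    # the function of the lemma, viewed as a function of z = z_b
    return (z + h) * (z - zl + h) ** r / ((z - zl) * z)

def res(pole, zl, h, r, eps=1e-7):
    # simple-pole residue approximated by (z - pole) * f(z) near the pole
    z = pole + eps
    return (z - pole) * f(z, zl, h, r)

zl, h = 0.9, 0.37
for r, expected in [(1, 2 * h), (2, h * (-zl + 3 * h))]:
    total = res(0.0, zl, h, r) + res(zl, zl, h, r)
    assert abs(total - expected) < 1e-4
```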
It is also possible to realize $c^K_{I,J}$, where $k<i\leq j$, as the degree $i+j-k-n+1$ term of the polynomial \begin{multline*}p^K_{I,J}=h^n(z+h)^{i+j-k-3}+2h^{n+1}\sum_{l=k+1}^{i-1}z^{l-k}(z+h)^{i+j+k-2l-3}(z+3h)^{l-k-1}\\ +2h^{n-\delta_{i,j}}z^{i-k}(z+h)^{k-1+(j-i-1)(1-\delta_{i,j})}(z+3h)^{i-k-1}.\end{multline*} More generally, consider $\mathrm{Gr}(d,n)$ for arbitrary $d$. Given $I\in\mathcal{I}$ such that $n\in I$, let $I_-=I\setminus\{n\}$. The structure constants $\hat{c}^K_{I,J}$ associated to a Pieri triple $I,J,K\in\mathcal{I}$, where $n\in I,J,K$, are related to the structure constants $\hat{c}^{K_-}_{I_-,J_-}$ of $\mathrm{Gr}(d-1,n-1)$ by formulas akin to Proposition~\ref{proj_like}. However, the influence of the extra terms on the residue calculus of $\hat{c}^{K_-}_{I_-,J_-}$ may not be as straightforward. \subsubsection{The General Case} In general, the formula of Corollary~\ref{H_weight_LR} is complicated by symmetrizations. Take, for example, the structure constant of $\mathrm{Gr}(2,4)$ \[\hat{c}^{\{1,2\},\{2,3\}}_{\{2,3\},\{2,3\}}=(z_4-z_2)(z_4-z_3)(h+z_1-z_2)(h+z_1-z_3)\frac{\frac{h^2(z_1-z_3)}{z_2-z_3}+\frac{h(z_1-z_2)(z_3-z_2+h)}{z_3-z_2}}{(z_1-z_2)(z_1-z_3)}.\] In light of part 1 of Theorem~\ref{weight_orth} and the fact that fixed point restrictions are always polynomials in the $z$ variables, the singularity at $z_2=z_3$ must be removable. Indeed, this expression simplifies to \[\frac{h(z_1-z_2+h)^2(z_1-z_3+h)(z_4-z_2)(z_4-z_3)}{(z_1-z_2)(z_1-z_3)}.\] Notice that the simplified expression can be written as a product of three kinds of terms: \begin{enumerate} \item a power of $h$, \item terms of the form $(z_b-z_l+h)^r/(z_b-z_l)^s$, where $b\notin L$, $l\in L$, $r=0,1,2$, and $s=0,1$, \item and terms of the form $(z_b-z_l)$, where $\forall a\in L\ \ b>a$, $l\in L$, \end{enumerate} where terms of type 2 and 3 with fixed $b$ and $l$ appear at most once.
This is precisely the pattern we encountered in the previous sections, and it appears that the $\hat{c}^{K,L}_{I,J}$ will be sums of expressions of this form in general. The novel feature is that for all $b\notin L$, up to $d$ poles in $z_b$ of order 1 may appear. Hence, the $z_b$ integral may contribute up to $d+1$ residues (the extra residue comes from the singularity at $z_b=0$ introduced by Cauchy's integral theorem). Computing these integrals is a subject for future work, but certain properties can be deduced from this pattern, e.g. positivity. In the next section, we will describe conjectures involving the more general K-theoretic structure constants, which might be provable using this technique. \subsection{K-Theoretic Structure Constants of $\mathrm{Gr}(d,n)$} The classes $\mathrm{mC}(\Omega_I)$ for $I\in\mathcal{I}$ form a basis for $\mathrm{K}^0_T(\mathrm{Gr}(d,n))(h)$. Let $\hat{C}^K_{I,J}\in\mathbb{Z}[z^{\pm 1}_1,...,z^{\pm 1}_n](h)$ be the unique rational functions satisfying $\forall I,J\in\mathcal{I}\ \ \mathrm{mC}_T(\Omega_I)\mathrm{mC}_T(\Omega_J)=\sum_{K\in\mathcal{I}}\hat{C}^K_{I,J}\mathrm{mC}_T(\Omega_K)$. The following is an immediate consequence of the orthogonality relations in Theorem~\ref{weight_orth}. \begin{K_weight_LR}\label{K_weight_LR} For all $I,J,K\in\mathcal{I}$,\ \ $\hat{C}^K_{I,J}=\langle W^{\mathrm{K}}_{\mathrm{id},I}W^{\mathrm{K}}_{\mathrm{id},J},(-h)^{-\dim(\Omega_K)}\iota(W^{\mathrm{K}}_{\mathrm{s_0},K})\rangle^{\mathrm{K}}$. \end{K_weight_LR} As in the previous section, define \[\hat{C}^{K,L}_{I,J}=\frac{(-h)^{-\dim(\Omega_K)}W^{\mathrm{K}}_{\mathrm{id},I}W^{\mathrm{K}}_{\mathrm{id},J}\iota(W^{\mathrm{K}}_{\mathrm{s_0},K})(z_{i_1},...,z_{i_d};z_1,...,z_n;h)}{R^{\mathrm{K}}_IQ^{\mathrm{K}}_I}.\] Taking the limit as the $z$ variables go to 1 degenerates the equivariant structure constants to nonequivariant structure constants.
In order to compare the K-theoretic structure constants to the cohomological structure constants, make the change of variables $\zeta_b=z_b-1$, for all $1\leq b\leq n$. Then, $C^K_{I,J}=\lim_{\zeta_1,...,\zeta_n\to 0}\hat{C}^K_{I,J}$. We can evaluate this limit by integrating as we did in the previous section. With the contours $\gamma_b$ of the previous section, let \[C^{K,L}_{I,J}=\left(\frac{1}{2\pi\sqrt{-1}}\right)^n\int_{\gamma_1}\cdots\int_{\gamma_n}\frac{\hat{C}^{K,L}_{I,J}}{\zeta_1\cdots\zeta_n}d\zeta_n\cdots d\zeta_1.\] Then, $C^K_{I,J}=\sum_{L\in \mathcal{I}}C^{K,L}_{I,J}$. It will also be helpful to introduce the variable $\nu=-1-h$. Let us write the K-theoretic structure constant of Gr$(2,4)$ from the previous section in these new variables. We have \[\hat{C}^{\{1,2\},\{2,3\}}_{\{2,3\},\{2,3\}}=\frac{\nu(\zeta_1-(\nu-1)\zeta_2+\nu)^2(\zeta_1-(\nu-1)\zeta_3+\nu)(\zeta_4-\zeta_2)(\zeta_4-\zeta_3)}{(\zeta_1-\zeta_2)(\zeta_1-\zeta_3)}\cdot\frac{1}{(\zeta_1+1)(\zeta_4+1)^2}.\] Notice the similarities between $\hat{C}^{\{1,2\},\{2,3\}}_{\{2,3\},\{2,3\}}$ and $\hat{c}^{\{1,2\},\{2,3\}}_{\{2,3\},\{2,3\}}$: \begin{enumerate} \item instead of a power of $h$, there is a power of $\nu$, \item instead of $z_b-z_l+h$, there is $\zeta_b-(\nu-1)\zeta_l+\nu$, \item instead of $z_b-z_l$, there is $\zeta_b-\zeta_l$, \item and there is an extra factor of the form $\prod_{b=1}^n (\zeta_b+1)^{r_b}$, where $r_b\in\mathbb{Z}$. \end{enumerate} This pattern appears to hold in general. Since the contours are circles of radius less than 1, the extra terms in 4 do not contribute any new singularities. Thus, the residue calculus of these K-theoretic expressions is not substantially more complicated than that of the cohomological expressions. We expect general versions of the results of Section 2.3 to hold for Gr$(d,n)$.
\begin{symmetry} For all $I,I',J,J',K,K'\in\mathcal{I}$, \begin{enumerate} \item $C^K_{I,J}$ is a polynomial in $\nu$ with nonnegative coefficients, \item $C^K_{I,J}=0$ unless $K\leq I,J$, \item $C^K_{I,J}=C^K_{I',J'}$ if $i_a+j_a=i'_a+j'_a$ for all $1\leq a\leq d$, \item $C^K_{I,J}=C^{K'}_{I,J'}$ if $j_a-k_a=j'_a-k'_a$ for all $1\leq a \leq d$. \end{enumerate} \end{symmetry} \begin{KtoH} The term of $C^K_{I,J}$ with lowest $\nu$-degree has coefficient $c^K_{I,J}/h^{\dim(\mathrm{Gr}(d,n))}$. \end{KtoH} \begin{biblist} \bib{Abe}{article}{ title={Young Diagrams and Intersection Numbers for Toric Manifolds associated with Weyl Chambers}, volume={22}, DOI={https://doi.org/10.37236/4307}, journal={The Electronic Journal of Combinatorics}, author={Abe, Hiraku}, date={2015} } \bib{AM2009}{article}{ author = {Aluffi, Paolo}, author={Mihalcea, Leonardo Constantin}, title = {Chern classes of Schubert cells and varieties}, journal = {Journal of Algebraic Geometry}, volume = {18}, date = {2009}, pages = {63--100}, issn = {1056-3911}, } \bib{AM2016}{article}{ author = {Aluffi, Paolo}, author={Mihalcea, Leonardo Constantin}, title = {Chern-Schwartz-MacPherson classes for Schubert cells in flag manifolds}, journal = {Compositio Mathematica}, volume = {152}, number={12}, date = {2016}, pages = {2603--2652}, publisher={London Mathematical Society} } \bib{AMSS1}{article}{ author = {Aluffi, Paolo}, author={Mihalcea, Leonardo Constantin}, author = {Schürmann, J.}, author={Su, C.}, title = {Shadows of characteristic cycles, Verma modules, and positivity of Chern-Schwartz-MacPherson classes of Schubert cells}, journal = {arXiv e-prints}, year = {2017}, eid = {arXiv:1709.08697}, pages = {arXiv:1709.08697}, archivePrefix = {arXiv}, eprint = {1709.08697}, primaryClass = {math.AG} } \bib{AMSS2}{article}{ author = {Aluffi, Paolo}, author={Mihalcea, Leonardo Constantin}, author = {Schürmann, J.}, author={Su, C.}, title = {Motivic Chern classes of Schubert cells, Hecke algebras, and
applications to Casselman's problem}, journal = {arXiv e-prints}, year = {2019}, eid = {arXiv:1902.10101}, pages = {arXiv:1902.10101}, archivePrefix = {arXiv}, eprint = {1902.10101}, primaryClass = {math.AG} } \bib{BSY}{article}{ author = {Brasselet, Jean-Paul}, author={Schürmann, Jörg}, author={Yokura, Shoji}, title = {Hirzebruch classes and motivic Chern classes for singular spaces}, journal = {Journal of Topology and Analysis}, volume = {2}, date = {2010}, pages = {1--55}, number = {1} } \bib{FR}{article}{ title={Chern-Schwartz-MacPherson Classes of Degeneracy Loci}, volume={22}, journal={Geometry and Topology}, author={Feh\'er, László}, author={Rimányi, Richárd}, date={2018}, pages={3575--3622} } \bib{Fu}{book}{ title={Introduction to Toric Varieties}, date={1993}, author={Fulton, William}, publisher={Princeton University Press}, series={AM-131} } \bib{Fu1}{webpage}{ title={Equivariant Cohomology in Algebraic Geometry}, date={2007}, url={https://people.math.osu.edu/anderson.2804/eilenberg/lecture13.pdf}, author={Fulton, William} } \bib{Fu2}{article}{ title = {Intersection theory on toric varieties}, journal = {Topology}, volume = {36}, pages = {335--353}, year = {1997}, issn = {0040-9383}, doi = {https://doi.org/10.1016/0040-9383(96)00016-X}, author = {Fulton, William}, author = {Sturmfels, Bernd} } \bib{M}{article}{ author = {MacPherson, R.
D.}, title = {Chern Classes of Singular Algebraic Varieties}, journal = {Annals of Mathematics}, volume = {100}, number = {2}, pages = {423--432}, year = {1974} } \bib{MS}{article}{ author = {Maxim, Laurenţiu G.}, author = {Schürmann, Jörg}, title = {Characteristic Classes of Singular Toric Varieties}, journal = {Communications on Pure and Applied Mathematics}, volume = {68}, number = {12}, pages = {2177--2236}, doi = {10.1002/cpa.21553}, year = {2015} } \bib{R}{article}{ author = {Rimányi, Richárd}, title = {$\hbar$-deformed Schubert calculus in equivariant cohomology, K-theory, and elliptic cohomology}, journal = {arXiv e-prints}, keywords = {Mathematics - Algebraic Geometry, 14N15, 55N34}, year = {2019}, archivePrefix = {arXiv}, eprint = {https://arxiv.org/abs/1912.13089}, primaryClass = {math.AG}, } \bib{RTV1}{article}{ title={Partial flag varieties, stable envelopes and weight functions}, volume={6}, journal={Quantum Topology}, author={Rimányi, Richárd}, author={Tarasov, Vitaly}, author={Varchenko, Alexander}, date={2015}, pages={333--364} } \bib{RTV2}{article}{ title={Trigonometric weight functions as K-theoretic stable envelope maps for the cotangent bundle of a flag variety}, volume={94}, journal={Journal of Geometry and Physics}, author={Rimányi, Richárd}, author={Tarasov, Vitaly}, author={Varchenko, Alexander}, date={2015}, pages={81--119} } \bib*{banach}{proceedings}{ conference = {IMPANGA2015}, editor = {Buczynski, J.}, editor = {Michalek, M.}, editor = {Postinghel, E.}, } \bib{RV}{article}{ xref={banach}, conference = {title={IMPANGA2015}}, author = {Rimányi, R.}, author = {Varchenko, A.}, title = {Equivariant Chern-Schwartz-MacPherson classes in partial flag varieties: interpolation and formulae}, book={title={Schubert Varieties, Equivariant Cohomology and Characteristic Classes}}, pages = {225--235}, date={2018} } \bib{S}{article}{ author = {{Su}, Changjian}, title = {Structure constants for Chern classes of Schubert cells}, journal = {arXiv e-prints},
keywords = {Mathematics - Algebraic Geometry, Mathematics - Combinatorics}, year = {2019}, eid = {arXiv:1909.10940}, pages = {arXiv:1909.10940}, archivePrefix = {arXiv}, eprint = {1909.10940}, primaryClass = {math.AG}, } \bib{TV}{article}{ title={Hypergeometric solutions of the quantum differential equation of the cotangent bundle of a partial flag variety}, volume={12}, journal={Central European Journal of Mathematics}, author={Tarasov, Vitaly}, author={Varchenko, Alexander}, date={2013}, doi = {10.2478/s11533-013-0376-8}, } \end{biblist} \noindent Department of Mathematics, University of North Carolina at Chapel Hill, USA\\ \emph{email address:} [email protected] \end{document}
\begin{document} \title{Pionic Entanglement in Femtoscopy: A Lesson in Interference and Indistinguishability} \author{Vlatko Vedral} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom} \date{\today} \begin{abstract} \noindent We present an analysis of recent experiments in femtoscopy by the STAR collaboration in terms of the protocol of entanglement witnessing involving purity measurements. The entanglement is between the charge and momentum degrees of freedom of pions and the state purity measurements ultimately rely on the bosonic nature of the detected pions. The pion experiment is intended to measure the size of nuclei and the distance between the nuclei involved; however, it indirectly confirms that the states of differently charged pions are entangled through an entanglement witness based on the purity of various pionic states. The entangled state of pions can be modelled dynamically in a straightforward way using a simple Hamiltonian. Quantum indistinguishability plays a key role in this analysis and we make comparison with the equivalent photonic experiments. \end{abstract} \pacs{03.67.Mn, 03.65.Ud} \maketitle The Hanbury Brown-Twiss (HBT) intensity interferometer \cite{HBT} was invented in order to measure the angular sizes of various astronomical objects like stars. As the name suggests, the key idea is to look at the intensity-intensity correlations between two different detectors receiving light from the same astronomical object. The advantage of the intensity (as opposed to the amplitude) correlations is that they are more robust to fluctuations and therefore more readily accessible. But why would intensities interfere? We normally think of amplitudes interfering, and not the probabilities, which arise by mod squaring the amplitudes. However, interference can take place between intensities for the simple reason that superpositions could also contribute to them.
This is clear from the classical wave theory, but it is even easier to understand quantum mechanically. Here we will mainly follow the quantum explanation (not because it is easier, but) simply because it is more fundamental, leading, as it does, to the classical one in a special limit of large amplitude coherent states. Quantum mechanically, the intensities interfere because of the fact that identical particles, when they are in indistinguishable states, tend to bunch (bosons) or anti-bunch (fermions). Let us focus on bosons since this is the kind of interference we will be describing in this paper (photons, which are relevant for stellar interferometers, and pions, which are relevant for nuclear femtoscopy, are both bosonic particles). The reader may find it surprising that the HBT idea can also be used in high energy physics to probe nuclear distances, see e.g. \cite{Baym,Lisa}. The principle is the same as in the stellar interferometer and we briefly review its role in femtoscopy before proceeding to show how it also acts as an entanglement witness. We can think of various high energy nuclear processes as resulting in the emission of pions. In the crudest of approximations, we could represent states of the pions as plane waves. These plane waves, emerging from different parts of the nucleus, would then interfere. If we are talking about the pions that have the same charge, then there is an amplitude for two pions to arrive at two detectors in two different ways. \begin{equation} A \propto e^{ik_1x_1} e^{ik_2x_2} + e^{ik_1x_2} e^{ik_2x_1} \end{equation} (there should be four different paths in general, but we are assuming that they are pairwise equal; no generality is lost). Given that they are indistinguishable, we must add up the amplitudes. The interference between these two is the HBT interference and it is an intensity interference since it involves the number states of pions.
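Mod-squaring this two-path amplitude produces the cross term explicitly: $|A|^2 = 2 + 2\cos[(k_1-k_2)(x_1-x_2)]$, which a two-line numerical check confirms (the specific numbers below are arbitrary):

```python
import cmath
import math

def two_path_intensity(k1, k2, x1, x2):
    # |exp(i k1 x1) exp(i k2 x2) + exp(i k1 x2) exp(i k2 x1)|^2
    A = cmath.exp(1j * (k1 * x1 + k2 * x2)) + cmath.exp(1j * (k1 * x2 + k2 * x1))
    return abs(A) ** 2

k1, k2, x1, x2 = 1.3, 0.7, 2.1, -0.4
expected = 2 + 2 * math.cos((k1 - k2) * (x1 - x2))
assert abs(two_path_intensity(k1, k2, x1, x2) - expected) < 1e-12
```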
Clearly, the amplitude above is exactly what one would get by doing a completely classical treatment (and this is not surprising as quantum physics does reproduce the classical wave behaviour in a certain, well-defined, limit). The full quantum calculation would simply have to evolve the pion creation and annihilation operators and then compute the expected values of the resulting number operators (at $x_1$ and $x_2$). We will show how this is done using a different variant of the HBT experiment. When we calculate the intensity $I=|A|^2$, the relevant terms are the cross terms $\cos \delta k \delta x$. If we have an extended source, then we have to sum up over all the relevant contributions, and the coherent contribution becomes the Fourier transform of the source (the van Cittert-Zernike theorem \cite{Born}). In the simplest instance the source could be represented by a top hat function and we then obtain a simple form of the coherence: \begin{equation} C = \frac{\sin^2 (k\alpha b/2)}{(k\alpha b/2)^2} \end{equation} where $k=2\pi/\lambda$, $b$ is the distance between the detectors and $\alpha$ is the angular size of the nucleus (or the star in the case of the original HBT). Therefore, by measuring $C$ (and knowing $\lambda$ and $b$) we can then estimate the angular size of the nucleus and therefore its diameter (since we know the distance to the detectors). The recent experiment reported in \cite{STAR} is a variant of HBT. It involves emission of entangled pairs of pions which are then detected by two detectors. Each detector detects two pions, and the possibilities are that detector A detects two positive pions (B therefore detects two negative ones), vice versa, and finally both detectors detect one positive and one negative pion. It is this last possibility that has two indistinguishable ways of happening, which is where the interference occurs.
The amplitude for this is given by \begin{equation} A \propto e^{ik_1x_1} e^{ik_2x_2}e^{ik_3x_1} e^{ik_4x_2}+ e^{ik_1x_1} e^{ik_4x_1}e^{ik_2x_1} e^{ik_3x_2} \label{amp} \end{equation} Due to the momentum conservation, there are restrictions on the $k$'s, which we will discuss below. Suffice it to say here that a full treatment will lead to the coherence that behaves as \begin{equation} C = \frac{\sin^2 (k\alpha b)}{(k\alpha b)^2}\cos^2 (k\beta b) \end{equation} where $\alpha$ is the angular distance between the two sources, and $\beta$ is the angular width of each of them. This is again a Fourier transform of the convolution of the top hat function (representing each nucleus) with two delta functions (representing the locations of the nuclei). Note that it contains both the information about the angular size of the nuclei as well as the angular distance between them. In order to show how this experiment is, in fact, a witness of the entanglement between pions, we will now present the full quantum analysis. The brief details of the experiment are as follows. Two gold nuclei are scattered at high energies off one another. This results in each emitting a rho particle, each of which subsequently decays into a positive and a negative pion. It is these pions that are ultimately detected and produce interference as outlined above. In order to write down the state of two pions, we need a quantum model for this process \cite{Gyulassy}.
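Before setting up that model, note that the two angular scales in the coherence formula above separate cleanly: one controls the broad sinc$^2$ envelope, the other the rapid cosine fringes, so both can be read off from the measured $C$ as a function of the detector separation $b$. A minimal numerical illustration (all numbers and function names below are ours, chosen only for scale):

```python
import math

def coherence(b, k, alpha, beta):
    # C(b) = [sin(k*alpha*b)/(k*alpha*b)]^2 * cos(k*beta*b)^2, as in the text
    x = k * alpha * b
    env = 1.0 if x == 0 else (math.sin(x) / x) ** 2
    return env * math.cos(k * beta * b) ** 2

k = 2 * math.pi / 500e-9          # illustrative optical-scale wavenumber
alpha, beta = 1e-6, 1e-7          # two illustrative angular scales

# the sinc envelope first vanishes at k*alpha*b = pi ...
b_env = math.pi / (k * alpha)
# ... while the fringes first vanish at k*beta*b = pi/2
b_fringe = math.pi / (2 * k * beta)
assert coherence(b_env, k, alpha, beta) < 1e-20
assert coherence(b_fringe, k, alpha, beta) < 1e-20
```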
The process of the annihilation of the rho particle and the creation of the two pions is represented by the following Hamiltonian \begin{equation} H = -g \int d^3x\, \psi^{\dagger} (x,t)\psi (x,t) \phi (x,t) \end{equation} where $g$ is the strength of the interaction, $\phi (x,t)=\phi^+ (x,t)+\phi^- (x,t)$, $\psi (x,t)=\psi^+ (x,t)+\psi^- (x,t)$ and \begin{eqnarray} \phi (x,t)^+ = \sum_k c_k e^{ik_\mu x^\mu} \nonumber\\ \phi (x,t)^- = (\phi (x,t)^+)^{\dagger}\nonumber \\ \psi (x,t)^+ = \sum_k a_k e^{ik_\mu x^\mu} \nonumber\\ \psi (x,t)^- =\sum_k b^\dagger_k e^{ik_\mu x^\mu}\nonumber\\ (\psi (x,t)^\dagger)^+ = (\psi (x,t)^-)^{\dagger} \nonumber\\ (\psi (x,t)^\dagger)^- = (\psi (x,t)^+)^{\dagger} \end{eqnarray} where $k_\mu x^\mu = kx-\omega t$. Here the $c$ operators annihilate the rho meson, the $a$ operators annihilate the positive pion and $b$ the negative pion (and the Hermitian conjugates create them). This Hamiltonian is the high energy analogue of the down-conversion process in quantum optics, where a single photon is converted into two photons through a non-linear medium. The Hamiltonian above represents $8$ different processes (in which rho can be created or destroyed and pions can also be created and destroyed), but only the term, $ca^{\dagger}b^{\dagger}$, is relevant. When we start from the state containing one rho particle and apply this Hamiltonian, we obtain a superposition of the initial state and the state with no rho particles and containing an entangled state of the pions \begin{equation} |\Psi\rangle = c_0 |\rho, 0,0\rangle + c_1 \int d\omega_1d\omega_2dq_1dq_2 f(q_1,q_2,\omega_1,\omega_2)|0,q_1\omega_1,q_2\omega_2\rangle \end{equation} subject to the condition that $q_1+q_2 = q_\rho$ and $\omega_1 + \omega_2 = \omega_\rho$ (which itself is part of the definition of the function $f$, whose exact form is not of interest to us here).
The exact form of the amplitudes is also not directly relevant as only the term containing pions will contribute to interference. The interference term, where each detector detects one positive and one negative pion, is then given by the following $8$-point correlation: \begin{equation} \langle \psi^- (y,t)(\psi^\dagger)^- (y,t)\psi^- (x,t)(\psi^\dagger)^- (x,t)(\psi^\dagger)^+ (y,t)\psi^+ (y,t) (\psi^\dagger)^+ (x,t)\psi^+ (x,t)\rangle \end{equation} where the average is taken with respect to the second term in $|\Psi\rangle$ since, as we noted, this represents the only contribution to the detection of pions (the rest constitutes the detection of the rho meson). This expression would in quantum optics be known as the $g^{(4)}$ coherence \cite{Glauber} (where instead of the $\psi$ operators we would have the positive and negative frequency electric field operators). This quantity is basically the probability to observe a positive and a negative pion in each detector. The reason why this will lead to interference is simple to see when the operators are expanded in the momentum basis \begin{equation} \langle a_p b_q a_r b_s a^\dagger_n b^\dagger_m a^\dagger_k b^\dagger_l \rangle = \langle a_p b_q a_{q_\rho-q} b_{q_\rho-p} a^\dagger_p b^\dagger_q a^\dagger_{q_\rho-q} b^\dagger_{q_\rho-p} \rangle + \langle a_p b_q a_{q_\rho-q} b_{q_\rho-p} a^\dagger_{q_\rho-q} b^\dagger_{q_\rho-p} a^\dagger_p b^\dagger_q \rangle \end{equation} where we have used the fact that the momentum states are correlated because of momentum conservation and the fact that all other terms must vanish since different number states are orthogonal to each other. Because of entanglement \cite{Vedral}, namely the fact that pions populate modes whose momenta are correlated, the $g^{(4)}$ coherence behaves effectively the same as $g^{(2)}$ for bosonic number states. The expression will reproduce the result in eq.(\ref{amp}). Note that we are neglecting the electromagnetic interactions between the pions.
A complete treatment should, of course, include the repulsion between the like pions and the attraction otherwise, but we are here interested in the dominant effect only, which is due to the particle statistics. The other two detection options, in which two pions of positive charge are detected in one detector and two of negative charge in the other, can each be assigned a similar expression (though these add up only incoherently). In the fully entangled state of pions the interfering term has a probability $1/2$ and each incoherent term has a probability of $1/4$. Here therefore the reduction in coherence is a direct consequence of the fact that in half of all cases the terms are fully distinguishable, since they contain different charges, and do not lead to interference. Given this, let us now explain why this procedure constitutes an entanglement witness. The analysis will be much simpler if we use the qubit notation, so that a positive pion corresponds to $|0\rangle$ and a negative pion to $|1\rangle$ (we are then ignoring the spatial degrees of freedom as well as the bosonic nature of the pions since qubits are fully distinguishable). The state of two entangled pions is then $|\Psi^+\rangle = |01\rangle + |10\rangle$ (not normalized). The above experiment is based on detection of two such states $12$ and $34$ at a time. The relevant probabilities for interference are given by \begin{equation} p = tr \{ P_{13}\otimes P_{24} (\rho_{12}\otimes \rho_{34})\} \end{equation} where $P$ is the projection onto the symmetric state of the respective pions (labelled by the subscripts). It is clear that if the state of pions was not entangled (but, say, just a product of the states $\pi^+$ and $\pi^-$, $|01\rangle\otimes |01\rangle$), the HBT interference would not occur. In fact, the entanglement witness here is similar to the one based on purity (or linear entropy) \cite{Horodecki}.
By measuring projections onto the symmetric and antisymmetric subspaces we can calculate the total purity of the state as well as the local purities of the subsystems \cite{Bovino}. If the total purity exceeds both of the local ones, the total state must then be an entangled one. This is intuitively clear since for maximally entangled states the total purity is one, while the local ones are both zero (since the reduced states are equal mixtures of $|0\rangle$ and $|1\rangle$). The qubit version of the high energy experiment is best understood by expanding the two entangled states (entangled state $12$ and another one $34$) in the basis of the detected qubits $13$ and $24$: \begin{equation} |\Psi_{12}^+\rangle |\Psi_{34}^+\rangle = |00\rangle_{13} |11\rangle_{24} + |11\rangle_{13} |00\rangle_{24} + |\Psi_{13}^+\rangle |\Psi_{24}^+\rangle - |\Psi_{13}^-\rangle |\Psi_{24}^-\rangle \end{equation} where $|\Psi^-\rangle = |01\rangle - |10\rangle$ (which is a state that is antisymmetric and does not occur with pions which are bosons). It is now clear that the interference comes from the last two terms where different entangled states are themselves entangled. Entangled entanglement \cite{Zeilinger} is therefore behind the higher order coherences in quantum mechanics and the femtoscopy analysed here is just one of many places where it plays a crucial role. The interplay between spatial and internal degrees of freedom based on particle statistics is, of course, well known in quantum information in general. Protocols such as entanglement swapping \cite{Omar} and entanglement distillation \cite{Paunkovic} can be performed by only exploiting the bosonic (fermionic) nature of the qubits used. In this vein, the above pionic high energy experiment can be viewed as the confirmation of the spatial entanglement of pions based on the detection of their charge properties (which could be thought of as an internal degree of freedom).
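With normalization factors restored (each Bell state carrying its $1/\sqrt{2}$, which puts explicit coefficients of $\tfrac{1}{2}$ on the right-hand side), the reordering identity above can be verified numerically. A small sketch, with helper names of our own:

```python
import itertools
import math

s = 1 / math.sqrt(2)
psi_p = {(0, 1): s, (1, 0): s}     # |Psi+>, normalized
psi_m = {(0, 1): s, (1, 0): -s}    # |Psi->, normalized
k00, k11 = {(0, 0): 1.0}, {(1, 1): 1.0}

def prod(pa, pos_a, pb, pos_b):
    """Product of two 2-qubit states placed on qubit pairs pos_a, pos_b of 4 qubits."""
    out = {}
    for (x, y), a in pa.items():
        for (u, v), b in pb.items():
            bits = [0, 0, 0, 0]
            bits[pos_a[0]], bits[pos_a[1]] = x, y
            bits[pos_b[0]], bits[pos_b[1]] = u, v
            key = tuple(bits)
            out[key] = out.get(key, 0.0) + a * b
    return out

def combine(*terms):
    out = {}
    for coef, st in terms:
        for key, v in st.items():
            out[key] = out.get(key, 0.0) + coef * v
    return out

# |Psi+>_{12} |Psi+>_{34}, qubits labelled 0..3
lhs = prod(psi_p, (0, 1), psi_p, (2, 3))
# expansion in the basis of the detected pairs (13) and (24)
rhs = combine((0.5, prod(k00, (0, 2), k11, (1, 3))),
              (0.5, prod(k11, (0, 2), k00, (1, 3))),
              (0.5, prod(psi_p, (0, 2), psi_p, (1, 3))),
              (-0.5, prod(psi_m, (0, 2), psi_m, (1, 3))))

assert all(abs(lhs.get(b, 0) - rhs.get(b, 0)) < 1e-12
           for b in itertools.product((0, 1), repeat=4))
```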
One wonders if there are further insights to be gained by cross-pollination between quantum information and high energy physics. \textit{Acknowledgments}: VV is grateful to the Moore Foundation and the Templeton Foundation for supporting his research. \end{document}
WAEC mathematics practice questions (quadratic equations):
Find the values of k in the equation $6k^2=5k+6$.
The graph of the relation $y=x^2+2x-k$ passes through the point (2,0). Find the value of k.
Given that $P=x^2+4x+2$, $Q=2x-1$ and $Q-P=2$, find x.
Form the equation whose roots are $x=\tfrac{1}{2}$ and $-\tfrac{2}{3}$.
Find the smaller value of x that satisfies the equation $x^2+7x+10=0$.
If the sum of the roots of the equation $(x-p)(2x+1)=0$ is 1, find the value of p.
If $x^2+kx+\tfrac{16}{9}$ is a perfect square, find the value of k.
The roots of a quadratic equation are $\tfrac{4}{3}$ and $-\tfrac{3}{7}$. Find the equation.
Find the value of y for which the expression $\frac{y^2-9y+18}{y^2+4y-21}$ is undefined.
If c and k are roots of $6-x-x^2=0$, find c + k.
Find the quadratic equation whose roots are $-\tfrac{1}{2}$ and 3.
Which of the following is a factor of $2-x-x^2$?
What must be added to $x^2-3x$ to make it a perfect square?
Find the equation whose roots are $\tfrac{3}{4}$ and $-4$.
Adding 42 to a given positive number gives the same result as squaring the number. Find the number.
Ada draws graphs of $y=x^2-x-2$ and $y=2x-1$ on the same axes. Which of these equations is she solving?
Find the equation whose roots are 2 and $-3\tfrac{1}{2}$.
Expand $(2x-3y)(x-5y)$.
Given that one of the roots of the equation $2x^2+(k+2)x+k=0$ is 2, find the value of k.
For what value of y is the expression $\frac{6y-1}{y^2-y-6}$ not defined?
One factor of $7x^2+33x-10$ is
The roots of a quadratic equation are $-\frac{1}{2}$ and $\frac{2}{3}$. Find the equation.
The graph of $ax^2+bx+c$ is shown in the diagram. Find the minimum value of y.
A curve is such that when y = 0, x = –2 or x = 3. Find the equation of the curve.
Find the values of x for which $\frac{x-5}{x(x-1)}$ is undefined.
Solve the equation $2x^2-x-6=0$.
The graphs of $y=x^2$ and $y=x$ intersect at which of these points?
Solve $4x^2-16x+15=0$.
Simplify $\frac{x^2-5x-14}{x^2-9x+14}$.
\begin{document} \title{\bf{Simultaneous Dense Coding}} \author{Haozhen Situ$^{1}$}\author{Daowen Qiu$^{1,2}$}\email{[email protected]} \affiliation{ $^{1}$Department of Computer Science, Zhongshan University, Guangzhou 510275, People's Republic of China\\ $^{2}$SQIG--Instituto de Telecomunica\c{c}\~{o}es, IST, TULisbon, Av. Rovisco Pais 1049-001, Lisbon, Portugal } \date{\today} \begin{abstract} We present a dense coding scheme between one sender and two receivers, which guarantees that the receivers simultaneously obtain their respective messages. In our scheme, the quantum entanglement channel is first locked by the sender so that the receivers cannot learn their messages unless they collaborate to perform the unlocking operation. We also show that the quantum Fourier transform can act as the locking operator both in simultaneous dense coding and teleportation. \end{abstract} \pacs{03.67.-a, 03.67.Hk} \maketitle \section{\label{Introduction} Introduction} Quantum entanglement \cite{HHHH09} is the key resource of quantum information theory \cite{NC00,ABH01}, especially in quantum communication \cite{GT07}. Sharing an entangled quantum state between a sender and a receiver makes it possible to perform quantum teleportation \cite{BBCJPW93} and quantum dense coding \cite{BW92}. Quantum teleportation is the process of transmitting an unknown quantum state by using shared entanglement and sending classical information; quantum dense coding is the process of transmitting 2 bits of classical information by sending part of an entangled state. Teleportation and dense coding are closely related \cite{W01,HLG00} and have been extensively studied in various ways.
For example, teleportation and dense coding that use a non-maximally entangled quantum channel have been examined \cite{LLG00,AP02,PA04,GR06,BE95,HJSWW96,HLG00,B01,MOR05,PPA05}; multipartite entangled states have also been considered as the quantum channel \cite{MW00,GT00,GTRZ03,JPOK03,BVK98,LLTL02,BLSSDM06,AP06,LQ07}; another generalization is to perform these two communication tasks under the control of a third party, so-called controlled teleportation and dense coding \cite{KB98,DLLZW05,MXA07,LQL09,HLG01,LO09}. Recently, a simultaneous quantum state teleportation scheme was proposed by Wang et al. \cite{WYGL08}, the aim of which is for all the receivers to simultaneously obtain their respective quantum states from Alice (the sender). In their scheme, Alice first performs a unitary transform to lock the entanglement channel, and therefore the receivers cannot restore their quantum states separately before performing an unlocking operation together. A natural question is whether this idea of locking the entanglement channel can be adapted to dense coding. The main purpose of this paper is to show that such a locking operator for dense coding really exists. As a result, we propose three simultaneous dense coding protocols which guarantee that the receivers simultaneously obtain their respective messages. The remainder of the paper is organized as follows. In Sec. \ref{Protocols}, we introduce three simultaneous dense coding protocols using different entanglement channels. In Sec. \ref{Teleportation}, we show that the quantum Fourier transform can alternatively be used as the locking operator in simultaneous teleportation. A brief conclusion follows in Sec. \ref{Conclusion}. \section{\label{Protocols} Protocols for Simultaneous Dense Coding} Suppose that Alice is the sender and Bob and Charlie are the receivers. 
Alice intends to send two bits $(b_1,b_2)$ to Bob and another two bits $(c_1,c_2)$ to Charlie under the condition that Bob and Charlie must collaborate to simultaneously find out what she sends. In the following three subsections, we propose three protocols using Bell state, GHZ state and W state as the entanglement channels respectively. The idea of these protocols is to perform the quantum Fourier transform on Alice's qubits before sending them to Bob and Charlie. After receiving Alice's qubits, Bob and Charlie's local states are independent of $(b_1,b_2)$ and $(c_1,c_2)$ so that they know nothing about the encoded bits. Only after performing the inverse quantum Fourier transform together can they obtain $(b_1,b_2)$ and $(c_1,c_2)$ respectively. \subsection{Protocol 1: Using Bell State} Initially, Alice, Bob and Charlie share two Einstein-Podolsky-Rosen (EPR) pairs \cite{EPR35} $\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)_{A_1B}$ and $\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)_{A_2C}$, where qubits $A_1A_2$ belong to Alice, qubits $B$ and $C$ belong to Bob and Charlie respectively. The initial quantum state of the composite system is \begin{align} |\psi(0)\rangle = \frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)_{A_1B}\otimes \frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)_{A_2C}. \end{align} The protocol consists of four steps. (1) Alice performs unitary transforms $U(b_1b_2)$ on qubit $A_1$ and $U(c_1c_2)$ on qubit $A_2$ to encode her bits, like the original dense coding scheme \cite{BW92}. 
After that, the state of the composite system becomes \begin{align}|\psi(1)\rangle = U_{A_1}(b_1b_2)\otimes U_{A_2}(c_1c_2)|\psi(0)\rangle \nonumber\\ = |\phi(b_1b_2)\rangle_{A_1B} \otimes |\phi(c_1c_2)\rangle_{A_2C}, \end{align} where \begin{align} & U(00) = I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, U(01) = \sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},\nonumber\\ & U(10) = \sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, U(11) = \sigma_z\sigma_x = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \nonumber\\ & |\phi (xy)\rangle= \frac{1}{\sqrt{2}}(|0x\rangle+(-1)^{y}|1\overline{x}\rangle). \end{align} (2) Alice performs the quantum Fourier transform \begin{align} QFT = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1\\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{bmatrix}\end{align} on qubits $A_1A_2$ to lock the entanglement channel, and then sends $A_1$ to Bob and $A_2$ to Charlie. The state of the composite system becomes \begin{align} |\psi(2)\rangle = QFT_{A_1A_2}[|\phi(b_1b_2)\rangle_{A_1B} \otimes |\phi(c_1c_2)\rangle_{A_2C}]. \end{align} (3) Bob and Charlie collaborate to perform $QFT^\dagger$ on qubits $A_1A_2$. The state of the composite system becomes \begin{align} |\psi(3)\rangle = & QFT_{A_1A_2}^\dagger QFT_{A_1A_2}[|\phi(b_1b_2)\rangle_{A_1B} \otimes |\phi(c_1c_2)\rangle_{A_2C}] \nonumber\\ = & |\phi(b_1b_2)\rangle_{A_1B} \otimes |\phi(c_1c_2)\rangle_{A_2C}. \end{align} (4) Bob and Charlie perform the Bell State Measurement on qubits $A_1B$ and $A_2C$ respectively to obtain $(b_1,b_2)$ and $(c_1,c_2)$, like the original dense coding scheme \cite{BW92}. The following theorem demonstrates that neither Bob nor Charlie alone can distinguish his two-qubit quantum state (i.e. $\rho_{A_1B}, \rho_ {A_2C}$) before step 3. Therefore, they cannot learn the encoded bits from their quantum states unless they collaborate. 
{\it Theorem 1.} For each $b_1,b_2,c_1,c_2\in\{0,1\}, \ \rho_{A_1B}=\rho_{A_2C}=I/4$, where $\rho_{A_1B}$ and $\rho_{A_2C}$ are the reduced density matrices in subsystems $A_1B$ and $A_2C$ after step 2 (but before step 3). {\it Proof.} After step 1, the quantum states of qubits $A_1B$ and $A_2C$ are $|\phi(b_1b_2)\rangle$ and $|\phi(c_1c_2)\rangle$ respectively. The state of the composite system after step 1 can be written as \begin{align} |\psi(1)\rangle =|\phi(b_1b_2)\rangle_{A_1B}\otimes |\phi(c_1c_2)\rangle_{A_2C}. \end{align} After step 2, the state of the composite system becomes \begin{align} |\psi(2)\rangle = & QFT_{A_1A_2} [\frac{1}{\sqrt{2}}(|0b_1\rangle+(-1)^{b_2}|1\overline{b_1}\rangle)_{A_1B}\nonumber\\ & \otimes \frac{1}{\sqrt{2}} (|0c_1\rangle+(-1)^{c_2}|1\overline{c_1}\rangle)_{A_2C}]\nonumber\\ = & \frac{1}{2}QFT_{A_1A_2}(|00b_1c_1\rangle+(-1)^{c_2}|01b_1\overline{c_1}\rangle\nonumber\\ & +(-1)^{b_2}|10\overline{b_1}c_1\rangle+(-1)^{b_2+c_2}|11\overline{b_1c_1}\rangle)_{A_1A_2BC} \nonumber\\ = & \frac{1}{4}[(|00\rangle+|01\rangle+|10\rangle+|11\rangle)|b_1c_1\rangle+(-1)^{c_2}(|00\rangle\nonumber\\ & +i|01\rangle-|10\rangle-i|11\rangle)|b_1\overline{c_1}\rangle+(-1)^{b_2}(|00\rangle\nonumber\\ &-|01\rangle+|10\rangle-|11\rangle)|\overline{b_1}c_1\rangle+(-1)^{b_2+c_2}(|00\rangle\nonumber\\ &-i|01\rangle -|10\rangle+i|11\rangle)|\overline{b_1c_1}\rangle]_{A_1A_2BC}. 
\end{align} The reduced density matrix in subsystem $A_1B$ is \begin{align} \rho_{A_1B} = & _{A_2C}\langle 0c_1|\psi(2)\rangle\langle\psi(2)|0c_1\rangle_{A_2C} + _{A_2C}\langle 0\overline{c_1}|\psi(2)\rangle\nonumber\\&\langle\psi(2)|0\overline{c_1}\rangle_{A_2C} + _{A_2C}\langle 1c_1|\psi(2)\rangle\langle\psi(2)|1c_1\rangle_{A_2C} \nonumber\\& + _{A_2C}\langle 1\overline{c_1}|\psi(2)\rangle\langle\psi(2)|1\overline{c_1}\rangle_{A_2C}\nonumber\\ = & \frac{1}{4}(|0b_1\rangle\langle 0b_1|+|0\overline{b_1}\rangle\langle 0\overline{b_1}|+|1b_1\rangle\langle 1b_1|+|1\overline{b_1}\rangle\langle 1\overline{b_1}|)\nonumber\\ = & I/4. \end{align}The reduced density matrix in subsystem $A_2C$ is \begin{align} \rho_{A_2C} = & _{A_1B}\langle 0b_1|\psi(2)\rangle\langle\psi(2)|0b_1\rangle_{A_1B} + _{A_1B}\langle 0\overline{b_1}|\psi(2)\rangle\nonumber\\&\langle\psi(2)|0\overline{b_1}\rangle_{A_1B}+ _{A_1B}\langle 1b_1|\psi(2)\rangle\langle\psi(2)|1b_1\rangle_{A_1B}\nonumber\\& + _{A_1B}\langle 1\overline{b_1}|\psi(2)\rangle\langle\psi(2)|1\overline{b_1}\rangle_{A_1B}\nonumber\\ = & \frac{1}{4}(|0c_1\rangle\langle 0c_1|+|0\overline{c_1}\rangle\langle 0\overline{c_1}|+|1c_1\rangle\langle 1c_1|+|1\overline{c_1}\rangle\langle 1\overline{c_1}|)\nonumber\\ = & I/4. \end{align}\qed \subsection{Protocol 2: Using GHZ State} Initially, Alice, Bob and Charlie share two Greenberger-Horne-Zeilinger (GHZ) states \cite{GHZ89} $\frac{1}{\sqrt{2}}(|000\rangle +|111\rangle)_{A_1B_1B_2}$ and $\frac{1}{\sqrt{2}}(|000\rangle +|111\rangle)_{A_2C_1C_2}$, where qubits $A_1A_2$ belong to Alice, qubits $B_1B_2$ and $C_1C_2$ belong to Bob and Charlie, respectively. The initial quantum state of the composite system is \begin{align} |\psi(0)\rangle = & \frac{1}{{\sqrt{2}}}(|000\rangle +|111\rangle)_{A_1B_1B_2}\nonumber\\&\otimes \frac{1}{{\sqrt{2}}}(|000\rangle +|111\rangle)_{A_2C_1C_2}. \end{align} The protocol consists of four steps. 
(1) Alice performs unitary transforms $U(b_1b_2)$ on qubits $A_1$ and $U(c_1c_2)$ on $A_2$ to encode her bits. After that, the state of the composite system becomes \begin{align}|\psi(1)\rangle = & U_{A_1}(b_1b_2)\otimes U_{A_2}(c_1c_2)|\psi(0)\rangle \nonumber\\= & |GHZ(b_1b_2)\rangle_{A_1B_1B_2} \otimes |GHZ(c_1c_2)\rangle_{A_2C_1C_2}, \end{align} where \begin{align} |GHZ(xy)\rangle= \frac{1}{\sqrt{2}}(|0xx\rangle+(-1)^{y}|1\overline{xx}\rangle). \end{align} (2) Alice performs the quantum Fourier transform on qubits $A_1A_2$ to lock the entanglement channel, and then sends $A_1$ to Bob and $A_2$ to Charlie. The state of the composite system becomes \begin{align} |\psi(2)\rangle = & QFT_{A_1A_2}[|GHZ(b_1b_2)\rangle_{A_1B_1B_2}\nonumber\\ & \otimes |GHZ(c_1c_2)\rangle_{A_2C_1C_2}]. \end{align} (3) Bob and Charlie collaborate to perform $QFT^\dagger$ on qubits $A_1A_2$. The state of the composite system becomes \begin{align}|\psi(3)\rangle = & QFT_{A_1A_2}^\dagger QFT_{A_1A_2}[|GHZ(b_1b_2)\rangle_{A_1B_1B_2} \nonumber\\& \otimes |GHZ(c_1c_2)\rangle_{A_2C_1C_2}] \nonumber\\ = & |GHZ(b_1b_2)\rangle_{A_1B_1B_2} \otimes |GHZ(c_1c_2)\rangle_{A_2C_1C_2}. \end{align} (4) Bob and Charlie make the von Neumann measurement using the orthogonal states $\{|GHZ(xy)\rangle\}_{xy}$ on qubits $A_1B_1B_2$ and $A_2C_1C_2$ respectively to obtain $(b_1,b_2)$ and $(c_1,c_2)$. The following theorem demonstrates that neither Bob nor Charlie alone can distinguish his three-qubit quantum state (i.e. $\rho_{A_1B_1B_2}, \rho_ {A_2C_1C_2}$) before step 3. Therefore, they cannot learn the encoded bits from their quantum states unless they collaborate. {\it Theorem 2.} $\rho_{A_1B_1B_2}$ and $\rho_{A_2C_1C_2}$ are independent of $b_1,b_2,c_1,c_2$, where $\rho_{A_1B_1B_2}$ and $\rho_{A_2C_1C_2}$ are the reduced density matrices in subsystems $A_1B_1B_2$ and $A_2C_1C_2$ after step 2 (but before step 3), respectively. 
{\it Proof.} After step 1, the quantum states of qubits $A_1B_1B_2$ and $A_2C_1C_2$ are $|GHZ(b_1b_2)\rangle$ and $|GHZ(c_1c_2)\rangle$, respectively. The state of the composite system after step 1 can be written as \begin{align} |\psi(1)\rangle = |GHZ(b_1b_2)\rangle_{A_1B_1B_2}\otimes |GHZ(c_1c_2)\rangle_{A_2C_1C_2}. \end{align} After step 2, the state of the composite system becomes \begin{align} |\psi(2)\rangle = & QFT_{A_1A_2}[\frac{1}{\sqrt{2}}(|0b_1b_1\rangle+(-1)^{b_2}|1\overline{b_1b_1}\rangle)_{A_1B_1B_2}\nonumber\\&\otimes \frac{1}{\sqrt{2}} (|0c_1c_1\rangle+(-1)^{c_2}|1\overline{c_1c_1}\rangle)_{A_2C_1C_2}]\nonumber\\ = & \frac{1}{2}QFT_{A_1A_2}(|00\rangle\otimes |b_1b_1c_1c_1\rangle+(-1)^{c_2}|01\rangle \nonumber\\ & \otimes |b_1b_1\overline{c_1c_1}\rangle +(-1)^{b_2}|10\rangle\otimes |\overline{b_1b_1}c_1c_1\rangle \nonumber\\ & +(-1)^{b_2+c_2}|11\rangle\otimes |\overline{b_1b_1c_1c_1}\rangle)_{A_1A_2B_1B_2C_1C_2} \nonumber\\ = & \frac{1}{4}[(|00\rangle+|01\rangle+|10\rangle+|11\rangle)\otimes|b_1b_1c_1c_1\rangle \nonumber\\ & +(-1)^{c_2} (|00\rangle+i|01\rangle-|10\rangle-i|11\rangle)\otimes|b_1b_1\overline{c_1c_1}\rangle \nonumber\\ & +(-1)^{b_2} (|00\rangle-|01\rangle+|10\rangle-|11\rangle)\otimes|\overline{b_1b_1}c_1c_1\rangle \nonumber\\ & +(-1)^{b_2+c_2}(|00\rangle-i|01\rangle-|10\rangle+i|11\rangle)\nonumber\\ & \otimes|\overline{b_1b_1c_1c_1}\rangle]_{A_1A_2B_1B_2C_1C_2}. 
\end{align} The reduced density matrix in subsystem $A_1B_1B_2$ is \begin{align} \rho_{A_1B_1B_2} = & _{A_2C_1C_2}\langle 0c_1c_1|\psi(2)\rangle\langle\psi(2)|0c_1c_1\rangle_{A_2C_1C_2} \nonumber\\& + _{A_2C_1C_2}\langle 0\overline{c_1c_1}|\psi(2)\rangle\langle\psi(2)|0\overline{c_1c_1}\rangle_{A_2C_1C_2}\nonumber\\ & + _{A_2C_1C_2}\langle 1c_1c_1|\psi(2)\rangle\langle\psi(2)|1c_1c_1\rangle_{A_2C_1C_2} \nonumber\\&+ _{A_2C_1C_2}\langle 1\overline{c_1c_1}|\psi(2)\rangle\langle\psi(2)|1\overline{c_1c_1}\rangle_{A_2C_1C_2}\nonumber\\ = & \frac{1}{4}(|0b_1b_1\rangle\langle 0b_1b_1|+|0\overline{b_1b_1}\rangle\langle 0\overline{b_1b_1}| \nonumber\\& +|1b_1b_1\rangle\langle 1b_1b_1|+|1\overline{b_1b_1}\rangle\langle 1\overline{b_1b_1}|)\nonumber\\ = & \frac{1}{4}(|000\rangle\langle 000|+|011\rangle\langle 011|+|100\rangle\langle 100|\nonumber\\&+|111\rangle\langle 111|). \end{align}The reduced density matrix in subsystem $A_2C_1C_2$ is \begin{align} \rho_{A_2C_1C_2} = & _{A_1B_1B_2}\langle 0b_1b_1|\psi(2)\rangle\langle\psi(2)|0b_1b_1\rangle_{A_1B_1B_2} \nonumber\\& + _{A_1B_1B_2}\langle 0\overline{b_1b_1}|\psi(2)\rangle\langle\psi(2)|0\overline{b_1b_1}\rangle_{A_1B_1B_2}\nonumber\\ & + _{A_1B_1B_2}\langle 1b_1b_1|\psi(2)\rangle\langle\psi(2)|1b_1b_1\rangle_{A_1B_1B_2} \nonumber\\& + _{A_1B_1B_2}\langle 1\overline{b_1b_1}|\psi(2)\rangle\langle\psi(2)|1\overline{b_1b_1}\rangle_{A_1B_1B_2}\nonumber\\ = & \frac{1}{4}(|0c_1c_1\rangle\langle 0c_1c_1|+|0\overline{c_1c_1}\rangle\langle 0\overline{c_1c_1}|\nonumber\\& +|1c_1c_1\rangle\langle 1c_1c_1|+|1\overline{c_1c_1}\rangle\langle 1\overline{c_1c_1}|)\nonumber\\ = & \frac{1}{4}(|000\rangle\langle 000|+|011\rangle\langle 011|+|100\rangle\langle 100|\nonumber\\& +|111\rangle\langle 111|). 
\end{align}\qed \subsection{Protocol 3: Using W State} Initially, Alice, Bob and Charlie share two W states \cite{DVC00,AP06} $\frac{1}{2}(|010\rangle +|001\rangle+\sqrt{2}|100\rangle)_{A_1B_1B_2}$ and $\frac{1}{2}(|010\rangle +|001\rangle+\sqrt{2}|100\rangle)_{A_2C_1C_2}$, where qubits $A_1A_2$ belong to Alice, qubits $B_1B_2$ and $C_1C_2$ belong to Bob and Charlie, respectively. The initial quantum state of the composite system is \begin{align} |\psi(0)\rangle = & \frac{1}{2}(|010\rangle +|001\rangle+\sqrt{2}|100\rangle)_{A_1B_1B_2}\nonumber\\& \otimes \frac{1}{2}(|010\rangle +|001\rangle+\sqrt{2}|100\rangle)_{A_2C_1C_2}. \end{align} The protocol consists of four steps. (1) Alice performs unitary transforms $U(b_1b_2)$ on qubits $A_1$ and $U(c_1c_2)$ on $A_2$ to encode her bits. After that, the state of the composite system becomes \begin{align}|\psi(1)\rangle = & U_{A_1}(b_1b_2)\otimes U_{A_2}(c_1c_2)|\psi(0)\rangle \nonumber\\ = & |W(b_1b_2)\rangle_{A_1B_1B_2} \otimes |W(c_1c_2)\rangle_{A_2C_1C_2}, \end{align} where \begin{align} |W(xy)\rangle= \frac{1}{2}(|x10\rangle+|x01\rangle+(-1)^{y}\sqrt{2}|\overline{x}00\rangle). \end{align} (2) Alice performs the quantum Fourier transform on qubits $A_1A_2$ to lock the entanglement channel, and then sends $A_1$ to Bob and $A_2$ to Charlie. The state of the composite system becomes \begin{align} |\psi(2)\rangle = & QFT_{A_1A_2}[|W(b_1b_2)\rangle_{A_1B_1B_2} \nonumber\\&\otimes |W(c_1c_2)\rangle_{A_2C_1C_2}]. \end{align} (3) Bob and Charlie collaborate to perform $QFT^\dagger$ on qubits $A_1A_2$. The state of the composite system becomes \begin{align}|\psi(3)\rangle = & QFT_{A_1A_2}^\dagger QFT_{A_1A_2}[|W(b_1b_2)\rangle_{A_1B_1B_2} \nonumber\\& \otimes |W(c_1c_2)\rangle_{A_2C_1C_2}] \nonumber\\ = & |W(b_1b_2)\rangle_{A_1B_1B_2} \otimes |W(c_1c_2)\rangle_{A_2C_1C_2}. 
\end{align} (4) Bob and Charlie make the von Neumann measurement using the orthogonal states $\{|W(xy)\rangle\}_{xy}$ on qubits $A_1B_1B_2$ and $A_2C_1C_2$ respectively to obtain $(b_1,b_2)$ and $(c_1,c_2)$. The following theorem demonstrates that neither Bob nor Charlie alone can distinguish his three-qubit quantum state (i.e. $\rho_{A_1B_1B_2}, \rho_ {A_2C_1C_2}$) before step 3. Therefore, they cannot learn the encoded bits from their quantum states unless they collaborate. {\it Theorem 3.} $\rho_{A_1B_1B_2}$ and $\rho_{A_2C_1C_2}$ are independent of $b_1,b_2,c_1,c_2$, where $\rho_{A_1B_1B_2}$ and $\rho_{A_2C_1C_2}$ are the reduced density matrices in subsystems $A_1B_1B_2$ and $A_2C_1C_2$ after step 2 (but before step 3), respectively. {\it Proof.} After step 1, the quantum states of qubits $A_1B_1B_2$ and $A_2C_1C_2$ are $|W(b_1b_2)\rangle$ and $|W(c_1c_2)\rangle$ respectively. The state of the composite system after step 1 can be written as \begin{align} |\psi(1)\rangle = |W(b_1b_2)\rangle_{A_1B_1B_2}\otimes |W(c_1c_2)\rangle_{A_2C_1C_2}. \end{align} After step 2, the state of the composite system becomes \begin{align} |\psi(2)\rangle = & QFT_{A_1A_2}\{\frac{1}{2}[|b_1\rangle(|01\rangle+|10\rangle)\nonumber\\& +(-1)^{b_2}\sqrt{2}|\overline{b_1}00\rangle]_{A_1B_1B_2} \otimes \frac{1}{2} [|c_1\rangle(|01\rangle+|10\rangle)\nonumber\\& +(-1)^{c_2}\sqrt{2}|\overline{c_1}00\rangle]_{A_2C_1C_2}\}\nonumber\\ = & \frac{1}{4}QFT_{A_1A_2}[|b_1c_1\rangle\otimes(|01\rangle+|10\rangle)\otimes(|01\rangle+|10\rangle)\nonumber\\& + |b_1\overline{c_1}\rangle\otimes(-1)^{c_2}\sqrt{2}(|01\rangle+|10\rangle)\otimes|00\rangle + |\overline{b_1}c_1\rangle \nonumber\\& \otimes(-1)^{b_2}\sqrt{2}|00\rangle\otimes(|01\rangle+|10\rangle) + |\overline{b_1c_1}\rangle\nonumber\\& \otimes(-1)^{b_2+c_2}2|00\rangle\otimes|00\rangle]_{A_1A_2B_1B_2C_1C_2}. 
\end{align} We notice that $QFT|xy\rangle = \frac{1}{2}[|00\rangle+(-1)^{x}i^y|01\rangle\\+(-1)^{y}|10\rangle+(-1)^{x}(-i)^{y}|11\rangle]$, and thus \begin{align} |\psi(2)\rangle = & \frac{1}{8}\{[|00\rangle+(-1)^{b_1}i^{c_1}|01\rangle+(-1)^{c_1}|10\rangle +(-1)^{b_1}\nonumber\\& (-i)^{c_1}|11\rangle]\otimes(|01\rangle+|10\rangle)\otimes(|01\rangle+|10\rangle)\nonumber\\& + [|00\rangle+(-1)^{b_1}i^{\overline{c_1}}|01\rangle-(-1)^{c_1}|10\rangle+(-1)^{b_1}\nonumber\\& (-i)^{\overline{c_1}}|11\rangle]\otimes(-1)^{c_2}\sqrt{2}(|01\rangle+|10\rangle)\otimes|00\rangle \nonumber\\ & + [|00\rangle-(-1)^{b_1}i^{c_1}|01\rangle+(-1)^{c_1}|10\rangle-(-1)^{b_1}\nonumber\\&(-i)^{c_1}|11\rangle]\otimes(-1)^{b_2}\sqrt{2}|00\rangle\otimes(|01\rangle+|10\rangle) \nonumber\\ & + [|00\rangle-(-1)^{b_1}i^{\overline{c_1}}|01\rangle-(-1)^{c_1}|10\rangle-(-1)^{b_1}\nonumber\\& (-i)^{\overline{c_1}}|11\rangle]\otimes(-1)^{b_2+c_2}2|00\rangle|00\rangle \}_{A_1A_2B_1B_2C_1C_2}. \end{align}The reduced density matrix in subsystem $A_1B_1B_2$ is \begin{align} \rho_{A_1B_1B_2} = & _{A_2C_1C_2}\langle 000|\psi(2)\rangle\langle\psi(2)|000\rangle_{A_2C_1C_2} \nonumber\\ &+ _{A_2C_1C_2}\langle 100|\psi(2)\rangle\langle\psi(2)|100\rangle_{A_2C_1C_2}\nonumber\\ & + _{A_2C_1C_2}\langle 001|\psi(2)\rangle\langle\psi(2)|001\rangle_{A_2C_1C_2} \nonumber\\ &+ _{A_2C_1C_2}\langle 101|\psi(2)\rangle\langle\psi(2)|101\rangle_{A_2C_1C_2}\nonumber\\ & + _{A_2C_1C_2}\langle 010|\psi(2)\rangle\langle\psi(2)|010\rangle_{A_2C_1C_2} \nonumber\\ &+ _{A_2C_1C_2}\langle 110|\psi(2)\rangle\langle\psi(2)|110\rangle_{A_2C_1C_2}\nonumber\\ = & \frac{1}{8}(2|000\rangle\langle 000|+|001\rangle\langle 001|+|001\rangle\langle 010|\nonumber\\ &+|010\rangle\langle 001|+|010\rangle\langle 010|+2|100\rangle\langle 100|\nonumber\\ &+|101\rangle\langle 101|+|101\rangle\langle 110|+|110\rangle\langle 101|\nonumber\\ &+|110\rangle\langle 110|). 
\end{align}The reduced density matrix in subsystem $A_2C_1C_2$ is \begin{align} \rho_{A_2C_1C_2} = & _{A_1B_1B_2}\langle 000|\psi(2)\rangle\langle\psi(2)|000\rangle_{A_1B_1B_2}\nonumber\\ & + _{A_1B_1B_2}\langle 100|\psi(2)\rangle\langle\psi(2)|100\rangle_{A_1B_1B_2}\nonumber\\ & + _{A_1B_1B_2}\langle 001|\psi(2)\rangle\langle\psi(2)|001\rangle_{A_1B_1B_2}\nonumber\\ & + _{A_1B_1B_2}\langle 101|\psi(2)\rangle\langle\psi(2)|101\rangle_{A_1B_1B_2}\nonumber\\ & + _{A_1B_1B_2}\langle 010|\psi(2)\rangle\langle\psi(2)|010\rangle_{A_1B_1B_2}\nonumber\\ & + _{A_1B_1B_2}\langle 110|\psi(2)\rangle\langle\psi(2)|110\rangle_{A_1B_1B_2}\nonumber\\ = & \frac{1}{8}(2|000\rangle\langle 000|+|001\rangle\langle 001|+|001\rangle\langle 010|\nonumber\\ &+|010\rangle\langle 001|+|010\rangle\langle 010|+2|100\rangle\langle 100|\nonumber\\ &+|101\rangle\langle 101|+|101\rangle\langle 110|+|110\rangle\langle 101|\nonumber\\ &+|110\rangle\langle 110|). \end{align}\qed \subsection{\label{Locking Operator} Locking Operator} We notice that the locking operator used in simultaneous teleportation \cite{WYGL08} is not suitable for simultaneous dense coding. To explain why, we calculate the reduced density matrix in subsystem $A_1B$ when that locking operator is used instead of the quantum Fourier transform, with the Bell state as the entanglement channel. The situations of using GHZ and W states as entanglement channels are similar. The locking operator used in simultaneous teleportation \cite{WYGL08} is \begin{align} U(LOCK)_{12}=H_1CNOT_{12}=\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 0 & 0 & 1\\0 & 1 & 1 & 0\\1 & 0 & 0 & -1\\0 & 1 & -1 & 0\end{bmatrix} ,\end{align} where $H$ is the Hadamard transform, $CNOT$ is the controlled-NOT gate, qubit 1 is the control qubit and qubit 2 the target qubit. After step 1, the state of the composite system can be written as \begin{align} |\psi'(1)\rangle =|\phi(b_1b_2)\rangle_{A_1B}\otimes |\phi(c_1c_2)\rangle_{A_2C}. 
\end{align}After step 2, the state of the composite system becomes \begin{align} |\psi'(2)\rangle = & U(LOCK)_{A_1A_2}[\frac{1}{\sqrt{2}}(|0b_1\rangle+(-1)^{b_2}|1\overline{b_1}\rangle)_{A_1B}\nonumber\\ & \otimes \frac{1}{\sqrt{2}} (|0c_1\rangle+(-1)^{c_2}|1\overline{c_1}\rangle)_{A_2C}]\nonumber\\ = & \frac{1}{2}U(LOCK)_{A_1A_2}(|00b_1c_1\rangle+(-1)^{c_2}|01b_1\overline{c_1}\rangle \nonumber\\ & +(-1)^{b_2}|10\overline{b_1}c_1\rangle+(-1)^{b_2+c_2}|11\overline{b_1c_1}\rangle)_{A_1A_2BC} \nonumber\\ = & \frac{1}{2\sqrt{2}}[(|00\rangle+|10\rangle)|b_1c_1\rangle+(-1)^{c_2}(|01\rangle+|11\rangle)\nonumber\\ &|b_1\overline{c_1}\rangle+(-1)^{b_2}(|01\rangle-|11\rangle)|\overline{b_1}c_1\rangle+(-1)^{b_2+c_2}\nonumber\\ &(|00\rangle-|10\rangle)|\overline{b_1c_1}\rangle]_{A_1A_2BC}. \end{align}The reduced density matrix in subsystem $A_1B$ is \begin{align} \rho'_{A_1B} = & _{A_2C}\langle 0c_1|\psi'(2)\rangle\langle\psi'(2)|0c_1\rangle_{A_2C} + _{A_2C}\langle 0\overline{c_1}|\psi'(2)\rangle\nonumber\\ &\langle\psi'(2)|0\overline{c_1}\rangle_{A_2C}+ _{A_2C}\langle 1c_1|\psi'(2)\rangle\langle\psi'(2)|1c_1\rangle_{A_2C}\nonumber\\ & + _{A_2C}\langle 1\overline{c_1}|\psi'(2)\rangle\langle\psi'(2)|1\overline{c_1}\rangle_{A_2C}\nonumber\\ = & \frac{1}{4}(|0b_1\rangle\langle 0b_1|+|0b_1\rangle\langle 1b_1|+|0\overline{b_1}\rangle\langle 0\overline{b_1}|\nonumber\\ &-|0\overline{b_1}\rangle\langle 1\overline{b_1}|+|1b_1\rangle\langle 0b_1|+|1b_1\rangle\langle 1b_1|\nonumber\\ &-|1\overline{b_1}\rangle\langle 0\overline{b_1}|+|1\overline{b_1}\rangle\langle 1\overline{b_1}|). \end{align}Since $\rho'_{A_1B}$ is only dependent on $b_1$, we denote it as $\rho'_{A_1B}(b_1)$. 
We have \begin{align}\rho'_{A_1B}(0) = \frac{1}{4} \begin{bmatrix}1 & 0 & 1 & 0\\0 & 1 & 0 & -1\\1 & 0 & 1 & 0\\0 & -1 & 0 & 1\end{bmatrix}\end{align} and \begin{align}\rho'_{A_1B}(1) = \frac{1}{4} \begin{bmatrix}1 & 0 & -1 & 0\\0 & 1 & 0 & 1\\-1 & 0 & 1 & 0\\0 & 1 & 0 & 1\end{bmatrix}.\end{align} Since $\rho'_{A_1B}(0)\rho'_{A_1B}(1)=0$, Bob can distinguish these two states and obtain $b_1$ by a POVM measurement on qubits $A_1B$. Similarly, Charlie can also obtain $c_2$ by a POVM measurement on qubits $A_2C$. Each receiver can learn 1 bit of his information before they agree to simultaneously find out what Alice sends. The aim of simultaneous dense coding is not achieved when $U(LOCK)$ is used instead of the quantum Fourier transform. \section{\label{Teleportation} Simultaneous Teleportation Using Quantum Fourier Transform} In this section, we show that the quantum Fourier transform can alternatively be used as the locking operator in simultaneous teleportation. Let us begin with a brief review of simultaneous teleportation between one sender and two receivers \cite{WYGL08}. Suppose that Alice intends to teleport $|\varphi_1\rangle_{T_1}=\alpha_1|0\rangle_{T_1}+\beta_1|1\rangle_{T_1}$ to Bob and $|\varphi_2\rangle_{T_2}=\alpha_2|0\rangle_{T_2}+\beta_2|1\rangle_{T_2}$ to Charlie under the condition that Bob and Charlie must collaborate to simultaneously obtain their respective quantum states. Initially, Alice, Bob and Charlie share two EPR pairs $\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)_{A_1B}$ and $\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)_{A_2C}$, where qubits $A_1A_2$ belong to Alice, qubits $B$ and $C$ belong to Bob and Charlie respectively. Then the initial quantum state of the composite system is \begin{align} |\chi(0)\rangle= & |\varphi_1\rangle_{T_1}\otimes|\varphi_2\rangle_{T_2}\otimes \frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)_{A_1B}\nonumber\\ &\otimes \frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)_{A_2C}. 
\end{align} The scheme of simultaneous teleportation consists of five steps. (1) Alice performs the unitary transform $U(LOCK)$ on qubits $A_1A_2$ to lock the entanglement channel. After that, the state of the composite system becomes \begin{align} |\chi(1)\rangle = & |\varphi_1\rangle_{T_1}\otimes|\varphi_2\rangle_{T_2}\otimes U(LOCK)_{A_1A_2}[\frac{1}{\sqrt{2}}(|00\rangle\nonumber\\ &+|11\rangle)_{A_1B}\otimes \frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)_{A_2C}]. \end{align} (2) Alice performs the Bell State Measurement on qubits $A_1T_1$ and $A_2T_2$, like the original teleportation scheme \cite{BBCJPW93}. It is easy to prove that $|\chi(1)\rangle$ can be written as \begin{align} |\chi(1)\rangle = & \frac{1}{4} \sum_{x_1=0}^{1}\sum_{y_1=0}^{1}\sum_{x_2=0}^{1}\sum_{y_2=0}^{1}|\phi(x_1y_1)\rangle_{A_1T_1}|\phi(x_2y_2)\rangle_{A_2T_2}\nonumber\\ & U(LOCK)^{\dagger}_{BC}[U_B(x_1y_1)|\varphi_1\rangle_B\otimes U_C(x_2y_2)|\varphi_2\rangle_C]. \end{align} If the measurement results are $|\phi(x_1y_1)\rangle_{A_1T_1}$ and $|\phi(x_2y_2)\rangle_{A_2T_2}$, the state of qubits $BC$ collapses into \begin{align} |\chi(2)\rangle =U(LOCK)_{BC}^{\dagger}[U_B(x_1y_1)|\varphi_1\rangle_B\otimes U_C(x_2y_2)|\varphi_2\rangle_C]. \end{align} (3) Alice sends the measurement results $(x_1,y_1)$ to Bob and $(x_2,y_2)$ to Charlie. (4) Bob and Charlie collaborate to perform $U(LOCK)$ on qubits $BC$, and then the state of $BC$ becomes \begin{align} |\chi(3)\rangle= & U(LOCK)_{BC} U(LOCK)_{BC}^{\dagger} [U_B(x_1y_1)|\varphi_1\rangle_B\nonumber\\ &\otimes U_C(x_2y_2)|\varphi_2\rangle_C]\nonumber\\ = &U_B(x_1y_1)|\varphi_1\rangle_B\otimes U_C(x_2y_2)|\varphi_2\rangle_C. \end{align} (5) Bob and Charlie perform $U(x_1y_1)$ and $U(x_2y_2)$ on qubits $B$ and $C$ respectively to obtain $|\varphi_1\rangle$ and $|\varphi_2\rangle$, respectively, like the original teleportation scheme \cite{BBCJPW93}. In the above simultaneous teleportation scheme, $U(LOCK)$ is used to lock the entanglement channel. 
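Two claims made earlier can be checked numerically: Theorem 1 (after the QFT lock, each receiver's reduced state is the maximally mixed state $I/4$ for every encoding) and the Locking Operator analysis ($\rho'_{A_1B}(0)\rho'_{A_1B}(1)=0$, so $U(LOCK)$ leaks $b_1$ to Bob). The following numpy sketch is illustrative and not part of the paper; it builds $|\psi(1)\rangle$, applies a lock to $A_1A_2$, and traces out $A_2$ and $C$:

```python
import numpy as np

# Two-qubit QFT (the paper's 4x4 matrix, entries i^{jk}/2) and
# U(LOCK) = H_1 CNOT_12 used by the simultaneous teleportation scheme.
QFT = np.array([[1j ** (j * k) for k in range(4)] for j in range(4)]) / 2
H, I2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
ULOCK = np.kron(H, I2) @ CNOT  # equals the matrix quoted in the text

def phi(x, y):
    # |phi(xy)> = (|0 x> + (-1)^y |1 xbar>)/sqrt(2), indices (Alice qubit, receiver qubit)
    t = np.zeros((2, 2), dtype=complex)
    t[0, x] = 1 / np.sqrt(2)
    t[1, 1 - x] = (-1) ** y / np.sqrt(2)
    return t

def rho_A1B(lock, b1, b2, c1, c2):
    # |psi(1)> with tensor indices ordered (a1, a2, b, c), then the lock on A1A2
    psi = np.einsum('ab,cd->acbd', phi(b1, b2), phi(c1, c2))
    psi = (lock @ psi.reshape(4, 4)).reshape(2, 2, 2, 2)
    # partial trace over A2 and C gives Bob's reduced state on A1B
    return np.einsum('iajc,kalc->ijkl', psi, psi.conj()).reshape(4, 4)

# Theorem 1: with the QFT lock, Bob's state is I/4 for every encoding.
for bits in np.ndindex(2, 2, 2, 2):
    assert np.allclose(rho_A1B(QFT, *bits), np.eye(4) / 4)

# With U(LOCK), the reduced states for b1 = 0 and b1 = 1 are orthogonal,
# so b1 leaks to Bob before the unlocking step.
r0, r1 = rho_A1B(ULOCK, 0, 0, 0, 0), rho_A1B(ULOCK, 1, 0, 0, 0)
assert np.allclose(r0 @ r1, 0)
print("QFT lock hides the bits; U(LOCK) leaks b1")
```

The same partial-trace computation reproduces the explicit matrices $\rho'_{A_1B}(0)$ and $\rho'_{A_1B}(1)$ given in the Locking Operator subsection.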
In Sec. \ref{Locking Operator}, we have shown that $U(LOCK)$ is not suitable for simultaneous dense coding; however, we find that the quantum Fourier transform can alternatively be used as the locking operator in simultaneous teleportation. Let us suppose that Alice is the sender and Bob$_i$ $(1\leqslant i\leqslant N)$ are the receivers. Alice intends to send the unknown quantum states $|\varphi_i\rangle_{T_i}=(\alpha_i|0\rangle+\beta_i|1\rangle)_{T_i}$ to Bob$_i$ under the condition that all the receivers must collaborate to simultaneously obtain $(\alpha_i|0\rangle+\beta_i|1\rangle)_{T_i}$. Initially, Alice and each receiver share an EPR pair $\frac{1}{\sqrt{2}}(|00\rangle +|11\rangle)_{A_iB_i}$. The initial quantum state of the composite system is \begin{align} |\chi'(0)\rangle & = \frac{1}{\sqrt{2^N}} \bigotimes_{i=1}^N |\varphi_i\rangle_{T_i}\bigotimes_{i=1}^N (|00\rangle + |11\rangle)_{A_iB_i} \nonumber\\ & = \frac{1}{\sqrt{2^N}} \bigotimes_{i=1}^N |\varphi_i\rangle_{T_i}\sum_{m=0}^{2^N-1} |m\rangle_{A_1\dots A_N} |m\rangle_{B_1\dots B_N}. \end{align} The scheme of simultaneous teleportation consists of five steps. (1) Alice performs the quantum Fourier transform $|j\rangle\rightarrow\frac{1}{\sqrt{2^N}}\sum_{k=0}^{2^N-1}e^{2\pi ijk/2^N}|k\rangle$ on qubits $A_1\dots A_N$ to lock the entanglement channel. After that, the state of the composite system becomes \begin{align} |\chi'(1)\rangle & = QFT_{A_1\dots A_N}|\chi'(0)\rangle\nonumber\\ & = \frac{1}{2^N}\bigotimes_{i=1}^N |\varphi_i\rangle_{T_i} \sum_{m=0}^{2^N-1}\sum_{k=0}^{2^N-1} \omega^{mk}|k\rangle_{A_1\dots A_N}|m\rangle_{B_1\dots B_N} \nonumber\\ & =\frac{1}{2^N} \sum_{k=0}^{2^N-1}\sum_{m=0}^{2^N-1}\omega^{mk} \bigotimes_{i=1}^N(|k_i\rangle_{A_i}|\varphi_i\rangle_{T_i})|m\rangle_{B_1\dots B_N} , \end{align} where $k_i$ is the $i$th bit of $k$, $\omega = e^{2\pi i/2^N}$. (2) Alice performs the Bell State Measurement on each pair of $A_iT_i$. 
We have \begin{align} \bigotimes_{i=1}^{N} I_{A_iT_i} = & \bigotimes_{i=1}^{N} \sum_{x_i=0}^{1} \sum_{y_i=0}^{1} |\phi(x_iy_i)\rangle_{A_iT_i}\ _{A_iT_i}\langle\phi(x_iy_i)| \nonumber\\ = & \sum_{x_1=0}^{1} \sum_{y_1=0}^{1} \dots \sum_{x_N=0}^{1} \sum_{y_N=0}^{1} \bigotimes_{i=1}^{N} |\phi(x_iy_i)\rangle_{A_iT_i}\nonumber\\ & _{A_iT_i}\langle\phi(x_iy_i)| \nonumber\\ = & \sum_{x_1=0}^{1} \sum_{y_1=0}^{1} \dots \sum_{x_N=0}^{1} \sum_{y_N=0}^{1} \bigotimes_{i=1}^{N} |\phi(x_iy_i)\rangle_{A_iT_i}\nonumber\\ & \bigotimes_{i=1}^{N}\ _{A_iT_i}\langle\phi(x_iy_i)| \end{align} and \begin{align} & \bigotimes_{i=1}^{N}\ _{A_iT_i}\langle\phi(x_iy_i)|\chi'(1)\rangle\nonumber\\ = & \frac{1}{\sqrt{2^N}} \sum_{k=0}^{2^N-1} \bigotimes_{i=1}^N\ _{A_iT_i} (\langle0x_i|+(-1)^{y_i}\langle1\overline{x_i}|) (\alpha_i|k_i0\rangle\nonumber\\ &+ \beta_i |k_i1\rangle)_{A_iT_i} \frac{1}{\sqrt{2^N}} \sum_{m=0}^{2^N-1}\omega^{mk} |m\rangle_{B_1\dots B_N}\nonumber\\ = & \frac{1}{\sqrt{2^N}} \sum_{k=0}^{2^N-1} \prod_{i=1}^N [\delta_{k_i0}(\delta_{x_i0}\alpha_i+\delta_{x_i1}\beta_i)+\delta_{k_i1}(-1)^{y_i}\nonumber\\ &(\delta_{x_i1}\alpha_i+\delta_{x_i0}\beta_i)] QFT_{B_1\dots B_N} |k\rangle_{B_1\dots B_N}\nonumber\\ = & \frac{1}{\sqrt{2^N}} QFT_{B_1\dots B_N} \bigotimes_{i=1}^N [(\delta_{x_i0}\alpha_i+\delta_{x_i1}\beta_i)|0\rangle \nonumber\\ &+(-1)^{y_i}(\delta_{x_i0}\beta_i + \delta_{x_i1}\alpha_i)|1\rangle]_{B_i}\nonumber\\ = & \frac{1}{\sqrt{2^N}} QFT_{B_1\dots B_N} \bigotimes_{i=1}^N U(x_iy_i) (\alpha_i|0\rangle +\beta_i|1\rangle)_{B_i}. 
\end{align} Thus, $|\chi'(1)\rangle$ can be written as \begin{align} |\chi'(1)\rangle = & \bigotimes_{i=1}^{N} I_{A_iT_i} |\chi'(1)\rangle \nonumber\\ = & \sum_{x_1=0}^{1} \sum_{y_1=0}^{1} \dots \sum_{x_N=0}^{1} \sum_{y_N=0}^{1} \bigotimes_{i=1}^{N} |\phi(x_iy_i)\rangle_{A_iT_i}\nonumber\\ & \bigotimes_{i=1}^{N}\ _{A_iT_i} \langle\phi(x_iy_i) |\chi'(1)\rangle\nonumber\\ = & \frac{1}{\sqrt{2^N}} \sum_{x_1=0}^{1} \sum_{y_1=0}^{1} \dots \sum_{x_N=0}^{1} \sum_{y_N=0}^{1} \bigotimes_{i=1}^{N} |\phi(x_iy_i)\rangle_{A_iT_i} \nonumber\\ &QFT_{B_1\dots B_N} \bigotimes_{i=1}^{N}\ U(x_iy_i)|\varphi_i\rangle_{B_i}. \end{align} If the measurement result of qubits $A_iT_i$ is $|\phi(x_iy_i)\rangle$, the state of qubits $B_1\dots B_N$ collapses into \begin{align} |\chi'(2)\rangle = QFT_{B_1\dots B_N} \bigotimes_{i=1}^{N}\ U(x_iy_i)|\varphi_i\rangle_{B_i}. \end{align} (3) Alice sends the measurement result $(x_i, y_i)$ to each Bob$_i$. (4) All the receivers collaborate to perform $QFT^\dagger$ on qubits $B_1\dots B_N$, and the state of $B_1\dots B_N$ becomes \begin{align} |\chi'(3)\rangle = & QFT^\dagger_{B_1\dots B_N}QFT_{B_1\dots B_N} \bigotimes_{i=1}^{N}\ U(x_iy_i)|\varphi_i\rangle_{B_i} \nonumber\\ = & \bigotimes_{i=1}^{N}\ U(x_iy_i)|\varphi_i\rangle_{B_i}. \end{align} (5) Each Bob$_i$ performs $U(x_iy_i)$ on qubit $B_i$ to obtain $|\varphi_i\rangle$. \section{\label{Conclusion} Conclusion} In summary, we have proposed a simultaneous dense coding scheme between one sender and two receivers, the aim of which is for the receivers to simultaneously obtain their respective messages. This scheme may be used in a security scenario. For example, Alice wants Bob and Charlie to simultaneously carry out two confidential commercial activities under the condition that the sensitive information of each activity is only revealed to whoever is in charge of that activity. 
We have also shown that the quantum Fourier transform, which has been implemented using cavity quantum electrodynamics (QED) \cite{SZ02}, nuclear magnetic resonance (NMR) \cite{VSBYCC00,FLXZ00,WPFLC01,VSBYSC01,DS05} and coupled semiconductor double quantum dot (DQD) molecules \cite{DYC08}, can act as the locking operator in both simultaneous dense coding and simultaneous teleportation.

This work is supported by the National Natural Science Foundation (Nos. 60573006, 60873055), the Research Foundation for the Doctoral Program of Higher Education of the Ministry of Education (No. 20050558015), and the NCET of China.

\end{document}
# Data preprocessing and exploration

- Handling missing data
- Data normalization and scaling
- One-hot encoding
- Feature selection and extraction

Let's start with handling missing data. In a dataset, there might be missing values that need to be addressed. There are several strategies to handle missing data, including:

- Deleting rows with missing data
- Imputing missing data with the mean, median, or mode
- Using regression models to estimate missing values

Next, we will discuss data normalization and scaling. Data normalization is the process of scaling the features of a dataset to a specific range. This is important because different features might have different scales, which can affect the performance of machine learning algorithms. Common normalization techniques include min-max scaling and standardization.

One-hot encoding is a technique used to convert categorical variables into a format that can be used by machine learning algorithms. It involves creating a binary vector for each categorical level and assigning a 1 to the position corresponding to the observed level and 0 to the other positions.

Feature selection and extraction are techniques used to identify and select the most important features from a dataset. This can help improve the performance of machine learning models by reducing the dimensionality of the dataset and removing irrelevant features.

## Exercise

Instructions:

1. Load the `wine` dataset from the `sklearn.datasets` module.
2. Check for missing values in the dataset.
3. Normalize the dataset using min-max scaling.
4. Perform one-hot encoding on the categorical features.
5. Select the top 2 features using a feature selection technique.
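As a warm-up for the exercise, the arithmetic behind mean imputation and min-max scaling can be written out by hand. This is a minimal from-scratch sketch of the two techniques discussed above; in a real pipeline you would use scikit-learn's `SimpleImputer` and `MinMaxScaler` instead.

```python
def impute_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max_scale(values):
    """Rescale values linearly to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

feature = [2.0, None, 4.0]
print(impute_mean(feature))            # the None becomes the mean, 3.0
print(min_max_scale([1.0, 2.0, 3.0]))  # [0.0, 0.5, 1.0]
```

The same formulas are what the library transformers apply column by column.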
### Solution

```python
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.preprocessing import MinMaxScaler

# Load the wine dataset
wine = load_wine()
data = pd.DataFrame(data=wine.data, columns=wine.feature_names)

# Check for missing values
print(data.isnull().sum())

# Normalize the dataset using min-max scaling
scaler = MinMaxScaler()
data_normalized = scaler.fit_transform(data)

# Perform one-hot encoding on the categorical features
data_encoded = pd.get_dummies(data, columns=['proline', 'color_intensity'])

# Select the top 2 features using a feature selection technique
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2

selector = SelectKBest(chi2, k=2)
selected_features = selector.fit_transform(data_encoded, wine.target)
```

# Linear Regression

Linear regression is a fundamental machine learning algorithm used for predicting a continuous target variable. It models the relationship between the input features and the target variable using a linear equation. The equation for linear regression is:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_n x_n$$

where $y$ is the target variable, $x_i$ are the input features, and $\beta_i$ are the coefficients.

To implement linear regression in Python, we can use the `LinearRegression` class from the `sklearn.linear_model` module. Here's an example:

```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a linear regression model
model = LinearRegression()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)
```

## Exercise

Instructions:

1. Load the `boston` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a linear regression model.
4. Train the model on the training data.
5. Make predictions on the testing data and calculate the mean squared error.

### Solution

```python
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Load the boston dataset
boston = load_boston()
data = pd.DataFrame(data=boston.data, columns=boston.feature_names)
data['target'] = boston.target

# Split the data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a linear regression model
model = LinearRegression()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)
print('Mean Squared Error:', mse)
```

# Logistic Regression

Logistic regression is a machine learning algorithm used for predicting a binary target variable. It models the relationship between the input features and the target variable using a logistic function. The equation for logistic regression is:

$$P(y = 1) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_n x_n)}}$$

where $P(y = 1)$ is the probability of the target variable being 1, $x_i$ are the input features, and $\beta_i$ are the coefficients.

To implement logistic regression in Python, we can use the `LogisticRegression` class from the `sklearn.linear_model` module.
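The logistic function itself is simple to compute directly. This from-scratch sketch shows the formula above in code; it is not the scikit-learn implementation, just an illustration of how a linear combination $z = \beta_0 + \beta_1 x_1 + \dots$ is squashed into a probability.

```python
import math

def sigmoid(z):
    """Logistic function: maps any real z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# For z = 0 the model is maximally uncertain: probability exactly 0.5.
print(sigmoid(0.0))   # 0.5
print(sigmoid(5.0))   # close to 1
print(sigmoid(-5.0))  # close to 0
```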
Here's an example:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a logistic regression model
model = LogisticRegression()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)
```

## Exercise

Instructions:

1. Load the `breast_cancer` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a logistic regression model.
4. Train the model on the training data.
5. Make predictions on the testing data and calculate the accuracy.

### Solution

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load the breast_cancer dataset
breast_cancer = load_breast_cancer()
data = pd.DataFrame(data=breast_cancer.data, columns=breast_cancer.feature_names)
data['target'] = breast_cancer.target

# Split the data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a logistic regression model
model = LogisticRegression()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the accuracy
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
```

# Decision Trees and Random Forests

Decision trees and random forests are popular machine learning algorithms used for both classification and regression tasks. They work by recursively splitting the dataset into subsets based on the values of input features.
The decision tree algorithm starts with a root node and splits the dataset into two subsets based on a feature and a threshold. It then recursively applies the same process to each subset, creating a tree-like structure.

Random forests are an ensemble of decision trees. They work by creating multiple decision trees and then combining their predictions using averaging or voting. This can improve the overall performance of the model.

To implement decision trees in Python, we can use the `DecisionTreeClassifier` or `DecisionTreeRegressor` classes from the `sklearn.tree` module. Here's an example:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a decision tree model
model = DecisionTreeClassifier()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)
```

## Exercise

Instructions:

1. Load the `iris` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a decision tree model.
4. Train the model on the training data.
5. Make predictions on the testing data and calculate the accuracy.
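Random forests, as described above, combine tree predictions by voting. The voting step itself is easy to sketch from scratch; in scikit-learn it is handled internally by `RandomForestClassifier`, so this is only an illustration of the ensemble idea, with hypothetical per-tree predictions.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several trees by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical predictions for one sample from three decision trees
tree_predictions = ['setosa', 'versicolor', 'setosa']
print(majority_vote(tree_predictions))   # 'setosa'
```

For regression, the analogous combination step is simply the average of the trees' predicted values.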
### Solution

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load the iris dataset
iris = load_iris()
data = pd.DataFrame(data=iris.data, columns=iris.feature_names)
data['target'] = iris.target

# Split the data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a decision tree model
model = DecisionTreeClassifier()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the accuracy
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
```

# K-Nearest Neighbors

K-Nearest Neighbors (KNN) is a machine learning algorithm used for both classification and regression tasks. It works by finding the K nearest neighbors of a given data point and then predicting the target variable based on the majority class or average value of the neighbors.

The KNN algorithm can be used for both continuous and categorical target variables.

To implement KNN in Python, we can use the `KNeighborsClassifier` or `KNeighborsRegressor` classes from the `sklearn.neighbors` module. Here's an example:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a KNN model
model = KNeighborsClassifier(n_neighbors=3)

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)
```

## Exercise

Instructions:

1. Load the `digits` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a KNN model with K=3.
4. Train the model on the training data.
5. Make predictions on the testing data and calculate the accuracy.

### Solution

```python
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load the digits dataset
digits = load_digits()
data = pd.DataFrame(data=digits.data, columns=digits.feature_names)
data['target'] = digits.target

# Split the data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a KNN model with K=3
model = KNeighborsClassifier(n_neighbors=3)

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the accuracy
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
```

# Support Vector Machines

Support Vector Machines (SVM) is a machine learning algorithm used for both classification and regression tasks. It works by finding the optimal hyperplane that separates the data points of different classes.

SVM can be used for both linear and non-linear classification problems.

To implement SVM in Python, we can use the `SVC` or `SVR` classes from the `sklearn.svm` module. Here's an example:

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a SVM model
model = SVC(kernel='linear')

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)
```

## Exercise

Instructions:

1. Load the `wine` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a SVM model with a linear kernel.
4. Train the model on the training data.
5. Make predictions on the testing data and calculate the accuracy.

### Solution

```python
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load the wine dataset
wine = load_wine()
data = pd.DataFrame(data=wine.data, columns=wine.feature_names)
data['target'] = wine.target

# Split the data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a SVM model with a linear kernel
model = SVC(kernel='linear')

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the accuracy
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy:', accuracy)
```

# Model evaluation and tuning

Model evaluation is an important step in machine learning to assess the performance of a model. It involves comparing the predicted values with the actual values and calculating performance metrics such as accuracy, precision, recall, F1-score, and mean squared error.

Model tuning is the process of selecting the best hyperparameters for a machine learning model. It involves trying different combinations of hyperparameters and selecting the ones that give the best performance on a validation dataset.

To evaluate a model in Python, we can use the `accuracy_score`, `precision_score`, `recall_score`, `f1_score`, and `mean_squared_error` functions from the `sklearn.metrics` module.
Here's an example:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, mean_squared_error

# Calculate the accuracy
accuracy = accuracy_score(y_test, y_pred)

# Calculate the precision
precision = precision_score(y_test, y_pred)

# Calculate the recall
recall = recall_score(y_test, y_pred)

# Calculate the F1-score
f1 = f1_score(y_test, y_pred)

# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)
```

## Exercise

Instructions:

1. Load the `iris` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a decision tree model.
4. Train the model on the training data.
5. Make predictions on the testing data.
6. Calculate the accuracy, precision, recall, F1-score, and mean squared error.

### Solution

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, mean_squared_error

# Load the iris dataset
iris = load_iris()
data = pd.DataFrame(data=iris.data, columns=iris.feature_names)
data['target'] = iris.target

# Split the data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a decision tree model
model = DecisionTreeClassifier()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the accuracy
accuracy = accuracy_score(y_test, y_pred)

# Calculate the precision
precision = precision_score(y_test, y_pred, average='macro')

# Calculate the recall
recall = recall_score(y_test, y_pred, average='macro')

# Calculate the F1-score
f1 = f1_score(y_test, y_pred, average='macro')

# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)
```

# Real-world applications of machine learning in Python

Machine learning has a wide range of applications in various industries and fields. Some examples include:

- Healthcare: predicting disease outcomes, drug response, and personalized treatment plans.
- Finance: fraud detection, credit scoring, and stock price prediction.
- E-commerce: product recommendation, customer segmentation, and churn prediction.
- Manufacturing: defect detection, quality control, and process optimization.
- Natural language processing: sentiment analysis, machine translation, and chatbots.

In this section, we will explore some real-world examples of machine learning applications and discuss how Python can be used to implement these applications.

For example, in healthcare, machine learning can be used to predict disease outcomes based on patient data. This can help doctors make more accurate diagnoses and improve patient care. In Python, we can use libraries like `scikit-learn` and `TensorFlow` to build and train machine learning models for healthcare applications.

## Exercise

Instructions:

1. Load the `diabetes` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a linear regression model.
4. Train the model on the training data.
5. Make predictions on the testing data and calculate the mean squared error.
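Mean squared error, asked for in step 5, is simple enough to compute by hand. The following is a from-scratch sketch of what `sklearn.metrics.mean_squared_error` computes: the average of the squared differences between actual and predicted values.

```python
def mean_squared_error_scratch(y_true, y_pred):
    """Average of squared differences between actual and predicted values."""
    errors = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return sum(errors) / len(errors)

print(mean_squared_error_scratch([3.0, 5.0], [3.0, 7.0]))   # (0 + 4) / 2 = 2.0
```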
### Solution

```python
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Load the diabetes dataset
diabetes = load_diabetes()
data = pd.DataFrame(data=diabetes.data, columns=diabetes.feature_names)
data['target'] = diabetes.target

# Split the data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a linear regression model
model = LinearRegression()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)
print('Mean Squared Error:', mse)
```

# Advanced techniques and modern developments

Advanced machine learning techniques and modern developments include deep learning, reinforcement learning, and natural language processing. These techniques have revolutionized the field and are being used to solve complex problems in various domains.

In this section, we will discuss some of these advanced techniques and their applications. We will also explore how Python can be used to implement these techniques.

For example, deep learning is a subset of machine learning that uses neural networks with many layers to learn complex patterns and representations. In Python, we can use libraries like `TensorFlow` and `PyTorch` to build and train deep learning models.

## Exercise

Instructions:

1. Load the `digits` dataset from the `sklearn.datasets` module.
2. Split the dataset into training and testing sets.
3. Create a deep learning model using the `Sequential` class from `keras`.
4. Add layers to the model for input, hidden, and output.
5. Compile the model with an appropriate loss function, optimizer, and evaluation metric.
6. Train the model on the training data.
7. Make predictions on the testing data and calculate the accuracy.

### Solution

```python
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from keras.optimizers import Adam
from sklearn.metrics import accuracy_score

# Load the digits dataset
digits = load_digits()
data = pd.DataFrame(data=digits.data, columns=digits.feature_names)
data['target'] = to_categorical(digits.target)

# Split the data
```
Sample covariance matrix

The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the underlying random vector. The sample mean is itself a random variable, not a constant, and consequently has its own distribution. The variances of the individual variables lie along the diagonal of the covariance matrix; the off-diagonal entries are the covariances between pairs of variables. If only one variable has had values observed, the sample mean is a single number (the arithmetic average of the observed values), and the sample covariance matrix is a 1x1 matrix containing a single number, the sample variance of the observed values. Sample covariance and correlation matrices are positive semi-definite; a covariance matrix is positive definite when there is no exact linear dependence among the variables.

Covariance is a measure of the extent to which corresponding elements from two sets of ordered data move in the same direction, that is, of how much two random variables change together. Both covariance and correlation measure the linear dependency between a pair of variables. What sets them apart is that correlation values are standardized whereas covariance values are not: covariance can take any value between minus and plus infinity, while correlation is covariance normalized by the product of the standard deviations. A positive value indicates that the two variables tend to increase together.

In R, the correlation or covariance matrix of the columns of x and the columns of y can be computed with cor(x, y=x, use="all.obs") and cov(x, y=x, use="all.obs").

The sample covariance matrix has N-1 in the denominator rather than N due to a variant of Bessel's correction: the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation, since it is defined in terms of all observations.

The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. Alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean.

Sample covariance matrices enter, among other applications, (1) the estimation of principal components and eigenvalues and (2) the construction of linear discriminant functions. In multivariate control charting, one designates the sample covariance matrix S and the sample mean vector; the T2 statistic computed from them is plotted on a T2 chart and compared to control limits to determine whether individual points are out of control.

When data are missing, a two-stage approach can be used. In the first stage, the missing data are imputed and the resulting completed data are used to obtain a sample mean and covariance matrix. In the second stage, these values are used in an SEM program to fit a model. If the resulting mean and covariance estimates are consistent, adjustments to the standard errors are possible to make them valid; otherwise, one has to be cautious in taking the resulting standard errors at their face values when making inference. When fitting a model from summary statistics with the lavaan R package across multiple groups, the sample.cov argument must be a list containing the sample variance-covariance matrix of each group as a separate element, and the sample.mean argument must be a list containing the sample means of each group.

Standard asymptotics treat the number of variables m as finite and fixed while the number of observations N goes to infinity; in that regime it is optimal to put (asymptotically) all the weight on the sample covariance matrix and none on a structured estimator. This is a bad approximation of many real-world situations where m is of the same order of magnitude as N, and possibly large. In that setting, one can construct a shrinkage estimator as a weighted average of a structured estimator and the sample covariance matrix; the true optimal weight depends on the true covariance matrix.

PCA and PLS are frequently referred to as projection methods because the initial information is projected onto a lower-dimensional space. Keeping only the K eigenvectors corresponding to the K largest eigenvalues yields a matrix TK whose feature space has only K columns; among all rank-K matrices, TK is the best approximation to T for any unitarily invariant norm (Mirsky, 1960), so this projection also minimizes the total squared error ||T - TK||^2. When projection to two or three dimensions is performed, this method is also known as multidimensional scaling (Cox and Cox, 1994). In NIPALS, q1 is obtained by regressing t1 on Y; Y is then deflated as Y2 = Y - t1 q1'.
Trimming and Winsorising, as in the denominator: Estimation of population covariance matrices are in! The K largest eigenvalues the direction of the measures used for understanding how a variable is associated changes! The multivariate gamma function, in Handbook of Latent variable and Related,., Brian Gough, Gerard Jungman, Michael Booth, and consequently has its distribution. To infinity =σ ( xj, xi ) same direction format as matrices given in the denominator as.! Other alternatives include trimming and Winsorising, as in the denominator covariance in between every column of data matrix,! Data are imputed and the sample covariance matrix is Symmetric since σ ( xi xj! Off-Diagonal elements contain the variances and the dependency between two or more variables difficulty is the... You want your estimate to be a biased estimate or an unbiased.. Best of our knowledge, no existing estimator is both well-conditioned and more accurate than the sample and! Mint Chutney Recipe Pakistani, Bellazo Costa Rica, Property Maintenance Contracts, Psyllids Treatment Yates, Small Smaller Smallest Images, Do Snails Need A Heater, Aasd School Calendar 20-21, How To Make Fried Oreos In Air Fryer, Cumulative Advantage And Disadvantage, Hard Rock Cafe Orlando, sample covariance matrix 2020
CommonCrawl
\begin{document} \title{{\bf Multiple cover formula of generalized DT invariants II: Jacobian localizations}} \begin{abstract} The generalized Donaldson-Thomas invariants counting one dimensional semistable sheaves on Calabi-Yau 3-folds are conjectured to satisfy a certain multiple cover formula. This conjecture is equivalent to Pandharipande-Thomas's strong rationality conjecture on the generating series of stable pair invariants, and it suffices to prove its local version. In this paper, using Jacobian localizations and parabolic stable pair invariants introduced in the previous paper, we reduce the conjectural multiple cover formula for local curves with at worst nodal singularities to the case of local trees of smooth rational curves. \end{abstract} \section{Introduction} This paper is a sequel to the author's previous paper~\cite{Todpara}, and we study the conjectural multiple cover formula of generalized Donaldson-Thomas (DT) invariants counting one dimensional semistable sheaves on Calabi-Yau 3-folds. Our main result reduces the multiple cover formula for local curves with at worst nodal singularities to that for local trees of $\mathbb{P}^1$. The latter case is easier to study, and we actually prove the multiple cover formula in some cases using our main result. The idea is twofold: we use the notion of parabolic stable pairs introduced in~\cite{Todpara}, and localizations with respect to the actions of Jacobian groups on the moduli spaces of parabolic stable pairs. \subsection{Conjectural multiple cover formula} Let $X$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, i.e. \begin{align*} \bigwedge^3 T_X^{\vee} \cong \mathcal{O}_X, \quad H^1(X, \mathcal{O}_X)=0. \end{align*} Given data, \begin{align*} n\in \mathbb{Z}, \quad \beta \in H_2(X, \mathbb{Z}), \end{align*} the \textit{generalized DT invariant} is introduced by Joyce-Song~\cite{JS}, Kontsevich-Soibelman~\cite{K-S}, \begin{align}\label{intro:Nnb} N_{n, \beta} \in \mathbb{Q}.
\end{align} The invariant (\ref{intro:Nnb}) counts one dimensional semistable sheaves $F$ on $X$ satisfying \begin{align*} \chi(F)=n, \quad [F]=\beta. \end{align*} (cf.~Subsection~\ref{subsec:Genera}.) The above invariant is expected to satisfy the following multiple cover conjecture: \begin{conj} {\bf\cite[Conjecture~6.20]{JS}, \cite[Conjecture~6.3]{Tsurvey}}\label{conj:mult} We have the following formula, \begin{align}\notag N_{n, \beta}=\sum_{k\ge 1, k|(n, \beta)} \frac{1}{k^2}N_{1, \beta/k}. \end{align} \end{conj} The motivation for the above conjecture is that it is equivalent to Pandharipande-Thomas's (PT) strong rationality conjecture~\cite{PT}. (See~\cite[Theorem~6.4]{Tsurvey}.) The PT strong rationality conjecture claims the product expansion formula (called \textit{Gopakumar-Vafa form}) of the generating series of rank one DT type invariants, which should be true if we believe the GW/DT correspondence~\cite{MNOP}. There is also a local version of the invariant (\ref{intro:Nnb}) and its conjectural multiple cover formula. Namely for a one cycle $\gamma$ on $X$, we can associate the invariant, \begin{align*} N_{n, \gamma} \in \mathbb{Q}, \end{align*} which counts one dimensional semistable sheaves $F$ on $X$ satisfying \begin{align*} \chi(F)=n, \quad [F]=\gamma, \end{align*} where the second equality is an equality as a one cycle. The above local invariant is also expected to satisfy the multiple cover formula, \begin{align}\label{N:mult} N_{n, \gamma}=\sum_{k\ge 1, k|(n, \gamma)} \frac{1}{k^2} N_{1, \gamma/k}. \end{align} The local version (\ref{N:mult}) is enough to prove Conjecture~\ref{conj:mult}. (cf.~\cite[Proposition~4.17]{Todpara}.) The purpose of this paper is to study the conjectural formula (\ref{N:mult}) via the Jacobian localization technique. \subsection{Main result} Let $X$ be as before, $\gamma$ a one cycle on $X$ and $C \subset X$ the support of $\gamma$.
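Here and in what follows, the divisibility condition appearing in the sum (\ref{N:mult}) is understood as
\begin{align*}
k | (n, \gamma) \ \Longleftrightarrow \ k | n \mbox{ and } \gamma/k \mbox{ is again an integral one cycle,}
\end{align*}
i.e. $k$ divides $n$ and the multiplicity of every irreducible component of $\gamma$. The condition $k|(n, \beta)$ for $\beta \in H_2(X, \mathbb{Z})$ is interpreted in the same way.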
The invariant $N_{n, \gamma}$ can be shown to be zero if there is an irreducible component of $C$ whose geometric genus is at least one. (cf.~Lemma~\ref{lem:higher}.) Therefore in discussing the formula (\ref{N:mult}), we may assume that $C$ is a rational curve, i.e. the normalization of $C$ is a disjoint union of $\mathbb{P}^1$. The simplest cases are those where $C=\mathbb{P}^1$ or $C$ is a tree of $\mathbb{P}^1$. The main result of this paper is to show that, when $C$ has at worst nodal singularities, the formula (\ref{N:mult}) follows from the same formula for local trees of $\mathbb{P}^1$. More precisely, suppose that $C$ is a rational curve with at worst nodal singularities, and \begin{align}\label{intro:CUX} C\subset U \subset X \end{align} a sufficiently small analytic neighborhood of $C$ in $X$. We consider data, \begin{align*} (C' \subset U') \stackrel{\sigma'}{\to} (C\subset X), \end{align*} where $C'$ is a reduced curve, $U'$ is a three dimensional complex manifold and $\sigma'$ is a local immersion. The above data is called a \textit{cyclic neighborhood} if it is given as a composition of cyclic coverings of $U$. (See Definition~\ref{def:cyclic} for a more precise definition.) For any one cycle $\gamma'$ on $U'$ supported on $C'$, we can similarly construct the invariant \begin{align*} N_{n, \gamma'}(U') \in \mathbb{Q}. \end{align*} (cf.~Subsection~\ref{moduli:cyclic}.) Our main result is as follows: \begin{thm}{\bf [Theorem~\ref{thm:main:cov}]} \label{thm:main} Let $X$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, $C \subset X$ a reduced rational curve with at worst nodal singularities, and $\gamma$ a one cycle on $X$ supported on $C$.
Suppose that for any cyclic neighborhood $(C' \subset U') \stackrel{\sigma'}{\to} (C\subset X)$ with $C'$ a tree of $\mathbb{P}^1$, the following conditions hold: \begin{itemize} \item The moduli stack of one dimensional semistable sheaves on $U'$ is locally written as a critical locus of some holomorphic function on a complex manifold up to some group action. (cf.~Conjecture~\ref{conj:crit}.) \item For any one cycle $\gamma'$ on $U'$ with $\sigma'_{\ast}\gamma'=\gamma$, the invariant $N_{n, \gamma'}(U')$ satisfies the formula \begin{align*} N_{n, \gamma'}(U')=\sum_{k\ge 1, k|(n, \gamma')} \frac{1}{k^2} N_{1, \gamma'/k}(U'). \end{align*} \end{itemize} Then the invariant $N_{n, \gamma}$ satisfies the formula (\ref{N:mult}). \end{thm} There are several situations in which the cyclic neighborhood $C'\subset U'$ satisfies the assumptions in Theorem~\ref{thm:main}, e.g. $C'$ is a chain of super rigid rational curves in $U'$. Roughly speaking, we will give the following applications in Section~\ref{sec:apply}: \begin{itemize} \item If $\gamma$ is supported on an irreducible rational curve with one node, or a circle of $\mathbb{P}^1$, we explicitly compute the invariant $N_{n, \gamma}$. (cf.~Theorem~\ref{thm:typeI}.) \item We prove the local multiple cover formula of $N_{n, \gamma}$ if $\gamma=p[C]$ for an irreducible rational curve $C$ with at worst nodal singularities, and $p$ is a prime number. (cf.~Theorem~\ref{prop:prime}.) \item We give some evidence of the conjecture in~\cite[Conjecture~1.3]{TodK3} on the Euler characteristic invariants of local K3 surfaces. (cf.~Theorem~\ref{thm:K3}.) \end{itemize} The first and the second applications will be given under a certain assumption on an analytic neighborhood of a one cycle $\gamma$. (cf.~Definition~\ref{def:rigid:surface}.) \subsection{Idea for a local curve with one node} Here we explain the idea of the proof of Theorem~\ref{thm:main} in a simple example. 
Let \begin{align*} C\subset X \end{align*} be an irreducible rational curve with one node $x\in C$. Suppose that a one cycle $\gamma$ on $X$ is supported on $C$. Then for any analytic neighborhood $U$ as in (\ref{intro:CUX}), the Jacobian group $\mathop{\rm Pic}\nolimits^0(U)$ acts on the moduli space which defines $N_{n, \gamma}$. If we take $U$ to be homotopy equivalent to $C$, then $\mathop{\rm Pic}\nolimits^0(C) \cong \mathbb{C}^{\ast}$ is considered to be a subgroup of $\mathop{\rm Pic}\nolimits^0(U)$. So we would like to apply $\mathop{\rm Pic}\nolimits^0(C)$-localization to the invariant $N_{n, \gamma}$. In order to do this, we need to find $\mathop{\rm Pic}\nolimits^0(C)$-fixed semistable sheaves on $U$ supported on $C$. If we take $U$ as above, then we have \begin{align*} \pi_1(C) \cong \pi_1(U) \cong \mathbb{Z}. \end{align*} Hence if we take the universal covering space of $U$, \begin{align}\label{univ:U} f_U \colon \widetilde{U} \to U, \end{align} then $\widetilde{U}$ admits a $\mathbb{Z}$-action, and it contains the universal cover of $C$ denoted by $\widetilde{C}$. A key observation is that a stable sheaf on $U$ supported on $C$ is $\mathop{\rm Pic}\nolimits^0(C)$-fixed if and only if it is a push-forward of some sheaf on $\widetilde{U}$ supported on $\widetilde{C}$, which is unique up to the $\mathbb{Z}$-action on $\widetilde{U}$. The universal cover $\widetilde{C} \to C$ is described in the following way. Let \begin{align*} \mathbb{P}^1 \cong C^{\dag} \to C \end{align*} be the normalization and $x_1, x_2 \in C^{\dag}$ the preimage of the node $x\in C$. We take an infinite number of copies of $\{C^{\dag}, x_1, x_2\}$, denoted by \begin{align*} \{C_i, x_{1, i}, x_{2, i}\}, \quad i\in \mathbb{Z}.
\end{align*} Then $\widetilde{C}$ is an infinite chain of smooth rational curves, \begin{align*} \widetilde{C} = \cdots \cup C_{-1} \cup C_{0} \cup C_{1} \cup \cdots \cup C_{i} \cup C_{i+1} \cup \cdots, \end{align*} where $C_{i}$ and $C_{i+1}$ are attached along $x_{2, i}$ and $x_{1, i+1}$. (See Figure~\ref{fig:one}.) For instance, let us look at the invariant $N_{0, 2C}$. By the above argument, we may expect the formula, \begin{align}\label{expect} N_{n, 2C} =\sum_{i\ge 0} N_{n, C_{0} +C_i}(\widetilde{U}). \end{align} Now by the assumptions in Theorem~\ref{thm:main}, we obtain \begin{align*} N_{0, 2C_0}(\widetilde{U}) &=N_{1, 2C_0}(\widetilde{U})+ \frac{1}{4}N_{1, C_0}(\widetilde{U}), \\ N_{0, C_0 +C_1}(\widetilde{U}) &=N_{1, C_0+C_1}(\widetilde{U}). \end{align*} The above localization argument also implies $N_{1, C_0}(\widetilde{U})=N_{1, C}$, and it is also easy to see $N_{n, C_0+C_i}(\widetilde{U})=0$ for $i\ge 2$. Thus we obtain \begin{align*} N_{0, 2C}=N_{1, 2C}+ \frac{1}{4}N_{1, C}, \end{align*} which is nothing but the desired formula (\ref{N:mult}) for $\gamma=2C$. This picture is quite similar to the multiple cover formula for genus zero Gromov-Witten invariants of a local nodal curve with one node~\cite{BKL}. \begin{figure} \caption{Universal cover $C \leftarrow \widetilde{C}$} \label{fig:one} \end{figure} \subsection{Parabolic stable pairs} In the previous subsection, we explained the idea of the multiple cover formula in a simple example. However it is not obvious how to realize the story there directly; in particular, the formula (\ref{expect}) seems hard to deduce. The issue is that, since the definition of $N_{n, \gamma}$ involves Joyce's log stack function~\cite{Joy4}, denoted by $\epsilon_{n, \gamma}$ in Subsection~\ref{subsec:Genera}, the above localization argument seems to be very hard to apply. Namely, we have to compare the contribution of $\epsilon_{n, \gamma}$ on the $\mathbb{C}^{\ast}$-fixed points with that on the universal cover.
But to do this, we also have to `localize' the product structure on the Hall algebra, which seems to require a new technique. In order to overcome this technical difficulty, we use the idea of \textit{parabolic stable pairs}, introduced in the previous paper~\cite{Todpara}. By definition, a parabolic stable pair consists of a pair, \begin{align*} (F, s), \quad s \in F \otimes \mathcal{O}_H, \end{align*} where $F$ is a one dimensional semistable sheaf on $X$, $H$ is a fixed divisor in $X$, satisfying a certain stability condition. (cf.~Definition~\ref{defi:para}.) In~\cite{Todpara}, we constructed invariants counting parabolic stable pairs, and showed that Conjecture~\ref{conj:mult} is equivalent to a certain product expansion formula of the generating series of parabolic stable pair invariants. The moduli space of parabolic stable pairs is a scheme (not a stack), and $\mathop{\rm Pic}\nolimits^0(C)$ also acts on the moduli space of (local) parabolic stable pairs. There is no technical difficulty in applying $\mathop{\rm Pic}\nolimits^0(C)$-localizations to parabolic stable pair invariants, and arguments similar to those in the previous subsection work for parabolic stable pairs. When the one cycle $\gamma$ on $X$ is supported on a nodal curve which has more than one node, its universal covering space is much more complicated. Instead of taking the universal cover, we take cyclic neighborhoods and proceed by induction. Combining the above ideas (Jacobian localizations, parabolic stable pairs, induction via cyclic neighborhoods), we are able to prove Theorem~\ref{thm:main}. \section{Multiple cover formula of generalized DT invariants} In this section, we recall (generalized) DT invariants on Calabi-Yau 3-folds and the conjectural multiple cover formula. In what follows, $X$ is a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, i.e. \begin{align*} \bigwedge^{3}T_{X}^{\vee} \cong \mathcal{O}_X, \quad H^1(X, \mathcal{O}_X)=0.
\end{align*} We fix an ample line bundle $\mathcal{O}_X(1)$ and set $\omega=c_1(\mathcal{O}_X(1))$. Below we say a coherent sheaf $F$ on $X$ is \textit{$d$-dimensional} if the support of $F$ is $d$-dimensional. \subsection{Semistable sheaves} Let us recall the notion of one dimensional $\omega$-semistable sheaves on $X$. They are defined by the notion of slope: for a one dimensional coherent sheaf $F$, its slope is defined by \begin{align*} \mu_{\omega}(F) \cneq \frac{\chi(F)}{[F] \cdot \omega}. \end{align*} Here $\chi(F)$ is the holomorphic Euler characteristic of $F$ and $[F]$ is the fundamental one cycle associated to $F$, defined by \begin{align}\label{onecycle} [F] = \sum_{\eta} (\mathrm{length}_{\mathcal{O}_{X, \eta}} F) \overline{\{\eta \}}. \end{align} In the above sum, $\eta$ runs over all the codimension two points in $X$. \begin{defi}\label{defi:semi} A one dimensional coherent sheaf $F$ on $X$ is $\omega$-(semi)stable if for any subsheaf $0\neq F' \subsetneq F$, we have the inequality, \begin{align*} \mu_{\omega}(F') <(\le) \mu_{\omega}(F). \end{align*} \end{defi} Note that any one dimensional $\omega$-semistable sheaf $F$ is pure, i.e. there is no zero dimensional subsheaf in $F$. Also we say that $F$ is \textit{strictly $\omega$-semistable} if $F$ is $\omega$-semistable but not $\omega$-stable. For details on (semi)stable sheaves, see~\cite{Hu}. \subsection{DT invariants} Let us take data, \begin{align}\label{data:bn} n \in \mathbb{Z}, \quad \beta \in H_2(X, \mathbb{Z}). \end{align} The (generalized) DT invariant is the $\mathbb{Q}$-valued invariant, \begin{align}\label{Nnb} N_{n, \beta} \in \mathbb{Q}, \end{align} counting one dimensional $\omega$-semistable sheaves $F$ on $X$ satisfying \begin{align}\label{Fbn} [F]=\beta, \quad \chi(F)=n. \end{align} Here by an abuse of notation, we denote by $[F]$ the homology class of the one cycle (\ref{onecycle}). The invariant (\ref{Nnb}) is defined in the following way.
Let \begin{align}\label{moduli:M} M_n(X, \beta) \end{align} be the coarse moduli space of one dimensional $\omega$-semistable sheaves $F$ on $X$ satisfying (\ref{Fbn}). There are some criteria for the moduli space (\ref{moduli:M}) to be fine. For instance suppose that the following condition holds: \begin{align}\label{gcd} \mathrm{g.c.d.}(\omega \cdot \beta, n)=1, \end{align} e.g. $n=1$. Then there is no strictly $\omega$-semistable sheaf on $X$ satisfying (\ref{Fbn}), and (\ref{moduli:M}) is a fine projective scheme over $\mathbb{C}$. In this case, the moduli space (\ref{moduli:M}) carries a symmetric perfect obstruction theory, hence the zero dimensional virtual cycle~\cite{Thom}. \begin{defi} If the condition (\ref{gcd}) holds, then we define $N_{n, \beta}$ to be \begin{align*} N_{n, \beta}=\int_{[M_n(X, \beta)]^{\rm{vir}}} 1 \in \mathbb{Z}. \end{align*} \end{defi} Another way to define $N_{n, \beta}$ is to use Behrend's constructible function~\cite{Beh}. Recall that for any $\mathbb{C}$-scheme $M$, Behrend constructs a canonical constructible function, \begin{align*} \nu \colon M \to \mathbb{Z}, \end{align*} such that if $M$ carries a symmetric perfect obstruction theory, then we have \begin{align*} \int_{[M]^{\rm{vir}}} 1 &= \int_{M} \nu d\chi, \\ &=\sum_{m \in \mathbb{Z}} m \cdot \chi(\nu^{-1}(m)). \end{align*} Hence by using the Behrend function $\nu$ on $M_n(X, \beta)$, the invariant (\ref{Nnb}) can also be expressed as \begin{align}\label{intM} N_{n, \beta}=\int_{M_n(X, \beta)} \nu d\chi. \end{align} \subsection{Generalized DT invariants}\label{subsec:Genera} For a general choice of (\ref{data:bn}), the condition (\ref{gcd}) may not hold, and there may be strictly $\omega$-semistable sheaves $F$ satisfying (\ref{Fbn}). In this case, the invariant (\ref{Nnb}) is one of the \textit{generalized DT invariants} introduced by Joyce-Song~\cite{JS} and Kontsevich-Soibelman~\cite{K-S}.
It requires sophisticated techniques involving Hall algebras of coherent sheaves to define them, and we need some more preparations for this. Since we will not need the detail of the definition of (\ref{Nnb}) in a general case, we just give a rough explanation. A strictly $\omega$-semistable sheaf has non-trivial automorphisms, and we need to incorporate the contributions of the automorphism groups into the invariant (\ref{Nnb}). For this purpose, we need to work with the moduli stack, \begin{align}\label{stack:M} \mathcal{M}_n(X, \beta), \end{align} which parameterizes $\omega$-semistable one dimensional sheaves $F$ satisfying (\ref{Fbn}). The stack (\ref{stack:M}) is known to be an Artin stack of finite type over $\mathbb{C}$. The Behrend functions on $\mathbb{C}$-schemes naturally extend to constructible functions on Artin stacks of finite type over $\mathbb{C}$. (cf.~\cite[Proposition~4.4]{JS}.) However the stack (\ref{stack:M}) may have stabilizer groups whose Euler characteristics are zero, e.g. $\mathop{\rm GL}\nolimits(2, \mathbb{C})$. Hence the integration of the Behrend function (\ref{intM}), replacing $M_n(X, \beta)$ by $\mathcal{M}_n(X, \beta)$, does not make sense. The idea of the definition of the generalized DT invariant is that, instead of working with the stack (\ref{stack:M}), we should work with the `logarithm' of (\ref{stack:M}) in the Hall algebra of coherent sheaves, denoted by $H(X)$. The algebra $H(X)$ is, as a $\mathbb{Q}$-vector space, spanned by the isomorphism classes of symbols, \begin{align*} [ \rho \colon \mathcal{X} \to \mathcal{C} oh(X)]. \end{align*} Here $\mathcal{X}$ is an Artin stack of finite type with affine geometric stabilizers, and $\mathcal{C} oh(X)$ is the stack of all the coherent sheaves on $X$. There is an associative $\ast$-product on $H(X)$ based on Ringel Hall algebras. For the detail, see~\cite[Theorem~5.2]{Joy2}.
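Roughly speaking, the $\ast$-product is modeled on extensions, as in Ringel Hall algebras: the product of two symbols is supported on the stack of short exact sequences
\begin{align*}
0 \to F' \to F \to F'' \to 0,
\end{align*}
where $F'$ and $F''$ are objects parameterized by the two factors, and the resulting family is mapped to $\mathcal{C} oh(X)$ by remembering the extension $F$. The precise construction, including the convention for which factor parameterizes the subobject, is given in~\cite{Joy2}.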
The stack (\ref{stack:M}) is considered to be an element of $H(X)$, by regarding it as an open substack of $\mathcal{C} oh(X)$, \begin{align*} \delta_{n, \beta} \cneq \left[\mathcal{M}_n(X, \beta) \hookrightarrow \mathcal{C} oh(X) \right] \in H(X). \end{align*} The `logarithm' of $\delta_{n, \beta}$, denoted by $\epsilon_{n, \beta} \in H(X)$, is defined by the rule, \begin{align*} \sum_{n/\omega \cdot \beta =\mu} \epsilon_{n, \beta} = \log \left( 1+ \sum_{n/\omega \cdot \beta=\mu} \delta_{n, \beta} \right), \end{align*} for any $\mu \in \mathbb{Q}$ in a certain completion of the algebra $(H(X), \ast)$. In other words, $\epsilon_{n, \beta}$ is given by \begin{align*} \epsilon_{n, \beta} = \sum_{l\ge 1}\frac{(-1)^{l-1}}{l} \sum_{\begin{subarray}{c} \beta_1+ \cdots +\beta_l=\beta, \ \beta_i \in H_2(X, \mathbb{Z}), \\ n_1+ \cdots +n_l=n, \ n_i \in \mathbb{Z}, \\ n_i/\omega \cdot \beta_i=n/\omega \cdot \beta \end{subarray}} \delta_{n_1, \beta_1} \ast \cdots \ast \delta_{n_l, \beta_l}. \end{align*} The above sum is easily shown to be a finite sum. The important fact is that $\epsilon_{n, \beta}$ is supported on `virtual indecomposable sheaves'. Roughly speaking, this implies that, modulo some relations in $H(X)$, the element $\epsilon_{n, \beta}$ is written as \begin{align*} \epsilon_{n, \beta}=\sum_{i} a_i [\rho_i \colon [M_i/\mathbb{C}^{\ast}] \to \mathcal{C} oh(X)], \end{align*} where $a_i \in \mathbb{Q}$, $M_i$ are quasi-projective varieties on which $\mathbb{C}^{\ast}$ acts trivially. The invariant (\ref{Nnb}) is then defined by the weighted Euler characteristics of $M_i$, weighted by the Behrend function $\nu$ on $\mathcal{C} oh(X)$ pulled back by $\rho_i$. Namely, $N_{n, \beta}$ is defined by \begin{align}\label{skip} N_{n, \beta} \cneq -\sum_{i} a_i \int_{M_i} \rho_i^{\ast} \nu d\chi. \end{align} Here we need to change the sign due to the appearance of the trivial $\mathbb{C}^{\ast}$-action. We have skipped lots of details in the above definition of (\ref{Nnb}).
For more detail, we refer to~\cite{JS}. Also see~\cite[Section~4]{Tsurvey} for a more direct explanation. \begin{rmk}\label{rmk:omega} A priori, we need to choose an ample divisor $\omega$ to define $N_{n, \beta}$. However it can be shown that $N_{n, \beta}$ does not depend on a choice of $\omega$. (cf.~\cite[Theorem~6.16]{JS}.) \end{rmk} \subsection{Local generalized DT invariants} There is also a local version of the (generalized) DT invariant, which we explain below. Let us fix a reduced curve $C$ in $X$, \begin{align*} i \colon C \hookrightarrow X, \end{align*} with irreducible components $C_1, \cdots, C_N$. Then a one cycle $\gamma$ on $X$ supported on $C$ is identified with an element of $H_2(C, \mathbb{Z})$, \begin{align*} \gamma \in H_2(C, \mathbb{Z}) \cong \bigoplus_{i=1}^{N} \mathbb{Z}[C_i]. \end{align*} Suppose that $\beta=i_{\ast}\gamma$ and $n\in \mathbb{Z}$ satisfy the condition (\ref{gcd}). Then we have the fine moduli space (\ref{moduli:M}), and the closed subscheme, \begin{align}\label{closed:M} M_n(C, \gamma) \subset M_n(X, \beta), \end{align} corresponding to $\omega$-stable sheaves $F$ satisfying \begin{align}\label{Fgn} [F]=\gamma, \quad \chi(F)=n. \end{align} Here $[F]=\gamma$ is an equality as a one cycle on $X$. Then the local DT invariant is defined by \begin{align}\label{loc:DT:Nng} N_{n, \gamma} \cneq \int_{M_n(C, \gamma)} \nu d\chi. \end{align} Here $\nu$ is the Behrend function on $M_n(X, \beta)$ restricted to $M_n(C, \gamma)$. We remark that $\nu$ may not coincide with the Behrend function on $M_n(C, \gamma)$. Even if $(n, \beta)$ does not satisfy the condition (\ref{gcd}), we can similarly define the local generalized DT invariant, \begin{align}\label{Nng} N_{n, \gamma} \in \mathbb{Q}, \end{align} counting one dimensional $\omega$-semistable sheaves $F$ satisfying (\ref{Fgn}).
Instead of using the stack (\ref{stack:M}), we use the substack, \begin{align}\label{M(Cg)} \mathcal{M}_n(C, \gamma) \subset \mathcal{M}_n(X, \beta), \end{align} parameterizing one dimensional $\omega$-semistable sheaves $F$ on $X$ satisfying (\ref{Fgn}). We can similarly take the logarithm of the substack (\ref{M(Cg)}) in the Hall algebra $H(X)$, and the invariant (\ref{Nng}) is defined by integrating the Behrend function on $\mathcal{C} oh(X)$ over it. See~\cite[Subsection~4.4]{Todpara} for some more detail. Similarly to $N_{n, \beta}$, the local invariant $N_{n, \gamma}$ also does not depend on $\omega$. (cf.~Remark~\ref{rmk:omega}.) \subsection{Multiple cover formula}\label{subsec:mult} As we discussed in the previous subsections, the invariant (\ref{Nnb}) is an integer if the condition (\ref{gcd}) is satisfied. In particular, for $\beta \in H_2(X, \mathbb{Z})$, we have the $\mathbb{Z}$-valued invariant, \begin{align*} N_{1, \beta} \in \mathbb{Z}. \end{align*} The above invariant is introduced by Katz~\cite{Katz} as a sheaf theoretic definition of the genus zero Gopakumar-Vafa invariant. On the other hand if $(n, \beta)$ does not satisfy the condition (\ref{gcd}), then $N_{n, \beta}$ may not be an integer and hence may not coincide with $N_{1, \beta}$. However the invariants $N_{n, \beta}$ for $n \neq 1$ are conjectured to be related to $N_{1, \beta}$ via the multiple cover formula: \begin{conj} {\bf\cite[Conjecture~6.20]{JS}, \cite[Conjecture~6.3]{Tsurvey}}\label{conj:mult2} We have the following formula, \begin{align}\label{form:mult} N_{n, \beta}=\sum_{k\ge 1, k|(n, \beta)} \frac{1}{k^2}N_{1, \beta/k}. \end{align} \end{conj} In~\cite[Theorem~6.4]{Tsurvey}, it is shown that the above conjecture is equivalent to Pandharipande-Thomas's strong rationality conjecture~\cite[Conjecture~3.14]{PT}. We refer to~\cite[Section~6]{Tsurvey} for discussions on the strong rationality conjecture and its relation to Conjecture~\ref{conj:mult2}.
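To illustrate the formula (\ref{form:mult}), suppose that $\beta=2\beta_0$ for a primitive class $\beta_0 \in H_2(X, \mathbb{Z})$. The divisors $k$ with $k|(n, \beta)$ are $k=1$, together with $k=2$ when $n$ is even. Hence the formula (\ref{form:mult}) reads
\begin{align*}
N_{n, 2\beta_0}=
\left\{ \begin{array}{ll}
N_{1, 2\beta_0}+\frac{1}{4}N_{1, \beta_0}, & n \mbox{ even}, \\
N_{1, 2\beta_0}, & n \mbox{ odd}.
\end{array} \right.
\end{align*}
In particular, the failure of $N_{n, \beta}$ to be an integer is governed by the divisible classes $\beta/k$.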
For a reduced curve $C\subset X$, $n\in \mathbb{Z}$ and $\gamma \in H_2(C, \mathbb{Z})$, we have the local (generalized) DT invariants as in (\ref{Nng}). The local version of the above conjecture is also similarly formulated: \begin{conj}{\bf\cite[Conjecture~4.13]{Todpara}} \label{conj:mult:loc} For $n\in \mathbb{Z}$ and $\gamma \in H_2(C, \mathbb{Z})$, we have the formula, \begin{align}\label{form:mult:loc} N_{n, \gamma}=\sum_{k\ge 1, k|(n, \gamma)} \frac{1}{k^2}N_{1, \gamma/k}. \end{align} \end{conj} As shown in~\cite[Corollary~4.18]{Todpara}, the local multiple cover formula is enough to show the global multiple cover formula: \begin{lem}{\bf\cite[Corollary~4.18]{Todpara}}\label{lem:glo:loc} For $n\in \mathbb{Z}$ and $\beta \in H_2(X, \mathbb{Z})$, suppose that the formula (\ref{form:mult:loc}) holds for any reduced curve $i \colon C \hookrightarrow X$ and $\gamma \in H_2(C, \mathbb{Z})$ with $\beta=i_{\ast} \gamma$. Then $N_{n, \beta}$ satisfies the formula (\ref{form:mult}). \end{lem} As we discussed in the Introduction, our purpose is to study Conjecture~\ref{conj:mult:loc} in terms of Jacobian localizations and parabolic stable pair invariants, which we recall in the next subsection. \subsection{(Local) parabolic stable pair theory} The notion of parabolic stable pairs is introduced in~\cite{Todpara}. It is determined by fixing a divisor, \begin{align*} H \in \lvert \mathcal{O}_X(h) \rvert, \end{align*} for some $h>0$. In what follows, we say a one cycle $\gamma$ on $X$ intersects with $H$ transversally if it satisfies $\dim H \cap \gamma=0$. Equivalently, no irreducible component of $\gamma$ is contained in $H$. \begin{defi}\label{defi:para} For a fixed divisor $H$ on $X$ as above, a parabolic stable pair is defined to be a pair \begin{align}\label{para:pair} (F, s), \quad s \in F \otimes \mathcal{O}_{H}, \end{align} such that the following conditions are satisfied.
\begin{itemize} \item The sheaf $F$ is a one dimensional $\omega$-semistable sheaf on $X$. \item The one cycle $[F]$ intersects with $H$ transversally. \item For any surjection $F \stackrel{\pi}{\twoheadrightarrow} F'$ with $\mu_{\omega}(F)=\mu_{\omega}(F')$, we have \begin{align*} (\pi \otimes \mathcal{O}_{H})(s) \neq 0. \end{align*} \end{itemize} \end{defi} The moduli space of parabolic stable pairs $(F, s)$ satisfying $[F]=\beta$, $\chi(F)=n$ is denoted by \begin{align}\label{moduli:para} M_n^{\rm{par}}(X, \beta). \end{align} By~\cite[Theorem~2.10]{Todpara}, if $H$ satisfies an additional condition given in~\cite[Lemma~2.9]{Todpara}, then the moduli space (\ref{moduli:para}) is a projective scheme even if $(n, \beta)$ does not satisfy the condition (\ref{gcd}). In the case that $H$ does not satisfy the condition in~\cite[Lemma~2.9]{Todpara}, the moduli space (\ref{moduli:para}) is at least a quasi-projective variety. (cf.~\cite[Remark~2.13]{Todpara}.) Suppose that a reduced one dimensional subscheme $i\colon C\hookrightarrow X$ satisfies $\dim H\cap C=0$. Then for any $\gamma \in H_2(C, \mathbb{Z})$ with $\beta=i_{\ast}\gamma$, we have the subscheme, \begin{align}\label{MnCg} M_n^{\rm{par}}(C, \gamma) \subset M_n^{\rm{par}}(X, \beta), \end{align} corresponding to parabolic stable pairs $(F, s)$ with $F$ supported on $C$, $[F]=\gamma$ as a one cycle on $X$ and $\chi(F)=n$. Let \begin{align*} \nu_{M} \colon M_n^{\rm{par}}(X, \beta) \to \mathbb{Z}, \end{align*} be Behrend's constructible function~\cite{Beh} on $M_n^{\rm{par}}(X, \beta)$. The local parabolic stable pair invariant is defined in the following way. \begin{defi}\label{defi:para:DT} For $\gamma \in H_2(C, \mathbb{Z})$, we define $\mathop{\rm DT}\nolimits_{n, \gamma}^{\rm{par}} \in \mathbb{Z}$ to be \begin{align}\label{DTpar:loc} \mathop{\rm DT}\nolimits_{n, \gamma}^{\rm{par}} \cneq \int_{M_n^{\rm{par}}(C, \gamma)} \nu_M d\chi.
\end{align} \end{defi} Here as in the local DT theory, we use the Behrend function on $M_n^{\rm{par}}(X, \beta)$, not on $M_n^{\rm{par}}(C, \gamma)$, to define the local invariant. \subsection{Multiple cover formula via parabolic stable pairs}\label{subsec:multiple:via} In~\cite{Todpara}, we established a relationship between (local) parabolic stable pair invariants and (local) generalized DT invariants. As a result, conjectures in Subsection~\ref{subsec:mult} can be translated into a formula relating (local) parabolic stable pair invariants and (local) DT invariants, which are both integer valued. Let $C \subset X$ be a reduced curve, with irreducible components $C_1, \cdots, C_N$, which intersects with $H$ transversally. As in Definition~\ref{defi:para:DT}, we have the local parabolic stable pair invariants w.r.t. $H$. For each $\mu \in \mathbb{Q}$, we set the generating series $\mathop{\rm DT}\nolimits^{\rm{par}}(\mu, C)$ to be \begin{align*} \mathrm{DT}^{\rm{par}}(\mu, C) \cneq 1+ \sum_{\begin{subarray}{c} n\in \mathbb{Z}, \ \gamma \in H_2(C, \mathbb{Z})_{>0}, \\ n/\omega \cdot \gamma=\mu \end{subarray}} \mathrm{DT}_{n, \gamma}^{\rm{par}} q^n t^{\gamma}. \end{align*} Here $H_2(C, \mathbb{Z})_{>0} \subset H_2(C, \mathbb{Z})$ is defined by \begin{align*} H_2(C, \mathbb{Z})_{>0} \cneq \left\{ \sum_{i=1}^{N} a_i [C_i] : a_i \ge 0 \right\} \setminus \{0\} \subset H_2(C, \mathbb{Z}). 
\end{align*} The statement of Conjecture~\ref{conj:mult:loc} can be translated into a product expansion formula (\ref{par:prod}) of $\mathop{\rm DT}\nolimits^{\rm{par}}(\mu, C)$ below: \begin{prop}{\bf\cite[Proposition~4.5]{Todpara}}\label{prop:translate} We have the formula (\ref{form:mult:loc}) for any $(n, \gamma) \in \mathbb{Z} \oplus H_2(C, \mathbb{Z})_{>0}$ with $n/\omega \cdot \gamma=\mu$ if and only if the following formula holds, \begin{align}\label{par:prod} \mathrm{DT}^{\rm{par}}(\mu, C) =\prod_{\begin{subarray}{c} \gamma \in H_2(C, \mathbb{Z})_{>0}, \\ n/\omega \cdot \gamma=\mu \end{subarray}} \left(1-(-1)^{\gamma \cdot H} q^n t^{\gamma} \right)^{(\gamma \cdot H)N_{1, \gamma}}. \end{align} \end{prop} If we are interested in the formula (\ref{form:mult:loc}) for a specified $(n, \gamma)$, then it is enough to check the formula (\ref{log:form}) below: let us take the logarithm of $\mathop{\rm DT}\nolimits^{\rm{par}}(\mu, C)$ and write \begin{align*} \log \mathop{\rm DT}\nolimits^{\rm{par}}(\mu, C)= \sum_{\begin{subarray}{c} \gamma \in H_2(C, \mathbb{Z})_{>0}, \\ n/\omega \cdot \gamma=\mu \end{subarray}} \widehat{\mathop{\rm DT}\nolimits}_{n, \gamma}^{\rm{par}}q^n t^{\gamma}. \end{align*} Note that $\widehat{\mathop{\rm DT}\nolimits}^{\rm{par}}_{n, \gamma}$ is written as \begin{align}\label{DT:hat} \widehat{\mathop{\rm DT}\nolimits}^{\rm{par}}_{n, \gamma} =\sum_{l\ge 1} \frac{(-1)^{l-1}}{l} \sum_{\begin{subarray}{c} \gamma_1+ \cdots +\gamma_l=\gamma, \gamma_i \in H_2(C, \mathbb{Z})_{>0}\\ n_1 + \cdots +n_l=n, n_i \in \mathbb{Z}, \\ n_i/\omega \cdot \gamma_i=n/\omega \cdot \gamma \end{subarray}} \prod_{i=1}^{l}\mathop{\rm DT}\nolimits_{n_i, \gamma_i}^{\rm{par}}. \end{align} Then we should have the formula, \begin{align}\label{log:form} \widehat{\mathop{\rm DT}\nolimits}_{n, \gamma}^{\rm{par}} =\sum_{k\ge 1, k|(n, \gamma)} \frac{(-1)^{\gamma \cdot H -1}}{k^2} (\gamma \cdot H) N_{1, \gamma/k}. 
\end{align}
Here the RHS of (\ref{log:form}) is the $q^n t^{\gamma}$-coefficient
of the RHS of (\ref{par:prod}).
Note that (\ref{log:form}) is a relationship between
$\mathbb{Z}$-valued invariants.
(cf.~\cite[Corollary~4.18]{Todpara}.)
\subsection{Jacobian actions on the moduli space of parabolic
stable pairs}\label{subsec:Jact}
In this subsection, we discuss Jacobian actions on the
moduli space of parabolic stable pairs.
Let $i \colon C \hookrightarrow X$ be a reduced curve,
and $H \subset X$ a divisor which intersects with $C$ transversally.
Let us take
\begin{align*}
n \in \mathbb{Z}, \ \gamma \in H_2(C, \mathbb{Z}),
\end{align*}
and set $\beta=i_{\ast} \gamma \in H_2(X, \mathbb{Z})$.
Let $U$ be a complex analytic neighborhood of $C$ in $X$,
\begin{align*}
C \subset U \subset X.
\end{align*}
Then we have the analytic open subset of the moduli space (\ref{moduli:M}),
\begin{align*}
M_n(U, \beta) \subset M_n(X, \beta),
\end{align*}
corresponding to $\omega$-semistable one dimensional sheaves
$F$ with $\mathop{\rm Supp}\nolimits(F) \subset U$.
Let $\mathop{\rm Pic}\nolimits^{0}(U)$ be the group of line bundles on $U$
whose restriction to any projective curve in $U$ has degree zero.
Then we have the action of $\mathop{\rm Pic}\nolimits^0(U)$ on $M_n(U, \beta)$ via
\begin{align*}
L \cdot F =F\otimes L,
\end{align*}
for $L \in \mathop{\rm Pic}\nolimits^0(U)$ and $F \in M_n(U, \beta)$.
The $\mathop{\rm Pic}\nolimits^0(U)$-action preserves the closed subscheme,
\begin{align*}
M_n(C, \gamma) \subset M_n(U, \beta),
\end{align*}
where the LHS is given in (\ref{closed:M}).
Let us consider parabolic stable pairs w.r.t. the divisor $H$ as above.
Similarly, we have the analytic open subspace,
\begin{align*}
M_n^{\rm{par}}(U, \beta) \subset M_n^{\rm{par}}(X, \beta),
\end{align*}
corresponding to parabolic stable pairs $(F, s)$
with $\mathop{\rm Supp}\nolimits(F) \subset U$.
Let $\widehat{\mathop{\rm Pic}\nolimits^0}(U)$ be the group defined by
\begin{align}\label{hat:pic}
\widehat{\mathop{\rm Pic}\nolimits^0}(U) \cneq
\{ (L, \lambda) :
L \in \mathop{\rm Pic}\nolimits^0(U), \
\lambda \colon \mathcal{O}_{H \cap U} \stackrel{\cong}{\to}
\mathcal{O}_{H\cap U} \otimes L \}.
\end{align}
Note that the forgetting map
$\widehat{\mathop{\rm Pic}\nolimits^0}(U) \ni (L, \lambda) \mapsto
L \in \mathop{\rm Pic}\nolimits^{0}(U)$
is surjective if $U$ is a sufficiently small analytic
neighborhood of $C$.
The group $\widehat{\mathop{\rm Pic}\nolimits^0}(U)$
acts on $M_n^{\rm{par}}(U, \beta)$ via
\begin{align}\label{LpFs}
(L, \lambda) \cdot (F, s)= (F\otimes L, s'),
\end{align}
where $s'$ is the image of $s$ by the isomorphism,
\begin{align*}
\textrm{id}_F \otimes \lambda \colon
F\otimes \mathcal{O}_H \stackrel{\cong}{\to}
F \otimes L \otimes \mathcal{O}_H.
\end{align*}
The above isomorphism makes sense since $F$ is supported on $U$.
Obviously the action (\ref{LpFs}) preserves the closed subspace,
\begin{align}\label{sub:par}
M_n^{\rm{par}}(C, \gamma) \subset M_n^{\rm{par}}(U, \beta),
\end{align}
where the LHS is given by the LHS of (\ref{MnCg}).
Also the action (\ref{LpFs})
is compatible with the $\mathop{\rm Pic}\nolimits^{0}(U)$-action on $M_n(U, \beta)$
and the forgetting morphisms,
\begin{align*}
M_n^{\rm{par}}(U, \beta) \ni (F, s) &\mapsto F \in M_n(U, \beta), \\
\widehat{\mathop{\rm Pic}\nolimits^0}(U) \ni (L, \lambda) &\mapsto L \in \mathop{\rm Pic}\nolimits^0(U).
\end{align*}
\begin{rmk}
By Chow's theorem, the complex analytic spaces
$M_n(U, \beta)$, $M_n^{\rm{par}}(U, \beta)$
are regarded as the moduli spaces of
$\omega$-semistable sheaves, parabolic stable pairs on $U$
in an analytic sense respectively.
Hence the above $\mathop{\rm Pic}\nolimits^0(U)$,
$\widehat{\mathop{\rm Pic}\nolimits^0}(U)$-actions make sense.
\end{rmk}
\subsection{Local multiple cover formula in simple cases}
Finally in this section, we discuss some situations
in which the formula (\ref{form:mult:loc})
is easily proved.
Let $C \subset U \subset X$ be as in the previous subsection.
In the following lemma, which is partially obtained
in~\cite[Proposition~6.19]{JS}, we reduce the problem to the
case that $C$ has only rational irreducible components.
\begin{lem}\label{lem:higher}
Let $C_1, \cdots, C_N$ be the irreducible components of $C$,
and take
\begin{align*}
\gamma=\sum_{i=1}^{N} a_i[C_i] \in H_2(C, \mathbb{Z})_{>0}.
\end{align*}
Suppose that there is $1\le i \le N$ such that
$a_i>0$ and the geometric genus of $C_i$ is
bigger than or equal to one.
Then for any $n\in \mathbb{Z}$, we have $N_{n, \gamma}=0$.
In particular, the formula (\ref{form:mult:loc}) holds.
\end{lem}
\begin{proof}
Let us consider the $\mathop{\rm Pic}\nolimits^{0}(U)$ action
on $M_n(U, \gamma)$ as in the previous subsection.
Note that any point $p \in M_n(U, \gamma)$ is represented
by an $\omega$-semistable sheaf $F$ which is a direct sum
of $\omega$-stable sheaves.
If $p$ is fixed by the action of
$\mathcal{L} \in \mathop{\rm Pic}\nolimits^{0}(U)$,
then we have $F \otimes \mathcal{L} \cong F$.
For the normalization $f \colon C_i^{\dag} \to C_i$, we have
\begin{align*}
f^{\ast}(F|_{C_i}) \cong f^{\ast}(F|_{C_i})
\otimes f^{\ast}(\mathcal{L}|_{C_i}).
\end{align*}
Taking the determinant of both sides, we have
\begin{align*}
f^{\ast}(\mathcal{L}|_{C_i})^{\otimes k} \cong \mathcal{O}_{C_i^{\dag}},
\end{align*}
for some $k \in \mathbb{Z}_{\ge 1}$.
Given $\gamma$, there are only finitely many possibilities
for the above $k$, say $k_1, \cdots, k_l$.
Since $\mathop{\rm Pic}\nolimits^0(C_i^{\dag})$ is a complex torus of positive
dimension, we can find a subgroup
\begin{align}\label{sub:S1}
S^1 \subset \mathop{\rm Pic}\nolimits^0(C_i^{\dag}),
\end{align}
which does not pass through any $k_j$-torsion points
for $1\le j\le l$.
On the other hand, we have the composition of the pull-backs \begin{align}\label{Pic:rest} \mathop{\rm Pic}\nolimits^{0}(U) \to \mathop{\rm Pic}\nolimits^0(C_i) \to \mathop{\rm Pic}\nolimits^{0}(C_i^{\dag}). \end{align} Since $U$ is a sufficiently small analytic neighborhood of $C$, an argument similar to Subsection~\ref{subsec:JacC} below shows that both of the arrows in (\ref{Pic:rest}) are surjective. Furthermore, the same argument also easily shows that there is a subgroup $S^1 \subset \mathop{\rm Pic}\nolimits^0(U)$ which restricts to the subgroup (\ref{sub:S1}) under the restriction (\ref{Pic:rest}). Then the action of $\mathop{\rm Pic}\nolimits^0(U)$ on $M_n(C, \gamma)$ restricted to $S^1 \subset \mathop{\rm Pic}\nolimits^0(U)$ is free, hence the same localization argument of~\cite[Proposition~6.19]{JS} shows the vanishing $N_{n, \gamma}=0$. \end{proof} Next we discuss the case that the class $\gamma \in H_2(C, \mathbb{Z})$ is primitive, i.e. $\gamma$ is not a multiple of some other element of $H_2(C, \mathbb{Z})$. \begin{lem}\label{N:primitive} Suppose that $\gamma \in H_2(C, \mathbb{Z})$ is primitive. Then $N_{n, \gamma}$ does not depend on $n$. In particular, the formula (\ref{form:mult:loc}) holds. \end{lem} \begin{proof} Let $\mathop{\rm Coh}\nolimits_C(X)$ be the category of coherent sheaves on $X$ supported on $C$. We first generalize $\mu_{\omega}$-stability to twisted stability on $\mathop{\rm Coh}\nolimits_C(X)$. Let $C \subset U \subset X$ be a sufficiently small analytic neighborhood, and take an element \begin{align*} B+i\omega \in H^2(U, \mathbb{C}), \end{align*} such that $\omega|_{C}$ is ample. For a one dimensional sheaf $F \in \mathop{\rm Coh}\nolimits_C(X)$, we set $\mu_{B, \omega}(F) \in \mathbb{Q}$ to be \begin{align*} \mu_{B, \omega}(F) \cneq \frac{\chi(F)-[F] \cdot B}{[F] \cdot \omega}. 
\end{align*}
Similarly to Definition~\ref{defi:semi},
we have the notion of $\mu_{B, \omega}$-stability on
$\mathop{\rm Coh}\nolimits_{C}(X)$, called \textit{twisted stability}.
As in the case of $\mu_{\omega}$-stability,
we can construct the moduli stack
$\mathcal{M}_n(C, \gamma, B+i\omega)$
parameterizing $\mu_{B, \omega}$-semistable objects
$F \in \mathop{\rm Coh}\nolimits_{C}(X)$
with $[F]=\gamma$ and $\chi(F)=n$,
and the generalized DT invariant defined by
the above moduli stack.
The same argument as in~\cite[Theorem~6.16]{JS}
shows that the resulting invariant does not depend on
a choice of $B$ and $\omega$, thus coincides with $N_{n, \gamma}$.
Let $C_1, \cdots, C_N$ be the irreducible components of $C$,
and set $\gamma=\sum_{i=1}^{N} a_i[C_i]$.
Since $\gamma$ is primitive, we have
$\mathrm{g.c.d.}(a_1, \cdots, a_N)=1$.
Hence we can find $m_1, \cdots, m_N \in \mathbb{Z}$
such that $\sum_{i=1}^{N} m_i a_i=1$.
Let us take divisors $D_1, \cdots, D_N$ on $U$
such that $D_i \cdot C_j =\delta_{ij}$,
and set $D=\sum_{i=1}^{N} m_i D_i$.
(This is possible since $U$ is taken to be a sufficiently
small analytic neighborhood of $C$ in $X$.)
Then we have the isomorphism of stacks,
\begin{align*}
\mathcal{M}_n(C, \gamma, B+i\omega)
\stackrel{\cong}{\to}
\mathcal{M}_{n+1}(C, \gamma, B-D+i\omega),
\end{align*}
given by $F \mapsto F\otimes \mathcal{O}_U(D)$.
Since the generalized DT invariants do not depend on
$B$ and $\omega$, the above isomorphism of stacks
immediately implies $N_{n, \gamma}=N_{n+1, \gamma}$
for all $n\in \mathbb{Z}$.
\end{proof}
\section{Cyclic covers of nodal rational curves}
Let $X$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$.
In what follows, we fix a connected reduced curve $C$
and an embedding,
\begin{align*}
i\colon C \hookrightarrow X,
\end{align*}
satisfying the following conditions.
\begin{itemize}
\item Any irreducible component of $C$ has geometric genus zero.
(We call such a curve a \textit{rational curve}.)
\item The curve $C$ has at worst nodal singularities.
\end{itemize}
Note that for our purpose, we can always assume the first
condition by Lemma~\ref{lem:higher}.
The arithmetic genus of $C$ is defined by
\begin{align*}
g(C) \cneq \dim H^1(C, \mathcal{O}_C).
\end{align*}
When $g(C)=0$, each irreducible component of $C$ is
$\mathbb{P}^1$, and the dual graph of $C$ is simply connected.
(See Subsection~\ref{subsec:Jac:C} below.)
In this case, we say $C$ is a \textit{tree of} $\mathbb{P}^1$.
Below we assume that $g(C)>0$.
We also fix an ample divisor $H$ in $X$ which is smooth,
connected, and intersects with $C$ transversally at
non-singular points of $C$.
\subsection{Jacobian group of $C$}\label{subsec:Jac:C}
We first recall the description of the Jacobian group
of a nodal curve $C$.
Suppose that $C$ has $\delta_n$ nodes and
$\delta_c$ irreducible components.
Let us take the normalization of $C$,
\begin{align*}
f \colon C^{\dag} \to C.
\end{align*}
We have the exact sequence of sheaves,
\begin{align*}
0 \to \mathcal{O}_C \to f_{\ast} \mathcal{O}_{C^{\dag}}
\to \mathbb{C}^{\oplus \delta_n} \to 0.
\end{align*}
By the long exact sequence of cohomology,
we obtain the isomorphism,
\begin{align*}
\mathbb{C}^{\delta_n -\delta_c +1} \cong H^1(C, \mathcal{O}_C).
\end{align*}
In particular, the arithmetic genus $g(C)$ satisfies
\begin{align*}
g(C) =\delta_n -\delta_c +1.
\end{align*}
Combining the above argument with the standard exact sequence,
\begin{align*}
0 \to \mathbb{Z} \to \mathcal{O}_C \to \mathcal{O}_C^{\ast} \to 1,
\end{align*}
we can easily see that $H^1(C, \mathbb{Z})$ generates
$H^1(C, \mathcal{O}_C)$ as a $\mathbb{C}$-vector space,
\begin{align}\label{gen:H1}
H^1(C, \mathbb{Z}) \otimes_{\mathbb{Z}} \mathbb{C}
\cong H^1(C, \mathcal{O}_C).
\end{align}
Hence we have the isomorphisms,
\begin{align}\notag
(\mathbb{C}^{\ast})^{g(C)} &\cong H^1(C, \mathcal{O}_C)/H^1(C, \mathbb{Z}) \\
\label{isom:Pic}
& \cong \mathop{\rm Pic}\nolimits^0(C).
\end{align}
We can interpret the above isomorphism in terms of the dual graph
$\Gamma_C$ associated to $C$, determined in the following way:
\begin{itemize}
\item The vertices and edges of $\Gamma_C$ correspond to
irreducible components of $C$ and nodal points of $C$ respectively.
\item An edge $e$ corresponding to a nodal point $x \in C$
connects vertices $v_1$, $v_2$ if
the corresponding irreducible components $C_1$, $C_2$
satisfy $x \in C_1 \cap C_2$.
(Note that the case of $v_1=v_2$ corresponds to a self node.)
\end{itemize}
Then $\Gamma_C$ is a connected graph satisfying
$b_1(\Gamma_C)=g(C)$.
We can interpret (\ref{isom:Pic}) as the isomorphism,
\begin{align}\label{isom:pic}
H^1(\Gamma_C, \mathbb{Z}) \otimes_{\mathbb{Z}}\mathbb{C}^{\ast}
\cong \mathop{\rm Pic}\nolimits^0(C).
\end{align}
The isomorphism (\ref{isom:pic}) can be constructed
in the following way.
For an oriented loop $\alpha$ in $\Gamma_C$,
we choose an edge $e \subset \alpha$
so that $\Gamma_C \setminus \{ e\}$ is still connected.
Let $x \in C$ be a nodal point corresponding to the edge $e$.
Let $v_1, v_2$ be vertices in $\Gamma_C$ connected by $e$
so that $e$ starts from $v_1$ and ends at $v_2$.
Let $C_1$, $C_2$ be the irreducible components of $C$
which correspond to $v_1$, $v_2$ respectively.
We partially normalize $C$ at $x$, and obtain $C^{\dag}_x$,
\begin{align}\label{part:norm}
f_x \colon C^{\dag}_x \to C.
\end{align}
For $i=1, 2$, let $C^{\dag}_i$ be the irreducible component of
$C^{\dag}_x$ which is mapped to $C_i$ by $f_x$.
The preimage of $x$ under $f_x$ consists of the two points
\begin{align*}
x_i \in C^{\dag}_i, \quad i=1, 2.
\end{align*}
(In case of $v_1=v_2$, i.e. $x\in C$ is a self node,
we need to fix a correspondence between an orientation of $e$
and a numbering of the two points $f_x^{-1}(x)$.)
For $z \in \mathbb{C}^{\ast}$, we glue the trivial line bundle
on $C_x^{\dag}$ by the isomorphism at $x_i$,
\begin{align*}
\mathcal{O}_{C^{\dag}_x} \otimes k(x_1) \ni
a \mapsto za \in \mathcal{O}_{C^{\dag}_x} \otimes k(x_2).
\end{align*}
The above gluing procedure produces a line bundle on $C$.
The resulting line bundle is independent of a choice of $e$,
and denoted by $L_{\alpha, z}$.
The isomorphism (\ref{isom:pic}) is given by
sending $\alpha \otimes z$ to $L_{\alpha, z}$.
\subsection{Jacobian group of an analytic neighborhood of $C$}
\label{subsec:JacC}
Let $C \subset X$ be as in the previous subsection.
We take an analytic open neighborhood $U$ of $C$ in $X$,
\begin{align*}
C \subset U \subset X.
\end{align*}
If we take $U$ sufficiently small so that it is homotopy
equivalent to $C$, we have the commutative diagram of
exact sequences,
\begin{align*}
\xymatrix{
0 \ar[r] & H^1(U, \mathbb{Z}) \ar[r]\ar[d]^{\cong} & H^1(U, \mathcal{O}_U) \ar[r]\ar[d] & \mathop{\rm Pic}\nolimits^0(U) \ar[r]\ar[d] & 1 \\
0 \ar[r] & H^1(C, \mathbb{Z}) \ar[r] & H^1(C, \mathcal{O}_C) \ar[r] & \mathop{\rm Pic}\nolimits^0(C) \ar[r] & 1.
}
\end{align*}
Here all the vertical morphisms are pull-backs with respect to
the inclusion $C \hookrightarrow U$.
Let $W$ be the sub $\mathbb{C}$-vector space of
$H^1(U, \mathcal{O}_U)$ generated by $H^1(U, \mathbb{Z})$.
Then the above commutative diagram and
the isomorphism (\ref{gen:H1}) imply that
\begin{align}\label{emb:pic}
\mathop{\rm Pic}\nolimits^0(C) \cong W/H^1(U, \mathbb{Z}) \subset \mathop{\rm Pic}\nolimits^0(U).
\end{align}
By composing the isomorphism (\ref{isom:pic}) and
the embedding (\ref{emb:pic}), we obtain the embedding,
\begin{align}\label{emb:G}
H^1(\Gamma_C, \mathbb{Z}) \otimes_{\mathbb{Z}} \mathbb{C}^{\ast}
\hookrightarrow \mathop{\rm Pic}\nolimits^0(U).
\end{align}
The embedding (\ref{emb:G}) can be described in the following way.
For each nodal point $x\in C^{\rm{sing}}$,
let us fix a norm $\lVert \ast \rVert$ on an analytic
neighborhood of $x$ in $X$, and set
\begin{align}\label{open:V}
V_{x}(\epsilon)
=\{ x' \in X : \lVert x' -x \rVert <\epsilon \},
\end{align}
for $\epsilon>0$.
We construct the following open subsets in $U$,
\begin{align*}
U_{x}(\epsilon) &\cneq U \cap V_{x}(\epsilon), \\
U_{x}'(\epsilon) &\cneq U \setminus \overline{V}_{x}(\epsilon).
\end{align*}
Then for $0<\epsilon'<\epsilon$, the collection
$\{U_{x}(\epsilon), U_{x}'(\epsilon')\}$ is an open cover of $U$.
Suppose that $x\in C$ is contained in two irreducible components
$C_1$, $C_2$. Then we have
\begin{align*}
U_{x}(\epsilon) \cap U_x'(\epsilon') \cap C
= \coprod_{j=1}^{2} (U_x(\epsilon) \cap U_x'(\epsilon') \cap C_j),
\end{align*}
and each $U_x(\epsilon) \cap U_x'(\epsilon') \cap C_j$ is
homeomorphic to an open annulus in $\mathbb{C}$.
Hence if $U$ is chosen to be sufficiently small,
we have the decomposition,
\begin{align}\label{decomp}
U_x(\epsilon) \cap U_x'(\epsilon') =W_1 \coprod W_2,
\end{align}
such that we have
\begin{align*}
U_x(\epsilon) \cap U_x'(\epsilon') \cap C_j \subset W_j.
\end{align*}
Let $\alpha$ be an oriented loop in $\Gamma_C$ and take
$z \in \mathbb{C}^{\ast}$.
As in the previous subsection, we take an edge $e \subset \alpha$
such that $\Gamma_C \setminus \{e\}$ is connected.
If $e$ corresponds to the node $x\in C$, we construct the line
bundle on $U$ by gluing the trivial line bundles on
$U_{x}(\epsilon)$ and $U_{x}'(\epsilon')$ by the isomorphism,
\begin{align*}
\mathcal{O}_{W_1} \oplus \mathcal{O}_{W_2} &\ni (a_1, a_2) \\
&\mapsto (z a_1, a_2) \in \mathcal{O}_{W_1} \oplus \mathcal{O}_{W_2}.
\end{align*}
Here we have used the identification by (\ref{decomp}),
\begin{align*}
\mathcal{O}_{U_{x}(\epsilon) \cap U_x'(\epsilon')}=
\mathcal{O}_{W_1} \oplus \mathcal{O}_{W_2}.
\end{align*}
The resulting line bundle is independent of $e$,
and denoted by $\mathcal{L}_{\alpha, z}$.
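To illustrate the above construction, consider the following
simple example: suppose that $C=C_1 \cup C_2$ consists of two
rational curves meeting at two nodes $x$, $x'$, so that
$\delta_n=\delta_c=2$ and $g(C)=1$.
Then $\Gamma_C$ consists of two vertices joined by two edges,
and carries a unique loop $\alpha$ up to orientation.
Choosing $e \subset \alpha$ to be the edge corresponding to $x$,
the line bundle $\mathcal{L}_{\alpha, z}$ is obtained by gluing
the trivial line bundles on $U_{x}(\epsilon)$ and
$U_{x}'(\epsilon')$ by multiplication by $z$ on $W_1$ and by the
identity on $W_2$.
Its restriction to $C$ is the line bundle $L_{\alpha, z}$,
glued by the factor $z$ at $x$ and trivially at $x'$,
and $z \mapsto \mathcal{L}_{\alpha, z}$ realizes the embedding
$\mathbb{C}^{\ast} \cong \mathop{\rm Pic}\nolimits^0(C)
\subset \mathop{\rm Pic}\nolimits^{0}(U)$ given by (\ref{emb:pic}).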
Note that $\mathcal{L}_{\alpha, z}$ restricts to
a line bundle $L_{\alpha, z}$ on $C$, constructed in the previous
subsection.
When $x$ is a self node, $\mathcal{L}_{\alpha, z}$
can be similarly constructed by replacing $C_1$, $C_2$
by analytic branches of $C$ near $x$.
The embedding (\ref{emb:G}) is given by sending
$\alpha \otimes z$ to $\mathcal{L}_{\alpha, z}$.
If we fix an oriented loop $\alpha$ in $\Gamma_{C}$,
we have the complex subtorus,
\begin{align}\label{ctorus}
\mathbb{C}^{\ast} \subset \mathop{\rm Pic}\nolimits^{0}(U),
\end{align}
given by the embedding $z \mapsto \mathcal{L}_{\alpha, z}$.
Here recall that, in the first part of this section,
we took a divisor $H \subset X$ so that it intersects with
$C$ at non-singular points on $C$.
Therefore if we furthermore fix an edge $e \subset \alpha$,
the above construction of $\mathcal{L}_{\alpha, z}$
yields a canonical isomorphism,
\begin{align*}
\phi_{\alpha, e, z} \colon
\mathcal{O}_{H \cap U} \stackrel{\cong}{\to}
\mathcal{O}_{H\cap U} \otimes \mathcal{L}_{\alpha, z}.
\end{align*}
This implies that the embedding (\ref{ctorus})
lifts to a group homomorphism,
\begin{align}\label{lift}
\mathbb{C}^{\ast} \hookrightarrow \widehat{\mathop{\rm Pic}\nolimits^0}(U),
\end{align}
given by
\begin{align*}
z \mapsto (\mathcal{L}_{\alpha, z}, \phi_{\alpha, e, z}),
\end{align*}
which is an embedding.
Here $\widehat{\mathop{\rm Pic}\nolimits^0}(U)$ is defined by (\ref{hat:pic}).
Combined with the argument in Subsection~\ref{subsec:Jact},
we have the action of the subtorus (\ref{lift}) on the moduli space of
parabolic stable pairs $M_n^{\rm{par}}(U, \beta)$,
which restricts to the action on the subspace (\ref{sub:par}).
\subsection{Cyclic covers of $U$}\label{subsec:cyclic}
Let us fix a loop $\alpha \subset \Gamma_{C}$
and consider the subtorus (\ref{ctorus}),
$z \mapsto \mathcal{L}_{\alpha, z}$.
The root of unity $z=e^{2\pi i/m}$ corresponds to the line bundle,
\begin{align*}
\mathcal{L}_{\alpha, m} \cneq \mathcal{L}_{\alpha, e^{2\pi i/m}}
\in \mathop{\rm Pic}\nolimits^0(U),
\end{align*}
which is an $m$-torsion element, i.e. there is an isomorphism of
line bundles,
\begin{align}\label{iso:bun}
\psi_{\alpha, m} \colon
\mathcal{O}_{U} \stackrel{\cong}{\to} \mathcal{L}_{\alpha, m}^{\otimes m}.
\end{align}
Given an isomorphism $\psi_{\alpha, m}$ as above, we can construct
the complex manifold $\widetilde{U}_{\alpha, m}$ as follows:
\begin{align*}
\widetilde{U}_{\alpha, m} \cneq
\{ y \in \mathcal{L}_{\alpha, m} :
y^{\otimes m} =\psi_{\alpha, m}(1) \}.
\end{align*}
Here we have regarded the line bundle $\mathcal{L}_{\alpha, m}$
as its total space on $U$.
The projection $\mathcal{L}_{\alpha, m} \to U$
induces the morphism,
\begin{align}\label{sigma:U}
\sigma_{\alpha, m} \colon
\widetilde{U}_{\alpha, m} \to U,
\end{align}
which is a covering map of degree $m$.
By taking the pull-back of $C \subset U$ by $\sigma_{\alpha, m}$,
we obtain the $m$-fold \'{e}tale cover of $C$,
\begin{align}\label{etale:C}
\sigma_{\alpha, m}|_{\widetilde{C}_{\alpha, m}} \colon
\widetilde{C}_{\alpha, m} \to C.
\end{align}
Note that $\widetilde{U}_{\alpha, m}$ is a complex manifold
containing $\widetilde{C}_{\alpha, m}$, and satisfies
\begin{align*}
\bigwedge^{3}T_{\widetilde{U}_{\alpha, m}}^{\vee}
\cong \mathcal{O}_{\widetilde{U}_{\alpha, m}}.
\end{align*}
The $m$-fold cover (\ref{etale:C}) is determined by
the isomorphism (\ref{iso:bun}) restricted to $C$,
which is described in the following way.
Let us choose an edge $e \subset \alpha$ corresponding to
the node $x\in C$, and take a partial normalization
$C_x^{\dag}$ as in (\ref{part:norm}).
Let $x_1, x_2 \in C_x^{\dag}$ be the preimages of $x$.
We take $m$ copies of $\{C_x^{\dag}, x_1, x_2\}$,
\begin{align*}
\{ C_{x, i}, x_{1, i}, x_{2, i}\}, \quad i \in \mathbb{Z}/m\mathbb{Z}.
\end{align*}
Then $\widetilde{C}_{\alpha, m}$ is given by
\begin{align}\label{Cxi}
\widetilde{C}_{\alpha, m}
=\bigcup_{i \in \mathbb{Z}/m\mathbb{Z}} C_{x, i},
\end{align}
where $C_{x, i}$ and $C_{x, i+1}$ are glued at
$x_{1, i}$ and $x_{2, i+1}$.
(See Figure~\ref{fig:two}.)
\begin{figure}
\caption{3-fold cover of a rational curve with two nodes}
\label{fig:two}
\end{figure}
\subsection{Coherent sheaves on $\widetilde{U}$}\label{subsec:Coh}
Let us consider the $m$-fold cover (\ref{sigma:U})
constructed in the previous subsection.
In this subsection, we investigate coherent sheaves
on the covering (\ref{sigma:U}).
In what follows, we fix $\alpha$ and $m$,
so we omit these symbols in the notation, e.g. we write
$\widetilde{U}_{\alpha, m}$, $\widetilde{C}_{\alpha, m}$
as $\widetilde{U}$, $\widetilde{C}$, etc.
We first discuss the category of coherent sheaves on
the covering $\widetilde{U}$.
By the construction, the $\mathcal{O}_U$-algebra
$\sigma_{\ast}\mathcal{O}_{\widetilde{U}}$ is given by
\begin{align*}
\sigma_{\ast}\mathcal{O}_{\widetilde{U}}
\cong \mathcal{O}_{U} \oplus \mathcal{L}^{-1} \oplus \cdots
\oplus \mathcal{L}^{-m+1}.
\end{align*}
The algebra structure on the RHS is given by,
for $(a, b) \in \mathcal{L}^{-i} \times \mathcal{L}^{-j}$,
\begin{align*}
(a, b) \mapsto
\left\{ \begin{array}{cc}
a\otimes b, & i+j<m, \\
\psi (a\otimes b), & i+j\ge m.
\end{array} \right.
\end{align*}
Here $\psi=\psi_{\alpha, m}$ is the isomorphism given in (\ref{iso:bun}).
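For instance, when $m=2$, the above algebra structure reads as
follows: we have
$\sigma_{\ast}\mathcal{O}_{\widetilde{U}} \cong
\mathcal{O}_U \oplus \mathcal{L}^{-1}$, with multiplication
\begin{align*}
(a_0, a_1) \cdot (b_0, b_1)
= (a_0 b_0 + \psi(a_1 \otimes b_1), \ a_0 b_1 + a_1 b_0),
\end{align*}
for local sections $a_0, b_0$ of $\mathcal{O}_U$ and
$a_1, b_1$ of $\mathcal{L}^{-1}$.
Here $\psi(a_1 \otimes b_1)$ denotes the section of
$\mathcal{O}_U$ corresponding to
$a_1 \otimes b_1 \in \mathcal{L}^{-2}$ under the isomorphism $\psi$.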
By the above isomorphism of $\mathcal{O}_{U}$-algebras,
it is easy to show the following lemma:
\begin{lem}\label{lem:forward}
The push-forward functor $\sigma_{\ast}$ identifies
the category $\mathop{\rm Coh}\nolimits(\widetilde{U})$ with the
category of pairs,
\begin{align*}
(F, \phi_F), \quad \phi_F \colon F \to F \otimes \mathcal{L},
\end{align*}
where $F \in \mathop{\rm Coh}\nolimits(U)$ and $\phi_F$ is a morphism
in $\mathop{\rm Coh}\nolimits(U)$ satisfying
\begin{align}\label{cond:cyc}
\overbrace{\phi_F \circ \cdots \circ \phi_F}^{m}
=\mathrm{id}_F \otimes \psi,
\end{align}
as morphisms $F \to F \otimes \mathcal{L}^{\otimes m}$.
\end{lem}
Note that a choice of a loop $\alpha \subset \Gamma_C$ and
an edge $e \subset \alpha$ yields a lift of (\ref{ctorus})
by (\ref{lift}), which sends $z \in \mathbb{C}^{\ast}$
to $(\mathcal{L}_{\alpha, z}, \phi_{\alpha, e, z})$
in the notation of Subsection~\ref{subsec:JacC}.
In particular, a cyclic subgroup of order $m$ in
$\mathop{\rm Pic}\nolimits^0(U)$ generated by the line bundle
$\mathcal{L}=\mathcal{L}_{\alpha, e^{2\pi i/m}}$
lifts to an embedding into $\widehat{\mathop{\rm Pic}\nolimits^0}(U)$,
\begin{align}\label{mPic2}
\mathbb{Z}/m\mathbb{Z} \subset \mathbb{C}^{\ast}
\subset \widehat{\mathop{\rm Pic}\nolimits^0}(U).
\end{align}
In other words, the structure sheaf $\mathcal{O}_{H\cap U}$
of the divisor $H\cap U$ in $U$ is equipped with an isomorphism,
\begin{align}\label{lambda}
\lambda_H \colon \mathcal{O}_{H} \stackrel{\cong}{\to}
\mathcal{O}_{H} \otimes \mathcal{L},
\end{align}
such that $\phi_{\mathcal{O}_H} \cneq \lambda_H$ satisfies
the condition (\ref{cond:cyc}).
Here we have denoted $H\cap U$ just by $H$ for simplicity.
Hence $\lambda_H$ determines a lift of $\mathcal{O}_{H}$
to a coherent sheaf on $\widetilde{U}$.
This lift is the structure sheaf of a divisor in $\widetilde{U}$,
\begin{align}\label{data:HU}
\widetilde{H} \subset \widetilde{U},
\end{align}
which intersects with $\widetilde{C}$ transversally.
(cf.~Figure~\ref{fig:two}.)
Later we will need the following lemma on the compatibility
of $\psi$ with $\lambda_H$.
\begin{lem}\label{lem:i+ii+iii}
(i) The isomorphism $\psi$ in (\ref{iso:bun}) induces the
isomorphism,
\begin{align}\label{wpsi}
\widetilde{\psi} \colon
\mathcal{O}_{\widetilde{U}} \stackrel{\cong}{\to} \sigma^{\ast} \mathcal{L}.
\end{align}
(ii) The isomorphism $\lambda_H$ in (\ref{lambda})
induces the isomorphism,
\begin{align}\label{lambdaH}
\widetilde{\lambda}_{H} \colon
\bigoplus_{g \in \mathbb{Z}/m\mathbb{Z}} g_{\ast}\mathcal{O}_{\widetilde{H}}
\stackrel{\cong}{\to} \sigma^{\ast}\mathcal{O}_H.
\end{align}
Here $\mathbb{Z}/m\mathbb{Z}$ acts on $\widetilde{U}$
by deck transformations of the covering
$\sigma \colon \widetilde{U} \to U$.
(iii) The compositions,
\begin{align}\label{comp1}
&(\widetilde{\lambda}_{H} \otimes \mathrm{id}_{\sigma^{\ast}\mathcal{L}})^{-1}
\circ (\widetilde{\psi} \otimes \mathrm{id}_{\sigma^{\ast}\mathcal{O}_H})
\circ \widetilde{\lambda}_H, \\
\label{comp2}
&(\widetilde{\lambda}_H \otimes \mathrm{id}_{\sigma^{\ast}\mathcal{L}})^{-1}
\circ \sigma^{\ast}\lambda_H \circ \widetilde{\lambda}_H,
\end{align}
determine two isomorphisms,
\begin{align}\label{two:isom}
\bigoplus_{g \in \mathbb{Z}/m\mathbb{Z}} g_{\ast}\mathcal{O}_{\widetilde{H}}
\to
\bigoplus_{g \in \mathbb{Z}/m\mathbb{Z}} g_{\ast}\mathcal{O}_{\widetilde{H}}
\otimes \sigma^{\ast}\mathcal{L}.
\end{align}
Both of (\ref{comp1}) and (\ref{comp2})
preserve direct summands of both sides of (\ref{two:isom}),
and they are related by
\begin{align*}
(\ref{comp1})|_{g_{\ast}\mathcal{O}_{\widetilde{H}}}
= e^{2\pi gi/m} \cdot
(\ref{comp2})|_{g_{\ast}\mathcal{O}_{\widetilde{H}}}.
\end{align*}
\end{lem}
\begin{proof}
(i) By Lemma~\ref{lem:forward}, it is enough to construct
a $\sigma_{\ast}\mathcal{O}_{\widetilde{U}}$-module isomorphism
$\sigma_{\ast}\widetilde{\psi}$ between
$\sigma_{\ast}\mathcal{O}_{\widetilde{U}}$
and $\sigma_{\ast}\sigma^{\ast}\mathcal{L} \cong
\mathcal{L} \otimes \sigma_{\ast}\mathcal{O}_{\widetilde{U}}$.
It is constructed as
\begin{align}\notag
&(x_0, x_1, \cdots, x_{m-1}) \in
\mathcal{O}_U \oplus \mathcal{L}^{-1} \oplus \cdots \oplus \mathcal{L}^{-m+1} \\
\label{sigma:ast}
& \quad \mapsto
(\psi(x_{m-1}), x_0, \cdots, x_{m-2})
\in \mathcal{L} \oplus \mathcal{O}_U \oplus \cdots \oplus \mathcal{L}^{-m+2}.
\end{align}
(ii) As in (i), it is enough to construct
$\sigma_{\ast}\widetilde{\lambda}_{H}$.
Note that $\sigma_{\ast}g_{\ast}\mathcal{O}_{\widetilde{H}}$
is isomorphic to $\mathcal{O}_H$, whose
$\sigma_{\ast}\mathcal{O}_{\widetilde{U}}$-module structure is given by
$e^{2\pi gi/m} \cdot \lambda_H$.
The morphism $\sigma_{\ast}\widetilde{\lambda}_{H}$ restricted to
$\sigma_{\ast}g_{\ast}\mathcal{O}_{\widetilde{H}}$ is constructed to be
\begin{align}\label{restrict}
x\in \mathcal{O}_H \mapsto &
(x, e^{-2\pi gi/m}\lambda_H^{-1}(x), \cdots,
e^{-2\pi (m-1)gi/m} \lambda_H^{-m+1}(x)) \\
\notag
&\in \mathcal{O}_H \oplus \mathcal{L}|_{H}^{-1} \oplus \cdots
\oplus \mathcal{L}|_{H}^{-m+1} \cong \sigma_{\ast}\sigma^{\ast}\mathcal{O}_H.
\end{align}
It is easy to check that the $\sigma_{\ast}\widetilde{\lambda}_H$
constructed as above is an isomorphism of
$\sigma_{\ast}\mathcal{O}_{\widetilde{U}}$-modules.
(iii) The statement of (iii) easily follows by comparing
(\ref{sigma:ast}) and (\ref{restrict}).
\end{proof}
\subsection{Cyclic neighborhoods}\label{subsec:Cyclic}
In this subsection, we generalize the construction in the previous
subsections and define the notion of cyclic neighborhoods.
This will be used for the induction argument in the proof of
Theorem~\ref{thm:main:cov} below.
\begin{defi}\label{def:cyclic} Let $X$, $C \subset U \subset X$ be as before. Let $C'$ be a connected nodal curve which is embedded into a three dimensional complex manifold $U'$. We say $C' \subset U'$ is a cyclic neighborhood of $C\subset X$ if there is a sequence of local immersions, \begin{align}\label{unrami} \sigma' \colon U'=U_{(R)} \stackrel{\sigma'_{(R)}}{\to} U_{(R-1)} \to \cdots \to U_{(1)} \stackrel{\sigma'_{(1)}}{\to} U_{(0)}=U, \end{align} and connected nodal curves $C_{(i)} \subset U_{(i)}$ such that the following conditions hold: \begin{itemize} \item For each $i$, $U_{(i)}$ is a small analytic neighborhood of $C_{(i)}$, and $C_{(R)}=C'$, $C_{(0)}=C$. \item For each $i$, the map $\sigma'_{(i)} \colon U_{(i)} \to U_{(i-1)}$ factorizes as \begin{align*} \sigma_{(i)}' \colon U_{(i)} \subset \widetilde{U}_{(i-1)} \stackrel{\sigma_{(i)}}{\to} U_{(i-1)}, \end{align*} where $\sigma_{(i)}$ is a cyclic covering with respect to some non-trivial loop in the graph $\Gamma_{C_{(i-1)}}$, and $U_{(i)} \subset \widetilde{U}_{(i-1)}$ is an open immersion. \item The curves $C_{(i)}$ satisfy $\sigma_{(i)}(C_{(i)}) \subset C_{(i-1)}$. \end{itemize} \end{defi} Below, a cyclic neighborhood $C' \subset U'$ of $C\subset X$ will be written as \begin{align*} (C'\subset U') \stackrel{\sigma'}{\to} (C\subset X), \end{align*} where $\sigma'$ is a composition of local immersions (\ref{unrami}). In order to discuss parabolic stable pair invariants on cyclic neighborhoods, we define the notion of a lift of $H\subset X$ as follows: \begin{defi} For a cyclic neighborhood $C'\subset U'$ of $C\subset X$, a divisor $H' \subset U'$ is called a lift of $H\subset X$ if the following conditions hold: \begin{itemize} \item There is a sequence (\ref{unrami}) together with divisors $H_{(i)} \subset U_{(i)}$ such that $H_{(i)}$ is obtained by lifting $H_{(i-1)}$ to $\widetilde{U}_{(i-1)} \to U_{(i-1)}$ as in (\ref{data:HU}) and restricting to $U_{(i)} \subset \widetilde{U}_{(i-1)}$. 
\item We have $H\cap U=H_{(0)}$ and $H'=H_{(R)}$.
\end{itemize}
\end{defi}
Given a cyclic neighborhood $(C'\subset U') \stackrel{\sigma'}{\to} (C\subset X)$ with a lift $H' \subset U'$ of $H\subset X$, we can similarly define the notion of parabolic stable pairs,
\begin{align}\label{para:tilde}
(F', s') \quad s' \in F'\otimes \mathcal{O}_{H'},
\end{align}
with $F'$ a one dimensional $\sigma^{'\ast}\omega$-semistable coherent sheaf on $U'$, satisfying the same axiom as in Definition~\ref{defi:para}. Here we have to assume that the support of $F'$ is compact in order to define $\sigma^{'\ast}\omega$-semistability. In what follows, we always assume that $\sigma^{'\ast}\omega$-semistable sheaves on $U'$ have compact supports.
\subsection{Moduli spaces and counting invariants on cyclic neighborhoods}\label{moduli:cyclic}
Let $(C'\subset U') \stackrel{\sigma'}{\to} (C\subset X)$ be a cyclic neighborhood and $H' \subset U'$ a lift of $H\subset X$. Since $U'$ is just a complex manifold, we need to work in the analytic category when discussing moduli spaces of semistable sheaves or parabolic stable pairs. A general moduli theory of sheaves on complex analytic spaces does not seem to have been established yet. However, since a cyclic neighborhood admits a sequence (\ref{unrami}), we can inductively show the existence of analytic moduli spaces on cyclic neighborhoods. The result is formulated as follows:
\begin{lem}\label{lem:stack}
In the above situation, take $n\in \mathbb{Z}$ and $\beta' \in H_2(U', \mathbb{Z})$.

(i) There is an analytic stack of finite type $\mathcal{M}_n(U', \beta')$, which parameterizes $\sigma^{'\ast}\omega$-semistable one dimensional sheaves $F' \in \mathop{\rm Coh}\nolimits(U')$ satisfying
\begin{align}\label{Fprime}
[F']=\beta', \quad \chi(F')=n.
\end{align}
(ii) There is an analytic space of finite type $M_n^{\rm{par}}(U', \beta')$, which represents a functor of families of parabolic stable pairs $(F', s')$ satisfying (\ref{Fprime}).
\end{lem}
\begin{proof}
(i) If $(C', U')=(C, U)$, then $\mathcal{M}_n(U', \beta')$ is obtained as an analytic open substack of an Artin stack $\mathcal{M}_n(X, \beta')$. Suppose that $U'$ is an open subset of an $m$-fold cover $\sigma \colon \widetilde{U} \to U$ given by $\mathcal{L} \in \mathop{\rm Pic}\nolimits^0(U)$, as in Subsection~\ref{subsec:Coh} and Subsection~\ref{subsec:Cyclic}. As an abstract stack, there is a 1-morphism,
\begin{align}\label{1-mor}
\mathcal{M}_{n}(\widetilde{U}, \beta') \to \mathcal{M}_n(U, \sigma_{\ast}\beta'),
\end{align}
by sending $F'$ to $\sigma_{\ast}F'$. In fact, since we have the decomposition,
\begin{align*}
\sigma^{\ast}\sigma_{\ast}F' \cong \bigoplus_{g\in \mathbb{Z}/m\mathbb{Z}} g_{\ast}F',
\end{align*}
the sheaf $\sigma_{\ast}F'$ is $\omega$-semistable by~\cite[Lemma~3.2.2]{Hu}. By Lemma~\ref{lem:forward}, the fiber of (\ref{1-mor}) at $[F] \in \mathcal{M}_n(U, \sigma_{\ast}\beta')$ is given by the closed subset of the finite dimensional vector space,
\begin{align*}
\phi_F \in \mathop{\rm Hom}\nolimits(F, F\otimes \mathcal{L})
\end{align*}
consisting of the elements satisfying (\ref{cond:cyc}). Therefore (\ref{1-mor}) is representable, and $\mathcal{M}_n(U', \beta')$ is an analytic stack of finite type. The general case is obtained by applying the above argument to the sequence (\ref{unrami}).

(ii) Let $\mathcal{M}_n^{\rm{par}}(U', \beta')$ be an abstract stack of families of parabolic stable pairs on $U'$. We have the forgetful 1-morphism,
\begin{align}\label{forget}
\mathcal{M}_n^{\rm{par}}(U', \beta') \to \mathcal{M}_n(U', \beta'),
\end{align}
sending $(F', s')$ to $F'$. The fiber of (\ref{forget}) at $[F'] \in \mathcal{M}_n(U', \beta')$ is given by the open subset of
\begin{align*}
s' \in F'\otimes \mathcal{O}_{H'} \cong \mathbb{C}^{\beta' \cdot H'},
\end{align*}
consisting of the sections giving parabolic stable pair structures on $F'$.
Hence (\ref{forget}) is a representable smooth morphism, and in particular $\mathcal{M}_n^{\rm{par}}(U', \beta')$ is an analytic stack of finite type. However, as in~\cite[Lemma~2.7]{Todpara}, there are no non-trivial stabilizer groups in $\mathcal{M}_n^{\rm{par}}(U', \beta')$. This implies that $\mathcal{M}_n^{\rm{par}}(U', \beta')$ is represented by an analytic space of finite type, $M_n^{\rm{par}}(U', \beta')$.
\end{proof}
In the above situation, let us take
\begin{align*}
\gamma' \in H_2(C', \mathbb{Z}), \quad i'_{\ast}\gamma'=\beta',
\end{align*}
where $i' \colon C' \hookrightarrow U'$ is the embedding. Similarly to (\ref{M(Cg)}), there is the sub analytic stack,
\begin{align}\label{sub:anay:stack}
\mathcal{M}_n(C', \gamma') \subset \mathcal{M}_n(U', \beta'),
\end{align}
parameterizing $\sigma^{'\ast}\omega$-semistable sheaves $F'$ with $[F']=\gamma'$ as a one cycle supported on $C'$ and $\chi(F')=n$. Also similarly to (\ref{MnCg}), we have the sub analytic space,
\begin{align*}
M_n^{\rm{par}}(C', \gamma') \subset M_n^{\rm{par}}(U', \beta'),
\end{align*}
parameterizing parabolic stable pairs $(F', s')$ as above. It is straightforward to generalize the notions of Hall algebras and Behrend functions to our analytic category on $U'$. Consequently, we have the invariants,
\begin{align}\label{inv:onU'}
N_{n, \gamma'}(U') \in \mathbb{Q}, \quad \mathop{\rm DT}\nolimits_{n, \gamma'}^{\rm{par}}(U') \in \mathbb{Z},
\end{align}
as in (\ref{Nng}), (\ref{DTpar:loc}) respectively. By replacing $\mathop{\rm DT}\nolimits^{\rm{par}}_{n, \gamma}$ by $\mathop{\rm DT}\nolimits^{\rm{par}}_{n, \gamma'}(U')$ in the RHS of (\ref{DT:hat}), we can also define the invariant,
\begin{align}\label{DT:hat2}
\widehat{\mathop{\rm DT}\nolimits}^{\rm{par}}_{n, \gamma'}(U') \in \mathbb{Q}.
\end{align}
In principle, as we discussed in Subsection~\ref{subsec:multiple:via}, the same arguments in the proof of~\cite[Corollary~4.18]{Todpara} should show the equivalence between the multiple cover formula of $N_{n, \gamma'}(U')$ and the formula (\ref{log:form}) for $\widehat{\mathop{\rm DT}\nolimits}^{\rm{par}}_{n, \gamma'}(U')$. However there is one technical obstruction to doing this: namely, we need to show that the moduli stack $\mathcal{M}_n(U', \beta')$ is locally written as a critical locus of some holomorphic function. (This is used in the proof of~\cite[Theorem~3.16]{Todpara}.) The moduli stack $\mathcal{M}_n(U, \beta)$ satisfies this condition, due to the fact that $U$ is an open subset of a projective Calabi-Yau 3-fold $X$, and the result by Joyce-Song~\cite[Theorem~5.3]{JS}. Unfortunately we are not able to prove this critical locus condition for $\mathcal{M}_n(U', \beta')$. The required condition is formulated in the following conjecture:
\begin{conj}\label{conj:crit}
Let $X$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, and $C$ a connected nodal curve with $C \subset X$. Let $(C'\subset U') \stackrel{\sigma'}{\to} (C\subset X)$ be a cyclic neighborhood, and for a point $[F'] \in \mathcal{M}_n(U', \beta')$, let $G$ be a maximal reductive subgroup in $\mathop{\rm Aut}\nolimits(F')$. Then there exists a $G$-invariant analytic open neighborhood $V$ of $0$ in $\mathop{\rm Ext}\nolimits^1(F', F')$, a $G$-invariant holomorphic function $f \colon V\to \mathbb{C}$ with $f(0)=df|_{0}=0$, and a smooth morphism of complex analytic stacks,
\begin{align*}
[\{df=0 \}/G] \to \mathcal{M}_n(U', \beta'),
\end{align*}
of relative dimension $\dim \mathop{\rm Aut}\nolimits(F')-\dim G$.
\end{conj}
The above conjecture is a complex analytic version of~\cite[Theorem~5.3]{JS}, and true if $U'$ is an open subset of a projective Calabi-Yau 3-fold by~\cite[Theorem~5.3]{JS}.
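To illustrate Conjecture~\ref{conj:crit}, consider the special case (not used in the arguments below) where $F'$ is $\sigma^{'\ast}\omega$-stable. Then $\mathop{\rm Aut}\nolimits(F')=\mathbb{C}^{\ast}$ consists of the scalar automorphisms, which act trivially on $\mathop{\rm Ext}\nolimits^1(F', F')$ by conjugation, so we may take $G=\mathbb{C}^{\ast}$ and the relative dimension in the conjecture is zero, i.e. the smooth morphism is \'etale. The conjecture then asserts that, locally near $[F']$ in the analytic topology, we have an equivalence
\begin{align*}
\mathcal{M}_n(U', \beta') \simeq \{df=0\} \times B\mathbb{C}^{\ast}, \quad f \colon V \subset \mathop{\rm Ext}\nolimits^1(F', F') \to \mathbb{C},
\end{align*}
so near a stable sheaf the coarse moduli space is a critical locus of a holomorphic function on an open neighborhood of the origin in $\mathop{\rm Ext}\nolimits^1(F', F')$, in analogy with the Kuranishi-map description of sheaf moduli spaces.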
As an analogue of Proposition~\ref{prop:translate}, under the assumption of Conjecture~\ref{conj:crit}, we have the following:
\begin{prop}\label{prop:similar:trans}
Let $C'\subset U'$ be a cyclic neighborhood of $C\subset X$ with a lift $H' \subset U'$ of $H \subset X$. Suppose that $C' \subset U'$ satisfies the condition of Conjecture~\ref{conj:crit}. Then we have the formula
\begin{align}\label{mult:U'}
N_{n, \gamma'}(U')=\sum_{k\ge 1, k|(n, \gamma')} \frac{1}{k^2} N_{1, \gamma'/k}(U'),
\end{align}
if and only if we have the formula,
\begin{align}\label{DThatU'}
\widehat{\mathop{\rm DT}\nolimits}^{\rm{par}}_{n, \gamma'} (U')=\sum_{k\ge 1, k|(n, \gamma')} \frac{(-1)^{\gamma' \cdot H' -1}}{k^2} (\gamma' \cdot H') N_{1, \gamma'/k} (U').
\end{align}
\end{prop}
\begin{proof}
Since we assume Conjecture~\ref{conj:crit}, the same argument as in~\cite[Corollary~4.18]{Todpara} works.
\end{proof}
\section{Counting invariants under cyclic coverings}
This section is the core of this paper. We will compare the invariants (\ref{inv:onU'}), (\ref{DT:hat2}) constructed in the previous section under cyclic coverings. Using this, we will show our main result, which reduces the multiple cover formula of $N_{n, \gamma}$ to that of $N_{n, \gamma'}(U')$ for all cyclic neighborhoods $C'\subset U'$ with $C'$ a tree of $\mathbb{P}^1$.
\subsection{Comparison of moduli spaces of parabolic stable pairs}
\label{subsec:compare:para}
Let $C \subset U \subset X$ be as in the previous section. As in Subsection~\ref{subsec:Coh} and Subsection~\ref{subsec:Cyclic}, let $\sigma \colon \widetilde{U} \to U$ be a cyclic covering of order $m$, and let $\widetilde{H} \subset \widetilde{U}$ be a lift of $H$. Note that $\widetilde{C} \subset \widetilde{U}$ is a cyclic neighborhood of $C \subset X$, so we have the moduli space of parabolic stable pairs on $\widetilde{U}$ by Lemma~\ref{lem:stack}. In this subsection, we compare moduli spaces of parabolic stable pairs under the above covering.
Note that by (\ref{mPic2}) and the argument in Subsection~\ref{subsec:Jact}, we have the $\mathbb{C}^{\ast}$-action on the moduli spaces in (\ref{sub:par}), which restricts to the $\mathbb{Z}/m\mathbb{Z}$-action on these moduli spaces. We have the following lemma: \begin{lem}\label{lem:nat:comp} For $\widetilde{\beta} \in H_2(\widetilde{U}, \mathbb{Z})$ with $\beta=\sigma_{\ast}\widetilde{\beta}$, there is a natural morphism of complex analytic spaces, \begin{align}\label{mor:com} \sigma_{\ast} \colon M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta}) \to M_n^{\rm{par}}(U, \beta)^{\mathbb{Z}/m\mathbb{Z}}. \end{align} \end{lem} \begin{proof} For a point $(\widetilde{F}, \widetilde{s}) \in M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta})$, we construct the pair \begin{align*} (F, s) \cneq \sigma_{\ast}(\widetilde{F}, \widetilde{s}), \end{align*} in the following way: first we set $F \cneq \sigma_{\ast} \widetilde{F}$, which is $\omega$-semistable as in the proof of Lemma~\ref{lem:stack} (i). Then we have \begin{align*} F \otimes \mathcal{O}_{H} &\cong \sigma_{\ast}(\widetilde{F} \otimes \sigma^{\ast}\mathcal{O}_H) \\ &\cong \bigoplus_{g\in \mathbb{Z}/m\mathbb{Z}} \sigma_{\ast}(\widetilde{F} \otimes g_{\ast}\mathcal{O}_{\widetilde{H}}), \end{align*} where the second isomorphism is induced by (\ref{lambdaH}). Then we have the embedding into the direct summand, \begin{align}\label{emb:sum} \sigma_{\ast} \colon \widetilde{F} \otimes \mathcal{O}_{\widetilde{H}} \hookrightarrow F\otimes \mathcal{O}_H, \end{align} and we set $s \cneq \sigma_{\ast}\widetilde{s}$. We would like to see that $(F, s)$ is a parabolic stable pair on $U$. Suppose by contradiction that there is a surjection $\pi \colon F\twoheadrightarrow F'$, where $F'$ is an $\omega$-semistable sheaf with $\mu_{\omega}(F)=\mu_{\omega}(F')$, satisfying \begin{align}\label{piHs} (\pi \otimes \mathcal{O}_H) (s) =0. 
\end{align}
By taking the adjunction, we have the non-zero map,
\begin{align}\label{mor:adj}
\widetilde{F} \to \sigma^{!}F' =\sigma^{\ast}F'.
\end{align}
By~\cite[Lemma~3.2.2]{Hu}, $\sigma^{\ast}F'$ is $\sigma^{\ast}\omega$-semistable with $\mu_{\sigma^{\ast}\omega}(\widetilde{F})= \mu_{\sigma^{\ast}\omega}(\sigma^{\ast}F')$, hence the image of the morphism (\ref{mor:adj}), denoted by $A$, is also $\sigma^{\ast}\omega$-semistable with $\mu_{\sigma^{\ast}\omega}(A) =\mu_{\sigma^{\ast}\omega}(\widetilde{F})$. We have the sequence,
\begin{align*}
\widetilde{F} \otimes \mathcal{O}_{\widetilde{H}} \twoheadrightarrow A \otimes \mathcal{O}_{\widetilde{H}} \hookrightarrow \sigma^{\ast}F' \otimes \mathcal{O}_{\widetilde{H}},
\end{align*}
which takes $\widetilde{s}$ to zero by the construction of $s$ and (\ref{piHs}). Since the right arrow of the above sequence is injective, the surjection $\widetilde{F} \twoheadrightarrow A$ violates the condition of parabolic stability of $(\widetilde{F}, \widetilde{s})$. This is a contradiction, hence $(F, s)$ is a parabolic stable pair.

We check that $(F, s)$ is $\mathbb{Z}/m\mathbb{Z}$-invariant. This is equivalent to the statement that the parabolic stable pair
\begin{align*}
(F\otimes \mathcal{L}, (\textrm{id}_F \otimes \lambda_H)(s)),
\end{align*}
is isomorphic to $(F, s)$, where $\lambda_H$ is given in (\ref{lambda}). Since $F=\sigma_{\ast}\widetilde{F}$, the isomorphism (\ref{wpsi}) and the projection formula induce the isomorphism $\psi_F \cneq \sigma_{\ast}(\textrm{id}_{\widetilde{F}} \otimes \widetilde{\psi})$,
\begin{align}\label{psi_F}
\psi_{F} \colon \sigma_{\ast}\widetilde{F} \to \sigma_{\ast}(\widetilde{F} \otimes \sigma^{\ast}\mathcal{L}) \cong \sigma_{\ast} \widetilde{F} \otimes \mathcal{L}.
\end{align}
We need to check that
\begin{align*}
(\psi_F \otimes \mathcal{O}_H)(s) =(\textrm{id}_F \otimes \lambda_H)(s).
\end{align*}
The above equality follows from the fact that $s$ comes from the LHS of (\ref{emb:sum}), together with Lemma~\ref{lem:i+ii+iii} (iii).

The above argument shows that we have a set theoretic map $\sigma_{\ast}$. It is straightforward to generalize the above arguments to families of parabolic stable pairs. Namely for a complex analytic space $S$, let
\begin{align*}
\mathop{\rm Hom}\nolimits(S, M_n^{\rm{par}}(U, \beta)),
\end{align*}
be the set of morphisms from $S$ to $M_n^{\rm{par}}(U, \beta)$ as complex analytic spaces. Then, since $M_n^{\rm{par}}(U, \beta)$ is a fine moduli space, giving an element in $\mathop{\rm Hom}\nolimits(S, M_n^{\rm{par}}(U, \beta))$ is equivalent to giving a flat family of parabolic stable pairs over $S$. We can easily generalize the construction of $\sigma_{\ast}$ to a functorial map,
\begin{align*}
\mathop{\rm Hom}\nolimits(S, M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta})) \to \mathop{\rm Hom}\nolimits(S, M_n^{\rm{par}}(U, \beta))^{\mathbb{Z}/m\mathbb{Z}},
\end{align*}
which gives the morphism (\ref{mor:com}) of complex analytic spaces.
\end{proof}
We have the following proposition.
\begin{prop}\label{prop:isom:para}
The morphism (\ref{mor:com}) induces the isomorphism of complex analytic spaces,
\begin{align}\label{isom:para}
\sigma_{\ast} \colon \coprod_{\begin{subarray}{c} \widetilde{\beta}\in H_2(\widetilde{U}, \mathbb{Z}), \\ \sigma_{\ast} \widetilde{\beta}=\beta \end{subarray}} M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta}) \stackrel{\cong}{\longrightarrow} M_n^{\rm{par}}(U, \beta)^{\mathbb{Z}/m\mathbb{Z}},
\end{align}
which restricts to the isomorphism,
\begin{align}\label{isom:para2}
\sigma_{\ast} \colon \coprod_{\begin{subarray}{c} \widetilde{\gamma} \in H_2(\widetilde{C}, \mathbb{Z}), \\ \sigma_{\ast} \widetilde{\gamma}=\gamma \end{subarray}} M_n^{\rm{par}}(\widetilde{C}, \widetilde{\gamma}) \stackrel{\cong}{\longrightarrow} M_n^{\rm{par}}(C, \gamma)^{\mathbb{Z}/m\mathbb{Z}}.
\end{align}
\end{prop}
\begin{proof}
It is enough to show the isomorphism (\ref{isom:para}). We first show that the morphism $\sigma_{\ast}$ is injective. Suppose that there are two parabolic stable pairs,
\begin{align*}
(\widetilde{F}_i, \widetilde{s}_i) \in M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta}_i), \ i=1, 2,
\end{align*}
which are sent to the same point by $\sigma_{\ast}$. This implies that there is an isomorphism of sheaves,
\begin{align*}
\phi \colon \sigma_{\ast}\widetilde{F}_1 \stackrel{\cong}{\to} \sigma_{\ast} \widetilde{F}_2,
\end{align*}
such that $\phi\otimes \textrm{id}_{\mathcal{O}_H}$ sends $\sigma_{\ast}\widetilde{s}_1$ to $\sigma_{\ast}\widetilde{s}_2$. Let us check that $\phi$ is a $\sigma_{\ast}\mathcal{O}_{\widetilde{U}}$-module homomorphism. This is equivalent to the commutativity of the following diagram:
\begin{align}\label{diagm:L}
\xymatrix{
\sigma_{\ast}\widetilde{F}_1 \ar[r]^{\phi} \ar[d]_{\psi_{1}} & \sigma_{\ast}\widetilde{F}_2 \ar[d]^{\psi_{2}} \\
\sigma_{\ast}\widetilde{F}_1 \otimes \mathcal{L} \ar[r]^{\phi \otimes \textrm{id}_{\mathcal{L}}} & \sigma_{\ast}\widetilde{F}_2 \otimes \mathcal{L}. }
\end{align}
Here $\psi_{i}$ are induced by the projection formula and the isomorphism (\ref{wpsi}) as in (\ref{psi_F}). We check that
\begin{align}\label{check}
\left\{(\psi_{2} \circ \phi) \otimes \textrm{id}_{\mathcal{O}_H}\right\} (\sigma_{\ast}\widetilde{s}_1) =\left\{\{(\phi \otimes \textrm{id}_{\mathcal{L}}) \circ \psi_{1} \} \otimes \textrm{id}_{\mathcal{O}_H}\right\}(\sigma_{\ast}\widetilde{s}_1),
\end{align}
as elements in $\sigma_{\ast}\widetilde{F}_2 \otimes \mathcal{L} \otimes \mathcal{O}_H$. In fact if (\ref{check}) holds, then $\phi^{-1} \circ \psi_{2}^{-1} \circ (\phi \otimes \textrm{id}_{\mathcal{L}}) \circ \psi_{1}$ is an automorphism of $(\sigma_{\ast}\widetilde{F}_1, \sigma_{\ast}\widetilde{s}_1)$, hence the identity by~\cite[Lemma~2.7]{Todpara}, i.e. the diagram (\ref{diagm:L}) commutes. The equality (\ref{check}) can be checked as follows.
The LHS of (\ref{check}) is
\begin{align*}
&(\psi_{2} \otimes \textrm{id}_{\mathcal{O}_H}) (\phi \otimes \textrm{id}_{\mathcal{O}_H}) (\sigma_{\ast}\widetilde{s}_1) \\
&= (\psi_{2} \otimes \textrm{id}_{\mathcal{O}_H}) (\sigma_{\ast}\widetilde{s}_2) \\
&= (\textrm{id}_{\sigma_{\ast}\widetilde{F}_2} \otimes \lambda_{H})(\sigma_{\ast}\widetilde{s}_2),
\end{align*}
where we have used Lemma~\ref{lem:i+ii+iii} (iii) for the second equality. Similarly the RHS of (\ref{check}) is
\begin{align*}
&(\phi \otimes \textrm{id}_{\mathcal{L} \otimes \mathcal{O}_H})(\psi_{1} \otimes \textrm{id}_{\mathcal{O}_H}) (\sigma_{\ast}\widetilde{s}_1) \\
&=(\phi \otimes \textrm{id}_{\mathcal{L} \otimes \mathcal{O}_H})(\textrm{id}_{\sigma_{\ast}\widetilde{F}_1} \otimes \lambda_H)(\sigma_{\ast}\widetilde{s}_1) \\
&=(\textrm{id}_{\sigma_{\ast}\widetilde{F}_2} \otimes \lambda_H)(\phi \otimes \textrm{id}_{\mathcal{O}_H})(\sigma_{\ast}\widetilde{s}_1) \\
&=(\textrm{id}_{\sigma_{\ast}\widetilde{F}_2} \otimes \lambda_{H})(\sigma_{\ast}\widetilde{s}_2).
\end{align*}
Therefore the equality (\ref{check}) holds.

Now since the diagram (\ref{diagm:L}) commutes, the isomorphism $\phi$ lifts to an isomorphism $\widetilde{\phi}$ between $\widetilde{F}_1$ and $\widetilde{F}_2$, such that $\widetilde{\phi}\otimes \textrm{id}_{\mathcal{O}_{\widetilde{H}}}$ takes $\widetilde{s}_1$ to $\widetilde{s}_2$. Hence $(\widetilde{F}_1, \widetilde{s}_1)$ and $(\widetilde{F}_2, \widetilde{s}_2)$ are isomorphic, and $\sigma_{\ast}$ is injective.

Next we prove that $\sigma_{\ast}$ is surjective. Let us take a point $(F, s) \in M_n^{\rm{par}}(U, \beta)$, which is $\mathbb{Z}/m\mathbb{Z}$-invariant. This means that there is an isomorphism of sheaves,
\begin{align*}
\phi_F \colon F \to F \otimes \mathcal{L},
\end{align*}
which satisfies
\begin{align}\label{satisfy:phi}
(\phi_F \otimes \textrm{id}_{\mathcal{O}_H})(s) =(\textrm{id}_F \otimes \lambda_H)(s).
\end{align}
We show that $\phi_F$ satisfies the condition (\ref{cond:cyc}).
We consider the morphism,
\begin{align}\label{consider}
(\textrm{id}_F \otimes \psi)^{-1} \circ \overbrace{\phi_F \circ \cdots \circ \phi_F}^{m} \colon F \to F.
\end{align}
After applying $\otimes \mathcal{O}_H$, the above morphism takes $s$ to $s$ since (\ref{satisfy:phi}) holds and $\phi_{\mathcal{O}_H} =\lambda_H$ satisfies the condition (\ref{cond:cyc}). Therefore the morphism (\ref{consider}) is the identity by~\cite[Lemma~2.7]{Todpara}, which implies that $\phi_F$ satisfies (\ref{cond:cyc}).

Since $\phi_F$ satisfies (\ref{cond:cyc}), the sheaf $F$ is written as $\sigma_{\ast} \widetilde{F}$ for some $\widetilde{F} \in \mathop{\rm Coh}\nolimits(\widetilde{U})$, such that the morphism $\phi_F$ is identified with
\begin{align}\label{psi_F2}
\psi_{F} \colon \sigma_{\ast}\widetilde{F} \to \sigma_{\ast}(\widetilde{F} \otimes \sigma^{\ast}\mathcal{L}) \cong \sigma_{\ast} \widetilde{F} \otimes \mathcal{L},
\end{align}
which is a composition of (\ref{wpsi}) and the projection formula as in (\ref{psi_F}). To show that $\sigma_{\ast}$ is surjective, it is enough to check that $s=\sigma_{\ast}\widetilde{s}$ for some $\widetilde{s} \in \widetilde{F} \otimes \mathcal{O}_{\widetilde{H}}$. This follows from (\ref{satisfy:phi}), the fact that $\phi_F$ is identified with (\ref{psi_F2}) and Lemma~\ref{lem:i+ii+iii} (iii).

Now we have proved that $\sigma_{\ast}$ is a set theoretic bijection. Similarly to Lemma~\ref{lem:nat:comp}, the above arguments can be easily generalized to families of parabolic stable pairs. Namely in the notation of the proof of Lemma~\ref{lem:nat:comp}, we have the bijection,
\begin{align*}
\sigma_{\ast} \colon \coprod_{\sigma_{\ast} \widetilde{\beta}=\beta} \mathop{\rm Hom}\nolimits(S, M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta})) \stackrel{\cong}{\to} \mathop{\rm Hom}\nolimits(S, M_n^{\rm{par}}(U, \beta))^{\mathbb{Z}/m\mathbb{Z}}.
\end{align*}
Therefore the morphism $\sigma_{\ast}$ is an isomorphism as complex analytic spaces.
\end{proof}
\subsection{Comparison of moduli spaces of stable sheaves}
Similarly to the previous subsection, we can also compare moduli spaces of one dimensional stable sheaves under the covering $\sigma \colon \widetilde{U} \to U$. Note that the moduli space $M_1(U, \beta)$ consists of one dimensional $\omega$-stable sheaves on $U$, and it admits a $\mathop{\rm Pic}\nolimits^0(U)$-action. In particular this restricts to a $\mathbb{C}^{\ast}$-action with respect to the embedding (\ref{ctorus}), and we have the $\mathbb{Z}/m\mathbb{Z}$-action by the embedding (\ref{mPic2}). Let
\begin{align}\label{M1:sub}
M_1(\widetilde{C}, \widetilde{\gamma}) \subset M_1(\widetilde{U}, \widetilde{\beta}),
\end{align}
be the coarse moduli spaces of the analytic stacks (\ref{sub:anay:stack}) for $n=1$, $U'=\widetilde{U}$, $\gamma'=\widetilde{\gamma}$ and $\beta'=\widetilde{\beta}$. Similarly to $M_1(U, \beta)$, points in the analytic spaces (\ref{M1:sub}) correspond to $\sigma^{\ast} \omega$-stable sheaves. We have the following proposition.
\begin{prop}\label{prop:sigma:stable}
We have the morphisms of complex analytic spaces,
\begin{align}\label{isom:para:st}
\sigma_{\ast} \colon \coprod_{\begin{subarray}{c} \widetilde{\beta}\in H_2(\widetilde{U}, \mathbb{Z}), \\ \sigma_{\ast} \widetilde{\beta}=\beta \end{subarray}} M_1^{}(\widetilde{U}, \widetilde{\beta}) \longrightarrow M_1^{}(U, \beta)^{\mathbb{Z}/m\mathbb{Z}}, \\
\label{isom:para:st2}
\sigma_{\ast} \colon \coprod_{\begin{subarray}{c} \widetilde{\gamma} \in H_2(\widetilde{C}, \mathbb{Z}), \\ \sigma_{\ast} \widetilde{\gamma}=\gamma \end{subarray}} M_1^{}(\widetilde{C}, \widetilde{\gamma}) \longrightarrow M_1^{}(C, \gamma)^{\mathbb{Z}/m\mathbb{Z}}.
\end{align}
If $m\gg 0$, then the above morphisms are covering maps of covering degree $m$.
\end{prop}
\begin{proof}
It is enough to show the claim for (\ref{isom:para:st}). A proof similar to Lemma~\ref{lem:nat:comp} shows the existence of the morphism (\ref{isom:para:st}).
In order to show that (\ref{isom:para:st}) is a covering map, it is enough to show that there is a free $\mathbb{Z}/m\mathbb{Z}$-action on the LHS of (\ref{isom:para:st}) whose quotient space is isomorphic to the RHS of (\ref{isom:para:st}). Note that $\sigma \colon \widetilde{U} \to U$ is a covering map whose covering transformation group is $\mathbb{Z}/m\mathbb{Z}$. Hence $\mathbb{Z}/m\mathbb{Z}$ acts on the LHS of (\ref{isom:para:st}) by $F \mapsto g_{\ast} F$ for $g\in \mathbb{Z}/m\mathbb{Z}$. Note that the support of $F$ is connected, and if $m\gg 0$ and $g\neq 0$, then the support of $F$ and that of $g_{\ast}F$ are different. Hence $F$ and $g_{\ast}F$ are not isomorphic for $g\neq 0$, which implies that the $\mathbb{Z}/m\mathbb{Z}$-action on the LHS of (\ref{isom:para:st}) is free.

For two $\sigma^{\ast}\omega$-stable sheaves $\widetilde{F}_i$, $i=1, 2,$ corresponding to points in the LHS of (\ref{isom:para:st}), suppose that $\sigma_{\ast}\widetilde{F}_1$ and $\sigma_{\ast}\widetilde{F}_2$ are isomorphic. By adjunction, there is a non-trivial morphism,
\begin{align*}
\sigma^{\ast}\sigma_{\ast}\widetilde{F}_2 \cong \bigoplus_{g\in \mathbb{Z}/m\mathbb{Z}} g_{\ast}\widetilde{F}_2 \to \widetilde{F}_1.
\end{align*}
Hence there are $g\in \mathbb{Z}/m\mathbb{Z}$ and a non-trivial morphism $g_{\ast} \widetilde{F}_2 \to \widetilde{F}_1$. Since both $\widetilde{F}_1$ and $g_{\ast}\widetilde{F}_2$ are $\sigma^{\ast}\omega$-stable with $\mu_{\sigma^{\ast}\omega}(\widetilde{F}_1) =\mu_{\sigma^{\ast}\omega}(g_{\ast}\widetilde{F}_2)$, we have $g_{\ast} \widetilde{F}_2 \cong \widetilde{F}_1$.

Next we check that (\ref{isom:para:st}) is surjective. For an $\omega$-stable sheaf $F \in M_1(U, \beta)$, suppose that $F$ is $\mathbb{Z}/m\mathbb{Z}$-invariant. This implies that there is an isomorphism of sheaves,
\begin{align*}
\phi_F \colon F \to F\otimes \mathcal{L}.
\end{align*}
The morphism $\phi_F$ may not satisfy the condition (\ref{cond:cyc}).
However since $F$ is $\omega$-stable, we have $\mathop{\rm Aut}\nolimits(F)=\mathbb{C}^{\ast}$, so by replacing $\phi_F$ by a non-zero multiple, we can assume that $\phi_F$ satisfies (\ref{cond:cyc}). Hence $F$ is isomorphic to $\sigma_{\ast}\widetilde{F}$ for some sheaf $\widetilde{F} \in \mathop{\rm Coh}\nolimits(\widetilde{U})$. The sheaf $\widetilde{F}$ must be $\sigma^{\ast}\omega$-stable since $\sigma_{\ast} \colon \mathop{\rm Coh}\nolimits(\widetilde{U}) \to \mathop{\rm Coh}\nolimits(U)$ is an exact functor. This shows that (\ref{isom:para:st}) is surjective. The above argument shows that $\sigma_{\ast}$ induces a bijection between the quotient space of the LHS of (\ref{isom:para:st}) by the $\mathbb{Z}/m\mathbb{Z}$-action and the RHS of (\ref{isom:para:st}). Similarly to Lemma~\ref{lem:nat:comp}, Proposition~\ref{prop:isom:para}, the above bijection is an isomorphism between complex analytic spaces. Namely for a complex analytic space $S$, it is straightforward to generalize the above argument to the bijection, \begin{align*} \sigma_{\ast} \colon \left( \coprod_{\sigma_{\ast} \widetilde{\beta}=\beta} \mathop{\rm Hom}\nolimits(S, M_1(\widetilde{U}, \widetilde{\beta})) \right)/(\mathbb{Z}/m\mathbb{Z}) \stackrel{\cong}{\to} \mathop{\rm Hom}\nolimits(S, M_1(U, \beta))^{\mathbb{Z}/m\mathbb{Z}}. \end{align*} Therefore we obtain the desired assertion. \end{proof} \subsection{The formula for $\widehat{\mathop{\rm DT}\nolimits}_{n, \gamma}^{\rm{par}}$ under the cyclic covering} In this subsection, we investigate the formula (\ref{log:form}) under the cyclic covering $\sigma \colon \widetilde{U} \to U$. 
In the notation of previous subsections, we denote by \begin{align*} \nu_{\widetilde{M}^{\rm{par}}}, \ \nu_{M^{\rm{par}}}, \ \nu_{\widetilde{M}}, \ \nu_{M}, \end{align*} the Behrend functions on the spaces, \begin{align*} M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta}), \ M_n^{\rm{par}}(U, \beta), \ M_1(\widetilde{U}, \widetilde{\beta}), \ M_1(U, \beta), \end{align*} respectively. We need the following compatibility of the above Behrend functions on the morphisms discussed in Proposition~\ref{prop:isom:para} and Proposition~\ref{prop:sigma:stable}. The proof is postponed until Section~\ref{subsec:Behrend}. \begin{lem}\label{lem:identity} If $m$ is a sufficiently big odd number, we have the following: (i) Under the morphism (\ref{isom:para}), we have the identity, \begin{align*} (\sigma_{\ast})^{\ast}\nu_{M^{\rm{par}}}|_{M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta})}=(-1)^{\beta \cdot H -\widetilde{\beta} \cdot \widetilde{H}} \nu_{\widetilde{M}^{\rm{par}}}. \end{align*} (ii) Under the morphism (\ref{isom:para:st}), we have the identity, \begin{align*} (\sigma_{\ast})^{\ast} \nu_{M}=\nu_{\widetilde{M}}. \end{align*} \end{lem} \begin{proof} The proof will be given in Subsection~\ref{subsec:proof}. \end{proof} As a corollary of Proposition~\ref{prop:sigma:stable}, Proposition~\ref{prop:isom:para} and Lemma~\ref{lem:identity}, we have the following: \begin{cor}\label{cor:formula} For $\gamma \in H_2(C, \mathbb{Z})$ and $n\in \mathbb{Z}$, we take a sufficiently big odd number $m$ and an $m$-fold cover $\sigma \colon \widetilde{U} \to U$ as in (\ref{etale:C}). 
We have the formulas, \begin{align}\label{formula1} \mathop{\rm DT}\nolimits_{n, \gamma}^{\rm{par}} &=\sum_{\sigma_{\ast}\widetilde{\gamma}=\gamma} (-1)^{\gamma \cdot H -\widetilde{\gamma} \cdot \widetilde{H}} \mathop{\rm DT}\nolimits_{n, \widetilde{\gamma}}^{\rm{par}}(\widetilde{U}), \\ \label{formula2} N_{1, \gamma} &= \frac{1}{m} \sum_{\sigma_{\ast}\widetilde{\gamma}=\gamma} N_{1, \widetilde{\gamma}}(\widetilde{U}). \end{align} \end{cor} \begin{proof} Let us consider $\mathbb{C}^{\ast}$-actions on $M_n^{\rm{par}}(C, \gamma)$, $M_1(C, \gamma)$, determined by the embeddings (\ref{lift}), (\ref{ctorus}) respectively. Then for $m\gg 0$, we have \begin{align}\label{Cast:loc} M_n^{\rm{par}}(C, \gamma)^{\mathbb{C}^{\ast}} &=M_n^{\rm{par}}(C, \gamma)^{\mathbb{Z}/m\mathbb{Z}}, \\ \label{Cast:loc2} M_1(C, \gamma)^{\mathbb{C}^{\ast}} &=M_1(C, \gamma)^{\mathbb{Z}/m\mathbb{Z}}. \end{align} Therefore the formulas (\ref{formula1}), (\ref{formula2}) follow from Proposition~\ref{prop:isom:para}, Proposition~\ref{prop:sigma:stable}, Lemma~\ref{lem:identity}, (\ref{Cast:loc}), (\ref{Cast:loc2}) and the $\mathbb{C}^{\ast}$-localizations. \end{proof} Furthermore we have the following proposition. \begin{prop}\label{prop:reduce:cov} In the situation of Corollary~\ref{cor:formula}, suppose that the following formula holds on $\widetilde{U}$, \begin{align}\label{assum:form} \widehat{\mathop{\rm DT}\nolimits}_{n, \widetilde{\gamma}}^{\rm{par}}(\widetilde{U}) =\sum_{k\ge 1, k|(n, \widetilde{\gamma})} \frac{(-1)^{\widetilde{\gamma} \cdot \widetilde{H}-1}}{k^2} (\widetilde{\gamma} \cdot \widetilde{H})N_{1, \widetilde{\gamma}/k}(\widetilde{U}), \end{align} for any $\widetilde{\gamma} \in H_2(\widetilde{C}, \mathbb{Z})$ with $\sigma_{\ast} \widetilde{\gamma}=\gamma$. Then the formula (\ref{log:form}) holds. 
\end{prop} \begin{proof} By Corollary~\ref{cor:formula}, the LHS of (\ref{log:form}) is \begin{align}\label{LHS1} &\sum_{l\ge 1}\frac{(-1)^{l-1}}{l} \sum_{\begin{subarray}{c} \gamma_1 + \cdots +\gamma_l=\gamma, \\ n_1 + \cdots +n_l=n, \\ n_i/\omega \cdot \gamma_i = n/\omega \cdot \gamma \end{subarray}} \prod_{i=1}^{l} \left( \sum_{\sigma_{\ast} \widetilde{\gamma}_i=\gamma_i} (-1)^{\gamma_i \cdot H - \widetilde{\gamma}_i \cdot \widetilde{H}} \mathop{\rm DT}\nolimits_{n_i, \widetilde{\gamma}_i}^{\rm{par}}(\widetilde{U}) \right) \\ \notag &=\sum_{\sigma_{\ast} \widetilde{\gamma}=\gamma} (-1)^{\gamma \cdot H - \widetilde{\gamma} \cdot \widetilde{H}} \sum_{l\ge 1} \frac{(-1)^{l-1}}{l} \sum_{\begin{subarray}{c} \widetilde{\gamma}_1 + \cdots + \widetilde{\gamma}_l=\widetilde{\gamma}, \\ n_1 + \cdots +n_l=n, \\ n_i/\sigma^{\ast}\omega \cdot \widetilde{\gamma}_i = n/\sigma^{\ast}\omega \cdot \widetilde{\gamma} \end{subarray}} \prod_{i=1}^{l} \mathop{\rm DT}\nolimits_{n_i, \widetilde{\gamma}_i}^{\rm{par}}(\widetilde{U}) \\ \label{LHS2} &= \sum_{\sigma_{\ast}\widetilde{\gamma}=\gamma} \sum_{k\ge 1, k|(n, \widetilde{\gamma})} \frac{(-1)^{\gamma \cdot H-1}}{k^2} (\widetilde{\gamma} \cdot \widetilde{H})N_{1, \widetilde{\gamma}/k}(\widetilde{U}) \\ &\notag =\sum_{k\ge 1, k|(n, \gamma)} \frac{(-1)^{\gamma \cdot H -1}}{k^2} \sum_{\sigma_{\ast} \widetilde{\gamma}=\gamma, \ k|\widetilde{\gamma}} (\widetilde{\gamma} \cdot \widetilde{H}) N_{1, \widetilde{\gamma}/k}(\widetilde{U}) \\ \notag &= \sum_{k\ge 1, k|(n, \gamma)} \frac{(-1)^{\gamma \cdot H -1}}{k^2} \cdot \frac{1}{m} \sum_{\begin{subarray}{c} \sigma_{\ast}\widetilde{\gamma}=\gamma, \ k|\widetilde{\gamma} \\ g\in \mathbb{Z}/m\mathbb{Z} \end{subarray}} (g_{\ast}\widetilde{\gamma} \cdot \widetilde{H}) N_{1, g_{\ast}\widetilde{\gamma}/k}(\widetilde{U}) \\ \notag &=\sum_{k\ge 1, k|(n, \gamma)} \frac{(-1)^{\gamma \cdot H -1}}{k^2} \cdot \frac{1}{m} \sum_{\sigma_{\ast}\widetilde{\gamma}=\gamma, \ k|\widetilde{\gamma}} (\gamma \cdot 
H)N_{1, \widetilde{\gamma}/k}(\widetilde{U}) \\ \label{LHS3} &= \sum_{k\ge 1, k|(n, \gamma)} \frac{(-1)^{\gamma \cdot H -1}}{k^2} (\gamma \cdot H) N_{1, \gamma/k}. \end{align} Here we have used (\ref{formula1}), (\ref{assum:form}), (\ref{formula2}) in (\ref{LHS1}), (\ref{LHS2}), (\ref{LHS3}) respectively. Therefore the formula (\ref{log:form}) holds. \end{proof} \subsection{Reduction to trees of $\mathbb{P}^1$} Now we show our main result. \begin{thm}\label{thm:main:cov} Let $X$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, $C \subset X$ a reduced rational curve with at worst nodal singularities, and take $\gamma \in H_2(C, \mathbb{Z})$. Suppose that for any cyclic neighborhood $(C'\subset U') \stackrel{\sigma'}{\to} (C\subset X)$ with $C'$ a tree of $\mathbb{P}^1$, the following conditions hold: \begin{itemize} \item The cyclic neighborhood $C' \subset U'$ satisfies the condition of Conjecture~\ref{conj:crit}. \item For any $\gamma' \in H_2(C', \mathbb{Z})$ with $\sigma_{\ast}'\gamma'=\gamma$, the invariant $N_{n, \gamma'}(U')$ satisfies the formula (\ref{mult:U'}). \end{itemize} Then $N_{n, \gamma}$ satisfies the formula (\ref{form:mult:loc}). \end{thm} \begin{proof} An element $\gamma \in H_2(C, \mathbb{Z})_{>0}$ can be written as \begin{align*} \gamma=\sum_{i=1}^{N} a_i [C_i], \end{align*} for $a_i \in \mathbb{Z}_{\ge 0}$ where $C_1, \cdots, C_N$ are irreducible components of $C$. The support of $\gamma$, denoted by $C_{\gamma}$, is defined to be the reduced curve, \begin{align*} C_{\gamma} \cneq \bigcup_{a_i>0} C_i \subset C. \end{align*} We also set $d(\gamma)$ and $l(\gamma)$ to be \begin{align*} d(\gamma) \cneq \sum_{i=1}^{N} a_i, \quad l(\gamma) \cneq \sharp\{ 1\le i\le N : a_i>0\}. \end{align*} Note that we have $l(\gamma) \le d(\gamma)$. We note that, if $\widehat{\mathop{\rm DT}\nolimits}_{n, \gamma}^{\rm{par}}$ or $N_{1, \gamma}$ is non-zero, then $C_{\gamma}$ is a connected curve. 
In fact, by~\cite[Equation (95)]{Todpara}, the invariant $\widehat{\mathop{\rm DT}\nolimits}_{n, \gamma}^{\rm{par}}$ is a multiple of $N_{n, \gamma}$. Moreover, if $N_{n, \gamma}$ is non-zero, then the argument of~\cite[Lemma~11.6]{TodK3} shows that $C_{\gamma}$ is connected. Hence we may assume that $C$ is connected and $C_{\gamma}=C$. Note that $l(\gamma)=N$ in this case. If $g(C)=0$, then the result follows from the assumption. Suppose that $g(C)>0$, and take a sufficiently small analytic neighborhood $C \subset U$ in $X$. Then we can take an $m$-fold cover \begin{align*} \sigma \colon \widetilde{U} \to U, \quad \widetilde{C}=\sigma^{-1}(C), \end{align*} as in Subsection~\ref{subsec:cyclic}, for a sufficiently big odd number $m$. We take $\widetilde{\gamma} \in H_2(\widetilde{C}, \mathbb{Z})$ such that $\sigma_{\ast}\widetilde{\gamma}=\gamma$, $C_{\widetilde{\gamma}}$ is connected and intersects $\widetilde{H}$. Then one of the following conditions holds: \begin{align} \label{oneof1} &l(\widetilde{\gamma})>l(\gamma)=N, \mbox{ or } \\ \label{oneof2} &l(\widetilde{\gamma})= l(\gamma)=N, \ g(C_{\widetilde{\gamma}})<g(C_{\gamma})=g(C). \end{align} In fact, since $C_{\widetilde{\gamma}} \to C$ is surjective, we have $l(\widetilde{\gamma}) \ge l(\gamma)$. Suppose that $l(\widetilde{\gamma})=l(\gamma)$. Then for each irreducible component $C_j \subset C$, the preimage $\sigma^{-1}(C_j) \cap C_{\widetilde{\gamma}}$ is also irreducible. Because $C_{\widetilde{\gamma}}$ is connected, this easily implies that $C_{\widetilde{\gamma}}$ is written as \begin{align*} C_{\widetilde{\gamma}}=A \cup \tau\overline{(C_{x, i} \setminus A)}, \end{align*} where $A \subset C_{x, i}$ is a connected subcurve for some $i\in \mathbb{Z}/m\mathbb{Z}$ in the notation of (\ref{Cxi}) and $\tau=1 \in \mathbb{Z}/m\mathbb{Z}$. The curves $A \subset C_{x, i}$ and $\tau\overline{(C_{x, i} \setminus A)} \subset C_{x, i+1}$ are connected at the node $x_{1, i}=x_{2, i+1}$. 
Since $g(C_{x, i})=g(C)-1$, it follows that \begin{align*} g(C_{\widetilde{\gamma}}) &=g(C_{x, i}) -\sharp(A \cap \overline{(C_{x, i} \setminus A)}) +1 \\ &=g(C)- \sharp(A \cap \overline{(C_{x, i} \setminus A)}) \\ &<g(C). \end{align*} Therefore one of (\ref{oneof1}) or (\ref{oneof2}) holds. Now we replace $\widetilde{U}$ by a small analytic neighborhood of $C_{\widetilde{\gamma}}$, say $U_{(1)}$, and set \begin{align*} \gamma_{(1)} =\widetilde{\gamma}, \ C_{(1)}=C_{\widetilde{\gamma}}, \ H_{(1)}=\widetilde{H} \cap U_{(1)}. \end{align*} Repeating the same procedure, we obtain the sequence of local immersions, \begin{align}\label{seq:U} \cdots \to U_{(i)} \stackrel{\sigma_{(i)}}{\to} U_{(i-1)} \to \cdots \stackrel{\sigma_{(2)}}{\to} U_{(1)} \stackrel{\sigma}{\to} U_{(0)}=U, \end{align} and data, \begin{align*} C_{(i)} \subset U_{(i)}, \ \gamma_{(i)} \in H_2(C_{(i)}, \mathbb{Z}), \ H_{(i)} \subset U_{(i)}, \end{align*} where $C_{(i)}$ is a connected nodal curve, $\gamma_{(i)}$ satisfies $C_{\gamma_{(i)}}=C_{(i)}$ and $H_{(i)}$ is a lift of $H$. Note that for each $i$, $C_{(i)} \subset U_{(i)}$ is a cyclic neighborhood of $C\subset X$. Similarly to the above, one of the following conditions holds: \begin{align*} &l(\gamma_{(i)})<l(\gamma_{(i+1)}), \ \mbox{ or } \\ &l(\gamma_{(i)})=l(\gamma_{(i+1)}), \ g(C_{(i+1)})<g(C_{(i)}). \end{align*} Because we have \begin{align*} l(\gamma_{(i)}) \le d(\gamma_{(i)})=d(\gamma), \end{align*} the sequence (\ref{seq:U}) terminates at some $i$, say $i=R$. Then we have $g(C_{(R)})=0$, i.e. $C_{(R)}$ is a tree of $\mathbb{P}^1$. Below we say that the invariant $\widehat{\mathop{\rm DT}\nolimits}_{n, \gamma_{(i)}}^{\rm{par}}(U_{(i)})$ satisfies (\ref{DThatU'}) if the formula (\ref{DThatU'}) holds for $U'=U_{(i)}$, $C'=C_{(i)}$ and $\gamma'=\gamma_{(i)}$. 
By the assumption and Proposition~\ref{prop:similar:trans}, the invariant $\widehat{\mathop{\rm DT}\nolimits}_{n, \gamma_{(i)}}^{\rm{par}}(U_{(i)})$ satisfies (\ref{DThatU'}) when $i=R$. Also it is straightforward to see that the argument of Proposition~\ref{prop:reduce:cov} for $C \subset U$ can be applied to $C_{(i)} \subset U_{(i)}$. Therefore $\widehat{\mathop{\rm DT}\nolimits}_{n, \gamma_{(i)}}^{\rm{par}}(U_{(i)})$ satisfies (\ref{DThatU'}) if the same formula holds on any cyclic covering $\widetilde{U}_{(i)} \to U_{(i)}$. By induction, it follows that the invariant $\widehat{\mathop{\rm DT}\nolimits}_{n, \gamma_{(i)}}^{\rm{par}}(U_{(i)})$ satisfies (\ref{DThatU'}) for all $i$. Hence (\ref{log:form}) holds, and the formula (\ref{form:mult:loc}) holds as well. \end{proof} \subsection{Euler characteristic version}\label{subsec:Euler} For $n\in \mathbb{Z}$ and $\beta \in H_2(X, \mathbb{Z})$, there is also the Euler characteristic version of the invariant $N_{n, \beta}$, as discussed in~\cite{Tcurve1}, \cite{Tolim2}, \cite{TodK3}. Namely, in the definition of $N_{n, \beta}$, we replace the Behrend function $\nu$ by the identity function. The resulting invariant is denoted by \begin{align}\label{Nnb:chi} N_{n, \beta}^{\chi} \in \mathbb{Q}. \end{align} If $\gamma$ is a one cycle on $X$, the Euler characteristic version of the local invariant $N_{n, \gamma}$ can be similarly defined, \begin{align*} N_{n, \gamma}^{\chi} \in \mathbb{Q}. \end{align*} The argument of Theorem~\ref{thm:main:cov} can also be applied to the invariant $N_{n, \gamma}^{\chi}$, which is easier since we do not have to take care of the Behrend functions at all. Also in this case, one may expect the formula, \begin{align}\label{mult:chi} N_{n, \gamma}^{\chi} =\sum_{k\ge 1, k|(n, \gamma)} \frac{1}{k^2} N_{1, \gamma/k}^{\chi}. 
\end{align} Unfortunately, the above formula is known to be false, as the following example indicates: \begin{exam}\label{exam:counter} Let $C\subset X$ be a smooth rational curve whose normal bundle is $\mathcal{O}(-1) \oplus \mathcal{O}(-1)$. Then the same computation as that of $N_{0, m[C]}$ in~\cite[Example~4.14]{Todpara} shows that \begin{align*} N_{0, m[C]}^{\chi}=\frac{(-1)^{m-1}}{m^2}. \end{align*} On the other hand, $N_{1, m[C]}^{\chi}=1$ if $m=1$ and $0$ if $m\ge 2$. Hence (\ref{mult:chi}) does not hold. \end{exam} Although the formula (\ref{mult:chi}) is not true in general, there are some situations in which the formula (\ref{mult:chi}) should hold, as discussed in~\cite{TodK3}. Similarly to Theorem~\ref{thm:main:cov}, such a case can be reduced to the cases of trees of $\mathbb{P}^1$ on cyclic neighborhoods of $C\subset X$. Note that, for a cyclic neighborhood $C' \subset U'$ of $C\subset X$ and $\gamma' \in H_2(C', \mathbb{Z})$, we can also define the Euler characteristic invariant, \begin{align*} N_{n, \gamma'}^{\chi}(U') \in \mathbb{Q}, \end{align*} by replacing the Behrend function by the identity function in the definition of $N_{n, \gamma'}(U')$. We have the following theorem: \begin{thm}\label{thm:cov:Eu} Let $X$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, $C \subset X$ a reduced rational curve with at worst nodal singularities, and take $\gamma \in H_2(C, \mathbb{Z})$. Suppose that for any cyclic neighborhood $(C'\subset U') \stackrel{\sigma'}{\to} (C\subset X)$ with $C'$ a tree of $\mathbb{P}^1$, and an element $\gamma' \in H_2(C', \mathbb{Z})$ with $\sigma_{\ast}'\gamma'=\gamma$, the following equality holds: \begin{align}\label{mult:chi2} N_{n, \gamma'}^{\chi}(U') =\sum_{k\ge 1, k|(n, \gamma')} \frac{1}{k^2} N_{1, \gamma'/k}^{\chi}(U'). \end{align} Then $N_{n, \gamma}^{\chi}$ satisfies the formula (\ref{mult:chi}). \end{thm} \begin{proof} The proof of Theorem~\ref{thm:main:cov} carries over. 
The only difference is that we do not have to take care of the Behrend functions in deducing the equivalence between the formula (\ref{mult:chi2}) and the formula for parabolic stable pair invariants. Namely, for any cyclic neighborhood $C' \subset U'$ of $C\subset X$ with a lift $H' \subset U'$ of $H\subset X$, and $\gamma'\in H_2(C', \mathbb{Z})$, we can define the Euler characteristic versions of parabolic stable pair invariants, \begin{align*} \mathop{\rm DT}\nolimits_{n, \gamma'}^{\rm{par}, \chi}(U') \in \mathbb{Z}, \quad \widehat{\mathop{\rm DT}\nolimits}_{n, \gamma'}^{\rm{par}, \chi}(U') \in \mathbb{Q}, \end{align*} by replacing the Behrend function by the identity function in the definitions of (\ref{inv:onU'}), (\ref{DT:hat2}) respectively. The same proof of~\cite[Corollary~4.18]{Todpara} shows that the formula (\ref{mult:chi2}) is equivalent to the following formula (without assuming Conjecture~\ref{conj:crit}): \begin{align*} \widehat{\mathop{\rm DT}\nolimits}_{n, \gamma'}^{\rm{par}, \chi}(U') =(\gamma' \cdot H') \sum_{k\ge 1, k|(n, \gamma')} \frac{1}{k^2} N_{1, \gamma'/k}^{\chi}(U'). \end{align*} Then we can apply the same induction argument as in the proof of Theorem~\ref{thm:main:cov}, and conclude the assertion. \end{proof} \section{Applications}\label{sec:apply} In this section, we apply Theorem~\ref{thm:main:cov} to prove the multiple cover formula in some situations. \subsection{$0$-super rigid and surface type neighborhoods} Let $C \subset U\subset X$ be as in the previous sections. We study the local multiple cover formula in the following situations: $C \subset U$ is $0$-super rigid or $C \subset U$ is of surface type. These concepts are defined as follows: \begin{defi}\label{def:rigid:surface} (i) We say $C \subset U$ is $0$-super rigid if for any projective curve $C'$ of arithmetic genus $0$ and a local immersion $f \colon C' \to C$, we have \begin{align*} H^0(C', f^{\ast}N_{C/U})=0. 
\end{align*} (ii) We say $C \subset U$ is of surface type if there is a complex surface $U_0$ and an analytic neighborhood $\Delta$ of $0 \in \mathbb{C}$ such that \begin{align*} U \cong U_0 \times \Delta, \end{align*} and $C$ is contained in $U_0 \times \{0\}$. \end{defi} The $0$-super rigidity is a genericity condition for the pair $(C \subset U)$, and a concept adopted from~\cite{BPrig}. The following proposition shows that there are some situations in which the assumptions in Theorem~\ref{thm:main:cov} are satisfied. \begin{prop}\label{prop:compN} Let $C' \subset U'$ be a cyclic neighborhood of $C\subset U$, with $C'$ a tree of $\mathbb{P}^1$. (i) Suppose that $C \subset U$ is $0$-super rigid and $C'$ is a chain of $\mathbb{P}^1$, say $C_1', \cdots, C_N'$. Then $C' \subset U'$ satisfies the condition of Conjecture~\ref{conj:crit}. Moreover, for any $n\in \mathbb{Z}$ and $a_1', \cdots, a_N' \in \mathbb{Z}_{\ge 1}$, we have \begin{align}\label{N:rigid} N_{n, a_1'[C_1']+ \cdots +a_N'[C_N']}(U') =\left\{\begin{array}{cc} 1/k^2, & a_1'= \cdots =a_N'=k, \ k|n, \\ 0, & \mbox{ otherwise. } \end{array} \right. \end{align} In particular, the invariant $N_{n, \gamma'}(U')$ satisfies the formula (\ref{mult:U'}). (ii) Suppose that $C \subset U$ is of surface type and the dual graph of $C'$ is of ADE type. Then $C' \subset U'$ satisfies the condition of Conjecture~\ref{conj:crit}. If $C'$ is a chain of $\mathbb{P}^1$, say $C_1', \cdots, C_N'$, then for any $n\in \mathbb{Z}$ and $a_1', \cdots, a_N' \in \mathbb{Z}_{\ge 1}$, we have \begin{align}\label{N:sur} N_{n, a_1'[C_1']+ \cdots +a_N'[C_N']}(U') =\left\{\begin{array}{cc} -1/k^2, & a_1'= \cdots =a_N'=k, \ k|n, \\ 0, & \mbox{ otherwise. } \end{array} \right. \end{align} In particular, the invariant $N_{n, \gamma'}(U')$ satisfies the formula (\ref{mult:U'}). 
\end{prop} \begin{proof} (i) The proof of~\cite[Lemma~3.1]{BPrig} shows that each irreducible component $C_i'$ is a $(-1, -1)$-curve in $U'$, and there is a bimeromorphic contraction, \begin{align*} f\colon U' \to U'', \end{align*} which contracts $C'$ to a $cA_N$-singularity $0 \in U''$. The argument of Van den Bergh~\cite{MVB} works in our situation, and we have the derived equivalence, \begin{align*} \Phi \colon D^b \mathop{\rm Coh}\nolimits_{C'}(U') \cong D^b \mathrm{Mod}_{\rm{nil}}(A), \end{align*} for some non-commutative $\mathcal{O}_{U''}$-algebra $A$, such that $\Phi^{-1} \mathrm{Mod}_{\rm{nil}}(A)$ corresponds to Bridgeland's perverse coherent sheaves with $0$-perversity~\cite{Br1}. Here $\mathop{\rm Coh}\nolimits_{C'}(U')$ is the category of coherent sheaves on $U'$ supported on $C'$, and $\mathrm{Mod}_{\rm{nil}}(A)$ is the category of finite dimensional nilpotent right $A$-modules. Let us take $\mathcal{L} \in \mathop{\rm Pic}\nolimits(U')$ such that $\mathcal{L}|_{C'}$ is an ample line bundle. By the construction of perverse coherent sheaves in~\cite{Br1}, an object $E\in \mathop{\rm Coh}\nolimits_{C'}(U')$ satisfies $\Phi(E) \in \mathrm{Mod}_{\rm{nil}}(A)$ if and only if $R^1 f_{\ast} E=0$. The latter condition is satisfied if we replace $E$ by $E\otimes \mathcal{L}^{\otimes m}$ for $m\gg 0$. Since any object $[E] \in \mathcal{M}_n(U', \beta')$ is supported on $C'$, the above argument implies that the moduli stack $\mathcal{M}_n(U', \beta')$ is regarded as an analytic open substack of objects in $\Phi^{-1}\mathrm{Mod}_{\rm{nil}}(A) \otimes \mathcal{L}^{\otimes -m}$ for $m\gg 0$. On the other hand, the algebra $A$ is a Calabi-Yau 3-algebra. Hence the completion $\widehat{A}$ of $A$ at $0\in U''$ is written as a completion of a path algebra of a quiver $Q$ with a superpotential $W$ by~\cite{Bergh}. 
Since each component of the moduli stack of representations of $(Q, W)$ is written as a quotient stack of a critical locus of some holomorphic function on a finite dimensional vector space (cf.~\cite[Subsection~7.2]{JS}), we conclude that $\mathcal{M}_n(U', \beta')$ satisfies the condition of Conjecture~\ref{conj:crit}. Next let us take $n\in \mathbb{Z}$ and $a_1', \cdots, a_N' \ge 1$. By the argument in~\cite[Proposition~2.10]{BKL}, there is a family of complex manifolds $U_t'$ for $t\in \mathbb{C}$ such that $U_0'=U'$ and curves in $U_{\varepsilon}'$ for $0<\varepsilon \ll 1$ consist only of $(-1, -1)$-curves. A subcurve $C'' \subset C'$ deforms to a curve in $U_{\varepsilon}'$ if and only if $C''$ is a sub $\mathbb{P}^1$-chain of $C'$. Let $C_{\varepsilon}' \subset U_{\varepsilon}'$ be a $(-1, -1)$-curve, obtained by deforming $C'$. For $(n, k) \in \mathbb{Z}^{\oplus 2}$ with $k\ge 1$, we have $N_{n, k[C_{\varepsilon}']} \neq 0$ only if $k|n$, and in this case we have (cf.~\cite[Example~4.14]{Todpara}) \begin{align*} N_{n, k[C_{\varepsilon}']}(U_{\varepsilon}')=\frac{1}{k^2}. \end{align*} On the other hand, the invariant $N_{n, \beta'}(U')$ is invariant under deformation of $U'$ by~\cite[Corollary~5.28]{JS}. (See Remark~\ref{rmk:ncpt} below.) Hence the LHS of (\ref{N:rigid}) is non-zero only if $a_1'=\cdots =a_N'(=k)$, $k|n$, and is equal to $1/k^2$ in this case. (ii) If $C\subset U$ is of surface type, then the cyclic neighborhood $C' \subset U'$ is also of surface type: there is a complex surface $U_0'$ such that $U' \cong U_0' \times \Delta$ and $C' \subset U_0' \times \{0\}$. Then $C'$ is a tree of $\mathbb{P}^1$ of ADE type in the surface $U_0'$, hence there is a bimeromorphic morphism to a singular complex surface $U_0''$, \begin{align*} U_0' \to U_{0}'', \end{align*} whose exceptional locus is $C'$. 
Since $U_0'$ is a small analytic neighborhood of $C'$, the surface $U_0''$ is an analytic neighborhood of its singular point, which is isomorphic to an analytic neighborhood of the quotient singularity $\mathbb{C}^2/G$ for a finite subgroup $G \subset \mathop{\rm SL}\nolimits(2, \mathbb{C})$. Let \begin{align}\label{minimal:W} f \colon V \to \mathbb{C}^2 /G, \end{align} be the minimal resolution of singularities. Note that $C'$ is regarded as the exceptional locus of (\ref{minimal:W}). The above argument shows that $U'$ is isomorphic to an analytic neighborhood of $C'$ in $V\times \mathbb{C}$, where $C'$ lies in $V\times \{0\}$. As explained in~\cite[Subsection~2.2]{GY}, we have the derived equivalence, \begin{align*} \Phi \colon D^b \mathop{\rm Coh}\nolimits(V\times \mathbb{C}) \cong D^b \mathrm{Rep}(Q, W), \end{align*} for a certain quiver $Q$ with a superpotential $W$, and $\Phi^{-1} \mathrm{Rep}(Q, W)$ is the category of Bridgeland's perverse coherent sheaves. Then the same argument as in (i) shows that $\mathcal{M}_n(U', \beta')$ satisfies the condition of Conjecture~\ref{conj:crit}. Let us compute the LHS of (\ref{N:sur}) when $C'$ is a chain of $\mathbb{P}^1$. In the surface type case, the moduli space of stable pairs (\ref{pair:JS}) in Remark~\ref{rmk:ncpt} below is not compact, so the deformation argument is more subtle. Instead, we use the explicit computation of DT type invariants on $V\times \mathbb{C}$ in~\cite{GY}. In~\cite{GY}, it is proved that the generating series of PT invariants is written in a Gopakumar-Vafa form, that is, a local version of the conjecture in~\cite[Conjecture~6.2]{Tsurvey}. By~\cite[Theorem~6.4]{Tsurvey}, Conjecture~\ref{conj:mult2} is equivalent to~\cite[Conjecture~6.2]{Tsurvey}, hence $N_{n, \gamma'}(U')$ satisfies (\ref{mult:U'}) for any $n\in \mathbb{Z}$ and $\gamma' \in H_2(C', \mathbb{Z})$. 
By~\cite[Corollary~1.6]{GY} and~\cite[Theorem~6.4]{Tsurvey}, for $a_1', \cdots, a_N' \in \mathbb{Z}_{\ge 1}$, we have \begin{align*} N_{1, a_1'[C_1']+ \cdots +a_N'[C_N']}(U') =\left\{\begin{array}{cc} -1, & a_1'= \cdots =a_N'=1, \\ 0, & \mbox{ otherwise. } \end{array} \right. \end{align*} Therefore the result of (ii) holds. \end{proof} \begin{rmk}\label{rmk:ncpt} The argument of~\cite[Corollary~5.28]{JS} works when the ambient space $U'$ is a projective Calabi-Yau 3-fold. In our case, $U'$ is a small analytic neighborhood of $C'$, so we need to modify the argument. Let $\mathcal{L}$ be a line bundle on $U'$ such that $\mathcal{L}|_{C'}$ is ample. Then we consider the moduli space of pairs, \begin{align}\label{pair:JS} (F, u), \end{align} where $F$ is a compactly supported one dimensional coherent sheaf on $U'$, and $u\in H^0(F\otimes \mathcal{L}^{\otimes m})$ for $m\gg 0$, satisfying the stability condition as in~\cite[Definition~5.20]{JS}. If $C' \subset U'$ is a chain of $(-1, -1)$-curves, then any sheaf $F$ as above is supported on $C'$, hence the moduli space of pairs (\ref{pair:JS}) is a projective scheme. Let us consider the object, \begin{align*} E=(\mathcal{L}^{\otimes -m} \stackrel{u}{\to} F) \in D^b \mathop{\rm Coh}\nolimits(U'). \end{align*} Although $U'$ is non-compact, the groups $\mathop{\rm Ext}\nolimits_{U'}^{i}(E, E)$ for $i=1, 2$ are finite dimensional, hence determine a symmetric perfect obstruction theory on the moduli space of pairs (\ref{pair:JS}) as in~\cite[Theorem~5.23]{JS}. Then the deformation invariance of $N_{n, \beta'}(U')$ follows from the same argument of~\cite[Corollary~5.28]{JS}. \end{rmk} \subsection{Local generalized DT invariants on a nodal rational curve of type $I_N$} In this subsection, using the results in the previous subsection, we compute some generalized DT invariants which have not been computed so far. 
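As an aside on notation: in the multiple cover sums below, for $n \in \mathbb{Z}$ and $m \in \mathbb{Z}_{\ge 1}$ the condition $k|(n, m)$ means that $k$ divides both $n$ and $m$. As a purely arithmetic illustration, for $n=0$ and $m=2$ the admissible values are $k \in \{1, 2\}$, so a sum of the shape appearing in Theorem~\ref{thm:typeI} below evaluates as
\begin{align*}
\sum_{k\ge 1, \ k|(0, 2)} \frac{N}{k^2}
=\frac{N}{1^2}+\frac{N}{2^2}
=\frac{5N}{4}.
\end{align*}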
Recall that a nodal curve $C$ is said to be of type $I_N$ if $C$ is one of the following: \begin{itemize} \item $C$ is of type $I_1$ if $C$ is an irreducible rational curve with one node. \item $C$ is of type $I_N$ for $N\ge 2$ if $C$ is a circle of irreducible components $C_1, \cdots, C_N$ such that $C_i \cong \mathbb{P}^1$ for all $i$. \end{itemize} Note that the above notation is used in Kodaira's classification of singular fibers of elliptic fibrations. (See Figure~\ref{fig:three}.) \begin{figure}\label{fig:three} \end{figure} We have the following theorem. \begin{thm}\label{thm:typeI} Let $C$ be a nodal curve of type $I_N$ with irreducible components $C_1, \cdots, C_N$, which is embedded into a Calabi-Yau 3-fold $X$. Suppose that an analytic neighborhood $C\subset U \subset X$ is either $0$-super rigid or of surface type. Then for $n\in \mathbb{Z}$ and $a_1, \cdots, a_N \in \mathbb{Z}_{\ge 1}$, the invariant $N_{n, a_1[C_1]+ \cdots +a_{N}[C_N]}$ is non-zero only if $a_1= \cdots =a_N$, say $m$. In this case, we have \begin{align*} N_{n, m[C_1]+ \cdots +m[C_N]} =\left\{ \begin{array}{cc} \sum_{k\ge 1, k|(n, m)} N/k^2, & 0\mbox{-super rigid case, } \\ -\sum_{k\ge 1, k|(n, m)} N/k^2, & \mbox{ surface type case. } \end{array} \right. \end{align*} In particular, the invariant $N_{n, a_1[C_1]+\cdots +a_N[C_N]}$ satisfies the formula (\ref{form:mult:loc}). \end{thm} \begin{proof} Let $(C', U')$ be a cyclic neighborhood of $C \subset U$. Then the construction of cyclic coverings in Subsection~\ref{subsec:cyclic} shows that, if $C'$ is a tree of $\mathbb{P}^1$, then it must be a chain of $\mathbb{P}^1$. By Proposition~\ref{prop:compN}, the assumptions in Theorem~\ref{thm:main:cov} are satisfied, hence the invariant $N_{n, a_1[C_1]+ \cdots +a_{N}[C_N]}$ satisfies the formula (\ref{form:mult:loc}). 
It is enough to compute $N_{1, a_1[C_1]+ \cdots +a_{N}[C_N]}$, which follows from the computation on cyclic neighborhoods in Proposition~\ref{prop:compN} and the comparison formula under the covering (\ref{formula2}). For instance, if $C\subset U$ is $0$-super rigid, then it easily follows that \begin{align*} N_{1, a_1[C_1]+ \cdots +a_{N}[C_N]} =\left\{ \begin{array}{cc} N, & a_1= \cdots =a_N=1, \\ 0, & \mbox{ otherwise. } \end{array} \right. \end{align*} The case of surface type is similar. \end{proof} \subsection{Local generalized DT invariants on irreducible nodal rational curves} In this subsection, using Lemma~\ref{N:primitive}, Proposition~\ref{prop:compN} and Theorem~\ref{thm:main:cov}, we study the local multiple cover formula of generalized DT invariants on irreducible nodal curves. We have the following result: \begin{thm}\label{prop:prime} Let $C$ be an irreducible rational curve with at worst nodal singularities, embedded into a Calabi-Yau 3-fold $X$. For a sufficiently small analytic neighborhood $C\subset U \subset X$, suppose that it is $0$-super rigid or of surface type. Moreover, assume that any cyclic neighborhood $C' \subset U'$ with $C'$ a tree of $\mathbb{P}^1$ satisfies the condition of Conjecture~\ref{conj:crit}. Then for any prime number $p$, the invariant $N_{n, p[C]}$ satisfies the formula (\ref{form:mult:loc}). \end{thm} \begin{proof} Let $(C'\subset U') \stackrel{\sigma'}{\to} (C\subset X)$ be a cyclic neighborhood of $C\subset X$ with $C'$ a tree of $\mathbb{P}^1$. By Theorem~\ref{thm:main:cov}, it is enough to show that the invariant $N_{n, \gamma'}(U')$ satisfies the formula (\ref{mult:U'}) for any $n\in \mathbb{Z}$ and $\gamma' \in H_2(C', \mathbb{Z})_{>0}$ with $\sigma'_{\ast}\gamma'=p[C]$. Let $C_1', \cdots, C_N'$ be the irreducible components of $C'$, and we write $\gamma' \in H_2(C', \mathbb{Z})_{>0}$ as $\gamma'=\sum_{i=1}^{N} a_i'[C_i']$ for $a_i' \in \mathbb{Z}_{\ge 0}$. 
Then $\sigma_{\ast}'\gamma'=p[C]$ is equivalent to the condition \begin{align}\label{a:prime} a_1' + \cdots + a_N'=p. \end{align} Suppose that there are at least two $1\le i\le N$ with $a_i'>0$. Then, since $p$ is a prime number, the equality (\ref{a:prime}) implies that \begin{align*} \mathrm{g.c.d.}(a_1', \cdots, a_N')=1, \end{align*} i.e. $\gamma' \in H_2(C', \mathbb{Z})$ is primitive. Then the invariant $N_{n, \gamma'}(U')$ satisfies (\ref{mult:U'}) by Lemma~\ref{N:primitive}. If there is only one $1\le i\le N$ with $a_i'>0$, then we can assume that $C'$ is a single $\mathbb{P}^1$. In this case, the invariant $N_{n, \gamma'}(U')$ satisfies the formula (\ref{mult:U'}) by Proposition~\ref{prop:compN}. \end{proof} In the situation of the above theorem, we are not able to prove the condition of Conjecture~\ref{conj:crit} on cyclic neighborhoods. However, we do not have to take care of this condition in the Euler characteristic version. We have the following: \begin{thm}\label{prop:prime2} Let $C$ be an irreducible rational curve with at worst nodal singularities, embedded into a Calabi-Yau 3-fold $X$. For a sufficiently small analytic neighborhood $C\subset U \subset X$, suppose that it is of surface type. Then for any prime number $p$, the invariant $N_{n, p[C]}^{\chi}$ satisfies the formula (\ref{mult:chi}). \end{thm} \begin{proof} The same proof of Theorem~\ref{prop:prime} works, using Theorem~\ref{thm:cov:Eu} instead of Theorem~\ref{thm:main:cov}. In the notation of the proof of Theorem~\ref{prop:prime}, suppose that there are at least two $1\le i\le N$ with $a_i'>0$. Then the same proof of Lemma~\ref{N:primitive} shows the equality $N_{n, \gamma'}^{\chi}(U')=N_{1, \gamma'}^{\chi}(U')$. Otherwise, we may assume that $C'$ is a single $\mathbb{P}^1$. 
In this case, the invariant $N_{n, p[C']}^{\chi}(U')$ can be checked to satisfy (\ref{mult:chi2}) by comparing the formula in~\cite[Theorem~1.3]{Tolim2} and the Euler characteristic version of the formula in~\cite[Theorem~1.2]{GY}. \end{proof} \begin{rmk} The result of Theorem~\ref{prop:prime2} is not true in the $0$-super rigid case, as we discussed in Example~\ref{exam:counter}. \end{rmk} In the situation of Theorem~\ref{prop:prime2}, we can say more for the invariants $N_{n, m[C]}^{\chi}$ when $m$ is small: \begin{lem}\label{mult:small} In the situation of Theorem~\ref{prop:prime2}, the invariant $N_{n, m[C]}^{\chi}$ satisfies the formula (\ref{mult:chi}) when $m\le 10$. \end{lem} \begin{proof} First suppose that $m<10$. In the notation of the proof of Theorem~\ref{prop:prime}, suppose that $\gamma'=\sum_{i=1}^{N}a_i'[C_i']$ satisfies $\sigma_{\ast}\gamma'=m[C]$, i.e. \begin{align*} a_1' + \cdots +a_N'=m. \end{align*} Then we have either $\mathrm{g.c.d.}(a_1', \cdots, a_N')=1$ or $\gamma'$ is supported on an ADE configuration of $\mathbb{P}^1$. As in the proof of Theorem~\ref{prop:prime2}, using the results in~\cite{GY}, \cite{Tolim2}, it is straightforward to check the Euler characteristic version of the results in Proposition~\ref{prop:compN} (ii). Therefore the result follows from the Euler characteristic version of Lemma~\ref{N:primitive} and Theorem~\ref{thm:cov:Eu}. When $m=10$, we have the following exceptional case: $N=5$, $C'_1, \cdots, C_5'$ satisfy \begin{align*} C_1' \cdot C_i'=1, \ (i\ge 2), \quad C_i' \cdot C_j'=0, \ (i\neq j, \ i, j \ge 2), \end{align*} and $\gamma'=2[C']$. In this case, $C_1', \cdots, C_5'$ do not form an ADE configuration. However, we can check that $N_{n, 2[C']}^{\chi}(U')$ satisfies (\ref{mult:chi2}) by a direct calculation as in~\cite[Proposition~6.9]{TodK3}. In fact, by the Riemann-Roch theorem, one can show that there is no stable sheaf $F'$ on $U'$ with $[F']=\gamma'$. 
Then a computation similar to~\cite[Proposition~6.9]{TodK3} works, whose details are left to the reader. \end{proof} \begin{rmk}\label{rmk:small} The result of Lemma~\ref{mult:small} can be generalized as follows. Let $C$ be a reduced (not necessarily irreducible) rational curve with at worst nodal singularities, which is embedded into a Calabi-Yau 3-fold $X$. Suppose that a sufficiently small analytic neighborhood $C\subset U\subset X$ is of surface type. Let $C_1, \cdots, C_N$ be the irreducible components of $C$. Then the invariant $N_{n, a_1[C_1]+\cdots +a_N[C_N]}^{\chi}$ satisfies the formula (\ref{mult:chi}) if $a_1, \cdots, a_N$ satisfy \begin{align*} a_1 + \cdots +a_N \le 10. \end{align*} The proof is the same as that of Lemma~\ref{mult:small}. \end{rmk} \subsection{Euler characteristic invariants on K3 surfaces} Let $S$ be a smooth projective K3 surface over $\mathbb{C}$, i.e. \begin{align*} K_S=\mathcal{O}_S, \quad H^1(S, \mathcal{O}_S)=0, \end{align*} and let $X$ be the total space of the canonical line bundle on $S$, i.e. \begin{align*} X=S\times \mathbb{C}. \end{align*} In~\cite{TodK3}, we established a formula relating the Euler characteristics of moduli spaces of PT stable pairs to Joyce-type Euler characteristic invariants counting semistable sheaves on the fibers of the projection \begin{align*} X=S\times \mathbb{C} \to \mathbb{C}. \end{align*} For a vector \begin{align*} v=(r, \beta, n) \in H^0(S, \mathbb{Z}) \oplus H^2(S, \mathbb{Z}) \oplus H^4(S, \mathbb{Z}), \end{align*} the latter invariant was denoted by \begin{align*} J(r, \beta, n) \in \mathbb{Q}. \end{align*} The invariant $J(v)$ is the Euler characteristic version of DT type invariants on $X$, counting $p^{\ast}\omega$-semistable sheaves $F \in \mathop{\rm Coh}\nolimits(X)$ with compact support, satisfying \begin{align*} \mathop{\rm ch}\nolimits(p_{\ast}F) \sqrt{\mathop{\rm td}\nolimits_S} =v. 
\end{align*} Here $p\colon X=S\times \mathbb{C} \to S$ is the first projection, and $\omega$ is an ample divisor on $S$. In the notation of Subsection~\ref{subsec:Euler}, after identifying $H^2(S, \mathbb{Z})$ with $H_2(X, \mathbb{Z})$, we have \begin{align}\label{J=N} J(0, \beta, n)=N_{n, \beta}^{\chi}. \end{align} If $v\in H^{\ast}(S, \mathbb{Z})$ is a primitive algebraic class, then $J(v)$ is written as \begin{align}\label{J:primitive} J(v)=\chi(\mathop{\rm Hilb}\nolimits^{(v, v)/2 +1}(S)). \end{align} (cf.~\cite[Equation~(65)]{TodK3}.) Here $\mathop{\rm Hilb}\nolimits^m(S)$ is the Hilbert scheme of $m$ points in $S$ and $(\ast, \ast)$ is the Mukai inner product, \begin{align*} ((r_1, \beta_1, n_1), (r_2, \beta_2, n_2)) =\beta_1 \cdot \beta_2 -r_1 n_2 -r_2 n_1. \end{align*} The RHS of (\ref{Hilb}) is determined by G\"{o}ttsche's formula~\cite{Got}, \begin{align*} \sum_{m\ge 0} \chi(\mathop{\rm Hilb}\nolimits^{m}(S))q^m = \prod_{m\ge 1} \frac{1}{(1-q^m)^{24}}. \end{align*} If $v$ is not necessarily primitive, we proposed the following multiple cover conjecture in~\cite[Conjecture~1.3]{TodK3}: \begin{conj}{\bf (\cite[Conjecture~1.3]{TodK3})} If $v\in H^{\ast}(S, \mathbb{Z})$ is an algebraic class, we have the equality, \begin{align}\label{Hilb} J(v)=\sum_{k\ge 1, k|v} \frac{1}{k^2} \chi(\mathop{\rm Hilb}\nolimits^{(v/k, v/k)/2 +1}(S)). \end{align} \end{conj} Using the results in this paper, we can give some evidence for the conjecture. Below we say a (not necessarily reduced) curve $C \subset S$ is \textit{rational}, or has \textit{at worst nodal singularities}, if the reduced curve $C^{\rm{red}}$ satisfies the corresponding property. We have the following theorem: \begin{thm}\label{thm:K3} Let $S$ be a smooth projective K3 surface over $\mathbb{C}$ and $X=S\times \mathbb{C}$. Suppose that $\mathop{\rm Pic}\nolimits(S)$ is generated by an ample line bundle $L$ on $S$, such that any rational member in $\lvert L \rvert$ has at worst nodal singularities. 
Then the invariant $J(0, m c_1(L), n)$ satisfies the formula (\ref{Hilb}) if $m\le 10$ or $m$ is a prime number. In particular, if $L^2=2d-2$ for $d\in \mathbb{Z}$ and $p$ is a prime number, we have \begin{align}\label{Jformula:pd} J(0, p c_1(L), 0) = \chi(\mathop{\rm Hilb}\nolimits^{(d-1)p^2 +1}(S)) +\frac{1}{p^2} \chi(\mathop{\rm Hilb}\nolimits^{d}(S)). \end{align} \end{thm} \begin{proof} Let $\gamma$ be a one cycle on $X$ whose support is compact. We show that $N_{n, \gamma}^{\chi}$ satisfies the formula (\ref{mult:chi}) when the homology class of $\gamma$ coincides with $m c_1(L)$ under the isomorphism $H_2(X, \mathbb{Z}) \cong H^2(S, \mathbb{Z})$. As in the proof of Theorem~\ref{thm:main:cov}, we may assume that $\gamma$ is connected. Then $\gamma$ is supported on $S\times \{t\}$ for some $t \in \mathbb{C}$, hence it is regarded as $\gamma \in \lvert mL \rvert$. We write $\gamma$ as \begin{align*} \gamma=a_1 [C_1] +\cdots +a_N[C_N], \end{align*} where $C_1, \cdots, C_N$ are irreducible curves in $S \times \{t\}$. By Lemma~\ref{lem:higher}, we may assume that all $C_i$ have geometric genus zero. By the assumption, the reduced curve $C=\cup_{i=1}^{N}C_i$ has at worst nodal singularities, and there is an analytic neighborhood $C \subset U \subset X$ of surface type. Note that $C_i \in \lvert m_i L \rvert$ for some $m_i \in \mathbb{Z}_{\ge 1}$, and we have \begin{align*} a_1 m_1 + \cdots + a_N m_N=m. \end{align*} Since $m\le 10$ or $m$ is a prime number, at least one of the following conditions holds: (i) $a_1 + \cdots + a_N \le 10$. (ii) $\mathrm{g.c.d.}(a_1, \cdots, a_N)=1$. (iii) $N=1$, $a_1=m$, $m_1=1$. In these cases, the formula (\ref{mult:chi}) follows from Remark~\ref{rmk:small}, Lemma~\ref{N:primitive} and Theorem~\ref{prop:prime2}. The proof of Lemma~\ref{lem:glo:loc} in~\cite[Corollary~4.11]{Todpara} also applies to the Euler characteristic version in our situation.
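As an aside (not part of the argument above), the numbers entering the formula (\ref{Jformula:pd}) are effectively computable: the Euler characteristics $\chi(\mathop{\rm Hilb}\nolimits^{m}(S))$ are the coefficients of G\"{o}ttsche's product $\prod_{m\ge 1}(1-q^m)^{-24}$, so the right-hand side of (\ref{Jformula:pd}) can be evaluated directly. The following Python sketch (our own illustration; the function names are ours) does this for $d=2$, $p=2$:

```python
from fractions import Fraction

def hilb_euler(N):
    """Coefficients of prod_{m>=1} (1 - q^m)^(-24) up to q^N,
    i.e. the Euler characteristics chi(Hilb^m(S)) of a K3 surface S."""
    c = [0] * (N + 1)
    c[0] = 1
    for m in range(1, N + 1):
        for _ in range(24):
            # in-place forward update = multiplication of the series by 1/(1 - q^m)
            for n in range(m, N + 1):
                c[n] += c[n - m]
    return c

def J_rhs(d, p, chi):
    """Right-hand side of (Jformula:pd) for v = (0, p*c1(L), 0) with L^2 = 2d-2:
    chi(Hilb^{(d-1)p^2+1}(S)) + (1/p^2) * chi(Hilb^d(S))."""
    return chi[(d - 1) * p ** 2 + 1] + Fraction(chi[d], p ** 2)

chi = hilb_euler(10)
# chi begins [1, 24, 324, 3200, 25650, 176256, ...]
print(J_rhs(2, 2, chi))  # -> 176337
```

For $d=2$, $p=2$ this gives $\chi(\mathop{\rm Hilb}\nolimits^{5}(S)) + \tfrac{1}{4}\chi(\mathop{\rm Hilb}\nolimits^{2}(S)) = 176256 + 81 = 176337$.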
Consequently, we have the global multiple cover formula, \begin{align}\label{global:K3} N_{n, mc_1(L)}^{\chi} = \sum_{k\ge 1, k|(n, m)} \frac{1}{k^2} N_{1, mc_1(L)/k}^{\chi}. \end{align} On the other hand, since $v=(0, a c_1(L), 1) \in H^{\ast}(S, \mathbb{Z})$ is primitive for any $a \in \mathbb{Z}$, we have \begin{align*} N_{1, ac_1(L)}^{\chi} =\chi(\mathop{\rm Hilb}\nolimits^{(v, v)/2+1}(S)), \end{align*} by (\ref{J:primitive}) and (\ref{J=N}). Also noting (\ref{J=N}) in the LHS of (\ref{global:K3}), the invariant $J(0, m c_1(L), n)$ satisfies the formula (\ref{Hilb}). \end{proof} \begin{rmk} The assumption on the K3 surface $S$ in Theorem~\ref{thm:K3} is a genericity condition for polarized K3 surfaces. Namely for $d\in \mathbb{Z}$, let us consider the moduli space of polarized K3 surfaces $(S, L)$ with $L^2=2d-2$. Then $(S, L)$ satisfies the assumption in Theorem~\ref{thm:K3} when $(S, L)$ is a general point of the above moduli space. \end{rmk} \begin{rmk} In~\cite[Proposition~6.9]{TodK3}, we proved the formula (\ref{Jformula:pd}) for $d=2$ and $p=2$. Even in this case, it is not obvious how to compute the LHS of (\ref{Jformula:pd}) directly. Indeed in~\cite[Proposition~6.9]{TodK3}, we used the result by Mozgovoy~\cite{Moz}. The proof of Theorem~\ref{thm:K3} does not require the result of~\cite{Moz}, and can be applied to more general cases. \end{rmk} \begin{rmk} Note that there is a weight two Hodge structure on $H^{\ast}(S, \mathbb{C})$ by \begin{align*} H^{\ast 2, 0}=H^{2, 0}, \ H^{\ast 0, 2}=H^{0, 2}, \\ H^{\ast 1, 1}=H^{0, 0} \oplus H^{1, 1} \oplus H^{2, 2}. \end{align*} Let $G$ be the group of Hodge isometries of $H^{\ast}(S, \mathbb{Z})$. Then for any $g\in G$ and $v\in H^{\ast}(S, \mathbb{Z})$, it is proved in~\cite[Theorem~4.21]{TodK3} that \begin{align}\label{Jgv=Jv} J(gv)=J(v). \end{align} Let $v=(0, mc_1(L), n) \in H^{\ast}(S, \mathbb{Z})$ be as in the statement of Theorem~\ref{thm:K3}.
The result of Theorem~\ref{thm:K3} and the formula (\ref{Jgv=Jv}) imply that $J(gv)$ also satisfies the formula (\ref{Hilb}) for any $g\in G$. \end{rmk} \section{Identity of Behrend functions}\label{subsec:Behrend} In this section, we give a proof of Lemma~\ref{lem:identity}. In principle, the result is a consequence of $\mathbb{C}^{\ast}$-localizations of the Behrend functions as in~\cite[Proposition~3.3]{BBr}, \cite[Theorem~C]{WeiQin}, together with the results in Proposition~\ref{prop:isom:para}. However, as we discussed in~\cite[Remark~2.4]{Todpara}, we are unable to find a symmetric perfect obstruction theory on the moduli space of parabolic stable pairs, which prevents us from using the $\mathbb{C}^{\ast}$-localizations on the Behrend functions directly. Instead, we consider some other moduli spaces which admit $\mathbb{C}^{\ast}$-equivariant symmetric perfect obstruction theories, and apply $\mathbb{C}^{\ast}$-localizations to the Behrend functions on them. Then the result follows by comparing their Behrend functions with those on the moduli spaces of parabolic stable pairs. \subsection{Deformations of sheaves} In this subsection, we recall a result on deformations of sheaves on algebraic varieties given by Huybrechts-Thomas~\cite{HT2}, which will be used in the next subsection. Let $X$ be a smooth projective variety and $T$ an affine scheme with a closed point $0 \in T$. Suppose that we are given a $T$-flat coherent sheaf on $X\times T$, \begin{align*} A \in \mathop{\rm Coh}\nolimits(X\times T). \end{align*} We would like to extend $A$ to a square zero extension $j \colon T \hookrightarrow \overline{T}$, i.e. there is an ideal $J\subset \mathcal{O}_{\overline{T}}$ such that \begin{align}\label{square:zero} \mathcal{O}_T \cong \mathcal{O}_{\overline{T}}/J, \quad J^2=0. \end{align} Let us take the distinguished triangle in $D^b \mathop{\rm Coh}\nolimits(X\times T)$, \begin{align}\label{dist:Q} Q_A \to \mathbf{L} j^{\ast}j_{\ast} A \to A.
\end{align} Here the right arrow is the adjunction, and we have denoted $\textrm{id}_X \times j$ just by $j$ for simplicity. Following~\cite{HT2}, we construct the morphism \begin{align*} \pi_{A} \colon Q_{A} \to A \otimes_{\mathcal{O}_T} J[1], \end{align*} in the following way. Let $h$ be the embedding \begin{align*} h\colon X\times T \times X \times T \hookrightarrow X\times T \times X \times \overline{T}, \end{align*} and $H$ the object \begin{align*} H \cneq \mathbf{L} h^{\ast} h_{\ast}\Delta_{\ast}\mathcal{O}_{X\times T}, \end{align*} where $\Delta$ is the diagonal embedding of $X\times T$. By~\cite[Subsection~3.1]{HT2}, there are distinguished triangles on $X\times T$, \begin{align}\label{dist:H} &\tau^{\le -1}H \to H \to \Delta_{\ast}\mathcal{O}_{X\times T}, \\ \notag &\tau^{\le -2}H \to \tau^{\le -1}H \stackrel{\pi_H}{\to} \Delta_{\ast}J[1]. \end{align} Applying Fourier-Mukai transforms of $A$ for the triangles (\ref{dist:H}), we obtain the triangle (\ref{dist:Q}). Then the morphism $\pi_A$ is obtained by taking the Fourier-Mukai transform of $A$ for the morphism $\pi_H$. On the other hand, suppose that the following morphism on $X\times \overline{T}$ is given, \begin{align*} e_{A} \colon j_{\ast}A \to j_{\ast}(A\otimes J)[1]. \end{align*} Then we construct the morphism $\Psi_{e_A}$ to be the composition, \begin{align*} \Psi_{e_A} \colon Q_A \to \mathbf{L} j^{\ast} j_{\ast}A \stackrel{\mathbf{L} j^{\ast} e_A}{\to} \mathbf{L} j^{\ast} j_{\ast}(A\otimes J)[1] \to A \otimes J[1]. \end{align*} Here the left arrow is given by the left arrow of (\ref{dist:Q}), and the right arrow is given by the adjunction. Let us take the cone of $e_A$, \begin{align*} j_{\ast}(A\otimes J) \to \overline{A} \to j_{\ast}A \stackrel{e_{A}}{\to} j_{\ast}(A\otimes J)[1]. \end{align*} Note that $\overline{A}$ is a coherent sheaf on $X\times \overline{T}$.
By~\cite[Theorem~3.3]{HT2}, we have the following criterion for an object $\overline{A}$ to be a deformation of $A$: \begin{thm}{\bf(\cite[Theorem~3.3]{HT2}) }\label{thm:deform} The sheaf $\overline{A} \in \mathop{\rm Coh}\nolimits(X\times \overline{T})$ is flat over $\overline{T}$ with $\overline{A}|_{X\times T} \cong A$ if and only if we have the equality, \begin{align}\label{equal:Psi} \pi_{A} =\Psi_{e_A}. \end{align} \end{thm} \subsection{Moduli spaces of simple sheaves and relative Quot schemes} Let $X$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, and $H\subset X$ a smooth and connected divisor. Recall that giving a parabolic stable pair $(F, s)$ on $X$ is equivalent to giving a morphism, \begin{align*} N_{H/X}[-1] \to F, \end{align*} where $N_{H/X}$ is the normal bundle of $H$ in $X$, satisfying a certain stability condition. (cf.~\cite[Proposition~3.9]{Todpara}.) By taking the cone, we obtain the exact sequence of sheaves, \begin{align}\label{FEN} 0 \to F \to E \to N_{H/X} \to 0. \end{align} Note that $N_{H/X} \cong \mathcal{O}_H(H)$ is a simple sheaf, i.e. $\mathop{\rm End}\nolimits(N_{H/X})=\mathbb{C}$. Together with the parabolic stability of $(F, s)$, it follows that the sheaf $E$ is also a simple sheaf. (cf.~\cite[Corollary~3.10]{Todpara}.) We consider the moduli space of simple sheaves on $X$, which we denote by $\mathcal{M}$. The space $\mathcal{M}$ is known to be an algebraic space locally of finite type over $\mathbb{C}$. (cf.~\cite{Inaba}.) The universal sheaf is denoted by \begin{align}\label{Univ:sheaf} \mathcal{E} \in \mathop{\rm Coh}\nolimits(X \times \mathcal{M}). \end{align} Let \begin{align}\label{tau} \tau \colon \mathcal{Q} \to \mathcal{M}, \end{align} be the algebraic space representing the relative Quot-functor for the family of simple sheaves (\ref{Univ:sheaf}). Namely for each $[E] \in \mathcal{M}$, the fiber of $\mathcal{Q} \to \mathcal{M}$ is Grothendieck's Quot-scheme parameterizing quotient sheaves $E \twoheadrightarrow E'$.
We have the morphism, \begin{align}\label{iota} \iota \colon M_n^{\rm{par}}(U, \beta) \to \mathcal{Q}, \end{align} sending a parabolic stable pair $(F, s)$ to the surjection $E \twoheadrightarrow N_{H/X}$ given by the sequence (\ref{FEN}). We have the following lemma. \begin{lem}\label{lem:etale} The morphism $\tau$ is \'{e}tale at any point in the image of $\iota$. \end{lem} \begin{proof} Let $(F, s)$ be a parabolic stable pair on $X$ and $E \twoheadrightarrow N_{H/X}$ a point of $\mathcal{Q}$ determined by (\ref{FEN}). Let $T$ be an affine scheme, $0 \in T$ a closed point, and $j \colon T \hookrightarrow \overline{T}$ a square zero extension with an ideal $J\subset \mathcal{O}_{\overline{T}}$ as in (\ref{square:zero}). Suppose that there is a commutative diagram, \begin{align}\label{diag:MQ} \xymatrix{ T \ar[r]^{f} \ar[d]_{j} & \mathcal{Q} \ar[d]^{\tau} \\ \overline{T} \ar[r]^{h} & \mathcal{M}, } \end{align} such that $f$ sends $0 \in T$ to $(E \twoheadrightarrow N_{H/X}) \in \mathcal{Q}$. It is enough to show that, after replacing $T$ by an affine open neighborhood of $0 \in T$, the morphism $f$ uniquely extends to $\overline{f} \colon \overline{T} \to \mathcal{Q}$ which commutes with all the arrows in (\ref{diag:MQ}). By pulling back $\mathcal{E} \in \mathop{\rm Coh}\nolimits(X\times \mathcal{M})$ to $T$ by the composition $\tau \circ f \colon T \to \mathcal{M}$, we obtain the sheaf $\mathcal{E}_T \in \mathop{\rm Coh}\nolimits(X\times T)$, which is flat over $T$, and restricts to the sheaf $E$ on $X\times \{0\}$. The morphism $f$ corresponds to the exact sequence of sheaves on $X\times T$, \begin{align}\label{FEN2} 0 \to \mathcal{F}_{T} \to \mathcal{E}_T \to \mathcal{N}_T \to 0, \end{align} which restricts to the exact sequence (\ref{FEN}) on $X\times \{0\}$.
Also the morphism $h$ corresponds to a $\overline{T}$-flat sheaf on $X\times \overline{T}$, \begin{align*} \mathcal{E}_{\overline{T}} \in \mathop{\rm Coh}\nolimits(X\times \overline{T}), \end{align*} which restricts to $\mathcal{E}_{T}$ on $X\times T$. The existence of unique $\overline{f}$ is equivalent to the existence of unique (up to isomorphism) exact sequence of sheaves on $X\times \overline{T}$, \begin{align}\label{FEN3} 0 \to \mathcal{F}_{\overline{T}} \to \mathcal{E}_{\overline{T}} \to \mathcal{N}_{\overline{T}} \to 0, \end{align} which restricts to the exact sequence (\ref{FEN2}) on $X\times T$. Let us consider the distinguished triangle on $X\times \overline{T}$, \begin{align*} j_{\ast}(\mathcal{E}_{T}\otimes J) \to \mathcal{E}_{\overline{T}} \to j_{\ast}\mathcal{E}_{T} \stackrel{e_{\mathcal{E}}}{\to} j_{\ast}(\mathcal{E}_{T} \otimes J)[1]. \end{align*} Since $\mathcal{E}_{\overline{T}}$ is a deformation of $\mathcal{E}_{T}$ to $X\times \overline{T}$, we have \begin{align}\label{deform:E} \pi_{\mathcal{E}_T}=\Psi_{e_{\mathcal{E}}}, \end{align} in the notation of the previous subsection, by Theorem~\ref{thm:deform}. Also we have the distinguished triangles on $X\times \overline{T}$, \begin{align}\label{tri:jast} \xymatrix{ j_{\ast} \mathcal{F}_{T} \ar[r]^{} & j_{\ast} \mathcal{E}_{T} \ar[r]^{} \ar[d]^{e_{\mathcal{E}}} & j_{\ast} \mathcal{N}_{T} \\ j_{\ast}(\mathcal{F}_{T} \otimes J)[1] \ar[r] & j_{\ast}(\mathcal{E}_{T} \otimes J)[1] \ar[r] & j_{\ast}(\mathcal{N}_{T} \otimes J)[1]. } \end{align} Since $\mathop{\rm Hom}\nolimits(F, N_{H/X}[i])=0$ for $i=0, 1$, we have (after shrinking $T$ if necessary) \begin{align}\label{vanish:FNJ} \mathop{\rm Hom}\nolimits_{X\times T}(\mathcal{F}_{T}, \mathcal{N}_{T} \otimes J[i])=0, \end{align} for $i=0, 1$. 
By the distinguished triangle, \begin{align*} \mathcal{F}_{T} \otimes J[1] \to \mathbf{L} j^{\ast} j_{\ast} \mathcal{F}_{T} \to \mathcal{F}_{T}, \end{align*} and the vanishing (\ref{vanish:FNJ}), we see that \begin{align*} \mathop{\rm Hom}\nolimits_{X\times \overline{T}} (j_{\ast}\mathcal{F}_{T}, j_{\ast}(\mathcal{N}_T \otimes J)[i]) &\cong \mathop{\rm Hom}\nolimits_{X\times T}(\mathbf{L} j^{\ast} j_{\ast} \mathcal{F}_{T}, \mathcal{N}_{T} \otimes J[i]) \\ &=0, \end{align*} for $i=0, 1$. Therefore there are unique morphisms, \begin{align*} e_{\mathcal{F}} &\colon j_{\ast}\mathcal{F}_{T} \to j_{\ast} (\mathcal{F}_{T} \otimes J)[1], \\ e_{\mathcal{N}} &\colon j_{\ast} \mathcal{N}_{T} \to j_{\ast}(\mathcal{N}_{T} \otimes J)[1], \end{align*} which make the diagram (\ref{tri:jast}) commutative. We need to show that $e_{\mathcal{F}}$ and $e_{\mathcal{N}}$ determine deformations of $\mathcal{F}_{T}$ and $\mathcal{N}_{T}$ to $X\times \overline{T}$. In order to see these, we consider the commutative diagram, \begin{align*} \xymatrix{ Q_{\mathcal{F}_{T}} \ar[r] \ar[d]_{\pi_{\mathcal{F}}-\Psi_{e_{\mathcal{F}}}} & Q_{\mathcal{E}_{T}} \ar[r] \ar[d]_{\pi_{\mathcal{E}}-\Psi_{e_{\mathcal{E}}}} & Q_{\mathcal{N}_{T}} \ar[d]_{\pi_{\mathcal{N}}-\Psi_{e_{\mathcal{N}}}} \\ \mathcal{F}_{T}\otimes J[1] \ar[r] & \mathcal{E}_{T} \otimes J[1] \ar[r] & \mathcal{N}_{T} \otimes J[1]. } \end{align*} The middle arrow is zero by (\ref{deform:E}). Also we have $\mathop{\rm Hom}\nolimits(Q_{\mathcal{F}_{T}}, \mathcal{N}_{T} \otimes J)=0$ since $\mathcal{H}^i(Q_{\mathcal{F}_T})=0$ for $i\ge 0$. Therefore by the above commutative diagram, we have \begin{align*} \pi_{\mathcal{F}}=\Psi_{e_{\mathcal{F}}}, \end{align*} which implies that $e_{\mathcal{F}}$ determines a deformation of $\mathcal{F}_{T}$ to $\mathcal{F}_{\overline{T}}$ by Theorem~\ref{thm:deform}. A similar argument shows that $e_{\mathcal{N}}$ determines a deformation of $\mathcal{N}_{T}$ to $\mathcal{N}_{\overline{T}}$. 
By taking the cones of $e_{\ast}$ for $\ast=\mathcal{F}, \mathcal{E}, \mathcal{N}$ in the diagram (\ref{tri:jast}), we obtain the exact sequence of sheaves, \begin{align*} 0 \to \mathcal{F}_{\overline{T}} \to \mathcal{E}_{\overline{T}} \to \mathcal{N}_{\overline{T}} \to 0, \end{align*} where $\mathcal{F}_{\overline{T}}$ and $\mathcal{N}_{\overline{T}}$ are the deformations of $\mathcal{F}_{T}$, $\mathcal{N}_{T}$ determined by $e_{\mathcal{F}}$, $e_{\mathcal{N}}$ respectively. It is straightforward to check that the above extension of (\ref{FEN2}) to $X\times \overline{T}$ is unique up to isomorphism, and we leave the details to the reader. \end{proof} \subsection{Some identities of Behrend functions} Let $C \subset U \subset X$ and $H\subset X$ be as in the previous sections. Let \begin{align*} \mathcal{Q}_{U} \subset \mathcal{Q}, \quad \mathcal{M}_{U} \subset \mathcal{M}, \end{align*} be sufficiently small analytic neighborhoods of the images of $\iota$, $\tau \circ \iota$ respectively. Here $\tau$, $\iota$ are defined in (\ref{tau}), (\ref{iota}). By Lemma~\ref{lem:etale}, the morphism $\tau$ restricts to a local immersion, \begin{align}\label{tau2} \tau \colon \mathcal{Q}_{U} \to \mathcal{M}_U. \end{align} Arguments similar to those in Subsection~\ref{subsec:Jact} and Subsection~\ref{subsec:compare:para} show that $\mathcal{M}_U$ and $\mathcal{Q}_U$ admit $\mathbb{C}^{\ast}$-actions, where $\mathbb{C}^{\ast}$ is the subtorus (\ref{ctorus}), so that the morphisms (\ref{iota}) and (\ref{tau2}) are $\mathbb{C}^{\ast}$-equivariant. Let \begin{align*} \nu_{\mathcal{Q}}, \ \nu_{\mathcal{Q}^{\mathbb{C}^{\ast}}}, \ \nu_{\mathcal{M}}, \ \nu_{\mathcal{M}^{\mathbb{C}^{\ast}}}, \end{align*} be the Behrend functions on $\mathcal{Q}$, $\mathcal{Q}_U^{\mathbb{C}^{\ast}}$, $\mathcal{M}$, $\mathcal{M}_U^{\mathbb{C}^{\ast}}$ respectively.
We have the following lemma: \begin{lem} For $(F, s) \in M_n^{\rm{par}}(U, \beta)^{\mathbb{C}^{\ast}}$ and the associated element $p=\iota(F, s) \in \mathcal{Q}_U^{\mathbb{C}^{\ast}}$, we have \begin{align}\label{Beh3.2} \nu_{\mathcal{Q}}(p)=(-1)^{\dim T_p \mathcal{Q} - \dim T_p \mathcal{Q}_U^{\mathbb{C}^{\ast}}} \nu_{\mathcal{Q}^{\mathbb{C}^{\ast}}}(p). \end{align} \end{lem} \begin{proof} Let us write $p=(E \twoheadrightarrow N_{H/X}) \in \mathcal{Q}_U^{\mathbb{C}^{\ast}}$. Then by Lemma~\ref{lem:etale}, we have \begin{align}\label{nueq:1} \nu_{\mathcal{Q}}(p)=\nu_{\mathcal{M}}([E]), \quad \nu_{\mathcal{Q}^{\mathbb{C}^{\ast}}}(p)= \nu_{\mathcal{M}^{\mathbb{C}^{\ast}}}([E]). \end{align} Next note that, by~\cite{HT2}, the algebraic space $\mathcal{M}$ admits a symmetric perfect obstruction theory determined by the universal sheaf (\ref{Univ:sheaf}). It is easy to check that the symmetric perfect obstruction theory on $\mathcal{M}$, restricted to $\mathcal{M}_{U}$, is $\mathbb{C}^{\ast}$-equivariant. Therefore the $\mathbb{C}^{\ast}$-localizations of the Behrend functions given in~\cite[Proposition~3.3]{BBr}, \cite[Theorem~C]{WeiQin} can be applied. The result is \begin{align}\label{nueq:2} \nu_{\mathcal{M}}(E)= (-1)^{\dim T_{[E]}\mathcal{M} - \dim T_{[E]} \mathcal{M}^{\mathbb{C}^{\ast}}} \cdot \nu_{\mathcal{M}^{\mathbb{C}^{\ast}}}(E). \end{align} Again by Lemma~\ref{lem:etale}, we have \begin{align}\label{eq:tangent} \dim T_{p} \mathcal{Q} =\dim T_{[E]} \mathcal{M}, \quad \dim T_{p} \mathcal{Q}_U^{\mathbb{C}^{\ast}} = \dim T_{[E]}\mathcal{M}^{\mathbb{C}^{\ast}}. \end{align} The equality (\ref{Beh3.2}) follows from (\ref{nueq:1}), (\ref{nueq:2}) and (\ref{eq:tangent}). \end{proof} Next we compare the Behrend functions $\nu_{\mathcal{Q}}$ and $\nu_{M^{\rm{par}}}$ under the morphism (\ref{iota}). (Recall that $\nu_{M^{\rm{par}}}$ is the Behrend function on $M_n^{\rm{par}}(U, \beta)$.)
In what follows, for $E_1, E_2 \in \mathop{\rm Coh}\nolimits(X)$, we write \begin{align*} \mathrm{hom}(E_1, E_2) &\cneq \dim \mathop{\rm Hom}\nolimits(E_1, E_2), \\ \mathrm{ext}^1(E_1, E_2) &\cneq \dim \mathop{\rm Ext}\nolimits_X^1(E_1, E_2). \end{align*} We have the following lemma: \begin{lem} For $(F, s) \in M_n^{\rm{par}}(U, \beta)$ with $p=\iota(F, s) \in \mathcal{Q}$, we have the equality, \begin{align}\label{Beh3} \nu_{\mathcal{Q}}(p)= (-1)^{\mathrm{ext}^1(N_{H/X}, N_{H/X})} \cdot \nu_{M^{\rm{par}}}(F, s). \end{align} \end{lem} \begin{proof} Let us write $p=(E \twoheadrightarrow N_{H/X}) \in \mathcal{Q}$. We first note that a point of $\mathcal{Q}_U$ near $p \in \mathcal{Q}$ is represented by an exact sequence \begin{align}\label{FEN'} 0 \to F' \to E' \to N' \to 0, \end{align} where $F'$, $E'$, $N'$ are small deformations of the sheaves $F$, $E$, $N_{H/X}$ in (\ref{FEN}). Hence near $p\in \mathcal{Q}_U$, we have the 1-morphism, \begin{align}\label{mor:QC} \mathcal{Q}_{U} \to \mathcal{M} \times \mathcal{C} oh(X), \end{align} which sends the sequence (\ref{FEN'}) to $(N', F')$. Here $\mathcal{C} oh(X)$ is the stack of all the objects in $\mathop{\rm Coh}\nolimits(X)$, as in Subsection~\ref{subsec:Genera}. The fiber of the above 1-morphism at $(N', F')$ is an open subset of $\mathop{\rm Ext}\nolimits_{X}^{1}(N', F')$. Since we have \begin{align*} \mathop{\rm Ext}\nolimits_{X}^i(N_{H/X}, F) \cong \left\{ \begin{array}{cc} \mathbb{C}^{\beta \cdot H}, & i=1, \\ 0, & i\neq 1, \end{array} \right. \end{align*} it follows that \begin{align*} \mathop{\rm Ext}\nolimits_{X}^1(N', F') \cong \mathbb{C}^{\beta \cdot H}. \end{align*} Therefore the morphism (\ref{mor:QC}) is a smooth morphism of relative dimension $\beta \cdot H$. Let us consider the Behrend function on the RHS of (\ref{mor:QC}). It is easy to see that any small deformation of $N_{H/X} \cong \mathcal{O}_{H}(H)$ is obtained as a line bundle on a divisor in $X$.
Hence we see that the algebraic space $\mathcal{M}$ is smooth of dimension $\mathrm{ext}^1(N_{H/X}, N_{H/X})$ at $[N_{H/X}] \in \mathcal{M}$. It follows that we have \begin{align*} \nu_{\mathcal{M}}(N_{H/X})=(-1)^{\mathrm{ext}^1(N_{H/X}, N_{H/X})}. \end{align*} If we denote by $\nu_{\mathcal{C}}$ the Behrend function on $\mathcal{C} oh(X)$, the above arguments imply \begin{align}\label{Beh1} \nu_{\mathcal{Q}}(p) &=\nu_{\mathcal{M}}(N_{H/X}) \cdot \nu_{\mathcal{C}}(F) \cdot (-1)^{\beta \cdot H}, \\ \label{Beh2} &= (-1)^{\beta \cdot H + \mathrm{ext}^1(N_{H/X}, N_{H/X})} \cdot \nu_{\mathcal{C}}(F). \end{align} In (\ref{Beh1}), we have used the property of the Behrend function under smooth morphisms and products~\cite[Proposition~1.5]{Beh}. On the other hand, there is a forgetting 1-morphism, \begin{align*} M_n^{\rm{par}}(U, \beta) \to \mathcal{C} oh(X), \end{align*} sending $(F, s)$ to $F$. The fiber of the above morphism at $[F]$ is an open subset of $F \otimes \mathcal{O}_{H} \cong \mathbb{C}^{\beta \cdot H}$, hence it is a smooth morphism of relative dimension $\beta \cdot H$. Therefore we have \begin{align}\label{Behpara} \nu_{M^{\rm{par}}}(F, s)=(-1)^{\beta \cdot H} \cdot \nu_{\mathcal{C}}(F). \end{align} Combined with (\ref{Beh2}) and (\ref{Behpara}), we obtain the desired equality (\ref{Beh3}). \end{proof} \subsection{Proof of Lemma~\ref{lem:identity}}\label{subsec:proof} Finally in this section, we give a proof of Lemma~\ref{lem:identity}. \begin{proof} Let $\sigma \colon \widetilde{U} \to U$ be the $m$-fold cyclic cover considered in the statement of Lemma~\ref{lem:identity}. Then for $m\gg 0$, we have \begin{align*} M_n^{\rm{par}}(U, \beta)^{\mathbb{Z}/m\mathbb{Z}} = M_n^{\rm{par}}(U, \beta)^{\mathbb{C}^{\ast}}. 
\end{align*} Hence by Proposition~\ref{prop:isom:para}, for $(F, s) \in M_n^{\rm{par}}(U, \beta)^{\mathbb{C}^{\ast}}$, there is a unique $(\widetilde{F}, \widetilde{s}) \in M_n^{\rm{par}}(\widetilde{U}, \widetilde{\beta})$ such that $(F, s)=\sigma_{\ast}(\widetilde{F}, \widetilde{s})$. Similarly to (\ref{FEN}), the pair $(\widetilde{F}, \widetilde{s})$ determines the exact sequence of sheaves on $\widetilde{U}$, \begin{align*} 0 \to \widetilde{F} \to \widetilde{E} \to N_{\widetilde{H}/\widetilde{U}} \to 0. \end{align*} Let $N'$ be a coherent sheaf on $X$, which is a small deformation of $N_{H/X}$. Then we can uniquely lift $N'|_{U}$ to a sheaf $\widetilde{N}'$ on $\widetilde{U}$ so that $\widetilde{N}'$ is a small deformation of the sheaf $N_{\widetilde{H}/\widetilde{U}}$ on $\widetilde{U}$. Let $\widetilde{\mathcal{Q}}$ be the analytic local moduli space parameterizing small deformations of the surjection $\widetilde{E} \twoheadrightarrow N_{\widetilde{H}/\widetilde{U}}$, \begin{align}\label{E'N'} \widetilde{E}' \twoheadrightarrow \widetilde{N}', \end{align} where (\ref{E'N'}) is a surjection of coherent sheaves on $\widetilde{U}$, and $\widetilde{N}'$ is a lift of a small deformation of $N_{H/X}$ restricted to $U$ as above. An argument similar to the proof of Lemma~\ref{lem:nat:comp} shows that there is a natural morphism, \begin{align}\label{sigma:ast:Q} \sigma_{\ast} \colon \widetilde{\mathcal{Q}} \to \mathcal{Q}_{U}^{\mathbb{Z}/m\mathbb{Z}} \end{align} satisfying \begin{align*} \sigma_{\ast}(\widetilde{E} \twoheadrightarrow N_{\widetilde{H}/\widetilde{U}}) =(E \twoheadrightarrow N_{H/X}). \end{align*} Also a proof similar to that of Proposition~\ref{prop:isom:para} shows that the morphism (\ref{sigma:ast:Q}) is an isomorphism onto connected components of $\mathcal{Q}_{U}^{\mathbb{Z}/m\mathbb{Z}}$.
Since we have $\mathcal{Q}_{U}^{\mathbb{Z}/m\mathbb{Z}}=\mathcal{Q}_{U}^{\mathbb{C}^{\ast}}$ for $m\gg 0$, it follows that \begin{align}\label{Beh3.5} \nu_{\widetilde{\mathcal{Q}}}(\widetilde{E} \twoheadrightarrow N_{\widetilde{H}/\widetilde{U}}) =\nu_{\mathcal{Q}^{\mathbb{C}^{\ast}}}(E\twoheadrightarrow N_{H/X}). \end{align} Also similarly to (\ref{Beh3}), we have the equality, \begin{align}\label{Beh4} \nu_{\widetilde{\mathcal{Q}}}(\widetilde{E} \twoheadrightarrow N_{\widetilde{H}/\widetilde{U}})= (-1)^{\mathrm{ext}^1(N_{H/X}, N_{H/X})} \cdot \nu_{\widetilde{M}^{\rm{par}}}(\widetilde{F}, \widetilde{s}). \end{align} By (\ref{Beh3}), (\ref{Beh3.2}), (\ref{Beh3.5}) and (\ref{Beh4}), we obtain \begin{align}\label{Beh:para} \nu_{\widetilde{M}^{\rm{par}}}(\widetilde{F}, \widetilde{s}) =(-1)^{\dim T_{\widetilde{p}}\widetilde{\mathcal{Q}} - \dim T_{p} \mathcal{Q}} \cdot \nu_{M^{\rm{par}}}(F, s), \end{align} where $\widetilde{p}=(\widetilde{E} \twoheadrightarrow N_{\widetilde{H}/\widetilde{U}}) \in \widetilde{\mathcal{Q}}$. Let us evaluate $\dim T_{\widetilde{p}}\widetilde{\mathcal{Q}} - \dim T_{p} \mathcal{Q}$. Since the morphism (\ref{mor:QC}) is a smooth morphism of relative dimension $\beta \cdot H$, we have \begin{align}\label{TQ1} \dim T_{p} \mathcal{Q} = \mathrm{ext}^1(F, F)- \mathrm{hom}(F, F) + \mathrm{ext}^1(N_{H/X}, N_{H/X}) + \beta \cdot H. \end{align} Similarly we have \begin{align}\label{TQ2} \dim T_{\widetilde{p}} \widetilde{\mathcal{Q}} =\mathrm{ext}^1(\widetilde{F}, \widetilde{F})- \mathrm{hom}(\widetilde{F}, \widetilde{F}) + \mathrm{ext}^1(N_{H/X}, N_{H/X}) + \widetilde{\beta} \cdot \widetilde{H}.
\end{align} We have \begin{align}\notag &\mathrm{ext}^1(F, F)- \mathrm{hom}(F, F)- \mathrm{ext}^1(\widetilde{F}, \widetilde{F})+ \mathrm{hom}(\widetilde{F}, \widetilde{F}) \\ \notag &=\mathrm{ext}^1(\sigma_{\ast}\widetilde{F}, \sigma_{\ast}\widetilde{F})- \mathrm{hom}(\sigma_{\ast}\widetilde{F}, \sigma_{\ast}\widetilde{F})- \mathrm{ext}^1(\widetilde{F}, \widetilde{F})+ \mathrm{hom}(\widetilde{F}, \widetilde{F}) \\ \label{ext:difference} &=\sum_{0\neq g \in \mathbb{Z}/m\mathbb{Z}} \{ \mathrm{ext}^1(g_{\ast}\widetilde{F}, \widetilde{F})- \mathrm{hom}(g_{\ast}\widetilde{F}, \widetilde{F}) \} \end{align} By the Riemann-Roch theorem and the Serre duality, we have \begin{align*} \mathrm{ext}^1(g_{\ast}\widetilde{F}, \widetilde{F})- \mathrm{hom}(g_{\ast}\widetilde{F}, \widetilde{F}) &= \mathrm{ext}^1(\widetilde{F}, g_{\ast}\widetilde{F})- \mathrm{hom}(\widetilde{F}, g_{\ast}\widetilde{F}) \\ &= \mathrm{ext}^1((-g)_{\ast}\widetilde{F}, \widetilde{F})- \mathrm{hom}((-g)_{\ast}\widetilde{F}, \widetilde{F}). \end{align*} Therefore (\ref{ext:difference}) is an even integer if $m$ is an odd integer. By (\ref{TQ1}), (\ref{TQ2}), we have \begin{align*} \dim T_{\widetilde{p}}\widetilde{\mathcal{Q}} - \dim T_{p} \mathcal{Q} \equiv \widetilde{\beta} \cdot \widetilde{H} - \beta \cdot H, \quad (\mathrm{mod} \ 2). \end{align*} Combined with (\ref{Beh:para}), we obtain (i) of Lemma~\ref{lem:identity}. The result of (ii) follows from (i) and (\ref{Behpara}). \end{proof} Institute for the Physics and Mathematics of the Universe, Todai Institute for Advanced Studies (TODIAS), University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan. \textit{E-mail address}: [email protected] \end{document}
\begin{document} \setcounter{page}{1} \twocolumn[ \vskip1.5cm \begin{center} {\Large \bf Heisenberg uncertainty principle and quantum Zeno effects in the linguistic interpretation of quantum mechanics } \vskip0.5cm {\rm \large Shiro Ishikawa } \\ \vskip0.2cm \rm \it Department of Mathematics, Faculty of Science and Technology, Keio University, \\ 3-14-1, Hiyoshi, Kouhoku-ku, Yokohama, Japan. E-mail: [email protected] \end{center} \par \rm \vskip0.3cm \par \noindent {\bf Abstract} \normalsize \vskip0.5cm \par \noindent Recently we proposed measurement theory (i.e., quantum language, or the linguistic interpretation of quantum mechanics), which is characterized as the linguistic turn of the Copenhagen interpretation of quantum mechanics. This turn from physics to language not only extends quantum theory to classical theory but also yields the quantum mechanical world view (i.e., the (quantum) linguistic world view). Thus, we believe that the linguistic interpretation is the most powerful of all interpretations. Our purpose is to examine the power of measurement theory, that is, to try to formulate the Heisenberg uncertainty principle (particularly, the relation between Ishikawa's formulation and the so-called Ozawa inequality) and quantum Zeno effects in the linguistic interpretation. As our conclusion, we must say that our trials do not completely succeed. However, we want to believe that this does not imply that we must abandon our linguistic interpretation. \vskip2.0cm ] \par \def\cal{\cal} \def\text{\large $\: \boxtimes \,$}{\text{\large $\: \boxtimes \,$}} \par \noindent \vskip0.2cm \par \noindent \par \noindent \section{\large Measurement Theory (= Quantum language) } \subsection{\normalsize Overview } \rm \par \par \noindent In this section, we shall give an overview of measurement theory (or in short, MT).
\par \par \rm \par \noindent \par It is well known ({\it cf.}~\cite{Neum}) that quantum mechanics is formulated in an operator algebra $B(H)$ (i.e., an operator algebra composed of all bounded linear operators on a Hilbert space $H$ with the norm $\|F\|_{B(H)}=\sup_{\|u\|_H = 1} \|Fu\|_H$ ) as follows: \begin{itemize} \item[(A)] $ \quad \underset{\text{\scriptsize (physics)}}{\text{quantum mechanics}} \qquad $ \item[] $ = {} {} \displaystyle{ { \mathop{\mbox{[quantum measurement]}}_{\text{\scriptsize (probabilistic interpretation) }} } } {} {} + {} {} \displaystyle{ \mathop{ \mbox{ [causality] } }_{ { \mbox{ \scriptsize (kinetic equation) } } } } {} $ \end{itemize} \par \noindent Also, the Copenhagen interpretation due to N. Bohr (et al.) is characterized as the guide to the usage of quantum mechanics (A). \par Measurement theory ({\it cf.} refs.~\cite{Ishi2}-\cite{Ishi11}) is, by analogy with (A), constructed as the mathematical theory formulated in a certain $C^*$-algebra ${\cal A}$ (i.e., a norm closed subalgebra in $B(H)$, {\it cf.}~\cite{Saka}) as follows: \par \noindent \par \noindent \vskip0.2cm \par \begin{itemize} \item[(B)] $ \quad \underset{\text{\scriptsize ( language)}}{\text{measurement theory}} \! $ \item[] $ = \displaystyle{ { \mathop{\mbox{[measurement]}}_{\text{\scriptsize (Axiom 1 in Section 1.2) }} } } + \displaystyle{ \mathop{ \mbox{ [causality] } }_{ { \mbox{ \scriptsize (Axiom 2 in Section 1.2) } } } } $ \end{itemize} \par \par \noindent Note that this theory (B) is not physics but a kind of language based on {\lq\lq}the mechanical world view{\rq\rq} since it is a mathematical generalization of quantum mechanics (A). Thus, our linguistic interpretation is characterized as the guide to the usage of Axioms 1 and 2 in (B).
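For a finite-dimensional Hilbert space $H=\mathbb{C}^n$, the operator norm $\|F\|_{B(H)}=\sup_{\|u\|_H = 1} \|Fu\|_H$ appearing above is simply the largest singular value of the matrix $F$. The following NumPy sketch (an illustration of ours, not part of the theory (B)) computes it and compares with a crude direct search over unit vectors:

```python
import numpy as np

def op_norm(F):
    """Operator norm ||F|| = sup_{||u||=1} ||F u|| on H = C^n,
    i.e. the largest singular value of F."""
    return np.linalg.svd(F, compute_uv=False)[0]

def op_norm_by_sampling(F, trials=2000, seed=0):
    """Crude lower approximation of sup_{||u||=1} ||F u|| by random unit vectors."""
    rng = np.random.default_rng(seed)
    n = F.shape[1]
    best = 0.0
    for _ in range(trials):
        u = rng.normal(size=n) + 1j * rng.normal(size=n)
        u /= np.linalg.norm(u)
        best = max(best, np.linalg.norm(F @ u))
    return best

F = np.array([[0.0, 2.0],
              [0.0, 0.0]])   # ||F|| = 2 (largest singular value)
```

The sampled value approaches the exact norm from below as the number of trial vectors grows.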
When ${\cal A}=B_c(H)$, the ${C^*}$-algebra composed of all compact operators on a Hilbert space $H$, the (B) is called {quantum measurement theory} (or, quantum system theory), which can be regarded as the linguistic aspect of quantum mechanics. Also, when ${\cal A}$ is commutative (that is, when ${\cal A}$ is characterized by $C_0(\Omega)$, the $C^*$-algebra composed of all continuous complex-valued functions vanishing at infinity on a locally compact Hausdorff space $\Omega$ ({\it cf.} {{{}}}{\cite{Saka,Yosi}})), the (B) is called {classical measurement theory}. Thus, we have the following classification: \begin{itemize} \item[(C)] $ \quad \underset{\text{\scriptsize }}{\text{measurement theory}} $ \\ $ \; = \left\{\begin{array}{ll} \underset{\text{\scriptsize (when ${\cal A}=C_0(\Omega)$)}}{\text{[C$_s$]:classical system theory}} \\ \underset{\text{\scriptsize (when ${\cal A}=B_c (H)$)} }{\text{[Q$_s$]:quantum system theory}} \end{array}\right. $ \end{itemize} \par \rm \par \noindent \vskip0.5cm \noindent \unitlength=0.5mm \begin{picture}(200,82)(15,0) \allinethickness{0.5mm} \put(90,40){{\oval(150,80)} \put(-68,20){\mbox{[$C_s$]:classical system$\qquad$[$Q_s$]:quantum system}}} \put(94,0){\line(0,1){80}} \put(57,30){{{\oval(64,30)}}} \put(37,36){ \put(-10,-10){classical mechanics} \put(10,0){{\mbox{[$C_m$}]}} } \put(128,30){\oval(64,30)} \put(107,36){ \put(-10,-10){quantum mechanics} \put(10,0){{\mbox{[$Q_m$}]}} } \put(30,-15){\bf {\bf Figure 1}. \rm The classification of MT } \end{picture} \vskip1.0cm \par \noindent Thus, this theory covers several conventional system theories (i.e., statistics, dynamical system theory, quantum system theory). \par \noindent \par \noindent \par \noindent \subsection{\normalsize Measurement theory (= Quantum language) } \par \noindent \par Measurement theory (B) has two formulations (i.e., the $C^*$-algebraic formulation and the $W^*$-algebraic formulation, {\it cf.} {{{}}}{\cite{Ishi5, Ishi6}} ). 
In this paper, we devote ourselves to the $W^*$-algebraic formulation of the measurement theory (B). Let ${\cal A} ( \subseteq B(H))$ be a ${C^*}$-algebra, and let ${\cal A}^*$ be the dual Banach space of ${\cal A}$. $\;\;$ That is, $ {\cal A }^* $ $ {=} $ $ \{ \rho \; |$ $ \; \rho$ is a continuous linear functional on ${\cal A}$ $\}$, and the norm $\| \rho \|_{ {\cal A }^* } $ is defined by $ \sup \{ | \rho ({}F{}) | \:{}\; | \;\; F \in {\cal A} \text{ such that }\| F \|_{{\cal A}} (=\| F \|_{B(H)} )\le 1 \}$. Define the \it mixed state $\rho \;(\in{\cal A}^*)$ \rm such that $\| \rho \|_{{\cal A}^* } =1$ and $ \rho ({}F) \ge 0 $ for all $F\in {\cal A}$ such that $ F \ge 0$. And define the mixed state space ${\frak S}^m ({}{\cal A}^*{})$ such that \begin{align*} {\frak S}^m ({}{\cal A}^*{}) {=} \{ \rho \in {\cal A}^* \; | \; \rho \text{ is a mixed state} \}. \end{align*} \rm A mixed state $\rho (\in {\frak S}^m ({\cal A}^*) $) is called a \it pure state \rm if it satisfies that {\lq\lq $\rho = \theta \rho_1 + ({}1 - \theta{}) \rho_2$ for some $ \rho_1 , \rho_2 \in {\frak S}^m ({\cal A}^*)$ and $0 < \theta < 1 $\rq\rq} implies {\lq\lq $\rho = \rho_1 = \rho_2$\rq\rq}\!. Put \begin{align*} {\frak S}^p ({}{\cal A}^*{}) {=} \{ \rho \in {\frak S}^m ({\cal A}^*) \; | \; \rho \text{ is a pure state} \}, \end{align*} which is called a \it state space. \rm \rm It is well known ({\it cf.} {{{}}}{\cite{Saka}}) that $ {\frak S}^p ({}{B_c(H)}^*{})=$ $\{ | u \rangle \! \langle u | $ (i.e., the Dirac notation) $ \:\;|\;\: $ $ \|u \|_H=1 \}$, and $ {\frak S}^p ({}{C_0(\Omega)}^*{})$ $=$ $\{ \delta_{\omega_0} \;|\; \delta_{\omega_0}$ is a point measure at ${\omega_0} \in \Omega \}$, where $ \int_\Omega f(\omega) \delta_{\omega_0} (d \omega )$ $=$ $f({\omega_0})$ $ (\forall f $ $ \in C_0(\Omega))$. 
The latter implies that $ {\frak S}^p ({}{C_0(\Omega)}^*{})$ can also be identified with $\Omega$ (called a {\it spectrum space} or simply {\it spectrum}) such as \begin{align} \underset{\text{\scriptsize (state space)}}{{\frak S}^p ({}{C_0(\Omega)}^*{})} \ni \delta_{\omega} \leftrightarrow {\omega} \in \underset{\text{\scriptsize (spectrum)}}{\Omega} \label{eq1} \end{align} \par \vskip0.2cm \par \rm \par \par Consider the pair $[{\cal A},{\cal N}]_{B(H)}$, called a {\it basic structure}. Here, ${\cal A} ( \subseteq B(H))$ is a $C^*$-algebra, and ${\cal N}$ (${\cal A} \subseteq {\cal N} \subseteq B(H)$) is a particular $C^*$-algebra (called a $W^*$-algebra) such that ${\cal N}$ is the weak closure of ${\cal A}$ in $B(H)$. Let ${\cal N}_*$ be the pre-dual Banach space. For example, we see ({\it cf.} \cite{Saka}) that, when ${\cal A}=B_c(H)$, \begin{itemize} \item[(i)] ${\cal A}^*=Tr(H)$ (=trace class), ${\cal N}=B(H)$, ${\cal N}_*=Tr(H)$. \end{itemize} Also, when ${\cal A}=C_0(\Omega)$, \begin{itemize} \item[(ii)] ${\cal A}^*=${\lq\lq}the space of all signed measures on $\Omega${\rq\rq}, ${\cal N}=L^\infty ( \Omega, \nu) (\subseteq B(L^2 ( \Omega, \nu)))$, ${\cal N}_*=L^1 ( \Omega, \nu)$, where $\nu$ is some measure on $\Omega$ ({\it cf.} \cite{Saka}). \end{itemize} In particular, in the above (ii) we must clarify the meaning of the {\lq\lq}value{\rq\rq} of $F(\omega_0)$ for $F \in L^\infty(\Omega, \nu )$ and $\omega_0 \in \Omega$. 
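Numerically, the idea behind the resolution of this puzzle (the essential continuity defined next) can be sketched as follows. The function $F$ and the point $\omega_0$ below are toy choices of ours: for $F \in L^\infty({\mathbb R})$ that happens to be continuous at $\omega_0$, the values $\rho_\varepsilon(F)$ of normalized densities $\rho_\varepsilon \in L^1$ converging to $\delta_{\omega_0}$ in the weak$^*$ sense recover a unique number, which deserves to be called $F(\omega_0)$.

```python
import numpy as np

# Toy sketch (our own choice of F and omega_0): "F(omega_0)" alone is
# meaningless for an equivalence class of functions, but the averages
# rho_eps(F) = (1/(2 eps)) * integral over [omega_0-eps, omega_0+eps]
# converge to a unique value as eps -> 0 when F is continuous there.

def F(omega):
    return np.sin(omega) + omega**2  # continuous at omega_0 = 0.5

def rho_eps_of_F(omega0, eps, n=20001):
    # rho_eps is the normalized indicator density on [omega0-eps, omega0+eps];
    # it converges to the point measure delta_{omega0} in the weak* sense.
    grid = np.linspace(omega0 - eps, omega0 + eps, n)
    return np.trapz(F(grid), grid) / (2 * eps)

omega0 = 0.5
for eps in (0.5, 0.05, 0.005):
    print(eps, rho_eps_of_F(omega0, eps))
# the printed values approach F(0.5) = sin(0.5) + 0.25
```

The limit does not depend on which approximating densities are used, which is exactly the uniqueness demanded in the definition below.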
An element $F (\in {\cal N} )$ is said to be {\it essentially continuous at } $\rho_0 (\in {\frak S}^p ({}{\cal A}^*{}))$, if there uniquely exists a complex number $\alpha$ such that \begin{itemize} \item[(D)] if $\rho$ $( \in {\cal N}_*$, $\| \rho \|_{{\cal N}_*}$ $=1$) converges to $\rho_0 (\in {\frak S}^p ({}{\cal A}^*{}))$ in the sense of weak$^*$ topology of ${\cal A}^*$, that is, \begin{align} \rho(G) \xrightarrow[\quad]{} \rho_0 (G) \;\; (\forall G \in {\cal A} (\subseteq {\cal N} ) ), \label{eq2} \end{align} then $\rho(F)$ converges to $\alpha $. \end{itemize} And the value of $\rho_0(F)$ is defined by the $\alpha$. According to the well-known idea ({\rm cf.} \cite{Davi}), an {\it observable} ${\mathsf O}{\; :=}(X, {\cal F},$ $F)$ in ${{\cal N}}$ is defined as follows: \par \par \begin{itemize} \item[(i)] [$\sigma$-field] $X$ is a set, ${\cal F} (\subseteq 2^X$, the power set of $X$) is a $\sigma$-field of $X$, that is, {\lq\lq}$\Xi_1, \Xi_2,... \in {\cal F}\Rightarrow \cup_{n=1}^\infty \Xi_n \in {\cal F}${\rq\rq}, {\lq\lq}$\Xi \in {\cal F}\Rightarrow X \setminus \Xi \in {\cal F}${\rq\rq}. \item[(ii)] [Countable additivity] $F$ is a mapping from ${\cal F}$ to ${{\cal N}}$ satisfying: (a): for every $\Xi \in {\cal F}$, $F(\Xi)$ is a non-negative element in ${{\cal N}}$ such that $0 \le F(\Xi) $ $\le I$, (b): $F(\emptyset) = 0$ and $F(X) = I$, where $0$ and $I$ are the $0$-element and the identity in ${\cal N}$ respectively. (c): for any countable decomposition $\{\Xi_1,\Xi_2,\dots ,\Xi_n, ... \}$ of $\Xi$ $\big($i.e., $\Xi,\Xi_n \in {\cal F}\;(n=1,2,3,...)$, $\cup_{n=1}^\infty \Xi_n = \Xi$, $\Xi_i \cap \Xi_j = \emptyset$ $(i \ne j) \big)$, it holds that $ F(\Xi) $ $ = $ $ \sum_{n=1}^\infty F(\Xi_n) $ in the sense of weak$^*$ topology in ${\cal N}$. \end{itemize} \par \vskip0.2cm \par \rm With any {\it system} $S$, a basic structure $[{\cal A},{\cal N}]_{B(H)}$ can be associated in which the measurement theory (B) of that system can be formulated. 
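The conditions (a)--(c) above can be verified mechanically in a finite-dimensional toy case. The 3-outcome observable below (in $B({\mathbb C}^2)$, built from three symmetric unit vectors) is a hypothetical choice of ours, not an example from the cited references:

```python
import numpy as np

# Toy check of conditions (a)-(b) for an observable (X, F, F) in B(C^2):
# a hypothetical 3-outcome POVM F({x}) = (2/3)|e_x><e_x| for three unit
# vectors spaced by 60 degrees (the "trine").
thetas = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
effects = {}
for k, t in enumerate(thetas):
    e = np.array([np.cos(t / 2), np.sin(t / 2)])
    effects[k] = (2.0 / 3.0) * np.outer(e, e)  # a non-negative operator

# (a) 0 <= F(Xi) <= I: the eigenvalues of each effect lie in [0, 1]
for E in effects.values():
    w = np.linalg.eigvalsh(E)
    assert w.min() > -1e-12 and w.max() < 1 + 1e-12

# (b) F(X) = I: the effects sum to the identity
assert np.allclose(sum(effects.values()), np.eye(2))

# (c) is automatic for a finite X: F(Xi) is defined as the sum of the
# F({x}) over x in Xi, so countable additivity holds trivially.
print("observable conditions verified")
```

For a finite outcome set the weak$^*$ convergence in (c) reduces to a finite sum, which is why only (a) and (b) need an explicit check here.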
A {\it state} of the system $S$ is represented by an element $\rho (\in {\frak S}^p ({}{\cal A}^*{}))$ and an {\it observable} is represented by an observable ${\mathsf{O}}{\; :=} (X, {\cal F}, F)$ in ${{\cal N}}$. Also, the {\it measurement of the observable ${\mathsf{O}}$ for the system $S$ with the state $\rho$} is denoted by ${\mathsf{M}}_{\cal N} ({\mathsf{O}}, S_{[\rho]})$ $\big($ or more precisely, ${\mathsf{M}}_{\cal N} ({\mathsf{O}}{\; :=} (X, {\cal F}, F),$ $ S_{[\rho]})$ $\big)$. An observer can obtain a measured value $x $ ($\in X$) by the measurement ${\mathsf{M}}_{\cal N} ({\mathsf{O}}, S_{[\rho]})$. \par \noindent \par The Axiom 1 presented below is a kind of mathematical generalization of Born's probabilistic interpretation of quantum mechanics (A). And thus, it is a statement without reality. \par Now we can present Axiom 1 in the $W^*$-algebraic formulation as follows. \par \noindent \bf Axiom 1 \rm [ Measurement ]. \it The probability that a measured value $x$ $( \in X)$ obtained by the measurement ${\mathsf{M}}_{\cal N} ({\mathsf{O}}{\; :=} (X, {\cal F}, F),$ $ S_{[\rho_0]})$ belongs to a set $\Xi (\in {\cal F})$ is given by $ \rho_0( F(\Xi) ) $ if $F(\Xi)$ is essentially continuous at $\rho_0 ( \in {\frak S}^p ({}{\cal A}^*{}) )$. \rm \par \vskip0.3cm \par Next, we explain Axiom 2. Let $[{\cal A}_1,{\cal N}_1]_{B(H_1)}$ and $[{\cal A}_2,{\cal N}_2]_{B(H_2)}$ be basic structures. A continuous linear operator $\Phi_{1,2}$ $:{{{{\cal N}_2}}}$ (with weak$^*$ topology) $\to {{{{\cal N}_1}}}$(with weak$^*$ topology) is called a \it Markov operator, \rm if it satisfies that (i): $\Phi_{1,2} (F_2) \ge 0$ for any non-negative element $F_2$ in ${{{{\cal N}_2}}}$, (ii): $ \Phi_{1,2}({}I_2{}) = I_1 $, where $I_k$ is the identity in ${\cal N}_k$, $(k=1,2)$. 
In addition to the above (i) and (ii), in this paper we assume that $\Phi_{1,2}({\cal A}_2) \subseteq {\cal A}_1$ and $\sup \{ \|\Phi_{1,2}( F_2) \|_{{\cal A}_1} \; | \; F_2 \in {\cal A}_2 \text{ such that } \|F_2\|_{{\cal A}_2} \le 1 \} =1$. It is clear that the dual operator $ \Phi_{1,2}^*{}: $ $ {\cal A}_{1}^* \to {\cal A}_{2}^* $ satisfies that $ \Phi_{1,2}^*{} ( {\frak S}^m ({\cal A}_{1}^*) ) \subseteq {\frak S}^m ({\cal A}_{2}^*) $. Here note that, for any observable ${\mathsf{O}}_2{\; :=}({}X , {\cal F}, {F}_2{})$ in ${{{{\cal N}_2}}}$, the $({}X , {\cal F}, $ $\Phi_{1,2} F_2 )$ is an observable in ${{{\cal N}_1}}$. \par \vskip0.3cm \par \par Let $(T,\le)$ be a tree, i.e., a partially ordered set such that {\lq\lq$t_1 \le t_3$ and $t_2 \le t_3$\rq\rq} implies {\lq\lq$t_1 \le t_2$ or $t_2 \le t_1$\rq\rq}\!. Put $T^2_\le = \{ (t_1,t_2) \in T^2{}\;|\; t_1 \le t_2 \}$. Here, note that $T$ is not necessarily finite. Assume the completeness of the ordered set $T$. That is, for any subset $T'( \subseteq T)$ bounded from below (i.e., there exists $t' (\in T)$ such that $t' \le t$ $( \forall t \in T' )$), there uniquely exists an element $\text{inf} (T')$ $\in T$ satisfying the following conditions: (i): ${\text{inf}} ( T' ) {{\; \leqq \;}}t \; ( \forall t \in T' )$, (ii): if $s {{\; \leqq \;}}t \; \; ( \forall t \in T' )$, then $s {{\; \leqq \;}}{\text{inf}} ( T' )$. \par \noindent \par \noindent \par The family $\{ \Phi_{t_1,t_2}{}: $ ${\cal N}_{t_2} \to {\cal N}_{t_1} \}_{(t_1,t_2) \in T^2_\le}$ is called a {\it Markov relation} ({\it due to the Heisenberg picture}), \rm if it satisfies the following conditions {\rm (i) and (ii)}. \begin{itemize} \item[{\rm (i)}] With each $t \in T$, a basic structure $[{\cal A}_t,{\cal N}_t]_{B(H_t)}$ is associated. \item[{\rm (ii)}] For every $(t_1,t_2) \in T_{\le}^2$, a Markov operator $\Phi_{t_1,t_2}{}: {\cal N}_{t_2} \to {\cal N}_{t_1}$ is defined. 
And it satisfies that $\Phi_{t_1,t_2} \Phi_{t_2,t_3} = \Phi_{t_1,t_3}$ holds for any $(t_1,t_2)$, $(t_2,t_3)$ $ \in$ $ T_\le^2$. \end{itemize} \noindent When $ \Phi_{t_1,t_2}^*{}$ $ ( {\frak S}^p ({\cal A}_{t_1}^*) )$ $\subseteq$ $ {\frak S}^p ({\cal A}_{t_2}^*) $ holds for any $ {(t_1,t_2) \in T^2_\le}$, the Markov relation is said to be deterministic. Note that the classical deterministic Markov relation is represented by $\{ \phi_{t_1,t_2}{}: $ ${\Omega}_{t_1} \to {\Omega}_{t_2} \}_{(t_1,t_2) \in T^2_\le}$, where the continuous map $\phi_{t_1,t_2}{}: $ ${\Omega}_{t_1} \to {\Omega}_{t_2}$ is defined by \begin{align*} \Phi_{t_1,t_2}^* (\delta_{\omega_1} ) = \delta_{ \phi_{t_1,t_2} (\omega_1)} \quad (\forall \omega_1 \in \Omega_{t_1}) \end{align*} \par \par \rm Now Axiom 2 is presented as follows: \rm \par \noindent \bf Axiom 2 \rm [Causality]. \it The causality is represented by a Markov relation $\{ \Phi_{t_1,t_2}{}: $ ${\cal N}_{t_2} \to {\cal N}_{t_1} \}_{(t_1,t_2) \in T^2_\le}$. \rm \par \par \noindent \subsection{\normalsize The Linguistic Interpretation } Next, we have to study how to use the above axioms. That is, we present the following interpretation (E) [=(E$_1$)--(E$_3$)], which is characterized as a kind of linguistic turn of the so-called Copenhagen interpretation ({\rm cf.} \cite{Ishi6, Ishi7, Ishi8}). That is, we propose: \begin{itemize} \item[(E$_1$)] Consider the dualism composed of {\lq\lq}observer{\rq\rq} and {\lq\lq}system (=measuring object){\rq\rq}. And therefore, {\lq\lq}observer{\rq\rq} and {\lq\lq}system{\rq\rq} must be absolutely separated. In this sense, the interaction (or, measurement process such as $\textcircled{a}$ and $\textcircled{b}$ in Figure 2) should not be mentioned explicitly. 
\end{itemize} \par \noindent \vskip0.2cm \par \rm \par \noindent \vskip0.5cm \noindent \unitlength=0.5mm \begin{picture}(200,82)(15,0) \put(-8,0) { \allinethickness{0.2mm} \drawline[-40](80,0)(80,62)(30,62)(30,0) \drawline[-40](130,0)(130,62)(175,62)(175,0) \allinethickness{0.5mm} \path(20,0)(175,0) \put(14,-5){ \put(37,50){$\bullet$} } \put(50,25){\ellipse{17}{25}} \put(50,44){\ellipse{10}{13}} \put(0,44){\put(43,30){\sf \footnotesize{observer}} \put(42,25){\scriptsize{(I(=mind))}} } \put(7,7){\path(46,27)(55,20)(58,20)} \path(48,13)(47,0)(49,0)(50,13) \path(51,13)(52,0)(54,0)(53,13) \put(0,26){ \put(142,48){\sf \footnotesize system} \put(143,43){\scriptsize (matter)} } \path(152,0)(152,20)(165,20)(150,50)(135,20)(148,20)(148,0) \put(10,0){} \allinethickness{0.2mm} \put(0,-5){ \put(130,39){\vector(-1,0){60}} \put(70,43){\vector(1,0){60}} \put(92,56){\sf \scriptsize \fbox{observable}} \put(58,50){\sf \scriptsize } \put(57,53){\sf \scriptsize \fbox{\shortstack[l]{measured \\ value}}} \put(80,44){\scriptsize \textcircled{\scriptsize a}interfere} \put(80,33){\scriptsize \textcircled{\scriptsize b}perceive a reaction} \put(130,56){\sf \scriptsize \fbox{state}} } } \put(30,-15){\bf {\bf Figure 2}. \rm Dualism in MT (cf. \cite{Ishi6, Ishi7}) } \end{picture} \vskip1.0cm \par \noindent \begin{itemize} \item[(E$_2$)] Only one measurement is permitted. And thus, the state after a measurement is meaningless $\;$ since it can not be measured any longer. Thus, the collapse of the wavefunction is prohibited. We are not concerned with anything after measurement. Also, the causality should be assumed only in the side of system, however, a state never moves. Thus, the Heisenberg picture should be adopted, and thus, the Schr\"{o}dinger picture should be prohibited. \item[(E$_3$)] Also, the observer does not have the space-time. Thus, the question: {\lq\lq}When and where is a measured value obtained?{\rq\rq} is out of measurement theory. 
And thus, Schr\"{o}dinger's cat is out of measurement theory, and so on. \end{itemize} \par \noindent Therefore, in spite of Bohr's realistic view, we propose the following linguistic view: \begin{itemize} \item[(F$_1$)] In the beginning was the language called measurement theory (with the linguistic interpretation (E)). And, for example, quantum mechanics can fortunately be described in this language. And moreover, almost all scientists have already mastered this language partially and informally since statistics (at least, its basic part) is characterized as one of the aspects of measurement theory ({\rm cf.} \cite{Ishi4,Ishi9,Ishi10, Ishi11}). \end{itemize} For completeness, we again note that, \begin{itemize} \item[(F$_2$)] The linguistic interpretation (E$_1$)-(E$_3$) applies not only to quantum mechanics ([Q$_m$] in Figure 1) but also to the whole of MT. \end{itemize} This generalization has the merit that the linguistic interpretation is determined {\lq\lq}uniquely{\rq\rq}, though the so-called Copenhagen interpretation has various variations. For example, the projection postulate of quantum mechanics can not be naturally extended to MT. \par \noindent \subsection{\normalsize Sequential Causal Observable and Its Realization } \par For each $k=1,$ $2,\ldots,K$, consider a measurement ${\mathsf{M}}_{{{\cal N}}} ({\mathsf{O}_k}$ ${\; \equiv} (X_k, {\cal F}_k, F_k),$ $ S_{[\rho]})$. However, since the (E$_2$) says that only one measurement is permitted, the measurements $\{ {\mathsf{M}}_{{{\cal N}}} ({\mathsf{O}_k},S_{[\rho]}) \}_{k=1}^K$ should be reconsidered in what follows. 
Under the commutativity condition such that \begin{align} & F_i(\Xi_i) F_j(\Xi_j) = F_j(\Xi_j) F_i(\Xi_i) \label{eq3} \\ & \quad (\forall \Xi_i \in {\cal F}_i, \forall \Xi_j \in {\cal F}_j , i \not= j), \nonumber \end{align} we can define the product observable ${\text{\large $\times$}}_{k=1}^K {\mathsf{O}_k}$ $=({\text{\large $\times$}}_{k=1}^K X_k ,$ $ \boxtimes_{k=1}^K {\cal F}_k,$ $ {\text{\large $\times$}}_{k=1}^K {F}_k)$ in ${\cal N}$ such that \begin{align*} ({\text{\large $\times$}}_{k=1}^K {F}_k)({\text{\large $\times$}}_{k=1}^K {\Xi}_k ) = F_1(\Xi_1) F_2(\Xi_2) \cdots F_K(\Xi_K) \\ \; ( \forall \Xi_k \in {\cal F}_k, \forall k=1,\ldots,K ). \qquad \qquad \nonumber \end{align*} Here, $ \boxtimes_{k=1}^K {\cal F}_k$ is the smallest $\sigma$-field containing the family $\{ {\text{\large $\times$}}_{k=1}^K \Xi_k $ $:$ $\Xi_k \in {\cal F}_k, \; k=1,2,\ldots, K \}$. Then, the above $\{ {\mathsf{M}}_{{{\cal N}}} ({\mathsf{O}_k},S_{[\rho]}) \}_{k=1}^K$ is, under the commutativity condition (\ref{eq3}), represented by the simultaneous measurement ${\mathsf{M}}_{{{{\cal N}}}} ( {\text{\large $\times$}}_{k=1}^K {\mathsf{O}_k}$, $ S_{[\rho]})$. \par Consider a tree $(T{\; \equiv}\{t_0, t_1, \ldots, t_n \},$ $ \le )$ with the root $t_0$. This is also characterized by the map $\pi: T \setminus \{t_0\} \to T$ such that $\pi( t)= \max \{ s \in T \;|\; s < t \}$. Let $\{ \Phi_{t, t'} : {\cal N}_{t'} \to {\cal N}_{t} \}_{ (t,t')\in T_\le^2}$ be a causal relation, which is also represented by $\{ \Phi_{\pi(t), t} : {\cal N}_{t} \to {\cal N}_{\pi(t)} \}_{ t \in T \setminus \{t_0\}}$. Let an observable ${\mathsf O}_t{\; \equiv} (X_t, {\cal F}_{t}, F_t)$ in the ${\cal N}_t$ be given for each $t \in T$. Note that $\Phi_{\pi(t), t} {\mathsf O}_t$ $( {\; \equiv} (X_t, {\cal F}_{t}, \Phi_{\pi(t), t} F_t)$ ) is an observable in the ${\cal N}_{\pi(t)}$. 
The pair $[{\mathbb O}_T] $ $=$ $[ \{{\mathsf O}_t \}_{t \in T}$, $\{ \Phi_{t, t'} : {\cal N}_{t'} \to {\cal N}_{t} \}_{ (t,t')\in T_\le^2}$ $]$ is called a {\it sequential causal observable}. For each $s \in T$, put $T_s =\{ t \in T \;|\; t \ge s\}$. And define the observable ${\widehat{\mathsf O}}_s \equiv ({\text{\large $\times$}}_{t \in T_s}X_t, \boxtimes_{t \in T_s}{\cal F}_t, {\widehat{F}}_s)$ in ${\cal N}_s$ as follows: \par \noindent \begin{align} \widehat{\mathsf O}_s &= \left\{\begin{array}{ll} {\mathsf O}_s \quad & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \text{(if $s \in T \setminus \pi (T) \;${})} \\ {\mathsf O}_s {\text{\large $\times$}} ({}\mathop{\mbox{\Large $\times$}}_{t \in \pi^{-1} ({}\{ s \}{})} \Phi_{ \pi(t), t} \widehat {\mathsf O}_t{}) \quad & \!\!\!\!\!\! \text{(if $ s \in \pi (T) ${})} \end{array}\right. \label{eq4} \end{align} if the commutativity condition holds (i.e., if the product observable ${\mathsf O}_s {\text{\large $\times$}} ({}\mathop{\mbox{\Large $\times$}}_{t \in \pi^{-1} ({}\{ s \}{})} \Phi_{ \pi(t), t} $ $\widehat {\mathsf O}_t{})$ exists) for each $s \in \pi(T)$. Using (\ref{eq4}) iteratively, we can finally obtain the observable $\widehat{\mathsf O}_{t_0}$ in ${\cal N}_{t_0}$. The $\widehat{\mathsf O}_{t_0}$ is called the realization (or, realized causal observable) of $[{\mathbb O}_T]$. \par \par \noindent \subsection{\normalsize Our motivation } Since MT has been shown to have great power ({\rm cf.} \cite{Ishi2}-\cite{Ishi11}), we believe that MT is superior to statistics as the language of science. However, we now feel misgivings about the possibility that the Heisenberg uncertainty principle (particularly, Ozawa's inequality ({\rm cf.} \cite{Oza, Oza2})) and quantum Zeno effects ({\rm cf.} \cite{Misr}) can not be formulated in MT. As our conclusion, we say that our trials only partially succeed. This may imply that these should be understood in other interpretations. 
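Before turning to these topics, the product observable of Section 1.4 admits a small finite-dimensional illustration. All choices below (the two commuting observables in $B({\mathbb C}^2 \otimes {\mathbb C}^2)$ and the state) are toy assumptions of ours:

```python
import numpy as np

# Toy illustration of the product observable: two commuting observables
# F1, F2 in B(C^2 x C^2), whose product effects F1({x})F2({y}) give, via
# Axiom 1, a joint probability distribution <u, F1({x}) F2({y}) u>.
I2 = np.eye(2)
P_up, P_dn = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
F1 = {x: np.kron(P, I2) for x, P in (("+", P_up), ("-", P_dn))}
F2 = {y: np.kron(I2, P) for y, P in (("+", P_up), ("-", P_dn))}

# the commutativity condition (3): F1(Xi) F2(Xj) = F2(Xj) F1(Xi)
for A in F1.values():
    for B in F2.values():
        assert np.allclose(A @ B, B @ A)

u = np.array([1.0, 1.0, 1.0, 1.0]) / 2.0  # a toy unit vector state
probs = {(x, y): float(u @ (F1[x] @ F2[y]) @ u) for x in F1 for y in F2}
assert abs(sum(probs.values()) - 1.0) < 1e-12
print(probs)
```

The four numbers form a genuine joint distribution precisely because the commutativity condition holds; without it, the product effects need not even be non-negative operators.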
\par \noindent \section{\large Heisenberg uncertainty principle } \par \noindent \subsection{\normalsize Preliminary } Let $[B_c(H),B(H)]_{B(H)}$ be the basic structure. Let $A_i$ $(i=1,2)$ be arbitrary self-adjoint operators on $H$. For example, it may satisfy that $[A_1 , A_2](:=A_1 A_2 - A_2 A_1 ) =\hbar \sqrt{-1}I$. Let ${\mathbb R}$ and ${\cal B}$ be the real line and its Borel field respectively. Let ${\mathsf O}_{A_i}=({\mathbb R}, {\cal B}, F_{A_i} )$ be the spectral representation of $A_i$, i.e., $A_i=\int_{\mathbb R} \lambda F_{A_i}( d \lambda )$, which is regarded as the observable in $B(H)$. Let $\rho_u= |u\rangle \langle u |$ be a state, where $u \in H$ and $\|u\|=1$. Thus, we have two measurements: \par \noindent \begin{itemize} \item[(G$_1$)]${\mathsf{M}}_{B(H)} ({\mathsf{O}_{A_1}}{\; :=} ({\mathbb R}, {\cal B}, F_{A_1} ),$ $ S_{[\rho_u]})$ \item[(G$_2$)]${\mathsf{M}}_{B(H)} ({\mathsf{O}_{A_2}}{\; :=} ({\mathbb R}, {\cal B}, F_{A_2} ),$ $ S_{[\rho_u]})$ \end{itemize} Let $K$ be another Hilbert space, and let $s$ be in $K$ such that $\| s \|=1$. Thus, we also have two observables ${\mathsf{O}_{A_1 \otimes I}}{\; :=} ({\mathbb R}, {\cal B}, F_{A_1} \otimes I )$ and ${\mathsf{O}_{A_2\otimes I}}{\; :=} ({\mathbb R}, {\cal B}, F_{A_2}\otimes I )$ in $B(H \otimes K)$, where $I \in B(K)$ is the identity map. Put ${\widehat \rho}_{us}=|u \otimes s \rangle \langle u \otimes s|$. Here, we have two measurements as follows: \par \noindent \begin{itemize} \item[(H$_1$)]${\mathsf{M}}_{B(H\otimes K)} ({\mathsf{O}_{A_1 \otimes I}},S_{[{\widehat \rho}_{us}]})$ \item[(H$_2$)]${\mathsf{M}}_{B(H\otimes K)} ({\mathsf{O}_{A_2 \otimes I}},S_{[{\widehat \rho}_{us}]})$ \end{itemize} which are clearly equivalent to the above two (G$_1$) and (G$_2$) respectively. Now we want to take these two measurements. However, the linguistic interpretation (E$_2$) says that it is impossible, if $A_1$ and $A_2$ do not commute. 
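This obstruction, and the tensor-space escape used next, can be checked numerically. We take $A_1 = \sigma_x$, $A_2 = \sigma_y$ as the standard illustrative choice; the commuting pair on $H \otimes K$ below is a toy construction of ours, not the one built in the cited references:

```python
import numpy as np

# Sketch of the obstruction: [A_1, A_2] != 0 forbids a simultaneous
# measurement in B(H), while on the tensor space H x K commuting
# substitutes \hat{A}_i do exist (an illustrative choice of ours).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

# [A_1, A_2] = 2i sigma_z != 0: no simultaneous measurement in B(H)
assert np.linalg.norm(sx @ sy - sy @ sx) > 1.0

# But sigma_x x sigma_x and sigma_y x sigma_y commute in B(H x K):
A1h = np.kron(sx, sx)
A2h = np.kron(sy, sy)
assert np.allclose(A1h @ A2h, A2h @ A1h)  # condition (5) below holds
print("commuting substitutes found on the tensor space")
```

Since the substitutes commute, their spectral families generate a joint (product) observable, which is the situation exploited in the next paragraph.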
Let ${\widehat A}_i$ $(i=1,2)$ be arbitrary self-adjoint operators on the tensor Hilbert space $H \otimes K$, where it is assumed that \begin{align} [{\widehat A}_1, {\widehat A}_2](:= {\widehat A}_1{\widehat A}_2- {\widehat A}_2{\widehat A}_1)=0 \label{eq5} \end{align} Let ${\mathsf O}_{{\widehat A}_i}=({\mathbb R}, {\cal B}, F_{{\widehat A}_i} )$ be the spectral representation of ${\widehat A}_i$, i.e., ${\widehat A}_i=\int_{\mathbb R} \lambda F_{{\widehat A}_i} ( d \lambda )$, which is regarded as the observable in $B(H \otimes K)$. Thus, we have two measurements as follows: \par \noindent \begin{itemize} \item[(I$_1$)]${\mathsf{M}}_{B(H\otimes K)} ({\mathsf{O}_{{\widehat A}_1}},S_{[{\widehat \rho}_{us}]})$ \item[(I$_2$)]${\mathsf{M}}_{B(H\otimes K)} ({\mathsf{O}_{{\widehat A}_2}},S_{[{\widehat \rho}_{us}]})$ \end{itemize} Note, by the commutativity condition (\ref{eq5}), that \it the two can be realized as the simultaneous measurement \rm ${\mathsf{M}}_{B(H\otimes K)} ({\mathsf{O}_{{\widehat A}_1}}\times{\mathsf{O}_{{\widehat A}_2}},S_{[{\widehat \rho}_{us}]})$, where ${\mathsf{O}_{{\widehat A}_1}}\times{\mathsf{O}_{{\widehat A}_2}}=({\mathbb R}^2, {\cal B}^2, F_{{\widehat A}_1} \times F_{{\widehat A}_2} )$. Again note that no relation between $A_i \otimes I$ and ${\widehat A}_i$ is assumed. However, \it we want to regard this simultaneous measurement as the substitute for the above two (H$_1$) and (H$_2$). 
\rm Putting \begin{align} {\widehat N}_i := & {\widehat A}_i -A_i \otimes I \nonumber \\ (\text{and thus, }& {\widehat A}_i={\widehat N}_i +A_i \otimes I) \label{eq6} \end{align} we define the $\Delta_{\widehat{N}_i}^{u \otimes s}$ and ${\overline \Delta}_{\widehat{N}_i}^{u \otimes s}$ such that \begin{align} & \Delta_{\widehat{N}_i}^{u \otimes s} =\| {\widehat N}_i (u \otimes s) \| \label{eq7} \\ & {\overline \Delta}_{\widehat{N}_i}^{u \otimes s} =\| ( {\widehat N}_i - \langle u \otimes s , {\widehat N}_i (u \otimes s)\rangle ) (u \otimes s) \| \nonumber \end{align} where the following inequality: \begin{align} \Delta_{\widehat{N}_i}^{u \otimes s} \ge {\overline \Delta}_{\widehat{N}_i}^{u \otimes s} \label{eq8} \end{align} always holds. By the commutativity condition (\ref{eq5}) and (\ref{eq6}), we see that \begin{align} &[{\widehat N}_1,{\widehat N}_2] + [{\widehat N}_1, A_2 \otimes I]+[A_1 \otimes I ,{\widehat N}_2] \nonumber \\ & = -[A_1 \otimes I, A_2 \otimes I] \label{eq9} \end{align} Here, we should note that the first term (or, precisely, $\langle u \otimes s,$"the first term"$(u \otimes s) \rangle$) of (\ref{eq9}) can be, by the Robertson uncertainty relation ({\it cf.} \cite{Neum}), estimated as follows: \begin{align} & 2 {\overline \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot {\overline \Delta}_{\widehat{N}_2}^{u \otimes s} \nonumber \\ \ge & | \langle u \otimes s , [{\widehat N}_1,{\widehat N}_2] ( u \otimes s) \rangle | \label{eq10} \end{align} \par \noindent {\bf Remark 1} There may be an opinion that the physical meaning of ${\Delta}_{\widehat{N}_1}^{u \otimes s}$ (or, ${\overline \Delta}_{\widehat{N}_1}^{u \otimes s}$) is not clear. However, we do not worry about this problem. That is because our concern is not only quantum mechanics ([Q$_m$] in Figure 1) but also quantum system theory ([Q$_s$] in Figure 1). 
However, recalling (F$_2$), in most cases, we can expect that \begin{itemize} \item[(J)] A (metaphysical) statement in quantum system theory is regarded as a (physical) statement in quantum mechanics, \end{itemize} because both are formulated in the same mathematical structure, and moreover, are based on the linguistic interpretation. \par \noindent \subsection{\normalsize Heisenberg uncertainty principle with the same average condition } In the previous section, no relation between $A_i \otimes I$ and ${\widehat A}_i$ was assumed. However, in this section we assume the following hypothesis: \par \noindent {\bf Hypothesis 1} (The same average condition). We assume that \begin{align} & \langle u \otimes s, {\widehat N}_i(u \otimes s) \rangle =0 \qquad ( \forall u \in H, i=1,2) \label{eq11} \end{align} or equivalently \begin{align*} \langle u \otimes s, {\widehat A}_i(u \otimes s) \rangle = \langle u , {A}_i u \rangle \quad ( \forall u \in H, i=1,2) \end{align*} holds. Thus, in this case, it holds that \begin{align} \Delta_{\widehat{N}_i}^{u \otimes s}= {\overline \Delta}_{\widehat{N}_i}^{u \otimes s} \label{eq12} \end{align} \par \noindent {\bf Remark 2} The existence of ${\widehat A}_i$ (with the conditions (\ref{eq5}) and (\ref{eq11})) is guaranteed ({\rm cf.} \cite{Ishi1}). 
Also, we can show that (\ref{eq11}) is equivalent to \begin{align} & \langle u \otimes s, {\widehat N}_i(v \otimes s) \rangle =0 \quad ( \forall u, v \in H, i=1,2) \label{eq13} \end{align} This is proved as follows: \begin{align*} 0 &= \langle ( u + v) \otimes s, {\widehat N}_i((u +v) \otimes s) \rangle \\ & = \langle u \otimes s, {\widehat N}_i (v \otimes s) \rangle + \langle v \otimes s, {\widehat N}_i (u \otimes s) \rangle \\ & = 2 \mbox{[Real part]}(\langle u \otimes s, {\widehat N}_i (v \otimes s) \rangle ) \\ 0 & =\langle ( u + \sqrt{-1} v) \otimes s, {\widehat N}_i((u +\sqrt{-1} v) \otimes s) \rangle \\ & = - 2 \mbox{[Imaginary part]}(\langle u \otimes s, {\widehat N}_i (v \otimes s) \rangle ) \end{align*} Thus we get (\ref{eq13}). Using (\ref{eq13}), we can calculate the second term (or, precisely, $\langle u \otimes s,$"the second term"$(u \otimes s) \rangle$) in (\ref{eq9}) as follows: \begin{align} & \langle u \otimes s, [{\widehat N}_1, A_2 \otimes I](u \otimes s) \rangle \nonumber \\ = & \langle u \otimes s, {\widehat N}_1 (A_2 u \otimes s) \rangle - \langle A_2 u \otimes s, {\widehat N}_1( u \otimes s) \rangle \nonumber \\ = & 0 \qquad ( \forall u \in H) \label{eq14} \end{align} Similarly, we calculate the third term in (\ref{eq9}) as follows: \begin{align} & \langle u \otimes s, [A_1 \otimes I, {\widehat N}_2](u \otimes s) \rangle =0 \quad ( \forall u \in H) \label{eq15} \end{align} Also, it is clear that \begin{align} & \langle u \otimes s, [A_1 \otimes I, A_2 \otimes I](u \otimes s) \rangle \nonumber \\ = & \langle u , [A_1 , A_2 ]u \rangle \quad ( \forall u \in H) \label{eq16} \end{align} Summing up ((\ref{eq10}),(\ref{eq12}),(\ref{eq14}),(\ref{eq15}),(\ref{eq16})), we can conclude that \begin{align} & {\Delta}_{\widehat{N}_1}^{u \otimes s} \cdot { \Delta}_{\widehat{N}_2}^{u \otimes s} (= {\overline \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot {\overline \Delta}_{\widehat{N}_2}^{u \otimes s} ) \nonumber \\ \ge & \frac{1}{2} | \langle 
u , [A_1,A_2] u \rangle | \quad ( \forall u \in H \mbox{ such that } ||u||=1 ) \label{eq17} \end{align} which is Ishikawa's formulation of Heisenberg's uncertainty principle ({\rm cf.} \cite{Ishi1}). \par \noindent {\bf Remark 3} Assume that $[A_1, A_2]= \hbar {\sqrt{-1}}I$. If Hypothesis 1 is not assumed, we can say the following. That is, for any positive $\epsilon$, there exist $s \in K (\|s \|=1)$, ${\widehat{A}}_i (i=1,2)$, $u \in H (\| u \|=1)$ such that \begin{align*} \Delta_{\widehat{N}_1}^{u \otimes s} < \epsilon, \quad \Delta_{\widehat{N}_2}^{u \otimes s} < \epsilon \end{align*} ({\rm cf.} Remark 3 in \cite{Ishi1}). Thus, if we hope that the Heisenberg uncertainty principle (\ref{eq17}) holds, the same average condition is indispensable. \par \noindent \subsection{\normalsize Heisenberg uncertainty principle without the same average condition } We believe that Hypothesis 1 is very natural. However, in this section, we do not assume Hypothesis 1 (the same average condition). Put $\sigma (A_i;u)=\| (A_i - \langle u, A_iu \rangle )u \|$. 
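As a sanity check, the Robertson relation $2\,\sigma(A;u)\,\sigma(B;u) \ge |\langle u, [A,B] u \rangle|$, which underlies (\ref{eq10}) and the estimates of this section, can be tested numerically. The random matrices and seed below are toy choices of ours:

```python
import numpy as np

# Numerical sanity check of the Robertson uncertainty relation
#   2 * sigma(A;u) * sigma(B;u) >= |<u, [A,B] u>|
# on random self-adjoint matrices and random unit vectors.
rng = np.random.default_rng(0)

def sigma(A, u):
    # sigma(A;u) = || (A - <u, A u>) u ||, the standard deviation in u
    m = np.vdot(u, A @ u)
    return np.linalg.norm(A @ u - m * u)

for _ in range(100):
    M1 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    M2 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    A, B = M1 + M1.conj().T, M2 + M2.conj().T  # random self-adjoint
    u = rng.normal(size=4) + 1j * rng.normal(size=4)
    u /= np.linalg.norm(u)
    lhs = 2 * sigma(A, u) * sigma(B, u)
    rhs = abs(np.vdot(u, (A @ B - B @ A) @ u))
    assert lhs >= rhs - 1e-9
print("Robertson inequality holds on all random samples")
```

Equality is attained, for instance, for $A=\sigma_x$, $B=\sigma_y$ and $u=(1,0)$, where both sides equal $2$.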
Using the Robertson uncertainty relation, we can estimate the second term (or, precisely, $\langle u \otimes s,$"the second term"$(u \otimes s) \rangle$) in (\ref{eq9}) as follows: \begin{align} & 2 {\overline \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot \sigma(A_2;u) \ge |\langle u \otimes s, [{\widehat N}_1, A_2 \otimes I](u \otimes s) \rangle| \nonumber \\ & \qquad ( \forall u \in H \mbox{ such that } ||u||=1) \label{eq18} \end{align} Similarly, we estimate the third term in (\ref{eq9}) as follows: \begin{align} & 2 {\overline \Delta}_{\widehat{N}_2}^{u \otimes s} \cdot \sigma(A_1;u) \ge |\langle u \otimes s, [A_1 \otimes I, {\widehat N}_2](u \otimes s) \rangle | \nonumber \\ & \qquad ( \forall u \in H \mbox{ such that } ||u||=1) \label{eq19} \end{align} Summing up ((\ref{eq8}),(\ref{eq10}), (\ref{eq16}),(\ref{eq18}),(\ref{eq19})), we can conclude that \begin{align} & { \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot { \Delta}_{\widehat{N}_2}^{u \otimes s} +{ \Delta}_{\widehat{N}_2}^{u \otimes s} \cdot \sigma(A_1;u) +{ \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot \sigma(A_2;u) \nonumber \\ \ge & {\overline \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot {\overline \Delta}_{\widehat{N}_2}^{u \otimes s} +{\overline \Delta}_{\widehat{N}_2}^{u \otimes s} \cdot \sigma(A_1;u) +{\overline \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot \sigma(A_2;u) \nonumber \\ \ge & \frac{1}{2} | \langle u , [A_1,A_2] u \rangle | \quad ( \forall u \in H \mbox{ such that } ||u||=1) \label{eq20} \end{align} Since Hypothesis 1 is not assumed in this section, it is a matter of course that this (\ref{eq20}) is rougher than the (\ref{eq17}). \par \noindent {\bf Remark 4} (Ozawa's inequality). In \cite{Oza, Oza2}, M. 
Ozawa tried to formulate Heisenberg's $\gamma$-ray microscope thought experiment in his interpretation, and proposed the following inequality (so-called Ozawa's inequality): \begin{align} & \epsilon (A_1) \eta(A_2) + \eta(A_2) \sigma (A_1) +\epsilon (A_1) \sigma(A_2) \nonumber \\ \ge & \frac{1}{2} | \langle u , [A_1,A_2] u \rangle | \label{eq21} \end{align} which is, by our notation, rewritten as follows. \begin{align} & { \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot { \Delta}_{\widehat{N}_2}^{u \otimes s} +{ \Delta}_{\widehat{N}_2}^{u \otimes s} \cdot \sigma(A_1;u) +{ \Delta}_{\widehat{N}_1}^{u \otimes s} \cdot \sigma(A_2;u) \nonumber \\ \ge & \frac{1}{2} | \langle u , [A_1,A_2] u \rangle | \quad ( \forall u \in H \mbox{ such that } ||u||=1) \label{eq22} \end{align} Note that this (\ref{eq22}) is mathematically the same as the above (\ref{eq20}), but the (\ref{eq21}) is not. Here, it should be noted that Ozawa's assertion is just the (\ref{eq21}), that is, \begin{itemize} \item[(K)]the physical meanings of {\lq\lq}error{\rq\rq} ($\epsilon (A_1)(= { \Delta}_{\widehat{N}_1}^{u \otimes s})$) and {\lq\lq}disturbance{\rq\rq} ($\eta(A_2)(= { \Delta}_{\widehat{N}_2}^{u \otimes s})$) are distinguished in Ozawa's inequality (\ref{eq21}). \end{itemize} Therefore, there is a great gap between Ozawa's inequality (\ref{eq21}) and the (\ref{eq20}). In fact, the (\ref{eq20}) is not the mathematical representation of Heisenberg's $\gamma$-ray microscope thought experiment. Now we think that it may be impossible to formulate this (K) in the linguistic interpretation, since the (E$_2$) says that anything after measurement can not be described. In fact, we are not successful yet. Therefore, we consider that another interpretation is indispensable for the understanding of Ozawa's inequality (\ref{eq21}), and thus, the (\ref{eq20}) and the (\ref{eq21}) are different assertions. \par \noindent \par \noindent \section{\large Quantum Zeno effects } \par \noindent Let $[B_c(H),B(H)]_{B(H)}$ be the basic structure. 
Let ${\mathbb P}=[P_n ]_{n=1}^\infty$ be the spectral resolution in $B(H)$, that is, for each $n$, $P_n \in B(H)$ is a projection such that \begin{align*} \sum_{n=1}^\infty P_n =I \end{align*} Define the $(\Psi_{\mathbb P})_*: Tr(H) \to Tr(H)$ such that \begin{align*} (\Psi_{\mathbb P})_* (|u \rangle \langle u |) = \sum_{n=1}^\infty |P_n u \rangle \langle P_n u | \quad (\forall u \in H) \end{align*} Also, we define the Schr\"{o}dinger time evolution $(\Psi_S^{\Delta t})_* : Tr(H) \to Tr(H)$ such that \begin{align*} (\Psi_S^{\Delta t})_* (|u \rangle \langle u |) = |e^{-\frac{i {\cal H} \Delta t}{\hbar}}u \rangle \langle e^{-\frac{i {\cal H} \Delta t}{\hbar}} u | \quad (\forall u \in H) \end{align*} Consider $t=0,1$. Putting $\Delta t = \frac{1}{N}$, $H=H_0=H_1$, we can define the $(\Phi_{0,1}^{(N)})_*: Tr(H_0) \to Tr(H_1)$ such that \begin{align*} (\Phi_{0,1}^{(N)})_* =((\Psi_S^{1/N})_* (\Psi_{\mathbb P})_*)^N \end{align*} which induces the Markov operator $\Phi_{0,1}^{(N)} : B(H_1) \to B(H_0)$ as the dual operator $\Phi_{0,1}^{(N)} =((\Phi_{0,1}^{(N)})_*)^*$. Let $\rho=|\psi \rangle \langle \psi |$ be a state at time $0$. Let $ {\mathsf{O}_1}{\; :=} (X, {\cal F}, F)$ be an observable in $B(H_1)$. Thus, we have a measurement: $$ {\mathsf{M}}_{B(H_0)} (\Phi_{0,1}^{(N)} {\mathsf{O}_1}, S_{[\rho]}) $$ $\big($ or more precisely, ${\mathsf{M}}_{B(H_0)} (\Phi_{0,1}^{(N)}{\mathsf{O}}{\; :=} (X, {\cal F}, \Phi_{0,1}^{(N)}F),$ $ S_{[|\psi \rangle \langle \psi |]})$ $\big)$. Here, Axiom 1 says that \begin{itemize} \item[(L)] the probability that the measured value obtained by the measurement belongs to $\Xi (\in {\cal F})$ is given by \begin{align} tr(| \psi \rangle \langle \psi | \cdot \Phi_{0,1}^{(N)}F(\Xi)) \label{eq23} \end{align} \end{itemize} Now we shall explain the {\lq\lq}quantum Zeno effect{\rq\rq} in the following example. \par \noindent {\bf Example 1} Let $\psi \in H$ such that $\|\psi \|=1$. 
Define the spectral resolution \begin{align} {\mathbb P}=[ P_1 (=|\psi \rangle \langle \psi |), P_2(=I-P_1) ] \label{eq24} \end{align} Also define the observable $ {\mathsf{O}_1}{\; :=} (X, {\cal F}, F)$ in $B(H_1)$ such that $$ X=\{ x_1 , x_2 \}, \qquad {\cal F}=2^X $$ and $$ F(\{x_1 \})=|\psi \rangle \langle \psi |(=P_1), \quad F(\{x_2 \})=I- |\psi \rangle \langle \psi |(=P_2). $$ Now we can calculate (\ref{eq23}) (i.e., the probability that a measured value $x_1$ is obtained) as follows. \begin{align} (\ref{eq23}) &= \langle \psi, ((\Psi_S^{1/N})_* (\Psi_{\mathbb P})_*)^N (|\psi \rangle \langle \psi |) \psi \rangle \nonumber \\ & \ge |\langle \psi , e^{-\frac{i {\cal H} }{\hbar N}}\psi \rangle \langle \psi , e^{\frac{i {\cal H} }{\hbar N}}\psi \rangle|^N \nonumber \\ & \approx \big(1 - \frac{1}{N^2} ( || (\frac{ {\cal H} }{\hbar }) \psi ||^2 - |\langle \psi, (\frac{ {\cal H} }{\hbar }) \psi \rangle |^2) \big)^N \to 1 \nonumber \\ & \qquad \qquad \qquad \qquad ( N \to \infty) \label{eq25} \end{align} Thus, if $N$ is sufficiently large, we see that \begin{align*} {\mathsf{M}}_{B(H_0)} (\Phi_{0,1}^{(N)} {\mathsf{O}_1}, S_{[|\psi \rangle \langle \psi |]}) = {\mathsf{M}}_{B(H_0)} (\Phi_I {\mathsf{O}_1}, S_{[|\psi \rangle \langle \psi |]}) \\ (\text{where $\Phi_I:B(H_1) \to B(H_0)$ is the identity map}) \end{align*} or, we say, roughly speaking in terms of the Schr\"{o}dinger picture, that the state $|\psi \rangle \langle \psi |$ does not move. \par \noindent {\bf Remark 5} The above argument is motivated by B. Misra and E.C.G. Sudarshan \cite{Misr}. However, the title of their paper, "The Zeno's paradox in quantum theory", urges us to guess that \begin{itemize} \item[(M)] the spectral resolution ${\mathbb P}$ of (\ref{eq24}) is regarded as an observable (or moreover, a measurement) in their paper \cite{Misr}. \end{itemize} If this (M) is their assertion, we cannot understand the "quantum Zeno effect".
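The limit (\ref{eq25}) in Example 1 can also be illustrated numerically. The following is a minimal sketch under assumed toy data ($H = \mathbb{C}^2$, Hamiltonian ${\cal H} = \sigma_x$, $\hbar = 1$, $\psi = (1,0)$, all our own choices): it iterates the Schr\"{o}dinger step and the projective channel $N$ times and returns the probability of the measured value $x_1$, which increases toward $1$ as $N$ grows.

```python
import numpy as np

# Toy simulation of the Zeno ("brake") effect: N cycles of free evolution
# U = exp(-i H /(hbar N)) followed by the projective channel
# rho -> P1 rho P1 + P2 rho P2, with P1 = |psi><psi|.
# H = sigma_x, hbar = 1, psi = (1, 0) are our own toy choices.
psi = np.array([1.0, 0.0], dtype=complex)
P1 = np.outer(psi, psi.conj())
P2 = np.eye(2) - P1

def survival(N):
    theta = 1.0 / N                          # evolution angle per step
    U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * np.array([[0, 1], [1, 0]])
    rho = np.outer(psi, psi.conj())
    for _ in range(N):
        rho = U @ rho @ U.conj().T           # Schroedinger step (Psi_S)_*
        rho = P1 @ rho @ P1 + P2 @ rho @ P2  # projective channel (Psi_P)_*
    return np.trace(rho @ P1).real           # probability of outcome x_1

print(survival(1), survival(100))  # the probability grows toward 1 with N
```

The first factor drops to roughly $\cos^2 1 \approx 0.29$ for a single coarse step, while frequent projection keeps the state close to $|\psi \rangle \langle \psi |$.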
We cannot understand it because the linguistic interpretation requires that the commutative condition (4) be satisfied; however, ${\mathbb P}$ and $\Psi_S^{\Delta t}{\mathbb P}$ do not commute. In the sense of Example 1, this effect should be called the "brake effect" and not the "watched pot effect". \par \noindent \section{\large Conclusions } \par \noindent In this paper, we point out the possibility that \begin{itemize} \item[(N)] two nice ideas (K) and (M) cannot be understood in the linguistic interpretation. \end{itemize} That is because our theory is not concerned with any influence after a measurement. In spite of difficulties such as (N), we do not give up asserting the linguistic interpretation, since it has a great power of description ({\it cf.} refs. \cite{Ishi2}-\cite{Ishi11}). It is always interesting to find a phenomenon that cannot be explained in MT. Thus, in spite of our conjecture (N), we earnestly hope that the readers investigate the following problem: \begin{itemize} \item[(O)] Describe Ozawa's inequality (\ref{eq21}) in the linguistic interpretation! \end{itemize} This problem is very important in quantum mechanics. That is because it is generally believed that the difference between interpretations is usually negligible in practical problems. If the formulation of Heisenberg's uncertainty principle depends on quantum interpretations, our next problem may be to investigate "What is the most certain interpretation?" And we believe that the linguistic interpretation is quite hopeful. However, it should be examined from various points of view. \rm \par \renewcommand{\refname}{ \large References} { \small \normalsize } \end{document}
PCDD and PCDF exposures among fishing community through intake of fish and shellfish from the Straits of Malacca

Azrina Azlan1,2, Nurul Nadiah Mohamad Nasir1, Norashikin Shamsudin3, Hejar Abdul Rahman4, Hock Eng Khoo1,2 & Muhammad Rizal Razman5

BMC Public Health volume 15, Article number: 683 (2015)

Exposure to PCDD/PCDF (dioxin and furan) through consumption of fish and shellfish is closely related to the occurrence of skin diseases, such as chloracne and hyperpigmentation. This study aimed to determine the exposure to PCDD/PCDF and its congeners in fish and shellfish obtained from different regions of the Straits of Malacca among the fishing community. The risk of fish and shellfish consumption and exposure to PCDD/PCDF among fishermen living in coastal areas of the Straits were evaluated based on a cross-sectional study involving face-to-face interviews, blood pressure and anthropometric measurements, and administration of food frequency questionnaires (FFQ). Skin examination was done by a dermatologist after the interview session. Determination of 17 congeners of PCDD/PCDF in 48 composite samples of fish and shellfish was performed based on HRGC/HRMS analysis. The total PCDD/PCDF in the seafood samples ranged from 0.12 to 1.24 pg WHO-TEQ/g fresh weight (4.6-21.8 pg WHO-TEQ/g fat). No significant difference was found in the concentrations of PCDD/PCDF between the same types of seafood samples obtained from the three different regions. The concentrations of the most potent congener, 2,3,7,8-TCDD, in the seafood samples ranged from 0.01 to 0.11 pg WHO-TEQ/g FW (1.9 pg WHO-TEQ/g fat). A moderate positive correlation was found between the fat contents and the concentrations of PCDD/PCDF determined in the seafood samples. The total PCDD/PCDF in all seafood samples was below 1 pg WHO-TEQ/g fresh weight, with the exception of grey eel-catfish.
The respondents had consumed fish and shellfish in amounts ranging between 2.02 g and 44.06 g per person per day. The total PCDD/PCDF exposures through consumption of fish and shellfish among the respondents were between 0.01 and 0.16 pg WHO-TEQ/kg BW/day. With regard to the two PCDD/PCDF-related skin diseases, no chloracne case was found among the respondents, but 2.2 % of the respondents were diagnosed with hyperpigmentation. Intake of a moderate amount of fish and shellfish from the area is safe and does not pose a risk for skin diseases. Over-consumption of seafood from the potentially polluted area of the Straits should be monitored in the future.

Fish and shellfish are the richest sources of long-chain (LC) n-3 polyunsaturated fatty acids (PUFA), such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) [1]. In Malaysia, fish consumption among Malaysians ranked second (40.78 %) among the top ten daily consumed foods [2]. Although marine fish contains a high level of PUFA, persistent chemical contaminants are the main problem for fish consumption [3]. Industrialisation has polluted seawater with chlorinated organic compounds and precursors of polychlorinated dibenzo-p-dioxins (PCDD) or polychlorinated dibenzofurans (PCDF). These compounds are persistent organic pollutants (POP) that have a high tendency to accumulate in the tissues of fish and shellfish [4]. It is well known that marine sources (fish and shellfish) or marine products constitute an important route of human exposure to PCDD/PCDF and other persistent organic chemicals [5–7]. Fish and shellfish may bioaccumulate POP in their tissues, and eating such seafood potentially transfers POP to the human body [8–11]. There is a distinct pattern of accumulation of POP in the human body, which depends on factors such as the types of fish and shellfish consumed [12].
Severe exposure to PCDD/PCDF (dioxin and furan) poses adverse health effects to humans, such as chloracne (a skin disease), discolouration of the skin, rashes, liver damage, reproductive and developmental effects, and cancer [13]. Chloracne, hyperpigmentation and hirsutism are the most widely recognised skin diseases, and consistently observed features of high exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) [14, 15]. Chloracne is often accompanied by a detectably high level of PCDD/PCDF in human blood when the contaminant is orally ingested, compared with dermal contact [14]. The recommended tolerable daily intake (TDI) for PCDD/PCDF in food is <1 pg TEQ/kg body weight (BW) [16]. In fish, a PCDD/PCDF level of about 1 pg WHO-TEQ/g fat is considered safe [17]. WHO experts suggested toxic equivalency factor (TEF) values (≤1) for PCDD/PCDF in food. These factors are used to calculate WHO toxicity equivalents (WHO-TEQ) for mixtures of PCDD/PCDF. WHO-TEQ can then be used to express the estimated combined toxicity of the mixture relative to the lead component 2,3,7,8-TCDD. WHO-TEQ has been used for estimating the toxicity risk of PCDDs and PCDFs. Moreover, the recommended TEF levels for the 17 congeners of PCDD/PCDF in food are 0.0001-1.0 [18]. The Malaysian Food Act (1983) has specified the maximum levels of certain chemicals detected in food for human consumption. The chemicals covered under the Act are antibiotic residues, drug residues, food additives and pesticide residues [19]. However, no chemical contaminants have been specified, and dietary exposure to contaminants including PCDD/PCDF is not stated in any of the dietary guidelines. Owing to the toxic effects of PCDD and PCDF found in most seafood obtained from marine sources, this study aimed to determine the types (congeners) and quantities of PCDD/PCDF in marine fish and shellfish.
Investigation of the occurrence of skin diseases among fishing communities along the Straits of Malacca was also performed in relation to dietary PCDD/PCDF exposure. As monitoring efforts intensify, the results obtained from this study can be used as a reference for monitoring seafood quality at local markets.

Chemicals and standards

Solvents (dichloromethane-DCM, toluene and hexane), which were of pesticide grade, were purchased from Fisher Scientific (Leicestershire, UK), and hydromatrix was obtained from Frampton Ave (Harbor City, CA, USA). Calibration standard EDF-9999, 13C12-labelled internal standard EDF-8999 and recovery standard EDF-5999 were supplied by Cambridge Isotope Laboratories, Inc. (Andover, MA, USA). All other chemicals used were of analytical grade.

Sample collection and preparation

Fresh samples consisted of 48 samples from fifteen different types (species) of fish (12 types) and shellfish (three types). The samples were collected from three different regions of the identified fish landing areas along the Straits of Malacca. As shown in Fig. 1, the three regions were the northern region [Kuala Perlis – A (6° 24′ 02.0″ North, 100° 07′ 49.4″ East), Kuala Kedah – B (6° 06′ 23.7″ North, 100° 17′ 18.1″ East), Teluk Bahang – C (5° 27′ 35.3″ North, 100° 12′ 39.6″ East) and Pulau Betong – D (5° 18′ 25.3″ North, 100° 11′ 36.4″ East)], middle region [Pengkalan Baharu – E (4° 26′ 41.7″ North, 100° 36′ 59.5″ East), Kuala Sepetang – G (4° 50′ 05.9″ North, 100° 37′ 38.2″ East), and Kuala Selangor – F (3° 21′ 10.8″ North, 101° 14′ 53.9″ East)], and southern region [Melaka – H (2° 10′ 58.6″ North, 102° 15′ 58.6″ East), Port Dickson – I (2° 31′ 18.5″ North, 101° 47′ 46.7″ East) and Muar – J (2° 02′ 55.5″ North, 102° 33′ 09.6″ East)]. All fish samples were collected in August 2008 (trip 1) and November 2008 (trip 2), and duplicate samples were obtained from each trip [20].
The samples were collected during two different trips (trip 1 = T1; trip 2 = T2), due to the availability of samples for each type of seafood at the time of collection, with the help of officers from the Fisheries Development Authority of Malaysia (FDAM). No permission was required for the seafood sample collection at any of the fish landing areas.

Fig. 1: Locations of fish landing areas along the Straits of Malacca

The marine fish and shellfish samples consisted of the following species: Indian mackerel, Spanish mackerel, silver pomfret, hardtail scad, fourfinger threadfin, dorab wolf-herring, large-scale tongue sole, long-tailed butterfly ray, Japanese threadfin bream, sixbar grouper, Malabar red snapper, grey eel-catfish, cockles, prawn and cuttlefish. The collected seafood species were commonly consumed species and did not involve any protected or endangered species. The duplicate samples were obtained from the same type of fish or shellfish in the same region. Each composite sample contained 10 g of fresh fish fillet or muscle tissue of shellfish. Duplicate composite samples were obtained during Trip 1 (T1) and Trip 2 (T2). The fish and shellfish samples collected were transported to the nutrition laboratory on the same day of collection. The samples were delivered to the laboratory in sealed polystyrene boxes and stored in a freezer (–20 °C). Before the analysis, the whole fish was weighed, gutted, washed and filleted. The edible parts of prawn and cockle were obtained and washed before analysis. The prepared samples were stored in polyester-covered cups at –20 °C before further analyses. Samples were also sent to the Doping Control Centre, Penang (an accredited laboratory) for determination of PCDD and PCDF congeners.

Fat extraction

Before the extraction, the frozen samples were thawed at room temperature and 50 μl of C13-labelled internal standard (EDF-8999) was spiked into the 10 g sample.
The sample was then mixed with 10 g of hydromatrix before homogenisation using a mortar. The homogenised sample was then dried in an oven at 50 °C for 2 min to remove the moisture. The dried sample was powdered, placed into a cell (size 33) and covered with Ottawa sand (Fisher Scientific, Leicestershire, UK) before extraction using an Accelerated Solvent Extraction System (ASE 200) (DIONEX Corporation U.S Patents, Sunnyvale, USA) for 20 min based on USEPA Method 3545 [21]. The fat was extracted using DCM, and the DCM was removed using a rotary evaporator (BUCHI Labortechnik, Flawil, Switzerland) for 20 min at 40 °C. The remainder was filtered to obtain a crude fat extract. The fat content was determined gravimetrically.

Clean-up process

Hexane was added to the crude fat extract, forming an aliquot. The aliquot was placed in a fully automated Power-Prep Fluid Management System (FMS) (Fluid Management System, Inc., Waltham, USA) for extract clean-up. The process involved three types of columns: silica (CLDS-ABN-STD), alumina (CLDA-BAS-011) and carbon (CLDC-CCE-034). The column chromatographic clean-up procedure was adapted from the Smith-Stalling method outlined in US EPA Method 8290. After completion of FMS clean-up, the eluent that contained PCDD/PCDF was concentrated to approximately 1 ml, and later spiked with 50 μl of the external standard (EDF-5999). The spiked eluent was then micro-concentrated using a Dri-Block Heater DB-20 (Staffordshire, OSA, UK) and nitrogen gas from TESCOM Corporation (ELK River, MN, USA) at 60 °C to a final volume of 10 μl. The eluent was placed in a 1.5-ml aluminum-covered vial before HRGC/HRMS analysis.

Determination of PCDD/PCDF

An HRGC/HRMS system from Thermo Scientific (Milan, Italy) was used in this study. The HRGC/HRMS analysis was coupled with mass spectrometry (MS) from Thermo Fisher Scientific (Bremen, Germany), and it was used for determination of PCDD and PCDF.
Each sample was analysed for the 17 PCDD/PCDF congeners with 2,3,7,8-chloro-substitution (seven PCDD and ten PCDF congeners). Concentrations in fish and shellfish samples were calculated on a fresh weight (FW) basis. The WHO toxicity equivalent (WHO-TEQ) for PCDD/PCDF in the seafood samples was calculated by applying the WHO 2005 toxic equivalency factors (TEF) for PCDDs and PCDFs [18]. This standard set of WHO-TEQ was used to evaluate the toxic effect of the seventeen most toxic PCDD/PCDF congeners found in the fish and shellfish samples. The WHO-TEQ of a sample was calculated by multiplying the absolute concentration of each congener by a numeric factor that expresses the concentration in terms of the dioxin molecule, 2,3,7,8-TCDD, which is given a value of 1. Any values of PCDD/PCDF congeners below the limit of detection (LOD) were reported as not detected.

Dietary exposure to PCDD/PCDF

A randomised cross-sectional study was designed to assess the dietary exposure to PCDD/PCDF of fishermen living along the coast of the Straits of Malacca. The respondents (n = 93) of this study were randomly selected from an identified fishing community in Kuala Selangor, Selangor, Malaysia, and the survey area was permitted by the FDAM. The survey was performed at the office of Kuala Selangor Fishermen's Association. The respondents were selected based on the inclusion criteria: only healthy fishermen aged 18-55 years old without chronic diseases were selected. They were requested to fill in a consent form, as well as a subject information sheet, before undergoing the interview process. Ethical approval was obtained from the Human Medical Research Ethics Committee of Universiti Putra Malaysia [UPM/FPSK/PADS/T7-MJKEtikaPer/F01-JPD_JAN(10)03]. In this study, a Malay language version of the questionnaire was used. The survey was initiated after written informed consent was obtained from all respondents.
Trained interviewers conducted the interviews, performed blood pressure and anthropometric measurements (body fat percentage, weight and height), and administered food frequency questionnaires (FFQ). The frequencies of consumption of specific seafood were estimated from the FFQ on a daily, weekly or monthly basis. Skin physical examination was done by a dermatologist after the interview. The average seafood consumption (ASC) of fish and other types of seafood was calculated based on the consumption of fish and shellfish per day, and the level of PCDD/PCDF exposure from the intake of fish and shellfish was determined using the formula [7] as follows:

PCDD/PCDF exposure = [ASC (g/day) × PCDD/PCDF level in fish and shellfish (pg/g)] / [body weight of each respondent (kg)]

Method validation was done for the analysis of PCDD/PCDF in the samples. A calibration standard was used to construct a calibration curve based on the EDF-4141 window-defining standard in Xcalibur software. Reproducible calibration and testing of the extraction, clean-up, and HRGC/HRMS system were done to ensure the quality of the analysis. Sensitivity, linearity and repeatability of the instrument performance were also checked using the calibration standard. The acceptable recovery was within the range of 55-120 %. For dietary exposure, the newly designed questionnaire was pretested using convenience sampling through an interview session among fishermen (n = 23) at the first visit to the study location, prior to the actual data collection. The pilot test evaluated the acceptance of the questionnaire in terms of language, meaning, use of words and other aspects.
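The exposure formula above translates directly into code. The following is a minimal sketch; the function name and the numbers are our own illustration, not respondent data:

```python
# Dietary exposure formula: ASC (g/day) x PCDD/PCDF level (pg WHO-TEQ/g)
# divided by body weight (kg). Helper name and example values are
# illustrative only, not data from the study.
def pcdd_pcdf_exposure(asc_g_per_day, level_pg_teq_per_g, body_weight_kg):
    """PCDD/PCDF exposure in pg WHO-TEQ per kg body weight per day."""
    return asc_g_per_day * level_pg_teq_per_g / body_weight_kg

# e.g. 10 g/day of a fish at 0.5 pg WHO-TEQ/g for a 67 kg respondent:
print(pcdd_pcdf_exposure(10, 0.5, 67))  # about 0.075 pg WHO-TEQ/kg BW/day
```

A result below the tolerable daily intake of 1 pg TEQ/kg BW/day [16] would be considered acceptable under the guideline cited earlier.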
All questionnaires were checked for readability and missing data prior to data entry. The data obtained from the survey were analysed using the Statistical Package for the Social Sciences (SPSS) programme version 20.0. All the data were presented as mean ± standard error of the mean (SEM). The ranges of values for fat content, PCDD/PCDF level, fish and shellfish intake, and PCDD/PCDF exposure were obtained. The survey data (baseline characteristics, fish and shellfish consumption, and skin diseases) were presented as percentage, mean ± SEM and range. Statistical analysis of the fish and shellfish samples was performed by applying an independent-sample t-test (comparing different regions). Pearson's correlation analysis was performed for the fat content and PCDD/PCDF level analysed. The concentrations of PCDD/PCDF in fish and shellfish samples were reported as pg WHO-TEQ/g FW. Values below the limit of detection (LOD, 0.001 pg/g) were presented as not detected.

Levels of PCDD/PCDF and congener profiles

Tables 1, 2, 3 show the concentrations of PCDD/PCDF congeners in the fish and shellfish samples. The concentrations of PCDD/PCDF congeners were presented as levels of WHO-TEQ (pg/g) that contributed to the PCDD/PCDF toxicity. The concentrations of PCDD/PCDF in the samples between T1 and T2 were similar, except for Japanese threadfin bream (T1 = 0.18 pg/g; T2 = 0.73 pg/g) and grey eel-catfish (T1 = 0.90 pg/g; T2 = 1.57 pg/g). These exceptions may indicate sporadic occurrences of these contaminants due to unpredictable pollutant or spoilage events along the Straits of Malacca.
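The WHO-TEQ weighting behind these figures (each congener concentration multiplied by its TEF and summed, with values below the LOD excluded) can be sketched in code. The TEFs below are the published WHO 2005 values for four of the congeners discussed, but the sample concentrations are made-up illustration numbers, not data from this study:

```python
# WHO-TEQ = sum over congeners of (concentration x TEF), ignoring values
# below the LOD. TEFs are WHO 2005 values; concentrations are invented
# for illustration and are not study data.
TEF_2005 = {
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "2,3,4,7,8-PeCDF": 0.3,
    "2,3,7,8-TCDF": 0.1,
}

def who_teq(concentrations_pg_per_g, lod=0.001):
    """Sum of TEF-weighted concentrations; values below LOD count as 0."""
    return sum(TEF_2005[c] * v
               for c, v in concentrations_pg_per_g.items()
               if v >= lod)

sample = {"2,3,7,8-TCDD": 0.02, "1,2,3,7,8-PeCDD": 0.10,
          "2,3,4,7,8-PeCDF": 0.05, "2,3,7,8-TCDF": 0.0005}
print(round(who_teq(sample), 4))  # 0.02*1 + 0.10*1 + 0.05*0.3 = 0.135
```

The 2,3,7,8-TCDF value in this toy sample falls below the LOD of 0.001 pg/g and is therefore excluded from the sum.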
Table 1 Congeners of PCDD/PCDF in fish and shellfish samples from trip 1 (T1) and trip 2 (T2) of northern region

Table 2 Congeners of PCDD/PCDF in fish and shellfish samples from trip 1 (T1) and trip 2 (T2) of middle region

Table 3 Congeners of PCDD/PCDF in fish and shellfish samples from trip 1 (T1) and trip 2 (T2) of southern region

Among the 17 congeners of PCDD/PCDF determined, 1,2,3,7,8-PeCDD was the most abundant congener found in all samples, ranging between 0.02 and 1.04 pg WHO-TEQ/g FW. The results were in agreement with the data reported previously [22], where 1,2,3,7,8-PeCDD contributed to 21-78 % of the WHO-TEQ. Similar findings were also reported previously, where the 1,2,3,7,8-PeCDD congener contributed to about 31 % of the WHO-TEQ [23]. According to WHO, 1,2,3,7,8-PeCDD has a TEF value of 1.0 [18]. Additionally, 2,3,7,8-TCDD has a TEF value of 1.0. It was classified as a Group 1 carcinogen (human carcinogen) by the WHO's International Agency for Research on Cancer in 1997. The congener 2,3,7,8-TCDD, detected in all studied samples, ranged from 0.01 to 0.11 pg WHO-TEQ/g FW. The mean concentration of the 2,3,7,8-TCDD congener (0.02 pg WHO-TEQ/g FW) in the Malaysian seafood samples determined previously was within the range of the concentration found in this study. In addition, the congeners 1,2,3,4,6,7,8-HpCDF, 1,2,3,4,7,8,9-HpCDF, OCDF and OCDD were not detected in any of the samples. One reason is that these congeners, whose degree of chlorination is higher than that of the other congeners, are poorly absorbed by the digestive tract of fish and shellfish [23, 24]. The types of fish that contained the highest WHO-TEQ levels were Japanese threadfin bream, large-scale tongue sole, fourfinger threadfin, Malabar red snapper, cockles, sixbar grouper and grey eel-catfish.
The results showed that the highest concentrations of 2,3,7,8-TCDD, 1,2,3,7,8-PeCDD and 1,2,3,4,7,8-HxCDD were detected in grey eel-catfish samples obtained from the southern region (Table 3). Among all samples, Malabar red snapper had the highest concentration of the 2,3,4,7,8-PeCDF congener (0.05 pg WHO-TEQ/g FW), from the middle region during the T1 sample collection. Based on the results obtained, the congener profiles are species-dependent. The results could have also been influenced by biological (metabolism, age and trophic level) and environmental factors (habitat, geography and seasonal variation) [23, 25]. Congener 2,3,4,7,8-PeCDF has been shown to be responsible for about 70 % of the dioxin toxicity, and it has been identified as an important causative agent in Yusho disease [26]. The 2,3,4,7,8-PeCDF is also reported to be the second most potent and toxic congener after 2,3,7,8-TCDD [20]. A similar pattern of the congener profile of seafood samples was reported previously [27], in which the largest contributions to the PCDD/PCDF toxicity were from 2,3,4,7,8-PeCDF, 2,3,7,8-TCDF, 1,2,3,7,8-PeCDD and 2,3,7,8-TCDD. In this study, four main congeners were detected in all seafood samples. The levels of all congeners in the seafood samples were below 1 pg WHO-TEQ/g FW, with some exceptions. The presence of specific PCDD/PCDF congeners in certain seafood samples could be related to industrial activities near the sea where the fish and shellfish samples were collected. Since the fish and shellfish samples collected along the Straits of Malacca indicated some contamination with PCDD/PCDF congeners, the relevant authority in Malaysia is recommended to monitor the disposal of waste from factories near the Straits. The concentrations of PCDD/PCDF (WHO-TEQ) in the fish and shellfish samples obtained from different regions along the Straits are presented in Table 4.
The results showed that grey eel-catfish (southern region), Japanese threadfin bream (northern region) and Malabar red snapper (middle region) contained the highest PCDD/PCDF concentrations, at 1.24, 0.46 and 0.36 pg WHO-TEQ/g FW, respectively. On the other hand, fourfinger threadfin from the northern region had the lowest PCDD/PCDF concentration, at 0.12 pg/g FW. Grey eel-catfish was the most affected species, exhibiting the highest total PCDD/PCDF (1.24 ± 0.47 pg WHO-TEQ/g FW) among all samples. Among the fish and shellfish samples of different regions, no significant differences were found in the WHO-TEQ levels of any type of sample, which could be due to the fact that sea creatures move freely along the Straits of Malacca.

Table 4 Total PCDD/PCDF in fish and shellfish along the Straits of Malacca by regions

Our previous study [28] demonstrated that the levels of total dioxin and furan in these species of fish and shellfish from the Straits of Malacca were high. The results also showed that grey eel-catfish obtained from the southern region of the Straits of Malacca during trip 2 had the highest level of total PCDD/PCDF (1.57 pg WHO-TEQ/g FW) compared with other seafood samples. In this study, the WHO-TEQ level (1.24 pg/g FW) for grey eel-catfish was similar to the result reported previously. One possible explanation for the high WHO-TEQ level could be that grey eel-catfish ingested these toxic substances during food intake from the muddy ocean floor. On the other hand, the total PCDDs/PCDFs (pg WHO-TEQ/g FW) for fish fillet samples of Indian mackerel (0.10), silver pomfret (0.13), grey eel-catfish (1.23), hardtail scad (0.12) and Spanish mackerel (0.18) as reported by Azrina et al. [29] are lower than the WHO-TEQ levels determined in this study (Table 4).
On a fat basis, we found the total PCDD/PCDF in the fish and shellfish samples ranged between 4.6 and 21.8 pg WHO-TEQ/g fat. Therefore, it is important to monitor the levels of PCDD/PCDF in the seafood samples obtained from the Straits of Malacca on a regular basis. A recent study in Malaysia reported that the mean levels of PCDD/PCDF in eight types of seafood (tilapia, grouper, pomfret, barramundi, horse mackerel, snapper, prawn and cuttlefish) ranged from 0.16 to 0.17 pg WHO-TEQ/g FW [23]. These results were much lower than the concentrations of PCDD/PCDF of the same species determined in this study. This could be because the edible portions were homogenised and analysed as grouped seafood rather than determined for individual species. As reported in another study, the total PCDD/PCDF in fish and shellfish samples from the Catalan market, Spain, ranged from 0.11 to 0.66 pg WHO-TEQ/g FW [30]. The results obtained from this study showed that the seafood samples obtained from the West Coast of Peninsular Malaysia along the Straits contained higher concentrations of PCDD/PCDF than the samples from the coastal areas of Japan. However, Moon and Ok [31] reported that the concentrations of PCDD/PCDF in 40 types of seafood samples from a Korean coastal area ranged from 0.02 to 4.39 pg WHO-TEQ/g FW. The TEQ values recorded by Moon and Ok [31] were higher than the TEQ values found in our study. All these findings reported the total PCDD/PCDF detected on a wet-weight basis. On the other hand, the concentrations of PCDD/PCDF in aquatic food obtained from the local market in China ranged from 0.9 to 15317 pg WHO-TEQ/g fat [32]. Based on a previous study, meat and poultry from Belgium have TEQ levels (PCDDs/PCDFs) ranging from trace to 7.82 pg WHO-TEQ/g fat [33]. The results showed that horse meat has the highest total PCDD/PCDF (7.82 pg WHO-TEQ/g fat), followed by eggs (2.76 pg/g), and beef and mutton (1.56 and 1.55 pg/g).
Also, pork and chicken meat contained the lowest WHO-TEQ levels, 0.17 and 0.35 pg/g fat, respectively. This finding demonstrates that pork and chicken meat contain lower total PCDD/PCDF than the fish and shellfish samples. The safe level of PCDD/PCDF in food is about 1 pg WHO-TEQ/g fat [17]. Therefore, seafood samples from the Straits of Malacca are not safe for consumption, as the total PCDD/PCDF was higher than 1 pg WHO-TEQ/g fat. In addition to food products, a high total PCDD/PCDF/PCB (13 pg WHO-TEQ/g fat) was also found in the breast milk of mothers who lived in the northern region of Peninsular Malaysia near the Straits of Malacca [34]. The TEQs in the breast milk of mothers ranged from 3.0 to 24.0 pg WHO-TEQ/g fat. One possible reason for the high TEQ of mothers' breast milk is that these mothers had consumed contaminated meat and seafood. The rapid growth of the agricultural and industrial sectors, as well as urbanisation, on the west coast of Peninsular Malaysia are among the contributors to these POP [35]. The northern, middle and southern regions of the west coast have different stages of development and industrialisation, which might have contributed to the release of PCDD/PCDF into the environment [36]. Smoke emitted from these human activities contains PCDD or PCDF, which increases the levels of PCDD/PCDF in the seawater.

Correlation between fat contents and levels of PCDD/PCDF

Fat contents of the 48 samples of fish and shellfish ranged between 0.80 % and 5.70 %. The result of statistical analysis showed a moderate positive correlation between fat content and concentrations of PCDD/PCDF in the seafood samples (r = 0.507; p < 0.05). The moderately high correlation suggests that about half of the PCDD/PCDF exposure could be due to the intake of fat from the fish and shellfish. Although dioxin and furan are lipophilic and highly soluble in fat, the correlation is only moderate, likely because some of the shellfish samples contained a low amount of fat.
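Pearson's correlation coefficient used for the fat-versus-TEQ relationship above can be sketched as follows. The paired values here are made-up illustration data spanning the reported fat range, not the 48 study samples (which gave r = 0.507):

```python
import math

# Pearson's r between fat content (%) and PCDD/PCDF level (pg WHO-TEQ/g FW).
# The paired values are invented for illustration, not study data.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

fat = [0.8, 1.5, 2.3, 3.1, 4.0, 5.7]        # % fat (hypothetical)
teq = [0.12, 0.18, 0.25, 0.20, 0.46, 1.24]  # pg WHO-TEQ/g FW (hypothetical)
print(round(pearson_r(fat, teq), 3))        # a positive r, as in the study
```

A value of r near 0.5, as reported, indicates a moderate positive association; r² then estimates the share of variance in TEQ explained by fat content.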
Sociodemographic, anthropometric and lifestyle characteristics of respondents

The sociodemographic characteristics, anthropometric and blood pressure measurements, as well as the lifestyle of the 93 recruited fishermen (respondents), are shown in Table 5. A majority of the respondents were male (81.7 %). More than 50 % of the respondents were aged 40 or above. They came from Malay, Chinese and Indian ethnic groups, and a majority of them were Malay (57 %). Most of the respondents in the fishing community had monthly incomes below the poverty line; more than 90 % of the respondents earned below RM 1,500 (equivalent to about £270). In Selangor State, Malaysia, RM 1,500 has been set as the poverty line [37]. Some of the fishermen were illiterate, while most of them had completed primary school. A considerable proportion (91 %) of these fishermen had over 15 years of working experience. On average, the BW and height of the respondents were 67.38 kg and 162.95 cm, respectively. They also had ideal BW on average, with body fat percentage and BMI of 26.01 % and 25.24 kg/m2, respectively. The systolic and diastolic blood pressures of the respondents were 138.82 and 80.53 mm Hg, respectively. In addition, the majority of the respondents did not smoke cigarettes or drink alcoholic beverages. With regard to cigarette smoking, it is banned by most of the religions.

Table 5 Sociodemographic characteristics, anthropometric measurements and lifestyle of fishermen

Dietary intake and PCDD/PCDF exposure

The percentages of fish and shellfish consumption among the respondents are presented in Table 6. The seafood consumption data were collected on a daily, weekly and monthly basis using a food frequency questionnaire. The information was limited to the frequency of consumption of specific seafood caught along the Straits of Malacca where the samples were obtained.
As shown in Table 6, 15 types of fish and shellfish were considered in the seafood intake section of the questionnaire. The percentages of fish and shellfish intake were calculated on a daily basis. The results showed that prawn (11.8 %) was the most frequently consumed seafood among the respondents, followed by fourfinger threadfin (6.5 %) and Indian mackerel (6.5 %). Indian mackerel and prawn were also found to be highly consumed by the respondents on a weekly basis, accounting for about 59.1 % and 52.7 % of intake, respectively. Meanwhile, on a monthly basis, long-tail butterfly ray (44.1 %), cuttlefish (44.1 %) and hardtail scad (40.9 %) were the major seafood consumed.

Table 6 The percentage of fish and shellfish consumed by respondents

Average intakes of each specific type of fish or shellfish, as well as the total fish intake of the respondents, are presented in Table 7. The amount of large-scale tongue sole fillet consumed by the respondents was the highest among all seafood samples, at 44.06 ± 97.10 g/person/day. The amounts of fish and shellfish consumed by the respondents ranged from 2.02 ± 0.87 to 44.06 ± 10.07 g/person/day. The amounts of seafood consumption reported in this study are much lower than those reported in the Food Consumption Statistics of Malaysia 2003 [38], in which the estimated mean intake of seafood for the Malaysian population is 60.67 g/day, and 75.59 g/day for rural areas.

Table 7 Average seafood consumption (ASC) of fish/shellfish and PCDD/PCDF exposure among fishermen and family members

Previously, a high intake of fish and seafood products among Malaysians was reported, at 103.7 g/day [23]. That study also included other fishery products, whereas the present study focuses on the consumption of 15 types of fish and shellfish among the respondents. The intake of seafood by the respondents depended on the availability of these fish and shellfish; therefore, not all types of seafood were considered.
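FFQ responses reported on daily, weekly and monthly bases are typically converted to an average daily intake by scaling the reported frequency to a per-day rate and multiplying by the usual portion size. The following is a generic sketch of that conversion; the period lengths, portion size and frequency used are illustrative assumptions, not the protocol or data of this study:

```python
# Convert an FFQ item (frequency per reporting period, usual portion size)
# into an average intake in g/person/day.  A 30-day month is an assumption.
DAYS_PER_PERIOD = {"daily": 1.0, "weekly": 7.0, "monthly": 30.0}

def daily_intake_g(frequency, period, portion_g):
    """Average intake in g/day for one FFQ item."""
    return frequency * portion_g / DAYS_PER_PERIOD[period]

# e.g. a fish eaten twice a week with a ~80 g usual serving:
print(round(daily_intake_g(2, "weekly", 80), 2))  # 22.86 g/day
```

Summing such per-item values over all 15 species gives an estimate comparable to the ASC figures in Table 7.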
The estimated value for marine fish sources for the general population stated in the Food Consumption Statistics of Malaysia 2003 [38] does not indicate the consumption of particular types of fish or shellfish. Therefore, the results obtained here can be used as a guideline for the consumption of selected fish and shellfish among the fishing community in the middle region of the west coast of Peninsular Malaysia. Table 7 shows the average PCDD/PCDF exposure (pg WHO-TEQ/kg BW/day) of the respondents. The PCDD/PCDF concentration of each sample was used to calculate the average PCDD/PCDF exposure (pg WHO-TEQ/kg BW/day). The highest average exposure to total PCDD/PCDF among the respondents was attributed to grey eel-catfish (0.16 pg WHO-TEQ/kg BW/day), followed by large-scale tongue sole (0.13) and hardtail scad (0.12). Total PCDD/PCDF exposures from the consumption of fish and shellfish among the respondents ranged from 0.01 to 0.16 pg WHO-TEQ/kg BW/day. In Malaysia, seafood and seafood products have been reported to contribute a PCDD/PCDF exposure of 0.41 pg WHO-TEQ/kg BW/day [23], which is higher than the exposure among the respondents in this study. That higher level of PCDD/PCDF exposure is mainly due to the consumption not only of fish and shellfish but also of other seafood products, such as canned sardine, canned crab meat, fish balls, tempura seafood and crab sticks, that are contaminated with PCDD/PCDF. In this study, the dietary exposure to PCDD/PCDF from seafood intake among the respondents was low, because the ASC covered only the daily intake of selected fish and shellfish species among the fishermen. The PCDD/PCDF exposure among the respondents was much lower than the exposures reported by studies from Egypt (4.06-6.38 pg TEQ/kg BW/day) [39], China (1.36 pg TEQ/kg BW/day) [40] and Spain (1.17 pg TEQ/kg BW/day) [41]. The results obtained from this study could represent the safety level of PCDD/PCDF in the 15 types of fish and shellfish from the Straits of Malacca.
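The per-body-weight exposure figures in Table 7 combine a sample's PCDD/PCDF concentration, the average seafood consumption and the respondent's body weight. A minimal sketch of this calculation follows; the concentration and intake values are illustrative assumptions, and only the 67.38 kg mean body weight is taken from the text:

```python
# exposure (pg WHO-TEQ/kg BW/day) =
#   concentration (pg WHO-TEQ/g fresh weight) * intake (g/day) / body weight (kg)
def dietary_exposure(conc_pg_teq_per_g, intake_g_per_day, body_weight_kg):
    return conc_pg_teq_per_g * intake_g_per_day / body_weight_kg

# Illustrative values: 0.4 pg WHO-TEQ/g fresh weight, 20 g/day, mean BW 67.38 kg.
exposure = dietary_exposure(0.4, 20.0, 67.38)
print(round(exposure, 2))  # 0.12 pg WHO-TEQ/kg BW/day
```

The result can be compared against the WHO tolerable daily intake of 1 pg TEQ/kg BW/day cited in the text [16].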
Previous research revealed that the levels of PCDD/PCDF in the serum lipid of fishermen with high consumption of Baltic fish and shellfish were within the range of 70-200 pg WHO-TEQ/g lipid [42]. The same study reported that fishermen who consumed low to moderate amounts of Baltic seafood had 30-140 pg WHO-TEQ/g lipid detected in their serum. Therefore, increased consumption of seafood contributed to a higher exposure to PCDD/PCDF. Among the 93 respondents, 23 were found to have a skin disease by a dermatologist. The types of skin disease detected are reported in Table 8. Besides exposure to PCDD/PCDF, some of the skin diseases could be caused by other factors, such as sun exposure [43] and microbial infection [44]. The occurrence of skin diseases was high among the fishermen because they worked in environmental conditions that promote exposure to contaminants [45]. Except for hyperpigmentation, the other types of skin disease detected were not related to PCDD/PCDF poisoning. No case of chloracne was detected in this study. Such skin diseases occur after high and mostly accidental intakes of PCDD/PCDF; therefore, these skin disorders were not expected among the respondents.

Table 8 Type of skin disease among fishermen and family members

Chloracne and hyperpigmentation are the two most common types of skin disease related to PCDD/PCDF exposure in humans. This study focused only on skin diseases related to PCDD/PCDF exposure among the respondents. As shown in Table 8, two respondents (2.2 %) were found to have hyperpigmentation. One of them had hyperpigmentation of the skin, whereas the other had hyperpigmentation of the mucosa (gum and buccal mucosa). It is highly unlikely that this localised hyperpigmentation was caused by dioxin/furan exposure or toxicity: all reported cases of dioxin/furan-related hyperpigmentation involved severe, generalised darkening of the skin and abnormal pigmentation, with almost the entire body surface area involved [25].
Ideally, blood levels of dioxin/furan congeners in these two fishermen should be measured, which would confirm or rule out a relationship between the skin changes and dioxin/furan toxicity. The hyperpigmentation detected among the respondents could instead be due to extreme exposure to sunlight while fishing [43]. Based on their fish intake assessed using the FFQ, both of these respondents had low PCDD/PCDF exposure. The first and second respondents diagnosed with hyperpigmentation consumed 8.63 and 13.61 g of fish per day, respectively, corresponding to PCDD/PCDF exposures of 0.04 and 0.06 pg WHO-TEQ/kg BW/day. This low level of exposure, which is well below the recommended level (1 pg TEQ/kg BW/day) [16], reaffirms the conclusion that the skin hyperpigmentation is unrelated to PCDD/PCDF. Sociodemographic characteristics and lifestyle, in particular sunlight exposure, are the more plausible causes of the hyperpigmentation. By contrast, the members of a Spanish family (father, mother and six children) developed chloracne and hyperpigmentation after ingesting olive oil contaminated with PCDD/PCDF [46]. The level of PCDD/PCDF detected in the chloracnegenic oil from Spain was 1590 pg WHO-TEQ/g oil, and the levels of exposure to these contaminants among the family members ranged from 620 to 1500 pg WHO-TEQ/kg BW/day. Hyperpigmentation and acne-like eruptions were also documented in the "Yusho" incident in Japan, which was caused by the ingestion of Japanese rice oil containing PCB and PCDF at an exposure level of <400 ppm [25]. The low PCDD/PCDF exposure among the respondents in this study therefore indicates that their skin diseases cannot be attributed to PCDD/PCDF toxicity.
Based on the previous literature, a high level of exposure is required to cause skin toxicity (chloracne and hyperpigmentation) [47]. In a follow-up study reported by Guo et al. [48], the subjects who had been admitted to hospital were estimated to have consumed about 3.8 mg of PCDFs. The amounts of PCDD/PCDF ingested by the respondents in this study were estimated to be more than 2000 times lower than in that reported case. Exposure to PCDD/PCDF through the intake of fish and shellfish is considered one of the risk factors for skin toxicity. Although blood samples were not taken from the respondents, the information obtained from the questionnaire, together with the skin examination by a dermatologist, provides some indication of the possible contribution of dietary fish and shellfish to skin diseases related to PCDD/PCDF toxicity. This information will be important in the future as a guideline for the authorities, the public and research scientists in further investigations of PCDD/PCDF exposure among the fishing community, since this group is more likely to be affected by environmental pollutants. Additionally, this study provides baseline data to stakeholders in the Malaysian fishery industry. The data can also be used as a guideline to determine whether the levels of PCDD/PCDF in local marine fish and shellfish are below the recommended levels [17]. Contamination of fish and shellfish with PCDD and PCDF is hazardous to those who consume the contaminated seafood. PCDD/PCDF exposure among the fishermen recruited from the selected fishing villages in the coastal area of the Straits of Malacca is low. Almost all the fish and shellfish samples had less than 0.5 pg WHO-TEQ/g fresh weight, indicating that the seafood obtained from the Straits of Malacca has low PCDD/PCDF levels. However, the total PCDD/PCDF levels in all the fish and shellfish samples were higher than 1.0 pg WHO-TEQ/g fat, so on a fat basis the seafood is not entirely safe for consumption.
Although the Straits of Malacca is one of the busiest sea routes for international traders, the level of seawater pollution is still low. Owing to the high levels of PCDD/PCDF congeners in the seafood obtained from the southern region during the second trip, continuous monitoring of human activities is essential. Authorities from the nearby countries should monitor the human activities that could pollute the seawater of the Straits. The levels of dioxins and furans should be monitored from time to time to ensure safe seafood products for the community.

References

Sioen I, De Henauw S, Verbeke W, Verdonck F, Willems JL, Van Camp J. Fish consumption is a safe solution to increase the intake of long-chain n-3 fatty acids. Public Health Nutr. 2008;11:1107–16.
Norimah AK, Safiah M, Jamal K, Haslinda S, Zuhaida H, Rohida S, et al. Food consumption patterns: findings from the Malaysian Adult Nutrition Survey (MANS). Mal J Nutr. 2008;14:25–39.
Dórea JG. Persistent, bioaccumulative and toxic substances in fish: human health considerations. Sci Total Environ. 2008;400:93–114.
Pompa G, Caloni F, Fracchiolla ML. Dioxin and PCB contamination of fish and shellfish: assessment of human exposure. Review of the international situation. Vet Res Comm. 2003;27:159–67.
Bjerselius R, Lundstedt-Enkel K, Olsén H, Mayer I, Dimberg K. Male goldfish reproductive behaviour and physiology are severely affected by exogenous exposure to 17beta-estradiol. Aquat Toxicol. 2001;53:139–52.
Mohammed A, Orazio C, Peterman P, Echols K, Feltz K, Manoo A, et al. Polychlorinated dibenzo-p-dioxin (PCDDs) and polychlorinated dibenzofurans (PCDFs) in harbor sediments from Sea Lots, Port-of-Spain, Trinidad and Tobago. Mar Pollut Bul. 2009;58:928–34.
Lee KT, Lee JH, Lee JS, Park KH, Kim SK, Shim WJ, et al. Human exposure to dioxin-like compounds in fish and shellfish consumed in South Korea. Hum Ecol Risk Assess. 2007;13:223–35.
Schecter A, Cramer P, Boggess K, Stanley J, Päpke O, Olson J, et al.
Intake of dioxins and related compounds from food in the U.S. population. J Toxicol Environ Health Part A. 2001;63:1–18.
Schecter A, Päpke O, Harris TR, Tung KC, Musumba A, Olson J, et al. Polybrominated diphenyl ether (PBDE) levels in an expanded market basket survey of U.S. food and estimated PBDE dietary intake by age and sex. Environ Health Perspect. 2006;114:1515–20.
Kiviranta H, Vartiainen T, Tuomisto J. Polychlorinated dibenzo-p-dioxins, dibenzofurans, and biphenyls in fishermen in Finland. Environ Health Perspect. 2002;110:355–61.
Kiviranta H, Ovaskainen ML, Vartiainen T. Market basket study on dietary intake of PCDD/Fs, PCBs, and PBDEs in Finland. Environ Int. 2004;30:923–32.
Longnecker MP, Wolff MS, Gladen BC, Brock JW, Grandjean P, Jacobson JL, et al. Comparison of polychlorinated biphenyl levels across studies of human neurodevelopment. Environ Health Perspect. 2003;111:65–70.
Charnley G, Renate D, Kimbrough RD. Overview of exposure, toxicity, and risks to children from current levels of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related compounds in the USA. Food Chem Toxicol. 2006;44:601–15.
Australian Government Department of Health and Ageing. Human Health Risk Assessment of Dioxins in Australia, National Dioxins Program: Technical Report No. 12 [Internet]. Canberra: Australian Government Department of the Environment and Heritage; 2004. Available from http://www.environment.gov.au/node/21261.
Pesatori AC, Consonni D, Bachetti S, Zocchetti C, Bonzini M, Baccarelli A, et al. Short- and long-term morbidity and mortality in the population exposed to dioxin after the "Seveso Accident". Ind Health. 2003;41:127–38.
World Health Organization. Assessment of the Health Risk of Dioxins: Reevaluation of the Tolerable Daily Intake (TDI) [Internet]. Geneva: World Health Organization; 1998. Available from http://www.who.int/ipcs/publications/en/exe-sum-final.pdf.
Codex Alimentarius Commission.
Code of Practice for the Prevention and Reduction of Dioxin and Dioxin-like PCB Contamination in Foods and Feeds [Internet]. Codex Alimentarius: International Foods Standards; 2006. Available from http://www.codexalimentarius.org/input/download/standards/10693/CXP_062e.pdf.
Van den Berg M, Birnbaum LS, Denison M, De Vito M, Farland W, Feeley M, et al. The 2005 World Health Organization reevaluation of human and mammalian toxic equivalency factors for dioxins and dioxin-like compounds. Toxicol Sci. 2006;93:223–41.
Razman MR, Azlan A. Safety issues related to polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) in fish and shellfish in relation with current Malaysian laws. J Food Agric Environ. 2009;7:134–8.
Osman H, Suriah AR, Law EC. Fatty acid composition and cholesterol content of selected marine fish in Malaysian waters. Food Chem. 2001;73:55–60.
US Environmental Protection Agency. Exposure and Human Health Reassessment of 2,3,7,8–Tetrachlorodibenzo-p-dioxin and Related Compounds: Review Draft [Internet]. Washington: National Academy of Sciences; 2004. Available from http://www.epa.gov/ncea/pdfs/dioxin/nas-review.
Matthews V, Päpke O, Gaus C. PCDD/Fs and PCBs in seafood species from Moreton bay, Queensland, Australia. Marine Pollut Bull. 2008;57:392–402.
Leong YH, Chiang PN, Jaafar HJ, Gan CY, Majid MIA. Contamination of food samples from Malaysia with polychlorinated dibenzo-p-dioxins and dibenzofurans and estimation of human intake. Food Addit Contam. 2014;31:711–8.
Ruus A, Berge JA, Bergstad OA, Knutsen JA, Hylland K. Disposition of polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) in two Norwegian epibenthic marine food webs. Chemosphere. 2006;62:1856–68.
Koistinen J, Kiviranta H, Ruokojärvi P, Parmanne R, Verta M, Hallikainen A, et al. Organohalogen pollutants in herring from the northern Baltic Sea: concentrations, congener profiles and explanatory factors. Environ Poll. 2008;154:172–83.
Yoshimura T. Yusho in Japan. Ind Health. 2003;41:139–48.
Piskorska-Pliszczynska J, Maszewski S, Warenik-Bany M, Mikolajczyk S, Goraj L. Survey of persistent organochlorine contaminants (PCDD, PCDF, and PCB) in fish collected from the Polish Baltic fishing areas. The Sci World J. 2012;2012:Article ID 973292.
Nasir NNM, Azlan A, Razman MR, Ramli NA, Latiff AA. Dioxins and furans in demersal fish and shellfish from regions in west coast Peninsular Malaysia. J Food Agric Environ. 2011;9:72–8.
Azrina A, Nurul Nadiah MN, Muhammad Rizal R, Nor Azam R, Aishah AL. Investigation on the level of furans and dioxins in five commonly consumed fish species. Health Environ J. 2011;2:39–42.
Llobet JM, Domingo JL, Bocio A, Casas C, Teixidó A, Müller L. Human exposure to dioxins through the diet in Catalonia, Spain: carcinogenic and non-carcinogenic risk. Chemosphere. 2003;50:1193–200.
Moon HB, Ok G. Dietary intake of PCDDs, PCDFs and dioxin-like PCBs, due to the consumption of various marine organisms from Korea. Chemosphere. 2006;62:1142–52.
Zhao X, Zheng M, Liang L, Zhang Q, Wang Y, Jiang G. Assessment of PCBs and PCDD/Fs along the Chinese Bohai Sea coastline using mollusks as bioindicators. Arch Environ Contam Toxicol. 2005;49:178–85.
Focant JF, Eppe G, Pirard C, Massart AC, André JE, De Pauw E. Levels and congener distributions of PCDDs, PCDFs and non-ortho PCBs in Belgian foodstuffs: assessment of dietary intake. Chemosphere. 2002;48:167–79.
Sudaryanto A, Kunisue T, Tanabe S, Niida M, Hashim H. Persistent organochlorine compounds in human breast milk from mothers living in Penang and Kedah, Malaysia. Arch Environ Contam Toxicol. 2005;49:429–37.
Department of Fisheries. Annual Fisheries Statistics 2004 Volume 1 [Internet]. Kuala Lumpur: Department of Fisheries, Ministry of Agriculture, Malaysia; 2004. Available from http://www.dof.gov.my.
Wu K, Fesharaki F, Westley SB, Prawiraatmadja W. Oil in Asia and the Pacific: Production, Consumption, Imports, and Policy Options.
Honolulu: East-West Center; 2008.
Shukry A. Putrajaya Claims Reduced Poverty But UN Report Shows More Poor Malaysians [Internet]. The Malaysian Insider; 2014. Available from http://www.themalaysianinsider.com.
Ministry of Health Malaysia. Food Consumption Statistics of Malaysia 2003: For Adult Population Aged 18 to 59 Years, vol. 1. Putrajaya: Ministry of Health Malaysia; 2006.
Loutfy N, Fuerhacker M, Tundo P, Raccanelli S, El Dien AG, Ahmed T. Dietary intake of dioxins and dioxin-like-PCBs, due to the consumption of dairy products, fish/seafood and meat from Ismailia city, Egypt. Sci Total Environ. 2006;370:1–8.
Song Y, Wu N, Han J, Shen H, Tan Y, Ding G, et al. Levels of PCDD/Fs and DL-PCBs in selected foods and estimated dietary intake for the local residents of Luqiao and Yuhang in Zhejiang, China. Chemosphere. 2011;85:329–34.
Marin S, Villalba P, Diaz-Ferrero J, Font G, Yusà V. Congener profile, occurrence and estimated dietary intake of dioxins and dioxin-like PCBs in food marketed in the region of Valencia (Spain). Chemosphere. 2011;82:1253–61.
Assmuth T, Jalonen P. Risks and Management of Dioxins and Dioxin-like Compounds in Baltic Sea Fish: An Integrated Assessment. Copenhagen: Nordic Council of Ministers; 2005.
Taylor CR, Sober AJ. Sun exposure and skin disease. Annu Rev Med. 1996;47:181–91.
Noble WC. The Skin Microflora and Microbial Skin Disease. Cambridge, UK: Cambridge University Press; 2004.
Committee for Population, Family and Children Vietnam. Vietnam Demographic and Health Survey 2002 [Internet]. Vietnam: General Statistical Office of Vietnam; 2014. Available from http://dhsprogram.com.
Rodriguez-Pichardo A, Camacho F, Rappe C, Hansson M, Smith AG, Greig JB. Chloracne caused by ingestion of olive oil contaminated with PCDDs and PCDFs. Hum Exp Toxicol. 1991;10:311–22.
Mukerjee D. Health impact of polychlorinated dibenzo-p-dioxins: a critical review. J Air Waste Manage Assoc. 1998;48:157–65.
Guo YL, Yu ML, Hsu CC, Rogan WJ.
Chloracne, goiter, arthritis, and anemia after polychlorinated biphenyl poisoning: 14-year follow-up of the Taiwan Yucheng cohort. Environ Health Perspect. 1999;107:715–9.

The authors thank the management and laboratory staff of the Doping Control Centre, Universiti Sains Malaysia, Penang, for their cooperation in sample analysis. They also thank the management and staff of the Fisheries Development Authority of Malaysia and the Area Fishermen's Association for their assistance in the sampling of fresh seafood from the fishermen, as well as in the recruitment of respondents.

Author affiliations:
Department of Nutrition and Dietetics, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia: Azrina Azlan, Nurul Nadiah Mohamad Nasir & Hock Eng Khoo
Research Centre of Excellence for Nutrition and Non-communicable Disease, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia
Department of Medicine, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia: Norashikin Shamsudin
Department of Community Health, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia: Hejar Abdul Rahman
Research Centre for Sustainability Science and Governance (SGK), Institute for Environment and Development (LESTARI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia: Muhammad Rizal Razman

Correspondence to Azrina Azlan.

NNMN, AA, HAR and NS designed the experiment. NNMN and MRR conducted the HRGC/HRMS analysis. NNMN and HEK carried out the survey and interviewing sessions, as well as participated in its design and coordination. NNMN drafted the manuscript.
All authors read, revised and approved the final manuscript.

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Azlan, A., Mohamad Nasir, N.N., Shamsudin, N. et al. PCDD and PCDF exposures among fishing community through intake of fish and shellfish from the Straits of Malacca. BMC Public Health 15, 683 (2015). doi:10.1186/s12889-015-2044-3

Keywords: PCDD, PCDF, Congener
Propagation dynamics of a nonlocal time-space periodic reaction-diffusion model with delay

Ning Wang and Zhi-Cheng Wang, School of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu 730000, China
* Corresponding author: Zhi-Cheng Wang

Discrete & Continuous Dynamical Systems, doi: 10.3934/dcds.2021166 (Online First)
Received April 2021; revised August 2021; early access November 2021.
Fund Project: Both authors are supported by NNSF of China (12071193, 11731005) and NSF of Gansu Province of China (21JR7RA535).

Abstract. This paper is concerned with a nonlocal time-space periodic reaction-diffusion model with age structure. We first prove the existence and global attractivity of a time-space periodic solution of the model. Next, by means of a family of principal eigenvalues associated with linear operators, we characterize the asymptotic speed of spread of the model in the monotone and non-monotone cases. Furthermore, we introduce a notion of transition semi-waves for the model, and then, by constructing appropriate upper and lower solutions and using the results on the asymptotic speed of spread, we show that transition semi-waves of the model in the non-monotone case exist when their wave speed is above a critical speed, and that they no longer exist when their wave speed is below the critical speed.
It turns out that the asymptotic speed of spread coincides with the critical wave speed of transition semi-waves in the non-monotone case. In addition, we show that the obtained transition semi-waves are actually transition waves in the monotone case. Finally, numerical simulations for various cases are carried out to support our theoretical results.

Keywords: Nonlocal population model, time-space periodicity, delay, non-monotone, asymptotic speed of spread, transition waves.
Mathematics Subject Classification: Primary: 35B40, 35K57, 35C07; Secondary: 37N25, 92D25.
Citation: Ning Wang, Zhi-Cheng Wang. Propagation dynamics of a nonlocal time-space periodic reaction-diffusion model with delay. Discrete & Continuous Dynamical Systems, doi: 10.3934/dcds.2021166
List of statisticians

This list includes people who have made notable contributions to the theory or application of statistics, or to the related fields of probability or machine learning. Also included are actuaries and demographers.

A • Aalen, Odd Olai (1947–) • Abbey, Helen (1915–2001) • Abbott, Edith (1876–1957) • Abelson, Robert P. (1928–2005) • Abramovitz, Moses (1912–2000) • Achenwall, Gottfried (1719–1772) • Adelstein, Abraham Manie (1916–1992) • Adkins, Dorothy (1912–1975) • Ahsan, Riaz (1951–2008) • Ahtime, Laura • Aitchison, Beatrice (1908–1997) • Aitchison, John (1926–2016) • Aitken, Alexander (1895–1967) • Akaike, Hirotsugu (1927–2009) • Aliaga, Martha (1937–2011) • Allan, Betty (1905–1952) • Allen, R. G. D. (1906–1983) • Allison, David B. • Altman, Doug (1948–2018) • Altman, Naomi • Amemiya, Takeshi (1938–) • Anderson, Oskar (1887–1960) • Anderson, Theodore Wilbur • Anderson-Cook, Christine (1966–) • de Andrade, Mariza • Anscombe, Francis (1918–2001) • Anselin, Luc • Antonovska, Svetlana (1952–2016) • Armitage, Peter (1924–) • Armstrong, Margaret • Arrow, Kenneth • Ash, Arlene • Ashby, Deborah (1959–) • Asher, Jana • Ashley-Cooper, Anthony • Augustine, Benjamin C. • Austin, Oscar Phelps • Ayres, Leonard Porter B • Backer, Julie E. (1890–1977) • Bahadur, Raghu Raj (1924–1997) • Bahn, Anita K. (1920–1980) • Bailar, Barbara A. • Bailey, Rosemary A. (1947–) • Bailey-Wilson, Joan (1953–) • Baker, Rose • Balding, David • Bandeen-Roche, Karen • Barber, Rina Foygel • Barnard, George Alfred (1915–2002) • Barnard, Mildred (1908–2000) • Barnett, William A. • Bartels, Julius • Bartlett, M. S. (1910–2002) • Bascand, Geoff • Basford, Kaye • Basu, Debabrata (1924–2001) • Bates, Nancy • Batcher, Mary • Baxter, Laurence (1954–1996) • Bayarri, M. J.
(1956–2014) • Bayes, Thomas (1702–1761) • Beale, Calvin • Becker, Betsy • Bediako, Grace • Behm, Ernst • Benjamin, Bernard • Benzécri, Jean-Paul (1932–2019) • Berger, James • Berkson, Joseph (1899–1982) • Bernardo, José-Miguel • Berry, Don • Best, Alfred M. (1876–1958) • Best, Nicky • Betensky, Rebecca • Beveridge, William • Bhat, B. R. • Bhat, P. N. Mari • Bhat, U. Narayan • Bienaymé, Irénée-Jules • Bienias, Julia • Billard, Lynne (1943–) • Bingham, Christopher • Bird, Sheila (1952–) • Birnbaum, Allan (1923–1976) • Bishop, Yvonne (–2015) • Bisika, Thomas John • Bixby, Lenore E. (1914–1994) • Blackwell, David (1919–2010) • Blankenship, Erin • Bliss, Chester Ittner (1899–1979) • Block, Maurice • Bloom, David E. • Blumberg, Carol Joyce • Bock, Mary Ellen • Boente, Graciela • Bodio, Luigi • Bodmer, Walter • Bonferroni, Carlo Emilio (1892–1960) • Booth, Charles • Boreham, John • Borror, Connie M. (1966–2016) • Bortkiewicz, Ladislaus (1868–1931) • Bose, R. C. (1901–1987) • Botha, Roelof • Bottou, Léon • Bowley, Arthur Lyon (1869–1957) • Bowman, Kimiko O. (1927–2019) • Box, George E. P. (1919–2010) • Boyle, Phelim • Brad, Ion Ionescu de la (1818–1891) • Brady, Dorothy (1903–1977) • Brassey, Thomas • Braverman, Amy • Breiman, Leo • Breslow, Norman (1941–2015) • Brogan, Donna (1939–) • Brooks, Steve • Brown, Jennifer • Brown, Lawrence D. (1940–2018) • Broze, Laurence (1960–) • Buck, Caitlin E. (1964–) • Bunea, Florentina (1966–) • Burgess, Warren Randolph • Butler, Margaret K. (1924–2013) • Butucea, Cristina • Buzek, Józef • Bycroft, Christine C • Cai, T. Tony • Caird, James • Calder, Kate • Caldwell, John • Cam, Lucien Le (1924–2000) • Campion, Harry • Candès, Emmanuel • Cannon, Ann R. • Carriquiry, Alicia L. • Carroll, Mavis B. (1917–2009) • Carson, Carol S. • Cartwright, Ann (1925–) • Carver, Harry C. • Castles, Ian • Çetinkaya-Rundel, Mine • Chakrabarti, M. C. • Chalmers, George (1742–1825) • Chaloner, Kathryn (1954–2014) • Chambers, John M. • Champernowne, D. G. 
(1912–2000) • Chand, Rattan (1955–) • Chao, Anne • Charles, Enid (1894–1972) • Charlier, Carl (1862–1934) • Chebyshev, Pafnuty (1821–1894) • Chen, Louis Hsiao Yun • Chen, Cathy Woan-Shu • Chen, Jie • Chernoff, Herman (1923–) • Chervonenkis, Alexey (1938–2014) • Chetwynd, Amanda • Chiaromonte, Francesca • Chow, Yuan-Shih (1924–2022) • Chuprov, Alexander Alexandrovich (1874–1926) • Chuprov, Alexander Ivanovich (1841–1908) • Ciol, Marcia • Citro, Constance F. (1942–) • Claeskens, Gerda • Claghorn, Kate (1864–1938) • Clark, Colin (1905–1989) • Clark, Cynthia (1942–) • Clarke, Richard W. B. (1910–1975) • Clayton, David (1944–) • Clyde, Merlise A. • Coale, Ansley J. • Coats, Robert H. (1874–1960) • Cochran, William Gemmell (1909–1980) • Cockfield, Arthur • Coghlan, Timothy Augustine (1856–1926) • Cohen, Jacob • Cohen, Joel E. • Coifman, Ronald • Coleman, David • Collet, Clara (1860–1948) • Coman, Katharine (1857–1915) • Cook, Dianne • Cook, Len (1949–) • Cordeiro, Gauss Moutinho (1952–) • Cornfield, Jerome (1912–1979) • Courtney, Leonard • Cover, Thomas M. • Cowan, Cathy A. • Cox, David (1924–2022) • Cox, Gertrude Mary (1900–1978) • Cox, Richard Threlkeld (1898–1991) • Cramér, Harald (Sweden, 1893–1985) • Crome, August Friedrich Wilhelm • Crosby, James • Cudmore, Sedley • Cunliffe, Stella (1917–2012) • Cutler, Adele • Czado, Claudia • Czekanowski, Jan • Czitrom, Veronica D • Dabrowska, Dorota • Dagum, Estelle Bee • Dale, Angela (1945–) • Daniels, Henry (1912–2000) • Dantzig, David van (1900–1959) • Dantzig, George (1914–2005) • Darby, Sarah • Darwin, John (1923–2008) • Dasgupta, Nairanjana • Datta, Susmita • David, Florence Nightingale (1909–1993) • Davidian, Marie • Davies, Griffith (1788–1855) • Davis, Kingsley (1908–1997) • Dawid, Philip (1946–) • Day, Besse (1889–1986) • Daykin, Christopher (1948–) • Dean, Angela • Dean, Charmaine (1958–) • Deane, Charlotte (1975–) • DeGroot, Morris H. (1931–1989) • Delaigle, Aurore • DeLong, Elizabeth • Deming, W. 
Edwards (1900–1993) • Dempster, Arthur P. • Desrosières, Alain • Dewey, Davis Rich • Diaconis, Persi (1945–) • Díaz, María del Pilar • Diener-West, Marie • Dietz, E. Jacquelin • Dilke, Sir Charles • Ditlevsen, Susanne • Do, Kim-Anh • Dobson, Annette (1945–) • Dodge, Harold F. • Dodson, James • Doerge, Rebecca • Doll, Sir Richard (1912–2005) • Dominici, Francesca • Donnelly, Christl • Donnelly, Peter • Donoho, David • Doob, Joseph Leo • Dublin, Louis Israel • Duckworth, Frank • Dudley, Richard M. • Dudoit, Sandrine • Dukic, Vanja • Duncan, David F. • Duncan, Otis Dudley • Dunn, Halbert L. • Dunn, Olive Jean (1915–2008) • Dunnell, Karen (1946–) • Dunnett, Charles • Dupuis, Debbie • Dupuis, Josée • Durbin, James • Dvoretzky, Aryeh E • Easton, Brian • Eberly, Lynn • Eckler, A. Ross (1901–1991) • Eckler, A. Ross Jr. (1927–2016) • Eeden, Constance van (1927–2021) • Eden, Sir Frederick (1766–1809) • Edgeworth, Francis Ysidro (1845–1926) • Edwards, A. W. F. (1935–) • Efron, Bradley (1938–) • Eisenhart, Churchill (1913–1994) • Elashoff, Janet D. • Elderton, Ethel M. (1878–1954) • Elderton, William Palin • Eldridge, Marie D. (1926–2009) • Ellenberg, Susan S. • Elliot, Jane (1966–) • Elston, Robert C. • Engel, Ernst (1821–1896) • Engle, Robert F. • Ensor, Kathy • Erlang, A. K. (1878–1929) • Erritt, John • Esterby, Sylvia • Etheridge, Alison (1964–) • Ezekiel, Mordecai F • Fabri, Johann Ernst • Fallati, Johannes • Fan, Jianqing • Farr, William (1807–1883) • Farrer, Thomas • Fechner, Gustav (1801–1887) • Fellegi, Ivan (1935–) • Feller, William • Fernique, Xavier • Fienberg, Stephen • Finetti, Bruno de (1906–1985) • Finlaison, John • Finney, D. J. (1917–2018) • Fisher, Irving (1867–1947) • Fisher, Sir Ronald A. (1890–1962) • Fitz-Gibbon, Carol (1938–2017) • Fix, Evelyn (1904–1965) • Fleetwood, William (1656–1723) • Fleiss, Joseph L. (1937–2003) • Flournoy, Nancy (1947–) • Flux, A. 
William • Foot, David • Fowler, Henry • Fox, John (1945–) • Frankel, Lester • Franscini, Stefano • Freedman, David A. • Freedman, Ronald • Friedman, Milton • Frigessi, Arnoldo di Rattalma (1959–) • Frühwirth-Schnatter, Sylvia (1959–) • Fu, Rongwei • Fuentes, Montserrat • Furlong, Cathy G • Gage, Linda • Gain, Anil Kumar (1919–1978) • Gallant, A. Ronald • Gallup, George (1901–1984) • Galton, Francis (1822–1911) • Gantert, Nina • Gardner, Martha M. • Garfield, Joan • Gauquelin, Michel • Geary, Roy C. • Geer, Sara van de (1958–) • Geiringer, Hilda (1893–1973) • Geisser, Seymour (1929–2004) • Gel, Yulia • Geller, Nancy (1944–) • Gelman, Andrew (1965–) • Geman, Donald (1943–) • Geppert, Maria-Pia (1907–1997) • Ghazzali, Nadia (1961–) • Ghosh, Jayanta Kumar • Ghysels, Eric • Gibbons, Jean D. (1938–) • Giblin, Lyndhurst (1872–1951) • Giffen, Robert (1837–1910) • Gijbels, Irène • Gilbert, Ethel • Gile, Krista • Gilford, Dorothy M. (1919–2014) • Gill, Richard D. (1951–) • Gini, Corrado (1884–1965) • Glass, David • Glass, Gene V. (1940–) • Golbeck, Amanda L. • Goldberg, Lisa • Goldin, Rebecca • Goldman, Samuel • Goldschmidt, Christina • Goldsmith, Selma Fine (1912–1962) • Goldstein, Harvey • Gompertz, Benjamin (1779–1865) • Good, I. J. (1916–2009) • Good, Phillip (1937–) • Goodnight, James • Gordon, Nancy • Goschen, George • Gosset, William Sealy (known as "Student") (1876–1937) • Gotway Crawford, Carol A. • Goulden, Cyril (1897–1981) • Granger, Clive • Graunt, John (1620–1674) • Gray, Mary W. (1938–) • Grebenik, Eugene • Green, Peter • Greenland, Sander • Greenwood, Cindy • Greenwood, Major (1880–1949) • Griffiths, Robert • Griliches, Zvi • Grimmett, Geoffrey (1950–) • Guerry, André-Michel • Gumbel, Emil Julius (1891–1966) • Guttman, Louis • Guo, Ying • Guy, William (1810–1885) • Gy, Pierre (1924–2015) H • Haberman, Steven (1951–) • Hagood, Margaret Jarman (1907–1963) • Hahn, Marjorie • Haines, Linda M. 
• Hájek, Jaroslav (1926–1974) • Hajnal, John (1924–2008) • Halabi, Susan • Hald, Anders (1913–2007) • Hastie, Trevor • Hall, Peter Gavin (1951–2016) • Halloran, Betz • Halmos, Paul (1916–2006) • Hamaker, Ellen (1974–) • Hamilton, Lord George (1845–1927) • Hampe, Asta • Hand, David (1950–) • Harch, Bronwyn • Harcourt, Alison • Hardin, Garrett (1915–2003) • Hardin, Jo • Harris, Ted (1919–2006) • Harter, Rachel M. • Hartley, Herman Otto (1912–1980) • Hayek, Lee-Ann C. • Hayter, Henry Heylyn (1821–1895) • He, Xuming • Healy, Michael (1923–2016) • Hearron, Martha S. (1943–2014) • Heckman, Nancy E. • Hedges, Larry V. • Hein, Jotun (1956–) • Helmert, Friedrich Robert (1843–1917) • Henderson, Charles Roy (1911–1989) • Henningsen, Inge (1941–) • Herring, Amy H. • Herzberg, Agnes M. • Hertzberg, Vicki • Hess, Irene • Heyde, Chris (1939–2008) • Hibbert, Sir Jack (1932–2005) • Hickman, James C. (1927–2006) • Hilbe, Joseph (1944–2017) • Hill, Austin Bradford (1897–1991) • Hill, Joseph Adna (1860–1938) • Hinkley, David V. • Hjort, Nils Lid (1953–) • Ho, Weang Kee • Hoeffding, Wassily (1914–1991) • Hoem, Jan (1939–2017) • Hoeting, Jennifer A. • Hofmann, Heike (1972–) • Hollander, Myles (1941–) • Hollerith, Herman (1860–1929) • Holmes, Chris • Holmes, Susan P. • Holran, Virginia Thompson • Holt, Tim (1943–) • Holtsmark, Gabriel Gabrielsen • Hogben, Lancelot (1895–1975) • Hooker, Reginald Hawthorn (1867–1944) • Horn, Susan • Hotelling, Harold (1895–1973) • Hsiung, Chao Agnes • Hsu, Pao-Lu • Hu, Joan • Hubert, Mia • Huff, Darrell (1913–2001) • Hughes-Oliver, Jacqueline • Hunter, Sir William Wilson (1840–1900) • Hunter, William (1937–1986) • Hurwitz, Shelley • Hušková, Marie (1942–) • Hutchinson, Col • Hutton, Jane • Huzurbazar, Aparna V. • Huzurbazar, Snehalata V. • Huzurbazar, V. S. I • Ihaka, Ross (1954–) • Iman, Ronald L. 
• Inoue, Lurdes • Irony, Telba • Irwin, Joseph Oscar (1898–1982) • Isham, Valerie (1947–) • Ishikawa, Kaoru (1915–1989) • Isserlis, Leon (1881–1966) • Ivy, Julie J • Jacoby, Oswald (1902–1984) • Jaffrey, Thomas (1861–1953) • James, Bill (1949–) • Jaynes, Edwin Thompson (1922–1998) • Jefferys, William H. (1940–) • Jeffreys, Harold (1891–1989) • Jellinek, E. Morton (1890–1963) • Jenkins, Gwilym (1933–1982) • Jevons, William Stanley (1835–1882) • Jobson, Alexander (1875–1933) • Johnson, Norman Lloyd (1917–2004) • Johnston, Robert Mackenzie • Jones, Edward • Jones-Loyd, Samuel • Jordan, Michael I. • Jöreskog, Karl Gustav • Jouffret, Esprit • Juran, Joseph M. (1904–2008) • Jurin, James (1684–1750) K • Kac, Mark • Kahwachi, Wasfi • Kempthorne, Oscar • Kendall, David George (1918–2007) • Kendall, Sir Maurice (1907–1983) • Kennedy, Joseph C. G. • Khattree, Ravindra • Khmaladze, Estate V. • Kibria, B. M. G. (1963- ) • Kiefer, Jack • Kiær, Anders Nicolai • King, Gregory • King, Willford I. • Kingman, John (1939–) • Kish, Leslie (1910–2000) • Knibbs, George Handley • Kočović, Bogoljub • Kolmogorov, Andrey Nikolaevich (1903–1987) • Koopman, Bernard • Kott, Phillip • Krewski, Dan • Krumbein, William C. • Kruskal, Joseph (1929–2010) • Kruskal, William (1919–2005) • Krüger, André • Kuczynski, Robert René • Kulischer, Eugene M. • Kullback, Solomon (1907–1994) • Kulldorff, Gunnar (1927–2015) • Künsch, Hans-Rudolf • Kurnow, Ernest • Kuzmicich, Steve • Kuznets, Simon L • Lah, Ivo • Laird, Nan • Laslett, Peter • Laspeyres, Étienne (1834–1913) • Lathrop, Mark • Law, John (1671–1729) • Lawler, Gregory Francis • Lawrence, Charles • Lehmann, Erich Leo • Lemon, Charles • Leontief, Wassily • Levit, Boris • Lewis, Tony • Lexis, Wilhelm (1837–1914) • Li, C. C. • Li, David X. • Likert, Rensis • Lilliefors, Hubert (c. 1928–2008) • Lindeberg, Jarl Waldemar (1876–1932) • Lindley, Dennis V. (1923–2013) • Lindstedt, Anders • Lindstrom, Frederick B. 
• Linnik, Yuri (1915–1972) • Liu, Jun • Longman, Phillip • Lord, Frederic M. • Lorenz, Max O. • Lotka, Alfred J. (1880–1949) • Loève, Michel • Lubbock, John • Lundberg, Filip (1876–1965) M • MacGregor, John F. • Mahalanobis, Prasanta Chandra (1893–1972) • Manwar, Khandakar Hossain (1930–1999) • Mallet, Bernard • Malthus, Thomas Robert (1766–1834) • Mannheimer, Renato • Mantel, Nathan (1919–2002) • Mardia, Kantilal • Marpsat, Maryse • Marquardt, Donald (1929–1997) • Marquis, Frederick • Marschak, Jacob • Marshall, Herbert • Martin, Sir Richard • Massey, Kenneth • Masuyama, Motosaburo (1912–2005) • Mauchly, John • McClintock, Emory • McCrossan, Paul • McCullagh, Peter • McEvedy, Colin • McKendrick, Anderson Gray (1876–1943) • McLennan, Bill • McNemar, Quinn (1900–1986) • McVean, Gilean • Meeker, Royal • Meier, Paul, (1924–2011) • Meng, Xiao-Li (1963–) • Mercado, Joseph • Mihoc, Gheorghe • Milliken, George A. • Milliman, Wendell • Milne, Joshua • Milnes, Richard Monckton • Mitchell, Wesley Clair • Mitofsky, Warren • Mohn, Jakob • Moivre, Abraham de • Molina, Edward C. • Moore, Henry Ludwell • Moran, Pat • Mores, Edward Rowe • Morgan, William • Morris, Carl • Morrison, Winifred J. • Moser, Claus (1922–2015) • Mosteller, Frederick (1916–2006) • Mouat, Frederic J. • Moyal, José Enrique • Murphy, Susan N • Nair, Vijayan N. • Nason, Guy • Neill, Charles P. • Nelder, John (1924–2010) • Nesbitt, Cecil J. • Newmarch, William (1820–1882) • Neyman, Jerzy (1894–1981) • Nightingale, Florence (1820–1910) • Niyogi, Partha (1967–2010) • Noether, Gottfried E. • Nordling, Carl O. • Notestein, Frank W. O • Ogburn, William Fielding • Olshansky, S. 
Jay • Onicescu, Octav • Onslow, William • Orshansky, Mollie P • Paine, George • Pakington, John • Panaretos, John • Parzen, Emanuel (1929–2016) • Pearl, Raymond • Pearson, Egon (1895–1980) • Pearson, Karl (1857–1936) • Peirce, Charles Sanders • Pereira, Basilio de Bragança • Pena, Daniel • Peto, Julian • Peto, Richard • Petty, William (1623–1687) • Petty-Fitzmaurice, Henry • Piekałkiewicz, Jan • Pillai, K. C. Sreedharan • Pillai, Vijayan K • Pink, Brian • Pitman, E. J. G. (1897–1993) • Plackett, Robin • Playfair, William (1759–1823) • Pleszczyńska, Elżbieta • Pocock, Stuart • Pollak, Henry O. • Polson, Nicholas • Preston, Samuel H. • Price, Richard • Priestley, Maurice • Princet, Maurice • Punnett, Reginald • Pólya, George (1887–1985) • Puri, Madan Lal (1929–) Q • Quetelet, Adolphe (1796–1874) • Qazi Motahar Hossain (1897–1981) Bangladesh R • Chand Rattan (1955–) • Raftery, Adrian • Raghavarao, D. • Raiffa, Howard • Ralescu, Stefan (1952–) • Rao, C.R. (1920–2023) • Rasch, Georg (1901–1980) • Redington, Frank • Reid, Nancy • Reiersøl, Olav (1908–2001) • Rhodes, E. C. • Rice, Thomas Spring • Richardson, Sylvia • Rickman, John • Ripley, Brian Daniel (1952–) • Robbins, Herbert (1922–2001) • Robbins, Naomi (1937–) • Roberts, Gareth O. • Roberts, Harry V. • Robertson, Stuart A. • Robine, Jean-Marie • Robins, James • Robinson, Claude E. • Rosenthal, Jeff • Rousseeuw, Peter J. • Roy, Bimal Kumar • Roy, S. N. (1906–1966) • Rubin, Donald • Rubinow, I. M. • Rubinstein, Reuven • Ruggles, Steven • Russell, John • Ryder, Dudley S • Sagarin, Jeff • Saha, Jahar • Saint-Maur, Nicolas-François Dupré de • Salsburg, David • Samuel, Herbert • Samworth, Richard • Sanders, William • Savage, Leonard Jimmie (1917–1971) • Sawilowsky, Shlomo (1954–2021) • Scheffé, Henry (1907–1977) • Schlaifer, Robert (1915–1994) • Schultz, Henry • Schuster, Arthur (1851–1934) • Schweder, Tore (1943–) • Scott, Elizabeth (1917–1988) • Scurfield, Hugh Hedley • Searle, Shayle R. 
\begin{document} \title{Asymptotic Chow stability of toric Del Pezzo surfaces} \author{King-Leung Lee} \address{Department of Mathematics and Computer Science\\ Rutgers University, Newark NJ 07102-1222\\ USA} \email{[email protected]} \author{Zhiyuan Li} \address{Shanghai Center for Mathematical Science\\ Fudan University, 220 Handan Road, Shanghai, 200433 \\China} \email{[email protected]} \author{Jacob Sturm} \address{Department of Mathematics and Computer Science\\ Rutgers University, Newark NJ 07102-1222\\ USA} \email{[email protected]} \author{Xiaowei Wang} \address{Department of Mathematics and Computer Science\\ Rutgers University, Newark NJ 07102-1222\\ USA} \email{[email protected]} \date{\today} \maketitle \begin{abstract} In this short note, we study the asymptotic Chow polystability of the toric Del Pezzo surfaces that appear in the moduli space of K\"ahler-Einstein Fano varieties constructed in \cite{OSS}. \end{abstract} \section{Introduction} Since the invention of geometric invariant theory \cite{MFK} by David Mumford, GIT has been successfully applied to the construction of various kinds of moduli spaces, e.g. moduli spaces of stable vector bundles over a projective curve and moduli spaces of polarized varieties $(X,L)$. In particular, when $X$ is a canonically polarized manifold, it was shown by Mumford in dimension 1, by Gieseker \cite{Gie77} in dimension 2, and in arbitrary dimensions by Donaldson \cite{Don01} (making use of the work of Aubin and Yau \cite{Aub76, Yau} and Zhang \cite{Zha96}) that $(X,L=\mathscr{O}_X(K_X))$ is asymptotically Chow stable (see also \cite{PS2004}). That is, given a smooth canonically polarized variety $(X,\mathscr{O}_X(K_X))$, there exists an $r_0$ such that $(X,\mathscr{O}_X(rK_X))$ is Chow stable for any $r\geq r_0$. 
More generally, if $(X,L)$ is a polarized manifold, GIT also plays a role in the existence of constant scalar curvature K\"ahler metrics in the class of $L$ (see for example the survey article \cite{PS2009}). \vskip .1in In order to compactify the moduli space it is necessary to include {\em singular} varieties (e.g. by the stable reduction theorem for curves). In general, it is quite difficult to extend the above works to singular varieties, even in the $\dim=1$ case (cf. \cite{LiW15, Gie77}). On the other hand, it was shown in \cite{WX14} that {\em asymptotically} Chow stable varieties do not form a proper moduli space in general, by exhibiting explicit punctured families of {\em canonically polarized} varieties without an asymptotically Chow semistable filling. However, in \cite{LWX2014}, a proper moduli space of smoothable K-semistable {\em Fano varieties} is constructed. It is natural to ask whether or not the moduli space of $\mathbb{Q}$-Fano varieties can be realized as an asymptotic GIT moduli space, at least when the dimension is {\em small}.\footnote{We remark that Ono, Sano and Yotsutani succeeded in constructing a $\dim=7$ toric Fano K\"ahler-Einstein manifold that is not asymptotically Chow stable in \cite{OSN12}. But that did not rule out the asymptotic GIT approach completely; see Remark \ref{osn} for more explanation.} To answer this question, one first needs to understand the case $\dim=2$, in particular those Fano varieties that appear in the moduli spaces of K-semistable Del Pezzo surfaces constructed in \cite{OSS}. For smooth K\"ahler-Einstein Fano manifolds, by Mabuchi's extension \cite{Mab2004} of Donaldson's work \cite{Don01} we know that they are all asymptotically Chow polystable provided their automorphism groups are semi-simple. 
Unfortunately, it seems quite difficult to extend Donaldson's and Mabuchi's approach in \cite{Don01, Mab2004} to singular Fano varieties; at least to the best of our knowledge, so far there is {\em not a single non-smooth} example of a $\mathbb{Q}$-Fano variety whose asymptotic Chow stability is {\em known}. In this note we want to close this gap by studying the asymptotic Chow stability of some singular toric Del Pezzo surfaces. The original motivation was the following question, which was asked of us by Odaka and Laza. \begin{ques}\label{Q} Is the K-polystable cubic surface $X:=\{xyz=w^3\}\subset \mathbb{P}^3$ asymptotically Chow stable? \end{ques} To state our main result, let \begin{enumerate}\label{X} \item $(X_1=\mathbb{P}^2/(\mathbb{Z}/9\mathbb{Z}),\mathscr{O}_{X_1}(1):=\mathscr{O}_{X_1}(-3K_{X_1}))\subset (\mathbb{P}^6,\mathscr{O}_{\mathbb{P}^6}(1))$ with the $\mathbb{Z}/9\mathbb{Z}=\langle\xi=\exp(2\pi\sqrt{-1}/9)\rangle$-action generated by $\xi\cdot [z_0, z_1,z_2]=[z_0,\xi z_1,\xi^{-1} z_2]$. \item $(X_2=\mathbb{P}^1\times\mathbb{P}^1/(\mathbb{Z}/4\mathbb{Z}),\mathscr{O}_{X_2}(1):=\mathscr{O}_{X_2}(-2K_{X_2}))\subset (\mathbb{P}^6,\mathscr{O}_{\mathbb{P}^6}(1))$ with the $\mathbb{Z}/4\mathbb{Z}=\langle\xi\rangle$-action generated by $\xi\cdot ([z_1,z_2],[w_1,w_2])=([\sqrt{-1} z_1,z_2],[-\sqrt{-1} w_1,w_2])$. \item $(X_3=\{xyz=w^3\},\mathscr{O}_{X_3}(1):=\mathscr{O}_{X_3}(-K_{X_3}))\subset(\mathbb{P}^3,\mathscr{O}_{\mathbb{P}^3}(1))$; \item $(X_4=Q_1\cap Q_2,\mathscr{O}_{X_4}(1):=\mathscr{O}_{X_4}(-K_{X_4}))\subset (\mathbb{P}^4,\mathscr{O}_{\mathbb{P}^4}(1))$ with $$\displaystyle \left\{ \begin{array}{llll} Q_1: &z_0z_1+z_2z_3+z_4^2 &=0& \\ Q_2: & \lambda z_0z_1+\mu z_2z_3+z_4^2&=0, & \lambda\ne \mu \end{array} \right. $$ \end{enumerate} These are the {\em only} $\mathbb{Q}$-Gorenstein smoothable toric K\"ahler-Einstein (i.e. K-polystable) Del Pezzo surfaces of $\deg=1,2,3,4$, thanks to the work of \cite[Theorem 2.3.3]{Spotti12}. 
In particular, they are parametrized in the proper moduli spaces constructed in \cite[Theorem 4.1, 4.2, 4.3, 5.13, 5.28]{OSS}. Then our main result is the following: \begin{theo}\label{main} Let $(X_i,\mathscr{O}_{X_i}(k))$ be as in the list \eqref{X}. Then $(X_i,\mathscr{O}_{X_i}(k))$ is \begin{enumerate} \item Chow unstable for any $k\geq 1$ when $i=1$; \item Chow polystable for $k\geq 2$ when $i=2$; \item Chow polystable for $k\geq 1$ when $ i=3,4$. \end{enumerate} \end{theo} Our paper is organized as follows: in section two we review some basic facts of GIT; in particular, we reduce the checking of stability to a purely combinatorial problem, thanks to the fact that the $X_i$ are toric. In section three, we carry out the main estimate needed for the proof of the last case of Theorem \ref{main}. In section four we extend the main estimate used in section three and prove the second case of Theorem \ref{main}. It turns out this is the most delicate calculation. In the last section, we establish the first case by showing the non-vanishing of the Chow weight of a torus action. We want to remark that examples of asymptotically Chow unstable Fano toric K\"ahler-Einstein manifolds were first found in \cite{OSY}. \section{Basics on GIT and symplectic quotient} In this section we include a symplectic quotient proof of Kempf's instability result \cite[Corollary 4.5]{Kemp1978}, which reduces the checking of Chow stability of a projective variety to that for a smaller group, provided the variety admits a large symmetry group. \subsection{Kempf's instability theory} Let $G$ be a reductive algebraic group acting on a polarized pair $(Z,\mathscr{O}_{Z}(1))$, i.e. $\mathscr{O}_Z(1)$ is $G$-linearized. Let $K<G$ be a {\em maximal compact subgroup}. 
Fixing a $K$-invariant Hermitian metric with a positive curvature form $\omega$ on $\mathscr{O}_Z(1)$, we obtain a holomorphic Hamiltonian $K$-action on $(Z,\omega)$ with moment map $$\mu_K:Z\longrightarrow \mathfrak{k}.$$ Let $z\in Z$ be a point with stabilizer $G_z<G$. \begin{defi} We say a $G$-orbit $G\cdot z\subset Z$ is {\em $G$-extremal} with respect to the $G$-action on $(Z,\mathscr{O}_{Z}(1))$ if and only if there is a maximal compact subgroup $K<G$ and an $h\in G$ such that $\mu_K(h\cdot z)\in \mathfrak{k}_{h\cdot z}$, the stabilizer of $h\cdot z$ in $\mathfrak{k}$. \footnote{Notice that, if one translates the K\"ahler form $\omega$ on $Z$ by an $h\in G$, then the above definition can be reformulated as follows: for any {\em prefixed} maximal compact $K<G$ there exists an $h\in G$ such that $\mu_K(h\cdot z)\in \mathfrak{k}_{h\cdot z}$. } This is equivalent to saying that $h\cdot z$ is a critical point of $$|\mu_K|_\mathfrak{k}^2=\langle\mu_K,\mu_K\rangle_\mathfrak{k} :Z\longrightarrow \mathbb{R}$$ where $\langle\cdot,\cdot\rangle_\mathfrak{k}$ is a $K$-invariant inner product on $\mathfrak{k}$. We say $z$ is {\em $G$-polystable} if there is a maximal compact subgroup $K<G$ such that $\mu_K(z)=0$. \end{defi} Now we are ready to give a simple symplectic quotient proof of a slight improvement of Kempf's instability theorem \cite[Corollary 4.5]{Kemp1978}. \begin{theo}\label{kempf} Let $G_0<G_z$ be a {\em reductive} subgroup. Then $G\cdot z$ is {\em $G$-extremal (resp. polystable)} if and only if $C(G_0)\cdot z$ is {\em $C(G_0)$-extremal (resp. polystable)} with respect to the $C(G_0)$-action on $(Z,\mathscr{O}_{Z}(1))$ induced by the embedding $i:C(G_0)\hookrightarrow G$, where $C(G_0)<G$ is the {\em centralizer} of $G_0$ in $G$. \end{theo} \begin{proof} Let us fix a maximal compact subgroup $K<G$ such that $(K_{0})^{\mathbb{C}}=G_0$ with $K_{0}:=K\cap G_0$. 
We define $$K_H:=C(K_{0})=\{g\in K \mid \mathrm{Ad}_gh=h,\forall h\in K_{0}\}<K,$$ the {\em centralizer} of $K_{0}$ in $K$, and $H:=K_H^\mathbb{C}$. Suppose $H\cdot z$ is $H$-extremal. Then there is an $h\in H$ such that $$ \mu_{K_H}(h\cdot z)=i^\ast \mu_{K}(h\cdot z)\in \mathfrak{k}_H\cap \mathfrak{k}_{h\cdot z} \ (\text{\em resp.}=0 \text{ if $z$ is $H$-polystable}), $$ where $ i^\ast:\mathfrak{k}\rightarrow \mathfrak{k}_H$ is the {\em orthogonal projection} with respect to an $\mathrm{Ad}_K$-invariant inner product $\langle\cdot,\cdot\rangle_\mathfrak{g}$ on $\mathfrak{g}$. Since $h\in H=C(G_0)$, we have $\mathrm{Ad}_h G_0=G_0<G_{h\cdot z}$. Without loss of generality we may assume that $h=e$, the identity (i.e. replace $h\cdot z$ by $z$ from the beginning). Then \begin{equation}\label{perp} i^\ast\mu_K(z)\in \mathfrak{k}_H\cap \mathfrak{k}_z \ (\text{\em resp. }\mu_K(z)\perp \mathfrak{k}_H \text{ if $z$ is $H$-polystable}). \end{equation} On the other hand, for any $k\in K_{0}<G_0<G_z$ we have $$ \mu_K(z)=\mu_K(k\cdot z)=\mathrm{Ad}_k\mu_K(z), $$ from which we deduce that $\mu_K(z)\in \mathfrak{c}(K_{0})=\mathfrak{k}_H$. This combined with \eqref{perp} implies that $$ \mu_K(z)\in \mathfrak{k}_z \ (\text{\em resp.}=0 \text{ if $z$ is $H$-polystable}), $$ i.e. $z$ is $G$-extremal ({\em resp.} $G$-polystable). Conversely, suppose $G\cdot z$ is extremal. Then we have $\|\mu_K(z)\|=\min_{G\cdot z}\|\mu_K\|$ by \cite[Theorem 6.2]{Ness84} and $\mu_K(z)\in \mathfrak{c}(K_z)\subset\mathfrak{k}_H$ (where $\mathfrak{c}(K_z)$ is the Lie algebra of the centralizer $C(K_z)<K$) by \cite[Theorem 10]{Wang04}, from which we conclude $$\|\mu_K(z)\|=\min_{G\cdot z}\|\mu_K\|=\min_{H\cdot z}\|\mu_{K_H}\|=\|\mu_{K_H}(z)\|.$$ Thus $C(G_0)\cdot z$ is extremal and our proof is completed. \end{proof} \begin{coro}Let us continue with the notation of Theorem \ref{kempf}. Then $z\in Z$ is $G$-semistable if and only if $z$ is $C(G_0)$-semistable. 
\end{coro} \begin{proof} By our assumption $z$ is $H$-semistable with $H=C(G_0)$, so there is a $$z_0\in \overline{H\cdot z}\subset \overline{G\cdot z}\subset Z$$ such that $z_0$ is $H$-polystable. By Theorem \ref{kempf}, we know $z_0$ is $G$-polystable and our proof is completed. \end{proof} \subsection{Toric varieties} Let $\triangle\subset \mathbb{R}^n$ be any {\em convex polytope}; we introduce a {\em cone} $\mathrm{PL}(\triangle;k)$ in $C^0(k\triangle,\mathbb{R})$, the space of continuous functions on $k\triangle$. To begin with, let $\phi:k\triangle\cap\mathbb{Z}^n\to \mathbb{R}$ be any function and define $$\mathrm{graph}_\phi:=\mathrm{Conv} \left\{ \bigcup_{x\in k\triangle\cap\mathbb{Z}^n} \{\left.(x,t)\in \mathbb{R}^n\times \mathbb{R} \right| t\leq \phi(x)\}\right\}, $$ the {\em convex hull} of the set $\bigcup_{x\in k\triangle\cap\mathbb{Z}^n} \{\left.(x,t)\in \mathbb{R}^n\times \mathbb{R} \right| t\leq \phi(x)\}$. \begin{defi} Let $\triangle\subset \mathbb{R}^n$ be any convex polytope. We define \begin{enumerate} \item A function $C^0(k\triangle,\mathbb{R})\ni f_\phi :k\triangle\to \mathbb{R}$ is said to be {\em associated} to a $\phi:k\triangle\cap\mathbb{Z}^n\to \mathbb{R}$ if $$ f_\phi(x):=\max\{t\mid (x,t)\in \mathrm{graph}_\phi\}:k\triangle\longrightarrow \mathbb{R}. $$ \item We define the {\em cone} \begin{equation}\label{pl} \mathrm{PL}(\triangle;k):=\{f_\phi \ \left|\ \phi:k\triangle\cap\mathbb{Z}^n\to \mathbb{R}\right. \}\subset C^0(k\triangle,\mathbb{R}). \end{equation} \end{enumerate} \end{defi} Now to apply Theorem \ref{kempf} to our situation, let $( X_{\triangle },L_{\triangle }) $ be any polarized toric variety ({\em not necessarily smooth}) with moment polytope $\triangle$. Let $\mathrm{Aut}(X_\triangle)$ denote the {\em automorphism group} of the pair $(X_\triangle,L_\triangle)$; then $T=(\mathbb{C}^\times)^{n}<\mathrm{Aut}(X_\triangle)$ is a maximal torus. 
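In dimension $n=1$ the associated function $f_\phi$ is simply the upper concave envelope of the values $\phi(x)$ at the lattice points of $[0,k]$. The following Python sketch is our own illustration (the function name and the upper-hull sweep are not taken from the text); it computes $f_\phi$ at the lattice points by building the upper hull of the points $(x,\phi(x))$:

```python
# Illustration (not from the paper): for n = 1 the function f_phi associated
# to phi : {0,...,k} -> R is the upper concave envelope of (x, phi(x)).

def upper_concave_envelope(values):
    """Given phi(0..k) as a list, return f_phi sampled at the lattice points."""
    pts = list(enumerate(values))
    hull = []  # vertices of the upper hull, swept left to right
    for p in pts:
        # pop while the last hull point lies on or below the chord to p
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    # evaluate the piecewise-linear upper hull back at the lattice points
    out, j = [], 0
    for x, _ in pts:
        while j + 1 < len(hull) and hull[j + 1][0] <= x:
            j += 1
        if hull[j][0] == x:
            out.append(hull[j][1])
        else:
            (x1, y1), (x2, y2) = hull[j], hull[j + 1]
            out.append(y1 + (y2 - y1) * (x - x1) / (x2 - x1))
    return out

phi = [0, 3, 1, 4, 0]
env = upper_concave_envelope(phi)  # [0, 3, 3.5, 4, 0]
```

The output is concave and majorizes $\phi$, with equality at the hull vertices, exactly as the definition of $f_\phi$ via $\mathrm{graph}_\phi$ prescribes.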
\begin{defi} Let $(X_\triangle,L_\triangle)$ be a polarized toric variety with moment polytope $\triangle$. We define the {\em Weyl group} $W_\triangle:=N(T)/T$ with $$T=(\mathbb{C}^\times)^{n}<N(T):=\{g\in \mathrm{Aut}(X_\triangle, L_\triangle)\mid g\cdot T\cdot g^{-1}=T\}<\mathrm{Aut}(X_\triangle)$$ being the {\em normalizer} of $T<\mathrm{Aut}(X_\triangle)$. Clearly, $W_\triangle$ acts on $\triangle\subset \mathbb{R}^n\cong \mathfrak{t}$ via the adjoint action. \end{defi} Consider a projective embedding $$ (X_\triangle, L_\triangle^k)\longrightarrow (\mathbb{P}^{N},\mathscr{O}_{\mathbb{P}^N}(1)) $$ with $$N+1=\chi_\triangle(k)=\dim H^0(X_\triangle, L^k_\triangle)=|k\triangle\cap\mathbb{Z}^n| \text{ and }\deg X_\triangle=d.$$ Let $$ \mathrm{Chow}_k(X_\triangle)\!\!:=\!\!\left\{\left.(H_0,\cdots, H_{n})\in ((\mathbb{P}^N)^\vee)^{n+1}\right| H_0\cap\cdots\cap H_n\cap X_\triangle\ne\varnothing\right\} \in \mathbb{P}^{d,n;N}\!\!\!:=\mathbb{P}(\mathrm{Sym}^d(\mathbb{C}^{N+1})^{\otimes (n+1)}) $$ denote the {\em $k$-th Chow form} associated to the embedding above. With this notation understood, we state a result due to H. Ono \cite{Ono2013}. \begin{theo}[Theorem 1.1, \cite{Ono2013}]\label{ono} Let $(X_{\triangle },L_{\triangle })$ be a polarized toric variety (not necessarily smooth) with moment polytope $\triangle \subset \mathbb{R}^{n}$. For a fixed positive integer $k$, $\mathrm{Chow}_k(X_\triangle)$ of \ $(X_{\triangle },L_{\triangle }^{k})\subset (\mathbb{P}^N,\mathscr{O}_{\mathbb{P}^N}(1)) $ is {\em polystable} with respect to the action of the {\em subgroup of diagonal matrices} in $\mathrm{SL}(N+1)$ if and only if \begin{equation}\label{ch-wt} \frac{1}{\mathrm{vol}( k\triangle ) }\int_{k\triangle }g-\frac{1}{\chi_{\triangle }( k) }\sum_{x\in k\triangle \cap \mathbb{Z} ^{n} }g( x) \geq 0, \end{equation} for any $g\in \mathrm{PL}(\triangle ;k)$, with equality if and only if $g$ is affine. 
\end{theo} Now let $(Z,\mathscr{O}_Z (1))=(\mathbb{P}^{d,n;N},\mathscr{O}_{\mathbb{P}^{d,n;N}}(1))$, $G=\mathrm{SL}(N+1)$ and $G_0=N(T) <G_{\mathrm{Chow}_k(X_\triangle)}=\mathrm{Aut}(X_\triangle)$. Then the centralizer $C(G_0)<\mathrm{SL}(N+1)$ is contained in a {\em maximal torus} (e.g. the subgroup of diagonal matrices) of $\mathrm{SL}(N+1)$. In particular, Theorem \ref{ono} together with Theorem \ref{kempf} imply the following: \begin{coro}\label{W-inv} Let $(X_\triangle,L_\triangle)$ be a polarized toric variety with moment polytope $\triangle$ as above and let $W=W_\triangle$ be the Weyl group. Then for any $k\in \mathbb{N}$, $( X_{\triangle },L_{\triangle }^{k}) $ is Chow polystable (i.e. $\mathrm{Chow}_k(X_\triangle)\in \mathbb{P}^{d,n;N}$ is GIT polystable with respect to the $\mathrm{SL}(N+1)$-action on $(\mathbb{P}^{d,n;N},\mathscr{O}_{\mathbb{P}^{d,n;N}}(1))$) if and only if \eqref{ch-wt} holds for any $$ g\in \mathrm{PL}( \triangle ;k)^W=\{g\in \mathrm{PL}( \triangle ;k)\mid g(w\cdot x)=g(x)\ \forall w\in W \},$$ with equality if and only if $g$ is affine. \end{coro} Theorem \ref{ono} was originally proved in \cite{Ono2013} for {\em integral} Delzant polytopes by applying the powerful machinery developed by Gelfand-Kapranov-Zelevinsky in \cite{GKZ}. Here, for the reader's convenience, we give a slightly simpler and more direct proof. \begin{proof}[Proof of Theorem \ref{ono}] Without loss of generality, we may assume $L_\triangle$ is very ample and $k=1$. Also, since the left hand side of \eqref{ch-wt} is invariant under adding a constant, we may assume $g\geq 0$. Let $(\mathcal{X},{\mathscr L})\to \mathbb{P}^1$ be any $T$-equivariant test configuration of $(X_\triangle, L_\triangle)$, so $\mathcal{X}$ is an $(n+1)$-dimensional toric variety. 
Let $$\triangle_g:=\{ (x, y)\in \triangle\times \mathbb{R}_{\geq 0} \mid 0\leq y\leq g(x)\}\subset \mathbb{R}^n\times \mathbb{R}_{\geq 0}$$ be the moment polytope of $\mathcal{X}$, where $g$ is a non-negative rational piecewise-linear concave function defined over $\triangle$. Then we have \begin{equation}\label{tri-g} \mathrm{vol}(\triangle_g)=\int_{\triangle }g(x)dx\text{ and }\chi_{\triangle_g}(1)-\chi_\triangle(1)=\sum_{x\in \triangle \cap \mathbb{Z}^{n} }g( x). \end{equation} By the proof of \cite[Proposition 4.2.1]{Don02}, we know the {\em weight} of the $\mathbb{C}^\times$-action on $\wedge^{\chi_\triangle(m)} H^0(\mathcal{X}_0,{\mathscr L}^m|_{\mathcal{X}_0})$ is given by \begin{equation}\label{w-m} w_m=\chi_{\triangle_g}(m)-\chi_{\triangle}(m) \end{equation} with asymptotic expansions (cf. \cite[Proposition 4.1.3 and equation (4.2.2)]{Don02}) \begin{equation} \chi_\triangle(m)=m^n \mathrm{vol}(\triangle)+O(m^{n-1}) \text{ and }\chi_{\triangle_g}(m)=m^{n+1} \mathrm{vol}(\triangle_g)+O(m^{n}) . \end{equation} On the other hand, the {\em Chow weight} for the degeneration $(\mathcal{X},{\mathscr L})\to \mathbb{P}^1$ is given by the {\em normalized leading coefficient (n.l.c.)} of the top degree term $\displaystyle\frac{m^{n+1}}{(n+1)!}$ in the degree-$(n+1)$ polynomial in $m$: $$\displaystyle w_m-m\chi_\triangle(m)\frac{w_1}{\chi_\triangle(1)}, $$ where the second term is added in order to {\em normalize} the $\mathbb{C}^\times$-action on $H^0(\mathcal{X}_0,{\mathscr L}|_{\mathcal{X}_0})$ to be {\em special linear} (cf. \cite[Theorem 3.9 and equation (3.8)]{RT2007}). 
Then by \eqref{w-m} we obtain \begin{eqnarray*} &&w_m-m\chi_\triangle(m)\frac{w_1}{\chi_\triangle(1)}\\ &=&\chi_{\triangle_g}(m)-\chi_{\triangle}(m)-m\chi_\triangle(m)\frac{\chi_{\triangle_g}(1)-\chi_\triangle(1)}{\chi_\triangle(1)}\\ &=&m^{n+1}\mathrm{vol}(\triangle_g)-m^{n+1}\mathrm{vol}(\triangle)\frac{\chi_{\triangle_g}(1)-\chi_\triangle(1)}{\chi_\triangle(1)}+O(m^n)\\ &=&m^{n+1}\mathrm{vol}(\triangle)\left(\frac{1}{\mathrm{vol}(\triangle ) }\int_{\triangle }g-\frac{1}{\chi_{\triangle}( 1) }\sum_{x\in \triangle \cap \mathbb{Z}^{n} }g( x)\right)+O(m^n) \end{eqnarray*} where for the last identity we have used \eqref{tri-g}. Hence the Chow weight for the $T$-equivariant test configuration $(\mathcal{X},{\mathscr L})\to\mathbb{P}^1$ is precisely $$ (n+1)!\mathrm{vol}(\triangle)\left(\frac{1}{\mathrm{vol}(\triangle ) }\int_{\triangle }g-\frac{1}{\chi_{\triangle}( 1) }\sum_{x\in \triangle \cap \mathbb{Z}^{n} }g( x)\right), $$ and our proof is completed. \end{proof} \begin{coro}[Corollary 4.7, \cite{Ono2013}]\label{co-ono} If $(X_\triangle, L_\triangle^k)$ is Chow semistable for $k\in \mathbb{N}$ then \begin{equation}\label{bary} \frac{1}{\chi_\triangle (k)}\sum_{x \in k\triangle\cap \mathbb{Z}^n} x=\frac{1}{\mathrm{vol}(k\triangle)}\int_{k\triangle} x dx. \end{equation} \end{coro} \begin{rema} The identity \eqref{bary} is equivalent to the vanishing of Chow weight for the group $T=(\mathbb{C}^\times)^{n}<\mathrm{Aut}(X_\triangle)$. In particular, \eqref{bary} implies that the left hand side of \eqref{ch-wt} is invariant under addition of an {\em affine function} to $g$. 
\end{rema} \begin{exam} Let $(X_\triangle,L_\triangle)=( \mathbb{P}^{1},\mathscr{O}_{\mathbb{P}^1}(1) ) $; then \begin{eqnarray}\label{P1} &&\frac{1}{\mathrm{vol}( [ 0,k] ) }\int_{0}^{k}g-\frac{1}{\chi_{\triangle }( k) }\sum_{x\in k\triangle \cap \mathbb{Z}}g( x) =\frac{1}{k}\int_{0}^{k}g-\frac{1}{k+1}\sum_{i=0}^{k}g(i) \geq 0, \ \forall g \text{ concave } \end{eqnarray} follows from the trapezoidal rule (which underestimates the integral of a concave function) together with the fact that \begin{equation}\label{1/2} \frac{1}{k}\left(\frac{1}{2}g(0)+g(1)+\cdots+g(k-1)+\frac{1}{2}g(k)\right) \geq\frac{1}{k+1}\left(g(0)+g(1)+\cdots+g(k-1)+g(k)\right),\ \ \forall \text{ concave } g\geq 0. \end{equation} \end{exam} \begin{comment} \begin{exam} $( \mathbb{P}^2,\mathscr{O}_{\mathbb{P}^2}(k) ) $ \begin{center} \begin{figure} \caption{Decompostion of $\triangle$.} \end{figure} \end{center} The number of triangles is $k^2$ and $h^0(\mathscr{O}_{\mathbb{P}^2}(k))=\displaystyle \frac{(k+1)(k+2)}{2}$. By concavity we have \begin{eqnarray*} \frac{1}{\mathrm{vol}(\triangle ) }\int_{\triangle }g &\geq&\frac{1}{\mathrm{vol}(\triangle ) }\left( 6\cdot\mathrm{vol}(\triangle_0)\cdot\!\!\!\!\! \sum_{\mathbf{i}\in \triangle^\circ \cap \mathbb{Z}[\frac{1}{k}]} \frac{g(\mathbf{i})}{3} +3\cdot\mathrm{vol}(\triangle_0)\cdot\!\!\!\!\! 
\sum_{\mathbf{i}\in \partial\triangle \cap \mathbb{Z}[\frac{1}{k}]} \frac{g(\mathbf{i})}{3}\right)\\ &\geq&\frac{1}{k^2 }\left(\sum_{\mathbf{i}\in \triangle^\circ \cap \mathbb{Z}[\frac{1}{k}]}2g(\mathbf{i}) + \sum_{\mathbf{i}\in \partial\triangle \cap \mathbb{Z}[\frac{1}{k}]} g(\mathbf{i})\right)\\ \end{eqnarray*} We claim that $$ \frac{1}{k^2 }\left(\sum_{\mathbf{i}\in \triangle^\circ \cap \mathbb{Z}[\frac{1}{k}]}2g(\mathbf{i}) + \sum_{\mathbf{i}\in \partial\triangle \cap \mathbb{Z}[\frac{1}{k}]} g(\mathbf{i})\right) \geq \frac{1}{(k+1)(k+2) }\left(\sum_{\mathbf{i}\in \triangle^\circ \cap \mathbb{Z}[\frac{1}{k}]}2g(\mathbf{i}) + \sum_{\mathbf{i}\in \partial\triangle \cap \mathbb{Z}[\frac{1}{k}]}2 g(\mathbf{i})\right) $$ which is equivalent to $$ (3k+2)\sum_{\mathbf{i}\in \triangle^\circ \cap \mathbb{Z}[\frac{1}{k}]}2g(\mathbf{i}) \geq (k^2-3k-2)\sum_{\mathbf{i}\in \partial\triangle \cap \mathbb{Z}[\frac{1}{k}]} g(\mathbf{i}) $$ \end{exam} \begin{rema}Notice that $X={xyz=w^3}\subset \mathbb{P}^3$ is the only $K-polystable$ stable points among cubic surfaces. Our result identifies the GIT stable and K-stable and asymptotic Chow stable locus. \end{rema} \end{comment} \section{$X_3$ and $X_4$}\label{m-est} In this section, we will treat $X_{\triangle_3}$ and $X_{\triangle_4}$ simultaneously, since both $\triangle_i,\ i=3,4$, allow a decomposition with the {\em same} fundamental domain $\triangle_0$ (cf. Figure \ref{de12}). Let \begin{enumerate} \item $(X_{\triangle_3},L_{\triangle_3})=(X_3,\mathscr{O}_{X_3}(-K_{X_3}))=\{xyz=w^3\}\subset(\mathbb{P}^3,\mathscr{O}_{\mathbb{P}^3}(1))$. \item $(X_{\triangle_4},L_{\triangle_4})=(X_4,\mathscr{O}_{X_4}(-K_{X_4}))=Q_1\cap Q_2\subset (\mathbb{P}^4,\mathscr{O}_{\mathbb{P}^4}(1))$ with \begin{equation}\label{d=4} \left\{ \begin{array}{llll} Q_1: &z_0z_1+z_2z_3+z_4^2 &=0& \\ Q_2: & \lambda z_0z_1+\mu z_2z_3+z_4^2&=0,& \lambda\ne \mu \end{array} \right. 
\end{equation} \end{enumerate} with moment polytopes $\triangle_i,\ i=3,4$, given in Figure \ref{de12}. \begin{center} \begin{figure} \caption{$\triangle_0\subset\triangle_3$ and $\triangle_0\subset\triangle_4$.} \label{de12} \end{figure} \end{center} Notice both $\triangle_i,\ i=3,4$, are invariant under the action of the Weyl group $W_i:=W_{\triangle_i}$, where $$W_3=D_3=\left\langle\sigma_3:=\begin{bmatrix}0 & -1\\ 1 & -1\end{bmatrix},\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix} \right \rangle \text{ and }W_4=D_4=\left\langle\sigma_4:=\begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix},\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix} \right \rangle <\mathrm{GL}(2,\mathbb{Z}).$$ To prove Theorem \ref{main}, first we establish the necessary condition \eqref{bary}, which is a consequence of the following \begin{lemm}\label{ma-0} Let $\mu$ be any measure defined on $\triangle$ and $\sigma\in\mathrm{SL}(2,\mathbb{R})$ be an element of {\em order} $d\geq 2$ satisfying \begin{enumerate} \item $\sigma(\triangle)=\triangle$; \item $\sigma^\ast d\mu=d\mu$. \end{enumerate} Suppose further $\triangle$ admits a decomposition $\displaystyle\triangle=\bigsqcup_{k=0}^{d-1} \sigma^k(\triangle_0)$ such that $\sigma^i(\triangle_0^\circ)\cap\sigma^j(\triangle_0^\circ)=\emptyset$ for $i\ne j$, where $\triangle_0^\circ$ denotes the interior of a closed subset $\triangle_0\subset\triangle$. Then $$ \int_\triangle x d\mu(x)=0. $$ \end{lemm} \begin{proof} Since $\sigma\in\mathrm{SL}(2,\mathbb{R})$ has finite order $d\geq 2$, the number $1$ is not an eigenvalue of $\sigma$, so the identity $(\sigma-\mathrm{id})\sum_{k=0}^{d-1}\sigma^k=\sigma^d-\mathrm{id}=0$ forces $$ \sum_{k=0}^{d-1}\sigma^k=0 $$ as a $2\times 2$ matrix. Hence \begin{eqnarray*} \int_{\triangle}x d\mu(x) &=&\sum_{k=0}^{d-1}\int_{\sigma^k(\triangle_0)}x d\mu(x)=\sum_{k=0}^{d-1}\int_{\triangle_0}(x\circ\sigma^k)\cdot(\sigma^k)^\ast d\mu(x)\\ &=&\sum_{k=0}^{d-1}\int_{\triangle_0}(x\circ \sigma^k)\cdot d\mu(x)=\int_{\triangle_0}x\circ\Big(\sum_{k=0}^{d-1}\sigma^k\Big)d\mu(x)=0 \end{eqnarray*} and our proof is completed. 
\end{proof} \tikzset{ font={\fontsize{9pt}{12}\selectfont}} \begin{center} \begin{figure} \caption{Barycenter division of $k\triangle_0$ and $T$. } \label{de-0} \end{figure} \end{center} By adding an affine function to $g$ if necessary, Lemma \ref{ma-0} and Corollary \ref{co-ono} imply that we only need to establish \eqref{ch-wt} for $g$ satisfying the following additional assumption: \begin{assu} \label{ass} Let $g\in \mathrm{PL}(\triangle_i;k)^{W_i},\ i=3,4$, satisfy: \begin{enumerate} \item $g(0)=\displaystyle\max_{x\in\triangle_i} g(x)$; \item $g$ vanishes on the vertices of $\triangle_i$. \end{enumerate} \end{assu} To achieve this, we will establish the following two {\bf key estimates:} \begin{itemize} \item {\em Trapezoid estimate for $T=\mathrm{Conv}(O,p,q)\subset\mathbb{R}^2$}, the convex hull of $(O,p,q)$: \begin{equation}\label{T-trap} \frac{1}{\mathrm{vol}(T)}\int_T g\geq \frac{g(0)+g(p)+g(q)}{3} \end{equation} with equality if and only if $g$ is affine. \item {\em Trapezoid estimate for the standard subdivision}: \begin{eqnarray}\label{s-trap} \int_{k\triangle} g &\geq& \frac{\mathrm{vol}(\triangle_{00})}{3}\left(6\sum_{x\in (k\triangle_i)^\circ\cap \mathbb{Z}^2} g(x)+3\sum_{x\in(k\partial \triangle_i)\cap \mathbb{Z}^2} g(x)-6\alpha g(0)\right)\nonumber\\ &=&\sum_{x\in (k\triangle_i)^\circ\cap \mathbb{Z}^2} g(x)+\frac{1}{2}\sum_{x\in(k\partial \triangle_i)\cap \mathbb{Z}^2} g(x)-\alpha g(0)\nonumber\\ &=&\sum_{x\in (k\triangle_i)\cap \mathbb{Z}^2} g(x)-\frac{1}{2}\sum_{x\in(k\partial \triangle_i)\cap \mathbb{Z}^2} g(x)-\alpha g(0) \end{eqnarray} with equality if and only if $g$ is affine, where $\mathrm{vol}(\triangle_{00})=\frac{1}{2}$ (cf. Figure \ref{de-0}) and $\alpha=\displaystyle\frac{6-\mathrm{ord}(\sigma_i)}{6},\ i=3,4$. \end{itemize} \begin{proof}[Proof of Theorem \ref{main}] To simplify our notation, in the rest of the proof we will use $\triangle$ to denote $\triangle_i,\ i=3,4$. 
Let us assume the validity of \eqref{T-trap} and \eqref{s-trap} for the moment and our goal is to prove \begin{equation}\label{m} \frac{1}{\mathrm{vol}(k\triangle)}\int_{k\triangle} g\geq \frac{1}{\chi_\triangle (k)} \sum_{x\in (k\triangle)\cap\mathbb{Z}^n} g(x). \end{equation} for $g$ satisfying Assumption \ref{ass}. By applying the Pick formula (cf. \cite{Pic1899} and \cite{Pul1979}) $$\chi_\triangle(k)=\mathrm{vol}(k\triangle)+\frac{b}{2}+1 \text{ with } b=|(k\partial\triangle)\cap\mathbb{Z}^n|$$ the left hand side of \eqref{m} can be written as \begin{eqnarray*} &&\left(\frac{1}{\mathrm{vol}(k\triangle)}-\frac{1}{\chi_\triangle(k)}\right)\int_{k\triangle} g+ \frac{1}{\chi_\triangle(k)} \int_{k\triangle} g\\ (\text{ by }\eqref{s-trap}\ )&\geq &\frac{\frac{b}{2}+1}{\mathrm{vol}(k\triangle)\cdot \chi_\triangle(k)}\int_{k\triangle} g+ \frac{1}{\chi_\triangle(k)}\left(\sum_{x\in k\triangle^\circ\cap \mathbb{Z}^n} g(x)+\frac{1}{2}\sum_{x\in(\partial k\triangle)\cap \mathbb{Z}^n} g(x)-\alpha g(0)\right)\\ ( \text{ by }\eqref{T-trap} \ )&\geq & \frac{\frac{b}{2}+1}{b\cdot\mathrm{vol}(T)\cdot\chi_\triangle(k)}\cdot\frac{\mathrm{vol}(T)}{3}\left(2\sum_{x\in (k\partial \triangle)\cap\mathbb{Z}^n}g(x)+bg(0) \right)+\\ &&+ \frac{1}{\chi_\triangle(k)}\left(\sum_{x\in k\triangle^\circ\cap \mathbb{Z}^n} g(x)+\frac{1}{2}\sum_{x\in(k\partial \triangle)\cap \mathbb{Z}^n} g(x)-\alpha g(0)\right)\\ &\geq & \frac{\frac{b}{2}+1}{3b\cdot\chi_\triangle(k)}\left(2\sum_{x\in (k\partial \triangle)\cap\mathbb{Z}^n}g(x)+bg(0) \right)+\\ &&+ \frac{1}{\chi_\triangle(k)}\left(\sum_{x\in k\triangle^\circ\cap \mathbb{Z}^n} g(x)+\frac{1}{2}\sum_{x\in(k\partial \triangle)\cap \mathbb{Z}^n} g(x)-\alpha g(0)\right)\\ \end{eqnarray*} So to prove \eqref{m}, all we need is \begin{equation} \frac{1+b/2}{3b}\left(2\sum_{x\in (k\partial \triangle)\cap\mathbb{Z}^n}g(x)+bg(0) \right)\geq\left( \frac{1}{2}\sum_{x\in(k\partial \triangle)\cap \mathbb{Z}^n} g(x)+\alpha g(0)\right) \end{equation} which is 
equivalent to $$ \left(\frac{1+b/2}{3}-\alpha\right)g(0)\geq \left(\frac{1}{2}-\frac{1+b/2}{3b}\cdot 2\right)\sum_{x\in (k\partial \triangle)\cap\mathbb{Z}^n}g(x). $$ Using the fact $\displaystyle g(0)=\max_{x\in k\triangle} g(x)\geq \frac{1}{b}\sum_{x\in (k\partial \triangle)\cap\mathbb{Z}^n}g(x)$, we know that \eqref{m} is a consequence of the following: $$ \frac{1}{b}\geq\frac{\displaystyle\frac{1}{2}-\frac{2+b}{3b}}{\displaystyle\frac{2+b}{6}-\alpha}=\frac{b-4}{b^2+(2-6\alpha)b}, $$ which is equivalent to $4\geq 6\alpha-2$. But this always holds, since $\alpha=\displaystyle\frac{6-\mathrm{ord}(\sigma_i)}{6} \leq 1, \ i=3,4$. And our proof of Theorem \ref{main} is completed for $X_3$ and $X_4$. \end{proof} \begin{proof}[Proof of \eqref{T-trap} and \eqref{s-trap}] \eqref{T-trap} follows from the concavity of $g$ and the trapezoidal rule. For \eqref{s-trap}, we triangulate $\triangle_0$ into the union of {\em basic triangles} $\triangle_{00}$'s as illustrated in Figure \ref{de-0} and then extend this triangulation to the whole $\triangle_i$ via the Weyl group $W_i$. Then \eqref{s-trap} follows by noticing that \begin{enumerate} \item each interior lattice point of $(k\triangle_i)^\circ$ that is {\em not} the point $O$ is a vertex of exactly $6$ basic triangles $\triangle_{00}$; \item each boundary lattice point of $\partial(k\triangle_i)$ that is not a vertex of $k\triangle_i$ is a vertex of exactly $3$ basic triangles $\triangle_{00}$; \item the point $O$ is a vertex of exactly $\mathrm{ord}(\sigma_i), i=3,4$, basic triangles $\triangle_{00}$ (cf. Figure \ref{de-0}). \end{enumerate} And our proof of \eqref{T-trap} and \eqref{s-trap} is thus completed.
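The concavity argument behind \eqref{T-trap} can also be checked numerically: for any concave $g$ on a triangle $T$, the average of $g$ over $T$ dominates the average of the three vertex values. The sketch below verifies this for a hypothetical smooth concave test function $g(x,y)=-(x^2+y^2)$ on the standard simplex (smooth rather than piecewise linear, since concavity is all the estimate needs):

```python
# Numerical sanity check of the trapezoid estimate (T-trap): for a
# concave g on a triangle T, the average of g over T is at least the
# average of the vertex values. Test function and triangle are
# illustrative choices, not taken from the paper.

def g(x, y):
    # a smooth concave function
    return -(x * x + y * y)

vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vertex_mean = sum(g(x, y) for x, y in vertices) / 3.0  # = -2/3

# Approximate the average of g over {x, y >= 0, x + y <= 1} by sampling
# one interior point per small cell of an n x n grid.
n = 400
samples = []
for i in range(n):
    for j in range(n - i):
        x = (i + 1.0 / 3.0) / n
        y = (j + 1.0 / 3.0) / n
        samples.append(g(x, y))
triangle_avg = sum(samples) / len(samples)  # close to -1/3 for this g

assert triangle_avg >= vertex_mean
```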
\end{proof} \section{$X_2$.} Recall that $(X_{\triangle_2},L_{\triangle_2})=(X_2=\mathbb{P}^1\times\mathbb{P}^1/(\mathbb{Z}/4\mathbb{Z}),\mathscr{O}_{X_2}(-2K_{X_2}))\subset(\mathbb{P}^6,\mathscr{O}_{\mathbb{P}^6}(1))$ with the $\mathbb{Z}/4\mathbb{Z}=\langle\xi\rangle$-action generated by $\xi\cdot ([z_1,z_2],[w_1,w_2])=([\sqrt{-1} z_1,z_2],[-\sqrt{-1} w_1,w_2])$. Then the Weyl group $W_2=W_{\triangle_2}=\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$ and $$ \triangle_2=\mathrm{Conv}\{(-2,0),(2,0),(0,1),(0,-1)\} \text{ (cf. Figure \ref{de2})}. $$ It turns out this is the {\em trickiest} case among all $\{X_i\}_{1\leq i\leq 4}$. \begin{center} \begin{figure} \caption{$\triangle_0\subset k\triangle_2$ with $k=2$.} \label{de2} \end{figure} \end{center} To handle this case, we need to extend the main estimate \eqref{s-trap} (cf. \eqref{s-trap1}) used in the last section. Let $$a_i:=(2i,k-i)\in \mathbb{R}^2\text{ for } 0\leq i\leq k$$ denote the {\em integral points} on the outer boundary edge of $\triangle_0\subset k\triangle_2$ (cf. Figure \ref{de2}), and let \begin{equation}\label{bT} T_i:=\mathrm{Conv} (0,a_i,a_{i+1}),\ 0\leq i\leq k-1,\text{ and }b:=|\partial (k\triangle_2)\cap \mathbb{Z}^2|. \end{equation} Then $\triangle_0=\displaystyle\bigcup_{i=0}^{k-1} T_i$ and we have the following: \begin{lemm}\label{ax} Let $g\in \mathrm{PL}(\triangle_2;k)^{W_2}$ satisfy \begin{equation}\label{s-trap1} \sum_{p\in (k\triangle_2)\cap\mathbb{Z}^2}g(p)-\int_{k\triangle_2}g(x)dV\leq \dfrac{1}{2}\sum_{p\in \partial (k\triangle_2) \cap \mathbb{Z}^2}g(p)+\sum_{p\in k\triangle_2}\delta_k(p)g(p) \end{equation} for a {\em fixed} function $\delta_k(p)$ satisfying \[\sum_{p\in k\triangle_2} \delta_k(p)=1,\] with equality holding if and only if $g$ is constant.
Then the estimate \begin{equation}\label{b+2} \dfrac{(b+2)|W_2|}{2b\cdot\mathrm{vol}(T_0)}\sum_i\int_{T_i}g\geq \dfrac{1}{2}\sum_{p\in \partial (k\triangle_2) \cap \mathbb{Z}^2}g(p)+\sum_{p\in k\triangle_2}\delta_k(p)g(p), \end{equation} with equality holding if and only if $g$ is constant, implies \begin{equation}\label{mm} \dfrac{1}{\mathrm{vol}(k\triangle_2)}\int_{k\triangle_2}g(x)dV\geq \dfrac{1}{|(k\triangle_2)\cap\mathbb{Z}^2|}\sum_{p\in (k\triangle_2)\cap \mathbb{Z}^2}g(p), \text{ where }|(k\triangle_2)\cap\mathbb{Z}^2|=\chi_{\triangle_2}(k), \end{equation} with equality if and only if $g$ is constant. \end{lemm} \begin{proof} By \eqref{s-trap1}, \eqref{mm} follows from \[\left(\dfrac{1}{\mathrm{vol}(k\triangle_2)}-\dfrac{1}{|(k\triangle_2) \cap \mathbb{Z}^2|}\right)\int_{k\triangle_2}g\geq \dfrac{1}{|(k\triangle_2) \cap \mathbb{Z}^2|}\left(\dfrac{1}{2}\sum_{p\in \partial (k\triangle_2) \cap \mathbb{Z}^2}g(p)+\sum_{p\in( k\triangle_2)\cap\mathbb{Z}^2}\delta_k(p)g(p)\right),\] which is equivalent to \begin{equation}\label{b+} \left(\dfrac{|(k\triangle_2) \cap \mathbb{Z}^2|}{\mathrm{vol}(k\triangle_2)}-1\right)\int_{k\triangle_2}g\geq \dfrac{1}{2}\sum_{p\in \partial (k\triangle_2) \cap \mathbb{Z}^2}g(p)+\sum_{p\in (k\triangle_2)\cap\mathbb{Z}^2}\delta_k(p)g(p).
\end{equation} By subdividing $k\triangle_2$ into $b$ triangles as in Figure \ref{de2}, that is, $k\triangle_2=\displaystyle\bigcup_{w\in W_2}w\cdot \triangle_0$ with $\triangle_0=\displaystyle\bigcup_{i=0}^{k-1} T_i$, we have \[\mathrm{vol}(k\triangle_2)=|W_2|\sum_{i=0}^{k-1}\mathrm{vol}(T_i)=b\cdot\mathrm{vol}(T_0),\] since $\mathrm{vol}(T_i)=\mathrm{vol}(T_0)$ for all $i$ and $b=4k=|W_2|\cdot k$. Using the fact $b=|\partial (k\triangle_2)\cap \mathbb{Z}^2|$ and plugging $g=1$ into \eqref{b+}, we deduce \[\left(\dfrac{|k\triangle_2 \cap \mathbb{Z}^2|}{b\cdot\mathrm{vol}(T_0)}-1\right)b\cdot\mathrm{vol}(T_0)= {1\over 2}b+1.\] Hence \[\left(\dfrac{|k\triangle_2 \cap \mathbb{Z}^2|}{b\cdot\mathrm{vol}(T_0)}-1\right)=\dfrac{b+2}{2b\cdot\mathrm{vol}(T_0)},\] and our proof is completed by plugging this into \eqref{b+}. \end{proof} Now to prove Theorem \ref{main}, one needs to establish the estimates \eqref{s-trap1} and \eqref{b+2} for an appropriate $\delta_k$ in Lemma \ref{ax} (cf. \eqref{s-trap1}). {\em Step 1. Establishing \eqref{s-trap1} for an appropriate $\delta_k$.} Using the $W_2=\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ symmetry of $k\triangle_2$, it suffices to consider $\triangle_0$ as in Figure \ref{de2}. Now let us do a sub-division of $\triangle_0\subset k\triangle_2=\mathrm{Conv}\{(0,\pm k), (\pm 2k,0)\}$: \[\triangle_0=\triangle_{00}\cup\triangle_{01} \text{ (cf. Figure \ref{de30}) }\] with $\triangle_{00}:=\mathrm{Conv}((0,k),(0,0),(k,0))$ and $\triangle_{01}:=\mathrm{Conv}((0,k),(k,0),(2k,0))$. Clearly, $\triangle_{00}$ is $\mathrm{SL}(2,\mathbb{Z})$-equivalent to $\triangle_{01}$. \begin{center} \begin{figure} \caption{$\triangle_0\subset k\triangle_2$ with $k=2$.} \label{de30} \end{figure} \end{center} Now let us introduce a triangulation of $k\triangle_2$ by first triangulating $\triangle_0$: \begin{itemize} \item using the standard triangulation of $\triangle_{00}$ (cf.
Figure \ref{de-0}); \item transporting the triangulation of $\triangle_{00}$ to $\triangle_{01}$ via the $\mathrm{SL}(2,\mathbb{Z})$-equivalence. \end{itemize} Applying \eqref{s-trap}, we obtain \[\int_{k\triangle_2}g(x) \geq \sum_{p\in (k\triangle_2)\cap\mathbb{Z}^2}g(p)-\dfrac{1}{2}\sum_{p\in \partial (k\triangle_2)\cap\mathbb{Z}^2}g(p)+\dfrac{1}{6}g(0,\pm k)-\dfrac{1}{6}g(\pm2 k,0)-\dfrac{1}{3}g(\pm k,0)-\dfrac{1}{3}g(0,0),\] where \begin{enumerate} \item for $g(0,\pm k)$, we have $\dfrac{1}{6}=\dfrac{4}{6}-\dfrac{3}{6}$, since the vertices $(0,\pm k)$ are shared by $4$ triangles instead of $3$ in the triangulation above; \item for $g(\pm 2k,0 )$, we have $-\dfrac{1}{6}=\dfrac{2}{6}-\dfrac{3}{6}$, since the vertices $(\pm 2k,0)$ are shared by $2$ triangles instead of $3$ in the triangulation above; \item for $g(\pm k,0 )$ and $g(0,0)$, we have $-\dfrac{1}{3}=\dfrac{4}{6}-\dfrac{6}{6}$, since these points are shared by $4$ triangles instead of $6$ (as $(\pm k,0)$ are boundary points of $\triangle_{00}$ and $\triangle_{01}$ but interior points of $k\triangle_2$). \end{enumerate} Hence \begin{eqnarray*} \sum_{p\in (k\triangle_2)\cap\mathbb{Z}^2}g(p)-\int_{k\triangle_2}g(x) &\leq &\dfrac{1}{2}\sum_{p\in \partial (k\triangle_2)\cap\mathbb{Z}^2}g(p)-\dfrac{1}{6}g(0,\pm k)+\dfrac{1}{6}g(\pm2 k,0)+\dfrac{1}{3} g(\pm k,0)+\dfrac{1}{3}g(0,0)\\ &\leq &\dfrac{1}{2}\sum_{p\in \partial (k\triangle_2)\cap\mathbb{Z}^2}g(p)-\dfrac{1}{6}g(0,\pm k)+\dfrac{1}{6}g(\pm2 k,0)+g(0,0), \end{eqnarray*} since $g(0,0)=\displaystyle\max_{k\triangle_2} g\geq g(\pm k,0)$ for $g\in \mathrm{PL}(\triangle_2;k)^{W_2}$. Thus we established \eqref{s-trap1} with $\delta_k:k\triangle_2\to \mathbb{R}$ defined by \begin{align} \delta_k(p)=\left\{\begin{array}{lll} 1 & p=(0,0)\\ -1/6 & p=(0,\pm k)\\ 1/6 & p=(\pm 2k,0)\\ 0 & \text{ otherwise}. \end{array}\right. \end{align} {\em Step 2. Establishing \eqref{b+2}.}
That is, for all $g\in \mathrm{PL}(\triangle_2;k)^{W_2}$ we need to show \[\dfrac{b+2}{2b\cdot \mathrm{vol}(T_0)}\sum_{w\in W_2}\sum_i\int_{w\cdot T_i}g\geq \dfrac{1}{2}\sum_{p\in \partial (k\triangle_2 )\cap \mathbb{Z}^2}g(p)+\sum_{p\in k\triangle_2}\delta_k(p)g(p).\] Let us first consider $ T_0= \mathrm{Conv}((0,0),(2,k-1),(0,k))$. Applying \eqref{T-trap}, we have \begin{equation}\label{t0} \int_{T_0}g\geq \dfrac{\mathrm{vol}(T_0)}{3}(g(0)+g(2,k-1)+g(0,k)). \end{equation} By the $W_2$-symmetry of $g$, we have $g(-2,k-1)=g(2,k-1)$; this together with the concavity of $g$ implies $ g(0,k-1)\geq g(2,k-1)$ and $g(0,k-1)\geq g(0,k)$, so \begin{equation}\label{k-1} g(0,k-1)\geq \dfrac{g(2,k-1)+g(0,k)}{2}. \end{equation} Therefore, writing $T_0=T_{00}\cup T_{01}$ with $T_{00}:=\mathrm{Conv}((0,0),(2,k-1),(0,k-1))$ and $T_{01}:=\mathrm{Conv}((0,k),(2,k-1),(0,k-1))$, we have \begin{align*} \frac{1}{\mathrm{vol}(T_0)}\int_{T_0} g =&\frac{1}{\mathrm{vol}(T_0)}\left(\int_{T_{00}} g+\int_{T_{01}} g\right)\\ \geq & \dfrac{\mathrm{vol}(T_{00})}{\mathrm{vol}(T_{0})}\left(\dfrac{g(0,0)+g(2,k-1)+g(0,k-1)}{3}\right) +\dfrac{\mathrm{vol}(T_{01})}{\mathrm{vol}(T_{0})}\left(\dfrac{g(0,k)+g(2,k-1)+g(0,k-1)}{3}\right)\\ \geq& \dfrac{k-1}{k}\left(\dfrac{g(0,0)+g(2,k-1)+\frac{g(2,k-1)+g(0,k)}{2}}{3}\right) +\dfrac{1}{k}\left(\dfrac{g(0,k)+g(2,k-1)+\frac{g(2,k-1)+g(0,k)}{2}}{3}\right)\\ =&\left(\dfrac{k-1}{k}\right)\dfrac{g(0,0)}{3}+\left(\dfrac{1}{3}+\dfrac{1}{6}\right)g(2,k-1)+\left(\dfrac{1}{6}+\dfrac{1}{3k}\right)g(0,k)\\ =&\left(1-\dfrac{1}{k}\right)\dfrac{g(0,0)}{3}+\left(\dfrac{1}{3}+\dfrac{1}{6}\right)g(2,k-1)+\left(\dfrac{1}{3}-\dfrac{1}{6}+\dfrac{1}{3k}\right)g(0,k)\\ =&\dfrac{1}{3}\left(g(0,0)+g(2,k-1)+g(0,k)\right) -\dfrac{1}{3k}g(0,0)+\dfrac{1}{6}g(2,k-1)+\left(-\dfrac{1}{6}+\dfrac{1}{3k}\right)g(0,k). \end{align*} \begin{center} \begin{figure} \caption{$\triangle_0\subset k\triangle_2$ with $k=3$.} \label{de31} \end{figure} \end{center} Combining the estimates with the ones for $T_i, i\ne 0$ based on \eqref{T-trap}, we obtain \begin{align*} &\dfrac{(b+2)|W_2|}{2b\mathrm{vol}(T_0)}\sum_{i=0}^{k-1}\int_{T_i}g\\ \geq&
\dfrac{b+2}{2b}\left(\dfrac{bg(0)+ 2\sum_{p\in \partial (k\triangle_2)\cap\mathbb{Z}^2}g(p)}{3}\right)\\ &+\dfrac{b+2}{2b}\left(-\dfrac{|W_2|}{3k}g(0)+\dfrac{|W_2|}{6}\cdot g(2,k-1)+\left(\dfrac{2}{3k}-\dfrac{1}{3}\right)(g(0,k)+g(0,-k))\right)\\ =&:\sum_{p\in (k\triangle_2)\cap\mathbb{Z}^2} \eta(p)g(p), \end{align*} where $\eta: (k\triangle_2)\cap\mathbb{Z}^2\to \mathbb{R}$ is defined by the right hand side of the above inequality. To establish \eqref{b+2}, it suffices to show $$ \sum_{p\in (k\triangle_2)\cap\mathbb{Z}^2} \eta(p)g(p)\geq \dfrac{1}{2}\sum_{p\in \partial (k\triangle _2)\cap \mathbb{Z}^2}g(p)+\sum_{p\in k\triangle_2}\delta_k(p)g(p), $$ which is equivalent to \begin{equation}\label{DeTi} (\eta(0)-\widetilde{\delta}_k(0))g(0)\geq \sum_{p\in ((k\triangle_2) \cap \mathbb{Z}^2)\setminus \{0\}}(\widetilde{\delta}_k(p)-\eta(p))g(p), \end{equation} with $\widetilde{\delta}_k(p)$ being defined by the following identity: \[ \sum_{p\in (k\triangle_2) \cap \mathbb{Z}^2}\widetilde{\delta}_k(p)g(p)= \dfrac{1}{2}\sum_{p\in \partial (k\triangle_2) \cap \mathbb{Z}^2}g(p)+\sum_{p\in k\triangle_2}\delta_k(p)g(p).\] As $b=4k$, for $p\neq (0,0)$, $\widetilde{\delta}_k(p)-\eta(p)$ is given by \[(\widetilde{\delta}_k-\eta)(p)= \left\{\begin{matrix} 0 &\text{ if }& p\in (k\triangle_2)^\circ\\ {\ }\\ \dfrac{1}{2}-\left(\dfrac{b+2}{2b}\right)\left(\dfrac{2}{3}\right)=\dfrac{1}{6}-\dfrac{1}{6k} &\text{ if }& p=(\pm 2i,\pm (k-i))\text{ for }i\neq 0,1,k\\ {\ }\\ \left(\dfrac{1}{2}+\dfrac{1}{6}\right)-\left(\dfrac{b+2}{2b}\right)\left(\dfrac{2}{3}\right)=\dfrac{1}{3}-\dfrac{1}{6k}&\text{ if }&p=(\pm 2k, 0)\\ {\ }\\ \dfrac{1}{2}-\left(\dfrac{b+2}{2b}\right)\left(\dfrac{2}{3}+\dfrac{1}{6}\right)=\dfrac{1}{12}-\dfrac{5}{24k}&\text{ if }&p=(\pm 2, \pm (k-1))\\ {\ }\\ \left(\dfrac{1}{2}-\dfrac{1}{6}\right)-\left(\dfrac{b+2}{2b}\right)\left(\dfrac{2}{3}+\left(\dfrac{2}{3k}-\dfrac{1}{3}\right)\right) =\dfrac{1}{6}-\dfrac{5}{12k}-\dfrac{1}{6k^2}&\text{ if }&p=(0,\pm k).
\end{matrix}\right.\] All of these are non-negative when $k\geq k_0$ for some $k_0$ independent of $g$. As a consequence (noting that $\sum_{p}(\widetilde{\delta}_k-\eta)(p)=0$, by evaluating both defining identities at $g\equiv 1$), we have $$ (\eta(0)-\widetilde{\delta}_k(0))g(0)\geq \sum_{p\in ((k\triangle_2) \cap \mathbb{Z}^2)\setminus\{0\}}(\widetilde{\delta}_k(p)-\eta(p))g(0)\geq \sum_{p\in ((k\triangle_2) \cap \mathbb{Z}^2)\setminus\{0\}}(\widetilde{\delta}_k(p)-\eta(p))g(p), $$ with equality if and only if $g$ is constant, and hence \eqref{b+2} is established. The proof for the case $X_2=X_{\triangle_2}$ is completed by applying Lemma \ref{ax}. \begin{rema} One notices that the estimate \eqref{k-1} can be improved to \[g(0,k-1)\geq\lambda g(2,k-1)+(1-\lambda)\dfrac{k-1}{k}g(0,k)\] with $0\leq \lambda \leq 1$. By choosing an appropriate $\lambda$, one can verify \eqref{b+2} for $k\geq 2$. \end{rema} \section{$X_1$.} Recall $(X_{\triangle_1},L_{\triangle_1})=(X_1=\mathbb{P}^2/(\mathbb{Z}/9\mathbb{Z}),\mathscr{O}_{X_1}(-3K_{X_1}))\subset (\mathbb{P}^6,\mathscr{O}_{\mathbb{P}^6}(1))$ with the $\mathbb{Z}/9\mathbb{Z}=\langle\xi=\exp(2\pi\sqrt{-1}/9)\rangle$-action generated by $\xi\cdot [z_0, z_1,z_2]=[z_0,\xi z_1,\xi^{-1} z_2]$. Then the Weyl group of $X_1$ is $W_1=\mathbb{Z}/2\mathbb{Z}$ and $$\triangle_1=\mathrm{Conv}\{(1,2), (2,1), (-3,-3)\}\subset \mathbb{R}^2 \text{ (cf. Figure \ref{de3})}.$$ \begin{center} \begin{figure} \caption{$ k\triangle_1$ with $k=2$.} \label{de3} \end{figure} \end{center} \begin{theo}\label{x1} $X_1$ is Chow unstable. \end{theo} To see this, first we notice that Lemma \ref{ma-0} implies \begin{lemm}\label{m-int} $\displaystyle\int_{\triangle_1} x\,dx=(0,0)$. \end{lemm} By the necessity of Chow semistability \eqref{bary}, Theorem \ref{x1} follows from Lemma \ref{m-int} and the following \begin{prop} \[\frac{1}{\chi_{\triangle_1}(k)}\sum_{x=(x_1,x_2)\in (k\triangle_1)\cap \mathbb{Z}^2} x=\dfrac{4\cdot (-k,-k)}{9k^2+3k+2}\neq 0 \text{ with } \chi_{\triangle_1}(k)=|(k\triangle_1)\cap \mathbb{Z}^2|=\frac{9k^2+3k+2}{2}.
\] In particular, it violates \eqref{bary} and $X_1$ is Chow {\em unstable} for all $k\geq 1$. \end{prop} \begin{proof} By the $W_1=\mathbb{Z}/2\mathbb{Z}$-symmetry, we have \begin{equation} \frac{1}{\chi_{\triangle_1}(k)}\sum_{x\in (k\triangle_1)\cap \mathbb{Z}^2} x=\frac{(1,1)}{\chi_{\triangle_1}(k)}\sum_{x\in (k\triangle_1)\cap \mathbb{Z}^2} x_1 \end{equation} with $x_1\in \mathbb{R}$ being the first component of $x=(x_1,x_2)\in \mathbb{R}^2$. Let us define $m:=\displaystyle \frac{1}{\chi_{\triangle_1}(k)}\sum_{x\in (k\triangle_1)\cap \mathbb{Z}^2} x_1$. For simplicity, we will only treat the case that $k$ is {\em even}.\footnote{For $k$ {\em odd}, the derivation is similar and will be left to the reader.} Then by considering the symmetry about the axis in Figure \ref{de3}, we obtain \begin{eqnarray*} -m&=&\frac{2}{\chi_{\triangle_1}(k)} \left(\sum_{i=1}^{k/2} \frac{9(i-1)(9(i-1)+1)}{2}+\sum_{i=1}^{k/2}\left(\frac{ 5+9(i-1)}{2}+ \frac{(9(i-1)+4)(9(i-1)+5)}{2}\right)\right)\\ &&+\frac{1}{\chi_{\triangle_1}(k)} \frac{\frac{9k}{2}(\frac{9k}{2}+1)}{2}-\frac{3k}{2}\\ &=&\frac{2k}{\chi_{\triangle_1}(k)}. \end{eqnarray*} \end{proof} \begin{exam} For $k=1$, \[ \frac{1}{\chi_{\triangle_1}(1)}\sum_{x\in \triangle_1\cap \mathbb{Z}^2} x=\dfrac{(-2,-2)}{7}.\] \end{exam} \begin{rema}\label{osn} We remark that this example, as well as the example in \cite{OSN12}, has {\em not} ruled out the possibility of using asymptotic Chow semistability to compactify the moduli space of Fano varieties, in contrast to the case studied in \cite{WX14}, since for those punctured families one might have a limit which is asymptotically Chow polystable and strictly K-semistable simultaneously. \end{rema} \begin{bibdiv} \begin{biblist} \bib{Aub76}{article}{ author={Aubin, Thierry}, title={\'Equations du type Monge-Amp\`ere sur les vari\'et\'es k\"ahleriennes compactes}, journal={C. R. Acad. Sci. Paris S\'er.
A-B}, volume={283}, date={1976}, number={3}, pages={Aiii, A119--A121}, review={\MR{0433520}}, } \bib{Don01}{article}{ author={Donaldson, Simon K.}, title={Scalar curvature and projective embeddings. I}, journal={J. Differential Geom.}, volume={59}, pages={479-522}, year={2001}, } \bib{Don02}{article}{ author={Donaldson, S. K.}, title={Scalar curvature and stability of toric varieties}, journal={J. Differential Geom.}, volume={62}, date={2002}, number={2}, pages={289--349}, issn={0022-040X}, review={\MR{1988506}}, } \bib{Gie77}{article}{ author={Gieseker, D.}, title={On the moduli of vector bundles on an algebraic surface}, journal={Ann. of Math. (2)}, volume={106}, date={1977}, number={1}, pages={45--60}, issn={0003-486X}, review={\MR{466475}}, } \bib{GKZ}{book}{ author={Gelfand, I. M.}, author={Kapranov, M. M.}, author={Zelevinsky, A. V.}, title={Discriminants, resultants and multidimensional determinants}, publisher={Birkh\"auser Boston, Inc., Boston, MA}, pages={x+523}, date={1994}, } \bib{Kemp1978}{article}{ author={Kempf, George}, title={Instability in invariant theory}, journal={Ann. of Math.}, volume={108}, number={2}, year={1978}, pages={299-316}, } \bib{LiW15}{article}{ author={Li, Jun}, author={Wang, Xiaowei}, title={Hilbert-Mumford criterion for nodal curves}, journal={Compos. Math.}, volume={151}, date={2015}, pages={2076-2130}, } \bib{LWX2014}{article}{ author={Li, Chi}, author={Wang, Xiaowei}, author={Xu, Chenyang}, title={Degeneration of Fano K\"ahler-Einstein manifolds}, journal={arXiv:1411.0761}, date={2014}, } \bib{Mab2004}{article}{ author={Mabuchi, Toshiki}, title={An obstruction to asymptotic semistability and approximate critical metrics}, journal={Osaka J.
Math.}, volume={41}, date={2004}, number={2}, pages={463--472}, issn={0030-6126}, review={\MR{2069096}}, } \bib{MFK}{book}{ author={Mumford, D.}, author={Fogarty, J.}, author={Kirwan, F.}, title={Geometric invariant theory}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete (2) [Results in Mathematics and Related Areas (2)]}, volume={34}, edition={3}, publisher={Springer-Verlag, Berlin}, date={1994}, pages={xiv+292}, isbn={3-540-56963-4}, review={\MR{1304906}}, } \bib{Ness84}{article}{ author={Ness, Linda}, title={A stratification of the null cone via the moment map}, note={With an appendix by David Mumford}, journal={Amer. J. Math.}, volume={106}, date={1984}, number={6}, pages={1281--1329}, issn={0002-9327}, review={\MR{765581}}, } \bib{Ono2013}{article}{ author={Ono, Hajime}, title={Algebro-geometric semistability of polarized toric manifolds}, journal={Asian J. Math.}, volume={17}, year={2013}, pages={609-616}, } \bib{OSN12}{article}{ author={Ono, Hajime}, author={Sano, Yuji}, author={Yotsutani, Naoto}, title={An example of an asymptotically Chow unstable manifold with constant scalar curvature}, language={English, with English and French summaries}, journal={Ann. Inst. Fourier (Grenoble)}, volume={62}, date={2012}, number={4}, pages={1265--1287}, issn={0373-0956}, review={\MR{3025743}}, } \bib{OSS}{article}{ author={Odaka, Yuji}, author={Sun, Song}, author={Spotti, Christiano}, title={Compact moduli space of Del Pezzo surfaces and K\"ahler-Einstein metrics}, journal={J. Differential Geom.}, volume={102}, number={1}, date={2016}, pages={127-172}, } \bib{Pic1899}{article}{ author={Pick, Georg Alexander}, title={Geometrisches zur Zahlentheorie}, journal={Sitzungber Lotos (Prague)}, volume={19}, year={1899}, pages={311-319}, } \bib{PS2004}{article}{ author={Phong, D. H.}, author={Sturm, Jacob}, title={Scalar curvature, moment maps, and the Deligne pairing}, journal={Amer. J. Math.}, volume={126}, date={2004}, number={3}, pages={693--712}, issn={0002-9327}, review={\MR{2058389}}, } \bib{PS2009}{article}{ author={Phong, D. H.}, author={Sturm, Jacob}, title={Lectures on stability and constant scalar curvature}, conference={ title={Handbook of geometric analysis, No. 3}, }, book={ series={Adv. Lect. Math. (ALM)}, volume={14}, publisher={Int. Press, Somerville, MA}, }, date={2010}, pages={357--436}, review={\MR{2743451}}, } \bib{Pul1979}{article}{ author={Pullman, Howard W.}, title={An elementary proof of Pick's theorem}, journal={School Science and Mathematics}, volume={79}, issue={1}, year={1979}, pages={7-12}, } \bib{RT2007}{article}{ author={Ross, Julius}, author={Thomas, Richard}, title={A study of the Hilbert-Mumford criterion for the stability of projective varieties}, journal={J. Algebraic Geom.}, volume={16}, number={2}, date={2007}, pages={201-255}, } \bib{Spotti12}{article}{ author={Spotti, Cristiano}, title={Degenerations of K\"ahler-Einstein Fano manifolds}, review={arXiv:1211.5334}, journal={Ph.D. Thesis, Imperial College}, date={2012}, pages={132 pages}, } \bib{SSY16}{article}{ author={Spotti, Cristiano}, author={Sun, Song}, author={Yao, Chengjian}, title={Existence and deformations of K\"ahler-Einstein metrics on smoothable $\Bbb{Q}$-Fano varieties}, journal={Duke Math.
J.}, volume={165}, date={2016}, number={16}, pages={3043--3083}, issn={0012-7094}, review={\MR{3566198}}, } \bib{Vie95}{book}{ author={Viehweg, Eckart}, title={Quasi-projective moduli for polarized manifolds}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]}, volume={30}, publisher={Springer-Verlag, Berlin}, date={1995}, pages={viii+320}, isbn={3-540-59255-5}, review={\MR{1368632}}, } \bib{Wang04}{article}{ author={Wang, Xiaowei}, title={Moment map, Futaki invariant and stability of projective manifolds}, journal={Comm. Anal. Geom.}, volume={12}, date={2004}, number={5}, pages={1009--1037}, issn={1019-8385}, review={\MR{2103309}}, } \bib{WX14}{article}{ author={Wang, Xiaowei}, author={Xu, Chenyang}, title={Nonexistence of asymptotic GIT compactification}, journal={Duke Math. J.}, volume={163}, issue={12}, pages={2217-2241}, year={2014}, } \bib{Yau}{article}{ author={Yau, Shing Tung}, title={On the Ricci curvature of a compact K\"ahler manifold and the complex Monge-Amp\`ere equation. I}, journal={Comm. Pure Appl. Math.}, volume={31}, date={1978}, number={3}, pages={339--411}, issn={0010-3640}, review={\MR{480350}}, } \bib{Yau93}{article}{ author={Yau, Shing-Tung}, title={Open problems in geometry}, conference={ title={Differential geometry: partial differential equations on manifolds}, address={Los Angeles, CA}, date={1990}, }, book={ series={Proc. Sympos. Pure Math.}, volume={54}, publisher={Amer. Math. Soc., Providence, RI}, }, date={1993}, pages={1--28}, review={\MR{1216573}}, } \bib{Zha96}{article}{ author={Zhang, Shouwu}, title={Heights and reductions of semi-stable varieties}, journal={Compos. Math.}, volume={104}, pages={77-105}, year={1996}, } \end{biblist} \end{bibdiv} \end{document}
The TIMSS 2019 Item Equivalence Study: examining mode effects for computer-based assessment and implications for measuring trends

Bethany Fishbein (ORCID: orcid.org/0000-0003-2796-2974), Michael O. Martin, Ina V. S. Mullis & Pierre Foy

Large-scale Assessments in Education, volume 6, Article number: 11 (2018)

TIMSS 2019 is the first assessment in the TIMSS transition to a computer-based assessment system, called eTIMSS. The TIMSS 2019 Item Equivalence Study was conducted in advance of the field test in 2017 to examine the potential for mode effects on the psychometric behavior of the TIMSS mathematics and science trend items induced by the change to computer-based administration. The study employed a counterbalanced, within-subjects design to investigate the potential for eTIMSS mode effects. Sample sizes for analysis included 16,894 fourth grade students from 24 countries and 9,164 eighth grade students from 11 countries. Following a review of the differences of the trend items in paper and digital formats, item statistics were examined item by item and aggregated by subject for paperTIMSS and eTIMSS. Then, the TIMSS scaling methods were applied to produce achievement scale scores for each mode. These were used to estimate the expected magnitude of the mode effects on student achievement. The results of the study provide support that the mathematics and science constructs assessed by the trend items were mostly unaffected in the transition to eTIMSS at both grades. However, there was an overall mode effect, where items were more difficult for students in digital formats compared to paper. The effect was larger in mathematics than science. Because the trend items cannot be expected to be sufficiently equivalent across paperTIMSS and eTIMSS, it was concluded that modifications must be made to the usual item calibration model for TIMSS 2019 to measure trends.
Each eTIMSS 2019 trend country will administer paper trend booklets to a nationally representative sample of students, in addition to the usual student sample, to provide a bridge between paperTIMSS and eTIMSS results. IEA's TIMSS (the Trends in International Mathematics and Science Study) is an international comparative study of student achievement in mathematics and science at the fourth and eighth grades. Conducted on a four-year assessment cycle since 1995, TIMSS has assessed student achievement using paper-and-pencil methods on six occasions—in 1995, 1999 (eighth grade only), 2003, 2007, 2011, and 2015—and has accumulated 20 years of trend measurements (Martin et al. 2016a; Mullis et al. 2016). Now for the 2019 assessment cycle, TIMSS is transitioning to a computer-based "eAssessment system," called eTIMSS. Just over half of the 65 TIMSS countries are administering eTIMSS in 2019, while the remainder administer TIMSS in paper-and-pencil format, as in previous TIMSS cycles. The shift from the traditional paper-and-pencil administration to a fully computer-based testing system promises operational efficiencies, enhanced measurement capabilities, and extended coverage of the TIMSS assessment frameworks in mathematics and science. Students from the participating eTIMSS countries will take the assessment on personal computers (PCs) or tablets and will have digital tools available through the eTIMSS interface, including a number pad, ruler, and calculator. eTIMSS 2019 includes extended Problem Solving and Inquiry Tasks (PSIs) and a variety of digitally enhanced item types, including drag and drop, sorting, and drop-down menu input types. It is acknowledged that changing from paper-and-pencil to the new PC- and tablet-based administration could have substantial and unpredictable effects on student performance (APA 1986; Bennett et al. 2008; Jerrim et al.
2018; Mazzeo and von Davier 2014), which would have to be taken into consideration in analyzing and reporting the TIMSS 2019 results. These "mode effects" could vary systematically according to students' characteristics such as gender and their familiarity and confidence with using PCs and tablets (Bennett et al. 2008; Cooper 2006; Gallagher et al. 2002; Horkay et al. 2006; Zhang et al. 2016). The TIMSS 2019 Item Equivalence Study was designed to discover as much as possible about the potential impact of converting from paper-and-pencil to computer-based assessment while the assessment was still under development. Students in the countries participating in the study were asked to respond to TIMSS mathematics and science items in both eTIMSS and paperTIMSS modes of administration, and the results were analyzed for evidence of mode effects. By conducting the study in 2017, before both the field test (2018) and main data collection (2018–2019), the study informed the item development process and provided crucial information about the likely mode effects that will need accounting for in reporting the results of TIMSS 2019. The study was conducted, data analyzed, and results reported over a very short period of time, and utilized straightforward analytic techniques both for efficiency and ease of reporting to a wide audience. Challenges in digitally transforming an international large-scale assessment TIMSS will extend its 20 years of trends in mathematics and science achievement in 2019 while transitioning to a digital environment. In measurement terms, TIMSS 2019 faces the challenge of continuing trends from the previous TIMSS assessments (1995–2015), which were established with paper-and-pencil methods, while also maintaining comparability with the paper version of TIMSS 2019 (paperTIMSS). The TIMSS approach to measuring trends involves retaining a substantial portion of the items (approximately 60%) from each assessment cycle to be administered in the next cycle. 
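The general idea behind linking through retained items can be illustrated with a toy calculation. The sketch below uses the mean-sigma method on hypothetical item difficulties; it is a deliberately simplified stand-in for the IRT concurrent calibration that TIMSS actually uses, showing only how retained items let a new cycle's scale be mapped onto the old one:

```python
# Illustrative mean-sigma common-item linking (simplified; not the
# actual TIMSS procedure). Difficulty estimates for the same trend
# items from two adjacent cycles are hypothetical numbers.

from statistics import mean, stdev

b_old = [-1.2, -0.4, 0.1, 0.8, 1.5]  # trend items on the old cycle's scale
b_new = [-0.9, -0.1, 0.4, 1.2, 1.9]  # same items, new cycle's provisional scale

A = stdev(b_old) / stdev(b_new)      # slope of the linear transformation
B = mean(b_old) - A * mean(b_new)    # intercept

b_new_linked = [A * b + B for b in b_new]

# After linking, the trend items' mean and spread match the old scale.
assert abs(mean(b_new_linked) - mean(b_old)) < 1e-9
assert abs(stdev(b_new_linked) - stdev(b_old)) < 1e-9
```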
Since these "trend items" are identical for adjacent assessment cycles (e.g., 2011 and 2015), they form the basis for a common item linking of TIMSS achievement scales from cycle to cycle using item response theory (IRT) methods (Foy and Yin 2016). Because measuring trends in TIMSS assessments prior to 2015 depended on having trend items that were identical from cycle to cycle, it was accepted that the 2019 digital versions of the 2015 trend items should be as similar as possible to the paper versions used in 2015. Keeping the eTIMSS and paperTIMSS versions of the trend items as similar as possible also helps ensure the comparability of the eTIMSS and paperTIMSS versions of the 2019 assessments. Therefore, TIMSS converted the trend items to eTIMSS format with the goal of reducing the potential for mode effects and maintaining equivalence as much as possible. TIMSS developed the eAssessment system to be compatible with a variety of digital devices to keep up with continuously emerging technologies and allow countries to use existing digital devices as much as possible. In accommodating diversity in digital devices across countries, TIMSS also had to consider the possibility of device effects causing further variation in student performance between modes (Davis et al. 2017; DePascale et al. 2016; Strain-Seymour et al. 2013; Way et al. 2016). Investigating mode effects When converted to eTIMSS format, a large proportion of the trend items (about 80%) appeared essentially identical to their paperTIMSS counterparts, while the remainder needed modification to some extent. Having so many apparently identical trend items raised the possibility that many of them would show no mode effect, and would have equivalent psychometric properties regardless of the mode of administration—eTIMSS or paperTIMSS. The TIMSS 2019 Item Equivalence Study was designed as a controlled experiment to test this proposition in advance of the TIMSS 2019 field test and main data collection. 
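The value of a counterbalanced, within-subjects design is that every student contributes a score in both modes, so a mode effect can be estimated directly from within-student differences. A minimal sketch with hypothetical paired scores:

```python
# Paired mode-effect estimate from a within-subjects design.
# Each student has a paper score and a digital score (hypothetical data);
# the mode effect is the mean within-student difference.

paper   = [52.0, 61.0, 47.0, 70.0, 58.0, 64.0]
digital = [50.0, 57.0, 46.0, 66.0, 55.0, 63.0]

diffs = [p - d for p, d in zip(paper, digital)]
mode_effect = sum(diffs) / len(diffs)  # positive => harder in digital mode

assert mode_effect > 0
```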
In addition to testing for possible mode effects, the Item Equivalence Study provided an ideal opportunity to try out the eTIMSS user interface and other components of the eAssessment system in a realistic classroom environment with a variety of digital devices. It was administered in April and May of 2017 and the data were analyzed through December 2017. To examine the mode effect, trend items were administered to samples of students in participating countries according to a counterbalanced design, with half the students taking items in eTIMSS format first and then items in paper format, and the other half taking paperTIMSS items first and eTIMSS items second. Each student was administered the full TIMSS experience, two blocks of mathematics and two blocks of science items, in each format, care being taken to ensure a different selection of items in each administration. With fairly large samples (800 students) and the entire pool of trend items administered in each country, the Item Equivalence Study was well positioned both to detect possible mode effects and to estimate the impact of such effects on the measurement of trends. The common item linking methodology that TIMSS uses to maintain comparability of results between assessment cycles expects that the "common" items mostly behave the same from cycle to cycle. However, the new eTIMSS mode of administration has the potential to change the psychometric properties of the trend items to the extent that a different approach to linking paperTIMSS and eTIMSS results may be necessary to preserve trends. To inform the methods and procedures necessary to ensure results comparable to TIMSS 2015, the TIMSS 2019 Item Equivalence Study addressed the following research questions: To what extent can the eTIMSS and paperTIMSS versions of the TIMSS 2019 trend items be considered psychometrically equivalent, and hence usable in a common item linking design? 
To the extent that the eTIMSS and paperTIMSS versions are not equivalent, what would be the likely impact of this mode effect on the measurement of trends in TIMSS 2019? The first research question was addressed by (a) comparing the eTIMSS and paperTIMSS versions of each of the trend items with a view to identifying areas of difference that could contribute to a mode effect, and (b) conducting an item-by-item analysis of performance differences in paper and digital formats. This analysis examined differences in item difficulty (percent correct), item discrimination (point-biserial correlations), percent omitted, and percent "not reached." Examining the likely impact of a mode effect on the measurement of trends (the second research question) involved estimating student proficiency on the TIMSS achievement scales using the usual TIMSS scaling methodology—IRT scaling combined with latent regression. Having proficiency scores for each student in both paperTIMSS and eTIMSS modes enabled (a) estimating mode effects on TIMSS achievement scales overall (mathematics and science at both fourth and eighth grades) and (b) estimating mode effects for student subgroups. Twenty-five countries participated in the TIMSS 2019 Item Equivalence Study, with 24 countries at the fourth grade and 13 countries at the eighth grade (Table 1). Each country was responsible for selecting a purposive sample of 800 students at each grade that included students with a range of abilities and backgrounds. For example, participating schools and classes should ideally have included both low- and high-achieving students. Some countries had better success in achieving a diverse school sample than others. For analysis purposes, each student was assigned a sampling weight of 1.

Table 1 List of participating countries

The complete set of mathematics and science trend items brought forward from TIMSS 2015 was administered for the study—187 items at the fourth grade and 232 items at the eighth grade.
For each grade and subject, the items were distributed among eight item blocks, each mimicking the distribution of item types as well as the content and cognitive skills that the entire assessment is meant to cover. Approximately half the score points came from multiple-choice items and the other half from constructed response items. Under a counterbalanced design, each student sampled for the study received a full paper-and-pencil booklet (paperTIMSS) of trend items and an equivalent set of trend items as an eTIMSS "item block combination." Students also completed a questionnaire addressing student characteristics, including gender, socioeconomic status, and their attitudes toward using computers and tablets. Participating countries administered eTIMSS on PC or Android tablet devices. For both the fourth and eighth grades, the trend item blocks were assigned to eight paper booklets and eight equivalent eTIMSS item block combinations (Table 2). Of the eight booklets, six were identical to booklets administered in 2015. The other two booklets contained two trend item blocks that were also paired together in 2015, but with two other blocks (M04 and S04) repositioned to replace blocks that were removed after the 2015 cycle. Consistent with the matrix sampling design used for each TIMSS cycle, each item block appears in two booklets in different positions to provide a mechanism for linking student responses across booklets for scaling. Each booklet is divided into two parts and contains two blocks of mathematics items (beginning with "M") and two blocks of science items (beginning with "S"). Half the booklets begin with two blocks of mathematics items and half begin with two blocks of science items. This item block distribution scheme was replicated for the eight eTIMSS item block combinations (Table 2).
Table 2 Booklet/item block combination design—fourth and eighth grades

Counterbalanced research design

Each student was assigned one paperTIMSS test booklet and one eTIMSS item block combination according to a counterbalanced rotation scheme (Table 3). In each country, half of the students were assigned paperTIMSS first (Booklets 1–8), and half of the students were assigned eTIMSS first (Item Block Combinations ET19PTBC01–08). The rotation scheme ensured that students did not encounter the same items in paperTIMSS and eTIMSS format. For example, students assigned Booklet 1 were assigned eTIMSS Item Block Combination ET19PTBC03, and students assigned Booklet 2 were assigned eTIMSS Item Block Combination ET19PTBC04. These booklets/item block combinations had no items in common (see Table 2). The two test sessions occurred either within the same day or across two consecutive days. Student Tracking Forms were used by each participating school to ensure proper implementation of the counterbalanced design.

Table 3 Counterbalanced booklet rotation design—fourth and eighth grades

To address the first research question—the extent to which the paperTIMSS and eTIMSS versions of the TIMSS 2019 trend items can be considered psychometrically equivalent—the study began with a comparison of the eTIMSS and paperTIMSS versions of each trend item. This involved developing a set of explicit criteria for classifying the items according to their differences across paper and digital formats as well as characteristics that may be expected to induce mode effects. Staff from the TIMSS & PIRLS International Study Center classified the trend items according to their hypothesized likelihood for being "strongly equivalent" or "invariant" between paperTIMSS and eTIMSS. Preliminary item classification descriptions were developed based on the results of earlier small-scale pilot studies and the mode effect literature relevant to the types of items in the study.
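The rotation logic described above can be sketched in a few lines of Python. This is only an illustration, not the operational assignment procedure: the fixed offset of two positions (Booklet 1 → ET19PTBC03, Booklet 2 → ET19PTBC04) and the wrap-around for the last booklets are assumptions extrapolated from the examples given in the text.

```python
def etimss_combination(paper_booklet: int, offset: int = 2) -> int:
    """Pair a paper booklet (1-8) with an eTIMSS item block combination.

    Booklet 1 -> combination 3 and Booklet 2 -> combination 4, as in the
    text; the fixed offset and the wrap-around are illustrative assumptions.
    """
    return (paper_booklet - 1 + offset) % 8 + 1


def assign_student(student_index: int, paper_booklet: int):
    """Counterbalance mode order: alternate students take eTIMSS first."""
    first_mode = "paperTIMSS" if student_index % 2 == 0 else "eTIMSS"
    return first_mode, paper_booklet, etimss_combination(paper_booklet)
```

Because the paired booklet and combination are always two positions apart in the rotation, no student sees the same item block in both modes.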
The following types of items or features of items were of particular interest:

- Differences in presentation between paper and digital formats (Pommerich 2004), such as formatting changes necessary to render the item on a digital interface (Sandene et al. 2005).
- Complex graphs or diagrams (Mazzeo and Harvey 1988) or heavy reading, possibly requiring greater cognitive processing (Chen et al. 2014; Noyes and Garland 2008).
- Scrolling required to view all parts of the item (Bridgeman et al. 2003; Pommerich 2004; Way et al. 2016).
- Constructed response items requiring long explanations (Strain-Seymour et al. 2013), due to differences in students' typing abilities (Russell 1999), typing fatigue that could occur with an on-screen keyboard (Pisacreta 2013), or the potential for human-scoring bias between paperTIMSS and eTIMSS item responses (Horkay et al. 2006; Russell 2002).
- Constructed response items requiring calculations by hand or with a calculator, requiring students to transcribe calculations from scratch paper to the PC or tablet (Johnson and Green 2006).
- Items with numerical answers requiring the "number pad" to input the response.
- Items requiring the use of the "drawing" feature to draw or label features (Sandene et al. 2005; Strain-Seymour et al. 2013).

eTIMSS constructed response items had one of three different "input types," specifying the action required by students to respond. The "keyboard" input required students to use the full keyboard provided by the delivery device (either external or on-screen) to type responses, including mathematical equations. For numerical responses, students used the on-screen "number pad" with digits 0 through 9, a decimal point, negative sign, and division symbol. "Drawing" input types required students to show work, draw, or label diagrams.
Two raters refined the pre-developed criteria into detailed descriptions and used them to classify each item into one of four types—"Identical," "Nearly Identical," "Worrisome," or "Severe" (see full criteria in Fishbein 2018). The raters examined the international version of each trend item in paper, tablet, and PC formats, along with scoring guides for constructed response items to understand what was required for a correct response. When the two raters disagreed, a third rater who was also familiar with the trend items made the final classification. The results indicated that the majority of the trend items at each grade were considered to be "Identical" or "Nearly Identical"—assessing exactly the same construct and maintaining their presentation in both modes. These items could reasonably be expected a priori to perform the same for both paperTIMSS and eTIMSS. A larger proportion of eighth grade items was classified as "Identical" or "Nearly Identical" compared to fourth grade items, under the a priori assumption that eighth grade students are more familiar with using digital devices. With number pad and drawing input types being much more common in mathematics, a larger proportion of mathematics items compared to science items was classified as "Worrisome" or "Severe," hypothesized to behave differently for eTIMSS. Unfortunately, the number pad input feature was not completely functional at the time of the study. At the fourth grade in particular, inputting numbers such as fractions was cumbersome for students. Additionally, the results of earlier pilot studies indicated that students found the drawing feature difficult to use, and the scoring system was unable to reproduce students' responses to these items for scoring at the time of the study. The results of subsequent item-by-item analysis found that most of the data were lost for these items—13 items at the fourth grade and 11 items at the eighth grade. 
Despite the difficulties described above, the comparison of the eTIMSS and paperTIMSS versions indicated that efforts to keep the trend items looking the same and maintain construct equivalence across paper and digital formats were mostly successful. Following this review, staff at the TIMSS & PIRLS International Study Center further refined the classifications for the item-by-item analysis.

Item-by-item analysis of performance differences

Having completed the classification of items by likely degree of mode effect based on the appearance of the items, the next step in addressing the first research question was to examine item equivalence in terms of student performance. This was done by comparing descriptive item statistics for each item based on the two administration modes. For each item included in the TIMSS 2019 Item Equivalence Study—92 mathematics items and 95 science items at the fourth grade, and 114 mathematics items and 118 science items at the eighth grade—the percent correct, point-biserial correlation, percent omitted, and percent "not reached" were calculated for both paperTIMSS and eTIMSS data. "Difference statistics" were produced for each item by subtracting the eTIMSS statistic from the paperTIMSS statistic (e.g., \(p_{paper} - p_{eTIMSS}\)). This produced indicators of the mode effect for each item by country. The review of item statistics included reexamining the previous item classifications and identifying any "Worrisome" or "Severe" items that may not be suitable for the eTIMSS environment, due to the inability of students in the study to appropriately respond to the item as they would on paper or other limitations of the eAssessment system at the time. Items having a difference of at least 10% in omitted responses between paperTIMSS and eTIMSS in five or more countries were checked against the countries' item-by-item documentation for reported differences between paper and digital versions of the items.
The review of the item difference data identified 28 items at the fourth grade (15 mathematics items and 13 science items) and 25 items at the eighth grade (17 mathematics items and 8 science items) whose eTIMSS versions were clearly not equivalent to their paper versions. These items were re-classified as expected non-invariant. The remaining items were classified as expected invariant, and could reasonably be expected to be psychometrically equivalent. Mathematics items fitting the criteria for expected non-invariant mostly included those with drawing inputs that could not be scored effectively at the time of study and items with fraction answers that could not be input due to limitations of the number pad. In science, expected non-invariant items included those with keyboard entry boxes in tables that were too restrictive for character-based languages. A few items with severe scrolling had omit rates as high as 50% for eTIMSS, substantially higher than omit rates for paperTIMSS. Countries' reports suggest that students may not have seen some parts of items that required substantial scrolling to uncover. Table 4 presents the number of items and final sample sizes for analysis after eliminating expected non-invariant items. Data for two countries at each grade were excluded from analysis because of technical problems with assessment delivery devices and issues with the data upload server.

Table 4 Number of items and sample sizes for analysis

The final database was used to compute an international average for each of the item statistics for paperTIMSS, eTIMSS, and their differences, respectively, separately by subject and grade. Each country was weighted equally in computing the international average for item difficulty (mean percent correct), item discrimination (mean point-biserial correlation), mean percent omitted, and mean percent not reached.
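These computations can be made concrete with a short sketch. The code below is illustrative only (toy data, not the operational analysis code): it computes the percent correct difference statistic for one item in one country and averages the statistic across countries with equal weight, as described above.

```python
from statistics import mean


def percent_correct(scores):
    # scores: 0/1 item scores for one item, one country, one mode
    return 100.0 * mean(scores)


def difference_statistic(paper_scores, etimss_scores):
    # Mode-effect indicator for one item: p_paper - p_eTIMSS
    return percent_correct(paper_scores) - percent_correct(etimss_scores)


def international_average(stat_by_country):
    # Each country weighted equally, regardless of its sample size
    return mean(stat_by_country.values())
```

A positive difference statistic means the item was easier on paper than in eTIMSS, which is the direction of the mode effect reported below.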
To provide a comparison of the degree that the distributions of item difficulty and discrimination varied for paperTIMSS and eTIMSS, standard deviations for each of these statistics were computed across the items for each country, then pooled across countries. Item plots produced for each grade and subject allowed for visually examining the comparability of the trend item pool based on the international average percent correct statistics. Items with similar measurement properties across modes should have very similar percent correct values and show only small, random deviations from the identity line when plotted against one another. Clearly this was not the case for the expected non-invariant items, where the average percentage of students answering correctly was higher for paperTIMSS than for eTIMSS in almost every instance, indicating that the items were more difficult in eTIMSS than in paperTIMSS (see Figs. 1, 2). As expected, these items showed definite evidence of a mode effect.

Fig. 1 Item plots of international average percent correct statistics for mathematics—paperTIMSS vs. eTIMSS. Each data point represents a mathematics trend item with x-axis the percent correct based on paperTIMSS data and y-axis percent correct based on eTIMSS data. Expected invariant items (left) were included in the analysis. Expected non-invariant items (right) were plotted as a means of comparison

Fig. 2 Item plots of international average percent correct statistics for science—paperTIMSS vs. eTIMSS. Each data point represents a science trend item with x-axis the percent correct based on paperTIMSS data and y-axis percent correct based on eTIMSS data. Expected invariant items (left) were included in the analysis. Expected non-invariant items (right) were plotted as a means of comparison

The results for the expected invariant items were more encouraging, with most items clustering close to the identity line in plots for both subjects and grades.
Upon closer inspection, however, the results provide evidence of a general mode effect for the TIMSS trend items. Particularly for mathematics (Fig. 1), most points clustered just below the identity line at each grade, indicating the items generally were more difficult for eTIMSS than for paperTIMSS. For science (Fig. 2), the points were more evenly distributed around the identity line, suggesting the mode effect for science may be smaller than for mathematics. Further averaging the international percent correct across all of the expected invariant items revealed the mode effect more clearly (Table 5). Fourth grade mathematics items showed the largest average difference in item difficulty between paperTIMSS and eTIMSS, with an average difference between modes of 3.6 percentage points. Eighth grade mathematics items showed a similar effect, with a 3.4 percentage-point difference between paperTIMSS and eTIMSS, on average. Science items at both grades showed smaller mode effects on item difficulty compared to mathematics items, with average differences of 1.7 percentage points at the fourth grade and 1.5 percentage points at the eighth grade.

Table 5 Item percent correct, averaged across countries and across items

The size of the average point-biserial correlations suggests there was little or no effect of mode of administration on item discrimination statistics for the expected invariant items, with less than 0.03 average difference in point-biserial correlation coefficients for each subject and grade, and with little variation across items and countries (Table 6). Similarly, after removing the expected non-invariant items, percentages of missing responses—both omitted and not reached—were practically identical for paperTIMSS and eTIMSS (Fishbein 2018). At the fourth grade, mathematics items had approximately 5.1% of responses missing for paperTIMSS and 5.8% missing for eTIMSS, on average.
Fourth grade science had 6.0% of responses missing on average across paperTIMSS items and 6.3% missing on average across eTIMSS items. At the eighth grade, approximately 4.9% of mathematics item responses were missing for paperTIMSS with 5.1% of responses missing for eTIMSS. In science, items had an average of 4.9% missing and 5.1% missing for paperTIMSS and eTIMSS, respectively. These results suggest there was no effect of eTIMSS on the TIMSS mathematics and science constructs measured by the paper instruments—only item difficulties showed differences (Winter 2010).

Table 6 Item point-biserial correlations, averaged across countries and across items

Estimating mode effects on TIMSS achievement scales

Given that the item-by-item analyses showed evidence of a general mode effect, with most items, particularly in mathematics, exhibiting an effect to some degree, it was decided to move on to the second research question focusing on the likely impact of a mode effect on the measurement of trends. This analysis first examined the overall mode effect for mathematics and science scores before moving on to a consideration of differential mode effects for student subgroups. For these analyses, it was necessary to derive estimates of student proficiency by applying the TIMSS IRT scaling methodology (Martin et al. 2016b; Mislevy 1991) to the Item Equivalence Study data for both eTIMSS items and paperTIMSS items. Because the main concern of the study was the effect of moving from paper-based items to computer-based items, the paperTIMSS data was chosen as the baseline against which to compare the eTIMSS data. Therefore, in scaling the data, item parameters were first estimated for the paperTIMSS items, and the resulting paperTIMSS parameters were used to estimate achievement scores for both the paperTIMSS data and eTIMSS data.
By fixing the item parameters to those based on the paperTIMSS results, the mode effect was captured by the differences in group means between paperTIMSS and eTIMSS. This approach provides an estimate of the expected mode effect size if nothing is done to control for it. Following the usual TIMSS procedures for scaling the achievement item data (Foy and Yin 2016), item parameters were estimated using mixed IRT models (two- and three-parameter and generalized partial credit), with each country's response data contributing equally to a single, overall calibration. Item calibration was conducted separately by grade and subject using PARSCALE software (Muraki and Bock 1991). To produce accurate achievement estimates for populations and subpopulations of students with matrix sampling of items, TIMSS uses latent regression with plausible values methodology (Martin et al. 2016b; Mislevy 1991). Using this approach, TIMSS estimates five imputed proficiency scores called "plausible values" for each student based on their estimated ability distribution and conditioned upon student and class characteristics. Conducting analysis across all five plausible values allows for more accurate estimation of population and subpopulation parameters and the level of uncertainty around the estimates. DGROUP software (Rogers et al. 2006) was used to estimate student proficiency, separately by grade for paperTIMSS and eTIMSS, respectively. Mathematics and science proficiencies were estimated concurrently for each student using a two-dimensional latent regression model. 
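The way a statistic is combined across the five plausible values follows standard multiple-imputation logic (Rubin's combination rules). The sketch below is a generic illustration of that logic, not the DGROUP or IDB Analyzer implementation: the point estimate is the mean across plausible values, and the total error adds the between-imputation variance to the average sampling variance.

```python
def combine_plausible_values(estimates, sampling_variances):
    """Combine a statistic computed once per plausible value.

    estimates: the statistic (e.g., a group mean) computed separately
        for each plausible value.
    sampling_variances: the squared sampling standard error obtained
        for each of those computations.
    Returns (point estimate, standard error) under Rubin's rules.
    """
    m = len(estimates)
    point = sum(estimates) / m
    within = sum(sampling_variances) / m
    between = sum((e - point) ** 2 for e in estimates) / (m - 1)
    total_variance = within + (1.0 + 1.0 / m) * between
    return point, total_variance ** 0.5
```

The between-imputation term is what captures the "imputation error" component of the standard errors mentioned below.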
Conditioning variables in the latent regression included questions from the student questionnaire (gender, number of books in the home, access to a computer or tablet at school, and three variables about computer experience, as well as parents' education for eighth grade students), class mean achievement (based on expected a posterior scores from PARSCALE), country, and the interactions between country and each other conditioning variable. The resulting distributions of plausible values were transformed to an approximate TIMSS scale, as follows. First, a Stocking-Lord transformation (Stocking and Lord 1983) was applied, using the TIMSS 2015 IRT item parameters (Foy 2017) to place the resulting plausible values on the TIMSS 2015 theta scales. Then, the same linear transformation constants that were used to transform the TIMSS 2015 theta scores onto the TIMSS reporting metric were applied (see Foy and Yin 2016), resulting in student proficiency scores on the TIMSS reporting scale. The plausible values were used to produce evidence of construct equivalence and score comparability between paperTIMSS and eTIMSS. Commonly accepted criteria for score comparability include: (1) score distributions being approximately the same; and (2) individuals—or subgroups in the TIMSS case—being rank ordered in approximately the same way (APA 1986; DePascale et al. 2016; Winter 2010). Analyses were conducted separately for the fourth grade and eighth grade for mathematics and science, respectively. For each grade and subject, international average scale scores, standard deviations, and standard errors were computed for both paperTIMSS and eTIMSS. International average difference scores were produced by subtracting each eTIMSS plausible value from its corresponding paperTIMSS plausible value for each case and averaging the results. To treat each country equally in analyses across countries, each country's sample was weighted to give a sample size of 500 students. 
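The scale transformation described above ends with a linear step. A minimal sketch follows; the slope and intercept here are placeholder values chosen for illustration (the actual TIMSS 2015 transformation constants are documented in Foy and Yin 2016 and are not reproduced here).

```python
def to_reporting_scale(theta, slope, intercept):
    # Linear transformation from the 2015 theta metric to the TIMSS
    # reporting metric; slope and intercept below are illustrative only.
    return slope * theta + intercept


# Assumed example constants: scale SD of 100 and centerpoint of 500.
plausible_values_theta = [-1.0, 0.0, 0.5]
reported = [to_reporting_scale(t, 100.0, 500.0) for t in plausible_values_theta]
```

With these assumed constants, a theta of 0 maps to the scale centerpoint and one theta unit corresponds to roughly one scale standard deviation.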
The results were produced using IEA's IDB Analyzer software and IBM SPSS Statistics Software Version 24. For each analysis, IEA's IDB Analyzer software applied sampling weights, computed the average of each plausible value across all cases in the database, and aggregated the results across the plausible values for interpretation. It also produced standard errors for each using the jackknife repeated replication method (Foy and LaRoche 2016; Rust 2014). Because the student samples were not drawn randomly, the standard errors are not an accurate reflection of the population data. However, they reflect the variance between schools as well as the imputation error, and are a useful indicator of the variability of the Item Equivalence Study data. Average scale scores based on paperTIMSS data were higher than scores based on eTIMSS data, confirming that the trend items were more difficult under the conditions of the eTIMSS delivery (Table 7). As expected from the item analyses, the effect was larger for mathematics than for science. At the fourth and eighth grades, there was an average difference across countries of 14 points for mathematics scores. Science showed an average difference of 8 score points at the fourth grade and 7 score points at the eighth grade. Standard deviations and standard errors were approximately equal across modes.

Table 7 International average scale scores, standard deviations, standard errors, and cross-mode correlation coefficients

In the TIMSS context, a difference of 14 points in mathematics scores is substantial and corresponds to one-fourth of the approximate 60-point difference constituting a grade level in the primary grades and half of the approximate 30-point difference constituting a grade level in middle school (Martin et al. 1998; Mullis et al. 1998).
These international average results from TIMSS 1995 are similar to more recent results from TIMSS 2015, when Norway participated with two grade levels of students taking the fourth and eighth grade assessments, respectively (Martin et al. 2016a; Mullis et al. 2016). Between grades 4 and 5, Norway had a 56-point difference in mathematics (493 vs. 549) and a 45-point difference in science (493 vs. 538). Between grades 8 and 9, Norway had a 25-point difference in mathematics (487 vs. 512) and a 20-point difference in science (489 vs. 509). A difference of 14 score points also is substantial in the context of trend results between subsequent TIMSS assessments. TIMSS sampling requirements are designed to yield a standard error no greater than 3.5% of the standard deviation associated with each country's mean achievement score (LaRoche et al. 2016). A standard deviation corresponds to approximately 100 points on the TIMSS reporting scale, so student samples should provide for a standard error of 3.5 points. This corresponds to a 95% confidence interval of ± 7 score points for an achievement mean and ± 10 score points for the difference between means from adjacent assessment cycles. Therefore, a 14-point difference would constitute a substantial difference between mean scores and must be taken into account in linking eTIMSS to the TIMSS achievement scale. The cross-mode (eTIMSS-paperTIMSS) correlations were very large (r > 0.95) for each grade and subject (Table 7), suggesting that despite the differences in mean achievement, students' proficiency scores ranked similarly in both modes and that eTIMSS did not have an effect on the TIMSS mathematics and science constructs. Examination of mean scores by country confirmed that the ordering of country mean scores did not differ between paperTIMSS and eTIMSS at the high and low ends of the score distributions, and differed by a negligible amount toward the middle. 
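The ±7 and ±10 score-point confidence intervals quoted above follow from the sampling precision target by routine arithmetic; a quick check (assuming independent samples in adjacent assessment cycles):

```python
se = 3.5  # target standard error, in TIMSS scale score points (3.5% of SD 100)

ci_mean = 1.96 * se  # half-width of a 95% CI for one achievement mean
se_difference = (se ** 2 + se ** 2) ** 0.5  # SE of a cycle-to-cycle difference
ci_difference = 1.96 * se_difference  # half-width for the trend difference

# ci_mean is about 6.9 points (reported as +/- 7) and ci_difference about
# 9.7 points (reported as +/- 10); a 14-point mode effect exceeds both.
```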
However, the large standard deviations reported for the difference values (Table 7) suggest that the magnitude of mode effects differed substantially across students. Although interpretation of country-level results is not possible without nationally representative samples, the size of the 95% confidence intervals around each country mean difference score suggests that much of the difference in mode effect across countries may be due to sampling error, at least for these data. However, there was some variation in the mathematics and science mode effects across countries (Fig. 3). There was more variation in science mode effects compared to mathematics, despite the mathematics score differences being larger.

Fig. 3 Country distribution of mathematics and science score mode effects. Bars reflect average difference between paperTIMSS and eTIMSS scores, paired by country for mathematics and science. Negative values indicate that performance on eTIMSS was higher than performance on paperTIMSS, on average. Countries are ordered by size of average difference across mathematics and science. Error bars represent 95% confidence intervals for estimated country difference scores

Estimating mode effects for student subgroups

Addressing the second part of the second research question, the final series of analyses examined differences between paperTIMSS and eTIMSS proficiency scores in relation to student background variables. The results provided additional information about the equivalence of the mathematics and science constructs between modes (Randall et al. 2012). If two scores are measuring the same construct, then they should have the same degree of relationship with other related measures (APA 1986; DePascale et al. 2016; Winter 2010). The analysis was conducted using three grouping variables identified in the literature to relate to mode effects:

- Socioeconomic status (Bennett et al. 2008; Jerrim 2016; MacCann 2006; Zhang et al. 2016).
- Gender (Cooper 2006; Gallagher et al. 2002; Jerrim 2016; Parshall and Kromrey 1993).
- Confidence in using computers and tablets, or "digital self-efficacy" (Cooper 2006; Pruet et al. 2016; Zhang et al. 2016).

All subgroup analyses were conducted separately for the fourth and eighth grades and for mathematics and science, respectively, with each country contributing equally to the results. From the student questionnaire, the "Books in the Home" variable was used as a proxy measure of socioeconomic status, which has historically been shown to be a strong predictor of achievement in TIMSS (e.g., Mullis et al. 2016; Mullis et al. 2017). Gender data were collected from participating schools via the Student Tracking Forms used for test administration or from students via the questionnaire. For a measure of digital self-efficacy, a one-parameter IRT scale was constructed for each grade based on six questionnaire items asking about students' confidence in using computers and tablets (see scale construction details in Fishbein 2018). A benchmarking procedure was used to classify students' scores into meaningful "Low," "Medium," and "High" categories of digital self-efficacy for a categorical form of the continuous scale variable. Analysis of paperTIMSS-eTIMSS difference scores and their standard errors by student subgroups revealed that, on average, the mode effect was mostly uniform across student subgroups by Books in the Home, gender, and digital self-efficacy (see Tables 8, 9). The variation in mean difference scores across subgroups was within the margin of error with 95% confidence.
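The benchmarking step that turns the continuous digital self-efficacy scale into categories can be illustrated simply. The cut points below are hypothetical, chosen only to show the shape of the classification; the actual benchmarks are documented in Fishbein (2018).

```python
def dse_category(score, low_cut=-0.5, high_cut=0.5):
    """Classify a continuous digital self-efficacy score.

    low_cut and high_cut are hypothetical illustration values, not the
    benchmarks actually used in the study.
    """
    if score < low_cut:
        return "Low"
    if score < high_cut:
        return "Medium"
    return "High"
```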
Table 8 Average paperTIMSS-eTIMSS difference scores by student subgroups—fourth grade

Table 9 Average paperTIMSS-eTIMSS difference scores by student subgroups—eighth grade

Following the analysis of average differences by subgroup, a repeated measures analysis of variance (ANOVA) was conducted with mode of administration as the within-subjects factor plus three between-subjects factors: Books in the Home (5 levels), gender (2 levels), and digital self-efficacy (3 levels). Following the usual procedures for analyzing plausible values, each full factorial model (one for each grade/subject) was run in SPSS five times—once for each pair of plausible values as the within-subjects variables (paperTIMSS and eTIMSS)—and the results were aggregated for interpretation. At the fourth grade, the ANOVA models found a significant effect of Books in the Home \(\times\) mode on mathematics achievement, F(4, 15,018) = 6.22, p < 0.001, \(\eta_{p}^{2}\) = 0.002, and a significant effect of gender \(\times\) mode on science achievement, F(1, 15,018) = 5.77, p < 0.05, \(\eta_{p}^{2}\) < 0.001. The eighth grade models also found a significant effect of Books in the Home \(\times\) mode on mathematics achievement, F(4, 6849) = 2.70, p < 0.05, \(\eta_{p}^{2}\) = 0.002. However, all three significant effects were very small, accounting for less than 1% of the variance in achievement between modes overall. As a second method of analyzing the influence of the predictor variables on the mode effects and to more accurately estimate the percentage of variance accounted for by the predictor variables in the difference scores, a multiple linear regression analysis was conducted. The outcome variable was the set of plausible values for the difference scores between paperTIMSS and eTIMSS (PVDIFF).
The following model was specified for each grade and subject:

$$(PVDIFF)_{ij} = B_{0} + B_{1} (DSE)_{ij} + B_{2} (Books_{1})_{ij} + \cdots + B_{5} (Books_{4})_{ij} + B_{6} (Gender)_{ij} + \varepsilon_{ij},$$

where digital self-efficacy (DSE) was a continuous predictor variable; Books in the Home was a dummy-coded predictor variable where "0–10 books" was the reference category and 1 = "11–25 books" (Books1), 2 = "26–100 books" (Books2), 3 = "101–200 books" (Books3), and 4 = "More than 200 books" (Books4); and gender was a dummy-coded predictor variable where "Girls" were the reference group and 1 = "Boys."

The results of the multiple regression analysis corroborated the ANOVA results, but also found a significant effect of Books in the Home on the size of the science mode effects at both grades (p < 0.05). However, further analysis showed no clear relationship between this variable and mathematics or science mode effects. Moreover, the predictor variables explained a very small percentage of variance in the mode effects overall. At the fourth grade, the predictor variables accounted for less than 2% of the variance in mathematics difference scores (\(R^{2}\) = 0.015) and less than 2% of the variance in science difference scores (\(R^{2}\) = 0.016). At the eighth grade, the variables accounted for approximately 1% of the variance in mathematics difference scores (\(R^{2}\) = 0.013) and less than 2% of the variance in science difference scores (\(R^{2}\) = 0.016).

The results of the Item Equivalence Study show clear evidence that TIMSS 2019 trend items presented in eTIMSS format were more difficult on average than the paperTIMSS version, especially for mathematics, and this difference needs to be taken into account when linking eTIMSS 2019 to the TIMSS achievement scale. However, the study results also suggest that the measurement of the TIMSS mathematics and science constructs themselves was relatively unaffected by the transition to eTIMSS.
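As an illustration only—this is not the TIMSS analysis code, and the data and variable names below are synthetic assumptions—a regression of difference scores on a continuous predictor plus dummy-coded categorical predictors of the form specified above can be sketched with ordinary least squares:

```python
import numpy as np

# Synthetic stand-in data; the real analysis used plausible values and
# sampling weights, which are omitted here for brevity.
rng = np.random.default_rng(0)
n = 1000
dse = rng.normal(10.0, 2.0, n)        # continuous digital self-efficacy score
books = rng.integers(0, 5, n)         # 5 categories, 0 = "0-10 books" (reference)
gender = rng.integers(0, 2, n)        # 0 = girls (reference), 1 = boys

# Dummy-code Books in the Home against the reference category (Books1..Books4)
books_dummies = np.eye(5)[books][:, 1:]

# Design matrix: intercept, DSE, four Books dummies, gender
X = np.column_stack([np.ones(n), dse, books_dummies, gender])
# Synthetic difference scores: weak signal plus large noise
y = 1.0 + 0.2 * dse - 0.5 * gender + rng.normal(0.0, 10.0, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))  # R^2 stays small, mirroring the article's finding
```

The key point the sketch makes concrete is the dummy coding: each non-reference Books category gets its own column, so its coefficient is interpreted relative to the "0–10 books" group.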
The preliminary item review supported the view that the majority of the trend items appeared equivalent in paper and eTIMSS formats, confirming that efforts to convert the paper trend items to eTIMSS were largely successful. The item analysis also found negligible differences in item discrimination statistics between paperTIMSS and eTIMSS. Despite differences in means due to the mode effect, score-level standard deviations and standard errors were similar across modes, and cross-mode correlation coefficients between paperTIMSS and eTIMSS scores were large (r > 0.95), reflecting similar score distribution shapes and similar ranking of students. Examining mean scores by country for each grade and subject confirmed that country rankings were about the same for paperTIMSS and eTIMSS. Lastly, the results of the analysis by student subgroups showed that, overall, the mode effects on the trend items affected students uniformly across subgroups based on socioeconomic status, gender, and digital self-efficacy. These student characteristics explained a negligible proportion of the variance in achievement score differences between paperTIMSS and eTIMSS.

The above findings meet criteria for evidence that the mathematics and science constructs were unchanged in eTIMSS (APA 1986; DePascale et al. 2016; Randall et al. 2012; Winter 2010). Therefore, the difference in scores that resulted from the mode effects can be accounted for through appropriate linking procedures, and the paperTIMSS and eTIMSS scores can be put on a common scale.

Implications for measuring trends in TIMSS 2019

Although quite a large-scale study in terms of the number of countries and students involved and the amount of data collected, the Item Equivalence Study was intended to give only a preliminary indication of mode effects when countries participating in TIMSS 2019 could choose between paper-based (paperTIMSS) and computer-based (eTIMSS) versions of the assessment.
Given the mode effects found by the study, it is considered unlikely that the trend items in paperTIMSS and eTIMSS overall will be sufficiently equivalent for the common item linking usually implemented by TIMSS for measuring trends, so the linking procedure should be augmented by an additional data source. Accordingly, in addition to administering the full eTIMSS assessment to the usual national sample of about 4500 students, each eTIMSS country will administer the paper trend items to a matched sample of 1500 students (known as the "bridge" sample), resulting in randomly equivalent student samples taking both eTIMSS and paperTIMSS items in each eTIMSS country. The bridge data will provide a secure basis for linking paperTIMSS and eTIMSS in 2019, regardless of whether any of the trend items can be considered psychometrically equivalent.

However, because of improvements made to many of the eTIMSS trend items as well as to the usability and reliability of the eTIMSS delivery platform, there are grounds to expect that individual item mode effects may be less apparent in the data from the main data collection. Because of this, the linking procedure will come after a reexamination of paperTIMSS-eTIMSS data for any evidence of bias due to mode effects. Depending on the outcome of this item analysis, it is possible that a subset of trend items may be identified that are invariant across paperTIMSS and eTIMSS and can be considered common items for calibration purposes.

The approach for measuring trends in TIMSS 2019 (Fig. 4) involves one overall concurrent item calibration to estimate both paperTIMSS and eTIMSS item parameters, based on all assessment data from TIMSS 2015 and TIMSS 2019, and two separate linear transformations. If there are any invariant trend items, these item parameters will be fixed to be equal for paperTIMSS and eTIMSS, and non-invariant eTIMSS item parameters will be estimated freely.

Fig. 4 Concurrent calibration model for TIMSS 2019.
(The figure shows a schematic representation of the plan for the concurrent calibration to estimate paperTIMSS and eTIMSS item parameters and the subsequent linear transformations to measure trends in TIMSS 2019.)

The first linear transformation will place the TIMSS 2019 data from paperTIMSS countries and the paper-based trend item data from the eTIMSS bridge sample on the TIMSS scale by aligning the TIMSS 2015 data under the 2019 concurrent calibration with the same data under the 2015 calibration. After applying the linear transformation to the TIMSS 2019 scores through common item linking, the score differences that remain between assessments will reflect the change in student achievement over time.

The second linear transformation for eTIMSS 2019 countries will align the distribution of the eTIMSS scores with the already transformed distribution of the paper-based bridge scores through equivalent groups or common population linking, which is possible because the eTIMSS data and the bridge data are equivalent samples from each country's student population. Then, the eTIMSS 2019 scores will be directly comparable with paperTIMSS 2019 scores, as well as TIMSS scores from all previous assessments. This two-step procedure is analogous to the procedure used in TIMSS 2007 to link the TIMSS achievement scales despite a major change in booklet design from 2003 to 2007 (Foy et al. 2008).

The TIMSS 2019 Item Equivalence Study played a valuable role in predicting the likely existence of a mode effect in the main TIMSS 2019 data collection and confirming the need to add a paper-based bridge to the data collection design. This enhanced design, incorporating both eTIMSS and paperTIMSS data, ensures that the measurement of trends will be safeguarded as TIMSS 2019 expands to include new digital as well as traditional paper formats.
The bridge data will also allow for reexamining the results of the Item Equivalence Study based on nationally representative student populations, as well as addressing potential issues of item model-data misfit and bias in the digital trend items. By contributing their bridge data to the linking process described in Fig. 4, each eTIMSS country adds to the stability of the eTIMSS-paperTIMSS link, which is based on having equivalent student samples taking items in both eTIMSS and paperTIMSS formats. The linear transformation that establishes this link and adjusts for the mode effect is a global transformation applied in the same way for each country. It makes no provision for differential country-by-country mode effects. However, there is some evidence from the Item Equivalence Study that the mode effect may be stronger in some countries than others, and the bridge data provides each country with an avenue for exploring this issue. By comparing the performance of its students on the eTIMSS and paperTIMSS versions of the trend items, each country can develop a detailed picture of how the mode effect may be operating among its students and which items, if any, are contributing to this effect.

TIMSS is directed by the TIMSS & PIRLS International Study Center at Boston College on behalf of IEA, the International Association for the Evaluation of Educational Achievement.

Abbreviations

TIMSS: Trends in International Mathematics and Science Study
IEA: International Association for the Evaluation of Educational Achievement
PSIs: Problem-Solving and Inquiry Tasks

References

American Psychological Association Committee on Professional Standards and Committee on Psychological Tests and Assessment (APA). (1986). Guidelines for computer-based tests and interpretations. Washington, DC: American Psychological Association.

Bennett, R. E., Braswell, J., Oranje, A., Sandene, B., Kaplan, K., & Yan, F. (2008).
Does it matter if I take my mathematics test on a computer? A second empirical study of mode effects in NAEP. Journal of Technology, Learning, and Assessment, 6(9), 1–39.

Bridgeman, B., Lennon, M. L., & Jackenthal, A. (2003). Effects of screen size, screen resolution, and display rate on computer-based test performance. Applied Measurement in Education, 16(3), 191–205.

Chen, G., Cheng, W., Chang, T.-W., Zheng, X., & Huang, R. (2014). A comparison of reading comprehension across paper, computer screens, and tablets: Does tablet familiarity matter? Journal of Computers in Education, 1(3), 213–225.

Cooper, J. (2006). The digital divide: The special case of gender. Journal of Computer Assisted Learning, 22, 320–334.

Davis, L. L., Kong, X., McBride, Y., & Morrison, K. (2017). Device comparability of tablets and computers for assessment purposes. Applied Measurement in Education, 30(1), 16–26.

DePascale, C., Dadey, N., & Lyons, S. (2016). Score comparability across computerized assessment delivery devices: Defining comparability, reviewing the literature, and providing recommendations for states when submitting to Title 1 Peer Review. Washington, DC: Council of Chief State School Officers.

Fishbein, B. (2018). Preserving 20 years of TIMSS trend measurements: Early stages in the transition to the eTIMSS assessment (Doctoral dissertation). Boston College.

Foy, P. (2017). TIMSS 2015 user guide for the international database. Retrieved from Boston College, TIMSS & PIRLS International Study Center website: https://timss.bc.edu/timss2015/international-database/.

Foy, P., Galia, J., & Li, I. (2008). Scaling the data from the TIMSS 2007 mathematics and science assessments. In J. F. Olson, M. O. Martin, & I. V. S. Mullis (Eds.), TIMSS 2007 technical report. Chestnut Hill: TIMSS & PIRLS International Study Center, Boston College.

Foy, P., & LaRoche, S. (2016). Estimating standard errors in the TIMSS 2015 results. In M. O. Martin, I. V. S. Mullis, & M.
Hooper (Eds.), Methods and procedures in TIMSS 2015 (pp. 4.1–4.69). Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timss.bc.edu/publications/timss/2015-methods/chapter-4.html.

Foy, P., & Yin, L. (2016). Scaling the TIMSS 2015 achievement data. In M. O. Martin, I. V. S. Mullis, & M. Hooper (Eds.), Methods and procedures in TIMSS 2015 (pp. 13.1–13.62). Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timss.bc.edu/publications/timss/2015-methods/chapter-13.html.

Gallagher, A., Bridgeman, B., & Cahalan, C. (2002). The effect of computer-based tests on racial-ethnic and gender groups. Journal of Educational Measurement, 39(2), 133–147.

Horkay, N., Bennett, R. E., Allen, N., Kaplan, B., & Yan, F. (2006). Does it matter if I take my writing test on computer? An empirical study of mode effects in NAEP. Journal of Technology, Learning, and Assessment, 5(2). Retrieved from http://www.jtla.org.

Jerrim, J. (2016). PISA 2012: How do results for the paper and computer tests compare? Assessment in Education: Principles, Policy, & Practice, 23(4), 495–518.

Jerrim, J., Micklewright, J., Heine, J.-H., Salzer, C., & McKeown, C. (2018). PISA 2015: How big is the 'mode effect' and what has been done about it? Oxford Review of Education. https://doi.org/10.1080/03054985.2018.1430025.

Johnson, M., & Green, S. (2006). On-line mathematics assessment: The impact of mode on performance and question answering strategies. Journal of Technology, Learning, and Assessment, 4(5), 1–35.

LaRoche, S., Joncas, M., & Foy, P. (2016). Sample design in TIMSS 2015. In M. O. Martin, I. V. S. Mullis, & M. Hooper (Eds.), Methods and procedures in TIMSS 2015 (pp. 3.1–3.37). Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timss.bc.edu/publications/timss/2015-methods/chapter-3.html.

MacCann, R. (2006).
The equivalence of online and traditional testing for different subpopulations and item types. British Journal of Educational Technology, 37(1), 79–81.

Martin, M. O., Mullis, I. V. S., Beaton, A. E., Gonzalez, E. J., Smith, T. A., & Kelly, D. L. (1998). Science achievement in the primary school years: IEA's third international mathematics and science report. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.

Martin, M. O., Mullis, I. V. S., Foy, P., & Hooper, M. (2016a). TIMSS 2015 international results in science. Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timssandpirls.bc.edu/timss2015/international-results/.

Martin, M. O., Mullis, I. V. S., Foy, P., & Hooper, M. (Eds.). (2016b). TIMSS achievement methodology. In Methods and procedures in TIMSS 2015 (pp. 12.1–12.9). Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timss.bc.edu/publications/timss/2015-methods/chapter-12.html.

Mazzeo, J., & Harvey, A. L. (1988). The equivalence of scores from automated and conventional educational and psychological tests: A review of the literature (College Board Rep. No. 88-8, ETS RR No. 88-21). Princeton, NJ: Educational Testing Service.

Mazzeo, J., & von Davier, M. (2014). Linking scales in international large-scale assessments. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis (pp. 229–258). Boca Raton: Chapman & Hall, CRC Press.

Mislevy, R. J. (1991). Randomization-based inference about latent variables from complex samples. Psychometrika, 56(2), 177–196.

Mullis, I. V. S., Martin, M. O., Beaton, A. E., Gonzalez, E. J., Kelly, D. L., & Smith, T. A. (1998). Mathematics achievement in the primary school years: IEA's third international mathematics and science report. Chestnut Hill: TIMSS & PIRLS International Study Center, Boston College.

Mullis, I. V.
S., Martin, M. O., Foy, P., & Hooper, M. (2016). TIMSS 2015 international results in mathematics. Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timssandpirls.bc.edu/timss2015/international-results/.

Mullis, I. V. S., Martin, M. O., & Hooper, M. (2017). Measuring changing educational contexts in a changing world: Evolution of the TIMSS and PIRLS questionnaires. In M. Rosén, K. Y. Hansen, & U. Wolff (Eds.), Cognitive abilities and educational outcomes (pp. 207–222). Switzerland: Springer International Publishing.

Muraki, E., & Bock, R. D. (1991). PARSCALE [computer software]. Lincolnwood, IL: Scientific Software International.

Noyes, J. M., & Garland, K. J. (2008). Computer- vs. paper-based tasks: Are they equivalent? Ergonomics, 51(9), 1352–1375.

Parshall, C. G., & Kromrey, J. D. (1993). Computer testing versus paper-and-pencil testing: An analysis of examinee characteristics associated with mode effect. Paper presented at the Annual Meeting of the American Educational Research Association, Atlanta, GA.

Pisacreta, D. (2013). Comparison of a test delivered using an iPad versus a laptop computer: Usability study results. Paper presented at the Council of Chief State School Officers (CCSSO) National Conference on Student Assessment (NCSA), National Harbor, MD.

Pommerich, M. (2004). Developing computerized versions of paper-and-pencil tests: Mode effects for passage-based tests. Journal of Technology, Learning, and Assessment, 2(6), 1–45.

Pruet, P., Ang, C. S., & Farzin, D. (2016). Understanding tablet computer usage among primary school students in underdeveloped areas: Students' technology experience, learning styles and attitudes. Computers in Human Behavior, 55, 1131–1144.

Randall, J., Sireci, S., Li, X., & Kaira, L. (2012). Evaluating the comparability of paper- and computer-based science tests across sex and SES subgroups. Educational Measurement: Issues and Practice, 31(4), 2–12.
Rogers, A., Tang, C., Lin, J.-J., & Kandathil, M. (2006). DGROUP [computer software]. Princeton: Educational Testing Service.

Russell, M. (1999). Testing on computers: A follow-up study comparing performance on computer and on paper. Education Policy Analysis Archives, 7(20). Retrieved from http://epaa.asu.edu/epaa/v7n20/.

Russell, M. (2002). The influence of computer-print on rater scores. Chestnut Hill: Technology and Assessment Study Collaborative, Boston College.

Rust, K. (2014). Sampling, weighting, and variance estimation in international large-scale assessments. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis (pp. 117–153). Boca Raton: CRC Press, Taylor & Francis Group.

Sandene, B., Bennett, R. E., Braswell, J., & Oranje, A. (2005). Online assessment in mathematics and writing: Reports from the NAEP technology-based assessment project, Research and development series (NCES 2005-457). U.S. Department of Education, National Center for Education Statistics. Washington, D.C.: U.S. Government Printing Office.

Stocking, M. L., & Lord, F. M. (1983). Developing a common metric in item response theory. Applied Psychological Measurement, 7, 201–210.

Strain-Seymour, E., Craft, J., Davis, L. L., & Elbom, J. (2013). Testing on tablets: Part I of a series of usability studies on the use of tablets for K-12 assessment programs. Pearson White Paper. Retrieved from http://researchnetwork.pearson.com/.

Way, D. W., Davis, L. L., Keng, L., & Strain-Seymour, E. (2016). From standardization to personalization: The comparability of scores based on different testing conditions, modes, and devices. In F. Drasgow (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 260–284). New York and London: Taylor & Francis, Routledge.

Winter, P. C. (Ed.). (2010). Evaluating the comparability of scores from achievement test variations.
Washington, DC: Council of Chief State School Officers.

Zhang, T., Xie, Q., Park, B. J., Kim, Y. Y., Broer, M., & Bohrnstedt, G. (2016). Computer familiarity and its relationship to performance in three NAEP digital-based assessments (AIR-NAEP Working Paper #01-2016). Washington, DC: American Institutes for Research.

Authors' contributions

BF, MM, IM, PF contributed to developing the research design, conducting the analysis, and writing the manuscript. All authors read and approved the final manuscript.

Acknowledgements

The authors are indebted to psychometric staff at Educational Testing Service (ETS)—Scott Davis, Jonathan Weeks, Ed Kulick, John Mazzeo and Tim Davey—for their assistance with this project. In particular, ETS staff advised on the study design and analytic approach, conducted a range of item-by-item analyses, and implemented the IRT achievement scaling.

Author information

TIMSS & PIRLS International Study Center, Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA, 02467, USA: Bethany Fishbein, Michael O. Martin, Ina V. S. Mullis & Pierre Foy

Correspondence to Bethany Fishbein.

Fishbein, B., Martin, M.O., Mullis, I.V.S. et al. The TIMSS 2019 Item Equivalence Study: examining mode effects for computer-based assessment and implications for measuring trends. Large-scale Assess Educ 6, 11 (2018). doi:10.1186/s40536-018-0064-z

Keywords: Computer-based assessment; Mode effects
# Fundamental counting principle

The fundamental counting principle is a basic concept in combinatorics. It states that if there are m ways to do one thing, and n ways to do another thing, then there are m * n ways to do both things together.

For example, let's say you have 3 shirts (red, blue, and green) and 2 pants (black and khaki). According to the fundamental counting principle, you can create outfits by multiplying the number of options for each item: 3 * 2 = 6 outfits.

This principle can be extended to more than two items as well. If you have 3 shirts, 2 pants, and 4 pairs of shoes, you can calculate the total number of outfits by multiplying the number of options for each item: 3 * 2 * 4 = 24 outfits.

The fundamental counting principle is a simple yet powerful tool that allows us to calculate the number of possibilities in various situations. It forms the foundation for many other counting techniques used in computer science.

Suppose you have 4 different types of fruits (apple, banana, orange, and strawberry) and 3 different types of desserts (cake, ice cream, and pie). How many different fruit-dessert combinations can you create?

According to the fundamental counting principle, you can calculate the total number of combinations by multiplying the number of options for each item: 4 * 3 = 12 combinations. Here are all the possible combinations:

- Apple cake
- Apple ice cream
- Apple pie
- Banana cake
- Banana ice cream
- Banana pie
- Orange cake
- Orange ice cream
- Orange pie
- Strawberry cake
- Strawberry ice cream
- Strawberry pie

## Exercise

You have 5 different shirts, 4 different pants, and 3 different pairs of shoes. How many different outfits can you create?

### Solution

According to the fundamental counting principle, you can calculate the total number of outfits by multiplying the number of options for each item: 5 * 4 * 3 = 60 outfits.
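A quick way to check multiplications like these is to enumerate the possibilities directly. The sketch below uses Python's `itertools.product`; the specific shoe names are made up for the example:

```python
from itertools import product

shirts = ["red", "blue", "green"]
pants = ["black", "khaki"]
shoes = ["sneakers", "boots", "sandals", "loafers"]  # 4 hypothetical pairs

# itertools.product yields every (shirt, pants, shoes) triple exactly once,
# so its length equals the fundamental counting principle's product
outfits = list(product(shirts, pants, shoes))
print(len(outfits))  # 3 * 2 * 4 = 24
```

Enumerating is useful for small cases; for large ones, the multiplication alone gives the count without materializing every outcome.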
# Permutations: ordering and arranging

Permutations are a specific type of arrangement where the order of the items matters. In other words, a permutation is an ordered arrangement of objects.

For example, let's say you have 3 books on a shelf: A, B, and C. The different permutations of these books would be:

- ABC
- ACB
- BAC
- BCA
- CAB
- CBA

As you can see, the order of the books matters in a permutation. If we were to rearrange the order of the books, we would have a different permutation.

The number of permutations of a set of objects can be calculated using the factorial function. The factorial of a number is the product of all positive integers less than or equal to that number. For example, the factorial of 3 (written as 3!) is calculated as 3! = 3 * 2 * 1 = 6. This means that there are 6 different permutations of 3 objects.

Suppose you have 4 different colors of pens: red, blue, green, and black. How many different ways can you arrange these pens in a row?

To calculate the number of permutations, we can use the factorial function. Since there are 4 pens, the number of permutations would be: 4! = 4 * 3 * 2 * 1 = 24. Therefore, there are 24 different ways to arrange the 4 pens.

## Exercise

You have 5 different letters: A, B, C, D, and E. How many different ways can you arrange these letters in a row?

### Solution

To calculate the number of permutations, we can use the factorial function. Since there are 5 letters, the number of permutations would be: 5! = 5 * 4 * 3 * 2 * 1 = 120. Therefore, there are 120 different ways to arrange the 5 letters.

# Combinations: unordered selections

Combinations are a specific type of selection where the order of the items does not matter. In other words, a combination is an unordered selection of objects.

For example, let's say you have 3 books on a shelf: A, B, and C. The different combinations of two of these books would be:

- A, B
- A, C
- B, C

As you can see, the order of the books does not matter in a combination.
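The contrast between ordered and unordered selections can be made concrete with Python's `itertools`, which can enumerate both:

```python
from itertools import combinations, permutations
from math import factorial

books = ["A", "B", "C"]

# Order matters: 3! = 6 ordered arrangements of the three books
perms = list(permutations(books))
print(len(perms), factorial(3))  # 6 6

# Order does not matter: only 3 unordered pairs of books
pairs = list(combinations(books, 2))
print(pairs)  # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```

Note that `combinations` never yields `('B', 'A')` once `('A', 'B')` has appeared, which is exactly what "unordered" means here.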
If we were to rearrange the order of the books, we would still have the same combination.

The number of combinations of a set of objects can be calculated using the combination formula:

$$C(n, k) = \frac{n!}{k!(n-k)!}$$

where n is the total number of objects and k is the number of objects being selected.

Suppose you have 5 different colors of pens: red, blue, green, black, and purple. How many different combinations of 3 pens can you select?

To calculate the number of combinations, we can use the combination formula. Since there are 5 pens and we want to select 3, the number of combinations would be:

$$C(5, 3) = \frac{5!}{3!(5-3)!} = \frac{5!}{3!2!} = \frac{5 * 4 * 3 * 2 * 1}{3 * 2 * 1 * 2 * 1} = 10$$

Therefore, there are 10 different combinations of 3 pens that can be selected.

## Exercise

You have 6 different letters: A, B, C, D, E, and F. How many different combinations of 4 letters can you select?

### Solution

To calculate the number of combinations, we can use the combination formula. Since there are 6 letters and we want to select 4, the number of combinations would be:

$$C(6, 4) = \frac{6!}{4!(6-4)!} = \frac{6!}{4!2!} = \frac{6 * 5 * 4 * 3 * 2 * 1}{4 * 3 * 2 * 1 * 2 * 1} = 15$$

Therefore, there are 15 different combinations of 4 letters that can be selected.

# Combinatorics: counting with restrictions

Combinatorics is a branch of mathematics that deals with counting, arranging, and selecting objects. It is often used in computer science to solve problems related to optimization, algorithms, and data structures.

When counting with restrictions, we need to consider additional conditions or limitations that affect the counting process. These restrictions can be based on various factors, such as rules, constraints, or requirements.

For example, let's say we have a set of 5 books: A, B, C, D, and E.
We want to count the number of ways we can arrange these books on a shelf, but with the restriction that book A must be placed immediately before book B.

To count with this restriction, we can treat books A and B as a single entity. This means that we have 4 entities to arrange: AB, C, D, and E. The number of ways to arrange these entities can be calculated using the factorial function. Since there are 4 entities, the number of arrangements would be: 4! = 4 * 3 * 2 * 1 = 24.

Therefore, there are 24 different ways to arrange the books on the shelf with the restriction that book A must be placed immediately before book B.

Suppose you have 6 different colors of pens: red, blue, green, black, purple, and yellow. You want to select 3 pens, but with the restriction that at least one pen must be blue.

To count with this restriction, we can calculate the total number of combinations without any restrictions, and then subtract the number of combinations where no blue pen is selected.

The total number of combinations without any restrictions can be calculated using the combination formula. Since there are 6 pens and we want to select 3, the number of combinations would be:

$$C(6, 3) = \frac{6!}{3!(6-3)!} = \frac{6!}{3!3!} = \frac{6 * 5 * 4 * 3 * 2 * 1}{3 * 2 * 1 * 3 * 2 * 1} = 20$$

To calculate the number of combinations where no blue pen is selected, we can exclude the blue pen. This means that we have 5 pens to select from, and we want to select 3 pens. The number of combinations would be:

$$C(5, 3) = \frac{5!}{3!(5-3)!} = \frac{5!}{3!2!} = \frac{5 * 4 * 3 * 2 * 1}{3 * 2 * 1 * 2 * 1} = 10$$

Therefore, the number of combinations with the restriction that at least one pen must be blue would be: 20 - 10 = 10.

## Exercise

Suppose you have 8 different colors of pens: red, blue, green, black, purple, yellow, orange, and pink. You want to select 4 pens, but with the restriction that at least one pen must be red or blue.
To count with this restriction, calculate the total number of combinations without any restrictions, and then subtract the number of combinations where no red or blue pen is selected.

### Solution

The total number of combinations without any restrictions can be calculated using the combination formula. Since there are 8 pens and we want to select 4, the number of combinations would be:

$$C(8, 4) = \frac{8!}{4!(8-4)!} = \frac{8!}{4!4!} = \frac{8 * 7 * 6 * 5 * 4 * 3 * 2 * 1}{4 * 3 * 2 * 1 * 4 * 3 * 2 * 1} = 70$$

To calculate the number of combinations where no red or blue pen is selected, we can exclude the red and blue pens. This means that we have 6 pens to select from, and we want to select 4 pens. The number of combinations would be:

$$C(6, 4) = \frac{6!}{4!(6-4)!} = \frac{6!}{4!2!} = \frac{6 * 5 * 4 * 3 * 2 * 1}{4 * 3 * 2 * 1 * 2 * 1} = 15$$

Therefore, the number of combinations with the restriction that at least one pen must be red or blue would be: 70 - 15 = 55.

# Probability: calculating likelihood

Probability is a branch of mathematics that deals with calculating the likelihood of events occurring. It is often used in computer science to analyze and predict the behavior of algorithms, systems, and processes.

The probability of an event can be represented as a number between 0 and 1, where 0 represents impossibility and 1 represents certainty. The probability of an event occurring is calculated by dividing the number of favorable outcomes by the total number of possible outcomes.

For example, let's say we have a standard deck of 52 playing cards. The probability of drawing an ace from the deck can be calculated as: 4 (number of aces) / 52 (total number of cards) = 1/13 ≈ 0.077.

Probability can also be represented as a percentage. In the example above, the probability of drawing an ace would be approximately 7.7%.

Suppose you have a bag of 10 marbles: 5 red marbles, 3 blue marbles, and 2 green marbles.
What is the probability of randomly selecting a red marble from the bag?

To calculate the probability, we divide the number of favorable outcomes (selecting a red marble) by the total number of possible outcomes (selecting any marble). The number of favorable outcomes is 5 (number of red marbles), and the total number of possible outcomes is 10 (total number of marbles). Therefore, the probability of selecting a red marble is: 5/10 = 1/2 = 0.5. The probability of selecting a red marble is 0.5 or 50%.

## Exercise

Suppose you have a bag of 12 marbles: 4 red marbles, 3 blue marbles, 2 green marbles, and 3 yellow marbles. What is the probability of randomly selecting a blue or green marble from the bag?

### Solution

To calculate the probability, we divide the number of favorable outcomes (selecting a blue or green marble) by the total number of possible outcomes (selecting any marble). The number of favorable outcomes is 3 (number of blue marbles) + 2 (number of green marbles) = 5, and the total number of possible outcomes is 12 (total number of marbles). Therefore, the probability of selecting a blue or green marble is: 5/12 ≈ 0.417. The probability of selecting a blue or green marble is approximately 0.417 or 41.7%.

# Probability and counting principles

Counting principles and probability are closely related. Counting principles help us calculate the number of possible outcomes, while probability helps us determine the likelihood of specific outcomes occurring.

When calculating probabilities, we often use counting principles to determine the total number of possible outcomes and the number of favorable outcomes.

For example, let's say we have a standard deck of 52 playing cards. What is the probability of drawing a heart from the deck? To calculate the probability, we divide the number of favorable outcomes (drawing a heart) by the total number of possible outcomes (drawing any card).
The number of favorable outcomes is 13 (number of hearts), and the total number of possible outcomes is 52 (total number of cards). Therefore, the probability of drawing a heart is: 13/52 = 1/4 = 0.25. The probability of drawing a heart is 0.25 or 25%. Suppose you have a bag of 8 marbles: 3 red marbles, 2 blue marbles, and 3 green marbles. What is the probability of randomly selecting a red or blue marble from the bag? To calculate the probability, we divide the number of favorable outcomes (selecting a red or blue marble) by the total number of possible outcomes (selecting any marble). The number of favorable outcomes is 3 (number of red marbles) + 2 (number of blue marbles) = 5, and the total number of possible outcomes is 8 (total number of marbles). Therefore, the probability of selecting a red or blue marble is: 5/8 = 0.625. The probability of selecting a red or blue marble is 0.625, or 62.5%. ## Exercise Suppose you have a bag of 10 marbles: 4 red marbles, 3 blue marbles, and 3 green marbles. What is the probability of randomly selecting a red or green marble from the bag? ### Solution To calculate the probability, we divide the number of favorable outcomes (selecting a red or green marble) by the total number of possible outcomes (selecting any marble). The number of favorable outcomes is 4 (number of red marbles) + 3 (number of green marbles) = 7, and the total number of possible outcomes is 10 (total number of marbles). Therefore, the probability of selecting a red or green marble is: 7/10 = 0.7. The probability of selecting a red or green marble is 0.7 or 70%. # Recursive counting and counting with repetition Recursive counting is a counting technique that involves breaking down a problem into smaller subproblems and combining their solutions to find the total number of outcomes. Counting with repetition is a counting technique that involves considering the same object multiple times in a counting process.
For example, let's say we want to count the number of 3-digit numbers that can be formed using the digits 0, 1, and 2 (allowing leading zeros, so we are really counting 3-digit strings). We can use recursive counting to solve this problem. To form a 3-digit number, we need to make 3 choices: one for each digit. For the first digit, we have 3 options (0, 1, or 2). For the second digit, we also have 3 options. And for the third digit, we again have 3 options. To find the total number of 3-digit numbers, we multiply the number of options for each digit: 3 * 3 * 3 = 27. In this case, we used recursive counting because we broke down the problem into 3 subproblems (one for each digit) and combined their solutions. Suppose you have 4 different colors of pens: red, blue, green, and black. How many different 3-pen combinations can you create, allowing for repetition? To count the number of combinations with repetition, we can use the formula: n^k, where n is the number of options for each choice and k is the number of choices. In this case, we have 4 options for each choice (red, blue, green, or black), and we want to make 3 choices. Therefore, the number of combinations with repetition would be: 4^3 = 4 * 4 * 4 = 64. Therefore, there are 64 different 3-pen combinations that can be created, allowing for repetition. ## Exercise Suppose you have 5 different letters: A, B, C, D, and E. How many different 4-letter combinations can you create, allowing for repetition? ### Solution To count the number of combinations with repetition, we can use the formula: n^k, where n is the number of options for each choice and k is the number of choices. In this case, we have 5 options for each choice (A, B, C, D, or E), and we want to make 4 choices. Therefore, the number of combinations with repetition would be: 5^4 = 5 * 5 * 5 * 5 = 625. Therefore, there are 625 different 4-letter combinations that can be created, allowing for repetition.
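The counts above are small enough to verify by brute-force enumeration. Here is a quick sketch in Python (the variable names are ours, chosen for this illustration) that lists every string and confirms the n^k formula from the examples:

```python
from itertools import product

# Counting with repetition: k independent choices from n options gives n^k.
digits = ["0", "1", "2"]
three_digit_strings = list(product(digits, repeat=3))
print(len(three_digit_strings))  # 27, matching 3^3

letters = ["A", "B", "C", "D", "E"]
four_letter_strings = list(product(letters, repeat=4))
print(len(four_letter_strings))  # 625, matching 5^4
```

Each element produced by `product` is one tuple of choices, so the length of the list is exactly the number of sequences with repetition allowed.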
# Binomial coefficients Binomial coefficients are a type of counting principle that involve selecting a certain number of objects from a larger set without regard to their order. The binomial coefficient formula is given by: $$C(n, k) = \frac{n!}{k!(n-k)!}$$ Where n is the total number of objects and k is the number of objects being selected. Binomial coefficients are often used in probability theory and combinatorics to calculate the number of combinations or arrangements. For example, let's say we have 5 different colors of pens: red, blue, green, black, and purple. How many different combinations of 3 pens can you select? To calculate the number of combinations, we can use the binomial coefficient formula. Since there are 5 pens and we want to select 3, the number of combinations would be: $$C(5, 3) = \frac{5!}{3!(5-3)!} = \frac{5!}{3!2!} = \frac{5 * 4 * 3 * 2 * 1}{3 * 2 * 1 * 2 * 1} = 10$$ Therefore, there are 10 different combinations of 3 pens that can be selected. Suppose you have 6 different colors of pens: red, blue, green, black, purple, and yellow. How many different combinations of 4 pens can you select? To calculate the number of combinations, we can use the binomial coefficient formula. Since there are 6 pens and we want to select 4, the number of combinations would be: $$C(6, 4) = \frac{6!}{4!(6-4)!} = \frac{6!}{4!2!} = \frac{6 * 5 * 4 * 3 * 2 * 1}{4 * 3 * 2 * 1 * 2 * 1} = 15$$ Therefore, there are 15 different combinations of 4 pens that can be selected. ## Exercise Suppose you have 8 different colors of pens: red, blue, green, black, purple, yellow, orange, and pink. How many different combinations of 5 pens can you select? ### Solution To calculate the number of combinations, we can use the binomial coefficient formula. 
Since there are 8 pens and we want to select 5, the number of combinations would be: $$C(8, 5) = \frac{8!}{5!(8-5)!} = \frac{8!}{5!3!} = \frac{8 * 7 * 6 * 5 * 4 * 3 * 2 * 1}{5 * 4 * 3 * 2 * 1 * 3 * 2 * 1} = 56$$ Therefore, there are 56 different combinations of 5 pens that can be selected. # Multinomial coefficients Multinomial coefficients are a generalization of binomial coefficients that count the ways of splitting a set of objects into several categories. The multinomial coefficient formula is given by: $$C(n, k_1, k_2, ..., k_m) = \frac{n!}{k_1!k_2!...k_m!}$$ Where n is the total number of objects and k1, k2, ..., km are the sizes of the categories, with k1 + k2 + ... + km = n. Multinomial coefficients are often used in probability theory and combinatorics to count arrangements with multiple categories; a typical use is counting the distinguishable orderings of objects when some of them are identical. For example, let's say we have 6 pens: 2 red pens, 1 blue pen, and 3 green pens, where pens of the same color are indistinguishable. How many distinguishable ways can we arrange the 6 pens in a row? To calculate the number of arrangements, we can use the multinomial coefficient formula. Since there are 6 pens, with category sizes 2, 1, and 3 (which sum to 6), the number of arrangements would be: $$C(6, 2, 1, 3) = \frac{6!}{2!1!3!} = \frac{720}{2 * 1 * 6} = 60$$ Therefore, there are 60 distinguishable arrangements of the 6 pens. Suppose you have 8 pens: 2 red pens, 1 blue pen, 1 green pen, and 4 black pens, where pens of the same color are indistinguishable. How many distinguishable ways can we arrange them in a row? To calculate the number of arrangements, we can use the multinomial coefficient formula. Since there are 8 pens, with category sizes 2, 1, 1, and 4 (which sum to 8), the number of arrangements would be: $$C(8, 2, 1, 1, 4) = \frac{8!}{2!1!1!4!} = \frac{40320}{2 * 1 * 1 * 24} = 840$$ Therefore, there are 840 distinguishable arrangements of the 8 pens. ## Exercise Suppose you have 10 pens: 3 red pens, 1 blue pen, 1 green pen, and 5 black pens, where pens of the same color are indistinguishable. How many distinguishable ways can you arrange them in a row? ### Solution To calculate the number of arrangements, we can use the multinomial coefficient formula. Since there are 10 pens, with category sizes 3, 1, 1, and 5 (which sum to 10), the number of arrangements would be: $$C(10, 3, 1, 1, 5) = \frac{10!}{3!1!1!5!} = \frac{3628800}{6 * 1 * 1 * 120} = 5040$$ Therefore, there are 5040 distinguishable arrangements of the 10 pens. # Generating functions Generating functions are a powerful tool in combinatorics that allow us to represent and manipulate sequences of numbers. They are often used to solve counting problems and derive formulas for counting principles. A generating function is a formal power series that encodes information about a sequence of numbers. It is typically represented as a polynomial, where the coefficients of the polynomial correspond to the terms of the sequence. For example, let's say we want to count the number of ways to distribute 5 identical candies to 3 children. We can use a generating function to solve this problem. The generating function for this problem can be represented as: $$(1 + x + x^2 + x^3 + x^4 + x^5)^3$$ Expanding this polynomial will give us the coefficients of the terms, which represent the number of ways to distribute the candies.
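Expanding such products by hand is tedious, but polynomial multiplication does the same work mechanically. The sketch below (Python; `poly_mul` is a small helper written for this illustration, not a standard library function) expands the generating function for 5 candies and 3 children and reads off the coefficient of x^5:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = degree)."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

child = [1] * 6  # 1 + x + x^2 + ... + x^5: one child receives 0 to 5 candies
gf = [1]
for _ in range(3):          # multiply one factor per child
    gf = poly_mul(gf, child)

# Coefficient of x^5 = number of ways to distribute 5 candies to 3 children.
print(gf[5])  # 21
```

As a sanity check, evaluating the product at x = 1 (the sum of all coefficients) gives 6^3, the total number of unrestricted assignments.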
Generating functions can be manipulated using algebraic operations, such as addition, multiplication, and composition. This allows us to solve counting problems by performing operations on the generating functions. Suppose we want to count the number of ways to distribute 4 identical candies to 2 children. We can use a generating function to solve this problem. The generating function for this problem can be represented as: $$(1 + x + x^2 + x^3 + x^4)^2$$ Expanding this polynomial will give us the coefficients of the terms, which represent the number of ways to distribute the candies. $(1 + x + x^2 + x^3 + x^4)^2 = 1 + 2x + 3x^2 + 4x^3 + 5x^4 + 4x^5 + 3x^6 + 2x^7 + x^8$ The coefficient of $x^4$ is 5, which means there are 5 different ways to distribute 4 candies to 2 children. Generating functions allow us to solve counting problems by manipulating polynomials and extracting the coefficients of the terms. ## Exercise Suppose we want to count the number of ways to distribute 3 identical candies to 4 children. Use a generating function to solve this problem. ### Solution The generating function for this problem can be represented as: $$(1 + x + x^2 + x^3)^4$$ Expanding this polynomial will give us the coefficients of the terms, which represent the number of ways to distribute the candies. $(1 + x + x^2 + x^3)^4 = 1 + 4x + 10x^2 + 20x^3 + 31x^4 + 40x^5 + 44x^6 + 40x^7 + 31x^8 + 20x^9 + 10x^{10} + 4x^{11} + x^{12}$ The coefficient of $x^3$ is 20, which means there are 20 different ways to distribute 3 candies to 4 children. (As a check, the stars-and-bars formula gives the same count: $C(3 + 4 - 1, 3) = C(6, 3) = 20$.) # Inclusion-exclusion principle The inclusion-exclusion principle is a counting principle that allows us to calculate the number of elements in the union of multiple sets. The principle states that the number of elements in the union of two or more sets can be calculated by summing the sizes of the individual sets, then subtracting the sizes of the pairwise intersections, adding back the sizes of the triple intersections, and so on with alternating signs. For example, let's say we have three sets: A, B, and C. The sizes of the sets are given by |A|, |B|, and |C|.
The sizes of the intersections of the sets are given by |A ∩ B|, |A ∩ C|, and |B ∩ C|. The size of the union of the sets is given by |A ∪ B ∪ C|. The inclusion-exclusion principle can be represented as: $$|A ∪ B ∪ C| = |A| + |B| + |C| - |A ∩ B| - |A ∩ C| - |B ∩ C| + |A ∩ B ∩ C|$$ This principle can be extended to more than three sets as well. The inclusion-exclusion principle is a powerful tool for solving counting problems that involve multiple sets and their intersections. Suppose we have three sets: A, B, and C. The sizes of the sets are given by |A| = 5, |B| = 4, and |C| = 6. The sizes of the intersections of the sets are given by |A ∩ B| = 2, |A ∩ C| = 3, and |B ∩ C| = 1. Applying the formula, the size of the union is |A ∪ B ∪ C| = 5 + 4 + 6 - 2 - 3 - 1 + |A ∩ B ∩ C| = 9 + |A ∩ B ∩ C|, so once the size of the triple intersection is known, the size of the union follows directly. # Application of counting principles in algorithm analysis One important application of counting principles in algorithm analysis is in determining the time complexity of an algorithm. Time complexity measures the amount of time an algorithm takes to run as a function of the input size. Counting principles can help us analyze the number of operations performed by an algorithm and determine its time complexity. Another application of counting principles in algorithm analysis is in analyzing the space complexity of an algorithm. Space complexity measures the amount of memory or storage space required by an algorithm as a function of the input size. Counting principles can help us analyze the number of variables or data structures used by an algorithm and determine its space complexity. Counting principles are also used in analyzing the efficiency of data structures and algorithms. For example, counting principles can be used to analyze the efficiency of searching and sorting algorithms, as well as the efficiency of data structures such as arrays, linked lists, and trees. In addition, counting principles are applied in the design and analysis of algorithms for combinatorial optimization problems.
These problems involve finding the best solution from a finite set of possibilities. Counting principles can help us analyze the number of possible solutions and determine the complexity of algorithms for solving these problems. Overall, counting principles are a fundamental tool in algorithm analysis. They provide a systematic and rigorous approach to analyzing the efficiency and complexity of algorithms, and they help us understand the performance characteristics of different algorithms and data structures. ## Exercise Consider an algorithm that searches for a specific element in a sorted array. The algorithm uses a binary search approach, which divides the array in half at each step and compares the target element with the middle element of the current subarray. If the target element is found, the algorithm returns its index; otherwise, it continues searching in the appropriate half of the array. Using counting principles, analyze the time complexity of this algorithm in terms of the input size, denoted as n. ### Solution The binary search algorithm divides the array in half at each step, so the number of operations performed is proportional to the logarithm of the input size, n. Therefore, the time complexity of the binary search algorithm is O(log n).
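To make the exercise's analysis concrete, here is a minimal sketch of the binary search it describes (Python; the function name and test data are illustrative). Each iteration halves the interval `[lo, hi]`, so the loop runs at most about log2(n) times, which is where the O(log n) bound comes from:

```python
def binary_search(arr, target):
    # Classic binary search on a sorted list.
    # The search range [lo, hi] is halved at every step: O(log n) comparisons.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid          # found: return its index
        elif arr[mid] < target:
            lo = mid + 1        # target is in the right half
        else:
            hi = mid - 1        # target is in the left half
    return -1                   # target not present

data = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(data, 11))  # 4
print(binary_search(data, 4))   # -1
```

Counting the iterations directly: a range of size n shrinks to size at most n/2, then n/4, and so on, reaching size 1 after about log2(n) halvings.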
\begin{definition}[Definition:Jordan Curve] Let $f$ be a Jordan arc from $\tuple {x_1, y_1}$ to $\tuple {x_2, y_2}$. Then $f$ is a '''Jordan curve''' {{iff}} $\tuple {x_1, y_1} = \tuple {x_2, y_2}$. \end{definition}
\begin{document} \title{Distance of attractors for thin domains\footnote{This research has been partially supported by grants MTM2016-75465, MTM2012-31298, ICMAT Severo Ochoa project SEV-2015-0554 (MINECO), Spain and Grupo de Investigaci\'on CADEDIF, UCM.}} {\footnotesize \par\noindent {\bf Abstract:} In this work we consider a dissipative reaction-diffusion equation in a $d$-dimensional thin domain shrinking to a one dimensional segment and obtain good rates for the convergence of the attractors. To accomplish this, we use estimates on the convergence of inertial manifolds as developed previously in \cite{Arrieta-Santamaria-C0} and Shadowing theory. \vskip 0.5\baselineskip \noindent {\bf Keywords:} Thin domain; attractors; inertial manifolds; shadowing } \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \allowdisplaybreaks \section{Introduction} \selectlanguage{english} In this work we study the rate of convergence of attractors for a reaction-diffusion equation in a thin domain when the thickness of the domain goes to zero. Our domain is a thin channel obtained by shrinking a fixed domain $Q\subset \mathbb{R}^d$, see Figure \ref{dominioQ}, by a factor $\varepsilon$ in $(d-1)$-directions. The thin channel $Q_\varepsilon$ collapses to the one dimensional line segment $[0, 1]$ as $\varepsilon$ goes to zero. The reaction-diffusion equation in $Q_\varepsilon$ is given by \begin{equation}\label{equationonQepsilon} \left\{ \begin{array}{r@{=} l c} u_t-\Delta u+ \mu u \;&\;f(u)\quad &\textrm{in}\quad Q_\varepsilon,\\ \frac{\partial u}{\partial\nu_\varepsilon} \;&\;0\quad&\textrm{in}\quad \partial Q_\varepsilon, \end{array} \right.
\end{equation} where $\mu>0$ is a fixed number, $\nu_\varepsilon$ the unit outward normal to $\partial Q_\varepsilon$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ is a nonlinear term, with appropriate dissipativity conditions to guarantee the existence of an attractor $\mathcal{A}_\varepsilon\subset H^1(Q_\varepsilon)$. As the parameter $\varepsilon\to 0$, the thin domain shrinks to the line segment $[0,1]$ and the limiting reaction-diffusion equation is given by \begin{equation}\label{equationon(01)} \left\{ \begin{array}{r@{=} l c} u_t-\frac{1}{g}(gu_x)_x+\mu u\;&\;f(u)\qquad&\textrm{in}\quad (0, 1),\\ u_x(0)=u_x(1)\;&\;0. \end{array} \right. \end{equation} which also has an attractor $\mathcal{A}_0\subset H^1(0,1)$. There are several works in the literature comparing the dynamics of both equations and showing the convergence of $\mathcal{A}_\varepsilon$ to $\mathcal{A}_0$ as $\varepsilon\to 0$, under certain hypotheses. One of the most relevant and pioneering works in this direction is \cite{Hale&Raugel3}, where the authors show that when $d=2$ and every equilibrium of the limit problem \eqref{equationon(01)} is hyperbolic, the attractors behave continuously and, moreover, the flows on the attractors of both systems are topologically conjugate. In order to accomplish this task, the authors exploit the fact that the limit problem is one dimensional, which allows them to construct inertial manifolds for \eqref{equationonQepsilon} and \eqref{equationon(01)} which will be close in the $C^1$ topology. Restricting the flow to these inertial manifolds, and using that the limit problem is Morse-Smale (under the condition that all equilibria are hyperbolic, see \cite{Henry2}) they prove the $C^0$-conjugacy of the flows. Moreover, the method of constructing the inertial manifolds for fixed $\varepsilon\in [0,\varepsilon_0]$ consists in using the method described in \cite{Mallet-Sell}.
They consider the finite dimensional linear manifold given by the span of the eigenfunctions corresponding to the first $m$ eigenvalues of the elliptic operator and let this linear manifold evolve under the nonlinear flow; its $\omega$-limit set is a $C^1$ manifold, which is the inertial manifold and is in fact a graph over the finite dimensional linear manifold. This method provides them with an estimate of the distance of the inertial manifolds of the order of $\varepsilon^\gamma$ for some $\gamma<1$. Later on, reducing the system to the inertial manifolds and using the general techniques to estimate the distance of attractors for gradient flows, see \cite{Hale&Raugel} Theorem 2.5, gives them the estimate $\varepsilon^{\gamma'}$ with some $\gamma'<\gamma<1$ which depends on the number of equilibria of the limit problem and other characteristics of the problem. Our setting is more general than the one from \cite{Hale&Raugel3}, since we consider general $d$-dimensional thin domains (not just 2-dimensional). Moreover, our approach to this problem has some differences with respect to theirs. In our case, we will also construct inertial manifolds, but we will construct them following the Lyapunov-Perron method, as developed in \cite{Sell&You}. This method, as is shown in \cite{Arrieta-Santamaria-C0,Arrieta-Santamaria-C1}, provides us with a good estimate of the $C^0$ distance of the inertial manifolds (which is of order $\varepsilon|\ln(\varepsilon)|$, see \cite{Arrieta-Santamaria-C0}) and with the $C^{1,\theta}$ convergence of these manifolds, see \cite{Arrieta-Santamaria-C1}. Once the inertial manifolds are constructed and we have a good estimate of their distance, we can project the systems to these inertial manifolds and obtain the reduced systems, which are finite dimensional. The limit reduced system will be a Morse-Smale gradient-like system, see \cite{Hale}.
Then Shadowing theory and its relation to the distance of the attractors, as developed in Appendix \ref{shadowing}, will give us the key to obtain the rates of convergence of the attractors. Let us mention that the estimate we find on the Hausdorff symmetric distance of the attractors is the following (see Theorem \ref{maintheorem-thindomain}), $$\hbox{dist}_{H^1(Q_\varepsilon)} (\mathcal{A}_0, \mathcal{A}_\varepsilon)\leq C \varepsilon^{\frac{d+1}{2}}|\log(\varepsilon)|$$ which improves the one obtained in \cite{Hale&Raugel3}. We describe now the contents of this paper: In section \ref{setting} we give a complete description of the thin domain $Q_\varepsilon$ and set up the basic notation we will need. We also introduce the main result of the paper. In section \ref{elliptic} we study the related elliptic problem, obtaining an estimate for the distance of the resolvent operators and proving that this estimate is optimal. We postpone the proof of the main result of this section, Proposition \ref{resolvente}, to Appendix \ref{proof-resolvente}. In section \ref{nonlinear} we analyze the nonlinearity and we prepare it for the construction of inertial manifolds. We make an appropriate cut off of the nonlinear term and analyze the conditions this new nonlinearity satisfies. In section \ref{InertialManifoldsConstruction} we construct the corresponding inertial manifolds, reducing our problem to a finite dimensional one. In section \ref{distanceattractors}, using the estimates on the distance of the inertial manifolds together with the shadowing result obtained in Appendix \ref{shadowing}, we provide an almost optimal rate of convergence of attractors, proving the main result, Theorem \ref{maintheorem-thindomain}. At the end we have included two appendices. Appendix \ref{proof-resolvente} contains the proof of Proposition \ref{resolvente} and Appendix \ref{shadowing} contains some results on the relation of Shadowing and the distance of attractors for Morse-Smale maps.
\section{Setting of the problem and main results} \label{setting} In this section we set up the problem, describing clearly the domain and the equations we are dealing with. We will also state our main result on the distance of attractors. We end the section with some notation and technical results needed thereafter. We start by describing the thin domain. Let $\Omega= (0, 1)$ and let $Q$ be the set $$Q=\{(x, \mathbf{y})\in\mathbb{R}^d: 0\leq x\leq1,\; \; \mathbf{y}\in\Gamma^1_x\},$$ with $d\geq2$, and $\Gamma_x^1$ diffeomorphic to the unit ball in $\mathbb{R}^{d-1}$, $B(0, 1)$, for all $x\in[0,1]$, see Figure \ref{dominioQ}, that is, we assume that for each $x\in[0, 1]$, there exists a $C^1$ diffeomorphism $\mathbf{L}_x$ \begin{equation}\label{difeomorfismo} \mathbf{L}_x:B(0,1)\longrightarrow\Gamma_x^1\subset \mathbb{R}^{d-1}. \end{equation} We also assume that, if we define \begin{equation}\label{definition-of-J} \left\{ \begin{array}{r c l } \mathbf{L}:(0,1)\times B(0, 1)\; &\;\longrightarrow \;&\;Q\\ (x,\mathbf{y})\;&\;\mapsto\;&\;(x, \mathbf{L}_x(\mathbf{y})) \end{array} \right. \end{equation} then $\mathbf{L}$ is a $C^1$ diffeomorphism. The boundary of $Q$ has two distinguished parts: the one formed by $\Gamma_0^1\cup \Gamma_1^1$ (the two lids of the thin domain) and the lateral boundary $\partial_lQ=\{ (x,y): x\in (0,1), y\in \partial \Gamma_x^1\}$. \begin{figure} \caption{ Domain $Q$ with $d=3$.} \label{dominioQ} \end{figure} Our thin channel, or thin domain, will be defined by $$Q_\varepsilon =\{(x, \varepsilon\mathbf{y})\in\mathbb{R}^d: (x, \mathbf{y})\in Q\},\qquad \varepsilon\in (0,1).$$ Notice that this set is obtained by shrinking the set $Q$ by a factor $\varepsilon$ in the $(d-1)$-directions given by the variable $\mathbf{y}\in \mathbb{R}^{d-1}$. This domain gets thinner and thinner as $\varepsilon\to 0$ and it approaches the one dimensional line segment given by $\Omega\times\{\mathbf{0}\}=(0,1)\times\{\mathbf{0}\}$.
\par We denote by $g(x):=|\Gamma_x^1|$ the $(d-1)$-dimensional Lebesgue measure of the set $\Gamma_x^1$. From the hypothesis of the smoothness of the map $\mathbf{L}$ above, see \eqref{definition-of-J}, we have that $g$ is a smooth function defined in $[0, 1]$. In particular, there exist $g_0, g_1>0$ such that $g_0\leq g(x)\leq g_1$ for all $x\in [0, 1]$. \begin{remark} An important subclass of these thin domains are those whose transversal sections $\Gamma_x^1$ are disks centered at the origin of radius $r(x)$, that is, $$Q=\{(x, \mathbf{y})\in\mathbb{R}^d: 0\leq x\leq 1, |\mathbf{y}|< r(x)\}.$$ In this particular case, $g(x)=|B(0,1)|r(x)^{d-1}$, with $|B(0,1)|$ the Lebesgue measure of the unit ball in $\mathbb{R}^{d-1}$. The diffeomorphism $\mathbf{L}$ defined in (\ref{definition-of-J}) is given by $$\mathbf{L}(x, \mathbf{y})=(x, r(x)\mathbf{y}).$$ \end{remark} We consider the following reaction-diffusion equation in $Q_\varepsilon$, $0<\varepsilon\leq\varepsilon_0$, \begin{equation}\label{equationonQepsilon} \left\{ \begin{array}{r@{=} l c} u_t-\Delta u+ \mu u \;&\;f(u)\quad &\textrm{in}\quad Q_\varepsilon,\\ \frac{\partial u}{\partial\nu_\varepsilon} \;&\;0\quad&\textrm{in}\quad \partial Q_\varepsilon, \end{array} \right. \end{equation} where $\mu>0$ is a fixed number, $\nu_\varepsilon$ the unit outward normal to $\partial Q_\varepsilon$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ a $C^2$-function satisfying the following growth condition \begin{equation}\label{growthcondition} |f'(s)|\leq C(1+|s|^{\rho-1}), \quad s\in \mathbb{R} \end{equation} for some $\rho\geq 1$, and the dissipative condition, \begin{equation}\label{dissipativecondition1} \exists\; M>0,\quad\textrm{s.t.}\quad f(s)\cdot s\leq 0,\qquad |s|\geq M.
\end{equation} With the growth condition \eqref{growthcondition} we know that problem \eqref{equationonQepsilon} is locally well posed in some functional space of the type $L^r(Q_\varepsilon)$ for some $r>1$, possibly large, or $W^{1,p}(Q_\varepsilon)$, see \cite{Arrieta+CarvalhoTAMS, Arrieta+Carvalho+Anibal}. With the dissipative condition and some regularity arguments we obtain that solutions are globally defined, and with the aid of the maximum principle there exist uniform asymptotic bounds in the sup norm of the solutions. That is, for any initial condition $\phi_\varepsilon$ there exists a time $\tau$, which may depend on $\varepsilon$ and on the initial condition, such that the solution starting at $\phi_\varepsilon$ after time $\tau$ is uniformly bounded by $M$, that is, $|u(t,x,\phi_\varepsilon)|\leq M$ for $t\geq \tau$, with $M$ given by \eqref{dissipativecondition1}. These uniform asymptotic bounds together with parabolic regularity theory imply that the equation \eqref{equationonQepsilon} has an attractor $\mathcal{A}_\varepsilon\subset H^1(Q_\varepsilon)\cap L^\infty(Q_\varepsilon)$ satisfying the uniform bound \begin{equation}\label{uniformbound-eps} \|u_\varepsilon\|_{L^\infty(Q_\varepsilon)}\leq M, \qquad \hbox{for all }u_\varepsilon \in \mathcal{A}_\varepsilon \end{equation} The limit problem of (\ref{equationonQepsilon}) is given by, see \cite{Hale&Raugel3}, \begin{equation}\label{equationon(01)} \left\{ \begin{array}{r@{=} l c} u_t-\frac{1}{g}(gu_x)_x+\mu u\;&\;f(u)\qquad&\textrm{in}\quad (0, 1),\\ u_x(0)=u_x(1)\;&\;0. \end{array} \right. \end{equation} and, just as in the analysis above, this equation also has an attractor $\mathcal{A}_0\subset H^1(0,1)\cap L^\infty(0,1)$ satisfying the bound \begin{equation}\label{uniformbound0} \|u_0\|_{L^\infty(0,1)}\leq M.
\end{equation} Observe that the dynamical system generated by this equation has a gradient structure (see \cite{Hale}) and in particular its attractor is formed by equilibria and connections among them. Moreover, if all equilibria are hyperbolic then we have only a finite number of them and the system has a Morse-Smale structure (see \cite{Henry2}). Notice that in a natural way we may consider the attractor $\mathcal{A}_0$ as a subset of $H^1(Q_\varepsilon)$, just by considering that any function $u_0(x)$ defined in $(0,1)$ is extended to all of $Q_\varepsilon$ by $\tilde u_0(x,\mathbf{y})=u_0(x)$. \par We now introduce the main result of the paper. \begin{theorem}\label{maintheorem-thindomain} Under the notations above and assuming that all equilibria of problem \eqref{equationon(01)} are hyperbolic, we have \begin{equation}\label{distance-attractors} \hbox{dist}_{H^1(Q_\varepsilon)} (\mathcal{A}_0, \mathcal{A}_\varepsilon)\leq C \varepsilon^{\frac{d+1}{2}}|\log(\varepsilon)|, \end{equation} with $\hbox{dist}_X(\cdot,\cdot)$ the symmetric Hausdorff distance in the space $X$. \end{theorem} Recall that $\hbox{dist}_X(A,B)$ is defined as $$\hbox{dist}_X(A,B)=\max\{ \sup_{x\in A} \inf_{y\in B} d(x,y), \sup_{y\in B} \inf_{x\in A} d(x,y)\}.$$ \par Next, we present the notation and some conditions needed for the proof. As we have noted above, the attractors of both equations \eqref{equationonQepsilon} and \eqref{equationon(01)} have uniform $L^\infty$ bounds, as expressed in \eqref{uniformbound-eps} and \eqref{uniformbound0}. This fact will allow us to cut off the nonlinearity $f$ outside the interval $(-M,M)$ so that the new nonlinearity, which we will still denote by $f$, has compact support, coincides with the old one in $(-M,M)$, satisfies \begin{equation}\label{dissipativecondition2} |f(s)|+|f'(s)|+|f''(s)|\leq L_f \qquad\textrm{for all}\quad s\in\mathbb{R}, \end{equation} and still satisfies the dissipative condition \eqref{dissipativecondition1}.
Moreover, since the attractors for the old nonlinearity satisfy \eqref{uniformbound-eps} and \eqref{uniformbound0} and the new $f$ coincides with the old one in $(-M,M)$, the attractors for the new equations are exactly the same as the attractors for the original equations. This means that we may assume from the beginning that the nonlinearity $f$ satisfies \eqref{dissipativecondition2}. When dealing with problems where the domain varies it is sometimes convenient to make transformations, as simple as possible, so that we transform all problems to a fixed reference domain. This will imply in many instances that the parameter appears in the equation, and usually it will show up as a singular parameter. In our case, we will transform problem \eqref{equationonQepsilon} into a problem in the fixed set $Q=\{(x, \mathbf{y})\in\mathbb{R}^d: 0\leq x\leq1, \mathbf{y}\in\Gamma_x^1\}$ (Figure \ref{dominioQ}). The transformation we will use is $(x,\mathbf{y})\to (x,\frac{\mathbf{y}}{\varepsilon})$. With this transformation, the reaction-diffusion equation (\ref{equationonQepsilon}) is transformed into the following equation on the fixed domain $Q$, \begin{equation}\label{equationonQ} \left\{ \begin{array}{r@{=} l c} u_t-\frac{\partial^2u}{\partial x^2}-\frac{1}{\varepsilon^2}\Delta_{\mathbf{y}} u+\mu u\;&\;f(u)\qquad&\textrm{in}\quad Q,\\ \frac{\partial u}{\partial\nu_x}+\frac{1}{\varepsilon^2}\frac{\partial u}{\partial\nu_\mathbf{y}}\;&\;0\quad&\textrm{on}\quad \partial Q \end{array} \right. \end{equation} where $\nu=(\frac{\partial u}{\partial\nu_x},\frac{\partial u}{\partial\nu_\mathbf{y}}) $ is the unit outward normal to $\partial Q$.
The natural spaces to analyze (\ref{equationonQ}) are given by $$H^1_{\boldsymbol\varepsilon}(Q):=(H^1(Q), \|\cdot\|_{H^1_{\boldsymbol\varepsilon}(Q)}),$$ with the norm $$\|u\|_{H^1_{\bm\varepsilon}(Q)}:=\left(\int_Q(|\nabla_xu|^2+\frac{1}{\varepsilon^2}|\nabla_{\mathbf{y}}u|^2+|u|^2)dxd\mathbf{y}\right)^{1/2},$$ and $L^2(Q)$ with the usual norm $\|\cdot\|_{L^2(Q)}$. \par Notice that if we define the isomorphism $\mathbf{i}_{\bm\varepsilon}: L^2(Q_\varepsilon)\rightarrow L^2(Q)$ as $$\mathbf{i}_{\bm\varepsilon}(u)(x,\mathbf{y}):= u(x, \varepsilon \mathbf{y}),$$ its restriction to $H^1(Q_\varepsilon)$ is also an isomorphism from $H^1(Q_\varepsilon)$ to $H^1(Q)$ (or equivalently to $H^1_{\bm\varepsilon}(Q)$). Then we easily have the following identities: \begin{equation}\label{norm-relations1} \|\mathbf{i}_{\bm\varepsilon}(u)\|_{L^2(Q)}=\varepsilon^{-\frac{d-1}{2}}\|u\|_{L^2(Q_\varepsilon)} \end{equation} \begin{equation}\label{norm-relations2} \|\mathbf{i}_{\bm\varepsilon}(u)\|_{H^1_{\bm\varepsilon}(Q)}=\varepsilon^{-\frac{d-1}{2}}\|u\|_{H^1(Q_\varepsilon)} \end{equation} The isomorphism $\mathbf{i}_{\bm\varepsilon}$ also allows us to relate easily the semigroups generated by \eqref{equationonQepsilon} and \eqref{equationonQ} as follows: if $S_\varepsilon(t)$ is the semigroup generated by \eqref{equationonQepsilon} and $\tilde S_\varepsilon(t)$ the one from \eqref{equationonQ}, then we have $$S_\varepsilon(t)(\cdot):=\mathbf{i}^{-1}_{\bm\varepsilon}\circ \tilde{S}_\varepsilon(t)\circ \mathbf{i}_{\bm\varepsilon}(\cdot).$$ The limit problem of equation (\ref{equationonQ}) is also given by (\ref{equationon(01)}).
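\par Identities \eqref{norm-relations1} and \eqref{norm-relations2} follow directly from the change of variables $\mathbf{Y}=\varepsilon\mathbf{y}$, whose Jacobian in the $d-1$ cross-sectional variables is $\varepsilon^{d-1}$. For instance, $$\|\mathbf{i}_{\bm\varepsilon}(u)\|^2_{L^2(Q)}=\int_Q|u(x,\varepsilon\mathbf{y})|^2\,dx\,d\mathbf{y}=\varepsilon^{-(d-1)}\int_{Q_\varepsilon}|u(x,\mathbf{Y})|^2\,dx\,d\mathbf{Y}=\varepsilon^{-(d-1)}\|u\|^2_{L^2(Q_\varepsilon)},$$ which is \eqref{norm-relations1}. For \eqref{norm-relations2} one argues in the same way, noting that the weight $\frac{1}{\varepsilon^2}$ in the $H^1_{\bm\varepsilon}(Q)$-norm exactly compensates the factor $\varepsilon^2$ that the cross-sectional gradient gains under the change of variables.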
The natural spaces to treat the limit problem are the following $$L^2_g(0,1):=(L^2(0,1), \|\cdot\|_{L^2_g(0,1)}) \qquad\textrm{with}\quad \|u\|_{L^2_g(0,1)}:=\left(\int_0^1g(x)|u(x)|^2dx\right)^{\frac{1}{2}},$$ and $$H^1_g(0,1):=(H^1(0,1), \|\cdot\|_{H^1_g(0,1)}) \qquad\textrm{with}\quad \|u\|_{H^1_g(0,1)}:= \left(\int_0^1g(x)(|u_x|^2+|u|^2)dx\right)^{\frac{1}{2}}.$$ Throughout this paper we will denote by $|\cdot|$ the norm in $\mathbb{R}^d$. Both evolution problems (\ref{equationonQ}) and (\ref{equationon(01)}) admit an abstract formulation that we are going to overview here. Let $A_\varepsilon:D(A_\varepsilon)\subset L^2(Q)\to L^2(Q),$ with $$A_\varepsilon=-\frac{\partial^2}{\partial x^2}-\frac{1}{\varepsilon^2}\Delta_{\mathbf{y}}+\mu I,\qquad\textrm{and}\qquad D(A_\varepsilon)=\{u\in H^2(Q): \frac{\partial u}{\partial \nu}=0\, \textrm{ on }\, \partial Q\}$$ and $A_0:D(A_0)\subset L^2_g(0,1)\to L^2_g(0,1),$ with $$A_0v=-\frac{1}{g}(gv_x)_x+\mu v\qquad\textrm{ and}\qquad D(A_0)=\{ v\in H_g^2(0,1): v'(0)=v'(1)=0\}.$$ Both operators are selfadjoint, positive linear operators with compact resolvent and they are defined on separable Hilbert spaces. We denote by $X_\varepsilon=L^2(Q)$, $0<\varepsilon\leq \varepsilon_0$, with the usual norm and $X_0=L^2_g(0,1)$, with its norm defined above. We also denote by $X_\varepsilon^1=D(A_\varepsilon)$ with the graph norm and similarly for $X_0^1$. We also consider the fractional power spaces $X_\varepsilon^\alpha$ for $0<\alpha<1$ and $0\leq \varepsilon<\varepsilon_0$, see \cite{Henry1}. In particular, the space $X_\varepsilon^{1/2}$ is $H^1_{\bm\varepsilon}(Q)$ and $X_0^{1/2}$ is $H^1_g(0,1)$, defined above.
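\par Observe that the weight $g$ in these spaces is precisely what makes $A_0$ symmetric: if $v, w$ are smooth functions satisfying the Neumann boundary conditions, then, integrating by parts, $$\int_0^1 g\,(A_0v)\,w\,dx=-\int_0^1(gv_x)_x\,w\,dx+\mu\int_0^1 g\,v\,w\,dx=\int_0^1 g\,v_xw_x\,dx+\mu\int_0^1 g\,v\,w\,dx,$$ an expression which is symmetric in $v$ and $w$. Taking $w=v$, the right-hand side is bounded below by $\mu\|v\|^2_{L^2_g(0,1)}$, which also shows the positivity of $A_0$ as long as $\mu>0$.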
Hence, (\ref{equationonQ}) and (\ref{equationon(01)}) can be written as \begin{equation}\label{problemaperturbado-thindomain} (P_\varepsilon)\left\{ \begin{array}{r l } u^\varepsilon_t+A_\varepsilon u^\varepsilon&=F_\varepsilon(u^\varepsilon),\qquad 0<\varepsilon\leq \varepsilon_0\\ u^\varepsilon(0)\in X^\alpha_\varepsilon, \end{array} \right. \end{equation} and \begin{equation}\label{problemalimite-thindomain} (P_0)\left\{ \begin{array}{r l } u^0_t+A_0u^0&=F_0(u^0),\\ u^0(0)\in X^\alpha_0, \end{array} \right. \end{equation} where $F_\varepsilon$ and $F_0$ are the nonlinearity $f$ acting in the appropriate fractional power spaces, which will be analyzed in detail in Section \ref{nonlinear}. We define an extension operator which maps functions defined in $[0, 1]$ into functions defined in $Q$. The natural way to construct this operator is to extend the functions defined in $[0,1]$ constantly in the other $d-1$ variables. Therefore we denote by $E$ the transformation, \begin{equation}\label{operadorextension} \left. \begin{array}{r c l} E: L^2_g(0,1)\;&\;\longrightarrow\;&\;L^2(Q)\\ u\;&\;\mapsto\;&\; E(u) (x,\mathbf{y})=u(x) \end{array} \right. \end{equation} In a similar fashion we may define the transformation $E_\varepsilon: L^2_g(0,1)\rightarrow L^2(Q_\varepsilon)$, given by $(E_\varepsilon u)(x, \mathbf{y})=u(x)$. The difference with $E$ is that $E_\varepsilon$ lands in $L^2(Q_\varepsilon)$. As a matter of fact, $E_\varepsilon= \mathbf{i}_{\bm\varepsilon}^{-1}\circ E$. These transformations can also be considered restricted to $H^1_g(0,1)$. In this case, we have $E:H_g^1(0,1)\longrightarrow H_{\bm\varepsilon}^1(Q)$ and $E_\varepsilon: H^1_g(0, 1)\rightarrow H^1(Q_\varepsilon)$. These transformations can also be considered as $E:X_0^\alpha\longrightarrow X_\varepsilon^\alpha$.
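\par Note that $E$ is actually an isometry at the $L^2$ level: since $Eu$ does not depend on $\mathbf{y}$ and $g(x)=|\Gamma_x^1|$, $$\|Eu\|^2_{L^2(Q)}=\int_0^1\int_{\Gamma_x^1}|u(x)|^2\,d\mathbf{y}\,dx=\int_0^1|\Gamma_x^1|\,|u(x)|^2\,dx=\|u\|^2_{L^2_g(0,1)},$$ and, similarly, $\|E_\varepsilon u\|^2_{L^2(Q_\varepsilon)}=\varepsilon^{d-1}\|u\|^2_{L^2_g(0,1)}$, since $|\Gamma_x^\varepsilon|=\varepsilon^{d-1}|\Gamma_x^1|$.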
To compare functions from $L^2(Q)$ and $L^2_g(0,1)$ (and from $X_\varepsilon^\alpha$ and $X_0^\alpha$, respectively) we also need a projection operator $M$, defined as follows, \begin{equation}\label{M-definition} \left. \begin{array}{r c l} M: L^2(Q)\;&\;\longrightarrow\;&\;L^2_g(0, 1)\\ u\;&\;\mapsto\;&\; \displaystyle M(u) (x)=\frac{1}{|\Gamma_x^1|}\int_{\Gamma_x^1}u(x, \mathbf{y})d\mathbf{y}, \end{array} \right. \end{equation} Similarly, we may define the map, \begin{equation}\label{M-eps-definition} \left. \begin{array}{r c l} M_\varepsilon: L^2(Q_\varepsilon)\;&\;\longrightarrow\;&\;L^2_g(0, 1)\\ u\;&\;\mapsto\;&\; \displaystyle M_\varepsilon(u) (x)=\frac{1}{|\Gamma_x^\varepsilon|}\int_{\Gamma_x^\varepsilon}u(x, \mathbf{y})d\mathbf{y}, \end{array} \right. \end{equation} and, in the same way, for $0<\alpha<\frac{1}{2}$, $M:X_\varepsilon^\alpha\longrightarrow X_0^\alpha$. Moreover $M: H^1_{\bm\varepsilon}(Q)\longrightarrow H^1_g(0,1)$ and $M_\varepsilon: H^1(Q_\varepsilon)\longrightarrow H^1_g(0,1)$. The following estimates are straightforward: $$\|M\|_{\mathcal{L}(L^2(Q), L^2_g(0,1))}\leq1, \quad \|M\|_{\mathcal{L}(H^1_{\bm\varepsilon}(Q), H^1_g(0,1))}\leq1, \quad \|M_\varepsilon\|_{\mathcal{L}(L^2(Q_\varepsilon), L^2_g(0,1))}\leq \varepsilon^{\frac{1-d}{2}},$$ $$\|E u\|_{L^2(Q)}=\|u\|_{L^2_g(0, 1)},\quad\|E_\varepsilon u\|_{L^2(Q_\varepsilon)}=\varepsilon^{\frac{d-1}{2}}\|u\|_{L^2_g(0, 1)} \quad \forall u\in L^2_g(0,1),$$ $$\|E u\|_{H^1_{\bm\varepsilon}(Q)}=\|u\|_{H^1_g(0, 1)},\quad\forall u\in H^1_g(0,1).$$ \begin{re}\label{EMalpha} i) From \cite[Theorem 16.1, pg 528]{Yagi2010} we get that the fractional power spaces of the operators and the spaces obtained via interpolation coincide, and they are even isometric. This means that $X_0^\alpha=[L^2_g(0, 1), H^1_g(0, 1)]_{2\alpha}$ and $X_\varepsilon^\alpha=[L^2(Q), H^1_\varepsilon(Q)]_{2\alpha}$ with isometry.
This implies that we also have \begin{equation}\label{Malpha} \|Mu\|_{X_0^\alpha}\leq \|u\|_{X_\varepsilon^\alpha}. \end{equation} For the operator $E:X_0^\alpha\longrightarrow X_\varepsilon^\alpha$ we also obtain, \begin{equation}\label{Ealpha} \|Eu\|_{X_\varepsilon^\alpha}\leq \|u\|_{X_0^\alpha}, \qquad\forall u\in X_0^\alpha, \end{equation} applying exactly the same arguments. Note that \eqref{Malpha} and \eqref{Ealpha} show that estimates \eqref{cotaextensionproyeccion} are satisfied with $\kappa=1$. \par ii) Moreover, via interpolation we easily get that, if $\alpha<1/2$, then $[L^2(Q), H^1_\varepsilon(Q)]_{2\alpha}\hookrightarrow[L^2(Q), H^1(Q)]_{2\alpha}\hookrightarrow L^{\frac{2d}{d-4\alpha}}(Q)$ with an embedding constant independent of $\varepsilon$. Hence, we also have that the embedding constant of $X_\varepsilon^\alpha\hookrightarrow L^{\frac{2d}{d-4\alpha}}(Q)$ is independent of $\varepsilon$. \end{re} We include now a technical result on the operator $E$ that will be used later. \begin{lem}\label{normaproyeccionextension} We have the following. \par\noindent i) There exists a constant $\beta>0$ such that $$\|u_\varepsilon- E M u_\varepsilon\|^2_{L^2(Q)}\leq \beta \|\nabla_{\mathbf{y}}u_\varepsilon\|^2_{L^2(Q)}, \quad \forall u_\varepsilon\in H^1(Q),$$ $$\|w_\varepsilon-E_\varepsilon M_\varepsilon w_\varepsilon\|^2_{L^2(Q_\varepsilon)}\leq \beta\varepsilon^2\|\nabla_{\mathbf{y}}w_\varepsilon\|^2_{L^2(Q_\varepsilon)}, \quad \forall w_\varepsilon\in H^1(Q_\varepsilon).$$ \par\noindent ii) Let $K\subset X_0^\alpha$ be a compact set.
Then, $$\sup_{u_0\in K}\left|\|Eu_0\|_{X_\varepsilon^\alpha}-\|u_0\|_{X_0^\alpha}\right|\rightarrow0,\qquad \textrm{as}\;\; \varepsilon\to 0.$$ \end{lem} \paragraph{ Proof.} \par\noindent i) Observe that, $$\|u_\varepsilon- E M u_\varepsilon\|_{L^2(Q)}^2=\int_0^1\int_{\Gamma_x^1}|u_\varepsilon(x,\mathbf{y})-(Mu_\varepsilon)(x)|^2d\mathbf{y}dx\leq\int_0^1\frac{1}{\lambda_2(\Gamma_x^1)}\int_{\Gamma_x^1}|\nabla_{\mathbf{y}}u_\varepsilon(x, \mathbf{y})|^2d\mathbf{y}dx,$$ where we are using the Poincaré inequality in $\Gamma_x^1$ ($\lambda_2(\Gamma_x^1)$ is the second Neumann eigenvalue in $\Gamma_x^1$). Let us see that there exists $\hat{\lambda}_2>0$ such that, $$\lambda_2(\Gamma_x^1)\geq\hat{\lambda}_2>0,\qquad\forall x\in [0,1].$$ If this is not the case, then there exists a sequence $x_n\rightarrow x_0\in [0,1]$ such that $\lambda_2(\Gamma_{x_n}^1)\rightarrow 0$ as $n\rightarrow\infty$. But $\Gamma_{x_n}^1$, for $n$ large enough, is $C^1$-close to $\Gamma_{x_0}^1$ and therefore, by the continuity of the Neumann eigenvalues under $C^1$-perturbations, see \cite{Arrieta&Carvalho}, we have that $\lambda_2(\Gamma^1_{x_0})=0$. But this means that $\Gamma_{x_0}^1$ is not a connected domain, which contradicts the fact that $\Gamma_{x_0}^1$ is diffeomorphic to the unit ball $B(0,1)$. Hence, we obtain the first inequality with $\beta=\frac{1}{\hat{\lambda}_2}$. For the inequality in the domain $Q_\varepsilon$, use the estimate in $Q$ and the appropriate change of variables in the integrals. \par \noindent ii) Since $K\subset X_0^\alpha$ is a compact set, for $\eta>0$ there exist $u_0^1, ..., u_0^{k(\eta)}\in K$ such that $K\subset\bigcup_{i=1}^{k(\eta)} B(u_0^i, \eta). $ Then, for each $u_0\in K$, there exists $i\in\{1, 2, ..., k(\eta)\}$ such that $\|u_0^i-u_0\|_{X_0^\alpha}\leq\eta$.
Moreover, by the continuity of eigenvalues, see \cite[Section 3]{Arrieta-Santamaria-C0}, we have for each $i\in\{1,2, ..., k(\eta)\}$ \begin{equation}\label{convergence-i} \|Eu_0^i\|_{X_\varepsilon^\alpha}\rightarrow \|u_0^i\|_{X_0^\alpha}. \end{equation} Write $Eu_0=E(u_0-u_0^i)+ Eu_0^i$. Then, from \eqref{Ealpha} we know that $\|E(u_0-u_0^i)\|_{X_\varepsilon^\alpha}\leq \eta$. This implies $$\left|\|Eu_0\|_{X_\varepsilon^\alpha}-\|Eu_0^i\|_{X_\varepsilon^\alpha}\right|\leq \eta.$$ From \eqref{convergence-i}, we know that there exists an $\varepsilon(\eta)$ such that, for $0<\varepsilon\leq\varepsilon(\eta)$, $$\left|\|Eu_0^i\|_{X_\varepsilon^\alpha}-\|u_0^i\|_{X_0^\alpha}\right|\leq \eta, $$ and, $$\left|\|Eu_0\|_{X_\varepsilon^\alpha}\mathord-\|u_0^i\|_{X_0^\alpha}\right|=\left|\|Eu_0\|_{X_\varepsilon^\alpha}\mathord-\|Eu_0^i\|_{X_\varepsilon^\alpha}\mathord+\|Eu_0^i\|_{X_\varepsilon^\alpha}\mathord-\|u_0^i\|_{X_0^\alpha}\right|\leq2\eta,$$ for $0<\varepsilon\leq\varepsilon(\eta)$. So, $$\left|\|Eu_0\|_{X_\varepsilon^\alpha}-\|u_0\|_{X_0^\alpha}\right|=$$ $$\left|\|Eu_0\|_{X_\varepsilon^\alpha}-\|Eu_0^i\|_{X_\varepsilon^\alpha}\mathord+\|Eu_0^i\|_{X_\varepsilon^\alpha}\mathord-\|u_0^i\|_{X_0^\alpha}\mathord+\|u_0^i\|_{X_0^\alpha}\mathord-\|u_0\|_{X_0^\alpha}\right|\leq 3\eta,$$ for $0<\varepsilon\leq\varepsilon(\eta)$. That is, for any compact set $K\subset X_0^\alpha$, $$\sup_{u_0\in K}\left|\|Eu_0\|_{X_\varepsilon^\alpha}-\|u_0\|_{X_0^\alpha}\right|\rightarrow 0,\qquad\textrm{as}\quad\varepsilon\rightarrow 0.$$ This concludes the proof. \begin{flushright}$\blacksquare$\end{flushright} \section{Some previous results on convergence of Inertial Manifolds} \label{previous} In this section we are going to recall the results obtained in \cite{Arrieta-Santamaria-C0,Arrieta-Santamaria-C1}, where we were able to analyze the convergence of inertial manifolds for abstract evolutionary equations under certain conditions.
We were also able to obtain estimates on the distance of these inertial manifolds in the $C^0$ topology, see \cite{Arrieta-Santamaria-C0}, and in the $C^{1,\theta}$ topology, see \cite{Arrieta-Santamaria-C1}. We refer to these two papers for details. Hence, consider the family of abstract problems (like \eqref{problemaperturbado-thindomain}, \eqref{problemalimite-thindomain}) \begin{equation}\label{problemalimite} (P_0)\left\{ \begin{array}{r l } u^0_t+A_0u^0&=F_0^\varepsilon(u^0),\\ u^0(0)\in X^\alpha_0, \end{array} \right. \end{equation} and \begin{equation}\label{problemaperturbado} (P_\varepsilon)\left\{ \begin{array}{r l } u^\varepsilon_t+A_\varepsilon u^\varepsilon&=F_\varepsilon(u^\varepsilon),\qquad 0<\varepsilon\leq \varepsilon_0\\ u^\varepsilon(0)\in X^\alpha_\varepsilon, \end{array} \right. \end{equation} where we assume that $A_\varepsilon$ is a self-adjoint, positive linear operator with compact resolvent on a separable real Hilbert space $X_\varepsilon$, that is $A_\varepsilon: D(A_\varepsilon)=X^1_\varepsilon\subset X_\varepsilon\rightarrow X_\varepsilon,$ and $F_\varepsilon:X_\varepsilon^\alpha\to X_\varepsilon$, $F_0^\varepsilon:X_0^\alpha\to X_0$ are nonlinearities guaranteeing global existence of solutions of \eqref{problemaperturbado}, for each $0\leq \varepsilon\leq \varepsilon_0$ and for some $0\leq \alpha<1$. Observe that for problem \eqref{problemalimite} we even allow the nonlinearity to depend on $\varepsilon$.
We also assume the existence of linear continuous operators, $E$ and $M$, such that, $E: X_0\rightarrow X_\varepsilon$, $M: X_\varepsilon\rightarrow X_0$ and $E_{\mid_{X^\alpha_0}}: X_0^\alpha\rightarrow X_\varepsilon^\alpha$ and $M_{\mid_{X_\varepsilon^\alpha}}: X_\varepsilon^\alpha\rightarrow X_0^\alpha$, satisfying, \begin{equation}\label{cotaextensionproyeccion} \|E\|_{\mathcal{L}(X_0, X_\varepsilon)}, \|M\|_{\mathcal{L}(X_\varepsilon, X_0)}\leq \kappa ,\qquad \|E\|_{\mathcal{L}(X^\alpha_0, X^\alpha_\varepsilon)}, \|M\|_{\mathcal{L}(X^\alpha_\varepsilon, X^\alpha_0)}\leq \kappa, \end{equation} for some constant $\kappa\geq 1$. We also assume these operators satisfy the following properties, \begin{equation}\label{propiedadesextensionproyeccion} M\circ E= I,\qquad \|Eu_0\|_{X_\varepsilon}\rightarrow \|u_0\|_{X_0}\quad\textrm{for}\quad u_0\in X_0. \end{equation} With respect to the relation between the operators $A_0$ and $A_\varepsilon$, and following \cite{Arrieta-Santamaria-C0,Arrieta-Santamaria-C1}, we will assume the following hypothesis {\sl \paragraph{\textbf{(H1).}} With $\alpha$ the exponent from problems (\ref{problemaperturbado}), we have \begin{equation}\label{H1equation} \|A_\varepsilon^{-1}- EA_0^{-1}M\|_{\mathcal{L}(X_\varepsilon, X_\varepsilon^\alpha)}\to 0\quad \hbox{ as } \varepsilon\to 0. \end{equation} } \par Let us define $\tau(\varepsilon)$ as an increasing function of $\varepsilon$ such that \begin{equation}\label{definition-tau} \|A_\varepsilon^{-1}E- EA_0^{-1}\|_{\mathcal{L}(X_0, X_\varepsilon^\alpha)}\leq \tau(\varepsilon).
\end{equation} \par We also recall hypothesis {\bf(H2)} from \cite{Arrieta-Santamaria-C0}, regarding the nonlinearities $F_0$ and $F_\varepsilon$, \par {\sl \paragraph{\textbf{(H2).}} We assume that the nonlinear terms $F_\varepsilon: X^\alpha_\varepsilon\rightarrow X_\varepsilon$ and $F_0^\varepsilon: X^\alpha_0\rightarrow X_0$ for $0< \varepsilon\leq \varepsilon_0$, satisfy: \begin{enumerate} \item[(a)] They are uniformly bounded, that is, there exists a constant $C_F>0$ independent of $\varepsilon$ such that, $$\|F_\varepsilon\|_{L^\infty(X_\varepsilon^\alpha, X_\varepsilon)}\leq C_F, \quad \|F_0^\varepsilon\|_{L^\infty(X_0^\alpha, X_0)}\leq C_F$$ \item[(b)] They are globally Lipschitz on $X^\alpha_\varepsilon$ with a uniform Lipschitz constant $L_F$, that is, \begin{equation}\label{LipschitzFepsilon} \|F_\varepsilon(u)- F_\varepsilon(v)\|_{X_\varepsilon}\leq L_F\|u-v\|_{X_\varepsilon^\alpha} \end{equation} \begin{equation}\label{LipschitzF0} \|F_0^\varepsilon(u)- F_0^\varepsilon(v)\|_{X_0}\leq L_F\|u-v\|_{X_0^\alpha}. \end{equation} \item[(c)] They have a uniformly bounded support for $0<\varepsilon\leq \varepsilon_0$: there exists $R>0$ such that $$Supp F_\varepsilon\subset D_{R}=\{u_\varepsilon\in X_\varepsilon^\alpha: \|u_\varepsilon\|_{X_\varepsilon^\alpha}\leq R\}$$ $$Supp F_0^\varepsilon\subset D_{R}=\{u_0\in X_0^\alpha: \|u_0\|_{X_0^\alpha}\leq R\}.$$ \item[(d)] $F_\varepsilon$ is near $F_0^\varepsilon$ in the following sense, \begin{equation}\label{estimacionefes} \sup_{u_0\in X^\alpha_0}\|F_\varepsilon (Eu_0)-EF_0^\varepsilon (u_0)\|_{X_\varepsilon}=\rho(\varepsilon), \end{equation} and $\rho(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0$. \end{enumerate} } The operators $A_\varepsilon$, $0\leq \varepsilon\leq \varepsilon_0$, are selfadjoint and have compact resolvent. This implies that their spectrum is real and discrete, and consists only of eigenvalues, each one with finite multiplicity.
Moreover, the fact that $A_\varepsilon$, $0\leq \varepsilon\leq \varepsilon_0$, is positive implies that its spectrum is positive. So, denoting by $\sigma(A_\varepsilon)$ the spectrum of the operator $A_\varepsilon$, we have $$\sigma(A_\varepsilon)=\{\lambda_n^\varepsilon\}_{n=1}^\infty,\qquad\textrm{ and}\quad 0<c\leq\lambda_1^\varepsilon\leq\lambda_2^\varepsilon\leq...\leq\lambda_n^\varepsilon\leq...$$ We also denote by $\{\varphi_i^\varepsilon\}_{i=1}^\infty$ an associated orthonormal family of eigenfunctions, by $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}$ the canonical orthogonal projection onto the eigenfunctions, $\{\varphi^\varepsilon_i\}_{i=1}^m$, corresponding to the first $m$ eigenvalues of the operator $A_\varepsilon $, $0\leq\varepsilon\leq\varepsilon_0$, and by $\mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}}$ the projection onto its orthogonal complement, see \cite{Arrieta-Santamaria-C0}. \par If we assume that {\bf (H1)} holds, then we obtain that the eigenvalues and eigenfunctions of the operator $A_\varepsilon$ converge to the eigenvalues and eigenfunctions of $A_0$. As a matter of fact, we get that $$\lambda_i^\varepsilon\buildrel \varepsilon\to 0\over\longrightarrow \lambda_i^0, \hbox{ for each } i\in \N$$ and \begin{equation}\label{convergence-of-projection} \|\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E(v)-E\mathbf{P}_{\mathbf{m}}^{0}(v)\|\leq C\tau(\varepsilon)\|v\|_{X_0} \end{equation} (see \cite[Lemma 3.7]{Arrieta-Santamaria-C0}). This last estimate easily implies that the set $$\{\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(E\varphi^0_1), \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(E\varphi^0_2), ...,\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(E\varphi^0_m)\},\qquad\textrm{for}\quad 0\leq\varepsilon\leq\varepsilon_0,$$ constitutes a basis of $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon)=[\varphi_1^\varepsilon, ..., \varphi_m^\varepsilon]$, that is, the space generated by the first $m$ eigenfunctions.
Let us denote by $j_\varepsilon$ the isomorphism from $ \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon)=[\varphi_1^\varepsilon, ..., \varphi_m^\varepsilon]$ onto $\mathbb{R}^m$ that gives us the coordinates of each vector. That is, \begin{equation}\label{definition-jeps} \begin{array}{rl} j_\varepsilon:\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon)&\longrightarrow \mathbb{R}^m, \\ w_\varepsilon&\longmapsto\bar{p}, \end{array} \end{equation} where $w_\varepsilon=\sum^m_{i=1} p_i\varphi^\varepsilon_i$ and $\bar{p}=(p_1, ..., p_m)$. We denote by $|\cdot|$ the usual Euclidean norm in $\mathbb{R}^m$, that is $|\bar{p}|=\left(\sum_{i=1}^mp_i^2\right)^{\frac{1}{2}}$, and by $|\cdot|_{\varepsilon,\alpha}$ the following weighted one, \begin{equation}\label{normaalpha} |\bar{p}|_{\varepsilon,\alpha}=\left(\sum_{i=1}^mp_i^2(\lambda_i^\varepsilon)^{2\alpha}\right)^{\frac{1}{2}}. \end{equation} We consider the spaces $(\mathbb{R}^m, |\cdot|)$ and $(\mathbb{R}^m, |\cdot|_{\varepsilon,\alpha})$, that is, $\mathbb{R}^m$ with the norm $|\cdot|$ and $|\cdot|_{\varepsilon,\alpha}$, respectively, and notice that for $w_0=\sum^m_{i=1} p_i\varphi^0_i$ and $0\leq\alpha<1$ we have that, \begin{equation}\label{normajepsilon} \|w_0\|_{X^\alpha_0}=|j_0(w_0)|_{0,\alpha}. \end{equation} \par We are looking for inertial manifolds for systems \eqref{problemaperturbado} and \eqref{problemalimite} which will be obtained as graphs of appropriate functions. This motivates the introduction of the sets $\mathcal{F}_\varepsilon(L,\rho)$ defined as $$\mathcal{F}_\varepsilon(L,\rho)=\{ \Phi :\mathbb{R}^m\rightarrow\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}(X^\alpha_\varepsilon),\quad\textrm{such that}\quad \textrm{supp } \Phi\subset B_R\quad \textrm{and}\quad$$ $$\quad \|\Phi(\bar{p}^1)-\Phi(\bar{p}^2)\|_{X^\alpha_\varepsilon}\leq L|\bar{p}^1-\bar{p}^2|_{\varepsilon,\alpha} \quad\forall\, \bar{p}^1,\bar{p}^2\in\mathbb{R}^m \}.$$ Then we can show the following result.
\begin{prop} (\cite{Arrieta-Santamaria-C0})\label{existenciavariedadinercial} Let hypotheses {\bf (H1)} and {\bf (H2)} be satisfied. Assume also that $m\geq 1$ is such that, \begin{equation}\label{CondicionAutovaloresFuerte0} \lambda_{m+1}^0-\lambda_m^0\geq 3(\kappa+2)L_F\left[(\lambda_m^0)^\alpha+(\lambda_{m+1}^0)^\alpha\right], \end{equation} and \begin{equation}\label{autovalorgrande0} (\lambda_m^0)^{1-\alpha}\geq 6(\kappa +2)L_F(1-\alpha)^{-1}. \end{equation} Then, there exist $L<1$ and $\varepsilon_0>0$ such that for all $0<\varepsilon\leq\varepsilon_0$ there exist inertial manifolds $\mathcal{M}_\varepsilon$ and $\mathcal{M}_0^\varepsilon$ for (\ref{problemaperturbado}) and (\ref{problemalimite}) respectively, given by the ``graph'' of a function $\Phi_\varepsilon\in\mathcal{F}_\varepsilon(L,\rho)$ and $\Phi_0^\varepsilon\in\mathcal{F}_0(L,\rho)$. \end{prop} \begin{re} We have written quotation marks around the word ``graph'' since the manifolds $\mathcal{M}_\varepsilon$, $\mathcal{M}_0^\varepsilon$ are not properly speaking the graph of the functions $\Phi_\varepsilon$, $\Phi_0^\varepsilon$ but rather the graph of the appropriate function obtained via the isomorphism $j_\varepsilon$ which identifies $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon^\alpha)$ with $\R^m$. That is, $\mathcal{M}_\varepsilon=\{ j_\varepsilon^{-1}(\bar p)+\Phi_\varepsilon(\bar p); \quad \bar p\in \R^m\}$ and $\mathcal{M}_0^\varepsilon=\{ j_0^{-1}(\bar p)+\Phi_0^\varepsilon(\bar p); \quad \bar p\in \R^m\}$. \end{re} The main result from \cite{Arrieta-Santamaria-C0} was the following: \begin{teo} (\cite{Arrieta-Santamaria-C0}) \label{distaciavariedadesinerciales} Let hypotheses {\bf (H1)} and {\bf (H2)} be satisfied and let $\tau(\varepsilon)$ be defined by \eqref{definition-tau}.
Then, under the hypothesis of Proposition \ref{existenciavariedadinercial}, if $\Phi_\varepsilon$ and $\Phi_0^\varepsilon$ are the maps that give us the inertial manifolds for $0<\varepsilon\leq \varepsilon_0$, then we have, \begin{equation}\label{distance-inertialmanifolds} \|\Phi_\varepsilon-E\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m, X^\alpha_\varepsilon)}\leq C[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)], \end{equation} with $C$ a constant independent of $\varepsilon$. \end{teo} \par To obtain stronger convergence results on the inertial manifolds, we will need to require stronger conditions on the nonlinearities. These conditions are stated in the following hypothesis, \par {\sl \paragraph{\textbf{(H2').}} We assume that the nonlinear terms $F_\varepsilon$ and $F_0^\varepsilon$ satisfy hypothesis {\bf(H2)} and they are uniformly $C^{1,\theta_F}$ functions from $ X_\varepsilon^\alpha$ to $X_\varepsilon$, and $X_0^\alpha$ to $X_0$ respectively, for some $0<\theta_F\leq 1$. That is, $F_\varepsilon\in C^1(X_\varepsilon^\alpha, X_\varepsilon)$, $F_0^\varepsilon\in C^1(X_0^\alpha, X_0)$ and there exists $L>0$, independent of $\varepsilon$, such that $$\|DF_\varepsilon(u)-DF_\varepsilon(u')\|_{\mathcal{L}(X_\varepsilon^\alpha, X_\varepsilon)}\leq L\|u-u'\|^{\theta_F}_{X_\varepsilon^\alpha},\qquad \forall u, u'\in X_\varepsilon^\alpha.$$ $$\|DF_0^\varepsilon(u)-DF_0^\varepsilon(u')\|_{\mathcal{L}(X_0^\alpha, X_0)}\leq L\|u-u'\|^{\theta_F}_{X_0^\alpha},\qquad \forall u, u'\in X_0^\alpha.$$ } We can show the following. \begin{prop}\label{FixedPoint-E^1Theta} (\cite{Arrieta-Santamaria-C1}) Assume hypotheses {\bf(H1)} and {\bf(H2')} are satisfied and that the gap conditions \eqref{CondicionAutovaloresFuerte0}, \eqref{autovalorgrande0} hold.
Then, for any $\theta>0$ such that $\theta\leq\theta_F$ and $\theta<\theta_0$ (for certain $\theta_0>0$, see details in \cite{Arrieta-Santamaria-C1}) the functions $\Phi_\varepsilon$ and $\Phi_0^\varepsilon$, $0< \varepsilon\leq\varepsilon_0$, obtained above, which give the inertial manifolds, are $C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)$ and $C^{1, \theta}(\mathbb{R}^m, X_0^\alpha)$, respectively. Moreover, the $C^{1,\theta}$ norm is bounded uniformly in $\varepsilon$, for $\varepsilon>0$ small. \end{prop} The main result from \cite{Arrieta-Santamaria-C1} is the following: \par \begin{teo}\label{convergence-C^1-theo} (\cite{Arrieta-Santamaria-C1}) Let hypotheses {\bf (H1)}, {\bf (H2')} and gap conditions \eqref{CondicionAutovaloresFuerte0}, \eqref{autovalorgrande0} be satisfied, so that we have inertial manifolds $\mathcal{M}_\varepsilon$, $\mathcal{M}_0^\varepsilon$ given as the graphs of the functions $\Phi_\varepsilon$, $\Phi_0^\varepsilon$ for $0<\varepsilon\leq\varepsilon_0$. If we denote by \begin{equation}\label{convergenceDF} \beta(\varepsilon)=\sup_{u\in \mathcal{M}^\varepsilon_0}\|DF_\varepsilon \big(Eu \big)E-EDF_0^\varepsilon\big(u\big)\|_{\mathcal{L}(X_0^\alpha, X_\varepsilon)}, \end{equation} then, there exists $\theta^*$ with $0<\theta^*<\theta_F$ such that for all $0<\theta<\theta^*$, we obtain the following estimate \begin{equation}\label{distance-C^1-inertialmanifolds} \|\Phi_\varepsilon-E\Phi_0^\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq \mathbf{C} \left(\left[\beta(\varepsilon)+\Big(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)\Big)^{\theta^*}\right]\right)^{1-\frac{\theta}{\theta^*}}, \end{equation} where $\tau(\varepsilon)$, $\rho(\varepsilon)$ are given by (\ref{definition-tau}), (\ref{estimacionefes}), respectively, and $\mathbf{C}$ is a constant independent of $\varepsilon$.
\end{teo} \section{Estimates of the elliptic part}\label{elliptic} As we mentioned in the introduction, a very important ingredient in comparing the dynamics of both problems is the convergence of the resolvent operators associated to the linear elliptic problems. In this section we obtain the rate of this convergence. We consider the elliptic problems, \begin{equation}\label{linearQ} \left\{ \begin{array}{c l r} -\frac{\partial^2u_\varepsilon}{\partial x^2}-\frac{1}{\varepsilon^2}\Delta_{\mathbf{y}}u_\varepsilon+\mu u_\varepsilon\;&\; = f_\varepsilon, \;&\;\textrm{in}\quad Q\\ \frac{\partial u_\varepsilon}{\partial\nu_x}+\frac{1}{\varepsilon^2}\frac{\partial u_\varepsilon}{\partial\nu_\mathbf{y}}\;&\;=0\quad&\textrm{on}\quad \partial Q, \end{array} \right. \end{equation} and \begin{equation}\label{linear01} \left\{ \begin{array}{r l r} -\frac{1}{g}(g {v_\varepsilon}_x)_x + \mu v_\varepsilon\;&\; = h_\varepsilon, \;&\;\textrm{in}\quad (0, 1)\\ v_{\varepsilon_x}(0)=v_{\varepsilon_x}(1)\;&\; =0,\;&\; \end{array} \right. \end{equation} with $f_\varepsilon\in L^2(Q)$, $u_\varepsilon\in H^1_{\bm\varepsilon}(Q)$ and $h_\varepsilon\in L^2_g(0,1)$, $v_\varepsilon\in H_g^1(0,1)$. Notice that the existence and uniqueness of solutions of the problems above is guaranteed by the Lax-Milgram theorem. \par We can prove the following key result. \begin{prop}\label{resolvente} Let $f_\varepsilon\in L^2(Q)$ and let $h_\varepsilon=M f_\varepsilon$. We define the functions $u_\varepsilon\in H^1_{\bm\varepsilon}(Q)$ and $v_\varepsilon\in H^1_g(0,1)$ as the solutions of the linear problems (\ref{linearQ}) and (\ref{linear01}), respectively. Then, there exists a constant $C>0$, independent of $\varepsilon$ and $f_\varepsilon$, such that, $$\|u_\varepsilon-Ev_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}\leq C\varepsilon\|f_\varepsilon\|_{L^2(Q)}.$$ \end{prop} \paragraph{\sl Proof.} Since the proof of this result is technical, we postpone it to Appendix \ref{proof-resolvente}.
\begin{re}\label{estimaciondebilresolvente} Note that if we consider problems \eqref{linearQ} and \eqref{linear01} with $f_\varepsilon=E h_\varepsilon$, then $\|f_\varepsilon-EM f_\varepsilon\|_{L^2(Q)}=0$ and so we obtain the same estimate, \begin{equation}\label{estimaciondebilresolvente-0} \|u_\varepsilon-Ev_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}\leq C\varepsilon\|Eh_\varepsilon\|_{L^2(Q)}. \end{equation} \end{re} \begin{re}\label{version-abstracta} Writing this proposition in the abstract setting we get \begin{equation}\label{resolventefuerte} \|A_\varepsilon^{-1}-EA_0^{-1}M\|_{\mathcal{L}(L^2(Q), H^1_{\bm\varepsilon}(Q))}\leq C\varepsilon. \end{equation} Observe that \eqref{resolventefuerte} implies that hypothesis {\bf (H1)} holds for $\alpha=1/2$ and therefore for $0\leq \alpha\leq 1/2$. Moreover, estimate \eqref{estimaciondebilresolvente-0} shows that we may take $\tau(\varepsilon)=C\varepsilon$. \end{re} \par We show now, in a formal way, that the estimate obtained in Proposition \ref{resolvente} is optimal. For this, we will consider a domain $Q_\varepsilon$ having circular cross sections and, with the aid of an asymptotic expansion of the solution $u_\varepsilon$, we will see that the rate $\varepsilon$ cannot be improved. Hence, let $Q_\varepsilon=\{(x, \varepsilon\mathbf{y})\in\mathbb{R}^d : (x, \mathbf{y})\in Q\}$, with $\varepsilon\in (0, 1)$, and $Q=\{(x, \mathbf{y})\in\mathbb{R}^d : 0\leq x\leq 1, |\mathbf{y}|< r(x) \}$, so that the transversal sections $\Gamma_x^1$ of the domain $Q$ are disks centered at the origin of radius $r(x)$. Obviously, the change of variables which takes $Q_\varepsilon$ into the fixed domain $Q$ is the following, $$X=x, \qquad \mathbf{Y}=\varepsilon \mathbf{y},$$ with $(X, \mathbf{Y})\in Q_\varepsilon$ and $(x, \mathbf{y})\in Q$.
This change of variables transforms the original problem into the following linear problem in $Q$ (we take the coefficient $\mu=1$ in the equation), \begin{equation}\label{linearQexample} \left\{ \begin{array}{r l r} -\frac{\partial^2u_\varepsilon}{\partial x^2}-\frac{1}{\varepsilon^2}\Delta_{\mathbf{y}}u_\varepsilon+u_\varepsilon\;&\; = Ef, \;&\;\textrm{in}\quad Q\\ (\nabla_xu_\varepsilon, \frac{1}{\varepsilon}\nabla_{\mathbf{y}}u_\varepsilon)\cdot\nu_\varepsilon\;&\;=0\quad&\textrm{on}\quad \partial Q, \end{array} \right. \end{equation} with $\partial Q=\Gamma_0^1\cup\partial_lQ\cup\Gamma_1^1$, where $\partial_l Q$ is the ``lateral boundary'', given by $\partial_l Q=\{ (x,\mathbf{y}): \mathbf{y}\in \partial \Gamma_x^1\}$ and \begin{equation} \nu_\varepsilon= \left\{ \begin{array}{c r} (-1, 0),\qquad & \textrm{on}\quad \Gamma_0^1,\\ \left(\frac{-\varepsilon r r'}{r\sqrt{\varepsilon^2{r'}^2+1}}, \frac{\mathbf{y}}{r\sqrt{\varepsilon^2{r'}^2+1}}\right),\qquad & \textrm{on}\quad \partial \Gamma_x^1,\\ (1, 0),\qquad & \textrm{on}\quad \Gamma_1^1. \end{array} \right. \end{equation} The limit problem is given by \begin{equation}\label{linear01example} \left\{ \begin{array}{r l r} -\frac{1}{g}(g {v_0}_x)_x + v_0\;&\; = f, \;&\;\textrm{in}\quad (0, 1)\\ {v_0}_x(0)\;&\;=0,\;&\;{v_0}_x(1)=0, \end{array} \right. \end{equation} with $f\in L^2_g(0,1)$. Recall that $g(x)=|\Gamma_x^1|=r(x)^{d-1}\omega_{d-1}$, where $\omega_{d-1}$ is the $(d-1)$-measure of the unit ball in $\mathbb{R}^{d-1}$.
To analyze the rate of convergence of $u_\varepsilon\rightarrow v_0$ as $\varepsilon\rightarrow 0$, we express the solution of (\ref{linearQexample}) as the series $$u_\varepsilon=\sum_{i=0}^\infty \varepsilon^iV_i(x, \mathbf{y})=V_0(x,\mathbf{y})+\varepsilon V_1(x,\mathbf{y})+\varepsilon^2 V_2(x,\mathbf{y})+\ldots$$ Introducing this expression in problem (\ref{linearQexample}) we obtain, \begin{equation}\label{problemserie} \left\{ \begin{array}{r l r} -\sum_{i=0}^\infty \varepsilon^iV_{i_{xx}}-\frac{1}{\varepsilon^2}\sum_{i=0}^\infty \varepsilon^i \Delta_\mathbf{y} V_{i}+\sum_{i=0}^\infty \varepsilon^iV_i(x, \mathbf{y})\;&\; = Ef, \;&\;\textrm{in}\quad Q\\ \\ -\sum_{i=0}^\infty \varepsilon^iV_{i_x}\;&\;= 0, \;&\;\textrm{on}\quad \Gamma_0^1\\ \\ -\sum_{i=0}^\infty \varepsilon^{i+1}V_{i_x}rr'+\sum_{i=0}^\infty \varepsilon^{i-1}\nabla_{\mathbf{y}} V_{i}\cdot \mathbf{y}\;&\;=0\quad&\textrm{on}\quad \partial \Gamma_x^1\\ \\ \sum_{i=0}^\infty \varepsilon^iV_{i_x}\;&\;= 0,\quad&\textrm{on}\quad \Gamma_1^1 \end{array} \right. \end{equation} \par Grouping terms by powers of $\varepsilon$, we obtain the following equalities in $Q$, \begin{equation}\label{Equation-in-Q} \begin{array}{c} \Delta_{\mathbf{y}}V_0(x, \mathbf{y})=0, \\ \\ \Delta_{\mathbf{y}}V_1(x, \mathbf{y})=0, \\ \\ -V_{0_{xx}}(x, \mathbf{y})-\Delta_{\mathbf{y}}V_{2}(x, \mathbf{y})+V_0(x, \mathbf{y})-f(x)=0, \\ \\ -V_{i_{xx}}(x, \mathbf{y})-\Delta_{\mathbf{y}}V_{i+2}(x, \mathbf{y})+V_i(x, \mathbf{y})=0,\qquad \textrm{for}\quad i=1, 2,...
\end{array} \end{equation} and, from the boundary conditions, we have, \begin{equation}\label{Equation-in-Boundary-Q} \begin{array}{c} V_{i_x}(x, \mathbf{y})=0\qquad\textrm{on}\quad \Gamma_0^1\cup\Gamma_1^1,\qquad\textrm{for}\quad i=0, 1, 2, ...\\ \\ \nabla_{\mathbf{y}}V_0(x, \mathbf{y})\cdot\mathbf{y}=0, \quad \hbox{ on } \partial \Gamma_x^1 \\ \\ \nabla_{\mathbf{y}}V_1(x, \mathbf{y})\cdot\mathbf{y}=0, \quad \hbox{ on } \partial \Gamma_x^1\\ \\ -V_{i_x}(x, \mathbf{y})rr'+\nabla_{\mathbf{y}}V_{i+2}(x, \mathbf{y})\cdot\mathbf{y}=0,\qquad\textrm{for}\quad i=0, 1, 2,...\quad \hbox{ on } \partial \Gamma_x^1 \end{array} \end{equation} First, for $x\in(0, 1)$ fixed, we focus on the particular problems in $\Gamma_x^1$ in which $V_0(x, \mathbf{y})$ and $V_1(x, \mathbf{y})$ are involved, \begin{equation}\label{problemV0V1} \begin{array}{r l r c r l r} \Delta_{\mathbf{y}}V_0(x,\mathbf{y})\;&\;=0\quad& \textrm{in}\quad \Gamma_x^1, &\hspace{1cm}& \Delta_{\mathbf{y}}V_1(x,\mathbf{y})\;&\;=0\qquad& \textrm{in}\quad \Gamma_x^1\\ \nabla_{\mathbf{y}}V_0(x, \mathbf{y})\cdot\mathbf{y}\;&\;=0\quad& \textrm{on}\quad \partial\Gamma_x^1, &\hspace{1cm}&\nabla_{\mathbf{y}}V_1(x, \mathbf{y})\cdot\mathbf{y}\;&\;=0\qquad& \textrm{on}\quad \partial\Gamma_x^1. \end{array} \end{equation} Both problems imply that, for each $x\in(0, 1)$, $V_0(x, \mathbf{y})$ and $V_1(x, \mathbf{y})$ are constant in $\Gamma_x^1$. This means that both functions depend only on $x$, $$V_0(x, \mathbf{y})= V_0(x), \hspace{2cm} V_1(x, \mathbf{y})=V_1(x).$$ Since $V_0$ only depends on $x$, the third condition in \eqref{Equation-in-Q} and in \eqref{Equation-in-Boundary-Q} can be written as \begin{equation}\label{problemV2} \left\{ \begin{array}{r l r} \Delta_{\mathbf{y}}V_2(x,\mathbf{y})\;&\;=-V_{0_{xx}}(x)+V_0(x)-f(x)\qquad& \textrm{in}\quad \Gamma_x^1, \\ \nabla_{\mathbf{y}}V_2(x, \mathbf{y})\cdot\nu\;&\;=V_{0_x}(x)r'(x)\qquad& \textrm{on}\quad \partial\Gamma_x^1. \end{array} \right.
\end{equation} Integrating the equation over $\Gamma_x^1$ and using the boundary condition, we find that in order to have solutions of \eqref{problemV2} we must have (Fredholm alternative), $${V_0}_xr'|\partial\Gamma_x^1|= (-V_{0_{xx}}(x)+V_0(x)-f(x))|\Gamma_x^1|.$$ That is, $$-V_{0_{xx}}(x)+V_0(x)-f(x)={V_0}_xr' \frac{|\partial\Gamma_x^1|}{|\Gamma_x^1|}=\frac{d-1}{r}V_{0_x}r'.$$ Now, since $\frac{g_x}{g}=(d-1)\frac{r'}{r}$ we easily get $$-\frac{1}{g}(gV_{0_{x}})_x+V_0=f(x),$$ and the boundary conditions are given by $$V_{0_x}(0)=V_{0_x}(1)=0.$$ This implies $V_0(x, \mathbf{y})=v_0(x)$ is the solution of the limit problem \eqref{linear01example}. Moreover, the function $V_2(x,\mathbf{y})$ satisfies \eqref{problemV2} and it is not identically 0 in general (for instance, if $f\not\equiv 0$). \par Proceeding in a similar way with $V_1$ and $V_3$ we get, \begin{equation}\label{problemV3constant} \left\{ \begin{array}{r l r} \Delta_{\mathbf{y}}V_3(x,\mathbf{y})\;&\;=-V_{1_{xx}}(x)+V_1(x)\qquad& \textrm{in}\quad \Gamma_x^1, \\ \nabla_{\mathbf{y}}V_3(x, \mathbf{y})\cdot\nu\;&\;=r'V_{1_x}(x)\qquad& \textrm{on}\quad \partial\Gamma_x^1, \end{array} \right. \end{equation} and with the Fredholm alternative, the function $V_1$ needs to satisfy $-\frac{1}{g}(gV_{1_{x}})_x+V_1=0,$ with the boundary conditions ${V_1}_x(0)={V_1}_x(1)=0$ (see \eqref{Equation-in-Boundary-Q}). This implies that $V_1(\cdot)\equiv 0$ and from \eqref{problemV3constant} we get $V_3=V_3(x)$. With an induction argument it is not difficult to see now that $V_i\equiv 0$ for all odd $i$. Hence, $$u_\varepsilon(x,\mathbf{y})=v_0(x)+\varepsilon^2V_2(x,\mathbf{y})+\varepsilon^4 V_4(x,\mathbf{y}) +\ldots$$ where $V_2(x,\mathbf{y})$ is the solution of \eqref{problemV2} which is generically nonzero.
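The computational step above is the identity $\frac{1}{g}(g v')' = v'' + (d-1)\frac{r'}{r}v'$ for $g=r^{d-1}$, which turns the solvability condition into the limit equation. As a sanity check (and nothing more), it can be verified symbolically with generic symbols $r$, $v$, $d$; the sketch assumes nothing beyond $g=r^{d-1}$.

```python
import sympy as sp

x, d = sp.symbols('x d')
r = sp.Function('r')(x)       # generic radius profile r(x)
v = sp.Function('v')(x)       # generic smooth function v(x)

g = r**(d - 1)                                    # weight g = r^{d-1}
lhs = (g * v.diff(x)).diff(x) / g                 # (1/g) (g v')'
rhs = v.diff(x, 2) + (d - 1) * (r.diff(x) / r) * v.diff(x)

# the two expressions agree identically
assert sp.simplify(sp.expand(lhs) - sp.expand(rhs)) == 0
```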
Then, for $\varepsilon$ small enough, $$\|u_\varepsilon-EV_0\|_{H^1_{\bm\varepsilon}(Q)}=\|\varepsilon^2V_2(x, \mathbf{y})\|_{H^1_{\bm\varepsilon}(Q)}+\ldots$$ $$=\left(\varepsilon^4\int_Q(|\nabla_xV_2|^2+\frac{1}{\varepsilon^2}|\nabla_{\mathbf{y}}V_2|^2+|V_2|^2)dxd{\mathbf{y}}\right)^{\frac{1}{2}}+\ldots = \varepsilon\|\nabla_\mathbf{y}V_2\|_{L^2(Q)}+o(\varepsilon).$$ But, $$\|\nabla_\mathbf{y}V_2\|_{L^2(Q)}\sim \|f\|_{L^2(Q)},$$ which implies that the estimate from Proposition \ref{resolvente} is optimal. \section{Analysis of the nonlinear terms}\label{nonlinear} In this section we focus our study on the nonlinear terms. We will analyze their differentiability properties and we will prepare the nonlinearities to apply the results on existence and convergence of inertial manifolds described in Section \ref{previous} (see also \cite{Arrieta-Santamaria-C0,Arrieta-Santamaria-C1}). As a matter of fact, we will show that with appropriate cut-off functions the new nonlinearities satisfy hypotheses {\bf (H2)} and {\bf (H2')}, easing our way to the construction of the inertial manifolds and to estimating the distance between them. Recall that we denote by $X_\varepsilon^\alpha$, for $0\leq\varepsilon\leq\varepsilon_0$, the fractional power spaces corresponding to the elliptic operators, see Section \ref{setting}. First, we analyze the properties that the nonlinear terms satisfy. Remember that the nonlinearity $f$, together with its first and second derivatives, satisfies the boundedness condition (\ref{dissipativecondition2}). We denote by $F_\varepsilon: X_\varepsilon^\alpha\rightarrow L^2(Q)$ the Nemytskii operator corresponding to $f$, that is, \begin{equation}\label{Nem-op} \begin{array}{rl} F_\varepsilon:X_\varepsilon^\alpha&\longrightarrow L^2(Q), \\ u&\longmapsto f(u), \end{array} \end{equation} \begin{equation} \begin{array}{rl} F_0:X_0^\alpha&\longrightarrow L^2_g(0, 1),\\ u&\longmapsto f(u). \end{array} \end{equation} Then we have the following result.
\begin{lem}\label{nonlinearity-C1} The Nemytskii operator $F_\varepsilon$, $\varepsilon\geq 0$, satisfies the following properties: \begin{itemize} \item[(i)]$F_\varepsilon$ is uniformly bounded from $X_\varepsilon^\alpha$ into $L^2(Q)$. That is, there exists a constant $C_F>0$ independent of $\varepsilon$ such that, $$\|F_\varepsilon\|_{L^\infty(X_\varepsilon^\alpha, L^2(Q))}\leq C_F.$$ \item[(ii)]There exists $\theta_F\in (0,1]$ such that $F_\varepsilon$ is $C^{1, \theta_F}(X_\varepsilon^\alpha, L^2(Q))$ uniformly in $\varepsilon$. That is, there exists a constant $L_F>0$, such that, $$\|F_\varepsilon(u)-F_\varepsilon(v)\|_{L^2(Q)} \leq L_F\|u-v\|_{X_\varepsilon^\alpha},$$ $$\|DF_\varepsilon(u)-DF_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))} \leq L_F\|u-v\|^{\theta_F}_{X_\varepsilon^\alpha}$$ for all $u,v\in X_\varepsilon^\alpha$ and all $0\leq \varepsilon\leq \varepsilon_0$. \end{itemize} \end{lem} \par \paragraph{\sl Proof. } Item (i) is proved directly as follows. Since the nonlinearity $f$ is uniformly bounded, see (\ref{dissipativecondition2}), $$\|F_\varepsilon\|_{L^\infty(X_\varepsilon^\alpha, L^2(Q))}=\sup_{u\in X_\varepsilon^\alpha}\left(\int_Q|f(u(x, \mathbf{y}))|^2dxd\mathbf{y}\right)^{\frac{1}{2}}\leq L_f|Q|^{\frac{1}{2}},$$ for any $\varepsilon\geq 0$, where $|Q|$ is the Lebesgue measure of $Q$. So, we have the desired estimate with $C_F=L_f|Q|^{\frac{1}{2}}$. To prove item (ii), we proceed as follows.
$$\|F_\varepsilon(u)- F_\varepsilon(v)\|_{L^2(Q)}=\left(\int_Q|f(u(x, \mathbf{y}))-f(v(x, \mathbf{y}))|^2dxd\mathbf{y}\right)^{\frac{1}{2}}.$$ Since $f$ is globally Lipschitz, see \eqref{dissipativecondition2}, then, $$\|F_\varepsilon(u)- F_\varepsilon(v)\|_{L^2(Q)}\leq L_f\left(\int_Q|u(x, \mathbf{y})-v(x, \mathbf{y})|^2dxd\mathbf{y}\right)^{\frac{1}{2}}=$$ $$=L_f\|u-v\|_{L^2(Q)}\leq L_f\|u-v\|_{X_\varepsilon^\alpha},$$ taking $L_F=L_f$ we have, for all $\varepsilon\geq 0$, that $F_\varepsilon$ is globally Lipschitz from $X_\varepsilon^\alpha$ into $L^2(Q)$ with uniform constant $L_F$. To show the remaining part, notice first that for $u\in X_\varepsilon^\alpha$, $DF_\varepsilon(u)$ is given by the operator \begin{equation} \begin{array}{rl} DF_\varepsilon(u): X_\varepsilon^\alpha&\longrightarrow L^2(Q), \\ v&\longmapsto f'(u)v, \end{array} \end{equation} which is easily shown from the definition of Fr\'{e}chet derivative, the Sobolev embeddings $X_\varepsilon^\alpha\hookrightarrow L^q$ for $q>2$, and the property \eqref{dissipativecondition2}. That is, $$\|F_\varepsilon(u+v)-F_\varepsilon(u)-f'(u)v\|_{L^2(Q)}=\|\left(f'(\xi)-f'(u)\right)v\|_{L^2(Q)},$$ with $\xi$ an intermediate point between $u$ and $u+v$. But, by \eqref{dissipativecondition2} $|f'(\xi)-f'(u)|\leq 2 L_f$ and also by the mean value theorem $|f'(\xi)-f'(u)|\leq L_f|\xi-u|\leq L_f |v|$. This implies $\left|f'(\xi)-f'(u)\right|\leq 2 L_f |v|^\theta$, for all $0<\theta<1$. Hence, $$\|F_\varepsilon(u+v)-F_\varepsilon(u)-f'(u)v\|_{L^2(Q)}\leq 2L_f\|v^{1+\theta}\|_{L^2(Q)}=2L_f \|v\|_{L^{2+2\theta}(Q)}^{1+\theta}.$$ Choosing $2+2\theta < q$ we get that $DF_\varepsilon(u)v=f'(u)v$. 
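The formula $DF_\varepsilon(u)v=f'(u)v$ can also be illustrated numerically: on a discrete grid (standing in for $Q$), the difference quotient of the Nemytskii map agrees with multiplication by $f'(u)$ up to $O(h)$. In the sketch below, the function name is ours and the choice $f=\tanh$ is an arbitrary stand-in for a nonlinearity with bounds of the type \eqref{dissipativecondition2}.

```python
import numpy as np

def nemytskii_derivative_gap(f, fprime, u, v, h=1e-6):
    """Maximal pointwise gap between the difference quotient
    (f(u + h v) - f(u)) / h of the Nemytskii map and the candidate
    derivative f'(u) v; it should be O(h) for smooth bounded f."""
    return np.max(np.abs((f(u + h * v) - f(u)) / h - fprime(u) * v))
```

For instance, with $u$ a grid sample and $v$ a bounded perturbation, the gap is of the order of the step $h$.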
Moreover, we have that, for all $\varepsilon\geq 0$, $$\|DF_\varepsilon(u)-DF_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}=\sup_{\phi\in X_\varepsilon^\alpha,\,\, \|\phi\|_{X_\varepsilon^\alpha}\leq 1}\|DF_\varepsilon(u)\phi-DF_\varepsilon(v)\phi\|_{L^2(Q)}.$$ Hence, $$\|DF_\varepsilon(u)-DF_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}=\sup_{\phi\in X_\varepsilon^\alpha,\,\, \|\phi\|_{X_\varepsilon^\alpha}\leq 1}\left(\int_Q(f'(u)-f'(v))^2\phi^2dxd\mathbf{y}\right)^{\frac{1}{2}}.$$ Note that, by the H\"{o}lder inequality with exponents $\frac{d}{4\alpha}$ and $\frac{d}{d-4\alpha}$ (remember $\alpha<\frac{1}{2}$ and $d\geq 2$, so that both $\frac{d}{4\alpha}$, $\frac{d}{d-4\alpha} \in (1,\infty)$), we have, $$\int_Q(f'(u)-f'(v))^2\phi^2dxd\mathbf{y}\leq \left(\int_Q|f'(u)-f'(v)|^{\frac{d}{2\alpha}} dxd\mathbf{y}\right)^{\frac{4\alpha}{d}}\left(\int_Q |\phi|^{\frac{2d}{d-4\alpha}}dxd\mathbf{y}\right)^{\frac{d-4\alpha}{d}}.$$ Then, from Remark \ref{EMalpha} ii) we have, $$\int_Q(f'(u)-f'(v))^2\phi^2dxd\mathbf{y}\leq C\left(\int_Q|f'(u)-f'(v)|^{\frac{d}{2\alpha}} dxd\mathbf{y}\right)^{\frac{4\alpha}{d}}\|\phi\|^2_{X_\varepsilon^\alpha}. $$ Then, $$\sup_{\phi\in X_\varepsilon^\alpha,\,\, \|\phi\|_{X_\varepsilon^\alpha}\leq 1}\left(\int_Q(f'(u)-f'(v))^2\phi^2dxd\mathbf{y}\right)^{\frac{1}{2}}\leq\left(\int_Q|f'(u)-f'(v)|^{\frac{d}{2\alpha}} dxd\mathbf{y}\right)^{\frac{2\alpha}{d}}.$$ Next, note that, on the one hand, by the mean value theorem and using \eqref{dissipativecondition2}, we have, $$|f'(u)-f'(v)|\leq L_f|u-v|.$$ On the other hand, again by (\ref{dissipativecondition2}), $$|f'(u)-f'(v)|\leq 2L_f.$$ Hence, $$|f'(u)-f'(v)|\leq 2L_f\min\{1, |u-v|\}\leq 2L_f|u-v|^\theta,$$ for any $0\leq\theta\leq 1$, where we have used that if $0\leq x\leq 1$ and $0\leq \theta\leq 1$ then $x\leq x^\theta$.
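The interpolation just used rests on the elementary inequality $\min\{1,x\}\le x^\theta$ for $x>0$ and $0\le\theta\le 1$ (for $x\le1$ because $x\le x^\theta$, for $x>1$ because $1\le x^\theta$). A quick numerical sanity check, with arbitrary sample points and a hypothetical helper name:

```python
import numpy as np

def min_one_holder_ok(xs, theta):
    """Check min{1, x} <= x**theta on the sample xs, for 0 <= theta <= 1;
    this is the inequality behind |f'(u)-f'(v)| <= 2 L_f |u-v|**theta."""
    return bool(np.all(np.minimum(1.0, xs) <= xs**theta + 1e-12))
```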
Then, $$\|DF_\varepsilon(u)-DF_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}\leq 2L_f\left(\int_Q|u-v|^{\frac{\theta d}{2\alpha}}dxd\mathbf{y}\right)^{\frac{2\alpha}{d}}.$$ Taking $\theta_F=\min\{1,\frac{4\alpha}{d-4\alpha}\}$, $$\|DF_\varepsilon(u)-DF_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}\leq 2L_f\left(\int_Q|u-v|^{\frac{2d}{d-4\alpha}}dxd\mathbf{y}\right)^{\frac{2\alpha}{d}}=2L_f\|u-v\|^{\theta_F}_{L^{\frac{2d}{d-4\alpha}}(Q)}.$$ Applying again the uniform embedding described in Remark \ref{EMalpha} ii), we obtain $$\|DF_\varepsilon(u)-DF_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}\leq 2L_f\|u-v\|_{X_\varepsilon^\alpha}^{\theta_F}. $$ Taking $L_F=2L_f$ we have the result. \begin{flushright}$\blacksquare$\end{flushright} \begin{re}\label{alpha-positive} i) Note that we have to impose $\alpha$ strictly positive to guarantee the smoothness of $F_\varepsilon$, that is, to ensure that $F_\varepsilon\in C^{1, \theta}(X_\varepsilon^\alpha, L^2(Q))$ for $\theta$ small enough. As a matter of fact, if $\alpha=0$ then $X_\varepsilon^\alpha=L^2(Q)$, and any nonlinearity $F:L^2(Q)\to L^2(Q)$ which is a Nemytskii operator, as in \eqref{Nem-op}, cannot be $C^1$ unless it is linear, see \cite{Henry1}, Exercise 1. Although in \cite{Henry1} the author considers the case $F_\varepsilon(u)=\sin(u)$, the argument can be easily extended to any $C^2$ function. \par ii) If $d\geq 4$ we always have $\theta_F=\frac{4\alpha}{d-4\alpha}<1$ because $\alpha<1/2$. Only in dimensions $d=2,3$, choosing $\alpha<1/2$ but close enough to $1/2$, may we get $\frac{4\alpha}{d-4\alpha}>1$ and therefore $\theta_F=1$. As a matter of fact, in dimensions $d=2,3$ we may show some higher differentiability of $F$. \end{re} We fix $\alpha$ with $0<\alpha<\frac{1}{2}$. As we have mentioned above, one of our basic tools consists in constructing inertial manifolds to reduce our problem to a finite dimensional one.
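The dichotomy in part ii) of the remark is a direct computation with the exponent from Lemma \ref{nonlinearity-C1}; a one-line sketch (the helper name is ours) makes the dependence on $d$ and $\alpha$ explicit:

```python
def theta_F(alpha, d):
    """Holder exponent theta_F = min{1, 4*alpha/(d - 4*alpha)} from the lemma;
    requires 0 < alpha < 1/2 and d >= 2, so that d - 4*alpha > 0."""
    assert 0 < alpha < 0.5 and d >= 2
    return min(1.0, 4 * alpha / (d - 4 * alpha))
```

For $d\ge 4$ the quotient stays below $1$ for every admissible $\alpha$, while for $d=2,3$ it exceeds $1$ once $\alpha$ is close enough to $1/2$, giving $\theta_F=1$.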
In order to construct these manifolds and following \cite{Sell&You}, we need to ``prepare'' the nonlinear term by making an appropriate cut off of the nonlinearity in the $X_\varepsilon^\alpha$ norm. Next, we proceed to introduce this cut off. For this, we start considering a function $\hat\Theta: \mathbb{R}\to [0,1]$ which is $C^\infty$ with compact support and such that \begin{equation}\label{funcionthetaR} \hat\Theta(x)= \left\{ \begin{array}{l c r} 1 & \textrm{if} & |x|\leq R^2\\ 0 & \textrm{if} & |x|\geq4R^2, \end{array} \right. \end{equation} for some $R>0$, which in general will be large enough. We will denote this function by $\hat\Theta^R(x)$ if we need to make explicit its dependence on the parameter $R$. With this function we define now $\Theta_\varepsilon:X_\varepsilon^\alpha\to \mathbb{R}$ as $\Theta_\varepsilon(u)=\hat \Theta(\|u\|^2_{X_\varepsilon^\alpha})$ for $0\leq\varepsilon\leq\varepsilon_0$, and observe that $\Theta_\varepsilon(u)=1$ if $\|u\|_{X_\varepsilon^\alpha}\leq R$ and $\Theta_\varepsilon(u)=0$ if $\|u\|_{X_\varepsilon^\alpha}\geq 2R$; again, we will denote $\Theta_\varepsilon$ by $\Theta_\varepsilon^R$ if we need to make explicit its dependence on $R$. Now, for $R>0$, large enough, and $0<\varepsilon\leq\varepsilon_0$, we introduce the new nonlinear terms \begin{equation}\label{definition-cutoff-eps} \tilde{F}_\varepsilon(u_\varepsilon):= \Theta_\varepsilon^R(u_\varepsilon) F_\varepsilon(u_\varepsilon), \end{equation} \begin{equation}\label{definition-cutoff-*} \tilde{F}^\varepsilon_0(u_0):= \Theta_\varepsilon^R(Eu_0) F_0(u_0), \end{equation} and \begin{equation}\label{definition-cutoff-0} \tilde{F}_0(u_0):= \Theta_0^R(u_0) F_0(u_0). \end{equation} We replace $F_\varepsilon$ and $F_0$ with the new nonlinearities $\tilde{F}_\varepsilon$, $\tilde{F}^\varepsilon_0$ and $\tilde{F}_0$.
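A concrete $\hat\Theta^R$ as in \eqref{funcionthetaR} can be assembled from the standard bump $\psi(t)=e^{-1/t}$ for $t>0$, $\psi(t)=0$ otherwise. The sketch below is one admissible choice, not the one in \cite{Sell&You}; the argument only needs that some such $C^\infty$ cutoff exists.

```python
import numpy as np

def smooth_cutoff(x, R):
    """C^infinity cutoff: equals 1 for |x| <= R**2, vanishes for |x| >= 4*R**2,
    and transitions smoothly in between via the bump psi(t) = exp(-1/t) (t > 0)."""
    def psi(t):
        t = np.asarray(t, dtype=float)
        safe = np.where(t > 0, t, 1.0)          # avoid division by zero where t <= 0
        return np.where(t > 0, np.exp(-1.0 / safe), 0.0)
    s = (np.abs(x) - R**2) / (3.0 * R**2)       # s <= 0 on the plateau, s >= 1 outside
    return psi(1.0 - s) / (psi(1.0 - s) + psi(s))
```

Composing with $\|u\|_{X_\varepsilon^\alpha}^2$ then gives the map $\Theta_\varepsilon$ of the text.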
Hence, now we have three systems, two of them in the limit space $X_0^\alpha$, \begin{equation}\label{system-eps} {u}_t=-A_\varepsilon u+\tilde{F}_\varepsilon(u),\qquad u\in X_\varepsilon^\alpha \end{equation} \begin{equation}\label{system-*} {u}_t=-A_0u+\tilde{F}^\varepsilon_0(u),\qquad u\in X_0^\alpha, \end{equation} \begin{equation}\label{system-0} {u}_t=-A_0u+\tilde{F}_0(u), \qquad u\in X_0^\alpha. \end{equation} Note that, since systems \eqref{system-*} and \eqref{system-0} share the linear part and $\tilde{F}_0(u)=\tilde{F}^\varepsilon_0(u)$ for $\|u\|_{X_0^\alpha}\leq R$, the attractors of \eqref{system-*} and \eqref{system-0} coincide, and both equal $\mathcal{A}_0$. Moreover, although $\,\tilde{F}^\varepsilon_0, \tilde{F}_0: X_0^\alpha\rightarrow X_0\,$, the nonlinearity $\tilde{F}^\varepsilon_0$ depends on $\varepsilon$. \begin{re} It may seem somewhat strange to consider three systems instead of the natural two (the perturbed one \eqref{system-eps} and the completely unperturbed one \eqref{system-0}). The three systems meet the conditions to have inertial manifolds, and we will see that they are all close in the $C^{1}$ topology. But, as we will see below, we will have good estimates for the distance between the inertial manifolds of systems \eqref{system-eps} and \eqref{system-*}, but not such good estimates for the distance between the inertial manifolds of systems \eqref{system-eps} and \eqref{system-0}, or \eqref{system-*} and \eqref{system-0}. \end{re} First, we analyze the properties that $\tilde{F}_\varepsilon$, $\tilde{F}^\varepsilon_0$ and $\tilde{F}_0$ satisfy. \begin{lem}\label{nonlinearity} Let $\tilde{F}_\varepsilon$, $0<\varepsilon\leq\varepsilon_0$, $\tilde{F}^\varepsilon_0$ and $\tilde{F}_0$, be the new nonlinearities described above.
Then they satisfy the following properties: \begin{itemize} \item[(a)]$\tilde{F}_\varepsilon(u)=F_\varepsilon(u)$, for all $u\in X_\varepsilon^\alpha$, such that $\|u\|_{ X_\varepsilon^\alpha}\leq R$, $\varepsilon> 0$ and $\tilde{F}^\varepsilon_0(u_0)=F_0(u_0)$, $\tilde{F}_0(u_0)=F_0(u_0)$, for all $u_0\in X_0^\alpha$, such that $\|Eu_0\|_{X_\varepsilon^\alpha}\leq R$ and $\|u_0\|_{X_0^\alpha}\leq R$, respectively. \item[(b)]$\tilde{F}_\varepsilon$ is $C^{1, \theta_F}(X_\varepsilon^\alpha, L^2(Q))$ and $\tilde{F}^\varepsilon_0$, $\tilde{F}_0$ are $C^{1, \theta_F}(X_0^\alpha, L_g^2(0,1))$ with $\theta_F$ the one from Lemma \ref{nonlinearity-C1}. That is, they are globally Lipschitz from $X_\varepsilon^\alpha$ to $L^2(Q)$ and from $X_0^\alpha$ to $L_g^2(0,1)$, we denote by $L_F$ their Lipschitz constant, and \begin{equation}\label{Holder} \|D\tilde{F}_\varepsilon(u)-D\tilde{F}_\varepsilon(u')\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}\leq L_F\|u-u'\|_{X_\varepsilon^\alpha}^{\theta_F}, \end{equation} \begin{equation}\label{Holder*} \|D\tilde{F}^\varepsilon_0(u)-D\tilde{F}^\varepsilon_0(u')\|_{\mathcal{L}(X_0^\alpha, L_g^2(0,1))}\leq L_F\|u-u'\|_{X_0^\alpha}^{\theta_F}, \end{equation} \begin{equation}\label{Holder0} \|D\tilde{F}_0(u)-D\tilde{F}_0(u')\|_{\mathcal{L}(X_0^\alpha, L_g^2(0,1))}\leq L_F\|u-u'\|_{X_0^\alpha}^{\theta_F}, \end{equation} with $L_F$ independent of $\varepsilon$. 
\item[(c)]They are uniformly bounded, $$\|\tilde{F}_\varepsilon\|_{L^\infty(X_\varepsilon^\alpha, L^2(Q))}\leq C_F,\,\,\,\,\,\,\,\,\,\|\tilde{F}^\varepsilon_0\|_{L^\infty(X_0^\alpha, L_g^2(0,1))}\leq C_F,\,\,\,\,\,\,\|\tilde{F}_0\|_{L^\infty(X_0^\alpha, L_g^2(0,1))}\leq C_F.$$ \item[(d)]$\tilde{F}_\varepsilon$, $\tilde{F}^\varepsilon_0$ and $\tilde{F}_0$ have uniformly bounded support in $\varepsilon\geq 0$, that is: $$Supp \tilde{F}_\varepsilon\subset \{u\in X_\varepsilon^\alpha :\|u\|_{X_\varepsilon^\alpha}< 2R\},$$ $$Supp \tilde{F}^\varepsilon_0\subset \{u\in X_0^\alpha :\|Eu\|_{X_\varepsilon^\alpha}< 2R\},$$ $$Supp \tilde{F}_0\subset \{u\in X_0^\alpha :\|u\|_{X_0^\alpha}< 2R\}.$$ \item[(e)]For all $u\in X_0^\alpha$, \begin{equation}\label{rho-beta-*eps} E\tilde{F}^\varepsilon_0(u)=\tilde{F}_\varepsilon(Eu),\qquad\textrm{and}\quad ED\tilde{F}^\varepsilon_0(u)= D\tilde{F}_\varepsilon(Eu)E, \end{equation} and, for any compact set $K\subset X_0^\alpha$, we have, \begin{equation}\label{rho-beta-eps0} \sup_{u_0\in K}\|\tilde{F}_\varepsilon(Eu_0)\mathord-E\tilde{F}_0(u_0)\|_{X_\varepsilon^\alpha}\rightarrow 0, \end{equation} \begin{equation}\label{rho-beta-*0} \sup_{u_0\in K}\|\tilde{F}^\varepsilon_0(u_0)\mathord-\tilde{F}_0(u_0)\|_{X_0^\alpha}\rightarrow 0, \end{equation} as $\varepsilon\rightarrow 0$. \end{itemize} \end{lem} \begin{re} In particular, hypothesis {\bf(H2')} from Section \ref{previous} holds for the three nonlinearities, $\tilde F_\varepsilon$, $\tilde F_0^\varepsilon$ and $\tilde F_0$.
Moreover, the values of $\rho(\varepsilon)$ and $\beta(\varepsilon)$ from {\bf(H2')}, which depend on the nonlinearities we are considering, are the following: $$\rho(\varepsilon), \beta(\varepsilon)=\left\{ \begin{array}{ll} 0& \hbox{ with the nonlinearities } \tilde F_\varepsilon \hbox{ and } \tilde F_0^\varepsilon \\ o(1) & \hbox{ with the nonlinearities } \tilde F_\varepsilon \hbox{ and } \tilde F_0\\ o(1)& \hbox{ with the nonlinearities } \tilde F_0^\varepsilon \hbox{ and } \tilde F_0 \end{array} \right. $$ \end{re} \par \paragraph{\sl Proof. } (a) This follows directly from the definition of $\tilde{F}_\varepsilon$, $\tilde{F}^\varepsilon_0$ and $\tilde{F}_0$, see (\ref{definition-cutoff-eps})-(\ref{definition-cutoff-0}). \par\noindent (b) We proceed as follows. Since $F_\varepsilon$ and $\Theta_\varepsilon$ are globally Lipschitz from $X_\varepsilon^\alpha$ to $L^2(Q)$, $\varepsilon>0$, and from $X_0^\alpha$ to $L^2_g(0, 1)$, see Lemma \ref{nonlinearity-C1} and \cite{JamesRobinson}, Lemma 15.7, the maps $\tilde{F}_\varepsilon$, $\tilde{F}^\varepsilon_0$, $\tilde{F}_0$ are globally Lipschitz from $X_\varepsilon^\alpha$ to $L^2(Q)$ and from $X_0^\alpha$ to $L^2_g(0, 1)$, respectively. So, it remains to prove estimate \eqref{Holder}. Note that $D\tilde{F}_\varepsilon(u)=\Theta_\varepsilon(u)DF_\varepsilon(u)+F_\varepsilon(u)D\Theta_\varepsilon(u)$.
Then, we can decompose $\|D\tilde{F}_\varepsilon(u)-D\tilde{F}_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}$ as follows, $$\|D\tilde{F}_\varepsilon(u)-D\tilde{F}_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}\leq$$ $$\|[\Theta_\varepsilon(u)-\Theta_\varepsilon(v)]DF_\varepsilon(u)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}+\|\Theta_\varepsilon(v)[DF_\varepsilon(u)-DF_\varepsilon(v)]\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}+$$ $$+\|[F_\varepsilon(u)-F_\varepsilon(v)]D\Theta_\varepsilon(u)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}+\|F_\varepsilon(v)[D\Theta_\varepsilon(u)-D\Theta_\varepsilon(v)]\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}=$$ $$=I_1+I_2+I_3+I_4.$$ Since $\Theta_\varepsilon$ is globally Lipschitz with uniform Lipschitz constant, that we denote by $L_{\hat{\Theta}}$, see \cite[Lemma 15.7]{JamesRobinson} and $\|DF_\varepsilon(u)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}\leq L_F$, see Lemma \ref{nonlinearity-C1}, then $$I_1\leq L_{\hat{\Theta}} L_F\|u-v\|_{X_\varepsilon^\alpha}. $$ Moreover, by Lemma \ref{nonlinearity-C1} $F_\varepsilon\in C^{1, \theta_F}(X_\varepsilon^\alpha, L^2(Q))$. Hence, $$I_2\leq L_F\|u-v\|_{X_\varepsilon^\alpha}^{\theta_F}, \qquad\,\,\,\,\textrm{and}\qquad\,\,\,\, I_3\leq L_FL_{\hat{\Theta}}\|u-v\|_{X_\varepsilon^\alpha}.$$ To obtain an estimate for $I_4$, we first calculate the expression for $D\Theta_\varepsilon(u)$. By definition of $\Theta_\varepsilon$, see (\ref{funcionthetaR}), we have for any $u\in X_\varepsilon^\alpha$, $$D\Theta_\varepsilon(u)=\hat{\Theta}'(\|u\|_{X_\varepsilon^\alpha}^2)2(u, \cdot)_{X_\varepsilon^\alpha},$$ where the function $\hat\Theta$ is defined in \eqref{funcionthetaR}, $'$ is the usual derivative and $(\cdot, \cdot)_{X_\varepsilon^\alpha}$ is the scalar product in the Hilbert space $X^\alpha_\varepsilon$. 
Hence, $$I_4\leq C_F\sup_{\|\varphi\|_{X^\alpha_\varepsilon}=1}\Big\{\Big|\hat{\Theta}'(\|u\|_{X_\varepsilon^\alpha}^2)2(u, \varphi)_{X_\varepsilon^\alpha} - \hat{\Theta}'(\|v\|_{X_\varepsilon^\alpha}^2)2(v, \varphi)_{X_\varepsilon^\alpha}\Big|\Big\}$$ where $C_F$ is the bound from Lemma \ref{nonlinearity-C1} i). But, $$\Big|\hat{\Theta}'(\|u\|_{X_\varepsilon^\alpha}^2)2(u, \varphi)_{X_\varepsilon^\alpha} - \hat{\Theta}'(\|v\|_{X_\varepsilon^\alpha}^2)2(v, \varphi)_{X_\varepsilon^\alpha}\Big|\leq $$ $$\leq \left|\left(\hat{\Theta}'(\|u\|_{X_\varepsilon^\alpha}^2)-\hat{\Theta}'(\|v\|_{X_\varepsilon^\alpha}^2)\right)2(u, \varphi)_{X_\varepsilon^\alpha}\right|+\left|\Theta'(\|v\|_{X_\varepsilon^\alpha}^2)2(u-v, \varphi)_{X_\varepsilon^\alpha}\right|=$$ $$=I_{41}+I_{42}.$$ We first analyze $I_{41}$. Since $\hat{\Theta}$ is a $C^\infty$ function with bounded support in $\mathbb{R}$, then $\hat{\Theta}'$ is globally Lipschitz with Lipschitz constant $L_{\hat{\Theta}}$. So, $$I_{41}\leq 2L_{\hat{\Theta}}\|u\|_{X_\varepsilon^\alpha}\|\varphi\|_{X_\varepsilon^\alpha}\left|\|u\|^2_{X_\varepsilon^\alpha}-\|v\|^2_{X_\varepsilon^\alpha}\right|=$$ $$=2L_{\hat{\Theta}}\|u\|_{X_\varepsilon^\alpha}\left|(\|u\|_{X_\varepsilon^\alpha}+\|v\|_{X_\varepsilon^\alpha})(\|u\|_{X_\varepsilon^\alpha}-\|v\|_{X_\varepsilon^\alpha})\right|\leq$$ $$\leq 2L_{\hat{\Theta}}\|u\|_{X_\varepsilon^\alpha}\left(\|u\|_{X_\varepsilon^\alpha}+\|v\|_{X_\varepsilon^\alpha}\right)\|u-v\|_{X_\varepsilon^\alpha}.$$ We distinguish the following cases: \begin{itemize} \item[(1)] If $\|u\|^2_{X_\varepsilon^\alpha}, \|v\|^2_{X_\varepsilon^\alpha}\leq 8R^2$, then $$I_{41}\leq 32 L_{\hat{\Theta}} R^2\|u-v\|_{X_\varepsilon^\alpha}.$$ \item[(2)] If $\|u\|^2_{X_\varepsilon^\alpha}, \|v\|^2_{X_\varepsilon^\alpha}\geq 8R^2$, then $I_{41}=0$, because $\Theta'(\|u\|^2_{X_\varepsilon^\alpha})=\Theta'(\|v\|^2_{X_\varepsilon^\alpha})=0$. \item[(3)] If $\|u\|^2_{X_\varepsilon^\alpha}\leq 8R^2$ and $\|v\|^2_{X_\varepsilon^\alpha}\geq 8R^2$, then we
always have $\Theta'(\|v\|^2_{X_\varepsilon^\alpha})=0$. We also distinguish two cases, \begin{itemize} \item[(3.1)] If $\|u\|^2_{X_\varepsilon^\alpha}\geq 4R^2$, then again $\Theta'(\|u\|^2_{X_\varepsilon^\alpha})=0$ and therefore $I_{41}=0$. \item[(3.2)] If $\|u\|^2_{X_\varepsilon^\alpha}\leq 4R^2$, then $\|u-v\|_{X_\varepsilon^\alpha}\geq |\|u\|_{X_\varepsilon^\alpha}-\|v\|_{X_\varepsilon^\alpha}|\geq \frac{1}{2}R$. So, $1\leq \frac{2}{R}\|u-v\|_{X_\varepsilon^\alpha}$, and $$I_{41}\leq 8R^2|\Theta'(\|u\|_{X_\varepsilon^\alpha}^2)|\leq 16R^2 L_{\hat{\Theta}}\frac{\|u-v\|_{X_\varepsilon^\alpha}}{R}=16RL_{\hat{\Theta}}\|u-v\|_{X_\varepsilon^\alpha}$$ \end{itemize} \end{itemize} Therefore, $$I_{41}\leq 32L_{\hat{\Theta}} R^2\|u-v\|_{X_\varepsilon^\alpha}.$$ Term $I_{42}$ can be directly estimated as follows, $$I_{42}\leq 2L_{\hat{\Theta}}\|u-v\|_{X_\varepsilon^\alpha}.$$ So $$I_4\leq (32R^2+2)L_{\hat{\Theta}} \|u-v\|_{X_\varepsilon^\alpha}. $$ Hence, putting all the information together, we get $$\|D\tilde{F}_\varepsilon(u)-D\tilde{F}_\varepsilon(v)\|_{\mathcal{L}(X_\varepsilon^\alpha, L^2(Q))}\leq L_F\|u-v\|_{X_\varepsilon^\alpha}^{\theta_F},$$ with $L_F>0$ independent of $\varepsilon$, as we wanted to prove. To obtain the same result for $\tilde{F}^\varepsilon_0$ and $\tilde{F}_0$, the proof is exactly the same, step by step. \par \noindent (c) This property follows from Lemma \ref{nonlinearity-C1}, item (i). \par \noindent (d) It follows directly from the definition of $\Theta_\varepsilon$ and $\Theta_0$. \par \noindent (e) Finally, note that $F_0(u)=f(u(x))=F_\varepsilon(Eu)$. 
Then, for $u\in X_0^\alpha$, $$E\tilde{F}^\varepsilon_0(u)=\Theta_\varepsilon^R(Eu)EF_0(u)=\Theta_\varepsilon^R(Eu)f(u(x))=\Theta_\varepsilon^R(Eu)F_\varepsilon(Eu)=\tilde{F}_\varepsilon(Eu),$$ and, since $D\tilde{F}^\varepsilon_0(u)=\Theta_\varepsilon^R(Eu)DF_0(u)+F_0(u)D\Theta^R_\varepsilon(Eu)$, $$ED\tilde{F}^\varepsilon_0(u)= \Theta_\varepsilon^R(Eu)EDF_0(u)+EF_0(u)D\Theta^R_\varepsilon(Eu)=$$ $$=\Theta_\varepsilon^R(Eu)DF_\varepsilon(Eu)E+F_\varepsilon(Eu)D\Theta^R_\varepsilon(Eu)=D\tilde{F}_\varepsilon(Eu)E.$$ Moreover, for any $u_0\in K\subset X_0^\alpha$ with $K$ compact, we have, $$\|\tilde{F}_\varepsilon(Eu_0)-E\tilde{F}_0(u_0)\|_{X_\varepsilon}\leq$$ $$\|[\Theta^R_\varepsilon(Eu_0)-\Theta^R_0(u_0)]F_\varepsilon(Eu_0)\|_{X_\varepsilon}+\|\Theta^R_0(u_0)[F_\varepsilon(Eu_0)-EF_0(u_0)]\|_{X_\varepsilon}=$$ $$\|[\Theta^R_\varepsilon(Eu_0)-\Theta^R_0(u_0)]F_\varepsilon(Eu_0)\|_{X_\varepsilon}\leq C_F L_{\hat\Theta}|\|Eu_0\|^2_{X_\varepsilon^\alpha}-\|u_0\|^2_{X_0^\alpha}|=$$ $$C_F L_{\hat\Theta}|( \|Eu_0\|_{X_\varepsilon^\alpha}+\|u_0\|_{X_0^\alpha})(\|Eu_0\|_{X_\varepsilon^\alpha}-\|u_0\|_{X_0^\alpha})|\leq $$ $$C_F L_{\hat\Theta}(2e^2+1)\|u_0\|_{X_0^\alpha}|\|Eu_0\|_{X_\varepsilon^\alpha}-\|u_0\|_{X_0^\alpha}|.$$ In the last inequality we applied the bound for the operator $E$ obtained in \eqref{Ealpha}. Hence, since $K$ is a compact subset of $X_0^\alpha$, by Lemma \ref{normaproyeccionextension} item (ii), $$\sup_{u_0\in K}\|\tilde{F}_\varepsilon(Eu_0)-E\tilde{F}_0(u_0)\|_{X_\varepsilon}\rightarrow 0,$$ as $\varepsilon$ tends to zero. We omit the proof of \eqref{rho-beta-*0}, since it is identical to the proof of \eqref{rho-beta-eps0}. \begin{flushright}$\blacksquare$\end{flushright} \section{Inertial manifolds and reduced systems}\label{InertialManifoldsConstruction} We present the construction of inertial manifolds for problems (\ref{system-eps}), (\ref{system-*}) and \eqref{system-0}.
Remember that, with these manifolds, our problem is reduced to a finite dimensional system. The existence of these manifolds is guaranteed by the existence of spectral gaps, large enough, in the spectrum of the associated linear elliptic operators, see \cite{Sell&You}. Moreover, these spectral gaps are going to be guaranteed by the existence of the spectral gaps for the limiting problem together with the spectral convergence of the linear elliptic operators, which is obtained from {\bf (H1)}, see Section \ref{previous} and \cite{Arrieta-Santamaria-C0,Arrieta-Santamaria-C1}. With the notations from Section \ref{setting}, by Proposition \ref{resolvente} for $h_\varepsilon=M f_\varepsilon$ we have, see Remark \ref{version-abstracta}, \begin{equation}\label{resolventefuerte2} \|A_\varepsilon^{-1}-EA_0^{-1}M\|_{\mathcal{L}(L^2(Q), H^1_{\bm\varepsilon}(Q))}\leq C\varepsilon, \end{equation} and for $f_\varepsilon=Eh_\varepsilon$, see Remark \ref{estimaciondebilresolvente}, \begin{equation}\label{resolventedebil} \|A_\varepsilon^{-1}E-EA_0^{-1}\|_{\mathcal{L}(L^2_g(0, 1), H^1_{\bm\varepsilon}(Q))}\leq C\varepsilon. \end{equation} These two estimates imply that hypothesis {\bf(H1)} holds with $\alpha=1/2$, and therefore it also holds for any $0\leq \alpha\leq 1/2$. Moreover, the parameter $\tau(\varepsilon)$ is $\tau(\varepsilon)=\varepsilon$. In the sequel we will use the notation introduced in Section \ref{previous} with respect to the eigenvalues, projections, etc. The limit operator $A_0$ is a one-dimensional operator of Sturm-Liouville type. Following \cite{Hale&Raugel3}, Lemma 4.2, we know that there exists $N_0$ such that for all $m\geq N_0$ \begin{equation}\label{eigenvalue-bound} \pi^2\left(m+\frac{1}{4}\right)^2\leq\lambda^0_m\leq \pi^2\left(m+\frac{3}{4}\right)^2. \end{equation} This implies that for $m\geq N_0$, \begin{equation}\label{eigenvalue-gap} \pi^2 (m+1)\leq\lambda_{m+1}^0-\lambda_m^0\leq 3\pi^2(m+1).
\end{equation} Taking $0<\alpha<1/2$, we get from \eqref{eigenvalue-bound} and \eqref{eigenvalue-gap} that for each $M>0$ large enough, we can choose $m\in \N$ also large enough such that $$\lambda_{m+1}^0-\lambda_m^0\geq M[(\lambda_{m+1}^0)^\alpha+(\lambda_{m}^0)^\alpha]$$ This means that we are in conditions to apply Proposition \ref{existenciavariedadinercial} obtaining that there exist $L<1$ and $0<\varepsilon_1\leq\varepsilon_0$ such that for all $0<\varepsilon\leq\varepsilon_1$ there exist inertial manifolds $\mathcal{M}_\varepsilon$, $\mathcal{M}^\varepsilon_0$ and $\mathcal{M}_0$ for (\ref{system-eps}), (\ref{system-*}) and (\ref{system-0}), given by the ``graph" of functions $\Phi_\varepsilon, \Phi^\varepsilon_0, \Phi_0\in\mathcal{F}_\varepsilon(L, 2R)$, \begin{equation}\label{inertialmanifold-eps} \mathcal{M}_\varepsilon=\{j^{-1}_\varepsilon(z)+\Phi_\varepsilon(z);\,\,z\in\mathbb{R}^m\}, \end{equation} \begin{equation}\label{inertialmanifold-*} \mathcal{M}^\varepsilon_0=\{j^{-1}_0(z)+\Phi^\varepsilon_0(z);\,\,z\in\mathbb{R}^m\}, \end{equation} \begin{equation}\label{inertialmanifold-0} \mathcal{M}_0=\{j^{-1}_0(z)+\Phi_0(z);\,\,z\in\mathbb{R}^m\}, \end{equation} If we denote by $T_{\mathcal{M}_\varepsilon}$, $T_{\mathcal{M}^\varepsilon_0}$ and $T_{\mathcal{M}_0}$ the time one maps of the semigroup restricted to the inertial manifolds $\mathcal{M}_\varepsilon$, $\mathcal{M}^\varepsilon_0$ and $\mathcal{M}_0$, respectively, for $u_\varepsilon\in\mathcal{M}_\varepsilon$, $u^\varepsilon_0\in\mathcal{M}^\varepsilon_0$ and $u_0\in\mathcal{M}_0$ and $z\in\mathbb{R}^m$, the time one maps satisfy the following equalities, $$T_{\mathcal{M}_\varepsilon}(u_\varepsilon)= p_\varepsilon(1)+\Phi_\varepsilon(j_\varepsilon(p_\varepsilon(1))),$$ $$T_{\mathcal{M}^\varepsilon_0}(u^\varepsilon_0)= p^\varepsilon_0(1)+\Phi^\varepsilon_0(j_0(p^\varepsilon_0(1))),$$ $$T_{\mathcal{M}_0}(u_0)= p_0(1)+\Phi_0(j_0(p_0(1))),$$ with $p_\varepsilon(t)$, $p^\varepsilon_0(t)$ and $p_0(t)$ 
the solutions of \begin{equation}\label{equationp-eps-modified} \left\{ \begin{array}{l} p_t=-A_\varepsilon p+\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} \tilde{F}_\varepsilon(p+\Phi_\varepsilon( j_\varepsilon (p(t))))\\ p(0)=j_\varepsilon^{-1}(z), \end{array} \right. \end{equation} \begin{equation}\label{equationp-*-modified} \left\{ \begin{array}{l} p_t=-A_0 p+\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} \tilde{F}^\varepsilon_0(p+\Phi^\varepsilon_0( j_0 (p(t))))\\ p(0)=j_0^{-1}(z), \end{array} \right. \end{equation} \begin{equation}\label{equationp-0-modified} \left\{ \begin{array}{l} p_t=-A_0 p+\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} \tilde{F}_0(p+\Phi_0( j_0 (p(t))))\\ p(0)=j_0^{-1}(z). \end{array} \right. \end{equation} Moreover, $j_\varepsilon(p_\varepsilon(t))$, $j_0(p^\varepsilon_0(t))$ and $j_0(p_0(t))$ satisfy the following systems in $\mathbb{R}^m$, \begin{equation}\label{equationonRm-eps} \left\{ \begin{array}{l} z_t=-j_\varepsilon A_\varepsilon j_\varepsilon^{-1}z+j_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} \tilde{F}_\varepsilon(j_\varepsilon^{-1}(z)+\Phi_\varepsilon(z))\\ z(0)=z^0, \end{array} \right. \end{equation} \begin{equation}\label{equationonRm-*} \left\{ \begin{array}{l} z_t=-j_0 A_0 j_0^{-1}z+j_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} \tilde{F}^\varepsilon_0(j_0^{-1}(z)+\Phi^\varepsilon_0(z))\\ z(0)=z^0, \end{array} \right. \end{equation} \begin{equation}\label{equationonRm-0} \left\{ \begin{array}{l} z_t=-j_0 A_0 j_0^{-1}z+j_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} \tilde{F}_0(j_0^{-1}(z)+\Phi_0(z))\\ z(0)=z^0. \end{array} \right. \end{equation} We write them in the following way: \begin{equation}\label{equationonRm-epsH} \left\{ \begin{array}{l} z_t=-j_\varepsilon A_\varepsilon j_\varepsilon^{-1}z+H_\varepsilon(z)\\ z(0)=z^0, \end{array} \right. \end{equation} \begin{equation}\label{equationonRm-*H} \left\{ \begin{array}{l} z_t=-j_0 A_0 j_0^{-1}z+H^\varepsilon_0(z)\\ z(0)=z^0, \end{array} \right. 
\end{equation} \begin{equation}\label{equationonRm-0H} \left\{ \begin{array}{l} z_t=-j_0 A_0 j_0^{-1}z+H_0(z)\\ z(0)=z^0, \end{array} \right. \end{equation} where $$H_\varepsilon, H_0^\varepsilon, H_0:\mathbb{R}^m\longrightarrow \mathbb{R}^m$$ are given by $$H_\varepsilon(z)=j_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm \varepsilon} \tilde{F}_\varepsilon(j_\varepsilon^{-1}(z)+\Phi_\varepsilon(z)), $$ $$H_0^\varepsilon(z)=j_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} \tilde{F}^\varepsilon_0(j_0^{-1}(z)+\Phi^\varepsilon_0(z)),$$ $$H_0(z)=j_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} \tilde{F}_0(j_0^{-1}(z)+\Phi_0(z)).$$ They are of compact support, $$\mathrm{supp}(H_\varepsilon), \mathrm{supp}(H_0^\varepsilon), \mathrm{supp}(H_0)\subset B_{R'},$$ where $B_{R'}$ denotes the ball in $\mathbb{R}^m$ of some radius $R'>0$ centered at the origin. We have the following result. \begin{prop}\label{MS-Rm} If all equilibria of \eqref{equationon(01)} are hyperbolic, then the time one map of \eqref{equationonRm-0H} is a Morse-Smale (gradient like) map. \end{prop} \paragraph{\sl Proof. } Since all the equilibrium points of (\ref{equationon(01)}) are hyperbolic, by \cite{Henry2} the stable and unstable manifolds intersect transversally, and so the time one map of the dynamical system generated by (\ref{system-0}) is a Morse-Smale (gradient like) map. In \cite{PilyuginShaDyn}, Section 3.4, S. Y. Pilyugin proves that then the time one map $T_{\mathcal{M}_0}$ corresponding to the limit system in the inertial manifold $\mathcal{M}_0$ is a Morse-Smale (gradient like) map in a neighborhood $V$ of the attractor $\mathcal{A}_0$ in this inertial manifold, $V\subset\mathcal{M}_0$. Consequently, the time one map $\bar{T}_0$ of the limit system in $\mathbb{R}^m$ generated by (\ref{equationonRm-0H}) is Morse-Smale (gradient like).
\begin{flushright}$\blacksquare$\end{flushright} \section{Rate of the distance of attractors}\label{distanceattractors} In this section we give an estimate for the distance of the attractors related to (\ref{equationonQepsilon}) and (\ref{equationon(01)}), proving our main result, Theorem \ref{maintheorem-thindomain}. To accomplish this, we start by showing the following important results relating the time one maps of the dynamical systems generated by (\ref{equationonRm-epsH}), (\ref{equationonRm-*H}) and (\ref{equationonRm-0H}) to the ones corresponding to (\ref{equationon(01)}) and (\ref{equationonQ}). Let us denote by $\bar{T}_\varepsilon, \bar{T}^\varepsilon_0, \bar{T}_0:\mathbb{R}^m\rightarrow\mathbb{R}^m,$ the time one maps of the dynamical systems generated by (\ref{equationonRm-epsH}), (\ref{equationonRm-*H}) and (\ref{equationonRm-0H}), respectively. In the following result, we analyze the convergence of these time one maps. \begin{lem}\label{distanciasemigruposRm} We have, $$\|\bar{T}_\varepsilon-\bar{T}^\varepsilon_0\|_{C^1(\mathbb{R}^m, \mathbb{R}^m)}\rightarrow 0, $$ $$\|\bar{T}^\varepsilon_0-\bar{T}_0\|_{C^1(\mathbb{R}^m, \mathbb{R}^m)}\rightarrow 0, $$ as $\varepsilon\rightarrow 0$. Moreover, we have, \begin{equation}\label{estimate-convergence-C0} \| \bar{T}_\varepsilon-\bar{T}^\varepsilon_0\|_{L^\infty(\mathbb{R}^m, \mathbb{R}^m)}\leq C\varepsilon|\log(\varepsilon)|, \end{equation} with $C$ independent of $\varepsilon$. \end{lem} \paragraph{\sl Proof. } Note that $\tilde{F}_\varepsilon\in C^{1, \theta_F}(X_\varepsilon^\alpha, L^2(Q))$, $\tilde{F}_0^\varepsilon, \tilde{F}_0\in C^{1, \theta_F}(X_0^\alpha, L^2_g(0, 1))$, see Lemma \ref{nonlinearity} item (b), and $\Phi_\varepsilon\in C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)$, $\Phi_0^\varepsilon, \Phi_0\in C^{1, \theta}(\mathbb{R}^m, X_0^\alpha)$ for certain small $\theta$, see Proposition \ref{FixedPoint-E^1Theta}.
Then, it is easy to show that $H_\varepsilon, H_0^\varepsilon, H_0\in C^{1, \theta}(\mathbb{R}^m, \mathbb{R}^m)$ for $\theta>0$ small enough and \begin{equation}\label{bound-C1theta} \|H_\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, \mathbb{R}^m)}, \|H_0^\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, \mathbb{R}^m)}, \|H_0\|_{C^{1, \theta}(\mathbb{R}^m, \mathbb{R}^m)}\leq \mathbf{M}, \end{equation} with $\mathbf{M}$ independent of $\varepsilon$. Moreover, by Lemma \ref{nonlinearity} item (e) we have that, $$\|\tilde{F}_\varepsilon E- E\tilde{F}_0^\varepsilon\|_{C^0(X_0^\alpha, X_\varepsilon)}=0,$$ and for $K=\{u_0=p_0+\Phi_0(p_0)\,\,\, \textrm{with}\,\,\, p_0\in[\varphi_1^0, ..., \varphi_m^0]\,\,\,\,\,\textrm{and}\,\,\,\|p_0\|_{X_0^\alpha}\leq 2R\}\subset X_0^\alpha$, $$\sup_{u_0\in K}\|\tilde{F}_0^\varepsilon(u_0) - \tilde{F}_0(u_0)\|_{X_0}\rightarrow 0,$$ as $\varepsilon\rightarrow 0$. Then, since we have $j_\varepsilon\rightarrow j_0$ and $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} \rightarrow \mathbf{P}_{\mathbf{m}}^{\mathbf{0}} $, see Remark 3.3, Lemma 3.7 and Lemma 5.4 from \cite{Arrieta-Santamaria-C0}, we have that \begin{equation}\label{convergence-C0} \|H_\varepsilon-H_0^\varepsilon\|_{C^0(\mathbb{R}^m, \mathbb{R}^m)}\rightarrow 0, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\|H_0^\varepsilon-H_0\|_{C^0(\mathbb{R}^m, \mathbb{R}^m)}\rightarrow 0. \end{equation} Hence, \eqref{bound-C1theta}, \eqref{convergence-C0} and the fact that the support is contained in $B_{R'}$ imply $$\|H_\varepsilon-H_0^\varepsilon\|_{C^{1, \theta'}(B_{R'}, \mathbb{R}^m)}\rightarrow 0, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\|H_0^\varepsilon-H_0\|_{C^{1, \theta'}(B_{R'}, \mathbb{R}^m)}\rightarrow 0, $$ as $\varepsilon\rightarrow 0$, for $\theta'<\theta$.
Here we use the compact embedding $C^{1, \theta}(B, \mathbb{R}^m)\hookrightarrow C^{1, \theta'}(B, \mathbb{R}^m)$ for all $\theta'<\theta$, the convergence \eqref{convergence-C0} and the boundedness of $H_\varepsilon, H_0^\varepsilon, H_0$ in $C^{1, \theta}(B, \mathbb{R}^m)$. In particular, we have this convergence in the $C^1$-topology. With this, we obtain the desired convergence, $$\|\bar{T}_\varepsilon-\bar{T}^\varepsilon_0\|_{C^1(\mathbb{R}^m, \mathbb{R}^m)}\rightarrow 0, $$ $$\|\bar{T}^\varepsilon_0-\bar{T}_0\|_{C^1(\mathbb{R}^m, \mathbb{R}^m)}\rightarrow 0. $$ Now, since systems \eqref{system-eps} and \eqref{system-*} satisfy hypotheses {\bf (H1)} and {\bf (H2')}, we can apply the results from Section \ref{previous} to obtain estimate \eqref{estimate-convergence-C0}. Indeed, $$\|\bar{T}^\varepsilon_0-\bar{T}_\varepsilon\|_{L^\infty(\mathbb{R}^m, \mathbb{R}^m)}=\sup_{z\in\mathbb{R}^m}|\bar{T}^\varepsilon_0(z)-\bar{T}_\varepsilon(z)|_{0,\alpha}=$$ $$\sup_{z\in\mathbb{R}^m}|z^\varepsilon_0(1)-z_\varepsilon(1)|_{0,\alpha}=\sup_{z\in\mathbb{R}^m}|j_0(p^\varepsilon_0(1))-j_\varepsilon(p_\varepsilon(1))|_{0,\alpha},$$ where $p_\varepsilon(t)$ and $p^\varepsilon_0(t)$ are the solutions of (\ref{equationp-eps-modified}) and (\ref{equationp-*-modified}) with $p_\varepsilon(0)=j_\varepsilon^{-1}(z)$, $p^\varepsilon_0(0)=j_0^{-1}(z)$, and $z_\varepsilon$, $z^\varepsilon_0$ the solutions of (\ref{equationonRm-eps}) and (\ref{equationonRm-*}) with $z_\varepsilon(0)=z$, $z^\varepsilon_0(0)=z$.
By Lemma 5.4 from \cite{Arrieta-Santamaria-C0}, and since $\kappa=1$, we obtain, $$|j_0(p^\varepsilon_0(1))-j_\varepsilon(p_\varepsilon(1))|_{0,\alpha}\leq 2\|Ep^\varepsilon_0(1)-p_\varepsilon(1)\|_{X_\varepsilon^\alpha}+2C_P\varepsilon\|p^\varepsilon_0(1)\|_{L^2_g(0, 1)},$$ with $C_P\sim (\lambda_m^0)^3$ a constant from the estimate of the distance of the spectral projections, $\|E\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}-\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(L^2_g(0, 1), X_\varepsilon^\alpha)}$, see Lemma 3.7 from \cite{Arrieta-Santamaria-C0}. Moreover, since $\|E\tilde{F}^\varepsilon_0- \tilde{F}_\varepsilon E\|_{L^\infty(X_0^\alpha, L^2(Q))}=0$ (see Lemma \ref{nonlinearity}, item (e)), applying Lemma 5.6 from \cite{Arrieta-Santamaria-C0} with $t=1$ and Proposition \ref{resolvente} we have $$\|Ep^\varepsilon_0(1)-p_\varepsilon(1)\|_{X_\varepsilon^\alpha}\leq C(\|E\Phi^\varepsilon_0-\Phi_\varepsilon\|_{L^\infty(\mathbb{R}^m, X_\varepsilon^\alpha)}+\varepsilon).$$ Then, $$\|\bar{T}^\varepsilon_0-\bar{T}_\varepsilon\|_{L^\infty(\mathbb{R}^m, \mathbb{R}^m)}=\sup_{z\in\mathbb{R}^m}|\bar{T}^\varepsilon_0(z)-\bar{T}_\varepsilon(z)|_{0,\alpha}\leq$$ \begin{equation}\label{Tconvergence} \leq C(\|E\Phi^\varepsilon_0-\Phi_\varepsilon\|_{L^\infty(\mathbb{R}^m, X_\varepsilon^\alpha)}+\varepsilon)\leq C\varepsilon|\log(\varepsilon)|, \end{equation} with $C>0$ independent of $\varepsilon$. The last inequality is obtained by applying the result on the distance of the inertial manifolds from Section \ref{previous} (see Theorem \ref{distaciavariedadesinerciales}). \begin{flushright}$\blacksquare$\end{flushright} \begin{re}\label{no-rate-convergence} Note that an estimate for the rate of convergence of $\|\bar{T}_0-\bar{T}_\varepsilon\|_{L^\infty(\mathbb{R}^m, \mathbb{R}^m)}$ and $\|\bar{T}^\varepsilon_0-\bar{T}_0\|_{L^\infty(\mathbb{R}^m, \mathbb{R}^m)}$ is not obtained in a straightforward way.
More precisely, the difficulty lies in analyzing the rate of convergence of $\|Eu_0\|_{X_\varepsilon^\alpha}\rightarrow\|u_0\|_{X^\alpha_0}$, see Lemma \ref{normaproyeccionextension} ii). \end{re} We now give an estimate for the distance of the time one maps of the dynamical systems generated by (\ref{equationon(01)}) and (\ref{equationonQ}). \begin{lem}\label{NonLinearSemigroup} Let $T_0$ and $T_\varepsilon$, $0<\varepsilon\leq\varepsilon_0$, be the time one maps corresponding to (\ref{equationon(01)}) and (\ref{equationonQ}), respectively. Then, for $R>0$ large enough, there exists a constant $C=C(R)$ such that for any $w_0\in L^2_g(0,1)$, with $\|w_0\|_{L^2_g(0,1)}\leq R$, we have, $$\|T_\varepsilon(Ew_0)-ET_0(w_0)\|_{H^1_{\bm\varepsilon}(Q)}\leq C\varepsilon|\log(\varepsilon)|.$$ \end{lem} \paragraph{\sl Proof. } We have denoted previously by $S_\varepsilon(t)$ and $S_0(t)$ the nonlinear semigroups generated by (\ref{equationonQ}) and (\ref{equationon(01)}) respectively, so that $T_\varepsilon=S_\varepsilon(1)$ and $T_0=S_0(1)$.
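Recall that the mild solutions of these problems satisfy the variation of constants formula; for instance, for $0<t\leq 1$ and $u\in L^2(Q)$,
$$S_\varepsilon(t)u=e^{-A_\varepsilon t}u+\int_0^t e^{-A_\varepsilon(t-s)}F_\varepsilon(S_\varepsilon(s)u)\,ds,$$
and analogously for $S_0(t)$, with $A_0$ and $F_0$ in place of $A_\varepsilon$ and $F_\varepsilon$. This is the identity we use repeatedly below.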
Hence, with the variation of constants formula, for $0<t\leq 1$, $$\| S_\varepsilon(t)(Ew_0)-E S_0(t)(w_0)\|_{H^1_{\bm\varepsilon}(Q)}\leq\|(e^{-A_\varepsilon t}E-Ee^{-A_0 t})w_0\|_{H^1_{\bm\varepsilon}(Q)}+$$ $$+\int_0^t \left\|e^{-A_\varepsilon(t-s)} {F}_\varepsilon(S_\varepsilon(s)Ew_0)-Ee^{-A_0(t-s)} {F}_0(S_0(s)w_0)\right\|_{H^1_{\bm\varepsilon}(Q)}ds\leq$$ $$\leq \|(e^{-A_\varepsilon t}E-Ee^{-A_0 t})w_0\|_{H^1_{\bm\varepsilon}(Q)}+$$ $$+\int_0^t \left\|\left(e^{-A_\varepsilon(t-s)}E- Ee^{-A_0(t-s)}\right) F_0(S_0(s)w_0)\right\|_{H^1_{\bm\varepsilon}(Q)}ds+$$ $$+\int_0^t \left\|e^{-A_\varepsilon(t-s)}\left( F_\varepsilon(ES_0(s)w_0)-F_0(S_0(s)w_0)\right)\right\|_{H^1_{\bm\varepsilon}(Q)}ds+$$ $$+ \int_0^t \left\|e^{-A_\varepsilon(t-s)}\left(F_\varepsilon(S_\varepsilon(s)Ew_0)-F_\varepsilon(ES_0(s)w_0)\right)\right\|_{H^1_{\bm\varepsilon}(Q)}ds.$$ But notice that, since both $F_\varepsilon$ and $F_0$ are Nemitskii operators of the same function $f:\mathbb{R}\to \mathbb{R}$, we have ${F}_\varepsilon(ES_0(s)w_0)={F}_0(S_0(s)w_0)$, and the third term is identically 0. Now, since hypothesis {\bf (H1)} is satisfied, applying Lemma 3.9 and Lemma 3.10 from \cite{Arrieta-Santamaria-C0}, Lemma \ref{normaproyeccionextension}, Proposition \ref{resolvente} and the Gronwall-Henry inequality, see \cite{Henry1} Section 7, for $t =1$, we obtain, $$\|T_\varepsilon(Ew_0)-ET_0(w_0)\|_{H^1_{\bm\varepsilon}(Q)}=\|S_\varepsilon(1)(Ew_0)-E S_0(1)(w_0)\|_{H^1_{\bm\varepsilon}(Q)}\leq C \varepsilon|\log(\varepsilon)|, $$ with $C>0$ independent of $\varepsilon$. \begin{flushright}$\blacksquare$\end{flushright} \par We now show that the time one maps are Lipschitz from $L^2(Q)$ to $H^1_{\bm\varepsilon}(Q)$, uniformly in $\varepsilon$.
\begin{lem}\label{timeonemap-lipschitz} There exists a constant $C>0$ independent of $\varepsilon$ so that, for $0\leq\varepsilon\leq \varepsilon_0$ and any $u_\varepsilon, w_\varepsilon\in L^2(Q)$, $$\|T_\varepsilon(u_\varepsilon)-T_\varepsilon(w_\varepsilon)\|_{H^1_{\bm\varepsilon}(Q)}\leq C\|u_\varepsilon-w_\varepsilon\|_{L^2(Q)}.$$ \end{lem} \paragraph{\sl Proof. } By the variation of constants formula, for $0<t\leq 1$, we have $$\|S_\varepsilon(t)u_\varepsilon-S_\varepsilon(t)w_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}\leq \| e^{-A_\varepsilon t} (u_\varepsilon-w_\varepsilon)\|_{H^1_{\bm\varepsilon}(Q)}+$$ $$+\int_0^t\left\|e^{-A_\varepsilon(t-s)}(F_\varepsilon(S_\varepsilon(s)u_\varepsilon)- F_\varepsilon(S_\varepsilon(s)w_\varepsilon))\right\|_{H^1_{\bm\varepsilon}(Q)}ds.$$ Applying Lemma 3.1 from \cite{Arrieta-Santamaria-C0} and Lemma \ref{nonlinearity-C1}, item (ii), $$\|S_\varepsilon(t)u_\varepsilon-S_\varepsilon(t)w_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}\leq Ce^{-\lambda_1^\varepsilon t}t^{-\frac{1}{2}}\|u_\varepsilon-w_\varepsilon\|_{L^2(Q)}+ $$ $$+CL_F e^{-\lambda_1^\varepsilon t}\int_0^t e^{\lambda_1^\varepsilon s}(t-s)^{-\frac{1}{2}}\|S_\varepsilon(s)u_\varepsilon-S_\varepsilon(s)w_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)} ds.$$ Applying the Gronwall inequality, for $0<t\leq 1$, we have $$\|S_\varepsilon(t)u_\varepsilon-S_\varepsilon(t)w_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}\leq C t^{-\frac{1}{2}}\|u_\varepsilon-w_\varepsilon\|_{L^2(Q)} e^{-\lambda_1^\varepsilon t},$$ with $C>0$ independent of $\varepsilon$. Then, for the time one map $T_\varepsilon=S_\varepsilon(1)$ we obtain $$\|T_\varepsilon(u_\varepsilon)-T_\varepsilon(w_\varepsilon)\|_{H^1_{\bm\varepsilon}(Q)}\leq C \|u_\varepsilon-w_\varepsilon\|_{L^2(Q)},$$ with $C>0$ independent of $\varepsilon$, which shows the result. \begin{flushright}$\blacksquare$\end{flushright} We proceed to prove the main result of this work.
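Before going into the details, let us outline the strategy: combining the lemmas above with the shadowing results of Appendix \ref{shadowing}, we will establish the chain of inequalities
$$dist_{H^1_{\bm\varepsilon}(Q)}(\mathcal{A}_0, \mathcal{A}_\varepsilon)\leq C\, dist_{\mathbb{R}^m}(\bar{\mathcal{A}}_0, \bar{\mathcal{A}}_\varepsilon)+C\varepsilon|\log(\varepsilon)|\leq C\|\bar{T}^\varepsilon_0-\bar{T}_\varepsilon\|_{L^\infty(\mathbb{R}^m, \mathbb{R}^m)}+C\varepsilon|\log(\varepsilon)|\leq C\varepsilon|\log(\varepsilon)|,$$
where $\bar{\mathcal{A}}_\varepsilon$ and $\bar{\mathcal{A}}_0$ denote the attractors of the projected systems in $\mathbb{R}^m$, and then rescale back to $H^1(Q_\varepsilon)$.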
\paragraph{Proof of Theorem \ref{maintheorem-thindomain}} We obtain now a rate of convergence of the attractors $\mathcal{A}_0$ and $\mathcal{A}_\varepsilon$ of the dynamical systems generated by (\ref{equationon(01)}) and (\ref{equationonQ}), respectively. We know that for any $u_0\in\mathcal{A}_0$ and any $u_\varepsilon\in\mathcal{A}_\varepsilon$ there exist $w_0\in\mathcal{A}_0$ and $w_\varepsilon\in\mathcal{A}_\varepsilon$ such that, $$u_0=T_0(w_0),\qquad\textrm{and}\quad u_\varepsilon=T_\varepsilon(w_\varepsilon),$$ with $T_0$ and $T_\varepsilon$ the time one maps corresponding to (\ref{equationon(01)}) and (\ref{equationonQ}). Moreover, as we have said before, for each $\varepsilon>0$ the attractor $\mathcal{A}_\varepsilon$ is contained in the inertial manifold $\mathcal{M}_\varepsilon$ and $\mathcal{A}_0$ is contained in the inertial manifolds $\mathcal{M}^\varepsilon_0$ and $\mathcal{M}_0$. Although the manifolds $\mathcal{M}_\varepsilon$, $\mathcal{M}^\varepsilon_0$ and $\mathcal{M}_0$ are close to each other, we can only provide explicit rates for the distance between $\mathcal{M}_\varepsilon$ and $\mathcal{M}^\varepsilon_0$ as $\varepsilon$ goes to zero. The Hausdorff distance of the attractors $\mathcal{A}_0$ and $\mathcal{A}_\varepsilon$ in $H^1_{\bm\varepsilon}(Q)$ is given by $$dist_{H^1_{\bm\varepsilon}(Q)}(\mathcal{A}_0, \mathcal{A}_\varepsilon)=\max\{\sup_{u_0\in\mathcal{A}_0}\inf_{u_\varepsilon\in\mathcal{A}_\varepsilon}\|Eu_0-u_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}, \sup_{u_\varepsilon\in\mathcal{A}_\varepsilon}\inf_{u_0\in\mathcal{A}_0}\|u_\varepsilon-E u_0\|_{H^1_{\bm\varepsilon}(Q)}\}.
$$ Then, we consider $w_\varepsilon\in\mathcal{A}_\varepsilon$, $0<\varepsilon\leq\varepsilon_0$, given by $w_\varepsilon=j_\varepsilon^{-1}(z_\varepsilon) +\Phi_\varepsilon(z_\varepsilon)$ and $w_0\in\mathcal{A}_0$, given by $w_0=j_0^{-1}(z_0) +\Phi^\varepsilon_0(z_0)$ with $z_\varepsilon\in\bar{\mathcal{A}}_\varepsilon$ and $z_0\in\bar{\mathcal{A}}_0,$ the ``projected'' attractors in $\mathbb{R}^m$ corresponding to (\ref{equationonRm-eps}) and (\ref{equationonRm-*}), respectively. We have $$\|Eu_0-u_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}=\|ET_0(w_0)-T_\varepsilon(w_\varepsilon)\|_{H^1_{\bm\varepsilon}(Q)}\leq$$ $$\leq\|ET_0(w_0)-T_\varepsilon(Ew_0)\|_{H^1_{\bm\varepsilon}(Q)}+\|T_\varepsilon(Ew_0)-T_\varepsilon(w_\varepsilon)\|_{H^1_{\bm\varepsilon}(Q)}.$$ Applying Lemma \ref{NonLinearSemigroup} and Lemma \ref{timeonemap-lipschitz}, we have $$\|Eu_0-u_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}\leq C \varepsilon|\log(\varepsilon)|+ C\|Ew_0-w_\varepsilon\|_{X_\varepsilon^\alpha}.$$ So, we need to estimate the norm $\|Ew_0-w_\varepsilon\|_{X_\varepsilon^\alpha}$, where, $$w_\varepsilon=j_\varepsilon^{-1}(z_\varepsilon)+\Phi_\varepsilon(z_\varepsilon),\qquad z_\varepsilon\in\bar{\mathcal{A}}_\varepsilon,$$ and $$w_0=j_0^{-1}(z_0)+\Phi^\varepsilon_0(z_0),\qquad z_0\in\bar{\mathcal{A}}_0$$ with $\bar{\mathcal{A}}_\varepsilon$ and $\bar{\mathcal{A}}_0$ the attractors corresponding to (\ref{equationonRm-eps}) and (\ref{equationonRm-*}).
Hence, since $j_0^{-1}(z_0)=\sum_{i=1}^mz_i^0\psi_i^0$ and $j_\varepsilon^{-1}(z_\varepsilon)=\sum_{i=1}^mz_i^\varepsilon\psi_i^\varepsilon$, $$\|E w_0-w_\varepsilon\|_{X_\varepsilon^\alpha}\leq\|Ej_0^{-1}(z_0)-j_\varepsilon^{-1}(z_\varepsilon)\|_{X_\varepsilon^\alpha} + \|E\Phi^\varepsilon_0(z_0)-\Phi_\varepsilon(z_\varepsilon)\|_{X_\varepsilon^\alpha}\leq$$ $$\leq \|\sum_{i=1}^m(z_i^0-z_i^\varepsilon)E\psi_i^0\|_{X_\varepsilon^\alpha}+\|\sum_{i=1}^mz_i^\varepsilon(E\psi_i^0-\psi_i^\varepsilon)\|_{X_\varepsilon^\alpha}+$$ $$+\|E\Phi^\varepsilon_0(z_0)-E\Phi^\varepsilon_0(z_\varepsilon)\|_{X_\varepsilon^\alpha}+ \|E\Phi^\varepsilon_0(z_\varepsilon)-\Phi_\varepsilon(z_\varepsilon)\|_{X_\varepsilon^\alpha}\leq$$ $$\leq 2|z_0-z_\varepsilon|_{0,\alpha} + \sup_{z_\varepsilon\in\bar{\mathcal{A}}_\varepsilon} |z_\varepsilon|\|E\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}-\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(L^2_g(0, 1), X_\varepsilon^\alpha)}+\|E\Phi^\varepsilon_0-\Phi_\varepsilon\|_{L^\infty(\mathbb{R}^m, X_\varepsilon^\alpha)}.$$ In the last inequality we have applied the estimate of the norm of the operator $E$, see \eqref{Ealpha}. Since $z_0\in\bar{\mathcal{A}}_0$ and $z_\varepsilon\in\bar{\mathcal{A}}_\varepsilon$, we obtain $$\|E w_0-w_\varepsilon\|_{X_\varepsilon^\alpha}\leq 2|z_0-z_\varepsilon|_{0,\alpha}+|z_\varepsilon|\|E\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}-\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(L^2_g(0, 1), X_\varepsilon^\alpha)}+$$ $$+\|E\Phi^\varepsilon_0-\Phi_\varepsilon\|_{L^\infty(\mathbb{R}^m, X_\varepsilon^\alpha)}=I_1+I_2+I_3.$$ To estimate $I_2$, note that we have studied the convergence of $\|E\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}-\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(L^2_g(0, 1), X_\varepsilon^\alpha)}$ in terms of the distance of the resolvent operators, see \eqref{convergence-of-projection} or \cite[Lemma 3.7]{Arrieta-Santamaria-C0}.
Then, in our case, we have, $$I_2\leq C\varepsilon.$$ By Theorem \ref{distaciavariedadesinerciales}, $$I_3\leq C\varepsilon|\log(\varepsilon)|.$$ Hence, putting everything together, $$\|E w_0-w_\varepsilon\|_{X_\varepsilon^\alpha}\leq 4e^2|z_0-z_\varepsilon|_{0,\alpha}+ C \varepsilon|\log(\varepsilon)|,$$ with $C$ independent of $\varepsilon$. Then, $$\sup_{w_0\in\mathcal{A}_0}\inf_{ w_\varepsilon\in\mathcal{A}_\varepsilon}\|E w_0-w_\varepsilon\|_{X_\varepsilon^\alpha}\leq 4e^2\sup_{z_0\in\bar{\mathcal{A}}_0}\inf_{ z_\varepsilon\in\bar{\mathcal{A}}_\varepsilon}|z_0-z_\varepsilon|_{0,\alpha}+ C \varepsilon|\log(\varepsilon)|.$$ Hence, $$dist_{H^1_{\bm\varepsilon}(Q)}(\mathcal{A}_0, \mathcal{A}_\varepsilon)\leq 4e^2 dist_{\mathbb{R}^m}(\bar{\mathcal{A}}_0, \bar{\mathcal{A}}_\varepsilon)+C \varepsilon|\log(\varepsilon)|.$$ To estimate $dist_{\mathbb{R}^m}(\bar{\mathcal{A}}_0, \bar{\mathcal{A}}_\varepsilon)$, we need to apply techniques of Shadowing Theory described in Appendix \ref{shadowing}. First, by Proposition \ref{MS-Rm}, the time one map of the system given by the ordinary differential equation \eqref{equationonRm-0H} is a Morse-Smale map. Moreover, by Lemma \ref{distanciasemigruposRm}, we can take $\varepsilon$ small enough so that the time one maps corresponding to \eqref{equationonRm-epsH} and \eqref{equationonRm-*H}, $\bar{T}_\varepsilon$ and $\bar{T}_0^\varepsilon$, respectively, belong to a $C^1$ neighborhood of $\bar{T}_0$. Then, by Corollary \ref{MS-dist}, $$dist_{\mathbb{R}^m}(\bar{\mathcal{A}}_0, \bar{\mathcal{A}}_\varepsilon)\leq L\|\bar{T}^\varepsilon_0-\bar{T}_\varepsilon\|_{L^\infty(\mathbb{R}^m, \mathbb{R}^m)},$$ with $L>0$ independent of $\varepsilon$.
Hence, using the estimate obtained in Lemma \ref{distanciasemigruposRm}, $$dist_{\mathbb{R}^m}(\bar{\mathcal{A}}_0, \bar{\mathcal{A}}_\varepsilon)\leq C\varepsilon|\log(\varepsilon)|.$$ Putting everything together, we get $$dist_{H^1_{\bm\varepsilon}(Q)}(\mathcal{A}_0, \mathcal{A}_\varepsilon)\leq C \varepsilon|\log(\varepsilon)|.$$ Finally, applying identity \eqref{norm-relations2}, we have, $$dist_{H^1(Q_\varepsilon)}(\mathcal{A}_0, \mathcal{A}_\varepsilon)=\varepsilon^{\frac{d-1}{2}}dist_{H^1_{\bm\varepsilon}(Q)}(\mathcal{A}_0, \mathcal{A}_\varepsilon)\leq C \varepsilon^{\frac{d+1}{2}}|\log(\varepsilon)|,$$ with $C$ independent of $\varepsilon$. This proves Theorem \ref{maintheorem-thindomain}. \begin{flushright}$\blacksquare$\end{flushright} \appendix \section{Appendix: Proof of Proposition \ref{resolvente}} \label{proof-resolvente} We provide in this appendix the proof of the estimates of the resolvent operators contained in Proposition \ref{resolvente}. \par \paragraph{\sl Proof. } The proof of this result follows ideas similar to those in the proof of Proposition A.8 from \cite{dumbel1}.
Recall that $$Q_\varepsilon =\{(x, \varepsilon\mathbf{y})\in\mathbb{R}^d: (x, \mathbf{y})\in Q\},$$ where $$Q=\{(x, \mathbf{y})\in\mathbb{R}^d: 0\leq x\leq1,\; \; \mathbf{y}\in\Gamma^1_x\},$$ and $$H^1_{\bm\varepsilon}(Q):=(H^1(Q), \|\cdot\|_{H^1_{\bm\varepsilon}(Q)}),$$ with the norm $$\|u\|_{H^1_{\bm\varepsilon}(Q)}:=\left(\int_Q(|\nabla_xu|^2+\frac{1}{\varepsilon^2}|\nabla_{\mathbf{y}}u|^2+|u|^2)dxd\mathbf{y}\right)^{1/2}.$$ So, by the change of variable theorem, $$\|u\|_{L^2(Q_\varepsilon)}=\varepsilon^{\frac{d-1}{2}}\|\mathbf{i}_{\bm\varepsilon} u\|_{L^2(Q)},$$ and $$\|u\|_{H^1(Q_\varepsilon)}=\varepsilon^{\frac{d-1}{2}}\|\mathbf{i}_{\bm\varepsilon} u\|_{H^1_{\bm\varepsilon}(Q)}.$$ Hence, proving this proposition is equivalent to proving the estimate $$\|w_\varepsilon-E_\varepsilon v_\varepsilon\|_{H^1(Q_\varepsilon)}\leq C\varepsilon\|f_\varepsilon\|_{L^2(Q_\varepsilon)},$$ where $w_\varepsilon$ and $v_\varepsilon$ are the solutions of the following linear problems, respectively, \begin{equation} \left\{ \begin{array}{c l r} -\Delta w_\varepsilon+\mu w_\varepsilon\;&\; = f_\varepsilon, \;&\;\textrm{in}\quad Q_\varepsilon\\ \frac{\partial w_\varepsilon}{\partial\nu_\varepsilon}\;&\;=0\quad&\textrm{on}\quad \partial Q_\varepsilon, \end{array} \right. \end{equation} and \begin{equation} \left\{ \begin{array}{r l r} -\frac{1}{g}(g {v_\varepsilon}_x)_x + \mu v_\varepsilon\;&\; = M_\varepsilon f_\varepsilon, \;&\;\textrm{in}\quad (0, 1)\\ {v_\varepsilon}_x(0)\;&\;=0,\;&\;{v_\varepsilon}_x(1)=0, \end{array} \right. \end{equation} with $f_\varepsilon\in L^2(Q_\varepsilon)$. Observe that $u_\varepsilon(x, \mathbf{y})=w_\varepsilon(x, \varepsilon \mathbf{y})$.
It is known that the minima \begin{equation}\label{minimoQepsilon} \lambda_\varepsilon:=\displaystyle\min_{\varphi\in H^1(Q_\varepsilon)}\left\{\frac{1}{2}\int_{Q_\varepsilon} (|\nabla\varphi|^2 + \mu|\varphi|^2)ds-\int_{Q_\varepsilon}f_\varepsilon\varphi ds\right\}, \end{equation} \begin{equation}\label{minimo01} \tau_\varepsilon:=\displaystyle\min_{\varphi\in H_g^1(0,1)}\left\{\frac{1}{2}\int_0^1(g|\varphi'|^2+ g\mu|\varphi|^2)dx-\int_0^1gM_\varepsilon f_\varepsilon\varphi dx\right\}, \end{equation} with $s=(x, \mathbf{y})\in Q_\varepsilon$, are unique and are attained at the solutions $w_\varepsilon$ and $v_\varepsilon$. We want to compare both solutions $w_\varepsilon$ and $v_\varepsilon$. We start by taking the function $v_\varepsilon$ as a test function in (\ref{minimoQepsilon}). We have, $$\lambda_\varepsilon\leq\frac{1}{2}\int_{Q_\varepsilon}(|\nabla v_\varepsilon|^2+\mu|v_\varepsilon|^2)ds - \int_{Q_\varepsilon}f_\varepsilon v_\varepsilon ds=$$ $$=\frac{1}{2}\int_0^1\int_{\Gamma_x^\varepsilon}(|{v_\varepsilon}_x|^2+\mu|v_\varepsilon|^2)d\mathbf{y}dx-\int_0^1\int_{\Gamma_x^\varepsilon}f_\varepsilon d\mathbf{y}v_\varepsilon dx=$$ $$=\frac{1}{2}\int_0^1|\Gamma_x^\varepsilon|(|{v_\varepsilon}_x|^2+\mu|v_\varepsilon|^2)dx-\int_0^1|\Gamma_x^\varepsilon|(M_\varepsilon f_\varepsilon)(x)v_\varepsilon dx=$$ $$=\varepsilon^{d-1}\left(\frac{1}{2}\int_0^1g(x)(|{v_\varepsilon}_x|^2+\mu|v_\varepsilon|^2)dx-\int_0^1g(x)(M_\varepsilon f_\varepsilon)(x)v_\varepsilon dx\right)=\varepsilon^{d-1}\tau_\varepsilon.$$ That is, we have obtained the estimate, $$\lambda_\varepsilon\leq\varepsilon^{d-1}\tau_\varepsilon.$$ To look for a lower bound we proceed as follows, $$\lambda_\varepsilon=\frac{1}{2}\int_{Q_\varepsilon}(|\nabla w_\varepsilon|^2+\mu|w_\varepsilon|^2)ds - \int_{Q_\varepsilon}f_\varepsilon w_\varepsilon ds=$$ $$=\frac{1}{2}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon + \nabla v_\varepsilon|^2+\mu
|w_\varepsilon-v_\varepsilon + v_\varepsilon|^2)ds - \int_{Q_\varepsilon }f_\varepsilon(w_\varepsilon-v_\varepsilon + v_\varepsilon) ds=$$ $$=\frac{1}{2}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon|^2+|\nabla v_\varepsilon|^2+2(\nabla w_\varepsilon- \nabla v_\varepsilon)\nabla v_\varepsilon)ds +$$ $$+ \frac{1}{2}\int_{Q_\varepsilon}\mu(|w_\varepsilon- v_\varepsilon|^2 + |v_\varepsilon|^2 + 2(w_\varepsilon-v_\varepsilon)v_\varepsilon)ds-\int_{Q_\varepsilon}f_\varepsilon(w_\varepsilon-v_\varepsilon)ds-\int_{Q_\varepsilon}f_\varepsilon v_\varepsilon ds.$$ From above, we know that $\frac{1}{2}\int_{Q_\varepsilon}(|\nabla v_\varepsilon|^2+\mu|v_\varepsilon|^2)ds-\int_{Q_\varepsilon}f_\varepsilon v_\varepsilon ds=\varepsilon^{d-1}\tau_\varepsilon$, so $$\lambda_\varepsilon=\frac{1}{2}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon|^2+2(\nabla w_\varepsilon- \nabla v_\varepsilon)\nabla v_\varepsilon)ds + \frac{1}{2}\int_{Q_\varepsilon}\mu(|w_\varepsilon- v_\varepsilon|^2 + 2(w_\varepsilon-v_\varepsilon)v_\varepsilon)ds$$ $$-\int_{Q_\varepsilon}f_\varepsilon(w_\varepsilon-v_\varepsilon)ds + \varepsilon^{d-1}\tau_\varepsilon.$$ To analyze this, we rewrite the last equality as $$\lambda_\varepsilon= \frac{1}{2}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon|^2 + \mu|w_\varepsilon -v_\varepsilon|^2)ds+I_1+I_2-I_3+\varepsilon^{d-1}\tau_\varepsilon,$$ with, $$I_1:=\int_{Q_\varepsilon}(\nabla w_\varepsilon - \nabla v_\varepsilon)\nabla v_\varepsilon ds,\qquad I_2:=\int_{Q_\varepsilon}\mu(w_\varepsilon-v_\varepsilon)v_\varepsilon ds,$$ and $$I_3:= \int_{Q_\varepsilon}f_\varepsilon(w_\varepsilon-v_\varepsilon)ds.$$ Analyzing each term in detail, we observe the following. $$I_1=\int_{Q_\varepsilon}(\nabla w_\varepsilon - \nabla v_\varepsilon)\nabla v_\varepsilon ds=\int_{Q_\varepsilon}\left(\frac{\partial w_\varepsilon}{\partial x}- v'_\varepsilon\right)v'_\varepsilon ds =$$
$$\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-v_\varepsilon'\right)v'_\varepsilon ds=\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-(M_\varepsilon w_\varepsilon)'\right)v'_\varepsilon ds+\int_{Q_\varepsilon}\left((M_\varepsilon w_\varepsilon)'-v'_\varepsilon\right)v'_\varepsilon ds=$$ $$=\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-(M_\varepsilon w_\varepsilon)'\right)v'_\varepsilon ds+\varepsilon^{d-1}\int_0^1g(x)(M_\varepsilon w_\varepsilon-v_\varepsilon)'v'_\varepsilon dx.$$ Since $v_\varepsilon=v_\varepsilon(x)$, we have, $$I_2=\int_{Q_\varepsilon}\mu(w_\varepsilon-v_\varepsilon)v_\varepsilon ds=\varepsilon^{d-1}\int_0^1\mu g(x)(M_\varepsilon w_\varepsilon-v_\varepsilon)v_\varepsilon dx,$$ and $$I_3=\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(w_\varepsilon-v_\varepsilon)ds+\int_{Q_\varepsilon}M_\varepsilon f_\varepsilon(w_\varepsilon-v_\varepsilon)ds=$$ $$=\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(w_\varepsilon-v_\varepsilon)ds+\int_{Q_\varepsilon}M_\varepsilon f_\varepsilon(M_\varepsilon w_\varepsilon-v_\varepsilon)ds=$$ $$=\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(w_\varepsilon-v_\varepsilon)ds+ \varepsilon^{d-1}\int_0^1g(x) M_\varepsilon (f_\varepsilon)(M_\varepsilon w_\varepsilon-v_\varepsilon)dx.$$ That is, $$I_1=\tilde{I}_1+\varepsilon^{d-1}\int_0^1g(x)\left(M_\varepsilon w_\varepsilon-v_\varepsilon\right)'v'_\varepsilon dx,\qquad I_2=\varepsilon^{d-1}\int_0^1 \mu g(x)(M_\varepsilon w_\varepsilon-v_\varepsilon)v_\varepsilon dx,$$ and $$I_3= \tilde{I}_3+\varepsilon^{d-1}\int_0^1g(x)(M_\varepsilon w_\varepsilon-v_\varepsilon)M_\varepsilon f_\varepsilon dx,$$ where $$\tilde{I}_1=\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-\left(M_\varepsilon w_\varepsilon\right)'\right)v'_\varepsilon
ds,\qquad\textrm{and}\qquad\tilde{I}_3=\int_{Q_\varepsilon}\left(f_\varepsilon-M_\varepsilon f_\varepsilon\right)(w_\varepsilon-v_\varepsilon)ds.$$ We know that, $$\int_0^1\left[g(x)(M_\varepsilon w_\varepsilon-v_\varepsilon)'v'_\varepsilon+ \mu g(x)(M_\varepsilon w_\varepsilon-v_\varepsilon)v_\varepsilon\right]dx=\int_0^1g(x)(M_\varepsilon w_\varepsilon-v_\varepsilon)M_\varepsilon f_\varepsilon dx,$$ then, $$I_1 + I_2 - I_3=\tilde{I}_1-\tilde{I}_3.$$ So, we only need to estimate $\tilde{I}_1$ and $\tilde{I}_3$. We start with $\tilde{I}_1$. $$\tilde{I}_1=\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-\left(M_\varepsilon w_\varepsilon\right)'\right)v'_\varepsilon ds.$$ Then, we first estimate $M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-\left(M_\varepsilon w_\varepsilon\right)'$. For that, we study $(M_\varepsilon w_\varepsilon)'$. $$(M_\varepsilon w_\varepsilon)'=\frac{d}{dx}\left(\frac{1}{|\Gamma_x^\varepsilon|}\int_{\Gamma_x^\varepsilon}w_\varepsilon(x, \mathbf{y})d\mathbf{y}\right),$$ and by the Change of Variable Theorem with $\mathbf{y}=\varepsilon \mathbf{L}_x(z)$, see (\ref{difeomorfismo}), and $z\in B(0,1)$ the unit ball in $\mathbb{R}^{d-1}$, we have $$\frac{1}{|\Gamma_x^\varepsilon|}\int_{\Gamma_x^\varepsilon}w_\varepsilon(x, \mathbf{y})d\mathbf{y}=\int_{B(0,1)}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|} dz,$$ where $J_{\mathbf{L}_x(z)}$ is the Jacobian of $\mathbf{L}_x$. 
So, $$ (M_\varepsilon w_\varepsilon)'=\frac{d}{dx}\left(\int_{B(0, 1)}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|} dz\right)=$$ $$=\int_{B(0,1)}\frac{\partial w_\varepsilon}{\partial x} (x, \varepsilon \mathbf{L}_x(z))\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}dz + \int_{B(0,1)}\nabla_{\mathbf{y}}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\varepsilon\frac{\partial}{\partial x}(\mathbf{L}_x(z))\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}dz+$$ $$+\int_{B(0,1)}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\frac{\partial}{\partial x}\left(\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}\right)dz.$$ To estimate the right-hand side of the above equality, we study each integral separately. We begin with the first one. Undoing the change of variable $\mathbf{y}=\varepsilon \mathbf{L}_x(z)$, $$\int_{B(0,1)}\frac{\partial w_\varepsilon}{\partial x} (x, \varepsilon \mathbf{L}_x(z))\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}dz=\frac{1}{|\Gamma^\varepsilon_x|}\int_{\Gamma_x^\varepsilon}\frac{\partial w_\varepsilon}{\partial x}(x, \mathbf{y})d\mathbf{y}= M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}.$$ For the second integral we use $\left|\frac{\partial \mathbf{L}_x(z)}{\partial x}\right|\leq C$, so $$\left|\int_{B(0,1)}\nabla_{\mathbf{y}}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\varepsilon\frac{\partial}{\partial x}(\mathbf{L}_x(z))\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}dz\right|\leq C\varepsilon\int_{B(0,1)}|\nabla_{\mathbf{y}}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))|\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}dz,$$ and, undoing again the change of variable, we obtain, $$\left|\int_{B(0,1)}\nabla_{\mathbf{y}}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\varepsilon\frac{\partial}{\partial x}(\mathbf{L}_x(z))\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}dz\right|\leq C\frac{\varepsilon}{|\Gamma_x^\varepsilon|}\int_{\Gamma_x^\varepsilon}|\nabla_{\mathbf{y}}w_\varepsilon(x, \mathbf{y})|d\mathbf{y}.$$ We estimate the last term as follows, $$\int_{B(0,1)}w_\varepsilon(x,
\varepsilon \mathbf{L}_x(z))\frac{\partial}{\partial x}\left(\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}\right)dz=$$ $$\int_{B(0,1)}\left(w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))- (M_\varepsilon w_\varepsilon)(x)\right)\frac{\partial(J_{\mathbf{L}_x(z)}/|\Gamma_x^1|)}{\partial x}(z) dz +$$ $$+ (M_\varepsilon w_\varepsilon)(x)\int_{B(0,1)}\frac{\partial(J_{\mathbf{L}_x(z)}/|\Gamma_x^1|)}{\partial x}(z) dz. $$ Since $$\int_{B(0,1)}\frac{\partial (J_{\mathbf{L}_x(z)}/|\Gamma_x^1|)}{\partial x}(z) dz=\frac{d}{dx}\left(\frac{1}{|\Gamma_x^1|}\underbrace{\int_{B(0,1)}J_{\mathbf{L}_x(z)}(z)dz}_{|\Gamma_x^1|}\right)=0,$$ we have $$\int_{B(0,1)}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\frac{\partial}{\partial x}\left(\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}\right)dz=$$ $$=\int_{B(0,1)}\left(w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))- (M_\varepsilon w_\varepsilon)(x)\right)\frac{\partial(J_{\mathbf{L}_x(z)}/|\Gamma_x^1|)}{\partial x}(z) dz.$$ As before, undoing the change of variable and taking into account that $\left|\frac{\partial(J_{\mathbf{L}_x(z)}/|\Gamma_x^1|)}{\partial x}\right|\leq C$, we obtain $$\left|\int_{B(0,1)}w_\varepsilon(x, \varepsilon \mathbf{L}_x(z))\frac{\partial}{\partial x}\left(\frac{J_{\mathbf{L}_x(z)}}{|\Gamma_x^1|}\right)dz\right|\leq C\frac{1}{|\Gamma_x^\varepsilon|}\int_{\Gamma_x^\varepsilon}|w_\varepsilon(x, \mathbf{y})- M_\varepsilon w_\varepsilon(x)|d\mathbf{y}.$$ Then, putting together the three estimates, we have $$\left|M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-(M_\varepsilon w_\varepsilon)'\right|\leq C\frac{\varepsilon}{|\Gamma_x^\varepsilon|}\int_{\Gamma_x^\varepsilon}|\nabla_{\mathbf{y}}w_\varepsilon(x, \mathbf{y})|d\mathbf{y} + C\frac{1}{|\Gamma_x^\varepsilon|}\int_{\Gamma_x^\varepsilon}|w_\varepsilon(x, \mathbf{y})- M_\varepsilon w_\varepsilon(x)|d\mathbf{y}.$$ So, $$|\tilde{I}_1|=\left|\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-\frac{\partial M_\varepsilon 
w_\varepsilon}{\partial x}\right)\frac{\partial v_\varepsilon}{\partial x}\right|\leq \int_0^1C\varepsilon\int_{\Gamma_x^\varepsilon}|\nabla_{\mathbf{y}}w_\varepsilon||v'_\varepsilon| d\mathbf{y}dx+$$ $$+\int_0^1C\int_{\Gamma_x^\varepsilon}|w_\varepsilon- M_\varepsilon w_\varepsilon||v'_\varepsilon| d\mathbf{y}dx.$$ Applying the H\"{o}lder inequality, $|\tilde{I}_1|$ can be estimated as follows, $$\left|\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-\frac{\partial M_\varepsilon w_\varepsilon}{\partial x}\right)\frac{\partial v_\varepsilon}{\partial x}\right|\leq C\varepsilon\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}\|v'_\varepsilon\|_{L^2(Q_\varepsilon)}+ $$ $$+C\|w_\varepsilon-E_\varepsilon M_\varepsilon w_\varepsilon\|_{L^2(Q_\varepsilon)}\|v'_\varepsilon\|_{L^2(Q_\varepsilon)}.$$ By Lemma \ref{normaproyeccionextension}, $$\|w_\varepsilon-E_\varepsilon M_\varepsilon w_\varepsilon\|_{L^2(Q_\varepsilon)}\leq \sqrt{\beta}\varepsilon \|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)},$$ so $$\left|\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}- (M_\varepsilon w_\varepsilon)'\right) v'_\varepsilon ds\right|\leq C \varepsilon\|v'_\varepsilon\|_{L^2(Q_\varepsilon)}\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}.$$ To estimate the norm $\|v'_\varepsilon\|_{L^2(Q_\varepsilon)}$ we proceed as follows. We know that $v_\varepsilon$ is the solution of \begin{equation} \left\{ \begin{array}{r l r} -\frac{1}{g}(g {v_\varepsilon}_x)_x + \mu v_\varepsilon\;&\; = M_\varepsilon f_\varepsilon, \;&\;\textrm{in}\quad (0, 1)\\ {v_\varepsilon}_x(0)\;&\;=0,\;&\;{v_\varepsilon}_x(1)=0. \end{array} \right. 
\end{equation} Then, for $x\in(0,1)$, $v_\varepsilon$ satisfies $$-(gv'_\varepsilon)'+g\mu v_\varepsilon=gM_\varepsilon f_\varepsilon.$$ If we multiply by $v_\varepsilon$ and integrate by parts, we obtain $$\int_0^1g(v'_\varepsilon)^2dx+\mu\int_0^1gv_\varepsilon^2dx = \int_0^1(M_\varepsilon f_\varepsilon) gv_\varepsilon dx \leq$$ $$\stackrel{\textrm{H\"{o}lder ineq.}}{\leq}\left(\int_0^1(M_\varepsilon f_\varepsilon)^2dx\right)^{\frac{1}{2}}\left(\int_0^1(gv_\varepsilon)^2dx\right)^{\frac{1}{2}}\leq$$ $$\leq \frac{1}{4\delta}\int_0^1(M_\varepsilon f_\varepsilon)^2dx+\delta\int_0^1(gv_\varepsilon)^2dx.$$ Hence, since $g$ is bounded above and below by positive constants, choosing $\delta>0$ small enough we get $$\int_0^1g(x)(v'_\varepsilon)^2dx+ \int_0^1gv_\varepsilon^2dx\leq C\|M_\varepsilon f_\varepsilon\|^2_{L^2(0,1)}.$$ Moreover, $$\|v'_\varepsilon\|^2_{L^2(Q_\varepsilon)}=\int_{Q_\varepsilon}(v'_\varepsilon)^2ds=\int_0^1\int_{\Gamma_x^\varepsilon}(v'_\varepsilon)^2d\mathbf{y} dx=\varepsilon^{d-1}\int_0^1 g(x)(v'_\varepsilon)^2dx\leq$$ $$\leq \varepsilon^{d-1}C\|M_\varepsilon f_\varepsilon\|^2_{L^2(0,1)}.$$ Then, $$\left|\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-(M_\varepsilon w_\varepsilon)'\right) v'_\varepsilon ds\right|\leq C \varepsilon\|v'_\varepsilon\|_{L^2(Q_\varepsilon)}\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}\leq $$ $$\leq C\varepsilon\varepsilon^{\frac{d-1}{2}}\|M_\varepsilon f_\varepsilon\|_{L^2(0,1)}\|\nabla_{\mathbf{y}} w_\varepsilon\|_{L^2(Q_\varepsilon)}=$$ $$=C\varepsilon^{\frac{d+1}{2}}\|M_\varepsilon f_\varepsilon\|_{L^2(0,1)}\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}\leq C\varepsilon^{d+1}\|M_\varepsilon f_\varepsilon\|^2_{L^2(0,1)} + \frac{1}{4}\|\nabla_{\mathbf{y}}w_\varepsilon\|^2_{L^2(Q_\varepsilon)}.$$ Note that $$\|\nabla_{\mathbf{y}}w_\varepsilon\|^2_{L^2(Q_\varepsilon)}\leq\|\nabla 
w_\varepsilon-\nabla v_\varepsilon\|^2_{L^2(Q_\varepsilon)},$$ so, $$\left|\int_{Q_\varepsilon}\left(M_\varepsilon\frac{\partial w_\varepsilon}{\partial x}-(M_\varepsilon w_\varepsilon)'\right)v'_\varepsilon ds\right|\leq C\varepsilon^{d+1}\|M_\varepsilon f_\varepsilon\|^2_{L^2(0,1)}+\frac{1}{4}\|\nabla w_\varepsilon-\nabla v_\varepsilon\|^2_{L^2(Q_\varepsilon)}.$$ And $\tilde{I}_3$ can be estimated as follows, $$\tilde{I}_3=\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(w_\varepsilon-v_\varepsilon)ds=\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(w_\varepsilon-M_\varepsilon w_\varepsilon)ds + $$ $$+\underbrace{\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(M_\varepsilon w_\varepsilon-v_\varepsilon)ds}_{=0},$$ by the H\"{o}lder inequality, $$|\tilde{I}_3|=\left|\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(w_\varepsilon-M_\varepsilon w_\varepsilon)ds\right|\leq\|f_\varepsilon-M_\varepsilon f_\varepsilon\|_{L^2(Q_\varepsilon)}\|w_\varepsilon-M_\varepsilon w_\varepsilon\|_{L^2(Q_\varepsilon)}.$$ Again, by Lemma \ref{normaproyeccionextension}, $$\|w_\varepsilon-M_\varepsilon w_\varepsilon\|^2_{L^2(Q_\varepsilon)}\leq\beta\varepsilon^2\|\nabla_{\mathbf{y}} w_\varepsilon\|^2_{L^2(Q_\varepsilon)},$$ so, $$|\tilde{I}_3|=\left|\int_{Q_\varepsilon}(f_\varepsilon-M_\varepsilon f_\varepsilon)(w_\varepsilon-M_\varepsilon w_\varepsilon)ds\right|\leq\|f_\varepsilon-M_\varepsilon f_\varepsilon\|_{L^2(Q_\varepsilon)}\sqrt{\beta}\varepsilon\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}.$$ If we join all the estimates, then $$\lambda_\varepsilon=\frac{1}{2}\int_{Q_\varepsilon}|\nabla w_\varepsilon-\nabla v_\varepsilon|^2 + |w_\varepsilon-v_\varepsilon|^2+\varepsilon^{d-1}\tau_\varepsilon+\theta_\varepsilon,$$ where, $$|\theta_\varepsilon|=|\tilde{I}_1 - \tilde{I}_3|\leq C\varepsilon^{d+1}\|M_\varepsilon f_\varepsilon\|^2_{L^2(0,1)}+\frac{1}{4}\|\nabla w_\varepsilon-\nabla 
v_\varepsilon\|^2_{L^2(Q_\varepsilon)}+$$ $$+\|f_\varepsilon-M_\varepsilon f_\varepsilon\|_{L^2(Q_\varepsilon)}\sqrt{\beta}\varepsilon\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}.$$ With this, $$\lambda_\varepsilon\geq \frac{1}{2}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon|^2 + |w_\varepsilon-v_\varepsilon|^2)ds+\varepsilon^{d-1}\tau_\varepsilon-C\varepsilon^{d+1}\|M_\varepsilon f_\varepsilon\|^2_{L^2(0,1)}$$ $$-\frac{1}{4}\|\nabla w_\varepsilon-\nabla v_\varepsilon\|^2_{L^2(Q_\varepsilon)}-\|f_\varepsilon-M_\varepsilon f_\varepsilon\|_{L^2(Q_\varepsilon)}\sqrt{\beta}\varepsilon\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}.$$ By Lemma \ref{normaproyeccionextension} $\|M_\varepsilon f_\varepsilon\|_{L^2_g(0,1)}\leq \varepsilon^{\frac{1-d}{2}}\|f_\varepsilon\|_{L^2(Q_\varepsilon)}$, then $$\lambda_\varepsilon\geq \frac{1}{4}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon|^2 + |w_\varepsilon-v_\varepsilon|^2)ds+$$ $$+\varepsilon^{d-1}\tau_\varepsilon-C\varepsilon^{2}\|f_\varepsilon\|^2_{L^2(Q_\varepsilon)}-\|f_\varepsilon-M_\varepsilon f_\varepsilon\|_{L^2(Q_\varepsilon)}\sqrt{\beta}\varepsilon\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}.$$ If we put everything together, $$\varepsilon^{d-1}\tau_\varepsilon\geq\lambda_\varepsilon\geq\frac{1}{4}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon|^2 + |w_\varepsilon-v_\varepsilon|^2)ds+\varepsilon^{d-1}\tau_\varepsilon-C\varepsilon^2\|f_\varepsilon\|^2_{L^2(Q_\varepsilon)}-$$ $$-\|f_\varepsilon-M_\varepsilon f_\varepsilon\|_{L^2(Q_\varepsilon)}\sqrt{\beta}\varepsilon\|\nabla_{\mathbf{y}}w_\varepsilon\|_{L^2(Q_\varepsilon)}\geq$$ $$\geq\frac{1}{4}\int_{Q_\varepsilon}(|\nabla w_\varepsilon-\nabla v_\varepsilon|^2 + |w_\varepsilon-v_\varepsilon|^2)ds+\varepsilon^{d-1}\tau_\varepsilon-C\varepsilon^2\|f_\varepsilon\|^2_{L^2(Q_\varepsilon)}-$$ $$\frac{\beta\varepsilon^2}{2} \|f_\varepsilon-M_\varepsilon 
f_\varepsilon\|^2_{L^2(Q_\varepsilon)}-\frac{1}{2}\|\nabla_{\mathbf{y}}w_\varepsilon\|^2_{L^2(Q_\varepsilon)},$$ so, $$\|\nabla w_\varepsilon-\nabla v_\varepsilon\|^2_{L^2(Q_\varepsilon)}+\|w_\varepsilon-v_\varepsilon\|^2_{L^2(Q_\varepsilon)}\leq C\varepsilon^2\|f_\varepsilon\|^2_{L^2(Q_\varepsilon)}+\frac{\beta\varepsilon^2}{2} \|f_\varepsilon-M_\varepsilon f_\varepsilon\|^2_{L^2(Q_\varepsilon)},$$ and therefore $$\|w_\varepsilon-E_\varepsilon v_\varepsilon\|_{H^1(Q_\varepsilon)}\leq C\varepsilon\|f_\varepsilon\|_{L^2(Q_\varepsilon)},$$ that is, \begin{equation}\label{estimacionresolvente} \|u_\varepsilon-Ev_\varepsilon\|_{H^1_{\bm\varepsilon}(Q)}\leq C\varepsilon\|f_\varepsilon\|_{L^2(Q)}.\end{equation} \begin{flushright}$\blacksquare$\end{flushright} \section{Appendix: Shadowing and distance of attractors in $\mathbb{R}^m$}\label{shadowing} In this section we introduce some concepts from shadowing theory. The aim of this theory is to study the relationship between the trajectories of a given dynamical system and the trajectories of a perturbation of it. These techniques allow us to relate the distance between attractors in $\mathbb{R}^m$ to the distance between the corresponding time-one maps of an ordinary differential equation and an appropriate perturbation of it. This result is described in Proposition \ref{LipschitzShadowingUniform}. Shadowing theory plays an important role in our work. Most of the definitions we present below can be found in \cite{Al-Nayef&Diamond&Kloeden&Co}. Throughout this section we will denote by $X$ a Banach space, by $B\subset X$ a subset which may be bounded or unbounded, and by $T$ a nonlinear map, not necessarily continuous or differentiable. \begin{defi}\label{trajectory} A {\bf negative trajectory} of a map $T$ is a sequence $\mathbf{x}_- = \{x_n\}_{n\in\mathbb{Z}^-}\subset B $ such that $$x_{n+1}=T(x_n),\qquad\textrm{ for}\quad n\in\mathbb{Z}^-.$$ \end{defi} \begin{defi}\label{pseudotrajectory} Let $\delta\geq 0$. 
A {\bf negative} $\bm\delta${\bf-pseudo-trajectory} of $T$ is a sequence $\mathbf{y} =\{y_n\}_{n\in\mathbb{Z}^-} \subset B$ with $$||y_{n+1} - T(y_n)|| \leq \delta, \quad \textrm{for}\quad n\in \mathbb{Z}^- .$$ \end{defi} We denote by $Tr^-(T, K,\delta)$ the set of all negative $\delta$-pseudo-trajectories of $T$ in $K \subset B$. Note that a negative $0$-pseudo-trajectory is a negative trajectory and that we always have the following inclusion $$Tr^-(T, K, 0) \subset Tr^-(T, K, \delta).$$ An important class of negative $\delta$-pseudo-trajectories of a map $T$ is given by the trajectories, $\{y_n\}_{n\in\mathbb{Z}^-}$, of maps $\varphi:B\to X$ such that $\|T(x)-\varphi(x)\|\leq\delta$ for any $x\in B$. This follows directly from the fact that $$\|T(y_n)-y_{n+1}\| = \|T(y_n)-\varphi(y_n)\|\leq \delta\quad \textrm{for} \quad n\in\mathbb{Z}^-.$$ That is, $$\bigcup_{\|T-\varphi\|\leq\delta}Tr^- (\varphi, B, 0)\subset Tr^-(T, B, \delta).$$ In this work we need to compare the set of negative $\delta$-pseudo-trajectories and the set of negative trajectories of a map $T:B\to B$. An appropriate concept for this is that of ``Lipschitz Shadowing''. Hence, we consider the space $l^p(X)$, for $1\leq p<\infty$, of all infinite negative sequences $\{x_n\}_{n\in\mathbb{Z}^-}$ such that $$\|\mathbf{x_-}\|_{l^p}=\Big(\sum_{j\in\mathbb{Z}^-}\|x_j\|_X^p\Big)^{1/p}<\infty,$$ and the space $l^\infty(X)$ given by the sequences $\mathbf{x_-}=\{x_n\}_{n\in\mathbb{Z}^-}$ with $x_n\in X$ and $\|x_n\|_X\leq C$ for all $n\in\mathbb{Z}^-$. That is, $$l^\infty(X)=\{\mathbf{x_-}=\{x_n\}_{n\in\mathbb{Z}^-} : x_n\in X,\quad \|x_n\|_X\leq C\quad \forall n\in\mathbb{Z}^-\},$$ with $C>0$ a constant, endowed with the norm $$\|\mathbf{x_-}\|_{l^\infty(X)}=\sup\{\|x_n\|_X: n\in\mathbb{Z}^-\}.$$ It is well known that these spaces, endowed with these norms, are Banach spaces. 
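Before stating the shadowing definitions that follow, a small numerical illustration may help. The following Python sketch (not part of the paper; the contraction $T(x)=x/2$ and its perturbation are hypothetical choices) checks that a trajectory of a perturbed map $\varphi$ with $\sup_x\|T(x)-\varphi(x)\|\leq\delta$ is indeed a $\delta$-pseudo-trajectory of $T$, as claimed above.

```python
# Minimal numerical illustration of the pseudo-trajectory property.
# The map T and its perturbation phi are hypothetical examples.

def T(x):
    return 0.5 * x          # a contraction with factor a = 1/2

def phi(x):
    return 0.5 * x + 0.001  # perturbation with |T(x) - phi(x)| = delta

delta = 0.001

# build a finite piece of a trajectory of the perturbed map phi
y = [1.0]
for _ in range(50):
    y.append(phi(y[-1]))

# check the pseudo-trajectory condition ||y_{n+1} - T(y_n)|| <= delta
gaps = [abs(y[n + 1] - T(y[n])) for n in range(len(y) - 1)]
assert max(gaps) <= delta + 1e-15

# For this contraction the fixed point of phi is delta/(1 - a) = 2*delta
# away from the fixed point 0 of T, consistent with Lipschitz shadowing
# with constant L = 1/(1 - a) = 2.
print(max(gaps), y[-1])
```

For a contraction with factor $a<1$ the shadowing constant $L=1/(1-a)$ is sharp in this example, which is the quantitative content of the Lipschitz Shadowing property defined next.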
\begin{defi} A negative sequence $\mathbf{x_-}=\{x_n\}_{n\in\mathbb{Z}^-}$ $\bm\varepsilon${\bf-shadows} a negative sequence $\mathbf{y_-}=\{y_n\}_{n\in\mathbb{Z}^-}$ if and only if $$\|\mathbf{x_-}-\mathbf{y_-}\|_{l^\infty(X)}\leq\varepsilon.$$ Note that this property is symmetric, that is, $\mathbf{x_-}$ $\varepsilon$-shadows $\mathbf{y_-}$ if and only if $\mathbf{y_-}$ $\varepsilon$-shadows $\mathbf{x_-}$. \end{defi} If for a given sequence $\mathbf{y_-}\in l^\infty(X)$ and $\varepsilon>0$ we define $$B_\varepsilon(\mathbf{y_-})=\{ \mathbf{x}_-=\{x_n\}_{n\in \mathbb{Z}^-}: \|\mathbf{x_-} - \mathbf{y_-}\|_{l^\infty(X)}<\varepsilon\},$$ then we can say that a negative sequence $\mathbf{x_-}=\{x_n\}_{n\in\mathbb{Z}^-}$ $\bm\varepsilon${\bf-shadows} a sequence $\mathbf{y_-}=\{y_n\}_{n\in\mathbb{Z}^-}$ if $\mathbf{x_-}\in B_\varepsilon(\mathbf{y_-})$. Finally, the main concept we want to present in this section is the following. \begin{defi}\label{lipschitzshadowing} The map $T$ has the {\bf Lipschitz Shadowing} property on $K\subset B$, if there exist constants $L, \delta_0> 0$ such that for any $0<\delta\leq \delta_0$, any negative $\delta$-pseudo-trajectory of $T$ in $K$ is $(L\delta)$-shadowed by a negative trajectory of $T$ in $B$, that is, $$Tr^-(T,K,\delta)\subset B_{L\delta}(Tr^-(T,B,0)).$$ \end{defi} All these concepts allow us to present the following result. \begin{prop}\label{LipschitzShadowingUniform} Let \begin{equation}\label{origineq} \dot{x} = f(x), \end{equation} be a dissipative Morse-Smale system. 
We perturb it, obtaining \begin{equation}\label{pertureq} \dot{x} = f_\varepsilon (x), \end{equation} with $\varepsilon\geq 0$ such that $$\bar{T}_\varepsilon\rightarrow \bar{T}_0\qquad \textrm{as} \quad \varepsilon\rightarrow 0,$$ in the $C^1$ topology, where $$\bar{T}_{\varepsilon}, \bar{T}_0: \mathbb{R}^m\rightarrow\mathbb{R}^m,$$ are the time-one maps of the discrete dynamical systems generated by the evolution equations (\ref{origineq}) and (\ref{pertureq}), respectively. Assume that for each $\varepsilon>0$, $\bar{T}_0$ and $\bar{T}_\varepsilon$ have global attractors $\bar{\mathcal{A}}_0$ and $\bar{\mathcal{A}}_\varepsilon$, respectively. Then we have \begin{equation}\label{dist-attractors-Apendix} dist_H (\bar{\mathcal{A}}_0, \bar{\mathcal{A}}_\varepsilon) \leq \bold{C}\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty, \end{equation} with $\bold{C}$ independent of $\varepsilon$. \end{prop} \par \paragraph{\sl Proof.} Since $\bar{T}_0$ is a Morse-Smale map, it is proved in \cite{PilyuginShaDyn} that there exist a neighborhood $\Theta$ of $\bar{T}_0$ in the $C^1$ topology and numbers $L, \delta_0>0$ such that any map $T'\in\Theta$ has the Lipschitz Shadowing property on $\mathcal{N}(\bar{\mathcal{A}_0})$ with constants $\delta_0$, $L$. On the one hand, since $\bar{T}_0$ has the Lipschitz Shadowing property on $\mathcal{N}(\bar{\mathcal{A}_0})$ with parameters $L, \delta_0$, any negative $\delta$-pseudo-trajectory of $\bar{T}_0$ in $\mathcal{N}(\bar{\mathcal{A}_0})$, $\delta\leq\delta_0$, is $L\delta$-shadowed by a negative trajectory of $\bar{T}_0$ in $\mathbb{R}^m$, i.e., $$Tr^-(\bar{T}_0, \mathcal{N}(\bar{\mathcal{A}_0}),\delta)\subset B_{L\delta}(Tr^-(\bar{T}_0, \mathbb{R}^m, 0)).$$ Take $\varepsilon$ small enough that $\delta=\|\bar{T}_\varepsilon-\bar{T}_0\|_\infty \leq \delta_0$ and $\bar{\mathcal{A}}_\varepsilon\subset \mathcal{N}(\bar{\mathcal{A}_0})$. 
We consider $r^\varepsilon\in\bar{\mathcal{A}}_\varepsilon,$ with $$\mathbf{r}_-^{\bm{\varepsilon}}=\{r^\varepsilon_n\}_{n\in\mathbb{Z}^-}=\{\bar{T}^n_\varepsilon(r^\varepsilon): n\in\mathbb{Z}^-\}\subset\bar{\mathcal{A}}_\varepsilon$$ its negative trajectory under the dynamical system generated by $\bar{T}_\varepsilon$, $$\mathbf{r}_-^{\bm{\varepsilon}}\in {Tr}^-(\bar{T_\varepsilon}, \bar{\mathcal{A}}_\varepsilon,0).$$ As we have mentioned above, $\mathbf{r}_-^{\bm{\varepsilon}}$ is a negative $\delta$-pseudo-trajectory of $\bar{T}_0$ in $\bar{\mathcal{A}}_\varepsilon\subset\mathcal{N}(\bar{\mathcal{A}_0})$, $$\mathbf{r}_-^{\bm{\varepsilon}}\in {Tr}^-(\bar{T}_0, \mathcal{N}(\bar{\mathcal{A}_0}),\delta).$$ So, there exists $\mathbf{r}_-=\{r_n\}_{n\in\mathbb{Z}^-}\in Tr^-(\bar{T}_0, \mathbb{R}^m, 0)$ such that $$\|r^\varepsilon_n-r_n\|\leq L\delta,$$ for all $n$ for which $\mathbf{r_-}$ is defined. Since $$\|r_n\|\leq \|r^\varepsilon_n\|+ L\delta,$$ we conclude that $\mathbf{r_-}$ is bounded and therefore it lies in $\bar{\mathcal{A}_0}$. With this, $$dist(r^\varepsilon, \bar{\mathcal{A}_0})\leq L\delta= L \|\bar{T}_\varepsilon-\bar{T}_0\|_\infty.$$ Since $r^\varepsilon\in\bar{\mathcal{A}}_\varepsilon$ was chosen arbitrarily, we have \begin{equation}\label{upperAuto} dist(\bar{\mathcal{A}}_\varepsilon, \bar{\mathcal{A}_0})\leq L\delta = L \|\bar{T}_\varepsilon-\bar{T}_0\|_\infty, \end{equation} where $L$ is independent of $\varepsilon$. On the other hand, since any $T'\in\Theta$ has the Lipschitz Shadowing property on $\mathcal{N}(\bar{\mathcal{A}_0})$ with constants $L, \delta_0$, we take $\varepsilon>0$ small enough such that $\bar{T}_\varepsilon\in\Theta$ and $$\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty\leq\delta<\delta_0.$$ With this, we take $r_0\in\bar{\mathcal{A}}_0$ and its negative trajectory under $\bar{T}_0$, $\mathbf{r_-}=\{r_n\}_{n\in\mathbb{Z}^-}=\{\bar{T}^n_0(r_0): n\in\mathbb{Z}^-\}$. 
As we have mentioned before, $\mathbf{r_-}$ is a negative $\delta$-pseudo-trajectory of $\bar{T}_\varepsilon$ in $\bar{\mathcal{A}_0}\subset\mathcal{N}(\bar{\mathcal{A}_0})$. Since we have chosen $\varepsilon$ small enough that $\bar{T}_\varepsilon\in\Theta$, the map $\bar{T}_\varepsilon$ has the Lipschitz Shadowing property on $\mathcal{N}(\bar{\mathcal{A}_0})$ with parameters $L, \delta_0$, that is, $$\mathbf{r_-}\in B_{L\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty}({Tr}^-(\bar{T}_\varepsilon,\mathbb{R}^m,0)).$$ So, there exists $\mathbf{r}_-^{\bm{\varepsilon}}\in {Tr}^-(\bar{T}_\varepsilon, \mathbb{R}^m,0)$ such that $$\|r_n-r^\varepsilon_n\|\leq L\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty,$$ for all $n$ for which $\mathbf{r}_-^{\bm{\varepsilon}}$ is defined. Thus $\mathbf{r}_-^{\bm{\varepsilon}}$ is bounded, hence it lies in $\bar{\mathcal{A}}_\varepsilon$, and we also have $$\|r_0-r_0^\varepsilon\|\leq L\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty,$$ with $r_0$ and $r_0^\varepsilon$ the $n=0$ elements of the sequences $\mathbf {r_-}$ and $\mathbf{r}_-^{\bm{\varepsilon}}$ respectively. That is, $$dist(r_0,\bar{\mathcal{A}}_\varepsilon)\leq L\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty.$$ Finally, since $r_0\in\bar{\mathcal{A}_0}$ was chosen arbitrarily, we conclude \begin{equation}\label{lowerAuto} dist(\bar{\mathcal{A}_0}, \bar{\mathcal{A}}_\varepsilon)\leq L\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty. \end{equation} Putting together $(\ref{upperAuto})$ and $(\ref{lowerAuto})$ we obtain the desired estimate, $$dist_H (\bar{\mathcal{A}}_0, \bar{\mathcal{A}}_\varepsilon) \leq L\|\bar{T}_0-\bar{T}_\varepsilon\|_\infty,$$ with $L$ independent of $\varepsilon$. \begin{flushright}$\blacksquare$\end{flushright} \begin{re} Observe that the constant $\bold{C}$ in \eqref{dist-attractors-Apendix} is the constant $L$ from the Lipschitz Shadowing property of the map $\bar{T}_0$. 
\end{re} An immediate consequence of the result above is the following \begin{cor}\label{MS-dist} Let $T:\mathbb{R}^m\rightarrow\mathbb{R}^m$ be a {\bf Morse-Smale (gradient like)} map which has a global attractor $\mathcal{A}$. Then, there exists a neighborhood $\Theta$ of $T$ in the $C^1(\mathcal{N}(\mathcal{A}), \mathbb{R}^m)$ topology so that, for any $\,T_1, T_2\in\Theta$ with $\mathcal{A}_1$, $\mathcal{A}_2$ their respective attractors, we have $$dist_H(\mathcal{A}_1, \mathcal{A}_2)\leq L\|T_1-T_2\|_{L^\infty(\mathcal{N}(\mathcal{A}), \mathbb{R}^m)},$$ with $L$ the Lipschitz Shadowing constant of the map $T$. \end{cor} \paragraph{\sl Proof.} As we mentioned in the proof of the proposition above, since $T$ is a Morse-Smale map, it is proved in \cite{PilyuginShaDyn} that there exist a neighborhood $\Theta$ of $T$ in the $C^1$ topology and numbers $L, \delta_0>0$ such that any map $T'\in\Theta$ has the Lipschitz Shadowing property on $\mathcal{N}(\mathcal{A})$ with constants $\delta_0$, $L$. The rest of the proof follows the same lines as the proof of the proposition above. \begin{flushright}$\blacksquare$\end{flushright} \end{document}
\begin{definition}[Definition:Transitive Closure (Relation Theory)/Intersection of Transitive Supersets] Let $\RR$ be a relation on a set $S$. The '''transitive closure''' of $\RR$ is defined as the intersection of all transitive relations on $S$ which contain $\RR$. The transitive closure of $\RR$ is denoted $\RR^+$. {{NoSources}} Category:Definitions/Relation Theory \end{definition}
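The intersection of all transitive supersets of $\RR$ coincides with the relation obtained by repeatedly adjoining composed pairs to $\RR$ until the result is transitive. As an illustration (not part of the ProofWiki page; the relation used is a hypothetical example), this equivalent iterative computation can be sketched in Python:

```python
# Illustrative sketch: the transitive closure R+ is the smallest
# transitive relation containing R, computed here by adding composed
# pairs (a, d) with (a, b), (b, d) in the relation until nothing changes.

def transitive_closure(R):
    """R is a set of ordered pairs; returns its transitive closure."""
    closure = set(R)
    while True:
        new_pairs = {(a, d) for (a, b) in closure
                            for (c, d) in closure if b == c}
        if new_pairs <= closure:     # already transitive: done
            return closure
        closure |= new_pairs

R = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(R)))
# → [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Every transitive relation containing $\RR$ must contain each of these composed pairs, so the iterative construction agrees with the intersection definition above.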
\begin{document} \title{How to capitalize on \emph{a priori} contrasts in linear (mixed) models: A tutorial} \hypertarget{introduction}{ \section{Introduction}\label{introduction}} Whenever an experimental factor comprises more than two levels, analysis of variance (ANOVA) F-statistics provide very little information about the source of an effect or interaction involving this factor. \textcolor{black}{For example, let's assume an experiment with three groups of subjects. Let's also assume that an ANOVA shows that the main effect of group is significant. This of course leaves unclear which groups differ from each other and how they differ. However, scientists typically} have \emph{a priori} expectations about the pattern of means. That is, we usually have specific expectations about which groups differ from each other. One potential strategy is to follow up on these results using t-tests. However, this approach does not consider all the data in each test and therefore loses statistical power, does not generalize well to more complex models (e.g., linear mixed-effects models), and is subject to problems of multiple comparisons. In this paper, we will show how to test specific hypotheses directly in a regression model, which gives much more control over the analysis. Specifically, we show how planned comparisons between specific conditions \textcolor{black}{(groups)} or clusters of conditions \textcolor{black}{are} implemented as contrasts\textcolor{black}{. This is} a very effective way to align expectations with the statistical model. Indeed, if \textcolor{black}{planned comparisons, implemented in contrasts, are} defined \emph{a priori}, and are not defined after the results are known, planned comparisons should be \enquote{tested \emph{instead of}, rather than as a supplement to, the ordinary \enquote{omnibus} F test.} (Hays, 1973, p. 601). Every contrast consumes exactly one degree of freedom. 
Every degree of freedom in the ANOVA source-of-variance table can be spent to test a specific hypothesis about a difference between means or a difference between clusters of means.\\ \textcolor{black}{Linear mixed-effects models} (LMMs) (Baayen, Davidson, \& Bates, 2008; Bates, Kliegl, Vasishth, \& Baayen, 2015; Bates, Maechler, Bolker, \& Walker, 2014; Kliegl, Masson, \& Richter, 2010; Pinheiro \& Bates, 2000) \textcolor{black}{are a great tool and represent an important development in statistical practice in psychology and linguistics. LMMs are often taken to replace more traditional ANOVA analyses. However, LMMs also present some challenges. One key challenge is about how to incorporate categorical effects from factors with discrete levels into LMMs. One approach to analyzing factors is to do model comparison; this is akin to the ANOVA omnibus test, and again leaves it unclear which groups differ from which others. \textcolor{black}{An alternative approach is to base analyses on contrasts, which allow us to code factors as independent variables in linear regression models.} The present paper explains how to understand and use contrast coding to test particular comparisons between conditions of your experiment (for a Glossary of key terms see Appendix}~\ref{app:Glossary}). \textcolor{black}{Such knowledge about contrast specification is required if analysis of factors is to be based on LMMs} instead of ANOVAs. Arguably, in the R System for Statistical Computing (R Core Team, 2018), an understanding of contrast specification is \textcolor{black}{therefore} a necessary pre-condition for the proper use of LMMs. To model differences between categories/groups/cells/conditions, regression models (such as multiple regression, logistic regression and linear mixed models) specify a set of contrasts (i.e., which groups are compared to which baselines or groups). 
There are several ways to specify such contrasts mathematically, and as discussed below, which of these is more useful depends on the hypotheses about the expected pattern of means. If the analyst does not provide the specification explicitly, R will pick a default specification on its own, which may not align very well with the hypotheses that the analyst intends to test. Therefore, LMMs effectively demand that the user implement planned comparisons---perhaps almost as intended by Hays (1973). Obtaining a statistic roughly equivalent to an ordinary \enquote{omnibus} F test requires the extra effort of model comparison. This tutorial provides a practical introduction to contrast coding for factorial experimental designs that will allow scientists to express their hypotheses within the statistical model. \hypertarget{prerequisites-for-this-tutorial}{ \subsection{Prerequisites for this tutorial}\label{prerequisites-for-this-tutorial}} The reader is assumed to be familiar with R, and with the foundational ideas behind frequentist statistics, specifically null hypothesis significance testing. Some notational conventions: True parameters are referred to with Greek letters (e.g., \(\mu\), \(\beta\)), and estimates of these parameters have a hat (\(\hat\cdot\)) on the parameter (e.g., \(\hat\mu\), \(\hat\beta\)). The mean of several parameters is written as \(\mu\) or as \(\bar{\mu}\). Example analyses of simulated data-sets are presented using linear models (LMs) in R. We use simulated data rather than real data-sets because this allows full control over the dependent variable. To guide the reader, we provide a preview of the structure of the paper: \begin{itemize} \item First, basic concepts of different contrasts are explained, \textcolor{black}{using a factor with two levels to explain the concepts}. 
After a demonstration of the default contrast setting in R, \textsc{treatment contrasts}, a commonly used contrast for factorial designs, and \textsc{sum contrasts}, are illustrated. \item We then demonstrate how the generalized inverse (see Fieller, 2016, sec. 8.3) is used to convert what we call the hypothesis matrix to create a contrast matrix. We demonstrate this workflow of going from a hypothesis matrix to a contrast matrix for \textsc{sum contrasts} \textcolor{black}{using an example data-set involving one factor with three levels. We also demonstrate the workflow for} \textsc{repeated contrasts} \textcolor{black}{using an example data-set involving one factor with four levels}. \item After showing how contrasts implement hypotheses as predictors in a multiple regression model, \textcolor{black}{we introduce \textsc{polynomial contrasts} and \textsc{custom contrasts}}. \item \textcolor{black}{Then we discuss what makes a good contrast. Here, we introduce two important concepts: centering and orthogonality of contrasts, and their implications for contrast coding.} \item We provide additional information about the generalized inverse, and how it is used to switch between the hypothesis matrix and the contrast matrix. \textcolor{black}{The section on the matrix notation (Hypothesis matrix and contrast matrix in matrix form) can optionally be skipped on first reading.} \item \textcolor{black}{Then we discuss an effect size measure for contrasts.} \item The next section \textcolor{black}{discusses designs with two factors. We} compare regression models with analysis of variance (ANOVA) in simple $2 \times 2$ designs, and look at contrast centering, nested effects, and at a priori interaction contrasts. \end{itemize} Throughout the tutorial, we show how contrasts are implemented and applied in R and how they relate to hypothesis testing via the generalized inverse. 
\textcolor{black}{An overview of the conceptual and didactic flow is provided in Figure} \ref{fig:Flow}. \begin{figure} \caption{Conceptual and didactic flow. The left panel, conceptual flow, shows how to use the hypothesis matrix and the generalized inverse to construct custom contrast matrices for a given set of hypotheses. The right panel, didactic flow, shows an overview of the sections of this paper. The first grey box indicates the sections that introduce a set of default contrasts (treatment, sum, repeated, polynomial, and custom contrasts); the second grey box indicates sections that introduce contrasts for designs with two factors.} \label{fig:Flow} \end{figure} \FloatBarrier \hypertarget{conceptual-explanation-of-default-contrasts}{ \section{Conceptual explanation of default contrasts}\label{conceptual-explanation-of-default-contrasts}} What are examples for different contrast specifications? One contrast in widespread use is the \textsc{treatment contrast}. As suggested by its name, the \textcolor{black}{\textsc{treatment contrast}} is often used in intervention studies, where one or several intervention groups receive some treatment, which are compared to a control group. For example, two treatment groups may obtain (a) psychotherapy and (b) pharmacotherapy, and they may be compared to \textcolor{black}{(c)} a control group of patients waiting for treatment. This implies one factor with three levels. In this setting, a \textsc{treatment contrast} for this factor makes two comparisons: it tests \textcolor{black}{(1)} whether the psychotherapy group is better than the control group, and \textcolor{black}{(2)} whether the pharmacotherapy group is better than the control group. That is, each treatment condition is compared to the same control group or baseline condition. 
An example in research on memory and language may be a priming study, where two different kinds of priming conditions (e.g., phonological versus orthographic priming) are each compared to a control condition without priming. A second contrast of widespread use is the \textsc{sum contrast}. This contrast compares each tested group not against a baseline / control condition, but instead to the average response across all groups. Consider an example where three different priming conditions are compared to each other, such as orthographic, phonological, and semantic priming. The question of interest may be whether two of the priming conditions (e.g., orthographic and phonological priming) elicit stronger responses than the average response across all conditions. This could be done \textcolor{black}{with} \textsc{sum contrasts}: The first contrast would compare orthographic priming with the average response, and the second contrast would compare phonological priming with the average response. \textsc{Sum contrasts} also have an important role in factors with two levels, where they simply test the difference between those two factor levels (e.g., the difference between phonological versus orthographic priming). A third contrast coding is \textsc{repeated contrasts}. These are probably less often used in empirical studies, but are arguably the contrast of highest relevance for research in psychology and cognitive science. \textsc{Repeated contrasts} successively test neighboring factor levels against each other. For example, a study may manipulate the frequency of some target words into three categories of \enquote{low frequency}, \enquote{medium frequency}, and \enquote{high frequency}. What may be of interest in the study is whether low frequency words differ from medium frequency words, and whether medium frequency words differ from high frequency words. \textsc{Repeated contrasts} test exactly these differences between neighboring factor levels. 
A fourth type of contrast is \textsc{polynomial contrasts}. These are useful when the interest is in trends that span multiple factor levels. In the example with different levels of word frequency, a simple hypothesis may state that the response increases with increasing levels of word frequency. That is, one may expect that a response increases from low to medium frequency words \textcolor{black}{by the same magnitude} as it increases from medium to high frequency words. That is, a linear trend is expected. In addition to a linear trend, it is also possible to test a quadratic trend of word frequency; for example, when the effect is expected to be \textcolor{black}{larger or smaller between medium and high frequency words compared to the effect between low and medium frequency words.} One additional option for contrast coding is provided by \textsc{Helmert contrasts}. In an example with three factor levels, for \textsc{Helmert contrasts} the first contrast codes the difference between the first two factor levels, and the second contrast codes the difference between the mean of the first two levels and the third level. (In cases of a four-level factor, the third contrast tests the difference between (i) the average of the first three levels and (ii) the fourth level.) An example of the use of \textsc{Helmert contrasts} is a priming paradigm with the three experimental conditions \enquote{valid prime}, \enquote{invalid prime 1}, and \enquote{invalid prime 2}. Here, the first contrast may test the difference between conditions \enquote{invalid prime 1} and \enquote{invalid prime 2}. The second contrast could then test the difference between the valid and the invalid conditions. This coding would provide maximal power to test the difference between valid and invalid conditions, as it pools across both invalid conditions for the comparison.
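As a preview, each of these default codings corresponds to a built-in R function (the \textsc{repeated contrast} via the \texttt{MASS} package); the resulting contrast matrices for a three-level factor are shown and unpacked in a later section:

\begin{verbatim}
contr.treatment(3)   # treatment contrasts
contr.sum(3)         # sum contrasts
MASS::contr.sdif(3)  # repeated contrasts
contr.poly(3)        # polynomial contrasts
contr.helmert(3)     # Helmert contrasts
\end{verbatim}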
\hypertarget{basic-concepts-illustrated-using-a-two-level-factor}{ \section{Basic concepts illustrated using a two-level factor}\label{basic-concepts-illustrated-using-a-two-level-factor}} Consider the simplest case: suppose we want to compare the means of a dependent variable (DV) such as response times between two groups of subjects. R can be used to simulate data for such an example using the function \texttt{mixedDesign()} \textcolor{black}{(for details regarding this function, see Appendix}~\ref{app:mixedDesign}). The simulations assume longer response times in condition F1 (\(\mu_1 = 0.8\) sec) than F2 (\(\mu_2 = 0.4\) sec). The data from the \(10\) simulated subjects \textcolor{black}{are aggregated} and summary statistics \textcolor{black}{are computed} for the two groups. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(dplyr)} \CommentTok{# load mixedDesign function for simulating data} \KeywordTok{source}\NormalTok{(}\StringTok{"functions/mixedDesign.v0.6.3.R"}\NormalTok{)} \NormalTok{M <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\FloatTok{0.8}\NormalTok{, }\FloatTok{0.4}\NormalTok{), }\DataTypeTok{nrow=}\DecValTok{2}\NormalTok{, }\DataTypeTok{ncol=}\DecValTok{1}\NormalTok{, }\DataTypeTok{byrow=}\OtherTok{FALSE}\NormalTok{)} \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{) }\CommentTok{# set seed of random number generator for replicability} \NormalTok{simdat <-}\StringTok{ }\KeywordTok{mixedDesign}\NormalTok{(}\DataTypeTok{B=}\DecValTok{2}\NormalTok{, }\DataTypeTok{W=}\OtherTok{NULL}\NormalTok{, }\DataTypeTok{n=}\DecValTok{5}\NormalTok{, }\DataTypeTok{M=}\NormalTok{M, }\DataTypeTok{SD=}\NormalTok{.}\DecValTok{20}\NormalTok{, }\DataTypeTok{long =} \OtherTok{TRUE}\NormalTok{) } \KeywordTok{names}\NormalTok{(simdat)[}\DecValTok{1}\NormalTok{] <-}\StringTok{ "F"} \CommentTok{# Rename B_A to F(actor)} \KeywordTok{levels}\NormalTok{(simdat}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ 
}\KeywordTok{c}\NormalTok{(}\StringTok{"F1"}\NormalTok{, }\StringTok{"F2"}\NormalTok{)}
\NormalTok{simdat}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##     F id    DV
## 1  F1  1 0.997
## 2  F1  2 0.847
## 3  F1  3 0.712
## 4  F1  4 0.499
## 5  F1  5 0.945
## 6  F2  6 0.183
## 7  F2  7 0.195
## 8  F2  8 0.608
## 9  F2  9 0.556
## 10 F2 10 0.458
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{str}\NormalTok{(simdat)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 'data.frame': 10 obs. of 3 variables:
## $ F : Factor w/ 2 levels "F1","F2": 1 1 1 1 1 2 2 2 2 2
## $ id: Factor w/ 10 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10
## $ DV: num 0.997 0.847 0.712 0.499 0.945 ...
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{table1 <-}\StringTok{ }\NormalTok{simdat }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{group_by}\NormalTok{(F) }\OperatorTok{\%>\%}\StringTok{ }
\StringTok{  }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{N=}\KeywordTok{n}\NormalTok{(), }\DataTypeTok{M=}\KeywordTok{mean}\NormalTok{(DV), }\DataTypeTok{SD=}\KeywordTok{sd}\NormalTok{(DV), }\DataTypeTok{SE=}\NormalTok{SD}\OperatorTok{/}\KeywordTok{sqrt}\NormalTok{(N) )}
\NormalTok{(GM <-}\StringTok{ }\KeywordTok{mean}\NormalTok{(table1}\OperatorTok{$}\NormalTok{M)) }\CommentTok{# Grand Mean}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 0.6
\end{verbatim}

\begin{figure}
\caption{Means and standard errors of the simulated dependent variable (e.g., response times in seconds) in two conditions F1 and F2.}
\label{fig:Fig1Means}
\end{figure}

\begin{table}[b]
\begin{center}
\begin{threeparttable}
\caption{\label{tab:table1}Summary statistics per condition for the simulated data.}
\begin{tabular}{lllll}
\toprule
Factor F & \multicolumn{1}{c}{N data points} & \multicolumn{1}{c}{Estimated means} & \multicolumn{1}{c}{Standard deviations} & \multicolumn{1}{c}{Standard errors}\\
\midrule
F1 & 5 & 0.8 & 0.2 & 0.1\\
F2 & 5 & 0.4 & 0.2 & 0.1\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

The results, displayed in Table~\ref{tab:table1} and in
Figure~\ref{fig:Fig1Means}, show that the assumed true condition means are exactly realized with the simulated data. The numbers are exact because the \texttt{mixedDesign()} function ensures that the data are generated so as to have the true means for each level. In real data-sets, of course, the sample means will vary from experiment to experiment. A simple regression of \texttt{DV} on \texttt{F} yields a straightforward test of the difference between the group means. Part of the output of the \texttt{summary} function \textcolor{black}{is presented below, and the same results are also displayed in} Table~\ref{tab:table2}: \begin{Shaded} \begin{Highlighting}[] \NormalTok{m_F <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, simdat)} \KeywordTok{round}\NormalTok{(}\KeywordTok{summary}\NormalTok{(m_F)}\OperatorTok{$}\NormalTok{coef,}\DecValTok{3}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.8 0.089 8.94 0.000 ## FF2 -0.4 0.126 -3.16 0.013 \end{verbatim} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table2}Estimated regression model. 
Confidence intervals are obtained in R, e.g., using the function confint().}
\begin{tabular}{lllll}
\toprule
Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(8)$} & \multicolumn{1}{c}{$p$}\\
\midrule
Intercept & 0.8 & $[0.6$, $1.0]$ & 8.94 & < .001\\
FF2 & -0.4 & $[-0.7$, $-0.1]$ & -3.16 & .013\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

Comparing the means for each condition with the coefficients (\emph{Estimates}) reveals that (i) the intercept \textcolor{black}{($0.8$)} is the mean for condition F1, \(\hat\mu_1\); and (ii) the slope (\texttt{FF2}\textcolor{black}{: $-0.4$)} is the difference between the estimated means for the two groups, \(\hat\mu_2 - \hat\mu_1\) (Bolker, 2018):

\begin{equation}
\begin{array}{lcl}
\text{Intercept} = & \hat{\mu}_1 & = \text{estimated mean for \texttt{F1}} \\
\text{Slope (\texttt{FF2})} = & \hat{\mu}_2 - \hat{\mu}_1 & = \text{estim. mean for \texttt{F2}} - \text{estim. mean for \texttt{F1}}
\end{array}
\label{def:beta}
\end{equation}

What is new here are the confidence intervals associated with the regression coefficients. The t-test suggests that response times in group F2 are lower than in group F1.

\hypertarget{treatmentcontrasts}{
\subsection{Default contrast coding: Treatment contrasts}\label{treatmentcontrasts}}

How does R arrive at these particular values for the intercept and slope? That is, why does the intercept assess the mean of condition \texttt{F1} and how do we know the slope measures the difference in means between \texttt{F2}\(-\)\texttt{F1}? This result is a consequence of the default contrast coding of the factor \texttt{F}. R assigns \textsc{treatment contrasts} to factors and orders their levels alphabetically. The first factor level (here: \texttt{F1}) is coded as \(0\) and the second level (here: \texttt{F2}) is coded as \(1\).
This is visible when inspecting the current contrast attribute of the factor using the \texttt{contrasts} command: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{contrasts}\NormalTok{(simdat}\OperatorTok{$}\NormalTok{F)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## F2 ## F1 0 ## F2 1 \end{verbatim} Why does this contrast coding yield these particular regression coefficients? Let's take a look at the regression equation. Let \(\beta_0\) represent the intercept, and \(\beta_1\) the slope. Then, the simple regression above expresses the belief that the expected response time \(y\) is a linear function of the factor \texttt{F}. In a more general formulation, this is written as follows: \(y\) is a linear function of some predictor \(x\) with regression coefficients for the intercept, \(\beta_0\), and for the factor, \(\beta_1\): \begin{equation} y = \beta_0 + \beta_1x \label{eq:lm1} \end{equation} So, if \(x = 0\) \textcolor{black}{(condition} \texttt{F1}), \(y\) is \(\beta_0 + \beta_1 \cdot 0 = \beta_0\); and if \(x = 1\) \textcolor{black}{(condition} \texttt{F2}), \(y\) is \(\beta_0 + \beta_1 \cdot 1 = \beta_0 + \beta_1\). Expressing the above in terms of the estimated coefficients: \begin{equation} \begin{array}{lccll} \text{estim. value for \texttt{F1}} = & \hat{\mu}_1 = & \hat{\beta}_0 = & \text{Intercept} \\ \text{estim. value for \texttt{F2}} = & \hat{\mu}_2 = & \hat{\beta}_0 + \hat{\beta}_1 = & \text{Intercept} + \text{Slope (\texttt{FF2})} \end{array} \label{eq:predVal} \end{equation} \textcolor{black}{It is useful to think of such unstandardized regression coefficients as difference scores; they express the increase in the dependent variable $y$ associated with a change in the independent variable $x$ of $1$ unit, such as going from $0$ to $1$ in this example. The difference between condition means is $0.4 - 0.8 = -0.4$, which is exactly the estimated regression coefficient $\hat{\beta}_1$. 
The sign of the slope is negative because we have chosen to subtract the larger mean \texttt{F1} score from the smaller mean \texttt{F2} score.} \hypertarget{inverseMatrix}{ \subsection{Defining hypotheses}\label{inverseMatrix}} The analysis of the \textcolor{black}{regression equation} demonstrates that in the \textsc{treatment contrast} the intercept assesses the average response in the baseline condition, whereas the slope tests the difference between condition means. \textcolor{black}{However, these are just verbal descriptions of what each coefficient assesses. Is it also possible to formally write down the null hypotheses that are tested by each of these two coefficients?} From the perspective of formal hypothesis tests, the slope represents the main test of interest, so we consider this first. The \textsc{treatment contrast} expresses the null hypothesis that the difference in means between the two levels of the factor F is \(0\); formally, the null hypothesis \(H_0\) is that \(H_0: \; \beta_1 = 0\): \begin{equation} H_0: - 1 \cdot \mu_{F1} + 1 \cdot \mu_{F2} = 0 \end{equation} or equivalently: \begin{equation} \label{eq:f2minusf1} H_0: \mu_{F2} - \mu_{F1} = 0 \end{equation} The \(\pm 1\) weights in the null hypothesis statement \textcolor{black}{directly express which means are compared by the treatment contrast.} \textcolor{black}{The intercept in the \textsc{treatment contrast} expresses a null hypothesis that is usually of no interest: that the mean in condition F1 of the factor F is $0$.} Formally, the null hypothesis is \(H_0: \; \beta_0 = 0\): \begin{equation} \label{eq:trmtcontrfirstmention} H_0: 1 \cdot \mu_{F1} + 0 \cdot \mu_{F2} = 0 \end{equation} \noindent or equivalently: \begin{equation} H_0: \mu_{F1} = 0 . \end{equation} \noindent The fact that the intercept term formally tests the null hypothesis that the mean of condition \texttt{F1} is zero is in line with our previous derivation (see equation \ref{def:beta}). 
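This correspondence between regression coefficients and condition means can be verified directly in R; the following sketch (re-using the simulated data \texttt{simdat} from above) recomputes the two coefficients of the \textsc{treatment contrast} from the condition means:

\begin{verbatim}
means <- with(simdat, tapply(DV, F, mean))
means["F1"]                # = 0.8, the intercept
means["F2"] - means["F1"]  # = -0.4, the slope FF2
\end{verbatim}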
In R, factor levels are ordered alphabetically and by default the first level is used as the baseline in \textsc{treatment contrasts}. Obviously, this default mapping will only be correct for a given data-set if the levels' alphabetical ordering matches the desired contrast coding. When it does not, it is possible to re-order the levels. Here is one way of re-ordering the levels in R: \begin{Shaded} \begin{Highlighting}[] \NormalTok{simdat}\OperatorTok{$}\NormalTok{Fb <-}\StringTok{ }\KeywordTok{factor}\NormalTok{(simdat}\OperatorTok{$}\NormalTok{F, }\DataTypeTok{levels =} \KeywordTok{c}\NormalTok{(}\StringTok{"F2"}\NormalTok{,}\StringTok{"F1"}\NormalTok{))} \KeywordTok{contrasts}\NormalTok{(simdat}\OperatorTok{$}\NormalTok{Fb)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## F1 ## F2 0 ## F1 1 \end{verbatim} \noindent \textcolor{black}{This re-ordering} did not change any data associated with the factor, only one of its attributes. With this new contrast attribute the simple regression yields the following result (see Table \ref{tab:table4}). \begin{Shaded} \begin{Highlighting}[] \NormalTok{m1_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{Fb, simdat)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table4}Reordering factor levels} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(8)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 0.4 & $[0.2$, $0.6]$ & 4.47 & .002\\ FbF1 & 0.4 & $[0.1$, $0.7]$ & 3.16 & .013\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} \textcolor{black}{The model now tests different hypotheses. The intercept now codes the mean of condition} \texttt{F2}\textcolor{black}{, and the slope measures the difference in means between} \texttt{F1} \textcolor{black}{minus} \texttt{F2}\textcolor{black}{. 
This represents an alternative coding of the treatment contrast.} \hypertarget{effectcoding}{ \subsection{Sum contrasts}\label{effectcoding}} Treatment contrasts are only one of many options. It is also possible to use \textsc{sum contrasts}, which code one of the conditions as \(-1\) and the other as \(+1\), effectively `centering' the effects at the grand mean (GM, i.e., the mean of the two group means). Here, we rescale the contrast to values of \(-0.5\) and \(+0.5\), which makes the estimated treatment effect the same as for treatment coding and easier to interpret. \textcolor{black}{To} use this contrast in a linear regression, use the \texttt{contrasts} function (for results see Table \ref{tab:table5}): \begin{Shaded} \begin{Highlighting}[] \NormalTok{(}\KeywordTok{contrasts}\NormalTok{(simdat}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\OperatorTok{-}\FloatTok{0.5}\NormalTok{,}\OperatorTok{+}\FloatTok{0.5}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] -0.5 0.5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{m1_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, simdat)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table5}Estimated regression model} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(8)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 0.6 & $[0.5$, $0.7]$ & 9.49 & < .001\\ F1 & -0.4 & $[-0.7$, $-0.1]$ & -3.16 & .013\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} Here, the slope (\texttt{F1}) again codes the difference of the groups associated with the first and second factor levels. 
\textcolor{black}{It has the same value as in the \textsc{treatment contrast}.} However, the intercept now represents the estimate of the average of condition means for F1 and F2, that is, the GM. \textcolor{black}{This differs from the \textsc{treatment contrast}. For the scaled \textsc{sum contrast}:}

\begin{equation}
\begin{array}{lcl}
\text{Intercept} = & (\hat{\mu}_1 + \hat{\mu}_2)/2 & = \text{estimated mean of \texttt{F1} and \texttt{F2}} \\
\text{Slope (\texttt{F1})} = & \hat{\mu}_2 - \hat{\mu}_1 & = \text{estim. mean for \texttt{F2}} - \text{estim. mean for \texttt{F1}}
\end{array}
\label{def:beta2}
\end{equation}

\textcolor{black}{How does R arrive at these values for the intercept and the slope? Why does the intercept assess the GM and why does the slope test the group-difference? This is the result of rescaling the \textsc{sum contrast}. The first factor level (}\texttt{F1}\textcolor{black}{) was coded as $-0.5$, and the second factor level (}\texttt{F2}\textcolor{black}{) as $+0.5$:}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contrasts}\NormalTok{(simdat}\OperatorTok{$}\NormalTok{F)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##    [,1]
## F1 -0.5
## F2  0.5
\end{verbatim}

\textcolor{black}{Let's again look at the regression equation to better understand what computations are performed. Again, $\beta_0$ represents the intercept, $\beta_1$ represents the slope, and the predictor variable $x$ represents the factor} \texttt{F}\textcolor{black}{. The regression equation is written as:}

\begin{equation}
y = \beta_0 + \beta_1x
\label{eq:lm2}
\end{equation}

The group of \texttt{F1} subjects is \textcolor{black}{then coded as $-0.5$, and t}he response time for the group of \texttt{F1} subjects is \(\beta_0 + \beta_1 \cdot x_1 = 0.6 + (-0.4) \cdot (-0.5) = 0.8\).
\textcolor{black}{The \texttt{F2} group, to the contrary, is coded as $+0.5$.} By implication, the mean of the \texttt{F2} group must be \(\beta_0 + \beta_1 \cdot x_1 = 0.6 + (-0.4) \cdot 0.5 = 0.4\). \textcolor{black}{Expressed in terms of the estimated coefficients:} \begin{equation} \begin{array}{lccll} \text{estim. value for \texttt{F1}} = & \hat{\mu}_1 = & \hat{\beta}_0 - 0.5 \cdot \hat{\beta}_1 = & \text{Intercept} - 0.5 \cdot \text{Slope (\texttt{F1})}\\ \text{estim. value for \texttt{F2}} = & \hat{\mu}_2 = & \hat{\beta}_0 + 0.5 \cdot \hat{\beta}_1 = & \text{Intercept} + 0.5 \cdot \text{Slope (\texttt{F1})} \end{array} \label{eq:predVal2} \end{equation} \textcolor{black}{The unstandardized regression coefficient is a difference score: Taking a step of one unit on the predictor variable $x$, e.g., from $-0.5$ to $+0.5$, reflecting a step from condition \texttt{F1} to \texttt{F2}, changes the dependent variable from $0.8$ (for condition \texttt{F1}) to $0.4$ (condition \texttt{F2}), reflecting a difference of $0.4 - 0.8 = -0.4$; and this is again exactly the estimated regression coefficient $\hat{\beta}_1$.} \textcolor{black}{Moreover, a}s mentioned above, the intercept now assesses the GM of conditions F1 and F2\textcolor{black}{: it is exactly in the middle between condition means for F1 and F2.} \textcolor{black}{So far we gave verbal statements about what is tested by the intercept and the slope in the case of the scaled \textsc{sum contrast}. 
It is possible to write these statements as formal null hypotheses that are tested by each regression coefficient: } Sum contrasts express the null hypothesis that the difference in means between the two levels of factor F is 0; formally, the null hypothesis \(H_0\) is that

\begin{equation}
H_0: -1 \cdot \mu_{F1} + 1 \cdot \mu_{F2} = 0
\end{equation}

\noindent \textcolor{black}{This is the same hypothesis that was also tested by the slope in the treatment contrast.} The intercept, however, now expresses a different hypothesis about the data: it expresses the null hypothesis that the average of the two conditions F1 and F2 is 0:

\begin{equation}
H_0: 1/2 \cdot \mu_{F1} + 1/2 \cdot \mu_{F2} = \frac{\mu_{F1} + \mu_{F2}}{2} = 0
\end{equation}

\noindent In balanced data, i.e., in data-sets where there are no missing data points, the average of the two conditions F1 and F2 is the GM. In unbalanced data-sets, where there are missing values, this average is the \textcolor{black}{unweighted GM: each condition mean enters with equal weight, regardless of how many observations it is based on}. To illustrate this point, consider an example with fully balanced data and two equal group sizes of \(5\) subjects for each group F1 and F2. Here, the GM is also the mean across all subjects. Next, consider a highly simplified unbalanced data-set, where \textcolor{black}{in condition} \texttt{F1} two observations of the dependent variable \textcolor{black}{are available} with values of \(2\) and \(3\), and where \textcolor{black}{in condition} \texttt{F2} only one observation of the dependent variable \textcolor{black}{is available} with a value of \(4\). In this data-set, the mean across all subjects is \(\frac{2 + 3 + 4}{3} = \frac{9}{3} = 3\).
However, the \textcolor{black}{(unweighted)} GM as assessed in the intercept in a model using \textcolor{black}{sum contrasts} for factor \texttt{F} \textcolor{black}{would first compute the mean for each group separately (i.e., $\frac{2 + 3}{2} = 2.5$, and $4$), and then compute the mean across conditions} \(\frac{2.5 + 4}{2} = \frac{6.5}{2} = 3.25\). \textcolor{black}{The GM of $3.25$ is different from} the mean across subjects of \(3\).

To summarize, \textsc{treatment contrasts} and \textsc{sum contrasts} are two possible ways to parameterize the difference between two groups; they test different hypotheses (there are cases, however, where the hypotheses are equivalent). \textsc{Treatment contrasts} compare one or more means against a baseline condition, whereas \textsc{sum contrasts} allow us to determine whether we can reject the null hypothesis that a condition's mean is the same as the GM (which in the two-group case also implies a hypothesis test that the two group means are the same). \textcolor{black}{One question that comes up here is how one knows or formally derives which hypotheses a given set of contrasts tests. This question will be discussed in detail below for the general case of arbitrary contrasts.}

\hypertarget{cell-means-parameterization}{
\subsection{Cell means parameterization}\label{cell-means-parameterization}}

\textcolor{black}{One alternative option is to use the so-called cell means parameterization. In this approach, one does not estimate an intercept term and then differences between factor levels. Instead, each degree of freedom in a design is used to simply estimate the mean of one of the factor levels. As a consequence, no comparisons between condition means are tested; instead, for each factor level it is tested whether the associated condition mean differs from zero.
Cell means parameterization is specified by explicitly removing the intercept term (which is added automatically) by adding a $-1$ in the regression formula:} \begin{Shaded} \begin{Highlighting}[] \NormalTok{m2_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{-1} \OperatorTok{+}\StringTok{ }\NormalTok{F, simdat)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table2a}Estimated regression model} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(8)$} & \multicolumn{1}{c}{$p$}\\ \midrule FF1 & 0.8 & $[0.6$, $1.0]$ & 8.94 & < .001\\ FF2 & 0.4 & $[0.2$, $0.6]$ & 4.47 & .002\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} \textcolor{black}{Now, the regression coefficients (see the column labeled `Estimate') estimate the mean of the first factor level ($0.8$) and the mean of the second factor level ($0.4$). Each of these means is compared to zero in the statistical tests, and each of these means is significantly larger than zero. This cell means parameterization usually does not allow a test of the hypotheses of interest, as these hypotheses usually relate to differences between conditions rather than to whether each condition differs from zero.} \hypertarget{examples-of-different-default-contrast-types}{ \section{Examples of different default contrast types}\label{examples-of-different-default-contrast-types}} \textcolor{black}{Above, we introduced conceptual explanations of different default contrasts by discussing example applications. Moreover, the preceding section on basic concepts mentioned that contrasts are implemented by numerical contrast coefficients. These contrast coefficients are usually represented in matrices of coefficients. Each of the discussed default contrasts is implemented using a different contrast matrix. 
These default contrast matrices are available in the R System for Statistical Computing in the basic distribution of R in the stats package. Here, we provide a quick overview of the different contrast matrices specifically for the example applications discussed above.} \textcolor{black}{For the \textsc{treatment contrasts}, we discussed an example where a psychotherapy group and a pharmacotherapy group are each compared to the same control or baseline group; the latter could be a group that receives no treatment. The corresponding contrast matrix is obtained using the following function call:} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{contr.treatment}\NormalTok{(}\DecValTok{3}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 2 3 ## 1 0 0 ## 2 1 0 ## 3 0 1 \end{verbatim} \textcolor{black}{The number of rows specifies the number of factor levels. The three rows indicate the three levels of the factor. The first row codes the baseline or control condition (the baseline always only contains $0$s as contrast coefficients), the second row codes the psychotherapy group, and the third row codes the pharmacotherapy group. The two columns reflect the two comparisons that are being tested by the contrasts: the first column tests the second group (i.e., psychotherapy) against the baseline / control group, and the second column tests the third group (i.e., pharmacotherapy) against the baseline / control group.} \textcolor{black}{For the \textsc{sum contrast}, our example involved conditions for orthographic priming, phonological priming, and semantic priming. The orthographic and phonological priming conditions were compared to the average priming effect across all three groups. 
In R, there is again a standard function call for the \textsc{sum contrast}:} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{contr.sum}\NormalTok{(}\DecValTok{3}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] ## 1 1 0 ## 2 0 1 ## 3 -1 -1 \end{verbatim} \textcolor{black}{Again, the three rows indicate three groups, and the two columns reflect the two comparisons. The first row codes orthographic priming, the second row phonological priming, and the last row semantic priming. Now, the first column codes a contrast that compares the response in orthographic priming against the average response, and the second column codes a contrast comparing phonological priming against the average response. Why these contrasts test these hypotheses is not transparent here. We will return to this issue below.} \textcolor{black}{For \textsc{repeated contrasts}, our example compared response times in low frequency words vs. medium frequency words, and medium frequency words vs. high frequency words. In R, the corresponding contrast matrix is available in the MASS package} (Venables \& Ripley, 2002): \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(MASS)} \KeywordTok{contr.sdif}\NormalTok{(}\DecValTok{3}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 2-1 3-2 ## 1 -0.667 -0.333 ## 2 0.333 -0.333 ## 3 0.333 0.667 \end{verbatim} \textcolor{black}{The first row represents low frequency words, the second row medium frequency words, and the last row high frequency words. Now the first contrast (column) tests the difference between the second minus the first row, i.e., the response to medium frequency words minus response to low frequency words. The second contrast (column) tests the difference between the third minus the second row, i.e., the difference in the response to high frequency words minus the response to medium frequency words. 
Why the \textsc{repeated contrast} tests exactly these differences is not transparent either.} \textcolor{black}{Below, we will explain how these and other contrasts are generated from a careful definition of the hypotheses that one wishes to test for a given data-set. We will introduce a basic workflow for how to create one's own custom contrasts.}

\textcolor{black}{We discussed that for the example with three different word frequency levels, it is possible to test the hypothesis of a linear (or quadratic) trend across all levels of word frequency. Such \textsc{polynomial contrasts} are specified in R using the following command:}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contr.poly}\NormalTok{(}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##              .L     .Q
## [1,] -7.07e-01  0.408
## [2,] -7.85e-17 -0.816
## [3,]  7.07e-01  0.408
\end{verbatim}

\textcolor{black}{As in the other contrasts mentioned above, it is not clear from this contrast matrix what hypotheses are being tested. As before, the three rows represent three levels of word frequency. The first column codes a linear increase with word frequency levels. The second column codes a quadratic trend. The} \texttt{contr.poly()} \textcolor{black}{function tests orthogonalized trends, a concept that we will explain below.}

\textcolor{black}{We had also discussed \textsc{Helmert contrasts} above, and had given the example that one might want to compare two \enquote{invalid} priming conditions to each other, and then compare both \enquote{invalid} priming conditions to one \enquote{valid} prime condition.
\textsc{Helmert contrasts} are specified in R using the following command:}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contr.helmert}\NormalTok{(}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##   [,1] [,2]
## 1   -1   -1
## 2    1   -1
## 3    0    2
\end{verbatim}

\textcolor{black}{The first row represents \enquote{invalid prime 1}, the second row \enquote{invalid prime 2}, and the third row the \enquote{valid prime} condition. The first column tests the difference between the two \enquote{invalid} prime conditions. The coefficients in the second column test both \enquote{invalid prime} conditions against the \enquote{valid prime} condition.}

\textcolor{black}{How can one make use of these contrast matrices for a specific regression analysis? As discussed above for the case of two factor levels, one needs to tell R to use one of these contrast coding schemes for a factor of interest in a linear model (LM)/regression analysis. Let's assume a data-frame called} \texttt{dat} \textcolor{black}{with a dependent variable} \texttt{dat\$DV} \textcolor{black}{and a three-level factor} \texttt{dat\$WordFrequency} \textcolor{black}{with levels low, medium, and high frequency words. One chooses one of the above contrast matrices and `assigns' this contrast to the factor. Here, we choose the \textsc{repeated contrast}:}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contrasts}\NormalTok{(dat}\OperatorTok{$}\NormalTok{WordFrequency) <-}\StringTok{ }\KeywordTok{contr.sdif}\NormalTok{(}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\textcolor{black}{This way, when running a linear model using this factor, R will automatically use the contrast matrix assigned to the factor.
This is done in R with the simple call of a linear model, where} \texttt{dat} \textcolor{black}{is specified as the data-frame to use, where the numeric variable} \texttt{DV} \textcolor{black}{is defined as the dependent variable, and where the factor} \texttt{WordFrequency} \textcolor{black}{is added as predictor in the analysis:} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{WordFrequency, }\DataTypeTok{data=}\NormalTok{dat)} \end{Highlighting} \end{Shaded} \textcolor{black}{Assuming that there are three levels to the WordFrequency factor, the lm function will estimate three regression coefficients: one intercept and two slopes. What these regression coefficients test will depend on which contrast coding is specified. Given that we have used \textsc{repeated contrasts}, the resulting regression coefficients will now test the difference between medium and low frequency words (first slope) and will test the difference between high and medium frequency words (second slope). Examples of output from regression models for concrete situations will be shown below. Moreover, it will be shown how contrast matrices are generated for whatever hypotheses one wants to test in a given data-set.} \hypertarget{the-hypothesis-matrix-illustrated-with-a-three-level-factor}{ \section{The hypothesis matrix illustrated with a three-level factor}\label{the-hypothesis-matrix-illustrated-with-a-three-level-factor}} Consider again the example with the three low, medium, and high frequency conditions. The \texttt{mixedDesign} function \textcolor{black}{can be used} to simulate data from a lexical decision task with response times as dependent variable. The research question is: do response times differ as a function of the between-subject factor word frequency with three levels: low, medium, and high? We assume that lower word frequency results in longer response times.
Here, we specify word frequency as a between-subject factor. In cognitive science experiments, frequency will usually vary within subjects and between items. However, the within- or between-subjects status of an effect is independent of its contrast coding; we assume the manipulation to be between subjects for ease of exposition. \textcolor{black}{The concepts presented here extend to repeated measures designs that are usually analyzed using linear mixed models.} \textcolor{black}{The following R code} simulates the data and computes the table of means and standard deviations for the three frequency categories: \begin{Shaded} \begin{Highlighting}[] \NormalTok{M <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{500}\NormalTok{, }\DecValTok{450}\NormalTok{, }\DecValTok{400}\NormalTok{), }\DataTypeTok{nrow=}\DecValTok{3}\NormalTok{, }\DataTypeTok{ncol=}\DecValTok{1}\NormalTok{, }\DataTypeTok{byrow=}\OtherTok{FALSE}\NormalTok{)} \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)} \NormalTok{simdat2 <-}\StringTok{ }\KeywordTok{mixedDesign}\NormalTok{(}\DataTypeTok{B=}\DecValTok{3}\NormalTok{, }\DataTypeTok{W=}\OtherTok{NULL}\NormalTok{, }\DataTypeTok{n=}\DecValTok{4}\NormalTok{, }\DataTypeTok{M=}\NormalTok{M, }\DataTypeTok{SD=}\DecValTok{20}\NormalTok{, }\DataTypeTok{long =} \OtherTok{TRUE}\NormalTok{) } \KeywordTok{names}\NormalTok{(simdat2)[}\DecValTok{1}\NormalTok{] <-}\StringTok{ "F"} \CommentTok{# Rename B_A to F(actor)/F(requency)} \KeywordTok{levels}\NormalTok{(simdat2}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"low"}\NormalTok{, }\StringTok{"medium"}\NormalTok{, }\StringTok{"high"}\NormalTok{)} \NormalTok{simdat2}\OperatorTok{$}\NormalTok{DV <-}\StringTok{ }\KeywordTok{round}\NormalTok{(simdat2}\OperatorTok{$}\NormalTok{DV)} \KeywordTok{head}\NormalTok{(simdat2)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## F id DV ## 1 low 1 497 ## 2 low 2 474 ## 3 low 3 523 ## 4 low 4 
506 ## 5 medium 5 422 ## 6 medium 6 467 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{table.word <-}\StringTok{ }\NormalTok{simdat2 }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{group_by}\NormalTok{(F) }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{summarise}\NormalTok{(}\DataTypeTok{N =} \KeywordTok{length}\NormalTok{(DV), }\DataTypeTok{M =} \KeywordTok{mean}\NormalTok{(DV), }\DataTypeTok{SD =} \KeywordTok{sd}\NormalTok{(DV), }\DataTypeTok{SE =} \KeywordTok{sd}\NormalTok{(DV)}\OperatorTok{/}\KeywordTok{sqrt}\NormalTok{(N))} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table6}Summary statistics of the simulated lexical decision data per frequency level.} \begin{tabular}{lllll} \toprule Factor F & \multicolumn{1}{c}{N data points} & \multicolumn{1}{c}{Estimated means} & \multicolumn{1}{c}{Standard deviations} & \multicolumn{1}{c}{Standard errors}\\ \midrule low & 4 & 500 & 20 & 10\\ medium & 4 & 450 & 20 & 10\\ high & 4 & 400 & 20 & 10\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} As shown in Table \ref{tab:table6}, the estimated means reflect our assumptions about the true means in the data simulation: Response times decrease with increasing word frequency. The effect is significant in an ANOVA (see Table \ref{tab:table7}). \begin{Shaded} \begin{Highlighting}[] \NormalTok{aovF <-}\StringTok{ }\KeywordTok{aov}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F }\OperatorTok{+}\StringTok{ }\KeywordTok{Error}\NormalTok{(id), }\DataTypeTok{data=}\NormalTok{simdat2)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table7}ANOVA results.
The effect F stands for the word frequency factor.} \begin{tabular}{lllllll} \toprule Effect & \multicolumn{1}{c}{$F$} & \multicolumn{1}{c}{$\mathit{df}_1$} & \multicolumn{1}{c}{$\mathit{df}_2$} & \multicolumn{1}{c}{$\mathit{MSE}$} & \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{$\hat{\eta}^2_G$}\\ \midrule F & 24.93 & 2 & 9 & 403.19 & < .001 & .847\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} The ANOVA, however, does not tell us the source of the difference. In the following sections, we use this and an additional data-set to illustrate \protect\hyperlink{sumcontrasts}{\textsc{sum}}, \protect\hyperlink{repeatedcontrasts}{\textsc{repeated}}, \protect\hyperlink{polynomialContrasts}{\textsc{polynomial}}, and \protect\hyperlink{customContrasts}{\textsc{custom}} contrasts. In practice, usually only one set of contrasts is selected when the expected pattern of means is formulated during the design of the experiment. The decision about which contrasts to use is made before the pattern of means is known. \hypertarget{sumcontrasts}{ \subsection{Sum contrasts}\label{sumcontrasts}} For didactic purposes, \textcolor{black}{the next sections describe} \textsc{sum contrasts}. Suppose \textcolor{black}{that the expectation is} that \textcolor{black}{low-frequency words are responded to slower and} medium-frequency words are responded to faster than the GM response time. Then, \textcolor{black}{the} research question could be: \textcolor{black}{Do low-frequency words differ from the GM and do} medium-frequency words differ from the GM? And if so, are they above or below the GM? 
We want to test the following two hypotheses: \begin{equation} H_{0_1}: \mu_1 = \frac{\mu_1+\mu_2+\mu_3}{3} = GM \end{equation} \noindent and \begin{equation} H_{0_2}: \mu_2 = \frac{\mu_1+\mu_2+\mu_3}{3} = GM \end{equation} \(H_{0_1}\) \textcolor{black}{can also be written} as: \begin{align} \label{h01} & \mu_1 =\frac{\mu_1+\mu_2+\mu_3}{3}\\ \Leftrightarrow & \mu_1 - \frac{\mu_1+\mu_2+\mu_3}{3} = 0\\ \Leftrightarrow & \frac{2}{3} \mu_1 - \frac{1}{3}\mu_2 - \frac{1}{3}\mu_3 = 0 \end{align} Here, the weights \(2/3, -1/3, -1/3\) \textcolor{black}{are informative about} how to combine the condition means to define the null hypothesis. \(H_{0_2}\) \textcolor{black}{is also rewritten} as: \begin{align}\label{h02} & \mu_2 = \frac{\mu_1+\mu_2+\mu_3}{3}\\ \Leftrightarrow & \mu_2 - \frac{\mu_1+\mu_2+\mu_3}{3} = 0 \\ \Leftrightarrow & -\frac{1}{3}\mu_1 + \frac{2}{3} \mu_2 - \frac{1}{3} \mu_3 = 0 \end{align} Here, the weights are \(-1/3, 2/3, -1/3\), and they again \textcolor{black}{indicate} how to combine the condition means for defining the null hypothesis. \hypertarget{the-hypothesis-matrix}{ \subsection{The hypothesis matrix}\label{the-hypothesis-matrix}} \textcolor{black}{The weights of the condition means are not only useful for defining hypotheses. They also provide the starting point for a very powerful method that allows the researcher to generate the contrasts needed to test these hypotheses in a linear model. So far, we have explained several different contrast codings and the hypotheses that they test. Given a data-set and a set of hypotheses that need to be tested, one option is to check whether any of the contrasts encountered above happen to test exactly the hypotheses of interest. Sometimes it suffices to use one of these existing contrasts. However, at other times, our research hypotheses may not correspond exactly to any of the contrasts in the default set of standard contrasts provided in R. For these cases, or simply for more complex designs, it is very useful to know how contrast matrices are created. Indeed, a relatively simple procedure exists in which we write our hypotheses formally, extract the weights of the condition means from the hypotheses, and then automatically generate the correct contrast matrix that we need in order to test these hypotheses in a linear model. Using this powerful method, it is not necessary to find a match to a contrast matrix provided by the family of functions in R starting with the prefix contr. Instead, it is possible to simply define the hypotheses that one wants to test, and to obtain the correct contrast matrix for these in an automatic procedure. Here, for pedagogical reasons, we show some examples of how to apply this procedure in cases where the hypotheses} \emph{do} \textcolor{black}{correspond to some of the existing contrasts.} \textcolor{black}{Defining a custom contrast matrix involves four steps:} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item \textcolor{black}{Write down the hypotheses} \item \textcolor{black}{Extract the weights and write them into what we will call a} \emph{hypothesis matrix} \item \textcolor{black}{Apply the} \emph{generalized matrix inverse} \textcolor{black}{to the hypothesis matrix to create the contrast matrix} \item \textcolor{black}{Assign the contrast matrix to the factor and run the linear model} \end{enumerate} \textcolor{black}{Let us apply this four-step procedure to our example of the \textsc{sum contrast}. The first step, writing down the hypotheses, is shown above.
The second step involves writing down the weights that each hypothesis gives to condition means. The weights for the first null hypothesis are} \texttt{wH01=c(+2/3,\ -1/3,\ -1/3)}, and the weights for the second null hypothesis are \texttt{wH02=c(-1/3,\ +2/3,\ -1/3)}. \textcolor{black}{Before writing these into a} hypothesis matrix, we also define a null hypothesis for the intercept term. For the intercept, \textcolor{black}{the} hypothesis is that the mean across all conditions is zero: \begin{align} H_{0_0}: &\frac{\mu_1 + \mu_2 + \mu_3}{3} = 0 \\ H_{0_0}: &\frac{1}{3} \mu_1 + \frac{1}{3}\mu_2 + \frac{1}{3}\mu_3 = 0 \end{align} This null hypothesis has weights of \(1/3\) for all condition means. The weights from all three hypotheses that \textcolor{black}{were defined are now combined} and written into a matrix that we refer to as the \emph{hypothesis matrix} (\texttt{Hc}): \begin{Shaded} \begin{Highlighting}[] \NormalTok{HcSum <-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(}\DataTypeTok{cH00=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{low=} \DecValTok{1}\OperatorTok{/}\DecValTok{3}\NormalTok{, }\DataTypeTok{med=} \DecValTok{1}\OperatorTok{/}\DecValTok{3}\NormalTok{, }\DataTypeTok{hi=} \DecValTok{1}\OperatorTok{/}\DecValTok{3}\NormalTok{), } \DataTypeTok{cH01=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{low=}\OperatorTok{+}\DecValTok{2}\OperatorTok{/}\DecValTok{3}\NormalTok{, }\DataTypeTok{med=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{3}\NormalTok{, }\DataTypeTok{hi=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{3}\NormalTok{), } \DataTypeTok{cH02=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{low=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{3}\NormalTok{, }\DataTypeTok{med=}\OperatorTok{+}\DecValTok{2}\OperatorTok{/}\DecValTok{3}\NormalTok{, }\DataTypeTok{hi=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{3}\NormalTok{))} \KeywordTok{fractions}\NormalTok{(}\KeywordTok{t}\NormalTok{(HcSum))} \end{Highlighting} \end{Shaded} \begin{verbatim} 
## cH00 cH01 cH02 ## low 1/3 2/3 -1/3 ## med 1/3 -1/3 2/3 ## hi 1/3 -1/3 -1/3 \end{verbatim} Each set of weights \textcolor{black}{is first entered} as a row into the matrix (command \texttt{rbind()}\textcolor{black}{). There are mathematical reasons for this, which we discuss below. However, we then switch rows and columns of the matrix for easier readability using the command} \texttt{t()} \textcolor{black}{(this transposes the matrix, i.e., switches rows and columns).}\protect\rmarkdownfootnote{Matrix transpose changes the arrangement of the columns and rows of a matrix, but leaves the content of the matrix unchanged. For example, for the matrix \(X\) with three rows and two columns \(X = \left(\begin{array}{cc} a & b \\ c & d \\ e & f \end{array} \right)\), the transpose yields a matrix with two rows and three columns, where the rows and columns are flipped: \(X^T = \left(\begin{array}{ccc} a & c & e \\ b & d & f \end{array} \right)\).} The command \texttt{fractions()} turns the decimals into fractions to improve readability. Now that the condition weights from the hypotheses \textcolor{black}{have been written} into the hypothesis matrix, the third step of the procedure \textcolor{black}{is implemented}: a matrix operation called the \enquote{generalized matrix inverse}\protect\rmarkdownfootnote{At this point, there is no need to understand in detail what this means. We refer the interested reader to Appendix \ref{app:LinearAlgebra}. For a quick overview, we recommend a vignette explaining the generalized inverse in the \href{https://cran.r-project.org/web/packages/matlib/vignettes/ginv.html}{matlib package} (Friendly, Fox, \& Chalmers, 2018).} is used to obtain the contrast matrix that \textcolor{black}{is needed} to test these hypotheses in a linear model. In R this next step is done using the function \texttt{ginv()} \textcolor{black}{from the} \texttt{MASS} \textcolor{black}{package}.
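\textcolor{black}{As an illustrative cross-check of this third step outside of R (a sketch for readers who want a language-neutral verification; it is not part of the tutorial's R workflow), the same computation can be performed with NumPy's \texttt{pinv()}. Because this particular hypothesis matrix is square and invertible, the generalized inverse coincides with the ordinary matrix inverse:}

```python
import numpy as np

# Hypothesis matrix for the sum contrast: rows are the hypotheses
# (intercept, H01, H02), columns are the condition means (low, med, hi).
HcSum = np.array([[ 1/3,  1/3,  1/3],
                  [ 2/3, -1/3, -1/3],
                  [-1/3,  2/3, -1/3]])

# The Moore-Penrose generalized inverse yields the contrast matrix.
XcSum = np.linalg.pinv(HcSum)
```

\textcolor{black}{Up to floating-point rounding, the rows of \texttt{XcSum} for low, medium, and high frequency words are $(1, 1, 0)$, $(1, 0, 1)$, and $(1, -1, -1)$, matching the output of \texttt{ginv()} in R.}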
We here define a function \texttt{ginv2()} for nicer formatting of the output.\protect\rmarkdownfootnote{The function \texttt{fractions()} from the \texttt{MASS} package \textcolor{black}{is used} to make the output more easily readable, and \textcolor{black}{the function} \texttt{provideDimnames()} \textcolor{black}{is used} to keep row and column names.} \begin{Shaded} \begin{Highlighting}[] \NormalTok{ginv2 <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x) }\CommentTok{# define a function to make the output nicer} \KeywordTok{fractions}\NormalTok{(}\KeywordTok{provideDimnames}\NormalTok{(}\KeywordTok{ginv}\NormalTok{(x),}\DataTypeTok{base=}\KeywordTok{dimnames}\NormalTok{(x)[}\DecValTok{2}\OperatorTok{:}\DecValTok{1}\NormalTok{]))} \end{Highlighting} \end{Shaded} Applying the generalized inverse to the hypothesis matrix results in the new matrix \texttt{XcSum}. This is the contrast matrix \(X_c\) that tests exactly those hypotheses that were specified earlier: \begin{Shaded} \begin{Highlighting}[] \NormalTok{(XcSum <-}\StringTok{ }\KeywordTok{ginv2}\NormalTok{(HcSum))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## cH00 cH01 cH02 ## low 1 1 0 ## med 1 0 1 ## hi 1 -1 -1 \end{verbatim} This contrast matrix corresponds exactly to the \textsc{sum contrasts} \textcolor{black}{described} above. In the case of the \textsc{sum contrast}, the contrast matrix looks very different from the hypothesis matrix. The contrast matrix in \textsc{sum contrasts} codes with \(+1\) the condition that is to be compared to the GM. The condition that is never compared to the GM is coded as \(-1\). Without knowing the \textcolor{black}{relationship between the hypothesis matrix and the contrast matrix}, the meaning of the coefficients is completely opaque. To verify this custom-made contrast matrix, it \textcolor{black}{is compared} to the \textsc{sum contrast} matrix as generated by the R function \texttt{contr.sum()} in the \texttt{stats} package. 
The resulting contrast matrix is identical to \textcolor{black}{the} result when adding the intercept term, a column of ones, to the contrast matrix: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{fractions}\NormalTok{(}\KeywordTok{cbind}\NormalTok{(}\DecValTok{1}\NormalTok{,}\KeywordTok{contr.sum}\NormalTok{(}\DecValTok{3}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] ## 1 1 1 0 ## 2 1 0 1 ## 3 1 -1 -1 \end{verbatim} In order to test the hypotheses, step four in our procedure \textcolor{black}{involves} assigning \textsc{sum contrasts} to the factor \texttt{F} \textcolor{black}{in our example data, and running a linear model.}\protect\rmarkdownfootnote{Alternative ways to specify \textcolor{black}{default} contrasts in R are to set contrasts globally using \texttt{options(contrasts="contr.sum")} or to set contrasts locally only for a specific analysis, by including a named list specifying contrasts for each factor in a linear model: \texttt{lm(DV\ \textasciitilde{}\ 1\ +\ F,\ contrasts=list(F="contr.sum"))}.} \textcolor{black}{This allows estimating the regression coefficients associated with each contrast. We compare these to the data in Table}~\ref{tab:table6} to test whether the regression coefficients actually correspond to the differences of condition means, as intended. To define the contrast, \textcolor{black}{it is necessary to} remove the intercept term, as this is automatically added by the linear model function \texttt{lm()} in R. 
\begin{Shaded} \begin{Highlighting}[] \KeywordTok{contrasts}\NormalTok{(simdat2}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\NormalTok{XcSum[,}\DecValTok{2}\OperatorTok{:}\DecValTok{3}\NormalTok{]} \NormalTok{m1_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat2)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table8}Regression model using the sum contrast.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(9)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 450 & $[437$, $463]$ & 77.62 & < .001\\ FcH01 & 50 & $[32$, $69]$ & 6.11 & < .001\\ FcH02 & 0 & $[-18$, $19]$ & 0.01 & .992\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} The linear model regression coefficients (see Table \ref{tab:table8}) show the GM response time of \(450\) ms in the intercept. \textcolor{black}{Remember that the first regression coefficient} \texttt{FcH01} \textcolor{black}{was designed to test our first hypothesis that low frequency words are responded to slower than the GM. The regression coefficient} \texttt{FcH01} (\enquote{Estimate}) of \(50\) exactly reflects the difference between low frequency words (\(500\) ms) and the GM of \(450\) ms. The second hypothesis was that response times for medium frequency words differ from the GM. The fact that the second regression coefficient \texttt{FcH02} is exactly \(0\) indicates that response times for medium frequency words (\(450\) ms) are identical with the GM of \(450\) ms. Although there is evidence against the null hypothesis that low-frequency words have the same reading time as the GM, there is no evidence against the null hypothesis that medium frequency words have the same reading times as the GM. 
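\textcolor{black}{The arithmetic behind these estimates can also be verified directly from the condition means: multiplying the hypothesis matrix by the vector of condition means yields the regression coefficients. The following sketch performs this check with NumPy (for illustration only; the analysis itself is run in R):}

```python
import numpy as np

# Hypothesis matrix for the sum contrast (rows: intercept, H01, H02).
HcSum = np.array([[ 1/3,  1/3,  1/3],
                  [ 2/3, -1/3, -1/3],
                  [-1/3,  2/3, -1/3]])

# Estimated condition means (ms) for low, medium, high frequency words.
means = np.array([500, 450, 400])

# Applying the hypothesis matrix to the condition means reproduces the
# regression estimates: grand mean, low vs. GM, and medium vs. GM.
estimates = HcSum @ means  # approximately [450, 50, 0]
```

\textcolor{black}{This reproduces the intercept ($450$~ms), the estimate of $50$~ms for \texttt{FcH01}, and the estimate of $0$~ms for \texttt{FcH02}.}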
\textcolor{black}{We have now not only derived contrasts and hypothesis tests for the sum contrast, we have also used a powerful and highly general procedure that is used to generate contrasts for many kinds of different hypotheses and experimental designs.} \hypertarget{further-examples-of-contrasts-illustrated-with-a-factor-with-four-levels}{ \section{Further examples of contrasts illustrated with a factor with four levels}\label{further-examples-of-contrasts-illustrated-with-a-factor-with-four-levels}} In order to understand \textcolor{black}{\textsc{repeated difference} and \textsc{polynomial contrasts}}, it may be instructive to consider an experiment with one between-subject factor with four levels. We simulate such a data-set using the function \texttt{mixedDesign}. The sample sizes for each level and the means and standard errors are shown in Table \ref{tab:helmertsimdatTab}, and the means and standard errors are also shown graphically in Figure \ref{fig:helmertsimdatFig}. We assume that the four factor levels \texttt{F1} to \texttt{F4} reflect levels of word frequency, including the levels \texttt{low}, \texttt{medium-low}, \texttt{medium-high}, and \texttt{high} frequency words, and that the dependent variable reflects some response time.\protect\rmarkdownfootnote{Qualitatively, the simulated pattern of results is actually empirically observed for word frequency effects on single fixation durations (Heister, Würzner, \& Kliegl, 2012).} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# Data, means, and figure} \NormalTok{M <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{10}\NormalTok{, }\DecValTok{40}\NormalTok{), }\DataTypeTok{nrow=}\DecValTok{4}\NormalTok{, }\DataTypeTok{ncol=}\DecValTok{1}\NormalTok{, }\DataTypeTok{byrow=}\OtherTok{FALSE}\NormalTok{)} \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)} \NormalTok{simdat3 <-}\StringTok{ 
}\KeywordTok{mixedDesign}\NormalTok{(}\DataTypeTok{B=}\DecValTok{4}\NormalTok{, }\DataTypeTok{W=}\OtherTok{NULL}\NormalTok{, }\DataTypeTok{n=}\DecValTok{5}\NormalTok{, }\DataTypeTok{M=}\NormalTok{M, }\DataTypeTok{SD=}\DecValTok{10}\NormalTok{, }\DataTypeTok{long =} \OtherTok{TRUE}\NormalTok{) } \KeywordTok{names}\NormalTok{(simdat3)[}\DecValTok{1}\NormalTok{] <-}\StringTok{ "F"} \CommentTok{# Rename B_A to F(actor)} \KeywordTok{levels}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"F1"}\NormalTok{, }\StringTok{"F2"}\NormalTok{, }\StringTok{"F3"}\NormalTok{, }\StringTok{"F4"}\NormalTok{)} \NormalTok{table3 <-}\StringTok{ }\NormalTok{simdat3 }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{group_by}\NormalTok{(F) }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{N=}\KeywordTok{length}\NormalTok{(DV), }\DataTypeTok{M=}\KeywordTok{mean}\NormalTok{(DV), }\DataTypeTok{SD=}\KeywordTok{sd}\NormalTok{(DV), }\DataTypeTok{SE=}\NormalTok{SD}\OperatorTok{/}\KeywordTok{sqrt}\NormalTok{(N) )} \NormalTok{(GM <-}\StringTok{ }\KeywordTok{mean}\NormalTok{(table3}\OperatorTok{$}\NormalTok{M)) }\CommentTok{# Grand Mean} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 20 \end{verbatim} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:helmertsimdatTab}Summary statistics for simulated data with one between-subjects factor with four levels.} \begin{tabular}{lllll} \toprule Factor F & \multicolumn{1}{c}{N data points} & \multicolumn{1}{c}{Estimated means} & \multicolumn{1}{c}{Standard deviations} & \multicolumn{1}{c}{Standard errors}\\ \midrule F1 & 5 & 10.0 & 10.0 & 4.5\\ F2 & 5 & 20.0 & 10.0 & 4.5\\ F3 & 5 & 10.0 & 10.0 & 4.5\\ F4 & 5 & 40.0 & 10.0 & 4.5\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} \begin{figure} \caption{Means and error bars (showing standard errors) for a simulated data-set with one between-subjects factor with four levels.} \label{fig:helmertsimdatFig} \end{figure} \hypertarget{repeatedcontrasts}{
\subsection{Repeated contrasts}\label{repeatedcontrasts}} Arguably, the most popular contrast psychologists and psycholinguists are interested in is the comparison between neighboring levels of a factor. This type of contrast is called \textsc{repeated contrast}. In our example, our research question might be whether the frequency level \texttt{low} leads to slower response times than frequency level \texttt{medium-low}, whether frequency level \texttt{medium-low} leads to slower response times than frequency level \texttt{medium-high}, and whether frequency level \texttt{medium-high} leads to slower response times than frequency level \texttt{high}. \textcolor{black}{\textsc{Repeated contrasts} are used to implement these comparisons. Consider first how to derive the contrast matrix for \textsc{repeated contrasts}, starting out by specifying the hypotheses that \textcolor{black}{are to be tested} about the data. Importantly, this again applies the general strategy of how to translate (any) hypotheses about differences between groups or conditions into a set of contrasts, yielding a powerful tool of great value in many research settings. We follow the four-step procedure outlined above.} The first step is to specify our hypotheses, and to write them down in a way such that their weights can be extracted easily. For a four-level factor, the three hypotheses are: \begin{equation} H_{0_{2-1}}: -1 \cdot \mu_1 + 1 \cdot \mu_2 + 0 \cdot \mu_3 + 0 \cdot \mu_4 = 0 \end{equation} \begin{equation} H_{0_{3-2}}: 0 \cdot \mu_1 - 1 \cdot \mu_2 + 1 \cdot \mu_3 + 0 \cdot \mu_4 = 0 \end{equation} \begin{equation} H_{0_{4-3}}: 0 \cdot \mu_1 + 0 \cdot \mu_2 - 1 \cdot \mu_3 + 1 \cdot \mu_4 = 0 \end{equation} Here, the \(\mu_x\) are the mean response times in condition \(x\). Each hypothesis gives weights to the different condition means. 
The first hypothesis (\(H_{0_{2-1}}\)) tests the difference between condition mean for \texttt{F2} (\(\mu_2\)) minus the condition mean for \texttt{F1} (\(\mu_1\)), but ignores condition means for \texttt{F3} and \texttt{F4} (\(\mu_3\), \(\mu_4\)). \(\mu_1\) has a weight of \(-1\), \(\mu_2\) has a weight of \(+1\), and \(\mu_3\) and \(\mu_4\) have weights of \(0\). As \textcolor{black}{the} second step, the vector of weights for the first hypothesis \textcolor{black}{is extracted} as \texttt{c2vs1\ \textless{}-\ c(F1=-1,F2=+1,F3=0,F4=0)}. Next, the same thing \textcolor{black}{is done} for the other hypotheses - the weights for all hypotheses \textcolor{black}{are extracted} and coded into a \emph{hypothesis matrix} in R: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{t}\NormalTok{(HcRE <-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(}\DataTypeTok{c2vs1=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DataTypeTok{F2=}\OperatorTok{+}\DecValTok{1}\NormalTok{,}\DataTypeTok{F3=} \DecValTok{0}\NormalTok{,}\DataTypeTok{F4=} \DecValTok{0}\NormalTok{),} \DataTypeTok{c3vs2=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=} \DecValTok{0}\NormalTok{,}\DataTypeTok{F2=}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DataTypeTok{F3=}\OperatorTok{+}\DecValTok{1}\NormalTok{,}\DataTypeTok{F4=} \DecValTok{0}\NormalTok{),} \DataTypeTok{c4vs3=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=} \DecValTok{0}\NormalTok{,}\DataTypeTok{F2=} \DecValTok{0}\NormalTok{,}\DataTypeTok{F3=}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DataTypeTok{F4=}\OperatorTok{+}\DecValTok{1}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## c2vs1 c3vs2 c4vs3 ## F1 -1 0 0 ## F2 1 -1 0 ## F3 0 1 -1 ## F4 0 0 1 \end{verbatim} \textcolor{black}{Again, we show the transposed version of the hypothesis matrix (switching rows and columns), but now we leave out the hypothesis for the intercept (we discuss below when this can be neglected).} Next, the new contrast matrix 
\texttt{XcRE} \textcolor{black}{is obtained}. This is the contrast matrix \(X_c\) that exactly tests the hypotheses written down above: \begin{Shaded} \begin{Highlighting}[] \NormalTok{(XcRE <-}\StringTok{ }\KeywordTok{ginv2}\NormalTok{(HcRE))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## c2vs1 c3vs2 c4vs3 ## F1 -3/4 -1/2 -1/4 ## F2 1/4 -1/2 -1/4 ## F3 1/4 1/2 -1/4 ## F4 1/4 1/2 3/4 \end{verbatim} \textcolor{black}{In the case of the \textsc{repeated contrast}, the contrast matrix again looks very different from the hypothesis matrix.} In this case, the contrast matrix looks a lot less intuitive than the hypothesis matrix, and if one did not know the associated hypothesis matrix, it seems unclear what the contrast matrix would actually test. \textcolor{black}{To verify this custom-made contrast matrix, we compare it to the \textsc{repeated contrast} matrix as generated by the R function} \texttt{contr.sdif()} \textcolor{black}{in the \texttt{MASS} package} (Venables \& Ripley, 2002)\textcolor{black}{. The resulting contrast matrix is identical to our result:} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{fractions}\NormalTok{(}\KeywordTok{contr.sdif}\NormalTok{(}\DecValTok{4}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 2-1 3-2 4-3 ## 1 -3/4 -1/2 -1/4 ## 2 1/4 -1/2 -1/4 ## 3 1/4 1/2 -1/4 ## 4 1/4 1/2 3/4 \end{verbatim} Step four in the procedure is to apply \textsc{repeated contrasts} to the factor \texttt{F} in the example data, and to run a linear model. This allows us to estimate the regression coefficients associated with each contrast. 
\textcolor{black}{These are compared} to the data in Figure~\ref{fig:helmertsimdatFig} \textcolor{black}{to test whether the regression coefficients actually correspond to the differences between successive condition means, as intended.} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\NormalTok{XcRE} \NormalTok{m2_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat3)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table0}Repeated contrasts.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\ Fc2vs1 & 10 & $[-3$, $23]$ & 1.58 & .133\\ Fc3vs2 & -10 & $[-23$, $3]$ & -1.58 & .133\\ Fc4vs3 & 30 & $[17$, $43]$ & 4.74 & < .001\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} The results (see Table~\ref{tab:table0}) show that as expected, the regression coefficients reflect exactly the differences \textcolor{black}{that were of interest}: the regression coefficient (\enquote{Estimate}) \texttt{Fc2vs1} has a value of \(10\), which exactly corresponds to the difference between condition mean for \texttt{F2} (\(20\)) minus condition mean for \texttt{F1} (\(10\)), i.e., \(20 - 10 = 10\). Likewise, the regression coefficient \texttt{Fc3vs2} has a value of \(-10\), which corresponds to the difference between condition mean for \texttt{F3} (\(10\)) minus condition mean for \texttt{F2} (\(20\)), i.e., \(10 - 20 = -10\).
Finally, the regression coefficient \texttt{Fc4vs3} has a value of \(30\), which reflects the difference between condition \texttt{F4} (\(40\)) minus condition \texttt{F3} (\(10\)), i.e., \(40 - 10 = 30\). Thus, the regression coefficients reflect differences between successive or neighboring condition means, and test the corresponding null hypotheses. To sum up, formally writing down the hypotheses, extracting the weights into a hypothesis matrix, and applying the generalized matrix inverse operation yields a set of contrast coefficients that provide the desired estimates. This procedure is very general: it allows us to derive the contrast matrix corresponding to any set of hypotheses that one may want to test. The four-step procedure described above allows us to construct contrast matrices that are among the standard set of contrasts in R (\textsc{repeated contrasts} or \textsc{sum contrasts}, etc.), and also allows us to construct non-standard custom contrasts that are specifically tailored to the particular hypotheses one wants to test. The hypothesis matrix and the contrast matrix are linked by the generalized inverse; understanding this link is the key ingredient to understanding contrasts in diverse settings. \hypertarget{contrasts-in-linear-regression-analysis-the-design-or-model-matrix}{ \subsection{Contrasts in linear regression analysis: The design or model matrix}\label{contrasts-in-linear-regression-analysis-the-design-or-model-matrix}} We have now discussed how different contrasts are created from the hypothesis matrix. However, we have not treated in detail how exactly contrasts are used in a linear model. Here, we will see that the contrasts for a factor in a linear model are just the same thing as continuous numeric predictors (i.e., covariates) in a linear/multiple regression analysis. 
That is, contrasts are \textcolor{black}{the} way to encode discrete factor levels into numeric predictor variables for use in linear/multiple regression analysis, by encoding which differences between factor levels are tested. The contrast matrix \(X_c\) that we have looked at so far has one entry (row) for each experimental condition. For use in a linear model, however, the contrast matrix is coded into a design or model matrix \(X\), where each individual data point has one row. The design matrix \(X\) can be extracted using the function \texttt{model.matrix()}:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{(}\KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\NormalTok{XcRE) }\CommentTok{# contrast matrix}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## c2vs1 c3vs2 c4vs3
## F1 -3/4 -1/2 -1/4
## F2 1/4 -1/2 -1/4
## F3 1/4 1/2 -1/4
## F4 1/4 1/2 3/4
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{(covars <-}\StringTok{ }\KeywordTok{as.data.frame}\NormalTok{(}\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, simdat3))) }\CommentTok{# design matrix}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## (Intercept) Fc2vs1 Fc3vs2 Fc4vs3
## 1 1 -0.75 -0.5 -0.25
## 2 1 -0.75 -0.5 -0.25
## 3 1 -0.75 -0.5 -0.25
## 4 1 -0.75 -0.5 -0.25
## 5 1 -0.75 -0.5 -0.25
## 6 1 0.25 -0.5 -0.25
## 7 1 0.25 -0.5 -0.25
## 8 1 0.25 -0.5 -0.25
## 9 1 0.25 -0.5 -0.25
## 10 1 0.25 -0.5 -0.25
## 11 1 0.25 0.5 -0.25
## 12 1 0.25 0.5 -0.25
## 13 1 0.25 0.5 -0.25
## 14 1 0.25 0.5 -0.25
## 15 1 0.25 0.5 -0.25
## 16 1 0.25 0.5 0.75
## 17 1 0.25 0.5 0.75
## 18 1 0.25 0.5 0.75
## 19 1 0.25 0.5 0.75
## 20 1 0.25 0.5 0.75
\end{verbatim}

\textcolor{black}{For each of the $20$ subjects, four numbers are stored in this model matrix. The first is the constant value $1$ for the intercept; the other three are the values of the three predictor variables used to predict response times in the task. 
Indeed, this matrix is exactly the design matrix $X$ commonly used in multiple regression analysis, where each column represents one numeric predictor variable (covariate), and the first column codes the intercept term.} To further illustrate this, the covariates \textcolor{black}{are extracted} from this design matrix and stored separately as numeric predictor variables in the data-frame: \begin{Shaded} \begin{Highlighting}[] \NormalTok{simdat3[,}\KeywordTok{c}\NormalTok{(}\StringTok{"Fc2vs1"}\NormalTok{,}\StringTok{"Fc3vs2"}\NormalTok{,}\StringTok{"Fc4vs3"}\NormalTok{)] <-}\StringTok{ }\NormalTok{covars[,}\DecValTok{2}\OperatorTok{:}\DecValTok{4}\NormalTok{]} \end{Highlighting} \end{Shaded} They are now used as numeric predictor variables in a multiple regression analysis: \begin{Shaded} \begin{Highlighting}[] \NormalTok{m3_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{Fc2vs1 }\OperatorTok{+}\StringTok{ }\NormalTok{Fc3vs2 }\OperatorTok{+}\StringTok{ }\NormalTok{Fc4vs3, }\DataTypeTok{data=}\NormalTok{simdat3)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table0.3}Repeated contrasts as linear regression.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\ Fc2vs1 & 10 & $[-3$, $23]$ & 1.58 & .133\\ Fc3vs2 & -10 & $[-23$, $3]$ & -1.58 & .133\\ Fc4vs3 & 30 & $[17$, $43]$ & 4.74 & < .001\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} The results show that the regression coefficients are exactly the same as in the contrast-based analysis shown in the previous section (Table \ref{tab:table0}). 
This demonstrates that contrasts serve to code discrete factor levels into a linear/multiple regression analysis by numerically encoding comparisons between specific condition means. Figure \ref{fig:Overview} provides an overview of the introduced contrasts.

\begin{figure}
\caption{Overview of contrasts including treatment, sum, and repeated contrasts. From top to bottom panels, we illustrate the computation of regression coefficients, show the contrast and hypothesis matrices, formulas for computing regression coefficients, the null hypotheses tested by each coefficient, and formulas for estimated data.}
\label{fig:Overview}
\end{figure}

\FloatBarrier

\hypertarget{polynomialContrasts}{
\subsection{Polynomial contrasts}\label{polynomialContrasts}}

\textsc{Polynomial contrasts} are another option for analyzing factors. Suppose that we expect a linear trend across conditions, where the response increases by a constant magnitude with each successive factor level. This could be \textcolor{black}{the} expectation when four levels of a factor reflect decreasing levels of word frequency (i.e., four factor levels: high, medium-high, medium-low, and low word frequency), where \textcolor{black}{one} expects the lowest response for high frequency words, and successively higher responses for \textcolor{black}{lower} word frequencies. The effect for each individual level of a factor may \textcolor{black}{not be strong enough to detect} in the statistical model. Specifying a linear trend in a polynomial contrast allows us to pool the whole increase into a single coefficient for the linear trend, increasing statistical power to detect the increase. Such a specification constrains the hypothesis to one interpretable degree of freedom, e.g., a linear increase across factor levels. 
The larger the number of factor levels, the more parsimonious \textsc{polynomial contrasts} are compared to the contrast-based specifications introduced in the previous sections or to an omnibus F-test. Going beyond a linear trend, one may also have expectations about quadratic trends. For example, one may expect an increase only among very low frequency words, but no difference between high and medium-high frequency words.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{Xpol <-}\StringTok{ }\KeywordTok{contr.poly}\NormalTok{(}\DecValTok{4}\NormalTok{)}
\NormalTok{(}\KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\NormalTok{Xpol)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## .L .Q .C
## [1,] -0.671 0.5 -0.224
## [2,] -0.224 -0.5 0.671
## [3,] 0.224 -0.5 -0.671
## [4,] 0.671 0.5 0.224
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{m1_mr.Xpol <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat3)}
\end{Highlighting}
\end{Shaded}

\begin{table}[h]
\begin{center}
\begin{threeparttable}
\caption{\label{tab:table14polA}Polynomial contrasts.}
\begin{tabular}{lllll}
\toprule
Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\
\midrule
Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\
F L & 18 & $[8$, $27]$ & 4.00 & .001\\
F Q & 10 & $[1$, $19]$ & 2.24 & .040\\
F C & 13 & $[4$, $23]$ & 3.00 & .008\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

In this example (see Table \ref{tab:table14polA}), the linear trend across condition means is significant, but the quadratic and cubic trends are significant as well, indicating that the pattern of means is not fully captured by a linear increase. 
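The arithmetic linking these estimates to the condition means can also be verified outside of R. The following sketch uses Python with NumPy purely as a language-neutral check (an assumption of this illustration, not part of the analysis pipeline); the fractions are the exact values underlying the rounded \texttt{contr.poly(4)} output above, and the condition means \(10, 20, 10, 40\) are those of \texttt{simdat3}. Applying the generalized inverse of the contrast matrix to the vector of condition means reproduces the estimates up to rounding:

```python
import numpy as np

# Condition means of simdat3 for F1..F4, as reported in the text.
mu = np.array([10.0, 20.0, 10.0, 40.0])

# Exact fractions behind the rounded contr.poly(4) output shown above:
# linear, quadratic, and cubic orthogonal polynomial contrasts.
lin = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(20)
qua = np.array([1.0, -1.0, -1.0, 1.0]) / 2
cub = np.array([-1.0, 3.0, -3.0, 1.0]) / np.sqrt(20)

# Contrast matrix with an intercept column of 1s, as used by lm().
Xc = np.column_stack([np.ones(4), lin, qua, cub])

# The hypothesis matrix is the generalized inverse of Xc; applied
# to the condition means it yields the regression coefficients.
beta = np.linalg.pinv(Xc) @ mu
print(beta.round(2))  # approximately: 20, 17.89, 10, 13.42
```

Because the polynomial contrast columns are orthonormal, each trend estimate is simply the dot product of the corresponding contrast with the condition means, e.g., the linear trend \((-3 \cdot 10 - 1 \cdot 20 + 1 \cdot 10 + 3 \cdot 40)/\sqrt{20} \approx 17.9\), matching the rounded value \(18\) in the table.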
\hypertarget{customContrasts}{
\subsection{Custom contrasts}\label{customContrasts}}

Sometimes, a hypothesis about a pattern of means takes a form that cannot be expressed by the standard sets of contrasts available in R. For example, a theory or model may make quantitative predictions about the expected pattern of means. Alternatively, prior empirical research findings or logical reasoning may suggest a specific qualitative pattern. Such predictions could be quantitatively constrained when they come from a computational or mathematical model, but when a theory only predicts a qualitative pattern, these predictions can be represented by choosing some plausible values for the means (Baguley, 2012). For example, assume that a theory predicts for the pattern of means presented in Figure \ref{fig:helmertsimdatFig} that the first two means (for \texttt{F1} and \texttt{F2}) are identical, but that means for levels \texttt{F3} and \texttt{F4} increase linearly. One starts by writing down a potential expected pattern of means, such as \texttt{M\ =\ c(10,\ 10,\ 20,\ 30)}. \textcolor{black}{It is possible to} turn these predicted means into a contrast by centering them (i.e., subtracting the mean of \(17.5\)): \texttt{M\ =\ c(-7.5,\ -7.5,\ 2.5,\ 12.5)}. This already works as a contrast. \textcolor{black}{It is possible to} further simplify this by dividing by \(2.5\), which yields \texttt{M\ =\ c(-3,\ -3,\ 1,\ 5)}. We will use this contrast in a regression model. \textcolor{black}{Notice that if you have $I$ conditions, you can specify $I-1$ contrasts. However, it is also possible to specify fewer contrasts than this maximum number of $I-1$. 
The example below illustrates this point:} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(}\KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\KeywordTok{cbind}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\OperatorTok{-}\DecValTok{3}\NormalTok{, }\DecValTok{-3}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{5}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] ## [1,] -3 ## [2,] -3 ## [3,] 1 ## [4,] 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{C <-}\StringTok{ }\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat3)} \NormalTok{simdat3}\OperatorTok{$}\NormalTok{Fcust <-}\StringTok{ }\NormalTok{C[,}\StringTok{"F1"}\NormalTok{]} \NormalTok{m1_mr.Xcust <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{Fcust, }\DataTypeTok{data=}\NormalTok{simdat3)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table14cust}Custom contrasts.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(18)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 20 & $[14$, $26]$ & 6.97 & < .001\\ Fcust & 3 & $[1$, $5]$ & 3.15 & .006\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} For cases where a qualitative pattern of means can be realized with more than one set of quantitative values, Baguley (2012) notes that often the precise numbers may not be decisive. He also suggests selecting the simplest set of integer numbers that matches the desired pattern. \hypertarget{nonOrthogonal}{ \section{What makes a good set of contrasts?}\label{nonOrthogonal}} Contrasts decompose ANOVA omnibus F tests into several component comparisons (Baguley, 2012). 
Orthogonal contrasts decompose the sum of squares of the F test into additive, independent subcomponents, which makes the individual effects easier to interpret. As mentioned earlier, for a factor with \(I\) levels one can make \(I-1\) comparisons. For example, in a design with one factor with two levels, only one comparison is possible (between the two factor levels). \textcolor{black}{More generally, if we have a factor with $I_1$ levels and another factor with $I_2$ levels, then the total number of conditions is $I_1\times I_2 = \nu$ (not $I_1 + I_2$!), which implies a maximum of $\nu-1$ contrasts.} For example, in a design with one factor with three levels, A, B, and C, in principle one could make three comparisons (A vs.~B, A vs.~C, B vs.~C). However, after defining an intercept, only two of these comparisons can be made. Therefore, for a factor with three levels, we define two comparisons within one statistical model. F tests are nothing but combinations, or bundles, of contrasts. F tests are less specific and lack focus, but they are useful when the hypothesis in question is vague. However, a significant F test leaves it unclear which effects the data actually show. Contrasts are very useful to test specific effects in the data. \textcolor{black}{One critical precondition for contrasts is that they implement different hypotheses that are not collinear, that is, that none of the contrasts can be generated from the other contrasts by linear combination. For example, the contrast} \texttt{c1\ =\ c(1,2,3)} \textcolor{black}{can be generated from the contrast} \texttt{c2\ =\ c(3,4,5)} \textcolor{black}{simply by computing} \texttt{c2\ -\ 2}\textcolor{black}{. Therefore, contrasts c1 and c2 cannot be used simultaneously. That is, each contrast needs to encode some independent information about the data. 
Otherwise, the model cannot be estimated, and the \texttt{lm()} function gives an error indicating that the design matrix is \enquote{rank deficient}.}

There are (at least) two criteria for deciding what makes a good contrast. First, \textit{orthogonal contrasts} have advantages as they test mutually independent hypotheses about the data (see Dobson \& Barnett, 2011, sec. 6.2.5, p.~91 for a detailed explanation of orthogonality). \textcolor{black}{Second, it is} crucial that contrasts are defined in a way such that they answer the research questions. \textcolor{black}{This second point is the decisive one. One way to accomplish this is to use the hypothesis matrix to generate contrasts, as this ensures that the contrasts exactly test the hypotheses of interest in a given study.}

\hypertarget{centered-contrasts}{
\subsection{Centered contrasts}\label{centered-contrasts}}

Contrasts are often constrained to be centered, such that the individual contrast coefficients \(c_i\) \textcolor{black}{for different factor levels $i$ sum to $0$: $\sum_{i=1}^I c_i = 0$. This has advantages when testing interactions with other factors or covariates (we discuss interactions between factors below). 
All contrasts discussed here are centered except for the \textsc{treatment contrast}, in which the contrast coefficients for each contrast do not sum to zero:} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{colSums}\NormalTok{(}\KeywordTok{contr.treatment}\NormalTok{(}\DecValTok{4}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 2 3 4 ## 1 1 1 \end{verbatim} \textcolor{black}{Other contrasts, such as \textsc{repeated contrasts}, are centered and the contrast coefficients for each contrast sum to $0$:} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{colSums}\NormalTok{(}\KeywordTok{contr.sdif}\NormalTok{(}\DecValTok{4}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 2-1 3-2 4-3 ## 0 0 0 \end{verbatim} The contrast coefficients mentioned above appear in the contrast matrix. By contrast, the weights in the hypothesis matrix are always centered. This is also true for the \textsc{treatment contrast}. The reason is that they code hypotheses, which always relate to comparisons between conditions or bundles of conditions. The only exception are the weights for the intercept, which always sum to \(1\) in the hypothesis matrix. This is done to ensure that when applying the generalized matrix inverse, the intercept results in a constant term with values of \(1\) in the contrast matrix. That the intercept is coded by a column of \(1\)s in the contrast matrix accords to convention as it provides a scaling of the intercept coefficient that is simple to interpret. \textcolor{black}{An important question concerns whether (or when) the intercept needs to be considered in the generalized matrix inversion, and whether (or when) it can be ignored. 
This question is closely related to the concept of orthogonal contrasts, to which we turn below.}

\hypertarget{orthogonal-contrasts}{
\subsection{Orthogonal contrasts}\label{orthogonal-contrasts}}

Two centered contrasts \(c_1\) and \(c_2\) are orthogonal to each other if the following condition holds; here, \(i\) indexes the cells of the vectors representing the contrasts.

\begin{equation}
\sum_{i=1}^I c_{1,i} \cdot c_{2,i} = 0
\end{equation}

Orthogonality can be determined easily in R by computing the correlation between two contrasts. Orthogonal contrasts have a correlation of \(0\). Contrasts are therefore just a special case of predictors in regression models, where two numeric predictor variables are orthogonal if they are uncorrelated. For example, when the two factors in a \(2 \times 2\) design (we return to this case in a section on ANOVA below) are coded using \textsc{sum contrasts}, the two sum contrasts and their interaction are orthogonal to each other:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{(Xsum <-}\StringTok{ }\KeywordTok{cbind}\NormalTok{(}\DataTypeTok{F1=}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\OperatorTok{-}\DecValTok{1}\NormalTok{), }\DataTypeTok{F2=}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\OperatorTok{-}\DecValTok{1}\NormalTok{), }\DataTypeTok{F1xF2=}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{)))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## F1 F2 F1xF2
## [1,] 1 1 1
## [2,] 1 -1 -1
## [3,] -1 1 -1
## [4,] -1 -1 1
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{cor}\NormalTok{(Xsum)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## F1 F2 F1xF2
## F1 1 0 0
## F2 0 1 0
## F1xF2 0 0 1
\end{verbatim}

\noindent Notice that the 
correlations between the different contrasts (i.e., the off-diagonals) are exactly \(0\). \textsc{Sum contrasts} coding one multi-level factor, however, are not orthogonal to each other:

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{cor}\NormalTok{(}\KeywordTok{contr.sum}\NormalTok{(}\DecValTok{4}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [,1] [,2] [,3]
## [1,] 1.0 0.5 0.5
## [2,] 0.5 1.0 0.5
## [3,] 0.5 0.5 1.0
\end{verbatim}

\noindent Here, the correlations between individual contrasts, which appear in the off-diagonals, deviate from \(0\), indicating non-orthogonality. The same is also true for \textsc{treatment} and \textsc{repeated contrasts}:

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{cor}\NormalTok{(}\KeywordTok{contr.sdif}\NormalTok{(}\DecValTok{4}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 2-1 3-2 4-3
## 2-1 1.000 0.577 0.333
## 3-2 0.577 1.000 0.577
## 4-3 0.333 0.577 1.000
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{cor}\NormalTok{(}\KeywordTok{contr.treatment}\NormalTok{(}\DecValTok{4}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 2 3 4
## 2 1.000 -0.333 -0.333
## 3 -0.333 1.000 -0.333
## 4 -0.333 -0.333 1.000
\end{verbatim}

Orthogonality of contrasts plays a critical role when computing the generalized inverse. In the inversion operation, orthogonal contrasts are converted independently of each other. That is, the presence or absence of another orthogonal contrast does not change the resulting weights. \textcolor{black}{In fact, for orthogonal contrasts, applying the generalized matrix inverse to the hypothesis matrix simply yields a contrast matrix that is a scaled version of the hypothesis matrix (for mathematical details see Appendix}~\ref{app:InverseOperation}). \textcolor{black}{The crucial point here is the following. 
As long as contrasts are fully orthogonal, and as long as one does not care about the scaling of predictors, it is not necessary to use the generalized matrix inverse, and one can code the contrast matrix directly. However, when scaling is of interest, or when non-orthogonal or non-centered contrasts are involved, then the generalized inverse formulation of the hypothesis matrix is needed to specify contrasts correctly.}

\hypertarget{the-role-of-the-intercept-in-non-centered-contrasts}{
\subsection{The role of the intercept in non-centered contrasts}\label{the-role-of-the-intercept-in-non-centered-contrasts}}

A related question concerns whether the intercept needs to be considered when computing the generalized inverse for a contrast. It turns out that considering the intercept is necessary for contrasts that are not centered. This is the case for \textsc{treatment contrasts}; e.g., for the treatment contrast for two factor levels, \texttt{c1vs0\ =\ c(0,1)}: \(\sum_i c_i = 0 + 1 = 1\). One can show that the condition for a contrast to be centered (i.e., \(\sum_i c_i = 0\)) is identical to the condition for it to be \enquote{orthogonal to the intercept}. Remember that for the intercept, all contrast coefficients are equal to one: \(c_{1,i} = 1\) \textcolor{black}{(here, $c_{1,i}$ indicates the vector of contrast coefficients associated with the intercept)}. We enter these contrast coefficient values into the formula testing whether a contrast is orthogonal to the intercept \textcolor{black}{(here, $c_{2,i}$ indicates the vector of contrast coefficients associated with some contrast for which we want to test whether it is \enquote{orthogonal to the intercept})}: \(\sum_i c_{1,i} \cdot c_{2,i} = \sum_i 1 \cdot c_{2,i} = \sum_i c_{2,i} = 0\). The resulting condition, \(\sum_i c_{2,i} = 0\), is exactly the condition for a contrast to be centered. 
Because of this analogy, \textsc{treatment contrasts} can be viewed to be `not orthogonal to the intercept'. \textcolor{black}{This means that the intercept needs to be considered when computing the generalized inverse for treatment contrasts. As we have discussed above, when the intercept is included in the hypothesis matrix, the weights for this intercept term should sum to one, as this yields a column of ones for the intercept term in the contrast matrix.} \hypertarget{a-closer-look-at-hypothesis-and-contrast-matrices}{ \section{A closer look at hypothesis and contrast matrices}\label{a-closer-look-at-hypothesis-and-contrast-matrices}} \hypertarget{inverting-the-procedure-from-a-contrast-matrix-to-the-associated-hypothesis-matrix}{ \subsection{Inverting the procedure: From a contrast matrix to the associated hypothesis matrix}\label{inverting-the-procedure-from-a-contrast-matrix-to-the-associated-hypothesis-matrix}} \textcolor{black}{One important point to appreciate about the generalized inverse matrix operation is that applying the inverse twice yields back the original matrix. It follows that applying the inverse operation twice to the hypothesis matrix $H_c$ yields back the original hypothesis matrix: $(H_c^{inv})^{inv} = H_c$. 
For example, let us look at the hypothesis matrix of a \textsc{repeated contrast}:} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{t}\NormalTok{(HcRE <-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(}\DataTypeTok{c2vs1=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DataTypeTok{F2=} \DecValTok{1}\NormalTok{,}\DataTypeTok{F3=} \DecValTok{0}\NormalTok{), } \DataTypeTok{c3vs1=}\KeywordTok{c}\NormalTok{( }\DecValTok{0}\NormalTok{, }\DecValTok{-1}\NormalTok{, }\DecValTok{1}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## c2vs1 c3vs1 ## F1 -1 0 ## F2 1 -1 ## F3 0 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{t}\NormalTok{(}\KeywordTok{ginv2}\NormalTok{(}\KeywordTok{ginv2}\NormalTok{(HcRE)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## c2vs1 c3vs1 ## F1 -1 0 ## F2 1 -1 ## F3 0 1 \end{verbatim} It is clear that applying the generalized inverse twice to the hypothesis matrix yields back the same matrix. This also implies that taking the contrast matrix \(X_c\) (i.e., \(X_c = H_c^{inv}\)), and applying the generalized inverse operation, gets back the hypothesis matrix \(X_c^{inv} = H_c\). Why is this of interest? This means that if one has a given contrast matrix, e.g., one that is provided by standard software packages, or one that is described in a research paper, then one can apply the generalized inverse operation to obtain the hypothesis matrix. This will tell us exactly which hypotheses were tested by the given contrast matrix. \textcolor{black}{As an example, let us take a closer look at this using the \textsc{treatment contrast}. Let's start with a \textsc{treatment contrast} for a factor with three levels F1, F2, and F3. 
Adding a column of $1$s adds the intercept (}\texttt{int}\textcolor{black}{):} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(XcTr <-}\StringTok{ }\KeywordTok{cbind}\NormalTok{(}\DataTypeTok{int=}\DecValTok{1}\NormalTok{,}\KeywordTok{contr.treatment}\NormalTok{(}\DecValTok{3}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## int 2 3 ## 1 1 0 0 ## 2 1 1 0 ## 3 1 0 1 \end{verbatim} \textcolor{black}{The next step is to} apply the generalized inverse operation: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{t}\NormalTok{(}\KeywordTok{ginv2}\NormalTok{(XcTr))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## int 2 3 ## 1 1 -1 -1 ## 2 0 1 0 ## 3 0 0 1 \end{verbatim} \textcolor{black}{This shows} the hypotheses that the \textsc{treatment contrasts} test, by extracting the weights from the hypothesis matrix. The first contrast (\texttt{int}) has weights \texttt{cH00\ \textless{}-\ c(1,\ 0,\ 0)}. \textcolor{black}{Writing} this down as a formal hypothesis test \textcolor{black}{yields}: \begin{equation} H_{0_0}: 1 \cdot \mu_1 + 0 \cdot \mu_2 + 0 \cdot \mu_3 = 0 \end{equation} That is, the first contrast tests the hypothesis \(H_{0_0}: \mu_1 = 0\) that the mean of the first factor level \(\mu_1\) is zero. As the factor level F1 was defined as the baseline condition in the treatment contrast, this means that for treatment contrasts, the intercept captures the condition mean of the baseline condition. This is the exact same result that \textcolor{black}{was shown} at the beginning of this paper, when first introducing treatment contrasts (see equation \ref{eq:trmtcontrfirstmention}). \textcolor{black}{We also extract the weights for the other contrasts from the hypothesis matrix. The weights for the second contrast are} \texttt{cH01\ \textless{}-\ c(-1,\ 1,\ 0)}. 
\textcolor{black}{This is written} as a formal hypothesis test: \begin{equation} H_{0_1}: -1 \cdot \mu_1 + 1 \cdot \mu_2 + 0 \cdot \mu_3 = 0 \end{equation} \textcolor{black}{The second contrast tests the difference in condition means between the first and the second factor level, i.e., it tests the null hypothesis that the difference in condition means of the second minus the first factor levels is zero $H_{0_1}: \mu_2 - \mu_1 = 0$.} \textcolor{black}{We also extract the weights for the last contrast, which are} \texttt{cH02\ \textless{}-\ c(-1,\ 0,\ 1)}\textcolor{black}{, and write them as a formal hypothesis test:} \begin{equation} H_{0_2}: -1 \cdot \mu_1 + 0 \cdot \mu_2 + 1 \cdot \mu_3 = 0 \end{equation} \textcolor{black}{This contrast tests the difference between the third (F3) and the first (F1) condition means, and tests the null hypothesis that the difference is zero: $H_{0_2}: \mu_3 - \mu_1 = 0$. These results correspond to what we know about treatment contrasts, i.e., that treatment contrasts test the difference of each group to the baseline condition. They demonstrate that it is possible to use the generalized inverse to learn about the hypotheses that a given set of contrasts tests.} \hypertarget{the-importance-of-the-intercept-when-transforming-between-the-contrast-matrix-and-the-hypothesis-matrix-in-non-centered-contrasts}{ \subsection{The importance of the intercept when transforming between the contrast matrix and the hypothesis matrix in non-centered contrasts}\label{the-importance-of-the-intercept-when-transforming-between-the-contrast-matrix-and-the-hypothesis-matrix-in-non-centered-contrasts}} The above example of the treatment contrast also demonstrates that it is vital to consider the intercept when doing the transformation between the contrast matrix and the hypothesis matrix. 
Let us have a look at what the hypothesis matrix looks like when the intercept is ignored in the inversion: \begin{Shaded} \begin{Highlighting}[] \NormalTok{(XcTr <-}\StringTok{ }\KeywordTok{contr.treatment}\NormalTok{(}\DecValTok{3}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 2 3 ## 1 0 0 ## 2 1 0 ## 3 0 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{t}\NormalTok{(Hc <-}\StringTok{ }\KeywordTok{ginv2}\NormalTok{(XcTr))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 2 3 ## 1 0 0 ## 2 1 0 ## 3 0 1 \end{verbatim} \textcolor{black}{Now, the hypothesis matrix looks very different. In fact it looks just the same as the contrast matrix. However, the hypothesis matrix does not code any reasonable hypotheses or comparisons any more: The first contrast now tests the hypothesis that the condition mean for F2 is zero, $H_{0_1}: \mu_2 = 0$. The second contrast now tests the hypothesis that the condition mean for F3 is zero, $H_{0_2}: \mu_3 = 0$. However, we know that these are the wrong hypotheses for the \textsc{treatment contrast} when the intercept is included in the model. This demonstrates that it is important to consider the intercept in the generalized inverse. As explained earlier in the section on non-/orthogonal contrasts, this is important for contrasts that are not centered. 
For centered contrasts, such as the \textsc{sum contrast} or the \textsc{repeated contrast}, including or excluding the intercept does not change the results.}

\hypertarget{the-hypothesis-matrix-and-contrast-matrix-in-matrix-form}{
\subsection{The hypothesis matrix and contrast matrix in matrix form}\label{the-hypothesis-matrix-and-contrast-matrix-in-matrix-form}}

\hypertarget{matrix-notation-for-the-contrast-matrix}{
\subsubsection{Matrix notation for the contrast matrix}\label{matrix-notation-for-the-contrast-matrix}}

\textcolor{black}{We have discussed above the relation of contrasts to linear/multiple regression analysis, i.e., that contrasts encode numeric predictor variables (covariates) for testing comparisons between discrete conditions in a linear/multiple regression model. The introduction of \textsc{treatment contrasts} showed that a contrast can be used as the predictor $x$ in the linear regression equation $y = \beta_0 + \beta_1 \cdot x$. To repeat: in the treatment contrast, if $x$ is $0$ for the baseline condition, the predicted data is $\beta_0 + \beta_1 \cdot 0 = \beta_0$, indicating that the intercept $\beta_0$ is the prediction for the mean of the baseline factor level (}\texttt{F1}\textcolor{black}{). If $x$ is $1$ (}\texttt{F2}\textcolor{black}{), then the predicted data is $\beta_0 + \beta_1 \cdot 1 = \beta_0 + \beta_1$. Both of these predictions for conditions $x = 0$ and $x = 1$ are summarized in a single equation using matrix notation (also see equation} \eqref{eq:lm1} in the introduction; cf.~Bolker, 2018). Here, the different possible values of \(x\) \textcolor{black}{are represented} in the contrast matrix \(X_c\).

\begin{equation}
X_c = \left(\begin{array}{cc}
1 & 0 \\
1 & 1 \end{array} \right)
\end{equation}

\textcolor{black}{This matrix has one row for each condition/group of the study, i.e., here, it has 2 rows. The matrix has two columns. 
The second column (i.e., on the right-hand side) contains the treatment contrast with $x_1 = 0$ and $x_2 = 1$. The first column of $X_c$ contains a column of $1$s, indicating that the intercept $\beta_0$ is added in each condition.}

\textcolor{black}{Multiplying}\protect\rmarkdownfootnote{Matrix multiplication is defined as follows. Consider a matrix \(X\) with three rows and two columns \(X = \left(\begin{array}{cc} x_{1,1} & x_{1,2} \\ x_{2,1} & x_{2,2} \\ x_{3,1} & x_{3,2} \end{array} \right)\), and a vector with two entries \(\beta = \left(\begin{array}{c} \beta_0 \\ \beta_1 \end{array} \right)\). The matrix \(X\) can be multiplied with the vector \(\beta\) as follows: \(X \beta = \left(\begin{array}{cc} x_{1,1} & x_{1,2} \\ x_{2,1} & x_{2,2} \\ x_{3,1} & x_{3,2} \end{array} \right) \left(\begin{array}{c} \beta_0 \\ \beta_1 \end{array} \right) = \left(\begin{array}{c} x_{1,1} \cdot \beta_0 + x_{1,2} \cdot \beta_1 \\ x_{2,1} \cdot \beta_0 + x_{2,2} \cdot \beta_1 \\ x_{3,1} \cdot \beta_0 + x_{3,2} \cdot \beta_1 \end{array} \right) = \left(\begin{array}{c} y_1 \\ y_2 \\ y_3 \end{array} \right) = y\), a \(3 \times 1\) vector. Multiplying an \(n\times p\) matrix with another \(p\times m\) matrix will yield an \(n\times m\) matrix. 
If the number of columns of the first matrix is not the same as the number of rows of the second matrix, matrix multiplication is undefined.} \textcolor{black}{this contrast matrix $X_c$ with the vector of regression coefficients $\beta=(\beta_0, \beta_1)$ containing the intercept $\beta_0$ and the effect of the factor $x$, $\beta_1$ (i.e., the slope), yields the expected response times for conditions} \texttt{F1} and \texttt{F2}\textcolor{black}{, $y_1$ and $y_2$, which here correspond to the condition means, $\mu_1$ and $\mu_2$:} \begin{equation} X_c \beta = \left(\begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array} \right) \left(\begin{array}{c} \beta_0 \\ \beta_1 \end{array} \right) = \left(\begin{array}{c} 1 \cdot \beta_0 + 0 \cdot \beta_1 \\ 1 \cdot \beta_0 + 1 \cdot \beta_1 \end{array} \right) = \left(\begin{array}{c} \beta_0 \\ \beta_0 + \beta_1 \end{array} \right) = \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array}\right) = \mu \quad \label{cdef} \end{equation} More compactly: \begin{equation} \mu = X_c \beta \label{eq:lm0} \end{equation} This matrix formulation can be implemented in R. Consider again the simulated data displayed in Figure~\ref{fig:Fig1Means} \textcolor{black}{with two factor levels} \texttt{F1} and \texttt{F2}\textcolor{black}{, and condition means of $\mu_1 = 0.8$ and $\mu_2 = 0.4$. The \textsc{treatment contrast} codes condition} \texttt{F1} \textcolor{black}{as the baseline condition with $x = 0$, and condition} \texttt{F2} \textcolor{black}{as $x = 1$. As shown in Table}~\ref{tab:table2}, the estimated regression coefficients were \(\beta_0 = 0.8\) and \(\beta_1 = -0.4\). 
The contrast matrix \(X_c\) can now be constructed as follows: \begin{Shaded} \begin{Highlighting}[] \NormalTok{(XcTr <-}\StringTok{ }\KeywordTok{cbind}\NormalTok{(}\DataTypeTok{int=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\DecValTok{1}\NormalTok{,}\DataTypeTok{F2=}\DecValTok{1}\NormalTok{), }\DataTypeTok{c2vs1=}\KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## int c2vs1 ## F1 1 0 ## F2 1 1 \end{verbatim} The regression coefficients can be written as: \begin{Shaded} \begin{Highlighting}[] \NormalTok{(beta <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\FloatTok{0.8}\NormalTok{,}\OperatorTok{-}\FloatTok{0.4}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.8 -0.4 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{## convert to a 2x1 vector:} \NormalTok{beta <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(beta,}\DataTypeTok{ncol =} \DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} Multiplying the \(2\times 2\) contrast matrix with the estimated regression coefficients, a \(2\times 1\) vector, gives predictions of the condition means: \begin{Shaded} \begin{Highlighting}[] \NormalTok{XcTr }\OperatorTok{\%*\%}\StringTok{ }\NormalTok{beta} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] ## F1 0.8 ## F2 0.4 \end{verbatim} As expected, the matrix multiplication yields the condition means. \hypertarget{using-the-generalized-inverse-to-estimate-regression-coefficients}{ \subsubsection{Using the generalized inverse to estimate regression coefficients}\label{using-the-generalized-inverse-to-estimate-regression-coefficients}} One key question remains unanswered by this representation of the linear model: Although in our example we know the contrast matrix \(X_c\) and the condition means \(\mu\), in a real data analysis both the condition means \(\mu\) and the regression coefficients \(\beta\) are unknown and need to be estimated from the data \(y\) (this is what the command \texttt{lm()} does).
\textcolor{black}{Can we use the matrix notation $\mu = X_c \beta$ (cf. equation} \ref{eq:lm0}) to estimate the regression coefficients \(\beta\)? That is, can we reformulate the equation by writing the regression coefficients \(\beta\) on one side of the equation, such that we can compute \(\beta\)? Intuitively, what \textcolor{black}{needs to be done} for this is to \enquote{divide by \(X_c\)}. This would yield \(1 \cdot \beta = \beta\) on the right side of the equation, and \enquote{one divided by \(X_c\)} times \(\mu\) on the left-hand side. That is, this would allow us to solve for the regression coefficients \(\beta\) and would provide a formula to compute them. Indeed, the generalized matrix inverse operation does for matrices exactly what we intuitively refer to as \enquote{divide by \(X_c\)}, that is, it computes the inverse of a matrix such that pre-multiplying the inverse of a matrix with the matrix yields \(1\): \(X_c^{inv} \cdot X_c = 1\) (where \(1\) is the identity matrix, with \(1\)s on the diagonal and off-diagonal \(0\)s).
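This identity property can also be checked numerically outside of R. The following is a small sketch using NumPy's Moore-Penrose pseudoinverse (\texttt{np.linalg.pinv}); the matrix and the condition means ($0.8$, $0.4$) mirror the treatment-contrast example above, and the variable names are our own illustration, not part of the paper's R code:

```python
import numpy as np

# Treatment-contrast matrix for a two-level factor:
# first column = intercept, second column = treatment contrast
Xc = np.array([[1.0, 0.0],
               [1.0, 1.0]])

# Generalized (Moore-Penrose) inverse; for this square, full-rank matrix
# it coincides with the ordinary matrix inverse
Xc_inv = np.linalg.pinv(Xc)

# Pre-multiplying the inverse with the matrix yields the identity matrix
print(np.round(Xc_inv @ Xc))

# Multiplying the inverse with the condition means recovers the
# regression coefficients: intercept 0.8 and slope -0.4
mu = np.array([0.8, 0.4])
print(Xc_inv @ mu)
```

The second product reproduces the coefficients reported by \texttt{lm()} for this example: intercept $0.8$ and slope $-0.4$.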
For example, we take the contrast matrix of a \textsc{treatment contrast} for a factor with two levels, and we compute the related hypothesis matrix using the generalized inverse: Pre-multiplying the contrast matrix with its inverse yields the identity matrix, that is, a matrix where the diagonals are all \(1\) and the off-diagonals are all \(0\): \begin{Shaded} \begin{Highlighting}[] \NormalTok{XcTr} \end{Highlighting} \end{Shaded} \begin{verbatim} ## int c2vs1 ## F1 1 0 ## F2 1 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{ginv2}\NormalTok{(XcTr)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## F1 F2 ## int 1 0 ## c2vs1 -1 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{fractions}\NormalTok{( }\KeywordTok{ginv2}\NormalTok{(XcTr) }\OperatorTok{\%*\%}\StringTok{ }\NormalTok{XcTr )} \end{Highlighting} \end{Shaded} \begin{verbatim} ## int c2vs1 ## int 1 0 ## c2vs1 0 1 \end{verbatim} \textcolor{black}{That is, multiplying equation} \ref{eq:lm0} \textcolor{black}{by $X_c^{inv}$ yields (for details see Appendix}~\ref{app:LinearAlgebra}): \begin{align} X_c^{inv} \mu &= X_c^{inv} X_c \beta \\ X_c^{inv} \mu &= \beta \end{align} \textcolor{black}{This shows that the generalized matrix inverse actually allows us to estimate regression coefficients from the data. This is done by (matrix) multiplying the hypothesis matrix (i.e., the inverse contrast matrix) with the condition means: $\hat{\beta} = X_c^{inv} \mu$. Importantly, this derivation ignored residual errors in the regression equation. For a full derivation see the Appendix}~\ref{app:LinearAlgebra}. Consider again the simple example of a \textsc{treatment contrast} for a factor with two levels. \begin{Shaded} \begin{Highlighting}[] \NormalTok{XcTr} \end{Highlighting} \end{Shaded} \begin{verbatim} ## int c2vs1 ## F1 1 0 ## F2 1 1 \end{verbatim} Inverting the contrast matrix yields the hypothesis matrix.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{(HcTr <-}\StringTok{ }\KeywordTok{ginv2}\NormalTok{(XcTr))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## F1 F2 ## int 1 0 ## c2vs1 -1 1 \end{verbatim} \textcolor{black}{So far, we have always displayed the resulting hypothesis matrix with rows and columns switched using matrix transpose (R function} \texttt{t()}\textcolor{black}{) to make it more easily readable. That is, each column represented one contrast, and each row represented one condition. However, in the present paragraph we discuss the matrix notation of the linear model, and we therefore show the hypothesis matrix in its original, untransposed form, where each row represents one contrast/hypothesis and each column represents one factor level. That is, the first row of} \texttt{HcTr} (\texttt{int\ =\ c(1,0)}) \textcolor{black}{encodes the intercept and the null hypothesis that the condition mean of} \texttt{F1} \textcolor{black}{is zero. The second row} \texttt{c2vs1\ =\ c(-1,1)} \textcolor{black}{encodes the null hypothesis that the condition mean of} \texttt{F2} \textcolor{black}{is identical to the condition mean of} \texttt{F1}. \textcolor{black}{When we multiply this hypothesis matrix $H_c$ with the observed condition means $\mu$, this yields the regression coefficients. For illustration we use the condition means from our first example of the treatment contrast (see Figure}~\ref{fig:Fig1Means}), \textcolor{black}{encoded in the data frame} \texttt{table1} \textcolor{black}{in variable} \texttt{table1\$M}.
\textcolor{black}{The resulting regression coefficients are the same values as in the \texttt{lm} command presented in Table}~\ref{tab:table2}: \begin{Shaded} \begin{Highlighting}[] \NormalTok{mu <-}\StringTok{ }\NormalTok{table1}\OperatorTok{$}\NormalTok{M} \NormalTok{HcTr }\OperatorTok{\%*\%}\StringTok{ }\NormalTok{mu} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] ## int 0.8 ## c2vs1 -0.4 \end{verbatim} \hypertarget{estimating-regression-coefficients-and-testing-hypotheses-using-condition-means}{ \subsubsection{Estimating regression coefficients and testing hypotheses using condition means}\label{estimating-regression-coefficients-and-testing-hypotheses-using-condition-means}} We explained above that the hypothesis matrix contains weights for the condition means to define the hypothesis that a given contrast tests. For example, \textcolor{black}{consider again} the hypothesis matrix from a treatment contrast for a factor with two levels: \begin{Shaded} \begin{Highlighting}[] \NormalTok{HcTr} \end{Highlighting} \end{Shaded} \begin{verbatim} ## F1 F2 ## int 1 0 ## c2vs1 -1 1 \end{verbatim} \textcolor{black}{The first row codes the weights for the intercept. This encodes the following hypothesis:} \begin{equation} H_{0_0}: 1 \cdot \mu_1 + 0 \cdot \mu_2 = \mu_1 = 0 \end{equation} In fact, writing this down as a hypothesis involves a short-cut: What the hypothesis matrix actually encodes are weights for how to combine condition means to compute regression coefficients. Our null hypotheses are then tested as \(H_{0_x}: \beta_x = 0\). For the present example, \textcolor{black}{the estimate of} the regression coefficient for the intercept \textcolor{black}{is}: \begin{equation} \hat{\beta_0} = 1 \cdot \mu_1 + 0 \cdot \mu_2 = \mu_1 \end{equation} \textcolor{black}{For the example data-set:} \begin{equation} \hat{\beta_0} = 1 \cdot 0.8 + 0 \cdot 0.4 = 0.8 \end{equation} \textcolor{black}{This is exactly the same value that the} \texttt{lm()} \textcolor{black}{command showed.
A second step tests the null hypothesis that the regression coefficient is zero, i.e., $H_{0_0}: \beta_0 = 0$.} The same analysis \textcolor{black}{is done} for the slope, which is coded in the second row of the hypothesis matrix. The hypothesis is expressed as: \begin{equation} H_{0_1}: -1 \cdot \mu_1 + 1 \cdot \mu_2 = \mu_2 - \mu_1 = 0 \end{equation} \textcolor{black}{This involves first computing the regression coefficient for the slope:} \begin{equation} \hat{\beta_1} = -1 \cdot \mu_1 + 1 \cdot \mu_2 = \mu_2 - \mu_1 \end{equation} \textcolor{black}{For the example data-set, this yields:} \begin{equation} \hat{\beta_1} = -1 \cdot 0.8 + 1 \cdot 0.4 = - 0.4 \end{equation} \textcolor{black}{Again, this is the same value for the slope as given by the command} \texttt{lm()}\textcolor{black}{. A second step tests the hypothesis that the slope is zero: $H_{0_1}: \beta_1 = 0$.} To summarize, we write the formulas for both regression coefficients in a single equation. \textcolor{black}{The first step is to write down} the hypothesis matrix: \begin{equation} H_c = X_c^{inv} = \left(\begin{array}{rr} 1 & 0 \\ -1 & 1 \end{array} \right) \end{equation} \textcolor{black}{Here, the first row contains the weights for the intercept (}\texttt{int\ =\ c(1,0)}\textcolor{black}{), and the second row contains the weights for the slope (}\texttt{c2vs1\ =\ c(-1,1)}\textcolor{black}{). Multiplying this hypothesis matrix with the average response times in conditions} \texttt{F1} and \texttt{F2}\textcolor{black}{, $\mu_1$ and $\mu_2$, yields estimates of the regression coefficients: $\hat{\beta} = \left(\begin{array}{c} \hat{\beta_0} \\ \hat{\beta_1} \end{array}\right)$.
They show that the intercept $\beta_0$ is equal to the average value of} \texttt{F1}\textcolor{black}{, $\mu_1$, and the slope $\beta_1$ is equal to the difference in average values between} \texttt{F2} \textcolor{black}{and} \texttt{F1}\textcolor{black}{, $\mu_2 - \mu_1$:} \begin{equation} X_c^{inv} \mu = \left(\begin{array}{rr} 1 & 0 \\ -1 & 1 \end{array} \right) \left(\begin{array}{c} \mu_1 \\ \mu_2 \end{array} \right) = \left(\begin{array}{rr} 1 \cdot \mu_1 + 0 \cdot \mu_2 \\ -1 \cdot \mu_1 + 1 \cdot \mu_2 \end{array} \right) = \left(\begin{array}{c} \mu_1 \\ \mu_2 - \mu_1 \end{array} \right) = \left(\begin{array}{c} \hat{\beta_0} \\ \hat{\beta_1} \end{array}\right) = \hat{\beta} \label{cinvdef} \end{equation} \textcolor{black}{Or in short (see equation}~\ref{eq:beta}): \begin{equation} \hat{\beta} = X_c^{inv} \mu \end{equation} \textcolor{black}{These analyses show the important result that the hypothesis matrix is used to compute regression coefficients from the condition means. This result is derived from the matrix formulation of the linear model (for details see Appendix}~\ref{app:LinearAlgebra}). \textcolor{black}{This is one reason why it is important to understand the matrix formulation of the linear model.} \hypertarget{the-design-or-model-matrix-and-its-generalized-inverse}{ \subsubsection{The design or model matrix and its generalized inverse}\label{the-design-or-model-matrix-and-its-generalized-inverse}} \textcolor{black}{Until now we have been applying the generalized inverse to the hypothesis matrix to obtain the contrast matrix $X_c$. The contrast matrix contains one row per condition. When performing a linear model analysis, the contrast matrix is translated into a so-called \textit{design or model matrix} $X$. This matrix contains one row for every data point in the vector of data $y$.
As a consequence, the same condition will appear more than once in the design matrix $X$.} \textcolor{black}{The key point to note here is that} the generalized inverse operation can not only be applied to the contrast matrix \(X_c\) but also to the design matrix \(X\), which contains one row per data point. The important result is that taking the generalized inverse of this design matrix \(X\) and multiplying it with the raw data (i.e., the dependent variable) also yields the regression coefficients: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{contrasts}\NormalTok{(simdat}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{1}\NormalTok{)} \KeywordTok{data.frame}\NormalTok{(X <-}\StringTok{ }\KeywordTok{model.matrix}\NormalTok{( }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F,simdat)) }\CommentTok{# obtain design matrix X} \end{Highlighting} \end{Shaded} \begin{verbatim} ## X.Intercept. F1 ## 1 1 0 ## 2 1 0 ## 3 1 0 ## 4 1 0 ## 5 1 0 ## 6 1 1 ## 7 1 1 ## 8 1 1 ## 9 1 1 ## 10 1 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(Xinv <-}\StringTok{ }\KeywordTok{ginv2}\NormalTok{(X)) }\CommentTok{# take generalized inverse of X} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 1 2 3 4 5 6 7 8 9 10 ## (Intercept) 1/5 1/5 1/5 1/5 1/5 0 0 0 0 0 ## F1 -1/5 -1/5 -1/5 -1/5 -1/5 1/5 1/5 1/5 1/5 1/5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(y <-}\StringTok{ }\NormalTok{simdat}\OperatorTok{$}\NormalTok{DV) }\CommentTok{# raw data} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.997 0.847 0.712 0.499 0.945 0.183 0.195 0.608 0.556 0.458 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{Xinv }\OperatorTok{\%*\%}\StringTok{ }\NormalTok{y} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] ## (Intercept) 0.8 ## F1 -0.4 \end{verbatim} \textcolor{black}{The generalized inverse automatically generates covariates} that perform the averaging across
the individual data points per condition. E.g., the estimate of the intercept \(\hat{\beta}_0\) is computed from the observations \(y_i\) of the dependent variable with the formula: \begin{equation} \hat{\beta}_0 = \frac{1}{5} \cdot y_1 + \frac{1}{5} \cdot y_2 + \frac{1}{5} \cdot y_3 + \frac{1}{5} \cdot y_4 + \frac{1}{5} \cdot y_5 + 0 \cdot y_6 + 0 \cdot y_7 + 0 \cdot y_8 + 0 \cdot y_9 + 0 \cdot y_{10} = \frac{1}{5} \cdot \sum_{i=1}^{5} y_i \end{equation} \noindent which expresses the formula for estimating \(\mu_1\), the mean response time in condition F1. \hypertarget{effectSize}{ \section{The variance explained by each predictor and effect size statistics for linear contrasts}\label{effectSize}} \textcolor{black}{Should one include all predictors in a model? And what is the effect size of a contrast? To answer these questions, it is important to understand the concept of variance explained. We turn to this concept next.} \hypertarget{sum-of-squares-and-r2_alerting-as-measures-of-variability-and-effect-size}{ \subsection{\texorpdfstring{Sum of squares and \(r^2_{alerting}\) as measures of variability and effect size}{Sum of squares and r\^{}2\_\{alerting\} as measures of variability and effect size}}\label{sum-of-squares-and-r2_alerting-as-measures-of-variability-and-effect-size}} \textcolor{black}{The sum of squares is a measure of the variability in the data. The total sum of squares in a data set equals the sum of squared deviations of individual data points $y_i$ from their mean $\bar{y}$: $SS_{total} = \sum_i (y_i - \bar{y})^2$. This can be partitioned into different components, which add up to the total sum of squares.
One component is the sum of squares associated with the residuals, that is the sum of squared deviations of $i$ individual observed data points $y_i$ from the value predicted by a linear model $y_{pred}$: $SS_{residuals} = \sum_i (y_i - y_{pred})^2$; another component, which we are interested in here, is the sum of squares associated with a certain factor, which is the sum of squared deviations of the $j$ condition means $\mu_j$ from their mean $\bar{\mu}$, where each squared deviation is multiplied by the number of data points $n_j$ that go into its computation: $SS_{effect} = \sum_j n_j \cdot (\mu_j - \bar{\mu})^2$.} \textcolor{black}{The sum of squares for a factor ($SS_{effect}$) can be further partitioned into the contributions of $k$ different linear contrasts: $SS_{effect} = \sum_k SS_{contrast \;k}$.}\protect\rmarkdownfootnote{\textcolor{black}{The $SS_{contrast}$ can be computed as follows: $SS_{contrast} = \frac{(\sum_j c_j \mu_j)^2}{\sum_j {c_j}^2 / n_j}$, where $j$ indexes the factor levels, $c_j$ are the contrast coefficients for each factor level, $\mu_j$ are the condition means, and $n_j$ is the number of data points per factor level $j$.}} A measure for the effect size of a linear contrast is the proportion of variance that it explains (i.e., sum of squares \emph{contrast}) from the total variance explained by the factor (i.e., sum of squares \emph{effect}), that is, \(r^2_{alerting} = \text{SS}_{contrast} / \text{SS}_{effect}\) (Baguley, 2012; Rosenthal, Rosnow, \& Rubin, 2000). This is computed by entering each contrast as an individual predictor into a linear model, and by then extracting the corresponding sum of squares from the output of the \texttt{anova()} function. This is illustrated with the example of \textsc{polynomial contrasts}. First, we assign \textsc{polynomial contrasts} to the factor \texttt{F}, extract the numeric predictor variables using the R function \texttt{model.matrix()}, and use these as covariates in a linear model analysis.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{(}\KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\KeywordTok{contr.poly}\NormalTok{(}\DecValTok{4}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## .L .Q .C ## [1,] -0.671 0.5 -0.224 ## [2,] -0.224 -0.5 0.671 ## [3,] 0.224 -0.5 -0.671 ## [4,] 0.671 0.5 0.224 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{simdat3X <-}\StringTok{ }\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat3)} \NormalTok{simdat3[,}\KeywordTok{c}\NormalTok{(}\StringTok{"cLinear"}\NormalTok{,}\StringTok{"cQuadratic"}\NormalTok{,}\StringTok{"cCubic"}\NormalTok{)] <-}\StringTok{ }\NormalTok{simdat3X[,}\DecValTok{2}\OperatorTok{:}\DecValTok{4}\NormalTok{]} \NormalTok{m1_mr.Xpol2 <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{cLinear }\OperatorTok{+}\StringTok{ }\NormalTok{cQuadratic }\OperatorTok{+}\StringTok{ }\NormalTok{cCubic, }\DataTypeTok{data=}\NormalTok{simdat3)} \end{Highlighting} \end{Shaded} Next, this model \textcolor{black}{is analyzed} using the R function \texttt{anova()}. \textcolor{black}{This yields the sum of squares (Sum Sq) explained by each of the covariates.} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(aovModel <-}\StringTok{ }\KeywordTok{anova}\NormalTok{(m1_mr.Xpol2))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Analysis of Variance Table ## ## Response: DV ## Df Sum Sq Mean Sq F value Pr(>F) ## cLinear 1 1600 1600 16 0.0010 ** ## cQuadratic 1 500 500 5 0.0399 * ## cCubic 1 900 900 9 0.0085 ** ## Residuals 16 1600 100 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# SumSq contrast} \NormalTok{SumSq <-}\StringTok{ }\NormalTok{aovModel[}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{,}\StringTok{"Sum Sq"}\NormalTok{]} \KeywordTok{names}\NormalTok{(SumSq) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"cLinear"}\NormalTok{,}\StringTok{"cQuadratic"}\NormalTok{,}\StringTok{"cCubic"}\NormalTok{)} \NormalTok{SumSq} \end{Highlighting} \end{Shaded} \begin{verbatim} ## cLinear cQuadratic cCubic ## 1600 500 900 \end{verbatim} Summing across the three contrasts that encode the factor \texttt{F} allows us to compute the total effect sum of squares associated with it. Now, \textcolor{black}{everything is available that is needed} to compute the \(r^2_{alerting}\) summary statistic: dividing the individual sum of squares by the total effect sum of squares yields \(r^2_{alerting}\). \begin{Shaded} \begin{Highlighting}[] \CommentTok{# SumSq effect} \KeywordTok{sum}\NormalTok{(SumSq)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 3000 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# r2 alerting} \KeywordTok{round}\NormalTok{(SumSq }\OperatorTok{/}\StringTok{ }\KeywordTok{sum}\NormalTok{(SumSq), }\DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## cLinear cQuadratic cCubic ## 0.53 0.17 0.30 \end{verbatim} \textcolor{black}{The results show that the expected linear trend explains} 53\(\%\) \textcolor{black}{of the variance in condition means of factor} \texttt{F}\textcolor{black}{. Based on the statistical test shown in the anova output, the linear trend has a significant effect on the dependent variable.
However, the effect size analysis shows it does not explain the full pattern of results, as nearly half the variance associated with factor} \texttt{F} \textcolor{black}{remains unexplained by it, whereas other effects, namely non-linear trends, seem to contribute to explaining the effect of factor} \texttt{F}\textcolor{black}{. This situation is a so-called} \emph{ecumenical} \textcolor{black}{outcome} (Abelson \& Prentice, 1997)\textcolor{black}{, where the a priori contrast (linear trend) is only one of several contrasts explaining the factor's effect. The fact that the a priori linear contrast is significant but has an $r^2_{alerting}$ clearly smaller than $1$ suggests that other contrasts seem to contribute to the effect of factor} \texttt{F}\textcolor{black}{. In a} so-called \emph{canonical} \textcolor{black}{outcome, in contrast, $r^2_{alerting}$ for a contrast would approach $1$, such that no other additional contrasts are needed to explain the effect of factor} \texttt{F}. \(r^2_{alerting}\) is useful for comparing the relative importance of different contrasts for a given data-set. However, it is not a general measure of effect size, such as \(\eta^2\), which is computed using the function call \texttt{etasq(Model)} from the package \texttt{heplots} (\(\eta^2_{partial}\)) (Friendly, 2010). \hypertarget{adding-all-contrasts-associated-with-a-factor}{ \subsection{Adding all contrasts associated with a factor}\label{adding-all-contrasts-associated-with-a-factor}} \textcolor{black}{The above discussion shows that the total variability associated with a factor can be decomposed into contributions from individual contrasts. This implies that even if only one of the contrasts associated with a factor is of interest in an analysis, it still makes sense to include the other contrasts associated with the factor.
For example, when using polynomial contrasts, e}ven if only the linear trend is of interest in an analysis, it still makes sense to include the contrasts for the quadratic and cubic trends. This is because there is a total amount of variance associated with each factor. One can capture all of this variance by using polynomial contrasts up \textcolor{black}{to} \(I-1\) degrees. That is, for a two-level factor, only a linear trend is tested. For a three-level factor, we test a linear and a quadratic trend. For a four-level factor, we additionally test a cubic trend. Specifying all these \(I-1\) polynomial trends allows us to capture all the variance associated with the factor. In case one is not interested in the cubic trend, one can simply leave this out of the model. However, this would also mean that some of the variance associated with the factor could be left unexplained. This unexplained variance would be added to the residual variance, and would impair our ability to detect effects in the linear (or in the quadratic) trend. This shows that if the variance associated with a factor is not explained by contrasts, then this unexplained variance will increase the residual variance, and reduce the statistical power to detect the effects of interest. It is therefore good practice to code contrasts that capture all the variance associated with a factor. \hypertarget{MR_ANOVA}{ \section{Examples of contrast coding in a factorial design with two factors}\label{MR_ANOVA}} Let us assume that the exact same four means that we have simulated above actually come from an \(A(2) \times B(2)\) between-subject-factor design rather than an F(4) between-subject-factor design. We simulate the data as shown below in Table \ref{tab:twobytwosimdatTab} and Figure \ref{fig:twobytwosimdatFig}. The means and standard deviations are exactly the same as in Figure \ref{fig:helmertsimdatFig}. 
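The summary statistics below are produced from data generated with \texttt{mixedDesign()}, the simulation helper accompanying this paper, which returns samples whose cell means and standard deviations match the requested values exactly. As a rough sketch of that idea (a hypothetical NumPy re-implementation, not the paper's code), one can draw normal samples per cell and rescale them to the requested mean and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(1)

# 2x2 between-subjects design: requested cell means, common SD, n per cell
means = {("A1", "B1"): 10.0, ("A1", "B2"): 20.0,
         ("A2", "B1"): 10.0, ("A2", "B2"): 40.0}
sd, n = 10.0, 5

rows = []
for (a, b), m in means.items():
    x = rng.normal(size=n)
    x = (x - x.mean()) / x.std(ddof=1)  # exact sample mean 0, sample SD 1
    dv = m + sd * x                     # exact sample mean m, sample SD 10
    rows += [(a, b, v) for v in dv]

# each cell now reproduces the requested summary statistics exactly
for (a, b), m in means.items():
    cell = np.array([v for aa, bb, v in rows if (aa, bb) == (a, b)])
    print(a, b, round(cell.mean(), 6), round(cell.std(ddof=1), 6))
```

The rescaling step is the reason the simulated cells match the requested means and standard deviations exactly rather than only in expectation.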
\begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:twobytwosimdatTab}Summary statistics for a two-by-two between-subjects factorial design.} \begin{tabular}{llllll} \toprule Factor A & \multicolumn{1}{c}{Factor B} & \multicolumn{1}{c}{N data} & \multicolumn{1}{c}{Means} & \multicolumn{1}{c}{Std. dev.} & \multicolumn{1}{c}{Std. errors}\\ \midrule A1 & B1 & 5 & 10.0 & 10.0 & 4.5\\ A1 & B2 & 5 & 20.0 & 10.0 & 4.5\\ A2 & B1 & 5 & 10.0 & 10.0 & 4.5\\ A2 & B2 & 5 & 40.0 & 10.0 & 4.5\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} \begin{figure} \caption{Means and error bars (showing standard errors) for a simulated data-set with a two-by-two between-subjects factorial design.} \label{fig:twobytwosimdatFig} \end{figure} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# generate 2 times 2 between subjects data:} \NormalTok{simdat4 <-}\StringTok{ }\KeywordTok{mixedDesign}\NormalTok{(}\DataTypeTok{B=}\KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{2}\NormalTok{), }\DataTypeTok{W=}\OtherTok{NULL}\NormalTok{, }\DataTypeTok{n=}\DecValTok{5}\NormalTok{, }\DataTypeTok{M=}\NormalTok{M, }\DataTypeTok{SD=}\DecValTok{10}\NormalTok{, }\DataTypeTok{long =} \OtherTok{TRUE}\NormalTok{) } \KeywordTok{names}\NormalTok{(simdat4)[}\DecValTok{1}\OperatorTok{:}\DecValTok{2}\NormalTok{] <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{,}\StringTok{"B"}\NormalTok{)} \KeywordTok{head}\NormalTok{(simdat4)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## A B id DV ## 1 A1 B1 1 26.195 ## 2 A1 B1 2 5.758 ## 3 A1 B1 3 11.862 ## 4 A1 B1 4 6.321 ## 5 A1 B1 5 -0.136 ## 6 A1 B2 6 18.380 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{table4 <-}\StringTok{ }\NormalTok{simdat4 }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{group\_by}\NormalTok{(A, B) }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{N=}\KeywordTok{length}\NormalTok{(DV), }\DataTypeTok{M=}\KeywordTok{mean}\NormalTok{(DV), }\DataTypeTok{SD=}\KeywordTok{sd}\NormalTok{(DV),
}\DataTypeTok{SE=}\NormalTok{SD}\OperatorTok{/}\KeywordTok{sqrt}\NormalTok{(N))} \NormalTok{GM <-}\StringTok{ }\KeywordTok{mean}\NormalTok{(table4}\OperatorTok{$}\NormalTok{M) }\CommentTok{# Grand Mean} \end{Highlighting} \end{Shaded} \hypertarget{the-difference-between-an-anova-and-a-multiple-regression}{ \subsection{The difference between an ANOVA and a multiple regression}\label{the-difference-between-an-anova-and-a-multiple-regression}} Let's compare the traditional ANOVA with a multiple regression (i.e., using contrasts as covariates) for analyzing these data. \begin{Shaded} \begin{Highlighting}[] \CommentTok{# ANOVA: B_A(2) times B_B(2)} \NormalTok{m2_aov <-}\StringTok{ }\KeywordTok{aov}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\NormalTok{A}\OperatorTok{*}\NormalTok{B }\OperatorTok{+}\StringTok{ }\KeywordTok{Error}\NormalTok{(id), }\DataTypeTok{data=}\NormalTok{simdat4)} \CommentTok{# MR: B_A(2) times B_B(2)} \NormalTok{m2_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{A}\OperatorTok{*}\NormalTok{B, }\DataTypeTok{data=}\NormalTok{simdat4)} \end{Highlighting} \end{Shaded} \begin{table}[!htbp] \begin{center} \begin{threeparttable} \caption{\label{tab:table16}Estimated ANOVA model.} \begin{tabular}{lllllll} \toprule Effect & \multicolumn{1}{c}{$F$} & \multicolumn{1}{c}{$\mathit{df}_1$} & \multicolumn{1}{c}{$\mathit{df}_2$} & \multicolumn{1}{c}{$\mathit{MSE}$} & \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{$\hat{\eta}^2_G$}\\ \midrule A & 5.00 & 1 & 16 & 100.00 & .040 & .238\\ B & 20.00 & 1 & 16 & 100.00 & < .001 & .556\\ A $\times$ B & 5.00 & 1 & 16 & 100.00 & .040 & .238\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} \begin{table}[!htbp] \begin{center} \begin{threeparttable} \caption{\label{tab:table17}Estimated regression model.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & 
\multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 10 & $[1$, $19]$ & 2.24 & .040\\ AA2 & 0 & $[-13$, $13]$ & 0.00 & > .999\\ BB2 & 10 & $[-3$, $23]$ & 1.58 & .133\\ AA2 $\times$ BB2 & 20 & $[1$, $39]$ & 2.24 & .040\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} The results from the two analyses, shown in Tables \ref{tab:table16} and \ref{tab:table17}, are very different. How can we see that they are different? Notice that \textcolor{black}{it is possible to} compute F-values from t-values using the fact that \(F(1,df) = t(df)^2\) (Snedecor \& Cochran, 1967) \textcolor{black}{(where $df$ indicates degrees of freedom)}. When applying this to the above multiple regression model, the F-value for factor \(A\) (i.e., \(AA2\)) is \(0.00^2 = 0\). This is obviously not the same as in the ANOVA, where the F-value for factor \(A\) is \(5\). Likewise, in the multiple regression factor \(B\) (i.e., \(BB2\)) has an F-value of \(1.58^2 = 2.5\), which also does not correspond to the F-value for factor \(B\) in the ANOVA of \(20\). Interestingly, however, the F-value for the interaction is identical in both models, as \(2.24^2 = 5\). The reason that the two results are different is that one needs \textsc{sum contrasts} in the linear model to \textcolor{black}{get the conventional tests from an ANOVA model.
(This is true for factors with two levels, but does not generalize to factors with more levels.)} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# define sum contrasts:} \KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{A) <-}\StringTok{ }\KeywordTok{contr.sum}\NormalTok{(}\DecValTok{2}\NormalTok{)} \KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{B) <-}\StringTok{ }\KeywordTok{contr.sum}\NormalTok{(}\DecValTok{2}\NormalTok{)} \NormalTok{m2_mr.sum <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{A}\OperatorTok{*}\NormalTok{B, }\DataTypeTok{data=}\NormalTok{simdat4)} \CommentTok{# Alternative using covariates} \NormalTok{mat_myC <-}\StringTok{ }\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{A}\OperatorTok{*}\NormalTok{B, simdat4)} \NormalTok{simdat4[, }\KeywordTok{c}\NormalTok{(}\StringTok{"GM"}\NormalTok{, }\StringTok{"FA"}\NormalTok{, }\StringTok{"FB"}\NormalTok{, }\StringTok{"FAxB"}\NormalTok{)] <-}\StringTok{ }\NormalTok{mat_myC} \NormalTok{m2_mr.v2 <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{FA }\OperatorTok{+}\StringTok{ }\NormalTok{FB }\OperatorTok{+}\StringTok{ }\NormalTok{FAxB, }\DataTypeTok{data=}\NormalTok{simdat4)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table18}Regression analysis with sum contrasts.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\ A1 & -5 & $[-10$, $0]$ & -2.24 & .040\\ B1 & -10 & $[-15$, $-5]$ & -4.47 & < .001\\ A1 $\times$ B1 & 5 & $[0$, $10]$ & 2.24 & .040\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} 
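The coefficient estimates under the two codings can also be cross-checked directly from the four cell means (10, 20, 10, 40): for each coding, solving $X \beta = \mu$ with the four-row contrast matrix recovers the regression coefficients. The following NumPy sketch is our own illustration of this check, not part of the paper's code:

```python
import numpy as np

# cell means for A1B1, A1B2, A2B1, A2B2 (from the simulated 2x2 data)
mu = np.array([10.0, 20.0, 10.0, 40.0])

# sum contrasts (contr.sum): columns = intercept, A, B, A:B
X_sum = np.array([[1,  1,  1,  1],
                  [1,  1, -1, -1],
                  [1, -1,  1, -1],
                  [1, -1, -1,  1]], dtype=float)
print(np.linalg.solve(X_sum, mu))  # [20. -5. -10. 5.], the sum-coded coefficients

# treatment contrasts: columns = intercept, A2, B2, A2:B2
X_tr = np.array([[1, 0, 0, 0],
                 [1, 0, 1, 0],
                 [1, 1, 0, 0],
                 [1, 1, 1, 1]], dtype=float)
print(np.linalg.solve(X_tr, mu))   # [10. 0. 10. 20.], the treatment-coded coefficients
```

Only under the sum coding do the two slopes correspond to (half) the main-effect differences, which is why the sum-coded t-values, but not the treatment-coded ones, square to the ANOVA F-values.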
\begin{table}[h]
\begin{center}
\begin{threeparttable}
\caption{\label{tab:table19}Defining sum contrasts using the model.matrix() function.}
\begin{tabular}{lllll}
\toprule
Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\
\midrule
Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\
FA & -5 & $[-10$, $0]$ & -2.24 & .040\\
FB & -10 & $[-15$, $-5]$ & -4.47 & < .001\\
FAxB & 5 & $[0$, $10]$ & 2.24 & .040\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

When using \textsc{sum contrasts}, the results from the multiple regression models (see Tables \ref{tab:table18} and \ref{tab:table19}) are identical to the results from the ANOVA (Table \ref{tab:table16}). This is visible in that the F-value for factor \(A\) is now \((-2.24)^2 = 5\), for factor \(B\) \textcolor{black}{it} is \((-4.47)^2 = 20\), and for the interaction \textcolor{black}{it} is again \(2.24^2 = 5\). All F-values are now the same as in the ANOVA model.

Next, we reproduce the \(A(2) \times B(2)\) - ANOVA with contrasts specified for the corresponding one-way \(F(4)\) ANOVA\textcolor{black}{, that is, by treating the $2 \times 2 = 4$ condition means as four levels of a single factor F}. In other words, we go back to the data frame simulated for the analysis of \textsc{repeated contrasts} \textcolor{black}{(see section} \emph{Further examples of contrasts illustrated with a factor with four levels}). We first define weights for condition means according to our hypotheses, invert this matrix, and use it as the contrast matrix for factor F in an LM. \textcolor{black}{We define weights of $1/4$ and $-1/4$. We do so because (a) we want to compare the mean of two conditions to the mean of two other conditions (e.g., factor A compares $\frac{F1 + F2}{2}$ to $\frac{F3 + F4}{2}$). Moreover, (b) we want to use sum contrasts, where the regression coefficients assess half the difference between means.
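For factor A, for instance, and writing $\mu_{F1}, \ldots, \mu_{F4}$ for the four condition means, steps (a) and (b) combine into the weights used below:
\[
\beta_A = \frac{1}{2}\left(\frac{\mu_{F1}+\mu_{F2}}{2} - \frac{\mu_{F3}+\mu_{F4}}{2}\right) = \frac{1}{4}\mu_{F1} + \frac{1}{4}\mu_{F2} - \frac{1}{4}\mu_{F3} - \frac{1}{4}\mu_{F4}.
\]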
Together (a+b), this yields weights of $1/2 \cdot 1/2 = 1/4$. The resulting contrast matrix contains contrast coefficients of $+1$ or $-1$, showing that we successfully implemented sum contrasts.} The results, presented in Table \ref{tab:table20}, are identical to the previous models. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{t}\NormalTok{(}\KeywordTok{fractions}\NormalTok{(HcInt <-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(}\DataTypeTok{A =}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F2=} \DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F3=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F4=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{),} \DataTypeTok{B =}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F2=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F3=} \DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F4=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{),} \DataTypeTok{AxB=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F2=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F3=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{,}\DataTypeTok{F4=} \DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{))))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## A B AxB ## F1 1/4 1/4 1/4 ## F2 1/4 -1/4 -1/4 ## F3 -1/4 1/4 -1/4 ## F4 -1/4 -1/4 1/4 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(XcInt <-}\StringTok{ }\KeywordTok{ginv2}\NormalTok{(HcInt))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## A B AxB ## F1 1 1 1 ## F2 1 -1 -1 ## F3 -1 1 -1 ## F4 -1 -1 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] 
\KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\NormalTok{XcInt} \NormalTok{m3_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat3)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table20}Main effects and interaction: Custom-defined sum contrasts (scaled).} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\ FA & -5 & $[-10$, $0]$ & -2.24 & .040\\ FB & -10 & $[-15$, $-5]$ & -4.47 & < .001\\ FAxB & 5 & $[0$, $10]$ & 2.24 & .040\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} This shows that \textcolor{black}{it is possible to} specify the contrasts not only for each factor (e.g., here in the \(2 \times 2\) design) separately. Instead, \textcolor{black}{one can also} pool all experimental conditions (or design cells) into one large factor (here factor F with \(4\) levels), and specify the contrasts for the main effects and for the interactions in the resulting one large contrast matrix simultaneously. \hypertarget{nestedEffects}{ \subsection{Nested effects}\label{nestedEffects}} \textcolor{black}{One} can specify hypotheses that do not correspond directly to main effects and interaction of the traditional ANOVA. For example, in a \(2 \times 2\) experimental design, where factor \(A\) codes word frequency (low/high) and factor \(B\) is part of speech (noun/verb), \textcolor{black}{one} can test the effect of word frequency within nouns and the effect of word frequency within verbs. Formally, \(A_{B1}\) versus \(A_{B2}\) \textcolor{black}{are} nested within levels of \(B\). 
Said differently, simple effects of factor \(A\) \textcolor{black}{are tested} for each of the levels of factor \(B\). In this version, we test whether there is a main effect of part of speech (\(B\); as in traditional ANOVA). However, instead of also testing the second main effect word frequency, \(A\), and the interaction, we test (1) whether the two levels of word frequency, \(A\), differ significantly for the first level of \(B\) (i.e., nouns) and (2) whether the two levels of \(A\) differ significantly for the second level of \(B\) (i.e., verbs). In other words, we test whether there are significant differences for \(A\) in \textcolor{black}{each of} the levels of \(B\). Often researchers have hypotheses about these differences, and not about the interaction. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{t}\NormalTok{(}\KeywordTok{fractions}\NormalTok{(HcNes <-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(}\DataTypeTok{B =}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=} \DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{,}\DataTypeTok{F2=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{,}\DataTypeTok{F3=} \DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{,}\DataTypeTok{F4=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{),} \DataTypeTok{B1xA=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\OperatorTok{-}\DecValTok{1}\NormalTok{ ,}\DataTypeTok{F2=} \DecValTok{0}\NormalTok{ ,}\DataTypeTok{F3=} \DecValTok{1}\NormalTok{ ,}\DataTypeTok{F4=} \DecValTok{0}\NormalTok{ ),} \DataTypeTok{B2xA=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=} \DecValTok{0}\NormalTok{ ,}\DataTypeTok{F2=}\OperatorTok{-}\DecValTok{1}\NormalTok{ ,}\DataTypeTok{F3=} \DecValTok{0}\NormalTok{ ,}\DataTypeTok{F4=} \DecValTok{1}\NormalTok{ ))))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## B B1xA B2xA ## F1 1/2 -1 0 ## F2 -1/2 0 -1 ## F3 1/2 1 0 ## F4 -1/2 0 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(XcNes <-}\StringTok{ 
}\KeywordTok{ginv2}\NormalTok{(HcNes))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##       B B1xA B2xA
## F1  1/2 -1/2    0
## F2 -1/2    0 -1/2
## F3  1/2  1/2    0
## F4 -1/2    0  1/2
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\NormalTok{XcNes}
\NormalTok{m4_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat3)}
\end{Highlighting}
\end{Shaded}

\begin{table}[h]
\begin{center}
\begin{threeparttable}
\caption{\label{tab:table21}Regression model for nested effects.}
\begin{tabular}{lllll}
\toprule
Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\
\midrule
Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\
FB & -20 & $[-29$, $-11]$ & -4.47 & < .001\\
FB1xA & 0 & $[-13$, $13]$ & 0.00 & > .999\\
FB2xA & 20 & $[7$, $33]$ & 3.16 & .006\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

Regression coefficients (cf.~Table \ref{tab:table21}) estimate the GM, the difference for the main effect of part of speech (\(B\)), and the two differences for word frequency (\(A\); i.e., simple main effects) within the levels of part of speech (\(B\)).

\textcolor{black}{These custom nested contrasts' columns are scaled versions of the corresponding hypothesis matrix. This is the case because the columns are orthogonal. It illustrates the advantage of orthogonal contrasts for the interpretation of regression coefficients: the underlying hypotheses being tested are already clear from the contrast matrix.}

\textcolor{black}{There is also a} built-in R-formula specification of nested designs. The order of factors in the formula from left to right specifies a top-down order of nesting within levels, i.e., here factor \(A\) (word frequency) is nested within levels of factor \(B\) (part of speech).
This (see Table \ref{tab:table22}) yields the exact same result as our previous result based on custom nested contrasts (cf.~Table \ref{tab:table21}). \begin{Shaded} \begin{Highlighting}[] \KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{A) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\OperatorTok{-}\FloatTok{0.5}\NormalTok{,}\OperatorTok{+}\FloatTok{0.5}\NormalTok{)} \KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{B) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\OperatorTok{+}\FloatTok{0.5}\NormalTok{,}\OperatorTok{-}\FloatTok{0.5}\NormalTok{)} \NormalTok{m4_mr.x <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{B }\OperatorTok{/}\StringTok{ }\NormalTok{A, }\DataTypeTok{data=}\NormalTok{simdat4)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table22}Nested effects: R-formula.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\ B1 & -20 & $[-29$, $-11]$ & -4.47 & < .001\\ BB1 $\times$ A1 & 0 & $[-13$, $13]$ & 0.00 & > .999\\ BB2 $\times$ A1 & 20 & $[7$, $33]$ & 3.16 & .006\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} In cases such as these, where \(A_{B1}\) vs. \(A_{B2}\) \textcolor{black}{are} nested within levels of \(B\), why \textcolor{black}{is it necessary} to estimate the effect of \(B\) (part of speech) at all, when \textcolor{black}{the interest} might only be in the effect of \(A\) (word frequency) within levels of \(B\) (part of speech)? 
Setting up a regression model where the main effect of \(B\) (part of speech) is removed yields:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Extract contrasts as covariates from model matrix}
\NormalTok{mat_myC <-}\StringTok{ }\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, simdat3)}
\KeywordTok{fractions}\NormalTok{(}\KeywordTok{as.matrix}\NormalTok{(}\KeywordTok{data.frame}\NormalTok{(mat_myC)))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##    X.Intercept.   FB FB1xA FB2xA
## 1             1  1/2  -1/2     0
## 2             1  1/2  -1/2     0
## 3             1  1/2  -1/2     0
## 4             1  1/2  -1/2     0
## 5             1  1/2  -1/2     0
## 6             1 -1/2     0  -1/2
## 7             1 -1/2     0  -1/2
## 8             1 -1/2     0  -1/2
## 9             1 -1/2     0  -1/2
## 10            1 -1/2     0  -1/2
## 11            1  1/2   1/2     0
## 12            1  1/2   1/2     0
## 13            1  1/2   1/2     0
## 14            1  1/2   1/2     0
## 15            1  1/2   1/2     0
## 16            1 -1/2     0   1/2
## 17            1 -1/2     0   1/2
## 18            1 -1/2     0   1/2
## 19            1 -1/2     0   1/2
## 20            1 -1/2     0   1/2
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Repeat the multiple regression with covariates }
\NormalTok{simdat3[, }\KeywordTok{c}\NormalTok{(}\StringTok{"GM"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"B1_A"}\NormalTok{, }\StringTok{"B2_A"}\NormalTok{)] <-}\StringTok{ }\NormalTok{mat_myC}
\NormalTok{m2_mr.myC1 <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{B }\OperatorTok{+}\StringTok{ }\NormalTok{B1_A }\OperatorTok{+}\StringTok{ }\NormalTok{B2_A, }\DataTypeTok{data=}\NormalTok{simdat3)}
\KeywordTok{summary}\NormalTok{(m2_mr.myC1)}\OperatorTok{$}\NormalTok{sigma }\CommentTok{# standard deviation of residual}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 10
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Run the multiple regression leaving out the main effect of B}
\NormalTok{m2_mr.myC2 <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{B1_A }\OperatorTok{+}\StringTok{ }\NormalTok{B2_A, }\DataTypeTok{data=}\NormalTok{simdat3)}
\KeywordTok{summary}\NormalTok{(m2_mr.myC2)}\OperatorTok{$}\NormalTok{sigma }\CommentTok{# standard deviation of residual}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 14.6
\end{verbatim}

\begin{table}[h]
\begin{center}
\begin{threeparttable}
\caption{\label{tab:table22a}Nested effects: Full model.}
\begin{tabular}{lllll}
\toprule
Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\
\midrule
Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\
B & -20 & $[-29$, $-11]$ & -4.47 & < .001\\
B1 A & 0 & $[-13$, $13]$ & 0.00 & > .999\\
B2 A & 20 & $[7$, $33]$ & 3.16 & .006\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

\begin{table}[h]
\begin{center}
\begin{threeparttable}
\caption{\label{tab:table-myC2}Nested effects: Without the main effect of B.}
\begin{tabular}{lllll}
\toprule
Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(17)$} & \multicolumn{1}{c}{$p$}\\
\midrule
Intercept & 20 & $[13$, $27]$ & 6.15 & < .001\\
B1 A & 0 & $[-19$, $19]$ & 0.00 & > .999\\
B2 A & 20 & $[1$, $39]$ & 2.17 & .044\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

The results (Tables \ref{tab:table22a} and \ref{tab:table-myC2}) show that in the fully balanced data-set simulated here, the estimates for \(A_{B1}\) and \(A_{B2}\) are still the same. However, the confidence intervals and associated p-values are now larger. One loses statistical power because the variance explained by factor \(B\) is no longer taken into account in the linear regression. As a consequence, the unexplained variance increases the standard deviation of the residual from \(10\) to \(14.55\), and increases uncertainty about \(A_{B1}\) and \(A_{B2}\).
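The size of this increase can be verified from the sums of squares (a back-of-the-envelope check for the balanced data simulated here): the full model has \(SSE = 16 \cdot 10^2 = 1600\) on \(16\) degrees of freedom, and dropping \(B\) adds its sum of squares, \(SS_B = F_B \cdot MSE = 20 \cdot 100 = 2000\), to the residual:
\[
\hat{\sigma}_{\text{reduced}} = \sqrt{\frac{1600 + 2000}{17}} \approx 14.55 .
\]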
In unbalanced or nonorthogonal designs, leaving out the effect of \(B\) from the linear regression can lead to dramatic changes in the estimated slopes.

Of course, we can also ask the reverse question: are the differences for part of speech (\(B\)) significant within the levels of word frequency (\(A\)), in addition to testing the main effect of word frequency (\(A\))? That is, do nouns differ from verbs for low-frequency words (\(B_{A1}\)) and do nouns differ from verbs for high-frequency words (\(B_{A2}\))?

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{t}\NormalTok{(}\KeywordTok{fractions}\NormalTok{(HcNes2 <-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(}\DataTypeTok{A =}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{,}\DataTypeTok{F2=} \DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{,}\DataTypeTok{F3=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{,}\DataTypeTok{F4=}\OperatorTok{-}\DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{),}
\DataTypeTok{A1_B=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\DecValTok{1}\NormalTok{ ,}\DataTypeTok{F2=}\OperatorTok{-}\DecValTok{1}\NormalTok{ ,}\DataTypeTok{F3=} \DecValTok{0}\NormalTok{ ,}\DataTypeTok{F4=} \DecValTok{0}\NormalTok{ ),}
\DataTypeTok{A2_B=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{F1=}\DecValTok{0}\NormalTok{ ,}\DataTypeTok{F2=} \DecValTok{0}\NormalTok{ ,}\DataTypeTok{F3=} \DecValTok{1}\NormalTok{ ,}\DataTypeTok{F4=}\OperatorTok{-}\DecValTok{1}\NormalTok{ ))))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##       A A1_B A2_B
## F1  1/2    1    0
## F2  1/2   -1    0
## F3 -1/2    0    1
## F4 -1/2    0   -1
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contrasts}\NormalTok{(simdat3}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\NormalTok{XcNes2} \NormalTok{m4_mr <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(DV }\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{F, }\DataTypeTok{data=}\NormalTok{simdat3)} \end{Highlighting} \end{Shaded} \begin{table}[h] \begin{center} \begin{threeparttable} \caption{\label{tab:table-m4mr}Factor B nested within Factor A.} \begin{tabular}{lllll} \toprule Predictor & \multicolumn{1}{c}{$Estimate$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(16)$} & \multicolumn{1}{c}{$p$}\\ \midrule Intercept & 20 & $[15$, $25]$ & 8.94 & < .001\\ FA & -10 & $[-19$, $-1]$ & -2.24 & .040\\ FA1 B & -10 & $[-23$, $3]$ & -1.58 & .133\\ FA2 B & -30 & $[-43$, $-17]$ & -4.74 & < .001\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} Regression coefficients (cf.~Table \ref{tab:table-m4mr}) estimate the GM, the difference for the main effect of word frequency (\(A\)) and the two part of speech effects (for \(B\)\textcolor{black}{; i.e., simple main effects}) within levels of word frequency (\(A\)). \hypertarget{interactions-between-contrasts}{ \subsection{Interactions between contrasts}\label{interactions-between-contrasts}} \textcolor{black}{We have discussed above that in a $2 \times 2$ experimental design, the results from sum contrasts (see Table}~\ref{tab:table18}\textcolor{black}{) are equivalent to typical ANOVA results (see Table}~\ref{tab:table16}). In addition, we had also run the analysis with treatment contrasts. \textcolor{black}{It was clear} that the results for treatment contrasts (see Table~\ref{tab:table17}) did not correspond to the results from the ANOVA. However, if the results for treatment contrasts do not correspond to the typical ANOVA results, what do they then test? 
That is, \textcolor{black}{is it still possible to} meaningfully interpret the results from the treatment contrasts in a simple \(2 \times 2\) design?

\textcolor{black}{This leads us to a very important principle in interpreting results from contrasts: When interactions between contrasts are included in a model, then the results of one contrast actually depend on the specification of the other contrast(s) in the analysis! This may be counter-intuitive at first. However, it is very important and essential to keep in mind when interpreting results from contrasts. How does this work in detail?}

\textcolor{black}{The general rule to remember is that the main effect of one contrast measures its effect at the location $0$ of the other contrast(s) in the analysis. What does that mean? Let us consider the example where we use two treatment contrasts in a $2 \times 2$ design (see results in Table}~\ref{tab:table17}\textcolor{black}{). Let's take a look at the main effect of factor A. How can we interpret what this measures or tests? This main effect actually tests the effect of factor A at the "location" where factor B is coded as $0$. Factor B is coded as a treatment contrast, that is, it codes a zero at its baseline condition, which is B1. Thus, the main effect of factor A tests the effect of A nested within the baseline condition of B. We take a look at the data presented in Figure}~\ref{fig:twobytwosimdatFig}\textcolor{black}{ to see what this nested effect should be. Figure}~\ref{fig:twobytwosimdatFig} shows that the effect of factor A nested in B1 is \(0\). If we now compare this to the results from the linear model, \textcolor{black}{it is indeed clear} that the main effect of factor A (see Table~\ref{tab:table17}\textcolor{black}{) is exactly estimated as $0$. As expected, when factor B is coded as a treatment contrast, the main effect of factor A tests the effect of A nested within the baseline level of factor B.}

Next, consider the main effect of factor B.
According to the same logic, this main effect tests the effect of factor B at the \enquote{location} where factor A is \(0\). Factor A is also coded as a treatment contrast, that is, it codes its baseline condition A1 as \(0\). The main effect of factor B tests the effect of B nested within the baseline condition of A. Figure~\ref{fig:twobytwosimdatFig} shows that this effect should be \(10\); this indeed corresponds to the main effect of B as estimated in the regression model for treatment contrasts (see Table \ref{tab:table17}, the \emph{Estimate} for BB2). As we had seen before, the interaction term, however, does not differ between \textcolor{black}{the} treatment contrast and ANOVA (\(t^2 = 2.24^2 = F = 5.00\)).

\textcolor{black}{How do we know where the "location" is at which a contrast applies? For the treatment contrasts discussed here, it is possible to reason this through because all contrasts are coded as $0$ or $1$. However, how is it possible to derive the "location" in general? What we can do is to} look at the hypotheses tested by the treatment contrasts in the presence of an interaction between them by using the generalized matrix inverse. We go back to the default treatment contrasts. Then we extract the contrast matrix from the design matrix:

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{A) <-}\StringTok{ }\KeywordTok{contr.treatment}\NormalTok{(}\DecValTok{2}\NormalTok{)}
\KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{B) <-}\StringTok{ }\KeywordTok{contr.treatment}\NormalTok{(}\DecValTok{2}\NormalTok{)}
\NormalTok{XcTr <-}\StringTok{ }\NormalTok{simdat4 }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{group_by}\NormalTok{(A, B) }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{summarise}\NormalTok{() }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{A}\OperatorTok{*}\NormalTok{B, .) }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{as.data.frame}\NormalTok{()}
\KeywordTok{rownames}\NormalTok{(XcTr) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"A1_B1"}\NormalTok{,}\StringTok{"A1_B2"}\NormalTok{,}\StringTok{"A2_B1"}\NormalTok{,}\StringTok{"A2_B2"}\NormalTok{)}
\NormalTok{XcTr}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##       (Intercept) A2 B2 A2:B2
## A1_B1           1  0  0     0
## A1_B2           1  0  1     0
## A2_B1           1  1  0     0
## A2_B2           1  1  1     1
\end{verbatim}

\textcolor{black}{This shows the treatment contrast for factors} \texttt{A} and \texttt{B}\textcolor{black}{, and their interaction. We now apply the generalized inverse. Remember, when we apply the generalized inverse to the contrast matrix, we obtain the corresponding hypothesis matrix (again, we use matrix transpose for better readability):}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{t}\NormalTok{(}\KeywordTok{ginv2}\NormalTok{(XcTr))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##       (Intercept) A2 B2 A2:B2
## A1_B1           1 -1 -1     1
## A1_B2           0  0  1    -1
## A2_B1           0  1  0    -1
## A2_B2           0  0  0     1
\end{verbatim}

\textcolor{black}{As discussed above, the main effect of factor} \texttt{A} \textcolor{black}{tests its effect nested within the baseline level of factor} \texttt{B}\textcolor{black}{. Likewise, the main effect of factor} \texttt{B} \textcolor{black}{tests its effect nested within the baseline level of factor} \texttt{A}.

\textcolor{black}{How does this work for sum contrasts? They do not have a baseline condition that is coded as $0$. In sum contrasts, however, the average of the contrast coefficients is $0$. Therefore, main effects test the average effect across factor levels. This is what is typically also tested in standard ANOVA. Let's look at the example shown in Table}~\ref{tab:table18}\textcolor{black}{: given that factor B has a sum contrast, the main effect of factor A is tested as the average across levels of factor B.
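In terms of the regression equation implied by the two sum contrasts (with $x_A, x_B \in \{+1, -1\}$),
\[
\mu(x_A, x_B) = \beta_0 + \beta_A x_A + \beta_B x_B + \beta_{A \times B}\, x_A x_B ,
\]
averaging over $x_B = +1$ and $x_B = -1$ cancels the $\beta_B$ and $\beta_{A \times B}$ terms, so that comparing $x_A = +1$ with $x_A = -1$ isolates $2\beta_A$, i.e., the effect of A averaged across the levels of B.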
Figure}~\ref{fig:twobytwosimdatFig} \textcolor{black}{shows that the effect of factor A in level B1 is $10 - 10 = 0$, and in level B2 it is $20 - 40 = -20$. The average effect across both levels is $(0 - 20)/2 = -10$. Due to the sum contrast coding, we have to divide this by 2, yielding an expected effect of $-10 / 2 = -5$. This is exactly what the main effect of factor A measures (see Table}~\ref{tab:table18}, \emph{Estimate} \textcolor{black}{for A1).}

Similarly, factor B tests its effect at the location \(0\) of factor A. Again, \(0\) is exactly the mean of the contrast coefficients from factor A, which is coded as a sum contrast. Therefore, factor B tests the effect of B averaged across factor levels of A. For factor level A1, factor B has an effect of \(10 - 20 = -10\). For factor level A2, factor B has an effect of \(10 - 40 = -30\). The average effect is \((-10 - 30)/2 = -20\), which \textcolor{black}{again needs to be} divided by \(2\) due to the sum contrast. This yields exactly the estimate of \(-10\) that is also reported in Table~\ref{tab:table18} (\emph{Estimate} \textcolor{black}{for B1).}

\textcolor{black}{Again, we look at the hypothesis matrix for the main effects and the interaction:}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{A) <-}\StringTok{ }\KeywordTok{contr.sum}\NormalTok{(}\DecValTok{2}\NormalTok{)}
\KeywordTok{contrasts}\NormalTok{(simdat4}\OperatorTok{$}\NormalTok{B) <-}\StringTok{ }\KeywordTok{contr.sum}\NormalTok{(}\DecValTok{2}\NormalTok{)}
\NormalTok{XcSum <-}\StringTok{ }\NormalTok{simdat4 }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{group_by}\NormalTok{(A, B) }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{summarise}\NormalTok{() }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{model.matrix}\NormalTok{(}\OperatorTok{~}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{A}\OperatorTok{*}\NormalTok{B, .) }\OperatorTok{\%>\%}
\StringTok{   }\KeywordTok{as.data.frame}\NormalTok{()}
\KeywordTok{rownames}\NormalTok{(XcSum) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"A1_B1"}\NormalTok{,}\StringTok{"A1_B2"}\NormalTok{,}\StringTok{"A2_B1"}\NormalTok{,}\StringTok{"A2_B2"}\NormalTok{)}
\NormalTok{XcSum}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##       (Intercept) A1 B1 A1:B1
## A1_B1           1  1  1     1
## A1_B2           1  1 -1    -1
## A2_B1           1 -1  1    -1
## A2_B2           1 -1 -1     1
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{t}\NormalTok{(}\KeywordTok{ginv2}\NormalTok{(XcSum))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##       (Intercept)   A1   B1 A1:B1
## A1_B1         1/4  1/4  1/4   1/4
## A1_B2         1/4  1/4 -1/4  -1/4
## A2_B1         1/4 -1/4  1/4  -1/4
## A2_B2         1/4 -1/4 -1/4   1/4
\end{verbatim}

\textcolor{black}{This shows that the main effects no longer compute nested comparisons; rather, each main effect tests its effect averaged across the conditions of the other factor. The averaging involves using weights of $1/2$. Moreover, the regression coefficients in the sum contrast measure half the distance between conditions, leading to weights of $1/2 \cdot 1/2 = 1/4$.}

\textcolor{black}{The general rule to remember from these examples is that when interactions between contrasts are tested, what a main effect of a factor tests depends on the contrast coding of the other factors in the design! The main effect of a factor tests the effect nested within the location zero of the other contrast(s) in an analysis. If another contrast is centered, and zero is the average of this other contrast's coefficients, then the contrast of interest tests the average effect, averaged across the levels of the other factor. Importantly, this property holds only when the interaction between two contrasts is included in a model.
If the interaction is omitted and only main effects are tested, then there is no such "action at a distance".} \textcolor{black}{This may be a very surprising result for interactions of contrasts. However, it is also essential to interpreting contrast coefficients involved in interactions. It is particularly relevant for the analysis of the default \textsc{treatment contrast}, where the main effects test nested effects rather than average effects.} \hypertarget{a-priori-interaction-contrasts}{ \subsection{A priori interaction contrasts}\label{a-priori-interaction-contrasts}} \textcolor{black}{When testing interaction effects, if at least one of the factors involved in an interaction has more than two levels, then omnibus F tests for the interaction are not informative about the source of the interaction. Contrasts are used to test hypotheses about specific differences between differences. Sometimes, researchers may have a hypothesis believed to fully explain the observed pattern of means associated with the interaction between two factors} (Abelson \& Prentice, 1997). In such situations, \textcolor{black}{it is possible to specify} an a priori interaction contrast for testing this hypothesis. Such contrasts are more focused compared to the associated omnibus F test for the interaction term. Maxwell, Delaney, and Kelley (2018), citing Abelson and Prentice (1997), \textcolor{black}{write that "psychologists have historically failed to capitalize on possible advantages of testing a priori interaction contrasts even when theory dictates the relevance of the approach"} (p.~346). Abelson and Prentice (1997) \textcolor{black}{demonstrate interesting example cases for such a priori interaction contrasts, which they argue are often useful in psychological studies, including what they term a "matching" pattern. 
We here illustrate this "matching" pattern by simulating data from a hypothetical priming study: we assume a $Prime(3) \times Target(3)$ between-subjects factorial design, where factor} \texttt{Prime} \textcolor{black}{indicates three ordinally scaled levels of different prime stimuli} \texttt{c("Prime1",\ "Prime2",\ "Prime3")}\textcolor{black}{, whereas factor} \texttt{Target} \textcolor{black}{indicates three ordinally scaled levels of different target stimuli} \texttt{c("Target1",\ "Target2",\ "Target3")}\textcolor{black}{. We assume a similarity structure in which} \texttt{Prime1} \textcolor{black}{primes} \texttt{Target1}, \texttt{Prime2} \textcolor{black}{primes} \texttt{Target2}\textcolor{black}{, and} \texttt{Prime3} \textcolor{black}{primes} \texttt{Target3}\textcolor{black}{. In addition, we assume some similarity between neighboring categories, such that, e.g.,} \texttt{Prime1} \textcolor{black}{also weakly primes} \texttt{Target2}, \texttt{Prime3} \textcolor{black}{also weakly primes} \texttt{Target2}\textcolor{black}{, and} \texttt{Prime2} \textcolor{black}{also weakly primes} \texttt{Target1} and \texttt{Target3}\textcolor{black}{.
We therefore assume the following pattern of mean response times, for which we simulate response times:} \begin{Shaded} \begin{Highlighting}[] \NormalTok{M9 <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{150}\NormalTok{,}\DecValTok{175}\NormalTok{,}\DecValTok{200}\NormalTok{, }\DecValTok{175}\NormalTok{,}\DecValTok{150}\NormalTok{,}\DecValTok{175}\NormalTok{, }\DecValTok{200}\NormalTok{,}\DecValTok{175}\NormalTok{,}\DecValTok{150}\NormalTok{),}\DataTypeTok{ncol=}\DecValTok{1}\NormalTok{)} \KeywordTok{matrix}\NormalTok{(M9,}\DataTypeTok{nrow=}\DecValTok{3}\NormalTok{,}\DataTypeTok{dimnames=}\KeywordTok{list}\NormalTok{(}\KeywordTok{paste0}\NormalTok{(}\StringTok{"Prime"}\NormalTok{,}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{),}\KeywordTok{paste0}\NormalTok{(}\StringTok{"Target"}\NormalTok{,}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Target1 Target2 Target3 ## Prime1 150 175 200 ## Prime2 175 150 175 ## Prime3 200 175 150 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)} \NormalTok{simdat5 <-}\StringTok{ }\KeywordTok{mixedDesign}\NormalTok{(}\DataTypeTok{B=}\KeywordTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{,}\DecValTok{3}\NormalTok{), }\DataTypeTok{W=}\OtherTok{NULL}\NormalTok{, }\DataTypeTok{n=}\DecValTok{5}\NormalTok{, }\DataTypeTok{M=}\NormalTok{M9, }\DataTypeTok{SD=}\DecValTok{50}\NormalTok{, }\DataTypeTok{long =} \OtherTok{TRUE}\NormalTok{) } \KeywordTok{names}\NormalTok{(simdat5)[}\DecValTok{1}\OperatorTok{:}\DecValTok{2}\NormalTok{] <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"Prime"}\NormalTok{,}\StringTok{"Target"}\NormalTok{)} \KeywordTok{levels}\NormalTok{(simdat5}\OperatorTok{$}\NormalTok{Prime) <-}\StringTok{ }\KeywordTok{paste0}\NormalTok{(}\StringTok{"Prime"}\NormalTok{,}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{)} 
\KeywordTok{levels}\NormalTok{(simdat5}\OperatorTok{$}\NormalTok{Target) <-}\StringTok{ }\KeywordTok{paste0}\NormalTok{(}\StringTok{"Target"}\NormalTok{,}\DecValTok{1}\OperatorTok{:}\DecValTok{3}\NormalTok{)}
\NormalTok{table5 <-}\StringTok{ }\NormalTok{simdat5 }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{group_by}\NormalTok{(Prime, Target) }\OperatorTok{\%>\%}
\StringTok{    }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{N=}\KeywordTok{length}\NormalTok{(DV), }\DataTypeTok{M=}\KeywordTok{mean}\NormalTok{(DV), }\DataTypeTok{SD=}\KeywordTok{sd}\NormalTok{(DV), }\DataTypeTok{SE=}\NormalTok{SD}\OperatorTok{/}\KeywordTok{sqrt}\NormalTok{(N))}
\NormalTok{table5 }\OperatorTok{\%>\%}\StringTok{ }\KeywordTok{data.frame}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##    Prime Target1 Target2 Target3
## 1 Prime1     150     175     200
## 2 Prime2     175     150     175
## 3 Prime3     200     175     150
\end{verbatim}

\begin{figure}
\caption{Means and error bars (showing standard errors) for a simulated data set with a 3 x 3 between-subjects factorial design.}
\label{fig:3by3simdatFig}
\end{figure}

\textcolor{black}{This reflects a "matching" pattern, where the relative match between prime and target is the only factor determining differences in response time between conditions. We specify this hypothesis in an a priori contrast. One way to start formulating a contrast matrix is to write down the expected pattern of means.
Here we assume the a priori hypothesis that fully matching primes and targets speed up response times, whereas all other combinations lead to equally slow responses:}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{meansExp <-}\StringTok{ }\KeywordTok{rbind}\NormalTok{(}\DataTypeTok{Prime1=}\KeywordTok{c}\NormalTok{(}\DataTypeTok{Target1=}\DecValTok{150}\NormalTok{,}\DataTypeTok{Target2=}\DecValTok{200}\NormalTok{,}\DataTypeTok{Target3=}\DecValTok{200}\NormalTok{), }
\DataTypeTok{Prime2=}\KeywordTok{c}\NormalTok{( }\DecValTok{200}\NormalTok{, }\DecValTok{150}\NormalTok{, }\DecValTok{200}\NormalTok{), }
\DataTypeTok{Prime3=}\KeywordTok{c}\NormalTok{( }\DecValTok{200}\NormalTok{, }\DecValTok{200}\NormalTok{, }\DecValTok{150}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\textcolor{black}{We transform this into a contrast matrix by subtracting the mean of the expected response times. We also scale the matrix to ease readability.}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{(XcS <-}\StringTok{ }\NormalTok{(meansExp}\OperatorTok{-}\KeywordTok{mean}\NormalTok{(meansExp))}\OperatorTok{*}\DecValTok{3}\OperatorTok{/}\DecValTok{50}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##        Target1 Target2 Target3
## Prime1      -2       1       1
## Prime2       1      -2       1
## Prime3       1       1      -2
\end{verbatim}

\textcolor{black}{This matrix is orthogonal to the main effects, as all rows and all columns sum to zero. This is important for interaction contrast matrices: it ensures that the contrast does not simply capture and test parts of the main effects. If some rows or columns do not sum to zero, the interaction contrast matrix should be changed to fulfill this requirement.
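The scaling can be verified by hand: the grand mean of the hypothesized pattern is \((3 \cdot 150 + 6 \cdot 200)/9 = 550/3 \approx 183.3\), so the two distinct cell values map onto the two distinct contrast coefficients:

\begin{equation}
\left(150 - \tfrac{550}{3}\right) \cdot \tfrac{3}{50} = -2, \qquad
\left(200 - \tfrac{550}{3}\right) \cdot \tfrac{3}{50} = +1.
\end{equation}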
Based on the interaction contrast matrix, we now run an ANOVA in which we test the main effects, the omnibus interaction, and the a priori matching contrast nested within the interaction:}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{simdat5}\OperatorTok{$}\NormalTok{cMatch <-}\StringTok{ }\KeywordTok{ifelse}\NormalTok{(}
\KeywordTok{as.numeric}\NormalTok{(simdat5}\OperatorTok{$}\NormalTok{Prime)}\OperatorTok{==}\KeywordTok{as.numeric}\NormalTok{(simdat5}\OperatorTok{$}\NormalTok{Target),}\OperatorTok{-}\DecValTok{2}\NormalTok{,}\DecValTok{1}\NormalTok{)}
\NormalTok{mOmn <-}\StringTok{ }\KeywordTok{summary}\NormalTok{(}\KeywordTok{aov}\NormalTok{(DV}\OperatorTok{~}\NormalTok{Prime}\OperatorTok{*}\NormalTok{Target }\OperatorTok{+}\KeywordTok{Error}\NormalTok{(id), }\DataTypeTok{data=}\NormalTok{simdat5))}
\NormalTok{mCon <-}\StringTok{ }\KeywordTok{summary}\NormalTok{(}\KeywordTok{aov}\NormalTok{(DV}\OperatorTok{~}\NormalTok{Prime}\OperatorTok{*}\NormalTok{Target}\OperatorTok{+}\NormalTok{cMatch}\OperatorTok{+}\KeywordTok{Error}\NormalTok{(id), }\DataTypeTok{data=}\NormalTok{simdat5))}
\end{Highlighting}
\end{Shaded}

The outputs from both models are shown in Table~\ref{tab:tableIntCon}.
\begin{table}[h]
\begin{center}
\begin{threeparttable}
\caption{\label{tab:tableIntCon}Interaction contrast.}
\begin{tabular}{llllrll}
\toprule
Effect & \multicolumn{1}{c}{$F$} & \multicolumn{1}{c}{$\mathit{df}_1$} & \multicolumn{1}{c}{$\mathit{df}_2$} & \multicolumn{1}{c}{$\mathit{Sum}$ $\mathit{Sq}$} & \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{$\hat{\eta}^2_G$}\\
\midrule
Prime & 0.14 & 2 & 36 & 694 & .871 & .008\\
Target & 0.14 & 2 & 36 & 694 & .871 & .008\\
Prime $\times$ Target & 1.39 & 4 & 36 & 13889 & .257 & .134\\
\midrule
\ \ \ Matching contrast & 4.44 & 1 & 36 & 11111 & .042 & .110\\
\ \ \ Contrast residual & 0.37 & 3 & 36 & 2778 & .775 & .030\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\end{table}

\textcolor{black}{The a priori matching contrast is significant, whereas the omnibus F test for the interaction is not. In addition to the matching contrast, there is also a term "contrast residual", which captures all differences in means of the $3 \times 3$ interaction that are not captured by the matching contrast (and not by the main effects). It provides information about how much of the interaction is explained by our a priori interaction matching contrast, and whether there is additional structure in the data that is not yet explained by this a priori contrast.}

\textcolor{black}{Formally speaking, the omnibus F test for the interaction term can be additively decomposed into the a priori matching contrast plus some residual variance associated with the interaction. That is, the $4$ degrees of freedom for the omnibus interaction test are decomposed into $1$ degree of freedom for the matching contrast and $3$ degrees of freedom for the contrast residual: $4\;df = 1\;df + 3\;df$.
Likewise, the total sum of squares associated with the omnibus interaction test (Sum Sq $=$} 13889\textcolor{black}{) is decomposed into the sum of squares associated with the matching contrast (Sum Sq $=$} 11111\textcolor{black}{) plus the residual contrast sum of squares (Sum Sq $=$} 2778): 13889 \(=\) 11111 \(+\) 2778. \textcolor{black}{Here, the a priori matching contrast explains a large portion of the variance associated with the interaction: $r^2_{alerting} =$} 11111 \(/\) 13889 \(=\) 0.80\textcolor{black}{, whereas the contribution of the contrast residual is small: $r^2_{alerting} =$} 2778 \(/\) 13889 \(=\) 0.20. \textcolor{black}{Interestingly, the results in Table}~\ref{tab:tableIntCon} \textcolor{black}{also show a significance test for the contrast residual, which here is not significant. This suggests that our a priori contrast provides a good account of the interaction, a situation that} Abelson and Prentice (1997) \textcolor{black}{term a} \emph{canonical} \textcolor{black}{outcome, suggesting that the a priori contrast may be a good account of the true pattern of means. Sometimes, the contrast residual term is significant, clearly suggesting that systematic variance is associated with the interaction beyond what is captured in the a priori contrast. In this situation, $r^2_{alerting}$ for the a priori contrast will often be smaller, whereas $r^2_{alerting}$ for the contrast residuals may be larger.} Abelson and Prentice (1997) \textcolor{black}{term this an} \emph{ecumenical} \textcolor{black}{outcome, which suggests that the a priori contrast seems to be missing some of the systematic variance associated with the interaction.
For such} \emph{ecumenical} \textcolor{black}{outcomes,} Abelson and Prentice (1997) \textcolor{black}{suggest it is important for researchers to further explore the data and to search for structure in the residuals to inform future studies and future a priori expectations.}

\textcolor{black}{Even if the contrast residual term is not significant, as is the case here, it can still hide systematic structure. For example, in the present simulations, we assumed that priming not only facilitated processing of targets from the same category, but also facilitated processing of targets from neighboring categories. We know that response times should gradually increase with distance from the diagonal for off-diagonal elements of the $3 \times 3$ design, and that this additional structure is actually contained in the present data --- even though the contrast residual is not significant. We could try to directly test for this structure here by formulating a contrast that captures this gradual increase in response times away from the diagonal.}

\textcolor{black}{This procedure of testing an a priori interaction contrast in addition to the automatically generated or "residual" interaction term is advantageous (1) because the residual interaction term consumes many degrees of freedom and (2) because it is generally difficult to interpret and not particularly scientifically meaningful itself, except for potentially pointing to residual variance in the interaction that needs to be explained.}

\hypertarget{summary}{
\section{Summary}\label{summary}}

Contrast coding allows us to implement comparisons in a very flexible and general manner. Specific hypotheses can be formulated, coded in \textcolor{black}{hypothesis} matrices, and converted into contrast matrices. As Maxwell et al.
(2018) put it: \enquote{{[}S{]}uch tests may preclude the need for an omnibus interaction test and may be more informative than following the more typical path of testing simple effects} (p.~347).

\textcolor{black}{Barring convergence issues or the use of any kind of regularization, all sensible (non-rank-deficient) contrast coding schemes essentially fit the same model, since they are all linear transformations of each other. The utility of contrast coding is thus that the researcher can pick and choose how she/he wants to} \emph{interpret} \textcolor{black}{the regression coefficients, in a way that is statistically well-founded (e.g., it does not require running many post-hoc tests) without changing the model (in the sense that the fit in data space does not change) or compromising model fit.}

The generalized inverse provides a powerful tool to convert between \textcolor{black}{contrast} matrices \(X_c\) \textcolor{black}{for experimental designs} used in linear models and the associated hypothesis matrices \(H_c = X_c^{inv}\), which define how the regression coefficients are estimated and which hypotheses they test. Understanding these two representations provides a flexible tool for designing custom contrast matrices and for understanding the hypotheses that are being tested by a given contrast matrix.

\hypertarget{from-omnibus-f-tests-in-anova-to-contrasts-in-regression-models}{
\subsubsection{From omnibus F tests in ANOVA to contrasts in regression models}\label{from-omnibus-f-tests-in-anova-to-contrasts-in-regression-models}}

ANOVAs can be reformulated as multiple regressions. This has the advantage that hypotheses about comparisons between specific condition means can be tested. In R, contrasts are flexibly defined for factors via contrast matrices.
Functions are available to specify a set of pre-defined contrasts including \textsc{treatment contrasts} (\texttt{contr.treatment()}), \textsc{sum contrasts} (\texttt{contr.sum()}), \textsc{repeated contrasts} (\texttt{MASS::contr.sdif()}), and \textsc{polynomial contrasts} (\texttt{contr.poly()}). \textcolor{black}{An additional contrast that we did not cover in detail is the \textsc{Helmert contrast} (\texttt{contr.helmert()}).} These functions generate contrast matrices for a desired number of factor levels. The generalized inverse (function \texttt{ginv()}) is used to obtain hypothesis matrices that inform about how each of the regression coefficients is estimated from condition means and about which hypothesis each of them tests. Going beyond pre-defined contrast functions, the hypothesis matrix can also be used to flexibly define custom contrasts tailored to the specific hypotheses of interest. A priori interaction contrasts moreover allow us to formulate and test specific hypotheses about interactions in a design, and to assess their relative importance for understanding the full interaction pattern.

\hypertarget{further-readings}{
\subsubsection{Further readings}\label{further-readings}}

There are many introductions to contrast coding. For further reading we recommend Abelson and Prentice (1997), Baguley (2012) (chapter 15), and Rosenthal et al. (2000). It is also informative to revisit the exchange between Rosnow and Rosenthal (1989), Rosnow and Rosenthal (1996), Abelson (1996), and Petty, Fabrigar, Wegener, and Priester (1996) and the earlier one between Rosnow and Rosenthal (1995) and Meyer (1991). R-specific background about contrasts can be found in section 2.3 of Chambers and Hastie (1992), section 8.3 of Fieller (2016), and section 6.2 of Venables and Ripley (2002).
Aside from the default functions in base R, there are also the \textsc{contrast} (Kuhn, Weston, Wing, \& Forester, 2013) and \textsc{multcomp} (Hothorn, Bretz, \& Westfall, 2008) packages in R.

\hypertarget{references}{
\section{References}\label{references}}

\begingroup
\setlength{\parindent}{-0.5in} \setlength{\leftskip}{0.5in}

\hypertarget{refs}{}
\leavevmode\hypertarget{ref-Abelson1996}{} Abelson, R. P. (1996). Vulnerability of contrast tests to simpler interpretations: An addendum to Rosnow and Rosenthal. \emph{Psychological Science}, \emph{7}(4), 242--246.
\leavevmode\hypertarget{ref-AbelsonPrentice1997}{} Abelson, R. P., \& Prentice, D. A. (1997). Contrast tests of interaction hypotheses. \emph{Psychological Methods}, \emph{2}(4), 315.
\leavevmode\hypertarget{ref-Baayen2008lme}{} Baayen, R. H., Davidson, D. J., \& Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. \emph{Journal of Memory and Language}, \emph{59}(4), 390--412.
\leavevmode\hypertarget{ref-Baguley2012}{} Baguley, T. (2012). \emph{Serious stats: A guide to advanced statistics for the behavioral sciences}. Palgrave Macmillan.
\leavevmode\hypertarget{ref-bates2015parsimonious}{} Bates, D. M., Kliegl, R., Vasishth, S., \& Baayen, R. H. (2015). Parsimonious mixed models. \emph{arXiv Preprint arXiv:1506.04967}.
\leavevmode\hypertarget{ref-batesetal2014a}{} Bates, D. M., Maechler, M., Bolker, B., \& Walker, S. (2014). \emph{lme4: Linear mixed-effects models using Eigen and S4}. Retrieved from \url{http://CRAN.R-project.org/package=lme4}
\leavevmode\hypertarget{ref-Bolker2018}{} Bolker, B. (2018). Retrieved from \url{https://github.com/bbolker/mixedmodels-misc/blob/master/notes/contrasts.rmd}
\leavevmode\hypertarget{ref-ChambersHastie1992}{} Chambers, J. M., \& Hastie, T. J. (1992). \emph{Statistical models in S}. New York: Wadsworth \& Brooks/Cole.
\leavevmode\hypertarget{ref-dobson2011introduction}{} Dobson, A. J., \& Barnett, A.
(2011). \emph{An introduction to generalized linear models}. CRC Press.
\leavevmode\hypertarget{ref-fieller2016}{} Fieller, N. (2016). \emph{Basics of matrix algebra for statistics with R}. Boca Raton: CRC Press.
\leavevmode\hypertarget{ref-heplots}{} Friendly, M. (2010). HE plots for repeated measures designs. \emph{Journal of Statistical Software}, \emph{37}(4), 1--40.
\leavevmode\hypertarget{ref-friendly_matlib}{} Friendly, M., Fox, J., \& Chalmers, P. (2018). \emph{Matlib: Matrix functions for teaching and learning linear algebra and multivariate statistics}. Retrieved from \url{https://CRAN.R-project.org/package=matlib}
\leavevmode\hypertarget{ref-hays1973statistics}{} Hays, W. L. (1973). \emph{Statistics for the social sciences}. New York: Holt, Rinehart and Winston.
\leavevmode\hypertarget{ref-heister2012analysing}{} Heister, J., Würzner, K.-M., \& Kliegl, R. (2012). Analysing large datasets of eye movements during reading. \emph{Visual Word Recognition}, \emph{2}, 102--130.
\leavevmode\hypertarget{ref-HothornBretzWestfall2008}{} Hothorn, T., Bretz, F., \& Westfall, P. (2008). Simultaneous inference in general parametric models. \emph{Biometrical Journal}, \emph{50}(3), 346--363.
\leavevmode\hypertarget{ref-kliegl2010linear}{} Kliegl, R., Masson, M., \& Richter, E. (2010). A linear mixed model analysis of masked repetition priming. \emph{Visual Cognition}, \emph{18}(5), 655--681.
\leavevmode\hypertarget{ref-Kuhn2013}{} Kuhn, M., Weston, S., Wing, J., \& Forester, J. (2013). The contrast package.
\leavevmode\hypertarget{ref-MaxwellDelaney2018}{} Maxwell, S. E., Delaney, H. D., \& Kelley, K. (2018). \emph{Designing experiments and analyzing data: A model comparison perspective} (3rd ed.). New York: Routledge.
\leavevmode\hypertarget{ref-Meyer1991}{} Meyer, D. L. (1991). Misinterpretation of interaction effects: A reply to Rosnow and Rosenthal. \emph{Psychological Bulletin}, \emph{110}(3), 571--573.
\leavevmode\hypertarget{ref-Petty1996}{} Petty, R.
E., Fabrigar, L. R., Wegener, D. T., \& Priester, J. R. (1996). Understanding data when interactions are present or hypothesized. \emph{Psychological Science}, \emph{7}(4), 247--252.
\leavevmode\hypertarget{ref-pinheiro2000linear}{} Pinheiro, J. C., \& Bates, D. M. (2000). Linear mixed-effects models: Basic concepts and examples. \emph{Mixed-Effects Models in S and S-Plus}, 3--56.
\leavevmode\hypertarget{ref-Rproject}{} R Core Team. (2018). \emph{R: A language and environment for statistical computing}. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from \url{https://www.R-project.org/}
\leavevmode\hypertarget{ref-RosenthalRosnowRubin2000}{} Rosenthal, R., Rosnow, R. L., \& Rubin, D. B. (2000). \emph{Contrasts and correlations in behavioral research}. New York: Cambridge University Press.
\leavevmode\hypertarget{ref-RosnowRosenthal1989}{} Rosnow, R. L., \& Rosenthal, R. (1989). Definition and interpretation of interaction effects. \emph{Psychological Bulletin}, \emph{105}(1), 143--146.
\leavevmode\hypertarget{ref-RosnowRosenthal1995}{} Rosnow, R. L., \& Rosenthal, R. (1995). ``Some things you learn aren't so'': Cohen's paradox, Asch's paradigm, and the interpretation of interaction. \emph{Psychological Science}, \emph{6}(1), 3--9.
\leavevmode\hypertarget{ref-RosnowRosenthal1996}{} Rosnow, R. L., \& Rosenthal, R. (1996). Contrasts and interactions redux: Five easy pieces. \emph{Psychological Science}, \emph{7}(4), 253--257.
\leavevmode\hypertarget{ref-snedecor1967statistical}{} Snedecor, G. W., \& Cochran, W. G. (1967). \emph{Statistical methods}. Ames, Iowa: Iowa State University Press.
\leavevmode\hypertarget{ref-R-MASS}{} Venables, W. N., \& Ripley, B. D. (2002). \emph{Modern applied statistics with S} (4th ed.). New York: Springer.
Retrieved from \url{http://www.stats.ox.ac.uk/pub/MASS4} \endgroup \makeatletter \efloat@restorefloats \makeatother \begin{appendix} \hypertarget{app:Glossary}{ \section{Glossary}\label{app:Glossary}} \begin{table} \begin{center} \caption{Glossary.} \begin{tabular}{ lp{13cm} } \hline Centered contrasts & \textcolor{black}{The coefficients of a centered contrast sum to zero. If all contrasts (and covariates) in an analysis are centered, then the intercept assesses the grand mean.} \\ Condition mean & \textcolor{black}{The mean of the dependent variable for one factor level or one design cell.} \\ Contrast coefficients & \textcolor{black}{Each contrast consists of a list (vector) of contrast coefficients - there is one contrast coefficient ($c_i$) for each factor level $i$, which encodes how this factor level contributes in computing the comparison between conditions or bundles of conditions tested by the contrast.} \\ Contrast matrix & Contains contrast coefficients either for one factor, or for all factors in the experimental design; Each condition/group or design cell is represented once. The contrast coefficients indicate the numeric predictor values that are to be used as covariates in the linear model to encode differences between factor levels. \\ Design / model matrix & A matrix where each data point yields one row and each column codes one predictor variable in a linear model. Specifically, the first column usually contains a row of 1s, which codes the intercept. Each of the other columns contains one contrast or one covariate. \\ Grand mean & Average of the means of all experimental conditions/groups \\ Hypothesis matrix & Each column codes one condition/group and each row codes one hypothesis. \textcolor{black}{(We mostly show rows of the hypothesis matrix as columns in this paper.)} Each hypothesis consists of weights of how different condition/group means are combined/compared. 
\\ Orthogonal contrasts & The coefficients of two contrasts are orthogonal to each other if they have a correlation of zero across conditions. They then represent mutually independent hypotheses about the data. \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Glossary (continued).} \begin{tabular}{ lp{13cm} } \hline Regression coefficients & \textcolor{black}{Estimates of how strongly a predictor variable or a certain contrast influences the dependent variable. They estimate the effects associated with a predictor variable or a contrast.} \\ Weights & \textcolor{black}{(in a hypothesis matrix); Each row in the hypothesis matrix encodes the hypothesis tested by one contrast. Each row (i.e., each hypothesis) is defined as a list (vector) of weights, where each factor level / condition mean has one associated weight. The weight weighs the contribution of this condition mean to defining the null hypothesis of the associated contrast. For this, the weight is multiplied with its associated condition mean.} \\ \hline \end{tabular} \end{center} \end{table} \hypertarget{app:ContrastNames}{ \section{Contrast names}\label{app:ContrastNames}} \begin{table} \begin{center} \caption{Contrast names.} \begin{tabular}{ lp{4.5cm}p{8cm} } \hline R-function & contrast names & description \\ \hline contr.treatment() & \textbf{treatment contrasts}; dummy contrasts & Compares each of p-1 conditions to a baseline condition. \\ contr.sum() & \textbf{sum contrasts}; deviation contrasts & Compares each of p-1 conditions to the average across conditions. In the case of only 2 factor levels, it tests half the difference between the two conditions. \\ contr.sum()/2 & \textbf{scaled sum contrasts}; effect coding & In the case of two groups, the scaled sum contrast estimates the difference between groups. 
\\ contr.sdif() & \textbf{repeated contrasts}; successive difference contrasts; sliding difference contrasts; simple difference contrasts & Compares neighboring factor levels, i.e., 2-1, 3-2, 4-3. \\ contr.poly() & \textbf{polynomial contrasts}; orthogonal polynomial contrasts & Tests, e.g., linear or quadratic trends. \\ contr.helmert() & \textbf{Helmert contrasts}; difference contrasts & The first contrast compares the first two conditions. The second contrast compares the average of the first two conditions to the third condition. \\ & \textbf{custom contrasts} & Contrasts defined by the researcher, going beyond the standard set. \\ & \textbf{effect coding} & All centered contrasts, which give ANOVA-like test results. \\ \hline \end{tabular} \end{center} \end{table}

\hypertarget{app:mixedDesign}{
\section{The R-function mixedDesign()}\label{app:mixedDesign}}

The function \texttt{mixedDesign()} can be downloaded from the OSF repository \url{https://osf.io/7ukf6/}. It allows the researcher to flexibly simulate data from a wide variety of between- and within-subject designs, which are commonly analyzed with (generalized) linear models (ANOVA, ANCOVA, multiple regression analysis) or (generalized) linear mixed-effects models. It can be used to simulate data for different response distributions such as Gaussian, Binomial, Poisson, and others. It involves defining the number of between- and within-subject factors, the number of factor levels, means and standard deviations for each design cell, the number of subjects per cell, and correlations of the dependent variable between within-subject factor levels, as well as how these vary across between-subject factor levels. The following lines of code show a very simple example for simulating a data set for an experiment with one between-subject factor with two levels.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{(M <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{300}\NormalTok{, }\DecValTok{250}\NormalTok{), }\DataTypeTok{nrow=}\DecValTok{2}\NormalTok{, }\DataTypeTok{ncol=}\DecValTok{1}\NormalTok{, }\DataTypeTok{byrow=}\OtherTok{FALSE}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] ## [1,] 300 ## [2,] 250 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)} \NormalTok{simexp <-}\StringTok{ }\KeywordTok{mixedDesign}\NormalTok{(}\DataTypeTok{B=}\DecValTok{2}\NormalTok{, }\DataTypeTok{W=}\OtherTok{NULL}\NormalTok{, }\DataTypeTok{n=}\DecValTok{5}\NormalTok{, }\DataTypeTok{M=}\NormalTok{M, }\DataTypeTok{SD=}\DecValTok{20}\NormalTok{) } \KeywordTok{str}\NormalTok{(simexp)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 10 obs. of 3 variables: ## $ B_A: Factor w/ 2 levels "A1","A2": 1 1 1 1 1 2 2 2 2 2 ## $ id : Factor w/ 10 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ## $ DV : num 320 305 291 270 315 ... \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{simexp} \end{Highlighting} \end{Shaded} \begin{verbatim} ## B_A id DV ## 1 A1 1 320 ## 2 A1 2 305 ## 3 A1 3 291 ## 4 A1 4 270 ## 5 A1 5 315 ## 6 A2 6 228 ## 7 A2 7 230 ## 8 A2 8 271 ## 9 A2 9 266 ## 10 A2 10 256 \end{verbatim} The first and second argument of the \texttt{mixedDesign()} function specify the numbers and levels of the between- (\texttt{B\ =}) and within- (\texttt{W\ =}) subject factors. The arguments take a vector of numbers (integers). The length of the vector defines the number of factors, and each individual entry in the vector indicates the number of levels of the respective factor. 
For example, a \(2 \times 3 \times 4\) between-subject design with three between-subject factors, where the first factor has \(2\) levels, the second has \(3\) levels, and the third has \(4\) levels, would be coded as \texttt{B\ =\ c(2,3,4),\ W\ =\ NULL}. A \(3 \times 3\) within-subject design with two within-subjects factors with each \(3\) levels would be coded as \texttt{B\ =\ NULL,\ W\ =\ c(3,3)}. A 2 (between) \(\times\) 2 (within) \(\times\) 3 (within) design would be coded \texttt{B\ =\ 2,\ W\ =\ c(2,3)}. The third argument (\texttt{n\ =}) takes a single integer value indicating the number of simulated subjects for each cell of the between-subject design. The next necessary argument (\texttt{M\ =}) takes as input a matrix containing the table of means for the design. The number of rows of this matrix of means is the product of the number of levels of all between-subject factors. The number of columns of the matrix is the product of the number of levels of the within-subject factors. In the present example, the matrix \texttt{M} has just two rows each containing the mean of the dependent variable for one between-subject factor level, that is \(300\) and \(250\). Because there is no within-subject factor, the matrix has just a single column. The second data set simulated in the paper contains a between-subject factor with three levels. Accordingly, we specify three means. The \texttt{mixedDesign} function generates a data frame with one factor F with three levels, one factor id with 3 (between-subject factor levels) \(\times\) 4 (n) = 12 levels, and a dependent variable with 12 entries. 
\begin{Shaded} \begin{Highlighting}[] \NormalTok{(M <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\DecValTok{500}\NormalTok{, }\DecValTok{450}\NormalTok{, }\DecValTok{400}\NormalTok{), }\DataTypeTok{nrow=}\DecValTok{3}\NormalTok{, }\DataTypeTok{ncol=}\DecValTok{1}\NormalTok{, }\DataTypeTok{byrow=}\OtherTok{FALSE}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] ## [1,] 500 ## [2,] 450 ## [3,] 400 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{set.seed}\NormalTok{(}\DecValTok{1}\NormalTok{)} \NormalTok{simdat2 <-}\StringTok{ }\KeywordTok{mixedDesign}\NormalTok{(}\DataTypeTok{B=}\DecValTok{3}\NormalTok{, }\DataTypeTok{W=}\OtherTok{NULL}\NormalTok{, }\DataTypeTok{n=}\DecValTok{4}\NormalTok{, }\DataTypeTok{M=}\NormalTok{M, }\DataTypeTok{SD=}\DecValTok{20}\NormalTok{) } \KeywordTok{names}\NormalTok{(simdat2)[}\DecValTok{1}\NormalTok{] <-}\StringTok{ "F"} \CommentTok{# Rename B_A to F(actor)/F(requency)} \KeywordTok{levels}\NormalTok{(simdat2}\OperatorTok{$}\NormalTok{F) <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"low"}\NormalTok{, }\StringTok{"medium"}\NormalTok{, }\StringTok{"high"}\NormalTok{)} \KeywordTok{str}\NormalTok{(simdat2)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 12 obs. of 3 variables: ## $ F : Factor w/ 3 levels "low","medium",..: 1 1 1 1 2 2 2 2 3 3 ... ## $ id: Factor w/ 12 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ... ## $ DV: num 497 474 523 506 422 ... \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{simdat2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## F id DV ## 1 low 1 497 ## 2 low 2 474 ## 3 low 3 523 ## 4 low 4 506 ## 5 medium 5 422 ## 6 medium 6 467 ## 7 medium 7 461 ## 8 medium 8 450 ## 9 high 9 414 ## 10 high 10 412 ## 11 high 11 402 ## 12 high 12 371 \end{verbatim} For a 2 (between) \(\times\) 2 (within) subject design, we need a 2 \(\times\) 2 matrix. 
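For illustration, a matrix of means for such a 2 (between) \(\times\) 2 (within) design could be specified as sketched below; the numbers and the object names are made up for this example, and the call to \texttt{mixedDesign()} only uses arguments described in this appendix:

\begin{verbatim}
# Hypothetical means for a 2 (between) x 2 (within) design:
# rows = levels of the between-subject factor,
# columns = levels of the within-subject factor
M22 <- matrix(c(300, 320,
                250, 270), nrow=2, ncol=2, byrow=TRUE)
# simdat22 <- mixedDesign(B=2, W=2, n=5, M=M22, SD=20, R=0.5)
# (the correlation argument R is explained below)
\end{verbatim}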
For multiple between-subject factors, \texttt{M} has one row for each cell of the between-subject design, describing the mean for this design cell. For example, in case of a \(2 \times 3 \times 4\) design, the matrix has \(2 \cdot 3 \cdot 4 = 24\) rows. The same is true for multiple within-subject factors, where \texttt{M} has one column for each cell of the within-subject design, e.g., in a 4 (within) \(\times\) 2 (within) design it has \(4 \cdot 2 = 8\) columns. The levels of multiple between-subject factors are sorted such that the levels of the first factor vary most slowly, whereas the levels of the later factors vary more quickly. E.g.,

\begin{equation}
\begin{bmatrix}
B\_A1 & B\_B1 & B\_C1 \\
B\_A1 & B\_B1 & B\_C2 \\
B\_A1 & B\_B2 & B\_C1 \\
B\_A1 & B\_B2 & B\_C2 \\
B\_A2 & B\_B1 & B\_C1 \\
B\_A2 & B\_B1 & B\_C2 \\
\dots & \dots & \dots \\
\end{bmatrix}
\end{equation}

\noindent The same logic also applies to the levels of the within-subject factors, which are sorted across columns. The argument \texttt{SD\ =} takes either a single number specifying the standard deviation of the dependent variable for all design cells, or it takes a table of standard deviations represented as a matrix of the same dimension as the matrix of means, where each element defines the standard deviation of the dependent variable for the given design cell. For designs with at least one within-subject factor it is necessary to define how the dependent variable correlates between pairs of within-subject factor levels across subjects. For example, when for each subject in a lexical decision task response times are measured once for content words and once for function words, yielding two levels of a within-subject factor, it is necessary to define how the two measurements correlate: that is, for subjects with a very slow response time for content words, will the response times for function words also be very slow, reflecting a high correlation?
Or might it be fast as well, such that an individual's response time for content words is uninformative about their response times for function words, reflecting a correlation close to zero? When \(n_k\) within-subject design cells are present in a design, the design defines \(n_r = n_k \cdot (n_k -1) / 2\) correlations. These can be identical across levels of the between-subject factors, but they can also differ. The optional argument \texttt{R\ =} defines these correlations. When all correlations in a design are assumed to be identical, it is possible to define \texttt{R} as a single number (between -1 and +1) specifying this correlation. Under the assumption that correlations differ, \texttt{R} takes as input a list of correlation matrices. This list contains one correlation matrix for each combination of between-subject factor levels. Each of these matrices is of dimension \(n_k \times n_k\). The diagonal elements in correlation matrices need to be \(1\), and the upper and lower triangles must contain exactly the same entries, i.e., each matrix must be symmetric.

\hypertarget{app:LinearAlgebra}{
\section{Matrix notation of the linear model and the generalized matrix inverse}\label{app:LinearAlgebra}}

This section requires knowledge of some linear algebra; for a review of the important facts, see Fieller (2016) and Van de Geer (1971). Here, we briefly show the derivation for estimating regression coefficients in linear models using matrix notation. The question we address here is: given the data \(y\) and some predictor(s) \(x\), how do we estimate the parameters \(\beta\)? The next section is adapted from lecture notes by Vasishth (2018), available from \url{https://osf.io/ces89/}. Consider a deterministic function \(\phi(\mathbf{x},\beta)\) which takes as input some variable values \(x\) and some fixed values \(\beta\).
A simple example would be \begin{equation} y = \beta x \end{equation} Another example with two fixed values \(\beta_0\) and \(\beta_1\) is: \begin{equation} y = \beta_0 + \beta_1 x \end{equation} We can rewrite the above equation as follows. \begin{equation} \begin{split} y=& \beta_0 + \beta_1 x\\ =& \beta_0\times 1 + \beta_1 x\\ =& \begin{pmatrix} 1 & x\\ \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \end{pmatrix}\\ y =& \phi(x, \beta)\\ \end{split} \end{equation} In a statistical model, we don't expect an equation like \(y=\phi(x,\beta)\) to fit all the points exactly. For example, we could come up with an equation that lets us compute a person's height from their weight: \begin{equation} \hbox{height} = \beta_0 + \beta_1 \hbox{weight} \end{equation} Given any single value of the weight of a person, we will probably not get a perfectly correct prediction of the height of that person. This leads us to a non-deterministic version of the above function: \begin{equation} y=\phi(x,\beta,\varepsilon)=\beta_0+\beta_1x+\varepsilon \end{equation} Here, \(\varepsilon\) is an error random variable which is assumed to have some probability density function (specifically, the normal distribution) associated with it. It is assumed to have expectation (mean) 0, and some standard deviation (to be estimated from the data) \(\sigma\). We can write this statement in compact form as \(\varepsilon \sim N(0,\sigma)\). The \textbf{general linear model} is a non-deterministic function like the one above (\(^T\) is the transpose operation on a matrix, which converts the rows of a matrix into the columns, and vice versa): \begin{equation} Y=f(x)^T\beta +\varepsilon \end{equation} The matrix formulation will be written as below. \(n\) refers to the number of data points (that is, \(Y=y_1,\dots,y_n\)), and the index \(i\) ranges from \(1\) to \(n\). 
\begin{equation} Y = X\beta + \varepsilon \Leftrightarrow y_i = f(x_i)^T \beta + \varepsilon_i, i=1,\dots,n \end{equation} To make this concrete, suppose we have three data points, i.e., \(n=3\). Then, the matrix formulation is \begin{equation} \begin{split} \begin{pmatrix} y_1 \\ y_2\\ y_3 \\ \end{pmatrix} = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \\ \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \end{pmatrix}+ \varepsilon\\ Y =& X \beta + \varepsilon \\ \end{split} \end{equation} Here, \(f(x_1)^T = (1~x_1)\) is the first row of the matrix \(X\), \(f(x_2)^T = (1~x_2)\) is the second row, and \(f(x_3)^T = (1~x_3)\) is the third row. The expectation of \(Y\), \(E[Y]\), is \(X\beta\). \(\beta\) is a \(p\times 1\) matrix, and \(X\), the \textbf{design matrix}, is an \(n\times p\) matrix. We now provide a geometric argument for least squares estimation of the \(\beta\) parameters. When we have a deterministic model \(y=\phi(f(x)^T,\beta)=\beta_0+\beta_1x\), this implies a perfect fit to all data points. This is like solving the equation \(Ax=b\) in linear algebra: we solve for \(\beta\) in \(X\beta=y\) using, e.g., Gaussian elimination (Lay, 2005). But when we have a non-deterministic model \(y=\phi(f(x)^T,\beta,\varepsilon)\), there is no solution. Now, the best we can do is to get \(Ax\) to be as close an approximation as possible to \(b\) in \(Ax=b\). In other words, we try to minimize the absolute distance between \(b\) and \(Ax\): \(\mid b-Ax\mid\). The goal is to estimate \(\beta\); we want to find a value of \(\beta\) such that the observed \(Y\) is as close to its expected value \(X\beta\) as possible. In order to be able to identify \(\beta\) from \(X\beta\), the linear transformation \(\beta \rightarrow X\beta\) should be one-to-one, so that every possible value of \(\beta\) gives a different \(X\beta\). This in turn requires that \(X\) be of full rank \(p\). (Rank refers to the number of linearly independent columns or rows. 
The row rank and column rank of an \(m\times n\) matrix will be the same, so we can just talk of the rank of a matrix. An \(m\times n\) matrix \(X\) with \(\mathrm{rank}(X)=\min(m,n)\) is called full rank.) So, if \(X\) is an \(n\times p\) matrix, then it is necessary that \(n\geq p\). There must be at least as many observations as parameters. If this is not true, then the model is said to be \textbf{over-parameterized}. Assuming that \(X\) is of full rank, and that \(n>p\), \(Y\) can be considered a point in \(n\)-dimensional space and the set of candidate \(X\beta\) is a \(p\)-dimensional subspace of this space; see Figure \ref{fig:leastsquares}. There will be one point in this subspace which is closest to \(Y\) in terms of Euclidean distance. The unique \(\beta\) that corresponds to this point is the \textbf{least squares estimator} of \(\beta\); we will call this estimator \(\hat \beta\). \begin{figure} \caption{Geometric interpretation of least squares.} \label{fig:leastsquares} \end{figure} \FloatBarrier Notice that \(\varepsilon=(Y - X\hat\beta)\) and \(X\beta\) are perpendicular to each other. Because the dot product of two perpendicular (orthogonal) vectors is \(0\), we get the result: \begin{equation} (Y- X\hat\beta)^T X \beta = 0 \Leftrightarrow (Y- X\hat\beta)^T X = 0 \end{equation} Multiplying out the terms, we proceed as follows. One result that we use here is that \((AB)^T = B^T A^T\) (see Fieller, 2016). \begin{equation} \begin{split} ~& (Y- X\hat\beta)^T X = 0 \\ ~& (Y^T- \hat\beta^T X^T)X = 0\\ \Leftrightarrow& Y^T X - \hat\beta^TX^T X = 0 \quad \\ \Leftrightarrow& Y^T X = \hat\beta^TX^T X \\ \Leftrightarrow& (Y^T X)^T = (\hat\beta^TX^T X)^T \\ \Leftrightarrow& X^T Y = X^TX\hat\beta\\ \end{split} \end{equation} This gives us the important result: \begin{equation} \hat\beta = (X^TX)^{-1}X^T Y \label{eq:beta} \end{equation} This is a key and famous result for linear models. 
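The estimator \(\hat\beta = (X^TX)^{-1}X^T Y\) can be checked numerically on a tiny, made-up data set. The following sketch uses plain Python rather than the R code of the main text, and inverts the \(2\times 2\) matrix \(X^TX\) by hand (the values of \(x\) and \(y\) are invented for illustration):

```python
# Least-squares sketch: betahat = (X^T X)^{-1} X^T y for three data points.
x = [1.0, 2.0, 3.0]
y = [2.1, 3.9, 6.0]
X = [[1.0, xi] for xi in x]               # 3 x 2 design matrix (intercept, slope)

# X^T X (2 x 2) and X^T y (2 x 1)
XtX = [[sum(X[i][r] * X[i][c] for i in range(3)) for c in range(2)] for r in range(2)]
Xty = [sum(X[i][r] * y[i] for i in range(3)) for r in range(2)]

# Invert the 2 x 2 matrix X^T X directly
det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
inv = [[ XtX[1][1] / det, -XtX[0][1] / det],
       [-XtX[1][0] / det,  XtX[0][0] / det]]

betahat = [inv[r][0] * Xty[0] + inv[r][1] * Xty[1] for r in range(2)]
print(betahat)   # ~[0.1, 1.95]
```

The same numbers are obtained from the textbook formulas \(\hat\beta_1 = S_{xy}/S_{xx}\) and \(\hat\beta_0 = \bar y - \hat\beta_1 \bar x\), which is a useful cross-check of the matrix route.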
Here, the transformation of the design matrix \(X\) into \((X^T X)^{-1} X^T\) plays a central role for estimating regression coefficients. Indeed, this term \((X^T X)^{-1} X^T\) is exactly the generalized matrix inverse of the design matrix \(X\), which we can compute in R using the command \texttt{MASS::ginv()} or \texttt{matlib::Ginv()}. Conceptually, it converts contrast matrices between two representations. One representation defines the design matrix \(X\) used to code the independent variables in the linear regression model \(y = X \beta\). The other representation, namely the generalized matrix inverse of the design matrix, \(X^{+} = (X^T X)^{-1} X^T = X^{inv}\), defines weights specifying how observed data are combined to obtain estimated regression coefficients via \(\hat{\beta} = X^{+} y\), and thereby determines which formal hypotheses the contrasts test. Given the important property of the generalized matrix inverse that applying it twice yields back the original matrix, the generalized matrix inversion operation can be used to flip back and forth between these two representations of the design matrix. Fieller (2016) provides a detailed and accessible introduction to this topic. As one important aspect, the generalized inverse of the design matrix, \(X^{+}\), defines weights for each of the \(k\) factor levels of the independent variables. As estimating coefficients for all \(k\) factor levels in addition to the intercept is redundant, the design matrix \(X\) defines coefficients only for \(k - 1\) comparisons. The generalized matrix inversion can be seen as transforming between these two spaces (Venables \& Ripley, 2002). A vignette explaining the generalized inverse is available for the \texttt{Ginv()} function in the \href{https://cran.r-project.org/web/packages/matlib/vignettes/ginv.html}{\texttt{matlib} package} (Friendly et al., 2018a). 
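What \(X^{+} = (X^T X)^{-1} X^T\) does can be illustrated with a small sketch. For orthogonal, centered contrasts, \(X^T X\) is diagonal, so \(X^{+}\) can be written down directly without a matrix-inversion routine; in R one would simply call \texttt{MASS::ginv()}. The contrast matrix below is made up (one three-level factor, two orthogonal, centered contrast columns), and the sketch is in plain Python rather than R:

```python
# Made-up contrast matrix Xc: rows are the 3 factor levels,
# columns are two orthogonal, centered contrasts.
Xc = [[-1.0, -1.0],
      [ 1.0, -1.0],
      [ 0.0,  2.0]]

# Because the columns are orthogonal and centered, Xc^T Xc is diagonal,
# and the generalized inverse reduces to dividing each contrast column
# by its sum of squares (then transposing).
col_ss = [sum(row[j] ** 2 for row in Xc) for j in range(2)]          # [2.0, 6.0]
Hc = [[Xc[i][j] / col_ss[j] for i in range(3)] for j in range(2)]    # 2 x 3

# Defining property used above: Hc is a left inverse of Xc (Hc Xc = I_2),
# so betahat = Hc y recovers the contrast estimates from the cell means.
HcXc = [[sum(Hc[r][i] * Xc[i][c] for i in range(3)) for c in range(2)]
        for r in range(2)]
print(HcXc)   # approximately [[1, 0], [0, 1]]
```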
\hypertarget{app:InverseOperation}{ \section{Inverting orthogonal contrasts}\label{app:InverseOperation}} \textcolor{black}{When we have a given contrast matrix and we apply the generalized inverse---how exactly does the generalized inverse compute the weights in the hypothesis matrix? That is, what is the formula for computing a single hypothesis weight? We look at this question here for the restricted case that all contrasts in a given design are orthogonal. For this case, we write down the equations for computing each single weight when the generalized inverse is applied. We start out with the definition of the contrast matrix. We use a design with a single three-level factor and two orthogonal, centered contrasts $x_1$ and $x_2$. \begin{equation} X_c = \left(\begin{array}{cc} x_{1,1} & x_{2,1} \\ x_{1,2} & x_{2,2} \\ x_{1,3} & x_{2,3} \end{array} \right) \end{equation} Under the assumption that the contrasts are orthogonal and centered, we obtain the following formulas for the weights: \begin{equation} H_c = X_c^{inv} = \left( \begin{array}{rrr} \frac{x_{1,1}}{\sum x_1^2} & \frac{x_{1,2}}{\sum x_1^2} & \frac{x_{1,3}}{\sum x_1^2} \\ \frac{x_{2,1}}{\sum x_2^2} & \frac{x_{2,2}}{\sum x_2^2} & \frac{x_{2,3}}{\sum x_2^2} \end{array} \right) \end{equation} That is, here each single weight in the hypothesis matrix is computed as $h_{i,j} = \frac{x_{j,i}}{\sum x_j^2}$.} \hypertarget{information-about-program-version}{ \section{Information about program version}\label{information-about-program-version}} We used R (Version 3.6.0; R Core Team, 2018) and the R-packages \emph{bindrcpp} (Müller, 2017), \emph{car} (Version 3.0.2; Fox \& Weisberg, 2011), \emph{dplyr} (Version 0.8.1; Wickham et al., 2017a), \emph{forcats} (Version 0.4.0; Wickham, 2017a), \emph{ggplot2} (Version 3.1.1; Wickham, 2009), \emph{knitr} (Version 1.22; Xie, 2015), \emph{MASS} (Version 7.3.51.4; Venables \& Ripley, 2002), \emph{matlib} (Friendly et al., 2018b), \emph{papaja} (Version 
0.1.0.9842; Aust \& Barth, 2018), \emph{png} (Version 0.1.7; Urbanek, 2013), \emph{purrr} (Version 0.3.2; Henry \& Wickham, 2017), \emph{readr} (Version 1.3.1; Wickham et al., 2017b), \emph{sjPlot} (Version 2.6.3; Lüdecke, 2018), \emph{stringr} (Version 1.4.0; Wickham, 2018), \emph{tibble} (Version 2.1.2; Müller \& Wickham, 2018), \emph{tidyr} (Version 0.8.3; Wickham \& Henry, 2018), \emph{tidyverse} (Version 1.2.1; Wickham, 2017b), \emph{XML} (Lang \& CRAN Team, 2017), and \emph{xtable} (Version 1.8.4; Dahl, 2016) for all our analyses. \hypertarget{refs}{} \leavevmode\hypertarget{ref-R-papaja}{} Aust, F., \& Barth, M. (2018). \emph{papaja: Create APA manuscripts with R Markdown}. Retrieved from \url{https://github.com/crsh/papaja} \leavevmode\hypertarget{ref-R-xtable}{} Dahl, D. B. (2016). \emph{Xtable: Export tables to latex or html}. Retrieved from \url{https://CRAN.R-project.org/package=xtable} \leavevmode\hypertarget{ref-fieller2016}{} Fieller, N. (2016). \emph{Basics of matrix algebra for statistics with R}. Boca Raton: CRC Press. \leavevmode\hypertarget{ref-R-car}{} Fox, J., \& Weisberg, S. (2011). \emph{An R companion to applied regression} (Second.). Thousand Oaks CA: Sage. Retrieved from \url{http://socserv.socsci.mcmaster.ca/jfox/Books/Companion} \leavevmode\hypertarget{ref-friendly_matlib}{} Friendly, M., Fox, J., \& Chalmers, P. (2018a). \emph{Matlib: Matrix functions for teaching and learning linear algebra and multivariate statistics}. Retrieved from \url{https://CRAN.R-project.org/package=matlib} \leavevmode\hypertarget{ref-R-matlib}{} Friendly, M., Fox, J., \& Chalmers, P. (2018b). \emph{Matlib: Matrix functions for teaching and learning linear algebra and multivariate statistics}. Retrieved from \url{https://CRAN.R-project.org/package=matlib} \leavevmode\hypertarget{ref-R-purrr}{} Henry, L., \& Wickham, H. (2017). \emph{Purrr: Functional programming tools}. 
Retrieved from \url{https://CRAN.R-project.org/package=purrr} \leavevmode\hypertarget{ref-R-XML}{} Lang, D. T., \& CRAN Team. (2017). \emph{XML: Tools for parsing and generating xml within r and s-plus}. Retrieved from \url{https://CRAN.R-project.org/package=XML} \leavevmode\hypertarget{ref-lay2005linear}{} Lay, D. C. (2005). \emph{Linear algebra and its applications} (Third.). Addison Wesley. \leavevmode\hypertarget{ref-R-sjPlot}{} Lüdecke, D. (2018). \emph{SjPlot: Data visualization for statistics in social science}. Retrieved from \url{https://CRAN.R-project.org/package=sjPlot} \leavevmode\hypertarget{ref-R-bindrcpp}{} Müller, K. (2017). \emph{Bindrcpp: An 'rcpp' interface to active bindings}. Retrieved from \url{https://CRAN.R-project.org/package=bindrcpp} \leavevmode\hypertarget{ref-R-tibble}{} Müller, K., \& Wickham, H. (2018). \emph{Tibble: Simple data frames}. Retrieved from \url{https://CRAN.R-project.org/package=tibble} \leavevmode\hypertarget{ref-R-base}{} R Core Team. (2018). \emph{R: A language and environment for statistical computing}. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from \url{https://www.R-project.org/} \leavevmode\hypertarget{ref-R-png}{} Urbanek, S. (2013). \emph{Png: Read and write png images}. Retrieved from \url{https://CRAN.R-project.org/package=png} \leavevmode\hypertarget{ref-van1971introduction}{} Van de Geer, J. P. (1971). \emph{Introduction to multivariate analysis for the social sciences}. Freeman. \leavevmode\hypertarget{ref-lmlecturenotesSV}{} Vasishth, S. (2018). \emph{Linear modeling: Lecture notes}. Retrieved from \url{https://osf.io/ces89/} \leavevmode\hypertarget{ref-R-MASS}{} Venables, W. N., \& Ripley, B. D. (2002). \emph{Modern applied statistics with S PLUS} (Fourth.). New York: Springer. Retrieved from \url{http://www.stats.ox.ac.uk/pub/MASS4} \leavevmode\hypertarget{ref-R-ggplot2}{} Wickham, H. (2009). \emph{Ggplot2: Elegant graphics for data analysis}. Springer-Verlag New York. 
Retrieved from \url{http://ggplot2.org} \leavevmode\hypertarget{ref-R-forcats}{} Wickham, H. (2017a). \emph{Forcats: Tools for working with categorical variables (factors)}. Retrieved from \url{https://CRAN.R-project.org/package=forcats} \leavevmode\hypertarget{ref-R-tidyverse}{} Wickham, H. (2017b). \emph{Tidyverse: Easily install and load the 'tidyverse'}. Retrieved from \url{https://CRAN.R-project.org/package=tidyverse} \leavevmode\hypertarget{ref-R-stringr}{} Wickham, H. (2018). \emph{Stringr: Simple, consistent wrappers for common string operations}. Retrieved from \url{https://CRAN.R-project.org/package=stringr} \leavevmode\hypertarget{ref-R-dplyr}{} Wickham, H., Francois, R., Henry, L., \& Müller, K. (2017a). \emph{Dplyr: A grammar of data manipulation}. Retrieved from \url{https://CRAN.R-project.org/package=dplyr} \leavevmode\hypertarget{ref-R-tidyr}{} Wickham, H., \& Henry, L. (2018). \emph{Tidyr: Easily tidy data with 'spread()' and 'gather()' functions}. Retrieved from \url{https://CRAN.R-project.org/package=tidyr} \leavevmode\hypertarget{ref-R-readr}{} Wickham, H., Hester, J., \& Francois, R. (2017b). \emph{Readr: Read rectangular text data}. Retrieved from \url{https://CRAN.R-project.org/package=readr} \leavevmode\hypertarget{ref-R-knitr}{} Xie, Y. (2015). \emph{Dynamic documents with R and knitr} (2nd ed.). Boca Raton, Florida: Chapman; Hall/CRC. Retrieved from \url{https://yihui.name/knitr/} \end{appendix} \end{document}
In what sense are distributions more singular than functions? Chapter 9 in Folland's Real Analysis starts with the following sentence: "At least as far back as Heaviside in the $1890$s, engineers and physicists have found it convenient to consider mathematical objects which, roughly speaking, resemble functions but are more singular than functions." Even though I've never studied distribution theory in depth, it is a topic that has arisen occasionally in my Analysis classes, and so I've come to feel almost comfortable with "the basics" of distributions. Now I'm trying to study the subject from Folland's book, but this first sentence has me puzzled: Q: How is it possible for a distribution to have singularities in a sense comparable to how ordinary functions (let's think from $\mathbb R$ to $\mathbb R$ to keep things simple) have singularities? I've given some thought to this question but couldn't find a satisfactory explanation yet. As I think about it, there are two possibilities: 1) A singularity of a distribution $T$ is a test function $\varphi$ for which $T\varphi=\infty$. This would be the most natural meaning to me, but it doesn't make much sense because by definition $T$ is a map from the space of test functions to $\mathbb R$, and so it has to have finite values at each test function. Is it maybe that we keep "singular test functions" out of the domain of definition of $T$, similar to when we define $1/x$ as a function on $\mathbb R\setminus\{0\}$ instead of as a function on $\mathbb R$? Also, if anything like this "definition" is the case, how would one compare how singular a distribution and a function are, considering that the singularities themselves would be different objects? 2) A singularity of a distribution $T$ is a point $x$ in $\mathbb{R}$. In this case we can compare how singular $T$ is with respect to a function $f:\mathbb R\to\mathbb R$. 
Furthermore, we can regard locally integrable functions as distributions, so this definition would make sense at least when $T$ is actually a locally integrable function. However, even if this can be given precise sense in general, for any given subset $S\subset\mathbb R$ it's easy to define functions that are singular on $S$, like $f(x)=\big[\inf_{s\in S}|x-s|\big]^{-1}$, so how could distributions be more singular than functions? Any insight is most welcome, thank you! Edit: It seems I have misinterpreted the meaning of 'singular' in the above sentence. I'd still appreciate it if someone could enumerate the ways in which distributions are more singular than functions. functional-analysis distribution-theory singularity Jonatan B. Bastos $\begingroup$ I'm not sure Folland intends to mean that they are comparable - instead that distributions can have even less pleasant properties than functions. For instance, any function can be interpreted as a distribution, but the dirac delta cannot be interpreted as a total function. $\endgroup$ – B. Mehta Apr 16 '18 at 3:47 $\begingroup$ @B.Mehta So 'singular' in that sentence refers to unpleasant properties like not being differentiable in the classic sense and not to the usual notion of singularity? $\endgroup$ – Jonatan B. Bastos Apr 16 '18 at 4:07 $\begingroup$ Far more unpleasant - the dirac delta isn't even a well defined function, let alone being continuous or even differentiable! $\endgroup$ – B. Mehta Apr 16 '18 at 4:08 $\begingroup$ Exactly. The thing is, distributions, or rather the space of distributions, is an "unpleasant space". There's no neat and clean "representation" of these linear operators, like there is for functions (with the help of functional notation). SImilarly, one can embed smooth functions/ measures into distributions, but then not every distribution is of that form either. 
Hence, this is what I think "singular" means : it contains a large class of objects, but you sort of lose track of how to "represent" such an object, and in that sense it is "singular", or tricky. $\endgroup$ – астон вілла олоф мэллбэрг Apr 16 '18 at 4:40 Although there are various ways to measure regularity, a natural and elementary one to use is the degree of differentiability of a function. If a function $f$ can be differentiated $k$ times it will be more regular if $k$ is high, and less regular if $k$ is low. In the context of classical functions this scale stops at functions which are not differentiable at all. Now the theory of distributions extends this scale to something akin to negative degrees of differentiability. In fact, the structure theorem for distributions states that locally you can write every distribution as the derivative of a certain order of a continuous function. The more derivatives you need to take, the more singular such a distribution will be. For example, the delta distribution $\delta$ can be obtained by differentiating the kink function $x_+$, defined by $x_+(x) = 0$ for $x<0$ and $x_+(x)=x$ for $x \ge 0$, twice: the first derivative gives the Heaviside function, whose derivative is the delta distribution. Eduard Nigsch $\begingroup$ I didn't know about this structure theorem for distributions. This was not the answer I was expecting when I asked the question, but it actually helped me understand the use of 'singular' in the context of distributions, thank you! $\endgroup$ – Jonatan B. Bastos Apr 17 '18 at 23:24 
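As a supplement (standard distribution theory, not part of the original thread): the chain $x_+ \to H \to \delta$ in the answer above can be verified directly from the definition of the distributional derivative. For a test function $\varphi$,

```latex
\begin{align*}
\langle x_+', \varphi\rangle
  &= -\langle x_+, \varphi'\rangle
   = -\int_0^\infty x\,\varphi'(x)\,dx
   = \int_0^\infty \varphi(x)\,dx
   = \langle H, \varphi\rangle,\\
\langle H', \varphi\rangle
  &= -\langle H, \varphi'\rangle
   = -\int_0^\infty \varphi'(x)\,dx
   = \varphi(0)
   = \langle \delta, \varphi\rangle,
\end{align*}
```

where the first line uses integration by parts together with the compact support of $\varphi$.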
Rapid Glucocorticoid Receptor Exchange at a Promoter Is Coupled to Transcription and Regulated by Chaperones and Proteasomes

Diana A. Stavreva, Waltraud G. Müller, Gordon L. Hager, Carolyn L. Smith, and James G. McNally

Laboratory of Receptor Biology and Gene Expression, Center for Cancer Research, National Cancer Institute, and Light Imaging Facility, National Institute for Neurological Disorders and Stroke, Bethesda, Maryland 20892

For correspondence: [email protected]

DOI: 10.1128/MCB.24.7.2682-2697.2004

Exchange of the glucocorticoid receptor (GR) at promoter target sites provides the only known system in which transcription factor cycling at a promoter is fast, occurring on a time scale of seconds. The mechanism and function of this rapid exchange are unknown. We provide evidence that proteasome activity is required for rapid GR exchange at a promoter. We also show that chaperones, specifically hsp90, stabilize the binding of GR to the promoter, complicating models in which the associated chaperone, p23, has been proposed to induce GR removal. Our results are the first to connect chaperone and proteasome functions in setting the residence time of a transcription factor at a target promoter. Moreover, our results reveal that longer GR residence times are consistently associated with greater transcriptional output, suggesting a new paradigm in which the rate of rapid exchange provides a means to tune transcriptional levels.

Based on fluorescence recovery after photobleaching (FRAP) analysis, most nuclear proteins are now known to be highly mobile (28). The list of rapidly moving proteins includes splicing factors (17), nucleolar proteins (9), histone H1 (19, 23), and several steroid receptor transcription factors (8, 22, 31, 34, 43). 
In all cases, transcription factor mobility is slower than that of GFP alone, demonstrating that all of these proteins transiently interact with nuclear binding sites of some sort. The majority of these sites cannot be specific promoters, given the numbers of expressed molecules (34). Rather, binding to chromatin or nuclear matrix is more likely (34, 43). Thus, these nuclear FRAP data provide insights about trafficking of proteins within the nucleus, but they do not directly address transcription factor binding to a promoter. In a limited number of cases, binding of transcription factors to specific promoters has been studied. Here again, mobility has been detected, indicating that transcription factors do not remain permanently bound at a promoter but rather undergo cycles of binding and unbinding. The first evidence for this came from FRAP experiments using a tandem array of mouse mammary tumor virus (MMTV) promoter sites visualized with a green fluorescent protein (GFP)-tagged glucocorticoid receptor (GR) (22). Rapid exchange of this receptor was observed, with a total recovery time of less than a minute. Later studies using chromatin immunoprecipitation (ChIP) have shown that the estrogen receptor (ER) cycles at several different promoters but with a markedly longer periodicity, on the order of 1 h (31, 37). In addition to the transcription factors themselves, associated factors also exhibit exchange at promoters, but again with very different time scales depending on whether the experimental approach is FRAP, where rapid exchange is observed (3, 42), or ChIP, where in general slower cycling is detected (4, 31, 37). When the temporal resolution of ChIP was pushed to its limits, reciprocal cycling of two ER coactivator complexes (DRIP and p160) could be detected on a time scale as short as 2.5 min (4). These data suggest that much faster exchange exists in other systems but at or below the limits of ChIP temporal resolution. 
Much remains to be learned about the mechanisms of transcription factor cycling. The slow cycling of ER requires proteasomal activity (31). In the case of rapid GR exchange, nothing is known except hints that chaperones and chromatin remodelers could be involved. Freeman and Yamamoto (12) showed that the molecular chaperone p23 can induce disassembly of thyroid receptor transcriptional regulatory complexes. They also found that in vivo targeting of a gal4-p23 fusion protein to a GR promoter could significantly reduce transcriptional activation there. Based on these and other experiments, they suggested that p23 could be involved in removing GR during rapid exchange. An in vitro chromatin-remodeling system has revealed that recruitment of Swi/Snf is accompanied by loss of GR, leading to the suggestion that chromatin remodelers may play a role in rapid GR exchange (11). The function of transcription factor exchange is also not well understood. In the cases identified to date, it has been suggested that receptor cycling at a promoter is a mechanism to sense changes in hormone levels (12, 22, 31, 37). It has also been suggested that proteasomal removal of potent transcription factors from their promoters may be a means of limiting transcriptional output (24). However, beyond these hypotheses, no other functions have been proposed for transcription factor exchange. Our aim in this study was to investigate the mechanism and function of rapid GR exchange observed in live cells at a tandem array of MMTV promoter sites. We found that both chaperones and proteasomes are present at the target sites. Disruption of either leads to opposite alterations in the exchange rate, indicating that the exchange rate is normally regulated in part by a balance between chaperone and proteasome activities. We also found a correlation between GR exchange and the amount of transcription, suggesting that longer GR residence times favor more transcription. 
Cell lines. Cell line 3617 was used for most experiments. The cells contain 200 tandem repeats of a 9-kb element composed of the MMTV promoter followed by ras and BPV genes (16), and they stably express GFP-tagged GR under the control of a tetracycline-off system (44). Control experiments were done with the parental cell line 3134, which contains the tandem repeats but not GFP-tagged GR. To generate cells containing only GFP, 3134 cells were transfected with a GFP plasmid (pEGFPC1; Clontech). To generate GFP-HP1α-containing cells, 3134 cells were transfected with a plasmid for the construct (a kind gift of T. Cheutin and T. Misteli, Laboratory of Receptor Biology and Gene Expression, National Cancer Institute, Bethesda, Md.). Cells were grown at 37°C with 5% CO2 in Dulbecco's modified Eagle's medium (DMEM) (Gibco BRL) supplemented with 2 mM glutamine (Gibco BRL), 10% fetal bovine serum (HyClone), and 5 μg of tetracycline/ml (to suppress GFP-GR expression). In preparation for microscopy experiments, cells were transferred to this medium and left there overnight, except that tetracycline was omitted, the fetal bovine serum was charcoal-dextran treated (Gemini Bio-Product) to remove the steroids that could activate GFP-GR, and phenol red-free medium was used to eliminate autofluorescence. Just prior to experiments, GFP-GR was activated by either the synthetic hormone dexamethasone (Sigma), the natural hormone corticosterone (Sigma), or the antagonist RU486 (a kind gift of Cathy Smith, Laboratory of Receptor Biology and Gene Expression), each at a concentration of 100 nM.

FRAP. For FRAP experiments, cells were grown overnight in coverglass chambers (Lab-Tech) and then induced with hormone for 30 min. For hsp90 or proteasome inhibition, geldanamycin (2.5 μg/ml) or MG-132 (100 μM) (Calbiochem), respectively, was applied after hormone induction. In both cases, parallel control experiments with the vehicle (dimethyl sulfoxide) were conducted. 
After geldanamycin treatment, live-cell chambers were kept on the microscope stage for no more than 5 min with corticosterone or 30 min with dexamethasone to avoid stress leading to formation of GFP-GR aggregates. For the ATP depletion experiments, cells were treated with 10 mM sodium azide (Sigma) in glucose-minus DMEM supplemented with 6 mM 2-deoxyglucose (Sigma) or with glucose-minus DMEM without sodium azide and with 6 mM 2-deoxyglucose only. FRAP experiments were carried out on a Zeiss 510 confocal microscope with either a 40× 1.3-numerical-aperture (N.A.) or a 100× 1.3-N.A. oil immersion objective. The cells were kept at 37°C using an air stream stage incubator (Nevtek). Bleaching was performed with the 488- and 514-nm lines from a 45-mW argon laser operating at 75% laser power. A single iteration was used for the bleach pulse, which lasted 0.018 s for the 40× objective or 0.065 s for the 100× objective. Fluorescence recovery was monitored at low laser intensity (0.2% for a 45-mW laser) at 40- to 500-ms intervals, depending on the experiment.

FRAP analysis. Approximately 10 separate FRAPs were performed and then averaged to generate a single FRAP curve. The temporal resolution was kept constant while recovery was measured, but this led to a very large number of closely spaced points in the second, slower phase of the recovery curve. To address this, we averaged 3 to 10 adjacent points in this slower part of the curve. This generated roughly equally spaced points along the recovery curve and therefore avoided overly weighting the slower phase of the curve during fitting. The effective diffusion fit was performed using the Soumpasis theory for a circular bleach spot (39) as implemented in Matlab with the nlinfit routine. In the Soumpasis theory, the FRAP rate is given by τ = w²/4D, where w is the bleach spot radius and D is the diffusion (or effective diffusion) constant. 
The FRAP data are given by frap(t) = e^(−2τ/t)[I0(2τ/t) + I1(2τ/t)], where t is time and I0 and I1 are modified Bessel functions. Bessel functions and their variants typically arise in differential equations with cylindrical symmetry (1), as occurs for a FRAP with a circular bleach spot (15). The Soumpasis theory presumes a normalized FRAP that ranges from 0 to 1. To accommodate this, we renormalized our FRAP data by setting the bleach depth to 0 and the final recovery level to 1. This factors out any differences between recoveries due to either the bleach depth or the final immobile fraction. Thus, the parameter τ directly measures the rate at which the recovery curve rises. For each FRAP, the estimate for τ yielded a standard error, which was then used in a t test to compute a P value in statistical comparisons of two recovery curves. These P values are shown in all comparison plots for GFP-GR recoveries. The predicted diffusion constant can be used to estimate the mass of a molecule, assuming that there are no binding interactions. The Soumpasis fit for GFP-GR nuclear mobility leads to an estimate of D_GFP-GR of 1.05 μm²/s. The Soumpasis fit for GFP only in nuclei of the same cell line leads to an estimate of D_GFP of 15 μm²/s (data not shown), ∼15-fold faster than GFP-GR. Since D ∝ M^(−1/3), where M is mass, the FRAP data predict a 15³-fold increase in the mass of GFP-GR relative to GFP, or a molecular mass of ∼95 MDa for GFP-GR. This is much larger than any known molecular complex, and therefore the mobility of GFP-GR in nucleoplasm must be retarded by binding interactions. When diffusion and binding interactions are present, the simplest scenario is effective diffusion (15). 
In effective diffusion, the FRAP mimics diffusion but at a lower rate given by the equation Deff = D/(1 + k*on/koff), where D is the cellular diffusion constant, koff is the off rate of binding, and k*on is the product of the on rate for binding times the equilibrium concentration of binding sites. The Soumpasis equation (39) then becomes τ = w²/4Deff = w²(1 + k*on/koff)/4D, where w is the bleach spot radius. To generate the family of curves in Fig. 1f, we used typical experimental parameters (τ = 0.5, w = 1.0 μm, and D = 15 μm²/s) to obtain a value for k*on/koff. Then, koff was varied as indicated in the figure legend to yield new values for τ, which were then used to produce new FRAP curves from the Soumpasis equation describing FRAP.

FIG. 1. FRAPs at the MMTV array and elsewhere in the nucleus depend on bleach spot size and are well fit by an effective diffusion model. (a) 3617 cell after 30-min induction with 100 nM dexamethasone. The MMTV array is visible as a bright spot (circled) near one of the nucleoli. Scale bar, 10 μm. (b) For fast data collection during FRAP, images were collected only in the strip encompassed by the red rectangle in panel a. Selected time points (t) are shown. (c) Arrays were bleached with spot radii of either 0.9 or 1.8 μm. In all cases, the array was large enough to completely fill the bleach spot area. Larger bleach spots result in significantly slower recoveries. This indicates that diffusion contributes to the FRAP. For these and all succeeding FRAP curves, standard errors for all points were always <0.01 and therefore smaller than the dots used for plotting.
(d) GFP-GR recovery elsewhere in the nucleus also shows dependence on bleach spot size, indicating a role for GFP-GR diffusion in nuclear mobility. (e) GFP-GR FRAP data both at the array and elsewhere in the nucleus were well fit by a single-parameter model for a diffusing molecule bleached by a circular spot with a recovery rate given by the fitting parameter, τ. Recoveries at the array are consistently slower than elsewhere in the nucleus. This difference is statistically significant, as indicated by a t test using the means for τ and their 95% confidence intervals. The computed P value (<0.0001) is highly significant. (f) Models for FRAPs that incorporate both diffusion and binding show that significant changes in binding parameters yield small changes in the FRAP curve. This is because diffusion remains unaffected after treatments that affect binding. Predicted curves are shown for a series of off rates.

Transcription assay. Transcription levels were measured at the MMTV array by RNA fluorescent in situ hybridization (FISH) exactly as described by Müller et al. (25). The specific conditions for each transcriptional assay were as follows. When studying the effects of geldanamycin, cells were induced with hormone for 15 min to allow GFP-GR to translocate to the nucleus and were then treated for 45 min with the drug or with vehicle (dimethyl sulfoxide). To determine transcriptional levels following drug treatment, RNA FISH levels were corrected for transcription in the first 15 min when hormone was present but before any drug was added. This was done by measuring average transcriptional levels after 15 min of hormone treatment and then subtracting this value from the average RNA FISH intensity following the 45-min drug treatment. For MG-132, cells were pretreated with the drug for 1.5 h and induced with hormone for 30 min before fixation. In all cases, average transcriptional levels were obtained from at least 35 cells.
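The family of predicted FRAP curves described in the FRAP analysis section above (Fig. 1f) can be sketched as follows. The parameter values (τ = 0.5 s, w = 1.0 μm, D = 15 μm²/s) are the typical values quoted there; the function and variable names are illustrative only.

```python
import numpy as np
from scipy.special import i0e, i1e

def soumpasis(t, tau):
    # Soumpasis recovery for a circular bleach spot (scaled Bessels avoid overflow)
    x = 2.0 * tau / t
    return i0e(x) + i1e(x)

# Typical experimental parameters quoted in Methods
tau0, w, D = 0.5, 1.0, 15.0  # s, um, um^2/s

# tau = w**2 * (1 + k*on/koff) / (4*D)  =>  solve for the pseudoequilibrium ratio
ratio0 = 4.0 * D * tau0 / w**2 - 1.0  # implied k*on/koff (= 29 for these numbers)

def tau_for(koff_scale):
    """New tau after scaling koff by koff_scale with k*on held fixed:
    the ratio k*on/koff scales by 1/koff_scale."""
    return w**2 * (1.0 + ratio0 / koff_scale) / (4.0 * D)

t = np.linspace(0.04, 10.0, 250)
# Family of predicted curves for halved, unchanged, and doubled off rates
curves = {s: soumpasis(t, tau_for(s)) for s in (0.5, 1.0, 2.0)}
```

Doubling koff shrinks τ only modestly (0.5 s to ~0.26 s here) because the diffusion term does not change, which is the point made about Fig. 1f: binding perturbations produce small changes in the predicted recovery.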
When transcriptional levels from condensed versus decondensed arrays were measured, 100 nM dexamethasone was added, and then the cells were prepared for RNA FISH after 3 h. A total of 113 cells were examined, and a 6-μm-diameter perimeter was chosen as the threshold value separating condensed from decondensed arrays.

Immunofluorescence. Cells were grown overnight on 22-mm² coverslips and then induced with 100 nM dexamethasone or corticosterone and fixed either with absolute methanol at −20°C for 15 min or for 5 min in 1 part 37% formaldehyde and 9 parts PEM buffer (100 mM PIPES, 5 mM EGTA, 2 mM MgCl2, pH 6.8, plus 0.2% Triton X-100). After either fixation, the cells were washed three times in phosphate-buffered saline (PBS) for 10 min each time. Similar immunofluorescence results were obtained with both fixation protocols, except that the methanol fix was required to detect any staining with the hsp90 antibody. For detection of GFP-GR aggregates, corticosterone-induced cells were treated for 10 min with 2.5 μg of geldanamycin/ml followed by 20 min of either heat (45°C) or cold (on ice) shock and were fixed in 3.5% paraformaldehyde at room temperature for 15 min. Following formaldehyde fixation, the cells were permeabilized for 10 min with 0.5% Triton X-100 in PBS and washed three times in PBS for 10 min each time. For all immunofluorescence experiments, cells were incubated overnight with the primary antibody diluted in PBS with 4% bovine serum albumin and 0.1% Tween 20. After incubation, the cells were washed three times in PBS for 10 min each time and then incubated for 1 to 2 h with the appropriate secondary antibody conjugated to either Texas red or rhodamine. The cells were then washed three more times in PBS before final mounting in PBS and examination on a Leica DMRA microscope with a Leica 100× 1.3-N.A. oil immersion objective.
Images were acquired in green (GFP-GR) and red (antibody) fluorescence with a SenSys (Photometrics) camera with a KAF1400 chip configured to collect 0.067-μm-diameter pixels. The following antibodies were used: anti-hsp70 (MA3-006; Affinity Bioreagents), anti-p23 (MA3-414; Affinity Bioreagents), antiproteasome (19S Subunit S10B; Affinity Bioreagents), and anti-hsp90 (SPA-835; Stressgen).

Characterization of FRAP curves. To investigate the mechanism and function of rapid transcription factor exchange, we used the 3617 cell line in which GR exchange was previously characterized (22). These cells are mouse adenocarcinomas containing ∼200 tandem copies of the MMTV promoter with associated reporter genes (44). By quantifying GFP levels at the MMTV array, we have estimated that on average ∼700 GFP-GRs are present there, consistent with the expected number of GR binding sites on the array (10). In cells treated with hormone, the MMTV array becomes visible as a bright spot (25) (Fig. 1a). FRAPs at this site reveal a very rapid recovery with a half time (time to 50% recovery) of ∼5 s (22). To achieve good temporal resolution of this recovery, FRAP data were collected at ≥40-ms intervals by collecting images only in a narrow strip encompassing the array (Fig. 1a and b). Approximately 10 cells were averaged to generate one FRAP curve for a single experiment, and all experiments reported here were performed on at least three different days to assure reproducibility. FRAP measures mobility due to diffusion and, if present, transport or binding. Diffusion contributes to every recovery, but in some cases this contribution is small and/or rapid and can be ignored. To test the role of diffusion, we performed bleaches with different spot sizes at the array and observed differences in recovery (Fig. 1c).
We also found differences in the recovery rate when other regions of the nucleus were bleached with different spot sizes, indicating that diffusion contributed to GFP-GR recoveries everywhere in the nucleus (Fig. 1d). These recovery rates, however, were considerably lower than that of GFP alone, which recovered almost instantaneously with the spot sizes used in our experiments (data not shown). If GFP-GR recoveries were due to simple diffusion, then they would predict a GFP-GR mass of ∼95 MDa, significantly larger than any known complex (see Materials and Methods for details of this calculation). Therefore binding interactions must retard GFP-GR FRAPs both in the nucleus and specifically at the MMTV array. The simplest scenario to explain combined diffusion and binding is effective diffusion (15). Here, the FRAP exhibits a slowed diffusion, with the "effective" diffusion constant retarded in proportion to the binding affinity. Consistent with effective diffusion, GFP-GR FRAP rates were well fit by a single-parameter model for a diffusing molecule bleached by a circular spot (Fig. 1e) (39). The effective-diffusion fits are informative for several reasons. First, they provide a single-parameter fit to the FRAP rate (τ), allowing a statistical comparison of curves based on this parameter. For example, using the same bleach spot size, τ at the array is significantly larger than τ elsewhere in the nucleus (Fig. 1e). These τ values, with their 95% confidence intervals, can be used in a t test to compute a P value to assess whether the recoveries are significantly different. The P value for the nucleus recovery versus the array recovery is highly significant (P < 0.0001). This demonstrates that the array recovers significantly more slowly than other sites in the nucleus, which indicates tighter binding at the array, as expected for the locally high concentration of specific binding sites there. 
Second, the effective-diffusion theory provides insight into how such FRAP curves change in response to changes in binding affinity. Doubling or halving the off rate leads to modest changes in recovery rate because diffusion, which does not change, also contributes (Fig. 1f). Thus, in the effective-diffusion scenario, the FRAP data must be collected and analyzed with precision in order to detect changes in binding affinity under different perturbations. Finally, in its simplest form, the effective-diffusion theory predicts a pseudoequilibrium binding constant (15), but only for the case where the distribution of fluorescence is homogeneous. This should apply to generic locations in the nucleus, but at present such estimates cannot be obtained for the array because the theory must be extended to account for such a nonhomogeneous fluorescence distribution in which a bright region (the array) is surrounded by a dimmer region (the rest of the nucleus). Further theoretical work is needed to extract estimates of binding constants from these data. In all subsequent experiments, we used the estimates of the recovery rate, τ, at the array only as a means to compare curves. For any pair of GFP-GR recoveries, we performed a t test on the τ values for the recovery rate. In each plot, we report a P value as a measure of whether there is a statistically significant difference between the curves.

Energy dependence of the GFP-GR recovery. The simplest explanation for GFP-GR exchange at the array is that it reflects a passive process determined by the simple equilibrium of GFP-GR binding to the MMTV promoter. If so, then the exchange process should be energy independent. To test this, we depleted cellular ATP levels with sodium azide and deoxyglucose. FRAPs both at the array and elsewhere in the nucleus were radically altered (Fig. 2a and b). Both sites showed <100% recovery, with a more significant retardation at the MMTV array than anywhere else in the nucleus.
This suggested that energy might be required for normal GFP-GR exchange at the array, but the large effect on recoveries elsewhere in the nucleus raised the concern of nonspecific effects. Therefore, we sought to identify conditions for ATP reduction that minimized the effects on FRAPs at other sites in the nucleus. This was achieved by incubation in deoxyglucose alone, which blocks only nonoxidative phosphorylation (36) and is therefore not as potent as azide and deoxyglucose together. FRAPs were slightly affected throughout the nucleus after deoxyglucose treatment, but recoveries at the array were now qualitatively different in that they still exhibited an immobile fraction (Fig. 2c and d). This immobile fraction disappeared after deoxyglucose washout, demonstrating that immobility was not a consequence of toxicity (Fig. 2e).

FIG. 2. GFP-GR exchange at the array is an energy-dependent process. A 30-min treatment with 10 mM sodium azide and deoxyglucose leads to a marked reduction in the exchange rate at the array (a) and elsewhere in the nucleus (b). Transferring the cells to a glucose-free medium (deoxyglucose only) for 60 to 90 min induces an ∼5% immobile fraction at the array (c), but the exchange at other sites in the nucleus does not show an immobile fraction (d). (e) Deoxyglucose washout eliminates the immobile fraction at the array.

The complete recovery observed elsewhere in the nucleus suggests that reduced ATP levels first impact a subset of sites unique to the MMTV array before affecting generic GR-chromatin binding. This fraction of MMTV sites appears to require energy for GFP-GR release. Rapid exchange, however, still occurs at the remaining MMTV sites, thereby defining two classes of sites: a small fraction that is ATP sensitive and a larger fraction that is relatively ATP insensitive. These results are in marked contrast to many nuclear proteins that do not require energy for their mobility (19, 28).
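The statistical comparisons used throughout these results (P values from t tests on the fitted τ values and their standard errors) can be sketched as below. This is a Welch-style test built from the curve-fit standard errors; the exact test used in the paper may differ in detail, and the numbers in the example are illustrative only.

```python
import numpy as np
from scipy import stats

def compare_taus(tau1, se1, n1, tau2, se2, n2):
    """Two-sided t test on two fitted recovery rates given their standard
    errors (a Welch-style comparison; an assumption, not the paper's exact
    procedure). n1, n2 = number of fitted data points in each curve."""
    t_stat = (tau1 - tau2) / np.sqrt(se1**2 + se2**2)
    # Welch-Satterthwaite effective degrees of freedom
    df = (se1**2 + se2**2) ** 2 / (se1**4 / (n1 - 1) + se2**4 / (n2 - 1))
    p = 2.0 * stats.t.sf(abs(t_stat), df)
    return t_stat, p

# Illustrative numbers only: a slower recovery at the array (larger tau)
# versus the nucleoplasm, each tau fitted from ~200 time points
t_stat, p_value = compare_taus(0.50, 0.01, 200, 0.40, 0.01, 200)
```

Because each τ is fitted with a very small standard error (the curves are well fit by effective diffusion), even modest differences in τ can yield highly significant P values, as noted for the condensed-versus-decondensed comparison.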
Roles for chaperones in GFP-GR recovery. A number of factors could contribute to the energy dependence at the ATP-sensitive fraction of MMTV sites. An ATP-dependent chaperone complex that includes hsp70, hsp90, p23, and several immunophilins associates with unliganded GR in the cytoplasm, allowing ligand binding (30). Once ligand is bound, the chaperone complex dissociates but later reassociates after the ligand dissociates from GR. This association and dissociation triggered by the loss of ligand is referred to as the chaperone cycle. In addition to its cytoplasmic role, the chaperone complex also plays a role in GR recycling within the nucleus (7, 20). Moreover, Freeman and Yamamoto (12) have proposed that chaperones disassemble transcriptional regulatory complexes at promoter sites. If this were the case for GR at the MMTV promoter, then disruption of chaperone function should lead to a slowdown in the FRAP at the MMTV array. As a first test for chaperone involvement, we used immunofluorescence to investigate the subcellular distribution of hsp90, hsp70, and p23 in the MMTV array cell line. We found, as expected, that these molecules were largely cytoplasmic, but some nuclear fluorescence was also detected. Significantly, we consistently found an association of hsp90, hsp70, and p23 with the MMTV array. Antibodies against these molecules colocalized with the GFP-GR signal at the array and also typically stained a region surrounding the array (Fig. 3). The size of the region stained by chaperones was always proportional to the size of the MMTV array, with larger arrays that contained more GFP-GR characterized by correspondingly larger regions of chaperone staining. These observations demonstrate that chaperones are recruited to the MMTV site and suggest that chaperones are poised to affect GFP-GR exchange at the array.

FIG. 3. Immunofluorescence detection of chaperone proteins at the MMTV array (circled).
Antibodies against hsp90 (a to c), hsp70 (d to f), or p23 (g to i) stain the cytoplasm as expected, but within nuclei, they also consistently colocalize with GFP-GR at the MMTV array. Insets contain higher-magnification views of colocalization at the array. Scale bar, 10 μm.

To disrupt chaperone activity, we first used the antibiotic geldanamycin (32), which specifically blocks hsp90 activity by binding to its N-terminal ATP site and preventing p23 binding (38). As expected, geldanamycin (2.5 μg/ml) blocked GFP-GR import in the array cell line (6), so to study nuclear events, we pretreated cells with dexamethasone and then added geldanamycin 30 min later, when GR nuclear import was complete. Although MMTV arrays began to disappear after 1 h of geldanamycin treatment (Fig. 4a), a normal number of arrays were still clearly visible after 30 to 60 min of geldanamycin treatment, thereby allowing measurement by FRAP of GFP-GR exchange at the array. In geldanamycin-treated cells, FRAPs at the array were faster than in control cells (Fig. 5a). Larger effects could be observed with higher concentrations of geldanamycin, but they also led to significant effects elsewhere in the nucleus (data not shown). Thus, the geldanamycin concentration used here minimizes the effects on the rest of the nucleus and therefore should reflect effects unique to GFP-GR binding to the MMTV sites. Further evidence for this comes from cells transfected with a different nuclear protein, GFP-HP1α, and treated with geldanamycin. No effect on GFP-HP1α recovery was found (Fig. 5b). Similar results were also obtained for GFP alone (data not shown). These findings indicate that geldanamycin did not generically alter protein mobilities in the nucleus.

FIG. 4. Arrays (circled) disappear much more rapidly after geldanamycin treatment when corticosterone is the ligand than when dexamethasone is the ligand. In either case, geldanamycin induces arrays to disappear (red lines) faster than normal (black lines).
An example of array disappearance for each case is shown in pseudocolor to accentuate the arrays. (a) When dexamethasone is the ligand, array disappearance is gradual and occurs over a 2-h time course. (b) When corticosterone is the ligand, the geldanamycin-induced disappearance of arrays is dramatically accelerated, occurring over a time course of 10 min. These results indicate that geldanamycin effects are exacerbated by corticosterone, presumably due to its more rapid exchange with GR. They also suggest that in the presence of geldanamycin, GR eventually becomes unliganded and incapable of binding to the MMTV sites. Scale bar, 5 μm.

FIG. 5. Effects of hsp90 inhibition. FRAPs at MMTV arrays are faster after treatment with 2.5 μg of geldanamycin/ml (a) or 5 μg of radicicol/ml (c) than in control cells, but the speed-up is detectable immediately in corticosterone (e) while it appears only after 30 to 60 min in dexamethasone (a). (b) These changes are not caused by a generic effect on protein mobility, as cells transfected with GFP-HP1α show no effect on FRAPs after geldanamycin treatment. (Note that the GFP-HP1α recoveries are not fit by effective diffusion [data not shown], so a t test to compute a P value cannot be performed.) (d) Hormone withdrawal experiments demonstrate that, compared to dexamethasone, corticosterone exchanges much more rapidly with GR. Shown are the average transcriptional levels measured from 35 cells by RNA FISH. Cells were induced by 100 nM corticosterone (CORT) or dexamethasone (DEX) for 15 min and then washed three times over a 5-min period in hormone-free medium and left in that medium for 45 min. With corticosterone as a ligand, transcription is abolished after a 5-min wash, indicating complete exchange of the ligand with GR during the wash time. In the same wash period, a significant amount of dexamethasone remains bound, since transcription drops by only 50%.
(f) Treatment with geldanamycin induces a progressive loss of GFP-GR from the array and a decrease in size, and this is accompanied by a loss of chaperones. Shown is the loss of p23 from the array following geldanamycin (GELD) treatment with dexamethasone as a ligand. Scale bar, 5 μm. To corroborate the geldanamycin findings, we also treated cells with another drug, radicicol, which disrupts hsp90 activity by blocking its N-terminal ATP-binding site (35). Again, we found that FRAP at the array was faster (Fig. 5c). Geldanamycin and radicicol are known to be specific for hsp90 (32), but this chaperone plays a role in many cellular processes (29). To test if geldanamycin directly affected GR, we substituted the natural hormone corticosterone for dexamethasone. Since the half time of corticosteroid binding to GR at 0°C is much shorter than that of dexamethasone (14, 26), we reasoned that corticosterone should exacerbate any effects of geldanamycin if they originate due to loss of ligand from GR and consequent passage through the chaperone cycle. To confirm the difference in the binding affinities of these two steroids to GFP-GR in the MMTV array cell line at physiological temperatures, we used either corticosterone or dexamethasone as a ligand, washed the cells three times over a 5-min period in ligand-free medium, and compared the transcriptional outputs. After corticosterone washout, additional transcription was abolished, in contrast to dexamethasone washout, where transcription dropped by only 50% (Fig. 5d). This demonstrates that over the 5-min wash period, virtually all corticosterone became unbound from GR, whereas significant amounts of dexamethasone were still bound. These results demonstrate in vivo that dexamethasone remains much more tightly bound to GR than corticosterone. As predicted, the effects of geldanamycin were accelerated in the presence of corticosterone. Disappearance of GFP-GR arrays proceeded much faster with corticosterone (Fig. 
4b), and FRAPs became faster immediately after geldanamycin was added (Fig. 5e). This immediate response strongly suggests that the geldanamycin effect is specific for GR. We next asked whether geldanamycin treatment affected the association of chaperones with the MMTV array. Using immunofluorescence, we found that p23 levels decreased in proportion to the duration of the geldanamycin treatment (Fig. 5f). Similar results were observed with hsp90 and hsp70. The gradual loss of chaperones was always accompanied by a decrease in the size of the array and the amount of associated GFP-GR. Thus, our data do not distinguish whether loss of the chaperones after geldanamycin treatment leads to loss of GFP-GR or vice versa. Faster FRAPs in the presence of geldanamycin were also seen after more extended treatments (1 to 3 h with dexamethasone as a ligand), although arrays became harder to detect (Fig. 4a). After ∼30 min of imaging these cells treated with geldanamycin for 1 to 3 h, numerous GFP-GR enriched spots appeared throughout the nucleus (Fig. 6a). These spots were not arrays, as only one or two arrays are present in a nucleus. Spots could also be induced when geldanamycin treatment was combined with cold or heat shock. The spots colocalized with a proteasome component (Fig. 6b and c), suggesting that they could reflect abnormal accumulations of misfolded proteins (41). As has been observed for these other proteasome-associated inclusions, FRAPs at the GFP-GR spots revealed a slowdown, with an immobile fraction (Fig. 6d). This indicates that some fraction of GR is immobilized within each spot and forms a protein aggregate. Thus, we found that extended treatments with geldanamycin combined with additional stress (either heat, cold, or imaging) could induce nonspecific effects that resulted in a slower FRAP at GFP-GR aggregates. Once these aggregates had formed, arrays had disappeared, but until that point, FRAPs at arrays were always faster than in controls. 
FIG. 6. Nuclear GFP-GR aggregates. (a to c) Geldanamycin treatment combined with stress (heat, cold, or prolonged imaging) causes the disappearance of arrays and the formation of GFP-GR spots, which colocalize with a proteasome antibody. (d) In these spots, a fraction of GFP-GR is immobilized compared to other regions of the nucleus. During imaging, these aggregates appear more rapidly with corticosterone than with dexamethasone. Scale bar, 5 μm.

In sum, either geldanamycin or radicicol induced faster recovery at arrays. This speed-up occurred instantly in the presence of corticosterone, strongly suggesting a specific effect on GFP-GR. Therefore, the immobile fraction seen after ATP depletion does not arise from disruption of the chaperones.

Role for the proteasome in GFP-GR recovery. The proteasome is an alternate candidate for regulating the ATP-sensitive fraction of MMTV sites. Both the 19S and 20S subunits of the proteasome require ATP (18). As noted in the introduction, increasing evidence points to a role for the proteasome in transcription factor regulation, and moreover, proteasomes often interact with chaperones. As a first test for proteasome involvement, we stained cells with an antibody against the 19S component. This antibody consistently colocalized with the array and surrounding regions, suggesting that the proteasome was recruited to this site and could play a role in GFP-GR exchange there (Fig. 7a to c). As found for chaperone components, there was a direct correlation between the size of the array and the amount of proteasome staining (data not shown), suggesting that increasing amounts of GR recruit more proteasomes to the promoter.

FIG. 7. Immunofluorescence detection of the proteasome at the MMTV array (circled) and FRAPs after perturbation of proteasome function. Considerable proteasome staining was found in the cytoplasm. (a to c) Within the nucleus, colocalization with the MMTV array was consistently observed.
The inset contains a higher-magnification view of colocalization at the array. (d and e) Cells induced with dexamethasone or corticosterone and exposed to the proteasome inhibitor MG-132 exhibited slower FRAP at MMTV arrays. (f) This difference was not caused by a generic retardation of protein mobilities in the nucleus, as MG-132 treatment did not alter the recovery of GFP-HP1α. (Again, these GFP-HP1α recoveries are not fit by effective diffusion, so neither a recovery rate, τ, nor a P value for the comparison can be calculated.) Scale bar, 10 μm. To test for a role for the proteasome in GFP-GR exchange at the MMTV array, we treated cells with the proteasome inhibitor MG-132 (18). This resulted in a slowdown of FRAPs with either dexamethasone or corticosterone as a ligand (Fig. 7d and e). Larger effects with MG-132 could be induced at higher concentrations or with longer treatments, but we selected a concentration and a treatment time so that effects of MG-132 were minimized elsewhere in the nucleus. This should help to ensure that the effects seen at the MMTV array were unique to GFP-GR binding to the MMTV sites. Additionally, MG-132 had no effect on the mobility of either GFP alone (data not shown) or GFP-HP1α (Fig. 7f), demonstrating that it did not nonspecifically disrupt protein mobility throughout the nucleus. With evidence for both proteasome and chaperone activities at the MMTV sites, we then asked whether disruption of one system altered the other. We used immunofluorescence to examine levels of the 19S proteasome after geldanamycin treatment. As seen earlier for chaperone components, geldanamycin also led to a progressive loss over time of the 19S proteasome from the MMTV array (Fig. 8a to f). This was always correlated with a decrease in the size of the array itself and the amount of GFP-GR there, making it impossible to distinguish whether GFP-GR loss induced proteasomal loss or vice versa. 
Nevertheless, this gradual loss of the proteasome after geldanamycin treatment should by itself lead to slower FRAPs, thus counteracting the effect we detected, namely, faster FRAPs. This suggests that we might have observed even faster recoveries after geldanamycin had proteasome levels not been affected. In complementary experiments, we also used immunofluorescence to examine levels of p23 after MG-132 treatment and detected no change (Fig. 8g to l). This effect was seen with either dexamethasone or corticosterone, even though these different ligands affected transcription differently (see below). We conclude that the slowdown seen after MG-132 treatment is not due to loss of p23.

FIG. 8. Effects of either geldanamycin treatment on proteasomes or MG-132 treatment on chaperones. (a to f) Progressive loss of the 19S proteasome is seen at the MMTV array (circled) with longer geldanamycin (GELD) treatment. Clear proteasomal staining is seen after 15 min of geldanamycin treatment (b), whereas much fainter staining at levels close to nuclear background is detected after 1 h of geldanamycin treatment (e). (g to l) MG-132 does not affect the levels of p23 with either dexamethasone (DEX) (g to i) or corticosterone (CORT) (j to l). Only one time point is shown for each ligand, since unlike geldanamycin treatment, there is no loss of GFP-GR from the array after proteasome inhibition. Scale bar, 5 μm.

In sum, our results suggest that the proteasome itself normally regulates GFP-GR exchange at the MMTV sites and is responsible for at least some of the ATP sensitivity at a fraction of these sites.

Transcriptional level and GFP-GR recovery. The preceding experiments demonstrated that normal GFP-GR recovery is energy dependent and requires proteasome and chaperone functions. This shows that GFP-GR exchange is highly regulated. Since transcription is the primary function of GR binding to the MMTV promoter, we asked if there was any association between transcription and the exchange rate.
As a start, we investigated whether the exchange rate correlates with different endogenous transcription levels. Our previous studies have shown that the array size correlates with the transcription level as measured by RNA FISH. Condensed arrays are less transcriptionally active than decondensed arrays (25) (Fig. 9a and Table 1). Therefore, we performed FRAPs at condensed and decondensed arrays and found that recoveries at condensed arrays were slightly, but consistently, faster than at decondensed arrays (Fig. 9b). Although modest, this difference yields a highly significant P value (P < 0.001) because the data are fit so well by effective diffusion that the estimate of the recovery rate for each curve has a very small error.

FIG. 9. Accelerated GFP-GR exchange is associated with less transcription. (a) Examples of a condensed array and a decondensed array are shown with the corresponding RNA FISH signals. (b) FRAPs of condensed arrays were consistently faster. (c and d) RU486 (100 nM) and corticosterone (100 nM) also yielded faster FRAPs than dexamethasone. Scale bar, 1 μm.

TABLE 1. Changes in GFP-GR recovery at the array are coupled to transcription.

To investigate further a connection between the exchange rate and transcription, we measured FRAPs in the presence of the antagonist RU486. RU486 reduced transcription at the MMTV array to 10% of that with dexamethasone (Table 1). This reduced transcriptional level with RU486 was associated with faster FRAPs than with dexamethasone (Fig. 9c). We also found for the MMTV promoter that the natural hormone corticosterone was a less efficient agonist than the synthetic hormone dexamethasone (Table 1). Reduced transcriptional levels with corticosterone were once again associated with faster GFP-GR exchange at the array than with dexamethasone (Fig. 9d).
Thus, for a purely endogenous difference (condensed versus decondensed arrays), or for differences induced by agonists or antagonists, reduction in transcriptional activity was accompanied by a higher GFP-GR exchange rate. We then examined transcriptional levels after disruption of chaperone function. After hsp90 activity was blocked with either geldanamycin or radicicol, transcription levels dropped at the MMTV array (Table 1). This is consistent with previous studies (2), but in those studies it was not clear whether the reduced transcription arose from a failure of GR to bind to its target sites or even enter the nucleus. Under our conditions with geldanamycin, GR remains bound to the MMTV array for 10 min in corticosterone and for 1 h in dexamethasone. However, transcription still drops significantly in these time periods, demonstrating that some more subtle effect on transcription has occurred. One possibility is the higher exchange rate measured in all of these cases. Finally, we measured transcriptional levels after proteasome inhibition. Here, we found a dependence on ligand. With dexamethasone, transcriptional levels rose, but with corticosterone, they dropped (Table 1). Deroo et al. (8) found that prolonged treatment (22 to 28 h) with MG-132 increased both GR levels and transcription in the presence of dexamethasone. However, the transcriptional changes that we detected were not due to changes in GFP-GR protein levels, since the mean fluorescence intensity of GFP-GR in nuclei treated with MG-132 for 1 h (97% ± 11%) was not significantly different from the intensity of the control nuclei (100% ± 8%). Despite the differences in transcription, FRAPs with either dexamethasone or corticosterone showed a slowdown and an immobile fraction, indicating that the ligand can have dramatically different effects on transcription when a fraction of receptors becomes immobilized. 
In sum, in all cases so far we have seen that changes in the exchange rate are associated with changes in the transcriptional level, suggesting that the two are coupled.

We have investigated rapid GFP-GR exchange at a tandem array of MMTV promoter sites and have provided the first insights into its mechanism and function. Our results indicate that exchange is controlled at least in part by a balance between proteasome and chaperone functions. These two systems often act in concert, for example, in striking a balance between the refolding and degradation of cellular proteins (45). Our data suggest that a similar balancing act is performed at the MMTV promoter, with chaperones helping to stabilize GR binding and proteasomes helping to catalyze GR removal (Fig. 10). Exactly how this balance is struck is unknown, but factors such as CHIP and BAG-1, which are known to bridge the chaperone and proteasome systems, may well be involved (5, 21).

FIG. 10. Model for proteasome-chaperone interaction at the MMTV template and association with transcription. Liganded GR binding to MMTV ultimately leads to initiation complex formation. Formation of the complex may be enhanced by longer GR residence at the MMTV template. The duration of GR occupancy at the template is determined in part by competition between proteasome and chaperone functions. Proteasome inhibition favors GR occupancy, leading to a slower FRAP with an immobile fraction. Chaperone inhibition by geldanamycin favors GR loss, leading to a faster FRAP. The equilibrium between these two components helps to set the transcriptional level and may be mediated in part by one or more factors that are known to couple chaperone and proteasome activities (5, 21).

We found that both chaperones and proteasomes were present at and around the MMTV array, demonstrating that both classes of molecules were specifically recruited to the site.
These observations are consistent with ChIP studies demonstrating that either chaperones (12) or proteasomes (13, 31) are found at promoter sites, although until now not at the same promoter.

Proteasome function. After disrupting proteasome activity, we found that GFP-GR recovered more slowly at the MMTV sites and attained only 95% of its starting intensity. The residual 5% corresponds to an immobile fraction of GFP-GR that remains stably bound to the promoter after proteasome inhibition. We conclude that removal of at least a fraction of GR molecules ordinarily requires the proteasome. Since both GR and various coactivators are ubiquitinated (5, 46), loss of GR at the promoter could reflect a direct effect of the proteasome on GR or an indirect consequence of the loss of a closely associated coactivator. Our data also do not distinguish whether GR loss normally involves degradation of this 5% fraction of GR at the promoter or simply ejection of the fraction from the promoter. We cannot rule out the possibility that proteasome treatment in some other way indirectly alters GR mobility by inducing, for example, a stress response (40). However, by using shorter MG-132 treatments that disrupt predominantly the FRAP behavior at the MMTV array, we reduce the possibility that the effects we observe are nonspecific. This issue can be addressed more directly in future studies by identifying and then perturbing specific molecules that couple proteasome activity with GR. Proteasome inhibition is also known to affect the nuclear mobility of steroid receptors by inducing nuclear matrix binding (8, 34, 43). However, with the shorter incubations of the proteasome inhibitor that we used, the MMTV array remained visible and transcription continued, indicating that GR was still DNA bound at the MMTV sites.
Our observation of an immobile fraction at specific sites after dexamethasone and proteasome inhibition is in contrast to the effect detected for GR elsewhere in the nucleus, where dexamethasone or other high-affinity ligands relieved the immobile fraction induced by proteasome inhibition (34). These differences indicate that the same treatment can yield different effects depending upon whether the binding site is a specific promoter or nuclear matrix. A number of biochemical approaches have implied a role for proteasomes in transcription factor removal from promoter sites (27). Highly active transcriptional regulators, such as VP16 or myc, must be ubiquitinated to induce transcription and after further ubiquitination are degraded by the proteasome (33). This degradation may occur at the promoter, as it is enhanced by DNA binding (24). Recent studies have demonstrated that the slow cycling of ER is accompanied by and dependent upon proteasomal cycling (31). Although there is an intriguing similarity between these ChIP results for ER and our FRAP data for GR, they may reflect entirely different processes. This is because the time scales interrogated by the two techniques are radically different. The GFP-GR FRAP at the MMTV array is complete within 1 min. Thus, the average residency time of GFP-GR at the promoter can be no more than 1 min and could well be much shorter. Such rapid exchange cannot be detected by ChIP, since the formaldehyde fixation step in the procedure typically lasts much longer (∼10 min) than the GR cycling time. As a consequence, ChIP detects periodicities on time scales 1 to 2 orders of magnitude longer than those detected by FRAP. It is possible that a much slower cycling of GR is superimposed on the faster cycling that we measure by FRAP. The fraction of bound GR at the MMTV sites is determined by the binding affinity of GR for these sites. 
If this affinity slowly increases and then slowly decreases, then the amount of bound GR would slowly cycle, even though the individual GR molecules at any moment would exhibit the rapid exchange that we measure by FRAP. In principle, this could be detected in our live-cell system in two ways, neither of which has been carefully examined. If GR binding affinity to MMTV gradually increases, then the array intensity should gradually increase, and at the same time the FRAP rate should gradually decrease, since slower recoveries reflect higher-affinity binding. In sum, the same system can exhibit both fast and slow cycling. Fast cycling is best detected by FRAP at a promoter target array, and slow cycling is better observed by ChIP.

Chaperone function. By disrupting chaperone activity with either geldanamycin or radicicol, we observed an accelerated GFP-GR exchange at the MMTV promoter sites. Faster GFP-GR exchange was detected instantly in the presence of geldanamycin and corticosterone, strongly supporting a specific effect on GFP-GR binding to MMTV. Targeting one member of the chaperone complex, p23, to a series of GR promoter sites led to loss of GR binding there (12). Based on this and other observations, Freeman and Yamamoto proposed that p23 could repeatedly remove GR from a template and therefore give rise to the rapid GR exchange process observed in living cells. However, if this were the case, then in the simplest scenario we should have observed a slower FRAP after geldanamycin treatment, since the drug prevents p23 binding to the chaperone complex (38). This in turn should have prevented p23 from ejecting GR from the promoter. Instead, we detected a faster GFP-GR exchange, implying that the chaperones normally stabilize GR binding rather than catalyze its removal. A number of explanations for this discrepancy are possible. A key difference between the two studies is that we have disrupted hsp90 function, whereas Freeman and Yamamoto altered p23 activity.
Although p23 and hsp90 normally act in concert within the chaperone complex, they may not act together at a promoter. If so, then geldanamycin treatment would directly affect hsp90 but not p23. Then, our FRAP data in the presence of geldanamycin would imply that hsp90 is required to stabilize GR binding at the promoter, whereas Freeman and Yamamoto's ChIP data would suggest that p23 is required to destabilize it. Such antagonistic actions between hsp90 and p23 could play a role in setting GR residency times. More generally, a delicate balance must exist between the chaperone and proteasome functions at the MMTV promoter. Thus, even if geldanamycin treatment also disrupts p23 function at MMTV, it is possible that the specific method used to alter p23 activity may tilt the equilibrium between chaperones and proteasomes one way or the other, leading to opposite effects on GR binding. These discrepancies will probably be resolved by elucidating the molecular mechanism that establishes the interplay between chaperones and proteasomes at the MMTV promoter. At this time, we can say with certainty that both our results and those of Freeman and Yamamoto strongly support a role for chaperones in GR binding at a promoter.

Rapid-exchange function. It has been suggested that transcription factor cycling at a promoter is a mechanism to sense environmental changes, for example, in ligand concentration (12, 22, 31, 37). After 5 min of corticosterone removal, we found that there was virtually no more transcription from the MMTV promoter. This is consistent with a sensing model for exchange, since the ∼1-min cycling time of GR is fast enough to show a response within 5 min after a drop in hormone concentration. Note, however, that such a rapid change in hormone levels could be sensed only by a rapid-exchange mechanism and not by the longer cycling times observed for ER by ChIP (31, 37).
In addition to a possible role in sensing altered hormone levels, we found a coupling between the transcriptional level and the exchange rate. In every case examined so far, when transcription levels changed, so did the exchange rate. At present, our data support a simple form of coupling, namely, that slower exchange is associated with more transcription. This could arise because a longer GR residence time could increase the chances of successful polymerase loading. We observed this correlation between the exchange rate and the transcription level in seven out of eight two-way comparisons (Table 1). The one exception was with the proteasome inhibitor MG-132 and the ligand corticosterone. This combination yielded slower exchange and decreased transcription, whereas MG-132 and dexamethasone yielded slower exchange and increased transcription. One explanation for this discrepancy is the innate difference between the residence times for ligand binding to GR with dexamethasone and corticosterone. After proteasome inhibition by MG-132, a fraction of GR remains at the template. With corticosterone, this fraction will rapidly lose the ligand, whereas for dexamethasone it will not. Thus, longer residence of GR at the template may lead to increased transcription only if the ligand remains bound; otherwise, unliganded GR remains at the promoter, blocking access of liganded GR to those sites. Clearly, other interpretations are possible. Our simple model relating the exchange rate to the transcriptional level is no doubt complicated by a number of other factors that also contribute to the final transcriptional level. Nevertheless, our data suggest that GR residence time at a promoter is one factor in tuning the transcriptional output. In sum, our results indicate that chaperones and proteasomes modulate rapid GR exchange at a promoter. 
If, as our data suggest, the exchange rate is intimately coupled with transcription, then it is likely that a number of other factors will impact this rate as a means of regulating transcriptional levels. Understanding how the exchange rate is tuned by contributions from different factors will be critical for understanding transcriptional regulation. Our data now underscore the importance of live-cell imaging for a complete understanding of transcriptional mechanisms.

Received 29 July 2003. Returned for modification 17 September 2003.

REFERENCES

1. Abramowitz, M., and I. A. Stegun. 1970. Handbook of mathematical functions, p. 358-436. Dover Publications, Inc., New York, N.Y.
2. Bamberger, C. M., M. Wald, A. M. Bamberger, and H. M. Schulte. 1997. Inhibition of mineralocorticoid and glucocorticoid receptor function by the heat shock protein 90-binding agent geldanamycin. Mol. Cell. Endocrinol. 131:233-240.
3. Becker, M., C. Baumann, S. John, D. A. Walker, M. Vigneron, J. G. McNally, and G. L. Hager. 2002. Dynamic behavior of transcription factors on a natural promoter in living cells. EMBO Rep. 3:1188-1194.
4. Burakov, D., L. A. Crofts, C. P. Chang, and L. P. Freedman. 2002. Reciprocal recruitment of DRIP/mediator and p160 coactivator complexes in vivo by estrogen receptor. J. Biol. Chem. 277:14359-14362.
5. Connell, P., C. A. Ballinger, J. Jiang, Y. Wu, L. J. Thompson, J. Hohfeld, and C. Patterson. 2001. The co-chaperone CHIP regulates protein triage decisions mediated by heat-shock proteins. Nat. Cell Biol. 3:93-96.
6. Czar, M. J., M. D. Galigniana, A. M. Silverstein, and W. B. Pratt. 1997. Geldanamycin, a heat shock protein 90-binding benzoquinone ansamycin, inhibits steroid-dependent translocation of the glucocorticoid receptor from the cytoplasm to the nucleus. Biochemistry 36:7776-7785.
7. DeFranco, D. B. 2002. Navigating steroid hormone receptors through the nuclear compartment. Mol. Endocrinol. 16:1449-1455.
8. Deroo, B. J., C. Rentsch, S. Sampath, J. Young, D. B. DeFranco, and T. K. Archer. 2002. Proteasomal inhibition enhances glucocorticoid receptor transactivation and alters its subnuclear trafficking. Mol. Cell. Biol. 22:4113-4123.
9. Dundr, M., U. Hoffmann-Rohrer, Q. Hu, I. Grummt, L. I. Rothblum, R. D. Phair, and T. Misteli. 2002. A kinetic framework for a mammalian RNA polymerase in vivo. Science 298:1623-1626.
10. Dundr, M., J. G. McNally, J. Cohen, and T. Misteli. 2002. Quantitation of GFP-fusion proteins in single living cells. J. Struct. Biol. 140:92-99.
11. Fletcher, T. M., N. Xiao, G. Mautino, C. T. Baumann, R. Wolford, B. S. Warren, and G. L. Hager. 2002. ATP-dependent mobilization of the glucocorticoid receptor during chromatin remodeling. Mol. Cell. Biol. 22:3255-3263.
12. Freeman, B. C., and K. R. Yamamoto. 2002. Disassembly of transcriptional regulatory complexes by molecular chaperones. Science 296:2232-2235.
13. Gonzalez, F., A. Delahodde, T. Kodadek, and S. A. Johnston. 2002. Recruitment of a 19S proteasome subcomplex to an activated promoter. Science 296:548-550.
14. Kaine, J. L., C. J. Nielsen, and W. B. Pratt. 1975. The kinetics of specific glucocorticoid binding in rat thymus cytosol: evidence for the existence of multiple binding states. Mol. Pharmacol. 578-587.
15. Kaufman, E. N., and R. K. Jain. 1990. Quantification of transport and binding parameters using fluorescence recovery after photobleaching. Potential for in vivo applications. Biophys. J. 58:873-885.
16. Kramer, P. R., G. Fragoso, W. Pennie, H. Htun, G. L. Hager, and R. R. Sinden. 1999. Transcriptional state of the mouse mammary tumor virus promoter can affect topological domain size in vivo. J. Biol. Chem. 274:28590-28597.
17. Kruhlak, M. J., M. A. Lever, W. Fischle, E. Verdin, D. P. Bazett-Jones, and M. J. Hendzel. 2000. Reduced mobility of the alternate splicing factor (ASF) through the nucleoplasm and steady state speckle compartments. J. Cell Biol. 150:41-51.
18. Lee, D. H., and A. L. Goldberg. 1998. Proteasome inhibitors: valuable new tools for cell biologists. Trends Cell Biol. 8:397-403.
19. Lever, M. A., J. P. Th'ng, X. Sun, and M. J. Hendzel. 2000. Rapid exchange of histone H1.1 on chromatin in living human cells. Nature 408:873-876.
20. Liu, J., and D. B. DeFranco. 1999. Chromatin recycling of glucocorticoid receptors: implications for multiple roles of heat shock protein 90. Mol. Endocrinol. 13:355-365.
21. Luders, J., J. Demand, and J. Hohfeld. 2000. The ubiquitin-related BAG-1 provides a link between the molecular chaperones Hsc70/Hsp70 and the proteasome. J. Biol. Chem. 275:4613-4617.
22. McNally, J. G., W. G. Muller, D. Walker, R. Wolford, and G. L. Hager. 2000. The glucocorticoid receptor: rapid exchange with regulatory sites in living cells. Science 287:1262-1265.
23. Misteli, T., A. Gunjan, R. Hock, M. Bustin, and D. T. Brown. 2000. Dynamic binding of histone H1 to chromatin in living cells. Nature 408:877-881.
24. Molinari, E., M. Gilman, and S. Natesan. 1999. Proteasome-mediated degradation of transcriptional activators correlates with activation domain potency in vivo. EMBO J. 18:6439-6447.
25. Müller, W. G., D. Walker, G. L. Hager, and J. G. McNally. 2001. Large-scale chromatin decondensation and recondensation regulated by transcription from a natural promoter. J. Cell Biol. 154:33-48.
26. Munck, A., and N. J. Holbrook. 1984. Glucocorticoid-receptor complexes in rat thymus cells. Rapid kinetic behavior and a cyclic model. J. Biol. Chem. 259:820-831.
27. Muratani, M., and W. P. Tansey. 2003. How the ubiquitin-proteasome system controls transcription. Nat. Rev. Mol. Cell Biol. 4:192-201.
28. Phair, R. D., and T. Misteli. 2000. High mobility of proteins in the mammalian cell nucleus. Nature 404:604-609.
29. Picard, D. 2002. Heat-shock protein 90, a chaperone for folding and regulation. Cell Mol. Life Sci. 59:1640-1648.
30. Pratt, W. B., and D. O. Toft. 1997. Steroid receptor interactions with heat shock protein and immunophilin chaperones. Endocr. Rev. 18:306-360.
31. Reid, G., M. R. Hubner, R. Metivier, H. Brand, S. Denger, D. Manu, J. Beaudouin, J. Ellenberg, and F. Gannon. 2003. Cyclic, proteasome-mediated turnover of unliganded and liganded ERα on responsive promoters is an integral feature of estrogen signaling. Mol. Cell 11:695-707.
32. Roe, S. M., C. Prodromou, R. O'Brien, J. E. Ladbury, P. W. Piper, and L. H. Pearl. 1999. Structural basis for inhibition of the Hsp90 molecular chaperone by the antitumor antibiotics radicicol and geldanamycin. J. Med. Chem. 42:260-266.
33. Salghetti, S. E., A. A. Caudy, J. G. Chenoweth, and W. P. Tansey. 2001. Regulation of transcriptional activation domain function by ubiquitin. Science 293:1651-1653.
34. Schaaf, M. J., and J. A. Cidlowski. 2003. Molecular determinants of glucocorticoid receptor mobility in living cells: the importance of ligand affinity. Mol. Cell. Biol. 23:1922-1934.
35. Schulte, T. W., S. Akinaga, T. Murakata, T. Agatsuma, S. Sugimoto, H. Nakano, Y. S. Lee, B. B. Simen, Y. Argon, S. Felts, D. O. Toft, L. M. Neckers, and S. V. Sharma. 1999. Interaction of radicicol with members of the heat shock protein 90 family of molecular chaperones. Mol. Endocrinol. 13:1435-1448.
36. Schwoebel, E. D., T. H. Ho, and M. S. Moore. 2002. The mechanism of inhibition of Ran-dependent nuclear transport by cellular ATP depletion. J. Cell Biol. 157:963-974.
37. Shang, Y., X. Hu, J. DiRenzo, M. A. Lazar, and M. Brown. 2000. Cofactor dynamics and sufficiency in estrogen receptor-regulated transcription. Cell 103:843-852.
38. Smith, D. F., L. Whitesell, S. C. Nair, S. Chen, V. Prapapanich, and R. A. Rimerman. 1995. Progesterone receptor structure and function altered by geldanamycin, an hsp90-binding agent. Mol. Cell. Biol. 15:6804-6812.
39. Soumpasis, D. M. 1983. Theoretical analysis of fluorescence photobleaching recovery experiments. Biophys. J. 41:95-97.
40. Stangl, K., C. Gunther, T. Frank, M. Lorenz, S. Meiners, T. Ropke, L. Stelter, M. Moobed, G. Baumann, P. M. Kloetzel, and V. Stangl. 2002. Inhibition of the ubiquitin-proteasome pathway induces differential heat-shock protein response in cardiomyocytes and renders early cardiac protection. Biochem. Biophys. Res. Commun. 291:542-549.
41. Stenoien, D. L., M. Mielke, and M. A. Mancini. 2002. Intranuclear ataxin1 inclusions contain both fast- and slow-exchanging components. Nat. Cell Biol. 4:806-810.
42. Stenoien, D. L., A. C. Nye, M. G. Mancini, K. Patel, M. Dutertre, B. W. O'Malley, C. L. Smith, A. S. Belmont, and M. A. Mancini. 2001. Ligand-mediated assembly and real-time cellular dynamics of estrogen receptor alpha-coactivator complexes in living cells. Mol. Cell. Biol. 21:4404-4412.
43. Stenoien, D. L., K. Patel, M. G. Mancini, M. Dutertre, C. L. Smith, B. W. O'Malley, and M. A. Mancini. 2001. FRAP reveals that mobility of oestrogen receptor-alpha is ligand- and proteasome-dependent. Nat. Cell Biol. 3:15-23.
44. Walker, D., H. Htun, and G. L. Hager. 1999. Using inducible vectors to study intracellular trafficking of GFP-tagged steroid/nuclear receptors in living cells. Methods 19:386-393.
45. Wickner, S., M. R. Maurizi, and S. Gottesman. 1999. Posttranslational quality control: folding, refolding, and degrading proteins. Science 286:1888-1893.
46. Yan, F., X. Gao, D. M. Lonard, and Z. Nawaz. 2003. Specific ubiquitin-conjugating enzymes promote degradation of specific nuclear receptor coactivators. Mol. Endocrinol. 17:1315-1331.

Rapid Glucocorticoid Receptor Exchange at a Promoter Is Coupled to Transcription and Regulated by Chaperones and Proteasomes. Molecular and Cellular Biology Mar 2004, 24 (7) 2682-2697; DOI: 10.1128/MCB.24.7.2682-2697.2004
Bhargav Bhatt (mathematician)

Bhargav Bhatt (born 1983[1]) is a mathematician who is the Fernholz Joint Professor at the Institute for Advanced Study and Princeton University and works in arithmetic geometry and commutative algebra.[2]

Bhargav Bhatt
Bhatt at the Algebraic Geometry Workshop, Oberwolfach 2015
Born: 1983 (age 39–40)
Alma mater: Princeton University; Columbia University
Awards: Packard Fellow (2015); New Horizons in Mathematics Prize (2021); American Mathematical Society Fellow (2021); Clay Research Award (2021); Nemmers Prize in Mathematics (2022)
Fields: Mathematics
Institutions: University of Michigan; Institute for Advanced Study
Thesis: Derived Direct Summands (2010)
Doctoral advisor: Aise Johan de Jong
Other academic advisors: Shou-Wu Zhang

Early life and education

Bhatt graduated with a B.S. in Applied Mathematics, summa cum laude, from Columbia University under the supervision of Shou-Wu Zhang.[3] He received his Ph.D. from Princeton University in 2010 under the supervision of Aise Johan de Jong.[3][4]

Career

Bhatt was a Postdoctoral Assistant Professor in mathematics at the University of Michigan from 2010 to 2014 (on leave from 2012 to 2014).[3] Bhatt was a member of the Institute for Advanced Study from 2012 to 2014.[3][5] He then returned to the University of Michigan, serving as an Associate Professor from 2014 to 2015, a Gehring Associate Professor from 2015 to 2018, a Professor from 2018 to 2020, and a Frederick W and Lois B Gehring Professor since 2020.[3] In July 2022, he was appointed as the Fernholz Joint Professor in the School of Mathematics at the Institute for Advanced Study, with a joint appointment at Princeton University.[6]

Research

Bhatt's research focuses on commutative algebra and arithmetic geometry, especially on p-adic cohomology.[5][7] Bhatt and Peter Scholze have developed a theory of prismatic cohomology, which has been described as progress towards motivic cohomology by unifying singular cohomology, de
Rham cohomology, ℓ-adic cohomology, and crystalline cohomology.[8][9]

Awards

In 2015, Bhatt was awarded a 5-year Packard Fellowship.[3][10] Bhatt received the 2021 New Horizons in Mathematics Prize.[3][7] He was elected to become a Fellow of the American Mathematical Society in 2021.[3][11] Also in 2021 he received the Clay Research Award.[12] In 2022 he was awarded the Nemmers Prize in Mathematics.[13]

Selected publications

- Bhatt, Bhargav (2012). "Derived splinters in positive characteristic". Compositio Mathematica. 148 (6): 1757–1786. doi:10.1112/S0010437X12000309. ISSN 0010-437X. S2CID 119152994.
- Bhatt, Bhargav (2012). "Annihilating the cohomology of group schemes". Algebra & Number Theory. 6 (7): 1561–1577. doi:10.2140/ant.2012.6.1561. ISSN 1944-7833. S2CID 55015992.
- Bhatt, Bhargav; Blickle, Manuel; Lyubeznik, Gennady; Singh, Anurag K.; Zhang, Wenliang (2014). "Local cohomology modules of a smooth $\mathbb{Z}$-algebra have finitely many associated primes". Inventiones Mathematicae. 197 (3): 509–519. arXiv:1304.4692. doi:10.1007/s00222-013-0490-z. ISSN 0020-9910. S2CID 119143902.
- Bhatt, Bhargav; Scholze, Peter (2017). "Projectivity of the Witt vector affine Grassmannian". Inventiones Mathematicae. 209 (2): 329–423. arXiv:1507.06490. Bibcode:2017InMat.209..329B. doi:10.1007/s00222-016-0710-4. ISSN 0020-9910. S2CID 119123398.
- Bhatt, Bhargav (2018). "On the direct summand conjecture and its derived variant". Inventiones Mathematicae. 212 (2): 297–317. arXiv:1608.08882. Bibcode:2018InMat.212..297B. doi:10.1007/s00222-017-0768-7. ISSN 0020-9910. S2CID 119176516.
- Bhatt, Bhargav; Caraiani, Ana; Kedlaya, Kiran; Scholze, Peter; Weinstein, Jared (2019). Cais, Bryden (ed.). Perfectoid Spaces. Mathematical Surveys and Monographs. Vol. 242. Providence, Rhode Island: American Mathematical Society. doi:10.1090/surv/242. ISBN 978-1-4704-5015-1. OCLC 1124911652.

References

1.
Bhatt, Bhargav; Caraiani, Ana; Kedlaya, Kiran; Scholze, Peter; Weinstein, Jared (2019-10-01). "Front matter". In Cais, Bryden (ed.). Perfectoid Spaces. Mathematical Surveys and Monographs. Vol. 242. Providence, Rhode Island: American Mathematical Society. doi:10.1090/surv/242. ISBN 978-1-4704-5015-1. OCLC 1124911652.
2. Bhargav Bhatt Joins Mathematics Faculty at IAS
3. "Bhargav Bhatt" (PDF). Bhargav Bhatt. Retrieved March 21, 2021.
4. Bhargav Bhatt at the Mathematics Genealogy Project
5. "Bhargav Bhatt". Institute for Advanced Study. Retrieved March 21, 2021.
6. Bhargav Bhatt Joins Mathematics Faculty at IAS
7. "Bhargav Bhatt". Breakthrough Prize in Mathematics. Retrieved March 21, 2021.
8. Sury, B. (2019). "ICM Awards 2018". Resonance. 24 (5): 597–605. doi:10.1007/s12045-019-0813-5. ISSN 0971-8044. S2CID 199675280.
9. Tao, Terence (March 19, 2019). "Prismatic cohomology". Terence Tao's blog. Retrieved March 21, 2021.
10. "Bhargav Bhatt". David and Lucile Packard Foundation. Retrieved March 21, 2021.
11. "2021 Class of Fellows of the AMS" (PDF). Notices of the American Mathematical Society. 68 (4): 642. 2021.
12. Clay Research Award 2021
13. Nemmers Prize in Mathematics 2022

External links

- Website of Bhargav Bhatt
# Measuring similarity with distance metrics

To measure similarity between two objects, we need a distance metric. A distance metric is a mathematical function that measures the difference between two objects. There are several distance metrics commonly used in similarity search, including:

- Euclidean distance
- Manhattan distance
- Pearson correlation coefficient
- Cosine similarity
- Jaccard similarity

Consider two points in a 2D space: (2, 3) and (5, 7). The Euclidean distance between these two points can be calculated using the formula:

$$
d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}
$$

In this case, the Euclidean distance between the points (2, 3) and (5, 7) is:

$$
d = \sqrt{(5 - 2)^2 + (7 - 3)^2} = \sqrt{25} = 5
$$

## Exercise

Calculate the Euclidean distance between the points (1, 2) and (4, 6).

# Cosine similarity

Cosine similarity is a measure of similarity between two non-zero vectors by calculating the cosine of the angle between them. It is widely used in text search and information retrieval. The cosine similarity between two vectors A and B is defined as:

$$
\text{similarity}(A, B) = \frac{A \cdot B}{\|A\| \|B\|}
$$

where $A \cdot B$ is the dot product of A and B, and $\|A\|$ and $\|B\|$ are the magnitudes of A and B, respectively.

Let A = (1, 2, 3) and B = (2, 3, 4). The cosine similarity between A and B can be calculated as follows:

$$
\text{similarity}(A, B) = \frac{A \cdot B}{\|A\| \|B\|} = \frac{(1 \cdot 2) + (2 \cdot 3) + (3 \cdot 4)}{\sqrt{1^2 + 2^2 + 3^2} \sqrt{2^2 + 3^2 + 4^2}} = \frac{20}{\sqrt{14} \sqrt{29}} \approx 0.99
$$

## Exercise

Calculate the cosine similarity between the vectors A = (1, 2, 3) and B = (2, 3, 4).

# Jaccard similarity

Jaccard similarity is a measure of similarity between two sets. It is defined as the size of the intersection divided by the size of the union of the two sets.
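The three measures introduced so far can be sketched in a few lines of plain Python. The function names here are our own, not from any library:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two equal-length vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    # Cosine of the angle between two non-zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def jaccard(a, b):
    # |intersection| / |union| for two sets.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

print(euclidean((2, 3), (5, 7)))               # 5.0
print(round(cosine((1, 2, 3), (2, 3, 4)), 2))  # 0.99
print(jaccard({1, 2, 3}, {2, 3, 4}))           # 0.5
```

These reproduce the worked examples in this chapter; for large datasets one would normally use vectorized library implementations instead.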
The Jaccard similarity coefficient is a measure of how similar two sets are, with 1 indicating a perfect match and 0 indicating no similarity at all.

Let A = {1, 2, 3} and B = {2, 3, 4}. The Jaccard similarity between A and B can be calculated as follows:

$$
\text{similarity}(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{2}{4} = 0.5
$$

## Exercise

Calculate the Jaccard similarity between the sets A = {1, 2, 3} and B = {2, 3, 4}.

# Vector space model

The vector space model is a mathematical model used to represent text documents as points in a high-dimensional space. Each document is represented as a vector, where each component corresponds to a term in the document. The similarity between two documents is then measured using distance metrics, such as Euclidean distance or cosine similarity.

Consider two documents:

Document 1: "The cat is on the mat."
Document 2: "The dog is on the log."

We can represent these documents in a vector space model as follows:

Document 1: (1, 1, 1, 1, 1, 0)
Document 2: (1, 1, 1, 1, 0, 1)

The two documents share the first four terms and each has one unique term, so the Euclidean distance between these two documents is:

$$
d = \sqrt{(1 - 1)^2 + (1 - 1)^2 + (1 - 1)^2 + (1 - 1)^2 + (1 - 0)^2 + (0 - 1)^2} = \sqrt{2}
$$

The cosine similarity between these two documents is:

$$
\text{similarity}(A, B) = \frac{A \cdot B}{\|A\| \|B\|} = \frac{4}{\sqrt{5} \sqrt{5}} = 0.8
$$

## Exercise

Calculate the Euclidean distance and cosine similarity between the documents "The cat is on the mat." and "The dog is on the log."

# Locality-sensitive hashing

Locality-sensitive hashing (LSH) is an algorithm for approximating nearest neighbor search in high-dimensional spaces. It is particularly useful for large datasets where exact nearest neighbor search is computationally expensive. The algorithm works by dividing the space into a grid and using a hash function to map each point to a cell in the grid.
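The grid-bucketing step just described can be sketched in Python. The cell width of 2 and the helper names are illustrative assumptions; a production LSH index would use several randomized hash tables rather than a single grid:

```python
from collections import defaultdict

def cell(point, w=2.0):
    # Map a point to the integer grid cell containing it (cell width w).
    return tuple(int(c // w) for c in point)

def build_buckets(points, w=2.0):
    # Group points by cell; a query's candidate neighbors are its cell-mates.
    buckets = defaultdict(list)
    for p in points:
        buckets[cell(p, w)].append(p)
    return buckets

points = [(1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]
buckets = build_buckets(points)
print(buckets[cell((2, 2, 2))])  # [(2, 2, 2), (3, 3, 3)]
```

Looking up the query point's cell returns only the points hashed to the same bucket, which is what makes the search approximate but fast.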
The algorithm then returns the points in the same cell as the query point as approximate nearest neighbors.

Consider a dataset of 3D points:

- P1 = (1, 1, 1)
- P2 = (2, 2, 2)
- P3 = (3, 3, 3)
- P4 = (4, 4, 4)

We can divide the space into cubic cells of side 2 and map each point to the cell given by its floored coordinates:

$$
\text{cell}(x, y, z) = (\lfloor x / 2 \rfloor, \lfloor y / 2 \rfloor, \lfloor z / 2 \rfloor)
$$

Under this scheme, P2 and P3 are both mapped to the cell (1, 1, 1), while P1 falls in (0, 0, 0) and P4 in (2, 2, 2). The locality-sensitive hashing algorithm would then return P3 as an approximate nearest neighbor for the query point P2.

## Exercise

Compute the cell assignments for the points P1, P2, P3, and P4 using the cell function above.

# Min-Hash

Min-Hash is a probabilistic algorithm for estimating the similarity between two sets. It works by hashing the elements of each set and keeping track of the minimum hash value for each set. For a single random hash function, the probability that the two sets share the same minimum hash value equals their Jaccard similarity, so the fraction of hash functions on which the minima agree estimates the Jaccard similarity.

Let A = {1, 2, 3} and B = {2, 3, 4}. One round of the Min-Hash estimate can be computed as follows:

- Hash the elements of A and B using a hash function. For example, we can use the hash function:

$$
\text{hash}(x) = (3x + 1) \bmod 5
$$

- Keep track of the minimum hash value for each set:

$$
\text{min hash}(A) = \min(4, 2, 0) = 0
$$

$$
\text{min hash}(B) = \min(2, 0, 3) = 0
$$

- The two minima agree (both are achieved by the shared element 3), so this hash function records a match. Repeating this with many independent hash functions and taking the fraction of matches yields an estimate of the Jaccard similarity, which for these sets is 0.5.

## Exercise

Estimate the Min-Hash similarity between the sets A = {1, 2, 3} and B = {2, 3, 4} using several hash functions of the form hash(x) = (ax + b) mod 5.
### Solution None # Applications of similarity search Similarity search has a wide range of applications, including: - Text search and information retrieval - Image search and recognition - DNA sequence analysis - Recommender systems - Social network analysis - Spam detection Similarity search can be used to find similar documents in a large collection of documents, such as a search engine. For example, a search query for "similarity search" might return a list of documents containing relevant information. ## Exercise Instructions: Describe a real-world scenario where similarity search can be applied. ### Solution None # Nearest neighbor search Nearest neighbor search is a technique for finding the nearest points or nearest vectors in a high-dimensional space. It is commonly used in machine learning and data mining applications, such as image recognition, speech recognition, and recommendation systems. The k-nearest neighbors algorithm is a popular nearest neighbor search algorithm, where the "k" nearest points to a query point are returned. Consider a dataset of 3D points: - P1 = (1, 1, 1) - P2 = (2, 2, 2) - P3 = (3, 3, 3) - P4 = (4, 4, 4) The nearest neighbor to the query point P1 using the Euclidean distance metric is P2. ## Exercise Instructions: Find the nearest neighbor to the query point P1 using the Euclidean distance metric. ### Solution None # Text classification and clustering Text classification and clustering are techniques used to group similar text documents or words together. Text classification is used to assign a document to a predefined category, while clustering is used to discover hidden patterns or relationships in the data. Both techniques can be applied to text data using similarity search algorithms, such as cosine similarity or Min-Hash. Consider a dataset of documents with predefined categories: - Document 1: "The cat is on the mat." (Category: Animal) - Document 2: "The dog is on the log." 
(Category: Animal)
- Document 3: "The sun is shining." (Category: Weather)

We can use cosine similarity to measure the similarity between the documents and assign them to categories based on their similarity.

## Exercise

Instructions: Classify the document "The sun is shining." using cosine similarity.

### Solution

None

# Recommender systems

Recommender systems are algorithms used to make predictions about the interests of a user based on the preferences or behavior of similar users. They are commonly used in online platforms, such as movie recommendation, product recommendation, or social media platforms. Similarity search algorithms, such as cosine similarity or Min-Hash, can be used to measure the similarity between users or items.

Consider a movie recommendation system with the following user preferences:

- User 1: Likes movies: "The Matrix", "Inception", "Interstellar"
- User 2: Likes movies: "The Matrix", "The Dark Knight", "Batman Begins"
- User 3: Likes movies: "Inception", "The Dark Knight", "The Lord of the Rings"

We can use cosine similarity to measure the similarity between the users' preferences and recommend movies to User 1 based on their similarity to other users.

## Exercise

Instructions: Recommend movies to User 1 using cosine similarity.

### Solution

None
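The user-to-user similarity step described above can be sketched in Python. The binary like-vector encoding over a fixed movie ordering is an illustrative choice, not one prescribed by the text.

```python
from math import sqrt

def cosine_similarity(u, v):
    # cos(u, v) = (u · v) / (|u| |v|); returns 0.0 if either vector is zero.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

movies = ["The Matrix", "Inception", "Interstellar",
          "The Dark Knight", "Batman Begins", "The Lord of the Rings"]
likes = {
    "User 1": {"The Matrix", "Inception", "Interstellar"},
    "User 2": {"The Matrix", "The Dark Knight", "Batman Begins"},
    "User 3": {"Inception", "The Dark Knight", "The Lord of the Rings"},
}
# Encode each user as a binary vector over the movie catalogue.
vectors = {u: [1 if m in s else 0 for m in movies] for u, s in likes.items()}

# Each of User 2 and User 3 shares exactly one liked movie with User 1,
# so both similarities are 1 / (sqrt(3) * sqrt(3)) = 1/3.
sims = {u: cosine_similarity(vectors["User 1"], vectors[u])
        for u in ("User 2", "User 3")}
print(sims)
```

Since the two scores tie here, a tie-breaking rule (for example, item popularity) would decide which of the similar users' unseen movies to recommend to User 1 first.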
\begin{document}
\newtheorem{theorem}{Theorem}
\newtheorem{problem}{Problem}
\newtheorem{definition}{Definition}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\newtheorem{example}{Example}
\newtheorem{conjecture}{Conjecture}
\newtheorem{algorithm}{Algorithm}
\newtheorem{exercise}{Exercise}
\newtheorem{remarkk}{Remark}
\newcommand{\beq}[1]{\begin{equation}\label{#1}}
\newcommand{\beqn}[1]{\begin{eqnarray}\label{#1}}
\newcommand{\req}[1]{(\ref{#1})}
\newcommand{\dfn}{\stackrel{\triangle}{=}}
\def\complex{\mathop{\raise .45ex\hbox{${\bf\scriptstyle{|}}$} \kern -0.40em {\rm \textstyle{C}}}\nolimits}
\newcommand{\RAISE}{{\:\raisebox{.6ex}{$\scriptstyle{>}$}\raisebox{-.3ex} {$\scriptstyle{\!\!\!\!\!<}\:$}}}
\newcommand{\hh}{{\:\raisebox{1.8ex}{$\scriptstyle{{\scriptscriptstyle \circ }}$}\raisebox{.0ex} {$\textstyle{\!\!\!\! H}$}}}
\newcommand{\calXX}{{\cal X\mbox{\raisebox{.3ex}{$\!\!\!\!\!-$}}}}
\newcommand{\gi}{{\raisebox{.0ex}{$\scriptscriptstyle{\cal X}$} \raisebox{.1ex} {$\scriptstyle{\!\!\!\!-}\:$}}}
\def\lip{\langle}
\def\rip{\rangle}
\newcommand{\ARROW}[1] {\begin{array}[t]{c} \longrightarrow \\[-0.2cm] \textstyle{#1} \end{array} }
\newcommand{\AR} {\begin{array}[t]{c} \longrightarrow \\[-0.3cm] \scriptstyle {n\rightarrow \infty} \end{array}}
\newcommand{\pile}[2] {\left( \begin{array}{c} {#1}\\[-0.2cm] {#2} \end{array} \right) }
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\mmbox}[1]{\mbox{\scriptsize{#1}}}
\newcommand{\ffrac}[2] {\left( \frac{#1}{#2} \right)}
\newcommand{\one}{\frac{1}{n}\:}
\newcommand{\half}{\frac{1}{2}\:}
\def\squarebox#1{\hbox to #1{ \vbox to #1{ }}}
\newcommand{\nqed}{\hspace*{\fill} \vbox{\hrule\hbox{\vrule\squarebox{.667em}\vrule}\hrule} }
\title{Regularity of the conditional expectations with respect to signal to noise ratio}
\author{ A. S. \"Ust\"unel\\Everything passes, yet friendship remains}
\maketitle
\noindent {\bf Abstract:}{\small{ Let $(W,H,\mu)$ be the classical Wiener space, assume that $U_\lambda=I_W+u_\lambda$ is an adapted perturbation of identity where the perturbation $u_\lambda$ is an $H$-valued map, defined up to $\mu$-equivalence classes, such that its Lebesgue density $s\to \dot{u}_\lambda(s)$ is almost surely adapted to the canonical filtration of the Wiener space and depends measurably on a real parameter $\lambda$. Assuming some regularity for $u_\lambda$, its Sobolev derivative and integrability of the divergence of the resolvent operator of its Sobolev derivative, we prove the almost sure and $L^p$-regularity w.r. to $\lambda$ of the estimation $E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]$ and more generally of the conditional expectations of the type $E[F\mid{\mathcal U}_\lambda(s)]$ for nice Wiener functionals, where $({\mathcal U}_\lambda(s),s\in [0,1])$ is the filtration generated by $U_\lambda$. These results are applied to prove the invertibility of the adapted perturbations of identity, hence the strong existence and uniqueness of functional SDE's; to the convexity of the entropy and of the quadratic estimation error; and finally to information theory.}}\\
\tableofcontents
\noindent Keywords: Entropy, adapted perturbation of identity, Wiener measure, invertibility.\\
\section{\bf{Introduction} }
\noindent The Malliavin calculus studies the regularity of the laws of the random variables (functionals) defined on a Wiener space (abstract or classical) with values in finite dimensional Euclidean spaces (more generally manifolds) using a variational calculus in the direction of the underlying quasi-invariance space, called the Cameron-Martin space.
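\noindent Let us recall, for the reader's convenience, the classical Cameron--Martin formula which quantifies this quasi-invariance: for any $h\in H$ and any positive, measurable $f$ on $W$,
$$
E[f(w+h)]=E\left[f(w)\,\exp\left(\int_0^1\dot{h}_s\,dW_s-\frac{1}{2}\int_0^1|\dot{h}_s|^2ds\right)\right]\,,
$$
i.e., the image of $\mu$ under $w\to w+h$ admits the Girsanov-type exponential above as its Radon-Nikodym density w.r. to $\mu$.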
Although its efficiency is globally recognized by now, for maps taking values in infinite dimensional spaces the Malliavin calculus does not apply as easily as in the finite dimensional case, due to the absence of the Lebesgue measure, and even the problem itself needs to be defined. For instance, there is a notion called the signal to noise ratio, rooted in engineering, which requires the regularity of infinite dimensional objects with respect to finite dimensional parameters (cf. \cite{D,GSV,G-Y,KZZ,M-Z,P}). Let us explain the problem briefly along its general lines: imagine a communication channel of the form $y=\sqrt{\lambda}x+w$, where $x$ denotes the emitted signal and $w$ is a noise which corrupts the communications. The problem of estimating the signal $x$ from the observed data $y$ has been studied since the early days of electrical engineering. One of the main problems dealt with is the behavior of the $L^2$-error of the estimation w.r. to the signal to noise ratio $\lambda$. This requires only elementary probability when $x$ and $w$ are independent finite dimensional variables, though it gives important results for engineers. In particular, it has been recently realized (cf. \cite{GSV,Z}) that, in this linear model with $w$ being Gaussian, the derivative of the mutual information between $x$ and $y$ w.r. to $\lambda$ equals half of the mean quadratic error of estimation. The infinite dimensional case is more delicate and already requires the techniques of Wiener space analysis and the Malliavin calculus (cf. \cite{Z}). The situation is much more complicated in the case where the signal is correlated with the noise; in fact we need the $\lambda$-regularity of the conditional expectations w.r. to the filtration generated by $y$, which is, at first sight, clearly outside the scope of the Malliavin calculus. In this paper we study the generalization of the problem mentioned above.
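\noindent For concreteness, let us recall the scalar prototype of the relation of \cite{GSV,Z} alluded to above: if $y=\sqrt{\lambda}\,x+w$ with $w$ a standard Gaussian variable independent of $x$ of finite variance, then
$$
\frac{d}{d\lambda}\,I(x;\sqrt{\lambda}\,x+w)=\frac{1}{2}\,E\left[\left(x-E[x|y]\right)^2\right]\,,
$$
where $I$ denotes the mutual information.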
Namely assume that we are given, in the setting of a classical Wiener space, denoted as $(W,H,\mu)$, a signal which is of the form of an adapted perturbation of identity:
$$
U_\lambda(t,w)=W_t(w)+\int_0^t\dot{u}_\lambda(s,w)ds\,,
$$
where $(W_t,t\in[0,1])$ is the canonical Wiener process, $\dot{u}_\lambda$ is an element of $L^2(ds\times d\mu)$ which is adapted to the Brownian filtration $ds$-almost surely and $\lambda$ is a real parameter. Let ${\mathcal U}_\lambda(t)$ be the sigma algebra generated by $(U_\lambda(s),\,s\leq t)$. What can we say about the regularity, i.e., continuity and/or differentiability w.r. to $\lambda$, of the functionals of the form $\lambda\to E[F\mid {\mathcal U}_\lambda(t)]$ and $\lambda\to E[F\mid U_\lambda=w]$ (the latter denotes the disintegration) given various regularity assumptions about the map $\lambda\to \dot{u}_\lambda$, like differentiability of it or of its $H$-Sobolev derivatives w.r. to $\lambda$? We prove that the answer to these questions depends essentially on the behavior of the random resolvent operator $(I_H+\nabla u_\lambda)^{-1}$, where $\nabla u_\lambda$ denotes the Sobolev derivative of $u_\lambda$, which is a quasi-nilpotent Hilbert-Schmidt operator, hence its resolvent always exists. More precisely we prove that if the functional
\begin{equation}
\label{cond-1}
(1+\rho(-\delta u_\lambda))\,\delta\left((I_H+\nabla u_\lambda)^{-1}\frac{d}{d\lambda}u_\lambda\right)
\end{equation}
is in $L^1(d\lambda\times d\mu,[0,M]\times W)$ for some $M>0$, where $\delta$ denotes the Gaussian divergence and $\rho(-\delta u)$ is the Girsanov-Wick exponential corresponding to the stochastic integral $\delta u$ (cf. the next section), then the map $\lambda\to L_\lambda$ is absolutely continuous almost surely, where $L_\lambda$ is the Radon-Nikodym derivative of $U_\lambda\mu$ w.r. to $\mu$, and we can calculate its derivative explicitly. This observation follows from some variational calculus and from the Malliavin calculus.
The iteration of the hypothesis (\ref{cond-1}) by replacing $\delta((I_H+\nabla u_\lambda)^{-1}\frac{d}{d\lambda}u_\lambda)$ with its $\lambda$-derivatives permits us to prove the higher order differentiability of the above conditional expectations w.r. to $\lambda$ and these results are exposed in Section \ref{S-Bas}. In Section \ref{S-Inv}, we give applications of these results to show the almost sure invertibility of the adapted perturbations of the identity, which is equivalent to the strong existence and uniqueness results of the (functional) stochastic differential equations. In Section \ref{S-Ent}, we apply the results of Section \ref{S-Bas} to calculate the derivatives of the relative entropy of $U_\lambda\mu$ w.r. to $\mu$ in the general case, i.e., we do not suppose the a.s. invertibility of $U_\lambda$, which demands the calculation of the derivatives of the non-trivial conditional expectations. Some results are also given for the derivative of the quadratic error in the case of anticipative estimation as well as the relations to the Monge-Kantorovich measure transportation theory and the Monge-Amp\`ere equation. In Section \ref{S-Info}, we generalize the celebrated result about the relation between the mutual information and the mean quadratic error (cf. \cite{D,G-Y,KZZ}) in the following way: we suppress the hypothesis of independence between the signal and the noise as well as the almost sure invertibility of the observation for fixed exterior parameter of the signal. With the help of the results of Section \ref{S-Bas}, the calculations of the first and second order derivatives of the mutual information w.r. to the ratio parameter $\lambda$ are also given. \section{\bf{Preliminaries and notation}} \label{S-Pr} \label{preliminaries} \noindent Let $W$ be the classical Wiener space $C([0,T],{\rm I\!R}^n)$ with the Wiener measure $\mu$. The corresponding Cameron-Martin space is denoted by $H$. 
Recall that the injection $H\hookrightarrow W$ is compact and its adjoint is the natural injection $W^\star\hookrightarrow H^\star\subset L^2(\mu)$. Since the image of $\mu$ under the mappings $w\to w+h,\,h\in H$ is equivalent to $\mu$, the G\^ateaux derivative in the $H$ direction of the random variables is a closable operator on $L^p(\mu)$-spaces and this closure is denoted by $\nabla$ and called the Sobolev derivative (on the Wiener space), cf., for example, \cite{ASU, ASU-1}. The corresponding Sobolev spaces consisting of (the equivalence classes) of real-valued random variables will be denoted as ${\rm I\!D}_{p,k}$, where $k\in {\rm I\!N}$ is the order of differentiability and $p>1$ is the order of integrability. If the random variables are with values in some separable Hilbert space, say $\Phi$, then we shall define similarly the corresponding Sobolev spaces and they are denoted as ${\rm I\!D}_{p,k}(\Phi)$, $p>1,\,k\in {\rm I\!N}$. Since $\nabla:{\rm I\!D}_{p,k}\to{\rm I\!D}_{p,k-1}(H)$ is a continuous and linear operator, its adjoint is a well-defined operator which we represent by $\delta$. A very important feature in the theory is that $\delta$ coincides with the It\^o integral of the Lebesgue density of the adapted elements of ${\rm I\!D}_{p,k}(H)$ (cf.\cite{ASU,ASU-1}). For any $t\geq 0$ and measurable $f:W\to {\rm I\!R}_+$, we define
$$
P_tf(x)=\int_Wf\left(e^{-t}x+\sqrt{1-e^{-2t}}y\right)\mu(dy)\,;
$$
it is well-known that $(P_t,t\in {\rm I\!R}_+)$ is a hypercontractive semigroup on $L^p(\mu),p>1$, which is called the Ornstein-Uhlenbeck semigroup (cf.\cite{ASU,ASU-1}). Its infinitesimal generator is denoted by $-{\mathcal L}$ and we call ${\mathcal L}$ the Ornstein-Uhlenbeck operator (sometimes called the number operator by the physicists). The norms defined by
\begin{equation}
\label{norm}
\|\phi\|_{p,k}=\|(I+{\mathcal L})^{k/2}\phi\|_{L^p(\mu)}
\end{equation}
are equivalent to the norms defined by the iterates of the Sobolev derivative $\nabla$.
This observation permits us to identify the duals of the space ${\rm I\!D}_{p,k}(\Phi);p>1,\,k\in{\rm I\!N}$ by ${\rm I\!D}_{q,-k}(\Phi')$, with $q^{-1}=1-p^{-1}$, where the latter space is defined by replacing $k$ in (\ref{norm}) by $-k$; this gives us the distribution spaces on the Wiener space $W$ (in fact we can take as $k$ any real number). An easy calculation shows that, formally, $\delta\circ \nabla={\mathcal L}$, and this permits us to extend the divergence and the derivative operators to the distributions as linear, continuous operators. In fact $\delta:{\rm I\!D}_{q,k}(H\otimes \Phi)\to {\rm I\!D}_{q,k-1}(\Phi)$ and $\nabla:{\rm I\!D}_{q,k}(\Phi)\to{\rm I\!D}_{q,k-1}(H\otimes \Phi)$ continuously, for any $q>1$ and $k\in {\rm I\!R}$, where $H\otimes \Phi$ denotes the completed Hilbert-Schmidt tensor product (cf., for instance \cite{ASU,ASU-1,BOOK}). We shall denote by ${\rm I\!D}(\Phi)$ and ${\rm I\!D}'(\Phi)$ respectively the sets
$$
{\rm I\!D}(\Phi)=\bigcap_{p>1,k\in {\rm I\!N}}{\rm I\!D}_{p,k}(\Phi)\,,
$$
and
$$
{\rm I\!D}'(\Phi)=\bigcup_{p>1,k\in {\rm I\!N}}{\rm I\!D}_{p,-k}(\Phi)\,,
$$
where the former is equipped with the projective and the latter is equipped with the inductive limit topologies. Let us denote by $(W_t,t\in[0,1])$ the coordinate map on $W$ which is the canonical Brownian motion (or Wiener process) under the Wiener measure, and let $({\mathcal F}_t,t\in[0,1])$ be its completed filtration. The elements of $L^2(\mu,H)={\rm I\!D}_{2,0}(H)$ such that $w\to\dot{u}(s,w)$ are $ds$-a.s. ${\mathcal F}_s$-measurable will be noted as $L^2_a(\mu,H)$ or ${\rm I\!D}^a_{2,0}(H)$. $L^0_a(\mu,H)$ is defined similarly (under the convergence in probability). Let $U:W\to W$ be defined as $U=I_W+u$ with some $u\in L^0_a(\mu,H)$; we say that $U$ is $\mu$-almost surely invertible if there exists some $V:W\to W$ such that $V\mu\ll\mu$ and that
$$
\mu\left\{w:U\circ V(w)=V\circ U(w)=w\right\}=1\,.
$$ The following results are proved with various extensions in \cite{ASU-2,ASU-3,ASU-4}: \begin{theorem} \label{thm-0} Assume that $u\in L^0_a(\mu,H)$, let $L$ be the Radon-Nikodym density of $U\mu=(I_W+u)\mu$ w.r. to $\mu$, where $U\mu$ denotes the image (push forward) of $\mu$ under the map $U$. Then we have \begin{enumerate} \item $$ E[L\log L]\leq \frac{1}{2}\|u\|_{L^2(\mu,H)}^2=\frac{1}{2}E\int_0^1|\dot{u}_s|^2ds\,. $$ \item Assume that $E[\rho(-\delta u)]=1$, then we have the equality: \begin{equation} \label{eqlty} E[L\log L]=\frac{1}{2}\|u\|_{L^2(\mu,H)}^2 \end{equation} if and only if $U$ is almost surely invertible and its inverse can be written as $V=I_W+v$, with $v\in L_a^0(\mu,H)$. \item Assume that $E[L\log L-\log L]<\infty$ and the equality (\ref{eqlty}) holds, then $U$ is again almost surely invertible and its inverse can be written as $V=I_W+v$, with $v\in L_a^0(\mu,H)$. \end{enumerate} \end{theorem} \noindent The following result gives the relation between the entropy and the estimation ( cf. \cite{ASU-2} for the proof): \begin{theorem} \label{thm-00} Assume that $u\in L^2_a(\mu,H)$, let $L$ be the Radon-Nikodym density of $U\mu=(I_W+u)\mu$ w.r. to $\mu$, where $U\mu$ denotes the image (push forward) of $\mu$ under the map $U$ and let $({\mathcal U}_t,t\in [0,1])$ be the filtration generated by $(t,w)\to U(t,w)$. Assume that $E[\rho(-\delta u)]=1$. Then we have \begin{itemize} \item $$ E[L\log L]= \frac{1}{2}E\int_0^1|E[\dot{u}_s\mid {\mathcal U}_s]|^2ds\,. $$ \item $$ L\circ U\,E[\rho(-\delta u)|U]=1 $$ $\mu$-almost surely. \end{itemize} \end{theorem} \section{\bf{Basic results}} \label{S-Bas} \noindent Let $(W,H,\mu)$ be the classical Wiener space, i.e., $W=C_0([0,1],{\rm I\!R}^d),\,H=H^1([0,1],{\rm I\!R}^d)$ and $\mu$ is the Wiener measure under which the evaluation map at $t\in [0,1]$ is a Brownian motion. 
Assume that $U_\lambda:W\to W$ is defined as $$ U_\lambda(t,w)=W_t(w)+\int_0^t\dot{u}_\lambda(s,w)ds\,, $$ with $\lambda\in{\rm I\!R}$ being a parameter. We assume that $\dot{u}_\lambda\in L_a^2([0,1]\times W,dt\times d\mu)$, where the subscript ``$_a$'' means that it is adapted to the canonical filtration for almost all $s\in [0,1]$. We denote the primitive of $\dot{u}_\lambda$ by $u_\lambda$ and assume that $E[\rho(-\delta u_\lambda)]=1$, where $\rho$ denotes the Girsanov exponential: $$ \rho(-\delta u_\lambda)=\exp\left(-\int_0^1 \dot{u}_\lambda(s) dW_s-\frac{1}{2}\int_0^1|\dot{u}_\lambda(s)|^2ds\right)\,. $$ We shall assume that the map $\lambda\to \dot{u}_\lambda$ is differentiable as a map in $L_a^2([0,1]\times W,dt\times d\mu)$, we denote its derivative w.r. to $\lambda$ by $\dot{u}'_\lambda(s)$ or by $\dot{u}'(\lambda,s)$ and its primitive w.r. to $s$ is denoted as $u_\lambda'(t)$. \begin{theorem} \label{thm-1} Suppose that $\lambda\to u_\lambda\in L_{loc}^p({\rm I\!R},d\lambda;{\rm I\!D}_{p,1}(H))$ for some $p\geq 1$, with $E[\rho(-\delta u_\lambda)]=1$ for any $\lambda\geq 0$ and also that $$ E\int_0^\lambda(1+\rho(-\delta u_{\alpha})) \left|E[\delta(K_\alpha u'_\alpha)|U_{\alpha}]\right|^pd\alpha<\infty\,, $$ where $K_\alpha=(I_H+\nabla u_\alpha)^{-1}$. Then the map $$ \lambda\to L_\lambda=\frac{dU_\lambda\mu}{d\mu} $$ is absolutely continuous and we have $$ L_\lambda(w)=L_0\exp \int_0^\lambda E\Big[\delta(K_\alpha u'_\alpha)|U_\alpha=w\Big]d\alpha\,. $$ \end{theorem} \noindent {\bf Proof:\ } Let us note first that the map $(\lambda,w)\to L_\lambda(w)$ is measurable thanks to the Radon-Nikodym theorem. 
Besides, for any (smooth) cylindrical function $f$, we have \begin{eqnarray*} \frac{d}{d\lambda}E[f\circ U_\lambda]&=&E[(\nabla f\circ U_\lambda,u'_\lambda)_H]\\ &=&E[((I_H+\nabla u_\lambda)^{-1\star}\nabla (f\circ U_\lambda),u'_\lambda)_H]\\ &=&E[(\nabla (f\circ U_\lambda),(I_H+\nabla u_\lambda)^{-1} u'_\lambda)_H]\\ &=&E[f\circ U_\lambda\,\delta\{(I_H+\nabla u_\lambda)^{-1} u'_\lambda\}]\\ &=&E[f\circ U_\lambda\,E[\delta (K_\lambda u'_\lambda)|U_\lambda]]\\ &=&E[f\,E[\delta (K_\lambda u'_\lambda)|U_\lambda=w]L_\lambda]\,. \end{eqnarray*} Hence, for any fixed $f$, we get $$ \frac{d}{d\lambda}\langle f, L_\lambda\rangle =\langle f,L_\lambda E[\delta (K_\lambda u'_\lambda)|U_\lambda=w]\rangle\,, $$ both sides of the above equality are continuous w.r. to $\lambda$, hence we get $$ <f,L_\lambda>-<f,L_0>=\int_0^\lambda <f,L_{\alpha} E\left[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w\right]>d{\alpha}\,. $$ From the hypothesis, we have $$ E\int_0^\lambda L_{\alpha} |E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w]|d{\alpha}=E\int_0^\lambda |E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}]|\,d{\alpha}<\infty\,. $$ By the measurability of the disintegrations, the mapping $({\alpha},w)\to E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w]$ has a measurable modification, hence the following integral equation holds in the ordinary sense for almost all $w\in W$ $$ L_\lambda=L_0+\int_0^\lambda L_{\alpha} E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w] d{\alpha}\,, $$ for $\lambda>0$. Therefore the map $\lambda\to L_\lambda$ is almost surely absolutely continuous w.r. to the Lebesgue measure. To show its representation as an exponential, we need to show that the map ${\alpha}\to E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha} =w]$ is almost surely locally integrable. 
To achieve this it suffices to observe that \begin{eqnarray*} E\int_0^\lambda | E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha} =w]|d{\alpha}&=&E\int_0^\lambda | E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha} =w]|\frac{L_{\alpha}}{L_{\alpha}}d{\alpha} \\ &=&E\int_0^\lambda | E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}]|\frac{1}{L_{\alpha}\circ U_{\alpha}}d{\alpha}\\ &=&E\int_0^\lambda | E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}]|E[\rho(-\delta u_{\alpha})|U_{\alpha}]d{\alpha}<\infty \end{eqnarray*} by hypothesis and by Theorem \ref{thm-00}. Consequently we have the explicit expression for $L_\lambda$ given as: $$ L_\lambda(w)=L_0\exp \int_0^\lambda E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w]d{\alpha}\,. $$ \nqed \begin{remarkk} \noindent An important tool to control the hypothesis of Theorem \ref{thm-1} is the inequality of T. Carleman which says that (cf. \cite{D-S}, Corollary XI.6.28) $$ \|{\textstyle{\det_2}}(I_H+A)(I_H+A)^{-1}\|\leq \exp\frac{1}{2}\left(\|A\|_2^2+1\right)\,, $$ for any Hilbert-Schmidt operator $A$, where the left hand side is the operator norm, ${\textstyle{\det_2}}(I_H+A)$ denotes the modified Carleman-Fredholm determinant and $\|\cdot\|_2$ denotes the Hilbert-Schmidt norm. Let us remark that if $A$ is a quasi-nilpotent operator, i.e., if the spectrum of $A$ consists of zero only, then ${\textstyle{\det_2}}(I_H+A)=1$, hence in this case the Carleman inequality reads $$ \|(I_H+A)^{-1}\|\leq \exp\frac{1}{2}\left(\|A\|_2^2+1\right)\,. 
$$
This case happens when $A$ is equal to the Sobolev derivative of some $u\in {\rm I\!D}_{p,1}(H)$ whose drift $\dot{u}$ is adapted to the filtration $({\mathcal F}_t,\,t\in [0,1])$. \end{remarkk} \noindent From now on, for the sake of technical simplicity we shall assume that {\bf{$u_\lambda$ is essentially bounded uniformly w.r.to $\lambda$}.} \begin{proposition} \label{prop-1} Let $F\in L^p(\mu)$; then the map $\lambda\to E[F|U_\lambda=w]$ is weakly continuous with values in $L^{p-}(\mu)${\footnote{$p-$ denotes any $p'<p$ and $q+$ any $q'>q$}}. \end{proposition} \noindent {\bf Proof:\ } First we have
\begin{eqnarray*}
\int_W|E[F|U_\lambda=w]|^{p-}d\mu&=&\int_W|E[F|U_\lambda=w]|^{p-}\frac{L_\lambda}{L_\lambda}d\mu\\
&=&\int_W |E[F|U_\lambda]|^{p-}\frac{1}{L_\lambda\circ U_\lambda}d\mu\\
&=&\int_W |E[F|U_\lambda]|^{p-}E[\rho(-\delta u_\lambda)|U_\lambda]d\mu<\infty\,,
\end{eqnarray*}
hence $E[F|U_\lambda=w]\in L^{p-}(\mu)$ for any $F\in L^p(\mu)$. Besides, for any $f\in C_b(W)$,
$$
E[f\circ U_\lambda\,F]=E[f E[F|U_\lambda=w]\,L_\lambda]
$$
therefore
$$
|E[f\circ U_\lambda\,F]|\leq \|F\|_p \|f\circ U_\lambda\|_q\leq C_q \|F\|_p\|f\|_{q+}\,.
$$
This relation, combined with the continuity of $\lambda\to f\circ U_\lambda$, due to the Lusin theorem, in $L^q$ for any $f\in L^{q+}$, implies the weak continuity of the map $\lambda\to E[F|U_\lambda=w]\,L_\lambda$ with values in $L^{p-}(\mu)$; since $\lambda\to L_\lambda$ and $\lambda\to (L_\lambda)^{-1}$ are almost surely and strongly continuous in $L^p(\mu)$, the claim follows. \nqed \begin{theorem} \label{thm-2} Assume that $F\in{\rm I\!D}_{p,1}$ for some $p>1$ and that
$$
E\int_0^\lambda |\delta(FK_{\alpha} u'_{\alpha})|d{\alpha}<\infty
$$
for any $\lambda>0$, then $\lambda\to E[F|U_\lambda=w]$ is $\mu$-a.s. absolutely continuous w.r. to the Lebesgue measure $d\lambda$ and the map $\lambda\to E[F|U_\lambda]$ is almost surely and hence $L^p$-continuous.
\end{theorem} \noindent {\bf Proof:\ } Using the same method as in the proof of Theorem \ref{thm-1}, we obtain \begin{eqnarray*} \frac{d}{d\lambda}E[\theta\circ U_\lambda\,F]&=&\frac{d}{d\lambda}E[\theta\,E[F|U_\lambda=w]\,L_\lambda]\\ &=&E[\theta \,L_\lambda\,E[\delta(F\,K_\lambda u'_\lambda)|U_\lambda=w]] \end{eqnarray*} for any cylindrical function $\theta$. By continuity w.r.to $\lambda$, we get $$ E\left[\theta\Big(L_\lambda E[F|U_\lambda=w]-L_0 E[F|U_0=w]\Big)\right]=\int_0^\lambda E\left[\theta L_{\alpha} E[\delta(FK_{\alpha} u'_{\alpha})|U_{\alpha}=w]\right]d{\alpha}\,. $$ By the hypothesis $$ E\int_0^\lambda | L_{\alpha} E[\delta(FK_{\alpha} u'_{\alpha})|U_{\alpha}=w]|d{\alpha}<\infty $$ and since $\theta$ is an arbitrary cylindrical function, we obtain the identity $$ L_\lambda E[F|U_\lambda=w]-L_0 E[F|U_0=w]=\int_0^\lambda L_{\alpha} \,E[\delta(F K_{\alpha} u'_{\alpha})|U_{\alpha}=w]d{\alpha} $$ almost surely and this proves the first part of the theorem since $\lambda\to L_\lambda$ is already absolutely continuous and strictly positive. For the second part, we denote $E[F|U_\lambda]$ by $\hat{F}(\lambda)$ and we assume that $(\lambda_n,n\geq 1)$ tends to some $\lambda$, then there exists a sub-sequence $(\hat{F}(\lambda_{k_l}),l\geq 1)$ which converges weakly to some limit; but, from the first part of the proof, we know that $(E[F|U_{\lambda_{k_l}}=w],l\geq 1)$ converges almost surely to $E[F|U_\lambda=w]$ and by the uniform integrability, there is also strong convergence in $L^{p-}(\mu)$. Hence, for any cylindrical function $G$, we have \begin{eqnarray*} E[\hat{F}(\lambda_{k_l})\,G]&=&E[E[F|U_{\lambda_{k_l}}=w]E[G|U_{\lambda_{k_l}}=w]L_{\lambda_{k_l}}]\\ &\rightarrow&E[E[F|U_{\lambda}=w]E[G|U_{\lambda}=w]L_{\lambda}]\\ &=&E[\hat{F}(\lambda)\,G]\,. \end{eqnarray*} Consequently, the map $\lambda\to \hat{F}(\lambda)$ is weakly continuous in $L^p$, therefore it is also strongly continuous. 
\nqed \noindent {\bf Remark:\ } Another proof consists of remarking that $$ E[F|U_\lambda=w]|_{w=U_\lambda}=E[F|U_\lambda] $$ $\mu$-a.s. and that $\lambda\to E[F|U_\lambda=w]$ is continuous a.s. and in $L^{p-}$ from the first part of the proof and that $(L_\lambda, \lambda\in [a,b])$ is uniformly integrable. These observations, combined with the Lusin's theorem imply the continuity in $L^0(\mu)$ (i.e., in probability) of $\lambda\to E[F|U_\lambda]$ and the $L^p$-continuity follows. \noindent We shall need some technical results, to begin with, let $U^\tau_\lambda$ denote the shift defined on $W$ by $$ U^\tau_\lambda(w)=w+\int_0^{\cdot\wedge\tau}\dot{u}_\lambda(s)ds\,, $$ for $\tau\in [0,1]$. We shall denote by $L_\lambda(\tau)$ the Radon-Nikodym density $$ \frac{dU^\tau_\lambda\mu}{d\mu}=L_\lambda(\tau)\,. $$ \begin{lemma} \label{lemma-1} We have the relation $$ L_\lambda(\tau)=E[L_\lambda|{\mathcal F}_\tau] $$ almost surely. \end{lemma} \noindent {\bf Proof:\ } Let $f$ be an ${\mathcal F}_\tau$-measurable, positive, cylindrical function; then it is straightforward to see that $f\circ U_\lambda=f\circ U_\lambda^\tau$, hence $$ E[f\,L_\lambda]=E[f\circ U_\lambda]=E[f\circ U_\lambda^\tau]=E[f\,L_\lambda(\tau)]\,. $$ \nqed \begin{lemma} \label{lemma-2} Let ${\mathcal U}_\lambda^\tau(t)$ be the sigma algebra generated by $\{U^\tau_\lambda(s);\,s\leq t\}$. Then, we have $$ E[f|{\mathcal U}_\lambda^\tau(1)]=E[f|U_\lambda^\tau] $$ for any positive, measurable function on $W$. \end{lemma} \noindent {\bf Proof:\ } Here, of course the second conditional expectation is to be understood w.r. to the sigma algebra generated by the mapping $U_\lambda^\tau$ and once this point is fixed the claim is trivial. \nqed \begin{proposition} \label{prop-2} With the notations explained above, we have $$ L_\lambda(\tau)=L_0(\tau)\exp\int_0^\lambda E[\delta\{(I_H+\nabla u_{\alpha}^\tau)^{-1}u'^{\tau}_{\alpha}\}|U_{\alpha}^\tau=w]d{\alpha}\,. 
$$
Moreover, the map $(\lambda,\tau)\to L_\lambda(\tau)$ is continuous on ${\rm I\!R}\times [0,1]$ with values in $L^p(\mu)$ for any $p\geq 1$. \end{proposition} \noindent {\bf Proof:\ } The first claim can be proved as we have done in the first part of the proof of Theorem \ref{thm-1}. For the second part, let $f$ be a positive, measurable function on $W$; we have
$$
E[f\circ U_\lambda^\tau]=E[f\,L_\lambda(\tau)]\,.
$$
If $(\tau_n,\lambda_n)\to(\tau,\lambda)$, from the Lusin theorem and the uniform integrability of the densities $(L_{\lambda_n}(\tau_n),n\geq 1)$, the sequence $(f\circ U^{\tau_n}_{\lambda_n},\,n\geq 1)$ converges in probability to $f\circ U_\lambda^\tau$, hence, again by the uniform integrability, for any $q>1$ and $f\in L^q(\mu)$,
$$
\lim_nE[f\,L_{\lambda_n}(\tau_n)]=E[f\,L_\lambda(\tau)]\,.
$$
From Lemma \ref{lemma-1}, we have
\begin{eqnarray*}
E[L_{\lambda_n}(\tau_n)^2]&=&E[L_{\lambda_n}(\tau_n)\,E[L_{\lambda_n}|{\mathcal F}_{\tau_n}]]\\
&=&E[L_{\lambda_n}(\tau_n)\,L_{\lambda_n}]\,;
\end{eqnarray*}
since, from Theorem \ref{thm-1}, $L_{\lambda_n}\to L_\lambda$ strongly in all $L^p$-spaces, it follows that $(\lambda,\tau)\to L_\lambda(\tau)$ is $L^2$-continuous, hence also $L^p$-continuous for any $p>1$. \nqed \begin{proposition} \label{prop-3} The mapping $(\lambda,\tau)\to L_\lambda(\tau)$ is a.s. continuous; moreover the map
$$
(\tau,w)\to (\lambda\to L_\lambda(\tau,w))
$$
is a $C({\rm I\!R})$-valued continuous martingale and its restriction to compact intervals (of $\lambda$) is uniformly integrable. \end{proposition} \noindent {\bf Proof:\ } Let us take the interval $\lambda\in [0,T]$; from Lemma \ref{lemma-1} we have $L_\lambda(\tau)=E[L_\lambda|{\mathcal F}_\tau]$. Since $C([0,T])$ is a separable Banach space and since we are working with the completed Brownian filtration, the latter equality implies that $\tau\to (L_\lambda(\tau),\lambda\in[0,T])$ is an a.s. continuous, $C([0,T])$-valued uniformly integrable martingale.
\nqed \begin{theorem} \label{thm-3} Assume that $$ E\int_0^\lambda\int_0^1\left(|\delta(\dot{u}_{\alpha}(s)K_{\alpha} u'_{\alpha})|+|\dot{u}'_{\alpha}(s)|^2\right)ds\,d{\alpha}<\infty $$ for any $\lambda\geq 0$, then the map $$ \lambda\to E[\dot{u}_\lambda(t)|{\mathcal U}_\lambda(t)] $$ is continuous with values in $L_a^p(\mu,\,L^2([0,1],{\rm I\!R}^d))$, $p\geq 1$. \end{theorem} \noindent {\bf Proof:\ } Let $\xi\in L_a^\infty(\mu,H)$ be smooth and cylindrical, then, by similar calculations as in the proof of Theorem \ref{thm-2}, we get \begin{eqnarray*} \frac{d}{d\lambda}E[(\xi\circ U_\lambda,u_\lambda)_H]&=&\frac{d}{d\lambda}<\xi\circ U_\lambda,u_\lambda>= \frac{d}{d\lambda}<\xi\circ U_\lambda,\hat{u}_\lambda>\\ &=&E\int_0^1\dot{\xi}_sL_\lambda(s)E\left[\delta(\dot{u}_\lambda(s)K_\lambda u'_\lambda)+\dot{u}'_\lambda(s)|U_\lambda^s=w\right]ds\,, \end{eqnarray*} but the l.h.s. is equal to $$ E[(\nabla\xi\circ U_\lambda[u'_\lambda],u_\lambda)_H+(\xi\circ U_\lambda,u'_\lambda)_H]\,, $$ which is continuous w.r. to $\lambda$ provided that $\xi$ is smooth, and that $\lambda\to (u'_\lambda, u_\lambda)$ is continuous in $L^p$ for $p\geq 2$. Consequently, we have the relation $$ <\xi\circ U_\lambda,u_\lambda>-<\xi\circ U_0,u_0>= E\int_0^\lambda \int_0^1\dot{\xi}_sL_{\alpha}(s)E\left[\delta (\dot{u}_{\alpha}(s)K_{\alpha} u'_{\alpha})+\dot{u}'_{\alpha}(s)|U_{\alpha}^s=w\right]dsd{\alpha} $$ and the hypothesis implies that $\lambda\to L_\lambda(s)E[\dot{u}_\lambda(s)|U_\lambda^s=w]$ is $\mu$-a.s. absolutely continuous w.r. to the Lebesgue measure $d\lambda$. Since $\lambda\to L_\lambda(s)$ is also a.s. absolutely continuous, it follows that $\lambda\to E[\dot{u}_\lambda(s)|U_\lambda^s=w]$ is a.s. absolutely continuous. Let us denote this disintegration as the kernel $N_\lambda(w,\dot{u}_\lambda(s))$, then $$ N_\lambda(U_\lambda^s(w),\dot{u}_\lambda(s))=E[\dot{u}_\lambda(s)|U_\lambda^s] $$ a.s.
From the Lusin theorem, it follows that the map $\lambda\to N_\lambda(U_\lambda^s,\dot{u}_\lambda(s))$ is continuous with values in $L_a^0(\mu,L^2([0,1],{\rm I\!R}^d))$ and the $L^p$-continuity follows from the dominated convergence theorem. \nqed \begin{remarkk} In the proof above we have the following result: assume that $\lambda\to f_\lambda$ is continuous in $L^0(\mu)$, then $\lambda\to f_\lambda\circ U_\lambda$ is also continuous in $L^0(\mu)$ provided that the family $$ \left\{\frac{dU_\lambda\mu}{d\mu},\lambda\in [a,b]\right\} $$ is uniformly integrable for any compact interval $[a,b]$. To see this, it suffices to verify the sequential continuity; hence assume that $\lambda_n\to \lambda$, then we have \begin{eqnarray*} \mu\{|f_{\lambda_n}\circ U_{\lambda_n}-f_{\lambda}\circ U_{\lambda}|>c\}&\leq&\mu\{|f_{\lambda_n}\circ U_{\lambda_n}-f_{\lambda}\circ U_{\lambda_n}|>c/2\}\\ &&+\mu\{|f_{\lambda}\circ U_{\lambda_n}-f_{\lambda}\circ U_{\lambda}|>c/2\}\,, \end{eqnarray*} but $$ \mu\{|f_{\lambda_n}\circ U_{\lambda_n}-f_{\lambda}\circ U_{\lambda_n}|>c/2\}=E[L_{\lambda_n}1_{\{|f_{\lambda_n}-f_\lambda|>c/2\}}]\to 0 $$ by the uniform integrability of $(L_{\lambda_n},\,n\geq 1)$ and the continuity of $\lambda\to f_\lambda$. The second term tends also to zero by the standard use of the Lusin theorem and again by the uniform integrability of $(L_{\lambda_n},\,n\geq 1)$. \end{remarkk} \begin{corollary} \label{cor-1} The map $\lambda\to E[\rho(-\delta u_\lambda)|U_\lambda]$ is continuous as an $L^p(\mu)$-valued map for any $p\geq 1$. \end{corollary} \noindent {\bf Proof:\ } We know that $$ E[\rho(-\delta u_\lambda)|U_\lambda]=\frac{1}{L_\lambda\circ U_\lambda}\,. $$ \nqed \begin{corollary} \label{cor-2} Let $Z_\lambda(t)$ be the innovation process associated to $U_\lambda$, then $$ \lambda\to \int_0^1E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]dZ_\lambda(s) $$ is continuous as an $L^p(\mu)$-valued map for any $p\geq 1$.
\end{corollary} \noindent {\bf Proof:\ } We have $$ \log L_\lambda\circ U_\lambda=\int_0^1E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]dZ_\lambda(s)+\frac{1}{2}\int_0^1|E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]|^2ds\,, $$ since the l.h.s. of this equality and the second term at the right are continuous, the first term at the right must also be continuous. \nqed \begin{theorem} \label{thm-4} Assume that $$ E\int_0^\lambda|\delta\{\delta(K_{\alpha} u'_{\alpha})K_{\alpha} u'_{\alpha}-K_{\alpha}\nabla u'_{\alpha} K_{\alpha} u'_{\alpha}+K_{\alpha} u''_{\alpha}\}|d{\alpha}<\infty $$ for any $\lambda\geq 0$. Then the map $$ \lambda\to \frac{d}{d\lambda}L_\lambda $$ is a.s. absolutely continuous w.r. to the Lebesgue measure $d\lambda$ and we have $$ \frac{d^2}{d\lambda^2}L_\lambda(w)=L_\lambda E[\delta D_\lambda|U_\lambda=w]\,, $$ where $$ D_\lambda=\delta(K_\lambda u'_\lambda)K_\lambda u'_\lambda-K_\lambda\nabla u'_\lambda K_\lambda u'_\lambda+K_\lambda u''_\lambda\,. $$ \end{theorem} \noindent {\bf Proof:\ } Let $f$ be a smooth function on $W$, using the integration by parts formula as before, we get \begin{eqnarray*} \frac{d^2}{d\lambda^2}E[f\circ U_\lambda]&=&\frac{d}{d\lambda} E[f\circ U_\lambda\,\delta(K_\lambda u'_\lambda)]\\ &=&E[(\nabla f\circ U_\lambda,u'_\lambda)_H\delta(K_\lambda u'_\lambda)]\\ &=&E[(K_\lambda^\star\nabla(f\circ U_\lambda),u'_\lambda)_H\delta(K_\lambda u'_\lambda)+f\circ U_\lambda\delta(-K_\lambda\nabla u'_\lambda K_\lambda u'_\lambda+K_\lambda u''_\lambda)]\\ &=&E\left[f\circ U_\lambda\left\{\delta(\delta(K_\lambda u'_\lambda)K_\lambda u'_\lambda)-\delta(K_\lambda\nabla u'_\lambda K_\lambda u'_\lambda)+\delta(K_\lambda u''_\lambda)\right\}\right]\,.
\end{eqnarray*} Let us define the map $D_\lambda$ as $$ D_\lambda=\delta(K_\lambda u'_\lambda)K_\lambda u'_\lambda-K_\lambda\nabla u'_\lambda K_\lambda u'_\lambda+K_\lambda u''_\lambda\,, $$ we have obtained then the following relation $$ \frac{d^2}{d\lambda^2}E[f\circ U_\lambda]=E[f\,L_\lambda\,E[\delta D_\lambda|U_\lambda=w]] $$ hence $$ < \frac{d}{d\lambda}L_\lambda,f>-<\frac{d}{d\lambda}L_\lambda,f>|_{\lambda=0}=\int_0^\lambda E[f\,L_{\alpha}\,E[\delta D_{\alpha}|U_{\alpha}=w]]d{\alpha}\,. $$ The hypothesis implies the existence of the strong (Bochner) integral and we conclude that $$ L'_\lambda-L'_0=\int_0^\lambda L_{\alpha} E[\delta D_{\alpha}|U_{\alpha}=w]d{\alpha} $$ a.s. for any $\lambda$, where $L'_\lambda$ denotes the derivative of $L_\lambda$ w.r.to $\lambda$. \nqed \begin{theorem} \label{thm-5} Define the sequence of functionals inductively as \begin{eqnarray*} D_\lambda^{(1)}&=&D_\lambda\\ D_\lambda^{(2)}&=&(\delta D_\lambda^{(1)})K_\lambda u'_\lambda+\frac{d}{d\lambda}D_\lambda^{(1)}\\ &&\ldots\\ D_\lambda^{(n)}&=&(\delta D_\lambda^{(n-1)})K_\lambda u'_\lambda+\frac{d}{d\lambda}D_\lambda^{(n-1)}\,. \end{eqnarray*} Assume that $$ E\int_0^\lambda|\delta D^{(n)}_{\alpha}|d{\alpha}<\infty $$ for any $n\geq 1$ and $\lambda\in {\rm I\!R}$, then $\lambda\to L_\lambda$ is almost surely a $C^\infty$-map and denoting by $L^{(n)}_\lambda$ its derivative of order $n\geq 1$, we have $$ L_\lambda^{(n+1)}(w)-L_0^{(n+1)}(w)=\int_0^\lambda L_{\alpha} E[\delta D_{\alpha}^{(n)}|U_{\alpha}=w]d{\alpha}\,. $$ \end{theorem} \section{\bf{Applications to the invertibility of adapted perturbations of identity}} \label{S-Inv} Let $u\in L_a^2(\mu, H)$, i.e., the space of square integrable, $H$-valued functionals whose Lebesgue density, denoted as $\dot{u}(t)$, is adapted to the filtration $({\mathcal F}_t,t\in [0,1])$ $dt$-almost surely. 
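\noindent For orientation, let us recall a standard example of such a functional (a sketch only, not needed in the sequel): take $\dot{u}(t)=\varepsilon\sin(W_t)$ for a fixed $\varepsilon>0$, where $(W_t)$ denotes the coordinate process. Then $u\in L_a^2(\mu, H)$ and, since $|\dot{u}(t)|\leq \varepsilon$, the Novikov condition $$ E\left[\exp\left(\frac{1}{2}\int_0^1|\dot{u}(s)|^2ds\right)\right]\leq e^{\varepsilon^2/2}<\infty $$ is satisfied, hence the Girsanov exponential $\rho(-\delta u)$ is a martingale and $E[\rho(-\delta u)]=1$, which is the normalization assumed in Theorem \ref{thm-8} below.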
A frequently asked question concerns the conditions which imply the almost sure invertibility of the adapted perturbation of identity (API) $w\to U(w)=w+u(w)$. The next theorem gives such a condition: \begin{theorem} \label{thm-8} Assume that $u\in L_a^2(\mu, H)$ with $E[\rho(-\delta u)]=1$, let $u_{\alpha}$ be defined as $P_{\alpha} u$, where $P_{\alpha}=e^{-{\alpha} {\mathcal L}}$ denotes the Ornstein-Uhlenbeck semi-group on the Wiener space. If there exists a $\lambda_0$ such that \begin{eqnarray*} \lefteqn{E\int_0^\lambda E[\rho(-\delta u_{\alpha})|U_\lambda]\Big|E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}]\Big|d{\alpha}}\\ &&=E\int_0^\lambda E[\rho(-\delta u_{\alpha})|U_\lambda]\Big|E[\delta((I_H+\nabla u_{\alpha})^{-1}{\mathcal L} u_{\alpha})|U_{\alpha}]\Big|d{\alpha}<\infty \end{eqnarray*} for $\lambda\leq \lambda_0$, then $U$ is almost surely invertible. In particular the functional stochastic differential equation \begin{eqnarray*} dV_t(w)&=&-\dot{u}(V_s(w),s\leq t)dt+dW_t\\ V_0&=&0 \end{eqnarray*} has a unique strong solution. \end{theorem} \noindent {\bf Proof:\ } Since $u_{\alpha}$ is an $H-C^\infty$-function, cf. \cite{BOOK}, the API $U_{\alpha}=I_W+u_{\alpha}$ is a.s. invertible, cf. \cite{INV}, Corollary 1. By the hypothesis and from Lemma 2 of \cite{INV}, $(\rho(-\delta u_{\alpha}),{\alpha}\leq \lambda_0)$ is uniformly integrable. Let $L_{\alpha}$ and $L$ be respectively the Radon-Nikodym derivatives of $U_{\alpha}\mu$ and $U\mu$ w.r. to $\mu$. From Theorem \ref{thm-1}, $$ L_\lambda(w)=L(w)\,\exp\int_0^\lambda E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w]d{\alpha} $$ for any $\lambda\leq \lambda_0$, and also $\int_0^\lambda |E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w]|d{\alpha}<\infty$ almost surely. Consequently $$ L_\lambda-L=\left(\exp\int_0^\lambda E[\delta(K_{\alpha} u'_{\alpha})|U_{\alpha}=w]d{\alpha}-1\right) L \to 0 $$ as $\lambda\to 0$, in probability (even in $L^1$).
We claim that the set $(L_{\alpha} \log L_{\alpha}, {\alpha}\leq \lambda_0)$ is uniformly integrable. To see this let $A\in{\mathcal F}$, then \begin{eqnarray*} E[1_A L_{\alpha} \log L_{\alpha}]&=&E[1_A\circ U_{\alpha}\,\log L_{\alpha}\circ U_{\alpha}]\\ &=&-E[1_A\circ U_{\alpha}\,\log E[\rho(-\delta u_{\alpha})|U_{\alpha}]]\\ &\leq&-E[1_A\circ U_{\alpha}\,\log \rho(-\delta u_{\alpha})]\\ &=&E\left[1_A\circ U_{\alpha}\left(\delta u_{\alpha}+\frac{1}{2}|u_{\alpha}|_H^2\right)\right]\,. \end{eqnarray*} Since $(|u_{\alpha}|^2,{\alpha}\leq \lambda_0)$ is uniformly integrable, for any given $\varepsilon>0$, there exists some $\gamma>0$, such that $\sup_{\alpha} E[1_B|u_{\alpha}|^2]\leq \varepsilon$ as soon as $\mu(B)\leq \gamma$ and this happens uniformly w.r. to $B$. But as $(L_{\alpha} ,{\alpha}\leq \lambda_0)$ is uniformly integrable, there exists a $\gamma_1>0$ such that, for any $A\in {\mathcal F}$, with $\mu(A)\leq \gamma_1$, we have $\mu(U_{\alpha}^{-1}(A))\leq \gamma$ uniformly in ${\alpha}$ and we obtain $E[1_A\circ U_{\alpha} |u_{\alpha}|_H^2]\leq \varepsilon$ with such a choice of $A$. For the first term above we have $$ E[1_A\circ U_{\alpha} \delta u_{\alpha}]\leq E[1_A L_{\alpha}]^{1/2}\|u_{\alpha}\|_{L^2(\mu,H)}\leq \varepsilon $$ again for the same reasons. Hence we can conclude that $$ \lim_{{\alpha}\to 0}E[L_{\alpha} \log L_{\alpha}]=E[L \log L]\,.
$$ Moreover, as shown in \cite{ASU-2,ASU-3}, the invertibility of $U_{\alpha}$ is equivalent to $$ E[L_{\alpha} \log L_{\alpha}]=\frac{1}{2}E[|u_{\alpha}|_H^2]\,, $$ and the right hand side converges to $\frac{1}{2}E[|u|_H^2]$, therefore $$ E[L \log L]=\frac{1}{2}E[|u|_H^2] $$ which is a necessary and sufficient condition for the invertibility of $U$. \nqed \noindent In several applications we encounter a situation as follows: assume that $u:W\to H$ is a measurable map with the following property $$ |u(w+h)-u(w)|_H\leq c|h|_H $$ a.s., for any $h\in H$, where $0<c<1$ is a fixed constant, or equivalently an upper bound like $\|\nabla u\|_{op}\leq c$ where $\|\cdot\|_{op}$ denotes the operator norm. Combined with some exponential integrability of the Hilbert-Schmidt norm of $\nabla u$, one can prove the invertibility of $U=I_W+u$, cf. Chapter 3 of \cite{BOOK}. Note that the hypothesis $c<1$ is indispensable because of the fixed-point techniques used to construct the inverse of $U$. However, using the techniques developed in this paper we can relax this rigidity of the theory: \begin{theorem} \label{thm-9} Let $U_\lambda=I_W+\lambda u$ be an API (adapted perturbation of identity) with $u\in {\rm I\!D}_{p,1}(H)\cap L^2(\mu,H)$, such that, for any $\lambda<1$, $U_\lambda$ is a.s. invertible. Assume that \begin{equation} \label{con-suff} E\int_0^1\rho(-\delta ({\alpha} u))|E[\delta((I_H+{\alpha}\nabla u)^{-1}u)|U_{\alpha}]|d{\alpha}<\infty\,. \end{equation} Then $U=U_1$ is also a.s. invertible. \end{theorem} \noindent {\bf Proof:\ } Let $L=L_1$ be the Radon-Nikodym derivative of $U_1\mu$ w.r. to $\mu$. It suffices to show that $$ E[L\log L]=\frac{1}{2}E[|u|_H^2] $$ which is an equivalent condition to the a.s. invertibility of $U$, cf. \cite{ASU-3}. For this it suffices to show first that $(L_\lambda,\lambda<1)$ converges in $L^0(\mu)$ to $L$, then that $(L_\lambda \log L_\lambda,\lambda<1)$ is uniformly integrable.
The first claim follows from the hypothesis (\ref{con-suff}) and the second claim can be proved exactly as in the proof of Theorem \ref{thm-8}. \nqed \section{\bf{Variational applications to entropy and estimation}} \label{S-Ent} In the estimation and information theories, one often encounters the problem of estimating the signal $u_\lambda$ from the observation data generated by $U_\lambda$ and then verifies the various properties of the mean square error w.r. to the signal-to-noise ratio, which is represented in our case with the parameter $\lambda$. Since we know that (\cite{ASU-3}) $$ E[L_\lambda\log L_\lambda]=\frac{1}{2}E\int_0^1|E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]|^2ds\,, $$ the behavior of the mean square error is completely characterized by that of the relative entropy. Let $\theta$ denote the entropy of $L_\lambda$ as a function of $\lambda$: $$ \theta(\lambda)=E[L_\lambda\log L_\lambda]\,. $$ From our results, it follows immediately that \begin{eqnarray*} \frac{d\theta(\lambda)}{d\lambda}&=&E[L'_\lambda\log L_\lambda]\\ &=&E[L_\lambda\,E[\delta(K_\lambda u'_\lambda)|U_\lambda=w]\log L_\lambda]\\ &=&E[E[\delta(K_\lambda u'_\lambda)|U_\lambda]\log L_\lambda\circ U_\lambda]\\ &=&-E[\delta(K_\lambda u'_\lambda)\log E[\rho(-\delta u_\lambda)|U_\lambda]]\,. \end{eqnarray*} Similarly \begin{eqnarray*} \frac{d^2\theta(\lambda)}{d\lambda^2}&=&E\left[L''_\lambda\log L_\lambda +(L'_\lambda)^2\frac{1}{L_\lambda}\right]\\ &=&E[L''_\lambda\log L_\lambda+L_\lambda\,E[\delta(K_\lambda u'_\lambda)|U_\lambda=w]^2]\\ &=&E[E[\delta D_\lambda|U_\lambda=w]L_\lambda\log L_\lambda+L_\lambda\,E[\delta(K_\lambda u'_\lambda)|U_\lambda=w]^2]\\ &=&E[E[\delta D_\lambda|U_\lambda]\log L_\lambda\circ U_\lambda+E[\delta(K_\lambda u'_\lambda)|U_\lambda]^2]\,.
\end{eqnarray*} \noindent In particular we have \begin{theorem} \label{thm-6} Assume that $$ E\left[E[\delta D_\lambda|U_\lambda]\left(\int_0^1E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]dZ_\lambda(s)+\frac{1}{2}\int_0^1|E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]|^2ds\right)\right]< E\left[E[\delta(K_\lambda u'_\lambda)|U_\lambda]^2\right] $$ for some $\lambda=\lambda_0>0$, then there exists an $\varepsilon>0$ such that the {\bf entropy is convex} as a function of $\lambda$ on the interval $(\lambda_0-\varepsilon,\lambda_0+\varepsilon)$. In particular, if $u_0=0$, then the same conclusion holds true on some $(0,\varepsilon)$. \end{theorem} \subsection{Applications to the anticipative estimation} In this section we study briefly the estimation of $\dot{u}_\lambda(t)$ with respect to the final filtration ${\mathcal U}_\lambda(1)=\sigma(U_\lambda)$. \begin{theorem} \label{thm-7} Assume that $$ E\int_0^\lambda L_{\alpha} |E[\dot{u}'_{\alpha}(s)+\delta(\dot{u}_{\alpha}(s)K_{\alpha} u'_{\alpha})|U_{\alpha}]|^p d{\alpha}<\infty\,, $$ for a $p\geq 1$, then, $dt$-a.s., the map $\lambda\to L_\lambda E[\dot{u}_\lambda(t)|U_\lambda=x]$ and hence the map $\lambda\to E[\dot{u}_\lambda(t)|U_\lambda=x]$ are strongly differentiable in $L^p(\mu)$ for any $p\geq 1$ and we have $$ \frac{d}{d\lambda}E[\dot{u}_\lambda (t)|U_\lambda=x]=E[\dot{u}'_\lambda(t)+ \delta(\dot{u}_\lambda(t)K_\lambda u'_\lambda)|U_\lambda=x]-E[\dot{u}_\lambda(t)|U_\lambda=x] E[\delta(K_\lambda u'_\lambda)|U_\lambda=x] $$ $d\mu\times dt$-a.s. 
\end{theorem} \noindent {\bf Proof:\ } For a smooth function $h$ on $W$, we have \begin{eqnarray*} \frac{d}{d\lambda}<E[\dot{u}_\lambda (t)|U_\lambda=x],h\,L_\lambda>&=&\frac{d}{d\lambda}<E[\dot{u}_\lambda (t)|U_\lambda],h\circ U_\lambda>\\ &=&E[\dot{u}'_\lambda(t)h\circ U_\lambda+\dot{u}_\lambda(t)(\nabla h\circ U_\lambda,u'_\lambda)_H]\\ &=&E[E[\dot{u}'_\lambda(t)|U_\lambda]h\circ U_\lambda+h\circ U_\lambda\delta(\dot{u}_\lambda(t)K_\lambda u'_\lambda)]\\ &=&E\left[hL_\lambda(x)\left(E[\dot{u}'_\lambda(t)|U_\lambda=x]+E[\delta(\dot{u}_\lambda(t)K_\lambda u'_\lambda)|U_\lambda=x]\right)\right]\,. \end{eqnarray*} The hypothesis implies that this weak derivative is in fact a strong one in $L^p(\mu)$; the formula follows by dividing both sides by $L_\lambda$ and by the explicit form of $L_\lambda$ given in Theorem \ref{thm-1}. \nqed Using the formula of Theorem \ref{thm-7}, we can study the behavior of the error of non-causal estimation of $u_\lambda$ (denoted as NCE in the sequel) defined as \begin{eqnarray*} NCE&=&E\int_0^1|\dot{u}_\lambda(s)-E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(1)]|^2ds\\ &=&E\int_0^1|\dot{u}_\lambda(s)-E[\dot{u}_\lambda(s)|U_\lambda]|^2ds\,. \end{eqnarray*} To do this we prove some technical results: \begin{lemma} \label{lem-1} Assume that \begin{equation} \label{hyp-1} E\int_0^\lambda\int_0^1|\dot{u}''_{\alpha}(s)+\delta(\dot{u}'_{\alpha}(s)K_{\alpha} u'_{\alpha})|^pds d{\alpha}<\infty \end{equation} for some $p>1$, for any $\lambda>0$, then the map $$ \lambda\to L_\lambda E[\dot{u}'_\lambda(s)|U_\lambda=x] $$ is strongly differentiable in $L_a^p( d\mu,L^2([0,1]))$, and its derivative is equal to $$ L_\lambda E[\dot{u}''_\lambda(s)+\delta (\dot{u}_\lambda'(s)K_\lambda u'_\lambda)|U_\lambda=x] $$ $ds\times d\mu$-a.s.
\end{lemma} \noindent {\bf Proof:\ } Let $h$ be a cylindrical function on $W$, then, using, as before, the integration by parts formula, we get \begin{eqnarray*} \frac{d}{d\lambda}E[L_\lambda E[\dot{u}'_\lambda(s)|U_\lambda=x]\,h]&=&\frac{d}{d\lambda}E[\dot{u}_\lambda'(s)\,h\circ U_\lambda]\\ &=&E[\dot{u}''_\lambda(s)\,h\circ U_\lambda+h\circ U_\lambda\,\delta(\dot{u}_\lambda'(s)K_\lambda u'_\lambda)]\\ &=&E\left[h\,L_\lambda\left(E[\dot{u}''_\lambda(s)+\delta (\dot{u}_\lambda'(s)K_\lambda u'_\lambda)|U_\lambda=x]\right)\right]\,. \end{eqnarray*} This proves that the weak derivative satisfies the claim; the fact that it coincides with the strong derivative follows from the hypothesis (\ref{hyp-1}). \nqed \noindent Let us define the variance of the estimation as $$ \beta(\lambda,s)=E\left[|E[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(1)]|^2\right]\,, $$ we shall calculate the first two derivatives of $\lambda\to \beta(\lambda,s)$ w.r. to $\lambda$ in order to observe its variations. Using Lemma \ref{lem-1}, we have immediately the first derivative as \begin{eqnarray} \frac{d}{d\lambda}\beta(\lambda,s)&=&E\Bigg[ E[\dot{u}_\lambda(s)|U_\lambda=x]L_\lambda \nonumber \\ && \left(E[\dot{u}'_\lambda(s)+\delta(\dot{u}_\lambda(s)K_\lambda u'_\lambda)|U_\lambda=x]-\frac{1}{2}E[\dot{u}_\lambda(s)|U_\lambda=x]E[\delta(K_\lambda u'_\lambda)|U_\lambda=x]\right)\Bigg] \label{der-1} \end{eqnarray} \noindent The proof of the following lemma can be done exactly in the same manner as before, namely, by verifying first the weak differentiability using cylindrical functions and then assuring that the hypothesis implies the existence of the strong derivative; it is left to the reader: \begin{lemma} \label{lem-2} Assume that $$ E\int_0^\lambda |\delta(\delta(K_{\alpha} u'_{\alpha})K_{\alpha} u'_{\alpha})+\delta(K_{\alpha} u''_{\alpha}-K_{\alpha} \nabla u'_{\alpha} K_{\alpha} u'_{\alpha})|^pd{\alpha}<\infty\,, $$ for some $p\geq 1$.
Then the map $$ \lambda\to L_\lambda E[\delta(K_\lambda u'_\lambda)|U_\lambda=x] $$ is strongly differentiable in $L^p(\mu)$ and we have \begin{eqnarray*} \frac{d}{d\lambda}(L_\lambda E[\delta(K_\lambda u'_\lambda)|U_\lambda=x])&=&L_\lambda E\left[\delta(\delta(K_\lambda u'_\lambda)K_\lambda u'_\lambda)|U_\lambda=x\right]\\ &&+L_\lambda E\left[\delta(K_\lambda u''_\lambda -K_\lambda\nabla u'_\lambda K_\lambda u'_\lambda)|U_\lambda=x\right]\,. \end{eqnarray*} \end{lemma} \noindent Combining Lemma \ref{lem-1} and Lemma \ref{lem-2} and including the action of $L_\lambda$, we conclude that \begin{eqnarray*} \beta''(\lambda)&=&E\Big[E[\dot{u}''_\lambda(s) +\delta(\dot{u}'_\lambda(s) K_\lambda u'_\lambda)|U_\lambda]E[\dot{u}_\lambda(s)|U_\lambda]\Big]\\ &&+E\Big[E[\dot{u}'_\lambda(s)|U_\lambda]\Big(E[\dot{u}'_\lambda(s)+\delta(\dot{u}_\lambda(s)K_\lambda u'_\lambda)|U_\lambda]\\ &&\,\,\,\,\,\,-E[\dot{u}_\lambda(s)|U_\lambda]E[\delta(K_\lambda u'_\lambda)|U_\lambda]\Big)\Big]\\ &&+E\Big[E[\delta\left\{\dot{u}''_\lambda(s)K_\lambda u'_\lambda-\dot{u}'_\lambda(s)K_\lambda \nabla u'_\lambda K_\lambda u'_\lambda\right\}\\ &&\,\,\,\,\,\,+\delta\left\{\dot{u}_\lambda (s)K_\lambda u''_\lambda+\delta(\dot{u}_\lambda (s)K_\lambda u'_\lambda)K_\lambda u'_\lambda\right\}|U_\lambda]E[\dot{u}_\lambda(s)|U_\lambda]\Big]\\ &&+E\Big[E[\delta(\dot{u}_\lambda(s)K_\lambda u'_\lambda)|U_\lambda]\Big(E[\dot{u}'_\lambda(s) +\delta(\dot{u}_\lambda(s) K_\lambda u'_\lambda)|U_\lambda]\\ &&\,\,\,\,\,\,-E[\dot{u}_\lambda(s)|U_\lambda]E[\delta(K_\lambda u'_\lambda)|U_\lambda]\Big)\Big]\\ &&-E\Big[E[\dot{u}_\lambda(s)|U_\lambda]\left(E[\dot{u}'_\lambda(s)+\delta(\dot{u}_\lambda(s) K_\lambda u'_\lambda)|U_\lambda]-E[\dot{u}_\lambda(s)|U_\lambda] E[\delta(K_\lambda u'_\lambda)|U_\lambda]\right)E[\delta(K_\lambda u'_\lambda)|U_\lambda]\Big]\\ &&\,\,\,\,\,\,-\frac{1}{2}E\Big[E[\dot{u}_\lambda (s)|U_\lambda]^2\left\{E[\delta(\delta(K_\lambda u'_\lambda)K_\lambda u'_\lambda+K_\lambda u''_\lambda-K_\lambda\nabla
u'_\lambda K_\lambda u'_\lambda)|U_\lambda]\right\}\Big]\,. \end{eqnarray*} Assume now that $\lambda\to u_\lambda$ is linear, then a simple calculation shows that $$ \beta''(0)=E[|\dot{u}(s)|^2]\,, $$ hence the quadratic norm of the non-causal estimation of $u$, i.e., the function $$ \lambda\to E\int_0^1|E[\dot{u}_\lambda (s)|{\mathcal U}_\lambda(1)]|^2ds $$ is convex in some vicinity of $\lambda=0$. \subsection{Relations with Monge-Kantorovich measure transportation} Since $L_\lambda\log L_\lambda\in L^1(\mu)$, there exists a $1$-convex (cf. \cite{F-U0}) $\phi_\lambda\in {\rm I\!D}_{2,1}$ such that $T_\lambda\mu=L_\lambda\cdot \mu$ (i.e., the measure with density $L_\lambda$), where $T_\lambda=I_W+\nabla \phi_\lambda$, cf. \cite{F-U1}. From the $L^p$-continuity of the map $\lambda\to L_\lambda$ and from the dual characterization of the Monge-Kantorovich problem, \cite{Vil}, we deduce the measurability of the transport potential $\phi_\lambda$ as a mapping of $\lambda$. Moreover there exists a non-causal Girsanov-like density $\Lambda_\lambda$ such that \begin{equation} \label{maq} \Lambda_\lambda\,L_\lambda\circ T_\lambda=1 \end{equation} $\mu$-a.s., where $\Lambda_\lambda$ can be expressed as $$ \Lambda_\lambda=J(T_\lambda)\exp\left(-\frac{1}{2}|\nabla \phi_\lambda|_H^2\right)\,, $$ where $T_\lambda\to J(T_\lambda)$ is a log-concave, normalized determinant (cf. \cite{F-U2}) with values in $[0,1]$. Using the relation (\ref{maq}), we obtain another expression for the entropy: \begin{eqnarray*} E[L_\lambda\log L_\lambda]&=&E[\log L_\lambda\circ T_\lambda]\\ &=&-E[\log \Lambda_\lambda]\\ &=&E\left[-\log J(T_\lambda)+\frac{1}{2}|\nabla \phi_\lambda|_H^2\right]\,.
\end{eqnarray*} Consequently, we have \begin{eqnarray*} \frac{1}{2}E\int_0^1|E[\dot{u}_\lambda(s)\mid {\mathcal U}_\lambda(s)]|^2ds&=&E\left[-\log J(T_\lambda)+\frac{1}{2}|\nabla \phi_\lambda|_H^2\right]\\ &=&E\left[-\log J(T_\lambda)\right]+\frac{1}{2}d^2_H(\mu,L_\lambda\cdot\mu)\,, \end{eqnarray*} where $d_H(\mu,L_\lambda\cdot\mu)$ denotes the Wasserstein distance along the Cameron-Martin space between the probability measures $\mu$ and $L_\lambda\cdot \mu$. This result gives another explanation for the property remarked in \cite{M-Z} about the independence of the quadratic norm of the estimation from the filtrations with respect to which the causality notion is defined. Let us remark finally that if $$ d_H(\mu,L_\lambda\cdot\mu)=0 $$ then $L_\lambda=1$ $\mu$-almost surely, hence $E[\dot{u}_\lambda(s)\mid {\mathcal U}_\lambda(s)]=0$ $ds\times d\mu$-a.s. Let us note that such a case may happen without having $u_\lambda=0$ $\mu$-a.s. As an example let us choose an API, say $K_\lambda=I_W+k_\lambda$, which is not almost surely invertible for any $\lambda\in (0,1]$. Assume that $E[\rho(-\delta k_\lambda)]=1$ for any $\lambda$. We have $$ \frac{dK_\lambda\mu}{d\mu}=\rho(-\delta m_\lambda) $$ for some $m_\lambda\in L^0_a(\mu,H)$, define $M_\lambda=I_W+m_\lambda$, then $U_\lambda=M_\lambda\circ K_\lambda$ is a Brownian motion and an API, hence (cf. \cite{ASU-4}) it should be equal to its own innovation process and this is equivalent to saying that $E[\dot{u}_\lambda(s)\mid {\mathcal U}_\lambda(s)]=0$ $ds\times d\mu$-a.s. \section{\bf{Applications to Information Theory}} \label{S-Info} \noindent In this section we first give an extension of the results about the quadratic error in the additive nonlinear Gaussian model, generalizing the results of \cite{D,G-Y,KZZ,M-Z} in the sense that we drop a basic assumption made implicitly or explicitly in these works; namely, we do not require the conditional form of the signal to be an invertible perturbation of identity.
Afterwards we study the variation of this quadratic error with respect to a parameter on which the information channel depends in a reasonably smooth manner. Throughout this section we shall suppose the existence of the signal in the following form: $$ U(w,m)=w+u(w,m) $$ where $m$ runs in a measurable space $(M,{\mathcal M})$ endowed with a measure $\nu$ and independent of the Wiener path $w$; later on we shall assume that the above signal is also parametrized with a scalar $\lambda\in {\rm I\!R}$. We suppose also that, for each fixed $m$, $w\to U(w,m)$ is an adapted perturbation of identity with $E_\mu[\rho(-\delta u(\cdot,m))]=1$ and that $$ \int_0^1 \int_{W\times M}|\dot{u}_s(w,m)|^2ds d\nu d\mu<\infty\,. $$ In the sequel we shall denote the product measure $\mu\otimes \nu$ by $\gamma$ and $P$ will represent the image of $\gamma$ under the map $(w,m)\to (U(w,m),m)$, moreover we shall denote by $P_U$ the first marginal of $P$. The following result is known in several different cases, cf. \cite{D,G-Y,KZZ,M-Z}, and we give its proof in the most general case: \begin{theorem} \label{info-1} Under the assumptions explained above the following relation between the mutual information $I(U,m)$ and the quadratic estimation error holds true: $$ I(U,m)=\int_{W\times M}\log\frac{dP}{dP_U\otimes d\nu} dP= \frac{1}{2}E_\gamma\int_0^1\Big(|E_\mu[\dot{u}_s(w,m)|{\mathcal U}_s(m)]|^2-|E_\gamma[\dot{u}_s|{\mathcal U}_s]|^2\Big)ds\,, $$ where $({\mathcal U}_s(m),\,s\in [0,1])$ is the filtration generated by the partial map $w\to U(w,m)$. \end{theorem} \proof Let us note that the map $(s,w,m)\to E_\mu[f_s|{\mathcal U}_s(m)]$ is measurable for any positive, optional $f$.
To proceed to the proof, remark first that \begin{eqnarray} \frac{dP}{dP_U\otimes d\nu}&=&\frac{dP}{d\gamma}\,\frac{d\gamma}{dP_U\otimes d\nu} \label{rel-1}\\ \frac{d\gamma}{dP_U\otimes d\nu}&=&\frac{d\mu\otimes d\nu}{dP_U\otimes d\nu}=\left(\frac{dP_U}{d\mu}\right)^{-1} \label{rel-2} \end{eqnarray} since $P_U\sim \mu$. Think of $w\to U(w,m)$ as an API on the Wiener space for each fixed $m\in M$. The image of the Wiener measure $\mu$ under this map is absolutely continuous w.r. to $\mu$; denote the corresponding density as $L(w,m)$. We have for any positive, measurable function $f$ on $W\times M$ \begin{eqnarray*} E_P[f]&=&E_\gamma[f\circ U]\\ &=&\int_{W\times M }f( U(w,m),m) d\nu(m) d\mu(w)\\ &=&\int_M E_\mu\left[f\frac{dU(\cdot,m)\mu}{d\mu}\right]d\nu(m)\\ &=&E_\gamma[f L]\,, \end{eqnarray*} hence $(w,m)\to L(w,m)$ is the Radon-Nikodym density of $P$ w.r. to $\gamma$. From \cite{ASU-3} we have at once $$ E_\mu[L(\cdot,m)\log L(\cdot,m)]=\frac{1}{2}E_\mu\int_0^1|E_\mu[\dot{u}_s(\cdot,m)|{\mathcal U}_s(m)]|^2ds\,. $$ Calculation of $dP_U/d\mu$ is immediate: $$ \hat{L}=\frac{dP_U}{d\mu}(w)=\int_M L(w,m)d\nu(m). $$ Moreover from the Girsanov theorem, we have $$ E_\gamma[f\circ U\,\rho(-\delta u(\cdot,m))]=E_\gamma[f] $$ for any $f\in C_b(W)$. Denote by ${\mathcal U}_t$ the sigma algebra generated by $(U_s:\,s\leq t)$ on $W\times M$. It is easy to see that the process $Z=(Z_t,t\in [0,1])$, defined by $$ Z_t=U_t(w,m)-\int_0^t E_\gamma[\dot{u}_s|{\mathcal U}_s]ds $$ is a $\gamma$-Brownian motion and any $({\mathcal U}_t,\,t\in [0,1])$- local martingale w.r. to $\gamma$ can be represented as a stochastic integral w.r. to the innovation process $Z$, cf. \cite{FKK}. 
Let $\hat{\rho}$ denote \begin{equation} \label{ro} \hat{\rho}=\exp\left(-\int_0^1E_\gamma[\dot{u}_s|{\mathcal U}_s]dZ_s-\frac{1}{2}\int_0^1|E_\gamma[\dot{u}_s|{\mathcal U}_s]|^2ds\right)\,. \end{equation} Using again the Girsanov theorem we obtain the following equality $$ E_\gamma\left[f\circ U \hat{\rho}\right]=E_\gamma\left[f\circ U \rho(-\delta u(w,m))\right] $$ for any nice $f$. This result implies that $$ E_\gamma[\rho(-\delta u)|U]=\hat{\rho} $$ $\gamma$-almost surely. Besides, for nice $f$ on $W$, \begin{eqnarray*} E_{P_U}[f]&=&E_\gamma[f\circ U]=E_\gamma[f L]=E_\gamma[f \hat{L}]\\ &=&E_\gamma[f\circ U\hat{L}\circ U\,\rho(-\delta u)]\\ &=&E_\gamma[f\circ U\hat{L}\circ U\,\hat{\rho}] \end{eqnarray*} which implies that $$ \hat{L}\circ U\,\hat{\rho}=1 $$ $\gamma$-almost surely. We have calculated all the necessary ingredients to prove the claimed representation of the mutual information $I(U,m)$: \begin{eqnarray*} I(U,m)&=&E_P\left[\log\left(\frac{dP}{d\gamma}\cdot \frac{d\gamma}{dP_U\otimes d\nu}\right)\right]\\ &=&E_P\left[\log \frac{dP}{d\gamma}+\log \frac{d\gamma}{dP_U\otimes d\nu}\right]\\ &=&E_\gamma\left[\frac{dP}{d\gamma} \log \frac{dP}{d\gamma}\right]-E_P\left[\log\frac{dP_U}{d\mu}\right]\\ &=&E_\gamma\left[\frac{dP}{d\gamma} \log \frac{dP}{d\gamma}\right]-E_{P_U}\left[\log\frac{dP_U}{d\mu}\right]\\ &=&E_\gamma\left[\frac{dP}{d\gamma} \log\frac{dP}{d\gamma}\right]-E_\mu\left[\frac{dP_U}{d\mu} \log\frac{dP_U}{d\mu}\right]\\ &=&E_\gamma[L\log L]-E_\gamma\left[\log \frac{dP_U}{d\mu}\circ U\right]\\ &=&\frac{1}{2}E_\gamma\int_0^1|E_\mu[\dot{u}_s(w,m)|{\mathcal U}_s(m)]|^2ds-E_\gamma[-\log \hat{\rho}] \end{eqnarray*} and inserting the value of $\hat{\rho}$ given by the relation (\ref{ro}) completes the proof. \qed \noindent {\bf Remark:\ } Similar results (cf.
\cite{D,KZZ,M-Z}) in the literature concern the case where the observation $w\to U(w,m)$ is invertible $\gamma$-almost surely, consequently the first term reduces to one half of the square of the $L^2(\mu,H)$-norm of $u$ (cf. \cite{ASU-3}). \noindent The following is a consequence of Bayes' lemma: \begin{lemma} \label{bayes1-lemma} For any positive, measurable function $g$ on $W\times M$, we have $$ E_\gamma[g|U]=\frac{1}{\hat{L}\circ U}\left(\int_ML(x,m)E_\mu\Big[g\mid U(\cdot,m)=x\Big]d\nu(m)\right)_{x=U} $$ $\gamma$-almost surely. In particular $$ E_\gamma[g|U=x]=\frac{1}{\hat{L}(x)}\int_M L(x,m)E_\mu\Big[g\mid U(\cdot,m)=x\Big]d\nu(m) $$ $P_U$ and $\mu$-almost surely. \end{lemma} \proof Let $f\in C_b(W)$ and let $g$ be a positive, measurable function on $W\times M$. We have \begin{eqnarray*} E_\gamma[g\,f\circ U]&=&\int_M E_\mu[E_\mu[g\mid U(\cdot,m)]\,f\circ U(\cdot,m)]d\nu(m)\\ &=&\int_M \int_W L(w,m)\,E_\mu[g\mid U(\cdot,m)=w]\,f(w)d\mu(w)d\nu(m)\\ &=&\int_W f(w)\left(\int_M L(w,m) E_\mu[g\mid U(\cdot,m)=w]\,d\nu(m)\right)d\mu\\ &=&\int_W \frac{\hat{L}(w)}{\hat{L}(w)} f(w)\left(\int_M L(w,m) E_\mu[g\mid U(\cdot,m)=w]\,d\nu(m)\right)d\mu\\ &=&E_\gamma\left[\frac{1}{\hat{L}\circ U}f\circ U\left(\int_M L(w,m) E_\mu[g\mid U(\cdot,m)=w]\,d\nu(m)\right)_{w=U}\right] \end{eqnarray*} \qed \noindent From now on we return to the model $U_\lambda$ parametrized with $\lambda\in {\rm I\!R}$ and defined on the product space $W\times M$; namely we assume that $$ U_\lambda(w,m)=w+u_\lambda(w,m) $$ with the same independence hypothesis and the same regularity hypothesis on $\lambda\to u_\lambda$, the only difference being that the measure $\mu$ is replaced by the measure $\gamma$ in the definition of the spaces ${\rm I\!D}_{p,k}$. \begin{lemma} \label{bayes2-lemma} Let $\hat{L}_\lambda(w)$ denote the Radon-Nikodym derivative of $P_{U_\lambda}$ w.r. to $\mu$.
We have $$ \hat{L}_\lambda(w)=\hat{L}_0(w)\exp \int_0^\lambda E_\gamma\Big[\delta (K_{\alpha} u'_{\alpha})|U_{\alpha}=w\Big]d{\alpha} $$ $\mu$-almost surely. \end{lemma} \proof For any nice function $f$ on $W$, we have $$ \frac{d}{d\lambda}E_\gamma[f\circ U_\lambda]=\frac{d}{d\lambda}E_\gamma[f\, L_\lambda]=\frac{d}{d\lambda}E_\mu[f\, \hat{L}_\lambda]\,. $$ On the other hand \begin{eqnarray*} \frac{d}{d\lambda}E_\gamma[f\circ U_\lambda]&=&E_\gamma[f\circ U_\lambda \delta(K_\lambda u_\lambda')]\\ &=&E_\gamma[f\circ U_\lambda E_\gamma[\delta(K_\lambda u_\lambda')|U_\lambda]]\\ &=&E_\gamma[f L_\lambda(x,m) E_\gamma[\delta(K_\lambda u_\lambda')|U_\lambda=x]]\\ &=&E_\mu[f \hat{L}_\lambda E_\gamma [\delta(K_\lambda u_\lambda')|U_\lambda=x]]\,. \end{eqnarray*} Since $f$ is arbitrary, comparing the two expressions gives $\frac{d}{d\lambda}\hat{L}_\lambda=\hat{L}_\lambda\, E_\gamma[\delta(K_\lambda u'_\lambda)|U_\lambda=\cdot]$ $\mu$-almost surely, and solving this linear equation in $\lambda$ yields the claimed exponential representation. \qed \noindent {\bf Remark:\ } Note that we also have the following representation for $L_\lambda(w,m)$: $$ L_\lambda(w,m)=L_0(w,m)\exp \int_0^\lambda E_\mu\Big[\delta (K_{\alpha} u'_{\alpha}(\cdot,m))|U_{\alpha}(\cdot,m)=w\Big]d{\alpha} $$ $\mu$-a.s. \begin{lemma} \label{deriv-lemma} Let $\lambda\to \tau(\lambda)$ be defined as $$ \tau(\lambda)=E_\gamma[\hat{L}_\lambda\log \hat{L}_\lambda]\,, $$ where $\hat{L}_\lambda(w)=\int_ML_\lambda(w,m)d\nu(m)$ as before. We have \begin{eqnarray*} \frac{d\tau(\lambda)}{d\lambda}&=&E_\gamma\left[E_\gamma[\delta(K_\lambda u_\lambda')|U_\lambda]\log \hat{L}_\lambda\circ U_\lambda\right]\\ &=&E_\gamma\left[E_\gamma[\delta(K_\lambda u_\lambda')|U_\lambda](-\log \hat{\rho}_\lambda)\right] \end{eqnarray*} where $\hat{\rho}_\lambda$ is given by (\ref{ro}) as $$ \hat{\rho}_\lambda=\exp\left(-\int_0^1 E_\gamma[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]dZ_\lambda(s)-\frac{1}{2}\int_0^1|E_\gamma[\dot{u}_\lambda(s)|{\mathcal U}_\lambda(s)]|^2ds\right)\,. 
$$ Besides, we also have $$ \frac{d^2\tau(\lambda)}{d\lambda^2}=E_\gamma\left[E_\gamma[\delta D_\lambda|U_\lambda](-\log \hat{\rho}_\lambda)+E_\gamma[\delta(K_\lambda u'_\lambda)|U_\lambda]^2\right] $$ where $$ D_\lambda=\delta(K_\lambda u'_\lambda)K_\lambda u'_\lambda+\frac{d}{d\lambda}K_\lambda u'_\lambda\,. $$ \end{lemma} \proof The only thing that we need is the calculation of the second derivative of $\hat{L}_\lambda$: let $f$ be a smooth function on $W$, then, from Lemma \ref{bayes1-lemma}, \begin{eqnarray*} \frac{d^2}{d\lambda^2}E_\gamma[f\circ U_\lambda]&=&\frac{d}{d\lambda}E_\gamma[f\circ U_\lambda\,\delta(K_\lambda u'_\lambda)]\\ &=&E_\gamma\left[f\circ U_\lambda\delta\left(\delta(K_\lambda u_\lambda')K_\lambda u_\lambda'+\frac{d}{d\lambda}(K_\lambda u_\lambda')\right)\right]\\ &=&E_\gamma[f\circ U_\lambda\,\delta D_\lambda]\\ &=&E_\gamma[f(x)\,E_\gamma[\delta D_\lambda|U_\lambda=x]\,\hat{L}_\lambda(x)]\,. \end{eqnarray*} \qed \noindent As an immediate consequence we get \begin{corollary} We have the following relation: \begin{eqnarray*} \frac{d^2}{d\lambda^2}I(U_\lambda,m)&=&E_\gamma\Big[E_\mu[\delta (D_\lambda(\cdot,m))|U_\lambda(m)] (-\log E_\mu[\rho(-\delta u_\lambda(\cdot,m))|U_\lambda(m)])\\ &&+E_\mu[\delta(K_\lambda u'_\lambda(\cdot,m))|U_\lambda(m)]^2\Big]\\ &&-E_\gamma\Big[E_\gamma[\delta (D_\lambda)|U_\lambda](-\log \hat{\rho}_\lambda)+E_\gamma[\delta(K_\lambda u'_\lambda) |U_\lambda]^2\Big]\,. \end{eqnarray*} \end{corollary} \footnotesize{ \noindent A. S. \"Ust\"unel, Institut Telecom, Telecom ParisTech, LTCI CNRS D\'ept. Infres, \\ 46, rue Barrault, 75013, Paris, France\\ [email protected]} \end{document}
Edge-contracted icosahedron

In geometry, an edge-contracted icosahedron is a polyhedron with 18 triangular faces, 27 edges, and 11 vertices.

Edge-contracted icosahedron
Type: Octadecahedron
Faces: 18 triangles
Edges: 27
Vertices: 11
Vertex configuration: 2 (3^4), 8 (3^5), 1 (3^6)
Symmetry group: C2v, [2], (*22), order 4
Properties: Convex, deltahedron

Construction

It can be constructed from the regular icosahedron by one edge contraction, removing one vertex, 3 edges, and 2 faces. This contraction distorts the positions of the original vertices, so they no longer lie on a common circumscribed sphere. With all equilateral triangle faces, it has 2 sets of 3 coplanar equilateral triangles (each forming a half-hexagon), and thus is not a Johnson solid. If each set of three coplanar triangles is considered a single face (called a triamond[1]), it has 10 vertices, 22 edges, and 14 faces: 12 triangles and 2 triamonds. It may also be described as having a hybrid square-pentagonal antiprismatic core (an antiprismatic core with one square base and one pentagonal base); each base is then augmented with a pyramid.

Related polytopes

The dissected regular icosahedron is a variant topologically equivalent to the sphenocorona, with the two sets of 3 coplanar faces replaced by trapezoids. It is the vertex figure of a 4D polytope, the grand antiprism. It has 10 vertices, 22 edges, 12 equilateral triangular faces and 2 trapezoid faces.[2]

In chemistry

In chemistry, this polyhedron is most commonly called the octadecahedron, for its 18 triangular faces, and represents the closo-boranate ion [B11H11]2−.[3]

Related polyhedra

The elongated octahedron is similar to the edge-contracted icosahedron, but instead of only one edge being contracted, two opposite edges are contracted.

References

1. "Convex Triamond Regular Polyhedra".
2. John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008, ISBN 978-1-56881-220-5 (Chapter 26, The Grand Antiprism)
3. 
Holleman, Arnold Frederik; Wiberg, Egon (2001), Wiberg, Nils (ed.), Inorganic Chemistry, translated by Eagleson, Mary; Brewer, William, San Diego/Berlin: Academic Press/De Gruyter, p. 1165, ISBN 0-12-352651-5

External links

• The Convex Deltahedra, And the Allowance of Coplanar Faces
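The face, edge, and vertex counts quoted in the article can be sanity-checked against Euler's formula V − E + F = 2; a minimal sketch in Python, using only counts stated in the text above:

```python
# Euler's formula V - E + F = 2 holds for every convex polyhedron.
def euler_characteristic(v, e, f):
    return v - e + f

# Regular icosahedron: 12 vertices, 30 edges, 20 faces.
ico = (12, 30, 20)

# One edge contraction removes 1 vertex, 3 edges and 2 faces,
# giving the edge-contracted icosahedron described in the article.
contracted = tuple(a - b for a, b in zip(ico, (1, 3, 2)))
assert contracted == (11, 27, 18)
assert euler_characteristic(*contracted) == 2

# Triamond interpretation: 10 vertices, 22 edges, 14 faces (12 + 2).
assert euler_characteristic(10, 22, 14) == 2

# Dissected regular icosahedron: 10 vertices, 22 edges,
# 12 triangles + 2 trapezoids.
assert euler_characteristic(10, 22, 12 + 2) == 2
```

All four count sets are mutually consistent with Euler's formula, which is a quick way to catch transcription errors in such tables.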
Composition, Respirable Fraction and Dissolution Rate of 24 Stone Wool MMVF with their Binder

Wendel Wohlleben, Hubert Waindok, Björn Daumann, Kai Werle, Melanie Drum & Heiko Egenolf

Man-made vitreous fibres (MMVF) are produced on a large scale for thermal insulation purposes. After extensive studies of fibre effects in the 1980s and 1990s, the composition of MMVF was modified to reduce the fibrotic and carcinogenic potential via reduced biopersistence. However, occupational risks from handling, applying and disposing of modern MMVF may be underestimated, as the conventional regulatory classification (combining composition, in-vivo clearance and effects) seems to be based entirely on MMVF after removal of the binder. Here we report the oxide composition of 23 modern MMVF from Germany, Finland, UK, Denmark, Russia and China (five different producers) and one pre-1995 MMVF. We find that most of the investigated modern MMVF can be classified as "High-alumina, low-silica wool", but several were on or beyond the borderline to "pre-1995 Rock (Stone) wool". We then used well-established flow-through dissolution testing at pH 4.5 and pH 7.4, with and without binder, at various flow rates, to screen the biosolubility of 14 MMVF over 32 days. At the flow rate and acidic pH of reports that found a 47 ng/cm2/h dissolution rate for the reference biopersistent MMVF21 (without binder), we find rates from 17 to 90 ng/cm2/h for modern MMVF as customary in trade (with binder). Removing the binder accelerates the dissolution significantly, but not to the level of the reference biosoluble MMVF34. We finally simulated handling or disposing of MMVF and measured size fractions in the aerosol. The respirable fraction of modern MMVF is low, but not less than that of pre-1995 MMVF. The average composition of modern stone wool MMVF is different from historic biopersistent MMVF, but to a lesser extent than expected. 
The dissolution rates measured by abiotic methods indicate that the binder has a significant influence on dissolution via gel formation. Considering the content of respirable fibres, these findings imply that the risk assessment of modern stone wool may need to be revisited based on in-vivo studies of MMVF as marketed (with binder). Man-made vitreous fibres (MMVF) are non-crystalline, fibrous inorganic substances (silicates) made primarily from rock, slag, glass or other processed minerals. These materials, also called man-made mineral fibres [1], include glass fibres (used in glass wool and continuous glass filament), rock or stone wool, slag wool and refractory ceramic fibres [2]. They are widely used for thermal and acoustical insulation and, to a lesser extent, for other purposes. These products are potentially hazardous to human health because they release airborne respirable fibres during their production, use and removal [3]. Fibre pathogenicity probably originates from a common mode of action shared by all respirable fibres [4, 5], and is determined predominantly by aspect ratio, length and biopersistence [6, 7]. The traditional rock (or stone) wool was classified by the World Health Organization as a carcinogenic hazard to humans in 1988 [3]. In response, glass and stone wool compositions with increased biosolubility have been developed and commercialized [8]. Based on the in vivo tests required by Nota Q of the European CLP regulation, certain classes of MMVF are exonerated from classification as (carc. 2) carcinogens [9], in accord with the conclusions of the World Health Organization report of 2002 [10]. Baan et al. very concisely review the considerations of the respective IARC Monographs Working Groups (1987, 2001) in reaching their conclusions [11]. 
In order to ensure that the increased biosolubility is maintained, the European insulation wool manufacturers association (EURIMA) implemented monitoring schemes to ensure that the chemical compositions are kept within defined ranges [12]. However, products should be tested as commercialized. MMVF production inherently uses organic oil and binder (phenolic resin etc.) that are sprayed onto the stone melt directly in the fibre spinning chambers. The primary mat is layered to give the product the required weight per unit area, and passes through an oven, which sets the thickness of the mat, dries it and cures the binder [13]. The product is then air-cooled and cut to size before packaging [2]. Thus, MMVF without binder is not a necessary intermediate of occupational or commercial relevance. MMVF without binder is not representative of the commercial MMVF product for which safe use on construction sites must be ensured. Studies in 1995 deliberately removed binder from the commercial product, e.g. by ozone cold-ashing [14], and only then investigated biopersistence. For the in vivo studies reported in 2000–2002, which were decisive for the WHO and IARC committees to exonerate the class of high-alumina low-silica stone wool (synonymously: HT, biosoluble MMVF) from classification as carcinogens, "tested fibres were produced without binder or oil" [15,16,17]. The conclusions and comments to the extensive BIA report (footnote 1) already raise concerns that both in vitro data on dissolution and in vivo data on clearance and effects relate to MMVF without binder "that is rare in occupational settings" [18]. Here we took a pragmatic perspective motivated by occupational safety on BASF construction sites: we sourced MMVF directly from construction sites, and investigated their properties without further modifications. 
The strategy of the present contribution is to screen the safety-relevant physical-chemical properties (composition, respirable fraction, in vitro dissolution rates) on a set of modern stone wool MMVF sourced from various countries and producers. To the best of our knowledge, this is the first study to report composition, respirable fraction and dissolution of modern MMVF with their binder. We aim to benchmark results against literature on reference materials, which represent the low-biosolubility (MMVF21) and high-biosolubility (MMVF34) materials, respectively. Methodology for MMVF dissolution screening is already well established and does not need to be re-invented [14, 19,20,21]. The strong correlation of in vitro dissolution rates with in vivo clearance rates, fibrogenic and carcinogenic potential was instrumental in identifying safer MMVF compositions in the 1990s [10, 22]. MMVF were sampled in kg quantities predominantly from various construction sites within BASF Ludwigshafen, where contractor and/or BASF-employed workers handled MMVF products. Additionally, selected materials were sourced from sites in other countries, including Finland, UK, Denmark, Russia and China. In all cases, the producer and product grade are known, but are coded here for anonymity. Producers are coded A to E, and materials are coded MMVF #1 to #28 (due to multiple determination of oxide composition, some materials have more than one internal MMVF code, but are listed only once here). Only one material (MMVF #17) cannot be traced to a specific grade and producer, because it was sampled from the dismantling of a BASF air separation facility, where it is known to have been installed for at least 30 years, hence with certainty before 1995. The thickness of all MMVF ranged from 40 to 150 mm, and the density ranged from 47 to 180 kg/m3. Samples for dissolution testing were taken from the middle. 
Note that the two most relevant historical reference materials are designated by the established codes MMVF21 and MMVF34. There is no specific relation between the historical reference MMVF21 and the modern MMVF#21. Sample pretreatment for Al, Ba, Ca, Cr, Mg, Mn, P, S, Sr, Ti composition analysis: Analysis was performed for all materials in duplicate. In each case, a blank was run in an analogous manner. A sample aliquot of approx. 20 mg was weighed, to the nearest 0.01 mg, into a platinum crucible, and mixed with both 0.8 g of a K2CO3-Na2CO3 mixture and 0.2 g Na2B4O7. The crucibles were inductively heated to a maximum temperature of approx. 930 °C. During the melt fusion, the crucibles were rotated and tilted to obtain a homogeneous melt. After cooling, the melt cake was dissolved in approx. 88 ml of water and 12 ml semi-concentrated hydrochloric acid. The solution obtained was weighed again, and the final volume was calculated from the density of the solution (1.015 g/ml). Measurement of Al, Ba, Ca, Cr, Mg, Mn, P, S, Sr, Ti: The analytes were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES, Varian 725-ES). Prior to taking the measurement, the instrument was optimized in accordance with the manufacturer's specification. Three replicate measurements were taken and averaged. We measured with 10 s integration time the following wavelengths [nm]: Al 396.152; Ba 493.408; Ca 317.933; Cr 206.158; Mg 279.553; P 213.618; S 181.972; Sr 216.596; Ti 336.122. The dilution factors were 6.67 for Al, Ba, Ca, Cr, Mg, Mn; and 1 for P, S, Sr, Ti. External calibration used concentrations of 0 / 1 / 5 mg/l with matrix-matched standards. The nebulizer (Meinhard 1 ml) had a flow of 0.7 l/min at a pump rate of 15 rpm. Complete reproduction on MMVF#4 through #11 confirmed better than 0.5% reproducibility of the SiO2 and Al2O3 contents. 
The slightly different sample preparation and measurement methods for optimal analysis of B (by ICP-OES) and Na, K (by Flame Atomic Absorption Spectrometry (F-AAS)) are described in full detail in the Additional file 1. Binder content and optional removal: As a standard, all MMVF were measured as received, without any sample preparation. For comparison in selected cases, the binder was removed by thermal annealing at 500 °C or by oxygen plasma. Specifically, the oxygen plasma was generated in a Diener electronic PCCE, using 10 min at 60 W O2 plasma. Alternatively, an oven (Heraeus thermicon T), pre-heated to 500 °C, was used to remove binder on samples of edge length 10 cm. The gravimetric loss can be accurately determined on such large samples, and is attributed to the organic phase (binder). Full TGA on selected materials confirmed that 500 °C is appropriate to remove binder. TGA utilized an STA449 F3 (Netzsch), operated under air at 40 ml/min, heated at 5 K/min from 35 °C to 560 °C. The analysis software (Netzsch Proteus Thermal Analysis 6.1) adheres to DIN 51005. Scanning Electron Microscopy (SEM) was performed both before and after dissolution testing. SEM samples were fixed on an adhesive film, coated with 9 nm Pt and investigated on a JSM 7500TFE (Jeol Company) operated at 5 keV. The topographic images were taken with secondary electrons (SE). The BET specific surface area was determined on a Quantachrome Autosorb according to ISO 9277:2010 by volumetric static measurement of the nitrogen isotherm at 77.3 K, with data evaluation according to the BET theory in the relative pressure range p/p0 between 0.001 and 0.3. Samples were prepared for adsorption analysis in a degasser, where the samples were heated up to 200 °C under vacuum for 30 min or more to remove moisture and other contaminations. For the specific equipment, we verified that specific surfaces down to 0.1 m2/g can be accurately determined. 
This was confirmed by repeatedly measuring Certified Reference Materials (Community Bureau of Reference - BCR 169 Alumina, certified at 0.100 m2/g, measured 0.095 m2/g; BCR 175 Tungsten, certified at 0.180 m2/g, measured 0.185 m2/g). However, the Certified Reference Materials have high powder density, whereas MMVF is less dense, resulting in a limited accuracy of the BET values of MMVF, which can deviate ± 0.15 m2/g, corresponding to about 30% uncertainty. Respirable fractions: To simulate handling of MMVF, between 100 g and 400 g were cut in an Alpine LU 100 rotating mill at around 1 kg/h throughput with an Ultraplex rotor at 94 m/s relative speed against a Conidur 0.2 mm sieve. The particle size distribution of the resulting MMVF dust was determined by cyclone cascade measurements. Dust samples were dispersed in a dosing feeder (K-Tron) and an injector with the aid of pressurized air. The particle-loaded gas flow of 20 m3/h was fed into a 40 mm tube with a length of 1 m (Additional file 1: Figure SI 1). A part of the gas flow (1.7 m3/h) was sampled through a cyclone cascade. The separation cut-off size of the cyclone cascade is subdivided into four steps from 10 μm to 0.3 μm. The aerodynamic diameter is defined as the diameter of a sphere with a density of 1 g/cm3 which has the same separation behavior as the measured sample. The conversion follows the equation: $$ d_{ae}=d_g\sqrt{\frac{\rho_g}{\rho}} $$ with d_ae = aerodynamic diameter, d_g = measured particle size, ρ = reference density of 1 g/cm3, and ρ_g = density of the sample substance. Weighing of the respective particle masses deposited on the individual cascade stages reflects the particle size distribution of the samples. Dissolution testing closely replicated (Additional file 1: Figure SI 2) the well-established MMVF dissolution methods of Guldberg, Sebastian, de Meringo et al. [14, 19,20,21], as discussed extensively by BAuA [22] and by the BIA report [18]. 
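The conversion to aerodynamic diameter described above can be sketched in a few lines. The example density of 2.8 g/cm3 is an assumption borrowed from the fibre density quoted later in the text, not a measured value for this dust:

```python
import math

RHO_REF = 1.0  # reference density [g/cm^3] in the definition of d_ae

def aerodynamic_diameter(d_g, rho_g):
    """Aerodynamic diameter from measured size d_g and sample density
    rho_g [g/cm^3], following d_ae = d_g * sqrt(rho_g / rho_ref)."""
    return d_g * math.sqrt(rho_g / RHO_REF)

# Example with assumed values: a 10 um fragment of density 2.8 g/cm^3
# behaves aerodynamically like a ~16.7 um unit-density sphere.
d_ae = aerodynamic_diameter(10.0, 2.8)
```

Because stone wool is denser than the 1 g/cm3 reference, the aerodynamic diameter of a fragment is always larger than its geometric size, which shifts the measured distribution toward the coarser cyclone stages.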
The method is a dynamic (flow-through) system (Additional file 1: Figure SI_2). Specifically, the amount of fibres (as a standard, m = 50 mg) is weighed with an accuracy of ±0.2 mg and dispersed evenly in the cell. The measured BET specific surface area gives the tested surface area SA = m * BET. The flow rate was V = 48 ml/d, but was varied up to 240 ml/d. This corresponds to a ratio of the initial surface area SA to volume flow V of SA/V = 83 h/cm on average (min 38 h/cm, max 160 h/cm; in inverse metrics our average is V/SA = 0.033 μm/s). As we screened up to 7 cells in parallel, controlled by the same peristaltic pump (Ismatec IPC 8, Additional file 1: Figure SI 2), the different BET specific surface areas of the materials result in slightly different SA/V. All testing was performed at 37 ± 0.5 °C. The effluent was collected and its pH checked. The programmable sampler drew 10 mL for ICPMS analysis (with the exact weight of each sample documented) after 1, 2, 4, 6, 8, 11, 14, 18, 21, 25, 28 and 32 days. This includes the sampling times of Guldberg et al. [14] and adds more to increase resolution and duration. In addition to the earlier methodology, we also collected the eluted medium between the sampling times (Additional file 1: Figure SI 2). This enables a cumulative analysis including all dissolved ions, and becomes independent of interpolation. After the experiment the remaining fibres are rinsed in de-ionized water and dried to constant weight. The weight loss is compared to the value calculated from interpolation of the time-resolved sampling and to the cumulative dissolved ions. The morphology of the corroded fibres in relation to the initial fibres is inspected by means of SEM. ICPMS and/or ICPOES was used to determine dissolved ions in the eluates. All samples were analyzed for Si, Al and Mg. The between-sampling collection was analyzed for Si, Al, Mg, K, Ti, Fe, Ca. The samples were filtrated and diluted by a factor of 2 with de-ionized water. 
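The SA/V bookkeeping described above can be reproduced in a few lines. The BET value of 0.4 m2/g is an assumed mid-range example (the text reports 0.2 to 0.6 m2/g), not a measured value for any specific material:

```python
# Surface-area-to-flow ratio SA/V for one dissolution cell.
m_g = 0.050                         # standard sample mass: 50 mg
bet_m2_per_g = 0.4                  # assumed BET, mid-range of 0.2-0.6
sa_cm2 = m_g * bet_m2_per_g * 1e4   # 1 m^2 = 1e4 cm^2  -> 200 cm^2
v_cm3_per_h = 48.0 / 24.0           # flow 48 ml/day -> 2 cm^3/h
sa_over_v_h_per_cm = sa_cm2 / v_cm3_per_h
```

With these assumed inputs SA/V is 100 h/cm, consistent with the reported average of 83 h/cm and range of 38 to 160 h/cm across the parallel cells.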
A higher dilution (factor 25) was used for elements with higher assays (Ca, K). A blank was run in an analogous manner. In the dilutions obtained, the analytes were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES Agilent 5100). Prior to taking the measurement, the instrument was optimized in accordance with the manufacturer's specification. Three replicate measurements were taken and averaged. We measured with 10 s integration time the following wavelengths [nm]: Al 394.401; Fe 259.940; K 766.491; Si 288.158; Ti 334.941 (axial observation); and Ca 396.847; Mg 279.553 (radial observation). The internal standard was Sc at 361.383 nm. External calibration used concentrations of 0 / 1 / 5 mg/l. The nebulizer (Meinhard 1 ml) had a flow of 0.7 l/min at a pump rate of 15 rpm. The analysis was performed in duplicate with less than 10% difference as the reproducibility criterion. Otherwise, the analysis was repeated. The statistical error in all ion concentrations used for calculation of dissolution rates is thus below 10%. The pH 4.5 medium composition, aiming to simulate the phagolysosome, replicated the "PSF" medium previously validated for the purpose of inhaled particle dissolution by NIST laboratories [23]: sodium phosphate dibasic anhydrous (Na2HPO4) 142.0 mg/l; sodium chloride (NaCl) 6650 mg/l; sodium sulfate anhydrous (Na2SO4) 71 mg/l; calcium chloride dihydrate (CaCl2 . 2H2O) 29 mg/l; glycine (C2H5NO2) 450 mg/l (as representative of organic acids); potassium hydrogen phthalate (1-(HO2C)–2-(CO2K)–C6H4) 4085 mg/l; alkylbenzyldimethylammonium chloride (ABDC) 50 ppm (added as an antifungal agent). This medium is near-identical to medium "C" in a previous interlab comparison of MMVF dissolution at pH 4.5 [19]. The pH of 4.5 ± 0.4 was verified before and after the experiment, and was re-measured also on the eluted samples. 
Analysis of blank cells showed that the Si and Al background in the pH 4.5 medium is sufficiently low, whereas the background levels of Ca interfere with the MMVF analysis. The pH 7.4 medium composition, aiming to simulate the extracellular lung compartment, followed one of the previously described Gamble's fluids [24]: magnesium chloride (MgCl2) 95 mg/l; sodium chloride (NaCl) 6,019 mg/l; sodium phosphate dibasic anhydrous (Na2HPO4) 298 mg/l; sodium sulfate anhydrous (Na2SO4) 63 mg/l; calcium chloride dihydrate (CaCl2 . 2H2O) 368 mg/l; sodium acetate (C2H3NaO2) 574 mg/l; sodium hydrogen carbonate (NaHCO3) 2,604 mg/l; sodium citrate dihydrate (Na3C6H5O7) 97 mg/l. We added sodium azide (NaN3) 20 mg/l as a biocide. The MMVF literature documents a variety of Gamble's pH 7.4 fluids, and the present version is consistent with others used earlier on MMVF dissolution [20]. The pH of 7.4 ± 0.3 was verified before and after the experiment, and was re-measured also on the eluted samples. The oxide composition is summarized in Table 1, listing the 15 MMVF materials that were also subjected to dissolution screening. All products tested were within a narrow range of SiO2 content, ranging from 40% to 44%, with an average of 42% SiO2 of the inorganic part. In addition to the inorganics, organic components were detected in all MMVF, with a content from 0.9% to 4.2% of the total weight. In TGA, the mass loss occurs in peaks between 300 °C and 500 °C, with an average mass loss of 2.8 ± 1.0% below 500 °C. The organic component is identified with the binder, oil, resins etc. that coat the surface of the fibres. The binder is observed also on SEM micrographs, where it visibly glues fibres together. Detailed analysis of the binder chemical composition was not performed. By SEM, the distribution of fibre diameters is polydisperse, with diameters from below 2 μm to above 20 μm, and often includes large nonfibrous "shot" particles on the order of 200 μm. 
Thus, the majority of fibres in MMVF is too large to penetrate deep into the lung, but every modern MMVF examined did have a small fraction of respirable fibres. Full statistical analysis of the fibre diameter distribution was beyond the scope of the present contribution, but the following section addresses the airborne fibre fraction. Finally, the specific surface area SA of the MMVF ranged between 0.2 and 0.6 m2/g. For orientation, using the density of 2.8 g/cm3 and the known fibre shape, the SA can be converted to an average diameter of the fibres, giving values between 2.4 μm and 10 μm. Considering the polydispersity, this is consistent with SEM and with literature [10]. Table 1 Composition of MMVF sourced from various countries and producers. Weight content of oxides and of binder. Nine additional MMVF materials from further countries and producers were analyzed for their composition, binder content, specific surface area and SEM morphology (available, not shown here). Their properties (summarized in Additional file 1: Table SI_1) are consistent and remain within the ranges observed for the main test set. Respirable fraction of milled MMVF: MMVF #2, #5 and #12 were chosen for screening fractionation because their composition is roughly average across the entire test set, and thus considered to be representative. After milling, their dusts were dispersed and separated by aerodynamic diameter by means of a cyclone. The weight collected on the impactor stages shows that for all MMVF investigated, about 59-75% of the total dust mass has aerodynamic diameters >10 μm, and is thus not or only partially inhalable to humans. However, the cyclone fractions with aerodynamic diameters below 7.4 μm vary from 0.29% to 6.29% of milled MMVF mass for the modern MMVF (Table 2). The spread of this fraction is large, but not different from the value of 3.65% found in the dust from the historic MMVF#17 (Table 2). 
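The conversion of specific surface area to an average fibre diameter quoted earlier can be sketched under the assumption of long circular cylinders (end faces neglected), for which SA = 4/(ρ·d); the exact geometric model used by the authors is not stated:

```python
RHO_G_PER_M3 = 2.8e6   # fibre density: 2.8 g/cm^3 expressed in g/m^3

def mean_fibre_diameter_um(sa_m2_per_g):
    """Average diameter [um] of long cylindrical fibres with specific
    surface area SA [m^2/g], inverting SA = 4 / (rho * d)."""
    return 4.0 / (RHO_G_PER_M3 * sa_m2_per_g) * 1e6

d_thin = mean_fibre_diameter_um(0.6)   # high-SA end -> ~2.4 um
d_thick = mean_fibre_diameter_um(0.2)  # low-SA end  -> ~7.1 um
```

This cylinder assumption reproduces the lower end (2.4 μm) of the quoted 2.4 to 10 μm range exactly; the higher upper end reported in the text may reflect polydispersity or the nonfibrous shot particles.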
The next smaller fraction, with aerodynamic diameters up to 4.2 μm, is significantly lower, with values from 0.02% to 0.22% for the modern MMVF, to be compared to 0.04% for the historic MMVF#17. Two of our modern MMVF were similar, but one (MMVF#5) has significantly lower dustiness, as it was also visibly soft and clumpy already during milling. Table 2 Fractionation of airborne MMVF. Weight of each fraction as a percentage of the total initial MMVF mass. SEM analysis confirms that the cyclone fractions consist of thinner fibres, and their diameters are consistent with the nominal aerodynamic diameter cut-offs (Fig. 1 and Additional file 1: Figure SI_3). The fractions also contain milling debris of short fragments with low aspect ratio. SEM micrographs of MMVF #12: as received – after milling – only respirable fraction. See the Supporting Information for analogous results on MMVF #5. Screening was performed on the as-received MMVF materials. The ions detected at each sampling time are normalized to the initial content of the specific element in the specific MMVF, and are then interpolated with due consideration of the different sampling intervals to finally obtain the percentage that has dissolved from this oxide. The resulting kinetics are plotted in Fig. 2 for both pH conditions. At pH 7.4, dissolution is consistently accelerating over the 32 days of sampling, and Si and Al dissolve with near-identical kinetics, despite their very different content in these materials. In contrast, at pH 4.5 we observe a significantly faster dissolution. Further, at pH 4.5 the initial dissolution rate tends to slow down over time. Finally, at pH 4.5 we observe a higher fraction of Al than Si in ionic form (Fig. 2), and an even higher fraction of Mg (Additional file 1: Figure SI_4). Dissolution kinetics in neutral and acidic pH conditions, all at an initial mass of 50 mg MMVF, flow 48 ml/day. Si (black lines), Al (blue lines). pH 4.5 (dots), pH 7.4 (crosses). 
a MMVF #4, b MMVF #5, c MMVF #12, d MMVF #14, e MMVF #22. In addition to the kinetics sampling, the entire elution medium is collected between the sampling times and analyzed so that all eluted ions are known. The sum of all ions provides the cumulated dissolved fraction, independently for Si and Al, which is then weighted by their relative content to give the "ion" columns in Table 3, and divided by the initial surface area SA and by the total time of 32 days to obtain the dissolution rate k in units of ng/cm2/h. Table 3 summarizes the results for a total of 15 materials. For each dissolution experiment, the quantitative assessment by "ions" is supported by a complementary assessment by gravimetry of the remaining solids, which is shown for the two main screenings at pH 4.5 and pH 7.4 under standard conditions as column "S" in Table 3. Table 3 Dissolution screening at pH 7.4 and pH 4.5. "Ions" = cumulated dissolved Si and Al based on ICPMS quantification of all eluted ions, in % of initial Si and Al; "S" = remaining solid mass, in % of initial MMVF mass; "k" = dissolution rate determined from cumulated dissolved ions, in ng/cm2/h. The remaining solids were further imaged by SEM, and compared against the MMVF before aging (which is not the identical sample taken for dissolution, as SEM typically requires coatings). We make no attempt to statistically evaluate the fibre diameters. Instead, the SEM analysis shows that in general the untreated MMVF surface is smooth with occasional inclusions of 100 nm to 1 μm sized particles. Judging from the morphology at fibre junctions, the binder covers the entire MMVF surface (see untreated fibres in Additional file 1: Figure SI_5, especially MMVF #8, #14, #20). The re-analysis by SEM after dissolution confirms that the fibre morphology is persistent after aging, in general without splicing or obvious shortening. For exemplary detailed results, MMVF #7 (Fig. 
3) was chosen because its composition is roughly average across the entire test set, and is thus considered to be representative. MMVF #4 (Fig. 4) was chosen because it has the highest Al content of the entire test set and is an innovative product with process and benefit characteristics inherited from both stone wool and glass wool. MMVF #7 and MMVF #4 both change their surface significantly after 32 days at pH 4.5, showing pronounced gel formation. For instance, on MMVF #4, deep craters of approx. 400 nm diameter with sub-100-nm cracks at their bottom are frequently observed after dissolution at pH 4.5 (Fig. 4). In contrast, after dissolution at pH 7.4, deposits that often appear crystalline are found on the surfaces. For further comparison, Additional file 1: Figure SI_5 shows the near-absence of morphological changes on MMVF #17 (pre-1995), in excellent accord with its very low dissolution. Throughout the test set of modern MMVF, at pH 4.5 a gel with leaching structures in the form of pits is frequently observed, occasionally also leaching craters, plateaus, microcracks and ring-shaped hems (potentially collapsed bubbles). Fibre breakage is rare. The SEM observations are summarized in Table 3, with high-magnification scans shown in Additional file 1: Figure SI_5. Full SEM results are documented in the Additional file 2: SEM Annex, with low-mid-high magnification for all MMVF before and after dissolution. MMVF #7 (average composition MMVF) morphology by various dissolution conditions at pH 4.5 and pH 7.4, with and without binder: SEM analysis. MMVF #4 (high-alumina) morphology by various dissolution conditions at pH 4.5 and pH 7.4, with and without binder: SEM analysis. Several materials were subjected to modifications of the standard conditions, in order to explore the origin of the unexpectedly low dissolution rates. 
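The normalisation of cumulated dissolved ions to a rate k in ng/cm2/h, as used for Table 3, can be sketched as follows. The dissolved mass and surface area are assumed example values for illustration, not entries from the table:

```python
# Dissolution rate k = (cumulated dissolved Si + Al) / (SA * time).
dissolved_ng = 5.0e6        # assumed cumulated Si + Al over the run [ng]
sa_cm2 = 200.0              # assumed initial surface area [cm^2]
duration_h = 32 * 24        # 32-day test expressed in hours
k_ng_per_cm2_h = dissolved_ng / (sa_cm2 * duration_h)
```

With these assumed inputs k is about 33 ng/cm2/h, i.e. within the 17 to 90 ng/cm2/h range reported for modern MMVF with binder; note that k is an average over the whole 32 days, so it smooths over the deceleration observed at pH 4.5.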
To compare against literature, binder was removed by two different methods from MMVF #7 and from MMVF #4, which represent high and low binder content, respectively. Dissolution at pH 4.5 was then performed and revealed a dramatic acceleration of Mg leaching and of Si and Al dissolution from the high-alumina fibre MMVF #4 (Fig. 5). The k rate based only on Si and Al doubles from 20 to 39 ng/cm2/h (Table 3), and the remaining solids even drop to 64% after 32 days. The effects are less pronounced, but an equally significant acceleration from 40 to 59 ng/cm2/h is observed for MMVF #7. In accord, the dissolution morphology also changes without binder. The surface is much smoother, with leaching pits reduced in size or completely absent. Plasma treatment has an intermediate effect both on morphology (Figs. 3 and 4) and on kinetics (Fig. 5a, b). We also performed nitrogen adsorption before and after binder removal on MMVF #4, #5 and #7 (low, high and mid binder content). The dimensionless BET isotherm fitting constant c is reduced by a factor of 2.8 ± 1.3 with the binder. Due to this change of physisorption mechanisms, the net change of specific surface area by the presence of binder has positive or negative sign, depending on the evaluation model: -22% by BET evaluation, +15% by Langmuir evaluation. Binder effects on dissolution kinetics at pH 4.5 of MMVF #4, with binder (dots), binder removed by plasma (crosses), binder removed thermally (boxes), for the three oxides Si (black), Al (blue), Mg (grey). Additional dissolution studies were also performed on the respirable fractions of MMVF #5 and MMVF #12, finding an increase of dissolution rates for MMVF #5 up to 122 ng/cm2/h, and a slight decrease for MMVF #12. In both cases, the milled, unfractionated materials had intermediate dissolution rates (Table 3).
Man-made vitreous fibres (MMVF) are classified within the European Union (EU) as carcinogen category 2 (suspected human carcinogens), but Nota Q and Nota R specify criteria to exonerate fibres from this classification [9]. The HT stone wool fibres are a range of MMVF compositions that fulfill European regulatory requirements for exoneration from classification as a carcinogen and are registered by a chemical compositional range for the CAS number 287922-11-6. This range – synonymously designated as "High-alumina, low-silica wool" or "HT stone wool" or "biosoluble stone wool" – is defined by the range of dominant metal oxides shown in Table 4, with silica in the range 33% – 43% and alumina in the range 18% – 24%. In contrast, the MMVF class of pre-1995 "Rock (stone) wool" has a significantly higher SiO2 content of 43% – 50% and a lower Al2O3 content of 6% – 15% [10]. As rationale for the delimitation of the HT class with high biosolubility, it has been proposed that "an increase in Al/(Al + Si) ratio will result in a more hydrated and less continuous remaining silica network as aluminum is removed […] As a result, the ability to form dense surface layers is reduced and hereby the dissolution rate increases" [16]. The modern MMVF were analyzed to have a content of SiO2 highly controlled within a narrow range, and still relatively similar contents of Al2O3, CaO and MgO, showing in this order an increasing spread of composition across the test set. As defined at the time of introducing the HT fibres, "a maximum limit of 43% SiO2 and a minimum limit of 18% Al2O3 and 23% CaO + MgO should secure that the fibres are biosoluble" [16]. The condition on Al2O3 is fulfilled by nearly the entire test set, but the conditions on SiO2 and CaO + MgO are frequently not fulfilled. Thus, most but not all of the present set of modern MMVF belonged to the class of "High-alumina, low-silica wool".
Part of the test set was on or beyond the borderline to the class of pre-1995 "Rock (Stone) wool". In terms of the Al/(Al+Si) ratio, and also in terms of SiO2 content, the average of the test set is halfway between the references MMVF21 and MMVF34, with a relatively wide spread of individual oxides but a very low spread in the Al/(Al+Si) ratio = 0.34 ± 0.05 (min 0.29, max 0.38). Table 4 Summary of modern MMVF (#1 to #28) compared to IARC reference ranges: results on composition and dissolution. The compositional range of Rock (stone) wool (carcinogen classification, low biosolubility) is represented by the historical reference MMVF21. The compositional range of high-alumina, low-silica wool (synonymously HT stone wool or high-biosolubility stone wool or CAS 287922-11-6) is represented by the historical reference MMVF34. The class of HT stone wool is specifically designed to have relatively lower solubility at neutral pH (for technical performance) and high solubility at acidic pH (for product safety). Lower solubility at neutral pH is advantageous for technical durability in the intended use. The alumina content, replacing silica, is advantageous to reduce costs and to increase productivity via a melt viscosity at 1,400 °C of 10–70 poise. Hence, even patents specify dissolution at pH 4.5, and define a particularly preferred class by SiO2 < 42.0% and Al2O3 > 20% [25]. Literature "supports the use of in vitro fibre degradation at pH 7.4 and/or pH 4.5 as an indicator of SVF [synthetic vitreous fibre] potential pathogenicity" [26]. We observe at pH 4.5 and also at pH 7.4 dissolution rates that are very similar to stone wool MMVF21, which is plausible considering the related composition. And yet, the decreased SiO2 content, halfway to the MMVF34 reference, should result in intermediate dissolution rates as well. MMVF21 at pH 4.5, tested at the same SA/V as in our screening, had kSi = 47 ng/cm2/h and kleach = 72 ng/cm2/h, and at pH 7.4 a ktotal = 23 ng/cm2/h [19].
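The Al/(Al+Si) molar ratio used to delimit these classes can be computed from the oxide weight percentages (a sketch with an illustrative, hypothetical composition; molar masses SiO2 ≈ 60.08 g/mol and Al2O3 ≈ 101.96 g/mol, with two Al atoms per Al2O3 formula unit):

```python
M_SIO2, M_AL2O3 = 60.08, 101.96  # molar masses in g/mol

def al_si_ratio(wt_sio2, wt_al2o3):
    """Molar Al/(Al+Si) ratio from oxide weight percentages."""
    mol_si = wt_sio2 / M_SIO2            # mol Si per 100 g of fibre
    mol_al = 2.0 * wt_al2o3 / M_AL2O3    # mol Al per 100 g of fibre
    return mol_al / (mol_al + mol_si)

# Illustrative composition near the test-set average: 41 wt% SiO2, 19 wt% Al2O3.
print(round(al_si_ratio(41.0, 19.0), 2))  # -> 0.35, within the reported 0.29-0.38 span
```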
The kleach rate reports Ca and Mg ions, whereas ktotal integrates all measured ions. Focusing on the disintegration of the vitreous fibre structure, this is an acceleration by a factor of 2.04 in acidic vs. neutral pH. For comparison, the average acceleration in our test set is 2.5. Considering the wide span from <<1 to >>1 for different MMVF types, this is a close match. Further, also for MMVF21 the moderate contribution of leaching, with a 1.5-fold higher leaching rate than the Si-based dissolution rate, is fully consistent with our observations of 1.1 to 1.6 times higher leaching rates (based on Mg) as compared to the Si-based dissolution rates. In the following we systematically discuss potential sources of error in the dissolution methodology: Earlier studies used sieved material without shot. We might have underestimated k by slow dissolution of thick shot particles. This hypothesis was tested experimentally here. Indeed, comparison of our dissolution of entire MMVF against our dissolution of the respirable fraction shows in one case an acceleration but in another case a moderate deceleration (Figure 1, Additional file 1: Figure SI_3, Table 3: +40% for MMVF #5, -20% for MMVF #12). Overall the shot effect and diameter effect are not enough to explain the slow dissolution. This is supported by Potter, who found only a 17% acceleration between MMVF34 and respirable-separate MMVF34 [27]. The media are not completely standardized throughout the literature. Our pH 4.5 medium, also designated as "Phagolysosomal Simulant Fluid" (PSF) [23], is actually based on earlier MMVF media, and has the identical ingredients as medium "C", also designated as "Modified Kanapilly (phthalate)" in the interlab comparison of MMVF in different pH 4.5 media [19]. It was concluded that "The type C liquid with the phthalate buffer gives results which in most cases are comparable with those obtained with the acidified Gamble's liquid (type B)" [19].
Additional observations also match, regarding Mg (and Al) leaching at pH 4.5 but not at pH 7.4, and regarding the ratio of rates obtained at different SA/V ratios. de Meringo et al. measured k for SA/V ranging from 10 to 400 h/cm, which extends farther than our SA/V range [21, 28]. Our data confirms that higher k can be observed at lower SA/V, and our actual acceleration is consistent with factors observed in their study (Table 3); moreover, our average SA/V of 83 h/cm (inversely, V/SA = 0.033 μm/s) is fully consistent with earlier data. E.g., Guldberg et al. specify V/SA = 0.030 μm/s (corresponding to SA/V = 92 h/cm, fully consistent with our screening) [14, 19]. The BIA report even recommends "a low [V/SA] ratio of 0.003 μm/s proved to be the most favorable condition to relate to in-vivo data" [18]. Concerning ion analysis, we follow the advice from Guldberg et al. to calculate our k values based on dissolved ions during 25-30 days [19] (32 days in our case). They recommend Si, optionally Al. We follow Potter in adding the oxides of Si and Al for k determination [27], and additionally report kinetics of Mg to assess leaching. We did not differentiate initial k vs. average k, as proposed by the initial studies of de Meringo [21]. The limited accuracy of the BET determination imposes an uncertainty of up to 30% for the conversion from measured ions to k values. However, BET is not required to compare ion dissolution as-marketed vs. binder-removed, or pH 4.5 vs. pH 7.4, or entire vs. respirable-only. Finally, our sampling concept is fully consistent with the Guldberg et al. method [14], but adds more. We believe this to be an improvement of reliability, as the between-sampling volume represents more than 90% of all ions, whereas earlier concepts only analyzed the samples, which hold about 10% of all ions. It allows us to cross-check the mass balance between remaining solids (gravimetric) vs. interpolated ion samplings vs. cumulative ion sampling.
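The equivalence between the two conventions for the flow parameter quoted above can be checked directly (a sketch; 1 cm/h = 10^4 μm / 3600 s):

```python
def v_over_sa_um_per_s(sa_over_v_h_per_cm):
    """Convert SA/V in h/cm to the inverse flow parameter V/SA in um/s."""
    v_over_sa_cm_per_h = 1.0 / sa_over_v_h_per_cm
    return v_over_sa_cm_per_h * 1e4 / 3600.0  # cm/h -> um/s

print(round(v_over_sa_um_per_s(83.0), 3))  # our screening average -> 0.033
print(round(v_over_sa_um_per_s(92.0), 3))  # Guldberg et al. value -> 0.030
```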
We find excellent consistency between the two ion-based methods (Additional file 1: Figure SI_6b), and very good consistency, with a few percent mismatch, between either of the ion methods and the remaining solids (Additional file 1: Figure SI_6a). One outlier of significantly lower remaining solids (64%) than expected from 19% dissolved Si and Al ions (MMVF #4, thermal binder removal) may be related to additional leaching effects, as observed on Mg (Figure 5a), but may also indicate that, because this is the only case of dissolution >> 10%, a more complicated calculation would be beneficial. Thelohan and de Meringo support that this is not required for low overall dissolution [28]. On average across the entire test set, the mass balance is 100.1 ± 4% (min 95%, max 110%), which we take as strong support for the validity of the unexpectedly low dissolution rates. In summary, we believe our dissolution methodology to be valid and appropriate for comparison against literature data. Thus, the reasons identified for the slow dissolution rates are the oxide composition (discussed above) and the presence of the binder (discussed below). We agree that studies without binder are highly relevant for mechanistic understanding of shape-induced effects (the "fibre paradigm"), but not for the assessment of occupational hazards. Studies without binder do not address an occupational scenario (such as a traded intermediate), but performed a post-processing of the as-marketed MMVF to remove the binder, or tested non-commercial materials [29]. In reality, modern stone wool MMVF were found to be coated by 2.8 ± 1.0% binder (Table 1). This value is consistent with literature [2]. Our nitrogen adsorption isotherms show that the binder reduces the adsorption energy of the first nitrogen layer significantly across the ensemble of fibre surfaces, not only locally.
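The mass-balance cross-check can be sketched as follows (hypothetical masses for illustration; the real check compares gravimetric remaining solids with the two ion-based estimates):

```python
def mass_balance_percent(initial_mg, remaining_mg, dissolved_ions_mg):
    """Recovered mass (solids + dissolved ions) as a percentage of the initial mass."""
    return 100.0 * (remaining_mg + dissolved_ions_mg) / initial_mg

# Hypothetical run: 300 mg of fibre, 274 mg left after 32 days, 27 mg of ions eluted.
print(round(mass_balance_percent(300.0, 274.0, 27.0), 1))  # -> 100.3, i.e. mass is conserved
```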
The reduction of specific surface area by the binder covering fibre-fibre touching points is moderate and does not suffice to explain the reduced release of ions in the presence of the binder. By optical microscopy of one example of glass wool MMVF, Potter and Olang found no change in diameter shrinkage rates at pH 7.4 with or without a novel carbohydrate-polycarboxylic binder [30]. Due to the significant differences in the composition of glass wool and binder (in their example: 67.9% SiO2, 1.3% Al2O3 with droplets of a hydrophilic binder that "swells in water"), extrapolation to stone wool MMVF coated by hydrophobic binder and oil is impossible. To the best of our knowledge, the effect of binder coatings on stone wool MMVF has not been reported. Our direct comparison of dissolution kinetics (Fig. 5, Table 3) evidences a significant acceleration of stone wool dissolution by removal of the binder by either of the two removal processes tested. Especially leaching of Mg is suppressed in the presence of the binder (Fig. 5). This can be a direct effect of the binder layer or an indirect effect via a silica-rich gel layer. The occurrence of gel on dissolving MMVF was observed already in the 1984 WHO proceedings, and its slowing effect on dissolution was discussed in detail [1]. Guldberg et al. highlighted that in their dissolution studies on high-alumina, low-silica MMVF – tested without binder – gel formation was reduced, and they specifically attributed this to the increase in Al/(Al + Si) ratio [16]. It is intriguing to compare our SEM results in Figs. 3 and 4: if we remove the binder, we reproduce the observation of Guldberg et al., as the surface of our binder-removed MMVF after dissolution testing is smooth and shows neither leaching pits nor an obvious gel layer. This holds for both processes that we tested to remove binder, and for both MMVF #4 and MMVF #7 (Figs. 3 and 4). In contrast, the same materials with their binder show a pronounced gel formation (Figs. 3 and 4).
The gel formation with leaching-pit features was actually observed for the entire test set of modern MMVF, tested with binder (Additional file 1: Figure SI_5, summarized in Table 3; even more SEM results are documented in Additional file 2: SEM Annex). Potentially the binder-induced gel formation is a mechanism contributing to the dissolution rates (tested with binder) being lower than in previous literature (tested without binder). Highly resolving SEM scans were unfortunately not reported for the one example of glass wool dissolution with hydrophilic binder [30].
Classification based on composition and biopersistence
The reduction of dissolution for the specific composition (and binder) is relevant, as the range of k we measure is the borderline region correlated by IARC to the change of pathogenicity from MMVF21 to MMVF34 in terms of their fibrogenic potential (Table 4): MMVF21 caused pulmonary fibrosis, but MMVF34 did not. In 107 rats exposed to MMVF34, no carcinoma and five adenomas were observed. In the 107 rats in the control group, one carcinoma and three adenomas were found [10]. All the modern MMVF tested here were significantly different in their composition from MMVF34, which actually is an extreme point already in the "biosoluble MMVF" CAS compositional range (with the highest Al/(Al+Si)). MMVF34, tested without binder, also had the highest biosolubility within the CAS range [16]. Only for MMVF34, tested without binder, were the absence of chronic inhalation effects [17] and the absence of cancerogenicity [15] reported. Thus, exoneration of the CAS range was a) extrapolated from materials without binder and b) extrapolated from MMVF34 motivated by dissolution tests [16]. Table 4 summarizes our results on composition and dissolution, and benchmarks modern MMVF against the IARC / CAS ranges and representative MMVF. For visualization, Fig. 6 plots the k results in the Al/(Al+Si) metric.
No trend can be identified because of the low spread of the modern MMVF in Al/(Al+Si). As an alternative visualization, the k results are plotted in the KI metric. KI is defined by an obsolete German regulation, now overruled by CLP, as the sum of the contents of the oxides of Na, K, B, Ca, Mg and Ba minus twice the content of Al oxide [31]. Interestingly, KI separates MMVF #4 and MMVF #17 apart from the other MMVF, and a trend of k in the KI metric is suggested by the available results, but overall KI is not helpful to predict k. The differentiation of MMVF #4 is expected because it is an innovative product combining the high-temperature performance of stone wool with the thermal, acoustic and low-weight benefits of glass wool. This new type of mineral wool, with a composition similar to that of stone wool, processed through a high-temperature version of a glass wool fiberising spinner, does show the same binder-induced gel formation as our other test materials, but reduced sensitivity to pH (Table 3). Regardless, we do not aim to establish any new predictive parameterization of composition. Instead, the composition analysis simply shows that modern MMVF do not all fall into the CAS range of exonerated MMVF. Independently, the dissolution rates are found to be lower (average 41 ng/cm2/h) than expected for exonerated fibres that are biosoluble with no effects in intraperitoneal injection (ip) cancerogenicity studies and in chronic inhalation studies (>400 ng/cm2/h, Table 4). The most important systematic uncertainty of 30% on absolute k values is our use of BET for surface area determination, but that does not compromise significance against the exonerated fibre values. Only the in vivo studies are relevant for current MMVF classification [10, 32], but biosolubility is decisive to prevent fibre-induced pathogenicity [6, 33, 34]. Specifically for MMVF, in vitro dissolution testing is known to correlate well with biosolubility and ultimately with pathogenicity [10, 15, 16, 17, 26].
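The KI index definition quoted above translates into a one-line calculation (a sketch with a hypothetical oxide composition, all values in weight %):

```python
def ki_index(na2o, k2o, b2o3, cao, mgo, bao, al2o3):
    """KI per the obsolete German TRGS 905 rule: alkali/alkaline-earth oxides minus 2x Al2O3."""
    return na2o + k2o + b2o3 + cao + mgo + bao - 2.0 * al2o3

# Hypothetical stone wool composition: a high Al2O3 content drives KI strongly negative.
print(round(ki_index(2.0, 0.8, 0.0, 17.0, 9.0, 0.1, 20.0), 1))  # -> -11.1
```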
Classification relies on in vivo clearance rates and/or fibrogenic and/or carcinogenic potential [9, 32]. Our present contribution hence remains a screening. We emphasize the importance of validating the present findings by appropriately designed in vivo studies that also use high-resolution counting methodology to determine the dimensions of retained fibres, with and without binder, in the lung. Summary of measured dissolution rates k [Si, Al, ng/cm2/h] as a function of the compositional parameters a, b of the molar ratio Al/(Al + Si) or c, d of KI. The flow rates are identical throughout, and the pH is a, c pH 4.5 and b, d pH 7.4. The materials with the highest k at pH 4.5 (MMVF #1, MMVF #5) were also screened at pH 7.4 and are included in the plot.
Potential of exposure
To complement the hazard screening, the potential exposure needs to be known to assess the urgency of action. According to the European Standard EN 481 "Size Fraction Definition for Measurement of Airborne Particles" (1993), the thoracic fraction is that portion of the inhalable particles that pass the larynx and penetrate into the conducting airways (trachea, bifurcations) and the bronchial region of the lung (D50 = 10 μm), whereas the respirable fraction is the portion of inhalable particles that enter the deepest part of the lung, the nonciliated alveoli (D50 = 4 μm). Thus, our studies (Table 2) show that the respirable fraction of modern MMVF, assessed here by a 4 μm cutoff, is not less than in pre-1995 MMVF. The function of the binder is obviously to bind the fibres together, and thus it is beneficial to reduce exposure, but the IARC collected evidence that both manufacturing at MMVF production plants and installation at construction sites generate respirable airborne dust containing WHO fibres in a wide range of approximately 0.01 to 1 fibres/cm3 [10]. Overall, relevant exposure cannot be excluded.
The compositional range of modern MMVF products (Table 4 and the additional products in Additional file 1: Table SI_1) is not compatible with the reference material MMVF34, which was used as the benchmark to assess fibres with high-alumina, low-silica compositions, which consequently were exonerated from carcinogen classification. Instead, the compositional range of modern stone wool extends between the MMVF34 (biosoluble stone wool) and MMVF21 (low-biosolubility stone wool) reference materials as limiting cases, with a compositional average matching quite exactly the midpoint between MMVF34 and MMVF21. These results are not limited to a single producer or to a single country of origin, but cover 5 producers from 6 countries. The dissolution rates at pH 4.5, measured by a replicate of setups that were developed and validated in the 1990s, are an order of magnitude slower than those reported for biosoluble MMVF34. This is significant considering all known sources of error. Despite the SiO2 content of an average modern MMVF being reduced vs. the historical benchmark MMVF21, the measured average dissolution rates at both pH 7.4 and pH 4.5 are within 20% identical to MMVF21 (Table 4). This is explained, at least in part, by the presence of up to 4% binder that coats the MMVF and has a significant influence on dissolution, probably by favoring gel layer formation. Here we tested MMVF as they are marketed and handled, i.e. with binder, whereas it appears that previous hazard assessment relied on abiotic, in-vitro and in-vivo studies with MMVF dissolution accelerated by deliberate removal of the binder. Considering that modern MMVF, with their binder, have actual dissolution rates ranging from 6 ng/cm2/h to 171 ng/cm2/h, which is the borderline range correlated to the onset of lung fibrosis and thoracic tumors [10], and considering further the content of respirable fibres, the risk assessment of modern stone wool may need to be revisited.
However, in vitro dissolution studies remain indicative and cannot replace nor predict in-vivo studies of MMVF as marketed (with binder). "Es ist weiterhin notwendig, nicht nur binderfreies Material zu untersuchen, das in der Arbeitswelt kaum auftaucht, sondern Fasern in handelsüblicher Form, d.h. mit Bindermaterial." (BIA report 02/98, Conclusions p. 139, endorsed also by "Stellungnahme R.C. Brown, U. Nebe, ECFIA: …Im übrigen teilen wir die Ansicht, daß es weiterhin notwendig ist, nicht nur binderfreies Material zu untersuchen, das in der Arbeitswelt kaum auftaucht, sondern Faserprodukte in handelsüblicher Form." p. 279). Author's translations: "Furthermore it is necessary to investigate not only binder-free material, which is uncommon in the working environment, but fibres in the form that is customary in trade, i.e. with binder material." and "…by the way, we share the view that it is necessary to investigate not only binder-free material, which is uncommon in the working environment, but fibre products in the form that is customary in trade." WHO (World Health Organization). Biological Effects of Man-Made Mineral Fibres. Proceedings of a WHO/IARC Conference. 1984. TIMA (Thermal Insulation Manufacturers Association). Man-made Vitreous Fibres: Nomenclature, Chemical and Physical Properties, 4th Ed., W. Eastes, Editor. 1993, Nomenclature Committee of Thermal Insulation Manufacturers' Association, Refractory Ceramic Fibres Coalition (RCFC): Washington DC. IARC (International Agency for Research on Cancer). Man-made Mineral Fibres and Radon. 1988, World Health Organization. Roller M, et al. Dose-response relationship of fibrous dusts in intraperitoneal studies. Environmental Health Perspectives. 1997;105(Suppl 5):1253–6. Miller BG, et al. Influence of fibre length, dissolution and biopersistence on the production of mesothelioma in the rat peritoneal cavity. Ann Occup Hyg. 1999;43 Donaldson K, Seaton A. A short history of the toxicology of inhaled particles.
Particle and Fibre Toxicology. 2012;9(1):13. Oberdörster G. Determinants of the pathogenicity of man-made vitreous fibres (MMVF). International Archives of Occupational and Environmental Health. 2000;73(1):S60–8. Guldberg M, et al. The development of glass and stone wool compositions with increased biosolubility. Regulatory Toxicology and Pharmacology. 2000;32(2):184–9. CLP. Classification, labelling and packaging of substances and mixtures, amending and repealing Directives 67/548/EEC and 1999/45/EC, and amending Regulation (EC) No 1907/2006, in Regulation (EC) No 1272/2008, The European Parliament and the Council, Editor. 2008. p. 1-1355. IARC (International Agency for Research on Cancer). Man-made Vitreous Fibres. 2002, World Health Organization. p. 1-433. Baan RA, Grosse Y. Man-made mineral (vitreous) fibres: evaluations of cancer hazards by the IARC Monographs Programme. Mutation Research/Fundamental and Molecular Mechanisms of Mutagenesis. 2004;553(1–2):43–58. RAL. Erzeugnisse aus Mineralwolle. 1999. Gütesicherung RAL GZ 388. EIPPCB (European Integrated Pollution Prevention and Control Bureau). Reference Document on Best Available Techniques in the Glass Manufacturing Industry. 2000, EC Joint Research Centre, Institute for Prospective Technological Studies: Sevilla. Guldberg M, et al. Method for determining in-vitro dissolution rates of man-made vitreous fibres. Glass Science and Technology. 1995;68(6):181–7. Kamstrup O, et al. Carcinogenicity Studies after Intraperitoneal Injection of Two Types of Stone Wool Fibres in Rats. The Annals of Occupational Hygiene. 2002;46(2):135–42. Guldberg M, et al. High-alumina low-silica HT stone wool fibres: A chemical compositional range with high biosolubility. Regulatory Toxicology and Pharmacology. 2002;35(2):217–26. Kamstrup O, et al. Chronic inhalation studies of two types of stone wool fibres in rats. Inhalation Toxicology. 2001;13(7):603–21.
Muhle H, et al. Fasern - Tests zur Abschätzung der Biobeständigkeit und zum Verstaubungsverhalten (BIA-Report 2/98). 1998: Sankt Augustin. Guldberg M, et al. Measurement of in-vitro fibre dissolution rate at acidic pH. Annals of Occupational Hygiene. 1998;42(4):233–43. Zoitos BK, et al. In vitro measurement of fibre dissolution rate relevant to biopersistence at neutral pH: An interlaboratory round robin. Inhalation Toxicology. 1997;9(6):525–40. De Meringo A, et al. In vitro assessment of biodurability: acellular systems. Environmental Health Perspectives. 1994;102(Suppl 5):47. Roller M. Bedeutung von In-vitro-Methoden zur Beurteilung der chronischen Toxizität und Karzinogenität von Nanomaterialien, Feinstäuben und Fasern. Dortmund, Berlin: BAuA; 2011. p. 1–365. Stefaniak AB, et al. Characterization of phagolysosomal simulant fluid for study of beryllium aerosol particle dissolution. Toxicology in Vitro. 2005;19(1):123–34. Marques M, Loebenberg R, Almukainzi M. Simulated Biological Fluids with Possible Application in Dissolution Testing. Dissolution Technologies. 2011;8:15–29. Jensen SL, Christensen VR, Guldberg M. Man-made vitreous fibres. 2002, Patent US 6346494 B1. Hesterberg TW, Hart GA. Lung Biopersistence and in Vitro Dissolution Rate Predict the Pathogenic Potential of Synthetic Vitreous Fibres. Inhalation Toxicology. 2000;12(sup3):91–97. Potter RM. A method for determination of in-vitro fibre dissolution rate by direct optical measurement of diameter decrease. Glastechnische Berichte. 2000;43:46–55. Thelohan S, De Meringo A. In vitro dynamic solubility test: influence of various parameters. Environmental Health Perspectives. 1994;102(Suppl 5):91. Kamstrup O, et al. Subchronic Inhalation Study of Stone Wool Fibres in Rats. The Annals of Occupational Hygiene. 2004;48(2):91–104. Potter RM, Olang N. The effect of a new formaldehyde-free binder on the dissolution rate of glass wool fibre in physiological saline solution.
Particle and Fibre Toxicology. 2013;10(1):13. AGS (Arbeitsgemeinschaft Gefahrstoffe), TRGS 905 Anorganische Faserstäube (außer Asbest), 1995. Harrison P, et al. Regulatory risk assessment approaches for synthetic mineral fibres. Regulatory Toxicology and Pharmacology. 2015;73(1):425–41. Bernstein DM, et al. BIOPERSISTENCE OF SYNTHETIC MINERAL FIBRES AS A PREDICTOR OF CHRONIC INTRAPERITONEAL INJECTION TUMOR RESPONSE IN RATS. Inhalation Toxicology. 2001;13(10):851–75. Lippmann M. Toxicological and epidemiological studies on effects of airborne fibres: Coherence and public health implications. Critical Reviews in Toxicology. 2014;44(8):643–95. We thank the medical department for general introduction to MMVF in thermal insulation applications. We are indebted to Edgar Leibold, Product Stewardship, for clarifying discussions on WHO fibre risk assessment. We thank Werner Wacker for performing the plasma process for binder removal, Rolf Henn for MMVF sourcing, logistics and binder quantification, Frank Mueller for coordinating the milling process and Christian Schatz for performing the aerosol fractionation. We thank the ICP/MS/OES team, specifically Petra Veitinger, Andreas Herzog, Walter Grasser for excellent support. This work was funded by BASF SE. SEM, gravimetry and ICPMS results for all MMVF tested, all conditions tested, are shown in the SI. Department Material Physics and Analytics, BASF SE, Ludwigshafen, Germany Wendel Wohlleben, Hubert Waindok, Kai Werle, Melanie Drum & Heiko Egenolf Department of Aerosol Technology, BASF SE, Ludwigshafen, Germany Björn Daumann Wendel Wohlleben Hubert Waindok Kai Werle Melanie Drum Heiko Egenolf BD and WW designed the study. BD coordinated and interpreted the aerosol part, WW coordinated and interpreted the dissolution part. KW replicated dissolution methods from literature and performed all dissolution experiments. MD performed SEM analysis, HW interpreted the SEM results. HE coordinated elemental analysis. 
WW, BD, HW wrote the manuscript with contributions by HE. All authors read and approved the final manuscript. Correspondence to Wendel Wohlleben. All authors are employees of BASF SE, a company that commercializes products competing with MMVF for thermal insulation purposes. BASF SE is not itself a producer of MMVF. Maintenance operations by BASF SE employees extensively handle both historic MMVF, for demolition or refurbishment, and modern MMVF in new installations. Online Supporting Information: Schematics of respirable fibre fractionation and of dissolution testing. Additional results on MMVF composition, on respirable fibre content and on dissolution kinetics at pH 4.5. Characteristic features in SEM scans before and after dissolution testing. Cross-correlation of three metrics of dissolution analysis. (PDF 967 kb) Annex: extensive SEM before and after dissolution testing. (PDF 8398 kb) Wohlleben, W., Waindok, H., Daumann, B. et al. Composition, Respirable Fraction and Dissolution Rate of 24 Stone Wool MMVF with their Binder. Part Fibre Toxicol 14, 29 (2017). https://doi.org/10.1186/s12989-017-0210-8 Keywords: Man-made vitreous fibres; Biopersistence; Respirable fraction
"|" redirects here. For the use of a similar-looking character in vertical Japanese writing, see Chōonpu. "‖" redirects here. For the use of a similar-looking character in African languages, see lateral clicks.
The vertical bar ( | ) is a computer character and glyph with various uses in mathematics, computing, and typography. It has many names, often related to particular meanings: Sheffer stroke (in logic), verti-bar, vbar, stick, vertical line, vertical slash, bar, pike, or pipe, and several variants on these names. It is occasionally considered an allograph of broken bar (see below).
Usage
Mathematics
The vertical bar is used as a mathematical symbol in numerous ways:
- absolute value: |x|, read "the absolute value of x"
- cardinality: |S|, read "the cardinality of the set S"
- conditional probability: P(X|Y), read "the probability of X given Y"
- determinant: |A|, read "the determinant of the matrix A". When the matrix entries are written out, the determinant is denoted by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses of the matrix.
- distance: P|ab, denoting the shortest distance from the point P to the line ab, so the line P|ab is perpendicular to the line ab
- divisibility: a|b, read "a divides b" or "a is a factor of b", though Unicode also provides special 'divides' and 'does not divide' symbols (U+2223 and U+2224: ∣, ∤)
- evaluation: f(x)|_{x=4}, read "f of x, evaluated at x equals 4"
- length: |s|, read "the length of the string s"
- norm: |v|, read "the norm of the (greater-than-one-dimensional) vector v" (note that absolute value is a one-dimensional norm), although a double vertical bar (see below) is more often used to avoid ambiguity
- order: |G|, read "the order of the group G"
- restriction: f|_A, denoting the restriction of the function f, with a domain that is a superset of A, to just A
- set-builder notation: {x | x < 2}, read "the set of x such that x is less than two". Often, a colon ':' is used instead of a vertical bar
- the Sheffer stroke in logic: a|b, read "a nand b"
- subtraction: f(x)|_b^a, read "f(x) from b to a", denoting f(a) − f(b). Used in the context of a definite integral with variable x.
A vertical bar can also be used to separate variables from fixed parameters in a function, for example f(x | μ, σ).
The double vertical bar, U+2016 ‖ DOUBLE VERTICAL LINE, is also employed in mathematics:
- parallelism: AB ∥ CD, read "the line AB is parallel to the line CD"
- norm: ‖x‖, read "the norm of the vector x". People sometimes use two single bars in analogy to the absolute value, which is a one-dimensional norm.
- propositional truncation (a type former that truncates a type down to a mere proposition in homotopy type theory): for any term a of type A we have |a| : ‖A‖, where |a| reads "the image of a : A in ‖A‖" and ‖A‖ reads "the propositional truncation of A" [1][2]
Physics
The vertical bar is used in bra–ket notation in quantum physics.
Examples: | ψ ⟩ {\displaystyle |\psi \rangle } : the quantum physical state ψ {\displaystyle \psi } ⟨ ψ | {\displaystyle \langle \psi |} : the dual state corresponding to the state above ⟨ ψ | ρ ⟩ {\displaystyle \langle \psi |\rho \rangle } : the inner product of states ψ {\displaystyle \psi } and ρ {\displaystyle \rho } supergroups in physics are denoted G(N|M), which reads "G, M vertical bar N"; here G denotes any supergroup, M denotes the bosonic dimensions, and N denotes the Grassmann dimensions[3] ComputingEdit PipeEdit Main article: Pipeline (Unix) A pipe is an inter-process communication mechanism originating in Unix, which directs the output (standard out and, optionally, standard error) of one process to the input (standard in) to another. In this way, a series of commands can be "piped" together, giving users the ability to quickly perform complex multi-stage processing from the command line or as part of a Unix shell script ("bash file"). In most Unix shells (command interpreters), this is represented by the vertical bar character. For example: grep -i 'blair' filename.log | more where the output from the "grep" process is piped to the "more" process. The same "pipe" feature is also found in later versions of DOS and Microsoft Windows. This usage has led to the character itself being called "pipe". DisjunctionEdit In many programming languages, the vertical bar is used to designate the logic operation or, either bitwise or or logical or. Specifically, in C and other languages following C syntax conventions, such as C++, Perl, Java and C#, a | b denotes a bitwise or; whereas a double vertical bar a || b denotes a (short-circuited) logical or. Since the character was originally not available in all code pages and keyboard layouts, ANSI C can transcribe it in form of the trigraph ??!, which, outside string literals, is equivalent to the | character. In regular expression syntax, the vertical bar again indicates logical or (alternation). 
For example: the Unix command grep -E 'fu|bar' matches lines containing 'fu' or 'bar'. ConcatenationEdit The double vertical bar operator "||" denotes string concatenation in PL/I, standard ANSI SQL, and theoretical computer science (particularly cryptography). DelimiterEdit Although not as common as commas or tabs, the vertical bar can be used as a delimiter in a flat file. Examples of a pipe-delimited standard data format are LEDES 1998B and HL7. It is frequently used because vertical bars are typically uncommon in the data itself. Similarly, the vertical bar may see use as a delimiter for regular expression operations (e.g. in sed). This is useful when the regular expression contains instances of the more common forward slash (/) delimiter; using a vertical bar eliminates the need to escape all instances of the forward slash. However, this makes the bar unusable as the regular expression "alternative" operator. Backus–Naur formEdit In Backus–Naur form, an expression consists of sequences of symbols and/or sequences separated by '|', indicating a choice, the whole being a possible substitution for the symbol on the left. <personal-name> ::= <name> | <initial> Concurrency operatorEdit In calculi of communicating processes (like pi-calculus), the vertical bar is used to indicate that processes execute in parallel. APLEdit The pipe in APL is the modulo or residue function between two operands and the absolute value function next to one operand. List comprehensionsEdit Main article: List comprehensions The vertical bar is used for list comprehensions in some functional languages, e.g. Haskell and Erlang. Compare set-builder notation. Phonetics and orthographyEdit In the Khoisan languages and the International Phonetic Alphabet, the vertical bar is used to write the dental click (ǀ). A double vertical bar is used to write the alveolar lateral click (ǁ). 
Since these are technically letters, they have their own Unicode code points in the Latin Extended-B range: U+01C0 for the single bar and U+01C1 for the double bar. Some Northwest and Northeast Caucasian languages written in the Cyrillic script have a vertical bar called palochka (Russian: палочка, "little stick"), indicating the preceding consonant is an ejective. Longer single and double vertical bars are used to mark prosodic boundaries in the IPA. LiteratureEdit PunctuationEdit In medieval European manuscripts, a single vertical bar was a common variant of the virgula ⟨/⟩ used as a period, scratch comma,[4] and caesura mark.[5] In Sanskrit and other Indian languages, text blocks were once written in stanzas. Two bars || represent the equivalent of a pilcrow. PoetryEdit A double vertical bar ⟨||⟩ or ⟨‖⟩ is the standard caesura mark in English literary criticism and analysis. It marks the strong break or caesura common to many forms of poetry, particularly Old English verse. NotationEdit In the Geneva Bible and early printings of the King James Version, a double vertical bar is used to mark margin notes that contain an alternative translation from the original text. These margin notes always begin with the conjunction "Or". In later printings of the King James Version, the double vertical bar is irregularly used to mark any comment in the margins. EncodingEdit The vertical bar is encoded in ASCII and Unicode at U+007C | VERTICAL LINE (124decimal). In URL a vertical bar can be encoded by %7C. Solid vertical bar vs broken barEdit The code point 124 (7C hexadecimal) is occupied by a broken bar in a dot matrix printer of the late 1980s, which apparently lacks a solid vertical bar. Due to this, broken bar is also used for vertical line approximation. See the full picture (3,000 × 2,500 pixels). Many early video terminals and dot-matrix printers rendered the vertical bar character as the allograph broken bar (¦). 
This may have been to distinguish the character from the lower-case 'L' and the upper-case 'I' on these limited-resolution devices. It may also have been designed so a vertical column drew a more attractive small-dash line, and to match the appearance of a horizontal line of dash characters (----).[citation needed] Some variants of EBCDIC included both versions of the character as different code points. The broad implementation of the extended ASCII ISO/IEC 8859 series in the 1990s made a distinction between the two forms. This was preserved in Unicode as a separate character, U+00A6 ¦ BROKEN BAR (166decimal) (the term "parted rule" is used sometimes in Unicode documentation). Many keyboards display the broken bar on a keycap even though the solid vertical bar character is produced. This includes older IBM PC keyboards, and many German QWERTZ keyboards. The UK keyboard layout is actually documented as producing the broken bar but produces the solid bar on most systems, including Microsoft Windows. The broken bar character can be typed (depending on the layout) as AltGr+` or AltGr+6 on Windows and Compose!^ on Linux. It can be inserted into HTML as &brvbar; Many fonts draw the characters the same (both are solid vertical bars, or both are broken vertical bars).[6] The broken bar has hardly any practical application and does not appear to have any clearly identified uses distinct from the vertical bar.[7] In non-computing use — for example in mathematics, physics and general typography — the broken bar is not an acceptable substitute for the vertical bar. 
In common character mapsEdit Vertical bar ('|') Broken bar ('¦') Unicode U+007C U+00A6 ASCII, CP437, CP667, CP720, CP737, CP790, CP819, CP852, CP855, CP860, CP861, CP862, CP865, CP866, CP867, CP869, CP872, CP895, CP932, CP991 124 (7Ch) none[8] CP775 167 (A7h) CP850, CP857, CP858 221 (DDh) CP864 219 (DBh) ISO/IEC 8859-1, -7, -8, -9, -13, CP1250, CP1251, CP1252, CP1253, CP1254, CP1255, CP1256, CP1257, CP1258 166 (A6h) ISO/IEC 8859-2, -3, -4, -5, -6, -10, -11, -14, -15, -16 none EBCDIC CCSID 37 79 (4Fh) 106 (6Ah) EBCDIC CCSID 500 187 (BBh) Shift-JIS Men-Ku-Ten 1-01-35 Additional related Unicode characters: Double vertical line ( ‖ ): U+2016 used in pairs to indicate norm Fullwidth vertical line (|): U+FF5C Parallel to ( ∥ ): U+2225 Latin letter dental click ( ǀ ): U+01C0 Latin letter lateral click ( ǁ ): U+01C1 Symbol 'divides' ( ∣ ): U+2223 Various Box-drawing characters such as the light vertical ( │ ) at U+2502 In text processingEdit In LaTeX, the vertical bar can be used as delimiter in mathematical mode. The sequence \| creates a double vertical line (a | b \| c is set as a | b ‖ c {\displaystyle a|b\|c} ). This has different spacing from \mid and \parallel, which are relational operators: a \mid b \parallel c is set as a ∣ b ∥ c {\displaystyle a\mid b\parallel c} . In LaTeX text mode, the vertical bar produces an em dash (—). The \textbar command can be used to produce a vertical bar. The vertical bar is also used as special character in other lightweight markup languages, notably MediaWiki's Wikitext. ^ Univalent Foundations Program (2013). Homotopy Type Theory: Univalent Foundations of Mathematics (GitHub version) (PDF). Institute for Advanced Study. p. 108. ^ Univalent Foundations Program (2013). Homotopy Type Theory: Univalent Foundations of Mathematics (print version). Institute for Advanced Study. p. 450. ^ Larus Thorlacius, Thordur Jonsson (eds.), M-Theory and Quantum Geometry, Springer, 2012, p. 263. 
^ "virgula, n.", Oxford English Dictionary, 1st ed., Oxford: Oxford University Press, 1917 . ^ "virgule, n.", Oxford English Dictionary, 1st ed., Oxford: Oxford University Press, 1917 . ^ Jim Price (2010-05-24). "ASCII Chart: IBM PC Extended ASCII Display Characters". Retrieved 2012-02-23. ^ Jukka "Yucca" Korpela (2006-09-20). "Detailed descriptions of the characters". Retrieved 2012-02-23. ^ Broken bar is no longer a part of ASCII, since the early 1990s Retrieved from "https://en.wikipedia.org/w/index.php?title=Vertical_bar&oldid=905821783"
CommonCrawl
\begin{document} \title{\bf Global solutions to planar magnetohydrodynamic equations with radiation and large initial data } \author{Xulong Qin$^{1,2}$ \thanks{\footnotesize{Corresponding author \newline\indent~~E-mail: ~qin\[email protected],~~ [email protected] \newline\indent~~2000 \it Mathematics Subject Classification.} ~35B45; 35L65; 35Q60; 76N10;76W05.\newline \indent{\it~ Key words:}~~Magnetohydrodynamics (MHD); Radiation; Free boundary value problem; }\qquad Zheng-an Yao$^{1}$\\ \it \small $^1$Department of Mathematics, Sun Yat-sen University,\\ \it\small Guangzhou 510275, People's Republic of China\\ \it \small $^2$The Institute of Mathematical Sciences, Chinese University of Hong Kong,\\ \it \small Shatin, Hong Kong} \date{} \maketitle

\begin{abstract} A global existence result is established for a free boundary problem of planar magnetohydrodynamic fluid flows with radiation and large initial data. In particular, it is a novelty that the case of constant transport coefficients is included. As a by-product, the free boundary is shown to expand outward in time at a rate that is at most algebraic. \end{abstract}

\section{Introduction}
In astrophysics, stars may be viewed as compressible fluid flows governed by the Navier-Stokes equations, which express the conservation of mass and the balance of momentum and energy. However, their dynamics are often influenced by magnetic fields and high-temperature radiation. When radiation is taken into account, a new balance law for the radiation intensity must be added to complete the hydrodynamic system. More precisely, the material flow requires a relativistic treatment, since photons are massless particles traveling at the speed of light; this raises additional difficulties in both mathematics and physics.
Fortunately, under some physically plausible hypotheses and asymptotic analysis, especially for the equilibrium diffusion model, the radiative transfer equation can be replaced by the usual hydrodynamic model equations with the pressure, the internal energy and the heat conductivity augmented by special radiative components; see for instance \cite{Mihalas}. Furthermore, when magnetic fields are considered, the motion of a conducting fluid may induce electric fields. Thus, the complex interaction of magnetic and electric fields significantly affects the hydrodynamic motion of the fluid, which is described by Maxwell's equations. Finally, the system of magnetohydrodynamics reads as follows in Eulerian coordinates:
\begin{subequations}\label{qian2}
\begin{align}
\partial_{t}\rho+\text{div}(\rho \mathbf{u})&=0, \label{qian21}\\[3mm]
\partial_{t}(\rho \mathbf{u})+\text{div}(\rho \mathbf{u}\otimes\mathbf{u})+\nabla p&=\text{div} \mathbb{S}+(\nabla\times\mathbf{H})\times\mathbf{H},\label{qian22}\\[3mm]
\partial_t\mathcal{E}+\text{div}(\mathcal{E'} \mathbf{u}+p\mathbf{u})+\text{div}\mathbf{Q}&= \text{div}(\mathbb{S}\textbf{u}+\nu\mathbf{H}\times(\nabla\times\mathbf{H}))\notag\\[2mm]
&+\text{div}((\mathbf{u}\times\mathbf{H})\times\mathbf{H}),\label{qian23}\\[3mm]
\mathbf{H}_t-\nabla\times(\mathbf{u}\times\mathbf{H})& =-\nabla\times(\nu\nabla\times\mathbf{H}),\quad \text{div}\mathbf{H}=0,\label{qian4}
\end{align}
\end{subequations}
where $\rho$ is the density of the fluid, $p$ is the pressure, $\mathbf{u}\in \mathbb{R}^3$ the velocity and $\mathbf{H}\in \mathbb{R}^3$ the magnetic field. $\mathcal{E}$ stands for the total energy, expressed by
\begin{equation*}
\mathcal{E}=\rho\left(e+\frac12|\mathbf{u}|^2\right)+\frac12|\mathbf{H}|^2,\quad \mathcal{E'}=\rho\left(e+\frac12|\mathbf{u}|^2\right),
\end{equation*}
where $e$ is the internal energy of the flow.
The viscous stress tensor is
\begin{equation*}
\mathbb{S}=\lambda'(\text{div}\mathbf{u})\mathbb{I}+\mu(\nabla\mathbf{u}+\nabla\mathbf{u}^{\top}),
\end{equation*}
where $\lambda'$ and $\mu$ are the viscosity coefficients of the flow with $\lambda'+2\mu>0$, $\mathbb{I}$ is the $3\times 3$ identity matrix, and $\nabla\mathbf{u}^{\top}$ is the transpose of the matrix $\nabla\mathbf{u}$. In particular, we emphasize that the electric field is induced by the velocity $\textbf{u}$ and the magnetic field $\mathbf{H}$,
\begin{equation*}
\mathbf{E}=\nu\nabla\times\mathbf{H}-\mathbf{u}\times\mathbf{H},
\end{equation*}
and the induction equation \eqref{qian4} is derived by neglecting the displacement current in Maxwell's equations. For a more detailed physical explanation and mathematical deduction, see the appendix of \cite{ChenWang02}. The pressure $p$ of the gas obeys the equation of state
\begin{equation}\label{p}
p=p_{G}+p_{R}=R\rho\theta+\frac{a}{3}\theta^4.
\end{equation}
Here $R$ is a constant depending on the material properties of the gas, $\theta$ is the temperature of the fluid, and the last term in the above equality denotes the radiative pressure, with $a>0$ being the Stefan--Boltzmann constant. Accordingly, the internal energy is
\begin{equation}\label{e}
e=e_{G}+e_{R}=C_v\theta+\frac{a}{\rho}\theta^4
\end{equation}
with $C_v$ being the heat capacity of the gas at constant volume, and similarly the heat flux $\mathbf{Q}$ is
\begin{equation*}
\mathbf{Q}=-\kappa\nabla\theta=-\left(\kappa_1+\frac{4ac\theta^3}{\hat{\kappa}\rho}\right)\nabla\theta,
\end{equation*}
where $\kappa_1$ is a positive constant and $c$ is the speed of light. The quantity $1/\hat{\kappa}\rho$ is the mean free path of a photon inside the medium, which is related to $\theta$.
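The radiative components of \eqref{p} and \eqref{e} are compatible in the sense of the standard thermodynamic relation $e_\rho=(p-\theta p_\theta)/\rho^2$; the following routine verification is not part of the original argument and is included only for the reader's convenience.

```latex
% Thermodynamic compatibility of the state law \eqref{p} and the
% internal energy \eqref{e}: check  e_\rho = (p - \theta p_\theta)/\rho^2 .
\begin{align*}
p_\theta &= R\rho + \tfrac{4}{3}a\theta^3,\\
p - \theta p_\theta
  &= R\rho\theta + \tfrac{a}{3}\theta^4 - R\rho\theta - \tfrac{4}{3}a\theta^4
   = -a\theta^4,\\
e_\rho &= \frac{p - \theta p_\theta}{\rho^2} = -\frac{a\theta^4}{\rho^2}.
\end{align*}
% Integrating e_\rho in \rho recovers the radiative part a\theta^4/\rho
% of \eqref{e}; the "constant" of integration in \rho is the ideal-gas
% part e_G = C_v\theta.
```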
Without loss of generality, we can assume that
\begin{equation}
\kappa_1(1+\theta^q)\leq \kappa= \kappa(\rho,\theta) \leq \kappa_2(1+\theta^q),\,\,q\geq 0,\label{heatconductivity}
\end{equation}
where $\kappa_1,\kappa_2$ are positive constants. In the present paper, we primarily study a free boundary problem for planar magnetohydrodynamic fluid flows. More precisely, the motion of the flow is assumed to be in the $x$-direction and uniform in the transverse directions $(y,z)$, i.e.,
\begin{equation*}
\begin{split}
&\rho=\rho(x,t),\qquad \theta=\theta(x,t),\\[3mm]
&\mathbf{u}=(u,\mathbf{w})(x,t),\qquad \mathbf{w}=(u_2,u_3),\\[3mm]
&\mathbf{H}=(b_1,\mathbf{b})(x,t),\qquad \mathbf{b}=(b_2, b_3),
\end{split}
\end{equation*}
and the corresponding dynamic equations \eqref{qian2} reduce to the following system in Eulerian coordinates:
\begin{subequations}\label{subequation1}
\begin{align}
&\rho_t+(\rho u)_x=0,\qquad x\in \Omega_t:=(a(t),b(t)), \,\,t>0,\label{sub1}\\[2mm]
&(\rho u)_t+(\rho u^2+p+\frac12 |\mathbf{b}|^2)_x =(\lambda u_x)_x,\label{sub2}\\[2mm]
&(\rho \mathbf{w})_t+(\rho u \mathbf{w}-\mathbf{b})_x=(\mu \mathbf{w}_x)_x,\label{sub3}\\[2mm]
&\mathbf{b}_t+(u\mathbf{b}-\mathbf{w})_x=(\nu \mathbf{b}_x)_x,\label{sub4}\\[2mm]
& (\rho e)_t+(\rho e u)_x-(\kappa\theta_x)_x=\lambda u_x^2+\mu |\mathbf{w}_x|^2+\nu |\mathbf{b}_x|^2-pu_x, \label{sub5}
\end{align}
\end{subequations}
where $\lambda=\lambda'+2\mu$ and $b_1=1$. To supplement the system \eqref{subequation1}, we impose the following initial conditions,
\begin{equation}
(\rho,u,\theta, \mathbf{w},\mathbf{b})(x,0)=(\rho_0,u_0,\theta_0, \mathbf{w}_0,\mathbf{b}_0)(x),
\end{equation}
and the free boundary conditions
\begin{equation}
(\lambda u_x-p,\mathbf{w},\mathbf{b},\theta_x)(f(t),t)=0, \qquad f(t)=a(t),\,\,b(t), \label{freefreeboundary1}
\end{equation}
where $f(t)$ denotes the free boundary defined by $f'(t)=u(f(t),t)$.
The aim of this paper is to establish the well-posedness of the system \eqref{subequation1}--\eqref{freefreeboundary1} for the general setting of the heat conductivity \eqref{heatconductivity}, especially including the case of constant transport coefficients. Let us first review some related works in this line. For instance, Zhang-Xie \cite{ZhangXie} studied the global existence of solutions to the equations \eqref{subequation1} for the Dirichlet boundary problem when $q>\frac{5}{2}$, which was extended to $q>(2+\sqrt{211})/9$ by Qin-Hu \cite{QinY}. In the forthcoming companion paper \cite{QINYAO2}, we also show that global solutions exist for large initial data even if $q\geq 0$. On the other hand, Chen-Wang \cite{ChenWang02,Wang} established the existence of global solutions of real gas for a free boundary or Dirichlet problem, respectively, where they assume that the pressure and internal energy satisfy the following condition with exponent $r\in [0,1]$ and $q\geq 2(r+1)$:
\begin{equation*}
0\leq \rho p\leq p_0(1+\theta^{r+1}), \qquad e_\theta\geq e_0(1+\theta^r).
\end{equation*}
We remark that the restriction on $q$ in \eqref{heatconductivity} is imposed only from a mathematical point of view. The main difficulties come from the interaction of the magnetic field and the fluid velocity, as well as the higher nonlinearity of the radiative term, which prevent solving the initial boundary value problem by means of known analytic techniques and tools. As a consequence, some essential new ideas are proposed to tackle this open problem. From the previous works, we know that one of the key points of the problem is to obtain upper and lower bounds of the density and the temperature, which implies that there is no concentration of mass or heat. By delicate analysis, we notice that the radiative term helps us obtain the upper and lower bounds of the density based on entropy-type energy estimates.
Due to the interaction of the dynamic motion of the fluid and the magnetic field, a priori estimates of the temperature are much more complex. In our context, new a priori estimates of $||\theta^8||_{L^1[0,1]}$ and $||\rho_x||_{L^2[0,1]}$, which are controlled by $||\mathbf{w}_{xx}||_{L^2([0,1]\times[0,t])}$ and $||\mathbf{b}_{xx}||_{L^2([0,1]\times[0,t])}$, are proposed (see Lemma \ref{lemma09} and Lemma \ref{lemma10}), and then a priori estimates of $||\mathbf{w}_{xx}||_{L^2([0,1]\times[0,t])}$ and $||\mathbf{b}_{xx}||_{L^2([0,1]\times[0,t])}$ are subsequently obtained. With these bounds, all required a priori estimates for $||\theta^8||_{L^1[0,1]}$, $||\rho_x||_{L^2[0,1]}$ and $||u_x||_{L^4([0,1]\times[0,t])}$ are achieved; refer to \eqref{density}--\eqref{velocity}. In the subsequent process, motivated by \cite{ChenWang02,jiangJDE94, kawohl}, we succeed in obtaining the upper bound of the temperature and of the first derivative of the velocity. To the best of our knowledge, there are only a few results for the case of perfect flows, namely when the radiative effect is neglected, or equivalently $a=0$ in \eqref{p}, and the transport coefficients are positive constants. Among them, Kawashima and Okada \cite{Kawashima} proved the existence of global smooth solutions to the one-dimensional motion with small initial data. In addition, Hoff and Tsyganov \cite{Hoff} considered uniqueness and continuous dependence on the initial data of weak solutions of the equations of compressible magnetohydrodynamics. Moreover, Fan-Jiang-Nakamura \cite{FanJiang} showed that the global weak solutions of plane magnetohydrodynamic compressible flows converge to a solution of the original equations with zero shear viscosity as the shear viscosity goes to zero when $q\geq 1$. However, the global existence of perfect flows with large initial data remains an open problem, on which our techniques may shed some light.
From our analysis, we know that the focus of the problem is on the lower bound of the density, since the lower bound of the density in our context strongly depends on the radiative term. However, a similar obstacle on $q$ also exists even if we neglect the magnetic field. For instance, Ducomet-Zlotnik \cite{Ducomet&Zlotnik} proved the existence and asymptotic behavior for a 1D radiative and reactive gas when $q\geq 2$, and Umehara-Tani \cite{umeharaTani,umeharaTaniAA} made further extensions in this direction for the 1D or spherically symmetric case when $3\leq q<9$, which was extended to $q\geq 0$ by the authors in \cite{QINYAO1}. In addition, Wang and Xie \cite{WangXie} showed the global existence of strong solutions for the Cauchy problem when $q>\frac52$, and the reference \cite{DucometZlotnikhigher} proved global-in-time bounds of solutions and established their global exponential decay in Lebesgue and Sobolev spaces when $q\geq 2$. Indeed, there are also extensive studies on MHD, and it is beyond the scope of this paper to give an exhaustive list of references; see for instance \cite{DFeireisl05,Ducomet06,HuWangCMP,HuWangJDE,Rohde,RohdeYong,Secchi,Wangproceeding03,wang1,Zhongjiang} and the references cited therein. We formulate our problem and state the main result in Section 2, and in Section 3 we give the essential a priori estimates for the existence of global solutions.

\section{Preliminaries and Main Result}
Before stating the main result, we first introduce Lagrangian coordinates in order to transform, for convenience, the free boundary problem \eqref{subequation1}--\eqref{freefreeboundary1} into an equivalent fixed boundary problem. Let
\begin{equation}\label{transformation}
y=\int_{a(t)}^x\rho(\xi,t)d\xi,\qquad t=t.
\end{equation}
Then $0\leq y\leq \int_{a(t)}^{b(t)}\rho(\xi,t)d\xi=1$, where the total mass of the fluid is normalized to one without loss of generality.
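To illustrate how \eqref{transformation} works, we sketch the standard chain-rule computation that turns the mass equation \eqref{sub1} into the Lagrangian form $v_t=u_y$ of \eqref{sub11}; this is a routine verification included only as an illustration.

```latex
% Change of variables y = \int_{a(t)}^x \rho(\xi,t)\,d\xi, using
% \eqref{sub1} and the free boundary relation a'(t) = u(a(t),t):
\begin{align*}
\frac{\partial y}{\partial x} &= \rho,\\
\frac{\partial y}{\partial t}
  &= -\rho(a(t),t)\,a'(t) + \int_{a(t)}^{x}\rho_t\,d\xi
   = -\rho(a(t),t)\,u(a(t),t) - \big[\rho u\big]_{a(t)}^{x}
   = -\rho u .
\end{align*}
% Hence \partial_x = \rho\,\partial_y and
% \partial_t|_x = \partial_t|_y - \rho u\,\partial_y, so that
\begin{equation*}
0=\rho_t+(\rho u)_x
 =\rho_t\big|_y-\rho u\,\rho_y+\rho(\rho u)_y
 =\rho_t\big|_y+\rho^2u_y ,
\end{equation*}
% which, with v = 1/\rho, is exactly v_t = u_y, i.e. \eqref{sub11};
% the remaining equations are transformed in the same way.
```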
The system \eqref{subequation1} canonically becomes
\begin{subequations}\label{subequation}
\begin{align}
v_t&=u_y,\label{sub11}\\[2mm]
u_t&=\left(-p-\frac12 |\mathbf{b}|^2+\frac{\lambda u_y}{v}\right)_y,\label{sub22}\\[2mm]
\mathbf{w}_t&=\left(\mathbf{b}+\frac{\mu \mathbf{w}_y}{v}\right)_y,\label{sub33}\\[2mm]
(v\mathbf{b})_t&=\left(\mathbf{w}+\frac{\nu \mathbf{b}_y}{v}\right)_y,\label{sub44}\\[2mm]
e_t&=\left(\frac{\kappa}{v}\theta_y\right)_y +\left(-p+\frac{\lambda}{v}u_y\right)u_y+\frac{\mu |\mathbf{w}_y|^2}{v}+\frac{\nu |\mathbf{b}_y|^2}{v}, \label{sub55}
\end{align}
\end{subequations}
where $v=1/\rho$ is the specific volume. The corresponding initial data and boundary conditions are as follows:
\begin{equation}
(v,u,\theta,\mathbf{w},\mathbf{b})(y,0)=(v_0(y), u_0(y),\theta_0(y),\mathbf{w}_0(y),\mathbf{b}_0(y)),
\end{equation}
and
\begin{equation}
\left(\lambda \frac{u_y}{v}-p,\mathbf{w},\mathbf{b},\theta_y\right)(d,t)=0,\qquad d=0,1. \label{boundary}
\end{equation}
With these preliminaries, we are now ready to state the main result for the system \eqref{subequation}--\eqref{boundary}.
\begin{theorem}\label{thm} Let $q\geq 0$ and $\alpha \in (0,1)$. Suppose that $\lambda, \mu,\nu$ are positive constants. In addition, assume that the initial data $v_0(y)$, $u_0(y)$, $\mathbf{w}_0(y)$, $\mathbf{b}_0(y)$, $\theta_0(y)$ satisfy
\begin{equation*}
C_0^{-1}\leq v_0(y),\quad \theta_0(y)\leq C_0,
\end{equation*}
for some positive constant $C_0$, and
\begin{equation*}
(v_0(y),u_0(y),\mathbf{w}_0(y),\mathbf{b}_0(y),\theta_0(y))\in C^{1+\alpha}(\Omega)\times C^{2+\alpha}(\Omega)^6.
\end{equation*}
Then there exists a unique solution $(v,u,\mathbf{w},\mathbf{b},\theta)$ of the initial boundary value problem \eqref{subequation}--\eqref{boundary} such that
\begin{equation*}
(v,v_y, v_t)\in C^{\alpha,\alpha/2}(Q_T)^3,
\end{equation*}
and
\begin{equation*}
(u,\mathbf{w},\mathbf{b},\theta)\in C^{2+\alpha,1+\alpha/2}(Q_T)^6.
\end{equation*}
Moreover, the expansion rate of the interface satisfies
\begin{equation*}
0<b(t)-a(t)\leq C(1+t),
\end{equation*}
where $C$ is a positive constant independent of the time $t$. \end{theorem}
\begin{remark} Obviously, the case of constant transport coefficients is included when $q=0$. In addition, the result is also valid for $\kappa=\kappa_1+\kappa_2\frac{\theta^q}{\rho}$ instead of \eqref{heatconductivity}. \end{remark}
The existence of global-in-time solutions follows from the standard method once global a priori estimates of $(\rho, u, \theta, \mathbf{w},\mathbf{b})$ are available. Therefore, our main task is to establish these global a priori estimates.

\section{Some a priori estimates}
In order to establish the existence of global solutions, we need to deduce some a priori estimates for $(\rho, u, \theta, \mathbf{w},\mathbf{b})$, which are essential for extending the local solution obtained by a fixed point argument. In the sequel, $C$ (resp.\ $C(T)$) denotes a generic positive constant depending on the initial data (resp.\ also on the time $T>0$), which may differ from line to line.

\subsection{Upper and lower bounds of the density}
As usual, the basic energy estimate is as follows.
\begin{lemma}\label{Lemma1} We have
\begin{equation}
\int_0^1\left(e+\frac12(u^2+|\mathbf{w}|^2+v|\mathbf{b}|^2)\right)dy\leq C. \label{lemma1}
\end{equation}
\end{lemma}
\begin{proof} From \eqref{sub11}--\eqref{sub55} and the boundary conditions \eqref{boundary}, we get
\begin{equation*}
\frac{d}{dt}\int_0^1\left(e+\frac12(u^2+|\mathbf{w}|^2+v|\mathbf{b}|^2)\right)dy=0,
\end{equation*}
which implies \eqref{lemma1}. \end{proof}
We then immediately arrive at a uniform upper bound for the density for all time.
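Behind Lemma \ref{Lemma1} is the usual energy cancellation: testing \eqref{sub22}, \eqref{sub33}, \eqref{sub44} with $u$, $\mathbf{w}$, $\mathbf{b}$ respectively and adding \eqref{sub55}, all interaction terms cancel after integration by parts with \eqref{boundary}. We record the key identities as an illustration; this is a routine computation.

```latex
% Testing the equations; all boundary terms vanish by \eqref{boundary},
% since \lambda u_y/v - p, \mathbf{w}, \mathbf{b} and \theta_y vanish at y=0,1:
\begin{align*}
\frac{d}{dt}\int_0^1\frac{u^2}{2}\,dy
 &=\int_0^1\Big(p+\tfrac12|\mathbf{b}|^2\Big)u_y\,dy
  -\int_0^1\frac{\lambda u_y^2}{v}\,dy,\\
\frac{d}{dt}\int_0^1\frac{|\mathbf{w}|^2}{2}\,dy
 &=-\int_0^1\mathbf{b}\cdot\mathbf{w}_y\,dy
  -\int_0^1\frac{\mu|\mathbf{w}_y|^2}{v}\,dy,\\
\frac{d}{dt}\int_0^1\frac{v|\mathbf{b}|^2}{2}\,dy
 &=-\frac12\int_0^1 u_y|\mathbf{b}|^2\,dy
  -\int_0^1\mathbf{w}\cdot\mathbf{b}_y\,dy
  -\int_0^1\frac{\nu|\mathbf{b}_y|^2}{v}\,dy,\\
\frac{d}{dt}\int_0^1 e\,dy
 &=\int_0^1\Big(-p\,u_y+\frac{\lambda u_y^2}{v}
   +\frac{\mu|\mathbf{w}_y|^2}{v}+\frac{\nu|\mathbf{b}_y|^2}{v}\Big)dy .
\end{align*}
% Adding the four identities: the pressure terms cancel, the dissipation
% terms cancel, the terms \tfrac12 u_y|\mathbf{b}|^2 cancel, and
% \int_0^1(\mathbf{b}\cdot\mathbf{w}_y+\mathbf{w}\cdot\mathbf{b}_y)\,dy
% = \int_0^1(\mathbf{w}\cdot\mathbf{b})_y\,dy = 0 because \mathbf{w} and
% \mathbf{b} vanish on the boundary. This gives the conservation law of
% Lemma \ref{Lemma1}.
```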
\begin{lemma}\label{Lemma4} Suppose that the hypotheses of Theorem \ref{thm} hold. Then
\begin{equation}
\rho(y,t)\leq C,\label{rho}
\end{equation}
and
\begin{equation}
\int_0^1\theta^4dy\leq C,\label{coro1}
\end{equation}
and
\begin{equation}
\int_0^1|\mathbf{b}|^2dy\leq C,\label{magnetic}
\end{equation}
for $(y,t)\in (0,1)\times (0,T)$. \end{lemma}
\begin{proof} By \eqref{sub11}, the equation \eqref{sub22} can be rewritten as
\begin{equation*}
u_t=\left(\lambda (\ln v)_t-p-\frac12|\mathbf{b}|^2\right)_y.
\end{equation*}
Integrating it over $[0,y]\times [0,t]$, we obtain
\begin{equation}
\begin{split}
\lambda \ln v=\lambda \ln v_0(y)+\int_0^t(p+\frac12|\mathbf{b}|^2)ds +\int_0^y(u-u_0)dx,\label{lemma2V}
\end{split}
\end{equation}
and notice that
\begin{equation*}
\left|\int_0^y(u-u_0)dx\right|\leq C\int_0^1u^2dy+C\int_0^1u_0^2dy\leq C,
\end{equation*}
which, since $p+\frac12|\mathbf{b}|^2\geq 0$, implies
\begin{equation*}
\ln v\geq -C,
\end{equation*}
or equivalently
\begin{equation*}
\rho(y,t)\leq C.
\end{equation*}
From \eqref{lemma1} and \eqref{rho}, we easily get
\begin{equation*}
\int_0^1\theta^4dy=\int_0^1\rho v\theta^4dy\leq C\int_0^1v\theta^4dy\leq C\int_0^1edy\leq C.
\end{equation*}
Similarly, one has
\begin{equation*}
\int_0^1|\mathbf{b}|^2dy=\int_0^1\rho v|\mathbf{b}|^2dy\leq C\int_0^1v|\mathbf{b}|^2dy\leq C.
\end{equation*}
This ends the proof. \end{proof}
Next, the expansion rate of the free boundary is obtained.
\begin{lemma}It holds that
\begin{equation}
0<\int_0^1vdy\leq C(1+t), \label{freelemma33}
\end{equation}
or equivalently, by the transformation \eqref{transformation}, one has
\begin{equation*}
0<b(t)-a(t)\leq C(1+t).
\end{equation*}
\end{lemma}
\begin{proof} Integrating \eqref{sub22} over $[0,y]$ and using the boundary conditions \eqref{boundary} yields
\begin{equation*}
\int_0^yu_tdx=\frac{\lambda u_y}{v}-(p+\frac12|\mathbf{b}|^2).
\end{equation*}
Multiplying by $v$ and integrating with respect to $y$ and $t$, we obtain
\begin{equation*}
\begin{split}
&\lambda\int_0^1v dy\\
&=\lambda\int_0^1v_0(y) dy+\int_0^t\int_0^1v\left(p+\frac12|\mathbf{b}|^2\right)dyds\\
&+\int_0^t\int_0^1v\left(\int_0^yu_tdx\right)dyds.
\end{split}
\end{equation*}
For the second term on the right-hand side, we deduce that
\begin{equation*}
\int_0^t\int_0^1v\left(p+\frac12|\mathbf{b}|^2\right)dyds\leq C\int_0^t\int_0^1\left(e+v|\mathbf{b}|^2\right)dyds\leq C(1+t).
\end{equation*}
On the other hand, we get from \eqref{sub11} by integration
\begin{equation*}
v(y,t)=v_0(y)+\int_0^tu_y(y,s)ds.
\end{equation*}
Thus, the third term on the right-hand side becomes
\begin{equation*}
\begin{split}
&\int_0^t\int_0^1v\left(\int_0^yu_tdx\right)dyds\\
&=\int_0^1\left(v_0(y)+\int_0^tu_y(y,s)ds\right)\left(\int_0^yudx\right)dy\\
&+\int_0^t\int_0^1u^2dyds-\int_0^1v_0(y)\left(\int_0^yu_0(x)dx\right)dy\\
&=\int_0^1v_0(y)\left(\int_0^y(u-u_0(x))dx\right)dy+\int_0^t\int_0^1u^2dyds\\
&-\int_0^1u(y,t)\left(\int_0^tu(y,s)ds\right)dy \\
&\leq C(1+t).
\end{split}
\end{equation*}
Thus, we utilize \eqref{lemma1} to finish the proof. \end{proof}
\begin{lemma} One has
\begin{equation}
U(t)+\int_0^tV(\tau)d\tau\leq C(T), \label{lemma3}
\end{equation}
where
\begin{equation}
U(t)=\int_0^1\left(C_v(\theta-1-\log \theta)+R(v-1-\log v)\right)dy,\notag
\end{equation}
and
\begin{equation}
V(t)=\int_0^1\left(\frac{\lambda u_y^2}{v\theta}+\frac{\mu |\mathbf{w}_y|^2}{v\theta}+\frac{\nu |\mathbf{b}_y|^2}{v\theta}+\frac{\kappa\theta_y^2}{v\theta^2}\right)dy,\notag
\end{equation}
for $0\leq t\leq T$.
\end{lemma} \begin{proof} According to the expressions for $p$ and $e$, we compute \begin{equation*} e_{\theta}\theta_t+\theta p_{\theta}u_y=\frac{\lambda u_y^2}{v}+\frac{\mu |\mathbf{w}_y|^2}{v}+\frac{\nu |\mathbf{b}_y|^2}{v}+\left(\frac{\kappa}{v}\theta_y\right)_y, \end{equation*} which implies that \begin{equation*} \begin{split} &\frac{d}{dt}\int_0^1\left(C_v \log \theta+R \log v+\frac43av\theta^3\right)dy\\ &=\int_0^1\left(\frac{\lambda u_y^2}{v\theta}+\frac{\mu |\mathbf{w}_y|^2}{v\theta}+\frac{\nu |\mathbf{b}_y|^2}{v\theta}+ \frac{\kappa\theta_y^2}{v\theta^2}\right)dy. \end{split} \end{equation*} Integrating this over $(0,t)$ yields \begin{equation*} U(t)+\int_0^tV(\tau)d\tau\leq C\left(1+\int_0^1v\theta^3dy\right). \end{equation*} On the other hand, one has by \eqref{freelemma33} \begin{equation*} \begin{split} \int_0^1v\theta^3dy&\leq \left(\int_0^1v\theta^4dy \right)^{\frac34}\left(\int_0^1vdy\right)^{\frac14}\\[2mm] &\leq C(T), \end{split} \end{equation*} which ends the proof in conjunction with \eqref{lemma1}. \end{proof} Furthermore, we can deduce some a priori estimates for $\theta$ and $\mathbf{b}$ on $[0,T]$. \begin{lemma}\label{Lemma6} We have \begin{equation} \int_0^t\max_{[0,1]}\theta^{q+4}(y,s)ds\leq C(T),\qquad \quad q\geq 0, \label{lemma61} \end{equation} and \begin{equation} \int_0^t||\mathbf{b}||_{L^{\infty}([0,1])}^2ds\leq C(T), \label{lemma62} \end{equation} for $0<t<T$. \end{lemma} \begin{proof} By the mean value theorem, for each $t\in [0,T]$ there exists $y(t)\in [0,1]$ such that \begin{equation*} \theta(y(t),t)=\int_0^1\theta dy. \end{equation*} 
Then, by H\"{o}lder's inequality, \begin{equation*} \begin{split} \theta(y,t)^{\frac{q+4}{2}} &=\left(\int_0^1\theta dy\right)^{\frac{q+4}{2}} +\frac{q+4}{2}\int_{y(t)}^y\theta(\xi,t)^{\frac{q+4}{2}-1}\theta_{\xi}(\xi,t)d\xi\\[2mm] &\leq C\left(1+\int_0^1\frac{\kappa^{\frac12}|\theta_y|}{v^{\frac12}\theta} \cdot\frac{v^{\frac12}\theta^{\frac{q+4}{2}}}{\kappa^{\frac12}}dy\right)\\ &\leq C\left[1+\left(\int_0^1\frac{v\theta^{q+4}}{\kappa}dy\right)^{\frac12}V(t)^{\frac12}\right]\\ &\leq C\left[1+\left(\int_0^1\frac{v\theta^{q+4}}{1+\theta^q}dy\right)^{\frac12}V(t)^{\frac12}\right]\\ &\leq C\left[1+\left(\int_0^1v\theta^4dy\right)^{\frac12}V(t)^{\frac12}\right]\\ &\leq C\left(1+V(t)^{\frac12}\right). \end{split} \end{equation*} Squaring both sides of the above inequality and integrating over $[0,t]$, we obtain \eqref{lemma61} with the help of \eqref{lemma1} and \eqref{lemma3}. Estimate \eqref{lemma62} follows directly from \eqref{lemma1} and \eqref{lemma61}. Indeed, recalling the boundary conditions \eqref{boundary}, we find that \begin{equation*} \begin{split} |\mathbf{b}|^2&=2\int_0^y\mathbf{b}\cdot\mathbf{b}_{\xi}(\xi,t)d\xi\\ &\leq C\int_0^1\frac{\nu|\mathbf{b}_y|^2}{v\theta}dy+C\int_0^1v\theta|\mathbf{b}|^2dy, \end{split} \end{equation*} and hence \begin{equation*} \begin{split} \int_0^t||\mathbf{b}||^2_{L^{\infty}([0,1])}ds &\leq C\int_0^t\int_0^1\frac{\nu|\mathbf{b}_y|^2}{v\theta}dyds+C\int_0^t\max_{[0,1]}\theta \left(\int_0^1v|\mathbf{b}|^2dy\right)ds\\ &\leq C(T). \end{split} \end{equation*} This finishes the proof. \end{proof} With these preliminary estimates, the lower bound of the density can be inferred. \begin{lemma}One has \begin{equation}\label{lowerbounddensity} \rho(y,t)\geq C(T). 
\end{equation} \end{lemma} \begin{proof} Recalling the equality \eqref{lemma2V}, we find that \begin{equation*} \begin{split} \lambda \ln v&=\lambda \ln v_0(y)+\int_0^t(p+\frac12|\mathbf{b}|^2)d\tau +\int_0^y(u-u_0)dx\\ &\leq C(T)+C\int_0^t\max_{[0,1]}\theta^4ds+C\int_0^t||\mathbf{b}||_{L^{\infty}([0,1])}^2ds+C\int_0^1u^2dy\\ &\leq C(T), \end{split} \end{equation*} where we have used Lemmas \ref{Lemma1} and \ref{Lemma6}. \end{proof} \subsection{Upper and lower bounds of temperature} First, we deduce a priori estimates for the derivatives of $(u,\mathbf{w},\mathbf{b})$. \begin{lemma}One has \begin{equation} \int_0^t\int_0^1\left(u_y^2+|\mathbf{w}_y|^2+|\mathbf{b}_y|^2\right)dyds\leq C(T).\label{lemma881} \end{equation} \end{lemma} \begin{proof} Equation \eqref{sub55} can be rewritten as \begin{equation} e_t+pu_y=\frac{\lambda u_y^2}{v}+\frac{\mu |\mathbf{w}_y|^2}{v}+\frac{\nu |\mathbf{b}_y|^2}{v}+\left(\frac{\kappa}{v}\theta_y\right)_y. \label{lemma81} \end{equation} Integrating \eqref{lemma81} over $[0,1]\times [0,t]$, we get from \eqref{coro1} and \eqref{lemma61} \begin{equation*} \begin{split} &\int_0^t\int_0^1\left(\frac{\lambda u_y^2}{v}+\frac{\mu |\mathbf{w}_y|^2}{v}+\frac{\nu |\mathbf{b}_y|^2}{v}\right)dyds\\ &=\int_0^1(e-e_0)dy+\int_0^t\int_0^1pu_ydyds\\ &\leq C+\frac12\int_0^t\int_0^1\frac{\lambda u_y^2}{v}dyds+C\int_0^t\int_0^1p^2dyds\\ &\leq C+\frac12\int_0^t\int_0^1\frac{\lambda u_y^2}{v}dyds+ C\int_0^t\max_{[0,1]}\theta^4\left(\int_0^1\theta^4dy\right)ds\\ &\leq C(T)+\frac12\int_0^t\int_0^1\frac{\lambda u_y^2}{v}dyds, \end{split} \end{equation*} which ends the proof according to \eqref{lemma61} and \eqref{lowerbounddensity}. 
\end{proof} \begin{lemma}\label{Lemma9}The following inequalities hold for $\mathbf{b}$: \begin{equation} \int_0^t\int_0^1|\mathbf{b}\cdot\mathbf{b}_y|^2dyds\leq C(T),\label{lemma92} \end{equation} and \begin{equation} \int_0^t\int_0^1|\mathbf{b}|^8dyds\leq C(T),\label{lemma93} \end{equation} for $t\in [0,T]$. \end{lemma} \begin{proof} By \eqref{magnetic}, one has \begin{equation*} \begin{split} &\int_0^t\int_0^1|\mathbf{b}|^8dyds\\ &\leq \int_0^t\max_{[0,1]}|\mathbf{b}|^6\left(\int_0^1|\mathbf{b}|^2dy\right)ds\\ &\leq C\int_0^t\int_0^1|\mathbf{b}|^6dyds+C\int_0^t\int_0^1|\mathbf{b}|^4|\mathbf{b}\cdot\mathbf{b}_y|dyds\\ &\leq \frac12\int_0^t\int_0^1|\mathbf{b}|^8dyds+C\int_0^t\int_0^1\frac{|\mathbf{b}\cdot\mathbf{b}_y|^2}{v}dyds+C(T), \end{split} \end{equation*} which implies \begin{equation} \int_0^t\int_0^1|\mathbf{b}|^8dyds\leq C\int_0^t\int_0^1\frac{|\mathbf{b}|^2|\mathbf{b}_y|^2}{v}dyds+C(T).\label{lemma91} \end{equation} Multiplying \eqref{sub44} by $4|\mathbf{b}|^2\mathbf{b}$, one has \begin{equation*} \begin{split} (v|\mathbf{b}|^4)_t=\left(\frac{\nu \mathbf{b}_y}{v}+\mathbf{w}\right)_y\cdot 4|\mathbf{b}|^2\mathbf{b}-3v_t|\mathbf{b}|^4, \end{split} \end{equation*} and it follows from \eqref{lemma62} and \eqref{lemma881} that \begin{equation*} \begin{split} &\int_0^1v|\mathbf{b}|^4dy+12\nu\int_0^t\int_0^1\frac{|\mathbf{b}|^2|\mathbf{b}_y|^2}{v}dyds\\ &=\int_0^1v|\mathbf{b}|^4(y,0)dy-12\int_0^t\int_0^1|\mathbf{b}|^2\mathbf{w}\cdot\mathbf{b}_ydyds -3\int_0^t\int_0^1u_y|\mathbf{b}|^4dyds\\ &\leq \varepsilon \int_0^t\int_0^1\frac{|\mathbf{b}|^2|\mathbf{b}_y|^2}{v}dyds +C(\varepsilon)\int_0^t||\mathbf{b}||_{L^{\infty}([0,1])}^2\left(\int_0^1|\mathbf{w}|^2dy\right)ds\\ &+\varepsilon\int_0^t\int_0^1|\mathbf{b}|^8dyds+C(\varepsilon)\int_0^t\int_0^1u_y^2dyds+C(T)\\ &\leq \varepsilon \int_0^t\int_0^1\frac{|\mathbf{b}|^2|\mathbf{b}_y|^2}{v}dyds+\varepsilon\int_0^t\int_0^1|\mathbf{b}|^8dyds+C(T). 
\end{split} \end{equation*} Substituting \eqref{lemma91} into the above inequality and taking $\varepsilon$ sufficiently small, we deduce \eqref{lemma92} and \eqref{lemma93}. \end{proof} In the following, we first show that the temperature is controlled by the velocity and the magnetic field. \begin{lemma}\label{lemma09} One has \begin{equation*} \begin{split} &\int_0^1\theta^8dy+\int_0^t\int_0^1\kappa \theta^3\theta_y^2dyds\\ &\leq C(T)+ C(T) \left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+C(T) \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}. \end{split} \end{equation*} \end{lemma} \begin{proof} Multiplying \eqref{sub55} by $\theta^4$ and integrating over $[0,1]\times [0,t]$ leads to \begin{equation*} \begin{split} &\int_0^1\theta^8dy+\int_0^t\int_0^1\kappa \theta^3\theta_y^2dyds\\ &\leq C\int_0^t\int_0^1(\theta^8|u_y|+\theta^4u_y^2)dyds+C\int_0^t\int_0^1\left(\frac{\mu \mathbf{w}_y^2}{v}+\frac{\nu \mathbf{b}_y^2}{v}\right)\cdot \theta^4dyds\\ &\leq C+C\int_0^t\int_0^1(\theta^{12}+|u_y|^3)dyds+C\int_0^t\left(||\mathbf{w}_y||_{L^{\infty}}^2+||\mathbf{b}_y||_{L^{\infty}}^2\right)\int_0^1\theta^4dyds\\ &\leq C+C\int_0^t\int_0^1(\theta^{12}+|u_y|^3)dyds\\ &+C\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+C \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}. \end{split} \end{equation*} On the other hand, we get from \eqref{sub22} \begin{equation} \left\{ \begin{array}{llllllllll} h_t=\frac{\lambda}{v}h_{yy}-(p+\frac12|\mathbf{b}|^2),\\[2mm] h |_{t=0}=h_0(y),\\[2mm] h|_{y=0,1}=0,\label{LP} \end{array} \right. \end{equation} where $h=\int_0^yu d\xi$. 
The standard $L^p$ estimates for linear parabolic problems yield \begin{equation*} \begin{split} \int_0^t\int_0^1|u_y|^3dyds &=\int_0^t\int_0^1|h_{yy}|^3dyds\\ &\leq C \left(1+\int_0^t\int_0^1 (p^3+|\mathbf{b}|^6)dyds\right)\\ &\leq C(T) \left(1+\int_0^t\int_0^1 \theta^{12} dyds\right), \end{split} \end{equation*} and then \begin{equation*} \begin{split} &\int_0^1\theta^8dy+\int_0^t\int_0^1\kappa\theta^3\theta_y^2dyds\\ &\leq C+C\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+C \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}\\ &+C\int_0^t\int_0^1\theta^{12}dyds+C\int_0^t\int_0^1|u_y|^3dyds\\ &\leq C(T)\left(1+\int_0^t\int_0^1\theta^{12}dyds\right)+C\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+C \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}\\ &\leq C(T)+C\int_0^t\max_{[0,1]}\theta^4\int_0^1\theta^8dyds\\ &+C\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+C \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}. \end{split} \end{equation*} The proof is finished by Gr\"{o}nwall's inequality and \eqref{lemma61}. \end{proof} The following lemma establishes a relationship between the density, the velocity, and the magnetic field. \begin{lemma}\label{lemma10}For any $\varepsilon>0$, one has \begin{equation*} \begin{split} &\int_0^1v_y^2dy+\int_0^t\int_0^1\theta v_y^2dyds\\ &\leq C(T) +\varepsilon\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+\varepsilon \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}. \end{split} \end{equation*} \end{lemma} \begin{proof} We can rewrite equation \eqref{sub22} as \begin{equation} \left(u-\frac{\lambda v_y}{v}\right)_t=-\left(p+\frac12|\mathbf{b}|^2\right)_y. 
\label{lemma101} \end{equation} Multiplying \eqref{lemma101} by $\left(u-\frac{\lambda}{v}v_y\right)$ and integrating over $(0,1)\times (0,t)$, we find that \begin{equation} \begin{split} &\frac12\int_0^1\left(u-\frac{\lambda}{v}v_y\right)^2dy+\int_0^t\int_0^1\frac{\lambda R \theta}{v^3}v_y^2dyds\\ =&\frac12\int_0^1\left(u-\frac{\lambda}{v}v_y\right)^2(y,0)dy+\int_0^t\int_0^1\frac{R\theta uv_y}{v^2}dyds\\ &-\int_0^t\int_0^1\left[\left(\frac{R}{v} +\frac{4a}{3}\theta^3\right)\theta_y+\mathbf{b}\cdot\mathbf{b}_y \right]\left(u-\frac{\lambda}{v}v_y\right)dyds.\label{lemma102} \end{split} \end{equation} To complete the proof, it remains to estimate the terms on the right-hand side of \eqref{lemma102}. First, \begin{equation} \begin{split} &\int_0^t\int_0^1\frac{R\theta uv_y}{v^2}dyds\\ &\leq \varepsilon \int_0^t\int_0^1\theta v_y^2dyds+C_{\varepsilon}\int_0^t\max_{[0,1]}\theta\cdot\left(\int_0^1u^2dy\right)ds\\ &\leq \varepsilon \int_0^t\int_0^1\theta v_y^2dyds+C_{\varepsilon}\int_0^t\max_{[0,1]}\theta(y,s)ds\\ &\leq \varepsilon \int_0^t\int_0^1\theta v_y^2dyds+C(T), \label{lemma103} \end{split} \end{equation} for any $\varepsilon >0$. Further, \begin{equation} \begin{split} &\left|\int_0^t\int_0^1\left[\left(\frac{R}{v}+\frac{4a}{3}\theta^3\right)\theta_y\right] \left(u-\frac{\lambda}{v}v_y\right)dyds\right|\\[2mm] &\leq C \int_0^t\int_0^1\frac{\kappa\theta_y^2}{\theta^2}dyds+\varepsilon\int_0^t\int_0^1\kappa\theta^3\theta_y^2dyds\\ &+\int_0^t\int_0^1\frac{\theta^2+\theta^3}{\kappa}\left(u-\frac{\lambda}{v}v_y\right)^2dyds\\[2mm] &\leq C+\varepsilon\int_0^t\int_0^1\kappa\theta^3\theta_y^2dyds +\int_0^t\max_{[0,1]}\frac{\theta^2+\theta^3}{\kappa}\cdot\int_0^1\left(u-\frac{\lambda}{v}v_y\right)^2dyds. 
\end{split} \end{equation} In addition, by \eqref{lemma92}, we also obtain \begin{equation} \begin{split} &\left|\int_0^t\int_0^1\mathbf{b}\cdot\mathbf{b}_y\cdot\left(u-\frac{\lambda}{v}v_y\right)dyds\right|\\ &\leq C\int_0^t\int_0^1|\mathbf{b}\cdot\mathbf{b}_y|^2dyds+C\int_0^t\int_0^1\left(u-\frac{\lambda}{v}v_y\right)^2dyds\\ &\leq C(T)+C\int_0^t\int_0^1\left(u-\frac{\lambda}{v}v_y\right)^2dyds.\label{lemma104} \end{split} \end{equation} Finally, substituting \eqref{lemma103}--\eqref{lemma104} into \eqref{lemma102} leads to \begin{equation*} \begin{split} &\int_0^1\left(u-\frac{\lambda}{v}v_y\right)^2dy+\int_0^t\int_0^1\theta v_y^2dyds\\ &\leq C(T)+\varepsilon\int_0^t\int_0^1\kappa\theta^3\theta_y^2dyds\\ &+C\int_0^t\left(\max_{[0,1]}\frac{\theta^2+\theta^3}{\kappa}+1\right) \cdot\int_0^1\left(u-\frac{\lambda}{v}v_y\right)^2dyds, \end{split} \end{equation*} which ends the proof by Gr\"{o}nwall's inequality and Lemma \ref{lemma09}. \end{proof} Combining Lemma \ref{lemma09} and Lemma \ref{lemma10}, we can give a priori estimates of the velocity and the magnetic field, which play an important role in covering the case of constant heat conductivity. \begin{lemma}It holds that \begin{equation}\label{freelemma131} \begin{split} &||\mathbf{b}||_{L^{\infty}((0,1)\times(0,T))}+\int_0^1(|\mathbf{b}_y|^2+|\mathbf{w}_y|^2)dy\\ & +\int_0^t\int_0^1(|\mathbf{b}_y|^4+|\mathbf{w}_y|^4+|\mathbf{b}_{yy}|^2+|\mathbf{w}_{yy}|^2)dyds \leq C(T). 
\end{split} \end{equation} \end{lemma} \begin{proof} Multiplying \eqref{sub33} by $\mathbf{w}_{yy}$ and integrating over $[0,1]\times [0,t]$, one has \begin{equation*} \begin{split} &\frac12\int_0^1|\mathbf{w}_y|^2dy=\frac12\int_0^1|\mathbf{w}_y(y,0)|^2dy -\int_0^t\int_0^1\left(\mathbf{b}+\frac{\mu \textbf{w}_y}{v}\right)_y\cdot\mathbf{w}_{yy}dyds\\ &\leq C-C(T)\int_0^t\int_0^1|\mathbf{w}_{yy}|^2dyds+ C\int_0^t\int_0^1(|\mathbf{b}_y|+|v_y||\mathbf{w}_y|)|\mathbf{w}_{yy}|dyds\\ &\leq C-\frac{3C(T)}{4}\int_0^t\int_0^1|\mathbf{w}_{yy}|^2dyds +C(T)\int_0^t\int_0^1|\mathbf{b}_y|^2dyds\\ &+C(T)\int_0^t\max_{[0,1]}|\mathbf{w}_y|^2\left(\int_0^1v_y^2dy\right)ds\\ &\leq C-\frac{3C(T)}{4}\int_0^t\int_0^1|\mathbf{w}_{yy}|^2dyds\\ &+\left(\varepsilon\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+\varepsilon \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}\right)\times\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}\\ &\leq C-\frac{C(T)}{2}\int_0^t\int_0^1|\mathbf{w}_{yy}|^2dyds+\varepsilon \int_0^t\int_0^1\mathbf{b}_{yy}^2dyds. \end{split} \end{equation*} Thus, \begin{equation}\label{wb} \int_0^1|\mathbf{w}_y|^2dy+\int_0^t\int_0^1|\mathbf{w}_{yy}|^2dyds\leq C+\varepsilon \int_0^t\int_0^1\mathbf{b}_{yy}^2dyds. \end{equation} On the other hand, we can also rewrite \eqref{sub44} as \begin{equation*} \mathbf{b}_t=-\frac{u_y}{v}\mathbf{b}+\frac{1}{v}\left(\mathbf{w}+\frac{\nu \mathbf{b}_y}{v}\right)_y. 
\end{equation*} Following the same procedure as above, we get from \eqref{magnetic} \begin{equation*} \begin{split} &\frac12\int_0^1|\mathbf{b}_y|^2dy\\ &\leq C-C\int_0^t\int_0^1|\mathbf{b}_{yy}|^2dyds\\ &+C\int_0^t\int_0^1(|u_y||\mathbf{b}|+|\mathbf{w}_y|+|v_y||\mathbf{b}_y|)|\mathbf{b}_{yy}|dyds\\ &\leq C-\frac{3C}{4}\int_0^t\int_0^1|\mathbf{b}_{yy}|^2dyds +C\sup_{(y,t)\in(0,1)\times(0,t)}|\mathbf{b}|^2\int_0^t\int_0^1u_y^2dyds\\ &+C\int_0^t\max_{[0,1]}|\mathbf{b}_y|^2\bigg(\int_0^1v_y^2dy\bigg)ds\\ &\leq C-\frac{3C}{4}\int_0^t\int_0^1|\mathbf{b}_{yy}|^2dyds+C\int_0^1|\mathbf{b}|^2dy+\frac14\int_0^1|\mathbf{b}_y|^2dy\\ &+\left(\varepsilon\left(\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds\right)^{\frac12}+\varepsilon \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}\right)\times \left(\int_0^t\int_0^1\mathbf{b}_{yy}^2dyds\right)^{\frac12}\\ &\leq C(T)-\frac{C}{2}\int_0^t\int_0^1|\mathbf{b}_{yy}|^2dyds+\frac14\int_0^1|\mathbf{b}_y|^2dy+\varepsilon\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds, \end{split} \end{equation*} which implies \begin{equation*} \int_0^1|\mathbf{b}_y|^2dy+\int_0^t\int_0^1|\mathbf{b}_{yy}|^2dyds\leq C(T)+\varepsilon\int_0^t\int_0^1\mathbf{w}_{yy}^2dyds. \end{equation*} Substituting this into \eqref{wb} and taking $\varepsilon$ sufficiently small, we obtain \begin{equation*} \int_0^1(|\mathbf{w}_y|^2+|\mathbf{b}_y|^2)dy+\int_0^t\int_0^1(|\mathbf{w}_{yy}|^2+|\mathbf{b}_{yy}|^2)dyds\leq C(T). \end{equation*} Similarly, we get \begin{equation*} \begin{split} \int_0^t\int_0^1|\mathbf{w}_y|^4dyds&\leq \int_0^t\max_{[0,1]}|\mathbf{w}_y|^2\left(\int_0^1|\mathbf{w}_y|^2dy\right)ds\\ &\leq C(T)\int_0^t\int_0^1(|\mathbf{w}_y|^2+|\mathbf{w}_{yy}|^2)dyds\\ &\leq C(T), \end{split} \end{equation*} and \begin{equation*} \begin{split} \int_0^t\int_0^1|\mathbf{b}_y|^4dyds&\leq \int_0^t\max_{[0,1]}|\mathbf{b}_y|^2\left(\int_0^1|\mathbf{b}_y|^2dy\right)ds\\ &\leq C(T)\int_0^t\int_0^1(|\mathbf{b}_y|^2+|\mathbf{b}_{yy}|^2)dyds\\ &\leq C(T). \end{split} \end{equation*} This completes the proof of the lemma. 
\end{proof} With \eqref{freelemma131} in hand, we can revisit Lemma \ref{lemma09} and Lemma \ref{lemma10} and deduce the following. \begin{corollary}We have \begin{equation} \int_0^1v_y^2dy+\int_0^t\int_0^1\theta v_y^2dyds\leq C(T),\label{density} \end{equation} and \begin{equation} \int_0^t\max_{[0,1]}\theta^{q+13}ds+\int_0^1\theta^8dy+\int_0^t\int_0^1\kappa \theta^3\theta_y^2dyds\leq C(T).\label{temperature} \end{equation} \end{corollary} Similarly, using the $L^p$ estimates for linear parabolic problems again, we get from \eqref{lemma93} and \eqref{LP} \begin{equation}\label{velocity} \begin{split} \int_0^t\int_0^1|u_y|^4dyds &=\int_0^t\int_0^1|h_{yy}|^4dyds\\ &\leq C \left(1+\int_0^t\int_0^1 (p^4+|\mathbf{b}|^8)dyds\right)\\ &\leq C \left(1+\int_0^t\int_0^1 \theta^{16} dyds\right)\\ &\leq C \left(1+\int_0^t\max_{[0,1]}\theta^8\int_0^1 \theta^{8}dyds\right)\\ &\leq C(T). \end{split} \end{equation} In order to obtain the upper bound of the temperature, we also need to establish higher-order a priori estimates of $(v,u,\theta,\mathbf{w},\mathbf{b})$. Thus, we introduce some auxiliary variables, motivated by \cite{kawohl}: \begin{align} &X:=\int_0^t\int_0^1(1+\theta^{q})\theta_t^2dyds,\label{X}\\ &Y:=\max_{0\leq t\leq T}\int_0^1(1+\theta^{2q})\theta_y^2dy,\label{Y}\\ &Z:=\max_{0\leq t\leq T}\int_0^1u_{yy}^2dy.\label{Z} \end{align} By interpolation and the embedding theorem, it follows from \eqref{Z} that \begin{align}\label{boundvelocity} &|u_y|^{(0)}\leq C(1+Z^{\frac38}), \end{align} where $|\cdot|^{(0)}=\sup|\cdot|$. \begin{lemma} Under the assumptions of Theorem \ref{thm}, we have \begin{equation} \max_{[0,1]\times[0,t]}\theta(y,s)\leq C+CY^{\frac{1}{2(q+5)}},\label{temperature1} \end{equation} for $0<t<T$. 
\end{lemma} \begin{proof} By the embedding theorem, we deduce that \begin{equation*} \max_{[0,1]}\theta^{q+5}(y,t) \leq C\int_0^1\theta^{q+5}dy+C\int_0^1(1+\theta)^{q+4}|\theta_y|dy, \end{equation*} which implies, by \eqref{temperature} and H\"{o}lder's inequality, \begin{equation*} \begin{split} &\max_{[0,1]}\theta^{q+5}(y,t)\\ &\leq C\max_{[0,1]}\theta^{q+1}\int_0^1\theta^4 dy+C\int_0^1(1+\theta)^{q}|\theta_y|(1+\theta)^4 dy\\ &\leq \frac12 \max_{[0,1]}\theta^{q+5} +C\left(\int_0^1(1+\theta)^{2q}\theta_y^2dy\right)^{\frac12} \left(\int_0^1(1+\theta)^8dy\right)^{\frac12}+C\\ &\leq \frac12\max_{[0,1]} \theta^{q+5}+C(T) \left(\int_0^1(1+\theta)^{2q}\theta_y^2dy\right)^{\frac12}+C(T)\\ &\leq\frac12\max_{[0,1]}\theta^{q+5} +CY^{\frac12}+C(T). \end{split} \end{equation*} This finishes the proof. \end{proof} \begin{lemma}\label{XY}Furthermore, we have \begin{equation} X+Y\leq C(T)\left(1+Z^{\frac{q+5}{q+10}}\right). \label{Lemma14} \end{equation} \end{lemma} \begin{proof} As in \cite{kawohl,umeharaTani}, we introduce the function \begin{equation*} K(v,\theta):=\int_0^{\theta}\frac{\kappa(v,\xi)}{v}d\xi. 
\end{equation*} A simple calculation leads to \begin{align} &K_t=\frac{\kappa}{v}\theta_t+K_vu_y, \label{K1}\\ &K_{yt}=\left(\frac{\kappa}{v}\theta_y\right)_t+K_{vv}v_{y}u_{y}+K_{v}u_{yy}+\left(\frac{\kappa}{v}\right)_vv_y \theta_t,\label{K2}\\ &|K_v|, |K_{vv}|\leq C(1+\theta^{q+1}).\label{K3} \end{align} Multiplying \eqref{sub55} by $K_t$ and integrating over $(0,1)\times (0,t)$, we get \begin{equation*} \begin{split} &\int_0^t\int_0^1\left(e_{\theta}\theta_t+\theta p_{\theta}u_y-\frac{\lambda u_y^2}{v}-\frac{\mu |\mathbf{w}_y|^2}{v}-\frac{\nu |\mathbf{b}_y|^2}{v}\right)K_tdyds\\ &+\int_0^t\int_0^1\frac{\kappa}{v}\theta_y K_{yt}dyds=0, \end{split} \end{equation*} or equivalently \begin{equation} \begin{split} &\int_0^t\int_0^1\frac{\kappa e_{\theta}\theta_t^2}{v}dyds +\int_0^t\int_0^1\frac{\kappa}{v}\theta_y\left(\frac{\kappa}{v}\theta_y\right)_tdyds\\ &=I_1+I_2+I_3+I_4+I_5+I_6, \label{lemma141} \end{split} \end{equation} where $I_i$ $(i=1,\dots,6)$ are defined as follows: \begin{equation*} \begin{split} &I_1=-\int_0^t\int_0^1e_{\theta}\theta_tK_v u_ydyds,\\[2mm] &I_2=-\int_0^t\int_0^1\left(\theta p_{\theta}u_y-\frac{\lambda u_y^2}{v}-\frac{\mu |\mathbf{w}_y|^2}{v}-\frac{\nu |\mathbf{b}_y|^2}{v}\right)\frac{\kappa}{v}\theta_tdyds,\\[2mm] &I_3=- \int_0^t\int_0^1\left(\theta p_{\theta}u_y-\frac{\lambda u_y^2}{v}-\frac{\mu |\mathbf{w}_y|^2}{v}-\frac{\nu |\mathbf{b}_y|^2}{v}\right)K_vu_ydyds,\\[2mm] &I_4=-\int_0^t\int_0^1\frac{\kappa}{v}\theta_yK_{vv}v_yu_ydyds,\\[2mm] &I_5=-\int_0^t\int_0^1\frac{\kappa}{v}\theta_y K_v u_{yy}dyds,\\[2mm] &I_6=-\int_0^t\int_0^1\frac{\kappa}{v}\theta_y\left(\frac{\kappa}{v}\right)_v v_{y}\theta_tdyds. \end{split} \end{equation*} In the sequel, we evaluate all terms in \eqref{lemma141} using \eqref{K1}--\eqref{K3}. 
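Before entering into the details, let us record, as a rough sketch of what follows, the common shape of the bounds obtained below: for any small $\varepsilon>0$, the computations will show that \begin{equation*} |I_i|\leq \varepsilon (X+Y)+C(\varepsilon)\left(1+Z^{\frac{q+5}{q+10}}\right),\qquad i=1,\dots,6, \end{equation*} which, inserted into \eqref{lemma141} together with the lower bounds for its left-hand side, yields \eqref{Lemma14}. 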
Firstly, one has by \eqref{X} and \eqref{Y} \begin{equation} \int_0^t\int_0^1\frac{\kappa e_{\theta}\theta_t^2}{v}dyds \geq C \int_0^t\int_0^1(1+\theta^3)(1+\theta^q)\theta_t^2dyds\geq CX, \end{equation} and \begin{equation} \begin{split} &\int_0^t\int_0^1\frac{\kappa}{v}\theta_y\left(\frac{\kappa}{v}\theta_y\right)_{t}dyds\\ &=\frac12\int_0^1\left(\frac{\kappa}{v}\theta_y\right)^2dy -\frac12\int_0^1\left(\frac{\kappa}{v}\theta_y\right)^2(y,0)dy\\ &\geq C\int_0^1(1+\theta^{q})^2\theta_y^2dy-C\geq CY-C. \end{split} \end{equation} Secondly, we have by the Cauchy-Schwarz inequality and \eqref{temperature1} \begin{equation}\label{lemma105} \begin{split} &\left|I_1\right|=\left|\int_0^t\int_0^1e_{\theta}\theta_tK_v u_ydyds\right| \\ &\leq C\int_0^t\int_0^1(1+\theta)^{q+4}|\theta_t||u_y|dyds\\ &\leq \varepsilon X+C_{\varepsilon}\int_0^t\int_0^1(1+\theta)^{q+8}u_y^2dyds\\ &\leq \varepsilon X+C_{\varepsilon}Y^{\frac{q+8}{2(q+5)}}\int_0^t\int_0^1u_y^2dyds\\ &\leq \varepsilon (X+Y)+C, \end{split} \end{equation} for any fixed sufficiently small $\varepsilon>0$. Similarly, recalling \eqref{freelemma131} and \eqref{velocity}, one has \begin{equation} \begin{split} &\left|I_2\right|=\left|\int_0^t\int_0^1\left(\theta p_{\theta}u_y-\frac{\lambda u_y^2}{v}-\frac{\mu |\mathbf{w}_y|^2}{v}-\frac{\nu |\mathbf{b}_y|^2}{v}\right)\frac{\kappa}{v}\theta_tdyds\right|\\ &\leq C\int_0^t\int_0^1\left[(1+\theta)^{q+4}|u_y\theta_t| +(1+\theta)^q|\theta_t|(u_y^2+|\mathbf{w}_y|^2+|\mathbf{b}_y|^2)\right]dyds\\ &\leq \varepsilon X+C_{\varepsilon}|(1+\theta)^{q+8}|^{(0)}\int_0^t\int_0^1u_y^2dyds\\ &+C_{\varepsilon}|(1+\theta)^{q}|^{(0)}\int_0^t\int_0^1\left(|u_y|^4+|\mathbf{w}_y|^4+|\mathbf{b}_y|^4\right)dyds\\ &\leq \varepsilon (X+Y)+C, 
\end{split} \end{equation} and \begin{equation} \begin{split} &\left|I_3\right|=\left|\int_0^t\int_0^1\left(\theta p_{\theta}u_y-\frac{\lambda u_y^2}{v}-\frac{\mu |\mathbf{w}_y|^2}{v}-\frac{\nu |\mathbf{b}_y|^2}{v}\right)K_vu_ydyds\right|\\ &\leq C\int_0^t\int_0^1(1+\theta)^{q+5}u_y^2dyds+\int_0^t\int_0^1(1+\theta)^{q+1} |u_y|^3dyds\\ &\qquad+C\int_0^t\int_0^1(1+\theta)^{q+1}|u_y|(|\mathbf{w}_y|^2+|\mathbf{b}_y|^2)dyds\\ &\leq C |(1+\theta)^{q+5}|^{(0)}\int_0^t\int_0^1u_y^2dyds\\ &\qquad+|(1+\theta)^{q+1}|^{(0)}\int_0^t\int_0^1|u_y|^3dyds\\ &\qquad+C\int_0^t\int_0^1(1+\theta)^{q+1}|u_y|(|\mathbf{w}_y|^2+|\mathbf{b}_y|^2)dyds\\ &\leq CY^{\frac{q+5}{2(q+5)}}+ CY^{\frac{q+1}{2(q+5)}}\int_0^t\int_0^1(u_y^2+|\mathbf{w}_y|^4+|\mathbf{b}_y|^4)dyds+C\\ &\leq \varepsilon Y+C. \end{split} \end{equation} Thirdly, in view of \eqref{lemma61} and \eqref{boundvelocity}, one has \begin{equation} \begin{split} &\left|I_4\right|=\left|\int_0^t\int_0^1\frac{\kappa}{v}\theta_yK_{vv}v_yu_ydyds\right|\\ &\leq C\left(\int_0^t\int_0^1\frac{\kappa\theta_y^2}{\theta^2}dyds\right)^{\frac12} \times\left(\int_0^t\int_0^1(1+\theta)^{3q+4}u_y^2v_y^2dyds\right)^{\frac12}\\ &\leq CZ^{\frac38}Y^{\frac{q}{2(q+5)}}\\ &\leq \varepsilon Y+CZ^{\frac{3(q+5)}{4(q+10)}}, \end{split} \end{equation} and \begin{equation}\label{lemma142} \begin{split} &\left|I_5\right|=\left|\int_0^t\int_0^1\frac{\kappa}{v}\theta_y K_v u_{yy}dyds\right|\\ &\leq C \left(\int_0^t\int_0^1\frac{\kappa\theta_y^2}{\theta^2}dyds\right)^{\frac12}\cdot \left(\int_0^t\int_0^1(1+\theta)^{(3q+4)}u_{yy}^2dyds\right)^{\frac12}\\ &\leq CZ^{\frac12}Y^{\frac{q}{2(q+5)}}\\ &\leq \varepsilon Y+CZ^{\frac{q+5}{q+10}}. 
\end{split} \end{equation} Finally, one has \begin{equation}\label{lemma106} \begin{split} &\left|I_6\right|=\left|\int_0^t\int_0^1\frac{\kappa}{v}\theta_y\left(\frac{\kappa}{v}\right)_v v_{y}\theta_tdyds\right|\\ &\leq C\int_0^t\int_0^1\left|\frac{\kappa}{v}\theta_y\right|(1+\theta)^q |v_{y}\theta_t|dyds \\ &\leq \varepsilon X+C_{\varepsilon}|(1+\theta)^{q}|^{(0)}\int_0^t\max_{[0,1]}\left(\frac{\kappa\theta_y}{v}\right)^2 \left(\int_0^1v_y^2dy\right)ds\\ &\leq \varepsilon X+C_{\varepsilon}Y^{\frac{q}{2(q+5)}} \int_0^t\max_{[0,1]}\left(\frac{\kappa\theta_y}{v}\right)^2\left(\int_0^1v_y^2dy\right)ds\\ &\leq \varepsilon X+C_{\varepsilon}Y^{\frac{q}{2(q+5)}}\int_0^t\max_{[0,1]}\left(\frac{\kappa\theta_y}{v}\right)^2ds. \end{split} \end{equation} By the embedding theorem again, one has \begin{equation*} \begin{split} &\int_0^t\max_{[0,1]}\left(\frac{\kappa\theta_y}{v}\right)^2ds\\ &\leq C\int_0^t\int_0^1\left(\frac{\kappa\theta_y}{v}\right)^2dyds +C\int_0^t\int_0^1\left|\frac{\kappa\theta_y}{v}\cdot\left(\frac{\kappa\theta_y}{v}\right)_y\right|dyds\\ &\leq C|\kappa \theta^{2}|^{(0)}\int_0^t\int_0^1\frac{\kappa\theta_y^2}{\theta^2}dyds\\ &~~+C\left(\int_0^t\int_0^1\frac{\kappa\theta_y^2}{\theta^2}dyds\right)^{\frac12}\left(\int_0^t\int_0^1\kappa\theta^{2} \left(\frac{\kappa\theta_y}{v}\right)_y^2dyds\right)^{\frac12}\\ &\leq C \bigg\{|(1+\theta)^{q+2}|^{(0)}\\ &~~+\left(\int_0^t\int_0^1(1+\theta)^{q+2}\left(e_{\theta}^2\theta_t^2 +\theta^2 p_{\theta}^2u_y^2+u_y^4+|\mathbf{w}_y|^4+|\mathbf{b}_y|^4\right)dyds\right)^{\frac12}\bigg\}. 
\end{split} \end{equation*} In particular, we also have \begin{equation*} \begin{split} &\int_0^t\int_0^1(1+\theta)^{q+2}e_{\theta}^2\theta_{t}^2dyds\\ &\leq C|(1+\theta)^{8}|^{(0)}\int_0^t\int_0^1(1+\theta)^{q}\theta_{t}^2dyds\\[2mm] &\leq C \left(X+Y^{\frac{8}{2(q+5)}}X\right), \end{split} \end{equation*} and \begin{equation*} \begin{split} &\int_0^t\int_0^1(1+\theta)^{q+2}\theta^2 p_{\theta}^2u_y^2dyds\\ &\leq C\int_0^t\int_0^1(1+\theta)^{q+10}u_y^2dyds\\ &\leq C|(1+\theta)^{q+10}|^{(0)}\int_0^t\int_0^1u_y^2dyds\\ &\leq C\left(1+Y^{\frac{q+10}{2(q+5)}}\right), \end{split} \end{equation*} and \begin{equation*} \begin{split} &\int_0^t\int_0^1(1+\theta)^{q+2}\left(u_y^4+|\mathbf{w}_y|^4+|\mathbf{b}_y|^4\right)dyds\\ &\leq C|(1+\theta)^{q+2}|^{(0)}\int_0^t\int_0^1\left(u_y^4+|\mathbf{w}_y|^4+|\mathbf{b}_y|^4\right)dyds\\ &\leq C\left(1+Y^{\frac{q+2}{2(q+5)}}\right), \end{split} \end{equation*} which implies \begin{equation*} \begin{split} &Y^{\frac{q}{2(q+5)}}\int_0^t\max_{[0,1]}\left(\frac{\kappa \theta_y}{v}\right)^2ds \\&\leq C\left(1+Y^{\frac{q+1}{q+5}}+X^{\frac12}Y^{\frac{q+4}{2(q+5)}} +Y^{\frac{3q+10}{4(q+5)}}\right)\\[2mm] &\leq \varepsilon (X+Y)+C, \end{split} \end{equation*} which together with \eqref{lemma106} shows that \begin{equation} |I_6|\leq \varepsilon (X+Y)+C. \end{equation} This together with \eqref{lemma141}--\eqref{lemma142} completes the proof. \end{proof} With the above a priori estimates, we can now obtain the upper bound of the temperature, which is another essential contribution of this paper. 
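To see how these bounds close, let us note that combining \eqref{temperature1} with \eqref{Lemma14} gives \begin{equation*} \max_{[0,1]\times[0,t]}\theta(y,s)\leq C+CY^{\frac{1}{2(q+5)}}\leq C(T)\left(1+Z^{\frac{1}{2(q+10)}}\right), \end{equation*} so that the upper bound of the temperature reduces to a bound on $Z$, which is obtained in the next lemma. 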
\begin{lemma} The following a priori estimates hold: \begin{equation} |\theta|^{(0)}+|u_y|^{(0)}+|u|^{(0)}\leq C(T),\label{lemma17theta} \end{equation} and \begin{equation}\label{lemma17} \int_0^1(\theta_y^2+u_{yy}^2+ u_t^2)dy+\int_0^t\int_0^1(\theta_t^2+|\mathbf{b}_t|^2+u_{yt}^2)dyds\leq C(T). \end{equation} \end{lemma} \begin{proof} Differentiating \eqref{sub22} with respect to $t$, multiplying it by $u_t$, and then integrating, we obtain \begin{equation*} \frac{d}{dt}\int_0^1\frac{u_t^2}{2}dy+\lambda \int_0^1\frac{u_{yt}^2}{v}dy =\int_0^1\left(\left(p+\frac12|\mathbf{b}|^2\right)_t+\frac{\lambda u_y^2}{v^2}\right)u_{yt}dy. \end{equation*} As a consequence, \begin{equation*} \begin{split} &\int_0^1u_t^2dy+\int_0^t\int_0^1u_{yt}^2dyds\\ &\leq C+C\int_0^t\int_0^1u_y^2|u_{yt}|dyds+C\int_0^t\int_0^1|p_tu_{yt}|dyds +C\int_0^t\int_0^1|\mathbf{b}\cdot\mathbf{b}_tu_{yt}|dyds\\ &\leq \frac34\int_0^t\int_0^1u_{yt}^2dyds+C\int_0^t\int_0^1u_y^4dyds\\ &+C\int_0^t\int_0^1p_t^2dyds +C\int_0^t\int_0^1|\mathbf{b}_t|^2dyds+C(T)\\ &\leq C(T)+\frac34\int_0^t\int_0^1u_{yt}^2dyds +C\int_0^t\int_0^1p_t^2dyds +C\int_0^t\int_0^1|\mathbf{b}_t|^2dyds, \end{split} \end{equation*} which leads to \begin{equation}\label{lemma150} \begin{split} &\int_0^1u_t^2dy+\int_0^t\int_0^1u_{yt}^2dyds\\ &\leq C(T)+C\int_0^t\int_0^1p_t^2dyds +C\int_0^t\int_0^1|\mathbf{b}_t|^2dyds. 
\end{split} \end{equation} We notice that \begin{equation} \begin{split} &\int_0^t\int_0^1p_t^2dyds=\int_0^t\int_0^1\left(p_vv_t+p_{\theta}\theta_t\right)^2dyds\\ &=\int_0^t\int_0^1\left(-\frac{R\theta}{v^2}u_y+\left(R\rho+\frac{4a}{3}\theta^3\right)\theta_t\right)^2dyds\\ &\leq C\int_0^t\int_0^1\theta^2u_y^2dyds+C\int_0^t\int_0^1(1+\theta^6)\theta_t^2dyds\\ &\leq C(T)|\theta^2|^{(0)}\int_0^t\int_0^1u_y^2dyds+C\int_0^t\int_0^1(1+\theta^6)\theta_t^2dyds\\ &\leq C(T)Y^{\frac{1}{q+5}}+X+XY^{\frac{3}{q+5}}\\ &\leq C(T)\left(1+Z^{\frac{q+8}{q+10}}\right),\label{lemma151} \end{split} \end{equation} and it follows from \eqref{freelemma131} and \eqref{density} that \begin{equation} \begin{split} &\int_0^t\int_0^1|\mathbf{b}_t|^2dyds\\ &\leq\int_0^t\int_0^1\left(|\mathbf{b}|^2u_y^2+|\mathbf{w}_y|^2+|\mathbf{b}_{yy}|^2+|\mathbf{b}_y|^2v_y^2\right)dyds\\ &\leq C\int_0^t\int_0^1u_y^2dyds+C\int_0^t\int_0^1(|\mathbf{w}_y|^2+|\mathbf{b}_{yy}|^2)dyds\\ &+C\int_0^t\max_{[0,1]}|\mathbf{b}_y|^2\left(\int_0^1v_y^2dy\right)ds\\ &\leq C+C\int_0^t\int_0^1\left(|\mathbf{b}_y|^2+|\mathbf{b}_{yy}|^2\right)dyds\\ &\leq C(T).\label{lemma153} \end{split} \end{equation} Substituting \eqref{lemma151} and \eqref{lemma153} into \eqref{lemma150} shows that \begin{equation} \begin{split} &\int_0^1u_t^2dy+\int_0^t\int_0^1u_{yt}^2dyds\\ &\leq C\left(1+Z^{\frac{q+8}{q+10}}\right).\label{lemma154} \end{split} \end{equation} On the other hand, it is also deduced from \eqref{sub22} that \begin{equation*} u_{yy}=\frac{v}{\lambda}\left(u_t+\left(p+\frac{|\mathbf{b}|^2}{2}\right)_y +\frac{\lambda v_yu_y}{v^2}\right), \end{equation*} and then, by integration, it follows from \eqref{lemma154} that \begin{equation*} \begin{split} \int_0^1u_{yy}^2dy&\leq C\int_0^1\left(u_t^2+p_y^2+|\mathbf{b}|^2\cdot|\mathbf{b}_y|^2+v_y^2u_y^2\right)dy+C\\ &\leq C\int_0^1u_t^2dy+C\int_0^1p_y^2dy+C\int_0^1v_y^2u_y^2dy+C\int_0^1|\mathbf{b}_y|^2dy+C\\ &\leq C\left(1+Z^{\frac{q+8}{q+10}}+\int_0^1(1+\theta^6)\theta_y^2dy 
+\left(|\theta^2|^{(0)}+|u_y^2|^{(0)}\right)\int_0^1v_y^2dy\right)\\ &\leq C\left(1+Z^{\frac{q+8}{q+10}}+Y^{\frac{q+8}{q+5}}+Z^{\frac34}\right)\\ &\leq C(T)\left(1+Z^{\frac{q+8}{q+10}}\right). \end{split} \end{equation*} Thus $Z\leq C$, since $0< \frac{q+8}{q+10}<1$, and then $X$ and $Y$ are also bounded by \eqref{Lemma14}. Consequently, $|\theta|^{(0)}$, $|u_y|^{(0)}$, $\int_0^1(u_t^2+\theta_y^2+u_{yy}^2)dy$ and $\int_0^t\int_0^1(u_{yt}^2+\theta_t^2)dyds$ are also bounded. \end{proof} In the sequel, the lower bound of the temperature is obtained. \begin{lemma} One has \begin{equation*} \theta(y,t)\geq C(T). \end{equation*} \end{lemma} \begin{proof} Let $\Theta=\frac{1}{\theta}$; then \eqref{lemma81} can be written as \begin{equation*} \begin{split} e_{\theta}\Theta_t&=\left(\frac{\kappa}{v}\Theta_y\right)_y -\left(\frac{2\kappa\Theta_y^2}{v\Theta}+\frac{\mu|\mathbf{w}_y|^2\Theta^2}{v}+\frac{\nu|\mathbf{b}_y|^2\Theta^2}{v}\right)\\ &~~~~-\frac{\lambda \Theta^2}{v}\left(u_y-\frac{vp_{\theta}}{2\lambda \Theta}\right)^2+\frac{vp_{\theta}^2}{4\lambda}, \end{split} \end{equation*} which implies \begin{equation*} \Theta_t\leq \frac{1}{e_{\theta}}\left(\frac{\kappa}{v}\Theta_y\right)_y+C(T), \end{equation*} for some positive constant $C(T)$, by \eqref{rho} and \eqref{lemma17theta}. Define the operator $\mathscr{L}:=-\frac{\partial}{\partial t}+\frac{1}{e_{\theta}}\frac{\partial}{\partial y}\left(\frac{\kappa}{v}\frac{\partial}{\partial y}\right)$; then \begin{equation*} \left\{ \begin{array}{lllllllll} \mathscr{L}\widetilde{\Theta}<0,\qquad &\textup{on}\,\, Q_T=(0,1)\times(0,T),\\[2mm] \widetilde{\Theta}|_{t=0}\geq 0\qquad &\textup{on} \,\,[0,1],\\[2mm] \widetilde{\Theta}_y|_{y=0,1}=0 \quad &\textup{on} \,\, [0,T], \end{array} \right. 
\end{equation*}
where $\widetilde{\Theta}(y,t)=C(T)t+\max_{[0,1]}\frac{1}{\theta_0(y)}-\Theta(y,t)$. By the comparison theorem, one has
\begin{equation*}
\min_{(y,t)\in \overline{Q_T}}\widetilde{\Theta}(y,t)\geq 0,
\end{equation*}
from which we infer
\begin{equation*}
\theta(y,t)\geq \left(C(T)t+\max_{[0,1]}\frac{1}{\theta_0(y)}\right)^{-1}
\end{equation*}
for any $(y,t)\in \overline{Q_T}$.
\end{proof}
\subsection{Higher-order a priori estimates of $(\mathbf{w},\mathbf{b},\theta)$}
\begin{lemma}
One has
\begin{equation}\label{lemma17b}
\begin{split}
&|(\mathbf{w}_y,\mathbf{b}_y,v_y)|^{(0)}
+\int_0^1(|\mathbf{w}_t|^2+|(v\mathbf{b})_t|^2+|\mathbf{w}_{yy}|^2+|\mathbf{b}_{yy}|^2)dy\\
&+\int_0^t\int_0^1\left(|\mathbf{w}_{ty}|^2+|\mathbf{b}_{ty}|^2+\theta_{yy}^2\right)dyds\leq C(T).
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
Firstly, we differentiate \eqref{sub33} with respect to $t$, multiply it by $\mathbf{w}_t$ and integrate over $(0,1)\times (0,t)$:
\begin{equation*}
\begin{split}
&\frac12\int_0^1|\mathbf{w}_t|^2dy+\int_0^t\int_0^1\frac{\mu}{v}|\mathbf{w}_{ty}|^2dyds\\
&=\frac12\int_0^1|\mathbf{w}_t|^2(y,0)dy
+\int_0^t\int_0^1\frac{\mu}{v^2}u_y\mathbf{w}_y\cdot\mathbf{w}_{ty}dyds
-\int_0^t\int_0^1\mathbf{b}_t\cdot\mathbf{w}_{ty}dyds\\
&\leq C+\frac12\int_0^t\int_0^1\frac{\mu}{v}|\mathbf{w}_{ty}|^2dyds+C(T)
\int_0^t\int_0^1(u_y^4+|\mathbf{w}_y|^4+|\mathbf{b}_t|^2)dyds,
\end{split}
\end{equation*}
which, together with \eqref{freelemma131}, \eqref{lemma17theta} and \eqref{lemma17}, yields
\begin{equation}\label{lemma17w}
\int_0^1|\mathbf{w}_t|^2dy+\int_0^t\int_0^1|\mathbf{w}_{ty}|^2dyds\leq C(T).
\end{equation}
Similarly, we get from \eqref{sub33}
\begin{equation*}
\mathbf{w}_{yy}=\frac{v}{\mu}\left(\mathbf{w}_t-\mathbf{b}_y
+\frac{\mu}{v^2}v_y\mathbf{w}_y\right),
\end{equation*}
which leads to
\begin{equation*}
\begin{split}
\int_0^1|\mathbf{w}_{yy}|^2dy&\leq C\int_0^1(|\mathbf{w}_t|^2+|\mathbf{b}_y|^2+v_y^2|\mathbf{w}_y|^2)dy\\
&\leq C+C\max_{[0,1]}|\mathbf{w}_y|^2\int_0^1v_y^2dy\\
&\leq C(T)+C\int_0^1|\mathbf{w}_y|^2dy+\frac12\int_0^1|\mathbf{w}_{yy}|^2dy\\
&\leq C(T)+\frac12\int_0^1|\mathbf{w}_{yy}|^2dy,
\end{split}
\end{equation*}
that is,
\begin{equation*}
\int_0^1|\mathbf{w}_{yy}|^2dy\leq C(T).
\end{equation*}
Secondly, we further deduce that
\begin{equation}
\begin{split}
&\int_0^t\int_0^1\frac{\kappa^2}{v^2}\theta_{yy}^2dyds\\
&\leq C\int_0^t\int_0^1(\theta_t^2+u_y^2+u_y^4+|\mathbf{w}_y|^4+|\mathbf{b}_y|^4+v_y^2\theta_y^2+\theta_y^4)dyds\\
&\leq C+C\int_0^t\max_{[0,1]}\theta_y^2\int_0^1(v_y^2+\theta_y^2)dyds\\
&\leq C+C\int_0^t\int_0^1\theta_y^2dyds+\frac12\int_0^t\int_0^1\frac{\kappa^2}{v^2}\theta_{yy}^2dyds,
\notag
\end{split}
\end{equation}
and then
\begin{equation}
\int_0^t\int_0^1\theta_{yy}^2dyds\leq C(T)+C\int_0^t\int_0^1\theta_y^2dyds\leq C(T).
\end{equation}
In addition, by \eqref{lemma2V}, one has
\begin{equation*}
\begin{split}
\lambda (\ln v)_y=&\lambda (\ln v_0(y))_y+\int_0^t(p_y+\mathbf{b}\cdot\mathbf{b}_y)ds+(u-u_0),
\end{split}
\end{equation*}
and hence
\begin{equation*}
\begin{split}
v_y^2
&\leq C+C\int_0^t(|\mathbf{b}_y|^2+p_v^2v_y^2+p_{\theta}^2\theta_y^2)ds\\
&\leq C+C\int_0^t\int_0^1(|\mathbf{b}_y|^2+|\mathbf{b}_{yy}|^2+\theta_y^2
+\theta_{yy}^2)dyds+C\int_0^tv_y^2ds\\
&\leq C+C\int_0^tv_y^2ds.
\end{split}
\end{equation*}
Thus $v_y$ is bounded by Gr\"{o}nwall's inequality.
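Several of the estimates above absorb a sup-norm into $L^2$ norms; for completeness we record the standard one-dimensional embedding inequality behind this step (a well-known fact, not stated explicitly in the text): for $f\in H^1(0,1)$ and any $\varepsilon>0$,
\begin{equation*}
\max_{[0,1]}f^2\leq \int_0^1f^2dy+2\int_0^1|f||f_y|dy
\leq \left(1+\frac{1}{\varepsilon}\right)\int_0^1f^2dy+\varepsilon\int_0^1f_y^2dy,
\end{equation*}
which follows by writing $f^2(y)=f^2(x)+2\int_x^yff_yds$, averaging over $x\in(0,1)$ and applying Young's inequality.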
Lastly, we differentiate \eqref{sub44} with respect to $t$, multiply by $(v\mathbf{b})_t$ and integrate to get
\begin{equation}
\begin{split}
&\frac12\frac{d}{dt}\int_0^1(v\mathbf{b})_t^2dy+\int_0^1\nu|\mathbf{b}_{yt}|^2dy\\
= & -\int_0^1\frac{\nu}{v}\mathbf{b}_{yt}\cdot(\mathbf{b}u_{yy}+\mathbf{b}_yu_y+\mathbf{b}_tv_y)dy\\
&+\int_0^1\frac{\nu}{v^2}u_y\mathbf{b}_y\cdot(\mathbf{b}u_{yy}+\mathbf{b}_yu_y
+\mathbf{b}_tv_y+\mathbf{b}_{ty}v)dy\\
&-\int_0^1\mathbf{w}_t\cdot(\mathbf{b}u_{yy}+\mathbf{b}_yu_y+\mathbf{b}_tv_y+\mathbf{b}_{ty}v)dy\\
\le &\varepsilon \int_0^1\nu|\mathbf{b}_{yt}|^2dy+C\int_0^1(\mathbf{b}_y^2+\mathbf{b}_t^2+\mathbf{w}_t^2)dy+C\int_0^1(u_{yy}^2+|\mathbf{w}_{yy}|^2)dy.
\end{split}
\end{equation}
Integrating, and fixing $\varepsilon$ sufficiently small, we obtain from \eqref{freelemma131}, \eqref{lemma17} and \eqref{lemma17w}
\begin{equation}
\int_0^1(v\mathbf{b})_t^2dy+\int_0^t\int_0^1|\mathbf{b}_{yt}|^2dyds\leq C.
\end{equation}
Furthermore, by \eqref{sub44}, one has
\begin{equation*}
\begin{split}
\int_0^1|\mathbf{b}_{yy}|^2dy&\leq C\int_0^1((v\mathbf{b})_t^2+|\mathbf{w}_y|^2+v_y^2|\mathbf{b}_y|^2)dy\\
&\leq C(T)+C\int_0^1|\mathbf{b}_y|^2dy\leq C(T),
\end{split}
\end{equation*}
and similarly
\begin{equation*}
|\mathbf{b}_y|^2\leq C\int_0^1|\mathbf{b}_y|^2dy+C\int_0^1|\mathbf{b}_{yy}|^2dy\leq C(T).
\end{equation*}
This completes the proof.
\end{proof} \begin{lemma}\label{Lemma18}We have \begin{equation} |\theta_y|^{(0)}+\int_0^1(\theta_t^2+\theta_{yy}^2)dy+\int_0^t\int_0^1\theta_{yt}^2dyds\leq C(T).\label{lemma181} \end{equation} \end{lemma} \begin{proof} Differentiate \eqref{sub55} with respect to $t$, multiply it by $e_{\theta}\theta_t$ and then integrate it over $[0,1]$ \begin{equation*} \begin{split} \frac{d}{dt}&\int_0^1\frac{(e_{\theta}\theta_t)^2}{2}dy+\int_0^1\frac{\kappa}{v}e_{\theta}\theta_{yt}^2dy\\ =&\int_0^1\bigg[-p_{\theta}e_{\theta}u_y\theta_t^2-\theta p_{\theta v}e_{\theta}u_y^2\theta_t-\theta p_{\theta\theta}e_{\theta}u_y\theta_t^2-\theta p_{\theta}e_{\theta}u_{yt}\theta_t\\ &+\frac{2\lambda}{v}e_{\theta}u_yu_{yt}\theta_t-\frac{\lambda}{v^2}e_{\theta}u_y^3\theta_t +\frac{2\mu}{v}e_{\theta}\mathbf{w}_y\cdot\mathbf{w}_{yt}\theta_t-\frac{\mu}{v^2}e_{\theta}u_y|\mathbf{w}_y|^2\theta_t\\ &+\frac{2\nu}{v}e_{\theta}\theta_t\mathbf{b}_y\cdot\mathbf{b}_{yt}-\frac{\nu}{v^2}e_{\theta}|\mathbf{b}_y|^2\theta_t -\left(\frac{\kappa}{v}\right)_ve_{\theta v}v_yu_y\theta_y \theta_t-\left(\frac{\kappa}{v}\right)_ve_{\theta\theta}u_y\theta_y^2\theta_t\\ &-\left(\frac{\kappa}{v}\right)_ve_{\theta}u_y\theta_y\theta_{yt}-\left(\frac{\kappa}{v}\right)_{\theta}e_{\theta v}v_y\theta_y\theta_t^2 -\left(\frac{\kappa}{v}\right)_{\theta}e_{\theta\theta}\theta_y^2\theta_t^2\\ &-\left(\frac{\kappa}{v}\right)_{\theta}e_{\theta}\theta_t\theta_y\theta_{yt} -\frac{\kappa}{v}e_{\theta v}v_y \theta_t\theta_{yt}-\frac{\kappa}{v}e_{\theta\theta}\theta_y\theta_t\theta_{yt}\bigg]dy, \end{split} \end{equation*} which implies by integration \begin{equation*} \begin{split} &\int_0^1\theta_t^2dy+\int_0^t\int_0^1\theta_{yt}^2dyds\\ &\leq C\left(1+\int_0^t\int_0^1(v_y^2+\theta_y^2)\theta_t^2dyds\right)\\ &\leq C\left(1+\int_0^t\max_{[0,1]}\theta_t^2ds\right)\\ &\leq C\left(1+\varepsilon \int_0^t\int_0^1\theta_{yt}^2dyds+C_{\varepsilon}\int_0^t\int_0^1\theta_t^2dyds\right), \end{split} \end{equation*} and then 
$\int_0^1\theta_t^2dy$ and $\int_0^t\int_0^1\theta_{yt}^2dyds$ are bounded.
Arguing as for $\mathbf{b}_{yy}$ and $\mathbf{w}_{yy}$, we deduce from \eqref{sub55}
\begin{equation*}
\begin{split}
\int_0^1\theta_{yy}^2dy&\leq \int_0^1\frac{1}{\kappa^2}\left(1+\theta_t^2+u_y^2+u_y^4+|\mathbf{w}_y|^4+|\mathbf{b}_y|^4+v_y^2\theta_y^2+\theta_y^4\right)dy\\
&\leq C\left(1+\max_{[0,1]}\theta_y^2\cdot\int_0^1(v_y^2+\theta_y^2)dy\right)\\
&\leq \varepsilon \int_0^1\theta_{yy}^2dy+C(T),
\end{split}
\end{equation*}
which implies the boundedness of $\int_0^1\theta_{yy}^2dy$, and hence of $|\theta_y|^{(0)}$ by the embedding theorem.
\end{proof}
Having established Lemmas \ref{Lemma1}--\ref{Lemma18}, we can prove the H\"{o}lder estimates of the solutions in a routine manner. Indeed, from \eqref{lemma17theta}, \eqref{lemma17b} and \eqref{lemma181}, one has
\begin{equation*}
(u,\mathbf{w},\mathbf{b},\theta)\in C^{1,0}(Q_T)^6.
\end{equation*}
Following the procedure of \cite{DafermosHisao}, \cite{umeharaTani} or \cite{ZhangXie}, we obtain
\begin{equation*}
\begin{split}
&(u,\mathbf{w},\mathbf{b},\theta)\in C^{1,\frac12}(Q_T)^6, \\[3mm]
&(u_y,\mathbf{w}_y,\mathbf{b}_y,\theta_y)\in C^{\frac13,\frac16}(Q_T)^6.
\end{split}
\end{equation*}
Finally, by the classical Schauder estimates and the methods in \cite{Lady}, the H\"{o}lder estimates of the solutions are derived. This completes the proof of Theorem \ref{thm}.
\end{document}
Sesame sowing date and insecticide application frequency to control sesame webworm Antigastra catalaunalis (Duponchel) in Humera, Northern Ethiopia
Zenawi Gebregergis1, Dereje Assefa2 & Ibrahim Fitwy2
Sesame (Sesamum indicum L.) is one of the most important crops in Ethiopia for the international market, but its production is challenged by insect infestations and inappropriate agronomic practices. Sesame webworm (Antigastra catalaunalis) is the major pest, causing heavy losses in the Humera areas of Northern Ethiopia. This study aims to determine the optimum sowing time and insecticide application frequency for controlling A. catalaunalis. The results showed that early sowing gave the minimum infestation of sesame webworm and better sesame grain yield. The integration of early sowing and weekly spraying (T16) resulted in low incidence (8.8%) and higher grain yield (651 kg/ha), whereas the combination of late sowing and the untreated (control) plot (T3) gave higher incidence (100%) and lower grain yield (69.1 kg/ha). The maximum level of leaf, flower and capsule damage was scored on the late-sown, untreated plot, while the lowest was on the early-sown, weekly sprayed plot. Planting sesame early, at the onset of rainfall, followed by two applications of insecticide at 2 and 4 weeks after emergence was found to be an economical and optimal management option for controlling A. catalaunalis.
Sesame (Sesamum indicum L.) is an annual plant that belongs to the Pedaliaceae family. It is one of the world's oldest oilseed crops, grown mainly for its oil-rich seeds [1]. Although the order of the leading sesame-producing countries changes from time to time, Ethiopia is the sixth largest sesame producer in the world, following Myanmar, India, China, Sudan and Tanzania, and third in Africa, followed by Uganda and Nigeria. The top 5 countries account for over 64.3% of total world sesame production [2].
The Ethiopian whitish Humera-type variety is known for its taste (sweetness) in the world market; hence, it is exported to the confectionery market, where white-seeded types are demanded by consumers [3]. The four major sesame-growing regions in the country are Tigray, Amara, Oromia and Benishangul Gumuz. Within Tigray, the western zone is the main sesame production area, with large commercial farms and many small-scale farmers, and sesame is a good source of income in these areas; however, there are many hurdles to its production and productivity, such as limited research in plant protection and a lack of economically feasible and technically appropriate production technologies. In the study area, no research on integrated management of major sesame pests has been conducted yet. Insect pests such as sesame webworm (Antigastra catalaunalis), sesame seed bug (Elasmolomus sordidus) and gall midge (Asphondylia sesami) are the most important insects affecting sesame production. Of these, sesame webworm (A. catalaunalis) is the most important, attacking sesame during various growth stages from 2 or 3 weeks after emergence up to harvesting [4,5,6,7,8,9,10]. The objectives of this study were to determine the optimum sesame sowing time and frequency of insecticide application to control sesame webworm and boost sesame yield.
Description of the study area
A field experiment to investigate the effects of sowing time and insecticide application frequency on A. catalaunalis infestation was conducted in western Tigray at Humera Agricultural Research Center (HuARC), which is located at a latitude of 14°15′N, a longitude of 36°37′E and an elevation of 608 m. The experimental site, Humera, is under the administration of Kafta Humera (Fig. 1).
The agro-ecology of the location is described as a hot to warm semiarid plain with a mean temperature of 29 °C, an annual mean rainfall of 581.2 mm (ranging from 380 to 870 mm), Vertisol, a soil pH of 8.4 and husbandry variations.
Map of the study area
The experiment was laid out in a factorial randomized complete block design (RCBD) with three replications. The paths between blocks and plots were 2.5 and 2 m, respectively. The net plot area was 9.6 m2, and the total area of the experiment was 2288 m2. Sesame seeds were planted at 40 cm inter-row and 10 cm intra-row spacing.
Sowing time
Setit-1 was used as the test variety. The sowing date for sesame in the study area ranges from mid-June to mid-July [4]. Similarly, the New LocClim software (satellite data) indicated the second dekad of June as the onset of rainfall for the study area (Humera) (Fig. 4). But Gebre et al. [11] reported that the onset date of rainfall in North Ethiopia varied significantly over the last 30 years. The same authors added that the onset of rainfall at a specific location can be determined when 20 mm or more of rainfall is recorded over three consecutive days of the main season (Kiremt). Accordingly, the onset of rainfall was on July 12; the first sowing (early sowing) was then on July 13, the second sowing (mid-sowing) on July 23 and the third sowing (late sowing) on August 02, 2015.
Insecticide application frequency
When larvae were first detected at 2 WAE, dimethoate 40% EC, a broad-spectrum pesticide, was applied at a rate of 2 l/ha (800 g of active ingredient per hectare) according to the treatments under natural conditions. The weekly spray (positive control) was started 1 week after emergence, before the start of A. catalaunalis infestation, and spraying was continued at 1-week intervals for ten consecutive weeks up to maturity. The insecticide application frequencies were 0, 1, 2, 3, 4 and 10 sprays; details are given in Table 1.
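The onset-of-rainfall rule attributed to Gebre et al. [11] can be sketched in code. The snippet below is an illustrative reading of the criterion (a cumulative 20 mm over three consecutive days; the exact operational definition in [11] may differ), applied to hypothetical daily rainfall values:

```python
def rainfall_onset(daily_mm, threshold=20.0, window=3):
    """Index of the first day of the first `window`-day run whose
    cumulative rainfall reaches `threshold` mm, or None if absent."""
    for i in range(len(daily_mm) - window + 1):
        if sum(daily_mm[i:i + window]) >= threshold:
            return i
    return None

# Hypothetical daily rainfall (mm) at the start of the main season (Kiremt):
rain = [0, 3, 4, 2, 8, 11, 5]
print(rainfall_onset(rain))  # day index 3 starts the first 2+8+11 = 21 mm run
```

Under this reading, the sowing dates follow directly: the early sowing goes in the day after onset, with mid- and late sowings at fixed 10-day offsets.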
A hand-operated knapsack sprayer was used for the dimethoate 40% EC application.
Table 1 Treatment combination of sowing date and insecticide application frequencies
Plant-related data collection
From the net plot area, five plants were selected randomly and tagged to collect the phenological, growth and yield component traits of sesame. Days to 50% flowering and 50% maturity were recorded on a plot basis. Furthermore, the eight experimental rows, excluding both margins to reduce border effects, were harvested, tied in sheaves and made to stand separately until the capsules opened; they were then threshed by knocking the sheaves and weighed for yield determination.
A. catalaunalis-related data collection
Incidence (INC %): incidence of A. catalaunalis was recorded as a total count of plants infested on any plant part (leaf, flower and/or capsule) in the plot, six times on a fortnightly basis. It was calculated using the following equation:
$$\text{Incidence} = \frac{\text{no. of infested plants}}{\text{total plants in the plot}}\times 100$$
Number of larvae per plant (NLP): larvae were counted on five randomly selected plants per plot, six times on a fortnightly basis.
Leaf damage (LD %): leaves showing symptoms of A. catalaunalis attack, such as bored and webbed leaves, were taken as damaged. Damaged leaves per plant were recorded on the five randomly selected plants, six times on a fortnightly basis:
$$\text{Leaf damage} = \frac{\text{no. of infested leaves}}{\text{total inspected leaves}}\times 100$$
Flower damage (FD %): webbed and tunneled flowers were considered damaged.
Damaged flowers per plant were recorded on the five randomly selected plants, three times on a fortnightly basis:
$$\text{Flower damage} = \frac{\text{no. of infested flowers}}{\text{total inspected flowers}}\times 100$$
Capsule damage (CD %): all burrowed capsules were regarded as damaged. Damaged capsules per plant were recorded on the five randomly selected plants, three times on a fortnightly basis:
$$\text{Capsule damage} = \frac{\text{no. of infested capsules}}{\text{total inspected capsules}}\times 100$$
Seed loss of damaged capsule (SL %): the difference in the number of seeds between healthy and damaged capsules was considered a loss. At harvest, one damaged and one healthy capsule were taken from the same node of each of the five randomly selected plants, and the number of seeds in each damaged and healthy capsule was counted. Seed loss was calculated using the following equation:
$$\text{SL}\% = \frac{\text{NSPHC} - \text{NSPDC}}{\text{NSPHC}}\times 100$$
where SL = % of seed loss of the damaged capsule, NSPHC = number of seeds per healthy capsule and NSPDC = number of seeds per damaged capsule.
Data collected for the different traits of the field experiment were analyzed using the statistical software GenStat 16. Means were compared using Duncan's multiple range test (DMRT) at the 0.01 probability level.
The incidence of A. catalaunalis differed significantly (p < 0.001) among the three sowing dates. The number of larvae per plant and the incidence were significantly lower on early-sown sesame than on late-sown sesame. Insecticide application frequency also had a significant effect (p < 0.001) on A. catalaunalis incidence.
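The five percentage metrics above share a single formula (a count over a total, times 100). As a sketch with made-up counts (not data from this trial), they can be computed as:

```python
def pct(part, total):
    """Shared formula behind all the infestation metrics: part/total * 100."""
    return 100.0 * part / total

def incidence(n_infested_plants, n_plants_in_plot):
    # INC %: plants infested on any part (leaf, flower and/or capsule).
    return pct(n_infested_plants, n_plants_in_plot)

def seed_loss(nsphc, nspdc):
    """SL %: seeds lost in a damaged capsule relative to a healthy one
    taken from the same node (NSPHC vs NSPDC)."""
    return pct(nsphc - nspdc, nsphc)

# Hypothetical counts for one plot and one capsule pair:
print(incidence(12, 96))   # 12 infested of 96 plants -> 12.5
print(seed_loss(60, 45))   # 60 vs 45 seeds -> 25.0% loss
```

Leaf, flower and capsule damage follow the same `pct` pattern with infested and inspected organ counts.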
The maximum incidence level (98.38%) was detected on the control plot, which was on par with the once-sprayed plot, and the lowest (26.10%) on the weekly sprayed plot (Table 2). The interaction effect between sowing date and frequency of insecticide application was highly significant (p < 0.001). The highest incidence (100%) was recorded from the combination of late sowing and the control plot, while the integration of early sowing and weekly spraying (T16) gave the lowest value (8.79%) (Table 3).
Table 2 Main effect of sowing date and insecticide application frequency on A. catalaunalis infestation, yield and yield components of sesame
Table 3 Interaction effect of sowing time and insecticide application frequency on A. catalaunalis infestations
Compared with early sowing, higher leaf, flower and capsule damage was scored on late-sown sesame: about 6.8, 36.3 and 36.5% leaf, flower and capsule damage, respectively, was recorded on the late sowing. Sowing time also had a significant effect on seed loss per damaged capsule; early planting contributed a 19.3% reduction in seed loss compared to late planting (Table 2). With regard to the effect of insecticide application frequency, there was minimal damage to leaves, flowers and capsules on the weekly sprayed plots. When the interaction between sowing time and insecticide application frequency was examined, late sowing with no spray (T3) was found to have the maximum level of flower and capsule damage. The interaction effect also revealed that the combination of early sowing and weekly spraying (T16) gave the lowest seed loss per damaged capsule (7.49%), while late sowing with no spray (T3) gave the highest (93.86%) (Table 3).
Sowing date, insecticide application frequency and their interaction showed significant (p < 0.001) effects on yield and yield components of sesame. Comparatively higher grain yield (472.3 kg/ha) was harvested from early-sown sesame, and the lowest (284.4 kg/ha) from late sowing (Table 2).
Regarding insecticide application frequency, better grain yield (531.2 kg/ha) was harvested from the weekly sprayed plot, while the control plot produced the lowest yield (147.2 kg/ha) (Table 2). The combination of early sowing and weekly spraying (T16) gave the highest grain yield (651.0 kg/ha), compared with 69.1 kg/ha from late sowing with no spray (Table 3).
Incidence of A. catalaunalis was severe on late-sown sesame; early sowing gave about a 73.3% reduction in incidence of A. catalaunalis compared to late sowing (Table 2). This indicates that the pest may favor green cover (weeds) and high temperature for rearing and reproduction before infesting sesame, and early in the season (July) there were no weeds. Low temperature and high rainfall were observed in July and August, while high temperature and low rainfall prevailed in September and October (Fig. 2). Higher infestation of the pest was also observed late in the season on the late sowing, and the pest prefers fleshy, young growing points (leaves, flowers, capsules). Therefore, the reproductive stage of early-sown sesame was already complete before the A. catalaunalis population built up, whereas late-sown sesame did not escape. This agrees with Egonyu et al. [12], who reported that sowing the crop at the onset of rainfall (May) resulted in less infestation of A. catalaunalis than planting late in the season. Similarly, Ahirwar et al. [13] reported that high temperature (> 27 °C) and low rainfall (< 200 mm monthly rainfall) during the flowering and pod formation stages aggravate the incidence of the pest. Many Indian researchers have reported that the pest is active from August to November [13,14,15] and that early-sown (June) sesame is less infested than late-sown sesame. Hence, sowing the crop early in the season enabled it to escape sesame webworm damage, while delayed sowing resulted in a significantly higher level of damage to leaves, flowers and pods.
This could be associated with the fact that the maximum larval population and incidence are strongly correlated with maximum temperature and lower rainfall [13, 14].
A. catalaunalis incidence across the growing season (months) in 2015. INC = incidence (%), NLP = number of larvae per plant, RF = monthly total rainfall (mm), TEMP = monthly average temperature (°C)
The frequency of insecticide application had a significant effect on controlling A. catalaunalis. The weekly sprayed plot showed a 73% reduction in incidence of the pest in comparison with the control (Table 2). Of course, the pest and its incidence could not be totally avoided, but incidence could be lowered to 26 and 56% through weekly spraying and spraying twice at 2 and 4 WAE, respectively. Application of endosulfan 0.07% at 30 and 45 days after planting was proved to be the most effective at controlling A. catalaunalis [15]. Early sowing with scheduled insecticide application showed low incidence of the pest compared to late sowing without treatment. The combination of early sowing and weekly spraying gave about a 91% reduction in A. catalaunalis incidence over late sowing with no spray (Table 3).
Pest incidence across the growth stages of sesame increased from about 5% at the seedling stage (2 WAE) to more than 65% at the capsule development stage (8–10 WAE), and the number of larvae per plant increased significantly from 2.6 at the seedling stage to 6.3 at capsule development (Fig. 3a). When incidence across the months of the growing season was examined, there was less than 1% incidence in July (seedling stage) but about 70% in October (capsule development stage) (Fig. 2). The marked increase in pest incidence in October (capsule development) could be due to a favorable environment: higher temperature, lower rainfall and more sunshine hours in October (Figs. 2, 4).
Zerabruk [16] noted that the maximum mean incidence percentage of sesame webworm was recorded at the end of September, during pod setting. A. catalaunalis infestation was reported to be higher during the capsule formation phase of the crop than during the vegetative growth and flowering stages [17]. Kumar et al. [18] reported a strong correlation between the larval population, maximum temperature, maximum relative humidity and limited rainfall. Similarly, incidence of A. catalaunalis was higher in dry, sunny weather than in wet weather, and an outbreak of the pest occurred when a long dry spell had been preceded by heavy rains [19]. The same author added that there was a positive correlation between the monthly abundance of the pest and the number of hours of sunshine.
A. catalaunalis incidence across sesame growth stages (a) and severity of damage across plant parts (b). INC = incidence, NIL = number of infected leaves per plant, LD = leaf damage, TL = total leaves, NLP = number of larvae per plant
Dekadal total rainfall, potential evapotranspiration and sunshine hours over long-term years of the growing season in Humera (New LocClim longitude 36.37°E, latitude 14.15°N, altitude 608 m)
Higher damage to sesame plant parts was recorded on the late sowing, while early planting showed less damage. Reproductive parts of sesame were severely damaged by the pest compared with the leaves. In the same fashion, higher damage (6–12%) caused by A. catalaunalis was recorded on reproductive parts of sesame and low damage (4%) on leaves [16]. Similarly, Choudhary et al. [20] noted a higher level of capsule damage (40%) and lower leaf damage on an untreated sesame plot. A. catalaunalis infests sesame flowers more than pods, but can cause up to 53% seed loss in pods [21].
Karuppaiah and Nadarajan [14] reported that when infestation occurs at a very early stage, the plant dies without producing any capsule, and shoot growth is affected when infestation occurs at later stages. Planting sesame during June and July escaped damage, while delayed sowing resulted in a significantly higher level of damage to leaves, flowers and pods [22]. Similarly, Abdalla and Mohamed [23] reported that infestation of A. catalaunalis in late sowing was higher than in early sowing (17.8 larvae/25 plants on August 1 and 21.1 larvae/25 plants on August 15). Methods of assessing yield loss are usually based on the degree of infestation and seed loss in pods. The highest damage to leaves (8.1%), flowers (39.8%) and capsules (39.7%) was recorded on the control (untreated) plot, statistically similar to the once-sprayed plots. The minimum percent seed loss per damaged capsule (30.0%) was calculated on the weekly sprayed plot, while the maximum (87.3%) was on the untreated plot (Table 2); Simoglou et al. [24] estimated seed losses of more than 50%. Thus, frequent insecticide sprays resulted in less leaf, flower and capsule damage and reduced seed loss. Similarly, Egonyu et al. [12] and Karuppaiah [22] reported that applying insecticides twice, at 2–3 and 4–5 weeks after sesame emergence, was effective and economical for controlling A. catalaunalis. Related work reported that flower and capsule damage caused by A. catalaunalis was significantly controlled when endosulfan was sprayed three times [25]. Moreover, two applications of spinosad 0.001% significantly reduced the larval population and the flower and capsule damage compared to the control [26]. As illustrated in Fig. 3b, the severity of damage to sesame plant parts varied from 5% on leaves to 28% on reproductive parts.
Similarly, Karuppaiah and Nadarajan [14] reported 32.67% flower damage and 24.69% capsule damage caused by A. catalaunalis. Therefore, flowers and capsules were the sesame parts most exposed to damage by A. catalaunalis. This might not always be true, however, for two reasons: (1) the higher level of damage on flowers and capsules could be due to the favorable weather conditions (higher temperature and low rainfall) during the reproductive stage of the plant, and the pest likely prefers feeding on young, soft plant parts; (2) the lower level of damage on leaves was not because of low infestation of the pest on the leaves, but rather due to the higher number of total leaves per plant, which lowered the computed value. When leaf damage across different growth stages of the plant was examined, more damage was observed at the young growth stage: leaf damage was higher (43.11%) at the seedling stage and lower (5.34%) at the capsule development stage. This is opposite to the trend of A. catalaunalis incidence, which increased steadily from seedling to capsule development stage. The reason behind the lower leaf damage was the sharp increase in total leaf number per plant (the denominator in the leaf damage calculation) from few leaves (4.78) at the seedling stage to a large number (136.6) at the capsule development stage (Fig. 3a).
Early sowing of sesame performed better than the mid- and late sowings. The early sowing might have taken advantage of the earlier moisture for emergence and might have escaped infestation by the pest. Several studies have shown that adjusting the time of planting often helps crops escape an insect pest at a vulnerable crop stage and also gives better grain yield [27,28,29]. Higher sesame grain yield (1172 kg/ha) was harvested in Metema, North Ethiopia, when sesame was planted early in the season, and lower yield (200 kg/ha) from late sowing [30].
Regardless of sowing time, insecticide application frequency had a great impact on grain yield differences. Plots sprayed two or more times showed lower incidence and severity, while the control and once-sprayed plots showed higher incidence and severity of damage caused by A. catalaunalis. The reason behind the higher number of capsules per plant and grain yield on the frequently sprayed plots could be the presence of healthy leaves and flowers, which help the plant bear more fertile capsules. Conversely, in the control plot there was higher damage to leaves and flowers, which likely reduced the number of healthy capsules. Muez [31] reported that a higher sesame yield (714 kg/ha) was recorded from a dimethoate 40% EC-treated plot. In the same way, spraying 0.07% endosulfan three times significantly reduced the capsule damage caused by A. catalaunalis and also increased the grain yield of sesame [25]. Higher grain yield of sesame (809 kg/ha) was harvested from an insecticide-treated plot and lower yield (594.5 kg/ha) from the untreated one [16]. Moreover, higher grain yield (7.2 kg/ha) was recorded when insecticide was sprayed twice, at 3 and 6 WAE, and lower grain yield (4.1 kg/ha) from the control [20].
The combined effect of early sowing and frequent insecticide sprays influenced the yield positively: the early-sown, weekly sprayed plot out-yielded the late-sown, untreated plot by 89.4%. Perhaps the early sowing benefited from the early rains compared with the late sowing. In addition, the pest is by nature not active during early sowings because of its preference for higher temperature and limited precipitation, and most of the weather conditions in July and August were not conducive to the pest compared with September and October (Fig. 2).
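The percentage comparisons quoted throughout (e.g., the 89.4% yield gap between T16 and T3) appear to be relative reductions against the larger value; a quick sketch recomputing them from the figures reported in Tables 2 and 3:

```python
def reduction_pct(reference, value):
    """Relative reduction of `value` against `reference`, in percent."""
    return 100.0 * (reference - value) / reference

# Yield gap between T16 (651.0 kg/ha) and T3 (69.1 kg/ha):
print(round(reduction_pct(651.0, 69.1), 1))   # 89.4, as reported
# Incidence drop for T16 (8.79%) against T3 (100%):
print(round(reduction_pct(100.0, 8.79), 1))   # 91.2, i.e. "about 91%"
```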
Moreover, the pest prefers feeding on the growing points of younger plants, because they are soft and easy to penetrate and web. That is why the most integrated treatments (early sowing and frequent sprays) resulted in reduced pest incidence and better grain yields. The results of this study are in line with Egonyu et al. [12], who reported that a higher sesame yield (1039 kg/ha) was recorded from the integration of early planting and scheduled application of insecticides at 2 and 4 WAE, while a lower yield (175 kg/ha) was recorded from late planting with no insecticide application. Similarly, higher grain yield was recorded from the integrated application of early sowing and insecticide spraying [30]. Spraying neem oil 1% or neem seed kernel extract 5% at the early stage of infestation reduced pest infestation and increased grain yield significantly [22]. The author also added that a single control strategy may not give satisfactory control, while integrated strategies give better control, better economic value and minimal environmental pollution. Khalid [32] reported sesame webworm as a major pest of sesame in west Oromia, Ethiopia, causing substantial yield loss.
Partial budget analysis
The partial budget analysis indicated that insecticide application at 2 and 4 WAE gave the highest net profit and a corresponding marginal rate of return. The most frequently sprayed (weekly spray) plot had the lowest net profit because of the cost of the insecticide (Table 4). Application of dimethoate twice, at 2 and 4 WAE, gave the highest net profit (USD 279.4), which clearly confirmed that application of insecticide at 2 and 4 WAE had the highest MRR (389.1%) among all the treatments.
Table 4 Partial budget analysis for insecticide application frequency on A. catalaunalis
Late-sown sesame was more infested with A. catalaunalis and showed a 66.1% yield reduction compared to early sowing.
The sesame webworm population built up dramatically from the seedling stage to the capsule development stage of the crop and caused substantial yield loss. Applying insecticide two times at 2 and 4 WAE gave a 66% yield advantage compared with the untreated plot, and integrating early sowing with two insecticide sprays at 2 and 4 WAE gave good control of A. catalaunalis and extended the grain yield advantage to 89.4% in comparison with the late-sown, untreated plot. In general, a promising sesame yield can be harvested when integrated management of the pest is applied. HuARC: Humera Agricultural Research Center RCBD: randomized complete block design SBN: Sesame Business Network SWW: sesame webworm TARI: Tigray Agricultural Research Institute WAE: weeks after emergence Umar H, Okoye C, Mamman B. Resource use efficiency in sesame (Sesamum indicum L.) production under organic and inorganic fertilizers applications in Keana Local Government Area, Nasarawa State, Nigeria. Res J Agric Biol Sci. 2010;6(4):466–71. FAOSTAT. Food and agriculture organization of the United Nations statistical data; 2015 [cited 2016]. http://faostat.fao.org/faostat. Woldesenbet DT, Tesfaye K, Bekele E. Genetic diversity of sesame germplasm collection (Sesamum indicum L.): implication for conservation, improvement and use. Int J Biotechnol Mol Biol Res. 2015;6(2):7–18. Geremew T, et al. Sesame production manual. EIAR/Embassy of the Kingdom of the Netherlands, Ethiopia; 2012. Egonyu J, et al. Review of pests and diseases of sesame in Uganda. In: African crop science conference proceedings; 2005. Selemun H. Study on the potential of some botanical powders and nimbecidine for the management of sesame seed bug (Elasmolomus sordidus, Fab.) (Hemiptera: Lygaeidae) in Humera, Northwest Ethiopia. Addis Ababa, Ethiopia: AAU; 2011. Suliman EH, et al. Biology and webbing behaviour of sesame webworm, Antigastra catalaunalis Duponchel (Lepidoptera: Pyraustidae). Glob J Med Plant Res. 2013;1(2):210–3.
Negash GA. Status of production and marketing of Ethiopian sesame seeds (Sesamum indicum L.): a review. Agric Biol Sci J. 2015;1(5):217–23. Zenawi G, Dereje A, Ibrahim F. Assessment of incidence of sesame webworm Antigastra catalaunalis (Duponchel) in Western Tigray, North Ethiopia. J Agric Ecol Res Int. 2016;9(4):1–9. Zenawi G, Dereje A, Ibrahim F. Insecticide application schedule to control sesame webworm Antigastra catalaunalis (Duponchel) in Humera, North Ethiopia. J Appl Life Sci Int. 2016;8(4):1–8. Gebre H, et al. Trend and variability of rainfall in Tigray, Northern Ethiopia: analysis of meteorological data and farmers' perception. Acad J Agric Res. 2013;1(6):088–100. Egonyu J, Kyamanywa S, Ssekabembe C. Integration of time of planting and insecticide application schedule to control sesame webworm and gall midge in Uganda. J Appl Biosci. 2009;18:967–75. Ahirwar M, Gupta P, Banerjee S. Bio-ecology of leaf roller/capsule borer Antigastra catalaunalis Duponchel. Adv Bio Res. 2010;1(2):90–104. Karuppaiah V, Nadarajan L. Host plant resistance against sesame leaf webber and capsule borer, Antigastra catalaunalis Duponchel (Pyraustidae: Lepidoptera). Afr J Agric Res. 2013;8(37):4674–80. Reddy PS. Annual report, 1995–96. Directorate of Oilseeds Research; 1996. Hyderabad, India. Zerabruk G. Assessment of sesame webworm (Antigastra catalaunalis (Duponchel)) and its preference for sesame (Sesamum indicum L.) varieties in Western Zone of Tigray, Ethiopia; 2017. Hawassa University. Muzaffar A, et al. Insect pests associated with sesame at Tanato Jam, Pakistan. J Appl Sci. 2002;2(7):723–6. Kumar R, Ali S, Dhoray UC. Incidence of Antigastra catalaunalis Dup. in different varieties of sesame. Mol Entomol. 2012;3(1):15–7. Chadha SS. Effect of some climatic factors on the fluctuation of population of Antigastra catalaunalis Dup. In: Oil crops: sesame and sunflower subnetworks, proceedings of the joint second workshop; 1990. Cairo, Egypt. Choudhary M, et al.
Evaluation of sequences of integrated pest management practices against sesame leaf and capsule borer, Antigastra catalaunalis. J Pharmacogn Phytochem. 2017;6(4):1440–4. Mandefro N, et al. Improved technologies and resource management for Ethiopian agriculture: training manual; 2009. Karuppaiah V. Eco-friendly management of leaf webber and capsule borer (Antigastra catalaunalis Duponchel) menace in sesame; 2014. Abdalla NA, Mohamed AH. The effects of sowing dates and varieties on the infestation with the sesame webworm, Antigastra catalaunalis, in sesame. Wad Medani, Sudan: Agriculture Research Corporation; 2014. Simoglou B, et al. First report of Antigastra catalaunalis on sesame in Greece. Entomologia Hellenica. 2017;26:6–12. Nayak MK, et al. Incidence and avoidable loss due to leaf roller/capsule borer. Ann Plant Soil Res. 2015;17(2):163–6. Wazire NS, Patel JI. Efficacy of insecticides against sesamum leaf webber and capsule borer, Antigastra catalaunalis (Duponchel). Indian J Entomol. 2015;77(1):11–7. Ali S, Jan A. Sowing dates and nitrogen level effect on yield and yield attributes of sesame cultivars. J Agric. 2014;30(2):203–9. Ali A, et al. Effect of sowing dates and row spacing on growth and yield of sesame. J Agric Res. 2005;43(1):19–26. Ogbonna P, Umar-Shaaba Y. Yield responses of sesame (Sesamum indicum L.) to rates of poultry manure application and time of planting in a derived savannah ecology of south eastern Nigeria. Afr J Biotech. 2011;10(66):14881–7. Yohannes E, Samuel S, Geremew T. Survey and management of sesame webworm Antigastra catalaunalis Duponchel (Lepidoptera: Pyraustidae) in North Gondar Administrative Zone; 2016. University of Gondar. Muez B. Advanced screening of best performed insecticides for the control of webworm on sesame at Kafta-Humera Woreda, in regional review; 2015. Humera Agricultural Research Center/TARI. Khalid K. The survey on field insect pests of sesame (Sesamum indicum L.)
in East Wollega and Horo Guduru Wollega zones, west Oromia, Ethiopia. Int J Entomol Res. 2017;2(3):22–6. DA participated in the statistical analysis of data and edited the manuscript. IF participated in the design of the study and helped to draft the manuscript. ZG conceived of the study, participated in its design and coordination, performed the statistical analysis, and developed the manuscript. All authors read and approved the final manuscript. The authors would like to acknowledge the Tigray Agricultural Research Institute (TARI) and the Sesame Business Network (SBN) for their financial support. Deepest gratitude goes to the HuARC researchers for the roles they played. We declare that the data used in the manuscript will be kept intact and can be made available by the corresponding author, on request, to anyone who wishes to see them. We declare that we have all the ethical approval and consent from our organization to participate in research paper writing and submission to any relevant journal. The authors hereby declare that, with the exception of references to past and current studies consulted, which have been duly acknowledged, the work presented was carried out from June 2015 to July 2016 at the Humera Agricultural Research Center. This work was funded by the Tigray Agricultural Research Institute (TARI) and the Sesame Business Network (SBN) from their recurrent budgets. Tigray Agricultural Research Institute, Humera Agricultural Research Center, P.O. Box 62, Humera, Ethiopia: Zenawi Gebregergis. Department of Crop and Horticultural Science, Mekelle University, P.O. Box 231, Mekelle, Ethiopia: Dereje Assefa & Ibrahim Fitwy. Correspondence to Zenawi Gebregergis. Gebregergis, Z., Assefa, D. & Fitwy, I. Sesame sowing date and insecticide application frequency to control sesame webworm Antigastra catalaunalis (Duponchel) in Humera, Northern Ethiopia. Agric & Food Secur 7, 39 (2018).
DOI: https://doi.org/10.1186/s40066-018-0190-4 Keywords: A. catalaunalis; capsule damage; insecticide frequency
Viktor Maslov (mathematician) Viktor Pavlovich Maslov (Russian: Виктор Павлович Маслов; 15 June 1930 – 3 August 2023) was a Russian mathematical physicist. He was a member of the Russian Academy of Sciences. He obtained his doctorate in physico-mathematical sciences in 1957.[2] His main fields of interest were quantum theory, idempotent analysis, non-commutative analysis, superfluidity, superconductivity, and phase transitions. He was editor-in-chief of Mathematical Notes and the Russian Journal of Mathematical Physics.
• Born: Viktor Pavlovich Maslov, 15 June 1930, Moscow, Russian SFSR, USSR
• Died: 3 August 2023 (aged 93)
• Nationality and citizenship: Russian
• Alma mater: Lomonosov Moscow State University
• Known for: Maslov index
• Spouse: Lê Vũ Anh (m. 1975–1981)
• Awards: State Prize of the USSR (1978); A. M. Lyapunov Gold Medal (USSR Academy of Sciences, 1982); Lenin Prize (1985); State Prize of the Russian Federation (1997); Demidov Prize (2000); Independent Russian Triumph Prize (2002); State Prize of the Russian Federation (2013)[1]
• Fields: physico-mathematics
• Institutions: Lomonosov Moscow State University
• Doctoral advisor: Sergei Fomin[2]
The Maslov index is named after him. He also introduced the concept of a Lagrangian submanifold.[2] Early life and career Viktor Pavlovich Maslov was born in Moscow on 15 June 1930. He was the son of the statistician Pavel Maslov and the researcher Izolda Lukomskaya, and the grandson of the economist and agriculturalist Petr Maslov. At the beginning of World War II, he was evacuated to Kazan with his mother, grandmother, and other members of his mother's family.[3] In 1953 he graduated from the Physics Department of Moscow State University and then taught at the university. In 1957 he defended his Ph.D. thesis and in 1966, his doctoral dissertation.
In 1984, he was elected an academician in the Department of Mathematics of the Academy of Sciences of the USSR.[4] From 1968 to 1998, he headed the Department of Applied Mathematics at the Moscow Institute of Electronics and Mathematics. From 1992 to 2016, he was in charge of the Department of Quantum Statistics and Field Theory of the Physics Faculty of Moscow State University.[4] Maslov headed the laboratory of the mechanics of natural disasters at the Institute for Problems in Mechanics of the Russian Academy of Sciences. He was a research professor at the Department of Applied Mathematics at the Moscow Institute of Electronics and Mathematics of the Higher School of Economics.[4] Scientific activity Maslov was known as a prominent specialist in the fields of mathematical physics, differential equations, functional analysis, mechanics, and quantum physics. He developed asymptotic methods, now bearing his name, that are widely applied to equations arising in quantum mechanics, field theory, statistical physics, and abstract mathematics.[5] Maslov's asymptotic methods are closely related to such problems as the theory of the self-consistent field in quantum and classical statistics, superfluidity and superconductivity, quantization of solitons, quantum field theory in strong external fields and in curved space-time, and the method of expansion in the inverse number of particle types. In 1983, he attended the International Congress of Mathematicians in Warsaw, where he presented the plenary report "Non-standard characteristics of asymptotic problems".[6] Maslov dealt with problems of liquids and gases and carried out fundamental research on problems of magnetohydrodynamics related to the dynamo problem. He also made calculations for the emergency unit of the Chernobyl nuclear power plant during the 1986 disaster.
In 1991, he made models and forecasts of the economic situation in Russia.[6] From the early 1990s, he worked on the application of equations of mathematical physics to economics and financial analysis. In particular, he managed to predict the 1998 Russian financial crisis and, even earlier, the collapse of the economic and, as a consequence, the political system of the USSR.[6] In 2008, Maslov, by his own account, predicted the global recession of the late 2000s. He calculated the critical level of U.S. debt and found that a crisis would break out in the near future. In the calculations, he used equations similar to the equations of phase transitions in physics. In the mid-1980s, Maslov introduced the term tropical mathematics, in which the operations of constrained optimization problems are considered.[7] Personal life In the early 1970s, he met Lê Vũ Anh, the daughter of Lê Duẩn, then General Secretary of the Communist Party of Vietnam, when she was a student at the Faculty of Physics of Moscow State University. The romance was considered scandalous because Vietnamese students studying abroad were not allowed to have romantic relationships with foreigners, and anyone caught would be disciplined and might be sent back to Vietnam. To avoid trouble, she returned home to marry a Vietnamese student from the same university and wanted to stay in Vietnam to forget her love affair with Maslov. However, she was forced by her father to return to the USSR to complete her studies.[8] When she and her husband returned to Moscow, Anh realized that she did not love her husband and could not forget her former lover. She decided to live separately from her husband and secretly continued seeing Maslov. After becoming pregnant for the second time, her first pregnancy having ended in a miscarriage, Anh found the strength to ask her husband for a divorce so that she could marry Maslov. In 1975, she and Maslov married.
She gave birth to a daughter, Lena, on 31 October 1977. Meeting her father by chance when he went to the USSR on a state visit, Anh confessed her love affair to him. Lê Duẩn did not accept it and tried to lure her back to the country. However, Anh gradually reconciled with her family.[8] After the birth of her second daughter, Tania, Anh gave birth to a son, Anton, in 1981. Anh died of a hemorrhage shortly after giving birth to her son.[9] Immediately after Anh died, a dispute arose with his wife's family over custody of the three children. An official from the Vietnamese Communist Party's Central Committee took over the communication between Maslov and Anh's family. The two sides reached a compromise: Maslov would keep his daughters, and his son would go to Lê Duẩn. Maslov allowed his son to go to Vietnam for only two years, but after the deadline his son was not returned to him. Maslov had to fight for two more years before Lê Duẩn agreed to bring his grandson to meet his father.[10] However, the son Maslov met was no longer the Anton Maslov of before, but a Vietnamese citizen with the new name Nguyễn An Hoàn, unable to speak Russian. According to Maslov, Lê Duẩn not only had no intention of returning the child but also hoped to take back his daughters. Fearing the loss of his children, Maslov contacted the son of Andrei Gromyko, Chairman of the Presidium of the Supreme Soviet of the USSR, who was a close friend of Soviet leader Mikhail Gorbachev. He was advised to write to Gorbachev and was promised that Gorbachev would be persuaded to read the letter. After a massive legal struggle, Lê Duẩn gave up the idea of taking the children back.[9] His children later settled in England and the Netherlands, where they were highly successful in their respective professions.[9] Maslov later remarried; his second wife, Irina, was the same age as Anh. Irina is a linguist, and she received the title of Associate Doctor of Science in 1991.
For the last three decades of his life, he lived in Troitsk.[9] Viktor Maslov died on 3 August 2023, at the age of 93.[11] Selected books • Maslov, V. P. (1972). Théorie des perturbations et méthodes asymptotiques. Dunod; 384 pages.[12] • Karasëv, M. V.; Maslov, V. P.: Nonlinear Poisson brackets. Geometry and quantization. Translated from the Russian by A. Sossinsky [A. B. Sosinskiĭ] and M. Shishkova. Translations of Mathematical Monographs, 119. American Mathematical Society, Providence, RI, 1993.[13] • Kolokoltsov, Vassili N.; Maslov, Victor P.: Idempotent analysis and its applications. Translation of Idempotent analysis and its application in optimal control (Russian), "Nauka", Moscow, 1994. Translated by V. E. Nazaikinskii. With an appendix by Pierre Del Moral. Mathematics and its Applications, 401. Kluwer Academic Publishers Group, Dordrecht, 1997. • Maslov, V. P.; Fedoriuk, M. V.: Semi-classical approximation in quantum mechanics. Translated from the Russian by J. Niederle and J. Tolar. Mathematical Physics and Applied Mathematics, 7. Contemporary Mathematics, 5. D. Reidel Publishing Co., Dordrecht–Boston, Mass., 1981.[14] This book had been cited over 700 times on Google Scholar as of 2011. • Maslov, V. P. Operational methods. Translated from the Russian by V. Golo, N. Kulman and G. Voropaeva. Mir Publishers, Moscow, 1976. References 1. Viktor Maslov, HSE 2. "Fiftieth Anniversary of research and teaching by Viktor Pavlovich Maslov" (PDF). 3. "The Central Database of Shoah Victims' Names". www.yvng.yadvashem.org. Retrieved 2021-07-29. 4. "Маслов, Виктор Павлович". www.tass.ru. Retrieved 2021-07-29. 5. Proceedings of the International Congress of Mathematicians, August 16–24, 1983, Warszawa. 6. "Академику Маслову Виктору Павловичу - 90 лет!". www.ras.ru. 2020-06-15. Retrieved 2021-07-29. 7.
Medvedev, Yuri (2009-03-12). "Он рассчитал катастрофу". www.rg.ru. Retrieved 2021-07-29. 8. "Về câu chuyện tình của con gái Tổng Bí thư Lê Duẩn với viện sĩ khoa học Nga". www.cand.com.vn. 2016-08-26. Retrieved 2021-07-29. 9. "Hồi ký của VS Maslov về mối tình với Lê Vũ Anh". www.nguoivietodessa.com. 2016-08-28. Retrieved 2021-07-29. 10. "Câu chuyện tình buồn bí mật của Lê Vũ Anh con gái ông Lê Duẩn lấy chồng người Nga". www.ttx.vanganh.org. Retrieved 2021-07-29. 11. Умер Виктор Павлович Маслов (in Russian) 12. Streater, R. F. (1975). "Review of Théorie des perturbations et méthodes asymptotiques by V. P. Maslov". Bulletin of the London Mathematical Society. 7 (3): 334. doi:10.1112/blms/7.3.334. ISSN 0024-6093. 13. Libermann, P. (1996). "Book Review: Nonlinear Poisson brackets, geometry and quantization". Bulletin of the American Mathematical Society. 33: 101–106. doi:10.1090/S0273-0979-96-00619-2. 14. Blattner, Robert J.; Ralston, James (1983). "Joint review of Lagrangian analysis and quantum mechanics, a mathematical structure related to asymptotic expansions and the Maslov index by Jean Leray; Semi-classical approximation in quantum mechanics by V. P. Maslov and M. V. Fedoriuk". Bulletin of the American Mathematical Society. 9 (3): 387–397. doi:10.1090/S0273-0979-1983-15224-2. External links • Viktor Maslov at the Mathematics Genealogy Project
\begin{definition}[Definition:Category with Products] Let $\mathbf C$ be a metacategory. Then $\mathbf C$ is said to '''have products''' or to be a '''(meta)category with products''' {{iff}}: :For all sets of objects $\CC \subseteq \mathbf C_0$, there is a product $\ds \prod \CC$ for $\CC$. \end{definition}
The influence of low-intensity physiotherapeutic ultrasound on the initial stage of bone healing in rats: an experimental and simulation study Aldo José Fontes-Pereira1, Marcio Amorim2, Fernanda Catelani1, Daniel Patterson Matusin3, Paulo Rosa1, Douglas Magno Guimarães4, Marco Antônio von Krüger1 & Wagner Coelho de Albuquerque Pereira1 Low-intensity physiotherapeutic ultrasound has been used in physical therapy clinics; however, some scientific issues remain regarding its effect on the bone-healing process. The objective of this study was to investigate the influence of low-intensity physiotherapeutic ultrasound on the initial stage of bone healing in rats. Twenty-two adult male rats were assessed quantitatively and qualitatively using radiographic, biochemical, and histological analyses. Numerical simulations were also performed. Fractures in animals in the ultrasound group (n = 11) were treated with low-intensity ultrasound (pulsed mode, duty cycle 20 %) for 10 min daily at an intensity of 40 mW/cm2 SATA (1.0 MHz) for 10 days. Fractures in animals in the control group (n = 11) were not treated. Alkaline phosphatase levels were non-significantly higher in the ultrasound group than in the control group in the time intervals considered (t(13) = 0.440; 95 % confidence interval (CI) −13.79 to 20.82; p = 0.67). Between-group serum calcium levels were also not significantly different (t(13) = −0.842; 95 % CI −0.48 to 0.21; p = 0.42). Finally, there were no significant differences in radiological scores between the two groups (U = 118; 95 % CI −1.99 to 1.99; p = 0.72). However, the diameter of the newly formed bone tissue was greater and more evident in the ultrasound group. Thirteen days after fracture, there were no significant between-group differences in bone-healing processes, although the increased alkaline phosphatase levels and the diameter of the new bone tissue warrant further investigation.
Even though it is one of the most rigid and resilient substances in the human body, bone tissue is constantly exposed to conditions, such as injury or fracture, that may affect its structural integrity [1, 2]. The occurrence of a fracture triggers a complex healing process that restores bone mechanics and functional integrity [1, 3]. This process is dynamic and features well-defined stages of repair regulated by a variety of cellular elements and stimulant agents [4]. Sometimes there are complications in this process, resulting in retardation of fracture union with the risk of pseudoarthrosis [5] and other consequences, such as long and painful treatment, missed work, reduction in patients' quality of life and general well-being, and increased public health-care expenditures [6, 7]. Thus, efforts to identify treatments that accelerate the bone consolidation process are justified [2, 8]. Physiotherapy offers several options for treating fractures, including therapeutic ultrasound (TUS), which is routinely used in physical therapy clinics [9–11]. According to Wolff's law, ultrasonic stimulation generates micro-mechanical forces and tension at the fracture site, resulting in accelerated bone formation. It has also been reported that low-intensity pulsed ultrasound stimulation (LIPUS) increases bone metabolism [8, 12, 13], accelerating bone healing by shortening the inflammation and soft and hard callus formation phases [14]. Well-documented studies [8, 12, 14] with low-intensity ultrasonic waves have shown evidence of their effects on bone healing. Commercial equipment especially designed to deliver low-intensity ultrasound for the purpose of bone healing is normally set at a fixed intensity of 30 mW/cm2. However, this equipment costs about 10 times more than the TUS equipment commonly used for general purposes in physical therapy clinics.
An investigation of this potential use should include careful steps to enable standardization of the duration and intensity of irradiation required for effective treatment. To date, no conclusive investigation on the evaluation of cellular and biochemical mechanisms triggered by TUS [2, 9] has been reported. The present study analyzed the radiographic, biochemical, and histological effects of TUS at an intensity of 40 mW/cm2 in induced fractures of rat tibias with the objective of evaluating the effect of ultrasound intensity provided by common TUS equipment on the initial stage of fracture healing. The study was approved by the Ethics Committee in Research of the Evandro Chagas Institute, Pará, Brazil (protocol n.009/2012) according to the guidelines for the care and use of laboratory animals [15], National Legislation of Animal Vivisection in Force (Federal Law 11,794 of October 8, 2008), and international and national ethical instructions (Laws 6638/79, 9605/98, Decree 24665/34). The sample consisted of 22 male rats (Rattus norvegicus, McCoy strain) at least 90 days of age and weighing 325 ± 25 g. Each rat was maintained at a controlled temperature (23 ± 2 °C) in cages measuring 45 × 15 × 30 cm and lined with autoclavable rice straw that was exchanged on alternate days. The animals received water and food ad libitum. The rats were randomly divided into two experimental groups: a control group (CG), consisting of 11 rats that underwent induced fractures in the middle one third of the right tibia without receiving any treatment, and an ultrasound group (USG), consisting of 11 animals that underwent the same fracture procedure and received low-intensity TUS. Fracture induction Prior to fracture induction, the rats were anesthetized with intraperitoneal ketamine (80 mg/kg) and xylazine (15 mg/kg) solution [2, 8] at a dose of 0.6 mL per 100 g body weight. 
After sedation, the rats were placed in the lateral decubitus position and then subjected to fracture of the middle one third of the diaphysis of the right tibia of the hind limb, using the equipment previously described [2]. Afterwards, the rats were placed in cages, with a maximum of four rats per cage, and received analgesic therapy during the entire experimental period (200 mg/kg paracetamol dissolved in water). The animals were not immobilized after the fracture [2, 16, 17]. After 24 h, TUS treatment was applied to the fracture gap while the animal was in the lateral decubitus position. The rats were not sedated during treatment. Stationary ultrasound equipment (BIOSET® model SONACEL PLUS, Bioset Industry Electronic Technology Ltda., Rio Claro, SP, Brazil) was used on the fracture site, with a frequency of 1 MHz, intensity of 40 mW/cm2 (SATA), pulsed mode, duty cycle 20 %, pulse repetition frequency of 100 Hz, pulse width of 2 ms, and an effective radiating area of 0.79 cm2. The equipment was found to meet the IEC 60601-1-2:2010 regulation, according to the calibration performed by the Electrical Engineering Department of the Federal University of Pará (Brazil). The coupling material was a commercial water-soluble gel. Treatment was performed for 10 min once per day for five consecutive days, followed by a 2-day period without irradiation. This sequence was repeated for a total of 10 sessions. Post-treatment procedures Once the treatment protocol was complete, the animals fasted for 12 h. Then, they were anesthetized intraperitoneally with ketamine hydrochloride (80 mg/kg) and xylazine (15 mg/kg) at a dose of 0.6 mL per 100 g body weight. While completely sedated, the rats underwent exsanguination by cardiac puncture (~5 mL of blood was collected for biochemical analysis) [2] and were then euthanized by decapitation.
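As an illustrative aside (not part of the original protocol), the dosimetry parameters reported above are internally consistent, and two derived quantities follow directly from them: the temporal-peak intensity implied by the SATA value and duty cycle, and the time-averaged acoustic power over the effective radiating area. The sketch below is a hedged consistency check of that arithmetic.

```python
# Consistency check of the reported ultrasound dosimetry (illustrative only).

pulse_width_ms = 2.0    # pulse width: 2 ms
prf_hz = 100.0          # pulse repetition frequency: 100 Hz
i_sata_mw_cm2 = 40.0    # spatial-average temporal-average intensity, mW/cm2
era_cm2 = 0.79          # effective radiating area, cm2

period_ms = 1000.0 / prf_hz               # 10 ms between pulse onsets
duty_cycle = pulse_width_ms / period_ms   # fraction of time the beam is on

# SATA = SATP x duty cycle, so the temporal-peak intensity during the pulse is:
i_satp_mw_cm2 = i_sata_mw_cm2 / duty_cycle   # ~200 mW/cm2

# Time-averaged acoustic power emitted over the radiating area:
power_mw = i_sata_mw_cm2 * era_cm2           # ~31.6 mW

print(duty_cycle)     # 0.2, matching the stated 20 % duty cycle
print(i_satp_mw_cm2)  # temporal-peak intensity, mW/cm2
print(power_mw)       # time-averaged output power, mW
```

Note that the 2-ms pulse at a 100-Hz repetition rate reproduces the stated 20 % duty cycle exactly, which is a useful sanity check when transcribing such protocols.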
Analysis of the biochemical markers of bone matrix synthesis was performed using a Labtest kit (Vital Scientific NV, Holliston, MA, USA) with absorbance at 590 nm for measurement of the serum alkaline phosphatase level, and a laboratory kit (Vital Scientific) with absorbance at 570 nm to determine the concentration of serum calcium. Both analyses were performed using the Vitalab Selectra and Chemistry Analyzer automated system (Vital Scientific). For radiological evaluation, the rats' right hind limbs were disarticulated from the hip, fixed in buffered 10 % paraformaldehyde, and subsequently submitted to analysis. Radiographs were obtained in the lateral view with the same radiographic technique (40 kV × 2 mA), using an exposure time of 0.6 s, and always at the same distance from the X-ray tube (30 cm). Radiographic analysis was performed by two independent observers blinded to the treatment group, who examined the callus formation, the quality of bone union, and bone remodeling according to the radiographic scoring system for osseous healing [6, 18, 19]. This radiographic scoring system has three categories (periosteal reaction, quality of bone union, and remodeling) regarding fracture healing. The first two categories are scored from 0 to 3 points and the third from 0 to 2, so the maximum expected score is 8 (the sum of the maximum scores of the categories) for complete bone fracture repair (Table 1). Table 1 Radiographic scoring system for fracture healing [6, 18, 19] Finally, the right hind limb was dissected until the tibia was totally exposed and was immersed for 36 h in a decalcifying solution of 5 % ethylenediaminetetraacetic acid (EDTA). Then, histological slides were prepared using standard histotechnical procedures and a microtome (Leica Microsystems, Wetzlar, HE, DE). Tissue sections 5 μm thick were obtained from the region of the bone callus and stained with hematoxylin-eosin [20].
Qualitative analysis of the slides, evaluating bone formation by estimating the thickness of the newly formed tissue, was performed with an optical microscope (Carl Zeiss Microscopy LLC, Thornwood, NY, USA) coupled to an AxioCam HRC video camera (Carl Zeiss Microscopy LLC, Thornwood, NY). Simulation configuration Numerical, two-dimensional simulations of wave propagation were performed with SimSonic software developed at the Laboratoire d'Imagerie Paramétrique (CNRS, University Paris 6, Paris, France), employing the finite-difference time-domain (FDTD) method applied to the elastodynamic equations (Fig. 1) [21]. Diagram of the numerical model configuration A numerical model with a spatial resolution of 0.01 mm represented the region of the fracture; it consisted of two cortical plates with bone marrow between them, surrounded above and below by muscle, fat, and skin. Interruption of the cortical bone represented the fracture gap. The thicknesses of the cortical layers and bone marrow were 0.04 and 0.1 mm, respectively, and the fracture gap was 4 mm. The thicknesses of the muscle, fat, and skin were 1.73, 0.58, and 0.55 mm, respectively. These values are averages of measurements obtained from radiographs of the animals used in the experiments. In the two-dimensional numerical model, a 10-mm line source placed on the upper surface of the skin layer generated a longitudinal pulsed wave of 1 MHz with a duration of 3 μs. Fifteen receivers (R1–R15) positioned along the propagation axis recorded pressure for 20 μs, starting just before the pulse generation. Receivers R2, R4, R6, R8, R10, R12, and R14 were located in the middle of each layer, while receivers R3, R5, R7, R9, R13, and R15 were located over the interfaces between layers (1 pixel apart). Perfectly matched layers (PML) were assumed at the boundaries, and absorption was disregarded.
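To make the layered geometry concrete, the sketch below assembles a 2-D material map of the kind an FDTD solver consumes, using the layer thicknesses and the 0.01-mm grid step quoted above. The symmetric skin/fat/muscle stack above and below the bone follows the description in the text, but the lateral width of 20 mm, the integer material labels, and the way the gap is filled with marrow-like tissue are illustrative assumptions, not the actual SimSonic input format.

```python
import numpy as np

DX = 0.01  # grid step in mm (spatial resolution of the model)

def cells(thickness_mm):
    """Number of grid cells spanning a layer of the given thickness."""
    return round(thickness_mm / DX)

# Layer stack from top to bottom, mirroring the description in the text:
# skin / fat / muscle / cortical / marrow / cortical / muscle / fat / skin.
stack = [
    ("skin", 0.55), ("fat", 0.58), ("muscle", 1.73),
    ("cortical", 0.04), ("marrow", 0.10), ("cortical", 0.04),
    ("muscle", 1.73), ("fat", 0.58), ("skin", 0.55),
]
labels = {"skin": 1, "fat": 2, "muscle": 3, "cortical": 4, "marrow": 5}

width_mm, gap_mm = 20.0, 4.0  # lateral extent (assumed) and 4-mm fracture gap
nx = cells(width_mm)
ny = sum(cells(t) for _, t in stack)
grid = np.zeros((ny, nx), dtype=np.uint8)

# Fill the horizontal layers.
y = 0
for name, t in stack:
    grid[y:y + cells(t), :] = labels[name]
    y += cells(t)

# The fracture: interrupt the cortical plates over a laterally centred 4-mm gap,
# replacing cortical cells with marrow-like soft tissue (a simplification).
g0 = nx // 2 - cells(gap_mm) // 2
gap = grid[:, g0:g0 + cells(gap_mm)]
gap[gap == labels["cortical"]] = labels["marrow"]
```

Each material label would then be mapped to the elastic constants and densities of Table 2 when building the solver input.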
The elastic constants (C11, C22, C33, and C12) and densities (Table 2) obtained from the literature [21] were used to model the isotropic mechanical responses of each material. The longitudinal velocities of the isotropic materials and the calculated acoustic impedances were within the range of values reported in the literature [22]. Table 2 Mechanical properties of tissues [21] Parameters evaluated The parameters used in the analysis were the time-of-flight of the first arriving signal (TOFFAS), the sound pressure level (SPL), and the root-mean-square (RMS) amplitude [23]. The TOFFAS evaluates the duration of the ultrasound wave propagation from the emitting transducer to the corresponding receiving transducer, and it responds to impedance differences in the propagation medium. The TOFFAS was obtained by parabolic interpolation of five amplitude points of each receiver's signal around a given threshold. The SPL and RMS were used to evaluate the ultrasound wave amplitudes, providing, respectively, the attenuation and the energy of the signal of each receiver. While the SPL was calculated from the peak of the wave (Eq. 1), the RMS (Eq. 2) used a temporal window of 10.9 μs starting at the first arriving signal (FAS), as follows: $$ \mathrm{SPL} = 20 \cdot \log_{10} \left( \frac{A_{\mathrm{R}k}}{A_{\mathrm{R}1}} \right), $$ where $A_{\mathrm{R}k}$ is the peak amplitude of the signal of receivers R2 to R15 ($k$) and $A_{\mathrm{R}1}$ corresponds to the amplitude of the reference signal (receiver R1); $$ \mathrm{RMS} = \sqrt{\frac{\sum_{k=1}^{N} A_{\mathrm{R}k}^2}{N}}, $$ where $A_{\mathrm{R}k}$ corresponds to the signal amplitude of each receiver ($k$ = 1–15) and $N$ is the total number of signals. Power tables for Cohen's d effect size were used to calculate the sample size: Cohen's d = 1.2, two-tailed, α = 0.05, and power of 0.8 [24]. Data normality was examined using the Kolmogorov-Smirnov test.
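Under the definitions of Eqs. 1 and 2, the two amplitude metrics reduce to a few lines of code. The sketch below is an illustrative implementation on a synthetic tone burst (the 10.9-μs window would, in practice, be converted to a number of samples using the simulation's sampling rate; the signals here are assumptions for demonstration).

```python
import numpy as np

def spl_db(signal, reference):
    """Sound pressure level of a received signal relative to reference R1 (Eq. 1)."""
    return 20.0 * np.log10(np.max(np.abs(signal)) / np.max(np.abs(reference)))

def rms(signal):
    """Root-mean-square amplitude over the analysis window (Eq. 2)."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.sum(signal ** 2) / signal.size)

# Toy example: a received pulse at half the reference amplitude.
t = np.linspace(0.0, 3e-6, 300)        # 3-us window, 300 samples
ref = np.sin(2 * np.pi * 1e6 * t)      # 1-MHz reference burst (receiver R1)
rx = 0.5 * ref                         # attenuated copy at some receiver Rk

print(spl_db(rx, ref))  # about -6 dB for a half-amplitude signal
print(rms(rx))          # about 0.35 for a half-amplitude sine burst
```

Applying `spl_db` and `rms` to the windowed signal of each receiver R2–R15, with R1 as the reference, reproduces the per-receiver curves of the kind shown in Fig. 4d, e.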
Student's t test (t) was used to evaluate differences in biochemical markers, and the Mann-Whitney U test (U) was used to evaluate differences in radiographic scores between the groups. The inter-observer agreement was calculated using the kappa coefficient (K). Statistical analysis was performed using SPSS (version 20.0, IBM Corporation, Armonk, NY, USA). The level of significance was α = 0.05, with a confidence interval of 95 %.

The USG presented higher levels of alkaline phosphatase (86.38 ± 18.94 U/L) than did the CG (82.86 ± 10.03 U/L) for the time interval considered, but with no statistical significance (t(13) = 0.440; 95 % CI −13.79 to 20.82; p = 0.67). Serum calcium levels also were not significantly different (t(13) = −0.842; 95 % CI −0.48 to 0.21; p = 0.42); however, the CG had higher levels of serum calcium (10.04 ± 0.26 mg/dL) than did the USG (9.90 ± 0.35 mg/dL). Seven samples were discarded because of coagulation.

The qualitative histological analysis revealed the formation of immature bone in both groups. However, the diameter of the newly formed bone tissue was greater and more evident in the USG (Fig. 2).

Fig. 2 Formation of an immature bone in the control group (CG) (a hematoxylin and eosin (H&E) ×20 and b H&E ×40) and the ultrasound group (USG) (c H&E ×20 and d H&E ×40) was similar. The diameter of the newly formed bone tissue (asterisk) was greater and more evident in the USG

The inter-observer reliability for the total radiographic score was K = 0.64 (p < 0.001), and for the three categories periosteal reaction, quality of bone union, and remodeling it was K = 0.63 (p < 0.001), K = 0.72 (p < 0.001), and K = 1.00 (p < 0.001), respectively. The scoring system for radiographic fracture healing showed no significant difference between the groups (U = 118; 95 % CI −1.99 to 1.99; p = 0.72) (Fig. 3).
Fig. 3 Radiographic scoring system for fracture healing and categories (periosteal reaction, quality of bone union, and remodeling)

Figure 4a shows wave propagation along the tissue, Fig. 4b shows an example of a typical signal detected by the receivers, and Fig. 4c presents the TOFFAS, which increased with the depth of the receivers and the thickness of tissue. It should be noted that between the signal of receiver R1 (positioned at the top of the model) and R2 (0.28 mm from the top of the model), there is a computational adjustment mechanism for TOFFAS prediction, so this parameter is considered only from receiver R2. Figure 4d, e shows the attenuation and signal energy of each receiver, respectively.

Fig. 4 a Wave propagation along the tissue. b Signal of receiver R8, positioned in the center of the fracture gap, showing the time-of-flight of the first arriving signal (TOFFAS). c TOFFAS of receivers R1 to R15. d Sound pressure level (SPL) of receivers R1 to R15. e Root mean square (RMS) of receivers R1 to R15

The use of conventional TUS for fracture treatment could mean a reduction in treatment costs for bone fractures, therefore making this therapeutic modality more accessible to the general population. Thus, this work was intended to elucidate the effects of TUS on the bone-healing process. The animal model R. norvegicus (McCoy strain) was used, as it has pathophysiological and biomechanical properties similar to those of the human bone [11]. A closed-fracture model was chosen to reduce the risk of infection, which would alter the consolidation process [2]. Several studies [2, 17] were unsuccessful in stabilizing fractures in rats, whether by invasive treatments, such as Kirschner wires, or noninvasive treatments, such as the use of a plaster splint. Several complications arose, namely bone fractures in other regions of the limb, compartment syndrome, and infection at the site of invasive fixation.
Thus, in the present study, none of the fractures was immobilized. The method of treatment adopted for this study follows the usual human physical therapy treatment protocol. According to Einhorn [25], between 10 and 16 days it is possible to identify four healing stages. Our protocol lasted 13 days, so we can assume that the healing process was occurring and that our results indicate that the ultrasonic dose we used was not able to accelerate this process.

Biochemical analysis was performed using the indices of alkaline phosphatase and serum calcium. These markers were used to evaluate the process of bone formation [13, 16, 26], since calcium is a component of the bone matrix and alkaline phosphatase activity accompanies osteoblastic formation [26, 27]. The results of this biochemical analysis were not statistically significant, although ultrasound did promote increased alkaline phosphatase activity in the treated versus the untreated rats. These results were similar to those in a previous study [2] that tested pulsed TUS (0.2 W/cm2) in rats with bone fractures after 5 weeks of treatment. Leung et al. [28], using LIPUS equipment designed for bone healing, showed that treatment for 20 min per day at 30 mW/cm2 increased levels of alkaline phosphatase. Guerino et al. [29] suggested that the increased level of alkaline phosphatase seen with TUS is possibly associated with increased cell proliferation and mineralization.

Histological analysis showed that animals treated with TUS had thicker bone formation, suggesting that TUS influenced the consolidation of bone tissue. A similar result was found by Oliveira et al. [30] when LIPUS equipment was used at an intensity of 30 mW/cm2. However, our findings are not yet conclusive regarding bone-healing acceleration. Perhaps TUS can influence bone formation, but at least for the dose that we applied, no statistically significant difference was found.
The qualitative radiological analysis showed a greater volume of bone callus in animals in the USG. Thus, as in the study of Kumagai et al. [8], radiological evaluation showed that the area of hard callus was significantly higher in the LIPUS-treated animals than in the animals in the CG. However, the scoring system for radiographic fracture healing showed no significant difference between the groups. These findings support the use of quantitative measures (e.g., quantitative ultrasound, bone densitometry, quantitative computed tomography) that are often overlooked in traumatology studies. Such tools can be used to minimize the subjectivity of evaluators and to reveal real statistical differences between therapies.

Simulation analysis showed that TOFFAS values were consistent with the localization of the receivers in the numerical model, so that the highest values were observed with a larger distance between receivers or when they were farther from the emitter. The arrangement of the receivers, chosen with respect to the thickness of each tissue, showed that the interior of the fracture exhibited the smallest change in TOFFAS (receivers R7–R9) (i.e., the wave propagates faster inside the fracture). This fact should be taken into account in case therapy depends on the propagation time of ultrasound inside a given region. Catelani et al. [23], using SimSonic software (CNRS, University Paris 6, Paris, France), found that the interior regions of fractures near the cortical bone-bone marrow interface showed a greater reduction of TOFFAS values with respect to receivers located in the center of the fracture (due to the formation of lateral waves with a velocity compatible with the cortical bone). This could account in part for the mechanisms involved in fracture healing stimulated by LIPUS. Additionally, in this study, it was possible to note the reduction of TOFFAS through the interior of the fracture.
The results of the SPL and RMS analyses showed similar behavior across receivers. The highest concentration of energy was observed in receivers near the skin-fat, fat-muscle, and bone marrow-muscle interfaces. Receivers R4, R7, and R12 showed the highest energy concentration, followed by the adjacent receivers R3, R8, and R11. The impedance mismatch led to reflection of the ultrasound waves at the interfaces, which may explain the higher concentration of energy in the soft tissues and the attenuation of the ultrasound wave in the center of the model and in deeper regions, where the greatest attenuation would be expected.

The intensity of LIPUS proposed in the literature (30 mW/cm2) seems to provide a good stimulus for accelerating the bone-healing process. This intensity would not be directly proportional to a higher concentration of energy at the fracture site, however, given that the intensity of the ultrasound used in this study was approximately 33 % higher (40 mW/cm2) and it did not change the duration of bone repair. Since power concentration and local induction heating are considered harmful to bone healing, factors such as the rapid passage of ultrasound through the fracture [23], which is associated with low-intensity TUS, are closely related to the acceleration of consolidation. Conversely, higher intensities would increase the risk of local heating, hindering this process.

The ultrasound equipment commonly found in physical therapy clinics may influence the bone-healing process according to anecdotal evidence from books, blogs, and practitioners, but this claim has little scientific basis [9]. In this study, we could not find evidence that TUS influences the bone-healing process. On the other hand, some aspects of the procedure need to be clarified (e.g., ultrasonic intensity, duration of treatment) with respect to the changes in levels of alkaline phosphatase and the diameter of new bone formation observed in this study.
We propose that there is an optimal range for accelerating bone healing, around 30 mW/cm2 of ultrasonic intensity, so to use TUS it would be necessary to use attenuators. Thus, additional studies of different parameters at different stages of bone healing are needed to clarify the interaction between TUS and biological tissue. The present results suggest that TUS at the dose we used is not recommended for clinical use.

Abbreviations
FAS: First arriving signal
FDTD: Finite-difference time domain
LIPUS: Low-intensity pulsed ultrasound stimulation
RMS: Root mean square
SPL: Sound pressure level
TOFFAS: Time-of-flight of the first arriving signal
TUS: Therapeutic ultrasound

References
1. Rutten S, Nolte PA, Korstjens CM, Klein-Nulend J. Low-intensity pulsed ultrasound affects RUNX2 immunopositive osteogenic cells in delayed clinical fracture healing. Bone. 2009;45:862–9.
2. Fontes-Pereira AJ, Teixeira Rda C, de AJB O, Pontes RWF, de RSM B, Negrão JNC. The effect of low-intensity therapeutic ultrasound in induced fracture of rat tibiae. Acta Ortopédica Bras. 2013;21:18–22.
3. Bilezikian JP, Raisz LG, Martin TJ. Principles of bone biology: two-volume set. San Diego: Academic Press Inc; 2008.
4. Einhorn TA. The cell and molecular biology of fracture healing. Clin Orthop. 1998;355:S7–21.
5. Jackson LC, Pacchiana PD. Common complications of fracture repair. Clin Tech Small Anim Pract. 2004;19:168–79.
6. Sarban S, Senkoylu A, Isikan UE, Korkusuz P, Korkusuz F. Can rhBMP-2 containing collagen sponges enhance bone repair in ovariectomized rats?: a preliminary study. Clin Orthop Relat Res. 2009;467:3113–20.
7. Rose FRAJ, Oreffo ROC. Bone tissue engineering: hope vs hype. Biochem Biophys Res Commun. 2002;292:1–7.
8. Kumagai K, Takeuchi R, Ishikawa H, Yamaguchi Y, Fujisawa T, Kuniya T, et al. Low-intensity pulsed ultrasound accelerates fracture healing by stimulation of recruitment of both local and circulating osteogenic progenitors. J Orthop Res. 2012;30:1516–21.
9. Maggi LE, Omena TP, von Krüger MA, Pereira WCA. Didactic software for modeling heating patterns in tissues irradiated by therapeutic ultrasound. Braz J Phys Ther. 2008;12:204–14.
10. Matheus JPC, Oliveira FB, Gomide LB, Milani J, Volpon JB, Shimano AC. Effects of therapeutic ultrasound on the mechanical properties of skeletal muscles after contusion. Braz J Phys Ther. 2008;12:241–7.
11. Blouin S, Baslé MF, Chappard D. Rat models of bone metastases. Clin Exp Metastasis. 2005;22:605–14.
12. Nolte PA, Klein-Nulend J, Albers GHR, Marti RK, Semeins CM, Goei SW, et al. Low-intensity ultrasound stimulates endochondral ossification in vitro. J Orthop Res. 2001;19:301–7.
13. Alvarenga ÉC, Rodrigues R, Caricati-Neto A, Silva-Filho FC, Paredes-Gamero EJ, Ferreira AT. Low-intensity pulsed ultrasound-dependent osteoblast proliferation occurs by via activation of the P2Y receptor: role of the P2Y1 receptor. Bone. 2010;46:355–62.
14. Pounder NM, Harrison AJ. Low intensity pulsed ultrasound for fracture healing: a review of the clinical evidence and the associated biological mechanism of action. Ultrasonics. 2008;48:330–8.
15. Garber JC, Barbee RW, Bielitzki JT, Clayton LA, Donovan JC, Hendriksen CFM, et al. Guide for the care and use of laboratory animals. Natl Acad Press Wash DC. 2011;8:220.
16. Giordano V, Knackfuss IG, Gomes Rdas C, Giordano M, Mendonça RG, Coutynho F. Influência do laser de baixa energia no processo de consolidação de fratura de tíbia: estudo experimental em ratos. Rev Bras Ortop. 2001;36:174–8.
17. Pelker RR, Friedlaender GE. The Nicolas Andry Award-1995. Fracture healing. Radiation induced alterations. Clin Orthop. 1997;341:267–82.
18. Johnson KD, Frierson KE, Keller TS, Cook C, Scheinberg R, Zerwekh J, et al. Porous ceramics as bone graft substitutes in long bone defects: a biomechanical, histological, and radiographic analysis. J Orthop Res. 1996;14:351–69.
19. Yang C, Simmons DJ, Lozano R. The healing of grafts combining freeze-dried and demineralized allogeneic bone in rabbits. Clin Orthop. 1994;298:286–95.
20. Angle SR, Sena K, Sumner DR, Virkus WW, Virdi AS. Combined use of low-intensity pulsed ultrasound and rhBMP-2 to enhance bone formation in a rat model of critical size defect. J Orthop Trauma. 2014;28:605–11.
21. Bossy E, Talmant M, Laugier P. Three-dimensional simulations of ultrasonic axial transmission velocity measurement on cortical bone models. J Acoust Soc Am. 2004;115:2314–24.
22. Protopappas VC, Fotiadis DI, Malizos KN. Guided ultrasound wave propagation in intact and healing long bones. Ultrasound Med Biol. 2006;32:693–708.
23. Catelani F, Ribeiro APM, Melo CAV, Pereira WC, Machado CB. Ultrasound propagation through bone fractures with reamed intramedullary nailing: results from numerical simulations. Proc Meet Acoust. 2013;19:075093.
24. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Routledge; 1988.
25. Einhorn TA, Gerstenfeld LC. Fracture healing: mechanisms and interventions. Nat Rev Rheumatol. 2015;11:45–54.
26. Mayr-Wohlfart U, Fiedler J, Günther K-P, Puhl W, Kessler S. Proliferation and differentiation rates of a human osteoblast-like cell line (SaOS-2) in contact with different bone substitute materials. J Biomed Mater Res. 2001;57:132–9.
27. Notelovitz M. Androgen effects on bone and muscle. Fertil Steril. 2002;77(Supplement 4):34–41.
28. Leung K-S, Lee W-S, Tsui H-F, Liu PP-L, Cheung W-H. Complex tibial fracture outcomes following treatment with low-intensity pulsed ultrasound. Ultrasound Med Biol. 2004;30:389–95.
29. Guerino MR, Santi FP, Silveira RF, Luciano E. Influence of ultrasound and physical activity on bone healing. Ultrasound Med Biol. 2008;34:1408–13.
30. de Oliveira P, Fernandes KR, Sperandio EF, Pastor FAC, Nonaka KO, Parizotto NA, et al. Comparative study of the effects of low-level laser and low-intensity ultrasound associated with Biosilicate® on the process of bone repair in the rat tibia. Rev Bras Ortop. 2012;47:102–7.

The authors want to thank Dr. Fábio Di Paulo and Dr.
Ewerton Andrez for applying the radiographic system score for osseous healing to this work. Funding was from the following Brazilian agencies: National Council for Scientific and Technological Development (CNPq)—ref. 308.627/2013-0; Coordination for the Improvement of Higher Education Personnel (CAPES)—ref. 3485/2014; and Carlos Chagas Filho Foundation for Research Support in the State of Rio de Janeiro (FAPERJ)—ref. E-26/203.041/2015. Please contact the author for data requests.

AJFP, FC, DMG, DPM, PR, MAVK, and WCAP provided the concept/research design. AJFP, DMG, FC, DPM, PR, MAVK, and WCAP provided the data analysis and writing. AJFP, MA, FC, and DMG provided the data collection. AJFP, DMG, MA, and WCAP provided the facilities/equipment and the samples. All authors read and approved the final manuscript. The study was approved by the Ethics Committee in Research of the Evandro Chagas Institute, Pará, Brazil, protocol number 009/2012.

Author affiliations:
Ultrasound Laboratory, Biomedical Engineering Program/COPPE/Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, Rio de Janeiro, Brazil: Aldo José Fontes-Pereira, Fernanda Catelani, Paulo Rosa, Marco Antônio von Krüger & Wagner Coelho de Albuquerque Pereira
Laboratory of Morpho-physiopathology, State University of Pará, Belém, Pará, Brazil: Marcio Amorim
Military Police Central Hospital of Rio de Janeiro, Rio de Janeiro, Rio de Janeiro, Brazil: Daniel Patterson Matusin
Laboratory of Epithelial Biology, Department of Periodontics and Oral Medicine, University of Michigan School of Dentistry, Ann Arbor, MI, USA: Douglas Magno Guimarães

Correspondence to Aldo José Fontes-Pereira.

Fontes-Pereira, A.J., Amorim, M., Catelani, F. et al. The influence of low-intensity physiotherapeutic ultrasound on the initial stage of bone healing in rats: an experimental and simulation study. J Ther Ultrasound 4, 24 (2016).
https://doi.org/10.1186/s40349-016-0068-5

Keywords: Fracture healing, Ultrasonic therapy
\begin{definition}[Definition:Planar Graph]
A '''planar graph''' is a graph which can be drawn in the plane (for example, on a piece of paper) without any of the edges crossing over, that is, meeting at points other than the vertices.
This is a '''planar graph''':
:(image of an example planar graph omitted)
\end{definition}
HP-27

The HP-27 was a hand-held scientific and financial, but not programmable, calculator made by Hewlett-Packard between 1976 and 1978. Unlike all previous HP pocket calculators, the HP-27 could perform mathematical, statistical, and business operations. It used Reverse Polish Notation (RPN) for calculations, working on a four-level stack (x, y, z, t). Nearly all keys had two alternate functions, accessed by a yellow and a black prefix key, so despite having only 30 keys it could access about 68 functions. The HP-27 also had a 10-digit red LED display and 10 registers to store numbers.

See also
• List of Hewlett-Packard products: Pocket calculators
• HP calculators

External links
• The Museum of HP Calculators' article on the HP-27
• HP-27 pictures on MyCalcDB (database about 1970s and 1980s pocket calculators)
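The four-level RPN stack can be illustrated with a short sketch. This is a simplified toy model (real HP stack-lift and t-register rules have model-specific quirks), not a description of the HP-27 firmware:

```python
class FourLevelStack:
    """Toy model of a four-level RPN stack with registers x, y, z, t."""

    def __init__(self):
        self.x = self.y = self.z = self.t = 0

    def push(self, value):
        # Entering a number lifts the stack: t is lost, the rest moves up.
        self.t, self.z, self.y, self.x = self.z, self.y, self.x, value

    def binary(self, fn):
        # A binary operation consumes y and x, the stack drops,
        # and (on classic HP stacks) t effectively duplicates downward.
        result = fn(self.y, self.x)
        self.y, self.z = self.z, self.t
        self.x = result
        return result

# Evaluate (3 + 4) * 2 in RPN: 3 ENTER 4 + 2 *
calc = FourLevelStack()
calc.push(3)
calc.push(4)
calc.binary(lambda a, b: a + b)           # x = 7
calc.push(2)
answer = calc.binary(lambda a, b: a * b)  # x = 14
```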
A Review of ResNet - Residual Networks

Computer Vision, Deep Learning, Reviews

Deep learning researchers have been constructing skyscrapers in recent years. In particular, VGG nets and GoogLeNet have pushed the depths of convolutional networks to the extreme. But a question remains: if time and money aren't problems, do deeper networks always perform better? Not exactly. When residual networks were proposed, researchers around the world were stunned by their depth. "Jesus Christ! Is this a neural network or the Dubai Tower?" But don't be afraid! These networks are deep but the structures are simple. Interestingly, these networks not only defeated all opponents in the classification, detection, and localization challenges of ImageNet 2015, but were also the main innovation in the best paper of CVPR 2016.

1 The Crisis: Degradation of Deep Networks

VGG nets proved the benefit of representation depth in convolutional neural networks, at least within a certain range. However, when Kaiming He et al. tried to deepen some plain networks, the training error and test error stopped decreasing after the networks reached a certain depth (which is not surprising) and soon degraded. This is not an overfitting problem, because training errors also increased; nor is it a gradient vanishing problem, because there are techniques (e.g. batch normalization [4]) that ease the pain.

Fig. 1 The degradation problem

What seems to be the cause of this degradation? Obviously, deeper neural networks are more difficult to train, but that doesn't mean deeper neural networks should yield worse results.
To explain this problem, Balduzzi et al. [3] identified the shattered gradient problem - as depth increases, gradients in standard feedforward networks increasingly resemble white noise. I will write about that later.

2 A Closer Look at ResNet: The Residual Blocks

As the old saying goes, "a journey of a thousand miles begins with a single step". Although ResNets can be as deep as a thousand layers, they are built from these basic residual blocks (the right part of the figure).

Fig. 2 Parts of plain networks and a residual block (or residual unit)

2.1 Skip Connections

In comparison, the basic units of plain network models look like the one on the left: one ReLU function after a weight layer (usually also with biases), repeated several times. Let's denote the desired underlying mapping (the ideal mapping) of the two layers as \(\mathcal{H}(x)\), and the real mapping as \(\mathcal{F}(x)\). Clearly, the closer \(\mathcal{F}(x)\) is to \(\mathcal{H}(x)\), the better it fits. However, He et al. explicitly let these layers fit a residual mapping instead of the desired underlying mapping. This is implemented with "shortcut connections", which skip one or more layers, simply performing identity mappings that get added to the outputs of the stacked weight layers. This way, \(\mathcal{F}(x)\) does not try to fit \(\mathcal{H}(x)\), but \(\mathcal{H}(x)-x\). The whole structure (from the identity mapping branch to merging the branches by the addition operation) is named a "residual block" (or "residual unit").

What's the point of this? Let's do a simple analysis. The computation done by the original residual block is:

$$y_l=h(x_l)+\mathcal{F}(x_l,\mathcal{W}_l),$$

$$x_{l+1}=f(y_l).$$

Here are the definitions of the symbols:

\(x_l\): the input features to the \(l\)-th residual block;

\(\mathcal{W}_{l}=\{W_{l,k}\}_{1\leq k\leq K}\): the set of weights (and biases) associated with the \(l\)-th residual unit, where \(K\) is the number of layers in this block;

\(\mathcal{F}(x,\mathcal{W})\): the residual function, which we talked about earlier.
It's a stack of 2 conv. layers here;

\(f(x)\): the activation function; we are using ReLU here;

\(h(x)\): the identity mapping.

If \(f(x)\) were also an identity mapping (as if we were not using any activation function), the two equations above would combine into:

$$x_{l+1}=x_l+\mathcal{F}(x_l,\mathcal{W}_l)$$

Therefore, we can define \(x_L\) recursively for any deeper layer \(L\):

$$x_L=x_l+\sum_{i=l}^{L-1}\mathcal{F}(x_i,\mathcal{W}_i)$$

That's not the end yet! When it comes to the gradients, by the chain rule of backpropagation we have a beautiful expression:

$$\begin{split} \frac{\partial{\mathcal{E}}}{\partial{x_l}} & = \frac{\partial{\mathcal{E}}}{\partial{x_L}}\frac{\partial{x_L}}{\partial{x_l}}\\ & = \frac{\partial{\mathcal{E}}}{\partial{x_L}}\Big(1+\frac{\partial{}}{\partial{x_l}}\sum_{i=l}^{L-1}\mathcal{F}(x_i,\mathcal{W}_i)\Big) \end{split}$$

What does it mean? It means that the information is backpropagated directly to ANY shallower block. Thanks to the additive identity term, the gradient of a layer can hardly vanish even when the weights are small.

2.2 Identity Mappings

It's important that we use an identity mapping here! Consider a simple modification, for example \(h(x)=\lambda_lx_l\) (where \(\lambda_l\) is a modulating scalar). The definitions of \(x_L\) and \(\frac{\partial{\mathcal{E}}}{\partial{x_l}}\) would become:

$$x_L=(\prod_{i=l}^{L-1}\lambda_i)x_l+\sum_{i=l}^{L-1}(\prod_{j=i+1}^{L-1}\lambda_j)\mathcal{F}(x_i,\mathcal{W}_i)$$

$$\frac{\partial{\mathcal{E}}}{\partial{x_l}}=\frac{\partial{\mathcal{E}}}{\partial{x_L}}\Big((\prod_{i=l}^{L-1}\lambda_i)+\frac{\partial{}}{\partial{x_l}}\sum_{i=l}^{L-1}(\prod_{j=i+1}^{L-1}\lambda_j)\mathcal{F}(x_i,\mathcal{W}_i)\Big)$$

For extremely deep neural networks where \(L\) is very large, \(\prod_{i=l}^{L-1}\lambda_i\) can be either too small or too large, causing gradients to vanish or explode.
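A tiny numerical sketch (toy scalar "layers", not real conv nets) makes the contrast concrete: a plain stack \(x_{l+1}=w\cdot\tanh(x_l)\) multiplies bare layer derivatives, which shrink geometrically, while a residual stack \(x_{l+1}=x_l+w\cdot\tanh(x_l)\) has per-layer factor \(1+w\cdot\tanh'(x_l)\), keeping the identity term in the product.

```python
import math

def end_to_end_gradient(depth, w, x0, residual):
    """Chain-rule product d x_L / d x_0 for a scalar toy network.

    Plain layer:    x_{l+1} = w * tanh(x_l)
    Residual layer: x_{l+1} = x_l + w * tanh(x_l)   (identity skip connection)
    """
    x, grad = x0, 1.0
    for _ in range(depth):
        layer_grad = w * (1.0 - math.tanh(x) ** 2)   # d(w*tanh(x))/dx
        grad *= (1.0 + layer_grad) if residual else layer_grad
        x = x + w * math.tanh(x) if residual else w * math.tanh(x)
    return grad

plain_grad = end_to_end_gradient(50, 0.05, 0.5, residual=False)
res_grad = end_to_end_gradient(50, 0.05, 0.5, residual=True)
# plain_grad is astronomically small after 50 layers; res_grad stays above 1.
```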
For \(h(x)\) with a more complex definition, the gradient could become extremely complicated, losing the advantage of the skip connection. The skip connection works best when the grey channel in Fig. 3 covers no operations (except the addition) and stays clean. Interestingly, this confirms once again the philosophy that "the greatest truths are the simplest".

2.3 Post-activation or Pre-activation?

Wait a second... "\(f(x)\) is also an identity mapping" was just our assumption. The activation function is still there! Right. There IS an activation function, but it's moved somewhere else. In fact, the original residual block is still a little bit problematic - the output of one residual block is not always the input of the next, since there is a ReLU activation function after the addition (it did NOT really keep the identity mapping to the next block!). Therefore, in [2], He et al. fixed the residual blocks by changing the order of operations.

Fig. 3 New identity mapping proposed by He et al.

Besides using a simple identity mapping, He et al. also discussed the position of the activation function and the batch normalization operation. Suppose we have a special (asymmetric) activation function \(\hat f(x)\) which only affects the path to the next residual unit. Now our definition of \(x_{l+1}\) becomes:

$$x_{l+1}=x_l+\mathcal{F}(\hat f(x_l),\mathcal{W}_l)$$

With \(x_l\) still multiplied by 1, information is still fully backpropagated to shallower residual blocks. And the good thing is that using this asymmetric activation function after the addition (partial post-activation) is equivalent to using it beforehand (pre-activation)! This is why He et al. chose pre-activation - otherwise it would be necessary to implement that magical activation function \(\hat f(x)\).

Fig. 4 Using asymmetric after-addition activation is equivalent to constructing a pre-activation residual unit

3 ResNet Architectures

Here are the ResNet architectures for ImageNet.
Building blocks are shown in brackets, with the numbers of blocks stacked. In the first block of every stack (starting from conv3_x), downsampling is performed. Each column represents one of the residual networks, and the deepest one has 152 weight layers! Since ResNets were proposed, VGG nets - which were officially called "Very Deep Convolutional Networks" - are not relatively deep anymore. Maybe call them "A Little Bit Deep Convolutional Networks".

Table 1 ResNet architectures for ImageNet

4 Experiments

4.1 Performance on ImageNet

He et al. trained ResNet-18 and ResNet-34 on the ImageNet dataset and also compared them to plain convolutional networks. In Fig. 5, the thin curves denote training error, and the bold ones denote validation error. The figure on the left shows the results of plain convolutional networks (in which the 34-layer one has higher error rates than the 18-layer one), and the figure on the right shows that residual networks perform better than plain ones, while deeper ones perform better than shallower ones.

Fig. 5 Training ResNet on ImageNet

4.2 Effects of Different Shortcut Connections

He et al. also tried various types of shortcut connections to replace the identity mapping, as well as various positions of the activation functions / batch normalization. Experiments show that the original identity mapping and full pre-activation yield the best results.

Fig. 6 Various shortcuts in residual units

Table 2 Classification error on CIFAR-10 test set with various shortcut connections in residual units

Fig. 7 Various usages of activation in residual units

Table 3 Classification error on CIFAR-10 test set with various usages of activation in residual units

5 Conclusion

Residual learning can be crowned as "ONE OF THE GREATEST HITS IN DEEP LEARNING FIELDS". With a simple identity mapping, it solved the degradation problem of deep neural networks.
Now that you have learned about the concept of ResNet, why not give it a try and implement your first residual learning model today?

[1] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.

[2] He K, Zhang X, Ren S, et al. Identity mappings in deep residual networks[C]//European Conference on Computer Vision. Springer, Cham, 2016: 630-645.

[3] Balduzzi D, Frean M, Leary L, et al. The Shattered Gradients Problem: If resnets are the answer, then what is the question?[J]. arXiv preprint arXiv:1702.08591, 2017.

[4] Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[J]. arXiv preprint arXiv:1502.03167, 2015.
joineRML: a joint model and software package for time-to-event and multivariate longitudinal outcomes

Graeme L. Hickey (ORCID: orcid.org/0000-0002-4989-0054)1, Pete Philipson2, Andrea Jorgensen1 & Ruwanthi Kolamunnage-Dona1

Joint modelling of longitudinal and time-to-event outcomes has received considerable attention over recent years. Commensurate with this has been a rise in statistical software options for fitting these models. However, these tools have generally been limited to a single longitudinal outcome. Here, we extend the classical joint model to the case of multiple longitudinal outcomes, propose a practical algorithm for fitting the models, and demonstrate how to fit the models using a new package for the statistical software platform R, joineRML. A multivariate linear mixed sub-model is specified for the longitudinal outcomes, and a Cox proportional hazards regression model with time-varying covariates is specified for the event-time sub-model. The association between the models is captured through a zero-mean multivariate latent Gaussian process. The models are fitted using a Monte Carlo Expectation-Maximisation algorithm, and inferences are based on approximate standard errors from the empirical profile information matrix, which are contrasted with an alternative bootstrap estimation approach. We illustrate the model and software on a real data example for patients with primary biliary cirrhosis with three repeatedly measured biomarkers. An open-source software package capable of fitting multivariate joint models is available. The underlying algorithm and source code make use of several methods to increase computational speed.

In many clinical studies, subjects are followed up repeatedly and response data collected. For example, routine blood tests might be performed at each follow-up clinic appointment for patients enrolled in a randomized drug trial, and biomarker measurements recorded.
An event time is also usually of interest, for example the time of death or study drop-out. It has been repeatedly shown elsewhere that if the longitudinal and event-time outcomes are correlated, then modelling the two outcome processes separately, for example using linear mixed models and Cox regression models, can lead to biased effect size estimates [1]. The same criticism has also been levelled at the application of so-called two-stage models [2]. The motivation for using joint models can be broadly separated into interest in drawing inference about (1) the time-to-event process whilst adjusting for the intermittently measured (and potentially error-prone) longitudinal outcomes, and (2) the longitudinal data process whilst adjusting for a potentially informative drop-out mechanism [3]. The literature on joint modelling is extensive, with excellent reviews given by Tsiatis and Davidian [4], Gould et al. [5], and the book by Rizopoulos [6]. Until recently, joint modelling has been dominated by models of a single longitudinal outcome together with a solitary event-time outcome, herein referred to as univariate joint modelling. Commensurate with methodological research has been an increase in wide-ranging clinical applications (e.g. [7]). Recent innovations in the field of joint models have included the incorporation of multivariate longitudinal data [8], competing risks data [9, 10], recurrent events data [11], multivariate time-to-event data [12, 13], non-continuous repeated measurements (e.g. count, binary, ordinal, and censored data) [14], non-normally and non-parametrically distributed random effects [15], alternative estimation methodologies (e.g. Bayesian fitting and conditional estimating equations) [16, 17], and different association structures [18]. In this article, we specifically focus on the first innovation: multivariate longitudinal data.
In this situation, we assume that multiple longitudinal outcomes are measured on each subject, which can be unbalanced and measured at different times for each subject. Despite the obvious benefits of harnessing all data in a single model, and the published research on the topic of joint models for multivariate longitudinal data, a recent literature review by Hickey et al. [19] identified that publicly available software for fitting such models was lacking, which has translated into limited uptake by biomedical researchers. In this article we present the classical joint model described by Henderson et al. [3] extended to the case of multiple longitudinal outcomes. An algorithm proposed by Lin et al. [20] is used to fit the model, augmented by techniques to reduce the computational fitting time, including a quasi-Newton update approach, a variance reduction method, and dynamic Monte Carlo updates. This algorithm is encoded into an R software package, joineRML. A simulation analysis and a real-world data example are used to demonstrate the accuracy of the algorithm and the software, respectively.

As a prelude to the introduction and demonstration of the newly introduced software package, in the following section we describe the underlying model formulation and model fitting methodology. For each subject \(i = 1, \dots, n\), \(\boldsymbol{y}_{i} = \left(\boldsymbol{y}_{i1}^{\top}, \dots, \boldsymbol{y}_{iK}^{\top}\right)\) is the K-variate continuous outcome vector, where each \(\boldsymbol{y}_{ik}\) denotes an \((n_{ik} \times 1)\)-vector of observed longitudinal measurements for the k-th outcome type: \(\boldsymbol{y}_{ik} = (y_{i1k}, \dots, y_{in_{ik}k})^{\top}\). Each outcome is measured at observed (possibly pre-specified) times \(t_{ijk}\) for \(j = 1, \dots, n_{ik}\), which can differ between subjects and outcomes. Additionally, for each subject there is an event time \(T_{i}^{*}\), which is subject to right censoring.
Therefore, we observe \(T_{i} = \min(T_{i}^{*}, C_{i})\), where \(C_{i}\) corresponds to a potential censoring time, and the failure indicator \(\delta_{i}\), which is equal to 1 if the failure is observed \((T_{i}^{*} \leq C_{i})\) and 0 otherwise. We assume that both censoring and measurement times are non-informative. The model we describe is the natural extension of the model proposed by Henderson et al. [3] to the case of multivariate longitudinal data. The model posits an unobserved or latent zero-mean (K+1)-variate Gaussian process that is realised independently for each subject, \(W_{i}(t) = \left\{W_{1i}^{(1)}(t), \dots, W_{1i}^{(K)}(t), W_{2i}(t)\right\}\). This latent process subsequently links the separate sub-models via association parameters. The k-th longitudinal data sub-model is given by

$$ y_{ik}(t) = \mu_{ik}(t) + W_{1i}^{(k)}(t) + \varepsilon_{ik}(t), $$

where \(\mu_{ik}(t)\) is the mean response, and \(\varepsilon_{ik}(t)\) is the model error term, which we assume to be independent and identically distributed normal with mean 0 and variance \(\sigma_{k}^{2}\). The mean response is specified as a linear model

$$ \mu_{ik}(t) = \boldsymbol{x}_{ik}^{\top}(t) \boldsymbol{\beta}_{k}, $$

where \(\boldsymbol{x}_{ik}(t)\) is a \(p_{k}\)-vector of (possibly) time-varying covariates with corresponding fixed effect terms \(\boldsymbol{\beta}_{k}\). \(W_{1i}^{(k)}(t)\) is specified as

$$ W_{1i}^{(k)}(t) = \boldsymbol{z}_{ik}^{\top}(t) \boldsymbol{b}_{ik}, $$

where \(\boldsymbol{z}_{ik}(t)\) is an \(r_{k}\)-vector of (possibly) time-varying covariates with corresponding subject-and-outcome random effect terms \(\boldsymbol{b}_{ik}\), which follow a zero-mean multivariate normal distribution with \((r_{k} \times r_{k})\)-variance-covariance matrix \(\boldsymbol{D}_{kk}\). To account for dependence between the different longitudinal outcomes, we let \(\text{cov}(\boldsymbol{b}_{ik}, \boldsymbol{b}_{il}) = \boldsymbol{D}_{kl}\) for \(k \neq l\). Furthermore, we assume \(\varepsilon_{ik}(t)\) and \(\boldsymbol{b}_{ik}\) are uncorrelated, and that the censoring times are independent of the random effects.
These distributional assumptions together with the model given by (1)–(3) are equivalent to the multivariate extension of the Laird and Ware [21] linear mixed effects model. More flexible specifications of \(W_{1i}^{(k)}(t)\) can be used [3], including, for example, stationary Gaussian processes. However, we do not consider these cases here owing to the increased computational burden they carry, even for the univariate case. The sub-model for the time-to-event outcome is given by the hazard model

$$ \lambda_{i}(t) = \lambda_{0}(t) \exp \left\{\boldsymbol{v}_{i}^{\top}(t) \boldsymbol{\gamma}_{v} + W_{2i}(t)\right\}, $$

where \(\lambda_{0}(\cdot)\) is an unspecified baseline hazard, and \(\boldsymbol{v}_{i}(t)\) is a q-vector of (possibly) time-varying covariates with corresponding fixed effect terms \(\boldsymbol{\gamma}_{v}\). Conditional on \(W_{i}(t)\) and the observed covariate data, the longitudinal and time-to-event data generating processes are conditionally independent. To establish a latent association, we specify \(W_{2i}(t)\) as a linear combination of \(\left\{W_{1i}^{(1)}(t), \dots, W_{1i}^{(K)}(t)\right\}\):

$$ W_{2i}(t) = \sum\limits_{k=1}^{K} \gamma_{yk} W_{1i}^{(k)}(t), $$

where \(\boldsymbol{\gamma}_{y} = (\gamma_{y1}, \dots, \gamma_{yK})\) are the corresponding association parameters. To emphasise the dependence of \(W_{2i}(t)\) on the random effects, we explicitly write it as \(W_{2i}(t, \boldsymbol{b}_{i})\) from here onwards. As per \(W_{1i}^{(k)}(t)\), \(W_{2i}(t, \boldsymbol{b}_{i})\) can also be flexibly extended, for example to include subject-specific frailty effects [3]. For each subject i, let \(\boldsymbol{X}_{i} = \bigoplus_{k=1}^{K} \boldsymbol{X}_{ik}\) and \(\boldsymbol{Z}_{i} = \bigoplus_{k=1}^{K} \boldsymbol{Z}_{ik}\) be block-diagonal matrices, where \(\boldsymbol{X}_{ik} = \left(\boldsymbol{x}_{i1k}^{\top}, \dots, \boldsymbol{x}_{in_{ik}k}^{\top}\right)\) is an \((n_{ik} \times p_{k})\)-design matrix, with the j-th row corresponding to the \(p_{k}\)-vector of covariates measured at time \(t_{ijk}\), and \(\bigoplus\) denotes the direct matrix sum.
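To make the latent association concrete, here is a minimal numerical sketch (in Python rather than R, purely illustrative; all parameter values are made up) of how the \(W_{1i}^{(k)}\) terms combine into \(W_{2i}\) inside the hazard:

```python
import numpy as np

def W1(t, b):
    # random-intercept-and-slope realisation, i.e. z_ik(t) = (1, t)
    return b[0] + b[1] * t

def hazard(t, lam0, v, gamma_v, b_list, gamma_y):
    # lambda_i(t) = lam0(t) * exp(v' gamma_v + sum_k gamma_yk * W1^(k)(t))
    W2 = sum(g * W1(t, b) for g, b in zip(gamma_y, b_list))
    return lam0(t) * np.exp(v @ gamma_v + W2)

# toy values: K = 2 longitudinal outcomes, constant baseline hazard
lam0 = lambda t: 0.1
v = np.array([1.0, 0.0])            # baseline covariates v_i
gamma_v = np.array([0.5, -0.2])     # fixed effects
b_list = [np.array([0.3, 0.1]),     # b_i1: intercept/slope, outcome 1
          np.array([-0.2, 0.05])]   # b_i2: intercept/slope, outcome 2
gamma_y = [1.0, -0.5]               # association parameters
h = hazard(2.0, lam0, v, gamma_v, b_list, gamma_y)
```

Note how the sign and magnitude of each \(\gamma_{yk}\) determine whether a large latent deviation on outcome k raises or lowers the hazard.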
The notation similarly follows for the random effects design matrices, \(\boldsymbol{Z}_{ik}\). We denote the error terms by a diagonal matrix \(\boldsymbol{\Sigma}_{i} = \bigoplus_{k=1}^{K} \sigma_{k}^{2} \boldsymbol{I}_{n_{ik}}\), where \(\boldsymbol{I}_{n}\) denotes an \(n \times n\) identity matrix, and write the overall variance-covariance matrix for the random effects as

$$ \boldsymbol{D} = \left(\begin{array}{ccc} \boldsymbol{D}_{11} & \cdots & \boldsymbol{D}_{1K} \\ \vdots & \ddots & \vdots \\ \boldsymbol{D}_{1K}^{\top} & \cdots & \boldsymbol{D}_{KK} \\ \end{array}\right). $$

We further define \(\boldsymbol{\beta} = \left(\boldsymbol{\beta}_{1}^{\top}, \dots, \boldsymbol{\beta}_{K}^{\top}\right)^{\top}\) and \(\boldsymbol{b}_{i} = \left(\boldsymbol{b}_{i1}^{\top}, \dots, \boldsymbol{b}_{iK}^{\top}\right)^{\top}\). Hence, we can then rewrite the longitudinal outcome sub-model as

$$\begin{array}{@{}rcl@{}} \boldsymbol{y}_{i} \,|\, \boldsymbol{b}_{i}, \boldsymbol{\beta}, \boldsymbol{\Sigma}_{i} &\sim& N(\boldsymbol{X}_{i} \boldsymbol{\beta} + \boldsymbol{Z}_{i} \boldsymbol{b}_{i}, \boldsymbol{\Sigma}_{i}), \\ \text{with}\ \boldsymbol{b}_{i} \,|\, \boldsymbol{D} &\sim& N(\boldsymbol{0}, \boldsymbol{D}). \end{array} $$

For the estimation, we will assume that the covariates in the time-to-event sub-model are time-independent and known at baseline, i.e. \(\boldsymbol{v}_{i} \equiv \boldsymbol{v}_{i}(0)\). Extensions of the estimation procedure for time-varying covariates are outlined elsewhere [6, p. 115].
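The direct-sum stacking above can be illustrated with a small helper (a Python/NumPy sketch with hypothetical dimensions; joineRML itself performs these operations in C++):

```python
import numpy as np

def direct_sum(*mats):
    # Block-diagonal direct matrix sum, e.g. X_i = X_i1 (+) X_i2 (+) ...
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

X_i1 = np.ones((3, 2))  # outcome 1: n_i1 = 3 visits, p_1 = 2 fixed effects
X_i2 = np.ones((4, 3))  # outcome 2: n_i2 = 4 visits, p_2 = 3 fixed effects
X_i = direct_sum(X_i1, X_i2)                            # (7 x 5) block-diagonal
Sigma_i = direct_sum(0.5 * np.eye(3), 1.2 * np.eye(4))  # (+)_k sigma_k^2 I
```

The off-diagonal blocks of the stacked design matrices are zero; the between-outcome dependence enters only through the off-diagonal blocks of D.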
The observed data likelihood for the joint outcome is given by

$$ \prod\limits_{i=1}^{n} \left(\int_{-\infty}^{\infty} f(\boldsymbol{y}_{i} \,|\, \boldsymbol{b}_{i}, \boldsymbol{\theta}) f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}, \boldsymbol{\theta}) f(\boldsymbol{b}_{i} \,|\, \boldsymbol{\theta}) d\boldsymbol{b}_{i} \right), $$

where \(\boldsymbol{\theta} = \left(\boldsymbol{\beta}^{\top}, \text{vech}(\boldsymbol{D}), \sigma_{1}^{2}, \dots, \sigma_{K}^{2}, \lambda_{0}(t), \boldsymbol{\gamma}_{v}^{\top}, \boldsymbol{\gamma}_{y}^{\top}\right)\) is the collection of unknown parameters that we want to estimate, with \(\text{vech}(\boldsymbol{D})\) denoting the half-vectorisation operator that returns the vector of lower-triangular elements of matrix D. As noted by Henderson et al. [3], the observed data likelihood can be calculated by rewriting it as

$$ \prod\limits_{i=1}^{n} f(\boldsymbol{y}_{i} \,|\, \boldsymbol{\theta}) \left(\int_{-\infty}^{\infty} f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}, \boldsymbol{\theta}) f(\boldsymbol{b}_{i} \,|\, \boldsymbol{y}_{i}, \boldsymbol{\theta}) d\boldsymbol{b}_{i} \right), $$

where the marginal distribution \(f(\boldsymbol{y}_{i} \,|\, \boldsymbol{\theta})\) is a multivariate normal density with mean \(\boldsymbol{X}_{i} \boldsymbol{\beta}\) and variance-covariance matrix \(\boldsymbol{\Sigma}_{i} + \boldsymbol{Z}_{i} \boldsymbol{D} \boldsymbol{Z}_{i}^{\top}\), and \(f(\boldsymbol{b}_{i} \,|\, \boldsymbol{y}_{i}, \boldsymbol{\theta})\) is given by (6).

MCEM algorithm

We determine maximum likelihood estimates of the parameters θ using the Monte Carlo Expectation Maximisation (MCEM) algorithm [22], by treating the random effects \(\boldsymbol{b}_{i}\) as missing data. This is effectively the same as the conventional Expectation-Maximisation (EM) algorithm, as used by Wulfsohn and Tsiatis [23] and Ratcliffe et al. [24] in the context of fitting univariate data joint models, except that the E-step exploits a Monte Carlo (MC) integration routine as opposed to Gaussian quadrature methods, which we expect to be beneficial when the dimension of the random effects becomes large.
Starting from an initial estimate of the parameters, \(\hat{\boldsymbol{\theta}}^{(0)}\), the procedure involves iterating between the following two steps until convergence is achieved.

E-step. At the (m+1)-th iteration, we compute the expected log-likelihood of the complete data conditional on the observed data and the current estimate of the parameters,

$$\begin{aligned} Q(\boldsymbol{\theta} \,|\, \hat{\boldsymbol{\theta}}^{(m)}) &= \sum\limits_{i=1}^{n} \mathbb{E} \Big\{\log f(\boldsymbol{y}_{i}, T_{i}, \delta_{i}, \boldsymbol{b}_{i} \,|\, \boldsymbol{\theta})\Big\} \\ &= \sum\limits_{i=1}^{n} \int_{-\infty}^{\infty} \Big\{\log f(\boldsymbol{y}_{i}, T_{i}, \delta_{i}, \boldsymbol{b}_{i} \,|\, \boldsymbol{\theta})\Big\} f(\boldsymbol{b}_{i} \,|\, T_{i}, \delta_{i}, \boldsymbol{y}_{i}; \hat{\boldsymbol{\theta}}^{(m)}) d\boldsymbol{b}_{i}. \end{aligned}$$

Here, the complete-data likelihood contribution for subject i is given by the integrand of (4).

M-step. We maximise \(Q(\boldsymbol{\theta} \,|\, \hat{\boldsymbol{\theta}}^{(m)})\) with respect to θ. Namely, we set

$$ \hat{\boldsymbol{\theta}}^{(m+1)} = \underset{\boldsymbol{\theta}}{\text{argmax}}\ Q\left(\boldsymbol{\theta} \,|\, \hat{\boldsymbol{\theta}}^{(m)}\right). $$

The M-step estimators naturally follow from Wulfsohn and Tsiatis [23] and Lin et al. [20]. Maximizers for all parameters except \(\boldsymbol{\gamma}_{v}\) and \(\boldsymbol{\gamma}_{y}\) are available in closed form; algebraic details are presented in Additional file 1.
The parameters \(\boldsymbol{\gamma} = (\boldsymbol{\gamma}_{v}^{\top}, \boldsymbol{\gamma}_{y}^{\top})^{\top}\) are jointly updated using a one-step Newton-Raphson algorithm as

$$ \hat{\boldsymbol{\gamma}}^{(m+1)} = \hat{\boldsymbol{\gamma}}^{(m)} + I\left(\hat{\boldsymbol{\gamma}}^{(m)}\right)^{-1} S\left(\hat{\boldsymbol{\gamma}}^{(m)}\right), $$

where \(\hat{\boldsymbol{\gamma}}^{(m)}\) denotes the value of γ at the current iteration, \(S\left(\hat{\boldsymbol{\gamma}}^{(m)}\right)\) is the corresponding score, and \(I\left(\hat{\boldsymbol{\gamma}}^{(m)}\right)\) is the observed information matrix, which is equal to the derivative of the negative score. Further details of this update are given in Additional file 1. The M-step for γ is computationally expensive to evaluate. Therefore, we also propose a quasi-Newton one-step update by approximating \(I\left(\hat{\boldsymbol{\gamma}}^{(m)}\right)\) by an empirical information matrix for γ, which can be considered an analogue of the Gauss-Newton method [25, p. 8]. To further compensate for this approximation, we also use a nominal step-size of 0.5 rather than the value of 1 used for the Newton-Raphson update. The M-step involves terms of the form \(\mathbb{E}\left[h(\boldsymbol{b}_{i}) \,|\, T_{i}, \delta_{i}, \boldsymbol{y}_{i}; \hat{\boldsymbol{\theta}}\right]\), for known functions h(·).
The conditional expectation of a function of the random effects can be written as

$$ \mathbb{E}\left[h(\boldsymbol{b}_{i}) \,|\, T_{i}, \delta_{i}, \boldsymbol{y}_{i}; \hat{\boldsymbol{\theta}}\right] = \frac{\int_{-\infty}^{\infty} h(\boldsymbol{b}_{i}) f(\boldsymbol{b}_{i} \,|\, \boldsymbol{y}_{i}; \hat{\boldsymbol{\theta}}) f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}; \hat{\boldsymbol{\theta}}) d\boldsymbol{b}_{i}}{\int_{-\infty}^{\infty} f(\boldsymbol{b}_{i} \,|\, \boldsymbol{y}_{i}; \hat{\boldsymbol{\theta}}) f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}; \hat{\boldsymbol{\theta}}) d\boldsymbol{b}_{i}}, $$

where \(f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}; \boldsymbol{\theta})\) is given by

$$\begin{aligned} f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}; \boldsymbol{\theta}) = &\left[\lambda_{0}(T_{i}) \exp\left\{\boldsymbol{v}_{i}^{\top} \boldsymbol{\gamma}_{v} + W_{2i}(T_{i}, \boldsymbol{b}_{i})\right\}\right]^{\delta_{i}} \\ &\times \exp\left\{-\int_{0}^{T_{i}}\lambda_{0}(u)\exp\left\{\boldsymbol{v}_{i}^{\top} \boldsymbol{\gamma}_{v} + W_{2i}(u, \boldsymbol{b}_{i})\right\}du\right\} \end{aligned}$$

and \(f(\boldsymbol{b}_{i} \,|\, \boldsymbol{y}_{i}; \hat{\boldsymbol{\theta}})\) is calculated from multivariate normal distribution theory as

$$ \boldsymbol{b}_{i} \,|\, \boldsymbol{y}_{i}, \boldsymbol{\theta} \sim N\left(\boldsymbol{A}_{i} \left\{\boldsymbol{Z}_{i}^{\top} \boldsymbol{\Sigma}_{i}^{-1}(\boldsymbol{y}_{i} - \boldsymbol{X}_{i} \boldsymbol{\beta})\right\}, \boldsymbol{A}_{i}\right), $$

with \(\boldsymbol{A}_{i} = \left(\boldsymbol{Z}_{i}^{\top} \boldsymbol{\Sigma}_{i}^{-1} \boldsymbol{Z}_{i} + \boldsymbol{D}^{-1}\right)^{-1}\).
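The closed-form posterior of the random effects is simple to compute directly; a Python/NumPy sketch (illustrative only, with toy one-dimensional inputs):

```python
import numpy as np

def posterior_b(y, X, Z, Sigma, D, beta):
    # b_i | y_i ~ N(A_i Z' Sigma^{-1} (y - X beta), A_i),
    # where A_i = (Z' Sigma^{-1} Z + D^{-1})^{-1}
    Sinv = np.linalg.inv(Sigma)
    A = np.linalg.inv(Z.T @ Sinv @ Z + np.linalg.inv(D))
    mean = A @ (Z.T @ Sinv @ (y - X @ beta))
    return mean, A

# toy check: one observation, random intercept only, unit variances;
# the posterior mean shrinks the residual halfway towards zero
y = np.array([2.0]); X = np.array([[1.0]]); Z = np.array([[1.0]])
Sigma = np.array([[1.0]]); D = np.array([[1.0]]); beta = np.array([0.0])
m, A = posterior_b(y, X, Z, Sigma, D, beta)
```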
As this becomes computationally expensive using Gaussian quadrature as the dimension of \(\boldsymbol{b}_{i}\) increases, we estimate the integrals by MC sampling, such that the expectation is approximated by the ratio of the sample means of \(h(\boldsymbol{b}_{i}) f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}; \hat{\boldsymbol{\theta}})\) and \(f(T_{i}, \delta_{i} \,|\, \boldsymbol{b}_{i}; \hat{\boldsymbol{\theta}})\) evaluated at each MC draw. Furthermore, we use antithetic simulation for variance reduction in the MC integration. Instead of directly sampling from (6), we sample \(\boldsymbol{\Omega} \sim N(\boldsymbol{0}, \boldsymbol{I}_{r})\) and obtain the pairs

$$ \boldsymbol{A}_{i} \left\{\boldsymbol{Z}_{i}^{\top} \boldsymbol{\Sigma}_{i}^{-1} (\boldsymbol{y}_{i} - \boldsymbol{X}_{i} \boldsymbol{\beta})\right\} \pm \boldsymbol{C}_{i} \boldsymbol{\Omega}, $$

where \(\boldsymbol{C}_{i}\) is the Cholesky decomposition of \(\boldsymbol{A}_{i}\) such that \(\boldsymbol{C}_{i} \boldsymbol{C}_{i}^{\top} = \boldsymbol{A}_{i}\). Therefore we only need to draw N/2 samples using this approach, and by virtue of the negative correlation between the pairs, it leads to a smaller variance in the sample means taken in the approximation than would be obtained from N independent simulations. The choice of N is described below.

The EM algorithm requires that initial parameters are specified, namely \(\hat{\boldsymbol{\theta}}^{(0)}\). By choosing values close to the maximizer, the number of iterations required to reach convergence should be reduced. For the time-to-event sub-model, a quasi-two-stage model is fitted when the measurement times are balanced, i.e. when \(t_{ijk} = t_{ij}\) for all k. That is, we fit separate LMMs for each longitudinal outcome as per (1), ignoring the correlation between different outcomes. This is straightforward to implement using standard software, in particular using lme() and coxph() from the R packages nlme [26] and survival [27], respectively.
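The antithetic pair construction described above can be sketched as follows (Python/NumPy, illustrative; note that the pairing makes the plain sample mean exactly equal to the posterior mean):

```python
import numpy as np

def antithetic_draws(mu, A, N, rng):
    """Draw N samples from N(mu, A) as antithetic pairs mu +/- C Omega,
    where C C' = A is the Cholesky factorisation, so only N/2
    independent normal vectors are needed."""
    C = np.linalg.cholesky(A)
    Omega = rng.standard_normal((N // 2, len(mu)))
    return np.vstack([mu + Omega @ C.T, mu - Omega @ C.T])

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])          # posterior mean (toy values)
A = np.array([[1.0, 0.3],
              [0.3, 2.0]])          # posterior covariance (toy values)
draws = antithetic_draws(mu, A, 1000, rng)
```

The negative correlation within each pair carries over to any monotone function of the draws, which is where the variance reduction in the E-step averages comes from.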
From the fitted models, the best linear unbiased predictions (BLUPs) of the separate model random effects are used to estimate each \(W_{1i}^{(k)}(t)\) function. These estimates are then included as time-varying covariates in a Cox regression model, alongside any other fixed effect covariates, which can be straightforwardly fitted using standard software. In the situation that the data are not balanced, i.e. when \(t_{ijk} \neq t_{ij}\) for some k, we fit a standard Cox proportional hazards regression model to estimate \(\boldsymbol{\gamma}_{v}\) and set \(\gamma_{yk} = 0\) for all k. For the longitudinal data sub-model, when K>1 we first find the maximum likelihood estimate of \(\left\{\boldsymbol{\beta}, \text{vech}(\boldsymbol{D}), \sigma_{1}^{2}, \dots, \sigma_{K}^{2}\right\}\) by running a separate EM algorithm for the multivariate linear mixed model. Both the E- and M-step updates are available in closed form, and the initial parameters for this EM algorithm are available from the separate LMM fits, with D initialized as block-diagonal. As these are estimated using an EM rather than an MCEM algorithm, we can specify a stricter convergence criterion on the estimates.

Convergence and stopping rules

Two standard stopping rules for the deterministic EM algorithm used to declare convergence are the relative and absolute differences, defined as

$$ \Delta_{\text{rel}}^{(m+1)} = \max\left\{\frac{\left|\hat{\boldsymbol{\theta}}^{(m+1)} - \hat{\boldsymbol{\theta}}^{(m)}\right|}{\left|\hat{\boldsymbol{\theta}}^{(m)}\right| + \epsilon_{1}}\right\} < \epsilon_{0}, \text{ and} $$

$$ \Delta_{\text{abs}}^{(m+1)} = \max\left\{\left|\hat{\boldsymbol{\theta}}^{(m+1)} - \hat{\boldsymbol{\theta}}^{(m)}\right|\right\} < \epsilon_{2}, $$

respectively, for some appropriate choice of ε0, ε1, and ε2, where the maximum is taken over the components of θ.
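The two stopping rules can be written directly (a Python sketch operating on plain parameter vectors; the ε defaults used here are the values suggested later in this section):

```python
def stopping_rules(theta_new, theta_old, eps0=0.005, eps1=0.001, eps2=0.005):
    """Componentwise relative- and absolute-difference stopping rules.
    Returns (relative rule satisfied, absolute rule satisfied)."""
    rel = max(abs(n - o) / (abs(o) + eps1)
              for n, o in zip(theta_new, theta_old))
    ab = max(abs(n - o) for n, o in zip(theta_new, theta_old))
    return rel < eps0, ab < eps2

# small parameter changes satisfy both rules; a large change fails both
ok_rel, ok_abs = stopping_rules([1.001, -0.500], [1.000, -0.501])
bad_rel, bad_abs = stopping_rules([2.0], [1.0])
```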
For reference, the R package JM [28] implements (7) (in combination with another rule based on the relative change in the likelihood), whereas the R package joineR [29] implements (8). The relative difference might be unstable for parameters near zero that are subject to MC error. Therefore, the convergence criterion for each parameter might be chosen separately at each EM iteration based on whether the absolute magnitude is below or above some threshold. A similar approach is adopted in the EM algorithms employed by the software package SAS [30, p. 330]. The choice of N and the monitoring of convergence are conflated when applying an MCEM algorithm, and a dynamic approach is required. As noted by [22], it is computationally inefficient to use a large N in the early phase of the algorithm when the parameter estimates are likely to be far from the maximizer. On the flip side, as the parameter estimates approach the maximizer, the stopping rules will fail as the changes in parameter estimates will be swamped by MC error. Therefore, it has been recommended that one increase N as the estimate moves towards the maximizer. Although this might be done subjectively [31] or by pre-specified rules [32], an automated approach is preferable and necessary for a software implementation. Booth and Hobert [33] proposed an update rule based on a confidence ellipsoid for the maximizer at the (m+1)-th iteration, calculated using an approximate sandwich estimator for the maximizer, which accounts for the MC error at each iteration. This approach requires additional variance estimation at each iteration; we therefore opt for a simpler approach described by Ripatti et al. [34].
Namely, we calculate a coefficient of variation at the (m+1)-th iteration as

$$ \text{cv}\left(\Delta_{\text{rel}}^{(m+1)}\right) = \frac{\text{sd}\left(\Delta_{\text{rel}}^{(m-1)}, \Delta_{\text{rel}}^{(m)}, \Delta_{\text{rel}}^{(m+1)}\right)}{\text{mean}\left(\Delta_{\text{rel}}^{(m-1)}, \Delta_{\text{rel}}^{(m)}, \Delta_{\text{rel}}^{(m+1)}\right)}, $$

where \(\Delta_{\text{rel}}^{(m+1)}\) is given by (7), and sd(·) and mean(·) are the sample standard deviation and mean functions, respectively. If \(\text{cv}\left(\Delta_{\text{rel}}^{(m+1)}\right) > \text{cv}\left(\Delta_{\text{rel}}^{(m)}\right)\), then \(N := N + \lfloor N / \delta \rfloor\), for some small positive integer δ. Typically, we run the MCEM algorithm with a small N (for a fixed number of iterations, a burn-in) before implementing this update rule, in order to get into the approximately correct parameter region. Appropriate values for the other parameters will be application specific; however, we have found that δ=3, N=100K (for 100K burn-in iterations), ε1=0.001, and ε0=ε2=0.005 deliver reasonably accurate estimates in many cases, where K was earlier defined as the number of longitudinal outcomes. As the EM monotonicity property is lost due to the MC integrations in the MCEM algorithm, convergence might be prematurely declared due to stochasticity if the ε-values are too large. To reduce the chance of this occurring, we require that the stopping rule is satisfied for 3 consecutive iterations [33, 34]. In any case, trace plots should be inspected to confirm that convergence is appropriate.

Standard error estimation

Standard error (SE) estimation is usually based on inverting the observed information matrix. When the baseline hazard is unspecified, as is the case here, this presents several challenges. First, \(\hat{\lambda}_{0}(t)\) will generally be a high-dimensional vector, which might lead to numerical difficulties in the inversion of the observed information matrix [6].
Second, the profile likelihood estimates based on the usual observed information matrix approach are known to be underestimated [35]. The reason for this is that the profile estimates are implicit, since the posterior expectations, given by (5), depend on the parameters being estimated, including λ0(t) [6, p. 67]. To overcome these challenges, Hsieh et al. [35] recommended using bootstrap methods to calculate the SEs. However, this approach is computationally expensive. Moreover, despite the purported theoretical advantages, we also note that it has recently been suggested that bootstrap estimators might actually overestimate the SEs; e.g. [36, p. 740] and [35, p. 1041]. At the model development stage, it is often of interest to gauge the strength of association of model covariates, which is not feasible with repeated bootstrap implementations. Hence, an approximate SE estimator is desirable. In either case, the theoretical properties will be contaminated by the addition of MC error from the MCEM algorithm, and it is not yet fully understood what the ramifications of this are. Hence, any standard errors must be interpreted with a degree of caution. We consider two estimators below.

1. Bootstrap method. These SEs are estimated by sampling n subjects with replacement and re-labelling the subjects with indices i′=1,…,n. We then re-fit the model to the bootstrap-sampled dataset. It is important to note that we re-sample subjects, not individual data points. This is repeated B times, for a sufficiently large integer B. Since we already have the MLEs from the fitted model, we can use these as initial values for each bootstrap model fit, thus reducing the initial computational overhead of calculating approximate initial parameters.
For each iteration, we extract the model parameter estimates for \(\left(\boldsymbol{\beta}^{\top}, \text{vech}(\boldsymbol{D}), \sigma_{1}^{2}, \dots, \sigma_{K}^{2}, \boldsymbol{\gamma}_{v}^{\top}, \boldsymbol{\gamma}_{y}^{\top}\right)\). Note that we do not estimate SEs for λ0(t) using this approach; however, they are generally not of inferential interest. When B is sufficiently large, the SEs can be estimated from the estimated coefficients of the bootstrap samples. Alternatively, 100(1−α)%-confidence intervals can be estimated from the 100α/2-th and 100(1−α/2)-th percentiles.

2. Empirical information matrix method. Using the Breslow estimator for \(\int_{0}^{t} \lambda_{0}(u) \mathrm{d}u\), the profile score vector for \(\boldsymbol{\theta}_{-\lambda} = (\boldsymbol{\beta}^{\top}, \text{vech}(\boldsymbol{D}), \sigma_{1}^{2}, \dots, \sigma_{K}^{2}, \boldsymbol{\gamma}^{\top})\) is calculated (see Additional file 1). We approximate the profile information for \(\boldsymbol{\theta}_{-\lambda}\) by the observed empirical information \(I_{e}(\hat{\boldsymbol{\theta}}_{-\lambda})\) [25], given by

$$ I_{e}(\boldsymbol{\theta}_{-\lambda}) = \sum\limits_{i=1}^{n} s_{i}(\boldsymbol{\theta}_{-\lambda})^{\otimes 2} - \frac{1}{n} S(\boldsymbol{\theta}_{-\lambda})^{\otimes 2}, $$

where \(s_{i}(\boldsymbol{\theta}_{-\lambda})\) is the conditional expectation of the complete-data profile score for subject i, \(S(\boldsymbol{\theta}_{-\lambda})\) is the score defined by \(S(\boldsymbol{\theta}_{-\lambda}) = \sum_{i=1}^{n} s_{i}(\boldsymbol{\theta}_{-\lambda})\), and \(\boldsymbol{a}^{\otimes 2} = \boldsymbol{a}\boldsymbol{a}^{\top}\) is the outer product for a vector a. At the maximizer, \(S(\hat{\boldsymbol{\theta}}) = 0\), meaning that the second term on the right-hand side of (9) is zero. Due to the MC error in the MCEM algorithm, this will not be exactly zero, and therefore we include it in the calculations. As per the bootstrap approach, SEs for the baseline hazard are again not calculated.
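Given the matrix of per-subject score contributions, the empirical information is a one-liner (Python/NumPy sketch; s here is a hypothetical (n × p) array standing in for the expected profile scores):

```python
import numpy as np

def empirical_information(s):
    # I_e = sum_i s_i s_i' - (1/n) S S',  with S = sum_i s_i
    n = s.shape[0]
    S = s.sum(axis=0)
    return s.T @ s - np.outer(S, S) / n

# tiny hand-checkable case: two subjects, two parameters
s = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Ie = empirical_information(s)
# for a non-singular I_e from real data, approximate SEs would then be
# np.sqrt(np.diag(np.linalg.inv(Ie)))
```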
We note that this SE estimator will be subject to the same theoretical limitation of underestimation described by Hsieh et al. [35], since the profiling is implicit; that is, the posterior expectations involve the parameters θ.

The model described here is implemented in the R package joineRML, which is available on the Comprehensive R Archive Network (CRAN) (https://cran.r-project.org/package=joineRML). The principal function in joineRML is mjoint(). The primary arguments for implementing mjoint() are summarised in Table 1. To achieve computational efficiency, parts of the MCEM algorithm in joineRML are coded in C++ using the Armadillo linear algebra library and integrated using the R package RcppArmadillo [37].

Table 1 The primary arguments, with descriptions, for the mjoint() function in the R package joineRML

A model fitted using the mjoint() function returns an object of class mjoint. By default, approximate SE estimates are calculated using the empirical information matrix. If one wishes to use bootstrap standard error estimates, then the user can pass the model object to the bootSE() function. Several generic functions (or rather, S3 methods) can also be applied to mjoint objects, as described in Table 2. These generic functions include common methods, for example coef(), which extracts the model coefficients; ranef(), which extracts the BLUPs (and optional standard errors); and resid(), which extracts the residuals from the linear mixed sub-model. The intention of these functions is to have a common syntax with standard R packages for linear mixed models [26] and survival analysis [27]. Additionally, plotting capabilities are included in joineRML. These include trace plots for assessment of convergence of the MCEM algorithm, and caterpillar plots for subject-specific random effects (Table 2).
Table 2 Additional functions, with descriptions, that can be applied to objects of class mjoint

The package also provides several datasets, and a function simData() that allows for simulation of data from joint models with multiple longitudinal outcomes. joineRML can also fit univariate joint models; however, in this case we would currently recommend that the R packages joineR [29], JM [28], or frailtypack [38] are used, as these are optimized for the univariate case and exploit Gaussian quadrature. In addition, these packages allow for extensions to more complex cases; for example, competing risks [28, 29] and recurrent events [38].

Simulation analysis

A simulation study was conducted assuming two longitudinal outcomes and n=200 subjects. Longitudinal data were simulated according to a follow-up schedule of 6 time points (at times 0,1,…,5), with each model including subject-and-outcome-specific random intercepts and random slopes: \(\boldsymbol{b}_{i} = (b_{0i1}, b_{1i1}, b_{0i2}, b_{1i2})^{\top}\). Correlation was induced between the 2 outcomes by assuming a correlation of −0.5 between the random intercepts for each outcome. Event times were simulated from a Gompertz distribution with shape \(\theta_{1} = -3.5\) and scale \(\exp(\theta_{0}) = \exp(0.25) \approx 1.28\), following the methodology described by Austin [39]. Independent censoring times were drawn from an exponential distribution with rate 0.05. Any subject whose event and censoring times both exceeded 5 was administratively censored at the truncation time C=5.1. For all sub-models, we included a pair of covariates \(\boldsymbol{X}_{i} = (x_{i1}, x_{i2})^{\top}\), where \(x_{i1}\) is a continuous covariate independently drawn from N(0,1) and \(x_{i2}\) is a binary covariate independently drawn from Bin(1,0.5).
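The Gompertz inversion used for the event times can be sketched as follows (Python, illustrative; this handles only fixed covariates through a linear predictor eta, whereas the full simulation in simData() also incorporates the time-varying latent term, which requires a numerical root-find):

```python
import math

def gompertz_time(theta0, theta1, eta, u):
    """Invert S(t) = exp(-(exp(theta0 + eta) / theta1) * (exp(theta1 * t) - 1))
    at S(t) = u. With a negative shape theta1 the survival curve can
    plateau above u, in which case no finite event time exists."""
    inner = 1.0 - theta1 * math.log(u) / math.exp(theta0 + eta)
    return math.log(inner) / theta1 if inner > 0 else math.inf

# round-trip check with positive shape
t = gompertz_time(theta0=0.0, theta1=1.0, eta=0.0, u=0.5)
# with shape -3.5 and scale exp(0.25), some subjects never fail, which is
# why administrative censoring at C = 5.1 is applied in the study design
t_inf = gompertz_time(theta0=0.25, theta1=-3.5, eta=0.0, u=0.5)
```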
The sub-models are given as

$$\begin{aligned} y_{ijk} &= (\beta_{0,k} + b_{i0k}) + (\beta_{1,k} + b_{i1k}) t_{j} + \beta_{2,k} x_{i1} + \beta_{3,k} x_{i2} + \varepsilon_{ijk}, \text{ for } k = 1, 2; \\ \lambda_{i}(t) &= \exp\left\{(\theta_{0} + \theta_{1} t) + \gamma_{v1} x_{i1} + \gamma_{v2} x_{i2} + \gamma_{y1} (b_{i01} + b_{i11} t) + \gamma_{y2} (b_{i02} + b_{i12} t)\right\}; \\ \boldsymbol{b}_{i} &\sim N_{4}(\boldsymbol{0}, \boldsymbol{D}); \\ \varepsilon_{ijk} &\sim N(0, \sigma_{k}^{2}), \end{aligned}$$

where D is an unstructured (4×4)-covariance matrix with 10 unique parameters. Simulating datasets is straightforward using the joineRML package by means of the simData() function. The true parameter values and results from 500 simulations are shown in Table 3. In particular, we display the mean estimate; the bias; the empirical SE (the standard deviation of the parameter estimates); the mean SE (the mean of the SEs calculated for each fitted model); the mean square error (MSE); and the coverage. The results confirm that the model fitting algorithm generally performs well.

Table 3 Results of simulation study

A second simulation analysis was conducted using the parameters above (with n=100 subjects per dataset). However, in this case we used a heavier-tailed distribution for the random effects: a multivariate \(t_{5}\) distribution [40]. The bias for the fixed effect coefficients was comparable to that in the multivariate normal random effects simulation study (above). The empirical standard error was consistently smaller than the mean standard error, resulting in coverage between 95% and 99% for the coefficient parameters. Rizopoulos et al. [41] noted that the misspecification of the random effects distribution was minimised as the number of longitudinal measurements per subject increased, but that the standard errors are generally affected.
These findings are broadly in agreement with the simulation study conducted here, and with other studies [42, 43]. Choi et al. [44] provide a review of existing research on the misspecification of random effects in joint modelling.

We consider the primary biliary cirrhosis (PBC) data collected at the Mayo Clinic between 1974 and 1984 [45]. This dataset has been widely analyzed using joint modelling methods [18, 46, 47]. PBC is a long-term liver disease in which the bile ducts in the liver become damaged. Progressively, this leads to a build-up of bile in the liver, which can damage it and eventually lead to cirrhosis. If PBC is not treated or reaches an advanced stage, it can lead to several major complications, including mortality. In this study, 312 patients were randomised to receive D-penicillamine (n = 158) or placebo (n = 154). In this example we analyse the subset of patients randomized to placebo. Patients with PBC typically have abnormalities in several blood tests; hence, during follow-up several biomarkers associated with liver function were serially recorded for these patients. We consider three biomarkers: serum bilirubin (denoted serBilir in the model and data; measured in units of mg/dl), serum albumin (albumin; mg/dl), and prothrombin time (prothrombin; seconds). Patients had a mean 6.3 (SD = 3.7) visits (including baseline). The data can be accessed from the joineRML package via the command data(pbc2). Profile plots for each biomarker are shown in Fig. 1, indicating distinct differences in trajectories between those who died during follow-up and those who did not (right-censored cases). A Kaplan-Meier curve for overall survival is shown in Fig. 2. There were a total of 69 (44.8%) deaths during follow-up in the placebo subset.

Fig. 1 Longitudinal trajectory plots.
The black lines show individual subject trajectories, and the coloured lines show smoothed (LOESS) curves stratified by whether the patient experienced the endpoint (blue) or not (red)

Fig. 2 Kaplan-Meier curve for overall survival. A pointwise 95% band is shown (dashed lines). In total, 69 patients (of 154) died during follow-up

We fit a relatively simple joint model for the purposes of demonstration, which encompasses the following trivariate longitudinal data sub-model:
$$\begin{aligned} \log(\texttt{serBilir}) &= (\beta_{0, 1} + b_{0i, 1}) + (\beta_{1, 1} + b_{1i, 1})\,\texttt{year} + \varepsilon_{ij1}, \\ \texttt{albumin} &= (\beta_{0, 2} + b_{0i, 2}) + (\beta_{1, 2} + b_{1i, 2})\,\texttt{year} + \varepsilon_{ij2}, \\ (0.1 \times \texttt{prothrombin})^{-4} &= (\beta_{0, 3} + b_{0i, 3}) + (\beta_{1, 3} + b_{1i, 3})\,\texttt{year} + \varepsilon_{ij3}, \\ \boldsymbol{b}_{i} &\sim N_{6}(0, \boldsymbol{D}), \text{ and } \varepsilon_{ijk} \sim N(0, \sigma_{k}^{2}) \text{ for } k = 1, 2, 3; \end{aligned}$$
and a time-to-event sub-model for the study endpoint of death:
$$\begin{aligned} \lambda_{i}(t) &= \lambda_{0}(t) \exp\left\{\gamma_{v} \texttt{age}_{i} + W_{2i}(t)\right\}, \\ W_{2i}(t) &= \gamma_{\texttt{bil}}(b_{0i, 1} + b_{1i, 1} t) + \gamma_{\texttt{alb}}(b_{0i, 2} + b_{1i, 2} t) + \gamma_{\texttt{pro}}(b_{0i, 3} + b_{1i, 3} t). \end{aligned}$$
The log transformation of bilirubin is standard, and was confirmed reasonable based on inspection of Q-Q plots for residuals from a separate linear mixed model fitted using the lme() function from the R package nlme. Albumin did not require transformation. Residuals were grossly non-normal for prothrombin time using both untransformed and log-transformed outcomes.
Therefore, a Box-Cox transformation was applied, which suggested that an inverse-quartic transform might be suitable; this was confirmed by inspection of a Q-Q plot. The pairwise correlations for baseline measurements between the three transformed markers were 0.19 (prothrombin time vs. albumin) and −0.30 (bilirubin vs. each of prothrombin time and albumin).

The model is fit using the joineRML R package (version 0.2.0) using the following code. Here, we have specified a more stringent tolerance value for ε0 than the default setting in mjoint(). Additionally, the burn-in phase was increased to 400 iterations after inspection of convergence trace plots. The model fits in 3.1 min on a MacBook Air 1.6 GHz Intel Core i5 with 8 GB of RAM running R version 3.3.0, having completed 423 MCEM iterations (not including the EM algorithm iterations performed for determining the initial values of the separate multivariate linear mixed sub-model) with a final MC size of M = 3528. The fitted model results are shown in Table 4.

Table 4 Fitted multivariate and separate univariate joint models to the PBC data

The fitted model indicated that an increase in the subject-specific random deviation from the population trajectory of serum bilirubin was significantly associated with an increased hazard of death. A significant association was also detected for subject-specific decreases in albumin from the population mean trajectory. However, prothrombin time was not significantly associated with the hazard of death, although the direction of its effect is clinically consistent with PBC disease. Albert and Shih [46] analysed the first 4 years of follow-up from this dataset with the same 3 biomarkers and a discrete event time distribution using a regression calibration model. Their results were broadly consistent, although the effect of prothrombin time on the event time sub-model was strongly significant.
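To illustrate how a Box-Cox scan of the kind used above can point to an inverse-quartic transform, the following stdlib Python sketch profiles the Box-Cox log-likelihood over a grid of exponents. The data here are synthetic (constructed so that y^(−4) is approximately normal), not the PBC prothrombin measurements, and in the analysis above this step was performed in R.

```python
import math
import random
import statistics

random.seed(2)

def boxcox_loglik(y, lam):
    """Profile log-likelihood of the Box-Cox parameter lambda:
    -n/2 * log(MLE variance of the transformed data) + (lambda - 1) * sum(log y)."""
    n = len(y)
    z = [math.log(v) if lam == 0 else (v ** lam - 1) / lam for v in y]
    mu = statistics.fmean(z)
    var = sum((v - mu) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

# Synthetic positive data whose inverse fourth power is normal, mimicking a
# marker that normalises under y -> y^(-4).
x = [random.gauss(5.0, 0.5) for _ in range(1000)]
y = [v ** -0.25 for v in x]

grid = [i / 4 for i in range(-32, 1)]   # lambda from -8 to 0 in steps of 0.25
best = max(grid, key=lambda lam: boxcox_loglik(y, lam))
print("lambda maximising the profile likelihood:", best)
```

With data generated this way the maximising exponent lands near −4, mirroring how the Box-Cox scan suggested the inverse-quartic transform for prothrombin time.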
We also fitted 3 univariate joint models to each of the biomarkers and the event time sub-model using the R package joineR (version 1.2.0), owing to its optimization for such models. The LMM parameter estimates were similar, although the absolute magnitude of the slopes was smaller for the separate univariate models. Since 3 separate models were fitted, 3 estimates of γ_v were obtained, with the average comparable to the multivariate model estimate. The multivariate model estimates of γ_y = (γ_bil, γ_alb, γ_pro)^⊤ were substantially attenuated relative to the separate model estimates, although the directions remained consistent. It is also interesting to note that γ_pro was statistically significant in the univariate model. However, the univariate models do not account for the correlation between different outcomes, whereas the multivariate joint model does.

The model was refitted with the one-step Newton-Raphson update for γ replaced by a Gauss-Newton-like update, in a time of 2.2 min for 419 MCEM iterations with a final MC size of M = 6272. This is easily achieved by running the following code. In addition, we bootstrapped this model with B = 100 samples to estimate SEs and contrast them with the approximate estimates based on the inverse empirical profile information matrix. In practice, one should choose B > 100, particularly if using bootstrap percentile confidence intervals; however, we used a small value to reduce the computational burden of this process. In a similar spirit, we relaxed the convergence criteria and reduced the number of burn-in iterations. This is easily implemented by running the following code, taking 1.8 h to fit. It was observed that the choice of gradient matrix in the γ-update led to virtually indistinguishable parameter estimates, although we note the same random seed was used in both cases. The bootstrap estimated SEs were broadly consistent with the approximate SEs, with no consistent pattern of under-estimation observed.
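The bootstrap-SE logic used above (resample subjects with replacement, refit, and take the standard deviation of the replicate estimates) can be sketched generically. This is a stdlib Python illustration with a toy estimator (the sample mean, whose analytic SE is available for comparison), not the joineRML implementation, where each replicate refit is a full MCEM run.

```python
import math
import random
import statistics

random.seed(3)

def bootstrap_se(data, estimator, B=100):
    """Nonparametric bootstrap: resample subjects with replacement B times,
    re-estimate, and report the standard deviation of the B estimates."""
    n = len(data)
    reps = []
    for _ in range(B):
        sample = [data[random.randrange(n)] for _ in range(n)]
        reps.append(estimator(sample))
    return statistics.stdev(reps)

# Toy per-subject data standing in for a fitted joint-model coefficient.
data = [random.gauss(1.0, 2.0) for _ in range(154)]
se_boot = bootstrap_se(data, statistics.fmean, B=100)
se_formula = statistics.stdev(data) / math.sqrt(len(data))
print(round(se_boot, 3), round(se_formula, 3))
```

For joint models the resampling unit is the subject (all of their longitudinal measurements plus their event time together), which preserves the within-subject correlation that the bootstrap SE must reflect.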
Multivariate joint models introduce three types of correlation: (1) within-subject serial correlation for the repeated measures; (2) correlation between the longitudinal outcomes; and (3) correlation between the multivariate LMM and time-to-event sub-models. It is important to account for all of these types of correlation; however, some authors have reported collapsing their multivariate data to permit univariate joint models to be fitted. For example, Battes et al. [7] used an ad hoc approach of either summing or multiplying the three repeated continuous measures (standardized according to clinical upper reference limits of the biomarker assays), and then applying standard univariate joint models. Wang et al. [48] fitted separate univariate joint models to each longitudinal outcome in turn. Neither approach takes complete advantage of the correlation between the multiple longitudinal measures and the time-to-event outcome.

Here, we described a new R package, joineRML, that can fit the models described in this paper, and demonstrated it on a real-world dataset. Although in the fitted model we assumed linear trajectories for the biomarkers, splines could be straightforwardly employed, as has been done in other multivariate joint model applications [15], albeit at the cost of additional computational time.

Despite a growing availability of software for univariate joint models, Hickey et al. [19] noted that there were very few options for fitting joint models involving multivariate longitudinal data. To the best of our knowledge, options are limited to the R packages JMbayes [49], rstanarm [50], and the Stata package stjm [47]. Moreover, none of these incorporates an unspecified baseline hazard. The first two packages use Markov chain Monte Carlo (MCMC) methods to fit the joint models.
Bayesian models are potentially very useful for fitting joint models, and in particular for dynamic prediction; however, MCMC is also computationally demanding, especially in the case of multivariate models. Several other publications have made BUGS code available for use with WinBUGS and OpenBUGS (e.g. [51]), but these are not easily modifiable and post-fit computations are cumbersome. joineRML is a new software package developed to fill a void in the joint modelling field, but it is still in its infancy relative to highly developed univariate joint model packages such as the R package JM [28] and the Stata package stjm [47].

Future developments of joineRML are intended to address several limitations. First, joineRML currently only permits an association structure of the form \(W_{2i}(t) = \sum_{k=1}^{K} \gamma_{yk} W_{1i}^{(k)}(t)\). As has been demonstrated by others, the association might take different forms, including random-slopes and cumulative effects or some combination of multiple structures, and these may also differ between the separate longitudinal outcomes [18]. Moreover, it is conceivable that separate longitudinal outcomes may interact in the hazard sub-model. Second, the use of MC integration provides a scalable solution to the issue of increasing dimensionality in the random effects. However, for simpler cases, e.g. bivariate models with random-intercepts and random-slopes (a total of 4 random effects), Gaussian quadrature might be computationally superior; this trade-off requires further investigation. Third, joineRML can currently only model a single event time. However, there is growing interest in competing risks [9] and recurrent events data [11], which, if incorporated into joineRML, would provide a flexible all-round multivariate joint modelling platform. Competing risks [28, 29] and recurrent events [38] have already been incorporated into joint modelling R packages, but these are limited to the case of a solitary longitudinal outcome.
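The scalability point about Monte Carlo integration can be illustrated with a minimal sketch: a plain MC approximation of an expectation over normal random effects costs a number of evaluations proportional to the number of draws regardless of the dimension, whereas a product quadrature grid grows exponentially with dimension. The stdlib Python fragment below is illustrative only and is not joineRML's E-step; standard normal effects and the test function are assumptions made for the example.

```python
import math
import random

random.seed(4)

def mc_expectation(f, dim, draws=5000):
    """Approximate E[f(b)] for b ~ N(0, I_dim) by plain Monte Carlo.
    The cost is O(draws) in any dimension, which is why MC scales to many
    random effects where an n-point product quadrature grid needs n**dim nodes."""
    total = 0.0
    for _ in range(draws):
        b = [random.gauss(0.0, 1.0) for _ in range(dim)]
        total += f(b)
    return total / draws

# E[exp(b1)] for a single standard normal effect; the exact value is e**0.5.
approx = mc_expectation(lambda b: math.exp(b[0]), dim=1, draws=20000)
print(approx, math.exp(0.5))
```

The MC error shrinks as draws^(−1/2) independently of dimension, which is the trade-off referred to above: for a 4-dimensional random effect, Gaussian quadrature may still win on accuracy per evaluation, while for six or more effects MC quickly becomes the only practical option.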
Of note, the PBC trial dataset analysed in this study includes times to the competing risk of liver transplantation. Fourth, with ever-increasing volumes of data collected during routine clinical visits, the need for software to fit joint models with very many longitudinal outcomes is foreseeable [52]. This would likely require the use of approximate methods for the numerical integration, or data reduction methods. Fifth, additional residual diagnostics are necessary for assessing possible violations of model assumptions. The joineRML package has a resid() function for extracting the longitudinal sub-model residuals; however, these are complex for diagnostic purposes due to the informative dropout, hence the development of multiple-imputation based residuals [53].

In this paper we have presented an extension of the classical joint model proposed by Henderson et al. [3] and an estimation procedure for fitting the models that builds on the foundations laid by Lin et al. [20]. In addition, we described a new R package joineRML that can fit the models described in this paper, which leverages the MCEM algorithm and which should scale well for an increasing number of longitudinal outcomes. This software is timely, as it has previously been highlighted that there is a paucity of software available to fit such models [19]. The software is being regularly updated and improved.

Availability and requirements

Project name: joineRML
Project home page: https://github.com/graemeleehickey/joineRML/
Operating system(s): platform independent
Programming language: R
Other requirements: none
License: GNU GPL-3
Any restrictions to use by non-academics: none

Abbreviations

BLUP: Best linear unbiased prediction
CRAN: The Comprehensive R Archive Network
EM: Expectation maximisation
LMM: Linear mixed models
MCEM: Monte Carlo expectation maximisation
MLE: Maximum likelihood estimate
PBC: Primary biliary cirrhosis
SD: Standard deviation

References

1. Ibrahim JG, Chu H, Chen LM.
Basic concepts and methods for joint models of longitudinal and survival data. J Clin Oncol. 2010; 28(16):2796–801.
2. Sweeting MJ, Thompson SG. Joint modelling of longitudinal and time-to-event data with application to predicting abdominal aortic aneurysm growth and rupture. Biom J. 2011; 53(5):750–63.
3. Henderson R, Diggle PJ, Dobson A. Joint modelling of longitudinal measurements and event time data. Biostatistics. 2000; 1(4):465–480.
4. Tsiatis AA, Davidian M. Joint modeling of longitudinal and time-to-event data: an overview. Stat Sin. 2004; 14:809–34.
5. Gould AL, Boye ME, Crowther MJ, Ibrahim JG, Quartey G, Micallef S, Bois FY. Joint modeling of survival and longitudinal non-survival data: current methods and issues. Report of the DIA Bayesian joint modeling working group. Stat Med. 2015; 34:2181–95.
6. Rizopoulos D. Joint Models for Longitudinal and Time-to-Event Data, with Applications in R. Boca Raton: Chapman & Hall/CRC; 2012.
7. Battes LC, Caliskan K, Rizopoulos D, Constantinescu AA, Robertus JL, Akkerhuis M, Manintveld OC, Boersma E, Kardys I. Repeated measurements of NT-pro-B-type natriuretic peptide, troponin T or C-reactive protein do not predict future allograft rejection in heart transplant recipients. Transplantation. 2015; 99(3):580–5.
8. Song X, Davidian M, Tsiatis AA. An estimator for the proportional hazards model with multiple longitudinal covariates measured with error. Biostatistics. 2002; 3(4):511–28.
9. Williamson P, Kolamunnage-Dona R, Philipson P, Marson AG. Joint modelling of longitudinal and competing risks data. Stat Med. 2008; 27:6426–38.
10. Hickey GL, Philipson P, Jorgensen A, Kolamunnage-Dona R. A comparison of joint models for longitudinal and competing risks data, with application to an epilepsy drug randomized controlled trial. J R Stat Soc: Ser A: Stat Soc. 2018. https://0-doi-org.brum.beds.ac.uk/10.1111/rssa.12348.
11. Liu L, Huang X.
Joint analysis of correlated repeated measures and recurrent events processes in the presence of death, with application to a study on acquired immune deficiency syndrome. J R Stat Soc: Ser C: Appl Stat. 2009; 58(1):65–81.
12. Chi YY, Ibrahim JG. Joint models for multivariate longitudinal and multivariate survival data. Biometrics. 2006; 62(2):432–45.
13. Hickey GL, Philipson P, Jorgensen A, Kolamunnage-Dona R. Joint models of longitudinal and time-to-event data with more than one event time outcome: a review. Int J Biostat. 2018. https://0-doi-org.brum.beds.ac.uk/10.1515/ijb-2017-0047.
14. Andrinopoulou E-R, Rizopoulos D, Takkenberg JJM, Lesaffre E. Combined dynamic predictions using joint models of two longitudinal outcomes and competing risk data. Stat Methods Med Res. 2017; 26(4):1787–1801.
15. Rizopoulos D, Ghosh P. A Bayesian semiparametric multivariate joint model for multiple longitudinal outcomes and a time-to-event. Stat Med. 2011; 30(12):1366–80.
16. Faucett CL, Thomas DC. Simultaneously modelling censored survival data and repeatedly measured covariates: a Gibbs sampling approach. Stat Med. 1996; 15(15):1663–85.
17. Song X, Davidian M, Tsiatis AA. A semiparametric likelihood approach to joint modeling of longitudinal and time-to-event data. Biometrics. 2002; 58(4):742–53.
18. Andrinopoulou E-R, Rizopoulos D. Bayesian shrinkage approach for a joint model of longitudinal and survival outcomes assuming different association structures. Stat Med. 2016; 35(26):4813–23.
19. Hickey GL, Philipson P, Jorgensen A, Kolamunnage-Dona R. Joint modelling of time-to-event and multivariate longitudinal outcomes: recent developments and issues. BMC Med Res Methodol. 2016; 16(1):1–15.
20. Lin H, McCulloch CE, Mayne ST. Maximum likelihood estimation in the joint analysis of time-to-event and multiple longitudinal variables. Stat Med. 2002; 21:2369–82.
21. Laird NM, Ware JH. Random-effects models for longitudinal data. Biometrics. 1982; 38(4):963–74.
22. Wei GC, Tanner MA.
A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. J Am Stat Assoc. 1990; 85(411):699–704.
23. Wulfsohn MS, Tsiatis AA. A joint model for survival and longitudinal data measured with error. Biometrics. 1997; 53(1):330–9.
24. Ratcliffe SJ, Guo W, Ten Have TR. Joint modeling of longitudinal and survival data via a common frailty. Biometrics. 2004; 60(4):892–9.
25. McLachlan GJ, Krishnan T. The EM Algorithm and Extensions, 2nd ed. Hoboken: Wiley-Interscience; 2008.
26. Pinheiro JC, Bates DM. Mixed-Effects Models in S and S-PLUS. New York: Springer; 2000.
27. Therneau TM, Grambsch PM. Modeling Survival Data: Extending the Cox Model. New Jersey: Springer; 2000, p. 350.
28. Rizopoulos D. JM: an R package for the joint modelling of longitudinal and time-to-event data. J Stat Softw. 2010; 35(9):1–33.
29. Philipson P, Sousa I, Diggle PJ, Williamson P, Kolamunnage-Dona R, Henderson R, Hickey GL. joineR: Joint Modelling of Repeated Measurements and Time-to-event Data. 2017. R package version 1.2.0. https://cran.r-project.org/package=joineR.
30. Dmitrienko A, Molenberghs G, Chuang-Stein C, Offen W. Analysis of Clinical Trials Using SAS: A Practical Guide. Cary: SAS Institute; 2005.
31. Law NJ, Taylor JM, Sandler H. The joint modeling of a longitudinal disease progression marker and the failure time process in the presence of cure. Biostatistics. 2002; 3(4):547–63.
32. McCulloch CE. Maximum likelihood algorithms for generalized linear mixed models. J Am Stat Assoc. 1997; 92(437):162–70.
33. Booth JG, Hobert JP. Maximizing generalized linear mixed model likelihoods with an automated Monte Carlo EM algorithm. J R Stat Soc Ser B Stat Methodol. 1999; 61(1):265–85.
34. Ripatti S, Larsen K, Palmgren J. Maximum likelihood inference for multivariate frailty models using an automated Monte Carlo EM algorithm. Lifetime Data Anal. 2002; 8(2002):349–60.
35. Hsieh F, Tseng YK, Wang JL. Joint modeling of survival and longitudinal data: Likelihood approach revisited. Biometrics.
2006; 62(4):1037–43.
36. Xu C, Baines PD, Wang JL. Standard error estimation using the EM algorithm for the joint modeling of survival and longitudinal data. Biostatistics. 2014; 15(4):731–44.
37. Eddelbuettel D, Sanderson C. RcppArmadillo: accelerating R with high-performance C++ linear algebra. Comput Stat Data Anal. 2014; 71:1054–63.
38. Król A, Mauguen A, Mazroui Y, Laurent A, Michiels S, Rondeau V. Tutorial in joint modeling and prediction: a statistical software for correlated longitudinal outcomes, recurrent events and a terminal event. J Stat Softw. 2017; 81(3):1–52.
39. Austin PC. Generating survival times to simulate Cox proportional hazards models with time-varying covariates. Stat Med. 2012; 31(29):3946–58.
40. Genz A, Bretz F. Computation of Multivariate Normal and t Probabilities. Berlin: Springer; 2009.
41. Rizopoulos D, Verbeke G, Molenberghs G. Shared parameter models under random-effects misspecification. Biometrika. 2008; 95(1):63–74.
42. Xu J, Zeger SL. The evaluation of multiple surrogate endpoints. Biometrics. 2001; 57(1):81–7.
43. Pantazis N, Touloumi G. Robustness of a parametric model for informatively censored bivariate longitudinal data under misspecification of its distributional assumptions: a simulation study. Stat Med. 2007; 26:5473–85.
44. Choi J, Zeng D, Olshan AF, Cai J. Joint modeling of survival time and longitudinal outcomes with flexible random effects. Lifetime Data Anal. 2018; 24(1):126–52.
45. Murtaugh PA, Dickson ER, Van Dam GM, Malinchoc M, Grambsch PM, Langworthy AL, Gips CH. Primary biliary cirrhosis: prediction of short-term survival based on repeated patient visits. Hepatology. 1994; 20(1):126–134.
46. Albert PS, Shih JH. An approach for jointly modeling multivariate longitudinal measurements and discrete time-to-event data. Ann Appl Stat. 2010; 4(3):1517–32.
47. Crowther MJ, Abrams KR, Lambert PC. Joint modeling of longitudinal and survival data. Stata J. 2013; 13(1):165–84.
48. Wang P, Shen W, Boye ME.
Joint modeling of longitudinal outcomes and survival using latent growth modeling approach in a mesothelioma trial. Health Serv Outcome Res Methodol. 2012; 12(2–3):182–99.
49. Rizopoulos D. The R package JMbayes for fitting joint models for longitudinal and time-to-event data using MCMC. J Stat Softw. 2016; 72(7):1–45.
50. Carpenter B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, Brubaker MA, Li P, Riddell A. Stan: a probabilistic programming language. J Stat Softw. 2017; 76(1):1–32.
51. Andrinopoulou E-R, Rizopoulos D, Takkenberg JJM, Lesaffre E. Joint modeling of two longitudinal outcomes and competing risk data. Stat Med. 2014; 33(18):3167–78.
52. Jaffa MA, Gebregziabher M, Jaffa AA. A joint modeling approach for right censored high dimensional multivariate longitudinal data. J Biom Biostat. 2014; 5(4):1000203.
53. Rizopoulos D, Verbeke G, Molenberghs G. Multiple-imputation-based residuals and diagnostic plots for joint models of longitudinal and survival outcomes. Biometrics. 2010; 66(1):20–9.

Acknowledgements

The authors would like to thank Professor Robin Henderson (University of Newcastle) for useful discussions with regards to the MCEM algorithm, and Dr Haiqun Lin (Yale University) for helpful discussions on the likelihood specification. Funding for the project was provided by the Medical Research Council (Grant number MR/M013227/1). The funder had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.

The R package joineRML can be installed directly using install.packages("joineRML") in an R console. The source code is available at https://github.com/graemeleehickey/joineRML. Archived versions are available from the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/joineRML/. joineRML is platform independent, requiring R version ≥ 3.3.0, and is published under a GNU GPL-3 license.
The dataset analysed during the current study is bundled with the R package joineRML, and can be accessed by running the command data(pbc2, package = "joineRML").

Author information

Department of Biostatistics, Institute of Translational Medicine, University of Liverpool, Waterhouse Building, 1-5 Brownlow Street, Liverpool, L69 3GL, UK: Graeme L. Hickey, Andrea Jorgensen & Ruwanthi Kolamunnage-Dona
Department of Mathematics, Physics and Electrical Engineering, Northumbria University, Ellison Place, Newcastle upon Tyne, NE1 8ST, UK: Pete Philipson

All authors collaborated in developing the model fitting algorithm reported. The programming and running of the analysis was carried out by GLH. GLH wrote the first draft of the manuscript, with input provided by PP, AJ, and RKD. All authors contributed to the manuscript revisions. All authors read and approved the final manuscript.

Correspondence to Ruwanthi Kolamunnage-Dona.

An appendix (appendix.pdf) is available that includes details on the score vector and M-step estimators. (PDF 220 kb)

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Hickey, G., Philipson, P., Jorgensen, A. et al. joineRML: a joint model and software package for time-to-event and multivariate longitudinal outcomes. BMC Med Res Methodol 18, 50 (2018).
https://0-doi-org.brum.beds.ac.uk/10.1186/s12874-018-0502-1

Received: 02 August 2017

Keywords: Joint modelling; Longitudinal data; Multivariate data; Time-to-event data
\Big |begin\Big |{document\Big |}\Big | \Big | \Big |title\Big |{\Big |bf\Big | Unique\Big | Continuation\Big | for\Big | Stochastic\Big | Hyperbolic\Big | Equations\Big |}\Big | \Big | \Big |author\Big |{Qi\Big | L\Big |\Big |"u\Big |thanks\Big |{School\Big | of\Big | Mathematics\Big |,\Big | Sichuan\Big | University\Big |,\Big | Chengdu\Big | 610064\Big |,\Big | Sichuan\Big | Province\Big |,\Big | China\Big |.\Big | The\Big | author\Big | is\Big | partially\Big | supported\Big | by\Big | NSF\Big | of\Big | China\Big | under\Big | grants\Big | 11471231\Big |,\Big | the\Big | Fundamental\Big | Research\Big | Funds\Big | for\Big | the\Big | Central\Big | Universities\Big | in\Big | China\Big | under\Big | grant\Big | 2015SCU04A02\Big | and\Big | Grant\Big | MTM2011\Big |-29306\Big |-C02\Big |-00\Big | of\Big | the\Big | MICINN\Big |,\Big | Spain\Big |.\Big | \Big |{\Big |small\Big |it\Big | E\Big |-mail\Big |:\Big |}\Big | \Big |{\Big |small\Big |tt\Big | lu\Big |@scu\Big |.edu\Big |.cn\Big |}\Big |.\Big |}\Big | \Big |quad\Big | and\Big |quad\Big | Zhongqi\Big | Yin\Big |thanks\Big |{Department\Big | of\Big | Mathematics\Big |,\Big | Sichuan\Big | Normal\Big | University\Big |,\Big | Chengdu\Big | 610068\Big |,\Big | P\Big |.\Big | R\Big |.\Big | China\Big |.\Big | The\Big | author\Big | is\Big | supported\Big | by\Big | the\Big | General\Big | Fund\Big | Project\Big | of\Big | Sichuan\Big | Provincial\Big | Department\Big | of\Big | Education\Big | in\Big | China\Big | under\Big | grant\Big | 13ZB0164\Big |.\Big | \Big |{\Big |small\Big |it\Big | E\Big |-mail\Big |:\Big |}\Big | \Big |{\Big |small\Big |tt\Big | zhongqiyin\Big |@sicnu\Big |.edu\Big |.cn\Big |}\Big |.\Big |}\Big |}\Big | \Big | \Big |date\Big |{\Big |}\Big | \Big | \Big |maketitle\Big | \Big | \Big |begin\Big |{abstract\Big |}\Big |noindent\Big | In\Big | this\Big | paper\Big |,\Big | we\Big | derive\Big | a\Big | local\Big | unique\Big | continuation\Big | property\Big | for\Big | stochastic\Big | 
hyperbolic\Big | equations\Big | without\Big | boundary\Big | conditions\Big |.\Big | This\Big | result\Big | is\Big | proved\Big | by\Big | a\Big | global\Big | Carleman\Big | estimate\Big |.\Big | As\Big | far\Big | as\Big | we\Big | know\Big |,\Big | this\Big | is\Big | the\Big | first\Big | result\Big | in\Big | this\Big | topic\Big |.\Big | \Big |end\Big |{abstract\Big |}\Big | \Big | \Big |bigskip\Big | \Big | \Big |noindent\Big |{\Big |bf\Big | 2000\Big | Mathematics\Big | Subject\Big | Classification\Big |}\Big |.\Big | \Big | Primary\Big | \Big | \Big | 60H15\Big |,\Big | 93B07\Big |.\Big | \Big | \Big |bigskip\Big | \Big | \Big |noindent\Big |{\Big |bf\Big | Key\Big | Words\Big |}\Big |.\Big | Stochastic\Big | hyperbolic\Big | equation\Big |,\Big | Carleman\Big | estimate\Big |,\Big | unique\Big | continuation\Big |.\Big | \Big | \Big |medskip\Big | \Big | \Big |section\Big |{Introduction\Big | \Big |}\Big | \Big |quad\Big | Let\Big | \Big |$T\Big | \Big |>\Big | 0\Big |$\Big |,\Big | \Big |$G\Big | \Big |subset\Big | \Big |mathbb\Big |{R\Big |}\Big |^\Big |{n\Big |}\Big |$\Big | \Big |(\Big |$n\Big | \Big |in\Big | \Big |mathbb\Big |{N\Big |}\Big |$\Big |)\Big | be\Big | a\Big | given\Big | bounded\Big | domain\Big |.\Big | Throughout\Big | this\Big | paper\Big |,\Big | we\Big | will\Big | use\Big | \Big |$C\Big |$\Big | to\Big | denote\Big | a\Big | generic\Big | positive\Big | constant\Big | depending\Big | \Big | only\Big | on\Big | \Big |$T\Big |$\Big | and\Big | \Big |$G\Big |$\Big |,\Big | which\Big | may\Big | change\Big | from\Big | line\Big | to\Big | line\Big |.\Big | Denote\Big | \Big |$Q\Big | \Big |=\Big | G\Big |times\Big | \Big |(0\Big |,\Big | T\Big |)\Big |$\Big |.\Big | \Big | Let\Big | \Big |$\Big |(\Big |Omega\Big |,\Big | \Big |{\Big |cal\Big | F\Big |}\Big |,\Big | \Big |{\Big |mathbb\Big |{F\Big |}\Big |}\Big |,\Big | \Big |{\Big |mathbb\Big |{P\Big |}\Big |}\Big |)\Big |$\Big | with\Big | \Big |$\Big |{\Big |mathbb\Big |{F\Big 
}}\buildrel \triangle \over =\{{\cal F}_t\}_{t \geq 0}$ be a complete filtered probability space on which a one dimensional standard Brownian motion $\{W(t)\}_{t\geq 0}$ is defined. Assume that $H$ is a Fr\'echet space. Let $L^2_{\mathbb{F}}(0, T; H)$ be the Fr\'echet space consisting of all $H$-valued ${\mathbb{F}}$-adapted processes $X(\cdot)$ such that $\mathbb{E} |X(\cdot)|_{L^2_{\mathbb{F}}(0, T; H)}^2 < +\infty$, endowed with the canonical quasi-norm. By $L^{\infty}_{\mathbb{F}}(0, T; H)$ we denote the Fr\'echet space of all $H$-valued ${\mathbb{F}}$-adapted bounded processes equipped with the canonical quasi-norm, and by $L^2_{\mathbb{F}}(\Omega; C([0, T]; H))$ the Fr\'echet space of all $H$-valued ${\mathbb{F}}$-adapted continuous processes $X(\cdot)$ with $\mathbb{E} |X(\cdot)|^2_{C([0, T]; H)} < +\infty$, equipped with the canonical quasi-norm.

This paper is devoted to the study of a local unique continuation property for the following stochastic hyperbolic equation:
\begin{equation}\label{system1}
\sigma dz_{t} - \Delta z\, dt = \big(b_1 z_t + b_2\cdot\nabla z + b_3 z \big)dt +  b_4 z\, dW(t) \qquad \mbox{ in } Q,
\end{equation}
where $\sigma\in C^1(\overline Q)$ is positive and
\begin{eqnarray*}
&\,& b_1 \in L_{\mathbb{F}}^{\infty}(0,T;L^{\infty}_{loc}(G)), \qquad b_2 \in L_{\mathbb{F}}^{\infty}(0,T;L^{\infty}_{loc}(G;\mathbb{R}^{n})), \nonumber \\
&\,& b_3 \in L_{\mathbb{F}}^{\infty}(0,T;L^{n}_{loc}(G)), \qquad\;\, b_4 \in L_{\mathbb{F}}^{\infty}(0,T;L^{\infty}_{loc}(G)).
\end{eqnarray*}
Put
\begin{equation}\label{HT}
{\mathbb{H}}_{T} \buildrel \triangle \over = L_{\mathbb{F}}^2 (\Omega; C([0,T];H_{0,loc}^1(G)))\cap L_{\mathbb{F}}^2 (\Omega; C^{1}([0,T];L^2_{loc}(G))).
\end{equation}

The definition of a solution to equation \eqref{system1} is given in the following sense.
\begin{definition}
\label{def solution to sys}
We call $z \in {\mathbb{H}}_T$  a solution of equation \eqref{system1} if for each $t \in [0,T]$, $G'\subset\subset G$ and  $\eta \in H_0^1(G')$, it holds that
\begin{equation}\label{solution to sysh}
\begin{array}{ll}\displaystyle
\quad \int_{G'} \sigma(t,x) z_t(t,x)\eta(x)dx - \int_{G'} \sigma(0,x) z_t(0,x)\eta(x)dx -\int_0^t \int_{G'} \sigma_t(s,x)z_t(s,x)\eta(x)dxds\\
\noalign{\smallskip}\displaystyle
= \int_0^t \int_{G'}  \big[ -\nabla z(s,x)\cdot\nabla\eta(x) + \big(b_1 z_t + b_2\cdot\nabla z + b_3 z\big)\eta(x) \big]dxds \\
\noalign{\smallskip}\displaystyle
\quad + \int_0^t \int_{G'}   b_4 z\, \eta(x)\, dx\, dW(s), \qquad {\mathbb{P}}\mbox{-a.s. }
\end{array}
\end{equation}
\end{definition}

Let $S \subset\subset  G$ be a $C^2$-hypersurface. Let $x_0\in S\setminus \partial S$ and suppose that $S$ divides the ball $B_{\rho}(x_0)\subset G$, centered at $x_0$ and with radius $\rho$, into two parts ${\cal D}_{\rho}^+$ and ${\cal D}_{\rho}^-$. Denote as usual by $\nu(x)$ the unit normal vector to $S$ at $x$ pointing into ${\cal D}_{\rho}^+$.

Let $y$ be a solution to equation \eqref{system1} and let $\varepsilon>0$. This paper is devoted to the following local unique continuation problem:

\medskip
{\bf (Pu)} Can $y$ in ${\cal D}_{\rho}^+\times (\varepsilon,T-\varepsilon)$ be uniquely determined by the values of $y$ in ${\cal D}_{\rho}^-\times (0,T)$?

\medskip

In other words, Problem {\bf (Pu)} concerns whether the values of the solution on one side of $S$ uniquely determine its values on the other side. By the linearity of equation \eqref{system1}, it is equivalent to the following problem:

\medskip

{\bf (Pu1)} Can we conclude that $y=0$ in ${\cal D}_{\rho}^+\times (\varepsilon,T-\varepsilon)$, provided that $y=0$ in ${\cal D}_{\rho}^-\times (0,T)$?

\medskip

Unique continuation problems for deterministic PDEs have been studied extensively in the literature. Generally speaking, a unique continuation result is a statement of the following form:

\vspace{0.2cm}

Let $u$ be a solution of a PDE and let ${\cal M}_1\subset{\cal M}_2$ be two regions. Then $u$ is determined uniquely in ${\cal M}_2$ by its values in ${\cal M}_1$.

\vspace{0.2cm}

Problem {\bf (Pu)} is a natural generalization of unique continuation problems to the stochastic setting, i.e., ${\cal M}_1={\cal D}^-_\rho$ and ${\cal M}_2=B_\rho (x_0)$.

There is a long history of the study of the unique continuation property (UCP for short) for deterministic PDEs. Classical results date back to the Cauchy--Kovalevskaya theorem and Holmgren's uniqueness theorem, both of which require the coefficients of the PDE to be analytic. In 1939, T. Carleman introduced in \cite{Carleman1} a new method, based on weighted estimates, to prove the UCP for two dimensional elliptic
equations with $L^\infty$ coefficients. This method, nowadays called the ``Carleman estimates method'', turned out to be quite powerful: it has been developed extensively in the literature and has become the most useful tool for obtaining the UCP for PDEs (e.g. \cite{Calderon,Hormander2,Kenig,Tataru,Tataru1,Zuily}). In particular, unique continuation results for solutions of hyperbolic equations across hypersurfaces were studied by many authors (e.g. \cite{Hormander3,Kenig1,Robbiano1,Sogge,Tataru2}).

Compared with deterministic PDEs, there are very few results concerning the UCP for stochastic PDEs. To the best of our knowledge, \cite{Zh} is the first result on the UCP for stochastic PDEs; there the author shows that a solution to a stochastic parabolic equation vanishes, provided that it vanishes in a subdomain. This result was improved in \cite{LL,LY}, where a weaker geometric condition is imposed on the set where the solution vanishes. The first UCP result for stochastic hyperbolic equations was obtained in \cite{Zhangxu3}, and some improvements were made in \cite{Lu,LZ2}. The results in \cite{Lu} and \cite{Zhangxu3} concern the global UCP for stochastic hyperbolic equations with a homogeneous Dirichlet boundary condition, i.e., they conclude that the solution to a stochastic hyperbolic equation vanishes, provided that it equals zero in a large enough subdomain. In this paper, we focus on the local UCP for stochastic hyperbolic equations without boundary conditions; that is, can a solution be determined locally?

To present the main result of this paper, let us first introduce the following notion.

\begin{definition}\label{def1}
Let $x_0\in S$ and $K>0$. $S$ is said to satisfy the outer paraboloid condition with $K$ at $x_0$ if there exists a neighborhood
${\cal V}$ of $x_0$ and a paraboloid ${\cal P}$, tangential to $S$ at $x_0$, such that ${\cal P}\cap{\cal V}\subset  {\cal D}^{-}_\rho$ and ${\cal P}$ is congruent to $\displaystyle x_1 = K\sum_{j=2}^n x_j^2$.
\end{definition}

The main result in this paper is the following one.
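For intuition, Definition \ref{def1} can be checked by hand for a sphere: near a point of tangency, a circle of radius $r$ can be written locally as $x_1 = r-\sqrt{r^2-x_2^2}$, and $r-\sqrt{r^2-x_2^2}\ge x_2^2/(2r)$, so the tangent paraboloid with $K=1/(2r)$ stays on the concave side of the circle. The following sketch (our own illustration, not part of the paper; the radius and sample points are arbitrary choices) verifies this inequality numerically.

```python
import math

# Illustrative check (not from the text): for a circle of radius r written
# locally as x1 = f(x2) = r - sqrt(r^2 - x2^2) near the tangency point, the
# paraboloid x1 = K*x2^2 with K = 1/(2r) satisfies f(x2) >= K*x2^2, so the
# paraboloid lies on one side of the curve, as Definition 1 requires when the
# concave side plays the role of D^-.
r = 1.0
K = 1.0 / (2.0 * r)
for s in [0.0, 0.1, 0.3, 0.5, 0.9]:
    f = r - math.sqrt(r**2 - s**2)   # height of the circle over the tangent line
    assert f >= K * s**2 - 1e-12, (s, f, K * s**2)
print("outer paraboloid condition verified numerically with K = 1/(2r)")
```

The inequality is in fact exact: $\sqrt{1-s^2}\le 1-s^2/2$, since $(1-s^2/2)^2 = 1-s^2+s^4/4$.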
\begin{theorem}\label{th1}
Let $x_0\in S\setminus\partial S$ be such that $\frac{\partial \sigma}{\partial\nu}(x_0,T/2)<0$, and let $S$ satisfy the outer paraboloid condition with
\begin{equation}\label{th1-eq1}
K <\frac{-\frac{ \partial \sigma}{\partial\nu}(x_0,T/2)}{4(|\sigma|_{L^\infty(B_\rho(x_0,T/2))}+1)}.
\end{equation}
Let $z\in {\mathbb{H}}_T$ be a solution of equation \eqref{system1} satisfying
\begin{equation}\label{th1-eq2}
z=\frac{\partial z}{\partial\nu} = 0 \quad\mbox{ on } (0,T)\times S,\;\;{\mathbb{P}}\mbox{-}\hbox{\rm a.s.{ }}
\end{equation}
Then, there exist a neighborhood ${\cal V}$ of $x_0$ and $\varepsilon\in (0,T/2)$ such that
\begin{equation}\label{th1-eq3}
z = 0 \quad\mbox{ in } ({\cal V}\cap {\cal D}_{\rho}^+)\times (\varepsilon,T-\varepsilon),\;\;{\mathbb{P}}\mbox{-}\hbox{\rm a.s.{ }}
\end{equation}
\end{theorem}
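To make the smallness condition \eqref{th1-eq1} concrete, the following sketch evaluates the admissible range of $K$ for hypothetical values of $\frac{\partial\sigma}{\partial\nu}(x_0,T/2)$ and $|\sigma|_{L^\infty}$ (both numbers are our own assumptions, not taken from the text).

```python
# Hypothetical numbers (not from the text) illustrating the admissible range of
# K in condition (th1-eq1): K < (-dsigma/dnu(x0, T/2)) / (4 (|sigma|_inf + 1)).
dsigma_dnu = -1.0   # assumed value of the normal derivative of sigma; must be < 0
sigma_sup = 2.0     # assumed value of |sigma|_{L^infty}
assert dsigma_dnu < 0
K_bound = -dsigma_dnu / (4.0 * (sigma_sup + 1.0))
print(f"admissible range: 0 < K < {K_bound}")   # here the bound is 1/12
```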
\begin{remark}
In Theorem \ref{th1}, we assume that $\frac{\partial \sigma}{\partial\nu}(x_0,T/2)<0$. This condition is related to the propagation of waves. It is a reasonable assumption, since the UCP may not hold if it is not fulfilled (e.g. \cite{Alinhac}), and it can be regarded as a kind of pseudoconvexity condition (e.g. \cite[Chapter XXVII]{Hormander2}).
\end{remark}

\begin{remark}
If $S$ is a hyperplane, then Condition \eqref{th1-eq1} is always satisfied, since we can take $K = 0$.
\end{remark}

\begin{remark}
From Theorem \ref{th1}, one can recover many classical UCP results for deterministic hyperbolic equations (e.g. \cite{Hormander3,Robbiano,Tataru2}).
\end{remark}

As an immediate corollary of Theorem \ref{th1}, we have the following UCP.

\begin{corollary}\label{cor1}
Let $x_0\in S \setminus\partial S$ be such that $\frac{\partial \sigma}{\partial\nu}(x_0,T/2)<0$, and let $S$ satisfy the outer paraboloid condition with
\begin{equation}\label{th1-eq1-1}
K <\frac{-\frac{ \partial \sigma}{\partial\nu}(x_0,T/2)}{\displaystyle 4(|\sigma|_{L^\infty(B_\rho(x_0,T/2))}+1)}.
\end{equation}
Then for any $z\in {\mathbb{H}}_T$ which solves equation \eqref{system1} and satisfies
\begin{equation}\label{th1-eq2-1}
z= 0 \quad\mbox{ on } {\cal D}^-_\rho\times (0,T),\;{\mathbb{P}}\mbox{-}\hbox{\rm a.s.},
\end{equation}
there exist a neighborhood ${\cal V}$ of $x_0$ and $\varepsilon\in (0,T/2)$ such that
\begin{equation}\label{th1-eq3-1}
z = 0 \quad\mbox{ in } \big({\cal V}\cap{\cal D}^+_\rho\big)\times (\varepsilon,T-\varepsilon),\;{\mathbb{P}}\mbox{-}\hbox{\rm a.s.{ }}
\end{equation}
\end{corollary}

Similar to the deterministic setting, we shall use a stochastic version of the Carleman estimate for stochastic hyperbolic equations to establish our result. Carleman estimates for stochastic PDEs have been studied extensively in recent years (see \cite{barbu1,Liuxu1,Luqi3,Luqi7,Lu,Tang-Zhang1,Zhangxu6} and the references therein). A Carleman estimate for stochastic hyperbolic equations was first obtained in \cite{Zhangxu3}. Compared with the result in \cite{Zhangxu3}, we need to handle a more complex case (see Section \ref{sec-point} for more details).

The rest of this paper is organized as follows. In Section \ref{sec-point}, we derive a point-wise estimate for a stochastic hyperbolic operator, which is crucial for establishing the desired Carleman estimate in this paper. In Section \ref{sec-weight}, we explain the choice of the weight function in
the Carleman estimate. Section \ref{sec-car} is devoted to the proof of a Carleman estimate, while Section \ref{sec-main} is devoted to the proof of the main result.

\section{A point-wise estimate for a stochastic hyperbolic operator}\label{sec-point}

We introduce the following point-wise Carleman estimate for stochastic hyperbolic operators. This estimate is of independent interest and will be of particular importance in the proof of the main result.

\begin{lemma}\label{hyperbolic1}
Let $\ell,\Psi \in C^2((0,T)\times\mathbb{R}^n)$. Assume that $u$ is an $H^2_{loc}(\mathbb{R}^n)$-valued ${\mathbb{F}}$-adapted process such that $u_t$ is an $L^2(\mathbb{R}^n)$-valued semimartingale. Set $\theta = e^\ell$ and $v=\theta u$. Then, for a.e. $x\in \mathbb{R}^n
$ and ${\mathbb{P}}$-a.s. $\omega \in \Omega$,
\begin{eqnarray}\label{hyperbolic2}
&& \theta \Big( -2\sigma\ell_t v_t + 2\sum_{i,j=1}^n b^{ij}\ell_i v_j + \Psi v \Big) \Big[ \sigma du_t - \sum_{i,j=1}^n (b^{ij}u_i)_j dt \Big]\nonumber \\
&& \displaystyle +\sum_{i,j=1}^n \Big[ \sum_{i',j'=1}^n \Big( 2b^{ij}b^{i'j'}\ell_{i'}v_iv_{j'} - b^{ij}b^{i'j'}\ell_i v_{i'}v_{j'} \Big) - 2b^{ij}\ell_t v_i v_t + \sigma b^{ij}\ell_i v_t^2 \nonumber \\
&& \displaystyle + \Psi b^{ij}v_i v - \Big( A\ell_i + \frac{1}{2}\Psi_i\Big)b^{ij}v^2 \Big]_j dt +d\Big[ \sigma\sum_{i,j=1}^n b^{ij}\ell_t v_i v_j \nonumber\\
&& \displaystyle - 2\sigma\sum_{i,j=1}^n b^{ij}\ell_iv_jv_t  + \sigma^2\ell_t v_t^2 - \sigma\Psi v_t v + \Big( \sigma A\ell_t + \frac{1}{2}(\sigma\Psi)_t\Big)v^2 \Big] \\
&=& \displaystyle \bigg\{ \Big[(\sigma^2\ell_{t})_t + \sum_{i,j =1}^n (\sigma b^{ij}\ell_i)_{j} - \sigma\Psi \Big]v_t^2 - 2\sum_{i,j=1}^n \big[(\sigma b^{ij}\ell_j)_t + b^{ij}(\sigma\ell_{t})_j\big]v_iv_t \nonumber \\
\noalign{\smallskip}&& \displaystyle +\sum_{i,j=1}^n \Big[ (\sigma b^{ij}\ell_t)_t + \sum_{i',j'=1}^n \Big(2b^{ij'}(b^{i'j}\ell_{i'})_{j'}-(b^{ij}b^{i'j'}\ell_{i'})_{j'}\Big) + \Psi b^{ij} \Big]v_iv_j \nonumber\\
&& \displaystyle + Bv^2 + \Big( -2\sigma\ell_tv_t + 2\sum_{i,j=1}^n b^{ij}\ell_iv_j + \Psi v \Big)^2\bigg\} dt + \sigma^2\theta^2 \ell_t(du_t)^2,\nonumber
\end{eqnarray}
where $(du_t)^2$ denotes the quadratic variation process of $u_t$, and $A$ and $B$ are given by
\begin{equation}\label{AB1}
\left\{
\begin{array}{ll}
\displaystyle A\buildrel \triangle \over =\sigma (\ell_t^2-\ell_{tt})-\sum_{i,j=1}^n  (b^{ij}\ell_i\ell_j-b^{ij}_j\ell_i  -b^{ij}\ell_{ij})-\Psi,\\
\noalign{\smallskip}
\displaystyle B\buildrel \triangle \over =A\Psi+(\sigma A\ell_t)_t-  \sum_{i,j=1}^n(Ab^{ij}\ell_i)_j +\dfrac12 \Big[(\sigma\Psi)_{tt}-\sum_{i,j=1}^n (b^{ij}\Psi_i)_j\Big].
\end{array}
\right.
\end{equation}
\end{lemma}

\begin{remark}
When $\sigma=1$, equality \eqref{hyperbolic2} was established in \cite{Zhangxu3}. The computation for general $\sigma$ is more complex: one needs to handle the terms involving $\sigma$ carefully.
\end{remark}

{\it Proof of Lemma \ref{hyperbolic1}.} Since $v(t,x)=\theta(t,x)u(t,x)$, we have
$$u_t=\theta^{-1}(v_t-\ell_tv), \qquad u_j=\theta^{-1}(v_j-\ell_jv)$$
for $j=1,2,\ldots,n$.
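These first-order substitution identities can be verified symbolically. The sketch below (our own check, not part of the proof) confirms them for smooth $\ell$ and $v$ using SymPy.

```python
import sympy as sp

# Symbolic sanity check of the substitution u = theta^{-1} v with theta = e^ell:
# it should give u_t = theta^{-1}(v_t - ell_t v) and u_x = theta^{-1}(v_x - ell_x v).
t, x = sp.symbols('t x')
l = sp.Function('ell')(t, x)
v = sp.Function('v')(t, x)
theta = sp.exp(l)
u = v / theta

for var in (t, x):
    lhs = sp.diff(u, var)
    rhs = (sp.diff(v, var) - sp.diff(l, var) * v) / theta
    assert sp.simplify(lhs - rhs) == 0
print("first-order substitution identities verified")
```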
\Big |$\Big |theta\Big |$\Big | is\Big | deterministic\Big |,\Big | we\Big | have\Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{4\Big |.15\Big |-eq1\Big |}\Big | \Big |begin\Big |{array\Big |}\Big | \Big |{ll\Big |}\Big | \Big | \Big | \Big | \Big |sigma\Big | du\Big |_t\Big | \Big |=\Big | \Big | \Big |displaystyle\Big | \Big |sigma\Big | d\Big |[\Big |theta\Big |^\Big |{\Big |-1\Big |}\Big |(v\Big |_t\Big | \Big |-\Big | \Big |ell\Big |_t\Big | v\Big |)\Big |]\Big |=\Big | \Big |displaystyle\Big | \Big |sigma\Big |theta\Big |^\Big |{\Big |-1\Big |}\Big |\Big |[dv\Big |_t\Big | \Big |-2\Big |ell\Big |_t\Big | v\Big |_t\Big | dt\Big | \Big | \Big |+\Big | \Big |(\Big |ell\Big |_t\Big | \Big |^2\Big | \Big |-\Big | \Big |ell\Big |_\Big |{tt\Big |}\Big | \Big |)\Big | v\Big | dt\Big |Big\Big |]\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | Moreover\Big |,\Big | we\Big | find\Big | that\Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{4\Big |.15\Big |-eq2\Big |}\Big | \Big |begin\Big |{array\Big |}\Big | \Big |{ll\Big |}\Big | \Big | \Big |displaystyle\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(b\Big |^\Big |{ij\Big |}u\Big |_i\Big |)\Big |_j\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big | \Big |&\Big | \Big |=\Big | \Big |displaystyle\Big | \Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |\Big |(b\Big |^\Big |{ij\Big |}\Big |theta\Big |^\Big |{\Big |-1\Big |}\Big |(v\Big |_i\Big |-\Big | \Big |ell\Big |_i\Big | v\Big |)\Big |\Big |)\Big |_j\Big | \Big |\Big |\Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |=\Big | \Big |displaystyle\Big | \Big |theta\Big |^\Big |{\Big |-1\Big |}\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big |\Big |[\Big |(b\Big |^\Big |{ij\Big |}v\Big |_i\Big |)\Big |_j\Big | \Big |-\Big | 2b\Big |^\Big |{ij\Big 
|}\Big |ell\Big |_i\Big | v\Big |_j\Big |+\Big |(\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | \Big |ell\Big |_j\Big | \Big | \Big |-\Big | b\Big |^\Big |{ij\Big |}\Big |_j\Big |ell\Big |_i\Big | \Big | \Big |-\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_\Big |{ij\Big |}\Big |)\Big | v\Big | \Big |Big\Big |]\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | As\Big | an\Big | immediate\Big | result\Big | of\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq1\Big |}\Big | and\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq2\Big |}\Big |,\Big | we\Big | have\Big | that\Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{4\Big |.15\Big |-eq1\Big |-eq2\Big |}\Big | \Big |begin\Big |{array\Big |}\Big | \Big |{ll\Big |}\Big | \Big | \Big | \Big | \Big |&\Big | \Big | \Big |displaystyle\Big |sigma\Big | d\Big | u\Big |_t\Big | \Big |-\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(b\Big |^\Big |{ij\Big |}u\Big |_i\Big |)\Big |_j\Big | dt\Big |\Big |\Big | \Big | \Big | \Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big |=\Big | \Big |&\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |displaystyle\Big | \Big |theta\Big |^\Big |{\Big |-1\Big |}\Big |\Big |[\Big |\Big |(\Big |sigma\Big | dv\Big |_t\Big | \Big |-\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(b\Big |^\Big |{ij\Big |}v\Big |_i\Big |)\Big |_j\Big | dt\Big |\Big |)\Big | \Big |+\Big | \Big |\Big |(\Big | \Big |-2\Big | \Big |sigma\Big | \Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | v\Big |_j\Big | \Big |\Big |)dt\Big | \Big |\Big |\Big | \Big | \Big | \Big | \Big | \Big | \Big | \Big |&\Big | \Big |displaystyle\Big | \Big |+\Big | \Big |\Big |(\Big |sigma\Big | \Big |(\Big |ell\Big |_t\Big | \Big 
|^2\Big | \Big |-\Big | \Big |ell\Big |_\Big |{tt\Big |}\Big | \Big |)\Big | \Big | \Big |-\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | \Big |ell\Big |_j\Big | \Big | \Big |-\Big | b\Big |^\Big |{ij\Big |}\Big |_j\Big |ell\Big |_i\Big | \Big | \Big |-\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_\Big |{ij\Big |}\Big |)\Big | \Big |\Big |)\Big | v\Big | dt\Big |Big\Big |]\Big |.\Big | \Big | \Big | \Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | Therefore\Big |,\Big | by\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq1\Big |-eq2\Big |}\Big | and\Big | the\Big | definition\Big | of\Big | \Big |$A\Big |$\Big | in\Big | \Big |eqref\Big |{AB1\Big |}\Big |,\Big | we\Big | get\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{4\Big |.15\Big |-eq3\Big |}\Big | \Big |begin\Big |{array\Big |}\Big |{ll\Big |}\Big | \Big |displaystyle\Big |quad\Big |theta\Big |\Big |(\Big |-2\Big |sigma\Big |ell\Big |_tv\Big |_t\Big |+2\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_iv\Big |_j\Big |+\Big |Psi\Big | v\Big |\Big |)\Big |\Big |(\Big | \Big |sigma\Big | du\Big |_t\Big |-\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big |(b\Big |^\Big |{ij\Big |}u\Big |_i\Big |)\Big |_jdt\Big |\Big |)\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big |displaystyle\Big |=\Big | \Big |\Big |(\Big |-2\Big |sigma\Big |^2\Big |ell\Big |_tv\Big |_t\Big |+2\Big |sigma\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^nb\Big |^\Big |{ij\Big |}\Big |ell\Big |_iv\Big |_j\Big |+\Big |sigma\Big |Psi\Big | v\Big |\Big |)\Big | dv\Big |_t\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big |displaystyle\Big |quad\Big | \Big | \Big |+\Big |\Big |(\Big |-2\Big |sigma\Big |ell\Big |_tv\Big |_t\Big |+2\Big |sum\Big |_\Big |{i\Big |,j\Big 
|=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_iv\Big |_j\Big |+\Big |Psi\Big | v\Big |\Big |)\Big |\Big |(\Big |-\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(b\Big |^\Big |{ij\Big |}v\Big |_i\Big |)\Big |_j\Big |+Av\Big |\Big |)dt\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big |displaystyle\Big |quad\Big |+\Big |\Big |(\Big |-2\Big |sigma\Big |ell\Big |_tv\Big |_t\Big |+2\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_iv\Big |_j\Big |+\Big |Psi\Big | v\Big |\Big |)\Big |^2dt\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | Let\Big | us\Big | continue\Big | to\Big | analyze\Big | the\Big | first\Big | two\Big | terms\Big | in\Big | the\Big | right\Big |-hand\Big | side\Big | of\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq3\Big |}\Big |.\Big | \Big | For\Big | the\Big | first\Big | term\Big | in\Big | the\Big | right\Big |-hand\Big | side\Big | of\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq3\Big |}\Big |,\Big | \Big | we\Big | find\Big | that\Big | \Big |begin\Big |{equation\Big |*\Big |}\Big | \Big |begin\Big |{cases\Big |}\Big | \Big | \Big | \Big |displaystyle\Big |qquad\Big | \Big |-2\Big |sigma\Big |^2\Big | \Big |ell\Big |_t\Big | v\Big |_t\Big | dv\Big |_t\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |&\Big |=\Big | \Big |displaystyle\Big | d\Big |(\Big |-\Big |sigma\Big |^2\Big | \Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |^2\Big |)\Big | \Big |+\Big | \Big |sigma\Big |^2\Big | \Big |ell\Big |_t\Big | \Big |(dv\Big |_t\Big |)\Big |^2\Big | \Big |+\Big | \Big |(\Big |sigma\Big |^2\Big |ell\Big |_t\Big | \Big |)\Big |_t\Big | v\Big |_t\Big |^2\Big | dt\Big |,\Big |\Big |\Big | \Big | \Big | \Big |displaystyle\Big | 2\Big |sigma\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | v\Big |_j\Big | 
d\Big | v\Big |_t\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |&\Big | \Big |=\Big |displaystyle\Big | \Big | d\Big |\Big |(2\Big |sigma\Big | v\Big |_t\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | v\Big |_j\Big |\Big |)\Big | \Big |-\Big | 2\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(\Big |sigma\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big |)\Big |_t\Big | v\Big |_j\Big | v\Big |_t\Big | dt\Big |-\Big | 2\Big |sigma\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | v\Big |_\Big |{jt\Big |}v\Big |_t\Big | dt\Big |,\Big |\Big |\Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | \Big |qquad\Big |qquad\Big |sigma\Big |Psi\Big | v\Big | dv\Big |_t\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |&\Big |displaystyle\Big | \Big | \Big |=\Big | \Big | d\Big |(\Big |Psi\Big | \Big |sigma\Big | v\Big | v\Big |_t\Big |)\Big | \Big |-\Big | \Big |(\Big |sigma\Big |Psi\Big |)\Big |_t\Big | v\Big | v\Big |_t\Big | dt\Big | \Big |-\Big | \Big |sigma\Big |Psi\Big | v\Big |_t\Big |^2\Big | dt\Big |.\Big | \Big | \Big | \Big |end\Big |{cases\Big |}\Big | \Big |end\Big |{equation\Big |*\Big |}\Big | Therefore\Big |,\Big | we\Big | get\Big | that\Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{4\Big |.15\Big |-eq4\Big |}\Big | \Big | \Big | \Big |begin\Big |{array\Big |}\Big | \Big | \Big | \Big | \Big | \Big |{ll\Big |}\Big | \Big | \Big | \Big | \Big | \Big |quad\Big | \Big |&\Big |displaystyle\Big | \Big |\Big |;\Big |\Big |;\Big | \Big |\Big |(\Big |-2\Big |sigma\Big |ell\Big |_tv\Big |_t\Big |+2\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^nb\Big |^\Big |{ij\Big |}\Big |ell\Big |_iv\Big |_j\Big |+\Big |Psi\Big | v\Big |\Big |)\Big |sigma\Big | dv\Big |_t\Big |\Big |\Big | \Big | 
\Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |displaystyle\Big | \Big |=d\Big |\Big |(\Big |-\Big |sigma\Big |^2\Big |ell\Big |_t\Big | v\Big |_t\Big |^2\Big | \Big |+2\Big |sigma\Big | v\Big |_t\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | v\Big |_j\Big | \Big |+\Big | \Big |sigma\Big | \Big |Psi\Big | \Big | v\Big | v\Big |_t\Big | \Big |-\Big | \Big |frac\Big | 12\Big | \Big |(\Big |sigma\Big |Psi\Big |)\Big |_t\Big | v\Big |^2\Big |\Big |)\Big |\Big |\Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |displaystyle\Big | \Big |-\Big | \Big |\Big |[\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(\Big |sigma\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | v\Big |_t\Big |^2\Big |)\Big |_j\Big | \Big |-\Big |\Big |(\Big |(\Big |sigma\Big |ell\Big |_t\Big |)\Big |_t\Big | \Big |+\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(\Big |sigma\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big |)\Big |_j\Big | \Big |-\Big |sigma\Big | \Big |Psi\Big |\Big |)v\Big |_t\Big |^2\Big | \Big |\Big |\Big | \Big | \Big |&\Big | \Big |displaystyle\Big | \Big |+\Big | \Big |\Big |;\Big | 2\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(\Big |sigma\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big |)\Big |_t\Big | v\Big |_j\Big | v\Big |_t\Big | \Big |-\Big | \Big |frac\Big | 12\Big | \Big |(\Big |sigma\Big |Psi\Big |)\Big |_\Big |{tt\Big |}\Big | v\Big |^2\Big |\Big |]dt\Big | \Big |+\Big | \Big |sigma\Big |^2\Big | \Big |ell\Big |_t\Big | \Big |(d\Big | v\Big |_t\Big |)\Big |^2\Big |.\Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | In\Big | a\Big | similar\Big | manner\Big |,\Big | for\Big | the\Big | second\Big | term\Big | in\Big | the\Big | right\Big |-hand\Big | side\Big | of\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq3\Big 
|}\Big |,\Big | \Big | we\Big | find\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{2nd\Big | in\Big | the\Big | right1\Big |}\Big | \Big |begin\Big |{array\Big |}\Big |{ll\Big |}\Big |displaystyle\Big | \Big | \Big |quad\Big | \Big |displaystyle\Big |-2\Big |sigma\Big |ell\Big | \Big |_tv\Big |_t\Big |\Big |[\Big |-\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(b\Big |^\Big |{ij\Big |}v\Big |_i\Big |)\Big |_j\Big |+Av\Big |Big\Big |]\Big |\Big |\Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big |=\Big | \Big |displaystyle\Big | 2\Big |\Big |[\Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big |(\Big |sigma\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_tv\Big |_iv\Big |_t\Big |)\Big |_j\Big |-\Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big | b\Big |^\Big |{ij\Big |}\Big |(\Big |sigma\Big |ell\Big |_\Big |{t\Big |}\Big |)\Big |_j\Big | v\Big |_iv\Big |_t\Big |Big\Big |]\Big |+\Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big | \Big |(\Big |sigma\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_t\Big |)\Big |_tv\Big |_iv\Big |_j\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big | \Big |displaystyle\Big |quad\Big | \Big |-\Big |\Big |(\Big |sigma\Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_tv\Big |_iv\Big |_j\Big |+\Big |sigma\Big | A\Big |ell\Big |_tv\Big |^2\Big |\Big |)\Big |_t\Big |+\Big |(\Big |sigma\Big | \Big | A\Big |ell\Big |_t\Big |)\Big |_tv\Big |^2\Big |,\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{2nd\Big | in\Big | the\Big | right2\Big |}\Big | \Big |begin\Big |{array\Big |}\Big |{ll\Big |}\Big |displaystyle\Big | \Big | \Big |quad\Big | \Big | \Big | 2\Big |sum\Big |_\Big 
|{i\Big |,j\Big |=1\Big |}\Big |^n\Big | b\Big |^\Big |{ij\Big |}\Big |ell\Big |_iv\Big |_j\Big |\Big |[\Big |-\Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big | \Big |(b\Big |^\Big |{ij\Big |}v\Big |_i\Big |)\Big |_j\Big |+Av\Big |Big\Big |]\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big |=\Big | \Big | \Big |displaystyle\Big | \Big |-\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big |=\Big | 1\Big |}\Big |^n\Big | \Big |\Big |[\Big |sum\Big |_\Big |{i\Big |'\Big |,j\Big |'\Big | \Big |=1\Big |}\Big |^n\Big | \Big |\Big |(2b\Big |^\Big |{ij\Big |}\Big | b\Big |^\Big |{i\Big |'j\Big |'\Big |}\Big |ell\Big |_\Big |{i\Big |'\Big |}v\Big |_iv\Big |_\Big |{j\Big |'\Big |}\Big | \Big |-b\Big |^\Big |{ij\Big |}b\Big |^\Big |{i\Big |'j\Big |'\Big |}\Big |ell\Big |_iv\Big |_\Big |{i\Big |'\Big |}v\Big |_\Big |{j\Big |'\Big |}\Big |\Big |)\Big |-Ab\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big | v\Big |^2\Big |Big\Big |]\Big |_j\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big |displaystyle\Big | \Big |quad\Big | \Big | \Big |+\Big |sum\Big |_\Big |{i\Big |,j\Big |,i\Big |'\Big |,j\Big |'\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big |left\Big |[2b\Big |^\Big |{ij\Big |'\Big |}\Big |(b\Big |^\Big |{i\Big |'j\Big |}\Big |ell\Big |_\Big |{i\Big |'\Big |}\Big |)\Big |_\Big |{j\Big |'\Big |}\Big | \Big |-\Big | \Big |(b\Big |^\Big |{ij\Big |}b\Big |^\Big |{i\Big |'j\Big |'\Big |}\Big |ell\Big |_\Big |{i\Big |'\Big |}\Big |)\Big |_\Big |{j\Big |'\Big |}\Big |right\Big |]v\Big |_iv\Big |_j\Big |-\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big |(Ab\Big |^\Big |{ij\Big |}\Big |ell\Big |_i\Big |)\Big |_j\Big | v\Big |^2\Big |,\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | and\Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{2nd\Big | in\Big | the\Big | right3\Big |}\Big | \Big |begin\Big 
|{array\Big |}\Big |{ll\Big |}\Big |displaystyle\Big | \Big | \Big |quad\Big | \Big | \Big |Psi\Big | v\Big |\Big |[\Big |-\Big |sum\Big |_\Big |{i\Big |,j\Big |=1\Big |}\Big |^n\Big | \Big |(b\Big |^\Big |{ij\Big |}v\Big |_i\Big |)\Big |_j\Big |+Av\Big |Big\Big |]\Big | \Big |&\Big |displaystyle\Big | \Big |=\Big | \Big |displaystyle\Big | \Big | \Big |-\Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big |\Big |(\Big |Psi\Big | b\Big |^\Big |{ij\Big |}vv\Big |_i\Big |-\Big |frac\Big | 12\Big | \Big |{\Big |Psi\Big |_i\Big |}b\Big |^\Big |{ij\Big |}\Big | v\Big |^2\Big |\Big |)\Big |_j\Big |+\Big |Psi\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big | b\Big |^\Big |{ij\Big |}v\Big |_iv\Big |_j\Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big |quad\Big | \Big |displaystyle\Big | \Big |+\Big |\Big |[\Big |-\Big |frac\Big | 12\Big | \Big |sum\Big |_\Big |{i\Big |,j\Big | \Big |=\Big | 1\Big |}\Big |^n\Big | \Big |(b\Big |^\Big |{ij\Big |}\Big |Psi\Big |_i\Big |)\Big |_j\Big |+A\Big |Psi\Big |Big\Big |]\Big | v\Big |^2\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | \Big | \Big | Finally\Big |,\Big | from\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq3\Big |}\Big | to\Big | \Big |eqref\Big |{2nd\Big | in\Big | the\Big | right3\Big |}\Big |,\Big | we\Big | arrive\Big | at\Big | the\Big | desired\Big | equality\Big | \Big |eqref\Big |{hyperbolic2\Big |}\Big |.\Big | \Big |signed\Big | \Big |{\Big |$\Big |sqr69\Big |$\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big | \Big | \Big | \Big | \Big |section\Big |{Choice\Big | of\Big | the\Big | weight\Big | function\Big |}\Big |label\Big |{sec\Big |-weight\Big |}\Big | \Big | \Big | \Big | In\Big | this\Big | section\Big |,\Big | we\Big | explain\Big | the\Big | choice\Big | of\Big | the\Big | weight\Big | function\Big | which\Big | will\Big | be\Big | used\Big | to\Big | 
establish\Big | our\Big | global\Big | Carleman\Big | estimate\Big |.\Big | Although\Big | such\Big | kind\Big | of\Big | functions\Big | are\Big | already\Big | used\Big | in\Big | \Big |cite\Big |{Amirov\Big |}\Big |,\Big | we\Big | give\Big | full\Big | details\Big | for\Big | the\Big | sake\Big | of\Big | completion\Big | and\Big | the\Big | convenience\Big | of\Big | readers\Big |.\Big | \Big | The\Big | weight\Big | function\Big | is\Big | given\Big | as\Big | follows\Big |:\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{weight1\Big |}\Big | \Big |varphi\Big |(x\Big |,t\Big |)\Big | \Big |=\Big | h\Big | x\Big |_1\Big | \Big |+\Big | \Big |frac\Big |{1\Big |}\Big |{2\Big |}\Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |+\Big | \Big |frac\Big |{1\Big |}\Big |{2\Big |}\Big |\Big |(t\Big |-\Big |frac\Big | T2\Big |\Big |)\Big |^2\Big | \Big |+\Big | \Big |frac\Big |{1\Big |}\Big |{2\Big |}\Big |tau\Big |,\Big | \Big |end\Big |{equation\Big |}\Big | \Big | where\Big | \Big |$h\Big |$\Big | and\Big | \Big |$\Big |tau\Big |$\Big | are\Big | suitable\Big | parameters\Big |,\Big | whose\Big | precise\Big | meanings\Big | will\Big | be\Big | explained\Big | in\Big | the\Big | sequel\Big |.\Big | \Big | Without\Big | loss\Big | of\Big | generality\Big |,\Big | we\Big | assume\Big | that\Big | \Big |$0\Big |=\Big |(0\Big |,\Big |cdots\Big |,0\Big |)\Big |in\Big | S\Big | \Big |setminus\Big | \Big |partial\Big | S\Big |$\Big | and\Big | \Big |$\Big |nu\Big |(0\Big |)\Big |=\Big |(1\Big |,\Big |cdots\Big |,0\Big |)\Big |$\Big |.\Big | For\Big | some\Big | \Big |$r\Big | \Big |>0\Big |$\Big |,\Big | for\Big | that\Big | \Big |$S\Big |$\Big | is\Big | \Big |$C\Big |^2\Big |$\Big |,\Big | we\Big | can\Big | parameterize\Big | \Big |$S\Big |$\Big | in\Big | the\Big | neighborhood\Big | of\Big | the\Big | origin\Big | by\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq1\Big |}\Big | x\Big |_1\Big 
|=\Big |gamma\Big |(x\Big |_2\Big |,\Big |cdots\Big |,x\Big |_n\Big |)\Big |,\Big |\Big |;\Big ||x\Big |_2\Big ||\Big |^2\Big |+\Big |cdots\Big |+\Big ||x\Big |_n\Big ||\Big |^2\Big | \Big |<r\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | For\Big | natational\Big | brevity\Big |,\Big | denote\Big | \Big |$\Big |$a\Big |(x\Big |,t\Big |)\Big | \Big |=\Big | \Big |frac\Big |{\Big |partial\Big | \Big |sigma\Big |}\Big |{\Big |partial\Big | \Big |nu\Big |}\Big |.\Big |$\Big |$\Big | Hereafter\Big |,\Big | we\Big | set\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq3\Big |}\Big | \Big |begin\Big |{cases\Big |}\Big | \Big |displaystyle\Big | B\Big |_r\Big |\Big |(0\Big |,\Big |frac\Big | T2\Big |\Big |)\Big |=\Big |left\Big |\Big |{\Big |(x\Big |,t\Big |)\Big |:\Big |\Big |,\Big |(x\Big |,t\Big |)\Big |in\Big |{\Big |mathbb\Big |{R\Big |}\Big |}\Big |^\Big |{n\Big |+1\Big |}\Big |,\Big |\Big |,\Big ||x\Big ||\Big |^2\Big | \Big |+\Big | \Big |\Big |(t\Big |-\Big |frac\Big | T2\Big |\Big |)\Big |^2\Big | \Big |<\Big | r\Big |^2\Big |right\Big |\Big |}\Big |,\Big |\Big |\Big |[10pt\Big |]\Big | \Big | \Big |displaystyle\Big | B\Big |_r\Big |(0\Big |)\Big |=\Big |\Big |{\Big | x\Big |:\Big |\Big |,x\Big |in\Big |{\Big |mathbb\Big |{R\Big |}\Big |}\Big |^n\Big |,\Big |\Big |,\Big ||x\Big ||\Big |<r\Big | \Big |\Big |}\Big |.\Big | \Big |end\Big |{cases\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | By\Big | \Big |eqref\Big |{th1\Big |-eq1\Big |}\Big |,\Big | we\Big | have\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq2\Big |}\Big | \Big |left\Big |\Big |{\Big | \Big |begin\Big |{array\Big |}\Big |{ll\Big |}\Big |displaystyle\Big | \Big |-\Big |alpha\Big |_0\Big |=a\Big |\Big |(0\Big |,\Big |frac\Big | T2\Big | \Big |\Big |)\Big |<0\Big |,\Big |\Big |\Big |[8pt\Big |]\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | K\Big | \Big |<\Big |frac\Big |{\Big |alpha\Big 
|_0\Big |}\Big |{4\Big |(\Big ||\Big |sigma\Big ||\Big |_\Big |{L\Big |^\Big |infty\Big |(B\Big |_r\Big |(0\Big |,T\Big |/2\Big |)\Big |)\Big |}\Big |+1\Big |)\Big |}\Big |,\Big |\Big |\Big |[10pt\Big |]\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | \Big |-\Big | K\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |<\Big |gamma\Big |(x\Big |_2\Big |,\Big |cdots\Big |,x\Big |_n\Big |)\Big |,\Big |\Big |;\Big |mbox\Big |{\Big | if\Big | \Big |}\Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |<\Big | r\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |right\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | \Big | \Big | \Big | Let\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq4\Big |}\Big | M\Big |_1\Big |=\Big |max\Big |left\Big |\Big |{\Big | \Big ||\Big |sigma\Big ||\Big |_\Big |{C\Big |^1\Big |(B\Big |_r\Big |(0\Big |,0\Big |)\Big |)\Big |}\Big |,1\Big | \Big |right\Big |\Big |}\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | \Big | Denote\Big | \Big | \Big |$\Big |$\Big | \Big |{\Big |cal\Big | D\Big |}\Big |^\Big |{\Big |-\Big |}\Big |_r\Big |=\Big |\Big |{x\Big |:\Big |\Big |,x\Big |in\Big | B\Big |_r\Big |(0\Big |)\Big |,\Big |\Big |,x\Big |_1\Big |<\Big |gamma\Big |(x\Big |_2\Big |,\Big |cdots\Big |,x\Big |_n\Big |)\Big |\Big |}\Big | \Big |,\Big |quad\Big | \Big |{\Big |cal\Big | D\Big |}\Big |^\Big |+\Big |_r\Big | \Big |=\Big | B\Big |_r\Big |(0\Big |)\Big |setminus\Big | \Big |overline\Big |{D\Big |^\Big |-\Big |_r\Big |}\Big |.\Big | \Big |$\Big |$\Big | \Big | For\Big | any\Big | \Big |$\Big |alpha\Big | \Big |in\Big | \Big |(0\Big |,\Big | \Big |alpha\Big |_0\Big |)\Big |$\Big |,\Big | in\Big | accordance\Big | with\Big | the\Big | continuity\Big | of\Big | \Big |$a\Big |(x\Big |,t\Big |)\Big |$\Big | and\Big | the\Big | first\Big | inequality\Big | in\Big | \Big |eqref\Big |{car\Big | eq2\Big |}\Big |,\Big | it\Big 
| is\Big | clear\Big | that\Big | there\Big | exists\Big | a\Big | \Big |$\Big |delta\Big |_0\Big |>0\Big |$\Big | small\Big | enough\Big | such\Big | that\Big | \Big |$0\Big |<\Big |delta\Big |_0\Big |<\Big |min\Big |\Big |{1\Big |,r\Big |^2\Big |\Big |}\Big |$\Big |,\Big | which\Big | would\Big | be\Big | specified\Big | later\Big |,\Big | \Big | and\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq5\Big |}\Big | a\Big |(x\Big |,t\Big |)\Big |<\Big |-\Big |alpha\Big | \Big |\Big |;\Big |mbox\Big |{\Big | if\Big | \Big |}\Big | \Big |\Big |;\Big |\Big |;\Big ||x\Big ||\Big |^2\Big | \Big |+\Big | \Big |\Big |(t\Big |-\Big |frac\Big | T2\Big |\Big |)\Big |^2\Big | \Big |leq\Big | \Big |delta\Big |_0\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | \Big | \Big | Letting\Big | \Big |$\Big | M\Big |_0\Big |=\Big ||\Big |sigma\Big ||\Big |_\Big |{L\Big |^\Big |infty\Big |(B\Big |_r\Big |(0\Big |,T\Big |/2\Big |)\Big |)\Big |}\Big | \Big |$\Big |,\Big | by\Big | the\Big | second\Big | inequality\Big | in\Big | \Big |eqref\Big |{car\Big | eq2\Big |}\Big |,\Big | we\Big | can\Big | always\Big | choose\Big | \Big |$K\Big |>0\Big |$\Big | so\Big | large\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq6\Big |}\Big | K\Big |<\Big |frac\Big |{1\Big |}\Big |{2h\Big |}\Big |<\Big |frac\Big |{\Big |alpha\Big |}\Big |{4\Big |(M\Big |_0\Big |+1\Big |)\Big |}\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | Following\Big | immediately\Big | from\Big | \Big |eqref\Big |{car\Big | eq6\Big |}\Big |,\Big | we\Big | have\Big | that\Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big | \Big |label\Big |{car\Big | eq6\Big |-cor\Big |}\Big | \Big | \Big | 1\Big |-2hK\Big | \Big |>\Big | 0\Big |,\Big | \Big |quad\Big | h\Big |alpha\Big | \Big |-\Big | 2\Big |(M\Big |_0\Big | \Big |+\Big | 1\Big |)\Big |>\Big | 0\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | For\Big | \Big |$K\Big |$\Big | and\Big | \Big |$h\Big 
|$\Big | such\Big | chosen\Big |,\Big | we\Big | will\Big | further\Big | take\Big | \Big | \Big |$\Big |tau\Big |in\Big | \Big |(0\Big |,1\Big |)\Big |$\Big | so\Big | small\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq7\Big |}\Big | \Big |Big\Big ||\Big | \Big |max\Big |Big\Big |\Big |{\Big | \Big |frac\Big |{K\Big |}\Big |{1\Big |-2hK\Big |}\Big |,\Big |frac\Big |{1\Big |}\Big |{2h\Big |}\Big | \Big |Big\Big |\Big |}\Big | \Big |Big\Big ||\Big |^2\Big |tau\Big |^2\Big | \Big |+\Big | \Big |frac\Big |{2\Big |tau\Big |}\Big |{1\Big |-2hK\Big |}\Big | \Big |leq\Big | \Big |delta\Big |_0\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | \Big | For\Big | convenience\Big | of\Big | notations\Big |,\Big | by\Big | denoting\Big | \Big | \Big |$\Big |mu\Big |_0\Big | \Big |(\Big |tau\Big |)\Big |$\Big | the\Big | term\Big | in\Big | the\Big | left\Big | hand\Big | side\Big | of\Big | \Big |eqref\Big |{car\Big | eq7\Big |}\Big | and\Big | letting\Big | \Big |$\Big |{\Big |cal\Big | A\Big |}\Big |_0\Big | \Big |=\Big | \Big |min\Big |\Big |{\Big |sigma\Big |,\Big | 1\Big |\Big |}\Big |$\Big |,\Big | we\Big | further\Big | assume\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{car\Big | eq8\Big |-for\Big | M1\Big | and\Big | N\Big |}\Big | \Big |left\Big |\Big |{\Big |begin\Big |{array\Big |}\Big | \Big |{ll\Big |}\Big | \Big | h\Big |^2\Big | \Big |{\Big |cal\Big | A\Big |}\Big |_0\Big |>\Big | 2hM\Big |_1\Big | \Big |sqrt\Big |{\Big |mu\Big |_0\Big |(\Big |tau\Big |)\Big |}\Big | \Big |+\Big | 2M\Big |_1\Big |mu\Big |_0\Big |(\Big |tau\Big |)\Big |,\Big |\Big |\Big |[8pt\Big |]\Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | \Big | \Big |alpha\Big | h\Big | \Big |>\Big | 2\Big |(M\Big |_1\Big |^2\Big | \Big |+\Big | M\Big |_1\Big |)\Big |sqrt\Big |{\Big |mu\Big |_0\Big |(\Big |tau\Big |)\Big |}\Big | \Big |-\Big | \Big |(M\Big |_0\Big |^2\Big | \Big |+\Big | nM\Big 
|_0\Big |)\Big | \Big |-\Big |(n\Big |-1\Big |)\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |right\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | \Big | \Big | For\Big | any\Big | positive\Big | number\Big | \Big |$\Big |mu\Big |$\Big | with\Big | \Big |$2\Big |mu\Big | \Big |>\Big | \Big |tau\Big |$\Big |,\Big | let\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{weight2\Big |}\Big | Q\Big |_\Big |mu\Big | \Big |=\Big | \Big |left\Big |\Big |{\Big | \Big |(x\Big |,t\Big |)\Big |in\Big |{\Big |mathbb\Big |{R\Big |}\Big |}\Big |^\Big |{n\Big |+1\Big |}\Big |\Big ||\Big | x\Big |_1\Big |>\Big |gamma\Big | \Big |(x\Big |_2\Big |,x\Big |_3\Big |,\Big |cdots\Big |,x\Big |_n\Big |)\Big |,\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^N\Big | x\Big |_j\Big |^2\Big |<\Big |delta\Big |_0\Big |,\Big | \Big |varphi\Big |(x\Big |,t\Big |)\Big |<\Big |mu\Big | \Big |right\Big |\Big |}\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | \Big | The\Big | set\Big | \Big |$Q\Big |_\Big |{\Big |tau\Big |}\Big |$\Big | defined\Big | in\Big | this\Big | style\Big | is\Big | not\Big | empty\Big |.\Big | It\Big | is\Big | only\Big | to\Big | prove\Big | that\Big | the\Big | defining\Big | conditon\Big | \Big |$\Big |varphi\Big |(x\Big |,t\Big |)\Big |<\Big |mu\Big |$\Big | is\Big | compatible\Big | with\Big | the\Big | first\Big | defining\Big | condition\Big |,\Big | i\Big |.e\Big |.\Big |,\Big | \Big |$x\Big |_1\Big |>\Big |gamma\Big |(x\Big |_2\Big |,x\Big |_3\Big |,\Big |cdots\Big |,x\Big |_n\Big |)\Big |$\Big |.\Big | \Big | By\Big | assumption\Big |,\Big | we\Big | know\Big | that\Big | \Big |$\Big |gamma\Big |(x\Big |_2\Big |,x\Big |_3\Big |,\Big |cdots\Big |,\Big | x\Big |_n\Big |)\Big | \Big |>\Big | \Big |-K\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big |$\Big |,\Big | then\Big | together\Big | with\Big | the\Big | first\Big | inequality\Big | in\Big | \Big |eqref\Big |{car\Big | eq6\Big |-cor\Big |}\Big 
|,\Big | we\Big | have\Big | that\Big | \Big |begin\Big |{eqnarray\Big |*\Big |}\Big | \Big | \Big | \Big |varphi\Big | \Big |(x\Big |,t\Big |)\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big |&\Big | \Big |geq\Big | \Big |&\Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big | \Big |-hK\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |+\Big | \Big |frac\Big | 12\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |+\Big |frac\Big | 12\Big | \Big |left\Big |(t\Big |-\Big |frac\Big | T2\Big |right\Big |)\Big |^2\Big | \Big |+\Big |frac\Big | 12\Big | \Big |tau\Big |\Big |\Big | \Big | \Big | \Big |&\Big | \Big |=\Big | \Big |&\Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big | \Big |\Big |(\Big |frac\Big | 12\Big | \Big |-K\Big | h\Big |\Big |)\Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |+\Big |frac\Big | 12\Big | \Big |\Big |(t\Big |-\Big |frac\Big | T2\Big |\Big |)\Big |^2\Big | \Big |+\Big |frac\Big | 12\Big | \Big |tau\Big |\Big |\Big | \Big | \Big | \Big |&\Big | \Big |>\Big | \Big |&\Big |negthinspace\Big | \Big |negthinspace\Big | \Big |negthinspace\Big | \Big | \Big |frac\Big |tau\Big | 2\Big |.\Big | \Big |end\Big |{eqnarray\Big |*\Big |}\Big | Noting\Big | that\Big | \Big | \Big |$\Big |(x\Big |,t\Big |)\Big |in\Big | Q\Big |_\Big |{\Big |mu\Big |}\Big |$\Big | implies\Big | \Big |$\Big |varphi\Big |(x\Big |,t\Big |)\Big |<\Big | \Big |mu\Big |$\Big |,\Big | together\Big | with\Big | \Big |$\Big |displaystyle\Big | 2\Big |mu\Big | \Big |>\Big | \Big |tau\Big |$\Big |,\Big | we\Big | see\Big | by\Big | definition\Big | that\Big | \Big |$Q\Big |_\Big |mu\Big | \Big |neq\Big | \Big |emptyset\Big |$\Big | as\Big | desired\Big |.\Big | \Big | \Big | In\Big | what\Big | follows\Big |,\Big | we\Big | will\Big | show\Big | that\Big | how\Big | to\Big | 
In what follows, we will show how to determine the number $\delta_0$ appearing in \eqref{car eq7}. Let $(x,t)\in\overline{Q}_{\tau}$. From the definition of $Q_{\tau}$, and noting that $\gamma(x_2,x_3,\cdots,x_n) > -K\sum_{j=2}^n x_j^2$, we find that
\begin{equation}\label{estimate x1-1}
x_1 \leq -\frac{1}{2h}\sum_{j=2}^n x_j^2 - \frac{1}{2h}\Big(t-\frac T2\Big)^2 + \frac{\tau}{2h} \leq \frac{\tau}{2h}.
\end{equation}
On the other hand, by $\displaystyle -K\sum_{j=2}^n x_j^2 \leq x_1$, we find that
\begin{equation*}
-Kh\sum_{j=2}^n x_j^2 + \frac12\sum_{j=2}^n x_j^2 + \frac12\Big(t-\frac T2\Big)^2 + \frac12\tau \leq \tau.
\end{equation*}
Thus
\begin{equation*}
\sum_{j=2}^n x_j^2 \leq \frac{\tau}{1-2Kh}.
\end{equation*}
We then get that
\begin{equation}\label{estimate x1-2}
-x_1 \leq K\sum_{j=2}^n x_j^2 \leq \frac{K\tau}{1-2Kh}.
\end{equation}
Combining \eqref{estimate x1-1} and \eqref{estimate x1-2}, we arrive at
\begin{equation}\label{estimate x1-3}
|x_1| \leq \max\left\{\frac{K}{1-2Kh},\;\frac{1}{2h}\right\}\tau.
\end{equation}
Thus, by the restriction imposed on $\varphi(x,t)$ in the definition of $Q_{\tau}$ and by \eqref{estimate x1-2}, we find that
\begin{equation}
\begin{array}{ll}
\displaystyle \tau \geq \varphi(x,t) = hx_1 + \frac12\sum_{j=2}^{n} x_j^2 + \frac12\Big(t-\frac T2\Big)^2 + \frac\tau2\\
\noalign{\smallskip}
\displaystyle \quad\; \geq -\frac{Kh\tau}{1-2Kh} + \frac12\sum_{j=2}^n x_j^2 + \frac12\Big(t-\frac T2\Big)^2 + \frac\tau2.
\end{array}
\end{equation}
This gives that
\begin{equation}\label{estimate t}
\Big(t-\frac T2\Big)^2 \leq \frac{2Kh\tau}{1-2Kh} + \tau = \frac{\tau}{1-2Kh}.
\end{equation}
Correspondingly, we have that
\begin{equation*}
|x|^2 + \Big(t-\frac T2\Big)^2 = x_1^2 + \sum_{j=2}^n x_j^2 + \Big(t-\frac T2\Big)^2 \leq \Big(\max\left\{\frac{K}{1-2Kh},\;\frac{1}{2h}\right\}\Big)^2\tau^2 + \frac{2\tau}{1-2Kh}.
\end{equation*}
Returning to \eqref{car eq5}, in view of \eqref{estimate x1-2}, \eqref{estimate x1-3} and \eqref{estimate t}, we choose $\delta_0$ so that
\begin{equation}\label{choose delta0}
\delta_0 > \mu_0(\tau) = \Big(\max\left\{\frac{K}{1-2Kh},\;\frac{1}{2h}\right\}\Big)^2\tau^2 + \frac{2\tau}{1-2Kh}.
\end{equation}
\medskip

\section{A global Carleman estimate}\label{sec-car}

This section is devoted to establishing a global Carleman estimate for the stochastic hyperbolic operator introduced in Section 1, based on the pointwise Carleman estimate of Section \ref{sec-point}. This estimate is the key to the proof of the main result.
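The choice \eqref{choose delta0} encodes the containment just proved: every point of $Q_\tau$ lies within distance $\sqrt{\mu_0(\tau)}$ of $(0,T/2)$. This can be spot-checked numerically; the snippet below is a sketch with purely illustrative parameter values (any $h,K,\tau$ with $1-2Kh>0$ would do, and the inequality $x_1\geq -K\sum_{j\geq2}x_j^2$ stands in for the condition on $\gamma$ used above).

```python
import random

# Numerical spot-check of the containment behind (choose delta0): points of
# Q_tau (phi < tau together with x_1 >= -K*sum_{j>=2} x_j^2) satisfy
# |x|^2 + (t - T/2)^2 <= mu_0(tau).  All values are illustrative; the only
# requirement is 1 - 2*K*h > 0.
h, K, tau, T, n = 0.5, 0.4, 0.3, 2.0, 3

def phi(x, t):
    return h*x[0] + 0.5*sum(xj**2 for xj in x[1:]) + 0.5*(t - T/2)**2 + tau/2

def in_Q_tau(x, t):
    return phi(x, t) < tau and x[0] >= -K*sum(xj**2 for xj in x[1:])

mu0 = max(K/(1 - 2*K*h), 1/(2*h))**2 * tau**2 + 2*tau/(1 - 2*K*h)

random.seed(0)
samples = [([0.0]*n, T/2), ([0.05, 0.1, -0.1], T/2 + 0.1)]  # two points known to lie in Q_tau
samples += [([random.uniform(-1, 1) for _ in range(n)], random.uniform(0, T))
            for _ in range(20000)]

checked = violations = 0
for x, t in samples:
    if in_Q_tau(x, t):
        checked += 1
        if sum(xj**2 for xj in x) + (t - T/2)**2 > mu0:
            violations += 1

assert checked >= 2 and violations == 0
```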
We have the following global Carleman estimate.

\begin{theorem}\label{Car theorem}
Let $u$ be an $H^2_{loc}(\mathbb{R}^n)$-valued ${\mathbb{F}}$-adapted process such that $u_t$ is an $L^2(\mathbb{R}^n)$-valued semimartingale. If $u$ is supported in $Q_{\tau}$, then there exist a constant $C$, depending on $b_i$, $i=1,2,3$, and an $s_0>0$, depending on $\sigma$ and $\tau$, such that for all $s\geq s_0$ it holds that
\begin{equation}\label{4.15-eq20}
\begin{array}{ll}
\displaystyle \quad\mathbb{E}\int_{Q_\tau}\theta\big(-2\sigma\ell_t v_t + 2\nabla\ell\cdot\nabla v\big)\big(\sigma\, du_t - \Delta u\, dt\big)dx\\
\noalign{\smallskip}
\displaystyle \geq C\,\mathbb{E}\int_{Q_\tau}\big[s\lambda^2\varphi^{-\lambda-2}\big(|\nabla v|^2 + v_t^2\big) + s^3\lambda^4\varphi^{-3\lambda-4}v^2\big]dxdt\\
\noalign{\smallskip}
\displaystyle \quad + \mathbb{E}\int_{Q_\tau}\big(-2\sigma\ell_t v_t + 2\nabla\ell\cdot\nabla v\big)^2 dxdt + C\,\mathbb{E}\int_{Q_\tau}\sigma^2\theta^2\ell_t\,(du_t)^2 dx.
\end{array}
\end{equation}
\end{theorem}

{\em Proof}. We apply Lemma \ref{hyperbolic1} to establish our key Carleman estimate. Let $(b^{ij})_{1\leq i,j\leq n} = I_n$, the identity matrix of order $n$, and let $\Psi=0$ in \eqref{hyperbolic2}. Then we find that
\begin{equation}\label{4.15-eq8}
\begin{array}{ll}
&\displaystyle \theta\big(-2\sigma\ell_t v_t + 2\nabla\ell\cdot\nabla v\big)\big[\sigma\, du_t - \Delta u\, dt\big]\\
\noalign{\smallskip}
&\displaystyle \quad + \nabla\cdot\Big[2(\nabla v\cdot\nabla\ell)\nabla v - |\nabla v|^2\nabla\ell - 2\ell_t v_t\nabla v + \sigma v_t^2\nabla\ell - A\nabla\ell\, v^2\Big]dt\\
\noalign{\smallskip}
&\displaystyle \quad + d\Big[\sigma\ell_t|\nabla v|^2 - 2\sigma(\nabla\ell\cdot\nabla v)v_t + \sigma^2\ell_t v_t^2 + \sigma A\ell_t v^2\Big]\\
\noalign{\smallskip}
= &\displaystyle \Big\{\big[(\sigma^2\ell_t)_t + \nabla\cdot(\sigma\nabla\ell)\big]v_t^2 - 2\big[(\sigma\nabla\ell)_t + \nabla(\sigma\ell_t)\big]\cdot\nabla v\, v_t + \big[(\sigma\ell_t)_t + \Delta\ell\big]|\nabla v|^2\\
\noalign{\smallskip}
&\displaystyle \quad + Bv^2 + \big(-2\sigma\ell_t v_t + 2\nabla\ell\cdot\nabla v\big)^2\Big\}dt + \sigma^2\theta^2\ell_t\,(du_t)^2,
\end{array}
\end{equation}
where $(du_t)^2$ denotes the quadratic variation process of $u_t$. It is easy to show that $(du_t)^2 = b_4^2 u^2\, dt$, and that $A$ and $B$ are given respectively by
\begin{equation}\label{AB2}
\left\{
\begin{array}{ll}
\displaystyle A \buildrel \triangle \over = \sigma\big(\ell_t^2 - \ell_{tt}\big) - \big(|\nabla\ell|^2 - \Delta\ell\big),\\
\noalign{\smallskip}
\displaystyle B \buildrel \triangle \over = (\sigma A\ell_t)_t - \nabla\cdot(A\nabla\ell).
\end{array}
\right.
\end{equation}

Now let $\ell = s\varphi^{-\lambda}$, with $\varphi$ the weight function given by \eqref{weight1}. Then some simple computations show that
\begin{equation}\label{4.15-eq9}
\left\{
\begin{array}{ll}
\displaystyle \ell_t &\displaystyle = -s\lambda\varphi_t\varphi^{-\lambda-1} = -s\lambda\Big(t-\frac T2\Big)\varphi^{-\lambda-1},\\
\noalign{\smallskip}
\displaystyle \ell_{tt} &\displaystyle = s\lambda(\lambda+1)\Big(t-\frac T2\Big)^2\varphi^{-\lambda-2} - s\lambda\varphi^{-\lambda-1},\\
\noalign{\smallskip}
\displaystyle \nabla\ell &\displaystyle = -s\lambda\varphi^{-\lambda-1}\nabla\varphi,\\
\noalign{\smallskip}
\displaystyle \Delta\ell &\displaystyle = s\lambda(\lambda+1)\varphi^{-\lambda-2}|\nabla\varphi|^2 - s\lambda\varphi^{-\lambda-1}\Delta\varphi,\\
\noalign{\smallskip}
\displaystyle \nabla\ell_t &\displaystyle = s\lambda(\lambda+1)\varphi^{-\lambda-2}\Big(t-\frac T2\Big)\nabla\varphi.
\end{array}
\right.
\end{equation}
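The five identities in \eqref{4.15-eq9} can be verified symbolically. The sketch below is a side computation assuming the explicit form of the weight, $\varphi(x,t)=hx_1+\frac12\sum_{j=2}^n x_j^2+\frac12\big(t-\frac T2\big)^2+\frac\tau2$, displayed in the preceding section, with an illustrative dimension $n=3$.

```python
import sympy as sp

# Symbolic verification of the identities in (4.15-eq9) for ell = s*phi^{-lambda},
# assuming phi(x,t) = h*x_1 + 1/2*sum_{j>=2} x_j^2 + 1/2*(t - T/2)^2 + tau/2
# (n = 3 is an illustrative dimension).
s, lam, h, tau, T = sp.symbols('s lambda h tau T', positive=True)
t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
X = [x1, x2, x3]

phi = h*x1 + sp.Rational(1, 2)*(x2**2 + x3**2) + sp.Rational(1, 2)*(t - T/2)**2 + tau/2
ell = s*phi**(-lam)

grad = lambda f: [sp.diff(f, xi) for xi in X]
lap = lambda f: sum(sp.diff(f, xi, 2) for xi in X)
grad_phi_sq = sum(d**2 for d in grad(phi))

residuals = [
    # ell_t, ell_tt, Delta(ell)
    sp.diff(ell, t) - (-s*lam*(t - T/2)*phi**(-lam - 1)),
    sp.diff(ell, t, 2) - (s*lam*(lam + 1)*(t - T/2)**2*phi**(-lam - 2)
                          - s*lam*phi**(-lam - 1)),
    lap(ell) - (s*lam*(lam + 1)*phi**(-lam - 2)*grad_phi_sq
                - s*lam*phi**(-lam - 1)*lap(phi)),
]
# grad(ell) and grad(ell_t), componentwise
residuals += [g - (-s*lam*phi**(-lam - 1)*d) for g, d in zip(grad(ell), grad(phi))]
residuals += [sp.diff(g, t) - s*lam*(lam + 1)*phi**(-lam - 2)*(t - T/2)*d
              for g, d in zip(grad(ell), grad(phi))]

assert all(sp.simplify(r) == 0 for r in residuals)
```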
We now analyze the terms on the right-hand side of \eqref{4.15-eq8} one by one. The first one reads
\begin{equation}\label{4.15-eq10}
\begin{array}{ll}
&\displaystyle \big[(\sigma^2\ell_t)_t + \nabla\cdot(\sigma\nabla\ell)\big]v_t^2\\
\noalign{\smallskip}
= &\displaystyle \big[2\sigma\sigma_t\ell_t + \sigma^2\ell_{tt} + \nabla\sigma\cdot\nabla\ell + \sigma\Delta\ell\big]v_t^2\\
\noalign{\smallskip}
= &\displaystyle \Big[2\sigma\sigma_t\ell_t + \sigma^2\ell_{tt} - s\lambda\big(\nabla\sigma\cdot\nabla\varphi + \sigma\Delta\varphi\big)\varphi^{-\lambda-1} + s\lambda(\lambda+1)\sigma|\nabla\varphi|^2\varphi^{-\lambda-2}\Big]v_t^2\\
\noalign{\smallskip}
= &\displaystyle -s\lambda\varphi^{-\lambda-1}\Big[2\sigma\sigma_t\Big(t-\frac T2\Big) + \sigma^2 + \nabla\sigma\cdot\nabla\varphi + \sigma\Delta\varphi\Big]v_t^2\\
\noalign{\smallskip}
&\displaystyle \quad + s\lambda(\lambda+1)\varphi^{-\lambda-2}\Big[\sigma^2\Big(t-\frac T2\Big)^2 + \sigma|\nabla\varphi|^2\Big]v_t^2\\
\noalign{\smallskip}
\geq &\displaystyle -s\lambda\varphi^{-\lambda-1}\Big[-h\alpha + 2(M_1^2+M_1)\sqrt{\mu_0(\tau)} + \big(M_0^2+(n-1)M_0\big)\Big]v_t^2 + s\lambda(\lambda+1)h^2\sigma\varphi^{-\lambda-2}v_t^2\\
\noalign{\smallskip}
\geq &\displaystyle s\lambda\varphi^{-\lambda-1}\Big[h\alpha - 2(M_1^2+M_1)\sqrt{\mu_0(\tau)} - \big(M_0^2+(n-1)M_0\big)\Big]v_t^2 + h^2\sigma s\lambda(\lambda+1)\varphi^{-\lambda-2}v_t^2.
\end{array}
\end{equation}
Likewise, the second term on the right-hand side of \eqref{4.15-eq8} reads
\begin{equation}\label{4.15-eq11}
\begin{array}{ll}
&\displaystyle -2\big[(\sigma\nabla\ell)_t + \nabla(\sigma\ell_t)\big]\cdot\nabla v\, v_t\\
\noalign{\smallskip}
= &\displaystyle -2\big[\sigma_t\nabla\ell + \sigma\nabla\ell_t + \ell_t\nabla\sigma + \sigma\nabla\ell_t\big]\cdot\nabla v\, v_t\\
\noalign{\smallskip}
= &\displaystyle \Big[2s\lambda\varphi^{-\lambda-1}\Big(\sigma_t\nabla\varphi + \Big(t-\frac T2\Big)\nabla\sigma\Big) - 2s\lambda(\lambda+1)\varphi^{-\lambda-2}\sigma\Big(t-\frac T2\Big)\nabla\varphi\Big]\cdot\nabla v\, v_t\\
\noalign{\smallskip}
= &\displaystyle 2s\lambda\varphi^{-\lambda-2}\Big[\Big(\sigma_t\nabla\varphi + \Big(t-\frac T2\Big)\nabla\sigma\Big)\varphi - (\lambda+1)\sigma\Big(t-\frac T2\Big)\nabla\varphi\Big]\cdot\nabla v\, v_t\\
\noalign{\smallskip}
\geq &\displaystyle -s\lambda\varphi^{-\lambda-2}\big(M_1 h + 2M_1\sqrt{\mu_0(\tau)}\big)\tau\big(|\nabla v|^2 + v_t^2\big)\\
\noalign{\smallskip}
&\displaystyle \quad - s\lambda(\lambda+1)\varphi^{-\lambda-2}\big(hM_1\sqrt{\mu_0(\tau)} + M_1\mu_0(\tau)\big)\big(|\nabla v|^2 + v_t^2\big).
\end{array}
\end{equation}
Thus, there exists a $\lambda_0 > 0$ such that for all $\lambda > \lambda_0$ it holds that
\begin{equation}\label{4.15-eq11-1}
\begin{array}{ll}
&\displaystyle -2\big[(\sigma\nabla\ell)_t + \nabla(\sigma\ell_{t})\big]\cdot\nabla v\, v_t\\
\noalign{\smallskip}
&\displaystyle \geq -2s\lambda(\lambda+1)\varphi^{-\lambda-2}\big(hM_1\sqrt{\mu_0(\tau)} + M_1\mu_0(\tau)\big)\big(|\nabla v|^2 + v_t^2\big).
\end{array}
\end{equation}

Treating the third term on the right-hand side of \eqref{4.15-eq8} in the same fashion, we obtain
\begin{equation}\label{4.15-eq12}
\begin{array}{ll}
&\displaystyle \big[(\sigma\ell_t)_t + \Delta\ell\big]|\nabla v|^2\\
\noalign{\smallskip}
= &\displaystyle \big[\sigma_t\ell_t + \sigma\ell_{tt} + \Delta\ell\big]|\nabla v|^2\\
\noalign{\smallskip}
= &\displaystyle -s\lambda\varphi^{-\lambda-1}\Big[\sigma_t\Big(t-\frac T2\Big) + \sigma + \Delta\varphi\Big]|\nabla v|^2 + s\lambda(\lambda+1)\varphi^{-\lambda-2}\Big[\sigma\Big(t-\frac T2\Big)^2 + |\nabla\varphi|^2\Big]|\nabla v|^2\\
\noalign{\smallskip}
\geq &\displaystyle -s\lambda\varphi^{-\lambda-1}\big(M_1\sqrt{\mu_0(\tau)} + M_0 + (n-1)\big)|\nabla v|^2 + h^2 s\lambda(\lambda+1)\varphi^{-\lambda-2}|\nabla v|^2.
\end{array}
\end{equation}
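The expansion entering \eqref{4.15-eq12} can likewise be confirmed symbolically, for an arbitrary smooth $\sigma(x,t)$ and the same explicit weight as before (again with an illustrative dimension $n=3$; this is a side check, not part of the proof).

```python
import sympy as sp

# Symbolic check of the identity used in (4.15-eq12):
#   (sigma*ell_t)_t + Delta(ell)
#     = -s*lam*phi^{-lam-1} * [sigma_t*(t - T/2) + sigma + Delta(phi)]
#       + s*lam*(lam+1)*phi^{-lam-2} * [sigma*(t - T/2)^2 + |grad phi|^2],
# with sigma an arbitrary smooth function of (x,t) and the explicit weight phi.
s, lam, h, tau, T = sp.symbols('s lambda h tau T', positive=True)
t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
X = [x1, x2, x3]
sigma = sp.Function('sigma')(x1, x2, x3, t)

phi = h*x1 + sp.Rational(1, 2)*(x2**2 + x3**2) + sp.Rational(1, 2)*(t - T/2)**2 + tau/2
ell = s*phi**(-lam)

lap = lambda f: sum(sp.diff(f, xi, 2) for xi in X)
grad_phi_sq = sum(sp.diff(phi, xi)**2 for xi in X)

lhs = sp.diff(sigma*sp.diff(ell, t), t) + lap(ell)
rhs = (-s*lam*phi**(-lam - 1)*(sp.diff(sigma, t)*(t - T/2) + sigma + lap(phi))
       + s*lam*(lam + 1)*phi**(-lam - 2)*(sigma*(t - T/2)**2 + grad_phi_sq))

residual = sp.simplify(lhs - rhs)
assert residual == 0
```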
Combining \eqref{4.15-eq10}, \eqref{4.15-eq11-1} and \eqref{4.15-eq12}, and noticing \eqref{car eq8-for M1 and N}, we find that
\begin{equation}\label{4.15-eq15}
\begin{array}{ll}
&\displaystyle \big[(\sigma^2\ell_t)_t + \nabla\cdot(\sigma\nabla\ell)\big]v_t^2 - 2\big[(\sigma\nabla\ell)_t + \nabla(\sigma\ell_t)\big]\cdot\nabla v\, v_t + \big[(\sigma\ell_t)_t + \Delta\ell\big]|\nabla v|^2\\
\noalign{\smallskip}
&\displaystyle \geq Cs\lambda^2\varphi^{-\lambda-2}\big(|\nabla v|^2 + v_t^2\big)
\end{array}
\end{equation}
for all $\lambda > \lambda_0$.

Next, note that in our case $A = \sigma(\ell_t^2 - \ell_{tt}) - (|\nabla\ell|^2 - \Delta\ell)$. Then it is easy to show that
\begin{equation}\label{4.15-eq13}
\begin{array}{ll}
\displaystyle A = &\displaystyle s^2\lambda^2\varphi^{-2\lambda-2}\Big[\sigma\Big(t-\frac T2\Big)^2 - |\nabla\varphi|^2\Big] + s\lambda(\lambda+1)\varphi^{-\lambda-2}\Big[|\nabla\varphi|^2 - \sigma\Big(t-\frac T2\Big)^2\Big]\\
\noalign{\smallskip}
&\displaystyle + s\lambda\varphi^{-\lambda-1}\big[\sigma - (n-1)\big].
\end{array}
\end{equation}

Thus, after some elementary but slightly tedious computations, it holds that
\begin{equation}\label{4.15-eq14}
\begin{array}{ll}
\displaystyle B &\displaystyle = (\sigma A\ell_t)_t - \nabla\cdot(A\nabla\ell)\\
\noalign{\smallskip}
&\displaystyle = \sigma_t A\ell_t + \sigma A_t\ell_t + \sigma A\ell_{tt} - \nabla A\cdot\nabla\ell - A\Delta\ell\\
|[8pt\Big |]\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |&\Big |displaystyle\Big | \Big |=\Big | 3s\Big |^3\Big |lambda\Big |^2\Big |(\Big |lambda\Big |+1\Big |)\Big |^2\Big | \Big |\Big |(t\Big | \Big |-\Big |dfrac\Big | T\Big | 2\Big |\Big |)\Big |^2\Big |\Big |[\Big |\Big |(t\Big | \Big |-\Big |dfrac\Big | T\Big | 2\Big |\Big |)\Big |^2\Big |-\Big ||\Big |nabla\Big | \Big |varphi\Big ||\Big |^2\Big |Big\Big |]\Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}\Big |\Big |\Big |[8pt\Big |]\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |&\Big |displaystyle\Big | \Big |quad\Big | \Big |+\Big | 3s\Big |^3\Big |lambda\Big |^2\Big |(\Big |lambda\Big |+1\Big |)\Big |^2\Big | \Big ||\Big |nabla\Big | \Big |varphi\Big ||\Big |^2\Big |\Big |[\Big ||\Big |nabla\Big | \Big |varphi\Big ||\Big |^2\Big |-\Big |\Big |(t\Big | \Big |-\Big |dfrac\Big | T\Big | 2\Big |\Big |)\Big |^2\Big |Big\Big |]\Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}\Big |\Big |\Big |[8pt\Big |]\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |&\Big |displaystyle\Big | \Big |quad\Big | \Big |+\Big | O\Big |(s\Big |^3\Big |lambda\Big |^3\Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-3\Big |}\Big |)\Big | \Big |+\Big | O\Big |(s\Big |^2\Big |lambda\Big |^4\Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}\Big |)\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | It\Big | is\Big | easy\Big | to\Big | see\Big | that\Big | there\Big | exist\Big | an\Big | \Big |$\Big |lambda\Big |_1\Big |>0\Big |$\Big | and\Big | \Big |$s\Big |_0\Big |>0\Big |$\Big | such\Big | that\Big | for\Big | all\Big | \Big |$\Big |lambda\Big |geq\Big | \Big |lambda\Big |_1\Big |$\Big |,\Big | \Big |$s\Big |geq\Big | s\Big |_0\Big |$\Big |,\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{4\Big |.15\Big |-eq17\Big |}\Big | \Big |begin\Big |{array\Big |}\Big |{ll\Big |}\Big |displaystyle\Big | Bv\Big |^2\Big | \Big 
|geq\Big | Cs\Big |^3\Big |lambda\Big |^4\Big | \Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}v\Big |^2\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | \Big | \Big | Next\Big |,\Big | integrating\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq8\Big |}\Big | over\Big | \Big |$Q\Big |_\Big |tau\Big |$\Big | and\Big | taking\Big | mathematical\Big | expectation\Big |,\Big | we\Big | obtain\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big |label\Big |{4\Big |.15\Big |-eq18\Big |}\Big | \Big |begin\Big |{array\Big |}\Big |{ll\Big |}\Big |displaystyle\Big | \Big |quad\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |theta\Big | \Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big |nabla\Big |ell\Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)\Big | \Big |big\Big |(\Big | \Big |sigma\Big | du\Big |_t\Big | \Big |-\Big | \Big |Delta\Big | u\Big | dt\Big | \Big |big\Big |)dx\Big | \Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | \Big |geq\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big | \Big |\Big |[s\Big |lambda\Big |^2\Big | \Big |varphi\Big |^\Big |{\Big |-\Big |lambda\Big |-2\Big |}\Big |(\Big ||\Big |nabla\Big | v\Big ||\Big |^2\Big |+v\Big |_t\Big |^2\Big |)\Big |+\Big | s\Big |^3\Big |lambda\Big |^4\Big | \Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}v\Big |^2\Big | \Big |\Big |]dxdt\Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | \Big |quad\Big | \Big |+\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_tv\Big |_t\Big | \Big |+\Big | 2\Big |nabla\Big |ell\Big | \Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)\Big |^2\Big | dxdt\Big | \Big |+\Big | C\Big | \Big |mathbb\Big 
|{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |sigma\Big |^2\Big |theta\Big |^2\Big |ell\Big |_t\Big |(du\Big |_t\Big |)\Big |^2dx\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | \Big | Thus\Big | we\Big | complete\Big | the\Big | proof\Big |.\Big |signed\Big | \Big |{\Big |$\Big |sqr69\Big |$\Big |}\Big | \Big | \Big | \Big | \Big |section\Big |{Proofs\Big | of\Big | the\Big | Main\Big | Result\Big |}\Big |label\Big |{sec\Big |-main\Big |}\Big | \Big | This\Big | section\Big | is\Big | dedicated\Big | to\Big | the\Big | proof\Big | of\Big | the\Big | unique\Big | continuation\Big | property\Big | presented\Big | in\Big | Section\Big | 1\Big |.\Big | \Big | \Big |{\Big |em\Big | Proof\Big |}\Big |.\Big | Without\Big | loss\Big | of\Big | generality\Big |,\Big | we\Big | assume\Big | that\Big | \Big |$\Big |$x\Big |_0\Big | \Big |=\Big | \Big |(0\Big |,0\Big |,\Big |cdots\Big |,\Big | 0\Big |)\Big |,\Big |quad\Big | \Big |nu\Big | \Big |(x\Big |_0\Big |)\Big | \Big |=\Big | \Big |(1\Big |,0\Big |,\Big |cdots\Big |,\Big | 0\Big |)\Big |$\Big |$\Big | and\Big | \Big |$S\Big |$\Big | is\Big | parameterized\Big | as\Big | in\Big | Section\Big | \Big |ref\Big |{sec\Big |-weight\Big |}\Big | near\Big | \Big |$0\Big |$\Big |.\Big | Also\Big |,\Big | \Big |$K\Big |,\Big | \Big |delta\Big |_0\Big |,\Big | h\Big |,\Big | \Big |tau\Big |$\Big | are\Big | all\Big | given\Big | as\Big | in\Big | Section\Big | \Big |ref\Big |{sec\Big |-weight\Big |}\Big |.\Big | By\Big | the\Big | definition\Big | of\Big | \Big |$\Big |varphi\Big |(x\Big |,t\Big |)\Big |$\Big | and\Big | \Big |$Q\Big |_\Big |{\Big |mu\Big |}\Big |$\Big |,\Big | for\Big | any\Big | \Big |$\Big |mu\Big |in\Big | \Big |(0\Big |,\Big | \Big |tau\Big |]\Big |$\Big |,\Big | the\Big | boundary\Big | \Big |$\Big |Gamma\Big |_\Big |mu\Big |$\Big | of\Big | \Big |$Q\Big |_\Big |{\Big |mu\Big |}\Big |$\Big | is\Big | composed\Big | of\Big | the\Big | following\Big | 
three\Big | parts\Big |:\Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big | \Big |label\Big |{boundary\Big | of\Big | the\Big | domain\Big |}\Big | \Big | \Big |hspace\Big |{\Big |-0\Big |.5cm\Big |}\Big |begin\Big |{cases\Big |}\Big | \Big | \Big | \Big | \Big | \Big |displaystyle\Big | \Big |Gamma\Big |_\Big |mu\Big | \Big |^1\Big | \Big |=\Big | \Big |&\Big |hspace\Big |{\Big |-0\Big |.3cm\Big |}\Big | \Big |displaystyle\Big | \Big |bigg\Big |\Big |{\Big |(x\Big |,t\Big |)\Big | \Big |in\Big | \Big |{\Big |mathbb\Big |{R\Big |}\Big |}\Big |^\Big |{n\Big |+1\Big |}\Big | \Big |\Big ||\Big | x\Big |_1\Big | \Big |=\Big | \Big |gamma\Big |(x\Big |_2\Big |,\Big | x\Big |_3\Big |,\Big | \Big |cdots\Big |,\Big | x\Big |_n\Big |)\Big |,\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |<\Big | \Big |delta\Big |_0\Big |,\Big | \Big |varphi\Big |(x\Big |,t\Big |)\Big |<\Big |mu\Big | \Big |bigg\Big |\Big |}\Big |,\Big |\Big |\Big |[12pt\Big |]\Big | \Big | \Big | \Big | \Big | \Big | \Big |displaystyle\Big | \Big |Gamma\Big |_\Big |mu\Big | \Big |^2\Big | \Big |=\Big | \Big |&\Big | \Big |hspace\Big |{\Big |-0\Big |.3cm\Big |}\Big |displaystyle\Big | \Big |bigg\Big |\Big |{\Big |(x\Big |,t\Big |)\Big | \Big |in\Big | \Big |{\Big |mathbb\Big |{R\Big |}\Big |}\Big |^\Big |{n\Big |+1\Big |}\Big | \Big |\Big ||\Big | x\Big |_1\Big | \Big |>\Big | \Big |gamma\Big |(x\Big |_2\Big |,\Big | x\Big |_3\Big |,\Big | \Big |cdots\Big |,\Big | x\Big |_n\Big |)\Big |,\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |<\Big | \Big |delta\Big |_0\Big |,\Big | \Big |varphi\Big |(x\Big |,t\Big |)\Big |=\Big |mu\Big | \Big |bigg\Big |\Big |}\Big |,\Big |\Big |\Big |[12pt\Big |]\Big | \Big | \Big | \Big | \Big | \Big | \Big | \Big |displaystyle\Big | \Big |Gamma\Big |_\Big |mu\Big |^3\Big | \Big |=\Big | \Big |&\Big |hspace\Big |{\Big |-0\Big |.3cm\Big |}\Big | \Big |displaystyle\Big | \Big |bigg\Big 
|\Big |{\Big |(x\Big |,t\Big |)\Big | \Big |in\Big | \Big |{\Big |mathbb\Big |{R\Big |}\Big |}\Big |^\Big |{n\Big |+1\Big |}\Big | \Big |\Big ||\Big | x\Big |_1\Big | \Big |>\Big | \Big |gamma\Big |(x\Big |_2\Big |,\Big | x\Big |_3\Big |,\Big | \Big |cdots\Big |,\Big | x\Big |_n\Big |)\Big |,\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |=\Big | \Big |delta\Big |_0\Big |,\Big | \Big |varphi\Big |(x\Big |,t\Big |)\Big |<\Big |mu\Big | \Big |bigg\Big |\Big |}\Big |,\Big | \Big | \Big | \Big |end\Big |{cases\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | i\Big |.e\Big |.\Big |,\Big | \Big |$\Big |displaystyle\Big | \Big |Gamma\Big |_\Big |{\Big |mu\Big |}\Big | \Big |=\Big |Gamma\Big |_\Big |mu\Big |^1\Big |cup\Big | \Big |Gamma\Big |_\Big |mu\Big |^2\Big |cup\Big | \Big |Gamma\Big |_\Big |mu\Big |^3\Big |$\Big |.\Big | Next\Big |,\Big | we\Big | show\Big | that\Big | in\Big | fact\Big | \Big |$\Big |Gamma\Big |_\Big |mu\Big |^3\Big | \Big |=\Big | \Big |emptyset\Big |$\Big |.\Big | Based\Big | on\Big | the\Big | conditions\Big | \Big |$\Big |displaystyle\Big | x\Big |_1\Big | \Big |>\Big | \Big |gamma\Big | \Big |(x\Big |_2\Big |,\Big | x\Big |_3\Big |,\Big | \Big |cdots\Big |,\Big | x\Big |_n\Big |)\Big |$\Big | and\Big | \Big |$\Big |gamma\Big |(x\Big |_2\Big |,x\Big |_3\Big |,\Big |cdots\Big |,\Big | x\Big |_n\Big |)\Big | \Big |>\Big |-K\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big |$\Big | and\Big | the\Big | definition\Big | of\Big | \Big |$\Big |varphi\Big |$\Big |,\Big | it\Big | follows\Big | that\Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{Gamma\Big | 3\Big | empty\Big |}\Big | \Big | \Big | \Big |(1\Big |-2K\Big | h\Big |)\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |+\Big | \Big |\Big |(t\Big | \Big |-\Big |dfrac\Big | T\Big | 2\Big |\Big |)\Big |^2\Big | \Big |<\Big | 2\Big |\Big |[h\Big | x\Big |_1\Big | 
\Big |+\Big | \Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |+\Big | \Big |\Big |(t\Big | \Big |-\Big |dfrac\Big | T\Big | 2\Big |\Big |)\Big |^2\Big |Big\Big |]\Big |=\Big | 2\Big |varphi\Big | \Big |-\Big | \Big |tau\Big | \Big |<\Big | 2\Big |mu\Big | \Big |-\Big | \Big |tau\Big | \Big |<\Big |tau\Big |.\Big | \Big |end\Big |{equation\Big |}\Big | Also\Big |,\Big | note\Big | that\Big | \Big |$\Big |Gamma\Big |_\Big |mu\Big |^3\Big |$\Big | is\Big | subordinated\Big | to\Big | \Big |$\Big |sum\Big |_\Big |{j\Big |=2\Big |}\Big |^n\Big | x\Big |_j\Big |^2\Big | \Big |=\Big | \Big |delta\Big |_0\Big |$\Big |.\Big | Thus\Big |,\Big | from\Big | \Big |eqref\Big |{Gamma\Big | 3\Big | empty\Big |}\Big |,\Big | it\Big | follows\Big | that\Big | \Big |$\Big |delta\Big |_0\Big | \Big |<\Big | \Big |frac\Big |{\Big |tau\Big |}\Big |{1\Big | \Big |-\Big | 2K\Big | h\Big |}\Big |$\Big |,\Big | a\Big | contradiction\Big | to\Big | \Big |$\Big |delta\Big |_0\Big | \Big |>\Big | \Big |frac\Big |{\Big |tau\Big |}\Big |{1\Big | \Big |-2K\Big | h\Big |}\Big |$\Big | introduced\Big | in\Big | \Big |eqref\Big |{choose\Big | delta0\Big |}\Big |.\Big | As\Big | a\Big | direct\Big | result\Big |,\Big | we\Big | conclude\Big | that\Big | \Big |$\Big |Gamma\Big |_\Big |{\Big |mu\Big |}\Big | \Big |=\Big | \Big |Gamma\Big |_\Big |mu\Big |^1\Big |cup\Big | \Big |Gamma\Big |_\Big |mu\Big |^2\Big |$\Big |.\Big | \Big | Moreover\Big |,\Big | it\Big | is\Big | clear\Big | that\Big | \Big |$\Big |$\Big |Gamma\Big |_\Big |mu\Big |^1\Big |cup\Big | \Big |Gamma\Big |_\Big |mu\Big |^2\Big | \Big |subset\Big | \Big |overline\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big |.\Big |$\Big |$\Big | Define\Big | \Big |$\Big |$t\Big |_0\Big | \Big |=\Big | \Big |sqrt\Big |{\Big |frac\Big |{\Big |tau\Big |}\Big |{1\Big | \Big | \Big |-\Big | 2K\Big | h\Big |}\Big |}\Big |\Big |;\Big |.\Big |$\Big |$\Big | Then\Big | by\Big | \Big |eqref\Big |{estimate\Big | t\Big 
|}\Big |,\Big | it\Big | follows\Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big | \Big |label\Big |{boundary\Big | containing\Big |}\Big | \Big | \Big | \Big |begin\Big |{cases\Big |}\Big | \Big | \Big | \Big | \Big | \Big |Gamma\Big |_\Big |mu\Big |^1\Big | \Big |subset\Big | \Big |\Big |{x\Big |mid\Big | x\Big | \Big |_1\Big | \Big |=\Big | \Big |gamma\Big | \Big |(x\Big |_2\Big |,\Big | x\Big |_3\Big |,\Big | \Big |cdots\Big |,\Big | x\Big |_n\Big |)\Big |\Big |}\Big | \Big |times\Big | \Big |left\Big |\Big |{t\Big | \Big |mid\Big | \Big |\Big |;\Big ||\Big |\Big |,t\Big |-T\Big |/2\Big |\Big |,\Big ||\Big | \Big |leq\Big | t\Big |_0\Big |right\Big |\Big |}\Big |,\Big |\Big |\Big | \Big | \Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |Gamma\Big |_\Big |mu\Big |^2\Big | \Big |subset\Big | \Big |\Big |{x\Big |mid\Big | \Big |varphi\Big |(x\Big |,t\Big |)\Big | \Big |=\Big | \Big |mu\Big |\Big |}\Big |,\Big | \Big |quad\Big | \Big |mu\Big | \Big |in\Big | \Big |(0\Big |,\Big | \Big |tau\Big |]\Big |.\Big | \Big | \Big | \Big |end\Big |{cases\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | It\Big | is\Big | clear\Big | that\Big | \Big |begin\Big |{equation\Big |*\Big |}\Big | \Big | \Big | \Big |Gamma\Big |_\Big |mu\Big |^j\Big | \Big |subset\Big | \Big |Gamma\Big |_\Big |tau\Big |^j\Big |,\Big | \Big |quad\Big | j\Big | \Big |=1\Big |,2\Big |.\Big | \Big |end\Big |{equation\Big |*\Big |}\Big | \Big | \Big | \Big | To\Big | apply\Big | the\Big | result\Big | of\Big | Theorem\Big | \Big |ref\Big |{Car\Big | theorem\Big |}\Big | to\Big | the\Big | present\Big | case\Big |,\Big | we\Big | adopt\Big | the\Big | truncation\Big | method\Big |.\Big | For\Big | convenience\Big | in\Big | the\Big | later\Big | statement\Big |,\Big | denote\Big | \Big |$Q\Big |_\Big |{\Big |tau\Big |}\Big | \Big |=\Big | Q\Big |_1\Big |$\Big |.\Big | Fixing\Big | a\Big | arbitrarily\Big | small\Big | number\Big | \Big |$\Big |widetilde\Big | \Big 
|tau\Big | \Big |in\Big | \Big |(0\Big |,\Big | \Big |frac\Big |{\Big |tau\Big |}\Big |{8\Big |}\Big |)\Big |$\Big |,\Big | let\Big | \Big |begin\Big |{equation\Big |*\Big |}\Big | \Big | \Big | Q\Big |_\Big |{k\Big |+1\Big |}\Big | \Big |=\Big | \Big |\Big |{\Big |(t\Big |,x\Big |)\Big ||\Big | \Big |varphi\Big |(x\Big |,t\Big |)\Big |<\Big |tau\Big | \Big |-\Big | k\Big |widetilde\Big | \Big |tau\Big |,\Big | k\Big |=1\Big |,2\Big |,3\Big |\Big |}\Big |.\Big | \Big |end\Big |{equation\Big |*\Big |}\Big | Hence\Big |,\Big | it\Big | is\Big | easy\Big | to\Big | show\Big | that\Big | that\Big | \Big |$Q\Big |_4\Big |subset\Big | Q\Big |_3\Big |subset\Big | Q\Big |_2\Big |subset\Big | Q\Big |_1\Big |$\Big |.\Big | \Big | Introduce\Big | a\Big | truncation\Big | function\Big | \Big |$\Big |chi\Big |in\Big | C\Big |_0\Big |^\Big |{\Big |infty\Big |}\Big |(Q\Big |_2\Big |)\Big |$\Big | in\Big | the\Big | following\Big | manner\Big | \Big |begin\Big |{equation\Big |*\Big |}\Big | \Big | \Big | \Big |chi\Big | \Big |in\Big | \Big |[0\Big |,\Big | 1\Big |]\Big |quad\Big | \Big |text\Big |{and\Big |}\Big |quad\Big | \Big |chi\Big | \Big |=\Big | 1\Big | \Big |quad\Big | \Big |text\Big |{in\Big |}\Big | \Big |quad\Big | Q\Big |_3\Big |.\Big | \Big |end\Big |{equation\Big |*\Big |}\Big | \Big | \Big | \Big | Let\Big | \Big |$z\Big |$\Big | be\Big | the\Big | solution\Big | of\Big | \Big |eqref\Big |{system1\Big |}\Big |.\Big | Let\Big | \Big |$\Big |Phi\Big |=\Big | \Big |chi\Big | z\Big |$\Big |.\Big | Then\Big | a\Big | little\Big | bothersome\Big | calculation\Big | gives\Big | that\Big | \Big |begin\Big |{equation\Big |}\Big | \Big |label\Big |{truncated\Big | equation\Big |}\Big | \Big |begin\Big |{cases\Big |}\Big | \Big |sigma\Big | d\Big |Phi\Big | \Big |-\Big | \Big |Delta\Big | \Big |Phi\Big | dt\Big | \Big |=\Big | \Big |(b\Big |_1\Big |Phi\Big |_t\Big | \Big |+\Big | b\Big |_2\Big |cdot\Big |nabla\Big | \Big |Phi\Big | \Big |+\Big | b\Big |_3\Big | \Big |Phi\Big 
|)dt\Big | \Big | \Big |+\Big | f\Big |(x\Big |,t\Big |)dt\Big | \Big |+\Big | \Big | b\Big |_4\Big |Phi\Big | dW\Big |(t\Big |)\Big |&\Big | \Big |text\Big |{in\Big | \Big |}\Big | Q\Big |_\Big |{\Big |tau\Big |}\Big |,\Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |displaystyle\Big | \Big |Phi\Big | \Big |=\Big | 0\Big |,\Big | \Big |\Big |;\Big |\Big |;\Big | \Big |frac\Big |{\Big |partial\Big | \Big |Phi\Big |}\Big |{\Big |partial\Big | \Big |nu\Big |}\Big | \Big |=\Big | 0\Big | \Big |&\Big |text\Big |{on\Big | \Big |}\Big | \Big |Gamma\Big |_\Big |{\Big |tau\Big |}\Big |.\Big | \Big |end\Big |{cases\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | Here\Big |,\Big | we\Big | denote\Big | by\Big | \Big |begin\Big |{equation\Big |*\Big |}\Big | \Big | \Big | f\Big |(x\Big |,t\Big |)\Big | \Big |=\Big | \Big |sigma\Big | \Big |chi\Big |_\Big |{tt\Big |}\Big | z\Big | \Big |+\Big | 2a\Big |chi\Big |_t\Big | z\Big |_t\Big | \Big |-\Big | 2\Big |nabla\Big | \Big |chi\Big |cdot\Big |nabla\Big | z\Big | \Big |-\Big | z\Big |Delta\Big | \Big |chi\Big | \Big |-\Big | b\Big |_1\Big |chi\Big |_t\Big | z\Big | \Big |-\Big | b\Big |_2\Big |cdot\Big | z\Big |nabla\Big | \Big |chi\Big |.\Big | \Big |end\Big |{equation\Big |*\Big |}\Big | From\Big | the\Big | definition\Big | of\Big | \Big |$\Big |chi\Big |$\Big |,\Big | \Big |$f\Big |$\Big | is\Big | clearly\Big | supported\Big | in\Big | \Big |$Q\Big |_2\Big |setminus\Big | \Big |overline\Big | Q\Big |_3\Big |$\Big |.\Big | \Big | \Big | Let\Big | \Big |$\Big |$F\Big | \Big |=\Big | b\Big |_1\Big |Phi\Big |_t\Big | \Big |+\Big | b\Big |_2\Big |cdot\Big | \Big |nabla\Big | \Big |Phi\Big | \Big |+\Big | b\Big |_3\Big | \Big |Phi\Big | \Big |+\Big | f\Big |.\Big |$\Big |$\Big | In\Big | stead\Big | of\Big | \Big |$u\Big |$\Big | by\Big | \Big |$\Big |Phi\Big |$\Big | in\Big | \Big |eqref\Big |{4\Big |.15\Big |-eq20\Big |}\Big |,\Big | we\Big | have\Big | that\Big | \Big |begin\Big 
|{equation\Big |}\Big |label\Big |{4\Big |.15\Big |-eq18\Big |}\Big | \Big |begin\Big |{array\Big |}\Big |{ll\Big |}\Big |displaystyle\Big | \Big |quad\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |varepsilon\Big |}\Big |theta\Big | \Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big |nabla\Big |ell\Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)\Big | \Big |big\Big |(\Big | \Big |sigma\Big | d\Big |Phi\Big |_t\Big | \Big |-\Big | \Big |Delta\Big | \Big |Phi\Big | dt\Big | \Big |big\Big |)dx\Big | \Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | \Big |geq\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |varepsilon\Big |}\Big | \Big |\Big |[s\Big |lambda\Big |^2\Big | \Big |varphi\Big |^\Big |{\Big |-\Big |lambda\Big |-2\Big |}\Big |(\Big ||\Big |nabla\Big | v\Big ||\Big |^2\Big |+v\Big |_t\Big |^2\Big |)\Big |+\Big | s\Big |^3\Big |lambda\Big |^4\Big | \Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}v\Big |^2\Big | \Big |\Big |]dxdt\Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big |displaystyle\Big | \Big |quad\Big | \Big |+\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big |nabla\Big |ell\Big | \Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)\Big |^2\Big | dxdt\Big | \Big |+\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |sigma\Big |^2\Big |theta\Big |^2\Big |ell\Big |_t\Big |(d\Big |Phi\Big |_t\Big |)\Big |^2dx\Big |.\Big | \Big |end\Big |{array\Big |}\Big | \Big |end\Big |{equation\Big |}\Big | Due\Big | to\Big | the\Big | elementary\Big | property\Big | of\Big | It\Big |\Big |^\Big |{o\Big |}\Big | integral\Big |,\Big | it\Big | is\Big | clear\Big | that\Big | \Big | \Big |begin\Big |{equation\Big 
|*\Big |}\Big | \Big | \Big | \Big | \Big |begin\Big |{array\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |{ll\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |&\Big | \Big |quad\Big |displaystyle\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |theta\Big | \Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big |nabla\Big |ell\Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)\Big | \Big |big\Big |(\Big | \Big |sigma\Big | d\Big |Phi\Big |_t\Big | \Big |-\Big | \Big |Delta\Big | \Big |Phi\Big | dt\Big | \Big |big\Big |)dx\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |=\Big |displaystyle\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |theta\Big | \Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big |nabla\Big |ell\Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)F\Big | dxdt\Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |displaystyle\Big |quad\Big | \Big | \Big |+\Big | \Big |\Big |;\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |theta\Big | \Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big |nabla\Big |ell\Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)b\Big |_4\Big | \Big |Phi\Big | d\Big | W\Big |(t\Big |)\Big | dx\Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |leq\Big | \Big |displaystyle\Big | \Big | \Big |mathbb\Big |{E\Big |}\Big | \Big |int\Big |_\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big |theta\Big |^2\Big | F\Big |^2\Big | dxdt\Big | \Big |+\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |varepsilon\Big |}\Big |theta\Big | \Big |big\Big |(\Big | \Big |-2\Big |sigma\Big |ell\Big |_t\Big | v\Big |_t\Big | \Big |+\Big | 2\Big 
|nabla\Big |ell\Big |cdot\Big |nabla\Big | v\Big | \Big |big\Big |)\Big |^2\Big | dxdt\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big | \Big |end\Big |{equation\Big |*\Big |}\Big | \Big | Thus\Big |,\Big | we\Big | show\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big | \Big | \Big |begin\Big |{array\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |{ll\Big |}\Big | \Big | \Big | \Big | \Big | \Big |&\Big |displaystyle\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big | \Big |\Big |[s\Big |lambda\Big |^2\Big | \Big |varphi\Big |^\Big |{\Big |-\Big |lambda\Big |-2\Big |}\Big |(\Big ||\Big |nabla\Big | v\Big ||\Big |^2\Big |+v\Big |_t\Big |^2\Big |)\Big |+\Big | s\Big |^3\Big |lambda\Big |^4\Big | \Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}v\Big |^2\Big | \Big |\Big |]dxdt\Big | \Big |\Big |\Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |displaystyle\Big | \Big |+\Big |\Big |;\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big |sigma\Big |^2\Big |theta\Big |^2\Big |ell\Big |_t\Big |(d\Big |Phi\Big |_t\Big |)\Big |^2dx\Big | \Big |leq\Big | C\Big |mathbb\Big |{E\Big |}\Big | \Big |int\Big |_\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big |theta\Big |^2\Big | F\Big |^2\Big | dxdt\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big | \Big |end\Big |{equation\Big |}\Big | \Big | Let\Big | us\Big | now\Big | do\Big | some\Big | estimate\Big | for\Big | the\Big | right\Big | hand\Big | side\Big | of\Big | the\Big | above\Big | inequality\Big |.\Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big | \Big | \Big |begin\Big |{array\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |{ll\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |&\Big | \Big |quad\Big | \Big |displaystyle\Big | \Big |mathbb\Big |{E\Big |}\Big | \Big |int\Big |_\Big |{Q\Big |_\Big |{\Big 
|tau\Big |}\Big |}\Big |theta\Big |^2\Big | F\Big |^2\Big | dxdt\Big |\Big |\Big | \Big | \Big | \Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |displaystyle\Big |leq\Big | 2\Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big |\Big |(b\Big |_1\Big |Phi\Big |_t\Big | \Big |+\Big | b\Big |_2\Big |cdot\Big |nabla\Big | \Big |Phi\Big | \Big |+\Big | b\Big |_3\Big | \Big |Phi\Big |\Big |)\Big |^2\Big | dxdt\Big | \Big |+\Big | 2\Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big ||f\Big ||\Big |^2\Big | dxdt\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big | \Big |end\Big |{equation\Big |}\Big | \Big | Note\Big | that\Big | \Big |$f\Big |$\Big | is\Big | supported\Big | in\Big | \Big |$Q\Big |_2\Big |setminus\Big |overline\Big | Q\Big |_3\Big |$\Big |.\Big | Hence\Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big | \Big | \Big |begin\Big |{array\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |{ll\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |&\Big | \Big |displaystyle\Big | \Big |quad\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big |theta\Big |^2\Big ||f\Big ||\Big |^2\Big | dxdt\Big |\Big |\Big | \Big | \Big | \Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |displaystyle\Big | \Big |=\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big |theta\Big |^2\Big ||\Big | \Big |sigma\Big | \Big |chi\Big |_\Big |{tt\Big |}\Big | z\Big | \Big |+\Big | 2\Big |sigma\Big |chi\Big |_t\Big | z\Big |_t\Big | \Big |-\Big | 2\Big |nabla\Big | \Big |chi\Big |cdot\Big |nabla\Big | z\Big | \Big |-\Big | z\Big |Delta\Big | \Big |chi\Big | \Big |-\Big | b\Big |_1\Big |chi\Big |_t\Big | z\Big | \Big |-\Big | b\Big |_2\Big |cdot\Big | z\Big |nabla\Big | \Big |chi\Big ||\Big |^2\Big | 
dxdt\Big |\Big |\Big | \Big | \Big | \Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big |&\Big | \Big |displaystyle\Big | \Big |leq\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_2\Big |setminus\Big |overline\Big | Q\Big |_3\Big |}\Big | \Big |theta\Big |^2\Big |\Big |(z\Big |_t\Big |^2\Big | \Big |+\Big | \Big ||\Big |nabla\Big | z\Big ||\Big |^2\Big | \Big |+\Big | z\Big |^2\Big |\Big |)dxdt\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big | \Big |end\Big |{equation\Big |}\Big | \Big | Thus\Big |,\Big | we\Big | achieve\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big |begin\Big |{array\Big |}\Big | \Big | \Big |{ll\Big |}\Big | \Big |quad\Big | \Big |displaystyle\Big | \Big |mathbb\Big |{E\Big |}\Big | \Big |int\Big |_\Big |{Q\Big |_\Big |{\Big |tau\Big |}\Big |}\Big |theta\Big |^2\Big | F\Big |^2\Big | dxdt\Big |\Big |\Big | \Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big | \Big |displaystyle\Big | \Big |leq\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big | \Big |int\Big |_\Big |{Q\Big |_1\Big |}\Big |theta\Big |^2\Big | \Big |\Big |(\Big |Phi\Big |_t\Big | \Big |^2\Big | \Big |+\Big | \Big ||\Big |nabla\Big | \Big |Phi\Big ||\Big |^2\Big | \Big |+\Big | \Big |Phi\Big |^2\Big |\Big |)dxdt\Big |+\Big | \Big | \Big |\Big |;\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_2\Big |setminus\Big | \Big |overline\Big | Q\Big |_3\Big |}\Big | \Big |theta\Big |^2\Big |\Big |(z\Big |_t\Big |^2\Big | \Big |+\Big | \Big ||\Big |nabla\Big | z\Big ||\Big |^2\Big | \Big |+\Big | z\Big |^2\Big |\Big |)dxdt\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big | \Big |end\Big |{equation\Big |}\Big | \Big | \Big | And\Big | then\Big | \Big | \Big | \Big |begin\Big |{equation\Big |}\Big | \Big | \Big | \Big | \Big |begin\Big |{array\Big |}\Big | \Big | \Big | \Big | \Big | \Big | \Big |{ll\Big |}\Big | \Big 
|quad\Big |displaystyle\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |tau\Big |}\Big | \Big |\Big |[s\Big |lambda\Big |^2\Big | \Big |varphi\Big |^\Big |{\Big |-\Big |lambda\Big |-2\Big |}\Big |(\Big ||\Big |nabla\Big | v\Big ||\Big |^2\Big |+v\Big |_t\Big |^2\Big |)\Big |+\Big | s\Big |^3\Big |lambda\Big |^4\Big | \Big |varphi\Big |^\Big |{\Big |-3\Big |lambda\Big |-4\Big |}v\Big |^2\Big | \Big |\Big |]dxdt\Big | \Big | \Big |+\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_\Big |varepsilon\Big |}\Big |sigma\Big |^2\Big |theta\Big |^2\Big |ell\Big |_t\Big |(d\Big |Phi\Big |_t\Big |)\Big |^2dx\Big | \Big |\Big |\Big | \Big | \Big | \Big | \Big |noalign\Big |{\Big |smallskip\Big |}\Big | \Big | \Big | \Big |displaystyle\Big | \Big |leq\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big | \Big |int\Big |_\Big |{Q\Big |_1\Big |}\Big |theta\Big |^2\Big | \Big |\Big |(\Big |Phi\Big |_t\Big | \Big |^2\Big | \Big |+\Big | \Big ||\Big |nabla\Big | \Big |Phi\Big ||\Big |^2\Big | \Big |+\Big | \Big |Phi\Big |^2\Big |\Big |)dxdt\Big |+\Big | \Big | \Big |\Big |;\Big | C\Big | \Big |mathbb\Big |{E\Big |}\Big |int\Big |_\Big |{Q\Big |_2\Big |setminus\Big | \Big |overline\Big | Q\Big |_3\Big |}\Big | \Big |theta\Big |^2\Big |\Big |(z\Big |_t\Big |^2\Big | \Big |+\Big | \Big ||\Big |nabla\Big | z\Big ||\Big |^2\Big | \Big |+\Big | z\Big |^2\Big |\Big |)dxdt\Big |.\Big | \Big | \Big | \Big | \Big |end\Big |{array\Big |}\Big | \Big | \Big |end\Big |{equation\Big |}\Big | \Big | Recall\Big | that\Big | \Big | \Big |begin\Big |{equation\Big |*\Big |}\Big | \Big | \Big | \Big | \Big |(d\Big |Phi\Big |_t\Big |)\Big |^2\Big | \Big | \Big |=\Big | b\Big |_4\Big |^2\Big | \Big |Phi\Big |^2dt\Big |,\Big | \Big |quad\Big | \Big |ell\Big |_t\Big | \Big |=\Big | \Big |-s\Big |lambda\Big | \Big |\Big |(t\Big |-\Big |dfrac\Big | T2\Big |\Big |)\Big | \Big |varphi\Big |^\Big |{\Big |-\Big |lambda\Big | \Big |-1\Big |}\Big |.\Big | \Big | \Big |end\Big 
\end{equation*}
Therefore
\begin{equation}
\hspace{-0.5cm}\begin{array}{ll}
&\displaystyle \mathbb{E}\int_{Q_{\tau}} \big[s\lambda^2 \varphi^{-\lambda-2}(|\nabla v|^2+v_t^2)+ s^3\lambda^4 \varphi^{-3\lambda-4}v^2 \big]dxdt \\
\noalign{\smallskip}
&\displaystyle \leq C \mathbb{E} \int_{Q_1}\theta^2 \big(\Phi_t^2 + |\nabla \Phi|^2 + \Phi^2\big)dxdt + \mathbb{E}\int_{Q_\tau}\theta^2 \sigma^2 s\lambda t \varphi^{-\lambda-1} b_4^2\Phi^2 dxdt\\
\noalign{\smallskip}
&\displaystyle \quad + \; C \mathbb{E}\int_{Q_2\setminus \overline Q_3} \theta^2\big(z_t^2 + |\nabla z|^2 + z^2\big)dxdt.
\end{array}
\end{equation}
Then for $s$ and $\lambda$ large enough, it follows that
\begin{equation}
\begin{array}{ll}
&\displaystyle \mathbb{E}\int_{Q_{\tau}} \big[s\lambda^2 \varphi^{-\lambda-2}(|\nabla v|^2+v_t^2)+ s^3\lambda^4 \varphi^{-3\lambda-4}v^2 \big]dxdt \\
\noalign{\smallskip}
&\displaystyle \leq C \mathbb{E} \int_{Q_\tau}\theta^2 \big(\Phi_t^2 + |\nabla \Phi|^2 + \Phi^2\big)dxdt + \; C \mathbb{E}\int_{Q_2\setminus\overline Q_3} \theta^2\big(z_t^2 + |\nabla z|^2 + z^2\big)dxdt.
\end{array}
\end{equation}
Now, we find that
\begin{equation*}
|\nabla v|^2 + v_t^2 \geq C \theta^2 \big(s^2\lambda^2 \varphi^{-2\lambda-2}\Phi^2 + |\nabla \Phi|^2 + \Phi_t^2\big).
\end{equation*}
Thus for large $s$ and $\lambda$, it follows that
\begin{equation}
\begin{array}{ll}
&\displaystyle \mathbb{E}\int_{Q_{\tau}}\theta^2 \big[s\lambda^2 \varphi^{-\lambda-2}(|\nabla \Phi|^2+\Phi_t^2)+ s^3\lambda^4 \varphi^{-3\lambda-4}\Phi^2 \big]dxdt \\
\noalign{\smallskip}
&\displaystyle \leq C \mathbb{E}\int_{Q_2\setminus\overline Q_3} \theta^2\big(z_t^2 + |\nabla z|^2 + z^2\big)dxdt.
\end{array}
\end{equation}
Recall that $\Phi = z$ in $Q_3\subset Q_{\tau}$. It is easy to show that
\begin{equation}\label{reduce to subinterval-1}
\begin{array}{ll}
&\displaystyle \mathbb{E}\int_{Q_{3}}\theta^2 \big[s\lambda^2 \varphi^{-\lambda-2}(|\nabla z|^2+z_t^2)+ s^3\lambda^4 \varphi^{-3\lambda-4}z^2 \big]dxdt \\
\noalign{\smallskip}
&\displaystyle \leq C \mathbb{E}\int_{Q_2\setminus \overline Q_3} \theta^2\big(z_t^2 + |\nabla z|^2 + z^2\big)dxdt.
\end{array}
\end{equation}
Note that in $Q_4$ we have $\varphi(x,t) < \tau - 3\widetilde \tau$, so that $\theta = e^{s\varphi^{-\lambda}} > e^{s(\tau - 3\widetilde \tau)^{-\lambda}}$. Moreover, in $Q_2\setminus \overline Q_3$ we have $\tau - 2\widetilde \tau < \varphi(x,t) <\tau -\widetilde \tau$, so that $e^{s(\tau - \widetilde \tau)^{-\lambda}} < \theta < e^{s(\tau - 2\widetilde \tau)^{-\lambda}}$. Therefore
\begin{equation}\label{reduce to subinterval-2}
\begin{array}{ll}
&\displaystyle \mathbb{E}\int_{Q_{4}}\big[s\lambda^2 \varphi^{-\lambda-2}(|\nabla z|^2+z_t^2)+ s^3\lambda^4 \varphi^{-3\lambda-4}z^2 \big]dxdt \\
\noalign{\smallskip}
&\displaystyle \leq C e^{2[s(\tau-2\widetilde \tau)^{-\lambda} - s(\tau - 3\widetilde \tau)^{-\lambda}]}\mathbb{E}\int_{Q_2\setminus \overline Q_3} \big(z_t^2 + |\nabla z|^2 + z^2\big)dxdt\\
\noalign{\smallskip}
&\displaystyle \leq C e^{2[s(\tau-2\widetilde \tau)^{-\lambda} - s(\tau - 3\widetilde \tau)^{-\lambda}]}\mathbb{E}\int_{Q_\tau} \big(z_t^2 + |\nabla z|^2 + z^2\big)dxdt.
\end{array}
\end{equation}
For brevity, setting $\overline \mu = 2\big[(\tau-2\widetilde \tau)^{-\lambda} - (\tau - 3\widetilde \tau)^{-\lambda}\big]$, we obtain
\begin{equation}\label{reduce to subinterval-3}
\mathbb{E}\int_{Q_{4}}\big(|\nabla z|^2+z_t^2+ z^2 \big)dxdt \leq C e^{s\overline \mu} \mathbb{E}\int_{Q_\tau} \big(z_t^2 + |\nabla z|^2 + z^2\big)dxdt.
\end{equation}
Since $0 < \tau - 3\widetilde \tau < \tau - 2\widetilde \tau$, we have $(\tau-2\widetilde \tau)^{-\lambda} < (\tau - 3\widetilde \tau)^{-\lambda}$, and hence $\overline \mu < 0$. Letting $s\to +\infty$, we find $z = 0$ in $Q_4$. Taking $Q_4$ as the desired region, we complete the proof.
\signed {$\sqr69$}

\end{document}
\begin{document} \title{Nonlocal quantum correlations under amplitude damping decoherence} \author{Tanumoy Pramanik} \email{[email protected]} \affiliation{Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea} \author{Young-Wook Cho} \affiliation{Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea} \author{Sang-Wook Han} \affiliation{Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea} \author{Sang-Yun Lee} \affiliation{Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea} \author{Sung Moon} \affiliation{Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea} \affiliation{Division of Nano \& Information Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea} \author{Yong-Su Kim} \email{[email protected]} \affiliation{Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea} \affiliation{Division of Nano \& Information Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea} \date{\today} \begin{abstract} \noindent Different nonlocal quantum correlations of entanglement, steering and Bell nonlocality are defined with the help of local hidden state (LHS) and local hidden variable (LHV) models. Considering their unique roles in quantum information processing, it is of importance to understand the individual nonlocal quantum correlation as well as their relationship. Here, we investigate the effects of amplitude damping decoherence on different nonlocal quantum correlations. 
In particular, we have theoretically and experimentally shown that the entanglement sudden death phenomenon is distinct from those of steering and Bell nonlocality. In our scenario, we found that all the initial states present sudden death of steering and Bell nonlocality, while only some of the states show entanglement sudden death. These results suggest that the environmental effect can be different for different nonlocal quantum correlations, thus providing distinct operational interpretations of the different quantum correlations. \end{abstract} \keywords{Sudden death, Entanglement, Steering, Unsteering, Bell nonlocality, Amplitude damping decoherence} \maketitle \section{Introduction} Nonlocal quantum correlations are significant not only for their foundational aspects in quantum information theory, but also for their applications in various quantum information processing tasks. According to the different local models based on the properties of the underlying systems, nonlocal quantum correlations can be categorized into three different forms: entanglement, EPR (Einstein-Podolsky-Rosen) steering, and Bell nonlocality~\cite{E_rev, B_Rev, Jones07_2}. A bipartite quantum system is entangled if it cannot be written as a statistical mixture of products of local states of the individual systems. Therefore, for a bipartite entangled state, the correlation cannot be described by a local hidden state (LHS)-LHS model. If we weaken the LHS-LHS model to an LHS-local hidden variable (LHV) model, i.e., one of the systems is not trusted as a quantum system, then the non-separability becomes EPR steering~\cite{Jones07_1, Jones07_2}. If we further relax the condition to an LHV-LHV model, then the non-separability defines Bell nonlocality~\cite{Bell, CHSH, Rev_BN}. Therefore, the three forms of nonlocal quantum correlations are interconnected via their definitions. In particular, all Bell nonlocal states are steerable, and all steerable states are entangled. 
However, there exist entangled states that are not steerable, and steerable states that are not Bell nonlocal. Therefore, we can explicitly present the relationship between the nonlocal quantum correlations as Bell nonlocality $\subset$ EPR steering $\subset$ Entanglement. In practice, nonlocal quantum correlations are used as resources for quantum information processing. Entanglement is known as a basic resource for many quantum information processing tasks such as quantum teleportation~\cite{Tele, Tele_Exp1, Tele_Exp2}, quantum communication~\cite{SDC, QKD1,QKD2, QKD3}, and quantum computation~\cite{QCom, Ent_Speed}. However, in order for entanglement to play its role, both systems should be trusted as quantum systems, and there should be no quantum hacking attempts on either system. On the other hand, EPR steering and Bell nonlocality can still play roles in quantum information processing even when there exist quantum hacking attacks on one of the systems~\cite{1sDIQKD} or on both systems~\cite{QKD2, DIQKD_1, DIQKD_2, DIQKD_3, DIQKD_4}, respectively. In real-world implementations, quantum systems interact with the environment, which usually causes unavoidable decoherence. As a result, quantum correlations typically decrease gradually with increasing interaction time, and vanish completely only after an infinite interaction time~\cite{ESD_2, ESD_4, ESD_4_1,ESD_4_2}. Remarkably, the system-environment interaction sometimes causes much faster degradation of quantum correlations, so that the quantum system can completely lose its quantum correlations within a finite interaction time. This phenomenon is known as the sudden death of quantum correlations~\cite{ESD_2, ESD_1, ESD_3, ESD_4, ESD_5, ESD_5_1}. We also note that the environmental interaction sometimes increases quantum correlations in certain circumstances~\cite{DE_1,DE_2,DE_3, DE_4}. 
\begin{figure*} \caption{(a) Concurrence $C(\theta,D)$, (b) EPR steering (green) and unsteering (yellow) parameters $T_{16}(\theta,D)$ and $T_U(\theta,D)$, and (c) Bell parameter $S(\theta,D)$ with respect to $\theta$. The regions above the $C=0$, $T_{16}=0.503$, $T_U=0.503$, and $S=2$ planes indicate the existence of the corresponding nonlocal quantum correlations. The red, blue, and green curves present the boundaries of the sudden death phenomena. } \label{3D} \end{figure*} The effect of decoherence on entanglement has been widely studied both in theory and in experiment~\cite{ESD_1, ESD_2, ESD_3, ESD_4, ESD_5}. However, there are only a few theoretical studies on the other nonlocal quantum correlations~\cite{SSD,SSD_11,SSD_22, BNSD_1}. These studies deal with entanglement sudden death (ESD)~\cite{ESD_1, ESD_2, ESD_3, ESD_4} and Bell nonlocality sudden death (BNSD)~\cite{BNSD_1}; however, a study of EPR steering sudden death (SSD) is still missing. Moreover, all of these works are limited to one of the nonlocal quantum correlations, and thus they fail to present a unified picture of the environmental effect on the various nonlocal quantum correlations. Considering their relationship and unique roles in quantum information processing, it is of importance to investigate the dynamics of the various nonlocal quantum correlations in the presence of decoherence. In this paper, we theoretically and experimentally investigate entanglement, EPR steering, and Bell nonlocality under an amplitude damping channel (ADC). We found that different quantum correlations present very different environmental effects. For example, in our scenario, all the states present SSD and BNSD, while ESD happens for only some of the initial states. Moreover, we can prepare two different bipartite states with an equal amount of entanglement, but one of them shows sudden death of all nonlocal quantum correlations, while the other shows only steering and Bell nonlocality sudden deaths, but not ESD. 
Therefore, in the presence of the ADC, entanglement behaves very differently from the other nonlocal quantum correlations, and this provides distinct operational interpretations of the different nonlocal quantum correlations. \section{Theory} \subsection{Amplitude damping channel} The interaction between the system $S$ and the environment $E$ via the ADC with an interaction strength of $0\le D\le1$ can be modeled as~\cite{lee11,kim12} \begin{eqnarray} |0\rangle_S|0\rangle_E & \rightarrow & |0\rangle_S|0\rangle_E, \nonumber \\ |1\rangle_S|0\rangle_E & \rightarrow & \sqrt{1-D} |1\rangle_S|0\rangle_E + \sqrt{D} |0\rangle_S|1\rangle_E. \label{ADC} \end{eqnarray} Here, we assume that the environment is initially in $|0\rangle_E$. Let us consider a two-qubit system initially prepared in the pure state $|\psi_\theta\rangle = \cos\theta |0\rangle_A|0\rangle_B + \sin\theta |1\rangle_A|1\rangle_B$, where $0\le\theta\le\pi/2$ is the biasing parameter. Assuming both qubits $A$ and $B$ undergo the ADC with an equal interaction strength $D$, the state becomes~\cite{kim12} \begin{eqnarray} \rho_{\theta}^{D} = \begin{pmatrix} \alpha_{11} & 0 & 0 & \alpha_{14} \\ 0 & \alpha_{22} & 0 & 0 \\ 0 & 0 & \alpha_{22} & 0 \\ \alpha_{14} & 0 & 0 & \alpha_{44} \end{pmatrix}, \label{State_f} \end{eqnarray} where $\alpha_{11}=\cos^2\theta + D^2 \sin^2\theta$, $\alpha_{14}=(1-D) \cos\theta\sin\theta$, $\alpha_{22}=(1-D)D \sin^2\theta$, and $\alpha_{44}=(1-D)^2 \sin^2\theta$, respectively. Now, we study entanglement, EPR steering, and Bell nonlocality of the state $\rho_{\theta}^D$. Here, we quantify the amount of entanglement with concurrence~\cite{Concurrence_1,Concurrence_2}. Bell nonlocality is determined by the Horodecki criterion, which provides a necessary and sufficient condition for a $2\otimes 2$ dimensional system~\cite{Horo_Cri,Horo_Cri_2}. We apply the steering criterion developed in Refs.~\cite{Saunders, bennet12} to capture the steerability of a given state. 
Note that the steering criterion is necessary but not sufficient, and thus it cannot determine the unsteerability of a given state. In order to capture unsteerability, we employ the recently developed sufficient criterion of unsteerability~\cite{PE_St3}. Here, we only provide the results of the theoretical investigation; the detailed estimation procedures can be found in the Appendices. \subsection{Entanglement} The concurrence of $\rho_{\theta}^D$ is given by \begin{eqnarray} C(\theta,D) = \max\left[0,\,2 (1-D) \sin\theta (\cos\theta - D \sin\theta)\right], \label{C_f} \end{eqnarray} and depicted in Fig.~\ref{3D}(a). All the initial states ($D=0$) have non-zero concurrence, and thus are entangled, except for $\theta=0$ or $\pi/2$. As the interaction strength $D$ increases, the concurrence decreases. One can find that entanglement vanishes, and the state $\rho_{\theta}^D$ becomes separable, when $\cot\theta\leq D$. Therefore, the ESD occurs along the red line, which corresponds to $D = \cot\theta$. Note that the entanglement of the initial state, $C(\theta,D=0)=\sin2\theta$, is symmetrical with respect to $\theta=\pi/4$. Therefore, the initial states $|\psi_{\phi}\rangle$ and $|\psi_{\frac{\pi}{2}-\phi}\rangle$, where $0\le\phi<\frac{\pi}{4}$, have the same amount of entanglement. This symmetry is broken, as $C(\frac{\pi}{2}-\phi,D)<C(\phi,D)$ after the amplitude damping decoherence, $0<D$. This asymmetrical nature becomes even clearer for the ESD, i.e., ESD occurs only for $|\psi_{\frac{\pi}{2}-\phi}\rangle$ and never happens for $|\psi_{\phi}\rangle$. It originates from the asymmetrical nature of the ADC, where $|1\rangle$ experiences the damping decoherence while $|0\rangle$ is unaffected. We note that non-zero concurrence provides the necessary and sufficient condition for the existence of entanglement in a two-qubit system~\cite{Concurrence_1, Concurrence_2}. 
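As a cross-check of Eq.~(\ref{C_f}), the closed form can be compared with a direct Wootters computation on the matrix of Eq.~(\ref{State_f}). The following is a minimal numerical sketch (Python with numpy assumed; it is not part of the original analysis):

```python
import numpy as np

def rho_theta_D(theta, D):
    # Density matrix of Eq. (2) after the ADC on both qubits.
    a11 = np.cos(theta)**2 + D**2 * np.sin(theta)**2
    a14 = (1 - D) * np.cos(theta) * np.sin(theta)
    a22 = (1 - D) * D * np.sin(theta)**2
    a44 = (1 - D)**2 * np.sin(theta)**2
    return np.array([[a11, 0, 0, a14],
                     [0, a22, 0, 0],
                     [0, 0, a22, 0],
                     [a14, 0, 0, a44]])

def concurrence(r):
    # Wootters' formula: C = max(0, l1 - l2 - l3 - l4), where the l_i are the
    # decreasing square roots of the eigenvalues of
    # rho (sy x sy) rho* (sy x sy).
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r @ YY @ r.conj() @ YY))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def C_closed(theta, D):
    # Closed form of Eq. (3).
    return max(0.0, 2 * (1 - D) * np.sin(theta) * (np.cos(theta) - D * np.sin(theta)))
```

For instance, at $\theta=3\pi/8$ both expressions vanish for every $D\ge\cot(3\pi/8)\approx0.414$, reproducing the ESD boundary $D=\cot\theta$.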
Therefore, the entanglement sudden death described above is a real physical phenomenon, although it has been investigated through the mathematical description of concurrence. \subsection{EPR steering} An LHS model restricts the correlation $P(a_\mathcal{A},b_{\mathcal{B}})$ between the measurement outcomes $a$ and $b$ of the observables $\mathcal{A}$ and $\mathcal{B}$ on the systems $A$ and $B$, respectively, as \begin{eqnarray} P(a_{\mathcal{A}},b_{\mathcal{B}}) = \sum_\lambda P(\lambda) P(a_{\mathcal{A}}|\lambda) P_Q(b_{\mathcal{B}}|\lambda), \label{LHS} \end{eqnarray} where $P(\lambda)$ is the distribution of the hidden variables. The subscript $Q$ indicates that Bob's probability distribution is obtained from the measurement of an observable on the quantum system $B$. The joint probability distribution $P(a_{\mathcal{A}},b_{\mathcal{B}})$ for the bipartite state $\rho_\theta^D$ shared by Alice and Bob can be written as \begin{eqnarray} P_{\rho}(a_{\mathcal{A}},b_{\mathcal{B}})=\Tr\Big[\Big(\frac{I+(-1)^a \mathcal{A}}{2}\otimes\frac{I+(-1)^b \mathcal{B}}{2}\Big)\rho_\theta^D\Big]. \label{JPD} \end{eqnarray} An experimentally testable steering criterion can be derived with the help of the LHS model of Eq.~(\ref{LHS}). As the quantum probability distributions $\big\{P_Q(b_{\mathcal{B}}|\lambda)\big\}$ for the measurement of non-commuting observables are bounded by the uncertainty principle, the correlations $\big\{P(a_{\mathcal{A}},b_{\mathcal{B}})\big\}$ are also bounded by the uncertainty principle. Several steering criteria have been derived based on different forms of the uncertainty relation along with the LHS model~\cite{PE_St2,PE_St1, St_C1_2,St_C2,PE_St3}. 
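The channel of Eq.~(\ref{ADC}), the resulting state of Eq.~(\ref{State_f}), and the joint distribution of Eq.~(\ref{JPD}) can be reproduced numerically. The sketch below (Python with numpy assumed; not part of the original analysis) applies the Kraus operators of the ADC to $|\psi_\theta\rangle$ and evaluates the joint probabilities:

```python
import numpy as np

def adc_kraus(D):
    # Kraus operators of the ADC of Eq. (1): |1> decays to |0> with probability D.
    K0 = np.array([[1, 0], [0, np.sqrt(1 - D)]])
    K1 = np.array([[0, np.sqrt(D)], [0, 0]])
    return [K0, K1]

def rho_theta_D(theta, D):
    # Pure initial state |psi> = cos(theta)|00> + sin(theta)|11>,
    # then the ADC applied independently to both qubits.
    psi = np.array([np.cos(theta), 0, 0, np.sin(theta)])
    r = np.outer(psi, psi)
    out = np.zeros((4, 4))
    for KA in adc_kraus(D):
        for KB in adc_kraus(D):
            K = np.kron(KA, KB)
            out += K @ r @ K.T
    return out

def joint_prob(r, A, B, a, b):
    # Eq. (5): P(a,b) = Tr[(I+(-1)^a A)/2 x (I+(-1)^b B)/2 rho].
    I = np.eye(2)
    PA = (I + (-1)**a * A) / 2
    PB = (I + (-1)**b * B) / 2
    return np.trace(np.kron(PA, PB) @ r).real

Z = np.array([[1, 0], [0, -1]])
r = rho_theta_D(np.pi / 4, 0.3)
total = sum(joint_prob(r, Z, Z, a, b) for a in (0, 1) for b in (0, 1))
```

The Kraus construction reproduces the matrix elements $\alpha_{11}$, $\alpha_{14}$, and $\alpha_{44}$ of Eq.~(\ref{State_f}) directly, and the four joint probabilities sum to one as required.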
Here, we employ the widely used steering criterion of Refs.~\cite{Saunders, bennet12}, \begin{eqnarray} T_m=\frac{1}{m} \sum_{k=1}^m \langle \alpha_k (\hat{n}_k\cdot\vec{\sigma}^B)\rangle \leq C_m, \label{Steer_m} \end{eqnarray} where $m$ is the number of measurement settings of Alice and Bob, and the random variable $\alpha_k\in\{-1,1\}$ is Alice's measurement result for the $k$-th measurement. Bob's $k$-th measurement corresponds to the spin measurement along the direction $\hat{n}_k$, and $\vec{\sigma}^B=(\sigma_x,\sigma_y,\sigma_z)$ is the vector of Pauli spin operators. $C_m$ is the maximum value of $T_m$ when Bob's system can be described by an LHS model. The violation of Eq.~(\ref{Steer_m}) guarantees the steerability of the shared bipartite state $\rho_\theta^D$. The efficiency of Eq.~(\ref{Steer_m}) increases with $m$, i.e., for a larger $m$, Eq.~(\ref{Steer_m}) captures a larger set of steerable states. Here, we follow the technique used in Refs.~\cite{Saunders, bennet12, Cavalcanti13} to increase the number of measurement settings $m$. In Refs.~\cite{Saunders, bennet12, Cavalcanti13}, the vertices of the three-dimensional Platonic solids are used to design the measurement directions. There are only five three-dimensional Platonic solids, with 4, 6, 8, 12, and 20 vertices. The measurement directions are chosen along the lines joining each vertex with its diametrically opposite vertex; the Platonic solid with 4 vertices is excluded since its vertices have no diametrically opposite partners. In this way, we can obtain $3$, $4$, $6$, and $10$ measurement settings from the Platonic solids with 6, 8, 12, and 20 vertices, respectively. We can further increase the number of measurement settings by combining the measurement directions from these four Platonic solids. Here, we have chosen $m=16$ measurement settings by combining the axes of a dodecahedron (the Platonic solid with 20 vertices) and its dual, the icosahedron (the Platonic solid with 12 vertices). 
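The construction of the 16 axes can be sketched as follows, using the standard vertex coordinates of the icosahedron (cyclic permutations of $(0,\pm1,\pm\varphi)$, with $\varphi$ the golden ratio) and of the dodecahedron ($(\pm1,\pm1,\pm1)$ together with cyclic permutations of $(0,\pm1/\varphi,\pm\varphi)$); Python with numpy is assumed, and the sketch is not part of the original analysis:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio

def cyclic(v):
    x, y, z = v
    return [(x, y, z), (y, z, x), (z, x, y)]

def icosahedron():
    # 12 vertices: cyclic permutations of (0, +-1, +-phi).
    return [p for s1 in (1, -1) for s2 in (1, -1)
            for p in cyclic((0, s1, s2 * phi))]

def dodecahedron():
    # 20 vertices: (+-1, +-1, +-1) plus cyclic permutations of (0, +-1/phi, +-phi).
    vs = [(s1, s2, s3) for s1 in (1, -1) for s2 in (1, -1) for s3 in (1, -1)]
    vs += [p for s1 in (1, -1) for s2 in (1, -1)
           for p in cyclic((0, s1 / phi, s2 * phi))]
    return vs

def axes(vertices):
    # One unit direction per antipodal pair of vertices.
    out = []
    for v in vertices:
        n = np.array(v) / np.linalg.norm(v)
        if not any(np.allclose(n, m) or np.allclose(n, -m) for m in out):
            out.append(n)
    return out

directions = axes(icosahedron()) + axes(dodecahedron())  # 6 + 10 = 16 settings
```

Pairing each vertex with its antipode gives 6 axes from the icosahedron and 10 from the dodecahedron, i.e., the $m=16$ settings used in the text.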
Note that we found that $m=16$ measurement settings capture a larger set of steerable states than the other possible combinations of the four Platonic solids in our scenario. In this case, steerability is guaranteed by the violation of the following inequality~\cite{bennet12,Saunders}: \begin{eqnarray} T_{16}(\theta,D)=\frac{1}{16} \sum_{k=1}^{16} \langle \alpha_k (\hat{n}_k\cdot\vec{\sigma}^B)\rangle \leq C_{16}=0.503. \label{Steer_16} \end{eqnarray} Since the steering criterion of Eq.~(\ref{Steer_16}) is necessary, but not sufficient, it does not guarantee unsteerability. The unsteerability of the state $\rho_{\theta}^D$ can be verified with the help of the sufficient criterion of unsteerability derived in Ref.~\cite{PE_St3}. According to this criterion, the unsteerability of $\rho^D_{\theta}$ is certified when \begin{eqnarray} t_{U}(\theta,D)= \max\left[\alpha, \frac{2 \cos\theta \sqrt{1-D}}{\sqrt{\gamma}}\right] \leq 1, \label{Unsteer} \end{eqnarray} where $\gamma=\cos^2\theta+D\sin^2\theta$ and $\alpha=\{D^2 (\gamma - (1- D) \sin^2\theta)^2 +2 (1-D) \gamma\}/\gamma^2$. Let us define the normalized unsteering parameter $T_U$ as \begin{eqnarray} T_{U}(\theta,D)= 0.503\cdot t_{U}(\theta,D) \leq 0.503, \label{Unsteer_norm} \end{eqnarray} in order to present the steering and unsteering criteria in the same figure; see Fig.~\ref{3D}(b). The green and yellow surfaces show $T_{16}(\theta,D) > 0.503$ and $T_U(\theta,D) > 0.503$, respectively. Therefore, the states $\rho_{\theta}^D$ lying on the green surface are steerable. Note that, similar to entanglement, the steering parameter $T_{16}(\theta,D)$ becomes asymmetrical with respect to $\theta=\pi/4$ after the ADC. The states $\rho_{\theta}^D$ become unsteerable when $T_{U}(\theta,D)\leq0.503$. Therefore, the SSD occurs at $T_{U}(\theta,D)=0.503$, which is presented by the blue curve in Fig.~\ref{3D}(b). It is remarkable that SSD happens for all the initial states, unlike ESD. 
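Rather than evaluating the closed-form expressions above, the unsteerability check can also be implemented directly from the canonical-form construction quoted in the appendix on the unsteerability calculation: Bob's side is filtered with $(\rho^B)^{-1/2}$ and the condition $\max[a_z^2+2|T_z|,\,2|T_x|]\le1$ is evaluated. The following is a numerical sketch (Python with numpy assumed; not part of the original analysis):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

def rho_theta_D(theta, D):
    # Density matrix of Eq. (2).
    a11 = np.cos(theta)**2 + D**2 * np.sin(theta)**2
    a14 = (1 - D) * np.cos(theta) * np.sin(theta)
    a22 = (1 - D) * D * np.sin(theta)**2
    a44 = (1 - D)**2 * np.sin(theta)**2
    return np.array([[a11, 0, 0, a14],
                     [0, a22, 0, 0],
                     [0, 0, a22, 0],
                     [a14, 0, 0, a44]])

def t_unsteer(theta, D):
    # Bring rho to the canonical form by filtering Bob's side with
    # rho_B^(-1/2), then evaluate max[a_z^2 + 2|T_z|, 2|T_x|] <= 1,
    # the sufficient unsteerability condition quoted in the appendix.
    r = rho_theta_D(theta, D)
    rB = np.einsum('ijik->jk', r.reshape(2, 2, 2, 2))      # Tr_A[rho]
    w, v = np.linalg.eigh(rB)
    f = v @ np.diag(1 / np.sqrt(w)) @ v.T                  # rho_B^(-1/2)
    F = np.kron(np.eye(2), f)
    c = F @ r @ F
    c /= np.trace(c)                                       # unit trace
    az = np.trace(np.kron(sz, np.eye(2)) @ c)
    Tz = np.trace(np.kron(sz, sz) @ c)
    Tx = np.trace(np.kron(sx, sx) @ c)
    return max(az**2 + 2 * abs(Tz), 2 * abs(Tx))
```

For the initially maximally entangled state ($\theta=\pi/4$), this construction certifies unsteerability for sufficiently large $D$, consistent with the SSD behavior shown in Fig.~\ref{3D}(b).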
\begin{figure} \caption{The regions of various nonlocal quantum correlations for the bipartite state $\rho_{\theta}^D$. Red, purple, blue, and green lines correspond to $C(\theta,D)=0$, $T_{16}(\theta,D)=0.503$, $T_{U}(\theta,D)=0.503$, and $S(\theta, D)=2$, respectively.} \label{Fig_Th} \end{figure} \subsection{Bell nonlocality} The Bell nonlocality of a given state can be calculated from the correlation matrix $\lambda_{ij}^{\theta, D} = \Tr[\sigma_i\otimes\sigma_j\cdot\rho_{\theta}^D]$, where $i,j\in\{x,y,z\}$~\cite{Horo_Cri,Horo_Cri_2}. The eigenvalues of $\left(\lambda_{ij}^{\theta, D}\right)^T\cdot \lambda_{ij}^{\theta, D}$, where the superscript $T$ denotes transposition, are $\lambda_1=(\cos^2\theta + (1-2 D)^2\sin^2\theta)^2$ and $\lambda_2=(1-D)^2\sin^2 2\theta$ (doubly degenerate). Therefore, the Bell parameter $S=\langle \alpha_1\beta_1\rangle + \langle \alpha_1\beta_2\rangle + \langle \alpha_2\beta_1\rangle -\langle \alpha_2\beta_2\rangle$, where $\{\alpha_1, \alpha_2\}$ and $\{\beta_1,\beta_2\}$ are the sets of Pauli operators for Alice and Bob, respectively, is given by~\cite{Horo_Cri,Horo_Cri_2} \begin{eqnarray} S(\theta, D) = \max\left[2\sqrt{2\lambda_2}, 2 \sqrt{\lambda_1+\lambda_2} \right]. \label{BI_rho} \end{eqnarray} The state is Bell nonlocal if $S(\theta, D)>2$. The Bell parameter $S(\theta, D)$ is plotted in Fig.~\ref{3D}(c). The orange surface shows the Bell parameter of the state $\rho_{\theta}^D$, and BNSD occurs along the green line represented by $S(\theta,D) = 2$, where the orange surface touches the horizontal plane. Similar to SSD, BNSD occurs for all the initial states. \subsection{Sudden death of nonlocal quantum correlations} The initial state $|\psi_{\theta}\rangle$ is entangled, steerable, and Bell nonlocal for all values of $\theta$ in the range $0<\theta< \pi/2$. As a result of the ADC, the nonlocal quantum correlations decrease with increasing interaction strength $D$. 
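The Horodecki construction behind Eq.~(\ref{BI_rho}) can be checked numerically by building the correlation matrix and diagonalizing its square; a minimal sketch (Python with numpy assumed; not part of the original analysis):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

def rho_theta_D(theta, D):
    # Density matrix of Eq. (2).
    a11 = np.cos(theta)**2 + D**2 * np.sin(theta)**2
    a14 = (1 - D) * np.cos(theta) * np.sin(theta)
    a22 = (1 - D) * D * np.sin(theta)**2
    a44 = (1 - D)**2 * np.sin(theta)**2
    return np.array([[a11, 0, 0, a14],
                     [0, a22, 0, 0],
                     [0, 0, a22, 0],
                     [a14, 0, 0, a44]])

def bell_S(theta, D):
    # Horodecki criterion: S = 2 sqrt(u1 + u2), with u1 >= u2 the two largest
    # eigenvalues of M^T M, where M_ij = Tr[sigma_i x sigma_j rho].
    r = rho_theta_D(theta, D)
    paulis = [sx, sy, sz]
    M = np.array([[np.trace(np.kron(si, sj) @ r).real for sj in paulis]
                  for si in paulis])
    u = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]
    return 2 * np.sqrt(u[0] + u[1])
```

For $\theta=\pi/4$ this reproduces the BNSD point $S=2$ at $D=1-1/\sqrt{2}\approx0.29$ quoted later for the experiment.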
In order to compare the sudden death phenomena of the various quantum correlations, we present the local-nonlocal boundaries $C(\theta,D)=0$, $T_{16}(\theta,D)=0.503$, $T_U(\theta,D)=0.503$, and $S(\theta,D)=2$ in Fig.~\ref{Fig_Th}. The red line corresponds to $C(\theta,D)=0$, and hence it divides entangled states from separable states. It signifies that the states $|\psi_\theta\rangle$ with $0<\theta\leq \pi/4$ do not show ESD in the ADC. The green curve presents $S(\theta,D)=2$, and thus shows the BNSD boundary. It has discontinuities at $(\theta,D)\sim(0.35\pi,0.101)$ and $(0.21\pi,0.269)$ due to the maximization over two functions in Eq.~(\ref{BI_rho}). The purple and blue curves correspond to $T_{16}(\theta,D)=0.503$ and $T_U(\theta,D)=0.503$, and thus they are the boundaries for steerable and unsteerable states, respectively. Between these two boundaries, there exists an undetermined area, shown in gray, where the steerability of a given state cannot be concluded with the existing steering and unsteering criteria. As can be seen in the black-shaded region, where the steering criterion fails to reveal EPR steering even for Bell nonlocal states, the steering criterion becomes ineffective as $\theta\rightarrow0$. This non-ideal behavior can be improved by increasing the number of measurement settings~\cite{Saunders}. It is interesting to compare the sudden death phenomena among the various nonlocal quantum correlations. Although all quantum correlations of the initial state $|\psi_\theta\rangle$ are symmetrical with respect to $\theta=\pi/4$, they become asymmetrical after the ADC. This happens due to the asymmetrical nature of the ADC, i.e., the ADC does not affect $|0\rangle$ and $|1\rangle$ symmetrically. As discussed above, while the states $|\psi_\phi\rangle$ and $|\psi_{\pi/2-\phi}\rangle$, where $0<\phi < \pi/4$, have the same amount of entanglement, ESD never happens for $|\psi_\phi\rangle$. In contrast, all states with $0<\theta < \pi/2$ show SSD and BNSD. 
These results indicate that different nonlocal quantum correlations are affected by the ADC in very different ways. \section{Experiment} \subsection{Experimental setup} \begin{figure} \caption{ Experimental setup for (a) the initial state preparation, (b) the amplitude damping channel, and (c) the state measurement for the inequality tests and quantum state tomography. BD : beam displacer, H : half waveplate, Q : quarter waveplate, BS : beamsplitter, Pol. : Polarizer, SPD : Single-photon detector.} \label{Setup} \end{figure} Figure~\ref{Setup} shows the experimental setup used to explore nonlocal quantum correlations affected by the ADC. A maximally entangled photon pair in $|\psi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)=\frac{1}{\sqrt{2}}(|HH\rangle+|VV\rangle)$ at 780~nm is generated at a sandwich BBO crystal via spontaneous parametric downconversion pumped by femtosecond laser pulses. Here, $|H\rangle$ and $|V\rangle$ denote the horizontal and vertical polarization states, respectively. The sandwich BBO crystal, which is composed of two type-II BBO crystals and a half waveplate in between, is specially designed for efficient generation of two-photon entangled states~\cite{wang16}. In order to implement the ADC, one needs to keep the probability amplitude of $|0\rangle$ unchanged while that of $|1\rangle$ decays to $|0\rangle$ with probability $D$. Figure~\ref{Setup}(b) shows our implementation of the ADC with polarization qubits. Two beam displacers (BD), which transmit (reflect) the horizontal (vertical) polarization state, form a Mach-Zehnder interferometer. With the half waveplates (HWP, H) in the interferometer, one can independently control the ratio between the two outputs $|0\rangle_E$ and $|1\rangle_E$ of BD2 for the horizontal and vertical polarization states. In the experiment, we set the HWP in the horizontal polarization path at $45^\circ$ in order to have all horizontal input photons at $|0\rangle_E$. 
On the other hand, the vertically polarized input state can be found at both $|0\rangle_E$ and $|1\rangle_E$, according to the angle of the HWP in the vertical polarization path. In order to cancel out the effect of the HWPs in the interferometer, we position HWPs at $45^\circ$ at both $|0\rangle_E$ and $|1\rangle_E$. The environment qubit is traced out by incoherently mixing $|0\rangle_E$ and $|1\rangle_E$ at a beamsplitter (BS)~\cite{lee11}. As shown in Fig.~\ref{Setup}(c), two-qubit quantum state tomography (QST) and the various inequality tests are conducted via two-qubit projective measurements and coincidence detection. In the experiment, the concurrence $C$ and the unsteering parameter $T_U$ are calculated from the QST results, whereas the Bell parameter $S$ and the steering parameter $T_{16}$ are directly obtained from the inequality test data. The details of calculating entanglement and unsteerability, as well as the measurement settings for the Bell nonlocality and steering tests, can be found in the Appendices. \subsection{Experimental results} For experimental verification of the effect of the ADC on the different quantum correlations, we have prepared maximally entangled polarization photon pairs from spontaneous parametric downconversion. To test Bell nonlocality and steerability, we use the CHSH inequality and the steering inequality derived in Refs.~\cite{Saunders, bennet12}. To confirm unsteerability, we experimentally test the sufficient condition of unsteerability of Eq.~(\ref{Unsteer}) via quantum state tomography~\cite{qst1,qst2}. The details of the experiment can be found in the Appendices. \begin{figure} \caption{Experimental results of the parameters of (a) entanglement, (b) Bell nonlocality, (c) EPR steering, and (d) unsteering for the initially maximally entangled state $\rho_{\theta=\pi/4}^D$, respectively. Red lines and blue markers are theoretical predictions and experimentally obtained values, respectively. 
Error bars are smaller than the size of the markers. The horizontal black lines denote the local-nonlocal boundaries. Entanglement does not show the sudden death phenomenon, while Bell nonlocality and EPR steering sudden deaths happen. (d) The undetermined region for EPR steering is presented in gray. } \label{Fig_Exp} \end{figure} We present the parameters of the different nonlocal quantum correlations for the initially maximally entangled state $|\psi_{\theta=\pi/4}\rangle$ with respect to the interaction strength $D$ in Fig.~\ref{Fig_Exp}. Figure~\ref{Fig_Exp}(a) shows the theoretically and experimentally obtained concurrence $C$. It clearly shows that entanglement gradually degrades as $D$ increases, and the state becomes separable only at $D=1$. Therefore, entanglement does not show the sudden death phenomenon. Figure~\ref{Fig_Exp}(b) presents the Bell parameter $S$. The horizontal straight line corresponds to the upper bound of the Bell inequality under the LHV model, $S=2$. Similar to the concurrence, $S$ decreases as $D$ increases. More interestingly, $S$ becomes smaller than 2 even for $D<1$, which indicates that sudden death of Bell nonlocality happens. In particular, we theoretically find that the sudden death of Bell nonlocality happens at $D\approx0.29$. Our experimental result coincides with this theoretical finding, as the state that is Bell nonlocal at $D=0.2$ becomes Bell local at $D=0.4$. It is notable that, unlike entanglement, Fig.~\ref{Fig_Exp}(b) shows a non-monotonic behavior of the Bell local correlation (i.e., the Bell parameter $S$ lies below 2) with respect to the decoherence parameter $D$. The value of $S$ decreases as $D$ increases from 0 to $0.66$, but upon a further increase of $D$ from $0.66$ to $1$, $S$ increases back up to $2$. However, it never exceeds the classical-quantum boundary of $S=2$. 
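The non-monotonic behavior of $S$ described above can be located numerically from the closed form of Eq.~(\ref{BI_rho}); the following sketch (Python with numpy assumed; not part of the original analysis) scans $S(\theta=\pi/4,D)$ over $D$:

```python
import numpy as np

def S(D, theta=np.pi / 4):
    # Closed-form Bell parameter of the Horodecki criterion at theta = pi/4.
    l1 = (np.cos(theta)**2 + (1 - 2 * D)**2 * np.sin(theta)**2)**2
    l2 = (1 - D)**2 * np.sin(2 * theta)**2
    return max(2 * np.sqrt(2 * l2), 2 * np.sqrt(l1 + l2))

Ds = np.linspace(0, 1, 10001)
vals = np.array([S(D) for D in Ds])
D_min = Ds[np.argmin(vals)]  # location of the minimum of S(pi/4, D)
```

The scan places the minimum of $S$ near $D\approx0.66$, and $S$ then climbs back to $2$ at $D=1$ without ever crossing the $S=2$ boundary, consistent with the discussion above.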
Due to the loss of quantum coherence, quantified by the off-diagonal elements, the nonlocal quantum correlations, entanglement and Bell nonlocality, decrease gradually and monotonically with the strength of the decoherence. However, the growth and decay of the diagonal elements under the ADC is the source of the non-monotonic behavior of the local correlation, i.e., the Bell local correlation explained by a local hidden variable theory. The theoretical and experimental results for EPR steering and unsteerability are presented in Figs.~\ref{Fig_Exp}(c) and (d), respectively. The horizontal lines in Figs.~\ref{Fig_Exp}(c) and (d) are the upper bound of the steering inequality allowed by the LHS model, i.e., $T_{16}=0.503$, and the upper bound of the sufficient criterion of unsteerability, i.e., $T_U=0.503$, respectively. The vertical red (blue) line denotes the value of $D$ corresponding to the intersection between the theoretical $T_{16}$ ($T_U$) and the horizontal line of $T_{16}=0.503$ ($T_{U}=0.503$). The light red shaded regions in both Figs.~\ref{Fig_Exp}(c) and (d) represent the range of $D$ for which the state $\rho_{\pi/4}^D$ is steerable. The light blue shaded region in Fig.~\ref{Fig_Exp}(d) shows the unsteerable region with respect to the parameter $D$. The steerable and unsteerable regions are separated by the gray region of $0.495\leq D\leq 0.6$, where it cannot be concluded whether the state is steerable or unsteerable with the existing criteria. The existence of the unsteerable region verifies the EPR steering sudden death of the state $\rho_{\pi/4}^D$. Similar to the Bell local correlation, the non-monotonic behavior of unsteerability, explained by a local hidden state model, occurs due to the effect of the ADC on the diagonal elements. \section{Conclusion} We have theoretically and experimentally investigated the different nonlocal quantum correlations of entanglement, EPR steering, and Bell nonlocality under an amplitude damping channel (ADC). 
Our results also show that the dynamics of entanglement is completely different from those of EPR steering and Bell nonlocality in the presence of the ADC. For example, in our scenario, entanglement sudden death depends on the preparation of the initial entangled state, whereas steering and Bell nonlocality sudden deaths happen for all the initial states. Therefore, our findings present clear theoretical and experimental evidence of the structural difference between different nonlocal quantum correlations~\cite{tanu19}. They also indicate the operational difference of nonlocal quantum correlations in the presence of decoherence. Considering the fundamental and practical importance of nonlocal quantum correlations in quantum information science, our results not only provide a better understanding, but also inspire various applications of quantum information. \begin{thebibliography}{99} \bibitem{E_rev} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Rev. Mod. Phys. {\bf 81}, 865 (2009). \bibitem{B_Rev} N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Rev. Mod. Phys. {\bf 86}, 419 (2014). \bibitem{Jones07_2} S. J. Jones, H. M. Wiseman, and A. C. Doherty, Phys. Rev. A {\bf 76}, 052116 (2007). \bibitem{Jones07_1} H. M. Wiseman, S. J. Jones, and A. C. Doherty, Phys. Rev. Lett. {\bf 98}, 140402 (2007). \bibitem{Bell} J. S. Bell, Physics {\bf 1}, 195 (1964). \bibitem{CHSH} J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. {\bf 23}, 880 (1969). \bibitem{Rev_BN} N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Rev. Mod. Phys. {\bf 86}, 419 (2014). \bibitem{Tele} C. H. Bennett, G. Brassard, C. Cr\'{e}peau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. {\bf 70}, 1895 (1993). \bibitem{Tele_Exp1} D. Bouwmeester, J. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, Nature (London) {\bf 390}, 575 (1997). \bibitem{Tele_Exp2} D. Boschi, S. Branca, F. DeMartini, L. Hardy, and S. Popescu, Phys. Rev. Lett. {\bf 80}, 1121 (1998). 
\bibitem{SDC} C. H. Bennett and S. J. Wiesner, Phys. Rev. Lett. {\bf 69}, 2881 (1992). \bibitem{QKD1} C. H. Bennett and G. Brassard, in Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India, pp. 175-179 (1984). \bibitem{QKD3} N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. {\bf 74}, 145 (2002). \bibitem{QKD2} A. K. Ekert, Phys. Rev. Lett. {\bf 67}, 661 (1991). \bibitem{QCom} C. H. Bennett and D. P. DiVincenzo, Nature {\bf 404}, 247 (2000). \bibitem{Ent_Speed} R. Jozsa and N. Linden, Proc. Roy. Soc. A {\bf 459}, 2011 (2003). \bibitem{1sDIQKD} C. Branciard, E. G. Cavalcanti, S. P. Walborn, V. Scarani, and H. M. Wiseman, Phys. Rev. A {\bf 85}, 010301(R) (2012). \bibitem{DIQKD_1} J. Barrett, L. Hardy, and A. Kent, Phys. Rev. Lett. {\bf 95}, 010503 (2005). \bibitem{DIQKD_3} A. Ac\'{i}n, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani, Phys. Rev. Lett. {\bf 98}, 230501 (2007). \bibitem{DIQKD_2} U. Vazirani and T. Vidick, Phys. Rev. Lett. {\bf 113}, 140501 (2014). \bibitem{DIQKD_4} E. A. Aguilar, R. Ramanathan, J. Kofler, and M. Paw{\l}owski, Phys. Rev. A {\bf 94}, 022305 (2016). \bibitem{ESD_2} T. Yu and J. H. Eberly, Science {\bf 323}, 598 (2009). \bibitem{ESD_4} A. Salles, F. de Melo, M. P. Almeida, M. Hor-Meyll, S. P. Walborn, P. H. Souto Ribeiro, and L. Davidovich, Phys. Rev. A {\bf 78}, 022322 (2008). \bibitem{ESD_4_1} R. Chaves, D. Cavalcanti, L. Aolita, and A. Ac\'{i}n, Phys. Rev. A {\bf 86}, 012108 (2012). \bibitem{ESD_4_2} A. Sohbi, I. Zaquine, E. Diamanti, and D. Markham, Phys. Rev. A {\bf 91}, 022101 (2015). \bibitem{ESD_1} M. P. Almeida, F. de Melo, M. Hor-Meyll, A. Salles, S. P. Walborn, P. H. Souto Ribeiro, and L. Davidovich, Science {\bf 316}, 579 (2007). \bibitem{ESD_3} T. Yu and J. H. Eberly, Phys. Rev. Lett. {\bf 97}, 140403 (2006). \bibitem{ESD_5} P. J. Dodd and J. J. Halliwell, Phys. Rev. A {\bf 69}, 052105 (2004). \bibitem{ESD_5_1} M. Ali, Eur. Phys. J. D {\bf 71}, 1 (2017). 
\bibitem{DE_1} M. B. Plenio, S. F. Huelga, A. Beige, and P. L. Knight, Phys. Rev. A {\bf 59}, 2468 (1999). \bibitem{DE_2} M. B. Plenio and S. F. Huelga, Phys. Rev. Lett. {\bf 88}, 197901 (2002). \bibitem{DE_3} D. Braun, Phys. Rev. Lett. {\bf 89}, 277901 (2002). \bibitem{DE_4} B. Ghosh, A. S. Majumdar, and N. Nayak, Phys. Rev. A {\bf 74}, 052315 (2006). \bibitem{SSD} W.-Y. Sun, D. Wang, J.-D. Shi, and L. Ye, Sci. Rep. {\bf 7}, 39651 (2017). \bibitem{SSD_11} L. Rosales-Z\'{a}rate, R. Y. Teh, S. Kiesewetter, A. Brolis, K. Ng, and M. D. Reid, J. Opt. Soc. Am. B {\bf 32}, A82 (2015). \bibitem{SSD_22} H. Yang, M.-M. Du, W.-Y. Sun, Z.-Y. Ding, D. Wang, C.-J. Zhang, and L. Ye, Laser Phys. Lett. {\bf 15}, 125201 (2018). \bibitem{BNSD_1} G. Jaeger and K. Ann, Phys. Lett. A {\bf 372}, 2212 (2008). \bibitem{lee11} J.-C. Lee, Y.-C. Jeong, Y.-S. Kim, and Y.-H. Kim, Opt. Express {\bf 19}, 16309 (2011). \bibitem{kim12} Y.-S. Kim, J.-C. Lee, O. Kwon, and Y.-H. Kim, Nature Phys. {\bf 8}, 117 (2012). \bibitem{Concurrence_1} S. Hill and W. K. Wootters, Phys. Rev. Lett. {\bf 78}, 5022 (1997). \bibitem{Concurrence_2} W. K. Wootters, Phys. Rev. Lett. {\bf 80}, 2245 (1998). \bibitem{Horo_Cri} R. Horodecki, P. Horodecki, and M. Horodecki, Phys. Lett. A {\bf 200}, 340 (1995). \bibitem{Horo_Cri_2} M. \.{Z}ukowski and \v{C}. Brukner, Phys. Rev. Lett. {\bf 88}, 210401 (2002). \bibitem{Saunders} D. J. Saunders, S. J. Jones, H. M. Wiseman, and G. J. Pryde, Nature Phys. {\bf 6}, 845 (2010). \bibitem{bennet12} A. J. Bennet, D. A. Evans, D. J. Saunders, C. Branciard, E. G. Cavalcanti, H. M. Wiseman, and G. J. Pryde, Phys. Rev. X {\bf 2}, 031003 (2012). \bibitem{PE_St3} J. Bowles, F. Hirsch, M. T. Quintino, and N. Brunner, Phys. Rev. A {\bf 93}, 022121 (2016). \bibitem{PE_St1} T. Pramanik, M. Kaplan, and A. S. Majumdar, Phys. Rev. A {\bf 90}, 050305(R) (2014). \bibitem{PE_St2} P. Skrzypczyk, M. Navascu\'{e}s, and D. Cavalcanti, Phys. Rev. Lett. {\bf 112}, 180404 (2014). \bibitem{St_C1_2} J. Schneeloch, C. J. Broadbent, S. P. 
Walborn, E. G. Cavalcanti, and J. C. Howell, Phys. Rev. A {\bf 87}, 062103 (2013). \bibitem{St_C2} P. Chowdhury, T. Pramanik, and A. S. Majumdar, Phys. Rev. A {\bf 92}, 042317 (2015). \bibitem{Cavalcanti13} D. A. Evans, E. G. Cavalcanti, and H. M. Wiseman, Phys. Rev. A {\bf 88}, 022106 (2013). \bibitem{wang16} X.-L. Wang, L.-K. Chen, W. Li, H.-L. Huang, C. Liu, C. Chen, Y.-H. Luo, Z.-E. Su, D. Wu, Z.-D. Li, H. Lu, Y. Hu, X. Jiang, C.-Z. Peng, L. Li, N.-L. Liu, Y.-A. Chen, C.-Y. Lu, and J.-W. Pan, Phys. Rev. Lett. {\bf 117}, 210502 (2016). \bibitem{qst1} K. Banaszek, G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Phys. Rev. A {\bf 61}, 010304 (1999). \bibitem{qst2} D. F. V. James, P. G. Kwiat, W. J. Munro, and A. G. White, Phys. Rev. A {\bf 64}, 052312 (2001). \bibitem{tanu19} T. Pramanik, Y.-W. Cho, S.-W. Han, S.-Y. Lee, Y.-S. Kim, and S. Phys. Rev. A {\bf 99}, 030101(R) (2019). \bibitem{PE_BN_1} N. Gisin, Phys. Lett. A {\bf 154}, 201-202 (1991). \bibitem{PE_BN_2} N. Gisin, and A. Peres, Phys. Lett. A {\bf 162}, 15 (1992). \bibitem{PE_BN_3} S. Popescu, and D. Rohrlich, Phys. Lett. A {\bf 166}, 293 (1992). \end{thebibliography} \onecolumngrid \appendix \section{Calculation of entanglement} \label{Apdx_Con} Entanglement of a bipartite state can be easily verified from its concurrence. If concurrence is positive, then the state is said to be entangled. The concurrence of the state $\rho_{\theta}^D$ can be calculated from the eigenvalues of $\Lambda^C=\rho_\theta^D\cdot\left(\sigma_y\otimes\sigma_y\cdot(\rho_{\theta}^D)^*\cdot\sigma_y\otimes\sigma_y\right)$, where the asterisk `$*$' stands for complex conjugation. 
For the state $\rho_{\theta}^D$, the eigenvalues of $\Lambda^C$ in decreasing order become \begin{eqnarray} \lambda_1 &=& (1-D)^2\sin^2\theta\left(\sqrt{\cos^2\theta + D^2\sin^2\theta} + \cos\theta \right)^2, \nonumber \\ \lambda_2 &=& \lambda_3= (1-D)^2D^2\sin^4\theta, \nonumber \\ \lambda_4 &=& (1-D)^2\sin^2\theta\left(\sqrt{\cos^2\theta + D^2\sin^2\theta} - \cos\theta \right)^2. \end{eqnarray} Using the above eigenvalues, the concurrence of the state $\rho_{\theta}^D$ can be calculated as \begin{eqnarray} C(\theta,D) &=& \max\left[0,\,\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4}\right] \nonumber \\ &=& \max\left[0,\,2 (1-D)\sin\theta(\cos\theta - D\sin\theta)\right]. \end{eqnarray} \section{Calculation of unsteerability} \label{Apdx_Unst} To derive the sufficient criterion for the existence of a local hidden state (LHS) model for the state $\rho_{\theta}^D$, we need to transform it into the canonical form $\varrho=\frac{1}{4}\left(\mathbb{I} + \vec{a}\cdot\vec{\sigma} +\sum_{i=x,y,z} T_i\sigma_i\otimes \sigma_i\right)$, where $\vec{a}=(a_x,a_y,a_z)$ is Alice's local vector and $\{T_x,T_y,T_z\}$ forms the correlation matrix. $\rho_{\theta}^D$ can be converted to the above canonical form, up to normalization, with the help of the following transformation \begin{eqnarray} \varrho_{\theta} = \mathbb{I}\otimes \left( \left(\rho_{\theta}^D\right)^B \right)^{-1/2} \cdot \rho_{\theta}^D \cdot \mathbb{I}\otimes\left(\left(\rho_{\theta}^D\right)^B\right)^{-1/2}, \end{eqnarray} where $\left(\rho_{\theta}^D\right)^B=\Tr_A[\rho_{\theta}^D]$. Then the sufficient criterion for unsteerability, \begin{eqnarray} T_U(\theta,D)=\max\left[a_z^2+ 2|T_z|, 2|T_x|\right] \leq 1 \end{eqnarray} becomes \begin{eqnarray} \max\left[\alpha, \frac{2 \cos\theta \sqrt{1-D}}{\sqrt{\gamma}}\right] \leq 1, \end{eqnarray} where $\gamma=\cos^2\theta+D\sin^2\theta$ and $\alpha=\frac{D^2 (\gamma - (1- D) \sin^2\theta)^2 +2 (1-D) \gamma}{\gamma^2}$. 
\section{Calculation of measurement settings for Bell nonlocality} \label{Apdx_Bell} The Horodecki criterion provides the maximum Bell violation of a given state in $2\otimes 2$ dimensional systems~\cite{Horo_Cri, Horo_Cri_2}. The measurement settings for Alice and Bob corresponding to the Bell violation predicted by the Horodecki criterion can be calculated with the help of Refs.~\cite{PE_BN_1,PE_BN_2,PE_BN_3}. To obtain Alice's and Bob's measurement settings corresponding to the Bell violation $S(\theta=\pi/4,\,D)$ of Eq.~(\ref{BI_rho}), let us consider the following two scenarios. In the first scenario, Alice measures either the observable $\mathcal{A}_1=\sigma_x$ or $\mathcal{A}_2=\sigma_y$ on her system $A$. Bob's choice of observables is \begin{eqnarray} \mathcal{B}_1 &=&\sigma_x\cos\varphi_1 +\sigma_y\sin\varphi_1, \nonumber \\ \mathcal{B}_2&=&\sigma_x\cos\varphi_2 +\sigma_y\sin\varphi_2. \label{BN_MS_rho} \end{eqnarray} Then, the Bell parameter $S$ becomes \begin{eqnarray} S_1(\theta=\pi/4,\,D) &=& (1-D) \left(\cos\varphi_1 + \cos\varphi_2 - \sin\varphi_1 + \sin\varphi_2\right). \end{eqnarray} The maximum value of $S_1(\theta=\pi/4,\,D)$ is found for $\varphi_1=7\pi/4$ and $\varphi_2=\pi/4$. Note that $S_1(\theta=\pi/4,\,D)=S(\theta=\pi/4,\,D)$ for $0\leq D \leq 0.5$. In the second scenario, Alice chooses observables from the set $\{\mathcal{A}_1=\sigma_x,\mathcal{A}_2=\sigma_z\}$ and Bob's set is given by \begin{eqnarray} \mathcal{B}_1 &=&\sigma_z\cos\chi_1 +\sigma_x\sin\chi_1, \nonumber \\ \mathcal{B}_2&=&\sigma_z\cos\chi_2 +\sigma_x\sin\chi_2. \end{eqnarray} In this case, the Bell parameter $S$ is given by \begin{eqnarray} S_2(\theta=\pi/4,\,D) &=& (1-2(1-D)D)\cos\chi_1 - (1-2(1-D)D)\cos\chi_2 + (1-D)(\sin\chi_1+\sin\chi_2), \end{eqnarray} which becomes maximum for $\chi_1=\arctan\left[(1-D)/(1-2(1-D)D)\right]$ and $\chi_2=\pi + \arctan\left[-(1-D)/(1-2(1-D)D)\right]$. In this scenario, $S_2(\theta=\pi/4,\,D)=S(\theta=\pi/4,\,D)$ for $0.5\leq D\leq 1$. 
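The stated optimum of the first scenario can be checked by a brute-force scan over Bob's angles; a sketch (Python with numpy assumed; not part of the original analysis):

```python
import numpy as np

def S1(D, p1, p2):
    # Bell parameter of the first scenario as a function of Bob's angles.
    return (1 - D) * (np.cos(p1) + np.cos(p2) - np.sin(p1) + np.sin(p2))

# Scan (phi1, phi2) on a 0.5-degree grid and compare the maximum with the
# analytic optimum at phi1 = 7*pi/4, phi2 = pi/4, where S1 = 2*sqrt(2)*(1-D).
D = 0.2
grid = np.linspace(0, 2 * np.pi, 721)
P1, P2 = np.meshgrid(grid, grid)
best = S1(D, P1, P2).max()
```

The scan confirms that no other angle pair exceeds $2\sqrt{2}(1-D)$, consistent with the quoted optimum.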
Therefore, when the decoherence parameter lies in the range $0\leq D\leq 0.5$, Alice and Bob choose the first scenario; otherwise, they choose the second one. \section{Calculation of measurement settings for steerability} \label{Apdx_St} In order to test the steering inequality with $16$ measurement settings on each subsystem, Bob chooses spin measurements along the vertex-to-vertex axes of a dodecahedron and an icosahedron, while Alice's measurement settings are calculated by maximizing $T_{16}$. Here, the direction of Bob's $i$th spin measurement and the direction of Alice's corresponding spin measurement are given by $\mathcal{B}_i\in\{n_x^i,n_y^i,n_z^i\}$ and $\mathcal{A}_i\in\{\sin\alpha_i \cos\beta_i,\sin\alpha_i \sin\beta_i, \cos\alpha_i\}$, respectively. The above measurement settings $\{\mathcal{A}_i,\mathcal{B}_i\}$ maximize the expectation value $\langle\mathcal{A}\mathcal{B}\rangle$ for the shared state $\rho_\theta^D$. The $16$ sets of measurement settings are given below: \begin{eqnarray} \{\mathcal{A}_1,\mathcal{B}_1\} &\equiv & \{\{ \frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}} \},\{\alpha_1=\arctan\left[-\frac{\gamma_1}{\delta_1}\right],\beta_1=\arctan[-1] \}\}, \nonumber\\ \{\mathcal{A}_2,\mathcal{B}_2\} &\equiv & \{\{ - \frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}} \},\{ \alpha_2=\alpha_1 ,\beta_2=\frac{5\pi}{4} \}\}, \nonumber\\ \{\mathcal{A}_3,\mathcal{B}_3\} &\equiv & \{\{ \frac{1}{\sqrt{3}},-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}} \},\{ \alpha_3=\alpha_1 ,\beta_3=\frac{\pi}{4} \}\}, \nonumber\\ \{\mathcal{A}_4,\mathcal{B}_4\} &\equiv & \{\{ \frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},- \frac{1}{\sqrt{3}} \},\{ \alpha_4=\pi + \arctan\left[-\frac{\gamma_1}{\delta_4}\right],\beta_4=-\frac{\pi}{4} \}\}, \nonumber\\ \{\mathcal{A}_5,\mathcal{B}_5\} &\equiv & \{\{ 0,\frac{b}{a}, ab \},\{ \alpha_5=\arctan\left[-\frac{\gamma_5}{\delta_5}\right], \beta_5=\frac{\pi}{2} \}\}, \nonumber\\ \{\mathcal{A}_6,\mathcal{B}_6\} &\equiv & \{\{ 0,-\frac{b}{a}, ab \},\{ 
\alpha_6=\alpha_5, \beta_6=\frac{3\pi}{2} \}\}, \nonumber\\ \{\mathcal{A}_7,\mathcal{B}_7\} &\equiv & \{\{ \frac{b}{a}, ab,0 \},\{ \alpha_7=\frac{\pi}{2},\beta_7=\arctan\left[-\frac{3+\sqrt{5}}{2}\right] \}\}, \nonumber\\ \{\mathcal{A}_8,\mathcal{B}_8\} &\equiv & \{\{ -\frac{b}{a}, ab,0 \},\{ \alpha_8=\frac{\pi}{2},\beta_8=\pi+\arctan\left[\frac{3+\sqrt{5}}{2}\right] \}\}, \nonumber\\ \{\mathcal{A}_9,\mathcal{B}_9\} &\equiv & \{\{ ab,0,\frac{b}{a} \},\{ \alpha_9=\arctan\left[-\frac{\gamma_9}{\delta_4}\right],\beta_9=0 \}\}, \nonumber\\ \{\mathcal{A}_{10},\mathcal{B}_{10}\} &\equiv & \{\{ ab,0,-\frac{b}{a} \},\{ \alpha_{10}=\pi + \arctan\left[\frac{\gamma_9}{\delta_4}\right], \beta_{10}=0 \}\}, \nonumber\\ \{\mathcal{A}_{11},\mathcal{B}_{11}\} &\equiv & \{\{ 0, \frac{c}{d}, -\frac{1}{d} \},\{ \alpha_{11}=\pi+\arctan\left[\frac{\gamma_{11}}{\delta_{4}}\right], \beta_{11}=\frac{3\pi}{2} \}\}, \nonumber\\ \{\mathcal{A}_{12},\mathcal{B}_{12}\} &\equiv & \{\{ 0, \frac{c}{d}, \frac{1}{d} \},\{ \alpha_{12}=\arctan\left[\frac{\gamma_{11}}{\delta_{4}}\right], \beta_{12}=\frac{3\pi}{2} \}\}, \nonumber\\ \{\mathcal{A}_{13},\mathcal{B}_{13}\} &\equiv & \{\{ \frac{c}{d}, \frac{1}{d}, 0 \},\{ \alpha_{13}= \frac{\pi}{2}, \beta_{13}=\arctan\left[-\frac{2}{1+\sqrt{5}}\right] \}\}, \nonumber\\ \{\mathcal{A}_{14},\mathcal{B}_{14}\} &\equiv & \{\{ -\frac{c}{d}, \frac{1}{d}, 0 \},\{ \alpha_{14}= \frac{\pi}{2}, \beta_{14}=\pi + \arctan\left[\frac{2}{1+\sqrt{5}}\right] \}\}, \nonumber\\ \{\mathcal{A}_{15},\mathcal{B}_{15}\} &\equiv & \{\{ \frac{1}{d}, 0, \frac{c}{d} \},\{ \alpha_{15}=\arctan\left[-\frac{\gamma_{5}}{\delta_{15}}\right], \beta_{15}=0 \}\}, \nonumber\\ \{\mathcal{A}_{16},\mathcal{B}_{16}\} &\equiv & \{\{ - \frac{1}{d}, 0, \frac{c}{d} \},\{ \alpha_{16}=\alpha_{15}, \beta_{16}=\pi \}\}, \end{eqnarray} where \begin{eqnarray} \gamma_1 & =& \sqrt{2} (1-D) \sin2\theta, ~~~~ \delta_1= 4 D (1-D) \sin^2\theta - 1, \nonumber\\ \delta_4 &=& \cos^2\theta + (1-2D)^2 \sin^2\theta, 
\nonumber\\ \gamma_5 & =& 2 (1-D)\sin2\theta, ~~~~\delta_5=(3+\sqrt{5}) (2 D -1-2 D^2 - 2 D (1-D)\cos2\theta ),\nonumber\\ \gamma_9 & =& - (3+\sqrt{5}) (1-D) \sin\theta \cos\theta, \nonumber\\ \gamma_{11} & =& - (1+\sqrt{5}) (1-D) \sin\theta\cos\theta, \nonumber \\ \delta_{15} &=& (1+\sqrt{5}) (2 D -1-2 D^2 - 2 D (1-D)\cos2\theta ), \nonumber \\ a & =& c = \frac{1+\sqrt{5}}{2}, ~~~ b=\frac{1}{\sqrt{3}},~~~~ d=\sqrt{\frac{1}{2}\left(5+\sqrt{5}\right)}. \nonumber \\ \end{eqnarray} \end{document}
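As a geometric sanity check (illustrative code, not from the paper; vertex coordinates in the standard golden-ratio convention), Bob's $16$ directions arise as the $10$ vertex axes of a regular dodecahedron together with the $6$ vertex axes of a regular icosahedron, all of unit length:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# Dodecahedron vertices (circumradius sqrt(3)) and icosahedron vertices
# (circumradius sqrt(1 + phi^2)) in standard golden-ratio coordinates.
dodeca = ([(x, y, z) for x in (1, -1) for y in (1, -1) for z in (1, -1)]
          + [(0, y / phi, z * phi) for y in (1, -1) for z in (1, -1)]
          + [(x / phi, y * phi, 0) for x in (1, -1) for y in (1, -1)]
          + [(x * phi, 0, z / phi) for x in (1, -1) for z in (1, -1)])
icosa = ([(0, y, z * phi) for y in (1, -1) for z in (1, -1)]
         + [(x, y * phi, 0) for x in (1, -1) for y in (1, -1)]
         + [(x * phi, 0, z) for x in (1, -1) for z in (1, -1)])

def axes(vertices):
    """Normalize vertices and keep one representative per antipodal pair."""
    out = set()
    for v in vertices:
        n = math.sqrt(sum(c * c for c in v))
        u = tuple(round(c / n, 9) for c in v)
        if tuple(-c for c in u) not in out:
            out.add(u)
    return out

directions = axes(dodeca) | axes(icosa)
assert len(directions) == 16                                  # 10 + 6 axes
assert all(abs(sum(c * c for c in u) - 1) < 1e-8 for u in directions)
```

Up to sign and ordering, these unit vectors match the $\mathcal{B}_i$ listed above (e.g. $(1,1,1)/\sqrt{3}$, $(0,\,1/(\varphi\sqrt{3}),\,\varphi/\sqrt{3})$ and $(0,\,c/d,\,1/d)$).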
Survival in a quasi-death process
Erik A. van Doorn, Philip K. Pollett

We consider a Markov chain in continuous time with an absorbing coffin state and a finite set $S$ of transient states. When $S$ is irreducible the limiting distribution of the chain as $t \to\infty,$ conditional on survival up to time $t,$ is known to equal the (unique) quasi-stationary distribution of the chain. We address the problem of generalizing this result to a setting in which $S$ may be reducible, and show that it remains valid if the eigenvalue with maximal real part of the generator of the (sub)Markov chain on $S$ has geometric (but not, necessarily, algebraic) multiplicity one. The result is then applied to pure death processes and, more generally, to quasi-death processes. We also show that the result holds true even when the geometric multiplicity is larger than one, provided the irreducible subsets of $S$ satisfy an accessibility constraint. A key role in the analysis is played by some classic results on $M$-matrices.

Keywords: MSC-60J27; M-matrix; quasi-stationary distribution; survival-time distribution; absorbing Markov chain; death process; migration process; limiting conditional distribution

van Doorn, E. A., & Pollett, P. K. (2008). Survival in a quasi-death process. Linear Algebra and its Applications, 429(4), 776-791. https://doi.org/10.1016/j.laa.2008.04.004
Vietnamese numerals

Historically Vietnamese has two sets of numbers: one is etymologically native Vietnamese; the other uses Sino-Vietnamese vocabulary. In the modern language the native Vietnamese vocabulary is used for both everyday counting and mathematical purposes. The Sino-Vietnamese vocabulary is used only in fixed expressions or in Sino-Vietnamese words, in a similar way that Latin and Greek numerals are used in modern English (e.g., the bi- prefix in bicycle).

For numbers up to one million, native Vietnamese terms are used most often, whilst words of mixed Sino-Vietnamese origin and native Vietnamese words are used for units of one million or above.

Concept

For non-official purposes prior to the 20th century, Vietnamese had a writing system known as Hán-Nôm. 
Sino-Vietnamese numbers were written in Chữ Hán and native vocabulary was written in Chữ Nôm. Hence, there are two concurrent systems in Vietnamese nowadays in the romanized script, one for native Vietnamese and one for Sino-Vietnamese. In the modern Vietnamese writing system, numbers are written as Arabic numerals or in the romanized script Chữ Quốc ngữ (một, hai, ba); each of these numbers previously had a Chữ Nôm character. Less common for numbers under one million are the numbers of Sino-Vietnamese origin (nhất [1], nhị [2], tam [3]), using Chữ Hán (classical Chinese characters). Chữ Hán and Chữ Nôm have all but become obsolete in the Vietnamese language, the Latin-style reading, writing, and pronunciation of native Vietnamese and Sino-Vietnamese having become widespread instead after France occupied Vietnam. Chữ Hán can still be seen in traditional temples, in traditional literature, and on cultural artefacts. The Hán-Nôm Institute resides in Hanoi, Vietnam.

Basic figures

The following table is an overview of the basic Vietnamese numeric figures, provided in both the native and Sino-Vietnamese counting systems. The form highlighted in green is the most widely used for all purposes, whilst the ones highlighted in blue are seen as archaic but may still be in use. There are slight differences between the Hanoi and Saigon dialects of Vietnamese; readings between each are differentiated below.

Number | Native Vietnamese (Chữ quốc ngữ / Chữ Nôm) | Sino-Vietnamese (Chữ quốc ngữ / Hán tự) | Notes
0 | không / 空 | linh / 空 • 〇(零) | The foreign-language borrowed word "zêrô (zêro, dê-rô)" is often used in physics-related publications, or colloquially.
1 | một / 𠬠 | nhất / 一(壹) |
2 | hai / 𠄩 | nhị / 二(貳) |
3 | ba / 𠀧 | tam / 三(叄) |
4 | bốn / 𦊚 | tứ / 四(肆) | In the ordinal number system, the Sino-Vietnamese "tư/四" is more systematic; as the digit 4 appears after the number 20 when counting upwards, the Sino-Vietnamese "tư/四" is more commonly used.
5 | năm / 𠄼 | ngũ / 五(伍) | In numbers above ten that end in five (such as 115, 25, 1055), five is alternatively pronounced as "lăm/𠄻" to avoid possible confusion with "năm/𢆥", a homonym of năm, meaning "year". Exceptions to this rule are numbers ending in 05 (such as 605, 9405).
6 | sáu / 𦒹 | lục / 六(陸) |
7 | bảy / 𦉱 | thất / 七(柒) | In some Vietnamese dialects, it is also read as "bẩy".
8 | tám / 𠔭 | bát / 八(捌) |
9 | chín / 𠃩 | cửu / 九(玖) |
10 | mười • một chục / 𨒒 | thập / 十(拾) | Chục is used colloquially. "Ten eggs" may be called một chục quả trứng rather than mười quả trứng. It is also used in compounds like mươi instead of mười (e.g., hai mươi/chục "twenty").
100 | trăm • một trăm / 𤾓 • 𠬠𤾓 | bách (bá) / 百(佰) | The Sino-Vietnamese "bách/百" is commonly used as a morpheme (in compound words), and is rarely used in the field of mathematics as a digit. Example: "bách phát bách trúng/百發百中".
1,000 | nghìn (ngàn) • một nghìn (ngàn) / 𠦳 • 𠬠𠦳 | thiên / 千(仟) | The Sino-Vietnamese "thiên/千" is commonly used as a morpheme but rarely in a mathematical sense, except in counting bricks. Example: "thiên kim/千金". "Nghìn" is the standard word in Northern Vietnam, whilst "ngàn" is the word used in the South.
10,000 | mười nghìn (ngàn) / 𨒒𠦳 | vạn • một vạn / 萬 • 𠬠萬 | The "một/𠬠" within "một vạn/𠬠萬" is a native Vietnamese (intrinsic term) morpheme. This was officially used in Vietnamese in the past; the unit has become less common after 1945, but is still widely used in counting bricks. The borrowed native pronunciation muôn for 萬 is still used in slogans such as "muôn năm" (ten thousand years/endless).
100,000 | trăm nghìn (ngàn) • một trăm nghìn (ngàn) / 𤾓𠦳 • 𠬠𤾓𠦳 | ức • một ức • mười vạn[1] / 億 • 𠬠億 • 𨒒萬 | The "mười/𨒒" and "một/𠬠" within "mười vạn/𨒒萬" and "một ức/𠬠億" are native Vietnamese (intrinsic term) morphemes.
1,000,000 | (none) | triệu • một triệu • một trăm vạn[2] / 兆 • 𠬠兆 • 𠬠𤾓萬 | The "một/𠬠" and "trăm/𤾓" within "một triệu/𠬠兆" and "một trăm vạn/𠬠𤾓萬" are native Vietnamese (intrinsic term) morphemes.
10,000,000 | (mixed usage of Sino-Vietnamese and native Vietnamese systems) | mười triệu / 𨒒兆 | The "mười/𨒒" within "mười triệu/𨒒兆" is a native Vietnamese (intrinsic term) morpheme.
100,000,000 | (mixed usage of Sino-Vietnamese and native Vietnamese systems) | trăm triệu / 𤾓兆 | The "trăm/𤾓" within "trăm triệu/𤾓兆" is a native Vietnamese (intrinsic term) morpheme.
1,000,000,000 | (none) | tỷ / 秭[3] |
10^12 | (mixed usage of Sino-Vietnamese and native Vietnamese systems) | nghìn (ngàn) tỷ / 𠦳秭 |
10^15 | (none) | triệu tỷ / 兆秭 |
10^18 | (none) | tỷ tỷ / 秭秭 |

Some other features of Vietnamese numerals include the following:
• Outside of fixed Sino-Vietnamese expressions, Sino-Vietnamese words are usually used in combination with native Vietnamese words. For instance, "mười triệu" combines native "mười" and Sino-Vietnamese "triệu".
• Modern Vietnamese separates place values in thousands instead of myriads. For example, "123123123" is recorded in Vietnamese as "một trăm hai mươi ba triệu một trăm hai mươi ba nghìn (ngàn) một trăm hai mươi ba", or '123 million, 123 thousand and 123'.[4] Meanwhile, in Chinese, Japanese and Korean, the same number is rendered as "1億2312萬3123" (1 hundred-million, 2312 ten-thousand and 3123).
• Sino-Vietnamese numbers are not in frequent use in modern Vietnamese. Sino-Vietnamese numbers such as "vạn/萬" 'ten thousand', "ức/億" 'hundred-thousand' and "triệu/兆" 'million' are used for figures exceeding one thousand, but with the exception of "triệu" are becoming less commonly used. Number values for these words are used for each numeral increasing tenfold in digit value, 億 being the number for 10^5, 兆 for 10^6, et cetera. However, Triệu in Vietnamese and 兆 in Modern Chinese now have different values.
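The composition rules collected above (mốt, tư, lăm, mươi, linh) are regular enough to sketch in code. The following Python function is an illustrative implementation for 0-999 only, using the Northern forms ("linh" for a skipped tens digit) and the more common readings; it is not an official transliteration tool:

```python
# Illustrative sketch: spell out 0-999 in Vietnamese, Northern forms.
UNITS = ["không", "một", "hai", "ba", "bốn", "năm",
         "sáu", "bảy", "tám", "chín"]

def vi(n):
    if n < 10:
        return UNITS[n]
    if n < 20:                        # 10-19: "mười" + unit, 5 -> "lăm"
        u = n % 10
        if u == 0:
            return "mười"
        return "mười " + ("lăm" if u == 5 else UNITS[u])
    if n < 100:                       # 20-99: tens + "mươi" + adjusted unit
        t, u = divmod(n, 10)
        s = UNITS[t] + " mươi"
        special = {1: "mốt", 4: "tư", 5: "lăm"}
        return s if u == 0 else s + " " + special.get(u, UNITS[u])
    h, r = divmod(n, 100)             # 100-999: hundreds + "trăm"
    s = UNITS[h] + " trăm"
    if r == 0:
        return s
    if r < 10:                        # zero tens digit -> "linh" + plain unit
        return s + " linh " + UNITS[r]
    return s + " " + vi(r)
```

For instance, vi(21) yields "hai mươi mốt", vi(101) yields "một trăm linh một" and vi(605) yields "sáu trăm linh năm", matching the table rows and the 05-ending exception above.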
Other figures

Number | Chữ quốc ngữ | Hán-Nôm | Notes
11 | mười một | 𨒒𠬠 |
12 | mười hai • một tá | 𨒒𠄩 • 𠬠打 | "một tá/𠬠打" is often used within mathematics-related occasions, in which "tá" represents the foreign loanword "dozen".
14 | mười bốn • mười tư | 𨒒𦊚 • 𨒒四 | "mười tư/𨒒四" is often used within literature-related occasions, in which "tư/四" forms part of the Sino-Vietnamese vocabulary.
15 | mười lăm | 𨒒𠄻 | Here, five is pronounced "lăm/𠄻", or also "nhăm/𠄶" by some speakers in the north.
19 | mười chín | 𨒒𠃩 |
20 | hai mươi • hai chục | 𠄩𨒒 • 𠄩𨔿 |
21 | hai mươi mốt | 𠄩𨒒𠬠 | For numbers which include the digit 1 from 21 to 91, the number 1 is pronounced "mốt".
24 | hai mươi tư | 𠄩𨒒四 | When the digit 4 appears in numbers after 20 as the last digit of a 3-digit group, it is more common to use "tư/四".
25 | hai mươi lăm | 𠄩𨒒𠄻 | Here, five is pronounced "lăm".
50 | năm mươi • năm chục | 𠄼𨒒 • 𠄼𨔿 | When "𨒒" (10) appears after the number 20, the pronunciation changes to "mươi".
101 | một trăm linh một • một trăm lẻ một | 𠬠𤾓零𠬠 • 𠬠𤾓𥘶𠬠 | "Một trăm linh một/𠬠𤾓零𠬠" is the Northern form, where "linh/零" forms part of the Sino-Vietnamese vocabulary; "một trăm lẻ một/𠬠𤾓𥘶𠬠" is commonly used in the Southern and Central dialect groups of Vietnam.
1001 | một nghìn (ngàn) không trăm linh một • một nghìn (ngàn) không trăm lẻ một | 𠬠𠦳空𤾓零𠬠 • 𠬠𠦳空𤾓𥘶𠬠 | When the hundreds digit is occupied by a zero, this is expressed using "không trăm/空𤾓".
10055 | mười nghìn (ngàn) không trăm năm mươi lăm | 𨒒𠦳空𤾓𠄼𨒒𠄻 |

• When the number 1 appears after 20 in the unit digit, the pronunciation changes to "mốt".
• When the number 4 appears after 20 in the unit digit, it is more common to use Sino-Vietnamese "tư/四".
• When the number 5 appears after 10 in the unit digit, the pronunciation changes to "lăm/𠄻".
• When "mười" appears after 20, the pronunciation changes to "mươi".

Ordinal numbers

Vietnamese ordinal numbers are generally preceded by the prefix "thứ-", a Sino-Vietnamese word which corresponds to "次-". 
For the ordinal numbers of one and four, the Sino-Vietnamese readings "nhất/一" and "tư/四" are more commonly used; two is occasionally rendered using the Sino-Vietnamese "nhị/二". In all other cases, the native Vietnamese number is used.

In formal cases, the ordinal number with the structure "đệ (第) + Sino-Vietnamese number" is used, especially in naming the generations of monarchs, an example being Nữ vương Elizabeth đệ nhị/女王 Elizabeth 第二 (Queen Elizabeth II).

Ordinal number | Chữ quốc ngữ | Hán-Nôm
1st | thứ nhất | 次一
2nd | thứ hai • thứ nhì | 次𠄩 • 次二
3rd | thứ ba | 次𠀧
4th | thứ tư | 次四
5th | thứ năm | 次𠄼
nth | thứ "n" | 次「n」

Footnotes
1. Tu dien Han Viet Thieu Chuu: 「(1): ức, mười vạn là một ức.」
2. Tu dien Han Viet Thieu Chuu: 「(3): triệu, một trăm vạn.」
3. Hán Việt Từ Điển Trích Dẫn 漢越辭典摘引: 「Một ngàn lần một triệu là một tỉ 秭 (*). Tức là 1.000.000.000. § Ghi chú: Ngày xưa, mười vạn 萬 là một ức 億, một vạn ức là một tỉ 秭.」
4. Triệu means one million in Vietnamese, but the Chinese number that is the source of the Vietnamese word, "兆" (Mandarin zhào), is officially rendered as 10^12 in Taiwan, and commonly designated as 10^6 in the People's Republic of China (see various scale systems).

See also
• Japanese numerals, Korean numerals, Chinese numerals
\begin{document} \lefttitle{Cambridge Author} \jnlPage{1}{8} \jnlDoiYr{2021} \doival{10.1017/xxxxx} \title[BIM using CLP]{Building Information Modeling Using\\ Constraint Logic Programming \thanks{Work partially supported by EIT Digital, EU H2020 project BIM4EEB (Grant 820660), MICINN projects RTI2018-095390-B-C33 InEDGEMobility (MCIU/AEI/FEDER, UE), PID2019-108528RB-C21 ProCode, Comunidad de Madrid project S2018/TCS-4339 BLOQUES-CM co-funded by EIE Funds of the European Union, US NSF (Grants IIS 1718945, IIS 1910131, IIP 1916206), DoD, and Amazon.}} \begin{authgrp} \author{Joaqu{\'\i}n Arias$^1$ Seppo T\"orm\"a$^2$ Manuel Carro$^{3,4}$ Gopal Gupta$^5$} \affiliation{$^1$CETINIA, Universidad Rey Juan Carlos, Madrid, Spain \hspace{3em} $^2$ VisuaLynk Oy, Espoo, Finland\\ $^3$Universidad Politécnica de Madrid, Spain \hspace{3em} $^4$IMDEA Software Institute, Pozuelo, Spain\\ $^5$University of Texas at Dallas, Richardson, USA} \end{authgrp} \history{\sub{xx xx xxxx;} \rev{xx xx xxxx;} \acc{xx xx xxxx}} \maketitle \begin{abstract} Building Information Modeling (BIM) produces three-dimensional object-oriented models of buildings combining the geometrical information with a wide range of properties about materials, products, safety, to name just a few. BIM is slowly but inevitably revolutionizing the architecture, engineering, and construction (AEC) industry. Buildings need to be compliant with regulations about stability, safety, and environmental impact. Manual compliance checking is tedious and error-prone, and amending flaws discovered only at construction time causes huge additional costs and delays. Several tools can check BIM models for conformance with rules/guidelines. For example, Singapore’s CORENET e-Submission System checks fire safety. But since the current BIM exchange format only contains basic information about building objects, a separate, ad-hoc model pre-processing is required to determine, e.g., evacuation routes. 
Moreover, they face difficulties in adapting existing built-in rules and/or adding new ones (to cater for building regulations, that can vary not only among countries but also among parts of the same city), if at all possible. We propose the use of logic-based executable formalisms (CLP and Constraint ASP) to couple BIM models with advanced knowledge representation and reasoning capabilities. Previous experience shows that such formalisms can be used to uniformly capture and reason with knowledge (including ambiguity) in a large variety of domains. Additionally, incorporating checking within design tools makes it possible to ensure that models are rule-compliant at every step. This also prevents erroneous designs from having to be (partially) redone, which is also costly and burdensome. To validate our proposal, we implemented a preliminary reasoner under CLP(Q/R) and ASP with constraints and evaluated it with several BIM models. Under consideration for acceptance in Theory and Practice of Logic Programming (TPLP). \end{abstract} \begin{keywords} Building Information Modelling, Constraint, Commonsense Reasoning, Answer Set Programming \end{keywords} \vspace*{0em} \section{Introduction} Building Information Modeling is a digital technology that is changing the Architecture, Engineering, and Construction (AEC) industry. It combines the three-dimensional geometry with non-geometrical information of a building in an object-oriented model that can be shared among actors over the construction lifecycle. To facilitate the exchange of BIM models, \cite{IFC} has developed Industry Foundation Classes (IFC), an open, vendor-neutral exchange format. Since 2016, the UK Government has required the Level 2 of BIM maturity for any public construction project, where each discipline generates specific models following the BIM standard. 
Once these specific models are built, an Automated Code Compliance Checking tool, e.g., BIM's Solibri Model Checker (SMC), provides basic architectural checks, to verify the completeness of information and detect the intersection of building components, among other things. Additionally, automated tools check models in IFC format for conformance with specifications, codes, and/or guidelines. For example, the CORENET BIM e-Submission by \cite{Corenet} can be used to check fire safety. However, since the IFC format only represents basic building objects and static information of their properties, pre-processing of the model is required to, e.g., determine evacuation routes. Moreover, these tools offer limited scope for customization or flexibility and it is not easy to modify the implemented rules and/or create new ones. The domain of construction modeling needs several capabilities: geometrical reasoning (including arithmetical/mathematical capabilities and qualitative position knowledge), reasoning about symbolic/conceptual knowledge, and reasoning in the presence of vague concepts and/or incomplete information (e.g., whether or not the outdoor space is safe depends on details that are not yet known at this level of the design). In addition, since part of the reasoning involves regulatory codes and standards, a certain degree of ambiguity and discretionary decisions are expected. Interestingly, these different types of reasoning are not layered: a model cannot be validated by first checking structural integrity (i.e., that walls do not overlap or that columns are not placed where a door is expected to be placed), then positional reasoning, and then legal compliance. Legal requirements in this domain include restrictions on sizes, areas, distances, relative positions, etc. 
Therefore, a formalism suitable for checking (and, if executable, for generating alternative models) has to be able to seamlessly capture (and reason with) all of these types of information simultaneously. Moreover, since regulations differ not only between countries, but also among states/regions within a country, they must be easy to write and, since they also change in time, to modify. We believe that a formalism based on logic programming can meet many, if not all, of the above requirements: a successful answer to a query can determine that a model meets all the requirements. Different answers (or models) to a query may give alternative designs that satisfy the requirements. There exist query languages for BIM, such as BimSPARQL, by \cite{zhang2018bimsparql}, and several logic-based proposals, e.g., by \cite{pauwels2011a}, \cite{zhang2013a}, \cite{solihin2015simplified}, \cite{lee2016a}, and \cite{li2020non}, that validate our approach because they show that minimal proof-of-concept tools have improved reasoning capabilities w.r.t.\ commercial off-the-shelf BIM software. However, they all report limitations in the representation of geometrical information and/or in the flexibility of the proposal to adapt the code and/or the evaluation engine for different scenarios. We propose to use tools integrating Constraint Logic Programming with ASP to model dynamic information and restrictions in BIM models and to enable the use of logic-based methodologies such as model refinement. The main contributions of this paper are: \begin{itemize} \item A library, based on Constraint Answer Set Programming (CASP), that allows unified representation of geometrical and non-geometrical information. \item The prototype of a preliminary 3D reasoner under Prolog with CLP(Q/R) that we evaluate with several BIM models. \item The outline of an alternative implementation of this spatial reasoner under CASP, using s(CASP), by~\cite{scasp-iclp2018}, a goal-directed implementation of CASP. 
\item Evidence for the benefits of using s(CASP) in BIM model evaluation: (i) it has the \emph{relevance} property, (ii) it can generate justifications for negative queries, and (iii) it makes representing and reasoning with ambiguities easier. \end{itemize} The ultimate goal of this work is to shift from BIM verification to BIM refinement and to facilitate the implementation of new specifications, construction standards, etc. \vspace*{0em} \section{Background} This section briefly describes (i) Building Information Modeling (BIM), (ii) the industry foundation classes (IFC), a standard for openBIM data exchange, and (iii) s(CASP), a goal-directed implementation of constraint answer set programming. \vspace*{0em} \subsection{BIM + IFC} Building information modeling (BIM) has evolved from object-based parametric 3D modeling. Combining geometrical information with other properties (costs, materials, process, etc.) makes it possible to have a range of new functionalities, including cost estimations, quantity takeoffs, or energy analysis. The goal of BIM is to achieve a consistent and continuous use of digital information throughout the entire life cycle of a facility, including its design, construction, and operation. BIM is based on a digital model and intends to raise productivity while lowering error rates, as mistakes can be detected and resolved before they become serious problems during construction and/or operation. The most important advantages lie in the direct use of analysis and simulation tools on these models and the seamless transfer of data for the operation phase. Today, there are numerous BIM authoring tools, such as Revit, ArchiCAD, Tekla Structures, or Allplan, that provide the basics for realizing BIM-based construction projects. \cite{IFC} has developed BIM interoperability technologies, the most important of which is IFC (Industry Foundation Classes), a common data model for representing buildings. 
IFC is standardized as ISO 16739 to improve BIM data interoperability between heterogeneous BIM authoring tools and applications in their disciplines. The IFC schema is an extensive data model that logically encodes (i) the identity and semantics (name, identifier, type), (ii) the characteristics or attributes (material, color, thermal properties), and (iii) the relationships (including locations, connections, and ownership) of (a) objects (doors, beams), (b) abstract concepts (performance, costing), (c) processes (installation, operations), and (d) people (owners, designers, contractors, suppliers). For example, the IFC label \emph{IfcBeam} is used to identify the beams (the part of the structure of a building that supports heavy loads). IFC allows describing how a facility is designed, how it can be constructed, and how its systems will function. It defines building components, manufactured products, mechanical/electrical systems, as well as more abstract models for structural analysis, energy analysis, cost breakdowns, work schedules, etc. IFC has been in development since 1994 and now specifies close to one thousand different entity types. IFC 4.0.1.2 was approved as ISO standard 16739 in 2017. The specification of IFC5 is currently in progress. \vspace*{0em} \subsection{s(CASP)} s(CASP), presented by \cite{scasp-iclp2018}, extends the expressiveness of Answer Set Programming systems, based on the stable model semantics by \cite{gelfond88:stable_models}, by including predicates, constraints among non-ground variables, uninterpreted functions, and, most importantly, a top-down, query-driven execution strategy. These features make it possible to return answers with non-ground variables (possibly including constraints among them) and to compute partial models by returning only the fragment of a stable model that is necessary to support the answer to a given query. 
In s(CASP), thanks to constructive negation, \lstinline[style=MyInline]|not p(X)| can return bindings for \lstinline[style=MyInline]{X} for which the call \lstinline[style=MyInline]{p(X)} would have failed. Additionally, thanks to the interface of s(CASP) with constraint solvers, sound non-monotonic reasoning with constraints is possible. s(CASP), like other ASP implementations and unlike Prolog, handles non-stratified negation.
\begin{example}\label{exa:size}
Consider the program \lstinline[style=MyInline]{size(r1,S):- S#>=21} (see \myurl{size.pl}). For the query \lstinline[style=MyInline]{?- not size(r1,S)}, s(CASP) returns the model \lstinline[style=MyInline]|{not size(r1,S $|$ {S #< 21})}|.
\end{example}
\begin{example}\label{exa:bigsmall}
The following program, in \myurl{kitchen.pl}, models that the room \lstinline[style=MyInline]{r1} is either small or big and it is a kitchen:
\begin{lstlisting}[style=MyProlog]
small(r1) :- not big(r1).
big(r1) :- not small(r1).
kitchen(r1).
\end{lstlisting}
\noindent The top-down evaluation of the non-stratified negation in lines 1-2 detects a loop with an even number of intervening negations (an \emph{even loop}). When this is discovered, the truth/falsehood of the atoms involved is guessed to generate different models whose consistency is subsequently checked.
In this example, there are two possible models, and given a query it returns the \emph{relevant} partial model (if it exists):
\begin{itemize}
\item[] \lstinline[style=MyInline]{?- small(r1).} returns \lstinline[style=MyInline]|{small(r1), not big(r1)}|
\item[] \lstinline[style=MyInline]{?- big(r1).} returns \lstinline[style=MyInline]|{big(r1), not small(r1)}|
\item[] \lstinline[style=MyInline]{?- kitchen(r1).} returns \lstinline[style=MyInline]|{kitchen(r1)}|
\item[] \lstinline[style=MyInline]{?- big(r1), small(r1).} returns \lstinline[style=MyInline]{no models}
\end{itemize}
\end{example}
In addition to default negation, s(CASP) supports classical negation, marked with the prefix~'\lstinline[style=MyInline]{-}', to capture the explicit evidence that a literal is false: \lstinline[style=MyInline]{not small(r1)} expresses that we have no evidence that \lstinline[style=MyInline]{r1} is small (we cannot prove it), and \lstinline[style=MyInline]{-small(r1)} means that we have \emph{explicit} evidence (there is proof) that \lstinline[style=MyInline]{r1} is not small. Finally, s(CASP) provides a mechanism to present justifications in natural language. Both plain text and user-friendly, expandable HTML can be generated (e.g., \myurl{small_r1.txt} and \myurl{small_r1.html} show the justification for the query \lstinline[style=MyInline]{?- small(r1)} in Example~\ref{exa:bigsmall}).

\vspace*{0em}
\section{Modeling vague concepts}
\label{sec:model-vague-conc}

We now present a proposal to represent vague concepts using s(CASP). The formal representation of legal norms to automate reasoning and/or check compliance is well known in the literature. There are several proposals for encoding deterministic rules. However, none of the existing proposals, based on Prolog or standard ASP, can efficiently represent vague concepts due to unknown information, ambiguity, and/or administrative discretion.
\begin{example}\label{exa:rule}
Consider the following norm from a building regulation:
\begin{quote}
In the room there is at least one window, and each window must be wider than 0.60 m. If the room is small, it can be between 0.50 and 0.60 m wide.
\end{quote}
We can encode this norm using default negation:
\begin{lstlisting}[style=MyProlog]
requirement_a(Room):-
    window_belongs(Window,Room),
    width(Window,Width), Width#>$0.60$,
    not small(Room).
requirement_a(Room):-
    window_belongs(Window,Room),
    width(Window,Width), Width#>$0.50$,
    small(Room).
\end{lstlisting}
\noindent However, without information on the size of the room or on the criteria used to consider a room small, it is not possible to determine whether the room is small, and only the first rule would succeed.
\end{example}
To encode the absence of information we propose the use of the stable model semantics by \cite{gelfond88:stable_models}, which makes it possible to reason about various scenarios simultaneously, e.g., in the previous example we can consider two scenarios (models): in one a given room is small and in the other, it is not.
\begin{figure}
\caption{Code representing vague and unknown information (available at \myurl{room.pl}).}
\label{fig:room}
\end{figure}
\begin{example}[Example~\ref{exa:rule} (cont.)]
\label{exa:rule2}
Fig.~\ref{fig:room} models a hotel with eight rooms, for which we only know the size of three (lines 1-3). Following the patterns by~\cite{ariaslaw}, lines 8-9 make it possible to reason considering (i) unknown information (size of rooms $r4$ to $r8$), and/or (ii) vague concepts: line 5 states that a room smaller than 10 m$^2$ is small and line 6 states that a room larger than 20 m$^2$ is not small. However, it is not clear whether rooms with size between 10 and 20 m$^2$ are small or not.
Line 8 captures that there is evidence that the room is small, line 9 captures the case when there is explicit evidence (there is proof) that the room is not small, and lines 10-11 generate two possible models otherwise. Finally, \lstinline[style=MyInline]{room_is/2} determines whether a room is small or not based on evidence and/or assumptions: for room $r1$ the query \lstinline[style=MyInline]{?-room_is(r1,Size)} returns \lstinline[style=MyInline]{Size=big}, for $r2$ it returns \lstinline[style=MyInline]{Size=small}, and for each of the other rooms it returns both alternatives, e.g., for $r3$: \lstinline[style=MyInline]{Size=small} assuming that \lstinline[style=MyInline]{small(r3)} holds and \lstinline[style=MyInline]{Size=big} assuming that \lstinline[style=MyInline]{-small(r3)} holds.
\end{example}

\vspace*{0em}
\section{Modeling and manipulating 3D objects}

We will now describe our proposal to model and manipulate the geometrical information used to represent 3D objects, which will be used to model buildings, infrastructures, etc.\ using constraints. Additionally, we show how the operations that manipulate these objects can also be used to infer new knowledge and/or to verify that geometrical data and non-geometrical information are consistent at each stage of the project development.

\vspace*{0em}
\subsection{Representing 3D objects using linear equations}

Any 3D object can be approximated as the union of convex shapes. The simplest shape to represent with linear equations is a box with edges parallel to the axes of coordinates. Assuming that $P_a$ and $P_b$ are opposing vertices with $P_a$ being the one closest to the coordinate origin, the box is the set of points $P$ with coordinates $(X, Y, Z)$ (resp.\ $(X_a, Y_a, Z_a)$ for $P_a$ and similarly for $P_b$) such that
\begin{displaymath}
X \geq X_a \land X < X_b \land Y \geq Y_a \land Y < Y_b \land Z \geq Z_a \land Z < Z_b
\end{displaymath}
Equations of this type can be easily managed using a simple linear constraint solver.
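To make the reading of such constraint conjunctions concrete, a point-in-box test amounts to posting the six inequalities, together with the point's coordinates, to the solver. The following is a hypothetical helper written for illustration in the same notation as the other listings (\lstinline[style=MyInline]{inside_box/3} is not part of our code):
\begin{lstlisting}[style=MyProlog]
% Illustrative sketch (not part of our interface): a point lies in the
% box defined by Pa and Pb iff its coordinates are consistent with the
% six inequalities above.
inside_box(point(X,Y,Z), point(Xa,Ya,Za), point(Xb,Yb,Zb)) :-
    X#>=Xa, X#<Xb, Y#>=Ya, Y#<Yb, Z#>=Za, Z#<Zb.
\end{lstlisting}
For example, \lstinline[style=MyInline]{?- inside_box(point(2,3,1), point(0,2,0), point(4,5,3))} succeeds, while for a point outside the box the constraint store becomes inconsistent and the query fails.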
In this paper, we use the well-known CLP(Q) solver by \cite{holzbaur-clpqr}\footnote{We use a constraint solver over rationals to avoid the loss of precision associated with the use of reals.} and we represent such an object with the term \lstinline[style=MyInline]{point(X, Y, Z)} where \lstinline[style=MyInline]{X}, \lstinline[style=MyInline]{Y}, and \lstinline[style=MyInline]{Z} are variables adequately constrained as shown before. When a complex object has to be decomposed into convex shapes $S_i$, we define this object as the set of points $P$ such that $P \in S_1 \lor P \in S_2 \lor \dots \lor P \in S_n$. Since CLP(Q) does not provide logical disjunction as part of the constraint solver's operations, we explicitly represent the union of objects as a list of convex shapes \lstinline[style=MyInline]{[convex(Vars_1), convex(Vars_2), $\dots$]}, where \lstinline[style=MyInline]{Vars_x} encodes the variables that carry the constraints corresponding to the linear equations of each $S_i$. \begin{example} A 3D box whose defining vertices are $P_a$ and $P_b$ (represented, resp.\ as \lstinline[style=MyInline]{point(Xa,Ya,Za)} and \lstinline[style=MyInline]{point(Xb,Yb,Zb)}), is encoded as: \begin{lstlisting}[style=MyProlog] box(point(Xa,Ya,Za), point(Xb,Yb,Zb), [convex([X,Y,Z])]) :- X#>=Xa, X#<Xb, Y#>=Ya, Y#<Yb, Z#>=Za, Z#<Zb. \end{lstlisting} \noindent where the box is represented with a list that contains a single convex shape. \end{example} Most objects in the definition of a building are extruded 2D convex polygons, and therefore having an explicit operation for this case is useful and it illustrates the power of using CLP for modeling building structures. 
\begin{example}
Given a 3D object defined by its base (a convex polygon determined by its vertices $A,B,C, \dots$ in clockwise order) and its height $H$, its representation is:
\begin{lstlisting}[style=MyProlog]
poly_extrude(Vertices, H, [convex([X,Y,Z])]) :-
    Vertices = [point(_,_,Za), _, _ | _],
    polygon(Vertices,[X,Y]),
    Z#>=Za, Z#<Za+H.
polygon([_], _).
polygon([point(Xa,Ya,_), point(Xb,Yb,_) | Vs], [X,Y]) :-
    (Xb-Xa) * (Y-Ya) - (X-Xa) * (Yb-Ya) #=< 0,
    polygon([point(Xb,Yb,_) | Vs], [X,Y]).
\end{lstlisting}
\end{example}

\vspace*{0em}
\subsection{Operations on $n$-dimensional objects under CLP(Q)}
\label{sec:oper-n-dimens}

\begin{figure}
\caption{Operations on an $n$-dimensional space using CLP(Q) \myurl{spatial_clpq.pl}.}
\caption{Operations on a 2-dimensional space using s(CASP) \myurl{spatial_scasp.pl}}
\label{fig:operations}
\label{fig:operations-scasp}
\end{figure}

In this section, we explain some basic operations (union, intersection, complement, and subtraction) to manipulate the 3D objects that describe the BIM project. Fig.~\ref{fig:operations} shows a preliminary interface implemented using CLP(Q) with the following four predicates that can be used to manipulate shapes in any $n$-dimensional space:\footnote{We assume that union, intersection, etc.\ of shapes is understood as the corresponding operations on the sets of points contained inside the shapes.}
\begin{itemize}
\item \lstinline[style=MyInline]{shape_union(ShA, ShB, Union)}: Given two shapes $ShA$ and $ShB$, it creates a new shape $Union$ that is the union, i.e., $Union = ShA \cup ShB$. Since every shape is, in general, the union of simpler convex shapes (represented as a list thereof), $Union$ can simply be represented as a list containing the convex shapes of $ShA$ and $ShB$ (line 2). Simplification (to, e.g., remove shapes contained inside other shapes) can be done, but we have not shown it here for simplicity.
\item \lstinline[style=MyInline]{shape_intersect(ShA, ShB, Intersection)}: Given two shapes $ShA$ and $ShB$, it creates a new shape $Intersection$ which is the intersection, i.e., $Intersection = ShA \cap ShB$ (lines 5-6). This is computed with the union of the pairwise intersections of the shapes in $ShA$ and $ShB$. Obtaining the intersection of two convex shapes is done by the CLP(Q) constraint solver, which determines the set of points that are in both shapes as those which satisfy the constraints of both shapes. In line 15, \lstinline[style=MyInline]{copy_term/2} preserves the variables by generating a new set, \lstinline[style=MyInline]{VarInt}.\footnote{In line 15, \lstinline[style=MyInline]{true} is needed due to a well-known, subtle bug in the implementation of attributed variables of the underlying Prolog infrastructure used in this paper.} \item \lstinline[style=MyInline]{shape_complement(ShA, Complement)}: Given a shape $ShA$, it creates a new shape $Complement$ that contains every point in the $n$-dimensional space that is not in the shape $ShA$ (lines 20-25). This is computed as the dual of $ShA$. From a logical point of view, it corresponds to the negation, so it is denoted as $Complement = \lnot\ ShA$. \item \lstinline[style=MyInline]{shape_subtract(ShA, ShB, Subtraction)}: Given two shapes $ShA$ and $ShB$, the subtraction is a new shape $Subtraction$ that contains all points of $ShA$ that are not in $ShB$, i.e., $Subtraction = ShA \cap \lnot\ ShB$ (lines 28-39). To compute the subtraction of a convex shape from $ShA$, we iteratively narrow its $n$-dimensional space, $C_0$, by selecting one convex shape from $ShB$, $Sh_i$, and computing the intersection of $C_{i-1}$ and the complement of $Sh_i$, i.e., $C_i = C_{i-1} \cap \neg Sh_i$. The execution finishes when all shapes have been selected or the $i^{th}$ shape covers the remaining space $C_{i-1}$. \end{itemize} \begin{example}\label{exa:clpq} These operations can be used with $n$-dimensional shapes. 
For simplicity, let us consider 2D rectangles (\emph{r1} in yellow and \emph{r2} in blue):
\begin{lstlisting}[style=MyProlog]
obj(r1, [convex([X,Y])]) :- X#>=1, X#<4, Y#>=2, Y#<5.
obj(r2, [convex([X,Y])]) :- X#>=3, X#<5, Y#>=1, Y#<4.
\end{lstlisting}
\vspace*{.5em}
\begin{minipage}{.63\linewidth}
\begin{itemize}[wide]
\item The query \lstinline[style=MyInline]{?- obj(r1,Sh1), obj(r2,Sh2),} \lstinline[style=MyInline]{shape_intersect(Sh1, Sh2, Intersection)} returns: \lstinline[style=MyInline]{Intersection = [convex([A,B])], } \lstinline[style=MyInline]{A#>=3, A#<4, B#>=2, B#<4}
\item The query \lstinline[style=MyInline]{?- obj(r1,Sh1), obj(r2,Sh2),} \lstinline[style=MyInline]{shape_subtract(Sh1, Sh2, Subtraction)} returns: \lstinline[style=MyInline]{Subtraction = [convex([A,B]),convex([C,D])],} \lstinline[style=MyInline]{A#>=1,A#<3,B#>=2,B#<5, C#>=3,C#<4,D#>=4,D#<5}
\end{itemize}
\end{minipage}
\begin{minipage}{.35\linewidth}
\raggedleft
\includegraphics[width=.9\linewidth]{rect}
\end{minipage}
\end{example}
\vspace*{.5em}
\begin{example}
In addition, they can be combined to verify IFC properties and/or (non-)geometrical information contained in the BIM model. The predicate \lstinline[style=MyInline]{window_belongs(W,R)}, in Example~\ref{exa:rule}, can be defined from the geometry of \lstinline[style=MyInline]{W} and \lstinline[style=MyInline]{R}, i.e., \lstinline[style=MyInline]{W} belongs to \lstinline[style=MyInline]{R} if the intersection returns a non-empty shape.
\end{example}

\vspace*{0em}
\subsection{Operations on $\mathbf{n}$-dimensional shapes under s(CASP)}
\label{sec:oper-n-dimens-1}

We will now sketch how the main operations on $n$-dimensional objects can be defined using s(CASP). We will take advantage of its ability to execute ASP programs featuring variables with dense, unbounded domains.
As a proof of concept, Fig.~\ref{fig:operations-scasp} shows the encoding of the operations for 2-dimensional shapes, with slight differences in the representation of the objects and shapes w.r.t.\ the representation used under Prolog:
\begin{itemize}
\item The predicates for union, intersection, complement, and subtraction receive the identifiers of the object(s), instead of the list of convex shapes.
\item A convex shape in $n$ dimensions is an atom with $n+1$ arguments. Its first argument is the object identifier and the rest of the arguments are the variables used to define the convex shape, e.g., a 2D shape is an atom of the form \lstinline[style=MyInline]{convex(Id,X,Y)}.
\item The representation of the convex shapes is part of the program, e.g., the rectangles $r1$ and $r2$ in Example~\ref{exa:clpq} are defined with the clauses:
\begin{lstlisting}[style=MyProlog]
convex(r1, X, Y) :- X#>=1, X#<4, Y#>=2, Y#<5.
convex(r2, X, Y) :- X#>=3, X#<5, Y#>=1, Y#<4.
\end{lstlisting}
\end{itemize}
This representation delegates the handling of the shapes to the constraint store of the program. Therefore, a non-convex object is represented with several clauses, one for each convex shape, and a set of convex shapes is a set of answers obtained via backtracking.
\begin{example}[Cont. Example~\ref{exa:clpq}]
Let us consider the queries used in Example~\ref{exa:clpq} under s(CASP) with the encoding in Fig.~\ref{fig:operations-scasp}.
\begin{itemize}[wide]
\item The query \lstinline[style=MyInline]{?-shape_intersect(r1,r2,Intersection)} returns: \lstinline[style=MyInline]|Intersection = convex([A $|$ { A#>=3, A#<4 }, B $|$ { B#>=2, B#<4 }])|
\item The query \lstinline[style=MyInline]{?-shape_subtract(r1,r2,Subtraction)} returns two answers: \lstinline[style=MyInline]|Subtraction = convex([A $|$ { A#>=1, A#<3 }, B $|$ { B#>=2, B#<5 }]) ? ;| \lstinline[style=MyInline]|Subtraction = convex([A $|$ { A#>=3, A#<4 }, B $|$ { B#>=4, B#<5 }])|
\end{itemize}
\end{example}

\vspace*{0em}
\section{Tracing (non)-monotonic changes in BIM models}
\label{sec:trace-non-monotonic}

Let us adapt the example presented by~\citep{scasp-iclp2018} in Section 4.1, where different data sources may provide inconsistent data, and a stream reasoner, based on s(CASP), decides whether the data is valid or not depending on how reliable the sources are. Here, instead of streams, we consider models. A shared model is updated by different experts, and updated models have to be merged to generate the next model in the chain.
\begin{example}\label{exa:trace}
Consider a shared BIM model that contains information about room ventilation, a heating boiler feed system, and a fire safety regulation that states:
\begin{itemize}[wide=0.5em, leftmargin =*, nosep]
\item If a gas boiler is used, the ventilation must be natural.\footnote{The ventilation of a room is natural if the area of its windows is at least 10 percent of its floor's.}
\item If an electric boiler is used, the ventilation could be either natural or mechanical.
\end{itemize}
Initially, the shared BIM model has no ventilation or boiler restrictions. Later on, the architect modifies the model by reducing the size of the window in such a way that ventilation cannot be considered natural any longer due to the new size. To comply with the fire safety regulation (and maintain the consistency of the model) the architect selects an electric heating boiler. Simultaneously, the engineer modifies the model by selecting a gas boiler because it is more efficient than an electric boiler. This would force ventilation to be natural. Finally, when attempting to merge the updated models, an inconsistency is detected and the integration fails. A naive approach would broadcast the inconsistency to the architect and engineer, but we propose using a continuous integration reasoner to determine which expert's opinion prevails and make a decision based on that. The other party then needs to be notified to confirm the adjustments.
\end{example}
\begin{figure}
\caption{Code of the BIM continuous integration with an example.}
\label{fig:ci_code}
\end{figure}
Fig.~\ref{fig:ci_code} sketches a continuous integration reasoner, adapted from the paper by \cite{scasp-iclp2018}, and the encoding of Example~\ref{exa:trace} (\myurl{Bim_CI.pl}). Its goal is the detection of inconsistencies in the pieces of information provided by the different stakeholders.
Then, depending on a given criterion (e.g., their responsibility/expertise), it determines which data is valid (and eventually who should amend the inconsistency). Data labels are represented as \lstinline[style=MyInline]{data(Priority, Data)}, where \lstinline[style=MyInline]{Priority} tells us the degree of confidence in \lstinline[style=MyInline]{Data}; \lstinline[style=MyInline]{higher_confidence(PHi,PLo)} hides how priorities are encoded in the data (in this case, the higher the priority, the higher the level of confidence); and \lstinline[style=MyInline]{inconsistent/2} determines, in lines 13-16, which data items are inconsistent (in this case, \lstinline[style=MyInline]{ventilation(artificial)} and \lstinline[style=MyInline]{boiler(gas)}). Lines 1-11, alone, define the reasoner rules: \lstinline[style=MyInline]{valid_data/2} states that a data label is valid if it is \emph{not canceled} by another data label with more confidence.\footnote{Inconsistent data with the same confidence remain valid unless there is a more confident inconsistency.} In this encoding the confidence relationship uses constraints which, instead of being checked afterward, prune the search, but it is possible to define more complex rules, i.e., to determine who is more expert/confident depending on the data itself (e.g., for discrepancies in the dimensions of a beam, the structural engineer is the expert).
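The rules just described admit a compact reading; the following sketch is a simplified reconstruction for illustration only (the actual encoding is the one in Fig.~\ref{fig:ci_code}, and the clause details here are our assumption):
\begin{lstlisting}[style=MyProlog]
% Simplified reconstruction (illustration only): a data label is valid
% unless an inconsistent label with strictly higher confidence cancels it.
valid_data(Pr, Data) :-
    data(Pr, Data),
    not cancelled(Pr, Data).
cancelled(PLo, DataLo) :-
    data(PHi, DataHi),
    higher_confidence(PHi, PLo),
    inconsistent(DataLo, DataHi).
higher_confidence(PHi, PLo) :- PHi #> PLo.
\end{lstlisting}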
Lines 17-19 define that initially, \lstinline[style=MyInline]{ventilation(X)} holds for \textbf{all} \lstinline[style=MyInline]{X}, but when the engineer selects \lstinline[style=MyInline]{ventilation(natural)} and \lstinline[style=MyInline]{boiler(gas)} this data has more confidence, so the query \lstinline[style=MyInline]{?-valid_data(Pr,Data)} returns: \lstinline[style=MyInline]|{Pr=1, Data=ventilation(A), A\=artificial}| \mbox{because} \lstinline[style=MyInline]{boiler(gas)} is more reliable than \lstinline[style=MyInline]{ventilation(X)}, \lstinline[style=MyInline]|{Pr=2, Data=ventilation(natural)}|, and \lstinline[style=MyInline]|{Pr=2, Data=boiler(gas)}|. If we consider that the architect's selection has more confidence than the engineer's (by adding lines 20-21), the query \lstinline[style=MyInline]{?-valid_data(Pr,Data)} returns: \lstinline[style=MyInline]|{Pr=1, Data=ventilation(A), A\=artificial}|, \lstinline[style=MyInline]|{Pr=2, Data=ventilation(natural)}|, \lstinline[style=MyInline]|{Pr=3, Data=ventilation(artificial)}|, and \lstinline[style=MyInline]|{Pr=3, Data=boiler(electrical)}|. Note that now the answer \lstinline[style=MyInline]|{Pr=2, Data=boiler(gas)}| is not valid. Finally, by adding \lstinline[style=MyInline]{data(4,boiler(gas))} (line 22), we observe that the answer \lstinline[style=MyInline]|{Pr=3, Data=ventilation(artificial)}| is not valid. As we mentioned before, s(CASP) also provides justification trees for each answer (\myurl{Bim_CI.txt}) to support the inferred conclusions, so the user can check and/or validate the correctness of the final results.

\vspace*{0em}
\section{Evaluation}

The reasoner and benchmarks used in this paper are available at \url{https://gitlab.software.imdea.org/joaquin.arias/spatial}, and/or at \url{http://platon.etsii.urjc.es/~jarias/papers/spatial-iclp22}. They were run on a macOS 11.6.4 laptop with an Intel Core i7 at 2.6 GHz,
under Ciao Prolog version 1.19-480-gaa9242f238 (\url{https://ciao-lang.org/}) and/or under s(CASP) version 0.21.10.09 (\url{https://gitlab.software.imdea.org/ciao-lang/scasp}). A direct performance comparison of our prototype with implementations in other tools may not be meaningful because they do not support the representation of vague concepts and/or continuous quantities. ASP4BIM by \cite{li2020non} overcomes most of the limitations of previous tools (see Section~\ref{sec:related-work}) but, since it is built on top of \emph{clingo}, it inherits limitations already pointed out by \cite{arias-ec2022}. Firstly, let us use the program \myurl{room.pl} (Fig.~\ref{fig:room}). As we mentioned in Section~\ref{sec:model-vague-conc}, it returns independent answers under s(CASP), i.e., for the query \lstinline[style=MyInline]{?- room_is(Room,Size)} we obtain a total of \textbf{14 partial models}: one for room $r1$, another for room $r2$, and two for each of the other six rooms. On the other hand, the same program under clingo (\myurl{room.clingo}) generates \textbf{64 models}, covering all possible combinations: \lstinline[style=MyInline]{room_is(r1,big)} and \lstinline[style=MyInline]{room_is(r2,small)} appear in all of them and, for each of the other rooms $rX$, 32 models contain \lstinline[style=MyInline]{room_is(rX, big)}, while the other 32 models contain \lstinline[style=MyInline]{room_is(rX, small)}. The exponential explosion in the number of models generated by clingo reduces the comprehensibility of the results (for 16 rooms it generates \textbf{16384 models}, while s(CASP) generates only \textbf{30 models}). Moreover, the goal-directed evaluation of s(CASP) not only makes it possible to reason about specific rooms, but it also generates the corresponding justification, e.g., \myurl{room_r1.txt} and \myurl{room_r1.html} for room $r1$.
\begin{figure}
\caption{\myurl{Duplex_Q1.html}}
\caption{\myurl{Duplex_Q2.html}}
\caption{\myurl{Office_Q1.html}}
\caption{\myurl{Office_Q2.html}}
\caption{Images in x3d corresponding to the Duplex and Office BIM models.}
\label{fig:x3d_a}
\label{fig:x3d_b}
\label{fig:x3d_c}
\label{fig:x3d_d}
\end{figure}
Secondly, to validate the benefits of our proposal dealing with geometric information, we have implemented a spatial reasoner, in collaboration with VisuaLynk Oy, based on the spatial interface described in Fig.~\ref{fig:operations}. This spatial reasoner includes a graphic interface that translates the constraints back into geometry and generates 3D images with the results for the queries using x3d.\footnote{Reference textbook for learning Extensible 3D (X3D) Graphics available at \url{https://bit.ly/3O1MqcH}} The benchmarks used are: (i) the ERDC \textbf{Duplex} Apartment Model ERDC D-001, produced in Weimar, Germany for a design competition, and (ii) the Trapelo St.\ \textbf{Office} (IFC4 Edition), a 3-story office building in Waltham hosting the Revit headquarters, which consists of three models (Architecture, MEP, and Structure). For the evaluation, we translated the IFC files of the models to convert the geometrical data (and IFC labels) of the 286 objects of the Duplex and approximately 5000 objects of the Office Building (3639 objects in the architecture model and 1322 objects in the structure model) into Prolog facts. We defined a predicate \lstinline[style=MyInline]{object/5}, where the first argument is the IFC label, the second is the identifier, the third and fourth are the lower and upper points of the bounding box (resp.), and the fifth depends on the file: in the Duplex file it is the centroid point of the box, in the architecture model of the Office it is 'arq', and in the structure model of the Office it is 'str'. Several queries, for both models, are available at \myurl{duplex.pl} and \myurl{office.pl}.
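Given this \lstinline[style=MyInline]{object/5} convention, a selection query simply combines an IFC label with constraints on the bounding-box coordinates. The following hypothetical query is written for illustration only (the label and threshold are assumptions; the actual queries are in the files above):
\begin{lstlisting}[style=MyProlog]
% Hypothetical query: doors whose bounding box starts below Ya = -4.
?- object(ifcDoor, Id, point(Xa,Ya,Za), point(Xb,Yb,Zb), _),
   Ya #< -4.
\end{lstlisting}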
Let us comment on a few of them:
\begin{example}[Duplex]
Fig.~\ref{fig:x3d_a} shows the whole model of the Duplex (\myurl{duplex.pl}). The doors are in green and the rest of the objects are in blue (query $Q1$). Fig.~\ref{fig:x3d_b} shows the results of the query $Q2$, which imposes the constraints \lstinline[style=MyInline]{Ya#<-4} to select certain doors, and \lstinline[style=MyInline]{Y#>=-7, Y#<-4} to ``create'' a space (unbounded in the axes $x$ and $z$) that defines a \emph{slice} of the model. Constraints can be used in s(CASP) to reason about unbounded spaces, and finer constraints, such as \lstinline[style=MyInline]{Ya#<-4.002}, can be used without performance impact. That is in general not the case with other ASP systems.
\end{example}
\begin{example}[Office]
\label{exa:query}
Fig.~\ref{fig:x3d_d} shows the results of the query $Q2$ in \myurl{office.pl}, which selects objects of type \emph{IfcBeam} in the Architecture model that are not covered by objects in the Structural BIM model. Fig.~\ref{fig:x3d_c} shows those objects that intersect the beam (query $Q1$). The uncovered parts, if any, are drawn in red. Note that these parts can be as thin as necessary, without negatively impacting performance.
\end{example}
The current development of the BIM reasoner is a proof of concept in which no optimization techniques have been applied. Nevertheless, the results from a performance point of view are also satisfactory.
The query in Example~\ref{exa:query} found the first beam with uncovered parts in \textbf{0.104 sec.} and evaluated the whole office, selecting 691 beams from a total of 3639 objects in the architecture model and detecting the 511 beams not covered by the more than 1300 objects in the structure model, in 48 sec.\footnote{\cite{li2020non} reports that ASP4BIM pre-processing of 5415 BIM objects takes 99 sec.}

\vspace*{0em}
\section{Related Work}
\label{sec:related-work}

Many logic-based proposals have been developed to overcome the limitations of the IFC format and of automated tools based on IFC, such as Solibri Model Checker (SMC) and the Corenet BIM e-Submission by \cite{Corenet}, when adapting to different regulations. These limitations have been attacked using different approaches:
\begin{itemize}
\item Extended query languages to handle IFC data, such as BimSPARQL, by \cite{zhang2018bimsparql}, that extends SPARQL with (i) a set of functions modeled using the Web Ontology Language (OWL), (ii) a set of transformation rules to map functions to IFC data, and (iii) a module for geometry-related functions. However, these languages require pre-processing the geometrical information contained in the model and/or have limitations to infer new knowledge, e.g., the shortest path between two rooms.
\item Minimal proof-of-concept tools, such as the safety checker by~\cite{zhang2013a}, the acoustic rule checker by \cite{pauwels2011a}, and BIMRL by \cite{solihin2015simplified}, that show improved reasoning capabilities w.r.t.\ commercial off-the-shelf BIM software. However, all report limitations in the representation of geometrical information and/or in the flexibility of the proposal to adapt the code and/or the evaluation engine to different scenarios.
\item Translation of building regulations into computer-executable formats, such as KBimCode by~\cite{lee2016a}, which transcribes the Korean Building Act to evaluate building permit requirements.
However, the authors report difficulties in translating vague concepts such as \emph{accessible routes} (key information such as the function of a room is needed to derive the ``accessible routes'').
\end{itemize}
To overcome the limitations of these approaches, we propose using a goal-directed implementation of CASP, because Prolog and bottom-up implementations of CASP have limitations in modeling vague concepts and geometrical information simultaneously:
\begin{itemize}
\item \textbf{Prolog:} Since Prolog is based on the least fixed point semantics, the different answers generated by independent clauses correspond to a single model and are simultaneously true. Consider a program containing the facts \lstinline[style=MyInline]{size_of(r1, small)} and \lstinline[style=MyInline]{size_of(r1, big)}. If \lstinline[style=MyInline]{size_of/2} is invoked in different parts of the program, it may assign two different sizes to the same room, which is against the intended interpretation of \lstinline[style=MyInline]{size_of/2}. It is not possible to restrict \lstinline[style=MyInline]{r1} to have only one of the two possible sizes everywhere in the program. Additional care (e.g., explicit parameters) is needed to force this consistency. Moreover, it is not easy to make use of default negation in Prolog, since its \emph{negation as failure} rule has to be restricted to ground calls (e.g., the query in Example~\ref{exa:size} is unsound under SLDNF) and it may not terminate in the presence of non-stratified negation (e.g., the query \lstinline[style=MyInline]{?- small(r1)} in Example~\ref{exa:bigsmall} does not terminate under SLDNF). While there exist implementations of Prolog, such as XSB with tabled negation, that compute logic programs according to the well-founded semantics (WFS), the truth value of atoms under WFS can be undefined, e.g., the query \lstinline[style=MyInline]{?- small(r1)} in Example~\ref{exa:bigsmall} under WFS returns \emph{undefined}.
\item \textbf{CASP:} While a goal-directed implementation of ASP provides the relevant partial model, standard ASP systems, which require a grounding phase, may in the presence of multiple even loops (e.g., a single vague concept applied to several objects) generate a combinatorial explosion in the number of valid stable models, reducing the comprehensibility of the results. Moreover, these systems cannot (easily) handle an unbounded and/or dense domain due to the grounding phase. Variable domains induced by constraints can be unbounded and, therefore, infinite (e.g., \lstinline[style=MyInline]{X#>0} with $\mathtt{X} \in \mathds{N}$ or $\mathtt{X} \in \mathds{Q}$). Even if they are bounded, they can contain an infinite number of elements (e.g., \lstinline[style=MyInline]{X#>0 $\land$ X#<1} in $\mathds{Q}$ or $\mathds{R}$). These problems have been attacked using different techniques: \begin{itemize} \item Translation-based methods, such as EZCSP by~\cite{balduccini17-ezcsp}, convert both ASP and constraints into a theory that is executed in an SMT solver-like manner. However, the translation may result in a large propositional representation or weak propagation strength. \item Extensions of ASP systems with constraint propagators, such as clingcon by~\cite{banbara17-clingcon3} and clingo[DL,LP], generate and propagate new constraints during the search. However, they are restricted to finite-domain solvers and/or incrementally generate ground models, lifting the upper bounds for some parameters. \end{itemize} Since the grounding phase eliminates variables, and with them a channel of communication, the execution methods for CASP systems are complex: explicit hooks are sometimes needed in the language, e.g., the \texttt{required} builtin of EZCSP, so that the ASP solver and the constraint solver can communicate. More details on standard CASP systems can be found in the paper by~\cite{lierler2021constraint}.
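For concreteness, the even-loop pattern behind such vague concepts (cf.\ Example~\ref{exa:bigsmall}) can be sketched as follows; this is an illustrative sketch, and the concrete predicate names may differ from those used elsewhere:

\begin{verbatim}
% A room is small unless it is known to be big, and vice versa:
room(r1).
small(R) :- room(R), not big(R).
big(R)   :- room(R), not small(R).
\end{verbatim}

Under a goal-directed evaluation such as s(CASP), the query \lstinline[style=MyInline]{?- small(r1)} would simply succeed, returning the partial stable model containing \lstinline[style=MyInline]{small(r1)} and the assumption \lstinline[style=MyInline]{not big(r1)}, whereas a grounding-based ASP system enumerates $2^n$ stable models for $n$ rooms.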
\end{itemize} Finally, let us analyse a recent proposal for safety analysis, ASP4BIM by~\cite{li2020non}, which is built on top of \emph{clingo}, a state-of-the-art ASP solver. ASP4BIM overcomes most of the limitations of previous BIM logic-based approaches by (i) defining spatial aggregates in ASP, (ii) maintaining geometries in ASP through a specialised geometry database extended to support real-arithmetic resolution and specialised spatial optimisations, and (iii) formalising 3D BIM safety compliance analysis within ASP. However, it inherits the limitations of ASP solvers, which require a grounding phase, when dealing with dense/unbounded domains (needed to represent time, dimensions, etc.) and/or when interpreting the answers, due to the number, size, or readability of the resulting models. While the limitation of dealing with dense domains can be overcome by using discrete domains, e.g., using integers to represent time steps instead of continuous time, this involves certain drawbacks: as pointed out by \cite{scasp-iclp2018}, this shortcut may impact performance (increasing the execution run-time of \emph{clingo} by orders of magnitude) and/or may make a program succeed with wrong answers (due to rounding in ASP). \vspace*{0em} \section{Conclusion and Future Work} We have highlighted the advantages of a well-founded approach to representing geometrical and non-geometrical information in BIM models, including specifications, codes, and/or guidelines. BIM models change during their design, construction, and/or facility-management time, as objects and properties are removed, added, or changed. The use of CLP, and more specifically s(CASP), makes it possible to realize common-sense reasoning combining geometrical and non-geometrical information, thanks to its ability to perform non-monotonic reasoning and its support for constructive negation.
Our proposal allows the representation of knowledge involving vague concepts and/or unknown information, and the integration of \mbox{(non-)geometrical} information in the queries and rules used to reason about and define BIM models. We have identified some future research directions: \begin{itemize} \item \textbf{BIM Verification vs BIM Refinement} The design and construction of a building is a sequence of decisions (setting dimensions, materials, deadlines, etc.), each of which reduces the degrees of freedom. A model refinement approach would generate a sequence of models based on the formal specifications of the regulations, client requirements, geometry, etc. Any change in the model chain should be consistent upwards, keeping the refinement structure. \item \textbf{Non-Monotonic Model Refinement} A monotonic evolution of a BIM model, following model refinement, ensures consistency. However, the natural flow of architectural development requires the consideration of non-monotonic refinements due to unforeseen events, cost overruns, delays, etc. \item \textbf{Integrating logical reasoning in BIM Software} This proposal is an initial step that, together with other proposals such as ASP4BIM, may lead to a new paradigm in the refinement of BIM models that would improve the flexibility and reasoning capacity of the current standards. Its integration with commercial off-the-shelf BIM software would require efficiency improvements, e.g., by adapting the s(CASP) execution strategy or implementing specialized constraint solvers. \end{itemize} \end{document}
Posted on October 4, 2014 by curryja | 454 Comments
Some things that caught my eye this past week.
Raymond Pierrehumbert responds to Stephen Koonin's WSJ essay; thinks "climate science is settled enough" and thinks Koonin's argument in @WSJ stems from surfing willfully ignorant skeptics' blogs [link]
In case you are dying to find out how my exchange with Greg Laden turned out, over the favorited Mark Steyn tweet, see this synthesis by twitchy. The best tweet from my perspective was Euphonius Bugnuts: Well, you gotta admit, Greg's attribution logic for you being a denier is tighter than IPCC's for AGW.
Dinner at Nic's. Nic Lewis hosted a dinner for skeptics and climate scientists, that made the Guardian. Tamsin defends her host in the comments: "Nic Lewis is quite literally a gentleman and a scholar". Both scientists and skeptics in the UK seem much more civil than in the US.
Let's not Reinvent the Flat Tire – smart, adaptive thinking from the World Bank [link]
California drought – linked to human-caused climate change? [link]
Politics isn't just about manipulating people, it's about learning from them – a review of Jonathan Haidt's The Righteous Mind [link]
New York Magazine: How to convince conservatives on climate change [link]
How biased are scientists? Great post by @jonmbutterworth on Bayes and belief [link]
Antarctic sea-ice hits new high as scientists puzzle over the cause [link]
How the #oil and #gas boom is changing America – really great interview with Michael Levi [link]
Not just a problem for alligators – Climate Change Could Alter the Human Male-Female Ratio [link]
Bigger surge than Sandy – The freak 1821 hurricane, why it should worry coastal residents [link]
James Annan responds to Lewis/Curry paper: Why not try the Lewis/Curry climate sensitivity method on a GCM?
[link]
Talking sense about climate in India [link]
Chip Knappenberger tweets: Cargo Ship Makes 1st-Ever Solo Trip Through NW Passage [link] "Thru fuel savings,GHG emissions reduced by 1,300 tons " Hmmm.
The PAGES2k group rediscovers the medieval warm period. ClimateAudit
Nassim Taleb: My (civilized) debate with Sornette: diverging views of risks, but not on science/probability: [link]
China's one-word answer to Obama's climate plan [link]
Oliver Geden: Now even the @guardian posts reflections on Plan B for int #climate policy [link]
JustinHGillis on attribution of climate & weather extremes (read past the headline) [link] …
Joke of the week: WH PRES SECRETARY tweets: Titanic star Leonardo DiCaprio led a Global Warming awareness march. You'd think he'd be ok with fewer icebergs.
David Wojick | October 4, 2014 at 10:29 am |
Anyone who thinks that skeptics are willfully ignorant has opted out of science.
David L. Hagen | October 4, 2014 at 6:52 pm |
The review of Lewis' dinner was remarkable for the Guardian:
>"When people say the science is settled, they mean there is such a thing as anthropogenic climate change. Where it's not settled is the rate of change, how much it's going to warm, how fast it'll warm under different levels of CO2 and exactly how it will affect different regions," says Ted Shepherd, a climate scientist at Reading University and Grantham Chair in Climate Science. . . . A survey of the table at the end of the meal revealed that the views of scientists and sceptics on the level of "transient climate response" – or how much the world would warm should levels of pre-industrial CO2 be doubled – differed only by around 0.4C, recounts journalist David Rose.
Sounds like they did not have any skeptics at the table. If the IPCC mean estimate is 3 degrees C does this mean that the supposed skeptics estimated 2.6 degrees? Or that no one at the table guessed 3 C? Does he say what this 0.4 C range was?
This is certainly not the range in the general debate, which is more like from 0 to 6 degrees C. Notice the bogus distinction between scientists and skeptics.
David Wojick
Nic Lewis and Judith Curry just posted a paper calculating the mean ECS as 1.64 K.
Lewis N and Curry J A: The implications for climate sensitivity of AR5 forcing and heat uptake estimates, Climate Dynamics (2014), PDF
using 1859–1882 for the base period and 1995–2011 for the final period, thus avoiding major volcanic activity, median estimates are derived for ECS of 1.64 K and for TCR of 1.33 K. ECS 17–83% and 5–95% uncertainty ranges are 1.25–2.45 K and 1.05–4.05 K; the corresponding TCR ranges are 1.05–1.80 K and 0.90–2.50 K.
Curry posted: Lewis & Curry Climate Sensitivity, Uncertainty The implications for climate sensitivity of AR5 forcing and heat uptake estimates
Ted Shepherd is coauthor of the presentation showing climate sensitivity range down to 2: WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity This summarizes the IPCC models. Thus, I surmise that the 0.4 C range is from 1.64 to 2.04.
David Wojick | October 5, 2014 at 3:12 pm |
Thanks DH, but most alarmist warmers estimate CS at well above 2 so I guess there were no alarm-warmers there. And lots of skeptics think it well below 1.6, including 0 (or in my case that CS is a scientifically incoherent concept which therefore has no value), so it seems there were no skeptics there either. Looks like a table full of lukewarmers. Perhaps the Guardian should have said that.
Note under the Chatham House Rule "Lukewarmer" doesn't sound like "bad news" that "sells"!
rhhardin | October 4, 2014 at 10:48 am |
"Chip Knappenberger tweets: Cargo Ship Makes 1st-Ever Solo Trip Through NW Passage "Thru fuel savings,GHG emissions reduced by 1,300 tons "
It's natural climate system adaptation.
PA | October 4, 2014 at 11:30 am |
The Nunavik is a Polar Class 4/ice class ICE-15 ship.
http://no.cyclopaedia.net/wiki/Bay-class_icebreaking_tug
"MV Nunavik the newest icebreaker to hit Arctic waters "
Armored cargo vessel. Not overly impressed.
http://www.vancouvermaritimemuseum.com/permanent-exhibit/st-roch-national-historic-site
The wooden ship St. Roch did it a couple of times in the 1940-1944 era with a best time of 86 days.
popesclimatetheory | October 4, 2014 at 12:56 pm |
CO2 makes green things grow better with less water. Anything that reduces GHG emissions is a very bad thing for life on earth.
pauldd | October 4, 2014 at 2:25 pm |
"It's natural climate system adaptation"
Yet another example of a negative feedback.
A fan of *MORE* discourse | October 4, 2014 at 3:04 pm |
Among the small (VERY small!) vessels that accomplished the Canada-side Northwest Passage this year are Novara, Arctic Tern, and Altan Girl (none required icebreaker assistance). The Russia-side Northern Sea Route has been wide open for weeks, with hundreds of vessels transiting. It is a pleasure to supply Climate Etc readers with accurate information regarding the melting Arctic sea-routes!
Tonyb | October 4, 2014 at 4:57 pm |
I know you have problems with arctic maritime history so this article might help with the northern sea route which was opened in the 1930's and reached its peak in 1987 before declining due to the demise of the USSR.
http://www.chathamhouse.org/sites/files/chathamhouse/public/International%20Affairs/2012/88_1/88_1blunden.pdf
The route was open most years in the 1930's and was of course used by Allied convoys during world war two when people such as my brave neighbour sailed through the arctic to deliver oil and supplies to Russia for which he got little thanks from them at the time. I hope to visit the Scott polar institute in Cambridge shortly to follow up my previous research there which seemed to indicate the Northern sea route could have been open for around fifty years or so during the 16th century.
That is of course anecdotal at present but interesting nonetheless
tty | October 5, 2014 at 7:20 pm |
22 boats tried to get through the Northwest Passage. 6 succeeded (plus 2 who had wintered and made it through in the second year). This is an even lower success rate than last year.
David L. Hagen | October 4, 2014 at 10:55 am |
Trained Physicist? Could Dr. Ben Santer rise towards the standard of a trained Physicist and begin to understand models and uncertainties like Physicist Steven E. Koonin?
Faustino | October 4, 2014 at 11:02 am |
The Knappenberger link goes to the 1821 hurricane story.
nutso fasst | October 4, 2014 at 11:23 am |
"Cargo Ship Makes 1st-Ever Solo Trip Through NW Passage"
Misleading news articles fail to note that the MV Nunavik is a Polar Class 4 vessel, "capable of year-round operation in thick first-year ice." The ship did not need an icebreaker escort because it IS an icebreaker.
http://www.cbc.ca/news/canada/north/mv-nunavik-the-newest-icebreaker-to-hit-arctic-waters-1.2583861
http://www.nunatsiaqonline.ca/stories/article/65674mv_nunavik_from_northern_quebec_to_china_via_the_northwest_passage/
The passage of this ship does not depend on an arctic ice death spiral.
Furthermore the plan is that Nunavik is to make ONE passage per year through the Northwest passage. In September when ice is at minimum. With a 36,000 ton displacement, 40,000 horsepower and icebreaker bow it can handle ice up to 5 feet thick. They had no real problem this trip, but the captain blogged that they had to force an ice barrier in Prince of Wales Sound that would have stopped virtually any other merchantman. What I don't understand is how they plan to make any money the rest of the year with such a grossly overpowered ship. There isn't really that much demand for icebreaking ore-carriers. The only other runs I can think of are Noril'sk-Murmansk and Luleå-Rotterdam.
Wagathon | October 4, 2014 at 11:33 am |
Computer modeling of complex systems is as much an art as a science… global climate models describe the Earth on a grid that is currently limited by computer capabilities to a resolution of no finer than 60 miles… But processes such as cloud formation, turbulence and rain all happen on much smaller scales. These critical processes then appear in the model only through adjustable assumptions that specify, for example, how the average cloud cover depends on a grid box's average temperature and humidity. In a given model, dozens of such assumptions must be adjusted ("tuned," in the jargon of modelers)… For the latest IPCC report (September 2013), its Working Group I, which focuses on physical science, uses an ensemble of some 55 different models. Although most of these models are tuned to reproduce the gross features of the Earth's climate, the marked differences in their details and projections reflect all of the limitations that I have described… The models differ in their descriptions of the past century's global average surface temperature by more than three times the entire warming recorded during that time. Such mismatches are also present in many other basic climate factors, including rainfall, which is fundamental to the atmosphere's energy balance. As a result, the models give widely varying descriptions of the climate's inner workings. Since they disagree so markedly, no more than one of them can be right. ~Steven Koonin, WSJ, Climate Science Is Not Settled
The clinker is, we're just not that big a deal. The left refers to CO2 as a poison or a climate pollutant to make humanity's contribution to the ecosphere nothing more than a big and dirty activity that nature is powerless to deal with.
The Left demands that we must assume that human influences could have serious consequences for the climate, whereas Koonin says, "they are physically small in relation to the climate system as a whole," even when looking down the road 100 years. "For example, human additions to carbon dioxide in the atmosphere by the middle of the 21st century are expected to directly shift the atmosphere's natural greenhouse effect by only 1% to 2%. Since the climate system is highly variable on its own, that smallness sets a very high bar for confidently projecting the consequences of human influences. (Koonin, Ibid.)
Jim D | October 4, 2014 at 11:41 am |
Pierrehumbert's demolition of Koonin is worth reading. The link was to only page 2. Page 1 is here. http://www.slate.com/articles/health_and_science/science/2014/10/the_wall_street_journal_and_steve_koonin_the_new_face_of_climate_change.html
"The idea that Climate science is settled," says Steven Koonin, "runs through today's popular and policy discussions. Unfortunately, that claim is misguided." The Left's problem with Koonin is not the message but the messenger who must now be branded, "denier." Neither the computational physicist credentials of Koonin, who served as a professor and provost at Caltech, nor his being green and a fan of renewables, are in question. Rather, his DOE job as undersecretary of science in the Obama administration lands Koonin squarely in the camp of Leftist global warming defectors - e.g., a voice of reason that's not easily silenced and will be reckoned with by all but Democrat partisan extremists who will do whatever they can to suppress skepticism and legitimate climate science.
jim2 | October 5, 2014 at 5:46 pm |
Willard's boy believes there exists someone who understands climate science. That makes Pierrehumbert look pretty ignorant.
Wagathon | October 5, 2014 at 6:37 pm |
What greater falsification of AGW theory could there be than to see liberal fascists label skeptics, "deniers?"
curryja | October 4, 2014 at 11:54 am |
What Pierrehumbert doesn't get is that Koonin took a very hard look at the evidence in the IPCC (he largely wrote this document), then listened to present ions by myself, held, collins, santer, linden, christy and questioned us at length (see this transcript). And his conclusions are in the WSJ. Dismissing Koonin's remarks as plucked from skeptics blogs misses the whole point – a highly regarded physicist (a democrat to boot) takes a serious look at the evidence in the IPCC and ends up, well pretty much agreeing with moi.
Matthew R Marler | October 4, 2014 at 12:03 pm |
curryja: present ions
I like that. I can imagine listening to "present ions". So 21st century.
AK | October 4, 2014 at 12:29 pm |
I suspect there's more to it than that: the phrase "evidence is incontrovertible" hardly belongs in a statement by any scientific society.
nutso fasst | October 4, 2014 at 1:21 pm |
While it may be true that present ions have electrifying conversations, I'd really like to know what they're saying when they're out of earshot.
pokerguy (aka al neipris) | October 4, 2014 at 1:28 pm |
"Dismissing Koonin's remarks as plucked from skeptics blogs misses the whole point –"
Well, yes, but "misses" implies some sort of unintentional error. I find myself saying the same thing over and over these days, that this is not about the science. If nothing else is clear, that should be.
pokerguy | October 4, 2014 at 1:41 pm |
"the phrase "evidence is incontrovertible" hardly belongs in a statement by any scientific society."
Why one might go so far as to say it's…oh what's that term the alarmists are so fond of…oh yes…."anti-science."
Political Junkie | October 4, 2014 at 3:03 pm |
Are you sure she said "ion?" Positive!
GaryM | October 5, 2014 at 1:26 am |
Not only are skeptics to be ignored, but those who read or listen to them are to be culled from the herd as well. We can't have our sheep paying attention to banned thought.
Jim D | October 5, 2014 at 1:45 am |
Their 2007 statement was "The evidence is incontrovertible. Global warming is occurring". I think today this is less incontrovertible, even by skeptics who disagreed back then, so it could appear in the new statement too. Koonin says "We know, for instance, that during the 20th century the Earth's global average surface temperature rose 1.4 degrees Fahrenheit." So it is still incontrovertible, but the difference is that the skeptics have shifted since 2007 to allow this to be said.
Matthew R Marler | October 5, 2014 at 2:00 am |
Jim D: Their 2007 statement was "The evidence is incontrovertible. Global warming is occurring".
Believers continue to disparage the distinction between "has warmed" and "is warming". The evidence for "global warming is occurring" is definitely controvertible.
Jonathan Abbott | October 5, 2014 at 4:17 am |
Jim D, in 2007 it was already obvious to anyone who wanted to see that global warming had stopped. Hence the statement was incorrect.
OK, so now for skeptics, it comes down to the meaning of "is". Interesting. Perhaps it is better to phrase it the way Koonin did, even including a number.
JimD: "Their 2007 statement was "The evidence is incontrovertible. Global warming is occurring". I think today this is less incontrovertible, even by skeptics who disagreed back then, so it could appear in the new statement too. "
Fine. Show me some scientists that will bet their paychecks that 2020 will be warmer than 2014 (by UAH or RSS – the untampered temperature standards, or raw data temperature).
UAH is showing a robust rise rate of over 0.1 C per decade despite the "pause". I really don't know what the skeptics are talking about. http://www.woodfortrees.org/plot/uah/mean:36/plot/uah/trend
Harold | October 5, 2014 at 12:03 pm |
That's the problem with this issue. Too many charged particles.
willard (@nevaudit) | October 5, 2014 at 4:37 pm |
> Dismissing Koonin's remarks as plucked from skeptics blogs misses the whole point – a highly regarded physicist (a democrat to boot) takes a serious look at the evidence in the IPCC and ends up, well pretty much agreeing with moi.
If that's the point, then I'm not sure Pierrehumbert is that far off the mark:
Steve Koonin is the answer to a troublesome question facing the Journal's opinion page editors: What you do if you want to continue obstructing progress on global warming pollution, but your usual stable of tame skeptics is starting to die off (Fred Seitz), retire from active research (Dick Lindzen), or discredit itself through serial scientific errors (John Christy) or by taking fanatical and manifestly untenable positions (Heartland Institute)? That puts the editors in quite a pickle. The Wall Street Journal evidently has high hopes for promoting Koonin as a prominent new voice for inaction, having lavished on him 2,000 words and front-page Saturday exposure outside the Journal's paywall.
An interesting plan B might involve megaphones like Twitchy.
Pierrehumbert may be right, Willard. So what?
phatboy | October 5, 2014 at 5:00 pm |
Yes, Koonin is either right or he's wrong. The fact that the warmists have to dig up the dirt appears to be a tacit admission that he's right.
Even better:
Koonin has constructed a narrative that is calculated to make people take notice even if they wouldn't ordinarily trust anything the Wall Street Journal published on global warming: I'm a physicist bringing my brilliance and outside perspective to the backwater of climate science! (He was a professor of physics, and later provost, at Caltech.) I'm green! (He was chief scientist for BP, the oil firm that likes to tout itself as the "beyond petroleum" company, and he was involved with renewables there, among other things.) I've got true-blue Democratic credentials!
(He was undersecretary for science in the Department of Energy during Obama's first term.)
The "whole point" indeed.
Steven Mosher | October 5, 2014 at 5:16 pm |
nice double game that Ray plays.
if you lack credentials, then attack the lack of credentials
if you have credentials, then attack the ploy of using someone with credentials.
In the end Koonin is not a TRUE climate scientist.
nice. self sealing …but our Willie just can't see it
The "whole point" may be a narrative:
But there are flaws in this narrative. Being a smart physicist can just give you more elaborate ways to delude yourself and others, along with the arrogance to think you can do so without taking the time to really understand the subject you are discussing. Freeman Dyson is a famous example. Koonin's role in the Department of Energy was marginal and largely powerless, leading ultimately to his resignation. BP's "beyond petroleum" vision evidently includes tar sands (both extraction and refining) and petcoke (arguably the worst fossil fuel of all). And anyway, how green can you be if you're the company that gave us the Deepwater Horizon disaster?
No double bind there. Nice try, though.
horse … dead … a … flogging
Rearrange the words!
Even if Lindzen never writes another paper, he will still be a force in climate debate. As to Christy, a problem was found in the sat temp calcs, he fixed it. Isn't that what scientists are supposed to do? Unlike Kaufman who has had problems with his paper pointed out with precision by SM, then "corrected" it, but alas, it still has problems. So, Willard, the hack doesn't hold water. http://climateaudit.org/2014/10/04/pages2k-more-upside-down/
PA | October 5, 2014 at 6:10 pm |
JimD "UAH is showing a robust rise rate of over 0.1 C per decade despite the "pause". I really don't know what the skeptics are talking about."
For the last 10 years (since September 2004) the change is approximately 0 (zero).
Most people start from 1997 for the pause since the strong 1997/1998 El Nino was followed by a strong La Nina. I don't see the robustness of 10 years of zero (or 17 if you go back to 1997). Personally I believe in 2024 we will have 20 years of zero, since I'm sort of convinced CO2 has some sort of effect. But some solid cooling could persuade me otherwise.
Jim D | October 5, 2014 at 6:37 pm |
PA, 10 years is never robust. You can get downward trends with other carefully selected 15 year periods such as 1980-1995, but the overall trend is there, and the deviation from that is as small as ever and getting smaller, if anything.
> Rearrange the words!
All the words follow one after another in the op-ed. The first quote was the second paragraph. The second quote was the third. The third quote was the fourth paragraph.
Pierrehumbert does not appear to miss what Judge Judy claims is the "whole point." Some might wish to restrict this "whole point" to the fact that Koonin agrees with her. But even then Pierrehumbert may have covered this possibility.
stevepostrel | October 5, 2014 at 8:42 pm |
I had read good things about Pierrehumbert over the years. The Slate piece undid that effectively at the character level–the absurd, insinuative, ad hominem approach makes him untrustworthy in anything he writes on this topic, no matter how technical it appears. If he's willing to play by these rules in public communication, where he's easy to catch, then there's no reason to trust him on more-opaque technical issues where more trust would be required absent a detailed analysis of each claim. Koonin hasn't always covered himself with glory in his own public dealings with those whose claims he disputed (see cold fusion), but two wrongs don't make a right (and Koonin's intellectual position, if not his behavior, had very strong grounding).
Pierrehumbert's thinly disguised resentment of the higher status of scientists such as Koonin and Dyson is as unbecoming as was Koonin's disdain for mere chemists treading in physics years ago.
captdallas2 0.8 +/- 0.2 | October 4, 2014 at 11:59 am |
How has Ray been coming along with cloud forcing? I haven't heard much since the Statue of Liberty buried in the sand presentation.
Groty | October 4, 2014 at 12:38 pm |
Page one may have been omitted by accident. But I can understand why it may have been intentionally omitted. The first page of the rant was several paragraphs of verbiage intended to discredit Koonin by attacking his character, where he used to work, motives, and the medium in which he chose to publish his essay. Pretty prototypical left wing stuff that isn't relevant to the scientific arguments he made. The second page had some actual criticism of WHAT Koonin actually wrote about.
popesclimatetheory | October 4, 2014 at 1:12 pm |
More Wall Street Garbage
When the science gets settled "enough", climate model output will look like real climate data. They are not there yet!!!!!!!!!!!!!!!!!!!!!!!!! Climate Model output does not agree with real earth data. Climate Model output does not resemble real earth data. The climate alarmists, and their followers, do not appear to know or even suspect that this is a serious problem with consensus theory and models. It will only get worse for them as the model output and data diverge, more and more, every year. On the Skeptic side, we have more than one theory. As more data becomes available, it will support some theory above the others and something better will likely come soon. Soon could be days, weeks, months or years. I really suspect it will not take decades more. The data is getting better, resisting consensus based corrections, all the time.
Don Monfort | October 4, 2014 at 8:13 pm |
The self-inflicted demolition of jimmy dee's credibility continues.
Tom Fuller | October 4, 2014 at 9:13 pm |
Mr.
Pierre-Humbert has a lot of respect in the circle of those who are most alarmed about climate change and his arguments need to be taken seriously. However, this piece suffers from the most common of Alarmist fallacies, that attacking the reputation or standing of your opponent is more important than countering his/her arguments. Mr. Pierre-Humbert spends over one page of a two-page article trying to delegitimize Mr. Koonin. When he finally gets around to Koonin's arguments, it's easy to see why. They basically amount to 'Koonin's measuring A instead of B' or 'he's counting from Date A instead of Date B.' But Koonin didn't do the measuring or counting. He (exactly like the IPCC) is assessing the measuring and counting done by others.
As a brief aside, Pierre-Humbert notes a doubling of the rate of sea-level rise in the century before AGW is thought to have started and seems to think that's an effective argument on the issue because the rate of sea-level rise 'doubled' in the century afterwards.
The Alarmist Brigade would rather call their opponents senile or out of touch with the mainstream literature than engage with the (best of) their arguments. A lot of foolishness is put forth by skeptics (Iron Sun, Sky Dragon, etc.) But the best of their arguments need to be considered seriously. After all, a similar amount of nonsense issues forth from the Alarmist camp as well. One of the reasons for their ad hominem attacks is that the best of the skeptic/lukewarmer arguments are extremely tough to counter. All the more reason for Mr. Pierre-Humbert to save time and energy by abandoning his attacks on Mr. Koonin's reputation and qualifications. Maybe he just doesn't have anything else.
I find it funny how many alarmists, here and in general, dismiss the types of arguments Koonin made as "talking points". But why are they talking points? Because they were gotten from real scientists who do real science, and they are "extremely tough to counter."
The fact that they're "talking points" doesn't mean they're wrong, it's the fact that they're so "extremely tough to counter" (i.e. probably right, at least in context) that makes them good talking points. I'm reduced to repeating myself trying (probably without success) to get my point across. Sigh. DocMartyn | October 4, 2014 at 11:45 am | "Greg Laden @gregladen This tweet is simply more proof that you are not interested in civil conversation. UR a liar and UR dangerious to the future. @curryja 1:23 PM – 2 Oct 2014 Coon Rapids, MN, United States" Surely this is actionable? Wagathon | October 4, 2014 at 12:01 pm | Why an atheist radio station would want to interview the prophet of a millenarial cult like Mannatollah Mike is a mystery to me. ~Mark Steyn (Mann is an island) captdallas2 0.8 +/- 0.2 | October 4, 2014 at 12:16 pm | I don't think so. I believe there is some legal issue, mens rea? Wasn't it Steven Mosher that said that if you trust the models you need your head examined? DocMartyn | October 4, 2014 at 12:31 pm | 'UR a li@r' is a bit specific. Perhaps he meant 'lair', but misspelled it captdallas2 0.8 +/- 0.2 | October 4, 2014 at 2:51 pm | Doc, there is a pretty good chance that Greg's cheese slipped off his cracker. Taking "action" against someone that confuses bookmarking with actual "favoritism" is a waste of time or has an emotional meltdown at the AGU conference is not going to do much good. Just try to remember him back in the good old days when he worn his jester's hat proudly. Trust the model for sensitivity was my exact position aaron | October 5, 2014 at 9:27 pm | Wheel is spinning, but the hamster is dead. She may well be dangerious, whatever that means. I think he means HIS future. …."dangerous to the future." Why is it that just about every one of these guys is borderline illiterate. Such drudges. 
Theo Goodwin | October 6, 2014 at 5:06 pm | This tweet is simply more proof that you are not interested in civil conversation limited to consensus climate scientists. UR a liar and UR dangerious to the future of consensus climate science. @curryja Fixed it. Matthew R Marler | October 4, 2014 at 11:59 am | Here is an interesting comment from the ClimateAudit post on the pages2k revision: They show the following diagram of changes – all in the direction of increasing MWP warmth relative to modern warmth in their reconstruction. These are large changes from seemingly simple changes in individual proxies – a longstanding CA theme. It is "well known" that PC estimation is "unstable", meaning small changes in data produce surprisingly large changes in the obtained PC coefficients. It is one of the reasons why the selection of time series for inclusion/exclusion is so important. I put "well known" in quotes because the effect can be surprising in actual cases even to people who know of the problem, and Mann and others in their writings show what might be called an inconsistent awareness of the problem. Brandon Shollenberger | October 4, 2014 at 12:16 pm | Er, yeah, but PCA wasn't used for this. Why talk only about the effect on a methodology which wasn't used? Matthew R Marler | October 4, 2014 at 4:03 pm | Brandon Shollenberger: Er, yeah, but PCA wasn't used for this. Why talk only about the effect on a methodology which wasn't used? It just seemed interesting. RiHo08 | October 4, 2014 at 12:03 pm | Nick Lewis has a plot of Arctic Sea Ice which he posted on Lucia's Blackboard this last September: http://moyhu.blogspot.com/p/latest-ice-and-temperature-data.html#fig1 The graphs are updated daily and show this year (2014) and the current nadir and recovery of Arctic Sea Ice. Also graphed are other years of Arctic Sea Ice so one can compare the current 2014 with other years. 
In reading the Curry link regarding Antarctic Sea Ice the tone of the writers was that while the current record Antarctic Sea Ice extent still needed some work to explain, that the scientific understanding for the Arctic Sea Ice extent was already a known: (AGW) natch. Nick Lewis supplies a lot of data that is useful, but for the life of me, I can not understand why this year's Arctic Sea Ice extent is greater than five previous years during the satellite era. Has anyone seen a plausible explanation why there has been a recovery of Arctic Sea Ice Extent? This is especially perplexing to me in the face of 2013 being the hottest year ever due to global warming which made it so hot that an Australian tennis tournament had to be postponed for a day? That would be Nick Stokes. RiHo08 | October 4, 2014 at 1:03 pm | Capt'nDallas Of course you are right! My fault of perseveration on names. My apologies to both Nick Lewis and Nick Stokes. When the Arctic Sea Ice sets a low record, that causes a lot more snowfall that prevents the next few years from being warm enough. The new low record will likely be set, but it needs a few years to recover from the more snow fall that happens after the record low years. You can look at the data. Sea ice gets lower, lower, lowest and then a lot higher. It snows more when the oceans are more open. Ragnaar | October 4, 2014 at 3:37 pm | "Has anyone seen a plausible explanation why there has been a recovery of Arctic Sea Ice Extent?" Some surface and near surface regions of the Arctic ocean have cooled promoting sea ice formation. That's how I read what Wyatt writes: http://www.wyattonearth.net/thestadiumwave.html As the Arctic ocean loses ice, more heat should transfer from that ocean into the atmosphere. If that sea ice comes back, it should cool the local atmosphere in the short term. Ragnaar Thank you for the link to Marcia Wyatt's explanation of the "stadium wave". 
I do need to read and re-read explanations of a scientific publication stated in slightly different ways for me to slowly comprehend what is being said. Are you saying that the explanation why the Arctic Sea Ice Extent appears to be recovering is that the various indices captured in the stadium wave hypothesis are cycling back to have Arctic Sea Ice recover? Would the prediction then be: Arctic Sea Ice will recover to what it had been some, say 40 or 50 years ago? Faustino | October 4, 2014 at 4:57 pm | RiHo, an Australian tennis postponement does not necessarily indicate historically extreme temperatures, but a different attitude to demands on players and attendant health risks. Cf sliding roofs on courts. RiHo08 | October 5, 2014 at 11:58 am | Said somewhat tongue-in-cheek: a brief sports interruption has the same scientific weight as utterances from consensus gurus. beththeserf | October 4, 2014 at 7:58 pm | Some say the world will end in fire, Some say in ice. vhttp://www.carbonbrief.org/media/337445/nsidc_antarcticseaiceextent_22sep14.png IPCC say, ' t' is a puzzle.' Say, if the IPCC is puzzled, that's a first! Beththeserf I tried to open the link and got a message that there was no application to open the link. Do you have another avenue to this source? beththeserf | October 5, 2014 at 10:20 pm | Jest google http://www..carbonbrief.org then click on ter thread, 'Antarctic sea ice hits new high' Dontcha know there is always a "yes…but" in the Climate Change lexicon? It tumbles off their lips like a melt pond draining. "But overall, the Arctic sea-ice loss is over three times greater than Antarctic gain." We don't know why there is sea ice gain down south BUT we can discount our ignorance of such things by pointing to the other pole with sea ice loss. http://psc.apl.washington.edu/wordpress/wp-content/uploads/schweiger/ice_volume/SPIOMASIceVolumeAnomalyCurrentV2.1.png Well, everyone is looking at the wrong chart. 
Sea ice volume is the only measure that really matters. The other measures are a combination of luck and weather. The sea ice volume started to recover in 2007. The volcanoes in 2010 and 2011 hammered the sea ice with ash and caused major sea ice loss. Since the dirty ice melted the volume has been increasing steadily. This winter should get two standard deviations above the trend – breaking the trend. You may be right about looking at Arctic Sea Ice Volume instead of Extent. Do you have a suggestion as to why Arctic Sea Ice Volume may be recovering? I have been told, by very reliable IPCC consensus sources that Arctic Sea Ice is in a death spiral and not to recover until I end my sinful ways…SUV and all that. (SUV…new tires, new brakes, new windshield wipers at 125,000 miles, getting ready for this winter's snow and ice.) PA | October 5, 2014 at 12:31 pm | Ice is very sensitive to soot/ash. Ice has an albedo (reflection coefficient) of 0.9 which means it only absorbs 10% of incoming energy. Soot/Ash is around 0.1. So dusting ice with soot/ash is the same as making the sun 9 times brighter. Chinese soot, volcanic ash, soot from Canadian forest fires, and dust from where ever are all to blame. So in general – if the ice melts faster it is a combination of more dirt/higher temperatures. If it melts slower either some of the dirty ice has melted and the newer ice is cleaner or the temperatures are lower If the ice freezes more (increased volume) the temperatures are colder. The albedo has been increasing lately. The summer temperatures have been lower. For arctic temperatures see link below: I've wondered also if the record warm northern pacific and a potential el nino might interact with the weak polar vortex/wavey jet stream to recharge ice. ordvic | October 4, 2014 at 12:03 pm | I looked a little further into Milankovitch cycles and finally found what I was looking for, sort of. 
The Eemian interglacial lasted from about 130,000 bp to about 115, 000 bp or about 26,000 to about 28,000 yrs. I originally thought that the eccentricity must kick in to take temps down. Now I see the interglacial occurs entirely in the round orbit. The predominate factor driving temps up and down appears to be axil tilt as Milankovich surmised. The elipital orbit lasted from 115,000 to younger dyas or to about 20,000 to 15,000bp. Temps again rose into the Holocene Maximum starting about 11,500 bp. During the entire glacial period temps went up and down at a lower level as determined by axial tilt combined with precession. They say the average interglacial is about 12,000 years but that Eemian and the present one look to be about 28,000 yrs long. So the 28,000 seems too match up with axial tilt but not 12,000. If the average is true it must be how tilt and precession work together? Anyway, we are at peak now on the beginning of the down slope that would hit bottom about 17,000 years from now if it is like Eemian. So I would imagine it'll be at least 5000 before the colder times start to show up. That is unless of course CO2 somehow mitigates that into some kind of 100,000 yr goldilocks climate as some scientists suggest. http://www.gcrio.org/CONSEQUENCES/winter96/gifs/article1-fig3.gif http://phys.org/news/2013-01-deep-ice-cores-greenland-period.html Where did I go wrong? Jim D | October 4, 2014 at 12:25 pm | Currently the tilt and eccentricity favor northern ice because the northern summer is furthest from the sun, so we are well into the cold Milankovitch phase already. However, instead of increasing, the Arctic ice cover is now decreasing due to other bigger factors. How do you explain the Antarctic ice then? ordvic | October 4, 2014 at 1:15 pm | Wiki says we are at 0.017 eccentricty that is closer to 0.000055 low eccentricity than to 0.0679 high eccentricity. So wouldn't that indicate it is still fairly round? 
Also we are at 23.44 axil tilt that is half way between 22.1 and 24.5. I guess you are right as that would indicate we are half way from peak heat at 24.5. That would have to coorespond with a shorter interglacial period though since we've only been in it for 9000,00 to 10,000 years. If we are past peak it would mean this one will only go another 8,000 or 10,000 for a total of about 18,000 to 20,000. That is 6,000 to 10,000 short of Eemian although still nearly twice the average. DocMartyn | October 4, 2014 at 1:39 pm | supposition I get mixed up easily. The temps started to go up rapidly about 15,000 bp and peaked in the holocene maximum btw 8,000 to 4,000 bp. If that is correct then the cycle would complete in 7,000 to 11,000 yrs So that would mean from 8,000 bp peak we'd already be 1000 past a complete cycle and if it were 4,000 bp peak we'd still have 7,000 to go. ordvic, the roundness is a mitigating factor which may explain why we don't expect an Ice Age in this tilt cycle, but the cooling after the Holocene Optimum is consistent with the precession forcing. When I said "tilt" I meant precession, not angle. Phatboy, You have a good point because both the axil tilt and the apisidal precession (in combination with CO2) should have the Antarctic completely melted by now. Go figure! The Ice Ages don't spread from the south, and there is a good reason for that: no significant continents within range to glaciate. maksimovich | October 4, 2014 at 8:04 pm | The Ice Ages don't spread from the south Um yes they do eg Vandergoes. The evidence for early onset of maximum glaciation provides renewed support for a Southern Hemisphere 'lead' into the LGMand some indication of its cause. Strong cooling in the south commences during, or soon after, the phase when perihelion occurs during the Austral winter (30–35 kyr ago), which means that the local insolation budget was at its lowest level for the entire precessional cycle (Fig. 2). 
At the same time, insolation in the Northern Hemisphere was still in a positive phase, giving a local radiation budget higher than that at present. The northern 'driver' is therefore an unlikely trigger for the onset of maximum glaciation in the south and cannot have been directly responsible for a Southern Hemisphere lead. Jim D – Arctic ice has been in a pause of its own since about 2007. http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seaice.anomaly.arctic.png phatboy "How do you explain the Antarctic ice then?" When it gets really cold water hardens. ordvic " I originally thought that the eccentricity must kick in to take temps down." My understanding of Milankovitch is a little different. It takes a Milankovitch maximum to take us out of an ice age – but the peak is short and thereafter things are metastable until the climate drops back to stable icy mode. I saw somewhere a statement we are about 2 W/m2 from glaciation. "However, there are two important sources of heat for surface heating which results in "basal sliding". One source is geothermal energy. This is around 0.1 W/m² which is very small unless we are dealing with an insulating material (like ice) and lots of time (like ice sheets). The other source is the shear stress in the ice sheet which can create a lot of heat via the mechanics of deformation." "Once the ice sheet is able to start sliding, the dynamics create a completely different result compared to an ice sheet "cold-pinned" to the rock underneath." "..Moreover, our results suggest that thermal enabling of basal flow does not occur in response to surface warming…" "…Our simulations suggest that a substantial fraction (60% to 80%) of the ice sheet was frozen to the bed for the first 75 kyr of the glacial cycle, thus strongly limiting basal flow. 
Subsequent doubling of the area of warm-based ice in response to ice sheet thickening and expansion and to the reduction in downward advection of cold ice may have enabled broad increases in geologically- and hydrologically-mediated fast ice flow during the last deglaciation. Increased dynamical activity of the ice sheet would lead to net thinning of the ice sheet interior and the transport of large amounts of ice into regions of intense ablation both south of the ice sheet and at the marine margins (via calving). This has the potential to provide a strong positive feedback on deglaciation." Looks like Marshall and Clark http://scienceofdoom.com/2014/04/14/ghosts-of-climates-past-nineteen-ice-sheet-models-i/ What I think they're saying to some extent that it's a function of mass which is a function of time. Given enough cold time and transfer of liquid water from the oceans the to the ice sheets we will get a mechanical collapse. A slow motion avalanche to an interglacial. Are we collapsing the ice sheets now? If we are, there would be a reduced weight and insulation of ice which would reduce basal sliding and push in the direction of ice sheet stabilization. PA, Since the axil tilt is a 41,000 yr cycle that would leave only the apisidal precession of 21,000 years to coorespond with the short interglacial period. It seems to be how the combination of all three line up to make it happen? I'm just trying to figure out where they are now. I know the tilt and eccentricty approximately and supposedly the apisidal is north pole faces at furthest eliptical distance and south pole at closet right now. But this still doesn't tell me how far we are from peak on the downside. The apisidal is mainly what is throwing me off right now. It snows more when oceans are warm and then, after hundreds of years, it gets cold. It snows less when oceans are cold and then, after hundreds of years, it gets warm. 
Milankovitch cycles work with this sometimes and work against this sometimes, but Milankovitch cycles do not start or stop the snowfall. Warm oceans with no or low sea ice cause the snowfall. Cold frozen oceans stop the snowfall. Lucifer | October 4, 2014 at 3:41 pm | If more summer sunshine means less Arctic sea ice, then we can see that there will be a continued decline in Arctic sea ice for the next 100,000 years, regardless of how much more CO2 there would be: http://upload.wikimedia.org/wikipedia/commons/9/90/InsolationSummerSolstice65N.png Fernando Leanme | October 4, 2014 at 6:38 pm | Doesn't the reduced influx in the Southern Hemisphere compensate that effect? cwon14 | October 4, 2014 at 12:11 pm | http://m.canberratimes.com.au/act-news/liberals-outraged-that-kill-climate-deniers-play-is-funded-by-the-act-government-20141001-10ogo7.html#ixzz3EspGOkN5 The "Play"……."Kill Climate Deniers"….government funded, naturally. Bad Andrew | October 4, 2014 at 12:30 pm | Has The Global Warming Alarmist Marketing Campaign reached it's end date yet? They're still in hallways of the UN questioning free markets like Fanboy, it's no where near the end. jim2 | October 4, 2014 at 12:32 pm | From the oil boom article, linked in main post, from Vox: 4) The fourth, is the slight flagging of the natural gas boom. Careful market watchers expected 2012 to be a high point, when gas was ridiculously cheap and pushed out enormous amounts of coal. But coal clawed back a bit last year. That shouldn't have been a surprise — gas prices were unnaturally low in 2012 — and it's not an indication of long-term weakness in the idea that US gas production can keep growing. But it's a useful corrective to the idea that the gas boom would take care of country's climate problem all by itself. (end quote) I think this guy may be wrong about the price being "unnaturally" low, at least in the medium term. 
There are real worries now that the price will stay so low that companies will have to start shutting in higher operating cost nat gas wells. Pipelines don't yet exist to deliver all the gas being produced, so gas is being flared. Once those are in place, it will mean even more supply to market. This does not add up to a higher nat gas price scenario. AK | October 4, 2014 at 1:09 pm | This is where a lot of my commenting here has been pointing: http://3.bp.blogspot.com/-kipqa22-G6A/VDANRIBTY8I/AAAAAAAAAc8/LxPwVwwg0bo/s1600/Solar-yeast-growth-graph.jpg What is one of the big risks investors in gas infrastructure face? That exponential growth of solar. Sure, it might not happen, and even if it does, they'll get a decade or so of ROI. But how can their bets be hedged? If a working prototype bioconverter of hydrogen and CO2 to methane could be demonstrated, with good expectations that the cost could be brought down along with solar. In that case, solar energy (e.g. panels) would be competing with only the wells, instead of the whole industry. The trade-off for solar would be between solar (e.g. panels)+inverters+long distance transmission vs. solar(e.g. panels)+electrolytic hydrogen+bio-conversion to methane+long distance pipes(+gas-fired combined turbines). And, AFAIK, carrying equivalent amounts of methane thousands of kilometers (miles) is orders of magnitude cheaper than carrying electricity. Which would often make up for the lower efficiency of the electrolysis/bio-conversion steps. However, AK, there is no reason to believe this graph, which I would describe as preposterous. Capital intensive infrastructure penetration cannot happen this fast. http://ars.els-cdn.com/content/image/1-s2.0-S0360319914008489-gr2.jpg Capital intensive infrastructure penetration cannot happen this fast. UK Mobile Phone Subscriptions per 100 people; note that there are more subscriptions than inhabitants in the UK. This is because many people have more than one phone, or SIM card [13]. 
(Fig.2 from Mobile phone infrastructure development: Lessons for the development of a hydrogen infrastructure by Scott Hardman and Robert Steinberger-Wilckens International Journal of Hydrogen Energy Volume 39, Issue 16, 27 May 2014, Pages 8185–8193.) I suppose you think mobile phones are cute little toys you buy in a store? Don't forget the towers, data transmission, and software/protocols necessary for those phones to talk to one another, and the land-line system. Or the data infrastructure (hardware, software, and protocols) the Internet also depends on (I'm not going to provide links for that). From the article linked above: Previous studies use the example of how internal combustion engine (ICE) vehicle infrastructure was developed in the late 1800s and early 1900s [5] and [6]. […] One reason for the success of the ICE was due to there being an existing petroleum supply network. This network supplied petroleum for lighting and for stationary petrol generators, as well as the farming industry [5]. This meant that ICE outcompeted BEVs and steam engine vehicles precisely because infrastructure was already present. The availability of infrastructure was a compelling reason to purchase an ICE vehicle over competitive vehicles. The mobile phone was a disruptive innovation; this can be confirmed using the 3-point disruptive technology criteria. The criteria states that innovations are disruptive innovations if they require new infrastructure, are produced by new market entrants and not incumbents, and provide a greater level of service to the end users [7]. […W]ith economies of scale and technological improvements handset unit costs were continually reduced and in around 30 years the mobile phone went from high cost low volume series in niche markets to occupying the whole landscape and achieving an enormous mass-market share (see section 1.2). [my bold] Mobile phone use would not be possible without the development of infrastructure. 
Consumers would not purchase a device that could not be used. As with FCVs there was a need to make a decision to invest in infrastructure before the market entry of the product could begin. The decision to invest is not an easy one, as the economic incentives to develop an infrastructure that currently has no customers are hard to identify. Nevertheless, without the development of infrastructure any technology reliant upon it will surely fail. Mobile phone infrastructure has been continually developed over the past 4 decades. An overview of the increase in network capabilities can be seen in Fig. 5 as measured by download rates, also know as band rates. A parallel logic can be applied to solar power. Both the options I mentioned above depend on mature infrastructure: electrical grid, and gas storage/distribution/use. Solar power, like Cell towers and data transmission infrastructure, would have to be built, bought, and installed. But the economics around such infrastructure change are no more "preposterous" than those of the Internet, or cell phone infrastructure. "Capital intensive infrastructure penetration" DID happen that fast. TWICE! Perhaps this relevant. Photovoltaic power doubling every 18 months. http://www.motherearthnews.com/~/media/Images/MEN/Editorial/Articles/Online%20Articles/2013/08-01/World%20Solar%20Power%20Topped%20100000%20Megawatts%20in%202012/world%20solar%20power%20graph%201%20png.PNG AK, I used the term capital intensive infrastructure precisely to distinguish solar power installations from small consumer items like cell phones. Buying a cell phone and putting a $30,000 solar system on your house are very different sorts of investments. Most people can afford the former while few can afford the latter. I think solar will be a niche technology for a long time, unless its use is mandated of course. Buying a cell phone and putting a $30,000 solar system on your house are very different sorts of investments. But what about the infrastructure? 
http://upload.wikimedia.org/wikipedia/commons/2/27/Telstra_Mobile_Phone_Tower.jpg AK if you plot the number of Ebola cases in the USA, extend the curve, you will note we will all be dead before the end of the year. @DocMartyn… Really small sample there: One case. (Not counting the ones who already had it and knew it before they returned from Africa.) OTOH we have decades of experience with the exponential growth of solar PV. Absent massive storage capability, solar (and wind) cannot replace fossil capacity in large scale power systems, due to intermittency. What they can do is reduce fossil fuel use but that makes the system as a whole more expensive because the fossil plants earn less. It is very expensive to have a fossil plant sitting there just to run in the dark or when the wind does not blow. Governments do not seem to know this. http://freeradicalnetwork.com/wp-content/uploads/2014/09/Ostrich-man-head-in-sand.gif http://i.dailymail.co.uk/i/pix/2010/04/07/article-1264092-081D0A9F000005DC-144_468x339.jpg The big risk everybody faces is all the solar panel owners expecting a subsidy because they can't generate electricity on a steady basis. They could use it to buy battery backup. rls | October 4, 2014 at 10:39 pm | AK: Your faith in the exponential advancement of technology is shared by me. However, unlike the mobile phone industry at its beginning, the power industry infrastructure is established and will change only over time, perhaps generations of time, as demand increases and existing facilities become uneconomical. […] unlike the mobile phone industry at its beginning, the power industry infrastructure is established and will change only over time, […] I suppose that's what most people thought about the land-line phone system, too. I have an idea. Instead of pylons, let's suspend the electric grid main lines using drones. Attached to the lines, they could be powered by corona discharge through a small spike. 
That way, we could direct power wherever it is needed! Better yet, scrap the big long-distance grid entirely, and replace it with cheap gas-fired generators feeding local micro-grids. rls | October 5, 2014 at 12:13 am | AK: Mobile phone got it start, not as a replacement for land lines, but as an extra source of communications. Also I'm uncertain regarding the envisioned future of solar. Is it going to be a consumer product or an industrial product? My original comment assumed that the customers would be the power companies. jim2 | October 5, 2014 at 8:54 am | The drone idea will help you get power from the Patagonia Desert to NYC. AK | October 5, 2014 at 10:03 am | Also I'm uncertain regarding the envisioned future of solar. Is it going to be a consumer product or an industrial product? I've actually been spending a good deal of my free time trying to understand the (potential) future economics of solar power, and some of the arguments I get here help to stimulate my thinking (much as it may sometimes appear otherwise). There are several important points that must be kept in mind: • The energy situation is very different in different places: you can't assume that the US, or Western Europe, works as a model for, e.g., India, Central Asia, China, or, especially, Africa. • There's a vast array of potential technologies, even looked at carefully and critically. For instance, I'm totally skeptical of anything having to do with hydrogen for vehicles, or for major energy storage and/or transport. But, after taking a similarly critical look at, e.g. panels and concentrating PV, I see many potential problems, but none that don't look solvable. • There are huge potential synergies that don't seem to have been looked at. Desalination and pumping come to mind immediately. These could probably be made cost-effective near-term, without the need for (energy) storage, inverters, or distance transmission. Maybe not everywhere, but many places. 
• There's a variety of technologies "in the pipeline" that could (IMO will) serve as "enabling technology" for innovative solar development. I would include: • cheap, mass-producible inflated structures, • "static robots" where robotic technology (sensing and control) is used to stabilized a structure in a single shape against distorting forces (e.g. wind pressure), • floating buildings and other structures, • cheap sunlight collection via "light pipes", • and cheap, mass-produced tracking mechanisms, suitable for concentrating PV and even enhancing the value of panels. • "Moore's Law". The exponential decrease in cost/price of information technology (IT) will serve as enabling technology for the items listed above, as well as others not thought of yet. So which technologies will grow rapidly for which markets? I can only guess, as do the industry "forecasters". Africa will probably see a much larger focus on local, small-scale development than the US/Europe. India and China may use some mix of small-scale and large (i.e. more like the US). But that's only guessing. In projecting, I start with the assumption that people like Ray Kurzweil are right about the exponential growth, although I predict that any specific technology will follow a more "yeast growth" pattern. (Thus the picture above, where I overlaid an actual yeast-growth curve over Kurzweil's exponential curve.) There's little or no difference in the curves during the early growth stage, but as the technology matures the growth tapers off. The question is: what technologies will contribute to that growth, and how? To grow as the curve predicts in developed areas, solar will have to be connected to the grid. According to the standard paradigm, this will require inverters (to convert DC from the PV cells to AC suitable for long-distance transmission), and transmission from mostly remote sites to appropriate grid connections. 
The alternate paradigm I'm pushing would involve converting it to methane (or oil) right at the collector, and using equally mature gas technology for storage, transport, and generation. Yes, there would be some efficiency losses, but the cost of transporting gas is (AFAIK) orders of magnitude cheaper than electricity, and all the technology could be made small-scale: hydrolysis and bio-conversion for a small collector area could probably be fit into a Coke bottle, only a few feet from the actual cells. Economies of scale could be achieved by simply making millions, or billions, of such units. Improvements in sensing and communications technology, following "Moore's Law", would mean that each unit could keep track of itself, and defective units could be replaced by automated systems, perhaps supervised remotely by engineers. The exact same technology could be used on a more "one-off" basis for distributed power in, e.g. African villages. Small scale gas compression, storage, and perhaps even distribution (to homes) could allow the result to both power generators and replace wood or dung for cooking and heating. kim | October 5, 2014 at 10:10 am | I propose something as magical as Chlorophyll. The problem with chlorophyll is the very large number of life-forms adapted to eating it, and the life-forms that deploy it. Not to rule out Joule, Unlimited's efforts… AK: Thank you. You have obviosly given this much thought and work. Are you familiar with the work of Vaclav Smil? He writes that "Perhaps the most misunderstood aspect of energy transitions is their speed. Substituting one form of energy for another takes a long time.". He believes it will take generations to transition from our existing fossil fuel infrastructure to a renewable energy infrastructure. I tend to agree with him and don't see the growth of computers/mobile phones as comparable to the power industry. Those were new markets and did not involve abandoning a huge existing infrastructure. 
A few years ago I invested in a 95% efficient gas furnace and it is still going strong and saving me money. It will be many years before I buy a renewable energy device for my house. My electric company has up-to-date gas turbines and a nuclear facility and, I believe, more nuclear in the planning stages; it would be foolish for them and costly to me to abandon those facilities and switch to renewable, especially faced with increased energy demand. However, it would be acceptable to meet increased demand with reasonably priced renewables; in that case it will take the generations that Smil writes about. @rls… This is exactly why I'm pushing the electrolysis/bio-conversion to methane approach. You keep your gas furnace, your power company keeps its gas generators, but all that cheap solar is used to generate gas to put into that system, in place of what comes from wells. Individual wells, AFAIK, tend to last less time than all the infrastructure for storing and transporting gas. For that matter, they could put solar power/gas installations on the land used for wells, and feed the resulting gas into the same pipes used for the wells. I know I need pictures. I'm working on it. In my free time. rls | October 5, 2014 at 1:53 pm | AK: Got it, please forgive my neurons, not my fault, I wasn't informed. Have you seen this: Washington D.C. — The Department of Energy has issued a draft solicitation that would provide up to $12.6 billion in loan guarantees for Advanced Nuclear Energy Projects. http://www.energy.gov/articles/department-energy-issues-draft-loan-guarantee-solicitation-advanced-nuclear-energy-projects Thanks. Seen it now. Like I said, smart money won't invest in nuclear without guarantees. Still, it's probably worth it for strategic reasons, as well as a fallback if solar doesn't keep up its gallop towards the price floor. Answering Vaclav Smil Watts Up, Vaclav? 
Putting Peak Oil and the Renewables Transition in Context by Chris Nelder June 5, 2013 I'd like to pick out some blockquotes, but I'm going to be away from my computer for most of the rest of the day. Have other things to finish. AK, you might have noticed that I am very sceptical about any attempts to predict the future. In this context, considering potential technological advances and pricing, I recall a 1985 assessment at Australia's Bureau of Industrial Economics. In 1975, BIE (or its parent department) forecast which ten industries would grow fastest in the next decade. In the event, none of the industries which grew fastest from 1975-85 (all in microelectronics) were on the list, as they did not exist in 1975. I say again, I say again, we must pursue policies which enhance our capacity to deal best with changing circumstances, whatever they may be, rather than putting unwarranted faith in projected and possible futures, particularly those predicated on technological change (or, of course, imperfect modelling of possible temperature rises). Peter Lang has asked me (below) to look at some work on discount rates; if I manage to comment, it might be relevant to this sub-thread. A higher natural gas price scenario is reasonable. The number of rigs drilling for gas is down because the price is too low. Companies drill for condensate, and light oil with associated gas, but they stay away from dry gas. Eventually the reduction in the number of wells being drilled is reflected in a lower gas production capacity. The lower capacity leads to price increases. It's a cyclic phenomenon. Eventually the industry shakes down, the weaker companies are bought at distress prices and the competition streamlines the number of players. New players appear to operate as cheaply as possible, and the cycle moves on. But the prices continue to rise. Also, the Obama administration is allowing some gas exports. This in turn should increase prices. 
I don't think the USA has enough gas to make an impact supplying the vehicle fleet. "Higher" is a relative term: http://chart.finance.yahoo.com/z?s=UNG&t=5y&q=l&l=off&z=l&a=v&p=s&lang=en-US&region=US What's relative is the sense of time spans. Drilling for gas is uneconomic at this time. One reason is the over investment by mullets buying gas funds. Over the next 30 years the prices will rise. Natural gas won't last forever, that's for sure. But there's more there than anyone knows, I'm betting. Looks like industry agrees with you, Fernando. White said he supports the Keystone XL, but also noted that despite the political hurdles facing that project, the American pipeline sector overall is booming. "We've seen more pipeline construction in the U.S. over the last seven years than in the history of the United States," White said. "I'm not saying this to minimize the significance of Keystone or anything," White said, "but by historical standards, we're moving pretty fast on midstream." Pickering said he doesn't expect the boom in production from shale to slow in the near future. "The next big thing is the current big thing," Pickering said. "We're 10 years into the shale story… and it's probably a 20-year or 30-year thing." http://fuelfix.com/blog/2014/09/23/panel-dont-expect-high-natural-gas-prices-any-time-soon/ Don't forget sea-floor methane hydrates. The robotics needed to operate on the ocean floor are subject to "Moore's Law". And if there aren't any people present, the very high pressures aren't really a problem. Fernando Leanme | October 5, 2014 at 3:08 am | I think many of you who dismiss the difficulties getting the unconventional hydrocarbons don't grasp the details. Working offshore in deep water isn't subject to Moore's Law. Go read about Petrobras and their project to produce the presalt fields. And tell me, do you think methane hydrates are found in a nice little pile on the sea floor? 
Do you visualize something we can suck up with a vacuum cleaner? Alexej Buergin | October 4, 2014 at 12:38 pm | "China's one-word anster to Obama" reminds me of this: A man orders "flied lice" from a Chinese waiter. The six-word anster: "It is fried rice, you plick." Funny. I actually got around to reading that article because of this joke. But more seriously, I suspect the key is: –Western countries also need to remove "obstacles such as IPRs [intellectual property rights]" to "promote, facilitate and finance the transfer" of "technologies and know-how" to developing countries in advance of any future climate deal; They'll drop all the other demands to get free access to the patented technology. Well, until the CAGW people can show significant warming in the raw data for this century, it is just a game and China wants to win as much as it can. Without actual (raw data) warming, unless someone pays you to clean up there is no incentive. The pause is predicted to go on until 2030 so we have some time to kill. If the pause goes to 2030 CAGW has a hard time justifying any action. China will be at peak coal and there won't be another big player to keep emissions rising. It is hard to defend predictions of more than a 1°C temperature rise if temperatures are flat for a third of a century. A 1°C temperature rise by 2100 doesn't justify taking any action. From The Carbon Brief article on the Antarctic: But at the North Pole the decline of Arctic sea-ice continues to accelerate. Scientists haven't yet been able to pin down why the opposite is happening in the Antarctic. Are these guys looking at the same charts as the rest of us? Danley Wolfe | October 4, 2014 at 12:45 pm | Pierrehumbert has been a lead author on the IPCC Assessment Reports and was a co-author of the National Research Council report on abrupt climate change. His field of specialization is developing idealized mathematical models to solve problems in climate science. 
He is also a frequent contributor to RealClimate. Pierrehumbert is the last person in the world likely to give a fair and balanced perspective on climate science, and one of the people with the most to lose by doing so. Any more questions? Diag | October 4, 2014 at 1:37 pm | Another consensus bites the dust: http://abcnews.go.com/Health/story?id=117310 "Was Ebola Behind the Black Death?" Controversial new research suggests that contrary to the history books, the "Black Death" that devastated medieval Europe was not the bubonic plague, but rather an Ebola-like virus. The details in the article are quite convincing. Maybe in a thousand years someone will write an article like this about climate science. "If you look at the way it spreads, it was spreading at a rate of around 30 miles in two to three days," says Duncan. "Bubonic plague moves at a pace of around 100 yards a year." Pneumonic plague… [I]s more virulent and rare than bubonic plague. The difference between the versions of plague is simply the location of the infection in the body; the bubonic plague is an infection of the lymphatic system, the pneumonic plague is an infection of the respiratory system, and the septicaemic plague is an infection in the blood stream. Typically, pneumonic form is due to a spread from infection of an initial bubonic form. Primary pneumonic plague results from inhalation of fine infective droplets and can be transmitted from human to human without involvement of fleas or animals. Untreated pneumonic plague has a very high fatality rate. The genome of Yersinia pestis, the bacterium that causes bubonic plague, recovered from human remains at East Smithfield. http://www.nature.com/news/2011/111025/full/478444a.html Yeah, well, I was referring to the "details in the article are quite convincing." Anybody with any knowledge of "Yersinia pestis" knows about the pneumonic version. Thus, the article was just the opposite of "quite convincing." 
Is my son who is training as a thoracic physician in a growth industry? Thanks for the info. After reading the Wikipedia and CDC pages the article doesn't look convincing at all. Don B | October 4, 2014 at 2:03 pm | California drought: "To test their theory, the Stanford team applied advanced statistical techniques to a large suite of climate model simulations." In 1994, when the NY Times was not the climate campaigner it is today, it noted that droughts in the California region were much, much worse in the past. "BEGINNING about 1,100 years ago, what is now California baked in two droughts, the first lasting 220 years and the second 140 years. Each was much more intense than the mere six-year dry spells that afflict modern California from time to time, new studies of past climates show. The findings suggest, in fact, that relatively wet periods like the 20th century have been the exception rather than the rule in California for at least the last 3,500 years, and that mega-droughts are likely to recur." http://www.nytimes.com/1994/07/19/science/severe-ancient-droughts-a-warning-to-california.html Data trumps models. As I've said before, they don't build huge dams in areas where water is reliably plentiful. Sure they do, for hydro power and to a lesser extent flood control. They do not build them for water supply. Well, Ok, but I don't believe that either of those were the main considerations for building dams in places like California sunshinehours1 | October 4, 2014 at 2:32 pm | Almost as dry as 1923. http://sunshinehours.wordpress.com/2014/10/04/california-drought-almost-as-dry-as-1923-and-1976/ Explaining Extreme Events of 2013 from a Climate Perspective American Meteorological Society goes all-in for James Hansen's climate-change worldview! 
AMS Summary and Broader Context This report contributes to the growing body of evidence that human influences on climate have changed the risk of some extreme events and that scientists are increasingly able to detect these changes. effects of human-induced climate change, as found for the Korean heat wave of summer 2013. These individual examples are consistent with the broader trends captured in the latest IPCC (Stocker et al. 2014) statement, "it is likely that the frequency of heat waves has increased in large parts of Europe, Asia and Australia." Beyond the science, there is an ongoing public dialog around climate change and its impacts. It is clear that extreme events capture the public's attention. And, indeed, they should because "people, plants and animals tend to be more impacted by changes in extremes compared to changes in average climate" The World Wonders What will Rome say? Because isn't it the poor and disenfranchised who suffer most, from heat and drought and rising seas? As for denialism's "usual suspects" Don't we *ALREADY* know what their frothing ideology-driven response will be? "The freedom-destroying commie/green/liberal climate-science conspiracy extends even farther than we ever dreamed!" cwon14 | October 4, 2014 at 3:04 pm | More bafoonery Fanboy, the "conspiracy" straw man has been shot down a thousand times before. Left-wing group think such as AGW is conspiracy idealization by definition; "big oil" etc. etc. It is you ranting conspiracy theory with the largest tinfoil hat in history. cwon14, your comment was inspected for scientific information, historical precedents, and rational discourse. Result none were found. He didn't even spell "buffoon" right. Market fundamentalism! Is there any problem it can't solve? Canman | October 4, 2014 at 4:27 pm | FOMD: The link is to an Onion parody about a proposed program to build a National Air Conditioner. How does that have anything to do with market fundamentalism, much less markets? 
That was funny. Caustic but funny Humans do cause heat waves in cities. They buy air conditioners and pump very humid hot air out of their homes and apartments into the city causing the temperature to rise even higher and then they need more air conditioning. It has nothing to do with CO2. "Studying the impact of air conditioning on the heat island of Paris, another team led by Cecile de Munck and French colleagues observed that air conditioning increases energy demand and the cooling systems themselves release heat onto city streets. "It's a vicious circle," said de Munck, "temperature increase due to air condition will lead to an increasing air cooling demand."" Skiphil | October 4, 2014 at 3:21 pm | Fanboy, no one of the slightest scientific orientation cares about "What will Rome say?" Why do you insist upon spouting irrelevancies that only annoy people? If you ever get out of mama's basement you will be surprised at how interesting the Real World turns out to be. Skiphil asserts [utterly wrongly and without reason or evidence] "Fanboy, no one of the slightest scientific orientation cares about "What will Rome say?" Ignorance by skiphil, evidence by FOMD! Fan of more discus I was wondering if the paper on tornado frequency I coauthored with Dr Abruzzo was discussed at your committee meeting? The prepublication draft is at http://21stcenturysocialcritic.blogspot.com.es/2014/09/a-new-parameter-to-predict-tornado.html It fits the context of anti climate denial required by the committee. It predicts increases in tornado frequencies when global warming resumes. Consider that we are not even aware of all the natural phenomena that take place around us — as we look back in time at past weather to tease out future trends — all of which are involved in climate change that we only understand, after-the-fact. vukcevic | October 4, 2014 at 3:03 pm | I have looked at ENSO from various angles, and find classic view not entirely satisfactory. Mr. 
Bob Tisdale, an enthusiastic researcher of ENSO, suggests that the start of a Kelvin wave, i.e. downwelling at the east longitude, is underway. I propose: Atmospheric pressure at Port Moresby 142 East should be able to tell us something about the ENSO. – change in the atmospheric pressure is caused by downwelling or – downwelling is initiated by the atmospheric pressure Waveforms of the two are similar but not identical Spectral composition is almost identical for periods up to 8 years or so, then the two diverge. What follows is plain and simple: Port Moresby atmospheric pressure has two dominant components: Sunspot cycle at 11 years Lunisolar tides period 18.6 years http://www.vukcevic.talktalk.net/ENSO-PMap.gif I suggest that the way the ENSO index is calculated inadvertently conceals the true cause of the ENSO. Of course, I do not expect many or even anyone to agree. http://science.nasa.gov/media/medialibrary/2006/05/10/10may_longrange_resources/predictions3_strip.jpg?w=600 I suggested a couple of months ago that using atmospheric dipoles may not be the best method to calculate two important climate indices, NAO and ENSO. http://judithcurry.com/2014/07/26/open-thread-19/#comment-611992 Global Warming (definition): Threatens even when it seems to yield. John Smith (it's my real name) | October 4, 2014 at 3:26 pm | NYT on Australian 2013 heat "record" high "when we look at the heat of 2013 across the whole of Australia … virtually impossible without climate change" ugh … no kidding I, for one, could become less skeptical if they would come up with better language (I know what they mean, it just sounds so dumb) "climate change" the perpetual motion machine of propaganda labeling eternal … never to be resolved record heat in Australia but new high in Antarctic sea ice? Australian records are relatively short and only those from Stevenson screen days are accepted, this is very roughly from 1900 or so, depending on the stations. 
There are interesting records from Watkins, who describes many of the 'unprecedented' things happening today, including birds falling out of the sky due to the heat. Watkins predates Stevenson screens by well over a hundred years. A good flavour of the often savage climate of Australia can be seen in the poems of Dorothea Mackellar, particularly this one http://www.dorotheamackellar.com.au/archive/mycountry.htm The first part of the poem is, I believe, a reference to her origins in the UK. Tony, that is probably the best known and most often cited poem in Australia. I loved the Australian landscape and light when I accidentally emigrated in 1979, a friend from Essex who'd emigrated three years earlier still found it alien and unsettling. Deservedly so, it's a very evocative poem which illustrates that the harshness of the climate is not restricted to this year. Are the Watkins diaries much publicised over there? Hopefully John will find the references interesting. the poem kinda made my day wish I could buy you an espresso or a pint espresso for me, I don't drink (former professional – had to retire) One thing that bugs me about AGW folk is the "global citizen" one world government stuff as you know, the passage from feudalism to nation states was a bloody go these "one world" folk are misguided and ignorant of history (and some I fear might be hiding their true motives) the poem just reminded me of love of country, something I of late have gained new appreciation for Yeah, dubious of the "record heat" stuff you are a gentleman and a scholar PS … one thing I haven't yet been lucky enough to see is the Southern Cross Tony, I've never heard of the Watkins diaries, so either no or not to my demographic, which seems to prefer CE. I remember seeing the Watkin's Diaries at Readings' Book Shop a while back bts. mosomoso | October 4, 2014 at 8:55 pm | Dorothea's piece was read by every Aussie kid in the 50s. 
Many years later, living on the land and waiting for spring rains, I find myself aching for that "drumming of an army" after weeks of "pitiless blue sky". More essential reading on the nature of Oz is this little short story by Henry Lawson; http://www.readbookonline.net/readOnLine/12038/ tonyb | October 5, 2014 at 3:41 am | Beth, Faustino and Mosomoso Here are two short extracts; http://ebooks.adelaide.edu.au/t/tench/watkin/ Watkin Tench The Settlement at Port Jackson (Part of) Chapter 17 "The difference can be accounted for only by supposing that the woods stop the warm vapours of the sea from reaching Rose Hill, which is at the distance of sixteen miles inland; whereas Sydney is but four.* Again, the heats of summer are more violent at the former place than at the latter, and the variations incomparably quicker. The thermometer has been known to alter at Rose Hill, in the course of nine hours, more than 50 degrees; standing a little before sunrise at 50 degrees, and between one and two at more than 100 degrees. To convey an idea of the climate in summer, I shall transcribe from my meteorological journal, accounts of two particular days which were the hottest we ever suffered under at Sydney." "But even this heat was judged to be far exceeded in the latter end of the following February, when the north-west wind again set in, and blew with great violence for three days. At Sydney, it fell short by one degree of what I have just recorded: but at Rose Hill, it was allowed, by every person, to surpass all that they had before felt, either there or in any other part of the world. Unluckily they had no thermometer to ascertain its precise height. It must, however, have been intense, from the effects it produced. An immense flight of bats driven before the wind, covered all the trees around the settlement, whence they every moment dropped dead or in a dying state, unable longer to endure the burning state of the atmosphere. 
Nor did the 'perroquettes', though tropical birds, bear it better. The ground was strewn with them in the same condition as the bats." beththeserf | October 5, 2014 at 4:02 am | Thx fer the Oz licherachure, Tony and Moso. I'll add the Watkins ter me reading list. The Lawson, lol, moso, so laconic. Rob Ellison | October 5, 2014 at 4:55 am | Contrast the early painter John Glover and the impressionist Arthur Streeton. It took a while before Europeans were capable of even seeing the landscape. Mackellar was a similar vintage to Streeton and was at the transition between harking back to England and an emergent nationalism. http://en.wikipedia.org/wiki/John_Glover_(artist)#mediaviewer/File:John_Glover_-_The_bath_of_Diana,_Van_Diemen%27s_Land_-_Google_Art_Project.jpg http://en.wikipedia.org/wiki/Arthur_Streeton#mediaviewer/File:Arthur_Streeton_-_Golden_summer,_Eaglemont_-_Google_Art_Project.jpg But of course the quintessential Australian poem of extremes is – http://en.wikipedia.org/wiki/Said_Hanrahan#The_Poem 'I'm longin' to let loose on somethin' new. Aw I'm a chump! i know it; but this blind ole springtime craze Fair outs me on these dilly silly days.' C J Dennis down under. http://www.middlemiss.org/lit/authors/denniscj/sbloke/spring.html Steyn is good! Here's his tweet to Greg Laden (in reference to Laden starting a silly pi$$ing contest with our hostess) Maybe she's hiding the decline just to torment you? State of Energy: Enough Gas for 100 Years There is enough energy in the ground right now to supply the needs of the U.S. for the next 100 years, and we can get to it economically. http://news92fm.com/483818/state-of-energy-enough-gas-for-100-years-hounews I have been looking at Exxon Mobil's performance, and noticed they are purchasing large amounts of their own shares. Their oil and gas production trends down, but looks fine on a per share basis. My guess is their production will rebound over the next few years but the medium term looks grim. 
EM looks a bit more difficult to study than say Chevron Texaco. CT has been losing oil production and they don't look likely to make a turnaround in this trend over the next 5 years. CT seems to be turning into a little oil big gas company. have you noticed much movement in gas to liquids? Not having much in the way of diesel must be holding the US back. http://breakingenergy.com/2013/10/09/gas-to-liquids-primus-pursues-cheaper-drop-in-fuels/ http://www.foxbusiness.com/industries/2014/07/10/shell-leaves-its-peers-behind-on-big-gas-to-liquids-plants-7200438/ The Israelis will be completely energy independent in five years. Doc Martyn, I worked on gas to liquids in the 1990's. It requires a cheap gas price to be feasible. That pathway meets a barrier when gas can be marketed as LNG to the Far East and Europe. Thus what we see are increasing LNG trade flows and very little GTL. Also, do this exercise: find the gas reserves for a large player (use Russia), convert them to liquids using 30% of the gas to fuel the process. Then compare those reserves to the Saudi oil reserves. What you'll find is that gas isn't that plentiful if it's used to replace oil. Question: do you guys want me to show you the figures? Of course, show us the figures. I have never understood why there is no coal + natural gas => liquids route. 2 CH4 + C -> CH3CH2CH3 CH3CH2CH3 + CH4 + C -> CH3(CH2)3CH3 Probably pretty wasteful (of energy) without targeted catalysis. Of course, with the right enzymes, it might be feasible to drive it with partial pressure. Fernando Leanme | October 5, 2014 at 11:08 am | Doc, the gas to liquids processes tend to focus on FT technology. When I looked into gas conversion I became convinced it was much more practical to convert the methane to dimethyl ether (DME) and try to use it as a diesel substitute. A DME conversion unit makes methanol as a side stream, and it's a lot cheaper. 
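DocMartyn's proposed coal + natural-gas reactions above do balance on atoms, which a short sketch can verify. This checks stoichiometry only; it says nothing about the thermodynamics or catalysis he flags as the real obstacle.

```python
# Atom-balance check for the two proposed reactions:
#   2 CH4 + C -> CH3CH2CH3 (propane)
#   CH3CH2CH3 + CH4 + C -> CH3(CH2)3CH3 (pentane)
# Each formula is a {element: count} dict.

def add_formulas(*formulas):
    """Sum element counts across a list of reactant formulas."""
    total = {}
    for f in formulas:
        for elem, n in f.items():
            total[elem] = total.get(elem, 0) + n
    return total

CH4 = {"C": 1, "H": 4}
C = {"C": 1}
propane = {"C": 3, "H": 8}    # CH3CH2CH3
pentane = {"C": 5, "H": 12}   # CH3(CH2)3CH3

assert add_formulas(CH4, CH4, C) == propane       # 2 CH4 + C -> C3H8
assert add_formulas(propane, CH4, C) == pentane   # C3H8 + CH4 + C -> C5H12
print("both reactions balance")
```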
I also tried to sell the use of ethane as feedstock, but I could never get anybody interested because there wasn't that much ethane in the world. If the USA ethane surplus were to last it would make an excellent candidate to be made into DME (ethane can be fed into a much cheaper reformer). But I don't think that would allow the development of an infrastructure. We seem to run into the same issues all the time. DME is horrid; it has highly flammable creeping vapours and reacts with water and oxygen to make shock detonation peroxides. Fernando. I second what Doc said about DME. DME is a gas, not a liquid. Maybe you meant diETHYL ether. Even that is highly volatile and would not, even physical property-wise, work as a diesel substitute. It is the culprit in many a lab explosion and fire. It produces peroxides spontaneously within its bulk if not inhibited. Needless to say, this is not a good thing. Fernando Leanme | October 5, 2014 at 3:32 pm | Doc. Russia has the world's largest gas reserves. If those reserves are ALL converted to syncrude they would be equivalent to 192 billion barrels. Now take the USA gas reserves, set them at 25% of Russia's… that's 48 billion barrels. Assume USA liquids consumption rate is reduced to 12 million barrels per day, that's 4.38 billion barrels per year. That's 11 years' worth of supply. So even in the USA with the infrastructure and relatively low prices it seems like a poor bet at this time. Many years ago I lived in Russia and was trying to investigate how to move those giant gas reserves to liquids. When I put the pencil to it I learned that gas was useful as a supplemental source of liquids. But it wasn't a game changer. Eventually we realized the Europeans paid a very nice price for the gas, so we gave up the Russian GTL idea. Did you notice the way Qatar markets their gas? LNG. I hear their GTL project with Shell isn't working out very well. Maybe some day we will have a better technology? 
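Fernando's back-of-envelope reserve arithmetic can be checked directly. The inputs below are his stated round figures (192 billion barrels of Russian gas as syncrude, the USA at an assumed 25% of that, and a hypothetical reduced US consumption of 12 million barrels per day), not measured data.

```python
# Reproduce the comment's gas-to-liquids supply estimate.
russia_gas_as_syncrude_bbl = 192e9   # Russia's gas reserves expressed as syncrude, barrels
usa_share = 0.25                     # USA reserves assumed at 25% of Russia's
usa_gas_as_syncrude_bbl = russia_gas_as_syncrude_bbl * usa_share  # 48 billion barrels

usa_liquids_demand_bpd = 12e6                        # assumed reduced US demand, barrels/day
annual_demand_bbl = usa_liquids_demand_bpd * 365     # 4.38 billion barrels/year

years_of_supply = usa_gas_as_syncrude_bbl / annual_demand_bbl
print(round(years_of_supply, 1))  # prints 11.0, matching the "11 years' worth" above
```

The numbers are internally consistent: 48 billion barrels divided by 4.38 billion barrels per year is just under 11 years of supply.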
I predict a barrel of laughs from this one. Scientists are to challenge the climate-change sceptics by vastly improving the speed with which they can prove links between a heatwave or other extreme weather event and man-made changes to the atmosphere. It typically takes about a year to determine whether human-induced global warming played a role in a drought, storm, torrential downpour or heatwave – and how big a role it played. This allows climate sceptics to dismiss any given extreme event as part of the "natural weather variation" in the immediate aftermath, while campaigners automatically blame it on global warming. By the time the truth comes out most people have lost interest in the event, the Oxford University scientists involved in the project say. They are developing a new scientific model that will shrink to as little as three days the time it takes to establish or rule out a link to climate change, in large part by using highly accurate estimates of sea surface temperatures rather than waiting for the actual readings to be published – a process that can often take months. http://www.independent.co.uk/environment/climate-change/scientists-to-fasttrack-evidence-linking-global-warming-to-wild-weather-9773767.html Yet people will believe this Cullen nonsense. Presumably they are defining normal using available data then measuring the statistical distance to the extreme as a probability or some such. Reminds me of the garbage of determining the 100 year flood using 100 years of data. Being able to crank out BS faster isn't necessarily a benefit. Having them cry wolf on a daily basis doesn't change the fact that they are crying wolf. DocMartyn finds his happy-place "There is enough gas-energy in the ground right now to supply the needs of the U.S. for the next 100 years" Market fundamentalists appreciate what this means, DocMartyn: NO SIZE RESTRICTIONS AND SK*RW THE LIMITS! 
http://totallytop10.com/wp-content/uploads/2010/08/zombieland_photo_08-535×3551-300×198.jpg http://gilkalai.files.wordpress.com/2012/11/johns.jpg John Vonderlin | October 4, 2014 at 5:28 pm | DocMartyn, At first I thought you had grabbed one of my Morning After photos off of Facebook and then I realized my hairline recession hasn't reached that point yet. At least now I can understand the spread of coulrophobia in modern society. A Wikipedia entry has this about the etymology of this mental illness's popular name: The prefix coulro- may be a neologism derived from the Ancient Greek word κωλοβαθριστής (kōlobathristēs) meaning "stilt-walker."[nb 1] Although the concept of a clown as a figure of fun was unknown in classical Greek culture,[4] stiltwalking was practiced. The Online Etymology Dictionary states that the term "looks suspiciously like the sort of thing idle pseudo-intellectuals invent on the Internet and which every smarty-pants takes up thereafter". Frowny Face Frowny face Frowny face Our name is legion! That today's Conservatives can no longer support the rights their predecessors saw as conservative safeguards is a mark of their extremism. If Cameron wins the next election, that extremism will drive a majority Tory government. His supporters will not allow him to play the PR man once again and dress up our existing rights in new clothes. They will force him to abolish or restrict them. No doubt they will scream with pain when the state threatens them. But to repeat the old gag that a conservative is just a liberal who hasn't been arrested and predict that they will change their minds is to miss the source of rightwing anger. In a celebrated speech in 2009, the late and much missed Lord Bingham listed the liberties the European convention protects. The right not to be tortured or enslaved. The right to liberty and security of the person. The right to marry. The right to a fair trial. Freedom of thought, conscience and religion. Freedom of expression. 
Freedom of assembly and association. "Which of these rights, I ask, would we wish to discard? Are any of them trivial, superfluous, unnecessary? Are any of them un-British?" http://www.theguardian.com/commentisfree/2014/oct/04/tory-wreckers-out-destroy-human-rights The EU's famous respect for human rights and freedom of expression is very morally flexible. I'm not impressed by any political side of the coin. The left has a very well established reputation for being incredibly repressive when they take power. The right is pretty much the same. I'd say the best bet is to be libertarian if you worry about human rights. Rob Ellison | October 4, 2014 at 5:15 pm | I prefer the term classic liberalism – and in Australia freedoms – long fought for – are best defended in the common law inherited from England. There was an exceptionally interesting programme on the BBC tonight about Angkor Wat and how they managed their water to support the largest city in the world in the 13th century http://www.youngzine.org/article/lost-city-khmer-empire The city was eventually overwhelmed by climate change as a severe decades long drought was followed by an equally long period of exceptional rain which caused the water management systems to collapse Rob Ellison Australia is a different realm. Australians' position on the planet's underside leads to a troublesome posture, which makes that branch of humanity have loose connections with the average political structure. You have been reading the map upside down. https://watertechbyrie.files.wordpress.com/2014/06/article2-world.gif go help laden. he's embarrassing Biodiversity is declining in both temperate and tropical regions, but the decline is greater in the tropics. The tropical LPI shows a 56 per cent reduction in 3,811 populations of 1,638 species from 1970 to 2010. The 6,569 populations of 1,606 species in the temperate LPI declined by 36 per cent over the same period. Latin America shows the most dramatic decline – a fall of 83 per cent. 
Habitat loss and degradation, and exploitation through hunting and fishing, are the primary causes of decline. Climate change is the next most common primary threat, and is likely to put more pressure on populations in the future. Terrestrial species declined by 39 per cent between 1970 and 2010, a trend that shows no sign of slowing down. The loss of habitat to make way for human land use – particularly for agriculture, urban development and energy production – continues to be a major threat, compounded by hunting. The LPI for freshwater species shows an average decline of 76 per cent. The main threats to freshwater species are habitat loss and fragmentation, pollution and invasive species. Changes to water levels and freshwater system connectivity – for example through irrigation and hydropower dams – have a major impact on freshwater habitats. Marine species declined 39 per cent between 1970 and 2010. The period from 1970 through to the mid-1980s experienced the steepest decline, after which there was some stability, before another recent period of decline. The steepest declines can be seen in the tropics and the Southern Ocean – species in decline include marine turtles, many sharks, and large migratory seabirds like the wandering albatross. http://wwf.panda.org/about_our_earth/all_publications/living_planet_report/living_planet_index2/ The contribution from anthropogenic climate change is certainly overestimated. http://wwf.panda.org/_core/general.cfc?method=getOriginalImage&uImgID=%26%2AR%5C%2C%20%3E_41 I would blame fishing fleets. And natives wearing shotguns. curryja | October 4, 2014 at 5:33 pm | Now this is interesting. I have been wondering how to reach the younger generation with regards to the climate debate. Let's face it: the prime demographic at Climate Etc. is over 55, white and male. Of my 3000 twitter followers, there is a growing number of young people.
This exchange with steyn and laden has been picked up by Twitchy, who has 167,000 followers (primarily a young demographic). There has been huge discussion of the laden episode on twitter as a result, all of which has gone against laden. This tweet sums up the sentiment: "Up jumped the climate change true believer >>> @gregladen <<< & member of the wuss generation" Clearly they don't like being told what to think. I think this needs more investigation, it is pretty interesting. Wonderful. Children are less tolerant of jerks than adults are. http://twitchy.com/2014/10/04/another-witch-agw-alarmist-exposes-twitchy-as-anti-science-bot-of-some-sort/ laden is such a conspiracy nut This too is fun: http://twitchy.com/2014/09/17/hashtag-backfire-mock-a-lanche-triggered-after-global-warming-alarmist-suing-mark-steyn-solicits-questions/. In case all us white males over the age of 55 also have the attention span of gnats, perhaps you had better ensure your articles here have no more than 140 characters; then they could also be posted directly on twitter. A tweet a day on climate science, describing all the essentials, would be amusing. Judith Curry wonders "how to reach the younger generation" Recommended: Jane Goodall's Roots and Shoots programs for children. Goodall's programs are immensely well-respected and popular with parents and teachers around the globe. Older male feces-flingers especially take note! Typically vile nonsense. Is lulz a sentiment? Interesting site, the Twitchy: http://twitchy.com/2013/06/27/boom-laura-ingraham-destroys-hero-wendy-davis-with-one-question/ search sarah palin http://twitchy.com/2014/09/21/morons-see-katie-pavlich-destroy-climatemarch-hypocrites-with-their-own-photos/ Giggling madly. Curious George | October 4, 2014 at 10:38 pm | What does it have to do with anything? My daughter is 18 and son 16. They are skeptical about everything the ladder pullers tell them.
How to reach younger people: http://www.morris.umn.edu/newsevents/view.php?itemID=13074 I will give Nye this. With such events, he is on target. My son has his ticket. stevefitzpatrick | October 4, 2014 at 10:02 pm | I resent…. err… resemble that! The boring truth is that only very green/liberal or dedicated conservative/libertarian (aka white, over 55, male) people care much about this issue. If warming does not accelerate very soon, then vehement defenses of high climate sensitivity will be relegated to the theater of the absurd, where mainstream climate science already has long-term residence status. I would develop a video game. The more intellectually inclined seem to like building simulations. But the game has to end with the good guys shooting at an enemy using a projectile weapon. They also seem to like comedy. Somehow one of my posts was linked on an Occupy webpage, but I think that was done by one of my children as a joke. Dick Hertz | October 5, 2014 at 11:20 am | Hey Judy, If you really want to understand this phenomenon, you should get a deep foundation in South Park. The show has a fiercely independent political point of view and it has had a huge influence on the political point of view of the college age demographic. You can watch full episodes online. They are often very crude, but thoughtful. If you choose to watch episodes of South Park from an academic point of view, it's important to understand that the shows are often conceived, written and produced in a week's time, so they are often immediately topical to the politics of the day. I would recommend you binge-watch the entire 20 seasons, but if you just want to focus on global warming or climate change, check out the episodes "Smug Alert" and "Two Days Before the Day After Tomorrow". "Manbearpig" is another special episode. An homage to Al Gore and his book, "A Convenient Lie". jim2 | October 5, 2014 at 11:56 am | Currently "Smug Alert" is available only for purchase.
You can try Hulu for free for a limited time. http://www.hulu.com/south-park It's also available on Amazon. http://www.amazon.com/gp/product/B000LVKGCU/ref=atv_feed_catalog?ref_=imdbref_tt_pv_vi_aiv_1&tag=imdbtag_tt_pv_vi_aiv-20 curryja | October 5, 2014 at 12:06 pm | Thx for this suggestion, I've heard of South Park but never watched it. Tonyb | October 5, 2014 at 12:23 pm | I think Family Guy is much funnier than South Park although it seems to have something of a sympathy for global warming. Quite how you can utilise either show to tap into the college age demographic is another thing though Dick Hertz | October 5, 2014 at 3:24 pm | Jim2 is correct, a few months ago South Park episodes were all available for free (with commercials). It looks like they have recently made a deal with Hulu, so many episodes are only available through Hulu. And PA is correct that Manbearpig is a great episode delving into Al Gore and his personality issues. Here is an interesting interview with Trey Parker and Matt Stone at some skeptics conference, introduced by Penn Jillette. They don't mention climate change at all, but it gives a view of what they are all about. They talk about their ridiculing TV psychic John Edward and making fun of a variety of religious ideas (including atheism). I would think that some of their commentary on global warming would fall into the category of religion. FYI, Trey Parker and Matt Stone are the South Park creators. Penn Jillette (Penn and Teller) and his Showtime show called Bullshit is another example of the kind of skeptical ideals that young people latch onto; not as big as South Park, but interesting if you want to understand what young people gravitate to. It's about the presentation, the humor, the language, and a certain respect for the audience. aaron | October 6, 2014 at 9:14 am | Last week's gluten episode was great.
And, of course, the original Christmas e-card that launched the whole thing: http://youtu.be/mJbbtEOE4a4 As I am disposed to lecture on and on, my younger crowd says: "Oh DAD" Honing skills to express thoughts as sound-bites does seem to help somewhat. However, when the subject of Climate Change even approaches from a distant theme, the gathering scatters and I am left mumbling to the dog. I am convinced that it is not what I say, it's just that I am saying it. That I am the adult is the problem in communicating with younger people. The saying "don't trust anyone over thirty" resonates today as it did yesteryear. In coaxing my youngest grandchild to play, I speak his language and engage one-on-one. I keep in mind what he is interested in and I add some aspect of teaching/learning to the scenario: "what makes the cars and trucks and things that go, go?" Speak to the budding football player about the weather, wet grass, dressing warmly and… and: "where does the weather come from?" Engage the child, adolescent, young adult where they are and then add. Add just a little bit, in context. Dr. Curry, I discussed this with a few teenagers. They suggest you reach their teachers. This applies to high school age. The age group between 30 and 50 is mostly focused on reproduction and cash generation. kim | October 5, 2014 at 4:04 pm | invisibleserfscollar.com The Early Bird has found the worm, even way down South in Georgia. Dan Pangburn | October 4, 2014 at 5:47 pm | The time that passes between when a CO2 molecule absorbs a photon until it emits one is about 10 microseconds. The average time, at sea level conditions, between molecule impacts (which, among other things, conduct energy away from a molecule) is about 0.1 nanoseconds. Thus photon energy absorbed by CO2 molecules near the surface (i.e. tens of meters) is essentially all thermalized, i.e. conducted to non-CO2 molecules that outnumber CO2 molecules 2500 to 1.
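Pangburn's timescale argument can be sanity-checked with two lines of arithmetic. This is a minimal sketch using the figures quoted in the comment above, taken at face value rather than independently verified:

```python
# Back-of-envelope check of the collision-vs-radiation timescales quoted above.
# Both input values come from the comment, not from measurement here.
radiative_lifetime = 10e-6    # seconds: quoted delay before an excited CO2 molecule re-emits
collision_interval = 0.1e-9   # seconds: quoted mean time between molecular collisions at sea level

# Average number of collisions an excited CO2 molecule undergoes
# before it gets a chance to re-radiate:
collisions = radiative_lifetime / collision_interval
print(f"~{collisions:,.0f} collisions per radiative lifetime")  # prints "~100,000 collisions per radiative lifetime"
```

At roughly 10^5 collisions per radiative lifetime, the quoted numbers do imply that absorbed energy is overwhelmingly handed off by collision rather than re-emitted, which is the point being argued; whether the input timescales themselves are correct is the separate question the thread goes on to debate.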
Thermalized energy carries no identity of the molecule that absorbed it. Discussed further at http://agwunveiled.blogspot.com PA | October 5, 2014 at 9:18 am | http://wattsupwiththat.com/2010/08/05/co2-heats-the-atmosphere-a-counter-view/ Maybe. Or maybe not. Doesn't look like there is going to be a lot of transference. However, a well-designed experiment (instead of guessing by physicists) should settle the issue. If the energy from CO2 isn't transferred but reradiated, it acts like a time delay, which would still produce some warming. Has anybody done a well-designed experiment to measure the degree of temperature transfer from CO2 to N2 and O2? bob droege | October 5, 2014 at 2:40 pm | A fraction of the CO2 molecules are in the vibrational excited states even without being excited by infrared, so it takes energy from the other gases and radiates in all directions, some of which makes it to the surface, warming the earth. The more CO2, the more warming. xanonymousblog | October 4, 2014 at 5:55 pm | Here is my response to Raymond T. Pierrehumbert's desperation: 1) "but your usual stable of tame skeptics is starting to die off" this is a terminal move for raypierre, and is no more than an appeal to authority cloaked in argumentum ad hominem. Incorporating two fallacies in one is impressive, but shows that the interest in objective science is zero. 2) "committee did not include a single physicist who was actually doing work in the area of climate science." This reads as: if one is not a "climate scientist" (whatever that is anyways), your opinion is of no consequence. Yet raypierre later describes the unsettled nature of the problem, which is only exclusive to his domain? Raypierre is an expert in the history of climate science; unfortunately it seems he has glossed over the history of science itself. 3) raypierre notes that the rate of sea level rise is around 3mm per year. It's a pity someone doesn't infer climate sensitivity from such a robust indicator of change.
4) raypierre cites the APS meeting where Collins noted: "It is virtually certain that internal variability alone," because just heating the ocean alone will not produce this dipole, "cannot account for the observed warming since 1951." Such pseudo-scientific statements represent the level of ignorance which has come to dominate this obscure field of study. How on Earth would a warming ocean NOT produce massive amounts of water vapor, which would warm the troposphere in exactly the same way the models produce the hotspot? But the fun doesn't stop there; more bizarre claims are made by Santer, when he compares what he calls "natural internal variability" with the hotspot. In actual fact it is simply his idea of internal variability, which in reality is actually trendless white noise. Stop the presses! White noise has no trend! Such dishonesty in science is rare. These guys make it into an art form: https://xanonymousblog.files.wordpress.com/2014/05/whatafool.jpg Next up is Held, whose beautiful description of negative feedback is ruined by his stubbornness to actually allow it to work: https://xanonymousblog.files.wordpress.com/2014/05/morefools.jpg In summary, it's unlikely raypierre's essay will elicit any sort of professional response. As Dick Lindzen has noted, if there is little substance to the claims, then don't bring attention to them…. Spence_UK | October 4, 2014 at 6:59 pm | Wow, they still think that GCM control runs are a good basis for assessing internal natural variability? Astonishing! GCMs demonstrably do not capture the autocorrelation properties of the climate system and therefore are completely inappropriate for use as a null hypothesis. This type of incestuous testing has no merit or value whatsoever beyond self-delusion. Oddly – the science is settled sufficiently. The obvious approach is to get some environmental goals on the board – restore soils and ecosystems – and to foster energy innovation.
http://judithcurry.com/2014/10/03/challenging-the-2-degree-target/#comment-635010 Best analysis of the global warming community ever: "…there is a massive plot by huge multinational environmental corporations, academics and hippies to deprive you of the right to drive the kids to school in a humvee…" ianl8888 | October 4, 2014 at 7:55 pm | Butterworth is being sarcastic here, don't you know? "Although climate scientists update, appropriately, their models after ten years of evidence, climate-science communicators haven't," said Dan Kahan, a professor of law and psychology at Yale who studies how people respond to information challenging their beliefs. "Luckily, social and political psychologists are on the case." Stupendously conceited mullahs of scientism like Dan Kahan from stupendously conceited institutions sell soft waffle as scholarship. It's what they do for a living and we must be used to that by now. They're in the pay of Big Smug. But the NYMAG article is starting to sound like a lead-up to re-education camps. Of course, they'll try convincing conservatives by appealing to their self-interest first. Perhaps we can become shallow and brittle replicas of our New Class betters. But if that doesn't work… JustinWonder | October 4, 2014 at 7:50 pm | Mosomoso – "They are in the pay of Big Smug." Big Smug? That is pretty funny! Did you invent that? Regarding sticking to old beliefs, especially involving numbers, that is called "anchoring". The persistence of anchors, once introduced, is amazing. Big Al got way out in front with his sci-fi movie. Those old CAGW ideas are dug in deeper than an Alabama tick. We already have re-education camps, aka sensitivity training. South Park – "Smug Alert" Ah, yeeeeze… born only to pursue knowledge. Kahan spends his time studying why people disagree with Kahan and how they can best be cured of this infirmity with firmness but compassion.
I get how juveniles can be entranced by his self-satisfaction and twaddle, but this guy and his ilk get taken seriously by people of adult age. Maybe not adults, but certainly of adult age. johanna | October 5, 2014 at 12:06 am | "Big Smug" – good one, Mosomoso. :) A while ago I wandered over to Kahan's site and read a few of his articles and the subsequent comments. What a load of twaddle! He is your classic academic obfuscationist, never uses a short word when a long one (or phrase) will do; regards disagreement as a failure of communication instead of a genuine disparity of viewpoints; and has brought the art of condescension to a whole new level. He's a fairly successful con artist who is not nearly as clever as he thinks he is. Hilary Ostrov shredded him a few times while simultaneously trimming her cat's toenails and cooking dinner. When somebody in the backseat chants "we're going to drive off a cliff just ahead" for 17 miles (or chants global warming for 17 years) … you stop, chuck them onto the shoulder, and continue on. What if we'll never know? stevefitzpatrick | October 4, 2014 at 8:13 pm | I think you are mistaken. I followed the link to the Guardian blog; same venomous accusations, especially on the left. As a climate scientist, you are, at least tangentially, part of a political cabal that is focused only on implementing a specific set of green policy goals. The disturbing part of the story is that even among obviously qualified climate scientists, the scientific analysis is subservient to the political calculus (think Lindzen versus Santer). The field is not well. IMO, only political intervention will fix 'the science' in the short term. Whether that intervention is politically possible will depend in large part on the November elections for the US Senate. In the long term, nature will laugh at those who insist on high sensitivity; sometime thereafter, so will everyone else.
The only important question is how much economic damage will be done before everyone is laughing at high sensitivity. This is how a hoax dies. Very, very slowly, unfortunately, and more dangerous in its death throes. Fixing the World, Bang-for-the-Buck Edition: How can we do the greatest good? See: Bjorn Lomborg's New Freakonomics Radio Podcast "Here's $2.5 trillion. You have 15 years to spend it. How do you distribute this money in a way that will achieve the most good for the world?" I just did a podcast with Freakonomics on my think tank's "Post-2015" project on the Sustainable Development Goals. It is the podcast for the #1 selling Freakonomics book, a #1-ranked podcast, with more than 5 million monthly downloads. One of the comments the listeners left reads: "FASCINATING Freakonomics podcast this week. With so much bad news lately, it was heartening to hear that people are spending so much time and effort to fix things, to fix things right, and to make the most impact possible." I hope you enjoy it too: http://freakonomics.com/2014/10/02/fixing-the-world-bang-for-the-buck-edition-a-new-freakonomics-radio-podcast/ There is no end to the good ideas for spending someone else's money. Wagathon: There is no end to the good ideas for spending someone else's money. There are at least as many bad ideas. The point of the exercise is to weigh the alternatives, and their likely outcomes, and their likely costs and benefits. California, for example, has rushed into massive spending projects that are not likely to produce benefits exceeding their costs in the next 40 years, while neglecting its water control infrastructure. There ought to have been more debates about the costs and benefits of these alternatives. In California the entire government-education complex is a bridge to nowhere. JustinWonder | October 6, 2014 at 2:18 am | And it ain't cheaper neither. But, someday we may have a high-speed (high on speed?) train to nowhere for that bridge to nowhere.
So we got that going for us … …ain't cheap…I meant to say. Skiphil | October 4, 2014 at 10:51 pm | Ben Santer shows up to make a critical comment at WUWT (it does appear to really be him, judging from the detail in the comment and the specific anecdote related): Ben Santer comment at WUWT The last thing anyone should ever do is misrepresent the air of certainty and urgency that all on the Left have tried for years to cultivate by knowingly providing false, incomplete, misleading and sometimes simply made-up facts and information to create public alarm. ah, the large number of volcanic eruptions over the last 17 years and the dim sun are to blame. How low can you go, Ben? omanuel | October 4, 2014 at 11:17 pm | Professor Curry, as Czech President Klaus concluded several years ago, "It's about freedom." http://junkscience.com/2014/10/04/roy-spencer-on-how-people-seem-to-not-care-about-the-warming-crisis/comment-page-1/#comment-274546 Interesting article on Judith Curry: SCIENCE: A woman in the eye of the political storm over climate change A fan of *MORE* discourse | October 5, 2014 at 12:01 am | Let's pull the pieces together! A woman in the eye of the political storm over climate change "Curry and the other scientists agree on the basics of the science. They are quibbling over the uncertainties." Belief, bias and Bayes Cohort I If you have a prior assumption that modern life is rubbish and technology is intrinsically evil, then you will place a high prior probability on carbon dioxide emissions dooming us all. Cohort II If your prior bias is toward the idea that there is a massive plot by huge multinational environmental corporations, academics and hippies to deprive you of the right to drive the kids to school in a humvee, you will place a much lower weight on mounting evidence of anthropogenic climate change. Cohort III If your prior was roughly neutral, you will by now be pretty convinced that we have a problem with global warming.
FOMD allies with Cohort III … along with an overwhelming majority of the world's STEAM professionals *and* common-sense citizens. Pretty much *EVERYONE* appreciates these POLITICAL realities, eh Climate Etc readers? Meanwhile The seas keep rising, the oceans keep heating, and the polar ice keeps melting … all without "pause" … all without obvious limit … all showing us a Hansen-style reality of sustained energy imbalance. Pretty much *EVERYONE* appreciates these SCIENTIFIC realities, eh Climate Etc readers? Skiphil | October 5, 2014 at 12:27 am | FOMD, nice but your "Cohort III" is mis-described. Plenty of us had priors (me for instance) that ranged between neutral and "there probably is a problem since there is so much noise about a problem" — yet do not end up in your camp that there is evidently any (serious) problem that can and must be dealt with by concerted state actions. In other words, your set of cohorts form a "straw man" type of analysis…. but the comment is entertaining, I'll give you that. Well, the cohort three (III in roman numerals is three) definition was written by someone who is somewhat biased. Around 2000 when I started looking at this – it looked like AGW had some game. After 14 years: 1. The AGW crowd has to puff up surface temperatures, 2. The AGW crowd has to puff up sea level (adding GIA to the sea level rise is outright lying). 3. The arctic sea ice volume is expanding. The 2014 minimum is over twice the 2012 minimum volume. 4. The antarctic sea ice extent is at record levels. 5. The amount of East Antarctic land ice (which is mostly land locked and unmoving) is increasing. East Antarctic land ice is basically permanently removed from the climate system. At some point the East Antarctic accumulation (90% of land ice) will offset the West Antarctic and Greenland melting. The Greenland core is also landlocked and getting thicker. 6. The GCM models have been consistently wrong for the entire 21st century. 
If we see some real raw data warming by 2030 maybe they have some game, but right now CAGW (catastrophic global warming) appears to be dying as a theory. AGW (a little man-made warming) really doesn't justify taking any actions. We will move away from fossil fuels gradually and that is just fine. Let nature (or economic forces) take their course. Demonizing CO2 seems to be some sort of religion. It's an opiate to ease the pain of absolutism. PA reminisces "Around 2000 when I started looking at this – it looked like AGW had some game. After 14 years … … record global heat and accelerating ice-mass loss (affirmed by multiple independent studies and measures), all precisely as climate-scientists predicted. Your insights are appreciated PA. Hypothesis Perhaps the global commie/green/liberal conspiracy is more vastly corrupt than even "Cohort II"s market-fundamentalist stalwarts foresaw? The world wonders! Lithium isn't just for batteries don't ya know. omanuel | October 5, 2014 at 12:44 am | Thanks, Skiphil, for the link. In my opinion, Professor Curry is the hero in the Climategate story. http://www.eenews.net/stories/1060006489 Rob Ellison | October 5, 2014 at 12:31 am | Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, "These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it." http://earthobservatory.nasa.gov/IOTD/view.php?id=8703 It is difficult to imagine that climate is at all predictable against a backdrop of vigorous natural variability – and it is not as if the rate of warming is all that striking. I am inclined to take the high point of the early century warming – 1944 – as a starting point and the late century high point – 1998 – as the finish. 
This accounts for both a multi-decadal cooling and warming period. Surely – there is an obvious rationale there. We may even assume that all of the warming between 1944 and 1998 was anthropogenic – unlikely as that is – to give a warming rate of 0.07 C/decade. Well short of 2 degrees C anytime soon – especially as the oceans are contributing to surface cooling for decades, seemingly. http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4.png I am inclined to just move on entirely from the rhetoric of catastrophe. There are plenty of things to be getting on with. Trade, development, progress, and ecological and soil conservation all bring environmental benefits – but are clearly not the prime objective. Targeting greenhouse gases would send entirely the wrong message. We might also for economic reasons encourage energy innovation. We might then see more progress on social and economic development and some on biodiversity. xanonymousblog | October 5, 2014 at 12:40 am | https://xanonymousblog.files.wordpress.com/2014/05/life-cycle.jpg 'have kids with environmental green lawyer/vegan.' Surely not! Are we not an abomination to an already overpopulated Gaia? … Overpopulated by humans, that is.
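Rob Ellison's 0.07 C/decade figure above is simple peak-to-peak arithmetic: the span from the early-century high point (1944) to the late-century high point (1998). The sketch below reproduces it with assumed illustrative anomaly values, not actual HadCRUT4 numbers:

```python
# Peak-to-peak trend estimate as described in the comment above.
# The anomaly values are illustrative assumptions, not HadCRUT4 data.
year_start, anom_start = 1944, 0.10   # assumed early-century high-point anomaly, deg C
year_end, anom_end = 1998, 0.48       # assumed late-century high-point anomaly, deg C

decades = (year_end - year_start) / 10.0   # 5.4 decades
rate = (anom_end - anom_start) / decades   # deg C per decade
print(f"{rate:.2f} C/decade")              # prints "0.07 C/decade"
```

Any pair of anomaly values about 0.4 C apart over that 54-year span yields roughly the same rate, which is why the choice of endpoints, rather than the anomalies themselves, carries the argument.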
By what mechanism might this happen? Through the Lens (@woodyjohn1) | October 5, 2014 at 5:35 am | Global warming? I'm still waiting for the Millennium Bug. Tuppence | October 5, 2014 at 5:49 am | That Carbon Brief link…. Seems CB is dedicated to blinkered alarmism – any comment not in that vein is just deleted. thisisnotgoodtogo | October 5, 2014 at 7:14 am | The nonsense article "Psychologists Are Learning How to Convince Conservatives to Take Climate Change Seriously" by Jesse Singal, says "…in practical, apolitical contexts, many conservatives already recognize and are willing to respond to the realities of climate change. "There's a climate change people reject," Kahan explained. "That's the one they use if they have to be a member of one or another of those groups. But there's the climate change information they accept that's just of a piece with all the information and science that gets used in their lives." A farmer approached by a local USDA official with whom he's worked before, for example, isn't going to start complaining about hockey-stick graphs or biased scientists when that official tells him what he needs to do to account for climate-change-induced shifts to local weather patterns." However, in an article called "Why Don't Farmers Believe in Climate Change? And does it really matter whether they do?" by David Biello, we see an entirely different story to Kahan's rubbish: "Take, as an example of skepticism, Iowa corn farmer Dave Miller, whose day job is as an economist for the Iowa Farm Bureau. As Miller is happy to explain, it's not that farmers in Iowa don't think climate change is happening; it's that they think it's always been happening and therefore is unlikely to have much to do with whatever us humans get up to down at ground level. Or, as the National Farm Bureau's spokesman Mace Thornton puts it: "We're not convinced that the climate change we're seeing is anthropogenic in origin.
We don't think the science is there to show that in a convincing way." (Given the basic physics of CO2 capturing heat that have been known for more than a century and the ever-larger amounts of CO2 put into the atmosphere by human activity, it's not clear what "science" he's holding out for.) The numbers back that up: When Iowa State University sociologists polled nearly 5,000 Corn Belt farmers on climate change, 66 percent believed climate change is occurring, but only 41 percent believed humans bore any part of the blame for global warming. It's not just the Corn Belt: Farmers across the country remain skeptical about climate change. When asked about it, they tell me about Mount Pinatubo and weird weather in the 1980s, when many of today's most established farmers were getting their starts. But mostly I hear about cycles in the weather, like the El Niño–La Niña cycle that drives big changes in North American weather. Maybe it's because farmers are uniquely exposed to bad weather, whether too hot or too cold. Almost any type of weather hurts some crop; the cereals want more rain, but the sweet potatoes like it hot and dry. Year-to-year variability in the weather dwarfs any impact from a long-term shift in the climate. Consider this: A farmer in Iowa might deal with a 10-degree-Fahrenheit shift in average temperatures from year to year, so why worry about a 3- or even 4-degree shift over 100 years? As the old saying goes: If you don't like the weather, wait five minutes and it will change. The long-term prediction for the Corn Belt in Iowa says that the weather will get hotter and drier—much like western Kansas is currently. Yet, over the decades of Miller's farming career, conditions have been increasingly wet. "If I had done what climate alarmists had said to do, I would have done exactly the wrong thing for 20 of the last 25 years," Miller says." Dan Hughes | October 5, 2014 at 7:50 am | An interesting comment over at . . . and Then There's Physics. 
In reply to this comment by Rob Painting, Steve Bloom says: Tipping points, Anders, not in the models. Rob, that model finding for the Amazon was recently overturned by observations, [edh bold] Explains a lot. All calculated model results are apparently considered to be true findings prior to Validation. While at the same time completely skipping over all aspects of Verification. Love the wording, tho, overturned by observations, I've got to work that into a sentence sometime. A long version of, wrong. Indeed, modeling results are merely conjectures but they are treated as established facts, or even as observations. This is how modeling has come to dominate the science. Stephen Segrest | October 5, 2014 at 9:34 am | Tony B — You previously asked for some links of incendiary statements being made by Republicans (i.e., Conservatives) on AGW. A while back, I created a jpg picture of things being said to a general public audience ("Fraud, Hoax, Scam, Junk Science, God tells us it can't be true, etc."): http://www.treepower.org/blog/teapartyscience1.png In the "context" of the general public debate, how should advocates of AGW respond to these incendiary statements? It is in this context of responding to a message of "Junk Science, Hoax, etc." that the phrase "scientific consensus" has been mostly used. Anybody following my comments here at CE knows that I don't think much about Wagathon (and many others like him). They are not true Conservatives, but radical Ideologues — wanting to re-fight the American Civil War. For true Conservatives, the problem isn't junk science from liberals (per Wagathon), it's the "Junk Thinking" by people like Wagathon. From the Get-Go, GW was hijacked by liberal ideology in command/control policies like Carbon Taxes or Carbon Trading. Conservative Leadership has never developed a clear and consistent pro-active message and pushed hard with conservative principled policies to approach AGW.
Two examples of "No Regrets" approaches reflecting Conservative principles could be (1) Fast Mitigation of reducing methane, black soot, smog, etc. and (2) using international trade for high economic growth to encourage low carbon economies (e.g., exporting U.S. technology to developing countries in exchange for them having greater access into our markets). Hank Zentgraf | October 5, 2014 at 9:47 am | Stephen, advocates of AGW might consider responding to "incendiary statements" by opponents the same way skeptics respond to AGW advocates who say the science is "settled". Try respectful reasoning with evidence that follows the rules of science and statistics. The defining element of the global warming hoax has always been the pretense that enlightened governments acting through the auspices of the UN under moral authority of Western science, can and should throttle-back modernity to prevent future climate change. Who shall decide our individual fate – government scientists on cell phones in ivory towers, sporting laptops and clouds full of phony data and pushing an evergreen buttload of public-funded research proposals about saving the Earth from human depredation? climatereason | October 5, 2014 at 10:43 am | I don't think I agree with any of the statements around your central circle other than that I would observe that alarmists can push their beliefs with messianic fervour, so in that respect it is akin to religion. Peter Lilley one of the few sceptics in the UK parliament wrote this in the Huffington post. http://www.huffingtonpost.co.uk/peter-lilley/global-warming-religion_b_3463878.html Basically, the belief seems to be stronger than facts. Generally, our politicians don't have the fervour some of yours do. I mostly agree with your 1). 
As regards your 2) regarding International trade, most developing countries want to export agricultural products and the US has been at the forefront of refusing deals that might impact on your farmers so not sure how that would work out. In any case high tech products tend to need hi tech people to supply, install and run them and those are often in short supply locally. Having said that, improving a peasant's standard of living generally reduces their need to have a large family and improves their overall health and wealth and perceptions. So I am all in favour of improving their lot, however I think that it will be a long time before fossil fuels can be genuinely replaced in developing countries. Stephen Segrest | October 5, 2014 at 10:31 am | Wagathon is advocating playing a very dangerous "High Stakes" "Winner Take All" game. If Conservatives don't develop a pro-active leadership position on AGW with conservative principled policies the outcome will be catastrophic if public opinion turns, demanding action. Without pro-active conservative leadership (with definitive policies), the only things out there are the liberal policy approaches (e.g., carbon taxes). Conservative principles: Bottom-up, De-centralized, Flexible, Reward Based. Given that CAGW is an intrinsically liberal policy there is no way for conservatives to be "pro-active," rather they are anti-active. This is why the policy is stalemated for now. Just to refine the point, Stephen, your argument seems to be that this is going to happen whether conservatives like it or not so they should work to get it done their way instead of fighting it. Your premise is false. mosomoso | October 5, 2014 at 11:30 am | Yes, David, I'm afraid these people will have to establish their case. I don't want to be pro-active or on-board with agendas others have cooked up. I want to wholeheartedly oppose agendas with which I disagree. "No regrets" is just a lure. I don't like soot, smog or poverty.
I do like kittens and warm mugs of cocoa. That's got nothing to do with believing in hockeysticks or in ruinous toy technologies offered as "solutions" to confected problems. Before another hurricane whips across Leyte I'll happily see western aid money go to pinning more roofs on Leyte. I don't want to respond by sending Australian money into a carbon scam run by Goldman Sachs or the European Union in the hope of dialling up a stable climate which has never existed. To be a true conservative is to be a serial appreciator: to appreciate when the flick of a switch makes light, without smoke, flame or noise, to appreciate wealth, industry, chemicals and coal as much as the natural world (eg this Australian bush I live in and love). So, enough fetishism, waste and white elephants, okay? Well, let's apply the 3.5°C IPCC average for 2100 to how much the temperature should have increased in 2014. 14/100 * 3.5 = .49°C. The danger level is 2°C: 14/100 * 2 = 0.28°C. The level by 2020 for 2°C is 0.4°C. We aren't going to get close to any of these numbers. The AGW crowd needs to explain why we should be solving a non-problem. Discussing solutions of a non-problem is premature. Your argument is a non-starter from the get-go in defining conservatives as Republicans and using the moniker, "Tea Party," more as a denigrating ad hominem than a convenient and informative generalization. The Tea Party isn't even a party. To put more denotative authority into your language, it would be more accurate to equate the Tea Party with neither the Democrat nor the Republican party. But, even when you get past the shallow thinking, your argument is more proof than a challenge to the fact that society is fundamentally dishonest. Your point 1) for example: the first step in SS, the problem the AGW crowd has is they aren't rational. Rational people prove there is a problem before they start devising solutions. The AGW crowd regards proving there is a problem as a "pro-forma" step that can be skipped.
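The pro-rata arithmetic in the comment above (scaling IPCC 2100 projections down to 2014 and 2020) can be checked mechanically. This is a sketch of that commenter's own linear interpolation, with the implicit linear-warming-from-year-2000 assumption made explicit; it is not an endorsement of the method:

```python
# Sketch of the commenter's linear pro-rating of IPCC 2100 projections.
# The linear-from-year-2000 assumption is the commenter's, not a claim
# about how warming actually accrues.
def pro_rated_warming(total_by_2100, year):
    """Linearly interpolate warming between 2000 and 2100."""
    return (year - 2000) / 100 * total_by_2100

print(round(pro_rated_warming(3.5, 2014), 2))  # 0.49 (the 3.5 C case at 2014)
print(round(pro_rated_warming(2.0, 2014), 2))  # 0.28 (the 2 C "danger level")
print(round(pro_rated_warming(2.0, 2020), 2))  # 0.4  (the 2 C level by 2020)
```

The three printed values match the figures in the comment; whether linear interpolation is an appropriate model is a separate question.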
This allows them to dive right into solutions. It is not a pro-forma step. AGW advocates must prove there is a problem, and they haven't made their case. Before we destroy our economy and waste our money the AGW crowd has to prove the CO2 forcing is as high as they predict. Right now the evidence isn't on their side. True, true, to get beyond the obvious fact that global warming is nothing more than a Left versus right issue and therefore has nothing to do with science, you must explain how self-described "conservatives" can be comfortable with the fact that belief in AGW theory represents a celebration of the abandonment of the scientific method as if it has no societal implications at all beyond the global warming debate. … mitigating black soot would be for the Left to agree that natural gas is a boon to all humanity. This week marked 18 years since the Earth's temperature last rose – something that Dr Benny Peiser, from the Global Warming Policy Forum, says experts are struggling to understand. He explains that we are now in the midst of a "crisis of credibility" because the global warming – and accompanied 'Doomsday' effects – that we were once warned about has not happened. Scientists from the Intergovernmental Panel on Climate Change (IPCC) once predicted a temperature rise of 0.2 degrees per decade – but are now baffled by the fact our planet's temperature has not increased for almost two decades. Speaking exclusively to Express.co.uk, Dr Peiser said: "What has happened is that the public has become more sceptical because they were told we are facing Doomsday, and suddenly they realise 'Where is the warming that we were promised?'" "They say we can predict the climate and the reality is that they can't." Because of this so-called "global warming hiatus", Dr Peiser says climate change is not as pressing an issue as it once was, a fact that should be embraced by the scientific community.
http://www.express.co.uk/news/nature/518497/Exclusive-interview-with-Dr-Benny-Peiser Howard | October 5, 2014 at 12:33 pm | At Revkin's Dot Earth, Victor replies to Romulus and Remus regarding his Nature essay on the 2-deg limit. This is an excellent example of adult behavior. Victor versus R&R The opening says it all. The rest are all the bloody details. An eyeopening read. Along with the Great Ramanathan, it appears that UCSD is the center of the Sane Climate Policy World. First, before digging into the substance, I think it is deeply disturbing that both these posts use the same tactics that are often decried of the far right—of slamming people personally with codewords like "political scientist" and "retired astrophysicist" to dismiss us as irrelevant to the commentary on a matter that is for climatologists. Other scientists, by contrast, are "internationally renowned" (quote by Joe) implying somehow that we are peripheral thinkers. People can say what they like about us, but we have not been irresponsible unqualified hacks weighing in on this matter. Getting serious about goals requires working across disciplines—especially between the natural sciences and the social sciences, which are about human behavior (which is what, ultimately, these goals are trying to change). The failure to do that effectively is one of the reasons why climate science hasn't been more directly linked to policy. Willard | October 5, 2014 at 12:43 pm | I doubt that Victor's distinction between "before substance" and substance cuts any ice, Howard, but that's a pretty damn good example of how unnecessary roughness can turn against a ClimateBall ™ player. INTEGRITY ™ – We Code Words Parceled out by the sou.
GaryM | October 5, 2014 at 1:34 pm | One of the central political questions of our time (posed over 150 years ago), by way of NRO: "In his magnum opus, The Law, Frédéric Bastiat wonders about the nature of the bureaucrats responsible for onerous regulation: 'If the natural tendencies of mankind are so bad that it is not safe to permit people to be free, how is it that the tendencies of these organizers are always good? Do the legislators and their appointed agents not also belong to the human race? Or do they believe that they themselves are made of a finer clay than the rest of mankind?'" http://www.nationalreview.com/article/389528/color-money-josh-gelernter Vanity, the belief in one's own superiority, is the core of progressivism, + 1,000! Gary, as one who has worked for UK, Australian and Queensland governments, I am well aware of the failings of bureaucrats (and politicians), my experience over 50 years is the main reason I argue for smaller government. I have met some ministers and public servants who have a genuine interest in community well-being, and the nous to pursue sensible policies. But they are in the minority, and rarely prevail. The majority are self-serving jobsworths. What amazes me is how few people are prepared to accept this, they cling to faith in governments against all the evidence. I think that almost all (at least) indicators of well-being would improve with smaller government, which would help to foster self-reliance, initiative and entrepreneurial skills. Too often these critical skills are crushed or discouraged by government, only winding back government can change that. I am the president and only member of the 'plague on all your parties' party. Increasingly we are lumbered with self-serving career politicians who often go straight from university to their party and have little experience of the real world.
Many of them have little common sense or are not particularly bright, which all becomes a potent brew when combined with their ideology. Thank you Faustino. Some months ago I argued this very line, but you then disagreed. Please don't tell me you were being disingenuous earlier on – it would destroy my naive faith in the goodness of the Public Service. BTW, the dream of smaller government is as a sword to Don Quixote. Yes, less is more, Faustino, less guvuhmint that is. Innovation and productivity while the leaders are otherwise engaged, like in China pre the Ming and the beginnings of commerce with Constantinople by those wily Venetians out in the marshes long ago. No Venice, perhaps no growth of Italian cities or the Renaissance. Ian, you must have misunderstood me, I have been consistent in this for many years. I recall I did undertake to respond to you on another issue, sorry I haven't, I will if I can. Yes, Tony, I posted at NRO and added: "Increasingly, apparatchiks dominate." It seems to have got worse in recent decades in Oz, the UK and the US; no need to mention the leviathan EUreaucracy. Peter Lang | October 5, 2014 at 7:25 pm | GaryM and Faustino +1000 each ianl8888, Faustino has been presenting this point consistently since I first noticed his comments. I suspect you may have misunderstood what he meant in the previous comment you referred to. The problem in the US is there is no "small government" party. Both the main political parties are basically big government parties, the difference is which hogs feed at the trough. The more the Tea Party pushes at least the Republicans toward small government the better. I don't see any tendency or influence that will push the Democrats toward sanity. Actually, there is: the Libertarian Party. I haven't been associated with the Libertarian Party since the '70s, although I consider myself a libertarian. The relationship between Libertarians and the Tea Party is complex.
But it has become harder and harder to misinterpret the signs – as Al Gore would say – that these climate priests high up in Western academia's ivory towers are really bad at math. The amount of government involvement and money that is going into underwriting the added expense of alternative energy is ruinous much like chewing off your arm to get more protein in your diet. Mark Steyn in 'The Spectator' likened the mindset of global warming alarmists to being in first-class staterooms aboard the Titanic and rooting for the iceberg. That is why "Manbearpig" (South Park) was so funny. They used climate activist math. ALGORE's introductory statement on the topic: "It is a creature which roams the earth alone. It is half man, half bear, and half pig. Some people say that Manbearpig isn't real. Well, I'm here to tell you now, Manbearpig is very real, and he most certainly exists. I'm serial." Half man, half bear, half pig? Oh really? Yes, if you have the right ALGOREITHM anything is possible. omanuel | October 5, 2014 at 5:58 pm | Definitive Data Settles the Debate: http://stevengoddard.wordpress.com/2014/10/05/the-definitive-data-on-the-global-warmingclimate-change-scam/ Physicist with 50 years experience | October 5, 2014 at 6:07 pm | People should heed the work of the brilliant 19th century physicist who was first to determine the size of air molecules. Josef Loschmidt was also first to explain (indirectly) through his gravito-thermal effect the answer James Hansen et al sought as to why planetary surfaces are hotter than radiating temperatures. We don't need Hansen's hypothesis about back radiation and the consequent (but necessary) garbage about the Second Law applying to a combination of independent processes. What is in this comment has profound consequences. Think on it! You're arguing for the possibility of a perpetual motion machine?
In conclusion, below-average nuclear power outages throughout the shoulder season will likely contribute to lower natural gas demand despite record high production. While their overall contribution to natural gas demand and direct impact on price is relatively small, it comes at an inopportune time for the bulls and is one less source of demand that the natural gas longs can count on to correct the present supply/demand mismatch. http://seekingalpha.com/article/2540405-nuclear-reactor-outages-and-their-impact-on-natural-gas-price-another-strike-against-the-bullish-case Stephen Segrest | October 5, 2014 at 7:28 pm | Wagathon, Jim2, David Wojick, PM, and Others: Since much of the CE blog community is not from the U.S. — spend a little time with non U.S. viewers on (1) the U.S. Supreme Court's decision to uphold the Environmental Protection Agency's authority to regulate greenhouse gas emissions; (2) how our political system works (e.g., especially how our Senate works on majorities of 60%, and of 66% to overturn a Presidential veto). Talk about how the EPA is taking a Regional approach to greenhouse gas emissions — and tell us why you believe that things like regional cap and trade (financial derivatives) cannot possibly come out of EPA Regs. Given the Supreme Court's willingness to "overlook" the Constitution, just about anything can happen. That doesn't mean it should happen or that it's right. The EPA concedes that following the proposed rules would have at most a negligible effect on climate change and the amount of atmospheric CO2. But, the compliance costs and disruption to the economy could be huge. Will global warming continue to be a plank in the Democrat platform when it's obvious AGW isn't about CO2 and the climate but really about the Left's belief capitalism is a disease? ianl8888 | October 6, 2014 at 2:34 am | That's why power corrupts, Jim For a longer or shorter period, it (the exercise of power) has no accountability.
Voting a change every 3-4 years cannot undo the careless damage done before. There is no sensible answer for any of this, it's the human condition. Wagathon — Your argument is exactly why Conservatives need to develop pro-active policies on AGW — and not simply have a strategy of, or be viewed as obstructionists. Without doing this there is a great big void, of no conservative alternatives — only liberal options (e.g., regional cap & trade financial derivatives). There is nothing wrong in saying "We believe AGW is occurring — but our scientists are telling us they are unsure of how much or how quickly". Fast Mitigation policies are an example where we can "add time on the clock" before decisions on things like carbon taxes, cap & trade have to be made. Dr. Ramanathan says maybe 20 years can be gained to let our scientists and engineers "catch up" with this Wicked Problem. The fishing/forest/agriculture sector, just the producer prices for products sold, at GDP, not GWP, is about $3 trillion a year. The retail price level, using GWP, capturing the fact that for most of the world, which is at a subsistence level, personal consumption (instead of producer sales) is very high, the number goes north of $9 trillion. The 50% increase in plant growth since 1900 due to CO2 represents a $1-3 trillion benefit. More CO2 (up to about 1000 PPM) will increase this benefit. The AGW proponents should be asked point-blank why they are trying to starve people and reduce the standard of living. They should be forced to demonstrate sufficient harm to offset the benefits of increased CO2. This analysis doesn't include the benefits to wildlife that a richer more vibrant CO2 enhanced environment provides. More plants means more food for all the animals on the planet. PA — As the EPA develops rules on greenhouse gas emissions (which the Supreme Court said they have the authority to do in 2007) — how effective do you think your arguments will be as they do this?
Reality-challenged people are not open to rational arguments. Anyone who claims CO2 is pollution has a seriously delusional viewpoint. All that can be hoped for is that congress can restrain the administration for 2 years. The next administration will hopefully downsize the EPA and some other government agencies and remove the bureaucratic drones that are causing problems. PA — "HOPE" is not a gameplan. The EPA has no evidence of net harm. It has 17 years of evidence the problem is greatly exaggerated. We don't have a good game plan for dealing with agenda-driven people who lack common sense and good judgment. The only solution is to fire them and that isn't likely in the current environment. All we can do in the meantime is slow them down and endlessly point out the absurdity of the baseless damaging policies that are being proposed. Stephen might be right. Perhaps people should be suing the EPA over its attempts to regulate CO2 as a pollutant. PA, Jim2, Aaron — The EPA certainly isn't delusional in regulating CO2 and neither was the U.S. Supreme Court in upholding this authority in 2007 (before Obama was even elected). The Clean Air Act defines pollutant agents (in terms only a lawyer would love) and their impact on either weather or climate. http://www.law.cornell.edu/uscode/text/42/7602 Nothing wrong with your disagreeing in the interpretation on CO2 regulation — but AGW Advocates and SCOTUS opinion that Congress intended it to be regulated is not simply delusional left wing liberalism as you argue. Peter Lang | October 6, 2014 at 8:33 am | Do you have any good, authoritative estimates of the compliance cost of carbon pricing, and of GHG emissions monitoring that would ultimately be required for commerce in the commodity CO2-eq? I've been playing around with this a bit, but have no reliable estimates to work with.
Peter — While I'm no Guru on EPA Regs (especially since they've not even been written yet) — the EPA website can give you a general feel for their thinking where policies must at least break even as to cost/benefit: http://www.epa.gov/climatechange/EPAactivities/economics/scc.html Just an estimate of the probable job losses as a result of proposed EPA regulations. Stephen Segrest, thank you for the EPA link. However, it does not deal with the compliance cost. It covers the benefits and abatement costs assuming compliance costs are nil, as do the widely used IAMs. I suspect the compliance cost of emissions monitoring would become huge as smaller and smaller emitters are included. EPA once estimated EPA's costs (not the cost to businesses or to all the other public and private sector organisations that are involved in compliance and who analyse and use the data) at $21 billion per year (budget increase) to monitor 6.1 million emitters. That is how many emitters would be included if the Obama legislation was applied. EPA tailored it down so they now monitor 8000 emitters. 8000 emitters covers just 49% of USA emissions. I don't know what proportion of emitters would be included by 6.1 million emitters (all emitters more than 250 tonnes p.a.). It seems to me that even at 1/10 of the EPA's estimate, the compliance cost of emissions monitoring would greatly exceed the revenue from carbon tax – and the social cost of carbon. And we haven't included the effect it would have on global economic growth. Inevitably, there will be disputes over who's cheating and who's not measuring sufficiently accurately and precisely and who's not properly including the cost in the price of traded goods and services. Inevitably that will lead to trade disputes (e.g. EU's attempt to make everyone flying into EU pay for the EU's carbon price). Will Putin pay? There will always be a Putin somewhere.
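For scale, the EPA figures quoted above imply a rough per-emitter cost that is easy to compute. This is a back-of-envelope sketch using only the numbers in the comment ($21 billion per year, 6.1 million emitters under the statute as written, 8,000 under the tailored rule); it says nothing about how costs would actually be distributed across large and small emitters:

```python
# Back-of-envelope scale check using only the figures quoted in the
# comment above. Illustrative arithmetic only, not a cost model.
epa_cost_per_year = 21e9      # dollars per year (EPA estimate, as quoted)
emitters_full = 6.1e6         # all emitters over 250 tonnes p.a.
emitters_tailored = 8_000     # emitters monitored under the tailored rule

cost_per_emitter = epa_cost_per_year / emitters_full      # ~$3,400 each
coverage_ratio = emitters_full / emitters_tailored        # ~760x more emitters

print(f"~${cost_per_emitter:,.0f} per emitter per year, "
      f"~{coverage_ratio:,.0f}x more emitters than the tailored rule")
```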
Has anyone seen reliable estimates of the compliance cost of GHG emissions monitoring as the world approaches monitoring 100% of GHG emissions? Hi Peter — Here in the U.S. we have a requirement that at a minimum, cost/benefit must be a break-even. So, you would at least have a boundary. Also — I thought I saw where the EPA said while they had not evaluated the cost of a carbon tax, they had evaluated cost under a cap & trade system. I thought I saw a number of $20 per ton? What Liberals never really address: 1. Any way you package a carbon tax, it would be a regressive tax — hurting the poor. 2. What about the competitiveness impact on domestic manufacturing? Would a carbon tax result in increased imports — where CO2 emissions are just being outsourced to a developing economy (like China)? 3. A Cap & Trade System is just a financial derivative and would be a new play toy of Wall St. We saw what financial derivatives did in bringing down World economies. You are talking about the abatement cost and benefits of carbon pricing. You are not talking about the compliance cost of measuring, reporting, enforcing compliance, disputation, etc. These should be added to the abatement cost, but they are not currently included in it. I have these questions: • What is the compliance cost of GHG emissions monitoring? • What would the compliance cost become in the future as participation increases (As Part 1 explained, near full participation is essential for carbon pricing to succeed and be sustainable; full participation means all human caused sources of GHG emissions in all countries are measured and priced)? • What would be the ultimate compliance cost of near full participation? • What would be the real cost to society of emissions monitoring? • Is there an alternative policy approach that would not require emissions monitoring to the standards needed for commerce, trade?
EPA's estimate of an additional $21 billion per year to monitor 6.1 million emitters doesn't include businesses' compliance cost to monitor and report their GHG emissions. Nor does it include the cost to all the other public and private sector organisations that would be required to monitor and report compliance and have various roles in policing, accounting, routine legal services, dispute resolution, litigation, court time, penalty enforcement, etc. Nor does it cover all the organisations that would be involved in analysing the data, reporting to clients and stakeholders, maintaining data bases, and updating legacy systems forever. Nor does it consider the costs involved in international disputation and conflict resulting from countries not complying with the global rules – e.g. there will inevitably be other world leaders like Putin, sometime somewhere; what's the total cost to everyone else when a Putin refuses to participate? The estimates do not include the cost of trade disputes, trade barriers, and reduced global economic growth caused by trade disruption and barriers to free trade. JamesG | October 6, 2014 at 8:43 am | So Pierrehumbert asserted that Koonin had only read skeptic blogs when Koonin in fact reached his conclusions by interrogating a wide range of climate scientists. This is either serious disinformation or lazy bloviating by Pierrehumbert. His blinkered attitude speaks volumes about the merit of his opinion. The biggest skeptic, alas for him, is mother nature! Wagathon — As Mosher repeatedly says, it's always more productive to be "sitting at the table". EPA greenhouse gas regulations are going to be developed with or without Conservative participation. Wagathon — Last year in testimony directly to the U.S. Congress, EPA Administrators from every Republican President said that AGW is a serious problem and Congress should take action (Ruckelshaus under Nixon, Thomas under Reagan, Reilly under George H.W. Bush, and Whitman under George W. Bush).
This just doesn't "fit" under your ubiquitous liberal conspiracy theory arguments. These EPA Administrators certainly don't stand alone — similar views are voiced by many Conservatives such as "The American Conservative Magazine", Michael Gerson (Washington Post), David Brooks (N.Y. Times), etc. It is not a conspiracy anymore than ISIS is a conspiracy. The Left hates everything America stands for. What is going on in today's classrooms goes beyond propaganda; and, it's more than condoning fraud and verbal assaults: it's harassment, intimidation and indoctrination. Climatism represents a danger to the safety of all in society who refuse to adopt the anti-capitalism and anti-Americanism embodied in the global warming credo of those on Left who have genuine socio-political and ideological interests at stake in the acceptance of global warming that have nothing to do with an average climate of the globe. Your table has a couple of screws loose. We are looking at an example of cause and effect. We don't need the scientific method to know that fear of global warming is a Left vs. right issue. Do we, however, just write off the relationship of party affiliation and political ideology as simply enigmatic, and fail to see the real causes underlying attacks on America from jihadists to climatists? A copy of a Sept. 9, 2014 letter from 15 Republican governors concerning the EPA HERE, provides –e.g., as follows: "Given your Administration's opposition to make use of the Yucca Mountain repository, will you bring forward a viable, long-term solution for [nuclear waste] disposal that would win public support and the necessary votes in Congress? … If not, does your Administration expect the states with bans on new nuclear facilities [California, Connecticut, Illinois, Kentucky, Maine, New Jersey, Oregon, West Virginia, and Wisconsin] to revise their laws, despite the federal government's failure to adequately address the waste issue?" (See, ibid.) 
Boy — you Ideologues have a short and selective memory. Why wasn't Yucca Mtn. resolved when Republicans were in power (Federal executive and legislative)? Also, the first nuclear plant being built in decades is coming under Obama's watch (Georgia Power's Vogtle units) — which is being done under your so called liberal/socialism agenda of the DOE loan guarantee program (which has funded in equal parts solar, nuclear, and automotive). And anybody can file law suits — what is the track record in overturning the EPA in Court challenges? You must be using blue-state logic. Has the federal government conducted an analysis to determine the environmental impact of building renewable energy systems at the scale envisioned in the proposal? For example, one nuclear plant producing 1,800 MWs of electricity occupies about 1,100 acres, while wind turbines producing the same amount of electricity would require hundreds of thousands of acres. If such an analysis exists, please provide detailed information related to that analysis. If such an analysis does not exist, please explain why the analysis was not performed. (Ibid.) Calling the EPA administrators conservative is disingenuous. At least 3 if not all 4 are from the RINO wing of the Republican party which isn't very conservative. They picked administrators that penned a joint opinion piece. Robert Fri, Russell E. Train, Steve Jellinek, Walter Barber, Jr., Anne M. Gorsuch, Marianne Lamont Horinko, Michael Leavitt, and Stephen L. Johnson didn't testify. So… about 33% of Republican EPA administrators and none of the last 3. PA — OK, I'll bite on this. Please provide us with links where each of the Republican EPA Administrators you cited have come out in criticism of regulating greenhouse gases. Seems like Congressional Republicans would have had them testify to counter Ruckelshaus, Reilly, Thomas, and Whitman. From what you presented, one cannot conclude anything on their beliefs — silence doesn't mean agree or disagree.
http://leavittcenter.org/2013/10/22/global-warming-fact-fiction-or-both/ I would say Michael O. Leavitt is skeptical. http://conservefewell.org/author/marianne-horinko/ Marianne Lamont Horinko is pro-fracking and doesn't mention AGW at all (she IS interested in REAL pollution). Stephen L. Johnson got flamed for denying California stricter emissions standards. He seems to be keeping a low profile. From what I can tell it was just RINOs Wagathon — Here's your problem (and others) in being an Ideologue — refusing to even acknowledge any possible validity in another point of view. Let's take policy completely out of the dialogue — asking you a question: "Do you think Nobel Prize winner Dr. Molina is just NUTS and IRRATIONAL in his views?" http://theenergycollective.com/davidhone/60610/back-basics-climate-science Sure sounds like you (and others) do. All your rantings are not about pro/con dialogue about a specific aspect of science — everything you post is about labeling anyone who disagrees with you (like Dr. Molina) as crazy liberal socialists. A subset has frightened and foraged themselves into a quandary. So fearful of imaginary consequences, they've panicked themselves into an impossible corner, global and severe autocracy. A madness, I'm sorry. Kim — It's a madness from the Ideologues on both sides. As to who is worse — I'd have to give the edge to the CAGW crowd (personally Oppenheimer drives me crazy). But, on the flip side, politicians like Inhofe are pretty close. kim | October 6, 2014 at 12:08 pm | Bread turned into roses in his basket. The problem Dr. Molina's viewpoint has is that only 1/3 of energy leaves the surface via radiation. About 1/6 of the energy is removed by convection and about 1/2 (more at the equator) is removed by evaporation. You can dance about and scream radiation laws all you want – but that isn't the primary way energy leaves the surface.
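The evaporation sensitivity PA invokes just below (roughly 35% more evaporation for a 5°C rise off a 30°C base) tracks the Clausius-Clapeyron scaling of saturation vapor pressure. Here is a rough check using the Magnus approximation; the formula choice and the assumption that evaporation scales with saturation vapor pressure are mine, not the commenter's:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Ratio of saturation vapor pressure at 35 C vs 30 C -- a crude proxy
# for how much faster a 5 C warmer tropical ocean surface could evaporate.
ratio = saturation_vapor_pressure(35.0) / saturation_vapor_pressure(30.0)
print(f"{(ratio - 1) * 100:.0f}% more evaporation potential")  # ~33%, near PA's 35%
```

Actual evaporation also depends on wind, humidity of the overlying air, and surface energy supply, so this is an upper-bound-style scaling, not a prediction.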
In fact any increase in "back radiation," whatever that is, will face negative feedback because convection and evaporation will increase. At the equator tiny changes make a big difference – 5°C increase from a 30°C base temperature increases evaporation about 35%. Let's assume the ocean at the equator is evaporating 90W/m2 of energy and 20W of CO2 forcing is applied giving us 5°C higher temperatures (in theory). 90*.35 = 31.5 W/m2 of evaporative energy loss. I'm not including the increase in radiation or the increase in convection. About 1/4 of the surface energy from the temperature increase will leak out as radiation since about 1/4 of radiation leaks out directly anyway or about 5W. We'll make a crude estimate of 2.5 W/m2 increased convective heat loss as a placeholder. So if we subtract 31.5 + 5 + 2.5 from the 20 W, the CO2 forcing leaves the surface about 19 W/m2 cooler than it started. Since this is nonsense and the feedback is proportional more or less to the temperature increase, it means there is about 50% negative feedback and the surface will warm about 2.5°C. Well — Dr. Molina (a Nobel Prize winning scientist) would disagree with you. But again, it's fine to disagree — what's not fine is to label scientists who agree with Dr. Molina as being driven by a left wing socialist agenda. That's the problem. I didn't know if it was Dr. Maria C. Molina, Mario J. Molina, or one of a couple of dentists… I'm going with Dr. Mario J. Molina, he's a chemist. Seems to be an Ozone guy. Seems to have gotten a lot of mileage from the environmental movement for pushing the CFC ban. If that is the guy, he seems kind of like Hansen who is pretty sincere. Sincerity doesn't always mean you're right though. 'A vigorous spectrum of interdecadal internal variability presents numerous challenges to our current understanding of the climate.
First, it suggests that climate models in general still have difficulty reproducing the magnitude and spatiotemporal patterns of internal variability necessary to capture the observed character of the 20th century climate trajectory. Presumably, this is due primarily to deficiencies in ocean dynamics. Moving toward higher resolution, eddy resolving oceanic models should help reduce this deficiency. Second, theoretical arguments suggest that a more variable climate is a more sensitive climate to imposed forcings (13). Viewed in this light, the lack of modeled compared to observed interdecadal variability (Fig. 2B) may indicate that current models underestimate climate sensitivity. Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability. To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed.' http://www.pnas.org/content/106/38/16120.full S-B can't be used to calculate the surface temperature of the Earth – but merely an effective radiating temperature. This is compared to surface temp measurements to give a mooted 'greenhouse effect'. The essential problem arises from the failure to move beyond 100 year old physics to the true complexity of climate in ways and for reasons that undermine real prospects for mitigation and adaptation. This arises from cognitive dissonance – a psychopathology linked to groupthink – which in this case almost universally derives from a progressive mindset. 
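The back-of-envelope evaporation arithmetic a few comments above (90 W/m² baseline evaporative loss, a 35% evaporation increase for an assumed 5 °C warming, plus 5 W radiative and 2.5 W convective terms set against 20 W of forcing) can be reproduced in a few lines. This is a sketch of that commenter's reasoning only; every input below is the commenter's assumption, not a measured value:

```python
# Sketch of the back-of-envelope surface energy balance from the comment above.
# All inputs are the commenter's illustrative assumptions, not measurements.
forcing = 20.0           # W/m^2, assumed CO2 forcing
evap_base = 90.0         # W/m^2, assumed baseline evaporative loss at the equator
evap_increase = 0.35     # assumed fractional evaporation increase for 5 C warming
d_evap = evap_base * evap_increase  # 31.5 W/m^2 of extra evaporative loss
d_rad = 5.0              # W/m^2, assumed extra direct radiative loss
d_conv = 2.5             # W/m^2, crude placeholder for extra convective loss

net = forcing - (d_evap + d_rad + d_conv)
print(round(net, 1))  # -19.0: the naive linear sums overshoot the forcing,
                      # which is why the comment calls the result "nonsense"
                      # and halves the warming estimate instead
```

The point of the sketch is only that the three loss terms (31.5 + 5 + 2.5 = 39 W/m²) exceed the 20 W/m² of assumed forcing, which is the overshoot the commenter uses to argue for roughly 50% negative feedback.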
Meanwhile, the Left is fighting what it feels is wrong with wrong-headed socio-economic policies, based on climate science that is demonstrably wrong, all while ignoring wrongs in the world that we could do something about if not driven over an economic cliff. The insanity of the Left's plans has not gone unnoticed, as follows: Given the amount of land required by renewable energy systems, has your Administration considered that federal land permitting requirements may preclude or stall the development of renewable projects? Also, expanding the deployment of wind and solar farms could readily conflict with the Endangered Species Act (ESA). Indeed, one can easily envision the plausible scenario whereby the ESA, operating as federal law separate from the CAA, could prevent state compliance with EPA's emissions targets. How does your Administration propose to avoid these conflicts? (Ibid.) Wagathon — Your last post illustrates the 2nd problem I have with your posts — You are a "Hog Blogger". When CE sometimes has 600+ posts, you make it very difficult to scan through the thread because of your tremendous volume of rantings. Are you so egotistical as to believe that someone couldn't post equally long or numerous posts refuting your rantings? Cutting and pasting is pretty easy. Do you want a War, forcing Dr. Curry to moderate? If you have something important to say then say it — but the sheer volume and number of your postings, Geeez. Christian Schlüchter recognizes the real problem today: "many scientists are servants of politicians and are not concerned with knowledge and data." And, as in the 1975 article above, Schlüchter also wonders about whether today's "complex and spoiled society" may face circumstances that "brought the Roman Empire to collapse."
…you don't need to be a trained climatologist to smell danger when someone says, Anthropogenic greenhouse gasses are warming the planet, so we need to ramp up taxes, institute a command-and-control economy, stop industrial development in the developing world, and, y'know, just maybe, suspend democracy and jail people who object… If Greens were simply raising money to support research into clean energy and carbon capture and the rest of it, there would be no problem and no objections. If they were to simply try to fix the problem, instead of trying to bully the rest of the world, if they were donating 100 million to solar panel research rather than pissing it down the drain of elections and 'awareness raising,' then there would be no problem… ~Prussian (What is Mann that thou art mindful of him?) Stephen, if you have something important to say then say it — but the sheer volume and number of your postings, Jeeez. David L. Hagen | October 6, 2014 at 12:18 pm | Climate Policy Implications of the Hiatus in Global Warming Ross McKitrick, Fraser Institute . . .In a low-sensitivity model, GHG emissions lead only to minor changes in temperature, so the socioeconomic costs associated with the emissions are minimal. In a high-sensitivity model, large temperature changes would occur, so marginal economic damages of CO2 emissions are larger. . . . warming has actually slowed down to a pace well below most model projections. Depending on the data set used, there has been no statistically significant temperature change for the past 15 to 20 years. . . . One implication of these points is that, since climate policies operate over such a long time frame, during which it is virtually certain that important new information will emerge, it is essential to build into the policy framework clear feedback mechanisms that connect new data about climate sensitivity to the stringency of the emissions control policy. 
A second implication is that, since important new information about climate sensitivity is expected within a few years, there is value to waiting for this information before making any irreversible climate policy commitments, in order to avoid making costly decisions that are revealed a short time later to have been unnecessary. This has relevance to the (in my view wrong) argument for applying lower discount rates to very long term assessments of costs and benefits on the grounds of intergenerational equity, whatever that might be. McKitrick's evidence indicates that Climate Sensitivity to CO2 is much lower than in IPCC's models. Consequently, the benefits of rising CO2 will extend much longer beyond 2070. That also indicates that the projected harm will likely be much lower, later, and more uncertain. Dan Hughes | October 6, 2014 at 1:41 pm | These papers are freely accessible online until January 2015 at http://www.annualreviews.org/toc/statistics/1/1: Statistics and Climate, Peter Guttorp, Department of Statistics, University of Washington, Seattle, Washington 98195 For a statistician, climate is the distribution of weather and other variables that are part of the climate system. This distribution changes over time. This review considers some aspects of climate data, climate model assessment, and uncertainty estimation pertinent to climate issues, focusing mainly on temperatures. Some interesting methodological needs that arise from these issues are also considered. First paragraph of Introduction: This review contains a statistician's take on some issues in climate research. The point of view is that of a statistician versed in multidisciplinary research; the review itself is not multidisciplinary. In other words, this review could not reasonably be expected to be publishable in a climate journal.
Instead, it contains a point of view on research problems dealing with some climate issues, problems amenable to sophisticated statistical methods and ways of thinking. Often such methods are not current practice in climate science, so great opportunities exist for interested statisticians. Climate Simulators and Climate Projections, Jonathan Rougier and Michael Goldstein Department of Mathematics, University of Bristol, Bristol, BS8 1TW, United Kingdom; Department of Mathematical Sciences, University of Durham, Durham, DH1 3LE We provide a statistical interpretation of current practice in climate modeling. In this review, we define weather and climate, clarify the relationship between simulator output and simulator climate, distinguish between a climate simulator and a statistical climate model, provide a statistical interpretation of the ubiquitous practice of anomaly correction along with a substantial generalization (the best-parameter approach), and interpret simulator/data comparisons as posterior predictive checking, including a simple adjustment to allow for double counting. We also discuss statistical approaches to simulator tuning, assessing parametric uncertainty, and responding to unrealistic outputs. We finish with a more general discussion of larger themes. Our purpose in this review is to interpret current practice in climate modeling in the light of statistical inferences about past and future weather. In this way, we hope to emphasize the common ground between our two communities and to clarify climate modeling practices that may not, at first sight, seem particularly statistical. From this starting point, we can then suggest some relatively simple enhancements and identify some larger issues. Naturally, we have had to simplify many practices in climate modeling, but not—we hope—to the extent of making them unrecognizable. 
Climate: your distribution of weather, represented as a multivariate spatiotemporal process (inherently subjective) Weather: measurable aspects of the ambient atmosphere, notably temperature, precipitation, and wind speed. Peter Lang | October 7, 2014 at 12:57 am | Dan Hughes, Interesting but … I want to know how they address these issues: 1. climate changes abruptly, not as long continuous curves as the IAMs assume 2. If not for humans' GHG emissions, the next abrupt change would probably be colder not warmer (e.g. back toward Little Ice Age temperatures) (because we are past the peak of the current interglacial period). 3. Warming and increasing CO2 concentrations have been net beneficial for the past 200 years. 4. Why should we expect that this trend will not continue with more CO2 emissions for some considerable time to come? 5. What is an unbiased pdf of the impacts of continuing GHG emissions? Science rules, ideology drools! The September sea-ice minimum is 5.02 × 10⁶ km², which matches the median scientific prediction (in June) of 4.7 × 10⁶ km² within 6%. Good on `yah sea-ice scientists! The June WUWT prediction was a high-side outlier. Boo on `yah, ideologists! a fan of *MORE* discourse: The September sea-ice minimum is 5.02 × 10⁶ km², which matches the median scientific prediction (in June) of 4.7 × 10⁶ km² within 6%. Good on `yah sea-ice scientists! That is the *Arctic* sea ice minimum. Globally, the sea-ice extent is close to its average over the last 3+ decades: http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg The dip near 15 microns in TOA spectrum demonstrates that thermalization of terrestrial radiation energy absorbed by CO2 molecules exists. The delay would make no difference in the intensity. All of the surface radiation would eventually make it to TOA.
The tiny increase in absorption lines due to a 100 ppmv CO2 increase (water vapor has 465 absorption lines in the range 5-13 microns compared to 1 at 15 microns for CO2), about 1 in 100,000, is insignificant. Reverse-thermalization to CO2 molecules at high altitude and S-B radiation from clouds provides the observed comparatively low TOA radiation at 15 microns. Pointman probes: Tell Me Why Don't they know that switching from growing food staples to growing biofuel crops for cars only the rich can afford has more than doubled prices of basic foods? Don't they know about the people killed in the food riots? Do they actually know anything? Do they care anyway? . . . I highly recommend this. David L. Hagen "Don't they know that switching from growing food staples to growing biofuel crops for cars only the rich can afford has more than doubled prices of basic foods? Don't they know about the people killed in the food riots? Do they actually know anything? Do they care anyway? . . ." I've been trying to make this point for a while and the answer seems to be: Apparently not. It really looks like they don't give a damn. Malaria 8X worse than Global Warming "Malaria threatens more than 40% of the world's population and kills up to 1.2 million people worldwide each year. Many of these deaths happen in Sub-Saharan Africa in children under the age of five and pregnant women. The estimates for clinical infection are somewhere between 300 and 500 million people each year, worldwide." Thus Malaria kills about 8X more people than "climate change" aka "majority anthropogenic global warming". Let's focus priorities where it matters. http://www.rdmag.com/articles/2014/09/taking-big-bite-out-malaria?et_cid=4191706&et_rid=219918439&type=headline Do you have anything to say about this EPA note on discount rates? http://www.rff.org/Publications/Resources/Pages/183-Benefits-and-Costs-in-Intergenerational-Context.aspx And the Socialist's Cost of Carbon? Taken on notice.
Two points come to mind: 1. the 12 economists who attended the EPA workshop on discount rates are all from inside the climate change orthodoxy. I don't see any well-known conservative economists represented. McKitrick is not on the list, nor your friend who critiqued the Stern review. Why not? Nordhaus and Tol are both included as are some of the extreme alarmist economists. I've noticed that Nordhaus has become progressively more alarmist since 2007. I get the impression he has been influenced by the continual badgering and criticising by the climate alarmists and he is no longer as objective as he once was. 2. I don't understand how economists can argue to use discount rates that are equivalent to and less than the long run average risk free rate of return if we recognise that the decision to mitigate GHG emissions is far from risk free. It seems to me that mitigating GHG emissions could be beneficial or it could be damaging. How can it be concluded that reducing GHG emissions is risk free? Faustino, I want to change the wording on point 2: 2. I don't understand how economists can argue to use discount rates that are equivalent to and less than the long-run average risk free rate of return if we recognise that the decision to mitigate GHG emissions is far from risk free. It seems to me that implementing policies which will force huge investments to mitigate GHG emissions is a high risk strategy. As you have pointed out many times the best strategy is to build our capacity to deal with whatever problems occur and remain highly flexible (one of the best ways is to build wealth). Forcing us to commit to high cost strategies that are hugely economically damaging for all this century, on the belief we are going to improve the lot of people centuries from now, is high risk. It seems to me that the discount rate should be that used for a high risk investment. Another risk is that we don't know if GHG emissions are net beneficial or net damaging.
Another risk is that it is very expensive to change strategy once it has been implemented. If the world implemented a carbon tax and later realised it won't succeed (as many people already realise), it would then be difficult and costly to stop it and implement a different policy. Australia provides an example of those difficulties now. Given all these investment risks, how can it be concluded that policies requiring massive investment in GHG emissions reduction should be justified on the basis of a risk free discount rate? After their primaries, Republican senate candidates are becoming more moderate on climate change. For those outside the US, primaries are battles within their party to become the November candidate. Being moderate on climate change is a killer within the Republican Party, but having passed that hurdle they can pivot, and some are, to try to be competitive in the state race. It's all politics with them. http://www.huffingtonpost.com/2014/10/06/republicans-climate-change_n_5941866.html RickA | October 7, 2014 at 10:40 am | Jim D said "It's all politics with them." I am sure that eco politics (green politics) is also all politics with Democrats. Politicians are political creatures – and that includes most politicians from both major parties. As the Tea Party labels folks like me, I am a RINO (Republican in name only). Suffice to say, I don't like the Tea Party which is highly comprised of Ideologues. My opinion was formed through my brief volunteer work for Jon Huntsman (2012 Presidential campaign). Gov. Huntsman (who had a voting record a whole lot more conservative than McCain in 2008 or Romney in 2012) made his chops in international trade. Huntsman had a lot of creative ideas in how to approach AGW using conservative principles — focusing on policies to achieve high economic growth of exporting U.S. high tech products (which we're good at) to developing countries.
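The discount-rate dispute in the exchange above comes down to compounding. A minimal sketch with purely illustrative numbers (a hypothetical $1 trillion benefit realised 100 years from now; 1.4% and 7% are stand-ins for a "near risk-free" rate and a "risky project" rate, not rates taken from any actual assessment) shows how strongly the answer depends on the rate chosen:

```python
# Present value of a hypothetical future climate benefit under two discount
# rates. The benefit size and both rates are illustrative assumptions only.
def present_value(future_value, rate, years):
    """Standard discounting: value today of a cash flow `years` out."""
    return future_value / (1.0 + rate) ** years

benefit = 1.0e12  # hypothetical $1 trillion of avoided damage in 100 years
low = present_value(benefit, 0.014, 100)   # ~1.4%, a Stern-style low rate
high = present_value(benefit, 0.07, 100)   # 7%, a typical risky-project rate

print(round(low / 1e9), round(high / 1e9))  # present values in $ billions
```

Under these made-up inputs the low rate values the benefit at a few hundred billion dollars today, while the risky rate values it at roughly a billion, a gap of two orders of magnitude. That is why the choice between "risk-free" and "high-risk" discounting, argued above, can dominate the policy conclusion.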
But he was stopped at the start gate by the Tea Party and literally booed off stage (and I was there) — labeled as an Al Gore clone. If Huntsman came into this CE blog, he'd get the same treatment. The money in that party is on the right wing, and money is everything in elections unfortunately. They have no tolerance for independent thinkers like Huntsman, that I even respected, and I am not at all Republican. Like the Supreme Court said, money is speech, with the corollary that if you have no money you have no speech when it comes to elections. Jeffn | October 6, 2014 at 9:12 pm | Speaking of politics, Democrats are campaigning on the promise that they won't do anything about global warming. And hoping folks like you and JimD assume they're just lying to get elected. http://freebeacon.com/politics/grimes-staffers-suggest-kentucky-dem-lies-about-coal-support/ Now, this must be puzzling to you: seeing as how the Republican position is so far out of the mainstream, why would Democrats think the only way to get elected is to adopt the Republican position? That's about local policy, not science. It's very different, although blurred for some. I expect he has not denied the science that humans are causing global warming. Jeffn | October 7, 2014 at 6:59 am | Local policy? There is no local policy on coal in the US Senate that Grimes is running for. She's promising she will oppose federal policy that would reduce the use of coal. And hoping that you won't think she's anti-science, just anti-truth. Why would she do this? You aren't suggesting AGW policy would be economically damaging, are you? "I don't like the Tea Party which is highly comprised of Ideologues…." I don't like ideologues, meaning those holding opinions contrary to my own ideology. aaron | October 7, 2014 at 10:47 am | The problem with ideologues is that they are not at all open to good ideas, like… how about letting the scientific method work? I'm basically a Libertarian. 
Living in the DC area, it is pretty obvious that government is the problem and neither party seems horribly interested in downsizing it. I'm fine with a politician who wants to downsize the non-constitutionally-mandated parts of government. Huntsman is sort of moderate. I could see where a more liberal Republican would like him. Some of his positions give me heartburn but he is vastly better than a failed community organizer. Most people that get involved in politics are ideologues of some flavor. @Stephen Segrest | October 6, 2014 at 8:52 am | SS – obstructionism is sometimes entirely appropriate. If one can prevent the implementation of destructive policy, then that is a win. I would go so far as to suggest that when we have a scenario where the science is suspect and some of the suggested policy is highly destructive and when the entire subject is highly political – then tribalism is an appropriate tactic. Tribalism arises naturally in social groups. Given that fact, I surmise it probably serves a good purpose. So, I guess you can see I don't cotton to this idea of conservatives crafting a "conservative way" to achieve the goals of the lefties. Kapish? Jim2, I start from a known fact: Nobody on the face of this Earth knows how the science of AGW will eventually play out. How it plays out in science is not conservative or liberal. How the Earth will respond in an incredibly complex climate system will determine this. By Conservatives not being objective that we just don't know and opposing almost any AGW policy actions, there is just a great big void — where almost all proposed policy actions are liberal ideology approaches (i.e., carbon tax, cap and trade, etc.). Jon Huntsman was trying to change this paradigm through something he made his chops on — International Trade. It was kind of a "China in your face" approach with other developing economies/countries. Paraphrasing Huntsman (as best I can), he was saying "In international trade, reciprocity reigns supreme.
No country eliminates/lowers/gives incentives to its trade barriers without reciprocal and meaningful concessions from trading partners". In Huntsman's approach, the U.S. would give free market developing countries special trade status for selected products into U.S. markets IF these developing countries committed to building a low-carbon economy using U.S. high-tech products (which are often not the lowest capital cost option). An example could be a highly efficient coal power plant. But the Tea Party didn't want to listen to potential conservative approaches to AGW — he was labeled an Al Gore clone. If we get stuck with a carbon tax or cap & trade system, Republicans will have no one to blame but ourselves. Let's create some "Enterprise Zones" and see how this works. I'd start in the Philippines. According to a Pew International poll, the Philippines has the highest favorable rating of the U.S. within the World. Let's do business with folks who like us and we like them. To paraphrase: "I will only allow you to sell products to my voters if you buy these particular products from my cronies." 1. Curry has from what I can tell bounded the CO2 doubling to about 1.6°C. 2. The current linear CO2 growth in the face of geometric emissions growth means it will be hard to even achieve 600 PPM. 3. For the plant growth/water conservation effect, we want to hit 600 PPM. 4. The AGW crowd has not proved positive feedback. They haven't disproved the null hypothesis that doubling CO2 causes a 1°C or less temperature change (the simple effect of the forcing alone). So I do propose a policy: 1. Criminalize (felony) burning food for fuel. 2. Criminalize (felony) sequestering CO2. 3. Criminalize (felony) executive branch officials restricting CO2 emissions in any way. 4. Allow liberal allowance for arctic methane exploration. The AGW crowd keeps using this as a threat – it is time to pull the teeth of the arctic methane threat. See – a counter policy as requested to deal with CO2.
I'm not willing to starve future people because of low CO2 plant growth just to be AGW stylish. Aaron — Yea, just like Obama funded Georgia Power billions of dollars to build the first nuclear power plant in decades in the solid Blue State bastion of Liberals in Georgia to get them to continue voting solid DEM. So, you want to perpetuate the model of driving up costs to the point that they are financially uneconomical and then subsidize them when supply becomes dangerously low… Peter Lang — Peter, I can not critique your paper in like 5 minutes. I said at first blush, it appears that you are not looking at the load shape increment supply side decisions in generation planning. Stephen Segrest, Peter Lang — Peter, I can not critique your paper in like 5 minutes. I agree. You shouldn't have made any comments at all. You should not have misrepresented the analysis since you haven't read it or haven't understood it. You should not have made disingenuous, misleading, unsupported, baseless, wrong statements about the paper. To do so is a clear indication of intellectual dishonesty. And if you demonstrate it once, it is likely you do it continually. That seems to be the case as evidenced by your previous comments on this thread I've replied to. I said at first blush, it appears that you are not looking at the load shape increment supply side decisions in generation planning. Which just shows you haven't read or haven't understood the paper. Take your time. Read the papers. Read the comments on the first paper "http://bravenewclimate.files.wordpress.com/2012/02/lang_renewable_energy_australia_cost.pdf" and read the comments here: http://bravenewclimate.com/2012/02/09/100-renewable-electricity-for-australia-the-cost/ Take your time. Consider it and then come back with questions, not baseless assertions. Jim2, The claim that Republicans obstruct AGW policies is a myth that partisans like Segrest use.
The GOP does not stand in the way of increased use of natural gas (it, in fact, supports it) and does not stand in the way of nuclear power. The fact that the GOP supports the only alternatives to coal that both reduce CO2 emissions and provide power cost-effectively is telling. No Republican would oppose the construction of cost-effective wind or solar by private energy firms. The only things the GOP "obstructs": ineffective, expensive policies the left wants for no other reason than partisanship. May that long continue! JustinWonder | October 7, 2014 at 10:29 am | The GOP obstructs the pilfering of the public treasury by people that want to reward their political friends. I want more nat. gas and nuclear power to generate electricity. If someone wants to put a PV panel on their roof they can go ahead and do it, but they need to pay for it themselves – no tax credits, no utility buybacks at max prices, no tax credits for $80,000 electric cars. Justin Wonder — You need to get outside of the echo chamber once in a while. This is not the one-sided picture you paint. I live in Florida and Duke Energy is billing us $4.5 billion for a botched effort to repair nuclear units (which they couldn't repair and are shutting the unit down) and nuclear units under construction which they now have cancelled. The tax credit on wind energy is basically the same as for new nuclear units — but nuclear has two additional goodies: Price-Anderson, and a federal guarantee to electric utilities to cap the construction costs of new nuclear units. In what you probably describe as cronyism socialism — the U.S. DOE loan guarantee program has been evenly split (1/3, 1/3, 1/3) between nuclear, transportation, and solar. I seriously question whether you know what an integrated electricity grid is and what base, intermediate, and peaking load means. If you did, you'd know that solar has beaten the costs of fossil fuel options for peaking load for decades.
I could go on and on countering your cherry-picking with cherry-picking of my own. 1. Current solar panels are $1/W (I just priced them). 2. 2000 Megawatts of solar (8000 MW nominal) is $8,000,000,000. This is more than a nuclear plant. Comparison for 2019 power generation. 3. Solar is 60% more expensive than a 4th gen nuclear power plant. 4. The solar plant is only good for about 25 years (power degraded 50%). 5. Nuclear doesn't need an expensive backup plan (it is dispatchable). 6. Gas and conventional coal are cheaper than solar – and are dispatchable. The EIA lists solar as "nondispatchable" so the cost of the backup plan is not included. And these are 2019 costs. So the statement that any installed solar is "cheaper than" basically anything in terms of Total System Levelized Cost of Energy is simply a lie. Stephen Segrest | October 7, 2014 at 12:38 pm | PA — Like Justin Wonder, it looks like you don't understand the basics of electricity engineering economics either (integrated grid, dispatching base, intermediate, peaking load). Go read up on this and show some good faith in demonstrating knowledge on this and I'll have dialogue with you. Hint: The key is cost per kWh hugely driven by capacity factors. JeffN | October 7, 2014 at 12:39 pm | Echo chamber? http://www.carolinajournal.com/articles/display_story.html?id=11434 Look, when you're handing out quarter-million dollar grants to Senator's spouses, it makes no sense to compare that favorably to any effort to support nuclear power. Republicans favor the latter; for some reason climate campaigners favor the payola. "pilfering" is actually an understatement. Also, claiming your debating opponent must be ignorant of energy systems for failing to understand the awesome economic benefit of solar power is just plain wrong. There is nobody – not one single solitary person anywhere on this globe – preventing you or anyone else from building your cost-effective solar power.
There is a group preventing the type of bogus, wasteful boondoggles we see in North Carolina, as well as all of Europe. JeffN — You continue to show your lack of basic understanding of the engineering and politics of U.S. electric utilities. Changing "net metering" laws has been and continues to be a long up-hill battle. Retail selling of electricity (other than by electric utilities) under franchise laws is flat illegal. The Duke Power situation is interesting. PEF and Duke power both planned nuclear plants at the same time. The PEF was basically a clone of the Duke plant and had strings attached by Crist that they had to shutdown coal fired plants. Given the $3 billion cost of running power lines to the site, the Duke plant was cheaper. Duke bought PEF. Given that the PEF due to plant siting/strings was more expensive – Duke shuttered the plant. The Crystal River plant was gross incompetence by PEF and not Duke's fault. So to some extent environmentalism killed the Levy plant (the coal plants can stay open). Stephen Segrest: If you did, you'd know that solar has beaten the costs of fossil fuel options for peaking load for decades. That depends on time of day of the peak. When peak demand comes later than peak solar output, as in California solar power requires backup: http://www.caiso.com/Pages/TodaysOutlook.aspx#SupplyandDemand; http://content.caiso.com/green/renewrpt/DailyRenewablesWatch.pdf note: California has gotten as much as 16% of total electricity from renewables, but yesterday it was only 10% because we have had consistently below average wind. If the backup is included in the cost of the PV installation, then PV loses its price competitiveness. PA quoted a price of $1 per watt for PV. Where I live, the cost of an installation is about $10,000 for 2kw, or $5 per watt for a roof-mounted system; large installations run at about 80% of that. 
The materials that PV panels are made from are considered toxic waste; per MW-hr of electricity produce, PV panels produce more toxic waste than nuclear power plants. Nuclear has a PR disadvantage because people would rather get cancer and birth defects from PV waste (and high altitude exposure to ionizing radiation) than from nuclear waste, so they don't mind the larger total exposure from other sources than nuclear. The cost of government subsidies for power is much lower for nuclear when you compute the cost per GW-hr of electricity actually produced. The US has had reliable electricity from 100+ nuclear power plants for decades. If you add in 5% or so as the bill for the failures of TMI and San Onofre, the cost is still much lower than the cost of electricity from solar. Even if you think that PA and Justin Wonder are arguing from ignorance and bad faith, remember that a lot of readers will appreciate any reliable information that you can provide them. Stephen Segrest: I could go on and on countering your cherry-picking with cherry-picking of my own. Sure, but that is not the only alternative. You could aim for an overall evaluation of all costs and benefits. Another detail about pricing. Costs of manufacturing PV panels are decreasing (perhaps 10% per year over long time spans, with short-term spurts much greater), but the costs of installation are declining more slowly. Once they reach price parity on a wide scale, bidding and the scarcity of the resources will probably act to retard the rate of price decline. Another cost besides installation is O&M. http://www.scottmadden.com/insight/407/solar-photovoltaic-plant-operating-and-maintenance-costs.html Fixed panels (less efficient) are $50/kW-y. Tracking panels are $60/kW-y. The EIA lists 0 (zero) variable O&M cost and that isn't correct. Matthew Marler — Of course you are correct that solar doesn't always beat the cost of peaking fossil fuel alternatives. 
My wording should have been better — that in many applications solar can be the least cost economic dispatch option.

I've been home sick with the flu since last Friday. I took this opportunity to ask Wagathon (and others like him) to stop all this liberal ranting day after day, week after week on daily posts. I primarily come to CE to try and learn some science — and with sometimes 600+ posts, scrolling through/following the blog becomes really difficult because of the sheer number and volume of Wagathon's repeated liberal rantings. If anybody wants to rant about anything — they should do it in "Week in Review" as Dr. Curry asked all of us to do — and stay "on point" with her other blog posts.

D o u g C o t t o n | October 7, 2014 at 6:21 pm |
If you'd like to learn what is by far the most relevant science in the climate debate I suggest this comment.

"Stephen Segrest, Justin Wonder — You need to get outside of the echo chamber once in a while. This is not the one-sided picture you paint. I live in Florida and Duke Energy is billing us $4.5 billion for a botched effort to repair nuclear units."

Your first comment on this sub-thread demonstrates YOU are the cherry picker. Your comment is disingenuous. It shows you have little understanding of what you are talking about. And YOU should take your own advice: "You need to get outside of YOUR echo chamber once in a while." You quote meaningless figures like a $4.5 billion repair and don't put that in perspective of energy supplied or to be supplied over the remaining life in $/MWh. Until you tell us what the $/MWh cost is, it's totally meaningless. You should also get out of your echo chamber and learn about the cost impediments that have been added to nuclear power over the past decades as a result of 50 years of misleading, disingenuous, mostly dishonest anti-nuke propaganda by those who call themselves "Progressives" (what a joke).

Stephen Segrest — Once again, this comment applies to YOU, not P.A.
You demonstrate clearly it is you that doesn't "understand the basics of electricity engineering economics (integrated grid, dispatching base, intermediate, peaking load)." You blurt out a pile of words and motherhood statements but are clearly unable to actually apply them yourself.

Here is a simple comparison of capital costs, cost of electricity and CO2 abatement cost for a mostly renewables versus a mostly nuclear powered Australian National Electricity Market grid. The costs of the additional grid requirements are included in the estimates. The estimates use the costs in Australia with Australia's WACC, labour productivity and labour rates. http://oznucforum.customer.netspace.net.au/TP4PLang.pdf Happy to take your questions on this. If you want to know more about the renewables options, see this preceding paper: http://bravenewclimate.com/2012/02/09/100-renewable-electricity-for-australia-the-cost/ Download the pdf version to see the appendices and footnotes. You can also download a simple spreadsheet and change the inputs. [REPOST to fix formatting]

Peter Lang — As an engineer, you dog-gone know what I'm talking about with bell-shaped load curves. Your ubiquitous comment on solar (peaking) versus nuclear (base load) is highly inappropriate and just plain wrong. No electric utility designs their portfolio of power plants with only base load units. If they did, costs per kWh would be through the roof because of low capacity factors. I'll read your paper and comment — but at first blush, it looks like you are comparing apples to oranges.

And this from Peter Lang, 'Renewables or Nuclear Electricity for Australia – the Costs' (April 2012): Peter Lang has undertaken comparative studies of four renewable energy scenarios with nuclear energy. The nuclear scenario is roughly 1/3 the capital cost, less than 1/2 the cost of electricity and less than 1/3 the CO2 abatement cost of the other scenarios. (Figure 6.)
https://www.google.com.au/?gws_rd=ssl#q=Figure+6+PeterLang+2012

As usual, baseless, disingenuous, misleading and unsupported statements.

"but at first blush, it looks like you are comparing apples to oranges."

What are you referring to? What are the apples and oranges? Be specific. Make your comment clear so I and anyone else can understand what you mean and answer your critique. And be sure to show that you are not just making nit-picking irrelevant comments. Show that your criticism is significant and changes the conclusions.

"Your ubiquitous comment on solar (peaking) versus nuclear (base load) is highly inappropriate and just plain wrong. No electric utility designs their portfolio of power plants with only base load units."

What are you referring to that you say "is highly inappropriate and just plain wrong"? Quote the part and explain why it is inappropriate and just plain wrong. It seems you haven't understood the analysis. Do you understand the modelling that was done to match the generation to the demand curve at every half hour through the year 2010? I suggest it is you that just plain doesn't understand what you are talking about.

Beth — What is the cost of solar versus, say, a combustion turbine running on oil used for peaking in Australia? This would be an example of an apples-to-apples comparison.

"PA quoted a price of $1 per watt for PV. Where I live, the cost of an installation is about $10,000 for 2 kW, or $5 per watt for a roof-mounted system; large installations run at about 80% of that."

Well, the raw panel cost is about $300 for a 280 W panel, basically $1/W. http://www.nrel.gov/docs/fy12osti/53347.pdf
2010 cost: $3.80/WP DC – 187.5 MWP DC fixed-axis utility-scale ground mount; $4.40/WP DC – 187.5 MWP DC one-axis utility-scale ground mount.
2020 Evolutionary cost vs Sunshot program target: $1.71/ WP DC – 187.5 MWP DC fixed-axis utility-scale ground mount (SunShot target: $1.00/WP DC) • $1.91/ WP DC – 187.5 MWP DC one-axis utility-scale ground mount (modified-SunShot target: $1.20/WP DC). http://www.solarpaneltilt.com There is a 20+% benefit to a tracker. So, a 2010 installation of utility solar just for the installed panels of the equivalent of a Westinghouse Electric Company AP1000 twin installation (2200 MW) at Cherokee River, given there are only about 5.62 kw-h of average available solar. The installed cost for a tracker system assuming 100% available solar is $41 billion for a nondispatchable system vs about $14 billion for the nukes (Duke says 6 but I'm a pessimist).. I haven't mentioned some other factors. http://rredc.nrel.gov/solar/calculators/PVWATTs/version1/change.html DC to AC Derate factors are .77 under good conditions. Further: Florida is clear (less than 30% cloudy) only 70% of the time.. I am dubious about the claim that any installed systems are competitive with conventional sources.. Now you are demonstrating simple mindedness and ignorance. not even the capacity to think logically. How can you think that "solar versus say, a combustion turbine running on oil used for peaking in Australia" Are comparable. The combustion turbine is fully dispatchable. It can be brought on line quickly and ramps quickly at any time of day and night, in any climate (even Antarctica in winter) has 98% availability and used as the emergency back up system for hospitals and military installations. How can you think that solar power can be comparable to that. You demonstrate a total lack of understanding of energy. I'd suggest you go to a blog site where your ignorance is not so obvious. Stephen Segrest – "I could go on and on countering your cherry-picking with cherry-picking …" You win, you are a much better cherry-picker than me. Why all the hostility? 
Why not state your case without rancor and be a gentleman about it? I'm actually interested in reading what you have to say. Stephen Segrest: My wording should have been better — that in many applications solar can be the least cost economic dispatch option. Schools, for example, which operate almost exclusively in daylight hours. Pumping water for agricultural purposes. And in some foreign nations where all sources of energy are intermittent. In CA, homeowners who spend a lot of money on air-conditioning can profitably install PV panels, though the more economical choice is to go without A/C. But large scale solar and wind farms? I don't see those being economical for a very long time. PA: I am dubious about the claim that any installed systems are competitive with conventional sources.. You and I are mostly in agreement. I have lost the link, but I read of a school near Phoenix, AZ, that covered over its parking lot and roof with PV panels, and reduced the cost of its electricity. But that was strictly a daytime operation and their need for A/C was proportional insolation, as was the electric power produced. Niche uses like that look promising in the US. Matthew R. Marler I suspect they'll never be economically viable for providing a significant proportion of world electricity supply. "The Catch-22 of Energy Storage" explains why http://bravenewclimate.com/2014/08/22/catch-22-of-energy-storage/ 'I suspect they'll never be economically viable for providing a significant proportion of world electricity supply. "The Catch-22 of Energy Storage" explains why' Peter – I believe you are wrong – and not for a good reason. The greenies have us headed for "grid control in reverse". The US grid has three parts: generation, transmission, consumption (load). Traditionally the vast majority of the control was via generation. Consumption (load) was what it was and generation was decreased/increased to match the load. 
Where the grid is headed is the reverse: the generation is variable and load shedding is used to match generation. If you can turn off enough refrigerators, air conditioners, and industrial users you don't need significant standby generation for green energy sources. It is dumb and expensive but then again it is a "green" idea so that is to be expected. Thanks you. I suspect your comments is tongue in cheek, right? Did you read John Morgan's post: "The Catch-22 of Energy Storage" I think you and other readers would find it interesting. It's getting a lot of publicity. It's been reproduced on many other web sites and generates lots of discussion. Where does the world's energy go? The 10 guzzlers http://www.cnbc.com/id/10000030 Rob Ellison | October 6, 2014 at 11:35 pm | 1000km long Rossby waves rolling in over the Gulf of Carpentaria. http://www.couriermail.com.au/news/queensland/morning-glory-cloud-weather-phenomenon-coming-to-queensland/story-fnkt21jb-1227081216729?nk=9b889cf54174a45811ba336f7dc29a08 Ocean heat content: http://www.clivar.org/sites/default/files/documents/gsop/DISCUSSION_II_LOEB.pdf From unimpeachable sources. Seems a fair amount of uncertainty to me. You should warn people it's a 5000 TB down load and takes a week on Australian NBN :) I am sorry. It's a 72 page pdf with many color graphics. Many of the big names were involved with the document. No need to apologise. My comment was intended as Aussie humour. I know that often doesn't arrive in US as sent from down under. :). It very interesting and I've already forwarded the link to some friends. Thanks for not calling attention to Uranus this time, dougie. We don't find that amusing. We are actually paying for this research. http://projectreporter.nih.gov/project_info_description.cfm?aid=8669524&icde=21946917 "Mounting evidence demonstrates that weight influences intimate (i.e., dating and sexual) relationship formation and sexual negotiations among adolescent girls. 
Obese girls consistently report having fewer dating and sexual experiences, but more sexual risk behaviors (i.e., condom nonuse) once they are sexually active." I am outraged by the sexism here! Why is our government not funding research into the statistically significant phenomenon of fat guys getting fewer dates with cheer leaders and super models? Does no one care about Al Gore? And where's the link to globalclimatewarmingchange? Energy Futures Price OIL 88.50 BRENT 91.75 NAT GAS 3.926 RBOB GAS 2.3458 Peter Lang — I did read through your paper, and started to compile at least an initial set of questions — then I saw your above post about my total lack of understanding and ignorance. I have degrees in engineering and economics — including work at the prestigious University of Chicago. I developed a leading U.S. Industry standard on engineering economics modelling for project evaluation (PROVAL) which is probably used in Australia. I've testified before the U.S. Congress several times. Depending on how you answer my first set of simple questions — I may or may not choose any further dialogue with you. (1) Did the Researchers at CEEM peer review or provide any type of critique on your paper? Has any professional organization provided any peer review? (2) Did you have access to, and run the CEEM load shape model in your analysis? Which CEEM? Peter said he used a study by the Centre for Energy and Environmental Markets (CEEM) — which I know nothing about — I assume its part of an University? Peter Lang | October 7, 2014 at 11:35 pm | Segrest, Your comments on this post have demonstrated you don't bother to read nor try to understand what the person you are responding to says. You've demonstrated that clearly. You've also demonstrated intellectual dishonesty http://judithcurry.com/2013/04/20/10-signs-of-intellectual-honesty/. You are not worth wasting the time on. A person who has the experience you claim, would not be making such comments. 
And they would read the paper and the references to find out the answer the questions you asked, without making a fool of yourself. Seems like a simple yes/no to: 1. Did CEEM or any professional organization peer review your work? Yes or No. 2. Did you run the CEEM load shape model in your analysis? Yes or No. (you just can't do what you tried to do without running a load shape model). Same answer, twit. Read it, like any professional or trained researcher would do.
A distributed ledger based platform for community-driven flexibility provision

Jonas Schlund (ORCID: orcid.org/0000-0002-1848-1776) & Reinhard German

Energy Informatics, volume 2, Article number: 5 (2019)
We evaluate the storage and communication effort and conclude with suggestions for future improvements and other possible applications of the decentralized platform, such as aggregated flexibility coordination between balancing group managers and system operators.

Introduction

Traditionally, the electrical power system (EPS) is designed, controlled and operated hierarchically and centrally in order to work reliably. However, the Energy Transition towards renewable energy sources (RES) is characterized by an increased volatility and decentralization of the supply. Especially as conventional power plants are dismantled, this leads to an increased need for bottom-up flexibility in order to be able to operate the EPS reliably. This is underlined by the current costs for EPS stability and congestion management, which are at a record high in Germany (BDEW Bundesverband der Energie- und Wasserwirtschaft e.V 2018). At the same time, more and more distributed assets are installed in the EPS which are potentially able to offer such flexibility, like energy storage systems (ESSs) in private homes or battery electric vehicles (BEVs). The additional yearly construction of photovoltaic (PV) plants and PV-storages in Germany shown in Fig. 1 underlines this for ESSs. Considering vehicles, governments all over Europe aim to move towards a higher share of electric cars on the roads. However, most of these distributed assets are owned by citizens or small businesses and are neither controlled centrally nor have access to a flexibility market. Thus these flexibility sources cannot be activated as long as they are not pooled in a virtual power plant (VPP).

Figure 1: Yearly additional construction of PV-plants and PV-storages in Germany (https://www.foederal-erneuerbar.de/startseite)
Pooling assets connected to one low voltage feeder within a local area results in energy communities, empowered to provide ancillary services locally. Such communities can act as aggregated cells according to the cellular concept (Benz et al. 2015). A known way to achieve such behaviour is using a VPP with a centralized information technology (IT) structure. However, the expected revenue of such a local energy community is small and has to be shared between the VPP provider and the participants of the community, which might not be economically feasible for both. In addition, the goals of an aggregator are not always in accordance with the goals of the household as a participant. Citizens might not agree to let an aggregator directly remote control their assets, which are usually inside their own houses. This issue is strengthened as transparency about what exactly the assets are used for is not necessarily given for the participants of a VPP. Furthermore, controlling many distributed assets via a centralized VPP offers a central attack point on the critical infrastructure. For these given reasons it might be interesting to investigate alternative possibilities, which enable a direct flexibility market access for distributed assets without the need for an aggregator, where system operators or energy suppliers can directly interact with self-aggregating distributed flexibility assets. One way to solve this problem is using distributed ledger (DL) technology. In this paper we propose and analyze such a fully distributed alternative concept to a VPP. Basically, it is a decentralized flexibility platform, which can connect parties in need of flexibility directly to aggregated flexibility sources without any other involved party. The concept is referred to as a decentralized virtual power plant (DVPP) in the following. 
We answer the question of how to self-organize and financially settle such a local energy community in a fully distributed way by a proof of concept based on DL technologies (Wattenhofer 2017). We contribute to the state of the art by proposing a new method for such self-organization and evaluating its advantages and disadvantages. Therefore, the paper is structured as follows. First, related work is presented before the conceptual approach and the implementation are described. This is followed by the technical and economic results and a conclusion.

Related work

This section presents related work in the field of DL technology in the energy sector, including DL basics as well as related work in community energy sharing and self-organized grid management.

DL in the energy sector

In the past years numerous studies attributed a high potential to DL technology in the energy sector, especially in peer-to-peer (P2P) electricity trading (Burger et al. 2016; Dütsch and Steinecke 2017; Hasse et al. 2016; Schütte et al. 2017), but also in grid management (Burger et al. 2016) and in organizing the allocation of flexibility of any size (BDEW Bundesverband der Energie und Wasserwirtschaft e.V. 2017). Most of these studies approached the topic on a high level though. However, a lot of new initiatives are currently ongoing in this area; a good review of the current research and early development projects is provided in Andoni et al. (2019). The DVPP concept, which was first mentioned in Schlund (2018), belongs to the subgroup of initiatives for grid management. Other popular initiatives of this subgroup mentioned in the review paper are Enerchain (Merz 2016), providing P2P energy trading on transmission system operator (TSO) level, a pilot project of TenneT providing a virtual transmission line from north to south by coordinated (dis-)charging of networked home-storages, and other initiatives like Electron and BloGPV.
Considering data processing performance analysis of DLs, no work focusing on energy-related applications is known to us. However, in Dinh et al. (2018) such an analysis has been done for the most popular permissioned blockchains, which are also mostly used in energy-related applications. They conclude that there is still a big performance gap between blockchains and current databases. Besides reviewing the initiatives in the energy sector in a systematic way, Andoni et al. (2019) also describe the technology background. A short summary of the technology basics necessary for understanding this paper is provided in the following.

Blockchain

The term blockchain (Nakamoto 2008) is often used as a synonym for DLs, although blockchains are only the most prominent versions of DLs. A blockchain is not a single technology but a combination of several technologies. In general it is a distributed ledger in which Merkle trees of transactions are structured in blocks. These blocks are linked using cryptographic hash functions, which promises immutability of the past. Peers of the network can modify the state by sending transactions using asymmetric cryptography. Different consensus mechanisms exist to ensure that the state of the blockchain network is not corrupted (Wattenhofer 2017; Nakamoto 2008; Wood 2018; Lamport et al. 1982). Blockchains can be permissioned or permissionless and private or public, while most of the applications currently discussed in the energy sector either do not need a blockchain at all or might use permissioned blockchains (Wüst and Gervais 2017). In permissioned blockchains, energy-efficient consensus mechanisms like proof-of-authority (PoA) (Wood 2015), where only approved validators are allowed to write new blocks, can be used. With different versions of PoA the network cannot be manipulated as long as a certain percentage (depending on the exact version of the PoA algorithm) of the validators are honest.
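The hash-linking of blocks described above can be illustrated with a minimal Python sketch (illustrative only; a real DL additionally uses Merkle trees, signed transactions and a consensus mechanism such as PoA):

```python
import hashlib
import json

def block_hash(block):
    # Deterministically hash the block contents, including the
    # reference to the previous block's hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Link the new block to its predecessor via the predecessor's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})
    return chain

def chain_is_valid(chain):
    # Any modification of a past block breaks all hash links that follow.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))
```

Tampering with any historic transaction invalidates the following links, which is what "immutability of the past" refers to.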
In this context, a smart contract is an agreement between two or more parties, encoded in such a way that the correct execution is guaranteed by the blockchain (Wattenhofer 2017). Including this concept, Ethereum aims to be a permissionless technology on which all transaction-based state machine concepts can be built (Wood 2018). Other projects like Hyperledger enable permissioned platforms for blockchain-based applications. We contribute to the state of the art by proposing and evaluating a platform for energy sharing and flexibility provision. The implementation is the first of its kind that is open-source, and its strengths and weaknesses are analyzed in detail. As the term blockchain originally described only public and permissionless blockchains, we use the term DL in the following to avoid misunderstandings.

Community energy sharing

Research activities in the field of community energy sharing have been increasing in the past decade, especially since grid parity has been reached. An overview of research activities is provided in Strickland et al. (2016). Major findings are that the maximum load of secondary substations and the necessary grid reinforcement can be reduced, which can lead to decreased total costs. Community energy sharing can be achieved by means of one large ESS for the whole community or many distributed ESSs operated in a coordinated way. Further research showed that the self-sufficiency rate (SSR) and the self-consumption rate (SCR) can be improved, that both economic and technical benefits can be reached (Zhou et al. 2018) and that each participant can profit (Long et al. 2018). In Schlund et al. (2018b) the impact of community sizes was analyzed and it was shown that a smart operation strategy for distributed ESSs can in addition even lead to a considerable increase in power efficiency. Furthermore, the study showed that the largest share of the total benefits can already be achieved at comparatively small community sizes.
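The two rates can be made concrete with a short sketch of their standard definitions (not code from the cited studies; without storage, only the instantaneous overlap of load and generation counts):

```python
def ssr_scr(load, generation, dt_h=1.0):
    # Self-consumed energy is the instantaneous overlap of load and
    # generation, integrated over equal-length power profiles (step dt_h).
    self_consumed = sum(min(l, g) for l, g in zip(load, generation)) * dt_h
    total_load = sum(load) * dt_h
    total_generation = sum(generation) * dt_h
    # SSR: share of demand covered locally; SCR: share of generation used locally.
    ssr = self_consumed / total_load if total_load else 0.0
    scr = self_consumed / total_generation if total_generation else 0.0
    return ssr, scr
```

A community ESS raises both rates by shifting surplus generation into hours where it increases the overlap term.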
Decentralized and self-organized grid management

There have already been a lot of research projects focusing on decentralized grid management (Conrad 2010; Lehnhoff 2010; Lehnhoff et al. 2011). Most of the approaches used price incentives for an automated demand or supply response. Within the projects, different approaches of decentralization considering components of the power system or the IT system have been proposed and analyzed. Hierarchical aggregation of sub-grids with decentralized power system components through balancing group managers has been researched in detail (Lehnhoff 2010; Lehnhoff et al. 2011). They concluded that it is possible to satisfy all actors and to provide the necessary balancing power by means of decentralized assets in a possibly cheaper way compared to the current centralized control. Approaches without the need for any aggregators (Conrad 2010) have been proposed and analyzed as well. However, they have only proven to be robust up to a small percentage of attackers in the system (≤5%). In addition, centralized service providers for authentication were necessary. This paper aims at providing an open and fully decentralized platform which is more robust to faulty nodes. The focus is not automating the grid management itself but providing an open, decentralized and secure flexibility platform without further intermediaries between distributed flexibility resources and system operators or energy suppliers, who might be in need of the flexibility. The forecasting, the monitoring of the grid as well as the scheduling of the flexibility need are assumed to be done by the system operators and balancing group managers and are not the focus of this paper. Summarizing, the concept provides a flexibility market access without the need for a central aggregator like a VPP.
In light of the related work, this paper contributes to the state of the art by proposing a new method which includes a financial settlement layer, works fully distributed without relying on a central platform, and is more robust in case of attacks or faulty nodes compared to previously proposed approaches.

Conceptual approach

The proposed concept aims to enable two basic functionalities: community-internal energy sharing through aggregation, and flexibility provision of the community as one virtual unit to the external EPS. This shall be achieved on a fully distributed basis and thus without the need for an aggregator, as visualized on the left side of Fig. 2. Hence, there are no costs for a third party and the participants can profit from the full economic benefit. The DVPP thus has the potential of directly providing a market access to small-scale distributed flexibility sources by directly linking them to parties in need of flexibility like distribution system operators (DSOs) or balancing group managers (BGMs). In analogy to a VPP, possible applications are frequency and voltage stability or congestion management.

Figure 2: High level concept of the self-organized DVPP (left) compared to a centralized VPP (right)

A permissioned consortium DL is proposed in order to achieve a consensus on the community state and the coordination rules. The participants must have cryptographically authenticated smart meters with built-in private keys in order to facilitate digitally signed and secure transactions. As the participants have to be known and permissioned, a simple form of PoA or practical byzantine fault tolerance (PBFT) (Castro and Liskov 1999) with each participant acting as a validating node can be used as consensus mechanism. This means that the network cannot be corrupted as long as more than a certain percentage (depending on the actual consensus mechanism) of the validators act honestly.
For the proof-of-concept, the participants of the community are assumed to be prosumers, each with a roof-top PV plant, an ESS, a household demand and a smart meter with a built-in private key. The concept is extendable to any kind of flexible assets though. Each of the n participants has the generation Gi(t) and the load Li(t). Note that the participating houses are all located at one low-voltage feeder and they have one grid connection point for external balancing in case of over- or undersupply or provision of flexibility. The locality of the community is a basis for the assumption that line losses can be neglected and that the flexibility provision can also be used for local needs. As not all houses on the feeder are necessarily part of the community, the grid connection point might be virtual. Considering the energy sharing part, the concept builds upon the previously published work in Schlund et al. (2018a). The residual loads of the participants Pres,i(t) are aggregated to a community residual load Pres(t) according to Eq. 1. $$\begin{array}{*{20}l} P_{\text{res}}(t)=\sum_{i=1}^{n} P_{\text{res,} i}(t)= \sum_{i=1}^{n} L_{i}(t) - G_{i}(t) \end{array} $$ The ESSs of the community can now be used to cover this residual load. This is described by Eq. 2. $$\begin{array}{*{20}l} P_{\text{ESS}}(t)=\sum_{i=1}^{n} P_{\text{ESS,} i}(t)= P_{\text{res}}(t) \end{array} $$ Additionally, an efficiency improving heuristic described in detail in Schlund et al. (2018b) is used to avoid low efficient part load operation of the ESSs. This is motivated as the power electronics of state-of-the-art converters have a low power efficiency in part load operation (Schlund et al. 2017a). The heuristic is basically a knapsack problem. It is solved by cumulating the optimal operation points of individual ESSs in the order of the best fitting state of charge (SOC) until Pres(t) is met. 
This means that the index i of each ESS is not static but depends on the SOC distribution of the ESSs and is updated each coordination timestep Δtc. As shown in Eq. 3, the last ESS which is needed to cover Pres(t) has the index z(t).
$$ \sum_{i=1}^{z(t)} P_{\text{opt},i} \geq P_{\text{ESS}}(t) \tag{3} $$
PESS(t) is then provided only by the first z(t) or z(t)−1 ESSs, depending on which of the two options results in a cumulated value of optimal operation points closer to PESS(t). The index of the resulting last active ESS is l(t). The active ESSs then provide PESS,i(t) according to Eq. 4, while the rest of the ESSs are set to standby. This idea has already been proposed in Schlund et al. (2018a).
$$ P_{\text{ESS},i}(t) = P_{\text{opt},i} \cdot \frac{P_{\text{ESS}}(t)}{\sum_{j=1}^{l(t)} P_{\text{opt},j}} \tag{4} $$
In addition to Schlund et al. (2018a), a community-internal financial settlement via an internal price pc is enabled by means of a token. Furthermore, an automatic flexibility provision by the community is proposed here. The flexibility provision is directly triggered by an external payment from an energy supplier or system operator who has a need for the flexibility. When flexibility is needed for any of the applications mentioned above, the party in need of flexibility can transact tokens to the smart contract, thereby automatically shifting the operating point of the DVPP in the desired direction and thus activating the flexibility provision of the distributed ESSs. The flexibility provision is realized by replacing Eq. 2 with Eq. 5 as long as the flexibility Pflex(t) is contracted. Pflex(t) is defined in the producer counting arrow system here. The smart contract then automatically disburses the prosumers only in case they successfully provided the flexibility. Otherwise the payment is kept in the contract for the operator to be reimbursed.
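The dispatch heuristic of Eqs. 3 and 4 can be sketched as follows. This is a hedged reconstruction, not the published implementation: the exact SOC ordering criterion and the handling of per-unit power limits are assumptions, and the names are ours.

```python
# Sketch of the dispatch heuristic (Eqs. 3-4): ESSs are ordered by best-fitting
# SOC (here: fullest first when discharging, emptiest first when charging),
# their optimal operating points P_opt are cumulated until P_ESS(t) is covered
# (Eq. 3), and the active units are scaled so the sum matches exactly (Eq. 4).
def dispatch(p_ess, units, discharging=True):
    """units: list of (p_opt, soc). Returns per-unit set points (0.0 = standby)."""
    order = sorted(range(len(units)), key=lambda i: units[i][1],
                   reverse=discharging)
    cum, z = 0.0, []
    for i in order:                       # Eq. 3: cumulate P_opt up to z(t)
        z.append(i)
        cum += units[i][0]
        if cum >= abs(p_ess):
            break
    # keep z(t) or drop back to z(t)-1, whichever cumulated P_opt is closer
    if len(z) > 1 and abs(cum - units[z[-1]][0] - abs(p_ess)) < abs(cum - abs(p_ess)):
        cum -= units[z.pop()][0]
    active = set(z)
    # Eq. 4: scale the active units' optimal points to meet P_ESS exactly
    return [p_ess * units[i][0] / cum if i in active else 0.0
            for i in range(len(units))]
```

With three identical 2 kW units and a 3 kW target, the two fullest units each run near their optimal point while the third stays in standby.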
As the payment to the contract and the availability of funds of the party which activated the flexibility are directly linked to the Pflex(t) variable, the participants providing the flexibility can be sure to be paid by the contract. On the other hand, the contract only pays the participants if they successfully provide the flexibility, which is verified by the smart meter measurement. This way the requesting party has proof that the offered flexibility has actually been provided and can be sure that it only pays for flexibility which was provided. The price of the flexibility provision pf can be set dynamically by the participants of the DVPP, e.g. by means of democratic voting, and can be higher than the internal price for energy sharing.
$$ P_{\text{ESS}}(t)=\sum_{i=1}^{n} P_{\text{ESS},i}(t) = P_{\text{res}}(t)+P_{\text{flex}}(t) \tag{5} $$
The operating principle of the DVPP is visualized in Fig. 3. During normal operation (before t1, between t2 and t3, and after t4) the DVPP tries to operate self-sufficiently. Therefore, it uses the flexibility of the ESSs to balance the cumulated load of all participants of the DVPP with a cumulated generation of all participants or vice versa. This results in a cumulated residual load of zero in case enough flexibility is available. At t1 the DVPP smart contract receives a payment which automatically contracts the DVPP to provide positive flexibility until t2. Thus its cumulated generation is increased (e.g. by discharging distributed ESSs) and the cumulated residual load shifts to +P. At t3 the same process is triggered for negative flexibility provision. Summarizing, the operating principle of the DVPP is equivalent to the principle of a VPP, with the difference that there is no VPP operator and no central controller, it is fully self-organized and the disbursement is automated.
Fig. 3: Exemplary visualization of the operating principle of the self-organized DVPP (producer counting arrow system)

A real DL implementation is utilized in order to represent the full complexity of the technology, while the power flows are simulated. The following subsections describe the basic DL, the smart contract and the interfacing software. Only freely available software and low-cost hardware are used for the implementation. Furthermore, the source code is published under the GNU Lesser General Public License on https://github.com/cs7org/dvpp/ for reproducibility. For the given use case a permissioned consortium DL is suitable. For the prototype we chose to utilize an instance of an Ethereum (Buterin 2018) blockchain with the energy-efficient PoA Clique (Wood 2015) consensus. The reasons for this selection are the large developer community, the suitability for prototyping, the good documentation and the open-source availability. For a robust implementation in the field, PBFT or Aura might be more suitable as consensus mechanisms (Angelis et al. 2018). However, Clique is suitable for testing purposes, every participant of the community can act as a validating node and new validators can be added via democratic voting. In addition, we implemented a script for a coordinated creation of new blocks according to round-robin scheduling. This way the behaviour of Aura is imitated. We implemented the prototype on two different setups. The first setup runs on up to four Raspberry Pis and is used for demonstration purposes, while the second setup runs in up to 20 virtual machines (VMs) and is used for the quantitative evaluation. For the second setup a server with two Intel Xeon E5-2637 v4 @ 3.50 GHz central processing units and 80 GB of RAM is used. This way, every VM uses approximately 350 MHz of computing power and 4096 MB of RAM. All VMs run on Ubuntu 18.04 and have geth (version 1.8.14), python3, and additional python3 libraries installed.
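The round-robin block creation mentioned above boils down to assigning each block slot to exactly one validator. A minimal sketch of that idea (illustrative only; not the authors' script, and far simpler than Aura's actual rules):

```python
# Round-robin sealing sketch: in slot k, only the validator at position
# k mod n_validators is allowed to create the next block, so sealing
# rotates deterministically through the validator set.
def sealer_for_slot(validators, slot):
    return validators[slot % len(validators)]
```

With three validators, slots 0, 1, 2, 3, ... are sealed by the first, second, third and again the first validator.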
In both cases each machine is used to represent one participant of the DVPP. They communicate via TCP/IP and all have the same setup. Geth runs full authority nodes of an instance of the consortium Ethereum PoA blockchain with 15 s block time (by default) on each device. The coordination logic of the community is represented by a smart contract written in Solidity and deployed onto the DL. It offers registration and deregistration of houses, automatic deregistration of offline nodes, sharing of the ESS operating parameters, the SOCs and the residual loads of each house, and the calculation logic of the control commands for the ESSs as described in the concept section. As the contract only stores up-to-date discrete values of the variables, they are described without a time dependency in the following. In order to manage the funds it uses an internal token, which basically maps public addresses of the participants to their stored funds in the contract. A new participant can register at the smart contract with its desired or optimal operation points for charging and discharging Popt,i, its maximum power limit Pmax,i, its current SOC and its residual load Pres,i at the time the transaction is facilitated. In theory, any asset or aggregation of assets that can be described with a maximal operation point, optimal operation points for charging (or negative flexibility provision) and discharging (or positive flexibility provision), and a SOC (or a similar flexibility model) can take part in the system. For the home storages considered here, the optimal operation point is usually close to 50% of the maximum power limit (Schlund et al. 2017a). Whenever a new participant registers (or deregisters), it is added to (removed from) the registry of the smart contract and the total power limit of the DVPP Pmax is updated.
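The registry state the contract keeps can be sketched in Python rather than Solidity (field and class names are our assumptions, chosen to mirror the parameters named above):

```python
# Illustrative model of the contract's participant registry: each address maps
# to the parameters supplied at registration, and the DVPP's total power
# limit P_max is updated on every (de)registration.
from dataclasses import dataclass

@dataclass
class Participant:
    p_opt_charge: float     # optimal point for charging / negative flexibility
    p_opt_discharge: float  # optimal point for discharging / positive flexibility
    p_max: float            # maximum power limit P_max,i
    soc: float              # current state of charge
    p_res: float            # residual load P_res,i at registration time

class Registry:
    def __init__(self):
        self.participants = {}  # address -> Participant
        self.p_max = 0.0        # total power limit of the DVPP

    def register(self, address, p):
        self.participants[address] = p
        self.p_max += p.p_max

    def deregister(self, address):
        self.p_max -= self.participants.pop(address).p_max
```

In the real contract the registration is additionally payable, i.e. it requires a token deposit.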
When registering, an amount of tokens has to be transferred to the contract in order to be able to take part in the energy sharing process, so the register function is payable in Solidity terms. All participants update their operational parameters (current SOCi, current Pres,i and the measured value \(P_{\text {ESS}, i}^{\mathrm {m}}\) of the previous coordination interval) once every defined coordination timestep Δtc by executing the setState function. Note that Pres,i is counted in the consumer counting arrow system, while \(P_{\text {ESS}, i}^{\mathrm {m}}\) is counted in the producer counting arrow system. The value of Δtc can be chosen as desired by the community and is varied between 15 s and 15 min in the evaluation part, as its resolution has an influence on the self-sufficiency of the community but also on the communication and data storage effort. The sequence of the transaction is visualized as a flow chart in Fig. 4. Note that participation in the DVPP and also an update of the state can only be achieved by means of this transaction, and this transaction can only be executed completely. In case it fails, all changes are reverted.

Fig. 4: Sequence of the setState transaction, which needs to be executed each coordination time interval Δtc

After a data plausibility check and an update of the SOC, the community members automatically check the last entry of the two neighboring participants (according to the registration index). In case the last entry of one of the neighbors is older than a threshold, the neighbor is assumed to be faulty/offline and is automatically deregistered. This threshold is parameterizable but has to be at least one coordination interval; by default it is two coordination intervals. In case a neighbor has less value in its balance than what is needed in the worst case during the next two coordination intervals, it is also deregistered.
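The offline-neighbor check described above can be sketched as follows (a simplified model under our own naming; the real contract works on block timestamps and registry indices):

```python
# Sketch of the liveness check inside setState: the sender inspects the
# last-entry timestamps of its two registration neighbors and flags any
# neighbor whose entry is older than a threshold (two coordination
# intervals by default) for automatic deregistration.
def stale_neighbors(last_entry, sender_idx, now, dt_c, threshold_intervals=2):
    """last_entry: list of last setState times, indexed by registration index."""
    n = len(last_entry)
    neighbors = [(sender_idx - 1) % n, (sender_idx + 1) % n]
    limit = threshold_intervals * dt_c
    return [i for i in neighbors if now - last_entry[i] > limit]
```

Because every sender checks only its two neighbors, the liveness monitoring load is spread evenly over the community.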
In case the DVPP was not contracted to provide flexibility to the EPS during the last coordination interval, the settlement of the community-internal energy sharing is facilitated. Here, an approach using only withdrawals and no payments to other users is needed, because if participants had to pay, they could simply miss submitting their transactions intentionally in order to save money. This is solved by calculating a community reward rc,i for the participant sending the transaction according to Eq. 6 and transferring rc,i from each other participant to the balance mapping of the sender. The reward is, however, only calculated and transferred if \(P_{\text {ESS}, i}^{\mathrm {m}}\) equals the corresponding set value PESS,i from the previous coordination interval. Equation 6 is chosen in such a way that the withdrawals are positive in any case. However, the withdrawals of consumers are smaller than the withdrawals of producers, resulting in an overall payment from consumers to producers. Note that these withdrawals are not actual withdrawals from the smart contract, but only transactions of its internal token. This follows a common security design pattern for smart contracts, preventing re-entry attacks.
$$ r_{\mathrm{c},i} = \frac{P_{\text{ESS},i}^{\mathrm{m}} - P_{\text{res},i} + P_{\text{max}}}{n-1} \cdot p_{\mathrm{c}} \tag{6} $$
Failing to commit a transaction or not operating the ESS according to the community logic is thus always punished, as the failing participant fails to claim its reward and effectively pays more. After the internal settlement, the residual load for the next coordination interval is updated by setting Pres,i and adding it to the community residual load Pres. In case the flexibility provision was contracted, there is no community-internal settlement, as the whole community gets paid for its flexibility provision if it was successfully provided.
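Eq. 6 is short enough to restate directly in code (a sketch in our own naming; the +Pmax shift is what keeps every withdrawal positive, since the per-participant powers are bounded by the total power limit of the DVPP):

```python
# Eq. 6: community reward r_c,i that the sender withdraws from each other
# participant during the internal settlement of one coordination interval.
def community_reward(p_ess_m_i, p_res_i, p_max, n, p_c):
    return (p_ess_m_i - p_res_i + p_max) / (n - 1) * p_c
```

A producing participant (large measured ESS feed-in, small residual load) thus withdraws more than a consuming one, yielding the net payment from consumers to producers described above.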
It is thereby again checked whether \(P_{\text {ESS}, i}^{\mathrm {m}}\) matches its set value and, if so, the flexibility reward rf,i is calculated according to Eq. 7. This reward is equal to the share of the flexibility provision which was provided by the sender. It is transferred from the balance of the contractor address to the balance of the sender's address. After this transfer the residual load for the next coordination interval is updated as described above. In addition it is checked whether enough flexibility is available for the next coordination timestep and, if so, the flexibility provision is continued during the next Δtc.
$$ r_{\mathrm{f},i} = \frac{P_{\text{ESS},i}^{\mathrm{m}}}{P_{\text{ESS}}} \cdot |P_{\text{flex}}| \cdot p_{\mathrm{f}} \tag{7} $$
The coordination logic of the DVPP as described in the concept is implemented in the readInstruction function. It calculates the set power for the requesting ESS. This function is deterministic and only depends on the state of the smart contract (the SOCs, Pres and Pflex), but does not change any state. This way, it can be calculated off-chain. The dependency of the logic on the SOC distribution ensures that the ESSs are treated equally and the SOCs of all participants are kept homogeneous. Furthermore, a function to contract the flexibility, which transfers funds to the contract and updates Pflex, as well as some other reader functions are implemented in order to be able to analyze the ongoing activity and check e.g. whether there is any flexibility available in the DVPP. In addition, the contract provides functions to actively withdraw earned funds from it and to cancel an active flexibility provision for testing purposes. In order to settle the funds correctly, the contract always keeps two periods of Pres, Pflex and PESS,i in its state. For the rights management, modifiers limit the access to the functions so that only the allowed parties can execute them.
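Eq. 7 can likewise be stated in one line (sketch; our naming):

```python
# Eq. 7: each successful provider's flexibility reward is its measured share
# of the cumulated ESS power, times the contracted flexibility magnitude and
# the flexibility price p_f. Paid from the contractor's balance to the sender.
def flexibility_reward(p_ess_m_i, p_ess_total, p_flex, p_f):
    return p_ess_m_i / p_ess_total * abs(p_flex) * p_f
```

Summed over all successful providers with \(\sum_i P_{\text{ESS},i}^{\mathrm{m}} = P_{\text{ESS}}\), the rewards add up to exactly |Pflex|·pf, i.e. the contractor's full payment for the interval.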
For example, setState is limited to participants once per coordination interval; the DVPP can only be contracted by non-registered users and only if it is currently not contracted; a contract can only be cancelled by the contractor; a user can only register if not yet registered, can only deregister if registered, and can only read its own instruction for the next coordination interval; and pf can only be updated if there is no active flexibility provision. The smart contract triggers events when important state changes occur. This way it is easy and transparent for participants to observe what happens. The following events are implemented:

- NewContract(address _contractor, int _newSetValue, uint _PricePerBlock): triggered when a new contract is initiated
- ContractSelfCancelled(string _reason): triggered when a contract ends by itself
- ContractEnd(): triggered when a contract ends
- ContractCancelled(): triggered when a contractor cancels the contract
- BalanceChange(address _from, address _to, uint _amount): triggered when balance is transferred within the balance mapping of the contract
- NewParticipant(address _who): triggered when a new participant joins the DVPP
- OfflineNodeDeregistered(address _who, address _from): triggered when an offline node is deregistered
- NodeDeregistered(address _who): triggered when a node deregisters

Depending on the setting of the coordination interval Δtc, the coordination has a high or low time resolution. A higher resolution results in an operation closer to the real-time course of Pres and Pflex. On the other hand, a higher resolution also results in a higher communication and data storage effort. Both of these relationships are quantified in the "Results" section. In the current prototypical implementation Δtc is determined by the block time for simplicity.
Future improvements should include a parameterization of Δtc independent from the block time, so that a coordination interval can include several blocks and validating nodes are incapable of preventing others from submitting transactions within a coordination interval.

Interfacing software

An interfacing software is written in Python 3 and follows an object-oriented approach, which enables adaptability, e.g. to other blockchain technologies, as well as extensibility for additional features. It utilizes the Web3.py library and thereby JSON-RPC over HTTP to interact with the Geth instance running on the same device. This software runs in each of the participating nodes. For a smooth test run the system clocks of all nodes were synchronized using the network time protocol. At startup the simulation parameters are read in and the participant registers at the smart contract if it is not yet registered. The internal time resolution of the software is adjustable and 1 s by default. With this frequency, house-individual values for PV generation and household load are read from time-series files. This internal timestep obviously must be smaller than Δtc. The residual load and the SOC of the ESS are transacted to the smart contract once each Δtc. These transactions are facilitated in parallel in order to be able to simulate the ESS operation in real time. With every new block, these values are distributed in the network and each house's smart contract instance can calculate the control command for its ESS based on the new state. The ESS is then operated according to the battery command if possible. When a new block is detected, the software also checks if the node is still registered. It might have been deregistered by other nodes in case it has no more balance or its communication was faulty and the other nodes assumed the node to be offline. All relevant data of this procedure is written to a log file so that it can conveniently be analyzed.
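The interfacing software's main loop, as described above, can be sketched roughly as follows. This is a simplified, blocking sketch under our own naming: all callables are placeholders for the real Web3.py interactions and ESS control, and the real software transacts in parallel rather than sequentially.

```python
# Sketch of a node's main loop: every internal timestep the house reads its
# PV/load time series and operates the ESS; once per coordination interval
# it transacts its state (P_res,i in consumer arrow) and fetches a fresh
# set point via the deterministic readInstruction logic.
import time

def run_node(read_timeseries, transact_set_state, read_instruction,
             operate_ess, dt_internal=1, dt_c=15, t_end=60, sleep=time.sleep):
    t = 0
    p_set = 0.0
    while t < t_end:
        generation, load = read_timeseries(t)
        if t % dt_c == 0:
            transact_set_state(t, load - generation)  # P_res,i update
            p_set = read_instruction()                # off-chain, state-based
        operate_ess(p_set)
        sleep(dt_internal)
        t += dt_internal
```

Injecting the `sleep` function keeps the loop testable without real-time waiting.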
For the generation of PV power and the individual household load, time series from Tjaden et al. (2015) are used. The ESSs are modeled in Python 3 based on Schlund et al. (2017a), which represents operational-power-dependent efficiencies for charging, discharging and idle mode and SOC-dependent power limits for the maximal charging and discharging powers. The empirical model includes a submodel for the battery itself and the power electronics of the AC/DC conversion including a battery management system. Thus, not only losses from the battery itself, but also from the necessary periphery are taken into consideration. Parameters like the size of the PV plant or the ESS are parameterizable.

Results

In this section results considering the general proof-of-concept are presented. The evaluation includes an energy management view, a communication effort view and an economic view. The system has been tested in over 25 different test runs of 3 to 7 days with different parameterizations of the system. During these test runs the general behavior proved to work. The quantitative results in the following are based on the operation during a sunny week in May with an average PV production of 80.4 kWh and an average demand of 65.3 kWh per household, both in 1 s resolution. Additionally, each participant has an ESS with a capacity of 8 kWh if not explicitly stated otherwise.

General behaviour

An exemplary day from one of the test runs with 20 participants and a coordination timestep of 30 min is visualized in Fig. 5 to provide a general understanding of the system. Figure 5a) shows the operation of all battery systems in the producer counting arrow system (colored areas). The red line is the cumulated residual load of all participants of the DVPP Pres in the consumer counting arrow system. During normal operation the ESSs try to equalize this curve.
Fig. 5: Overview of (a) the battery operation and (b) the SOC during an exemplary day of a test run with Δtc = 30 min

To model the flexibility need, a DSO or BGM in need of flexibility is abstracted as an agent who pseudo-randomly contracts the DVPP for flexibility. The green line differs from the red line as it additionally considers the contracted flexibility provision. When the contracted flexibility is zero, the DVPP is in energy sharing operation mode, trying to operate self-sufficiently. When it is unequal to zero, the DVPP tries to provide the contracted flexibility. The phases of active flexibility provision are marked with vertical lines. In the exemplary illustration positive flexibility is contracted in the morning (e.g. to cover the morning peak) and in the early evening (e.g. to cover charging of electric vehicles). Negative flexibility is only contracted at noon (e.g. to shift a peak of other PV plants). During these phases the green line is the set value for the ESSs. It is visible that at the beginning of each coordination interval, the ESSs exactly cover the according set value. However, during the coordination interval the operation is constant, which leads to a deviation between battery operation and set value. This deviation obviously increases in significance with increasing coordination intervals. In Fig. 5b) the corresponding course of the SOC is visualized. The total SOC of the DVPP is composed of the individual SOCs. It is visible that the SOC changes according to the operation of the ESSs and that the SOCs of all participants are kept equally leveled. In the late afternoon all ESSs are fully charged and there is no free capacity left. Thus, the PV surplus in the late afternoon cannot be stored in the ESSs. Such behaviour could be avoided by using prediction and a simple peak shaving algorithm on top of the proposed platform.
To better understand what happens when using coordination intervals with a high resolution, a zoom view is visualized in Fig. 6. The two charts show the time span between 14:00 and 14:15 with a PV surplus and no contracted flexibility provision. The colored areas represent the residual generation in the producer counting arrow system (−Pres,i, pale) and the operation of the ESSs (\(P^{\mathrm {m}}_{\text {ESS,} i}\), bright). In addition, the resulting aggregated power at the virtual grid connection \(P^{\mathrm {m}}_{\text {VGC}}\) is displayed in red. As no flexibility is contracted (green line), the set value for the virtual grid connection is zero.

Fig. 6: Zoom into an exemplary test during PV surplus with (a) Δtc = 15 s and (b) Δtc = 30 s

In Fig. 6a) it is visible that the surplus of the PV plant is split into energy packets, which are distributed to the participating ESSs. These energy packets are characterized by a power value and the coordination interval of 15 s. As a result of the coordination algorithm, the power values of all of the packets are close to Popt,i. The resulting power at the virtual grid connection \(P^{\mathrm {m}}_{\text {VGC}}\) is kept close to its set value of zero, with the exception of small power spikes between the coordination intervals resulting from propagation delays of the new blocks. In Fig. 6b) the coordination interval is doubled, resulting in larger energy packets and fewer switching spikes. With a higher resolution of the coordination interval, a more accurate provision of flexibility is possible. To demonstrate this, Fig. 7 shows a comparison of the set values and measured values on DVPP level during a test run with a coordination interval of 15 s and ESSs with a capacity of 4 kWh. Smaller capacities have been chosen here to be able to observe the behaviour when there is no more flexibility available (the cumulated SOC is either empty or full).
Fig. 7: Comparison of measured and set values for (a) the operation of the ESSs and (b) the virtual grid connection during an exemplary day in the producer counting arrow system with Δtc = 15 s and battery capacities of 4 kWh

In Fig. 7a) the cumulated set value for all ESSs (green) and its measured value (blue) are compared and the absolute difference is displayed in red. Whenever flexibility is available, the ESSs operate as commanded and the red line equals zero. Only in the afternoon, when all ESSs are fully charged, the command cannot be fulfilled. Figure 7b) shows the same comparison for the virtual grid connection, likewise on DVPP level. In addition to the observation from above, in Fig. 7b) slight deviations between the set value and the measured value occur, as the information update about the current state only occurs each Δtc. Thus the deviations occur although the ESSs all operated according to their commands. With the coordination interval of 15 s these deviations seem to be negligible. An advantage of the platform is that it provides an automated financial settlement for everything that happened. Figure 8 shows the financial settlement for the same period as Fig. 7. For demonstration purposes the community-internal price for energy sharing pc is constantly \(5\cdot 10^{8} \frac {\text {wei}}{{\text {mW}} \cdot \Delta t_{\mathrm {c}}}\) and the price for flexibility provision pf is \(1\cdot 10^{9} \frac {\text {wei}}{{\text {mW}} \cdot \Delta t_{\mathrm {c}}}\). In normal operation, both prices can be determined dynamically, e.g. through voting or by means of other applications like a market in a layer on top of the platform. The contracted flexibility is visualized in blue on the secondary axis. It is visible that the financial settlement works as expected, even with the small coordination interval of 15 s. In case flexibility is successfully provided, the balance of the parties providing it increases linearly at the rate of pf.
When it is not successfully provided, there is no increase, and when there is no flexibility contracted, the internal settlement depends on each participant's power balance and pc. During the internal settlement the balances diverge as the feed-in from the PV plants is heterogeneous.

Fig. 8: Exemplary time behavior of the balances of the participants with Δtc = 15 s and battery capacities of 4 kWh

Evaluation from the energy management perspective

In this subsection the SSR, the SCR and the energy efficiency (η) of the ESSs are evaluated in dependence on Δtc. These performance indicators are defined as follows:

- The SSR is defined as the share of the household demand within the community which was concurrently provided by the PV plants or the ESSs.
- The SCR is defined as the share of the total PV production within the community which was concurrently consumed by the households or the ESSs.
- The energy efficiency is defined as the totally discharged amount of energy with respect to the totally charged amount of energy.

For the determination of the three indicators a simulation of the system is performed, supposing an instantaneous information exchange between the nodes. The SOC of the ESSs at the end of the considered period must equal the SOC at the beginning. Moreover, no additional flexibility is provided to the grid during the test runs. The performance indicators highly depend on a great number of parameters like the size of the PV plants, the household demand, the considered time period and the sizes of the ESSs. Here we are not interested in the total values but in the sensitivity of the indicators in dependence on Δtc, and we keep the other degrees of freedom constant. Table 1 shows the SSR, the SCR and the efficiency in dependence on Δtc for the parameterization described at the beginning of this section.
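The three indicators defined above can be computed from aligned time series as sketched below. The definitions follow the text; the code and its sign conventions (ESS power in the producer counting arrow, positive = discharging) are our own:

```python
# SSR, SCR and energy efficiency from per-step community totals of household
# load, PV generation and ESS power (producer arrow: e > 0 is discharging,
# e < 0 is charging). All series must have equal length and timestep.
def indicators(load, pv, ess):
    supplied = sum(min(l, p + max(e, 0.0))      # demand met by PV or discharge
                   for l, p, e in zip(load, pv, ess))
    consumed = sum(min(p, l + max(-e, 0.0))     # PV used by load or charging
                   for l, p, e in zip(load, pv, ess))
    charged = sum(max(-e, 0.0) for e in ess)
    discharged = sum(max(e, 0.0) for e in ess)
    ssr = supplied / sum(load)
    scr = consumed / sum(pv)
    eta = discharged / charged if charged else float("nan")
    return ssr, scr, eta
```

In a toy two-step example where all PV surplus is buffered losslessly and later discharged into the load, all three indicators equal one.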
Table 1: Comparison of the self-sufficiency rate, the self-consumption rate of the community and the total energy efficiency of the ESSs depending on Δtc for a sunny week in May

Without any coordination the ESSs operate alone and only try to satisfy their own household demand with a greedy charging strategy. As expected, the coordination can improve all of the indicators significantly. The influence of Δtc on the SSR and the SCR is also significant. This can be explained by the larger deviations that result from larger coordination intervals. Comparing Fig. 5a) with Fig. 6 shows that energy packets with a lower resolution map the residual generation less accurately. However, at a Δtc of 1 min, 96% of the optimum is still reached. This suggests that a higher resolution is not necessary from an energy management point of view. Note that when comparing the simulation results to a real test run with the same input data, the results deviate slightly. This can be explained by block propagation times and deviations in system time synchronism resulting in the spikes shown in Fig. 6. In addition to the observations above, a smaller Δtc also results in a faster response to a flexibility request, which makes the concept potentially suitable for a wider range of applications. On the other hand, a smaller Δtc results in an increased data storage and communication effort, which is analyzed in the next section.

Data storage and communication effort

To evaluate the data storage and communication effort of the prototypical implementation, tests with different numbers of nodes were run and the network traffic was measured using psutil (Rodola). As the communication characteristic during the tests showed that the communication is determined by the transactions and the propagation of the blocks, and both occur in the interval of Δtc, the communication effort can be normalized to Δtc.
For all experiments a fully meshed P2P network was used and two parameters were varied:

- the total number of participants n
- the number of validators nval, with nval ≤ n, as the validators are a subgroup of the participants

The resulting communication effort for different parameterizations is summarized in Table 2 for validating nodes and in Table 3 for non-validating nodes. It is calculated as an average over the according test run, while each test run ran for at least 250 coordination intervals. As expected, the communication effort increases with the number of participants. However, the increase seems to be less than linear. With few validators, non-validators face a higher communication effort than the validators, probably as they receive new blocks more often. Summarizing, the communication effort of about 300 kB per coordination interval is no limiting factor, especially not for low resolutions of Δtc.

Table 2: Measured average communication effort with standard deviation of the validating nodes of the DVPP in kB per Δtc
Table 3: Measured average communication effort of non-validating nodes of the DVPP in kB per Δtc

In order to evaluate the data storage effort, the block size b was analyzed. It increases linearly with the number of participants n according to Eq. 8, with b0 being the size of an empty block and tx being the size of a transaction. Both parameters were determined during the experiments.
$$ b = b_{0} + tx \cdot n \qquad \text{with} \quad b_{0} = 0.61\ \text{kB} \quad \text{and} \quad tx = 0.24\ \text{kB} \tag{8} $$
Considering this relation, up to 4164 participants could be included in a single DVPP before reaching a block size of 1 MB. For the setup with 20 participants, this results in a total used storage of about 190 MB (2.8 GB) per year with a Δtc of 15 min (1 min).
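The storage figures quoted above follow directly from Eq. 8, assuming one block per coordination interval and 1 MB = 1000 kB:

```python
# Eq. 8 and the resulting storage estimates: block size grows linearly with
# the number of participants n; yearly storage is blocks per year times
# block size (one block per coordination interval assumed).
B0_KB, TX_KB = 0.61, 0.24  # empty-block and per-transaction sizes from Eq. 8

def block_size_kb(n):
    return B0_KB + TX_KB * n

def yearly_storage_kb(n, dt_c_minutes):
    blocks_per_year = 365 * 24 * 60 / dt_c_minutes
    return blocks_per_year * block_size_kb(n)

# participants fitting into a single 1 MB (= 1000 kB) block
max_n = int((1000 - B0_KB) / TX_KB)
```

This reproduces the numbers in the text: 4164 participants per 1 MB block, and for 20 participants roughly 190 MB per year at Δtc = 15 min or about 2.8 GB at Δtc = 1 min.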
Economical evaluation

For an economical evaluation, incomes from energy sharing and from flexibility provision have to be opposed to investment and operational costs. For a real and secure implementation in the field, smart meters with a built-in private key, a processing unit to run the node and a gateway for the communication would be necessary. As such devices are not yet available, we estimate the extra costs compared to a normal smart meter with the costs of a single-board computer as it was used in the prototype. This costs 30 € and has an electricity consumption of about 3 W, resulting in yearly electricity costs of less than 8 € based on an electricity price of 0.299 €/kWh (Bundesnetzagentur 2017). In large scale, the costs would obviously decrease drastically. These costs are opposed by savings from increased self-sufficiency and efficiency through energy sharing of up to 5.30 € per household in this week in May alone. This value is based on the same electricity price as above and a feed-in tariff of 0.122 €/kWh (German Government 2017). However, considering the legal framework in Germany, energy sharing is so far only allowed without using the public grid. This means that, as long as the legal framework is not adapted, the concept is only thinkable in privately owned grids or projects with large tenement houses. In addition, a more valuable income might be generated by means of flexibility provision. For an economical evaluation, knowledge about the prices for flexibility provision is necessary, which is unavailable as this form of flexibility is not yet used in local grids. However, depending on the situation in the local grid, the value of this service might vary strongly. In a future scenario with a high share of renewable energies and the need for flexibility, it might be a considerable additional income for such flexible communities.
In this paper we proposed a new concept for a self-organized, fully decentralized platform for coordinated flexibility provision using a consortium Ethereum blockchain. The decentralized platform directly links a community of flexible prosumers in a local grid area with system operators or balancing group managers and thus has the potential of providing small-scale flexibility sources with access to a flexibility market. The concept is fully transparent, highly automated and each participant is a validating node. Besides flexibility activation directly through a payment, it enables energy sharing for increased self-sufficiency and power efficiency. We showed the general functionality, which was validated during numerous test runs with up to 20 participants. We also implemented an on-chain automated financial settlement layer, which is especially interesting if the blockchain is implemented as a subchain or a shard of a public chain and can thus manage real value. However, the blockchain-based implementation is quite complex and comes with an overhead in terms of communication and data storage. This overhead and possible limitations are quantified in the paper. An additional possible limiting factor for up-scaling is the on-chain computational effort. This will be analyzed in future work and might be addressed by multichain approaches or sharding. Another disadvantage might be that all members of the community are able to see the power consumption of their fellow community members. For an implementation in the field, the smart meter hardware would need to be able to run a node or communicate with a trusted node and to access the controller of the flexibility source (e.g. the battery system). Furthermore, the smart contract would need to allow for some measurement inaccuracies, a more intelligent pricing scheme is necessary and line losses might be included. Additionally, a robust incentivising scheme for the validators is necessary.
Improvements of the prototype might include a decoupling of the coordination interval and the block time in order to run in a truly robust and fault-tolerant way and to avoid the necessity of system clock time synchronism. Future analyses can involve case studies with smart applications on top of the platform and a wide range of parameterizations of participants. In addition, the same concept is also possible on a higher level with aggregators, system operators or balancing group managers acting as participants and validating nodes. This way, these parties would be able to easily coordinate their aggregated flexibility sources in a transparent and automated way. This seems to be a promising use case of the proposed decentralized flexibility platform as the mentioned advantages on the aggregated level justify the complexity, no special new hardware is required for an implementation, regulatory restrictions are more likely to be overcome and the necessary number of nodes has proved to be operable.

https://www.tennet.eu/fileadmin/user_upload/Company/News/German/Hoerchens/2017/20171102_PM-Start-Blockchain-Projekt-TenneT-sonnen_EN.pdf
http://www.electron.org.uk/
http://www.blogpv.net/
https://www.ethereum.org/
https://www.hyperledger.org/
https://github.com/ethereum/
https://docs.python.org/3.5/

Abbreviations
BEV: Battery electric vehicle
BGM: Balancing group manager
DL: Distribution system operator
DVPP: Decentralized virtual power plant
EPS: Electrical power system
ESS:
P2P:
PBFT: Practical Byzantine fault tolerance
PoA: Proof-of-authority
PV:
SCR: Self-consumption rate
State of charge
SSR: Self-sufficiency rate
TSO: Transmission system operator
VM:
VPP:

Andoni, M, Robu V, Flynn D, Abram S, Geach D, Jenkins D, McCallum P, Peacock A (2019) Blockchain technology in the energy sector: A systematic review of challenges and opportunities. Renew Sust Energ Rev 100:143–174. https://doi.org/10.1016/j.rser.2018.10.014.
Angelis, SD, Aniello L, Baldoni R, Lombardi F, Margheri A, Sassone V (2018) PBFT vs proof-of-authority: applying the CAP theorem to permissioned blockchain. https://eprints.soton.ac.uk/415083/. BDEW Bundesverband der Energie- und Wasserwirtschaft e.V. (2017) Blockchain in der Energiewirtschaft. https://www.bdew.de/media/documents/BDEW_Blockchain_Energiewirtschaft_10_2017.pdf. BDEW Bundesverband der Energie- und Wasserwirtschaft e.V. (2018) Redispatch in Deutschland. https://www.bdew.de/media/documents/Awh_20180212_Bericht_Redispatch٪_Stand_Februar-2018.pdf. Accessed 21 June 2018. Benz, T, Dickert J, Erbert M, Erdmann N, Johae C, Katzenbach B, Glaunsinger W, Müller H, Schegner P, Schwarz J, Speh R, Stagge H, Zdrallek M (2015) VDE Studie: Der zellulare Ansatz. 622, https://shop.vde.com/de/vde-studie-der-zellulare-ansatz-2. Bundesnetzagentur (2017) Monitoringbericht 2017. https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Allgemeines/Bundesnetzagentur/Publikationen/Berichte/2017/Monitoringbericht_2017.pdf?__blob=publicationFile&v=3. Accessed 3 Apr 2019. Burger, C, Kuhlmann A, Richard P, Weinmann J (2016) Blockchain in the energy transition. A survey among decision-makers in the German energy industry. Deutsche Energie-Agentur GmbH (dena) - German Energy Agency, Berlin. Buterin, V (2018) A Next-Generation Smart Contract and Decentralized Application Platform. https://github.com/ethereum/wiki/wiki/White-Paper. Accessed 24 July 2018. Castro, M, Liskov B (1999) Practical Byzantine fault tolerance. In: Proceedings of the Third Symposium on Operating Systems Design and Implementation, OSDI '99, 173–186. USENIX Association, Berkeley. http://dl.acm.org/citation.cfm?id=296806.296824. Conrad, M (2010) Verfahren und protokolle für sicheren rechtsverkehr auf dezentralen und spontanen elektronischen märkten. https://doi.org/10.5445/KSP/1000019723. Dinh, TTA, Liu R, Zhang M, Chen G, Ooi BC, Wang J (2018) Untangling blockchain: A data processing view of blockchain systems. 
IEEE Trans Knowl Data Eng 30(7):1366–1385. https://doi.org/10.1109/TKDE.2017.2781227. Dütsch, G, Steinecke N (2017) Use Cases for Blockchain Technology in Energy and Commodity Trading. PricewaterhouseCoopers GmbH, Frankfurt am Main. German Government (2017) Act on the Development of Renewable Energy Sources (Erneuerbare-Energien-Gesetz - EEG 2017). https://www.bmwi.de/Redaktion/EN/Downloads/renewable-energy-sources-act-2017.pdf%3F__blob%3DpublicationFile%26v%3D3. Hasse, F, Perfall A, Hillebrand T, Smole E, Lay L, Charlet M (2016) Blockchain - Chance Für Energieverbraucher. Lamport, L, Shostak R, Pease M (1982) The Byzantine generals problem. ACM Trans Program Lang Syst 4(3):382–401. https://doi.org/10.1145/357172.357176. Lehnhoff, S (2010) Dezentrales Vernetztes Energiemanagement: 264. https://doi.org/10.1007/978-3-8348-9658-2. Lehnhoff, S, Krause O, Rehtanz C (2011) Dezentrales autonomes energiemanagement - für einen zulässigen betrieb innerhalb verfügbarer kapazitätsgrenzen. at - Automatisierungstechnik Methoden und Anwendungen der Steuerungs-, Regelungs- und Informationstechnik 56:167–179. https://doi.org/10.1524/auto.2011.0906. Long, C, Wu J, Zhou Y, Jenkins N (2018) Peer-to-peer energy sharing through a two-stage aggregated battery control in a community microgrid. Appl Energy 226:261–276. https://doi.org/10.1016/j.apenergy.2018.05.097. Merz, M (2016) Potential of the Blockchain Technology in energy trading. de Gruyter, Berlin. Nakamoto, S (2008) Bitcoin: A Peer-to-Peer Electronic Cash System. https://bitcoin.org/bitcoin.pdf. Accessed 17 Jan 2018. Rodola, G. Psutil 5.4.8. https://pypi.org/project/psutil/. Accessed 22 Jan 2019. Schlund, J (2018) Blockchain-based orchestration of distributed assets in electrical power systems. Energy Inform 1(1). https://doi.org/10.1186/s42162-018-0054-y. 
Schlund, J, Ammon L, German R (2018a) ETHome: Open-source Blockchain Based Energy Community Controller. In: Proceedings of the Ninth International Conference on Future Energy Systems, e-Energy '18, 319–323. ACM, New York. https://doi.org/10.1145/3208903.3208929. Schlund, J, Betzin C, Wolfschmidt H, Veerashekar K, Luther M (2017a) Investigation, modeling and simulation of redox-flow, lithium-ion and lead-acid battery systems for home storage applications. In: EUROSOLAR the World Council for Renewable Energy (WCRE) (ed) Proceedings of the 11th International Renewable Energy Storage Conference (IRES 2017). Schlund, J, Pflugradt N, Steber D, Muntwyler U, German R (2018b) Benefits of Virtual Community Energy Storages compared to Individual Batteries based on Behaviour Based Synthetic Load Profiles. In: IEEE (ed) 2018 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), 1–6. https://doi.org/10.1109/ISGTEurope.2018.8571506. Schlund, J, Steber D, Bazan P, German R (2017b) Increasing the efficiency of a virtual battery storage providing frequency containment reserve power by applying a clustering algorithm. In: 2017 IEEE Innovative Smart Grid Technologies - Asia (ISGT-Asia), 1–8. https://doi.org/10.1109/ISGT-Asia.2017.8378430. Schütte, J, Fridgen G, Prinz W, Rose T, Urbach N, Hoeren T, Guggenberger N, Welzel C, Holly S, Schulte A, Sprenger P, Schwede C, Weimert B, Otto B, Dalheimer M, Wenzel M, Kreutzer M, Fritz M, Leiner U, Nouak A (2017) Blockchain und Smart Contracts. http://publica.fraunhofer.de/eprints/urn_nbn_de_0011-n-4802762.pdf. Accessed 04 Apr 2018. Steber, D-B (2018) Integration of Decentralized Battery Energy Storage Systems into the German Electrical Power System. Doctoral thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). 
Strickland, D, Varnosfederani MA, Scott J, Quintela P, Duran A, Bravery R, Corliss A, Ashworth K, Blois-Brooke S (2016) A review of community electrical energy systems. In: 2016 IEEE International Conference on Renewable Energy Research and Applications (ICRERA), 49–54. https://doi.org/10.1109/ICRERA.2016.7884528. Tjaden, T, Bergner J, Weniger J, Quaschning V (2015) Repräsentative elektrische Lastprofile für Wohngebäude in Deutschland auf 1-sekündiger Datenbasis. Technical report, Berlin, Germany (Nov. 2015). Hochschule für Technik und Wirtschaft HTW Berlin, https://pvspeicher.htw-berlin.de/wp-content/uploads/2017/05/HTW-BERLIN-2015-Repr%C3%A4sentative-elektrische-Lastprofile-f%C3%BCr-Wohngeb%C3%A4ude-in-Deutschland-auf-1-sek%C3%BCndiger-Datenbasis.pdf. Wattenhofer, R (2017) Distributed Ledger Technology. Second revised edn. Inverted Forest Publishing. Wood, G (2018) Ethereum: A Secure Decentralised Generalised Transaction Ledger. https://ethereum.github.io/yellowpaper/paper.pdf. Accessed 24 July 2018. Wood, G (2015) Ethereum. https://github.com/ethereum/guide/blob/master/poa.md. Accessed 18 Jan 2018. Wüst, K, Gervais A (2017) Do you need a Blockchain? Cryptology ePrint Archive, Report 2017/375. https://eprint.iacr.org/2017/375. Zhou, Y, Wu J, Long C (2018) Evaluation of peer-to-peer energy sharing mechanisms based on a multiagent simulation framework. Appl Energy 222:993–1022. https://doi.org/10.1016/j.apenergy.2018.02.089. We thank the program committee of the 9th PhD Workshop "Energy Informatics" in Oldenburg for an intensive discussion and Dominik Engel in particular for shepherding. This work is funded by the Bavarian State Ministry of Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B). 
Computer Networks and Communication Systems, Friedrich-Alexander-University Erlangen-Nürnberg, Martensstraße 3, Erlangen, 91058, Germany
Jonas Schlund & Reinhard German
JS developed the basic concept, carried out the implementation and lab experiments and prepared the first draft of the paper. RG participated in the refinement of the concept and the paper revision. All authors read and approved the final manuscript.
Correspondence to Jonas Schlund.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Schlund, J., German, R. A distributed ledger based platform for community-driven flexibility provision. Energy Inform 2, 5 (2019). doi:10.1186/s42162-019-0068-0
Received: 28 January 2019. Accepted: 13 March 2019.
Keywords: Flexibility platform, Community energy
\begin{document} \title{Vector Generation of Quantum Contextual Sets in Even Dimensional Hilbert Spaces} \author{Mladen Pavi{\v c}i{\'c}} \email{[email protected]} \homepage{http://www.irb.hr/users/mpavicic} \affiliation{Department of Physics---Nanooptics, Faculty of Math. and Natural Sci.~I, Humboldt University of Berlin, Germany} \affiliation{Center of Excellence CEMS, Photonics and Quantum Optics Unit, Rud\kern-0.36em\crta er Bo\v skovi\'c Institute, Zagreb, Croatia.} \author{Norman D. Megill} \affiliation{Boston Information Group, Lexington, MA 02420, U.S.A.} \date{May 1, 2019} \keywords{quantum contextuality, Kochen--Specker sets, MMP hypergraphs, Greechie diagrams} \begin{abstract} Recently, quantum contextuality has been proved to be the source of quantum computation's power. That, together with multiple recent contextual experiments, prompts improving the methods of generation of contextual sets and finding their features. The most elaborated contextual sets, which offer blueprints for contextual experiments and computational gates, are the Kochen--Specker (KS) sets. In this paper, we show a method of vector generation that supersedes previous methods. It is implemented by means of algorithms and programs that generate hypergraphs embodying the Kochen--Specker property and that are designed to be carried out on supercomputers. We show that vector component generation of KS hypergraphs exhausts all possible vectors that can be constructed from chosen vector components, in contrast to previous studies that used incomplete lists of vectors and therefore missed a majority of hypergraphs. Consequently, this unified method is far more efficient for generations of KS sets and their implementation in quantum computation and quantum communication. Several new KS classes and their features have been found and are elaborated on in the paper. Greechie diagrams are discussed. 
A detailed and complete blueprint of a particular 21-11 KS set with a complex coordinatization is presented in Appendix \ref{app:1}, in contrast to the one from the published version of this paper where only a few of its states were given. \end{abstract} \maketitle \section{\label{sec:intro}Introduction} Recently, it has been discovered that quantum contextuality might have a significant place in the development of quantum communication \cite{cabello-dambrosio-11,nagata-05}, quantum computation \cite{magic-14,bartlett-nature-14}, and lattice theory \cite{bdm-ndm-mp-fresl-jmp-10,mp-7oa}. This has prompted experimental implementation of 4-, 6-, and 8-dimensional contextual experiments with photons~\cite{simon-zeil00,michler-zeil-00,amselem-cabello-09,liu-09,d-ambrosio-cabello-13,ks-exp-03,canas-cabello-8d-14}, neutrons \cite{h-rauch06,cabello-fillip-rauch-08,b-rauch-09}, trapped ions \cite{k-cabello-blatt-09}, solid state molecular nuclear spins \cite{moussa-09}, and paths~\cite{lisonek-14,canas-cabello-14}. Experimental contextual tests involve subtle issues, such as the possibility of noncontextual hidden variable models that can reproduce quantum mechanical predictions up to arbitrary precision~\cite{barrett-kent-04}. These models are important because they show how assignments of predetermined values to dense sets of projection operators are precluded by any quantum model. Thus, Spekkens \cite{spekkens-05} introduces generalised noncontextuality in an attempt to make precise the distinction between classical and quantum theories, distinguishing the notions of preparation, transformation, and measurement noncontextuality and by doing so demonstrates that even the 2D Hilbert space is not inherently noncontextual. Kunjwal and Spekkens \cite{kunjwal-spekkens-15} derive an inequality that does not assume that the value assignments are deterministic, showing that noncontextuality cannot be salvaged by abandoning determinism. 
Kunjwal \cite{kunjwal-18-arxiv} shows how to compute a noncontextuality inequality from an invariant derived from a contextual set/configuration representing an experimental Kochen--Specker (KS) setup. This opens up the possibility of finding contextual sets that provide the best noise robustness in demonstrating contextuality. The large number of such sets that we show in the present work can provide a rich source for such an effort. Quantum contextual configurations that have been elaborated on the most in the literature are the KS sets, and, in this paper, we consider just them. In order to obtain KS sets, so far, various methods of exploiting correlations, symmetries, geometry, qubit states, Pauli states, Lie algebras, etc., have been found and used for generating master sets, i.e., big sets which contain all smaller contextual sets \cite{cabell-est-96a,pmmm05a,aravind10,waeg-aravind-jpa-11,mfwap-11,mp-nm-pka-mw-11,waeg-aravind-megill-pavicic-11,waegell-aravind-12,waeg-aravind-pra-13,waeg-aravind-fp-14,waeg-aravind-jpa-15,waeg-aravind-pla-17,pavicic-pra-17}. All of these methods boil down either to finding a list of vectors and their $n$-tuples of orthogonalities from which a master set can be read off or to finding a structure, e.g., a polytope, from which again a list of vectors and orthogonalities can be read off as well as a master set they build. In the present paper, we take the simplest possible vector components within an $n$-dimensional Hilbert space, e.g., $\{0,\pm 1\}$, and via our algorithms and programs exhaustively build all possible vectors and their orthogonal $n$-tuples and then filter out KS sets from the sets in which the vectors are organized. For a particular choice of components, the chances of getting KS sets are very high. 
We generate KS sets for even-dimensional spaces, up to 32, that properly contain all previously obtained and known KS sets, present their features and distributions, give examples of previously unknown sets, and present a blueprint for implementation of a simple set with a complex coordinatization. \section{Results} The main results presented in this paper concern generation of contextual sets from several basic vector components. Previous contextual sets from the literature made use of often complicated sets of vectors that the authors arrived at, following particular symmetries, or geometries, or polytope correlations, or Pauli operators, or qubit states, etc. In contrast, our approach considers McKay--Megill--Pavi\v ci\'c (MMP) hypergraphs (defined in Subsection~\ref{subsec:form}) from $n$-dimensional ($n$D) Hilbert space (${\cal H}^n$, $n\ge 3$) originally consisting of $n$-tuples (in our approach represented by MMP hypergraph edges) of orthogonal vectors (MMP hypergraph vertices) which exhaust themselves in forming configurations/sets of vectors (MMP hypergraphs). Already in \cite{pmmm05a-corr}, we realised that hypergraphs massively generated by their non-isomorphic upward construction might satisfy the Kochen--Specker theorem even when there were no vectors by means of which they might be represented (see Theorem~\ref{th:ks}), and finding coordinatizations for those hypergraphs which might have them, via standard methods of solving systems of non-linear equations, is an exponentially complex task solvable only for the smallest hypergraphs \cite{pmmm05a-corr}. It was, therefore, rather surprising to us to discover that the hypergraphs formed by very simple vector components often satisfied the Kochen--Specker theorem. In this paper, we present a method of generation of KS MMP hypergraphs, also called KS hypergraphs, via such simple sets of vector components. 
\begin{theorem}\label{th:ks}{\bf (MMP hypergraph reformulation of the Kochen--Specker theorem)} There are $n${\rm D} {\rm MMP} hypergraphs, i.e., hypergraphs each of whose edges contains $n$ vertices, called {\rm KS MMP} hypergraphs, to which it is impossible to assign 1s and 0s in such a way that \begin{enumerate} \item[($\alpha$)] No two vertices within any of its edges are both assigned the value 1; \item[($\beta$)] In any of its edges, not all of the vertices are assigned the value 0. \end{enumerate} \end{theorem} In Figure~\ref{fig:6-3}, we show the smallest possible 4D KS MMP hypergraph with six vertices and three edges. We can easily verify that it is impossible to assign 1 and 0 to its vertices so as to satisfy the conditions ($\alpha$) and ($\beta$) from Theorem \ref{th:ks}. For instance, if we assign 1 to the top green-blue vertex, then, according to the condition ($\alpha$), all of the other vertices contained in the blue and green edges must be assigned the value 0, but thereby all four vertices in the red edge are assigned 0s, in violation of the condition ($\beta$). Similarly, if we assign 1 to the top red-blue vertex, then, according to the condition ($\alpha$), all the other vertices contained in the blue and red edges must be assigned the value 0, but thereby all four vertices in the green edge are assigned 0s, in violation of the condition ($\beta$). Analogous verifications go through for the remaining four vertices. We verified that there is neither a real nor a complex vector solution of the corresponding system of nonlinear equations \cite{pmmm05a-corr}. We have not tried quaternions as of yet. \begin{figure} \caption{The smallest 4D KS MMP hypergraph without a coordinatization.} \label{fig:6-3} \end{figure} When a coordinatization of a KS MMP hypergraph exists, its vertices denote $n$-dimensional vectors in ${\cal H}^n$, $n\ge 3$, and edges designate orthogonal $n$-tuples of vectors containing the corresponding vertices. 
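This impossibility argument can also be checked by brute force. The following sketch (ours, not the authors' actual programs) uses a vertex labeling consistent with the description of the hypergraph of Figure~\ref{fig:6-3}: three tetrads, each pair of which shares two vertices. Conditions ($\alpha$) and ($\beta$) together demand exactly one value-1 vertex per edge, so all $2^6$ assignments can be tried directly:

```python
from itertools import product

# Six vertices 0..5; three 4-element edges, each pair sharing two vertices
# (the labeling is ours, matching the textual description of the 6-3 set).
edges = [(0, 1, 2, 3), (2, 3, 4, 5), (4, 5, 0, 1)]

# Conditions (alpha) and (beta) together require exactly one vertex valued 1
# in every edge; enumerate all 2^6 assignments of 0/1 to the vertices.
valid = [bits for bits in product((0, 1), repeat=6)
         if all(sum(bits[v] for v in e) == 1 for e in edges)]

print(len(valid))  # 0: no admissible assignment exists, so the KS property holds
```

The emptiness of `valid` also follows from parity: each vertex lies in two edges, so any admissible set of value-1 vertices would satisfy an even number of edge-slots, while three edges require an odd number.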
In our present approach, a coordinatization is automatically assigned to each hypergraph by the very procedure of its generation from the basic vector components. A KS MMP hypergraph with a given coordinatization of whatever origin we often simply call a KS {\em set\/}. \subsection{\label{subsec:form}Formalism} MMP hypergraphs are those whose edges (of size $n$) intersect each other in at most $n-2$ vertices~\cite{pmmm05a,pavicic-pra-17}. They are encoded by means of printable ASCII characters. Vertices are denoted by one of the following characters: {{\tt 1 2 \dots 9 A B \dots Z a b \dots z ! " \#} {\$} \% \& ' ( ) * - / : ; \textless\ = \textgreater\ ? @ [ {$\backslash$} ] \^{} \_ {`} {\{} {\textbar} \} \textasciitilde} \cite{pmmm05a}. When all of them are exhausted, one reuses them prefixed by `+', then again by `++', and so forth. An $n$-dimensional KS set with $k$ vectors and $m$ $n$-tuples is represented by an MMP hypergraph with $k$ vertices and $m$ edges which we denote as a $k$-$m$ set. In its graphical representation, vertices are depicted as dots and edges as straight or curved lines connecting $m$ orthogonal vertices. We handle MMP hypergraphs by means of algorithms in the programs SHORTD, MMPSTRIP, MMPSUBGRAPH, VECFIND, STATES01, and others \cite{bdm-ndm-mp-1,pmmm05a-corr,pmm-2-10,bdm-ndm-mp-fresl-jmp-10,mfwap-s-11,mp-nm-pka-mw-11}. In its numerical representation (used for computer processing), each MMP hypergraph is encoded in a single line in which all $m$ edges are successively given, separated by commas, and followed by assignments of coordinatization to $k$ vertices (see 18-9 in Subsection~\ref{subsec:vec}). \subsection{\label{subsec:vec}KS Vector Lists vs.~Vector Component MMP Hypergraphs} In Table \ref{T:1}, we give an overview of most of the $k$-$m$ KS sets (KS hypergraphs with $m$ vertices and $k$ edges) as defined via lists and tables of vectors used to build the KS master sets that one can find in the literature. 
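The one-line MMP encoding is straightforward to process mechanically. A sketch (ours, not the actual MMP utilities; the simple case without `+`-prefixed vertices) that parses the 18-9 set quoted later in this subsection and brute-forces its KS property:

```python
from itertools import product

def parse_mmp(line, dim):
    """Split a one-line MMP encoding into edges (tuples of vertex characters)."""
    edges = [tuple(block) for block in line.rstrip('.').split(',')]
    assert all(len(e) == dim for e in edges)
    return edges

edges = parse_mmp("1234,4567,789A,ABCD,DEFG,GHI1,I29B,35CE,68FH", 4)
verts = sorted({v for e in edges for v in e})
idx = {v: i for i, v in enumerate(verts)}
print(len(verts), len(edges))   # 18 9

# KS property: no 0/1 assignment gives every edge exactly one vertex valued 1
# (conditions (alpha) and (beta)); 2^18 assignments, checked exhaustively.
colorable = any(all(sum(bits[idx[v]] for v in e) == 1 for e in edges)
                for bits in product((0, 1), repeat=len(verts)))
print(colorable)                # False: 18-9 is a KS set
```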
These master sets serve us to obtain billions of non-isomorphic smaller KS sets (KS subsets, subhypergraphs) which define $k$-$m$ {\em classes\/}. In doing so (via the aforementioned algorithms and programs), we keep to minimal, {\em critical\/}, KS subhypergraphs in the sense that a removal of any of their edges turns them into non-KS sets. Critical KS hypergraphs are all we need for an experimental implementation: additional orthogonalities that bigger KS sets (containing critical ones) might possess do not add any new property to the ones that the minimal critical core already has. The smallest hypergraphs we give in the table are therefore the smallest criticals. Many more of them, as well as their distributions, the reader can find in the cited references. Some coordinatizations are over-complicated in the original literature. For example (as shown in \cite{pavicic-pra-17}), for the 4D 148-265 master, components $\{0,\pm i,\pm 1,\pm\omega,\pm\omega^2\}$, where $\omega=e^{2\pi i/3}$, suffice for building the coordinatization, and for the 6D 21-7 components $\{0,1,\omega\}$ suffice. In addition, $\{0,\pm 1\}$ suffice for building the 6D 236-1216. \begin{table*} \caption{\label{T:1}Vector lists from the literature; we call their masters {\em list-masters\/}. We shall make use of their vector components from the last column to generate master hypergraphs in Subsection~\ref{subsec:mas} which we call {\em component-masters\/}. 
$\omega$ is a cubic root of unity: $\omega=e^{2\pi i/3}$.} \centering \begin{tabular}{cccccc} \textbf{dim} & \textbf{Master Size} & \textbf{Vector List} & \textbf{List Origin} &\textbf{Smallest Hypergraph} & \textbf{Vector Components}\\ \Xhline{2\arrayrulewidth} 4D & 24-24 & \cite{peres,kern,cabell-est-96a} & \begin{tabular}{@{}c@{}}symmetry,\\geometry\end{tabular} & {\parbox[c]{1em}{\includegraphics[scale=0.27]{18-9-col-lett.eps}}\hfil} & \{0,$\pm 1$\}\\ 4D & 60-105 & \cite{waeg-aravind-jpa-11,pavicic-pra-17} & \begin{tabular}{@{}c@{}}Pauli\\operators\end{tabular} & {\parbox[c]{1em}{\includegraphics[scale=0.27]{18-9-col-lett.eps}}\hfil} & \{0,$\pm 1,\pm i$\}\\ 4D & 60-75 & \cite{aravind10,mp-nm-pka-mw-11,mfwap-s-11,pavicic-pra-17} & \begin{tabular}{@{}c@{}}regular\\polytope\\600-cell\end{tabular} & {\parbox[c]{1em}{\includegraphics[scale=0.06]{26-13-col.eps}}\hfil} &\begin{tabular}{@{}c@{}}$\{0,\pm(\sqrt{5}-1)/2,\pm 1,$\\$\pm(\sqrt{5}+1)/2,2\}$\end{tabular} \\ 4D & 148-265 & \cite{waeg-aravind-pla-17,pavicic-pra-17} & \begin{tabular}{@{}c@{}}Witting\\polytope\end{tabular} & {\parbox[c]{4em}{\includegraphics[scale=0.051]{40-23-col.eps}}\hfil} &\begin{tabular}{@{}c@{}}$\ \ \{0,\pm i,\pm 1,\pm\omega,\pm\omega^2,$\\$\ \pm i\omega^{1/\sqrt{3}},\pm i\omega^{2/\sqrt{3}}\}$\end{tabular} \\ 6D & 21-7 & \cite{lisonek-14} & symmetry &{\parbox[c]{2em}{\includegraphics[scale=0.07]{21-7-6d-star.eps}}\hfil} & $\{0,1,\omega,\omega^2\}$ \\ 6D & 236-1216 & \begin{tabular}{@{}c@{}}Aravind \&\\ Waegell\\ 2016, \ \cite{pavicic-pra-17}\\\end{tabular} & \begin{tabular}{@{}c@{}}hypercube\\$\to$hexaract\\Sch\"afli $\{4,3^4\}$\end{tabular} &{\parbox[c]{2em}{\includegraphics[scale=0.048]{34-16-6d.eps}}\hfil} & \begin{tabular}{@{}c@{}}$\{0,\pm 1/2,\pm 1/\sqrt{3},$\\$\pm 1/\sqrt{2},1\}$\end{tabular} \\ 8D & 36-9 & \cite{pavicic-pra-17} & symmetry &{\parbox[c]{2em}{\includegraphics[scale=0.058]{36-9-8d-star.eps}}\hfil} & $\{0,\pm 1\}$ \\ 8D & 120-2025 & 
\cite{waeg-aravind-jpa-15,pavicic-pra-17} & \begin{tabular}{@{}c@{}}Lie\\algebra\\E8\end{tabular} &{\parbox[c]{3em}{\includegraphics[scale=0.082]{34-9-8d.eps}}\hfil} & \begin{tabular}{@{}c@{}}as given\\in \cite{waeg-aravind-jpa-15}\end{tabular} \\ 16D & 80-265 & \cite{harv-cryss-aravind-12,planat-12,pavicic-pra-17} & \begin{tabular}{@{}c@{}}Qubit\\states\end{tabular} &{\parbox[c]{4em}{\includegraphics[scale=0.11]{72-11-16d.eps}}\hfil} & $\{0,\pm 1\}$ \\ 32D & 160-661 & \cite{planat-saniga-12,pavicic-pra-17} & \begin{tabular}{@{}c@{}}Qubit\\states\end{tabular} &{\parbox[c]{5em}{\includegraphics[scale=0.072]{144-11-32d.eps}}\hfil} & $\{0,\pm 1\}$ \\ \Xhline{3\arrayrulewidth} \end{tabular} \end{table*} Some of the smallest KS hypergraphs in the table have ASCII characters assigned and some do not. This is to stress that we can assign them in an arbitrary and random way to any hypergraph and then the program VECFIND will provide them with a coordinatization in a fraction of a second. For instance, \parindent=0pt {\bf 18-9}: {\tt 1234,4567,789A,ABCD,DEFG,GHI1,I29B,35CE,\break 68FH.} {\tt \{1=\{0,0,0,1\},2=\{0,0,1,0\},3=\{1,1,0,0\},4=\{1,\break -1,0,0\},5=\{0,0,1,1\},6=\{1,1,1,-1\},7=\{1,1,-1,1\},\break 8=\{1,-1,1,1\},9=\{1,0,0,-1\},A=\{0,1,1,0\},B=\{1,0,0,\break 1\},C=\{1,-1,1,-1\},D=\{1,1,-1,-1\},E=\{1,-1,-1,1\},F=\break \{0,1,0,1\},G=\{1,0,1,0\},H=\{1,0,-1,0\},I=\{0,1,0,0\}$\!$\}$\!$.} (To simplify parsing, this notation delineates vectors with braces instead of traditional parentheses in order to reserve parentheses for component expressions.) \parindent=20pt However, a real finding is that we can go the other way round and determine the KS sets from nothing but vector components $\{0,\pm 1\}$. \subsection{\label{subsec:mas}Vector-Component-Generated Hypergraph Masters} We put simplest possible vector components, which might build vectors and therefore provide a coordinatization to MMP hypergraphs, into our program VECFIND. 
Via its option {\tt -master}, the program builds an internal list of all possible non-zero vectors containing these components. From this list, it finds all possible edges of the hypergraph, which it then generates. MMPSTRIP via its option {\tt -U} separates unconnected MMP subgraphs. We pipe the obtained hypergraphs through the program STATES01 to keep those that possess the KS property. We can use other programs of ours, MMPSTRIP, MMPSHUFFLE, SHORTD, STATES01, LOOP, etc., to obtain smaller KS subsets and analyze their features. The likelihood that chosen components will give us a KS master hypergraph and the speed with which it does so depends on particular features they possess. Here, we will elaborate on some of them and give a few examples. Features are based on statistics obtained in the process of generating~hypergraphs: \begin{enumerate}[\it(i)] \item the input set of components for generating two-qubit KS hypergraphs (4D) should contain number pairs of opposite signs, e.g., $\pm 1$, and zero (0); we conjecture that the same holds for 3, 4, \dots qubits; with 6D it does not hold literally; e.g., $\{0,1,\omega\}$ generate a KS master; however, the following combination of $\omega$'s gives the opposite sign to 1: $\omega+\omega^2=-1$; \item mixing real and complex components gives a denser distribution of smaller KS hypergraphs; \item reducing the number of components shortens the time needed to generate smaller hypergraphs and apparently does not affect their distribution. \end{enumerate} Feature {\it(i)} means that, no matter how many different numbers we use as our input components, we will not get a KS master if at least to one of the numbers, the same number with the opposite sign is not added. Thus, e.g., $\{0,1,-i,2,-3,4,5\}$ or a similar string does not give any, while $\{0,\pm 1\}$, or $\{0,\pm i\}$, or $\{0,\pm(\sqrt{5}-1)/2\}$ do. Of course, the latter strings all give mutually isomorphic KS masters, i.e., one and the same KS master, if used alone. 
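For the simplest component set this exhaustive construction is easy to reproduce. A sketch (ours, not the actual VECFIND code) for $\{0,\pm 1\}$ in 4D, which builds all vectors up to overall sign and collects the orthogonal tetrads as edges:

```python
from itertools import combinations, product

def canon(v):
    """Normalize a vector up to overall sign; None for the zero vector."""
    for x in v:
        if x:
            return tuple(-c for c in v) if x < 0 else v
    return None

# All nonzero {0, +1, -1} vectors in 4D, identified up to overall sign
vecs = sorted({canon(v) for v in product((-1, 0, 1), repeat=4)} - {None})

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Edges of the master: all mutually orthogonal tetrads
edges = [q for q in combinations(vecs, 4)
         if all(dot(u, v) == 0 for u, v in combinations(q, 2))]
used = {v for q in edges for v in q}

print(len(vecs), len(used), len(edges))  # 40 40 32
```

Every one of the 40 projective vectors occurs in some tetrad, and there are 32 tetrads, reproducing the 40-32 master discussed next.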
More specifically, they yield a 40-32 master with 40 vertices and 32 edges as shown in Table \ref{T:2}. When any of them are used together with other components, although they would generate different component-masters, all the latter masters of a particular dimension would have a common smallest hypergraph as also shown in Table \ref{T:2}. \begin{table*} \caption{\label{T:2} Component-masters we obtained. List-masters are given in Table \ref{T:1}. In the last two rows of all but the last column, we refer to the result \cite{waeg-aravind-pra-13} that there are 16D and 32D criticals with just nine edges. According to the conjectured feature {\it(i)\/} above, the masters generated by $\{0,\pm 1\}$ should contain those criticals; they did not come out in \cite{pavicic-pra-17}, so, we do not know how many vertices they have. The smallest ones we obtained are given in Table \ref{T:1}. The number of criticals given in the 4th column refer to the number of them we successfully generated although there are many more of them except in the 40-32 class.} \centering \begin{tabular}{cccccc} \Xhline{2\arrayrulewidth} \textbf{dim} &\textbf{Vector Components} & \begin{tabular}{@{}c@{}}\textbf{Component-Master}\\\textbf{Size}\end{tabular} &\begin{tabular}{@{}c@{}}\textbf{\ \ N{\textsuperscript o} of KS Criticals}\\\textbf{\ \ in Master}\end{tabular} & \begin{tabular}{@{}c@{}}\textbf{Smallest}\\\textbf{Hypergraph}\end{tabular} & \begin{tabular}{@{}c@{}}\textbf{\ \ Contains}\\\textbf{\ \ List-Masters}\end{tabular} \\ \Xhline{3\arrayrulewidth} 4D & \begin{tabular}{@{}c@{}}\{0,$\pm 1$\} \ or \{0,$\pm i$\} \ or\\ $\{0,\pm(\sqrt{5}-1)/2\}$ or \dots\end{tabular} & 40-32 & 6 & {\parbox[c]{1em}{\includegraphics[scale=0.22]{18-9-col-lett.eps}}\hfil} & 24-24\\ 4D & \{0,$\pm 1,\pm i$\} & 156-249 & $7.7\times 10^6$ & {\parbox[c]{1em}{\includegraphics[scale=0.22]{18-9-col-lett.eps}}\hfil} & 24-24, 60-105\\ 4D & \begin{tabular}{@{}c@{}}$\{0,\pm(\sqrt{5}-1)/2,\pm 
1,$\\$\pm(\sqrt{5}+1)/2,2\}$\end{tabular} & 2316-3052 & $1.5\times 10^9$ & {\parbox[c]{1em}{\includegraphics[scale=0.22]{18-9-col-lett.eps}}\hfil} & 24-24, 60-75\\ 4D & $\{0,\pm 1,\pm i,\pm\omega,\pm\omega^2\}$ & 400-1012 &$8\times 10^6$ & {\parbox[c]{1em}{\includegraphics[scale=0.22]{18-9-col-lett.eps}}\hfil} &24-24, 60-105, 148-265\\ 6D & $\{0,\pm 1,\omega,\omega^2\}$ & 11808-314446 & $3\times 10^7$ &{\parbox[c]{1em}{\includegraphics[scale=0.05]{21-7-6d-star.eps}}\hfil} & 21-7, 236-1216 \\ 8D & $\{0,\pm 1\}$ & 3280-1361376 & $7\times 10^6$ &{\parbox[c]{1em}{\includegraphics[scale=0.05]{34-9-8d.eps}}\hfil} & 36-9, 120-2025 \\ 16D & $\{0,\pm 1\}$ &\begin{tabular}{@{}c@{}}computationally\\too demanding\end{tabular} & $4\times 10^6$ &\begin{tabular}{@{}c@{}}{\ \ \ \parbox[c]{1em}{\includegraphics[scale=0.065]{9-edges.eps}}}\\\qquad\cite{waeg-aravind-pra-13}.\end{tabular} & 80-265 \\ 32D & $\{0,\pm 1\}$ & \begin{tabular}{@{}c@{}}computationally\\too demanding\end{tabular} & $2.5\times 10^5$ &\begin{tabular}{@{}c@{}}{\ \ \ \parbox[c]{1em}{\includegraphics[scale=0.065]{9-edges.eps}}}\\\qquad\cite{waeg-aravind-pra-13}.\end{tabular} & 160-661 \\ \Xhline{2\arrayrulewidth} \end{tabular} \end{table*} We obtained the following particular results which show the extent to which component-masters give a more populated distribution of KS criticals than list-masters. We also closed several open~questions: \begin{itemize} \item As for the features {\it(ii)} and {\it(iii)} above, components $\{0,\pm 1,\omega\}$ generate the master 180-203 which has the following smallest criticals 18-9, 20\dots 22-11, 22\dots 26-13, 24\dots 30-15, 30\dots 31-16, 28\dots 35-17, 33\dots 37-18, etc. This distribution is much denser than that of, e.g., the list-master 24-24 with real vectors which in the same span of edges consists only of 18-9, 20-11, 22-13, and 24-15 criticals or of the list-master 60-75 which starts with the 26-13 critical. 
In Appendix \ref{app:1}, we give a detailed description of a 21-11 critical with a complex coordinatization and a blueprint for its experimental implementation; \item In \cite{lisonek-14}, the reader is challenged to find a master set which would contain the ``seven context star'' 21-7 KS critical (shown in Tables \ref{T:1} and \ref{T:2}). We find that $\{0,1,\omega\}$ generate the 216-153 6D master, which contains just three criticals: 21-7, 27-9, and 33-11; $\{0,1,\omega,\omega^2\}$ generate the 834-1609 master, from which we obtained $2.5\times 10^7$ criticals; and $\{0,\pm 1,\omega,\omega^2\}$ generate the 11808-314446 master, from which we obtained $3\times 10^7$ criticals, all of them containing the seven context star. 27-9 and 39-13 can be viewed as 21-7 with pairs of $\delta$-triplets interwoven with it, as shown in Figure~\ref{fig:triplets}. The 834-1609 KS master generated from $\{0,1,\omega,\omega^2\}$, which were used for a construction of 21-7 in \cite{lisonek-14}, contains 39-13 as well. The same holds for the 11808-314446 master.
\begin{figure} \caption{The 21-7 KS set from \cite{lisonek-14} and 27-9 are contained in three different master sets, and 39-13 in two of them (together with 21-7 and 27-9); see the text.} \label{fig:triplets} \end{figure} \item The 60-75 list-master contains criticals with up to 41 edges and 60 vertices, while the 2316-3052 component-master generated from the same vector components contains criticals reaching close to 200 edges and 300 vertices; \item The 60-105 list-master contains criticals with up to 40 edges and 60 vertices, while the 156-249 component-master generated from the same vector components contains criticals reaching at least 58 edges and 88 vertices; \item Components $\{0,\pm 1\}$ generate the 332-1408 6D master, which contains the 236-1216 list-master, while originally the components $\{0,\pm 1/2,\pm 1/\sqrt{3},\pm 1/\sqrt{2},1\}$ were used; \item In \cite{pavicic-pra-17}, we generated 6D criticals with up to 177 vertices and 87 edges from the list-master 236-1216, while, now, from the component-master 11808-314446, we obtain criticals with up to 201 vertices and 107 edges; \item We did not generate 16D and 32D masters because that would take too many CPU days, and we have already generated a huge number of criticals in \cite{pavicic-pra-17} from submasters which are also defined by means of the same vector components. See also Section~\ref{sec:met}. \end{itemize} \section{\label{sec:met}Methods} Our methods for obtaining quantum contextual sets boil down to algorithms and programs within the MMP language we developed to generate and handle KS MMP hypergraphs as the most elaborated and implemented kind of these sets. The programs we make use of, VECFIND, STATES01, MMPSTRIP, MMPSHUFFLE, SUBGRAPH, LOOP, SHORTD, etc.,~are freely available from our repository http://goo.gl/xbx8U2. They were developed in \cite{bdm-ndm-mp-1,pmmm05a-corr,pm-ql-l-hql2,pmm-2-10,bdm-ndm-mp-fresl-jmp-10,mfwap-11,mp-nm-pka-mw-11,megill-pavicic-mipro-long-17} and extended for the present elaboration.
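These programs exchange hypergraphs encoded as plain ASCII strings in the MMP notation (Subsection~\ref{subsec:form}), in which, roughly, each vertex is a single ASCII character, edges are separated by commas, and a hypergraph ends with a period. Under that assumption, the bookkeeping a filtering pass performs can be sketched in Python (an illustration only, not one of the programs above):

```python
def mmp_stats(line):
    """Vertex and edge counts of one MMP-encoded hypergraph.

    Assumed encoding: one hypergraph per line, one ASCII character per
    vertex, edges separated by commas, a period terminating the line.
    """
    edges = line.strip().rstrip('.').split(',')
    vertices = {ch for edge in edges for ch in edge}
    return len(vertices), len(edges)

# A hypothetical three-edge line (not one of our KS sets):
print(mmp_stats('1234,4567,789A.'))  # (10, 3)
```

Because each hypergraph occupies a single line, files with millions of them can be split and piped through the programs in parallel.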
Each MMP hypergraph can be represented as a figure for visualisation but, more importantly, as a string of ASCII characters with one line per hypergraph, enabling us to process millions of them simultaneously by inputting them into supercomputers and clusters. For the latter elaboration, we developed other dynamical programs specifically for a supercomputer or cluster, which enable piping of our files through our programs in order to parallelize jobs. The programs have the flexibility of handling a practically unlimited number of MMP hypergraph vertices and edges, as we can see from Table \ref{T:2}. The fact that we did not let our supercomputer run to generate 16D and 32D masters and our remark that it would be ``computationally too demanding'' do not mean that such runs are not feasible with the current computers, but that they would require too many CPU days on the supercomputer and that we decided not to burden it with such a task at the present stage of our research; see the explanation in Subsection~\ref{subsec:mas}. \section{Conclusions} The main result we obtain is that our vector-component generation of KS hypergraphs (sets) exhaustively uses all possible vectors that can be constructed from the chosen vector components. This is in contrast to previous studies, which made use of serendipitously obtained lists of vectors curtailed in number due to the various methods applied to obtain them. Hence, we obtain a thorough and maximally dense distribution of KS classes in all dimensions, whose critical sets can be much more effectively used for possible implementation in quantum computation and communication. A comparison of Tables \ref{T:1} and \ref{T:2} vividly illustrates the difference. In Appendix \ref{app:1}, we present a possible experimental implementation of a KS critical with complex coordinatization generated from $\{0,\pm 1,\omega\}$.
What we immediately notice about the 21-11 critical from Figure~\ref{fig:21-11} is that the edges are interwoven in a more intricate way than in the 18-9 (which has already been implemented in several experiments), exhibiting the so-called $\delta$-feature of the edges forming the biggest loop within a KS hypergraph. The $\delta$-feature refers to two neighbouring edges which share two vertices, i.e., intersect each other at two vertices \cite{pavicic-pra-17}. It stems directly from the representation of KS configurations with MMP hypergraphs. Notice that the $\delta$-feature precludes interpretation of practically any KS hypergraph in an even dimensional Hilbert space by means of so-called Greechie diagrams, which by definition require that two blocks (similar to hypergraph edges) do not share more than one atom (similar to a vertex) \cite{mp-7oa}, on the one hand, and that the loops made by the blocks must be of order five or higher (which is hardly ever realised in even dimensional KS hypergraphs---see examples in~\cite{pavicic-pra-17}), on the other. Our future engagement will be to tackle odd dimensional KS hypergraphs. Notice that, in a 3D Hilbert space, it is possible to explore similarities between Greechie diagrams and MMP hypergraphs because then neither of them can have edges/blocks which share more than one vertex/atom (via their respective definitions) and the loops in both of them are of order five or higher \cite{bdm-ndm-mp-1,pmmm05a}. \section*{Author Contributions} Conceptualization, M.P.; Data Curation, M.P.; Formal Analysis, M.P. and N.D.M.; Funding Acquisition, M.P.; Investigation, M.P. and N.D.M.; Methodology, M.P. and N.D.M.; Project Administration, M.P.; Resources, M.P.; Software, M.P. and N.D.M.; Supervision, M.P.; Validation, M.P. and N.D.M.; Visualization, M.P.; Writing---Original Draft, M.P.; Writing---Review and Editing, M.P. and N.D.M. \subsection*{Conflicts of Interest} The authors declare no conflict of interest.
\subsection*{Abbreviations} The following abbreviations are used in this manuscript: \noindent \begin{tabular}{@{}ll} KS & Kochen--Specker; defined in Section~\ref{sec:intro}\\ MMP & McKay--Megill--Pavi\v ci\'c; defined in Subsection~\ref{subsec:form}\\ \end{tabular} \appendix \section{\label{app:1} 21-11 KS Critical with Complex States from ${\cal H}^2\otimes{\cal H}^2$} We present a possible implementation of the KS critical 21-11 with complex coordinatization shown in Fig.~\ref{fig:21-11}. \begin{figure} \caption{21-11 KS set with a complex coordinatization.} \label{fig:21-11} \end{figure} The vector components of the first qubit on a photon correspond to linear (horizontal $H$, vertical $V$, diagonal $D$, antidiagonal $A$) and circular (right $R$, left $L$) polarizations, and those of the second qubit to the angular momentum of the photon, $(+2,-2)$ and $(h,v)$. A correspondence between them is given below. An example of a tensor product of two vectors/states from ${\cal H}^2\otimes{\cal H}^2$ is: \begin{align} |01\rangle&=|0,1\rangle=|0\rangle_1\otimes|1\rangle_2= \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2\notag\\ &= \left( \begin{matrix} 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right) =\left( \begin{matrix} 0\\ 1\\ 0\\ 0\\ \end{matrix} \right). \phantom{.} \end{align} This is our vector {\tt 8} from Figure~\ref{fig:21-11}. Since we are interested in the qubit states, we are going to proceed in reverse---from 4-vectors to tensor products of polarization and angular momentum states.
Let us first define them: \begin{align} &|H\rangle=\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1;\quad |V\rangle=\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1;\quad |D\rangle=\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_1;\quad \notag\\ &|A\rangle=\frac{1}{\sqrt{2}}\left( \begin{matrix} -1\\ 1\\ \end{matrix} \right)_1;\ |R\rangle=\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ i\\ \end{matrix} \right)_1;\ |L\rangle=\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ -i\\ \end{matrix} \right)_1;\notag\\ &|+2\rangle=\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_2;\quad |-2\rangle=\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2;\quad |h\rangle=\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_2;\quad \quad\notag\\ &|v\rangle=\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_2. \end{align} Now, one can read off our vertex states as follows: \begin{align} &{\tt 1}=\left( \begin{matrix} 1\\ 1\\ 1\\ -1\\ \end{matrix} \right) \to \frac{1}{2} \left( \begin{matrix} 1\\ 1\\ 1\\ -1\\ \end{matrix} \right) = \frac{1}{2}(\left( \begin{matrix} 1\\ 1\\ 0\\ 0\\ \end{matrix} \right)+ \left( \begin{matrix} 0\\ 0\\ 1\\ -1\\ \end{matrix} \right) )\notag\\ &=\frac{1}{\sqrt{2}}(\frac{1}{\sqrt{2}}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right)+ \frac{1}{\sqrt{2}}\left( \begin{matrix} 0 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \end{matrix} \right))\notag\\ & = \frac{1}{\sqrt{2}}(\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes \frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_2 + \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_2)\notag\\ &=\frac{1}{\sqrt{2}}(|H\rangle|h\rangle+|V\rangle|v\rangle)=
\frac{1}{\sqrt{2}}(|D\rangle|+2\rangle-|A\rangle|-2\rangle). \end{align} \begin{align} &{\tt 2}=\left( \begin{matrix} 1\\ -1\\ -1\\ -1\\ \end{matrix} \right) \to \frac{1}{2} \left( \begin{matrix} 1\\ -1\\ -1\\ -1\\ \end{matrix} \right) =\frac{1}{2}(\left( \begin{matrix} 1\\ -1\\ 0\\ 0\\ \end{matrix} \right)- \left( \begin{matrix} 0\\ 0\\ 1\\ 1\\ \end{matrix} \right))\notag\\ &=\frac{1}{\sqrt{2}}(\frac{1}{\sqrt{2}}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \end{matrix} \right)- \frac{1}{\sqrt{2}}\left( \begin{matrix} 0 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right))\notag\\ &=\frac{1}{\sqrt{2}}( \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_2 - \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_2)\notag\\ &=\frac{1}{\sqrt{2}}(|H\rangle|v\rangle-|V\rangle|h\rangle)= -\frac{1}{\sqrt{2}}(|D\rangle|-2\rangle+|A\rangle|+2\rangle) \label{eq:ver2} \end{align} \begin{align} &{\tt 3}= \left( \begin{matrix} 1\\ 0\\ 0\\ 1\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 0\\ 0\\ 1\\ \end{matrix} \right) =\frac{1}{\sqrt{2}}(\left( \begin{matrix} 1\\ 0\\ 0\\ 0\\ \end{matrix} \right)+ \left( \begin{matrix} 0\\ 0\\ 0\\ 1\\ \end{matrix} \right))\notag\\ &=\frac{1}{\sqrt{2}}(\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \end{matrix} \right)+ \left( \begin{matrix} 0 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right))\notag\\ & = \frac{1}{\sqrt{2}}( \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 0\\ \end{matrix}
\right)_2 + \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2)\notag\\ &=\frac{1}{\sqrt{2}}(|H\rangle|+2\rangle+|V\rangle|-2\rangle) \label{eq:ver3} \end{align} \begin{align} &{\tt 4}=\left( \begin{matrix} 0\\ 1\\ -1\\ 0\\ \end{matrix} \right) \to\frac{1}{\sqrt{2}}(\left( \begin{matrix} 0\\ 1\\ 0\\ 0\\ \end{matrix} \right)- \left( \begin{matrix} 0\\ 0\\ 1\\ 0\\ \end{matrix} \right))\notag\\ &=\frac{1}{\sqrt{2}}(\left( \begin{matrix} 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right)- \left( \begin{matrix} 0 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \end{matrix} \right))\notag\\ &=\frac{ \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2 - \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_2}{\sqrt{2}}\notag\\ &=\frac{1}{\sqrt{2}}(|H\rangle|-2\rangle-|V\rangle|+2\rangle) \label{eq:ver4} \end{align} \begin{align} &{\tt 5}=\left( \begin{matrix} 0\\ 1\\ 1\\ 0\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}(\left( \begin{matrix} 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right)+ \left( \begin{matrix} 0 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \end{matrix} \right))\notag\\ &=\frac{1}{\sqrt{2}}( \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2 + \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_2)\notag\\ &=\frac{1}{\sqrt{2}}(|H\rangle|-2\rangle+|V\rangle|+2\rangle) \label{eq:ver5} \end{align} \begin{align} {\tt 6}= \left( \begin{matrix} 0\\ 0\\ 0\\ 1\\ 
\end{matrix} \right) \to \left(\begin{matrix} 0 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right) = \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2 =|V\rangle|-2\rangle \label{eq:ver6} \end{align} \begin{align} {\tt 7}= \left( \begin{matrix} 1\\ 0\\ 0\\ 0\\ \end{matrix} \right) \to \left(\begin{matrix} 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \end{matrix} \right) = \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_2 =|H\rangle|+2\rangle \label{eq:ver7} \end{align} \begin{align} {\tt 8}= \left( \begin{matrix} 0\\ 1\\ 0\\ 0\\ \end{matrix} \right) \to \left(\begin{matrix} 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right) = \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2 =|H\rangle|-2\rangle \label{eq:ver8} \end{align} \begin{align} &{\tt 9}=\left( \begin{matrix} 0\\ 0\\ 1\\ -1\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}\left( \begin{matrix} 0 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_2 =|V\rangle|v\rangle \label{eq:ver9} \end{align} \begin{align} {\tt A}=\left( \begin{matrix} 0\\ 0\\ 1\\ 1\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}\left( \begin{matrix} 0 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right) =\frac{1}{\sqrt{2}} \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 1\\ \end{matrix} 
\right)_2 =|V\rangle|h\rangle \label{eq:verA} \end{align} \begin{align} {\tt B}=\left( \begin{matrix} 1\\ 1\\ 0\\ 0\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \\ 0 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right) =\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_2 =|H\rangle|h\rangle \label{eq:verB} \end{align} \begin{align} &{\tt C}=\left( \begin{matrix} 1\\ -1\\ i\\ -i\\ \end{matrix} \right) \to \frac{1}{2}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \\ i \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ i\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_2 =|R\rangle|v\rangle \label{eq:verC} \end{align} \begin{align} &{\tt D}=\left( \begin{matrix} 1\\ 1\\ -1\\ -1\\ \end{matrix} \right) \to \frac{1}{2}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \\ -1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_2 =-|A\rangle|h\rangle \label{eq:verD} \end{align} \begin{align} &{\tt E}=\left( \begin{matrix} 1\\ 1\\ 1\\ 1\\ \end{matrix} \right) \to \frac{1}{2}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_2 =|D\rangle|h\rangle \label{eq:verE} \end{align} \begin{align} &{\tt F}=\left( \begin{matrix} 1\\ -1\\ 1\\ -1\\ \end{matrix} \right) \to 
\frac{1}{2}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_2 =|D\rangle|v\rangle \label{eq:verF} \end{align} \begin{align} &{\tt G}=\left( \begin{matrix} 0\\ 1\\ 0\\ -1\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}\left( \begin{matrix} 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ -1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2 =-|A\rangle|-2\rangle \label{eq:verG} \end{align} \begin{align} &{\tt H}=\left( \begin{matrix} 1\\ 0\\ -1\\ 0\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \\ -1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_2 =-|A\rangle|+2\rangle \label{eq:verH} \end{align} \begin{align} &{\tt I}=\left( \begin{matrix} 0\\ 1\\ 0\\ 1\\ \end{matrix} \right) \to \frac{1}{\sqrt{2}}\left( \begin{matrix} 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &=\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2 =|D\rangle|-2\rangle \label{eq:verI} \end{align} \begin{align} &{\tt J}=\left( \begin{matrix} 1\\ -1\\ 1\\ 1\\ \end{matrix} \right) \to \frac{1}{2}(\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix}
\right)+ \left( \begin{matrix} -1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right) \\ \end{matrix} \right))\notag\\ &=\frac{1}{\sqrt{2}}( \frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_2 + \frac{1}{\sqrt{2}} \left( \begin{matrix} -1\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_2)\notag\\ &=\frac{1}{\sqrt{2}}(|D\rangle|+2\rangle+|A\rangle|-2\rangle) \label{eq:verJ} \end{align} \begin{align} &{\tt K}= \left( \begin{matrix} 0\\ 0\\ 1\\ 0\\ \end{matrix} \right) \to \left(\begin{matrix} 0 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \\ 1 \left( \begin{matrix} 1\\ 0\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ &= \left( \begin{matrix} 0\\ 1\\ \end{matrix} \right)_1 \otimes\left( \begin{matrix} 1\\ 0\\ \end{matrix} \right)_2 =|V\rangle|+2\rangle \label{eq:verK} \end{align} \begin{align} &{\tt L}=\left( \begin{matrix} 1\\ -1\\ -i\\ i\\ \end{matrix} \right) \to \frac{1}{2}\left( \begin{matrix} 1 \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \\ -i \left( \begin{matrix} 1\\ -1\\ \end{matrix} \right) \\ \end{matrix} \right)\notag\\ & =\frac{1}{\sqrt{2}} \left( \begin{matrix} 1\\ -i\\ \end{matrix} \right)_1 \otimes\frac{1}{\sqrt{2}}\left( \begin{matrix} 1\\ -1\\ \end{matrix} \right)_2 =|L\rangle|v\rangle \label{eq:verL} \end{align} Thus, in order to handle a complex coordinatization---states $C$ and $L$---we need a fifth degree of freedom (circular polarization).
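The decompositions above are easy to check numerically. A short pure-Python sketch for vertex {\tt 1} follows; the same pattern verifies all the other vertices (complex ones like {\tt C} and {\tt L} work identically with complex arithmetic):

```python
from math import sqrt

def kron(u, v):
    # Tensor (Kronecker) product of two single-qubit states
    return [a * b for a in u for b in v]

def comb(u, v, s=1):
    # Componentwise u + s*v
    return [a + s * b for a, b in zip(u, v)]

r2 = 1 / sqrt(2)
H, V = [1, 0], [0, 1]          # first-qubit basis (linear polarization)
D, A = [r2, r2], [-r2, r2]     # diagonal / antidiagonal
h, v = [r2, r2], [r2, -r2]     # second-qubit (h, v) states
p2, m2 = [1, 0], [0, 1]        # angular momentum |+2>, |-2>

# Vertex 1 = (1,1,1,-1)/2 = (|H>|h> + |V>|v>)/sqrt(2) = (|D>|+2> - |A>|-2>)/sqrt(2)
v1 = [0.5, 0.5, 0.5, -0.5]
lhs = [r2 * x for x in comb(kron(H, h), kron(V, v))]
rhs = [r2 * x for x in comb(kron(D, p2), kron(A, m2), s=-1)]
ok = all(abs(a - b) < 1e-12 for a, b in zip(v1, lhs)) and \
     all(abs(a - b) < 1e-12 for a, b in zip(v1, rhs))
print(ok)  # True
```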
\begin{thebibliography}{48} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{https://doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Cabello}\ \emph {et~al.}(2011)\citenamefont {Cabello}, \citenamefont {{D'A}mbrosio}, \citenamefont {Nagali},\ and\ \citenamefont {Sciarrino}}]{cabello-dambrosio-11} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cabello}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {{D'A}mbrosio}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Nagali}},\ and\ 
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Sciarrino}},\ }\bibfield {title} {\bibinfo {title} {Hybrid ququart-encoded quantum cryptography protected by {K}ochen-{S}pecker contextuality},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{\it Phys. Rev. A}}\ }\textbf {\bibinfo {volume} {{\bf 84}}},\ \bibinfo {pages} {030302(R)} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nagata}(2005)}]{nagata-05} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Nagata}},\ }\bibfield {title} {\bibinfo {title} {{K}ochen-{S}pecker theorem as a precondition for secure quantum key distribution},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{\it Phys. Rev. A}}\ }\textbf {\bibinfo {volume} {{\bf 72}}},\ \bibinfo {pages} {012325} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Howard}\ \emph {et~al.}(2014)\citenamefont {Howard}, \citenamefont {Wallman}, \citenamefont {Veitech},\ and\ \citenamefont {Emerson}}]{magic-14} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Howard}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wallman}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Veitech}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Emerson}},\ }\bibfield {title} {\bibinfo {title} {Contextuality supplies the `magic' for quantum computation},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{\it Nature}}\ }\textbf {\bibinfo {volume} {{\bf 510}}},\ \bibinfo {pages} {351} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bartlett}(2014)}]{bartlett-nature-14} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont {Bartlett}},\ }\bibfield {title} {\bibinfo {title} {Powered by magic},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{\it Nature}}\ }\textbf {\bibinfo {volume} {{\bf 510}}},\ \bibinfo {pages} {345} (\bibinfo {year} 
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pavi{\v c}i{\'c}}\ \emph {et~al.}(2010{\natexlab{a}})\citenamefont {Pavi{\v c}i{\'c}}, \citenamefont {Mc{K}ay}, \citenamefont {Megill},\ and\ \citenamefont {Fresl}}]{bdm-ndm-mp-fresl-jmp-10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pavi{\v c}i{\'c}}}, \bibinfo {author} {\bibfnamefont {B.~D.}\ \bibnamefont {Mc{K}ay}}, \bibinfo {author} {\bibfnamefont {N.~D.}\ \bibnamefont {Megill}},\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fresl}},\ }\bibfield {title} {\bibinfo {title} {Graph approach to quantum systems},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{\it J. Math. Phys.}}\ }\textbf {\bibinfo {volume} {{\bf 51}}},\ \bibinfo {pages} {102103} (\bibinfo {year} {2010}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Megill}\ and\ \citenamefont {Pavi{\v c}i{\'c}}(2011)}]{mp-7oa} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~D.}\ \bibnamefont {Megill}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pavi{\v c}i{\'c}}},\ }\bibfield {title} {\bibinfo {title} {Kochen-{S}pecker sets and generalized {O}rthoarguesian equations},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{\it Ann. 
{H}enri {P}oinc.}}\ ,\ \bibinfo {pages} {1417}} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title{Spectral Noise Correlations of an Ultrafast Frequency Comb}
\author{Roman Schmeissner, Jonathan Roslund, Claude Fabre, and Nicolas Treps}
\affiliation{Laboratoire Kastler Brossel, Sorbonne Universit\'e - UPMC, ENS, Coll\`ege de France, CNRS; 4 place Jussieu, 75252 Paris, France}
\date{\today}
\begin{abstract}
Cavity-based noise detection schemes are combined with ultrafast pulse shaping as a means to diagnose the spectral correlations of both the amplitude and phase noise of an ultrafast frequency comb. The comb is divided into ten spectral regions, and the distribution of noise as well as the correlations between all pairs of spectral regions are measured against the quantum limit. These correlations are then represented in the form of classical noise matrices, which furnish a complete description of the underlying comb dynamics. Their eigendecomposition reveals a set of theoretically predicted, decoupled noise modes that govern the dynamics of the comb. Finally, the matrices contain the information necessary to deduce macroscopic noise properties of the comb.
\end{abstract}
\pacs{42.65.Re, 42.60.Mi}
\maketitle
Ultrafast frequency combs (FCs) have found tremendous utility as precision instruments in domains ranging from frequency metrology \cite{udem2002optical,rosenband2008frequency} and optical clocks \cite{diddams2001optical,yeClockReview2014} to broadband spectroscopy \cite{thorpe2006broadband,diddams2007molecular} and absolute distance measurement \cite{coddington2009rapid,van2012many}. This sensitivity originates from the fact that a comb carries upwards of $\sim 10^{5}$ copropagating, coherently-locked frequency modes \cite{cundiff2003colloquium}. An understanding of the aggregate noise originating from these teeth is essential for assessing the ultimate sensitivity of a given measurement scheme.
In practice, FC noise dynamics are typically described with a succinct number of collective properties \cite{haus1993noise,newbury2005theory,newbury2007low}, such as the pulse energy, carrier envelope offset (CEO), or the temporal jitter of the pulse train. Along these lines, it has been theorized that a variation in one of these parameters perturbs the FC in a manner that consists of adding a particular noise mode to the coherent field structure \cite{haus1990quantum}. Hence, the entirety of the FC noise dynamics is theoretically governed by a set of unique noise modes in which each mode possesses a particular pulse shape \cite{haus1990quantum,haus1993noise}. The existence of such modes would imply a non-negligible role of spectral noise correlations among the individual FC teeth. Although the distribution of noise across a FC has been investigated \cite{bartels2004stabilization,swann2006fiber}, the role of correlations among various frequencies has gone largely unexplored. Toward that end, correlations among disparate spectral bands may be gleaned by observing whether the noise of their spectral sum is equal to the sum of the individual noises. Such a scheme may be implemented by combining ultrafast pulse shaping \cite{weiner2000femtosecond}, which furnishes an adjustable spectral filter, with quantum noise-limited balanced homodyne detection \cite{bachor2004guide}. This Letter details the extraction of classical amplitude and phase noise matrices for a solid-state femtosecond FC and, in doing so, provides the first experimental realization of uncoupled, broadband noise modes. Moreover, the identification of spectral correlations offers an enhanced characterization of the comb's noise dynamics from which any collective noise property can be extracted. 
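The spectral-sum test described above has a simple statistical core: for two bands $a$ and $b$, $\operatorname{Var}(a+b) = \operatorname{Var}(a) + \operatorname{Var}(b) + 2\operatorname{Cov}(a,b)$, so any excess of the summed noise over the sum of the individual noises measures the covariance. A minimal numerical sketch of this test, and of the eigendecomposition of the resulting noise matrix, using synthetic illustrative fluctuation data (not the analysis code used in the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: fluctuations of two spectral bands, built from a shared
# (correlated) component plus independent parts.  All magnitudes are
# illustrative assumptions.
n_samples = 100_000
common = rng.normal(size=n_samples)
band_a = common + 0.5 * rng.normal(size=n_samples)
band_b = common + 0.5 * rng.normal(size=n_samples)

# Noise of the spectral sum versus the sum of the individual noises:
#   Var(a + b) = Var(a) + Var(b) + 2 Cov(a, b)
var_sum = np.var(band_a + band_b)
sum_var = np.var(band_a) + np.var(band_b)
cov_ab = 0.5 * (var_sum - sum_var)   # covariance inferred from the excess

# Assembling such pairwise measurements over all bands yields a classical
# noise (covariance) matrix; its eigendecomposition gives the decoupled
# noise modes and their noise weights.
noise_matrix = np.cov(np.vstack([band_a, band_b]))
eigvals, eigvecs = np.linalg.eigh(noise_matrix)
```

Here the inferred covariance agrees with the directly computed matrix element, and the two eigenvectors (the sum and difference of the bands) are the decoupled modes of this toy model.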
The electric field $e(t)$ of the comb is written as a superposition of the constituent teeth: $e(t) = \sum_{n} E_{n}$ in which $E_{n} = A_{n} \exp \left[ i \left( \phi_{n} - \omega_{n} t \right) \right] + \textrm{c.c.} $. The tooth amplitude and phase are represented as $A_{n}$ and $\phi_{n}$, respectively, while the optical frequency $\omega_{n}$ is decomposed as $\omega_{n} = n \, \omega_{\textrm{rep}} + \omega_{\textrm{CEO}}$, in which $\omega_{\textrm{rep}}$ is the repetition rate of the comb and $\omega_{\textrm{CEO}}$ is the CEO frequency. As FCs are generated within an optical cavity, the coherent structure of this field may be interrupted by various intracavity noise sources, including thermal and mechanical drifts, spontaneous emission, and intensity noise of the pump source. In particular, these perturbations disrupt the field amplitude and phase of a single tooth in a manner described as $ \delta E_{n} = \left( \delta A_{n} + i \delta \phi_{n} A_{n} \right) \cdot \exp \left( -i \omega_{n} t \right)$ where $\delta E_{n} = E_{n} - \langle E_{n} \rangle$ \footnote{Note that it has tacitly been assumed that all of the comb teeth share a common mean phase of $\langle \phi_{n} \rangle = 0$ (i.e., a transform-limited pulse), which implies that the quadratures containing the phase fluctuations are aligned for all frequencies.}. These fluctuations may be decomposed in terms of their amplitude and phase components to yield \begin{equation} \label{eq:homodyne} \operatorname{Re}(\delta E_{n}) = \delta A_{n} \cos (\omega_{n} t) + A_{n} \delta \phi_{n} \sin(\omega_{n} t). \end{equation} Thus, variations of the spectral amplitude are manifest in the same quadrature as that carrying the mean electric field while phase fluctuations are observed in the conjugate quadrature. Homodyne detection offers a quantum noise-limited, phase-sensitive detection scheme capable of measuring these quadrature-dependent fluctuations. 
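Explicitly, Eq.~(\ref{eq:homodyne}) follows from expanding the complex exponential in $\delta E_{n}$ and retaining the real part:
\begin{equation*}
\operatorname{Re}\left[ \left( \delta A_{n} + i A_{n} \delta \phi_{n} \right) \left( \cos \omega_{n} t - i \sin \omega_{n} t \right) \right] = \delta A_{n} \cos (\omega_{n} t) + A_{n} \delta \phi_{n} \sin (\omega_{n} t) .
\end{equation*}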
In order to characterize the frequency distribution of noise, the collective parameters of pulse energy and phase described above are endowed with a spectral dependence. \begin{figure} \caption{Experimental layout for characterizing the classical noise in optical frequency combs. A titanium-sapphire oscillator produces a 76~MHz train of $\sim 140~\textrm{fs}$ pulses centered at 795~nm. This source is split, and one portion is directed to a 512-element, programmable 2D liquid-crystal pulse shaper, which serves to select a certain spectral region for noise analysis. When the shutter is closed, the balanced detector measures intensity noise in the spectral slice transmitted by the pulse shaper. Conversely, when the shutter is open, a part of the source is filtered in a Fabry-Perot filtering cavity (finesse of $\mathcal{F} \simeq 420$), which attenuates high-frequency fluctuations. This filtered beam is analyzed with homodyne detection, in which the shaped beam serves as the local oscillator. Phase variations are analyzed by locking the relative phase between the two interferometer arms with a piezo-controlled mirror.} \label{fig-exp-setup} \end{figure} A titanium-sapphire oscillator is utilized in which neither the CEO nor the repetition rate is locked. This source is divided, and one part constitutes a reference field, which is necessary for homodyne detection, while the other comprises the signal beam to be analyzed. This signal field is delivered to a programmable 512-element 2D liquid-crystal pulse shaper, which is capable of independent amplitude and phase modulation \cite{vaughan2005diffraction}. The pulse shaper partitions the $\sim 6~\textrm{nm}$ FWHM spectrum into ten non-overlapping bands of equal width ($\sim 1.5~\textrm{nm}$ per band). This segmentation allows the spectral amplitude and phase noise levels to be interrogated for each spectral band as well as all possible pairs of bands. 
The reference beam and the signal constitute two arms of a Mach-Zehnder interferometer as seen in Fig.~\ref{fig-exp-setup}. Since these two fields share a common phase, they must be decoupled in some manner in order to observe the phase noise of the laser source. This decoupling is achieved by introducing the reference beam into a high-finesse ($\mathcal{F} \simeq 420$) Fabry-Perot filtering cavity. The cavity has a free spectral range equal to that of the pulse train repetition rate (i.e., $76 \textrm{MHz}$), and its length is locked with a Pound-Drever-Hall scheme. Sideband fluctuations of the reference field transmitted by the cavity are strongly attenuated for frequencies higher than the cutoff frequency of $\sim 90 \textrm{kHz}$ \cite{Schmeissner2014}, whereas the initial fluctuations persist in the signal arm of the interferometer \footnote{In a traditional homodyne setup, the local oscillator (LO) field is significantly stronger than the reference field, i.e., $A_{\textrm{LO}} \gg A_{\textrm{ref}}$, which allows for measuring the fluctuations of the weaker reference field $E_{\textrm{ref}}$. In the present setup, however, the fluctuations are present on the stronger signal field, which serves as the LO, while they are attenuated on the reference field. Yet, since the interferometer measures the relative phase between its two arms, these fluctuations may be taken to originate in either arm. Thus, it is important to note that the present implementation of homodyne detection is equivalent to the traditional one.}. These two fields are recombined on a 50:50 beamsplitter and then detected with a pair of balanced silicon photodiodes. A mirror mounted on a piezo stack in one arm of the interferometer permits locking the mean relative phase between the two interferometer arms to a value of $\pi / 2$ (i.e., the phase quadrature). 
As homodyne detection is a projective technique, the difference of the photocurrents for the two diodes provides the quadrature noise of the field projected onto the spectrally-shaped signal. In particular, this difference is directly proportional to the relative phase $\phi_{\textrm{rel}} = \phi_{\textrm{sig}} - \phi_{\textrm{ref}}$ between the two interferometer arms. Since the fluctuations of $\phi_{\textrm{ref}}$ are significantly attenuated, this measure provides the phase noise of the field, i.e., $\delta \phi_{\textrm{rel}} \simeq \delta \phi_{\textrm{sig}}$. Additionally, the photocurrent difference obtained in the absence of the weaker reference field provides the shot noise level, which normalizes the observed phase fluctuations. This experimental arrangement is conceptually similar to that found in Ref.~\cite{roslund2013wavelength}. \begin{figure} \caption{Classical covariance matrices for the amplitude and phase quadratures of a titanium-sapphire frequency comb. The matrices are shown relative to shot noise, which has a value of 1.0. Panels (a) and (b) display the phase noise at 3~MHz and 500~kHz, respectively, while panels (c) and (d) depict the amplitude quadrature at analogous RF analysis frequencies. Both field quadratures exhibit shot noise limited dynamics at 3~MHz. Conversely, quadrature-dependent, correlated noise is evident for longer analysis timescales.} \label{fig-matrices} \end{figure} Amplitude fluctuations of the field are subsequently assessed by blocking the reference beam of the interferometer with a shutter and examining the sum of the two photocurrents, which is directly proportional to the signal field's intensity noise. The corresponding shot noise level is retrieved by measuring the difference of the photocurrents for the two diodes. By normalizing the intensity fluctuations to the appropriate shot noise levels, the observed intensity variations are converted to field amplitude noise \cite{opatrny2002mode}. 
Photocurrent fluctuations arising from both amplitude and phase noise of the source are analyzed with a spectrum analyzer in the sideband RF frequency range of $20 \textrm{kHz} - 5 \textrm{MHz}$. The observed noise fluctuations closely follow Gaussian statistics, which enables their dynamics to be described with a moment-based covariance matrix. In order to do so, a quadrature operator $\mathcal{O}_{i}$ is associated with each of the ten spectral zones created by the pulse shaper, i.e., $i = 1 \ldots 10$. In line with Eq.~\ref{eq:homodyne}, this operator assumes the form $\mathcal{O}_{i} = \delta \tilde{A}_{i}$ for measurements of the amplitude noise (i.e., when the filtering cavity is blocked) whereas its value for measurements of the phase quadrature is given by $\mathcal{O}_{i} = \tilde{A}_{i} \, \delta \tilde{\phi}_{i}$, where $\tilde{A}_{i}$ and $\tilde{\phi}_{i}$ are the amplitude and phase of each spectral partition, respectively. The spectrum analyzer measures the RF power of the photocurrent fluctuations, which therefore provides a measure of the noise variance $\langle \mathcal{O}_{i}^{2} \rangle$. Covariance matrices for the amplitude and phase quadratures are independently assembled from 55 distinct measurements for each quadrature, and the noise covariance between two spectral regions is assessed as \cite{spalter1998observation}: \begin{multline} \langle \mathcal{O}_{i} \mathcal{O}_{j} \rangle = \left[ \langle (\mathcal{O}_{i} + \mathcal{O}_{j})^2 \rangle_{M} - \frac{P_{i}}{P_{t}} \langle \mathcal{O}_{i}^2 \rangle_{M} \right. \\ \left. - \frac{P_{j}}{P_{t}} \langle \mathcal{O}_{j}^2 \rangle_{M} \right] \times \frac{P_{t} }{2 \sqrt{P_{i} \, P_{j}}} , \end{multline} where $\langle \mathcal{O} ^{2} \rangle_{M}$ is a measured quadrature variance, $P_{i}$ is the optical power associated with a given spectral slice of the comb (evaluated with the mean photodiode signal), and $P_{t} = P_{i} + P_{j}$. 
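The covariance reconstruction above can be verified on synthetic data. The sketch below simulates raw photocurrent fluctuations for two spectral bands with a known correlation; the optical powers, noise magnitudes, and shot-noise scale are all hypothetical. Applying the reconstruction formula to the shot-noise-normalized variances recovers the covariance of the normalized quadratures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw photocurrent fluctuations for two bands with a known
# correlation; s is the shot-noise power per unit optical power.
s, P_i, P_j = 1.0, 2.0, 3.0
P_t = P_i + P_j
cov_raw = np.array([[4.0, 1.5],
                    [1.5, 9.0]])        # raw covariance of (x_i, x_j)
x = rng.multivariate_normal([0.0, 0.0], cov_raw, size=200000)

# Shot-noise-normalized variances, as measured by the spectrum analyzer.
V_i = np.var(x[:, 0]) / (s * P_i)            # <O_i^2>_M
V_j = np.var(x[:, 1]) / (s * P_j)            # <O_j^2>_M
V_sum = np.var(x.sum(axis=1)) / (s * P_t)    # <(O_i + O_j)^2>_M

# Covariance reconstruction, Eq. (2) of the text.
cov_ij = ((V_sum - (P_i / P_t) * V_i - (P_j / P_t) * V_j)
          * P_t / (2.0 * np.sqrt(P_i * P_j)))

# Expected value: covariance of the shot-noise-normalized quadratures.
cov_direct = cov_raw[0, 1] / (s * np.sqrt(P_i * P_j))
```

The power weightings in the formula account for the fact that the shot-noise reference scales with the optical power of each band, so the reconstructed value agrees with the direct normalized covariance up to sampling error.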
The assembled covariance matrices for both the amplitude and phase quadratures are shown in Fig.~\ref{fig-matrices} for two different interrogation timescales. All of the matrices display the covariance elements relative to the shot noise limit, which possesses a value of 1.0. For high frequency sideband fluctuations ($\sim 3~\textrm{MHz}$), the noise in each spectral band is identical and equal to the shot noise limit (panels (a) and (c) of Fig.~\ref{fig-matrices}). This is expected since the source itself is shot noise limited for sideband frequencies larger than several megahertz. Moreover, neither the amplitude nor the phase matrix exhibits correlations among disparate optical wavelengths for high analysis frequencies, which is in accord with the fact that vacuum fluctuations are entirely uncorrelated for different frequency modes. For longer analysis timescales, however, the matrices are no longer diagonal, and correlations among various frequencies become readily apparent (panels (b) and (d) of Fig.~\ref{fig-matrices}). Amplitude quadrature noise and correlations are predominantly localized in the spectral wings. Conversely, the noise and correlations for the phase quadrature are largely confined to the spectral center. The set of amplitude and phase noise matrices is qualitatively similar across the range of considered RF frequencies, although the magnitude of the covariance values increases with an increasing analysis timescale. One of the primary aims of this work is to discover an underlying modal representation for the FC fluctuations, and such a description becomes accessible upon eigendecomposition of the quadrature covariance matrices. The eigenvalues for both quadratures are dependent upon the analysis frequency and are shown in Fig.~\ref{fig-eigen}. Above a frequency of $\sim 2~\textrm{MHz}$, the noise in both quadratures is equal to that of vacuum fluctuations, which originate from the quantum nature of light. 
Conversely, the individual eigenvalues rise above the shot noise limit for longer analysis timescales but do so in a nondegenerate fashion as seen in Fig.~\ref{fig-eigen}a,c. The noise is also generally higher in the phase quadrature. Importantly, a principal noise mode is evident in both quadratures and accounts for $\sim 60\%$ and $\sim 45\%$ of the fluctuations in the amplitude and phase quadratures, respectively. The existence of a principal eigenmode also accounts for the fact that the covariance matrices of both quadratures exhibit purely positive correlations \cite{opatrny2002mode}. \begin{figure} \caption{Eigendecomposition of the classical amplitude and phase quadrature covariance matrices. The noise eigenvalues for the phase and amplitude quadratures are shown in panels (a) and (c), respectively. A dominant eigenmode is evident for both quadratures. The leading two eigenmodes at an RF frequency of $\sim 1~\textrm{MHz}$ are shown in panels (b) and (d) for the phase and amplitude quadratures, respectively. A spline interpolation has been added for visualization purposes.} \label{fig-eigen} \end{figure} The eigenmodes corresponding to these eigenvalues reveal the spectral composition of the observed noise correlations. These structures are approximately frequency independent for detection frequencies $\lesssim 2~\textrm{MHz}$. The leading two modes for each quadrature are shown in Fig.~\ref{fig-eigen}b,d for a detection frequency of $\sim 1~\textrm{MHz}$. In the case of the phase quadrature, the leading noise mode is very similar to the spectral envelope of the mean field, whereas the secondary eigenmode is comparable to the spectral derivative of this mean field envelope. Consequently, to a first approximation, the field is described with a fixed amplitude that is subject to fluctuations of a common phase, which is consistent with a traditional picture of field noise. 
Additionally, these two eigenstructures closely approximate the theoretical phase quadrature noise modes originally predicted by Haus \emph{et al.} \cite{haus1990quantum}. The amplitude quadrature eigenmodes, on the other hand, are markedly different and not amenable to straightforward interpretation. Nonetheless, it is worth mentioning that the leading amplitude mode seen in Fig.~\ref{fig-eigen}d resembles the theoretical mode that characterizes pulse energy fluctuations \cite{haus1990quantum}. It is also possible to derive the comb's collective noise parameters given knowledge of the noise distribution on the underlying optical teeth. In particular, each collective parameter possesses a specific spectral mode, and its characteristics may be elucidated following an appropriate basis transformation of the comb teeth. As a means for demonstrating this capability, the collective properties of CEO phase noise and group delay jitter \cite{paschotta2004noise} are extracted from the covariance matrices shown in Fig.~\ref{fig-matrices}. The electric field of the comb is rewritten in the form $e(t) = \exp \left[ -i \omega_{0} t \right] \cdot \sum_{n} A_{n} \exp \left[ i \left( \phi_{n} - \Omega_{n} t \right) \right]$, where $\omega_{0}$ is the optical carrier frequency and $\Omega_{n} = \omega_{n} - \omega_{0}$. By identifying a slowly-varying complex field envelope as $E(t) = \sum_{n} A_{n} \exp \left[ i \left( \phi_{n} - \Omega_{n} t \right) \right]$, the field assumes the simplified form $e(t) = E(t) \cdot \exp \left[ -i \omega_{0} t \right] $. Fluctuations in the pulse arrival time $\delta \tau$ are then considered, which induce a corresponding variation of the field $ \delta e(t,\delta \tau)$. 
For temporal variations $\delta \tau$ smaller than the duration of the optical period, i.e., $\delta \tau \ll 1 / \omega_{0}$, the field may be expanded to first order in time, which enables the perturbation to be written $\delta e(t,\delta \tau) = e(t+\delta \tau) - e(t) \simeq \delta \tau \cdot \partial e(t) / \partial t$ \cite{von1986characterization,paschotta2006optical,lamine2008quantum}. Given the decomposition of the field into an envelope and the optical carrier, this perturbation is shown to be $\delta e(t, \delta \tau) = \delta \tau \cdot \left[ -i \omega_{0} E(t) + \partial E(t) / \partial t \right] \cdot \exp \left[ - i \omega_{0} t \right]$. The first term encapsulates field noise that arises from variations of only the optical carrier and corresponds to CEO noise. Conversely, the second term expresses field noise resultant from fluctuations of the envelope arrival time, i.e., repetition rate jitter. The temporal field variation $\delta e(t, \delta \tau)$ is then Fourier-transformed in order to reveal the modal representation for these two noise sources, i.e., $\delta E(\omega,\delta \tau) = (1 / 2 \pi) \cdot \int_{-\infty}^{\infty} dt \, \delta e(t, \delta \tau) \cdot \exp[i \omega t]$. Upon doing so, the spectral representation of the field fluctuations is written as \begin{eqnarray} \label{eq:timing-modes} \delta E(\omega) &\simeq& -i \, \delta \tau \left[ \omega_{0} \, E(\Omega) + \Omega \cdot E(\Omega) \right] \\ \nonumber & \simeq & -i \, \delta \tau \left[ w_\textrm{CEO} + w_\textrm{rep} \right], \end{eqnarray} where the leading spectral mode $w_\textrm{CEO}$ captures CEO noise and the subsequent mode $w_\textrm{rep}$ describes temporal jitter. Importantly, these two spectral modes are orthogonal and are both represented in the phase quadrature. \begin{figure} \caption{Noise variances for CEO phase fluctuations and temporal jitter that are extracted from the classical noise matrices. 
The CEO noise is the dominant noise property of the presently utilized frequency comb.} \label{fig-projection} \end{figure} The modal forms $w_\alpha$, where $\alpha \in \{ \textrm{CEO},\textrm{rep} \}$, are discretized in a manner corresponding to the ten investigated spectral zones, which provides the vectorial representations $\vec{w}_\alpha$. The phase noise variances of these modes, $\langle \mathcal{O}_\alpha^{2} \rangle$, are retrieved by projecting $\vec{w}_\alpha$ onto the phase quadrature eigenmodes $\vec{\psi}_{\textrm{ph},k}$. This inner product is weighted by the corresponding eigenvalue $\sigma_{\textrm{ph},k}^{2}$. The variances are then written as $\langle \mathcal{O}_{\alpha}^{2} \rangle = \sum_{k} \sigma_{\textrm{ph},k}^{2} \left( \vec{\psi}_{\textrm{ph},k} \cdot \vec{w}_{\alpha} \right)^{2}$ where $\alpha \in \{ \textrm{CEO},\textrm{rep} \}$. The extracted spectral densities are shown in Fig.~\ref{fig-projection} \footnote{See Supplemental Material.}. The CEO phase noise levels exceed the noise levels attributed to temporal jitter by approximately 10\,dB; therefore, CEO noise is the dominant noise property of the presently studied FC. Furthermore, the similarity of $\vec{w}_\textrm{CEO}$ and $\vec{w}_\textrm{rep}$ to the leading two eigenmodes of the phase covariance matrix implies that the structure of this matrix is largely dictated by these two noise sources. Importantly, any collective comb property that is expressible as a superposition of the underlying optical frequencies may potentially be deduced from the intrinsic noise matrices. It should be noted that inter-quadrature correlations of the form $\langle \delta A \, \delta \phi \rangle$ may alter the composition of the phase and amplitude eigenmodes. Since the measured phase noise significantly exceeds the amplitude fluctuations (Fig.~\ref{fig-eigen}a,c), the effect of quadrature coupling on the phase eigenmodes is expected to be minimal. 
However, such correlations have the potential to contribute significantly to the amplitude quadrature, which may explain why the form of the amplitude eigenmodes is not as clearly connected to theoretical expectations as that of the phase eigenmodes. Accordingly, knowledge of these inter-quadrature correlations is essential for assembling a truly decoupled modal representation. The presently utilized apparatus may be adapted to directly measure these inter-quadrature correlations. In conclusion, correlations in the noise fluctuations are shown to exist among the underlying teeth of an optical FC by combining ultrafast pulse shaping with balanced homodyne detection. Knowledge of these correlations is utilized to construct classical noise covariance matrices, which are introduced as a tool to characterize the amplitude and phase noise present in ultrafast FCs. These matrices generalize the theoretical methodology originally outlined by Haus \emph{et al.} and provide the first demonstration of uncoupled noise modes in FC sources. Importantly, these techniques are not limited to solid-state comb sources and may be implemented for a variety of systems, including both fiber- and microresonator-based FCs for which the noise dynamics are not as thoroughly understood. The ability to diagonalize broadband, correlated classical noise provides an improved understanding of the noise present in FCs and its ultimate effect on measurement sensitivity. This work is supported by the European Research Council starting grant Frecquam and the French National Research Agency project Comb. C.F. is a member of the Institut Universitaire de France. J.R. acknowledges support from the European Commission through Marie Curie Actions. 
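The full modal analysis, eigendecomposition of a phase-quadrature covariance matrix followed by projection of the collective-mode vectors, can be sketched end to end on synthetic data. In the snippet below, the matrix is built from shot noise plus two orthogonal classical modes shaped like a CEO-type envelope and an odd jitter-type mode; every shape and magnitude is a hypothetical stand-in for the measured quantities, chosen only to make the recovery explicit.

```python
import numpy as np

# Toy 10x10 phase-quadrature covariance matrix: shot noise (identity) plus
# two orthogonal classical noise modes. All magnitudes are hypothetical.
n = 10
x = np.linspace(-1.0, 1.0, n)
env = np.exp(-x**2 / 0.3)

w_ceo = env / np.linalg.norm(env)     # CEO-like mode: mean-field envelope
w_rep = x * env                       # jitter-like mode: odd, derivative-shaped
w_rep -= (w_rep @ w_ceo) * w_ceo      # enforce exact orthogonality
w_rep /= np.linalg.norm(w_rep)

var_ceo_in, var_rep_in = 20.0, 2.0    # injected classical noise (shot = 1)
C = (np.eye(n)
     + var_ceo_in * np.outer(w_ceo, w_ceo)
     + var_rep_in * np.outer(w_rep, w_rep))

# Eigendecomposition reveals the uncoupled noise modes (eigh: ascending).
sigma2, psi = np.linalg.eigh(C)
leading = psi[:, -1]
overlap = abs(leading @ w_ceo)        # leading eigenmode matches the CEO mode

# Projection of a collective mode w onto the eigenbasis gives its variance:
# <O_alpha^2> = sum_k sigma_k^2 (psi_k . w)^2, identical to w^T C w.
def mode_variance(w):
    return float(np.sum(sigma2 * (psi.T @ w) ** 2))

var_ceo = mode_variance(w_ceo)        # -> shot noise + injected CEO noise
var_rep = mode_variance(w_rep)        # -> shot noise + injected jitter
```

For unit-normalized modes the projection sum is simply the quadratic form of the covariance matrix, so the injected variances (plus the shot-noise floor) are recovered exactly.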
\begin{thebibliography}{26} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Udem et~al.}(2002)\citenamefont{Udem, Holzwarth, and H{\"a}nsch}}]{udem2002optical} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Udem}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Holzwarth}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.~W.} \bibnamefont{H{\"a}nsch}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{416}}, \bibinfo{pages}{233} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Rosenband et~al.}(2008)\citenamefont{Rosenband, Hume, Schmidt, Chou, Brusch, Lorini, Oskay, Drullinger, Fortier, Stalnaker et~al.}}]{rosenband2008frequency} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Rosenband}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Hume}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Schmidt}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Chou}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Brusch}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Lorini}}, \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Oskay}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Drullinger}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Fortier}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Stalnaker}}, \bibnamefont{et~al.}, \bibinfo{journal}{Science} \textbf{\bibinfo{volume}{319}}, \bibinfo{pages}{1808} (\bibinfo{year}{2008}). 
\bibitem[{\citenamefont{Diddams et~al.}(2001)\citenamefont{Diddams, Udem, Bergquist, Curtis, Drullinger, Hollberg, Itano, Lee, Oates, Vogel et~al.}}]{diddams2001optical} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Diddams}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Udem}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Bergquist}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Curtis}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Drullinger}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Hollberg}}, \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Itano}}, \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Lee}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Oates}}, \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Vogel}}, \bibnamefont{et~al.}, \bibinfo{journal}{Science} \textbf{\bibinfo{volume}{293}}, \bibinfo{pages}{825} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Ludlow et~al.}(2014)\citenamefont{Ludlow, Boyd, Ye, Peik, and Schmidt}}]{yeClockReview2014} \bibinfo{author}{\bibfnamefont{A.~D.} \bibnamefont{Ludlow}}, \bibinfo{author}{\bibfnamefont{M.~M.} \bibnamefont{Boyd}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Ye}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Peik}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~O.} \bibnamefont{Schmidt}}, \emph{\bibinfo{title}{Optical atomic clocks}} (\bibinfo{year}{2014}), \eprint{arXiv:1407.3493}. \bibitem[{\citenamefont{Thorpe et~al.}(2006)\citenamefont{Thorpe, Moll, Jones, Safdi, and Ye}}]{thorpe2006broadband} \bibinfo{author}{\bibfnamefont{M.~J.} \bibnamefont{Thorpe}}, \bibinfo{author}{\bibfnamefont{K.~D.} \bibnamefont{Moll}}, \bibinfo{author}{\bibfnamefont{R.~J.} \bibnamefont{Jones}}, \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Safdi}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Ye}}, \bibinfo{journal}{Science} \textbf{\bibinfo{volume}{311}}, \bibinfo{pages}{1595} (\bibinfo{year}{2006}). 
\bibitem[{\citenamefont{Diddams et~al.}(2007)\citenamefont{Diddams, Hollberg, and Mbele}}]{diddams2007molecular} \bibinfo{author}{\bibfnamefont{S.~A.} \bibnamefont{Diddams}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Hollberg}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Mbele}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{445}}, \bibinfo{pages}{627} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Coddington et~al.}(2009)\citenamefont{Coddington, Swann, Nenadovic, and Newbury}}]{coddington2009rapid} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Coddington}}, \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Swann}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Nenadovic}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Newbury}}, \bibinfo{journal}{Nature Photonics} \textbf{\bibinfo{volume}{3}}, \bibinfo{pages}{351} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Van~den Berg et~al.}(2012)\citenamefont{Van~den Berg, Persijn, Kok, Zeitouny, and Bhattacharya}}]{van2012many} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Van~den Berg}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Persijn}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Kok}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Zeitouny}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Bhattacharya}}, \bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{108}}, \bibinfo{pages}{183901} (\bibinfo{year}{2012}). \bibitem[{\citenamefont{Cundiff and Ye}(2003)}]{cundiff2003colloquium} \bibinfo{author}{\bibfnamefont{S.~T.} \bibnamefont{Cundiff}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Ye}}, \bibinfo{journal}{Reviews of Modern Physics} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{325} (\bibinfo{year}{2003}). 
\bibitem[{\citenamefont{Haus and Mecozzi}(1993)}]{haus1993noise} \bibinfo{author}{\bibfnamefont{H.~A.} \bibnamefont{Haus}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Mecozzi}}, \bibinfo{journal}{IEEE Journal of Quantum Electronics} \textbf{\bibinfo{volume}{29}}, \bibinfo{pages}{983} (\bibinfo{year}{1993}). \bibitem[{\citenamefont{Newbury and Washburn}(2005)}]{newbury2005theory} \bibinfo{author}{\bibfnamefont{N.~R.} \bibnamefont{Newbury}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.~R.} \bibnamefont{Washburn}}, \bibinfo{journal}{IEEE Journal of Quantum Electronics} \textbf{\bibinfo{volume}{41}}, \bibinfo{pages}{1388} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Newbury and Swann}(2007)}]{newbury2007low} \bibinfo{author}{\bibfnamefont{N.~R.} \bibnamefont{Newbury}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{W.~C.} \bibnamefont{Swann}}, \bibinfo{journal}{JOSA B} \textbf{\bibinfo{volume}{24}}, \bibinfo{pages}{1756} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Haus and Lai}(1990)}]{haus1990quantum} \bibinfo{author}{\bibfnamefont{H.~A.} \bibnamefont{Haus}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Lai}}, \bibinfo{journal}{JOSA B} \textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{386} (\bibinfo{year}{1990}). \bibitem[{\citenamefont{Bartels et~al.}(2004)\citenamefont{Bartels, Oates, Hollberg, and Diddams}}]{bartels2004stabilization} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Bartels}}, \bibinfo{author}{\bibfnamefont{C.~W.} \bibnamefont{Oates}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Hollberg}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~A.} \bibnamefont{Diddams}}, \bibinfo{journal}{Optics letters} \textbf{\bibinfo{volume}{29}}, \bibinfo{pages}{1081} (\bibinfo{year}{2004}). 
\bibitem[{\citenamefont{Swann et~al.}(2006)\citenamefont{Swann, McFerran, Coddington, Newbury, Hartl, Fermann, Westbrook, Nicholson, Feder, Langrock et~al.}}]{swann2006fiber} \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Swann}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{McFerran}}, \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Coddington}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Newbury}}, \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Hartl}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Fermann}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Westbrook}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Nicholson}}, \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Feder}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Langrock}}, \bibnamefont{et~al.}, \bibinfo{journal}{Optics letters} \textbf{\bibinfo{volume}{31}}, \bibinfo{pages}{3046} (\bibinfo{year}{2006}). \bibitem[{\citenamefont{Weiner}(2000)}]{weiner2000femtosecond} \bibinfo{author}{\bibfnamefont{A.~M.} \bibnamefont{Weiner}}, \bibinfo{journal}{Review of scientific instruments} \textbf{\bibinfo{volume}{71}}, \bibinfo{pages}{1929} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Bachor and Ralph}(2004)}]{bachor2004guide} \bibinfo{author}{\bibfnamefont{H.-A.} \bibnamefont{Bachor}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.~C.} \bibnamefont{Ralph}}, \emph{\bibinfo{title}{A Guide to Experiments in Quantum Optics}} (\bibinfo{publisher}{Wiley-VCH}, \bibinfo{year}{2004}), 2nd ed. 
\bibitem[{\citenamefont{Vaughan et~al.}(2005)\citenamefont{Vaughan, Hornung, Feurer, and Nelson}}]{vaughan2005diffraction} \bibinfo{author}{\bibfnamefont{J.~C.} \bibnamefont{Vaughan}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Hornung}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Feurer}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{K.~A.} \bibnamefont{Nelson}}, \bibinfo{journal}{Optics letters} \textbf{\bibinfo{volume}{30}}, \bibinfo{pages}{323} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Schmeissner et~al.}(2014)\citenamefont{Schmeissner, Thiel, Jacquard, Fabre, and Treps}}]{Schmeissner2014} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Schmeissner}}, \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Thiel}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Jacquard}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Fabre}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Treps}}, \bibinfo{journal}{Opt. Lett.} \textbf{\bibinfo{volume}{39}}, \bibinfo{pages}{3603} (\bibinfo{year}{2014}). \bibitem[{\citenamefont{Roslund et~al.}(2013)\citenamefont{Roslund, De~Araujo, Jiang, Fabre, and Treps}}]{roslund2013wavelength} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Roslund}}, \bibinfo{author}{\bibfnamefont{R.~M.} \bibnamefont{De~Araujo}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Jiang}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Fabre}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Treps}}, \bibinfo{journal}{Nature Photonics} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{109} (\bibinfo{year}{2013}). 
\bibitem[{\citenamefont{Opatrn{\`y} et~al.}(2002)\citenamefont{Opatrn{\`y}, Korolkova, and Leuchs}}]{opatrny2002mode} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Opatrn{\`y}}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Korolkova}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Leuchs}}, \bibinfo{journal}{Physical Review A} \textbf{\bibinfo{volume}{66}}, \bibinfo{pages}{053813} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Sp{\"a}lter et~al.}(1998)\citenamefont{Sp{\"a}lter, Korolkova, K{\"o}nig, Sizmann, and Leuchs}}]{spalter1998observation} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Sp{\"a}lter}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Korolkova}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{K{\"o}nig}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Sizmann}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Leuchs}}, \bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{786} (\bibinfo{year}{1998}). \bibitem[{\citenamefont{Paschotta}(2004)}]{paschotta2004noise} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Paschotta}}, \bibinfo{journal}{Applied Physics B} \textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{163} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Von~der Linde}(1986)}]{von1986characterization} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Von~der Linde}}, \bibinfo{journal}{Applied Physics B} \textbf{\bibinfo{volume}{39}}, \bibinfo{pages}{201} (\bibinfo{year}{1986}). 
\bibitem[{\citenamefont{Paschotta et~al.}(2006)\citenamefont{Paschotta, Schlatter, Zeller, Telle, and Keller}}]{paschotta2006optical} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Paschotta}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Schlatter}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Zeller}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Telle}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{U.}~\bibnamefont{Keller}}, \bibinfo{journal}{Applied Physics B} \textbf{\bibinfo{volume}{82}}, \bibinfo{pages}{265} (\bibinfo{year}{2006}). \bibitem[{\citenamefont{Lamine et~al.}(2008)\citenamefont{Lamine, Fabre, and Treps}}]{lamine2008quantum} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Lamine}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Fabre}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Treps}}, \bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{101}}, \bibinfo{pages}{123601} (\bibinfo{year}{2008}). \end{thebibliography} \onecolumngrid \twocolumngrid \section{Supplementary Material} This supplement details how the experimental setup presented in the main text of the article is utilized to determine noise levels for a given mode of the frequency comb. These noise levels are determined relative to the optical carrier, which enables comparison to other reported FC noise levels in the literature. In particular, the methodology presented in this supplement is exploited to construct Fig.~4 of the main article. \section{Phase Noise Decoupling} In the absence of a filtering cavity, both arms of the interferometer share identical phase fluctuations since they originate from a common source. As a result, the phase noise between the two arms should be perfectly correlated, i.e., $\delta \phi_{\textrm{sig}} = \delta \phi_{\textrm{ref}} = \delta \phi_{\textrm{0}}$, where $\delta \phi_{\textrm{0}}$ are the phase fluctuations of the laser source. 
However, phase perturbations in the interferometer itself (i.e., mechanical or thermal fluctuations) will disrupt this correlation and induce a relative phase noise at the homodyne detection. Thus, it is important to distinguish noise originating within the interferometer from the intrinsic noise of the laser field. This discrimination is based upon the frequency content of the noise. Phase fluctuations originating from the interferometer are generally low frequency ($\lesssim 10~\textrm{kHz}$), while the present study is concerned with high-frequency ($\gtrsim 100~\textrm{kHz}$) field noise that arises from within the oscillator cavity. Upon locking the average relative phase between the two arms of the interferometer, the low-frequency noise component is practically eliminated, which allows consideration of the residual high-frequency fluctuations that lie outside the bandwidth of the servo system ($\sim 10~\textrm{kHz}$). Henceforth, the discussion of noise refers exclusively to the intrinsic field noise of the frequency comb source. Since the high-frequency phase fluctuations are common to both arms of the interferometer, the noise of the relative phase detected by the homodyne apparatus is zero, i.e., $\delta \phi_{\textrm{rel}} = \delta \phi_{\textrm{sig}} - \delta \phi_{\textrm{ref}} = 0$. In order to decorrelate the fluctuations between the two interferometer arms, the reference beam is filtered with a high-finesse Fabry-Perot cavity, which attenuates its phase fluctuations. In the absence of any amplitude fluctuations, the phase noise emerging from the cavity can be described as $\delta \phi_{\textrm{ref}} = H(f) \cdot \delta \phi_{\textrm{0}}$, in which $H(f)$ is the frequency-dependent transfer function for the optical sidebands. 
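Before deriving $H(f)$ explicitly, the consequence of this filtering for the homodyne signal can be sketched numerically. The snippet below uses a purely illustrative first-order low-pass $H$ with a placeholder 90 kHz cutoff (the actual cavity response and cutoff are established in the Cavity Transfer Function section); it shows the two limits discussed next: correlated arms at low sideband frequencies and fully decorrelated arms at high frequencies.

```python
def relative_phase_noise(noise0, Hf):
    """<dphi_rel^2> = (1 - H)^2 * <dphi_0^2> for a given sideband transfer factor H."""
    return (1.0 - Hf) ** 2 * noise0

f_c = 90e3                                    # illustrative cutoff only
H = lambda f: 1.0 / (1.0 + (f / f_c) ** 2)    # placeholder first-order low-pass

print(relative_phase_noise(1.0, H(1e2)))      # f << f_c: correlated arms, -> 0
print(relative_phase_noise(1.0, H(1e8)))      # f >> f_c: decorrelated, -> <dphi_0^2>
```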
As such, the relative phase noise measured with homodyne detection is described as \begin{eqnarray} \langle \delta \phi_{\textrm{rel}} ^{2} \rangle &=& \langle \left[ \delta \phi_{\textrm{sig}} - \delta \phi_{\textrm{ref}} \right]^{2} \rangle \\ \nonumber &=& \langle \left[ \delta \phi_{\textrm{0}} - H(f) \cdot \delta \phi_{\textrm{0}} \right]^{2} \rangle \\ \nonumber &=& \left[ 1 - H(f) \right]^{2} \langle \delta \phi_{\textrm{0}}^{2} \rangle. \end{eqnarray} For low sideband frequencies, the cavity does not filter the field ($H \rightarrow 1$), and the two arms of the interferometer remain correlated. As a result, the relative phase noise that is measured approaches zero, i.e., $\langle \delta \phi_{\textrm{rel}} ^{2} \rangle \rightarrow 0$. Conversely, for high sideband frequencies, the cavity perfectly filters the incoming phase fluctuations ($H \rightarrow 0$), such that the detected noise of the relative phase reflects that of the intracavity field, i.e., $\langle \delta \phi_{\textrm{rel}} ^{2} \rangle \rightarrow \langle \delta \phi_{\textrm{0}} ^{2} \rangle$. Upon knowing the explicit form of the transfer function $H(f)$, the measured relative phase noise $\langle \delta \phi_{\textrm{rel}} ^{2} \rangle $ may be corrected in order to yield the intracavity phase noise $\langle \delta \phi_{\textrm{0}} ^{2} \rangle$. \section{Cavity Transfer Function} The field transmitted by the cavity may be written as $E_{\textrm{ref}} = t(f) \cdot E_{\textrm{0}}$, in which $t(f)$ is the cavity's frequency-dependent transmission factor. The complex laser field $E$ may be decomposed in terms of its amplitude and phase quadratures to yield $E = E^{x} + i \, E^{p}$, in which $E^{x}$ and $E^{p}$ represent the amplitude and phase quadrature of the field, respectively. 
The field transmitted by the cavity may be decomposed in a similar manner: \begin{eqnarray} E_{\textrm{ref}}^{x} &=& \operatorname{Re}(t) E_{\textrm{0}}^{x} - \operatorname{Im}(t) E_{\textrm{0}}^{p} \label{convsys} \\ \nonumber E_{\textrm{ref}}^{p} &=& \operatorname{Im}(t) E_{\textrm{0}}^{x} + \operatorname{Re}(t) E_{\textrm{0}}^{p}. \end{eqnarray} Hence, the cavity interconverts the amplitude and phase quadratures of the input field $E_{\textrm{0}}$. The quadrature fluctuations may also be written in terms of the field amplitude $A$ and phase $\phi$ in a manner analogous to that adopted in the main text of the article: $\delta E_{\textrm{0}}^{x} = \delta A$ and $\delta E_{\textrm{0}}^{p} = A \, \delta \phi $. As seen in Fig.~3a,c of the main text, the measured phase noise significantly exceeds that of the amplitude noise. Additionally, the forms of the amplitude and phase noise matrices seen in Fig.~2b,d (and their respective eigenvectors) are qualitatively different. Consequently, quadrature-interconverted amplitude fluctuations do not contribute to the observed phase noise in a meaningful way and are henceforth neglected. As a result, the phase quadrature of the field transmitted by the cavity is taken to be $E_{\textrm{ref}}^{p} \simeq \operatorname{Re}(t) E_{\textrm{0}}^{p}$, which implies that the real component of the transmission function dictates the level of phase decoupling. The transmission function for a Fabry-Perot filtering cavity is given as \begin{equation} \label{eq:trans1} t(f) = \frac{t_{1}t_{2} \exp[i \pi \cdot f / f_{\textrm{rep}}]}{1-r_{1}r_{2} \exp[i 2 \pi \cdot f / f_{\textrm{rep}}]}, \end{equation} where $t_{1}\,(r_{1})$ and $t_{2}\,(r_{2})$ are the field transmission (reflection) coefficients for the input and output coupler, respectively, and $f_{\textrm{rep}}$ the repetition rate of the input frequency comb (equivalent to the free spectral range when the cavity is at resonance). 
For sideband frequencies $f$ that are significantly smaller than the repetition of the pulse train, i.e., $ f / f_{\textrm{rep}} \ll 1$, the transmission function may be simplified to the form: \begin{equation} \label{eq:trans2} t(f) \simeq \sqrt{T_{\textrm{max}}} \cdot \frac{1+ i \, \pi \left( \frac{1+r_{1}r_{2}}{1-r_{1}r_{2}} \right) \cdot f / f_{\textrm{rep}} }{1+\pi^{2} \, F \cdot (f / f_{\textrm{rep}})^2 } \end{equation} where $T_{\textrm{max}} = \left[ t_{1}t_{2} / (1-r_{1}r_{2}) \right]^{2}$ is the maximum transmission of the field intensity and the coefficient $F = 4 r_{1} r_{2} / (1 - r_{1} r_{2})^{2}$ is related to the cavity finesse $\mathcal{F}$ through the equation $F = 1 / \sin^{2} \left[ \pi / (2 \mathcal{F}) \right]$. For a high finesse cavity (i.e., $\mathcal{F} \gg 1$), Eq.~\ref{eq:trans2} may again be simplified to yield \begin{equation} \label{eq:trans3} t(f) \simeq \sqrt{T_{\textrm{max}}} \cdot \frac{1+ i \, (f / f_{c}) }{1+(f / f_{c})^2 }, \end{equation} where the cutoff frequency $f_{c}$ of the cavity is given as $f_{c} = f_{\textrm{rep}} / (2 \mathcal{F})$. The transmission factor that attenuates the fluctuations of the field's phase quadrature is therefore finally specified as \begin{equation} \operatorname{Re}(t) = \frac{\sqrt{T_{\textrm{max}}}}{1+(f / f_{c})^2}. \end{equation} Accordingly, the amplitude of the field emerging from the cavity is given as $A_{\textrm{ref}} = \sqrt{T_{\textrm{max}}} \cdot A_{\textrm{0}}$ while the phase fluctuations are attenuated as $\delta \phi_{\textrm{ref}} = H(f) \cdot \delta \phi_{\textrm{0}}$, where the transfer function is written as: \begin{equation} \label{eq:transferH} H(f) = \frac{1}{1+(f / f_{c})^2}, \end{equation} which is a Lorentzian function specified by its cutoff frequency $f_{c}$. 
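As a sanity check of these approximations, the exact transmission of Eq.~\ref{eq:trans1} can be compared numerically with the Lorentzian form. The mirror reflectivities and repetition rate below are hypothetical round numbers (lossless mirrors, $t_i^2 = 1 - r_i^2$), not the experimental cavity parameters.

```python
import cmath, math

# Hypothetical, lossless cavity (not the experimental parameters): t_i^2 = 1 - r_i^2
r1 = r2 = 0.999
t1, t2 = math.sqrt(1.0 - r1 ** 2), math.sqrt(1.0 - r2 ** 2)
f_rep = 156e6                                  # assumed repetition rate, Hz

def t_exact(f):
    """Full Fabry-Perot transmission of Eq. (trans1)."""
    x = f / f_rep
    num = t1 * t2 * cmath.exp(1j * math.pi * x)
    return num / (1.0 - r1 * r2 * cmath.exp(2j * math.pi * x))

finesse = math.pi * math.sqrt(r1 * r2) / (1.0 - r1 * r2)
f_c = f_rep / (2.0 * finesse)                  # Lorentzian cutoff f_c = f_rep / (2 F)
T_max = (t1 * t2 / (1.0 - r1 * r2)) ** 2

H = lambda f: 1.0 / (1.0 + (f / f_c) ** 2)     # Eq. (transferH)

# Re(t) ~ sqrt(T_max) * H(f) for f << f_rep; at f = f_c the factor is 1/2
for f in (0.1 * f_c, f_c, 10.0 * f_c):
    print(f, t_exact(f).real, math.sqrt(T_max) * H(f))
```

For these numbers the agreement is excellent up to a few cutoffs; at tens of $f_c$ the Lorentzian slightly underestimates the exact tail, which is immaterial for the noise-decoupling argument.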
\subsection{Convergence of Relative Phase Noise} It is interesting to consider the sideband frequency at which the relative phase noise of the measured signal $\langle \delta \phi_{\textrm{rel}}^{2} \rangle$ is within 3dB of the actual phase noise of the cavity field $\langle \delta \phi_{\textrm{0}}^{2} \rangle$. This frequency is readily found from the relation \begin{equation} \left[ 1-H(f_{3 \textrm{dB}}) \right]^{2} = \frac{1}{2}. \end{equation} With the form of $H(f)$ specified by Eq.~\ref{eq:transferH}, the 3dB frequency is revealed to be \begin{eqnarray} f_{3 \textrm{dB}} &=& f_{c} \cdot \sqrt{\frac{1}{\sqrt{2}-1}} \\ \nonumber f_{3 \textrm{dB}} &\simeq& 1.55 \cdot f_{c}. \end{eqnarray} Thus, for frequencies $f \gtrsim 1.55 f_{c}$, the relative phase noise $\langle \delta \phi_{\textrm{rel}}^{2} \rangle$ that is measured with homodyne detection closely represents the intrinsic phase noise of the laser system $\langle \delta \phi_{\textrm{0}}^{2} \rangle$. In the present circumstance, $f_{c} = 90~\textrm{kHz}$, so above $\sim 135~\textrm{kHz}$ the noise is predominantly that of the original field. \section{Measurement Calibration} \begin{figure} \caption{Noise variances for CEO phase fluctuations and temporal jitter that are extracted from the classical noise matrices shown in Fig.~2 of the main text. These traces do not include any correction for the filtering effect of the cavity. Additionally, they depict the relevant power spectral densities of the two considered modes relative to the shot noise limit.} \label{fig-projection-supp} \end{figure} For a sufficiently small resolution bandwidth (RBW) of the spectrum analyzer, the noise variance at a given frequency $\langle \mathcal{O}(f) ^{2} \rangle_{\alpha}$ is approximately represented as the product between the power spectral density of the noise fluctuations at that frequency $S_{\alpha}(f)$ and the RBW of the spectrum analyzer, i.e., $\langle \mathcal{O}(f) ^{2} \rangle_{\alpha} \simeq S_{\alpha}(f) \cdot \textrm{RBW}$. 
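As a worked example of this conversion (all numerical values here are assumed for illustration, not the experimental parameters), a variance recorded in a known RBW is turned into a PSD and referenced to the shot-noise level of a 1 mW beam at an assumed 1550 nm carrier, anticipating the SQL normalization described next.

```python
import math

h_planck = 6.62607015e-34      # Planck constant, J s
nu0 = 1.93e14                  # assumed optical carrier (~1550 nm), Hz
P = 1e-3                       # assumed average power, 1 mW
RBW = 1e3                      # assumed analyzer resolution bandwidth, Hz

variance = 2.56e-13            # assumed measured noise variance (arbitrary example)
S = variance / RBW             # <O(f)^2> ~ S(f) * RBW  =>  S(f) = variance / RBW

S_SQL = 2.0 * h_planck * nu0 / P   # single-sideband shot-noise PSD relative to carrier
print(10.0 * math.log10(S_SQL))        # ~ -155.9 dBc/Hz for these numbers
print(10.0 * math.log10(S / S_SQL))    # measured PSD relative to shot noise, dB
```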
The traces shown in Fig.~\ref{fig-projection-supp} depict the CEO phase noise and repetition rate jitter relative to the shot noise limit, i.e., $\langle \mathcal{O}(f) ^{2} \rangle_{\alpha} / \langle \mathcal{O} ^{2} \rangle_{\textrm{shot}} = S_{\alpha}(f) / S_{\textrm{SQL}}$ where $\alpha \in \{ \textrm{CEO},\textrm{rep} \} $. The standard quantum limit (SQL) is utilized to reference the noise magnitude contained in both modes $w_{\alpha}$ to the power of the optical carrier. The SQL is the minimum level of amplitude or phase noise permitted by the quantum nature of light, and it manifests itself as a white noise floor from which the measured phase fluctuations emerge. A beam of average power $P$ possesses a single sideband power spectral density of $S_{\textrm{SQL}} = 2 \, h \nu_{0} / P$ where $h$ is Planck's constant and $\nu_{0}$ is the optical carrier frequency \cite{bachor2004guide}. The normalized noise levels $\langle \mathcal{O}(f) ^{2} \rangle_{\alpha} / \langle \mathcal{O} ^{2} \rangle_{\textrm{shot}}$ are multiplied by $S_{\textrm{SQL}}$ in order to yield the relative intensity noise in the detected beam (which corresponds to the units of dBc/Hz). Upon correcting the measured spectral densities by the cavity transfer function of Eq.~\ref{eq:transferH} and the shot noise limit $S_{\textrm{SQL}}$, the inferred relative noise is represented in a logarithmic scale as \begin{equation} S_{\alpha,\textrm{dB}} = S_{\alpha,\textrm{dB}}^{\textrm{meas}} - F(f)_{\textrm{dB}} + S_{\textrm{SQL,dB}} \end{equation} where $S_{\alpha,\textrm{dB}}^{\textrm{meas}}$ are the measured traces of Fig.~\ref{fig-projection-supp}, the filter function $F(f)$ is given as $F(f) = \left[ 1 - H(f) \right]^{2}$, and the subscripts dB indicate that the relevant spectral densities are specified in decibels. The corrected power spectral densities $S_{\alpha,\textrm{dB}}$ for both the CEO and envelope jitter modes are depicted in Fig.~4 of the main text. \end{document}
Solve for $p$: $\frac{5}{6} = \frac{n}{72} = \frac{m+n}{84} = \frac{p-m}{120}$.

To get from 6 to 72 we multiply by 12, so a fraction equivalent to $\frac{5}{6}$ whose denominator is 72 has a numerator of $n = 5 \cdot 12 = 60$. We can solve $\frac{5}{6}=\frac{60+m}{84}$ similarly to obtain $m=10$. Finally, $\frac{5}{6}=\frac{p-10}{120}\implies p-10=100 \implies p=\boxed{110}$.
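The chain of equalities can be verified with exact rational arithmetic:

```python
from fractions import Fraction

target = Fraction(5, 6)
n = 5 * 12                       # 6 * 12 = 72, so n = 60
m = int(target * 84) - n         # m + n = 5/6 * 84 = 70, so m = 10
p = int(target * 120) + m        # p - m = 5/6 * 120 = 100, so p = 110
assert Fraction(n, 72) == target
assert Fraction(m + n, 84) == target
assert Fraction(p - m, 120) == target
print(p)                         # 110
```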
Eighth power

In arithmetic and algebra the eighth power of a number n is the result of multiplying eight instances of n together. So:

n^8 = n × n × n × n × n × n × n × n.

Eighth powers are also formed by multiplying a number by its seventh power, or the fourth power of a number by itself. The sequence of eighth powers of integers is:

0, 1, 256, 6561, 65536, 390625, 1679616, 5764801, 16777216, 43046721, 100000000, 214358881, 429981696, 815730721, 1475789056, 2562890625, 4294967296, 6975757441, 11019960576, 16983563041, 25600000000, 37822859361, 54875873536, 78310985281, 110075314176, 152587890625 ... (sequence A001016 in the OEIS)

In the archaic notation of Robert Recorde, the eighth power of a number was called the "zenzizenzizenzic".[1]

Algebra and number theory

Polynomial equations of degree 8 are octic equations. These have the form

$ax^{8}+bx^{7}+cx^{6}+dx^{5}+ex^{4}+fx^{3}+gx^{2}+hx+k=0.$

The smallest known eighth power that can be written as a sum of eight eighth powers is[2]

$1409^{8}=1324^{8}+1190^{8}+1088^{8}+748^{8}+524^{8}+478^{8}+223^{8}+90^{8}.$

The sum of the reciprocals of the nonzero eighth powers is the Riemann zeta function evaluated at 8, which can be expressed in terms of the eighth power of pi:

$\zeta (8)={\frac {1}{1^{8}}}+{\frac {1}{2^{8}}}+{\frac {1}{3^{8}}}+\cdots ={\frac {\pi ^{8}}{9450}}=1.00407\dots$ (OEIS: A013666)

This is an example of a more general expression for evaluating the Riemann zeta function at positive even integers, in terms of the Bernoulli numbers:

$\zeta (2n)=(-1)^{n+1}{\frac {B_{2n}(2\pi )^{2n}}{2(2n)!}}.$

Physics

In aeroacoustics, Lighthill's eighth power law states that the power of the sound created by a turbulent motion, far from the turbulence, is proportional to the eighth power of the characteristic turbulent velocity.[3][4]

The ordered phase of the two-dimensional Ising model exhibits an inverse eighth power dependence of the order parameter upon the reduced temperature.[5]

The Casimir–Polder force between two molecules decays as the inverse eighth power of the distance between them.[6][7]

See also

• Seventh power
• Sixth power
• Fifth power (algebra)
• Fourth power
• Cube (algebra)
• Square number

References

1. Womack, D. (2015), "Beyond tetration operations: their past, present and future", Mathematics in School, 44 (1): 23–26
2. Quoted in Meyrignac, Jean-Charles (2001-02-14). "Computing Minimal Equal Sums Of Like Powers: Best Known Solutions". Retrieved 2019-12-18.
3. Lighthill, M. J. (1952). "On sound generated aerodynamically. I. General theory". Proc. R. Soc. Lond. A. 211 (1107): 564–587. Bibcode:1952RSPSA.211..564L. doi:10.1098/rspa.1952.0060. S2CID 124316233.
4. Lighthill, M. J. (1954). "On sound generated aerodynamically. II. Turbulence as a source of sound". Proc. R. Soc. Lond. A. 222 (1148): 1–32. Bibcode:1954RSPSA.222....1L. doi:10.1098/rspa.1954.0049. S2CID 123268161.
5. Kardar, Mehran (2007). Statistical Physics of Fields. Cambridge University Press. p. 148. ISBN 978-0-521-87341-3. OCLC 1026157552.
6. Casimir, H. B. G.; Polder, D. (1948). "The influence of retardation on the London-van der Waals forces". Physical Review. 73 (4): 360. Bibcode:1948PhRv...73..360C. doi:10.1103/PhysRev.73.360.
7. Derjaguin, Boris V. (1960). "The force between molecules". Scientific American. 203 (1): 47–53. Bibcode:1960SciAm.203a..47D. doi:10.1038/scientificamerican0760-47. JSTOR 2490543.
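The identities quoted above are easy to verify directly with integer and floating-point arithmetic (truncating the zeta series at a finite cutoff):

```python
import math

# First few eighth powers
assert [n ** 8 for n in range(6)] == [0, 1, 256, 6561, 65536, 390625]

# Smallest known eighth power that is a sum of eight eighth powers
terms = [1324, 1190, 1088, 748, 524, 478, 223, 90]
assert 1409 ** 8 == sum(t ** 8 for t in terms)

# zeta(8) = pi^8 / 9450 ~ 1.00407...; the series converges very fast
partial = sum(1.0 / k ** 8 for k in range(1, 1000))
print(partial, math.pi ** 8 / 9450)
```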
Hall–Petresco identity

In mathematics, the Hall–Petresco identity (sometimes misspelled Hall–Petrescu identity) is an identity holding in any group. It was introduced by Hall (1934) and Petresco (1954). It can be proved using the commutator collecting process, and implies that p-groups of small class are regular.

Statement

The Hall–Petresco identity states that if x and y are elements of a group G and m is a positive integer, then

$x^{m}y^{m}=(xy)^{m}c_{2}^{\binom {m}{2}}c_{3}^{\binom {m}{3}}\cdots c_{m-1}^{\binom {m}{m-1}}c_{m}$

where each c_i is in the subgroup K_i of the descending central series of G.

See also

• Baker–Campbell–Hausdorff formula
• Algebra of symbols

References

• Hall, Marshall (1959), The theory of groups, Macmillan, MR 0103215
• Hall, Philip (1934), "A contribution to the theory of groups of prime-power order", Proceedings of the London Mathematical Society, 36: 29–95, doi:10.1112/plms/s2-36.1.29
• Huppert, B. (1967), Endliche Gruppen (in German), Berlin, New York: Springer-Verlag, pp. 90–93, ISBN 978-3-540-03825-2, MR 0224703, OCLC 527050
• Petresco, Julian (1954), "Sur les commutateurs", Mathematische Zeitschrift, 61 (1): 348–356, doi:10.1007/BF01181351, MR 0066380
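In a group of nilpotency class 2 the identity truncates after c_2, with c_2 the commutator of x and y. This can be checked concretely in the integer Heisenberg group of upper unitriangular 3×3 matrices; the sketch below illustrates the class-2 case only, not the general identity.

```python
from math import comb

def mul(A, B):
    """Multiply two 3x3 integer matrices given as tuples of rows."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

def power(A, m):
    R = ((1, 0, 0), (0, 1, 0), (0, 0, 1))    # identity
    for _ in range(m):
        R = mul(R, A)
    return R

def inv(A):
    """Inverse of a unitriangular matrix [[1,a,c],[0,1,b],[0,0,1]]."""
    a, b, c = A[0][1], A[1][2], A[0][2]
    return ((1, -a, a * b - c), (0, 1, -b), (0, 0, 1))

x = ((1, 1, 0), (0, 1, 0), (0, 0, 1))        # E + e12
y = ((1, 0, 0), (0, 1, 1), (0, 0, 1))        # E + e23
c2 = mul(mul(inv(x), inv(y)), mul(x, y))     # commutator [x, y], central here

for m in range(1, 8):
    lhs = mul(power(x, m), power(y, m))
    rhs = mul(power(mul(x, y), m), power(c2, comb(m, 2)))
    assert lhs == rhs
print("verified for m = 1..7")
```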
\begin{document} \date{July 4, 2019} \title{Complexity and performance of an Augmented Lagrangian algorithm\footnote{This work was supported by FAPESP (grants 2013/07375-0, 2016/01860-1, and 2018/24293-0) and CNPq (grants 309517/2014-1 and 303750/2014-6).}} \begin{abstract} Algencan is a well-established safeguarded Augmented Lagrangian algorithm introduced in [R. Andreani, E. G. Birgin, J. M. Mart\'{\i}nez and M. L. Schuverdt, On Augmented Lagrangian methods with general lower-level constraints, \textit{SIAM Journal on Optimization} 18, pp. 1286-1309, 2008]. Complexity results that report its worst-case behavior in terms of iterations and evaluations of functions and derivatives that are necessary to obtain suitable stopping criteria are presented in this work. In addition, the computational performance of a new version of the method is presented, which shows that the updated software is a useful tool for solving large-scale constrained optimization problems.\\ \noindent \textbf{Keywords:} Nonlinear programming, Augmented Lagrangian methods, complexity, numerical experiments. \end{abstract} \section{Introduction} Augmented Lagrangian methods have a long tradition in numerical optimization. The main ideas were introduced by Powell~\cite{powell}, Hestenes~\cite{hestenes}, and Rockafellar~\cite{rocka}. At each (outer) iteration of an Augmented Lagrangian method one minimizes the objective function plus a term that penalizes the non-fulfillment of the constraints with respect to suitable shifted tolerances. Whereas the classical external penalty method~\cite{fiaccomccormick,fletcher} needs to employ penalty parameters that tend to infinity, the shifting technique aims to produce convergence by means of displacements of the constraints that generate approximations to a solution with moderate penalty parameters~\cite{bmbook}. As a by-product, one obtains approximations of the Lagrange multipliers associated with the original optimization problem. 
The safeguarded version of the method~\cite{abmstango} discards Lagrange multiplier approximations when they become very large. The convergence theory for safeguarded Augmented Lagrangian methods was given in~\cite{abmstango,bmbook}. Recently, examples that illustrate the convenience of safeguarded Augmented Lagrangians were given in~\cite{kanzowsteck}. Conn, Gould, and Toint~\cite{cgtlancelot} produced the celebrated package Lancelot, that solves constrained optimization problems using Augmented Lagrangians in which the constraints are defined by equalities and bounds. The technique was extended to the case of equality constraints plus linear constraints in~\cite{cgst}. Differently from Lancelot, in Algencan~\cite{abmstango,bmbook} (see, also, \cite{abmsobso,abmssecond,bfem,bfm,bmalgotan,bmfast,bmdecrease}), the Augmented Lagrangian is defined not only with respect to equality constraints but also with respect to inequalities. The theory presented in~\cite{abmstango} and~\cite{bmbook} admits the presence of lower-level constraints not restricted to boxes or polytopes. However, in the practical implementations of Algencan, lower-level constraints are always boxes. In the last 10 years, the interest in Augmented Lagrangian methods was renewed due to their ability to solve large-scale problems. Dost\'al and Beremlijski~\cite{dostal,dostal2017} employed Augmented Lagrangian methods for solving quadratic programming problems that appear in structural optimization. Fletcher~\cite{fletcher2017} applied Augmented Lagrangian ideas to the minimization of quadratics with box constraints. Armand and Omheni~\cite{armand1} employed an Augmented Lagrangian technique for solving equality constrained optimization problems and handled inequality constraints by means of logarithmic barriers~\cite{armand2}. 
Curtis, Gould, Jiang, and Robinson~\cite{curtis1,curtis2} defined an Augmented Lagrangian algorithm in which decreasing the penalty parameters is possible following intrinsic algorithmic criteria. Local convergence results without constraint qualifications were proved in~\cite{fernandezsolodov}. The case with (possibly complementarity) degenerate constraints was analyzed in \cite{ims}. Chatzipanagiotis and Zavlanos~\cite{chatzi} defined and analyzed Augmented Lagrangian methods in the context of distributed computation. An Exact Penalty algorithm for constrained optimization with complexity results was introduced in~\cite{cagt2011}. Grapiglia and Yuan \cite{gy2019} analyzed the complexity of an Augmented Lagrangian algorithm for inequality constraints based on the approach of Sun and Yuan \cite{sunyuan} and assuming that a feasible initial point is available. In this paper, we report the main features of a new implementation of Algencan. The new Algencan preserves the main characteristics of the previous algorithm: constraints are considered in the form of equalities and inequalities, without slack variables; box-constrained subproblems are solved using active-set strategies; and global convergence properties are fully preserved. A new acceleration procedure is introduced by means of which an approximate KKT point may be obtained. It consists in applying a local Newton method to a semismooth KKT system~\cite{mqi,qisun} starting from every Augmented Lagrangian iterate. Special attention is given to the box-constraint algorithm used for solving subproblems. The algorithm presented in this paper is able to handle large-scale problems but not ``huge'' ones. This means that we deal with number of variables and Hessian structures that make it affordable to use sparse factorizations. Larger problems need the help of iterative linear solvers which are not available in the new Algencan yet. 
Exhaustive numerical experimentation is given and all the software employed is available on a free basis in \url{http://www.ime.usp.br/~egbirgin/}, so that computational results are fully reproducible. The paper is organized as follows. In Section~\ref{al}, we recall the definition of Algencan with box lower-level constraints and we review global convergence results. In Section~\ref{complexity}, we prove complexity properties. In Section~\ref{newtonls}, we describe the algorithm for solving box-constrained subproblems. In Section~\ref{secimpl}, we describe the computer implementation. In Section~\ref{secnumexp}, we report numerical experiments. Conclusions are given in Section~\ref{secconcl}.\\ \noindent \textbf{Notation.} If $C \subseteq \mathbb{R}^n$ is a convex set, $P_C(v)$ denotes the Euclidean projection of~$v$ onto~$C$. If $\ell, u \in \mathbb{R}^n$, $[\ell,u]$ denotes the box defined by $\{ x \in \mathbb{R}^n \;|\; \ell \leq x \leq u \}$. $(\cdot)_+ = \max\{0, \cdot \}$. If $v \in \mathbb{R}^n$, $v_+$ denotes the vector with components $(v_i)_+$ for $i=1,\dots,n$. If $v, w \in \mathbb{R}^n$, $\min\{v, w\}$ denotes the vector with components $\min\{v_i, w_i\}$ for $i=1,\dots,n$. The symbol $\| \cdot \|$ denotes the Euclidean norm. \section{Augmented Lagrangian} \label{al} In this section, we consider constrained optimization problems defined by \begin{equation} \label{nlp} \mbox{Minimize } f(x) \mbox{ subject to } h(x) = 0, \; g(x) \leq 0, \mbox{ and } \ell \leq x \leq u, \end{equation} where $f: \mathbb{R}^n \to \mathbb{R}$, $h: \mathbb{R}^n \to \mathbb{R}^m$, and $g: \mathbb{R}^n \to \mathbb{R}^p$ are continuously differentiable. We consider the Augmented Lagrangian method in the way analyzed in~\cite{abmstango} and \cite{bmbook}. This method has interesting global theoretical properties. On the one hand, every limit point is a stationary point of the problem of minimizing infeasibility. 
On the other hand, every feasible limit point satisfies a sequential optimality condition \cite{ahm,amrs1,amrs2}. This implies that every feasible limit point is KKT-stationary under very mild constraint qualifications \cite{amrs1,amrs2}. The basic definition of the method and the main theoretical results are reviewed in this section. The Augmented Lagrangian function~\cite{hestenes,powell,rocka} associated with problem~(\ref{nlp}) is defined by \[ L_{\rho}(x,\lambda,\mu) = f(x) + \frac{\rho}{2} \left[ \sum_{i=1}^m \left( h_i(x) + \frac{\lambda_i}{\rho} \right)^2 + \sum_{i=1}^p \left( g_i(x) + \frac{\mu_i}{\rho} \right)_+^2 \right] \] for all $x \in [\ell,u]$, $\rho > 0$, $\lambda \in \mathbb{R}^m$, and $\mu \in \mathbb{R}^p_+$. The Augmented Lagrangian model algorithm follows.\\ \noindent \textbf{Algorithm~\ref{al}.1:} Assume that $x^0 \in \mathbb{R}^n$, $\lambda_{\min} < \lambda_{\max}$, ${\bar \lambda}^1 \in [\lambda_{\min}, \lambda_{\max}]^m$, $\mu_{\max} > 0$, ${\bar \mu}^1 \in [0, \mu_{\max}]^p$, $\rho_1 > 0$, $\gamma > 1$, $0 < \tau < 1$, and $\{\varepsilon_k\}_{k=1}^\infty$ are given. Initialize $k \leftarrow 1$. \begin{description} \item[Step 1.] Find $x^k \in [\ell,u]$ as an approximate solution to \begin{equation} \label{subprob} \mbox{Minimize } L_{\rho_k}(x,{\bar \lambda}^k, {\bar \mu}^k) \mbox{ subject to } \ell \leq x \leq u \end{equation} satisfying \begin{equation} \label{subprostop} \left\| P_{[\ell,u]}\left(x^k - \nabla L_{\rho_k}(x^k,{\bar \lambda}^k,{\bar \mu}^k) \right) - x^k \right\| \leq \varepsilon_k. \end{equation} \item[Step 2.] Define \[ V^k = \min \left\{ -g(x^k), \frac{{\bar \mu}^k}{\rho_k} \right\}. \] If $k = 1$ or \begin{equation} \label{testfeas} \max \left\{ \|h(x^k)\|, \|V^k\| \right\} \leq \tau \max \left\{ \|h(x^{k-1})\|, \|V^{k-1}\| \right\}, \end{equation} choose $\rho_{k+1} = \rho_k$. Otherwise, define $\rho_{k+1} = \gamma \rho_k$. \item[Step 3.] 
Compute \begin{equation} \label{lambdamas} \lambda^{k+1} = {\bar \lambda}^k + \rho_k h(x^k) \mbox{ and } \mu^{k+1} = \left( {\bar \mu}^k + \rho_k g(x^k) \right)_+. \end{equation} Compute ${\bar \lambda}^{k+1} \in [\lambda_{\min}, \lambda_{\max}]^m$ and ${\bar \mu}^{k+1} \in [0, \mu_{\max}]^p$. Set $k \leftarrow k+1$ and go to Step~1. \end{description} The problem of finding an approximate minimizer of $L_{\rho_k}(x,{\bar \lambda}^k, {\bar \mu}^k)$ onto $[\ell,u]$ in the sense of~(\ref{subprostop}) can always be solved. In fact, due to the compactness of $[\ell,u]$, a global minimizer, that obviously satisfies~(\ref{subprostop}), always exists. Moreover, local minimization algorithms are able to find an approximate stationary point satisfying~(\ref{subprostop}) in a finite number of iterations. Therefore, given an iterate~$x^k$, the iterate~$x^{k+1}$ is well defined. So, Algorithm~\ref{al}.1 generates an infinite sequence $\{x^k\}$ whose properties are surveyed below. Of course, as it will be seen later, suitable stopping criteria can be defined by means of which acceptable approximate solutions to~(\ref{nlp}) are usually obtained. Algorithm~\ref{al}.1 has been presented without a ``stopping criterion''. This means that, in principle, the algorithm generates an infinite sequence of primal iterates $x^k$ and Lagrange-multiplier estimators. Complexity results presented in this work report the worst-case effort that could be necessary to obtain different properties, that may be used as stopping criteria in practical implementations or not. We believe that the interpretation of these results helps to decide which stopping criteria should be used in a practical application. 
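To make the structure of Algorithm~\ref{al}.1 concrete, the following minimal sketch applies it to a toy equality-constrained problem. It is an illustration only: the subproblems are solved by projected gradient rather than by the active-set methods used in Algencan, inequality constraints are omitted, and all parameter values ($\rho_1$, $\gamma$, $\tau$, $\varepsilon_k = 10^{-k}$, the safeguarding interval) are chosen for the example.

```python
import numpy as np

# Toy instance (illustrative, not from the paper):
# minimize (x1-2)^2 + (x2-1)^2  s.t.  h(x) = x1 + x2 - 1 = 0,  x in [0,2]^2.
# The KKT solution is x* = (1, 0) with multiplier lambda* = 2.
f_grad = lambda x: 2.0 * (x - np.array([2.0, 1.0]))
h = lambda x: np.array([x[0] + x[1] - 1.0])
h_jac = np.array([[1.0, 1.0]])
lb, ub = np.zeros(2), 2.0 * np.ones(2)
proj = lambda z: np.clip(z, lb, ub)          # P_[l,u], projection onto the box

def solve_subproblem(x, lam, rho, eps, iters=200000):
    """Projected gradient on L_rho(., lam) until criterion (subprostop) holds."""
    step = 1.0 / (2.0 + 2.0 * rho)           # 1 / (gradient Lipschitz constant)
    for _ in range(iters):
        g = f_grad(x) + h_jac.T @ (lam + rho * h(x))
        if np.linalg.norm(proj(x - g) - x) <= eps:
            break
        x = proj(x - step * g)
    return x

x = np.array([0.5, 0.5])
lam_bar = np.zeros(1)
rho, gamma, tau = 10.0, 10.0, 0.5
infeas_prev = np.inf
for k in range(1, 11):
    x = solve_subproblem(x, lam_bar, rho, eps=10.0 ** -k)   # Step 1
    lam_new = lam_bar + rho * h(x)                          # Step 3, Eq. (lambdamas)
    infeas = np.linalg.norm(h(x))
    if infeas > tau * infeas_prev:                          # Step 2: slow progress
        rho *= gamma
    infeas_prev = infeas
    lam_bar = np.clip(lam_new, -1e6, 1e6)                   # safeguarded multipliers

print(x, lam_bar)
```

For this instance the iterates approach $x^* = (1,0)$ and the safeguarded multiplier approaches $2$, the Lagrange multiplier of the equality constraint.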
The relevant theoretical properties of this algorithm are the following: \begin{enumerate} \item Every limit point $x^*$ of the sequence generated by the algorithm satisfies the complementarity condition \begin{equation} \label{complementarity} \mu^{k+1}_i = 0 \mbox{ whenever } g_i(x^*) < 0 \end{equation} for $k$ large enough. (See~\cite[Thm.4.1]{bmbook}.) \item Every limit point $x^*$ of the sequence generated by the algorithm satisfies the first-order optimality conditions of the feasibility problem \begin{equation} \label{fisipro} \mbox{Minimize } \|h(x)\|^2 + \|g(x)_+\|^2 \mbox{ subject to } \ell \leq x \leq u. \end{equation} (See~\cite[Thm.6.5]{bmbook}.) \item If, for all $k \in \{1, 2, \dots\}$, $x^k$ is an approximate global minimizer of $L_{\rho_k}(x, \bar \lambda^k, \bar \mu^k)$ onto $[\ell,u]$ with tolerance $\eta > 0$, every limit point of $\{x^k\}$ is a global minimizer of the infeasibility function $\|h(x)\|^2 + \|g(x)_+\|^2$. Condition~(\ref{subprostop}) does not need to hold in this case. (See~\cite[Thm.5.1]{bmbook}.) \item If, for all $k \in \{1, 2, \dots \}$, $x^k$ is an approximate global minimizer of $L_{\rho_k}(x,\bar \lambda^k,\bar \mu^k)$ onto $[\ell,u]$ with tolerance $\eta_k \downarrow 0$, every feasible limit point of $\{x^k\}$ is a global minimizer of the general constrained minimization problem~(\ref{nlp}). As before, condition~(\ref{subprostop}) is not necessary in this case. (See~\cite[Thm.5.2]{bmbook}.) 
\item If $\varepsilon_k \downarrow 0$, every feasible limit point of the sequence $\{x^k\}$ satisfies the sequential optimality condition~AKKT \cite{ahm} given by \begin{equation} \label{akktproj} \lim_{k \in K} \left\| P_{[\ell,u]} \left( x^k - \left( \nabla f(x^k) + \nabla h(x^k) \lambda^{k+1} + \nabla g(x^k) \mu^{k+1} \right) \right) -x^k \right\| = 0 \end{equation} and \begin{equation} \label{compleme} \lim_{k \in K} \max \{ \|h(x^k)\|_\infty, \| \min\{-g(x^k), \mu^{k+1}\}\|_\infty \} = 0, \end{equation} where the sequence of indices $K$ is such that $\lim_{k \in K} x^k = x^*$. (See~\cite[Thm.6.4]{bmbook}.) \end{enumerate} Under an additional Lojasiewicz-like condition, one also obtains that $\lim_{k \in K} \sum_{i=1}^p \mu^{k+1}_i g_i(x^k) = 0$ (see~\cite{amscakkt}). Moreover, in \cite{afss}, it was proved that an even stronger sequential optimality condition is satisfied by the sequence $\{x^k\}$, perhaps associated with different Lagrange multiplier approximations than the ones generated by the Augmented Lagrangian algorithm. These properties say that, even if $\varepsilon_k$ does not tend to zero, Algorithm~\ref{al}.1 finds stationary points of the infeasibility measure $\|h(x)\|^2 + \|g(x)_+\|^2$ and that, when $\varepsilon_k$ tends to zero, feasible limit points satisfy a sequential optimality condition. Thus, under very weak constraint qualifications, feasible limit points satisfy Karush-Kuhn-Tucker conditions. See \cite{amrs1,amrs2}. Some of these properties, but not all, are shared by other constrained optimization algorithms. For example, the property that feasible limit points satisfy KKT optimality conditions has been proved for other optimization algorithms only under much stronger constraint qualifications than the ones required by Algorithm~\ref{al}.1.
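In computations, the two limits in~(\ref{akktproj}) and~(\ref{compleme}) are monitored through finite residuals. The following sketch, with vectors stored as Python lists and helper names of our own choosing, evaluates these AKKT residuals:

```python
def project_box(x, l, u):
    """Euclidean projection of x onto the box [l, u]."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def akkt_residuals(x, grad_lag, h, g, mu, l, u):
    """Finite AKKT residuals:
    r_opt  = ||P_[l,u](x - grad_lag) - x||_inf, where grad_lag is the
             gradient of the Lagrangian at x with the current multipliers;
    r_feas = max{ ||h(x)||_inf, ||min{-g(x), mu}||_inf }."""
    p = project_box([xi - gi for xi, gi in zip(x, grad_lag)], l, u)
    r_opt = max(abs(pi - xi) for pi, xi in zip(p, x))
    r_feas = max([abs(v) for v in h] +
                 [abs(min(-gi, mi)) for gi, mi in zip(g, mu)])
    return r_opt, r_feas
```

Both residuals tending to zero along a subsequence is exactly the AKKT property stated above.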
Moreover, the Newton-Lagrange method may fail to satisfy approximate KKT conditions even when it converges to the solution of rather simple constrained optimization problems \cite{amsfail,amss}. Augmented Lagrangian implementations have a modular structure. At each iteration, a box-constrained optimization problem is approximately solved. The efficiency of the Augmented Lagrangian algorithm is strongly linked to the efficiency of the box-constraint solver. Algencan may be considered a conservative implementation of the Augmented Lagrangian framework. For example, subproblems are solved with relatively high precision, instead of stopping subproblem solvers prematurely according to information related to the constrained optimization landscape. It could be argued that solving subproblems with high precision at points that may be far from the solution represents a waste of time. Nevertheless, our point of view is that saving subproblem iterations when one is close to a subproblem solution is not worthwhile, because in that region Newton-like solvers tend to be very fast and accurate subproblem solutions help to produce better approximations of the Lagrange multipliers. Algencan is also conservative in that its subproblem solvers use minimal information about the structure of the Augmented Lagrangian function they minimize. The reason for this decision is connected to the modular structure of Algencan. Subproblem solvers are continuously being improved due to the continuous and fruitful activity in bound-constrained minimization. Therefore, we aim to take advantage of those improvements with minimal modifications of the subproblem algorithms when applied to minimize Augmented Lagrangians. \section{Complexity} \label{complexity} This section is devoted to worst-case complexity results related to Algorithm~\ref{al}.1. Algorithm~\ref{al}.1 was not devised with the aim of optimizing complexity.
Nevertheless, our point of view is that the complexity analysis that follows helps to understand the actual behavior of the algorithm, filling a gap opened by the convergence theory. By~(\ref{lambdamas}) and straightforward calculations, we have that, for all $k = 1, 2, 3, \dots$, \[ \nabla f(x^k) + \nabla h(x^k) \lambda^{k+1} + \nabla g(x^k) \mu^{k+1} = \nabla L_{\rho_k}(x^k, \bar \lambda^k, \bar \mu^k). \] Therefore, the fulfillment of \begin{equation} \label{paraparar1} \|P_{[\ell, u]} (x^k - \nabla L_{\rho_k}(x^k, \bar \lambda^k, \bar \mu^k)) - x^k \| \leq \varepsilon \end{equation} implies that the projected gradient of the Lagrangian with multipliers~$\lambda^{k+1}$ and~$\mu^{k+1}$ approximately vanishes with precision~$\varepsilon$. In the next lemma, we show that the fulfillment of \begin{equation} \label{paraparar2} \max \{ \|h(x^k)\|_\infty, \|V_k\|_\infty\} \leq \delta \end{equation} implies that feasibility and complementarity hold at~$x^k$ with precision~$\delta$. For these reasons, in the context of Algorithm~\ref{al}.1, iterates that satisfy~(\ref{paraparar1}) and~(\ref{paraparar2}) are considered approximate stationary points of problem~(\ref{nlp}). \begin{lem} \label{lemcomplexity1} For all $\delta > 0$, \begin{equation} \label{tesfeacom} \max \{\|h(x^k)\|_\infty, \|V_k\|_\infty\} \leq \delta \end{equation} implies that \begin{equation} \label{lastres} \|h(x^k)\|_\infty \leq \delta, \; \|g(x^k)_+\|_\infty \leq \delta, \mbox{ and}, \mbox{ for all } j=1,\dots,p, \mu^{k+1}_j = 0 \mbox{ if } g_j(x^k) < - \delta. \end{equation} \end{lem} \begin{pro} By~(\ref{tesfeacom}), $\|h(x^k)\|_\infty \leq \delta$ and $|\min\{-g_j(x^k), \bar{\mu}_j^k/\rho_k\}| \leq \delta$ for all $j=1,\dots,p$. Therefore, $-g_j(x^k) \geq -\delta$, so $g_j(x^k) \leq \delta$ for all $j=1,\dots,p$. Moreover, by (\ref{tesfeacom}), if $g_j(x^k) < -\delta$, we necessarily have that $\bar{\mu}_j^k/\rho_k \leq \delta$.
Adding these two inequalities, we obtain that, if $g_j(x^k) < -\delta$, then $g_j(x^k) + \bar{\mu}_j^k /\rho_k < 0$. Consequently, $\rho_k g_j(x^k) + \bar{\mu}_j^k < 0$, so $\mu^{k+1}_j = 0$. Therefore, (\ref{tesfeacom}) implies~(\ref{lastres}), as we wanted to prove. \end{pro} In Theorem~\ref{teocomplexity1} below, we assume that the sequence $\{\rho_k\}$ is bounded. Sufficient conditions for this requirement, where the bound $\bar{\rho}$ only depends on algorithmic parameters and characteristics of the problem, were given in \cite{abmstango} and \cite{bmbook}. We also assume that there exists $N(\varepsilon) \in \{1, 2, 3, \dots\}$ such that $\varepsilon_k \leq \varepsilon$ for all $k \geq N(\varepsilon)$. Clearly, this condition can be enforced by the criterion used to define~$\{\varepsilon_k\}$. For example, $\varepsilon_{k+1} = \half \varepsilon_k$ obviously implies that $\varepsilon_k \leq \varepsilon$ whenever $k \geq N(\varepsilon) \equiv 1 + \log_2(\varepsilon_1/\varepsilon)$. \begin{lem} \label{lemcomplexity2} There exists $c_{\mathrm{big}} > 0$ such that, for all $k \geq 1$, \begin{equation} \label{defcbig} \max \{\|h(x^k)\|_\infty, \|V_k\|_\infty\} \leq c_{\mathrm{big}}. \end{equation} \end{lem} \begin{pro} Since, by definition of the algorithm, $\rho_k \geq \rho_1$, the bound~(\ref{defcbig}) comes from the continuity of $h$ and $g$, the compactness of the domain $[\ell, u]$, and the boundedness of $\bar \mu^k$. \end{pro} From now on, $c_{\mathrm{big}}$ will denote a positive constant satisfying~(\ref{defcbig}), whose existence is guaranteed by Lemma~\ref{lemcomplexity2}. \begin{teo} \label{teocomplexity1} Let $\delta > 0$ and $\varepsilon > 0$ be given. Assume that, for all $k \in \{1, 2, 3, \dots\}$, $\rho_k \leq \bar{\rho}$. Moreover, assume that, for all $k \geq N(\varepsilon)$, we have that $\varepsilon_k \leq \varepsilon$.
Then, after at most \begin{equation} \label{iteraciones} N(\varepsilon) + \left[ \log(\bar{\rho}/\rho_1)/\log(\gamma) \right] \times \left[ \log(\delta/c_{\mathrm{big}})/\log (\tau) \right] \end{equation} iterations, we obtain $x^k \in [\ell, u]$, $\lambda^{k+1} \in \mathbb{R}^m$, and $\mu^{k+1} \in \mathbb{R}^p_+$ such that \begin{equation} \label{aprokkt1} \left\| P_{[\ell,u]} \left( x^k - \left( \nabla f(x^k) + \nabla h(x^k) \lambda^{k+1} + \nabla g(x^k) \mu^{k+1} \right) \right) -x^k \right\| \leq \varepsilon, \end{equation} \begin{equation} \label{aprokkt2} \| h(x^k)\|_\infty \leq \delta, \; \|g(x^k)_+\|_\infty \leq \delta, \end{equation} and, for all $j=1,\dots,p$, \begin{equation} \label{aprokkt3} \mu^{k+1}_j = 0 \mbox{ whenever } g_j(x^k) < - \delta. \end{equation} \end{teo} \begin{pro} The number of iterations such that $\rho_{k+1} = \gamma \rho_k$ is bounded above by \begin{equation} \label{bounrho} \log (\bar{\rho}/ \rho_1)/\log(\gamma). \end{equation} Therefore, this is also a bound for the number of iterations at which~(\ref{testfeas}) does not hold. By (\ref{defcbig}), if (\ref{testfeas}) holds during \begin{equation}\label{bounconse} \log (\delta/c_{\mathrm{big}})/\log \tau \end{equation} consecutive iterations, we get that \[ \max \{\|h(x^k)\|_\infty, \|V_k\|_\infty\} \leq \delta, \] which, by Lemma~\ref{lemcomplexity1}, implies (\ref{aprokkt2}) and~(\ref{aprokkt3}). Now, by hypothesis, after $N(\varepsilon)$ iterations, we have that $\varepsilon_k \leq \varepsilon$. Therefore, by~(\ref{bounrho}) and~(\ref{bounconse}), after at most \begin{equation} \label{iteracionesN} N(\varepsilon) + [ \log (\bar{\rho}/ \rho_1)/\log(\gamma)] \times [\log (\delta/c_{\mathrm{big}})/\log (\tau)] \end{equation} iterations, we have that~(\ref{aprokkt1}), (\ref{aprokkt2}), and~(\ref{aprokkt3}) hold. \end{pro} Theorem~\ref{teocomplexity1} shows that, as expected, if $\rho_k$ is bounded, we obtain approximate feasibility and optimality. 
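The bound~(\ref{iteraciones}) is easy to evaluate numerically. The sketch below (our illustration; we take ceilings since iteration counts are integers, and the parameter values in the test are arbitrary) computes it from its ingredients:

```python
import math

def outer_iteration_bound(n_eps, rho_bar, rho1, gamma, delta, c_big, tau):
    """Worst-case number of outer iterations:
    N(eps) + [log(rho_bar/rho1)/log(gamma)] * [log(delta/c_big)/log(tau)],
    with gamma > 1 (penalty increase factor) and tau in (0, 1).
    Both bracketed quotients are positive when delta < c_big."""
    increases = math.ceil(math.log(rho_bar / rho1) / math.log(gamma))
    consecutive = math.ceil(math.log(delta / c_big) / math.log(tau))
    return n_eps + increases * consecutive
```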
In the following theorem, we assume that the subproblems are solved by means of some method that, for obtaining precision $\varepsilon > 0$, employs at most $c \varepsilon^{-q}$ iterations and evaluations, where $c$ only depends on characteristics of the problem, the upper bound for $\rho_k$, and algorithmic parameters of the method. \begin{teo} \label{teocomplexity2} In addition to the hypotheses of Theorem~\ref{teocomplexity1}, assume that there exist $c_{\mathrm{inner}} > 0$ and $q > 0$, where $c_{\mathrm{inner}}$ only depends on~$\bar{\rho}$, $\lambda_{\min}$, $\lambda_{\max}$, $\mu_{\max}$, $\ell$, $u$, and characteristics of the functions~$f$, $h$, and~$g$, such that the number of inner iterations, function and derivative evaluations that are necessary to obtain (\ref{subprostop}) is bounded above by $c_{\mathrm{inner}} \varepsilon_k^{-q}$. Then, the number of inner iterations, function evaluations, and derivative evaluations that are necessary to obtain~$k$ such that~(\ref{aprokkt1}), (\ref{aprokkt2}), and~(\ref{aprokkt3}) hold is bounded above by \[ c_{\mathrm{inner}} \varepsilon_{\min}^{-q} \left\{ N(\varepsilon) + \left[ \log(\bar{\rho}/\rho_1)/\log(\gamma) \right] \times \left[\log (\delta/c_{\mathrm{big}})/\log (\tau) \right] \right\}, \] where \begin{equation} \label{epsilonmin} \varepsilon_{\min} = \min\{\varepsilon_k \;|\; k \leq N(\varepsilon) + \left[ \log(\bar{\rho}/\rho_1)/\log(\gamma) \right] \times \left[\log (\delta/c_{\mathrm{big}})/\log (\tau) \right] \}. \end{equation} \end{teo} \begin{pro} The desired result follows from Theorem~\ref{teocomplexity1} and the assumptions of this theorem. \end{pro} Note that, in Theorem~\ref{teocomplexity2}, we admit the possibility that $\varepsilon_k$ decreases after completing $N(\varepsilon)$ iterations. This is the reason for the definition of $\varepsilon_{\min}$ in~(\ref{epsilonmin}).
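Under the halving rule $\varepsilon_{k+1} = \varepsilon_k/2$ mentioned earlier, the quantity $\varepsilon_{\min}$ in~(\ref{epsilonmin}) and the resulting total-work bound can be sketched as follows (illustrative only; $c_{\mathrm{inner}}$ and $q$ are abstract inputs and the helper names are ours):

```python
def eps_min_halving(eps1, K):
    """eps_min over the first K outer iterations for the halving schedule
    eps_k = eps1 / 2**(k-1); for a decreasing schedule this is eps_K."""
    return min(eps1 / 2 ** (k - 1) for k in range(1, K + 1))

def total_inner_work_bound(c_inner, q, eps1, K):
    """Bound of the form c_inner * eps_min^{-q} * K on the total number
    of inner iterations and evaluations over K outer iterations."""
    return c_inner * eps_min_halving(eps1, K) ** (-q) * K
```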
In practical implementations, it is reasonable to stop decreasing $\varepsilon_k$ when it achieves a user-given stopping tolerance $\varepsilon$. According to Theorem~\ref{teocomplexity2}, the complexity bounds related to approximate optimality, feasibility, and complementarity depend on the optimality tolerance $\varepsilon$ in, essentially, the same way that the complexity of the subproblem solver depends on its stopping tolerance. In other words, under the assumption of boundedness of the penalty parameters, the worst-case complexity of the Augmented Lagrangian method is essentially the same as the complexity of the subproblem solver. In computer implementations, it is usual to employ, in addition to a (successful) stopping criterion based on (\ref{aprokkt1}), (\ref{aprokkt2}), and (\ref{aprokkt3}), an (unsuccessful) stopping criterion based on the size of the penalty parameter. The rationale is that, if the penalty parameter has grown very large, further improvements with respect to feasibility are unlikely and we are probably close to an infeasible local minimizer of the infeasibility. The complexity results that correspond to this decision are given below. \begin{teo} \label{teocomplexitygrande} Let $\delta > 0$, $\varepsilon > 0$, and $\rho_{\mathrm{big}} > \rho_1 $ be given. Assume that, for all $k \geq N(\varepsilon)$, we have that $\varepsilon_k \leq \varepsilon$. Then, after at most \begin{equation} \label{iteracoes} N(\varepsilon) + \left[ \log(\rho_{\mathrm{big}}/\rho_1)/\log(\gamma) \right] \times \left[ \log(\delta/c_{\mathrm{big}})/\log (\tau) \right] \end{equation} iterations, we obtain $x^k \in [\ell, u]$, $\lambda^{k+1} \in \mathbb{R}^m$, and $\mu^{k+1} \in \mathbb{R}^p_+$ such that (\ref{aprokkt1}), (\ref{aprokkt2}), and (\ref{aprokkt3}) hold or we obtain an iteration such that $\rho_k \geq \rho_{\mathrm{big}}$.
\end{teo} \begin{pro} If $\rho_k \leq \rho_{\mathrm{big}}$ for all $k \leq N(\varepsilon) + \left[ \log(\rho_{\mathrm{big}}/\rho_1)/\log(\gamma) \right] \times \left[ \log(\delta/c_{\mathrm{big}})/\log (\tau) \right]$, by the same argument used in the proof of Theorem~\ref{teocomplexity1}, with $\rho_{\mathrm{big}}$ replacing $\bar{\rho}$, we obtain that (\ref{aprokkt1}), (\ref{aprokkt2}), and (\ref{aprokkt3}) hold. \end{pro} \begin{teo} \label{teocomplexitygrande2} In addition to the hypotheses of Theorem~\ref{teocomplexitygrande}, assume that there exist $c_{\mathrm{inner}} > 0$ and $q > 0$, where $c_{\mathrm{inner}}$ only depends on~$\rho_{\mathrm{big}}$, $\lambda_{\min}$, $\lambda_{\max}$, $\mu_{\max}$, $\ell$, $u$, and characteristics of the functions~$f$, $h$, and~$g$, such that the number of inner iterations, function and derivative evaluations that are necessary to obtain (\ref{subprostop}) is bounded above by $c_{\mathrm{inner}} \varepsilon_k^{-q}$. Then, the number of inner iterations, function evaluations, and derivative evaluations that are necessary to obtain~$k$ such that~(\ref{aprokkt1}), (\ref{aprokkt2}), and~(\ref{aprokkt3}) hold or such that $\rho_k > \rho_{\mathrm{big}}$ is bounded above by \[ c_{\mathrm{inner}} \varepsilon_{\min,2}^{-q} \left\{ N(\varepsilon) + \left[ \log(\rho_{\mathrm{big}}/ \rho_1)/\log(\gamma) \right] \times \left[\log (\delta/c_{\mathrm{big}})/\log (\tau) \right] \right\}, \] where \begin{equation} \label{2epsilonmin} \varepsilon_{\min,2} = \min\{\varepsilon_k \;|\; k \leq N(\varepsilon) + \left[ \log(\rho_{\mathrm{big}}/\rho_1)/\log(\gamma) \right] \times \left[\log (\delta/c_{\mathrm{big}})/\log (\tau) \right] \}. \end{equation} \end{teo} \begin{pro} The desired result follows directly from Theorem~\ref{teocomplexitygrande}.
\end{pro} The complexity results proved up to now indicate that suitable stopping criteria for Algorithm~\ref{al}.1 could be based on the fulfillment of (\ref{aprokkt1}), (\ref{aprokkt2}), and (\ref{aprokkt3}) or, alternatively, on the occurrence of an undesirably big penalty parameter. The advantage of these criteria is that, according to them, the worst-case complexity is of the same order as the complexity of the subproblem solvers. Convergence results establish that solutions obtained with very large penalty parameters are close to stationary points of the infeasibility. However, stationary points of the infeasibility may be feasible points and, again, convergence theory shows that, when Algorithm~\ref{al}.1 converges to a feasible point, this point satisfies AKKT optimality conditions, independently of constraint qualifications. As a consequence, the danger exists of interrupting executions prematurely, in situations in which meaningful progress could be obtained by admitting further increases of the penalty parameter. This state of affairs leads one to analyze the complexity of Algorithm~\ref{al}.1 independently of penalty parameter growth, introducing a possibly more reliable criterion for detecting infeasible stationary points of the infeasibility. Roughly speaking, we will say that an iterate seems to be an infeasible stationary point of the infeasibility when the projected gradient of the infeasibility measure is significantly smaller than the infeasibility value. The natural question that arises is whether the employment of this (more reliable) stopping criterion has an important effect on the complexity bounds. Assumptions on the boundedness of $\rho_k$ are given up from now on. Note that the possibility that $\rho_k \to \infty$ needs to be considered, since it necessarily takes place, for example, when the feasible region is empty.
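The criterion just described compares two computable quantities: the sup-norm infeasibility and the norm of the projected gradient of the infeasibility measure. A minimal sketch of the test (our illustration, not Algencan code; the thresholds mirror the roles of $\delta$ and $\delta_{\mathrm{low}}$ used in the sequel) is:

```python
def sup_infeasibility(h, g):
    """Sup-norm infeasibility max{ ||h(x)||_inf, ||g(x)_+||_inf }."""
    return max([abs(v) for v in h] + [max(0.0, v) for v in g])

def looks_infeasible_stationary(pg_infeas_norm, h, g, delta, delta_low):
    """Flag an iterate as a likely infeasible stationary point of the
    infeasibility measure: the projected gradient of the infeasibility
    is below delta_low while the infeasibility itself exceeds delta,
    with delta much larger than delta_low."""
    return pg_infeas_norm <= delta_low and sup_infeasibility(h, g) > delta
```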
\begin{lem} \label{lemcomplexity3} There exist $c_{\mathrm{lips}}$, $c_f > 0$ such that, for all $x \in [\ell, u]$, $\lambda \in [\lambda_{\min}, \lambda_{\max}]^m$, and $\mu \in [0, \mu_{\max}]^p$, one has \begin{equation} \label{clips} \|\nabla h(x)\| \|\lambda\| + \|\nabla g(x)\| \|\mu\| \leq c_{\mathrm{lips}} \end{equation} and \begin{equation} \label{cf} \|\nabla f(x)\| \leq c_f. \end{equation} \end{lem} \begin{pro} The desired result follows from the boundedness of the domain, the continuity of the functions, and the boundedness of~$\lambda$ and~$\mu$. \end{pro} The following lemma establishes a bound for the projected gradient of the infeasibility measure in terms of the value of the displaced infeasibility and the value of the penalty parameter. \begin{lem} \label{lemcomplexity4} For all $x \in [\ell, u]$, $\lambda \in [\lambda_{\min}, \lambda_{\max}]^m$, $\mu \in [0, \mu_{\max}]^p$, and $\rho > 0$, one has that \[ \left\|P_{[\ell, u]}\left(x - \nabla\left[ \|h(x)\|^2 + \|g(x)_+\|^2 \right] \right) - x \right\| \] \[ \leq \left\| P_{[\ell,u]}\left(x - \nabla\left[\|h(x) + \lambda/\rho\|^2 + \| (g(x)+\mu/\rho)_+ \|^2 \right] \right) - x \right\| + 2 c_{\mathrm{lips}} / \rho, \] where $c_{\mathrm{lips}}$ is defined in Lemma~\ref{lemcomplexity3}. \end{lem} \begin{pro} Note that \[ \half \nabla \left[ \left\|h(x) + \lambda/\rho \right\|^2 + \left\| \left( g(x)+\mu/\rho \right)_+ \right\|^2 \right] = h'(x)^T \left( h(x) + \lambda/\rho \right) + g'(x)^T \left( g(x) + \mu/\rho \right)_+ \] and \[ \half \nabla \left[ \|h(x)\|^2 + \|g(x)_+\|^2 \right] = \nabla h(x) h(x) + \nabla g(x) g(x)_+.
\] Therefore, \[ \left\|\half \nabla \left[ \left\| h(x) + \lambda/\rho \right\|^2 + \left\| \left( g(x) + \mu/\rho \right)_+ \right\|^2 \right] - \half \nabla \left[ \|h(x)\|^2 + \|g(x)_+\|^2 \right] \right\| \] \[ \leq \left\| \nabla h(x) \lambda/\rho + \nabla g(x) \left[ (g(x) + \mu/\rho)_+ - g(x)_+ \right] \right\| \leq \frac{1}{\rho} \left[ \| \nabla h(x) \| \|\lambda\| + \| \nabla g(x)\| \|\mu\| \right]. \] Then, by~(\ref{clips}), if $\rho > 0$, $x \in [\ell, u]$, $\lambda \in [\lambda_{\min}, \lambda_{\max}]^m$, and $\mu \in [0, \mu_{\max}]^p$, \[ \left\| \nabla \left[ \|h(x)\|^2 + \|g(x)_+\|^2 \right] - \nabla \left[ \|h(x) + \lambda/\rho\|^2 + \| \left( g(x)+\mu/\rho \right)_+\|^2 \right] \right\| \leq 2 c_{\mathrm{lips}} / \rho. \] So, by the non-expansivity of projections, \[ \resizebox{\textwidth}{!}{$ \left\| P_{[\ell, u]}\left(x - \nabla \left[ \|h(x)\|^2 + \|g(x)_+\|^2 \right]\right) - P_{[\ell,u]}\left(x - \nabla \left[ \|h(x) + \lambda/\rho\|^2 + \| (g(x)+\mu/\rho)_+ \|^2 \right] \right) \right\| \leq 2 c_{\mathrm{lips}}/\rho. $} \] Thus, the desired result is proved. \end{pro} The following theorem establishes that, before the number of iterations given by~(\ref{iterteo5}), we necessarily find an approximate KKT point or we find an infeasible point that, very likely, is close to an infeasible stationary point of the infeasibility measure. The latter type of infeasible point is characterized by the fact that the projected gradient of the infeasibility is smaller than~$\delta_{\mathrm{low}}$ whereas the infeasibility value is bigger than $\delta \gg \delta_{\mathrm{low}}$. \begin{teo} \label{teocomplexity3} Let $\delta > 0$, $\delta_{\mathrm{low}} \in (0,\delta)$, and $\varepsilon > 0$ be given. Assume that $N(\delta_{\mathrm{low}}, \varepsilon)$ is such that $\varepsilon_k \leq \min\{\varepsilon, \delta_{\mathrm{low}} \}/4$ for all $k \geq N(\delta_{\mathrm{low}}, \varepsilon)$.
Then, after at most \begin{equation} \label{iterteo5} N(\delta_{\mathrm{low}}, \varepsilon) + \left[ \frac{\log (\delta/c_{\mathrm{big}})}{\log(\tau)}\right] \times \left[ \frac{\log \left( \rho_{\max} / \rho_1 \right)}{\log(\gamma)} \right] \end{equation} iterations, where \begin{equation} \label{rhomax} \rho_{\max} = \max\left\{1, \frac{4 c_{\mathrm{lips}}}{\delta_{\mathrm{low}}}, \frac{\mu_{\max}}{\delta}, \frac{4 c_f}{\delta_{\mathrm{low}} } \right\}, \end{equation} we obtain an iteration $k$ such that one of the following two facts takes place: \begin{enumerate} \item The iterate $x^k \in [\ell, u]$ verifies \begin{equation} \label{paradamala} \resizebox{0.9\textwidth}{!}{$ \left\| P_{[\ell,u]}\left( x^k - \nabla \left[ \|h(x^k)\|^2 + \|g(x^k)_+\|^2 \right] \right) - x^k \right\| \leq \delta_{\mathrm{low}} \mbox{ and } \max\{\|h(x^k)\|_\infty, \|g(x^k)_+\|_\infty\}> \delta. $} \end{equation} \item The multipliers $\lambda^{k+1} \in \mathbb{R}^m$ and $\mu^{k+1} \in \mathbb{R}^p_+$ are such that \begin{equation} \label{primaldual1} \left\| P_{[\ell,u]}\left( x^k - \left( \nabla f(x^k) + \nabla h(x^k) \lambda^{k+1} + \nabla g(x^k) \mu^{k+1} \right) \right) -x^k \right\| \leq \varepsilon, \end{equation} \begin{equation} \label{primaldual2} \|h(x^k)\|_\infty \leq \delta, \; \|g(x^k)_+\|_\infty \leq \delta, \end{equation} and, for all $j=1,\dots,p$, \begin{equation} \label{primaldual3} \mu^{k+1}_j = 0 \mbox{ whenever } g_j(x^k) < - \delta. \end{equation} \end{enumerate} \end{teo} \begin{pro} Let $k_{\mathrm{end}}$ be such that \begin{equation} \label{implica} \left\| P_{[\ell,u]}\left( x^k - \nabla \left[ \|h(x^k)\|^2 + \|g(x^k)_+\|^2 \right] \right) - x^k \right\| \leq \delta_{\mathrm{low}} \Rightarrow \max \{ \|h(x^k)\|_\infty, \|g(x^k)_+\|_\infty \} \leq \delta \end{equation} for all $k \leq k_{\mathrm{end}}$ whereas~(\ref{implica}) does not hold if $k = k_{\mathrm{end}} + 1$. 
(With some abuse of notation, we say that $k_{\mathrm{end}} = \infty$ when~(\ref{implica}) holds for all $k$.) In other words, if $k \leq k_{\mathrm{end}}$, \begin{equation} \label{disyun} \left\| P_{[\ell,u]}\left( x^k - \nabla \left[ \|h(x^k)\|^2 + \|g(x^k)_+\|^2 \right] \right) - x^k \right\| > \delta_{\mathrm{low}} \mbox{ or } \max \{ \|h(x^k)\|_\infty, \|g(x^k)_+\|_\infty \} \leq \delta, \end{equation} whereas~(\ref{disyun}) does not hold if $k = k_{\mathrm{end}} + 1$. We consider two possibilities: \begin{equation} \label{kendmenor} k_{\mathrm{end}} < N(\delta_{\mathrm{low}}, \varepsilon) + \left[ \frac{\log (\delta/c_{\mathrm{big}})}{\log(\tau)}\right] \times \left[ \frac{\log \left( \rho_{\max} / \rho_1 \right)}{\log(\gamma)} \right] \end{equation} and \begin{equation} \label{kendmayor} k_{\mathrm{end}} \geq N(\delta_{\mathrm{low}}, \varepsilon) + \left[ \frac{\log (\delta/c_{\mathrm{big}})}{\log(\tau)}\right] \times \left[ \frac{\log \left( \rho_{\max} / \rho_1 \right)}{\log(\gamma)} \right]. \end{equation} In the first case, since~(\ref{implica}) does not hold for $k = k_{\mathrm{end}}+1$, it turns out that~(\ref{paradamala}) occurs at iteration $k_{\mathrm{end}} + 1$. It remains to analyze the case in which~(\ref{kendmayor}) takes place. Suppose that \begin{equation} \label{kbajo} k \leq N(\delta_{\mathrm{low}}, \varepsilon) + \left[ \frac{\log (\delta/c_{\mathrm{big}})}{\log(\tau)}\right] \times \left[ \frac{\log \left( \rho_{\max} / \rho_1 \right)}{\log(\gamma)} \right], \end{equation} \begin{align} \varepsilon_k &\leq \delta_{\mathrm{low}} / 4, \label{coneps} \\[2mm] \rho_k &\geq 1, \label{rhouno} \\[2mm] \rho_k &\geq 4 c_f / \delta_{\mathrm{low}}, \label{rhocf} \\[2mm] \rho_k &\geq 4 c_{\mathrm{lips}} / \delta_{\mathrm{low}}, \label{rholis} \\[2mm] \rho_k &\geq \mu_{\max} / \delta, \label{rhomuma} \\[2mm] k &\geq N(\delta_{\mathrm{low}}, \varepsilon).
\label{kene} \end{align} By (\ref{subprostop}), for all $k \geq 1$, we have that \[ \left\| P_{[\ell,u]}\left( x^k - \nabla f(x^k) - \frac{\rho_k}{2} \nabla \left\{ \sum_{i=1}^m \left[ h_i(x^k) + \frac{\bar{\lambda}_i^k}{\rho_k}\right]^2 + \sum_{i=1}^p \left[ \left( g_i(x^k) + \frac{\bar{\mu}_i^k}{\rho_k} \right)_+ \right]^2 \right\} \right) - x^k \right\| \leq \varepsilon_k. \] Therefore, by (\ref{rhouno}), \[ \left\| P_{[\ell,u]}\left( x^k - \frac{1}{\rho_k} \nabla f(x^k) - \half \nabla \left( \| h(x^k) + \bar \lambda^k/\rho_k \|^2 + \| (g(x^k) + \bar \mu^k/\rho_k)_+ \|^2 \right) \right) - x^k \right\| \leq \varepsilon_k. \] Therefore, by the non-expansivity of projections and (\ref{cf}), we have that \begin{equation} \label{lafalta} \left\| P_{[\ell,u]}\left( x^k - \half \nabla \left( \|h(x^k) + \bar \lambda^k/\rho_k\|^2 + \|(g(x^k) + \bar \mu^k/\rho_k)_+\|^2 \right) \right) - x^k \right\| \leq \varepsilon_k + \frac{c_f}{\rho_k}. \end{equation} So, by (\ref{coneps}) and (\ref{rhocf}), \begin{equation} \label{quiero1} \left\| P_{[\ell,u]}\left(x^k - \nabla\left( \|h(x^k) + \bar \lambda^k/\rho_k\|^2 + \|(g(x^k)+\bar \mu^k/\rho_k)_+\|^2\right) \right) - x^k \right\| \leq \delta_{\mathrm{low}}/2. \end{equation} Therefore, by Lemma~\ref{lemcomplexity4} and~(\ref{rholis}), \begin{equation} \label{quiero2} \left\| P_{[\ell, u]}\left( x^k - \nabla\left( \|h(x^k)\|^2 + \|g(x^k)_+\|^2\right) \right) - x^k \right\| \leq \delta_{\mathrm{low}}. \end{equation} By (\ref{kendmayor}) and (\ref{kbajo}), we have that $k \leq k_{\mathrm{end}}$, so, by (\ref{quiero2}) and (\ref{disyun}), \begin{equation} \label{feafea} \|h(x^k)\|_\infty \leq \delta \mbox{ and } \|g(x^k)_+\|_\infty \leq \delta. \end{equation} By (\ref{feafea}), $ g_j(x^k) \leq \delta$ for all $j=1,\dots,p$. Now, if $g_j(x^k) < -\delta$, we have that $\bar{\mu}^k_j + \rho_k g_j(x^k) < \bar{\mu}^k_j - \delta \rho_k$, which is smaller than zero because of~(\ref{rhomuma}), so $\mu_j^{k+1}=0$.
Therefore, the approximate feasibility and complementarity conditions \begin{equation} \label{feco} \|h(x^k)\|_\infty \leq \delta, \; \|g(x^k)_+\|_\infty \leq \delta, \mbox{ and } \mu^{k+1}_j = 0 \mbox{ if } g_j(x^k) < - \delta \end{equation} hold at $x^k$. Moreover, by (\ref{kene}), we have that~(\ref{primaldual1}) also holds. Therefore, we proved that (\ref{kendmayor}), (\ref{kbajo}), (\ref{coneps}), (\ref{rhouno}), (\ref{rhocf}), (\ref{rholis}), (\ref{rhomuma}), and (\ref{kene}) imply (\ref{primaldual1}), (\ref{primaldual2}), and (\ref{primaldual3}). So, we only need to show that there exists~$k$ that satisfies (\ref{kbajo})--(\ref{kene}) or satisfies (\ref{kbajo}), (\ref{primaldual1}), (\ref{primaldual2}), and (\ref{primaldual3}). In other words, we must prove that, before completing \[ N(\delta_{\mathrm{low}}, \varepsilon) + \left[ \frac{\log (\delta/c_{\mathrm{big}})}{\log(\tau)}\right] \times \left[ \frac{\log \left( \rho_{\max} / \rho_1 \right)}{\log(\gamma)} \right] \] iterations, we get (\ref{primaldual1}), (\ref{primaldual2}), and (\ref{primaldual3}) or we get (\ref{kbajo})--(\ref{kene}). To prove this statement, suppose that, for all $k$ satisfying (\ref{kbajo}), at least one among the conditions (\ref{primaldual1}), (\ref{primaldual2}), and (\ref{primaldual3}) does not hold. Since~(\ref{primaldual1}) necessarily holds if $k \geq N(\delta_{\mathrm{low}},\varepsilon)$, this implies that, for all~$k$ satisfying~(\ref{kbajo}) and~(\ref{kene}), at least one among the conditions (\ref{primaldual2}) and (\ref{primaldual3}) does not hold. By Lemma~\ref{lemcomplexity1}, this implies that, for all~$k$ satisfying~(\ref{kbajo}) and~(\ref{kene}), \[ \max \{\|h(x^k)\|_\infty, \|V_k\|_\infty\} > \delta.
\] Then, by (\ref{defcbig}), for $k \geq N(\delta_{\mathrm{low}}, \varepsilon)$, the existence of more than $\log(\delta/c_{\mathrm{big}}) / \log(\tau)$ consecutive iterations $k, k+1, k+2, \dots$ satisfying~(\ref{testfeas}) and~(\ref{kbajo}) is impossible. Therefore, after the first $N(\delta_{\mathrm{low}}, \varepsilon)$ iterations, if~$\rho_k$ is increased at iterations $k_1 < k_2$, but not at any iteration $k \in (k_1, k_2)$, we have that $k_2 - k_1 \leq \log(\delta/c_{\mathrm{big}}) / \log(\tau)$. This means that, after the first $N(\delta_{\mathrm{low}}, \varepsilon)$ iterations, the number of iterations at which $\rho_k$ is not increased is bounded above by $\log(\delta/c_{\mathrm{big}}) / \log(\tau)$ times the number of iterations at which $\rho_k$ is increased. Now, for obtaining (\ref{rhouno})--(\ref{rhomuma}), $\log(\rho_{\max}/\rho_1) / \log(\gamma)$ iterations in which~$\rho_k$ is increased are obviously sufficient. This completes the proof. \end{pro} \begin{teo} \label{teocomplexity4} In addition to the hypotheses of Theorem~\ref{teocomplexity3}, assume that there exist $c_{\mathrm{inner}} > 0$, $v > 0$, and $q > 0$, where $c_{\mathrm{inner}}$ only depends on $\lambda_{\min}$, $\lambda_{\max}$, $\mu_{\max}$, $\ell$, $u$, and characteristics of the functions $f$, $h$, and $g$, such that the number of inner iterations, function and derivative evaluations that are necessary to obtain~(\ref{subprostop}) is bounded above by $c_{\mathrm{inner}} \rho_k^v \varepsilon_k^{-q}$.
Then, the number of inner iterations, function evaluations, and derivative evaluations that are necessary to obtain~$k$ such that~(\ref{paradamala}) holds or~(\ref{primaldual1}), (\ref{primaldual2}), and~(\ref{primaldual3}) hold is bounded above by \[ c_{\mathrm{inner}} \rho_{\max}^v \varepsilon_{\min,3}^{-q} \left\{ N(\delta_{\mathrm{low}}, \varepsilon) + \left[ \frac{\log (\delta/c_{\mathrm{big}})}{\log(\tau)}\right] \times \left[ \frac{\log \left( \rho_{\max} / \rho_1 \right)}{\log(\gamma)} \right] \right\}, \] where $\rho_{\max}$ is given by~(\ref{rhomax}) and \begin{equation} \label{epsmin3} \varepsilon_{\min,3} = \min\{\varepsilon_k \;|\; k \leq N(\delta_{\mathrm{low}}, \varepsilon) + \left[ \frac{\log (\delta/c_{\mathrm{big}})}{\log(\tau)}\right] \times \left[ \frac{\log \left( \rho_{\max} / \rho_1 \right)}{\log(\gamma)} \right] \}. \end{equation} \end{teo} \begin{pro} The desired result follows from Theorem~\ref{teocomplexity3} and the assumptions of this theorem. \end{pro} The comparison between Theorems~\ref{teocomplexitygrande2} and~\ref{teocomplexity4} is interesting. This comparison seems to indicate that, if we want to be confident that the diagnostic ``$x^k$ is an infeasible stationary point of the infeasibility'' is correct, we must be prepared to pay for that certainty. In fact, the bound $\rho_{\max}$ on the penalty parameter for the algorithm is defined by (\ref{rhomax}), which not only grows with $1/\delta_{\mathrm{low}}$, but also depends on the global bounds $c_{\mathrm{lips}}$ and $c_f$ of the problem. Moreover, $\varepsilon_k$ also needs to decrease below $\delta_{\mathrm{low}}/4$ because the decrease of the projected gradient of the infeasibility is only guaranteed by a stronger decrease of the projected gradient of the Augmented Lagrangian.
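The growth of the certified penalty bound with $1/\delta_{\mathrm{low}}$ can be made concrete. The sketch below evaluates $\rho_{\max}$ of~(\ref{rhomax}); the constants $c_{\mathrm{lips}}$ and $c_f$ are abstract problem-dependent inputs, and the numbers in the test are arbitrary:

```python
def rho_max_bound(c_lips, c_f, mu_max, delta, delta_low):
    """rho_max = max{1, 4 c_lips/delta_low, mu_max/delta, 4 c_f/delta_low}
    as in (rhomax); note the 1/delta_low growth of the first and last
    nontrivial terms."""
    return max(1.0, 4.0 * c_lips / delta_low, mu_max / delta,
               4.0 * c_f / delta_low)
```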
\section{Solving the Augmented Lagrangian subproblems} \label{newtonls} The problem considered in this section is \begin{equation} \label{theproblem} \mbox{Minimize } \Phi(x) \mbox{ subject to } x \in \Omega, \end{equation} where $\Omega = \{ x \in \mathbb{R}^n \;|\; \ell \leq x \leq u \}$. We assume that $\Phi$ has continuous first derivatives and that second derivatives exist almost everywhere. When the Hessian at a point $x$ does not exist, we call $\nabla^2 \Phi(x)$ the limit of $\nabla^2 \Phi(x^j)$ for a sequence $x^j$ that converges to $x$. Problem~(\ref{theproblem}) is of the same type as the problem that is approximately solved at Step~1 of Algorithm~\ref{al}.1, and we have in mind the case $\Phi(x) \equiv L_{\rho_k}(x,{\bar \lambda}^k, {\bar \mu}^k)$. For all $I \subseteq \{1, \dots, 2n\}$, we define the \textit{open face} \[ F_I = \{ x \in \Omega \; | \; x_i = \ell_i \mbox{ if } i \in I, \; x_i = u_i \mbox{ if } n + i \in I, \; \ell_i < x_i < u_i \mbox{ otherwise} \}. \] By definition, $\Omega$ is the union of its open faces and the open faces are disjoint. This means that every $x \in \Omega$ belongs to exactly one face $F_I$. The variables $x_i$ such that $\ell_i < x_i < u_i$ are called \textit{free variables}. For every $x \in \Omega$, we also define the continuous projected gradient of $\Phi$, given by \begin{equation} \label{cpg} g_P(x) = P_{\Omega}(x - \nabla \Phi(x)) - x, \end{equation} and, if $F_I$ is the open face to which~$x$ belongs, the continuous projected internal gradient $g_I(x)$, given by \[ [g_I(x)]_i = \left\{ \begin{array}{ll} [g_P(x)]_i, & \mbox{if } x_i \mbox{ is a free variable}, \\ 0, & \mbox{otherwise}. \end{array} \right. \] Note that, sometimes, we write $g_I(x)$ omitting the fact that the subindex~$I$ refers to the face~$F_I$ to which the argument~$x \in \Omega$ belongs. The bound-constrained minimization method described in the current section can be seen as a second-order counterpart of the method introduced in~\cite{bmgencan}.
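The definitions of $g_P$ in~(\ref{cpg}) and of $g_I$ admit a direct componentwise sketch (vectors as Python lists; this is an illustration of the definitions, not the solver's actual code):

```python
def g_P(x, grad, l, u):
    """Continuous projected gradient: P_[l,u](x - grad Phi(x)) - x."""
    return [min(max(xi - gi, li), ui) - xi
            for xi, gi, li, ui in zip(x, grad, l, u)]

def g_I(x, grad, l, u):
    """Continuous projected internal gradient: the components of g_P(x)
    at the free variables (l_i < x_i < u_i), zero at variables at a bound."""
    gp = g_P(x, grad, l, u)
    return [gpi if li < xi < ui else 0.0
            for gpi, xi, li, ui in zip(gp, x, l, u)]
```

The quotient $\|g_I(x)\|/\|g_P(x)\|$ computed from these two vectors is the quantity that drives the stay-or-leave decision of the method described next.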
(See, also, \cite{betra}.) The iterates visit the different faces of the box $\Omega$, staying in the current face while the quotient $\|g_I(x)\|/\|g_P(x)\|$ is large enough or the new iterate does not hit the boundary. When this quotient reveals that little progress can be expected from staying in the current face, the face is abandoned by means of a spectral projected gradient~\cite{bmr,bmr2,bmr3} iteration. Within each face, iterations obey a safeguarded Newton scheme with line searches. The employment of this method is consistent with the conservative point of view of Algencan. For example, we do not aim to predict the active constraints at the solution, and the inactive bounds have no influence on the iterations, independently of the distance of the current iterate to a bound. Moreover, we do not try to use second-order information for leaving the faces. Of course, we do not deny the efficiency of methods that employ such procedures, but we feel comfortable with the conservative strategy because the number of algorithmic parameters can be reduced to a minimum.\\ \noindent \textbf{Algorithm~\ref{newtonls}.1:} Assume that $x^0 \in \Omega$, $\Phi_{\mathrm{target}} \in \mathbb{R}$, $r \in (0,1]$, $0 < \tau_1 \leq \tau_2 < 1$, $\gamma \in (0,1)$, $\beta \in (0,1)$, $0 < \eta$, $0 < \lambda_{\min}^{\mathrm{SPG}} < \lambda_{\max}^{\mathrm{SPG}}$, $0 < \sigma_{\mathrm{small}}$, $0 < \sigma_{\min} \leq \sigma_{\max}$, $0 < \underline{h} < \bar h$, $t^{\mathrm{ext}}_{\max} \in \mathbb{N}_{\geq 0}$ are given. Initialize $k \leftarrow 0$. \begin{description} \item[Step 1.] If $\Phi(x^k) \leq \Phi_{\mathrm{target}}$ then stop. Otherwise, if $\| g_I(x^k) \|_{\infty} \geq r \| g_P(x^k) \|_{\infty}$ then go to Step~2 to perform an \textit{inner-to-the-face iteration using Newton with line search}; else go to Step~5 to perform a \textit{leaving-face iteration using spectral projected gradients (SPG)}. \item[Step 2.]
Let $\bar n$ be the number of free variables and let $\bar H_k \in \mathbb{R}^{\bar n \times \bar n}$ be the Hessian $\nabla^2 \Phi(x^k)$ in which rows and columns associated with \textit{non} free variables were removed. \item[Step 2.1.] If $\bar H_k$ is positive definite then set $\sigma \leftarrow 0$ and compute $\bar d^k \in \mathbb{R}^{\bar n}$ as the solution of $\bar H_k d = - \bar g^k$, where $\bar g^k \in \mathbb{R}^{\bar n}$ corresponds to $\nabla \Phi(x^k)$ with the components associated with the \textit{non} free variables removed, and go to Step~2.3. \item[Step 2.2.] \textit{Inertia correction} \item[Step 2.2.1.] If $\sigma^{\mathrm{ini}}$ is undefined then set $\sigma^{\mathrm{ini}} \leftarrow P_{[\sigma_{\min},\sigma_{\max}]}(\sigma_{\mathrm{small}} \; h)$, where $h = P_{[\underline{h},\bar h]}(\max_{\{i=1,\dots,\bar n\}} \left\{ \left| [\bar H_k]_{ii} \right| \right\} )$. \item[Step 2.2.2.] Set $\sigma \leftarrow \sigma^{\mathrm{ini}}$ and while $\bar H_k + \sigma I$ is not positive definite do $\sigma \leftarrow 10 \sigma$. \item[Step 2.2.3.] Compute $\bar d^k \in \mathbb{R}^{\bar n}$ as a solution of $( \bar H_k + \sigma I ) d = - \bar g^k$, where $\bar g^k \in \mathbb{R}^{\bar n}$ corresponds to $\nabla \Phi(x^k)$ with the components associated with the \textit{non} free variables removed. \item[Step 2.2.4.] While $\| \bar d^k \|_2 > \eta \max\{1, \| \bar x^k \|_2 \}$ do $\sigma \leftarrow 10 \sigma$ and redefine $\bar d^k$ as the solution of $( \bar H_k + \sigma I ) d = - \bar g^k$. ($\bar x^k \in \mathbb{R}^{\bar n}$ corresponds to the free components of $x^k$.) \item[Step 2.3.] Let $d^k \in \mathbb{R}^n$ be the ``expansion'' of $\bar d^k$ with $[d^k]_i=0$ if $i$ is a \textit{non}-free-variable index. \item[Step 2.4.] If $\sigma > 0$ then set $\sigma^{\mathrm{ini}} \leftarrow P_{[\sigma_{\min},\sigma_{\max}]}(\frac{1}{2} \sigma)$. 
Otherwise, set $\sigma^{\mathrm{ini}} \leftarrow P_{[\sigma_{\min},\sigma_{\max}]}(\frac{1}{2} \sigma^{\mathrm{ini}})$. \item[Step 3.] \textit{Line search plus possible projection} \item[Step 3.1.] Compute $\alpha_{\max}$ as the largest $\alpha > 0$ such that $x^k + \alpha d^k \in \Omega$. If $\alpha_{\max} \geq 1$ (i.e.\ $x^k + d^k \in \Omega$) then skip Step~3.2 below (i.e.\ go to Step~3.3). \item[Step 3.2.] Set $x_{\mathrm{trial}} \leftarrow P_{\Omega}(x^k + d^k)$. If $\Phi(x_{\mathrm{trial}}) \leq \Phi_{\mathrm{target}}$ or $\Phi(x_{\mathrm{trial}}) \leq \Phi(x^k)$ then set $x^{k+1} = x_{\mathrm{trial}}$ and go to Step~4. \item[Step 3.3.] Set $t \leftarrow \min\{1, \alpha_{\max}\}$ and $x_{\mathrm{trial}} \leftarrow x^k + t d^k$. While $\Phi(x_{\mathrm{trial}}) > \Phi_{\mathrm{target}}$ and $\Phi(x_{\mathrm{trial}}) > \Phi(x^k) + t \gamma \langle \nabla \Phi(x^k), d^k \rangle$, choose $t_{\mathrm{new}} \in [\tau_1 t, \tau_2 t]$ and set $t \leftarrow t_{\mathrm{new}}$ and $x_{\mathrm{trial}} \leftarrow x^k + t d^k$. \item[Step 3.4.] Set $x^{k+1} = x_{\mathrm{trial}}$. If $t = \alpha_{\max}$ or ( $t=1$ and $\langle \nabla \Phi(x_{\mathrm{trial}}), d^k \rangle > \beta \langle \nabla \Phi(x^k), d^k \rangle$ ) then go to Step~4. Otherwise, go to Step~6. \item[Step 4.] \textit{Extrapolation} \item[Step 4.1.] Set $t \leftarrow 1$, $x_{\mathrm{ref}} \leftarrow x_{\mathrm{trial}}$, and $x_{\mathrm{ext}} \leftarrow P_{\Omega}(x^k + 2^t ( x_{\mathrm{trial}} - x^k ))$. \item[Step 4.2.] While $t \leq t^{\mathrm{ext}}_{\max}$ and $\Phi(x_{\mathrm{ext}}) < \Phi(x_{\mathrm{ref}})$ and $\Phi(x_{\mathrm{ref}}) > \Phi_{\mathrm{target}}$ do $t \leftarrow t + 1$, $x_{\mathrm{ref}} \leftarrow x_{\mathrm{ext}}$, and $x_{\mathrm{ext}} \leftarrow P_{\Omega}(x^k + 2^t ( x_{\mathrm{trial}} - x^k ))$. \item[Step 4.3.] Reset $x^{k+1} = x_{\mathrm{ref}}$ and go to Step~6. \item[Step 5.] \textit{Leaving-face SPG iteration} \item[Step 5.1.]
If $k=0$ or $\langle x^k - x^{k-1}, \nabla \Phi(x^k) - \nabla \Phi(x^{k-1}) \rangle \leq 0$ then set \[ \lambda_k^{\mathrm{SPG}} = \max \left\{ 1, \|x^k\|_2 / \|g_P(x^k)\|_2 \right\}. \] Otherwise, set $\lambda_k^{\mathrm{SPG}} = \| x^k - x^{k-1} \|_2^2 / \langle x^k - x^{k-1}, \nabla \Phi(x^k) - \nabla \Phi(x^{k-1}) \rangle$. In any case, redefine $\lambda_k^{\mathrm{SPG}}$ as $\max\{ \lambda_{\min}^{\mathrm{SPG}}, \min\{ \lambda_k^{\mathrm{SPG}}, \lambda_{\max}^{\mathrm{SPG}} \} \}$. \item[Step 5.2.] Set $t \leftarrow 1$, $x_{\mathrm{trial}} \leftarrow P_{\Omega}( x^k - \lambda_k^{\mathrm{SPG}} \nabla \Phi(x^k) )$, and $d^k = x_{\mathrm{trial}} - x^k$. \item[Step 5.3.] While $\Phi(x_{\mathrm{trial}}) > \Phi_{\mathrm{target}}$ and $\Phi(x_{\mathrm{trial}}) > \Phi(x^k) + t \gamma \langle \nabla \Phi(x^k), d^k \rangle$, choose $t_{\mathrm{new}} \in [\tau_1 t, \tau_2 t]$ and set $t \leftarrow t_{\mathrm{new}}$ and $x_{\mathrm{trial}} \leftarrow x^k + t d^k$. \item[Step 5.4.] Set $x^{k+1} = x_{\mathrm{trial}}$. \item[Step 6.] In the current iteration, the definition of $x^{k+1}$ implied the evaluation of $\Phi$ at several points named $x_{\mathrm{trial}}$. If, for any of them, we have that $\Phi(x_{\mathrm{trial}}) < \Phi(x^{k+1})$ then reset $x^{k+1} = x_{\mathrm{trial}}$. In any case, set $k \leftarrow k + 1$ and go to Step~1. \end{description} \noindent \textbf{Remark 1.} At Steps~3.3 and~5.3, interpolation is done with safeguarded quadratic interpolation. This means that, given \[ t_{\mathrm{temp}} = - \frac{ \langle \nabla \Phi(x^k), d^k \rangle t^2} { 2 ( \Phi(x_{\mathrm{trial}}) - \Phi(x^k) - t \langle \nabla \Phi(x^k), d^k \rangle ) }, \] if $t_{\mathrm{temp}} \in [\tau_1 t, \tau_2 t]$ then $t_{\mathrm{new}} = t_{\mathrm{temp}}$. Otherwise, $t_{\mathrm{new}} = \frac{1}{2} t$. This choice requires $0 < \tau_1 \leq \frac{1}{2} \leq \tau_2 < 1$ instead of simply $0 < \tau_1 \leq \tau_2 < 1$.
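The safeguarded quadratic interpolation of Remark~1 can be sketched as follows (an illustrative Python transcription; the function name is ours):

```python
def safeguarded_quadratic_step(t, phi_xk, phi_trial, gtd, tau1=0.1, tau2=0.9):
    # One safeguarded quadratic interpolation step (Remark 1).
    # gtd = <grad Phi(x^k), d^k>; the safeguard requires tau1 <= 1/2 <= tau2,
    # so that the fallback t/2 always lies inside [tau1*t, tau2*t].
    t_temp = -gtd * t**2 / (2.0 * (phi_trial - phi_xk - t * gtd))
    if tau1 * t <= t_temp <= tau2 * t:
        return t_temp
    return 0.5 * t
```

For the quadratic $\Phi(x)=x^2$ with $x^k=1$ and $d^k=-2$, a rejected unit step ($\Phi(x_{\mathrm{trial}})=1$, $\langle\nabla\Phi(x^k),d^k\rangle=-4$) is interpolated to $t_{\mathrm{new}}=0.5$, which is exactly the one-dimensional minimizer along $d^k$.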
\section{Implementation details and parameters} \label{secimpl} We implemented Algorithms~\ref{al}.1 and~\ref{newtonls}.1 in Fortran~90. The implementation is freely available at \url{http://www.ime.usp.br/~egbirgin/}. Interfaces for solving user-defined problems coded in Fortran~90, as well as problems from the CUTEst~\cite{cutest} collection, are available. All tests reported below were conducted on a computer with a 3.5 GHz Intel Core i7 processor and 16GB 1600 MHz DDR3 RAM, running OS X High Sierra (version 10.13.6). Codes were compiled with the GFortran compiler of GCC (version 8.2.0) with the -O3 optimization directive enabled. \subsection{Augmented Lagrangian method} \label{alparam} Algorithm~\ref{al}.1 was devised to be applied to a scaled version of problem~(\ref{nlp}). Following the Ipopt strategy described in~\cite[p.46]{ipopt}, in the scaled problem, the objective function~$f$ is multiplied by \[ s_f = \max \left\{ 10^{-8}, \frac{100}{\max\{1, \| \nabla f(x^0) \|_{\infty}\}} \right\}, \] each constraint~$h_j$ ($j=1,\dots,m$) is multiplied by \[ s_{h_j} = \max \left\{ 10^{-8}, \frac{100}{\max\{1, \| \nabla h_j(x^0) \|_{\infty}\}} \right\}, \] and each constraint~$g_j$ ($j=1,\dots,p$) is multiplied by \[ s_{g_j} = \max \left\{ 10^{-8}, \frac{100}{\max\{1, \| \nabla g_j(x^0) \|_{\infty}\}} \right\}, \] where $x^0 \in \mathbb{R}^n$ is the given initial guess. Scaling is optional and is applied when the input parameter ``scale'' is set to ``true''. If the parameter is set to ``false'', the original problem, which corresponds to considering all scaling factors equal to one, is solved.
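The common scaling rule above, applied verbatim to $f$ and to each $h_j$ and $g_j$ at the initial guess, can be transcribed as follows (Python sketch for exposition only):

```python
import numpy as np

def scale_factor(grad_at_x0):
    # s = max(1e-8, 100 / max(1, ||grad at x0||_inf)), the Ipopt-like
    # rule used for the objective and for each constraint.
    return max(1.0e-8, 100.0 / max(1.0, float(np.linalg.norm(grad_at_x0, np.inf))))
```

Functions with large gradients at $x^0$ are scaled down (e.g.\ an inf-norm of $10^3$ yields $s = 0.1$), while well-scaled functions receive the factor $100$ bounded below by $10^{-8}$.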
As stopping criterion, we say that an iterate $x^k \in [\ell,u]$, with its associated Lagrange multipliers $\lambda^{k+1}$ and $\mu^{k+1}$, satisfies the main stopping criterion when \begin{align} \max \left\{ \| h(x^k) \|_{\infty}, \| g(x^k)_+ \|_{\infty} \right\} &\leq \varepsilon_{\mathrm{feas}}, \label{kkt1}\\ \left\| P_{[\ell,u]}\left(x^k - \left[ s_f \nabla f(x^k) + \sum_{j=1}^m \lambda_j^{k+1} s_{h_j} \nabla h_j(x^k) + \sum_{j=1}^p \mu_j^{k+1} s_{g_j} \nabla g_j(x^k) \right] \right) - x^k \right\|_{\infty} &\leq \varepsilon_{\mathrm{opt}}, \label{kkt2}\\ \max_{j=1,\dots,p} \left\{ \min \{ - s_{g_j} g_j(x^k), \mu_j^{k+1} \} \right\} &\leq \varepsilon_{\mathrm{compl}}, \label{kkt3} \end{align} where $\varepsilon_{\mathrm{feas}} > 0$, $\varepsilon_{\mathrm{opt}} > 0$, and $\varepsilon_{\mathrm{compl}} > 0$ are given constants. This means that the stopping criterion requires \textit{unscaled} feasibility with tolerance $\varepsilon_{\mathrm{feas}}$ plus \textit{scaled} optimality with tolerance~$\varepsilon_{\mathrm{opt}}$ and \textit{scaled} complementarity (measured with the $\min$ function) with tolerance $\varepsilon_{\mathrm{compl}}$. Note that $x^k \in [\ell,u]$, i.e.\ it satisfies the bound constraints with zero tolerance. In addition to this stopping criterion, Algorithm~\ref{al}.1 also stops if the penalty parameter~$\rho_k$ reaches the value~$\rho_{\mathrm{big}}$ or if, in three consecutive iterations, the inner solver that is used at Step~1 fails to find a point $x^k \in [\ell,u]$ that satisfies~(\ref{subprostop}). In~(\ref{subprostop}) and~(\ref{testfeas}), we consider $\| \cdot \| = \| \cdot \|_{\infty}$.
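A sketch of the test (\ref{kkt1})--(\ref{kkt3}) follows (Python transcription for exposition; it assumes the caller supplies the unscaled constraint values for feasibility, the already-scaled gradient of the Lagrangian for optimality, and the scaled inequality values for complementarity; function and argument names are ours):

```python
import numpy as np

def proj_box(x, lo, up):
    return np.minimum(np.maximum(x, lo), up)

def main_stopping_test(x, lo, up, h_val, g_val, mu, grad_lag,
                       eps_feas=1e-8, eps_opt=1e-8, eps_compl=1e-8):
    # (kkt1): sup-norm feasibility of equalities and inequalities;
    # (kkt2): continuous projected gradient of the (scaled) Lagrangian;
    # (kkt3): complementarity measured with the min function.
    feas = max(np.linalg.norm(h_val, np.inf) if h_val.size else 0.0,
               np.linalg.norm(np.maximum(g_val, 0.0), np.inf) if g_val.size else 0.0)
    opt = np.linalg.norm(proj_box(x - grad_lag, lo, up) - x, np.inf)
    compl = float(np.max(np.minimum(-g_val, mu))) if g_val.size else 0.0
    return feas <= eps_feas and opt <= eps_opt and compl <= eps_compl
```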
At Step~2, we consider $\varepsilon_1 = \sqrt{\varepsilon_{\mathrm{opt}}}$ and $\varepsilon_{k} = \max\{ \varepsilon_{\mathrm{opt}}, 0.1 \varepsilon_{k-1} \}$ for $k > 1$; and, at Step~3, if $\lambda^{k+1} \in [\lambda_{\min}, \lambda_{\max}]^m$ and $\mu^{k+1} \in [0, \mu_{\max}]^p$ then we set $\bar \lambda^{k+1} = \lambda^{k+1}$ and $\bar \mu^{k+1} = \mu^{k+1}$. Otherwise, we set $\bar \lambda^{k+1}=0$ and $\bar \mu^{k+1} = 0$. In the numerical experiments, we set $\varepsilon_{\mathrm{feas}} = \varepsilon_{\mathrm{opt}} = \varepsilon_{\mathrm{compl}} = 10^{-8}$, $\rho_{\mathrm{big}} = 10^{20}$, $\lambda_{\min} = -10^{16}$, $\lambda_{\max} = 10^{16}$, $\mu_{\max} = 10^{16}$, $\gamma = 10$, $\tau = 0.5$, ${\bar \lambda}^1 = 0$, ${\bar \mu}^1 = 0$, and \[ \rho_1 = 10 \max \left\{ 1, \frac{|f(x^0)|}{\max\{ 1, \| h(x^0) \|_2^2 + \| g(x^0)_+ \|_2^2 \}} \right\}. \] Two additional strategies complete the implementation of Algorithm~\ref{al}.1. On the one hand, if Algorithm~\ref{al}.1 fails to find a point that satisfies~(\ref{kkt1}), the feasibility problem~(\ref{fisipro}) is tackled with Algorithm~\ref{newtonls}.1 with the purpose of, at least, finding a feasible point of the original NLP problem~(\ref{nlp}).
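The safeguarding rule for the multiplier estimates used at Step~3 (keep the updated multipliers when they lie in the safeguarding boxes, reset them to zero otherwise) can be sketched as follows (illustrative Python; the function name is ours):

```python
import numpy as np

def safeguard_multipliers(lam, mu, lam_min=-1e16, lam_max=1e16, mu_max=1e16):
    # bar lambda^{k+1} = lambda^{k+1} and bar mu^{k+1} = mu^{k+1} when
    # lambda^{k+1} in [lam_min, lam_max]^m and mu^{k+1} in [0, mu_max]^p;
    # otherwise both safeguarded estimates are reset to zero.
    lam_ok = bool(np.all(lam >= lam_min) and np.all(lam <= lam_max))
    mu_ok = bool(np.all(mu >= 0.0) and np.all(mu <= mu_max))
    if lam_ok and mu_ok:
        return lam.copy(), mu.copy()
    return np.zeros_like(lam), np.zeros_like(mu)
```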
On the other hand, at every iteration~$k$, prior to the subproblem minimization at Step~1, $(x^{k-1},\lambda^k,\mu^k)$ is used as initial guess to perform ten iterations of the ``pure'' Newton method (no line search, no inertia correction) applied to the semismooth KKT system~\cite{mqi,qisun} associated with problem~(\ref{nlp}), with dimension $3n+m+p$, given by \[ \left( \begin{array}{c} \nabla f(x) + \sum_{j=1}^m \lambda_j \nabla h_j(x) + \sum_{j=1}^p \mu_j \nabla g_j(x) - \nu^{\ell} + \nu^u\\ h(x)\\ \min\{ -g(x), \mu \}\\ \min\{ x - \ell, \nu^{\ell} \}\\ \min\{ u - x, \nu^{u} \}\\ \end{array} \right) = \left( \begin{array}{c} 0\\ 0\\ 0\\ 0\\ 0\\ \end{array} \right), \] where $\nu^{\ell}, \nu^u \in \mathbb{R}^n$ are the Lagrange multipliers associated with the bound constraints $\ell \leq x$ and $x \leq u$, respectively. This process is related to the so-called acceleration process described in~\cite{bmfast}, in which a different KKT system was considered. (See~\cite{bmfast} for details.) The stopping criteria for the acceleration process are (i) ``the Jacobian of the KKT system has the 'wrong' inertia'', (ii) ``a maximum of 10 iterations was reached'', and (iii) \begin{align} \max \left\{ \| h(x) \|_{\infty}, \| g(x)_+ \|_{\infty}, \| (\ell - x)_+ \|_{\infty}, \| (x - u)_+ \|_{\infty} \right\} &\leq \varepsilon_{\mathrm{feas}},\\ \left\| \nabla f(x) + \sum_{j=1}^m \lambda_j \nabla h_j(x) + \sum_{j=1}^p \mu_j \nabla g_j(x) - \nu^{\ell} + \nu^u \right\|_{\infty} &\leq \varepsilon_{\mathrm{opt}},\\ \max \left\{ \max_{j=1,\dots,p} \left\{[\min \{ - g(x), \mu \}]_j\right\}, \max_{i=1,\dots,n} \left\{[\min \{x-\ell,\nu^{\ell}\}]_i\right\}, \max_{i=1,\dots,n} \left\{[\min \{u-x,\nu^u\}]_i\right\} \right\} &\leq \varepsilon_{\mathrm{compl}}. \end{align} Note that criterion (iii) corresponds to satisfying approximate KKT conditions for the \textit{unscaled} original problem~(\ref{nlp}).
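For illustration, the residual of the semismooth KKT system above can be assembled as follows (Python sketch; we assume jac_h and jac_g hold the constraint gradients as rows, and all names are ours):

```python
import numpy as np

def kkt_residual(x, lam, mu, nu_l, nu_u, lo, up,
                 grad_f, jac_h, jac_g, h_val, g_val):
    # Stationarity, equality constraints, and the three complementarity
    # conditions modelled with the semismooth min function.
    stat = grad_f + jac_h.T @ lam + jac_g.T @ mu - nu_l + nu_u
    return np.concatenate([stat,
                           h_val,
                           np.minimum(-g_val, mu),
                           np.minimum(x - lo, nu_l),
                           np.minimum(up - x, nu_u)])
```

At a KKT point of, say, minimizing $x$ subject to $x \in [0,10]$ (with $\nu^\ell = 1$ absorbing the gradient), the residual vanishes.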
On the other hand, differently from an iterate $x^k \in [\ell,u]$ of Algorithm~\ref{al}.1 that satisfies (\ref{kkt1},\ref{kkt2},\ref{kkt3}), a point that satisfies criterion~(iii) may violate the bound constraints with tolerance~$\varepsilon_{\mathrm{feas}}$. If the acceleration process stops satisfying criterion~(i) or~(ii), everything done during the acceleration is discarded and the iterations of Algorithm~\ref{al}.1 continue. On the other hand, assume that a point satisfying criterion~(iii) was found by the acceleration process. If $(x^{k-1},\lambda^k,\mu^k)$ satisfies~(\ref{kkt1},\ref{kkt2},\ref{kkt3}) with half the precision, i.e.\ with $\varepsilon_{\mathrm{feas}}$, $\varepsilon_{\mathrm{opt}}$, and $\varepsilon_{\mathrm{compl}}$ substituted by $\varepsilon_{\mathrm{feas}}^{1/2}$, $\varepsilon_{\mathrm{opt}}^{1/2}$, and $\varepsilon_{\mathrm{compl}}^{1/2}$, respectively, then we say that the acceleration was successful, the point found by the acceleration process is returned, and the optimization process stops. On the other hand, if $(x^{k-1},\lambda^k,\mu^k)$ is far from satisfying~(\ref{kkt1},\ref{kkt2},\ref{kkt3}), we believe that the approximate KKT point found by the acceleration may be an undesirable point. The point is saved for future reference, but the optimization process continues; and the next Augmented Lagrangian subproblem is tackled by Algorithm~\ref{newtonls}.1 starting from~$x^{k-1}$ and ignoring the point found by the acceleration process. \subsection{Bound-constrained minimization method} \label{ubcimpl} As the main stopping criterion of Algorithm~\ref{newtonls}.1, we considered the condition \begin{equation} \label{stopcrit} \| g_P(x^k) \|_{\infty} \leq \varepsilon, \end{equation} where $g_P(x^k) = P_{[\ell,u]}\left( x^k - \nabla \Phi(x^k) \right) - x^k$ as defined in~(\ref{cpg}).
When an unconstrained or bound-constrained problem is being solved, in~(\ref{stopcrit}) and in the alternative stopping criteria described below, we use $\varepsilon = \varepsilon_{\mathrm{opt}} = 10^{-8}$. When the problem being tackled by Algorithm~\ref{newtonls}.1 is a subproblem of Algorithm~\ref{al}.1, the value of $\varepsilon$ in~(\ref{stopcrit}) and in the alternative stopping criteria described below is the one described in Section~\ref{alparam} (we avoid denoting it by $\varepsilon_k$ here, since $k$ is being used to index the iterations of both Algorithms~\ref{al}.1 and~\ref{newtonls}.1). In addition, Algorithm~\ref{newtonls}.1 may also stop at iteration~$k$ by any of the following alternative stopping criteria: \textbf{(a)} $\|g_P(x^{k-\ell})\|_{\infty}<\sqrt{\varepsilon}$ for all $0 \leq \ell < 100$; \textbf{(b)} $\|g_P(x^{k-\ell})\|_{\infty}<\varepsilon^{1/4}$ for all $0 \leq \ell < 5{,}000$; \textbf{(c)} $\|g_P(x^{k-\ell})\|_{\infty}<\varepsilon^{1/8}$ for all $0 \leq \ell < 10{,}000$; \textbf{(d)} $\Phi(x^k) \leq \Phi_{\mathrm{target}}$; \textbf{(e)} $k \geq k_{\max} = 50{,}000$; and \textbf{(f)} $k - k_{\mathrm{best}} > 3$, where $k_{\mathrm{best}}$ is the smallest index such that $\Phi(x^{k_{\mathrm{best}}}) = \min \{ \Phi(x^0), \Phi(x^1), \dots, \Phi(x^k) \}$, i.e.\ the best functional value obtained so far has not been improved in more than three consecutive iterations. In Algorithm~\ref{newtonls}.1, although the theory allows a wide range of possibilities, in practice we consider $H_k = \nabla^2 \Phi(x^k)$ for all~$k$. The linear systems at Steps~2.1 and~2.2.2 are solved with subroutine MA57 from HSL~\cite{hsl} (using all its default parameters).
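Criteria (a)--(c) amount to monitoring a sliding window of past values of $\|g_P(x^k)\|_\infty$; a possible transcription (illustrative only, not the Fortran implementation) is:

```python
from collections import deque

def make_progress_monitor(eps):
    # Stop when ||g_P||_inf stayed below eps^{1/2}, eps^{1/4} or eps^{1/8}
    # for the last 100, 5,000 or 10,000 iterations, respectively
    # (criteria (a), (b) and (c)).
    rules = [(eps ** 0.5, 100), (eps ** 0.25, 5000), (eps ** 0.125, 10000)]
    history = deque(maxlen=max(w for _, w in rules))

    def update(gp_norm):
        history.append(gp_norm)
        for tol, window in rules:
            if len(history) >= window and all(v < tol for v in list(history)[-window:]):
                return True
        return False

    return update
```

With $\varepsilon = 10^{-8}$, a run stagnating at $\|g_P\|_\infty \approx 10^{-5}$ (below $\sqrt{\varepsilon}=10^{-4}$ but above $\varepsilon$) triggers criterion (a) exactly at the 100th such iteration.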
In the experiments, we set $\Phi_{\mathrm{target}}=-10^{12}$, $r=0.1$, $\tau_1=0.1$, $\tau_2=0.9$, $\gamma=10^{-4}$, $\beta=0.5$, $\eta=10^4$, $\lambda_{\min}^{\mathrm{SPG}} = 10^{-16}$, $\lambda_{\max}^{\mathrm{SPG}} = 10^{16}$, $\sigma_{\mathrm{small}} = 10^{-8}$, $\sigma_{\min} = 10^{-8}$, $\sigma_{\max} = 10^{16}$, $\underline{h} = 10^{-8}$, $\bar h = 10^{8}$, and $t^{\mathrm{ext}}_{\max} = 20$. When Algorithm~\ref{newtonls}.1 is used to solve a subproblem of Algorithm~\ref{al}.1, we have that $\nabla^2 \Phi(x) = \nabla^2 L_{\rho_k}(x,{\bar \lambda}^k,{\bar \mu}^k)$, i.e.\ $\nabla^2 \Phi(x)$ is the Hessian of the augmented Lagrangian associated with the scaled version of problem~(\ref{nlp}), given by \begin{equation} \label{lahess} \resizebox{\textwidth}{!}{$ s_f \nabla^2 f(x) + \sum_{j=1}^m \left\{ {\bar \lambda}_j^k s_{h_j} \nabla^2 h_j(x) + \rho_k s_{h_j}^2 \nabla h_j(x) \nabla h_j(x)^T \right \} + \sum_{j \in I_k} \left\{ {\bar \mu}_j^k s_{g_j} \nabla^2 g_j(x) + \rho_k s_{g_j}^2 \nabla g_j(x) \nabla g_j(x)^T \right\}, $} \end{equation} where $I_k = I_{\rho_k}(x^k,{\bar \mu}^k) = \{ j = 1,\dots,p \;|\; {\bar \mu}_j^k + \rho_k s_{g_j} g_j(x^k) > 0 \}$. A relevant issue from the practical point of view is that, despite the sparsity of the Hessian of the Lagrangian and the sparsity of the Jacobian of the constraints, this matrix may be dense. Thus, factorizing it, or even building it, may be prohibitive.
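For illustration, the matrix (\ref{lahess}) can be assembled from the Hessian of the (scaled) Lagrangian plus $\rho_k$ times scaled rank-one terms, as in the following sketch (Python transcription; hess_lag is assumed to already contain $s_f \nabla^2 f + \sum_j \bar\lambda_j^k s_{h_j} \nabla^2 h_j + \sum_{j \in I_k} \bar\mu_j^k s_{g_j} \nabla^2 g_j$, and jac_h, jac_g_active hold the constraint gradients as rows):

```python
import numpy as np

def aug_lag_hessian(hess_lag, jac_h, jac_g_active, rho, s_h, s_g_active):
    # H = hess_lag + rho * sum_j s_j^2 * grad c_j(x) grad c_j(x)^T,
    # summed over the equalities and the active inequalities (j in I_k).
    H = hess_lag.copy()
    for s, a in zip(s_h, jac_h):
        H += rho * (s ** 2) * np.outer(a, a)
    for s, a in zip(s_g_active, jac_g_active):
        H += rho * (s ** 2) * np.outer(a, a)
    return H
```

The rank-one terms are exactly what may destroy sparsity: a single dense constraint gradient fills in the whole matrix.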
As an alternative, instead of building and factorizing the Hessian above, one can solve an augmented linear system with coefficient matrix given by \begin{equation} \label{lahess2} \left( \begin{array}{c|c} s_f \nabla^2 f(x) + \sum_{j=1}^m \left\{ {\bar \lambda}_j^k s_{h_j} \nabla^2 h_j(x) \right \} + \sum_{j \in I_k} \left\{ {\bar \mu}_j^k s_{g_j} \nabla^2 g_j(x) \right\} & J(x)^T \\[2mm] \hline \phantom{\displaystyle \sum} J(x) & - \frac{1}{\rho_k} I \end{array} \right), \end{equation} where $J(x)$ is the matrix whose rows are $\nabla h_1(x)^T, \dots, \nabla h_m(x)^T$ plus the gradients $\nabla g_j(x)^T$ such that $j \in I_k$. This matrix preserves the sparsity of the Hessian of the Lagrangian and of the Jacobian of the constraints. The implementation of Algorithm~\ref{newtonls}.1 dynamically selects one of the two approaches. Another relevant fact from the practical point of view, related to matrices~(\ref{lahess}) and~(\ref{lahess2}), is that the tools currently available in CUTEst compute the full Jacobian of the constraints and $\sum_{j=1}^p {\bar \mu}_j^k s_{g_j} \nabla^2 g_j(x)$ with ${\bar \mu}_j^k=0$ if $j \not\in I_k$, instead of $J(x)$ and $\sum_{j \in I_k} {\bar \mu}_j^k s_{g_j} \nabla^2 g_j(x)$, respectively. On the one hand, this feature preserves the sparsity structures of the Jacobian and of the Hessian of the Lagrangian independently of~${\bar \mu}^k$ and~$x$, as required by some solvers. On the other hand, it prevents Algorithm~\ref{al}.1, when applied to problems from the CUTEst collection, from fully exploiting the potential advantage of dealing with inequality constraints without adding slack variables. In summary, Algorithm~\ref{al}.1--\ref{newtonls}.1 is prepared to deal with matrices with different sparsity structures at every iteration and, for that reason, it performs the analysis step of the factorization at every iteration. This is the price to pay for exploiting inequality constraints without adding slack variables.
However, the CUTEst subroutines are not prepared to exploit this feature and Algorithm~\ref{al}.1--\ref{newtonls}.1, when solving problems from the CUTEst collection, pays the price without enjoying the advantages. Of course, this CUTEst inconvenience negatively affects the comparison of Algencan with other solvers when CPU time is used as a performance measure. \section{Numerical experiments} \label{secnumexp} In this section, we aim to evaluate the performance of Algorithm~\ref{al}.1--\ref{newtonls}.1 (referred to as Algencan from now on) for solving unconstrained, bound-constrained, feasibility, and nonlinear programming problems. The performance of Ipopt~\cite{ipopt} (version 3.12.12) is also exhibited. Both methods were run in the same computational environment, compiled with the same BLAS routines, and using the same subroutine MA57 from HSL for solving the linear systems. All Ipopt default parameters were used\footnote{Option 'honor\_original\_bounds no', which does not affect Ipopt's optimization process, was used. Ipopt might relax the bounds during the optimization beyond its initial \textit{relative} relaxation factor, whose default value is $10^{-8}$. Option 'honor\_original\_bounds no' simply avoids the final iterate being projected back onto the box defined by the bound constraints, so that the actual absolute violation of the bound constraints at the final iterate can be measured.}. A CPU time limit of 10 minutes per problem was imposed. In the numerical experiments, we considered all $1{,}258$ problems from the CUTEst collection~\cite{cutest} with their default dimensions. The collection contains 217 unconstrained problems, 144 bound-constrained problems, 157 feasibility problems, and 740 nonlinear programming problems. A hint on the number of variables in each family is given in Table~\ref{tab0}. \begin{table}[h!]
\begin{center} \begin{tabular}{cccccc} \hline \multirow{2}{*}{Problem type} & \multirow{2}{*}{\# of problems} & \multirow{2}{*}{$n_{\max}$} & \multicolumn{3}{c}{\# of problems with $n \geq \omega n_{\max}$} \\ \cline{4-6} & & & $\omega=0.1$ & $\omega=0.01$ & $\omega=0.001$ \\ \hline unconstrained & 217 & 100{,}000 & 15 & 87 & 97 \\ bound-constrained & 144 & 149{,}624 & 5 & 60 & 72 \\ feasibility & 156 & 123{,}200 & 5 & 40 & 55 \\ NLP & 740 & 250{,}997 & 67 & 263 & 379 \\ \hline \end{tabular} \end{center} \caption{Distribution of the number of variables~$n$ in the CUTEst collection test problems.} \label{tab0} \end{table} Large tables with a detailed description of the output of each method in the $1{,}258$ problems can be found in \url{http://www.ime.usp.br/~egbirgin/}. A brief overview follows. Note that, since the methods differ in their stopping criteria, some arbitrary decisions had to be made. A point in common is that both methods attempt to reduce the (sup-norm) violation of the unscaled equality and inequality constraints below the tolerance~$\varepsilon_{\mathrm{feas}} = 10^{-8}$. However, as described in~\cite[\S3.5]{ipopt}, Ipopt considers a \textit{relative} initial relaxation of the bound constraints (whose default value is $10^{-8}$); and it may apply repeated additional relaxations during the optimization process. Table~\ref{tabfeas} shows the number of problems in which each method found a point satisfying \begin{equation} \label{feas1} \max\{ \| h(x) \|_{\infty}, \| [g(x)]_+ \|_{\infty} \} \leq \varepsilon_{\mathrm{feas}} \end{equation} plus \begin{equation} \label{feas2} \max\{ \| (\ell - x)_+ \|_{\infty}, \| (x - u)_+ \|_{\infty} \} \leq \bar \varepsilon_{\mathrm{feas}} \end{equation} with $\varepsilon_{\mathrm{feas}} = 10^{-8}$ and $\bar \varepsilon_{\mathrm{feas}} \in \{ 0.1, 10^{-2}, \dots, 10^{-16}, 0 \}$.
The figures in the table show that, in most cases, Algencan satisfies the bound constraints with zero tolerance and that the violation of the bound constraints rarely exceeds the tolerance~$10^{-8}$. This is an expected result, since the method satisfies these requirements by definition. Regarding Ipopt, the table shows how the number of problems in which~(\ref{feas2}) holds varies as a function of the tolerance~$\bar \varepsilon_{\mathrm{feas}}$. \begin{table}[ht!] \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{lrrrrrrrrrrrrrrrrr} \cline{2-18} & \multicolumn{17}{c}{$\bar \varepsilon_{\mathrm{feas}}$}\\ \hline & $0.1$ & $10^{-2}$ & $10^{-3}$ & $10^{-4}$ & $10^{-5}$ & $10^{-6}$ & $10^{-7}$ & $10^{-8}$ & $10^{-9}$ & $10^{-10}$ & $10^{-11}$ & $10^{-12}$ & $10^{-13}$ & $10^{-14}$ & $10^{-15}$ & $10^{-16}$ & $0$ \\ \hline Algencan & 1{,}132 & 1{,}132 & 1{,}131 & 1{,}131 & 1{,}131 & 1{,}130 & 1{,}130 & 1{,}130 & 1{,}121 & 1{,}115 & 1{,}112 & 1{,}105 & 1{,}092 & 1{,}082 & 1{,}077 & 1{,}069 & 1{,}058\\ Ipopt & 1{,}073 & 1{,}072 & 1{,}070 & 1{,}068 & 1{,}056 & 1{,}044 & 1{,}016 & 970 & 794 & 793 & 793 & 793 & 793 & 792 & 792 & 792 & 791\\ \hline \end{tabular}} \end{center} \caption{Number of problems in which a point satisfying~(\ref{feas1},\ref{feas2}) was found by Algencan and Ipopt with $\varepsilon_{\mathrm{feas}} = 10^{-8}$ and $\bar \varepsilon_{\mathrm{feas}} \in \{ 0.1, 10^{-2}, \dots, 10^{-16}, 0\}$.} \label{tabfeas} \end{table} If the violation of the bound constraints is disregarded, Table~\ref{tabfeas} shows that Algencan found points satisfying~(\ref{feas1},\ref{feas2}) with $\varepsilon_{\mathrm{feas}} = 10^{-8}$ and $\bar \varepsilon_{\mathrm{feas}}=0.1$ in~$1{,}132$ problems, while Ipopt did the same in~$1{,}073$. The CUTEst collection contains 85 problems (62 feasibility problems and~23 nonlinear programming problems) in which the number of equality constraints is larger than the number of variables.
Ipopt does not apply to these problems and, thus, of course, it does not find a point satisfying~(\ref{feas1},\ref{feas2}). Algencan \textit{did} find a point satisfying~(\ref{feas1},\ref{feas2}) in~28 out of the~85 problems to which Ipopt does not apply; this explains about half of the difference between the methods. In any case, it can be said that, over a universe of $1{,}258$ problems, both methods found ``feasible points'' in a large fraction of the problems, recalling that the collection contains infeasible problems. We now consider the set of~757 problems in which both methods found a point satisfying~(\ref{feas1}) with $\varepsilon_{\mathrm{feas}} = 10^{-8}$ and~(\ref{feas2}) with $\bar \varepsilon_{\mathrm{feas}} = 0$. For a given problem, let $f_1$ be the value of the objective function at the point found by Algencan; let $f_2$ be the value of the objective function at the point found by Ipopt; and let $f^{\min} = \min\{ f_1, f_2 \}$. Table~\ref{tabbest} shows, for $f_{\mathrm{tol}} \in \{ 0.1, 10^{-2}, \dots, 10^{-8}, 0 \}$, in how many problems \begin{equation} \label{eqbest} f_i \leq f^{\min} + f_{\mathrm{tol}} \max\{ 1, | f^{\min} | \} \mbox{ for } i = 1, 2 \end{equation} holds. \begin{table}[ht!]
\begin{center} \begin{tabular}{lrrrrrrrrr} \cline{2-10} & \multicolumn{9}{c}{$f_{\mathrm{tol}}$}\\ \hline & $0.1$ & $10^{-2}$ & $10^{-3}$ & $10^{-4}$ & $10^{-5}$ & $10^{-6}$ & $10^{-7}$ & $10^{-8}$ & 0 \\ \hline Algencan & 722 & 715 & 706 & 694 & 691 & 678 & 675 & 663 & 498 \\ Ipopt & 723 & 708 & 699 & 694 & 683 & 653 & 623 & 592 & 383 \\ \hline \end{tabular} \end{center} \caption{Number of problems in which a point satisfying~(\ref{feas1}) with $\varepsilon_{\mathrm{feas}} = 10^{-8}$, (\ref{feas2}) with $\bar \varepsilon_{\mathrm{feas}} = 0$, and~(\ref{eqbest}) with $f_{\mathrm{tol}} \in \{ 0.1, 10^{-2}, \dots, 10^{-8}, 0 \}$ was found by Algencan and Ipopt.} \label{tabbest} \end{table} Finally, we consider the set of~688 problems in which both Algencan and Ipopt found a point that satisfies~(\ref{feas1}) with $\varepsilon_{\mathrm{feas}} = 10^{-8}$, (\ref{feas2}) with $\bar \varepsilon_{\mathrm{feas}} = 0$, and~(\ref{eqbest}) with $f_{\mathrm{tol}} = 0.1$. For this set of problems, Figure~\ref{figpp} shows the performance profile~\cite{pp} that considers, as performance measure, the CPU time spent by each method. In the figure, for $i \in M \equiv \{ \mathrm{Algencan}, \mathrm{Ipopt} \}$, \[ \Gamma_i(\tau) = \frac{\#\left\{ j \in \{1,\dots,q\} \; | \; t_{ij} \leq \tau \min_{s \in M} \{ t_{sj} \} \right\}}{q}, \] where $\#{\cal S}$ denotes the cardinality of the set~${\cal S}$, $q=688$ is the number of considered problems, and $t_{ij}$ is the performance measure (CPU time) of method~$i$ applied to problem~$j$. Thus, $\Gamma_{\mathrm{Algencan}}(1) = 0.48$ and $\Gamma_{\mathrm{Ipopt}}(1) = 0.53$ mean that Algencan was faster than Ipopt in 48\% of the problems and Ipopt was faster than Algencan in 53\% of the problems (ties count for both methods, which is why the percentages add up to more than 100\%). Complementing the performance profile, we can report that there are~9 problems in which both methods spent at least a second of CPU time and one of the methods is at least ten times faster than the other.
Among these~9 problems, Ipopt is faster in~5 and Algencan is faster in the other~4. \begin{figure}\label{figpp} \end{figure} \section{Conclusions} \label{secconcl} In this work, a version of the (safeguarded) Augmented Lagrangian algorithm Algencan \cite{abmstango,bmbook} that possesses iteration and evaluation complexity results was described, implemented, and evaluated. Moreover, the convergence theory of Algencan was complemented with new complexity results. The way in which an Augmented Lagrangian method was able to inherit the complexity properties of a method for bound-constrained minimization is a nice example of the advantages of the modularity that Augmented Lagrangian methods usually possess. As a byproduct of this development, a new version of Algencan that uses a Newtonian method with line search to solve the subproblems was developed from scratch. Moreover, the acceleration process described in~\cite{bmfast} was revisited. In particular, the KKT system in which complementarity is modelled by the product between constraints and multipliers was replaced with the KKT system that models complementarity with the semismooth $\min$ function. We provided a fully reproducible comparison with Ipopt, which is probably the best-known and most effective free software for constrained optimization. The main feature we want to stress is that there exists a significant number of problems that Algencan solves satisfactorily whereas Ipopt does not, and vice versa. This is not surprising, because the ways in which Augmented Lagrangian and Interior-Point Newtonian methods handle problems are qualitatively different. Constrained optimization problems form an extremely heterogeneous family.
Therefore, we believe that what justifies the existence of new algorithms, or the survival of traditional ones, is not their capacity to solve a large number of problems in slightly less computer time than ``competitors'', but their potential for solving some problems that other algorithms fail to solve. Engineers and practitioners should not worry about the choice between algorithm~A or~B according to subtle efficiency criteria. The best strategy is to contemplate both, using one or the other according to their behavior on the family of problems that they need to solve in practice. As in many aspects of life, competition should give way to cooperation. \\ \noindent \textbf{Acknowledgements.} The authors are indebted to Iain Duff, Nick Gould, Dominique Orban, and Tyrone Rees for their help with issues related to the usage of MA57 from HSL and the CUTEst collection. \end{document}