Vector subdivision schemes and multiple wavelets
by Rong-Qing Jia, S. D. Riemenschneider and Ding-Xuan Zhou
Math. Comp. 67 (1998), 1533-1563
We consider solutions of a system of refinement equations written in the form \begin{equation*}\phi = \sum _{\alpha \in \mathbb {Z}} a(\alpha )\phi (2\cdot -\alpha ),\end{equation*} where the vector of functions $\phi =(\phi ^{1},\ldots ,\phi ^{r})^{T}$ is in $(L_{p}(\mathbb {R}))^{r}$ and $a$ is a finitely supported sequence of $r\times r$ matrices called the refinement mask. Associated with the mask $a$ is a linear operator $Q_{a}$ defined on $(L_{p}(\mathbb {R}))^{r}$ by $Q_{a} f := \sum _{\alpha \in \mathbb {Z}} a(\alpha )f(2\cdot -\alpha )$. This paper is concerned with the convergence of the subdivision scheme associated with $a$, i.e., the convergence of the sequence $(Q_{a}^{n}f)_{n=1,2,\ldots }$ in the $L_{p}$-norm. Our main result characterizes the convergence of a subdivision scheme associated with the mask $a$ in terms of the joint spectral radius of two finite matrices derived from the mask. Along the way, properties of the joint spectral radius and its relation to the subdivision scheme are discussed. In particular, the $L_{2}$-convergence of the subdivision scheme is characterized in terms of the spectral radius of the transition operator restricted to a certain invariant subspace. We analyze convergence of the subdivision scheme explicitly for several interesting classes of vector refinement equations. Finally, the theory of vector subdivision schemes is used to characterize orthonormality of multiple refinable functions. This leads us to construct a class of continuous orthogonal double wavelets with symmetry.
Rong-Qing Jia
Email: [email protected]
S. D. Riemenschneider
Affiliation: Department of Mathematical Sciences, University of Alberta, Edmonton, Canada T6G 2G1
Email: [email protected]
Ding-Xuan Zhou
Affiliation: Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong
Email: [email protected]
Received by editor(s): December 12, 1996
Additional Notes: Research supported in part by NSERC Canada under Grants # OGP 121336 and A7687.
Journal: Math. Comp. 67 (1998), 1533-1563
MSC (1991): Primary 39B12, 41A25, 42C15, 65F15
DOI: https://doi.org/10.1090/S0025-5718-98-00985-5
\begin{document}
\title{Exponential concentration of cover times} \author{Alex Zhai \\ Stanford University} \maketitle
\begin{abstract}
We prove an exponential concentration bound for cover times of
general graphs in terms of the Gaussian free field, extending the
work of Ding, Lee, and Peres \cite{DLP12} and Ding \cite{D14}. The
estimate is asymptotically sharp as the ratio of hitting time to
cover time goes to zero.
The bounds are obtained by showing a stochastic domination in the
generalized second Ray-Knight theorem, which was shown to imply
exponential concentration of cover times by Ding in \cite{D14}. This
stochastic domination result appeared earlier in a preprint of Lupu
\cite{L14}, but the connection to cover times was not mentioned. \end{abstract}
\section{Introduction}
Let $G = (V, E)$ be an undirected graph, possibly with self-loops and multiple edges. For the continuous time simple random walk on $G$ started at a given vertex $v_0 \in V$, define $\tau_{\text{cov}}$ to be the first time that all the vertices in $V$ have been visited at least once. This quantity, known as the \emph{cover time}, is of fundamental interest in the study of random walks.
Another fundamental object in the study of random walks on graphs is the \emph{Gaussian free field} (GFF). For purposes of stating our main result, let us define the GFF $\{ \eta_x \}_{x \in V}$ on $G$ with $\eta_{v_0} = 0$ to be the Gaussian process given by covariances $\mathbb{E}(\eta_x - \eta_y)^2 = R_{\text{eff}}(x, y)$, where $R_{\text{eff}}$ denotes effective resistance. More background is given in Section \ref{sec:prelim}.
Our main result is the following concentration bound on the cover time in terms of the Gaussian free field.
\begin{theorem} \label{thm:cover-concentration}
Let $G = (V, E)$ be an undirected graph with a specified initial
vertex $v_0 \in V$. Let $\{ \eta_x \}_{x \in V}$ be the Gaussian
free field on $G$ with $\eta_{v_0} = 0$. Define the quantities
\[ M = \mathbb{E} \max_{x \in V} \eta_x , \;\; R = \max_{x, y \in V} R_{\text{eff}}(x, y) = \max_{x, y \in V} \mathbb{E} \( \eta_x - \eta_y \)^2. \]
Then, there are universal constants $c$ and $C$ such that for the
continuous time random walk started at $v_0$, we have
\[ \mathbb{P}\( \Big| \tau_{\text{cov}} - |E|M^2 \Big| \ge |E| (\sqrt{\lambda R} \cdot M + \lambda R) \) \le Ce^{-c \lambda} \]
for any $\lambda \ge C$. \end{theorem} \begin{remark}
Our result is most easily stated for a continuous time random walk,
i.e. a random walk having the same jump probabilities as a simple
random walk, but whose times between jumps are i.i.d. unit
exponentials. However, note that if a continuous time random walk
has run for time $t$, then the number of jumps it has made has
Poisson distribution with mean $t$, which exhibits Gaussian
concentration with fluctuations of order $\sqrt{t}$. Thus, Theorem
\ref{thm:cover-concentration} can be easily translated into a
similar bound for discrete random walks. \end{remark} \begin{remark}
Note that the definition of $M$ is given in terms of a starting
vertex $v_0$, but it does not depend on $v_0$. Indeed, let $v'_0$ be
another starting vertex. Then, $\eta' = \{ \eta_x - \eta_{v'_0}
\}_{x \in V}$ has the law of a GFF with $\eta'_{v'_0} = 0$, and
\[ \mathbb{E} \max_{x \in V} \eta'_x = \mathbb{E} \max_{x \in V} \eta_x. \] \end{remark} \begin{remark}
We actually show Theorem \ref{thm:cover-concentration} in the
slightly more general setting of electrical networks, which are
introduced in Section \ref{sec:prelim}. \end{remark}
We prove Theorem \ref{thm:cover-concentration} following the approach first appearing in a paper of Ding, Lee, and Peres \cite{DLP12} and later refined by Ding \cite{D14}. Indeed, Ding observed that Theorem \ref{thm:cover-concentration} is implied by a certain stochastic domination; in \cite{D14}, the domination was proved for trees, but the general case was left as conjecture (\cite{D14}, Question 5.2). We establish Theorem \ref{thm:cover-concentration} by proving the stochastic domination for general graphs.
In relation to these previous works, Theorem
\ref{thm:cover-concentration} extends Theorem 1.2 of \cite{D14}, which gave the same concentration bound for trees. It also sharpens Theorem 1.1 of \cite{DLP12}, where the equivalence of cover times and $|E|M^2$ (in the notation of Theorem \ref{thm:cover-concentration}) was proven up to a universal multiplicative constant. By ``sharpen'', we mean that we are able to remove the constant factor under the assumption $\sqrt{R} \ll M$. We mention that this was done already for bounded-degree graphs in Theorem 1.1 of \cite{D14}, albeit without exponential tail bounds.
The condition $\sqrt{R} \ll M$ is a relatively mild one. Indeed, define $\tau_{\text{hit}}(x, y)$ to be the time it takes for a random walk started at $x$ to hit $y$, and define \[ t_{\text{hit}} = \max_{x, y \in V} \mathbb{E} \tau_{\text{hit}}(x, y), \;\; t_{\text{cov}} = \max_{x \in V} \mathbb{E}_x \tau_{\text{cov}}, \] where in the definition of $t_{\text{cov}}$, $\mathbb{E}_x$ denotes the expectation for the random walk started at $x$. The well-known commute time identity (\cite{LPW09}, Proposition 10.6) states that
\[ \mathbb{E} \tau_{\text{hit}}(x, y) + \mathbb{E} \tau_{\text{hit}}(y, x) = 2 |E| \cdot R_{\text{eff}}(x, y). \] It follows that
\[ t_{\text{hit}} \ge |E| \cdot R. \]
On the other hand, it was shown in \cite{DLP12} that $|E| \cdot M^2$ is within a constant of $t_{\text{cov}}$. It follows that for some constant $C$, \[ \frac{R}{M^2} \le C \cdot \frac{t_{\text{hit}}}{t_{\text{cov}}}, \] so $\sqrt{R} \ll M$ holds whenever $t_{\text{hit}} \ll t_{\text{cov}}$. We obtain the following corollary.
\begin{corollary} \label{cor:cover-concentration}
Let $G = (V, E)$, $v_0$, $\eta$, $M$, and $R$ be as in Theorem
\ref{thm:cover-concentration}. Then,
\[ \( 1 - C \sqrt{\frac{t_{\text{hit}}}{t_{\text{cov}}}} \) \cdot |E| \cdot M^2 \;\; \le \;t_{\text{cov}}\; \le \;\; \( 1 + C \sqrt{\frac{t_{\text{hit}}}{t_{\text{cov}}}} \) \cdot |E| \cdot M^2, \]
for a universal constant $C$. \end{corollary} \begin{remark}
There is a deterministic polynomial-time approximation scheme (PTAS)
due to Meka \cite{M12} for computing the supremum of a Gaussian
process. Applying this to the quantity $M$ gives a PTAS for $t_{\text{cov}}$
when $t_{\text{hit}} \ll t_{\text{cov}}$. \end{remark}
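In outline, Corollary \ref{cor:cover-concentration} can be deduced from Theorem \ref{thm:cover-concentration} by integrating the tail bound (a sketch, with constants not optimized): writing $Z = \tau_{\text{cov}} - |E| M^2$, using $\mathbb{E} |Z| = \int_0^\infty \mathbb{P}(|Z| \ge u) \, du$ and substituting $u = |E| (\sqrt{\lambda R} \cdot M + \lambda R)$ gives
\[ \mathbb{E} |Z| \le |E| \( \sqrt{CR} \cdot M + CR \) + \int_C^\infty Ce^{-c\lambda} \cdot |E| \( \frac{M \sqrt{R}}{2\sqrt{\lambda}} + R \) d\lambda \le C' |E| \( \sqrt{R} \cdot M + R \). \]
Since neither $M$ nor $R$ depends on the starting vertex, this holds for every start, and hence $\big| t_{\text{cov}} - |E| M^2 \big| \le C' |E| \( \sqrt{R} \cdot M + R \)$. Combining this with $\frac{R}{M^2} \le C \cdot \frac{t_{\text{hit}}}{t_{\text{cov}}}$ and $t_{\text{hit}} \le t_{\text{cov}}$ gives $\sqrt{R} \cdot M + R \le C'' M^2 \sqrt{t_{\text{hit}}/t_{\text{cov}}}$, which is the statement of the corollary.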
Conversely, it was shown by Aldous \cite{A91} that if $t_{\text{hit}}$ is of the same order as $t_{\text{cov}}$, then the cover time cannot be concentrated about its expectation (see the introduction of \cite{D14} for a more detailed discussion).
The main tool in estimating cover times employed in \cite{DLP12} and \cite{D14} is the generalized second Ray-Knight theorem, which is an identity in law relating the Gaussian free field to the time spent at each vertex by a continuous time random walk. In fact, the upper bound on $t_{\text{cov}}$ in Corollary \ref{cor:cover-concentration} was previously established as Theorem 1.4 of \cite{DLP12} (the same argument also proves the corresponding upper tail estimate in Theorem \ref{thm:cover-concentration}).
In \cite{D14}, the matching lower bound was reduced to proving a certain stochastic domination in the generalized second Ray-Knight theorem. There, the stochastic domination was proven only for trees (\cite{D14}, Theorem 2.3), but it was asked whether the same holds for general graphs (\cite{D14}, Question 5.2).
Indeed, in Section \ref{sec:domination} we prove Theorem \ref{thm:domination}, which generalizes Theorem 2.3 of \cite{D14} to arbitrary graphs. This is accomplished by viewing the random walk as Brownian motion on a metric graph. After writing up an early draft of the proof, it was pointed out to us that this idea appeared previously in a recent preprint of Lupu \cite{L14} to prove essentially the same result (\cite{L14}, Theorem 3). In that context, the idea was mainly used to study the percolation of loop clusters (\cite{L14}, Theorems 1 and 2; see also subsequent work by Sznitman \cite{S14}). However, the application to cover times was not mentioned.
Even though Theorem \ref{thm:domination} uses the same ideas as Theorem 3 of \cite{L14}, we include a proof in order to establish the result in the language of our specific application. Additionally, our exposition is intended to be more accessible to audiences interested in cover times of random walks.
\subsection{Related work on cover times}
Cover times have been studied in many papers over the last few decades. We highlight several of them below; see also \S 1.1 of \cite{DLP12} for further background.
We first mention some results relating cover times and hitting times. Clearly, $t_{\text{cov}} \ge t_{\text{hit}}$. A classical result of Matthews \cite{M88} is that on a graph of $n$ vertices, $t_{\text{cov}} \le t_{\text{hit}} (1 + \log n)$. This was proved by a clever argument analogous to the analysis of the coupon collector's problem. Matthews also gave an expression for a lower bound, which was later shown by Kahn, Kim, Lovász, and Vu \cite{KKLV02} to approximate the cover time to within a factor of $(\log \log n)^2$.
In \cite{A91}, Aldous analyzed a generalization of the coupon collector's problem. As a consequence, he showed that $\tau_{\text{cov}}$ is concentrated around its expectation with high probability as $\frac{t_{\text{hit}}}{t_{\text{cov}}} \rightarrow 0$. More precisely, for any $\epsilon > 0$, there is a small enough $\delta$ so that
\[ \mathbb{P} \( |\tau_{\text{cov}} - t_{\text{cov}}| \le \epsilon t_{\text{cov}} \) \ge 1 - \epsilon \] whenever $\frac{t_{\text{hit}}}{t_{\text{cov}}} < \delta$. This shows qualitatively the concentration of cover times.
On the other hand, cover times have also been estimated for many specific classes of graphs, including regular graphs \cite{JLNS89}, lattices \cite{Z92}, and bounded degree planar graphs \cite{JS00}, to name a few. Precise asymptotics are known for the two-dimensional discrete torus \cite{DPRZ04} and regular trees \cite{A91b}.
More recently, a breakthrough was made by Ding, Lee, and Peres \cite{DLP12} whereby the cover time was given (up to a constant factor) in terms of the Gaussian free field. Their result gives in some sense a quantitative estimate of the cover time that works for any graph. As touched upon earlier, Ding \cite{D14} later removed the constant factor for trees and bounded degree graphs. We complete the picture by extending this to general graphs.
\subsection{Outline}
The remaining sections are organized as follows. In Section \ref{sec:prelim}, we establish notation and provide a brief review of electrical networks, local times, Gaussian free fields, and the generalized second Ray-Knight theorem. The notation mostly follows \cite{D14}. Section \ref{sec:domination} is devoted to proving the aforementioned stochastic domination in the form of Theorem \ref{thm:domination}. This is very similar to Theorem 3 of \cite{L14}; nevertheless, we include a proof in the notation of our setting. In Section \ref{sec:cover-times}, we apply Theorem \ref{thm:domination} to cover times to obtain Theorem \ref{thm:cover-concentration}. The final section contains acknowledgements.
\section{Definitions and preliminaries} \label{sec:prelim}
An \emph{electrical network} $G$ is a finite, undirected graph $(V, E)$, allowing self-loops, together with positive weights on the edges called \emph{conductances}. We use either $c_{xy}$ or $c_{yx}$ to denote the conductance of an edge $(x, y)$, and for vertices $x, y \in V$ that do not share an edge, we define $c_{xy} = 0$. It is convenient to define the quantity $c_x = \sum_{y \in V} c_{xy}$, which we refer to as the \emph{total conductance} at $x$.
The name ``electrical network'' comes from the fact that $G$ can be used to model an electric circuit, where each edge $(x, y)$ corresponds to placing a resistor with resistance $\frac{1}{c_{xy}}$ between vertices $x$ and $y$. For any $x, y \in G$, we can define the \emph{effective resistance} $R_{\text{eff}}(x, y)$ between $x$ and $y$ to be the physical resistance when a voltage is applied between $x$ and $y$. Mathematically, this quantity can be defined as a certain minimum energy (see Chapter 9 of \cite{LPW09} for more background on effective resistance and electrical networks).
There is a canonical \emph{discrete time random walk} on an electrical network defined by taking the transition probability from $x$ to $y$ to be $\frac{c_{xy}}{c_x}$. In the case where the non-zero conductances are all equal, this reduces to the simple random walk on the underlying graph.
We will also want to consider the \emph{continuous time random walk} on an electrical network. This is a continuous time process $\{ X_t \}_{t \in \mathbb{R}^+}$ which can be sampled by having the same transition probabilities as the discrete time walk but introducing unit exponential waiting times between transitions. (Contrast this with the discrete time random walk, which we can think of as having waiting times that are deterministically equal to $1$.)
In what follows, unless otherwise specified, all the electrical networks we consider will have a distinguished vertex $v_0 \in V$, and all random walks will be assumed to start at $v_0$.
\subsection{Local times} \label{subsec:local-times}
Let $X = \{ X_t \}_{t \in \mathbb{Z}^+}$ be a discrete time random walk on an electrical network $G$. For each time $t$ and vertex $v$, we define the quantity \[ L^X_t(v) = \sum_{i = 0}^t \mathbf{1}_{\{ X_i = v \}}, \] which counts the number of visits of $X$ to $v$ up to time $t$.
We also define a continuous analogue of $L^X_t(v)$. Suppose now that $X = \{ X_t \}_{t \in \mathbb{R}^+}$ is a continuous time random walk on $G$. For any time $t \ge 0$ and vertex $v \in V$, we define the \emph{local time} $\mathcal{L}^X_t(v)$ of $X$ at $v$ to be \[ \mathcal{L}^X_t(v) = \frac{1}{c_v} \int_0^t \mathbf{1}_{\{ X_s = v \}} ds. \] Note the factor of $\frac{1}{c_v}$; this is a convenient normalization for various formulas. When there is no risk of confusion about the random walk $X$, we will sometimes shorten the notation to $L_t(v)$ or $\mathcal{L}_t(v)$.
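Note that with this normalization, $\sum_{v \in V} c_v \mathcal{L}^X_t(v) = t$ for every $t \ge 0$, since the indicator functions sum to $1$.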
Clearly, the cover time is related to the local time; it is the first time that all local times are positive. For a continuous time random walk $X$, we have \[ \tau_{\text{cov}} = \inf \left\{ t \ge 0 : \min_{x \in V} \mathcal{L}^X_t(x) > 0 \right\}. \] We will also frequently consider the first time that $v_0$ accumulates a certain amount of local time. We give a formal definition for this stopping time. For a continuous time random walk $X$ and any $t > 0$, define the \emph{inverse local time} $\tau^+(t)$ as \[ \tau^+(t) = \inf \{ s \ge 0 : \mathcal{L}^X_s(v_0) \ge t \}. \] It will always be clear what $X$ is, so it is not included in the notation for the sake of brevity.
\subsection{Gaussian free fields} \label{subsec:gff}
For an electrical network $G = (V, E)$, the Gaussian free field $\eta_S$ with boundary $S \subset V$ is defined to be a random variable taking values in the set $\mathbb{R}^{V \setminus S}$ of real-valued functions on $V \setminus S$. Its probability density at an element $f \in \mathbb{R}^{V \setminus S}$ is proportional to \begin{equation} \label{eq:gff-def} \exp \( - \frac{1}{4} \sum_{x, y \in V} c_{xy}(f(x) - f(y))^2 \), \end{equation} \noindent where we define $f(x) = 0$ for each $x \in S$. For our purposes, Gaussian free fields will always have boundary $S = \{ v_0 \}$. Thus, if we refer to \emph{the} Gaussian free field on some network, we will mean the one with this boundary, and we will drop the subscript $S$.
From (\ref{eq:gff-def}) it is clear that $\eta$ is a multidimensional Gaussian random variable. It is not too hard to calculate (e.g., Theorem 9.20 of \cite{J97}) that for all $x, y \in V$, \[ \mathbb{E}\( \eta_x - \eta_y \)^2 = R_{\text{eff}}(x, y), \] which confirms that our definition of the GFF is consistent with the one given in the introduction. Noting that $\eta_{v_0} = 0$, the above formula completely determines the correlations of $\eta$ in terms of the effective resistances.
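For instance, if $G$ is the path $0, 1, \ldots , n$ with unit conductances and $v_0 = 0$, then the density (\ref{eq:gff-def}) factors over edges, the increments $\eta_k - \eta_{k - 1}$ for $1 \le k \le n$ are independent standard Gaussians, and
\[ \mathbb{E}\( \eta_j - \eta_k \)^2 = |j - k| = R_{\text{eff}}(j, k), \]
in agreement with the formula above.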
The Gaussian free field comes into the picture via a class of identities known as Isomorphism Theorems. The first such theorems were proved independently by Ray \cite{R63} and Knight \cite{K63} relating the local times of Brownian motion to a $2$-dimensional Bessel process. More generally, it turns out that for any strongly symmetric Borel right process, there is an identity relating its local times to an associated Gaussian process.
Inspired by formulas of Symanzik \cite{S66} and Brydges, Fr\"ohlich, and Spencer \cite{BFS82}, Dynkin \cite{D84} gave the first isomorphism of this type to be expressed in terms of Gaussian free fields. Various related identities were subsequently discovered by Marcus and Rosen \cite{MR92}, Eisenbaum \cite{E95}, Le Jan \cite{lJ10}, Sznitman \cite{S12}, and others. There is a nice version of the isomorphism in the case of continuous time random walks on finite electrical networks, first appearing in \cite{EKMRS00} (see also Theorem 8.2.2 of the book by Marcus and Rosen \cite{MR06}).
\begin{theorem}[Generalized Second Ray-Knight Theorem] \label{thm:second-ray-knight}
Let $G = (V, E)$ be an electrical network, with a given vertex $v_0
\in V$. Let $X = \{X_t\}_{t \ge 0}$ be a continuous time random walk
on $G$, and for any $t > 0$, define $\tau^+(t) = \inf \{ s \ge 0 :
\mathcal{L}^X_s(v_0) \ge t \}$ to be the first time that $v_0$ accumulates
local time $t$. Then, we have
\[ \left\{ \mathcal{L}^X_{\tau^+(t)}(x) + \frac{1}{2} \eta_x^2 \right\}_{x \in V} \laweq \left\{ \frac{1}{2}\(\eta_x + \sqrt{2t}\)^2 \right\}_{x \in V}. \] \end{theorem}
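\noindent As a quick consistency check, both sides reduce to $t$ at the base vertex $x = v_0$: we have $\eta_{v_0} = 0$, and since $s \mapsto \mathcal{L}^X_s(v_0)$ is continuous, $\mathcal{L}^X_{\tau^+(t)}(v_0) = t$.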
\noindent For more background on isomorphism theorems, we refer the interested reader to \cite{MR06}. See also \cite{lJ11} for information relating Gaussian free fields to loop measures.
\subsection{Random walks on paths and the first Ray-Knight theorem} \label{subsec:path-walks}
The content of this subsection may appear somewhat unmotivated before reading Section \ref{sec:domination}. The reader may wish to first skim this subsection and revisit it when reading Section \ref{subsec:discrete-local-times} where it is used.
We will need a few basic facts concerning the special case where the underlying graph of $G$ is a path. In this setting, it is a classical theorem proved independently by Ray and Knight that the local times of a continuous time random walk can be related to Brownian motion.
\begin{theorem}[First Ray-Knight Theorem] \label{thm:first-ray-knight}
For any $a > 0$, let $B_t$ be a standard one-dimensional Brownian
motion started at $B_0 = a$, and let $T = \inf \{ t : B_t = 0
\}$. Let $\{ W_t \}_{t \ge 0}$ be a standard two-dimensional
Brownian motion. Then,
\[ \Big\{ \mathcal{L}^{B_t}_T(x) : x \in [0, a] \Big\} \laweq \Big\{ |W_x|^2 : x \in [0, a] \Big\}, \]
where $\mathcal{L}^{B_t}_T$ denotes the local time of Brownian motion. \end{theorem}
In Section \ref{subsec:local-times}, we did not define the local time of Brownian motion, which requires some minor technicalities due to the fact that it can only be defined as a density. For background on Brownian local times and Theorem \ref{thm:first-ray-knight}, we refer the reader to \cite{MP10}. However, we will only use a discretized version of Theorem \ref{thm:first-ray-knight}, where we restrict our attention to a finite set of values for $x$. This is equivalent to replacing the Brownian motion $B_t$ with a continuous time random walk on a path.
\begin{corollary} \label{cor:first-ray-knight}
Let $G = (V, E)$ be an electrical network whose underlying graph is
a path, with vertices labeled $0, 1, 2, \ldots , N$ and conductances
$c_{k, k + 1}$ between $k$ and $k + 1$ for $0 \le k < N$. Let $X_t$
be a continuous time random walk on $G$ started at $X_0 = N$, and
let $T = \inf \{ t : X_t = 0 \}$. Define
\[ a_k = \sum_{i = 0}^{k - 1} \frac{1}{c_{i, i + 1}}, \]
and let $\{ W_t \}_{t \ge 0}$ be a standard two-dimensional Brownian
motion. Then,
\[ \Big\{ \mathcal{L}^X_T(k) : 1 \le k < N \Big\} \laweq \Big\{ |W_{a_k}|^2 : 1 \le k < N \Big\}. \] \end{corollary} \begin{proof}
The equivalence to Theorem \ref{thm:first-ray-knight} can be seen as
follows. For any $x \in \mathbb{R}$, let $B_t$ be a Brownian motion started
at $x$ and stopped upon hitting $x - r$ or $x + s$. Then, the local
time accumulated at $x$ is distributed as an exponential random
variable with mean $\frac{rs}{r + s}$.
When $x = a_k$, $r = \frac{1}{c_{k, k - 1}}$ and $s = \frac{1}{c_{k,
k + 1}}$, this corresponds to an exponential jump time from the
vertex $k$ in $G$, scaled by a factor of $\frac{1}{c_{k, k - 1} +
c_{k, k + 1}}$ which appears in the definition of $\mathcal{L}^X_T(k)$. \end{proof}
In light of Corollary \ref{cor:first-ray-knight}, it is useful to know something about two-dimensional Brownian motion. For our purposes, we need the following estimate, which is a quantitative version of the standard fact that two-dimensional Brownian motion is not point-recurrent.
\begin{lemma} \label{lem:brownian-disk-avoidance}
Let $W_t$ be a standard two-dimensional Brownian motion. For any
$\epsilon \in (0, 1)$ and $\lambda > 0$, we have
\[ \mathbb{P} \( \inf_{\epsilon \le t \le 1} |W_t|^2 < \lambda \) \le \frac{2}{\log \epsilon^{-1}} + \frac{3}{\epsilon} \exp \( - \frac{\log \lambda^{-1}}{\log \epsilon^{-1}} \). \] \end{lemma} \begin{proof}
See Appendix. \end{proof}
Finally, the next lemma shows that certain conditioned random walks on paths are equivalent to random walks on a path of different conductances. Thus, the first Ray-Knight theorem may be applied in a conditional setting as well. This will be important when we study random walk transitions on general electrical networks.
\begin{lemma} \label{lem:conditioned-path}
Let $N$ be a positive integer and $r > 0$ a real number.
Consider an electrical network $G = (V, E)$ whose underlying graph
is a path, with vertices labeled $0, 1, 2, \ldots , N + 1$. Suppose
that the conductances are $c_{k, k + 1} = 1$ for $0 \le k < N$ and
$c_{N, N + 1} = r$. Let $X = \{X_t\}_{t \ge 0}$ be a discrete time
random walk on $G$ started at $N$, and let $\tau$ be the first time
that $X$ hits $0$ or $N + 1$.
On the other hand, let $G'$ be a path on vertices $0, 1, 2, \ldots ,
N$ with conductances
\[ c'_{k, k + 1} = \frac{\(N - k - 1 + \frac{1}{r}\)\(N - k + \frac{1}{r}\)}{\frac{1}{r}\(1 + \frac{1}{r}\)} \]
for $0 \le k < N$. Let $Y = \{Y_t\}_{t \ge 0}$ be a discrete time
random walk on $G'$ started at $N$. Then, the paths of $Y$ stopped
upon hitting $0$ have the same distribution as the paths of $X$
conditioned on $X_{\tau} = 0$. \end{lemma} \begin{proof}
This can be easily checked by calculating hitting probabilities,
which can then be used to calculate transition probabilities for
$X_t$ conditioned on $X_\tau = 0$. See Appendix. \end{proof}
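One way to organize the computation (a sketch; the details are in the appendix): the conditioned walk is the Doob $h$-transform of $X$ with
\[ h(k) = \mathbb{P}\( \text{$X$ started at $k$ hits $0$ before $N + 1$} \) = \frac{N - k + \frac{1}{r}}{N + \frac{1}{r}}, \]
and the $h$-transform of a random walk on an electrical network with conductances $c_{k, k + 1}$ is again a random walk on an electrical network, with conductances $c_{k, k + 1} h(k) h(k + 1)$. Up to an overall constant factor, which does not affect the law of the discrete time paths, this yields the conductances $c'_{k, k + 1}$ given above.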
\begin{corollary} \label{cor:conditioned-path}
Let $N$, $r$, and $G$ be as in Lemma \ref{lem:conditioned-path}, and
suppose further that $r < 1$. Let $X$ be a continuous time random
walk on $G$, and let $\tau = \inf \{ t \ge 0 : X_t = 0 \text{ or } N
+ 1 \}$. Then, for any $\epsilon \in (0, 1)$ and $\beta > 0$,
\[ \mathbb{P} \( \min_{\epsilon N \le k < N} \mathcal{L}^X_\tau(k) \le \beta N \,\middle|\, X_\tau = 0 \) \le \frac{2}{\log \epsilon^{-1} - C_\alpha } + \frac{C_\alpha}{\epsilon} \exp \( - \frac{\log \beta^{-1} - C_\alpha}{\log \epsilon^{-1} + C_\alpha} \) \]
where $\alpha = rN$, and $C_\alpha > 0$ is a number depending only on $\alpha$. \end{corollary} \begin{remark} The statement of Corollary \ref{cor:conditioned-path} takes this somewhat awkward form because it will be used for $r$ on the order of $\frac{1}{N}$. \end{remark} \begin{proof}
By Lemma \ref{lem:conditioned-path} (using the same notation), the
paths of $X$ are distributed as a random walk on a path of $N$
edges with conductances
\[ c'_{k, k + 1} = \frac{\(N - k - 1 + \frac{1}{r}\)\(N - k + \frac{1}{r}\)}{\frac{1}{r}\(1 + \frac{1}{r}\)} \]
for $0 \le k < N$. Thus, by Corollary \ref{cor:first-ray-knight},
\[ \mathbb{P} \( \min_{\epsilon N \le k < N} \mathcal{L}^X_\tau(k) \le \beta N \,\middle|\, X_\tau = 0 \) = \mathbb{P} \( \min_{\epsilon N \le k < N} |W_{a_k}|^2 \le \beta N \), \]
where $W_t$ is a two-dimensional Brownian motion, and
\[ a_k = \sum_{i = 0}^{k - 1} \frac{1}{c'_{i, i + 1}} = \sum_{i = 0}^{k - 1} \frac{1}{r} \( 1 + \frac{1}{r} \) \( \frac{1}{N - i - 1 + \frac{1}{r}} - \frac{1}{N - i + \frac{1}{r}} \) \]
\[ = \frac{1}{r} \( 1 + \frac{1}{r} \) \( \frac{1}{N - k + \frac{1}{r}} - \frac{1}{N + \frac{1}{r}} \). \]
From the above equations, the following bounds are easy to verify
for $\epsilon N \le k < N$.
\[ c'_{k - 1, k} + c'_{k, k + 1} \ge 2 \]
\[ a_k \ge \frac{1}{r} \( 1 + \frac{1}{r} \) \( \frac{1}{N - \epsilon N + \frac{1}{r}} - \frac{1}{N + \frac{1}{r}} \) > \frac{\epsilon N}{(1 + rN)^2}. \]
\[ a_k \le a_N \le \frac{2}{r}. \]
It follows that
\[ \mathbb{P} \( \min_{\epsilon N \le k < N} \mathcal{L}^X_\tau(k) \le \beta N \,\middle|\, X_\tau = 0 \) \le \mathbb{P} \( \inf_{\frac{\epsilon N}{(1 + rN)^2} \le t \le \frac{2}{r}} |W_t|^2 \le \beta N \) \]
\[ = \mathbb{P} \( \inf_{\frac{\epsilon rN}{2 (1 + rN)^2} \le t \le 1} |W_t|^2 \le \frac{\beta r N}{2} \) = \mathbb{P} \( \inf_{\frac{\epsilon \alpha}{2 (1 + \alpha)^2} \le t \le 1} |W_t|^2 \le \frac{\beta \alpha}{2} \) \]
\[ \le \frac{2}{\log \epsilon^{-1} - C_\alpha } + \frac{C_\alpha}{\epsilon} \exp \( - \frac{\log \beta^{-1} - C_\alpha}{\log \epsilon^{-1} + C_\alpha} \), \]
for $C_\alpha$ sufficiently large. In the second line, we have used
the scale-invariance of Brownian motion, and the third line is an
application of Lemma \ref{lem:brownian-disk-avoidance}. \end{proof}
\section{Stochastic domination in the generalized second Ray-Knight theorem} \label{sec:domination}
The goal of this section is to prove the following stochastic domination theorem, which is a variant of Theorem 3 in \cite{L14}.
\begin{theorem}[variant of \cite{L14}, Theorem 3] \label{thm:domination}
Let $\tau^+(t)$ and $\eta$ be as in Theorem
\ref{thm:second-ray-knight}. Then, we have
\[ \left\{ \sqrt{\mathcal{L}_{\tau^+(t)}(x)} : x \in V \right\} \preceq \frac{1}{\sqrt{2}} \left\{ \max \( \eta_x + \sqrt{2t}, 0 \) : x \in V \right\}, \]
where $\preceq$ denotes stochastic domination. \end{theorem}
Theorem \ref{thm:domination} extends Theorem 2.3 from \cite{D14}, which proves the result for trees. The approach in \cite{D14} uses a Markovian property of local times for trees which does not seem to extend to general electrical networks. We take a different approach of embedding the finite-dimensional Gaussian free field inside a larger infinite-dimensional Gaussian free field, which has desirable continuity properties that were not apparent in the finite-dimensional setting. As mentioned in the introduction, we discovered while writing up our results that this idea appeared earlier in \cite{L14}.
Let us first give a heuristic description of the approach. Recall that the continuous time random walk on an electrical network makes jumps at exponentially distributed random intervals. An equivalent way of sampling the continuous time random walk is to perform a Brownian motion along the edges of the network. By this we mean that our discrete state space $V$ is replaced by a larger state space $\widehat{V}$ which includes not only the vertices in $V$ but also each point along each edge of $E$ (regarding the edges as line segments, so that $\widehat{V}$ is topologically a simplicial $1$-complex). The object $\widehat{V}$ is known as a \emph{metric
graph} and arises in physics and chemistry (see e.g. \S 5 of \cite{CDT05}).
A Brownian motion on $\widehat{V}$ is, informally, a continuous Markov process $\widehat{X} = \{ \widehat{X}(t) \}_{t \ge 0}$ taking values in $\widehat{V}$ that behaves like a one-dimensional Brownian motion on edges. The earliest rigorous development of this idea we could find was carried out by Baxter and Chacon \cite{BC84}. See also \cite{KPS12} for a more recent treatment.
It turns out that the Gaussian free field $\widehat{\eta}$ on $\widehat{V}$ (without defining this precisely) is almost surely continuous in the topology of $\widehat{V}$.\footnote{The Gaussian
free field on $\widehat{V}$ can be constructed by sampling the GFF
on $V$ and then sampling Brownian bridges on each edge.} We can also define a notion of local time $\mathcal{L}^{\widehat{X}}_t(v)$, and we can define the stopping time $\tau^+(t)$ analogously to the discrete case. For convenience, let us write $\widehat{\mathcal{L}}_t$ for $\mathcal{L}^{\widehat{X}}_{\tau^+(t)}$. With an appropriate normalization, the restrictions of $\widehat{\eta}$ and $\widehat{\mathcal{L}}_t$ to $V \subset \widehat{V}$ have the same laws as the corresponding objects on the original network $G = (V, E)$. The generalized second Ray-Knight theorem translates to \begin{equation} \label{eq:continuous-isomorphism}
\left\{ \widehat{\mathcal{L}}_t(v) + \frac{1}{2}
\widehat{\eta}_v^2 : v \in \widehat{V} \right\} \laweq \left\{
\frac{1}{2} \( \widehat{\eta}'_v + \sqrt{2t} \)^2 : v \in
\widehat{V} \right\}, \end{equation} where $\widehat{\eta}'$ is another copy of $\widehat{\eta}$, and $c_v$ is a continuous analogue of the total conductance at a vertex.
Now, suppose that $\widehat{\eta}$ and $\widehat{\eta}'$ are coupled in a way so that the two sides in equation \ref{eq:continuous-isomorphism} are actually equal. Consider the function $f: \widehat{V} \to \mathbb{R}$ given by $f(x) = (\widehat{\eta}'_x + \sqrt{2t}) - \widehat{\eta}_x$. We have that $f(v_0) = \sqrt{2t} > 0$, $f$ is continuous, and if $f(x) = 0$, then $\widehat{\mathcal{L}}_t(x) = 0$. It turns out that the set $U = \{ v \in \widehat{V} : \widehat{\mathcal{L}}_t(v) > 0 \}$ is connected, and clearly it includes $v_0$. It follows that $f(x) > 0$ for all $x \in U$, which is exactly the desired stochastic domination once we restrict to $V \subset \widehat{V}$.
The assertion that $U$ is connected deserves some elaboration. It is intuitively clear that the closure of $U$ should be connected, since any point $v \in \widehat{V}$ which accumulates positive local time must have been visited along some connected path from $v_0$ to $v$. Thus, every non-trivial segment along this path should have also accumulated positive local time.
On the other hand, it is not immediately obvious why $U$ itself is connected, since there might be local times of $0$ at isolated points. However, we can see heuristically that this pathology doesn't occur by the first Ray-Knight theorem. Recall from Section \ref{subsec:path-walks} that the first Ray-Knight theorem equates the local times of a certain stopped Brownian motion to the distance of a planar Brownian motion from the origin. Because planar Brownian motion is not point-recurrent, the local times are \emph{all} positive almost surely, and in particular, the set of points with $0$ local time does not have isolated points.
To avoid technicalities, we will not actually use Brownian motion in our proof. Instead, we will use a discrete approximation of Brownian motion and pass to the limit. Arguments involving the continuity of Gaussian free fields and positivity of local times will be translated into corresponding quantitative estimates.
\subsection{A discrete refinement of $G$} \label{subsec:discrete-refinement}
Recall our setting of an electrical network $G = (V, E)$ with conductances $\{ c_{xy} : x, y \in V \}$. For each positive integer $N > 1$, we define a refinement $G_N = (V_N, E_N)$ by replacing each edge $(x, y) \in E$ with a length $N$ path whose vertices we denote by
\[ \{ x = v_{xy, 0}, v_{xy, 1}, \ldots , v_{xy, N} = y \}. \]
We thus have edges between $v_{xy, i}$ and $v_{xy, i + 1}$ for each $0 \le i < N$. We will use $v_{yx, i}$ to denote the same vertex as $v_{xy, N - i}$, and we will regard $V$ as a subset of $V_N$, so that a vertex $x \in V$ will sometimes be considered as a vertex in $V_N$.
We choose the conductances of $G_N$ so that the effective resistance between $x, y \in V$ as vertices in $G$ will be the same when they are considered as vertices in $G_N$. In particular, we set the conductance between $v_{xy, i}$ and $v_{xy, i + 1}$ to be $N c_{xy}$. Since the effective resistances are equivalent, $G$ is in some sense a projection of $G_N$. The following proposition makes this explicit.
\begin{proposition} \label{prop:projection} Let $\eta$ be the GFF on $G$, and let $X$ be a continuous time random walk on $G$. Let $\eta_N$ and $X_N$ denote the corresponding objects for $G_N$. Then, for any $t > 0$ we have the following two identities in law.
\[ \{ \eta_{N, v} : v \in V \} \laweq \{ \eta_v : v \in V \} \] \[ \left\{ \mathcal{L}^{X_N}_{\tau^+(t)}(x) : x \in V \right\} \laweq \Big\{ \mathcal{L}^X_{\tau^+(t)}(x) : x \in V \Big\}. \] \end{proposition}
The identity between $\eta_N$ and $\eta$ is immediate from the equivalence of effective resistances. The identity between local times then follows from Theorem \ref{thm:second-ray-knight}. However, there is also a very direct way to see the equivalence of local times which we now describe.
If $X_N(t)$ is a continuous time random walk on $G_N$ started at $v_0$, then $X_N(t)$ induces a random walk $X_N^G(t)$ on $G$ by only recording the time spent in $V$. More formally, define $t_0 = 0$, and for each $i \ge 0$, define \[ t_{i + 1} = \inf \{ t > t_i : X_N(t) \in V \text{ and } X_N(t) \ne X_N(t_i)\}.\footnote{We are taking our process $X_N$ to be right
continuous, so the infimum is achieved, and in particular $X_N(t_{i
+ 1}) \in V$.} \] Define also \[ s_i = \int_{t_i}^{t_{i + 1}} \mathbf{1}_{\{ X_N(s) = X_N(t_i) \}} ds \] to be the amount of time spent in $X_N(t_i)$ during the time interval $[t_i, t_{i + 1}]$.
Then, consider the $V$-valued process $X_N^G(t)$ which starts at $v_0$ and, for each $i$, jumps to $X_N(t_{i + 1})$ at time $\sum_{j = 1}^i s_j$. Note that if $X_N(t_i) = x \in V$, at the next jump $X_N$ transitions to $v_{xy, 1}$ with probability $\frac{c_{xy}}{c_x}$ for each $y$ neighboring $x$ in $G$. After that, $X_N$ behaves like a simple random walk on $\mathbb{Z}$ started at $1$ and stopped upon hitting either $0$ (corresponding to $v_{xy, 0} = x$) or $N$ (corresponding to $v_{xy, N} = y$). Thus, with probability $\frac{N - 1}{N}$ it returns to $x$, and with probability $\frac{1}{N}$ it hits $y$.
Consequently, between times $t_i$ and $t_{i + 1}$, the number of times $X_N$ visits $x$ is geometrically distributed with mean $N$, and so the accumulated local time $s_i$ is exponentially distributed with mean $N$. Moreover, we see that \[ \mathbb{P} \( X_N(t_{i + 1}) = y \,\middle|\, X_N(t_i) = x \) = \frac{c_{xy}}{c_x}, \] so $X_N^G(t)$ has the same law as a continuous time random walk on $G$ except that the waiting times between jumps are scaled by $N$. In particular, we have \[ \left\{ \mathcal{L}^{X_N}_{\tau^+(t)}(x) : x \in V \right\} = \left\{ \frac{1}{N} \cdot \mathcal{L}^{X^G_N}_{\tau^+(Nt)}(x) : x \in V \right\} \laweq \Big\{ \mathcal{L}^X_{\tau^+(t)}(x) : x \in V \Big\}, \] where $X$ is a continuous time random walk on $G$. Note that the factor of $N$ appearing in the middle expression comes from the normalization by total conductance at $x$, which differs for $G$ and $G_N$.
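For completeness, the fact used above, that a geometric sum of i.i.d. unit exponentials is again exponential, can be verified with moment generating functions: if $K$ is geometric on $\{ 1, 2, \ldots \}$ with mean $N$ and $S = \sum_{i = 1}^{K} T_i$ with the $T_i$ i.i.d. unit exponentials independent of $K$, then for $\theta < \frac{1}{N}$,
\[ \mathbb{E}\, e^{\theta S} = \mathbb{E} \( \frac{1}{1 - \theta} \)^{K} = \frac{\frac{1}{N} \cdot \frac{1}{1 - \theta}}{1 - \( 1 - \frac{1}{N} \) \frac{1}{1 - \theta}} = \frac{1}{1 - \theta N}, \]
which is the moment generating function of an exponential random variable with mean $N$.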
\subsection{Local times of $G_N$} \label{subsec:discrete-local-times}
We will need two estimates concerning local times on $G_N$, stated as Lemmas \ref{lem:near-local-time} and \ref{lem:bridge-local-time} below. These correspond to our assertion that the set $U$ is connected in the heuristic proof outline provided at the beginning of the section.
In the lemmas that follow, we consider a continuous time random walk $X_N(t)$ on $G_N$ started at a vertex $x \in V$. Let $\tau_x$ denote the first time the walk hits another vertex $y \in V$ distinct from $x$. The first estimate states, roughly, that it is very likely for vertices near $x$ to accumulate significant local time.
We will need a standard concentration estimate for sums of i.i.d. exponential random variables. Unfortunately, we were unable to find a reference that contained both tail bounds, so a short proof is included in the appendix.
\begin{lemma} \label{lem:exp-conc}
Let $X_1, X_2, \ldots , X_N$ be i.i.d. exponential random variables
with mean $\mu$. Then, for any $\alpha \in [0, 1]$, we have
\[ \mathbb{P}\( \left| \sum_{i = 1}^N X_i - \mu N \right| \ge \alpha \mu N \) \le 2 e^{-\frac{1}{4} \alpha^2 N}. \] \end{lemma} \begin{proof}
See Appendix. \end{proof}
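In outline (a standard Chernoff argument; the appendix contains the details): writing $S = \sum_{i = 1}^N X_i$, applying Markov's inequality to $e^{\pm \theta S}$ with $\mathbb{E}\, e^{\theta X_1} = (1 - \theta \mu)^{-1}$ for $\theta < \frac{1}{\mu}$, and optimizing over $\theta$ gives
\[ \mathbb{P}\( S \ge (1 + \alpha) \mu N \) \le e^{-N (\alpha - \log(1 + \alpha))}, \qquad \mathbb{P}\( S \le (1 - \alpha) \mu N \) \le e^{-N (-\alpha - \log(1 - \alpha))}, \]
and both exponents are at least $\frac{1}{4} \alpha^2$ for $\alpha \in [0, 1]$, which yields the stated bound after a union bound.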
\begin{lemma} \label{lem:near-local-time}
Let $y \in V$ be any neighbor of $x$ in $G$, let $\epsilon \in \(0,
\frac{1}{2}\), \lambda > 0$ be given, and define $k = \lfloor
\epsilon N \rfloor$. Then,
\[ \mathbb{P}\( \min_{0 \le i \le k} \mathcal{L}_{\tau_x}(v_{xy, i}) < \lambda \) \le C_G \cdot \epsilon N \( \lambda + \exp \( - \frac{\lambda N}{8 C_G} \) \) \]
for some constant $C_G$ depending on $G$ but not $N$. \end{lemma} \begin{proof}
Recall the notation $L_{\tau_x}(x)$ for the number of visits to $x$
up until time $\tau_x$, and recall also from Section
\ref{subsec:discrete-refinement} that $L_{\tau_x}(x)$ is distributed
as a geometric random variable with mean $N$. Conditioning on
$L_{\tau_x}(x)$, we may decompose the walk up until time $\tau_x$
into $L_{\tau_x}(x)$ excursions from $x$ and a path to a neighbor of
$x$ in $G$. Each excursion may be sampled independently.
Let us now consider one excursion. The first step of the excursion
goes to some vertex $v_{xz, 1}$, where $z$ is a neighbor of $x$ in
$G$. As noted earlier, from there the walk behaves like a simple
random walk on $\mathbb{Z}$ started at $1$, stopped upon hitting $0$
(corresponding to the return to $x$), and conditioned on hitting $0$
before $N$ (corresponding to avoiding $z$).
Let $E_m$ denote the event that a simple random walk on $\mathbb{Z}$ started
at $1$ hits $m$ before $0$. By a standard martingale argument, we
have $\mathbb{P}(E_m) = \frac{1}{m}$. Thus,
\[ \mathbb{P} \( E_k \,\middle|\, E_N^c \) = \frac{\mathbb{P}(E_k \cap E_N^c)}{\mathbb{P}(E_N^c)} \ge \frac{1}{k} - \frac{1}{N} > \frac{1}{2k}. \]
In particular, this implies that for each excursion, there is a
$\frac{c_{xy}}{c_x}$ probability that the first step is $v_{xy, 1}$,
and with probability at least $\frac{1}{2k}$ the excursion will then
hit $v_{xy, k}$. In other words, letting $p$ be the probability that
a single excursion includes $v_{xy, k}$, we have $p \ge
\frac{c_{xy}}{2kc_x}$.
Let $L$ denote the number of excursions which hit $v_{xy, k}$. By
the preceding discussion, it is the sum of $L_{\tau_x}(x)$ i.i.d. Bernoulli
random variables with expectation $p$. Since $L_{\tau_x}(x)$ is geometrically
distributed with mean $N$, it follows that $L$ is geometrically
distributed with mean $pN$. We thus have
\begin{equation} \label{eq:P(L small)}
\mathbb{P} \( L < 2 \lambda c_x N \) \le \frac{2 \lambda c_x N}{pN} \le \frac{4 \lambda c_x^2 \epsilon N}{c_{xy}}.
\end{equation}
Note that for each $i \in \{ 0, 1, 2, \ldots, k \}$, the vertex
$v_{xy, i}$ is visited at least $L$ times, and the total conductance
of $v_{xy, i}$ is at most $Nc_x$. Thus, $\mathcal{L}_{\tau_x}(v_{xy, i})$
stochastically dominates $\frac{1}{Nc_x}$ times the sum of $L$
i.i.d. unit exponentials. By Lemma \ref{lem:exp-conc} with $\alpha =
\frac{1}{2}$, we have
\[ \mathbb{P} \Big( c_x N \cdot \mathcal{L}_{\tau_x}(v_{xy, i}) < \lambda c_x N \,\Big|\, L \ge 2 \lambda c_x N \Big) \le 2 \exp \( - \frac{\lambda c_x N}{8} \), \]
and so
\[ \mathbb{P} \( \min_{0 \le i \le k} \mathcal{L}_{\tau_x}(v_{xy, i}) < \lambda \,\middle|\, L \ge 2 \lambda c_x N \) \le 2 \epsilon N \exp \( - \frac{\lambda c_x N}{8} \). \]
Combining this with equation (\ref{eq:P(L small)}) gives
\[ \mathbb{P} \( \min_{0 \le i \le k} \mathcal{L}_{\tau_x}(v_{xy, i}) < \lambda \) \le \frac{4 \lambda c_x^2 \epsilon N}{c_{xy}} + 2 \epsilon N \exp \( - \frac{\lambda c_x N}{8} \), \]
which takes the desired form for $C_G$ sufficiently large. \end{proof}
\begin{corollary} \label{cor:near-local-time}
Let $S = \{ y \in V : (x, y) \in E \}$ be the set of neighbors of
$x$ in $G$. Then,
\[ \mathbb{P}\( \min_{y \in S} \min_{0 \le k \le \frac{N}{\log^3 N}} \mathcal{L}_{\tau_x}(v_{xy, k}) < \frac{\log^2 N}{N} \) \longrightarrow 0 \]
as $N \rightarrow \infty$. \end{corollary} \begin{proof}
This follows immediately from Lemma \ref{lem:near-local-time} by
taking $\lambda = \frac{\log^2 N}{N}$ and $\epsilon =
\frac{1}{\log^3 N}$. \end{proof}
The second estimate states that, conditioned upon $X_N(\tau_x) = y$, it is very likely that vertices $v_{xy, k}$ are visited a large number of times, as long as $k$ is not too close to $N$. This essentially follows from Corollary \ref{cor:conditioned-path} from Section \ref{subsec:path-walks}.
\begin{lemma} \label{lem:bridge-local-time}
Let $y$ be a neighbor of $x$ in $G$. Then, for any $\epsilon,
\lambda \in (0, 1)$, we have
\[ \mathbb{P}\( \min_{\epsilon N \le k < N} \mathcal{L}_{\tau_x}(v_{yx, k}) < \lambda \,\middle|\, X_N(\tau_x) = y \) \le \frac{2}{\log \epsilon^{-1} - C_G} + \frac{C_G}{\epsilon} \exp \( - \frac{\log \lambda^{-1} - C_G }{\log \epsilon^{-1} + C_G} \) \]
for some constant $C_G$ depending on $G$ but not $N$. \end{lemma} \begin{proof}
Let $S = \{ z \in V : (x, z) \in E \}$. Note that the process $X_N$
up to time $\tau_x$ induces a continuous time random walk $Y = \{
Y_t \}_{t \ge 0}$ on the vertices
\[ \{ v_{xy, 0}, v_{xy, 1}, \ldots , v_{xy, N} \} \cup S \]
by ignoring visits to vertices outside of that set (namely, those of
the form $v_{xz, k}$ for $z \ne y$ and $1 \le k < N$). We can define
a stopping time $T_x$ analogous to $\tau_x$ as the first time $Y$
hits $S$.
For convenience, define $p_{xz} = \frac{c_{xz}}{c_x}$ for each $z
\in S$. Note that
\[ \mathbb{P}\( \text{$X_N$ hits $v_{xy, 1}$ before hitting $S$ or returning to $x$} \) = p_{xy} \]
\[ \mathbb{P}\( \text{$X_N$ hits $S$ before hitting $v_{xy, 1}$ or returning to $x$} \) = \frac{1 - p_{xy}}{N}. \]
Thus, we can interpret $Y$ up to time $T_x$ as a continuous time
random walk on a path with vertices $(w_0, w_1, w_2, \ldots , w_{N +
1})$, where all the conductances are $1$ except that the
conductance between $w_N$ and $w_{N + 1}$ is $\frac{1 -
p_{xy}}{Np_{xy}}$. Here, $w_k$ corresponds to $v_{yx, k}$ (so $Y$
is started at $w_N$), and $w_{N + 1}$ corresponds to any vertex in
$S \setminus \{ y \}$ (we may combine all of these states because
$Y$ is stopped upon hitting this set anyway).
We are now in the setting of Corollary \ref{cor:conditioned-path},
as conditioning on $Y_{T_x} = y$ corresponds to conditioning on
hitting $w_0$ before $w_{N + 1}$. Following the notation of
Corollary \ref{cor:conditioned-path}, we have $r = \frac{1 -
p_{xy}}{Np_{xy}}$, so that $\alpha = \frac{1 - p_{xy}}{p_{xy}}$.
We apply the corollary with $\beta = \lambda c_{xy}$. Note
that the total conductances at $v_{yx, k}$ are $2Nc_{xy}$ as opposed
to $2$ in the statement of Corollary \ref{cor:conditioned-path}, so
the local times will be scaled accordingly. It follows that
\[ \mathbb{P}\( \min_{\epsilon N \le k < N} \mathcal{L}^{X_N}_{\tau_x}(v_{yx, k}) < \lambda \,\middle|\, X_N(\tau_x) = y \) = \mathbb{P}\( \min_{\epsilon N \le k < N} \mathcal{L}^Y_{T_x}(w_k) < \lambda c_{xy} N \,\middle|\, Y_{T_x} = y \) \]
\[ \le \frac{2}{\log \epsilon^{-1} - C_\alpha} + \frac{C_\alpha}{\epsilon} \exp \( - \frac{\log \lambda^{-1} - \log c_{xy} - C_\alpha }{\log \epsilon^{-1} + C_\alpha} \) \]
\[ \le \frac{2}{\log \epsilon^{-1} - C_G} + \frac{C_G}{\epsilon} \exp \( - \frac{\log \lambda^{-1} - C_G }{\log \epsilon^{-1} + C_G} \), \]
whenever $C_G > \max(C_\alpha, C_\alpha + \log c_{xy})$. In
particular, since there are only finitely many possible values of
$p_{xy}$ and hence of $\alpha$, we can choose $C_G$ sufficiently
large so that this holds independently of $N$. This proves the
lemma. \end{proof}
\begin{corollary} \label{cor:bridge-local-time}
Let $y$ be a neighbor of $x$ in $G$. Then, we have
\[ \mathbb{P}\( \min_{\frac{N}{\log^3 N} \le k \le N} \mathcal{L}_{\tau_x}(v_{yx, k}) < \frac{\log^2 N}{N} \,\middle|\, X_N(\tau_x) = y \) \longrightarrow 0 \]
as $N \rightarrow \infty$. \end{corollary} \begin{proof}
We apply Lemma \ref{lem:bridge-local-time} with $\epsilon =
\frac{1}{\log^3 N}$ and $\lambda = \frac{\log^2 N}{N}$. It
suffices to show that both terms on the right hand side tend to
zero. Clearly,
\[ \frac{2}{\log \epsilon^{-1} - C_G} \rightarrow 0 \]
as $N \rightarrow \infty$. To bound the other term, note that for
sufficiently large $N$, we have
\[ \frac{\log \lambda^{-1} - C_G}{\log \epsilon^{-1} + C_G} = \frac{\log N - 2 \log \log N - C_G}{3 \log \log N + C_G} \ge \frac{\log N}{6 \log \log N}, \]
in which case
\[ \frac{C_G}{\epsilon} \exp \( - \frac{\log \lambda^{-1} - C_G}{\log \epsilon^{-1} + C_G} \) \le C_G \log^3 N \exp \( - \frac{\log N}{6 \log \log N} \) \]
\[ = C_G \exp \( - \frac{\log N}{6 \log \log N} + 3 \log \log N \) \longrightarrow 0. \] \end{proof}
\subsection{Proof of Theorem \ref{thm:domination}}
We now prove Theorem \ref{thm:domination}, following the plan outlined at the beginning of the section. Let us first prove an approximation of Theorem \ref{thm:domination}.
\begin{lemma} \label{lem:domination}
Let $t > 0$ be given. Let $\Omega_N$ be a probability space with
random variables $\eta_N$, $\eta'_N$, and $X_N = \{ X_N(t) \}_{t \ge
0}$ such that $\eta_N$ and $\eta'_N$ are distributed as Gaussian
free fields on $G_N$, and $X_N$ is distributed as a continuous time
random walk on $G_N$. Furthermore, suppose that $\eta_N$ and $X_N$
are independent, and almost surely for each $v \in V_N$,
\[ \frac{1}{2} \eta_{N, v}^2 + \mathcal{L}^{X_N}_{\tau^+(t)}(v) = \frac{1}{2} \( \eta'_{N, v} + \sqrt{2t} \)^2. \]
\noindent (Theorem \ref{thm:second-ray-knight} ensures that such a
construction is always possible.) Then, for any $\epsilon > 0$, we
have
\[ \mathbb{P} \( \text{for some $x \in V$, both $\mathcal{L}^{X_N}_{\tau^+(t)}(x) > 0$ and $\eta'_{N, x} + \sqrt{2t} < 0$} \) \le \epsilon \]
\noindent for $N$ sufficiently large. \end{lemma} \begin{remark} \label{rmk:domination}
Note that the hypothesis of Lemma \ref{lem:domination} implies for
each $x \in V$ that
\[ \sqrt{\mathcal{L}^{X_N}_{\tau^+(t)}(x)} \le \frac{1}{\sqrt{2}} \left| \eta'_{N, x} + \sqrt{2t} \right|. \]
Consequently, the conclusion of the lemma may be expressed
equivalently as
\[ \mathbb{P} \( \sqrt{\mathcal{L}^{X_N}_{\tau^+(t)}(x)} > \frac{1}{\sqrt{2}} \max \( 0, \eta'_{N, x} + \sqrt{2t} \) \text{ for some $x \in V$} \) \le \epsilon. \] \end{remark}
\begin{proof}
To shorten notation, we use $\tau^+$ to denote $\tau^+(t)$.
Call a vertex $x \in V$ \emph{well-connected} at time $s$ if there
exists a sequence of vertices $v_0 = w_0, w_1, \ldots , w_n = x$ in
$V_N$ such that $(w_i, w_{i + 1}) \in E_N$ and $\mathcal{L}_s^{X_N}(w_i) \ge
\frac{\log^2 N}{N}$ for each $i$. We will show that with high
probability, every vertex in $V$ with positive local time at time
$\tau^+$ is well-connected.
Recall from the discussion in Section
\ref{subsec:discrete-refinement} that $X_N$ induces a random walk on
$G$ which, when regarded as a sequence of visited vertices
(disregarding holding times), has the same law as a discrete time
random walk on $G$. Thus, one way of sampling from $X_N$ is to first
sample a path
\[ P = (v_0 = x_0, x_1, x_2, \ldots ) \]
of the discrete time random walk on $G$. Then, we construct $X_N$ as
follows. For each $i \ge 0$, let $Y_i(t)$ be a continuous time
random walk on $G_N$ started at $x_i$, and let $\tau_i$ be the first
time that $Y_i$ hits a neighbor of $x_i$ in $G$.
Let $Z_i$ have the law of a copy of $Y_i$ conditioned on the event
$Y_i(\tau_i) = x_{i + 1}$. Then, we may form $X_N$ by concatenating
the walks $Z_i$ up to time $\tau_i$. More formally, we may define
\[ n(s) = \max \left\{ n \ge 1 : \sum_{i = 1}^{n - 1} \tau_i \le s \right\} \]
and set $X_N(s) = Z_{n(s)}\( s - \sum_{i = 1}^{n(s) - 1} \tau_i \)$.
To lighten notation, let us write $\mathcal{L}_i = \mathcal{L}^{Y_i}_{\tau_i}$ and
$\mathbb{P}_i( \cdot ) = \mathbb{P} \( \cdot \,\middle|\, Y_i(\tau_i) = x_{i + 1}\)$,
noting that the randomness of the $Y_i$ are independent. Let $P(s) =
(x_1, x_2, \ldots , x_{n(s)})$ denote the truncation of $P$ up until
time $s$. We will say that $P(s)$ is well-connected if each $x_i$
appearing in $P(s)$ is well-connected at time $s$. Then,
\begin{align}
& \mathbb{P} \Big( \text{$P(\tau^+)$ is not well-connected} \,\Big|\, P(\tau^+) \Big) \nonumber \\
& \hphantom{xxxxxxxxx} \le \sum_{i = 1}^{|P(\tau^+)| - 1} \mathbb{P}_i \( \min_{0 \le k \le N} \mathcal{L}_i(v_{x_ix_{i + 1}, k}) < \frac{\log^2 N}{N} \) \nonumber \\
& \hphantom{xxxxxxxxx} = \sum_{i = 1}^{|P(\tau^+)| - 2} \mathbb{P}_i \( \min_{0 \le k \le N - \frac{N}{\log^3 N}} \mathcal{L}_i(v_{x_ix_{i + 1}, k}) < \frac{\log^2 N}{N} \) + \nonumber \\
& \hphantom{xxxxxxxxx}\hphantom{\;=\;} \sum_{i = 2}^{|P(\tau^+)| - 1} \mathbb{P}_i \( \min_{0 \le k < \frac{N}{\log^3 N}} \mathcal{L}_i(v_{x_ix_{i - 1}, k}) < \frac{\log^2 N}{N} \) \label{eq:path-decomp}
\end{align}
Fix a number $T$ sufficiently large so that $\mathbb{P} \Big( |P(\tau^+)| >
T \Big) \le \frac{\epsilon}{4}$. Again, by the discussion of Section
\ref{subsec:discrete-refinement}, the law of $P(\tau^+)$ does not
depend on $N$, so the number $T$ can be chosen independently of
$N$. Note that by Corollaries \ref{cor:near-local-time} and
\ref{cor:bridge-local-time}, each summand in either sum of the last
expression of (\ref{eq:path-decomp}) is bounded by
$\frac{\epsilon}{8T}$ for sufficiently large $N$. Consequently, for
sufficiently large $N$, the whole expression is bounded by $2
|P(\tau^+)| \cdot \frac{\epsilon}{8T}$, and we have
\begin{align*}
\mathbb{P} \Big( \text{$P(\tau^+)$ is not well-connected} \Big) \le &\; \mathbb{P}
\Big( |P(\tau^+)| > T \Big) + \\
&\; \mathbb{P} \Big( \text{$P(\tau^+)$ is not well-connected} \,\Big|\, |P(\tau^+)| \le T \Big) \\
\le &\; \frac{\epsilon}{4} + 2T \cdot \frac{\epsilon}{8T} = \frac{\epsilon}{2}.
\end{align*}
\noindent Note that almost surely, the vertices $x \in V$ for which
$\mathcal{L}_{\tau^+}(x) > 0$ are exactly those appearing in
$P(\tau^+)$. Thus, we have
\begin{equation} \label{eq:well-connected}
\mathbb{P} \Big( \text{for some $x \in V$, $\mathcal{L}_{\tau^+}(x) > 0$ but $x$ is not well-connected} \Big) \le \frac{\epsilon}{2}.
\end{equation}
We next show that with high probability, the values of $\eta'_N$ at
adjacent vertices do not differ by very much. Consider any $(x, y)
\in E$ and $0 \le k < N$. For notational convenience, let $u =
v_{xy, k}$ and $w = v_{xy, k + 1}$. We have
\[ \mathbb{E} (\eta'_{N, u} - \eta'_{N, w})^2 = R_{\text{eff}}(u, w) \le \frac{1}{N c_{xy}}. \]
\noindent Since $\eta'_{N, u} - \eta'_{N, w}$ has a Gaussian
distribution, it follows that
\[ \mathbb{P} \( |\eta'_{N, u} - \eta'_{N, w}| \ge \frac{\log N}{\sqrt{N}} \) \le \exp \( - c_{xy} \log^2 N \). \]
\noindent Taking a union bound over all adjacent pairs $(u, w) \in
E_N$, we obtain
\begin{equation} \label{eq:gff-continuity}
\mathbb{P} \( \max_{(u, w) \in E_N} |\eta'_{N, u} - \eta'_{N, w}| \ge
\frac{\log N}{\sqrt{N}} \) \le N \exp \( - \( \min_{(x, y) \in E} c_{xy} \) \log^2 N \) \le
\frac{\epsilon}{2}
\end{equation}
\noindent for $N$ sufficiently large.
Finally, we may combine equations (\ref{eq:well-connected}) and
(\ref{eq:gff-continuity}) to deduce the lemma. Indeed, suppose that
for some $x \in V$, we have $\mathcal{L}^{X_N}_{\tau^+}(x) > 0$ but
$\sqrt{2t} + \eta'_{N, x} < 0$. If $x$ is well-connected at time
$\tau^+$, which occurs with high probability by
(\ref{eq:well-connected}), then there exists a path $v_0 = w_0, w_1,
\ldots , w_n = x$ in $G_N$ such that each $\mathcal{L}^{X_N}_{\tau^+}(w_i)$
is at least $\frac{\log^2 N}{N}$. Observe that $\sqrt{2t} +
\eta'_{N, v_0} = \sqrt{2t} > 0$, so for some $i$ we must have
\[ \sqrt{2t} + \eta'_{N, w_i} > 0 \text{ and } \sqrt{2t} + \eta'_{N, w_{i + 1}} < 0. \]
\noindent However, we also have
\[ \frac{1}{\sqrt{2}} \left| \sqrt{2t} + \eta'_{N, w_i} \right| = \sqrt{\mathcal{L}^{X_N}_{\tau^+}(w_i) + \frac{1}{2} \eta_{N, w_i}^2} \ge \frac{\log N}{\sqrt{N}}. \]
\noindent Therefore, this can only happen if
\[ \left| \eta'_{N, w_i} - \eta'_{N, w_{i + 1}} \right| \ge \frac{2 \log N}{\sqrt{N}}. \]
\noindent But by equation (\ref{eq:gff-continuity}), this is
unlikely. Thus, we have
\begin{align*}
& \mathbb{P} \( \text{for some $v \in V$, both $\mathcal{L}^{X_N}_{\tau^+}(v) > 0$ and $\sqrt{2t} + \eta'_{N, v} < 0$} \) \\
& \hphantom{xxxxxxxxx} \le \mathbb{P} \( \max_{(u, w) \in E_N} |\eta'_{N, u} - \eta'_{N, w}| \ge \frac{\log N}{\sqrt{N}} \) + \\
& \hphantom{xxxxxxxxx} \hphantom{\;\le\;} \mathbb{P} \Big( \text{for some $x \in V$, $\mathcal{L}_{\tau^+}(x) > 0$ but $x$ is not well-connected} \Big) \\
& \hphantom{xxxxxxxxx} \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon,
\end{align*}
\noindent proving the lemma. \end{proof}
\noindent Theorem \ref{thm:domination} is now an easy consequence of Lemma \ref{lem:domination}.
\begin{proof}[Proof of Theorem \ref{thm:domination}]
Let $A \subset \mathbb{R}^V$ be any monotone set. Let $\epsilon > 0$ be
given, and take $N$ sufficiently large so that the conclusion of
Lemma \ref{lem:domination} holds.
Let $\eta_N$ be the Gaussian free field on $G_N$, and let $X_N$ be a
continuous time random walk independent of $\eta_N$. We will now try
to define another Gaussian free field $\eta'_{N, v}$ on the same
probability space so as to satisfy the hypotheses of Lemma
\ref{lem:domination}. In fact, by the isomorphism theorem, $\eta'_N$
can be given in terms of $\eta_N$ and the local times up to a choice
of sign in taking the square root.
To determine the signs, we can artificially introduce some
additional randomness. Fix an arbitrary ordering on $\{-1,
1\}^{V_N}$. For each $\sigma = \{ \sigma_v \}_{v \in V_N} \in \{-1,
1\}^{V_N}$, define the function $f_\sigma : \mathbb{R}^{V_N} \to \mathbb{R}$ by
\[ f_\sigma(Z) = \mathbb{P} \( \eta_{N, v} = \sigma_v\sqrt{Z_v} - \sqrt{2t} \text{ for all $v \in V_N$} \,\middle|\, \(\eta_{N, v} + \sqrt{2t}\)^2 = Z_v \text{ for all $v \in V_N$}\). \]
Let $U$ be uniformly distributed on $[0, 1]$ and independent of
$\eta_N$ and $X_N$. For any $u \in [0, 1]$ and $Z \in \mathbb{R}^{V_N}$, we
may define
\[ \sigma^*(u, Z) = \max \left\{ \sigma \in \{-1,1\}^{V_N} : u \ge \sum_{\rho < \sigma} f_\rho(Z) \right\}. \]
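\noindent In other words, for (Lebesgue-)almost every $u \in [0, 1]$, $\sigma^*(u, Z)$ is the unique $\sigma$ with $\sum_{\rho < \sigma} f_\rho(Z) \le u < \sum_{\rho \le \sigma} f_\rho(Z)$. Since $U$ is uniform on $[0, 1]$ and independent of $\eta_N$ and $X_N$, conditionally on $Z$ the sign pattern $\sigma^*(U, Z)$ is therefore distributed according to the weights $\{ f_\sigma(Z) \}_{\sigma}$.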
We can then define
\[ \zeta_{N, v} = \frac{1}{2} \eta_{N, v}^2 + \frac{1}{c_v}\mathcal{L}_{\tau^+(t)}^{X_N}(v) \]
\[ \eta'_{N, v} = \sigma^*\( U, 2\zeta_{N, v} \) \sqrt{2 \zeta_{N, v}} - \sqrt{2t}. \]
We are now in the setting of Lemma \ref{lem:domination}, which gives
\[ \mathbb{P} \( \text{for some $v \in V$, both $\mathcal{L}^{X_N}_{\tau^+(t)}(v) > 0$ and $\eta'_{N, v} + \sqrt{2t} < 0$} \) \le \epsilon, \]
or equivalently (by Remark \ref{rmk:domination}),
\[ \mathbb{P} \( \sqrt{\mathcal{L}^{X_N}_{\tau^+(t)}(x)} > \frac{1}{\sqrt{2}} \max \( 0, \eta'_{N, x} + \sqrt{2t} \) \text{ for some $x \in V$} \) \le \epsilon. \]
Now, let $\eta$ and $X$ be the GFF and a continuous time random walk
on $G$, respectively. By the relationship between $G_N$ and $G$
described in Proposition \ref{prop:projection}, we have
\[ \mathbb{P} \( \left\{ \frac{1}{\sqrt{2}} \max \( 0, \eta_x + \sqrt{2t} \) \right\}_{x \in V} \in A \) = \mathbb{P} \( \left\{ \frac{1}{\sqrt{2}} \max \( 0, \eta'_{N, x} + \sqrt{2t} \) \right\}_{x \in V} \in A \) \]
\[ \ge \mathbb{P} \( \left\{ \sqrt{\mathcal{L}^{X_N}_{\tau^+(t)}(x)} \right\}_{x \in V} \in A \) - \epsilon = \mathbb{P} \( \left\{ \sqrt{\frac{1}{c_x} \mathcal{L}^X_{\tau^+(t)}(x)} \right\}_{x \in V} \in A \) - \epsilon. \]
This holds for each $\epsilon > 0$, so taking $\epsilon \rightarrow
0$, we obtain
\[ \mathbb{P} \( \left\{ \frac{1}{\sqrt{2}} \max \( 0, \eta_x + \sqrt{2t} \) \right\}_{x \in V} \in A \) \ge \mathbb{P} \( \left\{ \sqrt{ \frac{1}{c_x} \mathcal{L}^X_{\tau^+(t)}(x)} \right\}_{x \in V} \in A \), \]
which proves the stochastic domination. \end{proof}
\section{Application to cover times} \label{sec:cover-times}
Theorem \ref{thm:domination} provides good control over the relationship between local times and the Gaussian free field. By showing that various quantities are concentrated around their expectation, one can deduce results pertaining to cover times. In fact, the exact same arguments used in proving Theorem 1.2 of \cite{D14} carry through, replacing Theorem 2.3 there with Theorem \ref{thm:domination} of the previous section. For the sake of completeness, we repeat the main parts of the argument from \cite{D14}. It should be mentioned that the argument for the upper tail bound is originally from \cite{DLP12} (see \S 2.2).
First, we record two auxiliary results used in \cite{D14}. Recall the notation that $M = \mathbb{E} \max_{x \in V} \eta_x$ for the Gaussian free field $\eta$ and $R = \max_{x, y \in V} \mathbb{E} \( \eta_x - \eta_y \)^2$.
\begin{lemma}[Lemma 2.1 of \cite{D14}] \label{lem:inverse-local-time-concentration}
Let $X$ be a continuous time random walk on an electrical network $G
= (V, E)$. Let $c_{\text{tot}} = \sum_{x, y \in V} c_{xy}$ be the total
conductance of $G$. For any $t \ge 0$ and $\lambda \ge 1$,
\[ \mathbb{P}\( \left| \tau^+(t) - c_{\text{tot}} \cdot t \right| \ge \frac{1}{2} \( \sqrt{\lambda R t} + \lambda R \) c_{\text{tot}} \) \le 6 \exp \( - \frac{\lambda}{16} \). \] \end{lemma} \begin{proof}
See Lemma 2.1 of \cite{D14} and the associated remark. We have
replaced $2|E|$ by $c_{\text{tot}}$. \end{proof}
\noindent The next result is a well-known Gaussian concentration bound. See for example Theorem 7.1, Equation (7.4) of \cite{L01}.
\begin{proposition} \label{prop:gaussian-concentration}
Let $\{ \eta_x : x \in S \}$ be a centered Gaussian process on a
finite set $S$, and suppose $\mathbb{E} \eta_x^2 \le \sigma^2$ for all $x
\in S$. Then, for $\alpha > 0$,
\[ \mathbb{P} \( \left| \max_{x \in S} \eta_x - \mathbb{E} \max_{x \in S} \eta_x \right| \ge \alpha \) \le 2 \exp \( - \frac{ \alpha^2}{2 \sigma^2} \). \] \end{proposition}
\noindent Note that by symmetry, $\max$ can be replaced by $\min$ in Proposition \ref{prop:gaussian-concentration}, which is the version that we will use. We now give a proof of Theorem \ref{thm:cover-concentration}, closely following the proof of Theorem 1.2 in \cite{D14}.
\begin{proof}[Proof of Theorem \ref{thm:cover-concentration}.]
We will prove Theorem \ref{thm:cover-concentration} in the slightly
more general setting where $G = (V, E)$ is an electrical network. As
before, define $c_{\text{tot}} = \sum_{x, y \in V} c_{xy}$.
We first estimate $\tau_{\text{cov}}$ in terms of $\tau^+$. Let $\beta \ge 3$
be a parameter to be specified later. In what follows, we will often
use the fact that
\[ R = \max_{x, y \in V} \mathbb{E} \(\eta_x - \eta_y\)^2 \ge \max_{x \in V} \mathbb{E} \eta_x^2. \]
To prove an upper bound, let
$t^+ = \frac{(M + \beta \sqrt{R})^2}{2}$, and define the event
\[ E = \left\{ \min_{x \in V} \( \mathcal{L}_{\tau^+(t^+)}(x) + \frac{1}{2} \eta^2_x \) \ge \frac{\beta^2 R}{8} \right\}, \]
where $\eta$ is an independent copy of the Gaussian free field as in
Theorem \ref{thm:second-ray-knight}. We also have by Proposition
\ref{prop:gaussian-concentration} that
\[ \mathbb{P} \( \min_{x \in V} \frac{1}{2} \( \eta_x + \sqrt{2t^+} \)^2 \le \frac{\beta^2 R}{8} \) \le \mathbb{P} \( \min_{x \in V} \( \eta_x + \sqrt{2t^+} \) \le \frac{\beta \sqrt{R}}{2} \) \]
\[ = \mathbb{P} \( \min_{x \in V} \eta_x \le -M - \frac{\beta \sqrt{R}}{2} \) \le 2 e^{-\frac{\beta^2}{8}}, \]
so that in light of the isomorphism theorem (Theorem
\ref{thm:second-ray-knight}),
\begin{equation} \label{eq:P(E)}
\mathbb{P} \( E^c \) \le 2e^{-\frac{\beta^2}{8}}.
\end{equation}
\noindent Suppose now that $\tau_{\text{cov}} > \tau^+(t^+)$. Then, $\mathcal{L}_{\tau^+(t^+)}(x) =
0$ for some $x \in V$. Since
\[ \mathbb{P} \( \eta^2_x \ge \frac{\beta^2 R}{4} \) \le 2e^{-\frac{\beta^2}{8}} \]
and $\eta$ is independent of the random walk, it follows that
\begin{equation} \label{eq:P(E|tau)}
\mathbb{P} \( E \,\middle|\, \tau_{\text{cov}} > \tau^+(t^+) \) \le 2e^{-\frac{\beta^2}{8}}.
\end{equation}
Combining equations (\ref{eq:P(E)}) and (\ref{eq:P(E|tau)}), we
conclude that
\[ \mathbb{P} \( \tau_{\text{cov}} > \tau^+(t^+) \) \le \frac{2e^{-\frac{\beta^2}{8}}}{1 - 2e^{-\frac{\beta^2}{8}}} \le 6e^{-\frac{\beta^2}{8}}. \]
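\noindent (Indeed, combining (\ref{eq:P(E)}) and (\ref{eq:P(E|tau)}),
\[ 2e^{-\frac{\beta^2}{8}} \ge \mathbb{P} \( E^c \) \ge \mathbb{P} \( E^c \,\middle|\, \tau_{\text{cov}} > \tau^+(t^+) \) \mathbb{P} \( \tau_{\text{cov}} > \tau^+(t^+) \) \ge \( 1 - 2e^{-\frac{\beta^2}{8}} \) \mathbb{P} \( \tau_{\text{cov}} > \tau^+(t^+) \), \]
and $1 - 2e^{-\frac{\beta^2}{8}} \ge \frac{1}{3}$ since $\beta \ge 3$.)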
\noindent For the lower bound, let $t^- = \frac{(M - \beta
\sqrt{R})^2}{2}$. By Theorem \ref{thm:domination}, we have
\[ \mathbb{P} \( \tau_{\text{cov}} < \tau^+(t^-) \) = \mathbb{P} \( \min_{x \in V} \mathcal{L}_{\tau^+(t^-)}(x) > 0 \) \le \mathbb{P} \( \min_{x \in V} \( \eta_x + \sqrt{2t^-} \) > 0 \) \]
\[ = \mathbb{P} \( \min_{x \in V} \eta_x > -M + \beta \sqrt{R} \) \le 2e^{-\frac{\beta^2}{2}}, \]
where the last inequality follows again from Proposition
\ref{prop:gaussian-concentration}.
Combining the upper and lower bounds, it follows that
\[ \mathbb{P} \( \tau^+(t^-) \le \tau_{\text{cov}} \le \tau^+(t^+) \) \ge 1 - 8e^{-\frac{\beta^2}{8}}. \]
For $\lambda \ge 9$, we now take $\beta = \sqrt{\lambda}$. Note that $\sqrt{t^{\pm}} = \frac{\left| M \pm \sqrt{\lambda R} \right|}{\sqrt{2}}$, so that $\sqrt{\lambda R t^{\pm}} \le \sqrt{\lambda R}\, M + \lambda R$, and therefore
\[ c_{\text{tot}} \cdot t^+ + \frac{1}{2} \( \sqrt{\lambda R t^+} + \lambda R \) c_{\text{tot}} \le \frac{c_{\text{tot}}}{2} \( M^2 + 3 \sqrt{\lambda R} M + 3 \lambda R \) \]
\[ c_{\text{tot}} \cdot t^- - \frac{1}{2} \( \sqrt{\lambda R t^-} + \lambda R \) c_{\text{tot}} \ge \frac{c_{\text{tot}}}{2} \( M^2 - 3 \sqrt{\lambda R} M - \lambda R \), \]
so by Lemma \ref{lem:inverse-local-time-concentration},
\[ \mathbb{P} \( \tau^+(t^+) \ge \frac{c_{\text{tot}} M^2}{2} + \frac{3 c_{\text{tot}} (\sqrt{\lambda R} M + \lambda R)}{2} \) \le 6 \exp \( - \frac{\lambda}{16} \) \]
\[ \mathbb{P} \( \tau^+(t^-) \le \frac{c_{\text{tot}} M^2}{2} - \frac{3 c_{\text{tot}} (\sqrt{\lambda R} M + \lambda R)}{2} \) \le 6 \exp \( - \frac{\lambda}{16} \). \]
We thus conclude that for $\lambda \ge 9$,
\[ \mathbb{P} \( \left| \tau_{\text{cov}} - \frac{c_{\text{tot}} M^2}{2} \right| \ge \frac{3}{2} c_{\text{tot}} (\sqrt{\lambda R} M + \lambda R) \) \le 20 \exp \( - \frac{\lambda}{16} \). \]
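\noindent (The constant $20$ simply collects the error terms above: $8 e^{-\frac{\lambda}{8}} + 6 e^{-\frac{\lambda}{16}} + 6 e^{-\frac{\lambda}{16}} \le 20 e^{-\frac{\lambda}{16}}$.)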
We obtain Theorem \ref{thm:cover-concentration} upon an appropriate
rescaling of $\lambda$, noting that $c_{\text{tot}} = 2 |E|$ in the case
where all conductances are $1$. \end{proof}
\section{Appendix}
\subsection{Proof of Lemma \ref{lem:brownian-disk-avoidance}}
To break up the proof, we first establish a lemma.
\begin{lemma} \label{lem:brownian-disk-avoidance-helper}
Let $r > 0$ be given, and consider any point $y \in \mathbb{R}^2$ such that
$|y| > r$. Let $\{ W^y_t \}_{t \ge 0}$ be a standard planar Brownian
motion started at $y$. Then,
\[ \mathbb{P} \( \inf_{t \in [0, 1]} |W^y_t| \le r \) \le \inf_{0 \le \alpha \le |y|} 2 \( \frac{\log \alpha^{-1}}{\log r^{-1}} + \alpha^2 \). \] \end{lemma} \begin{proof}
Note that $\mathbb{P} \( \inf_{t \in [0, 1]} |W^y_t| \le r \)$ is decreasing
in $|y|$, so it suffices to show the inequality only for $\alpha =
|y|$. Let $s = \frac{1}{|y|}$. Define two stopping times
\[ T = \inf \{ t \ge 0 : |W^y_t| \not\in [r, s] \} \]
\[ T' = \inf \{ t \ge 0 : |W^y_t| \not\in [r, \infty) \} \]
Now, consider the stopped martingale $X_t = \log |W^y_{T \wedge t}|$,
noting that $X_0 = \log |y|$ and $X_t \in [\log r, \log s]$. By the martingale
property, we have
\[ \mathbb{P} \( X_1 = \log r \) \le \frac{\log s - \log |y|}{\log s - \log r} = \frac{2 \log |y|^{-1}}{\log |y|^{-1} + \log r^{-1}} \le \frac{2 \log |y|^{-1}}{\log r^{-1}}. \]
Moreover, by Doob's maximal inequality\footnote{We use Doob's
maximal inequality for brevity only. Other methods such as the
reflection principle would serve just as well; the bound on $\sup
|W^y_t|$ does not need to be sharp for our purposes.} on the
submartingale $|W^y_t|^2$,
\[ \mathbb{P} \Big( \min(T, 1) \ne \min(T', 1) \Big) \le \mathbb{P} \( \sup_{t \in [0, 1]} |W^y_t| \ge s \) \le \frac{|y|^2 + 1}{s^2} \le 2 |y|^2. \]
It follows that
\[ \mathbb{P} \( \inf_{t \in [0, 1]} |W^y_t| \le r \) = \mathbb{P} \Big( T' \le 1 \Big) \le \mathbb{P} \Big( X_1 = \log r \text{ or } \min(T, 1) \ne \min(T', 1) \Big) \]
\[ \le 2 \( \frac{\log |y|^{-1}}{\log r^{-1}} + |y|^2 \), \]
as desired. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:brownian-disk-avoidance}]
Define $\lambda' = \lambda^{\frac{1}{\log \epsilon^{-1}}}$. Recall
that the probability density of the standard two-dimensional
Gaussian is bounded above by $\frac{1}{2 \pi}$, and so the
probability density of $W_\epsilon$ is bounded above by $\frac{1}{2
\pi \epsilon}$. Thus,
\[ \mathbb{P} \( |W_\epsilon|^2 \le \lambda' \) \le \frac{1}{2 \pi \epsilon} \cdot \pi \lambda' = \frac{1}{2 \epsilon} \exp \( - \frac{\log \lambda^{-1}}{\log
\epsilon^{-1}} \). \]
We now apply Lemma \ref{lem:brownian-disk-avoidance-helper} with $y =
W_\epsilon$, $r = \sqrt{\lambda}$, and taking $\alpha =
\sqrt{\lambda'}$ in the infimum. This gives
\[ \mathbb{P} \( \inf_{\epsilon \le t \le 1} |W_t|^2 < \lambda \,\middle|\, |W_\epsilon|^2 \ge \lambda' \) \le 2 \( \frac{\log \lambda'^{-1}}{\log \lambda^{-1}} + \lambda' \) \]
\[ = \frac{2}{\log \epsilon^{-1}} + 2 \exp \( - \frac{\log \lambda^{-1}}{\log
\epsilon^{-1}} \) \le \frac{2}{\log \epsilon^{-1}} + \frac{5}{2
\epsilon} \exp \( - \frac{\log \lambda^{-1}}{\log \epsilon^{-1}} \). \]
This along with the previous inequality proves the lemma. \end{proof}
\subsection{Proof of Lemma \ref{lem:conditioned-path}}
\begin{proof}
Define
\[ f(x) = \left\{
\begin{array}{lr}
x & : 0 \le x \le N \\
N + \frac{1}{r} & : x = N + 1
\end{array}
\right. \]
Note that $f(X)$ is a martingale. Thus, for a walk started at $k$,
the probability of hitting $0$ before $N + 1$ is
\[ \frac{f(N + 1) - f(k)}{f(N + 1) - f(0)} = \frac{N - k + \frac{1}{r}}{N + \frac{1}{r}}.\]
It follows that for $1 \le k < N$,
\[ \frac{\mathbb{P} \Big( X_{t + 1} = k + 1 \,\Big|\, X_t = k, X_\tau = 0 \Big)}{\mathbb{P} \Big( X_{t + 1} = k - 1 \,\Big|\, X_t = k, X_\tau = 0 \Big)} = \frac{N - k - 1 + \frac{1}{r}}{N - k + 1 + \frac{1}{r}} = \frac{c'_{k, k + 1}}{c'_{k - 1, k}}, \]
where
\[ c'_{k, k + 1} = \frac{\(N - k - 1 + \frac{1}{r}\)\(N - k + \frac{1}{r}\)}{\frac{1}{r}\(1 + \frac{1}{r}\)}. \]
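\noindent Indeed, with this choice
\[ \frac{c'_{k, k + 1}}{c'_{k - 1, k}} = \frac{\(N - k - 1 + \frac{1}{r}\)\(N - k + \frac{1}{r}\)}{\(N - k + \frac{1}{r}\)\(N - k + 1 + \frac{1}{r}\)} = \frac{N - k - 1 + \frac{1}{r}}{N - k + 1 + \frac{1}{r}}, \]
which matches the ratio computed above.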
Thus, the transition probabilities of $X$ conditioned on $X_\tau =
0$ are exactly the unconditioned transition probabilities of
$Y$. Consequently, their paths have the same distribution. \end{proof}
\subsection{Proof of Lemma \ref{lem:exp-conc}}
\begin{proof}
For any $t < \frac{1}{\mu}$, we have by direct calculation
\[ \log \( \mathbb{E} \exp \( t \sum_{i = 1}^N X_i \) \) = N \log \( \frac{1}{1 - \mu t} \). \]
\noindent If in fact $|t| \le \frac{\alpha}{2 \mu}$, we have
\[ \log \( \frac{1}{1 - \mu t} \) = \sum_{k = 1}^\infty \frac{\mu^k t^k}{k} \le \mu t + \mu^2 t^2 = \mu t (1 + 2 \mu t) - \mu^2 t^2. \]
\noindent and so
\[ \frac{\mathbb{E} \exp \( t \sum_{i = 1}^N X_i \)}{\exp \( t(1 + 2 \mu t)\mu N \) } \le e^{-\mu^2 t^2 N}. \]
\noindent By Markov's inequality applied to $\exp \left( t \sum_{i = 1}^N X_i \right)$ with $t = \frac{\alpha}{2 \mu}$ (upper tail) and $t = - \frac{\alpha}{2 \mu}$ (lower tail), noting that in both cases $t(1 + 2 \mu t)\mu N = t(1 \pm \alpha)\mu N$ and $\mu^2 t^2 N = \frac{1}{4} \alpha^2 N$, we obtain
\[ \mathbb{P}\( \sum_{i = 1}^N X_i \notin \left[ (1 - \alpha) \mu N, (1 + \alpha) \mu N \right] \) \le 2e^{-\frac{1}{4} \alpha^2 N}. \] \end{proof}
\end{document}
Maximal torus
In the mathematical theory of compact Lie groups a special role is played by torus subgroups, in particular by the maximal torus subgroups.
A torus in a compact Lie group G is a compact, connected, abelian Lie subgroup of G (and therefore isomorphic to[1] the standard torus $T^{n}$). A maximal torus is one which is maximal among such subgroups. That is, T is a maximal torus if for any torus T′ containing T we have T = T′. Every torus is contained in a maximal torus simply by dimensional considerations. A noncompact Lie group need not have any nontrivial tori (e.g. $\mathbb {R} ^{n}$).
The dimension of a maximal torus in G is called the rank of G. The rank is well-defined since all maximal tori turn out to be conjugate. For semisimple groups the rank is equal to the number of nodes in the associated Dynkin diagram.
Examples
The unitary group U(n) has as a maximal torus the subgroup of all diagonal matrices. That is,
$T=\left\{\operatorname {diag} \left(e^{i\theta _{1}},e^{i\theta _{2}},\dots ,e^{i\theta _{n}}\right):\forall j,\theta _{j}\in \mathbb {R} \right\}.$
T is clearly isomorphic to the product of n circles, so the unitary group U(n) has rank n. A maximal torus in the special unitary group SU(n) ⊂ U(n) is just the intersection of T and SU(n) which is a torus of dimension n − 1.
A maximal torus in the special orthogonal group SO(2n) is given by the set of all simultaneous rotations in any fixed choice of n pairwise orthogonal planes (i.e., two dimensional vector spaces). Concretely, one maximal torus consists of all block-diagonal matrices with $2\times 2$ diagonal blocks, where each diagonal block is a rotation matrix. This is also a maximal torus in the group SO(2n+1) where the action fixes the remaining direction. Thus both SO(2n) and SO(2n+1) have rank n. For example, in the rotation group SO(3) the maximal tori are given by rotations about a fixed axis.
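Explicitly, writing $R(\theta )={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}$ for the rotation by angle $\theta$, one such maximal torus consists of the matrices $\mathrm {diag} \left(R(\theta _{1}),\dots ,R(\theta _{n})\right)$ in SO(2n), and of the matrices $\mathrm {diag} \left(R(\theta _{1}),\dots ,R(\theta _{n}),1\right)$ in SO(2n + 1).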
The symplectic group Sp(n) has rank n. A maximal torus is given by the set of all diagonal matrices whose entries all lie in a fixed complex subalgebra of H.
Properties
Let G be a compact, connected Lie group and let ${\mathfrak {g}}$ be the Lie algebra of G. The first main result is the torus theorem, which may be formulated as follows:[2]
Torus theorem: If T is one fixed maximal torus in G, then every element of G is conjugate to an element of T.
This theorem has the following consequences:
• All maximal tori in G are conjugate.[3]
• All maximal tori have the same dimension, known as the rank of G.
• A maximal torus in G is a maximal abelian subgroup, but the converse need not hold.[4]
• The maximal tori in G are exactly the Lie subgroups corresponding to the maximal abelian subalgebras of ${\mathfrak {g}}$[5] (cf. Cartan subalgebra)
• Every element of G lies in some maximal torus; thus, the exponential map for G is surjective.
• If G has dimension n and rank r then n − r is even.
Root system
If T is a maximal torus in a compact Lie group G, one can define a root system as follows. The roots are the weights for the adjoint action of T on the complexified Lie algebra of G. To be more explicit, let ${\mathfrak {t}}$ denote the Lie algebra of T, let ${\mathfrak {g}}$ denote the Lie algebra of $G$, and let ${\mathfrak {g}}_{\mathbb {C} }:={\mathfrak {g}}\oplus i{\mathfrak {g}}$ denote the complexification of ${\mathfrak {g}}$. Then we say that an element $\alpha \in {\mathfrak {t}}$ is a root for G relative to T if $\alpha \neq 0$ and there exists a nonzero $X\in {\mathfrak {g}}_{\mathbb {C} }$ such that
$\mathrm {Ad} _{e^{H}}(X)=e^{i\langle \alpha ,H\rangle }X$
for all $H\in {\mathfrak {t}}$. Here $\langle \cdot ,\cdot \rangle $ is a fixed inner product on ${\mathfrak {g}}$ that is invariant under the adjoint action of connected compact Lie groups.
The root system, as a subset of the Lie algebra ${\mathfrak {t}}$ of T, has all the usual properties of a root system, except that the roots may not span ${\mathfrak {t}}$.[6] The root system is a key tool in understanding the classification and representation theory of G.
Weyl group
Given a torus T (not necessarily maximal), the Weyl group of G with respect to T can be defined as the normalizer of T modulo the centralizer of T. That is,
$W(T,G):=N_{G}(T)/C_{G}(T).$
Fix a maximal torus $T=T_{0}$ in G; then the corresponding Weyl group is called the Weyl group of G (since all maximal tori are conjugate, it does not depend on the choice of T, up to isomorphism).
The first two major results about the Weyl group are as follows.
• The centralizer of T in G is equal to T, so the Weyl group is equal to N(T)/T.[7]
• The Weyl group is generated by reflections about the roots of the associated Lie algebra.[8] Thus, the Weyl group of T is isomorphic to the Weyl group of the root system of the Lie algebra of G.
We now list some consequences of these main results.
• Two elements in T are conjugate if and only if they are conjugate by an element of W. That is, each conjugacy class of G intersects T in exactly one Weyl orbit.[9] In fact, the space of conjugacy classes in G is homeomorphic to the orbit space T/W.
• The Weyl group acts by (outer) automorphisms on T (and its Lie algebra).
• The identity component of the normalizer of T is also equal to T. The Weyl group is therefore equal to the component group of N(T).
• The Weyl group is finite.
The representation theory of G is essentially determined by T and W.
As an example, consider the case $G=SU(n)$ with $T$ being the diagonal subgroup of $G$. Then $x\in G$ belongs to $N(T)$ if and only if $x$ maps each standard basis element $e_{i}$ to a multiple of some other standard basis element $e_{j}$, that is, if and only if $x$ permutes the standard basis elements, up to multiplication by some constants. The Weyl group in this case is then the permutation group on $n$ elements.
Weyl integral formula
Suppose f is a continuous function on G. Then the integral over G of f with respect to the normalized Haar measure dg may be computed as follows:
$\displaystyle {\int _{G}f(g)\,dg=|W|^{-1}\int _{T}|\Delta (t)|^{2}\int _{G/T}f\left(yty^{-1}\right)\,d[y]\,dt,}$
where $d[y]$ is the normalized volume measure on the quotient manifold $G/T$ and $dt$ is the normalized Haar measure on T.[10] Here Δ is given by the Weyl denominator formula and $|W|$ is the order of the Weyl group. An important special case of this result occurs when f is a class function, that is, a function invariant under conjugation. In that case, we have
$\displaystyle {\int _{G}f(g)\,dg=|W|^{-1}\int _{T}f(t)|\Delta (t)|^{2}\,dt.}$
Consider as an example the case $G=SU(2)$, with $T$ being the diagonal subgroup. Then the Weyl integral formula for class functions takes the following explicit form:[11]
$\displaystyle {\int _{SU(2)}f(g)\,dg={\frac {1}{2}}\int _{0}^{2\pi }f\left(\mathrm {diag} \left(e^{i\theta },e^{-i\theta }\right)\right)\,4\,\mathrm {sin} ^{2}(\theta )\,{\frac {d\theta }{2\pi }}.}$
Here $|W|=2$, the normalized Haar measure on $T$ is ${\frac {d\theta }{2\pi }}$, and $\mathrm {diag} \left(e^{i\theta },e^{-i\theta }\right)$ denotes the diagonal matrix with diagonal entries $e^{i\theta }$ and $e^{-i\theta }$.
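For example, taking the class function $f(g)=|\operatorname {tr} (g)|^{2}$, one has $f\left(\mathrm {diag} \left(e^{i\theta },e^{-i\theta }\right)\right)=4\cos ^{2}(\theta )$, and the formula gives
$\displaystyle {\int _{SU(2)}|\operatorname {tr} (g)|^{2}\,dg={\frac {1}{2}}\int _{0}^{2\pi }4\cos ^{2}(\theta )\,4\,\mathrm {sin} ^{2}(\theta )\,{\frac {d\theta }{2\pi }}={\frac {1}{2}}\int _{0}^{2\pi }4\,\mathrm {sin} ^{2}(2\theta )\,{\frac {d\theta }{2\pi }}=1,}$
consistent with the fact that the trace is the character of the irreducible defining representation and therefore has $L^{2}$-norm 1.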
See also
• Compact group
• Cartan subgroup
• Cartan subalgebra
• Toral Lie algebra
• Bruhat decomposition
• Weyl character formula
• Representation theory of a connected compact Lie group
References
1. Hall 2015 Theorem 11.2
2. Hall 2015 Lemma 11.12
3. Hall 2015 Theorem 11.9
4. Hall 2015 Theorem 11.36 and Exercise 11.5
5. Hall 2015 Proposition 11.7
6. Hall 2015 Section 11.7
7. Hall 2015 Theorem 11.36
8. Hall 2015 Theorem 11.36
9. Hall 2015 Theorem 11.39
10. Hall 2015 Theorem 11.30 and Proposition 12.24
11. Hall 2015 Example 11.33
• Adams, J. F. (1969), Lectures on Lie Groups, University of Chicago Press, ISBN 0226005305
• Bourbaki, N. (1982), Groupes et Algèbres de Lie (Chapitre 9), Éléments de Mathématique, Masson, ISBN 354034392X
• Dieudonné, J. (1977), Compact Lie groups and semisimple Lie groups, Chapter XXI, Treatise on analysis, vol. 5, Academic Press, ISBN 012215505X
• Duistermaat, J.J.; Kolk, A. (2000), Lie groups, Universitext, Springer, ISBN 3540152938
• Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666
• Helgason, Sigurdur (1978), Differential geometry, Lie groups, and symmetric spaces, Academic Press, ISBN 0821828487
• Hochschild, G. (1965), The structure of Lie groups, Holden-Day
Qubit, one or two complex numbers?
I'm currently reading up on quantum computing and it seems like I have found some contradiction about how to represent qubits.
It is often stated that a qubit is represented as $a|0\rangle + b|1\rangle = (a, b)$ with both a and b being complex numbers.
However, it is stated just as often that only one complex number is needed, namely b, since ignoring the global phase shift means that a becomes real while b stays complex. See for example: this and this.
So which is it? What am I missing here?
quantum-mechanics quantum-computer
Jason Davies
Dänu
In the representation $|\psi\rangle = a|0\rangle + b|1\rangle $ we must have $|a|^2+|b|^2=1$, so that gives us one constraint. The second is that an overall global phase doesn't make any difference. We can use these two freedoms to choose $a$ and $b$ in a specific way. Traditionally we choose them such that $$|\psi\rangle = \cos \theta|0\rangle+\exp(i\phi)\sin \theta|1\rangle $$
See this wiki article on the Bloch sphere for details.
David Z♦
twistor59
$\begingroup$ In all quantum computing simulations I've seen up to now, there are two complex numbers / data types used to simulate a qubit. Why would someone do this? It is quite an overhead, considering that one of the variables doesn't make any sense... $\endgroup$ – Dänu Jan 2 '13 at 18:59
$\begingroup$ I'm not familiar with the way people simulate qubits. All I can say is that to characterize the quantum mechanical system which is the embodiment of a qubit (say a spin 1/2 particle), all you need is the reduced representation. Maybe someone who's intimately involved with these simulations will provide an answer that could enlighten us as to why they do it like that. $\endgroup$ – twistor59 Jan 2 '13 at 19:03
$\begingroup$ Ah wait...the article I linked explains that if you want to represent mixed states, you're effectively moving inside the Bloch sphere (i.e not restricted to the surface). This would give a reason why it would be convenient to use two complex types in a simulation. $\endgroup$ – twistor59 Jan 2 '13 at 19:08
$\begingroup$ @Dänu: there is more than one way to represent a qubit. Some of the representations involve more redundant information than others, which may obfuscate similarities between states but which makes it easier to compose transformations. Twistor59's answer here accurately describes the minimal representation for pure states, and is fairly standard. The other, involving two complex numbers, extends more easily to performing linear transformations describing the evolution of states. It's not really an enormous amount of overhead in the big scheme of things. $\endgroup$ – Niel de Beaudrap Jan 2 '13 at 20:12
$\begingroup$ To simulate one qubit, you only need one complex number. But to simulate 10 qubits, you need 1023 complex numbers (because of entanglement). Most programs use 1024, since the fact that programming is easier if you have redundancy more than makes up for the fact that you need two extra memory slots. $\endgroup$ – Peter Shor Jan 2 '13 at 22:43
You are spot on. Ignoring the global phase does mean that $a$ is real while $b$ remains complex, as explained in the answer above. This confines the qubit to the Bloch sphere (i.e. the 2-sphere). However, the general description of the qubit is not the 2-sphere but the 3-sphere.
When the global phase is included (as it should be), the qubit is a vector $|\psi\rangle\in\mathbb{C}^2$, which can be identified with a vector in $\mathbb{R}^4$. When the complex coefficients satisfy the normalization $|a|^2+|b|^2=1$, this vector lies on the unit 3-sphere in $\mathbb{R}^4$.
Meditations
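Since the comments bring up simulation, here is a minimal sketch in Python/NumPy (the function names and the half-angle convention $|\psi\rangle = \cos(\theta/2)|0\rangle + e^{i\phi}\sin(\theta/2)|1\rangle$ are my own choices for illustration) showing that a normalized amplitude pair $(a, b)$ and the two real Bloch angles $(\theta, \phi)$ carry the same information once the global phase is dropped:

import numpy as np

def to_bloch(a, b):
    # strip the global phase so that the |0> amplitude is real and non-negative
    phase = np.exp(-1j * np.angle(a)) if abs(a) > 0 else 1.0
    a, b = a * phase, b * phase
    theta = 2 * np.arccos(np.clip(a.real, -1.0, 1.0))
    phi = float(np.angle(b)) if abs(b) > 0 else 0.0
    return theta, phi

def from_bloch(theta, phi):
    # rebuild amplitudes with the global phase fixed (|0> amplitude real)
    return np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
a, b = v / np.linalg.norm(v)            # a random normalized single-qubit state
theta, phi = to_bloch(a, b)
a2, b2 = from_bloch(theta, phi)
overlap = abs(np.conj(a2) * a + np.conj(b2) * b)
print(np.isclose(overlap, 1.0))         # True: same physical state up to global phase

So for a single qubit the reduced description really is two real parameters; keeping both complex amplitudes is redundant but convenient, and (as Peter Shor's comment points out) that redundancy is usually accepted anyway once one simulates many qubits.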
Determine the number of digits in the value of $2^{12} \times 5^8 $.
$2^{12}\times5^8=2^4\times(2\times5)^8=16\times10^8$. $10^8$ has 9 digits, so $16\times(10)^8$ has 10 digits (1, 6, and eight 0's). Therefore, there are $\boxed{10}$ digits in the value of $2^{12}\times5^8$.
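A quick computational sanity check (illustrative only, not part of the original solution):

value = 2**12 * 5**8
assert value == 16 * 10**8
print(len(str(value)))  # 10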
\begin{document}
\maketitle
\begin{abstract} Let $\mathcal{M}(N_{h,n})$ denote the mapping class group of a compact nonorientable surface of genus $h\ge 7$ and $n\le 1$ boundary components, and let $\mathcal{T}(N_{h,n})$ be the subgroup of $\mathcal{M}(N_{h,n})$ generated by all Dehn twists. It is known that $\mathcal{T}(N_{h,n})$ is the unique subgroup of $\mathcal{M}(N_{h,n})$ of index $2$. We prove that $\mathcal{T}(N_{h,n})$ (and also $\mathcal{M}(N_{h,n})$) contains a unique subgroup of index $2^{g-1}(2^g-1)$ up to conjugation, and a unique subgroup of index $2^{g-1}(2^g+1)$ up to conjugation, where $g=\lfloor(h-1)/2\rfloor$. The other proper subgroups of $\mathcal{T}(N_{h,n})$ and $\mathcal{M}(N_{h,n})$ have index greater than $2^{g-1}(2^g+1)$. In particular, the minimum index of a proper subgroup of $\mathcal{T}(N_{h,n})$ is $2^{g-1}(2^g-1)$. \end{abstract}
\section{Introduction} For a compact surface $F$, its {\it mapping class group} $\mathcal{M}(F)$ is the group of isotopy classes of all homeomorphisms $F\to F$ (orientation preserving if $F$ is orientable) which are equal to the identity on the boundary of $F$. A compact surface of genus $g$ with $n$ boundary components will be denoted by $S_{g,n}$ if it is orientable, or by $N_{g,n}$ if it is nonorientable.
It is well known that $\mathcal{M}(S_{g,n})$ is residually finite \cite{Gross}, and since $\mathcal{M}(N_{g,n})$ embeds in $\mathcal{M}(S_{g-1,2n})$ for $g+2n\ge 3$ \cite{BC,SzepB}, it is residually finite as well. It means that mapping class groups have a rich supply of finite index subgroups. On the other hand, Berrick, Gebhardt and Paris \cite{BGP} proved that for $g\ge 3$ the minimum index of a proper subgroup of $\mathcal{M}(S_{g,n})$ is $m^-_g =2^{g-1}(2^g-1)$ (previously it was known that the minimum index is greater than $4g+4$ \cite{ParSmall}). More specifically, it is proved in \cite{BGP} that $\mathcal{M}(S_{g,n})$ contains a unique subgroup of index $m^-_g$ up to conjugation, a unique subgroup of index $m^+_g =2^{g-1}(2^g+1)$ up to conjugation, and all other proper subgroups of $\mathcal{M}(S_{g,n})$ have index strictly greater than $m^+_g$. The subgroups of indices $m^-_g$ and $m^+_g$ are constructed via the symplectic representation $\mathcal{M}(S_{g,n})\to\mathrm{Sp}(2g,\mathbb{Z})$ induced by the action of $\mathcal{M}(S_{g,n})$ on $H_1(S_{g,0},\mathbb{Z})=\mathbb{Z}^{2g}$ (after gluing a disc along each boundary component of $S_{g,n}$). Passing mod $2$ we obtain an epimorphism $\theta_{g,n}\colon\mathcal{M}(S_{g,n})\to\mathrm{Sp}(2g,\mathbb{Z}_2)$. The orthogonal groups $O^-(2g,\mathbb{Z}_2)$ and $O^+(2g,\mathbb{Z}_2)$ are subgroups of $\mathrm{Sp}(2g,\mathbb{Z}_2)$ of indices respectively $m^-_g$ and $m^+_g$ (see \cite{BGP} and references there), and thus $\mathcal{O}^-_{g,n}=\theta_{g,n}^{-1}(O^-(2g,\mathbb{Z}_2))$ and $\mathcal{O}^+_{g,n}=\theta_{g,n}^{-1}(O^+(2g,\mathbb{Z}_2))$ are subgroups of $\mathcal{M}(S_{g,n})$ of indices respectively $m^-_g$ and $m^+_g$.
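For concreteness, the first few values of these indices are $m^-_2=6$, $m^+_2=10$, $m^-_3=28$ and $m^+_3=36$.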
Note that for $g\in\{1,2\}$ the minimum index of a proper subgroup of $\mathcal{M}(S_{g,n})$ is $2$. Indeed, the abelianization of $\mathcal{M}(S_{g,n})$ is $\mathbb{Z}_{12}$ for $(g,n)=(1,0)$, $\mathbb{Z}^n$ for $g=1$ and $n>0$, and $\mathbb{Z}_{10}$ for $g=2$ (see \cite{KorkSurv}). On the other hand, $\mathcal{M}(S_{g,n})$ is perfect for $g\ge 3$. Zimmermann \cite{Zimm} proved that for $g\in\{3,4\}$ the smallest nontrivial quotient of $\mathcal{M}(S_{g,0})$ is $\mathrm{Sp}(2g,\mathbb{Z}_2)$. The problem of determining the smallest nontrivial quotient of $\mathcal{M}(S_{g,n})$ for $g\ge 5$ is open. On the other hand, Masbaum and Reid \cite{MR} proved that for fixed $g\ge 1$, every finite group occurs as a quotient of some finite index subgroup of $\mathcal{M}(S_{g,0})$.
In this paper we consider the case of a nonorientable surface. For $h\ge 2$ and $n\ge 0$, $\mathcal{M}(N_{h,n})$ contains a subgroup of index $2$, namely the \emph{twist subgroup} $\mathcal{T}(N_{h,n})$ generated by all Dehn twists about two-sided curves \cite{Lick2,Stu_twist}. If $h\ge 7$, then $\mathcal{T}(N_{h,n})$ is perfect and equal to the commutator subgroup $[\mathcal{M}(N_{h,n}),\mathcal{M}(N_{h,n})]$ (see Theorem \ref{abel}). In particular, for $h\ge 7$, $\mathcal{T}(N_{h,n})$ is the unique subgroup of $\mathcal{M}(N_{h,n})$ of index $2$. The motivating question for this paper is as follows.
\emph{What is the minimum index of a proper subgroup of $\mathcal{T}(N_{h,n})$?}
\noindent To avoid complication, we restrict our attention to $n\le 1$. The reason is that for $n\le 1$, $\mathcal{M}(N_{h,n})$ and $\mathcal{T}(N_{h,n})$ have particularly simple generators. We emphasise, however, that finite generating sets for these groups are known for arbitrary $n$ \cite{Stu_twist, Stu_bdr}. It is worth mentioning at this point, that the first finite generating set for $\mathcal{M}(N_{h,0})$, $h\ge 3$, was obtained by Chillingworth \cite{Chill} using Lickorish's results \cite{Lick1,Lick2}.
Our starting observation is that $\mathcal{M}(N_{h,n})$ and $\mathcal{T}(N_{h,n})$ admit epimorphisms onto $\mathrm{Sp}(2g,\mathbb{Z}_2)$, where $g =\lfloor(h - 1)/2\rfloor$, hence they contain subgroups of indices $m^-_g$ and $m^+_g$. Here is the construction. Set $V_h=H_1(N_{h,0},\mathbb{Z}_2)$. It is a $\mathbb{Z}_2$-module of rank $h$. It was proved by McCarthy and Pinkall \cite{McCP} that if $\varphi$ is an automorphism of $V_h$ which preserves the mod $2$ intersection pairing, then $\varphi$ is induced by a homeomorphism which is a product of Dehn twists. In other words the natural maps $\mathcal{M}(N_{h,0})\to\mathrm{Aut}(V_h,\iota)$ and $\mathcal{T}(N_{h,0})\to\mathrm{Aut}(V_h,\iota)$ are epimorphisms, where $\iota$ is the mod $2$ intersection pairing on $V_h$ and $\mathrm{Aut}(V_h,\iota)$ is the group of automorphisms preserving $\iota$. By pre-composing these epimorphisms with those induced by gluing a disc along the boundary of $N_{h,1}$, respectively $\mathcal{M}(N_{h,1})\to\mathcal{M}(N_{h,0})$ and $\mathcal{T}(N_{h,1})\to\mathcal{T}(N_{h,0})$, we obtain epimorphisms from $\mathcal{M}(N_{h,1})$ and $\mathcal{T}(N_{h,1})$ onto $\mathrm{Aut}(V_h,\iota)$. By \cite[Section 3]{KorkH1} and \cite[Lemma 8.1]{SzLow} we have isomorphisms \[\mathrm{Aut}(V_h,\iota)\cong\begin{cases} \mathrm{Sp}(2g,\mathbb{Z}_2)&\textrm{if\ }h=2g+1\\ \mathrm{Sp}(2g,\mathbb{Z}_2)\ltimes\mathbb{Z}_2^{2g+1}&\textrm{if\ }h=2g+2 \end{cases}\] In either case there is an epimorphism $\mathrm{Aut}(V_h,\iota)\to\mathrm{Sp}(2g,\mathbb{Z}_2)$. By pre-composing this epimorphism with the map from $\mathcal{M}(N_{h,n})$ (resp. $\mathcal{T}(N_{h,n})$) onto $\mathrm{Aut}(V_h,\iota)$, we obtain for $n\in\{0,1\}$ epimorphisms \[\widetilde{\varepsilon}_{h,n}\colon\mathcal{M}(N_{h,n})\to\mathrm{Sp}(2g,\mathbb{Z}_2)\quad(\textrm{resp.\ }\varepsilon_{h,n}\colon\mathcal{T}(N_{h,n})\to\mathrm{Sp}(2g,\mathbb{Z}_2)).\] Set \begin{align*} &\widetilde{\mathcal{H}}_{h,n}^-=\widetilde{\varepsilon}_{h,n}^{-1}(O^-(2g,\mathbb{Z}_2)),\quad\widetilde{\mathcal{H}}_{h,n}^+=\widetilde{\varepsilon}_{h,n}^{-1}(O^+(2g,\mathbb{Z}_2))\\ &\mathcal{H}_{h,n}^-=\varepsilon_{h,n}^{-1}(O^-(2g,\mathbb{Z}_2)),\quad \mathcal{H}_{h,n}^+=\varepsilon_{h,n}^{-1}(O^+(2g,\mathbb{Z}_2)) \end{align*} Here is our main result. \begin{theorem}\label{ThmA} Let $h=2g+r$ for $g\ge 3$, $r\in\{1,2\}$ and $n\in\{0,1\}$. \begin{enumerate} \item $\mathcal{T}(N_{h,n})$ is the unique subgroup of $\mathcal{M}(N_{h,n})$ of index $2$.
\item $\widetilde{\mathcal{H}}_{h,n}^-$ (resp. $\mathcal{H}_{h,n}^-$) is the unique subgroup of $\mathcal{M}(N_{h,n})$ (resp. $\mathcal{T}(N_{h,n})$) of index $m_g^-$, up to conjugation.
\item $\widetilde{\mathcal{H}}_{h,n}^+$ (resp. $\mathcal{H}_{h,n}^+$) is the unique subgroup of $\mathcal{M}(N_{h,n})$ (resp. $\mathcal{T}(N_{h,n})$) of index $m_g^+$, up to conjugation.
\item All other proper subgroups of $\mathcal{M}(N_{h,n})$ or $\mathcal{T}(N_{h,n})$ have index strictly greater than $m_g^+$, and at least $5m_{g-1}^->m_g^+$ if $g\ge 4$. \end{enumerate} \end{theorem} The nontrivial content of Theorem \ref{ThmA} consists in points (2,3,4). The idea of the proof is as follows. Suppose that $H$ is a proper subgroup of $\mathcal{T}(N_{h,1})$ (for definiteness) of index $m\le m^+_g$. The group $\mathcal{T}(N_{h,1})$ contains an isomorphic copy of $\mathcal{M}(S_{g,1})$ and we prove in Lemma \ref{tran} that $H\cap \mathcal{M}(S_{g,1})$ has index $m$ in $\mathcal{M}(S_{g,1})$. Therefore, by \cite{BGP}, $H\cap \mathcal{M}(S_{g,1})$ is conjugate either to $\mathcal{O}^-_{g,1}$ or to $\mathcal{O}^+_{g,1}$. Then we prove in Theorem \ref{mainT} that $\mathcal{H}^-_{h,1}$ (resp. $\mathcal{H}^+_{h,1}$) is, up to conjugacy, the unique subgroup of $\mathcal{T}(N_{h,1})$ of index $m^-_g$ (resp. $m^+_g$) such that $\mathcal{H}^-_{h,1}\cap\mathcal{M}(S_{g,1})=\mathcal{O}^-_{g,1}$ (resp. $\mathcal{H}^+_{h,1}\cap\mathcal{M}(S_{g,1})=\mathcal{O}^+_{g,1}$).
For $h\in\{5,6\}$ the abelianization of $\mathcal{T}(N_{h,n})$ is $\mathbb{Z}_2$, hence the minimum index of a proper subgroup is $2$. We prove in Theorem \ref{ThmB} that for $h\in\{5,6\}$ and $n\in\{0,1\}$, there are $4$ conjugacy classes of proper subgroups of $\mathcal{T}(N_{h,n})$ of index at most $m_2^+=10$. By \cite{Stu_twist}, the abelianization of $\mathcal{T}(N_{h,n})$ is $\mathbb{Z}_{12}$ for $(h,n)=(3,0)$, $\mathbb{Z}_{24}$ for $(h,n)=(3,1)$, and $\mathbb{Z}_2\times\mathbb{Z}$ for $(h,n)=(4,0)$. In particular, every positive integer occurs as an index of a subgroup of $\mathcal{T}(N_{4,0})$.
\section{Preliminaries} \subsection{Permutations.} Let $\mathfrak{S}_m$ denote the full permutation group of $\{1,\dots, m\}$. The main tool used in this paper is the following well known relationship between index $m$ subgroups and maps to $\mathfrak{S}_m$ . A group homomorphism $\varphi\colon G\to\mathfrak{S}_m$ is transitive if the image acts transitively on $\{1,\dots, m\}$. If $\varphi\colon G\to\mathfrak{S}_m$ is transitive, then
$\mathrm{Stab}_\varphi(1)=\{x\in G\,|\,\varphi(x)(1)=1\}$ is a subgroup of $G$ of index $m$. Conversely, if $H$ is a subgroup of $G$ of index $m$, then the action of $G$ on the right cosets of $H$ gives rise to a transitive homomorphism $\varphi\colon G\to\mathfrak{S}_m$ such that $H=\mathrm{Stab}_\varphi(1)$. Such $\varphi$ will be called \emph{permutation representation associated with $H$}.
We say that two homomorphisms $\varphi$ and $\psi$ from $G_1$ to $G_2$ are conjugate if there exists $y\in G_2$ such that $\varphi(x)=y\psi(x)y^{-1}$ for all $x\in G_1$. It is easy to see that two subgroups of $G$ are conjugate if and only if the associated permutation representations are conjugate.
For $u\in\mathfrak{S}_m$ we have the partition of $\{1,\dots, m\}$ into the fixed set $F(u)$ and the support $S(u)$ of $u$. We will repeatedly use the fact that if $u,v\in\mathfrak{S}_m$ commute, then $v$ preserves $F(u)$ and $S(u)$. \subsection{Mapping class group of a nonorientable surface.} \begin{figure}
\caption{ The surface $N_{h,1}$ and the curve $\gamma_{i,j}$.}
\label{gij}
\end{figure}
Fix $h=2g+r$, where $g\ge 2$, $r\in\{1,2\}$. Let us represent $N_{h,1}$ as a disc with $h$ crosscaps. This means that interiors of $h$ small pairwise disjoint discs should be removed from the disc, and then antipodal points in each of the resulting boundary components should be identified. Let us arrange the crosscaps as shown on Figure \ref{gij} and number them from $1$ to $h$. The closed surface $N_{h,0}$ is obtained by gluing a disc along the boundary of $N_{h,1}$. The inclusion of $N_{h,1}$ in $N_{h,0}$ induces epimorphisms $\mathcal{M}(N_{h,1})\to\mathcal{M}(N_{h,0})$ and $\mathcal{T}(N_{h,1})\to\mathcal{T}(N_{h,0})$.
For $i\le j$ let $\gamma_{i,j}$ denote the simple closed curve on $N_{h,1}$ from Figure \ref{gij}. If $j-i$ is odd then $\gamma_{i,j}$ is two-sided and we will denote by $T_{\gamma_{i,j}}$ the Dehn twist about $\gamma_{i,j}$ in the direction indicated by arrows on Figure \ref{gij}. For $1\le i\le h-1$ set $\alpha_i=\gamma_{i,i+1}$, $\alpha_0=\gamma_{1,4}$ and $\alpha_{2g+2}=\gamma_{1,2g}$ (if $g=2$ then $\alpha_0=\alpha_6$). We can alter the curves $\alpha_i$ by an isotopy, so that they intersect each other minimally. Let $\Sigma$ be a regular neighbourhood of the union of $\alpha_i$ for $1\le i\le 2g$, and let $\Sigma'$ be a regular neighbourhood of the union of $\alpha_i$ for $1\le i\le 2g-2$. These neighbourhoods may be chosen so that $\Sigma'\subset\Sigma$, $\alpha_{2g+2}\subset\Sigma$, and $\alpha_0\subset\Sigma'$ if $g\ge 3$. Note that $\Sigma$ and $\Sigma'$ are homeomorphic to, and hence will be identified with, respectively $S_{g,1}$ and $S_{g-1,1}$. Figure \ref{Sg1} shows the configuration of the curves $\alpha_i$ on $S_{g,1}$. \begin{figure}
\caption{ The surface $S_{g,1}$.}
\label{Sg1}
\end{figure} Observe that the closure of $N_{h,1}\backslash\Sigma$ is homeomorphic to $N_{r,2}$. By \cite[Cor. 3.8]{Stu_geom} the inclusions $S_{g-1,1}\subset S_{g,1}\subset N_{h,1}$ induce injective homomorphisms \[\mathcal{M}(S_{g-1,1})\hookrightarrow\mathcal{M}(S_{g,1})\hookrightarrow\mathcal{T}(N_{h,1}).\] We treat $\mathcal{M}(S_{g-1,1})$ and $\mathcal{M}(S_{g,1})$ as subgroups of $\mathcal{T}(N_{h,1})$. Set $T_i=T_{\alpha_i}$. It is well known that $\mathcal{M}(S_{g,1})$ is generated by $T_i$ for $0\le i\le 2g$ (originally it was proved by Humphries \cite{Hum} that these $2g+1$ twists generate $\mathcal{M}(S_{g,0})$, if $S_{g,0}$ is obtained by gluing a disc along the boundary of $S_{g,1}$). We define \emph{crosscap transposition} $U$ to be the isotopy class of the homeomorphism interchanging the two rightmost crosscaps as shown on Figure \ref{U}, and equal to the identity outside a disc containing these crosscaps. The composition $T_{h-1}U$ is the Y-homeomorphism defined in \cite{Lick1}. In particular $U\in\mathcal{M}(N_{h,n})\backslash\mathcal{T}(N_{h,n})$. \begin{figure}
\caption{The crosscap transposition $U$.}
\label{U}
\end{figure} The next theorem can be deduced from the main result of \cite{PSz}. \begin{theorem}\label{MCGgens} For $h\ge 4$ and $n\le 1$, $\mathcal{M}(N_{h,n})$ is generated by $U$ and $T_i$ for $0\le i\le h-1$. \end{theorem} For $i\ne j$ we either have $T_iT_j=T_jT_i$ if $\alpha_i\cap\alpha_j=\emptyset$, or
$T_iT_jT_i=T_jT_iT_j$ if $|\alpha_i\cap\alpha_j|=1$. The last equality is called \emph{braid relation}.
Evidently $U$ commutes with $T_i$ for $1\le i\le h-3$, and if $h\ge 6$ then also with $T_0$. Observe that $U$ preserves (up to isotopy) the curve $\alpha_{h-1}$ and reverses orientation of its regular neighbourhood. Hence \begin{equation}\label{UTU} UT_{h-1}U^{-1}=T_{h-1}^{-1} \end{equation} There is no similarly short relation between $U$ and $T_{h-2}$. Observe, however, that up to isotopy, $U(\alpha_{h-2})$ intersects $\alpha_{h-2}$ in a single point. Because the local orientation used to define $T_{h-2}$ and that induced by $U$ do not match at the point of intersection, $UT_{h-2}U^{-1}$ satisfies the braid relation with $T_{h-2}^{-1}$. Similarly, if $h=5$ then $UT_0U^{-1}$ satisfies the braid relation with $T_0^{-1}$. \begin{cor}\label{Tgens} For $h\ge 4$ and $n\le 1$, $\mathcal{T}(N_{h,n})$ is generated by $T_i$ for $0\le i\le h-1$, $U^2$, $UT_{h-2}U^{-1}$, and if $h\le 5$ then also $UT_0U^{-1}$. \end{cor} \begin{proof} Set $X=\{T_i\}_{0\le i\le h-1}$. By Theorem \ref{MCGgens}, $\mathcal{M}(N_{h,1})$ is generated by $X$ and $U$. Since $\mathcal{T}(N_{h,1})$ is an index 2 subgroup of $\mathcal{M}(N_{h,1})$, $X\subset\mathcal{T}(N_{h,1})$ and $U\in\mathcal{M}(N_{h,1})\backslash\mathcal{T}(N_{h,1})$, thus $\mathcal{T}(N_{h,1})$ is generated by $X$, $UXU^{-1}$ and $U^2$. By the remarks preceding the corollary we have $UXU^{-1}\subset X\cup X^{-1}\cup\{UT_{h-2}U^{-1}\}$ if $h>5$, and $UXU^{-1}\subset X\cup X^{-1}\cup\{UT_{h-2}U^{-1}, UT_0U^{-1} \}$ if $h\le 5$. \end{proof}
For a subset $X$ of a group $G$, we denote by $C_GX$ the centraliser of $X$ in $G$. \begin{cor}\label{genCen} Let $G=\mathcal{M}(N_{h,1})$ or $G=\mathcal{T}(N_{h,1})$ for $h\ge 6$. Then $G$ is generated by $\mathcal{M}(S_{g,1})\cup C_G\mathcal{M}(S_{g-1,1})$. \end{cor} \begin{proof} Every generator of $G$ from Theorem \ref{MCGgens} or Corollary \ref{Tgens} is either supported on $S_{g,1}$, or restricts to the identity on $S_{g-1,1}$. \end{proof} For $x,y\in G$ we write $[x,y]=xyx^{-1}y^{-1}$. The commutator subgroup of $G$ is denoted by $[G,G]$, and the abelianization $G/[G, G]$ by $G^\mathrm{ab}$. \begin{lemma}\label{comm} Let $h\ge 5$, $n\in\{0,1\}$, $N=N_{h,n}$. Suppose that $T_\alpha$ and $T_\beta$ are Dehn twists about two-sided simple closed curves $\alpha$ and $\beta$ on $N$, intersecting at one point. Then $[\mathcal{M}(N),\mathcal{M}(N)]=[\mathcal{T}(N),\mathcal{T}(N)]$ is the normal closure in $\mathcal{T}(N)$ of $[T_\alpha, T_\beta]$. \end{lemma} \begin{proof} Let $K$ be the normal closure in $\mathcal{T}(N)$ of $[T_\alpha, T_\beta]$. Evidently $K\subseteq[\mathcal{T}(N),\mathcal{T}(N)]$. Let $F$ be a regular neighbourhood of $\alpha\cup\beta$. Then $F$ is homeomorphic to $S_{1,1}$ and $N\backslash F$ is a nonorientable surface of genus $h-2\ge 3$. It follows that there is a homomorphism $y\colon N\to N$ equal to the identity on $F$ and not isotopic to a product of Dehn twists on $N$ (for example, $y$ may be taken to be a crosscap transposition supported on $N\backslash F$). Now, for $f\in\mathcal{M}(N)\backslash\mathcal{T}(N)$ we have $f[T_\alpha, T_\beta]f^{-1}=(fy)[T_\alpha,T_\beta](fy)^{-1}\in K$. It follows that $K$ is normal in $\mathcal{M}(N)$. By applying \cite[Lemma 3.3]{SzLow} to the canonical projection $\mathcal{M}(N)\to\mathcal{M}(N)/K$ we have that $M(N)/K$ is abelian, hence $[\mathcal{M}(N),\mathcal{M}(N)]\subseteq K$. \end{proof}
The following theorem is proved in \cite{KorkH1} for $n=0$ and generalised to $n>0$ in \cite{Stu_twist,Stu_bdr}. \begin{theorem}\label{abel} For $h\in\{5, 6\}$ we have $\mathcal{M}(N_{h,n})^\mathrm{ab}=\mathbb{Z}_2\times\mathbb{Z}_2$ and
$\mathcal{T}(N_{h,n})^\mathrm{ab}=\mathbb{Z}_2$. For $h\ge 7$ we have $\mathcal{M}(N_{h,n})^\mathrm{ab}=\mathbb{Z}_2$ and
$\mathcal{T}(N_{h,n})^\mathrm{ab}=0$. \end{theorem}
\section{Some permutation representations of $\mathcal{M}(S_{g,1})$.}\label{Section_orient} For $g\ge 2$ let $\phi^-_{g,n}\colon\mathcal{M}(S_{g,n})\to\mathfrak{S}_{m^-_g}$ and $\phi^+_{g,n}\colon\mathcal{M}(S_{g,n})\to\mathfrak{S}_{m^+_g}$ be the representations associated with the subgroups $\mathcal{O}^-_{g,n}$ and $\mathcal{O}^+_{g,n}$ respectively. The case $g=2$ is special, as $\mathcal{M}(S_{2,n})$ contains another subgroup of index $m^-_2=6$, not conjugate to $\mathcal{O}^-_{2,n}$ , which we will now describe (see \cite{BGP} for references for the facts used in this paragraph). The group $\mathrm{Sp}(4,\mathbb{Z}_2)=\mathfrak{S}_6$ has a noninner automorphism $\alpha$ defined by \[\alpha\colon \begin{cases} (1~~2) \mapsto (1~~2)(3~~5)(4~~6)\\ (2~~3) \mapsto (1~~3)(2~~4)(5~~6)\\ (3~~4) \mapsto (1~~2)(3~~6)(4~~5)\\ (4~~5) \mapsto (1~~3)(2~~5)(4~~6)\\ (5~~6) \mapsto (1~~2)(3~~4)(5~~6) \end{cases}\] It turns out that $\alpha(O^-(4,\mathbb{Z}_2))$ is a subgroup of $\mathrm{Sp}(4,\mathbb{Z}_2)$ of index $6$, which is not conjugate to $O^-(4,\mathbb{Z}_2)=\mathfrak{S}_5$ (on the other hand $\alpha(O^+(4,\mathbb{Z}_2))$ is conjugate to $O^+(4,\mathbb{Z}_2)$). Let $\phi^\alpha_{n}\colon\mathcal{M}(S_{2,n})\to\mathfrak{S}_6$ be the representation associated with the subgroup $\theta_{2,n}^{-1}(\alpha(O^-(4,\mathbb{Z}_2)))$. \begin{theorem}[\cite{BGP}]\label{Par} \begin{enumerate} \item Suppose that $m\le 10$ and $\phi\colon\mathcal{M}(S_{2,n})\to\mathfrak{S}_m$ is a nonabelian transitive representation. Then $m\in\{6,10\}$. If $m=6$ then $\phi$ is conjugate either to $\phi_{2,n}^-$ or to $\phi_n^\alpha$. If $m=10$ then $\phi$ is conjugate to $\phi_{2,n}^+$.
\item Suppose $g\ge 3$, $m\le m_g^+$ and $m<5m_{g-1}^-$ if $g\ge 4$, and $\phi\colon\mathcal{M}(S_{g,n})\to\mathfrak{S}_m$ is a non-trivial transitive representation. Then either $m=m_g^-$ and $\phi$ is conjugate $\phi_{g,n}^-$, or $m=m_g^+$ and $\phi$ is conjugate to $\phi_{g,n}^+$.
\item For $g\ge 3$ and $n\ge 1$, $\phi_{g,n}^-$ is conjugate to an extension of $(\phi_{g-1,n}^-)^3\oplus\phi_{g-1,n}^+$ from $\mathcal{M}(S_{g-1,n})$ to $\mathcal{M}(S_{g,n})$, and $\phi_{g,n}^+$ is conjugate to an extension of $(\phi_{g-1,n}^+)^3\oplus\phi_{g-1,n}^-$ from $\mathcal{M}(S_{g-1,n})$ to $\mathcal{M}(S_{g,n})$.
\item Let $g\ge 3$ and suppose that $T_\alpha$ is a Dehn twist about a nonseparating simple closed curve on $S_{g,n}$. Then $\phi(T_\alpha)$ is an involution for $\phi\in\{\phi_{g,n}^-,\phi_{g,n}^+\}$. \end{enumerate} \end{theorem} Implicit in the statement of (3) is the fact that for $n\ge 1$, $\mathcal{M}(S_{g-1,n})$ naturally embeds in $\mathcal{M}(S_{g,n})$. Such embedding is defined in \cite{BGP}, and it is coherent with our identification of $\mathcal{M}(S_{g-1,1})$ as a subgroup of $\mathcal{M}(S_{g,1})$.
For the proof of the next lemma we need explicit expressions of the images of the generators of $\mathcal{M}(S_{2,1})$ under $\phi_{2,1}^-$, $\phi_1^\alpha$ and $\phi_{2,1}^+$. (see \cite[Lemma 3.1]{BGP}). \[ \phi_{2,1}^-\colon \begin{cases} T_1 \mapsto (1~~2)\\ T_2 \mapsto (2~~3)\\ T_3 \mapsto (3~~4)\\ T_4 \mapsto (4~~5)\\ T_0 \mapsto (5~~6) \end{cases} \qquad \phi_1^\alpha\colon \begin{cases} T_1 \mapsto (1~~2)(3~~5)(4~~6)\\ T_2 \mapsto (1~~3)(2~~4)(5~~6)\\ T_3 \mapsto (1~~2)(3~~6)(4~~5)\\ T_4 \mapsto (1~~3)(2~~5)(4~~6)\\ T_0 \mapsto (1~~2)(3~~4)(5~~6) \end{cases}\] \[ \phi_{2,1}^+\colon \begin{cases} T_1 \mapsto (3~~5)(6~~8)(9~~10)\\ T_2 \mapsto (2~~3)(4~~6)(7~~9)\\ T_3 \mapsto (1~~2)(6~~10)(8~~9)\\ T_4 \mapsto (2~~4)(3~~6)(5~~8)\\ T_0 \mapsto (4~~7)(6~~9)(8~~10) \end{cases} \] The following lemma will be used in the next section to prove our main result. \begin{lemma}\label{main_lemma}
Let $\phi\in\{\phi_{g,1}^-,\phi_{g,1}^+\,|\,g\ge 2\}\cup\{\phi_1^\alpha\}$. Set $m=m_g^-$ if $\phi=\phi_{g,1}^-$, $m=m_g^+$ if $\phi=\phi_{g,1}^+$, $m=6$ if $\phi=\phi_{1}^\alpha$. For $0\le i\le 2g$ and $i=2g+2$ set $w_i=\phi(T_i)$. Then \begin{itemize} \item[(a)] $C_{\mathfrak{S}_m}\{w_1,w_2,w_4\}=\{1,w_4\}$ for $g=2$, \item[(b)] $C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g-1}\}=\{1,w_{2g+2}\}$ for $g\ge 2$, \item[(c)] $C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g}\}=\{1,w_{2g}\}$ for $g\ge 3$, \item[(d)] $C_{\mathfrak{S}_m}\phi(\mathcal{M}(S_{g,1}))=\{1\}$ for $g\ge 2$. \end{itemize} \end{lemma} \begin{proof} The proof is by induction on $g$. For $g=2$ the assertions (a), (b) and (d) may be easily verified by using the expressions for $w_i$ given above. Fix $g\ge 3$ and assume that the lemma is true for $g-1$. By (4) of Theorem \ref{Par}, $w_i$ are involutions.
Note that $w_{2g}\ne w_{2g+2}$. For suppose $w_{2g}=w_{2g+2}$. Then, from \[w_{2g-1}w_{2g+2}=w_{2g+2}w_{2g-1}\quad\textrm{and}\quad w_{2g}w_{2g-1}w_{2g}=w_{2g-1}w_{2g}w_{2g-1},\] we have $w_{2g-1}=w_{2g}$. By repeating this argument it is easy to show that all $w_i$ are equal. It follows that the image of $\phi$ is cyclic of order $2$, a contradiction. Since $\phi(\mathcal{M}(S_{g,1}))$ is generated by $\{w_0,w_1,\dots,w_{2g}\}$, (d) follows immediately from (b) and (c).
Let $G$ be the subgroup of $\mathfrak{S}_m$ generated by $\{w_0,w_1,\dots,w_{2g-2}\}$ and note that $G=\phi(\mathcal{M}(S_{g-1,1}))$. By (3) of Theorem \ref{Par}, the restriction of $\phi_g^-$ (resp. $\phi_g^+$) to $\mathcal{M}(S_{g-1,1})$ is conjugate to $(\phi_{g-1,1}^-)^3\oplus\phi_{g-1,1}^+$ (resp. $\phi_{g-1,1}^-\oplus(\phi_{g-1,1}^+)^3$). It follows that there are three orbits of cardinality $k$ of $G$ and one orbit of cardinality $l$ of $G$, where $k\ne l$. The centraliser $C_{\mathfrak{S}_m}G$ preserves the $l$-orbit and permutes the $k$-orbits, which gives a homomorphism $\theta\colon C_{\mathfrak{S}_m}G\to\mathfrak{S}_3$. If $u\in C_{\mathfrak{S}_m}G$ preserves an orbit $X$ of $G$, then the restriction of $u$ to $X$ commutes with the action of $G$, and by (d) of the induction hypothesis, $u$ restricts to the identity on $X$. It follows that $\theta$ is injective.
Set $C_1=C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g-1}\}$. We have $C_1\subset C_{\mathfrak{S}_m}G$ and $w_{2g}\in (C_{\mathfrak{S}_m}G)\backslash C_1$. Indeed, $w_{2g}$ does not commute with $w_{2g-1}$. For otherwise we would have $w_{2g}=w_{2g-1}$ by the braid relation, and we would obtain a contradiction as above, by arguing that the image of $\phi$ is cyclic. It follows that $\theta(C_1)$ is a proper subgroup of $\mathfrak{S}_3$ containing $\theta(w_{2g+2})$, hence $\theta(C_1)=\{1,\theta(w_{2g+2})\}$ and (b) follows by injectivity of $\theta$. Similarly, setting $C_2=C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g}\}$ we have that $\theta(C_2)$ is a proper subgroup of $\mathfrak{S}_3$, because $\theta(w_{2g+2})\notin\theta(C_2)$, hence $\theta(C_2)=\{1,\theta(w_{2g})\}$ and (c) follows. \end{proof}
\section{Proofs of the main results} In this section we prove Theorem \ref{ThmA}, and the analogous result for surfaces of genera $5$ and $6$ which we will now state. Let $h=4+r$ for $r\in\{1, 2\}$ and $n\in\{0, 1\}$. Set $\widetilde{\mathcal{H}}_{h,n}^\alpha=\widetilde{\varepsilon}_{h,n}^{-1}(\alpha(O^-(4,\mathbb{Z}_2)))$ and $\mathcal{H}_{h,n}^\alpha=\varepsilon_{h,n}^{-1}(\alpha(O^-(4,\mathbb{Z}_2)))$. Recall that $\alpha(O^-(4,\mathbb{Z}_2))$ is a subgroup of $\mathrm{Sp}(4,\mathbb{Z}_2)$ of index $6$ not conjugate to $O^-(4,\mathbb{Z}_2)$. \begin{theorem}\label{ThmB} Let $h=4+r$ for $r\in\{1,2\}$ and $n\in\{0,1\}$. \begin{enumerate} \item $[\mathcal{M}(N_{h,n}),\mathcal{M}(N_{h,n})]$ is the unique subgroup of $\mathcal{M}(N_{h,n})$ of index $4$ and the unique subgroup of $\mathcal{T}(N_{h,n})$ of index $2$.
\item There are three subgroups of $\mathcal{M}(N_{h,n})$ of index $2$.
\item $\widetilde{\mathcal{H}}_{h,n}^-$ and $\widetilde{\mathcal{H}}_{h,n}^\alpha$ (resp. $\mathcal{H}_{h,n}^-$ and $\mathcal{H}_{h,n}^\alpha$) are the only subgroups of $\mathcal{M}(N_{h,n})$ (resp. $\mathcal{T}(N_{h,n})$) of index $6$, up to conjugation.
\item $\widetilde{\mathcal{H}}_{h,n}^+$ (resp. $\mathcal{H}_{h,n}^+$) is the unique subgroup of $\mathcal{M}(N_{h,n})$ (resp. $\mathcal{T}(N_{h,n})$) of index $10$, up to conjugation.
\item All other proper subgroups of $\mathcal{M}(N_{h,n})$ or $\mathcal{T}(N_{h,n})$ have index strictly greater than $10$. \end{enumerate} \end{theorem} Let $G=\mathcal{M}(N_{h,n})$ or $G=\mathcal{T}(N_{h,n})$ for $h\ge 5$ and $n\in\{0,1\}$, and $g=\lfloor(h - 1)/2\rfloor$. Suppose that $H$ is a proper subgroup of $G$. If $H$ contains $[G, G]$, then by Theorem \ref{abel} either $H=[G, G]$, or $H$ is one of the three subgroups of index $2$ of $\mathcal{M}(N_{h,n})$ for $h\in\{5, 6\}$. Suppose that $H$ does not contain $[G, G]$. Then the associated permutation representation $G\to\mathfrak{S}_m$ is nonabelian. To prove Theorem \ref{ThmA} and Theorem \ref{ThmB} it suffices to show that there are only two (three if $g = 2$) conjugacy classes of nonabelian transitive permutation representations $G\to\mathfrak{S}_m$ of degree $m\le m_g^+$ and $m<5m_{g-1}^-$ if $g\ge 4$. Thus, Theorem \ref{ThmA} and Theorem \ref{ThmB} follow from Theorem \ref{mainT} below and the fact that $\mathcal{M}(N_{h,0})$ (resp. $\mathcal{T}(N_{h,0})$) is a quotient of $\mathcal{M}(N_{h,1})$ (resp. $\mathcal{T}(N_{h,1})$).
\begin{theorem}\label{mainT} Suppose $h=2g+r$ for $g\ge 2$ and $r\in\{1,2\}$, $m\le m_g^+$ and $m<5m_{g-1}^-$ if $g\ge 4$, $G=\mathcal{M}(N_{h,1})$ or $G=\mathcal{T}(N_{h,1})$, and $\varphi\colon G\to\mathfrak{S}_m$ is a nonabelian, transitive representation. Then $m\in\{m_g^-,m_g^+\}$. If $m=m_g^-$ then $\varphi$ is, up to conjugation, the unique extension of $\phi_{g,1}^-$ (or $\phi_1^\alpha$ if $g=2$) from $\mathcal{M}(S_{g,1})$ to $G$. If $m=m_g^+$ then $\varphi$ is, up to conjugation, the unique extension of $\phi_{g,1}^+$ from $\mathcal{M}(S_{g,1})$ to $G$. \end{theorem} For the proof of Theorem \ref{mainT} we need the following lemma. \begin{lemma}\label{tran} Suppose that $G$, $g$, $r$, $m$ and $\varphi\colon G\to\mathfrak{S}_m$ are as in Theorem \ref{mainT}. Then $m\in\{m_g^-,m_g^+\}$ and the restriction of $\varphi$ to $\mathcal{M}(S_{g,1})$ is transitive. \end{lemma} \begin{proof}
Let $\varphi'$ be the restriction of $\varphi$ to $\mathcal{M}(S_{g,1})$. The image of $\varphi'$ is not abelian, for otherwise $\varphi$ would be abelian (by Lemma \ref{comm}, $[G,G]$ is normally generated by $[T_1,T_2]$). By Theorem \ref{Par} ((1) and (2)), there is an orbit $X$ of $\varphi(\mathcal{M}(S_{g,1}))$ of order $m_g^-$ or $m_g^+$. Since $2m_g^->m_g^+$ and $2m_g^->5m_{g-1}^-$, $X$ is the unique orbit of $\varphi(\mathcal{M}(S_{g,1}))$ of order at least $m_g^-$. We want to show that $m=|X|$. By transitivity of $\varphi$, it suffices to show that $\varphi(G)$ preserves $X$.
Suppose $g\ge 3$ and $|X|=m_g^-$ (the proof is completely analogous for $|X|=m_g^+$). By (2) of Theorem \ref{Par}, $\varphi(\mathcal{M}(S_{g,1}))$ acts trivially on the complement of $X$ in $\{1,\dots,m\}$, and the subrepresentation $x\mapsto\varphi(x)|_X$ for $x\in\mathcal{M}(S_{g,1})$ is conjugate to $\phi_{g,1}^-$. By (3) of Theorem \ref{Par}, $X=Y\cup Z_1\cup Z_2\cup Z_3$, where $Y$ is an orbit of $\varphi(\mathcal{M}(S_{g-1,1}))$ of length $m_{g-1}^+$ and $Z_i$ are orbits of $\varphi(\mathcal{M}(S_{g-1,1}))$ of length $m_{g-1}^-$. All other orbits of $\varphi(\mathcal{M}(S_{g-1,1}))$ have length one. Since $\varphi(C_G\mathcal{M}(S_{g-1,1}))$ permutes the orbits of $\varphi(\mathcal{M}(S_{g-1,1}))$, it preserves $X$. By Corollary \ref{genCen}, $\varphi(G)$ preserves $X$.
For the rest of the proof assume $g=2$. If $|X|=m_2^+$ then obviously $m=m_2^+$. Suppose that $|X|=m_2^-=6$. Let $X'=\{1,\dots,m\}\backslash X$. By (1) of Theorem \ref{Par}, the action of $\varphi(\mathcal{M}(S_{2,1}))$ on $X'$ is abelian. Since the twists $T_i$ for $0\le i\le 4$ are all conjugate in $\mathcal{M}(S_{2,1})$, they induce the same permutation $\tau$ of $X'$. Since $|X'|\le 4$ and $\mathcal{M}(S_{2,1})^\mathrm{ab}=\mathbb{Z}_{10}$ (see \cite{KorkSurv}), we have $\tau^2=1$. After conjugating in $\mathfrak{S}_m$ we may assume that $X=\{1,\dots,6\}$ and $\varphi(T_i)=\phi(T_i)\tau$ for $0\le i\le 4$, where $\phi\in\{\phi_{2,1}^-,\phi_1^\alpha\}$ and $\tau\in\{1, (7~~8), (7~~8)(9~~10)\}$. We will use the expressions for $\phi(T_i)$ given in Section \ref{Section_orient}. Set $w_i=\varphi(T_i)$ for $0\le i\le 3+r$, $v=\varphi(U^2)$, and $u=\varphi(U)$ if $G=\mathcal{M}(N_{4+r,1})$. We will repeatedly use the following two easy observations.
{\it Observation 1.} Suppose that $a\in\mathfrak{S}_m$ preserves $S(\tau)$ and for some $i,j\in\{0,\dots,4\}$ we have $w_ia=aw_i$ and $w_jaw_j=aw_ja$. Then the restriction of $a$ to $S(\tau)$ is equal to $\tau$.
{\it Observation 2.} Suppose that $a\in\mathfrak{S}_m$ preserves $X\cup S(\tau)$ and for some $i\in\{0,\dots,4\}$ we have $w_iaw_i=aw_ia$. Then $S(a)\subseteq X\cup S(\tau)$.
\noindent{\bf Case $r=1$.} Set $w'_3=\varphi(UT_3U^{-1})$, $w_0'=\varphi(UT_0U^{-1})$. Observe that, up to isotopy, $U(\alpha_3)$ is disjoint from $T_4(\alpha_3)$ and $\alpha_1$, and it intersects each of the curves $\alpha_3$ and $\alpha_2$ in a single point. Hence $w'_3$ commutes with $w_4w_3w_4$ and $w_1$, and satisfies the braid relation with $w_3$ and $w_2$. Similarly, $w'_0$ commutes with $w_1$, $w_2$, $w_3'$ and satisfies the braid relation with $w_0$ and $w_4$. Note that $v$ and $u$ commute with $w_i$ for $i=1,2$, and by (\ref{UTU}) also with $w_4$.
By Theorem \ref{MCGgens} and Corollary \ref{Tgens}, to prove that $\varphi(G)$ preserves $X$ it suffices to show that $w'_3$, $w'_0$ and $v$ preserve $X$ if $G=\mathcal{T}(N_{5,1})$, and $u$ preserves $X$ if $G=\mathcal{M}(N_{5,1})$.
{\bf Subcase 1a: $\phi=\phi_2^-$.} First assume $G=\mathcal{T}(N_{5,1})$. Since $w'_3$ commutes with $w_1=(1~~2)\tau$ and $w_4w_3w_4=(3~~5)\tau$, it follows easily that $w'_3$ preserves $\{1,2\}$, $\{3,5\}$ and $S(\tau)$. Write $w'_3=v_1v_2v_3v_4$, where $S(v_1)\subseteq\{1,2\}$, $S(v_2)\subseteq\{3,5\}$, $S(v_3)\subseteq S(\tau)$ and $\{1,2,3,5\}\cup S(\tau)\subseteq F(v_4)$. Note that $v_i$ commute with each other for $i\in\{1,\dots,4\}$. By observation 1 we have $v_3=\tau$. Since $w_2$ commutes with $v_4$ (disjoint supports), by the braid relation $w_3'w_2w_3'=w_2w_3'w_2$ we have $v_4=1$. Similarly, from $w_3w_1=w_1w_3$ and $w_3'w_3w_3'=w_3w_3'w_3$, we have $v_1=1$ and $v_2=(3~~5)$. Hence $w_3'=(3~~5)\tau$.
Since $w'_0$ commutes with $w_1$, $w_2$, $w_3'$, we have $\{1,2,3,5\}\subseteq F(w'_0)$ and $w'_0$ preserves $S(\tau)$. By observation 1 and braid relations $w_0'w_iw_0'=w_iw_0'w_i$ for $i=0,4$, we have $w_0'=(4~~6)\tau$.
Since $v$ commutes with $w_1$, $w_2$ and $w_4$, we have $\{1,2,3\}\subseteq F(v)$ and $v$ preserves $\{4,5\}$ and $S(\tau)$. We claim that $\{4,5\}\subset F(v)$. For suppose $v(4)=5$. Then $vw_3v^{-1}=(3~~5)\tau'$, where $\tau'=v\tau v^{-1}$. Note that $\tau'$ commutes with $\tau$, and hence $[vw_3v^{-1},w_3']=1$. This implies, by Lemma \ref{comm}, that the image of $\varphi$ is abelian, because $U^2(\alpha_3)$ intersects $U(\alpha_3)$ in a single point (up to isotopy), a contradiction. Hence $v(4)=4$ and $v(5)=5$. We also have $v(6)=6$, for otherwise $vw_0v^{-1}=(5~~i)\tau'$ for $i\notin X\cup S(\tau)$, which contradicts the braid relation between $vw_0v^{-1}$ and $w_0'$.
Suppose that $G=\mathcal{M}(N_{5,1})$. We have to show that $u=\varphi(U)$ preserves $X$. Since $u$ commutes with $w_1$, $w_2$ and $w_4$, we have $\{1,2,3\}\subseteq F(u)$ and $u$ preserves $S(\tau)$. We have $uw_3u^{-1}=w_3'=(3~~5)\tau$ and $uw_0u^{-1}=w_0'=(4~~6)\tau$. It follows that $u(4)=5$, $u(5)=4$ and $u(6)=6$.
{\bf Subcase 1b: $\phi=\phi_1^\alpha$.} Because $u$, $v$ and $w_0'$ commute with $w_1$ and $w_2$, they also commute with $w_2w_1=(1~~4~~5)(2~~3~~6)$. It follows that $u$, $v$ and $w_0'$ preserve $S(w_2w_1)=X$.
Since $w'_3$ commutes with $w_1$, it preserves $S(w_1)=X\cup S(\tau)$. It also commutes with $w_1w_4w_3w_4=(1~~6)(2~~4)$. By observation 2, it follows that $w'_3$ can be written as $v_1v_2$, where $v_1$ is one of the permutations $(1~~6)(2~~4)$ or $(1~~2)(4~~6)$ or $(1~~4)(2~~6)$,
and $S(v_2)\subseteq\{3,5\}\cup S(\tau)$. If $\tau=1$, then we are done. Suppose that $\tau\ne 1$ and $w'_3$ does not preserve $\{3,5\}$. Then, up to a permutation of $S(\tau)$, we have either $v_2=(3~~7)(5~~8)$ or $v_2=(3~~7)(5~~8)(9~~10)$. It can be checked that for each of the three possibilities for $v_1$, $w_3'$ does not satisfy the braid relation either with $w_2$ or with $w_3$. Hence $v_2=(3~~5)\tau$ and $w_3'$ preserves $X$.
\noindent{\bf Case $r=2$.} Set $w'_4=\varphi(UT_4U^{-1})$. By Theorem \ref{MCGgens} and Corollary \ref{Tgens}, to prove that $\varphi(G)$ preserves $X$ it suffices to show that $w'_4$, $w_5$ and $v$ preserve $X$ if $G=\mathcal{T}(N_{6,1})$, and $w_5$ and $u$ preserve $X$ if $G=\mathcal{M}(N_{6,1})$.
{\bf Subcase 2a: $\phi=\phi_2^-$.} Since $w_5$, $u$ and $v$ commute with $w_1$, $w_2$, $w_3$ and $w_0$, they preserve $X$. Furthermore, we have $\{1,2,3,4\}\subseteq F(w_5)$ and it follows easily from observations 1 and 2 and the braid relation $w_5w_4w_5=w_4w_5w_4$ that $w_5=(5~~6)\tau$. Observe that, up to isotopy, $U(\alpha_4)$ is disjoint from $T_5(\alpha_4)$. Hence $w'_4$ commutes with $w_5w_4w_5=(4~~6)\tau$. As it also commutes
with $w_1$, $w_2$, we have $\{1,2,3\}\subseteq F(w'_4)$ and $w_4'$ preserves $S(\tau)$ and $\{4,6\}$. Let $w'_4=v_1v_2$, where $S(v_1)\subseteq\{4,6\}\cup S(\tau)$ and $\{1,2,3,4,6\}\cup S(\tau)\subseteq F(v_2)$. From $w_3v_2=v_2w_3$ and $w_3w'_4w_3=w'_4w_3w'_4$ we have $v_2=1$, hence $w'_4$ preserves $X$.
{\bf Subcase 2b: $\phi=\phi_1^\alpha$.} Since $w_5$, $w'_4$, $v$ and $u$ commute with $w_2w_1$, they preserve $X=S(w_2w_1)$. \end{proof}
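The case analysis above reduces, for the most part, to checking commutation relations, braid relations and supports of explicit permutations. For readers who wish to reproduce such checks by machine, the following SymPy sketch shows the two elementary tests involved; the permutations appearing in it are hypothetical examples, not the actual $w_i$ of the proof.
\begin{verbatim}
# Elementary checks used repeatedly in the case analysis; the
# permutations a, b, c are hypothetical examples only.
from sympy.combinatorics import Permutation

def commute(a, b):
    # True precisely when ab = ba
    return a * b == b * a

def braid(a, b):
    # True precisely when aba = bab
    return a * b * a == b * a * b

a = Permutation([[0, 1]], size=6)    # (1 2)
b = Permutation([[1, 2]], size=6)    # (2 3)
c = Permutation([[3, 4]], size=6)    # (4 5)

assert braid(a, b)        # adjacent transpositions satisfy the braid relation
assert commute(a, c)      # permutations with disjoint supports commute
assert not commute(a, b)

print(a.support(), c.support())   # the supports, playing the role of S(.)
\end{verbatim}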
\begin{proof}[Proof of Theorem \ref{mainT}.] Let $\phi$ be the restriction of $\varphi$ to $\mathcal{M}(S_{g,1})$. By Lemma \ref{tran} and Theorem \ref{Par}, up to conjugation we may assume $\phi\in\{\phi_{g,1}^-,\phi_{g,1}^+\}$ or $\phi=\phi_1^\alpha$ if $g=2$. We will prove that $\varphi$ is determined by $\phi$. Set $w_i=\phi(T_i)$ for $0\le i\le 2g$ and $i=2g+2$.
Consider $\varphi^U\colon G\to\mathfrak{S}_m$ defined by $\varphi^U(x)=\varphi(UxU^{-1})$ for $x\in G$. By Lemma \ref{tran}, the restriction $\phi^U$ of $\varphi^U$ to $\mathcal{M}(S_{g,1})$ is transitive, and by Theorem \ref{Par}, $\phi^U$ is conjugate to $\phi$. Hence, there exists $a\in\mathfrak{S}_m$ such that $\phi^U(x)=a\phi(x)a^{-1}$ for $x\in\mathcal{M}(S_{g,1})$.
Suppose that $r=1$. If $g\ge 3$, then for $0\le i\le 2g-2$ we have $\phi^U(T_i)=\phi(T_i)$, and by (\ref{UTU}) and (4) of Theorem \ref{Par}, also $\phi^U(T_{2g})=\phi(T^{-1}_{2g})=\phi(T_{2g})$. Hence $a\in C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g}\}$. Similarly, for $g=2$ we have $a\in C_{\mathfrak{S}_m}\{w_1,w_2,w_4\}$. By (c) and (a) of Lemma \ref{main_lemma} we have $a\in\{1,w_{2g}\}$. We claim that $a=w_{2g}$. For suppose $a=1$. Then $\varphi(UT_{2g-1}U^{-1})=\phi^U(T_{2g-1})=\varphi(T_{2g-1})$. However, $[UT_{2g-1}U^{-1}, T_{2g-1}]$ normally generates $[G,G]$ by Lemma \ref{comm}, so $\varphi$ is abelian, a contradiction. Now suppose that $r=2$. Then we have $a\in C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g-1}\}$. By (b) of Lemma \ref{main_lemma} we have $a\in\{1,w_{2g+2}\}$, and by a similar argument as above we obtain $a=w_{2g+2}$. We conclude that $\varphi(UT_{2g-1}U^{-1})=w_{2g}w_{2g-1}w_{2g}$ if $r=1$ and $\varphi(UT_{2g}U^{-1})=w_{2g+2}w_{2g}w_{2g+2}$ if $r=2$. If $(g,r)=(2,1)$, then $\varphi(UT_0U^{-1})=w_4w_0w_4$.
If $G=\mathcal{M}(N_{2g+r,1})$ then $\varphi^U(x)=\varphi(U)\varphi(x)\varphi(U)^{-1}$ and by the above arguments we obtain $\varphi(U)=w_{2g}$ if $r=1$, and $\varphi(U)=w_{2g+2}$ if $r=2$. In particular $\varphi(U^2)=1$. We claim that $\varphi(U^2)=1$ also for $G=\mathcal{T}(N_{2g+r,1})$. Set $v=\varphi(U^2)$. Suppose that $r=1$. If $g\ge 3$, then $v\in C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g}\}$, and if $g=2$ then $v\in C_{\mathfrak{S}_m}\{w_1,w_2,w_4\}$. By (c) and (a) of Lemma \ref{main_lemma} we have $v\in\{1,w_{2g}\}$. Suppose $v=w_{2g}$. Then $\varphi(U^2T_{2g-1}U^{-2})=w_{2g}w_{2g-1}w_{2g}=\varphi(UT_{2g-1}U^{-1})$. However, $[U^2T_{2g-1}U^{-2},UT_{2g-1}U^{-1}]$ normally generates $[G,G]$ by Lemma \ref{comm}, so $\varphi$ is abelian, a contradiction. For $r=2$ the argument is similar, using (b) of Lemma \ref{main_lemma}.
Suppose that $r=2$. Then $\varphi(T_{2g+1})\in C_{\mathfrak{S}_m}\{w_0,w_1,\dots,w_{2g-2},w_{2g-1}\}$. By (b) of Lemma \ref{main_lemma} we have $\varphi(T_{2g+1})\in\{1,w_{2g+2}\}$. Since $\varphi(T_{2g+1})$ is conjugate to $w_{2g}$, it is not trivial, and hence $\varphi(T_{2g+1})=w_{2g+2}$.
We have shown that the values of $\varphi$ on the generators of $G$ given in Theorem \ref{MCGgens} and Corollary \ref{Tgens} are determined by $\phi$. Hence $\varphi$ is the unique extension of $\phi$ from $\mathcal{M}(S_{g,1})$ to $G$. \end{proof}
\end{document} | arXiv |
Type: articles scientifiques
Abbruzzese, L. D., & Simon, P.. (2018). Special Concerns for the LGBT Aging Patient: What Rehab Professionals Should Know. Current Geriatrics Reports, 7(1), 26–36. doi:10.1007/s13670-018-0232-6.
Purpose of Review The purpose of this review was to identify health disparities and culturally competent strategies for improving function and independence in a historically disadvantaged group. Recent Findings There are significant health disparities among LGBT older adults including increased disability rates, chronic conditions, and mental distress. There is a shift in the theoretical framework for addressing health disparities in LGBT older adults from an emphasis on minority stress theory, towards factors that facilitate resilience such as identity affirmation and social support networks. Summary Advocacy for cultural competency training and LGBT aging research is needed in multiple clinical domains including rehabilitation and geriatric care centers. There are very few studies that have identified effective intervention approaches for reducing mental health or mobility-related disparities in LGBT aging populations.
Mots-clé: santé
Acquaviva, K. D., & Krinsky, L.. (2015). Bridging politics, policy, and practice: Transforming health care in Massachusetts through the creation of a statewide commission on LGBT aging. Geriatric Nursing, 36(6), 482–483. doi:10.1016/j.gerinurse.2015.10.006.
Mots-clé: soins infirmiers
Adams, M.. (2016). An Intersectional Approach to Services and Care for LGBT Elders. Generations, 40(2), 94–100. doi:10.2307/26556217. [URL]
Mots-clé: intersectionnalité
Addis, S., Davies, M., Greene, G., MacBride-Stewart, S., & Shepherd, M.. (2009). The health, social care and housing needs of lesbian, gay, bisexual and transgender older people: a review of the literature. Health & Social Care in the Community, 17(6), 647–658. doi:10.1111/j.1365-2524.2009.00866.x.
Mots-clé: besoins et craintes, revue de littérature
Adelman, M.. (2016). Overcoming Barriers to Care for LGBT Elders with Alzheimer's. Generations, 40(2), 38–40. doi:10.2307/26556198. [URL]
Mots-clé: santé cognitive
Ahrendt, A., Sprankle, E., Kuka, A., & McPherson, K.. (2016). Staff Member Reactions to Same-Gender, Resident-to-Resident Sexual Behavior within Long-Term Care Facilities. Journal of Homosexuality, 64(11), 1502–1518. doi:10.1080/00918369.2016.1247533.
Mots-clé: attitudes des professionnel.le.s, sexualité
Alba, B., Lyons, A., Waling, A., Minichiello, V., Hughes, M., Barrett, C., Fredriksen-Goldsen, K. I., & Edmonds, S.. (2019). Health, well-being, and social support in older Australian lesbian and gay care-givers. Health & Social Care in the Community, 28(1), 204–215. doi:10.1111/hsc.12854.
Informal care-givers play an important role in society, and many of the people who provide this care are lesbian women and gay men. Being a care-giver is known to be associated with poorer health and well-being, and lesbian and gay care-givers report experiences of stigma and discrimination in the care-giving context. This study involved a survey of 230 lesbian women and 503 gay men aged 60 years and over living in Australia, of which 218 were care-givers. We compared care-givers to non-caregivers on a range of health and well-being measures, including psychological distress, positive mental health, physical health and social support. While we found no significant differences between these two groups, we further compared care-givers who were caring for an LGBTI person to those who were caring for a non-LGBTI person. Among the lesbian women, care-givers of an LGBTI person reported feeling less supported in their carer role and reported lower levels of social support more generally. They were also lower on positive mental health and physical health indicators. Among the gay men, care-givers of an LGBTI person also reported feeling less supported in their carer role, but there were no differences in reported levels of social support more generally or health and well-being compared to those caring for a non-LGBTI person. Overall, results from this study suggest that older lesbian and gay care-givers may be facing some challenges related to their well-being and feeling supported, especially if they are caring for another LGBTI person.
Mots-clé: Australie, proche aidance, santé
Almack, K., & King, A.. (2019). Lesbian, Gay, Bisexual, and Trans Aging in a U.K. Context: Critical Observations of Recent Research Literature. The International Journal of Aging and Human Development, 89(1), 93–107. doi:10.1177/0091415019836921.
Mots-clé: recherche
Ansara, G. Y.. (2015). Challenging cisgenderism in the ageing and aged care sector: Meeting the needs of older people of trans and/or non-binary experience. Australasian Journal on Ageing, 34(2), 14–18. doi:10.1111/ajag.12278.
Recent Australian legislative and policy changes can benefit people of trans and/or non-binary experience (e.g. men assigned female with stereotypically 'female' bodies, women assigned male with ster…
Mots-clé: personnes trans*
Averett, P., Yoon, I., & Jenkins, C. L.. (2012). Older Lesbian Sexuality: Identity, Sexual Behavior, and the Impact of Aging. Journal of Sex Research, 49(5), 495–507. doi:10.1080/00224499.2011.582543.
In response to the very limited and mostly outdated literature on older lesbian sexuality, this exploratory study examined older lesbian sexual identity, romantic relationships, the impact of aging, and experiences of discrimination within these contexts. Utilizing an online survey that recruited via numerous online lesbian communities and snowball sampling, 456 lesbians over the age of 50 responded to closed, Likert scale, and open-ended questions that provided a preliminary understanding of older lesbian sexuality. The results indicated that older lesbians have experienced fluidity in past romantic and sexual relationships, as well as in erotic fantasies, despite strong identification with being lesbian. The findings also indicate a decreased focus on sexuality in the context of relationships, with more focus on stability and continuity. Future research is needed that provides greater specificity and detail about older lesbian conceptions of sexual behavior and sexual identity labels, as well as specific sexual behaviors.
Mots-clé: femmes lesbiennes, sexualité
Averett, P., Yoon, I., & Jenkins, C. L.. (2011). Older Lesbians: Experiences of Aging, Discrimination and Resilience. Journal of Women & Aging, 23(3), 216–232. doi:10.1080/08952841.2011.587742.
Older lesbians are, at minimum, a triple threat of marginalization due to ageism, heterosexism, and sexism. A national survey specific to this often-invisible population has not occurred in over 25 years. The present study was completed to reveal the needs, strengths, and experiences of the current cohort of older lesbians. Four hundred fifty-six older lesbians responded to an online survey on topics including sociodemographics, social activity, health, sexual identity, family relationships, romantic relationships, service/program use, mental health, end-of-life care, and discrimination. The results and implications are included as well as a comparison to the last studied cohort.
Mots-clé: besoins et craintes, femmes lesbiennes
Banens, M.. (2016). Les relations sexuelles des seniors vivant avec le VIH. Sexologies, 25(3), 122–127. doi:10.1016/j.sexol.2016.06.002.
The article examines the sexual activity of older adults living with HIV, together with its conjugal and social context. The sexual activity of HIV-positive seniors is one of the concerns raised by the rapid ageing of the HIV-positive population. It is regarded as one of the dimensions of well-being and social integration. More or less closely tied to couplehood, it also gives an indication of the care older HIV-positive people may receive when the need arises. A survey was carried out among 125 HIV-positive people followed in the hospitals of the COREVIH Vallée du Rhône (co-funder of the survey), 80 by questionnaire and 45 by in-depth interview. In total, 80 seniors (aged 50 and over) were studied and compared with 45 younger HIV-positive people. The sample was drawn at random according to hospital appointments, but the refusal rate was too high for the sample to be considered representative. It nevertheless made it possible to describe a wide variety of situations and to identify regularities within the different categories of HIV-positive seniors. Heterosexual men, heterosexual women, MSM and bisexual men thus form four contrasting configurations. Heterosexual men tend to go on living in their family environment (wife, children, sometimes grandchildren), but in a conflictual mode and without sexual activity. Heterosexual women often live alone, as they did at the time of their infection, but on good terms with their children. MSM most often live in a seroconcordant couple, formed after diagnosis, affectionate and harmonious but with little sex. Bisexual men, finally, often live alone, without sexual activity, in conflict with their children and former partners. They appear to be the most socially isolated and the most psychologically fragile.
Mots-clé: VIH
Baril, A., & Silverman, M.. (2019). Forgotten lives: Trans older adults living with dementia at the intersection of cisgenderism, ableism/cogniticism and ageism. Sexualities, 29(7), 136346071987683–15. doi:10.1177/1363460719876835.
There is little research at the international level to help us understand the experiences and needs of trans people living with dementia, despite population aging and the growing numbers of trans people including the first cohort of trans older adults. There is a need to understand the widespread barriers, discrimination and mistreatment faced by trans people in the health and social service system, and the fears trans people express about aging and dementia. Anecdotal evidence from the scarce literature on the topic of LGBTQ populations and dementia suggest that cognitive changes can impact on gender identity. For example, trans older adults with dementia may forget they transitioned and reidentify with their sex/gender assigned at birth or may experience 'gender confusion.' This raises crucial questions, for example regarding practices related to pronouns, care to the body (shaving, hair, clothes, etc.), social gendered interactions, health care (continuing or not hormonal therapy) and so on. This article fills a gap in current literature by offering a first typology of responses offered by academics who analyzed the topic of dementia and gender identity, to trans older adults with dementia who may be experiencing 'gender confusion,' namely: (1) a gender neutralization approach; (2) a trans-affirmative stable approach; and (3) a trans-affirmative fluid approach. After providing critical reflections regarding each approach, we articulate the foundations of a fourth paradigm, rooted in an interdisciplinary dialogue regarding the interlocking systems of oppression faced by trans older adults with dementia, namely ageism, ableism/sanism, and cisgenderism.
Mots-clé: personnes non-binaires, personnes trans*, santé cognitive
Baril, A., Silverman, M., Gauthier, M., & Lévesque, M.. (2020). Forgotten Wishes: End-of-Life Documents for Trans People with Dementia at the Margins of Legal Change. Canadian Journal of Law and Society Revue Canadienne Droit et Société, 1–24. doi:10.1017/cls.2020.13.
Literature on the topic of trans older adults has documented a few anecdotal cases in which some trans people living with dementia forgot they transitioned and reidentified with their sex assigned at birth ("detransition"). Trans communities and their allies have encouraged trans people to engage in end-of-life planning, including the preparation of legal documents that state their wishes regarding gender identity and expression in the event of "incapacity" caused by dementia. While useful, we contend that end-of-life planning is often implicitly based on cisnormative and cognonormative (normative system based on cognitive abilities) assumptions. Such planning is founded on a stable notion of gender identity throughout the life course ("post-transition") and assumes that the pre-dementia self is better equipped to make decisions than the "demented" self. We conclude by encouraging, based on an intersectional, trans-affirmative, crip-positive, and age-positive approach, respect for the agency of trans people with dementia.
Mots-clé: personnes trans*, santé cognitive
Baril, A., Silverman, M., Gauthier, M., & Lévesque, M.. (2021). Souhaits oubliés : documents de fin de vie des personnes trans vivant avec une démence aux marges des changements juridiques. GLAD! Revue sur le langage, le genre, les sexualités, 10, 1–28. doi:10.1017/cls.2020.13.
Barker, J. C., Herdt, G., & de Vries, B.. (2006). Social Support in the Lives of Lesbians and Gay Men at Midlife and Later. Sexuality Research and Social Policy, 3(2), 1–23.
Mots-clé: relations sociales
Barrett, C., Crameri, P., Lambourne, S., Latham, J. R., & Whyte, C.. (2015). Understanding the experiences and needs of lesbian, gay, bisexual and trans Australians living with dementia, and their partners. Australasian Journal on Ageing, 34, 34–38. doi:10.1111/ajag.12271.
Mots-clé: besoins et craintes, santé cognitive
Barrett, C., Whyte, C., Comfort, J., Lyons, A., & Crameri, P.. (2014). Social connection, relationships and older lesbian and gay people. Sexual and Relationship Therapy, 30(1), 131–142. doi:10.1080/14681994.2014.963983.
This paper presents data from a small study exploring the impacts of homophobia on the lives of older lesbian and gay Australians. Eleven in-depth interviews were conducted with older lesbians (6) and gay men (5) ranging in age from 65 to 79 years. The study found that participants' sense of self was shaped by the dominant medical, legal and religious institutions of their youth that defined them as sick, immoral or criminal. Participants described enforced "cure" therapies, being imprisoned, having employment terminated and being disowned and disinherited by family. In this context, intimate relationships and social networks provided refuge where trust was rebuilt and sexuality affirmed. Many created safe spaces for themselves. This equilibrium was threatened with increasing age, disability and the reliance on health and social services. Participants feared a return to institutional control and a need to "straighten up" or hide their sexuality. In response, partners stepped into the role of caregiver, at times beyond their capacity and at a cost to their relationship. The study describes the importance of understanding social connections in the lives of older lesbians and gay men. It highlights the need for inclusive services to ensure that social networks are supported and that health and well-being are promoted.
Mots-clé: besoins et craintes, relations sociales
Beagan, B. L., Fredericks, E., & Goldberg, L.. (2012). Nurses' Work With LGBTQ Patients: "They're Just Like Everybody Else, So What's the Difference?". The Canadian journal of nursing research, 44(3), 44–63.
Drawing on critical feminist and queer research methods, this article explores nurses' perceptions of their practice with lesbian, gay, bisexual, transgender or queer (LGBTQ) patients. The study involved in-depth semi-structured interviews with 12 members of the nursing profession in Halifax, Nova Scotia. These interviews shed light on a range of approaches to nursing practice. Participants most often maintained that differences such as sexual orientation and gender identity make no difference: everyone should be treated as a distinct individual. Participants seemed very keen to avoid discrimination or stereotyping by trying to avoid assumptions. They were anxious not to offend patients through their language or gestures. When social differences were taken into account, the focus was often limited to sexual health, although some participants showed a nuanced understanding of oppression and marginalization. Distinguishing between generalizations and stereotypes can help nursing staff in their efforts to recognize social differences without harming LGBTQ patients.
Mots-clé: attitudes des professionnel.le.s, soins infirmiers
Beauchamp, J., Chamberland, L., & Carbonneau, H.. (2020). Le vieillissement chez les a\^\inés gais et lesbiennes: Entre la normalité, l'expression de besoins spécifiques et leur capacité d'agir. Nouvelles pratiques sociales, 31(1), 279–299. doi:10.7202/1069927ar.
This article takes a qualitative approach to the ageing of gay and lesbian elders. Research suggests that several factors influence gay and lesbian elders' perceptions and experiences of ageing. The findings are drawn from doctoral research exploring the social participation of gay and lesbian elders. The article documents representations and perceptions of ageing, the specific issues arising at the intersection of sexual orientation and advancing age, and the agency of gay and lesbian elders.
Mots-clé: besoins et craintes
Beauchamp, J., & Chamberland, L.. (2015). Les enjeux de santé mentale chez les aînés gais et lesbiennes. Santé mentale au Québec, 40(3), 173–21. doi:10.7202/1034917ar. [URL]
Mots-clé: parcours de vie, revue de littérature
Beauchamp, J.. (2013). Réalités et besoins des aînés gais et lesbiennes: des pistes d'action pour une approche adaptée. pluriâges, 4(1), 19–23.
Beeler, J. A., Rawls, T. W., Herdt, G., & Cohler, B. J.. (1999). The Needs of Older Lesbians and Gay Men in Chicago. Journal of Gay & Lesbian Social Services, 9(1), 31–49. doi:10.1300/J041v09n01_02.
Beringer, R., Gutman, G., Daudt, H., & de Vries, B.. (2020). Considering the potential impact of Covid-19 on older members of the LGBT community in Canada. GRC News, 18–19.
Mots-clé: covid-19
Blando, J. A.. (2001). Twice Hidden: Older Gay and Lesbian Couples, Friends, and Intimacy. Generations, 25(2), 87–89. [URL]
Brennan, D. J., Emlet, C. A., & Eady, A.. (2011). HIV, Sexual Health, and Psychosocial Issues Among Older Adults Living with HIV in North America. Ageing International, 36(3), 313–333. doi:10.1007/s12126-011-9111-6.
Brennan-Ing, M., Seidel, L., Larson, B., & Karpiak, S. E.. (2013). Social Care Networks and Older LGBT Adults: Challenges for the Future. Journal of Homosexuality, 61(1), 21–52. doi:10.1080/00918369.2013.835235.
Research on service needs among older adults rarely addresses the special circumstances of lesbian, gay, bisexual, and transgender (LGBT) individuals, such as their reliance on friend-centered social networks or the experience of discrimination from service providers. Limited data suggests that older LGBT adults underutilize health and social services that are important in maintaining independence and quality of life. This study explored the social care networks of this population using a mixed-methods approach. Data were obtained from 210 LGBT older adults. The average age was 60 years, and 71% were men, 24% were women, and 5% were transgender or intersex. One-third was Black, and 62% were Caucasian. Quantitative assessments found high levels of morbidity and friend-centered support networks. Need for and use of services was frequently reported. Content analysis revealed unmet needs for basic supports, including housing, economic supports, and help with entitlements. Limited opportunities for socialization were strongly expressed, particularly among older lesbians. Implications for senior programs and policies are discussed.
Mots-clé: besoins et craintes, relations sociales, travail social
Brotman, S., Ryan, B., & Cormier, R.. (2003). The Health and Social Service Needs of Gay and Lesbian Elders and Their Families in Canada. The Gerontologist, 43(2), 192–202.
Brotman, S., Ryan, B., Collins, S., Chamberland, L., Cormier, R., Julien, D., Meyer, E., Peterkin, A., & Richard, B.. (2007). Coming Out to Care: Caregivers of Gay and Lesbian Seniors in Canada. The Gerontologist, 47(4), 490–503. doi:10.1093/geront/47.4.490.
Purpose: This article reports on the findings of a study whose purpose was to explore the experiences of caregivers of gay and lesbian seniors living in the community and to identify issues that emerged from an exploration of access to and equity in health care services for these populations. Design and Methods: The study used a qualitative methodology based upon principles of grounded theory in which open-ended interviews were undertaken with 17 caregivers living in three different cities across Canada. Results: Findings indicated several critical themes, including the impact of felt and anticipated discrimination, complex processes of coming out, the role of caregivers, self-identification as a caregiver, and support. Implications: We consider several recommendations for change in light of emerging themes, including expanding the definition of caregivers to be more inclusive of gay and lesbian realities, developing specialized services, and advocating to eliminate discrimination faced by these populations.
Mots-clé: proche aidance
Brown, A., Hayter, C., & Barrett, C.. (2015). LGBTI Ageing and Aged Care. Australasian Journal on Ageing, 34, 1–2. doi:10.1111/ajag.12283.
Mots-clé: Australie
Brown, M. J., & Patterson, R.. (2020). Subjective Cognitive Decline among Sexual and Gender Minorities: Results from a US Population-Based Sample. Journal of Alzheimers Disease, 73(2), 477–487. doi:10.3233/JAD-190869.
Mots-clé: analyse comparative, santé cognitive
Brown, M. T.. (2009). LGBT Aging and Rhetorical Silence. Sexuality Research and Social Policy, 6(4), 65–78.
The exclusion of lesbian, gay, bisexual, and transgender (LGBT) elders from queer and gerontological theories has resulted in the silencing of LGBT older adults and their experiences. Historically, this silencing has left LGBT elders without adequate social or material supports and has isolated them from both the LGBT and the older-adult communities, as well as the agencies serving those communities. The author defines this silencing as a rhetorical move rendering elders invisible in queer theory and queerness invisible in gerontological theory and argues that the producers of queer and gerontological theory, from a position of power within these discourses, silence and ignore LGBT elders' rhetorical activities. The author further argues that although many LGBT elders have worked to arrange material and social supports for themselves and their peers, their activities have become audible only relatively recently, due to the activism of middle-aged and older LGBT members of human service and academic networks.
Buczak-Stec, E., König, H., & Hajek, A.. (2020). Planning to move into a nursing home in old age: does sexual orientation matter?. Age and Ageing, 1–6. doi:10.1093/ageing/afaa185. [URL]
Mots-clé: EMS
Buczak-Stec, E., König, H., Feddern, L., & Hajek, A.. (2020). Long-Term Care Preferences and Sexual Orientation: Protocol for a Systematic Review. Healthcare, 8(4), 572. doi:10.3390/healthcare8040572. [URL]
Mots-clé: besoins et craintes, EMS
Butler, S. S.. (2017). Older lesbians' experiences with home care: Varying levels of disclosure and discrimination. Journal of Gay & Lesbian Social Services, 29(4), 378–398. doi:10.1080/10538720.2017.1365673.
Mots-clé: femmes lesbiennes, soins à domicile
Butler, S. S.. (2004). Gay, Lesbian, Bisexual, and Transgender (GLBT) Elders: The Challenges and Resilience of this Marginalized Group. Journal of Human Behavior in the Social Environment, 9(4), 25–44. doi:10.1300/J137v09n04_02.
Current gay, lesbian, bisexual, and transgender (GLBT) individuals age 65 years and older grew up before the Gay Rights movement. They have learned over many years to hide their identities in order to avoid discrimination and ridicule. Unfortunately, this secrecy has led to the near invisibility of the elder GLBT population and to poor health and service access. This paper reviews what we know about GLBT elders, describes some of the unique strengths they bring to the aging process, and outlines some of the challenges they face. Micro, mezzo, and macro practice implications are suggested.
Caceres, B. A., Travers, J., Primiano, J., Luscombe, R. E., & Dorsen, C.. (2020). Provider and LGBT Individuals' Perspectives on LGBT Issues in Long-Term Care: A Systematic Review. The Gerontologist, 60(3), e169–e183. doi:10.1093/geront/gnz012.
Mots-clé: attitudes des professionnel.le.s, besoins et craintes, logement
Caceres, B. A.. (2019). Care of LGBTQ older adults: What geriatric nurses must know. Geriatric Nursing, 40, 342–343. doi:10.1016/j.gerinurse.2019.05.006.
Geriatric nurses have a responsibility to promote the health of all older adults. Lesbian, gay, bisexual, transgender, and queer (LGBTQ) older adults are particularly vulnerable to poor health outcomes and are less likely to seek healthcare due to fear of discrimination. Despite elevated risk LGBTQ older adults are often ignored within geriatric nursing as there is little evidence to inform care. To adequately care for LGBTQ patients geriatric nurses should recognize the effects of bias, appreciate the importance of terminology, understand diversity within the LGBTQ community, advocate for the inclusion of sexual orientation and gender identity in admission assessments, share best practices, and advocate for increased visibility. Caring for this population may be challenging, as it will require geriatric nurses to expand their knowledge of LGBTQ health, explore their own biases, and challenge institutional norms. However, through coordinated efforts geriatric nurses can work toward improving care for LGBTQ older adults.
Caceres, B. A., & Frank, M. O.. (2016). Successful ageing in lesbian, gay and bisexual older people: a concept analysis. International Journal of Older People Nursing, 11(3), 184–193. doi:10.1111/opn.12108.
Conclusion. Successful ageing in lesbian, gay and bisexual older people is defined as a subjective and multifactorial concept that is characterised by support from families of origin/families of choice, access to lesbian, gay, and bisexual-friendly services and the development of crisis competence skills which impact the ageing experience of LGB individuals. Implications for practice. Successful ageing models can provide a roadmap for developing culturally competent interventions to address key healthcare issues present in this population. The nursing profession's multidisciplinary knowledge and competence in providing health promotion makes nurses well positioned to take a leading role in reducing disparities of lesbian, gay and bisexual older people.
Mots-clé: bien vieillir, soins infirmiers
Chambers, L. A.. (2015). Stigma, HIV and health: a qualitative synthesis. BMC Public Health, 15:848. doi:10.1186/s12889-015-2197-0. [URL]
Cloyes, K. G.. (2016). Seeing Silver in the Spectrum. Research in Gerontological Nursing, 9(2), 54–57.
Cobos Manuel, I., Jackson-Perry, D., Courvoisier, C., Bluntschli, C., Carel, S., Muggli, E., Waelti da Costa, V., Kampouri, E., Cavassini, M., & Darling, K. E. A.. (2020). Stigmatisation et VIH : tous concernés. Revue Médicale Suisse, 16, 744–748.
Mots-clé: stress minoritaire, Suisse, VIH
Cohen, H. L., Cox Curry, L., Jenkins, D., Walker, C. A., & Hogstel, M. O.. (2008). Older Lesbians and Gay Men: Long-Term Care Issues. Annals of Long-Term Care.
Many health and social service providers lack awareness of and knowledge about the long-term care (LTC) needs of the lesbian and gay population, about how to provide culturally-sensitive and affirming services and programs, and about ways to increase accessibility and acceptability of LTC options for lesbian and gay older adults. This article reviews the history of oppression experienced by lesbians and gay men, what is known about them, and issues for consideration by staff in LTC facilities. A life course perspective provides the conceptual framework for understanding the challenges and opportunities faced by older lesbians and gay men in LTC. Recommendations are provided to combat heterosexist assumptions and enhance culturally competent care.
Coleman, C. L.. (2018). Physical and Psychological Abuse among Seropositive African American MSM 50 Aged Years and Older. Issues in Mental Health Nursing, 39(1), 46–52. doi:10.1080/01612840.2017.1397828.
Little is known about abuse experienced among African American men who have sex with men (MSM) who are 50 years and older. A series of focus groups were conducted to examine perspectives of seropositive African American MSM age 50 years and older who reported experiencing some form of psychological or physical abuse. Thirty African American MSM were divided into four focus groups and four themes emerged: "Fear Being Gay," "No One Else to Love Me," "Nowhere to Turn," and "Sexual Risk & Control." The data suggest there is a need to develop culturally tailored interventions for this population.
Mots-clé: besoins et craintes, hommes gays, soins infirmiers, VIH
Cook-Daniels, L.. (2008). Transforming Mental Health Services for Older People: Gay, Lesbian, Bisexual, and Transgender (GLBT) Challenges and Opportunities. Journal of GLBT Family Studies, 4(4), 469–483. doi:10.1080/15504280802191723.
Mots-clé: santé mentale
Cook-Daniels, L.. (1998). Lesbian, Gay Male, Bisexual and Transgendered Elders: Elder Abuse and Neglect Issues. Journal of Elder Abuse & Neglect, 9(2), 35–49. doi:10.1300/J084v09n02_04.
Mots-clé: travail social
Cook-Daniels, L., & Munson, M.. (2010). Sexual Violence, Elder Abuse, and Sexuality of Transgender Adults, Age 50+: Results of Three Surveys. Journal of GLBT Family Studies, 6(2), 142–177. doi:10.1080/15504281003705238.
Mots-clé: maltraitance, personnes trans*, sexualité
Cook-Daniels, L.. (2008). Living Memory GLBT History Timeline: Current Elders Would Have Been This Old When These Events Happened \ldots. Journal of GLBT Family Studies, 4(4), 485–497. doi:10.1080/15504280802191731.
Correro, A. N., & Nielson, K. A.. (2020). A Review of Minority Stress as a Risk Factor for Cognitive Decline in Lesbian, Gay, Bisexual, and Transgender (LGBT) Elders. Journal of Gay & Lesbian Mental Health, 24(1), 2–19. doi:10.1080/19359705.2019.1644570.
Mots-clé: santé cognitive, stress minoritaire
Crameri, P., Barrett, C., Latham, J. R., & Whyte, C.. (2015). It is more than sex and clothes: Culturally safe services for older lesbian, gay, bisexual, transgender and intersex people. Australasian Journal on Ageing, 34, 21–25. doi:10.1111/ajag.12270.
Croghan, C. F., Moone, R. P., & Olson, A. M.. (2015). Working With LGBT Baby Boomers and Older Adults: Factors That Signal a Welcoming Service Environment. Journal of Gerontological Social Work, 58, 637–651. doi:10.1080/01634372.2015.1072759. [URL]
Mots-clé: besoins et craintes, formation à la diversité
Cronin, A., & King, A.. (2012). Only connect? Older lesbian, gay and bisexual (LGB) adults and social capital. Ageing and Society, 34(2), 258–279. doi:10.1017/S0144686X12000955.
The concept of social capital is widely used in the social sciences and has, to an extent, been applied to the lives and social networks of older lesbian, gay and bisexual (hereafter LGB) adults. Developing existing research, this paper argues that while not without its problems, the concept of social capital enriches our understanding of these networks, whilst simultaneously deconstructing the negative stereotypes surrounding homosexuality in later life. However, little attention has been paid to the social factors that mediate access and participation in lesbian and gay communities and the implications of this on the quality and experience of later life. Drawing on qualitative research conducted in the United Kingdom, this paper illustrates how biography, gender and socio-economic status are significant mediators in the development and maintenance of social capital by older LGB adults. It concludes with a set of recommendations aimed at improving the social capital of older LGB adults, together with the importance of 'queering' the concept itself.
Cronin, A., Ward, R., Pugh, S., King, A., & Price, E.. (2011). Categories and their consequences: Understanding and supporting the caring relationships of older lesbian, gay and bisexual people. International Social Work, 54(3), 421–435. doi:10.1177/0020872810396261.
This article advocates incorporating biographical narratives into social work practice involving older lesbian, gay and bisexual service users. Offering a critique of 'sexuality-blind' conditions in current policy and practice, the discussion draws on qualitative data to illustrate the potential benefits of narrative approaches for both practitioners and service users.
Cronin, A., & King, A.. (2010). Power, Inequality and Identification: Exploring Diversity and Intersectionality amongst Older LGB Adults. Sociology, 44(5), 876–892. doi:10.1177/0038038510375738.
Czaja, S. J., Sabbag, S., Lee, C. C., Schulz, R., Lang, S., Vlahovic, T., Jaret, A., & Thurston, C.. (2015). Concerns about aging and caregiving among middle-aged and older lesbian and gay adults. Aging & Mental Health, 20(11), 1107–1118. doi:10.1080/13607863.2015.1072795. [URL]
D'Augelli, A. R., & Grossman, A. H.. (2001). Disclosure of sexual orientation, victimization, and mental health among older lesbian, gay, and bisexual older adults. Journal of Interpersonal Violence, 16(10), 1008–1027.
Daley, A., & MacDonnell, J.. (2015). 'That would have been beneficial': LGBTQ education for home-care service providers. Health & Social Care in the Community, 23(3), 282–291. doi:10.1111/hsc.12141.
Mots-clé: formation à la diversité, soins à domicile
Dickey, G.. (2012). Survey of Homophobia: Views on Sexual Orientation From Certified Nurse Assistants Who Work in Long-Term Care. Research on Aging, 35(5), 563–570. doi:10.1177/0164027512447823.
It is estimated that there will be more than 5 million gays and lesbians aged 65 and over by 2030. This study examined attitudes of sexual orientation among a sample of certified nurse assistants who work in long-term care. A sample of 119 certified nurse assistants were recruited at a national conference and asked to complete a survey that included the Homophobic scale. Results indicate low levels of homophobia among the certified nurse assistants who participated. Age and acquaintances accounted for most of the variance in the homophobia scores of the certified nurse assistants. While scores show low levels of homophobia, caution is advised as scores may reflect a stereotype that elderly people are not sexually active and that their sexuality is no longer relevant.
Dicks, M., Santoro, E., & Teulan, S.. (2015). Welcoming and celebrating diversity: The Uniting journey of learning on inclusive practice. Australasian Journal on Ageing, 34, 26–28. doi:10.1111/ajag.12279.
Donaldson, W. V., Asta, E. L., & Vacha-Haase, T.. (2014). Attitudes of Heterosexual Assisted Living Residents Toward Gay and Lesbian Peers. Clinical Gerontologist, 37(2), 167–189. doi:10.1080/07317115.2013.868849. [URL]
Mots-clé: attitude des pairs, EMS
Dugan, J. T., & Fabbre, V. D.. (2019). To Survive on This Shore Selections from the South. Southern Cultures, 25(2), 19–45. doi:10.1353/scu.2019.0015. [URL]
Dune, T., Ullman, J., Ferfolja, T., Thepsourinthone, J., Garga, S., & Mengesha, Z.. (2020). Are Services Inclusive? A Review of the Experiences of Older GSD Women in Accessing Health, Social and Aged Care Services. International Journal of Environmental Research and Public Health, 17(11), 3861–17. doi:10.3390/ijerph17113861.
The review aimed to examine the views and experiences of ageing gender and sexually diverse (GSD) women, a triple minority in relation to their age, gender and sexual orientation, in accessing health, social and aged care services. Eighteen peer reviewed articles identified from seven electronic databases in health and social sciences were evaluated according to predefined criteria and a thematic review methodology drawing upon socio-ecological theory was used to analyse and interpret the findings. Four major themes were identified from the analysis: "The Dilemma of Disclosure", "Belonging/Connection", "Inclusiveness of Aged Care" and "Other Barriers to Access Care". In the dilemma of disclosure, older GSD women consider factors such as previous experiences, relationship with the provider and anticipated duration of stay with the provider before disclosing their sexual identities. The review also revealed that aged care services lack inclusiveness in their policies, advertising materials, aged care spaces and provider knowledge and attitude to provide sensitive and appropriate care to GSD women. Overall, older GSD women experience multiple and multilevel challenges when accessing health, aged and social services and interventions are needed at all levels of the socio-ecological arena to improve their access and quality of care.
Emlet, C. A., Jung, H., Kim, H., La Fazia, D., & Fredriksen-Goldsen, K. I.. (2020). Determinants of physical and mental health among LGBT older adult caregivers. Innovation in Aging, 3(S1), 345. doi:10.1080/00918369.2020.1804261.
This study identifies the interconnected needs and concerns of sexual and gender minority (SGM) older adults, with a particular focus on housing, healthcare, transportation, and social support.
Mots-clé: proche aidance, santé
Emlet, C. A., & Brennan-Ing, M.. (2020). Is There no Place for Us? The Psychosocial Challenges and Rewards of Aging with HIV. Journal of Elder Policy, 1(1), 69–95.
Emlet, C. A.. (2016). Social, Economic, and Health Disparities Among LGBT Older Adults. Generations, 40(2), 16–22. doi:10.2307/26556193. [URL]
Mots-clé: conditions de vie et finances, relations sociales, santé
Erosheva, E. A., Kim, H., Emlet, C. A., & Fredriksen-Goldsen, K. I.. (2015). Social Networks of Lesbian, Gay, Bisexual, and Transgender Older Adults. Research on Aging, 38(1), 98–123. doi:10.1177/0164027515581859.
Purpose: This study examines global social networks, including friendship, support, and acquaintance networks, of lesbian, gay, bisexual, and transgender (LGBT) older adults. Design and Methods: Utilizing data from a large community-based study, we employ multiple regression analyses to examine correlates of social network size and diversity. Results: Controlling for background characteristics, network size was positively associated with being female, transgender identity, employment, higher income, having a partner or a child, identity disclosure to a neighbor, engagement in religious activities, and service use. Controlling in addition for network size, network diversity was positively associated with younger age, being female, transgender identity, identity disclosure to a friend, religious activity, and service use. Implications: According to social capital theory, social networks provide a vehicle for social resources that can be beneficial for successful aging and well-being. This study is a first step at understanding the correlates of social network size and diversity among LGBT older adults.
Espinoza, R.. (2016). Protecting and Ensuring the Well-Being of LGBT Older Adults: A Policy Roadmap. Generations, 40(2), 87–93. doi:10.2307/26556215.
Federal acknowledgement of LGBT elders remains scant, including in the 2015 White House Conference on Aging report and the Older Americans Act. This article outlines the many reforms and policy changes necessary for LGBT elders to age independently, in good health, and be financially secure in their homes and communities, without discrimination, and also stresses the need for more research on LGBT aging.
Fabbre, V. D.. (2016). Agency and Social Forces in the Life Course: The Case of Gender Transitions in Later Life. The journals of gerontology. Series B, Psychological sciences and social sciences, 54, gbw109–9. doi:10.1093/geronb/gbw109.
Objectives: In order to bolster gerontology's knowledge base about transgender issues and advance conceptualizations of agency and social forces in life course scholarship, this study explores the conditions under which people contemplate or pursue a gender transition in later life. Methods: In-depth interviews were conducted with male-to-female identified persons (N = 22) who have seriously contemplated or pursued a gender transition after the age of 50 years. Participant observation was also carried out at three national transgender conferences (N = 170 hours). Interpretive analyses utilized open and focused coding, analytical memo writing, and an iterative process of theory development. Results: Participants in this study faced unrelenting social pressures to conform to normative gender expectations throughout their lives, which were often internalized and experienced as part of themselves. Confronting these internalized forces often took the form of a "dam bursting," an intense emotional process through which participants asserted agency in the face of constraining social forces in order to pursue a gender transition in later life. Discussion: The findings in this paper are used to extend the life course concept of agency within structure, which has implications for future life course research in aging, especially with respect to socially marginalized and oppressed minority groups.
Mots-clé: parcours de vie, personnes trans*
Fabbre, V. D.. (2014). Gender Transitions in Later Life: The Significance of Time in Queer Aging. Journal of Gerontological Social Work, 57(2-4), 161–175. doi:10.1080/01634372.2013.855287.
Concepts of time are ubiquitous in studies of aging. This article integrates an existential perspective on time with a notion of queer time based on the experiences of older transgender persons who contemplate or pursue a gender transition in later life. Interviews were conducted with male-to-female identified persons aged 50 years or older (N = 22), along with participant observation at three national transgender conferences (N = 170 hr). Interpretive analyses suggest that an awareness of "time left to live" and a feeling of "time served" play a significant role in later life development and help expand gerontological perspectives on time and queer aging.
Mots-clé: personnes trans*, théorie queer
Fabbre, V. D.. (2016). Queer Aging: Implications for Social Work Practice with Lesbian, Gay, Bisexual, Transgender, and Queer Older Adults. Social Work, 62(1), 73–76. doi:10.1093/sw/sww076. [URL]
Mots-clé: théorie queer, travail social
Fabbre, V. D.. (2017). Queer Aging: Implications for Social Work Practice. International Network for Critical Gerontology, 1–5. [URL]
Fabbre, V. D., Jen, S., & Fredriksen-Goldsen, K. I.. (2018). The State of Theory in LGBTQ Aging: Implications for Gerontological Scholarship. Research on Aging, 41(5), 495–518. doi:10.1177/0164027518822814.
Mots-clé: revue de littérature
Fenge, L.. (2010). Striving towards Inclusive Research: An Example of Participatory Action Research with Older Lesbians and Gay Men. British Journal of Social Work, 40, 878–894. doi:10.1093/bjsw/bcn144. [URL]
Fenkl, E. A.. (2012). Aging Gay Men: A Review of the Literature. Journal of LGBT Issues in Counseling, 6(3), 162–182. doi:10.1080/15538605.2012.711514. [URL]
Fish, J., & Weis, C.. (2019). All the lonely people, where do they all belong? An interpretive synthesis of loneliness and social support in older lesbian, gay and bisexual communities. Quality in Ageing and Older Adults, 20(3), 130–142. doi:10.1108/QAOA-10-2018-0050.
Loneliness is a phenomenon which affects people globally and constitutes a key social issue of our time. Yet few studies have considered the nature of loneliness and social support for older lesbian, gay and bisexual (LGB) people; this is of particular concern as they are among the social groups said to be at greater risk. The paper aims to discuss this issue. Peer-reviewed literature was identified through a search of Scopus, PsycINFO and PubMed. A total of 2,277 papers were retrieved, including qualitative and quantitative studies, which were quality assessed using the Critical Appraisal Skills Programme. In total, 11 papers were included in the review and findings were synthesised using thematic analysis. The studies were conducted in five countries worldwide with a combined sample size of 53,332 participants, of whom 4,288 were drawn from among LGB communities. The characteristics and circumstances associated with loneliness included living arrangements, housing tenure, minority stress and geographical proximity. The review suggests that among older LGB people, living alone, not being partnered and being childfree may increase the risk of loneliness. This cohort of older people may experience greater difficulties in building relationships of trust and openness. They may also have relied on sources of identity-based social support that are in steep decline. Future research should include implementation studies to evaluate effective strategies in reducing loneliness among older LGB people. Reaching older LGB people who are vulnerable due to limited physical mobility or rural isolation, or lonely because of bereavement or being a carer, is a concern. A range of interventions, including individual (befriending) and group-based (for social contact) approaches, in addition to potential benefits from the Internet of Things, should be evaluated. Discussions with the voluntary and community sector (VCS) suggest that take-up of existing provision is 85:15 gay and bisexual (GB) men versus lesbian and bisexual (LB) women. Formal social support structures provided by voluntary sector agencies have been disproportionately affected by recent austerity measures. The authors sought to interrogate the tension between findings of lower levels of social support and discourses of resilient care offered by families of choice.
Mots-clé: relations sociales, revue de littérature, santé, stress minoritaire
Fish, J., & Williamson, I.. (2018). Exploring lesbian, gay and bisexual patients' accounts of their experiences of cancer care in the UK. European Journal of Cancer Care, 27(1), e12501. doi:10.1111/ecc.12501.
Despite greater recognition of rights and responsibilities around the care of cancer patients who identify as lesbian, gay or bisexual (LGB) within healthcare systems in the United Kingdom, recent qu…
Fredriksen-Goldsen, K. I.. (2016). The Future of LGBT+ Aging: A Blueprint for Action in Services, Policies, and Research. Generations, 40(2), 6–13.
Mots-clé: politique de la santé, recherche
Fredriksen-Goldsen, K. I., & Muraco, A.. (2010). Aging and Sexual Orientation: A 25-Year Review of the Literature. Research on Aging, 32(3), 372–413. doi:10.1177/0164027509360355.
Fredriksen-Goldsen, K. I., Kim, H., Barkan, S. E., Muraco, A., & Hoy-Ellis, C. P.. (2013). Health Disparities Among Lesbian, Gay, and Bisexual Older Adults: Results from a Population-Based Study. American Journal of Public Health, 103(10), 1802–1809. doi:10.2105/AJPH.2012.301110.
Objectives. We investigated health disparities among lesbian, gay, and bisexual (LGB) adults aged 50 years and older. Methods. We analyzed data from the 2003–2010 Washington State Behavioral Risk Factor Surveillance System (n = 96,992) on health outcomes, chronic conditions, access to care, behaviors, and screening by gender and sexual orientation with adjusted logistic regressions. Results. LGB older adults had higher risk of disability, poor mental health, smoking, and excessive drinking than did heterosexuals. Lesbians and bisexual women had higher risk of cardiovascular disease and obesity, and gay and bisexual men had higher risk of poor physical health and living alone than did heterosexuals. Lesbians reported a higher rate of excessive drinking than did bisexual women; bisexual men reported a higher rate of diabetes and a lower rate of being tested for HIV than did gay men. Conclusions. Tailored interventions are needed to address the health disparities and unique health needs of LGB older adults. Research across the life course is needed to better understand health disparities by sexual orientation and age, and to assess subgroup differences within these communities.
Mots-clé: analyse comparative, santé
Fredriksen-Goldsen, K. I., Emlet, C. A., Kim, H., Muraco, A., Erosheva, E. A., Goldsen, J., & Hoy-Ellis, C. P.. (2013). The Physical and Mental Health of Lesbian, Gay Male, and Bisexual (LGB) Older Adults: The Role of Key Health Indicators and Risk and Protective Factors. The Gerontologist, 53(4), 664–675. doi:10.1093/geront/gns123. [URL]
Fredriksen-Goldsen, K. I., & Espinoza, R.. (2014). Time for Transformation: Public Policy Must Change to Achieve Health Equity for LGBT Older Adults. Generations, 38(4), 97–106. [URL]
Mots-clé: politique de la santé
Fredriksen-Goldsen, K. I., Cook-Daniels, L., Kim, H., Erosheva, E. A., Emlet, C. A., Hoy-Ellis, C. P., Goldsen, J., & Muraco, A.. (2014). Physical and Mental Health of Transgender Older Adults: An At-Risk and Underserved Population. The Gerontologist, 54(3), 488–500. doi:10.1093/geront/gnt021.
Mots-clé: personnes trans*, santé
Fredriksen-Goldsen, K. I., Hoy-Ellis, C. P., Goldsen, J., Emlet, C. A., & Hooyman, N. R.. (2014). Creating a Vision for the Future: Key Competencies and Strategies for Culturally Competent Practice With Lesbian, Gay, Bisexual, and Transgender (LGBT) Older Adults in the Health and Human Services. Journal of Gerontological Social Work, 57(2-4), 80–107. doi:10.1080/01634372.2014.890690.
Sexual orientation and gender identity are not commonly addressed in health and human service delivery, or in educational degree programs. Based on findings from Caring and Aging with Pride: The National Health, Aging and Sexuality Study (CAP), the first national federally-funded research project on LGBT health and aging, this article outlines 10 core competencies and aligns them with specific strategies to improve professional practice and service development to promote the well-being of LGBT older adults and their families. The articulation of key competencies is needed to provide a blueprint for action for addressing the growing needs of LGBT older adults, their families, and their communities.
Mots-clé: formation à la diversité
Fredriksen-Goldsen, K. I., Simoni, J. M., Kim, H., Lehavot, K., Walters, K. L., Yang, J., Hoy-Ellis, C. P., & Muraco, A.. (2014). The health equity promotion model: Reconceptualization of lesbian, gay, bisexual, and transgender (LGBT) health disparities.. American Journal of Orthopsychiatry, 84(6), 653–663. doi:10.1037/ort0000030.
Mots-clé: équité en santé, intersectionnalité, recherche, santé
Fredriksen-Goldsen, K. I., Kim, H., Shiu, C., Goldsen, J., & Emlet, C. A.. (2015). Successful Aging Among LGBT Older Adults: Physical and Mental Health-Related Quality of Life by Age Group. The Gerontologist, 55(1), 154–168. doi:10.1093/geront/gnu081. [URL]
Mots-clé: résilience, santé
Fredriksen-Goldsen, K. I.. (2016). Aging Out in the Queer Community: Silence to Sanctuary to Activism in Faith Communities. Generations, 40(2), 30–33.
Mots-clé: religion et spiritualité
Fredriksen-Goldsen, K. I.. (2016). Pioneering Research Helps Shape Health and Well-Being for LGBT Elders. Generations, 40(2), 4–5. doi:10.2307/26556191. [URL]
Fredriksen-Goldsen, K. I., & Kim, H.. (2017). The Science of Conducting Research With LGBT Older Adults- An Introduction to Aging with Pride: National Health, Aging, and Sexuality/Gender Study (NHAS). The Gerontologist, 57(suppl 1), S1–S14. doi:10.1093/geront/gnw212. [URL]
Fredriksen-Goldsen, K. I.. (2017). Dismantling the Silence: LGBTQ Aging Emerging From the Margins. The Gerontologist, 57(1), 121–128. doi:10.1093/geront/gnw159.
Historical, environmental, and cultural contexts intersect with aging, sexuality, and gender across communities and generations. My scholarship investigates health and well-being over the life course across marginalized communities, including LGBTQ (lesbian, gay, bisexual, transgender, and queer) midlife and older adults, native communities experiencing cardiovascular risk, and families in China living with HIV, in order to balance the realities of unique lives in contemporary society. By probing the intersection of age, sexuality, and gender, my analysis is informed by both personal and professional experiences. With the death of my partner occurring at a time of profound invisibility and silence before HIV/AIDS, I found my life out of sync, experiencing a loss without a name. My life was thrust into a paradox: My relationship was defined by a world that refused to recognize it. This essay provides an opportunity for me to weave together how such critical turning points in my own life helped shape my approach to gerontology and how gerontology has informed my work and life. Reflecting on this journey, I illustrate the ways in which historical, structural, environmental, psychosocial, and biological factors affect equity, and the health-promoting and adverse pathways to health and well-being across marginalized communities. Although gerontology as a discipline has historically silenced the lives of marginalized older adults, it has much to learn from these communities. The growing and increasingly diverse older adult population provides us with unique opportunities to better understand both cultural variations and shared experiences in aging over the life course.
Fredriksen-Goldsen, K. I., Jen, S., & Muraco, A.. (2019). Iridescent Life Course: LGBTQ Aging Research and Blueprint for the Future – A Systematic Review. Gerontology, 65(3), 253–274. doi:10.1159/000493559.
Background: LGBTQ* (lesbian, gay, bisexual, trans, and queer) older adults are demographically diverse and growing populations. In an earlier 25-year review of the literature on sexual orientation and aging, we identified four waves of research that addressed dispelling negative stereotypes, psychosocial adjustment to aging, identity development, and social and community-based support in the lives of LGBTQ older adults. Objectives: The current review was designed to develop an evidence base for the field of LGBTQ aging as well as to assess the strengths and limitations of the existing research and to articulate a blueprint for future research. Methods: Using a life course framework, we applied a systematic narrative analysis of research on LGBTQ aging. The review included 66 empirical peer-reviewed journal articles (2009–2016) focusing on LGBTQ adults aged 50 years and older, as well as age-based comparisons (50 years and older with those younger). Results: A recent wave of research on the health and well-being of LGBTQ older adults was identified. Since the prior review, the field has grown rapidly. Several findings were salient, including the increased application of theory (with critical theories most often used) and more varied research designs and methods. While existing life course theory provided a structure for the investigation of the social dimensions of LGBTQ aging, it was limited in its attention to intersectionality and the psychological, behavioral, and biological work emerging in the field. There were few studies addressing the oldest in these communities, bisexuals, gender non-binary older adults, intersex, older adults of color, and those living in poverty. Conclusions: The Iridescent Life Course framework highlights the interplay of light and environment, creating dynamic and fluid colors as perceived from different angles and perspectives over time. Such an approach incorporates both queering and trans-forming the life course, capturing intersectionality, fluidity over time, and the psychological, behavioral, and biological as well as social dimensions of LGBTQ aging. Work is needed that investigates trauma, differing configurations of risks and resources over the life course, inequities and opportunities in representation and capital as LGBTQ adults age, and greater attention to subgroups that remain largely invisible in existing research. More depth than breadth is imperative for the field, and multilevel, longitudinal, and global initiatives are needed.
Fredriksen-Goldsen, K. I., & de Vries, B.. (2019). Global Aging With Pride: International Perspectives on LGBT Aging. The International Journal of Aging and Human Development, 88(4), 315–324. doi:10.1177/0091415019837648.
Fredriksen-Goldsen, K. I., Kim, H., Jung, H., & Goldsen, J.. (2019). The Evolution of Aging With Pride—National Health, Aging, and Sexuality/Gender Study: Illuminating the Iridescent Life Course of LGBTQ Adults Aged 80 Years and Older in the United States. The International Journal of Aging and Human Development, 88(4), 380–404.
Mots-clé: intersectionnalité, parcours de vie, recherche
Fredriksen-Goldsen, K. I., Kim, H., Jung, H., & Goldsen, J.. (2019). The Evolution of Aging With Pride—National Health, Aging, and Sexuality/Gender Study: Illuminating the Iridescent Life Course of LGBTQ Adults Aged 80 Years and Older in the United States. The International Journal of Aging and Human Development, 7(4), 1712–1717. doi:10.1177/0091415019837591.
Aging with Pride: National Health, Aging, and Sexuality/Gender Study is the first federally funded study addressing aging among LGBTQ older adults throughout the United States. This article examine…
Mots-clé: parcours de vie, recherche
Fredriksen-Goldsen, K. I., Kim, H., Bryan, A. E. B., Shiu, C., & Emlet, C. A.. (2017). The Cascading Effects of Marginalization and Pathways of Resilience in Attaining Good Health Among LGBT Older Adults. The Gerontologist, 57(S1), S72–S83. doi:10.1093/geront/gnw170.
Mots-clé: parcours de vie, relations sociales, santé
Fredriksen-Goldsen, K. I., Bryan, A. E. B., Jen, S., Goldsen, J., Kim, H., & Muraco, A.. (2017). The Unfolding of LGBT Lives: Key Events Associated With Health and Well-being in Later Life. The Gerontologist, 57(suppl 1), S15–S29. doi:10.1093/geront/gnw185. [URL]
Mots-clé: parcours de vie
Fredriksen-Goldsen, K. I., Kim, H., Shui, C., & Bryan, A. E. B.. (2017). Chronic Health Conditions and Key Health Indicators Among Lesbian, Gay, and Bisexual Older US Adults, 2013-2014. American Journal of Public Health, 107(8), 1332–1338. doi:10.2105/AJPH. [URL]
Fredriksen-Goldsen, K. I., Shiu, C., Bryan, A. E. B., Goldsen, J., & Kim, H.. (2017). Health Equity and Aging of Bisexual Older Adults: Pathways of Risk and Resilience. The journals of gerontology. Series B, Psychological sciences and social sciences, 72(3), 468–478. doi:10.1093/geronb/gbw120. [URL]
Mots-clé: personnes bisexuelles
Furlotte, C., Gladstone, J. W., Cosby, R. F., & Fitzgerald, K.. (2016). "Could We Hold Hands?" Older Lesbian and Gay Couples' Perceptions of Long-Term Care Homes and Home Care. Canadian Journal on Aging / La Revue canadienne du vieillissement, 35(4), 432–446. doi:10.1017/S0714980816000489.
This qualitative study describes expectations, concerns, and needs regarding long-term care (LTC) homes and home care services of 12 older lesbian and gay couples living in Canada. Our findings reflect four major themes: discrimination, identity, expenditure of energy, and nuanced care. Discrimination involved concerns about covert discrimination; loss of social buffers as one ages; and diminished ability to advocate for oneself and one's partner. Identity involved anticipated risk over disclosing one's sexual identity; the importance of being identified within a coupled relationship; and the importance of access to reference groups of other gay seniors. We conclude that partners were burdened by the emotional effort expended to hide parts of their identity, assess their environments for discrimination, and to placate others. Nuanced care involved a mutual level of comfort experienced by participants and their health care providers. These themes inform understandings of LTC homes and home care services for lesbian and gay older couples.
Mots-clé: besoins et craintes, EMS, soins à domicile
Garcia, D., Baeriswyl, M., Eckel, D., Müller, D., Schlatter, C., & Rauchfleisch, U.. (2014). Von der Transsexualität zur Gender-Dysphorie. 1–6. [URL]
Mots-clé: personnes trans*, Suisse
Garcia Nuñez, D., & Jäger, M.. (2011). Comment aborder la question du sexe dans l'anamnèse des personnes homo- ou bisexuelles?. Forum Med Suisse, 11(12), 213–217.
Mots-clé: santé, Suisse
Gardner, A. T., de Vries, B., & Mockus, D. S.. (2013). Aging Out in the Desert: Disclosure, Acceptance, and Service Use Among Midlife and Older Lesbians and Gay Men. Journal of Homosexuality, 61(1), 129–144. doi:10.1080/00918369.2013.835240.
Lesbian, gay, bisexual, and transgender (LGBT) persons in the county of Riverside, CA and in the Palm Springs/Coachella Valley area, in particular, responded to a questionnaire addressing concerns about identity disclosure and comfort accessing social services. Distributed at a Pride festival, as well as through religious, social, and service agencies, the final sample for analysis of 502 comprised 401 (80%) gay men and 101 (20%) lesbians in 4 groups: < 50 years of age (18%), 50 to 59 (26%), 60 to 69 (36%), and over 70 (20%). Results reveal that almost one-third of midlife and older gay men and lesbians maintain some fear of openly disclosing their sexual orientation. Along comparable lines with similar proportions, older gay men and lesbians maintain some discomfort in their use of older adult social services, even as the majority reports that they would feel more comfortable accessing LGBT-friendly identified services and programs. In both cases, lesbians reported greater fear and discomfort than did gay men; older gay men and lesbians reported that they would be less comfortable accessing LGBT-identified services and programs than did younger gay men and lesbians. These data support prior research on the apprehension of LGBT elders in accessing care, the crucial role of acceptance, with some suggestions of how social services might better prepare to address these needs.
Gendron, T., Maddux, S., Krinsky, L., White, J., Lockeman, K., Metcalfe, Y., & Aggarwal, S.. (2013). Cultural Competence Training for Healthcare Professionals Working with LGBT Older Adults. Educational Gerontology, 39(6), 454–463. doi:10.1080/03601277.2012.701114.
Glackin, M., & Higgins, A.. (2008). The grief experience of same-sex couples within an Irish context: tacit acknowledgement. International Journal of Palliative Nursing, 1–6.
Mots-clé: deuil
Goldsen, J., Bryan, A. E. B., Kim, H., Muraco, A., Jen, S., & Fredriksen-Goldsen, K. I.. (2017). Who Says I Do: The Changing Context of Marriage and Health and Quality of Life for LGBT Older Adults. The Gerontologist, 57(suppl 1), S50–S62. doi:10.1093/geront/gnw174. [URL]
Mots-clé: conditions de vie et finances
Grigorovich, A.. (2015). Negotiating sexuality in home care settings: older lesbians and bisexual women's experiences. Culture, Health & Sexuality, 17(8), 947–961. doi:10.1080/13691058.2015.1011237.
Mots-clé: femmes lesbiennes, personnes bisexuelles, soins à domicile
Grigorovich, A.. (2013). Long-Term Care for Older Lesbian and Bisexual Women: An Analysis of Current Research and Policy. Social Work in Public Health, 28(6), 596–606. doi:10.1080/19371918.2011.593468.
The Canadian health care system{'}s delivery and policies are often based on a heterosexual nuclear family model. Long-term care (LTC) policy in particular is built on specific assumptions about women and caregiving. Current health care and LTC policies can thus disadvantage and marginalize women who do not fit such constructions, such as older lesbian and bisexual women. Drawing from literature on lesbian, gay, bisexual, and transgender women{'}s health, aging, and caregiving, this article uses a feminist political economy analysis to demonstrate that a gap exists in current research and policy with respect to the LTC needs of older lesbian and bisexual women.
Mots-clé: femmes lesbiennes, soins infirmiers
Grigorovich, A.. (2016). The meaning of quality of care in home care settings: older lesbian and bisexual women's perspectives. Scandinavian Journal of Caring Sciences, 30, 108–116. doi:10.1111/scs.12228.
Grigorovich, A.. (2015). Restricted Access: Older Lesbian and Bisexual Women's Experiences With Home Care Services. Research on Aging, 37(7), 763–783. doi:10.1177/0164027514562650.
Grossman, A. H., D'Augelli, A. R., & O'Connell, T. S.. (2008). Being Lesbian, Gay, Bisexual, and 60 or Older in North America. Journal of Gay & Lesbian Social Services, 13(4), 23–40. doi:10.1300/J041v13n04_05.
This study examined mental and physical health, perceived social support, and experiences with HIV/AIDS of 416 lesbian, gay, and bisexual adults aged 60 to 91. Most participants reported fairly high levels of self-esteem; however, many experienced loneliness. Most also reported low levels of internalized homophobia, but men reported significantly higher levels than women did. Ten percent of respondents sometimes or often considered suicide, with men reporting significantly more suicidal thoughts related to their sexual orientation. Men also had significantly higher drinking scores than women, and more men could be classified as problem drinkers. Only 11% of the respondents said that their health status interfered with the things they wanted to do. Although 93% of the participants knew people diagnosed with HIV/AIDS, 90% said that they were unlikely to be HIV-infected. Participants averaged six people in their support networks, most of whom were close friends. Most support network members knew about the participants' sexual orientation, and the respondents were more satisfied with support from those who knew. Those living with domestic partners were less lonely and rated their physical and mental health more positively than those living alone.
Mots-clé: relations sociales, santé, VIH
Grossman, A. H., D'Augelli, A. R., & Dragowski, E. A.. (2008). Caregiving and Care Receiving Among Older Lesbian, Gay, and Bisexual Adults. Journal of Gay & Lesbian Social Services, 18(3-4), 15–38. doi:10.1300/J041v18n03_02.
A survey research design was used to examine caregiving, care receiving, and the willingness to provide caregiving among lesbian, gay, and bisexual (LGB) older adults recruited from community groups. More than one-third reported receiving care from people other than healthcare providers in the last five years; more than two-thirds provided care to other LGB adults. Those who had given care were more likely than non-caregivers to give care in the future. The gender and sexual orientation of recipients of future help affected participants' willingness to provide care, as did their education level and style of coping. Participants willing to provide care to older LGB adults perceived such experiences to be less burdensome and more personally rewarding than those who were unwilling to provide care.
Grossman, A. H., D'Augelli, A. R., & Hershberger, S. L.. (2000). Social Support Networks of Lesbian, Gay, and Bisexual Adults 60 Years of Age and Older. Journal of Gerontology, 55B(3), 171–179.
Grov, C., Golub, S. A., Parsons, J. T., Brennan, M., & Karpiak, S. E.. (2010). Loneliness and HIV-related stigma explain depression among older HIV-positive adults. AIDS Care: Psychological and Socio-medical Aspects of AIDS/HIV, 22(5), 630–639. doi:10.1080/09540120903280901. [URL]
Gueler, A., Moser, A., Calmy, A., Günthard, H. F., Bernasconi, E., Furrer, H., Fux, C. A., Battegay, M., & Cavassini, M.. (2017). Life expectancy in HIV-positive persons in Switzerland: matched comparison with general population. AIDS, 31, 427–436. doi:10.1097/QAD.0000000000001335.
Mots-clé: Suisse, VIH
Hafford-Letchfield, T., Pezzella, A., Connell, S., Urek, M., Jurček, A., Higgins, A., Keogh, B., van der Vaart, N., Rabelink, I., Robotham, G., Bus, E., Buitenkamp, C., & Lewis-Brooke, S.. (2021). Learning to deliver LGBT+ aged care: exploring and documenting best practices in professional and vocational education through the World Café method. Ageing and Society, 1–22. doi:10.1017/S0144686X21000441.
Hafford-Letchfield, T., Simpson, P., Willis, P. B., & Almack, K.. (2018). Developing inclusive residential care for older lesbian, gay, bisexual and trans (LGBT) people: An evaluation of the Care Home Challenge action research project. Health & Social Care in the Community, 26, e312–e320. doi:10.1111/hsc.12521. [URL]
Mots-clé: EMS, formation à la diversité, Royaume-Uni
Hardacker, C. T., Rubinstein, B., Hotton, A., & Houlberg, M.. (2013). Adding silver to the rainbow: the development of the nurses' health education about LGBT elders (HEALE) cultural competency curriculum. Journal of Nursing Management, 22(2), 257–266. doi:10.1111/jonm.12125.
Mots-clé: formation à la diversité, soins infirmiers
Hatzenbuehler, M. L.. (2016). Structural stigma: Research evidence and implications for psychological science.. American Psychologist, 71(8), 742–751. doi:10.1037/amp0000068. [URL]
Mots-clé: équité en santé, recherche, santé
Hayman, B., & Wilkes, L.. (2016). Older lesbian women's health and healthcare: A narrative review of the literature. Journal of Clinical Nursing, 25(23-24), 3454–3468. doi:10.1111/jocn.13237.
Conclusions. Remarkably, very little contemporary literature exists that addresses the health and well-being of older lesbian women, and this cohort remain positioned on the peripheries of research and society. Older lesbian women continue to be marginalised because of their lesbian identity and actively cultivate support systems, negotiate disclosure and develop resilience to minimise the effects of their marginal position. Relevance to practice. Recognition that older lesbian women often create, and draw on, a family of choice for support is imperative. In addition, the clinical environment should be safe for older lesbian women to disclose their sexual orientation and other sensitive information.
Mots-clé: revue de littérature, soins infirmiers
Heaphy, B.. (2009). Choice and Its Limits in Older Lesbian and Gay Narratives of Relational Life. Journal of GLBT Family Studies, 5(1-2), 119–138. doi:10.1080/15504280802595451.
Mots-clé: conditions de vie et finances, relations sociales, Royaume-Uni
Heaphy, B.. (2007). Sexualities, Gender and Ageing. Current Sociology, 55(2), 193–210. doi:10.1177/0011392107073301.
The issue of sexuality is under-studied in the sociology of ageing. This article advocates placing sexuality at the centre of our analyses of ageing and later life in late modernity, by illustrating the issue of non-heterosexual ageing. The article employs personal narratives of lesbians and gay men aged between their fifties and eighties to demonstrate the importance of material, social and cultural resources in shaping their negotiations of ageing and later life. It indicates how sexuality, gender and age interact in influencing these, and argues that non-heterosexual experience illuminates possibilities that exist for both the reconfiguration and resilience of 'given' meanings and practices in relation to gender and ageing. It therefore provides insights into the uneven possibilities of reworking and/or undoing cultural meanings and social practices that shape gendered experiences of ageing and later life.
Mots-clé: analyse comparative, conditions de vie et finances, Royaume-Uni, sexualité
van Heesewijk, J. O., Dreijerink, K. M. A., Wiepjes, C. M., Kok, A. A. L., van Schoor, N. M., Huisman, M., den Heijer, M., & Kreukels, B. P. C.. (2021). Long-Term Gender-Affirming Hormone Therapy and Cognitive Functioning in Older Transgender Women Compared With Cisgender Women and Men. The Journal of Sexual Medicine, 18(8), 1434–1443. doi:10.1016/j.jsxm.2021.05.013. [URL]
Mots-clé: analyse comparative, personnes trans*, santé cognitive
Henning, C. E.. (2016). Is old age always already heterosexual (and cisgender)? The LGBT Gerontology and the formation of the "LGBT elders". Vibrant: Virtual Brazilian Anthropology, 13(1), 132–154. doi:10.1590/1809-43412016v13n1p132. [URL]
Henrikson, M., Giwa, S., Hafford-Letchfield, T., Cocker, C., Mulé, N. J., Schauf, J., & Baril, A.. (2020). Research Ethics with Gender and Sexually Diverse Persons. International Journal of Environmental Research and Public Health, 17, 6615. doi:10.3390/ijerph17186615.
Herek, G. M., Gillis, R. J., & Cogan, J. C.. (2009). Internalized stigma among sexual minority adults: Insights from a social psychological perspective.. Journal of Counseling Psychology, 56(1), 32–43. doi:10.1037/a0014672.
Higgins, A., Downes, C., Sheaf, G., Bus, E., Connell, S., Hafford-Letchfield, T., Jurček, A., Pezzella, A., Rabelink, I., Robotham, G., Urek, M., van der Vaart, N., & Keogh, B.. (2019). Pedagogical principles and methods underpinning education of health and social care practitioners on experiences and needs of older LGBT+ people: Findings from a systematic review. Nurse Education in Practice, 40, 102625. doi:10.1016/j.nepr.2019.102625.
Higgins, A., & Hynes, G.. (2019). Meeting the Needs of People Who Identify as Lesbian, Gay, Bisexual, Transgender, and Queer in Palliative Care Settings. Journal of Hospice & Palliative Nursing, 21(4), 286–290. doi:10.1097/NJH.0000000000000525.
… or in a nursing home, hospital, or hospice. Although research on the needs of LGBTQ people at the end of life is sparse, drawing on what is available this article explores some of their unique concerns that practitioners should consider during their interactions…
Mots-clé: besoins et craintes, soins palliatifs
Higgins, A., Sharek, D., & Glacken, M.. (2016). Building resilience in the face of adversity: navigation processes used by older lesbian, gay, bisexual and transgender adults living in Ireland. Journal of Clinical Nursing, 25(23-24), 3652–3664. doi:10.1111/jocn.13288. [URL]
Mots-clé: résilience, santé mentale
Hillman, J., & Hinrichsen, G. A.. (2014). Promoting an Affirming, Competent Practice With Older Lesbian and Gay Adults. Professional Psychology Research and Practice, 45(4), 269–277. doi:10.1037/a0037172.
Mots-clé: formation à la diversité, santé mentale
Hinrichs, K. L. M., & Vacha-Haase, T.. (2010). Staff Perceptions of Same-Gender Sexual Contacts in Long-Term Care Facilities. Journal of Homosexuality, 57(6), 776–789. doi:10.1080/00918369.2010.485877.
An ongoing fear in the gay and lesbian community is that long-term care (LTC) facilities may not be sensitive to their needs. In the present study, 218 LTC staff members responded to one of three v…
Mots-clé: attitudes des professionnel.le.s, EMS
Hoy-Ellis, C. P., & Fredriksen-Goldsen, K. I.. (2016). Lesbian, gay, & bisexual older adults: linking internal minority stressors, chronic health conditions, and depression. Aging & Mental Health, 20(11), 1119–1130. doi:10.1080/13607863.2016.1168362.
Hoy-Ellis, C. P., Ator, M., Kerr, C., & Milford, J.. (2016). Innovative Approaches Address Aging and Mental Health Needs in LGBTQ Communities. Generations, 40(2), 56–62. doi:10.2307/26556203. [URL]
Hsieh, N., Liu, H., & Lai, W.. (2021). Elevated Risk of Cognitive Impairment Among Older Sexual Minorities: Do Health Conditions, Health Behaviors, and Social Connections Matter?. The Gerontologist, 61(3), 352–363. doi:10.1093/geront/gnaa136.
Mots-clé: analyse comparative, santé cognitive, stress minoritaire
Hughes, M., & Kentlyn, S.. (2015). Older Lesbians and Work in the Australian Health and Aged Care Sector. Journal of Lesbian Studies, 19(1), 62–72. doi:10.1080/10894160.2015.959875.
Mots-clé: femmes lesbiennes
Hughes, M.. (2006). Queer ageing. Gay and Lesbian Issues and Psychology Review, 2(2), 54–59.
Mots-clé: théorie queer
Hughes, M., & Cartwright, C.. (2015). Lesbian, gay, bisexual and transgender people's attitudes to end-of-life decision-making and advance care planning. Australasian Journal on Ageing, 34, 39–43. doi:10.1111/ajag.12268.
Mots-clé: directives anticipées
Hutchins, T.. (2013). Hidden in the home: supporting same-sex partnerships. Nursing and Residential Care, 15(11), 738–740.
By allowing lesbian, gay and bisexual older people and their partners to remain hidden in health and social care, nurses risk ignoring their specific needs, which encompass a variety of biological and psychosocial issues, as Thomas Hutchins explains
Hébert, B., Chamberland, L., & Enriquez, M.. (2012). Les aîné-es trans : une population émergente ayant des besoins spécifiques en soins de santé, en services sociaux et en soins liés au vieillissement. Frontières, 25(1), 57–81. doi:10.7202/1018231ar.
Older trans adults are an emerging population made up of individuals with highly diverse identities, realities and trajectories. Based on a review of the literature, this article first presents this diversity, notably with regard to age, both in terms of generational cohort and of age at the start of transition. It then discusses the physical health of older trans people, namely their specific health problems and needs, as well as the barriers they encounter when trying to access adequate health care and services. The text also notes certain difficulties, such as the isolation and lack of support that are often the lot of older trans people, along with the obstacles to their access to social services and aging-related care. The article proposes avenues of action for professionals in health and social services and concludes with directions for future research.
Jablonski, R. A., Vance, D. E., & Beattie, E.. (2013). The Invisible Elderly: Lesbian, Gay, Bisexual, and Transgender Older Adults. Journal of Gerontological Nursing, 39(11), 46–52. doi:10.3928/00989134-20130916-02. [URL]
Jacobs, R. J., Rasmussen, L. A., & Hohman, M. M.. (1999). The Social Support Needs of Older Lesbians, Gay Men, and Bisexuals. Journal of Gay & Lesbian Social Services, 9(1), 1–30. doi:10.1300/J041v09n01_01.
Jellestad, L., Jäggi, T., Corbisiero, S., Schaefer, D. J., Jenewein, J., Schneeberger, A., Kuhn, A., & Garcia Nuñez, D.. (2018). Quality of Life in Transitioned Trans Persons: A Retrospective Cross-Sectional Cohort Study. BioMed Research International, 2018, 8684625. doi:10.1155/2018/8684625.
Background. Medical gender-affirming interventions (GAI) are important in the transition process of many trans persons. The aim of this study was to examine the associations between GAI and quality of life (QoL) of transitioned trans individuals. Methods. 143 trans persons were recruited from a multicenter outpatient Swiss population as well as a web-based survey. The QoL was assessed using the Short Form (36) Health Survey questionnaire (SF-36). Depressive symptoms were examined using the Short Form of the Center for Epidemiologic Studies-Depression Scale (ADS-K). Multiple inferential analyses and a regression analysis were performed. Results. Both transfeminine and transmasculine individuals reported a lower QoL compared to the general population. Within the trans group, nonbinary individuals showed the lowest QoL scores and significantly more depressive symptoms. A detailed analysis identified sociodemographic and transition-specific influencing factors. Conclusions. Medical GAI are associated with better mental wellbeing but even after successful medical transition, trans people remain a population at risk for low QoL and mental health, and the nonbinary group shows the greatest vulnerability.
Jessup, M. A., & Dibble, S. L.. (2012). Unmet Mental Health and Substance Abuse Treatment Needs of Sexual Minority Elders. Journal of Homosexuality, 59(5), 656–674. doi:10.1080/00918369.2012.665674.
In a survey exploring the reliability and validity of a screening tool, we explored the substance abuse and mental health issues among 371 elders; 74 were sexual minorities. Analyses by age group indicated that elders 55–64 years had significantly more problems with substance abuse, posttraumatic stress disorder (PTSD), depression, anxiety, and suicidal thoughts compared to those 65 and older. Bisexuals reported significantly greater problems with depression, anxiety, and suicidality than either heterosexual or lesbian or gay elders. Mental health and substance abuse treatment utilization was low among all elders with problems. Implications for assessment, access to care, and group-specific services delivery are discussed.
Johnson, M. J., Jackson, N. C., Arnette, K. J., & Koffman, S. D.. (2005). Gay and Lesbian Perceptions of Discrimination in Retirement Care Facilities. Journal of Homosexuality, 49(2), 83–102. doi:10.1300/J082v49n02_05.
Much research on older gay, lesbian, bisexual and transgender (GLBT) adults has focused on refuting the widely held misconceptions people have about GLBT lifestyles. To date, however, few studies on older GLBTs have examined their social and health care needs. Further, most studies have collected survey samples of older GLBT adults in large metropolitan areas and have not specifically addressed discrimination or bias in retirement care facilities. In the current exploratory study on perceptions of discrimination and bias in retirement care facilities, we surveyed a wide age range of GLBT adults in a smaller metropolitan area of fewer than 400,000 people to discover the perceptions of both younger and older GLBTs. We surveyed perceptions of discrimination in retirement care facilities, sources of perceived discrimination, and suggestions for how discrimination might be reduced or eliminated in those settings. Respondents indicated that administration, care staff, and residents of retirement care facilities themselves were all potential sources of discrimination, and that education addressing awareness and acceptance of GLBTs is one potential remedy for discrimination against GLBTs in retirement care facilities. Respondents also indicated a strong desire for the development of GLBT-exclusive or GLBT-friendly retirement care facilities. Chi-square analyses of responses to the discrimination questions and respondents' demographic characteristics revealed significant differences with regard to age, income, gender, community size, and education level of the respondents.
Johnston, T. R., & Meyer, H.. (2017). LGBT-specific housing in the USA. Housing, Care and Support, 20(3), 121–127. doi:10.1108/HCS-07-2017-0016. [URL]
Mots-clé: logement
Johnston, T. R.. (2016). Bisexual Aging and Cultural Competency Training: Responses to Five Common Misconceptions. Journal of Bisexuality, 16(1), 99–111. doi:10.1080/15299716.2015.1046629. [URL]
Mots-clé: formation à la diversité, personnes bisexuelles
Jurček, A., Downes, C., Keogh, B., Urek, M., Hafford-Letchfield, T., Buitenkamp, C., van der Vaart, N., & Higgins, A.. (2021). Educating health and social care practitioners on the experiences and needs of older LGBT+ adults: findings from a systematic review. Journal of Nursing Management, 29(1), 43–57. doi:10.1111/jonm.13145.
Karpiak, S. E., & Brennan-Ing, M.. (2016). Aging with HIV: The Challenges of Providing Care and Social Supports. Generations, 40(2), 23–25. doi:10.2307/26556194. [URL]
Kcomt, L., & Gorey, K. M.. (2017). End-of-Life Preparations among Lesbian, Gay, Bisexual, and Transgender People: Integrative Review of Prevalent Behaviors. Journal of Social Work in End-of-Life & Palliative Care, 13(4), 284–301. doi:10.1080/15524256.2017.1387214.
Proactively making end-of-life (EOL) preparations is important to ensure high quality EOL care. Critical to preparation is the discussion of preferences with one's primary health care providers. Lesbian, gay, bisexual, and transgender (LGBT) people often experience discrimination from health care providers that will detrimentally affect their ability to communicate their care preferences. Structural barriers, such as those based on sexual orientation and gender identity, may impede timely and quality care when one is most in need. The aim of this study was to examine the prevalence of EOL preparatory behaviors among LGBT people, with particular focus on transgender individuals. Eight survey instruments with 30 prevalence estimates found in the literature were analyzed. EOL discussions between LGBT people and their primary health care providers were rare (10%). Transgender people were found to be even less prepared for EOL; they were 50–70% less likely than their LGB counterparts to have a will, a living will or to have appointed a healthcare proxy. A need exists for future mixed-methods research focused on LGBT populations accompanied by the cultural sensitivity needed to ensure their wishes are honored at the EOL.
Mots-clé: directives anticipées, revue de littérature
Kia, H.. (2015). Hypervisibility: Toward a Conceptualization of LGBTQ Aging. Sexuality Research and Social Policy, 13(1), 46–57. doi:10.1007/s13178-015-0194-9.
There remains a salient need to conceptualize lesbian, gay, bisexual, transgender, and queer (LGBTQ) aging as an area of study. Although the limited body of theoretical literature in this field has delineated systemic silence or invisibility as a prominent feature of marginalization among LGBTQ elders, this model does not appear to account for mechanisms of surveillance and control that often regulate sexuality and gender identity in old age. This paper represents a preliminary attempt at developing a framework of LGBTQ aging that addresses social processes in which queerness and gender variance are monitored and limited in later stages of the life course. The analysis is guided by the Foucauldian notion of neoliberal governmentality, which enables consideration of bodies of discourse and technologies of power that together drive these systemic phenomena in contemporary political and economic contexts. The paper concludes with implications of this analysis on theory and empirical inquiry in the field of LGBTQ aging.
Kilbourn, S.. (2016). Perseverance, Patience, and Partnerships Build Elder LGBT Housing in San Francisco. Generations, 40(2), 103–105. doi:10.2307/26556221. [URL]
Kim, H., Fredriksen-Goldsen, K. I., Bryan, A. E. B., & Muraco, A.. (2017). Social Network Types and Mental Health Among LGBT Older Adults. The Gerontologist, 57(suppl 1), S84–S94. doi:10.1093/geront/gnw169. [URL]
Mots-clé: relations sociales, santé mentale
Kim, H., Acey, K., Guess, A., Jen, S., & Fredriksen-Goldsen, K. I.. (2016). A Collaboration for Health and Wellness: GRIOT Circle and Caring and Aging with Pride. Generations, 40(2), 49–55. [URL]
Mots-clé: racisme
Kim, H., & Fredriksen-Goldsen, K. I.. (2016). Living Arrangement and Loneliness Among Lesbian, Gay, and Bisexual Older Adults. The Gerontologist, 56(3), 548–558.
King, A., & Stoneman, P.. (2017). Understanding SAFE Housing – putting older LGBT* people's concerns, preferences and experiences of housing in England in a sociological context. Housing, Care and Support, 20(3), 89–99. doi:10.1108/HCS-04-2017-0010. [URL]
Mots-clé: besoins et craintes, logement
King, A., & Cronin, A.. (2016). Bonds, bridges and ties: applying social capital theory to LGBT people's housing concerns later in life. Quality in Ageing and Older Adults, 17(1), 16–25. doi:10.1108/QAOA-05-2015-0023.
Purpose – The purpose of this paper is to contribute to debates about lesbian, gay, bisexual and transgender (LGBT) housing later in life by placing these in a theoretical context: social capital theory (SCT). Design/methodology/approach – After a discussion of SCT, emanating from the works of Robert Putnam and Pierre Bourdieu, the paper draws on existing studies of LGBT housing later in life, identifying key concerns that are identified by this body of literature. Findings – The paper then applies SCT to the themes drawn from the LGBT housing later in life literature to illustrate the usefulness of putting these in such a theoretical context. Originality/value – Hence, overall, the paper fills an important gap in how the authors think about LGBT housing later in life; as something that is framed by issues of social networks and connections and the benefits, or otherwise, that accrue from them.
King, A.. (2014). Queer Categories: Queer(y)ing the Identification 'Older Lesbian, Gay and/or Bisexual (LGB) Adults' and its Implications for Organizational Research, Policy and Practice. Gender, Work & Organization, 23(1), 7–18. doi:10.1111/gwao.12065.
In recent years there has been a growth in organizational discourse concerning the lives of older lesbian, gay and/or bisexual (LGB) adults, which has started to address the serious omission and invisibility of this group of people in research, policy making and service provision. Whilst this development is welcomed, it inevitably draws attention to the identification 'older LGB adults' on which it is based. Using insights from queer theory, in addition to the sociological perspectives of ethnomethodology and conversation analysis, this article troubles or 'queers' such identifications. It does this, not only theoretically, but empirically, by conducting a membership categorization analysis (MCA) of some data emanating from a small organizational scoping study of older LGB adults. The ramifications of this for organizational research, policy making and practice are considered in the conclusion.
King, A.. (2013). Prepare for Impact? Reflecting on Knowledge Exchange Work to Improve Services for Older LGBT People in Times of Austerity. Social Policy and Society, 14(1), 15–27. doi:10.1017/S1474746413000523.
This article reflects on the experience of undertaking a knowledge exchange project with a local government authority to improve services for older lesbian, gay, bisexual and trans (LGBT) adults. It frames this project in terms of local government equality work, existing research and initiatives concerning older LGBT people and the coming of austerity. The project methodology is detailed, including discussion of the generation and measurement of impact. Some critical issues that arose during the project are considered, including suggestions that these may have been related to economic austerity. The article concludes that although knowledge exchange work with older LGBT people faces challenges in such times, future research and initiatives are warranted.
Kneale, D., Sholl, P., Sherwood, C., & Faulkner, J.. (2016). Ageing and lesbian, gay and bisexual relationships. Working with Older People, 18(3), 142–151. doi:10.1108/WWOP-06-2014-0015. [URL]
Mots-clé: analyse comparative, relations sociales, stress minoritaire
Kneale, D., Henley, J., Thomas, J., & French, R.. (2019). Inequalities in older LGBT people's health and care needs in the United Kingdom: a systematic scoping review. Ageing and Society, 1–23. doi:10.1017/S0144686X19001326.
The hostile environment that older lesbian, gay, bisexual and transgender (LGBT) people faced at younger ages in the United Kingdom (UK) may have a lasting negative impact on their health. This systematic scoping review adds to the current knowledge base through comprehensively synthesising evidence on what is known about the extent and nature of health and care inequalities, as well as highlighting gaps in the evidence which point the way towards future research priorities. We searched four databases, undertook manual searching, and included studies which presented empirical findings on LGBT people aged 50+ in the UK and their physical and mental health or social care status. From a total of 5,738 records, 48 papers from 42 studies were eligible and included for data extraction. The synthesis finds that inequities exist across physical and mental health, as well as in social care, exposure to violence and loneliness. Social care environments appeared as a focal point for inequities and formal care environments severely compromised the identity and relationships that older LGBT people developed over their lifecourse. Conversely, the literature demonstrated how some older LGBT people successfully negotiated age-related transitions, e.g. emphasising the important role of LGBT-focused social groups in offsetting social isolation and loneliness. While there exist clear policy implications around the requirement for formal care environments to change to accommodate an increasingly diverse older population, there is also a need to explore how to support older LGBT people to maintain their independence for longer, reducing the need for formal care.
Mots-clé: parcours de vie, Royaume-Uni, santé
Kneale, D., French, R., Spandler, H., Young, I., Purcell, C., Boden, Z., Brown, S. D., Callwood, D., Carr, S., Dymock, A., Eastham, R., Gabb, J., Henley, J., Jones, C., McDermott, E., Mkhwanazi, N., Ravenhill, J., Reavey, P., Scott, R., Smith, C., Smith, M., Thomas, J., & Tingay, K.. (2019). Conducting sexualities research: an outline of emergent issues and case studies from ten Wellcome-funded projects. Wellcome Open Research, 4(137). doi:10.12688/wellcomeopenres.15283.1. [URL]
Knochel, K. A., Quam, J. K., & Croghan, C. F.. (2011). Are Old Lesbian and Gay People Well Served?. Journal of Applied Gerontology, 30(3), 370–389. doi:10.1177/0733464810369809.
The lesbian and gay population is largely invisible in the gerontological literature and in planning and provision of aging services. A recent survey of providers of aging services in a large midwestern metropolitan area provides insight into providers' beliefs, preparation, and experience with serving old lesbian and gay people. Few agencies that participated in the study provided services targeted to this population, and some agencies were unwilling to consider their unique needs. Participating agencies generally recognized a need for greater knowledge and specific training in working with aging lesbian and gay people. Providers diverged over whether separate services should be established for the old lesbian and gay population. Providers consistently expressed values of care, inclusiveness, sensitivity, respect, and provision of service to everyone. The study results provide direction for future training and research with providers of aging services.
Mots-clé: attitudes des professionnel.le.s
Koepke, D.. (2016). Opportunities for Ministering to LGBT Elders: A Conversation with Rev. Daniel Hooper. Generations, 40(2), 26–29. doi:10.2307/26556195. [URL]
Kong, T. S.. (2018). Gay and grey: participatory action research in Hong Kong. Qualitative Research, 18(3), 257–272. doi:10.1177/1468794117713057.
Kridahl, L., & Kolk, M.. (2018). Retirement coordination in opposite-sex and same-sex married couples: Evidence from Swedish registers. Advances in Life Course Research, 38, 22–36. doi:10.1016/j.alcr.2018.10.003.
This study examines how married couples' age differences and gender dynamics influence retirement coordination in Sweden. High-quality longitudinal administrative registers allow us to study the labor market outcomes of all marital couples in Sweden. Using regression analysis, we find that the likelihood of couples retiring close in time decreases as their age difference increases but that age differences have a similar effect on retirement coordination for couples with larger age differences. Additionally, retirement coordination is largely gender-neutral in opposite-sex couples with age differences regardless of whether the male spouse is older. Additionally, male same-sex couples retire closer in time than both opposite-sex couples and female same-sex couples. The definition of retirement coordination as the number of years between retirements contributes to the literature on couples' retirement behavior and allows us to study the degree of retirement coordination among all couples, including those with larger age differences.
Mots-clé: activité professionnelle, Europe
Krinsky, B. L. L., Linscott, B., & Krinsky, L.. (2016). Engaging Underserved Populations: Outreach to LGBT Elders of Color. Generations, 40(2), 34–37. doi:10.2307/26556197. [URL]
Kushner, B., Neville, S., & Adams, J.. (2013). Perceptions of ageing as an older gay man: a qualitative study. Journal of Clinical Nursing, 22(23-24), 3388–3395. doi:10.1111/jocn.12362.
Conclusions. Resilience was a significant factor in how well older gay men aged even in an environment where homophobia and heterosexism were common. Having a strong social support network was an important factor that contributed to supporting the ageing process. These gay men were wary about having to go into residential care, preferring to age in their own homes. Relevance to clinical practice. Nurses and other healthcare professionals need to ensure healthcare services meet the needs of older gay men. Any interaction with older gay men should occur in a way that is open and respectful. The usage of best practice guidelines will assist organisations to deliver culturally safe and appropriate care to this group.
Kuyper, L., & Fokkema, T.. (2010). Loneliness Among Older Lesbian, Gay, and Bisexual Adults: The Role of Minority Stress. Archives of Sexual Behavior, 39, 1171–1180. doi:10.1007/s10508-009-9513-7. [URL]
Mots-clé: Europe, Hollande, relations sociales, stress minoritaire
Langley, J.. (2001). Developing Anti-Oppressive Empowering Social Work Practice with Older Lesbian Women and Gay Men. British Journal of Social Work, 31(6), 917–932. doi:10.1093/bjsw/31.6.917.
Available studies suggest that around 10 per cent of the population might self-identify as a lesbian woman or gay man (Davies and Neal, 1996). It follows that social workers will engage with older people who are homosexual. It does not follow that they will know who they are, as this is a group often characterized by its invisibility. This paper reports the results of a small-scale, exploratory study which examined how older lesbian women and gay men perceived their needs should they become ill or disabled as they age (Langley, 1997). Their concerns were viewed in the context of their past as well as present lives, and oppression was a unifying theme. Some of the findings are examined in order to highlight key challenges for social work practice. These include: (i) working with invisibility and fear of oppression; (ii) developing awareness and recognition of lesbian and gay relationships and supportive networks; (iii) the need for anti-oppressive empowering services which match the needs and circumstances of older lesbian women and gay men; (iv) importantly, the need for greater awareness of the heterosexist assumptions which influence institutional responses and individual practice.
Larson, B.. (2016). Intentionally Designed for Success: Chicago's First LGBT-Friendly Senior Housing. Generations, 40(2), 106–107. doi:10.2307/26556223. [URL]
Latham, J. R., & Barrett, C.. (2015). Appropriate bodies and other damn lies: Intersex ageing and aged care. Australasian Journal on Ageing, 34, 19–20. doi:10.1111/ajag.12275.
Mots-clé: personnes intersexes
Lavigne, P., & Grenier, J.. (2015). "M'aides-tu pareil?" Proche aidance, diversité sexuelle et enjeux de reconnaissance.. Intervention(141), 29–40.
Mots-clé: proche aidance, travail social
Leyva, V. L., Breshears, E. M., & Ringstad, R.. (2014). Assessing the Efficacy of LGBT Cultural Competency Training for Aging Services Providers in California's Central Valley. Journal of Gerontological Social Work, 57(2-4), 335–348. doi:10.1080/01634372.2013.872215.
Lim, F. A., & Bernstein, I.. (2012). Promoting Awareness of LGBT Issues in Aging in a Baccalaureate Nursing Program. Nursing Education Perspectives, 33(3), 170–175.
It is estimated that up to 10 percent of the American population is lesbian, gay, bisexual, or transgender (LGBT) and that up to 7 million members of this population are elderly. Both the Institute of Medicine and Healthy People 2020 have addressed the health disparities that affect elderly members of the LGBT community. Nurses are well positioned to bridge health disparities and provide culturally sensitive care across the lifespan, but compared with that of other disciplines, the nursing literature is lacking in content addressing LGBT health. Eliminating health disparities in the care of LGBT elders should be a priority in nursing education. The authors review the issues LGBT elders face and recommend how content related to LGBT aging can be integrated into nursing curricula.
Lottmann, R., & King, A.. (2020). Who can I turn to? Social networks and the housing, care and support preferences of older lesbian and gay people in the UK. Sexualities. doi:10.1177/1363460720944588. [URL]
Mots-clé: besoins et craintes, logement, proche aidance, relations sociales, Royaume-Uni
Lottmann, R., & Kollak, I.. (2017). LGBT*I & AGING für Vielfalt in der Pflege. Pflegezeitschrift, 70(7), 59.
Lottmann, R.. (2020). Sexuelle und geschlechtliche Vielfalt in der Altenhilfe – Intersektionale Perspektiven und die Relevanz von Situationen und Kontexten. Zeitschrift für Gerontologie und Geriatrie, 53(3), 216–221.
Gender and sexual diversity have so far been treated as rather marginal topics in research on ageing and care. With regard to health care in old age, non-heterosexual seniors and people in need of care report fear of rejection and of dependence on third parties who do not adequately recognise their life situation.
Mots-clé: Allemagne, intersectionnalité
Lottmann, R., & Kollak, I.. (2018). Eine diversitätssensible Pflege für schwule und lesbische Pflegebedürftige – Ergebnisse des Forschungsprojekts GLESA. International Journal of Health Professions, 5(1), 53–63. doi:10.2478/ijhp-2018-0005. [URL]
Mots-clé: Allemagne, besoins et craintes, logement
Lyons, A., Pitts, M., Grierson, J., Thorpe, R., & Power, J.. (2010). Ageing with HIV: health and psychosocial well-being of older gay men. AIDS Care: Psychological and Socio-medical Aspects of AIDS/HIV, 22(10), 1236–1244. doi:10.1080/09540121003668086.
Since the introduction of highly active antiretroviral therapy, people living with HIV/AIDS (PLWHA) are living longer, into older age, and therefore presenting a host of new challenges for health and social service providers. However, not all PLWHA are likely to experience similar transitions into older age. In particular, research has yet to fully investigate the health and psychosocial well-being of older HIV-positive gay men. Drawing from an Australian population-based sample of 693 HIV-positive gay men, the present study assesses the overall health and well-being of this older group compared to their younger counterparts. While older men reported greater comorbidity and were more likely to be living in poverty, other health and well-being indicators suggest this group to be coping comparatively well as they continue to age with HIV. These findings provide new directions for meeting the present and future needs and challenges of older HIV-positive gay men.
Mots-clé: Australie, hommes gays, VIH
Lyons, A., Alba, B., Waling, A., Minichiello, V., Hughes, M., Barrett, C., Fredriksen-Goldsen, K. I., Edmonds, S., & Blanchard, M.. (2019). Recent versus lifetime experiences of discrimination and the mental and physical health of older lesbian women and gay men. Ageing and Society, 1–22. doi:10.1017/S0144686X19001533.
This study examines the potential health-related impact of recent versus lifetime experiences of sexual orientation discrimination among older Australian lesbian women and gay men. In a nationwide survey, a sample of 243 lesbian women and 513 gay men aged 60 years and over reported on their experiences of sexual orientation discrimination and their mental and physical health, including psychological distress, positive mental health and self-rated health. Among both lesbian women and gay men, recent discrimination uniquely predicted lower positive mental health after adjusting for experiences of discrimination across the lifetime and socio-demographic variables. In addition, recent discrimination uniquely predicted higher psychological distress among gay men. Experiences of discrimination over the lifetime further predicted higher psychological distress and poorer self-rated health among gay men after adjusting for recent experiences of discrimination and socio-demographic variables. However, there were no associations between lifetime discrimination and any of the outcome variables among lesbian women. Overall, recent and lifetime experiences of sexual orientation discrimination were related to mental and physical health in different ways, especially among the men. These findings have potential implications for policy/practice, and suggest that distinguishing between recent and lifetime experiences of discrimination may be useful when assessing potential health-related impacts of sexual orientation discrimination among older lesbian women and gay men, while also taking account of differences between these two groups.
Lyons, A., Alba, B., Waling, A., Minichiello, V., Hughes, M., Barrett, C., Fredriksen-Goldsen, K. I., & Edmonds, S.. (2020). Mental health and identity adjustment in older lesbian and gay adults: Assessing the role of whether their parents knew about their sexual orientation. Aging & Mental Health, 1–8. doi:10.1080/13607863.2020.1765314.
Lytle, A., Apriceno, M., Dyar, C., & Levy, S. R.. (2018). Sexual Orientation and Gender Differences in Aging Perceptions and Concerns Among Older Adults. Innovation in Aging, 2(3), 105–9. doi:10.1093/geroni/igy036. [URL]
Lévy, J. J., Adam, B., Blais, M., Chamberland, L., Dumas, J., Engler, K., Léobon, A., Ryan, B., Thoër, C., & Wells, K.. (2012). Le vieillissement chez les hommes gais et bisexuels canadiens : un portrait de l'état de santé et des préoccupations relatives à la santé et aux relations interpersonnelles. Frontières, 25(1), 82–104. doi:10.7202/1018232ar.
Little research has examined health profiles and health concerns among ageing gay and bisexual men. As part of a pan-Canadian online survey, 411 respondents aged 55 and over completed a questionnaire assessing these two issues. The results show that the gaps with the heterosexual population of the same age group lie particularly in the area of mental health, where problems are more pronounced; differences are also found between respondents aged 55-64 and those aged 65 and over in our sample. These results can contribute to developing better-targeted interventions aimed at promoting healthy ageing among these sexual minorities.
Mots-clé: hommes gays, personnes bisexuelles
Löf, J., & Olaison, A.. (2020). 'I don't want to go back into the closet just because I need care': recognition of older LGBTQ adults in relation to future care needs. European Journal of Social Work, 23(2), 253–264. doi:10.1080/13691457.2018.1534087.
There is increasing awareness in research about the social service needs of older LGBTQ adults. However, there are few studies that deal with differences in this community regarding elder care services. As a rule, transgender individuals are not included in these studies. This study focuses on how older Swedish LGBTQ adults reason about openness in an elder care context concerning their future needs for services and adopts Nancy Fraser's theoretical framework of recognition. The material consists of fifteen semi-structured interviews with older LGBTQ adults. The results indicate that the main concern for older LGBTQ individuals is being accepted for their preferred sexual orientation and/or gender identity in elder care. However, there were differences regarding that concern in this LGBTQ group. There were also a variety of approaches in the group as to preferences for equal versus special treatment with respect to their LGBTQ identity. In addition, there were differences as to whether they prefer to live in LGBTQ housing or not. The findings contribute to existing knowledge by highlighting the diverse views on elder care services in both this group of interviewees and its subgroups. These findings emphasise the importance of the social work practice recognising different preferences and having an accepting approach. The results can further provide guidance on how to design elder care services for older LGBTQ adults.
MacGabhann, P.. (2015). Caring for gay men and lesbians in nursing homes in Ireland. British Journal of Nursing, 24(22), 1142–1148.
This article examines the literature relating to the attitudes of nurses currently practicing in nursing homes towards caring for gay men and lesbians in Ireland. Nurses' knowledge of and attitudes towards the sexuality of those in their care can potentially have an impact on the quality of care they deliver and the patient experience. There is a consensus in the literature regarding the expression of sexuality as a lifelong need and integral element of quality of life. Research to date focusing on the needs of older gay or lesbian individuals has been virtually non-existent, despite increases in life expectancy and increasing numbers of older people, and therefore older gay and lesbian people requiring nursing home care.
Mots-clé: attitudes des professionnel.le.s, EMS, soins infirmiers
Mahieu, L., Cavolo, A., & Gastmans, C.. (2019). How do community-dwelling LGBT people perceive sexuality in residential aged care? A systematic literature review. Aging & Mental Health, 23(5), 529–540. [URL]
Mahieu, L., & Gastmans, C.. (2015). Older residents' perspectives on aged sexuality in institutionalized elderly care: A systematic literature review. International Journal of Nursing Studies, 52(12), 1891–1905. doi:10.1016/j.ijnurstu.2015.07.007.
The aim of this systematic literature review is to investigate older residents' thoughts on, experiences of and engagement in sexual behavior and aged sexuality within institutionalized elderly care.
McCann, E., & Brown, M. J.. (2019). The mental health needs and concerns of older people who identify as LGBTQ+: A narrative review of the international evidence. Journal of Advanced Nursing, 75(12), 3390–3403. doi:10.1111/jan.14193.
Conclusion: This review highlights key mental health-related issues that need to be taken into account in the creation and provision of appropriate, responsive and inclusive supports and services. Impact: What were the main findings? Some older people who identify as LGBTQ+ have experienced stigma, discrimination, and minority stress. However, many have developed coping strategies and resilience while others have developed mental health issues. It is necessary to have in place appropriate interventions and supports to effectively meet the needs of this population. Where and on whom will the research have impact? The review has significant implications for health and nursing policy and informs developments in nursing practice and nurse education.
Mots-clé: santé mentale, soins infirmiers
McCann, E., Sharek, D., Higgins, A., Sheerin, F., & Glacken, M.. (2013). Lesbian, gay, bisexual and transgender older people in Ireland: Mental health issues. Aging & Mental Health, 17(3), 358–365. doi:10.1080/13607863.2012.751583.
McLaren, S.. (2016). The relationship between living alone and depressive symptoms among older gay men: the moderating role of sense of belonging with gay friends. International Psychogeriatrics, 28(11), 1895–1901. doi:10.1017/S1041610216001241.
Background: Living alone is a risk factor for depressive symptoms among older adults, although it is unclear if it is a risk factor for older gay men. A sense of belonging to the gay community is protective and might compensate for living alone. This research investigated whether a sense of belonging with gay friends weakened the relationship between living alone and depressive symptoms among older gay men. Methods: A community sample of 160 Australian gay men aged 65–92 years completed the Center for Epidemiologic Studies Depression Scale and two visual analogue scales assessing a sense of belonging with gay friends. Results: Results supported the moderation model, with increasing levels of belonging with gay friends weakening the relationship between living alone and depressive symptoms. Conclusion: Results imply that enhancing a sense of belonging with gay friends among older gay men who live alone is likely to be a protective factor in relation to depressive symptoms.
McParland, J., & Camic, P. M.. (2016). Psychosocial factors and ageing in older lesbian, gay and bisexual people: a systematic review of the literature. Journal of Clinical Nursing, 25(23-24), 3415–3437. doi:10.1111/jocn.13251.
Aims and objectives To synthesise and evaluate the extant literature investigating the psychosocial influences on ageing as a lesbian, gay or bisexual person, to develop understanding about these in…
Meyer, I. H.. (2003). Prejudice, social stress, and mental health in lesbian, gay, and bisexual populations: Conceptual issues and research evidence.. Psychological Bulletin, 129(5), 674–697. doi:10.1037/0033-2909.129.5.674.
Mots-clé: santé mentale, stress minoritaire
Meyer, H., & Johnston, T. R.. (2014). The National Resource Center on LGBT Aging Provides Critical Training to Aging Service Providers. Journal of Gerontological Social Work, 57(2-4), 407–412. doi:10.1080/01634372.2014.901997.
Meyer, I. H., Russell, S. T., Hammack, P. L., Frost, D. M., & Wilson, B. D. M.. (2021). Minority stress, distress, and suicide attempts in three cohorts of sexual minority adults: A U.S. probability sample. PLoS ONE, 16(3), e0246827. doi:10.1371/journal.pone.0246827.
During the past 50 years, there has been a marked improvement in the social and legal environment of sexual minorities in the United States. Minority stress theory predicts that the health of sexual minorities is predicated on the social environment. As the social environment improves, exposure to stress would decline and health outcomes would improve. We assessed how stress, identity, connectedness with the LGBT community, and psychological distress and suicide behavior varied across three distinct cohorts of sexual minority people in the United States. Using a national probability sample recruited in 2016 and 2017, we assessed three a priori defined cohorts of sexual minorities we labeled the pride (born 1956–1963), visibility (born 1974–1981), and equality (born 1990–1997) cohorts. We found significant and impressive cohort differences in coming out milestones, with members of the younger cohort coming out much earlier than members of the two older cohorts. But we found no signs that the improved social environment attenuated their exposure to minority stressors—both distal stressors, such as violence and discrimination, and proximal stressors, such as internalized homophobia and expectations of rejection. Psychological distress and suicide behavior also were not improved, and indeed were worse for the younger than the older cohorts. These findings suggest that changes in the social environment had limited impact on stress processes and mental health for sexual minority people. They speak to the endurance of cultural ideologies such as homophobia and heterosexism and accompanying rejection of and violence toward sexual minorities.
Mots-clé: analyse comparative, analyse intergénérationnelle, santé mentale
Meyer, I. H.. (2013). Prejudice, social stress, and mental health in lesbian, gay, and bisexual populations: Conceptual issues and research evidence.. Psychology of Sexual Orientation and Gender Diversity, 1(S), 3–26. doi:10.1037/2329-0382.1.S.3.
Misoch, S.. (2017). "Lesbian, gay & grey": Besondere Bedürfnisse von homosexuellen Frauen und Männern im dritten und vierten Lebensalter. Zeitschrift für Gerontologie und Geriatrie, 50(3), 239–246. doi:10.1007/s00391-016-1030-4.
The ongoing demographic shift means that by 2050 around 30% of the Swiss population will be aged 65 or older. This development also means that the number of older people who identify as lesbian or gay, and who share their lives with a same-sex partner in old age, is increasing. In Switzerland, roughly 90,000–300,000 homosexual people will be 65 or older in 2050. This article shows that research gaps exist in gerontology with regard to homosexuality, and in research on homosexuality with regard to old age. It therefore focuses on the third and fourth ages among homosexual women and men. Drawing on current international research data, it shows that homosexual people face specific challenges and have particular needs in old age that should be taken into account in outpatient and residential services. Existing research shows that, owing to their way of life, homosexual people are often single, have no biological children, frequently live alone and, above all, fear discrimination and stigmatisation should they need care. A further complication is that studies show their state of health to be poorer than that of heterosexuals of the same age, which can result in an earlier and greater need for care.
Mots-clé: Suisse
Mock, S. E., Walker, E. P., Humble, Á. M., de Vries, B., Gutman, G., Gahagan, J., Chamberland, L., Aubert, P., & Fast, J.. (2019). The Role of Information and Communication Technology in End-of-Life Planning Among a Sample of Canadian LGBT Older Adults. Journal of Applied Gerontology, 1, 073346481984863–28. doi:10.1177/0733464819848634. [URL]
Moone, R. P., Croghan, C. F., & Olson, A. M.. (2016). Why and How Providers Must Build Culturally Competent, Welcoming Practices to Serve LGBT Elders. Generations, 40(2), 73–77. doi:10.2307/26556207. [URL]
Munson, M.. (2016). FORGE's Trauma-Informed Trans Aging Work. Generations, 40(2), 71–72. doi:10.2307/26556205. [URL]
Muraco, A., Putney, J. M., Shiu, C., & Fredriksen-Goldsen, K. I.. (2018). Lifesaving in Every Way: The Role of Companion Animals in the Lives of Older Lesbian, Gay, Bisexual, and Transgender Adults Age 50 and Over. Research on Aging, 40(9), 859–882. doi:10.1177/0164027517752149.
Naudet, D., De Decker, L., Chiche, L., Doncarli, C., Ho-Amiot, V., Bessaud, M., Alitta, Q., & Retornaz, F.. (2017). Nursing home admission of aging HIV patients: Challenges and obstacles for medical and nursing staffs. European Geriatric Medicine, 8(1), 66–70. doi:10.1016/j.eurger.2016.12.003.
Mots-clé: EMS, France, VIH
Neville, S., Kushner, B., & Adams, J.. (2015). Coming out narratives of older gay men living in New Zealand. Australasian Journal on Ageing, 34, 29–33. doi:10.1111/ajag.12277.
Nideröst, S., & Imhof, C.. (2016). Aging With HIV in the Era of Antiretroviral Treatment: Living Conditions and the Quality of Life of People Aged Above 50 Living With HIV/AIDS in Switzerland. Gerontology and Geriatric Medicine, 2, 1–9. doi:10.1177/2333721416636300. [URL]
Nowaskie, D. Z., & Sewell, D. D.. (2021). Assessing the LGBT cultural competency of dementia care providers. Alzheimer's & Dementia, 1–10. doi:10.1002/trc2.12137.
Mots-clé: formation à la diversité, santé cognitive
Office fédéral de la statistique. (2014). Décès dus aux maladies infectieuses et au sida de 1970 à 2009: évolution d'une génération à l'autre. Actualités OFS.
Mots-clé: statistiques, Suisse, VIH
Orel, N. A.. (2016). Families and Support Systems of LGBT Elders. Annual Review of Gerontology and Geriatrics, 37(1), 89–109. doi:10.1891/0198-8794.37.89.
Orel, N. A., & Coon, D. W.. (2016). The Challenges of Change: How Can We Meet the Care Needs of the Ever-Evolving LGBT Family?. Generations, 40(2), 41–45. doi:10.2307/26556199. [URL]
Orel, N. A.. (2014). Investigating the Needs and Concerns of Lesbian, Gay, Bisexual, and Transgender Older Adults: The Use of Qualitative and Quantitative Methodology. Journal of Homosexuality, 61(1), 53–78. doi:10.1080/00918369.2013.835236.
Oswald, A., & Roulston, K.. (2018). Complex Intimacy: Theorizing Older Gay Men's Social Lives. Journal of Homosexuality, 67(2), 223–243. doi:10.1080/00918369.2018.1536416.
This qualitative study explores the social lives of older gay men. In-depth interviews were conducted with 10 gay men over the age of 65 to elicit details about their relationships with other people. Findings paint a complex picture of older gay social life that is compounded by significant events affecting gay men from a particular socio-historical period. Three overarching themes emerged that capture the social lives of the participants: (1) coming of age as a gay man in the 20th century; (2) dealing with the aging body; and (3) enduring loss and the consequent impact on social life. The participants reported that being in a gay environment and closing the gay generational divide helped them adjust to their changing social lives in later life. This study adds to the ongoing discussion about the experiences of older gay men and makes suggestions for future research and practice considerations.
Mots-clé: hommes gays, relations sociales
Paolino, V.. (2017). queerAltern – das Schicksal in die eigenen Hände nehmen. Angewandte GERONTOLOGIE Appliquée, 4(17), 35–36. doi:10.1024/2297-5160/a000059. [URL]
Mots-clé: EMS, Suisse
Peate, I.. (2013). Caring for older lesbian, gay and bisexual people. British Journal of Community Nursing, 18(8), 372–374.
Ageing brings about a number of challenges for heterosexual, lesbian, gay and bisexual people. It can be a time of anxiety and concern. The expectations that many lesbian, gay and bisexual people have of how they would like to be cared for if they were to enter sheltered housing or other forms of residential care can be very different from the expectations of heterosexual people. This article considers issues that older lesbian, gay and bisexual people may encounter with regard to their health-care needs.
Peate, I.. (2013). The health-care needs of the older gay man living with HIV. British Journal of Community Nursing, 18(10), 492–495.
Human immunodeficiency virus (HIV) was once thought of as a condition predominately affecting the young. However, HIV among the older population is increasing. Older gay male adults living with HIV have received little attention from those who provide and commission services. However, with effective treatment, those gay men aged over 50 are the fastest growing group of people with HIV in the UK. Nurses will be required to offer care in a number of ways to this cohort of patients. In so doing, nurses will need to develop innovative and effective ways of supporting this growing group of people. This article provides an overview of the issues that can impact on the health and wellbeing of the older gay man living with HIV. The article discusses the epidemiology, the issue of HIV stigma, comorbidities and mental health and wellbeing needs.
Mots-clé: hommes gays, soins infirmiers, VIH
Peel, E., Taylor, H., & Harding, R.. (2016). Sociolegal and practice implications of caring for LGBT people with dementia. Nursing Older People, 28(10), 26–30. doi:10.7748/nop.2016.e852.
The needs of LGBT people living with dementia are poorly recognised due, in part, to assumptions that all older people are heterosexual, together with persistent ageist stereotypes that older people are asexual. LGBT older adults are more likely to reside in care homes as a quarter of gay and bisexual men and half of lesbian and bisexual women have children, compared to 90% of heterosexual women and men. Older LGBT people may be unwilling to express their identity within care settings and this can have an impact on their ongoing care. Recognition of the members of an older person's informal care network is crucial for their ongoing involvement in the life of a person resident in a care setting. However, healthcare professionals may not always appreciate that LGBT people may rely more on their family of choice, or their wider social network, than their family of origin. This article explores socio-legal issues that may be encountered when caring for older LGBT people living with dementia, including enabling autonomy, capacity and applying the legal frameworks in ways which support the identities and relationships of these older people in care.
Mots-clé: santé cognitive, soins infirmiers
Peisah, C., Burns, K., Edmonds, S., & Brodaty, H.. (2018). Rendering visible the previously invisible in health care: the ageing LGBTI communities. The Medical Journal of Australia, 209(3), 106–108.e1. doi:10.5694/mja17.00896.
Perales-Puchalt, J., Gauthreaux, K., Flatt, J., Teylan, M. A., Resendez, J., Kukull, W. A., Chan, K. C., Burns, J., & Vidoni, E. D.. (2019). Risk of dementia and mild cognitive impairment among older adults in same-sex relationships. International Journal of Geriatric Psychiatry, 34(6), 828–835. doi:10.1002/gps.5092.
Phillips, J., & Marks, G.. (2006). Coming Out, Coming In: How do dominant discourses around aged care facilities take into account the identity and needs of ageing lesbians?. Gay Lesbian Issues and Psychology Review, 2(2), 67–77. [URL]
Porter, K. E., & Krinsky, L.. (2014). Do LGBT Aging Trainings Effectuate Positive Change in Mainstream Elder Service Providers?. Journal of Homosexuality, 61(1), 197–216. doi:10.1080/00918369.2013.835618.
Poteat, M. A. A. T., Adams, M. A., & Poteat, T.. (2016). ZAMI NOBLA: Preserving History and Fostering Wellness in Black Lesbians. Generations, 40(2), 80–82. doi:10.2307/26556211. [URL]
Mots-clé: relations sociales, santé
Putney, J. M., Keary, S., Hebert, N., Krinsky, L., & Halmo, R.. (2018). "Fear Runs Deep:" The Anticipated Needs of LGBT Older Adults in Long-Term Care. Journal of Gerontological Social Work, 00(00), 1–21. doi:10.1080/01634372.2018.1508109.
Older lesbian, gay, bisexual, and transgender (LGBT) adults are a vulnerable yet resilient population who face unique stressors as they foresee health decline. This paper presents the results of a study about community-dwelling LGBT older adults' anticipated needs and fears related to nursing homes and assisted living. Methods: This qualitative study collected data through seven focus groups. The sample (N = 50) consisted of LGBT-identified adults age 55 and over. We used an inductive, thematic analysis approach to data analysis. Results: Participants seek an inclusive environment where they will be safe and feel connected to a community. They fear dependence on healthcare providers, dementia, mistreatment, and isolation. Importantly, these fears can lead to identity concealment and psychological distress, including suicide ideation. Discussion: This study adds to the existing literature about the worries of older LGBT adults as they anticipate long-term care. The results suggest that older LGBT adults seek LGBT-inclusive residential care settings that encompass two distinct yet related aspects of LGBT-affirmative care: the procedural (e.g. culturally competent skills and knowledge of practitioners) and the implicit (e.g. the values and mission of the organization). This paper identifies implications for practice, policy, and training.
Putney, J. M., Hebert, N., Snyder, M., Linscott, R. O., & Cahill, S.. (2020). The Housing Needs of Sexual and Gender Minority Older Adults: Implications for Policy and Practice. Journal of Homosexuality, 1–18. doi:10.1080/00918369.2020.1804261.
This study identifies the interconnected needs and concerns of sexual and gender minority (SGM) older adults, with a particular focus on housing, healthcare, transportation, and social support. Data were gathered through seven focus groups with a sample of SGM-identified adults age 55 and over (N = 50) and analyzed using thematic analysis. The participants seek affordable and inclusive housing options. They identified that access to transportation is paramount in maintaining social support and accessing healthcare. Findings underscore the need for strategies to serve the housing needs of low-income SGM-identified older adults in a nondiscriminatory way, train housing providers in culturally responsive care, meet transportation needs, and provide SGM-inclusive community-based services that reduce isolation.
Radicioni, S., & Weicht, B.. (2018). A place to transform: creating caring spaces by challenging normativity and identity. Gender, Place & Culture, 1–16. doi:10.1080/0966369X.2017.1382449.
Like all spaces, concrete caring places both shape and are shaped by understandings and constructions of normativity and identity. The traditional understanding of care for older people, imagining clearly demarcated dyadic roles, is firmly embedded in heterosexual logics of relationships within families, the own (family) home and institutional support. Social and residential places for older people thus both assume particular gender and sexual identities and contribute to a (re)production of the very normativity. But how can this interlinkage between the construction of caring spaces and the normativity of identities be understood and, possibly, challenged? In this article we discuss the transformative potential of the social (and partly residential) space of La Fundación 26 de Diciembre, in Madrid, Spain, which opened up to specifically support older LGBT people. Drawing on an in-depth case study we explore a space that allows visibility of different forms of living and caring practices of people with different genders, sexual preferences, origins, classes or political backgrounds. Through the daily life narratives of the people who work, volunteer or simply use the centre we discuss the potential of challenging the restricted notions, assumptions and constructions through which particular places gain both social and political meaning. The article highlights the transformative power of the active and collective making of caring spaces through which narratives of care, collective sexual and gender recognition and practices of caring relationships can replace both traditional/informal forms of living together and institutional spaces that provide professional care.
Mots-clé: EMS, Espagne, logement
Ramirez-Valles, J., Dirkes, J., & Barrett, H. A.. (2014). GayBy Boomers' Social Support: Exploring the Connection Between Health and Emotional and Instrumental Support in Older Gay Men. Journal of Gerontological Social Work, 57(2-4), 218–234. doi:10.1080/01634372.2013.843225.
We evaluate the association between emotional and instrumental support and perceived health and depression symptoms in a sample of 182 gay/bisexual men age ≥ 55. Perceived health was positively correlated with number of sources of emotional support and depression was negatively associated with instrumental support and health care providers' knowledge of patients' sexual orientation. Depression mediates the connection between providers' knowledge of patients' sexual orientation and perceived health. Number of sources of emotional support varied negatively with age and ethnic minority status, and positively with living with a partner. Instrumental support seemed to be dependent on living with a partner.
Mots-clé: hommes gays, relations sociales, santé mentale
Reynolds, R., Edmonds, S., & Ansara, G. Y.. (2015). Silver Rainbows: Advances in Australian ageing and aged care. Australasian Journal on Ageing, 34(5), 5–7. doi:10.1111/ajag.12274.
Mots-clé: Australie, politique de la santé
Roca i Escoda, M.. (2015). Des couples plus égaux que d'autres. REISO, Revue d'information sociale, 1–4. [URL]
Mots-clé: histoire, Suisse
Roca i Escoda, M.. (2010). Le parcours de la reconnaissance des couples homosexuels en Suisse. Bulletin d'histoire politique, 18(2), 125–138. doi:10.7202/1054804ar.
Mots-clé: droit, histoire, Suisse
Rogers, A., Rebbe, R., Gardella, C., Worlein, M., & Chamberlin, M.. (2013). Older LGBT Adult Training Panels: An Opportunity to Educate About Issues Faced by the Older LGBT Community. Journal of Gerontological Social Work, 56(7), 580–595. doi:10.1080/01634372.2013.811710.
Older lesbian, gay, bisexual, and transgender (LGBT) adults face unique issues that can impede their well-being. Although many advances have helped address these issues, there is a need for education efforts that raise awareness of service providers about these issues. This study explores evaluation data of training panels provided by older LGBT adults and the views of training participants on issues faced by the older LGBT community after attending the panels. Participants were 605 students and professionals from over 34 education and communication settings. Implications for trainings on participants and older LGBT trainers are discussed.
Romy, K.. (2021). La rente de veuve, un héritage à dépoussiérer. swissinfo.ch, 1–10.
Mots-clé: conditions de vie et finances, Suisse
Rosati, F., Pistella, J., & Baiocco, R.. (2021). Italian Sexual Minority Older Adults in Healthcare Services: Identities, Discriminations, and Competencies. Sexuality Research and Social Policy, 18, 64–74. doi:10.1007/s13178-020-00443-z.
Purpose This study explores perceptions and experiences related to healthcare utilization in a group of Italian sexual minority older adults, to understand the unique challenges faced by this population when accessing healthcare services. Older adults represent one of the subgroups exposed to the highest risk within sexual minorities with regard to physical and mental health. Method Data collection occurred between October 2018 and April 2019. Semi-structured interviews were carried out with 23 participants over 60 years, including questions about participants': experiences when dealing with physical/mental healthcare services; tendency to disclose sexual orientation in clinical contexts; preferences and desires when seeking care. Data were analyzed using Interpretative Phenomenological Analysis (IPA), in order to provide qualitative information on participants' experiences. Results Three interconnected themes were identified: the relevance of clinician and patient's identities in determining confidence and satisfaction; expectations and experiences of discrimination; the need for specific competencies on sexual minority concerns. Conclusion Access and utilization of healthcare services can be considered as a multi-faceted phenomenon which involves people's past and current experiences, perceptions, expectations and desires. Participants' perception of having to deal with heterosexist healthcare settings influences health behaviors and outcomes. Policy Implications Interventions directed to healthcare providers are needed, to increase specific competencies and ensure safe and affirming environments.
Mots-clé: besoins et craintes, Italie, santé
Rosenfeld, D.. (2009). Heteronormativity and Homonormativity as Practical and Moral Resources. Gender & Society, 23(5), 617–638. doi:10.1177/0891243209341357.
Studies of heteronormativity have emphasized its normative content and repressive functions, but few have considered the strategic use of heteronormative and homonormative precepts to shape sexual selves, public identities, and social relations. Adopting an interactionist approach, this article analyzes interviews with homosexual elders to uncover their use of heteronormative premises (specifically, the presumption of heterosexuality, and the gender binary) to pass as heterosexual. Informants also used homonormative precepts, grounded in a postwar, pre-gay liberation assimilationist homosexual politics they adopted in their early years and maintained in later life, to justify passing and to frame their understanding and evaluation of past and present homosexual practices. Viewed through a homonormative lens, heteronormativity provided the tools for personal survival in a hostile society and for the collective production of a respectable homosexual culture. Informants' strategic use of heteronormativity can help explain heteronormativity's survival despite the incoherence and fragility of its content.
Rowan, N. L., & Giunta, N.. (2015). Lessons on social and health disparities from older lesbians with alcoholism and the role of interventions to promote culturally competent services. Journal of Human Behavior in the Social Environment, 26(2), 210–216. doi:10.1080/10911359.2015.1083504.
Older adults who are lesbian, gay, bisexual, or transgender (LGBT) face greater health risks and possibly more costly care because of their reluctance to seek out health and long-term care services because of limited cultural sensitivity of service providers. This is particularly evident in older lesbians who face substantial risk of health problems associated with alcoholism and are less likely to be open with health care providers because of stigma combined with feelings of alienation, stress, and depression. An estimated 4.4 million older adults are predicted to have problems with alcohol by 2020, and the rates of alcohol-related hospitalizations are similar to those for heart attacks, creating exorbitant medical costs. More culturally competent health and long-term care may reduce health care costs by effectively addressing the dynamics of alcoholism, aging, and lesbian culture. Training initiatives such as those developed by the National Resource Center on LGBT Aging have begun to address the need for a more culturally competent aging services network. This article provides exemplars from empirical data on older lesbians with alcoholism to highlight some of the health, economic, and social disparities experienced in the aging LGBT community. Current interventions in the form of cultural competence training for service providers are presented as a potential step toward addressing health disparities among LGBT older adults.
Mots-clé: femmes lesbiennes, santé
Rowan, N. L., & Butler, S. S.. (2014). Resilience in Attaining and Sustaining Sobriety Among Older Lesbians With Alcoholism. Journal of Gerontological Social Work, 57, 176–197. doi:10.1080/01634372.2013.859645. [URL]
Rufli, C.. (2021). Erkämpfte Liebe. Zeit Online.
Mots-clé: femmes lesbiennes, Suisse
Scherrer, K. S.. Stigma and Special Issues for Bisexual Older Adults. Annual Review of Gerontology and Geriatrics(1), 43–57. [URL]
Schlagdenhauffen, R.. (2017). Parcours de vie d'homosexuels âgés en bonne santé. Recherches sociologiques et anthropologiques(48-1), 23–44. doi:10.4000/rsa.1799.
This article examines the ageing of gay men in good health in France. It puts into perspective the representations and expressions of the lived experience of growing older, drawing on research on homosexuality, gender and old age. Two methodological approaches are used. The first draws on the scientific literature, comparing different models of ageing. The second is based on the analysis of data collected between 2013 and 2015 through a quantitative and qualitative survey. The analysis shows that models of LGBT ageing have changed because of the greater acceptance of homosexuality as a way of life and because of greater homosexual sociability among older people, fostered by social networks and the Internet. Thus, for many older gay men, living their gay identity is now easier than it was in their youth.
Mots-clé: hommes gays, parcours de vie
Schlagdenhauffen, R.. (2011). Rapports à la conjugalité et à la sexualité chez les personnes âgées en Allemagne. Frontières(6). doi:10.4000/gss.2205.
The results of surveys conducted in Germany on sexuality and conjugality among older people show that, whatever the sexual orientation, variations in sexual activity are linked much more closely to conjugal status than to age. Fifty per cent of men and women aged sixty who report being in a couple say they have a high level of sexual activity. It is above all the loss of the partner (through death or separation) that is the main factor preventing older people from continuing to have sexual relations (72% of single people aged 60 say they have not had sexual relations in the past year). Yet, whether they are in a couple or single, they consider sexuality to be something important that contributes to individual health and to the health of a couple. However, in Germany as in France, men and women are not equal on the market of sexuality, nor on that of the couple, and even less on that of dating... And things seem to become even more complex for those who are gay or lesbian and older. By cross-referencing the results of surveys of heterosexual and homosexual people, it is possible to see that, with ageing, aspirations differ depending on whether one is a man or a woman, heterosexual or homosexual. However, one thing seems common to all: with age, being in a couple is an asset, the partner becoming the most important social resource (followed by children and grandchildren), whatever the sexual orientation.
Mots-clé: relations sociales, sexualité
Schmidt, A. J., & Altpeter, E.. (2019). The Denominator problem: estimating the size of local populations of men-who-have-sex-with-men and rates of HIV and other STIs in Switzerland. Sexually Transmitted Infections, 95(4), sextrans-2017-053363. doi:10.1136/sextrans-2017-053363.
Mots-clé: hommes gays, statistiques, Suisse, VIH
Schwinn, S. V., & Dinkel, S. A.. (2015). Changing the Culture of Long-Term Care: Combating Heterosexism. OJIN: The Online Journal of Issues in Nursing, 20(2). doi:10.3912/OJIN.Vol20No02PPT03.
The purpose of this article is to describe how heterosexism impedes the provision of culturally competent care for lesbian, gay, bisexual, transgender, and queer (LGBTQ) residents in long-term care (LTC) facilities. LTC facilities continue to employ staff members who lack an understanding of sexuality and sexual diversity in the elderly. In this article, we identify the heterosexual assumption, namely heterosexism, as the primary issue surrounding the holistic care of the LGBTQ elder in LTC. We first review the literature related to LGBTQ elders in LTC facilities, identifying the themes that emerged from the review, specifically the definitions of homophobia and heterosexism; perceptions of LGBTQ elders as they consider placement in LTC facilities; and staff knowledge of and biases toward sexuality and sexual diversity in LTC settings. Then, we suggest approaches for changing the culture of LTC to one in which LGBTQ elders feel safe and valued, and conclude by considering how facility leaders are in a unique position to enable LGBTQ elders to flourish in what may be their last home.
Mots-clé: attitudes des professionnel.le.s, besoins et craintes, formation à la diversité
Seidel, L., Karpiak, S. E., & Brennan-Ing, M.. (2016). Training Senior Service Providers about HIV and Aging: Evaluation of a Multi-Year, Multi-City Initiative. Gerontology & Geriatrics Education, 38(2), 188–203. doi:10.1080/02701960.2015.1090293. [URL]
Mots-clé: formation à la diversité, VIH
Sharek, D. B., McCann, E., Sheerin, F., Glacken, M., & Higgins, A.. (2014). Older LGBT people's experiences and concerns with healthcare professionals and services in Ireland. International Journal of Older People Nursing, 10(3), 230–240. doi:10.1111/opn.12078.
Conclusions: Irish healthcare services need to reflect on how they currently engage with older LGBT persons at both an organisational and practitioner level. Consideration needs to be given to the specific concerns of ageing LGBT persons, particularly in relation to long-term residential care. Implications for practice: Healthcare practitioners need to be knowledgeable of, and sensitive to, LGBT issues.
Silverman, M., & Baril, A.. (2021). Transing dementia: Rethinking compulsory biographical continuity through the theorization of cisism and cisnormativity. Journal of Aging Studies, 1–9. doi:10.1016/j.jaging.2021.100956.
Siverskog, A., & Bromseth, J.. (2019). Subcultural Spaces: LGBTQ Aging in a Swedish Context. The International Journal of Aging and Human Development, 88(4), 325–340. doi:10.1177/0091415019836923.
Mots-clé: Europe, parcours de vie, relations sociales
Siverskog, A.. (2015). Ageing Bodies that Matter: Age, Gender and Embodiment in Older Transgender People's Life Stories. NORA – Nordic Journal of Feminist and Gender Research, 23(1), 4–19. doi:10.1080/08038740.2014.979869. [URL]
Smith, R. W., Altman, J. K., Meeks, S., & Hinrichs, K. L. M.. (2018). Mental Health Care for LGBT Older Adults in Long-Term Care Settings: Competency, Training, and Barriers for Mental Health Providers. Clinical Gerontologist, 1–20. doi:10.1080/07317115.2018.1485197.
Mental health providers in LTC facilities would benefit from more training in LGBT-specific mental health problems and evidence-based treatments, and efforts to destigmatize LGBT identities in these settings might improve access to mental health care
Mots-clé: attitudes des professionnel.le.s, EMS, santé mentale
Solomon, P., O'Brien, K., Wilkins, S., & Gervais, N.. (2013). Aging with HIV and disability: The role of uncertainty. AIDS Care: Psychological and Socio-medical Aspects of AIDS/HIV, 26(2), 240–245. doi:10.1080/09540121.2013.811209. [URL]
van der Star, A., Pachankis, J. E., & Bränström, R.. (2021). Country-Level Structural Stigma, School-Based and Adulthood Victimization, and Life Satisfaction Among Sexual Minority Adults: A Life Course Approach. Journal of Youth and Adolescence, 1–13. doi:10.1007/s10964-020-01340-9.
Country-level structural stigma, defined as prejudiced population attitudes and discriminatory legislation and policies, has been suggested to compromise the wellbeing of sexual minority adults. This study explores whether and how structural stigma might be associated with sexual minorities' school-based and adulthood experiences of victimization and adulthood life satisfaction. Using a sample of 55,263 sexual minority individuals (22% female; 53% 18–29 years old; 85% lesbian/gay, 15% bisexual) living across 28 European countries and a country-level index of structural stigma, results show that sexual minorities, especially men, reported school bullying in both higher- and lower-stigma countries. Higher rates of school bullying were found among sexual minorities living in higher-stigma countries when open about their identity at school. Past exposure to school bullying was associated with lower adulthood life satisfaction, an association partially explained by an increased risk of adulthood victimization. These findings suggest that sexual minorities living in higher-stigma countries might benefit from not being open about their sexual identity at school, despite previously established mental health costs of identity concealment, because of the reduced risk of school bullying and adverse adulthood experiences. These results provide one of the first indications that structural stigma is associated with sexual minority adults' wellbeing through both contemporaneous and historical experiences of victimization.
Mots-clé: Europe, santé mentale
Stein, G. L., Beckerman, N. L., & Sherman, P. A.. (2010). Lesbian and Gay Elders and Long-Term Care: Identifying the Unique Psychosocial Perspectives and Challenges. Journal of Gerontological Social Work, 53(5), 421–435. doi:10.1080/01634372.2010.496478. [URL]
Stein, G. L., & Bonuck, K. A.. (2001). Attitudes on End-of-Life Care and Advance Care Planning in the Lesbian and Gay Community. Journal of Palliative Medicine, 4(2), 173–190. [URL]
Mots-clé: directives anticipées, suicide assisté
Stinchcombe, A., Smallbone, J., Wilson, K., & Kortes-Miller, K.. (2017). Healthcare and End-of-Life Needs of Lesbian, Gay, Bisexual, and Transgender (LGBT) Older Adults: A Scoping Review. Geriatrics, 2(1), 13–13. doi:10.3390/geriatrics2010013. [URL]
Suen, Y.. (2016). Older Single Gay Men's Body Talk: Resisting and Rigidifying the Aging Discourse in the Gay Community. Journal of Homosexuality, 64(3), 397–414. doi:10.1080/00918369.2016.1191233.
Mots-clé: hommes gays
Sullivan, K. M., Mills, R. B., & Dy, L.. (2016). Serving LGBT Veterans: Los Angeles LGBT Center's Veterans Initiative. Generations, 40(2), 83–86. doi:10.2307/26556213. [URL]
Sullivan, K. M.. (2014). Acceptance in the domestic environment: The experience of senior housing for lesbian, gay, bisexual, and transgender seniors. Journal of Gerontological Social Work, 57, 235–250. doi:10.1080/01634372.2013.867002. [URL]
Mots-clé: EMS, logement
Sussman, T., Brotman, S., MacIntosh, H., Chamberland, L., MacDonnell, J., Daley, A., Dumas, J., & Churchill, M.. (2018). Supporting Lesbian, Gay, Bisexual, & Transgender Inclusivity in Long-Term Care Homes: A Canadian Perspective. Canadian Journal on Aging / La Revue canadienne du vieillissement, 37(2), 121–132. doi:10.1017/S0714980818000077. [URL]
Mots-clé: EMS, formation à la diversité
Taha, S., Blanchet Garneau, A., & Bernard, L.. (2020). Une revue de la portée sur la pratique infirmière auprès des personnes âgées issues de la diversité sexuelle et de genre. Recherche en soins infirmiers, 140(1), 29–56. doi:10.3917/rsi.140.0029.
Results: the recommendations were grouped into five areas: becoming aware of the existence of older adults from sexual and gender minorities, of their historical context and of their health problems; refraining from heterocissexist and heterocisnormative preconceptions by adopting inclusive language and an open attitude; supporting these older adults and their informal caregivers or family of choice; creating a safe and confidential environment; and promoting their inclusion in the healthcare system. Conclusion: nurses and other health professionals could use these results to optimise the quality of care provided to older adults from sexual and gender minorities.
Taylor, M.. (2016). Holistic Care of Older Lesbian, Gay, Bisexual, and Transgender Patients in the Emergency Department. Journal of Emergency Nursing, 42(2), 170–173. doi:10.1016/j.jen.2016.02.007.
Thurston, C.. (2016). The Intersectional Approach in Action: SAGE Center Bronx. Generations, 40(2), 101–102. doi:10.2307/26556219. [URL]
Mots-clé: action communautaire
Van Wagenen, A., Driskell, J., & Bradford, J.. (2013). "I'm still raring to go": Successful aging among lesbian, gay, bisexual, and transgender older adults. Journal of Aging Studies, 27(1), 1–14. doi:10.1016/j.jaging.2012.09.001. [URL]
Vernazza, P. L., & Bernard, E. J.. (2016). HIV is not transmitted under fully suppressive therapy: The Swiss Statement – eight years later. Swiss Medical Weekly. doi:10.4414/smw.2016.14246.
Villar, F., Serrat, R., Fabà, J., & Celdrán, M.. (2015). Staff Reactions Toward Lesbian, Gay, or Bisexual (LGB) People Living in Residential Aged Care Facilities (RACFs) Who Actively Disclose Their Sexual Orientation. Journal of Homosexuality, 62(8), 1126–1143. doi:10.1080/00918369.2015.1021637.
Fifty-three staff members currently working in residential aged care facilities located in Barcelona, Spain, were asked about the way they would react if a resident told them that he or she felt sexually attracted and had maintained sexual relationships with another resident of the same gender. Acceptance of non-heterosexual sexual orientation was a frequent answer, and around one in four professionals stated that they would try helping the resident in question, by offering a private space or giving some emotional support. However, some reactions were not consistent with a respectful approach toward sexual diversity, as, for instance, informing the resident's family or advising the resident to keep his or her sexual orientation hidden. We highlight the importance of developing formal policies and offering formal training to staff in order to address the specific needs of older LGB people living in RACFs.
Mots-clé: attitudes des professionnel.le.s, besoins et craintes, soins infirmiers
Villar, F., Serrat, R., Fabà, J., & Celdrán, M.. (2015). As Long as They Keep Away From Me: Attitudes Toward Non-heterosexual Sexual Orientation Among Residents Living in Spanish Residential Aged Care Facilities. The Gerontologist, 55, 1006–1014. doi:10.1093/geront/gnt150.
Mots-clé: attitude des pairs, EMS, Espagne
de Vries, B.. (2011). LGBT Aging: Research and Policy Implications. Public Policy and Aging Report, 21(3), 34–35.
de Vries, B., Gutman, G., Humble, Á., Gahagan, J., Chamberland, L., Aubert, P., Fast, J., & Mock, S.. (2019). End-of-Life Preparations Among LGBT Older Canadian Adults: The Missing Conversations. The International Journal of Aging and Human Development, 88(4), 358–379. doi:10.1177/0091415019836738. [URL]
de Vries, B., & Gutman, G.. (2016). End-of-Life Preparations Among LGBT Older Adults. Generations, 40(2), 46–48. doi:10.2307/26556200. [URL]
de Vries, B., & Croghan, C. F.. (2013). LGBT Aging: The Contributions of Community-Based Research. Journal of Homosexuality, 61(1), 1–20. doi:10.1080/00918369.2013.834794.
Waite, H.. (2015). Old lesbians: Gendered histories and persistent challenges. Australasian Journal on Ageing, 34, 8–13. doi:10.1111/ajag.12272.
Aim: This article provides an overview of how gender and historical contexts influence the well-being of old lesbians. It aims to inform the practice of aged care providers in addressing the needs of these women. Methods: The lived experience of old lesbians is examined using feminist methodology with a focus on hegemonic femininity, social structures and cultural life. Results: Old lesbians being selectively 'open', their use of health services and their desire for lesbian-specific aged care are all influenced by lesbophobia, a complex of discriminations. The age at which women began living as lesbians, and the fluidity of orientation, are central to understanding their particular needs. Many old lesbians have created social groups and intentional communities where there is support and freedom. Conclusion: The current 'inclusivity' approach is insufficient for culturally appropriate aged care for old lesbians. Developing practices that meet their needs requires better understanding of lesbians' different life courses and why they created lesbian cultures.
Mots-clé: femmes lesbiennes, parcours de vie
Waling, A., Lyons, A., Alba, B., Minichiello, V., Barrett, C., Hughes, M., Fredriksen-Goldsen, K. I., & Edmonds, S.. (2019). Experiences and perceptions of residential and home care services among older lesbian women and gay men in Australia. Health & Social Care in the Community, 34, 34–9. doi:10.1111/hsc.12760.
The needs of older lesbian and gay people regarding access and use of aged-care services remain underresearched. This paper reports the findings of 33 qualitative interviews with older lesbian women and gay men about their perceptions and experiences of residential aged-care and home-based aged-care services in Australia. The focus of this paper is their preparedness for using aged-care services. The results highlight that participants had a number of concerns related to accessing residential-care services in particular, including perceptions of a lack of inclusivity and concerns of potential for discrimination and hostility, loss of access to community and partners, decreased autonomy and concerns relating to quality of care and the potential for elder abuse. Participants noted a number of strategies they employed in avoiding residential-care services, including the use of home-care services, renovating the home for increased mobility, moving to locations with greater access to outside home-care services, a preference for lesbian/gay-specific housing and residential-care options if available, and the option of voluntary euthanasia to ensure dignity and autonomy. Participants, on the whole, were hopeful that they would never require the use of residential-care services, with some believing that having current good health or the support of friends could prevent this from happening. The findings suggest that older lesbian and gay people have a variety of concerns with aged-care and may need additional support and education to improve their perceptions and experiences of services, whether these are needed presently or in the future.
Keywords: needs and fears, EMS, housing
Waling, A., Lyons, A., Alba, B., Minichiello, V., Hughes, M., Barrett, C., Edmonds, S., & Fredriksen-Goldsen, K. I.. (2020). Older lesbian and gay men's perceptions on lesbian and gay youth in Australia. Culture, Health & Sexuality, 1–16. doi:10.1080/13691058.2019.1696984.
Waling, A., Lyons, A., Alba, B., Minichiello, V., Barrett, C., Hughes, M., Fredriksen-Goldsen, K. I., & Edmonds, S.. (2019). Trans Women's Perceptions of Residential Aged Care in Australia. British Journal of Social Work, 1–28. doi:10.1093/bjsw/bcz122/5606662.
Keywords: needs and fears, EMS, trans* people, assisted suicide
Walker, C. A., Cohen, H. L., & Jenkins, D.. (2016). An Older Transgender Woman's Quest for Identity. Journal of Psychosocial Nursing, 54(2), 31–38.
Despite sensationalized media attention, transgender individuals are the most marginalized and misunderstood group in the lesbian, gay, bisexual, and transgender (LGBT) community. The current article presents a case study of one woman's quest for identity. Narrative inquiry was used to analyze data from interview transcripts and four themes emerged during analysis: (a) naming the ambiguity, (b) revealing–concealing the authentic self, (c) discovering the transgender community, and (d) embracing the "T" identity. Lifespan and empowerment theories were used to harvest meanings from these themes. Implications for nursing practice and research were examined based on study findings. Participatory action research offers an approach for future studies in which researchers advocate for transgender individuals and remove obstacles to their health care access.
Keywords: trans* people, nursing
Wallace, S. P., Cochran, S. D., Durazo, E. M., & Ford, C. L.. (2011). The Health of Aging Lesbian, Gay and Bisexual Adults in California. Policy Brief UCLA Center for Health Policy Research, PB2011–2, 1–8. [URL]
Wallach, I.. (2012). "Je suis heureux d'avoir l'âge que j'ai" : la résilience des hommes gais âgés vivant avec le VIH au Québec. Revue canadienne de santé mentale communautaire, 30(2), 157–171. [URL]
Wallach, I.. (2012). L'expérience du vieillissement chez des femmes et des hommes vivant avec le VIH: un vécu à l'intersection du~genre, de l'orientation sexuelle et du parcours relié au VIH. Frontières, 25(1), 105–126. doi:10.7202/1018233ar. [URL]
Wallach, I.. (2012). Diversité et vieillissement. Frontières, 25(1), 5–9. doi:10.7202/1018228ar. [URL]
Wallach, I., & Brotman, S.. (2017). The intimate lives of older adults living with HIV: a qualitative study of the challenges associated with the intersection of HIV and ageing. Ageing and Society, 38(12), 2490–2518. doi:10.1017/S0144686X1700068X.
Keywords: intersectionality, life course, HIV
Wallach, I., Ducandas, X., Martel, M., & Thomas, R.. (2016). Vivre à l'intersection du VIH et du vieillissement : quelles répercussions sur les liens sociaux significatifs?. Canadian Journal on Aging / La Revue canadienne du vieillissement, 35(1), 42–54. doi:10.1017/S0714980815000525.
Wathern, T., & Green, R. W.. (2017). Older LGB&T housing in the UK: challenges and solutions. Housing, Care and Support, 20(3), 128–136. doi:10.1108/HCS-08-2017-0019. [URL]
Westwood, S., Willis, P., Fish, J., Hafford-Letchfield, T., Semlyen, J., King, A., Beach, B., Almack, K., Kneale, D., Toze, M., & Bécares, L.. (2020). Older LGBT+ health inequalities in the UK: setting a research agenda. Journal of Epidemiology and Community Health, 74(5), 408–411. doi:10.1136/jech-2019-213068. [URL]
Keywords: health policy, research, health
Westwood, S.. (2018). Abuse and older lesbian, gay bisexual, and trans (LGBT) people: a commentary and research agenda. Journal of Elder Abuse & Neglect, 00(00), 1–18. doi:10.1080/08946566.2018.1543624.
Keywords: abuse
Westwood, S., & Wathern, T.. (2017). Introduction to "housing, care and support for older lesbians, gay, bisexual and trans* people". Housing, Care and Support, 20(3), 85–88. doi:10.1108/HCS-07-2017-0018. [URL]
Westwood, S.. (2017). Gender and older LGBT* housing discourse: the marginalised voices of older lesbians, gay and bisexual women. Housing, Care and Support, 20(3), 100–109. doi:10.1108/HCS-08-2017-0020. [URL]
Keywords: needs and fears, lesbian women, housing
Westwood, S.. (2016). Dementia, women and sexuality: How the intersection of ageing, gender and sexuality magnify dementia concerns among lesbian and bisexual women. Dementia, 15, 1494–1514. doi:10.1177/1471301214564446. [URL]
Keywords: lesbian women, bisexual people, cognitive health
Westwood, S.. (2013). "My Friends are my Family": an argument about the limitations of contemporary law's recognition of relationships in later life. Journal of Social Welfare and Family Law, 35(3), 347–363. doi:10.1080/09649069.2013.801688.
Wight, R. G., LeBlanc, A. J., Meyer, I. H., & Harig, F. A.. (2015). Internalized gay ageism, mattering, and depressive symptoms among midlife and older gay-identified men. Social science & medicine, 147(C), 200–208. doi:10.1016/j.socscimed.2015.10.066.
Keywords: gay men, mental health
Wight, R. G., LeBlanc, A. J., de Vries, B., & Detels, R.. (2012). Stress and Mental Health Among Midlife and Older Gay-Identified Men. American Journal of Public Health, 102(3), 503–510. doi:10.2105/AJPH.
Wilkens, J.. (2015). Loneliness and Belongingness in Older Lesbians: The Role of Social Groups as "Community". Journal of Lesbian Studies, 19(1), 90–101. doi:10.1080/10894160.2015.960295.
Wilkens, J.. (2016). The significance of affinity groups and safe spaces for older lesbians and bisexual women: creating support networks and resisting heteronormativity in older age. Quality in Ageing and Older Adults, 17(1), 26–35. doi:10.1108/QAOA-08-2015-0040. [URL]
Keywords: lesbian women, bisexual people, social relations
Williams, M. E., & Fredriksen-Goldsen, K. I.. (2014). Same-Sex Partnerships and the Health of Older Adults. Journal of Community Psychology, 42(5), 558–570. doi:10.1002/jcop.21637.
Willis, P.. (2017). Queer, visible, present: the visibility of older LGB adults in long-term care environments. Housing, Care and Support, 20(3), 110–120. doi:10.1108/HCS-04-2017-0007.
This paper is a conceptual discussion of the ways in which the diverse lives, identities and collective politics of lesbian, gay and bisexual (LGB) people can be made visible, and how they are made visible, in long-term care environments for older people. The purpose of this paper is to problematise strategies of visibility as methods for promoting social inclusion in care environments. This is a conceptual discussion that draws on several social theorists that have previously discussed the politics of visibility, knowledge and sexuality. Promoting increased visibility in itself does not fully grapple with the ways in which older LGB can be represented and known as particular kinds of sexual citizens. This potentially curtails a more holistic recognition of their needs, interests and wishes, inclusive of their sexual lives and histories. Making LGB lives visible in care environments may not always be a productive or affirmative strategy for dismantling homophobic views and beliefs. The theoretical implications of a politics of visibility warrant a deeper consideration of strategies for promoting visibility. The paper concludes with a discussion of some of the practical implications for rethinking strategies of visibility in care environments. Critical discussions about the application of visibility strategies, and the problematic assumptions contained within such strategies, are lacking in relation to mainstream housing and social care provision for older LGB people. This paper seeks to initiate this important discussion.
Keywords: professionals' attitudes, needs and fears, EMS, housing
Willis, P., Almack, K., Hafford-Letchfield, T., Simpson, P., Billings, B., & Mall, N.. (2018). Turning the Co-Production Corner: Methodological Reflections from an Action Research Project to Promote LGBT Inclusion in Care Homes for Older People. International Journal of Environmental Research and Public Health, 15(4), 694. doi:10.3390/ijerph15040695. [URL]
Willis, P., Maegusuku-Hewett, T., Raithby, M., & Miles, P.. (2014). Swimming upstream: the provision of inclusive care to older lesbian, gay and bisexual (LGB) adults in residential and nursing environments in Wales. Ageing and Society, 36(2), 282–306. doi:10.1017/S0144686X14001147.
This paper examines the ways in which older people's residential and nursing homes can constitute heteronormative environments – social spaces in which the same-sex attractions and desires of residents are disregarded in the provision of everyday care. The aim of this discussion is to examine the synergies and differences between older lesbian, gay and bisexual (LGB) adults' expectations for future care home provision and the expectations of care staff and managers in providing residential services to older people with diverse sexual backgrounds. We present qualitative evidence from research into the provision of care environments in Wales. In this paper, we present findings from two cohorts: first, from five focus groups with care and nursing staff and managers; and second, from 29 semi-structured interviews with older LGB adults (50–76 years) residing in urban and rural locations across Wales. We argue that residential care environments can constitute heterosexualised spaces in which LGB identities are neglected in comparison to the needs and preferences of other residents. To this extent, we discuss how care staff and managers can be more attentive and responsive to the sexual biographies of all residents and argue against the separation of care and sexual orientation in practice.
Willis, P., Raithby, M., Dobbs, C., Evans, E., & Bishop, J.. (2020). 'I'm going to live my life for me': trans ageing, care, and older trans and gender non-conforming adults' expectations of and concerns for later life. Ageing and Society, 1–22. doi:10.1017/S0144686X20000604.
While research on the health and wellbeing of older lesbian, gay and bisexual adults is gradually expanding, research on older trans and gender non-conforming (TGNC) adults lags behind. Current scholarship about this group raises important questions about the intersection of ageing and gender identity for enhancing care and support for older TGNC adults and the lack of preparedness of health and social professionals for meeting these needs. In this paper, we examine the accounts of 22 TGNC individuals (50–74 years) on the topic of ageing and unpack their concerns for and expectations of later life. We present qualitative findings from a study of gender identity, ageing and care, based in Wales, United Kingdom. Data were generated from two-part interviews with each participant. Four key themes are identified: (a) facilitative factors for transitioning in mid- to later life; (b) growing older as a new lease of life; (c) growing older: regrets, delays and uncertainties; and (d) ambivalent expectations of social care services. We argue that growing older as TGNC can be experienced across a multitude of standpoints, ranging from a new lease of life to a time of regret and uncertainty. We critically discuss emergent notions of trans time, precarity and uncertainty running across participants' accounts, and the implications for enhancing recognition of gender non-conformity and gender identity in social gerontology.
Keywords: needs and fears, trans* people
Willis, P., Maegusuku-Hewett, T., Raithby, M., & Miles, P.. (2016). 'Everyday Advocates' for Inclusive Care? Perspectives on Enhancing the Provision of Long-Term Care Services for Older Lesbian, Gay and Bisexual Adults in Wales. British Journal of Social Work, 22(1). doi:10.1093/bjsw/bcv143.
This paper centres on a neglected area of social work with older people – the social inclusion of older lesbian, gay and bisexual (LGB) adults in long-term care environments. The translation of equality law into the delivery of adult care services is a challenging endeavour for organisations, even more so in the morally-contested terrain of sexual wellbeing. In this paper we report findings from a mixed method study into the provision of long-term care for older adults who identify as LGB. Herein we present findings from a survey of care workers and managers (n=121) and from focus groups with equality and LGB stakeholder representatives (n=20) in Wales. Focussing on the current knowledge and understanding of staff, we suggest that affirmative beliefs and practices with sexual minorities are evident amongst care workers and managers, however the inclusion of LGB residents needs to be advanced systemically at structural, cultural and individual levels of provision. There is a need for enhancing awareness of the legacy of enduring discrimination for older LGB people, for cultural acceptance in care environments of older people's sexual desires and relationships, and for a more explicit implementation of equality legislation. Social workers in adult care can advance this agenda.
Keywords: professionals' attitudes, EMS, Europe, social work
Wilson, K., Kortes-Miller, K., & Stinchcombe, A.. (2018). Staying Out of the Closet: LGBT Older Adults' Hopes and Fears in Considering End-of-Life. Canadian Journal on Aging / La Revue canadienne du vieillissement, 37(1), 22–31. doi:10.1017/S0714980817000514.
The aging of the Canadian population and the heterogeneity of older adults bring increased diversity at the end of life. The objective of this study was to help fill the gaps in research on aging and the end of life of LGBT people. Using focus groups, we sought to better understand the lived experiences of older LGBT individuals in order to highlight their concerns associated with the last phases of life. Our analysis shows that LGBT identity is a determining factor when considering aging and end-of-life care. In particular, gender identity and sexual orientation are important factors with respect to social ties, influencing individuals' expectations of the care they receive, the unique fear associated with revealing one's homosexuality, and the maintenance of identity throughout aging and the final phases of life. This study underlines the need to consider gender identity and sexual orientation at the end of life. In particular, recognition of intersectionality and social location is essential in order to facilitate positive experiences of aging and end-of-life care.
Witten, T. M.. (2016). The Intersectional Challenges of Aging and of Being a Gender Non-Conforming Adult. Generations, 40(2), 63–70. doi:10.2307/26556204. [URL]
Witten, T. M.. (2017). Health and Well-Being of Transgender Elders. Annual Review of Gerontology and Geriatrics, 37(1), 27–41. [URL]
Woody, I.. (2016). Mary's House: An LGBTQ/SGL-Friendly, Alternative Environment for Older Adults. Generations, 40(2), 108–109. doi:10.2307/26556225.
Because discrimination against LGBTQ elders is so common in senior living residences, and social isolation is pervasive among elders who choose to remain at hom…
Wright, L. A., King, D. K., Retrum, J. H., Helander, K., Wilkins, S., Boggs, J. M., Kickman Portz, J., Nearing, K., & Gozansky, W. S.. (2017). Lessons learned from community-based participatory research: establishing a partnership to support lesbian, gay, bisexual and transgender ageing in place. Family Practice, 34(3), 330–335. doi:10.1093/fampra/cmx005. [URL]
Major, K., Clerc, O., Rochat, S., Cavassini, M., & Büla, C.. (2011). Infection VIH et personnes âgées. Revue Médicale Suisse, 7, 2170–2175.
HIV infection is on its way to becoming a chronic disease, following the decline in mortality brought about by the introduction of antiviral treatments co…
Schmidt, J.. (2020). Retirement Guide For LGBTQ Americans. Forbes.
After he was diagnosed with HIV during the height of the AIDS epidemic, Ed Miller started living his life in two-year increments. "In the beginning, they would just say 'you could have a couple of months to two years,'" says Miller. "That was the extent of my financial planning: I should have eno…
Journée d'étude Seniors LGBT 04.02.2020
Discours de bienvenue
La vieillesse: dernier placard pour les LGBT ? Julien Rougerie, fondation Emergence
Témoignage de Geneviève et André
Etre LGBTIQ dans un EMS: état des lieux et attentes – Max Krieg et Urs Sager
Témoignage de Giovanni
Table ronde – Logement communautaire versus intégration: quel habitat pour les aîné·e·s LGBT ?
Table ronde – Outils et bonnes pratiques pour un accueil inclusif
Discours de clôture et synthèse de la journée
C'est moi Carole
Etre LGBTIQ dans un EMS: Etat des lieux et attentes des personnes LGBTIQ
"Les homosexuels âgés ne veulent pas retourner au placard"
"Une journée d'étude à l'UNIGE pour mieux soutenir les seniors LGBT: interview de Geneviève Donnet"
"Les seniors LGBT à Uni Mail"
"Relever le défi d'être soi à tout âge"
"Il brise le tabou des seniors LGBT"
"Quelles infrastructures pour les seniors LGBT?"
\begin{definition}[Definition:Symbolic Logic]
'''Symbolic logic''' is the study of logic in which the logical form of statements is analyzed by using symbols as tools.
Instead of explicit statements, logical formulas are investigated, which are symbolic representations of statements, and compound statements in particular.
In '''symbolic logic''', the rules of reasoning and logic are investigated by means of formal systems, which form a good foundation for the symbolic manipulations performed in this field.
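For example (a minimal illustration, not part of the formal definition): the informal argument "if it rains, then the ground is wet; it rains; therefore the ground is wet" is analyzed symbolically as an instance of Modus Ponens:
$p \implies q, \quad p \quad \vdash \quad q$
where the symbols $p$ and $q$ stand for the statements "it rains" and "the ground is wet" respectively.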
\end{definition}
Open Access Journal
Electronic Research Announcements
Fixed frequency eigenfunction immersions and supremum norms of random waves
Yaiza Canzani and Boris Hanin
2015, 22: 76-86 doi: 10.3934/era.2015.22.76
A compact Riemannian manifold may be immersed into Euclidean space by using high frequency Laplace eigenfunctions. We study the geometry of the manifold viewed as a metric space endowed with the distance function from the ambient Euclidean space. As an application we give a new proof of a result of Burq-Lebeau and others on upper bounds for the sup-norms of random linear combinations of high frequency eigenfunctions.
Yaiza Canzani, Boris Hanin. Fixed frequency eigenfunction immersions and supremum norms of random waves. Electronic Research Announcements, 2015, 22(0): 76-86. doi: 10.3934/era.2015.22.76.
Global Kolmogorov tori in the planetary $\boldsymbol N$-body problem. Announcement of result
Gabriella Pinzari
We improve a result in [9] by proving the existence of a positive measure set of $(3n-2)$-dimensional quasi-periodic motions in the spatial, planetary $(1+n)$-body problem away from co-planar, circular motions. We also prove that such quasi-periodic motions reach with continuity corresponding $(2n-1)$-dimensional ones of the planar problem, once the mutual inclinations go to zero (this is related to a speculation in [2]). The main tool is a full reduction of the SO(3)-symmetry, which retains symmetry by reflections and highlights a quasi-integrable structure, with a small remainder, independently of eccentricities and inclinations.
Gabriella Pinzari. Global Kolmogorov tori in the planetary $\boldsymbol N$-body problem. Announcement of result. Electronic Research Announcements, 2015, 22(0): 55-75. doi: 10.3934/era.2015.22.55.
A sharp Sobolev-Strichartz estimate for the wave equation
Neal Bez and Chris Jeavons
We calculate the sharp constant and characterize the extremal initial data in $\dot{H}^{\frac{3}{4}} \times \dot{H}^{-\frac{1}{4}}$ for the $L^4$ Sobolev--Strichartz estimate for the wave equation in four spatial dimensions.
Neal Bez, Chris Jeavons. A sharp Sobolev-Strichartz estimate for the wave equation. Electronic Research Announcements, 2015, 22(0): 46-54. doi: 10.3934/era.2015.22.46.
Asymptotic limit of a Navier-Stokes-Korteweg system with density-dependent viscosity
Jianwei Yang, Peng Cheng and Yudong Wang
In this paper, we study a combined incompressible and vanishing capillarity limit in the barotropic compressible Navier-Stokes-Korteweg equations for weak solutions. For well prepared initial data, the convergence of solutions of the compressible Navier-Stokes-Korteweg equations to the solutions of the incompressible Navier-Stokes equation are justified rigorously by adapting the modulated energy method. Furthermore, the corresponding convergence rates are also obtained.
Jianwei Yang, Peng Cheng, Yudong Wang. Asymptotic limit of a Navier-Stokes-Korteweg system with density-dependent viscosity. Electronic Research Announcements, 2015, 22(0): 20-31. doi: 10.3934/era.2015.22.20.
The $\boldsymbol{q}$-deformed Campbell-Baker-Hausdorff-Dynkin theorem
Rüdiger Achilles, Andrea Bonfiglioli and Jacob Katriel
We announce an analogue of the celebrated theorem by Campbell, Baker, Hausdorff, and Dynkin for the $q$-exponential $\exp_q(x)=\sum_{n=0}^{\infty} \frac{x^n}{[n]_q!}$, with the usual notation for $q$-factorials: $[n]_q!:=[n-1]_q!\cdot(q^n-1)/(q-1)$ and $[0]_q!:=1$. Our result states that if $x$ and $y$ are non-commuting indeterminates and $[y,x]_q$ is the $q$-commutator $yx-q\,xy$, then there exist linear combinations $Q_{i,j}(x,y)$ of iterated $q$-commutators with exactly $i$ $x$'s, $j$ $y$'s and $[y,x]_q$ in their central position, such that $\exp_q(x)\exp_q(y)=\exp_q\!\big(x+y+\sum_{i,j\geq 1}Q_{i,j}(x,y)\big)$. Our expansion is consistent with the well-known result by Schützenberger ensuring that one has $\exp_q(x)\exp_q(y)=\exp_q(x+y)$ if and only if $[y,x]_q=0$, and it improves former partial results on $q$-deformed exponentiation. Furthermore, we give an algorithm which produces conjecturally a minimal generating set for the relations between $[y,x]_q$-centered $q$-commutators of any bidegree $(i,j)$, and it allows us to compute all possible $Q_{i,j}$.
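As a quick consistency check at the lowest bidegree (a sketch based only on the conventions stated in the abstract, not taken from the announced paper): expanding both sides to second order in the indeterminates and using $[2]_q!=q+1$ forces $Q_{1,1}(x,y)=xy-\frac{xy+yx}{q+1}=-\frac{[y,x]_q}{q+1}$, which vanishes exactly when $[y,x]_q=0$, in agreement with Schützenberger's criterion.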
Rüdiger Achilles, Andrea Bonfiglioli, Jacob Katriel. The $\boldsymbol{q}$-deformed Campbell-Baker-Hausdorff-Dynkin theorem. Electronic Research Announcements, 2015, 22(0): 32-45. doi: 10.3934/era.2015.22.32.
Smoothing 3-dimensional polyhedral spaces
Nina Lebedeva, Vladimir Matveev, Anton Petrunin and Vsevolod Shevchishin
We show that 3-dimensional polyhedral manifolds with nonnegative curvature in the sense of Alexandrov can be approximated by nonnegatively curved 3-dimensional Riemannian manifolds.
Nina Lebedeva, Vladimir Matveev, Anton Petrunin, Vsevolod Shevchishin. Smoothing 3-dimensional polyhedral spaces. Electronic Research Announcements, 2015, 22(0): 12-19. doi: 10.3934/era.2015.22.12.
The approximate Loebl-Komlós-Sós conjecture and embedding trees in sparse graphs
Jan Hladký, Diana Piguet, Miklós Simonovits, Maya Stein and Endre Szemerédi
2015, 22: 1-11 doi: 10.3934/era.2015.22.1
Loebl, Komlós and Sós conjectured that every $n$-vertex graph $G$ with at least $n/2$ vertices of degree at least $k$ contains each tree $T$ of order $k+1$ as a subgraph. We give a sketch of a proof of the approximate version of this conjecture for large values of $k$.
For our proof, we use a structural decomposition which can be seen as an analogue of Szemerédi's regularity lemma for possibly very sparse graphs. With this tool, each graph can be decomposed into four parts: a set of vertices of huge degree, regular pairs (in the sense of the regularity lemma), and two other objects each exhibiting certain expansion properties. We then exploit the properties of each of the parts of $G$ to embed a given tree $T$.
The purpose of this note is to highlight the key steps of our proof. Details can be found in [arXiv:1211.3050].
Jan Hladký, Diana Piguet, Miklós Simonovits, Maya Stein, Endre Szemerédi. The approximate Loebl-Komlós-Sós conjecture and embedding trees in sparse graphs. Electronic Research Announcements, 2015, 22(0): 1-11. doi: 10.3934/era.2015.22.1.
An inverse theorem for the Gowers $U^{s+1}[N]$-norm
Ben Green, Terence Tao and Tamar Ziegler
This is an announcement of the proof of the inverse conjecture for the Gowers $U^{s+1}[N]$-norm for all $s \geq 3$; this is new for $s \geq 4$, the cases $s = 1,2,3$ having been previously established. More precisely we outline a proof that if $f : [N] \rightarrow [-1,1]$ is a function with $\|f\|_{U^{s+1}[N]} \geq \delta$ then there is a bounded-complexity $s$-step nilsequence $F(g(n)\Gamma)$ which correlates with $f$, where the bounds on the complexity and correlation depend only on $s$ and $\delta$. From previous results, this conjecture implies the Hardy-Littlewood prime tuples conjecture for any linear system of finite complexity. In particular, one obtains an asymptotic formula for the number of $k$-term arithmetic progressions $p_1 < p_2 < ... < p_k \leq N$ of primes, for every $k \geq 3$.
Ben Green, Terence Tao, Tamar Ziegler. An inverse theorem for the Gowers $U^{s+1}[N]$-norm. Electronic Research Announcements, 2011, 18(0): 69-90. doi: 10.3934/era.2011.18.69.
Linear approximate groups
Emmanuel Breuillard, Ben Green and Terence Tao
This is an informal announcement of results to be described and proved in detail in [3]. We give various results on the structure of approximate subgroups in linear groups such as ${\rm{S}}{{\rm{L}}_n}(k)$. For example, generalizing a result of Helfgott (who handled the cases $n = 2$ and $3$), we show that any approximate subgroup of ${\rm{S}}{{\rm{L}}_n}({\mathbb{F}_q})$ which generates the group must be either very small or else nearly all of ${\rm{S}}{{\rm{L}}_n}({\mathbb{F}_q})$. The argument is valid for all Chevalley groups $G(\mathbb{F}_q)$. Extending work of Bourgain-Gamburd we also announce some applications to expanders, which will be proven in detail in [4].
Emmanuel Breuillard, Ben Green, Terence Tao. Linear approximate groups. Electronic Research Announcements, 2010, 17(0): 57-67. doi: 10.3934/era.2010.17.57.
Pointwise theorems for amenable groups
Elon Lindenstrauss
1999, 5: 82-90
Elon Lindenstrauss. Pointwise theorems for amenable groups. Electronic Research Announcements, 1999, 5(0): 82-90.
\begin{document}
\begin{frontmatter} \title{Flexible and experimentally feasible shortcut to quantum Zeno dynamic passage} \author{Wenlin Li$^1$} \author{Fengyang Zhang$^{1,2}$} \author{Yunfeng Jiang$^3$} \author{Chong Li\corref{cor1}$^1$} \cortext[cor1]{Corresponding author. [email protected]} \author{Heshan Song\corref{cor2}$^1$} \cortext[cor2]{Corresponding author. [email protected]} \address{$^{1}$ School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024, China} \address{$^{2}$ School of Physics and Materials Engineering, Dalian Nationalities University, Dalian 116600, China} \address{$^{3}$ Materials Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0418, USA} \begin{abstract} We propose and discuss a theoretical scheme to speed up Zeno dynamic passage by an external acceleration Hamiltonian. This scheme is a flexible and experimentally feasible acceleration because the acceleration Hamiltonian does not adhere rigidly to an invariant relationship, whereas it can be a more general form $\sum u_{j}(t)H_{cj}$. Here $H_{cj}$ can be arbitrarily selected without any limitation, and therefore one can always construct an acceleration Hamiltonian by only using realizable $H_{cj}$. Applying our scheme, we finally design an experimentally feasible Hamiltonian as an example to speed up an entanglement preparation passage. \end{abstract} \begin{keyword} Approximation acceleration, Zeno dynamics, Quantum control \end{keyword} \end{frontmatter}
\section{Introduction} In order to satisfy certain approximation conditions and thereby simplify a physical system, the evolution speed sometimes has to be sacrificed, which is a common practice in quantum information processing (QIP). For successful QIP, however, one necessary prerequisite is that the evolution time be short enough to avoid the influence of decoherence \cite{1,2}. An ideal way to reconcile this contradiction is to append an external Hamiltonian to the system so that the exact evolution reproduces the result obtained under the approximation conditions. This basic principle of approximation acceleration has been applied successfully to speed up adiabatic passages in recent years \cite{3,4,5,6,7,8,9,10}, but discussions of accelerating other approximations are still rare in both theory and experiment. In addition, another common defect of existing acceleration schemes is that almost all acceleration Hamiltonians are designed from a fixed expression ($H_1(t)=i\hbar\sum_{n}\vert\partial_t\lambda_n\rangle\langle\lambda_n\vert$). It remains difficult to realize those schemes in experiments because the corresponding acceleration Hamiltonian may contain non--physical interactions. For example, Chen's scheme \cite{3} needed a transition between two ground states of a $\Lambda$-type atom, and Lu's scheme \cite{6} required a swap-gate-like term $\vert gf\rangle_{12}\langle fg\vert+H.c.$ in the acceleration Hamiltonian. Detuned driving fields may realize some of those non--physical interactions to some extent \cite{6,11,12}, but obtaining such an effective interaction is bound to introduce further approximation conditions. Therefore, it is still an open question how to design an acceleration Hamiltonian using only reasonable interactions.
In this letter, we try to remedy the above two defects: (a) the acceleration scheme is extended to other common approximations; (b) a general scheme is proposed (we call it the ``flexible scheme'') so that, for a given system, the acceleration Hamiltonian can always be decomposed into allowed interactions. We believe that such a scheme is universal and feasible in experiments.
In recent years, quantum Zeno dynamics \cite{13,14} has also been an approximation widely used to simplify Hamiltonians in entanglement preparation or quantum gate realization, at the cost of a long evolution time ($gt\sim 10^2$) \cite{19,20,15,16,17,18,r1}. Unlike the adiabatic approximation, accelerating the Zeno approximation only requires the system to evolve within a specific subspace rather than into a specific state. In other words, Zeno approximation acceleration corresponds to a more relaxed restriction and is therefore better suited to flexible designs. Thus, in this letter, we discuss in detail how to speed up a Zeno dynamical process, and we present an entanglement-preparation example to illustrate the fixed and flexible schemes more intuitively. We demonstrate that the evolution time is clearly reduced after the acceleration, and that the bounds on the tolerable decay rates are also relaxed. Above all, the generators of the flexible acceleration Hamiltonian in our example are exactly those of the system Hamiltonian, which provides a promising platform for advancing the maneuverability of QIP.
Before the in-depth discussion, we firstly give a brief introduction about the Zeno dynamic and quantum Lyapunov control. Suppose a dynamical evolution of the whole system is governed by the Hamiltonian $H=H_0+H_I=H_0+KH_m$, where $H_0$ is the subsystem Hamiltonian to be investigated and $H_I=KH_m$ is an additional interaction Hamiltonian to perform the continuous coupling with the constant $K$. Under the strong coupling condition $K\rightarrow\infty$, the subsystem investigated is dominated by the time--evolution operator ($\hbar=1$) \cite{13,14}: \begin{equation} U_0(t)=\lim_{K\rightarrow\infty}\exp(iKH_mt)U(t). \label{eq:subtime} \end{equation} On the other hand, the time--evolution operator of this subsystem can also be expressed as: $U_0(t)=\exp(-it\sum_nP_nH_0P_n)$, where $P_n$ is the eigenprojection of the $H_m$ with the eigenvalue $\lambda_n$. The time--evolution operator of the whole system can then be simplified as: \begin{equation} \begin{split} U(t)&\sim\exp(-iKH_mt)U_0(t)\\ &=\exp\left[-it\sum_{n}(K\lambda_nP_n+P_nH_0P_n)\right], \label{eq:time} \end{split} \end{equation} and we can obtain an effective Hamiltonian in the following form: $H_{eff}=\sum_{n}(K\lambda_nP_n+P_nH_0P_n)$. Because the Zeno condition requires a weaker $H_0$ compared with $H_m$, the evolution time will be quite long in this case.
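As a minimal numerical sketch of this construction (an illustration only, assuming finite-dimensional Hermitian matrices; the function name is a placeholder), the effective Hamiltonian $H_{eff}=\sum_{n}(K\lambda_nP_n+P_nH_0P_n)$ can be assembled from the eigenprojections of $H_m$ as follows:
\begin{verbatim}
import numpy as np

def zeno_effective_hamiltonian(H0, Hm, K, tol=1e-9):
    # Group the eigenvectors of Hm into (possibly degenerate) eigenspaces,
    # build the eigenprojections P_n, and sum K*lam_n*P_n + P_n H0 P_n.
    lam, V = np.linalg.eigh(Hm)
    Heff = np.zeros_like(np.asarray(H0, dtype=complex))
    done = np.zeros(len(lam), dtype=bool)
    for i in range(len(lam)):
        if done[i]:
            continue
        idx = np.where(np.abs(lam - lam[i]) < tol)[0]   # one degenerate group
        done[idx] = True
        P = V[:, idx] @ V[:, idx].conj().T              # eigenprojection P_n
        Heff += K * lam[i] * P + P @ H0 @ P
    return Heff
\end{verbatim}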
{For a quantum system, the aim of quantum control is to make the system evolve to a specified target quantum state (or a target subspace) by designing appropriate time-varying control fields. The core idea of quantum Lyapunov control is to design an auxiliary function $V$ involving both quantum state and control field. $V$ can be regarded as a Lyapunov function if $V\geqslant 0$ and the system converges to the target state given by its saddle point $V=0$ \cite{21,21ad1,21ad2}. The Lyapunov control theory has demonstrated that the system can be controlled into the target state (subspace) only if the control fields are designed to meet $\dot{V}\leqslant 0$, i.e., the Lyapunov function is a monotonically nonincreasing function in the time domain corresponding to whole evolution process, and it tends to its minimum finally with the help of control fields.}
\section{General formalism of shortcut scheme} \label{subsection:General formalism}
\subsection{``Rough'' acceleration} Similarly to the acceleration of the adiabatic approximation, a simple idea for Zeno acceleration is to compensate the terms neglected in the approximation, i.e., to make those terms reappear in the system Hamiltonian: \begin{equation} H_{R}=UH_{eff}U^{\dagger}-H_0-H_I, \label{eq:Rough} \end{equation} consequently, the total Hamiltonian $H_t=H_0+H_I+H_R$ is precisely equal to $H_{eff}$ after a diagonalization, without any approximation. Some restrictions on the key parameters are therefore no longer necessary, which provides the prerequisite for the acceleration process.
We call this scheme the ``rough'' acceleration because the acceleration Hamiltonian based on this idea is fixed once and for all, and it may be meaningless if some non--physical interactions appear in its expression. Conversely, in the next subsection, we introduce a more flexible scheme in which the acceleration Hamiltonian can be changed and replaced almost without limitation. \subsection{Flexible acceleration} Different from the ``rough'' acceleration, our aim here is to realize an acceleration process with a more freely chosen Hamiltonian. Consequently, the difficulty of the corresponding experiments will be significantly reduced because this acceleration Hamiltonian is not limited to the form of Eq. (\ref{eq:Rough}). For a general discussion, the external acceleration Hamiltonian can be written as $H_F=\sum_{j=0}^{n}u_{j}(t)H_{cj}$, where $H_{cj}$ can be chosen flexibly by the designers and $u_{j}(t)$ are the corresponding control fields. A closed Zeno quantum system with this external acceleration Hamiltonian is described by the following Liouville equation \begin{equation} \dfrac{d\rho}{dt}=\dot{\rho}=-i\left[\left(H_{0}+H_{I}+\sum_{j=0}^{n}u_{j}(t)H_{cj}\right),\rho\right]. \label{eq:Flexible} \end{equation} In order to obtain a suitable acceleration Hamiltonian, we define an auxiliary function $V(t)$ as: \begin{equation} V(t)=\Trr[H^{2}_{I}\rho(t)]. \label{eq:Auxiliary function} \end{equation} {We mark the $i$th eigenvalue and eigenvector of $H_I$ as $E_i$ and $\vert E_i\rangle$, respectively. Then Eq. \eqref{eq:Auxiliary function} becomes $V(t)=\sum_iP_i E^2_i$, i.e., the weighted sum of the squared eigenvalues. Here $P_i=\langle E_i\vert\rho(t)\vert E_i\rangle$ are the populations of $\rho(t)$ on the $i$th eigenvector. Under this definition, $V\geqslant 0$ is obvious. As mentioned above, the Zeno approximation requires the system to evolve in a Zeno subspace with degenerate eigenvalues. According to whether the corresponding eigenvectors are in the target space or not, we divide the populations into two parts, \begin{equation} \begin{split} (\underbrace{P_1^T,P_2^T,...,P_m^T}_{target\,space},\underbrace{P_{m+1}^N,P_{m+2}^N,...,P_n^N}_{non\,target\,space}). \label{eq:order} \end{split} \end{equation} If the goal subspace selected corresponds to $E_i=0$ ($i\leqslant m$), then $V(t)=0+\sum_{i>m}P^N_i E^2_i$. When the quantum state $\rho(t)$ lies completely in the target space, $P^N_i=0$ for all $i>m$. In other words, the system is trapped in the goal subspace if and only if $V=0$, and correspondingly, $V$ can be regarded as a measure of the violation of the Zeno subspace restriction.}
Substituting Eq. (\ref{eq:Auxiliary function}) into Eq. (\ref{eq:Flexible}), we can obtain the derivative of $V$ as follow: \begin{equation} \begin{split} \dot{V}&=\Trr(H^{2}_{I}\dot{\rho})=\Trr\left(-iH^{2}_{I}\left[(H_0+H_I+\sum_{j=0}^{n}u_{j}(t)H_{cj}),\rho\right]\right)\\&=\Trr(-i\rho[H^{2}_I,H_0])+\sum_{j=0}^{n}u_{j}(t)\Trr(-i\rho[H_I^2,H_{cj}]). \label{eq:dot function1} \end{split} \end{equation} If the control fields are set as: \begin{equation} \left\lbrace \begin{array}{ll} u_0=-\dfrac{\Trr(-i\rho[H^{2}_I,H_0])}{\Trr(-i\rho[H_I^2,H_{c0}])}\\ \\ u_{j\neq 0}=-k_{j}\Trr(-i\rho[H_I^2,H_{cj}]), \label{eq:CON function1} \end{array} \right. \end{equation} {and the second term in Eq. \eqref{eq:dot function1} can be expanded as \begin{equation} \begin{split} &\sum_{j=0}^{n}u_{j}(t)\Trr(-i\rho[H_I^2,H_{cj}])\\=&u_{0}(t)\Trr(-i\rho[H_I^2,H_{c0}])+\sum_{j=1}^{n}u_{j}(t)\Trr(-i\rho[H_I^2,H_{cj}])\\=&-\Trr(-i\rho[H^{2}_I,H_0])+\sum_{j=1}^{n}u_{j}(t)\Trr(-i\rho[H_I^2,H_{cj}]) \label{eq:adfunction1} \end{split} \end{equation} by substituting $u_0$ in Eq. \eqref{eq:CON function1} into Eq. \eqref{eq:dot function1}.} Then Eq. (\ref{eq:dot function1}) can be simplified as $\dot{V}=-\sum_{j=1}^{n}k_{j}\Trr(-i\rho[H_I^2,H_{cj}])^2$ and it will always be negative if $k_{j}\geqslant 0$. By virtue of Eq. (\ref{eq:CON function1}), $V\geqslant 0$ and $\dot{V}\leqslant 0$ are satisfied simultaneously. In this case, $V$ is a so-called Lyapunov function, which can ensure that the system will evolve to the state corresponding to its own saddle point ($V=0$) \cite{21}. In other words, the acceleration Hamiltonian $H_F$ will control system into the goal subspace even if the Zeno condition is no longer satisfied.
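A minimal sketch of evaluating this control law numerically (an illustration under the assumption of finite-dimensional matrices and a non-vanishing denominator in $u_0$ of Eq.~(\ref{eq:CON function1}); the helper names are placeholders) reads:
\begin{verbatim}
import numpy as np

def lyapunov_controls(rho, H0, HI, Hc_list, k_list):
    # Control fields: u_0 cancels the drift generated by H_0, while
    # u_j (j >= 1) enforce dV/dt <= 0 for V = Tr[H_I^2 rho].
    # k_list[0] is unused; k_list[j] >= 0 for j >= 1.
    HI2 = HI @ HI
    def bracket(A):
        # Tr(-i rho [H_I^2, A]); real for Hermitian rho, H_I, A
        return np.real(-1j * np.trace(rho @ (HI2 @ A - A @ HI2)))
    u = np.zeros(len(Hc_list))
    u[0] = -bracket(H0) / bracket(Hc_list[0])
    for j in range(1, len(Hc_list)):
        u[j] = -k_list[j] * bracket(Hc_list[j])
    return u
\end{verbatim}
At each integration step of Eq. (\ref{eq:Flexible}) the fields are recomputed from the current $\rho(t)$, so the control is state-dependent in the simulation.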
\section{Example in entanglement preparation} Entanglement preparation is one of the most important issues in the field of QIP \cite{22,23}. In this section, we introduce and analyze a general entanglement preparation scheme based on Zeno dynamic to show the necessity of the shortcut and to explain our acceleration scheme in more detail. In the frame of cavity quantum electrodynamic (QED) system, the sketch of this entanglement preparation scheme is shown in Fig. \ref{fig:fig1}. \begin{figure}
\caption{(a): Two $\Lambda$--type atoms ($A$, $B$) coupled with radiation fields in two resonant cavities ($C_1$, $C_2$). The fiber $f$ links two cavities. Here we encode as follow: $\vert\psi\rangle_i$ means that the $i$th subsystem is in the state $\vert\psi\rangle$, where $i\in\{A,B,C_1,C_2,f\}$. (b): The field--atom interaction $\vert f\rangle\leftrightarrow\vert e\rangle$ is provided by quantum fields, correspondingly, $\vert g\rangle\leftrightarrow\vert e\rangle$ is provided by classical fields. }
\label{fig:subfig:a}
\label{fig:fig1}
\end{figure} In the interaction picture, the Hamiltonian of the whole system can be described in the following form: $H=H_{laser}+H_I$, where \begin{equation} \begin{split} &H_{laser}=\Omega_1\vert e\rangle_{A}\langle g\vert+\Omega_2\vert e\rangle_{B}\langle g\vert+H.c.\\ &H_{I}=g_1a_1\vert e\rangle_{A}\langle f\vert+g_2a_2\vert e\rangle_{B}\langle f\vert+\lambda[b^{\dagger}(a_1+a_2)]+H.c.. \label{eq:Hamilton} \end{split} \end{equation} In above expressions, $a_1$($a_1^\dagger$) and $a_2$($a_2^\dagger$) are the annihilation(creation) operators of the cavity fields $C_1$ and $C_2$, respectively. $b$($b^\dagger$) is the annihilation(creation) operator of the fiber. $g_{1,2}$ and $\Omega_{1,2}$ are the coupling intensities respectively corresponding to the field--atom interactions $\vert f\rangle\leftrightarrow\vert e\rangle$ and $\vert g\rangle\leftrightarrow\vert e\rangle$, $\lambda$ is the coupling intensity of the fiber. Here we set $g_1=g_2$ for convenience. {In this model, $u_{j}$ between energy levels corresponding to green lines in Fig. \ref{fig:fig1} can be designed as time-dependent because the intensity of classical and quantum fields can be adjusted. $u_j$ denotes the fiber coupling intensities and it should be set as a fixed value. $u_j$ corresponding to all other transitions (e.g., red lines in Fig. \ref{fig:fig1}) should be zero because those transitions are not allowed physically.} \begin{figure}
\caption{(a): The maximum fidelity and the corresponding $t_{min}$ with varied $\Omega_2/g$. Main figure exhibits the change of closed system and the inset shows the cases corresponding to $\gamma=0.0005$ (green), $\gamma=0.001$ (red) and $\gamma=0.002$ (blue), respectively. (b): Fidelity with the varied time under $\Omega_2/g=0.5$ (main figure) and $\Omega_2/g=0.05$ (the top inset). The blue line denotes the approximate solution and the red line is the exact solution. Another inset shows the probabilities of different Zeno subspaces. In this calculation, we set $g=\lambda=1$ for convenience. }
\label{fig:subfig:a}
\label{fig:fig2}
\end{figure}
If $\Omega_{1,2}\ll g,\lambda$, the Hamiltonian can be expanded to the following complete set: \begin{equation} \begin{split} \{&\vert\phi_1\rangle=\vert fg000\rangle,\,\vert\phi_2\rangle=\vert fe000\rangle,\,\vert\phi_3\rangle=\vert ff010\rangle\\ &\vert\phi_4\rangle=\vert ff100\rangle,\,\vert\phi_5\rangle=\vert ff001\rangle,\,\vert\phi_6\rangle=\vert gf000\rangle\\ &\vert\phi_7\rangle=\vert ef000\rangle\}, \label{eq:Basic} \end{split} \end{equation} and one can diagonalize conveniently after neglecting $H_{laser}$. {What needs to be explained is that $\vert fg000\rangle$ denotes $\vert fg\rangle_{AB}\vert 00\rangle_{c_1c_2}\vert 0\rangle_{f}$ and other vectors in Eq. \eqref{eq:Basic} obey the same order.} Under these conditions, the whole Hilbert space can be divided into five Zeno subspaces \cite{24}: \begin{equation} \begin{split} &Z_1=\{\vert\psi_{1}\rangle,\vert\psi_{2}\rangle,\vert\psi_{3}\rangle\}\\ &Z_2=\{\vert\psi_{4}\rangle\}\\ &Z_3=\{\vert\psi_{5}\rangle\}\\ &Z_4=\{\vert\psi_{6}\rangle\}\\ &Z_5=\{\vert\psi_{7}\rangle\}, \label{eq:Zeno} \end{split} \end{equation} corresponding to the eigenvalues $\zeta_{1,2,3}=0$, $\zeta_2=g$, $\zeta_3=-g$, $\zeta_4=\sqrt{g^2+2\lambda^2}$ and $\zeta_5=-\sqrt{g^2+2\lambda^2}$, respectively. Using the related derivations of Zeno dynamics (Eqs. (\ref{eq:subtime},\ref{eq:time})), the Hamiltonian in Eq. (\ref{eq:Hamilton}) can be rewritten in the form of: $H_{eff}=\Omega_2\delta\vert\psi_{1}\rangle\langle\psi_{2}\vert+\Omega_1\delta\vert\psi_{2}\rangle\langle\psi_{3}\vert+H.c.+\sum_{k=4}^7\zeta_k\vert\psi_k\rangle\langle\psi_k\vert$ \cite{25}, and it can be reduced to $H_{eff}=\Omega_2\delta\vert\psi_{1}\rangle\langle\psi_{2}\vert+\Omega_1\delta\vert\psi_{2}\rangle\langle\psi_{3}\vert+H.c.$ if the initial state is limited in the first Zeno subspace $Z_1$. $H_{eff}$ is just a $3\times 3$ matrix, therefore the evolution of the system state can be easily calculated and expressed as \begin{equation} \begin{split} \vert\psi(t)\rangle=&\dfrac{1}{\Omega^2}[(\Omega^2_1+\Omega^2_2\cos\Omega\delta t)\vert\psi_{1}\rangle-i\Omega\Omega_2\sin\Omega\delta t\vert\psi_{2}\rangle\\&+\Omega_1\Omega_2(-1+\cos\Omega\delta t)\vert\psi_{3}\rangle], \label{eq:State} \end{split} \end{equation} corresponding to the initial state $\vert\psi(0)\rangle=\vert fg\rangle_{AB}\vert00\rangle_{C_1C_2}\vert 0\rangle_f$. While the parameters are taken as $t_s=(2n+1)\pi/\Omega\delta$ and $\Omega_1=(\sqrt{2}-1)\Omega_2$, Eq. (\ref{eq:State}) becomes: $\vert\psi(t_s)\rangle=(\vert\psi_1\rangle+\vert\psi_3\rangle)/\sqrt{2}$ and the corresponding atoms are in the Bell state: $\vert\psi^+\rangle=(\vert fg\rangle+\vert gf\rangle)/\sqrt{2}$; On the contrary, if we set $t_s=(2n+1)\pi/\Omega\delta$ and $\Omega_1=(\sqrt{2}+1)\Omega_2$, the atoms are in the Bell state: $\vert\psi^-\rangle=(\vert fg\rangle-\vert gf\rangle)/\sqrt{2}$.
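A minimal numerical check of the Bell-state timing stated above (an illustration only, with $\delta$ absorbed into the time unit and the variable names chosen here for convenience):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

O2 = 1.0
O1 = (np.sqrt(2) - 1) * O2                 # choice leading to |psi^+>
Omega = np.sqrt(O1**2 + O2**2)
H = np.array([[0, O2, 0],                  # H_eff in the basis {psi_1, psi_2, psi_3}
              [O2, 0, O1],
              [0, O1, 0]], dtype=complex)
psi = expm(-1j * H * np.pi / Omega) @ np.array([1, 0, 0], dtype=complex)
target = np.array([1, 0, 1]) / np.sqrt(2)  # (|psi_1> + |psi_3>)/sqrt(2)
print(abs(np.vdot(target, psi))**2)        # fidelity ~ 1 (up to a global phase)
\end{verbatim}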
Through analyzing above entanglement preparation scheme, we find that the fastest time for the initial state to achieve the maximum entangled state is $gt_{min}=\pi/\Omega\delta$. However, the Zeno dynamic requires that $\Omega_{1,2}\ll g,\lambda$, which will lead to too long $gt_{min}$. To explain this, we consider the first case and plot the fidelity $\mathcal{F}(\vert\psi^+\rangle,\rho)=\langle\psi^+\vert\rho\vert\psi^+\rangle$ \cite{26,27,FF} and the corresponding $gt_{min}$ with the varied $\Omega_2/g$ in Fig. \ref{fig:fig2}. In Fig. \ref{fig:fig2}(a), we show that the oscillating fidelity is lower than $35\%(85\%)$ when $gt_{min}<5(15)$. {Similarly to previous works, Fig.\ref{fig:fig2} shows that the fidelity does not decrease monotonically with increasing $\Omega$ when the approximate condition is destroyed. In time domain, this phenomenon is reflected in the oscillating fidelity when $gt_{min}$ is small. The fidelity with intensive oscillation indicates that a tiny parameter deviation will affect the fidelity significantly, and the system actually corresponds to a state of weak robustness. On the contrary, the fidelity in a practical entanglement preparation scheme is required to be stable and to tend to the approximate solution (i.e., exhibits a strong robustness). And we further find that this requirement will not be satisfied until $gt=100$ \cite{r1}.} This long evolution time may make the scheme ineffective when the system interacts with the environment. In the inset in Fig. \ref{fig:fig2}(a), we also show system evolution under the following non--Hermitian Hamiltonian \begin{equation} H'_{eff}=H_{eff}-i\sum\dfrac{\gamma_{j}}{2}\vert\phi_j\rangle\langle\phi_j\vert, \label{eq:Hde} \end{equation} where $\gamma_{j}$ are the dissipation coefficients of atoms or optical fields. It illustrates that the entanglement preparation scheme is valid only if $\gamma/g\leqslant 0.0005$, which corresponds to a very demanding experimental condition. In Fig. \ref{fig:fig2}(b), we plot the $\mathcal{F}(t)$ under different $ \Omega_2/g$ and find that the system state is not limited within the subspace $Z_1$ but jumps into other Zeno subspaces if the Zeno conditions are destroyed. \begin{figure}
\caption{(a): The maximum fidelity and the corresponding $t_{min}$ with varied $\Omega_2/g$. (b): Fidelity evolution with varied $t$ under $\Omega_2/g=1$. The red line denotes the fidelity corresponding to ``rough'' acceleration scheme and the blue dotted and solid lines respectively are flexible acceleration and non-acceleration. Here we set $k_j=10$ and other parameters are the same as those in Fig. \ref{fig:fig2}. }
\label{fig:fig3}
\end{figure}
As we discussed in Sec. \ref{subsection:General formalism}, the acceleration Hamiltonians of this scheme are respectively $H_{R}=UH_{eff}U^{\dagger}-H_{laser}=\Omega_2(\delta^2-1)\vert\phi_1\rangle\langle\phi_2\vert-\Omega_2\delta^2g/\lambda\vert\phi_1\rangle\langle\phi_5\vert+\Omega_2\delta^2\vert\phi_1\rangle\langle\phi_7\vert+\Omega_1\delta^2\vert\phi_2\rangle\langle\phi_6\vert-\Omega_1\delta^2g/\lambda\vert\phi_5\rangle\langle\phi_6\vert+\Omega_1(\delta^2-1)\vert\phi_6\rangle\langle\phi_7\vert+H.c.$ which corresponds to the ``rough'' acceleration, and $H_{F}=\sum_{j=0}^{n}u_{j}(t)H_{cj}$ which corresponds to the flexible acceleration. We firstly consider a set of complete $\{H_{cj}\}$ in order to compare two kinds of acceleration schemes, i.e., $H_{cj}$ are selected as $H_{c0}=H_{c1}=\vert\psi_1\rangle\langle\psi_4\vert+H.c.$; $H_{c(2,3,4)}=\vert\psi_1\rangle\langle\psi_{5,6,7}\vert+H.c.$; $H_{c(5,6,7,8)}=\vert\psi_2\rangle\langle\psi_{4,5,6,7}\vert+H.c.$; $H_{c(9,10,11,12)}=\vert\psi_3\rangle\langle\psi_{4,5,6,7}\vert+H.c.$, respectively, because complete $\{H_{cj}\}$ can ensure that the system is controlled to a maximum level without loss of fidelity.
In Fig. \ref{fig:fig3}, we show the contrast results corresponding to non-acceleration, ``rough'' acceleration and flexible acceleration, respectively. Fig. \ref{fig:fig3}(a) illustrates that the ``rough'' acceleration can hold $\mathcal{F}=1$ during the whole evolution period, contrarily, the fidelity of flexible acceleration keeps on increasing from $0.993$ to $1$. This phenomenon is natural because $H_R$ adds all of the approximated terms into the system whereas the Lyapunov control theory only provides $\mathcal{F}\rightarrow 1$ in limited time. However, the fidelity is still significantly greater than the one without any acceleration. In particular, in the range of $gt\in[1,5]$, the flexible acceleration can ensure that the fidelity is always greater than $99.9\%$ although the fidelity corresponding to non-acceleration is only $34.5\%$. In Fig. \ref{fig:fig3}(b), we show the time evolution of fidelity under $\Omega_2/g=1$. It can be directly observed that the fidelity of flexible acceleration exhibits a similar evolution with that of the ``rough" acceleration and the fidelity distortion is only $0.08\%$. \begin{figure}
\caption{(a): The maximum fidelities with varied $\gamma/g$ under different $\Omega_2/g$. (b): Fidelity evolutions with varied $t$ under $\gamma/g$. All parameters in this simulation are the same as those in Fig. \ref{fig:fig3}. }
\label{fig:fig4}
\end{figure} In Fig. \ref{fig:fig4}, we consider the influence of the environment interaction and the result shows that while $\Omega_2=g$, $\mathcal{F}$ will remain at the level of $90\%$ even though $\gamma/g=0.03$. This boundary is enlarged about $60$ times than that using the non-acceleration scheme, which results in easier implementation for our scheme in experiments. \cite{17,29,30,31}.
For two $\Lambda$-type atoms in a real experiment, however, it can be known that both $H_R$ and $H_F$ do not exist because there are some non--physical interaction terms in their expressions (e.g., $\vert\phi_1\rangle\langle\phi_5\vert+H.c.,\vert\phi_1\rangle\langle\phi_7\vert+H.c.$, and so on). ``Rough'' acceleration can not fix this defect directly since $H_R$ is already an invariant function in a certain scheme. Contrarily, $H_F$ can be easily adjusted by selecting realizable $\{H_{cj}\}$ afresh. In general, a realizable $\{H_{cj}\}$ is usually an incomplete set of Hamiltonian. In the system shown in Fig. \ref{fig:fig1}, for example, the $\{H_{cj}\}$ set constituted only by realizable interactions is: \begin{equation} \begin{split} H_{cj}\in\{&H_{c1}=\vert\phi_1\rangle\langle\phi_2\vert+H.c.,H_{c2}=\vert\phi_6\rangle\langle\phi_7\vert+H.c.,\\ &H_{c3}=\vert\phi_2\rangle\langle\phi_3\vert+H.c.,H_{c4}=\vert\phi_4\rangle\langle\phi_7\vert+H.c.,\\ &H_{c5}=\vert\phi_3\rangle\langle\phi_5\vert+\vert\phi_4\rangle\langle\phi_5\vert+H.c.\}. \label{eq:hrhr} \end{split} \end{equation} {In this set, $H_{c1}\sim H_{c4}$ correspond to field--atom couplings and they can be achieved by adjusting the Rabi frequency and the cavity detuning of each transition processing. Correspondingly, $H_{c5}$ is cavity--fiber interaction and it can also be adjusted in some ring cavity systems. But this adjustment is uncommon, therefore we select $u_5$ as a constant for a general discussion.} This incomplete Hamiltonian can not contain all control paths between different Zeno subspaces, hence some fidelity distortions may exist here. Even so, the advantage of this scheme is obvious because all terms in Eq. (\ref{eq:hrhr}) can be realized in an experiment. \begin{figure}
\caption{(a): The maximum fidelities and the corresponding $t_{min}$ with varied $\Omega_2/g$. (b): Fidelity evolutions with varied $t$ under the condition $t_{min}=12.4$ (main figure) and their performances under different experiment parameters (inset). The solid and dotted lines denote the fidelity corresponding to acceleration and non-acceleration. Here we set $H_{c0}=0$ and $k=0.6$, and other parameters in this simulation are the same as those in Fig. \ref{fig:fig3}. }
\label{fig:fig5}
\end{figure}
In Fig. \ref{fig:fig5}(a), we show that the fidelities of flexible acceleration can not fast and perfectly approach to $1$ such as that in Fig. \ref{fig:fig3}. {If we define such a standard, that is, the fidelity not only is greater than $95\%$ but also always keeps the status, i.e., system with stronger robustness}, the minimum evolution time corresponding to flexible acceleration is $gt_{min}=8.1$, and obviously, it is still nearly $60\%$ compression compared with $gt_{min}=20$ in non-acceleration scheme. We also plot fidelity evolutions in Fig. \ref{fig:fig5}(b) to show a significant promotion at $gt_{min}=12.4$ in this acceleration process. Here we also present a brief discussion about the actual effectiveness of our acceleration under following experiment parameters. Recent experiments of cavity QED system have achieved $(\kappa,\beta_c,\beta_f)/g=(0.0035,0.0047,0.0002)$ in Fabry--P\'erot cavity \cite{33,34}, and $(\kappa,\beta_c,\beta_f)/g=(0.0021,0.0004,0.0004)$ in circuit QED system\cite{35,36}. In Fig. \ref{fig:fig5}(b), we also illustrate that $\mathcal{F}\geqslant 93\%$ is still satisfied in those experiment systems even if $\Omega_2/g\in [0.4,0.6]$. Therefore, we believe our acceleration is feasible under present available experiment technology.
{Finally, we will analyze the realizability of the functions $u_j(t)$ in experiments. It should be stressed that the experimentally feasible $u_j(t)$ should at least meet the following two requirements. One is that the interaction corresponding to each $u_j(t)$ should be achievable and can be adjusted; Another is that all $u_j(t)$ should be of smooth waveforms, and it is better that $u_j(t)$ are constituted by some common waveforms without high-frequency oscillation (sine function, square pulses and Gaussian function for examples) \cite{6}. In our scheme, the only used time-dependent control field are $u_{1,2,3,4}(t)$ and it already has been discussed that those corresponding interactions should be explicit time--dependent. Therefore, $u_j(t)$ exactly satisfy the first requirement in our model. \begin{figure}
\caption{Waveforms of $u_j(t)$. (a) and (b) are initial designs corresponding to Eq. (\ref{eq:CON function1}); (c) and (d) are square pulses corresponding to Eq. (\ref{eq:CON functionsq}). Here all parameters in this simulation are the same as those in Fig. \ref{fig:fig3}. }
\label{fig:fig6}
\end{figure} Considering the second requirement, we plot time evolutions of $u_j(t)$ in Fig. \ref{fig:fig6} (a) and (b) to show that $u_j(t)$ are smooth enough for the intensity adjustment \cite{6,37,38,39}. We also want to point out that $u_j(t)$ can be of various forms without being limited to an exclusive form since the Lyapunov control just needs to determine the positive or negative value instead of obtaining the concrete accurate value of $\dot{V}$. For example, we can intuitively set $u_{1,2,3,4}(t)$ as \begin{equation} u_{j}= \left\lbrace \begin{array}{ll} K\,\,\,\,\,\,\,\,\,\,\, if\,\,\,\,{\Trr(-i\rho[H_I^2,H_{cj}])}<0\\ \\ -K\,\,\,\,\,\,\, if\,\,\,\,{\Trr(-i\rho[H_I^2,H_{cj}])}>0, \label{eq:CON functionsq} \end{array} \right. \end{equation} and consequently, $u_j(t)$ will be simplified as the square pulses without high frequency oscillation, which can be more easily implemented \cite{3,6,40} by just controlling the on/off of control fields with constant intensities. With these square pulses, the fidelities are still greater than $96.7{\%} $ at the $t_{min}=10.8$, which means not only $H_{cj}$ but also $u_j(t)$ can be selected flexibly. This flexibility ensures that our scheme can always be implemented experimentally.} \section{Discussion and Outlook} In this letter, we have proposed a flexible and realizable acceleration scheme to speed up Zeno dynamic passage that has already been used widely in QIP. Unlike the efforts to eliminate the effects of neglected terms, the basic idea of our acceleration is to drive the system for evolving within an appointed subspace. The acceleration Hamiltonian is discussed with a general form $H_F=\sum u_{j}(t)H_{cj}$ instead of the traditional fixed form $H_{R}=UH_{eff}U^{\dagger}-H$. Thus, our acceleration Hamiltonian can be designed limberly by selecting different $H_{cj}$. Especially, one can always find such $\{H_{cj}\}$ set in which each element is reasonable and can be realized in experiments. Our acceleration scheme has been applied on an entanglement preparation process, and the result shows that the flexible acceleration can shorten the evolution time $gt_{min}$ from $20$ to $8.1$ under the condition $\mathcal{F}\geqslant 95\%$. On the other hand, the requisite time for a high robustness is also reduced to $gt_{min}=12$, which is clearly shorter than the time $gt_{min}=100$ when there does not exist acceleration scheme. In addition, the restrictions of the decay rates are also relaxed by the acceleration process. It can be found that our flexible acceleration will provide a more feasible scheme if the acceleration field $u_{j}(t)$ are taken as Gaussian distributions. The possibility of the idea will be further verified in some subsequent researches.
\end{document} | arXiv |
Recently, I needed to compute this kind of integral: $$ \int ^\infty _c \Phi(ax+b) \phi(x)\, dx$$ where $a$, $b$ and $c$ are all constants, $\Phi(x)$ denotes the CDF of the standard normal distribution and $\phi(x)$ denotes its PDF.
I have looked up similar questions both on math.stackexchange.com and here, but I haven't found any satisfactory answer. If $c$ were negative infinity, it would be relatively easy; but here it is not.
Later I found that if I could evaluate $$ \int ^\infty _c x^2\Phi(ax+b) \phi(x)\, dx,$$ then the former integral could be calculated.
Can someone help me? If no closed-form solution exists, is there a practical approximation to this integral?
integration probability-distributions gaussian
fresh_water
$\begingroup$ If $a=1, b=0,$ then Maple produces $$1/2\,{\frac { \left( -1/4\,\sqrt {\pi }\sqrt {2} {{\rm erf}\left(1/2\,c\sqrt {2}\right)}-1/8\,\sqrt {\pi }\sqrt {2} \left( {{\rm erf}\left(1/2\,c\sqrt {2}\right)} \right) ^{2}+3/8\, \sqrt {\pi }\sqrt {2} \right) \sqrt {2}}{\sqrt {\pi }}}. $$ $\endgroup$ – Mark May 17 '13 at 17:03
$\begingroup$ The simplified result is $$-1/8\, \left( {{\rm erf}\left(1/2\,c\sqrt {2}\right)} \right) ^{2}-1/4 \,{{\rm erf}\left(1/2\,c\sqrt {2}\right)}+3/8. $$ $\endgroup$ – Mark May 17 '13 at 17:09
$\begingroup$ @Mark: That case is easy to compute by hand. It would be the probability that $X \gt c, X \gt Y$ where $X,Y \sim N(0,1)$. If I calculate correctly, that's $P(X \gt c) - P(Y \gt X \gt c) = P(X \gt c) - 1/2 P(X \gt c)^2$. The general case doesn't simplify like that. $\endgroup$ – Douglas Zare May 17 '13 at 18:56
$\begingroup$ This questions asks for the probability that a standard normal distribution is in a wedge-shaped region, or equivalently for the CDF of a two-dimensional Gaussian distribution with a general covariance matrix. Wikipedia says that there isn't an analytic expression for this, but that approximations are known. One related paper I remember is by Marsaglia, jstatsoft.org/v16/i04/paper, on the distribution of the ratio between two normal distributions, but that would be a double wedge, between two lines instead of two rays. $\endgroup$ – Douglas Zare May 17 '13 at 19:16
$\begingroup$ @Mark: I don't understand what you are confused about. I translated the integral into a statement about random variables. Inside the integral is a density for $X=x, Y \lt x$. Integrating from $x=c$ to $x = \infty$ means $X \gt c, Y \lt X$. $\endgroup$ – Douglas Zare May 17 '13 at 20:44
Here's the answer to the first integral:
$\phi\left(\frac{b}{\sqrt{1+a^2}}\right)-\Phi_2\left[c,\frac{b}{\sqrt{1+a^2}},\frac{-a}{\sqrt{1+a^2}}\right]$
where $\Phi_2\left(x,y,\rho\right)$ is the bivariate normal cdf with means zero, variances one, and correlation $\rho$.
I found it by differentiating with respect to $b$, then reintegrating. You can get the second integral the same way.
beiruti
$\begingroup$ Someone posted an answer for $c=-\infty$ at mathoverflow.net/questions/101469/…. I wonder whether the beiruti's answer is consistent with that. IMO, something's wrong... try this chunk of code in R: library(mvtnorm) a<-1 b<-1 c<-1 dnorm(b/sqrt(1+a^2))-pmvnorm(lower=c(-Inf,-Inf),upper=c(c,b/sqrt(1+a^2)),mean=c(0,0),corr=matrix(c(1,-a/sqrt(1+a^2),-a/sqrt(1+a^2),1),2,2)) it returns a value < 0... Maybe the answer is: $\Phi(\frac{b}{\sqrt{1+a^2}})-\Phi_2[c,\frac{b}{\sqrt{1+a^2}},-\frac{a}{\sqrt{1+a^2}}]$ (swap $\phi$ with $\Phi$). $\endgroup$ – al cliver Oct 27 '14 at 14:31
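A quick numerical sanity check of the closed form above is easy to run (reading the first term as $\Phi$ rather than $\phi$, as the last comment suggests). This is only a rough Python/SciPy sketch with arbitrary test values for $a$, $b$, $c$:

```python
import numpy as np
from scipy import integrate, stats

a, b, c = 0.7, -0.3, 0.5   # arbitrary test values

# direct quadrature of  int_c^inf Phi(a x + b) phi(x) dx
direct, _ = integrate.quad(lambda x: stats.norm.cdf(a * x + b) * stats.norm.pdf(x),
                           c, np.inf)

# closed form: Phi(b/s) - Phi_2(c, b/s; rho),  with s = sqrt(1 + a^2),  rho = -a/s
s = np.sqrt(1.0 + a * a)
rho = -a / s
biv = stats.multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, rho], [rho, 1.0]])
closed = stats.norm.cdf(b / s) - biv.cdf([c, b / s])

print(direct, closed)   # the two numbers should agree to within the integration tolerances
```

The reasoning behind the formula is the probabilistic one hinted at in the comments: the integral equals $P(X \ge c,\ Y \le aX + b)$ for independent standard normals $X, Y$, and $(X,\ (Y-aX)/\sqrt{1+a^2})$ is a standard bivariate normal pair with correlation $-a/\sqrt{1+a^2}$.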
| CommonCrawl
\begin{document}
\title[Linear Stability Analysis of Periodic SBC Orbits]{Linear Stability Analysis of Symmetric Periodic Simultaneous Binary Collision Orbits in the Planar Pairwise Symmetric Four-Body Problem} \author[Bakker]{Lennard F. Bakker} \author[Mancuso]{Scott C. Mancuso} \author[Simmons]{Skyler C. Simmons} \address{Department of Mathematics \\ Brigham Young University\\ Provo, UT 84602} \email[Lennard F. Bakker]{[email protected]} \email[Scott Mancuso]{[email protected]} \email[Skyler Simmons]{[email protected]}
\date{}
\keywords{N-Body Problem, Singular Periodic Orbits, Linear Stability} \subjclass[2000]{Primary: 70F10, 70H12, 70H14; Secondary: 70F16, 70H33.}
\begin{abstract} We apply the symmetry reduction method of Roberts to numerically analyze the linear stability of a one-parameter family of symmetric periodic orbits with regularizable simultaneous binary collisions in the planar pairwise symmetric four-body problem with a mass $m\in(0,1]$ as the parameter. This reduces the linear stability analysis to the computation of two eigenvalues of a $3\times 3$ matrix for each $m\in(0,1]$ obtained from numerical integration of the linearized regularized equations along only the first one-eighth of each regularized periodic orbit. The results are that the family of symmetric periodic orbits with regularizable simultaneous binary collisions changes its linear stability type several times as $m$ varies over $(0,1]$, with linear instability for $m$ close or equal to $0.01$, and linear stability for $m$ close or equal to $1$. \end{abstract}
\maketitle
\section{Introduction}
In Hamiltonian systems like the Newtonian $N$-body problem, linear stability of a periodic orbit is necessary but insufficient for its nonlinear stability \cite{MH}. When the periodic orbit is not a relative equilibrium, the characteristic multipliers are typically found by computing its monodromy matrix, i.e., by numerically integrating the linearized equations along the periodic orbit over a full period (in which the periodic orbit and its period are typically computed numerically as well). For a symmetric periodic orbit, Roberts \cite{Ro2} developed a symmetry reduction method by which the nontrivial characteristic multipliers are computed by numerical integration of the linearized equations along the periodic orbit over a fraction of the full period. He applied this symmetry reduction method to show that numerically the Montgomery-Chenciner figure-eight periodic orbit with equal masses \cite{CM} is linearly stable \cite{Ro2}; the numerical integration of the linearized equations along the periodic orbit only needed to go over one-twelfth of the full period.
We apply Roberts' symmetry reduction method to a one-parameter family of symmetric singular periodic orbits in the planar pairwise symmetric four-body problem (PPS4BP) where the parameter is a mass $m\in(0,1]$ and the singularities are regularizable simultaneous binary collisions (SBCs). We recall in Section \ref{PPS4BP} the notation we used in \cite{BOYS} for the PPS4BP. (The PPS4BP is the Caledonian symmetric four-body problem \cite{SSS} without its collinear restrictions on the initial conditions.) To compute the nontrivial characteristic multipliers of these periodic orbits we numerically integrated the linearized regularized equations along each regularized periodic orbit over only one-eighth of its period. This shows that numerically these symmetric singular periodic orbits experience several changes in their linear stability type (linearly stable, spectrally stable, or linearly unstable) as $m$ is varied over $(0,1]$.
This is a marked improvement over our previous numerical investigations of the linear stability of these symmetric singular periodic orbits. We numerically computed \cite{BOYS} the monodromy matrix and its eigenvalues for each regularized symmetric periodic orbit starting at $m=1.00$ and decreasing by $0.01$ until $m=0.01$. This seemed to indicate that the periodic orbits were linearly stable for $m$ in the interval $[0.54,1.00]$ and linearly unstable for $m$ in the interval $[0.01,0.53]$. This agreed with the stability and instability suggested by our long-term numerical integrations of the regularized equations starting at a numerically computed approximation of each periodic orbit's initial conditions. However, our numerical estimates of the monodromy matrices failed to accurately account for the trivial characteristic multiplier $1$ of algebraic multiplicity $4$: instead of getting $1$ as an eigenvalue for each monodromy matrix, we were getting two pairs of eigenvalues, one pair of positive eigenvalues with one larger than $1$ and the other smaller than $1$, and one pair of complex conjugates close to $1$. As $m$ passed below $0.61$, the real eigenvalues began to move away from $1$, so much so, that below $m=0.21$, we had a real positive eigenvalue of the monodromy matrix whose value exceeded the limits of MATLAB. This calls into question the conclusions of our first attempt at determining for what values of $m$ the symmetric singular periodic orbits in the PPS4BP were linearly stable and linearly unstable.
We thus proceed to use Roberts' symmetry reduction method because it factors out, in an analytic manner, two of the trivial characteristic multipliers, leaving the numerical computations to estimate the two pairs of nontrivial characteristic multipliers and one pair of trivial characteristic multipliers. The details of these computations are given in Section \ref{numerics}. Two surprises here are the intervals $[0.21,0.22]$ and $[0.23,0.26]$ where we have linear stability. Our long-term numerical integrations of the regularized equations for these periodic orbits (starting at our numerical approximations of their initial conditions and over $100932$ periods) suggested instability for $m$ in these two intervals. We also refined the numerical computation for $m$ between $0.53$ and $0.54$ by increments of $0.001$ to get a better estimate of that value of $m$ where the linear stability type changes. This showed that we have linear stability for $m=0.539$ and linear instability for $m$ in $[0.531,0.538]$.
Such changes in the linear stability type of mass-parameterized families of symmetric periodic orbits with regularizable collisions have been found in other $N$-body problems. The Schubart orbit in the collinear three-body problem \cite{Sc}, \cite{He}, \cite{Mo}, \cite{Ve}, \cite{Sh} has the inner body alternating between binary collisions with the two outer bodies. These are linearly stable for certain choices of the three masses \cite{HM}. Linearly stable non-Schubart orbits have also been found in the collinear three-body problem for certain choices of the masses \cite{ST1}, \cite{ST2}, \cite{ST3}. The Schubart-like orbit in the collinear symmetric four-body problem \cite{SW1}, \cite{SW2}, \cite{SeT}, \cite{OY2}, \cite{Sh}, alternates between a binary collision of the two inner bodies and a SBC of the two outer pairs of bodies. If the masses in the collinear symmetric four-body problem are, from left to right, $1$, $m$, $m$, and $1$, then linear stability occurs when $0<m<2.83$ and $m>35.4$ with linear instability for $2.83<m<35.4$ by numerical computation of their linear stability indices \cite{SW2} (a method which requires numerical integration of the regularized equations over a full period) and corroborated by Roberts' symmetry reduction method \cite{BOYSR}. The symmetric singular periodic orbit in the fully symmetric planar four-body equal mass problem \cite{OY3}, \cite{Sh} (in which the position of one of the bodies determines the positions of the remaining three bodies) alternates between distinct SBCs and has been shown to be linearly stable, with respect to symmetrically constrained linear perturbations, by Roberts' symmetry reduction method \cite{BOYSR}. The linearly stable symmetric singular periodic SBC orbit with $m=1$ in the PPS4BP is the analytic extension \cite{BOYS} of the linearly stable symmetric singular periodic orbit in the fully symmetric planar four-body equal mass problem \cite{BOYSR}.
\section{The PPS4BP}\label{PPS4BP}
We recall from \cite{BOYS} the relevant notations for the PPS4BP, its regularized Hamiltonian, and properties of the regularized one-parameter family of symmetric SBC period orbits. In the PPS4BP the positions of the four planar bodies are \[ (x_1,x_2),\ (x_3,x_4),\ (-x_1,-x_2),\ (-x_3,-x_4),\] where the corresponding masses are $1$, $m$, $1$, $m$ with $0<m\leq 1$. With $t$ as the time variable and $\dot{}=d/dt$, the momenta for the four bodies are \[ (\omega_1,\omega_2) = 2(\dot x_1,\dot x_2),\ (\omega_3,\omega_4)=2m(\dot x_3,\dot x_4),\ -(\omega_1,\omega_2),\ -(\omega_3,\omega_4).\] The Hamiltonian for the PPS4BP is \begin{align*} H & = \frac{1}{4}\big[ \omega_1^2 + \omega_2^2\big] + \frac{1}{4m}\big[ \omega_3^2 + \omega_4^2\big]\\ & - \frac{1}{2\sqrt{x_1^2 + x_2^2}} - \frac{2m}{\sqrt{(x_3-x_1)^2 + ( x_4-x_2)^2}}\\ & - \frac{2m}{\sqrt{(x_1+x_3)^2+(x_2+x_4)^2 }} - \frac{m^2}{2\sqrt{x_3^2 + x_4^2}}. \end{align*} The angular momentum for the PPS4BP is \[ A = x_1\omega_2-x_2\omega_1 + x_3\omega_4- x_4\omega_3.\] A regularizable simultaneous binary collision occurs when $x_3=x_1\ne0$ and $x_4=x_2\ne 0$ (in the first and third quadrants), and also when $x_3=-x_1\ne 0$ and $x_4=-x_2\ne 0$ (in the second and fourth quadrants). Initial conditions for the symmetric SBC periodic orbits in the PPS4BP when $m=1$ are given in \cite{BOYS}, and when $m=0.539$ they are \begin{align*} & x_1 = 2.11421,\ x_2=0,\ x_3=0 ,\ x_4= 1.01146, \\ & \omega_1 = 0,\ \omega_2= 0.18151,\ \omega_3 = 0.70392,\ \omega_4=0. \end{align*}
\subsection{The Regularized Hamiltonian}
We define new variables $u_1$, $u_2$, $u_3$, $u_4$, $v_1$, $v_2$, $v_3$, and $v_4$ related to the variables $x_1$, $x_2$, $x_3$, $x_4$, $\omega_1$, $\omega_2$, $\omega_3$, and $\omega_4$ by the canonical transformation \begin{align*} x_1 & = (1/2)(u_1^2-u_2^2+u_3^2-u_4^2) \\ x_2 & = u_1u_2+u_3u_4, \\ x_3 & = (1/2)(u_3^2-u_4^2-u_1^2+u_2^2), \\ x_4 & = u_3u_4-u_1u_2,\\ \omega_1 & = \frac{v_1u_1-v_2u_2}{2(u_1^2+u_2^2)} + \frac{v_3u_3-v_4u_4}{2(u_3^2+u_4^2)}, \\ \omega_2 & = \frac{v_1u_2+v_2u_1}{2(u_1^2+u_2^2)} + \frac{v_3u_4+v_4u_3}{2(u_3^2+u_4^2)}, \\ \omega_3 & = \frac{-v_1u_1+v_2u_2}{2(u_1^2+u_2^2)} + \frac{v_3u_3-v_4u_4}{2(u_3^2+u_4^2)}, \\ \omega_4 & = \frac{-v_1u_2-v_2u_1}{2(u_1^2+u_2^2)} + \frac{v_3u_4+v_4u_3}{2(u_3^2+u_4^2)}. \end{align*} In extended phase space, the variables are $u_1$, $u_2$, $u_3$, $u_4$, $\hat E$, $v_1$, $v_2$, $v_3$, $v_4$, and $t$, where $\hat E$ is the energy. If we set \begin{align*} M_1 & = v_1u_1-v_2u_2, & M_2 & = v_1u_2+v_2u_1, \\ M_3 & = v_3u_3-v_4u_4, & M_4 & = v_3u_4 + v_4u_3, \\ M_5 & = u_1^2 -u_2^2 + u_3^2 - u_4^2, & M_6 & = 2u_1u_2+2u_3u_4, \\ M_7 & = u_1^2-u_2^2-u_3^2 + u_4^2, & M_8 & = 2u_1u_2-2u_3u_4, \end{align*} then the regularized Hamiltonian for the PPS4BP in extended phase space is \begin{align*} \hat\Gamma = \frac{dt}{ds}\big(H-\hat E) = & \frac{1}{16}\bigg(1+\frac{1}{m}\bigg)\bigg( (v_1^2+v_2^2)(u_3^2+u_4^2) + (v_3^2+v_4^2)(u_1^2+u_2^2) \bigg) \\ & + \frac{1}{8}\bigg(1-\frac{1}{m}\bigg) \big(M_3M_1 + M_4M_2\big) \\ & - \frac{(u_1^2+u_2^2)(u_3^2+u_4^2)}{\sqrt{M_5^2 + M_6^2}} - 2m\big(u_1^2+u_2^2+u_3^2+u_4^2) \\ & - \frac{m^2(u_1^2+u_2^2)(u_3^2+u_4^2)}{\sqrt{M_7^2 + M_8^2}} - \hat E(u_1^2+u_2^2)(u_3^2+u_4^2), \end{align*} where \[ \frac{dt}{ds} = (u_1^2+u_2^2)(u_3^2+u_4^2)\] is the regularizing change of time for this Levi-Civita regularization. The angular momentum in the new variables is \[ A = \frac{1}{2}\big[ -v_1u_2 + v_2u_1 - v_3u_4 + v_4u_3\big].\] Let ${}^\prime=d/ds$, \[ J = \begin{bmatrix} 0 & I \\ -I & 0\end{bmatrix}\] for $I$ the $4\times 4$ identity matrix, and $\nabla$ be the gradient with respect to the variables \[ z=(u_1,u_2,u_3,u_4,v_1,v_2,v_3,v_4).\] The regularized Hamiltonian system of equations with Hamiltonian $\hat\Gamma$ is \begin{equation}\label{regularized} z^\prime = J\nabla\hat\Gamma(z).\end{equation} The energy $\hat E$ is conserved because \[ \hat E^\prime = \frac{\partial\hat\Gamma}{\partial t} = 0.\]
\subsection{The Symmetric Periodic SBC Orbits in the Regularized PPS4BP}
For $m=1$, we have analytically proven \cite{BOYS} the existence and symmetries of a symmetric periodic SBC orbit $\gamma(s;1)$, with period $T=2\pi$, $\hat E\approx-2.818584789$, and $A=0$ for the regularized PPS4BP on the level set $\hat\Gamma=0$. The initial conditions of $\gamma(s;1)$ at $s=0$ satisfy \begin{align*} & u_3(0;1)=u_1(0;1),\ u_4(0;1)=-u_2(0;1),\\ & v_3(0;1)=-v_1(0;1),\ v_4(0;1)=v_2(0;1). \end{align*} The symmetries of $\gamma(s;1)$ are $S_F\gamma(s;1) = \gamma(s+\pi/2;1)$ and $S_G\gamma(s;1) = \gamma(\pi/2-s;1)$ where \[ S_F = \begin{bmatrix} 0 & F & 0 & 0 \\ -F & 0 & 0 & 0 \\ 0 & 0 & 0 & F \\ 0 & 0 & -F & 0\end{bmatrix}, \ \ S_G = \begin{bmatrix} -G & 0 & 0 & 0 \\ 0 & G & 0 & 0 \\ 0 & 0 & G & 0 \\ 0 & 0 & 0 & -G\end{bmatrix},\] for \[ F = \begin{bmatrix} -1 & 0 \\ 0 & 1\end{bmatrix}, \ \ G= \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}.\] Using a scaling of periodic orbits in the PPS4BP, we \cite{BOYS} numerically continued the symmetric SBC periodic orbit $\gamma(s;1)$ to symmetric periodic SBC orbits $\gamma(s;m)$ with $A=0$ for $0<m<1$ at $0.01$ decrements with fixed period $T=2\pi$ and varying energies $\hat E(m)$ using trigonometric polynomial approximations that ensured the symmetries $S_F\gamma(s;m) = \gamma(s+\pi/2;m)$ and $S_G\gamma(s;m) = \gamma(\pi/2-s;m)$. For all $0<m\leq 1$, the components of $\gamma(0;m)$ satisfy \begin{align}\label{initialconditions1} & u_3(0;m)=u_1(0;m),\ u_4(0;m)=-u_2(0;m), \\ \label{initialconditions2} & v_3(0;m)=-v_1(0;m),\ v_4(0;m)=v_2(0;m). \end{align} For all $0<m\leq 1$, regularized SBCs occur at $s=\pi/4,3\pi/4,5\pi/4,7\pi/4$, where at the first and third times we have $v_3^2+v_4^2=0$ while at the second and fourth times we have $v_1^2+v_2^2=0$. The regularized symmetric periodic orbit $\gamma(s;m)$, in going from $s=0$ to $s=2\pi$, corresponds in the original Hamiltonian system in the physical plane to two full periods of oscillation of a symmetric singular periodic orbit, whose only singularities are regularizable SBCs.
Each regularized symmetric periodic orbit $\gamma(s;m)$ has the trivial characteristic multiplier $1$ of algebraic multiplicity at least $4$. This is because the regularized Hamiltonian $\hat\Gamma$ and the angular momentum $A$ are first integrals for the regularized Hamiltonian system (\ref{regularized}), and because of the time translation along the periodic orbit and ${\rm SO}(2)$ rotations of the periodic orbits (see \cite{MH}).
\section{Linear Stability of Periodic SBC Orbits}
We apply Roberts' symmetry reduction method \cite{Ro2} to the one-parameter family of periodic orbits $\gamma(s;m)$, $0<m\leq 1$, of fixed period $2\pi$, in the regularized Hamiltonian system (\ref{regularized}). Let $\nabla^2\hat\Gamma$ denote the symmetric matrix of second-order partials of $\hat\Gamma$ with respect to the components of $z$. It is easily shown that if $Y(s)$ is the fundamental matrix solution of the linearized equations along $\gamma(s;m)$, \[ \xi^\prime = J\nabla^2\hat\Gamma(\gamma(s))\xi,\ \ \xi(0)=Y_0,\] for an invertible $Y_0$, then the eigenvalues of $Y_0^{-1}Y(2\pi)$ are indeed the characteristic multipliers of $\gamma(s;m)$.
\subsection{Stability Reductions using Symmetries} We use the symmetries of $\gamma(s;m)$ to show that $Y_0^{-1}Y(2\pi)$ can be factored in part by terms of the form $Y(\pi/4)$, that is, one-eighth of the period of $\gamma(s;m)$. Thus the symmetries of $\gamma(s;m)$ will reduce the analysis of its linear stability type to the numerical computation of $Y(\pi/4)$.
\begin{lemma}\label{powerW} For each $0<m\leq 1$, there exists a matrix $W$ such that $Y_0^{-1}Y(2\pi) = W^4$ where $W=\Lambda D$ for involutions $\Lambda$ and $D$ with $\Lambda=Y_0^{-1}S_F^TS_GY_0$ and $D= B^{-1}S_G B$ for $B=Y(\pi/4)$. \end{lemma}
\begin{proof} Each $\gamma(s;m)$ satisfies $S_F\gamma(s;m) = \gamma(s+\pi/2;m)$. Then (by \cite{Ro2}, see also \cite{BOYSR}), we have that \[ Y(k\pi/2) = S_F^k Y_0(Y_0^{-1}S_F^T Y(\pi/2))^k\] holds for all $k\in{\mathbb N}$. Since $S_F^4=I$, taking $k=4$ gives \begin{equation}\label{timeT} Y(2\pi) = Y_0(Y_0^{-1}S_F^T Y(\pi/2))^4.\end{equation} Furthermore, each $\gamma(s;m)$ satisfies $S_G\gamma(s;m) = \gamma(\pi/2-s;m)$. Then (by \cite{Ro2}, see also \cite{BOYSR}), for \[ B = Y(\pi/4)\] we have that \begin{equation}\label{timeToverfour} Y(\pi/2) = S_G Y_0 B^{-1} S_G^T B = S_GY_0B^{-1}S_GB,\end{equation} where we have used $S_G^T=S_G$. Combining equations (\ref{timeT}) and (\ref{timeToverfour}) gives the factorization \[ Y(2\pi) = Y_0(Y_0^{-1}S_F^TS_G Y_0 B^{-1}S_GB)^4.\] By setting \[ Q=S_F^TS_G {\rm\ and\ } W=Y_0^{-1}QY_0 B^{-1}S_G B,\]
we obtain \[ Y_0^{-1}Y(2\pi) = (Y_0^{-1}QY_0 B^{-1}S_G B)^4 = W^4,\] where \[ \Lambda=Y_0^{-1}QY_0 {\rm \ and\ }D=B^{-1}S_G B\] are both involutions, i.e., $\Lambda^2=D^2=I$. \end{proof}
\subsection{A Choice of $Y_0$}
The matrix $Q=S_F^TS_G$ that appears in $\Lambda$ is orthogonal since $S_F$ and $S_G$ are both orthogonal. Furthermore, $Q$ is symmetric and its eigenvalues are $\pm 1$, each of multiplicity $4$. An orthogonal basis for the eigenspace ${\rm ker}(Q-I)$ is \[ \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1\end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 0\end{bmatrix}, \begin{bmatrix} 0 \\ -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0\end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0\end{bmatrix},\] and an orthogonal basis for the eigenspace ${\rm ker}(Q+I)$ is \[ \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0\end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0\end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 1\end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0\end{bmatrix}.\] We look for an appropriate choice of $Y_0$ such that \begin{equation}\label{diagonalization} \Lambda = Y_0^{-1}QY_0 = \begin{bmatrix} I & 0 \\ 0 & -I\end{bmatrix}.\end{equation}
\begin{lemma}\label{Y_0} There exists an orthogonal and symplectic $Y_0$ such that Equation $($\ref{diagonalization}$)$ holds. \end{lemma}
\begin{proof} Since the components of $\gamma(s;m)$ satisfy the Equations (\ref{initialconditions1}) and (\ref{initialconditions2}), then using the Hamiltonian system (\ref{regularized}) on the level set $\hat\Gamma=0$, the components of $\gamma^{\,\prime}(0;m)$ satisfy \[ u_3^\prime(0;m)= - u_1^\prime(0;m),\ u_4^\prime(0;m)=u_2^\prime(0;m),\ v_3^\prime(0;m)=v_1^\prime(0;m),\ v_4^\prime(0;m)=-v_2^\prime(0;m).\] It is easily recognized that the vector $\gamma^{\,\prime}(0;m)$ belongs to ${\rm ker}(Q+I)$. Now set \[ a = u_1^\prime(0;m),\ b= u_2^\prime(0;m),\ c=v_1^\prime(0;m),\ d=v_2^\prime(0;m),\ e = \Vert \gamma^{\,\prime}(0;m)\Vert\] and define $Y_0$ by \begin{equation}\label{choiceY0} Y_0 = \frac{1}{e}\begin{bmatrix} c & d & a & b & a & -b & -c & d \\ d & -c & b & -a & b & a & -d & -c \\ c & d & a & b & -a & b & c & -d \\ -d & c & -b & a & b & a & -d & -c \\ -a & b & c & -d & c & d & a & b \\ -b & -a & d & c & d & -c & b & -a \\ a & -b & -c & d & c & d & a & b \\ -b & -a & d & c & -d & c & -b & a\end{bmatrix}.\end{equation} Let ${\rm col}_i(Y_0)$ denote the $i^{\rm th}$ column of $Y_0$. Notice that ${\rm col}_5(Y_0)=\gamma^{\,\prime}(0;m)/\Vert \gamma^{\,\prime}(0;m)\Vert$. The last four columns of $Y_0$ form an orthonormal basis for ${\rm ker}(Q+I)$, while the first four columns of $Y_0$ form an orthonormal basis for ${\rm ker}(Q-I)$. Since $Q$ is symmetric, its two eigenspaces are orthogonal, and so $Y_0$ is orthogonal. Note that $J{\rm col}_{4+i}(Y_0)={\rm col}_i(Y_0)$ for $i=1,2,3,4$; in other words, multiplication by $J$ maps ${\rm ker}(Q-I)$ bijectively to ${\rm ker}(Q+I)$. For $P_1$ the lower right $4\times 4$ submatrix of $Y_0$ and $P_2$ the upper right $4\times 4$ submatrix of $Y_0$, we have \[ Y_0 = \left( J\begin{bmatrix} P_2 \\ P_1\end{bmatrix}, \begin{bmatrix} P_2 \\ P_1\end{bmatrix} \right)= \begin{bmatrix} P_1 & P_2 \\ -P_2 & P_1\end{bmatrix},\] where $P_1^TP_1 + P_2^TP_2 = I$ and $P_1^TP_2=0$. These implies that $Y_0$ is symplectic. \end{proof}
\subsection{The Existence of $K$}
By Lemma \ref{powerW} we have $Y_0^{-1}Y(2\pi)=W^4$ where $W=\Lambda D$ with $\Lambda=Y_0^{-1}QY_0$ and $D= B^{-1}S_GB$ for $B=Y(\pi/4)$. By Lemma \ref{Y_0}, there exists an orthogonal and symplectic $Y_0$ such that Equation (\ref{diagonalization}) holds. Choose $Y_0$ as given in Equation (\ref{choiceY0}). The matrix $W=\Lambda D$ is then symplectic, i.e., $W^TJW=J$, because $\Lambda$ is symplectic with multiplier $-1$, $\Lambda^TJ\Lambda=-J$, and $S_G$ is symplectic with multiplier $-1$, $S_G^TJS_G=-J$, and $B$ is symplectic.
\begin{lemma}\label{existenceK} With the given choice of $Y_0$, there exists a matrix $K$ uniquely determined by $B=Y(\pi/4)$ such that \[ \frac{1}{2}\big(W+W^{-1}\big) = \begin{bmatrix} K^T & 0 \\ 0 & K\end{bmatrix}.\] \end{lemma}
\begin{proof} Since $W=\Lambda D$ where $\Lambda$ and $D$ are involutions, it follows that \[ W^{-1}=D\Lambda.\] By the choice of $Y_0$, the form of the matrix $\Lambda$ is given in Equation (\ref{diagonalization}). If we partition the symplectic matrix $B$ into the four $4\times 4$ submatrices, \begin{equation}\label{blockformB} B = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4\end{bmatrix},\end{equation} then the form of the inverse of $B$ is \[ B^{-1} = \begin{bmatrix} A_4^T & -A_2^T \\ -A_3^T & A_1^T\end{bmatrix}.\] Set \[ H = \begin{bmatrix} -G & 0 \\ 0 & G\end{bmatrix}.\] Then we have that \[ D = B^{-1}S_G B = \begin{bmatrix} K^T & L_1 \\ -L_2 & -K\end{bmatrix}\] where $K = A_3^THA_2 + A_1^THA_4$, $L_1 = A_4^THA_2+A_2^THA_4$, and $L_2 = A_3^THA_1+A_1^THA_3$. It follows that $K$ is uniquely determined by $B$, that \begin{equation}\label{formW} W = \Lambda D = \begin{bmatrix} I & 0 \\ 0 & -I\end{bmatrix} \begin{bmatrix} K^T & L_1 \\ -L_2 & -K\end{bmatrix} = \begin{bmatrix} K^T & L_1 \\ L_2 & K\end{bmatrix},\end{equation} and that \[ W^{-1} = D\Lambda = \begin{bmatrix} K^T & L_1 \\ -L_2 & -K\end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & -I\end{bmatrix} = \begin{bmatrix} K^T & -L_1 \\ -L_2 & K\end{bmatrix}.\] Thus \begin{equation} \label{WK} \frac{1}{2}\big(W+W^{-1}\big) = \begin{bmatrix} K^T & 0 \\ 0 & K\end{bmatrix} \end{equation} for a $K$ uniquely determined by $B=Y(\pi/4)$ as was desired. \end{proof}
It has been shown \cite{Ro2} that the symplectic matrix $W$ is spectrally stable, i.e., all of its eigenvalues have modulus $1$, if and only if all of the eigenvalues of $K$ are real and have absolute value smaller than or equal to $1$. The particular relationship between the eigenvalues of $W$ and $K$ given tacitly in Lemma \ref{existenceK} is as follows. The map $f:{\mathbb C}\to{\mathbb C}$ given by $f(\lambda) = (1/2)(\lambda+1/\lambda)$ takes an eigenvalue of $W$ to an eigenvalue of $(1/2)(W+W^{-1})$. Note that the map $f$ satisfies $f(\lambda)=f(1/\lambda)$. For an eigenvalue $\lambda$ of $W$, the eigenvalue $f(\lambda)$ of $(1/2)(W+W^{-1})$ is an eigenvalue of $K$. If $\lambda$ is an eigenvalue of the symplectic matrix $W$, then $1/\lambda$, $\bar\lambda$, and $1/\bar\lambda$ are also eigenvalues of $W$. When $\lambda$ has modulus one, then $\lambda=1/\bar\lambda$ and $1/\lambda=\bar\lambda$, and so $f(\lambda)=f(\bar\lambda)$ which is a real number with absolute value smaller than or equal to $1$. Thus a complex conjugate pair of eigenvalues of $W$ of modulus one corresponds to a real eigenvalue of $K$ with absolute value smaller than or equal to $1$. When $\lambda$ is real, it is nonzero because $W$ is symplectic, and $f(\lambda)=f(1/\lambda)$ which is a real number with absolute value greater than $1$. Thus a reciprocal pair $\lambda$ and $1/\lambda$ of real nonzero eigenvalues of $W$ corresponds to a real eigenvalue of $K$ with absolute value greater than $1$. When $\lambda$ is not real and has a modulus other than $1$, then $f(\lambda)=f(1/\lambda)$ and $f(\bar\lambda)=f(1/\bar\lambda)$, with $f(\lambda)$ and $f(\bar\lambda)$ as complex conjugate eigenvalues of $K$ with nonzero imaginary part. Thus, the four eigenvalues $\lambda$, $1/\lambda$, $\bar\lambda$, and $1/\bar\lambda$ of $W$ correspond to a complex conjugate pair of eigenvalues of $K$ with nonzero imaginary part.
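As a concrete illustration of this correspondence (using values reported in Section \ref{numerics} below for $m=1$): the eigenvalue $\lambda_1 = 0.6941364299$ of $K$ corresponds to the complex conjugate pair $0.6941364299\pm i\sqrt{1-0.6941364299^2}\approx 0.6941\pm 0.7198\,i$ of eigenvalues of $W$ on the unit circle, and their fourth powers, approximately $-0.9974\pm 0.0727\,i$, are among the characteristic multipliers of $\gamma(s;1)$ listed in Section \ref{numerics}.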
\subsection{The Form of $K$} We will show that one of the eigenvalues of $K$ is $1$, and the remaining three eigenvalues of $K$ are determined by the lower right $3\times 3$ submatrix of $K$. Let $c_i$ denote the $i^{\rm th}$ column of $B=Y(\pi/4)$.
\begin{lemma}\label{formK} With the given choice of $Y_0$, the matrix $K$ uniquely determined by $B=Y(\pi/4)$ is \[ \begin{bmatrix} 1 & * & * & * \\ 0 & c_2^T S_GJc_6 & c_2^TS_GJ c_7 & c_2^TS_GJ c_8
\\ 0 & c_3^T S_GJ c_6 & c_3^T S_G Jc_7 & c_3^T S_G J c_8
\\ 0 & c_4^T S_G J c_6 & c_4^T S_G J c_7 & c_4^T S_G J c_8\end{bmatrix}.\] \end{lemma}
\begin{proof} We begin by showing that $1$ is an eigenvalue of $W$ by identifying a corresponding eigenvector. Since $Y(\pi/2)=S_GY_0B^{-1}S_G B$ (Equation \ref{timeToverfour}) and $Q=S_F^T S_G$, it follows that \begin{align*} W & = Y_0^{-1}QY_0 B^{-1} S_GB \\ &= Y_0^{-1}S_F^T S_G Y_0 B^{-1}S_G B \\ &= Y_0^{-1}S_F^T Y(\pi/2). \end{align*} Set \[ v = Y_0^{-1}\gamma^{\,\prime}(0;m).\] The orthogonality of $Y_0$ and ${\rm col}_5(Y_0) = \gamma^{\,\prime}(0;m)/\Vert \gamma^{\,\prime}(0;m)\Vert$ imply that \[ v = Y_0^T \gamma^{\,\prime}(0;m) = \Vert \gamma^{\,\prime}(0;m)\Vert e_5,\] where $e_5=[0,0,0,0,1,0,0,0]^T$. Since $Y(s)$ is a fundamental matrix, then $\gamma^{\,\prime}(s;m) = Y(s)Y_0^{-1}\gamma^{\,\prime}(0;m)$. Hence, \begin{align*} Wv & = Y_0^{-1}S_F^TY(\pi/2)v \\ & = Y_0^{-1} S_F^T Y(\pi/2) Y_0^{-1}\gamma^{\,\prime}(0;m) \\ & = Y_0^{-1} S_F^T \gamma^{\,\prime}(\pi/2;m). \end{align*} Since $S_F\gamma(s;m) = \gamma(s+\pi/2;m)$ and $S_F^{-1}=S_F^T$, we have that \[ \gamma^{\,\prime}(s;m) = S_F^{-1}\gamma^{\,\prime}(s+\pi/2;m) = S_F^T\gamma^{\,\prime}(s+\pi/2;m).\] Setting $s=0$ in this gives \[ \gamma^{\,\prime}(0;m) = S_F^T\gamma^{\,\prime}(\pi/2;m).\] From this it follows that \begin{align*} Wv & = Y_0^{-1}S_F^T \gamma^{\,\prime}(\pi/2;m) \\ & = Y_0^{-1}\gamma^{\,\prime}(0;m) \\ & = v. \end{align*} Thus $1$ is an eigenvalue of $W$ and $v=\Vert \gamma^{\,\prime}(0;m)\Vert e_5$ is a corresponding eigenvector.
Next, we show that the first column of $K$ is $[1,0,0,0]^T$. Since $Wv=v$, then $We_5=e_5$. From the form of $W$ given in Equation (\ref{formW}), it follows that \[ e_5 = W e_5 = \begin{bmatrix} L_1 [1,0,0,0]^T \\ K [1,0,0,0]^T\end{bmatrix}.\] This implies that \[ K \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0\end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}.\] from which it follows that the first column of $K$ is $[1,0,0,0]^T$.
Finally we show that the lower right $3\times 3$ submatrix of $K$ has the prescribed entries. Since $Y_0$ is symplectic, the matrix $B=Y(\pi/4)$ is symplectic. Hence $B$ satisfies $J=B^TJB$, and so \[ B^{-1} = -J B^T J.\] For $W=\Lambda D$ with $D=B^{-1}S_GB$ where $S_G$ satisfies $S_GJ = -JS_G$ we then obtain \begin{align*} W & = \Lambda B^{-1}S_G B \\ & = \Lambda (-JB^T J)S_G B \\ & = -\Lambda J B^T JS_G B \\ & = -\Lambda JB^T(-S_G J)B \\ & = \Lambda J B^TS_G JB. \end{align*} Writing $B$ in the block partition form given in Equation (\ref{blockformB}), it follows that \begin{equation}\label{entriesW} \Lambda J B^T = \begin{bmatrix} 0 & I \\ I & 0\end{bmatrix} B^T = \begin{bmatrix} 0 & I \\ I & 0\end{bmatrix} \begin{bmatrix} A_1^T & A_3^T
\\ A_2^T & A_4^T\end{bmatrix} = \begin{bmatrix} A_2^T & A_4^T
\\ A_1^T & A_3^T\end{bmatrix}.\end{equation} Let ${\rm col}_i(S_GJB)$ denote the $i^{\rm th}$ column of $S_GJB$. Then ${\rm col}_i(S_GJB) = S_GJc_i$ where $c_i$ is the $i^{\rm th}$ column of $B=Y(\pi/4)$. This and Equation (\ref{entriesW}) imply that, for $i=1,2,3,4$, the $(4+i,j)$ entry of $W$ is $c_i^T S_G Jc_j$. But Equation (\ref{formW}) implies that the $(6,6)$ entry of $W$ is the $(2,2)$ entry of $K$. Continuing in this manner we find the remaining entries of the lower right $3\times 3$ submatrix of $K$ to be given as prescribed. \end{proof}
\subsection{A Stability Theorem}
The characteristic multipliers of $\gamma(s;m)$ are the eigenvalues of $W^4$, which are the fourth powers of the eigenvalues of $W$. As was shown in the proof of Lemma \ref{formK}, an eigenvalue of $K$ is $1$. Because of Equation (\ref{WK}), an eigenvalue of $W$ is $1$ with algebraic multiplicity at least $2$. This accounts for two of the four known eigenvalues of $1$ for $W^4$. Our numerical calculations show that $-1$ is an eigenvalue of $K$, and hence of $W$, for all $0<m\leq 1$. This accounts for the remaining two known eigenvalues of $1$ for $W^4$.
When $W$ is spectrally stable, the eigenvalues of $K$ are the real parts of the eigenvalues of $W$. If $0$ is an eigenvalue of $K$, then $\pm i$ are eigenvalues of $W$ and so the algebraic multiplicity of $1$ as an eigenvalue of $W^4$ is at least $6$. If $1/\sqrt 2$ is an eigenvalue of $K$, then $1/\sqrt 2 \pm i/\sqrt 2$ are eigenvalues of $W$, and if $-1/\sqrt 2$ is an eigenvalue of $K$, then $-1/\sqrt 2\pm i/\sqrt 2$ are eigenvalues of $W$; both of these imply that $-1$ is a repeated eigenvalue of $W^4$. So when the remaining two eigenvalues $\lambda_1$ and $\lambda_2$ of $K$ are real, distinct, have absolute value strictly smaller than one, and none of them are equal to $0$ or $\pm 1/\sqrt 2$, then the symmetric periodic SBC orbit is linearly stable, i.e., $W$, and hence $W^4$, is spectrally stable as well as semisimple when restricted to the four-dimensional $W$-invariant subspace of ${\mathbb R}^8$ determined by the two distinct modulus one complex conjugate pairs of eigenvalues of $W$. On the other hand, if one of $\lambda_1$ or $\lambda_2$ is real with absolute value bigger than $1$, or is complex with a nonzero imaginary part, then the symmetric periodic SBC orbit is not spectrally stable, but is linearly unstable. The proof of the following result about the linear stability type for the symmetric periodic SBC orbits in the PPS4BP follows from all of the Lemmas and subsequent comments presented in this Section.
\begin{theorem}\label{stabilitytheorem} The symmetric periodic SBC orbit $\gamma(s;m)$ of period $T=2\pi$ and energy $\hat E(m)$ is spectrally stable in the PPS4BP if and only if $\lambda_1$ and $\lambda_2$ are real and have absolute value smaller or equal to $1$. If $\lambda_1$ and $\lambda_2$ are real, distinct, have absolute value strictly smaller than $1$, and none of them are equal to $0$ or $\pm 1/\sqrt 2$, then $\gamma(s;m)$ is linearly stable in the PPS4BP. \end{theorem}
\section{Numerical Results}\label{numerics}
We computed $Y(\pi/4)$ using our trigonometric polynomial approximations of $\gamma(s;m)$ for each $m$ starting at $m=1$ and decreasing by $0.01$ until we reached $m=0.01$, and the Runge-Kutta order 4 algorithm coded in MATLAB, with a fixed time step of \[ \frac{\pi/4}{50000}=\frac{\pi}{200000}.\] From the needed columns of $Y(\pi/4)$, we computed the entries of the lower right $3\times 3$ submatrix of $K$ as given in Lemma \ref{formK}, and then computed the eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ of this $3\times 3$ matrix. We have plotted these three eigenvalues, when real, as functions of $m$ in Figure \ref{FigureR1}. One of these eigenvalues is real and stays close to $-1$ for all $m\in(0,1]$ except at $m=0.20$; label this eigenvalue $\lambda_3$.
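For concreteness, the post-processing step of forming the lower right $3\times 3$ submatrix of $K$ from the columns of $B=Y(\pi/4)$ as in Lemma \ref{formK} and extracting its eigenvalues can be sketched as follows. This is only a schematic Python/NumPy transcription of the computation (our actual code is in MATLAB), and the file name for the precomputed matrix $B$ is hypothetical: \begin{verbatim}
import numpy as np

# S_G = diag(-G, G, G, -G) with G the 2x2 identity, and J the standard symplectic matrix
SG = np.diag([-1., -1., 1., 1., 1., 1., -1., -1.])
J = np.block([[np.zeros((4, 4)), np.eye(4)],
              [-np.eye(4), np.zeros((4, 4))]])

# B = Y(pi/4): the 8x8 fundamental matrix solution from the RK4 integration
B = np.load("Y_quarter_period.npy")   # hypothetical file produced by the integrator

# entries c_i^T S_G J c_j, i = 2,3,4 and j = 6,7,8 (1-based), per the lemma above
K33 = B[:, 1:4].T @ SG @ J @ B[:, 5:8]
lam = np.linalg.eigvals(K33)          # lambda_1, lambda_2, lambda_3
\end{verbatim} The two eigenvalues other than the one staying near $-1$ are then tested against the criteria of Theorem \ref{stabilitytheorem}.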
\begin{figure}
\caption{The eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_3$, when real, of the $3\times 3$ lower right submatrix of $K$ over $0<m\leq 1$.}
\label{FigureR1}
\end{figure}
The remaining two eigenvalues $\lambda_1$ and $\lambda_2$ of $K$ that determine the linear stability type of $\gamma(s;m)$ are, at $m=0.01$, respectively near $1$ and outside the range shown in Figure \ref{FigureR1}. The values of $\lambda_1$ and $\lambda_2$ at $m=0.01$ are $0.9743145796$ and $-50.70044516$ respectively. As $m$ increases from $0.01$, the value of $\lambda_1$ decreases, crossing $1/\sqrt 2$ for some $m$ in $(0.09,0.10)$, and crossing $0$ for some $m$ in $(0.26,0.27)$, while $\lambda_2$ increases to the value $-1.146019443$ at $m=0.19$, momentarily disappearing at $m=0.20$, reappearing at $m=0.21$ with a value of $-0.8641436215$, continuing to increase, crossing $-1/\sqrt 2$ for a value of $m$ in $(0.22,0.23)$, until at some value of $m$ in $(0.26,0.27)$, we have $\lambda_1=\lambda_2<0$. For $m$ in $[0.27,0.53]$, the eigenvalues $\lambda_1$ and $\lambda_2$ form a complex conjugate pair with nonzero imaginary part, and thus disappear in Figure \ref{FigureR1}. For some value of $m$ in $(0.53,0.54)$, we have $\lambda_1$ and $\lambda_2$ reappearing in Figure \ref{FigureR1}, with $\lambda_1=\lambda_2>0$. As $m$ increases from there, $\lambda_1$ increases and $\lambda_2$ decreases, with $\lambda_2$ crossing $0$ for a value of $m$ in $(0.54,0.55)$, and with the values of $\lambda_1$ and $\lambda_2$ at $m=1$ being, respectively, \begin{equation}\label{firsttwo} 0.6941364299, -0.6802222699,\end{equation} where the first of these is slightly smaller than $1/\sqrt 2$, and the latter is slightly larger than $-1/\sqrt2$. These changes in the values of $\lambda_1$ and $\lambda_2$ account for the changes in the linear stability type of $\gamma(s;m)$ as $m$ varies over $(0,1]$.
From the numerical results and Theorem \ref{stabilitytheorem}, we conclude that the periodic orbit $\gamma(s;m)$ is linearly stable when $m$ is in $[0.21,0.22]$, or $m$ is in $[0.23,0.25]$, $m=0.54$, or $m$ is in $[0.55,1]$. We have linear instability when $m$ is in $[0.01,0.19]$ or in $[0.27,0.53]$. We have at least spectral stability when $m=0.20$ where $\lambda_3$ disappears momentarily along with $\lambda_2$ to form the complex conjugate pair with nonzero imaginary part, \[ -0.9972588720\pm 0.008650400165i.\] This appears numerically to be a repeated eigenvalue of $-1$ for $K$. We also have at least spectral stability for a value of $m$ in $(0.22,0.23)$, and for a value of $m$ in $(0.54,0.55)$.
We have confirmed that numerically the equal mass symmetric periodic SBC orbit $\gamma(s;1)$ is linearly stable in the PPS4BP. From the eigenvalues of $K$, which are $1$, $-1$, and those listed in (\ref{firsttwo}), the characteristic multipliers of $\gamma(s;1)$ are $1$ with algebraic multiplicity $4$, and the two distinct complex conjugate pairs of modulus one, \begin{align*} -0.9888710746 & \pm 0.1487749902i, \\ -0.9973574665 & \pm 0.07265042297i. \end{align*} These agree numerically \cite{BOYS} with the eigenvalues of the monodromy matrix for $\gamma(s;1)$.
To get a better estimate of the value of $m$ between $0.54$ and $0.55$ at which the orbit $\gamma(s;m)$ loses spectral stability as $m$ decreases, we numerically computed $Y(\pi/4)$ for the values of $m=0.531,0.532,\dots,0.538,0.539$, and then computed the values of $\lambda_1$ and $\lambda_2$. These show for $m=0.531$ through $m=0.538$ that $\gamma(s;m)$ is linearly unstable because $\lambda_1$ and $\lambda_2$ form a complex conjugate pair with nonzero imaginary part. For $m=0.539$, we have that $\gamma(s;m)$ is linearly stable because \[ \lambda_1=0.1425261155,\ \lambda_2= 0.08595095311,\] which are real, distinct, have absolute value smaller than $1$, and none are equal to $0$ or $\pm1/\sqrt 2$. These eigenvalues of $K$ imply that the characteristic multipliers of $\gamma(s;0.539)$ are $1$ with algebraic multiplicity $4$, and the two complex conjugate pairs \begin{align*} 0.8407916212 & \pm 0.5413588917i, \\ 0.9413360780 & \pm 0.3374705738i, \end{align*} with each one of these having modulus $1$. Thus the value of $m$ in $[0.53,0.54]$ at which $\gamma(s;m)$ is at least spectrally stable, lies in the interval $(0.538,0.539)$.
\end{document} | arXiv |
Tucker decomposition
In mathematics, Tucker decomposition decomposes a tensor into a set of matrices and one small core tensor. It is named after Ledyard R. Tucker,[1] although it goes back to Hitchcock in 1927.[2] Initially described as a three-mode extension of factor analysis and principal component analysis, it may actually be generalized to higher-mode analysis, which is also called higher-order singular value decomposition (HOSVD).
It may be regarded as a more flexible PARAFAC (parallel factor analysis) model. In PARAFAC the core tensor is restricted to be "diagonal".
In practice, Tucker decomposition is used as a modelling tool. For instance, it is used to model three-way (or higher way) data by means of relatively small numbers of components for each of the three or more modes, and the components are linked to each other by a three- (or higher-) way core array. The model parameters are estimated in such a way that, given fixed numbers of components, the modelled data optimally resemble the actual data in the least squares sense. The model gives a summary of the information in the data, in the same way as principal components analysis does for two-way data.
For a 3rd-order tensor $T\in F^{n_{1}\times n_{2}\times n_{3}}$, where $F$ is either $\mathbb {R} $ or $\mathbb {C} $, the Tucker decomposition can be written as follows:
$T={\mathcal {T}}\times _{1}U^{(1)}\times _{2}U^{(2)}\times _{3}U^{(3)}$
where ${\mathcal {T}}\in F^{d_{1}\times d_{2}\times d_{3}}$ is the core tensor, a 3rd-order tensor that contains the 1-mode, 2-mode and 3-mode singular values of $T$, which are defined as the Frobenius norm of the 1-mode, 2-mode and 3-mode slices of tensor ${\mathcal {T}}$ respectively. $U^{(1)},U^{(2)},U^{(3)}$ are unitary matrices in $F^{d_{1}\times n_{1}},F^{d_{2}\times n_{2}},F^{d_{3}\times n_{3}}$ respectively. The j-mode product (j = 1, 2, 3) of ${\mathcal {T}}$ by $U^{(j)}$ is denoted as ${\mathcal {T}}\times U^{(j)}$ with entries as
${\begin{aligned}({\mathcal {T}}\times _{1}U^{(1)})(n_{1},d_{2},d_{3})&=\sum _{i_{1}=1}^{d_{1}}{\mathcal {T}}(i_{1},d_{2},d_{3})U^{(1)}(i_{1},n_{1})\\({\mathcal {T}}\times _{2}U^{(2)})(d_{1},n_{2},d_{3})&=\sum _{i_{2}=1}^{d_{2}}{\mathcal {T}}(d_{1},i_{2},d_{3})U^{(2)}(i_{2},n_{2})\\({\mathcal {T}}\times _{3}U^{(3)})(d_{1},d_{2},n_{3})&=\sum _{i_{3}=1}^{d_{3}}{\mathcal {T}}(d_{1},d_{2},i_{3})U^{(3)}(i_{3},n_{3})\end{aligned}}$
Taking $d_{i}=n_{i}$ for all $i$ is always sufficient to represent $T$ exactly, but often $T$ can be compressed or efficiently approximated by choosing $d_{i}<n_{i}$. A common choice is $d_{1}=d_{2}=d_{3}=\min(n_{1},n_{2},n_{3})$, which can be effective when the difference in dimension sizes is large.
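A small NumPy sketch of the mode products defined above may make the convention concrete. This is only an illustration: the shapes are arbitrary and the factor matrices here are random rather than obtained from an actual decomposition (each $U^{(j)}$ is stored with shape $(d_{j},n_{j})$, matching the index convention used in the definition):

```python
import numpy as np

def mode_product(core, U, mode):
    """Contract axis `mode` of `core` with the first axis of U (U has shape (d_j, n_j))."""
    out = np.tensordot(core, U, axes=([mode], [0]))  # the new axis of size n_j ends up last
    return np.moveaxis(out, -1, mode)                # move it back to position `mode`

n1, n2, n3 = 6, 5, 4      # tensor dimensions
d1, d2, d3 = 3, 3, 3      # core dimensions

core = np.random.randn(d1, d2, d3)
U1 = np.random.randn(d1, n1)
U2 = np.random.randn(d2, n2)
U3 = np.random.randn(d3, n3)

# T = core x_1 U1 x_2 U2 x_3 U3
T = mode_product(mode_product(mode_product(core, U1, 0), U2, 1), U3, 2)
print(T.shape)            # (6, 5, 4)

# the same reconstruction written as a single contraction
T_check = np.einsum('abc,ai,bj,ck->ijk', core, U1, U2, U3)
print(np.allclose(T, T_check))   # True
```

Storing only the core and the three factor matrices requires $d_{1}d_{2}d_{3}+\sum _{j}d_{j}n_{j}$ numbers instead of $n_{1}n_{2}n_{3}$, which is the sense in which the decomposition compresses $T$ when $d_{j}<n_{j}$.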
There are two special cases of Tucker decomposition:
Tucker1: if $U^{(2)}$ and $U^{(3)}$ are identity, then $T={\mathcal {T}}\times _{1}U^{(1)}$
Tucker2: if $U^{(3)}$ is identity, then $T={\mathcal {T}}\times _{1}U^{(1)}\times _{2}U^{(2)}$ .
RESCAL decomposition [3] can be seen as a special case of Tucker where $U^{(3)}$ is identity and $U^{(1)}$ is equal to $U^{(2)}$.
See also
• Higher-order singular value decomposition
• Multilinear principal component analysis
References
1. Ledyard R. Tucker (September 1966). "Some mathematical notes on three-mode factor analysis". Psychometrika. 31 (3): 279–311. doi:10.1007/BF02289464. PMID 5221127.
2. F. L. Hitchcock (1927). "The expression of a tensor or a polyadic as a sum of products". Journal of Mathematics and Physics. 6: 164–189.
3. Nickel, Maximilian; Tresp, Volker; Kriegel, Hans-Peter (28 June 2011). A Three-Way Model for Collective Learning on Multi-Relational Data. ICML. Vol. 11. pp. 809–816.
| Wikipedia |
2015, 2015(special): 1134-1142. doi: 10.3934/proc.2015.1134
Pullback uniform dissipativity of stochastic reversible Schnackenberg equations
Yuncheng You 1,
Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620-5700
Received September 2014 Revised January 2015 Published November 2015
Asymptotic dynamics of stochastic reversible Schnackenberg equations with multiplicative white noise on a three-dimensional bounded domain is investigated in this paper. The pullback uniform dissipativity in terms of the existence of a common pullback absorbing set with respect to the reverse reaction rate of this typical autocatalytic reaction-diffusion system is proved through decomposed grouping estimates.
Keywords: pullback absorbing set, reaction-diffusion system, random attractor, pullback uniform dissipativity, stochastic Schnackenberg equations.
Mathematics Subject Classification: Primary: 37L30, 37L55; Secondary: 35B40, 35K55, 60H1.
Citation: Yuncheng You. Pullback uniform dissipativity of stochastic reversible Schnackenberg equations. Conference Publications, 2015, 2015 (special) : 1134-1142. doi: 10.3934/proc.2015.1134
L. Arnold, "Random Dynamical Systems", Springer-Verlag, New York and Berlin, 1998. Google Scholar
P.W. Bates, K. Lu and B. Wang, Random attractors for stochastic reaction-diffusion equations on unbounded domains, J. Diff. Eqns., 246 (2009), 845-869. Google Scholar
V. V. Chepyzhov and M. I. Vishik, "Attractors for Equations of Mathematical Physics," AMS Colloquium Publications, Vol. 49, AMS, Providence, RI, 2002. Google Scholar
I. Chueshov, "Monotone Random Systems Theory and Applications", Lect. Notes of Math., Vol. 1779, Springer, New York, 2002. Google Scholar
H. Crauel and F. Flandoli, Attractors for random dynamical systems, Probab. Theory Related Fields, 100 (1994), 365-393. Google Scholar
P. Gray and S. K. Scott, Autocatalytic reactions in the isothermal, continuous stirred tank reactor: Oscillations and instabilities in the system $a+2b\to 3b,b\to c$, Chem. Eng. Sci., 39 (1984), 1087-1097. Google Scholar
P. Martin-Rubio and J.C. Robinson, Attractors for the stochastic 3D Navier-Stokes equations, Stochastics and Dynamics, 3 (2003), 279-297. Google Scholar
J.D. Murray, Mathematical Biology II: Spatial Models and Biomedical Applications, 3rd edition, Springer, New York, 2003. Google Scholar
J. E. Pearson, Complex patterns in a simple system, Science, 261 (1993), 189-192. Google Scholar
J. Schnackenberg, Simple chemical reaction systems with limit cycle behavior, J. Theor. Biology, 81 (1979), 389-400. Google Scholar
G. R. Sell and Y. You, "Dynamics of Evolutionary Equations," Applied Mathematical Sciences, 143, Springer-Verlag, New York, 2002. Google Scholar
M.J. Ward and J. Wei, The existence and stability of asymmetric spike patterns for the Schnackenberg model, Stud. Appl. Math., 109 (2002), 229-264. Google Scholar
Y. You, Dynamics of three-component reversible Gray-Scott model, DCDS-B, 14 (2010), 1671-1688. Google Scholar
Y. You, Global dynamics and robustness of reversible autocatalytic reaction-diffusion systems, Nonlinear Analysis, Series A, 75 (2012), 3049-3071. Google Scholar
Y. You, Random attractor for stochastic reversible Schnackenberg equations, Discrete and Continuous Dynamical Systems, Series S, 7 (2014), 1347-1362. Google Scholar
Yuncheng You | CommonCrawl |
31-03-2020 | Regular Paper | Issue 3/2020 Open Access
A particle image velocimetry study of dual-rotor counter-rotating wind turbine near wake
Journal of Visualization > Issue 3/2020
Eloise O. Hollands, Chuangxin He, Lian Gan
The global wind industry is expected to surpass generation of 60 GW in 2020, and reach a total of 840 GW by 2022 (Global Wind Energy Council 2017). These values alone highlight the significance of improvements to wind turbine efficiency. An increase as small as 5% could result in an additional 42 GW of clean, emissions-free wind power by the year 2020. The majority of wind turbines are of the single-rotor (SRWT) type. Current SRWTs are limited by three main factors: the inherent Betz limit, root losses and wake interactions. The Betz limit defines the theoretical maximum efficiency of a horizontal-axis single-rotor wind turbine, stating that no more than 59.3% of the kinetic energy of the wind may be converted into useful mechanical energy to turn the rotor. Root losses are the result of thick and thus aerodynamically poor turbine blade roots required to withstand large structural loads; a loss in power generation of up to 5% has been estimated for horizontal-axis wind turbines due to the increased structural integrity required at the root (Sharma and Frere 2010). Wake interactions are relevant specifically to wind farms featuring a configuration of many wind turbines, as opposed to isolated use. The problem is that the wake, after passing through a turbine, expands, superimposes and impinges upon downstream turbines, negatively affecting the flow they receive and, consequently, the downwind turbines' ability to extract energy from it. The combined effect of these three limitations results in an efficiency of approximately 10%–30% for conventional SRWTs.
In the context of a wind farm, it is desired that the wake passing through a turbine recovers quickly, so as not to inhibit the functionality of the downstream turbine and to reduce possible harmful resonance of the blades. However, there are several factors that inhibit the recovery of a wake, the most detrimental of which is the phenomenon of tip vortices. The pressure difference responsible for generating lift at an airfoil surface also causes vortices emanating from the blade tips, whose pathlines are helical in nature due to the rotation of the turbine blades (Sherry et al. 2013). As the speed of the turbine blade tips is significantly higher than the incoming flow speed, the distance between the spirals of the tip vortices is very small, meaning that they can be approximated as a very turbulent cylindrical shear layer separating the wake, containing slow-moving air, from the surrounding fluid at ambient conditions (Gomez-Elviraa et al. 2005). This shear layer essentially acts as a barrier, delaying wake re-energisation via mixing with ambient air. Furthermore, it is suggested by Bartl (2011) that the swirling motion leaving the turbine rotor could excite an eigenfrequency of the blades of the downstream turbine, leading to material fatigue. Consequently, a method of dissipating these vortices as soon as possible is desired.
This evident need for improvement sparked relatively new research into unconventional wind turbine designs, such as the horizontal-axis dual-rotor wind turbine (DRWT). DRWTs are characterised by two rotors mounted on a single tower, both to capture additional energy otherwise missed by a single rotor and, more importantly, to improve the characteristics of the wake. Ozbay (2014) conducted wind tunnel experiments and compared SRWT and DRWT for both co-rotating and counter-rotating configurations. The study found that the dissipation of the vortex-induced shear layer is highly dependent on the turbulence kinetic energy (TKE) available for turbulent mixing. This need for increased TKE as an input to the wake is met by the DRWT, as experimentally validated by incorporating a smaller auxiliary rotor half the size of the main rotor (Wang et al. 2018). That study found improved recovery in the wake of a DRWT compared to that of a SRWT, due to the interaction and consequent dissipation of the separately emanating tip vortices. The measurements also concluded that although the highest turbulence production rate was seen behind the co-rotating DRWT, it suffered the largest velocity deficit and was unable to utilise the swirling velocity induced by the upwind rotor as the counter-rotating DRWT could. The counter-rotating and co-rotating DRWT saw power enhancements of 7.2% and 1.8%, respectively, providing confidence in the decision to adopt the former configuration.
Herzog et al. (2010) compared, both numerically and experimentally, a SRWT with a counter-rotating DRWT in which both rotors were of the same diameter. In an effort to simulate more accurately the free-stream operation of both turbine configurations, the blockage effects of the wind tunnel used were studied numerically, alongside practical measurement of the drag coefficient. It was concluded that an increase in power output of 9% was achieved compared to the SRWT. Lee et al. (2012) studied the effects of design parameters on the aerodynamic performance of a counter-rotating DRWT and concluded that, for optimal system performance, the secondary rotor should be about 40%–50% of the size of the larger rotor, depending on the pitch of each rotor. This also agreed with field testing (Jung et al. 2005). The optimal power generation was later studied further parametrically (Rosenberg et al. 2014), which however found that the auxiliary rotor placed upwind of the existing rotor should have a diameter 25% of the size of the latter, combined with a separation of 2D (D being the diameter of the main rotor) and a tip speed ratio of 6, to yield optimal system performance.
Although these studies align closely with the intentions of the current study, their primary focus was on the power output of a standalone system; the wake characteristics, which are crucial for eventual large-scale implementation involving multiple DRWT systems, remain poorly understood. This study focuses on the near wake behind a counter-rotating DRWT. The dependence of the helical vortex wake decay on the size of the smaller auxiliary rotor is of special interest. The primary advantage of this configuration is twofold: the smaller rotor placed upstream aims to capture the energy lost at the root of the main turbine more economically than using a rotor of the same size, while the opposite rotating direction is potentially beneficial to counteract the swirl, or at least accelerate the swirl decay, behind the DRWT system.
2 Experimental set-up
2.1 Turbine model and its operation
A scaled turbine tower and nacelle were manufactured for use in a wind tunnel with a 0.5 m working section. The turbine tower is 225 mm long and extends vertically downwards from the ceiling, placing the nacelle at the centre of the testing section. The main rotor diameter is 180 mm, sufficiently small so as to avoid flow interference with the wind tunnel wall. The rotors embodied a NACA 4415 aerofoil profile, one of the most common and broadly used aerodynamic shapes for wind turbine blades, and were 3D printed in FullCure®720 using an Eden Object 500 printer. This produced fairly accurate blade shapes and a good surface finish, ready to be spray-painted matte black straight after curing. The largest rotor provided the base model, of which the two auxiliary rotors were simply scaled versions at factors of 0.8 and 0.5 to ensure aerodynamic similarity.
Three turbine configurations were studied: an SRWT comprising the main rotor only, a DRWT that modifies the SRWT model to include a 144-mm-diameter rotor (80%), placed 60 mm (the length of the hub) upstream, and a second DRWT that utilises the smaller 90-mm-diameter rotor (50%); see Fig. 1a. A smaller rotor size was difficult to achieve owing to the 3D printer resolution as well as the material strength. DRWTs having auxiliary rotors larger than 80% of the main rotor would not be economically suitable in practice compared to two separate single-rotor wind turbines. Hence, they are not studied.
a Rotor blades, b model circuit diagram, c, d experimental set-up photograph and schematic including wind tunnel working section dimensions
In the two DRWT configurations, the smaller auxiliary rotor was a mirror of the main rotor and was placed upwind, aiming to capture the energy at the root part of the main rotor installed at the downstream side of the hub. As the superiority of the counter-rotating DRWT over the co-rotating DRWT and SRWT in power generation has been justified (Ozbay 2014 ; Wang et al. 2018 among others), only the counter-rotating condition is investigated in this study. Hereafter, DRWT refers to the counter-rotating configuration for short. The current SRWT configuration differs from a conventional one, in which the main rotor is installed on the windward side of the hub; it is adopted here to ensure a direct comparison to the two DRWT configurations. The conventional SRWT, and the DRWT configuration with an auxiliary rotor of the same size as the main rotor, have been well-studied and are therefore not repeated here.
In the context of this investigation, it is desirable that the results are independent of the Reynolds number (Re). Not surprisingly, most wind tunnel experiments are conducted at lower Re than real operating conditions. Existing research into the Re dependence of turbulence statistics in the wake of turbines states that mean velocity and turbulence intensity, both of which are to be studied here, become independent of Re for \({\mathrm{Re}} \gtrsim 9.3 \times 10^{4}\) (Chamorro et al. 2012 ). Here, Re is defined as
$$\begin{aligned} {\mathrm{Re}}=\frac{U_{\infty }D}{\nu } \end{aligned}$$
where \(U_{\infty }\) is the free stream wind velocity at hub centre height, set to be \(8\,{\mathrm{ms}}^{-1}\); D is the characteristic length scale of the flow, taken as the diameter of the main rotor and \(\nu\) is the kinematic viscosity for air at 20 \(^{\circ }\)C. This gives \({\mathrm{Re}}\approx 9.5\times 10^4\), satisfying the Re independence criterion.
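As a quick arithmetic check of the quoted value, the calculation can be scripted in a couple of lines (a minimal sketch; the kinematic viscosity used below is an assumed textbook value for air at 20 \(^{\circ }\)C, not a value quoted in this paper):

```python
# Reynolds number check for the main rotor
U_inf = 8.0     # free stream velocity at hub centre height, m/s
D = 0.180       # main rotor diameter, m
nu = 1.51e-5    # assumed kinematic viscosity of air at 20 C, m^2/s

Re = U_inf * D / nu
print(f"Re = {Re:.2e}")   # ~9.5e4, satisfying the Re independence criterion
```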
The tip speed ratio \(\varLambda\) is an important factor of wind turbine design (Yurdusev et al. 2006 ) which quantifies the power generation capability and consequent efficiency of a turbine. It is defined as the ratio between the blade tip speed and the free stream velocity, viz \(\varLambda =\varOmega L/U_{\infty }\), where \(\varOmega\) is the turbine rotational speed and L is the length of the blade. According to Ragheb and Ragheb ( 2011 ), the optimal \(\varLambda\) was found to be \(\varLambda _{\mathrm{opt}}=4\pi /m\), where \(m=3\) is the number of blades in this study.
A turbine with too low a \(\varLambda\) fails to capture energy from the wind, while a rotor with too large a \(\varLambda\) acts more as a barrier to the incoming air. It is shown by Siddiqui et al. ( 2017 ) that \(\varLambda\) greatly impacts properties of the wake, namely velocity, vorticity and flow streamline trend, rendering it a parameter that must remain constant throughout the experimentation. However, the \(\varOmega\) required to reach \(\varLambda _{\mathrm{opt}}\) with the 50% auxiliary rotor was too high to be feasible, and for this reason, a slightly lower value of \(\varLambda =3.46\) was chosen and was set to be the same for all the rotors. At this value, the theoretical power coefficient for a turbine with three blades of NACA 4415 profile type, defined as the ratio of electrical power output to wind power input, is found to be 0.4196 (Yurdusev et al. 2006 ), well within the average range of 0.2–0.45.
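For illustration, the rotational speeds implied by the fixed tip speed ratio can be estimated as below. This is a sketch only: it assumes the blade length L can be approximated by the rotor radius, so the actual \(\varOmega\) values set in the experiment (Table 1) may differ somewhat.

```python
import math

U_inf = 8.0   # free stream velocity, m/s
tsr = 3.46    # tip speed ratio, identical for all rotors
radii = {"main rotor (180 mm)": 0.090,
         "80% auxiliary rotor (144 mm)": 0.072,
         "50% auxiliary rotor (90 mm)": 0.045}   # blade length approximated by rotor radius, m

for name, L in radii.items():
    omega = tsr * U_inf / L               # rotational speed, rad/s
    rpm = omega * 60.0 / (2.0 * math.pi)  # convert to RPM
    print(f"{name}: Omega ~ {omega:.0f} rad/s (~{rpm:.0f} RPM)")
```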
An MFA Como drill motor and a low inertia solar motor were used to control the rotor rotational speed. Each of them was connected to a rotor using a push-fit fastener and powered by a 5 V source. The connecting wires were wrapped around the turbine tower, properly secured so as to not alter its aerodynamic properties, and fed out of the working section. The rotors were powered to rotate in the direction they would naturally do in a free stream, as dictated by their blade profiles. As shown in Fig. 1b, adjustable resistors were used in order to tune the rotational velocity until the desired value was reached, measured using a strobe light. The accuracy provided by the strobe limited the variation in tip speed ratio to a maximum of \(\pm\,0.5\%\), deeming the set-up suitably reliable. Table 1 details the working conditions of the rotors, where B is the auxiliary rotor upstream of the main rotor A.
Table of the working conditions for each configuration with reference to Fig. 1b
Rotor A ø (mm)
Rotor B ø (mm)
\(\varOmega\) (RPM)
SRWT
DRWT(L)
DRWT(S)
For the two DRWT configurations, only the \(\varOmega\) of the auxiliary rotor is listed, as the \(\varOmega\) for the main rotor is the same as that in the SRWT configuration. L and S denote large and small auxiliary rotor, respectively
2.2 PIV measurements
Two-dimensional particle image velocimetry (PIV) measurements were performed to investigate the near wake flow structure. The flow was seeded with oil droplets of diameter \(\approx 1\,\upmu\)m produced by an atomiser; they are small enough to follow the motion of the flow, yet large enough for PIV cross-correlation at the set field of view (FOV). Particles were distributed homogeneously in the FOV plane, illuminated by a 120 mJ per pulse, 15 Hz double-headed Nd:YAG laser, which fired from far downstream of the flow as shown in Fig. 1c. The laser sheet was set to \(\approx 3\) mm thick to account for the out-of-plane velocity component due to blade rotation. The PIV \(\Delta t\) was set to 30 \(\upmu\)s for an FOV size of about 190 mm \(\times\) 240 mm in the x– y plane, where x is the streamwise direction and y is along the rotor radius (vertical) direction. The origin was set at the leeward centre of the main rotor; see Fig. 1b. The FOV was offset in the y direction to be optimised for the part of the wake away from the tower. A low speed SensiCam camera was used as the imaging tool, sampling at a rate of four image pairs per second and facing normal to the FOV in an enclosure. Figure 1c, d shows the image and the schematic of the set-up, respectively.
In total, 1000 pairs of images were taken for each of the configurations defined in Table 1. The particle images were processed by LaVision®Davis 7.0, with \(32\times 32\) pixel interrogation windows and \(50\%\) overlap. This gives a spatial resolution of \(\approx 3\) mm in terms of vector spacing. The instantaneous velocity in the x and y directions, respectively, is written as \((u,v)=\left( U+u',V+v'\right)\), where ( U, V) is the ensemble averaged (time mean) velocity and \((u',v')\) is the fluctuating component.
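The decomposition into mean and fluctuating parts used throughout the following sections amounts to the short sketch below (illustrative only; the array names and shapes are assumptions, not the actual processing scripts used for this study).

```python
import numpy as np

def reynolds_decompose(u, v):
    """u, v: PIV velocity snapshots of shape (N, ny, nx), N = 1000 image pairs.
    Returns the ensemble (time) mean fields and the fluctuating components."""
    U, V = u.mean(axis=0), v.mean(axis=0)
    u_p, v_p = u - U, v - V
    return U, V, u_p, v_p
```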
3 Results and discussion
3.1 Mean wake
Figure 2 illustrates the mean streamwise velocity U distribution over the region \(-\,0.4<x/D<0.9\). Similar to most of the wind turbine wake studies, we focus on the part of the wake away from the tower. Note that in our study, the tower was installed upside down, due to physical constraints of the facility (Fig. 1c).
Contour of the mean streamwise velocity. a SRWT, b DRWT(S), c DRWT(L). The positions of the rotors are labelled, with the white arrows indicating the rotational direction (viewing from downstream for the main rotor and from upstream for the auxiliary rotors, so they counter-rotate)
As expected, significant velocity deficit can be seen in the wakes behind all three configurations, indicating that a large amount of the incoming flow's kinetic energy is being consumed. The DRWT(L) featuring the larger auxiliary rotor shows the greatest velocity deficit, closely followed by the DRWT(S); both have a greater deficit than the SRWT at the end of the FOV. This is confirmed by the U distribution along the radial direction, as shown in Fig. 3a. It shows that the free stream velocity \(U_{\infty }\) is fully recovered at \(|y/D|=0.6\) by \(x=0.9D\). Figure 2 shows that the velocity deficit area is roughly cone shaped, gradually expanding from the main rotor tips. The slight overshoot of U ( \(>U_{\infty }\)) at \(y/D=-0.6\) in Fig. 3a is because of the induced velocity caused by the three winding helical vortex cores originating from the main rotor blades, which will be investigated later. Figure 3a also shows that the width of the wake for the three configurations is very similar, with the wake of DRWT(L) marginally wider. This is consistent with the findings of Ozbay ( 2014 ) who also found the wake width behind an SRWT and DRWT to be almost identical and consequently dependent on the size of the main rotor, which has also been kept constant there. This establishes that it is not the wake shape that changes with the addition of an auxiliary rotor, but the characteristics within it, further supported visually by Fig. 2.
a Dependence of axial velocity on the radial distance from the hub centre at \(x=0.9D\), b dependence of the axial velocity on the axial distance at hub centre height
The obviously quicker velocity recovery seen in the SRWT for \(-\,0.5\lesssim y/D\lesssim -0.2\) (Fig. 3a) confirms that the auxiliary rotors of the DRWTs capture a large proportion of the kinetic energy of the flow at the main rotor root region, otherwise missed by the SRWT. This is consistent with existing knowledge that DRWTs are able to yield a higher power output than conventional horizontal-axis SRWTs (Ozbay 2014 ; Herzog et al. 2010 ). Over the same region, Fig. 2 shows that DRWT(S) has the highest density of contour lines, followed closely by DRWT(L). This can also be inferred from \(\partial {U}/\partial {y}\), derivable from Fig. 3a. This suggests that the velocity gradient in the radial direction, and therefore the shear strength, is larger for the DRWTs when compared to the SRWT. Ozbay ( 2014 ) demonstrated that the presence of high shear is prone to flow instability and hence promotive of turbulent mixing. This turbulent mixing is desired to break down the cone-shaped shear layer, caused by the tip vortices as discussed in Sect. 1, in order to accelerate the process of wake recovery.
Figure 3b shows the U deficit with axial distance at hub height. The profiles differ significantly within the region \(0<x/D<0.3\), wherein the auxiliary rotors of the DRWTs result in a larger velocity deficit in the immediate near wake. Beyond this distance, the U behaviours are very similar among the three. By the end of the measurement range, U at hub height continues dropping, but it can be expected that it will eventually recover to \(U_{\infty }\) as the wake dissipates. Whether or not the U distributions of the three remain similar further downstream requires further investigation.
Figure 3b also shows that U of the DRWT(L) is consistently lower than the other two configurations, suggesting the former is slightly superior at extracting energy from the wind. This agrees with the findings of Jung et al. ( 2005 ) who stated that using a secondary rotor \(\approx 50\%\) the size of the main rotor sees the best performance in the context of power output, yielding the highest power coefficient.
3.2 Phase-averaged wake
As the rotation rate of the rotors is fixed, it is expected that the wake behind all the configurations manifests a periodic feature, at a frequency of \(m\varOmega /(2\pi )\). Since the samples were acquired at a fixed low frequency in a statistically independent way, without phase locking by an external phase indicator, the phase of the wake is resolved using the snapshot-based proper orthogonal decomposition (POD) (Berkooz et al. 1993 ). POD is a suitable but not unique tool to extract coherent flow structures from both turbulent velocity data, e.g. Wang et al. ( 2020 ), and passive scalar data, e.g. He and Liu ( 2017 ). [Other techniques, e.g. wavelet transform (Fijimoto and Rinoshika 2017 ), might also be suitable for the similar purposes under special circumstances.]
Briefly, for a set of N snapshots of the fluctuation components \((u',v')\), an auto-covariance matrix M is constructed, whose standard eigenvalue problem produces N eigenvalues \(\lambda _{i}\) and N eigenvectors \(A_i\):
$$\begin{aligned} MA _{i}=\left( {\hat{U}}^T{\hat{U}}\right) A_{i}=\lambda _{i}A_{i}, \end{aligned}$$
where \({\hat{U}}=[u'_1\ldots u'_N,v'_1\ldots v'_N]\), combining both velocity components. The associated POD mode, \(\varPhi _{i}\) can be calculated as:
$$\begin{aligned} \varPhi _i=\frac{\sum _{n=1}^NA_{i,n}(u'_n,v'_n)}{\Vert \sum _{n=1}^NA_{i,n}(u'_n,v'_n) \Vert }, \; i=1,2,\ldots N. \end{aligned}$$
The eigenvalue \(\lambda _{i}\) reflects the contribution of mode \(\varPhi _{i}\) to the total fluctuating energy of the flow. The instantaneous velocity field can then be represented as a sum of orthogonal modal contributions as
$$\begin{aligned} (u,v)=(U,V) + \sum _{i=1}^N a_{i}\varPhi _{i}, \end{aligned}$$
where \(a_{i}\) is the coefficient that is obtained by projecting the instantaneous velocity fields on the POD basis. That is
$$\begin{aligned} a_i=[\varPhi _1\;\varPhi _2\ldots \;\varPhi _N]^T(u'_i,v'_i). \end{aligned}$$
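A compact sketch of the snapshot POD described above is given below (illustrative only; variable names are assumptions and the normalisation details of the actual analysis may differ).

```python
import numpy as np

def snapshot_pod(u_p, v_p):
    """u_p, v_p: fluctuating fields of shape (N, ny, nx).
    Returns eigenvalues, POD modes and temporal coefficients."""
    N = u_p.shape[0]
    U_hat = np.hstack([u_p.reshape(N, -1), v_p.reshape(N, -1)])  # stack u' and v'
    M = U_hat @ U_hat.T                       # N x N auto-covariance matrix
    lam, A = np.linalg.eigh(M)                # eigenvalues and eigenvectors of M
    order = np.argsort(lam)[::-1]             # rank modes by decreasing energy
    lam, A = lam[order], A[:, order]
    Phi = A.T @ U_hat                         # sum_n A_{i,n} (u'_n, v'_n)
    Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # normalise each mode
    a = U_hat @ Phi.T                         # coefficients by projection
    return lam, Phi, a
```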
The ranking of the modal energy contribution, viz. the \(\lambda _i\) percentage, is given in Fig. 4a. It shows that \(\lambda _1\) and \(\lambda _2\), corresponding to \(\varPhi _1\) and \(\varPhi _2\), in total contribute 3.2%, 3.3% and 3.5% of the energy for the SRWT, DRWT(S) and DRWT(L) configurations, respectively, which are all similarly small. Higher modes contribute less than 1.5% each. This means that the wake behind all three turbine configurations is very turbulent, and the energy contained in the coherent structures is relatively small. The contributions of \(\lambda _1\)– \(\lambda _4\) for the two DRWTs are similar to each other and larger than those of the SRWT. From \(\lambda _5\) onwards, all the configurations become very similar in \(\lambda _i\). The SRWT gains a small fraction back at higher modes \(i\gtrsim 25\), which are unimportant due to incoherence.
POD analysis of DRWT(S) dataset. a Percentage of the POD mode energy to the total energy. Mode zero, viz. energy of the mean flow fields is excluded. Legends follow Fig. 3, b projection of the normalised coefficients of the first two modes \(a_1/\sqrt{2\lambda _1}\) and \(a_2/\sqrt{2\lambda _2}\) from each snapshot on to the polar coordinates. The red lines mark the \(\pm\,10^{\circ }\) bin size for phase averaging, c the vorticity \(\omega _M\) contours of the first two modes \(\varPhi _1\) and \(\varPhi _2\), overlaid with the corresponding modal velocity vectors, d phase-averaged vorticity contour from the snapshots falling inside the bin in ( b), e the corresponding phase-averaged swirling strength \(\lambda _{ci}\)
Figure 4b illustrates the projection of the (normalised) \(a_1\) and \(a_2\) on to the polar coordinates, with the associated mode \(\varPhi _1\) and \(\varPhi _2\) presented in (c), in terms of vorticity derived from the modal velocity. It is evident from (c) that the first two modes reflect a periodic vortex shedding pattern, and this is confirmed by the rather homogeneous angular distribution in (b). This suggests that \(a_{1}=\sqrt{2\lambda _{1}}\sin (\phi )\) and \(a_{2}=\sqrt{2\lambda _{2}}\cos (\phi )\) with \(\phi\) representing the vortex shedding phase angle (Oudheusden et al. 2005 ). Phase averaging can be done by defining a sample bin size, in this study \(\pm\,10^{\circ }\), to ensure a sample size of \(55\pm 5\) in each bin (phase). An example of a bin centred at an arbitrary phase is shown in (b). At this phase, the averaged vorticity field ( \(\omega\)), using the raw instantaneous velocity ( u, v), is shown in (d) and the contour of the swirling strength \(\lambda _{ci}\) is shown in (e). \(\lambda _{ci}\) is the imaginary part of the complex eigenvalue of the (phase-averaged) velocity gradient tensor, which provides a measure of the swirl strength to allow shear layer to be excluded from detections (Zhou et al. 1999 ).
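The phase identification and the bin averaging described above can be sketched as follows (again an illustrative fragment under the same assumed variable names; the bookkeeping in the actual analysis may differ).

```python
import numpy as np

def phase_average(field, a1, a2, lam1, lam2, phi0, half_width_deg=10.0):
    """Average 'field' (shape (N, ny, nx)) over the snapshots whose POD phase
    lies within +/- half_width_deg of the target phase phi0 (radians)."""
    phase = np.arctan2(a1 / np.sqrt(2.0 * lam1), a2 / np.sqrt(2.0 * lam2))
    dphi = np.angle(np.exp(1j * (phase - phi0)))       # wrapped phase difference
    in_bin = np.abs(dphi) <= np.deg2rad(half_width_deg)
    return field[in_bin].mean(axis=0), int(in_bin.sum())
```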
The coherent vortices shed from the main rotor tips are clearly shown in Fig. 4d after phase averaging. Also seen is an area of vorticity originating from the small auxiliary rotor. This vorticity, in the form of a shear layer, fails to form any coherent vortex packets due to the interference of the main rotor downstream. At the same tip speed ratio \(\varLambda\), the auxiliary rotor and the main rotor rotate at different rates and therefore have different vortex shedding frequencies captured in the FOV. It is confirmed in (e) that no strong swirl is observed in this area, while clear swirling vortex cores can be seen downstream of the main rotor. POD analysis of the sub-region excluding the main rotor vortices also does not show evidence of periodic shedding from the auxiliary rotor; figure not shown.
Figure 5 shows three successive wake phases from \(\phi =\pi /4\) behind all three configurations. As the phase angle increases, the tip vortices shed from one main rotor blade can be seen to align sequentially with the vortices shed from the other two blades of the same rotor. The distance between neighbouring vortices is found to be fairly constant, at about 0.15 D, among the three configurations and over the entire FOV. Since the vortices are convected downstream by the local velocity and the rotation rate of the main rotor remains constant, this suggests that the auxiliary rotor does not impact the local mean velocity in the wake, in agreement with Fig. 2.
Phase-averaged vorticity contour for three successive phase angles behind all three turbine configurations. From left to right: SRWT, DRWT(S), DRWT(L)
The trace of the turbine root vortices is also observable in the wake behind the SRWT, but not in a coherent pattern in-phase with the tip vortices. This might be because the particular blade shape used inhibits coherent vortex shedding in the root part. In comparison, the influence of the smaller auxiliary rotor is clearly seen in the wake of DRWT(S), where stronger vorticity is seen as highlighted by the blue box. This shear layer is also reflected in the U profile at the end of the FOV in Fig. 2a, where a 'step' is seen for \(0.4\lesssim |y/D|\lesssim 0.5\). This shear layer has the same sense of direction as the main rotor tip vortices in the x– y plane, but should have an opposite swirl direction to the main rotor helical wake due to the counter-rotating auxiliary rotor. This could be beneficial as it counteracts the main rotor tip wake swirl.
The strength of this shear layer is appreciably lower in DRWT(L). This is because the vortices shed by the larger auxiliary rotor are entrained into the main rotor vortices, attributed to their closer radial distance. This is also the reason for the stronger vortices (both size and \(\omega\) magnitude) behind DRWT(L). The vortex interaction also tends to distort the shape of the vortices for \(x>0.7D\).
The negative \(\omega\) is negligible and hence not included in Fig. 5. Note that although the auxiliary rotor rotates in the opposite direction to the main rotor, because of its mirrored blade geometry, the two rotors shed vortices in the same sense in the x– y plane.
It is so far clear that the main rotor tip vortices still play a dominant role, acting as a barrier preventing wake re-energisation. Their evolution is further analysed next. The vortex centre trajectory is shown in Fig. 6a. The vortex centre is found from the \(\lambda _{ci}\)-weighted centroid; see Fig. 4e. The trajectories behind all three configurations are very similar, with a very small rate of expansion that increases weakly from SRWT to DRWT(S) to DRWT(L) under the influence of the auxiliary rotor vortices.
a The trajectory of the main tip vortex centre, b dependence of the main rotor tip vortex circulation \(\varGamma\) on the streamwise distance. Legends follow Fig. 3, c dependence of the vorticity at vortex centroids on the streamwise distance. The solid lines are the least squares exponential fit to the data points
Figure 6b displays the decay of the circulation \(\varGamma\) of the tip vortex packets, where \(\varGamma =\int _S\omega \;{\mathrm {d}}s\) for S denoting the vortex packet area based on a universal threshold \(\omega L/U_{\infty }=0.4\), about \(6\%\) of the peak vorticity value in Fig. 5. The \(\varGamma\) decay can be well-described by an exponential function \(\varGamma =\varGamma _0 \exp \left[ -\alpha _{\varGamma } (x/D)\right]\). At the main rotor tips \(x=0\), \(\varGamma _0/LU_{\infty }=0.11, 0.087\) and 0.141 for SRWT, DRWT(S) and DRWT(L), respectively. Consistent with Fig. 5, DRWT(L) has the strongest vortices in the wake, due to the entrained auxiliary rotor vortices and also the weaker vorticity connected with the auxiliary vortex shear region. Interestingly, DRWT(S) has the vortices with the lowest \(\varGamma\), even lower than the SRWT. Close examination of Fig. 5 suggests that the influence of the smaller auxiliary rotor is to draw the background vorticity near the main rotor vortices away from them and deliver it to the root vortex area.
The \(\varGamma\) decay rates are found to be similar among the three, with \(\alpha _{\varGamma }=0.20, 0.23\) and 0.24, respectively. In particular for the two DRWTs, their decay rates are nearly identical. This suggests that the size of the auxiliary rotors does not have a large impact on the tip vortices decay rate. However, compared to SRWT, incorporation of the auxiliary rotors does increase it, very weakly, due to the vortex interaction.
Similarly, the peak vorticity at the vortex centroids also displays an exponential decay described by \(\omega _p=\omega _0\exp \left[ -\alpha _{\omega } (x/D)\right]\); see Fig. 6c. The decay rate \(\alpha _{\omega }=0.60, 0.51\) and 1.33 for SRWT, DRWT(S) and DRWT(L), respectively. It is clear that DRWT(L) sees the most rapid \(\omega _p\) decay, while that for SRWT and DRWT(S) is similar. If the \(\omega\) profiles of the vortices are assumed to be close to Gaussian, it is possible to deduce the x dependence of the characteristic vortex size r by combining the \(\varGamma\) and \(\omega _p\) behaviour. That is, \(r\sim \exp \left[ \alpha _r (x/D)\right]\), where \(\alpha _r=(\alpha _{\omega }-\alpha _{\varGamma })/2=0.2, 0.14\) and 0.55. This means that r gradually increases due to vorticity diffusion, and this rate is the fastest for DRWT(L). At \(x=0\), extrapolation of the exponential relations finds \(\omega _{0}L/U_{\infty }=5.6, 6.1\) and 9.0 for the three configurations, respectively.
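The exponential decay constants quoted above can be recovered with an ordinary least squares fit in log space, for example as in the sketch below (the data arrays are placeholders for the circulation and peak-vorticity values extracted at each streamwise station, not data from this paper).

```python
import numpy as np

def exp_decay_fit(x_over_D, values):
    """Fit values = v0 * exp(-alpha * x/D) by linear least squares on log(values).
    Returns (v0, alpha)."""
    slope, intercept = np.polyfit(x_over_D, np.log(values), 1)
    return np.exp(intercept), -slope

# e.g. gamma0, alpha_gamma = exp_decay_fit(x_over_D, circulation / (L * U_inf))
```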
3.3 Turbulence kinetic energy
Finally, we take a look into the fluctuating velocities. Without knowing the out-of-plane velocity component w, TKE in this study is defined as
$$\begin{aligned} {\mathrm {TKE}}=\frac{1}{2}\left(\overline{u^{'}u^{'}} + \overline{v^{'}v^{'}}\right), \end{aligned}$$
where \(\overline{u^{'}u^{'}}\) and \(\overline{v^{'}v^{'}}\) are the normal stresses in the x and y directions, respectively. Figure 7 depicts the TKE contours for all three turbine configurations. Consistent with the finding that the wake width is mainly dependent on the vortex core trajectory, and is therefore very similar behind the three turbines, the TKE distribution patterns are also similar. Very close to the main rotor surface, the TKE intensity behind the SRWT appears higher than that behind the two DRWTs, in which part of the free stream wind energy is absorbed by the auxiliary rotors. In DRWT(S), higher TKE intensity can vaguely be seen just above and below the vortex core trajectory, while in DRWT(L) higher TKE can only be seen below. This is in line with the visualisation shown in Fig. 5, where auxiliary rotor vortices are entrained into the main rotor vortices behind DRWT(L).
Contour of TKE overlaid with the vortex core trajectory taken from Fig. 6a. a SRWT, b DRWT(S) and c DRWT(L)
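Under the same assumed array names as the earlier sketches, the in-plane TKE and the corresponding rms value used below reduce to a few lines.

```python
import numpy as np

def tke_2d(u_p, v_p):
    """In-plane TKE and u_rms = sqrt(TKE) from fluctuating fields of shape (N, ny, nx)."""
    tke = 0.5 * ((u_p**2).mean(axis=0) + (v_p**2).mean(axis=0))
    return tke, np.sqrt(tke)
```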
Figure 8 demonstrates the change of the fluctuating velocity root mean square, \(u(\mathrm{rms})\), along the vortex centroid trajectory, where \(u(\mathrm{rms})=\sqrt{\mathrm {TKE}}\). Up to the end of the FOV, the magnitude and the decay rate of \(u(\mathrm{rms})\) are found to be similar among all three configurations. For \(x>0.5D\), \(u(\mathrm{rms})\) is the highest for DRWT(L) and lowest for DRWT(S). This is in agreement with the evolution of the \(\varGamma\) magnitude shown in Fig. 6b. The fluctuating velocity \((u',v')\) is obtained by subtracting the time mean from the instantaneous velocities, and hence consists of both a coherent mean (for periodic flows) and random turbulence. High random turbulence intensity contributes to the dissipation of the helical wake and consequent wake re-energisation, deeming it, in appropriate regions, a desirable quantity for the application. The phase-averaged vortices discussed in Sect. 3.2 are the coherent mean, which contributes significantly to the fluctuating velocities. The \(u(\mathrm{rms})\) intensity around the vortex cores is thus a manifestation of the circulation \(\varGamma\) of the vortex packets. The similar \(u(\mathrm{rms})\) decay rates for \(x>0.5D\) cannot be fitted with a simple exponential function. However, they are also consistent with the decay rate of \(\varGamma\) in Fig. 6b.
Fluctuating velocity rms along the vortex core trajectory. Solid lines are the least squares fits. Legends follow Fig. 3
4 Conclusions
The near wake velocity field behind three turbine configurations was experimentally studied in order to evaluate the impact of an additional counter-rotating auxiliary rotor on the conventional single-rotor wind turbine wake. The two auxiliary rotors were of 80% and 50% scale of the main rotor, installed upwind of the main rotor, aiming to capture the energy loss in the root part of the latter. All the rotors were tested at a constant tip speed ratio of 3.46. We focused on the wake region within one main rotor diameter behind the turbines. Characteristics of the mean velocity field, phase-averaged vortices and TKE were studied using 2D PIV. The following conclusions may be established:
Incorporating auxiliary rotors induces a greater mean velocity deficit, with DRWT(L) marginally larger than DRWT(S), meaning that more wind energy was utilised with an auxiliary rotor.
The size of the auxiliary rotor does not have much impact on the width of the wake, which is primarily determined by the trajectory of the vortices shed by the main rotor installed at the downwind side, in agreement with Ozbay ( 2014 ). The vortices shed by the 50% scale auxiliary rotor leave a shear layer behind without coherent structures surviving the main rotor interference. Those shed by the 80% scale rotor are entrained into the main rotor vortices.
The DRWT(L) has the strongest main rotor tip vortices because of the entrainment of the auxiliary rotor vortices. Although it also experiences the most rapid decay of vorticity strength at the vortex core centroids, the decay rate of its vortex circulation is only marginally larger than that of the other two configurations. The decay of peak vorticity and vortex circulation is found to be exponential for all three configurations.
In line with the circulation behaviour, DRWT(L) sees the strongest, if only slightly, TKE intensity along the vortex core trajectory, but the decay rate is similar to that of the other two configurations.
Overall, the two DRWTs do not show a significant difference in the near wake characteristics. DRWT(S), with the 50% scaled auxiliary rotor, seems to work better owing to its weaker vortex circulation and TKE along the vortex trajectory. Although it results in the strongest shear layer behind the mid-span of the main rotor, this wake is believed to be beneficial because of its opposite swirl direction (counter-rotating), which tends to counteract the main rotor wake. This is largely consistent with the findings of Jung et al. ( 2005 ) who found that an auxiliary rotor 40–50% the size sees the best performance in the context of power output, when compared to rotors of other sizes.
The authors would like to thank Mr. Lincoln Greatrick and Mr. Robbie Grout for their earlier contribution to the work; also the UK EPSRC (EP/P004377/1) for the financial support.
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Bartl J (2011) Wake measurements behind an array of two model wind turbines. Master's thesis, Norwegian University of Science and Technology, Department of Energy and Process Engineering
Berkooz G, Holmes P, Lumley L (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech 25:539
Chamorro L, Arndt R, Sotiropoulos F (2012) Reynolds number dependence of turbulence statistics in the wake of wind turbines. Wind Energy 15(5):733
Fijimoto S, Rinoshika A (2017) Multi-scale analysis on wake structures of asymmetric cylinders with different aspect ratios. J Vis 20(3):519–533
Global Wind Energy Council (2017) Global Wind Report 2017
Gomez-Elviraa R, Crespob A, Migoyab E, Manuelb F, Hernandezc J (2005) Anisotropy of turbulence in wind turbine wakes. J Wind Eng Ind Aerodyn 93:797
He C, Liu Y (2017) Proper orthogonal decomposition of time-resolved LIF visualisation: scalar mixing in a round jet. J Vis 20(4):789–815
Herzog R, Schaffarczyk A, Wacinski A, Zurcher O (2010) In: European wind energy conference EWEC
Jung S, No T, Ryu K (2005) Aerodynamic performance prediction of a 30 kW counter-rotating wind turbine system. Renew Energy 30(5):631
Lee S, Hogeon K, Soogab L, Son E (2012) Effects of design parameters on aerodynamic performance of a counter-rotating wind turbine. Renew Energy 42:140
Oudheusden B, Scarano F, Hinsberg N, Watt D (2005) Phase-resolved characterization of vortex shedding in the near wake of a square-section cylinder at incidence. Exp Fluids 39:86
Ozbay A (2014) An experimental investigation on wind turbine aeromechanics and wake interferences among multiple wind turbines. Master's thesis, Iowa State University
Ragheb M, Ragheb A (2011) Wind turbines theory—the Betz equation and optimal rotor tip speed ratio. In: Fundamental and advanced topics in wind power. Intech. https://doi.org/10.5772/21398
Rosenberg A, Selvaraj S, Sharma A (2014) A novel dual-rotor turbine for increased wind energy capture. J Phys Conf Ser 524:012078
Sharma A, Frere A (2010) Diagnosis of aerodynamic losses in the root region of a horizontal axis wind turbine. Technical report, General Electric Global Research Center Internal Report
Sherry M, Sheridan J, Jacono D (2013) Characterisation of a horizontal axis wind turbine tip and root vortices. Technical report 2. Springer, Berlin
Siddiqui M, Rasheed A, Kvamsdal T, Tabib M (2017) Influence of tip speed ratio on wake flow characteristics utilizing fully resolved CFD methodology. J Phys 854:012043
Wang Z, Ozbay A, Tian W, Hu H (2018) An experimental study on the aerodynamic performances and wake characteristics of an innovative dual-rotor wind turbine. Energy 15(147):94
Wang Q, Gan L, Xu S, Zhou Y (2020) Vortex evolution in the near wake behind polygonal cylinders. Exp Therm Fluid Sci 110:109940
Yurdusev M, Ata R, Cetin N (2006) Assessment of optimum tip speed ratio in wind turbines using artificial neural networks. Energy 31(12):2153
Zhou J, Adrian R, Balachandar S, Kendall T (1999) Mechanisms for generating coherent packets of hairpin vortices in channel flow. J Fluid Mech 387:353
Eloise O. Hollands
Chuangxin He
Lian Gan
Journal of Visualization
| CommonCrawl
Signorini problem
The Signorini problem is an elastostatics problem in linear elasticity: it consists in finding the elastic equilibrium configuration of an anisotropic non-homogeneous elastic body, resting on a rigid frictionless surface and subject only to its mass forces. The name was coined by Gaetano Fichera to honour his teacher, Antonio Signorini: the original name coined by him is problem with ambiguous boundary conditions.
History
• -"Il mio discepolo Fichera mi ha dato una grande soddisfazione"
• -"Ma Lei ne ha avute tante, Professore, durante la Sua vita", rispose il Dottor Aprile, ma Signorini rispose di nuovo:
• -"Ma questa è la più grande." E queste furono le sue ultime parole.[1]
— Gaetano Fichera, (Fichera 1995, p. 49)
The problem was posed by Antonio Signorini during a course taught at the Istituto Nazionale di Alta Matematica in 1959, later published as the article (Signorini 1959), expanding a previous short exposition he gave in a note published in 1933. Signorini (1959, p. 128) himself called it problem with ambiguous boundary conditions,[2] since there are two alternative sets of boundary conditions the solution must satisfy on any given contact point. The statement of the problem involves not only equalities but also inequalities, and it is not a priori known which of the two sets of boundary conditions is satisfied at each point. Signorini asked to determine if the problem is well-posed or not in a physical sense, i.e. if its solution exists and is unique or not: he explicitly invited young analysts to study the problem.[3]
Gaetano Fichera and Mauro Picone attended the course, and Fichera started to investigate the problem: since he found no references to similar problems in the theory of boundary value problems,[4] he decided to approach it by starting from first principles, specifically from the virtual work principle.
During Fichera's researches on the problem, Signorini began to suffer serious health problems: nevertheless, he desired to know the answer to his question before his death. Picone, being tied by a strong friendship with Signorini, began to chase Fichera to find a solution: Fichera himself, being tied as well to Signorini by similar feelings, perceived the last months of 1962 as worrying days.[5] Finally, on the first days of January 1963, Fichera was able to give a complete proof of the existence of a unique solution for the problem with ambiguous boundary condition, which he called the "Signorini problem" to honour his teacher. A preliminary research announcement, later published as (Fichera 1963), was written up and submitted to Signorini exactly a week before his death. Signorini expressed great satisfaction to see a solution to his question.
A few days later, Signorini had with his family Doctor, Damiano Aprile, the conversation quoted above.[6]
The solution of the Signorini problem coincides with the birth of the field of variational inequalities.[7]
Formal statement of the problem
The content of this section and the following subsections follows closely the treatment of Gaetano Fichera in Fichera 1963, Fichera 1964b and also Fichera 1995: his derivation of the problem is different from Signorini's in that he does not consider only incompressible bodies and a plane rest surface, as Signorini does.[8] The problem consists in finding the displacement vector from the natural configuration $\scriptstyle {\boldsymbol {u}}({\boldsymbol {x}})=\left(u_{1}({\boldsymbol {x}}),u_{2}({\boldsymbol {x}}),u_{3}({\boldsymbol {x}})\right)$ of an anisotropic non-homogeneous elastic body that lies in a subset $A$ of the three-dimensional euclidean space whose boundary is $\scriptstyle \partial A$ and whose interior normal is the vector $n$, resting on a rigid frictionless surface whose contact surface (or more generally contact set) is $\Sigma $ and subject only to its body forces $\scriptstyle {\boldsymbol {f}}({\boldsymbol {x}})=\left(f_{1}({\boldsymbol {x}}),f_{2}({\boldsymbol {x}}),f_{3}({\boldsymbol {x}})\right)$, and surface forces $\scriptstyle {\boldsymbol {g}}({\boldsymbol {x}})=\left(g_{1}({\boldsymbol {x}}),g_{2}({\boldsymbol {x}}),g_{3}({\boldsymbol {x}})\right)$ applied on the free (i.e. not in contact with the rest surface) surface $\scriptstyle \partial A\setminus \Sigma $: the set $A$ and the contact surface $\Sigma $ characterize the natural configuration of the body and are known a priori. Therefore, the body has to satisfy the general equilibrium equations
(1) $\qquad {\frac {\partial \sigma _{ik}}{\partial x_{k}}}-f_{i}=0\qquad {\text{for }}i=1,2,3$
written using the Einstein notation as all in the following development, the ordinary boundary conditions on $\scriptstyle \partial A\setminus \Sigma $
(2) $\qquad \sigma _{ik}n_{k}-g_{i}=0\qquad {\text{for }}i=1,2,3$
and the following two sets of boundary conditions on $\Sigma $, where $\scriptstyle {\boldsymbol {\sigma }}={\boldsymbol {\sigma }}({\boldsymbol {u}})$ is the Cauchy stress tensor. Obviously, the body forces and surface forces cannot be given in arbitrary way but they must satisfy a condition in order for the body to reach an equilibrium configuration: this condition will be deduced and analyzed in the following development.
The ambiguous boundary conditions
If $\scriptstyle {\boldsymbol {\tau }}=(\tau _{1},\tau _{2},\tau _{3})$ is any tangent vector to the contact set $\Sigma $, then the ambiguous boundary condition in each point of this set are expressed by the following two systems of inequalities
(3) $\quad {\begin{cases}u_{i}n_{i}&=0\\\sigma _{ik}n_{i}n_{k}&\geq 0\\\sigma _{ik}n_{i}\tau _{k}&=0\end{cases}}$ or (4) ${\begin{cases}u_{i}n_{i}&>0\\\sigma _{ik}n_{i}n_{k}&=0\\\sigma _{ik}n_{i}\tau _{k}&=0\end{cases}}$
Let's analyze their meaning:
• Each set of conditions consists of three relations, equalities or inequalities, and all the second members are the zero function.
• The quantities at first member of each first relation are proportional to the norm of the component of the displacement vector directed along the normal vector $n$.
• The quantities at first member of each second relation are proportional to the norm of the component of the tension vector directed along the normal vector $n$,
• The quantities at the first member of each third relation are proportional to the norm of the component of the tension vector along any vector $\tau $ tangent in the given point to the contact set $\Sigma $.
• The quantities at the first member of each of the three relations are positive if they have the same sense of the vector they are proportional to, while they are negative if not, therefore the constants of proportionality are respectively $\scriptstyle +1$ and $\scriptstyle -1$.
Knowing these facts, the set of conditions (3) applies to points of the boundary of the body which do not leave the contact set $\Sigma $ in the equilibrium configuration, since, according to the first relation, the displacement vector $u$ has no components directed as the normal vector $n$, while, according to the second relation, the tension vector may have a component directed as the normal vector $n$ and having the same sense. In an analogous way, the set of conditions (4) applies to points of the boundary of the body which leave that set in the equilibrium configuration, since displacement vector $u$ has a component directed as the normal vector $n$, while the tension vector has no components directed as the normal vector $n$. For both sets of conditions, the tension vector has no tangent component to the contact set, according to the hypothesis that the body rests on a rigid frictionless surface.
Each system expresses a unilateral constraint, in the sense that they express the physical impossibility of the elastic body to penetrate into the surface where it rests: the ambiguity is not only in the unknown values non-zero quantities must satisfy on the contact set but also in the fact that it is not a priori known if a point belonging to that set satisfies the system of boundary conditions (3) or (4). The set of points where (3) is satisfied is called the area of support of the elastic body on $\Sigma $, while its complement respect to $\Sigma $ is called the area of separation.
The above formulation is general since the Cauchy stress tensor i.e. the constitutive equation of the elastic body has not been made explicit: it is equally valid assuming the hypothesis of linear elasticity or the ones of nonlinear elasticity. However, as it would be clear from the following developments, the problem is inherently nonlinear, therefore assuming a linear stress tensor does not simplify the problem.
The form of the stress tensor in the formulation of Signorini and Fichera
The form assumed by Signorini and Fichera for the elastic potential energy is the following one (as in the previous developments, the Einstein notation is adopted)
$W({\boldsymbol {\varepsilon }})=a_{ik,jh}({\boldsymbol {x}})\varepsilon _{ik}\varepsilon _{jh}$
where
• $\scriptstyle {\boldsymbol {a}}({\boldsymbol {x}})=\left(a_{ik,jh}({\boldsymbol {x}})\right)$ is the elasticity tensor
• $\scriptstyle {\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}({\boldsymbol {u}})=\left(\varepsilon _{ik}({\boldsymbol {u}})\right)=\left({\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{k}}}+{\frac {\partial u_{k}}{\partial x_{i}}}\right)\right)$ is the infinitesimal strain tensor
The Cauchy stress tensor has therefore the following form
(5) $\sigma _{ik}=-{\frac {\partial W}{\partial \varepsilon _{ik}}}\qquad {\text{for }}i,k=1,2,3$
and it is linear with respect to the components of the infinitesimal strain tensor; however, it is not homogeneous nor isotropic.
Solution of the problem
As for the section on the formal statement of the Signorini problem, the contents of this section and the included subsections follow closely the treatment of Gaetano Fichera in Fichera 1963, Fichera 1964b, Fichera 1972 and also Fichera 1995: obviously, the exposition focuses on the basics steps of the proof of the existence and uniqueness for the solution of problem (1), (2), (3), (4) and (5), rather than the technical details.
The potential energy
The first step of the analysis of Fichera as well as the first step of the analysis of Antonio Signorini in Signorini 1959 is the analysis of the potential energy, i.e. the following functional
(6) $I({\boldsymbol {u}})=\int _{A}W({\boldsymbol {x}},{\boldsymbol {\varepsilon }})\mathrm {d} x-\int _{A}u_{i}f_{i}\mathrm {d} x-\int _{\partial A\setminus \Sigma }u_{i}g_{i}\mathrm {d} \sigma $
where $u$ belongs to the set of admissible displacements $\scriptstyle {\mathcal {U}}_{\Sigma }$ i.e. the set of displacement vectors satisfying the system of boundary conditions (3) or (4). The meaning of each of the three terms is the following
• the first one is the total elastic potential energy of the elastic body
• the second one is the total potential energy due to the body forces, for example the gravitational force
• the third one is the potential energy due to surface forces, for example the forces exerted by the atmospheric pressure
Signorini (1959, pp. 129–133) was able to prove that the admissible displacement $u$ which minimizes the integral $I(u)$ is a solution of the problem with ambiguous boundary conditions (1), (2), (3), (4) and (5), provided it is a $C^{1}$ function supported on the closure $\scriptstyle {\bar {A}}$ of the set $A$: however Gaetano Fichera gave a class of counterexamples in (Fichera 1964b, pp. 619–620) showing that in general, admissible displacements are not smooth functions of this class. Therefore, Fichera tries to minimize the functional (6) in a wider function space: in doing so, he first calculates the first variation (or functional derivative) of the given functional in the neighbourhood of the sought minimizing admissible displacement $\scriptstyle {\boldsymbol {u}}\in {\mathcal {U}}_{\Sigma }$, and then requires it to be greater than or equal to zero
$\left.{\frac {\mathrm {d} }{\mathrm {d} t}}I({\boldsymbol {u}}+t{\boldsymbol {v}})\right\vert _{t=0}=-\int _{A}\sigma _{ik}({\boldsymbol {u}})\varepsilon _{ik}({\boldsymbol {v}})\mathrm {d} x-\int _{A}v_{i}f_{i}\mathrm {d} x-\int _{\partial A\setminus \Sigma }\!\!\!\!\!v_{i}g_{i}\mathrm {d} \sigma \geq 0\qquad \forall {\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
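A brief remark, added here as a sketch of the standard reasoning, on why the first variation is required only to be non-negative rather than to vanish: the unilateral constraint $u_{i}n_{i}\geq 0$ on $\Sigma $ makes the admissible set a convex cone, so for $\scriptstyle {\boldsymbol {u}},{\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$ the displacement $\scriptstyle {\boldsymbol {u}}+t{\boldsymbol {v}}$ is guaranteed to be admissible only for $t\geq 0$, and minimality of $I$ at $\scriptstyle {\boldsymbol {u}}$ therefore yields only a one-sided condition:
$I({\boldsymbol {u}}+t{\boldsymbol {v}})\geq I({\boldsymbol {u}})\quad {\text{for all }}t\geq 0\qquad \Longrightarrow \qquad \left.{\frac {\mathrm {d} }{\mathrm {d} t}}I({\boldsymbol {u}}+t{\boldsymbol {v}})\right\vert _{t=0}=\lim _{t\to 0^{+}}{\frac {I({\boldsymbol {u}}+t{\boldsymbol {v}})-I({\boldsymbol {u}})}{t}}\geq 0.$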
Defining the following functionals
$B({\boldsymbol {u}},{\boldsymbol {v}})=-\int _{A}\sigma _{ik}({\boldsymbol {u}})\varepsilon _{ik}({\boldsymbol {v}})\mathrm {d} x\qquad {\boldsymbol {u}},{\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
and
$F({\boldsymbol {v}})=\int _{A}v_{i}f_{i}\mathrm {d} x+\int _{\partial A\setminus \Sigma }\!\!\!\!\!v_{i}g_{i}\mathrm {d} \sigma \qquad {\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
the preceding inequality can be written as
(7) $B({\boldsymbol {u}},{\boldsymbol {v}})-F({\boldsymbol {v}})\geq 0\qquad \forall {\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$
This inequality is the variational inequality for the Signorini problem.
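For a concrete, if much simpler, illustration of how a variational inequality of this kind is handled numerically, the sketch below solves a one-dimensional obstacle problem (not the Signorini problem itself, but the standard finite-dimensional model problem of the same unilateral type) with projected Gauss–Seidel iteration; the load and obstacle functions in the usage comment are arbitrary illustrative choices.

```python
import numpy as np

def obstacle_problem_1d(f, psi, n=200, sweeps=5000):
    """Projected Gauss-Seidel for  min 1/2*int u'^2 - int f*u  over u >= psi,
    with u(0) = u(1) = 0, discretised with n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    load = h**2 * f(x)
    lower = psi(x)
    u = np.maximum(lower, 0.0)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = max(lower[i], 0.5 * (left + right + load[i]))
    return x, u

# e.g. x, u = obstacle_problem_1d(lambda x: -10.0 * np.ones_like(x),
#                                 lambda x: 0.1 - 10.0 * (x - 0.5)**2)
```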
See also
• Linear elasticity
• Variational inequality
Notes
1. Free English translation:
• "My disciple Fichera gave me a great contentment".
• "But you had many, Professor, during your life", replied Doctor Aprile, but then Signorini replied again:
• "But this is the greatest one". And those were his last words.
2. Italian: Problema con ambigue condizioni al contorno.
3. As it is stated in (Signorini 1959, p. 129).
4. See (Fichera 1995, p. 49).
5. This dramatic situation is described by Fichera (1995, p. 51) himself.
6. Fichera (1995, p. 53) reports the episode following the remembrances of Mauro Picone: see the entry "Antonio Signorini" for further details.
7. According to Antman (1983, p. 282)
8. See Signorini 1959, p. 127) for the original approach.
References
Historical references
• Antman, Stuart (1983), "The influence of elasticity in analysis: modern developments", Bulletin of the American Mathematical Society, 9 (3): 267–291, doi:10.1090/S0273-0979-1983-15185-6, MR 0714990, Zbl 0533.73001.
• Duvaut, Georges (1971), "Problèmes unilatéraux en mécanique des milieux continus" (PDF), Actes du Congrès international des mathématiciens, 1970, ICM Proceedings, vol. Mathématiques appliquées (E), Histoire et Enseignement (F) – Volume 3, Paris: Gauthier-Villars, pp. 71–78. A brief research survey describing the field of variational inequalities.
• Fichera, Gaetano (1972), "Boundary value problems of elasticity with unilateral constraints", in Flügge, Siegfried; Truesdell, Clifford A. (eds.), Festkörpermechanik/Mechanics of Solids, Handbuch der Physik (Encyclopedia of Physics), vol. VIa/2 (paperback 1984 ed.), Berlin–Heidelberg–New York: Springer-Verlag, pp. 391–424, ISBN 0-387-13161-2, Zbl 0277.73001. The encyclopedia entry about problems with unilateral constraints (the class of boundary value problems the Signorini problem belongs to) he wrote for the Handbuch der Physik on invitation by Clifford Truesdell.
• Fichera, Gaetano (1995), "La nascita della teoria delle disequazioni variazionali ricordata dopo trent'anni", Incontro scientifico italo-spagnolo. Roma, 21 ottobre 1993, Atti dei Convegni Lincei (in Italian), vol. 114, Roma: Accademia Nazionale dei Lincei, pp. 47–53. The birth of the theory of variational inequalities remembered thirty years later (English translation of the contribution title) is an historical paper describing the beginning of the theory of variational inequalities from the point of view of its founder.
• Fichera, Gaetano (2002), Opere storiche biografiche, divulgative [Historical, biographical, divulgative works] (in Italian), Napoli: Giannini, p. 491. A volume collecting almost all works of Gaetano Fichera in the fields of history of mathematics and scientific divulgation.
• Fichera, Gaetano (2004), Opere scelte [Selected works], Firenze: Edizioni Cremonese (distributed by Unione Matematica Italiana), pp. XXIX+432 (vol. 1), pp. VI+570 (vol. 2), pp. VI+583 (vol. 3), archived from the original on 2009-12-28, ISBN 88-7083-811-0 (vol. 1), ISBN 88-7083-812-9 (vol. 2), ISBN 88-7083-813-7 (vol. 3). Three volumes collecting Gaetano Fichera's most important mathematical papers, with a biographical sketch of Olga A. Oleinik.
• Signorini, Antonio (1991), Opere scelte [Selected works], Firenze: Edizioni Cremonese (distributed by Unione Matematica Italiana), pp. XXXI + 695, archived from the original on 2009-12-28. A volume collecting Antonio Signorini's most important works with an introduction and a commentary of Giuseppe Grioli.
Research works
• Andersson, John (2016), "Optimal regularity for the Signorini problem and its free boundary", Inventiones Mathematicae, 1 (1): 1–82, arXiv:1310.2511, Bibcode:2016InMat.204....1A, doi:10.1007/s00222-015-0608-6, MR 3480553, S2CID 118934322, Zbl 1339.35345.
• Fichera, Gaetano (1963), "Sul problema elastostatico di Signorini con ambigue condizioni al contorno" [On the elastostatic problem of Signorini with ambiguous boundary conditions], Rendiconti della Accademia Nazionale dei Lincei, Classe di Scienze Fisiche, Matematiche e Naturali, 8 (in Italian), 34 (2): 138–142, MR 0176661, Zbl 0128.18305. A short research note announcing and describing (without proofs) the solution of the Signorini problem.
• Fichera, Gaetano (1964a), "Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno" [Elastostatic problems with unilateral constraints: the Signorini problem with ambiguous boundary conditions], Memorie della Accademia Nazionale dei Lincei, Classe di Scienze Fisiche, Matematiche e Naturali, 8 (in Italian), 7 (2): 91–140, Zbl 0146.21204. The first paper where an existence and uniqueness theorem for the Signorini problem is proved.
• Fichera, Gaetano (1964b), "Elastostatic problems with unilateral constraints: the Signorini problem with ambiguous boundary conditions", Seminari dell'istituto Nazionale di Alta Matematica 1962–1963, Rome: Edizioni Cremonese, pp. 613–679. An English translation of the previous paper.
• Petrosyan, Arshak; Shahgholian, Henrik; Uraltseva, Nina (2012), Regularity of Free Boundaries in Obstacle-Type Problems, Graduate Studies in Mathematics, vol. 136, Providence, RI: American Mathematical Society, pp. x+221, ISBN 978-0-8218-8794-3, MR 2962060, Zbl 1254.35001.
• Signorini, Antonio (1959), "Questioni di elasticità non linearizzata e semilinearizzata" [Topics in non linear and semilinear elasticity], Rendiconti di Matematica e delle sue Applicazioni, 5 (in Italian), 18: 95–139, Zbl 0091.38006.
External links
• Barbu, V. (2001) [1994], "Signorini problem", Encyclopedia of Mathematics, EMS Press
• Alessio Figalli, On global homogeneous solutions to the Signorini problem,
\begin{definition}[Definition:Troy/Pennyweight]
The '''pennyweight''' is a troy unit of mass.
{{begin-eqn}}
{{eqn | o =
| r = 1
| c = '''pennyweight'''
}}
{{eqn | r = 24
| c = grains
}}
{{eqn | o = \approx
| r = 1 \cdotp 56
| c = grams
}}
{{end-eqn}}
\end{definition} | ProofWiki |
Cyclotomic polynomial
In mathematics, the nth cyclotomic polynomial, for any positive integer n, is the unique irreducible polynomial with integer coefficients that is a divisor of $x^{n}-1$ and is not a divisor of $x^{k}-1$ for any k < n. Its roots are all nth primitive roots of unity $e^{2i\pi {\frac {k}{n}}}$, where k runs over the positive integers not greater than n and coprime to n (and i is the imaginary unit). In other words, the nth cyclotomic polynomial is equal to
$\Phi _{n}(x)=\prod _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\left(x-e^{2i\pi {\frac {k}{n}}}\right).$
It may also be defined as the monic polynomial with integer coefficients that is the minimal polynomial over the field of the rational numbers of any primitive nth-root of unity ($e^{2i\pi /n}$ is an example of such a root).
An important relation linking cyclotomic polynomials and primitive roots of unity is
$\prod _{d\mid n}\Phi _{d}(x)=x^{n}-1,$
showing that x is a root of $x^{n}-1$ if and only if it is a dth primitive root of unity for some d that divides n.[1]
Examples
If n is a prime number, then
$\Phi _{n}(x)=1+x+x^{2}+\cdots +x^{n-1}=\sum _{k=0}^{n-1}x^{k}.$
If n = 2p where p is an odd prime number, then
$\Phi _{2p}(x)=1-x+x^{2}-\cdots +x^{p-1}=\sum _{k=0}^{p-1}(-x)^{k}.$
For n up to 30, the cyclotomic polynomials are:[2]
${\begin{aligned}\Phi _{1}(x)&=x-1\\\Phi _{2}(x)&=x+1\\\Phi _{3}(x)&=x^{2}+x+1\\\Phi _{4}(x)&=x^{2}+1\\\Phi _{5}(x)&=x^{4}+x^{3}+x^{2}+x+1\\\Phi _{6}(x)&=x^{2}-x+1\\\Phi _{7}(x)&=x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\\\Phi _{8}(x)&=x^{4}+1\\\Phi _{9}(x)&=x^{6}+x^{3}+1\\\Phi _{10}(x)&=x^{4}-x^{3}+x^{2}-x+1\\\Phi _{11}(x)&=x^{10}+x^{9}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\\\Phi _{12}(x)&=x^{4}-x^{2}+1\\\Phi _{13}(x)&=x^{12}+x^{11}+x^{10}+x^{9}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\\\Phi _{14}(x)&=x^{6}-x^{5}+x^{4}-x^{3}+x^{2}-x+1\\\Phi _{15}(x)&=x^{8}-x^{7}+x^{5}-x^{4}+x^{3}-x+1\\\Phi _{16}(x)&=x^{8}+1\\\Phi _{17}(x)&=x^{16}+x^{15}+x^{14}+x^{13}+x^{12}+x^{11}+x^{10}+x^{9}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\\\Phi _{18}(x)&=x^{6}-x^{3}+1\\\Phi _{19}(x)&=x^{18}+x^{17}+x^{16}+x^{15}+x^{14}+x^{13}+x^{12}+x^{11}+x^{10}+x^{9}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\\\Phi _{20}(x)&=x^{8}-x^{6}+x^{4}-x^{2}+1\\\Phi _{21}(x)&=x^{12}-x^{11}+x^{9}-x^{8}+x^{6}-x^{4}+x^{3}-x+1\\\Phi _{22}(x)&=x^{10}-x^{9}+x^{8}-x^{7}+x^{6}-x^{5}+x^{4}-x^{3}+x^{2}-x+1\\\Phi _{23}(x)&=x^{22}+x^{21}+x^{20}+x^{19}+x^{18}+x^{17}+x^{16}+x^{15}+x^{14}+x^{13}+x^{12}\\&\qquad \quad +x^{11}+x^{10}+x^{9}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\\\Phi _{24}(x)&=x^{8}-x^{4}+1\\\Phi _{25}(x)&=x^{20}+x^{15}+x^{10}+x^{5}+1\\\Phi _{26}(x)&=x^{12}-x^{11}+x^{10}-x^{9}+x^{8}-x^{7}+x^{6}-x^{5}+x^{4}-x^{3}+x^{2}-x+1\\\Phi _{27}(x)&=x^{18}+x^{9}+1\\\Phi _{28}(x)&=x^{12}-x^{10}+x^{8}-x^{6}+x^{4}-x^{2}+1\\\Phi _{29}(x)&=x^{28}+x^{27}+x^{26}+x^{25}+x^{24}+x^{23}+x^{22}+x^{21}+x^{20}+x^{19}+x^{18}+x^{17}+x^{16}+x^{15}\\&\qquad \quad +x^{14}+x^{13}+x^{12}+x^{11}+x^{10}+x^{9}+x^{8}+x^{7}+x^{6}+x^{5}+x^{4}+x^{3}+x^{2}+x+1\\\Phi _{30}(x)&=x^{8}+x^{7}-x^{5}-x^{4}-x^{3}+x+1.\end{aligned}}$
The case of the 105th cyclotomic polynomial is interesting because 105 is the least positive integer that is the product of three distinct odd prime numbers (3 × 5 × 7) and this polynomial is the first one that has a coefficient other than 1, 0, or −1:[3]
${\begin{aligned}\Phi _{105}(x)&=x^{48}+x^{47}+x^{46}-x^{43}-x^{42}-2x^{41}-x^{40}-x^{39}+x^{36}+x^{35}+x^{34}+x^{33}+x^{32}+x^{31}-x^{28}-x^{26}\\&\qquad \quad -x^{24}-x^{22}-x^{20}+x^{17}+x^{16}+x^{15}+x^{14}+x^{13}+x^{12}-x^{9}-x^{8}-2x^{7}-x^{6}-x^{5}+x^{2}+x+1.\end{aligned}}$
Properties
Fundamental tools
The cyclotomic polynomials are monic polynomials with integer coefficients that are irreducible over the field of the rational numbers. Except for n equal to 1 or 2, they are palindromes of even degree.
The degree of $\Phi _{n}$, or in other words the number of nth primitive roots of unity, is $\varphi (n)$, where $\varphi $ is Euler's totient function.
The fact that $\Phi _{n}$ is an irreducible polynomial of degree $\varphi (n)$ in the ring $\mathbb {Z} [x]$ is a nontrivial result due to Gauss.[4] Depending on the chosen definition, it is either the value of the degree or the irreducibility which is a nontrivial result. The case of prime n is easier to prove than the general case, thanks to Eisenstein's criterion.
A fundamental relation involving cyclotomic polynomials is
${\begin{aligned}x^{n}-1&=\prod _{1\leqslant k\leqslant n}\left(x-e^{2i\pi {\frac {k}{n}}}\right)\\&=\prod _{d\mid n}\prod _{1\leqslant k\leqslant n \atop \gcd(k,n)=d}\left(x-e^{2i\pi {\frac {k}{n}}}\right)\\&=\prod _{d\mid n}\Phi _{\frac {n}{d}}(x)=\prod _{d\mid n}\Phi _{d}(x).\end{aligned}}$
which means that each n-th root of unity is a primitive d-th root of unity for a unique d dividing n.
The Möbius inversion formula allows the expression of $\Phi _{n}(x)$ as an explicit rational fraction:
$\Phi _{n}(x)=\prod _{d\mid n}(x^{d}-1)^{\mu \left({\frac {n}{d}}\right)},$
where $\mu $ is the Möbius function.
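For example, for n = 12 the divisors d with $\mu \left({\frac {12}{d}}\right)\neq 0$ are d = 2, 4, 6 and 12; since $\mu (1)=\mu (6)=1$ and $\mu (2)=\mu (3)=-1$, this gives
$\Phi _{12}(x)={\frac {(x^{12}-1)(x^{2}-1)}{(x^{6}-1)(x^{4}-1)}}=x^{4}-x^{2}+1,$
in agreement with the list above.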
The cyclotomic polynomial $\Phi _{n}(x)$ may be computed by (exactly) dividing $x^{n}-1$ by the cyclotomic polynomials of the proper divisors of n previously computed recursively by the same method:
$\Phi _{n}(x)={\frac {x^{n}-1}{\prod _{\stackrel {d|n}{{}_{d<n}}}\Phi _{d}(x)}}$
(Recall that $\Phi _{1}(x)=x-1$.)
This formula defines an algorithm for computing $\Phi _{n}(x)$ for any n, provided integer factorization and division of polynomials are available. Many computer algebra systems, such as SageMath, Maple, Mathematica, and PARI/GP, have a built-in function to compute the cyclotomic polynomials.
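As a rough illustration (not part of the article; the helper names poly_div_exact and cyclotomic are made up for the example), the recursion can be implemented in a few lines of Python. The division is exact over the integers because every $\Phi _{d}$ is monic:

# A hedged sketch of the recursive algorithm described above.
# Polynomials are lists of integer coefficients, lowest degree first.
def poly_div_exact(num, den):
    """Exact division of integer polynomials; assumes den is monic and divides num."""
    num = num[:]                                   # work on a copy
    quot = [0] * (len(num) - len(den) + 1)
    for i in range(len(quot) - 1, -1, -1):
        c = num[i + len(den) - 1]                  # leading coefficient of the current remainder
        quot[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    return quot

def cyclotomic(n, _cache={}):                      # mutable default intentionally used as a memo table
    if n not in _cache:
        poly = [-1] + [0] * (n - 1) + [1]          # x^n - 1
        for d in range(1, n):
            if n % d == 0:                         # proper divisor of n
                poly = poly_div_exact(poly, cyclotomic(d))
        _cache[n] = poly
    return _cache[n]

print(cyclotomic(6))         # [1, -1, 1], i.e. x^2 - x + 1
print(min(cyclotomic(105)))  # -2, the first coefficient outside {1, 0, -1}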
Easy cases for computation
As noted above, if n is a prime number, then
$\Phi _{n}(x)=1+x+x^{2}+\cdots +x^{n-1}=\sum _{k=0}^{n-1}x^{k}.$
If n is an odd integer greater than one, then
$\Phi _{2n}(x)=\Phi _{n}(-x).$
In particular, if n = 2p is twice an odd prime, then (as noted above)
$\Phi _{n}(x)=1-x+x^{2}-\cdots +x^{p-1}=\sum _{k=0}^{p-1}(-x)^{k}.$
If n = pm is a prime power (where p is prime), then
$\Phi _{n}(x)=\Phi _{p}(x^{p^{m-1}})=\sum _{k=0}^{p-1}x^{kp^{m-1}}.$
More generally, if n = pmr with r relatively prime to p, then
$\Phi _{n}(x)=\Phi _{pr}(x^{p^{m-1}}).$
These formulas may be applied repeatedly to get a simple expression for any cyclotomic polynomial $\Phi _{n}(x)$ in term of a cyclotomic polynomial of square free index: If q is the product of the prime divisors of n (its radical), then[5]
$\Phi _{n}(x)=\Phi _{q}(x^{n/q}).$
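For example, the radical of 12 is q = 2⋅3 = 6, and indeed
$\Phi _{12}(x)=\Phi _{6}(x^{2})=x^{4}-x^{2}+1.$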
This makes it possible to give formulas for the nth cyclotomic polynomial when n has at most one odd prime factor: If p is an odd prime number, and h and k are positive integers, then:
$\Phi _{2^{h}}(x)=x^{2^{h-1}}+1$
$\Phi _{p^{k}}(x)=\sum _{j=0}^{p-1}x^{jp^{k-1}}$
$\Phi _{2^{h}p^{k}}(x)=\sum _{j=0}^{p-1}(-1)^{j}x^{j2^{h-1}p^{k-1}}$
For the other values of n, the computation of the nth cyclotomic polynomial is similarly reduced to that of $\Phi _{q}(x),$ where q is the product of the distinct odd prime divisors of n. To deal with this case, one has that, for p prime and not dividing n,[6]
$\Phi _{np}(x)=\Phi _{n}(x^{p})/\Phi _{n}(x).$
Integers appearing as coefficients
The problem of bounding the magnitude of the coefficients of the cyclotomic polynomials has been the object of a number of research papers. Several survey papers give an overview.[7]
If n has at most two distinct odd prime factors, then Migotti showed that the coefficients of $\Phi _{n}$ are all in the set {1, −1, 0}.[8]
The first cyclotomic polynomial for a product of three different odd prime factors is $\Phi _{105}(x);$ it has a coefficient −2 (see its expression above). The converse is not true: $\Phi _{231}(x)=\Phi _{3\times 7\times 11}(x)$ only has coefficients in {1, −1, 0}.
If n is a product of more distinct odd prime factors, the coefficients may increase to very high values. For example, $\Phi _{15015}(x)=\Phi _{3\times 5\times 7\times 11\times 13}(x)$ has coefficients running from −22 to 23, and $\Phi _{255255}(x)=\Phi _{3\times 5\times 7\times 11\times 13\times 17}(x)$, for the smallest n with 6 distinct odd prime factors, has coefficients of magnitude up to 532.
Let A(n) denote the maximum absolute value of the coefficients of Φn. It is known that for any positive k, the number of n up to x with A(n) > $n^{k}$ is at least c(k)⋅x for a positive c(k) depending on k and x sufficiently large. In the opposite direction, for any function ψ(n) tending to infinity with n we have A(n) bounded above by $n^{\psi (n)}$ for almost all n.[9]
A combination of theorems of Bateman and Vaughan, respectively, states[7]: 10 that on the one hand, for every $\varepsilon >0$, we have
$A(n)<e^{\left(n^{(\log 2+\varepsilon )/(\log \log n)}\right)}$
for all sufficiently large positive integers $n$, and on the other hand, we have
$A(n)>e^{\left(n^{(\log 2)/(\log \log n)}\right)}$
for infinitely many positive integers $n$. This implies in particular that univariate polynomials (concretely $x^{n}-1$ for infinitely many positive integers $n$) can have factors (like $\Phi _{n}$) whose coefficients are superpolynomially larger than the original coefficients. This is not too far from the general Landau-Mignotte bound.
Gauss's formula
Let n be odd, square-free, and greater than 3. Then:[10][11]
$4\Phi _{n}(z)=A_{n}^{2}(z)-(-1)^{\frac {n-1}{2}}nz^{2}B_{n}^{2}(z)$
where both An(z) and Bn(z) have integer coefficients, An(z) has degree φ(n)/2, and Bn(z) has degree φ(n)/2 − 2. Furthermore, An(z) is palindromic when its degree is even; if its degree is odd it is antipalindromic. Similarly, Bn(z) is palindromic unless n is composite and ≡ 3 (mod 4), in which case it is antipalindromic.
The first few cases are
${\begin{aligned}4\Phi _{5}(z)&=4(z^{4}+z^{3}+z^{2}+z+1)\\&=(2z^{2}+z+2)^{2}-5z^{2}\\[6pt]4\Phi _{7}(z)&=4(z^{6}+z^{5}+z^{4}+z^{3}+z^{2}+z+1)\\&=(2z^{3}+z^{2}-z-2)^{2}+7z^{2}(z+1)^{2}\\[6pt]4\Phi _{11}(z)&=4(z^{10}+z^{9}+z^{8}+z^{7}+z^{6}+z^{5}+z^{4}+z^{3}+z^{2}+z+1)\\&=(2z^{5}+z^{4}-2z^{3}+2z^{2}-z-2)^{2}+11z^{2}(z^{3}+1)^{2}\end{aligned}}$
Lucas's formula
Let n be odd, square-free and greater than 3. Then[11]
$\Phi _{n}(z)=U_{n}^{2}(z)-(-1)^{\frac {n-1}{2}}nzV_{n}^{2}(z)$
where both Un(z) and Vn(z) have integer coefficients, Un(z) has degree φ(n)/2, and Vn(z) has degree φ(n)/2 − 1. This can also be written
$\Phi _{n}\left((-1)^{\frac {n-1}{2}}z\right)=C_{n}^{2}(z)-nzD_{n}^{2}(z).$
If n is even, square-free and greater than 2 (this forces n/2 to be odd),
$\Phi _{\frac {n}{2}}\left(-z^{2}\right)=\Phi _{2n}(z)=C_{n}^{2}(z)-nzD_{n}^{2}(z)$
where both Cn(z) and Dn(z) have integer coefficients, Cn(z) has degree φ(n), and Dn(z) has degree φ(n) − 1. Cn(z) and Dn(z) are both palindromic.
The first few cases are:
${\begin{aligned}\Phi _{3}(-z)&=\Phi _{6}(z)=z^{2}-z+1\\&=(z+1)^{2}-3z\\[6pt]\Phi _{5}(z)&=z^{4}+z^{3}+z^{2}+z+1\\&=(z^{2}+3z+1)^{2}-5z(z+1)^{2}\\[6pt]\Phi _{6/2}(-z^{2})&=\Phi _{12}(z)=z^{4}-z^{2}+1\\&=(z^{2}+3z+1)^{2}-6z(z+1)^{2}\end{aligned}}$
Sister Beiter conjecture
The Sister Beiter conjecture is concerned with the maximal size (in absolute value) $A(pqr)$ of coefficients of ternary cyclotomic polynomials $\Phi _{pqr}(x)$ where $3\leq p\leq q\leq r$ are three prime numbers.[12]
Cyclotomic polynomials over a finite field and over the p-adic integers
See also: Finite field § Roots of unity
Over a finite field with a prime number p of elements, for any integer n that is not a multiple of p, the cyclotomic polynomial $\Phi _{n}$ factorizes into ${\frac {\varphi (n)}{d}}$ irreducible polynomials of degree d, where $\varphi (n)$ is Euler's totient function and d is the multiplicative order of p modulo n. In particular, $\Phi _{n}$ is irreducible if and only if p is a primitive root modulo n, that is, p does not divide n, and its multiplicative order modulo n is $\varphi (n)$, the degree of $\Phi _{n}$.[13]
These results are also true over the p-adic integers, since Hensel's lemma allows lifting a factorization over the field with p elements to a factorization over the p-adic integers.
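As a hedged illustration (not part of the article), the factorization pattern over a finite field described above can be checked with SymPy, assuming it is installed; the variable names are only for the example:

# Factor Phi_n over GF(p) and compare with the prediction:
# phi(n)/d irreducible factors, each of degree d = order of p modulo n.
from sympy import symbols, cyclotomic_poly, factor_list, totient
from sympy.ntheory import n_order

x = symbols('x')
n, p = 12, 5                              # p must not divide n
d = n_order(p, n)                         # multiplicative order of p modulo n (here 2)
_, factors = factor_list(cyclotomic_poly(n, x), modulus=p)
degrees = sorted(f.as_poly(x).degree() for f, _ in factors)
print(d, degrees)                         # expect two factors, each of degree 2
expected = int(totient(n)) // d           # number of irreducible factors
assert len(degrees) == expected and all(deg == d for deg in degrees)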
Polynomial values
If x takes any real value, then $\Phi _{n}(x)>0$ for every n ≥ 3 (this follows from the fact that the roots of a cyclotomic polynomial are all non-real, for n ≥ 3).
For studying the values that a cyclotomic polynomial may take when x is given an integer value, it suffices to consider only the case n ≥ 3, as the cases n = 1 and n = 2 are trivial (one has $\Phi _{1}(x)=x-1$ and $\Phi _{2}(x)=x+1$).
For n ≥ 2, one has
$\Phi _{n}(0)=1,$
$\Phi _{n}(1)=1$ if n is not a prime power,
$\Phi _{n}(1)=p$ if $n=p^{k}$ is a prime power with k ≥ 1.
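For example, $\Phi _{8}(1)=2$ and $\Phi _{9}(1)=3$ (prime power cases), while $\Phi _{12}(1)=1$ because 12 is not a prime power.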
The values that a cyclotomic polynomial $\Phi _{n}(x)$ may take for other integer values of x is strongly related with the multiplicative order modulo a prime number.
More precisely, given a prime number p and an integer b coprime with p, the multiplicative order of b modulo p is the smallest positive integer n such that p is a divisor of $b^{n}-1.$ For b > 1, the multiplicative order of b modulo p is also the shortest period of the representation of 1/p in the numeral base b (see Unique prime; this explains the notation choice).
The definition of the multiplicative order implies that, if n is the multiplicative order of b modulo p, then p is a divisor of $\Phi _{n}(b).$ The converse is not true, but one has the following.
If n > 0 is a positive integer and b > 1 is an integer, then (see below for a proof)
$\Phi _{n}(b)=2^{k}gh,$
where
• k is a non-negative integer, always equal to 0 when b is even. (In fact, if n is neither 1 nor 2, then k is either 0 or 1. Besides, if n is not a power of 2, then k is always equal to 0)
• g is 1 or the largest odd prime factor of n.
• h is odd, coprime with n, and its prime factors are exactly the odd primes p such that n is the multiplicative order of b modulo p.
This implies that, if p is an odd prime divisor of $\Phi _{n}(b),$ then either n is a divisor of p − 1 or p is a divisor of n. In the latter case, $p^{2}$ does not divide $\Phi _{n}(b).$
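For example, $\Phi _{6}(5)=5^{2}-5+1=21=3\times 7$: the factor 7 satisfies $7\equiv 1{\pmod {6}}$ (indeed 6 is the multiplicative order of 5 modulo 7), while the factor 3 is the largest odd prime factor of n = 6 and $3^{2}$ does not divide $\Phi _{6}(5).$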
Zsigmondy's theorem implies that the only cases where b > 1 and h = 1 are
${\begin{aligned}\Phi _{1}(2)&=1\\\Phi _{2}\left(2^{k}-1\right)&=2^{k}&&k>0\\\Phi _{6}(2)&=3\end{aligned}}$
It follows from above factorization that the odd prime factors of
${\frac {\Phi _{n}(b)}{\gcd(n,\Phi _{n}(b))}}$
are exactly the odd primes p such that n is the multiplicative order of b modulo p. This fraction may be even only when b is odd. In this case, the multiplicative order of b modulo 2 is always 1.
There are many pairs (n, b) with b > 1 such that $\Phi _{n}(b)$ is prime. In fact, Bunyakovsky conjecture implies that, for every n, there are infinitely many b > 1 such that $\Phi _{n}(b)$ is prime. See OEIS: A085398 for the list of the smallest b > 1 such that $\Phi _{n}(b)$ is prime (the smallest b > 1 such that $\Phi _{n}(b)$ is prime is about $\lambda \cdot \varphi (n)$, where $\lambda $ is Euler–Mascheroni constant, and $\varphi $ is Euler's totient function). See also OEIS: A206864 for the list of the smallest primes of the form $\Phi _{n}(b)$ with n > 2 and b > 1, and, more generally, OEIS: A206942, for the smallest positive integers of this form.
Proofs
• Values of $\Phi _{n}(1).$ If $n=p^{k+1}$ is a prime power, then
$\Phi _{n}(x)=1+x^{p^{k}}+x^{2p^{k}}+\cdots +x^{(p-1)p^{k}}\qquad {\text{and}}\qquad \Phi _{n}(1)=p.$
If n is not a prime power, let $P(x)=1+x+\cdots +x^{n-1},$ we have $P(1)=n,$ and P is the product of the $\Phi _{k}(x)$ for k dividing n and different of 1. If p is a prime divisor of multiplicity m in n, then $\Phi _{p}(x),\Phi _{p^{2}}(x),\cdots ,\Phi _{p^{m}}(x)$ divide P(x), and their values at 1 are m factors equal to p of $n=P(1).$ As m is the multiplicity of p in n, p cannot divide the value at 1 of the other factors of $P(x).$ Thus there is no prime that divides $\Phi _{n}(1).$
• If n is the multiplicative order of b modulo p, then $p\mid \Phi _{n}(b).$ By definition, $p\mid b^{n}-1.$ If $p\nmid \Phi _{n}(b),$ then p would divide another factor $\Phi _{k}(b)$ of $b^{n}-1,$ and would thus divide $b^{k}-1,$ showing that, if that were the case, n would not be the multiplicative order of b modulo p.
• The other prime divisors of $\Phi _{n}(b)$ are divisors of n. Let p be a prime divisor of $\Phi _{n}(b)$ such that n is not the multiplicative order of b modulo p. If k is the multiplicative order of b modulo p, then p divides both $\Phi _{n}(b)$ and $\Phi _{k}(b).$ The resultant of $\Phi _{n}(x)$ and $\Phi _{k}(x)$ may be written $P\Phi _{k}+Q\Phi _{n},$ where P and Q are polynomials. Thus p divides this resultant. As k divides n, and the resultant of two polynomials divides the discriminant of any common multiple of these polynomials, p divides also the discriminant $n^{n}$ of $x^{n}-1.$ Thus p divides n.
• g and h are coprime. In other words, if p is a prime common divisor of n and $\Phi _{n}(b),$ then n is not the multiplicative order of b modulo p. By Fermat's little theorem, the multiplicative order of b is a divisor of p − 1, and thus smaller than n.
• g is square-free. In other words, if p is a prime common divisor of n and $\Phi _{n}(b),$ then $p^{2}$ does not divide $\Phi _{n}(b).$ Let n = pm. It suffices to prove that $p^{2}$ does not divide S(b) for some polynomial S(x), which is a multiple of $\Phi _{n}(x).$ We take
$S(x)={\frac {x^{n}-1}{x^{m}-1}}=1+x^{m}+x^{2m}+\cdots +x^{(p-1)m}.$
The multiplicative order of b modulo p divides gcd(n, p − 1), which is a divisor of m = n/p. Thus c = bm − 1 is a multiple of p. Now,
$S(b)={\frac {(1+c)^{p}-1}{c}}=p+{\binom {p}{2}}c+\cdots +{\binom {p}{p}}c^{p-1}.$
As p is prime and greater than 2, all the terms but the first one are multiples of $p^{2}.$ This proves that $p^{2}\nmid \Phi _{n}(b).$
Applications
Using $\Phi _{n}$, one can give an elementary proof for the infinitude of primes congruent to 1 modulo n,[14] which is a special case of Dirichlet's theorem on arithmetic progressions.
Proof
Suppose $p_{1},p_{2},\ldots ,p_{k}$ is a finite list of primes congruent to $1$ modulo $n.$ Let $N=np_{1}p_{2}\cdots p_{k}$ and consider $\Phi _{n}(N)$. Let $q$ be a prime factor of $\Phi _{n}(N)$ (to see that $\Phi _{n}(N)\neq \pm 1$ decompose it into linear factors and note that 1 is the closest root of unity to $N$). Since $\Phi _{n}(x)\equiv \pm 1{\pmod {x}},$ we know that $q$ is a new prime not in the list. We will show that $q\equiv 1{\pmod {n}}.$
Let $m$ be the order of $N$ modulo $q.$ Since $\Phi _{n}(N)\mid N^{n}-1$ we have $N^{n}-1\equiv 0{\pmod {q}}$. Thus $m\mid n$. We will show that $m=n$.
Assume for contradiction that $m<n$. Since
$\prod _{d\mid m}\Phi _{d}(N)=N^{m}-1\equiv 0{\pmod {q}}$
we have
$\Phi _{d}(N)\equiv 0{\pmod {q}},$
for some $d<n$. Then $N$ is a double root of
$\prod _{d\mid n}\Phi _{d}(x)\equiv x^{n}-1{\pmod {q}}.$
Thus $N$ must be a root of the derivative so
$\left.{\frac {d(x^{n}-1)}{dx}}\right|_{N}\equiv nN^{n-1}\equiv 0{\pmod {q}}.$
But $q\nmid N$ and therefore $q\nmid n.$ This is a contradiction so $m=n$. The order of $N{\pmod {q}},$ which is $n$, must divide $q-1$. Thus $q\equiv 1{\pmod {n}}.$
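To illustrate the construction: with $n=4$ and the single listed prime $p_{1}=5$, one takes $N=4\cdot 5=20$ and $\Phi _{4}(N)=20^{2}+1=401$, which is prime; as predicted, $401\equiv 1{\pmod {4}}$ and 401 is not in the original list.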
See also
• Cyclotomic field
• Aurifeuillean factorization
• Root of unity
References
1. Roman, Stephen (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (Third ed.), Springer, p. 465 §18, ISBN 978-0-387-72828-5
2. Sloane, N. J. A. (ed.), "Sequence A013595", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation
3. Brookfield, Gary (2016), "The coefficients of cyclotomic polynomials", Mathematics Magazine, 89 (3): 179–188, doi:10.4169/math.mag.89.3.179, JSTOR 10.4169/math.mag.89.3.179, MR 3519075
4. Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
5. Cox, David A. (2012), "Exercise 12", Galois Theory (2nd ed.), John Wiley & Sons, p. 237, doi:10.1002/9781118218457, ISBN 978-1-118-07205-9.
6. Weisstein, Eric W., "Cyclotomic Polynomial", MathWorld
7. Sanna, Carlo (2021), "A Survey on Coefficients of Cyclotomic Polynomials", arXiv:2111.04034 [math.NT]
8. Isaacs, Martin (2009), Algebra: A Graduate Course, AMS Bookstore, p. 310, ISBN 978-0-8218-4799-2
9. Maier, Helmut (2008), "Anatomy of integers and cyclotomic polynomials", in De Koninck, Jean-Marie; Granville, Andrew; Luca, Florian (eds.), Anatomy of integers. Based on the CRM workshop, Montreal, Canada, March 13-17, 2006, CRM Proceedings and Lecture Notes, vol. 46, Providence, RI: American Mathematical Society, pp. 89–95, ISBN 978-0-8218-4406-9, Zbl 1186.11010
10. Gauss, DA, Articles 356-357
11. Riesel, Hans (1994), Prime Numbers and Computer Methods for Factorization (2nd ed.), Boston: Birkhäuser, pp. 309–316, 436, 443, ISBN 0-8176-3743-5
12. Beiter, Marion (April 1968), "Magnitude of the Coefficients of the Cyclotomic Polynomial $F_{pqr}(x)$", The American Mathematical Monthly, 75 (4): 370–372, doi:10.2307/2313416, JSTOR 2313416
13. Lidl, Rudolf; Niederreiter, Harald (2008), Finite Fields (2nd ed.), Cambridge University Press, p. 65.
14. S. Shirali. Number Theory. Orient Blackswan, 2004. p. 67. ISBN 81-7371-454-1
Further reading
Gauss's book Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
• Gauss, Carl Friedrich (1986) [1801], Disquisitiones Arithmeticae, Translated into English by Clarke, Arthur A. (2nd corr. ed.), New York: Springer, ISBN 0387962549
• Gauss, Carl Friedrich (1965) [1801], Untersuchungen über höhere Arithmetik (Disquisitiones Arithmeticae & other papers on number theory), Translated into German by Maser, H. (2nd ed.), New York: Chelsea, ISBN 0-8284-0191-8
• Lemmermeyer, Franz (2000), Reciprocity Laws: from Euler to Eisenstein, Springer Monographs in Mathematics, Berlin: Springer, doi:10.1007/978-3-662-12893-0, ISBN 978-3-642-08628-1
External links
• Weisstein, Eric W., "Cyclotomic polynomial", MathWorld
• "Cyclotomic polynomials", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• OEIS sequence A013595 (Triangle of coefficients of cyclotomic polynomial Phi_n(x) (exponents in increasing order))
• OEIS sequence A013594 (Smallest order of cyclotomic polynomial containing n or −n as a coefficient)
\begin{document}
\baselineskip=18pt plus1pt
\setcounter{secnumdepth}{3} \setcounter{tocdepth}{3}
\title{Completeness of the ZX-calculus}
\begin{dedication}
This thesis is dedicated to\\
someone\\ for some special reason\\ \end{dedication}
\begin{abstract} The ZX-calculus is an intuitive but also mathematically strict graphical language for quantum computing, which is especially powerful for the framework of quantum circuits. Completeness of the ZX-calculus means that any equality of matrices whose sizes are powers of $2$ can be derived purely diagrammatically.
In this thesis, we give the first complete axiomatisation of the ZX-calculus for the overall pure qubit quantum mechanics, via a translation from the completeness result of another graphical language for quantum computing, the ZW-calculus. This paves the way for automated pictorial quantum computing, with the aid of software such as Quantomatic.
Based on this universal completeness, we directly obtain a complete axiomatisation of the ZX-calculus for Clifford+T quantum mechanics, which is approximately universal for quantum computing, by restricting the ring of complex numbers to the subring corresponding to the Clifford+T fragment, resting on the completeness theorem of the ZW-calculus for an arbitrary commutative ring.
Furthermore, we prove the completeness of the ZX-calculus (with just 9 rules) for 2-qubit Clifford+T circuits by verifying the complete set of 17 circuit relations in diagrammatic rewriting. This is an important step towards efficient simplification of general n-qubit Clifford+T circuits, considering that we now have all the necessary rules for diagrammatic quantum reasoning and a very simple construction of the Toffoli gate within our axiomatisation framework, which together with the Hadamard gate is approximately universal for quantum computation.
In addition to completeness results within the qubit-related formalism, we extend the completeness of the ZX-calculus for qubit stabilizer quantum mechanics to the qutrit stabilizer system.
Finally, we show with some examples the application of the ZX-calculus to the proof of generalised supplementarity, the representation of entanglement classification and the Toffoli gate, as well as equivalence-checking for the UMA gate.
\end{abstract}
\begin{romanpages} \tableofcontents
\end{romanpages}
\chapter{Introduction}
There are two paradigms in western metaphysics: substances and processes. Substance metaphysics sees objects as the basic constituents of the universe, while process metaphysics takes processes rather than objects as fundamental \cite{Celso}. Meanwhile, an influential eastern philosophy, the Madhyamaka philosophy \cite{Nagarjuna}, has as its key idea that all things (dharmas) are empty of substance or essence (svabh$\bar{a}$va) because of dependent arising (Prat$\bar{\i}$tyasamutp$\bar{a}$da), and is thus close to the view of process metaphysics. Following the spirit of process philosophy and the Madhyamaka philosophy, we think the theory of processes has scientific and philosophical advantages, especially in its application to quantum physics. In fact, if we treat quantum processes as transformations between different types of quantum systems and highlight the composition of processes, then we arrive at the theory of categorical quantum mechanics (CQM) proposed by Abramsky and Coecke \cite{Coeckesamson}. There the theory of processes can be made strict in the mathematical framework of symmetric monoidal categories \cite{MacLane}. It would be of great interest to know how processes could be fundamental, with objects being less important, in a mathematical formulation of process theory for quantum mechanics. To see this, we need some concepts from category theory. This part will be presented in Chapter \ref{backgrd}; standard references can be found in \cite{MacLane} and \cite{borceux_1994}.
Now we give an introduction to the main theme of this thesis, the ZX-calculus. The ZX-calculus, introduced by Coecke and Duncan \cite{coeckeduncan2008, CoeckeDuncan}, is an intuitive yet mathematically strict graphical language for quantum computing: it is formulated within the framework of compact closed categories, which have a rigorous underpinning for graphical calculus \cite{Joyal}, and it is an important branch of CQM \cite{Coeckesamson}. Notably, it has simple rewriting rules for transforming one diagram into another. Each diagram in the ZX-calculus has a so-called standard interpretation \cite{PerdrixWang} in finite-dimensional Hilbert spaces \cite{SELINGER2011113}, which makes it relevant for quantum computing. For the past ten years, the ZX-calculus has been applied successfully in quantum information and quantum computation (QIC) \cite{Nielsen}, in particular (topological) measurement-based quantum computing \cite{Duncanpx, Horsman} and quantum error correction \cite{DuncanLucas, ckzh}. Very recently, the ZX-calculus has also been used for reducing the cost of implementing quantum programs \cite{CQC}.
It is clear that the usefulness of the ZX-calculus rests on the properties of this theory. There are three main properties of the ZX-calculus: soundness, universality and completeness. Soundness means that all the rules in the ZX-calculus have a correct standard interpretation in Hilbert spaces. Universality concerns whether there exists a ZX-calculus diagram for every linear map in Hilbert spaces under the standard interpretation. Completeness refers to whether an equation of diagrams can be derived in the ZX-calculus whenever the corresponding equation of linear maps under the standard interpretation holds true.
In the framework of category theory, the ZX-calculus is just a PROP \cite{BONCHI2017144}, and a Hilbert space model of the ZX-calculus is a symmetric monoidal category that is equivalent to a PROP, with objects generated by (tensor powers of) a Hilbert space of dimension $d>1$ (called the qubit model for $d=2$ and the qudit model for $d>2$). The three main properties of the ZX-calculus are precisely properties of the interpretation from the ZX-calculus to its Hilbert space model: soundness means that this interpretation is a symmetric monoidal functor, while universality and completeness mean that this functor is full and faithful, respectively.
The soundness of the ZX-calculus relative to the qubit and qudit models has been shown in \cite{CoeckeDuncan} and \cite{Ranchin} respectively. The universality of the ZX-calculus with respect to the qubit and qudit models has also been shown in \cite{CoeckeDuncan} and \cite{BianWang2} respectively. The ZX-calculus has been proved to be complete for qubit stabilizer quantum mechanics (qubit model) \cite{Miriam1}, and has recently been further axiomatised to be complete for qubit Clifford+T quantum mechanics \cite{Emmanuel}, an approximately universal fragment of quantum mechanics which has been widely used in quantum computing \cite{Boykin}.
The axiomatisation given in \cite{Emmanuel} relies on a complicated translation from the ZX-calculus to another graphical calculus, the ZW-calculus \cite{amar1}. Because the ZX-calculus is not easy to use for exploring properties of multipartite entangled quantum states, Coecke and Kissinger proposed a new graphical calculus called the GHZ/W-calculus, which is based on the interaction of the special commutative Frobenius algebras induced by GHZ-states and the anti-special commutative Frobenius algebras induced by W-states \cite{Coeckealeksmen}. In \cite{amar1}, Hadzihasanovic extended the GHZ/W-calculus into the ZW-calculus, with diagrams corresponding to integer matrices, modelled on the ZX-calculus. Most importantly, the ZW-calculus was proved to be complete for pure qubit states with integer coefficients. It was precisely this completeness result that allowed Jeandel, Perdrix and Vilmart to give a complete axiomatisation of the ZX-calculus for Clifford+T quantum mechanics.
However, even with all the rules from the complete axiomatisation for qubit Clifford +T quantum mechanics, the ZX-calculus is still incomplete for the overall qubit quantum mechanics, as suggested in \cite{Vladimir} and proved in \cite{JPV2}. On the other hand,
the ZW-calculus has been generalised to a new version, the $ZW_R$-calculus, where $R$ is an arbitrary commutative ring, and the $ZW_R$-calculus has been proved to be complete for $R$-bits (analogues of qubits with coefficients in $R$) \cite{amar}.
In this thesis, building on the result of \cite{Emmanuel}, we give the first complete axiomatisation of the ZX-calculus for the overall qubit quantum mechanics (which will be called the $ZX_{ full}$-calculus in this thesis) \cite{ngwang}, based on the completeness result of the $ZW_{ \mathbb{C}}$-calculus, where $\mathbb{C}$ is the field of complex numbers. Following our results, the paper \cite{JPV2} appeared, which also gives an axiomatisation of the ZX-calculus for the entire qubit quantum mechanics, with different generators of diagrams and different rewriting rules. Chapter 2 of this thesis will show the details of the complete axiomatisation of the $ZX_{ full}$-calculus.
Given the complete axiomatisation of the $ZX_{ full}$-calculus \cite{ngwang}, we also obtain a complete axiomatisation of the ZX-calculus for Clifford+T quantum mechanics (which will be called the $ZX_{ C+T}$-calculus in this thesis) by restricting the ring $\mathbb{C}$ to its subring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$, which exactly corresponds to the Clifford+T fragment, resting on the completeness theorem of the $ZW_R$-calculus. In contrast to the first complete axiomatisation of the ZX-calculus for the Clifford+T fragment \cite{Emmanuel}, we have two new generators, a triangle and a $\lambda$ box, as features rather than novelties: the triangle can be employed as an essential component to construct a Toffoli gate in a very simple form, while the $\lambda$ box can be slightly extended to a generalised phase so that the generalised supplementarity (also called cyclotomic supplementarity) \cite{jpvw} is naturally seen as a special case of the generalised spider rule. In addition, due to the introduction of the new generators, our proof that the Clifford+T fragment of the ZX-calculus exactly corresponds to matrices over the ring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$ is much simpler than the corresponding proof given in \cite{Emmanuel}. These results are shown in detail in Chapter 3.
Considering that Clifford+T quantum circuits are the ones most frequently used in quantum computation, it would be more efficient to use a small set of ZX rules for the purpose of circuit simplification, even though the ZX-calculus is complete for both the overall pure qubit quantum mechanics \cite{ngwang} and the Clifford+T pure qubit quantum mechanics \cite{Emmanuel}. In Chapter 4, we prove the completeness of the ZX-calculus (with just 9 rules) for 2-qubit Clifford+T circuits by verifying the complete set of 17 circuit relations \cite{ptbian} in diagrammatic rewriting. As a consequence, our result can also be seen as a completeness result for the single-qubit Clifford+T ZX-calculus \cite{Miriam1ct}.
In addition, we are able to give an analytic solution for converting from ZXZ to XZX Euler decompositions of single-qubit unitary gates as suggested by Schr\"oder de Witt and Zamdzhiev \cite{Vladimir}.
Since there already exist qutrit and general qudit versions of the ZX-calculus \cite{Ranchin, BianWang2}, it is natural to ask whether one could generalise the completeness result of the qubit ZX-calculus to the qudit version for arbitrary dimension $d$. As for the completeness of the ZX-calculus for the overall and generalised Clifford+T qudit quantum mechanics, no result is available. Fortunately, the completeness of the ZX-calculus for qubit stabilizer quantum mechanics can be generalised to the qutrit case, which will be shown explicitly in Chapter 5.
Having established the completeness results above, it is interesting to see how they can be applied. In Chapter 6, we show by some examples the application of the ZX-calculus to the proof of generalised supplementarity, the representation of entanglement classification and the Toffoli gate, as well as equivalence-checking for the UMA gate.
Finally, in Chapter 7, we conclude this thesis with some open problems.
\chapter{Background}\label{backgrd}
In this chapter we present the background knowledge required for this thesis, including concepts related to category theory, the ZX-calculus, and the ZW-calculus.
\section{Some concepts from category theory} In this section, we give some categorical concepts constituting the theoretical framework of this thesis, the standard references for which can be found in \cite{MacLane} and \cite{borceux_1994}.
\subsection*{Category} A category $\mathfrak{C}$ consists of: \begin{itemize} \item a class of objects $ob(\mathfrak{C})$; \item for each pair of objects $A, B$, a set $\mathfrak{C}(A, B)$ of morphisms from $A$ to $B$; \item for each triple of objects $A, B, C$, a composition map \[ \begin{array}{ccc} \mathfrak{C}(B, C) \times \mathfrak{C}(A, B)& \longrightarrow & \mathfrak{C}(A, C)\\ (g, f) & \mapsto & g\circ f; \end{array} \] \item for each object $A$, an identity morphism $1_A \in \mathfrak{C}(A, A)$, \end{itemize} satisfying the following axioms: \begin{itemize} \item associativity: for any $f\in \mathfrak{C}(A, B), g\in \mathfrak{C}(B, C), h\in \mathfrak{C}(C, D)$, there holds $(h\circ g)\circ f=h\circ (g\circ f)$; \item identity law: for any $f\in \mathfrak{C}(A, B), 1_B\circ f=f=f\circ 1_A$. \end{itemize}
A morphism $f\in \mathfrak{C}(A, B)$ is an \textit{ isomorphism} if there exists a morphism $g\in \mathfrak{C}(B, A)$ such that $g\circ f=1_A$ and $f\circ g=1_B$. A \textit{product category} $\mathfrak{A} \times \mathfrak{B}$ can be defined componentwise by two categories $\mathfrak{A}$ and $\mathfrak{B}$.
\subsection*{Functor} Given categories $\mathfrak{C}$ and $\mathfrak{D}$, a functor $F: \mathfrak{C} \longrightarrow \mathfrak{D}$ consists of: \begin{itemize} \item a mapping \[ \begin{array}{ccc} \mathfrak{C}& \longrightarrow & \mathfrak{D}\\ A & \mapsto & F(A); \end{array} \]
\item for each pair of objects $A, B$ of $\mathfrak{C}$, a map \[ \begin{array}{ccc} \mathfrak{C}(A, B) & \longrightarrow &\mathfrak{D}(F(A), F(B))\\ f & \mapsto & F(f), \end{array} \]
\end{itemize} satisfying the following axioms: \begin{itemize} \item preserving composition: for any morphisms $f\in \mathfrak{C}(A, B), g\in \mathfrak{C}(B, C)$, there holds $F(g\circ f)=F(g)\circ F( f))$; \item preserving identity: for any object $A$ of $\mathfrak{C}$, $ F(1_A)=1_{F(A)}$.
\end{itemize}
A functor $F: \mathfrak{C} \longrightarrow \mathfrak{D}$ is \textit{ faithful (full)} if for each pair of objects $A, B$ of $\mathfrak{C}$, the map \[ \begin{array}{ccc} \mathfrak{C}(A, B) & \longrightarrow &\mathfrak{D}(F(A), F(B))\\ f & \mapsto & F(f) \end{array} \] is injective (surjective).
\subsection*{Natural transformation}
Let $F, G: \mathfrak{C}\longrightarrow \mathfrak{D} $ be two functors. A natural transformation $\tau : F \rightarrow G$ is a family $(\tau_A:F(A)\longrightarrow G(A))_{A\in \mathfrak{C}} $ of morphisms in $ \mathfrak{D}$ such that the following square commutes:
\begin{center}
\begin{tikzpicture}
\node (1) at ( -0.5,4) [] {$F(A)$}; \node at ( 0.5,4.2) [] {$\tau_A$}; \node at ( -1,3.3) [] {$F(f)$}; \node (2) at ( 1.5,4) [] {$G(A)$}; \node (3) at ( -0.5,2.5) [] {$F(B)$}; \node at ( 0.5,2.72) [] {$\tau_B$}; \node at ( 2,3.3) [] {$G(f)$}; \node (4) at ( 1.5,2.5) [] {$G(B)$};
\draw [->] (1) -- (2) {} ; \draw [->] (3) -- (4) {} ; \draw [->] (1) -- (3) {} ; \draw [->] (2) -- (4) {} ;
\end{tikzpicture}
\end{center}
\noindent for all morphisms $f\in \mathfrak{C}(A,B) $. A natural isomorphism is a natural transformation where each of the $\tau_A$ is an isomorphism.
\subsection*{Strict monoidal category}
A strict monoidal category consists of:
\begin{itemize}
\item a category $\mathfrak{C}$;
\item a unit object $I\in ob(\mathfrak{C})$;
\item a bifunctor $- \otimes - : \mathfrak{C} \times \mathfrak{C} \longrightarrow \mathfrak{C} $,
\end{itemize} satisfying \begin{itemize} \item associativity: for each triple of objects $A, B, C$ of $\mathfrak{C}$, $A\otimes (B \otimes C)=(A\otimes B) \otimes C$; for each triple of morphisms $f, g, h$ of $\mathfrak{C}$, $f\otimes (g \otimes h)=(f\otimes g) \otimes h$;
\item unit law: for each object $A$ of $\mathfrak{C}$, $A\otimes I= A=I\otimes A$; for each morphism $f$ of $\mathfrak{C}$, $f\otimes 1_I= f=1_I\otimes f$. \end{itemize}
\subsection*{ Strict symmetric monoidal category}
A strict monoidal category $\mathfrak{C}$ is symmetric if it is equipped with a natural isomorphism
\begin{center} $\sigma_{A,B} : A \otimes B \rightarrow B \otimes A$
\end{center}
\noindent for all objects $A, B, C$ of $\mathfrak{C}$ satisfying:
$$\sigma_{B,A} \circ \sigma_{A,B} =1_{A \otimes B}, ~~\sigma_{A,I}=1_A, ~~ (1_B \otimes \sigma_{A,C}) \circ (\sigma_{A,B} \otimes 1_C) =\sigma_{A, B\otimes C}.$$
\iffalse The swap morphism is graphically represented as the following:
\begin{center}
\begin{tikzpicture}
\node at ( -1.5,3) [] {$\sigma_{A,B}$ $:=$};
\draw [-] ( 0,2.3) .. controls ( 0,3) and ( 1,3.1) .. ( 1,3.8);
\draw [-] ( 1,2.3) .. controls ( 1,3) and ( 0,3.1) .. ( 0,3.8);
\node at ( 1.3,2.5) [] {$B$}; \node at ( -0.3,2.5) [] {$A$};
\node at ( 1.3,3.6) [] {$A$}; \node at ( -0.3,3.6) [] {$B$};
\end{tikzpicture}
\end{center} \fi
\subsection*{Strict monoidal functor} Given two strict monoidal categories $\mathfrak{C}$ and $\mathfrak{D}$, a strict monoidal functor $F: \mathfrak{C} \longrightarrow \mathfrak{D}$ is a functor $F: \mathfrak{C} \longrightarrow \mathfrak{D}$ such that $F(A)\otimes F(B)=F(A\otimes B), F(f)\otimes F(g)=F(f\otimes g), F(I_{\mathfrak{C}})=I_{\mathfrak{D}}$, for any objects $A, B$ of $\mathfrak{C}$, and any morphisms $f\in\mathfrak{C}(A, A_1), g\in\mathfrak{C}(B, B_1)$.
A strict symmetric monoidal functor $F$ is a strict monoidal functor that preserves the symmetry structure, i.e., $ F(\sigma_{A,B})= \sigma_{F(A),F(B)}$.
\subsection*{Self-dual strict compact closed category } A self-dual strict compact closed category is a strict symmetric monoidal category $\mathfrak{C}$ such that for each object $A$ of $\mathfrak{C}$, there exist two morphisms
$$\epsilon_{A} : A \otimes A \rightarrow I, ~~ \eta_{A} : I \rightarrow A \otimes A$$
satisfying:
$$ (\epsilon_{A} \otimes 1_A ) \circ (1_A \otimes \eta_A) =1_A, ~~ (1_A \otimes \epsilon_{A} ) \circ (\eta_A \otimes 1_A) =1_A.$$
Note that here we use the word ``self-dual" in the same sense as in \cite{Coeckebk}, instead of the sense in Peter Selinger's paper \cite{selinger_selfdual_2010}.
\subsection*{PROP} `PROP' is an acronym for `products and permutations', as introduced by Mac Lane \cite{maclane1965}. A PROP is a strict symmetric monoidal category having the natural numbers as objects, with the tensor product of objects given by addition. A morphism between two PROPs is a strict symmetric monoidal functor that is the identity on objects \cite{BaezCR}.
As an example, the category $\mathbf{FinSet}$ is a PROP whose objects are all finite sets and whose morphisms are all functions between them.
Just as any group can be represented by generators and relations, any PROP can be described as a presentation in terms of generators and relations, which is proved in \cite{BaezCR}.
\subsection*{Some typical examples of categories in this thesis} \begin{itemize}
\item $\mathbf{FdHilb_d}$: the category whose objects are complex Hilbert spaces with dimensions $d^k$, where $d > 1$ is a given integer and $k$ is an arbitrary non-negative integer, and whose morphisms are linear maps between the Hilbert spaces with ordinary composition of linear maps as composition of morphisms. The usual Kronecker tensor product is the monoidal tensor, and the field of complex numbers $\mathbb{C}$ (which is a one-dimensional Hilbert space over itself) is the tensor unit.
For each object of $\mathbf{FdHilb_d}$, we can choose an orthonormal basis $\{\ket{i}\}_{0 \leq i \leq d-1}$ denoted in Dirac notation.
\item $\mathbf{FdHilb_d}/s$: the category which has the same objects as $\mathbf{FdHilb_d}$ but whose morphisms are equivalence classes of $\mathbf{FdHilb_d}$-morphisms, given by the following equivalence relation
\[
f \sim g \Leftrightarrow \exists r \in \mathbb{C}\diagdown \{0\} \quad \mbox{such that} \quad f = r \cdot g
\]
\item $\mathbf{Mat}_{R}$: the category whose objects are natural numbers and whose morphisms $M: m \rightarrow n$
are $n \times m$ matrices taking values in a given commutative ring $R$. The composition is matrix multiplication, the monoidal product on objects and morphisms are multiplication of natural numbers and the Kronecker product of matrices respectively. \end{itemize}
\section{ZX-calculus}
The ZX-calculus was introduced by Coecke and Duncan in \cite{coeckeduncan2008, CoeckeDuncan} as a graphical language for describing a pair of complementary quantum observables. The key feature of the ZX-calculus is that it has intuitive rewriting rules which allow one to transform diagrams from one to another, instead of performing tedious matrix calculations. The original ZX-calculus was designed particularly for qubits \cite{Nielsen}, and was then generalised to higher dimensions \cite{BianWang2, Ranchin}. In this section, we will first give a formal definition of the ZX-calculus for arbitrary dimension (called the qudit ZX-calculus), then present the details of the qubit ZX-calculus and the qutrit ZX-calculus, including related properties. Finally, we present another graphical language, the ZW-calculus, in terms of generators and rewriting rules.
\subsection {ZX-calculus in general }\label{zxgeneral}
We will describe the ZX-calculus in the framework of PROPs, in terms of generators and relations, following the presentation in \cite{amar1}. Explicitly, we build the ZX-calculus in the following way: first we give a set $\mathnormal{S}$ of basic diagrams, including the empty diagram and the straight line, as generators, where by a diagram we mean a picture composed of a vertex, $n$ incoming wires (inputs) and $m$ outgoing wires (outputs):
\begin{center}
\tikzfig{diagrams//popmorph}
\end{center}
Note that in this thesis any diagram should be read from top to bottom. \noindent Let $ZX[\mathnormal{S}]$ be the strict monoidal category freely generated by the diagrams of $\mathnormal{S}$ under the parallel composition $\otimes$, where any two diagrams $D_1$ and $D_2$ are placed side-by-side with $D_1$ on the left of $D_2$ ($D_1 \otimes D_2$), and the sequential composition $\circ$, where $D_1$ has the same number of outputs as the number of inputs of $D_2$ and $D_1$ is placed above $D_2$ with the outputs of $D_1$ connected to the inputs of $D_2$ ($D_2 \circ D_1$). It is clear that the empty diagram is a unit for the parallel composition and the diagram of a straight line is a unit for the sequential composition.
Then we give an equivalence relation $\mathnormal{R}$ of diagrams (morphisms) in $ZX[\mathnormal{S}]$ including the diagrammatical representation (see below) of axioms of a self-dual strict compact closed category. Let $ZX[\mathnormal{S}]/\mathnormal{R}$ be the PROP obtained from $ZX[\mathnormal{S}]$ modulo the equivalence relation $\mathnormal{R}$. The pairs in $\mathnormal{R}$ will be called the rules of $ZX[\mathnormal{S}]/\mathnormal{R}$. Furthermore, $ZX[\mathnormal{S}]/\mathnormal{R}$ is called a ZX-calculus if
\begin{itemize} \item the generating set $\mathnormal{S}$ consists of the following basic diagrams:
\begin{center}
\tikzfig{diagrams//quditgspider},\hspace{0.3cm} \tikzfig{diagrams//quditrspider},\hspace{0.3cm} \tikzfig{diagrams//HadaDecomSingleslt},\hspace{0.3cm} \tikzfig{diagrams//swap},\hspace{0.3cm} \tikzfig{diagrams//Id},\hspace{0.3cm} \tikzfig{diagrams//emptysquare},\hspace{0.3cm}\tikzfig{diagrams//cap},\hspace{0.3cm}\tikzfig{diagrams//cup}
\end{center}
where $ m,n\in \mathbb N$, $\overrightarrow{\alpha}=(\alpha_1, \cdots, \alpha_{d-1}), \alpha_i \in [0, 2\pi) $, and the dashed square represents an empty diagram;
\item the equivalence relation $\mathnormal{R}$ consists of two types of rules:
\begin{enumerate}
\item the structure rules for a self-dual compact closed category:
\begin{equation}\label{compactstructure}
\tikzfig{diagrams//compactstructure_cap} \qquad\quad \tikzfig{diagrams//compactstructure_cup} \qquad\quad \tikzfig{diagrams//compactstructure_snake}
\end{equation}
\begin{equation}\label{compactstructureslide}
\tikzfig{diagrams//compactstructure_slide}
\end{equation}
\item non-structural rewriting rules for transforming the generators, typically the spider rule, copy rule, bialgebra rule, Euler decomposition rule, and the colour change rule \cite{CoeckeDuncan}. \end{enumerate}
\end{itemize}
For convenience, we have the following short notation: \[
\tikzfig{diagrams//spiderg} := \tikzfig{diagrams//quditgspider0phase}\qquad\qquad\tikzfig{diagrams//spiderr} := \tikzfig{diagrams//quditrspider0phase} \]
Since the ZX-calculus carries the parameter $d$, we also call it the \textit{ qudit ZX-calculus}; the name becomes meaningful once we give the calculus a semantics. It is called the \textit{ qubit ZX-calculus} if $d=2$, and the \textit{ qutrit ZX-calculus} if $d=3$. We also call a diagram with no inputs and no outputs a \textit{ scalar}.
Now we associate to each diagram in the ZX-calculus a standard interpretation $\llbracket \cdot \rrbracket$ in $\mathbf{FdHilb_d}$:
\[ \left\llbracket \tikzfig{diagrams//quditgspider} \right\rrbracket=\sum_{j=0}^{d-1}e^{i\alpha_j}\ket{j}^{\otimes m}\bra{j}^{\otimes n}, \quad \alpha_0=0, \]
\[ \left\llbracket \tikzfig{diagrams//quditrspider} \right\rrbracket=\sum_{j=0}^{d-1}e^{i\alpha_j}\ket{h_j}^{\otimes m}\bra{h_j}^{\otimes n}, \quad \alpha_0=0,
\ket{h_j}=\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}\xi^{jk}\ket{k}, \quad \xi=e^{i\frac{2\pi}{d}}, \]
\[ \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket=\frac{1}{\sqrt{d}}\sum_{i, j=0}^{d-1}\xi^{ji}\ket{j}\bra{i}, \quad \quad
\left\llbracket\tikzfig{diagrams//emptysquare}\right\rrbracket=1, \quad
\left\llbracket\tikzfig{diagrams//Id}\right\rrbracket=\sum_{j=0}^{d-1}\ket{j}\bra{j},
\]
\[
\left\llbracket\tikzfig{diagrams//swap}\right\rrbracket=\sum_{i, j=0}^{d-1}\ket{ji}\bra{ij},\quad
\left\llbracket\tikzfig{diagrams//cap}\right\rrbracket=\sum_{j=0}^{d-1}\ket{jj}, \quad
\left\llbracket\tikzfig{diagrams//cup}\right\rrbracket=\sum_{j=0}^{d-1}\bra{jj},
\]
\[ \llbracket D_1\otimes D_2 \rrbracket = \llbracket D_1 \rrbracket \otimes \llbracket D_2 \rrbracket, \quad
\llbracket D_1\circ D_2 \rrbracket = \llbracket D_1 \rrbracket \circ \llbracket D_2 \rrbracket.
\]
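To make the above interpretation concrete, the following sketch (in Python/NumPy; an illustration only, with helper names of our own choosing rather than notation from the thesis) builds the interpretations of the main generators for a chosen $d$ and checks two consequences of the definitions: the Hadamard box sends $\ket{j}$ to $\ket{h_j}$, so that the red spider is the green spider conjugated by Hadamard boxes (with $H^{\dagger}$ on the inputs), and the cap and cup satisfy the snake equation of the structural rules (\ref{compactstructure}).
\begin{verbatim}
import numpy as np
from functools import reduce

d = 3                                    # any d >= 2
xi = np.exp(2j * np.pi / d)              # primitive d-th root of unity

def ket(j):                              # computational basis column vector |j>
    v = np.zeros((d, 1), dtype=complex)
    v[j, 0] = 1
    return v

def h(j):                                # |h_j> = (1/sqrt(d)) sum_k xi^(jk) |k>
    return sum(xi**(j * k) * ket(k) for k in range(d)) / np.sqrt(d)

def tens(*ms):                           # Kronecker product of several matrices
    return reduce(np.kron, ms, np.eye(1))

H = np.array([[xi**(j * i) for i in range(d)] for j in range(d)]) / np.sqrt(d)

def green(m, n, alphas):                 # green spider; alphas[0] = 0 by convention
    return sum(np.exp(1j * a) * tens(*[ket(j)] * m) @ tens(*[ket(j).conj().T] * n)
               for j, a in enumerate(alphas))

def red(m, n, alphas):                   # red spider, built from the |h_j> basis
    return sum(np.exp(1j * a) * tens(*[h(j)] * m) @ tens(*[h(j).conj().T] * n)
               for j, a in enumerate(alphas))

alphas = [0.0] + list(np.random.uniform(0, 2 * np.pi, d - 1))
m, n = 2, 1

for j in range(d):                       # the Hadamard box maps |j> to |h_j>
    assert np.allclose(H @ ket(j), h(j))

# the red spider is the green spider conjugated by Hadamard boxes
assert np.allclose(red(m, n, alphas),
                   tens(*[H] * m) @ green(m, n, alphas) @ tens(*[H.conj().T] * n))

# snake equation for cap and cup under the interpretation
cap = sum(np.kron(ket(j), ket(j)) for j in range(d))     # d^2 x 1 column
cup = cap.conj().T                                        # 1 x d^2 row
assert np.allclose(np.kron(np.eye(d), cup) @ np.kron(cap, np.eye(d)), np.eye(d))
\end{verbatim}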
Now we are ready to define three important properties of the ZX-calculus: soundness, universality and completeness. Note that if a diagram $D_1$ in the ZX-calculus can be rewritten into another diagram $D_2$ using the ZX rules, then we denote this as $ZX\vdash D_1=D_2$.
\begin{definition}
The ZX-calculus is called sound if for any two diagrams $D_1$ and $D_2$, $ZX\vdash D_1=D_2$ must imply that $\llbracket D_1 \rrbracket = \llbracket D_2 \rrbracket$.
\end{definition}
\begin{definition}
The ZX-calculus is called universal if for any linear map $L$ in $\mathbf{FdHilb_d}$, there must exist a diagram $D$ in the ZX-calculus such that $\llbracket D \rrbracket = L$.
\end{definition}
\begin{definition}
The ZX-calculus is called complete if for any two diagrams $D_1$ and $D_2$, $\llbracket D_1 \rrbracket = \llbracket D_2 \rrbracket$ must imply that $ZX\vdash D_1=D_2$.
\end{definition}
Among these three properties, soundness means that if an equality of diagrams holds in the ZX-calculus, then the corresponding linear maps under the standard interpretation must coincide. Since every derivable equality follows from the rewriting rules,
soundness can be checked on a rule-by-rule basis. Universality means that every linear map can be represented by a diagram in the ZX-calculus. Completeness means that every true equality of linear maps can be derived graphically.
\begin{proposition} \cite{Ranchin}
The qudit ZX-calculus is sound for any $d \geq 2$.
\end{proposition}
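As an illustration of such a rule-by-rule check, the Euler decomposition rule for $d=2$ expresses the Hadamard gate as a composite of three $\frac{\pi}{2}$ phase gates. Under the standard interpretation this holds up to a global scalar; the exact scalar bookkeeping depends on the precise statement of the rule, so the check below (a Python/NumPy sketch, not part of the formal development) verifies the identity up to the scalar $e^{i\pi/4}$.
\begin{verbatim}
import numpy as np

def Z(a):                  # green phase gate: |0><0| + e^{ia}|1><1|
    return np.diag([1, np.exp(1j * a)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def X(a):                  # red phase gate; for qubits it equals H Z(a) H
    return H @ Z(a) @ H

lhs = Z(np.pi / 2) @ X(np.pi / 2) @ Z(np.pi / 2)
assert np.allclose(lhs, np.exp(1j * np.pi / 4) * H)   # equal to H up to e^{i pi/4}
\end{verbatim}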
\begin{proposition} \cite{BianWang2}
The qudit ZX-calculus is universal for any $d \geq 2$.
\end{proposition}
The proof of the completeness of the ZX-calculus is the main theme of this thesis.
\subsection*{Scalar-free ZX-calculus} A scalar $D$ is called non-zero if $\llbracket D \rrbracket \neq 0$. The ZX-calculus has a scalar-free version in which all non-zero scalars can be ignored.
\begin{definition}
A scalar-free ZX-calculus is obtained from the ZX-calculus $ZX[\mathnormal{S}]/\mathnormal{R}$ modulo an equivalence relation $\mathnormal{F}$ where two diagrams $D_1$ and $D_2$ are in the same equivalence class if there exist non-zero scalars $s$ and $t$ such that $s \otimes D_1= t\otimes D_2$.
\end{definition} The standard interpretation $\llbracket \cdot \rrbracket: ZX[\mathnormal{S}]/\mathnormal{R} \longrightarrow \mathbf{FdHilb_d}$ can be generalised to an interpretation $\llbracket \cdot \rrbracket_{sf}: (ZX[\mathnormal{S}]/\mathnormal{R})/\mathnormal{F} \longrightarrow \mathbf{FdHilb_d}/s$ in a natural way such that the following square commutes:
\begin{equation*} \tikzfig{diagrams//interpresqure} \end{equation*}
By the construction of the scalar-free ZX-calculus and the soundness of the qudit ZX-calculus, it is easy to see that the scalar-free ZX-calculus is sound relative to the interpretation $\llbracket \cdot \rrbracket_{sf}$.
\subsection{Qubit ZX-calculus}\label{qubitzxintro} In this subsection, we describe the qubit ZX-calculus in detail and give some of its useful properties.
The qubit ZX-calculus has the following generators: \begin{table} \begin{center}
\begin{tabular}{|r@{~}r@{~}c@{~}c|r@{~}r@{~}c@{~}c|} \hline
$R_{Z,\alpha}^{(n,m)}$&$:$&$n\to m$ & \tikzfig{diagrams//generator_spider2} & $R_{X,\alpha}^{(n,m)}$&$:$&$n\to m$ & \tikzfig{diagrams//generator_redspider2}\\
\hline $H$&$:$&$1\to 1$ &\tikzfig{diagrams//HadaDecomSingleslt}
& $\sigma$&$:$&$ 2\to 2$& \tikzfig{diagrams//swap}\\\hline
$\mathbb I$&$:$&$1\to 1$&\tikzfig{diagrams//Id} & $e $&$:$&$0 \to 0$& \tikzfig{diagrams//emptysquare}\\\hline
$C_a$&$:$&$ 0\to 2$& \tikzfig{diagrams//cap} &$ C_u$&$:$&$ 2\to 0$&\tikzfig{diagrams//cup} \\\hline \end{tabular}\caption{Generators of qubit ZX-calculus} \label{qbzxgenerator} \end{center} \end{table} where $m,n\in \mathbb N$, $\alpha \in [0, 2\pi)$, and $e$ represents an empty diagram.
The qubit ZX-calculus has non-structural rewriting rules as follows: \begin{figure}
\caption{Non-structural ZX-calculus rules, where $\alpha, \beta\in [0,~2\pi)$.}
\label{figurenon}
\end{figure}
\FloatBarrier Note that all the rules enumerated in Figure \ref{figurenon} still hold when flipped upside-down. Due to the rules (H) and (H2),
the rules in Figure \ref{figurenon} also hold when the colours green and red are swapped. In this thesis, for simplicity, we will not distinguish a rule from its upside-down or colour-swapped version when it is referred to in a diagrammatic rewriting. The structural rules listed in (\ref{compactstructure}) and (\ref{compactstructureslide}) will also be used without being explicitly stated.
If we let $d=2$ in the standard interpretation of the qudit ZX-calculus, then we have the following interpretation for qubits: \[ \left\llbracket \tikzfig{diagrams//generator_spider2} \right\rrbracket=\ket{0}^{\otimes m}\bra{0}^{\otimes n}+e^{i\alpha}\ket{1}^{\otimes m}\bra{1}^{\otimes n}, \left\llbracket \tikzfig{diagrams//generator_redspider2}\right\rrbracket=\ket{+}^{\otimes m}\bra{+}^{\otimes n}+e^{i\alpha}\ket{-}^{\otimes m}\bra{-}^{\otimes n}, \]
\[ \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket=\frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//emptysquare}\right\rrbracket=1, \quad \left\llbracket\tikzfig{diagrams//Id}\right\rrbracket=\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix},
\]
\[
\left\llbracket\tikzfig{diagrams//swap}\right\rrbracket=\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//cap}\right\rrbracket=\begin{pmatrix}
1 \\
0 \\
0 \\
1 \\
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//cup}\right\rrbracket=\begin{pmatrix}
1 & 0 & 0 & 1
\end{pmatrix},
\]
\[ \llbracket D_1\otimes D_2 \rrbracket = \llbracket D_1 \rrbracket \otimes \llbracket D_2 \rrbracket, \quad
\llbracket D_1\circ D_2 \rrbracket = \llbracket D_1 \rrbracket \circ \llbracket D_2 \rrbracket,
\] where $$ \ket{0}= \begin{pmatrix}
1 \\
0 \\
\end{pmatrix}, \quad
\bra{0}=\begin{pmatrix}
1 & 0
\end{pmatrix},
\quad \ket{1}= \begin{pmatrix}
0 \\
1 \\
\end{pmatrix}, \quad
\bra{1}=\begin{pmatrix}
0 & 1
\end{pmatrix},
$$ $$ \ket{+}= \frac{1}{\sqrt{2}}\begin{pmatrix}
1 \\
1 \\
\end{pmatrix}, \quad
\bra{+}=\frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1
\end{pmatrix},
\quad \ket{-}= \frac{1}{\sqrt{2}}\begin{pmatrix}
1 \\
-1 \\
\end{pmatrix}, \quad
\bra{-}=\frac{1}{\sqrt{2}}\begin{pmatrix}
1 & -1
\end{pmatrix}.
$$
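As an illustration of how these generators compose under the interpretation (a Python/NumPy sketch with our own naming, not taken verbatim from the thesis), the well-known ZX presentation of the CNOT gate, a phase-free green spider $1\to 2$ on the control wire joined to a phase-free red spider $2\to 1$ on the target wire, can be checked numerically; the composite equals CNOT up to the nonzero scalar $\frac{1}{\sqrt{2}}$.
\begin{verbatim}
import numpy as np

ket0 = np.array([[1], [0]]); ket1 = np.array([[0], [1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# phase-free green spider 1 -> 2 (copy in the Z basis)
G = np.kron(ket0, ket0) @ ket0.T + np.kron(ket1, ket1) @ ket1.T      # 4 x 2
# phase-free red spider 2 -> 1: |+><++| + |-><--|
plus, minus = H @ ket0, H @ ket1
R = plus @ np.kron(plus, plus).T + minus @ np.kron(minus, minus).T   # 2 x 4

# copy the control, then merge the copied wire with the target wire
cnot_diagram = np.kron(I, R) @ np.kron(G, I)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
assert np.allclose(cnot_diagram, CNOT / np.sqrt(2))
\end{verbatim}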
Below we give some useful properties of the qubit ZX-calculus.
\begin{lemma} \begin{equation}\label{lem:hopf}
\input{diagrams/lemmahopf.tikz}
\end{equation} \end{lemma}
\begin{proof}
Proof in \cite{bpw}. The rules used are $S1$, $S2$, $S3$, $H2$, $H3$, $H$, $B1$, $B2$. \end{proof}
\begin{lemma}
\begin{equation}\label{lem:6b}
\input{diagrams/lemma6b.tikz}
\end{equation} \end{lemma}
\begin{proof}
Proof in \cite{bpw}. The rules and properties used are $S1$, $S2$, $S3$, $H$, $H2$, $H3$, $B1$, $B2$, $EU$, $IV$. \end{proof}
\begin{lemma}
\begin{equation}\label{lem:com}
\input{diagrams/lemmacom.tikz}
\end{equation} \end{lemma}
\begin{proof}
Proof in \cite{bpw}. The rules and properties used are \ref{lem:hopf}, $B2$, $S2$, $H2$, $S1$, $H$, $B1$. \end{proof}
\begin{lemma}
\begin{equation}\label{lem:S4}
\tikzfig{diagrams//alphadeletestate}
\end{equation}
\end{lemma}
\begin{proof}
Proof in \cite{bpw}. The rules and properties used are $S1$, $B1$, $K2$, $\ref{lem:6b}$, $IV$. \end{proof}
\begin{lemma}
\begin{equation}\label{lem:1}
\input{diagrams/lemma1.tikz}
\end{equation} \end{lemma}
\begin{proof}~\newline \newline
\input{diagrams/lemma1_proof.tikz} \end{proof}
\begin{lemma}
\begin{equation}\label{cntswap}
\input{diagrams/cnotswap.tikz}
\end{equation} \end{lemma}
\begin{proof}~\newline \newline
\input{diagrams/cntswap_proof.tikz} \end{proof}
\begin{lemma}
\begin{equation}\label{pbtsaa} \input{diagrams/alphadelete.tikz}
\end{equation} \end{lemma} \begin{proof}
Proof in \cite{bpw}. The rules and properties used are $S1$, $B1$, $K2$, \ref{lem:6b}, $IV$. \end{proof}
\begin{lemma}
\begin{equation}\label{lem:6}
\input{diagrams/lemma6.tikz}
\end{equation}
\end{lemma}
\begin{proof}
\begin{equation*}
\input{diagrams/lemma6-proof.tikz}
\end{equation*}
\end{proof}
\begin{lemma}
\begin{equation}\label{3crossing}
\input{diagrams/3crossings.tikz}
\end{equation} \end{lemma}
\begin{proof} The proof can be done by simply sliding the middle line (the line connecting the second input and the second output) from left to right, by naturality. \end{proof}
\subsection{ Some known completeness results of the qubit ZX-calculus}
It has been shown in \cite{Vladimir} that the original version of the ZX-calculus \cite{CoeckeDuncan}, even augmented with the Euler decomposition of the Hadamard gate, is incomplete for overall pure qubit quantum mechanics (QM). Since then, much effort has been devoted to completing the calculus for various fragments of qubit QM. The $\pi$-fragment of the ZX-calculus (corresponding to diagrams involving only angles that are multiples of $\pi$) was proved complete for real stabilizer QM in \cite{duncanperdrix}, and the $\frac{\pi}{2}$-fragment of the ZX-calculus was shown to be complete for stabilizer QM in \cite{Miriam1}. Moreover, Backens gave a proof of completeness for the single-qubit Clifford+T fragment of the ZX-calculus in \cite{Miriam1ct}.
The next chapters of this thesis will fill the gap between the above results and the universal completeness of the ZX-calculus for the whole QM.
\section{ZW-calculus}
The ZW-calculus is another graphical language for quantum computing, modelled on the ZX-calculus \cite{amar}. Let $R$ be an arbitrary commutative ring. The ZW-calculus with all parameters taken in $R$ is denoted the $ZW_{ R}$-calculus. Like the ZX-calculus, the $ZW_{ R}$-calculus is a self-dual compact closed PROP $\mathfrak{F}$ equipped with a set of rewriting rules. An arbitrary morphism of $\mathfrak{F}$ is a diagram $D:k\to l$ with source object $k$ and target object $l$, composed of the following basic components:
\begin{center}
\begin{tabular}{|r@{~}r@{~}c@{~}c|r@{~}r@{~}c@{~}c|} \hline
$Z$&$:$&$1\to 2$ & \tikzfig{diagrams//generatorwtspider2} & $R$&$:$&$ 1\to 1$& \tikzfig{diagrams//rgatewhite}\\ \hline $\tau$&$:$&$2\to 2$ &\tikzfig{diagrams//corsszw}
& $P$&$:$&$1\to 1$ &\tikzfig{diagrams//piblack} \\\hline
$\sigma$&$:$&$ 2\to 2$& \tikzfig{diagrams//swap} &$\mathbb I$&$:$&$1\to 1$&\tikzfig{diagrams//Id} \\\hline
$e $&$:$&$0 \to 0$& \tikzfig{diagrams//emptysquare} &$W$&$:$&$1\to 2$&\tikzfig{diagrams//wblack}
\\\hline
$C_a$&$:$&$ 0\to 2$& \tikzfig{diagrams//cap} &$ C_u$&$:$&$ 2\to 0$&\tikzfig{diagrams//cup} \\\hline \end{tabular} \end{center}
where $r \in R$, and $e$ represents an empty diagram. With these generators, we can define the following diagrams: \begin{equation}\label{wnodespdef}
\tikzfig{diagrams//wnodesdefadd} \end{equation}
The composition of morphisms is to combine these components in the following two ways: for any two morphisms $D_1:a\to b$ and $D_2: c\to d$, a \textit{ parallel composition} $D_1\otimes D_2 : a+c\to b+d$ is obtained by placing $D_1$ and $D_2$ side-by-side with $D_1$ on the left of $D_2$;
for any two morphisms $D_1:a\to b$ and $D_2: b\to c$, a \textit{ sequential composition} $D_2\circ D_1 : a\to c$ is obtained by placing $D_1$ above $D_2$, connecting the outputs of $D_1$ to the inputs of $D_2$.
There are two kinds of rules for the morphisms of $\mathfrak{F}$: the structure rules for $\mathfrak{F}$ as a compact closed category, shown in (\ref{compactstructure}) and (\ref{compactstructureslide}), and the rewriting rules listed in Figures \ref{figure3}, \ref{figure4} and \ref{figure5}.
Like the ZX-calculus, all the ZW diagrams should be read from top to bottom.
\begin{figure}
\caption{$ZW_{ R}$-calculus rules I}
\label{figure3}
\end{figure}
\begin{figure}
\caption{$ZW_{ R}$-calculus rules II}
\label{figure4}
\end{figure}
\begin{figure}
\caption{$ZW_{ R}$-calculus rules III}
\label{figure5}
\end{figure}
\FloatBarrier
Note that here we have presented a $ZW_{ R}$-calculus generated by a finite set of diagrams. There is, however, an equivalent yet more concise presentation generated by an infinite set of diagrams; both presentations are proposed in \cite{amar}. Owing to the expressions in (\ref{wnodespdef}) and the rule $rng_1$, the white node admits the following spider form, which will be used for the translations between the ZX-calculus and the ZW-calculus.
\begin{equation}\label{wspiderdef}
\input{diagrams/spiderwhitedef.tikz} \end{equation} By the rules $sym_{z,L}$, $sym_{z,R}$ and $asso_{z}$ listed in Figure \ref{figure4}, this white node spider is commutative.
The diagrams in the $ZW_{ R}$-calculus have a standard interpretation $\llbracket \cdot \rrbracket$ in the category $\mathbf{Mat}_{R}$. \[ \left\llbracket \tikzfig{diagrams//generatorwtspider2} \right\rrbracket=\begin{pmatrix}
1 & 0 \\
0 & 0 \\
0 & 0 \\
0 & 1
\end{pmatrix}, \quad \left\llbracket\tikzfig{diagrams//rgatewhite}\right\rrbracket=\ket{0}\bra{0}+r\ket{1}\bra{1}=\begin{pmatrix}
1 & 0 \\
0 & r
\end{pmatrix}. \]
\[ \left\llbracket\tikzfig{diagrams//corsszw}\right\rrbracket=\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//wblack}\right\rrbracket=\begin{pmatrix}
0 & 1 \\
1 & 0 \\
1 & 0 \\
0 & 0
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//piblack}\right\rrbracket=\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//emptysquare}\right\rrbracket=1.
\]
\[ \left\llbracket\tikzfig{diagrams//Id}\right\rrbracket=\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//swap}\right\rrbracket=\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//cap}\right\rrbracket=\begin{pmatrix}
1 \\
0 \\
0 \\
1 \\
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//cup}\right\rrbracket=\begin{pmatrix}
1 & 0 & 0 & 1
\end{pmatrix}.
\]
\[ \llbracket D_1\otimes D_2 \rrbracket = \llbracket D_1 \rrbracket \otimes \llbracket D_2 \rrbracket, \quad
\llbracket D_1\circ D_2 \rrbracket = \llbracket D_1 \rrbracket \circ \llbracket D_2 \rrbracket.
\]
Soundness, universality and completeness of the $ZW_{ R}$-calculus are defined in the same way as for the ZX-calculus. We only recall the following results here. \begin{theorem}\label{zwsounduniv}\cite{amar} The $ZW_{ R}$-calculus is sound and universal. \end{theorem}
\begin{theorem}\label{zwcomplete}\cite{amar} The $ZW_{ R}$-calculus is complete. \end{theorem}
\chapter{Completeness for full qubit quantum mechanics}\label{chfull}
It has been shown in \cite{Vladimir} that the original version of the ZX-calculus \cite{CoeckeDuncan}, even augmented with the Euler decomposition of the Hadamard gate, is incomplete for overall pure qubit quantum mechanics (QM). Since then, much effort has been devoted to completing the calculus for various parts of QM: real QM \cite{duncanperdrix}, stabilizer QM \cite{Miriam1}, single-qubit Clifford+T QM \cite{Miriam1ct} and Clifford+T QM \cite{Emmanuel}. Amongst these, the completeness of the ZX-calculus for Clifford+T QM is especially interesting, since this fragment is approximately universal for QM. Note that the proof in \cite{Emmanuel} relies on the completeness of the ZW-calculus for ``qubits with integer coefficients'' \cite{amar1}.
In this chapter, we give the first complete axiomatisation of the ZX-calculus for the entire qubit QM, i.e., the $ZX_{ full}$-calculus, based on the completeness result for the $ZW_{ \mathbb{C}}$-calculus \cite{amar}. Firstly, we introduce two new generators: a triangle and a family of $\lambda$-labelled boxes ($\lambda \geq 0$), which turn out to be expressible in the ZX-calculus without these symbols. Then we establish mutually inverse translations from ZX to ZW and vice versa. By checking carefully that all the ZW rewriting rules still hold under the translation from ZW to ZX, we complete the proof of completeness of the $ZX_{ full}$-calculus.
Throughout this chapter, the terms ``equation" and ``rewriting rule" will be used interchangeably.
The proof of the completeness of the full qubit ZX-calculus has been published in \cite{amarngwanglics}, with coauthors Amar Hadzihasanovic and Kang Feng Ng.
\section{$ZX_{ full}$-calculus}\label{zxccls}
The $ZX_{ full}$-calculus has generators as listed in Table \ref{qbzxgenerator} plus two new generators given in Table \ref{newgr}.
\begin{table}\begin{center}
\begin{tabular}{|r@{~}r@{~}c@{~}c|r@{~}r@{~}c@{~}c|}
\hline
$L$&$:$&$1\to 1$ &\tikzfig{diagrams//lambdabox} &$T$&$:$&$1\to 1$&\tikzfig{diagrams//triangle} \\\hline
\end{tabular}\caption{New generators with $ \lambda \geq 0$.} \label{newgr}\end{center}
\end{table} \FloatBarrier
It seems that the $ZX_{ full}$-calculus has more generators than the traditional ZX-calculus. However, we will show in Proposition \ref{lem:lamb_tri_decomposition} that they are expressible in terms of red and green nodes.
Also we define the following notation:
\begin{equation}\label{downtriangledef}
\tikzfig{diagrams//downtriangle} \end{equation} Then it is clear that $$ \tikzfig{diagrams//horizontriangle-proof}$$ Thus it makes sense to draw the following picture: $$ \tikzfig{diagrams//horizontriangle}$$
The $ZX_{ full}$-calculus has the same structural rules as that of the traditional ZX-calculus given in (\ref{compactstructure}) and (\ref{compactstructureslide}). Its non-structural rewriting rules are presented in Figures \ref{figure1} (exactly the same as Figure \ref{figurenon}), \ref{figure2} and \ref{figure0}:
\begin{figure}
\caption{Traditional-style ZX-calculus rules, where $\alpha, \beta\in [0,~2\pi)$. The upside-down version and colour swapped version of these rules still hold.}
\label{figure1}
\end{figure}
\begin{figure}
\caption{Extended ZX-calculus rules for the triangle, where $\lambda \geq 0, \alpha \in [0,~2\pi)$. The upside-down versions of these rules still hold.}
\label{figure2}
\end{figure}
\begin{figure}
\caption{Extended ZX-calculus rules for $\lambda$ and addition, where $\lambda, \lambda_1, \lambda_2 \geq 0, \alpha, \beta, \gamma \in [0,~2\pi);$ in (AD), $\lambda e^{i\gamma}
=\lambda_1 e^{i\beta}+ \lambda_2 e^{i\alpha}$. The upside-down versions of these rules still hold.}
\label{figure0}
\end{figure}
\FloatBarrier
The diagrams in the $ZX_{ full}$-calculus have a standard interpretation composed of two parts: the standard interpretation of the traditional qubit ZX-calculus described in Section \ref{qubitzxintro} as well as the interpretation of the new generators triangle and $\lambda$ box to be given below. For simplicity, we still use the notation $\llbracket \cdot \rrbracket$ to denote the standard interpretation for the $ZX_{ full}$-calculus.
\begin{equation}\label{interpretl}
\left\llbracket\tikzfig{diagrams//triangle}\right\rrbracket=\begin{pmatrix}
1 & 1 \\
0 & 1
\end{pmatrix}, \quad
\left\llbracket\tikzfig{diagrams//lambdabox}\right\rrbracket=\begin{pmatrix}
1 & 0 \\
0 & \lambda
\end{pmatrix}.
\end{equation}
\subsection*{Useful derivable results}
Now we derive some identities that will be useful in this thesis.
\begin{lemma}
\begin{equation}\label{lem:inv}
\input{diagrams/lemmainv.tikz}
\end{equation} \end{lemma} \begin{proof}~\newline First we have \begin{equation}\label{lem:inv2}
\input{diagrams/lemmainv_proof1.tikz}
\end{equation} Then ~\newline \input{diagrams/lemmainv_proof2.tikz} \end{proof}
\begin{lemma}
\begin{equation}\label{lem:4}
\input{diagrams/lemma4.tikz}
\end{equation} \end{lemma}
\begin{proof} \begin{equation*}
\input{diagrams/lemma4proof.tikz}
\end{equation*}
\end{proof}
\begin{lemma}
\begin{equation}\label{lem:5}
\input{diagrams/lemma5.tikz}
\end{equation} \end{lemma}
\begin{proof}
\begin{equation*}
\input{diagrams/lemma5-proof.tikz}
\end{equation*}
\end{proof}
\begin{lemma}
\begin{equation}\label{lem:7}
\input{diagrams/lemma7.tikz}
\end{equation} \end{lemma}
\begin{proof}~\newline
$
\input{diagrams/lemma7_proof.tikz}
$ \end{proof}
\section{Simplification of the rules of the $ZX_{ full}$-calculus}\label{simplyrules}
The rules of the $ZX_{ full}$-calculus listed in Figure \ref{figure2} and Figure \ref{figure0}
can be further simplified, and some rules can be derived from others. We did not introduce the simplified version of the rules at the outset because we wanted to show how the
theory of the ZX-calculus developed. In this section, we exhibit how the rules can be simplified or derived.
First we show that the addition rule (AD) in Figure \ref{figure0} can be simplified:
\begin{equation} \tikzfig{diagrams//sumsimplify} \end{equation} This means \begin{equation}\label{addsimplify} \tikzfig{diagrams//sumsimplify2} \end{equation}
From now on, we will call the simplified addition rule (AD$^{\prime}$): \begin{equation*} \tikzfig{diagrams//plusnew} \end{equation*}
As a consequence, we have the following commutativity of addition: \begin{equation}\label{sumcommutativity} \tikzfig{diagrams//sumcommutativity2} \end{equation}
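The side condition of (AD$^{\prime}$), namely that $\lambda$ and $\gamma$ are determined by the complex sum $\lambda e^{i\gamma}=\lambda_1 e^{i\beta}+\lambda_2 e^{i\alpha}$, is ordinary complex arithmetic. The following minimal check (Python; sample values of our own choosing, assuming the sum is non-zero so that the argument is defined) shows how $\lambda$ and $\gamma$ are obtained.
\begin{verbatim}
import numpy as np

lam1, beta = 0.8, 2.1
lam2, alpha = 1.3, 0.4
z = lam1 * np.exp(1j * beta) + lam2 * np.exp(1j * alpha)
lam, gamma = np.abs(z), np.angle(z) % (2 * np.pi)   # lam >= 0, gamma in [0, 2*pi)
assert np.isclose(lam * np.exp(1j * gamma), z)
\end{verbatim}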
Next we prove that some rules in Figure \ref{figure2} are derivable. \begin{lemma} \begin{equation}\label{lem:3} \tikzfig{diagrams//tr3equivl} \end{equation} \end{lemma}
\begin{proof} \begin{equation*} \tikzfig{diagrams//tr3equivlproof} \end{equation*} \end{proof}
\begin{lemma}\label{redundantrules} The rules (TR4), (TR10), and (TR11) can be derived from other rules. \end{lemma}
\begin{proof} For the derivation of (TR4), we have
\begin{equation}\label{TR4}
\input{diagrams/TR4derive.tikz} \end{equation}
For the derivation of (TR10), we have \begin{equation}\label{TR10} \tikzfig{diagrams//TR10derive} \end{equation}
For the derivation of (TR11), we have \begin{equation}\label{TR11} \tikzfig{diagrams//TR11derive} \end{equation}
\end{proof}
The following property will be very useful for deriving (TR5) here and in later sections: \begin{lemma} \begin{equation*} \tikzfig{diagrams//tr10prime} ~~~~ (TR10^{\prime}) \end{equation*} \end{lemma}
\begin{proof} \begin{equation}\label{TR10primederive2}
\tikzfig{diagrams//TR10primederive22} \end{equation} Then \begin{equation*} \tikzfig{diagrams//TR10primederive23} \end{equation*} \end{proof}
\begin{lemma} \begin{equation}\label{TR5derived2} \tikzfig{diagrams//TR5derivepre} \end{equation} \end{lemma}
\begin{proof} \begin{equation*} \tikzfig{diagrams//TR5derive2} \end{equation*} \end{proof}
\begin{lemma} \begin{equation}\label{TR5derived3} \tikzfig{diagrams//TR5derivepre2} \end{equation} \end{lemma} \begin{proof} \begin{equation*} \tikzfig{diagrams//TR5derivepre2proof} \end{equation*} \end{proof}
\begin{lemma} \label{tr5derived} The rule (TR5) can be derived. \end{lemma}
\begin{proof}
\tikzfig{diagrams//TR5derive} \end{proof}
\begin{lemma} \label{TR7} The rule (TR7) can be derived. \end{lemma}
\begin{proof}
\tikzfig{diagrams//TR7derive} \end{proof}
\begin{lemma} The rules (TR13) and (TR14) can be combined. \end{lemma} \begin{proof} Obviously, (TR13) and (TR14) can be combined into a single rule called (TR$13^{\prime}$) as follows: \begin{equation}\label{TR1314com} \tikzfig{diagrams//TR1314combine} \end{equation} \end{proof}
\begin{lemma}\label{L1} The rule (L1) can be derived.
\end{lemma} \begin{proof} By the addition rule, we have \begin{equation*}
\tikzfig{diagrams//l1deriveproof} \end{equation*} Then (L1) directly follows from the spider rule (S1). \end{proof}
\begin{lemma}\label{L2} The rule (L2) can be derived.
\end{lemma}
\begin{proof}
First we write the non-negative number $\lambda$ as the sum of its integer part and its fractional part: $\lambda= [\lambda] +\{\lambda\}$, where $ [\lambda]$ is a non-negative integer and $0\leq\{\lambda\}<1$. Let $n= [\lambda]$ and $\alpha=\arccos\frac{\{\lambda\}}{2}$, which is well defined since $0\leq\frac{\{\lambda\}}{2}<\frac{1}{2}$. It follows that $\{\lambda\}=2\cos\alpha=e^{i\alpha}+e^{-i\alpha}$. If $n=0$, then we have \begin{equation*}
\tikzfig{diagrams//lambdaderive1} \end{equation*} If $n>0$, then we have
\begin{equation}\label{nrepresent}
\tikzfig{diagrams//lambdaderive2} \end{equation} We show this by induction on $n$. When $n=1$, we have \begin{equation*}
\tikzfig{diagrams//lambd1tr} \end{equation*}
Suppose (\ref{nrepresent}) holds for $n$. Then for $n+1$ we have
\begin{equation*}
\tikzfig{diagrams//lambdn1tr} \end{equation*}
This completes the induction. Therefore, \begin{equation*}
\tikzfig{diagrams//lambdaderive3} \end{equation*}
\end{proof}
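The scalar identity used in the preceding proof, $\lambda=[\lambda]+e^{i\alpha}+e^{-i\alpha}$ with $\alpha=\arccos\frac{\{\lambda\}}{2}$, can be confirmed directly; the following trivial numerical check (Python, for one sample value of $\lambda$) is included only to make the arithmetic step explicit.
\begin{verbatim}
import numpy as np

lam = 3.7                                   # any non-negative real value
n, frac = int(np.floor(lam)), lam - np.floor(lam)
alpha = np.arccos(frac / 2)                 # defined since 0 <= frac/2 < 1/2
assert np.isclose(n + np.exp(1j * alpha) + np.exp(-1j * alpha), lam)
\end{verbatim}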
\begin{lemma} The rule (L5) can be derived. \end{lemma} \begin{proof} \begin{equation}\label{L5}
\tikzfig{diagrams//l5proof} \end{equation} \end{proof}
\subsection{Simplified rules for the $ZX_{ full}$-calculus}
Now we can summarise the results obtained in Section \ref{simplyrules} and give the simplified version of all the non-structural rewriting rules for the $ZX_{ full}$-calculus in Figures \ref{figure1sfy} and \ref{figure2sfy}.
\begin{figure}
\caption{Traditional ZX-calculus rules, where $\alpha, \beta\in [0,~2\pi)$. The upside-down version and colour swapped version of these rules still hold.}
\label{figure1sfy}
\end{figure}
\begin{figure}
\caption{Extended ZX-calculus rules, where $\lambda, \lambda_1, \lambda_2 \geq 0, \alpha, \beta, \gamma \in [0,~2\pi);$ in (AD$^{\prime}$), $\lambda e^{i\gamma}
=\lambda_1 e^{i\beta}+ \lambda_2 e^{i\alpha}$. The upside-down versions of these rules still hold.}
\label{figure2sfy}
\end{figure}
\FloatBarrier With these simplified rules, we have \begin{proposition} The $ZX_{ full}$-calculus is sound. \end{proposition} \begin{proof} By the construction of the general ZX-calculus described in Section \ref{zxgeneral}, two diagrams $D_1$ and $D_2$ of the $ZX_{ full}$-calculus are equal exactly when they are identified by the equivalence relation generated by (\ref{compactstructure}) and (\ref{compactstructureslide}) (the structural rules) and by Figures \ref{figure1sfy} and \ref{figure2sfy} (the non-structural rewriting rules). That is, $D_1=D_2$ if and only if $D_1$ can be rewritten into $D_2$ using finitely many of these rules. In addition, $ \llbracket D_1\otimes D_2 \rrbracket = \llbracket D_1 \rrbracket \otimes \llbracket D_2 \rrbracket, \quad
\llbracket D_1\circ D_2 \rrbracket = \llbracket D_1 \rrbracket \circ \llbracket D_2 \rrbracket.
$
Therefore, to prove the soundness of the $ZX_{ full}$-calculus, it suffices to verify that all the rules listed in (\ref{compactstructure}) and (\ref{compactstructureslide}) and Figures \ref{figure1sfy} and \ref{figure2sfy} still hold under the standard interpretation $\llbracket \cdot \rrbracket$ including (\ref{interpretl}). The structural rules have been proved to be sound in \cite{CoeckeDuncan}. It is a routine check that the rules in Figures \ref{figure1sfy} and \ref{figure2sfy} are sound.
\end{proof}
\section{Interpretations between the $ZX_{ full}$-calculus and the $ZW_{ \mathbb{C}}$-calculus}\label{transzxzw} The $ZX_{ full}$-calculus and the $ZW_{ \mathbb{C}}$-calculus are closely related: in fact, there exist mutually invertible translations between them. Note that a spider can be decomposed into a phase-free spider composed with a pure phase gate, which will be used for the translation between the ZX-calculus and the ZW-calculus:
\begin{align}\label{gspiderdecom}
\tikzfig{diagrams//gspiderdecomp}\quad\quad \tikzfig{diagrams//rspiderdecomp}
\end{align}
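The exact form of (\ref{gspiderdecom}) is given diagrammatically above; the sketch below (Python/NumPy, for qubits) checks the reading that a spider with phase $\alpha$ equals a phase-free spider with a single $\alpha$ phase gate attached to one of its legs, which is the property used in the translation.
\begin{verbatim}
import numpy as np
from functools import reduce

ket0 = np.array([[1], [0]]); ket1 = np.array([[0], [1]])

def tens(*ms):
    return reduce(np.kron, ms, np.eye(1))

def green(m, n, a):                # qubit green spider with phase a
    return tens(*[ket0] * m) @ tens(*[ket0.T] * n) \
         + np.exp(1j * a) * tens(*[ket1] * m) @ tens(*[ket1.T] * n)

m, n, alpha = 2, 3, 1.23
phase_gate = np.diag([1, np.exp(1j * alpha)])   # green 1 -> 1 spider with phase alpha
# attach the phase gate to the first input leg of the phase-free spider
lhs = green(m, n, 0.0) @ np.kron(phase_gate, np.eye(2 ** (n - 1)))
assert np.allclose(lhs, green(m, n, alpha))
\end{verbatim}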
First we define the interpretation $\llbracket \cdot \rrbracket_{XW}$ from $ZX_{ full}$-calculus to $ZW_{ \mathbb{C}}$-calculus as follows: \[
\left\llbracket\tikzfig{diagrams//emptysquare}\right\rrbracket_{XW}= \tikzfig{diagrams//emptysquare}, \quad
\left\llbracket\tikzfig{diagrams//Id}\right\rrbracket_{XW}= \tikzfig{diagrams//Id}, \quad
\left\llbracket\tikzfig{diagrams//cap}\right\rrbracket_{XW}= \tikzfig{diagrams//cap}, \quad
\left\llbracket\tikzfig{diagrams//cup}\right\rrbracket_{XW}= \tikzfig{diagrams//cup},
\]
\[
\left\llbracket\tikzfig{diagrams//swap}\right\rrbracket_{XW}= \tikzfig{diagrams//swap}, \quad
\left\llbracket\tikzfig{diagrams//generator_spider-nonum}\right\rrbracket_{XW}= \tikzfig{diagrams//spiderwhite}, \quad
\left\llbracket\tikzfig{diagrams//alphagate}\right\rrbracket_{XW}= \tikzfig{diagrams//alphagatewhite}, \quad
\left\llbracket\tikzfig{diagrams//lambdabox}\right\rrbracket_{XW}= \tikzfig{diagrams//lambdagatewhiteld},
\]
\[
\left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}= \tikzfig{diagrams//Hadamardwhite}, \quad
\left\llbracket\tikzfig{diagrams//triangle}\right\rrbracket_{XW}= \tikzfig{diagrams//trianglewhite},
\left\llbracket\tikzfig{diagrams//alphagatered}\right\rrbracket_{XW}= \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}\circ \left(\tikzfig{diagrams//alphagatewhite}\right) \circ \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}, \quad
\]
\[
\left\llbracket\tikzfig{diagrams//generator_redspider2cged}\right\rrbracket_{XW}= \left\llbracket \left( \tikzfig{diagrams//HadaDecomSingleslt}\right)^{\otimes m} \right\rrbracket_{XW}\circ \left(\tikzfig{diagrams//spiderwhiteindex}\right) \circ \left\llbracket \left( \tikzfig{diagrams//HadaDecomSingleslt}\right)^{\otimes n} \right\rrbracket_{XW},
\]
\[ \llbracket D_1\otimes D_2 \rrbracket_{XW} = \llbracket D_1 \rrbracket_{XW} \otimes \llbracket D_2 \rrbracket_{XW}, \quad
\llbracket D_1\circ D_2 \rrbracket_{XW} = \llbracket D_1 \rrbracket_{XW} \circ \llbracket D_2 \rrbracket_{XW},
\] where $ \alpha \in [0,~2\pi), ~ \lambda \geq 0$.
The interpretation $\llbracket \cdot \rrbracket_{XW}$ preserves the standard interpretation:
\begin{lemma}\label{xtowpreservesemantics} Suppose $D$ is an arbitrary diagram in the $ZX_{ full}$-calculus. Then $\llbracket \llbracket D \rrbracket_{XW}\rrbracket = \llbracket D \rrbracket$. \end{lemma}
\begin{proof} Since each diagram in the $ZX_{ full}$-calculus is generated in parallel composition $\otimes$ and sequential composition $\circ$ by the generators in Table \ref{qbzxgenerator} and Table \ref{newgr}, and the interpretation $\llbracket \cdot \rrbracket_{XW}$ respects these two compositions, it suffices to prove $\llbracket \llbracket D \rrbracket_{XW}\rrbracket = \llbracket D \rrbracket$ when $D$ is a generator. This is a routine check; we omit the verification details here. \end{proof}
Next we define the interpretation $\llbracket \cdot \rrbracket_{WX}$ from $ZW_{ \mathbb{C}}$-calculus to $ZX_{ full}$-calculus as follows:
\[
\left\llbracket\tikzfig{diagrams//emptysquare}\right\rrbracket_{WX}= \tikzfig{diagrams//emptysquare}, \quad
\left\llbracket\tikzfig{diagrams//Id}\right\rrbracket_{WX}= \tikzfig{diagrams//Id}, \quad
\left\llbracket\tikzfig{diagrams//cap}\right\rrbracket_{WX}= \tikzfig{diagrams//cap}, \quad
\left\llbracket\tikzfig{diagrams//cup}\right\rrbracket_{WX}= \tikzfig{diagrams//cup},
\]
\[
\left\llbracket\tikzfig{diagrams//swap}\right\rrbracket_{WX}= \tikzfig{diagrams//swap}, \quad
\left\llbracket\tikzfig{diagrams//spiderwhite}\right\rrbracket_{WX}= \tikzfig{diagrams//generator_spider-nonum}, \quad
\left\llbracket\tikzfig{diagrams//rgatewhite}\right\rrbracket_{WX}= \tikzfig{diagrams//alphalambdagate}, \quad
\left\llbracket\tikzfig{diagrams//piblack}\right\rrbracket_{WX}= \tikzfig{diagrams//pired},
\]
\[
\left\llbracket\tikzfig{diagrams//corsszw}\right\rrbracket_{WX}= \tikzfig{diagrams//crossxz}, \quad \quad
\left\llbracket\tikzfig{diagrams//wblack}\right\rrbracket_{WX}= \tikzfig{diagrams//winzx}, \]
\[ \llbracket D_1\otimes D_2 \rrbracket_{WX} = \llbracket D_1 \rrbracket_{WX} \otimes \llbracket D_2 \rrbracket_{WX}, \quad
\llbracket D_1\circ D_2 \rrbracket_{WX} = \llbracket D_1 \rrbracket_{WX} \circ \llbracket D_2 \rrbracket_{WX},
\] where $r=\lambda e^{i\alpha},~ \alpha \in [0,~2\pi), ~ \lambda \geq 0$.
The interpretation $\llbracket \cdot \rrbracket_{WX}$ preserves the standard interpretation as well:
\begin{lemma}\label{wtoxpreservesemantics} Suppose $D$ is an arbitrary diagram in $ZW_{ \mathbb{C}}$-calculus. Then $\llbracket \llbracket D \rrbracket_{WX}\rrbracket = \llbracket D \rrbracket$. \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{xtowpreservesemantics}. \end{proof}
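A fragment of this routine check can be reproduced numerically. The sketch below (Python/NumPy) verifies the claim for two of the generators, under our reading of the translation above: the ZW phase gate labelled $r=\lambda e^{i\alpha}$ is translated to a $\lambda$ box followed by a green $\alpha$ phase gate, and the ZW $\pi$ node is translated to the red $\pi$ phase gate; both readings are assumptions made explicit here rather than statements quoted verbatim from the definition.
\begin{verbatim}
import numpy as np

lam, alpha = 1.8, 0.7
r = lam * np.exp(1j * alpha)

# ZW side: standard interpretations of the r-labelled gate and the black pi node
zw_r_gate = np.diag([1, r])
zw_pi_node = np.array([[0, 1], [1, 0]])

# ZX side (our reading of the translation)
lambda_box = np.diag([1, lam])
green_alpha = np.diag([1, np.exp(1j * alpha)])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
red_pi = H @ np.diag([1, -1]) @ H                 # red pi phase gate

assert np.allclose(green_alpha @ lambda_box, zw_r_gate)
assert np.allclose(red_pi, zw_pi_node)
\end{verbatim}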
Thus both $\llbracket \cdot \rrbracket_{WX}$ and $\llbracket \cdot \rrbracket_{XW}$ preserve the standard interpretation. Combining with the completeness of the $ZW_{ \mathbb{C}}$-calculus, we have \begin{lemma}\label{wtoxisfunctor} If $ZX_{ full}\vdash D_1=D_2$, then $ZW_{ \mathbb{C}}\vdash \llbracket D_1 \rrbracket_{XW} =\llbracket D_2 \rrbracket_{XW}$. \end{lemma}
\begin{proof} Suppose $ZX_{ full}\vdash D_1=D_2$. Then by the soundness of the $ZX_{ full}$-calculus, we have $\llbracket D_1 \rrbracket =\llbracket D_2 \rrbracket$. Therefore $\llbracket \llbracket D_1 \rrbracket_{XW}\rrbracket = \llbracket D_1 \rrbracket =\llbracket D_2 \rrbracket =\llbracket \llbracket D_2 \rrbracket_{XW}\rrbracket $ by Lemma \ref{xtowpreservesemantics}. It then follows from the completeness of the $ZW_{ \mathbb{C}}$-calculus that $ZW_{ \mathbb{C}}\vdash \llbracket D_1 \rrbracket_{XW} =\llbracket D_2 \rrbracket_{XW}$. \end{proof}
This means the interpretation $\llbracket \cdot \rrbracket_{XW}$ is a well-defined functor from the PROP $ZX_{ full}$ to the PROP $ZW_{ \mathbb{C}}$.
Moreover, we have \begin{lemma}\label{zwinverse} For any diagram $G\in ZW_{ \mathbb{C}}$, \begin{equation}\label{zwinvert} ZW_{ \mathbb{C}}\vdash \llbracket \llbracket G \rrbracket_{WX}\rrbracket_{XW} =G \end{equation} \end{lemma}
\begin{proof} By Lemma \ref{xtowpreservesemantics} and Lemma \ref{wtoxpreservesemantics}, the interpretations $\llbracket \cdot \rrbracket_{XW}$ and $\llbracket \cdot \rrbracket_{WX}$ preserve the standard interpretation $\llbracket \cdot \rrbracket$. Thus $\llbracket G \rrbracket = \llbracket \llbracket G \rrbracket_{WX} \rrbracket = \llbracket \llbracket \llbracket G \rrbracket_{WX}\rrbracket_{XW} \rrbracket $. By the completeness of the $ZW_{ \mathbb{C}}$-calculus, it must be that $ZW_{ \mathbb{C}}\vdash \llbracket \llbracket G \rrbracket_{WX}\rrbracket_{XW} =G$. \end{proof}
On the other hand, we have
\begin{lemma}\label{interpretationreversible} Suppose $D$ is an arbitrary diagram in the $ZX_{ full}$-calculus. Then $ZX_{ full}\vdash \llbracket \llbracket D \rrbracket_{XW}\rrbracket_{WX} =D$. \end{lemma}
\begin{proof} By the construction of $\llbracket \cdot \rrbracket_{XW}$ and $\llbracket \cdot \rrbracket_{WX}$, we only need to prove for the generators of the $ZX_{ full}$-calculus. Here we consider all the generators translated at the beginning of this section.
The first six generators are the same as the first six generators of the $ZW_{ \mathbb{C}}$-calculus, so we only need to check the remaining generators. Since the red phase gate and the red spider are translated in terms of the translations of the Hadamard gate, the green phase gate and the green spider, it suffices to consider four generators: the Hadamard gate, the green phase gate, the $\lambda$ box and the triangle.
Firstly, \[
\left\llbracket\tikzfig{diagrams//alphagate}\right\rrbracket_{XW}= \tikzfig{diagrams//alphagatewhite}, \] so we have \[
\left\llbracket \left\llbracket\tikzfig{diagrams//alphagate}\right\rrbracket_{XW}\right\rrbracket_{WX}= \left\llbracket\tikzfig{diagrams//alphagatewhite}\right\rrbracket_{WX}=\tikzfig{diagrams//alphagate}, \] by the definition of $\llbracket \cdot \rrbracket_{WX}$ and the ZX rule (L3). Similarly, we can easily check that \[
\left\llbracket \left\llbracket\tikzfig{diagrams//lambdabox}\right\rrbracket_{XW}\right\rrbracket_{WX}=\tikzfig{diagrams//lambdabox}.
\]
Finally,
\[
\left\llbracket \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}\right\rrbracket_{WX}= \left\llbracket \tikzfig{diagrams//Hadamardwhite} \right\rrbracket_{WX}
= \tikzfig{diagrams//Hadamardwhiteint} \overset{IV}{=} \tikzfig{diagrams//Hadamardwhiteint2} \overset{S1,S2}{=} \tikzfig{diagrams//HadaDecomSingleslt},
\]
\[
\left\llbracket \left\llbracket\tikzfig{diagrams//triangle}\right\rrbracket_{XW}\right\rrbracket_{WX}= \left\llbracket \tikzfig{diagrams//trianglewhite} \right\rrbracket_{WX} = \tikzfig{diagrams//trianinzx} \overset{B1,S1, S2}{=} \tikzfig{diagrams//triangle}. \]
\end{proof}
Therefore, by Lemma \ref{zwinverse} and Lemma \ref{interpretationreversible}, the interpretations between the $ZX_{ full}$-calculus and the $ZW_{ \mathbb{C}}$-calculus are mutually invertible to each other.
\section{Completeness} \begin{proposition}\label{zwrulesholdinzx} If $ZW_{ \mathbb{C}}\vdash D_1=D_2$, then $ZX_{ full}\vdash \llbracket D_1 \rrbracket_{WX} =\llbracket D_2 \rrbracket_{WX}$.
\end{proposition}
\begin{proof} By the construction of the ZW-calculus, two ZW diagrams are equal if and only if one of them can be rewritten into the other. So here we need only prove that $ZX_{ full} \vdash \left\llbracket D_1\right\rrbracket_{WX} = \left\llbracket D_2\right\rrbracket_{WX}$ whenever $D_1=D_2$ is a rewriting rule of the $ZW_{ \mathbb{C}}$-calculus. This proof is quite lengthy, so we defer it to the end of this chapter for ease of reading. \end{proof}
This proposition means that the interpretation $\llbracket \cdot \rrbracket_{WX}$ is a well-defined functor from the PROP $ZW_{ \mathbb{C}}$ to the PROP $ZX_{ full}$. Together with Lemma \ref{zwinverse} and Lemma \ref{interpretationreversible}, we can now say that there are mutually inverse functors between the PROP $ZW_{ \mathbb{C}}$ and the PROP $ZX_{ full}$, which means that the two calculi $ZW_{ \mathbb{C}}$ and $ZX_{ full}$ are isomorphic.
Finally, we prove the main theorem of this chapter. \begin{theorem}\label{maintheorem} The $ZX_{ full}$-calculus is complete for the entire pure qubit quantum mechanics: if $\llbracket D_1 \rrbracket =\llbracket D_2 \rrbracket$, then $ZX_{ full}\vdash D_1=D_2$.
\end{theorem}
\begin{proof} Suppose $D_1, D_2 \in ZX_{ full}$ and $\llbracket D_1 \rrbracket =\llbracket D_2 \rrbracket$. Then by Lemma \ref{xtowpreservesemantics}, $\llbracket \llbracket D_1 \rrbracket_{XW}\rrbracket = \llbracket D_1 \rrbracket= \llbracket D_2 \rrbracket=\llbracket \llbracket D_2 \rrbracket_{XW}\rrbracket $. Thus by the completeness of the $ZW_{ \mathbb{C}}$-calculus \cite{amar}, $ZW_{ \mathbb{C}}\vdash \llbracket D_1 \rrbracket_{XW}= \llbracket D_2 \rrbracket_{XW}$. Now by Proposition \ref{zwrulesholdinzx}, $ZX_{ full}\vdash \llbracket \llbracket D_1 \rrbracket_{XW}\rrbracket_{WX} =\llbracket \llbracket D_2 \rrbracket_{XW}\rrbracket_{WX}$. Finally, by Lemma \ref{interpretationreversible}, $ZX_{ full}\vdash D_1=D_2$. \end{proof}
Now we can express the new generators in terms of green and red nodes. \begin{proposition}\label{lem:lamb_tri_decomposition} The triangle \tikzfig{diagrams//triangle} and the lambda box \tikzfig{diagrams//lambdabox} are expressible in Z and X phases. \end{proposition}
\begin{proof} The semantic representation of the triangle \tikzfig{diagrams//triangle} in terms of Z and X phases in the ZX-calculus has been clearly described in \cite{Emmanuel}. Interestingly, another representation of the triangle was implicitly given in \cite{Coeckebk} by a slash-labeled box in the following form: \begin{equation}\label{triangleslash}
\tikzfig{diagrams//triangledecompose} \end{equation} It can be directly verified that the two diagrams on both sides of (\ref{triangleslash}) have the same standard interpretation. Thus by Theorem \ref{maintheorem} the identity (\ref{triangleslash}) can be derived in the $ZX_{ full}$-calculus.
Now we consider the representation of the lambda box. Since $\lambda$ is a non-negative real number, we can write $\lambda$ as the sum of its integer part and its fractional part: $\lambda= [\lambda] +\{\lambda\}$, where $ [\lambda]$ is a non-negative integer and $0\leq\{\lambda\}<1$. Let $n= [\lambda]$.
If $n=0$, then \begin{equation}\label{lemma0decom}
\tikzfig{diagrams//lemma02v} \end{equation}
If $n\geq 1$, then by (\ref{nrepresent}), we have
\begin{equation*}
\tikzfig{diagrams//lambdaderive2} \end{equation*}
Since $0\leq\{\lambda\}<1$, we can let $\alpha=\arccos\frac{\{\lambda\}}{2}$. Then $\{\lambda\}=2\cos\alpha=e^{i\alpha}+e^{-i\alpha}$, and
$$\tikzfig{diagrams//lexpress3nd}.$$
Therefore, we have
$$\tikzfig{diagrams//lexpress4nd}.$$
\end{proof}
\section{Proof of proposition \ref{zwrulesholdinzx}}
\begin{lemma}
\begin{equation}\label{lem:2}
\left\llbracket~
\input{diagrams/lemma2left.tikz}
~\right\rrbracket_{WX}=
\input{diagrams/lemma2right.tikz}
\end{equation} \end{lemma}
\begin{proof}
\begin{equation*}
\left\llbracket~
\input{diagrams/lemma2left.tikz}
~\right\rrbracket_{WX}=
\input{diagrams/lemma2right2nd.tikz}
\end{equation*} \end{proof}
\begin{proposition}(ZW rule $rei^x_2$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/reix2_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/reix2_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}~\newline
\input{diagrams/reix2_proof.tikz} \end{proof}
\begin{proposition}(ZW rule $rei^x_3$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/reix3_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/reix3_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}~\newline
\input{diagrams/reix3_proof.tikz} \end{proof}
\begin{proposition}\label{prop:natnx}(ZW rule $nat^\eta_x$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natnx_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natnx_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}~\newline
\input{diagrams/natnx_proof.tikz} \end{proof}
\begin{proposition}(ZW rule $nat^\varepsilon_x$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natex_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natex_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
This is the upside-down version of Proposition \ref{prop:natnx}, thus the proof is similar. We omit the proof but note that the rules and properties used are $S1$, $S2$, $S3$, $H$, \ref{lem:hopf}. \end{proof}
\begin{proposition}(ZW rule $rei^x_1$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/reix1_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/reix1_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof} \begin{equation}\label{prop:reix1}
\input{diagrams/reix1_proof.tikz} \end{equation} \end{proof}
\begin{proposition}(ZW rule $un^{co}_{w,L}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/uncowL_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/uncowL_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*}
\tikzfig{diagrams//uncowLproof} \end{equation*}
\end{proof}
\begin{proposition}(ZW rule $asso_w$)~\newline\\
$
ZX\vdash \left\llbracket~
\input{diagrams/natww_simon_RHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natww_simon_LHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//assowproof} \end{equation*}
\end{proof}
\begin{proposition}(ZW rule $nat^w_x$)~\newline\\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natwx_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natwx_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//natwxproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $com^{co}_w$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/comcow_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/comcow_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}~\newline
\input{diagrams/comcow_proof.tikz}\\ \end{proof}
\begin{proposition}(ZW rule $nat^m_w$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natmw_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natmw_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
This has been proved in \cite{Emmanuel}, Proposition 7, part $5a$. The proof is quite involved and employs many rules and lemmas. Fortunately, all the rules used in that proof are either rules of the $ZX_{ full}$-calculus or derivable from them, and the same holds for all the lemmas applied there. Therefore, we need not repeat the proof here; instead we indicate, part by part, the rules and lemmas used in \cite{Emmanuel}, Proposition 7, part $5a$, with the correspondence between the lemmas of \cite{Emmanuel} and the rules and properties of the $ZX_{ full}$-calculus shown in brackets.
There are eight parts in the proof: (\textit{i}), (\textit{ii}), (\textit{iii}), (\textit{iv}), (\textit{v}), (\textit{vi}), (\textit{vii}), (\textit{viii}), as well as a final derivation at the end.
The rules used in part (\textit{i}) are $B2$, $S1$, $H$, lemma 23 ($\ref{tr5derived}$ in this thesis), lemma 3 ($\ref{lem:6b}$ in this thesis).
The rules used in part (\textit{ii}) are lemma 26 ($\ref{lem:4}$ in this thesis), $S1$, $B2$, $S1$, lemma 32 ($\ref{TR11}$ in this thesis), lemma 3 ($\ref{lem:6b}$ in this thesis), lemma 16 ($TR1$ in this thesis).
The rules used in part (\textit{iii}) are lemma 24 ($TR6$ in this thesis), lemma 26 ($\ref{lem:4}$ in this thesis), $S1$, lemma 27 ($TR8$ in this thesis), lemma 3 ($\ref{lem:6b}$ in this thesis).
The rules used in part (\textit{iv}) are lemma 26 ($\ref{lem:4}$ in this thesis), $S1$, $B2$, lemma 3 ($\ref{lem:6b}$ in this thesis), lemma 32 ($\ref{TR11}$ in this thesis).
The rules used in part (\textit{v}) are lemma 3 ($\ref{lem:6b}$ in this thesis), part (\textit{v}).
The rules used in part (\textit{vi}) are part (\textit{i}), lemma 26 ($\ref{lem:4}$ in this thesis), lemma 28 ($TR9$ in this thesis), $S1$, lemma 3 ($\ref{lem:6b}$ in this thesis), lemma 2 ($\ref{lem:hopf}$ in this thesis), lemma 16 ($TR1$ in this thesis), $K2$, part (\textit{iii}), part (\textit{v}), part (\textit{iv}), part (\textit{ii}).
The rules used in part (\textit{vii}) are lemma 3 ($\ref{lem:6b}$ in this thesis), $S1$, lemma 16 ($TR1$ in this thesis), $B2$, lemma 25 ($TR7$ in this thesis), lemma 26 ($\ref{lem:4}$ in this thesis), lemma 32 ($\ref{TR11}$ in this thesis).
The rules used in part (\textit{viii}) are lemma 3 ($\ref{lem:6b}$ in this thesis), $H$, $B2$, $S1$, lemma 2 ($\ref{lem:hopf}$ in this thesis), lemma 8 ($\ref{lem:6}$ in this thesis).
The final derivation of the proof of this proposition uses lemma 26 ($\ref{lem:4}$ in this thesis), $S1$, part (\textit{viii}), lemma 2 ($\ref{lem:hopf}$ in this thesis), part (\textit{vii}), part (\textit{vi}), $B2$, $\ref{lem:com}$ and $\ref{lem:7}$ in this thesis, lemma 3 ($\ref{lem:6b}$ in this thesis), lemma 16 ($TR1$ in this thesis).
\end{proof}
\begin{proposition}\label{blakbialg}(ZW rule $nat^{m \eta}_{w}$)~\newline\\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natmnw_RHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natmnw_LHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//natmnwproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $nat^{m \eta}_{\varepsilon ,w}$)~\newline\\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natmnew_RHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natmnew_LHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//natmnewpproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $hopf$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/hopf_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/hopf_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//zwhopfproof} \end{equation*}
\end{proof}
\begin{proposition}(ZW rules $sym_{3,L}$ and $sym_{3,R}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/sym3_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/sym3_middle.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/sym3_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//sym3pfproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $inv$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/inv_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/inv_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//zwinvpfproof} \end{equation*}
\end{proof}
\begin{proposition}(ZW rule $ant^\eta_x$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/antnx_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/antnx_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//antnxproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rules $sym_{z,L}$ and $sym_{z,R}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/symz_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/symz_middle.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/symz_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//symzdproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $un^{co}_{z,R}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/uncozR_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/uncozR_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//uncozRdproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $asso_z$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/assoz_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/assoz_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//assozdproof} \end{equation*} \end{proof}
\begin{proposition}\label{prop:ph}(ZW rule $ph$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/ph_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/ph_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}~\newline
\begin{equation*} \tikzfig{diagrams//ph_proof2} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $nat^n_c$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natnc_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natnc_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//natncproof2} \end{equation*} \end{proof}
\begin{proposition}\label{zxnatmc}(ZW rule $nat^m_c$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natmc_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natmc_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//natmcproof} \end{equation*}
\end{proof}
\begin{proposition}(ZW rule $loop$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/loop_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/loop_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//loopproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $unx$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/unx_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/unx_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//unxproof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $rng_1$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/rng1_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/rng1_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//rng1dproof} \end{equation*} \end{proof}
\begin{proposition}\label{prop:rng-1}(ZW rule $rng_{-1}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/rng-1_RHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/rng-1_LHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//rng-1proof} \end{equation*} \end{proof}
\begin{proposition}(ZW rule $rng^{r,s}_{\times}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/rngrsx_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/rngrsx_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//rngrsxdproof} \end{equation*} where $s=\lambda_s e^{i\alpha}, r=\lambda_r e^{i\beta}$. \end{proof}
\begin{proposition}(ZW rule $rng^{r,s}_{+}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/rngrsp_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/rngrsp_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//rngrspdproof} \end{equation*} where $s=\lambda_s e^{i\alpha}, r=\lambda_r e^{i\beta}, r+s=\lambda e^{i\gamma}$. \end{proof}
\begin{proposition}(ZW rule $nat^{r}_{c}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natrc_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natrc_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//natrcdproof} \end{equation*} where $ r=\lambda_r e^{i\beta}$.
\end{proof}
\begin{proposition}(ZW rule $nat^{r}_{\varepsilon c}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/natrec_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natrec_RHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//natrecdproof} \end{equation*} where $ r=\lambda_r e^{i\beta}$. \end{proof}
\begin{proposition}(ZW rule $ph^{r}$)~\newline \\
$
ZX\vdash
\left\llbracket~
\input{diagrams/phr_RHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/phr_LHS.tikz}
~\right\rrbracket_{WX}
$ \end{proposition}
\begin{proof}
\begin{equation*} \tikzfig{diagrams//phrdproof} \end{equation*} where $ r=\lambda_r e^{i\beta}$. \end{proof}
\chapter{Completeness for Clifford+T qubit quantum mechanics}\label{cplustch}
Clifford+T qubit quantum mechanics (QM) is an approximately universal fragment of QM, which has been widely used in quantum computing.
One of the main open problems of the ZX-calculus has been to give a complete axiomatisation for Clifford+T QM \cite{cqmwiki}. After the first completeness result on this fragment, namely the completeness of the ZX-calculus for single-qubit Clifford+T QM \cite{Miriam1ct}, a complete axiomatisation of the ZX-calculus for the whole of Clifford+T QM was finally obtained \cite{Emmanuel}, solving the above-mentioned open problem. However, this complete axiomatisation for Clifford+T QM relies on a very complicated translation from the ZX-calculus to the ZW-calculus.
Further to this complete axiomatisation for the Clifford+T fragment of QM \cite{Emmanuel}, we gave a complete axiomatisation of the ZX-calculus for overall pure qubit QM (i.e., the $ZX_{ full}$-calculus) in the previous chapter. A natural question then arises: can we simply restrict the generators and rules of the $ZX_{ full}$-calculus obtained in Chapter \ref{chfull} to obtain a complete axiomatisation of the ZX-calculus for Clifford+T QM (called the $ZX_{ C+T}$-calculus)? The answer is negative: we will show in this chapter that a complete $ZX_{ C+T}$-calculus can be obtained by restricting the generators and rules of the $ZX_{ full}$-calculus, but only after some modifications, such as changing or adding rules. We will illustrate this by a counterexample. To do this, we need to determine the range of the value of $\lambda$ in a $\lambda$ box which will be used as a generator in the $ZX_{ C+T}$-calculus. Let $\mathbb{T}:=\mathbb{Z}[\frac{1}{2}, e^{i\frac{\pi}{4}}]$ be the ring extension of
$\mathbb{Z}$ in $\mathbb{C}$ generated by $\frac{1}{2}, e^{i\frac{\pi}{4}}$. Similarly we can define the ring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$. It is not difficult to see that $\mathbb{T}$ is a commutative ring and $\mathbb{Z}[\frac{1}{2}, e^{i\frac{\pi}{4}}]=\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$.
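Indeed, both inclusions can be checked directly:
\[
i=(e^{i\frac{\pi}{4}})^{2},\qquad \frac{1}{\sqrt{2}}=\frac{1}{2}\bigl(e^{i\frac{\pi}{4}}+(e^{i\frac{\pi}{4}})^{7}\bigr)\in \mathbb{Z}[\tfrac{1}{2}, e^{i\frac{\pi}{4}}];\qquad
e^{i\frac{\pi}{4}}=\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}},\qquad \frac{1}{2}=\Bigl(\frac{1}{\sqrt{2}}\Bigr)^{2}\in \mathbb{Z}[i,\tfrac{1}{\sqrt{2}}].
\]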
Denote by $ZX_{\frac{\pi}{4}}$ the ZX-calculus which has only traditional generators as given in Table \ref{qbzxgenerator} and angles multiple of $\frac{\pi}{4}$ in any green or red spider.
Now we recall a result on the relation between $ZX_{ \frac{\pi}{4}}$ diagrams and their corresponding matrices (we will give a simpler proof later in this Chapter):
\begin{proposition}\label{cliftmatix} \cite{Emmanuel} The diagrams of the $ZX_{\frac{\pi}{4}}$-calculus correspond exactly to the matrices over the ring $\mathbb{T}$. \end{proposition} This means that if we want to introduce the $\lambda$ box as a generator in the same way ($\lambda$ being the magnitude of a complex number) as in Chapter \ref{chfull} to build a $ZX_{ C+T}$-calculus, then
it must be that each $\lambda$ is a non-negative real number and $\lambda \in \mathbb{T}$. Furthermore, as pointed out in \cite{Emmanuel}, each element $r$ of $\mathbb{T}$ can be uniquely written in the form $r=a_0+a_1e^{i\frac{\pi}{4}}+a_2(e^{i\frac{\pi}{4}})^2+a_3(e^{i\frac{\pi}{4}})^3,~ a_j \in \mathbb Z[\frac{1}{2}] $. Equivalently, $r=a_0e^{i\alpha_0}+a_1e^{i\alpha_1}+a_2e^{i\alpha_2}+a_3e^{i\alpha_3},~ 0\leq a_j \in \mathbb Z[\frac{1}{2}], ~\alpha_j=j\frac{\pi}{4} ~or~ j\frac{\pi}{4}+\pi,~~ j=0, 1, 2, 3$.
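For example, $\frac{1}{\sqrt{2}}\in\mathbb{T}$ is written in this way as
\[
\frac{1}{\sqrt{2}}=\frac{1}{2}e^{i\frac{\pi}{4}}-\frac{1}{2}e^{i\frac{3\pi}{4}}
=\frac{1}{2}e^{i\frac{\pi}{4}}+\frac{1}{2}e^{i(\frac{3\pi}{4}+\pi)},
\]
so that, in the second form, $a_0=a_2=0$, $a_1=a_3=\frac{1}{2}$, $\alpha_1=\frac{\pi}{4}$ and $\alpha_3=\frac{3\pi}{4}+\pi$.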
Therefore, when an arbitrary element of $\mathbb{T}$ is written in this form, with all phases restricted to multiples of $\frac{\pi}{4}$, each magnitude $\lambda$ occurring in it must satisfy $ 0\leq \lambda \in \mathbb Z[\frac{1}{2}]$. On the other hand, we have the following identity in the $ZX_{ full}$-calculus by the rule (AD$^{\prime}$):
\begin{equation}\label{conuterexam}
\tikzfig{diagrams//countexam}
\end{equation} The left-hand side of (\ref{conuterexam}) lies within the range of the $ZX_{ C+T}$-calculus, but the right-hand side of (\ref{conuterexam}) lies beyond it ($\lambda=\sqrt{2} \notin \mathbb Z[\frac{1}{2}]$). This means that we cannot obtain a $ZX_{ C+T}$-calculus that is complete for Clifford+T QM merely by restricting the generators and rules of the $ZX_{ full}$-calculus.
\iffalse The main differences between the two complete axiomatisations of the ZX-calculus for the Clifford+T fragment QM shown in this chapter and that presented in \cite{Emmanuel} are as follows: \begin{enumerate} \item Although the number of rules (which is 22) listed in Figure \ref{figure1t} and Figure \ref{figure2t} of this chapter is much more than that of \cite{Emmanuel} (which is 12), the number of nodes in each non-scalar diagram of the extended part of rules (non-stablizer part) is at most 8 in this paper, in contrast to a maximum of 13 in \cite{Emmanuel}. Furthermore, the rule (C) in \cite{Emmanuel} has 10 nodes on each sides of the equality. Their lager rules are far from easy to be used practically, while our rules are relatively small thus can be employed more effectively, as shown in chapter \ref{chapply}.
\item Following \cite{ngwang}, we have still introduced two more generators-- the triangle (which appeared as a short notation in \cite{Emmanuel}) and the $\lambda$ box-- in this chapter, while there are only green nodes and red nodes as generators in \cite{Emmanuel}. Our new generators are features rather than novelties: the triangle can be employed as an essential component to construct a Toffoli gate in a very simple form, while the $\lambda$ box can be slightly extended to a generalised phase so that the generalised supplementarity (also called cyclotomic supplementarity, with supplementarity as a special case) \cite{jpvw} is naturally seen as a special case of the generalised spider rule. In addition, due to the introduction of the new generators, our proof for that the Clifford +T fragment of the ZX-calculus exactly corresponds to matrices over the ring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$ is much simpler than the corresponding proof given in \cite{Emmanuel}.
\item The axiomatisation given in \cite{Emmanuel} depends on a particular decomposition of the triangle as shown in the second equality of (\ref{triangleslash}). However, there exist different forms of decomposition of the triangle, e.g., the forms in (\ref{triangleslash}). Therefore, it may result in different axiomatisations of the ZX-calculus. Our employing of the triangle as a generator successfully avoids this problem, thus makes our axiomatisation more fundamental in this sense.
\item The translation from the ZX-calculus to the ZW-calculus presented in this chapter is quite different from the corresponding one in \cite{Emmanuel}. Ours is more direct, and is even reversible, thus enables establishing an isomorphism of the ZX-calculus and the ZW-calculus.
\end{enumerate}
\fi
In this chapter, we propose a complete axiomatisation of the ZX-calculus for Clifford+T quantum mechanics (the $ZX_{ C+T}$-calculus), obtained not only by restricting the generators and rules of the $ZX_{ full}$-calculus, but also by modifying some rules and adding new ones. As before, we still need the completeness result of the ZW-calculus, but we restrict its parameter ring from $\mathbb{C}$ to the subring $\mathbb{T}$ (the resulting calculus is called the $ZW_{\mathbb{T}}$-calculus in the remainder of this thesis) \cite{amar}.
The results of this chapter are collected from the arXiv paper \cite{ngwang2} coauthored with Kang Feng Ng. The proof of the completeness of the $ZX_{ C+T}$-calculus has been published in \cite{amarngwanglics}, with coauthors Amar Hadzihasanovic and Kang Feng Ng.
\section{$ZX_{ C+T}$-calculus}
In this section, we give for the $ZX_{ C+T}$-calculus all the rewriting rules which will be shown to be sufficient for the completeness with regard to the Clifford+T QM.
The $ZX_{ C+T}$-calculus is still a kind of ZX-calculus as described in the previous chapter. Its generators are listed in Table \ref{qbzxgenerator} and Table \ref{newgr}, with the restriction that $\alpha \in \{\frac{k\pi}{4}| k=0, 1, \cdots, 7\}$ and $0\leqslant \lambda \in \mathbb Z[\frac{1}{2}]$.
There are two kinds of rules for the $ZX_{ C+T}$-calculus: the categorical structure rules as listed in (\ref{compactstructure}) and (\ref{compactstructureslide}), and the non-structural rewriting rules including
the traditional rules in Figure \ref{figure1t} and the extended rules in Figure \ref{figure2t}.
\begin{figure}\label{figure1t}
\end{figure}
\begin{figure}\label{figure2t}
\end{figure}
\iffalse
\begin{figure}\label{figure0t}
\end{figure} \fi
\FloatBarrier
In comparison with the rules of the $ZX_{ full}$-calculus, apart from the restrictions described above, we have made the following modifications in the
$ZX_{ C+T}$-calculus:
\begin{itemize} \item The empty rule (IV2) for the $ZX_{ full}$-calculus shown in Figure \ref{figure2sfy} is changed to the form of rule (IV$^{\prime}$) in Figure \ref{figure1t}. \item The rule (TR5$^{\prime}$) is added in Figure \ref{figure1t}. \item The condition $\lambda e^{i\gamma} =\lambda_1 e^{i\beta}+ \lambda_2 e^{i\alpha}$ for the rule (AD$^{\prime}$) of the $ZX_{ full}$-calculus shown in Figure \ref{figure2sfy} has been replaced by the equivalent condition $\alpha\equiv \beta \equiv \gamma ~(mod~ \pi)$ for the rule (AD$^{\prime}$) of the $ZX_{ C+T}$-calculus shown in Figure \ref{figure2t}.
\end{itemize}
Next we explain why these modifications are needed. Firstly, the value $1- \frac{1}{\sqrt{2}}$ in the $\lambda$ box of the rule (IV2) is not an allowed $\lambda$ value in the $ZX_{ C+T}$-calculus, so a modified empty rule is required
for the $ZX_{ C+T}$-calculus, one under which the translation of the Hadamard gate to a ZW diagram remains reversible. This gives the rule (IV$^{\prime}$) and the following useful property.
\begin{lemma} The frequently used empty rule can be derived from the $ZX_{\frac{\pi}{4}}$-calculus: \begin{equation}\label{emptypiby4sim} \tikzfig{diagrams//emptyoften} \end{equation} \end{lemma}
\begin{proof} \begin{equation*} \tikzfig{diagrams//newemptytoold} \end{equation*} \end{proof}
Secondly, the rule (TR5$^{\prime}$) is introduced to make the $\frac{1}{2}$-box expressible in the $ZX_{ C+T}$-calculus: \begin{lemma}\label{halfboxlm} \begin{equation}\label{halfbox}
\tikzfig{diagrams//lambda1by2}
\end{equation}
\end{lemma} \begin{proof} First, it is clear that \begin{equation*}
\tikzfig{diagrams//boxof2} \end{equation*} Then \begin{equation*}
\tikzfig{diagrams//boxof2timehalf} \end{equation*} \end{proof}
Finally, it is easy to check that the condition $\lambda e^{i\gamma} =\lambda_1 e^{i\beta}+ \lambda_2 e^{i\alpha}$ is equivalent to the condition $\alpha\equiv \beta \equiv \gamma ~(mod~ \pi)$, when
$0\leqslant \lambda, \lambda_1, \lambda_2 \in \mathbb Z[\frac{1}{2}], \alpha, \beta, \gamma \in \{\frac{k\pi}{4}| k=0, 1, \cdots, 7\}$.
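For instance, if $\alpha\equiv \beta ~(mod~ \pi)$, then $e^{i\alpha}=\pm e^{i\beta}$, so $\lambda_1 e^{i\beta}+ \lambda_2 e^{i\alpha}=(\lambda_1\pm\lambda_2)e^{i\beta}$, which is of the form $\lambda e^{i\gamma}$ with $\lambda=|\lambda_1\pm\lambda_2| \in \mathbb Z[\frac{1}{2}]$ and (when $\lambda\neq 0$) $\gamma\equiv \beta ~(mod~ \pi)$. By contrast, taking $\lambda_1=\lambda_2=1$, $\beta=0$ and $\alpha=\frac{\pi}{2}$ gives $e^{i0}+e^{i\frac{\pi}{2}}=\sqrt{2}\,e^{i\frac{\pi}{4}}$, whose magnitude $\sqrt{2}$ does not lie in $\mathbb Z[\frac{1}{2}]$; compare the discussion around (\ref{conuterexam}) above.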
Given the soundness of the $ZX_{ full}$-calculus, it suffices to check that the rules (IV$^{\prime}$) and (TR5$^{\prime}$) are sound in order to establish the soundness of the $ZX_{ C+T}$-calculus. This is a routine verification, so we omit the details here.
We also mention that the rules (TR10$^{\prime}$), (TR5) and (L5) proved in Chapter \ref{chfull} for the $ZX_{ full}$-calculus still hold for the $ZX_{ C+T}$-calculus, since every rule applied in those proofs is available in the $ZX_{ C+T}$-calculus as well.
In the previous chapter, the lambda box was expressed in terms of the triangle, green nodes and red nodes with angles in $[0,~2\pi)$. For the $ZX_{ C+T}$-calculus, we have
\begin{lemma}\label{lem:lamb_tri_decomposition2}
The lambda box \tikzfig{diagrams//lambdabox} is expressible in terms of the triangle, green nodes and red nodes with angles in $\{\frac{k\pi}{4}| k=0, 1, \cdots, 7\}$.
\end{lemma}
\begin{proof}
First we write $\lambda$ as the sum of its integer part and fractional part: $\lambda= [\lambda] +\{\lambda\}$, where $ [\lambda]$ is a non-negative integer and $0\leq\{\lambda\}<1$. Since $\lambda \in \mathbb Z[\frac{1}{2}]$, the fractional part $\{\lambda\}$ can be uniquely written as a finite binary expansion of the form $a_1\frac{1}{2}+\cdots+a_s\frac{1}{2^s}$, where $a_i\in \{ 0, 1\} , i=1, \cdots, s.$ For the integer part $[\lambda]$, the corresponding $\lambda$ box has already been represented in terms of the triangle, green nodes and red nodes with angles that are multiples of $\frac{\pi}{4}$, as shown in Lemma \ref{lem:lamb_tri_decomposition}. For the fractional part $\{\lambda\}$, it is sufficient to express the $\lambda$ box for $\lambda=\frac{1}{2^k}$ in terms of the triangle and $Z, X$ phases for any positive integer $k$, since one can then apply the addition rule (AD$^{\prime}$) recursively. In fact, we have
\begin{equation}\label{halfnbox}
\tikzfig{diagrams//lambda1by2k} \end{equation}
We prove this by induction on $k$. When $k=1$, it is just the identity (\ref{halfbox}) proved in Lemma \ref{halfboxlm}. Suppose (\ref{halfnbox}) holds for $k$. Then for $k+1$ we have
\begin{equation*}
\tikzfig{diagrams//halfnboxproof} \end{equation*}
This completes the induction.
Therefore,
$$\tikzfig{diagrams//lexpress4new}.$$
\end{proof}
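For example, for $\lambda=\frac{11}{4}$ we have $[\lambda]=2$ and $\{\lambda\}=\frac{3}{4}=\frac{1}{2}+\frac{1}{4}$, so the $\frac{11}{4}$ box is obtained from the boxes for $2$, $\frac{1}{2}$ and $\frac{1}{4}$ (the latter two given by (\ref{halfnbox})) by two applications of the rule (AD$^{\prime}$), all with angles equal to $0$.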
\section{Translations between the $ZX_{ C+T}$-calculus and the $ZW_{\mathbb{T}}$-calculus}\label{zwtozx2}
The interpretation $\llbracket \cdot \rrbracket_{XW}$ from the $ZX_{ C+T}$-calculus to the $ZW_{\mathbb{T}}$-calculus is just a restriction of the interpretation $\llbracket \cdot \rrbracket_{XW}$ from the $ZX_{ full}$-calculus to the $ZW_{\mathbb{C}}$-calculus:
\[
\left\llbracket\tikzfig{diagrams//emptysquare}\right\rrbracket_{XW}= \tikzfig{diagrams//emptysquare}, \quad
\left\llbracket\tikzfig{diagrams//Id}\right\rrbracket_{XW}= \tikzfig{diagrams//Id}, \quad
\left\llbracket\tikzfig{diagrams//cap}\right\rrbracket_{XW}= \tikzfig{diagrams//cap}, \quad
\left\llbracket\tikzfig{diagrams//cup}\right\rrbracket_{XW}= \tikzfig{diagrams//cup},
\]
\[
\left\llbracket\tikzfig{diagrams//swap}\right\rrbracket_{XW}= \tikzfig{diagrams//swap}, \quad
\left\llbracket\tikzfig{diagrams//generator_spider-nonum}\right\rrbracket_{XW}= \tikzfig{diagrams//spiderwhite}, \quad
\left\llbracket\tikzfig{diagrams//alphagate}\right\rrbracket_{XW}= \tikzfig{diagrams//alphagatewhite}, \quad
\left\llbracket\tikzfig{diagrams//lambdabox}\right\rrbracket_{XW}= \tikzfig{diagrams//lambdagatewhiteld},
\]
\[
\left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}= \tikzfig{diagrams//Hadamardwhite}, \quad
\left\llbracket\tikzfig{diagrams//triangle}\right\rrbracket_{XW}= \tikzfig{diagrams//trianglewhite},
\left\llbracket\tikzfig{diagrams//alphagatered}\right\rrbracket_{XW}= \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}\circ \left(\tikzfig{diagrams//alphagatewhite}\right) \circ \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}, \]
\[
\left\llbracket\tikzfig{diagrams//generator_redspider2cged}\right\rrbracket_{XW}= \left\llbracket \left( \tikzfig{diagrams//HadaDecomSingleslt}\right)^{\otimes m} \right\rrbracket_{XW}\circ \left(\tikzfig{diagrams//spiderwhiteindex}\right) \circ \left\llbracket \left( \tikzfig{diagrams//HadaDecomSingleslt}\right)^{\otimes n} \right\rrbracket_{XW}, \]
\[
\llbracket D_1\otimes D_2 \rrbracket_{XW} = \llbracket D_1 \rrbracket_{XW} \otimes \llbracket D_2 \rrbracket_{XW}, \quad
\llbracket D_1\circ D_2 \rrbracket_{XW} = \llbracket D_1 \rrbracket_{XW} \circ \llbracket D_2 \rrbracket_{XW},
\]
where $0\leqslant \lambda \in \mathbb Z[\frac{1}{2}], \alpha \in \{\frac{k\pi}{4}| k=0, 1, \cdots, 7\}$.
Since $\mathbb{T} = \mathbb{Z}[\frac{1}{2}, e^{i\frac{\pi}{4}}]=\mathbb{Z}[i, \frac{1}{\sqrt{2}}] $, we have $\frac{\sqrt{2}-2}{2} \in \mathbb{T} $. Then it is clear that this restricted translation will always result in a well-defined $ZW_{\mathbb{T}}$ diagram.
By Lemma \ref{xtowpreservesemantics} the interpretation $\llbracket \cdot \rrbracket_{XW}$ preserves the standard interpretation.
On the other hand, the interpretation $\llbracket \cdot \rrbracket_{WX}$ from the $ZW_{\mathbb{T}}$-calculus to the $ZX_{ C+T}$-calculus is the same as the interpretation $\llbracket \cdot \rrbracket_{WX}$ from the $ZW_{\mathbb{C}}$-calculus to the $ZX_{ full}$-calculus, except for the $r$-phase part:
\[
\left\llbracket\tikzfig{diagrams//emptysquare}\right\rrbracket_{WX}= \tikzfig{diagrams//emptysquare}, \quad
\left\llbracket\tikzfig{diagrams//Id}\right\rrbracket_{WX}= \tikzfig{diagrams//Id}, \quad
\left\llbracket\tikzfig{diagrams//cap}\right\rrbracket_{WX}= \tikzfig{diagrams//cap}, \quad
\left\llbracket\tikzfig{diagrams//cup}\right\rrbracket_{WX}= \tikzfig{diagrams//cup},
\]
\[
\left\llbracket\tikzfig{diagrams//swap}\right\rrbracket_{WX}= \tikzfig{diagrams//swap}, \quad
\left\llbracket\tikzfig{diagrams//spiderwhite}\right\rrbracket_{WX}= \tikzfig{diagrams//generator_spider-nonum}, \quad
\left\llbracket\tikzfig{diagrams//piblack}\right\rrbracket_{WX}= \tikzfig{diagrams//pired},
\]
\[
\left\llbracket\tikzfig{diagrams//corsszw}\right\rrbracket_{WX}= \tikzfig{diagrams//crossxz}, \quad \quad
\left\llbracket\tikzfig{diagrams//wblack}\right\rrbracket_{WX}= \tikzfig{diagrams//winzx}, \]
\[
\left\llbracket\tikzfig{diagrams//rgatewhite}\right\rrbracket_{WX}=\tikzfig{diagrams//rwphasenew2},
\]
\[ \llbracket D_1\otimes D_2 \rrbracket_{WX} = \llbracket D_1 \rrbracket_{WX} \otimes \llbracket D_2 \rrbracket_{WX}, \quad
\llbracket D_1\circ D_2 \rrbracket_{WX} = \llbracket D_1 \rrbracket_{WX} \circ \llbracket D_2 \rrbracket_{WX},
\]
where $r=a_0e^{i\alpha_0}+a_1e^{i\alpha_1}+a_2e^{i\alpha_2}+a_3e^{i\alpha_3},~ 0\leq a_j \in \mathbb Z[\frac{1}{2}], ~\alpha_j=j\frac{\pi}{4} ~or~ j\frac{\pi}{4}+\pi,~~ j=0, 1, 2, 3$. Note that the representation of $a_j$ box is described in Lemma \ref{lem:lamb_tri_decomposition2}.
The interpretation of the $r$-phase may look somewhat complicated. However, the interpreted ZX diagram is simply the result of three applications of the addition rule (AD$^{\prime}$) according to the identity $r=a_0e^{i\alpha_0}+a_1e^{i\alpha_1}+a_2e^{i\alpha_2}+a_3e^{i\alpha_3}$, and, viewed as a diagram of the $ZX_{ full}$-calculus, it clearly has the same standard interpretation as the $r$-phase. Then by Lemma \ref{wtoxpreservesemantics}, the interpretation
$\llbracket \cdot \rrbracket_{WX}$ from the $ZW_{\mathbb{T}}$-calculus to the $ZX_{ C+T}$-calculus preserves the standard interpretation.
Next we prove that translating from ZX to ZW and then back to ZX has the same effect as performing no translation at all.
\begin{lemma}
\begin{equation}\label{greenpitriangl}
\tikzfig{diagrams//greenpitriangle} \end{equation} \end{lemma}
\begin{proof}
\begin{equation*}
\tikzfig{diagrams//greenpitriangleproof}
\end{equation*} \end{proof}
\begin{lemma}
\begin{equation}\label{redpilefttriangl}
\tikzfig{diagrams//redpilefttriangle} \end{equation} \end{lemma} \begin{proof}
\begin{equation*}
\tikzfig{diagrams//redpilefttriangleprf}
\end{equation*} \end{proof}
\begin{lemma}
\begin{equation}\label{emptyvariant}
\tikzfig{diagrams//emptyvariants} \end{equation} \end{lemma} \begin{proof}
\begin{equation*}
\tikzfig{diagrams//emptyvariantsprf}
\end{equation*} \end{proof}
\begin{lemma}\label{interpretationreversible2} Suppose $D$ is an arbitrary diagram in the $ZX_{ C+T}$-calculus. Then $ZX_{ C+T}\vdash \llbracket \llbracket D \rrbracket_{XW}\rrbracket_{WX} =D$. \end{lemma}
\begin{proof} By the construction of $\llbracket \cdot \rrbracket_{XW}$ and $\llbracket \cdot \rrbracket_{WX}$, we only need to prove the claim for the generators of the $ZX_{ C+T}$-calculus. Here we consider all the generators translated at the beginning of this section.
The first six generators are the same as the first six generators in the $ZW_{ \mathbb{T}}$-calculus, so we just need to check the last six generators. Since the red phase gate and the red spider are translated in terms of the translations of the Hadamard gate, the green phase gate and the green spider, we only need to consider four generators: the Hadamard gate, the green phase gate, the $\lambda$ box and the triangle.
For the green phase gate, if we write $e^{i\alpha}$ in the form $r=a_0+a_1e^{i\frac{\pi}{4}}+a_2e^{i\frac{2\pi}{4}}+a_3e^{i\frac{3\pi}{4}}$, then exactly one of the $a_i$ is non-zero. By the commutativity of addition \ref{sumcommutativity}, we have \[
\left\llbracket \left\llbracket\tikzfig{diagrams//alphagate}\right\rrbracket_{XW}\right\rrbracket_{WX}= \left\llbracket\tikzfig{diagrams//alphagatewhite}\right\rrbracket_{WX}\]
\[
\tikzfig{diagrams//alphadrvctnew}
\] For the Hadamard gate, we write $\frac{\sqrt{2}-2}{2}=-1+\frac{1}{2}e^{i\frac{\pi}{4}}+0e^{i\frac{2\pi}{4}}-\frac{1}{2}e^{i\frac{3\pi}{4}}=-1+\frac{1}{2}e^{i\frac{\pi}{4}}+0e^{i\frac{2\pi}{4}}+\frac{1}{2}e^{i\frac{-\pi}{4}}$, then
\[
\left\llbracket \left\llbracket\tikzfig{diagrams//HadaDecomSingleslt}\right\rrbracket_{XW}\right\rrbracket_{WX}=
\left\llbracket \tikzfig{diagrams//Hadamardwhite} \right\rrbracket_{WX} = \tikzfig{diagrams//Hadascalar} \left\llbracket \tikzfig{diagrams//Hadamardwhitescalar} \right\rrbracket_{WX}\]
\[
\tikzfig{diagrams//Hadamardxwx-new} \]
Finally, it is easy to check that \[
\left\llbracket \left\llbracket\tikzfig{diagrams//lambdabox}\right\rrbracket_{XW}\right\rrbracket_{WX}=\tikzfig{diagrams//lambdabox},\quad
\left\llbracket \left\llbracket\tikzfig{diagrams//triangle}\right\rrbracket_{XW}\right\rrbracket_{WX}=\tikzfig{diagrams//triangle}. \]
\end{proof}
For the same reason as in Section \ref{transzxzw} of Chapter \ref{chfull}, the translations $\llbracket \cdot \rrbracket_{XW}$ and $\llbracket \cdot \rrbracket_{WX}$ are mutually inverse.
\section{Completeness} \begin{proposition}\label{zwrulesholdinzx2} If $ZW_{\mathbb{T}}\vdash D_1=D_2$, then $ZX_{ C+T}\vdash \llbracket D_1 \rrbracket_{WX} =\llbracket D_2 \rrbracket_{WX}$.
\end{proposition}
\begin{proof} Since equalities in ZW and ZX are derived by rewriting rules, we need only prove that $ZX_{ C+T} \vdash \left\llbracket D_1\right\rrbracket_{WX} = \left\llbracket D_2\right\rrbracket_{WX}$ where $D_1=D_2$ is a rewriting rule of the $ZW_{\mathbb{T}}$-calculus. Most of the required derivations were already given in the completeness proof for the $ZX_{ full}$-calculus in the previous chapter; we only need to check the last five rules $rng^{r,s}_{\times}$, $rng^{r,s}_{+}$, $nat^{r}_{c}$, $nat^{r}_{\varepsilon c}$, $ph^{r}$, which involve white phases in the $ZW_{\mathbb{T}}$-calculus. The rules $nat^{r}_{\varepsilon c}$ and $ph^{r}$ are easy to check; for ease of reading, we deal with the rules $rng^{r,s}_{\times}$, $rng^{r,s}_{+}$ and $nat^{r}_{c}$ at the end of this chapter. \end{proof}
\begin{theorem}\label{maintheorem2} The $ZX_{ C+T}$-calculus is complete for Clifford+T QM: if $\llbracket D_1 \rrbracket =\llbracket D_2 \rrbracket$, then $ZX_{ C+T}\vdash D_1=D_2$.
\end{theorem}
\begin{proof} Suppose $D_1, D_2 \in ZX$ and $\llbracket D_1 \rrbracket =\llbracket D_2 \rrbracket$. Then by Lemma \ref{xtowpreservesemantics2}, $\llbracket \llbracket D_1 \rrbracket_{XW}\rrbracket = \llbracket D_1 \rrbracket= \llbracket D_2 \rrbracket=\llbracket \llbracket D_2 \rrbracket_{XW}\rrbracket $. Thus by the completeness of the $ZW_{ \mathbb{T}}$-calculus \cite{amar}, $ZW_{ \mathbb{T}}\vdash \llbracket D_1 \rrbracket_{XW}= \llbracket D_2 \rrbracket_{XW}$. Now by Proposition \ref{zwrulesholdinzx2}, $ZX_{ C+T}\vdash \llbracket \llbracket D_1 \rrbracket_{XW}\rrbracket_{WX} =\llbracket \llbracket D_2 \rrbracket_{XW}\rrbracket_{WX}$. Finally, by Lemma \ref{interpretationreversible2}, $ZX_{ C+T}\vdash D_1=D_2$. \end{proof}
Now we can give a new proof of Proposition \ref{cliftmatix}. \begin{proposition}\label{rings} The diagrams of the $ZX_{ C+T}$-calculus correspond exactly to the matrices over the ring $ \mathbb{Z}[i, \frac{1}{\sqrt{2}}]$. \end{proposition} \begin{proof} The triangle \tikzfig{diagrams//triangle} has been represented in the $ZX_{\frac{\pi}{4}}$-calculus in \cite{Coeckebk} as follows: \begin{equation}\label{triangleslash2}
\tikzfig{diagrams//triangledecompose} \end{equation} It can be directly verified that the two diagrams on both sides of (\ref{triangleslash2}) have the same standard interpretation. Thus by Theorem \ref{maintheorem2} the identity (\ref{triangleslash2}) can be derived in the $ZX_{ C+T}$-calculus.
Obviously, each traditional generator of the $ZX_{ C+T}$-calculus corresponds to a matrix over the ring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$ via the standard interpretation. By Lemma \ref{lem:lamb_tri_decomposition2}, the new generators can also be represented by traditional generators and the triangle (described in terms of green and red nodes in (\ref{triangleslash2})), and thus correspond to matrices over $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$ as well. Therefore, each diagram of the $ZX_{ C+T}$-calculus, being composed of those generators, must correspond to a matrix over the ring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$.
Conversely, since $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]=\mathbb{Z}[\frac{1}{2}, e^{i\frac{\pi}{4}}]$, each matrix over the ring $\mathbb{T}$ can first be turned into a non-normalised quantum state by map-state duality, and then represented by a normal form in the $ZW_{\mathbb{T}}$-calculus with phases belonging to the ring $\mathbb{Z}[\frac{1}{2}, e^{i\frac{\pi}{4}}]$ \cite{amar}. Furthermore, by the interpretation $\llbracket \cdot \rrbracket_{WX}$ given in Section \ref{zwtozx2}, this ZW normal form can be translated to a diagram in the $ZX_{ C+T}$-calculus.
\end{proof}
In \cite{GilesSelinger}, Giles and Selinger established a correspondence between Clifford+T quantum circuits with some finite number of ancillas and the ring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$. Here we generalise this result to the case of the $ZX_{ C+T}$-calculus, which is not constrained by unitarity. Our proof here is more direct and thus simpler than the one presented in \cite{Emmanuel}.
\section*{Proof of Proposition \ref{zwrulesholdinzx2}} First we note that, for the sake of readability, the ZX diagrams obtained by the translation $\llbracket \cdot \rrbracket_{WX}$ will not be written out explicitly in this proof. For simplicity, we will use ZW rules in the diagrammatic reasoning instead of listing all the ZX rules which are used to derive the ZW rules under the interpretation $\llbracket \cdot \rrbracket_{WX}$. All the ZW rules used in this proof are $cut_w, cut_z, ba_{zw}, sym^x_w, ba_w$, where $cut_w, cut_z, ba_{zw}, ba_w$ are just the spider forms of the rules $asso_w, asso_z, nat^m_c, nat^m_w$ respectively, as presented in Figure \ref{figure3} and Figure \ref{figure4}. We use these spider forms to avoid many repetitive applications of their corresponding non-spider forms \cite{amar}.
\begin{proposition}(ZW rule $nat^{r}_{c}$)~\newline \\
\begin{equation}\label{wpscopy}
ZX\vdash
\left\llbracket~
\input{diagrams/natrc_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/natrc_RHS.tikz}
~\right\rrbracket_{WX},
\end{equation}
where $r=a_0e^{i\alpha_0}+a_1e^{i\alpha_1}+a_2e^{i\alpha_2}+a_3e^{i\alpha_3},~ 0\leq a_j \in \mathbb Z[\frac{1}{2}], ~\alpha_j=j\frac{\pi}{4} ~or~ j\frac{\pi}{4}+\pi,~~ j=0, 1, 2, 3$.
\end{proposition}
\begin{proof}
Let $c_k=a_ke^{i\alpha_k}, k=0, 1, 2, 3$. Then
\begin{equation}\label{wphaseint} \begin{array}{ll} \left\llbracket~
\input{diagrams/sinzwphase.tikz}
~\right\rrbracket_{WX}
=\tikzfig{diagrams//rwphasenew2}
\stackrel{cut_w}{=} \left\llbracket~ \input{diagrams/zwphasefuse.tikz} ~\right\rrbracket_{WX}&
\\ \stackrel{cut_z}{=} \left\llbracket~ \input{diagrams/zwphasecopy22.tikz} ~\right\rrbracket_{WX} \stackrel{ba_{zw}}{=} \left\llbracket~ \input{diagrams/zwphasecopy33.tikz} ~\right\rrbracket_{WX} \stackrel{cut_z}{=} \left\llbracket~ \input{diagrams/zwphasecopy44.tikz} ~\right\rrbracket_{WX} \end{array}
\end{equation}
Thus \begin{equation} \begin{array}{ll} \left\llbracket~
\input{diagrams/natrc_RHS.tikz}
~\right\rrbracket_{WX} \stackrel{(\ref{wphaseint})}{=} \left\llbracket~ \input{diagrams/zwphasecopy4.tikz} ~\right\rrbracket_{WX} \stackrel{ba_{w}}{=} \left\llbracket~ \input{diagrams/zwphasecopy5.tikz} ~\right\rrbracket_{WX} &
\\ \stackrel{cut_w,sym_w^x, TR13, TR14}{=} \left\llbracket~ \input{diagrams/zwphasecopy6.tikz} ~\right\rrbracket_{WX} \stackrel{(\ref{wphaseint})}{=} \left\llbracket~
\input{diagrams/natrc_LHS.tikz}
~\right\rrbracket_{WX}& \end{array} \end{equation}
\iffalse On the other hand, \begin{equation} \begin{array}{ll} \left\llbracket~
\input{diagrams/natrc_LHS.tikz}
~\right\rrbracket_{WX}
=\left\llbracket~
\input{diagrams/zwphasecopy2l.tikz}
~\right\rrbracket_{WX}
=\left\llbracket~
\input{diagrams/zwphasecopy3l.tikz}
~\right\rrbracket_{WX}&
\\
=\left\llbracket~
\input{diagrams/zwphasecopy4l.tikz}
~\right\rrbracket_{WX}
=\left\llbracket~
\input{diagrams/zwphasecopy5l.tikz}
~\right\rrbracket_{WX}&
\end{array} \end{equation} Therefore (\ref{wpscopy}) is proved. \fi \end{proof}
\begin{proposition}(ZW rule $rng^{r,s}_{+}$)~\newline \\
\begin{equation}\label{zwplus}
ZX\vdash
\left\llbracket~
\input{diagrams/rngrsp_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/rngrsp_RHS.tikz}
~\right\rrbracket_{WX},
\end{equation}
where $r=a_0e^{i\alpha_0}+a_1e^{i\alpha_1}+a_2e^{i\alpha_2}+a_3e^{i\alpha_3},~s=b_0e^{i\beta_0}+b_1e^{i\beta_1}+b_2e^{i\beta_2}+b_3e^{i\beta_3}, 0\leq a_j, b_j \in \mathbb Z[\frac{1}{2}], ~\alpha_j, \beta_j=j\frac{\pi}{4} ~or~ j\frac{\pi}{4}+\pi,~ j=0, 1, 2, 3$. \end{proposition}
\begin{proof} Let $c_k=a_ke^{i\alpha_k}, d_k=b_ke^{i\beta_k}, k=0, 1, 2, 3$. Then we have \begin{equation} \begin{array}{ll} \left\llbracket~
\input{diagrams/rngrsp_LHS.tikz}
~\right\rrbracket_{WX} \stackrel{(\ref{wphaseint})}{=} \left\llbracket~
\input{diagrams/zwadd.tikz}
~\right\rrbracket_{WX} \stackrel{cut_w}{=} \left\llbracket~
\input{diagrams/zwadd1.tikz}
~\right\rrbracket_{WX}&
\\ \stackrel{cut_w}{=} \left\llbracket~
\input{diagrams/zwadd2.tikz}
~\right\rrbracket_{WX} \stackrel{(AD^{\prime})}{=} \left\llbracket~
\input{diagrams/zwadd3.tikz}
~\right\rrbracket_{WX}
\stackrel{(\ref{wphaseint})}{=}
\left\llbracket~
\input{diagrams/rngrsp_RHS.tikz}
~\right\rrbracket_{WX}& \end{array} \end{equation}
\end{proof}
\begin{proposition}\label{zwphasefusionclt}(ZW rule $rng^{r,s}_{\times}$)~\newline \\
\begin{equation}
ZX\vdash
\left\llbracket~
\input{diagrams/rngrsx_LHS.tikz}
~\right\rrbracket_{WX}
=
\left\llbracket~
\input{diagrams/rngrsx_RHS.tikz}
~\right\rrbracket_{WX},
\end{equation}
where $r=a_0e^{i\alpha_0}+a_1e^{i\alpha_1}+a_2e^{i\alpha_2}+a_3e^{i\alpha_3},~s=b_0e^{i\beta_0}+b_1e^{i\beta_1}+b_2e^{i\beta_2}+b_3e^{i\beta_3}, 0\leq a_j, b_j \in \mathbb Z[\frac{1}{2}], ~\alpha_j, \beta_j=j\frac{\pi}{4} ~or~ j\frac{\pi}{4}+\pi,~ j=0, 1, 2, 3$. \end{proposition}
\begin{proof} Let $c_k=a_ke^{i\alpha_k}, d_k=b_ke^{i\beta_k}, k=0, 1, 2, 3$. Then by (\ref{wphaseint}) we have \begin{equation} \left\llbracket~
\input{diagrams/sinzwphase.tikz}
~\right\rrbracket_{WX}
= \left\llbracket~ \input{diagrams/zwphasecopy44.tikz} ~\right\rrbracket_{WX},~~ \left\llbracket~
\input{diagrams/sinzwphase2.tikz}
~\right\rrbracket_{WX}
= \left\llbracket~ \input{diagrams/zwphasefuse22.tikz} ~\right\rrbracket_{WX}
\end{equation} Therefore, \begin{equation} \begin{array}{ll}
\left\llbracket~
\input{diagrams/rngrsx_LHS.tikz}
~\right\rrbracket_{WX}
\stackrel{(\ref{wphaseint})}{=}
\left\llbracket~
\input{diagrams/zwphasefuse33.tikz}
~\right\rrbracket_{WX}
\stackrel{ba_w}{=}
\left\llbracket~
\input{diagrams/zwphasefuse44.tikz}
~\right\rrbracket_{WX}&
\\
\stackrel{TR13,TR14,cut_w}{=}
\left\llbracket~
\input{diagrams/zwphasefuse55.tikz}
~\right\rrbracket_{WX}&
\\
\stackrel{TR13,TR14,cut_w}{=}
\left\llbracket~
\input{diagrams/zwphasefuse66.tikz}
~\right\rrbracket_{WX}&
\\
\stackrel{(\ref{zwplus})}{=}
\left\llbracket~
\input{diagrams/rngrsx_RHS.tikz}
~\right\rrbracket_{WX}&
\end{array}
\end{equation}
\end{proof}
\iffalse
\section{Features of the new generators }\label{zxforct} In this section, we show the features of the two generators--the triangle and the $\lambda$ box.
The triangle notation was first introduced in \cite{Emmanuel} as a shorthand for the proof of completeness of the ZX-calculus for Clifford+T QM. Afterwards, it is employed as a generator for complete axiomatisations of the $ZX_{ full}$-calculus \cite{ngwang} and the $ZX_{ C+T}$-calculus \cite{ngwang2}. The purpose to use it as a generator is to make
the rewriting rules simple and the translations between the ZX-calculus and the ZW-calculus direct.
Moreover, very recently we find that the triangle can be an essential component for the construction of a Toffoli gate as shown in the following form:
\begin{equation}\label{toffoligate} \tikzfig{diagrams//toffoligtscalar} \end{equation} where the triangle with $-1$ on the top-left corner is the inverse of the normal triangle.
In contrast to the standard circuit form which realises the Toffoli gate in elementary gates \cite{Nielsen}, the form of (\ref{toffoligate}) is much more simpler, thus promising for simplifying Clifford + T quantum circuits with the aid of a bunch of ZX-calculus rules involving the triangles.
Unexpectedly, we also realise that the denotation of a slash box used in \cite{Coeckebk} to construct a Toffoli gate is just a triangle (up to a scalar) as shown in (\ref{triangleslash}).
Next we illustrate the feature of the $\lambda$ box. In chapter \ref{chfull} and the previous parts of this chapter, the $\lambda$ box is restricted to be parameterised by a non-negative real number. While in \cite{coeckewang}, it has been generalised to a general green phase of form \tikzfig{diagrams//greenbxa} with arbitrary complex number as a parameter. Similarly, we have the general red phase \tikzfig{diagrams//redbxb} \cite{coeckewang}. Below we give the spider form of general phase which are interpreted in Hilbert spaces:
\begin{equation}\label{gphaseinter}
\begin{array}{c} \left\llbracket \tikzfig{diagrams//generalgreenspider} \right\rrbracket=\ket{0}^{\otimes m}\bra{0}^{\otimes n}+a\ket{1}^{\otimes m}\bra{1}^{\otimes n}, \\ \left\llbracket \tikzfig{diagrams//generalredspider} \right\rrbracket=\ket{+}^{\otimes m}\bra{+}^{\otimes n}+a\ket{-}^{\otimes m}\bra{-}^{\otimes n}, \end{array} \end{equation} where $a$ is an arbitrary complex number. The generalised spider rules and colour change rule are depicted in the following:
\begin{equation}\label{generalspider}
\begin{array}{c}
\tikzfig{diagrams//generalgreenspiderfuse}, \quad \tikzfig{diagrams//generalredspiderfuse},\\
\tikzfig{diagrams//generalcolorchange}. \end{array} \end{equation} where $a, b$ are arbitrary complex numbers.
Now we consider the generalised supplementarity-- also called cyclotomic supplementarity, with supplementarity as a special case--which is interpreted as merging $n$ subdiagrams if the $n$ phase angles divide the circle uniformly \cite{jpvw}. The diagrammatic form of the generalised supplementarity is as follows: \begin{equation}\label{generalpsgt0}
\tikzfig{diagrams//generalsupp} \end{equation} where there are $n$ parallel wires in the diagram at the right-hand side.
Next we show that the generalised supplementarity can be seen as a special form of the generalised spider rule as shown in (\ref{generalspider}). For simplicity, we ignore scalars in the rest of this section.
First note that by comparing the normal form translated from the ZW-calculus \cite{amar}, we have \begin{equation}\label{generalpsgt}
\tikzfig{diagrams//generalpsgte} \end{equation} where $a\in \mathbb{C}, a\neq 1$.
In particular, \begin{equation}\label{generalpsgt2}
\tikzfig{diagrams//generalpsgte2} \end{equation}
where $\alpha \in [0, 2\pi), \alpha\neq \pi$. For $\alpha= \pi$, we can use the $\pi$ copy rule directly.
Then
\begin{equation}\label{generalpsgt3}
\tikzfig{diagrams//generalpsgte3} \end{equation} where we used the following formula given in \cite{jpvw}:
\begin{equation}\label{fracidentyty} \prod_{j=0}^{n-1}(1+e^{i(\alpha+\frac{j}{n}2\pi)})=1+e^{i(n\alpha+(n-1)\pi)} \end{equation}
Note that if $n$ is odd, then
\begin{equation}\label{generalpsgt4}
\tikzfig{diagrams//generalpsgte4} \end{equation}
If $n$ is even, then
\begin{equation}\label{generalpsgt5}
\tikzfig{diagrams//generalpsgte5} \end{equation}
It is not hard to see that if we compute the parity of $n$ in the right diagram of (\ref{generalpsgt0}) not considering the scalars, then by Hopf law we get the same result as shown in (\ref{generalpsgt4}) and (\ref{generalpsgt5}).
\section{Differences in axiomatisations of the the ZX-calculus for the Clifford+T QM } The main differences between the two complete axiomatisations of the ZX-calculus for the Clifford+T fragment QM shown in this chapter and that presented in \cite{Emmanuel} are as follows: \begin{enumerate} \item Although the number of rules (which is 22) listed in Figure \ref{figure1t} and Figure \ref{figure2t} of this chapter is much more than that of \cite{Emmanuel} (which is 12), the number of nodes in each non-scalar diagram of the extended part of rules (non-stablizer part) is at most 8 in this paper, in contrast to a maximum of 13 in \cite{Emmanuel}. Furthermore, the rule (C) in \cite{Emmanuel} has 10 nodes on each sides of the equality. Their lager rules are far from easy to be used practically, while our rules are relatively small thus can be employed more effectively, as shown in chapter \ref{chapply}.
\item Following \cite{ngwang}, we have still introduced two more generators-- the triangle (which appeared as a short notation in \cite{Emmanuel}) and the $\lambda$ box-- in this chapter, while there are only green nodes and red nodes as generators in \cite{Emmanuel}. Our new generators are features rather than novelties: the triangle can be employed as an essential component to construct a Toffoli gate in a very simple form, while the $\lambda$ box can be slightly extended to a generalised phase so that the generalised supplementarity (also called cyclotomic supplementarity, with supplementarity as a special case) \cite{jpvw} is naturally seen as a special case of the generalised spider rule. In addition, due to the introduction of the new generators, our proof for that the Clifford +T fragment of the ZX-calculus exactly corresponds to matrices over the ring $\mathbb{Z}[i, \frac{1}{\sqrt{2}}]$ is much simpler than the corresponding proof given in \cite{Emmanuel}.
\item The axiomatisation given in \cite{Emmanuel} depends on a particular decomposition of the triangle as shown in the second equality of (\ref{triangleslash}). However, there exist different forms of decomposition of the triangle, e.g., the forms in (\ref{triangleslash}). Therefore, it may result in different axiomatisations of the ZX-calculus. Our employing of the triangle as a generator successfully avoids this problem, thus makes our axiomatisation more fundamental in this sense.
\item The translation from the $ZX_{ C+T}$-calculus to the $ZW_{\mathbb{T}}$-calculus as presented in this chapter is quite different from the corresponding one given in \cite{Emmanuel}. Ours is more direct, and is even reversible, thus enables establishing an isomorphism of the $ZX_{ C+T}$-calculus and the $ZW_{\mathbb{T}}$-calculus, using the same reason as for the isomorphism of the $ZX_{ full}$-calculus and the $ZW_{ \mathbb{C}}$-calculus in the previous chapter.
\end{enumerate} \fi
\chapter{Completeness for 2-qubit Clifford+T circuits}
Quantum computing can outperform classical computing at solving important problems, such as factoring large numbers, by means of quantum algorithms. But before implementing a quantum algorithm, one needs to turn the algorithm into elementary gates. In principle, one could use any approximately universal set of elementary gates for gate synthesis. However, the most widely used approximately universal set of elementary gates is the Clifford+T gate set: any unitary transformation can be approximated to arbitrary precision by a circuit of Clifford+T gates. A useful cost measure for realising Clifford+T circuits is the T count, i.e., the number of T gates in the circuit.
With the motivation of minimising the T count, Selinger and Bian set up a set of circuit relations which is complete for 2-qubit Clifford+T circuits \cite{ptbian}. Here complete means that any two 2-qubit Clifford+T circuits whose corresponding matrices are equal can be proved to be equal circuits using only this set of relations. Unlike the single-qubit case, there is no known algorithm that is both optimal and efficient for multi-qubit unitary synthesis.
As is evident from the main theorem in \cite{ptbian}, it is not a simple task to use the circuit relations to simplify quantum circuits. In contrast, the ZX-calculus has intuitive and simple rewriting rules for transforming one diagram into another.
It is possible to use the ZX-calculus to efficiently simplify Clifford+T circuits. Although the ZX-calculus is complete both for overall pure qubit quantum mechanics (QM) and for Clifford+T pure qubit QM, as we have shown in the last two chapters, it is more efficient to use a small set of ZX rules for the purpose of circuit simplification.
The first step towards this goal came from Backens' work on the completeness of the ZX-calculus for the single-qubit Clifford+T group \cite{Miriam1ct}. However, the strategy used for the single-qubit Clifford+T group does not apply to multi-qubit circuits.
In this chapter, we verify all the circuit relations (17 equations) of \cite{ptbian} using a small set of simple ZX-calculus rules (9 rules). Since the 17 relations are complete for 2-qubit Clifford+T circuits, so is the ZX-calculus with the 9 rules. Obviously, any single-qubit Clifford+T circuit can be seen as a 2-qubit Clifford+T circuit with one wire left empty of quantum gates. Therefore, our result can also be seen as a completeness result for the single-qubit Clifford+T ZX-calculus.
In comparison with the normal ZX rules \cite{CoeckeDuncan, bpw}, we have added only the (P) rule, which expresses a property of the general Euler decompositions in ZXZ and XZX forms. The problem of giving an analytic solution for converting from ZXZ to XZX Euler decompositions of single-qubit unitary gates was posed by Schr\"oder de Witt and Zamdzhiev in \cite{Vladimir}. Here we first give an explicit formula relating the ZXZ and XZX Euler decompositions of generalised Z and X phases, and then obtain as a corollary the formula for normal Z and X phases. Note that we do not have to know the precise values of the angles in the (P) rule to verify those circuit relations, which makes the rule much simpler to use when simplifying circuits. Also, at intermediate stages of the ZX computations we go beyond the range of Clifford+T and break through the constraint of unitarity. These can be seen as advantages of the ZX-calculus in comparison with quantum circuits.
Given that the ZX-calculus is universally complete, our result here is an important step towards efficient simplification of arbitrary $n$-qubit Clifford+T circuits.
For convenience, in this chapter we use the scalar-free version of the ZX-calculus as described in Section \ref{zxgeneral}.
The results of this chapter have been published in the paper \cite{coeckewang}, coauthored with Bob Coecke.
\section{Complete relations for 2-qubit Clifford+T quantum circuits} In 2015, a set of relations that is complete for 2-qubit Clifford+T quantum circuits was given by Selinger and Bian \cite{ptbian}. We list these relations in Theorem \ref{ptbianthm} in the language of the ZX-calculus, ignoring non-zero scalars. Note that given a 2-qubit Clifford+T quantum circuit, it is easy to translate it into a ZX diagram. Conversely, given an arbitrary ZX diagram with two inputs and two outputs, it is usually hard to decide whether it represents a 2-qubit circuit.
\begin{theorem}[\cite{ptbian}]\label{ptbianthm} The following set of relations is complete for 2-qubit Clifford+T circuits: \begin{equation}\label{cmrns-2} \tikzfig{diagrams//completerelationlist-2} \end{equation} \begin{equation}\label{cmrns-1} \tikzfig{diagrams//completerelationlist-1} \end{equation} \begin{equation}\label{cmrns0} \tikzfig{diagrams//completerelationlist0} \end{equation} \begin{equation}\label{cmrns1} \tikzfig{diagrams//completerelationlist1} \end{equation} \begin{equation}\label{cmrns2} \tikzfig{diagrams//completerelationlist2} \end{equation} \begin{equation}\label{cmrns3} \tikzfig{diagrams//completerelationlist3} \end{equation} \begin{equation}\label{cmrns4} \tikzfig{diagrams//completerelationlist4} \end{equation} \begin{equation}\label{cmrns5} \tikzfig{diagrams//completerelationlist5} \end{equation} \begin{equation}\label{cmrns6} \tikzfig{diagrams//completerelationlist6} \end{equation} \begin{equation}\label{cmrns7} \tikzfig{diagrams//completerelationlist7} \end{equation} \begin{equation}\label{cmrns8} \tikzfig{diagrams//completerelationlist8} \end{equation} \begin{equation}\label{cmrns9} \tikzfig{diagrams//completerelationlist9} \end{equation} \begin{equation}\label{cmrns10} \tikzfig{diagrams//completerelationlist10} \end{equation} \begin{equation}\label{cmrns11} \tikzfig{diagrams//completerelationlist11} \end{equation} \begin{equation}\label{cmrns12} \left(\tikzfig{diagrams//pbct1}\right)^2=\tikzfig{diagrams//completerelationlist12} \end{equation} \begin{equation}\label{cmrns13} \left(\tikzfig{diagrams//pbctb}\right)^2=\tikzfig{diagrams//completerelationlist12} \end{equation} \begin{equation}\label{cmrns14} \begin{array}{ll} \tikzfig{diagrams//completerelationlist141}&\\ \tikzfig{diagrams//completerelationlist142}&\\ =\tikzfig{diagrams//completerelationlist12} \end{array} \end{equation} \end{theorem}
\section{The ZX-calculus for 2-qubit Clifford+T quantum circuits} Although we have several complete axiomatisations of the ZX-calculus for both the whole of qubit quantum mechanics and the Clifford+T fragment, it is more practical to have a small set of rewriting rules just for 2-qubit Clifford+T quantum circuits. In this section, we use the scalar-free version of the ZX-calculus described in Section \ref{zxgeneral}. The generators of the ZX-calculus for 2-qubit Clifford+T quantum circuits are the traditional generators of the qubit ZX-calculus as given in Table \ref{qbzxgenerator}. The structural rules are just those given in (\ref{compactstructure}) and (\ref{compactstructureslide}). The non-structural rewriting rules of the ZX-calculus for 2-qubit Clifford+T quantum circuits are listed in the following figure: \begin{figure}
\caption{Rules of ZX-calculus for 2-qubit Clifford+T quantum circuits, where $\alpha, \beta \in[0, 2\pi)$. The exact formula for the rule (P) is given in (\ref{zxztoxzxcreq}). The red-green colour swapped version and upside-down flipped version of these rules still hold.}
\label{figure2qbt}
\end{figure}
\FloatBarrier
The main difference between the rules in Figure \ref{figure2qbt} for 2-qubit Clifford+T quantum circuits and the traditional rules in Figure \ref{figure1sfy} for the ZX$_{full}$-calculus is that the former include a new identity called the (P) rule. The soundness of the (P) rule is proved in Corollary \ref{zxztoxzxcr}; however, in this thesis we only need to know that the (P) rule holds and to use the property given in Corollary \ref{zxztoxzxcr} that if $\alpha_1=\gamma_1$, then $\alpha_2=\gamma_2$, and if $\alpha_1=-\gamma_1$, then $\alpha_2=\pi+\gamma_2$. The exact values of the angles in the (P) rule need not be known for the proof of the last three equations in the previous section. We add the (P) rule to the ZX-calculus for 2-qubit Clifford+T quantum circuits because we have no efficient way to prove equations (\ref{cmrns12}), (\ref{cmrns13}) and (\ref{cmrns14}) in the ZX-calculus without it.
\section{ Verification of the complete relations in the ZX-calculus} \subsection{ Derivation of the (P) rule} Firstly, we explain how the ZX rule (P) is obtained.
For arbitrary complex numbers $a, b$, define \tikzfig{diagrams//greenbxa} as a general green phase which has the matrix form \[
\begin{pmatrix}
1 & 0 \\
0 & a \end{pmatrix},
\] define \tikzfig{diagrams//redbxb} as a general red phase which has the matrix form \[
\begin{pmatrix}
1+b & 1-b \\
1-b & 1+b \end{pmatrix}.
\]
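Note that for $a=e^{i\alpha}$ the general green phase is just the usual green phase gate, and for $b=e^{i\beta}$ the general red phase is, up to the (here ignored) scalar $2$, the usual red phase gate, since a direct computation gives
\[
\begin{pmatrix}
1+b & 1-b \\
1-b & 1+b \end{pmatrix}
=2H\begin{pmatrix}
1 & 0 \\
0 & b \end{pmatrix}H,
\qquad
H=\frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1 \\
1 & -1 \end{pmatrix}.
\]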
\begin{lemma}\label{phaseclcg}(General phases colour-swap law)
Let \begin{equation}\label{taouvst} \begin{array}{ll} \tau= (1-\lambda_2)(\lambda_1+\lambda_3)+(1+\lambda_2)(1+\lambda_1\lambda_3),&\\ U=(1+\lambda_2)(\lambda_1\lambda_3-1), &V=( 1-\lambda_2)(\lambda_1-\lambda_3),\\ S=( 1-\lambda_2)(\lambda_1+\lambda_3)-(1+\lambda_2)(1+\lambda_1\lambda_3), & T=\tau(U^2-V^2). \end{array} \end{equation}
Then \begin{equation}\label{colorspps} \tikzfig{diagrams//gpcswp} \end{equation} where
$$ \sigma_1=-i(U+V)\sqrt{\frac{S}{T}},~~ \sigma_2=\frac{\tau+i\sqrt{\frac{T}{S}}}{\tau-i\sqrt{\frac{T}{S}}} ,~~ \sigma_3=-i(U-V)\sqrt{\frac{S}{T}}.
$$
In particular, if $\lambda_1=\lambda_3,$ then $\sigma_1=\sigma_3$; if $\lambda_1\lambda_3=1$, then $\sigma_1=-\sigma_3$; if $\lambda_1\lambda_3=-1$, then $\sigma_1\sigma_3=-1$; if $\lambda_1=-\lambda_3,$ then $\sigma_1\sigma_3=1$.
\end{lemma} \begin{proof} The matrix of the left-hand-side of (\ref{colorspps}) is \[ \begin{pmatrix}
1 & 0 \\
0 & \lambda_3
\end{pmatrix}
\begin{pmatrix}
1+\lambda_2 & 1-\lambda_2 \\
1-\lambda_2 & 1+\lambda_2
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & \lambda_1
\end{pmatrix}
=\begin{pmatrix}
1+\lambda_2 & \lambda_1( 1-\lambda_2)\\
\lambda_3( 1-\lambda_2) & \lambda_1 \lambda_3( 1+\lambda_2)
\end{pmatrix}
\]
The matrix of the right-hand side of (\ref{colorspps}) is $$
\begin{pmatrix}
1+\sigma_3 & 1-\sigma_3 \\
1-\sigma_3 & 1+\sigma_3
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & \sigma_2
\end{pmatrix}
\begin{pmatrix}
1+\sigma_1 & 1-\sigma_1 \\
1-\sigma_1 & 1+\sigma_1
\end{pmatrix}
$$ \[ =\begin{pmatrix}
(1+\sigma_3) (1+\sigma_1) +(1-\sigma_3) \sigma_2(1-\sigma_1)&
(1+\sigma_3) (1-\sigma_1) +(1-\sigma_3) \sigma_2(1+\sigma_1) \\
(1-\sigma_3) (1+\sigma_1) +(1+\sigma_3) \sigma_2(1-\sigma_1) &
(1-\sigma_3) (1-\sigma_1) +(1+\sigma_3) \sigma_2(1+\sigma_1)
\end{pmatrix}
:=\begin{pmatrix}
X & Y \\
Z & W
\end{pmatrix} \] To let the equality (\ref{colorspps}) hold, there must exist a non-zero complex number $k$ such that \begin{equation}\label{mtrixk} \begin{pmatrix}
X & Y \\
Z & W
\end{pmatrix}=
k\begin{pmatrix}
1+\lambda_2 & \lambda_1( 1-\lambda_2)\\
\lambda_3( 1-\lambda_2) & \lambda_1 \lambda_3( 1+\lambda_2)
\end{pmatrix} \end{equation}
Then $$X+Y=2(1+\sigma_2+\sigma_3-\sigma_2\sigma_3)=k[1+\lambda_2 + \lambda_1( 1-\lambda_2)]$$ \[ Z+W=2(1-\sigma_3)+2\sigma_2(1+\sigma_3)=2(1+\sigma_2-\sigma_3+\sigma_2\sigma_3)=k[ \lambda_3( 1-\lambda_2)+\lambda_1\lambda_3(1+\lambda_2) ] \] Thus \[ X+Y+Z+W=4(1+\sigma_2)=k[(1+\lambda_2)(1+\lambda_1\lambda_3)+( 1-\lambda_2)(\lambda_1+\lambda_3)], \] i.e., \[ \sigma_2=\frac{k}{4}[(1+\lambda_2)(1+\lambda_1\lambda_3)+( 1-\lambda_2)(\lambda_1+\lambda_3)]-1=\frac{k}{4}\tau -1, \] and
\[ Z+W-(X+Y)=4\sigma_3(\sigma_2-1)=k[(1+\lambda_2)(\lambda_1\lambda_3-1)+( 1-\lambda_2)(\lambda_3-\lambda_1)], \] i.e., \[ \sigma_3=\frac{\frac{k}{4}[(1+\lambda_2)(\lambda_1\lambda_3-1)+( 1-\lambda_2)(\lambda_3-\lambda_1)]}{\frac{k}{4}\tau -2}=\frac{k[(1+\lambda_2)(\lambda_1\lambda_3-1)+( 1-\lambda_2)(\lambda_3-\lambda_1)]}{k\tau -8}. \] Similarly, $$X+Z=2(1+\sigma_1+\sigma_2-\sigma_1\sigma_2)=k[1+\lambda_2 + \lambda_3( 1-\lambda_2)]$$ \[ Y+W=2(1+\sigma_2-\sigma_1+\sigma_2\sigma_1)=k[ \lambda_1( 1-\lambda_2)+\lambda_1\lambda_3(1+\lambda_2) ] \]
\[ Y+W-(X+Z)=4\sigma_1(\sigma_2-1)=k[(1+\lambda_2)(\lambda_1\lambda_3-1)+( 1-\lambda_2)(\lambda_1-\lambda_3)], \]
i.e., \[ \sigma_1=\frac{\frac{k}{4}[(1+\lambda_2)(\lambda_1\lambda_3-1)+( 1-\lambda_2)(\lambda_1-\lambda_3)]}{\frac{k}{4}\tau -2}=\frac{k[(1+\lambda_2)(\lambda_1\lambda_3-1)+( 1-\lambda_2)(\lambda_1-\lambda_3)]}{k\tau -8}, \] Now we decide the value of $k$. Let $U=(1+\lambda_2)(\lambda_1\lambda_3-1), ~~V=( 1-\lambda_2)(\lambda_1-\lambda_3).$ Then \[ \sigma_1+\sigma_3=\frac{2kU}{k\tau -8}, ~~\sigma_1\sigma_3=\frac{k^2(U^2-V^2)}{(k\tau -8)^2}, \] Furthermore, \[ X=(1+\sigma_3) (1+\sigma_1) +(1-\sigma_3) \sigma_2(1-\sigma_1)=k(1+\lambda_2), \] i.e., \[ 1+\sigma_1+\sigma_2 +\sigma_3 +\sigma_1\sigma_3-\sigma_1\sigma_2-\sigma_2\sigma_3+\sigma_1\sigma_2\sigma_3=k(1+\lambda_2), \] by rearrangement, we have \[ (\sigma_1 +\sigma_3)(1 -\sigma_2)+(1+\sigma_2)(1+\sigma_1\sigma_3)=k(1+\lambda_2). \]
Therefore, \[ \frac{2kU}{k\tau -8}(2 -\frac{k}{4}\tau)+\frac{k}{4}\tau(1+\frac{k^2(U^2-V^2)}{(k\tau -8)^2})=k(1+\lambda_2). \] Divide by $k$ on both sides, then multiply by $(k\tau -8)^2$ on both sides, we obtain a quadratic equation of $k$: \[ 2U(k\tau -8)(2 -\frac{k}{4}\tau)+\frac{1}{4}\tau[(k\tau -8)^2+k^2(U^2-V^2)]=(k\tau -8)^2(1+\lambda_2). \] By rearrangement, we have \[ (k\tau -8)^2[\tau-2U -4(1+\lambda_2)]+k^2\tau(U^2-V^2)=0. \] Let \[ \begin{array}{l} S=\tau-2U -4(1+\lambda_2)\\ =(1+\lambda_2)(1+\lambda_1\lambda_3)+( 1-\lambda_2)(\lambda_1+\lambda_3)-2[(1+\lambda_2)(\lambda_1\lambda_3-1)+2(1+\lambda_2)]\\ = ( 1-\lambda_2)(\lambda_1+\lambda_3)-(1+\lambda_2)(1+\lambda_1\lambda_3),\\ T=\tau(U^2-V^2). \end{array} \] Then the equation can be rewritten as \[ (S\tau^2+T)k^2-16S\tau k+64S=0. \] Solving this equation, we have \[ k=\frac{8S\tau \pm 8\sqrt{-ST}}{S\tau^2+T}. \] When we calculate the square root, we do not have to consider its sign, hence we can write $ k$ as \[ k=\frac{8S\tau + 8\sqrt{-ST}}{S\tau^2+T}. \] Now \[ \frac{8}{k}=\frac{8}{\frac{8S\tau + 8\sqrt{-ST}}{S\tau^2+T}}=\frac{S\tau^2+T}{S\tau + \sqrt{-ST}}=\frac{(\sqrt{S}\tau+i\sqrt{T})(\sqrt{S}\tau-i\sqrt{T})}{\sqrt{S}(\sqrt{S}\tau+i\sqrt{T})}=\tau-i\sqrt{\frac{T}{S}}, \] i.e., $$ k=\frac{8}{\tau-i\sqrt{\frac{T}{S}}}. $$
Then $$ \sigma_1=\frac{k(U+V)}{k\tau-8}=\frac{U+V}{\tau-\frac{8}{k}}=\frac{U+V}{i\sqrt{\frac{T}{S}}} =-i(U+V)\sqrt{\frac{S}{T}}.$$ $$ \sigma_3=-i(U-V)\sqrt{\frac{S}{T}}, ~~\sigma_2=\frac{\tau+i\sqrt{\frac{T}{S}}}{\tau-i\sqrt{\frac{T}{S}}}. $$
If $\lambda_1\lambda_3=-1$, then clearly $\tau=S, T=S(U^2-V^2)$. Thus $\frac{S}{T}=\frac{1}{U^2-V^2}$. Therefore,
$\sigma_1\sigma_3=[-i(U+V)\sqrt{\frac{S}{T}}][-i(U-V)\sqrt{\frac{S}{T}}]=-(U^2-V^2)\frac{S}{T}=-1$. Similarly, if $\lambda_1=-\lambda_3,$ then $\sigma_1\sigma_3=1$.
\end{proof}
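As an informal sanity check (not part of the proof above), the closing claims can also be tested numerically. The following sketch, written in Python with NumPy and purely illustrative, evaluates $\sigma_1\sigma_3=-(U^2-V^2)\frac{S}{T}$ directly from the formulas for $U$, $V$, $\tau$, $S$ and $T$ given above, for randomly chosen unit-modulus $\lambda_1,\lambda_2$ and with $\lambda_3$ fixed by each of the two constraints.
\begin{verbatim}
import numpy as np

# Numerical check of the closing claims: sigma_1*sigma_3 = -1 when
# lambda_1*lambda_3 = -1, and sigma_1*sigma_3 = +1 when lambda_1 = -lambda_3.
# U, V, tau, S, T are taken verbatim from the formulas in the text.
def sigma1_sigma3(l1, l2, l3):
    U = (1 + l2) * (l1 * l3 - 1)
    V = (1 - l2) * (l1 - l3)
    tau = (1 + l2) * (1 + l1 * l3) + (1 - l2) * (l1 + l3)
    S = (1 - l2) * (l1 + l3) - (1 + l2) * (1 + l1 * l3)
    T = tau * (U**2 - V**2)
    return -(U**2 - V**2) * S / T

rng = np.random.default_rng(0)
a, b = rng.uniform(0, 2 * np.pi, 2)
l1, l2 = np.exp(1j * a), np.exp(1j * b)
print(sigma1_sigma3(l1, l2, -1 / l1))   # lambda_1*lambda_3 = -1  ->  -1
print(sigma1_sigma3(l1, l2, -l1))       # lambda_1 = -lambda_3    ->  +1
\end{verbatim}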
\begin{corollary}\label{zxztoxzxcr} For $\alpha_1, \beta_1, \gamma_1 \in (0, ~2\pi)$ we have: \begin{equation}\label{zxztoxzxcreq} \tikzfig{diagrams//zxztoxzx}\qquad\mbox{with}\quad \left\{ \begin{array}{l} \alpha_2=\arg z+\arg z_1\\
\beta_2=2\arg (|\frac{z}{z_1}|+i)\\ \gamma_2=\arg z-\arg z_1 \end{array} \right. \end{equation} where: \[ \begin{array}{l} z=\cos\frac{\beta_1}{2}\cos\frac{\alpha_1+\gamma_1}{2}+i\sin\frac{\beta_1}{2}\cos\frac{\alpha_1-\gamma_1}{2} \qquad z_1=\cos\frac{\beta_1}{2}\sin\frac{\alpha_1+\gamma_1}{2}-i\sin\frac{\beta_1}{2}\sin\frac{\alpha_1-\gamma_1}{2} \end{array} \] So if $\alpha_1=\gamma_1$, then $\alpha_2=\gamma_2$, and if $\alpha_1=-\gamma_1$,
then $\alpha_2=\pi+\gamma_2$. \end{corollary}
\begin{proof}
In (\ref{colorspps}), let $\lambda_1=e^{i\alpha_1}, \lambda_2=e^{i\beta_1}, \lambda_3=e^{i\gamma_1}.$ Then for the values of $ U, V, S, \tau$ in (\ref{taouvst}) we have $$ \begin{array}{ll} U=4ie^{i\frac{\alpha_1+\beta_1+\gamma_1}{2}}\cos\frac{\beta_1}{2}\sin\frac{\alpha_1+\gamma_1}{2}, & V=4e^{i\frac{\alpha_1+\beta_1+\gamma_1}{2}}\sin\frac{\beta_1}{2}\sin\frac{\alpha_1-\gamma_1}{2},\\ S=4e^{i\frac{\alpha_1+\beta_1+\gamma_1}{2}}z, & \tau=4e^{i\frac{\alpha_1+\beta_1+\gamma_1}{2}}\overline{z}, \end{array} $$ where $z=\cos\frac{\beta_1}{2}\cos\frac{\alpha_1+\gamma_1}{2}+i\sin\frac{\beta_1}{2}\cos\frac{\alpha_1-\gamma_1}{2}$, $\overline{z}$ is the complex conjugate of $z$. Also, if we let $z_1=\cos\frac{\beta_1}{2}\sin\frac{\alpha_1+\gamma_1}{2}-i\sin\frac{\beta_1}{2}\sin\frac{\alpha_1-\gamma_1}{2}$, then $$ U+V=4ie^{i\frac{\alpha_1+\beta_1+\gamma_1}{2}}z_1, ~~U-V=4ie^{i\frac{\alpha_1+\beta_1+\gamma_1}{2}}\overline{z}_1. $$ Thus $$
\frac{U+V}{U-V}=\frac{z_1}{\overline{z}_1}=\frac{z_1^2}{|z_1|^2}, ~~\sqrt{\frac{U+V}{U-V}}=\frac{z_1}{|z_1|}=e^{i\theta}. $$
where $|z_1|$ is the magnitude of the complex number $z_1$, and $\theta=\arg z_1\in [0, 2\pi)$ is the phase of $z_1$. Similarly, we have $$
\sqrt{\frac{z}{\overline{z}}}=\frac{z}{|z|}=e^{i\phi}, $$ where $\phi=\arg z\in [0, 2\pi)$ is the phase of $z$. Therefore, $$ \begin{array}{ll} \sigma_1=-i(U+V)\sqrt{\frac{S}{T}}=-i(U+V)\sqrt{\frac{S}{\tau(U+V)(U-V)}}&\\ =-i\sqrt{\frac{S}{\tau}\frac{(U+V)}{(U-V)}}=-i\sqrt{-\frac{z}{\overline{z}}\frac{z_1}{\overline{z_1}}}=-iie^{i\phi}e^{i\theta}=e^{i(\phi+\theta)}, \end{array} $$ $$ \begin{array}{ll} \sigma_3=-i(U-V)\sqrt{\frac{S}{T}}=-i(U-V)\sqrt{\frac{S}{\tau(U+V)(U-V)}}&\\ =-i\sqrt{\frac{S}{\tau}\frac{(U-V)}{(U+V)}}=-i\sqrt{-\frac{z}{\overline{z}}\frac{\overline{z_1}}{z_1}}=-iie^{i\phi}e^{-i\theta}=e^{i(\phi-\theta)}, \end{array} $$ $$ \begin{array}{ll} \sigma_2=\frac{\tau+i\sqrt{\frac{T}{S}}}{\tau-i\sqrt{\frac{T}{S}}}=\frac{\tau\sqrt{\frac{S}{T}}+i}{\tau\sqrt{\frac{S}{T}}-i}=\frac{\sqrt{\frac{S\tau^2}{\tau(U^2-V^2)}}+i}{\sqrt{\frac{S\tau^2}{\tau(U^2-V^2)}}-i}\\
=\frac{\sqrt{\frac{S\tau}{(U+V)(U-V)}}+i}{\sqrt{\frac{S\tau}{(U+V)(U-V)}}-i}=\frac{\sqrt{\frac{z\overline{z}}{z_1\overline{z_1}}}+i}{\sqrt{\frac{z\overline{z}}{z_1\overline{z_1}}}-i}=\frac{|\frac{z}{z_1}|+i}{|\frac{z}{z_1}|-i}=\frac{z_2}{\overline{z_2}}=(\frac{z_2}{|z_2|})^2=e^{i2\varphi}, \end{array} $$
where $z_2=|\frac{z}{z_1}|+i, ~\varphi=\arg z_2$ is the phase of $z_2$. Let $\alpha_2=\phi+\theta, \beta_2=2\varphi, \gamma_2=\phi-\theta$. Apparently, if $\alpha_1=\gamma_1$, then $V=0,$ i.e., $e^{i\theta}=1,$ thus $\theta=0$. It follows that $\alpha_2=\gamma_2$. If $\alpha_1=-\gamma_1$, then $U=0,$ thus
$e^{i(\phi+\theta)}=-e^{i(\phi-\theta)}=e^{i(\pi+\phi-\theta)}$, i.e., $e^{i\alpha_2 }=e^{i(\pi+\gamma_2)}$.
Thus $\alpha_2= \pi+\gamma_2$.
\end{proof}
\subsection{Proof of completeness for 2-qubit Clifford+T quantum circuits}
In this subsection we show that the ZX-calculus with rules in Figure \ref{figure2qbt} is complete for 2-qubit Clifford+T quantum circuits. First we prove as lemmas the correctness of the three complicated circuit relations (\ref{cmrns12}), (\ref{cmrns13}) and (\ref {cmrns14}) within the ZX-calculus.
\begin{lemma} Let $A=$ \[ \tikzfig{diagrams//pbct1} \] then $A^2=I.$ \end{lemma}
\begin{proof} First we have $A=$ \[
\tikzfig{diagrams//pbct22new} \] By the rule (P), we can assume that
\begin{equation}\label{pbct32} \tikzfig{diagrams//pbct32new} \end{equation}
Since $e^{i\frac{-\pi}{4}} e^{i\frac{\pi}{4}}=1 $, we could let $\gamma=\alpha+\pi$. Also note that $$ \tikzfig{diagrams//pbct421new}=\left(\tikzfig{diagrams//pbct422new}\right)^{-1} $$ Thus \begin{equation}\label{pbct52} \tikzfig{diagrams//pbct52new} \end{equation}
Therefore, $A=$ $$ \tikzfig{diagrams//pbct62} $$ Finally, $A^2=$ $$ \tikzfig{diagrams//pbct722} $$ \end{proof}
\begin{lemma} Let $B=$ \[ \tikzfig{diagrams//pbctb} \] then $B^2=I$. \end{lemma}
\begin{proof} Firstly we have \[
\tikzfig{diagrams//pbctb22new} \]
By the rule (P), we can assume that
\begin{equation}\label{pbctb32} \tikzfig{diagrams//pbctb32new} \end{equation}
Since $e^{i\frac{-\pi}{4}} e^{i\frac{\pi}{4}}=1 $, we could let $\gamma=\alpha+\pi$.
Also note that $$ \tikzfig{diagrams//pbctb421new}=\left(\tikzfig{diagrams//pbctb422new}\right)^{-1} $$ Thus \begin{equation}\label{pbctb52} \tikzfig{diagrams//pbctb52new} \end{equation}
Therefore, $B=$ $$ \tikzfig{diagrams//pbctb62} $$
Finally, $B^2=$ $$ \tikzfig{diagrams//pbctb72new} $$ \end{proof}
\begin{lemma} Let $C=$ \[ \tikzfig{diagrams//pbctc1} \] $D=$ \[ \tikzfig{diagrams//pbctc2} \] then $D\circ C=I$. \end{lemma}
\begin{proof} Firstly we simplify the circuit $C$ as follows: \[ \tikzfig{diagrams//pbctc3new} \] By the rule (P), we can assume that \begin{equation}\label{circuitceq} \tikzfig{diagrams//pbctc32new} \end{equation} Then we have \begin{equation}\label{circuitceq4} \tikzfig{diagrams//pbctc34new} \end{equation}
Secondly, we simplify the circuit $D$ as follows: \begin{equation*} \tikzfig{diagrams//pbctc4new} \end{equation*}
By the rule (P), we have \begin{equation}\label{circuitceq2} \tikzfig{diagrams//pbctc33new} \end{equation} Therefore, \begin{equation}\label{circuitceq5} \tikzfig{diagrams//pbctc35new} \end{equation} Then we obtain the composition \begin{equation}\label{circuitceq6} \tikzfig{diagrams//pbctc36new} \end{equation}
By Corollary \ref{zxztoxzxcr}, we can assume that \begin{equation}\label{circuitceq7} \tikzfig{diagrams//pbctc37new} \end{equation}
Then for its inverse, we have \begin{equation}\label{circuitceq8} \tikzfig{diagrams//pbctc38new} \end{equation}
Also we can obtain \begin{equation}\label{circuitceq9} \tikzfig{diagrams//pbctc39new} \end{equation}
As a consequence, we have the inverse for both sides of (\ref{circuitceq9}):
\begin{equation}\label{circuitceq10} \tikzfig{diagrams//pbctc310new} \end{equation}
Now we can rewrite $D\circ C$ as
\begin{equation}\label{circuitceq11} \tikzfig{diagrams//pbctc311new} \end{equation}
We can depict the dashed part of (\ref{circuitceq11}) in a form of connected octagons:
\begin{equation}\label{octagon1eq} \tikzfig{diagrams//octagon1} \end{equation}
To deal with these octagons, we first need a property of hexagons, which was first given in \cite{duncan_graph_2009}:
\begin{equation}\label{hexagon1eq} \tikzfig{diagrams//hexagon1new} \end{equation}
By colour change rule (H), we have \begin{equation}\label{hexagon2eq} \tikzfig{diagrams//hexagon2} \end{equation}
Then we can rewrite (\ref{octagon1eq}) as follows: \begin{equation}\label{octagon2checkeq} \tikzfig{diagrams//octagon2checknew} \end{equation}
By the (P) rule, we have \begin{equation}\label{octasim2eq} \tikzfig{diagrams//octasim2new} \end{equation} where $z=x+\pi$. Then we take inverse for each side of (\ref{octasim2eq}) and obtain that
\begin{equation}\label{octasim3eq} \tikzfig{diagrams//octasim3new} \end{equation}
By rearranging the phases on both sides of (\ref{octasim2eq}), we have \begin{equation}\label{octasim4eq} \tikzfig{diagrams//octasim4new} \end{equation}
Thus \begin{equation}\label{octasim5eq} \tikzfig{diagrams//octasim5new} \end{equation}
Therefore, \begin{equation}\label{octasim6eq} \tikzfig{diagrams//octasim6new} \end{equation}
It then follows that \begin{equation}\label{octasim7eq} \tikzfig{diagrams//octasim7new} \end{equation}
If we take the inverse of the left-hand-side of (\ref{octasim7eq}), then we have
\begin{equation}\label{octasim8eq} \tikzfig{diagrams//octasim8new} \end{equation}
Now we can further simplify the final diagram in (\ref{octagon2checkeq}) as follows:
\begin{equation}\label{octasim9eq} \tikzfig{diagrams//octasim9new} \end{equation}
Finally, the composite circuit $D\circ C$ can be simplified as follows:
\begin{equation}\label{octasim10eq} \tikzfig{diagrams//octasim10new} \end{equation}
where we used the following property: \begin{equation}\label{octasim11eq} \tikzfig{diagrams//octasim11new} \end{equation} \end{proof}
The other relations are easily verified in the ZX-calculus; we omit those verifications here. Since the circuit relations have been proved complete for 2-qubit Clifford+T quantum circuits \cite{ptbian}, we have the following \begin{theorem} The ZX-calculus is complete for 2-qubit Clifford+T quantum circuits. \end{theorem}
In this chapter, the essential technique we employed is to go beyond the range of Clifford+T and break through the constraint of unitarity at intermediate stages by using the new (P) rule. Although we do not have a general strategy for simplifying arbitrary quantum circuits by the ZX-calculus, we would expect the (P) rule to have further applications in quantum computing and information.
\chapter{Completeness for qutrit stabilizer quantum mechanics}
The theory of quantum information and quantum computation (QIC) is traditionally based on binary logic (qubits). However, multi-valued logic has been recently proposed for quantum computing, using linear ion traps \cite{Muthukrishnan}, cold atoms \cite{Smith}, and entangled photons \cite{Malik}. In particular, metaplectic non-Abelian anyons were shown to be naturally connected to ternary (qutrit) logic, in contrast to binary logic, in topological quantum computing, and this forms the basis of the Metaplectic Topological Quantum Computer platform \cite{Shawn}.
Taking into consideration the practicality of qutrits and the fruitful results on completeness of the ZX-calculus that have been obtained in the previous chapters, it is natural to give a qutrit version of the ZX-calculus. However, the generalisation from qubits to qutrits is not trivial, since the qutrit-based structures are usually much more complicated than the qubit-based structures. For instance, the local Clifford group for qubits has only 24 elements, while the local Clifford group for qutrits has 216 elements. Thus it is no surprise that, as presented in \cite{GongWang}, the rules of the qutrit ZX-calculus are significantly different from those of the qubit case: each phase gate has a pair of phase angles, the operator commutation rule is more complicated, the Hadamard gate is not self-adjoint, the colour-change rule is doubled, and the dualiser has a much richer structure than being just an identity. Although the qutrit ZX-calculus is already well established in \cite{BianWang1, BianWang2} and was independently introduced as a typical special case of the qudit ZX-calculus in \cite{Ranchin}, to the best of our knowledge, there are no completeness results available for it. Without results of this kind, how can we even know that the rules of a so-called ZX-calculus are useful enough for quantum computing?
In this chapter, based on the rules and results in \cite{GongWang}, we show that the qutrit ZX-calculus is complete for pure qutrit stabilizer quantum mechanics (QM). The strategy we use here mirrors that of the qubit case in \cite{Miriam1}, although it is technically more complicated, especially for the completeness of the single qutrit Clifford group $\mathcal{C}_1$. Firstly, we show that any stabilizer state diagram is equal to some GS-LC diagram within the ZX-calculus, where a GS-LC diagram consists of a graph state diagram with arbitrary single-qutrit Clifford operators applied to each output. We then show that any stabilizer state diagram can be further brought into a reduced form of the GS-LC diagram. Finally, for any two stabilizer state diagrams on the same number of qutrits, we make them into a simplified pair of reduced GS-LC diagrams such that they are equal under the standard interpretation in Hilbert spaces if and only if they are identical in the sense that they are constructed from the same constituents in the same way. By the map-state duality,
the case of operators represented by diagrams is also covered; thus we have shown the completeness of the ZX-calculus for all pure qutrit stabilizer QM.
The results of this chapter are taken from the single-authored paper \cite{qlwang}. In addition, all of the proofs are included here.
\section{Qutrit Stabilizer quantum mechanics}
\subsection{ The generalized Pauli group and Clifford group }
The notions of Pauli group and Clifford group for qubits can be generalised to
qutrits in a natural way: In the 3-dimensional Hilbert space $H_3$, we define the \textit{ generalised Pauli operators} $X$ and $Z$ as follows \begin{equation} X\ket{j}=\ket{j+1}, ~~Z\ket{j}=\omega^j\ket{j}, \end{equation} where $ j \in \mathbb{Z}_3 $ (the ring of integers modulo 3), $\omega=e^{i\frac{2}{3}\pi}$, and the
addition is modulo 3. We will use the same notation for
tensor products of these operators as is presented in \cite{Hostens}: for $v, w \in \mathbb{Z}_3^n$ ($n$-dimensional vector space over $\mathbb{Z}_3 $), let
$$
v=\left( \begin{array}{c} v_1 \\ \vdots \\ v_n \end{array} \right), \quad w=\left( \begin{array}{c} w_1 \\ \vdots \\ w_n \end{array} \right), \quad a=\left( \begin{array}{c} v \\ w \end{array} \right)\in \mathbb{Z}_3^{2n}, $$
\begin{equation}\label{xza} XZ(a)= X^{v_1}Z^{w_1}\otimes \cdots \otimes X^{v_n}Z^{w_n}. \end{equation}
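For concreteness, here is a minimal sketch in Python with NumPy (an illustration only, not part of the formal development) of these operators as $3\times 3$ matrices; it checks $X^3=Z^3=I$ and the qutrit commutation relation $ZX=\omega XZ$, and assembles $XZ(a)$ for a small example.
\begin{verbatim}
import numpy as np
from functools import reduce

w = np.exp(2j * np.pi / 3)
X = np.roll(np.eye(3), 1, axis=0)        # X|j> = |j+1 mod 3>
Z = np.diag([1, w, w**2])                # Z|j> = w^j |j>

assert np.allclose(np.linalg.matrix_power(X, 3), np.eye(3))
assert np.allclose(np.linalg.matrix_power(Z, 3), np.eye(3))
assert np.allclose(Z @ X, w * X @ Z)     # ZX = wXZ

def XZ(v, u):
    """Tensor product X^{v_1}Z^{u_1} (x) ... (x) X^{v_n}Z^{u_n}."""
    factors = [np.linalg.matrix_power(X, vi) @ np.linalg.matrix_power(Z, ui)
               for vi, ui in zip(v, u)]
    return reduce(np.kron, factors)

print(XZ([1, 0], [2, 1]).shape)          # (9, 9) for n = 2 qutrits
\end{verbatim}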
We define the \textit{generalized Pauli group} $\mathcal{P}_n$ on $n$ qutrits as $$
\mathcal{P}_n=\{ \omega^{\delta}XZ(a)| a\in \mathbb{Z}^{2n}_3, \delta\in \mathbb{Z}_3 \}. $$
\begin{definition} The generalized Clifford group $\mathcal{C}_n$ on n qutrits is defined as the normalizer of $\mathcal{P}_n$ in the group of unitaries: $
\mathcal{C}_n=\{ Q| Q^{\dagger}=Q^{-1}, Q\mathcal{P}_n Q^{\dagger}= \mathcal{P}_n \}. $ In particular, for $n=1$, $\mathcal{C}_1$ is called the generalized local Clifford group.
\end{definition}
Similar to the qubit case, it can be shown that the generalized Clifford group is generated by the gate $\mathcal{S}=\ket{0}\bra{0}+\ket{1}\bra{1}+ \omega\ket{2}\bra{2}$, the generalized Hadamard gate $H=\frac{1}{\sqrt{3}}\sum_{k, j=0}^{2}\omega^{kj}\ket{k}\bra{j}$, and the SUM gate $\Lambda=\sum_{i, j=0}^{2}\ket{i, i+j(mod 3)}\bra{ij}$ \cite{Hostens, Shawn}. In particular, the local Clifford group $\mathcal{C}_1$ is generated by the gate $\mathcal{S}$ and the generalized Hadamard gate $H$ \cite{Hostens}, with the group order being $3^3(3^2-1)=216$, up to global phases \cite{Mark}.
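These generators can be written down explicitly. The sketch below (Python with NumPy, illustrative only) constructs $\mathcal{S}$, $H$ and $\Lambda$ and checks that conjugation by $H$ and by $\mathcal{S}$ sends the Pauli operator $X$ to the Pauli operators $Z$ and $XZ$ respectively, in line with the definition of the Clifford group.
\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)
X = np.roll(np.eye(3), 1, axis=0)                    # X|j> = |j+1 mod 3>
Z = np.diag([1, w, w**2])
S = np.diag([1, 1, w])                               # the S gate
H = np.array([[w**(k * j) for j in range(3)]
              for k in range(3)]) / np.sqrt(3)       # generalized Hadamard
SUM = np.zeros((9, 9))
for i in range(3):
    for j in range(3):
        SUM[3 * i + (i + j) % 3, 3 * i + j] = 1      # |i,j> -> |i, i+j mod 3>

assert np.allclose(H @ H.conj().T, np.eye(3))        # H is unitary
assert np.allclose(H @ X @ H.conj().T, Z)            # H X H^dagger = Z
assert np.allclose(S @ X @ S.conj().T, X @ Z)        # S X S^dagger = XZ
assert np.allclose(SUM @ SUM.T, np.eye(9))           # SUM is a permutation
\end{verbatim}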
We define the \textit{stabilizer code} as the non-zero joint eigenspace to the eigenvalue $1$ of a subgroup of the generalized Pauli group $\mathcal{P}_n$ \cite{Mohsen}. A \textit{stabilizer state} $\ket{\psi}$ is a stabilizer code of dimension 1, which
is therefore stabilized by an abelian subgroup of order $3^{n}$ of the Pauli group excluding multiples of the identity other than the identity itself \cite{Hostens}. We call this subgroup the \textit{stabilizer} $\mathfrak{S}$ of $\ket{\psi}$.
Finally, the \textit{qutrit stabilizer quantum mechanics} can be described as the fragment of the pure qutrit quantum theory where only states represented in computational basis, Clifford unitaries and measurements in computational basis are considered.
\subsection{ Graph states } Graph states are special stabilizer states which are constructed from undirected graphs without loops. However, it turns out that, up to local Clifford operators, they already capture all stabilizer states, as we recall below.
\begin{definition}\cite{Dan} A $\mathbb{Z}_3 $-weighted graph is a pair $G = (V,E)$ where $V$ is a set of $n$ vertices and $E$ is a collection of weighted edges specified by the adjacency matrix $\Gamma$, which is a symmetric $n$ by $n$ matrix with zero diagonal entries, each matrix element ${\Gamma_{lm}}\in \mathbb{Z}_3$ representing the weight of the edge connecting vertex $l$ with vertex $m$. \end{definition}
\begin{definition}\label{Graph State}\cite{Griffiths} Given a $\mathbb{Z}_3 $-weighted graph $G = (V,E)$ with $n$ vertices and adjacency matrix $\Gamma$,
the corresponding qutrit graph state can be defined as $$\ket{G}=\mathcal{U}\ket{+}^{\otimes n},$$ where $\ket{+}=\frac{1}{\sqrt{3}}(\ket{0}+\ket{1}+\ket{2})$, $\mathcal{U}=\prod_{\{l,m\}\in E}(C_{lm})^{\Gamma_{lm}}$, $C_{lm}=\Sigma_{j=0}^{2}\Sigma_{k=0}^{2}\omega^{jk}\ket{j}\bra{j}_l\otimes \ket{k}\bra{k}_m$, subscripts indicate to which qutrit the operator is applied. \end{definition}
\begin{lemma}\cite{Dan} The qutrit graph state $\ket{G}$ is the unique (up to a global phase) joint $+1$ eigenstate of the group generated by the operators $$ X_v\prod_{u\in V}(Z_{u})^{\Gamma_{uv}} ~~\mbox{for all}~ v \in V. $$ \end{lemma}
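Before moving on, here is a small illustration of this lemma (Python with NumPy, illustrative only and not part of the argument) for the smallest non-trivial case, the 2-vertex graph with a single $1$-weighted edge.
\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)
X = np.roll(np.eye(3), 1, axis=0)
Z = np.diag([1, w, w**2])
plus = np.ones(3) / np.sqrt(3)

# C_12 = sum_{j,k} w^{jk} |j><j| (x) |k><k|, on the basis |j,k> ordered as 3j+k
C12 = np.diag([w**(j * k) for j in range(3) for k in range(3)])
G = C12 @ np.kron(plus, plus)            # the graph state |G>

g1 = np.kron(X, Z)                       # generator for vertex 1: X_1 Z_2
g2 = np.kron(Z, X)                       # generator for vertex 2: X_2 Z_1
assert np.allclose(g1 @ G, G)            # |G> is a joint +1 eigenstate
assert np.allclose(g2 @ G, G)
\end{verbatim}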
Therefore, graph states must be stabilizer states. Conversely, stabilizer states are equivalent to graph states in the following sense. \begin{definition}\cite{Miriam1}
Two $n$-qutrit stabilizer states $\ket{\psi}$ and $\ket{\phi}$ are equivalent with respect to the local Clifford group if there exists $U \in \mathcal{C}_1^{\otimes n}$ such that $\ket{\psi}=U\ket{\phi}$.
\end{definition}
\begin{lemma}\cite{Mohsen} \label{stabilizerequivgraphstate}
Every qutrit stabilizer state is equivalent to a graph state with respect to the local Clifford group.
\end{lemma}
Below we describe some operations on graphs corresponding to graph states. These operations will play a central role in the proof of the completeness of the ZX-calculus for qutrit stabilizer quantum mechanics.
\begin{definition}\cite{Mohsen}
Let $G = (V,E)$ be a $\mathbb{Z}_3 $-weighted graph with $n$ vertices and adjacency matrix $\Gamma$. For every vertex $v$, and $0\neq b \in \mathbb{Z}_3$, define the operator $\circ_b v$ on the graph as follows: $G\circ_b v$ is the graph on the same vertex set, with adjacency matrix $I(v, b)\Gamma I(v, b)$, where $I(v, b) = diag(1,1,...,b,...,1)$, $b$ being on the $v$-th entry. For every vertex $v$ and $a \in \mathbb{Z}_3$, define the operator $\ast_a v $ on the graph as follows: $G\ast_a v $ is the graph on the same vertex set, with adjacency matrix $\Gamma^{'}$, where
$\Gamma^{'}_{jk}=\Gamma_{jk}+a\Gamma_{vj}\Gamma_{vk}$ for $j\neq k$, and $\Gamma^{'}_{jj}=0$ for all $j$. The operator $\ast_a v $ is also called the $a$-local complementation at the vertex $v$ \cite{mmp} .
\end{definition}
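A minimal sketch of these two operations on adjacency matrices over $\mathbb{Z}_3$ (Python with NumPy, illustrative only) is given below; it also checks the identity $G\ast_2 v=(G\ast_1 v)\ast_1 v$, which will be used in the proof of Corollary \ref{gandgst2}.
\begin{verbatim}
import numpy as np

def circ(Gamma, v, b):
    """G o_b v: conjugate the adjacency matrix by I(v,b) = diag(1,..,b,..,1)."""
    I = np.eye(len(Gamma), dtype=int)
    I[v, v] = b
    return (I @ Gamma @ I) % 3

def star(Gamma, v, a):
    """G *_a v: Gamma'_jk = Gamma_jk + a*Gamma_vj*Gamma_vk for j != k."""
    new = (Gamma + a * np.outer(Gamma[v], Gamma[v])) % 3
    np.fill_diagonal(new, 0)
    return new

# Example: a 3-vertex path with 1-weighted edges.
Gamma = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])
print(star(Gamma, 1, 1))                               # 1-local complementation
print(circ(Gamma, 1, 2))                               # the o_2 operation
assert np.array_equal(star(star(Gamma, 1, 1), 1, 1),
                      star(Gamma, 1, 2))               # *_2 v = (*_1 v) *_1 v
\end{verbatim}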
Now the equivalence of graph states can be described in terms of these operations on graphs.
\begin{theorem}\cite{Mohsen} \label{graphstateeuivalence}
Two graph states $\ket{G}$ and $\ket{H}$ with adjacency matrices $M$ and $N$ over $\mathbb{Z}_3 $, are equivalent under local Clifford group if and only if there exists a sequence of $\ast$ and $\circ$ operators acting on one of them to obtain the other.
\end{theorem}
\section{Qutrit ZX-calculus} \subsection{The ZX-calculus for general pure state qutrit quantum mechanics}
As described in Section \ref{zxgeneral}, the qutrit ZX-calculus is the special case of the qudit ZX-calculus where $d=3$. In particular, we use the scalar-free version of the qutrit ZX-calculus throughout this chapter. The generators of the qutrit ZX-calculus are explicitly given as follows:
\begin{table} \begin{center}
\begin{tabular}{|r@{~}r@{~}c@{~}c|r@{~}r@{~}c@{~}c|} \hline
$R_Z^{(n,m)}(\frac{\alpha}{\beta})$&$:$&$n\to m$ & \tikzfig{Qutrits//generator_spider} & $R_X^{(n,m)}(\frac{\alpha}{\beta})$&$:$&$n\to m$ & \tikzfig{Qutrits//generator_spider_gray}\\
\hline $H$&$:$&$1\to 1$ &\tikzfig{diagrams//HadaDecomSingleslt}
& $\sigma$&$:$&$ 2\to 2$& \tikzfig{diagrams//swap}\\\hline
$\mathbb I$&$:$&$1\to 1$&\tikzfig{diagrams//Id} & $e $&$:$&$0 \to 0$& \tikzfig{diagrams//emptysquare}\\\hline
$C_a$&$:$&$ 0\to 2$& \tikzfig{diagrams//cap} &$ C_u$&$:$&$ 2\to 0$&\tikzfig{diagrams//cup} \\\hline \end{tabular}\caption{Generators of the qutrit ZX-calculus} \label{qbzxgenerator} \end{center} \end{table} \FloatBarrier where $m,n\in \mathbb N$, $\alpha, \beta \in [0, 2\pi)$, and $e$ is denoted by an empty diagram.
For simplicity, we have the following notations: $$\tikzfig{Qutrits//spiderg} := \tikzfig{Qutrits//spidergz} \qquad\tikzfig{Qutrits//spiderr} := \tikzfig{Qutrits//spiderrz} \qquad\tikzfig{Qutrits//RGg_Hadad} :=\tikzfig{diagrams//hcubic} $$
The qutrit ZX-calculus has the structural rules as given in (\ref{compactstructure}) and (\ref{compactstructureslide}). Its non-structural rewriting rules are presented in Figure \ref{figuretrit}.
Note that all the diagrams should be read from top to bottom. \begin{figure}
\caption{Qutrit ZX-calculus rewriting rules}
\label{figuretrit}
\end{figure}
\FloatBarrier
For ease of reading, we will denote the frequently used angles $\frac{2 \pi}{3}$ and $\frac{4 \pi}{3}$ by $1$ and $2$ respectively. Note that the red spider rule still holds; for simplicity, we also refer to it as rule (S1).
The diagrams in the qutrit ZX-calculus have a standard interpretation (up to a non-zero scalar) $\llbracket \cdot \rrbracket$ in Hilbert spaces:
\begin{center} \[ \left\llbracket\tikzfig{Qutrits//generator_spider}\right\rrbracket=\ket{0}^{\otimes m}\bra{0}^{\otimes n}+e^{i\alpha}\ket{1}^{\otimes m}\bra{1}^{\otimes n}+e^{i\beta}\ket{2}^{\otimes m}\bra{2}^{\otimes n} \]
\[ \left\llbracket\tikzfig{Qutrits//generator_spider_gray}\right\rrbracket=\ket{+}^{\otimes m}\bra{+}^{\otimes n}+e^{i\alpha}\ket{\omega}^{\otimes m}\bra{\omega}^{\otimes n}+e^{i\beta}\ket{\bar{\omega}}^{\otimes m}\bra{\bar{\omega}}^{\otimes n} \]
\[ \left\llbracket\tikzfig{Qutrits//RGg_Hada}\right\rrbracket=\ket{+}\bra{0}+ \ket{\omega}\bra{1}+\ket{\bar{\omega}}\bra{2}=\ket{0}\bra{+}+ \ket{1}\bra{\bar{\omega}}+\ket{2}\bra{\omega}\] \[ \left\llbracket\tikzfig{Qutrits//RGg_Hadad}\right\rrbracket=\ket{0}\bra{+}+ \ket{1}\bra{\omega}+\ket{2}\bra{\bar{\omega}}=\ket{+}\bra{0}+ \ket{\omega}\bra{2}+\ket{\bar{\omega}}\bra{1} \] \end{center}
where $\bar{\omega}=e^{\frac{4}{3}\pi i}=\omega^2$, and
\begin{center}
$
\left\{\begin{array}{rcl}
\ket{+} & = & \ket{0}+\ket{1}+\ket{2}\\
\ket{\omega} & = & \ket{0}+\omega \ket{1}+\bar{\omega}\ket{2}\\
\ket{\bar{\omega}} & = & \ket{0}+\bar{\omega}\ket{1}+\omega\ket{2}
\end{array}\right.
$
\end{center}
with $\omega$ satisfying $\omega^3=1,~~1+\omega+\bar{\omega}=0$.
For convenience, we also use the following matrix form:
\begin{equation}\label{phasematrix}
\left\llbracket\tikzfig{Qutrits//RGg_zph_ab}\right\rrbracket=
\left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & e^{i\alpha} & 0\\
0 & 0 & e^{i\beta} \end{array} \right) \end{equation}
\begin{equation}\label{phasematrix2} \left\llbracket\tikzfig{Qutrits//RGg_xph_ab}\right\rrbracket=
\left( \begin{array}{ccc} 1+e^{i\alpha}+e^{i\beta} & 1+\bar{\omega} e^{i\alpha}+\omega e^{i\beta}& 1+\omega e^{i\alpha}+\bar{\omega}e^{i\beta} \\ 1+\omega e^{i\alpha}+\bar{\omega}e^{i\beta}& 1+e^{i\alpha}+e^{i\beta} & 1+\bar{\omega}e^{i\alpha}+\omega e^{i\beta}\\
1+\bar{\omega} e^{i\alpha}+\omega e^{i\beta}&1+\omega e^{i\alpha}+\bar{\omega}e^{i\beta} &1+e^{i\alpha}+e^{i\beta} \end{array} \right)
\end{equation}
\begin{equation}\label{phasematrix3} \left\llbracket\tikzfig{Qutrits//RGg_Hada}\right\rrbracket=\begin{pmatrix}
1 & 1 & 1 \\
1 & \omega & \bar{\omega}\\
1 & \bar{\omega} & \omega
\end{pmatrix} \end{equation}
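As a consistency check (Python with NumPy, illustrative only), the matrix in (\ref{phasematrix2}) can be reproduced directly from the defining sum $\ket{+}\bra{+}+e^{i\alpha}\ket{\omega}\bra{\omega}+e^{i\beta}\ket{\bar{\omega}}\bra{\bar{\omega}}$, using the unnormalised kets listed above.
\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)
ket_plus = np.array([1, 1, 1])
ket_w    = np.array([1, w, w**2])
ket_wbar = np.array([1, w**2, w])

a, b = 0.7, 2.1                                     # arbitrary test phases
M = (np.outer(ket_plus, ket_plus.conj())
     + np.exp(1j * a) * np.outer(ket_w, ket_w.conj())
     + np.exp(1j * b) * np.outer(ket_wbar, ket_wbar.conj()))

ea, eb, wb = np.exp(1j * a), np.exp(1j * b), w**2
N = np.array([[1 + ea + eb,       1 + wb*ea + w*eb,  1 + w*ea + wb*eb],
              [1 + w*ea + wb*eb,  1 + ea + eb,       1 + wb*ea + w*eb],
              [1 + wb*ea + w*eb,  1 + w*ea + wb*eb,  1 + ea + eb]])
assert np.allclose(M, N)                            # agrees with (phasematrix2)
\end{verbatim}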
As in the qubit case, there are three important properties of the qutrit ZX-calculus, namely universality, soundness, and completeness: Universality asks whether every linear map in Hilbert spaces is the standard interpretation of some ZX-calculus diagram. Soundness means that all the rules in the qutrit ZX-calculus have a correct standard interpretation in Hilbert spaces. Completeness is concerned with whether an equation of diagrams can be derived in the ZX-calculus whenever the corresponding equation of linear maps under the standard interpretation holds true. Among these properties, universality is proved in \cite{BianWang2}. Soundness can easily be checked with the standard interpretation $\llbracket \cdot \rrbracket$.
To represent qutrit graph states in the ZX-calculus, we first show that the Hadamard nodes $H$ and $H^{\dagger}$ make sense when placed horizontally between two green nodes.
\begin{lemma}\cite{GongWang}\label{controlz} \begin{align}\label{controlzhold} \tikzfig{Qutrits/controlz}=\tikzfig{Qutrits/controlz2}=: \tikzfig{Qutrits/controlz3},\quad\quad\quad\quad
\tikzfig{Qutrits/controlzsquare}=\tikzfig{Qutrits/controlzsquare2}=: \tikzfig{Qutrits/controlzsquare3}. \end{align} \end{lemma}
\begin{proof} We will only prove the first equation, since the proof for the second one is similar. \begin{align*} \tikzfig{Qutrits/controlzprf}.
\end{align*}
\end{proof}
We also list some useful properties which have been proved in \cite{GongWang} or \cite{BianWang2}: \begin{equation}\label{property1} \tikzfig{Qutrits//p2s} \end{equation} \begin{equation}\label{property2} \tikzfig{Qutrits//deri2_rotate} \end{equation}
\begin{equation}\label{property3} \tikzfig{Qutrits//controlnotslide} \end{equation} \begin{equation}\label{property4} \tikzfig{Qutrits//deri1_3break} \end{equation} \begin{equation}\label{property5} \tikzfig{Qutrits//quasibialgebra} \end{equation} \begin{equation}\label{property6} \tikzfig{Qutrits//hadslidegn} \end{equation} \begin{align}\label{property7} \tikzfig{Qutrits//twocontrolz3}= \tikzfig{Qutrits//controlzsquare3},\quad\quad \tikzfig{Qutrits//twocontrolzsquare3}= \tikzfig{Qutrits//controlz3}, \quad\quad \tikzfig{Qutrits//hhdaggern}= \tikzfig{Qutrits//hdaggerhn}= \tikzfig{Qutrits//parallel}. \end{align} \begin{align}\label{property8} \tikzfig{Qutrits//zxz-xzx10} \end{align}
Note that the red-green colour swap version of these properties also holds.
\begin{remark} In the qubit ZX-calculus, all its diagrams obey a meta-rule which has been formalized as a well-known slogan: ``only the topology matters" \cite{CoeckeDuncan}, or more recently as ``only connectedness matters" \cite{coeckewang}. This means, any diagram of the qubit ZX-calculus can be transformed into a semantically equal diagram (representing the same matrix) by moving the components around without changing their connections, thus rendering the qubit ZX-calculus unoriented. However, as one can see in (\ref{property3}), this meta-rule does not hold for all diagrams in the qutrit ZX-calculus. Therefore, one should be careful with the orientation of a qutrit ZX diagram, especially when there is a connection between green and red nodes. \end{remark}
\subsection{Qutrit stabilizer quantum mechanics in the ZX-calculus} In this subsection, we represent in ZX-calculus the qutrit stabilizer QM, which consists of state preparations and measurements based on the computational basis $\{ \ket{0}, \ket{1}, \ket{2}\}$, as well as unitary operators belonging to the generalized Clifford group $\mathcal{C}_n$. Furthermore, we give the unique form for single qutrit Clifford operators.
Firstly, the states and effects in computational basis can be represented as:
\begin{equation}\label{compbasis} \ket{0} =\left\llbracket\tikzfig{Qutrits//RGg_exd}\right\rrbracket, \quad \ket{1} =\left\llbracket\tikzfig{Qutrits//rdot1}\right\rrbracket, \quad \ket{2} =\left\llbracket\tikzfig{Qutrits//rdot2}\right\rrbracket, \quad \bra{0}=\left\llbracket\tikzfig{Qutrits//RGg_ex}\right\rrbracket,\quad \bra{1} =\left\llbracket\tikzfig{Qutrits//rcdot1}\right\rrbracket, \quad \bra{2} =\left\llbracket\tikzfig{Qutrits//rcdot2}\right\rrbracket. \end{equation} The generators of the generalized Clifford group $\mathcal{C}_n$ are denoted by the following diagrams:
\begin{equation}\label{clifgenerator} \mathcal{S}=\left\llbracket\tikzfig{Qutrits//phasegts}\right\rrbracket, \qquad H=\left\llbracket\tikzfig{Qutrits//RGg_Hada}\right\rrbracket, \qquad \Lambda=\left\llbracket\tikzfig{Qutrits//sumgt}\right\rrbracket. \end{equation}
\begin{lemma} \label{equivalentstabilizer}
A pure n-qutrit quantum state $\ket{\psi}$ is a stabilizer state if and only if there exists an n-qutrit Clifford unitary $U$ such that $\ket{\psi}=U\ket{0}^{\otimes n}$. \end{lemma}
\begin{proof}
If a pure n-qutrit quantum state $\ket{\psi}$ is a stabilizer state, then by Lemma \ref{stabilizerequivgraphstate}, there exists a Clifford operator $U^{\prime}$ such that $\ket{\psi}=U^{\prime}\ket{G}$, where
$\ket{G}=\mathcal{U}\ket{+}^{\otimes n}$ is a graph state, $\mathcal{U}=\prod_{\{l,m\}\in E}(C_{lm})^{\Gamma_{lm}}$, and $C_{lm}=\Sigma_{j=0}^{2}\Sigma_{k=0}^{2}\omega^{jk}\ket{j}\bra{j}_l\otimes \ket{k}\bra{k}_m$. It is shown in \cite{GongWang} that \[ C_{lm}=\left\llbracket\tikzfig{Qutrits//controlz}\right\rrbracket=\left\llbracket \tikzfig{Qutrits//clm2}\right\rrbracket, \] which means $C_{lm}$ is a Clifford operator, so is $\mathcal{U}$. Let $V=\mathcal{U}H^{\otimes n}$, then
$V$ is also a Clifford operator, and $\ket{G}=V\ket{0}^{\otimes n}$. Therefore, $\ket{\psi}=U^{\prime}\ket{G}=U^{\prime}V\ket{0}^{\otimes n}$. Let $U=U^{\prime}V$, then $\ket{\psi}=U\ket{0}^{\otimes n}$, and $U$ is a Clifford unitary.
Conversely, assume $\ket{\psi}=U\ket{0}^{\otimes n}$, where $U$ is a Clifford unitary. Let $\ket{G}=\mathcal{U}\ket{+}^{\otimes n}$ be an n-qutrit graph state, $V=\mathcal{U}H^{\otimes n}$, and $W=UV^{\dagger}$. Then $\ket{\psi}=WV\ket{0}^{\otimes n}=W\ket{G}$. Suppose $\mathfrak{S}$ is the stabilizer group of $\ket{G}$. Since $WSW^{\dagger}\ket{\psi}=WSW^{\dagger}W\ket{G}=WS\ket{G}=W\ket{G}=\ket{\psi}, \forall S \in \mathfrak{S}$, it follows that $W\mathfrak{S}W^{\dagger}$ is the stabilizer group of $\ket{\psi}$, i.e., $\ket{\psi}$ is a stabilizer state.
\end{proof}
\begin{lemma}\label{stabaszx} In the qutrit formalism, any stabilizer state, Clifford unitary, or post-selected measurement can be represented by a ZX-calculus diagram with phase angles just integer multiples of $\frac{2}{3}\pi$. \end{lemma}
\begin{proof} Firstly, by Lemma \ref{equivalentstabilizer}, each stabilizer state can be obtained by applying an n-qutrit Clifford unitary to the computational basis state $\ket{0}^{\otimes n}$. Secondly, any Clifford unitary can be generated by the gates $\mathcal{S}, H$ and $\Lambda$ \cite{Hostens, Shawn}, which are clearly composed of phases with angles integer multiples of $\frac{2}{3}\pi$. Since the generators are composed of phases with angles integer multiples of $\frac{2}{3}\pi$, so are Clifford unitaries generated by these generators. Thirdly, computational basis states and post-selected computational basis measurements are clearly with phase angles integer multiples of $\frac{2}{3}\pi$. Therefore, by (\ref{compbasis}) and (\ref{clifgenerator}), any stabilizer state, Clifford unitary, or post-selected measurement can be represented by a ZX-calculus diagram with phase angles just integer multiples of $\frac{2}{3}\pi$.
\end{proof}
\begin{lemma}\label{diagramdecom} Each qutrit ZX-calculus diagram can be represented by a combination of the following components: \begin{equation}\label{component} \begin{array}{cccccccc} \tikzfig{Qutrits//RGg_ezd},& \tikzfig{Qutrits//RGg_ez}, & \tikzfig{Qutrits//RGg_dz}, &\tikzfig{Qutrits//RGg_dzd},
\tikzfig{Qutrits//RGg_zph_ab}, & \tikzfig{Qutrits//RGg_xph_ab}, & \tikzfig{Qutrits//RGg_Hada}, & \tikzfig{Qutrits//RGg_Hadad},
\end{array}
\end{equation} where $\alpha, \beta \in [0, 2\pi)$. \end{lemma} The proof follows directly from the qutrit spider rule (S1) and the colour change rules (H2) and (H$2^{\prime}$).
\begin{lemma}\label{zxasstab} Each qutrit ZX-calculus diagram with phase angles integer multiples of $\frac{2}{3}\pi$ corresponds to an element of the qutrit stabilizer QM. \end{lemma}
\begin{proof} By Lemma \ref{diagramdecom}, each ZX-calculus diagram can be decomposed into basic components as shown in (\ref{component}). Since composition of diagrams corresponds to composition of the corresponding matrices in QM, we only need to consider those basic components whose phase angles are integer multiples of $\frac{2}{3}\pi$, and show that the standard interpretation of each such component is a composition of Clifford unitaries, computational basis states, and computational basis effects. We have tackled the basic components \tikzfig{Qutrits//phasegts} and \tikzfig{Qutrits//RGg_Hada} in (\ref{clifgenerator}); the remaining components are dealt with as follows:
\begin{equation}
\left\llbracket\tikzfig{Qutrits//RGg_ezd}\right\rrbracket= \left\llbracket\tikzfig{Qutrits//gdotascliff}\right\rrbracket=H\ket{0} \end{equation}
\begin{equation}
\left\llbracket\tikzfig{Qutrits//RGg_ez}\right\rrbracket= \left\llbracket\tikzfig{Qutrits//gcdotascliff}\right\rrbracket=\bra{0}H \end{equation}
\begin{equation}
\left\llbracket\tikzfig{Qutrits//RGg_dz}\right\rrbracket= \left\llbracket\tikzfig{Qutrits//gcopyascliff}\right\rrbracket=\Lambda\circ(I \otimes \ket{0}) \end{equation}
\begin{equation}
\left\llbracket\tikzfig{Qutrits//RGg_dzd}\right\rrbracket= \left\llbracket\tikzfig{Qutrits//gcocopyascliff}\right\rrbracket=(I \otimes \bra{0})\circ\Lambda\circ(I \otimes (H\circ H)) \end{equation} Thus we conclude the proof. \end{proof}
Bringing together Lemmas \ref{stabaszx} and \ref{zxasstab}, we have \begin{theorem} The ZX-calculus for pure qutrit stabilizer QM comprises exactly the diagrams with phase angles integer multiples of $\frac{2}{3}\pi$. \end{theorem}
In the sequel, a diagram in which all phase angles are integer multiples of $\frac{2}{3}\pi$ will be called a stabilizer diagram. Now we denote $\frac{2 \pi}{3}$ and $\frac{4 \pi}{3}$ by $1$ and $2$ (or $-1$) respectively, and let
$$\mathcal{P}= \{\nststile{1}{1}, \nststile{2}{2}\}, \quad \mathcal{N}=\{\nststile{0}{1}, \nststile{1}{0}, \nststile{0}{2}, \nststile{2}{0}\}, \quad \mathcal{M}=\{\nststile{0}{0}, \nststile{1}{2}, \nststile{2}{1}\}, \quad \mathcal{Q}=\mathcal{P} \cup \mathcal{N}, \quad \mathcal{A}=\mathcal{Q} \cup \mathcal{M},$$
where the symbol $\nststile{b}{a}$ simply denotes the pair $(a, b)$, not the fraction $\frac{a}{b}$. Then each green or red node in a stabilizer diagram has its phase angles denoted as elements in the set $\mathcal{A}$. Define the addition $+$ in $\mathcal{A}$ as $\nststile{b}{a}+\nststile{d}{c} ~:=~ \nststile{b+d (mod 3)}{a+c (mod3)}$. Then $\mathcal{A}$ is an abelian group, with $\mathcal{P}\cup \{\nststile{0}{0}\}~ \mbox{and} ~\mathcal{M}$ as subgroups. Note that the elements in $\mathcal{P}$ resemble the phases $\frac{\pi}{2}$ and $-\frac{\pi}{2}$ in the qubit ZX-calculus, and the non-zero elements in
$\mathcal{M}$ are analogous to the phase $\pi$ in the qubit ZX-calculus.
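The following small sketch (plain Python, illustrative only) records this phase-pair arithmetic and two facts used repeatedly below: $\mathcal{M}$ and $\mathcal{P}\cup\{\nststile{0}{0}\}$ are subgroups of $\mathcal{A}$, and every element of $\mathcal{Q}$ splits as a sum of an element of $\mathcal{P}$ and an element of $\mathcal{M}$.
\begin{verbatim}
from itertools import product

# Pairs (a, b) over Z_3 with componentwise addition, and the subsets above.
A = set(product(range(3), repeat=2))
P = {(1, 1), (2, 2)}
N = {(0, 1), (1, 0), (0, 2), (2, 0)}
M = {(0, 0), (1, 2), (2, 1)}
Q = P | N

def add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

P0 = P | {(0, 0)}
assert all(add(x, y) in M for x in M for y in M)      # M is a subgroup
assert all(add(x, y) in P0 for x in P0 for y in P0)   # P u {(0,0)} is a subgroup
# Every element of Q decomposes as p + m with p in P and m in M,
# e.g. (0,1) = (2,2) + (1,2), as used in the lemmas below.
assert all(any(add(p, m) == q for p in P for m in M) for q in Q)
\end{verbatim}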
With the notation of $\nststile{b}{a}$ and the addition defined above, the commutation rule (K2) has a special form for elements in $\mathcal{P}$:
\begin{lemma}
\begin{equation}\label{K2decom} \tikzfig{Qutrits//K2decom} \end{equation} where $\nststile{m_2}{m_1}, \nststile{m_4}{m_3} \in \mathcal{M}, \nststile{p_1}{p_1} \in \mathcal{P}.$ \end{lemma}
\begin{proof} There are several similar cases to be verified here. We only show one case where $\nststile{m_2}{m_1}=\nststile{2}{1}, \nststile{p_1}{p_1} =\nststile{1}{1}$:
\begin{equation*} \tikzfig{Qutrits//K2decomproof} \end{equation*} \end{proof}
\begin{lemma}\label{pzxz} Any single qutrit stabilizer diagram of form
\begin{equation}\label{pzxzreduce} \tikzfig{Qutrits//zxz0} \end{equation}
can be rewritten as
\begin{equation}\label{pzxzreduce2}
\tikzfig{Qutrits//zxz01} \quad \mbox{or} \quad \tikzfig{Qutrits//zxz02},
\end{equation}
where $ \nststile{p_{i2}}{p_{i1}}, \nststile{\bar{p}_{j2}}{\bar{p}_{j1}} \in \mathcal{P}, \forall i\in\{1,2,3,4\}, \forall j\in\{1,2,3\}$. \end{lemma} \begin{proof} First we list all the possible forms of (\ref{pzxzreduce}) and their rewritten forms of (\ref{pzxzreduce2}) as follows: \begin{align}\label{pzxzhh} \tikzfig{Qutrits//zxz-xzx1}\quad &\quad \tikzfig{Qutrits//zxz-xzx2} \quad &\quad \tikzfig{Qutrits//zxz-xzx3} \quad &\quad \tikzfig{Qutrits//zxz-xzx4} \end{align} \begin{align}\label{pzxzhh2}
\tikzfig{Qutrits//zxz-xzx5} \quad &\quad \tikzfig{Qutrits//zxz-xzx6}\quad &\quad \tikzfig{Qutrits//zxz-xzx7}\quad &\quad \tikzfig{Qutrits//zxz-xzx8} \end{align}
For simplicity, we just show one derivation here: \begin{align*} \tikzfig{Qutrits//zxz-xzx9} \end{align*}
\end{proof}
\begin{lemma}\label{xz112} Any single qutrit stabilizer diagram of form \begin{equation}\label{xz111} K=\tikzfig{Qutrits//zxz2} \end{equation}
can be rewritten as
$$\tikzfig{Qutrits//zxz1} \quad \mbox{or} \quad \tikzfig{Qutrits//zxz3},$$
where $ \nststile{m_2}{m_1} \in \mathcal{M}$. \end{lemma}
\begin{proof} If $ \nststile{b_1}{a_1} \in \mathcal{M}$, then by rule $(S2)$ or $(K2)$, this green node can be pushed down and merged with another green node, and thus $K$ is rewritten as
\begin{equation}\label{xz1} K=\tikzfig{Qutrits//zxz4} \end{equation} Otherwise, $ \nststile{b_1}{a_1} \in \mathcal{Q}$, then $ \nststile{b_1}{a_1} =\nststile{p_{12}}{p_{11}}+\nststile{m_{12}}{m_{11}}$, where $\nststile{p_{12}}{p_{11}}\in \mathcal{P}, \nststile{m_{12}}{m_{11}}\in \mathcal{M}$ (e.g., $ \nststile{1}{0} =\nststile{2}{2}+\nststile{2}{1}$). Therefore, by the spider rule $(S1)$ and the commutation rule $(K2)$, the $\mathcal{M}$ part can be separated from the $ \nststile{b_1}{a_1}$ node and pushed down to the bottom to merge with the other green node, thus $K$ can be rewritten as $$\tikzfig{Qutrits//zxz5}:=L $$ Furthermore, if $\nststile{t_{12}}{t_{11}}\in \mathcal{M}$, then the red node in $L$ can be pushed up, thus $L$ is rewritten as \begin{equation}\label{xz2} \tikzfig{Qutrits//zxz6} \end{equation} Otherwise, $\nststile{t_{12}}{t_{11}} \in \mathcal{Q}$, then use the same trick as for $ \nststile{b_1}{a_1} \in \mathcal{Q}$, $L$ can be rewritten as \begin{equation*}\label{xz3} \tikzfig{Qutrits//zxz7}:= R \end{equation*} where $\nststile{p_{22}}{p_{21}}\in \mathcal{P}, \nststile{m_{22}}{m_{21}}\in \mathcal{M}$. We continue the similar analysis on $\nststile{s_{32}}{s_{31}}$ in $R$. If $\nststile{s_{32}}{s_{31}}=\nststile{m_{32}}{m_{31}}\in \mathcal{M}$, then \begin{equation}\label{xz4} R=\tikzfig{Qutrits//zxz8} \end{equation} where by commutation rule $(K2)$ we push the $\nststile{m_{32}}{m_{31}}$ node up and merge nodes with same colours.
Otherwise, $\nststile{s_{32}}{s_{31}} \in \mathcal{Q}$, then by the same trick as for $ \nststile{b_1}{a_1} \in \mathcal{Q}$, $R$ can be rewritten as
\begin{equation}\label{xz5} \tikzfig{Qutrits//zxz9}:= J \end{equation} where $\nststile{p_{32}}{p_{31}}\in \mathcal{P}, \nststile{m_{42}}{m_{41}}\in \mathcal{M}$, and we used the fact that a green node with phase angles in $\mathcal{M}$ must commute with a red node with phase angles also in $\mathcal{M}$.
By Lemma \ref{pzxz}, $J$ can be rewritten as
\begin{trivlist}\item
\begin{minipage}{0.5\textwidth}
\begin{equation}\label{xz6} J=\tikzfig{Qutrits//zxz10}\overset{K2,S1}{=}\tikzfig{Qutrits//zxz1}, \end{equation} \end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{equation}\label{xz7} \mbox{or}\quad J= \tikzfig{Qutrits//zxz11}\overset{\ref{property1},S1}{=}\tikzfig{Qutrits//zxz3} \end{equation}
\end{minipage}
\end{trivlist}
\noindent where we used the commutation rule $(K2)$ to push the nodes with phase angles in $\mathcal{M}$, together with the property (\ref{property1}).
Now we finish the proof by pointing out that the single qutrit stabilizer diagrams (\ref{xz1}), (\ref{xz2}), (\ref{xz4}) are just special forms of (\ref{xz6}). \end{proof}
\begin{corollary}\label{sinqu} Any ZX diagram for single qutrit Clifford operator can be rewritten in one of the forms
\begin{trivlist}\item
\begin{minipage}{0.5\textwidth}
\begin{equation}\label{xz8} \tikzfig{Qutrits//zxz1}, \end{equation} \end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{equation}\label{xz9} \tikzfig{Qutrits//zxz12} \end{equation}
\end{minipage}
\end{trivlist}
\end{corollary} \begin{proof} Because of the spider rule (S1), we only consider the case where any two adjacent nodes are of different colours. First note that a single qutrit stabilizer diagram with one node or two nodes is a special form of (\ref{xz8}). If there are only three aligned nodes in the diagram, then they are either of the form (\ref{xz8}) or of the form (\ref{xz111}); the latter can be rewritten into one of the forms (\ref{xz8}) and (\ref{xz9}) by Lemma \ref{xz112}. We can now proceed by induction: Suppose that any single qutrit stabilizer diagram $D_n$ with $n (n\geq 3)$ nodes can be rewritten into one of the forms (\ref{xz8}) and (\ref{xz9}). For a single qutrit stabilizer diagram $D_{n+1}$ with $n +1$ nodes, we consider its topmost node, called $t$. If $t$ is a red node, then by the spider rule (S1), it merges with a red node in $D_n$, thus $D_{n+1}$ is still of form (\ref{xz8}) or (\ref{xz9}). If $t$ is a green node, after rewriting the remaining $n$ nodes into a form of (\ref{xz8}) or (\ref{xz9}) by the induction hypothesis, $D_{n+1}$ now has 4 interlaced green and red nodes except for Hadamard nodes. The first 3 nodes then constitute a diagram of form (\ref{xz111}); applying the induction hypothesis again, $D_{n+1}$ can be reduced to a form of (\ref{xz8}) or (\ref{xz9}), noting that we used $H^4=1$, derived from rules (H1) and (P1), in the case that 4 Hadamard nodes appear in one diagram. \end{proof}
Now we can give a normal form for the single qutrit Clifford group $\mathcal{C}_1$. \begin{theorem}\label{uniformsin} In the ZX-calculus, each element of the single qutrit Clifford group $\mathcal{C}_1$ can be uniquely represented in one of the following forms
\begin{trivlist}\item
\begin{minipage}{0.295\textwidth}
\begin{equation}
\label{uniformsin1}
\tikzfig{Qutrits/cliffnorformj}, \end{equation}
\end{minipage}
\begin{minipage}{0.295\textwidth}
\begin{equation}
\label{uniformsin2}
\tikzfig{Qutrits/cliffnorformk}, \end{equation}
\end{minipage}
\begin{minipage}{0.295\textwidth}
\begin{equation}
\label{uniformsin3}
\tikzfig{Qutrits/cliffnorformt}, \end{equation}
\end{minipage}
\end{trivlist}
where $\nststile{a_2}{a_1}, \nststile{a_4}{a_3}, \nststile{a_6}{a_5}, \nststile{a_8}{a_7} \in \mathcal{A}, \nststile{p_2}{p_1} \in \mathcal{P}, \nststile{q_2}{q_1} \in \mathcal{Q}, \nststile{m_2}{m_1} \in \mathcal{M}$. \end{theorem}
\begin{proof} By Corollary \ref{sinqu}, any single qutrit Clifford operator is of form (\ref{xz8}) or (\ref{xz9}). Furthermore, if $\nststile{y_{1}}{x_{1}}\in \mathcal{M}$, then the corresponding red node can be pushed down and merge with the other red node at the bottom, thus diagram (\ref{xz8}) becomes the form (\ref{uniformsin1}).
If otherwise $\nststile{y_{1}}{x_{1}}\in \mathcal{Q}$, then we can use the same trick as in the proof of Lemma \ref{xz112}: separate the $\mathcal{M}$ part and push it down to be merged. Thus diagram (\ref{xz8}) becomes the form \begin{equation}\label{cliffnorform2} \tikzfig{Qutrits//cliffnorform22} \end{equation} where $\nststile{p_2}{p_1} \in \mathcal{P}$. If furthermore $\nststile{s_{2}}{s_{1}}\in \mathcal{M}$, then the corresponding green node can be pushed up and diagram (\ref{cliffnorform2}) becomes the form (\ref{uniformsin1}). Otherwise, $\nststile{s_{2}}{s_{1}}\in \mathcal{Q}$, and diagram (\ref{cliffnorform2}) is of the form (\ref{uniformsin2}), where $\nststile{q_2}{q_1} \in \mathcal{Q}$. To sum up, the diagram (\ref{xz8}) can be rewritten into the form (\ref{uniformsin1}) or (\ref{uniformsin2}). As a consequence, the diagram (\ref{xz9}) can be rewritten into one of the forms
\begin{trivlist}\item
\begin{minipage}{0.5\textwidth}
\begin{equation}\label{uniformsin12} \tikzfig{Qutrits//cliffnorformj2}, \end{equation} \end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{equation}\label{uniformsin22} \tikzfig{Qutrits//cliffnorformk2}, \end{equation}
\end{minipage}
\end{trivlist}
where $\nststile{p_2}{p_1} \in \mathcal{P}, \nststile{q_2}{q_1} \in \mathcal{Q}$.
Next we show that the diagrams (\ref{uniformsin12}) and (\ref{uniformsin22}) can be rewritten into the form (\ref{uniformsin1}), (\ref{uniformsin2}) or (\ref{uniformsin3}). For (\ref{uniformsin12}), if $\nststile{a_2}{a_1} \in \mathcal{M}$, it is just the form (\ref{uniformsin3}). Otherwise, $\nststile{a_2}{a_1} \in \mathcal{Q}$, thus $\nststile{a_1}{a_2} \in \mathcal{Q}$. Let $\nststile{a_1}{a_2} =\nststile{m_2}{m_1} +\nststile{p_1}{p_1} $, where $\nststile{m_2}{m_1} \in \mathcal{M}, \nststile{p_1}{p_1} \in \mathcal{P}$. It is clear from (\ref{pzxzhh}) and (\ref{pzxzhh2}) that
\begin{equation}\label{singlecliffnorm1} \tikzfig{Qutrits//singlecliffnormm1}, \end{equation} where $\nststile{p}{p}, \nststile{p^{\prime}}{p^{\prime}} \in \mathcal{P}$.
Combining with (\ref{property1}), we have
\begin{equation}\label{singlecliffnormm2} \tikzfig{Qutrits//singlecliffnorm2}, \end{equation} which is just of form (\ref{uniformsin2}).
For (\ref{uniformsin22}), except for the top red node of phase $\nststile{p_2}{p_1}$, the remaining part is exactly the same as the case of (\ref{uniformsin12}) when $\nststile{a_2}{a_1} \in \mathcal{Q}$,
thus can be rewritten into the form of (\ref{uniformsin2}). Composing with the red node $\nststile{p_2}{p_1}$, we obtain a diagram of form (\ref{uniformsin1}) or (\ref{uniformsin2}).
By now we have shown that each single qutrit Clifford operator can be represented by a ZX-calculus diagram in one of the forms (\ref{uniformsin1}), (\ref{uniformsin2}) and (\ref{uniformsin3}). Next we show that these forms are unique. First we prove that all the forms presented in (\ref{uniformsin1}), (\ref{uniformsin2}) and (\ref{uniformsin3}) are not equal to each other. There are several cases to be considered, we first compare the form of (\ref{uniformsin1}) with the form of (\ref{uniformsin3}). Suppose that \begin{align}\label{sp1} \tikzfig{Qutrits//cliffnorformj2}=\tikzfig{Qutrits//cliffnorformj21} \end{align} Then \begin{equation}\label{uniformsin23} \tikzfig{Qutrits//cliffnorformjt2} \end{equation}
which can be written in a matrix form as follows:
\begin{equation}\label{uniformsin24} \begin{pmatrix}
1 & 1 & 1 \\
1 & \omega & \bar{\omega}\\
1 & \bar{\omega} & \omega
\end{pmatrix} \begin{pmatrix}
1 & 1 & 1 \\
1 & \omega & \bar{\omega}\\
1 & \bar{\omega} & \omega
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 \\
0& e^{i\alpha} & 0 \\
0 & 0& e^{i\beta}
\end{pmatrix}\simeq
\begin{pmatrix}
u & v & w \\
w & u & v \\
v & w & u
\end{pmatrix} \end{equation} where $\simeq$ means equal up to a non-zero scalar and the matrix on the right hand side of $\simeq$ comes from (\ref{phasematrix3}). After simplification on the left hand side, we have \begin{equation}\label{uniformsin25}
\begin{pmatrix}
1 & 0 & 0 \\
0& 0 & e^{i\beta} \\
0 & e^{i\alpha} & 0
\end{pmatrix}\simeq
\begin{pmatrix}
u & v & w \\
w & u & v \\
v & w & u
\end{pmatrix} \end{equation} From the first row of the above matrices, $u\neq 0, v=w=0$, while from the third row, $w\neq 0, u=0$, a contradiction. Therefore the equality (\ref{sp1}) is impossible, i.e., the form of (\ref{uniformsin1}) is never equal to the form of (\ref{uniformsin3}).
Then we compare the form of (\ref{uniformsin2}) with the form of (\ref{uniformsin3}). Suppose that \begin{align}\label{sp2} \tikzfig{Qutrits//cliffnorformj2}=\tikzfig{Qutrits//cliffnorformk} \end{align}
Then \begin{align}\label{sp3} \tikzfig{Qutrits//cliffnorformtran} \end{align} When $\nststile{p_2}{p_1} =\nststile{1}{1}$, the matrix form of (\ref{sp3}) is as follows: \begin{equation}\label{uniformsin26} \begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & 1\\
0 & 1 & 0
\end{pmatrix} \begin{pmatrix}
a & b & c \\
c & a & b\\
b & c & a
\end{pmatrix}\simeq
\begin{pmatrix}
1 & 0 & 0 \\
0& e^{i\alpha} & 0 \\
0 & 0& e^{i\beta}
\end{pmatrix}
\begin{pmatrix}
\omega & 1 & 1 \\
1 & \omega & 1 \\
1 & 1 & \omega
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 \\
0& e^{i\sigma} & 0 \\
0 & 0& e^{i\tau}
\end{pmatrix} \end{equation} where $\alpha, \beta, \sigma, \tau \in [0, 2\pi)$. That is, \begin{equation}\label{uniformsin27} \begin{pmatrix}
a & b & c \\
c & a & b\\
b & c & a
\end{pmatrix}\simeq
\begin{pmatrix}
\omega & e^{i\sigma} & e^{i\tau} \\
e^{i\beta} & e^{i(\beta+\sigma)} & \omega e^{i(\beta+\tau)}\\
e^{i\alpha} & \omega e^{i(\alpha+\sigma)} & e^{i(\alpha+\tau)}
\end{pmatrix} \end{equation} Therefore,
\begin{equation*}
\left\{\begin{array}{ccccc}
e^{i\sigma}&= & e^{i\alpha} &= & \omega e^{i(\beta+\tau)} \\
e^{i\tau} & = & e^{i\beta} &= & \omega e^{i(\alpha+\sigma)}\\
\omega &= & e^{i(\alpha+\tau)} &= & e^{i(\beta+\sigma)}\\
\end{array}\right.
\end{equation*}
Solving this system, we have
$\sigma=\alpha, \tau=\beta$,
\begin{align}\label{uniformsin28}
\left\{\begin{array}{l}
\alpha=0\\
\beta=\frac{2\pi}{3}\\
\end{array}\right. \quad
\left\{\begin{array}{l}
\alpha=\frac{2\pi}{3}\\
\beta=0\\
\end{array}\right. \quad
\left\{\begin{array}{l}
\alpha=\frac{4\pi}{3}\\
\beta=\frac{4\pi}{3}\\
\end{array}\right. \quad
\end{align}
Return to the equation (\ref{sp3}). For $ \alpha=0, ~\beta=\frac{2\pi}{3}$, we have $\nststile{q_2}{q_1} =\nststile{1}{0} ,~ \nststile{a_2}{a_1} =\nststile{2}{0} ,~ \nststile{a_4-a_5}{a_3-a_6} =\nststile{1}{0}$. For $ \alpha=\frac{2\pi}{3}, ~\beta=0 $, we have $\nststile{q_2}{q_1} =\nststile{0}{1} ,~ \nststile{a_2}{a_1} =\nststile{0}{2} ,~ \nststile{a_4-a_5}{a_3-a_6} =\nststile{0}{1}$. For $ \alpha=\frac{4\pi}{3}, ~\beta=\frac{4\pi}{3} $, we have $\nststile{q_2}{q_1} =\nststile{2}{2} ,~ \nststile{a_2}{a_1} =\nststile{1}{1} ,~ \nststile{a_4-a_5}{a_3-a_6} =\nststile{2}{2}$. To sum up, when $\nststile{p_2}{p_1} =\nststile{1}{1}$, it must be that $ \nststile{a_2}{a_1} \in \{\nststile{0}{2}, \nststile{2}{0}, \nststile{1}{1}\}$ for (\ref{sp2}) to hold.
Similarly, it can be shown that when $\nststile{p_2}{p_1} =\nststile{2}{2}$, it must be that $ \nststile{a_2}{a_1} \in \{\nststile{0}{1}, \nststile{1}{0}, \nststile{2}{2}\}$ for (\ref{sp2}) to hold.
Thus for $ \nststile{a_2}{a_1} \in \mathcal{M}$, (\ref{sp2}) does not hold,
i.e., the form of (\ref{uniformsin2}) is never equal to the form of (\ref{uniformsin3}).
In the same way, we can show that the form of (\ref{uniformsin1}) is not equal to the form of (\ref{uniformsin2}), and that different elements (with different parameters) of the same form (be it (\ref{uniformsin1}), (\ref{uniformsin2}), or (\ref{uniformsin3})) are never equal to each other.
Now we can prove the uniqueness of these forms.
By direct calculation, (\ref{uniformsin1}) has $9\times 9=81$ elements, (\ref{uniformsin2}) has $2\times 6\times 9=108$ elements, and (\ref{uniformsin3}) has $3\times 9=27$ elements. So the total number of elements is $81+108+27=216$, which is exactly equal to the order $|\mathcal{C}_1|$ of the group of single qutrit Clifford operators. Since each element of $\mathcal{C}_1$ has been shown to be representable in one of the forms (\ref{uniformsin1}), (\ref{uniformsin2}) or (\ref{uniformsin3}), and since these $216$ diagrams are pairwise distinct, they are in one-to-one correspondence with the elements of $\mathcal{C}_1$. Therefore the representation of single qutrit Clifford operators is unique. \end{proof}
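The order $|\mathcal{C}_1|=216$ invoked in this counting argument can also be confirmed by brute force. The sketch below (Python with NumPy, illustrative only) closes the set $\{\mathcal{S},H\}$ under multiplication modulo global phase and counts the resulting unitaries.
\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)
S = np.diag([1, 1, w])
H = np.array([[w**(k * j) for j in range(3)] for k in range(3)]) / np.sqrt(3)

def phase_fix(U):
    """Divide U by its first non-negligible entry, fixing the global phase."""
    flat = U.flatten()
    return U / flat[np.flatnonzero(np.abs(flat) > 1e-9)[0]]

group = [phase_fix(np.eye(3))]
frontier = list(group)
while frontier:                          # breadth-first closure under S and H
    new = []
    for U in frontier:
        for g in (S, H):
            V = phase_fix(g @ U)
            if not any(np.allclose(V, W, atol=1e-7) for W in group):
                group.append(V)
                new.append(V)
    frontier = new

print(len(group))                        # 216, the order of C_1 modulo phases
\end{verbatim}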
\section{Qutrit graph states in the ZX-calculus}
\subsection{Stabilizer state diagram and transformations of GS-LC diagrams }
The qutrit graph states have a nice representation in the ZX-calculus.
\begin{lemma}\cite{GongWang}\label{gstatezx} A qutrit graph state $\ket{G}$, where $G = (V,E)$ is an n-vertex graph, is represented in the graphical calculus as follows: \begin{itemize}
\item for each vertex $v \in V$, a green node with one output, and \item for each $1$-weighted edge $\{u,v\}\in E$, an $H$ node connected to the green nodes representing vertices $u$ and $v$, \item for each $2$-weighted edge $\{u,v\}\in E$, an $H^{\dagger}$ node connected to the green nodes representing vertices $u$ and $v$.
\end{itemize} \end{lemma} A graph state $\ket{G}$ is also denoted by the diagram \tikzfig{Qutrits/grapstg}.
\begin{definition}\cite{Miriam2} A diagram in the stabilizer ZX-calculus is called a GS-LC diagram if it consists of a graph state diagram with arbitrary single-qutrit Clifford operators applied to each output. These associated Clifford operators are called vertex operators. \end{definition} An $n$-qutrit GS-LC diagram is represented as
\begin{equation*}
\tikzfig{Qutrits/gslc}
\end{equation*} where $G = (V,E)$ is a graph and $U_v \in\mathcal{C}_1$ for $v\in V$.
\begin{theorem}\label{gandgst1}\cite{GongWang} Let $G = (V,E)$ be a graph with adjacency matrix $\Gamma$ and $G\ast_1 v $ be the graph that results from applying to $G$ a $1$-local complementation about some $v\in V$. Then the corresponding graph states $\ket{G} $ and $\ket{G\ast_1 v} $ are related as follows: \begin{equation}\label{locom1} \tikzfig{Qutrits/gandg1} \end{equation} where for $1\leqslant i \leqslant n, ~i\neq v$,
\begin{equation*} \nststile{b_i}{a_i} = \left\{\begin{array}{l}
\nststile{2}{2}, ~\mbox{if}~ \Gamma_{iv}\neq 0
\\
\nststile{0}{0}, ~\mbox{if} ~\Gamma_{iv}= 0\\
\end{array}\right.
\end{equation*} \end{theorem}
\begin{corollary}\label{gandgst2} Let $G = (V,E)$ be a graph with adjacency matrix $\Gamma$ and $G\ast_2 v $ be the graph that results from applying to $G$ a $2$-local complementation about some $v\in V$. Then the corresponding graph states $\ket{G} $ and $\ket{G\ast_2 v} $ are related as follows: \begin{equation}\label{locom2} \tikzfig{Qutrits/gandg2} \end{equation} where for $1\leqslant i \leqslant n, ~i\neq v$,
\begin{equation*} \nststile{b_i}{a_i} = \left\{\begin{array}{l}
\nststile{1}{1}, ~\mbox{if}~ \Gamma_{iv}\neq 0
\\
\nststile{0}{0}, ~\mbox{if} ~\Gamma_{iv}= 0\\
\end{array}\right.
\end{equation*} \end{corollary} \begin{proof} Note that $G\ast_2 v=(G\ast_1 v)\ast_1 v$. \end{proof}
\begin{theorem}\label{gandgst3} Let $G = (V,E)$ be a graph with adjacency matrix $\Gamma$ and let $G\circ_2 v $ be the graph that results from applying the transformation $\circ_2 v $ about some $v\in V$. Then the corresponding graph states $\ket{G} $ and $\ket{G\circ_2 v} $ are related as follows: \begin{equation}\label{locom3} \tikzfig{Qutrits//gandg3} \end{equation} where the Hadamard nodes are on the output of the vertex $v$.
\end{theorem} \begin{proof} According to the definition of $G\circ_2 v $, we only need to consider the vertices that are connected to $v$. The effect of the transformation $\circ_2 v $ is to replace each $ H$ node connected to $v$ with an $H^{\dagger}(=H^{-1})$ node, and vice versa, i.e., to change $H^{\pm 1}$ into $H^{\mp 1}$. By the rewriting rule (P1), we have \tikzfig{Qutrits//hhdagexch}. That means $G\circ_2 v $ can be seen as having one more $D$ node on each edge connected to $v$ than on the corresponding edge in $G$. The equality (\ref{locom3}) then follows immediately from pushing these $D$ nodes to the output of $v$ by the property (\ref{property1}).
\end{proof}
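In terms of the adjacency matrix, the transformation $\circ_2 v$ simply doubles, modulo $3$, the weight of every edge incident to $v$, swapping $1$-weighted ($H$) and $2$-weighted ($H^{\dagger}$) edges. The short Python sketch below records this assumed description; it is an illustration only.
\begin{verbatim}
# Doubling-neighbour-edge transformation G o_2 v on a qutrit adjacency matrix
# (assumed description: every edge weight at v is doubled mod 3, i.e. 1 <-> 2).
import numpy as np

def double_neighbour_edges(Gamma, v):
    Gamma = np.array(Gamma) % 3
    Gamma[v, :] = (2 * Gamma[v, :]) % 3
    Gamma[:, v] = (2 * Gamma[:, v]) % 3
    return Gamma

Gamma = np.array([[0, 1, 2],
                  [1, 0, 1],
                  [2, 1, 0]])
print(double_neighbour_edges(Gamma, 0))   # edges at vertex 0 become 2 and 1
\end{verbatim}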
\subsection{Rewriting arbitrary stabilizer state diagram into a GS-LC diagram} In this subsection we show that any stabilizer state diagram can be rewritten into a GS-LC diagram by rules of the ZX-calculus. We give the proof of this result following \cite{Miriam1, Miriam2}.
\begin{theorem}\label{stabtogslc1} Any stabilizer state diagram can be rewritten into a GS-LC diagram by rules of the ZX-calculus. \end{theorem} \begin{proof} By lemma \ref{diagramdecom}, any ZX-calculus diagram can be written as a composition of the four basic spiders together with phase shifts and Hadamard and Hadamard dagger nodes. Thus we can proceed by induction: \tikzfig{Qutrits//gdotsmall} is the initial GS-LC diagram, and if applying any basic component of a ZX-calculus diagram to a GS-LC diagram again results in a GS-LC diagram, then any stabilizer state diagram can be rewritten into a GS-LC diagram. The following lemmas \ref{stabtogslc2}, \ref{stabtogslc3}, \ref{stabtogslc4}, \ref{stabtogslc5copy} and \ref{stabtogslc6} give exactly these inductive steps, which completes the proof. \end{proof}
\begin{lemma}\label{stabtogslc2} A GS-LC diagram associated with \tikzfig{Qutrits//gdotsmall} is still a GS-LC diagram. \end{lemma} \begin{proof} It follows directly from the definition of GS-LC diagrams and lemma \ref{gstatezx}. \end{proof}
\begin{lemma}\label{stabtogslc3} Applying a basic single qutrit Clifford unitary to a GS-LC diagram still results in a GS-LC diagram. \end{lemma} \begin{proof} This follows directly from the definition of GS-LC diagrams. \end{proof}
\begin{lemma}\label{stabtogslc5} If a vertex in a GS-LC diagram has no neighbours, then it must be a single-qutrit pure stabilizer state, namely one of the following: \begin{equation}\label{singlestate}
\tikzfig{Qutrits//singlestates} \end{equation} \end{lemma} \begin{proof} A vertex with no neighbours in a GS-LC diagram must be written as \begin{equation*}
\tikzfig{Qutrits//singlestates2} \end{equation*} where $U$ is of one of the forms (\ref{uniformsin1}), (\ref{uniformsin2}) and (\ref{uniformsin3}). If
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformj} \end{equation*} then \begin{equation}\label{singlestatejk}
\tikzfig{Qutrits//singlestates2}= \tikzfig{Qutrits//singlestates3} \end{equation} For $\nststile{a_2}{a_1}=\nststile{0}{0}$, \begin{equation}\label{singlestates00}
\tikzfig{Qutrits//singlestates4} \end{equation}
For $\nststile{a_2}{a_1}=\nststile{2}{1}$, \begin{equation}\label{singlestates01}
\tikzfig{Qutrits//singlestates5} \end{equation}
For $\nststile{a_2}{a_1}=\nststile{1}{2}$, \begin{equation}\label{singlestates02}
\tikzfig{Qutrits//singlestates6} \end{equation}
For $\nststile{a_2}{a_1}=\nststile{1}{1}$, \begin{equation}\label{singlestates03}
\tikzfig{Qutrits//singlestates7} \end{equation} where we used the property proved in \cite{GongWang} that \begin{equation}\label{singlestates04}
\tikzfig{Qutrits//rtog12} \end{equation} Now we consider $\nststile{a_4}{a_3}$ in (\ref{singlestates03}). For $\nststile{a_4}{a_3}\in \{\nststile{0}{0}, \nststile{1}{1}, \nststile{2}{2}, \nststile{0}{2}, \nststile{2}{0}\}$, by (\ref{singlestates04}), we have \begin{equation*}
\tikzfig{Qutrits//singlestates71}\in\left\{\tikzfig{Qutrits//singlestates72}, \tikzfig{Qutrits//singlestates73}, \tikzfig{Qutrits//singlestates74}, \tikzfig{Qutrits//singlestates75}, \tikzfig{Qutrits//singlestates76}\right\}. \end{equation*} For $\nststile{a_4}{a_3}= \nststile{1}{0}$, we have \begin{equation}\label{singlestates0401}
\tikzfig{Qutrits//singlestates8} \end{equation} Similarly, we have \begin{equation}\label{singlestates0402} \tikzfig{Qutrits//singlestates9}, \quad \tikzfig{Qutrits//singlestates10}, \quad \tikzfig{Qutrits//singlestates11}. \end{equation}
In the same way, for $\nststile{a_2}{a_1}\in \{\nststile{0}{1}, \nststile{1}{0}, \nststile{2}{2}, \nststile{0}{2}, \nststile{2}{0}\}$, and $\nststile{a_4}{a_3}\in \mathcal{A}$, we can prove that \begin{equation*}
\tikzfig{Qutrits//singlestates71}\in\left\{ \tikzfig{Qutrits//singlestates}\right\}. \end{equation*}
If
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformk} \end{equation*} then \begin{equation*}
\tikzfig{Qutrits//singlestates2}= \tikzfig{Qutrits//singlestates12} \end{equation*} which is reduced to the case (\ref{singlestatejk}).
If
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformt} \end{equation*} then \begin{equation*}
\tikzfig{Qutrits//singlestates2}= \tikzfig{Qutrits//singlestates13} \end{equation*} which is also reduced to the case (\ref{singlestatejk}).
\end{proof}
\begin{definition} \cite{Miriam2}
A vertex in a GS-LC diagram that is being acted upon by some operation is called an operand vertex. \end{definition}
\begin{definition} \cite{Miriam2}
A neighbour of an operand vertex in a GS-LC diagram is called a swapping partner of the operand vertex. \end{definition}
\begin{lemma}\label{stabtogslc3to4}
If in some GS-LC diagram an operand vertex has at least one neighbour, it is always possible to change the vertex operator on the operand vertex to the following form using $1$-local ($2$-local) complementations:
\begin{equation}\label{chgoprt}
\tikzfig{Qutrits//chgoprt2} \end{equation}
where $\nststile{m_2}{m_1}, \nststile{m_4}{m_3}\in \mathcal{M}$. \end{lemma} \begin{proof} We can pick one neighbour of the operand vertex as the swapping partner. A $1$-local complementation about the operand vertex adds \tikzfig{Qutrits//11phasgt} to its vertex operator, while a $1$-local complementation about the swapping partner adds \tikzfig{Qutrits//22phasgt} to the vertex operator on the operand vertex. Note that local complementations about the operand vertex or its swapping partner do not remove the edge between these two vertices. Therefore, after each local complementation, the operand vertex still has a neighbour, enabling further local complementations. By Theorem \ref{uniformsin}, the vertex operator $U$ must be in one of the forms (\ref{uniformsin1}), (\ref{uniformsin2}) or (\ref{uniformsin3}). For
\begin{equation}\label{chgoprt3}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformj} \end{equation} if $\nststile{a_2}{a_1}\in \mathcal{Q}$, and $\nststile{a_4}{a_3}\in \mathcal{Q}$, similar to the trick used in the proof of Lemma \ref{xz112}, we have $ \nststile{a_2}{a_1} =\nststile{p_{2}}{p_{1}}+\nststile{m_{2}}{m_{1}}, \nststile{a_4}{a_3} =\nststile{p_{4}}{p_{3}}+\nststile{m_{4}}{m_{3}}$, where $\nststile{p_{2}}{p_{1}}, \nststile{p_{4}}{p_{3}}\in \mathcal{P}, \nststile{m_{2}}{m_{1}}, \nststile{m_{4}}{m_{3}}\in \mathcal{M}$. Therefore,
\begin{equation}
\tikzfig{Qutrits//cliffnorformj}= \tikzfig{Qutrits//cliffnorformjpmdec} \end{equation} where $\nststile{\bar{m}_{4}}{\bar{m}_{3}}\in \mathcal{M}$. By applying $1$-local complementations about the operand vertex or its swapping partner, the nodes $\nststile{p_{2}}{p_{1}}, \nststile{p_{4}}{p_{3}}$ can be removed, and the remaining part is of the form (\ref{chgoprt}). The case for
$\nststile{a_2}{a_1}\in \mathcal{Q}, \nststile{a_4}{a_3}\in \mathcal{M}$ or $\nststile{a_2}{a_1}\in \mathcal{M}, \nststile{a_4}{a_3}\in \mathcal{Q}$ is similar to the above, but simpler.
For
\begin{equation}\label{chgoprt4}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformk} \end{equation} the node $\nststile{p_{2}}{p_{1}}$ can be removed by a $1$-local complementation about the operand vertex or its swapping partner, after which we are in the case of (\ref{chgoprt3}).
For
\begin{equation}\label{chgoprt5}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformt} \end{equation} the two Hadamard nodes can be pushed up to the top and removed by a $1$-local complementation about the operand vertex or its swapping partner, after which we are in the case of (\ref{chgoprt3}).
The proof is similar for $2$-local complementations. \end{proof}
\begin{lemma}\label{stabtogslc4} Applying \tikzfig{Qutrits//gcdotsmall} to a GS-LC diagram results in a GS-LC diagram or a zero diagram. \end{lemma} \begin{proof}
There are two cases for the operand vertex when applying \tikzfig{Qutrits//gcdotsmall} to a GS-LC diagram.
Firstly, if the operand vertex has no neighbours, then by lemma \ref{stabtogslc5}, it is just one of the 12 single qutrit pure stabilizer states. If the operand vertex is in state \tikzfig{Qutrits//singlestates77} or \tikzfig{Qutrits//singlestates78}, the result of applying \tikzfig{Qutrits//gcdotsmall} is the scalar zero. Otherwise the result is a non-zero global factor, which can be ignored.
Secondly, we consider the case that the operand vertex has at least one neighbour. Note that we have the following property:
\begin{equation}\label{singlestates05}
\tikzfig{Qutrits//multycopy} \end{equation} where $a_i \in \{1, -1\}, i=1, \cdots, s$, $\nststile{b}{a}\in \mathcal{A}$,
$\nststile{m_2}{m_1}\in \mathcal{M}$,
$\nststile{m_{i2}}{m_{i1}}\in \{\nststile{m_2}{m_1}, \nststile{m_1}{m_2}\}, i=1, \cdots, s$.
Therefore, if the vertex operator of the operand vertex is \begin{equation}\label{singlestates06}
\tikzfig{Qutrits//multycopy2} \end{equation} then the operand vertex will be removed from the graph state when the operator \tikzfig{Qutrits//gcdotsmall} is applied.
Otherwise, we can pick one neighbour of the operand vertex as the swapping partner. By Lemma \ref{stabtogslc3to4}, it is always possible to change the vertex operator on the operand vertex to the form of (\ref{singlestates06}) using $1$-local complementations. Then we get back to the above case. This completes the proof. \end{proof}
\begin{lemma}\label{stabtogslc5copy} Applying \tikzfig{Qutrits//gcopysmall} to a GS-LC diagram still results in a GS-LC diagram. \end{lemma} \begin{proof} There are two cases for the operand vertex.
First, if the operand vertex has no neighbours, then by lemma \ref{stabtogslc5}, it is just one of the 12 single qutrit pure stabilizer states which can be written as
\begin{equation*}
\tikzfig{Qutrits//abandm}
\end{equation*} where $\nststile{b}{a}\in \mathcal{A}$, $\nststile{m_2}{m_1}\in \mathcal{M}$. Then \begin{equation*}
\tikzfig{Qutrits//abandm2}
\end{equation*} and \begin{equation*}
\tikzfig{Qutrits//abandm3},
\end{equation*}
where the right hand side of each equation is clearly a GS-LC diagram.
Second, if the operand vertex has at least one neighbour, we can write the green copy as \begin{equation}\label{abandm4}
\tikzfig{Qutrits//abandm4}
\end{equation}
Thus if there is no vertex operator on the operand vertex, then we just add a new vertex and edge to the diagram.
Otherwise, by Lemma \ref{stabtogslc3to4}, the vertex operator $U$ on the operand vertex can be changed to the form (\ref{chgoprt}) using $1$-local complementations. Now applying \tikzfig{Qutrits//gcopysmall} to the operand vertex, we have \begin{equation*}
\tikzfig{Qutrits//abandm5}
\end{equation*} which still means we just add a new vertex and edge to the graph. This completes the proof. \end{proof}
\begin{lemma}\label{stabtogslc6} Applying \tikzfig{Qutrits//gcocopysmall} to a GS-LC diagram will result in a GS-LC diagram or a zero diagram. \end{lemma} \begin{proof} Now there are two operand vertices, and we have four cases to deal with.
Firstly, if the two operand vertices are connected only to each other, then we do not need to care about all non-operand vertices. Now applying \tikzfig{Qutrits//gcocopysmall} to the operand vertices, we have \begin{equation*}
\tikzfig{Qutrits//abandm6}
\end{equation*} where $W, V, V^{\prime}\in \mathcal{C}_1$, $U=W\circ H^{\pm 1}\circ V^{\prime}\in \mathcal{C}_1$. By Theorem \ref{uniformsin}, $U$ must be in one of the forms (\ref{uniformsin1}), (\ref{uniformsin2}) or (\ref{uniformsin3}). For
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformj} \end{equation*} we have
\begin{equation}\label{loopu1}
\tikzfig{Qutrits//abandm7} \end{equation} The result is either a zero diagram or a GS-LC diagram.
For
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformk} \end{equation*} we have \begin{equation}\label{loopu2}
\tikzfig{Qutrits//abandm8} \end{equation} The result is a GS-LC diagram, where we used property (\ref{property2}) for the second equality, property (\ref{property5}) for the fourth equality, and (\ref{singlestates04}), (\ref{singlestates0401}) and (\ref{singlestates0402}) for the fifth equality.
For
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformt} \end{equation*} we have \begin{equation}\label{loopu3}
\tikzfig{Qutrits//abandm9} \end{equation}
The result is a GS-LC diagram. This completes the first case.
Secondly, if one operand vertex has no neighbours, then it must be one of the 12 single qutrit states given in (\ref{singlestate}). For $\nststile{a_2}{a_1}\in \mathcal{A}$ and
$\nststile{m_2}{m_1}\in \mathcal{M}$,
\begin{equation*}
\tikzfig{Qutrits//abandm10} \end{equation*}
\begin{equation*}
\tikzfig{Qutrits//abandm11} \end{equation*} Thus we are back in the situation covered by lemma \ref{stabtogslc3} and lemma \ref{stabtogslc4}, which means the resulting diagram is still a GS-LC diagram.
Thirdly, if both operand vertices have non-operand neighbours, denoted by $a$ and $b$ respectively, then by applying \tikzfig{Qutrits//gcocopysmall} to $a$ and $b$, we have
\begin{equation*}
\tikzfig{Qutrits//bhvnonb1} \end{equation*} or \begin{equation*}
\tikzfig{Qutrits//bhvnonb2} \end{equation*} where $U, V\in \mathcal{C}_1$. From now on, we will always denote by \tikzfig{Qutrits//hnode} the node \tikzfig{Qutrits//RGg_Hada}
or \tikzfig{Qutrits//RGg_Hadad} when it is unnecessary to specify which one explicitly.
By lemma \ref{stabtogslc3to4}, applying local complementations to $a$ and its swapping partner, the vertex operator $U$ can be changed to the form (\ref{chgoprt}). So we have
\begin{equation*}
\tikzfig{Qutrits//bhvnonb3} \end{equation*} or \begin{equation*}
\tikzfig{Qutrits//bhvnonb4} \end{equation*} where $V_1\in \mathcal{C}_1$. Note that \begin{equation*}
\tikzfig{Qutrits//mslide} \end{equation*} Therefore we have
\begin{equation*}
\tikzfig{Qutrits//bhvnonb5} \end{equation*} or \begin{equation}\label{bhvnonb66}
\tikzfig{Qutrits//bhvnonb6} \end{equation} where $W, V_2\in \mathcal{C}_1$. In the same way as we just did for $a$, we can repeat the procedure for $b$ by choosing its own swapping partner. Note that, in the case that $a$ is connected to $b$ or to $b$'s swapping partner, we will get extra phases of the form \tikzfig{Qutrits//22phasgt} or \tikzfig{Qutrits//green11gt} added to $a$'s vertex operator; these merge into the operator $W$, turning it into $W_1$. Now we have
\begin{equation*}
\tikzfig{Qutrits//bhvnonb7} \end{equation*} or \begin{equation*}
\tikzfig{Qutrits//bhvnonb8} \end{equation*} where $W_1, W_2, W_3 \in \mathcal{C}_1$, $\nststile{m_4}{m_3}, \nststile{m_{40}}{m_{30}}, \nststile{m_{41}}{m_{31}}, \cdots, \nststile{m_{4s}}{m_{3s}}\in \mathcal{M}$. Note that these green nodes $\nststile{m_{41}}{m_{31}}, \cdots, \nststile{m_{4s}}{m_{3s}}$ can be merged with the vertex operators of $b$'s neighbours by the spider rule.
Furthermore, we can merge the nodes $a$, $b$ and $c$ to get a GS-LC diagram. In fact, there are three cases.
1) $a$ and $b$ are neither connected by a Hadamard node nor have any common non-operand neighbours. In this case $a$, $b$ and $c$ can be merged directly, and thus we have \begin{equation*}
\tikzfig{Qutrits//bhvnonb9} \end{equation*} which is a GS-LC diagram.
2) $a$ and $b$ are not connected but they have common non-operand neighbours. For any common non-operand neighbour $d$ we have \begin{equation*}
\tikzfig{Qutrits//bhvnonb10} \mbox{if}~ h_1 h_2=1 ~ \mbox{or} ~\tikzfig{Qutrits//bhvnonb11} ~\mbox{otherwise}, \end{equation*} where in either sub-case we get a GS-LC diagram.
3) $a$ and $b$ are connected. Then we have \begin{equation*}
\tikzfig{Qutrits//bhvnonb12} \end{equation*} where $W_4\in \mathcal{C}_1$, $\nststile{p}{p} =\nststile{1}{1} ~\mbox{or}~ \nststile{2}{2}$, and we used the rule (EU), property (\ref{property3}) and property (\ref{property4}). Clearly the result is a GS-LC diagram.
Finally, if one operand vertex is connected only to the other, but the latter has a non-operand neighbour, then we have
\begin{equation*}
\tikzfig{Qutrits//bhvnonb13} \end{equation*} where $V, U\in \mathcal{C}_1$. We can use the same strategy as we have applied for (\ref{bhvnonb66}) in the third case to cancel out the vertex operator $U$ of the second operand vertex $b$. Therefore we have \begin{equation*}
\tikzfig{Qutrits//bhvnonb14} \end{equation*} where $V_1, T\in \mathcal{C}_1$. If in this process the operand vertex $a$ gains one or more non-operand neighbours, then we can proceed as above. Otherwise, \begin{equation*}
\tikzfig{Qutrits//bhvnonb15} \end{equation*} where $W=V_1\circ h\in \mathcal{C}_1$. By equalities (\ref{loopu1}), (\ref{loopu2}) and (\ref{loopu3}), and lemma \ref{stabtogslc5}, $K= \tikzfig{Qutrits//bhvnonb16}$ is either a zero diagram or a stabilizer state of the form (\ref{singlestate}). If $K$ is a stabilizer state represented by a green node in (\ref{singlestate}), then the green phase operator is added to the operator $T$ by the spider rule (S1), thus we get a GS-LC diagram. Otherwise, by the copy rule (B1) and the rule (K1), $K$ can be copied, and it is easy to see that we still get a GS-LC diagram. This completes the proof. \end{proof}
\subsection{Reduced GS-LC diagrams} Further to GS-LC diagrams, we can define a more reduced form as in \cite{Miriam2}.
\begin{definition}\label{rgslcdef}
A stabilizer state diagram is said to be in reduced GS-LC (or rGS-LC) form if it is a GS-LC diagram satisfying the following conditions: \begin{itemize} \item All vertex operators belong to the set \begin{equation}\label{rgslc1}
\mathcal{R}=\left\{\quad \tikzfig{Qutrits/singlephases}\quad\right\}.
\end{equation}
\item Two adjacent vertices must not both have vertex operators including red nodes. \end{itemize} \end{definition}
\begin{theorem}\label{gslctorgslc} In the qutrit ZX-calculus, each qutrit stabilizer state diagram can be rewritten into some rGS-LC diagram.
\end{theorem}
\begin{proof} By theorem \ref{stabtogslc1}, any stabilizer state diagram can be rewritten into a GS-LC diagram. If a vertex in the GS-LC diagram has no neighbours, by Lemma \ref{stabtogslc5}, it can be brought into one of the forms in (\ref{singlestate}), where the nine green nodes can clearly be seen as having vertex operators belonging to the set $\mathcal{R}$. For the other three red nodes, note that \begin{equation*}
\tikzfig{Qutrits//vxrgslc1} \end{equation*} \begin{equation*}
\tikzfig{Qutrits//vxrgslc2} \end{equation*} \begin{equation*}
\tikzfig{Qutrits//vxrgslc3} \end{equation*}
Therefore, the vertex operators on these three red nodes can be brought into elements of $\mathcal{R}$.
From now on, we can assume that each vertex in the GS-LC diagram has at least one neighbour. Our strategy is to prove the theorem in three steps: First we prove that each vertex operator in the GS-LC diagram can be brought into the following form: \begin{equation}\label{rprimeform}
\mathcal{R^{\prime}}=\left\{\quad \tikzfig{Qutrits//rprime}\quad\right\},
\end{equation} where $ \nststile{q_{2}}{q_{1}} \in \mathcal{Q}, \nststile{p_{2}}{p_{1}} \in \mathcal{P}$. Then we show that any GS-LC diagram with vertex operators all belonging to $\mathcal{R^{\prime}}$ can be further brought into a form where no two adjacent vertices both have vertex operators including red nodes, while the vertex operators still reside in $\mathcal{R^{\prime}}$. Finally we prove that the GS-LC diagram obtained in the second step can be rewritten into a rGS-LC diagram.
For the first step of the strategy, we pick an arbitrary node $v$ that has at least one neighbour. By theorem \ref{uniformsin}, the vertex operator $U$ of the vertex $v$ has the form (\ref{uniformsin1}), (\ref{uniformsin2}) or (\ref{uniformsin3}). We deal with these three forms separately.
Firstly consider that $U$ has the form (\ref{uniformsin1}):
\begin{equation}\label{u2nodes0}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformj}. \end{equation} We analyse all possible cases of the phase $ \nststile{a_{4}}{a_{3}}$.
If $ \nststile{a_{4}}{a_{3}}\in \mathcal{M} $, then either $\nststile{a_{4}}{a_{3}}=\nststile{0}{0}$, in which case clearly $U\in \mathcal{R^{\prime}}$, or $ \nststile{a_{4}}{a_{3}}\in \{ \nststile{2}{1}, \nststile{1}{2}\}$. In the latter case, by the commutation rule (K2), the copy rule (K1) and the colour change rules (H2) and (H2$^{\prime}$), the red node $ \nststile{a_{4}}{a_{3}}$ can be pushed up through the node $ \nststile{a_{2}}{a_{1}}$, copied by $v$ onto the edges connecting it with its neighbours, then colour-changed from red to green by the Hadamard nodes, and finally merged into the vertex operators of the neighbours of $v$. Therefore the vertex operator $U$ is now just a green node, and thus belongs to the set $\mathcal{R^{\prime}}$.
If $ \nststile{a_{4}}{a_{3}}\in \mathcal{N}=\{\nststile{0}{1}, \nststile{1}{0}, \nststile{0}{2}, \nststile{2}{0}\}$, then $ \nststile{a_{4}}{a_{3}}=\nststile{p_{2}}{p_{1}} + \nststile{m_{2}}{m_{1}}$, where $\nststile{p_{2}}{p_{1}} \in \mathcal{P}, \nststile{m_{2}}{m_{1}} \in \{ \nststile{2}{1}, \nststile{1}{2}\}$. Thus by the commutation rule (K2),
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformju3}. \end{equation*}
In a similar way as above, the node $\nststile{m_{2}}{m_{1}}$ can be merged into the vertex operators of neighbours of $v$. Now $U$ has the form \begin{equation}\label{u2nodes}
\tikzfig{Qutrits//cliffnorformju32}. \end{equation}
We now need to consider all the possible cases of $\nststile{\bar{a}_2}{\bar{a}_1}$. If $\nststile{\bar{a}_2}{\bar{a}_1} = \nststile{0}{0} $, then by applying $1$-local complementations to $v$, the red node $\nststile{p_{2}}{p_{1}}$ can be removed, so $U$ is changed to the identity, which belongs to the set $\mathcal{R^{\prime}}$. If $\nststile{\bar{a}_2}{\bar{a}_1} \in \{ \nststile{2}{1}, \nststile{1}{2}\}$, then
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformju33} \end{equation*} where $\nststile{n_{2}}{n_{1}} \in \mathcal{N}, \nststile{\bar{m}_2}{\bar{m}_1} \in \{ \nststile{2}{1}, \nststile{1}{2}\}, \nststile{\bar{p}_2}{\bar{p}_1} \in \mathcal{P}$. As we did above, the red node $ \nststile{\bar{m}_2}{\bar{m}_1}$ can be merged into the vertex operators of the neighbours of $v$, and the red node $ \nststile{\bar{p}_2}{\bar{p}_1}$ can be removed by applying $1$-local complementations to $v$. So $U$ is changed to just the green node $\nststile{\bar{a}_2}{\bar{a}_1}$, which clearly belongs to the set $\mathcal{R^{\prime}}$. To sum up, for $\nststile{\bar{a}_2}{\bar{a}_1} \in \mathcal{M}$ in the case of (\ref{u2nodes}), the vertex operator $U$ can be changed to have no red nodes. For the remaining case where
$\nststile{\bar{a}_2}{\bar{a}_1} \in \mathcal{Q}$, it is clear that the vertex operator $U$ in form (\ref{u2nodes}) now belongs to the set $\mathcal{R^{\prime}}$.
Therefore, for $ \nststile{a_{4}}{a_{3}}\in \mathcal{N}$ in the form of (\ref{u2nodes0}), the vertex operator $U$ can be brought into a form of $\mathcal{R^{\prime}}$.
Now the remaining case of $ \nststile{a_{4}}{a_{3}}$ in the form of (\ref{u2nodes0}) is that $ \nststile{a_{4}}{a_{3}} \in \mathcal{P}$; this case is exactly the same as (\ref{u2nodes}), which has already been dealt with. Therefore, for
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformj} \end{equation*}
$U$ can always be brought into the form of $\mathcal{R^{\prime}}$.
Secondly we consider that $U$ has the form (\ref{uniformsin2}):
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformk}. \end{equation*} We first apply $1$-local complementations to $v$ so that the red node $ \nststile{p_2}{p_1}$ is removed; we are then back to the case (\ref{u2nodes0}), which has already been dealt with.
Thirdly we consider that $U$ has the form (\ref{uniformsin3}):
\begin{equation*}
\tikzfig{Qutrits//singleu}=\tikzfig{Qutrits//cliffnorformt}. \end{equation*} Here we can push the two Hadamard nodes up by property (\ref{property1}) and then apply the graphical transformation $\circ_2 v $; the two Hadamard nodes can then be removed by theorem \ref{gandgst3} and property (\ref{property1}). Therefore we get back to the case (\ref{u2nodes0}), which has already been dealt with.
So far we are done with the chosen node $v$ in the first step of the proof strategy. To proceed to the nodes other than $v$ in the GS-LC diagram, we need to consider the impact of the above operations on those neighbours of $v$ whose vertex operators already belong to the set $\mathcal{R^{\prime}}$. First note that in the above process no red node is added to any neighbour of $v$; only a green node $ \nststile{p_2}{p_1}$ or $ \nststile{m_2}{m_1}$ is possibly added to $v$'s neighbours, where $\nststile{p_2}{p_1} \in \mathcal{P}, \nststile{m_2}{m_1}\in \{ \nststile{2}{1}, \nststile{1}{2}\}$. If it is the green node $ \nststile{m_2}{m_1}$ that is added to $v$'s neighbours, then their vertex operators still belong to the set $\mathcal{R^{\prime}}$. If the green node $ \nststile{p_2}{p_1}$ is added to $v$'s neighbours, then any resulting vertex operator that lies beyond the set $\mathcal{R^{\prime}}$ must fall into the sub-case of (\ref{u2nodes}) where $\nststile{\bar{a}_2}{\bar{a}_1} \in \mathcal{M}$, for which we have shown that the red node can be removed. Therefore, each time we have brought a vertex operator into an element of $\mathcal{R^{\prime}}$, we need to check whether the vertex operators of the neighbours of the operand vertex still belong to $\mathcal{R^{\prime}}$; if not, the offending red node is removed as above. Since each vertex operator needs to be considered at most twice, the process eventually terminates. Therefore all vertex operators belong to the set $\mathcal{R^{\prime}}$ after finitely many steps.
By now we have finished the first step of the proof strategy: we have proved that any stabilizer state diagram is equal to some GS-LC diagram such that all vertex operators belong to the set $\mathcal{R^{\prime}}$.
For the second step of the strategy, we are going to prove that each GS-LC diagram with vertex operators all belonging to $\mathcal{R^{\prime}}$ can be further brought into a form where any two adjacent vertices must not both have vertex operators including red nodes, with all vertex operators being still in $\mathcal{R^{\prime}}$.
Suppose there are two adjacent qutrits $a$ and $b$ which have red nodes in their vertex operators, i.e. there is a subdiagram of the form
\begin{equation}\label{2redside}
\tikzfig{Qutrits//2rednodes2} \end{equation} where $\nststile{p_{2}}{p_{1}}, \nststile{p_{4}}{p_{3}} \in \mathcal{P}, \nststile{q_{2}}{q_{1}}, \nststile{q_{4}}{q_{3}} \in \mathcal{Q}$. We operate on the vertex $b$ to remove the red node from its vertex operator. Firstly,
\begin{equation}\label{2redside2}
\tikzfig{Qutrits//2rednodes3} \end{equation} where $\nststile{m_2}{m_1}\in \mathcal{M}$ and we used local complementations about $a$ to remove the green node $\nststile{p_6}{p_5} $ on the output of $b$ for the second equality. Below we analyse all possible values of $\nststile{m_2}{m_1}$ in (\ref{2redside2}).
If $\nststile{m_2}{m_1}=\nststile{0}{0}$, then we can apply local complementations about $b$ to remove the red node $\nststile{p_4}{p_3} $, and we have
\begin{equation}\label{2redside3}
\tikzfig{Qutrits//2rednodes4} \end{equation} Now we proceed to simplify the vertex operator of $a$:
\begin{equation}\label{2redside4}
\tikzfig{Qutrits//2rednodes5} \end{equation} where $\nststile{p_i}{p_j}\in \mathcal{P}, \nststile{m_4}{m_3}, \nststile{m_6}{m_5}\in \{ \nststile{2}{1}, \nststile{1}{2}\} ~\mbox{or}~ \nststile{m_4}{m_3}=\nststile{m_6}{m_5}=\nststile{0}{0}$. By lemma \ref{pzxz}, the first three nodes on the top of the output of $a$ can be rewritten in one of two forms. If we use the rewritten form with two Hadamard nodes, then we have
\begin{equation}\label{2redside5}
\tikzfig{Qutrits//2rednodes6} \end{equation} where we apply the graphical transformation $\circ_2 a $ and local complementations about $a$ for the second equality, perform local complementations about $a$ for the third equality, and make a copy of the red node $\nststile{m_6}{m_5}$ through $a$ for the fourth equality. Obviously we get a desired form at the end of (\ref{2redside5}). Otherwise we use the rewritten form without Hadamard nodes; then
\begin{equation}\label{2redside6}
\tikzfig{Qutrits//2rednodes7} \end{equation} where $\nststile{\bar{p}_6}{\bar{p}_5} \in \mathcal{P}, \nststile{p_8}{p_7}+\nststile{p_2}{p_1} = \nststile{p_{10}}{p_{9}}\in \{ \nststile{0}{0}, \nststile{1}{1}, \nststile{2}{2}\}$, and we apply local complementations about $a$ for the second equality. Here we need to consider all the possible values of $ \nststile{p_{10}}{p_{9}}$ in (\ref{2redside6}). If $ \nststile{p_{10}}{p_{9}}=\nststile{0}{0}$, then
\begin{equation}\label{2redside7}
\tikzfig{Qutrits//2rednodes8} \end{equation} where $\nststile{\bar{p}_6}{\bar{p}_5} +\nststile{m_4}{m_3} = \nststile{q_6}{q_5}\in \mathcal{Q}, \nststile{q_8}{q_7}\in \mathcal{Q}, \nststile{\bar{m}_6}{\bar{m}_5} \in \mathcal{M}$, and we make a copy of the red node $\nststile{m_6}{m_5}$ through $a$ for the third equality. Otherwise $ \nststile{p_{10}}{p_{9}} \in \mathcal{P}$ in (\ref{2redside6}), and then
\begin{equation}\label{2redside8}
\tikzfig{Qutrits//2rednodes9} \end{equation} where $\nststile{q_{10}}{q_{9}}\in \mathcal{Q}, \nststile{q_{10}}{q_{9}}+\nststile{m_4}{m_3}=\nststile{q_{12}}{q_{11}}\in \mathcal{Q}, \nststile{q_{14}}{q_{13}}\in \mathcal{Q}, \nststile{\bar{m}_6}{\bar{m}_5}, \nststile{m_8}{m_7}, \nststile{\bar{m}_8}{\bar{m}_7} \in \mathcal{M}$, and we make copies of red nodes $\nststile{m_6}{m_5}$ and $\nststile{m_8}{m_7}$ through $a$ for the third equality and the sixth equality respectively.
By now, for $\nststile{m_2}{m_1}=\nststile{0}{0}$, we have proved that the resulting diagram in (\ref{2redside2}) can be brought into a form in which at least one of the qutrits $a$ and $b$ has no red nodes in its vertex operator, while both $a$ and $b$ have vertex operators belonging to $\mathcal{R^{\prime}}$.
The remaining case is that $\nststile{m_2}{m_1}\in \{ \nststile{2}{1}, \nststile{1}{2}\} $ in (\ref{2redside2}). In this case,
\begin{equation}\label{2redside9}
\tikzfig{Qutrits//2rednodes10} \end{equation} where $\nststile{m_4}{m_3}, \nststile{m_6}{m_5}, \nststile{m_8}{m_7}, \nststile{m_{10}}{m_{9}} \in \{ \nststile{2}{1}, \nststile{1}{2}\}, \nststile{m_8}{m_7}+\nststile{m_{10}}{m_{9}} = \nststile{m_{12}}{m_{11}} \in \mathcal{M}$, $\nststile{q_{4}}{q_{3}}\in \mathcal{Q}$.
In the resulting diagram at the end of (\ref{2redside9}), the first four nodes on the top of the output of $a$ are of the same type as the corresponding part of (\ref{2redside3}). So we can use the results obtained for (\ref{2redside3}): push the red node $ \nststile{m_{12}}{m_{11}} $ through the green nodes, copy it, and add it in green colour to the vertex operators of $a$'s neighbours by the commutation rule (K2) and the colour-change rules (H2) and (H2$^{\prime}$). In this way, for $\nststile{m_2}{m_1}\in \{ \nststile{2}{1}, \nststile{1}{2}\} $ in (\ref{2redside2}), the resulting diagram can be brought into a form in which at least one of the qutrits $a$ and $b$ has no red nodes in its vertex operator, while both $a$ and $b$ have vertex operators belonging to $\mathcal{R^{\prime}}$.
Therefore, the diagram (\ref{2redside}) can always be brought into a form in which at least one of the qutrits $a$ and $b$ has no red nodes in its vertex operator, while both $a$ and $b$ have vertex operators belonging to $\mathcal{R^{\prime}}$. This finishes the second step of the proof strategy.
The third step of the proof strategy is to show that the vertex operators not only belong to $\mathcal{R^{\prime}}$ but in fact lie in $\mathcal{R}$. To do this, it suffices to prove that the red node in the following diagram can be removed:
\begin{equation}\label{2redside10}
\tikzfig{Qutrits//2rednodes11} \end{equation} where the vertex operator of $a$ belongs to the set \begin{equation}\label{maset}
\mathcal{T}=\left\{\quad \tikzfig{Qutrits//alterphases}\quad\right\}.
\end{equation} For the sake of conciseness, we show just one case; the others are similar: \begin{equation}\label{2redside11}
\tikzfig{Qutrits//2rednodes12} \end{equation} where we apply $1$-local complementations about $a$ for the first equality, use equality (\ref{pzxzhh}) for the second equality, and apply the graphical transformation $\circ_2 a $ for the third equality.
Each time we have brought a diagram of the form (\ref{2redside}) into a diagram that satisfies the rGS-LC conditions for the two connected nodes, we need to check whether the vertex operators of the neighbours of the operand vertex still belong to $\mathcal{R}$; if not, the offending red node is removed. Since the number of vertex operators beyond $\mathcal{R}$ in such a diagram is always decreasing, all vertex operators belong to the set $\mathcal{R}$ after finitely many steps. Therefore, any GS-LC diagram is equal to some rGS-LC diagram within the ZX-calculus.
\end{proof}
\subsection{Transformations of rGS-LC diagrams}
In this subsection we show how to transform one rGS-LC diagram into another. Note that we call the graphical transformation $\circ_2 v $ a doubling-neighbour-edge transformation. \begin{lemma}\label{equivrgslc} Suppose there is a rGS-LC diagram which has a pair of neighbouring qutrits $a$ and $b$ as follows:\begin{equation}\label{equivrgslcs1}
\tikzfig{Qutrits/equivrgslct1} \end{equation} where $\nststile{a_2}{a_1}\in \mathcal{A}, \nststile{p_1}{p_1}\in \mathcal{P}, \nststile{m_2}{m_1}\in \mathcal{M}, \nststile{q_2}{q_1} = \nststile{p_1}{p_1} +\nststile{m_2}{m_1}$, and $h$ stands for either an $H$ node or an $H^{\dagger}$ node. Then a rGS-LC diagram with the following pair can be obtained by performing firstly a $p_1$-local complementation about $b$, followed by a $(-p_1)$-local complementation about $a$, and possibly a further $(-p_1)$-local complementation about $b$, together with some doubling-neighbour-edge operations $\circ_2 a(b) $ and copying operations on red nodes with phase angles in $ \mathcal{M}$: \begin{equation}\label{equivrgslcs2}
\tikzfig{Qutrits/equivrgslct2} \end{equation} where $\nststile{a^{\prime}_2}{a^{\prime}_1}\in \mathcal{A}, \nststile{p^{\prime}_1}{p^{\prime}_1}\in \mathcal{P}, \nststile{m^{\prime}_2}{m^{\prime}_1}\in \mathcal{M}$. \end{lemma}
\begin{proof} We first show the process of transformation from (\ref{equivrgslcs1}) to (\ref{equivrgslcs2}), then check that all the involved operations, including the two local complementations, doubling-neighbour-edge operations and copying operations, will transform all vertex operators to allowed ones.
Firstly consider the case that $\nststile{m_2}{m_1}=\nststile{0}{0}, \nststile{a_2}{a_1}=\nststile{m_4}{m_3} \in \mathcal{M}$ in (\ref{equivrgslcs1}). We have \begin{equation}\label{equivrgslcs33}
\tikzfig{Qutrits//equivrgslct33} \end{equation} where $\nststile{m_j}{m_i} \in \mathcal{M}$; we applied a $p_1$-local complementation about $b$ for the first equality, then a $(-p_1)$-local complementation about $a$ for the second equality, and copied the red node $\nststile{m_6}{m_5}$ in the last equality. The resulting diagram has the two properties in the definition of a rGS-LC diagram.
Secondly consider the case that $\nststile{m_2}{m_1} \in \mathcal{M}, \nststile{a_2}{a_1}=\nststile{m_4}{m_3} \in \mathcal{M}$ in (\ref{equivrgslcs1}). We have
\begin{equation}\label{equivrgslcs4}
\tikzfig{Qutrits//equivrgslct4} \end{equation} where $\nststile{m_j}{m_i} \in \mathcal{M}, \nststile{m_{14}}{m_{13}} = \nststile{m_{8}}{m_{7}} +\nststile{m_{2}}{m_{1}}, \nststile{m_{12}}{m_{11}} = \nststile{m^{\prime}_{6}}{m^{\prime}_{5}} +\nststile{m_{4}}{m_{3}}$; we copied the red node $\nststile{m_6}{m_5}$ in the third equality. Now we can proceed based on the result of (\ref{equivrgslcs33}). That is, \begin{equation}\label{equivrgslcs5}
\tikzfig{Qutrits//equivrgslct5} \end{equation} where $\nststile{m_j}{m_i} \in \mathcal{M}, \nststile{m_{20}}{m_{19}} = \nststile{m_{14}}{m_{13}} +\nststile{m_{16}}{m_{15}}, \nststile{m_{24}}{m_{23}} = \nststile{m_{18}}{m_{17}} +\nststile{m_{22}}{m_{21}}$; we copied the red node $\nststile{m_{10}}{m_9}$ in the third equality. The resulting diagram has the two properties in the definition of a rGS-LC diagram.
\iffalse Thirdly consider the case that $\nststile{m_2}{m_1} =\nststile{0}{0}, \nststile{a_2}{a_1}=\nststile{p_2}{p_2} \in \mathcal{P}$ in (\ref{equivrgslcs1}). Note that either $p_1=p_2$ or $p_1=-p_2$ since $p_1, p_2 \in \mathbb{Z}_3 $. If $p_1=p_2$, we have
\begin{equation}\label{equivrgslcs6}
\tikzfig{Qutrits//equivrgslct6} \end{equation} where we applied a $p_1$-local complementation about $b$ for the first equality and the last equality, a $(-p_1)$-local complementation about $a$ for the second equality, and used the property (\ref{property8}) for the third equality. The resulted diagram has the two properties in the definition of rGS-LC diagram.
If $p_1=-p_2$, then $p_2=-p_1=2p_1 (mod 3)$, and we have \begin{equation}\label{equivrgslcs7}
\tikzfig{Qutrits//equivrgslct7} \end{equation} where $h_1=H~\mbox{or}~H^{-1}$, we applied a $p_1$-local complementation about $b$ for the first equality, a $(-p_1)$-local complementation about $a$ for the second equality, and doubling-neighbour-edge operation $\circ_2 b$ for the last equality. The resulted diagram has the two properties in the definition of rGS-LC diagram. \fi
Finally, consider the case that $\nststile{m_2}{m_1} \in \mathcal{M}, \nststile{a_2}{a_1}=\nststile{q_2}{q_1} =\nststile{p_2}{p_2} +\nststile{m_4}{m_3} \in \mathcal{Q}$ in (\ref{equivrgslcs1}), where $\nststile{p_2}{p_2} \in \mathcal{P}, \nststile{m_4}{m_3} \in \mathcal{M}$. By (\ref{equivrgslcs4}) and (\ref{equivrgslcs5}), we have
\begin{equation}\label{equivrgslcs8}
\tikzfig{Qutrits//equivrgslct8} \end{equation} where $\nststile{m_j}{m_i} \in \mathcal{M}$. Note that either $p_1=p_2$ or $p_1=-p_2$ since $p_1p_2\neq 0, p_1, p_2 \in \mathbb{Z}_3 $. If $p_1=p_2$, we have
\begin{equation}\label{equivrgslcs9}
\tikzfig{Qutrits//equivrgslct9} \end{equation} where $\nststile{m_j}{m_i} \in \mathcal{M}$; we used the property (\ref{property8}) for the first equality, copied the red node $\nststile{m_{10}}{m_9}$ in the third equality, and applied a $(-p_1)$-local complementation about $b$ for the last equality. The resulting diagram has the two properties in the definition of a rGS-LC diagram.
If $p_1=-p_2$, then $p_2=-p_1=2p_1 \pmod 3$, and we have \begin{equation}\label{equivrgslcs10}
\tikzfig{Qutrits//equivrgslct10} \end{equation} where $h_1=H~\mbox{or}~H^{-1}$, and we used the doubling-neighbour-edge operation $\circ_2 b$ for the third equality. The resulting diagram has the two properties in the definition of a rGS-LC diagram.
Next we need to check that all the operations applied above will transform all vertex operators to allowed ones. First note that we can neglect all vertices which are not connected to $a$ or $b$, since their vertex operators will not be changed under the transformation.
Besides, the neighbouring vertices of the operand vertex will gain some green phase operators through the copying of red nodes with phase angles in $ \mathcal{M}$. Since the vertex operator of a neighbouring vertex either is merely a green phase or contains a green phase in the group $ \mathcal{M}$, this copying operation preserves the first property of rGS-LC diagrams.
As the doubling-neighbour-edge operation does not change any connectivity, but only adds a $D$ node to the vertex operator of the operand vertex whenever necessary, it helps to preserve the first property of rGS-LC diagrams.
Furthermore, we consider the effect of local complementations. First, each vertex operator consisting of only green phases on a vertex other than $a$ and $b$ will continue to be a green phase under local complementations, since local complementations merely add green phases to such vertices. Second, we consider vertices adjacent to $a$ or $b$ with a vertex operator containing a red node. By the definition of rGS-LC diagrams, such vertices must not be adjacent to $a$. Thus we can assume that $w$ is a vertex whose vertex operator contains a red node and that $\{w, b\}$ is an edge of the rGS-LC diagram before the transformation by local complementation. Then the $p_1$-local complementation about $b$ adds a phase \tikzfig{Qutrits//_p1-p1} to the vertex operator of $w$ and produces an edge between $w$ and $a$ with weight $\Gamma^{\prime}_{wa}=0+p_1\Gamma_{wb}\Gamma_{ab}=p_1\Gamma_{wb}\Gamma_{ab}\neq 0 \pmod 3$. And the $(-p_1)$-local complementation about $a$ adds \tikzfig{Qutrits//p1p1} to $w$, with the weight $\Gamma^{\prime\prime}_{wb}=\Gamma^{\prime}_{wb}+(-p_1)\Gamma^{\prime}_{wa}\Gamma^{\prime}_{ab}=\Gamma_{wb}-(p_1)^2\Gamma_{wb}\Gamma_{ab}^2=\Gamma_{wb}-\Gamma_{wb} = 0 \pmod 3$, where $\Gamma^{\prime}_{wb}=\Gamma_{wb}, ~\Gamma^{\prime}_{ab}=\Gamma_{ab}$; thus the edge between $w$ and $b$ is removed. So if a further $(-p_1)$-local complementation about $b$ follows, nothing will be added to the vertex operator of $w$. Therefore the vertex operator of $w$ still resides in the set $ \mathcal{R}$, which means the transformation preserves the first property of rGS-LC diagrams.
Finally, we consider two vertices $v,w$ adjacent to $b$ which simultaneously have red nodes in their vertex operators. Since $v,w$ lie in a rGS-LC diagram, there is no edge between $v$ and $w$. After the $p_1$-local complementation about $b$, there will be an edge between $v$ and $w$ and two edges $\{a, v\}$ and $\{a, w\}$, with weights $\Gamma^{\prime}_{vw}=p_1\Gamma_{wb}\Gamma_{vb},~\Gamma^{\prime}_{aw}=p_1\Gamma_{wb}\Gamma_{ab}, ~\Gamma^{\prime}_{av}=p_1\Gamma_{vb}\Gamma_{ab}$ respectively. Also, the $(-p_1)$-local complementation about $a$ results in $\Gamma^{\prime\prime}_{vw}=\Gamma^{\prime}_{vw}+(-p_1)\Gamma^{\prime}_{wa}\Gamma^{\prime}_{va}=p_1\Gamma_{wb}\Gamma_{vb}-p_1(p_1)^2\Gamma_{wb}\Gamma_{vb}\Gamma_{ab}^2= 0 \pmod 3$, which means the edge $\{v, w\}$ is removed, so $v$ and $w$ are still non-adjacent. By the above calculation, the edges $\{v, b\}$ and $\{w, b\}$ are also removed, so if a further $(-p_1)$-local complementation about $b$ follows, $v$ and $w$ will remain non-adjacent. Clearly, edges connecting vertices which are not adjacent to $a$ or $b$ remain unchanged. Therefore the second property of rGS-LC diagrams is also retained after the transformation. To sum up, the resulting diagram is still a rGS-LC diagram.
\end{proof}
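The modular arithmetic used in the last two paragraphs can also be checked mechanically. The Python sketch below (an illustration only) runs over all non-zero values of $p_1$, $\Gamma_{ab}$, $\Gamma_{wb}$ and $\Gamma_{vb}$ in $\mathbb{Z}_3$ and confirms the three facts used above: the edge $\{w,a\}$ created by the first local complementation has non-zero weight, and the edges $\{w,b\}$ and $\{v,w\}$ are removed by the second.
\begin{verbatim}
# Check the mod-3 edge-weight computations from the proof above, using the
# local-complementation rule Gamma'_{ij} = Gamma_{ij} + a*Gamma_{iv}*Gamma_{jv} (mod 3).
from itertools import product

ok = True
for p1, g_ab, g_wb, g_vb in product([1, 2], repeat=4):
    g_wa = (p1 * g_wb * g_ab) % 3                 # new edge {w,a} after comp. about b
    g_va = (p1 * g_vb * g_ab) % 3                 # new edge {v,a} after comp. about b
    g_vw = (p1 * g_wb * g_vb) % 3                 # new edge {v,w} after comp. about b
    g_wb2 = (g_wb - p1 * g_wa * g_ab) % 3         # edge {w,b} after comp. about a
    g_vw2 = (g_vw - p1 * g_wa * g_va) % 3         # edge {v,w} after comp. about a
    ok = ok and (g_wa != 0) and (g_wb2 == 0) and (g_vw2 == 0)
print(ok)   # expected output: True
\end{verbatim}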
\section{Completeness}
\subsection{Comparing rGS-LC diagrams} In this subsection, we show that a pair of rGS-LC diagrams can be transformed into such a form that they are equal under the standard interpretation if and only if they are identical.
\begin{definition}\cite{Miriam2} A pair of rGS-LC diagrams on the same number of qutrits is called simplified if there is no pair of qutrits $a, b$ such that $a$ has a red node in its vertex operator in the first diagram but not in the second, $b$ has a red node in the second diagram but not in the first, and $a$ and $b$ are adjacent in at least one of the diagrams.
\end{definition}
\begin{lemma}\label{rgspsym} Each pair of rGS-LC diagrams on the same number of qutrits can be made into a simplified form. \end{lemma} \begin{proof} The proof is the same as that for the qubit case presented in \cite{Miriam1, Miriam2}.
\end{proof}
\begin{lemma}\label{unpaired} The two component diagrams of a simplified pair of rGS-LC diagrams with an unpaired red node are unequal, where an unpaired red node is one that occurs in a vertex operator of only one of the two diagrams. \end{lemma}
\begin{proof} We denote by $a$ the vertex which has the red node as a vertex operator, by $D_1$ the diagram where $a$ resides, and by $D_2$ the other diagram of the simplified pair. Then we give the proof by considering three cases.
Firstly, if $a$ has no neighbours in either $D_1$ or $D_2$, then $a$ along with its vertex operators must be a single qutrit state. Thus $D_1$ and $D_2$ are equal only if the two states of $a$ in each diagram are the same. But in fact, \begin{equation}\label{gnntrd}
\tikzfig{Qutrits//gnntred1}, \end{equation} where $\nststile{a_2}{a_1} \in \mathcal{A}, p \in \{ 1, 2\}, m \in \{0, 1, -1\}$, therefore $D_1$ and $D_2$ are unequal.
Secondly, we consider the case that $a$ is separated in just one of $D_1$ and $D_2$. By theorem \ref{graphstateeuivalence}, two graph states are equivalent if and only if there exists a sequence of local complementations $\ast$ and doubling-neighbour-edge operations $\circ$ acting on one of them to obtain the other. But a local complementation or a doubling-neighbour-edge operation never changes the connectivity of a separated single qutrit state, so $D_1$ and $D_2$ cannot be equal.
Thirdly, we consider the case that $a$ has neighbours in both $D_1$ and $D_2$. Denote by $N_1$ the set of all vertices that are adjacent to $a$ in $D_1$, and by $N_2$ the set of all vertices that are adjacent to $a$ in $D_2$. Since $D_1$ is a rGS-LC diagram and $a$ has a red vertex operator, the vertex operators of any vertex in $N_1$ must be green phases. Also the vertex operators of any vertex in $N_2$ must be green phases, otherwise that vertex and $a$ would be a pair which is not allowed in the simplified pair of rGS-LC composed of $D_1$ and $D_2$.
Let $n = |N_1|, m = |N_1 \cap N_2|$. Without loss of generality, we can assume that the first $m$ elements of $N_1$ are exactly those in $N_1 \cap N_2$. Now the diagram $D_1$ has the following form: \begin{equation}\label{d1form}
\tikzfig{Qutrits//d1fm} \end{equation} where $\nststile{b_i}{a_i} \in \mathcal{A}, h_i=H ~\mbox{or}~ H^{-1}, i \in \{ 1, \cdots, n\}, \nststile{m_2}{m_1} \in \mathcal{M}, \nststile{p}{p} \in \mathcal{P} $. Let \begin{equation*}
\tikzfig{Qutrits//hfm} \end{equation*}
\noindent then $h_i=h^{a_i}, a_i \in \{ 1, -1\}, i \in \{ 1, \cdots, n\}$.
We now define an operation depending on $D_1$ as follows: \begin{equation}\label{udef} U_{D_1}=\left(\bigotimes_{v_i\in N_1} C_{X, v_i\rightarrow a}\right)\circ \left( R_{Z,a}\bigotimes_{v_i\in N_1} D^{\frac{1-a_i}{2}}\right) \end{equation} where $D=H^2=(H^{-1})^2$, $R_{Z,a}$ is the operation \tikzfig{Qutrits//ppgt} acting on $a$, and $C_{X, v_i\rightarrow a}$ is a generalized controlled-NOT gate with control $v_i$ and target $a$. The operation $U_{D_1}$ is well-defined as all the $C_{X, v_i\rightarrow a}$ commute with each other. Since each component is invertible, $U_{D_1}$ must be invertible as well, therefore $U_{D_1}\circ D_1 = U_{D_1} \circ D_2 \Leftrightarrow D_1 = D_2$, which means if $U_{D_1}\circ D_1 \neq U_{D_1} \circ D_2$, then $D_1 \neq D_2$.
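As a side remark on the claim that $U_{D_1}$ is well defined, the commutation of the controlled-NOT gates sharing the target $a$ can be checked numerically. The following Python sketch assumes the standard qutrit gate $C_X\ket{j}\ket{k}=\ket{j}\ket{k+j \bmod 3}$; it is only an illustration and not part of the diagrammatic proof.
\begin{verbatim}
# Two generalized controlled-NOT gates with different controls and a common
# target commute (assumed gate: C_X|j>|k> = |j>|k + j mod 3>).
import numpy as np

I3 = np.eye(3)
X = np.roll(np.eye(3), 1, axis=0)                       # X|j> = |j+1 mod 3>
P = [np.outer(I3[:, j], I3[:, j]) for j in range(3)]    # projectors |j><j|
Xp = [np.linalg.matrix_power(X, j) for j in range(3)]   # powers of X

# Qutrits ordered (v1, v2, a): controls v1 and v2, common target a.
CX_1a = sum(np.kron(np.kron(P[j], I3), Xp[j]) for j in range(3))
CX_2a = sum(np.kron(np.kron(I3, P[j]), Xp[j]) for j in range(3))
print(np.allclose(CX_1a @ CX_2a, CX_2a @ CX_1a))   # expected output: True
\end{verbatim}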
Below we will show that the qutrit $a$ is in state \tikzfig{Qutrits//m1m2st} in $U_{D_1}\circ D_1$, while being either entangled with other qutrits or in a state unequal to \tikzfig{Qutrits//m1m2st} in $U_{D_1}\circ D_2$,
where $\nststile{\bar{m}_2}{\bar{m}_1} \in \mathcal{M}$. By the proof for the first two cases, this would entail that $U_{D_1}\circ D_1 \neq U_{D_1} \circ D_2$, thus $D_1 \neq D_2$.
First, for $U_{D_1}\circ D_1$, we have
\begin{equation}\label{d1uop}
\tikzfig{Qutrits//d1uopt} \end{equation} where for the first equality we applied the doubling-neighbour-edge operation to $v_i$ whenever $a_i=-1$, and we used the colour change rules (H2), (H2$^{\prime}$) for the second and last equalities, and the property (\ref{property7}) for the third equality.
Second, we consider $U_{D_1}\circ D_2$. There are three cases for the vertices, according to their adjacency to $a$ and whether controlled-NOT gates are applied to them: 1) vertices adjacent to $a$ with no controlled-NOT gates applied to them; 2) vertices not adjacent to $a$ but with controlled-NOT gates applied to them; 3) vertices adjacent to $a$ which also have controlled-NOT gates applied to them. Here we ignore edges not connected to $a$ and edges without vertices in $N_1$. Then we have
\begin{equation}\label{d1uop2}
\tikzfig{Qutrits//d1uopt2} \end{equation} where the phase $\nststile{c}{b} \in \mathcal{A}$ comes from the fusion of the vertex operator on $a$ and the $R_Z$ part of $U_{D_1}$.
To proceed, we investigate the different cases according to the value of $\nststile{c}{b}$.
If $\nststile{c}{b}=\nststile{p_1}{p_1 }+ \nststile{m_4}{m_3}$, where $\nststile{p_1}{p_1 }\in \mathcal{P}, \nststile{m_4}{m_3}\in \mathcal{M}$, we apply the doubling-neighbour-edge operation to $v_i$ for $a_i=-1, 1\leqslant i \leqslant m$. Then we have
\begin{equation}\label{d1uop3}
\tikzfig{Qutrits//d1uopt3} \end{equation} Furthermore, we apply a $p_1$-local complementation about $a$; the vertex operator of $a$ then becomes
\begin{equation}\label{d1uop4}
\tikzfig{Qutrits//d1uopt4} \end{equation} where $\nststile{m_6}{m_5}, \nststile{m_8}{m_7} \in \mathcal{M}$. The red node $\nststile{m_6}{m_5}$ can be copied and changed in colour to fuse into vertex operators of $a$'s neighbours. So the diagram (\ref{d1uop3}) is equal to
\begin{equation}\label{d1uop5}
\tikzfig{Qutrits//d1uopt5} \end{equation} If $N_1=N_2$, by property (\ref{property7}), $a$ is either adjacent to some neighbours or in the state \tikzfig{Qutrits//q1q2st} where $\nststile{q_2}{q_1} \in \mathcal{Q}$, which is clearly unequal to \tikzfig{Qutrits//m1m2st}. Otherwise, $a$ will always be connected to some neighbours after the application of $U_{D_1}$.
If $\nststile{c}{b}=\nststile{m_4}{m_3}\in \mathcal{M}$, there are two sub-cases. First, if $N_2-N_1\neq \emptyset$, then there exists $v \in N_2$ such that $v \notin N_1$. We apply a local complementation to this $v$, which will add a green phase \tikzfig{Qutrits//p1p1} to the vertex operator on $a$. The edges connected with $a$ are changed as well, while there remains at least one vertex adjacent to $a$.
Therefore we come back to the case $\nststile{c}{b}=\nststile{p_1}{p_1 }+ \nststile{m_4}{m_3}$. Second, if $N_2-N_1= \emptyset$, since $N_2\neq \emptyset$, it must be that $N_2 \subseteq N_1$, and thus $m =| N_2|>0$.
Ignoring the edges not connected to $a$, the diagram is now as follows: \begin{equation}\label{d1uop6}
\tikzfig{Qutrits//d1uopt6} \end{equation} where for the first equality, we applied the doubling-neighbour-edge operation to $v_i$ whenever $a_i=-1$. Note that if $a_i=-1$ while $v_i$ has no neighbours, then we just use the property that \tikzfig{Qutrits//dcopyg}.
As mentioned just before dealing with $U_{D_1}\circ D_1$, to show that $D_1\neq D_2$ it suffices to show that the qutrit $a$ is either entangled with other qutrits or in a state unequal to \tikzfig{Qutrits//m1m2st} in $U_{D_1}\circ D_2$. For this purpose, we only need to pay attention to the vertices $a, v$ and the qutrit version of the controlled-Z gates between them.
Also the last $H^{\dagger}$ gate and the green phase $\nststile{m_4}{m_3}$ on $a$ are irrelevant here, so will be ignored. Therefore we have \begin{equation}\label{d1uop7}
\tikzfig{Qutrits//d1uopt7} \end{equation} where $\nststile{m}{m}=\nststile{1}{1} ~\mbox{or}~ \nststile{0}{0}$; we used the Euler decomposition rule (EU) and applied a $2$-local complementation to $v$ for the third equality, a $1$-local complementation about $a$ for the fourth equality, and the following property for the last equality, which can be easily proved by the rule (EU) and the property (\ref{singlestates04}): \begin{equation}\label{d1uopt8}
\tikzfig{Qutrits//d1uopt8} \end{equation} Now it is obvious that $a$ is entangled with $v$. Thus for the second sub-case of the case $\nststile{c}{b}=\nststile{m_4}{m_3}\in \mathcal{M}$, we have also proved that $D_1\neq D_2$.
So far we have shown in all cases that $D_1\neq D_2$, hence finishing the proof that the two component diagrams of a simplified pair of rGS-LC diagrams with an unpaired red node must be unequal.
\end{proof}
\begin{theorem}\label{identical} The two component diagrams of any simplified pair of rGS-LC are equal under the standard interpretation if and only if they are identical. \end{theorem}
\begin{proof} That identical diagrams are equal is obvious, so we only need to prove the converse.
Denote the two component diagrams by $D_1$ and $D_2$ respectively. Suppose $D_1 = D_2$ in the sense that they represent the same quantum state. By lemma \ref{unpaired}, $D_1\neq D_2$ if there exists an unpaired red node. So we can assume that there are no unpaired red nodes in the simplified pair of rGS-LC diagrams under consideration.
Let the graph underlying $D_1$ be $G_1 = (V, E_1 $), and the graph underlying $D_2$ be $G_2 = (V, E_2)$. Without loss of generality, assume that $V = \{1,2, \cdots, n\}$. Then $D_1$ and $D_2$ can be depicted as follows: \begin{equation}\label{dgmidn} \tikzfig{Qutrits//dgmidnty} \end{equation}
where $\nststile{d_v}{c_v}, \nststile{t_v}{s_v} \in \{ \nststile{0}{0}, \nststile{1}{1}, \nststile{2}{2}\}$,
\begin{equation*} \nststile{b_v}{a_v} \in \left\{\begin{array}{l}
\mathcal{A}, ~\mbox{if}~ \nststile{d_v}{c_v}= \nststile{0}{0}
\\
\{ \nststile{1}{1}, \nststile{2}{0}, \nststile{0}{2}\}, ~\mbox{if}~ \nststile{d_v}{c_v}= \nststile{1}{1}
\\
\{ \nststile{2}{2}, \nststile{1}{0}, \nststile{0}{1}\}, ~\mbox{if}~ \nststile{d_v}{c_v}= \nststile{2}{2}\\
\end{array}\right. \quad\quad \nststile{l_v}{k_v} \in \left\{\begin{array}{l}
\mathcal{A}, ~\mbox{if}~ \nststile{t_v}{s_v}= \nststile{0}{0}
\\
\{ \nststile{1}{1}, \nststile{2}{0}, \nststile{0}{2}\}, ~\mbox{if}~ \nststile{t_v}{s_v}= \nststile{1}{1}
\\
\{ \nststile{2}{2}, \nststile{1}{0}, \nststile{0}{1}\}, ~\mbox{if}~ \nststile{t_v}{s_v}= \nststile{2}{2}\\ \end{array}\right.
\end{equation*} $1\leqslant v \leqslant n$.
Since all red nodes are paired up, it follows that $\nststile{d_v}{c_v}= \nststile{0}{0} \Leftrightarrow \nststile{t_v}{s_v}= \nststile{0}{0}$. If $\nststile{d_v}{c_v}\neq \nststile{0}{0}, \nststile{t_v}{s_v}\neq \nststile{0}{0}$, but $\nststile{d_v}{c_v}\neq \nststile{t_v}{s_v}$, w.l.o.g., assume $\nststile{d_v}{c_v}= \nststile{1}{1} ,\nststile{t_v}{s_v}= \nststile{2}{2}$. Let
\begin{equation*} U_v=\tikzfig{Qutrits//dgmidnty2} \end{equation*} Since $U_v$ is unitary, $U_{v}\circ D_1 = U_{v} \circ D_2 \Leftrightarrow D_1 = D_2$. Now
\begin{equation*} U_v\circ D_1=\tikzfig{Qutrits//uvd1} \quad\quad \quad U_v\circ D_2=\tikzfig{Qutrits//uvd2} \end{equation*} Clearly the pair $U_v\circ D_1$ and $U_v\circ D_2$ has an unpaired red node, so $U_v\circ D_1\neq U_v\circ D_2$, and thus $ D_1\neq D_2$. Therefore, $D_1 = D_2$ implies $ \nststile{d_v}{c_v} = \nststile{t_v}{s_v}, \forall v\in \{1,2, \cdots, n\}$. Thus $D_2$ can be represented as
\begin{equation*} \tikzfig{Qutrits//d2new} \end{equation*}
Define two operators
\begin{equation} U=\bigotimes_{v\in V} R_{X, v}^{-1} \quad\quad \mbox{and} \quad W=\bigotimes_{\{u, w \}\in E_1} C_{Z, uw}^{-1}, \end{equation} where $R_{X, v}=$ \tikzfig{Qutrits//cvdvps} and $ C_{Z, uw}$ is the edge \tikzfig{Qutrits//ctozuw} connecting $u$ and $w$ in $G_1$, with $h=H$ or $H^{-1}$. Clearly, both $U$ and $W$ are invertible; thus $(W \circ U)\circ D_1 = (W \circ U) \circ D_2 \Leftrightarrow D_1 = D_2$. After applying the operator $U$ to $ D_1$ and $ D_2$, all the red nodes are cancelled out, leaving only green nodes as vertex operators. Based on this operation, further composition with $W$ results in $(W \circ U)\circ D_1= \tikzfig{Qutrits//wud1}$. Since $D_1 = D_2$, it must be that $(W \circ U)\circ D_2= \tikzfig{Qutrits//wud1}$. If there were an edge in $E_1$ not belonging to $E_2$, or an edge in $E_2$ not belonging to $E_1$, then $(W \circ U)\circ D_2$ would be an entangled state. So $(E_1-E_2)\cup (E_2-E_1)=\emptyset$, i.e., $E_1=E_2$. Furthermore, the weight of each edge in $G_1$ must be the same as that of the corresponding edge in $G_2$, since otherwise $(W \circ U)\circ D_2$ could not be a product of single qutrit states. Therefore we have $G_1=G_2$. It follows immediately that $(W \circ U)\circ D_2= \tikzfig{Qutrits//wud12}$. Again by $D_1 = D_2$, we have $\nststile{b_v}{a_v} = \nststile{l_v}{k_v}, \forall v\in V$. Thus $D_1$ and $D_2$ are identical. This completes the proof.
\end{proof}
\subsection{Completeness for qutrit stabilizer quantum mechanics} To achieve the proof of completeness for qutrit stabilizer QM, we will proceed in two main steps. Firstly, we show the completeness for stabilizer states.
\begin{theorem}\label{complst} The ZX-calculus is complete for pure qutrit stabilizer states. \end{theorem}
\begin{proof} We only need to show that any two ZX-calculus diagrams that represent the same qutrit stabilizer state can be rewritten from one to the other by the ZX rules. Suppose $D_1$ and $D_2$ are such two diagrams. By theorem \ref{gslctorgslc}, $D_1$ and $D_2$ can be rewritten into rGS-LC diagrams $D^{\prime}_1$ and $D^{\prime}_2$ respectively. Clearly $D^{\prime}_1$ and $D^{\prime}_2$ are a pair of
rGS-LC diagrams on the same number of qutrits. Then by lemma \ref{rgspsym}, they can be transformed to a simplified pair of
rGS-LC diagrams $D^{\prime\prime}_1$ and $D^{\prime\prime}_2$ in the ZX-calculus while still representing the same quantum state. Now it follows from theorem \ref{identical} that $D^{\prime\prime}_1$ and $D^{\prime\prime}_2$ are identical diagrams; we denote this diagram by $D^{\prime\prime}$. As described above, we have shown how to rewrite $D_1$ and $D_2$ into $D^{\prime\prime}$ respectively. If we invert the rewriting process from $D_2$ to $D^{\prime\prime}$ and compose it with the rewriting process from $D_1$ to $D^{\prime\prime}$, then we obtain a rewriting process from $D_1$ to $D_2$. This completes the proof.
\end{proof}
Secondly, we use the following map-state duality to relate quantum states and linear operators:
\begin{equation}\label{mapstate} \tikzfig{Qutrits/mapstatedual} \end{equation}
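\noindent Concretely, on the level of the standard interpretation (and up to the convention of which wires are bent), this duality identifies, in the simplest case, a two-qutrit state with a linear map on a single qutrit via
$$\sum_{i,j=0}^{2} c_{ij}\,\ket{i}\otimes\ket{j}\ \longleftrightarrow\ \sum_{i,j=0}^{2} c_{ij}\,\ket{i}\bra{j}.$$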
\noindent Then by theorem \ref{complst} and the map-state duality (\ref{mapstate}), we have the main result: \begin{theorem}\label{complall} The ZX-calculus is complete for qutrit stabilizer quantum mechanics. \end{theorem}
A natural question arises at the end of this chapter: is there a general proof of completeness of the ZX-calculus for arbitrary-dimensional (qudit) stabilizer QM? This is the problem we would like to address next, but we should also mention some challenges we may face. Apart from the increase in the order of the local Clifford groups, the main difficulty comes from the fact that it is not known whether every stabilizer state is equivalent to a graph state under the local Clifford group when the dimension $d$ has multiple prime factors \cite{Hostens}. As far as we know, stabilizer states are equivalent to graph states under the local Clifford group when the dimension $d$ has only a single prime factor \cite{ShiangGr}. Furthermore, a necessary and sufficient condition for two qudit graph states to be equivalent under the local Clifford group, in terms of operations on graphs, is unknown in the case that $d$ is non-prime. Finally, the generalised Euler decomposition rule for the generalised Hadamard gate cannot be trivially derived, and other uncommon rules might be needed for general $d$.
\chapter{Some applications of the ZX-calculus}\label{chapply}
In the previous chapters, we mainly explored the theoretical aspects of the ZX-calculus, giving complete axiomatisations for the whole of qubit quantum mechanics and for other fragments of QM. Now we turn to the application side of the ZX-calculus. Since its invention, the ZX-calculus has been applied to quantum foundations, such as non-locality \cite{CDKW,CDKW2}, and quantum computing \cite{Duncanpx, Horsman, ckzh, domniel}. It has also been incorporated into the Quantomatic software for automated reasoning \cite{Quanto}. However, none of these applications uses any of the rules involving the new generators--the triangle and the $\lambda$ box.
In this chapter, we show some applications of the ZX-calculus with rules involving the new generators. These comprise four parts: a proof of the generalised supplementarity, equivalent forms of 3-qubit entangled states, a representation of the Toffoli gate, and equivalence-checking for the UMA gate.
\section{Proof of the generalised supplementarity}
In Chapter \ref{chfull}, the $\lambda$ box is restricted to be parameterised by a non-negative real number. However, we can define green and red spiders parameterised by arbitrary complex numbers in terms of generators of the $ZX_{ full}$-calculus. In fact, for any complex number $a$, write $a=\lambda e^{i\alpha}$, where $\lambda \geq 0$ and $\alpha \in [0,~2\pi)$. Let \tikzfig{diagrams//lambdanooutput}. Then by rules (S1), (L1) and (L5) of the $ZX_{ full}$-calculus, the following definition is well-defined:
\begin{equation}\label{generalgreenspiderdef}
\tikzfig{diagrams//generalgreenspiderdefini} \end{equation}
Thus by the spider rule (S1) and the rules (L1), (L4) and (L5), we have the generalised green spider rule:
\begin{equation}\label{generalgrspiderrule}
\tikzfig{diagrams//generalgreenspiderfuse} \end{equation} where $a, b$ are arbitrary complex numbers.
By the colour change rule (H), the generalised red spider can be defined as follows:
\begin{equation}\label{generalredspiderdef}
\tikzfig{diagrams//generalredspiderdefini} \end{equation}
Therefore, we have the generalised red spider rule:
\begin{equation}\label{generalredspiderrule}
\tikzfig{diagrams//generalredspiderfuse} \end{equation} where $a, b$ are arbitrary complex numbers.
The generalised green and red spiders have the following interpretation in Hilbert spaces:
\begin{equation}\label{gphaseinter}
\begin{array}{c} \left\llbracket \tikzfig{diagrams//generalgreenspider} \right\rrbracket=\ket{0}^{\otimes m}\bra{0}^{\otimes n}+a\ket{1}^{\otimes m}\bra{1}^{\otimes n}, \\ \left\llbracket \tikzfig{diagrams//generalredspider} \right\rrbracket=\ket{+}^{\otimes m}\bra{+}^{\otimes n}+a\ket{-}^{\otimes m}\bra{-}^{\otimes n}, \end{array} \end{equation} where $a$ is an arbitrary complex number.
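As a quick consistency check of these interpretations (a direct matrix computation, assuming the two spiders are connected by $n\geq 1$ wires), composing a generalised green spider with parameter $b$ into one with parameter $a$ multiplies the parameters:
\begin{equation*}
\left(\ket{0}^{\otimes m}\bra{0}^{\otimes n}+a\ket{1}^{\otimes m}\bra{1}^{\otimes n}\right)\left(\ket{0}^{\otimes n}\bra{0}^{\otimes k}+b\ket{1}^{\otimes n}\bra{1}^{\otimes k}\right)=\ket{0}^{\otimes m}\bra{0}^{\otimes k}+ab\,\ket{1}^{\otimes m}\bra{1}^{\otimes k},
\end{equation*}
since $\bra{0}^{\otimes n}\ket{1}^{\otimes n}=\bra{1}^{\otimes n}\ket{0}^{\otimes n}=0$; this multiplicative behaviour of the parameters is what the generalised spider rules above express diagrammatically.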
Now we consider the generalised supplementarity-- also called cyclotomic supplementarity, with supplementarity as a special case--which is interpreted as merging $n$ subdiagrams if the $n$ phase angles divide the circle uniformly \cite{jpvw}. The diagrammatic form of the generalised supplementarity is as follows: \begin{equation}\label{generalpsgt0}
\tikzfig{diagrams//generalsupp} \end{equation} where there are $n$ parallel wires in the diagram at the right-hand side.
Next we show that the generalised supplementarity can be seen as a special form of the generalised spider rule as shown in (\ref{generalredspiderrule}). For simplicity, we ignore scalars in the rest of this section.
First, it can be directly calculated that \begin{equation*} \left\llbracket \tikzfig{diagrams//generalpsgtelt} \right\rrbracket= \left\llbracket \tikzfig{diagrams//generalpsgtert} \right\rrbracket \end{equation*} where $a\in \mathbb{C}, a\neq -1$.
Then by the completeness of the $ZX_{ full}$-calculus, we have \begin{equation}\label{generalpsgt}
\tikzfig{diagrams//generalpsgte} \end{equation}
In particular, \begin{equation}\label{generalpsgt2}
\tikzfig{diagrams//generalpsgte2} \end{equation}
where $\alpha \in [0, 2\pi), \alpha\neq \pi$. For $\alpha= \pi$, we can use the $\pi$ copy rule directly.
Then
\begin{equation}\label{generalpsgt3}
\tikzfig{diagrams//generalpsgte3} \end{equation} where we used the following formula given in \cite{jpvw}:
\begin{equation}\label{fracidentyty} \prod_{j=0}^{n-1}(1+e^{i(\alpha+\frac{j}{n}2\pi)})=1+e^{i(n\alpha+(n-1)\pi)} \end{equation}
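For instance, for $n=2$ this identity can be checked directly:
$$\left(1+e^{i\alpha}\right)\left(1+e^{i(\alpha+\pi)}\right)=\left(1+e^{i\alpha}\right)\left(1-e^{i\alpha}\right)=1-e^{2i\alpha}=1+e^{i(2\alpha+\pi)},$$
in agreement with (\ref{fracidentyty}).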
Note that if $n$ is odd, then
\begin{equation}\label{generalpsgt4}
\tikzfig{diagrams//generalpsgte4} \end{equation}
If $n$ is even, then
\begin{equation}\label{generalpsgt5}
\tikzfig{diagrams//generalpsgte5} \end{equation}
It is not hard to see that if we simplify the right-hand diagram of (\ref{generalpsgt0}) according to the parity of $n$ (ignoring scalars), then by the Hopf law we get the same results as shown in (\ref{generalpsgt4}) and (\ref{generalpsgt5}).
\section{3-qubit entangled states} It is well known that there are two SLOCC (Stochastic Local Operations and Classical Communication) equivalence classes of genuinely entangled 3-qubit states \cite{Dur}, with representative members $\ket{GHZ}=\ket{000}+\ket{111}$ and $\ket{W}=\ket{100}+\ket{010}+\ket{001}$. In fact, we have \begin{theorem} \cite{Dur} Two $n$-partite states $\ket{\psi}$ and $\ket{\phi}$ are SLOCC equivalent if and only if there exist invertible linear maps $L_1, L_2, \ldots, L_n$ such that
\begin{equation}\label{SLOCCequivalent} \ket{\psi}=(L_1\otimes L_2\otimes \cdots \otimes L_n)\ket{\phi} \end{equation} \end{theorem} In \cite{CoeckeEdwards}, Coecke and Edwards have represented the W state with phases in the ZX-calculus as follows:
\begin{equation}\label{wstatephases} \tikzfig{diagrams//wstatewithphases} \end{equation}
Another representation of the W state, by means of a triangle, was essentially given in \cite{Emmanuel} and explicitly given in \cite{ngwang}: \begin{equation}\label{wstatetriangle} \tikzfig{diagrams//wstatewithtrianle} \end{equation}
Here we build a bridge between these two representations of the W state by means of the $\lambda$ box.
\begin{lemma} For the $W$ state, up to non-zero scalars, we have
\tikzfig{diagrams//w2forms} \end{lemma}
\begin{proof}
\tikzfig{diagrams//w2formsderive}
\end{proof}
Furthermore, Coecke and Edwards give explicit conditions for a GHZ-class state: \begin{lemma}\cite{CoeckeEdwards} The following diagram
\begin{equation}\label{ghzgeneral} \tikzfig{diagrams//ghzgeneralorigin} \end{equation}
represents a GHZ-class state if $\alpha, \beta, \gamma$ violate one of the following equalities:
\[
\begin{array}{c}
\alpha+\beta+\gamma=\pi \\
\alpha+\beta-\gamma=\pi \\
\alpha-\beta+\gamma=\pi \\
\alpha-\beta-\gamma=\pi \\ \end{array}
\] \end{lemma}
However, if we use the generalised spiders defined in the previous section, then we can obtain an exact solution for transforming a GHZ-class state into a simpler form. Note that we will use the standard interpretation to derive diagrammatic identities in the following proofs, which is justified by the completeness of the $ZX_{ full}$-calculus. \begin{lemma}\label{ghznorm} For GHZ-class states, we have
\begin{equation}\label{ghznormalform} \tikzfig{diagrams//ghznormal} \end{equation} where $\lambda_i$ and $x_i$ are complex numbers, $(1+\lambda_1\lambda_2\lambda_3)(\lambda_1+\lambda_2\lambda_3)(\lambda_2+\lambda_1\lambda_3)(\lambda_3+\lambda_1\lambda_2)\neq 0$, \[ x_1=\pm\sqrt{\frac{(\lambda_2+\lambda_1\lambda_3)(\lambda_1+\lambda_2\lambda_3)}{(1+\lambda_1\lambda_2\lambda_3)(\lambda_3+\lambda_1\lambda_2)}},
x_2= \pm\sqrt{\frac{(\lambda_1+\lambda_2\lambda_3)(\lambda_3+\lambda_1\lambda_2)}{(1+\lambda_1\lambda_2\lambda_3)(\lambda_2+\lambda_1\lambda_3)}},
\]
\[ x_3=\pm\sqrt{\frac{(\lambda_2+\lambda_1\lambda_3)(\lambda_3+\lambda_1\lambda_2)}{(1+\lambda_1\lambda_2\lambda_3)(\lambda_1+\lambda_2\lambda_3)}}. \]
\iffalse \[ \tikzfig{diagrams//lambdai}=\begin{pmatrix}
1 & 0 \\
0 & \lambda_i
\end{pmatrix},
\tikzfig{diagrams//xi}=\begin{pmatrix}
1 & 0 \\
0 &x_i
\end{pmatrix}.
\]
\fi \end{lemma} \begin{proof} The matrix of the left-hand side of (\ref{ghznormalform}) is \[ \begin{pmatrix}
1+\lambda_1\lambda_2\lambda_3 & 0 \\
0 &\lambda_2+\lambda_1\lambda_3\\
0 & \lambda_1+\lambda_2\lambda_3 \\
\lambda_3+\lambda_1\lambda_2 & 0
\end{pmatrix}. \] The matrix of the right-hand side of (\ref{ghznormalform}) is \[ \begin{pmatrix}
1 & 0 \\
0 &x_1x_3\\
0 &x_1x_2 \\
x_2x_3 & 0
\end{pmatrix}. \] These two matrices are equal up to a scalar, thus \[ x_1x_3=\frac{\lambda_2+\lambda_1\lambda_3}{1+\lambda_1\lambda_2\lambda_3}, x_1x_2=\frac{\lambda_1+\lambda_2\lambda_3}{1+\lambda_1\lambda_2\lambda_3}, x_2x_3=\frac{\lambda_3+\lambda_1\lambda_2}{1+\lambda_1\lambda_2\lambda_3}. \] Then \[ (x_1x_2x_3)^2=\frac{(\lambda_2+\lambda_1\lambda_3)(\lambda_1+\lambda_2\lambda_3)(\lambda_3+\lambda_1\lambda_2)}{(1+\lambda_1\lambda_2\lambda_3)^3}, \] i.e., \[ x_1x_2x_3=\pm\sqrt{\frac{(\lambda_2+\lambda_1\lambda_3)(\lambda_1+\lambda_2\lambda_3)(\lambda_3+\lambda_1\lambda_2)}{(1+\lambda_1\lambda_2\lambda_3)^3}}. \] Therefore, \[ x_1=\frac{x_1x_2x_3}{x_2x_3}=\pm\sqrt{\frac{(\lambda_2+\lambda_1\lambda_3)(\lambda_1+\lambda_2\lambda_3)}{(1+\lambda_1\lambda_2\lambda_3)(\lambda_3+\lambda_1\lambda_2)}},
x_2=\frac{x_1x_2x_3}{x_1x_3}= \pm\sqrt{\frac{(\lambda_1+\lambda_2\lambda_3)(\lambda_3+\lambda_1\lambda_2)}{(1+\lambda_1\lambda_2\lambda_3)(\lambda_2+\lambda_1\lambda_3)}},
\]
\[ x_3=\frac{x_1x_2x_3}{x_1x_2}=\pm\sqrt{\frac{(\lambda_2+\lambda_1\lambda_3)(\lambda_3+\lambda_1\lambda_2)}{(1+\lambda_1\lambda_2\lambda_3)(\lambda_1+\lambda_2\lambda_3)}}. \]
\end{proof}
As an example, we have
\begin{lemma}\label{trianglebroken} \begin{equation}\label{trianglebroke} \tikzfig{diagrams//pbct6new} \end{equation} where \[ A=\begin{pmatrix}
1 & 0 \\
0 & \pm \sqrt{3}i
\end{pmatrix},
B=\begin{pmatrix}
1 & 0 \\
0 & \mp \sqrt{\frac{1}{3}}(\sqrt{2}+i)
\end{pmatrix},
C=\begin{pmatrix}
1 & 0 \\
0 & \pm \sqrt{\frac{1}{3}}(1+\sqrt{2}i)
\end{pmatrix}.
\] \end{lemma} \begin{proof} The matrix of the left-hand side of (\ref{trianglebroke}) is \[ \begin{pmatrix}
1 & 0 \\
0 &-\sqrt{2}+i\\
0 &1-\sqrt{2}i \\
-i & 0
\end{pmatrix}. \] Then the result follows from Lemma \ref{ghznorm}. \end{proof}
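To illustrate how Lemma \ref{ghznorm} produces these matrices: reading off $1+\lambda_1\lambda_2\lambda_3=1$, $\lambda_2+\lambda_1\lambda_3=-\sqrt{2}+i$, $\lambda_1+\lambda_2\lambda_3=1-\sqrt{2}i$ and $\lambda_3+\lambda_1\lambda_2=-i$ from the matrix above, we get for example
$$x_1=\pm\sqrt{\frac{(-\sqrt{2}+i)(1-\sqrt{2}i)}{1\cdot(-i)}}=\pm\sqrt{\frac{3i}{-i}}=\pm\sqrt{3}\,i,$$
which matches the non-trivial entry of $A$; the entries of $B$ and $C$ are obtained in the same way.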
Finally, we have two more equivalent transformations of GHZ states from a loop form to a non-loop form, similar to the bialgebra rule (B2), which can be verified by matrix calculations: \begin{equation}\label{triangleghz} \tikzfig{diagrams//triangleghzel} \end{equation}
\begin{equation}\label{triangleghz} \tikzfig{diagrams//andgtsimple2forms} \end{equation}
It is interesting to point out that the first diagram in (\ref{triangleghz}) has been used by Jeandel, Perdrix, and Vilmart to construct the Toffoli gate \cite{Emmanuel}, while the third diagram in (\ref{triangleghz}) was used in (\ref{toffoligate}) in this chapter and in \cite{ngwang2} to construct the same gate. The equalities of (\ref{triangleghz}) were obtained before the upload of the paper \cite{ngwang2}; we choose the third diagram to construct the Toffoli gate because we think it makes the Toffoli gate simpler--there is no loop in the construction--and it is easy to generalise to a multiple-control Toffoli gate of the form (\ref{toffoligatemultiple}).
\section{Representation of the Toffoli gate}\label{toffolirepresent} The Toffoli gate is known as the ``controlled-controlled-not'' gate \cite{Nielsen}. The standard circuit form of the Toffoli gate is given in \cite{Vivek} as follows: \begin{center} {\includegraphics[scale=0.65]{ToffoliCircuit}}
\end{center}
We express the circuit form of the Toffoli gate in the ZX-calculus as follows: \begin{equation}\label{toffoli1eq} \tikzfig{diagrams//toffoli1} \end{equation}
In contrast to the circuit form, the Toffoli gate can be expressed by the new generator--the triangle--in a nice way (without phases). Here we follow the approach of Section 12.1.3 in \cite{Coeckebk} to construct the Toffoli gate. First, we need to specify the AND gate in the ZX-calculus. The AND gate was given by a GHZ/W-pair in \cite{Herrmann} as follows: \begin{center} {\includegraphics[scale=0.65]{ANDgtghzw}}
\end{center}
The right-hand diagram above is a GHZ/W-calculus diagram \cite{Coeckealeksmen}.
If we translate this diagram into the ZX-calculus, then we get \begin{equation}\label{andgate0} \tikzfig{diagrams//andgt}
\end{equation} By further calculation, we have \begin{equation}\label{andgate}
\tikzfig{diagrams//qand}=\tikzfig{diagrams//andgtsimplest} \end{equation} where the triangle with a $-1$ on the top-left corner is the inverse of the normal triangle.
We check the correctness of the AND gate as follows: \begin{equation}\label{andgtverify0}
\tikzfig{diagrams//andgtvery02} \end{equation}
\iffalse \begin{equation}\label{andgtverify1} \tikzfig{diagrams//andgtvery1} \end{equation}
The simplified AND gate can be verified as follows: \begin{equation}\label{andgtsimpleverify1} \tikzfig{diagrams//andgtsimpleverify} \end{equation}
However, AND gate can be simplified further in two forms in the following, we will see which form is better. \begin{equation}\label{andgtsimple2form} \tikzfig{diagrams//andgtsimple2forms} \end{equation} \fi
Note that the matrix form of the diagram \tikzfig{diagrams//andcheck} in (\ref{andgtverify0}) is $\sqrt{2}\ket{0}(\bra{0}+\bra{1})$, which means that the two rows of diagrammatic equalities in (\ref{andgtverify0}) have the same scalars under the standard basis. This proves the correctness of (\ref{andgate}) for the AND gate.
It turns out that the ``/'' box (Exercise 12.10 in \cite{Coeckebk}) has the same matrix form as the new generator triangle (up to a scalar), thus the AND gate given there is of a form similar to (\ref{andgate}).
Then the Toffoli gate can be constructed as
\begin{equation}\label{toffoligate}
\tikzfig{diagrams//toffoligtsimplest} \end{equation}
Now we prove the correctness of the Toffoli gate. The key point to remember here is that the Toffoli gate is a ``controlled-CNOT'' gate \cite{Coeckebk}. \begin{equation}\label{toffoligatecrt}
\tikzfig{diagrams//toffoligtctns002} \end{equation} \begin{equation}\label{toffoligatecrt2}
\tikzfig{diagrams//toffoligtctns012} \end{equation} The matrix form of \tikzfig{diagrams//cnotgt} is $\frac{1}{\sqrt{2}}(\ket{00}\bra{00}+\ket{01}\bra{01}+\ket{11}\bra{10}+\ket{10}\bra{11})$, which means that the final diagrams of (\ref{toffoligatecrt}) and (\ref{toffoligatecrt2}) have the same scalars under the standard basis. This proves the correctness of (\ref{toffoligate}) for the Toffoli gate.
It is easy to see that the AND gate can be generalised as follows: \begin{equation}\label{andgategeneral} \tikzfig{diagrams//andgategeneralise} \end{equation}
Therefore the multiple control Toffoli gate (up to a scalar) can be represented as: \begin{equation}\label{toffoligatemultiple} \tikzfig{diagrams//toffoligtmultiple} \end{equation}
We end this section by pointing out that it is not easy to find an efficient way to prove in the ZX-calculus the equivalence of the two forms of the Toffoli gate (\ref{toffoli1eq}) and (\ref{toffoligate}), although this can, in principle, be done by rewriting them into the normal forms induced from the ZW-calculus.
\section{Equivalence-checking for the UMA gate} In this section, we consider applying the ZX-calculus to equivalence-checking \cite{Markov} for two special quantum circuits.
These two circuits are two forms of the so-called ``UnMajority and Add'' (UMA) quantum gate given in \cite{Cuccaro} as follows:
{\includegraphics[scale=0.65]{UMA2}}
We check the equivalence of the two kinds of UMA gate diagrammatically in terms of the representation (\ref{toffoligate}) of the Toffoli gate. The translation from the UMA gate into the ZX-calculus is direct: the Toffoli gate in UMA is translated to the diagram (\ref{toffoligate}), the CNOT gate in UMA to the CNOT in ZX, and the NOT gate in UMA to the red $\pi$ gate in the ZX-calculus. \begin{proposition} The two versions of the UMA gate can be proved to be equal in the ZX-calculus: \begin{equation}\label{umagt2form} \tikzfig{diagrams//umagt2forms} \end{equation}
\end{proposition}
\begin{proof} \begin{equation}\label{umagt2formprf} \tikzfig{diagrams//2umaequivproof} \end{equation}
where for the second equality we used lemma \ref{umapartlm}, and for the fourth equality we used the bialgebra rule. \end{proof}
\begin{lemma}\label{umapartlm} \begin{equation}\label{umapart} \tikzfig{diagrams//umaparttrans} \end{equation} \end{lemma}
\begin{proof} \begin{equation}\label{umagatedr} \tikzfig{diagrams//UMAgatederive} \end{equation} where for the third equality we used the rule (TR11), and for the last equality we made the following transformation, using the commutativity of the red and green spiders as well as the naturality of diagrams (free moves that do not change any connections): \begin{equation}\label{umagatedr2} \tikzfig{diagrams//UMAgatederive2} \end{equation}
\end{proof}
\chapter{Conclusions and further work} In this thesis, we first show that the ZX-calculus is complete for the whole of pure qubit QM, with the aid of the completeness of the ZW-calculus for the whole of qubit QM. Thus qubit quantum computing can, in principle, be done purely diagrammatically.
Based on this universal completeness, we directly obtain a complete axiomatisation of the ZX-calculus for Clifford+T quantum mechanics by restricting the ring of complex numbers to the subring corresponding to the Clifford+T fragment, resting on the completeness theorem of the ZW-calculus over an arbitrary commutative ring.
Furthermore, we prove the completeness of the ZX-calculus (with just 9 rules) for 2-qubit Clifford+T circuits by verifying the complete set of 17 circuit relations in diagrammatic rewriting. This is an important step towards efficient simplification of general n-qubit Clifford+T circuits.
In addition to completeness results within the qubit-related formalism, we extend the completeness of the ZX-calculus for qubit stabilizer quantum mechanics to the qutrit stabilizer system. This generalisation is far from trivial: for example, the local Clifford group for qubits has only 24 elements, while the local Clifford group for qutrits has 216 elements.
Finally, we show by some examples the application of the ZX-calculus to the proof of the generalised supplementarity, the representation of entanglement classification and of the Toffoli gate, as well as equivalence-checking for the UMA gate.
The results of this chapter have been published in the paper \cite{coeckewang}, coauthored with Bob Coecke.
\section*{Further work}
An obvious next step is to generalise the completeness of the ZX-calculus for qubits to that for qudits of arbitrary dimension $d$. One possible way to do this is to establish a completeness result for the qudit version of the ZW-calculus. Though there is a ZW-calculus universal for qudit quantum mechanics \cite{amar}, it is not clear whether the completeness result for qudits can ultimately be obtained. To be honest, we have tried very hard in this direction, but have not succeeded in the end. Another way to obtain the completeness of the qudit ZX-calculus is to prove it directly, without any resort to the completeness of the ZW-calculus. There is already a proof of completeness of the qubit ZX-calculus independent of the completeness of the ZW-calculus \cite{JPV3n}; however, we do not know how to generalise this result from qubits to qudits. Furthermore, we would like to mention that a normal form of qubit ZX diagrams can be obtained if we resort to elementary transformations represented in the ZX-calculus and the map-state duality, and thus completeness could follow. But no elegant ZX rewriting rules have been found in this way to date, so continued work is still required, including the generalisation to the qudit case.
The standard interpretation of the ZX-calculus maps diagrams to matrices in a category whose objects are tensor powers of a fixed finite-dimensional Hilbert space. But in quantum computing, we often deal with the category $\mathbf{FdHilb}$, whose objects can be Hilbert spaces of arbitrary finite dimension. So it is natural to construct a ZX-calculus that can deal with quantum computing for various dimensions in the same framework. In fact, we could obtain such a ZX-calculus by indicating the dimension of the type of each system and introducing the isomorphism between a composite space and its component spaces. Once this mixed version of the ZX-calculus is obtained, one could fill in all the boxes that one usually encounters in categorical quantum mechanics with these kinds of ZX diagrams. However, the completeness problem for this mixed version of the ZX-calculus remains open. One could even try to generalise the ZX-calculus to the case of infinite dimension.
Since we now have a complete set of ZX rules for Clifford+T quantum mechanics, it would be of great interest to develop efficient strategies in the ZX-calculus for optimising Clifford+T quantum circuits. One simple idea in this direction is first to take the ZX-calculus as a learning machine, learning existing excellent strategies and algorithms by encoding everything in the language of the ZX-calculus, and then to try to improve the results obtained.
As we demonstrated in Section \ref{toffolirepresent}, the Toffoli gate has a simple representation in the ZX-calculus. It would therefore be interesting to apply this version of the Toffoli gate to the field of Toffoli-based quantum circuits, for example, quantum Boolean circuits \cite{Iwamaqbc}. Meanwhile, Toffoli and Hadamard are universal for quantum computation \cite{Shiyao}, so one could employ the ZX version of the Toffoli gate to explore Toffoli+H quantum circuits instead of Clifford+T circuits.
The completeness of the ZX-calculus for 2-qubit Clifford+T circuits depends on the proof of the complete set of relations given in \cite{ptbian}. It is unknown whether one can derive a direct proof from the universal completeness of the ZX-calculus. Furthermore, can we obtain the completeness of the ZX-calculus for 3-qubit Clifford+T circuits, or even for arbitrary $n$-qubit Clifford+T circuits?
As for the stabilizer formalism, it is natural to ask whether there is a general proof of completeness of the ZX-calculus for arbitrary-dimensional (qudit) stabilizer quantum mechanics. As in the qubit case, we cannot expect to use a triangle as a generator to obtain a complete axiomatisation of qudit stabilizer quantum mechanics, so the techniques from qudit graph states may still be needed.
Another question, suggested by Bob Coecke, is to embed the qubit ZX-calculus into the qutrit ZX-calculus, which is not obvious since a representation of some special non-stabilizer phase is involved.
Lastly, it is also interesting to incorporate the rules of the universally complete ZX-calculus into the automated graph rewriting system Quantomatic \cite{Quanto}.
\addcontentsline{toc}{chapter}{Bibliography}
\end{document} | arXiv |
\begin{document}
\title{\bf Wall divisors on irreducible symplectic orbifolds of Nikulin-type}
\author{Gr\'egoire \textsc{Menet}; Ulrike \textsc{Riess}}
\maketitle
\begin{abstract}
We determine the wall divisors on irreducible symplectic orbifolds which are deformation
equivalent to a special type of examples, called Nikulin orbifolds. The Nikulin orbifolds are obtained as partial resolutions in codimension 2 of a quotient by a symplectic involution of a Hilbert scheme of 2 points on a K3 surface.
This builds on the previous article \cite{Menet-Riess-20} in which the theory of wall divisors was
generalized to orbifold singularities. \end{abstract}
\section{Introduction} \subsection{Motivations and main results} During the last few years, many efforts have been made to extend the theory of smooth compact varieties with trivial first Chern class to a framework of varieties admitting some singularities.
Notably, let us cite the generalization of the Bogomolov decomposition theorem
\cite{Bakker}. One of the motivations for such generalizations is given by the minimal model program in which certain singular varieties appear naturally.
More specifically, in the theory of irreducible symplectic varieties, many generalizations can be mentioned. One of the most important concerns the global Torelli theorem, which allows one to obtain geometric information about the variety from its period (\cite{Bakker-Lehn-GlobalTorelli}, \cite{Menet-2020} and \cite{Menet-Riess-20}).
In this paper, we consider a specific kind of singularities: quotient singularities. A complex analytic space with only quotient singularities is called an \emph{orbifold}. An orbifold $X$ is called \emph{irreducible holomorphically symplectic} if $X\smallsetminus \Sing X$ is simply connected, admits a unique (up to a scalar multiple), non-degenerate holomorphic 2-form, and $\codim \Sing X\geq 4$ (Definition \ref{def}). The framework of irreducible symplectic orbifolds appears to be very favorable. In particular, general results about the Kähler cone have been generalized for the first time in this context (see \cite{Menet-Riess-20}). This is particularly important, since knowledge of the Kähler cone is needed to be able to apply the global Torelli theorem (see Theorem \ref{mainHTTO}) effectively. The key tool used to study the Kähler cone of irreducible symplectic orbifolds is the notion of wall divisors (originally introduced for the smooth case in \cite{Mongardi13}). \begin{defi}[{\cite[Definition 4.5]{Menet-Riess-20}}] Let $X$ be an irreducible symplectic orbifold and let $D\in\Pic(X)$. Then $D$ is called a \emph{wall divisor} if $q(D)<0$ and $g(D^{\bot})\cap \BK_X =\emptyset$ for all $g\in \Mon^2_{\Hdg}(X)$, where $q$ denotes the Beauville--Bogomolov form on $H^2(X,{\mathbb Z})$.
\end{defi}
In particular, we recall that the Kähler classes can be characterized by their intersections with the wall divisors (see Corollary \ref{cor:desrK}). The definitions of the birational Kähler cone $\BK_X$ and the Hodge monodromy group $\Mon^2_{\Hdg}(X)$ are recalled in Section \ref{Kählersection} and Definition \ref{transp} respectively.
A very practical feature of wall divisors is their deformation invariance. More precisely, let $\Lambda$ be a lattice of signature $(3,\rk\Lambda-3)$ and $(X,\varphi)$ a marked irreducible symplectic orbifold with $\varphi:H^2(X,\Z)\simeq \Lambda$. Then there exists a set $\mathscr{W}_{\Lambda}\subset \Lambda$ such that for all $(Y,\psi)$ deformation equivalent to $X$, the set $\psi^{-1}(\mathscr{W}_{\Lambda})\cap H^{1,1}(Y,\Z)$ is the set of wall divisors of $Y$ (see Theorem \ref{wall}). We call the set $\mathscr{W}_{\Lambda}$ the \emph{set of wall divisors of the deformation class of $X$}.
In this paper, we are going to provide the first description of the wall divisors of a deformation class of singular irreducible symplectic varieties. The most ``popular'' singular irreducible symplectic variety in the literature (see \cite[Section 13, table 1, I2]{Fujiki-1983}, \cite{Marku-Tikho}, \cite{Menet-2014}, \cite{Menet-2015}, \cite[Section 3.2 and 3.3]{Menet-Riess-20}, \cite{Camere}), is denoted by $M'$ and recently named Nikulin orbifold in \cite{Camere}; it is obtained as follows. Let $X$ be an irreducible symplectic manifold of $K3^{[2]}$-type and $\iota$ a symplectic involution on $X$. By \cite[Theorem 4.1]{Mongardi-2012}, $\iota$ has 28 fixed points and a fixed K3 surface $\Sigma$. We obtain $M'$ as the blow-up of $X/\iota$ in the image of $\Sigma$ (see Example \ref{exem}); we denote by $\Sigma'$ the exceptional divisor.
The orbifolds deformation equivalent to this variety will be called \emph{orbifolds of Nikulin-type}.
We also recall that the Beauville--Bogomolov lattice of the orbifolds of Nikulin-type is $U(2)^3\oplus E_8(-1)\oplus(-2)^2$ (see Theorem \ref{BBform}). \begin{thm}\label{main} Let $\Lambda:=U(2)^3\oplus E_8(-1)\oplus(-2)^2$.
The set $\mathscr{W}_{M'}$ of wall divisors of Nikulin-type orbifolds is given by:
$$\mathscr{W}_{M'}=\left\{D\in\Lambda\left|
\begin{array}{lll}
D^2=-2, & {\rm div}(D)=1, & \\
D^2=-4, & {\rm div}(D)=2, &\\
D^2=-6, & {\rm div}(D)=2, & \text{and}\\
D^2=-12, & {\rm div}(D)=2 & \text{if\ }D_{U(2)^3}\text{\ is divisible by\ }2
\end{array}
\right.\right\},$$
where $D_{U(2)^3}$ is the projection of $D$ to the $U(2)^3$-part of the lattice. \end{thm} The divisibility ${\rm div}$ is defined in Section \ref{notation} below.
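For instance, fixing a standard basis $e,f$ of one of the $U(2)$ summands (so that $e^2=f^2=0$ and $e\cdot f=2$), the primitive class $D=e-f$ satisfies $D^2=-4$ and ${\rm div}(D)=2$, and therefore belongs to $\mathscr{W}_{M'}$.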
\begin{remark}
Note that if one chooses an automorphism $\varphi$ of the lattice $\Lambda$, the conditions that $D_{U(2)^3}$
and $\varphi(D)_{U(2)^3}$ are divisible by 2 are equivalent for elements with $D^2=-12$ and ${\rm div}(D)=2$.
\end{remark}
Combined with the global Torelli theorem (Theorem \ref{mainHTTO}), the previous theorem can be used for studying automorphisms on orbifolds of Nikulin-type. As an example of an application, we construct a symplectic involution on orbifolds of Nikulin-type which is not induced by a symplectic involution on a Hilbert scheme of 2 points on a K3 surface (\emph{non-standard involution}) (see Section \ref{Application}). \begin{prop}\label{main2} Let $X$ be an irreducible symplectic orbifold of Nikulin-type such that there exists $D\in\Pic (X)$ with $q(D)=-2$ and ${\rm div}(D)=2$. Then, there exists an irreducible symplectic orbifold $Y$ bimeromorphic to $X$ and a symplectic involution $\iota$ on $Y$ such that: $$H^2(Y,\Z)^{\iota}\simeq U(2)^3\oplus E_8(-1)\oplus (-2)\ \text{and}\ H^2(Y,\Z)^{\iota\bot}\simeq (-2).$$ \end{prop} The proof of this proposition is given in Section \ref{Application}.
For the proof of Theorem \ref{main} we need to show that the following two operators are monodromy operators.
The reflections $R_D$ on the second cohomology group are defined in Section \ref{notation} below. \begin{prop}[{Compare Corollaries \ref{Sigma'}, \ref{cor:Chiara}, and \ref{Lastmonodromy}}]\label{MonoIntroduction} \
\begin{itemize}
\item[(i)]
The reflection $R_{\Sigma'}$ is a monodromy operator of $M'$. \item[(ii)] More generally, let $X$ be an orbifold of Nikulin-type and $\alpha\in H^2(X,\Z)$ which verifies one of the two numerical conditions:
$$\left\{\begin{array}{ll} q(\alpha)=-2 & \text{and}\ {\rm div}(\alpha)=2,\ \text{or}\\ q(\alpha)=-4 & \text{and}\ {\rm div}(\alpha)=2. \end{array}\right.$$ Then $R_{\alpha}$ is a monodromy operator. \end{itemize} \end{prop} \begin{rmk}
Note that Proposition \ref{MonoIntroduction} (i) can also be obtained from the recent result of Lehn--Mongardi--Pacienza \cite[Theorem 3.10]{Lehn2}. \end{rmk}
\subsection{Organization of the paper and sketch of the proof} The paper is organized as follows. In Section \ref{reminders}, we provide some reminders related to irreducible symplectic orbifolds, especially from \cite{Menet-Riess-20}, where the theory of the Kähler cone has been developed. In Section \ref{M'section0}, we provide some reminders about the orbifold $M'$, especially from \cite{Menet-2015}; moreover, we investigate the monodromy operators of $M'$ inherited from the ones on the Hilbert schemes of two points on K3 surfaces. In Section \ref{genericM'0}, we determine the wall divisors of an orbifold $M'$ obtained from a very general K3 surface endowed with a symplectic involution $(S,i)$. As a corollary, we can prove Proposition \ref{MonoIntroduction} (i). Our main tool to determine wall divisors is Proposition \ref{extremalray}, which says that the dual divisor of an extremal ray of the cone of classes of effective curves (the \emph{Mori cone}) is a wall divisor. The proof of Theorem \ref{main} is then divided into two parts. The first part (Section \ref{extremalcurves}) consists in finding enough examples of extremal rays of Mori cones in several different $M'$-orbifolds; the second part (Section \ref{sec:monodromy-orbits}) consists in using our knowledge of the monodromy group of $M'$ to show that we have found all possible wall divisors. Finally, Section \ref{Application} is devoted to the proof of Proposition \ref{main2}.
\subsection{Notation and convention}\label{notation}
\begin{itemize}
\item Let $\Lambda$ be a lattice of signature $(3,\rk \Lambda -3)$. Let $x\in \Lambda$ such that $x^2< 0$. We define the \emph{reflection} $R_x$ associated to $x$ by: $$R_x(\lambda)=\lambda-\frac{2\lambda\cdot x}{x^2}x,$$ for all $\lambda\in\Lambda$. \item In $\Lambda$, we define the \emph{divisibility} of an element $x\in\Lambda$ as the integer $a\in \N^*$ such that $x\cdot \Lambda =a\Z$. We denote by ${\rm div}(x)$ the divisibility of $x$.
\item
Let $X$ be a manifold and $C\subset X$ a curve. We denote by $\left[C\right]_{X}$ the class in $X$ of the curve $C$. \end{itemize} ~\\ \textbf{Acknowledgements}:
We are very grateful to the Second Japanese-European Symposium on Symplectic Varieties and Moduli Spaces where our collaboration was initiated. The first author has been financed by the ERC-ALKAGE grant No. 670846 and by the PRCI SMAGP (ANR-20-CE40-0026-01).
The second author is a member of the Institute for Theoretical Studies at ETH Zürich (supported by Dr.~Max R\"ossler, the Walter Haefner Foundation and the ETH Z\"urich Foundation). \section{Reminders on irreducible symplectic orbifolds}\label{reminders} \subsection{Definition}\label{basicdef} In this paper an \emph{orbifold} is a complex space with only quotient singularities. \begin{defi}\label{def} An irreducible symplectic orbifold (or hyperkähler orbifold) is a compact Kähler orbifold $X$ such that: \begin{itemize} \item[(i)] $\codim\Sing X\geq4$, \item[(ii)] $H^{2,0}(X)=\C \sigma$ with $\sigma$ non-degenerate on $X_{\reg}:=X\smallsetminus \Sing X$, \item[(iii)] $\pi_1(X_{\reg})=0$. \end{itemize} \end{defi} We refer to \cite[Section 6]{Campana-2004}, \cite[Section 3.1]{Menet-2020}, \cite[Section 3.1]{Fu-Menet} and \cite[Section 2.1]{Menet-Riess-20} for discussions about this definition. \begin{ex}[{\cite[Section 3.2]{Menet-2020}}]\label{exem} Let $X$ be a hyperkähler manifold deformation equivalent to a Hilbert scheme of 2 points on a K3 surface and $\iota$ a symplectic involution on $X$. By \cite[Theorem 4.1]{Mongardi-2012}, $\iota$ has 28 fixed points and a fixed K3 surface $\Sigma$.
We denote by $M'$ the blow-up of $X/\iota$ in the image of $\Sigma$. The orbifold $M'$ is irreducible symplectic (see \cite[Proposition 3.8]{Menet-2020}). \end{ex} \begin{defi}\label{Nikulin} An orbifold $M'$ constructed as before is called a \emph{Nikulin orbifold}. An irreducible symplectic orbifold deformation equivalent to a Nikulin orbifold is called an orbifold of \emph{Nikulin-type}. \end{defi} \subsection{Moduli space of marked irreducible symplectic orbifolds}\label{per} Let $X$ be an irreducible symplectic orbifold. As explained in \cite[Section 3.4]{Menet-2020}, $H^2(X,\Z)$ is endowed with a quadratic form of signature $(3,b_2(X)-3)$ called the Beauville--Bogomolov form and denoted by $q_{X}$ (the bilinear associated form is denoted by $(\cdot,\cdot)_{q_X}$ or $(\cdot,\cdot)_{q}$ when there is no ambiguity). Let $\Lambda$ be a lattice of signature $(3,\rk \Lambda-3)$. We denote $\Lambda_{\mathbb{K}}:=\Lambda\otimes \mathbb{K}$ for $\mathbb{K}$ a field. A \emph{marking} of $X$ is an isometry $\varphi: H^{2}(X,\Z)\rightarrow \Lambda$. Let $\mathcal{M}_{\Lambda}$ be the set of isomorphism classes of marked irreducible symplectic orbifolds $(X,\varphi)$ with $\varphi:H^2(X,\Z)\rightarrow\Lambda$. As explained in \cite[Section 3.5]{Menet-2020}, this set can be endowed with a non-separated complex structure such that the \emph{period map}: $$\xymatrix@[email protected]{\ \ \ \ \ \ \ \ \mathscr{P}:& \mathcal{M}_{\Lambda}\ar[r]& \mathcal{D}_{\Lambda}\\ &(X,\varphi)\ar[r]&\varphi(\sigma_X)}$$
is a local isomorphism with $\mathcal{D}_{\Lambda}:=\left\{\left.\alpha\in \mathbb{P}(\Lambda_{\C})\ \right|\ \alpha^2=0,\ \alpha\cdot\overline{\alpha}>0\right\}$. The complex manifold $\mathcal{M}_{\Lambda}$ is called \emph{the moduli space of marked irreducible symplectic orbifolds of Beauville--Bogomolov lattice $\Lambda$}.
Moreover there exists a \emph{Hausdorff reduction} of $\mathcal{M}_{\Lambda}$. \begin{prop}[\cite{Menet-2020}, Corollary 3.25] There exists a Hausdorff reduction $\overline{\mathcal{M}_{\Lambda}}$ of $\mathcal{M}_{\Lambda}$ such that the period map $\mathscr{P}$ factorizes through $\overline{\mathcal{M}_{\Lambda}}$: $$\xymatrix{\mathcal{M}_{\Lambda}\ar@/^1pc/[rr]^{\mathscr{P}}\ar@{->>}[r]& \overline{\mathcal{M}_{\Lambda}}\ar[r]& \mathcal{D}_{\Lambda}.}$$ Moreover, two points in $\mathcal{M}_{\Lambda}$ map to the same point in $\overline{\mathcal{M}_{\Lambda}}$ if and only if they are non-separated in $\mathcal{M}_{\Lambda}$. \end{prop}
\subsection{Global Torelli theorems} \begin{thm}[\cite{Menet-2020}, Theorem 1.1]\label{mainGTTO} Let $\Lambda$ be a lattice of signature $(3,b-3)$, with $b\geq3$. Assume that $\mathcal{M}_{\Lambda}\neq\emptyset$ and let $\mathcal{M}_{\Lambda}^{o}$ be a connected component of $\mathcal{M}_{\Lambda}$. Then the period map: $$\mathscr{P}: \overline{\mathcal{M}_{\Lambda}}^{o}\rightarrow \mathcal{D}_{\Lambda}$$ is an isomorphism. \end{thm} There also exists a Hodge version of this theorem, which we state in the following. \begin{defi}\label{transp} Let $X_1$ and $X_2$ be two irreducible symplectic orbifolds. An isometry $f:H^{2}(X_{1},\Z)\rightarrow H^{2}(X_{2},\Z)$ is called a \emph{parallel transport operator} if there exists a deformation $s:\mathcal{X}\rightarrow B$, two points $b_{i}\in B$, two isomorphisms $\psi_{i}:X_{i}\rightarrow \mathcal{X}_{b_{i}}$, $i=1,2$ and a continuous path $\gamma:\left[0,1\right]\rightarrow B$ with $\gamma(0)=b_{1}$, $\gamma(1)=b_{2}$ and such that the parallel transport in the local system $Rs_{*}\Z$ along $\gamma$ induces the morphism $\psi_{2*}\circ f\circ\psi_{1}^{*}: H^{2}(\mathcal{X}_{b_{1}},\Z)\rightarrow H^{2}(\mathcal{X}_{b_{2}},\Z)$.
Let $X$ be an irreducible symplectic orbifold. If $f:H^{2}(X,\Z)\rightarrow H^{2}(X,\Z)$ is a parallel transport operator from $X$ to $X$ itself, $f$ is called a \emph{monodromy operator}. If moreover $f$ sends a holomorphic 2-form to a holomorphic 2-form, $f$ is called a \emph{Hodge monodromy operator}. We denote by $\Mon^2(X)$ the group of monodromy operators and by $\Mon^2_{\Hdg}(X)$ the group of Hodge monodromy operators. \end{defi} \begin{rmk} If $(X,\varphi)$ and $(X',\varphi')$ are in the same connected component $\mathcal{M}_{\Lambda}^{o}$ of $\mathcal{M}_{\Lambda}$, then $\varphi^{-1}\circ\varphi'$ is a parallel transport operator. \end{rmk} \begin{thm}[\cite{Menet-Riess-20}, Theorem 1.1]\label{mainHTTO} Let $X$ and $X'$ be two irreducible symplectic orbifolds. \begin{itemize} \item[(i)] The orbifolds $X$ and $X'$ are bimeromorphic if and only if there exists a parallel transport operator $f:H^2(X,\Z)\rightarrow H^2(X',\Z)$ which is an isometry of integral Hodge structures. \item[(ii)] Let $f:H^2(X,\Z)\rightarrow H^2(X',\Z)$ be a parallel transport operator, which is an isometry of integral Hodge structures. There exists an isomorphism $\widetilde{f}:X\rightarrow X'$ such that $f=\widetilde{f}_*$ if and only if $f$ maps some K\"ahler class on $X$ to a K\"ahler class on $X'$. \end{itemize} \end{thm} \subsection{Twistor space}\label{Twist}
Let $\Lambda$ be a lattice of signature $(3,\rk\Lambda-3)$. We denote by ``$\cdot$'' its bilinear form. A \emph{positive three-space} is a subspace $W\subset \Lambda\otimes\R$ such that $\cdot_{|W}$ is positive definite. For any positive three-space, we define the associated \emph{twistor line} $T_W\subset \mathcal{D}_{\Lambda}$ by: $$T_W:=\mathcal{D}_{\Lambda}\cap \mathbb{P}(W\otimes\C).$$ A twistor line is called \emph{generic} if $W^{\bot}\cap \Lambda=0$. A point $\alpha\in \mathcal{D}_{\Lambda}$ is called \emph{very general} if $\alpha^{\bot}\cap \Lambda=0$. \begin{thm}[\cite{Menet-2020}, Theorem 5.4]\label{Twistor}
Let $(X,\varphi)$ be a marked irreducible symplectic orbifold with $\varphi:H^2(X,\Z)\rightarrow \Lambda$. Let $\alpha$ be a Kähler class on $X$, and $W_\alpha\coloneqq\Vect_{\R}(\varphi(\alpha),$ $\varphi(\Rea \sigma_X),\varphi(\Ima \sigma_X))$. Then: \begin{itemize} \item[(i)] There exists a metric $g$ and three complex structures (see \cite[Section 5.1]{Menet-2020} for the definition) $I$, $J$ and $K$ in quaternionic relation on $X$ such that: $$\alpha= \left[g(\cdot,I\cdot)\right]\ \text{and}\ g(\cdot,J\cdot)+ig(\cdot,K\cdot)\in H^{0,2}(X).$$ \item[(ii)] There exists a deformation of $X$: $$\mathscr{X}\rightarrow T(\alpha)\simeq\mathbb{P}^1,$$ such that the period map $\mathscr{P}:T(\alpha)\rightarrow T_{W_\alpha}$ provides an isomorphism. Moreover, for each $s=(a,b,c)\in \mathbb{P}^1$, the associated fiber $\mathscr{X}_s$ is an orbifold diffeomorphic to $X$ endowed with the complex structure $aI+bJ+cK$. \end{itemize} \end{thm} \begin{rmk} Note that if the irreducible symplectic orbifold $X$ of the previous theorem is endowed with a marking then all the fibers of $\mathscr{X}\rightarrow T(\alpha)$ are naturally endowed with a marking. Therefore, the period map $\mathscr{P}:T(\alpha)\rightarrow T_{W_\alpha}$ is well defined.
\end{rmk} \begin{rmk}\label{twistorinvo} Let $X$ be an irreducible symplectic orbifold endowed with a finite symplectic automorphism group $G$ (i.e. $G$ fixes the holomorphic 2-form of $X$). Let $\alpha$ be a Kähler class of $X$ fixed by $G$ and $\mathscr{X}\rightarrow T(\alpha)$ the associated twistor space. Then $G$ extends to an automorphism group on $\mathscr{X}$ and restricts on each fiber to a symplectic automorphism group. Indeed, since $G$ is symplectic, $G$ fixes all the complex structures $I$, $J$, $K$. \end{rmk}
We provide the following lemma which will be used several times in this paper. It is a generalization of \cite[Lemma 2.17]{Menet-Riess-20}. \begin{lemme}\label{lem:connected+}
Let $\Lambda'\subseteq \Lambda$ be a sublattice of rank $b'$, which also has signature $(3,b'-3)$.
Consider the inclusion of period domains $ {\mathcal D}_{\Lambda'} \subseteq{\mathcal D}_\Lambda$.
Suppose that a very general point $(\widetilde{X},\widetilde{\varphi})\in {\mathscr M}_\Lambda \cap
{\mathscr P}^{-1}{\mathcal D}_{\Lambda'}$
(i.e.~$(\widetilde{X},\widetilde{\varphi})$ with $\widetilde{\varphi}(\Pic(\widetilde{X}))^\perp=\Lambda'$) satisfies that
${\mathcal K}_{\widetilde{X}}={\mathcal C}_{\widetilde{X}}$.
Let $(X,\varphi)$ and $(Y,\psi)\in \mathcal{M}_{\Lambda}^{\circ}$ be any two marked irreducible symplectic
orbifolds which satisfy ${\mathscr P}(X,\varphi)\in {\mathcal D}_{\Lambda'}$ and ${\mathscr P}(Y,\psi)\in {\mathcal D}_{\Lambda'}$ and for which $\varphi({\mathcal K}_X)\cap
\Lambda' \neq \emptyset$ and $\psi({\mathcal K}_Y)\cap \Lambda'\neq \emptyset$. Then $(X,\varphi)$ and $(Y,\psi)$ can
be connected by a sequence of generic twistor spaces whose image under the period domain is contained in ${\mathcal D}_{\Lambda'}$.
That is: there exists a sequence of generic twistor spaces $f_i:\mathscr{X}_i\rightarrow \mathbb{P}^1\simeq T(\alpha_i)$ with $(x_i,x_{i+1})\in\mathbb{P}^1\times\mathbb{P}^1$, $i\in \left\{0,...,k\right\}$, $k\in \mathbb{N}$ such that:
\begin{itemize}
\item $f^{-1}_0(x_0)\simeq (X,\varphi),\ f^{-1}_i(x_{i+1})\simeq f^{-1}_{i+1}(x_{i+1})\ \text{and}\ f^{-1}_{k}(x_{k+1})\simeq (Y,\psi),$ for all $0\leq i\leq k-1$. \item $\mathscr{P}(T(\alpha_i))\subset{\mathcal D}_{\Lambda'}$ for all $0\leq i\leq k+1$. \end{itemize} \end{lemme} \begin{proof} We split the proof in two steps.
\textbf{First case}: We assume that $(X,\varphi)$ and $(Y,\psi)$ are very general in ${\mathscr M}_\Lambda\cap
{\mathscr P}^{-1}{\mathcal D}_{\Lambda'}$ (hence ${\mathcal C}_{X}={\mathcal K}_{X}$ and ${\mathcal C}_{Y}={\mathcal K}_{Y}$). By \cite[Proposition 3.7]{Huybrechts12} the period domain ${\mathcal D}_{\Lambda'}$ is connected by generic twistor lines. Note that the proof of \cite[Proposition 3.7]{Huybrechts12} in fact shows that the twistor lines can be chosen in such a way that they intersect in very general points of ${\mathcal D}_{\Lambda'}$. In particular, we can connect $\mathscr{P}(Y,\psi)$ and $\mathscr{P}(X,\varphi)$ by such generic twistor lines in ${\mathcal D}_{\Lambda'}$.
Since for a very general element $(\widetilde{X},\widetilde{\varphi})$ of ${\mathscr M}_\Lambda\cap
{\mathscr P}^{-1}{\mathcal D}_{\Lambda'}$ we know
${\mathcal K}_{\widetilde{X}}={\mathcal C}_{\widetilde{X}}$, Theorem \ref{Twistor} shows that all these twistor line can be lifted to twistor spaces. Moreover, by Theorem \ref{mainHTTO} (ii) the period map ${\mathscr P}$ is injective on the set of points $(\widetilde{X},\widetilde{\varphi})\in{\mathscr M}_\Lambda$ such that ${\mathcal K}_{\widetilde{X}}={\mathcal C}_{\widetilde{X}}$. Therefore, all these twistor spaces intersect and connect $(X,\varphi)$ to $(Y,\psi)$.
\textbf{Second case}: If $(X,\varphi)$ is not very general, we consider a very general Kähler class $\alpha\in{\mathcal K}_X \cap \Lambda'_{\R}$, which is possible since ${\mathcal K}_X \cap \Lambda'_{\R}\neq\emptyset$. Then the associated twistor space $\mathscr{X}\rightarrow T(\alpha)$ has a fiber which is a very general marked irreducible symplectic orbifold in ${\mathscr M}_\Lambda\cap
{\mathscr P}^{-1}{\mathcal D}_{\Lambda'}$. Hence we are back to the first case.
\end{proof} \subsection{Kähler cone} \label{Kählersection}
Let $X$ be an irreducible symplectic orbifold of dimension $n$. We denote by $\mathcal{K}_X$ the Kähler cone of $X$. We denote by $\mathcal{C}_X$ the connected component of $\left\{\left.\alpha\in H^{1,1}(X,\R)\right|\ q_X(\alpha)>0\right\}$ which contains $\mathcal{K}_X$; it is called the \emph{positive cone}. Let $\BK_X$ be the \emph{birational Kähler cone} which is the union $\cup f^{*}\mathcal{K}_{X'}$ for $f$ running through all birational maps between $X$ and any irreducible symplectic orbifold $X'$. In \cite[Definition 4.5]{Menet-Riess-20}, we define the wall divisors in the same way as Mongardi in \cite[Definition 1.2]{Mongardi13}. \begin{defi} Let $X$ be an irreducible symplectic orbifold and let $D\in\Pic(X)$. Then $D$ is called a \emph{wall divisor} if $q(D)<0$ and $g(D^{\bot})\cap \BK_X =\emptyset$, for all $g\in \Mon^2_{\Hdg}(X)$. \end{defi} We denote by $\mathscr{W}_X$ the set of primitive wall divisors of $X$ (non divisible in $\Pic X$). By \cite[Corollary 4.8]{Menet-Riess-20}, we have the following theorem. \begin{thm}\label{wall} Let $\Lambda$ be a lattice of signature $(3,\rk \Lambda -3)$ and $\mathcal{M}_{\Lambda}^{o}$ a connected component of the associated moduli space of marked irreducible symplectic orbifolds. Then there exists a set $\mathscr{W}_\Lambda\subset \Lambda$ such that for all $(X,\varphi)\in \mathcal{M}_{\Lambda}^{o}$:
$$\mathscr{W}_X=\varphi^{-1}(\mathscr{W}_\Lambda)\cap H^{1,1}(X,\Z).$$ \end{thm} \begin{defi} The set $\mathscr{W}_\Lambda$ will be called the \emph{set of wall divisor} of the deformation class of $X$. \end{defi} \begin{ex}[\cite{Mongardi13}, Proposition 2.12 and \cite{HuybrechtsK3}, Theorem 5.2 Chapter 8]\label{examplewall} If $\mathcal{M}_{\Lambda}^{o}$ is a connected component of the moduli space of marked K3 surface, then:
$$\mathscr{W}_\Lambda=\left\{\left.D\in \Lambda\ \right|\ D^2=-2\right\}.$$
If $\mathcal{M}_{\Lambda}^{o}$ is a connected component of the moduli space of marked irreducible symplectic manifolds equivalent by deformation to a Hilbert scheme of 2 points on a K3 surface, then: $$\mathscr{W}_\Lambda=\left\{\left.D\in \Lambda\ \right|\ D^2=-2\right\}\cup\left\{\left.D\in \Lambda\ \right|\ D^2=-10\ \text{and}\ D\cdot\Lambda\subset2\Z \right\}.$$ \end{ex} \begin{rmk}\label{dualclass} Let $\beta\in H^{2n-1,2n-1}(X,\Q)$. We can associate to $\beta$ its \emph{dual class} $\beta^{\vee}\in H^{1,1}(X,\Q)$ defined as follows. By \cite[Corollary 2.7]{Menet-2020} and since the Beauville--Bogomolov form is integral and non-degenerated (see \cite[Theorem 3.17]{Menet-2020}), we can find $\beta^{\vee}\in H^{2}(X,\Q)$ such that for all $\alpha\in H^2(X,\C)$:
$$(\alpha,\beta^{\vee})_q=\alpha\cdot \beta,$$ where the dot on the right hand side is the cup product. Since $(\beta^{\vee},\sigma_X)_q=\beta\cdot \sigma_X=0$, we have $\beta^{\vee}\in H^{1,1}(X,\Q)$. \end{rmk}
We also define the \emph{Mori cone} as the cone of classes of effective curves in $H^{2n-1,2n-1}(X,\Z)$. \begin{prop}[\cite{Menet-Riess-20}, Proposition 4.12]\label{extremalray} Let $X$ be an irreducible symplectic orbifold. Let $R$ be an extremal ray of the Mori cone of $X$ of negative self-intersection. Then any class $D\in \Q R^{\vee}$ is a wall divisor. \end{prop} This induces a criterion for Kähler classes. \begin{defi}Let $X$ be an irreducible symplectic orbifold endowed with
a Kähler class $\omega$.
Define ${\mathscr W}_X^+\coloneqq \{D\in {\mathscr W}_X\,|\, (D,\omega)_q>0\}$, i.e.~for every wall divisor, we choose the
primitive representative in its line, which pairs positively with the Kähler cone. \end{defi} \begin{cor}[\cite{Menet-Riess-20}, Corollary 4.14]\label{criterionwall}\label{cor:desrK}
Let $X$ be an irreducible symplectic orbifold such that either $X$ is projective or $b_2(X)\geq5$.
Then
$$
{\mathcal K}_X=\{\alpha\in {\mathcal C}_X \,|\, (\alpha, D)_q>0\ \forall D\in {\mathscr W}_X^+\}.$$ \end{cor} Finally, we recall the following proposition about the birational Kähler cone. \begin{prop}[\cite{Menet-Riess-20}, Corollary 4.17]\label{caca} Let $X$ be an irreducible symplectic orbifold. Then $\alpha\in H^{1,1}(X,\R)$ is in the closure $\overline{\BK}_X$ of the birational Kähler cone $\BK_X$ if and only if $\alpha\in\overline{\mathcal{C}}_X$ and $(\alpha,[D])_q\geq 0$ for all uniruled divisors $D\subset X$. \end{prop}
\section{The Nikulin orbifolds}\label{M'section0}
\subsection{Construction and description of Nikulin orbifolds}\label{M'section}
In order to enhance the readability, we recall the construction of the Nikulin orbifold from Example \ref{exem} and Definition \ref{Nikulin}. Let $X$ be a (smooth) irreducible symplectic 4-fold deformation equivalent to the Hilbert scheme of two points on a K3 surface (called \emph{manifold of $K3^{[2]}$-type}).
Suppose that $X$ admits a symplectic involution $\tilde{\iota}$. By \cite[Theorem 4.1]{Mongardi-2012}, $\tilde{\iota}$ has 28 fixed points and a fixed K3 surface $\Sigma$. We define $M:=X/\tilde{\iota}$ the quotient and $r:M'\rightarrow M$ the blow-up in the image of $\Sigma$. As mentioned in Example \ref{exem}, the orbifolds $M'$ constructed in this way are irreducible symplectic orbifolds (see \cite[Proposition
3.8]{Menet-2020}) and are named \emph{Nikulin orbifolds}.
A concrete example of such $X$ can be obtained in the following way: Let $S$ be a K3 surface endowed with a symplectic involution $\iota$. It induces a symplectic involution $\iota^{[2]}$ on $S^{[2]}$ the Hilbert scheme of two points on $S$. Then the fixed surface $\Sigma$ of $\iota^{[2]}$ is the following: \begin{equation}\label{eq:sigma}
\Sigma=\left\{ \left.\xi\in S^{[2]}\ \right|\ \Supp \xi=\left\{s,\iota(s)\right\}, s\in S\right\}. \end{equation}
\begin{remark}\label{rem:Sigma} Let us describe this surface $\Sigma$: Consider as usual $S\times S \overset{\widetilde{\nu}}{\longleftarrow} \widetilde{S\times S} \overset{\widetilde{\rho}}{\too} S^{[2]}$, where $\nutild$ is the blow-up of the diagonal $\Delta_S \subseteq S\times S$, and
$\rhotild$ the double cover induced by permutation of the two factors. Consider the surface $S_\iota \coloneqq \{(s,\iota(s))\,|\,s\in S\}\subseteq S\times S$, which is preserved by the involution $\iota\times\iota$. Restricted to $S_\iota$ the permutation of the two factors in $S\times S$ corresponds to the action of $\iota$ on $S$ (via the isomorphism $S_\iota \iso S$ induced by the first projection), and thus $S_\iota \cap \Delta_S$ corresponds to the fixed points of $\iota$ in $S$. Therefore, the strict transform $\widetilde{S_{\iota}}$ of $S_\iota$ is isomorphic to the blow-up $\Bl_{\Fix\iota}S$ of $S$ in the fixed points of $\iota$. Denote \begin{equation*}
\Sigma\coloneqq\rhotild(\widetilde{S_\iota})\iso \Bl_{\Fix\iota}S / \overline{\iota}, \end{equation*} where $\overline\iota$ is the involution on $\Bl_{\Fix\iota}S$ which is induced by $\iota$. Then $\Sigma$ is a K3 surface, which is fixed by $\iota^{[2]}$ and admits the description in \eqref{eq:sigma} by construction.
\end{remark}
Note that the existence of a symplectic involution on a K3 surface or on a $K3^{[2]}$-type manifold can be checked purely on the level of lattices. We will need the following lemma. \begin{lemme}\label{pfff} Let $X$ be a K3 surface or an irreducible symplectic manifold of $K3^{[2]}$-type.
Assume that there is a primitive embedding $E_8(-2)\hookrightarrow\Pic X$. Then there exists no wall divisor
in $E_8(-2)$. In particular, under the additional assumption that $\Pic X\simeq E_8(-2)$, we have $\mathcal{C}_X=\mathcal{K}_X$. \end{lemme} \begin{proof}
All elements of $E_8(-2)$ have square divisible by 4, whereas by Example \ref{examplewall} wall divisors have square $-2$ in the K3 case, and square $-2$ or $-10$ in the $K3^{[2]}$ case. Hence $E_8(-2)$ cannot contain any wall divisor. The last claim then follows from Corollary \ref{cor:desrK}.
\end{proof} \begin{prop}\label{involutionE8}
Let $X$ be a K3 surface or a manifold of $K3^{[2]}$-type.
Then there exists a symplectic involution $\iota$ on $X$ if and only if $X$ satisfies the following
conditions:
\begin{compactenum}[(i)]
\item \label{it1iota} There exists a primitive embedding $E_8(-2)\hookrightarrow\Pic X$.
\item \label{it2iota} The intersection ${\mathcal K}_X \cap E_8(-2)^\perp\neq \emptyset$.
\end{compactenum}
In this case the induced action $\iota^*$ on $H^2(X,{\mathbb Z})$ is $-\id$ on $E_8(-2)$ and trivial on $E_8(-2)^{\bot}$. \end{prop} \begin{proof}
Let us start by fixing a symplectic involution $\iota$. It is shown in \cite[Section 1.3]{Sarti-VanGeemen} and \cite[Theorem 5.2]{Mongardi-2012} that the fixed lattice of $\iota$ is isomorphic to $E_8(-2)^\perp$ and the anti-invariant lattice is isomorphic to $E_8(-2)$. This readily implies (i).
To prove (ii), pick any Kähler class $\alpha \in {\mathcal K}_X$. Since $\iota$ is an automorphism, $\iota^*(\alpha)$
is also a Kähler class. Therefore $\alpha + \iota^*(\alpha)\in E_8(-2)^\perp$ is a Kähler class.
For the other implication assume (i) and (ii). We consider the involution $i$ on $E_8(-2)\oplus E_8(-2)^{\bot}$ defined by $-\id$ on $E_8(-2)$ and $\id$ on $E_8(-2)^{\bot}$. By \cite[Corollary 1.5.2]{Nikulin}, $i$ extends to an involution on $H^2(X,\Z)$.
By \cite[Section 9.1.1]{Markman11}, $i$ is a monodromy operator. Moreover, by (ii), we can find a Kähler class of $X$ in $E_8(-2)^{\bot}$. It follows from the global Torelli theorem (see \cite[Theorem 1.3 (2)]{Markman11} or Theorem \ref{mainHTTO} (ii)) that there exists a symplectic automorphism
$\iota$ on $X$ such that
$\iota^*=i$. Moreover, by \cite[Proposition 10]{Beauville1982}, the natural map $\Aut(X)\rightarrow \mathcal{O}(H^2(X,\Z))$ is injective. Hence $\iota$ is necessarily an involution.
\end{proof}
\begin{remark} Fix a primitive embedding of $E_8(-2)$ in the K3$^{[2]}$-lattice $\Lambda\coloneqq U^3 \oplus E_8(-1)^2 \oplus (-2)$.
Let ${\mathscr M}_{\rm K3^{[2]}}^\iota$ be the moduli space of marked K3$^{[2]}$-type manifolds endowed with a
symplectic involution such that the anti-invariant lattice is identified with the chosen $E_8(-2)$.
Denote by $\Lambda^{{\iota}}\iso U^3\oplus E_8(-2)\oplus (-2)$ the orthogonal complement of $E_8(-2)$.
From Proposition \ref{involutionE8} we observe that the period map restricts to
$${\mathscr P}^\iota \colon
{\mathscr M}_{\rm K3^{[2]}}^\iota \to
\mathcal{D}_{\Lambda^{\iota}}:=\left\{\left.\sigma\in \mathbb{P}(\Lambda^{\iota}\otimes\C)\ \right|\ \sigma^2=0,\ \sigma\cdot\overline{\sigma}>0\right\}.$$ Note that the fibers of ${\mathscr P}^\iota$ are in one-to-one correspondence with the chambers cut out by wall divisors (no wall divisor can be contained in the orthogonal complement of $\Lambda^\iota$, see Example \ref{examplewall}). In particular, this correspondence is given by the chamber structure inside $\Lambda^\iota$ cut out by the images of the wall divisors under the orthogonal projection $\Lambda\to \Lambda^\iota$. \end{remark}
\subsection{The lattice of Nikulin orbifolds starting from $S^{[2]}$}\label{M'S2} From now on we restrict ourselves to the case $X=S^{[2]}$ for a suitable K3 surface $S$ with an involution $\iota$. We consider the following commutative diagram: \begin{equation} \xymatrix{
\ar@(dl,ul)[]^{\iota^{[2]}_1}N_1\ar[r]^{r_1}\ar[d]^{\pi_1}&\ar[d]^{\pi}S^{[2]}\ar@(dr,ur)[]_{\iota^{[2]}}\\
M' \ar[r]^{r} & M,} \label{diagramM'} \end{equation} where $\pi : S^{[2]}\longrightarrow S^{[2]}/\iota^{[2]}=:M$ is the quotient map, $r_1$ is the blow-up of $S^{[2]}$ in $\Sigma$, $\iota^{[2]}_1$ is the involution induced by $\iota^{[2]}$ on $N_1$, $\pi_1 : N_1\longrightarrow N_1/\iota^{[2]}_1\simeq M'$ is the quotient map and $r$ is the blow-up of $M$ in $\pi(\Sigma)$. We also denote by $j:H^2(S,\Z)\hookrightarrow H^2(S^{[2]},\Z)$ the natural Hodge isometric embedding (see \cite[Proposition 6 Section 6 and Remark 1 Section 9]{Beauville1983}).
We fix the following notation for important divisors: \begin{itemize} \item $\Delta$ the class of the diagonal divisor in $S^{[2]}$ and $\delta:=\frac{1}{2}\Delta$; \item $\delta_1:=r_1^*(\delta)$ and $\Sigma_1$ the exceptional divisor of $r_1$; \item $\delta':=\pi_{1*}r_1^*(\delta)$ and $\Sigma':=\pi_{1*}(\Sigma_1)$ the exceptional divisor of $r$. \end{itemize}
Here we use the definition of the push-forward given in \cite{Aguilar-Prieto}. In particular $\pi_*$ verifies the following equations (see \cite[Theorem 5.4 and Corollary 5.8]{Aguilar-Prieto}): \begin{equation} \pi_*\circ\pi^*=2\id\ \text{and}\ \pi^*\circ\pi_*=\id+\iota^{[2]*}. \label{Smith} \end{equation} As a consequence, we have (see \cite[Lemma 3.6]{Menet-2018}): \begin{equation} \pi_*(\alpha)\cdot\pi_*(\beta)=2\alpha\cdot \beta, \label{Smith2} \end{equation} with $\alpha \in H^k(S^{[2]},\Z)^{\iota^{[2]}}$ and $\beta \in H^{8-k}(S^{[2]},\Z)^{\iota^{[2]}}$, $k\in \left\{0,...,8\right\}$. Of course, the same equations are also true for $\pi_{1*}$.
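As a simple illustration of how these equations are used below (nothing new is claimed here): the class $\delta_1=r_1^*(\delta)$ introduced above is invariant under $\iota^{[2]}_1$, so the analogue of (\ref{Smith}) for $\pi_1$ gives
\begin{equation*}
\pi_1^*\big(\pi_{1*}(\delta_1)\big)=\delta_1+\iota^{[2]*}_1(\delta_1)=2\,\delta_1 .
\end{equation*}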
\begin{rmk}\label{Smithcomute} Note that the commutativity of diagram (\ref{diagramM'}) and equations (\ref{Smith}) imply $\pi_{1*}r_1^*(x)=r^*\pi_*(x)$ for all $x\in H^{2}(S^{[2]},\Z)$. \end{rmk} We denote by $q_{M'}$ and $q_{S^{[2]}}$ respectively the Beauville--Bogomolov form of $M'$ and $S^{[2]}$. We can also define a Beauville--Bogomolov form on $M$ by: $$q_{M}(x):=q_{M'}(r^*(x)),$$ for all $x\in H^2(M,\Z)$. We recall the following theorem. \begin{thm}\label{BBform} \begin{itemize} \item[(i)] The Beauville--Bogomolov lattice of $M'$ is given by $(H^2(M',\Z),q_{M'})\simeq U(2)^3\oplus E_8(-1)\oplus(-2)^2$ where the Fujiki constant is equal to 6. \item[(ii)] $q_{M}(\pi_*(x))=2q_{S^{[2]}}(x)$ for all $x\in H^2(S^{[2]},\Z)^{\iota^{[2]}}$. \item[(iii)] $q_{M'}(\delta')=q_{M'}(\Sigma')=-4$. \item[(iv)] $(r^*(x),\Sigma')_{q_{M'}}=0$ for all $x\in H^{2}(M,\Z)$. \item[(v)]
$H^2(M',\Z)=r^*\pi_*(j(H^2(S,\Z)))\oplus^{\bot}\Z\frac{\delta'+\Sigma'}{2}\oplus^{\bot}\Z\frac{\delta'-\Sigma'}{2}$. \end{itemize} \end{thm} \begin{proof} This theorem corresponds to several results in \cite{Menet-2015}. We want to emphasize that our notation is slightly different from \cite{Menet-2015}. In \cite{Menet-2015}, $r_1$, $r$, $\delta'$ and $\Sigma'$ are respectively denoted by $s_1$, $r'$, $\overline{\delta}'$ and $\overline{\Sigma}'$. Statement (i) is \cite[Theorem 2.5]{Menet-2015}. Using that the Fujiki constant is equal to 6 together with Remark \ref{Smithcomute}, statement (ii) follows from \cite[Proposition 2.9]{Menet-2015}. Similarly, statements (iii) and (iv) are respectively given by \cite[Propositions 2.10 and 2.13]{Menet-2015}. Finally, statement (v) is provided by \cite[Theorem 2.39]{Menet-2015}. \end{proof} \begin{rmk} In the previous theorem the Beauville--Bogomolov lattice of $M'$ is obtained as follows: \begin{itemize} \item $r^*\pi_*(j(H^2(S,\Z)))\simeq U(2)^3\oplus E_8(-1)$, \item $\Z\frac{\delta'+\Sigma'}{2}\oplus^{\bot}\Z\frac{\delta'-\Sigma'}{2} \simeq (-2)^2.$ \end{itemize} \end{rmk} We recall that the divisibility ${\rm div}$ of a lattice element is defined in Section \ref{notation}. \begin{rmk}\label{div} Theorem \ref{BBform} shows that ${\rm div}(\Sigma')={\rm div}(\delta')=2$. \end{rmk} \subsection{Monodromy operators inherited from $\Mon^2(S^{[2]})$}
We keep the notation from the previous subsection. The monodromy group is defined in Section \ref{notation}. \begin{prop}\label{MonoM'}
Let $f\in \Mon^2(S^{[2]})$, (resp. $f\in \MonHdg(S^{[2]})$) be a monodromy operator such that $f\circ\iota^{[2]*}=\iota^{[2]*}\circ f$ on $H^2(S^{[2]},{\mathbb Z})$.
We consider $f':H^2(M',\Z)\rightarrow H^2(M',\Z)$ such that $f'(\Sigma')=\Sigma'$ and: $$f'(r^*(x))=\frac{1}{2}r^*\circ \pi_*\circ f \circ \pi^*(x),$$
for all $x\in H^2(M,\Z)$. Then $f'\in \Mon^2(M')$, (resp. $f'\in \MonHdg(M')$).
\end{prop} \begin{proof}
Let $\varphi$ be a marking of $S^{[2]}$. Since $f$ is a monodromy operator, we know that $(S^{[2]},\varphi)$ and $(S^{[2]},\varphi\circ f)$ are in the same connected component of their moduli space (see Section \ref{per} for the definition of the moduli space). We consider $$\Lambda^{\iota^{[2]}}:=\varphi\left(H^{2}(S^{[2]},\Z)^{\iota^{[2]}}\right).$$ We know that $\Lambda^{\iota^{[2]}}\simeq U^3\oplus E_8(-2)\oplus (-2)$ which is a lattice of signature $(3,12)$ (see for instance \cite[Proposition 2.6]{Menet-2015}). As in Section \ref{M'section}, we can consider the associated period domain: $$\mathcal{D}_{\Lambda^{\iota^{[2]}}}:=\left\{\left.\sigma\in
\mathbb{P}(\Lambda^{\iota^{[2]}}\otimes\C)\ \right|\ \sigma^2=0,\ \sigma\cdot\overline{\sigma}>0\right\}.$$ By Lemma \ref{pfff}, for a very general K3$^{[2]}$-type manifold whose period point lies in $\mathcal{D}_{\Lambda^{\iota^{[2]}}}$, the Kähler cone is the entire positive cone. Furthermore, by Proposition \ref{involutionE8} \eqref{it2iota}, the intersection $\varphi({\mathcal K}_{S^{[2]}})\cap\Lambda^{\iota^{[2]}}$ is non-empty, and therefore so is $\varphi\circ f({\mathcal K}_{S^{[2]}})\cap \Lambda^{\iota^{[2]}}$. We can apply Lemma \ref{lem:connected+} to see that $(S^{[2]},\varphi)$ and $(S^{[2]},\varphi\circ f)$ can be connected by a sequence of twistor spaces $\mathscr{X}_i\rightarrow \mathbb{P}^1$. By construction and Remark \ref{twistorinvo}, all these twistor spaces are endowed with an involution $\mathscr{I}_i$ which restricts on each fiber to a symplectic involution. Hence we can consider for each twistor space the blow-up $\widetilde{\mathscr{X}_i/\mathscr{I}_i}\rightarrow \mathscr{X}_i/\mathscr{I}_i$ of the quotient $\mathscr{X}_i/\mathscr{I}_i$ in the codimension 2 component of its singular locus. We obtain a sequence of families $\widetilde{\mathscr{X}_i/\mathscr{I}_i}\rightarrow\mathbb{P}^1$ of orbifolds deformation equivalent to $M'$. This sequence of families provides a monodromy operator of $M'$ that we denote by $f'$.
We need to verify that $f'$ satisfies the claimed properties. First note that by construction $f'(\Sigma')=\Sigma'$. All fibers of a twistor space are diffeomorphic to each other, and hence the monodromy operator $f$ is provided by a diffeomorphism $u: S^{[2]}\rightarrow S^{[2]}$ such that $u^*=f$. Moreover, by construction this diffeomorphism commutes with $\iota^{[2]}$. It therefore induces homeomorphisms $\overline{u}$ on $M$ and $\overline{u}'$ on $M'$ fitting into the following commutative diagram:
$$\xymatrix{S^{[2]}\ar[r]^{u}\ar[d]^{\pi}& S^{[2]}\ar[d]^{\pi} \\ M\ar[r]^{\overline{u}}&M \\ M'\ar[u]^{r}\ar[r]^{\overline{u}'} & M'.\ar[u]^{r}}$$
Note that, by construction $f'=\overline{u}'^*$. We can use the commutativity of the previous diagram to check that $f'$ verifies the properties from the proposition. Let $x\in H^2(M,\Z)$. We have: \begin{equation} f'(r^*(x))=\overline{u}'^*(r^*(x))=r^*(\overline{u}^*(x)). \label{calculf} \end{equation} Moreover: $$\pi^*(\overline{u}^*(x))=u^*(\pi^*(x)).$$ Taking the image by $\pi_*$ and using \eqref{Smith}, we obtain that: $$2\overline{u}^*(x)=\pi_*u^*\pi^*(x).$$ Combining this last equation with (\ref{calculf}), we obtain the statement
of the proposition.
It remains to prove that if $f\in\MonHdg(S^{[2]})$, then also $f'\in \MonHdg(M')$. The maps $\pi$ and $r$ are holomorphic maps between Kähler orbifolds, hence induce morphisms $\pi^*$ and $r^*$ which respect the Hodge structure. Then $\pi_*$ respects the Hodge structure because of (\ref{Smith}). Since $f'$ is a composition of maps which respect the Hodge structure, we obtain that $f'\in \MonHdg(M')$. \end{proof} \begin{rmk} The previous proposition can be generalized to other irreducible symplectic orbifolds obtained as partial resolutions in codimension 2 of quotients of irreducible symplectic manifolds. \end{rmk}
\begin{cor}\label{Rdelta} The reflection $R_{\delta'}$ as defined in Section \ref{notation} is an element of the Monodromy group $\MonHdg(M')$. \end{cor} \begin{proof} By \cite[Section 9]{Markman11}, we know that $R_{\delta}\in \Mon^2(S^{[2]})$. By Proposition \ref{MonoM'} and Theorem \ref{BBform} (iv) , we only have to check that: $$R_{\delta'}(r^*(x))=\frac{1}{2}r^*\circ \pi_*\circ R_{\delta} \circ \pi^*(x),$$ for all $x\in H^{2}(M,\Z)$. We have: $$R_{\delta} \circ \pi^*(x)=\pi^*(x)-\frac{2(\delta,\pi^*(x))_{q_{S^{[2]}}}}{q_{S^{[2]}}(\delta)}\delta.$$ Taking the image by $\pi_*$, applying (\ref{Smith}) and Theorem \ref{BBform} (ii), we obtain: \begin{align*} \pi_*\circ R_{\delta} \circ \pi^*(x)=2x-\frac{4(\pi_*(\delta),2x)_{q_M}}{2q_{M}(\pi_*\delta)}\pi_*(\delta) =2\left(x-\frac{2(\pi_*(\delta),x)_{q_M}}{q_M(\pi_*\delta)}\pi_*(\delta)\right). \end{align*} Then dividing by 2, taking the image by $r^*$, and using $q_M=q_{M'}\circ r^*$ (compare Section \ref{M'S2}) concludes the computation. \end{proof}
\section{A first example: the very general Nikulin orbifolds}\label{genericM'0} \subsection{Wall divisors of a Nikulin orbifold constructed from a K3 surface without effective curves}\label{genericM'}
Let $S$ be a K3 surface which admits a symplectic involution and does not contain any effective curve. Such a K3 surface exists by Proposition \ref{involutionE8} and the surjectivity of the period map.
Then we consider the Nikulin orbifold $M'$ associated to $S^{[2]}$ and the induced involution $\iota^{[2]}$ as in Section \ref{M'section} (we keep the same notation as earlier in this section).
\begin{prop}\label{exwalls} The wall divisors of $M'$ are $\delta'$ and $\Sigma'$ which are both of square $-4$ and divisibility $2$.
\end{prop}
This section is devoted to the proof of this proposition. The idea of the proof is to study the curves in $M'$ and use Proposition \ref{extremalray}.
Consider the following diagram: \begin{equation} \xymatrix{
&S^{[2]}\ar[r]^{\nu}& S^{(2)} & \\
&\widetilde{S\times S}\ar[u]^{\widetilde{\rho}}\ar[r]^{\widetilde{\nu}}&\ar[ld]_{p_1}S^2\ar[u]^{\rho}\ar[rd]^{p_2}& \\ &S & & S,} \label{S2} \end{equation} where $p_1$, $p_2$ are the projections, $\rho$ the quotient map and $\nu$ the blow-up in the diagonal in $S^{(2)}$. By assumption $S$ does not contain any effective curve. Hence considering the image by the projections $p_1$, $p_2$ and $\rho$, we deduce that $S^{(2)}$ does not contain any curve either. Hence all curves in $S^{[2]}$ are contracted by $\nu$, i.e.~fibers of the exceptional divisor $\Delta\rightarrow \Delta_{S^{(2)}}$, where $\Delta_{S^{(2)}}$ is the diagonal in $S^{(2)}$. We denote such a curve by $\ell_{\delta}^s$, where $s\in S$ keeps track of the point $(s,s)\in S^{(2)}$. To simplify the notation, we
denote the cohomology class $\ell_{\delta}\coloneqq[\ell_{\delta}^s]$, since it does not depend on $s \in S$.
Our next goal is to determine the irreducible curves in $N_1$. Recall that $r_1\colon N_1 \to S^{[2]}$ is the blow-up in the fixed surface $\Sigma$. Let $C$ be an irreducible curve in $N_1$. There are three cases: \begin{itemize}
\item[(i)]
The image of $C$ by $r_1$ does not intersect $\Sigma$ and is of class $\ell_{\delta}$.
Therefore, $C$ is of class $r_1^*(\ell_{\delta})$.
\item[(ii)]
The image of $C$ by $r_1$ is contained in $\Sigma$ and of class $\ell_{\delta}$.
\item[(iii)]
The image of $C$ by $r_1$ is a point. Then $C$ is of class $\ell_{\Sigma}$
(the class of a fiber of the exceptional divisor $\Sigma_1\longrightarrow \Sigma$). \end{itemize} Note that $\ell_{\delta}^s$ is contained in $\Sigma$ if $s\in S$ is a fixed point of the involution $\iota$, and otherwise $\ell_{\delta}^s\cap \Sigma=\emptyset$ (this follows from the description of $\Sigma$ in Remark \ref{rem:Sigma}). Therefore, there is no case where the image of $C$ under $r_1$ intersects $\Sigma$ in a non-empty zero-dimensional locus.
It remains to understand the case (ii). \begin{lemme}\label{Rdeltalemma} Consider a curve $\ell_{\delta}^s$ contained in $\Sigma$ (i.e.~when $s\in S$ is a fixed point of $\iota$). The surface $H_0:=r_1^{-1}(\ell_{\delta}^s)$ is isomorphic to $\mathbb{P}^1\times\mathbb{P}^1$. \end{lemme} \begin{proof}
The surface $H_0$ is a Hirzebruch surface. Since $r_1$ is the blow-up along $\Sigma$, observe that $H_0 \iso{\mathbb P}(\mathcal{N}_{\Sigma/S^{[2]}}|_{\ell_\delta^s})$. Therefore, we need to compute $\mathcal{N}_{\Sigma/S^{[2]}}|_{\ell_{\delta}^s}$.
Keeping the notation from Remark \ref{rem:Sigma}, recall that $\widetilde{S_{\iota}}:=\widetilde{\rho}^{-1}(\Sigma)$ and $S_{\iota}:=\widetilde{\nu}(\widetilde{S_{\iota}})$. For simplicity, we also denote by $\ell_{\delta}^s$ the preimage of $\ell_{\delta}^s$ by $\widetilde{\rho}$. (Note for this, that $\widetilde{\rho}$ restricts to an isomorphism on the preimage of $\Delta$, and therefore, it makes sense to identify $\ell_{\delta}^s$ with its preimage).
Observe that:
$$\mathcal{N}_{\Sigma/S^{[2]}}|_{\ell_{\delta}^s}
\iso \widetilde{\rho}^{*}(\mathcal{N}_{\Sigma/S^{[2]}})|_{\ell_{\delta}^s}
\iso\mathcal{N}_{\widetilde{S_{\iota}}/\widetilde{S\times S}}|_{\ell_{\delta}^s}
\iso\widetilde{\nu}^{*}(\mathcal{N}_{S_{\iota}/S\times S}|_{s}) \iso\mathcal{O}_{\ell_{\delta}^s}\oplus\mathcal{O}_{\ell_{\delta}^s},$$ where we identify $s\in S\iso S_\iota$. It follows that $H_0\simeq \mathbb{P}^1\times \mathbb{P}^1$. \end{proof} It follows that the extremal curves in case (ii) have classes $r_1^*(\ell_{\delta})$. \begin{rmk}\label{mainldelta} In particular, considering cases (i) and (ii), we see that the extremal curves $C$ such that $r_1(C)=\ell_{\delta}^s$ for some $s\in S$ have classes $r_1^*(\ell_{\delta})$. \end{rmk}
Hence, we obtain that the extremal curves in $N_1$ have classes $r_1^*\ell_{\delta}$ and $\ell_{\Sigma}$. This implies that the extremal curves in $M'$ have classes $\pi_{1*}r_1^*\ell_{\delta}$ and $\pi_{1*}\ell_{\Sigma}$.
We can compute their dual divisors. \begin{lemme}\label{dualdeltasigma} The dual divisors in $H^2(M',\Q)$ of $\pi_{1*}r_1^*\ell_{\delta}$ and $\pi_{1*}\ell_{\Sigma}$ are respectively $\frac{1}{2}\delta'$ and $\frac{1}{2}\Sigma'$. \end{lemme} \begin{proof} Write
$$H^2(N_1,\Z)=r_1^*\nu^*H^2(S^{(2)},\Z)\oplus \Z\left[\delta_1\right]\oplus\Z\left[\Sigma_1\right].$$
We denote by $p_{\delta_1}:H^2(N_1,\Z)\rightarrow \Z[\delta_1]\iso {\mathbb Z}$ and $p_{\Sigma_1}:H^2(N_1,\Z)\rightarrow \Z[\Sigma_1]\iso {\mathbb Z}$ the projections.
Let $x\in H^2(N_1,\Z)^{\iota_1^{[2]}}$. Since $\ell_\delta\cdot\alpha=0$ for all $\alpha \in r_1^*\nu^*H^2(S^{(2)},\Z)\oplus{\mathbb Z} [\Sigma_1]$, we have $$\ell_{\delta}\cdot x=(\ell_{\delta}\cdot \delta_1)p_{\delta_1}(x)=-p_{\delta_1}(x)$$ and similarly $\ell_{\Sigma}\cdot x=(\ell_{\Sigma}\cdot \Sigma_1)p_{\Sigma_1}(x)=-p_{\Sigma_1}(x)$.
It follows from (\ref{Smith2}): $$\pi_{1*}(\ell_{\delta})\cdot\pi_{1*}(x)=-2p_{\delta_1}(x)\ \text{and}\ \pi_{1*}(\ell_{\Sigma})\cdot\pi_{1*}(x)=-2p_{\Sigma_1}(x).$$
Therefore, Theorem \ref{BBform} (iii) shows that $\frac{1}{2}\delta'$ and $\frac{1}{2}\Sigma'\in H^2(M',{\mathbb Q})$ are the duals of $\pi_{1*}(\ell_{\delta})$ and $\pi_{1*}(\ell_{\Sigma})$ respectively. \end{proof} By Proposition \ref{extremalray} this proves that $\delta'$ and $\Sigma'$ are wall divisors in $M'$. Their claimed numerical properties are given by Theorem \ref{BBform} (iii) and Remark \ref{div}.
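Let us also record a small consistency check, which is not needed for the argument: since $\delta'=\pi_{1*}r_1^*(\delta)=r^*\pi_*(\delta)$ by Remark \ref{Smithcomute}, Theorem \ref{BBform} (ii) together with the standard value $q_{S^{[2]}}(\delta)=-2$ gives
\begin{equation*}
q_{M'}(\delta')=q_{M}(\pi_*(\delta))=2\,q_{S^{[2]}}(\delta)=-4,
\end{equation*}
in accordance with Theorem \ref{BBform} (iii).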
It remains to show that $\delta'$ and $\Sigma'$ are the only wall divisors in $M'$. Let us assume for contradiction that there is another wall divisor $D$.
By Theorem \ref{BBform} (v), we can write $D=a\delta'+b\Sigma'+K$ with $a,b\in\frac{1}{2}\Z$ and $b\geq 0$ (up to replacing $D$ by $-D$), where $K$ is a divisor orthogonal to $\delta'$ and $\Sigma'$.
Since $\delta'$ and $\Sigma'$ correspond to the duals of the extremal rays of the Mori cone, all classes $\alpha\in \mathcal{C}_{M'}$ such that $(\alpha,\delta')_q>0$ and $(\alpha,\Sigma')_q>0$ are Kähler classes by \cite[Theorem 4.1]{Menet-Riess-20}.
Hence, we cannot have $a=b=0$. Indeed, $D$ would be orthogonal to the Kähler classes with orthogonal projection on $\Pic M'$ equal to $ -(\Sigma' + \delta')$ which is impossible by definition of wall divisors.
Therefore, $a$ or $b$ is non-zero.
It follows that $K=0$. Indeed, if $K\neq0$, as before, the class $ -(\Sigma' + \delta')-\frac{4(b+a)}{(K,K)}K$ is the projection on $\Pic M'$ of a Kähler class; however it is orthogonal to $D$ which is impossible.
Now, we can assume that $a\neq 0$ and $b\neq 0$ (indeed, if $a=0$ or $b=0$, then $D\in {\mathbb Z}\delta'$ or $D\in {\mathbb Z}\Sigma'$).
If $a<0$, $D$ would be orthogonal to $ a\Sigma' - b\delta'$ the projection on $\Pic M'$ of a Kähler class; this is impossible by definition of wall divisors. Hence we assume that $a>0$. By Corollary \ref{Rdelta}, $R_{\delta'}$ is a monodromy operator.
Moreover, $R_{\delta'}(D)=-a\delta'+b\Sigma'$; as previously, $R_{\delta'}(D)$ is orthogonal to some Kähler class. This gives a contradiction and thus concludes the proof.
\subsection{Application: an example of non-natural symplectic involution on a Nikulin orbifold}\label{inv0M'} Using the results from the previous subsection, we will prove the existence of a non-natural symplectic involution on our example. We recall that the reflections $R_x$ with $x\in H^2(M',\Z)$ are defined in Section \ref{notation}. \begin{prop}\label{involution} Let $(S,\iota)$ be a very general K3 surface endowed with a symplectic involution (that is $\Pic S\simeq E_8(-2)$). Let $M'$ be a Nikulin orbifold constructed from $(S,\iota)$ as in Section \ref{M'S2}.
There exists a symplectic involution $\kappa'$ on $M'$ such that $\kappa'^*=R_{\frac{1}{2}(\delta'-\Sigma')}$.
\end{prop} \begin{proof} We consider the following involution on $S\times S$: $$\xymatrix@R0pt{\kappa: \hspace{-1.5cm}& S\times S\ar[r] & S\times S\\ & (x,y)\ar[r]&(x,\iota(y)).}$$ We consider $$V:=S\times S\smallsetminus \left(\Delta_{S^{2}}\cup S_{\iota}\cup(\Fix \iota\times \Fix \iota)\right).$$ We denote by $\sigma_2$ the involution on $S\times S$ which exchanges the two factors and by $\iota\times\iota$ the involution which acts as $\iota$ diagonally on $S\times S$. Then we consider $$U:=V/\left\langle \sigma_2,\iota\times\iota\right\rangle.$$
This set can be seen as an open subset of $M'$ and $V$ can also be seen as an open subset of $\widetilde{S\times S}$. Moreover the map $\pi_1\circ\widetilde{\rho}_{|V}:V\rightarrow U$ is a four to one non-ramified cover. For simplicity, we denote $\gamma:=\pi_1\circ\widetilde{\rho}_{|V}$.
First, we want to prove that $\kappa$ induces an involution $\kappa'$ on $U$ with a commutative diagram: \begin{equation} \xymatrix{V\ar[d]_{\gamma}\ar[r]^{\kappa}&V\ar[d]^{\gamma}\\ U\ar[r]^{\kappa'} & U.} \label{kappadiagram} \end{equation} If such a map $\kappa'$ exists, then it necessarily satisfies the following equation: $$\kappa'\circ \gamma=\gamma\circ \kappa.$$ Since $\gamma$ is surjective, in order for the previous equation to provide a well-defined map from $U$ to $U$, we have to verify that: $$\kappa'\circ\gamma(x_1,y_1)=\kappa'\circ\gamma(x_2,y_2),$$ when $\gamma(x_1,y_1)=\gamma(x_2,y_2)$. That is: \begin{equation} \gamma\circ \kappa(x_1,y_1)=\gamma\circ \kappa(x_2,y_2), \label{welldefined} \end{equation} for all $((x_1,y_1),(x_2,y_2))\in S^4$ such that $\gamma(x_1,y_1)=\gamma(x_2,y_2)$. Let $(x,y)\in S^2$. We have: $$\gamma^{-1}(\gamma(x,y))=\left\{(x,y),(y,x),(\iota(x),\iota(y)),(\iota(y),\iota(x))\right\}.$$ We also have: $$\kappa(\gamma^{-1}(\gamma(x,y)))=\left\{(x,\iota(y)),(y,\iota(x)),(\iota(x),y),(\iota(y),x)\right\}=\gamma^{-1}(\gamma(x,\iota(y)))=\gamma^{-1}(\gamma(\kappa(x,y))).$$ This shows (\ref{welldefined}). Hence $\kappa'$ is well defined set-theoretically. Since $\gamma$ is a four to one non-ramified cover, it is a local isomorphism; therefore $\kappa'$ inherits the properties of $\kappa$. In particular $\kappa'$ is a holomorphic symplectic involution.
It follows that $\kappa'$ induces a bimeromorphic symplectic involution on $M'$. By \cite[Lemma 3.2]{Menet-Riess-20}, $\kappa'$ extends to a bimeromorphic symplectic involution which is an isomorphism in codimension 1 (we still denote this involution by $\kappa'$). In particular, $\kappa'^*$ is now well defined on $H^2(M',\Z)$ (see \cite[Lemma 3.4]{Menet-Riess-20}). Now, we are going to prove that $\kappa'$ extends to a regular involution. We recall from Theorem \ref{BBform} (v) that: $$H^2(M',\Z)=r^*\pi_*(j(H^2(S,\Z)))\oplus^{\bot}\Z\frac{\delta'+\Sigma'}{2}\oplus^{\bot}\Z\frac{\delta'-\Sigma'}{2}.$$ Since $\kappa'$ is symplectic, $\kappa'$ acts trivially on $r^*\pi_*(j(H^2(S,\Z)))$. Indeed, $\Pic M'=\Z\frac{\delta'+\Sigma'}{2}\oplus\Z\frac{\delta'-\Sigma'}{2}$. Moreover, $\kappa$ exchanges $\Delta_{S^{(2)}}$ and $S_{\iota}$. Hence by continuity and commutativity of diagram (\ref{kappadiagram}), we have that $\kappa'$ exchanges the divisors $\delta'$ and $\Sigma'$. By Proposition \ref{exwalls} and Corollary \ref{criterionwall}, it follows that $\kappa'$ sends Kähler classes to Kähler classes. Hence by \cite[Proposition 3.3]{Menet-Riess-20}, $\kappa'$ extends to an involution on $M'$. \end{proof}
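\begin{rmk} For the reader's convenience, let us record the elementary verification (a consistency check only) that the reflection $R_{\frac{1}{2}(\delta'-\Sigma')}$ acts exactly as $\kappa'^*$ does in the proof above. Set $v:=\frac{1}{2}(\delta'-\Sigma')$. By Theorem \ref{BBform} (iii), (iv) and (v) we have $q_{M'}(v)=-2$, $(\delta',v)_{q_{M'}}=-2$ and $(\Sigma',v)_{q_{M'}}=2$, hence
\begin{equation*}
R_{v}(\delta')=\delta'-\frac{2(\delta',v)_{q_{M'}}}{q_{M'}(v)}\,v=\delta'-2v=\Sigma', \qquad R_{v}(\Sigma')=\delta',
\end{equation*}
while $R_{v}$ fixes $\frac{\delta'+\Sigma'}{2}$ and $r^*\pi_*(j(H^2(S,\Z)))$ pointwise, since both are orthogonal to $v$. This matches the action of $\kappa'^*$ described in the proof. \end{rmk}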
\begin{cor}\label{Sigma'} Let $M'$ be a Nikulin orbifold.
Then: $$R_{\Sigma'}\in \Mon^2(M').$$ \end{cor} \begin{proof} Let $(X,\widetilde{\iota})$ be any manifold of $K3^{[2]}$-type endowed with a symplectic involution. Let $(S,\iota)$ be a very general K3 surface endowed with a symplectic involution.
With exactly the same argument as in the proof of Proposition \ref{MonoM'}, we can connect $X$ and $S^{[2]}$ by
a sequence of twistor spaces; each twistor space being endowed with an involution which restricts to a symplectic involution on its fibers.
This induces a sequence of twistor spaces between $M'_X$ and $M'_{S^{[2]}}$, the irreducible symplectic orbifolds associated to $X$ and $S^{[2]}$ respectively, which in turn provides a parallel transport operator $f:H^2(M'_X,\Z)\rightarrow H^2(M'_{S^{[2]}},\Z)$ sending $\Sigma'_X$ to $\Sigma'_{S^{[2]}}$ (the exceptional divisors of the blow-ups $M'_X\rightarrow X/\widetilde{\iota}$ and $M'_{S^{[2]}}\rightarrow S^{[2]}/\iota^{[2]}$ respectively).
By Proposition \ref{involution}, $\kappa'^*\in\Mon^2(M'_{S^{[2]}})$.
Moreover by Corollary \ref{Rdelta}, $R_{\delta'}\in \Mon^2(M'_{S^{[2]}})$. Hence $R_{\Sigma'_{S^{[2]}}}=\kappa'^*\circ R_{\delta'}\circ \kappa'^*\in\Mon^2(M'_{S^{[2]}})$. Therefore $R_{\Sigma'_{X}}=f^{-1}\circ R_{\Sigma'_{S^{[2]}}}\circ f\in\Mon^2(M'_X)$. \end{proof} \begin{rmk}\label{RdeltaSigma} Let $(S,\iota)$ be a K3 surface endowed with a symplectic involution. Let $M'$ be a Nikulin orbifold constructed from $(S,\iota)$ as in Section \ref{M'S2}. The previous proof also shows that $R_{\frac{1}{2}(\delta'-\Sigma')}$ is a parallel transport operator. \end{rmk} \section{In search of wall divisors in special examples}\label{extremalcurves} In this section we study some explicit examples of K3 surfaces with symplectic involutions and their associated Nikulin orbifolds. This will be used in Section \ref{endsection} in order to determine which divisors on Nikulin-type orbifolds are wall divisors.
\subsection{When the K3 surface $S$ used to construct the Nikulin orbifold contains a unique rational curve}\label{onecurve}
\subsubsection*{Objective} In this section we assume that $\Pic S\simeq E_8(-2)\oplus^{\bot} (-2)$. By Riemann--Roch, $S$ contains only one curve, which is rational. We denote this curve by $C$. In particular, in this case $\mathcal{K}_S\cap E_8(-2)^{\bot}\neq\emptyset$. Hence, by Proposition \ref{involutionE8} there exists a symplectic involution $\iota$ on $S$ whose anti-invariant lattice is the $E_8(-2)\subset \Pic(S)$. Moreover, the curve $C$ is preserved by $\iota$. The objective of this section is to determine wall divisors of the Nikulin orbifold $M'$ obtained from the pair $(S^{[2]},\iota^{[2]})$ (see Section \ref{M'section}). \subsubsection*{Notation} We keep the notation from Section \ref{genericM'} and introduce the following additional notation. \begin{nota}\label{notacurves} \begin{itemize} \item We denote by $D_C$ the following divisor in $S^{[2]}$:
$$D_C=\left\{\left.\xi\in S^{[2]}\right|\ \Supp\xi\cap C\neq\emptyset\right\}.$$ Moreover, we set $D_C':=\pi_{1*}r_{1}^*(D_C)$. \item We denote by $\overline{C^{(2)}}$ the strict transform of $C^{(2)}\subset S^{(2)}$ by $\nu$ and $\overline{\overline{C^{(2)}}}$ the strict transform of $\overline{C^{(2)}}\subset S^{[2]}$ by $r_1$. We denote by
$j^{C}:\overline{C^{(2)}}\hookrightarrow S^{[2]}$ and $\overline{j^{C}}:\overline{\overline{C^{(2)}}}\hookrightarrow N_1$ the embeddings. Note that $\overline{\overline{C^{(2)}}}\simeq\overline{C^{(2)}}\simeq C^{(2)}\simeq \mathbb{P}^2$. \item We recall that $\Delta_{S^{(2)}}$ is the diagonal in $S^{(2)}$ and $\Delta$ the diagonal in $S^{[2]}$. We also denote by $\Delta_{S^2}$ the diagonal in $S\times S$ and
$\Delta_{\widetilde{S^2}}:=\widetilde{\nu}^{-1}(\Delta_{S^{2}})$ the exceptional divisor. Furthermore, we denote $\Delta_{S^{(2)}}^C:=C^{(2)}\cap \Delta_{S^{(2)}}$ and $\Delta_{S^{2}}^{C}:=C^{2}\cap \Delta_{S^2}$. Moreover we denote $\Delta_{S^{(2)}}^{\iota,C}:=\left\{\left.\left\{x,\iota(x)\right\}\right|\ x\in C\right\}\subset S^{(2)}$ and $\Delta_{S^{2}}^{\iota,C}:=\left\{\left.(x,\iota(x))\right|\ x\in C\right\}\subset S^2$. \item We denote $H_2:=\nu^{-1}(\Delta_{S^{(2)}}^C)$ and $\overline{H_2}$ its strict transform by $r_1$. We denote by $j^{H_2}:H_2\hookrightarrow S^{[2]}$ and $j^{\overline{H_2}}:\overline{H_2}\hookrightarrow N_1$ the embeddings. \end{itemize} \end{nota} We summarize our notation on the following diagram:
$$\xymatrix{&\ar@{^{(}->}[dd]_{j^{\overline{H_2}}}\overline{H_2}\ar@{=}[r]& \ar@{^{(}->}[d]H_2\ar@/_1pc/@{_{(}->}[dd]_{j^{H_2}}\ar[r] & \ar@{^{(}->}[d]\Delta_{S^{(2)}}^C & &\\
& & \Delta\ar[r]\ar@{^{(}->}[d] & \Delta_{S^{(2)}}\ar@{^{(}->}[d]& &\\ & N_1\ar[r]^{r_1} & S^{[2]}\ar[r]^{\nu} & S^{(2)}& \ar@{^{(}->}[l] C^{(2)} &\ar@{^{(}->}[l] \Delta_{S^{(2)}}^{\iota,C}\\ \overline{\overline{C^{(2)}}}\ar@{^{(}->}[ru]^{\overline{j^{C}}}\ar@{=}[r]& \overline{C^{(2)}}\ar@{^{(}->}[ru]^{j^{C}}& \widetilde{S^{2}}\ar[u]^{\widetilde{\rho}} \ar[r]^{\widetilde{\nu}}& S^{2}\ar[u]^{\rho} & \ar@{^{(}->}[l] C^{2}\ar[u]^{2:1} &\ar@{^{(}->}[l] \Delta_{S^{2}}^{\iota,C}\ar[u]^{2:1}\\ & & \Delta_{\widetilde{S^2}}\ar@{^{(}->}[u]\ar[r] & \ar@{^{(}->}[u] \Delta_{S^{2}} && }$$ \begin{lemme}\label{Hirzebruch}
The surface $H_2$ is a Hirzebruch surface isomorphic to $\mathbb{P}(\mathcal{E})$, where $\mathcal{E}:=\mathcal{O}_C(2)\oplus\mathcal{O}_C(-2)$. Let $f$ be a fiber of $\mathbb{P}(\mathcal{E})$. There exists a section $C_0$ which satisfies: $\Pic H_2=\Z C_0\oplus\Z f$, $C_0^2=-4$, $f^2=0$ and $C_0\cdot f=1$. \end{lemme} \begin{proof}
Let $\sigma_2$ be the involution on $S\times S$ which exchanges the two K3 surfaces and $\widetilde{\sigma_2}$ the induced involution on $\widetilde{S\times S}$. The involution $\widetilde{\sigma_2}$ acts trivially on $\Delta_{\widetilde{S^2}}$. It follows that $\widetilde{\rho}$ induces an isomorphism $\Delta_{\widetilde{S^2}}\simeq \Delta$. In particular, it shows that:
$$H_2\simeq\mathbb{P}(\mathcal{N}_{\Delta_{S^{2}}/S\times S}|_{\Delta_{S^{2}}^{C}}).$$ We consider the following commutative diagram: \begin{equation} \xymatrix{\Delta_{S^{2}}\eq[r]& S\\ \Delta_{S^{2}}^{C}\ar@{^{(}->}[u]\eq[r] & \ar@{^{(}->}[u]C.} \label{delta0} \end{equation} Under the isomorphism induced by (\ref{delta0}), observe that:
$$\mathcal{N}_{\Delta_{S^2}/S\times S}|_{\Delta_{S^{2}}^{C}}\simeq T_S|_C.$$
To compute $T_S|_C$, we consider the following exact sequence:
$$\xymatrix{0\ar[r]&\ar[r]T_C\ar[r]&T_S|_C\ar[r]&\mathcal{N}_{C/S}\ar[r]&0.}$$
We have $T_C=\mathcal{O}_C(2)$ and $\mathcal{N}_{C/S}=\mathcal{O}_C(-2)$. Moreover, $\Ext^1(\mathcal{O}_C(-2),\mathcal{O}_C(2))=H^1(C,\mathcal{O}_C(4))=0$.
Hence: $$T_S|_C=\mathcal{O}_C(-2)\oplus\mathcal{O}_C(2).$$
As a consequence $H_2\iso {\mathbb P}(\mathcal{E})$ as claimed.
Therefore, by \cite[Chapter V, Proposition 2.3 and Proposition 2.9]{Hartshorne}, we know that $\Pic H_2=\Z C_0\oplus\Z f$, with $C_0^2=-4$, $f^2=0$ and $C_0\cdot f=1$;
$C_0$ being the class of a specific section and $f$ the class of a fiber.
\end{proof} \begin{lemme}\label{intersection} We have $\ell_{\delta}\cdot\delta=-1$ and $C\cdot D_C=-2$ in $S^{[2]}$. \end{lemme} \begin{proof} We denote by $\widetilde{\ell_{\delta}}$ a fiber associated to the exceptional divisor $\Delta_{\widetilde{S^2}}\rightarrow \Delta_{S^{2}}$. We know that $\widetilde{\ell_{\delta}}\cdot \Delta_{\widetilde{S^2}}=-1$. We can deduce for instance from \cite[Lemma 3.6]{Menet-2018} that $\ell_{\delta}\cdot \Delta=-2$. That is $\ell_{\delta}\cdot \delta=-1$.
Similarly, we have $(C\times S+S\times C)\cdot(s\times C+C\times s)=-4$. By \cite[Lemma 3.6]{Menet-2018} (see \ref{Smith2}): \begin{align*} &\rho_*(C\times S+S\times C)\cdot\rho_*(s\times C+C\times s)=-8\\ &\rho_*(C\times S)\cdot\rho_*(s\times C)=-2. \end{align*} Then taking the pull-back by $\nu$, we obtain $C\cdot D_C=-2$. \end{proof} \subsubsection*{Strategy}
We will need several steps to find the wall divisors on $M'$: \begin{itemize} \item[I.] Understand the curves contained in $S^{[2]}$. \item[II.] Understand the curves contained in $N_1$. \item[III.] Deduce the corresponding wall divisors in $M'$ using Proposition \ref{extremalray}, Corollaries \ref{Rdelta}, and \ref{Sigma'}.
\end{itemize} \subsubsection*{Curves in $S^{[2]}$} The first step is to determine the curves contained in $S^{[2]}$. Before that, we can say the following about curves in $S^{(2)}$. \begin{lemme}\label{S2curves} There are only two cases for irreducible curves in $S^{(2)}$: \begin{itemize} \item[(1)] the curves $C^s:=\rho(C\times s)=\rho(s\times C)$ with $s\notin C$; \item[(2)] curves in $C^{(2)}\simeq \mathbb{P}^2$. \end{itemize} \end{lemme} \begin{proof}
In $S\times S$, considering the images of curves by the projections $p_1$ and $p_2$ of diagram (\ref{S2}), there are only two possibilities: \begin{itemize} \item $s\times C$ or $C\times s$, with $s$ a point in $S$, \item curves contained in $C\times C$. \end{itemize} Then, we obtain all the curves in $S^{(2)}$ as images by $\rho$ of curves in $S\times S$. \end{proof} This yields four cases in $S^{[2]}$. \begin{lemme}\label{curveS2} We have the following four cases for irreducible curves in $S^{[2]}$: \begin{itemize} \item[(0)] Curves which are fibers of the exceptional divisor $\Delta\rightarrow \Delta_{S^{(2)}}$. As in Section \ref{genericM'}, we denote these curves by $\ell_{\delta}^s$, where $s=\nu(\ell_{\delta}^s)$ and we denote their classes by $\ell_{\delta}$. \item[(1)] Curves which are strict transforms of $C^s$ with $s\notin C$. We denote the class of these curves by $C$. Note that $C=\nu^*(\left[C^s\right])$ for $s\notin C$. \item[(2a)] Curves contained in $H_2$. \item[(2b)] Curves contained in $\overline{C^{(2)}}$. \end{itemize} \end{lemme} \begin{proof} Let $\gamma$ be an irreducible curve in $S^{[2]}$. By Lemma \ref{S2curves}, there are three cases for $\nu(\gamma)$: \begin{itemize} \item[(0)] $\nu(\gamma)$ is a point and $\gamma$ is a fiber of the exceptional divisor; \item[(1)] $\nu(\gamma)=C^s$, with $s\notin C$. \item[(2)] $\nu(\gamma)\subset C^{(2)}$. \end{itemize} Moreover, case (2) can be divided into two sub-cases: $\nu(\gamma)=\Delta_{S^{(2)}}^{C}$ or $\nu(\gamma)\nsubseteq\Delta_{S^{(2)}}^{C}$. This provides cases (2a) and (2b). \end{proof}
Now, we are going to determine the classes of the extremal curves in cases (2a) and (2b) in two lemmas. \begin{lemme}\label{C0} We have $j^{H_2}_*(C_0)=2(C-\ell_{\delta})$ and $j^{H_2}_*(f)=\ell_{\delta}$. \end{lemme} \begin{proof} It is clear that $j^{H_2}_*(f)=\ell_{\delta}$; we now compute $j^{H_2}_*(C_0)$. Necessarily, $j^{H_2}_*(C_0)=aC+b\ell_{\delta}$. We can consider the intersection with $\Delta$ and $D_C$ and use Lemma \ref{intersection} to determine $a$ and $b$. We consider the following commutative diagram: $$\xymatrix{H_2\ar@{=}[d]\ar@{^{(}->}[r]^{\widetilde{j^{H_2}}}&\widetilde{S\times S}\ar[d]^{\widetilde{\rho}}\\ H_2\ar@{^{(}->}[r]^{j^{H_2}}&S^{[2]}.}$$ By commutativity of the diagram, we have: \begin{equation}
j^{H_2}_*(C_0)=\widetilde{\rho}_*\widetilde{j^{H_2}}_*(C_0). \label{heuuu} \end{equation}
By \cite[Propositions 2.6 and 2.8]{Hartshorne}, we have: $$C_0\cdot \widetilde{j^{H_2}}^*(\Delta_{\widetilde{S^2}})=C_0\cdot \mathcal{O}_{\mathcal{E}}(-1)=-C_0\cdot(C_0+2f)=4-2=2.$$ By the projection formula, this means: $$\widetilde{j^{H_2}}_*(C_0)\cdot \Delta_{\widetilde{S^2}}=2.$$ Taking the push-forward by $\widetilde{\rho}$, we obtain by \cite[Lemma 3.6]{Menet-2018} (see (\ref{Smith2})): $$\widetilde{\rho}_*\widetilde{j^{H_2}}_*(C_0)\cdot\widetilde{\rho}_*(\Delta_{\widetilde{S^2}})=4.$$ Hence by (\ref{heuuu}): $$j^{H_2}_*(C_0)\cdot\Delta=4.$$ Hence by Lemma \ref{intersection}: $$b=-2.$$ We have $D_C=\nu^*(\rho_*(C\times S))$. So by the projection formula: $$D_C\cdot j^{H_2}_*(C_0)=\rho_*(C\times S)\cdot\left[\Delta_{S^{(2)}}^C\right]=\rho_*(C\times S)\cdot\rho_*(s \times C+C\times s)=2\rho_*(C\times S)\cdot\rho_*(s \times C).$$ Taking the pull-back by $\nu$ of the last equality, we obtain: $$D_C\cdot j^{H_2}_*(C_0)=2D_C\cdot C.$$ So $a=2$, which concludes the proof.
\end{proof} \begin{lemme}\label{stricttransform} We have $j^C_*(d)=C-\ell_{\delta}$, where $d$ is the class of a line in $\overline{C^{(2)}}\simeq\mathbb{P}^2$. \end{lemme} \begin{proof} We consider the following commutative diagram: $$\xymatrix{\overline{C^{(2)}}\eq[d]\ar@{^{(}->}[r]^{j^C}&S^{[2]}\ar[d]^{\nu}\\ C^{(2)}\ar@{^{(}->}[r]^{j^C_0}&S^{(2)}.}$$
Let $\gamma$ be an irreducible curve in $\overline{C^{(2)}}$. Since $\overline{C^{(2)}}$ is the strict transform of $C^{(2)}$ by $\nu$, $j^{C}(\gamma)$ is the strict transform of $j^{C}_0(\gamma)$ by $\nu$. Hence to compute $j^C_*(d)$ for $d$ the class of a line, it is enough to find a curve in $C^{(2)}$ with class $d$ and determine its strict transform by $\nu$. For instance $C^s$ with $s\in C$ verifies $\left[C^s\right]=d$ in $C^{(2)}$. Moreover, $C^s$ intersects $\Delta_{S^{(2)}}$ transversely in one point. It follows that $j^C_*(d)=C-\ell_{\delta}$.
\end{proof} \subsubsection*{Curves in $N_1$} \begin{lemme}\label{curvesN1} We have the following cases for irreducible curves in $N_1$: \begin{itemize} \item[(00)] Curves contracted to a point by $r_1$. They are fibers of the exceptional divisor $\Sigma_1\rightarrow \Sigma$ and their class is $\ell_{\Sigma}$. \item[(0)] Curves sent to $\ell_{\delta}^s$ by $r_1$. Such an extremal curve has class $r_1^*(\ell_{\delta})$ by Lemma \ref{Rdeltalemma}. \item[(1)] Curves sent to $C^{s}$ by $r_1$ with $s\notin C$. They are curves of class $r_1^*(C)$. \item[(2a.i)] Curves contained in $r_1^{-1}(H_2\cap \Sigma)$. \item[(2a.ii)] Curves contained in $\overline{H_2}$ the strict transform of $H_2$ by $r_1$. \item[(2b.i)] Curves contained in $r_1^{-1}(\overline{C^{(2)}}\cap \Sigma)$. \item[(2b.ii)] Curves contained in $\overline{\overline{C^{(2)}}}$ the strict transform of $\overline{C^{(2)}}$ by $r_1$. \end{itemize} \end{lemme} \begin{proof} Let $\gamma$ be an irreducible curve in $N_1$. If $r_1(\gamma)$ is a point, we are in case (00). If $r_1(\gamma)$ is a curve, we are in one of the cases of Lemma \ref{curveS2}.
If $r_1(\gamma)=\ell_{\delta}^s$ for some $s\in S$, we are in cases (i) and (ii) of Section \ref{genericM'}.
It follows from Remark \ref{mainldelta} that the extremal curves in case (0) have classes $r_1^*(\ell_{\delta})$.
If $r_1(\gamma)=C^s$ with $s\notin C$, then $C^s$ does not intersect $\Sigma$ and we have $\left[\gamma\right]=r_1^*(C)$. The last four cases appear when $r_1(\gamma)\subset H_2$ or $r_1(\gamma)\subset \overline{C^{(2)}}$. \end{proof} Now we are going to determine the classes of the curves in cases (2a.i), (2a.ii), (2b.i) and (2b.ii). \begin{lemme}\label{H2Sigma} The extremal curves in $r_1^{-1}(H_2\cap \Sigma)$ are of classes $\ell_{\Sigma}$ or $r_1^*(\ell_{\delta})$. \end{lemme} \begin{proof}
Since $C$ is the unique curve contained in $S$, the involution $\iota$ on $S$ restricts to an involution of $C$. Since $\iota$ is a symplectic involution, $\iota$ does not act trivially on $C$. Moreover, since $C\simeq \mathbb{P}^1$, $\iota_{|C}$ has two fixed points $x$ and $y$. It follows that $\iota^{(2)}_{|\Delta_{C}}$ also has two fixed points $(x,x)$ and $(y,y)$. Hence $H_2\cap \Sigma=\ell_{\delta}^x\cup \ell_{\delta}^y$. The surfaces $r_1^{-1}(\ell_{\delta}^x)$ and $r_1^{-1}(\ell_{\delta}^y)$ are Hirzebruch surfaces and by Lemma \ref{Rdeltalemma}, they are isomorphic to $\mathbb{P}^1\times\mathbb{P}^1$. Then the extremal curves of these Hirzebruch surfaces have classes $\ell_{\Sigma}$ or $r_1^*(\ell_{\delta})$ in $N_1$. \end{proof} \begin{lemme}
Let $C_0$ be the class of the section in $\overline{H_2}$ obtained in Lemma \ref{Hirzebruch}.
Then $j^{\overline{H_2}}_*(C_0)=2\left(r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma}\right)$. \end{lemme} \begin{proof} As explained in the proof of the previous lemma, $H_2\cap \Sigma$ corresponds to two fibers of the Hirzebruch surface $H_2$. Hence $j^{H_2}(C_0)$ and $\Sigma$ intersect in two points. The class $j^{\overline{H_2}}_*(C_0)$ corresponds to the class of the strict transform by $r_1$ of $j^{H_2}(C_0)$. By Lemma \ref{C0}, $\left[j^{H_2}(C_0)\right]=2(C-\ell_{\delta})$. Hence $j^{\overline{H_2}}_*(C_0)=2\left(r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma}\right)$. \end{proof} \begin{lemme}\label{deltaiota}
The curve $\Delta_{S^{(2)}}^{\iota,C}$ is a line in $C^{(2)}\simeq \mathbb{P}^2$.
\end{lemme} \begin{proof} The map $$\xymatrix{\eq[d]C^2\ar[r]^{\rho}_{2:1}&C^{(2)}\eq[d]\\ \mathbb{P}^1\times \mathbb{P}^1 &\mathbb{P}^2}$$
is a two to one ramified cover. We recall that $\Delta_{S^{2}}^{\iota,C}=\left\{\left.(x,\iota(x))\right|\ x\in C\right\}$. We have: $$\left[\Delta_{S^{2}}^{\iota,C}\right]_{S^2}=\left[C\times s\right]_{S^2}+\left[s\times C \right]_{S^2}.$$
It follows that: $$\rho_*\left(\left[{\Delta_{S^{2}}^{\iota,C}}\right]_{S^2}\right)=2\left[C^s\right]_{S^{(2)}}.$$ Moreover, $\rho: \Delta_{S^{2}}^{\iota,C}\rightarrow \Delta_{S^{(2)}}^{\iota,C}$ is a two-to-one cover, so $\left[\Delta_{S^{(2)}}^{\iota,C}\right]_{S^{(2)}}=\left[C^s\right]_{S^{(2)}}$.
Hence $\left[\Delta_{S^{(2)}}^{\iota,C}\right]_{C^{(2)}}=\left[C^s\right]_{C^{(2)}}$ and $\left[\Delta_{S^{(2)}}^{\iota,C}\right]_{C^{(2)}}$ is the class of a line in $C^{(2)}$. \end{proof} \begin{lemme} The surface $r_1^{-1}(\overline{C^{(2)}}\cap \Sigma)$ is a Hirzebruch surface that we denote by $H_1$. Let $f$ be a fiber of $H_1$. There exists a section $D_0$ which satisfies: $\Pic H_1=\Z D_0\oplus\Z f$, $D_0^2=-2$, $f^2=0$ and $D_0\cdot f=1$. Moreover, $j^{H_1}_*(D_0)=r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma}$, where $j^{H_1}:H_1\hookrightarrow N_1$ is the embedding.
\end{lemme} \begin{proof} We denote by $\zeta$ the curve $\overline{C^{(2)}}\cap \Sigma$. This curve is the strict transform of $\Delta_{S^{(2)}}^{\iota,C}$ by $\nu$. In particular, it is a rational curve by Lemma \ref{deltaiota}. Moreover by Lemmas \ref{deltaiota} and \ref{stricttransform}, we have: \begin{equation} \left[\zeta\right]_{S^{[2]}}=C-\ell_{\delta}. \label{gammadelta} \end{equation}
To understand which Hirzebruch surface $H_1$ is, we are going to compute $\mathcal{N}_{\Sigma/S^{[2]}}|_{\zeta}=\mathcal{O}_{\zeta}(-k)\oplus\mathcal{O}_{\zeta}(k)$. We consider $\widetilde{\zeta}:=\widetilde{\rho}^{-1}(\zeta)$.
We have: $$\widetilde{\rho}^*(\mathcal{N}_{\Sigma/S^{[2]}})|_{\widetilde{\zeta}}=\widetilde{\nu}^*(\mathcal{N}_{S_{\iota}/S^2}|_{\Delta_{S^{2}}^{\iota,C}}).$$ As in the proof of Lemma \ref{Hirzebruch}, we have:
$$\mathcal{N}_{S_{\iota}/S^2}|_{\Delta_{S^{2}}^{\iota,C}}\simeq T_S|_C\simeq \mathcal{O}_C(-2)\oplus\mathcal{O}_C(2).$$
Hence $$\widetilde{\rho}^*(\mathcal{O}_{\zeta}(-k)\oplus\mathcal{O}_{\zeta}(k))=\widetilde{\rho}^*(\mathcal{N}_{\Sigma/S^{[2]}})|_{\widetilde{\zeta}}=\mathcal{O}_{\widetilde{\zeta}}(-2)\oplus\mathcal{O}_{\widetilde{\zeta}}(2).$$ Since $\widetilde{\rho}:\widetilde{\zeta}\rightarrow \zeta$ is a two-to-one cover, we obtain $k=1$, that is:
$$\mathcal{N}_{\Sigma/S^{[2]}}|_{\zeta}=\mathcal{O}_{\zeta}(-1)\oplus\mathcal{O}_{\zeta}(1).$$ By \cite[Chapter V, Proposition 2.3 and Proposition 2.9]{Hartshorne}, there exists a section $D_0$ such that
$\Pic H_1=\Z D_0\oplus\Z f$, with $D_0^2=-2$, $f^2=0$ and $D_0\cdot f=1$;
$f$ being the class of a fiber. Moreover, by (\ref{gammadelta}) and using the projection formula, we know that:
$$j^{H_1}_*(D_0)=r_1^*(C)-r_1^*(\ell_{\delta})+a\ell_{\Sigma}.$$ To compute $a$, we only need to compute $D_0\cdot j^{H_1*}(\Sigma_1)$. We apply the same method used in the proof of Lemma \ref{C0}. By \cite[Propositions 2.6 and 2.8]{Hartshorne}, we have: $$D_0\cdot j^{H_1*}(\Sigma_1)=D_0\cdot \mathcal{O}_{\mathbb{P}(\mathcal{O}_{\zeta}(-1)\oplus\mathcal{O}_{\zeta}(1))}(-1)=-D_0\cdot(D_0+f)=2-1=1.$$ This proves that $a=-1$.
\end{proof} \begin{lemme}
Let $d$ be the class of a line in $\overline{\overline{C^{(2)}}}$. Then $\overline{j^{C}}_*(d)=r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma}$. \end{lemme} \begin{proof}
For instance, $\overline{j^{C}}_*(d)$ corresponds to the class of the strict transform of $C^s$ (for $s\in C$) under $\nu$ and then under $r_1$. Let $\overline{C^s}$ be the strict transform of $C^s$ by $\nu$. As we have already seen in Lemma \ref{stricttransform}, $\left[\overline{C^s}\right]_{S^{[2]}}=j^{C}_*(d)=C-\ell_{\delta}$. The intersection $\overline{C^{(2)}}\cap\Sigma$ corresponds to the strict transform of $\Delta_{S^{(2)}}^{\iota,C}$ by $\nu$ and by Lemma \ref{deltaiota}, it has the class of a line in $\overline{C^{(2)}}$. Hence $\overline{C^s}$ intersects $\Sigma$ transversely in one point and we obtain: $\overline{j^{C}}_*(d)=r_1^*(C-\ell_{\delta})-\ell_{\Sigma}$.
\end{proof} \subsubsection*{Conclusion on wall divisors} \begin{lemme} The extremal curves of $M'$ have classes $\pi_{1*}r_1^*\ell_{\delta}$, $\pi_{1*}\ell_{\Sigma}$ and $\pi_{1*}(r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma})$. \end{lemme} \begin{proof}
Our previous investigation of curves in $N_1$ shows that the extremal curves in $N_1$ have classes $r_1^*\ell_{\delta}$, $\ell_{\Sigma}$ and $r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma}$. This implies that the extremal curves in $M'$ have classes $\pi_{1*}r_1^*\ell_{\delta}$, $\pi_{1*}\ell_{\Sigma}$ and $\pi_{1*}(r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma})$. \end{proof} We can compute their dual divisors to obtain wall divisors with Proposition \ref{extremalray}. \begin{prop}\label{walldiv1} The divisors $\delta'$, $\Sigma'$, $D_C'$ and $D_C'-\frac{1}{2}(\delta'+\Sigma')$ are wall divisors in $M'$. Moreover, they satisfy the following numerical properties: \begin{itemize} \item $q_{M'}(\delta')=q_{M'}(\Sigma')=q_{M'}(D_C')=-4$
and ${\rm div}(\delta')={\rm div}(\Sigma')={\rm div}(D_C')=2$;
\item $q_{M'}\left(D_C'-\frac{1}{2}(\delta'+\Sigma')\right)=-6$ and ${\rm div}\left[D_C'-\frac{1}{2}(\delta'+\Sigma')\right]=2$. \end{itemize} \end{prop} \begin{proof}
By Lemma \ref{dualdeltasigma}, $\frac{1}{2}\delta'$ and $\frac{1}{2}\Sigma'\in H^2(M',{\mathbb Q})$ are the duals of $\pi_{1*}(r_1^*(\ell_{\delta}))$ and $\pi_{1*}(\ell_{\Sigma})$ respectively. Moreover by Lemma \ref{intersection}, we know that $D_C\cdot C=-2$, hence by (\ref{Smith2}): $$D_C'\cdot \pi_{1*}(r_1^*(C))=-4.$$ Moreover, since $D_C=j(C)$ where $j$ is the isometric embedding $H^2(S,\Z)\hookrightarrow H^2(S^{[2]},\Z)$, we have: $q_{S^{[2]}}(D_C)=-2$. So by Theorem \ref{BBform} (ii): $$q_{M'}(D_C')=-4.$$ We obtain that $D_C'$ is the dual of $\pi_{1*}(r_1^*(C))$. Then $D_C'-\frac{1}{2}(\delta'+\Sigma')$ is the dual of $\pi_{1*}(r_1^*(C)-r_1^*(\ell_{\delta})-\ell_{\Sigma})$.
By Proposition \ref{extremalray} this proves that $\delta'$, $\Sigma'$ and $D_C'-\frac{1}{2}(\delta'+\Sigma')$ are wall divisors in $M'$. Their claimed numerical properties follow from Theorem \ref{BBform} (iii), (iv) and (v), Remark \ref{div} and the computation of $q_{M'}(D_C')$ above.
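For instance, as a consistency check: since $D_C'=r^*\pi_*(D_C)$ lies in $r^*\pi_*(j(H^2(S,\Z)))$ by Remark \ref{Smithcomute}, it is orthogonal to $\delta'$ and $\Sigma'$ by Theorem \ref{BBform} (v), and $(\delta',\Sigma')_{q_{M'}}=0$ by Theorem \ref{BBform} (iv) applied to $\delta'=r^*\pi_*(\delta)$, so that
\begin{equation*}
q_{M'}\Big(D_C'-\tfrac{1}{2}(\delta'+\Sigma')\Big)=q_{M'}(D_C')+\tfrac{1}{4}\big(q_{M'}(\delta')+q_{M'}(\Sigma')\big)=-4-2=-6.
\end{equation*}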
It remains to show that $D_C'$ is a wall divisor. By Proposition \ref{caca}, since $D_C'$ is a uniruled divisor, we have $(D_C',\alpha)_{q_{M'}}\geq0$ for all $\alpha\in\mathcal{B}\mathcal{K}_{M'}$. Since $\mathcal{B}\mathcal{K}_{M'}$ is open, it follows that $(D_C',\alpha)_{q_{M'}}>0$ for all $\alpha\in\mathcal{B}\mathcal{K}_{M'}$.
Now, we assume that there exists $g\in \Mon_{\Hdg}^2(M')$ and $\alpha\in\mathcal{B}\mathcal{K}_{M'}$ such that $(g(D_C'),\alpha)_{q_{M'}}=0$ and we will find a contradiction. Since $g\in \Mon_{\Hdg}^2(M')$ and $\Pic(M')=\Z D_C'\oplus \Z \frac{\delta'+\Sigma'}{2}\oplus\Z\frac{\delta'-\Sigma'}{2}$, there are only 6 possibilities: $$g(D_C')=\left\{
\begin{array}{ll}
\pm D_C' & \text{or}\\
\pm \delta' & \text{or}\\
\pm \Sigma'. &
\end{array} \right.$$ Since $(D_C',\alpha)_{q_{M'}}\neq0$, $(\delta',\alpha)_{q_{M'}}\neq0$ and $(\Sigma',\alpha)_{q_{M'}}\neq0$, this leads to a contradiction in each case. \end{proof} \subsection{When the K3 surface $S$ used to construct the Nikulin orbifold contains two rational curves swapped by the involution}\label{sec:twocurves} \subsubsection*{Framework} Let $\Lambda_{K3}:=U^3\oplus^\bot E_8(-1)\oplus^\bot E_8(-1)$ be the K3 lattice. For the whole of this section we fix embeddings into $\Lambda_{K3}$ of three lattices $\mathcal{U}\simeq U^3$, $E_1\simeq E_8(-1)$ and $E_2\simeq E_8(-1)$ such that $\Lambda_{K3}\simeq \mathcal{U}\oplus^\bot E_1\oplus^\bot E_2$. We consider the involution $i$ on $\Lambda_{K3}$ which exchanges $E_1$ and $E_2$ and fixes the lattice $\mathcal{U}$.
We consider $C\in E_1$ such that $C^2=-2$. We define $E^a:=\left\{\left.e-i(e)\right|\ e\in E_1\right\}\simeq E_8(-2)$. By the surjectivity of the period map (see for instance Theorem \ref{mainGTTO}), we can choose a K3 surface $S$ such that $$\Pic S=\Z C\oplus E^a.$$ Then $S$ contains only two rational curves: one of class $C$ and the other of class $i(C)=C-(C-i(C))$. It follows from Example \ref{examplewall} and Corollary \ref{cor:desrK} that there exists $\alpha\in \mathcal{K}_S$ invariant under the action of $i$. Hence by Theorem \ref{mainHTTO} (ii), the involution $i$ extends to an involution $\iota$ on $S$ such that $\iota^*=i$. Of course, we could refer to older results on K3 surfaces to construct $\iota$; however, we thought it simplest for the reader to refer to results stated in this paper.
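Let us also record a small observation which is used implicitly below: since $C\in E_1$ and $i(C)\in E_2$ are orthogonal in $\Lambda_{K3}$, we have $C\cdot\iota(C)=0$; as $C$ and $\iota(C)$ are two distinct irreducible curves, they are therefore disjoint.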
As in Section \ref{onecurve}, the objective is to determine wall divisors of the Nikulin orbifold $M'$ obtained from the pair $(S^{[2]},\iota^{[2]})$ (see Section \ref{M'section}). \subsubsection*{Notation and strategy} We keep the same notation and the same strategy used in Section \ref{onecurve}. We also still use the notation from Section \ref{genericM'}. In particular, we denote by $C$ and $\iota(C)$ the two curves in $S$. \subsubsection*{Curves in $S^{[2]}$} First, we determine the curves in $S^{(2)}$. \begin{lemme}\label{S2curves2} There are 5 cases for irreducible curves in $S^{(2)}$: \begin{itemize} \item[(1)] the curves $C^s:=\rho(C\times s)=\rho(s\times C)$ with $s\notin C\cup\iota(C)$; \item[(2)] the curves $\iota(C)^s:=\rho(\iota(C)\times s)=\rho(s\times \iota(C))$ with $s\notin C\cup\iota(C)$; \item[(3)] curves in $C^{(2)}\simeq \mathbb{P}^2$; \item[(4)] curves in $\iota(C)^{(2)}\simeq \mathbb{P}^2$; \item[(5)] curves in $\iota(C)\times C=C\times \iota(C)\simeq \mathbb{P}^1\times\mathbb{P}^1$. \end{itemize} \end{lemme} \begin{proof} The proof is the same as for Lemma \ref{S2curves}. \end{proof} This yields the following cases in $S^{[2]}$. \begin{lemme}\label{curveS22} We have the following cases for irreducible curves in $S^{[2]}$: \begin{itemize} \item[(0)] Curves which are fibers of the exceptional divisor $\Delta\rightarrow \Delta_{S^{(2)}}$. As in Section \ref{genericM'}, we denote these curves by $\ell_{\delta}^s$, where $s=\nu(\ell_{\delta}^s)$ and we denote their classes by $\ell_{\delta}$. \item[(1)] Curves which are strict transforms of $C^s$ with $s\notin C\cup\iota(C)$. We denote the class of these curves by $C$. Note that $C=\nu^*(\left[C^s\right])$ for $s\notin C$. \item[(2)] Curves which are strict transforms of $\iota(C^s)$ with $s\notin C\cup\iota(C)$. The class of these curves is $\iota^*(C)$. \item[(3a)] Curves contained in $H_2$. \item[(3b)] Curves contained in $\overline{C^{(2)}}$. \item[(4a)] Curves contained in $\iota(H_2)$. \item[(4b)] Curves contained in $\iota(\overline{C^{(2)}})$. \item[(5)] Curves in $\iota(C)\times C$. \end{itemize} \end{lemme} \begin{proof} The proof is similar to that of Lemma \ref{curveS2}. We only remark in addition that $\nu^{-1}(\iota(C)\times C)\simeq\iota(C)\times C$ because $C$ and $\iota(C)$ do not intersect; hence $\iota(C)\times C$ does not intersect $\Delta_{S^{(2)}}$. So by an abuse of notation, we still denote $\nu^{-1}(\iota(C)\times C)$ by $\iota(C)\times C$. \end{proof} \subsubsection*{Curves in $N_1$} \begin{lemme}\label{curvesN12} We have the following cases for irreducible curves in $N_1$: \begin{itemize} \item[(00)] Curves contracted to a point by $r_1$. They are fibers of the exceptional divisor $\Sigma_1\rightarrow \Sigma$ and their class is $\ell_{\Sigma}$. \item[(0)] Curves sent to $\ell_{\delta}^s$ by $r_1$. Such an extremal curve has class $r_1^*(\ell_{\delta})$ by Lemma \ref{Rdeltalemma}. \item[(1)] Curves sent to $C^{s}$ by $r_1$ with $s\notin C\cup \iota(C)$. They are curves of class $r_1^*(C)$. \item[(2)] Curves sent to $\iota(C^{s})$ by $r_1$ with $s\notin C\cup \iota(C)$. They are curves of class $r_1^*(\iota^*(C))$. \item[(3a)] Curves contained in $\overline{H_2}$ the strict transform of $H_2$ by $r_1$. \item[(3b)] Curves contained in $\overline{\overline{C^{(2)}}}$ the strict transform of $\overline{C^{(2)}}$ by $r_1$. \item[(4a)] Curves contained in $\iota(\overline{H_2})$ the strict transform of $\iota(H_2)$ by $r_1$.
\item[(4b)] Curves contained in $\iota(\overline{\overline{C^{(2)}}})$ the strict transform of $\iota(\overline{C^{(2)}})$ by $r_1$. \item[(5a)] Curves contained in $\overline{\iota(C)\times C}$ the strict transform of $\iota(C)\times C$ by $r_1$. \item[(5b)] Curves contained in $r_1^{-1}(\overline{\iota(C)\times C}\cap\Sigma)$. \end{itemize} \end{lemme} \begin{proof} The proof is similar to the proof of Lemma \ref{curvesN1}; the difference is that $H_2$, $\iota(H_2)$, $\overline{C^{(2)}}$ and $\iota(\overline{C^{(2)}})$ do not intersect $\Sigma$. Only $\iota(C)\times C$ intersects $\Sigma$. \end{proof} Now, we are going to determine the classes of all these curves. \begin{lemme} \begin{itemize} \item[(3a)] The extremal curves of $\overline{H_2}$ have classes $2(r_1^*(C)-r_1^*(\ell_{\delta}))$ and $r_1^*(\ell_{\delta})$ in $N_1$. \item[(3b)] The extremal curves of $\overline{\overline{C^{(2)}}}$ have class $r_1^*(C)-r_1^*(\ell_{\delta})$ in $N_1$. \item[(4a)] The extremal curves of $\iota(\overline{H_2})$ have classes $2(\iota^*(r_1^*(C))-r_1^*(\ell_{\delta}))$ and $r_1^*(\ell_{\delta})$ in $N_1$. \item[(4b)] The extremal curves of $\iota(\overline{\overline{C^{(2)}}})$ have class $\iota^*(r_1^*(C))-r_1^*(\ell_{\delta})$ in $N_1$. \item[(5a)] The extremal curves of $\overline{\iota(C)\times C}$ have class $r_1^*(C)-\ell_{\Sigma}$ and $r_1^*(\iota^*(C))-\ell_{\Sigma}$ in $N_1$. \end{itemize} \end{lemme} \begin{proof} \begin{itemize} \item Since $H_2$ and $\iota(H_2)$ do not intersect $\Sigma$, (3a) and (4a) are consequences of Lemma \ref{C0}. \item Similarly, since $\overline{C^{(2)}} $ and $\iota(\overline{C^{(2)}})$ do not intersect $\Sigma$, (3b) and (4b) are consequences of Lemma \ref{stricttransform}. \item We have $\overline{\iota(C)\times C}\simeq \iota(C)\times C\simeq \mathbb{P}^1\times\mathbb{P}^1$. Let $j^{\iota}:\overline{\iota(C)\times C}\hookrightarrow N_1$ be the embedding in $N_1$. We want to compute the classes $j^{\iota}_*(\left\{x\right\}\times \mathbb{P}^1)$ and $j^{\iota}_*(\mathbb{P}^1\times \left\{x\right\})$, where $\left\{x\right\}$ is just the class of a point in $\mathbb{P}^1$. This corresponds to compute the strict transform by $r_1$ of $C^s$ with $s\in\iota(C)$ and the strict transform of $\iota(C^s)$ with $s\in C$. Since $C^s$ and $\iota(C^s)$ intersect $\Sigma$ in one point, we obtain our result. \end{itemize} \end{proof} \begin{lemme}\label{H2'} The surface $r_1^{-1}(\overline{\iota(C)\times C}\cap\Sigma)$ is a Hirzebruch surface that we denote by $H_2'$. Let $f$ be a fiber of $H_2'$. There exists a section $C_0'$ which satisfies: $\Pic H_2=\Z C_0'\oplus\Z f$, $C_0'^2=-4$, $f^2=0$ and $C_0'\cdot f=1$. Moreover, $j^{H_2'}_*(C_0')=r_1^*(C)+r_1^*(\iota^*(C))-2\ell_{\Sigma}$, where $j^{H_2'}:H_2'\hookrightarrow N_1$ is the embedding. \end{lemme} \begin{proof} We denote by $\zeta$ the curve $\overline{\iota(C)\times C}\cap\Sigma$. This curve is the strict transform of $\Delta_{S^{(2)}}^{\iota,C}$ by $\nu$. Since $\Delta_{S^{(2)}}^{\iota,C}$ does not intersect $\Delta$, its class in $S^{[2]}$ is: \begin{equation} \left[\zeta\right]=C+\iota^*(C). \label{gammadelta2} \end{equation}
To understand which Hirzebruch surface $H_2'$ is, we are going to compute $\mathcal{N}_{\Sigma/S^{[2]}}|_{\zeta}$.
Let $\Delta_{\widetilde{S^2}}^{\iota,C}$ be the strict transform of $\Delta_{S^{2}}^{\iota,C}$ by $\widetilde{\nu}$. We have $\zeta\simeq \Delta_{\widetilde{S^2}}^{\iota,C}\simeq \Delta_{S^{2}}^{\iota,C}$.
Hence: $$\mathcal{N}_{\Sigma/S^{[2]}}|_{\zeta}=\rho^*(\mathcal{N}_{\Sigma/S^{[2]}})|_{\Delta_{\widetilde{S^2}}^{\iota,C}}=\widetilde{\nu}^*(\mathcal{N}_{S_{\iota}/S^2}|_{\Delta_{S^{2}}^{\iota,C}}).$$ As in the proof of Lemma \ref{Hirzebruch}, we have:
$$\mathcal{N}_{S_{\iota}/S^2}|_{\Delta_{S^{2}}^{\iota,C}}\simeq T_S|_C\simeq \mathcal{O}_C(-2)\oplus\mathcal{O}_C(2).$$
Hence $$\mathcal{N}_{\Sigma/S^{[2]}}|_{\zeta}=\mathcal{O}_{\zeta}(-2)\oplus\mathcal{O}_{\zeta}(2).$$
By \cite[Chapter V, Proposition 2.3 and Proposition 2.9]{Hartshorne}, we know that there exists a section $C_0'$ of $H_2'$ such that $\Pic H_2'=\Z C_0'\oplus\Z f$, with $C_0'^2=-4$, $f^2=0$ and $C_0'\cdot f=1$;
$f$ being the class of a fiber.
By (\ref{gammadelta2}) and using the projection formula, we know that:
$$j^{H_2'}_*(C_0')=r_1^*(C)+r_1^*(\iota^*(C))+a\ell_{\Sigma}.$$ To compute $a$, we only need to compute $C_0'\cdot j^{H_2'*}(\Sigma_1)$. We apply the same method used in the proof of Lemma \ref{C0}. By \cite[Propositions 2.6 and 2.8]{Hartshorne}, we have: $$C_0'\cdot j^{H_2'*}(\Sigma_1)=C_0'\cdot \mathcal{O}_{\mathbb{P}(\mathcal{O}_{\zeta}(-2)\oplus\mathcal{O}_{\zeta}(2))}(-1)=-C_0'\cdot(C_0'+2f)=4-2=2.$$ This proves that $a=-2$.
\end{proof} \subsubsection*{Conclusion on wall divisors} \begin{lemme}\label{extrem2} The extremal curves of $M'$ have classes $\pi_{1*}r_1^*\ell_{\delta}$, $\pi_{1*}\ell_{\Sigma}$, $\pi_{1*}(r_1^*(C)-r_1^*(\ell_{\delta}))$ and $\pi_{1*}(r_1^*(C)-\ell_{\Sigma})$. \end{lemme} \begin{proof} This is obtained by taking the images under $\pi_{1*}$ of the classes of the extremal curves in $N_1$. \end{proof} We can compute their dual divisors to obtain wall divisors with Proposition \ref{extremalray}. \begin{prop}\label{prop:twocurves} The divisors $\delta'$, $\Sigma'$, $D_C'$, $2D_C'-\delta'$ and $2D_C'-\Sigma'$ are wall divisors in $M'$. Moreover, they satisfy the following numerical properties: \begin{itemize} \item $q_{M'}(\delta')=q_{M'}(\Sigma')=-4$ and ${\rm div}(\delta')={\rm div}(\Sigma')=2$; \item $q_{M'}(D_C')=-2$ and ${\rm div}(D_C')=1$; \item $q_{M'}\left(2D_C'-\delta'\right)=q_{M'}\left(2D_C'-\Sigma'\right)=-12$ and ${\rm div}\left(2D_C'-\delta'\right)={\rm div}\left(2D_C'-\Sigma'\right)=2$. \end{itemize} \end{prop} \begin{proof}
By Lemma \ref{dualdeltasigma}, $\frac{1}{2}\delta'$ and $\frac{1}{2}\Sigma'\in H^2(M',{\mathbb Q})$ are the duals of $\pi_{1*}(r_1^*(\ell_{\delta}))$ and $\pi_{1*}(\ell_{\Sigma})$ respectively. Moreover by Lemma \ref{intersection}, we know that $D_C\cdot C=-2$, hence by (\ref{Smith2}): $$\pi_{1*}(r_1^*(D_C+\iota^*(D_C)))\cdot \pi_{1*}(r_1^*(C+\iota^*(C)))=-8.$$ So $$D_C'\cdot \pi_{1*}(r_1^*(C))=-2.$$ Moreover, since $D_C=j(C)$ where $j$ is the isometric embedding $H^2(S,\Z)\hookrightarrow H^2(S^{[2]},\Z)$, we have: $q_{S^{[2]}}(D_C)=-2$. So by Theorem \ref{BBform} (ii): $$q_{M'}(\pi_{1*}(r_1^*(D_C+\iota^*(D_C))))=-8.$$ Hence: \begin{equation} q_{M'}(D_C')=-2. \label{DC'2} \end{equation} We obtain that $D_C'$ is the dual of $\pi_{1*}(r_1^*(C))$. Then $D_C'-\frac{1}{2}\delta'$ is the dual of $\pi_{1*}(r_1^*(C)-r_1^*(\ell_{\delta}))$ and $D_C'-\frac{1}{2}\Sigma'$ is the dual of $\pi_{1*}(r_1^*(C)-\ell_{\Sigma})$.
By Proposition \ref{extremalray} this proves that $\delta'$, $\Sigma'$, $2D_C'-\delta'$ and $2D_C'-\Sigma'$ are wall divisors in $M'$. Their claimed numerical properties are given by Theorem \ref{BBform}(iii), Remark \ref{div} and (\ref{DC'2}).
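For instance, granting that $D_C'$ is orthogonal to $\delta'$ and $\Sigma'$ with respect to $q_{M'}$ (it comes from the invariant part of the lattice described in Theorem \ref{BBform}), the value $-12$ can be recovered directly: $$q_{M'}(2D_C'-\delta')=4q_{M'}(D_C')+q_{M'}(\delta')=4\cdot(-2)+(-4)=-12,$$ and the same computation applies to $2D_C'-\Sigma'$.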
It remains to prove that $D_C'$ is a wall divisor. The proof is very similar to the one of Proposition \ref{walldiv1}. For the same reason, we have $(D_C',\alpha)_{q_{M'}}>0$ for all $\alpha\in\mathcal{B}\mathcal{K}_{M'}$.
Now, we assume that there exists $g\in \Mon_{\Hdg}^2(M')$ and $\alpha\in\mathcal{B}\mathcal{K}_{M'}$ such that $(g(D_C'),\alpha)_{q_{M'}}=0$ and we will find a contradiction. Since $C\in E_1$, we have ${\rm div}(D_C')=1$. Since $g\in \Mon_{\Hdg}^2(M')$ and $\Pic(M')=\Z D_C'\oplus \Z \frac{\delta'+\Sigma'}{2}\oplus\Z\frac{\delta'-\Sigma'}{2}$, it follows that there are only 2 possibilities: $$g(D_C')=\pm D_C',$$ because ${\rm div}(D_C')=1$ and ${\rm div}(\frac{\delta'+\Sigma'}{2})=2$. Since $(D_C',\alpha)_{q_{M'}}\neq0$, this leads to a contradiction. \end{proof} \begin{rmk}\label{Remark:twocurves} Note that in this case, $D_C'-\frac{\delta'+\Sigma'}{2}$ is not a wall divisor. Indeed, by Lemma \ref{extrem2}, the class $-2D_C'-\delta'-\Sigma'$ is the projection on $\Pic M'$ of a Kähler class. However, observe that $\left(D_C'-\frac{\delta'+\Sigma'}{2},-2D_C'-\delta'-\Sigma'\right)_q=0$. \end{rmk} \subsection{Wall divisors on a Nikulin orbifold constructed from a specific elliptic K3 surface}\label{sec:elliptic} As before, we consider the K3 lattice $\Lambda_{K3}:=U^3\oplus^\bot E_8(-1)\oplus^\bot E_8(-1)$ with the three embedded lattices $\mathcal{U}\simeq U^3$, $E_1\simeq E_8(-1)$ and $E_2\simeq E_8(-1)$ such that $\Lambda_{K3}\simeq \mathcal{U}\oplus^\bot E_1\oplus^\bot E_2$. The involution $i$ on $\Lambda_{K3}$ is still the involution which exchanges $E_1$ and $E_2$ and fixes the lattice $\mathcal{U}$.
As before, we keep $E^a:=\left\{\left.e-i(e)\right|\ e\in E_1\right\}\simeq E_8(-2)$. For simplicity, we write $E_8(-2):=\left\{\left.e+i(e)\right|\ e\in E_1\right\}$ for this copy of $E_8(-2)$ inside the invariant lattice.
Let $L_1\in \mathcal{U}$ be an element with $L_1^2=2$ and let $e_2^{(0)}\in E_1$ be an element with $(e_2^{(0)})^2=-4$. Using the surjectivity of the period map (see for instance Theorem \ref{mainGTTO}), we choose a K3 surface $S$ such that: $$\Pic S=\Z(L_1+e_2^{(0)}) \oplus E^a.$$ Note that the direct sum is not orthogonal. We denote: $$v_{K3}:=2L_1+e_2,$$ with $e_2:=e_2^{(0)}+i(e_2^{(0)})$. We have $v_{K3}^2=4L_1^2+e_2^2=8-8=0$ and $\Pic S\supset \Z v_{K3} \oplus^{\bot} E^a$.
As before, it follows from Example \ref{examplewall} and Corollary \ref{cor:desrK} that there exists $\alpha\in \mathcal{K}_S$ invariant under the action of $i$. Hence by Theorem \ref{mainHTTO} (ii), the involution $i$ extends to an involution $\iota$ on $S$ such that $\iota^*=i$. We consider $M'$ constructed from the couple $(S,\iota)$.
In contrast to the two previous sections, we will not need to find all the extremal curves in this case.
The wall divisors will be deduced from the investigation of this section and the numerical properties obtained in Section \ref{endsection}.
The K3 surface $S$ contains a (-2)-curve of class $L_1+e_2^{(0)}$. We denote this curve $\gamma$. The class of $\iota(\gamma)$ is $L_1+i(e_2^{(0)})$. Hence $\gamma\cup\iota(\gamma)$ has class $v_{K3}$ and provides a fiber of the elliptic fibration $f:S\rightarrow \Pj^1$. Moreover, we have: $$\left[\gamma\right]\cdot \left[\iota(\gamma)\right]=(L_1+e_2^{(0)})\cdot(L_1+i(e_2^{(0)}))=2.$$ We denote by $\overline{\gamma}$ the class $\nu^*(\left[\gamma^s\right])$, with $\gamma^s:=\gamma\times\left\{s\right\}$. We also denote $D_{\gamma}:=j(\gamma)$ and $D_{\gamma}':=\pi_{1*}(r_1^*(D_{\gamma}))$.
We consider the following divisor in $S^{(2)}$:
$$A:=\left\{\left.\left\{x,y\right\}\in S^{(2)}\right|\ f(x)=f(y)\right\},$$ with $f$ the elliptic fibration $S\rightarrow \Pj^1$. We denote by $A'$ the image by $\pi_1$ of the strict transform of $A$ by $r_1\circ\nu$. \begin{lemme}\label{ellipticlemma} We have: \begin{itemize} \item[(i)] The class of the strict transform of $\gamma^s$ by $r_1\circ \nu$ is $r_1^{*}(\overline{\gamma}-\ell_{\delta})-\ell_{\Sigma}$, for $s\in \gamma\cap \iota(\gamma)$. \item[(ii)] The dual of $\pi_{1*}(r_1^*(\overline{\gamma}))$ is $D_{\gamma}'$.
\item[(iii)] The divisor $D_{\gamma}'$ has square 0 and divisibility 1. \item[(iv)] The divisor $A'$ has class $D_{\gamma}'-\frac{\delta'+\Sigma'}{2}$. \end{itemize} \end{lemme} \begin{proof} \begin{itemize} \item[(i)] The statement follows directly from the fact that $\gamma^s$ intersects $\Delta_{S^{(2)}}$ and $\Sigma$ in one point. \item[(ii)] Let $w\in E_8(-2)$ be an invariant element under the action of $\iota$ such that $\gamma\cdot w=:k$.
Then \begin{equation} \overline{\gamma}\cdot j(w)=(D_\gamma,j(w))_{q_{S^{[2]}}}=k. \label{dualgamma} \end{equation} We set $w':=\pi_{1*}(r_1^*(j(w)))$. It follows by (\ref{Smith2}) that $\pi_{1*}(r_1^*(\overline{\gamma}+\iota^{[2]*}(\overline{\gamma})))\cdot w'=4k$. Hence \begin{equation} \pi_{1*}(r_1^*(\overline{\gamma}))\cdot w'=2k. \label{gammaprime} \end{equation}
Moreover by Theorem \ref{BBform} (ii), Remark \ref{Smithcomute} and (\ref{dualgamma}), we have: $$(\pi_{1*}r_1^{*}(D_\gamma+\iota^{[2]*}(D_\gamma)),\pi_{1*}(r_1^*(j(w))))_{q_{M'}}=4k.$$ Hence: \begin{equation} (D_\gamma',w')_{q_{M'}}=2k. \label{w} \end{equation}
We obtain that $D_\gamma'$ is the dual of $\pi_{1*}(r_1^*(\overline{\gamma}))$.
\item[(iii)] We have by Theorem \ref{BBform} (ii) and Remark \ref{Smithcomute} $$q_{M'}(\pi_{1*}r_1^{*}(D_\gamma+\iota^{[2]*}(D_\gamma)))=2q_{S^{[2]}}(D_\gamma+\iota^{[2]*}(D_\gamma))=0.$$ Hence $q_{M'}(\pi_{1*}r_1^{*}(D_\gamma))=0$. To prove that ${\rm div}(D_\gamma')=1$, we choose a specific $w$: $w=w^{(0)}+i(w^{(0)})$ such that $w^{(0)}\in E_1$ and $w^{(0)}\cdot e_2^{(0)}=1$; then $w\cdot\gamma=1$. Then by (\ref{w}) we have $(D_\gamma',w')_{q_{M'}}=2$. However $w'$ is divisible by 2. We obtain our result. \item[(iv)] Since $A$ is invariant under the action of $\iota$ and considering the intersection with $\rho_*(w\times \left\{pt\right\})$, we see that the class of $A$ is given by $\rho_*(\left[S\times v_{K3}\right])$. Then the strict transform $\widetilde{A}$ by $r_1\circ \nu$ has class $r_1^*(j(v_{K3})-\delta)-\Sigma_1$ because $A$ contains the surfaces $\Delta_{S^{(2)}}$ and $\Sigma$. We recall that $\pi_{1*}(j(v_{K3}))=2 D_\gamma'$. Therefore and since $\widetilde{A}\rightarrow A'$ is a double cover, $A'$ has class $\frac{1}{2}(2D_\gamma'-\delta'-\Sigma_1')$. \end{itemize} \end{proof}
\begin{lemme}\label{monolemma}
The reflection $R_{A'}$ is a monodromy operator and $R_{A'}(\Sigma')=2D_{\gamma}'-\delta'$.
\end{lemme}
\begin{proof}
The reflection $R_{A'}$ is a monodromy operator by \cite[Theorem 3.10]{Lehn2}.
Moreover, since $q_{M'}(A')=-2$ and $(\Sigma',A')_{q_{M'}}=2$: $$R_{A'}(\Sigma')=\Sigma'-\frac{2(\Sigma',A')_q}{q(A')}A'=\Sigma'+2A'=2D_\gamma'-\delta'.$$
\end{proof}
\begin{lemme}\label{extremallemma}
There exists an extremal curve of $M'$ whose class can be written $a\pi_{1*}r_1^*(\overline{\gamma})+b\pi_{1*}r_1^*(\ell_{\delta})+c\pi_{1*}(\ell_{\Sigma})$ with $(a,b,c)\in\Z^3$ and $a>0$. \end{lemme} \begin{proof}
The curves $\pi_{1*}r_1^*(\overline{\gamma})$, $\pi_{1*}r_1^*(\ell_{\delta})$ and $\pi_{1*}(\ell_{\Sigma})$ are primitive in $H^{3,3}(M',\Z)$. Indeed, we have seen in the proof of Lemma \ref{ellipticlemma} that we can choose $k=1$ in equation (\ref{gammaprime}). Moreover $w'$ is divisible by 2. We obtain: $\pi_{1*}r_1^*(\gamma)\cdot \frac{1}{2}w'=1$. Similarly, by Lemma \ref{dualdeltasigma}, we know that $\pi_{1*}r_1^*(\ell_{\delta})$ and $\pi_{1*}(\ell_{\Sigma})$ are primitive by considering the intersection with $\frac{\delta'+\Sigma'}{2}$.
Therefore, the class of a curve in $M'$ can be written $a\pi_{1*}r_1^*(\overline{\gamma})+b\pi_{1*}r_1^*(\ell_{\delta})+c\pi_{1*}(\ell_{\Sigma})$, with $(a,b,c)\in\Z^3$.
If we consider the pull-back by $\pi_1$ and the push-forward by $r_1\circ \nu$ of the class $a\pi_{1*}r_1^*(\overline{\gamma})+b\pi_{1*}r_1^*(\ell_{\delta})+c\pi_{1*}(\ell_{\Sigma})$, we obtain $2a\overline{\gamma}$. For a curve, there are two possibilities: $a=0$ or $a>0$; it is not possible to have $a=0$ for every curve, hence there exists an extremal curve as mentioned in the statement of the lemma. \end{proof} \begin{rmk}\label{wallelliptic} Let $a\pi_{1*}r_1^*(\overline{\gamma})+b\pi_{1*}r_1^*(\ell_{\delta})+c\pi_{1*}(\ell_{\Sigma})$ be the class of the extremal curve obtained from Lemma \ref{extremallemma}.
By Proposition \ref{extremalray}, the dual of this curve class is a wall divisor. According to Lemmas \ref{ellipticlemma} and \ref{dualdeltasigma}, that is $aD_{\gamma}'+\frac{b}{2}\delta'+\frac{c}{2}\Sigma'$. \end{rmk} \begin{lemme}\label{mainelliptic}
Let $E=aD_{\gamma}'+\frac{b}{2}\delta'+\frac{c}{2}\Sigma'$ be the wall divisor obtained from Remark \ref{wallelliptic}, rescaled if necessary so that $E$ is primitive in $H^2(M',\Z)$. Moreover, we assume that $E$ satisfies one of the numerical conditions listed in the statement of Theorem \ref{main}. Then:
$$E=D_{\gamma}'-\frac{\delta'+\Sigma'}{2}.$$ \end{lemme} \begin{proof}
We have:
$$q_{M'}(E)=a^2q_{M'}(D_\gamma')+\frac{b^2}{4}q_{M'}(\delta')+\frac{c^2}{4}q_{M'}(\Sigma')=-(b^2+c^2).$$ Considering the numerical conditions of Theorem \ref{main}, there are three possibilities: \begin{itemize}
\item[(i)]
$b=\pm1$ and $c=\pm1$ or
\item[(ii)]
$b=\pm2$ and $c=0$ or
\item[(iii)]
$b=0$ and $c=\pm2$. \end{itemize} In cases (ii) and (iii), $q_{M'}(E)=-4$. In this case, following the conditions in Theorem \ref{main}, we know that $E$ has divisibility 2. Hence by Lemma \ref{ellipticlemma} (iii), $a$ is divisible by 2. This corresponds to the extremal rays of curves $a\pi_{1*}r_1^*(\gamma)\pm2\pi_{1*}r_1^*(\ell_{\delta})$ or $a\pi_{1*}r_1^*(\gamma)\pm2\pi_{1*}(\ell_{\Sigma})$. However, by Lemma \ref{ellipticlemma} (i) these rays cannot be extremal.
Therefore, $E=aD_{\gamma}'+\frac{\pm1}{2}\delta'+\frac{\pm1}{2}\Sigma'$. Moreover, the extremal ray associated to $E$ has class: $$a\pi_{1*}r_1^*(\gamma)\pm\pi_{1*}r_1^*(\ell_{\delta})\pm\pi_{1*}(\ell_{\Sigma}).$$ However, we know by Lemma \ref{ellipticlemma} (i) that $\pi_{1*}r_1^*(\gamma)-\pi_{1*}r_1^*(\ell_{\delta})-\pi_{1*}(\ell_{\Sigma})$ is the class of a curve. Hence the only possibility for the previous extremal ray is $\pi_{1*}r_1^*(\gamma)-\pi_{1*}r_1^*(\ell_{\delta})-\pi_{1*}(\ell_{\Sigma})$.
\end{proof}
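\begin{rmk} With the same conventions as above, $q_{M'}\left(D_{\gamma}'-\frac{\delta'+\Sigma'}{2}\right)=q_{M'}(D_\gamma')-2=-2$; this is consistent with the numerical properties under which this class reappears in Section \ref{endsection}. \end{rmk}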
\section{Monodromy orbits}\label{sec:monodromy-orbits} In this section, we study the orbits of classes in the lattice $H^2(X,{\mathbb Z})$ under the action of $\Mon^2(X)$ for an irreducible symplectic orbifold $X$ of Nikulin-type. Note that since the property of being a wall divisor is deformation invariant (see Theorem \ref{wall}), we may assume without loss of generality that $X$ is the Nikulin orbifold $M'$ for a given K3 surface with symplectic involution.
The main result of this section is Theorem \ref{thm:9monorb-M'}, which describes a set of representatives in each monodromy orbit. This will enable us to determine wall divisors for Nikulin-type orbifolds in the next section by checking only the representatives.
For completeness, note that we did not determine the precise monodromy orbit of each element. Since we only used a subgroup of the actual monodromy group, it could happen that more than one of the elements in Theorem \ref{thm:9monorb-M'} belongs to the same orbit.
\subsection{Equivalence of lattices} \label{sec:eq-lattices}
\begin{lemme}\label{lem:twist-spec}
For the questions at hand, the consideration of the following two lattices is equivalent: $$\Lambda\coloneqq \Lambda_{M'}=U(2)^{ 3} \oplus E_8(-1)\oplus (-2) \oplus (-2)$$ and $$\hat{\Lambda} \coloneqq U^{ 3} \oplus E_8(-2)\oplus (-1) \oplus (-1).$$
More precisely, there is a natural correspondence between lattice automorphisms for both lattices, and a natural identification between the rays in both lattices.
\end{lemme} This is a special case of the following: \begin{lemme}\label{lem:twist-gen}
Let $M$ and $N$ be two unimodular lattices.
Then $L\coloneqq M\oplus N(2)$ and $\hat{L}\coloneqq M(2)\oplus N$ satisfy the following
properties:
There exists a natural identification between lattice automorphisms for both lattices, and
the rays in both lattices can be naturally identified. \end{lemme} \begin{proof}
Observe that by multiplying the quadratic form of the lattice $L$ by $2$, we obtain a lattice
$L(2) \iso M(2)\oplus N(4)$, which obviously satisfies that the rays and
automorphisms are naturally identified for $L$ and $L(2)$.
Notice that $N(4)$ can be identified with the sublattice of $N$ consisting of elements of the form $\{2n\,|\,
n\in N\}$. Therefore, we can naturally include $L (2) \subset \hat{L}$.
This immediately implies the natural identification of rays in $L$ and $\hat{L}$.
For the identification of automorphisms, observe that any automorphism $\hat{\varphi}\in \Aut(\hat{L})$
preserves the sublattice $L(2)$: In fact $L(2)\subset \hat{L}$ consists precisely of those
(not necessarily primitive) elements whose divisibility is even, and this subset needs to be preserved by
any automorphism. This yields a natural inclusion $\Aut(\hat{L})\subset \Aut(L(2))\iso
\Aut(L)$.
The reverse inclusion is obtained by the same argument applied to $\hat{L}(2)\subset L$. \end{proof}
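For instance, Lemma \ref{lem:twist-spec} is obtained from Lemma \ref{lem:twist-gen} by choosing (this is one possible choice of splitting) the unimodular lattices $$M=E_8(-1)\qquad\text{and}\qquad N=U^{3}\oplus(-1)\oplus(-1),$$ so that $M\oplus N(2)\iso\Lambda$ and $M(2)\oplus N\iso\hat{\Lambda}$.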
Fix a K3 surface $S$ with a symplectic involution $\iota$ and consider the induced Nikulin orbifold $M'$ associated to $S$. Further, fix a marking $\varphi_S\colon H^2(S,{\mathbb Z}) \to \Lambda_{K3}=U^3\oplus E_8(-1)^2$ of $S$ such that $\iota^*$ corresponds to swapping the two copies of $E_8(-1)$. Then the fixed part of $\iota^*$ is isomorphic to $U^3 \oplus E_8(-2)$.
Note that on $X$, which we choose to be the orbifold $M'$ associated to $(S,\iota)$, this induces a marking $\varphi_{X}\colon H^2(X,{\mathbb Z})\to \Lambda=\Lambda_{M'}=U(2)^{ 3} \oplus E_8(-1)\oplus (-2)^2$ by Theorem \ref{BBform}, where the $U(2)^{ 3} \oplus E_8(-1)$-part comes from the invariant lattice of the K3 surface (precisely as described in Lemma \ref{lem:twist-gen}), and the two $(-2)$-classes correspond to $\frac{\delta' + \Sigma'}{2}$ and $\frac{\delta' - \Sigma'}{2}$.
Therefore, the sublattice $U^{ 3} \oplus E_8(-2)$ in $\hat{\Lambda}$ can naturally be identified with the fixed part of $H^2(S,{\mathbb Z})$, whereas the generators of square $(-1)$ correspond to $\frac{\hdel + \hSig}{2}$ and $\frac{\hdel - \hSig}{2}$ for the corresponding elements $\hdel, \hSig \in \hat{\Lambda}$.
With this notation, we can define the group $\Mon^2(\hat{\Lambda})$ of monodromy operators for the lattice $\hat{\Lambda}$: an automorphism $\hat{\varphi} \in \Aut(\hat{\Lambda})$ is in $\Mon^2(\hat{\Lambda})$ if the corresponding automorphism $\varphi \in \Aut(\Lambda)$ is identified with an element of $\Mon^2(X)$ via the marking $\varphi_X$.
In the following we will frequently consider the sublattice $$\hat{\Lambda}_1 \coloneqq U^{ 3} \oplus E_8(-2)\oplus (-2) \oplus (-2) \subset \hat{\Lambda},$$ which replaces the $(-1)\oplus (-1)$-part by the sublattice generated by $\hdel$ and $\hSig$.
Define $$\Mon^2(\hat{\Lambda}_1) \coloneqq \{f \in \Aut (\hat{\Lambda}_1) \,|\, \exists \hat{f} \in \Mon^2(\hat{\Lambda}):
f=\hat{f}|_{\hat{\Lambda}_1}\}.$$
\begin{rmk}
Note that while there exists an identification
$\Mon^2(X) =\Mon^2(\hat{\Lambda})$,
there exists a natural inclusion $\Mon^2(\hat{\Lambda}_1) \subseteq \Mon^2(\hat{\Lambda})$ but a priori this
is not an equality. \TODO{try to understand if this is an equality or not} \end{rmk}
Note that Proposition \ref{MonoM'} can be reformulated in terms of the lattice $\hat{\Lambda}_1$: \begin{cor}\label{cor:inheritedMononHat}
Let $f\in \Mon^2(S^{[2]})$ be a monodromy operator such
that $f \circ \iota^{[2]*} = \iota^{[2]*} \circ f$ on $H^2(S^{[2]},{\mathbb Z})$.
Let $\hat{f}\in \Aut(\hat{\Lambda}_1)$ be the automorphism defined via the following properties:
Via the marking described above, $\hat{f}$ restricted to $U^{ 3}\oplus E_8(-2)\oplus (-2)$ coincides with the restriction of
$f$ to the invariant part of the lattice (i.e.~$\hat{f}|_{U^{ 3}\oplus E_8(-2)\oplus (-2)}=
f|_{H^2(S^{[2]},{\mathbb Z})^{\iota^{[2]}}} $) and $\hat{f}(\hSig)=\hSig$.
Then $\hat{f}\in \Mon^2(\hat{\Lambda}_1)$. \end{cor} \begin{proof} This is a straightforward verification: Proposition \ref{MonoM'} gives the inherited monodromy operator
$f'\in \Mon^2(\Lambda)$ and $\hat{f}\in \Aut(\hat{\Lambda}_1)$ is precisely the restriction to
$\hat{\Lambda}_1$ of the corresponding automorphism of $\hat{\Lambda}$ obtained via Lemma \ref{lem:twist-spec}. \end{proof}
For the proof of Theorem \ref{thm:9monorb-M'} we will study monodromy orbits with respect to successively increasing lattices: \begin{equation*}\hat{\Lambda}_3\subset \hat{\Lambda}_2 \subset \hat{\Lambda}_1, \end{equation*} where $\hat{\Lambda}_3\coloneqq U^{ 3}\oplus (-2)$ with the generator $\hdel$ for the $(-2)$-part, $\hat{\Lambda}_2\coloneqq U^{ 3}\oplus E_8(-2) \oplus (-2) $, and $\hat{\Lambda}_1$ is as defined above.
Define the following monodromy groups for these lattices. \begin{align*}
\Mon^2(\hat{\Lambda}_2)&=\{f\in \Aut(\hat{\Lambda}_2)| \exists f_1 \in \Mon^2(\hat{\Lambda}_1) : f=f_1|_{\hat{\Lambda}_2}, f_1|_{\hat{\Lambda}_2^\perp}=\id\}\\
\Mon^2(\hat{\Lambda}_3)&=\{f\in \Aut(\hat{\Lambda}_3)| \exists f_1 \in \Mon^2(\hat{\Lambda}_1) : f=f_1|_{\hat{\Lambda}_3}, f_1|_{\hat{\Lambda}_3^\perp}=\id\}. \end{align*} Note that with this definition, there exist natural inclusions $\Mon^2(\hat{\Lambda}_3)\subseteq \Mon^2(\hat{\Lambda}_2)\subseteq \Mon^2(\hat{\Lambda}_1)$.
\subsection{Monodromy orbits in $\hat{\Lambda}_3$}\label{subsec:monK32-part}
In this subsection, we consider the sublattice $\hat{\Lambda}_3 = U^{ 3} \oplus (-2)\subset \hat{\Lambda}_1$, where the generator of $(-2)$ is the class $\hdel$.
\begin{nota} For the rest of this article, fix elements $L_i \in U \subset \hat{\Lambda}_1$ of square $2i$ for each $i\in {\mathbb Z}$. E.g. one can choose the elements $ie + f$, where $e,f$ is a standard basis for which $U$ has intersection matrix $\begin{pmatrix} 0&1\\ 1& 0 \end{pmatrix}$. \end{nota}
\begin{lemme}\label{lem:K3-2-part}
The $\Mon^2(\hat{\Lambda}_3)$-orbit of a primitive element in $U^{ 3} \oplus (-2)$ is uniquely determined by
its square and its divisibility.
More precisely, we prove the following:
Let $v \in \hat{\Lambda}_3$ be a primitive element.
Then there exists a monodromy operator $f\in \Mon^2(\hat{\Lambda}_3)$ such that
$v$ is moved to an element of the following form:
$f(v)=\left\{ \begin{array}{lll} L_{i} &\textrm{with\ } i=\frac{1}{2}q(v) & \textrm{if\ }{\rm div}(v)=1 \\ 2L_{i} -\hdel &\textrm{with\ } i=\frac{1}{8}(q(v)+2) & \, \textrm{if\ } {\rm div}(v)=2 \\ \end{array} \right. $ \end{lemme} The proof will make use of the following two well-known statements:
The first is the Eichler criterion, which we will frequently use in this section (see \cite[Lemma 3.5]{Gritsenko-Hulek-Sankaran}, originally due to \cite[Chapter 10]{Eichler}\TODO{add a working reference}). \begin{lemme}\label{lem:Eichler}
Let $\Gamma$ be a lattice with $U^{ 2} \subseteq \Gamma$. Fix two elements $v,w \in \Gamma$ which
satisfy
\begin{enumerate}
\item $q(v) = q(w)$,
\item ${\rm div}(v)={\rm div}(w) =: r$,
\item $\frac{v}{r}=\frac {w}{r} \in A_\Gamma \coloneqq \Gamma^\vee / \Gamma$.
\end{enumerate}
Then there exists $\varphi \in \Aut(\Gamma)$ such that
$\varphi(v)=w$, and such that the induced action $\varphi_A$ on the discriminant group $A_\Gamma$ is the identity. \end{lemme}
Furthermore, recall the following description of the monodromy group of varieties of $K3^{[2]}$-type: \begin{thm}[{see \cite[Lemma 9.2]{Markman11}}] \label{thm:MonK32}
For a K3 surface $S$ the monodromy group $\Mon^2(S^{[2]})$ coincides with $O^+(\Lambda_{K3^{[2]}})$, which is the index two subgroup of $\Aut(\Lambda_{K3^{[2]}})$ consisting of automorphisms preserving the positive cone. \end{thm}
\begin{proof}[Proof of Lemma \ref{lem:K3-2-part}]
The proof of Lemma \ref{lem:K3-2-part} is an immediate consequence of the previous statements.
First apply the Eichler criterion (Lemma \ref{lem:Eichler}) for $\Gamma =\hat{\Lambda}_3$:
Observe that for a given element $v\in U^{ 3}\oplus (-2)$ the claimed image element has
the same square and divisibility.
In the case of ${\rm div}(v)=1$, note that $\frac{v}{1} = 0 \in A_{\hat{\Lambda}_3}$ (since $\frac{v}{1} \in
\hat{\Lambda}_3$). Therefore, the Eichler criterion applies automatically in this case.
For the case of ${\rm div}(v)=2$, note that $v$ can be written as $v=aL + b\hdel$ with a primitive element $L\in
U^{ 3}$ and $\hdel$ as before.
The fact that ${\rm div}(v)=2$ implies that $a$ is divisible by two.
Since we assumed that $v$ is primitive, $b$ is odd. Note that $A_{\hat{\Lambda}_3} \iso {\mathbb Z}/2{\mathbb Z}$ is
spanned by the image of $\frac{\hdel}{2}$, since $U$ is unimodular.
Therefore, we can also apply the Eichler criterion in this case.
In both cases, the Eichler criterion shows that there is $f \in \Aut(\hat{\Lambda}_3)$ such that
$f(v)$ coincides with the claimed image.
Extending this by the identity on the respective orthogonal complements, we will by abuse of notation
consider $f \in \Aut (\Lambda_{K3^{[2]}})$ resp.~$f\in \Aut(\hat{\Lambda}_1)$.
Applying Theorem \ref{thm:MonK32}, we observe that up to potentially swapping a sign in one of the copies of $U$, $f\in \Aut (\Lambda_{K3^{[2]}})$ is in fact an
element of $\Mon^2(\Lambda_{K3^{[2]}})$.
Therefore, Corollary \ref{cor:inheritedMononHat} implies that $f\in \Aut (\hat{\Lambda}_1)$
is in $\Mon^2(\hat{\Lambda}_1)$ as claimed. \end{proof}
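\begin{rmk} As a sanity check for the representative in the divisibility 2 case of Lemma \ref{lem:K3-2-part}: $$q(2L_i-\hdel)=4\cdot 2i+(-2)=8i-2,$$ and since $2L_i$ pairs evenly with $U^{3}$ while $(2L_i-\hdel)\cdot\hdel=2$, we get ${\rm div}(2L_i-\hdel)=2$ and $\frac{2L_i-\hdel}{2}\equiv\frac{\hdel}{2}$ in $A_{\hat{\Lambda}_3}$, as used in the proof above. \end{rmk}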
\subsection{Monodromy orbits in the lattice $\hat{\Lambda}_2$} In this section, we study the monodromy group for the lattice $\hat{\Lambda}_2 = U^3\oplus E_8(-2) \oplus (-2)$. Notice that via the identifications described in Section \ref{sec:eq-lattices} the lattice $\hat{\Lambda}_2$ corresponds to the ${\iota^{[2]}}^*$-invariant lattice of $\Lambda_{K3^{[2]}}$. Let us refine the methods from the previous section to describe properties of the monodromy orbits in this lattice $\hat{\Lambda}_2$. For this we need to deal with the $E_8(-2)$-part of the lattice.
For the basic notions concerning discriminant groups of lattices, we refer to \cite{Nikulin}. Recall that $E_8$ is a unimodular lattice and therefore the discriminant group $A_{E_8}=0$ is trivial. Pick a basis of $E_8(-2)$ for which the intersection matrix is $(-2)$ times the one associated to the $E_8$-graph:
$$\begin{pmatrix} -4 & 2 & 0 & 0 & 0 & 0 & 0 & 0\\ 2 & -4 & 2 & 0 & 0 & 0 & 0 & 0\\ 0 & 2 & -4 & 2 & 0 & 0 & 0 & 0\\ 0 & 0 & 2 & -4 & 2 & 0 & 0 & 0\\ 0 & 0 & 0 & 2 & -4 & 2 & 0 & 2\\ 0 & 0 & 0 & 0 & 2 & -4 & 2 & 0\\ 0 & 0 & 0 & 0 & 0 & 2 & -4 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 & 0 & -4 \end{pmatrix}. $$ For the lattice $E_8(-2)$ one can deduce that the discriminant group $A_{E_8(-2)}\iso ({\mathbb Z}/2{\mathbb Z})^{ 8}$, which is generated by the residue classes of one half of the generators of the lattice. The quadratic form $q$ on $E_8(-2)$ induces a quadratic form $\bar{q}\colon A_{E_8(-2)}\to {\mathbb Z}/2{\mathbb Z}$. The possible values of $\bar{q}$ on elements of $A_{E_8(-2)}$ are in fact 1 and 0 (these values are achieved e.g.~by the residue classes of $\frac{v}{2}$ for lattice elements $v\in E_8(-2)$ with squares $-4$ and $-8$).
Denote by $\Aut(A_{E_8(-2)})$ the automorphisms of $A_{E_8(-2)}$ preserving the quadratic form $\bar{q}$. Note that every automorphism $\varphi\in \Aut(E_8(-2))$ induces an element $\bar{\varphi}\in \Aut(A_{E_8(-2)})$. Therefore, there exists an induced action of $\Aut(E_8(-2))$ on $A_{E_8(-2)}$. \begin{lemme}\label{lem:A-E8-orbits}
There exist precisely three $\Aut(E_8(-2))$-orbits in $A_{E_8(-2)}$:
They are given by $0$, and by non-zero elements $\overline{e_1}, \overline{e_2}\in A_{E_8(-2)}$ with $\bar{q}(\overline{e_1})=1$ respectively $\bar{q}(\overline{e_2})=0$. \end{lemme} \begin{proof}
This lemma is a direct consequence of results in \cite[Chapter 4]{Griess}. Let $$\mathbb{L}_0=\left\{0\right\},\ \mathbb{L}_{-4}=\left\{\left.\alpha\in E_8(-2)\right|\ \alpha^2=-4\right\},\ \mathbb{L}_{-8}=\left\{\left.\alpha\in E_8(-2)\right|\ \alpha^2=-8\right\}.$$ As explained in \cite[just after Notation 4.2.32]{Griess}, the natural map $b:\mathbb{L}_0\cup \mathbb{L}_{-4} \cup \mathbb{L}_{-8}\rightarrow A_{E_8(-2)}$ is surjective. The images $b(\mathbb{L}_{-4})$ and $b(\mathbb{L}_{-8})$ correspond respectively to the elements of square 1 and the non trivial elements of square 0 in $A_{E_8(-2)}$.
However by \cite[Corollary 4.2.41]{Griess} and \cite[Lemma 4.2.46 (2)]{Griess} respectively, $\Aut(E_8(-2))$ acts transitively on $\mathbb{L}_{-8}$ and on $\mathbb{L}_{-4}$. The result follows.
\end{proof}
Consider $E_8(-2) \subset E_8(-1)\oplus E_8(-1)$ consisting of elements of the form $(e,e)$ for $e\in E_8(-1)$ and denote the sublattice consisting of elements of the form $(e, -e)$ by $E^a$. Note that this gets naturally identified with the anti-invariant lattice of $S$ via the marking described in Section
\ref{sec:eq-lattices}. \begin{lemme} \label{lem:extension}
With this notation, any lattice isometry $\varphi \in \Aut(\hat{\Lambda}_2)$ can be extended to a lattice isometry $\Phi \in \Aut(U^3\oplus
E_8(-1)\oplus E_8(-1)\oplus (-2))$, with the additional property, that $\Phi$ preserves the sublattice
$E^a$. \end{lemme} \begin{proof}
This is an immediate consequence of \cite[Corollary 1.5.2]{Nikulin} (applied to twice the lattice
$\hat{\Lambda}_2$) and the surjection $\Aut(E_8(-2)) \surj \Aut(A_{E_8(-2)})$ (which enables us to choose an appropriate
extension on the orthogonal complement $E^a$ of $\hat{\Lambda}_2$). \end{proof}
Fix two elements in $E_8(-2)$: one element $e_1$ of square $-4$ and one element $e_2$ of square $-8$. Note that according to Lemma \ref{lem:A-E8-orbits} the residue classes of $\frac{e_1}{2}$ and $\frac{e_2}{2}$ in $A_{E_8(-2)}$ represent the two non-zero orbits under the action of the isometry group. For coherence of the notation, adapt the choices such that the residue of $\frac{e_1}{2}$ in the discriminant is $\overline{e_1}$ and the residue of $\frac{e_2}{2}$ is $\overline{e_2}$. \begin{prop} \label{prop:6monorb-in-invariant}
Let $v\in \hat{\Lambda}_2$ be a primitive non-zero element. Denote by $v_{E_8}$ the projection of $v$ to the $E_8(-2)$-part
of the lattice, and let ${\bar{v}}_{E_8}$ be the image of $\half v_{E_8}$ in the discriminant group $A_{E_8(-2)}$.
Then there exists a monodromy operator
$f\in \Mon^2(\hat{\Lambda}_2)$ such that
$$f(v)=\left\{ \begin{array}{lllll} 1) &L_{i} & \textrm{if\ }{\rm div}(v)=1&\textrm{with\ } i=\frac{1}{2}q(v) & \\ 2) &2L_{i} - \hdel & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i-2, & \textrm{and\ } {\bar{v}}_{E_8}=0\\ 3) &2L_{i} + e_1 & \, \textrm{if\ } {\rm div} (v)=2,& q(v)=8i-4&\\ 4) &2L_{i+1} + e_2 & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i & \\ 5) &2L_{i} + e_1 - \hdel & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i-6 & \\ 6) &2L_{i+1} + e_2 -\hdel & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i-2, & \textrm{and\ } {\bar{v}}_{E_8}\neq 0.\\ \end{array} \right. $$ \end{prop} \begin{remark}
Note that the values of $q$ and ${\rm div}$ uniquely distinguish the orbit, except for the
cases 2 and 6, where the additional condition on ${\bar{v}}_{E_8}$ is needed to determine the known representative
of the orbit. \end{remark} \TODO{for ourselves, and probably for second version: If I remember correctly, we showed that the groups $\hdel^\perp$ and
$(e_2+\hdel)^\perp$ in $A_{\hat{\Lambda}_2}$ are not isomorphic, thus showing, that they actually represent
different orbits with respect to $\Aut(\hat{\Lambda}_2)$} \begin{proof}
Let us first observe that if ${\rm div}(v)=1$ the exact same proof as for Lemma \ref{lem:K3-2-part} for the case
of divisibility 1 applies.
Therefore, we only need to deal with the case where ${\rm div}(v)=2$.
Start by observing that the discriminant group of $\hat{\Lambda}_2$ is $A_{E_8(-2)}\times {\mathbb Z}/2{\mathbb Z}$.
For our given element $v\in \hat{\Lambda}_2$, denote by $\bar{v}$ the image of $\frac{v}{2}$ in the discriminant group, and let
$\bar{v}_e$ be the $A_{E_8(-2)}$-part of this.
By Lemma \ref{lem:A-E8-orbits}, there exists $\varphi \in \Aut(E_8(-2))$ such that $\bar{\varphi}(\bar{v}_e) \in
A_{E_8(-2)}$ coincides with one of $\{0,\overline{e_1},\overline{e_2}\}$. For the corresponding $\varphi_1
\in \Aut(E_8(-1))$ consider $(\varphi_1,\varphi_1) \in \Aut(E_8(-1)\oplus E_8(-1))$, which obviously commutes with
the swapping of the two factors and induces $\varphi$ on $E_8(-2)\subset E_8(-1)\oplus E_8(-1)$.
Extend this to
$\Phi \in \Aut(\Lambda_{K3^{[2]}})$ via the identity on the other direct summands.
By Theorem \ref{thm:MonK32} the operator $\Phi\in \Mon^2(S^{[2]})$ is in the monodromy group
and therefore the induced action on $\hat{\Lambda}_2$ is an element of $\Mon^2(\hat{\Lambda}_2)$ by Proposition \ref{MonoM'}. By construction, this
restricts to $\varphi \in \Aut(E_8(-2))$.
Therefore, up to first applying the above monodromy operator, we may assume that $\bar{v}\in
\{0,\overline{e_1},\overline{e_2}\}\times {\mathbb Z}/2{\mathbb Z}$.
For the second step, observe that cases 2) to 6) listed in the proposition correspond
precisely to the non-zero elements of $\{0,\overline{e_1},\overline{e_2}\}\times {\mathbb Z}/2{\mathbb Z}$.
By varying the parameter $i$, the elements in the list can furthermore achieve all possible values for
$q(v)$ with the prescribed residue in $\{0,\overline{e_1},\overline{e_2}\}\times {\mathbb Z}/2{\mathbb Z}$.
Therefore, for our given element $v \in \hat{\Lambda}_2$ with $\bar{v}\in
\{0,\overline{e_1},\overline{e_2}\}\times {\mathbb Z}/2{\mathbb Z}$, we can choose $v_0$ from the above list (for
appropriate choice of $i$) such that $q(v)=q(v_0)$ and $\bar{v}=\overline{v_0}\in A_{\hat{\Lambda}_2}$ (and
${\rm div}(v)={\rm div}(v_0)=2$ follows automatically).
Therefore, by the Eichler criterion (Lemma \ref{lem:Eichler}), there exists an automorphism $\varphi\in \Aut(\hat{\Lambda}_2)$ such that
$\varphi(v)=v_0$. This can be extended to an automorphism $\Phi \in \Aut(U^3\oplus E_8(-1)\oplus E_8(-1)\oplus
(-2))$ of the lattice $\Lambda_{K3^{[2]}}$ by Lemma \ref{lem:extension}.
Observe that up to changing a sign in one of the copies of $U$, we can assume that $\Phi\in
\Mon^2(\Lambda_{K3^{[2]}})$ by Theorem \ref{thm:MonK32}. Since this monodromy operator commutes with ${\iota^{[2]}}^*$ (it
preserves the invariant lattice and the anti-invariant lattice by construction) Proposition \ref{MonoM'}
shows that it induces a monodromy operator on $\Lambda_{M'}$ which in turn corresponds to $\varphi$ extended by the
identity via Lemma \ref{lem:twist-spec}. Therefore, again up to potentially changing a sign in one of the
copies of $U$, the automorphism $\varphi \in \Mon^2(\hat{\Lambda}_2)$. This
completes the proof. \end{proof}
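\begin{rmk} As a sanity check (using $q(L_i)=2i$, $q(e_1)=-4$, $q(e_2)=-8$, $q(\hdel)=-2$, and the fact that $L_i$, $\hdel$ and the $E_8(-2)$-part are mutually orthogonal), the representatives listed in Proposition \ref{prop:6monorb-in-invariant} have the stated squares; for instance $$q(2L_i+e_1)=8i-4,\qquad q(2L_{i+1}+e_2)=8(i+1)-8=8i,\qquad q(2L_i+e_1-\hdel)=8i-6.$$ \end{rmk}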
\subsection{Induced monodromy orbits on the lattice $\hat{\Lambda}_1$} Recall that $\hat{\Lambda}_1= U^{ 3} \oplus E_8(-2)\oplus (-2) \oplus (-2)$.
\begin{thm} \label{thm:9monorb}
Let $v\in \hat{\Lambda}_1$ be a primitive non-zero element. Denote by $v_{E_8}$ the projection of $v$ to the $E_8(-2)$-part
of the lattice, and let ${\bar{v}}_{E_8}$ be its image in the discriminant group $A_{E_8(-2)}$.
Then there exists a monodromy operator
$f\in \Mon^2(\hat{\Lambda}_1)$ such that
$$f(v)=\left\{ \begin{array}{lllll} 1) &L_{i} & \textrm{if\ }{\rm div}(v)=1&\textrm{with\ } i=\frac{1}{2}q(v) & \\ 2) &2L_{i} - \hdel & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i-2, & \textrm{and\ } {\bar{v}}_{E_8}=0\\ 3) &2L_{i+1} + e_2 -\hdel & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i-2, & \textrm{and\ } {\bar{v}}_{E_8}\neq 0\\ 4) &2L_i - \hdel - \hSig & \, \textrm{if\ } {\rm div} (v)=2,& q(v)=8i-4,&\textrm{and\ } {\bar{v}}_{E_8}=0\\ 5) &2L_{i+1} + e_2-\hdel - \hSig & \, \textrm{if\ } {\rm div} (v)=2,& q(v)=8i-4,&\textrm{and\ } \bar{q}({\bar{v}}_{E_8})=0, {\bar{v}}_{E_8}\neq 0\\ 6) &2L_{i} + e_1 & \, \textrm{if\ } {\rm div} (v)=2,& q(v)=8i-4,&\textrm{and\ } \bar{q}({\bar{v}}_{E_8})=1\\ 7) &2L_{i} + e_1 - \hdel & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i-6, & \\ 8) &2L_i + e_1 - \hdel - \hSig & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i-8,
& \textrm{and\ } \bar{q}({\bar{v}}_{E_8})=1\\ 9) &2L_{i+1} + e_2 & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=8i, & \textrm{and\ } \bar{q}({\bar{v}}_{E_8})=0.\\ \end{array} \right. $$ \end{thm} \TODO{Regroup cases 2 and 3 using Lemma \ref{monolemma}} \begin{rmk}\label{rem:L0goesaway}
Observe that whenever $L_0$ is involved in the statement of Theorem \ref{thm:9monorb}, it can be replaced by $0$ (apart from Case 1), since both elements are in the same monodromy orbit. \end{rmk}
\begin{proof} The proof of this theorem consists of a series of applications of Proposition \ref{prop:6monorb-in-invariant}
and the existence of the monodromy operator $R_{\frac{\hdel - \hSig}{2}}$ (compare Remark \ref{RdeltaSigma} and notation of Section \ref{notation}).
First note that since $v$ is primitive, it can be expressed as
$v=k\gamma + a\hdel + b \hSig$, where $\gamma\in U^3\oplus E_8(-2)$ is a primitive element, and $\gcd(a,b,k)=1$.
First let us assume that ${\rm div}(v)=1$.
The element $k\gamma + a\hdel \in \hat{\Lambda}_2$ corresponds to $\gcd(k,a)$ times a primitive element of
divisibility 1 inside $\hat{\Lambda}_2$.
Therefore, by Proposition \ref{prop:6monorb-in-invariant} there exists an element $f_1 \in \Mon^2(\hat{\Lambda}_2)$ such that
$f_1(k\gamma + a\hdel) = \gcd(k,a)\cdot L_{q_1}$ for a suitable choice of $q_1$. By extending $f_1$ to $\Mon^2(\hat{\Lambda}_1)\supseteq
\Mon^2(\hat{\Lambda}_2)$, observe that
$f_1(v)=\gcd(k,a) \cdot L_{q_1} + b \hSig$.
Apply the monodromy operator $R_{\frac{\hdel -\hSig}{2}}$ to obtain $\gcd(k,a) \cdot L_{q_1} + b \hdel$, which is a primitive
element of divisibility 1 in $\hat{\Lambda}_2$.
Using once again Proposition \ref{prop:6monorb-in-invariant} find $f_2 \in \Mon^2(\hat{\Lambda}_2)\subseteq \Mon^2(\hat{\Lambda}_1)$
such that $f_2(\gcd(k,a) \cdot L_{q_1} + b \hdel)=L_{q_2}$.
The composition of these monodromy operators is therefore the claimed $f\in \Mon^2(\hat{\Lambda}_1)$ and concludes
the proof under the assumption that ${\rm div}(v)=1$.
Therefore, we only need to deal with the case that ${\rm div}(v)=2$. Let $\bar{v}$ be the residue class of
$\frac{v}{2}$ in $A_{\hat{\Lambda}_1}$.
Let us first work under the additional assumption that $\gcd(k,a)$ is odd (while still assuming ${\rm div}(v)=2$).
Under this assumption, the element $k\gamma + a\hdel \in \hat{\Lambda}_2$ corresponds to $\gcd(k,a)$ times a primitive element $v_1$ in $\hat{\Lambda}_2$, which satisfies $\bar{v}_1=\bar{v}_{\hat{\Lambda}_2}\in A_{\hat{\Lambda}_2}$, where $\bar{v}_1$ is the residue of $\frac{v_1}{2}$, and $\bar{v}_{\hat{\Lambda}_2}$ is the $A_{\hat{\Lambda}_2}$-part of $\bar{v}$.
Then there exists a monodromy operator $f_1\in
\Mon^2(\hat{\Lambda}_2)\subset\Mon^2(\hat{\Lambda}_1)$ such that $f_1(v_1)$ is one of the cases 2) to 6) from Proposition
\ref{prop:6monorb-in-invariant}.
After applying the operator $R_{\frac{\hdel - \hSig}{2}}$, we obtain an element $v_2$ of one of the following
forms:
$$v_2=\left\{ \begin{array}{lllll}
a) &2\gcd(k,a)L_{q_1} &&+ b\hdel &- \gcd(k,a)\hSig \\ b) &2\gcd(k,a)L_{q_1} &+ \gcd(k,a)e_1 &+ b\hdel&\\ c) &2\gcd(k,a)L_{q_1} &+ \gcd(k,a)e_2 &+ b\hdel&\\ d) &2\gcd(k,a)L_{q_1} &+ \gcd(k,a)e_1 &+ b\hdel&-\gcd(k,a)\hSig \\ e) &2\gcd(k,a)L_{q_1} &+ \gcd(k,a)e_2 &+ b\hdel&- \gcd(k,a)\hSig \end{array} \right. $$ for suitable choice of $q_1$.
Since $\gcd(k,a,b)=1$, note that the $\hat{\Lambda}_2$-component of $v_2$ is primitive unless we are dealing with
case a) from above and at the same time $b$ is even. We will separately consider these two situations:
\noindent Case 1:
If the $\hat{\Lambda}_2$-component $v_{2,\hat{\Lambda}_2}$ of $v_2$ is primitive, then one can find a monodromy operator
moving $v_{2,\hat{\Lambda}_2}$ to one of the cases from Proposition \ref{prop:6monorb-in-invariant}.
In cases b) and c) from above the resulting element already attains the form of one of the cases claimed in our theorem.
In the other cases, apply the operator $R_{\frac{\hdel - \hSig}{2}}$ once again, followed by Proposition
\ref{prop:6monorb-in-invariant} to conclude the proof of case 1.
\noindent Case 2: Assume that $v_{2,\hat{\Lambda}_2}$ is non-primitive, and therefore we are in case a) from above
with the
additional assumption that $b=2b'$ is even.
This means that $v_2=2(\gcd(k,a)L_{q_1} + b'\hdel)- \gcd(k,a)\hSig$. Since $\gcd(k,a)$ is odd, $v_{2,\hat{\Lambda}_2}$ is twice
a primitive element of divisibility 1, and therefore $v_{2,\hat{\Lambda}_2}$ can be moved to an element of the form
$2L_{q_2}-\gcd(k,a)\hSig$. Applying the operator $R_{\frac{\hdel - \hSig}{2}}$ and using Proposition
\ref{prop:6monorb-in-invariant} completes the proof of case 2 (since $\gcd(k,a)$ is odd by assumption).
The only remaining case, which has not yet been analyzed, is when ${\rm div}(v)=2$ and $\gcd(k,a)$ is even.
Notice that under these assumptions $b$ is odd and in particular $\gcd(k,b)$ is odd.
Therefore, after application of the operator $R_{\frac{\hdel - \hSig}{2}}$, we find ourselves in the above
setting, which concludes the final case of the proof. \end{proof}
From this we can easily deduce a corresponding statement for the original lattice $\Lambda_{M'}$. We need to fix some notation in order to formulate the statement. Consider an irreducible symplectic orbifold $X$ of Nikulin-type, with a given marking $\varphi\colon H^2(X,{\mathbb Z})\overset{\iso}{\to} \Lambda_{M'}$. Let $L_i^{(2)}\in U(2)$ be an element of square $4i$ (corresponding to the element $L_i \in U$). Furthermore, fix elements $e_1^{(1)}$ and $e_2^{(1)}\in E_8(-1)$ with squares $q(e_1^{(1)})=-2$ and $q(e_2^{(1)})=-4$ (these elements correspond to the elements $e_1$ and $e_2 \in E_8(-2)$).
\begin{thm} \label{thm:9monorb-M'}
Let $v\in \Lambda_{M'}$ be a primitive non-zero element. Denote by $v_{E_8}$ the projection of $v$ to the $E_8(-1)$-part
of the lattice, and let ${\bar{v}}_{E_8}$ be its image in the ${\mathbb Z}/4{\mathbb Z}$-module $E_8(-1)/4E_8(-1)$.
Then there exists a monodromy operator
$f\in \Mon^2(X)$ such that
$$ f(v)=\left\{
\begin{array}{l}
\textrm{If $v$ corresponds to a ray of divisibility 1 in $\hat{\Lambda}_1$ (see below for checkable
condition):} \\
\hspace{0.5 em}
1) \hspace{1em}L^{(2)}_{i} \hspace{7.5em}\textrm{with\ }{\rm div}(v)=2 \textrm{\ and\ } q(v)=4i. \\
\textrm{Otherwise, if $v$ corresponds to a ray of divisibility 2 in $\hat{\Lambda}_1$:}\\
\begin{array}{lllll} 2) &2L^{(2)}_{i} - \delta' & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=16i-4, & \textrm{and\ } {\bar{v}}_{E_8}=0\\ 3) &2L^{(2)}_{i+1} + 2e_2^{(1)} -\delta' & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=16i-4, & \textrm{and\ } {\bar{v}}_{E_8}\neq 0\\ 4) &L^{(2)}_i - \frac{\delta' + \Sigma'}{2} & \, \textrm{if\ } {\rm div} (v)=2,& q(v)=4i-2,&\textrm{and\ } {\bar{v}}_{E_8}=0\\ 5) &L^{(2)}_{i+1} + e_2^{(1)}-\frac{\delta' + \Sigma'}{2} & \, \textrm{if\ } {\rm div} (v)=1,& q(v)=4i-2,&\textrm{and\ } q(v_{E_8})\equiv 0 \pmod{4}\\ 6) &L^{(2)}_{i} + e_1^{(1)} & \, \textrm{if\ } {\rm div} (v)=1,& q(v)=4i-2,&\textrm{and\ } q(v_{E_8})\equiv 2 \pmod{4} \\ 7) &2L^{(2)}_{i} + 2e_1^{(1)} - \delta' & \, \textrm{if\ } {\rm div}(v)=2, & q(v)=16i-12, & \textrm{and\ } {\bar{v}}_{E_8}\neq 0\\ 8) &L^{(2)}_i + e_1^{(1)} - \frac{\delta' + \Sigma'}{2} & \, \textrm{if\ } {\rm div}(v)=1, & q(v)=4i-4,
& \textrm{and\ } q(v_{E_8})\equiv 2 \pmod{4} \\ 9) &L^{(2)}_{i+1} + e_2^{(1)} & \, \textrm{if\ } {\rm div}(v)=1, & q(v)=4i, & \textrm{and\ } q(v_{E_8})\equiv 0 \pmod{4}.\\
\end{array}
\end{array}
\right. $$
The condition that $v$ corresponds to a ray of divisibility 1 in $\hat{\Lambda}_1$ is equivalent to
satisfying the following three conditions inside $\Lambda$:
\begin{compactenum}
\item The restriction $v_{U^3(2)}$ of $v$ to $U^3(2)$ is not divisible by 2,
\item the restriction $v_{E_8}$ of $v$ to $E_8(-1)$ is divisible by 2, and
\item the restriction $v_{(-2)\oplus (-2)}$ to $\langle \frac{\delta'+ \Sigma'}{2} , \frac{\delta'-
\Sigma'}{2}\rangle$ is contained in the sublattice $\langle \delta', \Sigma'\rangle$.
\end{compactenum}
\end{thm}
\begin{proof}
Similarly to Lemma \ref{lem:twist-gen},
use the inclusion $\hat{\Lambda}_1(2) \subset \Lambda_{M'}$ and notice that under this correspondence $L_i$
is sent to $L_{i}^{(2)}$, $e_i$ is sent to $2e_i^{(1)}$, $\hdel$ to $\delta'$, and $\hSig$ to $\Sigma'$.
Then passing to the primitive element in the ray and determining the new square and divisibility gives the
new cases.
For the part of the condition involving $v_{E_8}$, simply check that under the assumptions on $q$ and ${\rm div}$ these are
equivalent to the corresponding ones in $\hat{\Lambda}_1$ from Theorem \ref{thm:9monorb}.
The same formalism admits a straightforward verification of the characterization when $v$ corresponds
to a ray of divisibility 1 in $\hat{\Lambda}_1$. \end{proof}
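\begin{rmk} To illustrate the translation in one case: the representative $2L_i-\hdel-\hSig$ of Case 4 in Theorem \ref{thm:9monorb} corresponds under $\hat{\Lambda}_1(2)\subset\Lambda_{M'}$ to $2L_i^{(2)}-\delta'-\Sigma'$, whose primitive generator in the same ray is $L_i^{(2)}-\frac{\delta'+\Sigma'}{2}$, with $$q\left(L_i^{(2)}-\frac{\delta'+\Sigma'}{2}\right)=4i-2\qquad\text{and}\qquad {\rm div}\left(L_i^{(2)}-\frac{\delta'+\Sigma'}{2}\right)=2,$$ which is precisely Case 4 of Theorem \ref{thm:9monorb-M'}. \end{rmk}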
\begin{cor}
There are at most 3 monodromy orbits of primitive non-zero
elements with prescribed square and divisibility (both in $\Lambda_{M'}$ and in $\hat{\Lambda}_1$). \end{cor} \begin{proof}
Since the values of $q$ and ${\rm div}$ are given, this can be read off immediately from the statements of Theorems \ref{thm:9monorb} and
\ref{thm:9monorb-M'}. \end{proof}
\begin{rmk}\label{rem:L0goesaway2} Again, one can replace $L_0^{(2)}$ by $0$ in all cases except for Case 1),
since both elements in question lie in the same monodromy orbit. \end{rmk}
Let us conclude this section by the following observation: \begin{cor}\label{cor:Chiara}
For every element $v\in \Lambda_{M'}$ of square $-4$ and divisibility $2$ the reflection (defined by
$R_v(\alpha)\coloneqq \alpha -2 \frac{(\alpha,v)_q}{q(v)}v$) gives an element in the monodromy group. \end{cor} \begin{proof}
Begin by observing that this property is equivalent for different elements in the same monodromy orbit.
Therefore, it suffices to check it for one representative of each orbit. By the list from Theorem \ref{thm:9monorb-M'}, the orbits of square $-4$ and divisibility $2$ have one of the following representatives:
$L_{-1}^{(2)}$ (Case 1), $\delta'$ (Case 2), or $2L_1^{(2)} + 2 e_2^{(1)} - \delta'$ (Case 3).
The
associated elements $L_{-1}$, $\delta$, and $2L_1 + e_2 - \delta$ in the invariant part of $\Lambda_{K3^{[2]}}$ all have square $-2$ (indeed $q(L_{-1})=-2$, $q(\delta)=-2$ and $q(2L_1+e_2-\delta)=8-8-2=-2$) and thus their reflections correspond to monodromy operators
on $\Lambda_{K3^{[2]}}$ (e.g.~by Theorem \ref{thm:MonK32}), which commute with $\iota ^{[2]*}$.
Therefore, Proposition \ref{MonoM'} applies to show that the claimed reflections in $\Lambda_{M'}$ are indeed
monodromy operators. \end{proof} \begin{cor}\label{Lastmonodromy}
For every element $v\in \Lambda_{M'}$ of square $-2$ and divisibility $2$ the reflection (defined by
$R_v(\alpha)\coloneqq \alpha -2 \frac{(\alpha,v)_q}{q(v)}v$) gives an element in the monodromy group. \end{cor} \begin{proof} The proof is similar to the one of Corollary \ref{cor:Chiara}. From Theorem \ref{thm:9monorb-M'}, we note that all such elements $v$ are in the same monodromy orbit (Case 4) which contains the element $\frac{\delta'-\Sigma'}{2}$. Hence the result follows from Remark \ref{RdeltaSigma}. \end{proof}
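\begin{rmk} Note that (granting that $\delta'$ and $\Sigma'$ are orthogonal with respect to $q_{M'}$) one checks directly that $$q_{M'}\left(\frac{\delta'-\Sigma'}{2}\right)=\frac{1}{4}\left(q_{M'}(\delta')+q_{M'}(\Sigma')\right)=-2,$$ that its divisibility is $2$ and that its $E_8(-1)$-part vanishes; this is why $\frac{\delta'-\Sigma'}{2}$ falls into Case 4 of Theorem \ref{thm:9monorb-M'} in the proof above. \end{rmk}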
\section{Determining the wall divisors}\label{endsection} In this section, we combine the results from the last sections to prove the main theorem of this paper: Theorem \ref{main} which gives a complete description of the wall divisors for Nikulin-type orbifolds.
For the proof of the theorem let us start from some $X_0$ which is the Nikulin orbifold associated to some K3 surface $S_0$ obtained by the construction in Section \ref{M'section}. Fix a marking $\varphi_0 \colon H^2(X_0,\Z) \to \Lambda_{M'}= U(2)^{ 3} \oplus E_8(-1)\oplus (-2) \oplus (-2)$, where as usual $U(2)^{ 3} \oplus E_8(-1)$ corresponds to the part coming from the invariant lattice of $S_0$ and the two generators of the $(-2)$-part are $\frac{\delta'+ \Sigma'}{2}$ and $\frac{\delta'-
\Sigma'}{2}$. Let us recall the details of this identification: For the K3 surface $S_0$ with a symplectic involution $\iota$ the $\iota$-anti-invariant part of the lattice is isomorphic to $E_8(-2)$ and one can choose a marking $\varphi_{S_0}\colon H^2(S_0,{\mathbb Z}) \to \Lambda_{K3}\iso U^3 \oplus E_8(-1)^2$ such that $\iota^*$ acts by exchanging the two copies of $E_8(-1)$. Therefore, the invariant lattice of $\iota$ corresponds to $\Lambda_{K3}^\iota \iso U^3\oplus E_8(-2)$, where the elements of $E_8(-2)$ are of the form $e+\iota^*(e)$ for elements $e$ in the first copy of $E_8(-1)$. Similarly the anti-invariant lattice of $\iota$ is $E_8(-2)$, consisting of elements of the form $e-\iota^*(e)$. We will denote the anti-invariant part of the lattice by $E^a$. With this convention, the lattice $U(2)^{3} \oplus E_8(-1)$ corresponds to the invariant lattice $\Lambda_{K3}^{\iota}$ via a twist as described in Lemma \ref{lem:twist-gen}.
In order to prove the main theorem, we need to determine, for each ray in $\Lambda_{M'}$ whose generator is of negative Beauville-Bogomolov square, whether it corresponds to a wall divisor for Nikulin-type orbifolds. Obviously, this notion is invariant under the monodromy action by deformation invariance (see Theorem \ref{wall}). It therefore suffices to pick one representative of each monodromy orbit and to settle the question for that representative.
By Lemma \ref{lem:twist-gen}, the rays of $\Lambda_{M'}$ are in (1:1)-correspondence with rays in the lattice $\hat{\Lambda}_1$, and obviously the property that the generator has negative square coincides in both cases. Therefore, we only need to deal with the cases from Theorem \ref{thm:9monorb} (respectively Theorem \ref{thm:9monorb-M'}), for which $i$ is chosen such that the square is negative.
\noindent {\it Case 1:} As a warm-up, let us start with Case 1 of Theorem \ref{thm:9monorb} separately (i.e.~the ray in question is generated by the element $L_i$ with $i<0$). Note that $L_i$ naturally corresponds to an element $\varphi_{S_0} ^{-1}L_i\in H^2(S_0,{\mathbb Z})$. Let $(S,\varphi_S)$ be a marked K3 surface such that the Picard lattice of $S$ is $\Pic(S)=\varphi_S^{-1}(L_i \oplus E^a)$ (which exists by the surjectivity of the period map). If $i< -1$, then $S$ does not contain any effective curve (since $\Pic(S)$ only has non-zero elements of square smaller than $-2$). Therefore, we are in the situation of Section \ref{genericM'} and
one observes that $L_i$ does not correspond to a wall divisor for Nikulin-type orbifolds if $i<-1$: In fact, $L_i\in \hat{\Lambda}_1$ corresponds to $L_{i}^{(2)}\in \Lambda$, which is
not a wall divisor by Proposition \ref{exwalls}. Note that the divisors $L_{i}^{(2)}\in \Lambda$ satisfy $q(L_{i}^{(2)})=4i$, ${\rm div}(L_{i}^{(2)})=2$, and $(L_i^{(2)})_{U(2)^3}$ is not divisible by 2, which confirms Theorem \ref{main} for Case 1 if $i<-1$.
If $i=-1$ (and therefore $q(L_i)=-2$), we are in the situation of Section \ref{onecurve}, and one can deduce
from Proposition \ref{walldiv1} that $L_{i}^{(2)}\in \Lambda$ (which is precisely $D_C'$) is a wall divisor for Nikulin-type orbifolds, which confirms Theorem
\ref{main} in this case.
\noindent {\it Cases 2, 4:} As in the proof for Case 1, choose a K3 surface $S$ such that $\Pic(S)=\varphi_S ^{-1}(L_i\oplus E^a)$. As before, Section \ref{genericM'} applies and Proposition \ref{exwalls} implies that for $i<-1$, the only rays corresponding to wall-divisors are $\delta'$ and $\Sigma'$, and therefore there are no additional wall divisors of the forms given in Cases 2 and 4 in this example. Similarly, the results from Section \ref{onecurve} imply that for $i=-1$ the wall divisors are $\delta'$, $\Sigma'$, $L_{i}^{(2)}$, and $L_{i}^{(2)}-\half(\delta' + \Sigma')$ (compare Proposition \ref{walldiv1}). Therefore, Case 4 provides precisely a wall divisor of square $-6$ and divisibility $2$, thus confirming Theorem \ref{main} in this case.
However, for Cases 2 and 4, we also need to consider $i=0$ since the total square will still be negative. By Remark \ref{rem:L0goesaway}, the monodromy orbits of $2L_0 + \hdel$ (resp.~$2L_0 + \hdel + \hSig$) coincide with those of $\hdel$ and $\hdel + \hSig$, and therefore we can instead deform towards a very general K3 surface $S$ with a symplectic involution (i.e.~$\Pic(S)=E^a$) and apply the results from Section
\ref{genericM'} to observe that $\delta'$ is a wall divisor of square $-4$ and divisibility $2$, whereas
$\half (\delta' + \Sigma')$ is not.
\noindent {\it Cases 6, 7, and 8:} Similarly to the previous situation, the element $2L_i + e_1$ naturally corresponds to an element $\varphi_{S_0}^{-1}(2L_i + e_1) \in H^2(S_0,{\mathbb Z})$. Notice that we are only interested in the cases where $q(2L_i + e_1)<0$, which corresponds to $i\leq 0$. Under this condition, the direct sum $(2L_i+ e_1) \oplus E^a$ is a negative definite sublattice of $\Lambda_{K3}$. However, notice that this in itself cannot be realized as the Picard lattice of a K3 surface, since it is not a saturated sublattice: By definition $e_1\in E_8(-2)$ is an element of square $-4$, where $E_8(-2)$ is part of the invariant lattice. Therefore, by the above observation, there exists an element $e_1^{(0)}$ in the first copy of $E_8(-1)$ of square $-2$ such that $e_1 = e_1^{(0)} + \iota ^*(e_1^{(0)})$. With this notation the element $2L_i + 2e_1^{(0)} = 2L_i + (e_1^{(0)} + \iota^*(e_1^{(0)})) + (e_1^{(0)} - \iota ^*(e_1^{(0)}))\in (2L_i+ e_1) \oplus E^a$, but the element $L_i + e_1^{(0)}$ is not part of this direct sum. In fact, $(L_i + e_1^{(0)})\oplus E^a$ is the saturation.
With this knowledge, let us choose a marked K3 surface $(S,\varphi_S)$ such that $\Pic(S) =\varphi_S^{-1}((L_i + e_1^{(0)})\oplus E^a)$. Note that if $i<0$, then $S$ does not contain any effective curve (since every non-zero element has square smaller than $-2$). Therefore, the results from Section \ref{genericM'} apply, and one observes that none of these cases provides wall divisors.
If $i=0$, then $S$ contains exactly two elements of square $-2$ which are exchanged by $\iota^*$: The elements $L_0 + e_1^{(0)}$ and $L_0 + \iota^*(e_1^{(0)})$. In this case according to Remark \ref{rem:L0goesaway}, we can choose $L_0=0$. Thus for $i=0$ we find ourselves in the setting of Section \ref{sec:twocurves} with $e_1^{(0)}=C$. Note that the element $D_C'$ from Section \ref{sec:twocurves} corresponds precisely to the element $e_1^{(1)}$
with our notation. We can therefore deduce from Proposition \ref{prop:twocurves} that for $i=0$ the Cases 6 and 7 provide wall divisors
($e_1^{(1)}$ with square $-2$ and divisibility $1$, and
$2e_1^{(1)} - \delta'$ with square $-12$ and divisibility $2$), whereas by Remark \ref{Remark:twocurves} Case 8 does not provide a wall divisor, thus confirming Theorem \ref{main}.
\noindent {\it Cases 3, 5, and 9:} Again, the element $2L_{i+1}+ e_2$ corresponds to an element $\varphi_{S_0}^{-1}(2L_{i+1}+ e_2)\in H^2(S_0,{\mathbb Z})$. We need to consider $i\leq 0$ to cover all possibilities for wall divisors with negative squares.
If $i<0$, then the lattice $(2L_{i+1}+ e_2) \oplus E^a \subseteq \Lambda_{K3}$ is negative definite, and again its saturation is $(L_{i+1} + e_2^{(0)})\oplus E^a $ for the corresponding element $e_2^{(0)}$ in the first copy of $E_8(-1)$ (recall that $e_2^{(0)}$ has square $-4$). Similarly to the above, deform to a marked K3 surface $(S,\varphi_S)$ such that $\varphi_S(\Pic(S))= (L_{i+1} + e_2^{(0)})\oplus E^a$. Observe that all non-zero elements of this lattice have squares smaller than $-2$. Therefore, we can apply the results from Section \ref{genericM'} to observe that we do not find any further wall divisors in these cases.
For the remaining case $i=0$, we need to prove that both $2L_{1}^{(2)} + 2e_2^{(1)}- \delta'$ (with square $-4$ and divisibility $2$) and $L_{1}^{(2)} + e_2^{(1)} - \frac{\delta' + \Sigma'}{2}$ (with square $-2$ and divisibility $1$) correspond to wall divisors. If $i=0$, then $S$ contains exactly one element of square $0$: $2L_{1}+ e_2$.
Thus for $i=0$ we find ourselves in the setting of Section \ref{sec:elliptic}. Note that the element $D_\gamma'$ from Section \ref{sec:elliptic} corresponds precisely to the element $L_1^{(2)} + e_2^{(1)}$ with our notation. Let $M'$ be the Nikulin orbifold constructed as in Section \ref{sec:elliptic}. From the investigations of the current section, we know that a wall divisor on $M'$ either has the numerical properties of a wall divisor that we already found, or possibly those of $2L_{1}^{(2)} + 2e_2^{(1)}- \delta'$ or $L_{1}^{(2)} + e_2^{(1)} - \frac{\delta' + \Sigma'}{2}$. That is: we have proved that a wall divisor necessarily has one of the numerical properties listed in Theorem \ref{main}. Therefore, Lemma \ref{mainelliptic} shows that $L_{1}^{(2)} + e_2^{(1)} - \frac{\delta' + \Sigma'}{2}$ is a wall divisor. Finally, $2L_{1}^{(2)} + 2e_2^{(1)}- \delta'$ is also a wall divisor by Lemma \ref{monolemma}.
This concludes the analysis of all possible cases and thus the proof of Theorem \ref{main}.
\section{Application} \subsection{A general result about the automorphisms of Nikulin-type orbifolds} \begin{prop}\label{AutM'} Let $X$ be an orbifold of Nikulin-type and $f$ an automorphism on $X$. If $f^*=\id$ on $H^2(X,\Z)$, then $f=\id$. \end{prop} This section is devoted to the proof of this proposition. We will adapt Beauville's proof \cite[Proposition 10]{Beauville1982}.
\begin{lemme}\label{AutS} Let $S$ be a K3 surface such that $\Pic S=\Z H\oplus^{\bot} E_8(-2)$ with $H^2= 4$. By Proposition \ref{involutionE8} (or \cite[Proposition 2.3]{Sarti-VanGeemen}), the K3 surface $S$ is endowed with a symplectic involution $\iota$. Let $f\in\Aut(S)$ be such that $f$ commutes with $\iota$. Then $f=\iota$ or $f=\id$. \end{lemme} \begin{proof} We adapt the proof of \cite[Corollary 15.2.12]{HuybrechtsK3}. Let $f\in \Aut(S)$ be an automorphism which commutes with $\iota$. It follows that $f^*(H)=H$.
By \cite[Corollary 3.3.5]{HuybrechtsK3}, $f$ acts on $T(S)$ (the transcendental lattice of $S$) as $-\id$ or $\id$.
However, the actions of $f^*$ on $A_{T(S)}$ and on $A_{\Pic(S)}$ have to coincide. This forces $f^*_{T(S)}=\id$. Moreover, we can consider $f^*_{|E_8(-2)}$ as an isometry of $E_8(-2)$.
By \cite[Theorem 4.2.39]{Griess}, the isometry group of $E_8(-2)$ is finite, hence $f^*_{|E_8(-2)}$ is of finite order. Therefore, by \cite[Chapter 15 Section 1.2]{HuybrechtsK3}, there are only two possibilities for $f$: $\id$ or a symplectic involution. Moreover, by \cite[Proposition 15.2.1]{HuybrechtsK3}, there is at most one symplectic involution on $S$.
\end{proof}
\begin{lemme}\label{commute} Let $(S,\iota)$ be a K3 surface endowed with a symplectic involution such that $\Pic S=\Z H\oplus^{\bot} E_8(-2)$ with $H^2= 4$. Let $M'$ be the Nikulin orbifold constructed from $(S,\iota)$ as in Section \ref{M'section}. Let $(g,h)\in\Aut(S)^2$ such that $g\times h$ induces a bimeromorphism on $M'$ via the non-ramified cover $$\gamma: S\times S\smallsetminus \left(\Delta_{S^{2}}\cup S_{\iota}\cup(\Fix \iota\times \Fix \iota)\right)\rightarrow M'\smallsetminus \left(\delta'\cup \Sigma' \cup \Sing M' \right)$$ introduced in Section \ref{inv0M'} (i.e.\ there exists a bimeromorphism $\rho$ on $M'$ such that $\rho\circ\gamma=\gamma\circ (g\times h)$). Then $g$ and $h$ commute with $\iota$. \end{lemme} \begin{proof} It is enough to prove that $g$ commutes with $\iota$; the proof for $h$ is identical.
Let $$A:=\left\{\left.\eta=\eta_1\circ\eta_2\circ\eta_3\right|\ (\eta_1,\eta_3)\in\left\{\id,\iota\right\}^2\ \text{and}\ \eta_2\in\left\{\id, g, h\right\}\right\}.$$ Let $V:=S\times S\smallsetminus \left(\Delta_{S^{2}}\cup S_{\iota}\cup(\Fix \iota\times \Fix \iota)\right)$; we consider the following open subset of $V$:
$$V^{o}:=\left\{\left.(a,b)\in V\ \right|\ g(a)\neq \eta(b),\ g\circ\iota(a)\neq \eta(b),\ \forall \eta\in A\ \text{and}\ a\notin g^{-1}(\Fix \iota)\right\}.$$ Since $g\times h$ induces a bimeromorphism on $M'$, there exists an open subset $\mathcal{W}$ of $S\times S$ such that for all $(a,b)\in \mathcal{W}$: \small \begin{align*} g\times h\left(\left\{(a,b),(b,a),(\iota(a),\iota(b)),\right.\right.&\left.\left.(\iota(b),\iota(a))\right\}\right)\\ &=\left\{(g(a),h(b)),(h(b),g(a)),(\iota\circ g(a),\iota\circ h(b)),(\iota\circ h(b),\iota\circ g(a))\right\}. \end{align*} That is: \begin{align*} \left\{(g(a),h(b)),(g(b),h(a)),\right.&\left.(g\circ\iota(a),h\circ\iota(b)),(g\circ\iota(b),h\circ\iota(a))\right\}\\&=\left\{(g(a),h(b)),(h(b),g(a)),(\iota\circ g(a),\iota\circ h(b)),(\iota\circ h(b),\iota\circ g(a))\right\}. \end{align*} \normalsize If we choose in addition $(a,b)\in V^{o}$, then there is only one possibility: $$g\circ\iota(a)=\iota\circ g(a).$$
It follows that $g$ commutes with $\iota$ on an open subset of $S$, hence on all of $S$. \end{proof} We are now ready to prove Proposition \ref{AutM'}. \begin{proof}[Proof of Proposition \ref{AutM'}] Let $X$ be an orbifold of Nikulin-type and $f$ an automorphism on $X$ such that $f^*=\id$. In particular, $f$ is a symplectic automorphism.
Let $(S,\iota)$ be a K3 surface, endowed with a symplectic involution, satisfying the hypothesis of Lemma \ref{AutS}; we consider the Nikulin orbifold $M'$ constructed from $(S,\iota)$ as in Section \ref{M'section}. By \cite[Lemma 2.17]{Menet-Riess-20}, there exist two markings $\varphi$ and $\psi$ such that $(X,\varphi)$ and $(M',\psi)$ are connected by a sequence of twistor spaces. Moreover, by Remark \ref{twistorinvo}, $f$ extends to an automorphism on all twistor spaces. In particular $f$ induces an automorphism $f'$ on $M'$. We consider $\gamma$, the non-ramified cover of Lemma \ref{commute}:
$$\gamma: S\times S\smallsetminus \left(\Delta_{S^{2}}\cup S_{\iota}\cup(\Fix \iota\times \Fix \iota)\right)\rightarrow M'\smallsetminus \left(\delta'\cup \Sigma' \cup \Sing M' \right).$$ Since $V=S\times S\smallsetminus \left(\Delta_{S^{2}}\cup S_{\iota}\cup(\Fix \iota\times \Fix \iota)\right)$ is simply connected, it is the universal cover of $U:= M'\smallsetminus \left(\delta'\cup \Sigma' \cup \Sing M' \right)$.
Since $f'^*$ acts as $\id$ on $H^2(M',\Z)$, we have that $f'$ preserves $\delta'$ and $\Sigma'$ (it also preserves the set $\Sing M'$ ). Hence $f'$ induces an automorphism on $U$ and then on $V$. Therefore, it induces a bimeromorphism $\overline{f'}$ on $S\times S$. Let $s_2:S\times S\rightarrow S\times S: (a,b)\mapsto (b,a)$. By \cite[Theorem 4.1 (d)]{Oguiso}, $\overline{f'}$ can be written as a sequence of compositions between $s_2$ and automorphisms of the form $g_i\times h_i$, where $g_i$, $h_i$ are in $\Aut(S)$.
Since we are interested in the automorphism $f'$ on $M'$, we can assume without loss of generality that $\overline{f'}=g\times h$, with $g$, $h$ in $\Aut(S)$.
Therefore, by Lemma \ref{commute}, $g$ and $h$ commute with $\iota$.
It follows from Lemma \ref{AutS} that $(g,h)\in\left\{\id,\iota\right\}^2$. So the only possibility for $\overline{f'}$ to induce a non-trivial morphism on $U$ is $\overline{f'}=\id\times \iota$ (or $\iota\times\id$). However, in this case, as seen in Section \ref{inv0M'}, $f'$ would interchange $\delta'$ and $\Sigma'$. This contradicts the fact that $f'^*=\id$ on $H^2(M',\Z)$. Therefore, we obtain $f'=\id$ and hence $f=\id$. \end{proof}
\subsection{Construction of a non-standard symplectic involution on orbifolds of Nikulin-type}\label{Application} Adapting the vocabulary introduced in \cite{Mongardi-2013}, we state the following definition. \begin{defi}
Let $Y$ be an irreducible symplectic manifold of $K3^{[2]}$-type endowed with a symplectic involution $\iota$.
Let $M'$ be the Nikulin orbifold constructed from $(Y,\iota)$ as in Example \ref{exem}. Let $G\subset \Aut(Y)$ such that all $g\in G$ commute with $\iota$.
Then $G$ induces an automorphism group $G'$ on $M'$. The group $G'$ is called a \emph{natural automorphism group} on $M'$ and $(M',G')$ is called a \emph{natural pair}.
Let $X$ be an irreducible symplectic orbifold of Nikulin-type and $H\subset \Aut(X)$. The group $H$ is said to be \emph{standard} if the pair
$(X,H)$ is deformation equivalent to a natural pair $(M',G')$; in this case, we say that the pair $(X,H)$ is a \emph{standard pair}. \end{defi} \begin{thm}\label{Involution2} Let $X$ be an irreducible symplectic orbifold of Nikulin-type such that there exists $D\in\Pic (X)$ with $D^2=-2$ and ${\rm div}(D)=2$. Then there exists an irreducible symplectic orbifold $Z$ bimeromorphic to $X$ and a non-standard symplectic involution $\iota$ on $Z$ such that: $$H^2(Z,\Z)^{\iota}\simeq U(2)^3\oplus E_8(-1)\oplus (-2)\ \text{and}\ H^2(Z,\Z)^{\iota\bot}\simeq (-2).$$ \end{thm}
\begin{proof} By Theorem \ref{main}, $D$ is not a wall divisor, hence there exists $\beta\in\mathcal{B}\mathcal{K}_{X}$ and $g\in\Mon_{\Hdg}^2(X)$
such that $(g(D),\beta)_{q}=0$. Let $f:X\dashrightarrow Z$ be a bimeromorphic map such that $f_*(\beta)$ is a Kähler class on $Z$.
We set $D':=f_*\circ g(D)$.
By Corollary \ref{Lastmonodromy}, the involution $R_{D'}$ is a Hodge monodromy operator on $H^2(Z,\Z)$.
Moreover $R_{D'}(f_*(\beta))=f_*(\beta)$. Hence by Theorem \ref{mainHTTO},
there exists an automorphism $\iota$ of $Z$ such that $\iota^*=R_{D'}$. Moreover, by Proposition \ref{AutM'}, $\iota$ is an involution (indeed, $(\iota^2)^*=R_{D'}^2=\id$). Since $\iota^*=R_{D'}$, we have $H^2(Z,\Z)^{\iota}=D'^{\bot}$. It follows from Theorem \ref{BBform} that $H^2(Z,\Z)^{\iota}\simeq U(2)^3\oplus E_8(-1)\oplus (-2)$ and $H^2(Z,\Z)^{\iota\bot}\simeq (-2)$.
Now, we show that $\iota$ is non-standard. Assume for contradiction that $\iota$ is standard. Then there exists a natural pair $(M',\iota')$ deformation equivalent to $(Z,\iota)$.
Since $\iota'$ is natural, $\iota'^*(\Sigma')=\Sigma'$. Moreover, since $(M',\iota')$ is deformation equivalent to $(Z,\iota)$,
there exists $D'\in \Pic M'$ such that $q_{M'}(D')=-2$, ${\rm div}(D')=2$ and $H^2(M',\Z)^{\iota'\bot}=\Z D'$.
However, since $\Sigma'\in H^2(M',\Z)^{\iota'}$, we obtain by Theorem \ref{BBform} that:
$$D'\in \Sigma'^{\bot}\simeq U(2)^3\oplus E_8(-1)\oplus (-4).$$
For the rest of the proof, we identify $\Sigma'^{\bot}$ with $U(2)^3\oplus E_8(-1)\oplus (-4)$.
It follows that $D'$ can be written as
$$D'=\alpha+\beta,$$
with $\alpha\in U(2)^3\oplus (-4)$ and $\beta\in E_8(-1)$. Since ${\rm div}(D')=2$, we have
$$D'=\alpha+2\beta',$$
with $\beta'\in E_8(-1)$.
It follows that $q_{M'}(D')\equiv 0\mod 4$: indeed, $\alpha$ and $\beta'$ are orthogonal, so $q_{M'}(D')=q_{M'}(\alpha)+4q_{M'}(\beta')$, and $q_{M'}(\alpha)$ is divisible by $4$ since the quadratic form of $U(2)^3\oplus (-4)$ only takes values in $4\Z$.
This contradicts $q_{M'}(D')=-2$. \end{proof}
\noindent Gr\'egoire \textsc{Menet}
\noindent Laboratoire Paul Painlevé
\noindent 59 655 Villeneuve d'Ascq Cedex (France),
\noindent
{\tt [email protected]}
\noindent Ulrike \textsc{Rie}\ss
\noindent Institute for Theoretical Studies - ETH Z\"urich
\noindent Clausiusstrasse 47, Building CLV, Z\"urich (Switzerland)
\noindent {\tt [email protected]}
\end{document}
On the minimal dimension of the ambient space of a projective scheme.
by Audun Holme
Published 1971 by Universitetsforlaget in Trondheim.
Algebraic varieties.,
Geometry, Projective.,
Algebraic fields.
Series Det Kongelige. Norske videnskabers selskab. skrifter 1971 ;, no. 13, Norske videnskabers selskab, Trondheim., no. 13.
LC Classifications AS283 .T8 1971, no. 13, QA564 .T8 1971, no. 13
Pagination 5 p.
So certainly the minimum embedding dimension is less than or equal to the minimum of these two numbers. However, imitating the argument in Ch. IV, Ex. (b) of Hartshorne, we can see that any secant of the Segre embedding of $\mathbb{P}^n \times \mathbb{P}^m$ is contained in the secant variety of $\mathbb{P}^1 \times \mathbb{P}^1$, that is.

3 Projective Space as a Quotient Space

[Figure 2: Boy's Surface, from Wikimedia Commons]

A better way to think of real projective space is as a quotient space of S^n. Now, we arrive at a quotient space by making an identification between different points on the manifold. Essentially, we define an.
in this moduli space of stable maps.

1. Introduction

The geometry of Kontsevich's moduli space M_{g,n}(P^r, d) of degree d stable maps from n-pointed, genus g curves to P^r has been studied intensively in the literature. R. Vakil studied its connection with the enumerative geometry of rational and elliptic curves in projective space in [Vak]. R.

Scheme-theoretic definitions. One advantage of defining varieties over arbitrary fields through the theory of schemes is that such definitions are intrinsic and free of embeddings into ambient affine n-space. A k-algebraic set is a separated and reduced scheme of finite type over Spec(). A k-variety is an irreducible k-algebraic set. A k-morphism is a morphism between k-algebraic sets regarded.
Minimal prime ideal containing (x); Tangent space; Steiner's crosscap; A neighborhood; A connected, reducible variety; Normalization of a curve.

Linear Systems. Let f_1, ..., f_r be homogeneous polynomials of some common degree d on some projective space (P) defined over a field. The set of hypersurfaces a_1 f_1 + ... + a_r f_r = 0, where the a_i's are elements of the base field of (P), is an example of a linear system. This can be thought of as being the vector space of elements (a_1, ..., a_r) or even the projectivisation of that.
On the minimal dimension of the ambient space of a projective scheme, by Audun Holme
A - ambient space (e.g. affine or projective \(n\)-space) polynomials - single polynomial, ideal or iterable of defining polynomials; in any case polynomials must belong to the coordinate ring of the ambient space and define valid polynomial functions (e.g. they should be homogeneous in the case of a projective space) OUTPUT: algebraic scheme.
Variety and scheme structure Variety structure. Let k be an algebraically closed field. The basis of the definition of projective varieties is projective space, which can be defined in different, but equivalent ways.
as the set of all lines through the origin in + (i.e., one-dimensional sub-vector spaces of +); as the set of tuples (, ,) ∈ +, modulo the equivalence relation.
The choice of a homogeneous ideal in a polynomial ring defines a closed subscheme Z in a projective space as well as an infinite sequence of cones over Z in progressively higher dimension projective spaces.
Recent work of Aluffi introduces the Segre zeta function, a rational power series with integer coefficients which captures the relationship between the Segre class of Z and those of its : Grayson Jorgenson.
Nowadays, the projective space P n of dimension n is usually defined as the set of the lines passing through a point, considered as the origin, in the affine space of dimension n + 1, or equivalently to the set of the vector lines in a vector space of dimension.
Projective space. Projective space is one of the fundamental objects studied in algebraic geometry. In this section we just give its construction as $\text{Proj}$ of a polynomial ring. The number of affine patches is dependent on the type of projective ambient space in which X lies, but for instance, the standard projective space of dimension n has n + 1 affine patches.
it is the minimal dimension for affine ambients into which the abstract scheme-theoretic patch may be embedded. This can be seen from the fact that the. The dimension of a hypersurface is one less than that of its ambient space.
If $ M $ and $ N $ are differentiable manifolds, $ \mathop{\rm dim} N - \mathop{\rm dim} M = 1 $, and if an immersion $ f: M \rightarrow N $ has been defined, then $ f(M) $ is a hypersurface in $ N $. Minimal Varieties In Riemannian Manifolds Pdf. This module implements morphisms from affine schemes.
A morphism from an affine scheme to an affine scheme is determined by rational functions that define what the morphism does on points in the ambient affine space. A morphism from an affine scheme to a projective scheme is determined by homogeneous polynomials.
EXAMPLES. (3)When dealing with a scroll Xin projective space, we use d to refer to its degree, kto refer to its dimension, and n:= d+k-1to refer to the dimension of the ambient projective space, XˆPn. (4)If Xis a smooth scroll of degree dand dimension k, we use Hscroll d,k:= H X.
(5)Let Xbe a smooth scroll of degree dand dimension k. We use Hscroll. and since subspaces of dimension 1 correspond to lines through the origin in E,wecanviewP(E) as the set of lines in E passing through the origin.
So, the projective space P(E) can be viewed as the set obtained fromE when lines throughthe origin are treated as points. However,this is a ,dependingonthestructure. In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it.
Thus a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it – for example, the point at 5 on a number line.
A surface such as a plane or the surface of a cylinder or sphere. The simpler characterization requires that the projective scheme associated to S be a finite union of projective varieties of given dimensions. proper coarse moduli space of stable log-varieties of general type is projective.
We also prove subadditivity of log-Kodaira dimension for fiber spaces whose general fiber is of log general type. Contents 1.
Introduction 1 2. Basic tools and definitions 7 3. Almost proper varieties and big line bundles 9 4. Ampleness Lemma 11 5. A new projective space Proj(Generic(R)) will be created as the ambient space of this scheme.
EmptyScheme(X): Sch -> Sch EmptySubscheme(X): Sch -> Sch, MapSch The subscheme of X defined, for an affine scheme X by the trivial polynomial 1, or by maximal ideal (x 1,x n) for a projective scheme X. The returned scheme is marked as saturated. In this paper we derive geometric consequences from the presence of a long strand of linear syzygies in the minimal free resolution of a closed scheme in projective space whose homogeneous ideal.
minimal degree and dimension k+1. Introduction The purpose of this paper is to compute how many varieties of minimal degree and dimension k+ 1 contain a given scheme XˆPN of dimension k 1. If Xis arithmetically Buchsbaum, then there might be more than one variety of minimal degree containing it, but in this paper we prove that if X is r.
In our setting where the reduced subscheme is a smaller projective space, The length p of the minimal free resolution of M is called the projective dimension of M and is denote pd (M).
on the punctured spectrum, i.e. (S_2) scheme-theoretically in projective space. an open source textbook and reference work on algebraic geometry. Audun Holme has written: 'Geometry' -- subject(s): Geometry, History; 'On the minimal dimension of the ambient space of a projective scheme' -- subject(s): Algebraic fields, Algebraic varieties.
Holme, Audun. Rapporter - in DUO. ON THE MINIMAL DIMENSION OF THE AMBIENT SPACE OF A PROJECTIVE SCHEME () - Matematisk Institutt, Universitetet i Oslo. Holme, Audun. Rapporter - in DUO. 1; Results per page. 15; 30; 50;
Ricci flow
In the mathematical fields of differential geometry and geometric analysis, the Ricci flow (/ˈriːtʃi/ REE-chee, Italian: [ˈrittʃi]), sometimes also referred to as Hamilton's Ricci flow, is a certain partial differential equation for a Riemannian metric. It is often said to be analogous to the diffusion of heat and the heat equation, due to formal similarities in the mathematical structure of the equation. However, it is nonlinear and exhibits many phenomena not present in the study of the heat equation.
The Ricci flow, so named for the presence of the Ricci tensor in its definition, was introduced by Richard Hamilton, who used it through the 1980s to prove striking new results in Riemannian geometry. Later extensions of Hamilton's methods by various authors resulted in new applications to geometry, including the resolution of the differentiable sphere conjecture by Simon Brendle and Richard Schoen.
Following Shing-Tung Yau's suggestion that the singularities of solutions of the Ricci flow could identify the topological data predicted by William Thurston's geometrization conjecture, Hamilton produced a number of results in the 1990s which were directed towards the conjecture's resolution. In 2002 and 2003, Grigori Perelman presented a number of fundamental new results about the Ricci flow, including a novel variant of some technical aspects of Hamilton's program. Hamilton and Perelman's works are now widely regarded as forming a proof of the Thurston conjecture, including as a special case the Poincaré conjecture, which had been a well-known open problem in the field of geometric topology since 1904. Their results are considered as a milestone in the fields of geometry and topology.
Mathematical definition
On a smooth manifold M, a smooth Riemannian metric g automatically determines the Ricci tensor Ricg. For each element p of M, by definition gp is a positive-definite inner product on the tangent space TpM at p. If given a one-parameter family of Riemannian metrics gt, one may then consider the derivative ∂/∂t gt, which then assigns to each particular value of t and p a symmetric bilinear form on TpM. Since the Ricci tensor of a Riemannian metric also assigns to each p a symmetric bilinear form on TpM, the following definition is meaningful.
• Given a smooth manifold M and an open real interval (a, b), a Ricci flow assigns, to each t in the interval (a,b), a Riemannian metric gt on M such that ∂/∂t gt = −2 Ricgt.
The Ricci tensor is often thought of as an average value of the sectional curvatures, or as an algebraic trace of the Riemann curvature tensor. However, for the analysis of existence and uniqueness of Ricci flows, it is extremely significant that the Ricci tensor can be defined, in local coordinates, by a formula involving the first and second derivatives of the metric tensor. This makes the Ricci flow into a geometrically-defined partial differential equation. The analysis of the ellipticity of the local coordinate formula provides the foundation for the existence of Ricci flows; see the following section for the corresponding result.
Let k be a nonzero number. Given a Ricci flow gt on an interval (a,b), consider Gt = gkt for t between a/k and b/k. Then ∂/∂t Gt = −2k RicGt. So, with this very trivial change of parameters, the number −2 appearing in the definition of the Ricci flow could be replaced by any other nonzero number. For this reason, the use of −2 can be regarded as an arbitrary convention, albeit one which essentially every paper and exposition on Ricci flow follows. The only significant difference is that if −2 were replaced by a positive number, then the existence theorem discussed in the following section would become a theorem which produces a Ricci flow that moves backwards (rather than forwards) in parameter values from initial data.
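Written out, this is just the chain rule combined with the Ricci flow equation for the original family:

${\frac {\partial }{\partial t}}G_{t}={\frac {\partial }{\partial t}}g_{kt}=k\,{\frac {\partial g_{s}}{\partial s}}{\Big |}_{s=kt}=-2k\operatorname {Ric} ^{g_{kt}}=-2k\operatorname {Ric} ^{G_{t}}.$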
The parameter t is usually called time, although this is only as part of standard informal terminology in the mathematical field of partial differential equations. It is not physically meaningful terminology. In fact, in the standard quantum field theoretic interpretation of the Ricci flow in terms of the renormalization group, the parameter t corresponds to length or energy, rather than time.[1]
Normalized Ricci flow
Suppose that M is a compact smooth manifold, and let gt be a Ricci flow for t in the interval (a, b). Define Ψ:(a, b) → (0, ∞) so that each of the Riemannian metrics Ψ(t)gt has volume 1; this is possible since M is compact. (More generally, it would be possible if each Riemannian metric gt had finite volume.) Then define F:(a, b) → (0, ∞) to be the antiderivative of Ψ which vanishes at a. Since Ψ is positive-valued, F is a bijection onto its image (0, S). Now the Riemannian metrics Gs = Ψ(F −1(s))gF −1(s), defined for parameters s ∈ (0, S), satisfy
${\frac {\partial }{\partial s}}G_{s}=-2\operatorname {Ric} ^{G_{s}}+{\frac {2}{n}}{\frac {\int _{M}R^{G_{s}}\,d\mu _{G_{s}}}{\int _{M}d\mu _{G_{s}}}}G_{s}.$
Here R denotes scalar curvature. This is called the normalized Ricci flow equation. Thus, with an explicitly defined change of scale Ψ and a reparametrization of the parameter values, a Ricci flow can be converted into a normalized Ricci flow. The converse also holds, by reversing the above calculations.
The primary reason for considering the normalized Ricci flow is that it allows a convenient statement of the major convergence theorems for Ricci flow. However, it is not essential to do so, and for virtually all purposes it suffices to consider Ricci flow in its standard form. Moreover, the normalized Ricci flow is not generally meaningful on noncompact manifolds.
Existence and uniqueness
Let $M$ be a smooth closed manifold, and let $g_{0}$ be any smooth Riemannian metric on $M$. Making use of the Nash–Moser implicit function theorem, Hamilton (1982) showed the following existence theorem:
• There exists a positive number $T$ and a Ricci flow $g_{t}$ parametrized by $t\in (0,T)$ such that $g_{t}$ converges to $g_{0}$ in the $C^{\infty }$ topology as $t$ decreases to 0.
He showed the following uniqueness theorem:
• If $\{g_{t}:t\in (0,T)\}$ and $\{{\widetilde {g}}_{t}:t\in (0,{\widetilde {T}})\}$ are two Ricci flows as in the above existence theorem, then $g_{t}={\widetilde {g}}_{t}$ for all $t\in (0,\min\{T,{\widetilde {T}}\}).$
The existence theorem provides a one-parameter family of smooth Riemannian metrics. In fact, any such one-parameter family also depends smoothly on the parameter. Precisely, this says that relative to any smooth coordinate chart $(U,\phi )$ on $M$, the function $g_{ij}:U\times (0,T)\to \mathbb {R} $ is smooth for any $i,j=1,\dots ,n$.
Dennis DeTurck subsequently gave a proof of the above results which uses the Banach implicit function theorem instead.[2] His work is essentially a simpler Riemannian version of Yvonne Choquet-Bruhat's well-known proof and interpretation of well-posedness for the Einstein equations in Lorentzian geometry.
As a consequence of Hamilton's existence and uniqueness theorem, when given the data $(M,g_{0})$, one may speak unambiguously of the Ricci flow on $M$ with initial data $g_{0}$, and one may select $T$ to take on its maximal possible value, which could be infinite. The principle behind virtually all major applications of Ricci flow, in particular in the proof of the Poincaré conjecture and geometrization conjecture, is that, as $t$ approaches this maximal value, the behavior of the metrics $g_{t}$ can reveal and reflect deep information about $M$.
Convergence theorems
Complete expositions of the following convergence theorems are given in Andrews & Hopper (2011) and Brendle (2010).
Let (M, g0) be a smooth closed Riemannian manifold. Under any of the following three conditions:
• M is two-dimensional
• M is three-dimensional and g0 has positive Ricci curvature
• M has dimension greater than three and the product metric on (M, g0) × ℝ has positive isotropic curvature
the normalized Ricci flow with initial data g0 exists for all positive time and converges smoothly, as t goes to infinity, to a metric of constant curvature.
The three-dimensional result is due to Hamilton (1982). Hamilton's proof, inspired by and loosely modeled upon James Eells and Joseph Sampson's epochal 1964 paper on convergence of the harmonic map heat flow,[3] included many novel features, such as an extension of the maximum principle to the setting of symmetric 2-tensors. His paper (together with that of Eells−Sampson) is among the most widely cited in the field of differential geometry. There is an exposition of his result in Chow, Lu & Ni (2006, Chapter 3).
In terms of the proof, the two-dimensional case is properly viewed as a collection of three different results, one for each of the cases in which the Euler characteristic of M is positive, zero, or negative. As demonstrated by Hamilton (1988), the negative case is handled by the maximum principle, while the zero case is handled by integral estimates; the positive case is more subtle, and Hamilton dealt with the subcase in which g0 has positive curvature by combining a straightforward adaptation of Peter Li and Shing-Tung Yau's gradient estimate to the Ricci flow together with an innovative "entropy estimate". The full positive case was demonstrated by Bennett Chow (1991), in an extension of Hamilton's techniques. Since any Ricci flow on a two-dimensional manifold is confined to a single conformal class, it can be recast as a partial differential equation for a scalar function on the fixed Riemannian manifold (M, g0). As such, the Ricci flow in this setting can also be studied by purely analytic methods; correspondingly, there are alternative non-geometric proofs of the two-dimensional convergence theorem.
The higher-dimensional case has a longer history. Soon after Hamilton's breakthrough result, Gerhard Huisken extended his methods to higher dimensions, showing that if g0 almost has constant positive curvature (in the sense of smallness of certain components of the Ricci decomposition), then the normalized Ricci flow converges smoothly to constant curvature. Hamilton (1986) found a novel formulation of the maximum principle in terms of trapping by convex sets, which led to a general criterion relating convergence of the Ricci flow of positively curved metrics to the existence of "pinching sets" for a certain multidimensional ordinary differential equation. As a consequence, he was able to settle the case in which M is four-dimensional and g0 has positive curvature operator. Twenty years later, Christoph Böhm and Burkhard Wilking found a new algebraic method of constructing "pinching sets", thereby removing the assumption of four-dimensionality from Hamilton's result (Böhm & Wilking 2008). Simon Brendle and Richard Schoen showed that positivity of the isotropic curvature is preserved by the Ricci flow on a closed manifold; by applying Böhm and Wilking's method, they were able to derive a new Ricci flow convergence theorem (Brendle & Schoen 2009). Their convergence theorem included as a special case the resolution of the differentiable sphere theorem, which at the time had been a long-standing conjecture. The convergence theorem given above is due to Brendle (2008), which subsumes the earlier higher-dimensional convergence results of Huisken, Hamilton, Böhm & Wilking, and Brendle & Schoen.
Corollaries
The results in dimensions three and higher show that any smooth closed manifold M which admits a metric g0 of the given type must be a space form of positive curvature. Since these space forms are largely understood by work of Élie Cartan and others, one may draw corollaries such as
• Suppose that M is a smooth closed 3-dimensional manifold which admits a smooth Riemannian metric of positive Ricci curvature. If M is simply-connected then it must be diffeomorphic to the 3-sphere.
So if one could show directly that any smooth closed simply-connected 3-dimensional manifold admits a smooth Riemannian metric of positive Ricci curvature, then the Poincaré conjecture would immediately follow. However, as matters are understood at present, this result is only known as a (trivial) corollary of the Poincaré conjecture, rather than vice versa.
Possible extensions
Given any n larger than two, there exist many closed n-dimensional smooth manifolds which do not have any smooth Riemannian metrics of constant curvature. So one cannot hope to be able to simply drop the curvature conditions from the above convergence theorems. It could be possible to replace the curvature conditions by some alternatives, but the existence of compact manifolds such as complex projective space, which has a metric of nonnegative curvature operator (the Fubini-Study metric) but no metric of constant curvature, makes it unclear how much these conditions could be pushed. Likewise, the possibility of formulating analogous convergence results for negatively curved Riemannian metrics is complicated by the existence of closed Riemannian manifolds whose curvature is arbitrarily close to constant and yet admit no metrics of constant curvature.[4]
Li–Yau inequalities
Making use of a technique pioneered by Peter Li and Shing-Tung Yau for parabolic differential equations on Riemannian manifolds, Hamilton (1993a) proved the following "Li–Yau inequality".[5]
• Let $M$ be a smooth manifold, and let $g_{t}$ be a solution of the Ricci flow with $t\in (0,T)$ such that each $g_{t}$ is complete with bounded curvature. Furthermore, suppose that each $g_{t}$ has nonnegative curvature operator. Then, for any curve $\gamma :[t_{1},t_{2}]\to M$ with $[t_{1},t_{2}]\subset (0,T)$, one has
${\frac {d}{dt}}{\big (}R^{g(t)}(\gamma (t)){\big )}+{\frac {R^{g(t)}(\gamma (t))}{t}}+{\frac {1}{2}}\operatorname {Ric} ^{g(t)}(\gamma '(t),\gamma '(t))\geq 0.$
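In the special case of a constant curve $\gamma $, the inequality reduces to ${\tfrac {\partial }{\partial t}}R^{g(t)}+{\tfrac {1}{t}}R^{g(t)}\geq 0$ at each point, which is equivalent to saying that $t\,R^{g(t)}$ is pointwise nondecreasing in $t$ for such solutions.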
Perelman (2002) showed the following alternative Li–Yau inequality.
• Let $M$ be a smooth closed $n$-manifold, and let $g_{t}$ be a solution of the Ricci flow. Consider the backwards heat equation for $n$-forms, i.e. ${\tfrac {\partial }{\partial t}}\omega +\Delta ^{g(t)}\omega =0$; given $p\in M$ and $t_{0}\in (0,T)$, consider the particular solution which, upon integration, converges weakly to the Dirac delta measure as $t$ increases to $t_{0}$. Then, for any curve $\gamma :[t_{1},t_{2}]\to M$ with $[t_{1},t_{2}]\subset (0,T)$, one has
${\frac {d}{dt}}{\big (}f(\gamma (t),t){\big )}+{\frac {f{\big (}\gamma (t),t{\big )}}{2(t_{0}-t)}}\leq {\frac {R^{g(t)}(\gamma (t))+|\gamma '(t)|_{g(t)}^{2}}{2}}.$
where $\omega =(4\pi (t_{0}-t))^{-n/2}e^{-f}{\text{d}}\mu _{g(t)}$.
Both of these remarkable inequalities are of profound importance for the proof of the Poincaré conjecture and geometrization conjecture. The terms on the right hand side of Perelman's Li–Yau inequality motivate the definition of his "reduced length" functional, the analysis of which leads to his "noncollapsing theorem". The noncollapsing theorem allows application of Hamilton's compactness theorem (Hamilton 1995) to construct "singularity models", which are Ricci flows on new three-dimensional manifolds. Owing to the Hamilton–Ivey estimate, these new Ricci flows have nonnegative curvature. Hamilton's Li–Yau inequality can then be applied to see that the scalar curvature is, at each point, a nondecreasing (nonnegative) function of time. This is a powerful result that allows many further arguments to go through. In the end, Perelman shows that any of his singularity models is asymptotically like a complete gradient shrinking Ricci soliton, and such solitons are completely classified; see the section on Ricci solitons below.
See Chow, Lu & Ni (2006, Chapters 10 and 11) for details on Hamilton's Li–Yau inequality; the books Chow et al. (2008) and Müller (2006) contain expositions of both inequalities above.
Examples
Constant-curvature and Einstein metrics
Let $(M,g)$ be a Riemannian manifold which is Einstein, meaning that there is a number $\lambda $ such that ${\text{Ric}}^{g}=\lambda g$. Then $g_{t}=(1-2\lambda t)g$ is a Ricci flow with $g_{0}=g$, since then
${\frac {\partial }{\partial t}}g_{t}=-2\lambda g=-2\operatorname {Ric} ^{g}=-2\operatorname {Ric} ^{g_{t}}.$
If $M$ is closed, then according to Hamilton's uniqueness theorem above, this is the only Ricci flow with initial data $g$. One sees, in particular, that:
• if $\lambda $ is positive, then the Ricci flow "contracts" $g$ since the scale factor $1-2\lambda t$ is less than 1 for positive $t$; furthermore, one sees that $t$ can only be less than $1/2\lambda $, in order that $g_{t}$ is a Riemannian metric. This is the simplest example of a "finite-time singularity".
• if $\lambda $ is zero, which is synonymous with $g$ being Ricci-flat, then $g_{t}$ is independent of time, and so the maximal interval of existence is the entire real line.
• if $\lambda $ is negative, then the Ricci flow "expands" $g$ since the scale factor $1-2\lambda t$ is greater than 1 for all positive $t$; furthermore one sees that $t$ can be taken arbitrarily large. One says that the Ricci flow, for this initial metric, is "immortal".
In each case, since the Riemannian metrics assigned to different values of $t$ differ only by a constant scale factor, one can see that the normalized Ricci flow $G_{s}$ exists for all time and is constant in $s$; in particular, it converges smoothly (to its constant value) as $s\to \infty $.
The Einstein condition has as a special case that of constant curvature; hence the particular examples of the sphere (with its standard metric) and hyperbolic space appear as special cases of the above.
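As a concrete illustration of the contracting case, consider the round 2-sphere of radius $r_{0}$, i.e. $g_{0}=r_{0}^{2}\,g_{S^{2}}$ where $g_{S^{2}}$ is the unit round metric. Since $\operatorname {Ric} ^{g_{0}}=g_{S^{2}}=r_{0}^{-2}g_{0}$, this metric is Einstein with $\lambda =r_{0}^{-2}$, so the Ricci flow is

$g_{t}=\left(1-2r_{0}^{-2}t\right)g_{0},$

a round sphere whose radius satisfies $r(t)^{2}=r_{0}^{2}-2t$; it collapses to a point at time $t=r_{0}^{2}/2$.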
Ricci solitons
Ricci solitons are Ricci flows that may change their size but not their shape up to diffeomorphisms.
• Cylinders S^k × R^l (for k ≥ 2) shrink self-similarly under the Ricci flow up to diffeomorphisms
• A significant 2-dimensional example is the cigar soliton, which is given by the metric (dx^2 + dy^2)/(e^{4t} + x^2 + y^2) on the Euclidean plane. Although this metric shrinks under the Ricci flow, its geometry remains the same. Such solutions are called steady Ricci solitons.
• An example of a 3-dimensional steady Ricci soliton is the Bryant soliton, which is rotationally symmetric, has positive curvature, and is obtained by solving a system of ordinary differential equations. A similar construction works in arbitrary dimension.
• There exist numerous families of Kähler manifolds, invariant under a U(n) action and birational to Cn, which are Ricci solitons. These examples were constructed by Cao and Feldman-Ilmanen-Knopf. (Chow-Knopf 2004)
• A 4-dimensional example exhibiting only torus symmetry was recently discovered by Bamler-Cifarelli-Conlon-Deruelle.
A gradient shrinking Ricci soliton consists of a smooth Riemannian manifold (M,g) and f ∈ C∞(M) such that
$\operatorname {Ric} ^{g}+\operatorname {Hess} ^{g}f={\frac {1}{2}}g.$
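A basic noncompact example is the "Gaussian shrinker": flat Euclidean space $\mathbb {R} ^{n}$ with its standard metric $g$ and the potential $f(x)=|x|^{2}/4$, for which $\operatorname {Ric} ^{g}=0$ and $\operatorname {Hess} ^{g}f={\tfrac {1}{2}}g$, so the defining equation holds.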
One of the major achievements of Perelman (2002) was to show that, if M is a closed three-dimensional smooth manifold, then finite-time singularities of the Ricci flow on M are modeled on complete gradient shrinking Ricci solitons (possibly on underlying manifolds distinct from M). In 2008, Huai-Dong Cao, Bing-Long Chen, and Xi-Ping Zhu completed the classification of these solitons, showing:
• Suppose (M,g,f) is a complete gradient shrinking Ricci soliton with dim(M) = 3. If M is simply-connected then the Riemannian manifold (M,g) is isometric to $\mathbb {R} ^{3}$, $S^{3}$, or $S^{2}\times \mathbb {R} $, each with their standard Riemannian metrics. This was originally shown by Perelman (2003a) with some extra conditional assumptions. Note that if M is not simply-connected, then one may consider the universal cover $\pi :M'\to M,$ and then the above theorem applies to $(M',\pi ^{\ast }g,f\circ \pi ).$
There is not yet a good understanding of gradient shrinking Ricci solitons in any higher dimensions.
Relationship to uniformization and geometrization
Hamilton's first work on Ricci flow was published at the same time as William Thurston's geometrization conjecture, which concerns the topological classification of three-dimensional smooth manifolds.[6] Hamilton's idea was to define a kind of nonlinear diffusion equation which would tend to smooth out irregularities in the metric. Suitable canonical forms had already been identified by Thurston; the possibilities, called Thurston model geometries, include the three-sphere S3, three-dimensional Euclidean space E3, three-dimensional hyperbolic space H3, which are homogeneous and isotropic, and five slightly more exotic Riemannian manifolds, which are homogeneous but not isotropic. (This list is closely related to, but not identical with, the Bianchi classification of the three-dimensional real Lie algebras into nine classes.)
Hamilton succeeded in proving that any smooth closed three-manifold which admits a metric of positive Ricci curvature also admits a unique Thurston geometry, namely a spherical metric, which does indeed act like an attracting fixed point under the Ricci flow, renormalized to preserve volume. (Under the unrenormalized Ricci flow, the manifold collapses to a point in finite time.) However, this doesn't prove the full geometrization conjecture, because of the restrictive assumption on curvature.
Indeed, a triumph of nineteenth-century geometry was the proof of the uniformization theorem, the analogous topological classification of smooth two-manifolds. In that setting, Hamilton showed that the Ricci flow does indeed evolve a negatively curved two-manifold into a two-dimensional multi-holed torus which is locally isometric to the hyperbolic plane. This topic is closely related to important topics in analysis, number theory, dynamical systems, mathematical physics, and even cosmology.
Note that the term "uniformization" suggests a kind of smoothing away of irregularities in the geometry, while the term "geometrization" suggests placing a geometry on a smooth manifold. Geometry is being used here in a precise manner akin to Klein's notion of geometry (see Geometrization conjecture for further details). In particular, the result of geometrization may be a geometry that is not isotropic. In most cases including the cases of constant curvature, the geometry is unique. An important theme in this area is the interplay between real and complex formulations. In particular, many discussions of uniformization speak of complex curves rather than real two-manifolds.
Singularities
Hamilton showed that a compact Riemannian manifold always admits a short-time Ricci flow solution. Later Shi generalized the short-time existence result to complete manifolds of bounded curvature.[7] In general, however, due to the highly non-linear nature of the Ricci flow equation, singularities form in finite time. These singularities are curvature singularities, which means that as one approaches the singular time the norm of the curvature tensor $|\operatorname {Rm} |$ blows up to infinity in the region of the singularity. A fundamental problem in Ricci flow is to understand all the possible geometries of singularities. When successful, this can lead to insights into the topology of manifolds. For instance, analyzing the geometry of singular regions that may develop in 3d Ricci flow is the crucial ingredient in Perelman's proof of the Poincaré and geometrization conjectures.
Blow-up limits of singularities
To study the formation of singularities it is useful, as in the study of other non-linear differential equations, to consider blow-up limits. Intuitively speaking, one zooms into the singular region of the Ricci flow by rescaling time and space. Under certain assumptions, the zoomed-in flow tends to a limiting Ricci flow $(M_{\infty },g_{\infty }(t)),t\in (-\infty ,0]$, called a singularity model. Singularity models are ancient Ricci flows, i.e. they can be extended infinitely into the past. Understanding the possible singularity models in Ricci flow is an active research endeavor.
Below, we sketch the blow-up procedure in more detail: Let $(M,g_{t}),\,t\in [0,T),$ be a Ricci flow that develops a singularity as $t\rightarrow T$. Let $(p_{i},t_{i})\in M\times [0,T)$ be a sequence of points in spacetime such that
$K_{i}:=\left|\operatorname {Rm} (g_{t_{i}})\right|(p_{i})\rightarrow \infty $
as $i\rightarrow \infty $. Then one considers the parabolically rescaled metrics
$g_{i}(t)=K_{i}g\left(t_{i}+{\frac {t}{K_{i}}}\right),\quad t\in [-K_{i}t_{i},0]$
Due to the symmetry of the Ricci flow equation under parabolic dilations, the metrics $g_{i}(t)$ are also solutions to the Ricci flow equation. In the case that
$|Rm|\leq K_{i}{\text{ on }}M\times [0,t_{i}],$
i.e. up to time $t_{i}$ the maximum of the curvature is attained at $p_{i}$, then the pointed sequence of Ricci flows $(M,g_{i}(t),p_{i})$ subsequentially converges smoothly to a limiting ancient Ricci flow $(M_{\infty },g_{\infty }(t),p_{\infty })$. Note that in general $M_{\infty }$ is not diffeomorphic to $M$.
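The fact, used above, that each rescaled family $g_{i}(t)$ is again a Ricci flow can be checked directly: by the chain rule and the invariance of the Ricci tensor under multiplication of the metric by a positive constant,

${\frac {\partial }{\partial t}}g_{i}(t)=K_{i}\cdot {\frac {1}{K_{i}}}\,{\frac {\partial g}{\partial s}}{\Big |}_{s=t_{i}+t/K_{i}}=-2\operatorname {Ric} ^{g(t_{i}+t/K_{i})}=-2\operatorname {Ric} ^{g_{i}(t)}.$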
Type I and Type II singularities
Hamilton distinguishes between Type I and Type II singularities in Ricci flow. In particular, one says a Ricci flow $(M,g_{t}),\,t\in [0,T)$, encountering a singularity at time $T$ is of Type I if
$\sup _{t<T}(T-t)|Rm|<\infty $.
Otherwise the singularity is of Type II. It is known that the blow-up limits of Type I singularities are gradient shrinking Ricci solitons.[8] In the Type II case it is an open question whether the singularity model must be a steady Ricci soliton—so far all known examples are.
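For instance, the shrinking round sphere from the Examples section above is a Type I singularity: its curvature blows up like a constant multiple of $(T-t)^{-1}$, so $(T-t)|\operatorname {Rm} |$ stays bounded as $t\rightarrow T$.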
Singularities in 3d Ricci flow
In 3d the possible blow-up limits of Ricci flow singularities are well-understood. By Hamilton, Perelman and recent work by Brendle, blowing up at points of maximum curvature leads to one of the following three singularity models:
• The shrinking round spherical space form $S^{3}/\Gamma $
• The shrinking round cylinder $S^{2}\times \mathbb {R} $
• The Bryant soliton
The first two singularity models arise from Type I singularities, whereas the last one arises from a Type II singularity.
Singularities in 4d Ricci flow
In four dimensions very little is known about the possible singularities, other than that the possibilities are far more numerous than in three dimensions. To date the following singularity models are known
• $S^{3}\times \mathbb {R} $
• $S^{2}\times \mathbb {R} ^{2}$
• The 4d Bryant soliton
• Compact Einstein manifold of positive scalar curvature
• Compact gradient Kahler–Ricci shrinking soliton
• The FIK shrinker [9]
• The BCCD shrinker [10]
Note that the first three examples are generalizations of 3d singularity models. The FIK shrinker models the collapse of an embedded sphere with self-intersection number −1.
Relation to diffusion
To see why the evolution equation defining the Ricci flow is indeed a kind of nonlinear diffusion equation, we can consider the special case of (real) two-manifolds in more detail. Any metric tensor on a two-manifold can be written with respect to an exponential isothermal coordinate chart in the form
$ds^{2}=\exp(2\,p(x,y))\,\left(dx^{2}+dy^{2}\right).$
(These coordinates provide an example of a conformal coordinate chart, because angles, but not distances, are correctly represented.)
The easiest way to compute the Ricci tensor and Laplace-Beltrami operator for our Riemannian two-manifold is to use the differential forms method of Élie Cartan. Take the coframe field
$\sigma ^{1}=\exp(p)\,dx,\;\;\sigma ^{2}=\exp(p)\,dy$
so that the metric tensor becomes
$\sigma ^{1}\otimes \sigma ^{1}+\sigma ^{2}\otimes \sigma ^{2}=\exp(2p)\,\left(dx\otimes dx+dy\otimes dy\right).$
Next, given an arbitrary smooth function $h(x,y)$, compute the exterior derivative
$dh=h_{x}dx+h_{y}dy=\exp(-p)h_{x}\,\sigma ^{1}+\exp(-p)h_{y}\,\sigma ^{2}.$
Take the Hodge dual
$\star dh=-\exp(-p)h_{y}\,\sigma ^{1}+\exp(-p)h_{x}\,\sigma ^{2}=-h_{y}\,dx+h_{x}\,dy.$
Take another exterior derivative
$d\star dh=-h_{yy}\,dy\wedge dx+h_{xx}\,dx\wedge dy=\left(h_{xx}+h_{yy}\right)\,dx\wedge dy$
(where we used the anti-commutative property of the exterior product). That is,
$d\star dh=\exp(-2p)\,\left(h_{xx}+h_{yy}\right)\,\sigma ^{1}\wedge \sigma ^{2}.$
Taking another Hodge dual gives
$\Delta h=\star d\star dh=\exp(-2p)\,\left(h_{xx}+h_{yy}\right)$
which gives the desired expression for the Laplace/Beltrami operator
$\Delta =\exp(-2\,p(x,y))\left(D_{x}^{2}+D_{y}^{2}\right).$
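This expression can also be double-checked symbolically. The following short script is a sketch using the Python library SymPy (not part of the differential-forms derivation above); it substitutes the metric $\exp(2p)\left(dx^{2}+dy^{2}\right)$ into the general coordinate formula $\Delta h={\tfrac {1}{\sqrt {\det g}}}\,\partial _{i}\left({\sqrt {\det g}}\,g^{ij}\partial _{j}h\right)$ and confirms that it reduces to the expression just obtained.

```python
# Symbolic sanity check of the Laplace-Beltrami operator for g = exp(2p)(dx^2 + dy^2).
import sympy as sp

x, y = sp.symbols('x y')
p = sp.Function('p')(x, y)   # conformal factor
h = sp.Function('h')(x, y)   # arbitrary smooth function

sqrt_det_g = sp.exp(2 * p)   # sqrt(det g), since det g = exp(4p)
g_inv = sp.exp(-2 * p)       # inverse metric is exp(-2p) times the identity

# Laplace-Beltrami operator: (1/sqrt(det g)) * d_i( sqrt(det g) * g^{ij} * d_j h )
lap = (sp.diff(sqrt_det_g * g_inv * sp.diff(h, x), x)
       + sp.diff(sqrt_det_g * g_inv * sp.diff(h, y), y)) / sqrt_det_g

expected = sp.exp(-2 * p) * (sp.diff(h, x, 2) + sp.diff(h, y, 2))
print(sp.simplify(lap - expected))   # prints 0
```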
To compute the curvature tensor, we take the exterior derivative of the covector fields making up our coframe:
$d\sigma ^{1}=p_{y}\exp(p)dy\wedge dx=-\left(p_{y}dx\right)\wedge \sigma ^{2}=-{\omega ^{1}}_{2}\wedge \sigma ^{2}$
$d\sigma ^{2}=p_{x}\exp(p)dx\wedge dy=-\left(p_{x}dy\right)\wedge \sigma ^{1}=-{\omega ^{2}}_{1}\wedge \sigma ^{1}.$
From these expressions, we can read off the only independent Spin connection one-form
${\omega ^{1}}_{2}=p_{y}dx-p_{x}dy,$
where we have taken advantage of the anti-symmetric property of the connection (${\omega ^{2}}_{1}=-{\omega ^{1}}_{2}$). Take another exterior derivative
$d{\omega ^{1}}_{2}=p_{yy}dy\wedge dx-p_{xx}dx\wedge dy=-\left(p_{xx}+p_{yy}\right)\,dx\wedge dy.$
This gives the curvature two-form
${\Omega ^{1}}_{2}=-\exp(-2p)\left(p_{xx}+p_{yy}\right)\,\sigma ^{1}\wedge \sigma ^{2}=-\Delta p\,\sigma ^{1}\wedge \sigma ^{2}$
from which we can read off the only linearly independent component of the Riemann tensor using
${\Omega ^{1}}_{2}={R^{1}}_{212}\,\sigma ^{1}\wedge \sigma ^{2}.$
Namely
${R^{1}}_{212}=-\Delta p$
from which the only nonzero components of the Ricci tensor are
$R_{22}=R_{11}=-\Delta p.$
From this, we find components with respect to the coordinate cobasis, namely
$R_{xx}=R_{yy}=-\left(p_{xx}+p_{yy}\right).$
But the metric tensor is also diagonal, with
$g_{xx}=g_{yy}=\exp(2p)$
and after some elementary manipulation, we obtain an elegant expression for the Ricci flow:
${\frac {\partial p}{\partial t}}=\Delta p.$
This is manifestly analogous to the best known of all diffusion equations, the heat equation
${\frac {\partial u}{\partial t}}=\Delta u$
where now $\Delta =D_{x}^{2}+D_{y}^{2}$ is the usual Laplacian on the Euclidean plane. The reader may object that the heat equation is of course a linear partial differential equation—where is the promised nonlinearity in the p.d.e. defining the Ricci flow?
The answer is that nonlinearity enters because the Laplace-Beltrami operator depends upon the same function p which we used to define the metric. But notice that the flat Euclidean plane is given by taking $p(x,y)=0$. So if $p$ is small in magnitude, we can consider it to define small deviations from the geometry of a flat plane, and if we retain only first order terms in computing the exponential, the Ricci flow on our two-dimensional almost flat Riemannian manifold becomes the usual two dimensional heat equation. This computation suggests that, just as (according to the heat equation) an irregular temperature distribution in a hot plate tends to become more homogeneous over time, so too (according to the Ricci flow) an almost flat Riemannian manifold will tend to flatten out the same way that heat can be carried off "to infinity" in an infinite flat plate. But if our hot plate is finite in size, and has no boundary where heat can be carried off, we can expect to homogenize the temperature, but clearly we cannot expect to reduce it to zero. In the same way, we expect that the Ricci flow, applied to a distorted round sphere, will tend to round out the geometry over time, but not to turn it into a flat Euclidean geometry.
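To make the analogy concrete, the following is a minimal numerical sketch, not taken from the literature, of the two-dimensional Ricci flow in this conformal gauge, $\partial p/\partial t=\exp(-2p)\left(p_{xx}+p_{yy}\right)$, on a flat periodic square; the grid size, time step and initial bump are arbitrary illustrative choices.

```python
# Explicit finite-difference sketch of the 2-D Ricci flow in conformal gauge,
#   dp/dt = exp(-2p) * (p_xx + p_yy),
# on a flat periodic square. All numerical parameters are illustrative only.
import numpy as np

n = 64                       # grid points per side
L = 2 * np.pi                # side length of the periodic square
h = L / n                    # grid spacing
dt = 0.1 * h**2              # conservative explicit time step
steps = 2000

grid = np.arange(n) * h
X, Y = np.meshgrid(grid, grid, indexing="ij")
p = 0.5 * np.exp(-((X - L / 2)**2 + (Y - L / 2)**2))   # initial conformal factor: a bump

def laplacian(u, h):
    """Standard 5-point Laplacian with periodic boundary conditions."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2

initial_spread = p.max() - p.min()
for _ in range(steps):
    p += dt * np.exp(-2 * p) * laplacian(p, h)

# The conformal factor (and with it the curvature) flattens out, as in the heat equation.
print(f"spread of p: initial {initial_spread:.3f}, final {p.max() - p.min():.3f}")
```

As expected from the heat-equation analogy, the printed spread of $p$ (and hence of the curvature) decreases as the flow runs; on the torus the total area $\int e^{2p}\,dx\,dy$ is conserved by the flow and the metric flattens out rather than shrinking to a point.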
Recent developments
The Ricci flow has been intensively studied since 1981. Some recent work has focused on the question of precisely how higher-dimensional Riemannian manifolds evolve under the Ricci flow, and in particular, what types of parametric singularities may form. For instance, a certain class of solutions to the Ricci flow demonstrates that neckpinch singularities will form on an evolving $n$-dimensional metric Riemannian manifold having a certain topological property (positive Euler characteristic), as the flow approaches some characteristic time $t_{0}$. In certain cases, such neckpinches will produce manifolds called Ricci solitons.
For a 3-dimensional manifold, Perelman showed how to continue past the singularities using surgery on the manifold.
Kähler metrics remain Kähler under Ricci flow, and so Ricci flow has also been studied in this setting, where it is called Kähler–Ricci flow.
Notes
1. Friedan, D. (1980). "Nonlinear models in 2+ε dimensions". Physical Review Letters (Submitted manuscript). 45 (13): 1057–1060. Bibcode:1980PhRvL..45.1057F. doi:10.1103/PhysRevLett.45.1057.
2. DeTurck, Dennis M. (1983). "Deforming metrics in the direction of their Ricci tensors". J. Differential Geom. 18 (1): 157–162. doi:10.4310/jdg/1214509286.
3. Eells, James Jr.; Sampson, J.H. (1964). "Harmonic mappings of Riemannian manifolds". Amer. J. Math. 86 (1): 109–160. doi:10.2307/2373037. JSTOR 2373037.
4. Gromov, M.; Thurston, W. (1987). "Pinching constants for hyperbolic manifolds". Invent. Math. 89 (1): 1–12. doi:10.1007/BF01404671. S2CID 119850633.
5. Li, Peter; Yau, Shing-Tung (1986). "On the parabolic kernel of the Schrödinger operator". Acta Math. 156 (3–4): 153–201. doi:10.1007/BF02399203. S2CID 120354778.
6. Weeks, Jeffrey R. (1985). The Shape of Space: how to visualize surfaces and three-dimensional manifolds. New York: Marcel Dekker. ISBN 978-0-8247-7437-0.. A popular book that explains the background for the Thurston classification program.
7. Shi, W.-X. (1989). "Deforming the metric on complete Riemannian manifolds". Journal of Differential Geometry. 30: 223–301. doi:10.4310/jdg/1214443292.
8. Enders, J.; Mueller, R.; Topping, P. (2011). "On Type I Singularities in Ricci flow". Communications in Analysis and Geometry. 19 (5): 905–922. arXiv:1005.1624. doi:10.4310/CAG.2011.v19.n5.a4. S2CID 968534.
9. Maximo, D. (2014). "On the blow-up of four-dimensional Ricci flow singularities". J. Reine Angew. Math. 2014 (692): 153–171. arXiv:1204.5967. doi:10.1515/crelle-2012-0080. S2CID 17651053.
10. Bamler, R.; Cifarelli, C.; Conlon, R.; Deruelle, A. (2022). "A new complete two-dimensional shrinking gradient Kähler-Ricci soliton". arXiv:2206.10785 [math.DG].
References
Articles for a popular mathematical audience.
• Anderson, Michael T. (2004). "Geometrization of 3-manifolds via the Ricci flow" (PDF). Notices Amer. Math. Soc. 51 (2): 184–193. MR 2026939.
• Milnor, John (2003). "Towards the Poincaré Conjecture and the classification of 3-manifolds" (PDF). Notices Amer. Math. Soc. 50 (10): 1226–1233. MR 2009455.
• Morgan, John W. (2005). "Recent progress on the Poincaré conjecture and the classification of 3-manifolds". Bull. Amer. Math. Soc. (N.S.). 42 (1): 57–78. doi:10.1090/S0273-0979-04-01045-6. MR 2115067.
• Tao, T. (2008). "Ricci flow" (PDF). In Gowers, Timothy; Barrow-Green, June; Leader, Imre (eds.). The Princeton Companion to Mathematics. Princeton University Press. pp. 279–281. ISBN 978-0-691-11880-2.
Research articles.
• Böhm, Christoph; Wilking, Burkhard (2008). "Manifolds with positive curvature operators are space forms". Ann. of Math. (2). 167 (3): 1079–1097. arXiv:math/0606187. doi:10.4007/annals.2008.167.1079. JSTOR 40345372. MR 2415394. S2CID 15521923.
• Brendle, Simon (2008). "A general convergence result for the Ricci flow in higher dimensions". Duke Math. J. 145 (3): 585–601. arXiv:0706.1218. doi:10.1215/00127094-2008-059. MR 2462114. S2CID 438716. Zbl 1161.53052.
• Brendle, Simon; Schoen, Richard (2009). "Manifolds with 1/4-pinched curvature are space forms". J. Amer. Math. Soc. 22 (1): 287–307. arXiv:0705.0766. Bibcode:2009JAMS...22..287B. doi:10.1090/S0894-0347-08-00613-9. JSTOR 40587231. MR 2449060. S2CID 2901565.
• Cao, Huai-Dong; Xi-Ping Zhu (June 2006). "A Complete Proof of the Poincaré and Geometrization Conjectures — application of the Hamilton-Perelman theory of the Ricci flow" (PDF). Asian Journal of Mathematics. 10 (2). MR 2488948. Erratum.
• Revised version: Huai-Dong Cao; Xi-Ping Zhu (2006). "Hamilton-Perelman's Proof of the Poincaré Conjecture and the Geometrization Conjecture". arXiv:math.DG/0612069.
• Chow, Bennett (1991). "The Ricci flow on the 2-sphere". J. Differential Geom. 33 (2): 325–334. doi:10.4310/jdg/1214446319. MR 1094458. Zbl 0734.53033.
• Colding, Tobias H.; Minicozzi, William P., II (2005). "Estimates for the extinction time for the Ricci flow on certain 3-manifolds and a question of Perelman" (PDF). J. Amer. Math. Soc. 18 (3): 561–569. arXiv:math/0308090. doi:10.1090/S0894-0347-05-00486-8. JSTOR 20161247. MR 2138137. S2CID 2810043.
• Hamilton, Richard S. (1982). "Three-manifolds with positive Ricci curvature". Journal of Differential Geometry. 17 (2): 255–306. doi:10.4310/jdg/1214436922. MR 0664497. Zbl 0504.53034.
• Hamilton, Richard S. (1986). "Four-manifolds with positive curvature operator". J. Differential Geom. 24 (2): 153–179. doi:10.4310/jdg/1214440433. MR 0862046. Zbl 0628.53042.
• Hamilton, Richard S. (1988). "The Ricci flow on surfaces". Mathematics and general relativity (Santa Cruz, CA, 1986). Contemp. Math. Vol. 71. Amer. Math. Soc., Providence, RI. pp. 237–262. doi:10.1090/conm/071/954419. MR 0954419.
• Hamilton, Richard S. (1993a). "The Harnack estimate for the Ricci flow". J. Differential Geom. 37 (1): 225–243. doi:10.4310/jdg/1214453430. MR 1198607. Zbl 0804.53023.
• Hamilton, Richard S. (1993b). "Eternal solutions to the Ricci flow". J. Differential Geom. 38 (1): 1–11. doi:10.4310/jdg/1214454093. MR 1231700. Zbl 0792.53041.
• Hamilton, Richard S. (1995a). "A compactness property for solutions of the Ricci flow". Amer. J. Math. 117 (3): 545–572. doi:10.2307/2375080. JSTOR 2375080. MR 1333936.
• Hamilton, Richard S. (1995b). "The formation of singularities in the Ricci flow". Surveys in differential geometry, Vol. II (Cambridge, MA, 1993). Int. Press, Cambridge, MA. pp. 7–136. doi:10.4310/SDG.1993.v2.n1.a2. MR 1375255.
• Hamilton, Richard S. (1997). "Four-manifolds with positive isotropic curvature". Comm. Anal. Geom. 5 (1): 1–92. doi:10.4310/CAG.1997.v5.n1.a1. MR 1456308. Zbl 0892.53018.
• Hamilton, Richard S. (1999). "Non-singular solutions of the Ricci flow on three-manifolds". Comm. Anal. Geom. 7 (4): 695–729. doi:10.4310/CAG.1999.v7.n4.a2. MR 1714939.
• Bruce Kleiner; John Lott (2008). "Notes on Perelman's papers". Geometry & Topology. 12 (5): 2587–2855. arXiv:math.DG/0605667. doi:10.2140/gt.2008.12.2587. MR 2460872. S2CID 119133773.
• Perelman, Grisha (2002). "The entropy formula for the Ricci flow and its geometric applications". arXiv:math/0211159.
• Perelman, Grisha (2003a). "Ricci flow with surgery on three-manifolds". arXiv:math/0303109.
• Perelman, Grisha (2003b). "Finite extinction time for the solutions to the Ricci flow on certain three-manifolds". arXiv:math/0307245.
Textbooks
• Andrews, Ben; Hopper, Christopher (2011). The Ricci Flow in Riemannian Geometry: A Complete Proof of the Differentiable 1/4-Pinching Sphere Theorem. Lecture Notes in Mathematics. Vol. 2011. Heidelberg: Springer. doi:10.1007/978-3-642-16286-2. ISBN 978-3-642-16285-5.
• Brendle, Simon (2010). Ricci Flow and the Sphere Theorem. Graduate Studies in Mathematics. Vol. 111. Providence, RI: American Mathematical Society. doi:10.1090/gsm/111. ISBN 978-0-8218-4938-5.
• Cao, H.D.; Chow, B.; Chu, S.C.; Yau, S.T., eds. (2003). Collected Papers on Ricci Flow. Series in Geometry and Topology. Vol. 37. Somerville, MA: International Press. ISBN 1-57146-110-8.
• Chow, Bennett; Chu, Sun-Chin; Glickenstein, David; Guenther, Christine; Isenberg, James; Ivey, Tom; Knopf, Dan; Lu, Peng; Luo, Feng; Ni, Lei (2007). The Ricci Flow: Techniques and Applications. Part I. Geometric Aspects. Mathematical Surveys and Monographs. Vol. 135. Providence, RI: American Mathematical Society. doi:10.1090/surv/135. ISBN 978-0-8218-3946-1.
• Chow, Bennett; Chu, Sun-Chin; Glickenstein, David; Guenther, Christine; Isenberg, James; Ivey, Tom; Knopf, Dan; Lu, Peng; Luo, Feng; Ni, Lei (2008). The Ricci Flow: Techniques and Applications. Part II. Analytic Aspects. Mathematical Surveys and Monographs. Vol. 144. Providence, RI: American Mathematical Society. doi:10.1090/surv/144. ISBN 978-0-8218-4429-8.
• Chow, Bennett; Chu, Sun-Chin; Glickenstein, David; Guenther, Christine; Isenberg, James; Ivey, Tom; Knopf, Dan; Lu, Peng; Luo, Feng; Ni, Lei (2010). The Ricci Flow: Techniques and Applications. Part III. Geometric-Analytic Aspects. Mathematical Surveys and Monographs. Vol. 163. Providence, RI: American Mathematical Society. doi:10.1090/surv/163. ISBN 978-0-8218-4661-2.
• Chow, Bennett; Chu, Sun-Chin; Glickenstein, David; Guenther, Christine; Isenberg, James; Ivey, Tom; Knopf, Dan; Lu, Peng; Luo, Feng; Ni, Lei (2015). The Ricci Flow: Techniques and Applications. Part IV. Long-Time Solutions and Related Topics. Mathematical Surveys and Monographs. Vol. 206. Providence, RI: American Mathematical Society. doi:10.1090/surv/206. ISBN 978-0-8218-4991-0.
• Chow, Bennett; Knopf, Dan (2004). The Ricci Flow: An Introduction. Mathematical Surveys and Monographs. Vol. 110. Providence, RI: American Mathematical Society. doi:10.1090/surv/110. ISBN 0-8218-3515-7.
• Chow, Bennett; Lu, Peng; Ni, Lei (2006). Hamilton's Ricci Flow. Graduate Studies in Mathematics. Vol. 77. Beijing, New York: American Mathematical Society, Providence, RI; Science Press. doi:10.1090/gsm/077. ISBN 978-0-8218-4231-7.
• Morgan, John W.; Fong, Frederick Tsz-Ho (2010). Ricci Flow and Geometrization of 3-Manifolds. University Lecture Series. Vol. 53. Providence, RI: American Mathematical Society. doi:10.1090/ulect/053. ISBN 978-0-8218-4963-7.
• Morgan, John; Tian, Gang (2007). Ricci Flow and the Poincaré Conjecture. Clay Mathematics Monographs. Vol. 3. Providence, RI and Cambridge, MA: American Mathematical Society and Clay Mathematics Institute. ISBN 978-0-8218-4328-4.
• Müller, Reto (2006). Differential Harnack inequalities and the Ricci flow. EMS Series of Lectures in Mathematics. Zürich: European Mathematical Society (EMS). doi:10.4171/030. hdl:2318/1701023. ISBN 978-3-03719-030-2.
• Topping, Peter (2006). Lectures on the Ricci Flow. London Mathematical Society Lecture Note Series. Vol. 325. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511721465. ISBN 0-521-68947-3.
• Zhang, Qi S. (2011). Sobolev Inequalities, Heat Kernels under Ricci Flow, and the Poincaré Conjecture. Boca Raton, FL: CRC Press. ISBN 978-1-4398-3459-6.
\begin{document}
\begin{frontmatter}
\title{Factorization of composed polynomials and applications}
\author[UFMG]{F.E. Brochero Mart\'inez} \ead{[email protected]} \author[USP]{Lucas Reis\corref{cor1}} \ead{[email protected]} \author[UFMG,UFRR]{Lays Silva} \ead{[email protected]} \fntext[UFRR]{Permanent address: Departamento de Matem\'{a}tica, Universidade Federal de Roraima, Boa Vista-RR, 69310-000, Brazil} \address[UFMG]{Departamento de Matem\'{a}tica, Universidade Federal de Minas Gerais, Belo Horizonte, MG, 30123-970, Brazil.} \address[USP]{Universidade de S\~{a}o Paulo, Instituto de Ci\^{e}ncias Matem\'{a}ticas e de Computa\c{c}\~{a}o, S\~{a}o Carlos, SP 13560-970, Brazil.} \cortext[cor1]{Corresponding author}
\begin{abstract} Let $\mathbb{F}_q$ be the finite field with $q$ elements, where $q$ is a prime power and $n$ is a positive integer. In this paper, we explore the factorization of $f(x^{n})$ over $\mathbb{F}_q$, where $f(x)$ is an irreducible polynomial over $\mathbb F_q$. Our main results provide generalizations of recent works on the factorization of binomials $x^n-1$. As an application, we provide an explicit formula for the number of irreducible factors of $f(x^n)$ under some generic conditions on $f$ and $n$. \end{abstract}
\begin{keyword} irreducible polynomials, cyclotomic polynomials, finite fields, $\lambda$-constacyclic codes \MSC[2010]{12E20 \sep 11T30} \end{keyword} \end{frontmatter}
\section{Introduction}
The factorization of polynomials over finite fields plays an important role in a wide variety of technological situations, including efficient and secure communications, cryptography \cite{cryp}, deterministic simulations of random process, digital tracking system \cite{Gol} and error-correcting codes \cite{codes}.
For instance, certain codes can be constructed from the divisors of special polynomials; for $\lambda\in \mathbb F_q^*$ and $n$ a positive integer, each divisor of $x^n-\lambda$ in $\mathbb F_q[x]$ determines a $\lambda$-constacyclic code of length $n$ over $\mathbb F_q$. In fact, each $\lambda$-constacyclic code of length $n$ can be represented as an ideal of the ring ${\mathcal R}_{n,\lambda}=\mathbb F_q[x]/\langle x^n-\lambda\rangle$, and each ideal of $\mathcal R_{n,\lambda}$ is generated by a unique factor of $x^n-\lambda$. In addition, if $g\in \mathbb F_q[x]$ is an irreducible factor of $x^n-\lambda$, the polynomial $\frac{x^n-\lambda}{g(x)}$ generates an ideal of $\frac{\mathbb F_q[x]}{(x^n-\lambda)}$ that can be seen as a minimal $\lambda$-constacyclic code of length $n$ and dimension $\deg(g(x))$. In particular, in order to determine every minimal $\lambda$-constacyclic code, the factorization of $x^n-\lambda$ over $\mathbb F_q$ is required. Some partial results about this problem can be found in~\cite{LLWZ, LiYu, BaRa}. The case $\lambda=1$ corresponds to the well-known cyclic codes (see \cite{LiNi}).
We observe that the polynomial $x^n-\lambda$ is of the form $f(x^n)$, where $f(x)=x-\lambda \in \mathbb F_q[x]$ is an irreducible polynomial.
The following theorem provides a criterion on the irreducibility of composed polynomials $f(x^n)$ when $f$ is irreducible.
\begin{theorem}[\cite{LiNi} Theorem 3.35]\label{3.35} Let $n$ be a positive integer and $f\in \mathbb F_{q}[x]$ be an irreducible polynomial of degree $k$ and order $e$. Then the polynomial $f(x^{n})$ is irreducible over $\mathbb F_{q}$ if and only if the following conditions are satisfied: \begin{itemize} \item[(1)] $\mathop{\rm rad}\nolimits(n)$ divides $e$, \item[(2)] $\gcd\left(n, \frac {q^{k}-1}{e}\right)=1$ and \item[(3)] if $4$ divides $n$ then $q^{k}\equiv 1\pmod 4$. \end{itemize} In addition, in the case that the polynomial $f(x^{n})$ is irreducible, it has degree $kn$ and order $en$. \end{theorem}
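For concrete parameters, the three conditions of Theorem \ref{3.35} are straightforward to test numerically. The following Python sketch is purely illustrative (the helper names are ours and not part of the original results); it assumes that $k$ and $e$ are the degree and order of an irreducible $f$, so that $e$ divides $q^{k}-1$.
\begin{verbatim}
from math import gcd

def rad(n):
    # product of the distinct prime divisors of n
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def composed_is_irreducible(q, n, k, e):
    # conditions (1)-(3) of Theorem 3.35 for f(x^n), with deg(f) = k, ord(f) = e
    cond1 = e % rad(n) == 0
    cond2 = gcd(n, (q**k - 1) // e) == 1
    cond3 = (n % 4 != 0) or (q**k % 4 == 1)
    return cond1 and cond2 and cond3
\end{verbatim}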
In the present paper, we consider $f\in\mathbb F_q[x]$ an irreducible polynomial of degree $k$ and order $e$, and impose some condition on $n$ (similar to the ones in \cite{BGO}) that make the polynomial $f(x^n)$ reducible. We then discuss the factorization of $f(x^n)$ over $\mathbb F_q$. In~\cite{BGO}, the authors considered the factorization of $x^n-1\in \mathbb F_q[x]$, where $n$ is a positive integer such that every prime factor of $n$ divides $q-1$. A more general situation is considered by Wu, Yue and Fan~\cite{WYF}. In the first part of this paper, we generalize these results for polynomial of the form $f(x^n)$, where $f\in \mathbb F_q[x]$ is irreducible. We further consider a more general situation when there exists some prime divisor of $n$ that does not divide $q-1$.
\section{Explicit Factors of $f(x^{n})$}
Throughout this paper, $\mathbb F_q$ denotes the finite field with $q$ elements, where $q$ is a prime power. Also, $\overline{\mathbb F}_q$ denotes the algebraic closure of $\mathbb F_q$. For any positive integer $k$, $\mathop{\rm rad}\nolimits(k)$ denotes the product of the distinct prime divisors of $k$ and, for each prime divisor $p$ of $k$, $ \nu_{p}(k)$ is the $p$-adic valuation of $k$. In other words, if $k = p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdots p_{s}^{\alpha_{s}}$ then $\mathop{\rm rad}\nolimits(k) = p_{1}\cdots p_{s}$ and $\nu_{p_{i}}(k) = \alpha_{i}$. If $f\in \mathbb F_q[x]$ is a polynomial not divisible by $x$, the {\em order} of $f(x)$ is the least positive integer $e$ such that $f(x)$ divides $x^e-1$. It is known that if $f(x)$ is an irreducible polynomial over $\mathbb F_q[x]$ of degree $m$, then each root $\alpha$ of $f$ is an element of $\mathbb F_{q^{m}}$ and, if $f(x)\ne x$, the order $e$ of $f$ is the multiplicative order of $\alpha$ and $m=\mathop{\rm ord}\nolimits_eq$. In addition, the roots of $f(x)$ are simple and they are given by $\alpha, \alpha^{q}, \cdots,\alpha^{q^{m-1}}$.
Let $\sigma_{q}$ denote the Frobenius automorphism of $\overline{\mathbb F}_q$ given by $\sigma_q(\alpha)=\alpha^q$ for any $\alpha\in \overline{\mathbb F}_q$. This automorphism naturally extends to the ring automorphism $$\begin{array}{cccc}
& \! \overline{\mathbb{F}}_{q}[x] & \! \longrightarrow & \! \overline{\mathbb{F}}_{q}[x] \\ & a_mx^m+\cdots+a_1x+a_0 & \! \longmapsto& a_m^qx^m+\cdots+a_1^qx+a_0^q, \end{array} $$ that we also denote by $\sigma_q$.
The following well-known result provides some properties on the $p$-adic valuation of numbers of the form $a^j-1$ with $a\equiv 1\pmod p$. Its proof is direct by induction so we omit the details.
\begin{lemma}[Lifting the Exponent Lemma]\label{LEL} Let $p$ be a prime and $\nu_p$ be the $p$-valuation. The following hold: \begin{enumerate}[(1)] \item if $p$ is an odd prime such that $p$ divides $a-1$ then $\nu_{p}(a^{k}-1)=\nu_{p}(a-1)+\nu_{p}(k)$; \item if $p=2$ and $a$ is odd number, then $$ \nu_{2}(a^{k}-1)= \begin{cases} \nu_{2}(a-1)&\text{if $k$ is odd,} \\ \nu_{2}(a^{2}-1)+\nu_{2}(k)- 1&\text{if $k$ is even.}\\ \end{cases} $$ \end{enumerate} \end{lemma}
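A quick numerical sanity check of Lemma \ref{LEL} (again purely illustrative; the helper \texttt{nu} and the sample values are ours):
\begin{verbatim}
def nu(p, n):
    # p-adic valuation of n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

# odd p with p | a - 1:  nu_p(a^k - 1) = nu_p(a - 1) + nu_p(k)
p, a, k = 5, 26, 75
assert nu(p, a**k - 1) == nu(p, a - 1) + nu(p, k)

# p = 2, a odd, k even:  nu_2(a^k - 1) = nu_2(a^2 - 1) + nu_2(k) - 1
a, k = 7, 12
assert nu(2, a**k - 1) == nu(2, a * a - 1) + nu(2, k) - 1
\end{verbatim}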
In this section, we give an explicit factorization of polynomial $f(x^n)$ over $\mathbb F_q[x]$, where $f(x)$ is an irreducible polynomial in $\mathbb F_q[x]$ and $n$ satisfies some special conditions. This result generalizes the ones found in~\cite{BGO}, where the authors obtain the explicit factorization of $x^{n}-1\in \mathbb F_q[x]$ under the condition that $\mathop{\rm rad}\nolimits(n)$ divides $q-1$.
We have the following preliminary result.
\begin{lemma}\label{lepri1} Let $n$ be a positive integer such that $\mathop{\rm rad}\nolimits(n)$ divides $q-1$ and let $f\in \mathbb{F}_{q}[x]$ be an irreducible polynomial of degree $k$ and order $e$, with $\gcd(ek,n)=1$. Additionally, suppose that $q\equiv 1\pmod 4$ if $8$ divides $n$. Let $r$ be a positive integer such that $nr\equiv 1\pmod e$.
If $\alpha\in \overline{\mathbb F}_q$ is any root of $f(x)$, $\theta\in \mathbb F_q^{*}$ is an element of order $d=\gcd(n,q-1)$ and $t$ is a divisor of $\frac nd$, the following hold: \begin{enumerate}[(a)] \item the polynomial $g_t(x):=\displaystyle\prod_{i=0}^{k-1} (x-\alpha^{trq^{i}})$ is irreducible over $\mathbb F_q$, has degree $k$ and order $e$; \item for each nonnegative integer $u\ge 0$ such that $\gcd(u, t)=1$, the polynomial $g_t(\theta^{u}x^{t})$ is irreducible in $\mathbb F_q[x]$ and divides $f(x^{n})$. \end{enumerate} \end{lemma}
\begin{proof} We split the proof into cases.
\begin{enumerate}[(a)] \item Since $f\in \mathbb F_q[x]$ is irreducible of degree $k$ and $\alpha$ is one of its roots, $\alpha\in \mathbb F_{q^k}$ and $\alpha^{q^k}=\alpha$. In particular, $\sigma_{q}(g_{t}(x))=\displaystyle\prod_{i=0}^{k-1} (x-\alpha^{trq^{i+1}})=\displaystyle\prod_{i=0}^{k-1} (x-\alpha^{trq^{i}})$ and so $g_{t}\in \mathbb F_q [x]$. In order to prove that $g_t(x)$ is irreducible, it suffices to show that the least positive integer $i$ for which $\alpha^{rtq^{i}}=\alpha^{rt}$ is $i=k$. We observe that $\alpha^{rtq^{i}}=\alpha^{rt}$ if and only if $\alpha^{rt(q^{i}-1)}=1$, that is also equivalent to $rt(q^{i}-1)\equiv 0\pmod e$. The result follows from the fact that $\gcd(e,tr)=1$ and $\mathop{\rm ord}\nolimits_{e} q=k$.
\item Suppose that $\lambda$ is a root of $g_{t}(\theta^{u}x^{t})$, or equivalently, $\lambda^{t}\theta^{u}$ is a root of $g_{t}(x)$. Then there exists $j\in \mathbb N$ such that $\lambda^t\theta^{u} =\alpha^{trq^j}$. In particular, $$\alpha^{q^j}= \alpha^{nrq^j}=\left(\alpha^{trq^j}\right)^{\frac nt}=\left( \lambda^t\theta^{u}\right)^{\frac nt} =\lambda^n.$$ Therefore, $\lambda^n=\alpha^{q^j}$ is a root of $f(x)$ and so any root of $g_{t}(\theta^{u}x^{t})$ is also root of $f(x^n)$. This shows that $g_{t}(\theta^{u}x^{t})$ divides $f(x^n)$. Now we are left to prove that $g_t(\theta^{u}x^t)$ is, in fact, irreducible over $\mathbb F_q$. Since $g_t(\theta^{u}x)$ is irreducible, we only need to verify that $g_t(\theta^{u}x)$ and $t$ satisfy the conditions of Theorem \ref{3.35}. We observe that each root of $g_{t}(\theta^{u}x)$ is of the form $\theta^{-u}\alpha^{trq^i}$, hence $$\mathop{\rm ord}\nolimits\,g_{t}(\theta^{u}x)=\mathop{\rm ord}\nolimits(\theta^{-u}\alpha^{trq^i})=\mathop{\rm ord}\nolimits(\alpha^{trq^i}) \cdot\mathop{\rm ord}\nolimits(\theta^{-u})=\dfrac{ed}{\gcd (u,\,d)},$$ where in the second equality we use the fact that $\gcd(e,n)=1$ and $d$ divides $n$. Since $\gcd (u, t)=1$ and $\mathop{\rm rad}\nolimits(t)$ divides $d$, we have that $\mathop{\rm rad}\nolimits(t)$ divides $\frac{d}{\gcd (u,\,d)}$. In particular, $\mathop{\rm rad}\nolimits(t)$ divides $\mathop{\rm ord}\nolimits\,g_{t}(\theta^{u}x)$. Next, we verify that $\gcd\left( t,\,\dfrac{q^{k}-1}{\mathop{\rm ord}\nolimits\,g_{t}(\theta^{u}x)}\right) =1$. In fact, $$\gcd\left( t,\,\dfrac{q^{k}-1}{e\dfrac{d}{\gcd (u,d)}}\right) =\gcd \left( t,\,\dfrac{(q^{k}-1)\gcd (u,d)}{ed}\right).$$ In addition, $\gcd (u,\,t)=1$, $\gcd (n,\,e)=1$ and $\mathop{\rm rad}\nolimits(t)$ divides $d$ (that divides $n$). Therefore, $\gcd (t,\,e)=1$ and so $$\gcd \left( t,\,\dfrac{(q^{k}-1)\gcd (u,d)}{ed}\right)=\gcd \left( t,\,\dfrac{q^{k}-1}{d}\right).$$ Let $p$ be any prime divisor of $t$, hence $p$ also divides $n$. Since $\gcd(n, k)=1$, we have that $p$ does not divide $k$. In particular, $\nu_p(k)=0$ and $k$ is odd if $p=2$. From Lemma~\ref{LEL}, we have that $\nu_{p}(q^{k}-1)=\nu_{p}(q-1)+\nu_{p}(k)$. Therefore, $\nu_{p}(q^{k}-1)=\nu_{p}(q-1)$. In addition, $t$ divides $\frac {n}{d}$ and so $\nu_p(n)> \nu_p(d)$. Since $$\nu_{p}(d)=\nu_p(\gcd(n,q-1))=\min\lbrace\nu_{p}(n),\,\nu_{p}(q-1)\rbrace,$$ we conclude that $\nu_p(d)=\nu_p(q-1)$ and $\nu_{p}\left(\dfrac{ q^{k}-1}{d}\right) =0$. But $p$ is an arbitrary prime divisor of $t$ and so we conclude that $\gcd \left( t,\,\dfrac{q^{k}-1}{d}\right)=1$. Finally, we need to verify that $q\equiv 1\pmod 4$ if $4$ divides $t$. Since $t$ divides $\frac{n}{\gcd(n, q-1)}$ and every prime divisor of $n$ divides $q-1$, if $4$ divides $t$, it follows that $\gcd(n, q-1)$ is even and $\frac{n}{\gcd(n, q-1)}$ is divisible by $4$. Therefore, $n$ is divisible by $8$. From our hypothesis, $q\equiv 1\pmod 4$. \end{enumerate} \end{proof}
The main result of this paper is stated as follows.
\begin{theorem}\label{pri1} Let $f\in \mathbb F_q[x]$ be a monic irreducible polynomial of degree $k$ and order $e$. Let $q, n, r, \theta$ and $g_t(x)$ be as in Lemma \ref{lepri1}. Then $f(x^{n})$ splits into monic irreducible polynomials over $\mathbb F_q$ as
\begin{equation}\label{produtoirreductiveis}
f(x^{n})=\displaystyle\prod_{t|m}\displaystyle\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}\theta^{-uk} g_t(\theta^{u}x^{t})
\end{equation} where $m:=\frac nd=\frac n{\gcd(n,q-1)}$. \end{theorem}
\begin{proof} We observe that the factors $g_{t}(\theta^{u}x^{t})$ for $1\leq u\leq d$ with $\gcd (u,\,t)=1$ are all distinct. In fact, if $g_{t}(\theta^{u}x^{t})=g_{t}(\theta^{w}x^{t})$ for some $1\leq u, w\leq d$ with $\gcd(u, t)=\gcd (w,\,t)=1$, there exists a nonnegative integer $i$ such that $\theta^{-u}\alpha^{tr}=\theta^{-w}\alpha^{trq^{i}}$, that is equivalent to $\theta^{w-u}=\alpha^{tr(q^{i}-1)}$ and then $$\left( \theta^{u-w}\right) ^{e}=\left( \alpha^{tr(q^{i}-1)}\right)^{e}=1.$$
The last equality entails that the order of $\theta$ divides $(w-u)e$. From hyothesis, $\gcd(n, e)=1$, hence $\gcd(d,e)=1$ and so $d$ divides $w-u$. Since $|w-u|<d$, we necessarily have $w=u$. Let $h(x)$ be the polynomial defined by the product at the RHS of Eq.~\eqref{produtoirreductiveis}. Since the factors of this product are monic, pairwise distinct. and divide $f(x^n)$, $h(x)$ is monic and divides $f(x^n)$. In particular, the equality $f(x^n)=h(x)$ holds if and only if $h(x)$ and $f(x^n)$ have the same degree, i.e., $\deg(h(x))=nk$. Since $t$ divides $m$, $\mathop{\rm rad}\nolimits(t)$ divides $\mathop{\rm rad}\nolimits(n)$ (hence divides $q-1$) and so $\mathop{\rm rad}\nolimits(t)$ divides $d$. Therefore, for each divisor $t$ of $\frac{n}{d}$, the number of polynomials of the form $\theta^{-uk} g_t(\theta^{u}x^{t})$ with $1\le u\le d$ and $\gcd(u,t)=1$ equals $\dfrac{d}{\mathop{\rm rad}\nolimits(t)}\varphi(\mathop{\rm rad}\nolimits(t))=\dfrac{d\varphi(t)}{t}$, where $\varphi$ is the {\em Euler Phi function}. In particular, the degree of $h(x)$ equals
$$\sum_{t|m} \dfrac{d\varphi(t)}{t}tk=dk\sum_{t|m}\varphi(t)=dkm=nk.$$ \end{proof}
\begin{corollary}\label{conta1} Let $f(x)$, $n$ and $m$ be as in Theorem \ref{pri1}. For each divisor $t$ of $m$, the number of irreducible factors of $f(x^n)$ of degree $kt$ is $\dfrac{\varphi(t)}{t}\cdot\gcd (n,\,q-1)$. The total number of irreducible factors of $f(x^n)$ in $\mathbb F_q[x]$ is
$$\gcd (n,\,q-1)\cdot\displaystyle\prod_{{p|m}\atop{p\text{ prime}}}\left(1+\nu_{p}(m)\dfrac{p-1}{p}\right).$$ \end{corollary} \begin{proof} This result is essentially the same as Corollary 3.2b in \cite{BGO}. We know, from Theorem~\ref{pri1}, that $f(x^n)$ splits into monic irreducible polynomials as
$$f(x^{n})=\displaystyle\prod_{t|m}\displaystyle\prod_{{1\leqslant u\leqslant d}\atop{\gcd (u,\,t)=1}}\theta^{-ku}g_{t}(\theta^{u}x^{t}).$$ In the proof of Theorem \ref{pri1}, we show that for each divisor $t$ of $m$, the number of polynomials of the form $g_t(\theta^{u}x^{t})$ equals $$\frac{\varphi(\mathop{\rm rad}\nolimits (t))}{\mathop{\rm rad}\nolimits (t)}d= \frac{\varphi(\mathop{\rm rad}\nolimits (t))}{\mathop{\rm rad}\nolimits (t)}\gcd(n, q-1)=\frac{\varphi(t)}{t}\gcd(n, q-1)$$ Therefore, the total number of irreducible factors of $f(x^n)$ equals:
$$\sum_{t|m} \dfrac{\varphi(t)}{t}\gcd (n,\,q-1)=\gcd (n,\,q-1)\sum_{t|m} \dfrac{\varphi(t)}{t}=\gcd (n,\,q-1)\prod_{{p|m}\atop{p\text{ prime}}}\left( 1+\nu_{p}(m)\frac{p-1}{p}\right),$$ where the last equality is obtained following the same steps as in the proof of Corollary 3.2b in \cite{BGO}. \end{proof}
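The counting formula of Corollary \ref{conta1} is easy to evaluate in exact arithmetic. The sketch below is only illustrative (the helper names are ours) and assumes that $f$ and $n$ satisfy the hypotheses of Theorem \ref{pri1}.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def prime_factorization(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def total_number_of_factors(q, n):
    # total number of irreducible factors of f(x^n), Corollary conta1
    d = gcd(n, q - 1)
    m = n // d
    total = Fraction(d)
    for p, a in prime_factorization(m).items():
        total *= 1 + Fraction(a * (p - 1), p)
    return int(total)
\end{verbatim}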
\begin{example} Let $f(x)=x^{3}+4x^{2}+6x+1\in \mathbb{F}_{11}[x]$ and $n=5^{a}, a\ge 1$. We observe that $f(x)$ is an irreducible polynomial of order $14$ over $\mathbb{F}_{11}$. In fact, $$\Phi_{14}(x)=\dfrac{x^{14}-1}{(x^{7}-1)(x+1)} =f(x)h(x) \text{ and $\mathop{\rm ord}\nolimits_{14}(11) =3$,}$$ where $h(x)=x^{3}+6x^{2}+4x+1$. In particular, $f(x)$ and $n$ satisfy the conditions of Theorem \ref{pri1} and, following the notation of Theorem~\ref{pri1}, we can take $\theta=3$. For $n=5$, we have that $r=3$ and $$g_{1}(x)= (x-\alpha)(x-\alpha^{-3})(x-\alpha^{-5})=f(x).$$ Therefore, $$f(x^5)=\prod_{1\le u \le5} 9^uf(3^ux).$$ If $a\ge 2$, we can take $r=3^{a}$ and for each integer $0\le b\le a-1$, $$g_{_{5^{b}}}(x)=\begin{cases} (x-\alpha)(x-\alpha^{9})(x-\alpha^{11})&\text{ if $a-b$ is odd}\\ (x-\alpha^3)(x-\alpha^{5})(x-\alpha^{13})&\text{ if $a-b$ is even.} \end{cases} $$ Therefore, $$f(x^{5^{a}})=\prod_{1\le u \le5} 9^uf(3^ux)\cdot \displaystyle\prod_{1\le b<a \atop \text{$b$ even}}\displaystyle\prod_{u=1}^4 9^uf(3^{u}x^{5^{b}})\displaystyle\prod_{1\le b<a \atop b\text{ odd}}\displaystyle\prod_{u=1}^4 9^uh(3^{u}x^{5^{b}}).$$ \end{example}
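The example above can be double-checked with a computer algebra system. The following sketch assumes SymPy and its \texttt{modulus} option for factoring over the prime field $\mathbb F_{11}$; for $n=5$ one expects $\gcd(5,10)=5$ cubic factors.
\begin{verbatim}
from sympy import symbols, factor_list, degree

x = symbols('x')
f = x**3 + 4*x**2 + 6*x + 1          # irreducible of order 14 over F_11
_, factors = factor_list(f.subs(x, x**5), modulus=11)
print(sorted(degree(g, x) for g, _ in factors))
# expected: [3, 3, 3, 3, 3]
\end{verbatim}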
\subsection{Applications of Theorem~\ref{pri1}} Here we present some interesting situations where Theorem~\ref{pri1} gives a more explicit result. We recall that, for a positive integer $e$ with $\gcd(e, q)=1$, the $e$-th cyclotomic polynomial $\Phi_e(x)\in \mathbb F_q[x]$ is defined recursively by the identity $x^e-1=\prod_{m|e}\Phi_m(x)$. Of course, any irreducible factor of $\Phi_e(x)$ has order $e$. The following classical result provides the degree distribution of the irreducible factors of cyclotomic polynomials.
\begin{theorem}[\cite{LiNi} Theorem 2.47] \label{thm:lini} Let $e$ be a positive integer such that $\gcd(e, q)=1$ and set $k=\mathop{\rm ord}\nolimits_eq$. Then $\Phi_e(x)$ factors as $\frac{\varphi(e)}{k}$ irreducible polynomials over $\mathbb F_q$, each of degree $k$. \end{theorem}
The following result entails that, if we know the factorization of $\Phi_e(x)$ (resp. $x^e-1$), under suitable conditions on the positive integer $n$, we immediately obtain the factorization of $\Phi_e(x^n)$ (resp. $x^{en}-1$).
\begin{theorem}\label{thm:app} Let $e$ be a positive integer such that $\gcd(e, q)=1$, set $k=\mathop{\rm ord}\nolimits_eq$ and $l=\frac{\varphi(e)}{k}$. Additionally, suppose that $n$ is a positive integer such that $\mathop{\rm rad}\nolimits(n)$ divides $q-1$, $\gcd(n, ke)=1$ and $q\equiv 1\pmod 4$ if $n$ is divisible by $8$. If $\Phi_e(x)=\prod_{1\le i\le l}f_i(x)$ is the factorization of $\Phi_e(x)$ over $\mathbb F_q$, then the factorization of $\Phi_e(x^n)$ over $\mathbb F_q$ is given as follows
\begin{equation}\label{eq:app1}\Phi_e(x^n)=\prod_{i=1}^l\prod_{t|m}\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}\theta^{-uk} f_i(\theta^{u}x^{t}),\end{equation} where $d=\gcd(n, q-1)$, $m=\frac{n}{d}$ and $\theta\in \mathbb F_q$ is any element of order $d$. In particular, if $x^e-1=\prod_{i=1}^{N}F_i(x)$ is the factorization of $x^e-1$ over $\mathbb F_q$, then the factorization of $x^{en}-1$ over $\mathbb F_q$ is given as follows
\begin{equation}\label{eq:app2}x^{en}-1=\prod_{i=1}^N\prod_{t|m}\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}\theta^{-u\cdot \deg(F_i)} F_i(\theta^{u}x^{t}).\end{equation} \end{theorem}
\begin{proof} We observe that, for any $1\le i\le l$, $f_i(x)$ has order $e$ and, from Theorem~\ref{thm:lini}, $f_i$ has degree $k$. In particular, from hypothesis, we are under the conditions of Theorem~\ref{pri1}. For each divisor $t$ of $m$, let $g_{t, i}$ be the polynomial of degree $k$ and order $e$ associated to $f_i$ as in Lemma~\ref{lepri1}. From Theorem~\ref{pri1}, we have that
$$f_i(x^n)=\prod_{t|m}\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}\theta^{-uk} g_{i, t}(\theta^{u}x^{t}),$$ hence
$$\Phi_e(x^n)=\prod_{i=1}^lf_i(x^n)=\prod_{i=1}^l\prod_{t|m}\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}\theta^{-uk} g_{i, t}(\theta^{u}x^{t}).$$
In particular, in order to obtain Eq.~\eqref{eq:app1}, it suffices to prove that, for any divisor $t$ of $m$, we have the following equality \begin{equation}\label{eq:cyclo}\prod_{i=1}^lg_{t, i}(x)=\prod_{i=1}^lf_i(x).\end{equation} Since each $g_{t, i}$ is irreducible of order $e$, $g_{t, i}$ divides $\Phi_e(x)=\prod_{1\le i\le l}f_i(x)$ and so Eq.~\eqref{eq:cyclo} holds if and only if the polynomials $g_{t, i}$ are pairwise distinct. We then verify that such polynomials are, in fact, pairwise distinct. For this, if $g_{t, i}=g_{t, j}$ for some $1\le i, j\le l$, from definition, there exist roots $\alpha_i, \alpha_j$ of $f_i$ and $f_j$, respectively, such that $$\alpha_i^{rt}=\alpha_j^{rtq^{h}},$$ for some $h\ge 0$, where $r$ is a positive integer such that $rn\equiv 1\pmod e$. Raising the last equality to the $\frac{n}{t}$-th power, we obtain $\alpha_i=\alpha_j^{q^h}$, i.e., $\alpha_i$ and $\alpha_j^{q^h}$ are conjugates over $\mathbb F_q$. Therefore, they have the same minimal polynomial, i.e., $f_i=f_j$ and so $i=j$. This concludes the proof of Eq.~\eqref{eq:app1}. We now proceed to Eq.~\eqref{eq:app2}. From definition,
$$x^{en}-1=\prod_{i=1}^NF_i(x^n)=\prod_{m|e}\Phi_m(x^n).$$ In particular, in order to prove Eq.~\eqref{eq:app2}, we only need to verify that Eq.~\eqref{eq:app1} holds when replacing $e$ by any of its divisors. If $m$ divides $e$, $k'=\mathop{\rm ord}\nolimits_mq$ divides $k=\mathop{\rm ord}\nolimits_eq$. Therefore, $\gcd(n, ke)=1$ implies that $\gcd(n, k'm)=1$. In particular, Eq.~\eqref{eq:app1} holds for $e=m$. \end{proof}
In what follows, we provide some particular situations where Theorem~\ref{thm:app} naturally applies. It is well-known that if $P$ is an odd prime number and $a$ is a positive integer such that $\gcd(a, P)=1$ and $\mathop{\rm ord}\nolimits_{P^2}a=\varphi(P^2)=P(P-1)$, then $\mathop{\rm ord}\nolimits_{P^s}a=\varphi(P^s)$ for any $s\ge 1$ (i.e., if $a$ is a primitive root modulo $P^2$, then it is a primitive root modulo $P^s$ for any $s\ge 1$). In particular, from Theorem~\ref{thm:lini}, we have the following well-known result.
\begin{corollary} Let $P$ be an odd prime number such that $\mathop{\rm ord}\nolimits_{P^2}q=\varphi(P^2)=P(P-1)$, i.e., $q$ is a primitive root modulo $P^2$. Then, for any $s\ge 1$ and $1\le i\le s$, $\Phi_{P^i}(x)$ is irreducible over $\mathbb F_q$. In particular, for any $s\ge 1$, the factorization of $x^{P^s}-1$ over $\mathbb F_q$ is given by $$x^{P^s}-1=(x-1)\prod_{i=1}^s\Phi_{P^i}(x).$$ \end{corollary}
Combining the previous corollary with Eq.~\eqref{eq:app2} we obtain the following result.
\begin{corollary}\label{cor:app} Let $P$ be an odd prime number such that $\mathop{\rm ord}\nolimits_{P^2}q=\varphi(P^2)=P(P-1)$, i.e., $q$ is a primitive root modulo $P^2$. Additionally, let $n$ be a positive integer such that $\mathop{\rm rad}\nolimits(n)$ divides $q-1$ and $\gcd(n, P-1)=1$. Set $d=\gcd(n, q-1)$ and $m=\frac{n}{d}$. If $\theta$ is any element in $\mathbb F_q$ of order $d$ then, for any $s\ge 1$, the factorization of $\Phi_{P^s}(x^n)$ and $x^{P^sn}-1$ over $\mathbb F_q$ are given as follows
$$\Phi_{P^s}(x^n)=\prod_{t|m}\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}\theta^{u\cdot \varphi(P^s)} \Phi_{P^s}(\theta^{u}x^{t}),$$ and
$$x^{P^sn}-1=\left(\prod_{t|m}\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}(x^t-\theta^u) \right) \times \left(\prod_{1\le i\le s}\prod_{t|m}\prod_{{1\leq u\leq d}\atop{\gcd(u,t)=1}}\theta^{u\cdot \varphi(P^i)} \Phi_{P^i}(\theta^{u}x^{t})\right).$$ \end{corollary}
Corollary~\ref{cor:app} extends Lemma 5.3 in~\cite{MR18}, where the case $n$ a power of a prime $r\ne P$ that divides $q-1$ (but does not divide $P-1$) is considered. As follows, we provide some examples of Corollary~\ref{cor:app}.
\begin{example} Let $q$ be a prime power such that $q-1\equiv 3, 6\pmod 9$ and $q$ is primitive modulo $25=5^2$ (there exist $2\cdot \varphi(20)=16$ residues modulo $225=9\cdot 25$ with this property). We observe that Corollary~\ref{cor:app} applies to $P=5$ and $n=3^a$ with $a\ge 1$. We have that $d=\gcd(q-1, 3^{a})=3$ and $m=3^{a-1}$. If $\theta\in \mathbb F_q$ is any element of order $3$, for any $a, s\ge 1$, we have that $$x^{5^{s}3^{a}}-1=\left(\prod_{t=0}^{a-1}\prod_{{1\leq u\leq 3}\atop{\gcd(u,3^t)=1}}(x^{3^{t}}-\theta^u) \right)\times \left(\prod_{i=1}^{s}\prod_{t=0}^{a-1}\prod_{{1\leq u\leq 3}\atop{\gcd(u, 3^t)=1}}\theta^{-u\cdot \varphi(5^{i})}\Phi_{5^i}(\theta^{u}x^{3^t})\right),$$ where $\Phi_{5^i}(x)=x^{4\cdot 5^{i-1}}+x^{3\cdot 5^{i-1}}+x^{2\cdot 5^{i-1}}+x^{5^{i-1}}+1$ for $i\ge 1$. \end{example}
\begin{example} Let $q$ be a prime power such that $\gcd(q-1, 25)=5$ and $q$ is primitive modulo $9$ (there exist $4\cdot \varphi(6)=8$ residues modulo $225=25\cdot 9$ with this property). We observe that Corollary~\ref{cor:app} applies to $P=3$ and $n=5^a$ with $a\ge 1$. We have that $d=\gcd(q-1, 5^{a})=5$ and $m=5^{a-1}$. If $\theta\in \mathbb F_q$ is any element of order $5$, for any $a, s\ge 1$, we have that $$x^{5^{a}3^{s}}-1=\left(\prod_{t=0}^{a-1}\prod_{{1\leq u\leq 5}\atop{\gcd(u,5^t)=1}}(x^{5^{t}}-\theta^u) \right)\times \left(\prod_{i=1}^{s}\prod_{t=0}^{a-1}\prod_{{1\leq u\leq 5}\atop{\gcd(u, 5^t)=1}}\theta^{-u\cdot \varphi(3^{i})}\Phi_{3^i}(\theta^{u}x^{5^t})\right),$$ where $\Phi_{3^i}(x)=x^{2\cdot 3^{i-1}}+x^{3^{i-1}}+1$ for $i\ge 1$. \end{example}
\begin{example} Let $q$ be a prime power such that $\gcd(q-1, 225)=15$ and $q$ is primitive modulo $289=17^2$ (there exist $\varphi(15)\cdot \varphi(17\cdot 16)=1024$ residues modulo $225\cdot 289$ with this property). We observe that Corollary~\ref{cor:app} applies to $P=17$ and $n=3^a5^{b}$. For simplicity, we only consider the case $a, b\ge 1$ (the cases $a=0$ or $b=0$ are similarly treated). We have that $d=\gcd(q-1, 3^a5^b)=15$ and $m=3^{a-1}5^{b-1}$. If $\theta\in \mathbb F_q$ is any element of order $15$, for any $a, b, s\ge 1$, we have that
\begin{align*}x^{3^a5^b17^s}-1=&\left(\prod_{{0\le t_1\le a-1}\atop{0\le t_2\le b-1}}\prod_{{1\leq u\leq 15}\atop{\gcd(u, 3^{t_1}5^{t_2})=1}}(x^{3^{t_1}5^{t_2}}-\theta^u) \right)\times \\ {}&\left(\prod_{1\le i\le s}\prod_{{0\le t_1\le a-1}\atop{0\le t_2\le b-1}}\prod_{{1\leq u\leq 15}\atop{\gcd(u, 3^{t_1}5^{t_2})=1}}\theta^{-u\cdot \varphi(17^{i})}\Phi_{17^i}(\theta^{u}x^{3^{t_1}5^{t_2}})\right),\end{align*} where $\Phi_{17^i}(x)=\sum_{j=0}^{16}x^{j\cdot 17^{i-1}}$ for $i\ge 1$.
\end{example}
\section{The general case} In the previous section, we provided the factorization of $f(x^n)$ over $\mathbb F_q$ under special conditions on $f$ and $n$. In particular, we assumed that $\mathop{\rm rad}\nolimits(n)$ divides $q-1$ and $q\equiv 1\pmod 4$ if $n$ is divisible by $8$. In this section, we show a natural theoretical procedure to extend such result, removing these conditions on $n$. In order to do that, the following definition is useful. \begin{definition} Let $n$ be a positive integer such that $\gcd(n,q)=1$ and set $S_n=\mathop{\rm ord}\nolimits_{\mathop{\rm rad}\nolimits(n)}q$. Let $s_n$ be the positive integer defined as follows $$s_n:=\begin{cases} S_n & \text{if $q^{S_n}\not\equiv 3 \pmod 4$ or $n\not\equiv 0\pmod 8$,}\\
2S_n&\text{otherwise.}
\end{cases} $$ \end{definition}
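The integer $s_n$ can be computed directly from the definition; the sketch below is illustrative only (the helper names are ours) and assumes $\gcd(n,q)=1$.
\begin{verbatim}
def rad(n):
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def multiplicative_order(a, m):
    # least j >= 1 with a^j = 1 (mod m); assumes gcd(a, m) = 1
    if m == 1:
        return 1
    j, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        j += 1
    return j

def s_n(q, n):
    S = multiplicative_order(q, rad(n))
    if pow(q, S, 4) != 3 or n % 8 != 0:
        return S
    return 2 * S
\end{verbatim}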
In particular, the previous section dealt with positive integers $n$ for which $s_n=1$. Throughout this section, we fix $f\in \mathbb F_q[x]$ an irreducible polynomial of degree $k$ and order $e$ and consider positive integers $n$ such that $\gcd(n, ek)=1$, $s_n>1$ and $\gcd(s_n, k)=1$. In addition, as a natural extension of the previous section, $d$ denotes the number $\gcd(n, q^{s_n}-1)$ and $m=\frac nd= \frac n{\gcd( n, q^{s_n}-1)}$.
We observe that, since $\gcd(s_n, k)=1$, the polynomial $f$ is also irreducible over $\mathbb F_{q^{s_n}}$. Let $\alpha$ be any root of $f$. From Theorem~\ref{pri1}, the irreducible factors of $f(x^n)$ in $\mathbb F_{q^{s_n}}[x]$ are the polynomials \begin{equation}\label{G_tu} G_{t,u}(x):=\displaystyle\prod_{i=0}^{k-1} (x^t-\theta^{-u}\alpha^{trq^{is_n}}), \end{equation} where \begin{itemize} \item $r$ is a positive integer such that $rn\equiv 1\pmod e$; \item $t$ is a divisor of $m$; \item $\theta\in \mathbb F_{q^{s_n}}^*$ is any element of order $d$;
\item $\gcd(t,u)=1$, $1\le u\le d$. \end{itemize}
Now, for each polynomial $G_{t,u}$, we need to determine what is the smallest extension of $\mathbb F_q$ that contains its coefficients. This will provide the irreducible factor of $f(x^n)$ (over $\mathbb F_q$) associated to $G_{t, u}$.
\begin{definition} For $t$ and $u$ as above, let $l_{t,u}$ be the least positive integer $v$ such that $G_{t,u}(x) \in \mathbb F_{q^v}[x]$. \end{definition}
\begin{remark} Since $G_{t,u}(x) \in \mathbb F_{q^{s_n}}[x]$, we have that $l_{t,u}$ is a divisor of $s_n$. Using the Frobenius automorphism, we conclude that every irreducible factor of $f(x^n)$ in $\mathbb F_q[x]$ is of the form $$\prod_{j=0}^{l_{t,u}-1} \sigma_q^j(G_{t,u}(x)).$$ \end{remark}
\begin{remark}\label{fatoresirredutiveis} We observe that $$\prod_{j=0}^{l_{t,u}-1} \sigma_q^j(G_{t,u}(x))=\prod_{j=0}^{l_{t,u}-1}\prod_{i=0}^{k-1} (x^t-\sigma_q^j(\theta^{-u}\alpha^{trq^{is_n}})),$$ is a polynomial of weight (i.e., number of nonzero coefficients) at most $$k\cdot l_{t, u}+1\le k\cdot s_n+1.$$ In particular, if $f(x)=x-1$, the weight of every irreducible factor of $x^n-1$ is at most $s_n+1$; the cases $s_n=1$ and $s_n=2$ are treated in~\cite{BGO}, where the irreducible factors are binomials and trinomials, respectively. \end{remark}
The following lemma provides a way of obtaining the numbers $l_{t, u}$. In particular, we observe that they do not depend on $t$.
\begin{lemma}\label{l_tu} The number $l_{t,u}$ is the least positive integer $v$ such that $\dfrac{\gcd(n, q^{s_n} -1)}{\gcd(n, q^{v}-1)}$ divides $u$. \end{lemma}
\begin{proof} By definition, $l_{t,u}$ is the least positive integer $v$ such that $G_{t,u}\in \mathbb{F}_{q^{v}}[x]$. This last condition is equivalent to $$(\theta^{-u}\alpha^{-tr})^{q^{v}}=\theta^{-u}\alpha^{-trq^{is_{n}}}\,\,\,\, \text{for some integer $i$}.$$ Therefore, $\theta^{-u(q^{v}-1)}=\alpha^{-tr(q^{is_{n}}-q^{v})}$. In particular, we have that $\mathop{\rm ord}\nolimits (\theta^{-u(q^{v}-1)})=\mathop{\rm ord}\nolimits(\alpha^{-tr(q^{is_{n}}-q^{v})})$. Since the orders of $\theta$ and $\alpha$ are relatively prime, we conclude that $$\frac{d}{\gcd(d,u(q^{v}-1))}= \frac{e}{\gcd(e,tr(q^{is_{n}}-q^{v}))}=1.$$ In particular, $d=\gcd(n,q^{s_{n}}-1)$ divides $u(q^{v}-1)$. Since $v$ divides $s_n$, $q^{v}-1$ divides $q^{s_n}-1$ and so we have that $\gcd(\gcd(n,q^{s_{n}}-1), q^v-1)=\gcd(n,q^{v}-1)$. Therefore, $\frac{\gcd(n,q^{s_{n}}-1)}{\gcd(n,q^{v}-1)}$ divides $u$. Conversely, if $v$ is any positive integer such that $\frac{\gcd(n,q^{s_{n}}-1)}{\gcd(n,q^{v}-1)}$ divides $u$, we have that $d=\gcd(n, q^{s_n}-1)$ divides $u(q^v-1)$ and so $\theta^{-u(q^{v}-1)}=1$. In particular, for any positive integer $i$ such that $is_n\equiv v\pmod k$ (since $\gcd(k, s_n)=1$, such an integer exists), we have that $$\alpha^{q^{is_n}-q^v}=1=\theta^{-u(q^v-1)}.$$ From the previous observations, we conclude that $G_{t, u}\in \mathbb F_{q^v}[x]$. \end{proof}
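By Lemma \ref{l_tu} (together with the preceding remark, which guarantees that $l_{t,u}$ divides $s_n$), the field of definition of $G_{t,u}$ can be found by a simple divisibility test. The snippet below is illustrative only; the helper name is ours.
\begin{verbatim}
from math import gcd

def l_tu(q, n, sn, u):
    # least divisor v of s_n such that gcd(n, q^{s_n}-1)/gcd(n, q^v-1) divides u
    d = gcd(n, q**sn - 1)
    for v in sorted(w for w in range(1, sn + 1) if sn % w == 0):
        if u % (d // gcd(n, q**v - 1)) == 0:
            return v
    return sn
\end{verbatim}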
\begin{definition} For each divisor $s$ of $s_n$, set
$$\Lambda_t(s)=|\{G_{t,u}\in \mathbb F_{q^s}[x]| \text{$G_{t,u}$ divides $f(x^n)$}\}|$$ and
$$\Omega_t(s)=|\{G_{t,u}\in \mathbb F_{q^s}[x]| \text{$G_{t,u}$ divides $f(x^n)$ and $G_{t,u}\notin \mathbb F_{q^v}[x]$ for any $v<s$}\}|,$$ where the polynomials $G_{t,u}$ are given by the formula \eqref{G_tu}. \end{definition}
We obtain the following result.
\begin{lemma} Let $t$ be a positive divisor of $m$ and $r_{n,t}$ be the smallest positive divisor of $s_n$ such that $\gcd \left(\dfrac{\gcd( n, q^{s_n}-1)}{\gcd( n, q^{r_{n,t}}-1)},t\right)= 1$. For an arbitrary divisor $s$ of $s_n$, the following hold: \begin{enumerate}[(a)] \item if $r_{n,t}$ does not divide $s$, we have that $\Lambda_t(s)=\Omega_t(s)=0$; \item If $r_{n,t}$ divides $s$, we have that $$\Lambda_t(s)=\frac {\varphi(t)}t\gcd (n, q^s-1)$$ and
$$\Omega_t(s)=\frac {\varphi(t)}t\sum_{r_{n,t}|v|s}\mu\left(\frac sv\right)\gcd (n, q^v-1),$$ where $\mu$ is the M\"obius function. \end{enumerate} \end{lemma}
\begin{proof} From Lemma~\ref{l_tu}, we have that $G_{u,t}\in \mathbb{F}_{q^{s}}[x]$ if and only if $\frac{\gcd(n, q^{s_{n}}-1)}{\gcd(n,q^{s}-1)}$ divides $u$. Since $\gcd(u,t)=1$, if $G_{u,t}(x)$ is in $\mathbb{F}_{q^{s}}[x]$, it follows that $\gcd\left(\frac{\gcd(n, q^{s_{n}}-1)}{\gcd(n,q^{s}-1)}, t\right)=1$. Suppose, by contradiction, that $r_{n,t}$ does not divide $s$ and $\Lambda_t(s)\ne 0$. Therefore, we necessarily have that $$\gcd\left(\frac{\gcd(n, q^{s_{n}}-1)}{\gcd(n,q^{s}-1)}, t\right)=\gcd\left(\frac{\gcd(n, q^{s_{n}}-1)}{\gcd(n,q^{r_{n,t}}-1)}, t\right)=1$$ However, for each prime divisor $p$ of $t$, we have that $$\min\{\nu_p(n),\nu_p(q^{s_n}-1)\}=\min\{\nu_p(n),\nu_p(q^{r_{n,t}}-1)\}=\min\{\nu_p(n),\nu_p(q^{s}-1)\}\ge 1.$$ If $s'=\gcd(r_{n,t},s)$, then there exist positive integers $a$ and $b$ such that $s'=ar_{n,t}-bs$ and \begin{align*} \nu_p(q^{s'}-1)&=\nu_p(q^{ ar_{n,t}-bs}-1)=\nu_p(q^{ ar_{n,t}}-q^{bs})\\ &\ge\min\{\nu_p(q^{ ar_{n,t}}-1), \nu_p(q^{bs}-1)\}\\ &\ge \min\{\nu_p(q^{r_{n,t}}-1), \nu_p(q^{s}-1)\}. \end{align*}
In particular, $\min\{\nu_p(n),\nu_p(q^{s_n}-1)\}=\min\{\nu_p(n),\nu_p(q^{s'}-1)\}$ for every prime divisor of $t$, a contradiction since $s'<r_{n,t}$. Now, let $s$ be a positive divisor of $s_n$ such that $r_{n,t}|s$. Every factor $G_{u,t}$ of $f(x^n)$ that is in $\mathbb{F}_{q^{s}}$ satisfies the conditions $\gcd(u,t)=1$, $1\le u\le \gcd(n, q^{s_n}-1)$ and $\frac{\gcd(n, q^{s_{n}}-1)}{\gcd(n,q^{s}-1)}$ divides $u$. Then $u= \frac{\gcd(n, q^{s_{n}}-1)}{\gcd(n,q^{s}-1)}u'$, with $\gcd(u',t)=1$, $1\le u'\le \gcd(n, q^{s}-1)$.
In addition, if $p$ is a prime divisor of $t$, we have that $p$ divides $m$ and then $\nu_p(n)>\nu_p(q^{s_n}-1)\ge 1$. However, $p\nmid \frac{\gcd(n,q^{s_n}-1)}{\gcd(n,q^{s}-1)}$ and so $p$ divides $q^s-1$. It follows that $\mathop{\rm rad}\nolimits(t)$ divides $q^s-1$. We conclude that the number of elements $u'$ satisfying the previous conditions equals $$\Lambda_{t}(s)=\varphi(\mathop{\rm rad}\nolimits(t))\cdot\frac{\gcd(n, q^{s}-1)}{\mathop{\rm rad}\nolimits(t)} = \frac{\varphi(t)}t \gcd(n, q^{s}-1).$$
Finally, we observe that $\Lambda_{t}(s)=\sum_{v|s}\Omega_{t}(v)$ and so the M\"obius inversion formula yields the following equality
$$\Omega_{t}(s)= \sum_{v|s}\mu\left(\frac{s}{v}\right)\Lambda_{t}(v)=\sum_{r_{n,t}|v|s}\mu\left(\frac{s}{v}\right)\frac{\varphi{(t)}}{t}\gcd(n, q^{v}-1).$$ \end{proof}
\begin{theorem} Let $f\in \mathbb F_q[x]$ be an irreducible polynomial of degree $k$ and order $e$. Let $n$ be a positive integer such that $\gcd(n,ek)=1$ and $\gcd(k, s_n)=1$. The number of irreducible factors of $f(x^n)$ in $\mathbb F_q[x]$ is
$$\frac 1{s_n} \sum_{t|m} \frac {\varphi(t)}{t} \sum_{r_{n,t}| v|s_n} \gcd(n, q^v-1) \varphi\left(\frac {s_n}{v}\right),$$
or equivalently
$$\frac 1{s_n} \sum_{v|s_n} \gcd(n, q^v-1) \varphi\left(\frac {s_n}{v}\right) \prod_{{p|m_v}\atop{p\text{ prime}}} \left(1+\nu_p(m_v) \frac {p-1}p\right),$$
where $m_v=\max\left\{t\left| \ \text{$t$ divides $m$ and $\gcd\left(\frac{\gcd(n, q^{s_n}-1)}{\gcd(n, q^v-1)}, t\right)=1$}\right.\right\}$.
\end{theorem}
\begin{proof} The number of irreducible factors of $f(x^n)$ in $\mathbb F_q[x]$ is \begin{align*}
\sum_{t|m}\frac {\varphi(t)}{t} \sum_{s|s_n} \frac 1s \Omega_t(s)&=\sum_{t|m}\frac{\varphi(t)}{t} \sum_{r_{n,t}|s|s_{n}}\frac{1}{s}\sum_ {v'|\frac{s}{r_{n,t}}}\mu\left( \frac{s}{r_{n,t}v'}\right) \gcd(n, q^{v'r_{n,t}}-1)\\ &= \sum_{t|m}\frac{\varphi(t)}{t} \sum_{s'|\frac{s_{n}}{r_{n,t}}}\frac{1}{r_{n,t}s'}\sum_ {v'|s'}\mu\left( \frac{s'}{v'}\right) \gcd(n, q^{v'r_{n,t}}-1)\\ &= \sum_{t|m}\frac{\varphi(t)}{t} \sum_{v'|\frac{s_{n}}{r_{n,t}}}\dfrac{\gcd(n, q^{v'r_{n,t}}-1)}{r_{n,t}}\sum_ {v'|s'|\frac{s_{n}}{r_{n,t}}}\dfrac{\mu\left(\frac{s'}{v'}\right)}{{s'}}\\ &=\sum_{t|m}\frac{\varphi(t)}{t} \sum_{v'|\frac{s_{n}}{r_{n,t}}}\dfrac{\gcd(n, q^{v'r_{n,t}}-1)}{v'r_{n,t}}\sum_ {s''|\frac{s_{n}}{v'r_{n,t}}}\frac{\mu(s'')}{{s''}}\\
&= \sum_{t|m}\frac{\varphi(t)}{t} \sum_{r_{n,t}|v|s_{n}}\dfrac{\gcd(n, q^{v}-1)}{v}\sum_ {s|\frac{s_{n}}{v}}\frac{\mu(s)}{s}\\
&= \frac 1{s_n}\sum_{t|m}\frac{\varphi(t)}{t} \sum_{r_{n,t}|v|s_{n}}\gcd(n, q^{v}-1)\varphi\left(\frac{s_{n}}{v}\right). \end{align*} \end{proof}
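The second displayed formula of the theorem above can be evaluated directly. The sketch below is illustrative only (the helper names are ours); it assumes the hypotheses of the theorem, computes $m_v$ as the largest divisor of $m$ coprime to $\gcd(n,q^{s_n}-1)/\gcd(n,q^{v}-1)$, and uses exact rational arithmetic.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def prime_factorization(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def euler_phi(n):
    r = n
    for p in prime_factorization(n):
        r = r // p * (p - 1)
    return r

def number_of_irreducible_factors(q, n, sn):
    d = gcd(n, q**sn - 1)
    m = n // d
    total = Fraction(0)
    for v in (w for w in range(1, sn + 1) if sn % w == 0):
        c = d // gcd(n, q**v - 1)
        m_v = m
        for p in prime_factorization(c):
            while m_v % p == 0:
                m_v //= p
        term = Fraction(gcd(n, q**v - 1) * euler_phi(sn // v))
        for p, a in prime_factorization(m_v).items():
            term *= 1 + Fraction(a * (p - 1), p)
        total += term
    return int(total / sn)
\end{verbatim}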
\section{Factors of $f(x^n)$ when $s_n$ is a prime number}
In this section, as an application of the previous results, we consider some cases when $s_n$ is a prime number. Throughout this section, for each positive integer $d$ with $\gcd(q,d)=1$, $\sim_{d}$ denotes the equivalence relation defined as follows: $a\sim_{d} b$ if there exists $j\in \mathbb N$ such that $a\equiv bq^j\pmod d$.
\subsection{The case $\mathop{\rm rad}\nolimits(n)|(q-1)$ and $s_n\ne 1$ }
This case is complementary to Theorem~\ref{pri1}: here $q\equiv 3\pmod 4$ and $8|n$, but we keep the condition $\mathop{\rm rad}\nolimits(n)|(q-1)$.
\begin{theorem}\label{pri2}
Let $n$ be an integer and $q$ be a power of a prime such that $8|n$ and $q\equiv 3 \pmod 4$. Let $\theta$ be an element in $\mathbb F_{q^{2}}^{\ast}$ with order $d=\gcd (n,q^{2}-1)$ and $f(x)$ be an irreducible polynomial of odd degree $k$ and order $e$. In addition, let $g_t(x)$ be as in Lemma \ref{lepri1}.
Then $d=2^l\gcd(n,q-1)$, where $l=\min\lbrace\nu_{2}(\frac{n}{2}),\,\nu_{2}(q+1)\rbrace$ and
$f(x^{n})$ splits into irreducible factors in $\mathbb F_q[x]$ as \begin{equation}\label{fatores2}
f(x^{n})=\displaystyle\prod_{{t|m}\atop{t\text{ odd}}}\displaystyle\prod_{{1\leqslant w\leqslant \gcd(n,q-1)}\atop{\gcd (w,\,t)=1}}\beta^{-wk}g_{t}(\beta^{w}x^{t})\displaystyle\prod_{t|m}\displaystyle\prod_{u\in \mathcal R_{t}}[\theta^{-uk(q+1)}g_{t}(\theta^{u}x^{t})g_{t}(\theta^{uq}x^{t})], \end{equation} where $m=\frac nd =\frac n{\gcd(n,q^2-1)}$, $\beta=\theta^{2^l}$ is a primitive $\gcd(n,q-1)$-th root of unity and $\mathcal R_{t}$ is the set of $q$-cyclotomic classes
$$\mathcal R_{t}=\left\lbrace u\in\mathbb N| 1\le u\le d, \, \gcd(u,t)=1,\,2^l\nmid u \right\rbrace /\sim_{d}.$$ In addition, the total number of irreducible factors of $f(x^{n})$ in $\mathbb F_q[x]$ is:
$$\gcd (n,\,q-1)\left(\frac 12+2^{l-2}(2+\nu_2(m))\right)\prod_{{p|m}\atop{p\text{ prime}}}\left(1+\nu_{p}(m)\dfrac{p-1}{p}\right).$$ \end{theorem}
\begin{proof}
Since $q\equiv 3\pmod 4$ and $8|n$, we have that $s_n=2$,
$$d=\gcd (n,\,q^{2}-1)=2\gcd \left( \dfrac{n}{2},\,\dfrac{q-1}{2}(q+1)\right) =2\gcd \left( \dfrac{n}{2},\,\dfrac{q-1}{2}\right)\gcd \left( \dfrac{n}{2},\,q+1\right),$$ where $$\gcd \left( \dfrac{n}{2},\,q+1\right)=2^{\min\lbrace\nu_{2}(\frac{n}{2}),\,\nu_{2}(q+1)\rbrace}=2^l.$$
It follows from Lemma \ref{l_tu} that $G_{t,u}(x)=\theta^{-uk}g_{t}(\theta^{u}x^{t})\notin \mathbb F_q[x]$ if $\dfrac{\gcd(n, q^2-1)}{\gcd(n,q-1)}=2^l$ does not divide $u$, i.e., the class of $u$ is in $\mathcal R_t$. Then the factorization in Eq.~\eqref{fatores2} follows from Remark \ref{fatoresirredutiveis}. Since $r_{n,t}$ is a divisor of $s_n=2$, in order to determine how many factors of each degree we need to consider two cases: $r_{n,t}=1$ or $r_{n, t}=2$. If $r_{n, t}=1$, we have that $t$ satisfies the following equalities $$\gcd\left(\frac {\gcd(n, q^2-1)}{\gcd(n,q-1)}, t\right)=\gcd(2^l, t)=1,$$ and so $t$ must be odd. If $r_{n,t}=2$, since $r_{n,t}$ is minimal with this property, it follows that $t$ must be even. In conclusion, the number of irreducible factors of $f(x^n)$ in $\mathbb F_q[x]$ equals
\begin{align*}
&\frac 12 \sum_{v|2} \gcd(n, q^v-1) \varphi\left(\frac {2}{v}\right) \prod_{{p|m_v}\atop{p\text{ prime}}} \left(1+\nu_p(m_v) \frac {p-1}p\right)\\
&=\frac 12 \gcd(n, q-1)\prod_{p|m_1} \left(1+\nu_p(m) \frac {p-1}p\right)+ \frac 12 \gcd(n, q^2-1)\prod_{p|m_2} \left(1+\nu_p(m) \frac {p-1}p\right)\\
&=\frac 12 \gcd(n, q-1)\prod_{p|m\atop p\ne 2} \left(1+\nu_p(m) \frac {p-1}p\right)+ \frac 12 \gcd(n, q^2-1)\prod_{p|m} \left(1+\nu_p(m) \frac {p-1}p\right)\\
&=\frac 12 \gcd(n, q-1) \left( 1+ 2^l\left(1+\frac 12\nu_2(m)\right)\right)\prod_{p|m\atop p\ne 2} \left(1+\nu_p(m) \frac {p-1}p\right) . \end{align*} \end{proof}
\subsection{The case $\mathop{\rm rad}\nolimits(n)| (q^p-1)$ with $p$ an odd prime and $\mathop{\rm rad}\nolimits(n)\nmid (q-1)$}
Since the case $\mathop{\rm rad}\nolimits(n)|(q-1)$ is now completely described, we only consider the case that $n$ has at least one prime factor that does not divide $q-1$.
\begin{theorem}\label{teo5} Let $n$ be a positive integer such that $\mathop{\rm ord}\nolimits_{\mathop{\rm rad}\nolimits(n)}q=p$ is an odd prime and let $f\in \mathbb F_{q}[x]$ be an irreducible polynomial of degree $k$ and order $e$ such that $\gcd (ke, n)=\gcd(k, p)=1$. In addition, suppose that $q\equiv 1\pmod4$ if $8|n$. Let $\theta$ be an element in $\mathbb{F}_{q^{p}}^{\ast}$ with order $d=\gcd (n,q^{p}-1)$, $m=\frac nd=\frac{n}{\gcd (n,q^{p}-1)}$ and $g_{t}(x)$ be as in Lemma \ref{lepri1}. The polynomial $f(x^{n})$ splits into irreducible factors in $\mathbb{F}_{q}[x]$ as
$$\prod_{{t|m}\atop{\mathop{\rm rad}\nolimits(t)|(q-1)}}\prod_{{1\leqslant v \leqslant d'}\atop{\gcd (v,\,t)=1}}\beta^{-vk}\,g_{t}(\beta^{v}x^{t})
\prod_{{t|m}\atop{\mathop{\rm rad}\nolimits(t)\nmid(q-1)}}\prod_{u\in R_{t}}\left[\theta^{-uk(1+\cdots +q^{p-1})}g_{t}(\theta^{u}x^{t})\cdots g_{t}(\theta^{uq^{p-1}}x^{t})\right],$$ where \begin{enumerate}[1)] \item $d'=\gcd(n,\,q-1)$, $\beta=\theta^{\gcd(n, \frac{q^p-1}{q-1})}$ is a $d'$ primitive root of the unity and
$$R_{t}=\left\lbrace u\in \mathbb{N}\left| 1\leq u\leq d, \quad \gcd\left(n,\frac{q^{p}-1}{q-1}\right)\nmid u, \quad \gcd (u,\,t)=1\right.\right\rbrace/\sim_d$$
in the case that $p\nmid n$ or $p\nmid (q-1)$ or $\nu_{p}(n)>\nu_{p}(q-1)\geq 1$.
\item $d'=p\cdot \gcd(\frac np, q-1)$, $\beta=\theta^{\gcd\left( \frac{n}{p},\,\frac{1}{p}\frac{q^{p}-1}{q-1}\right)}$ is a $d'$ primitive root of the unity and
$$R_{t}=\left\lbrace u\in \mathbb{N}\left| 1\leq u\leq d,\quad \gcd\left( \frac{n}{p},\,\frac{1}{p}\frac{q^{p}-1}{q-1}\right)\nmid u,\quad \gcd(u,t)=1\right. \right\rbrace/\sim_d$$ in the case that $p$ divides $n$ and $(q-1)$, and $\nu_{p}(n)\leq \nu_{p}(q-1)$. \end{enumerate} \end{theorem}
\begin{proof}
Since $\gcd(k,p)=1$, $f(x)$ is also irreducible in $\mathbb F_{q^p}[x]$. In addition,
$q^{p}\equiv 1\pmod 4$ if $8|n$ and so Theorem \ref{pri1} entails that $f(x^{n})$ splits into irreducible factors in $\mathbb{F}_{q^{p}}[x]$ as
$$f(x^{n})=\displaystyle\prod_{t|m}\displaystyle\prod_{{1\leqslant u\leqslant d}\atop{\gcd(u,\,t)=1}}\theta^{-uk}\,g_{t}(\theta^{u}x^{t}),$$
where $\theta$ is an element in $\mathbb{F}_{q^{p}}^{\ast}$ with order $d=\gcd (n,q^{p}-1)$. Following the same initial step of the proof of Theorem \ref{pri2}, we observe that $\theta^{-uk}\,g_{t}(\theta^{u}x^{t})\in \mathbb F_{q}[x]$ if and only if $d|u(q-1)$. At this point, we have two cases to consider: \begin{enumerate}[1.] \item $p\nmid n$ or $p\nmid (q-1)$: in this case, we have that $$\gcd(n,\,q^{p}-1)=\gcd\left( n,\,\frac{q^{p}-1}{q-1} (q-1)\right) =\gcd\left( n,\,\frac{q^{p}-1}{q-1}\right)\cdot \gcd(n,q-1).$$ Hence
$\gcd(n,\,q^{p}-1)|u(q-1)$ if and only if $\gcd\left( n,\,\frac{q^{p}-1}{q-1}\right)|u$.
\item $p|n$ and $p|(q-1)$: in this case, we have that
\begin{align*}\gcd(n,\,q^{p}-1)= &\gcd\left( n,\frac{q^{p}-1}{q-1} (q-1)\right)=p\cdot \gcd\left( \frac{n}{p},\frac{1}{p}\left( \frac{q^{p}-1}{q-1}\right) (q-1)\right)\\=& p\cdot \gcd\left( \frac{n}{p},\frac{1}{p}\frac{q^{p}-1}{q-1}\right)\cdot \gcd\Bigl(\frac{n}{p},q-1\Bigr).\end{align*}
Therefore
$\gcd (n,\,q^{p}-1)|u(q-1)$ if and only if \begin{equation}\label{lays}
p \cdot\gcd \left( \frac{n}{p},\,\frac{1}{p}\frac{q^{p}-1}{q-1}\right)\cdot \gcd \left(\frac{n}{p},q-1\right)|{u(q-1)}.\end{equation} We split into subcases: \begin{enumerate}[2.1.] \item If $\nu_{p}(n)\leq \nu_{p}(q-1)$, Eq.~\eqref{lays} is equivalent to
$\gcd \left( \frac{n}{p},\frac{1}{p}\frac{q^{p}-1}{q-1}\right)|u$.
\item If $\nu_{p}(n)> \nu_{p}(q-1)$, Eq.~\eqref{lays} is equivalent to $\gcd \left(n,\frac{q^{p}-1}{q-1}\right)|u.$ \end{enumerate} \end{enumerate}
We observe that in the cases 1 and 2.2, the conclusion is the same. Therefore, in these cases, $\theta^{-uk}g_{t}(\theta^{u}x^{t})\in \mathbb{F}_{q}[x]$ if and only if
$u=\gcd \left( n,\,\frac{q^{p}-1}{q-1}\right)\cdot v$ for some positive integer $v$. Since $1\leq u \leq \gcd (n,q^{p}-1)$, it follows that
$1\leq v \leq \gcd (n,q-1)$. In addition, since $\gcd (u,\,t)=1$, we have that $\gcd(v,t)=1$ and $\nu_{p'}(q^{p}-1)=\nu_{p'}(q-1)$ for every prime $p'|t$. Thus, in the cases 1 and 2.2, the numbers $v$ and $t$ satisfy the conditions $\gcd(v,t)=1$ and $\mathop{\rm rad}\nolimits(t)|(q-1)$.
Now, in the case 2.1, $\theta^{-uk}g_{t}(\theta^{u}x^{t})\in \mathbb{F}_{q}[x]$ if and only if $u=\gcd\left( \frac{n}{p},\,\frac{1}{p}\frac{q^{p}-1}{q-1}\right)\cdot v$ for some integer $v$. Since $1\leq u \leq \gcd(n,q^{p}-1)$, then
$1\leq v\leq p\gcd\left( \frac{n}{p},q-1\right) $. In the same way as before, we obtain that $v$ and $t$ satisfy the conditions $\gcd(v,t)=1$ and $\mathop{\rm rad}\nolimits(t)|(q-1)$.
In conclusion, we have that the irreducible factors of $f(x^n)$ in $\mathbb F_{q^p}[x]$, that are also irreducible in $\mathbb F_q[x]$ are the form $$\theta^{-uk}g_{t}(\theta^{u}x^{t})= \beta^{-vk}g_t(\beta^v x^t),$$
where $\mathop{\rm rad}\nolimits(t)|(q-1)$ and $\beta$ is an element of $\mathbb F_q^*$ of order $d'$.
Finally, if $\mathop{\rm rad}\nolimits(t)\nmid (q-1)$, we have that $\theta^{-uk}g_{t}(\theta^{u}x^{t})\in\mathbb F_{q^p}[x]\setminus \mathbb{F}_{q}[x]$. In particular, since the polynomial $$\prod_{i=0}^{p-1}\theta^{-ukq^i}g_{t}(\theta^{uq^i}x^{t}),$$
is invariant under the Frobenius automorphism and is divisible by $\theta^{-uk}g_{t}(\theta^{u}x^{t})$, it is a monic irreducible polynomial over $\mathbb F_q$. \end{proof}
The following result provides information on the number of irreducible factors of $f(x^n)$ under the conditions of Theorem~\ref{teo5}.
\begin{theorem} \label{teo6} Let $n, m, q$ and $f\in \mathbb F_q[x]$ be as in Theorem \ref{teo5}. Then the number of irreducible factors of $f(x^{n})$ in $\mathbb F_q[x]$ is
$$\frac{p-1}{p}\gcd(n,\,q-1)\displaystyle\prod_{{p'|m}\atop{p'\text {prime}}}\left(1+\nu_{p'}(m)\dfrac{p'-1}{p'}\right)+ \frac{\gcd(n,\,q^{p}-1)}{p}\displaystyle\prod_{{p'|m}\atop{p'\text{ prime}}}\left(1+\nu_{p'}(m)\dfrac{p'-1}{p'}\right).$$ \end{theorem} \begin{proof}
From Theorem \ref{conta1}, the number of irreducible factors (over $\mathbb F_{q^{p}}[x]$) of degree $kt$ for each $t|m$ is $\dfrac{\varphi(t)}{t}\,\gcd(n,\,q^{p}-1).$
Therefore the total number of irreducible factors is: \begin{align*}
&\sum_{t|m}\frac{\varphi(t)}{t}\gcd(n,\,q-1)+\sum_{t|m}\frac{1}{p}\left( \frac{\varphi(t)}{t}\gcd(n,\,q^{p}-1)-\frac{\varphi(t)}{t}\gcd(n,\,q-1)\right)\\
&=\sum_{t|m}\frac{p-1}{p}\gcd(n,\,q-1)\frac{\varphi(t)}{t}+\sum_{t|m}\frac{1}{p} \gcd(n,\,q^{p}-1)\frac{\varphi(t)}{t}\\
&= \frac{p-1}{p}\gcd(n,\,q-1)\displaystyle\prod_{{p'|m}\atop{p'\text{ prime}}}\left(1+\nu_{p'}(m)\dfrac{p'-1}{p'}\right)+ \frac{\gcd(n,\,q^{p}-1)}{p}\displaystyle\prod_{{p'|m}\atop{p'\text{ prime}}}\left(1+\nu_{p'}(m)\dfrac{p'-1}{p'}\right) \end{align*} \end{proof}
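The formula of Theorem \ref{teo6} can likewise be evaluated exactly; the sketch below is illustrative only (the helper names are ours) and assumes the hypotheses of Theorem \ref{teo5}.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def prime_factorization(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def count_factors_odd_prime_case(q, n, p):
    # p = ord_{rad(n)} q, an odd prime, as in Theorem teo5
    d1, dp = gcd(n, q - 1), gcd(n, q**p - 1)
    m = n // dp
    prod = Fraction(1)
    for r, a in prime_factorization(m).items():
        prod *= 1 + Fraction(a * (r - 1), r)
    return int((Fraction(p - 1, p) * d1 + Fraction(dp, p)) * prod)
\end{verbatim}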
\begin{center}{\bf Acknowledgments}\end{center} The first author was partially supported by FAPEMIG (Grant number: APQ-02973-17). The second author was supported by FAPESP 2018/03038-2, Brazil.
\end{document}
Modeling treatment-dependent glioma growth including a dormant tumor cell subpopulation
Marvin A. Böttcher†1,
Janka Held-Feindt†2,
Michael Synowitz2,
Ralph Lucius3,
Arne Traulsen†1 and
Kirsten Hattermann†3
Received: 18 January 2018
Accepted: 21 March 2018
Tumors comprise a variety of specialized cell phenotypes adapted to different ecological niches that massively influence the tumor growth and its response to treatment.
In the background of glioblastoma multiforme, a highly malignant brain tumor, we consider a rapidly proliferating phenotype that appears susceptible to treatment, and a dormant phenotype which lacks this pronounced proliferative ability and is not affected by standard therapeutic strategies. To gain insight into the dynamically changing proportions of different tumor cell phenotypes under different treatment conditions, we develop a mathematical model and underline our assumptions with experimental data.
We show that both cell phenotypes contribute to the distinct composition of the tumor, especially in cycling low and high dose treatment, and therefore may influence the tumor growth in a phenotype specific way.
Our model of the dynamic proportions of dormant and rapidly growing glioblastoma cells in different therapy settings suggests that phenotypically different cells should be considered to plan dose and duration of treatment schedules.
Dormancy
Gliomas are the most common type of primary brain tumors including their highly malignant form, the glioblastoma multiforme (GBM), which accounts for about 15% of all brain tumors [1]. Despite current standard treatment of GBM by surgical resection and adjuvant radio- and chemotherapy, the median survival time for GBM patients is still poor, approximating 12–15 months [2], mostly due to unsatisfactory response of the tumor to treatment strategies. Additionally, combined aggressive radio-/chemotherapy causes severe side effects frequently necessitating interruptions of the therapy due to e.g. blood toxicity [3]. GBMs and also many other tumors are heterogeneous tumors, being composed of cells with different, partly specialized phenotypes [4]. Besides e.g. rapidly proliferating tumor cells, invading immune cells, endothelial cells and (tumor) stem cells, also a subpopulation of so-called dormant tumor cells exists in the heterogeneous tumor mass. These cells enter a quiescent state driven by cell-intrinsic or extrinsic factors, including permanent competition for nutrients, oxygen, and space ("cellular dormancy") [5–8]. In several tumors and metastases, dormant cells have been shown to be non-proliferative or only very slowly cycling [9–12]. Linking dormancy and effects of chemotherapy, studies on glioma cells showed that cells underwent a prolonged cell cycle arrest upon treatment with temozolomide (TMZ), the most common chemotherapeutic in GBM therapy [13].
Evolutionary forces, such as competition and selection, shape the growth of the tumor and therefore the progression of the cancer. These forces create different ecological niches within the tumor encouraging the adaption of specialized tumor cell phenotypes. Accordingly, the proportional balance between different tumor cellular phenotypes can drastically change with treatment conditions. Indeed, compared to rapidly proliferating tumor cells, especially dormant cells exhibit a much higher robustness against chemotherapeutic drugs [5]. This dormant state seems to be reversible [13], so that the conversion to dormancy and the exit from dormancy may be a mechanism that facilitates tumor survival and progression even upon adverse or changing conditions. Hence, a better understanding of the proportional dynamics of different cell phenotypes within gliomas under chemotherapeutic treatment may improve further therapeutic approaches.
Mathematical models are beneficial resources to gain insight into key mechanisms of cancer development, growth, and evolution and to help identify potential therapeutic targets [14]. Among these approaches, evolutionary game theory [15, 16] models the interactions between different individuals as a game between agents playing different strategies and relates the payoff from this game to the reproductive fitness of the corresponding agent [17–21].
Here, we use evolutionary game theory to model the proportions of two different phenotypes of GBM cells in a variety of different treatment conditions; see Basanta and Deutsch [18] for a related approach in GBM. Defining the fitness of the different cell types as the growth rate in comparison to cells of the respective other phenotype, we focus especially on the balance between the rapidly proliferating and the cellular dormant phenotype and describe the corresponding payoffs in a payoff matrix which also includes the effect of treatment. Then, we use a special form of the replicator-mutator equation [22, 23], which takes into account that conversion from the dormant to the rapidly proliferating phenotype and vice versa is possible. To strengthen our theoretical assumptions, we analyzed cell numbers and the cellular expression of a dormancy marker under different chemotherapy dosages, as well as the phenotypic conversion modalities, in cultured GBM cells in vitro. Taken together, the aim of our study was to develop a simple theoretical model which describes the dynamically changing proportions of two different GBM cell phenotypes, rapidly proliferating and dormant cells, under different treatment conditions. On this basis, we suggest that the different properties of cell phenotypes should be taken into account for the development of more efficient, less toxic treatment schedules in order to improve patients' prognosis and quality of life.
Theoretical model
We analyze the proportions of two different GBM cell phenotypes, dormant (D, please refer to Table 1 for symbols used in the equations) and rapidly proliferating (P) cells, in a mathematical model including the influence of different treatment conditions. In the following, we characterize the cells in terms of their fitness, which we define as the growth rate in comparison to cells of the other phenotype. Dormant cells always have a very low or even zero growth rate ε, which we assume to be independent of the exact composition of the population and the treatment condition. Rapidly proliferating P cells, on the other hand, have a very large fitness advantage compared to dormant cells, which means they proliferate much faster, but they also compete with each other for space and resources. Facing another P cell, a focal P cell has an intermediate fitness, which we assume to be still much larger than the growth rate of D cells ε. Their fitness therefore depends on the relative fraction of D vs. P cells. Due to the very slow growth of D cells, P cells will represent the vast majority of glioblastoma cells in the absence of treatment.
Table 1 Overview of all symbols used in the model

\( n_X \): number of cells of type X
\( x_X \): ratio of cells of type X in the population
D, P: index for dormant or rapidly proliferating cell type, respectively
\( \varepsilon \): fitness of dormant (D) cells
\( \lambda \): treatment cost on normally growing cells
\( \sigma \): probability for spontaneous conversion between types
\( \overline{f} \): total average fitness of all cell types in the population
Under treatment conditions, however, the population composition changes. Even though D cells still have the same very low (or zero) growth rate ε, P cells experience a fitness cost λ due to treatment. The reduction of fitness due to treatment only applies to P cells, because cytotoxic drugs mostly affect rapidly dividing cells. The fitness cost parameter λ can be adjusted to account for the strength of the applied treatment. In principle, we can vary this parameter continuously. However, for simplicity we focus on two different treatment strategies: In high dosage (HD) chemotherapy, the treatment strength parameter λ is large compared to the growth rate of the P type. Since high dosage chemotherapy has strong side effects for the whole organism (for GBM: [3]), in reality this treatment strategy cannot be maintained for extended time periods. Therefore, strong treatment needs to be applied in turns with weaker or no treatment. For low dosage (LD) chemotherapy, λ implies only a small reduction of the growth rate of the P cells. As the side-effect stress to the organism should also be lower, this treatment regime could be applied for longer time spans.
Dormant (D) and rapidly proliferating (P) phenotypes in glioblastoma and their aforementioned interactions can be described by the following payoff matrix [18]:
$$ \begin{array}{c} \\ \mathrm{D} \\ \mathrm{P} \end{array} \begin{array}{c} \mathrm{D}\qquad\mathrm{P} \\ \left(\begin{array}{cc} \varepsilon & \varepsilon \\ 1-\varepsilon -\lambda & \frac{1}{2}-\lambda \end{array}\right) \end{array} $$
This matrix gives the fitness for each type if confronted with any of the two other types. Here, we find for example that the fitness of a focal P cell interacting with a D cell is 1 − ε − λ, which includes both the small or zero growth rate of D cells ε and the fitness cost for P cells under treatment λ.
As the phenomenon of dormancy is presumably a reversible process that also occurs without any treatment, we assume that conversion between both phenotypes is possible with a small rate σ. Thus, P cells may enter a dormant phenotype, and D cells may exit from their quiescent state, converting into a P phenotype at any time point.
In the following, we include these fitness effects and the phenotypic conversion in a set of ordinary differential equations. In general, the growth of a whole cell population can be described in terms of a differential equation for the change in the number of individuals over time,
$$ \frac{dn}{dt}=r\left(n,t\right)n. $$
Here n is the number of individuals, t is the time and r(n, t) is the growth rate, which can itself depend on the number of cells and the time.
At first, we focus on the number of D cells in the population over time, \( n_D \), which grows with the very small but constant rate ε,
$$ \frac{d{n}_D}{dt}=\varepsilon\ {n}_D. $$
For P cells, on the other hand, the growth rate of \( n_{\mathrm{P}} \), given by the average fitness from the payoff matrix (weighted by the cell fractions), changes with the composition of the population,
$$ \frac{d{n}_{\mathrm{P}}}{dt}={n}_{\mathrm{P}}\left(\left(1-\varepsilon -\lambda \right)\frac{n_D}{n_D+{n}_{\mathrm{P}}}+\left(\frac{1}{2}-\lambda \right)\frac{n_{\mathrm{P}}}{n_D+{n}_{\mathrm{P}}}\right). $$
Since the system under consideration is constrained both in terms of nutrients and space, in reality the cell population grows exponentially, as indicated by the growth equations, only at the very beginning of the process, when the constraints regarding space or nutrients are negligible. However, we are mainly interested in the fraction of D cells \( {x}_D=\frac{n_D}{n_D+{n}_{\mathrm{P}}} \) in the population and, vice versa, the fraction of P cells \( {x}_{\mathrm{P}}=1-{x}_D=\frac{n_{\mathrm{P}}}{n_D+{n}_{\mathrm{P}}} \). To obtain the change in fractions for both types, we subtract the average growth rate \( \overline{f} \) of the population from both individual growth rates,
$$ \overline{f}\kern0.5em =\varepsilon {x}_D+\left[\left(1-\varepsilon -\lambda \right){x}_D+\left(\frac{1}{2}-\lambda \right){x}_{\mathrm{P}}\right]{x}_{\mathrm{P}} $$
From this we obtain two differential equations for the fractions of D and P cells,
$$ {\displaystyle \begin{array}{cl}{\dot{x}}_D& ={x}_D\left(\varepsilon -\overline{f}\right)\\ {}{\dot{x}}_{\mathrm{P}}& ={x}_{\mathrm{P}}\left(\left[\left(1-\varepsilon -\lambda \right){x}_D+\left(\frac{1}{2}-\lambda \right){x}_{\mathrm{P}}\right]-\overline{f}\right)\end{array}} $$
Next, we include the spontaneous conversion between phenotypes with a constant rate σ, which is independent of the cellular growth. This leads to an additional term in the differential equation of each phenotype,
$$ {\displaystyle \begin{array}{cl}{\dot{x}}_D& =\left[\varepsilon -\overline{f}\right]{x}_D+\sigma \left({x}_{\mathrm{P}}-{x}_D\right)\\ {}{\dot{x}}_{\mathrm{P}}& =\left[\left(1-\varepsilon -\lambda \right){x}_D+\left(\frac{1}{2}-\lambda \right){x}_{\mathrm{P}}-\overline{f}\right]{x}_{\mathrm{P}}+\sigma \left({x}_D-{x}_P\right)\end{array}}. $$
These equations differ from the usual replicator-mutator equation [15] in one important respect: phenotype conversion is a spontaneous process with a constant rate and is independent of the growth in the population. This allows conversion from D to P even if D cells do not grow at all.
Using these equations, we model different therapy schedules combining different treatment strengths in different cycling time plans. Since the equations are nonlinear, we use numerical integration with odeint from the Python library SciPy (https://www.scipy.org/) to examine the temporal dynamics of the system under different treatment regimes. Additionally, we analytically determine the fixed points of the system and their stability.
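The authors' released code is linked in the Availability of data section; the following is only a minimal sketch of how the equations above can be integrated with odeint. The function name xdot, the initial fractions and the parameter values are our own illustrative choices, not taken from the published code.

```python
import numpy as np
from scipy.integrate import odeint

def xdot(x, t, eps, lam, sigma):
    """Right-hand side of the replicator-type equations with conversion.

    x = [x_D, x_P] are the fractions of dormant and proliferating cells,
    eps is the D growth rate, lam the treatment cost, sigma the conversion rate.
    """
    xD, xP = x
    fD = eps                                        # fitness of dormant cells
    fP = (1.0 - eps - lam) * xD + (0.5 - lam) * xP  # fitness of proliferating cells
    fbar = fD * xD + fP * xP                        # average fitness of the population
    dxD = (fD - fbar) * xD + sigma * (xP - xD)
    dxP = (fP - fbar) * xP + sigma * (xD - xP)
    return [dxD, dxP]

# illustrative parameters: eps = 0, sigma = 0.01, high dose lam = 1
t = np.linspace(0.0, 50.0, 2001)
sol = odeint(xdot, [0.05, 0.95], t, args=(0.0, 1.0, 0.01))
print("fraction of dormant cells at the end of high dose treatment:", sol[-1, 0])
```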
Experimental model
Cell culture and cell number determination
The GBM cell line LN229 was purchased from ATCC/LGC Standards (Middlesex, UK, ATCC-CRL 2611) and cultured in Dulbecco's modified eagle medium (DMEM) plus 10% fetal calf serum (FCS, PAN Biotech, Aidenbach, Germany). Mycoplasma contaminations were routinely excluded by bisbenzimide staining. The GBM cell line identity was proven routinely by STR (Short Tandem Repeat) profiling at the Department of Forensic Medicine (Kiel, Germany) using the Powerplex HS Genotyping Kit (Promega, Madison, WI). Briefly, DNA was amplified with a STR multiplex PCR, electrophoretic separation was performed with the 3500 Genetic Analyser (Thermo Fisher Scientific, Waltham, MA, USA), and evaluated using the Software GeneMapper ID-X (Thermo Fisher Scientific). For determination of cell numbers after low and high dose chemotherapy treatment, 25,000 cells/well were seeded in 6-well plates (Greiner Bio-one, Frickenhausen, Germany). Cells were grown for 24 h, then washed with phosphate buffered saline (PBS) and supplemented with fresh DMEM + 10% FCS and temozolomide (Sigma-Aldrich, St. Louis, MO, USA; dissolved in dimethyl sulfoxide, DMSO) at the concentrations indicated in Fig. 2a (5, 50 or 100 μg/ml for 10 days). Temozolomide (TMZ) is a DNA alkylating drug causing apoptotic cell death and the most commonly used chemotherapeutic in GBM therapy. Control cells were supplemented with 0.5% DMSO, which corresponds to the solvent concentration of each TMZ stimulated sample. Cells were stimulated for 10 days with TMZ, while media were changed every 2–3 days. After 10 days, cells were detached by trypsinization and total cell numbers per well were counted using trypan blue exclusion and a Neubauer chamber (Brand, Wertheim, Germany). DMSO stimulated control cells were already detached after 6 days of stimulation, split 1:10 and seeded again to exclude growth limitation due to restricted space and nutrients. This splitting factor (1:10) was considered when relative cell numbers of TMZ treated samples in comparison to DMSO controls were determined for n = 5–6 independent experiments.
Immunocytochemistry
For immunocytochemistry, 50,000 cells were seeded onto poly-D-lysine coated glass cover slips, grown for 24 h and supplemented with the indicated TMZ or DMSO concentrations as described above. From day 6, growth media were additionally supplemented with 10 μM 5-bromo-2′-deoxyuridine (BrdU, Sigma-Aldrich, St. Louis, MO) to allow for incorporation into the DNA in the S phase of the cell cycle. After 10 days, cover slips were fixed with an ice-cold mixture of methanol and acetone (1:1) for 10 min, rinsed with 0.1% Tween / PBS (3 × 5 min), incubated with 1 M HCl for 30 min, neutralized with 0.1 M sodium borate buffer (pH 8.5), and rinsed again with 0.1% Tween/PBS. Afterwards, cells were blocked against unspecific binding with 0.5% bovine serum albumin (BSA) / 0.5% glycine in PBS (1 h) and incubated overnight with the primary antibody against H2BK (1:300, Biorbyt, Cambridge, UK), a marker of glioma dormancy [24, 25], and the primary antibody against BrdU (1:200, Abcam, Cambridge, UK). Then cover slips were incubated with the secondary antibodies (donkey anti-rabbit IgG, labelled with Alexa Fluor 488, and donkey anti-sheep labelled with Alexa Fluor 555, both Invitrogen, Carlsbad, CA, USA) for 1 h at 37 °C, followed by 4′,6-diamidino-2-phenylindole (DAPI; Sigma Aldrich, St. Louis, MO, USA; 1 mg/ml, 1:30,000, 30 min at room temperature) to stain nuclei. Cover slips were embedded using Immu-Mount (Thermo Fisher Scientific, Rockford, IL, USA), and analysed with equal exposure times using an Axiovert microscope and digital camera (Zeiss, Jena, Germany). H2BK-immunopositive, BrdU-positive and double positive cells were counted and normalized to total cell numbers in 6 (DMSO controls) to 10 (TMZ samples) fields of view for n = 4 independent experiments.
DiO retention and cell counts for phenotype conversion
To monitor the conversion to and from dormancy, we used the green fluorescent vital dye DiO (Invitrogen), as rapidly proliferating cells lose the dye due to repeated divisions, while resting, dormant (or very slowly cycling) cells retain the dye and can be detected by fluorescence microscopy. To investigate the conversion to dormancy, 150,000 LN229 cells were seeded into 6-well-plates, stained with Vybrant® DiO Cell-Labeling Solution (Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer's instructions and stimulated with 100 μg/ml TMZ (or an equal volume of the solvent DMSO) for 10–12 days. Cells were photographed combining transmitted-light microscopy and fluorescence microscopy with equal exposure times for TMZ and control treated cells, and green fluorescent cell portions were determined in comparison to total cell counts. To determine the influence of different cell densities on the incidence of conversion, 50,000 and 150,000 cells were seeded, respectively, into 6-well-plates and treated with 100 μg/ml TMZ (or equal volumes of the solvent DMSO) for 10 days. As the DMSO control treated cells rapidly proliferate, cells were detached at day 6 (50,000) or days 3 and 6 (150,000), cell numbers were counted using a Neubauer chamber to determine the growth rate over this time period, and cells were seeded again at the initial density to allow for growth without limitation of space and nutrients. After 10 days, TMZ and control treated cells were detached and counted. To extrapolate the total cell numbers of control cells, the growth rates determined at days 3, 6 and 10 were used, and TMZ surviving cells were calculated as a percentage of the extrapolated total cells.
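To make the extrapolation explicit, the following is a small numerical illustration of the procedure described above; all counts in it are made-up placeholder values, not the measured data.

```python
# Illustration of the extrapolation described above (made-up counts, not data):
# control cells are counted and reseeded at the initial density, so the measured
# per-interval growth factors are multiplied to extrapolate the total control
# cell number that would have been reached without space/nutrient limits.
seeded = 150_000
counts_at_passage = [1_200_000, 1_100_000, 900_000]   # hypothetical day 3, 6, 10 counts
growth_factors = [count / seeded for count in counts_at_passage]

extrapolated_total = float(seeded)
for factor in growth_factors:
    extrapolated_total *= factor

tmz_survivors = 1_000                                  # hypothetical count after 10 d TMZ
surviving_percent = 100.0 * tmz_survivors / extrapolated_total
print(f"extrapolated control total: {extrapolated_total:.3g} cells")
print(f"TMZ-surviving fraction: {surviving_percent:.3f} %")
```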
Statistical analysis and graphical presentation of experimental data were performed with Graph Pad Prism using a two-tailed t-test (*** p < 0.001).
Modelling the dynamics of cell frequencies
The temporal dynamics of the proportion of D versus P cells in GBM strongly depends on the treatment conditions. Therefore, we first analyze the fixed points of the dynamical system and how they change for different treatment strengths λ, without considering possible conversions of phenotypes. The fixed points mark equilibria between the proportions of P and D cells under certain, predefined conditions and are found by setting Eq. 1 to zero. Of particular interest are stable fixed points, as the system returns to this state after a small perturbation [26].
For our system, there is only one stable fixed point for each treatment condition (Fig. 1a). If we consider the case of no phenotype conversion, σ = 0, we can give the exact position of this point for each treatment condition λ. The fraction of D cells at the fixed point is then given by
$$ x_D = \begin{cases} 0 & \text{if } \dfrac{2\lambda}{1-2\varepsilon}\le 1,\\[1ex] \dfrac{2\lambda}{1-2\varepsilon}-1 & \text{if } 1\le \dfrac{2\lambda}{1-2\varepsilon}\le 2,\\[1ex] 1 & \text{if } 2\le \dfrac{2\lambda}{1-2\varepsilon}. \end{cases} $$
a Equilibrium fraction of dormant cells depending on treatment cost λ. b Average growth rate at the fixed point depending on treatment cost λ. Blue lines indicate a growth rate of D cells of ε = 0 and orange lines a growth rate ε = 0.1. Solid lines are for the absence of phenotype conversion (σ = 0) and dashed lines with phenotype conversion (σ = 0.1)
For small treatment strengths λ the fraction of D cells in the population at the stable fixed point is zero, but after reaching a threshold, the fraction of dormant cells increases linearly with λ until the whole population consists of dormant cells at very high treatment strengths.
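The piecewise formula above translates directly into a small helper function; the sketch below is our own (the function name and the sampled λ values are illustrative) and is valid only for ε < 1/2 and σ = 0.

```python
def dormant_fraction_fixed_point(lam, eps=0.0):
    """Equilibrium fraction of dormant cells for sigma = 0 (piecewise formula above)."""
    ratio = 2.0 * lam / (1.0 - 2.0 * eps)   # requires eps < 1/2
    if ratio <= 1.0:
        return 0.0
    if ratio <= 2.0:
        return ratio - 1.0
    return 1.0

# reproduces the qualitative behaviour of the solid lines in Fig. 1a
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(lam, dormant_fraction_fixed_point(lam, eps=0.0))
```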
With conversion between the two phenotypes (σ > 0), the analytical calculation of the stable fixed points is more difficult and the result is not instructive. In contrast to the previous results without phenotype conversion, there is now always a small proportion of dormant cells in the population, even at very low treatment strengths. The proportion of dormant cells at the fixed point increases immediately with increasing treatment strength until it approaches a maximum at high treatment strengths. For both D cell growth rates ε (ε = 0, orange lines; and ε = 0.1, blue lines) the population composition is very similar to, or even the same as, that without phenotype conversion at very small or large treatment strengths λ. In contrast, the largest effect of ε on the population composition occurs at intermediate values of the treatment strength.
The average fitness \( \overline{f} \) for the whole tumor cell population including P and D cells decreases linearly from the maximum at treatment strength λ = 0 until it reaches the minimum of \( \overline{f}=\varepsilon \) at the point where the fraction of dormant cells in the population starts to increase (Fig. 1b). Interestingly, with spontaneous conversion σ > 0, the average fitness at the fixed point can become smaller than ε and even negative for high treatment strengths, potentially leading to a shrinking tumor. This is caused by conversion of D cells into P cells which are then susceptible to treatment.
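Since the fixed point for σ > 0 is not given in closed form, one way to check this statement numerically is to integrate the dynamics to (near) equilibrium and evaluate the average fitness there. The sketch below reuses the xdot function from the integration example above; again, all names and parameter values are our own illustrative choices.

```python
def average_fitness_at_equilibrium(lam, eps=0.0, sigma=0.1, t_end=500.0):
    """Run the dynamics close to equilibrium and return the average fitness there."""
    t = np.linspace(0.0, t_end, 5001)
    xD, xP = odeint(xdot, [0.5, 0.5], t, args=(eps, lam, sigma))[-1]
    fP = (1.0 - eps - lam) * xD + (0.5 - lam) * xP
    return eps * xD + fP * xP

# for strong treatment the average fitness can drop below eps and even below zero
for lam in (0.25, 0.5, 1.0):
    print(lam, average_fitness_at_equilibrium(lam))
```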
Comparison to experimental data
To test our mathematical model of phenotype composition upon treatment, we used LN229 cells as an experimental in vitro model. We treated these cells for 10 days with temozolomide (TMZ), the most commonly used cytotoxic drug in glioma therapy. In a first step, we focused on different treatment strengths and analysed the portions of surviving cells in comparison to control cultures and the percentage of cells expressing H2BK (histone cluster 1), a marker of glioma dormancy [24, 25], along with the incorporation of BrdU in the late treatment phase (day 6–10). In general, after 10 days of treatment, samples stimulated with 5, 50 and 100 μg/ml TMZ had significantly lower total cell numbers than control treated cells (Fig. 2a). By immunocytochemistry of H2BK, we could detect and quantify the fraction of dormant cells within the cultures, and by adding BrdU to the cells from day 6 of treatment and immunocytochemical staining of BrdU, we could in parallel mark cells that incorporated BrdU into their DNA (examples of microscopic pictures in Fig. 2b). While DMSO-treated control cells showed a low fraction of H2BK-positive cells (mean: 9.7% ± 3.5), TMZ treatment yielded increased numbers of dormant cells, reaching a plateau at high concentrations (5 μg/ml: mean 26.8% ± 9.0, 50 μg/ml: mean: 82.8% ± 5.3, 100 μg/ml: 87.7% ± 8.0, compare Fig. 2b, grey graph portions). In parallel, we investigated the incorporation of BrdU into the DNA and determined a high portion (66.0% ± 7.8) of BrdU positive cells in the control cultures and lower portions upon TMZ treatment (5 μg/ml: 53.6% ± 14.5; 50 μg/ml: 33.4% ± 5.3; 100 μg/ml: 33.7% ± 10.1, compare Fig. 2b, hatched graph portions). Interestingly, BrdU incorporation also took place in TMZ treated cultures, so that staining for the dormancy marker H2BK and for BrdU could be observed in the very same cells (compare examples of microscopic photographs in Fig. 2b), indicating that cell cycle arrest may occur after the S phase of the cell cycle. Together with our experiments described in the following section and Fig. 2c, showing that dormant cells hardly divide within our experimental time frame, these observations suggest that dormant glioma cells are not or only very slowly cycling. Furthermore, taking into account that we use a clonal cell line, the occurrence of dormant cells must be a phenotypic adaptation to the environmental conditions, as all cells are genetically homogeneous (as proven by routine STR profiling, compare Materials and Methods section).
a Decrease of total cell numbers upon different temozolomide (TMZ) treatment strength. LN229 glioma cells were treated with different TMZ concentrations for 10 days, control cells were treated with the solvent DMSO (0.5%). Total cell numbers strongly decreased in a TMZ concentration dependent manner. Given are mean values of cell counting ± SD from n = 3 independent experiments. b Increase of the H2BK positive dormant cell portion upon different TMZ treatment strength, and incorporation of BrdU. The fraction of dormant cells as determined by immunoreactivity for the glioma dormancy marker H2BK and counting of the positively stained cells was remarkably increased in a concentration dependent manner (grey portions of the graph). The fraction of cells with BrdU incorporation in turn decreased (hatched portions of the columns), but, remarkably, in higher TMZ concentrations, H2BK and BrdU double positive cells were frequently observed (hatched, grey portions of the columns). Microscopic pictures exemplarily show cells expressing the dormancy marker H2BK (green) and the incorporation of BrdU (red) upon stimulation with different concentrations of TMZ for 10 days. The pictures are representative examples from 6 to 10 fields of view that were analyzed for n = 4 independent experiments and summarized in the graphs in the upper part; the bars indicate 20 μm. c Influence of cell density on portions of dormant cells. Left: When stained with the vital dye DiO and treated with 100 μg/ml TMZ for 10–12 days, nearly all cells (about 98%) cease from dividing which is indicated by the retention of the green fluorescent dye. Meanwhile, in control treated cells (DMSO), nearly all cells lose the green fluorescent dye due to repeated cell divisions. Right: Graphs show surviving dormant cells after 10 days of TMZ treatment (100 μg/ml) in dependence of the initially seeded cell numbers. Portions of surviving cells are very low in comparison to total (extrapolated) cell numbers in DMSO control cultures, and do not depend on initially seeded cell numbers. Given are mean values of cell counting ± SD from n = 6 independent experiments
To investigate whether the conversion to a dormant phenotype depends on the cell density, we first determined in a DiO retention assay that nearly all cells (98.3% ± 1.2) retain the green fluorescent dye when treated with 100 μg/ml ("high dose") TMZ for 10–12 days, while in control cultures (treated with equal volumes of the solvent DMSO) only 2.9% ± 1.7 retained the dye (Fig. 2c, left part). The vital dye is included in every cell at the moment of staining and is transferred to every daughter cell upon cell division. This means that the staining diminishes after several divisions of rapidly proliferating cells, but is retained in non-proliferative or very slowly cycling dormant cells. Thus, assuming that nearly all cells that survive treatment with 100 μg/ml TMZ are dormant in our particular setting, we determined the relative incidence of phenotype conversion and the influence of the cell density on this conversion factor by relating TMZ surviving cells to (extrapolated) total cell numbers of control (DMSO treated) cultures. In our experimental setting, a portion of 0.68% ± 0.13 of initially seeded 50,000 LN229 glioblastoma cells survived this high dose treatment, while in cultures of initially seeded 150,000 cells, the portion of surviving cells was nearly identical (0.66% ± 0.13; Fig. 2c, right part), underlining the assumptions of our theoretical model.
Thus, treatment with TMZ significantly reduced total cell numbers of LN229 cells, while the share of dormant cells within the culture, as detected by the dormancy marker H2BK, was drastically elevated. The incidence of conversion to dormancy did not depend on cell densities in our particular experimental setting.
Treatment schedules
Next, we use our model to analyze the dynamics of the population composition for periodically changing treatment conditions. One example trajectory for a growth rate of D cells ε = 0 and a conversion rate between phenotypes σ = 0.1 is depicted in Fig. 3. The fraction of D and P cells in the population alternates between the fixed points corresponding to the current treatment condition. The trajectory starts with a phase of no treatment, which is characterized by a high average growth rate and a cell population composed of mostly P cells and only very few D cells. After the first phase of unconstrained growth, large parts of the tumor are removed (e.g. by surgery), leaving only a small number of cancer cells. Under the following high dosage treatment conditions, the dormant phenotype has the highest fitness and takes over the population. The relative fraction of D cells increases until the steady state under high treatment conditions is reached. The impact of treatment on P cells leads to a strong initial decline in the average growth rate, until the population has a significant proportion of dormant cells and the growth rate starts to recover slightly.
Impact of cyclic treatment on the cell population. In each treatment interval, either high dosage (H) or low dosage (L) treatment is applied. Top: Population composition between dormant (D) and rapidly proliferating (P) cells displayed by the relative fraction of both phenotypes under changing conditions. Middle panel: Average growth rate of the whole population under changing treatment conditions. A negative growth rate only occurs in phases of strong treatment. Bottom panel: Absolute number of all tumor cells, P and D phenotype, assuming exponential growth with the given average growth rate (parameters: dormant cell growth rate ε = 0, conversion rate σ = 0.01, high dosage effect λ_H = 1, low dosage effect λ_L = 0.25)
Under the following low dosage treatment conditions, P cells (making up a small fraction of the whole population at the end of the high dosage treatment) are less affected by the treatment and now grow faster. The average growth rate has a maximum when the relative fraction of P cells in the population is still low, since they have a competitive advantage over D cells, and then declines towards an equilibrium well above the high dosage growth rate. Accordingly, the total number of cells increases strongly again in this regime.
Switching the order of high dosage and low dosage treatment only has a small effect on the total number of cells: If treatment starts with low dosage, the system goes into a state with a slightly higher fraction of dormant cells, which makes it less susceptible to the following high dosage treatment. Starting with low dosage therefore does not help to reduce the tumor size.
In Fig. 4 we compare three different treatment schedules: just one switch from initial high (H) dose to low (L) dose treatment (HHHHHHLLLLLL, Fig. 4a, where each instance of the letter H or L corresponds to the same time interval), slow cyclic switching (HHHLLL, Fig. 4b), and fast cyclic switching (HLHL, Fig. 4c), for two different growth rates of D cells (left panels ε = 0 and right panels ε = 0.1). In the case of only one switch, the fixed points for each treatment are quickly reached. During high dosage treatment the number of cells increases very slowly or even decreases. In the following low treatment phase, however, P cells take over, growing particularly fast and jeopardizing any positive effect from the previous strong treatment. This is true for both treatment strengths of the high dosage phase.
Comparison of the effect of different treatment cycle lengths on population composition, average growth rate and number of cells, similarly to Fig. 3. All panels on the left have a D cell growth rate of ε = 0, whereas all panels on the right have ε = 0.1. a The top row shows the case of high dosage treatment followed by a sustained low dosage treatment. b The middle panels use a relatively slow switching between high dosage and low dosage treatment, whereas the bottom panel c shows very fast switching. All other parameters as in Fig. 3
For the fast switching treatment schedule (HLHL, Fig. 4c), the fixed point of the population proportion is not reached before the treatment changes again. Therefore the population dynamics stays between the two stable fixed points for the two treatment regimes, but does not reach them. By contrast, in the slow treatment switching regime (HHHLLL, Fig. 4b) the fixed points for both high and low dosage treatment are reached such that the composition of the cell population essentially resembles the case of just one switching event (Fig. 4a). However the time spent at these fixed points is still significantly reduced compared to only a single switch.
The bottom panels of Fig. 4a, b and c show the total number of cells based on the average fitness of the population under the assumption of exponential growth. When the growth rate is positive the cell population grows, otherwise it shrinks. Interestingly, the average growth rate of the population is well below zero only for a short period during the high dosage treatment and only if the share of P cells is still very high and the fraction of D cells in the population is small. However, in this regime the fitness recovers fast and approaches equilibrium with an average fitness close to zero, such that the total number of cells does not change anymore. The strongly negative growth rate directly after switching to the high dosage treatment is therefore the reason why the number of cells for quickly changing treatment regimes is significantly smaller than for slowly changing treatment cases.
To systematically examine the effect of switching treatment cycles on the growth rate of the population, we analyze the temporal dynamics of the population size for varying treatment cycle durations and two different growth rates of D cells ε (Fig. 5). Unsurprisingly, a lower growth rate of D cells has a diminishing effect on the overall growth of the cells. With increasing treatment cycle length, the cancer cells show an increasing overall growth, even though the total high and low dosage treatment durations remain the same. The overall growth rate approaches a maximum with increasing treatment cycle length when the dynamics reaches equilibrium in each cycle.
Overall growth rate for different treatment cycle lengths and for two different growth rates of dormant cells ε = 0 and ε = 0.1. High and low dosage phases alternate with the given treatment cycle length for a total of 30 cycles. The overall growth rate is then calculated from a linear fit to the log-plot. Other parameters as in Figs. 3 and 4
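A sketch of how such a cycle-length sweep can be set up is given below; it reuses the xdot function from the integration example above, accumulates the log cell number from the average fitness along the trajectory, and fits the overall growth rate as described in the figure caption. Function names, the initial condition and the swept cycle lengths are our own illustrative choices, not the published analysis code.

```python
def overall_growth_rate(cycle_length, eps=0.0, sigma=0.01,
                        lam_high=1.0, lam_low=0.25, n_cycles=30, steps=200):
    """Alternate high/low dose phases and fit the overall growth rate (log slope)."""
    x = np.array([0.05, 0.95])            # initial fractions [x_D, x_P]
    times, log_n = [0.0], [0.0]
    for cycle in range(n_cycles):
        lam = lam_high if cycle % 2 == 0 else lam_low
        t = np.linspace(0.0, cycle_length, steps)
        sol = odeint(xdot, x, t, args=(eps, lam, sigma))
        # average fitness along the trajectory, integrated to update log(cell number)
        fbar = eps * sol[:, 0] + ((1.0 - eps - lam) * sol[:, 0]
                                  + (0.5 - lam) * sol[:, 1]) * sol[:, 1]
        log_n.extend(log_n[-1] + np.cumsum(fbar[1:] * np.diff(t)))
        times.extend(times[-1] + t[1:])
        x = sol[-1]
    return np.polyfit(times, log_n, 1)[0]  # slope of the log-plot = overall growth rate

for length in (1.0, 2.0, 5.0, 10.0):
    print(length, overall_growth_rate(length))
```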
Taken together, our mathematical model allows us to theoretically predict the fitness and proportions of rapidly proliferating and dormant tumor cells under different treatment conditions. Strengthening our theoretical assumptions, we exemplarily showed the effect of high and low chemotherapy doses on the cell numbers and the proportion of a dormant cell phenotype in cultured GBM cells in vitro. Simulating different therapy schedules, we observed that fast switching of low and high treatment doses yields a lower total tumor cell number at equal total drug dose in comparison to slow switching schedules.
In this study, we established a mathematical model and analyzed the proportions of two different cell phenotypes occurring in GBM, rapidly proliferating and dormant cells. Corroborated by experimental data obtained from in vitro experiments with cultured GBM cells, we observed that treatment strength influences the balance between both phenotypes which in turn influences the growth of the whole tumor population. Sequential switching of treatment strength may thus drastically influence the proportion of dormant and rapidly proliferating cells, especially if switching to the next condition takes place before the population dynamics reaches a steady state.
Dormancy in GBM has been shown by the existence of a distinct fraction of temporarily non-proliferative cells in murine models [27], as well as by the identification of clones which were able to generate indolent dormant tumors both in subcutaneous and orthotopic intracranial sites [28]. Additionally, dormancy seems to be characterized by specific features in GBM, such as a non-angiogenic phenotype [24, 28, 29], and is influenced by the (micro-)environment, e.g. hypoxia [30] and coagulation [31–33]. However, cellular dormancy in tumors is not only regarded as a state to overcome times of adverse conditions but has also been linked to DNA repair mechanisms [34]. Interestingly, dormant GBM cells are hallmarked by the upregulation of specific genes like angiomotin, ephrin type-A receptor 5 (EphA5), insulin-like growth factor-binding protein 5 (IGFBP5), and histone cluster 1 (H2BK) [24, 25]. We used the latter as a marker to detect dormant cells in our in vitro experiments and show that the proportion of dormant cells increases with increasing chemotherapy concentrations.
As the fitness in the competition for space and resources depends on the proportions of phenotypically different cell subpopulations, we used evolutionary game theory as a framework for our mathematical model. Previous studies discussed game theoretic interactions with multiple phenotypes for many different types of cancer, including glioma [35], prostate cancer [36], multiple myeloma [19, 37], and tumors in general [38]. Also, evolutionary game theory is often used in spatially structured populations to answer questions about the effect of environmental constraints on tumor composition and invasiveness of cancer cells [20, 39, 40]. However, including spatial structure in order to increase the realism of the model leads to a large number of additional assumptions and potential pitfalls [41, 42]. Other modeling approaches for dormancy in cancer focus on the interaction between the immune system and the tumor [43–45], the effect of angiogenesis [46], or spatial competition between cells [47]. However, these approaches do not explicitly model the conversion between phenotypes and its consequences under therapy with varying strength.
Thus, we decided to simplify our model on several levels: (i) We do not take spatial structure into account. (ii) We abstract from the interaction with the immune system. (iii) We concentrate on two tumor cell phenotypes – rapidly proliferating and dormant cells – although other cell phenotypes, such as fast migrating cells, cells mimicking vasculature, (cancer) stem cells and invading immune cells (e.g. [48–50]), also contribute to the whole tumor mass. (iv) We focus on the fitness of the respective phenotypes rather than the potentially underlying reasons for phenotypical changes (e.g. genetic or epigenetic changes).
An important aspect of our model is the conversion between the different cell phenotypes. Recent studies suggest that dormant cells may originate from "normal" tumor cells by mechanisms that are currently being intensively investigated (e.g. [51–53]). As a fundamental criterion for tumor dormancy, dormant cells need to be able to reawaken and start growing again, so that they then phenotypically resemble the rapidly growing cell phenotype. Thus, we introduced a conversion factor σ into our model capturing these phenotypic transformation processes. Whether such conversions occur spontaneously or can be induced specifically or randomly by extrinsic or intrinsic mechanisms is poorly understood. We thus assumed a spontaneous event which can be modeled by a constant rate.
Using our theoretical approach, we showed that, depending on the applied treatment strength, an equilibrium balance between rapidly proliferating cells and dormant cells is eventually reached. At this fixed point and with low dosage or no treatment, rapidly proliferating cells mostly dominate the population, similar to the findings of Basanta and colleagues [35]. At stronger treatment, the fraction of dormant cells becomes successively larger, yielding a lower growth rate of the whole tumor. However, high dosage treatment cannot be applied for longer time periods, as it causes severe side effects (for GBM: [3]). Hence, we focused on alternative treatment schedules.
Several previous models discuss the effect of different treatment schedules on various aspects of cancer growth, such as angiogenesis [44, 54] or the evolution of resistance [55]. Here, either the dosage and timing or the type of the chemotherapeutic drug is varied, which can have a massive effect on the growth of the tumor. Accordingly, we combined sequential cycles of low and high dose treatment with different durations. Thereby, we observed that the total growth of the tumor is considerably lower for fast switching compared to a slow switching scheme.
In this study, we have developed a theoretical model to predict the tumor growth kinetics under different treatment strengths including a dormant cell phenotype and underlined our theoretical approach with experimental data. Using our model which allows for phenotypic conversion, we could simulate how different tumor cell phenotypes proportionally contribute to the growing tumor mass in cycling treatment schedules. Additionally, we could observe that switching between high and low dosage treatment (with equal total treatment amounts) remarkably affects tumor growth in a frequency dependent way.
Thus, the dynamic proportions between cell phenotypes should be taken into account in the optimization of treatment schedules in order to control tumor growth.
Marvin A. Böttcher and Janka Held-Feindt shared first authorship.
Arne Traulsen and Kirsten Hattermann shared senior authorship.
D: Dormant cells
DMSO: Dimethyl sulfoxide
GBM: Glioblastoma multiforme
P: Rapidly proliferating cells
TMZ: Temozolomide
We thank Judith Becker, Martina Burmester, Fereshteh Ebrahim and Brigitte Rehmke for expert technical assistance.
This work was supported by a sponsorship of the Medical Faculty of the University Kiel ("Forschungsförderung 2016" to JHF and KH) and the Deutsche Forschungsgemeinschaft RTG2154 [project 7 (KH) and project 8 (JHF)]. The funding body did not have any influence in the design of the study, collection, analysis and interpretation of data and in writing of the manuscript.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. The python code to reproduce Figs. 1, 3, 4, and 5 is open source and available under https://github.com/marvinboe/TreatCycle.
MB, JHF, AT and KH designed experimental and theoretical models and prepared the manuscript; MB and AT calculated and visualized the theoretical model; JHF, MS, RL and KH contributed to the experimental model; all authors read and approved the final manuscript.
Department Evolutionary Theory, Max Planck Institute for Evolutionary Biology, 24306 Plön, Germany
Department of Neurosurgery, University Medical Center Schleswig-Holstein UKSH, Campus Kiel, 24105 Kiel, Germany
Department of Anatomy, University of Kiel, 24098 Kiel, Germany
Ohgaki H, Kleihues P. Epidemiology and etiology of gliomas. Acta Neuropathol. 2005;109:93–108.
Stupp R, Mason WP, van den Bent MJ, Weller M, Fisher B, Taphoorn MJ, Belanger K, Brandes AA, Marosi C, Bogdahn U, Curschmann J, Janzer RC, Ludwin SK, Allgeier A, Lacombe D, Cairncross JG, Eisenhauer E, Mirimanoff RO. Radiotherapy plus concomitant and adjuvant temozolomide for glioblastomas. N Engl J Med. 2005;352:987–96.
Niewald M, Berdel C, Fleckenstein J, Licht N, Ketter R, Ruebe C. Toxicity after radiochemotherapy for glioblastoma using temozolomide - a retrospective evaluation. Rad Oncol. 2011;6:141.
Gatenby RA, Gillies RJ, Brown JS. Of cancer and cave fish. Nat Rev Cancer. 2011;11:237–8.
Almog N. Molecular mechanisms underlying tumor dormancy. Cancer Lett. 2010;294:139–46.
Wikman H, Vessella R, Pantel K. Cancer micrometastasis and tumour dormancy. APMIS. 2008;116:754–70.
Bragado P, Sosa MS, Keely P, Condeelis J, Aguirre-Ghiso JA. Microenvironments dictating tumor cell dormancy. Recent Results Cancer Res. 2012;195:25–39.
Yeh AC, Ramaswamy S. Mechanisms of Cancer cell dormancy--another Hallmark of Cancer? Cancer Res. 2015;75:5014–22.
Zhang XH, Giuliano M, Trivedi MV, Schiff R, Osborne CK. Metastasis dormancy in estrogen receptor-positive breast cancer. Clin Cancer Res. 2013;19:6389–97.
Sosnoski DM, Norgard RJ, Grove CD, Foster SJ, Mastro AM. Dormancy and growth of metastatic breast cancer cells in a bone-like microenvironment. Clin Exp Metastasis. 2015;32:335–44.
Linde N, Fluegen G, Aguirre-Ghiso JA. The relationship between dormant Cancer cells and their microenvironment. Adv Cancer Res. 2016;132:45–71.
Sertil AR. Hypoxia regulation in cellular dormancy. In: Hayat MA, editor. Tumor dormancy, quiescence and senescence, Vol. 3. Dordrecht: Springer Netherlands; 2014. p. 13–24.
Hirose Y, Berger MS, Pieper RO. p53 effects both the duration of G2/M arrest and the fate of temozolomide-treated human glioblastoma cells. Cancer Res. 2001;61:1957–63.
Altrock PM, Lin LL, Michor F. The mathematics of cancer: integrating quantitative models. Nat Rev Cancer. 2015;15:730–45.
Nowak MA. Evolutionary dynamics. Cambridge: Harvard University Press; 2006.
Hofbauer J, Sigmund K. Evolutionary games and population dynamics. Cambridge: Cambridge University Press; 1998.
Basanta D, Anderson ARA. Exploiting ecological principles to better understand cancer progression and treatment. Interface Focus. 2013;3:20130020.
Basanta D, Deutsch A. A game theoretical perspective on the somatic evolution of Cancer. In: Bellomo N, Angelis E, editors. Selected topics in Cancer modeling. Basel: Springer/ Birkhäuser; 2008. p. 97–112.
Dingli D, Chalub FACC, Santos FC, Van Segbroeck S, Pacheco JM. Cancer phenotype as the outcome of an evolutionary game between normal and malignant cells. Brit J Cancer. 2009;101:1130–6.
Kaznatcheev A, Scott JG, Basanta D. Edge effects in game-theoretic dynamics of spatially structured tumours. J R Soc Interface. 2015;12:20150154.
Orlando PA, Gatenby RA, Brown JS. Cancer treatment as a game: integrating evolutionary game theory into the optimal control of chemotherapy. Physical Biol. 2012;9:065007.
Bomze I, Bürger R. Stability by mutation in evolutionary games. Games Econ Behav. 1995;11:146–72.
Page KM, Nowak MA. Unifying evolutionary dynamics. J Theor Biol. 2002;219:93–8.
Almog N, Ma L, Raychowdhury R, Schwager C, Erber R, Short S, Hlatky L, Vajkoczy P, Huber PE, Folkman J, Abdollahi A. Transcriptional switch of dormant tumors to fast-growing angiogenic phenotype. Cancer Res. 2009;69:836–44.
Adamski V, Hempelmann A, Flüh C, Lucius R, Synowitz M, Hattermann K, Held-Feindt J. Dormant human glioblastoma cells acquire stem cell characteristics and are differentially affected by Temozolomide and AT101 treatment strategies. Oncotarget. 2017;8(64):108064–78.
Strogatz S. Nonlinear dynamics and Chaos: with applications to physics, biology, chemistry, and engineering (studies in nonlinearity). New York: Westview Press; 2000.
Endaya BB, Lam PY, Meedeniya AC, Neuzil J. Transcriptional profiling of dividing tumor cells detects intratumor heterogeneity linked to cell proliferation in a brain tumor model. Mol Oncol. 2016;10:126–37.
Satchi-Fainaro R, Ferber S, Segal E, Ma L, Dixit N, Ijaz A, Hlatky L, Abdollahi A, Almog N. Prospective identification of glioblastoma cells generating dormant tumors. PLoS One. 2012;7:e44395.
Naumov GN, Bender E, Zurakowski D, Kang SY, Sampson D, Flynn E, Watnick RS, Straume O, Akslen LA, Folkman J, Almog N. A model of human tumor dormancy: an angiogenic switch from the nonangiogenic phenotype. J Natl Cancer Inst. 2006;98:316–25.
Hofstetter CP, Burkhardt JK, Shin BJ, Gürsel DB, Mubita L, Gorrepati R, Brennan C, Holland EC, Boockvar JA. Protein phosphatase 2A mediates dormancy of glioblastoma multiforme-derived tumor stem-like cells during hypoxia. PLoS One. 2012;7:e30059.
Magnus N, Gerges N, Jabado N, Rak J. Coagulation-related gene expression profile in glioblastoma is defined by molecular disease subtype. J Thromb Haemost. 2013;11:1197–200.
Magnus N, Garnier D, Meehan B, McGraw S, Lee TH, Caron M, Bourque G, Milsom C, Jabado N, Trasler J, Pawlinski R, Mackman N, Rak J. Tissue factor expression provokes escape from tumor dormancy and leads to genomic alterations. Proc Natl Acad Sci U S A. 2014;111:3544–9.
Magnus N, D'Asti E, Meehan B, Garnier D, Rak J. Oncogenes and the coagulation system--forces that modulate dormant and aggressive states in cancer. Thromb Res. 2014;133(Suppl 2):S1–9.
Evans EB, Lin SY. New insights into tumor dormancy: targeting DNA repair pathways. World J Clin Oncol. 2015;6:80–8.
Basanta D, Simon M, Hatzikirou H, Deutsch A. Evolutionary game theory elucidates the role of glycolysis in glioma progression and invasion. Cell Prolif. 2008;41:980–7.
Basanta D, Scott JG, Fishman MN, Ayala G, Hayward SW, Anderson ARA. Investigating prostate cancer tumour–stroma interactions: clinical and biological insights from an evolutionary game. Brit J Cancer. 2012;106:174–81.
Pacheco JM, Santos FC, Dingli D. The ecology of cancer from an evolutionary game theory perspective. Interface Focus. 2014;4:20140019.
Kaznatcheev A, Vander Velde R, Scott JG, Basanta D. Cancer treatment scheduling and dynamic heterogeneity in social dilemmas of tumour acidity and vasculature. Brit J Cancer. 2017;9:1–24.
Gerlee P, Anderson ARA. An evolutionary hybrid cellular automaton model of solid tumour growth. J Theoretical Biol. 2007;246:583–603.
Anderson ARA, Hassanein M, Branch KM, Lu J, Lobdell NA, Maier J, Basanta D, Weidow B, Narasanna A, Arteaga CL, Reynolds AB, Quaranta V, Estrada L, Weaver AM. Microenvironmental independence associated with tumor progression. Cancer Res. 2009;69:8797–806.
Zukewich J, Kurella V, Doebeli M, Hauert C. Consolidating birth-death and death-birth processes in structured populations. PLoS One. 2013;8:e54639.
Hindersin L, Traulsen A. Most undirected random graphs are amplifiers of selection for birth-death dynamics, but suppressors of selection for death-birth dynamics. PLoS Comput Biol. 2015;11:e1004437.
Wilkie KP, Hahnfeldt P. Tumor-immune dynamics regulated in the microenvironment inform the transient nature of immune-induced tumor dormancy. Cancer Res. 2013;73:3534–44.
Hahnfeldt P, Folkman J, Hlatky L. Minimizing long-term tumor burden: the logic for metronomic chemotherapeutic dosing and its antiangiogenic basis. J Theoretical Biol. 2003;220:545–54.
Page K, Uhr J. Mathematical models of cancer dormancy. Leuk Lymphoma. 2005;46:313–27.
Kareva I. Escape from tumor dormancy and time to angiogenic switch as mitigated by tumor-induced stimulation of stroma. J Theor Biol. 2016;395:11–22.
Enderling H, Anderson ARA, Chaplain MAJ, Beheshti A, Hlatky L, Hahnfeldt P. Paradoxical dependencies of tumor dormancy and progression on basic cell kinetics. Cancer Res. 2009;69:8814–21.
Adamski V, Schmitt AD, Flüh C, Synowitz M, Hattermann K, Held-Feindt J. Isolation and characterization of fast migrating human glioma cells in the progression of malignant gliomas. Oncol Res. 2017;25:341–53.
Hattermann K, Flüh C, Engel D, Mehdorn HM, Synowitz M, Mentlein R, Held-Feindt J. Stem cell markers in glioma progression and recurrence. Int J Oncol. 2016;49:1899–910.
Held-Feindt J, Hattermann K, Müerköster SS, Wedderkopp H, Knerlich-Lukoschus F, Ungefroren H, Mehdorn HM, Mentlein R. CX3CR1 promotes recruitment of human glioma-infiltrating microglia/macrophages (GIMs). Exp Cell Res. 2010;316:1553–66.
Hoppe-Seyler K, Bossler F, Lohrey C, Bulkescher J, Rösl F, Jansen L, Mayer A, Vaupel P, Dürst M, Hoppe-Seyler F. Induction of dormancy in hypoxic human papillomavirus-positive cancer cells. Proc Natl Acad Sci U S A. 2017;114:E990–8.
Sosa MS, Parikh F, Maia AG, Estrada Y, Bosch A, Bragado P, Ekpin E, George A, Zheng Y, Lam HM, Morrissey C, Chung CY, Farias EF, Bernstein E, Aguirre-Ghiso JA. NR2F1 controls tumour cell dormancy via SOX9- and RARβ-driven quiescence programmes. Nat Commun. 2015;6:6170.
Ranganathan AC, Adam AP, Aguirre-Ghiso JA. Opposing roles of mitogenic and stress signaling pathways in the induction of cancer dormancy. Cell Cycle. 2006;5:1799–807.
Schättler H, Ledzewicz U, Amini B. Dynamical properties of a minimally parameterized mathematical model for metronomic chemotherapy. J Math Biol. 2016;72:1255–80.
Dhawan A, Nichol D, Kinose F, Abazeed ME, Marusyk A, Haura EB, Scott JG. Collateral sensitivity networks reveal evolutionary instability and novel treatment strategies in ALK mutated non-small cell lung cancer. Sci Rep. 2017;7:1232.
May 2014, 34(5): 2061-2068. doi: 10.3934/dcds.2014.34.2061
The Fourier restriction norm method for the Zakharov-Kuznetsov equation
Axel Grünrock 1, and Sebastian Herr 2,
Heinrich-Heine-Universität Düsseldorf, Mathematisches Institut, Universitätsstraße 1, 40225 Düsseldorf, Germany
Universität Bielefeld, Fakultät für Mathematik, Postfach 10 01 31, 33501 Bielefeld, Germany
Received February 2013 Revised June 2013 Published October 2013
The Cauchy problem for the Zakharov-Kuznetsov equation is shown to be locally well-posed in $H^s(\mathbb{R}^2)$ for all $s>\frac{1}{2}$ by using the Fourier restriction norm method and bilinear refinements of Strichartz type inequalities.
Keywords: Fourier restriction norm method, bilinear estimates, low regularity, Zakharov-Kuznetsov equation, well-posedness.
Mathematics Subject Classification: Primary: 35Q53; Secondary: 42B3.
Citation: Axel Grünrock, Sebastian Herr. The Fourier restriction norm method for the Zakharov-Kuznetsov equation. Discrete & Continuous Dynamical Systems - A, 2014, 34 (5) : 2061-2068. doi: 10.3934/dcds.2014.34.2061
M. Keel, Tristan Roy, Terence Tao. Global well-posedness of the Maxwell-Klein-Gordon equation below the energy norm. Discrete & Continuous Dynamical Systems - A, 2011, 30 (3) : 573-621. doi: 10.3934/dcds.2011.30.573
Gustavo Ponce, Jean-Claude Saut. Well-posedness for the Benney-Roskes/Zakharov- Rubenchik system. Discrete & Continuous Dynamical Systems - A, 2005, 13 (3) : 811-825. doi: 10.3934/dcds.2005.13.811
Axel Grünrock Sebastian Herr | CommonCrawl |
\begin{document}
\title{Assessing the validity of Bayesian inference using loss functions}
\begin{abstract} In the usual Bayesian setting, a full probabilistic model is required to link the data and parameters, and the form of this model and the inference and prediction mechanisms are specified via de Finetti's representation. In general, such a formulation is not robust to model mis-specification of its component parts. An alternative approach is to draw inference based on loss functions, where the quantity of interest is defined as a minimizer of some expected loss, and to construct posterior distributions based on the loss-based formulation; this strategy underpins the construction of the Gibbs posterior. We develop a Bayesian non-parametric approach; specifically, we generalize the Bayesian bootstrap, and specify a Dirichlet process model for the distribution of the observables. We implement this using direct prior-to-posterior calculations, but also using predictive sampling. We also study the assessment of posterior validity for non-standard Bayesian calculations, and provide an efficient way to calibrate the scaling parameter in the Gibbs posterior so that it can achieve the desired coverage rate. We show that the developed non-standard Bayesian updating procedures yield valid posterior distributions in terms of consistency and asymptotic normality under model mis-specification. Simulation studies show that the proposed methods can recover the true value of the parameter efficiently and achieve frequentist coverage even when the sample size is small. Finally, we apply our methods to evaluate the causal impact of speed cameras on traffic collisions in England.
\noindent \textit{Key words:} General Bayesian updating; Loss functions; Bayesian predictive inference; Scaling parameter; Semi-parametric inference
\end{abstract}
\section{Introduction and motivation} \label{sec:intro}
Bayesian inference methods are central to decision making under uncertainty. The most common approach to Bayesian (prior-to-posterior) updating employs parametric specifications of probability models for the observable quantities, but there has also been much research on relaxing parametric assumptions, as parametric models are not typically robust to model mis-specification; that is, they rely on correct specification of (at least) the likelihood that appears in de Finetti's representation. In standard prior-to-posterior inference, coherent Bayesian updating of prior beliefs on the parameter follows from an assumption of exchangeability of the observable quantities, the de Finetti representation for the corresponding probability model, and the combination of prior distribution on the unobservable data generating model with an induced conditional probability model for the observables. In contrast, \cite{zhang2006a}, \cite{jiang2008gibbs}, and \cite{bissiri2016general} adopt a decision-making approach, and formulate posterior inference entirely on a loss (or utility) specification to target a specific parameter in the data-generating distribution, leading to the so-called \textit{Gibbs posterior}. The Gibbs posterior results from a prior-to-posterior update where the loss function is converted to yield a pseudo-likelihood, and then combined with a prior distribution. An advantage of the targeting of a specific parameter of interest is that a full probabilistic specification for the data generating model for the observables is avoided. However, a disadvantage is that in general a probabilistic interpretation of the assumptions concerning the distribution of the observables is lost. In this paper, we explore the probabilistic validity of inference based purely on loss functions.
\subsection{Parameter definition}
Common to both standard Bayesian inference and the Gibbs posterior approach is that the quantities of interest are expressed as some functional of the data generating process, which is characterized by the distribution $F^\ast$. In any inference problem, $F^\ast$ -- and thus any functional of it -- is regarded as an unknown quantity, and uncertainty is present due to the absence of perfect knowledge of $F^\ast$. If prior uncertainty for $F^\ast$ is encapsulated in a prior distribution, the form of the posterior on $F^\ast$, and the posterior of any parameter of interest, can be deduced.
In the standard Bayesian approach, a `parameter' is defined as a functional of the limiting distribution of the exchangeable observables. In a parametric specification, $F^\ast(z) \equiv F^\ast(z|\xi^\ast)$, where $\xi^\ast$ lies in the finite dimensional parameter space $\Xi$. In the simplest case of exchangeable binary variables $\{Z_i,i=1,\ldots,n,\ldots\}$, for any $n \geq 1$, we have that for any binary sequence $z_1,\ldots,z_n$, the joint distribution can be written using the de Finetti representation as the integral over `parameter' $\xi$ of the product of the conditional densities $f^\ast(z_i|\xi) = \xi^{z_i} (1-\xi)^{1-{z_i}}$ and prior $\pi_0(\xi)$, where the true (data generating) parameter and distribution coincide with \[ \xi^\ast = \lim_{n \longrightarrow \infty} \frac{1}{n} \sum_{i=1}^n Z_i \quad \text{and} \quad F^\ast((-\infty,z]) = \lim_{n \longrightarrow \infty} \frac{1}{n} \sum_{i=1}^n \mathbb{1}_{(-\infty,z]}(Z_i) \quad z \in \R \] respectively. The de Finetti representation defines `parameters' in this way, although definitions based on invariance, sufficiency, and information can also be used -- see \citet[Chap.4]{bernardo2009bayesian}.
A formulation that identifies the same parameter is based on loss minimization, with \[
\xi^\ast = \arg \min_t \int - \log f^\ast(z|t) d F^\ast(z) = \arg \max_t \int (z \log t + (1-z) \log (1-t)) d F^\ast(z). \]
The loss function $\ell(z,t) = - \log f^\ast(z|t)$ defines the parameter as that which minimizes the expected loss under $F^\ast (z|\xi^\ast)$.
Note that we may equivalently define $\xi^\ast$ via a different loss function and functional of $F^\ast$: for example, using $\ell(z,t) = \lambda (z-t)^2$ with parameter $\lambda > 0$ returns the same minimizer, so the formulation via loss function minimization is not unique. This is not problematic per se, and illustrates the importance of `modelling' the connection between observables and the parameter. However, there is no equivalent de Finetti representation in the second formulation, as this would require $-\log f^\ast(z|t) = \lambda (z-t)^2 + h(z)$ for some function $h(z)$ that does not depend on $t$, so that $f^\ast(z|t) = \exp\{-\lambda (z-t)^2- h(z)\}$; however, this $f^\ast(z|t)$ is not a mass function on $\{0,1\}$. In general, the probabilistic formulation can be incorporated into the loss function formulation, but the converse is not true, and there are cases that are not readily amenable to a loss minimization formulation coinciding with the standard formulation.
\subsection{Assessing the validity of loss-based posterior inference}
Despite the caveats associated with the lack of uniqueness, the loss function-based approach has been proposed as the basis for generalized Bayesian inference. In this paper, we explore two aspects of this proposal focussing on the validity of such inference. First, we develop procedures for non-parametric Bayesian inference using an updating framework that guarantee consistent estimation under mild conditions. In particular, we develop computational approaches that generalize the Bayesian bootstrap \citep{rubin1981bayesian,newton1994approximate,chamberlain2003nonparametric,graham2016approximate,lyddon2019general} based on a Dirichlet process (DP) formulation. Secondly, we investigate the inferential validity in terms of uncertainty representation. When a posterior distribution is computed using standard Bayesian prior-to-posterior updating based on de Finetti's representation under correct specification, the posterior distribution is well-calibrated in terms of coverage. That is, under mild conditions, Bayes theorem guarantees that the frequentist coverage of the corresponding Bayesian credible intervals achieves the nominal level. However, such guarantees have not been established under mis-specification, or when non-standard methods for computing the posterior distribution are used. \cite{monahan1992proper} study this phenomenon, and establish several cases in which deviation from strict application of Bayes theorem in the computation of the posterior distribution leads to invalid credible intervals (in the sense of coverage). In this paper, we use the \cite{monahan1992proper} approach to assess the validity of general posterior inference methods.
\subsection{Motivating example}
Our motivating example setting is the use of propensity score adjustment in doubly robust causal inference. Doubly robust (DR) procedures have a well-established basis in frequentist semi-parametric theory, with estimation of causal parameters typically conducted via outcome regression (OR) and propensity score (PS) adjustment. The key feature of DR models is that consistent estimation of a typical causal estimand, the average treatment effect (ATE), requires only one of the OR or PS models to be correctly specified, thus adding a degree of robustness in the estimation of causal quantities. A Bayesian approach to semi-parametric DR inference is not obvious as it typically avoids specification of a likelihood function. Most proposed methods deploy two-stage PS-adjustment or flexible outcome modelling; see the survey in \cite{stephens2021bayesian}. {Several} semi-parametric methods have also been proposed \citep[for example,][]{graham2016approximate, saarela2016bayesian,luo2022semi}; these methods typically exploit computational approaches, in particular the Bayesian bootstrap to perform inference.
\subsection{Plan of paper}
This paper is organized as follows. We present two general Bayesian updating mechanisms in Section \ref{BayesLoss}, specifically prior-to-posterior updates via Bayesian non-parametric modelling and predictive-to-posterior updates. In Section \ref{sec:valid}, we assess posterior validity, and develop an approach to calibrate the Gibbs posterior. Section \ref{sec:AP} outlines the asymptotic justification for the proposed approach, followed by the example of Bayesian doubly robust causal inference via an augmented OR in Section \ref{bayecausal}. Section \ref{Sec:sim} demonstrates the proposed method with simulation studies. We apply this method in a real causal inference example in Section \ref{Sec:app}. Finally, Section \ref{Sec:dis} presents some concluding remarks and future research directions.
\section{Bayesian inference via loss functions} \label{BayesLoss}
In the simplest case, standard Bayesian inference is based on a probability model $f(z|\xi)$ for conditionally independent and identically distributed observable random variables $Z_i, i=1,\ldots,n$. A full probabilistic model is required. The standard approach focuses on the posterior distribution, $\pi\left(\xi\left|z\right.\right) \propto \mathcal{L}\left(\xi\right) \pi_0\left(\xi\right)$, where $\mathcal{L}\left(\xi\right)=\prod_{i=1}^{n} f\left(z_i\left|\xi\right.\right)$ is the likelihood and $\pi_0\left(\xi\right) $ the prior density for $\xi$. To allow for the possibility of partial (or mis-) specification, we denote the target parameter by $\theta^\ast$, where $\theta^\ast$ may be identical to $\xi^\ast$, or an element or subvector of $\xi^\ast$, or a parameter that is only defined functionally via $F^\ast$. We use $\theta \in \Theta$ to denote a generic parameter value and its parameter space. We first present the loss-based approach to Bayesian inference, and then present two fundamental approaches to general Bayesian updating. The first of these computes the posterior using a prior-to-posterior update; the second defines the posterior as a limiting functional of the posterior predictive, $p(z_{(n+1):(n+N)} | z_{1:n} )$ as $N \longrightarrow \infty$.
For the standard posterior distribution $\pi^\ast (\xi | z_{1:n})$ based on correct specification via the model $f^\ast(\cdot | \xi)$, the Bayes estimator of a target parameter minimizes the posterior expected loss, given by \begin{equation}
\label{postloss}
\hat \theta = \arg \min\limits_{t \in \Theta} \int_{\Xi} u\left(t,\xi \right) \pi^\ast\left(\xi | z_{1:n} \right) \ d \xi \end{equation} where $u$ is a real-valued function quantifying the loss between $\theta$ and $\xi$. The minimizer in \eqref{postloss} is typically a function of $z_{1:n}$. The formulation allows consideration of the case when $\theta$ and the associated loss function relate to an alternative inference model, with this alternative model linked to the presumed data generating model via a utility function. For example, we may specify $u\left(\theta,\xi \right) = \mathbb{E}_{Z\mid \xi}\left[\ell\left(Z,\theta\right)\right]$ for the observable $Z$, so that \begin{equation}
\label{postlossfull}
\hat \theta = \arg \min\limits_{t \in \Theta} \int_{\Xi} \left\{ \int \ell(z,t) f(z|\xi) \ d z \right\} \pi^\ast\left(\xi | z_{1:n} \right) \ d \xi \end{equation} Here the loss function $\ell(z,t)$ captures the loss in the proposed alternative inference model. We appeal to calculations inspired by \eqref{postloss} and \eqref{postlossfull} in order to perform inference for $\theta$.
\subsection{{Targeting parameters via loss minimization}}
We may also define target parameter $\theta$ via a loss function, $\ell\left(z,t\right)$ and consider {minimization of the expected loss taken with respect to the data generating model $F^\ast(\cdot)$. The `true' value of the parameter, $\theta^\ast$, is defined as the value which minimizes the expected loss
\begin{equation}
\label{u2}
\theta^\ast=\arg\min\limits_{t \in \Theta }\mathbb{E}\left[\ell\left(Z,t\right)\right]=\arg\min\limits_{t \in \Theta}\int \ell\left(z,t\right) dF^\ast\left(z\right)
\end{equation}
where the integral is presumed finite for at least one $t \in \Theta$. }
In the parametric case, if $F^\ast(z|\xi^\ast)$ admits a density $f^\ast(z | \xi^\ast)$ with respect to Lebesgue measure, and if $ \ell (z,t ) = -\log f (z |t )$ for some other density $f$, the expectation becomes (up to an additive constant that does not depend on $t$) the Kullback-Leibler (KL) divergence between the true model $f^\ast (z | \xi^\ast )$ and $f(z | t )$. If the model is correctly specified, and identifiable, we have that $\theta^\ast= \xi^\ast$. If $f$ is mis-specified, this definition of the `true' parameter is in line with standard frequentist arguments. If we further assume $\ell\left(z,t\right)$ is differentiable with respect to $t \in \Theta$ for all $z$, then $\theta^\ast$ is the solution of the unbiased estimating equation \[ \int \frac{\partial \ell\left(z,\theta\right)}{\partial \theta} dF^\ast\left(z\right)= \mathbb{E}\left[\frac{\partial \ell\left(Z,\theta\right)}{\partial \theta} \right]=0. \] If $F^\ast\left(z\right)$ is represented using a non-parametric specification, we can retain most of the parametric calculations but with $\theta^\ast$ as a functional of $F^\ast$.
This loss-based formulation allows for the possibility that the loss function $\ell(z,\theta)$ represents a mis-specification of the outcome model. If $\ell(z,\theta) = -\log f (z |\theta )$ but $f$ does not match the data generating model $f^\ast$, then this posterior for $\theta$, however it is computed, will quantify posterior uncertainty in a quantity that is connected to the data generating mechanism in an abstract way. This posterior, in general, is of little practical use as it does not facilitate inference in a true quantity of interest, nor does it facilitate prediction. The exception is when $\theta^\ast$ is a meaningful parameter in the data generating model -- in this case, computing the posterior is still a worthwhile pursuit. This realization reinforces the notion that to guarantee this compatibility, the data-generating model must be represented using a non-parametric formulation.
\subsection{Prior-to-posterior updating via Bayesian non-parametric modelling}
\label{priorpost}
From a prior-to-posterior perspective, the decision task is to construct a similar minimization problem with a new objective function involving a measure on $\theta$. The conventional Bayesian calculation renders the solution to \eqref{postlossfull} a single point, that is, the value $\widehat \theta \equiv \widehat \theta (z_{1:n})$ that minimizes this posterior expected loss. We aim to use a similar loss-minimization construction to produce a sample from the posterior distribution. Consider the right-hand side of equation \eqref{u2}, and suppose that the true distribution is $F^\ast(z|\xi^\ast)$. If we have a posterior distribution $\pi(\xi|z_{1:n})$, then the uncertainty represented by the posterior is preserved under the deterministic calculation implied by \eqref{u2}; that is, for example if $\xi^{s}$ is a sampled variate from $\pi^\ast(\xi|z_{1:n})$, then the quantity \begin{equation}
\label{u2samp}
\theta^{s}=\arg\min\limits_{t \in \Theta}\int \ell\left(z,t\right) d F^\ast(z|\xi^{s}) \end{equation} is a variate drawn from the posterior for $\theta$.
The parametric version of this calculation relies on correct specification of the model leading to the calculation of posterior $\pi^\ast(\xi|z_{1:n})$ to guarantee consistent estimation. A Bayesian non-parametric formulation gives protection against mis-specification; we henceforth denote a generic instance $F$. In this case, $\pi^\ast\left(F | z_{1:n} \right)$ is a probability distribution on the space of distribution functions, and a draw from this posterior is a random distribution which can be transformed via \eqref{postloss} into a sampled variate $\theta$, which may be replicated to reproduce the posterior for the minimizing quantity as indicated by \eqref{u2samp}.
{A simple implementation of this Bayesian non-parametric theory is given by the Bayesian bootstrap \citep{rubin1981bayesian}, which assumes that the data points are realizations from a multinomial model on the finite set $\left\{z_1,\ldots,z_n\right\}$ with unknown probability $\varpi=\left(\varpi_1,\ldots,\varpi_n\right)$, and assumes a priori that $\varpi\sim \text{Dirichlet}\left(\alpha,\ldots,\alpha\right)$. Then, a posteriori $\varpi\sim \text{Dirichlet}\left(\alpha+1,\ldots,\alpha+1\right)$. Conditional on a draw $\varpi$ from the posterior distribution, samples from the posterior predictive can be made by drawing independently from $\{z_1,\ldots,z_n\}$ with associated probabilities $\{\varpi_1,\ldots,\varpi_n\}$. The Bayesian bootstrap is obtained under the improper specification $\alpha=0$. Referring to \eqref{postloss}, the parameter $\theta$ can then be derived via
\begin{equation}\label{wlike}
\theta (\varpi )=\arg\min_{t\in \Theta}\sum_{k=1}^{n}\varpi_k\ell\left(z_k,t\right)
\end{equation}
that is, via a deterministic transformation of $\varpi$. A sample from the posterior distribution for $\theta$ can be obtained by repeatedly drawing $\varpi \sim \text{Dirichlet}\left(1,\ldots,1\right)$ and obtaining the solutions to \eqref{wlike} \citep{chamberlain2003nonparametric,graham2016approximate}.} \cite{newton2021weighted} exploited the computational advantages of the Bayesian bootstrap for scalable likelihood inference in the high-dimensional setting.
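To make the weighted-loss calculation in \eqref{wlike} concrete, the following minimal Python sketch (illustrative only; it assumes a squared-error loss, so that the target parameter is the mean functional of $F^\ast$, and uses synthetic data) draws $\text{Dirichlet}(1,\ldots,1)$ weights and solves the weighted minimization numerically.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
z = rng.gamma(2.0, 1.5, size=200)        # synthetic data from an unknown F*

def loss(zv, t):
    # squared-error loss: the minimizer of the expected loss is the mean of F*
    return (zv - t) ** 2

S = 2000
theta_post = np.empty(S)
for s in range(S):
    # Bayesian bootstrap: Dirichlet(1, ..., 1) weights on the observed points
    w = rng.dirichlet(np.ones(len(z)))
    # weighted-loss minimization, cf. the display above
    theta_post[s] = minimize_scalar(lambda t: np.sum(w * loss(z, t))).x

print(theta_post.mean(), np.quantile(theta_post, [0.025, 0.975]))
\end{verbatim}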
{The Bayesian bootstrap is a consequence of a Dirichlet process (DP) specification that can be implemented in a more general form. Suppose that, a priori, $F \sim DP\left(\alpha,G_0\right)$ where $\alpha>0$ is the concentration parameter and $G_0$ is the base measure. In light of data $\left(z_1,\ldots,z_n\right)$, the resulting posterior distribution of $F$ is $DP\left(\alpha_n,G_n\right)$, where $\alpha_n=\alpha+n$ and $G_n(\ldotp)= \alpha G_0(\ldotp) \big /{(\alpha+n)} + {\sum_{k=1}^{n}\delta_{z_k}\left(\ldotp\right)}\big/{(\alpha+n)}$, and the posterior predictive distribution is effectively identical to the posterior distribution; a random draw from the posterior distribution on $F$ provides a (conditional) distribution from which the observables may be drawn independently. If $\alpha \longrightarrow 0$, the posterior distribution is realized as a $\text{Dirichlet}\left(1,\ldots,1\right)$ distribution on $\left\{z_1,\ldots,z_n\right\}$, which reduces to the distribution implied in the Bayesian bootstrap. If $\alpha >0$, this is still a standard DP model specification, in which $\left\{\zeta_k\right\}_{k=1}^{\infty} \sim G_n$ independently and $\left\{\varpi_k\right\}_{k=1}^{\infty} \sim StickBreaking(\alpha_n)$; the standard stick-breaking algorithm generates the weights by a transformation of the collection $\{V_k\}_{k=1}^\infty$ of independent random variables $V_k \sim Beta(1,\alpha)$, with $\varpi_1 = V_1$ and, for $j=2,3,\ldots$, $ \varpi_j = V_j \prod_{k=1}^{j-1} (1-V_k)$. In the case $\alpha > 0$, the equivalent to \eqref{wlike} is
\begin{equation}\label{wlikeInf}
\theta (\varpi,\zeta )=\arg\min_{t}\sum_{k=1}^{\infty}\varpi_k\ell\left(\zeta_k,t\right)
\end{equation}
and although this is an infinite sum, the $\varpi_k$ decreases in expectation as $k$ increases and eventually becomes numerically negligible. The $\{\varpi_k\}$ can also be generated to be monotonically decreasing in $k$ using an algorithm designed to simulate a Gamma process \citep{walker2000miscellanea}. If $\{U_k\}$ are a sequence of independent $Uniform(0,1)$ random variables, we define $T_1 = h^{-1}(-\log U_1)$ and $T_k = h^{-1}(h(T_{k-1})-\log U_k)$ for $k = 2,3,\ldots$, where
\[
h(t) = \alpha \int_t^\infty \frac{1}{x} e^{-x} \ dx
\]
is $\alpha$ times the exponential integral, a monotonically decreasing function of $t$ with $h(0) = \infty$ and $h(t) \rightarrow 0$ as $t \rightarrow \infty$. Then $
\varpi_k = T_k/\sum_{j = 1}^\infty T_j$ for $k=1,2,\ldots$ form a sequence of monotonically decreasing probabilities. The quantities $\{T_k\}$ are themselves monotonically decreasing, allowing straightforward evaluation of the infinite sum up to machine precision. Key relevant references include \cite{muliere1996bayesian,muliere1998approximating,ishwaran2002exact}. The random draw of $\{\varpi_k,\zeta_k\}_{k=1}^\infty$ is then converted by the deterministic transform implicit in \eqref{wlikeInf} into a sample from the posterior for the target parameter $\theta$. Algorithm \ref{A0} describes the prior-to-posterior approach based on the Dirichlet process.
\begin{algorithm}[H]
\SetAlgoLined
\KwData{$z_{1:n}=(z_1,\ldots,z_n)$}
\For{$s$ \textbf{to} $1:S$}{
Sample $\{\zeta^{s}_k\} \sim G_{n}$\ independently;
Sample $\{\varpi_k^{s}\}$ from a stick-breaking process with $\alpha_n=\alpha+n$ and $\alpha>0$.
Compute $\theta^{s}$ by solving the minimization problem in \eqref{wlikeInf};
}
\Return $(\theta^{1},\ldots,\theta^{S})$. \\[6pt]
\caption{\label{A0}Prior-to-posterior inference based on a stick-breaking process.}
\end{algorithm}
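A minimal Python sketch of Algorithm \ref{A0} follows; it is illustrative only, assuming a squared-error loss, a Normal base measure $G_0$, and a fixed truncation level for the stick-breaking weights (the Gamma-process construction above can replace the truncation).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
z = rng.gamma(2.0, 1.5, size=200)          # observed data
alpha = 5.0                                # concentration parameter (illustrative)
n = len(z)
alpha_n = alpha + n
K = 2000                                   # truncation level for the stick-breaking weights

def sample_Gn(size):
    # G_n: mixture of the base measure G_0 and the empirical measure of the data
    use_base = rng.random(size) < alpha / alpha_n
    base_draws = rng.normal(3.0, 2.0, size)           # G_0: an assumed Normal base measure
    emp_draws = rng.choice(z, size=size, replace=True)
    return np.where(use_base, base_draws, emp_draws)

def loss(zv, t):
    return (zv - t) ** 2

S = 1000
theta_post = np.empty(S)
for s in range(S):
    zeta = sample_Gn(K)
    v = rng.beta(1.0, alpha_n, size=K)     # stick-breaking with concentration alpha_n
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w /= w.sum()                           # renormalize the truncated weights
    theta_post[s] = minimize_scalar(lambda t: np.sum(w * loss(zeta, t))).x
\end{verbatim}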
\subsection{Predictive-to-posterior updating via Bayesian non-parametric modelling}
\label{predpost}
The relationship between $z$ and $\theta$ is encapsulated in a loss in \eqref{u2}. For the predictive-to-posterior approach, we develop a predictive distribution as our best Bayesian estimate for {$F^\ast\left(z\left| \xi^\ast\right.\right)$}. If the function $u$ in \eqref{postloss} is taken to be the Kullback-Leibler divergence,
\[
u\left(\theta, \xi\right)= \int \log\left(\dfrac{f^\ast(z | \xi )}{f(z | \theta )}\right) f^\ast(z | \xi ) \ dz
\]
then the minimizing value of $\theta$ is that for which
\[
\int \log f (z |\theta) \left\{ \int_{\Xi} f^\ast(z | \xi) \pi^\ast(\xi | z_{1:n} ) \ d \xi \right\} \ dz \equiv
\int \log f (z | \theta) \ p^\ast (z|z_{1:n}) \ d z
\]
is maximized, where $p^\ast(z|z_{1:n})$ is the usual Bayesian posterior predictive distribution. Consider a new set of exchangeable data, $z^s_{1},\ldots,z^s_{N}$, and take $u\left(\theta,\xi \right)$ as
\[
\mathbb{E}_{Z^s_{1:N}\mid \xi}\left[\ell\left(Z^s_{1:N},\theta\right)\right]= \sum_{i=1}^{N}\mathbb{E}_{Z^s_{i}\left|\xi\right.}\left[\ell\left(Z^s_{i},\theta\right)\right] = \sum_{i=1}^{N} \int \ell (z^s_{i},\theta ) f^\ast(z_i^s \mid \xi) \ dz_i^s,
\]
that is, the expected loss under the `correct specification' that presumes {$ \xi = \xi^\ast$}. Then
\begin{equation}
\label{pp}
\begin{aligned}
&\arg \min\limits_{t \in \Theta} \int_{\Xi} \sum_{i=1}^{N}\mathbb{E}_{Z^s_{i}\left|\xi\right.}\left[\ell\left(Z^s_{i},t\right)\right] \pi^\ast\left(\xi \left|z_{1:n}\right.\right)d\xi \\
& \qquad \qquad \qquad = \arg \min\limits_{t \in \Theta} \iint \sum_{i=1}^{N}\ell\left(z^s_i,t\right) f^\ast\left(z^s_{1},\ldots,z^s_{N}\left|\xi\right.\right)dz^s \; \pi^\ast\left(\xi \left|z_{1:n}\right.\right) d\xi\\
& \qquad \qquad \qquad =\arg \min\limits_{t \in \Theta} \int \sum_{i=1}^{N}\ell\left(z^s_i,t\right) p^\ast (z^s_{1},\ldots,z^s_{N} | z_{1:n} ) \ dz^s\\
\end{aligned}
\end{equation}
where $p^\ast(z^s_{1},\ldots,z^s_{N} | z_{1:n} )\equiv p^\ast(z^s_{1:N}| z_{1:n} )$ is the $N$-fold posterior predictive distribution. Therefore, the solution to the minimization problem \eqref{pp} is the Bayesian estimator that mimics the calculation in \eqref{u2}, with the posterior predictive distribution replacing $F^\ast(z \mid \xi^\ast)$.
{The integral in \eqref{pp} may not be analytically tractable, but typically may be approximated using Monte Carlo methods. If $z^s=\left(z^s_{1},\ldots,z^s_{N}\right)$ are drawn from $p^\ast(z^s_{1:N}| z_{1:n} )$, then the finite sample approximation to \eqref{pp} is
\begin{equation}\label{MCminimizer}
\theta\left(z^s\right) =\arg\min_{t} \sum_{i=1}^{N}\ell\left(z^{s}_i,t\right).
\end{equation}
As $N\longrightarrow \infty$, under mild regularity conditions, the minimizer from \eqref{MCminimizer} converges to $\theta^\ast$ defined in \eqref{u2}. Following \citet{bernardo1979reference,bernardo2009bayesian} on the representation theorem under sufficiency, we have
\begin{equation*}
\begin{aligned}
p^\ast(z^s_{1},\ldots,z^s_{N} | z_{1:n} ) &= p^\ast (z^s_{1},\ldots,z^s_{N} \mid \hat \xi (z_{1:n} ) ) + \textrm{o}(1)\\
&= p^\ast(z^s_{N} \mid z^s_{1},\ldots,z^s_{N-1},\xi^\ast)\cdots p^\ast( z^s_{1} \mid \xi^\ast )+ \textrm{o}(1) & n\longrightarrow \infty\\
&=f^\ast(z^s_{N} \mid \xi^\ast)\cdots f^\ast( z^s_{1}| \xi^\ast)+ \textrm{o}(1)
\end{aligned}
\end{equation*}
where $\hat \xi (z_{1:n} )$ is the estimate for $\xi^\ast$ via the sufficient statistics using the observed data $z_{1:n}$. Therefore, a draw from the predictive $p^\ast(z^s_{1},\ldots,z^s_{N} | z_{1:n} )$ suitably simulates a collection of $N$ sample points from the true data generating model $F^\ast(z|\xi^\ast)$ as $n\longrightarrow \infty$. The minimizer of \eqref{MCminimizer} will become degenerate at $\theta^\ast$ as both $N\longrightarrow \infty$ and $n\longrightarrow \infty$. }
In the Dirichlet process formulation, the posterior predictive distribution $p^\ast(z^s_{1:N}|z_{1:n} )$ is also a random distribution. Draws from it can be generated directly via stick-breaking \citep{sethuraman1994constructive} or via P\'{o}lya urn schemes \citep{blackwell1973ferguson} that integrate out the posterior distribution, and which allow direct draws of variates from the predictive distribution in a dependent fashion. We simulate $S$ datasets of size $N$, with each dataset $z^{s}=\{z^{s}_{1},\ldots,z^{s}_{N}\},s=1,\ldots S,$ where $z^{s}$ is generated in a sequential fashion: $z^{s}_{1} \sim G_n$, and then for $j=2,\ldots,N$,
\begin{equation}
\label{dp}
z^{s}_{j}\mid z^{s}_{1},\ldots z^{s}_{j-1} \sim \frac{\alpha+n}{\alpha+n+j-1} G_n(\ldotp) + \frac{1}{\alpha+n+j-1}\sum_{k=1}^{j-1}\delta_{z^{s}_{k}}\left(\ldotp\right)\equiv G_{n+j-1}.
\end{equation}
Each of the $S$ data sets generates a sampled variate from the posterior distribution by solving \eqref{MCminimizer} to yield $(\theta^{1},\ldots,\theta^{S})$, which, in the limit as $N \longrightarrow \infty$, is an exact sample from the posterior distribution for $\theta$. {Under the Dirichlet process formulation, the difference between the prior-to-posterior approach via \eqref{wlikeInf} and the predictive-to-posterior approach via \eqref{MCminimizer} is that the latter integrates out the posterior Dirichlet process and uses the collapsed form that relies only on sampling observables via the P\'{o}lya urn.} Algorithm \ref{A1} implements the predictive-to-posterior via the P\'{o}lya urn scheme.
\begin{algorithm}[h]
\SetAlgoLined
\KwData{$z_{1:n}=(z_1,\ldots,z_n)$}
\For{$s$ \textbf{to} $1:S$}{
\For {$j$ \textbf{to} $1:N$}{
Sample $z^{s}_j \sim G_{n+j-1}$\;
Update $G_{n+j} \leftarrow \left\{z^{s}_j,G_{n+j-1}\right\}$\;
}
Obtain a set $z^{s}=\left\{z^{s}_{1},\ldots,z^{s}_{N}\right\}$\;
Compute $\theta^{s}$ by solving the minimization problem in \eqref{MCminimizer};
}
\Return $(\theta^{1},\ldots,\theta^{S})$. \\[6pt]
\caption{\label{A1}Predictive-to-posterior inference based on a P\'{o}lya urn scheme.}
\end{algorithm}
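The P\'{o}lya urn sampling step of Algorithm \ref{A1} can be written compactly; the Python sketch below is illustrative only, again assuming a squared-error loss and a Normal base measure, with the predictive sample size $N$ fixed at a finite value.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
z = rng.gamma(2.0, 1.5, size=200)          # observed data
alpha = 5.0
N, S = 1000, 500                           # predictive sample size and posterior draws

def sample_G0():
    return rng.normal(3.0, 2.0)            # assumed base measure G_0

theta_post = np.empty(S)
for s in range(S):
    pool = list(z)                         # urn: observed data plus previous predictive draws
    zs = np.empty(N)
    for j in range(N):
        m = len(pool)
        if rng.random() < alpha / (alpha + m):
            zs[j] = sample_G0()            # fresh draw from the base measure
        else:
            zs[j] = pool[rng.integers(m)]  # draw uniformly from the urn
        pool.append(zs[j])
    # for squared-error loss the minimizer in the predictive step is the sample mean
    theta_post[s] = zs.mean()
\end{verbatim}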
\subsection{{Loss-based inference via the Gibbs posterior}}
\label{gibbs}
An inference mechanism can be derived directly using a loss function connecting the distribution of $Z$ and $\theta$ based on the definition in \eqref{u2}. The approach derives a probability measure on $\theta$ to define the posterior distribution $\pi \left(\theta \left|z_{1:n}\right.\right)$ given a prior $\pi_0(\theta)$. This formulation \citep{zhang2006a,jiang2008gibbs,bissiri2016general,bissiri2019general} constructs posterior inference without the concept of likelihood, instead relying entirely on a loss specification, and the identification of a function, $\varphi$, that combines the aggregate loss across the observed data and the prior distribution such that $\pi \left(\theta \left|z_{1:n}\right.\right) = \varphi \left\{\ell\left(z_{1:n},\theta\right),\pi_0(\theta)\right\}$. \cite{zhang2006a} defined the objective function for loss-based inference with a probability measure $\mu$ on $\Theta$ as
\begin{align}
\arg\inf\limits_{\mu \in \mathcal{M}_{\pi_0}} \left\{ \int_{\Theta} \ell \left(z,\theta\right)\mu\left(d \theta\right) + \right. & \frac{\mathcal{K}\left(\mu,\pi_0\right)}{\eta} \left. \vphantom{\int} \right\} \nonumber \\[6pt]
& =\arg\inf\limits_{\mu \in \mathcal{M}_{\pi_0}} \int \log \left[\frac{ \mu\left(\theta\right)}{\exp\left(-\eta\ell\left(z,\theta\right)\right)\pi_0\left(\theta\right)}\right]\mu\left(d \theta\right) \label{bobj}
\end{align}
where $\mathcal{M}_v$ is the space which is absolutely continuous with respect to $v$, $\ell$ is some measurable function, such that $ \ell(\cdot,z): \Theta \longrightarrow \mathbb{R}$ is measurable with respect to $\mu$ for every $z$ in the support, and $\mathcal{K}\left(\mu,\pi_0\right)$ is the KL divergence between two probability measures $\mu$ and $\pi_0$. Parameter $\eta$, which has to be pre-specified, controls the trade-off between the prior and the loss term. The solution to the optimization problem in \eqref{bobj} is the so-called \textit{Gibbs posterior}
\begin{equation}
\label{gibbssol}
\pi (\theta | z_{1:n} )= \frac{\exp \left(-\eta \ell \left(z_{1:n},\theta\right) \right) \times \pi_0\left(\theta\right)}{\displaystyle\int_{\Theta} \exp \left(-\eta\ell \left(z_{1:n},t\right) \right) \times \pi_0\left(dt\right)}
\end{equation}
defined if and only if the denominator is finite. The posterior distribution in \eqref{gibbssol} gives a formal Bayesian procedure to update prior beliefs on $\theta$ to posterior beliefs based on the loss function and decision-theoretic arguments.
The term $\exp \left(-\eta\ell \left(z_{1:n},\theta\right) \right)$ replaces the `likelihood' in conventional Bayesian updating. This term does not necessarily correspond to a well-defined likelihood as it does not result from a probabilistic specification for the observable quantities. In addition, to be considered a likelihood for conditionally independent observables, we must essentially assume that the parameter $\theta$ (rather than $\xi$) induces conditional independence, even though $\theta$ does not completely specify the data generating distribution. Finally, unlike a true conditional probability model, the un-normalized quantity $\exp \left(-\eta\ell \left(z,\theta\right) \right)$ does not facilitate probabilistic prediction for future data, as it is not presumed integrable with respect to $z$.
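To make the construction concrete, the following Python sketch draws from a Gibbs posterior by random-walk Metropolis; it is an illustrative implementation only, assuming a squared-error loss, a diffuse Normal prior, and a fixed value of $\eta$ (the choice of $\eta$ is discussed in Section \ref{sec:valid}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
z = rng.gamma(2.0, 1.5, size=200)
eta = 1.0                                    # scaling parameter, fixed here

def log_gibbs_post(theta):
    # log of the un-normalized Gibbs posterior: -eta * loss + log prior
    loss_total = np.sum((z - theta) ** 2)            # squared-error loss
    log_prior = -0.5 * (theta / 100.0) ** 2          # diffuse Normal(0, 100^2) prior
    return -eta * loss_total + log_prior

S, step = 5000, 0.05
theta = z.mean()
lp = log_gibbs_post(theta)
draws = np.empty(S)
for s in range(S):                           # random-walk Metropolis
    prop = theta + step * rng.normal()
    lp_prop = log_gibbs_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws[s] = theta
\end{verbatim}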
\section{Assessing the validity of non-standard posterior inference}
\label{sec:valid}
It is important to verify that a method of computing a posterior distribution yields valid probabilistic statements. Consistent estimation of $\theta^\ast$ is a minimal requirement for any statistical procedure, but any posterior inference calculation method should also exhibit appropriate performance in a finite sample. For example, the probability content of posterior credible intervals should be at a nominated level $1-\kappa$; if interval $\mathcal{C}_\kappa$ is a designated $1-\kappa$ probability interval, a putative `posterior' density, $\widetilde \pi(\theta|z_{1:n})$, should have the property $\mathbb{E}_{F^\ast} \left[ \mathbb{E}_{\widetilde \pi} \left[ \mathbb{1}_\theta(\mathcal{C}_\kappa)|Z_1,\ldots,Z_n \right] \right] = 1-\kappa$, if $Z_1,\ldots,Z_n$ are drawn from $F^\ast$.
We adopt the approach introduced in \cite{monahan1992proper}, which addressed the notion of proper Bayesian inference when replacing the parametric likelihood with an alternative likelihood function (say, for example, a marginal or conditional likelihood). `Posterior' density, $\widetilde \pi(\theta|z_{1:n})$, computed by a non-standard method should still make probability statements consistent with the Bayes rule. For example, the posterior coverage set, $\mathcal{C}_\kappa(z)$, resulting from Algorithm \ref{A1} should achieve nominal coverage under \textbf{any} data generating joint measure of $Z$ and $\xi$, that is, $P_{\widetilde \pi} (\xi \in \mathcal{C}_\kappa(z))$ should have expectation $1-\kappa$ for data generated under the measure $\pi_0(\xi) f (z|\xi)$ for every absolutely continuous prior, $\pi_0(\ldotp)$. Thus, if we generate $\xi^\ast \sim \pi_0(\ldotp)$, and data, $z_{1:n}$, from $f ( \ldotp |\xi^\ast)$, and compute the posterior $\widetilde \pi(\theta|z_{1:n})$, the resulting coverage should be around the nominal level. We may assess this using the probability integral-transformed random variable
\begin{equation}\label{Hformula}
H = \int_{-\infty} ^{\theta^\ast} \widetilde \pi(t|z_{1:n}) \ d t
\end{equation}
where $\theta^\ast$ is the implied (loss-minimizing) parameter of interest corresponding to the simulated $\xi^\ast$. If $\widetilde \pi(\theta|z_{1:n})$ is a valid posterior, it follows that $H \sim Uniform(0,1)$.
The Monahan and Boos method is derived from the fully probabilistic specification encapsulated in the Bayes theorem. That is, a claimed posterior calculation remains valid in terms of posterior coverage if and only if it arises as a conditional distribution derived from a joint probability model for the parameter and some function of the observables, given the observed value of those observables. In any assessment, the particular choice of the data generating model used to test posterior validity should fit the context in which the methodology will be applied. In our case, in the context of our later motivating examples, we implement Algorithm \ref{AMB} using a parametric data generating approach; we simulate data from the implied conditional outcome model based on a diffuse prior and the true conditional distribution for the observables.
\subsection{Verifying the predictive-to-posterior calculation method}
We investigate the validity of the predictive-to-posterior approach of Section \ref{predpost} via a simulation study using the Monahan and Boos approach. Algorithm \ref{AMB} details the computational strategy to implement the methodology.
\begin{algorithm}[h]
\caption{\label{AMB} Algorithm to implement \cite{monahan1992proper} for assessing coverage validity.}
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Data generating model $f^\ast(z|\xi)$ and prior $\pi_0(\xi)$}
\For{$m=1,\ldots,M$}{
{\begin{itemize}[itemsep=0pt]
\item Simulate $\xi^m \sim \pi_0$;
\item Simulate data $z_{1:n}^m \sim f^\ast(z|\xi^m)$;
\item Compute $\theta^{\ast m}$ from
\begin{equation}
\label{u2alg}
\theta^{\ast m} = \arg\min\limits_{t \in \Theta}\int \ell\left(z,t\right) dF^\ast\left(z|\xi^m \right);
\end{equation}
\item Compute the proposed posterior $\widetilde \pi(\theta|z_{1:n}^m)$;
\item Produce a posterior sample of size $N$, $\theta_{1}^m,\ldots,\theta_{N}^m$ from $\widetilde \pi(\theta|z_{1:n}^m)$;
\item Record
\[
H^m = \frac{1}{N} \sum_{i =1}^N \mathbb{1}(\theta_i^m \leq \theta^{\ast m});
\]
\end{itemize}}
}
\Return Test of uniformity of $(H^1,\ldots,H^M)$. \\[6pt]
\end{algorithm}
We first generate $\xi^{m}$ ($m=1,\ldots,M$) from a prior distribution $\pi_0(\xi)$, and for each $\xi^m$ we generate data $z_{1:n}^m$ from a parametric model $f(z|\xi^m)$. Based on these data, we compute $H^{m}$ via \eqref{Hformula} using the posterior sample for $\theta$ obtained via \eqref{MCminimizer} from $z_{1:n}^{m}$. The collection $\{H^{1},\ldots,H^{M}\}$ is then used to assess uniformity.
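Before turning to the causal example, the following Python sketch illustrates the mechanics of Algorithm \ref{AMB} in a deliberately simple setting (a Normal location model with squared-error loss and a Bayesian bootstrap posterior); all modelling choices in the sketch are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(4)
n, M, S = 100, 200, 1000

H = np.empty(M)
for m in range(M):
    xi = rng.normal(0.0, 10.0)                 # xi^m from a diffuse prior
    zm = rng.normal(xi, 1.0, size=n)           # data from f*(z | xi^m)
    theta_star = xi                            # minimizer of the expected squared-error loss
    post = np.empty(S)
    for s in range(S):                         # proposed posterior: Bayesian bootstrap
        w = rng.dirichlet(np.ones(n))
        post[s] = np.sum(w * zm)               # weighted minimizer of the squared-error loss
    H[m] = np.mean(post <= theta_star)         # probability integral transform

print(kstest(H, 'uniform'))                    # test uniformity of H^1, ..., H^M
\end{verbatim}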
We illustrate the method using data simulated from the set-up of Example 1 in Section \ref{Sec:sim}, which illustrates an application of the method of causal inference known as propensity score regression. In this example, we generate the values of the coefficients in the outcome model from the prior distribution, that is, from a Normal distribution with mean equal to the values used in that example and variance $10,000$, with the outcome data generated from a specific regression model. The loss function used to compute the posterior for the parameter of interest is specified as squared loss, based on a mis-specified mean model that still allows consistent estimation of this parameter. The procedure is repeated $1,000$ times with $n=100$ and $n=10,000$. For each dataset, we apply the proposed method for various values of $\alpha$, and produce a posterior sample of the ATE based on a correctly specified propensity score model and a mis-specified outcome model that uses only the treatment variable and the estimated propensity score as covariates.
Table \ref{kstest} displays the $p$-values of the Kolmogorov–Smirnov test for uniformity of the simulated $H$ values for the causal parameter. When $n=100$, the $p$-values suggest proper posterior inference for $\alpha=0,1,10$, but the procedure fails the uniformity test when $\alpha=100$. As $\alpha$ becomes larger, the impact of model mis-specification increases, since the base measure for predictive inference is centered at the fitted values of the mis-specified outcome model. This inflates the posterior variance to account for mis-specification, and this is confirmed in the density plots in Figure \ref{MB}. The left panel of Figure \ref{MB} shows the density plots of $H$ with $n=100$; we observe higher density near 0 and 1 when $\alpha=100$, indicating that the posterior variance is larger than for smaller values of $\alpha$. However, when $n=10000$, the impact of model mis-specification from the base measure becomes negligible, and the $p$-values suggest coverage-valid posterior inference for all values of $\alpha$, which is confirmed by the density plots in the right panel of Figure \ref{MB}. In the Supplement, we show the density plots when both the propensity score and the outcome model are mis-specified, demonstrating the impact of model mis-specification on the assessment of validity of non-standard posterior inference.
Note that, here, the inference model is mis-specified compared to the data generating model, and yet the Monahan and Boos approach allows us to verify that the posterior credible intervals are valid in coverage terms. Specifically, the fitted PS regression model -- which is mis-specified by necessity -- still returns valid posterior coverage, even though its associated posterior distribution does not match the posterior distribution that would be obtained under correct specification of an outcome regression model (the posterior under correct specification would have smaller variance).
\begin{table}
\caption{\label{kstest} $P$-values of Kolmogorov–Smirnov test for uniformity of $H$ via Bayesian predictive inference with various $\alpha$ values.}
\centering
{
\begin{tabular*}{35pc}{@{\hskip5pt}@{\extracolsep{\fill}}c@{}c@{}c@{}c@{}c@{\hskip5pt}}
\hline
$\alpha$ & 0 & 1 & 10 & 100 \\
\hline
$n=100$ & 0.8632&0.4131& 0.0587 & 0.0000 \\
$n=10000$ & 0.3998 & 0.6121& 0.4595&0.2917\\
\hline
\end{tabular*}
}
\end{table}
\begin{figure}
\caption{ Density plots of $H$ with $n=100$ (left) and $n=10000$ (right). The solid, dashed, dotted and dotted dash lines represent results from $\alpha=0,1,10,100$ respectively.}
\label{MB}
\end{figure}
\subsection{Calibrating the scaling parameter in the Gibbs posterior}
In the Gibbs posterior approach, the scaling parameter $\eta$ has to be specified before performing inference. There are several proposals based on different criteria, such as expected predictive loss \citep{grunwald2017inconsistency} and the posterior coverage rate \citep{syring2019calibrating}. Data-driven approaches for estimating $\eta$ have been studied extensively, since under mis-specification the posterior variance is not guaranteed to match the sandwich variance used in frequentist inference \citep{chernozhukov2003mcmc}. \cite{syring2019calibrating} introduced a way to find the desired scaling using the frequentist bootstrap: at each iteration, if the empirical coverage over bootstrap samples is not at the nominal level, an adjustment is made to $\eta$, and the process is repeated until the coverage meets the nominal level. The procedure is described in Algorithm \ref{A1a}.
\begin{algorithm}[t]
\caption{\label{A1a} Algorithm to learn the Gibbs posterior scaling parameter $\eta$ \citep{syring2019calibrating}.}
\SetAlgoLined
\KwData{$z_{1:n}=(z_1,\ldots,z_n)$}
{Given a tolerance $\epsilon< \kappa$ and $CR=1$, with an initial guess $\eta_0$ and $i=1$.}\\
\While{$\left|CR-(1- \kappa)\right|>\epsilon$}{
\For {$b=1,\ldots,B$}{
\begin{itemize}[itemsep=0pt,topsep=5pt]
\item Generate a bootstrap sample from the original sample, $z_{1:n}^{b}$.
\item Get an empirical estimate for the parameter of interest, $\theta^b$ based on $z_{1:n}^{b}$.
\item Obtain the posterior sample from $\pi(\theta|z_{1:n}^{b})$ based on $\eta_{i-1}$.
\end{itemize}
}
Calculate the bootstrap mean $\bar{\theta}=B^{-1} \sum_{b=1}^{B}\theta^b$, and obtain the empirical coverage rate for $\bar{\theta}$,
$$CR= \frac{1}{B}\sum_{b=1}^{B} \mathbbm{1}(c^b_{\kappa/2}<\bar{\theta}<c^b_{1-\kappa/2})$$
where $c^b_{\kappa}$ satisfies $ \kappa=\pi(\theta<c^b_{\kappa}|z_{1:n}^{b})$.\\
Update $\eta$ as
\[
\eta_i= \eta_{i-1} + i^{-0.51} \left[CR -(1- \kappa) \right],
\]
and $i=i+1$.
}
\Return $\left(\eta_1,\eta_2,\ldots,\eta_i\right)$.\\[6pt]
\end{algorithm}
In Sections \ref{priorpost} and \ref{predpost}, we generalized the previous Bayesian bootstrap approach, specifying a Dirichlet process posterior and predictive distribution, without requiring any scaling. Using this predictive approach, we propose a computationally-efficient way to calibrate $\eta$ so that the Gibbs posterior achieves nominal coverage.
The approach in Algorithm \ref{A1a} is computationally intensive, and MCMC is required for each bootstrap sample. Obtaining a posterior sample from Algorithms \ref{A0} and \ref{A1} is computationally less expensive, yet yields the correct marginal nominal coverage rate. Therefore we can utilize this posterior sample from predictive inference to adjust the posterior variance of the Gibbs posterior to achieve valid uncertainty quantification. Furthermore, the approach of \citet{syring2019calibrating} is based on coverage adequacy. The method of \cite{monahan1992proper} is similarly used to assess coverage validity, and offers an alternative method for calibrating the Gibbs posterior by adjusting the scaling parameter $\eta$ until the uniformity of the $H$ statistic is adequate. The process may be implemented efficiently using resampling ideas: a posterior sample obtained for $\eta=1$ (say) may be converted to an approximate sample for any other value of $\eta$, for example by sampling-importance resampling.
As demonstrated in \cite{chernozhukov2003mcmc}, under certain regularity conditions, the Gibbs posterior will be asymptotically normal with covariance matrix $\eta^{-1}(n\mathcal{J})^{-1}$ where $\mathcal{J}=- \mathbb{E}[\dot{\mathbf{U}}(\theta^\ast)]$, $\mathbf{U}\left(\theta\right)={\partial \ell\left(z,\theta\right)}\big/{\partial \theta}$ and $\dot{\mathbf{U}}(\theta) = {\partial \mathbf{U}(\theta)}\big/{\partial \theta^\top}$. Therefore, asymptotically, the Gibbs posterior concentrates on a $\sqrt{n}$-ball centered at $\theta^\ast$ with covariance matrix $\eta^{-1}(n\mathcal{J})^{-1}$. In order to achieve a coverage rate similar to that of predictive inference, we first obtain the sample posterior variance $V$ from the sample generated by Algorithm \ref{A0} or \ref{A1}, then find the proportionality constant $c$ such that $cV\approx (n\mathcal{J})^{-1}$; setting $\eta \approx c$ then achieves similar uncertainty quantification. In practice, we do not know $(n\mathcal{J})^{-1}$, but we can assess this value empirically. Algorithm \ref{A2} describes this procedure in detail.
\begin{algorithm}[h]
\caption{\label{A2} Algorithm to learn the Gibbs posterior scaling parameter $\eta$ by Bayesian predictive inference.}
\SetAlgoLined
\KwData{$z_{1:n}=(z_1,\ldots,z_n)$}
{\begin{itemize}[itemsep=0pt]
\item Implement Algorithm \ref{A1} to obtain the posterior sample $(\theta^{1},\ldots,\theta^{S})$.
\item Calculate the empirical posterior variance (or variance-covariance matrix) $V$.
\item Obtain the posterior sample from $\pi(\theta|z_{1:n})$ and calculate the posterior variance (or variance-covariance matrix) $\hat \Sigma$ based on an initial guess $\eta_{0}$.
\item Modify $\eta$ based on $\hat \eta V \approx \eta_{0} \hat \Sigma$.
\end{itemize}
}
\Return $\hat \eta $. \\[6pt]
\end{algorithm}
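A minimal Python sketch of the variance-matching step in Algorithm \ref{A2} follows; it assumes a scalar mean parameter under squared-error loss with a flat prior, for which the Gibbs posterior variance at $\eta_0$ is available in closed form as $1/(2\eta_0 n)$ (in general it would be estimated from an MCMC sample), and uses the Bayesian bootstrap for the predictive-stage variance.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
z = rng.gamma(2.0, 1.5, size=200)
n = len(z)

# Step 1: posterior variance V from the predictive (Bayesian bootstrap) approach
S = 2000
bb = np.array([np.sum(rng.dirichlet(np.ones(n)) * z) for _ in range(S)])
V = bb.var()

# Step 2: Gibbs posterior variance under an initial eta_0; here it is 1 / (2 * eta_0 * n)
eta0 = 1.0
Sigma0 = 1.0 / (2.0 * eta0 * n)

# Step 3: match the scales, eta_hat * V = eta_0 * Sigma0
eta_hat = eta0 * Sigma0 / V
print(eta_hat)
\end{verbatim}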
\subsection{Calibration examples}
To verify the performance of Algorithm \ref{A2}, we implemented two examples from \cite{syring2019calibrating} that study problems concerning quantile regression and linear regression. In the quantile regression example, $\theta=(\theta_0,\theta_1)=(2,1)$ is the coefficient vector, and the data are generated from $Y\sim \mathcal{N}(\theta_0 +\theta_1 X,1)$ and $X+2 \sim \chi^2(4)$. The loss function is the check loss corresponding to a mis-specified asymmetric Laplace likelihood, i.e.,
\[
\ell_n((y_i,x_i)_{i=1}^n,\theta) = \frac{1}{n}\sum_{i=1}^{n} \left|(y_i-x_i^\top\theta)(0.5 - \mathbbm{1}_{(-\infty, x_i^\top\theta)}(y_i))\right|.
\]
In the linear regression example, $\beta=(\beta_0,\beta_1,\beta_2,\beta_3)= (0,1,2,-1)$ represents coefficients in a linear predictor. We use the square loss in this case.
Table \ref{comp} shows the results from the two examples. In the quantile regression case, where there are only two parameters, all algorithms produce similar results in terms of coverage probability and interval length, and all of the marginal coverage rates are close to the nominal level; coverage rates that are slightly off in Algorithm \ref{A1} are corrected to the nominal level by the calibration. In the linear regression case, with four parameters, the marginal coverage probability is above the nominal level for Algorithm \ref{A1a}, as that algorithm is designed to focus primarily on the credible set. Algorithms \ref{A1} and \ref{A2} give similar results, and all marginal coverage probabilities are close to the nominal level. Even though Algorithm \ref{A2} has a much lower computational burden, it still provides valid marginal uncertainty quantification. Therefore, we use Algorithm \ref{A2} to calibrate the scaling parameter in the later examples.
\begin{table}
\caption{\label{comp} Comparison of 95\% posterior credible intervals from Algorithm \ref{A1}, and Gibbs posteriors calibrated via Algorithms \ref{A1a} and \ref{A2}, based on 500 simulated datasets with $n=200$. }
\centering
\fbox{\begin{tabular}{l*{1}{l}*{6}{c}}
&& \multicolumn{3}{c}{Coverage probability $\times$ 100}& \multicolumn{3}{c}{Average length $\times$ 100} \\ \cline{3-8}
& & Algorithm \ref{A1} & Algorithm \ref{A1a} & Algorithm \ref{A2} & Algorithm \ref{A1} & Algorithm \ref{A1a} & Algorithm \ref{A2} \\
\hline
& \multicolumn{7}{l}{Quantile regression example in Section 4 from \cite{syring2019calibrating}} \\
&&&&&&&\\
& $\theta_0$ & 96.8& 96.4& 96.2 & 70.6& 71.4& 70.1\\
& $\theta_1$ & 96.4 & 97.4 & 96.0& 36.4 & 36.8& 36.0 \\
\hline
& \multicolumn{7}{l}{Linear regression example in the Supplementary material from \cite{syring2019calibrating}} \\
&&&&&&&\\
&$\beta_0$& 94.6 & 99.0 & 97.8 & 46.1 & 78.2 & 54.2 \\
&$\beta_1$& 93.8 & 98.0 & 93.4 & 64.1 & 93.6 & 62.5 \\
&$\beta_2$& 93.6 & 99.2 & 93.6 & 63.9 & 100.3 & 62.4 \\
&$\beta_3$& 93.0 & 97.4 & 91.2 & 58.0 & 82.8 & 54.1 \\
\end{tabular}}
\end{table}
\section{Asymptotic results}
\label{sec:AP}
In this section, we establish the large sample properties of the proposed loss-based predictive-to-posterior calculation, under possible model mis-specification. The required assumptions (which relate to identifiability and regularity of the loss function) are classical, and proofs are included in the Supplement. First, to show consistency, we consider the limiting case as $n \longrightarrow \infty$; this requires $\hat \theta$ in \eqref{postloss} to be degenerate at $\theta^\ast$ when all the information is available.
\begin{theorem}
Suppose the prior $\pi_0(\theta)$ has full Hellinger support, and $\ell\left(z,\theta\right)$ is continuous $\forall\theta \in \Theta$ with
\[
\int \log\left[1+\left|\ell\left(z,\theta\right) \right|\right]dG_0\left(z\right) < \infty.\] Let $\theta^s_1 = \theta(\varpi^s)$ be the unique solution to
\[
\min\limits_{\theta \in \Theta} \sum_{k=1}^{\infty}\varpi^s_k \ell\left(\zeta^{s}_k,\theta\right)
\]
for any given $\varpi^s$ and $\zeta^s$ generated from Algorithm \ref{A0}. Let $\theta^s_2=\theta^s(z^s)$ be the unique solution to
\[
\min\limits_{\theta \in \Theta} \sum_{i=1}^{N} \ell\left(z^{s}_i,\theta\right)
\]
for integer $N$, and any given $z^s$ generated from Algorithm \ref{A1}. Then
\[
\sum_{k=1}^{\infty}\varpi^s_k \ell (\zeta^s_k, \theta^s_1 ) \longrightarrow \min\limits_{\theta \in \Theta}\int \ell\left(z,\theta\right) dF^\ast\left(z\right),\;\;\;\;\;
\sum_{i=1}^{N}\ell (z^s_i, \theta^s_2 ) \longrightarrow \min\limits_{\theta \in \Theta}\int \ell\left(z,\theta\right) dF^\ast\left(z\right)
\]
and $\theta_1^s \longrightarrow\theta^\ast$, $\theta_2^s \longrightarrow \theta^\ast$ almost surely as $n,N \longrightarrow \infty$.
\end{theorem}
We can also consider the limiting behavior of the estimator in terms of the probability law, specifically, that it exhibits posterior asymptotic normality. Under the empirical measure, the estimating equation becomes $\sum_{i=1}^n \mathbf{U}_i(\theta) = \mathbf{0}$, and under some regularity conditions the frequentist solution $\hat \theta_n$ has the property that
\[
\sqrt{n}(\hat \theta_n - \theta^\ast) \xrightarrow{d} Normal_p(\mathbf{0},\mathbf{V})
\]
where $\mathbf{V} = \mathcal{J}(\theta^\ast)^{-1} \mathcal{I}(\theta^\ast) \mathcal{J}(\theta^\ast)^{-\top}$
with $\mathcal{I}(\theta^\ast) = \mathbb{E}[ \mathbf{U}(\theta^\ast) \mathbf{U}(\theta^\ast)^\top]$ and $\mathcal{J}(\theta^\ast) = -\mathbb{E}[\dot{\mathbf{U}}(\theta^\ast)]$, both $(p \times p)$ matrices, and $\dot{\mathbf{U}}(\theta^\ast) = {\partial \mathbf{U}(\theta)}\big/{\partial \theta^\top} \big|_{\theta = \theta^\ast}$. The Bayesian analogy is the Bernstein-von Mises theorem, which establishes the limiting behaviour of the posterior distribution. We state the result in terms of the standardized parameter $\vartheta_{n,N}^s = \sqrt{N}( \theta^s -\hat \theta_n)$, where $ \theta^s $ is a draw from Algorithm \ref{A0} or \ref{A1}, and $\hat \theta_n$ is the frequentist estimator.
{\begin{theorem}
\label{thm2}
Under Assumptions 1-6 in the Supplement, the probability that the posterior for $\vartheta_{n,N}^s$ assigns to an arbitrary set $A\subseteq\R^p$ converges to the mass given by a Normal measure. Specifically, if $\mathbf{Z} \sim Normal_p(\mathbf{0},\mathbf{V})$ is an arbitrary random variable independent from all other random variables, then $\pi (\vartheta_{n,N}^s \in A\left|z_{1:n}\right. ) \to P(\mathbf{Z}\in A)$ as $n,N\to \infty$.
\end{theorem}
\section{Doubly robust causal inference via propensity score regression}
\label{bayecausal}
We now focus on the motivating example, which is a Bayesian representation for the DR regression approach to causal estimation. In a causal inference setting, for the $i$th unit of observation, $Y_i$ denotes a response, $d_i$ the treatment (or exposure) received, and $x_i$ a vector of pre-treatment covariates or confounder variables. Suppose the {data generating structural} model is
\begin{equation}
\label{true}
Y_i= \psi^\ast d_i + h_0(x_i) + \epsilon_i,\;\; \forall i=1,2,\ldots,n
\end{equation}
where $\mathbb{E}\left[\epsilon_i\left|x_i,d_i\right.\right] = 0$ and $\text{Var}\left[\epsilon_i\left|x_i,d_i\right.\right] = \sigma^2<\infty$, with $\epsilon_1,\ldots, \epsilon_n$ independent, and where $h_0(x_i)$ is an unknown real-valued function of the vector $x_i$. In this setting, $\psi^\ast$ is the ATE.
\subsection{Frequentist inference in propensity score regression}
A typical approach to causal adjustment uses the PS. With the PS estimated either via maximum likelihood or via a fully Bayesian procedure summarized by the posterior mean, the outcome is modelled by adding the estimated PS \citep{robins1992estimating}, denoted $e\left(x_i;\hat\gamma\right)=\mathbb{P}\left(D_i=1\left|x_i;\hat \gamma\right.\right)$, where $\gamma$ is estimated via some form of binary regression, or via more flexible prediction approaches. Assume we specify the augmented model as
\begin{equation}
\label{or}
Y_i= \psi d_i + h_1(x_i) + \phi e\left(x_i;\hat\gamma\right)+\epsilon_i,\;\; \forall i=1,2,\ldots,n
\end{equation}
and fit the model using ordinary least squares. The model in \eqref{or} leads to doubly robust inference. If $h_1(x)=h_0(x)$, so that \eqref{or} matches \eqref{true} and the outcome model is correctly specified, then the estimator of the true ATE is consistent irrespective of whether the PS model is correctly specified, because the estimator $\widehat \phi$ converges to zero as $n \longrightarrow \infty$; on the other hand, if the PS is correctly modelled, conditioning on it blocks the confounding path from $D$ to $Y$ via $X$, so that $X \perp \!\!\! \perp D \left|\;e(X)\right.$, and \eqref{or} still yields a consistent estimator of $\psi^\ast$, even if $h_1(x)$ is incorrectly specified. We proceed by assuming that the functional forms of $h_0(x_i,\beta_0)$ and $h_1(x_i, \beta)$ are parametric with associated parameter vectors $\beta_0$ and $\beta$. For example, linear regression assumes that $h_0(x_i,\beta_0)=x_i^\top\beta_0$ and $h_1(x_i,\beta)=x_i^\top\beta$.
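As an illustration of this two-step fit, the following Python sketch estimates a logistic PS and then fits the augmented outcome model \eqref{or} by ordinary least squares; it assumes numeric arrays \texttt{y}, \texttt{d}, \texttt{X} and uses the \texttt{statsmodels} package, with all function and variable names illustrative rather than prescriptive.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def fit_ps_regression(y, d, X):
    # Step 1: logistic propensity score model e(x; gamma_hat)
    ps_fit = sm.Logit(d, sm.add_constant(X)).fit(disp=0)
    e_hat = ps_fit.predict(sm.add_constant(X))
    # Step 2: OLS of y on (1, d, x, e_hat), i.e. the augmented model
    design = sm.add_constant(np.column_stack([d, X, e_hat]))
    or_fit = sm.OLS(y, design).fit()
    psi_hat = or_fit.params[1]   # coefficient on the treatment indicator
    return psi_hat, e_hat, or_fit
\end{verbatim}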
Let $z_i = (y_i, d_i, x_i), i = 1, \ldots,n$, be the observed data, and $Z_i = (Y_i, d_i, x_i), i = 1, \ldots,n$ be the random variable representing the random component in the conditional model. Estimation of $\theta=\left(\psi,\beta,\phi\right)$ in the conditional mean model \eqref{or} can proceed by defining a loss function which is the sum of squares of the residual error, i.e.,
\begin{equation}
\label{u1}
\ell\left(z_{1:n},\theta\right)=\sum_{i=1}^{n} \left[y_i- \left(\psi d_i + h_1(x_i,\beta) + \phi e\left(x_i;\hat\gamma\right)\right)\right]^2.
\end{equation}
The method does not make any distributional assumption about $\epsilon_i$, and yields the solution
\[
\hat \psi = \frac{\sum\limits_{i=1}^{n}\left(d_i - e\left(x_i;\hat \gamma\right)\right) (y_i- h_1(x_i,\hat\beta)-\hat\phi\, e\left(x_i;\hat \gamma\right)) }{\sum\limits_{i=1}^{n}\left(d_i - e\left(x_i;\hat \gamma\right)\right) d_i}.
\]
This is the feasible G-estimator, proposed in \cite{robins1992estimating}, which is consistent for $\psi^\ast$ and robust to mis-specification. A key aspect of this frequentist approach is the use of plug-in estimation for parameter $\gamma$; it can be demonstrated that this approach provides locally efficient estimation of $\psi$ under the assumption that the PS model is correctly specified at least up to a finite dimensional parameter that may itself be estimated consistently at the usual parametric rate. Non-parametric estimation of the propensity model can also preserve consistent and efficient estimation of $\psi$, provided the rate of convergence of the non-parametric estimator is fast enough, and this can be achieved by using many standard flexible or machine learning approaches.
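The closed-form expression above translates directly into code; the sketch below assumes that the fitted values $h_1(x_i,\hat\beta)$, $e(x_i;\hat\gamma)$ and $\hat\phi$ have already been obtained (for example from the OLS fit of \eqref{or}), and the variable names are illustrative.
\begin{verbatim}
import numpy as np

def g_estimator(y, d, e_hat, h1_hat, phi_hat):
    # Feasible G-estimator of psi computed from the displayed closed form
    resid = y - h1_hat - phi_hat * e_hat
    num = np.sum((d - e_hat) * resid)
    den = np.sum((d - e_hat) * d)
    return num / den
\end{verbatim}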
\subsection{Bayesian inference in propensity score regression}
The plug-in approach can also be justified in a fully Bayesian framework under the loss-based formulation. \cite{mccandless2010cutting} and \cite{jacob2017better} demonstrate how to block the flow of the information from the PS to the outcome regression when implementing MCMC in a joint model of the treatment and outcome. However, this approach induces finite sample bias and is not ideal for small sample inference. A two-step approach, which assumes a complete separation in inference between the PS and outcome models and uses a plug-in estimate of $\gamma$ in \eqref{or} also yields a valid Bayesian solution; see \cite{stephens2021bayesian} for further discussion. This approach has been shown to provide superior estimation, and we adopt it in the following analysis. The loss function used in \eqref{wlike} or \eqref{MCminimizer} should incorporate components for parameters in both the outcome model and the PS model, say
\[
\ell(z,(\theta,\gamma)) = \ell_1(z,(\theta,\hat \gamma)) + \ell_2(z, \gamma)
\]
where $\hat \gamma$ is the minimizer of the $\ell_2$ component alone, with optimization over both sets of parameters carried out for each sampled realization from the non-parametric posterior distribution.
We deploy the Dirichlet process formulation from Section \ref{predpost}. In the outcome regression setting, we assume that the predictive resampling is implemented through residuals arising from the outcome model \citep{wade2013bayesian,quintana2020dependent}. In this case, we first draw each pair $\left\{{x}^{s}_i,d^{s}_i\right\}$, $i=1,\ldots,N$, from the empirical distribution as the DP with $\alpha=0$, and then obtain the fitted values $e\left({x}^{s}_i,\hat \gamma^{s}\right)$ from a propensity score model based on logistic regression, refitted to the newly sampled $\left\{{x}^{s}_i,d^{s}_i\right\}$ data set. Then we simulate $y_i^s$ from a DP model with the conditional base measure $G_0 \equiv \mathcal{N}\left(\psi d^{s}_i+ h_1\left({x}^{s}_i,\beta\right)+ \phi e\left({x}^{s}_i,\hat \gamma^{s} \right),1\right)$, where $\theta= \left(\psi,\beta, \phi\right)$ is generated from its prior distribution.
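One reading of this resampling scheme, reduced to a single posterior draw, is sketched below in Python; the prior sampler \texttt{prior\_draw} is a hypothetical user-supplied function, and the exact way the observed data and the concentration parameter $\alpha$ enter the final minimization follow the algorithms in the main text and are only indicated schematically here.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def one_posterior_draw(y, d, X, N, rng, prior_draw):
    # 1. draw (x, d) pairs from the empirical distribution (DP with alpha = 0)
    idx = rng.integers(0, len(y), size=N)
    Xs, ds = X[idx], d[idx]
    # 2. refit the logistic PS model on the resampled pairs
    ps_fit = sm.Logit(ds, sm.add_constant(Xs)).fit(disp=0)
    e_s = ps_fit.predict(sm.add_constant(Xs))
    # 3. simulate outcomes from the conditional base measure, with
    #    (psi, beta, phi) drawn from the prior via the supplied sampler
    psi0, beta0, phi0 = prior_draw(rng)
    ys = rng.normal(psi0 * ds + Xs @ beta0 + phi0 * e_s, 1.0)
    # 4. minimise the squared-error loss over the imputed data: one theta draw
    design = sm.add_constant(np.column_stack([ds, Xs, e_s]))
    return sm.OLS(ys, design).fit().params
\end{verbatim}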
\begin{corollary}
\label{thm3}
The posterior distribution for the causal parameter, $\psi$ in \eqref{or}, becomes degenerate at $\psi^\ast$ as $n \longrightarrow \infty$ if either the outcome model or PS model is correctly specified. In addition, \textit{a posteriori}, $\theta \xrightarrow{d} \mathcal{N}\left(\theta^\ast,V_{\theta^\ast}\right)$ where $V_{\theta^\ast} = \mathcal{J} (\theta^\ast)^{-1}\mathcal{I}(\theta^\ast)\mathcal{J} (\theta^\ast)^{-\top}$.
\end{corollary}
\begin{proof}
When we have a mis-specified PS model and a correctly specified OR model, $\theta^\ast=\left(\psi^\ast,\beta^\ast,0\right)$, and the result follows by applying Theorem \ref{thm2}. When the outcome model is mis-specified but the PS model is correctly specified, $X \perp \!\!\! \perp D \left| e\left(x; \gamma^\ast\right) \right.$ and $\hat \gamma \longrightarrow \gamma^\ast$. Therefore, $e\left(x;\hat \gamma\right)$ is an asymptotic balancing score. Suppose we specify the mean model as above and assume that the effect of $D$ is captured via the term $\psi D$. Under the assumption of no unmeasured confounding, there is a well-defined limit $\theta^\ast=\left(\psi^\ast,\beta^\ast,\phi^\ast\right)$ under the specified loss function for such mis-specified OR models; this is again in line with the standard frequentist treatment of mis-specified models. Therefore, we can establish the same asymptotic results as for the mis-specified PS case by applying Theorem \ref{thm2}.
\end{proof}
\section{Simulation studies}
\label{Sec:sim}
We examine the performance of the Bayesian methods described in Section \ref{BayesLoss} with the two updating frameworks. For each example, we consider
\begin{itemize}
\item Method I: Gibbs posterior computed using MCMC, calibrating $\eta$ via Algorithm \ref{A2};
\item Method II: Prior-to-posterior inference via the Bayesian bootstrap from \eqref{wlike};
\item Method III: Predictive-to-posterior inference via Algorithm \ref{A1}.
\end{itemize}
\subsection{Example 1}
In this example, we consider the simulation study constructed by \cite{saarela2016bayesian}. The data are simulated as follows: we simulate $X_1,X_2,X_3,X_4 \sim \mathcal{N}\left(0,1\right)$ independently, and then set
\begin{equation*}
\begin{aligned}
U_1& =\frac{\left|X_1\right|}{\sqrt{1-2/\pi}}\\[3pt]
D\left|U_1,X_2,X_3 \right. &\sim \text{ Bernoulli}\left(\text{expit}\left(0.4U_1+0.4X_2+0.8X_3\right)\right)\\[6pt]
Y \left|D,U_1,X_2,X_4\right. &\sim \mathcal{N}\left(D-U_1-X_2-X_4,1\right)
\end{aligned}
\end{equation*}
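A minimal Python sketch of this data generating process (with an illustrative function name) is given below; the three scenarios that follow differ only in which covariates are supplied to the fitted models, not in the simulation itself.
\begin{verbatim}
import numpy as np

def simulate_example1(n, rng):
    # data generating process of Example 1
    X = rng.normal(size=(n, 4))
    x1, x2, x3, x4 = X.T
    u1 = np.abs(x1) / np.sqrt(1 - 2 / np.pi)
    expit = lambda t: 1 / (1 + np.exp(-t))
    d = rng.binomial(1, expit(0.4 * u1 + 0.4 * x2 + 0.8 * x3))
    y = rng.normal(d - u1 - x2 - x4, 1.0)
    # return all candidate covariates; scenarios pick different subsets
    return y, d, np.column_stack([u1, x1, x2, x3, x4])
\end{verbatim}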
Three scenarios are considered:
\begin{itemize}
\item Scenario A: Mis-specify the OR model using covariates $(x_1, x_2, x_4)$ and correctly specify a treatment assignment model using covariates $(u_1, x_2, x_3)$.
\item Scenario B: Correctly specify the OR model using covariates $(u_1, x_2, x_4)$ and mis-specify a treatment assignment model using covariates $(x_1, x_2, x_3)$.
\item Scenario C: Mis-specify the OR model using covariates $(x_1, x_2, x_4)$ and mis-specify a treatment assignment model using covariates $(x_1, x_2, x_3)$. This is not originally considered in \cite{saarela2016bayesian}.
\end{itemize}
The prior-to-posterior updates are implemented using the Gibbs posterior via MCMC and the Bayesian bootstrap, following Sections \ref{gibbs} and \ref{priorpost}; non-informative priors are placed on all the parameters, with $10,000$ MCMC samples and $1,000$ burn-in iterations. For the predictive-to-posterior update, we generate $S=1,000$ sets, each with $N=10,000$ new data points and $\alpha=1$. For $n=20$, we also place an informative normal prior, with mean at the true value and standard deviation 2, for the Gibbs posterior computed via MCMC.
The results are given in the Supplement; the table reports, over $1,000$ Monte Carlo replicates, the averages of the posterior means, variances and coverage rates for $\theta$ at different sample sizes. Coverage rates are computed by constructing a 95\% credible interval for $\theta$ from the 2.5\% and 97.5\% posterior sample quantiles. When the sample size is small, the Bayesian bootstrap (Method II) and predictive inference (Method III) exhibit poor coverage, while the Gibbs posterior (Method I) returns coverage rates at the nominal level; Method II also exhibits considerably larger variances. The results for the Gibbs posterior with $\eta=1$ display the correct posterior mean, but the coverage is significantly below the nominal level in Scenarios A and B, which confirms that calibration of $\eta$ is required. The difference in variances diminishes as the sample size increases, or when the informative prior is used (shown in parentheses for $n=20$). As expected, the two updating approaches yield unbiased estimates in both scenarios and agree in variance and coverage rate once the sample size exceeds 100. The Bayesian bootstrap and DP-based predictive inference generate similar results, as the prior does not carry much weight when $\alpha/N$ is small. When both models are mis-specified, all cases yield significantly biased estimates unless the informative prior is used.
\subsection{Example 2: High-dimensional case}
In this example, we examine the performance of the proposed updating approaches under high-dimensional settings, with a binary exposure. The data are simulated as follows: we simulate $X=(X_1,X_2, \ldots,X_{p}) \sim \mathcal{N}_p\left(0,\Sigma\right)$, where $\Sigma_{ij} =1$ if $i=j$ and $0.1$ otherwise, and then simulate
\begin{equation*}
\begin{aligned}
D\left|X \right. &\sim \text{ Bernoulli}\left(\text{expit}\left(0.3X_1+0.2X_2-0.4X_5+1.3X_2X_5+1.8X_1X_2\right)\right)\\
Y \left|D,X\right. &\sim \mathcal{N}\left(D+0.5X_1+X_3-0.1X_4-0.2X_7+ 1.5X_3X_4 +0.6X_7^2 +1.2X_1X_3,1\right).\\
\end{aligned}
\end{equation*}
In the analyses, we take $p=20$ and $n=50$ and $100$.
The loss function adopted for the loss-based analysis for Method II and Method III encompasses both the need to penalize the number of terms in the PS model and the need to select a penalization parameter. We use penalized logistic regression for the PS model including all $x_1,\ldots,x_p$ and first order interactions between them, giving a total of $q=p+p(p-1)/2$ parameters. Specifically, we use the lasso penalty, and base conclusions on the loss function
\[
\ell_2((x,d), (\gamma,\lambda)) = -\log f_{D|X}(d|x,\gamma) +\lambda \sum_{j=1}^{q}\left|\gamma_j\right|
\]
where $f_{D|X}(d|x,\gamma)$ is the Bernoulli mass function with logistic link. This loss function is then incorporated into a cross-validation procedure to define the loss to be deployed in the implementation of Methods II and III, $\ell_{\text{CV}}((x,d),(\gamma,\lambda))$ say, which takes the input data and returns optimized values of $\gamma$ and $\lambda$, as well as the fitted values that can be transported into the outcome model. The value of $\lambda$ is chosen to give the lowest test mean squared error over 10-fold cross-validation. For the outcome model, we fit the model with the treatment indicator and estimated PSs only as covariates. For Method II, we set $S=1000$. For the predictive-to-posterior update, we generate $S=1000$ sets, each with $N=100$ and $\alpha=5$. For comparison, we also consider the Bayesian doubly robust high-dimensional (BDR-HD) method proposed in \cite{antonelli2022causal}, where the PS and outcome are estimated via regression models with the Gaussian process (GP) prior, and then the MCMC estimate is plugged into a doubly robust estimator. The variance is adjusted through a frequentist bootstrap so that the procedure achieves the frequentist nominal coverage rate. For the BDR-HD method, we ran 500 iterations with 100 burn-in iterations for both the PS and OR models.
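A possible implementation of the penalized PS step is sketched below using \texttt{scikit-learn}; note that \texttt{LogisticRegressionCV} selects the penalty level by cross-validation under its own scoring rule, so it is a stand-in for, rather than an exact replica of, the criterion described above, and the function name is illustrative.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import PolynomialFeatures

def fit_lasso_ps(X, d):
    # main effects plus first-order interactions: q = p + p(p-1)/2 columns
    expand = PolynomialFeatures(degree=2, interaction_only=True,
                                include_bias=False)
    Xq = expand.fit_transform(X)
    # l1-penalised logistic PS; penalty chosen by 10-fold cross-validation
    fit = LogisticRegressionCV(penalty="l1", solver="saga", cv=10,
                               max_iter=5000).fit(Xq, d)
    return fit.predict_proba(Xq)[:, 1]    # fitted propensity scores
\end{verbatim}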
\begin{table}
\caption{\label{sim2s} Example 2: Simulation results of the marginal causal contrast under high-dimensional settings, with true value equal to 1, on 500 simulation runs on generated datasets of size $n$. BDR-HD represents the method proposed in \cite{antonelli2022causal}, and Running time represents the average running time per Monte Carlo replicate in minutes.}
\centering
\fbox{
\begin{tabular}{ccrr}
&&\multicolumn{2}{c}{$n$}\\
&Method & \multicolumn{1}{c}{50} & \multicolumn{1}{c}{100} \\
\hline
\multirow{3}{*}{Bias} &Method II& 0.081& 0.022 \\
&Method III &0.216 & 0.260 \\
&BDR-HD &0.232 & 0.112 \\
\hline
\multirow{3}{*}{RMSE} &Method II &0.854& 0.574\\
&Method III & 0.761& 0.566 \\
&BDR-HD & 0.728 & 0.343 \\
\hline
\multirow{3}{*}{Coverage rate} &Method II & 99.4 & 98.6 \\
&Method III& 94.0 & 96.4 \\
&BDR-HD &98.0 & 97.8 \\
\hline
\multirow{3}{*}{Running time} &Method II & 1.73 & 2.33 \\
&Method III& 1.81& 1.99\\
&BDR-HD & 15.19 & 32.36 \\
\end{tabular}
}
\end{table}
The results are given in Table \ref{sim2s}. Method II shows the smallest bias among all methods, while Method III exhibits some bias at both $n=50$ and $n=100$; this is primarily due to the bias in the PS fitted via the lasso penalty. Additionally, the outcome model specified in Methods II and III only contains the PS and an intercept to account for confounding. However, the coverage rate is still around the nominal level for both methods. If the PS were more accurately estimated, the bias would diminish and the coverage rate would reach the target level, as suggested by the additional simulation study in the Supplement. Method III shows a larger bias because of the impact of the new data, which are generated from a mis-specified model. The BDR-HD method exhibits small biases in both cases, and its coverage rates are around the nominal level. This method utilizes component-wise GP regression for each confounder in the treatment and outcome models, with interaction terms supplied in the GP regression as separate covariates, and therefore achieves the desired performance. In practice, however, this might not be feasible, as it includes $p(p-1)/2$ additional covariates and increases the computational complexity by at least $O(p^2)$. It should also be noted that BDR-HD carries a much higher computational burden than Methods II and III, requiring a much longer time on average per replicate, since it involves MCMC computation for the GP regression and an additional bootstrap step to adjust the posterior variance.
\subsection{Example 3: Comparison with flexible modelling approaches}
In this example, we seek to compare the proposed approach with existing flexible/machine learning causal estimation approaches. We compare Method III with Bayesian causal forests (BCFs, \cite{hahn2020bayesian}), and double machine learning (DML) \citep{chernozhukov2018double} using a variety of machine learning strategies. The Supplement provides the details of the data generation settings, and a description of those approaches. Table \ref{flex} displays the results of this study. The BCF displays some bias at small sample sizes, but has a relatively small variance and therefore a lower RMSE than Method III. All DML variants show appreciable biases that do not vanish as the sample size increases. However, their variances are smaller than those of the other approaches, so the RMSEs are similar to the other two methods. As for coverage, the coverage of DML decreases dramatically as $n$ increases and ultimately falls below the target level, except for the regression tree and random forest, which show over-coverage in those cases. The BCF is consistently below the nominal level, while Method III yields coverage at the nominal level in all cases. Note, however, that the BCF and DML methods do not assume a known functional form for the treatment effect model, which is needed for PS regression. Note also that the BCF and DML methods require significantly more computational time than Method III.
\begin{table}
\caption{\label{flex} Comparison of results for the proposed Bayesian empirical likelihood, Bayesian causal forests (BCFs) and frequentist double machine learning estimator (DML). Summary of $1,000$ simulation runs. Rows correspond to the bias, root mean square error (RMSE), and coverage rates.}
\centering
\fbox{
\begin{tabular}{ccrrr}
&&\multicolumn{3}{c}{$n$}\\
&Method & \multicolumn{1}{c}{500} & \multicolumn{1}{c}{1000} & \multicolumn{1}{c}{2000} \\
\hline
\multirow{8}{*}{Bias} &Method III & 0.076& 0.037 & 0.005 \\
&BCF & 0.157& 0.111 & 0.073\\
&DML-Tree & -0.348& 0.023 & 0.122\\
&DML-Forest & 0.252& 0.087& 0.126\\
&DML-Boosting& -0.147& 0.219& 0.231\\
&DML-Nnet & -0.090& 0.215& 0.212\\
&DML-Ensemble & -0.022& 0.217 & 0.233\\
&DML-Best & -0.111& 0.211 & 0.212\\
\hline
\multirow{8}{*}{RMSE} &Method III &0.561& 0.395& 0.270 \\
&BCF & 0.289& 0.195 & 0.130\\
&DML-Tree & 0.256& 0.180 & 0.180\\
&DML-Forest & 0.245& 0.187& 0.168\\
&DML-Boosting& 0.276& 0.265& 0.254\\
&DML-Nnet &0.339& 0.268& 0.237\\
&DML-Ensemble & 0.262& 0.263 & 0.255\\
&DML-Best & 0.302& 0.263 & 0.236\\
\hline
\multirow{8}{*}{Coverage rate} &Method III & 92.9 & 93.9 & 95.2 \\
&BCF & 84.9& 83.6 & 84.3\\
&DML-Tree &99.7& 99.7 & 97.2\\
&DML-Forest & 99.1& 98.3& 92.2\\
&DML-Boosting& 92.4& 76.9& 49.3\\
&DML-Nnet & 91.3& 75.6& 53.7\\
&DML-Ensemble & 94.0& 77.4& 49.4\\
&DML-Best & 89.6& 76.1 & 53.4\\
\end{tabular}
}
\end{table}
\section{Application: UK Speed Camera Data}
\label{Sec:app}
Our real example aims to quantify the causal effect of speed camera presence on road traffic collision. We use data on the location of fixed speed cameras for 771 camera sites in the eight English administrative districts, including Cheshire, Dorset, Greater Manchester, Lancashire, Leicester, Merseyside, Sussex and the West Midlands. These data form the `treated' group. For the `untreated' group, we randomly select a sample of 4,787 points on the network within our eight administrative districts. Details of these data can be found in \cite{graham2019speed}.
The outcome of interest is the number of personal injury collisions per kilometre. The data are taken from police reports collated and processed by the Department for Transport in the UK in the `STATS 19' data set. The location of each personal injury collision is recorded using the British National Grid coordinate system and can be located on a map using Geographical Information System software. Data are collected from 1999 to 2007 to ensure the availability of collision data for the years before and after camera installation for every camera site, as speed cameras were introduced at various times between 2002 and 2004. There is a formal set of location selection guidelines for speed cameras in the UK \citep{gains2004national}. These guidelines inform the selection of covariates which represent the characteristics of units that simultaneously determine the treatment assignment (camera location) and outcome (number of accidents). Primary guidelines for site selection include the site length, the number of fatal and serious collisions, and the number of personal injury collisions in a preceding time period. In addition, drivers might try to avoid routes with speed cameras, and the reduction in collisions may come from a reduced traffic flow. Therefore, we include the annual average daily flow (AADF) as a confounder to control for the effect of traffic flow. We also include factors that could have additional safety impacts, such as road type, speed limit, and the number of minor junctions within the site length \citep{christie2003mobile}.
We apply the proposed Bayesian methods to the speed camera data with the loss defined in \eqref{u1}. \cite{graham2019speed} estimated the PS with a generalized additive model, including smooth functions of the AADF and the number of minor junctions, and achieved balance and overlap. For the outcome model, we include all the confounders and the estimated PS. We place non-informative priors on all the parameters in both the prior-to-posterior and predictive-to-posterior inference, based on the general DP representation with $\alpha =100$ and $N=10,000$. Table~\ref{app} shows summary statistics of the ATE based on $20,000$ posterior samples. All methods indicate that the installation of a speed camera can reduce road traffic collisions, by approximately 1.4 incidents per site on average; however, we notice that the Bayesian bootstrap-based approaches yield slightly higher variation. The Gibbs posterior (Method I) has similar variance to the other two methods when calibrated using Algorithm \ref{A2} with $\eta = 0.024$, which is close to the calibration achieved by the estimated residual variance ($0.027$). We report the posterior predictive distribution of the percentage reduction in the average change in road traffic collisions attributable to speed cameras in Table~\ref{app} \citep{graham2019speed}. PS regression demonstrates that there is about an 18\% reduction in road traffic collisions in locations where a speed camera is installed, indicating a stronger causal relationship than that estimated by inverse probability weighting (IPW) computed using a Bayesian approach. All three loss-based methods show similar posterior densities for the change in the ATE (Figure presented in the Supplement), while Method III shows a slightly smaller variance. Compared to the IPW analysis, which relies only on the inverse weighting adjustment, PS regression has an additional treatment-free component, and therefore offers an additional degree of robustness if one of the component models is correctly specified. We obtain narrower 95\% credible intervals and smaller standard deviations because regression-based approaches, coupled with the Bayesian bootstrap strategy, reduce the influence of extreme PS values on ATE estimation.
\begin{table}
\caption{\label{app} Summary statistics for the posterior distribution of the ATE, and the posterior predictive distribution of the percentage change of the ATE, for the speed camera data. IPW-BB represents results using the two-step Bayesian bootstrap approach based on inverse probability weighting estimation, while IPW-BB (plug-in) represents the plug-in approach using the two-step Bayesian bootstrap.}
\centering
\begin{tabular*}{35pc}{@{\hskip5pt}@{\extracolsep{\fill}}c@{}c@{}c@{}c@{\hskip5pt}}
\hline
&Posterior Mean & Standard Deviation & 95\% Credible Interval \\
\hline
\multicolumn{4}{l}{\textit{ATE}}\\
Method I& -1.413& 0.183 & (-1.771, -1.054)\\
Method II & -1.411 & 0.184 & (-1.772, -1.048)\\
Method III & -1.413 & 0.180 & (-1.767, -1.058)\\
IPW-BB & -1.089 & 0.203 & (-1.486, -0.679)\\
IPW-BB (plug-in) & -1.088 & 0.209 & (-1.484, -0.663)\\
\hline
\multicolumn{4}{l}{\textit{Percentage Change of the ATE}}\\
Method I &-18.656& 2.356 & (-23.250, -13.996) \\
Method II & -18.603& 2.352 & (-23.224, -13.951)\\
Method III & -18.659& 2.313 & (-23.194, -14.070)\\
IPW-BB & -14.338& 2.622 & (-19.419, -9.036)\\
IPW-BB (plug-in)& -14.625& 2.807 & (-19.978, -8.877)\\
\hline
\end{tabular*}
\end{table}
\section{Discussion} \label{Sec:dis} We have formulated inference for parameters defined via loss functions in a formal Bayesian approach that does not rely on standard prior-likelihood calculations. The usual prior updating framework provides a means of informed and coherent decision making in the presence of uncertainty. Predictive inference sheds light on how to quantify Bayesian uncertainty, where the model is specified via a sequence of predictive distributions without a prior-likelihood construction, and often yields a computationally more efficient calculation because it relies purely on optimization instead of integration via MCMC.
We focused on non-parametric approaches based on the Dirichlet process. First, from the traditional Bayesian updating approach, we obtained the posterior distribution from a loss-based decision-theoretic perspective. Secondly, by sequentially imputing sets of unobserved future data, we computed the posterior by minimizing a loss function over the future data. We also showed that computations following this paradigm yield valid posterior inference in the spirit of \cite{monahan1992proper}. Using this method, we calibrated the scaling parameter of the Gibbs posterior and demonstrated that it yields valid uncertainty quantification. We gave asymptotic results that establish the consistency and asymptotic normality of the computed posterior distributions. Simulation examples demonstrated that the proposed approaches have good Bayesian and frequentist properties, and are typically less computationally burdensome than other successful Bayesian approaches.
Finally, we applied the loss-based approaches to study road safety outcomes, and quantified the causal effect of speed cameras on road traffic accidents, concluding that the presence of speed cameras can reduce the number of personal injury collisions. Such inference aids transportation authorities to propose a more effective targeted installation plan of speed cameras to improve road safety. Bayesian methods are generally applicable in causal inference for real applications, and yield interpretable variability estimates in finite samples.
The principles presented in this paper can also be applied in much more general settings where likelihood functions are not available. In addition, the proposed methodology can be widely applied in other causal settings in which the traditional Bayesian set-up would require over-specification of the model, clashing with partially specified restrictions.
\FloatBarrier
\appendix
\begin{center}
{\LARGE\bf Supplementary materials for ``Assessing the validity of Bayesian inference using loss functions"} \end{center}
\section{Estimation via predictive inference and the KL divergence} The Bayes estimator of a target parameter is the function of the data that minimizes the posterior expected loss, given by
\begin{equation*}
\arg \min\limits_{t \in \Theta} \int_{\Xi} u\left(t,\xi \right) \pi\left(\xi | z_{1:n} \right) \ d \xi. \end{equation*}
If $u$ is taken to be the KL divergence between the true model, $f^\ast$ and a possibly mis-specified model, $f$, given by \[
u\left(\theta, \xi\right)= \int \log\left(\dfrac{f^\ast(z | \xi )}{f(z | \theta )}\right) f^\ast(z | \xi ) \ dz, \] then the optimization becomes \begin{equation}
\label{pp1}
\arg \min\limits_{t \in \Theta} \int_{\Xi} \left\{ \int \log\left(\dfrac{f^\ast(z | \xi )}{f(z | t )}\right) f^\ast(z | \xi ) \ dz \right\} \pi\left(\xi \left|z_{1:n}\right.\right) \ d\xi=\arg \max\limits_{t \in \Theta} \int \log f(z | t ) p^\ast(z | z_{1:n} ) \ dz. \end{equation} Exchanging differentiation and integration, we can deduce that the solution to \eqref{pp1} is also the solution to the estimating equation \[
\int\frac{\partial \log f(z |t )}{\partial t} p^\ast (z | z_{1:n} ) dz =\int S(z,t) p^\ast (z | z_{1:n} ) dz = 0 \] where $S(z,\theta)$ is the score function. The minimization in \eqref{pp1} does not involve prior opinion concerning $\theta$, but \eqref{pp1} can be modified to \[
\arg \max\limits_{t \in \Theta} \left\{ \int (\log f(z |t) + \log \pi_0 (t )) p^\ast (z | z_{1:n} ) \ dz \right\} \] or via the modified score function \[ S^*(z,\theta)=S(z,\theta)+ \frac{\partial }{\partial \theta}\log \pi_0\left(\theta\right). \]
A sample from $p^\ast(z | z_{1:n} ) $ can be converted to a sampled value of $\theta$ in the same fashion as discussed in the main paper, which yields a fully Bayesian procedure that recovers the usual likelihood-based posterior distribution.
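In practice, the maximization in \eqref{pp1} can be carried out by Monte Carlo: draw $z^s_{1:N}$ from the posterior predictive and maximize the averaged log-likelihood over those draws. A minimal sketch (with hypothetical function names, and a generic numerical optimizer standing in for whatever solver is appropriate to the model) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def predictive_mle(z_pred, neg_loglik, t0):
    # z_pred: draws z^s_{1:N} from the posterior predictive p*(z | z_{1:n})
    # neg_loglik(t, z): -log f(z | t) for the working model
    obj = lambda t: np.mean([neg_loglik(t, z) for z in z_pred])
    return minimize(obj, t0).x
\end{verbatim}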
\section{Assessing validity of non-standard posterior inference} We investigate the validity of the proposed predictive inference approach under mis-specification. We simulate data based on the set-up of Example 1 in Section 6 of the main paper, with the loss function specified as squared error. The coefficients of the outcome model are generated from the prior distribution, that is, a Normal distribution with the same mean as in the example and variance 10,000, and the outcome data are generated from the correct outcome regression model. This procedure is repeated $1,000$ times with $n=100$ and $n=10,000$. For each dataset, we apply the proposed predictive-to-posterior Bayesian inference method for various $\alpha$ values, producing the posterior distribution of the average treatment effect based on a mis-specified propensity score model and a mis-specified outcome model (containing only the treatment variable and the estimated propensity score). In this case, the computed posterior will not concentrate at the true value as $n$ grows.
In all cases, the posterior distribution computed using predictive-to-posterior inference fails the Monahan \& Boos uniformity test, with $p$-values all smaller than $10^{-6}$. This is confirmed by the density plots in Figure \ref{MB1}, which exhibit higher densities in the tails.
\begin{figure}
\caption{ Checking coverage validity using Monahan \& Boos: Density plots of $H$ with $n=100$ (left) and $n=10000$ (right) under a mis-specified model. The solid, dashed, dotted and dotted dash lines represent results from $\alpha=0,1,10,100$ respectively.}
\label{MB1}
\end{figure}
\section{Asymptotics} \subsection{Definition of Bayesian Consistency} {\begin{definition} \citep{walker2001bayesian}
For realizations $z_1, z_2,\ldots,z_n$ drawn independently from some unknown underlying distribution $F^\ast$ with true data generating value $\theta^\ast$ in the interior of the parameter space $\Theta$, the posterior mass assigned to $A \subseteq \Theta$ is given by
\[
\Pi^n\left(A\right)=\pi\left(\theta\in A \left|z_1,\ldots,z_n\right.\right)=\dfrac{\displaystyle \int_{A}R_n\left(\theta\right)\pi_0\left(d\theta\right)}{\displaystyle \int R_n\left(\theta\right)\pi_0\left(d\theta\right)}
\]
where
\[
R_n\left(\theta\right)=\prod_{i=1}^{n} \exp \left[- \left\{ \ell\left(z_i,\theta\right) - \ell\left(z_i,\theta^*\right) \right\} \right]
\]
and where $\pi_0\left(\theta\right)$ is the prior density for $\theta$. If $A_\epsilon=\left\{\theta: d(\theta,\theta^*)>\epsilon \right\}$ where $d(\theta,\theta^\ast)$ is some distance measure, the posterior distribution is consistent in the Bayesian sense if $\Pi^n\left(A_\epsilon\right) \longrightarrow 0 $ almost surely under $F^\ast$. \end{definition}}
\subsection{Assumptions}
\begin{assumption}
The loss function $\ell(z,\theta) : \mathcal{Z} \times \Theta \to \mathbb{R} $ is a measurable function, bounded from below, with
\[
\int \ell\left(z,\theta\right) dF^\ast\left(z\right) <\infty \qquad \forall \theta \in \Theta
\]
where $\Theta$ is a compact and convex subset of a $p$-dimensional Euclidean space. \end{assumption}
\begin{assumption}
$\ell\left(z,\theta\right)$ is continuous $\forall\theta \in \Theta$. \end{assumption}
Let $\mathbf{U}_i\left(\theta\right)={\partial \ell\left(z_i,\theta\right)}\big/{\partial\theta ^{\top}} $. Minimizing the expected loss function, $\mathbb{E}_{F^\ast}[\ell\left(Z,\theta\right)]$, is equivalent to solving a $p \times 1$ system of estimating equations given by $\mathbb{E}_{F^\ast}[\mathbf{U}(\theta)] = \mathbf{0}$, with expectations taken with respect to the true data generating model $F^\ast$.
\begin{assumption}
$\theta^\ast \in \Theta $ is the unique solution to $\mathbb{E}_{F^\ast}[\mathbf{U}(\theta)] = \mathbf{0}$, and for arbitrary $\delta >0$, there exists an $\epsilon>0$ so that
\[
\lim\limits_{n\longrightarrow \infty} \mathbb{P}\left(\inf_{\norm{ \theta - \theta^\ast} \ > \ \delta}\frac{1}{n}\sum_{i=1}^{n}\left(\ell(z_i,\theta)-\ell(z_i,\theta^\ast)\right) > \epsilon\right) =1.
\] \end{assumption}
\begin{assumption}
$\mathbb{E}\left[\sup_{\theta\in \Theta}\norm{\mathbf{U}\left(\theta\right)}^{\gamma}\right] <\infty$ for $\gamma>2$. Suppose there exists a neighborhood $\widetilde{\Theta}$ of $\theta^\ast$ within which ${\mathbf{U}}(\theta)$ is continuously differentiable, and
\[
\mathbb{E}_{F^\ast}\left[\sup_{\theta \in \widetilde{\Theta}}\norm{\dot{\mathbf{U}}(\theta)}_{F}\right]< \infty,
\]
with $\norm{\cdot}_{F}$ denoting the Frobenius norm. \end{assumption}
\begin{assumption}
There is an open ball $B$ containing $\theta^\ast$ such that all first, second and third partial derivatives of $\ell(\theta,z)$ with respect to $\theta \in B$ exist and are continuous for all $z$. Furthermore, there exist measurable functions $G_j$, $G_{jk}$, $G_{jkl}$ and $M_{jkl}$ such that for $\theta \in B$ we have
\begin{align*}
\left|\frac{\partial \ell\left(z,\theta\right)}{\partial \theta_j }\right| &\leq G_j(z) & \text{with } \displaystyle \int G_j(z) dF^\ast\left(z\right) <\infty, \\[6pt]
\left|\frac{\partial^2 \ell\left(z,\theta\right)}{\partial \theta_j \partial \theta_k}\right| &\leq G_{jk}(z) & \text{with } \displaystyle \int G_{jk}(z) dF^\ast\left(z\right) <\infty, \\[6pt]
\left|\frac{\partial^3 \ell\left(z,\theta\right)}{\partial \theta_j \partial \theta_k\partial \theta_l}\right| & \leq G_{jkl}(z) & \text{with } \displaystyle \int G_{jkl}(z) dF^\ast\left(z\right) <\infty, \\[6pt]
\left|\frac{\partial \ell\left(z,\theta\right)}{\partial \theta_j }\frac{\partial^2 \ell\left(z,\theta\right)}{\partial \theta_k \partial \theta_l}\right| &\leq M_{jkl}(z) & \text{with } \displaystyle \int M_{jkl}(z) dF^\ast\left(z\right) <\infty.
\end{align*} \end{assumption} Let \[ \mathcal{I}(\theta^\ast) = \mathbb{E}[ \mathbf{U}(\theta^\ast) \mathbf{U}(\theta^\ast)^\top] \qquad \qquad \mathcal{J}(\theta^\ast) = -\mathbb{E}[\dot{\mathbf{U}}(\theta^\ast)] \]
both $(p \times p)$ matrices, and $\dot{\mathbf{U}}(\theta^\ast) = {\partial \mathbf{U}(\theta)}\big/{\partial \theta^\top} \big|_{\theta = \theta^\ast}$
\begin{assumption}
$\mathcal{I}(\theta) $ and $\mathcal{J}(\theta)$ are non-singular and $\mathcal{J}$ is full rank, rank$\left(\mathcal{J}(\theta)\right)=p$, for $\theta \in B$, with all elements finite. \end{assumption}
\subsection{Proof of Theorem 1} \begin{proof}
Suppose that $\theta^s_1$ is the minimizer of the weighted loss
\begin{equation*}
\theta^s_1 \equiv \theta (\varpi^s) = \arg\min_{t}\sum_{k=1}^{\infty}\varpi_k^s\ell\left(\zeta_k^s,t\right).
\end{equation*}
given by the prior-to-posterior computation. From Theorem 1 in \cite{lijoi2004extending}, there exists a unique random element $F_1$ such that
\[
\sum_{k=1}^{\infty}\varpi_k^s \ell (\zeta_k^s, \theta^s_1 ) \longrightarrow \min\limits_{t \in \Theta}\int \ell\left(z,t \right) dF_1\left(z\right)\;\;\; \text{as } n \longrightarrow \infty.
\]
It remains to show that $F_1 \equiv F^\ast$. As $F_1$ is a draw from the posterior distribution under the Bayesian non-parametric formulation given by the prior-to-posterior computation, by de Finetti's representation theorem the posterior distribution becomes degenerate at $F^\ast$ as $n \longrightarrow \infty$. Therefore, since the solution $\theta(\varpi^s)$ is unique for any given $\zeta^s$ and $\varpi^s$, $\theta(\varpi^s)$ becomes degenerate at $\theta^\ast$ as $n\longrightarrow \infty$.
Alternatively, suppose that $\theta^s_2$ is the minimizer of the Monte Carlo estimate of the posterior predictive expectation
\begin{equation*}
\theta^s_2 \equiv \theta\left(z^s\right) =\arg\min_{t} \sum_{i=1}^{N}\ell\left(z^{s}_i,t\right).
\end{equation*}
Again by Theorem 1 in \cite{lijoi2004extending}, there exists a unique random element $F_2$ such that
\[
\frac{1}{N}\sum_{k=1}^{N} \ell (z_k^s, \theta^s_2 ) \longrightarrow \min\limits_{t \in \Theta}\int \ell\left(z,t \right) dF_2\left(z\right)\;\;\; \text{as } n,N \longrightarrow \infty.
\]
These results confirm that the posterior distribution generated by the predictive-to-posterior approach converges weakly to a probability measure with all its mass on $F_2 \equiv p(z^s_{1:N}\mid z_{1:n} )$, and it again remains to show that $F_2 \equiv F^\ast$. Under mild regularity conditions and correct specification of the model leading to $\pi ^\ast(\xi \mid z_{1:n})$, with $\widehat\xi\left(z_{1:n}\right)$ in a neighbourhood of $\xi^\ast$, following \citet{bernardo1979reference}, we have
\begin{equation*}
\begin{aligned}
p^\ast(z^s_{1},\ldots,z^s_{N} | z_{1:n} ) &= p^\ast (z^s_{1},\ldots,z^s_{N} \mid \hat \xi (z_{1:n} ) ) + \textrm{o}(1)\\
&= p^\ast(z^s_{N} \mid z^s_{1},\ldots,z^s_{N-1},\xi^\ast)\cdots p^\ast( z^s_{1} \mid \xi^\ast )+ \textrm{o}(1) & n\longrightarrow \infty\\
&=f^\ast(z^s_{N} \mid \xi^\ast)\cdots f^\ast( z^s_{1}| \xi^\ast)+ \textrm{o}(1)
\end{aligned}
\end{equation*}
and therefore a draw from the predictive $p(z^s_{1},\ldots,z^s_{N} \mid z_{1:n} )$ suitably simulates a collection of $N$ sample points from the true data generating model $F^\ast$ as $n\longrightarrow \infty$. Therefore, $F_2$ is the same as $F^\ast$. As the solution $\theta(z^s)$ is unique for any given $z^s$, $\theta(z^s)$ becomes degenerate at $\theta^\ast$ as both $N\longrightarrow \infty$ and $n\longrightarrow \infty$. \end{proof}
\subsection{Proof of Theorem 2} \begin{proof}
First, we define $S_N(\theta)= \sum_{i=1}^N{\partial \varpi_i \ell\left(z_i^s,\theta\right)}\big/{\partial\theta ^{\top}} = \sum_{i=1}^N \varpi_i\mathbf{U}_i(\theta)$, and $\mathcal{J}_{\varpi} (\theta)= -\sum_{i=1}^N \varpi_i\dot{\mathbf{U}}_i(\theta)$. A Taylor expansion of $S_N$ around $\theta^s$, the minimizer of the weighted loss (so that $S_N(\theta^s)=\mathbf{0}$), evaluated at $\hat \theta_n$ gives
\begin{equation}
\label{score}
S_N(\hat \theta_n) = (\mathcal{J}_{\varpi} (\hat \theta_n) - R_n) ( \theta^s - \hat \theta_n)
\end{equation}
where $R_n$ is the remainder term. For large $n$, $R_n$ is negligible under the regularity conditions. To see what remains to be proved, we rewrite \eqref{score} as
\[
\vartheta_{n,N}^s = \sqrt{N} ( \theta^s - \hat \theta_n) = (\mathcal{J}_{\varpi} (\hat \theta_n) - R_n) ^{-1}\sqrt{N} S_N(\hat \theta_n).
\]
By Lemma 7 in \cite{newton1991weighted},
\[
\mathcal{J}_{\varpi} (\hat \theta_n) \xrightarrow{p} \mathcal{J} (\theta^\ast).
\]
For any $t \in \mathbb{R}^{p}$ with $\left|t\right| =1$, we define $m_N(t) = \sqrt{N}\,t^{\top}S_N(\hat \theta_n) $, and by Theorem 2 in \cite{ishwaran2002exact}, which approximates the DP by a finite-dimensional Dirichlet prior,
\begin{equation*}
\begin{aligned}
m_N(t) &\approx \sqrt{N}\, t^{\top} \left(\frac{\sum_{i=1}^{N} H_i \mathbf{U}_i(\hat \theta_n) }{\sum_{i=1}^{N} H_i }\right) \\
&= \frac{1}{\bar H_N} \frac{\sum_{i=1}^{N} a_{in}H_i }{\sqrt{N}}
\end{aligned}
\end{equation*}
where the $H_i$ are iid Exponential$\left(\alpha/N\right)$ random variables independent of the data $z^s$, $\bar H_N =N^{-1}\sum_{i=1}^{N}H_i$, and $a_{in}= t^{\top}\mathbf{U}_i(\hat \theta_n)$. By the Lindeberg--Feller central limit theorem and Lemma 8 in \cite{newton1991weighted}, we have
\[
\frac{\sum_{i=1}^{N} a_{in}H_i }{\sqrt{N}} \xrightarrow{d} Normal\left(0,\left(\tfrac{\alpha}{N}\right)^2t^{\top} \mathcal{I}(\theta^\ast) t\right).
\]
By Slutsky’s theorem, with $\bar H_N \to {\alpha}/{N}$, we have
\[
m_N(t) \xrightarrow{d} Normal(0,t^{\top} \mathcal{I}(\theta^\ast) t) \text{ as } n,N\to \infty.
\]
Therefore, by the Cram\'{e}r--Wold theorem, we have
\[
\sqrt{N} S_N(\hat \theta_n) \xrightarrow{d} Normal_p(\mathbf{0},\mathcal{I}(\theta^\ast)) \text{ as } n,N\to \infty.
\]
By applying Slutsky’s theorem again, we have
\[
\vartheta_{n,N}^s \xrightarrow{d} Normal_p(\mathbf{0},\mathcal{J} (\theta^\ast)^{-1}\mathcal{I}(\theta^\ast)\mathcal{J} (\theta^\ast)^{-\top}) \text{ as } n,N\to \infty.
\] \end{proof}
\subsection{P\'{o}lya urn scheme representation}
\cite{fong2021martingale} stated two conditions which the predictive distribution has to satisfy.
{\begin{condition}
The sequence of predictive distributions, $p_{n+1}(y|d,x),p_{n+2}(y|d,x),\ldots$, converges almost surely to a random probability distribution $p_{\infty}(y|d,x)$, for all $y\in \mathbb{R}$.
\end{condition} }
{\begin{condition}
The posterior expectation of the random $p_{\infty}(y|d,x)$ satisfies $\mathbb{E}\left[p_{\infty}(y|d,x)\left|z_{1:n} \right.\right]=p_{n}(y|d,x)$
almost surely for all $y\in \mathbb{R}$.
\end{condition} }
{Assuming these two conditions, $p_{\infty}(y|d,x)$ is considered as the best estimate of the unknown true data generating mechanism under the specified model sequence, and gives a mechanism for generating posterior uncertainty of $\theta$ without applying Bayes rule. \cite{berti2006almost} showed that the conditional distribution, $p_{n+N}(y|d,x)$, converges weakly to a random probability measure almost surely for each pair of $(d,x)$ if these two conditions are satisfied. }
{In the predictive resampling approach derived from the Dirichlet process and indicated in Equation (7) in the main paper, the sequence $\{G_j\}_{j=1}^N$ are precisely predictive models that align with the theory of \cite{berti2006almost}, and therefore we have the following theorem. }
{\begin{theorem}
There exists a random probability measure $G_{\infty}$ such that $G_{n+N}$ converges weakly to $G_{\infty}$.
\end{theorem} }
\begin{proof}
For the sequence of random probability measures based on the DP construction, $\left\{G_{N},G_{N+1},\ldots\right\}$, defined on the probability space $(\Omega,\mathcal{A},\mathbb{P})$ and taking values in the measurable space $(\mathbb{Y},\mathcal{Y})$, we define
\[
G_{N}\left(f\left|x,d\right.\right)=\int f(y) d G_{N}\left(y\left|x,d\right.\right)\;\; \text{all bounded measurable }f:\mathbb{Y}\to \mathbb{R}.
\]
This integral is finite if $\int \log(1+\left|f(y)\right|) d G_{0}\left(y\left|x,d\right.\right)<+\infty$ \citep{feigin1989linear}. We denote a filtration, $\mathcal{F}_{i} = \sigma(Z_1,\ldots,Z_i)$. Taking the conditional expectation, from Fubini’s theorem, we have
\[
\mathbb{E}\left[G_{N+1}\left(f\left|x,d\right.\right)\left|\mathcal{F}_{N}\right.\right]= \int f(y) \mathbb{E}\left[dG_{N+1}\left(y\left|x,d\right.\right)\left|\mathcal{F}_{N}\right.\right] = G_{N}\left(f\left|x,d\right.\right)
\]
because $G_{N}\left(y\left|x,d\right.\right)$ is a martingale with respect to $\mathcal{F}_{N}$ regardless of the draw of the pair $(x,d)$. As $f$ is bounded, $ \mathbb{E}\left[\left|G_{N}\left(f\left|x,d\right.\right)\right|\right]$ is also bounded. Therefore, $G_{N}\left(f\left|x,d\right.\right)$ is also a martingale with respect to $\mathcal{F}_{N}$. By Theorem 2.2 in \cite{berti2006almost}, there exists a random probability measure $G_{\infty}$ defined on $(\Omega,\mathcal{A},\mathbb{P})$ such that $G_{N} \to G_{\infty}$ weakly almost surely. \end{proof} This theorem confirms that predictive resampling via the Dirichlet process is a valid Bayesian update and gives the same uncertainty quantification as the prior-to-posterior update. From Equation (5) in the main paper, we may deduce that the value obtained from solving the minimization problem in Equation (6) in the main paper is a sample from the posterior distribution of the target parameter.
\section{Additional simulation results} \subsection{Example: PS distribution} {In this example, we examine the performance of the proposed updating approaches under some extreme PS distributions, with binary exposure, but where there is no treatment effect.} The data are simulated as follows: we simulate $X_1,X_2 \sim \mathcal{N}\left(1,1\right)$ and $X_3,X_4 \sim \mathcal{N}\left(-1,1\right)$ independently, and then simulate \begin{equation*}
\begin{aligned}
D\left|X_1,X_2,X_3,X_4 \right. &\sim \text{ Bernoulli}\left(\text{expit}\left(\gamma_0+\gamma_1X_1+\gamma_2X_2+\gamma_3X_3+\gamma_4X_4\right)\right)\\
Y \left|D,X_1,X_2,X_3,X_4\right. &\sim \mathcal{N}\left(0.25X_1+0.25X_2+0.25X_3+0.25X_4+1.5X_3X_4,1\right).\\
\end{aligned} \end{equation*} In the analyses, the PS model is assumed to be correctly specified. For the outcome model, we fit the model with the treatment indicator and estimated PSs only as covariates. To investigate how the PS distribution affects the estimation of the treatment effect, different PS distributions are considered: \begin{itemize}
\item Scenario A: $\gamma=\left(0.00,0.30,0.80,0.30,0.80\right)$, generating a nearly uniform distribution of propensity scores;
\item Scenario B: $\gamma=\left(0.50,0.50,0.75,1.00,1.00\right)$, having a greater density of lower scores;
\item Scenario C: $\gamma=\left(0.00,0.45,0.90,1.35,1.80\right)$, having very few high scores. \end{itemize} In this example, we also fit Bayesian regression on the correctly specified OR.
Table~\ref{sim2ss} summarizes the mean estimates of $\theta$ over 1,000 Monte Carlo replicates for the three scenarios described above. For a correctly specified OR, the coverage rate is around the nominal level. For the proposed methods, the results suggest that all approaches yield unbiased estimates across all scenarios (Table~\ref{sim2ss}). As in Example 1, under a fairly uniform PS distribution, these approaches show nearly the same performance, and the results agree in terms of posterior mean and variance. However, when the PS distribution is slightly skewed (Scenario B), Method II exhibits a slightly higher bias and greater variance, notably when $n$ is small, but these differences diminish as $n$ increases. The bias and greater variance in Method II become more pronounced when the PS distribution is highly skewed (Scenario C). Also in Scenario C, Method I consistently has the smallest variance. In general, Methods II and III have very similar performance in these scenarios.
\begin{table}
\caption{\label{sim2ss} Simulation results of the marginal causal contrast, with true value equal to 0, on 1000 simulation runs on generated datasets of size $n$. Bayes-OR represents standard Bayesian inference for the correctly specified OR with non-informative priors.}
\centering
\begin{tabular*}{40pc}{@{\hskip5pt}@{\extracolsep{\fill}}l@{}c@{}c@{}c@{}c@{}|c@{}c@{}c@{}c@{}|c@{}c@{}c@{}c@{\hskip5pt}}
\hline
& \multicolumn{4}{c}{Scenario A}&\multicolumn{4}{c}{Scenario B}&\multicolumn{4}{c}{Scenario C} \\ \cline{2-13}
$n$ &100 &200 & 500 &1000 &100 &200 & 500 &1000 &100 &200 & 500 &1000\\
\hline
\multicolumn{13}{l}{Mean} \\
\hline
Method I&-0.004 &-0.005 &-0.010 &0.002 & 0.001 &-0.007&0.005& 0.003& -0.007&-0.006&0.004&0.000\\
Method II& -0.010 &-0.001&0.003&-0.002 &-0.001 &0.007 &-0.002 &-0.002&0.038&0.024& 0.000&0.002\\
Method III &-0.003 & -0.004&0.001& 0.004&-0.007&0.006& 0.000&-0.008& 0.054&-0.008&0.012& 0.002\\
Bayes-OR & 0.000 & -0.001 & 0.002 & -0.004 & 0.008& 0.001&-0.004 & -0.007 & -0.005 & 0.002 &0.000 & 0.002\\
\hline
\multicolumn{13}{l}{Variance} \\
\hline
Method I &0.148 &0.067&0.028&0.013&0.154&0.079&0.031&0.015&0.144&0.069&0.043&0.014\\
Method II& 0.155& 0.071&0.030 & 0.015& 0.163 &0.080&0.032& 0.016& 0.233&0.112&0.043&0.023\\
Method III & 0.145& 0.074&0.029&0.014 &0.139 & 0.080 & 0.032&0.015& 0.234& 0.109&0.044&0.021\\
Bayes-OR & 0.055 & 0.026 & 0.010 & 0.006 & 0.066& 0.031&0.012 & 0.006 & 0.087 & 0.040 &0.017 & 0.008\\
\hline
\multicolumn{13}{l}{Coverage, \%} \\
\hline
Method I&94.4 & 95.2 &94.8 & 94.4 & 93.8& 95.0&95.2&94.5& 96.1& 94.0&95.2 & 94.7\\
Method II& 91.6& 93.1&94.4 &94.7 &91.5 & 93.0 &94.8 & 95.1 & 89.2&92.8&94.7& 94.3\\
Method III & 93.6 & 95.9 & 95.5& 95.8 &95.1& 94.9 & 95.9 &95.5 &91.0 & 94.6 &94.5& 95.1\\
Bayes-OR & 95.2 & 96.0 & 95.9 & 94.4 & 94.9& 94.3 & 95.2 & 94.0 & 94.2 & 96.0 & 94.2 & 95.0\\
\hline
\end{tabular*} \end{table}
\subsection{Example 1: Results} Table~\ref{sim1} summarizes the mean estimates of $\theta$ over 1000 Monte Carlo replicates for three sample sizes for Example 1 in the main paper. \begin{table}
\caption{\label{sim1} {Example 1: Simulation results of the marginal causal contrast, with true value equal to 1, for $1,000$ simulation runs on generated datasets of size $n$. Gibbs represents results generated from the Gibbs posterior with $\eta=1$. The bracketed results are from the informative normal prior.}}
\centering
\begin{tabular*}{40pc}{@{\hskip5pt}@{\extracolsep{\fill}}l@{}c@{}c@{}c@{}c@{}|c@{}c@{}c@{}c@{}|c@{}c@{}c@{}c@{\hskip5pt}}
\hline
& \multicolumn{4}{c}{Scenario A}&\multicolumn{4}{c}{Scenario B}&\multicolumn{4}{c}{Scenario C} \\ \cline{2-13}
$n$ &20 &50 & 100 &500 &20 &50 & 100 &500 & 20 &50 & 100 &500 \\
\hline
\multicolumn{13}{l}{Mean} \\
\hline
\multirow{2}{*}{Method I} &0.980 & 0.980& 1.005& 1.001 &0.979 &1.008 &1.000& 1.001&0.623 &0.770& 0.621 & 0.613\\
&(1.049)& --&-- &--& (1.025)& --&-- &-- & (0.793)& --&-- &--\\
Method II & 0.925 &0.990&1.003 &0.999 & 1.132&1.007&1.000& 1.003& 0.448& 0.633&0.637& 0.625\\
Method III & 0.945 &1.005&0.992&1.003&1.010 &0.994&0.993&1.002& 0.597&0.628&0.620& 0.623\\
Gibbs & 1.077& 0.991& 0.997& 1.000 &0.880& 0.994 &1.003& 1.003 &0.860& 0.615& 0.643 & 0.628\\
\hline
\multicolumn{13}{l}{Variance} \\
\hline
\multirow{2}{*}{Method I} &1.115& 0.129&0.058&0.015 &1.230 &0.121 &0.053& 0.010&0.523& 0.203&0.092&0.018 \\
&(0.348)& --&-- &-- & (0.320)& --&-- &--& (0.221)& --&-- &--\\
Method II & 13.521 &0.113&0.056 &0.011 &8.396 &0.118 &0.050& 0.010& 69.893& 0.230 &0.094&0.020\\
Method III& 0.461& 0.125& 0.054&0.011&0.390 &0.118 &0.054&0.010& 0.676&0.194&0.089 &0.020\\
Gibbs & 0.489& 0.131& 0.059&0.010&0.373& 0.117 &0.053 &0.009& 0.680&0.131&0.107 &0.018\\
\hline
\multicolumn{13}{l}{Coverage, \%} \\
\hline
\multirow{2}{*}{Method I} &96.5 & 95.5 & 95.1& 94.9 &94.9 &95.0 &95.0& 95.4& 93.2 &91.2& 85.3 & 25.6\\
&(97.3)& --&-- &--& (95.9)& --&-- &-- & (95.6)& --&-- &-- \\
Method II& 89.3 &92.2 & 93.5& 94.2& 82.2& 89.9&92.7&94.8 & 77.4 & 77.2& 74.2& 20.9\\
Method III& 92.2 & 95.0 & 94.9 & 93.8 & 85.4 & 91.5 & 94.0 & 94.7 &77.5 &81.4&76.8&35.4\\
Gibbs & 84.4&79.1 & 80.1& 84.6 &78.3&84.8 & 84.0 & 84.3 & 66.7&52.5& 42.7 & 4.2 \\
\hline
\end{tabular*} \end{table}
\subsection{Example 3: Comparison with flexible modelling} In this example, we seek to compare the proposed approach with some recently developed causal machine learning approaches. We first consider the following data generating mechanism with interaction terms: \begin{equation*}
\small
\begin{aligned}
X_1,X_3&\sim \mathcal{N}\left(1,1\right), X_2,X_4 \sim \mathcal{N}\left(-0.5,1\right) \\
D\left|X_1,X_2,X_3,X_4 \right. &\sim \text{ Bernoulli}\left(\text{expit}\left(1.5+X_1-0.2X_2-2.7X_3+2X_4\right)\right)\\
Y|D,X_1,X_2,X_3,X_4 &\sim \mathcal{N}(\mu_0(D,X; \beta), 1 )\\
\mu_0(D,X) = 10+D (1+ X_1+X_4) &+ 0.75X_1+ X_2 + 1.25X_3 +X_4 +2 X_2^2 + 1.2 X_1 X_3 + 0.6 X_2 X_4 .
\end{aligned} \end{equation*}
The ATE then is $\mathbb{E}[ \mu_0(1,X) ] - \mathbb{E}[ \mu_0(0,X) ] = 1 + \mathbb{E}[X_1] +\mathbb{E}[X_4]= 1.5$. Since there are interaction terms in the OR, we specify the mean of the treatment-effect model as \[ \beta + (\theta+\theta_1x_1)d + \phi_1 e\left(x;\hat\gamma\right)+\phi_2 x_1e\left(x;\hat\gamma\right). \] This model yields a consistent estimate of the ATE, and is fitted via the squared loss through the proposed Bayesian predictive approach with $\alpha=2$. We also consider the Bayesian causal forests (BCFs) method of \cite{hahn2020bayesian}. The BCF is a flexible approach for the outcome mean model that uses Bayesian additive regression trees (BART) to infer individual treatment effects, and it is based on the linear predictor \[ \mu(d,x) = h(x,e\left(x;\hat\gamma\right)) + t\left(x,e\left(x;\hat\gamma\right) \right)d \] with assumed normal errors. The functions $ h(\cdot,\cdot) $ and $ t(\cdot,\cdot) $ are estimated via the BCF. In this analysis, we assume the PS model is correctly specified and estimated via a parametric logistic regression in both the BCF and the proposed approach. Finally, we consider the frequentist double machine learning (DML) approach proposed in \cite{chernozhukov2018double}. In their method, the ATE estimator, $\theta$, is the solution to $ \mathbb{E} [\psi(Z;\theta, \mu,e(X))]=0$, where $\psi(\cdot)$ is the Neyman-orthogonal moment function defined as \[ \psi(Z;\theta,\mu,e(X))= \mu(1,X)-\mu(0,X) +\frac{D(Y-\mu(1,X))}{e(X)} - \frac{(1-D)(Y-\mu(0,X))}{1-e(X)} - \theta \] where $\mu(\cdot,\cdot)$ is the treatment-effect model and $e(\cdot)$ is the propensity score. Both are estimated via various machine learning approaches. Specifically, we use the FDML estimator in Definition 3.2 in \cite{chernozhukov2018double}, in which the data are partitioned into $K$ groups. The functions $\hat \mu_k(\cdot,\cdot)$ and $\hat e_k(\cdot)$ are estimated using all the data excluding the $k$th group. Then the DML estimator for the ATE is the solution to $ 1/K \sum_{k=1}^{K}\mathbb{E}_k [\psi(Z;\theta,\hat \mu_k,\hat e_k(X))]=0$, where $\mathbb{E}_k(\cdot)$ is the empirical expectation over the $k$th fold of the data.
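For reference, a minimal cross-fitted implementation of the DML estimator based on the moment function $\psi$ above is sketched below; the learners \texttt{fit\_outcome} and \texttt{fit\_ps} are hypothetical user-supplied machine learning fits, and the clipping of the estimated propensity scores is an illustrative stabilisation choice rather than part of the original procedure.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold

def dml_ate(y, d, X, fit_outcome, fit_ps, K=5, seed=0):
    # fit_outcome(X, d, y) returns a callable mu(Xnew, dval)
    # fit_ps(X, d) returns a callable e(Xnew) giving P(D = 1 | x)
    scores = np.zeros(len(y))
    folds = KFold(n_splits=K, shuffle=True, random_state=seed)
    for train, test in folds.split(X):
        mu = fit_outcome(X[train], d[train], y[train])
        e = np.clip(fit_ps(X[train], d[train])(X[test]), 0.01, 0.99)
        mu1, mu0 = mu(X[test], 1), mu(X[test], 0)
        scores[test] = (mu1 - mu0
                        + d[test] * (y[test] - mu1) / e
                        - (1 - d[test]) * (y[test] - mu0) / (1 - e))
    return scores.mean()
\end{verbatim}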
The results of this analysis are presented in Table 4 in the main paper. In the DML, we used the methods described in \cite{chernozhukov2018double}, i.e., regression tree (CART), random forest, boosting (tree-based), and neural network (two neurons) to estimate $\mu(\cdot,\cdot)$ and $e(\cdot)$. There are two hybrid methods: `Ensemble' represents the optimal combination of boosting, random forest and neural network, while `Best' represents the best method for estimating each of $\mu(\cdot,\cdot)$ and $e(\cdot)$, based on the average out-of-sample predictive performance associated with each of the $\mu(\cdot,\cdot)$ and $e(\cdot)$ estimates obtained from the previous machine learning approaches.
\section{UK speed camera data} Figure \ref{ateapp} shows the posterior predictive distributions of percentage changes of the ATE using Method I, II, III.
\begin{figure}
\caption{UK speed camera data. }
\label{ateapp}
\end{figure}
\end{document} | arXiv |
# Activation functions in neural networks
An activation function is a non-linear function applied to a neuron's output to introduce non-linearity into the network. Without activation functions, a neural network is just a composition of linear maps, which is itself a linear function and cannot approximate complex functions.
There are several common activation functions used in neural networks, including:
- Sigmoid: A popular activation function in the early days of neural networks. It maps any input to a value between 0 and 1. The sigmoid function is defined as:
$$\sigma(x) = \frac{1}{1 + e^{-x}}$$
- Hyperbolic tangent (tanh): A function that maps any input to a value between -1 and 1. It is defined as:
$$\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$$
- Rectified Linear Unit (ReLU): A simple and popular activation function that maps any negative input to 0, and keeps positive input unchanged. It is defined as:
$$f(x) = \max(0, x)$$
- Leaky ReLU: An extension of the ReLU function that allows for a small, non-zero gradient when the input is negative. It is defined as:
$$f(x) = \max(ax, x)$$
- Exponential Linear Unit (ELU): A smooth variant of ReLU whose negative part saturates exponentially towards $-a$ instead of being exactly zero. It is defined as:
$$f(x) = \begin{cases}
a(e^x - 1) & \text{if } x < 0 \\
x & \text{if } x \geq 0
\end{cases}$$
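These definitions are straightforward to implement; for instance, a NumPy sketch (with the parameter `a` playing the role of the slope/scale constant in the Leaky ReLU and ELU definitions above) is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    return np.maximum(a * x, x)

def elu(x, a=1.0):
    return np.where(x < 0, a * (np.exp(x) - 1.0), x)
```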
Consider the following input and activation functions:
$$x = [-2, 0, 2]$$
$$f(x) = \max(0, x)$$
Using the ReLU function, the output would be:
$$f(x) = [0, 0, 2]$$
## Exercise
Choose an activation function and apply it to the following input:
$$x = [1, -1, 0]$$
# Gradient descent and backpropagation
Gradient descent is an optimization algorithm used to minimize a cost function. It is the foundation of many machine learning algorithms, including neural networks.
The gradient descent algorithm works by iteratively updating the weights of the neural network in the direction of the negative gradient of the cost function. The gradient is the vector of partial derivatives of the cost function with respect to each weight.
Backpropagation is an extension of gradient descent that is specifically used to train neural networks. It allows us to calculate the gradient of the cost function with respect to each weight in the network. This is done by computing the gradient of the cost function with respect to the output of each neuron, and then using the chain rule to propagate these gradients back through the network.
Consider a simple neural network with one hidden layer and a single output neuron. The cost function is the mean squared error between the predicted output and the true output.
The weights connecting the input layer to the hidden layer are $w_1, w_2, w_3$, and the weights connecting the hidden layer to the output layer are $w_4, w_5$.
The forward pass through the network produces the predicted output $y$, and the true output is $t$. The mean squared error cost function is:
$$C = \frac{1}{2}(y - t)^2$$
Using backpropagation, we can compute the gradient of the cost function with respect to each weight:
$$\frac{\partial C}{\partial w_1} = \frac{\partial C}{\partial y} \frac{\partial y}{\partial w_1}$$
$$\frac{\partial C}{\partial w_2} = \frac{\partial C}{\partial y} \frac{\partial y}{\partial w_2}$$
$$\frac{\partial C}{\partial w_3} = \frac{\partial C}{\partial y} \frac{\partial y}{\partial w_3}$$
$$\frac{\partial C}{\partial w_4} = \frac{\partial C}{\partial y} \frac{\partial y}{\partial w_4}$$
$$\frac{\partial C}{\partial w_5} = \frac{\partial C}{\partial y} \frac{\partial y}{\partial w_5}$$
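To make the chain rule concrete, here is a minimal NumPy sketch. The weight values are assumed, and the hidden activation is kept linear (identity) for simplicity, so the sketch illustrates the mechanics of backpropagation rather than reproducing the exact five-weight network above:

```python
import numpy as np

x  = np.array([0.5, -1.0, 2.0])       # input features (assumed values)
W1 = np.array([[0.1, -0.2, 0.4],      # input-to-hidden weights (2 x 3, assumed)
               [0.3,  0.1, -0.5]])
w2 = np.array([0.7, -0.3])            # hidden-to-output weights (assumed)
t  = 1.0                              # true output

h = W1 @ x                            # hidden layer (linear activation for clarity)
y = w2 @ h                            # predicted output
C = 0.5 * (y - t) ** 2                # mean squared error cost

dC_dy   = y - t                       # dC/dy
grad_w2 = dC_dy * h                   # dC/dw for the output weights
grad_W1 = dC_dy * np.outer(w2, x)     # dC/dW for the hidden weights via the chain rule

print(C, grad_w2, grad_W1)
```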
## Exercise
Calculate the gradient of the cost function with respect to the weights of a neural network with the following architecture: input layer with 3 neurons, hidden layer with 4 neurons, and output layer with 2 neurons.
# Loss functions and their importance
A loss function, also known as a cost function or objective function, is a function that measures the difference between the predicted output and the true output. It is used to evaluate the performance of a model and to guide the training process.
There are several common loss functions used in machine learning, including:
- Mean Squared Error (MSE): A popular loss function for regression problems. It measures the average squared difference between the predicted and true outputs.
- Cross-Entropy Loss: A loss function commonly used for classification problems. It measures the difference between the predicted probabilities and the true probabilities.
- Hinge Loss: A loss function used for Support Vector Machines (SVM). It penalizes predictions that fall on the wrong side of the decision boundary or within the margin.
It is important to choose the appropriate loss function for the problem at hand, as it will influence the training process and the final performance of the model.
Consider a simple regression problem where the true output is $t = 3$ and the predicted output is $y = 2$. The mean squared error loss function is:
$$L = \frac{1}{2}(y - t)^2$$
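A short NumPy sketch of these loss computations (the helper functions and the three-class probabilities are illustrative, not taken from the text):

```python
import numpy as np

def mse(y_pred, y_true):
    # mean squared error with the 1/2 factor used in the example above
    return 0.5 * np.mean((y_pred - y_true) ** 2)

def cross_entropy(p_pred, p_true, eps=1e-12):
    # p_pred and p_true are probability distributions over the classes
    return -np.sum(p_true * np.log(p_pred + eps))

print(mse(np.array([2.0]), np.array([3.0])))        # 0.5, as in the example above
print(cross_entropy(np.array([0.7, 0.2, 0.1]),
                    np.array([1.0, 0.0, 0.0])))     # -log(0.7), about 0.357
```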
## Exercise
Choose a loss function and calculate its value for the following data:
True output: $t = [1, 2, 3]$
Predicted output: $y = [0.5, 1.5, 2.5]$
# Building a neural network in Python
To build a neural network in Python, you can use popular libraries such as TensorFlow and Keras. These libraries provide high-level APIs for defining and training neural networks.
Here is an example of how to build a simple neural network using Keras:
```python
import keras
from keras.models import Sequential
from keras.layers import Dense
# Define the neural network model
model = Sequential()
model.add(Dense(64, input_dim=100, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='linear'))
# Compile the model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
```
This code defines a neural network with two hidden layers of 64 and 32 neurons and an output layer of 1 neuron. The activation function for the hidden layers is the ReLU function, and the activation function for the output layer is the linear function.
Here is an example of how to build a deeper neural network with three hidden layers using Keras:
```python
import keras
from keras.models import Sequential
from keras.layers import Dense
# Define the neural network model
model = Sequential()
model.add(Dense(128, input_dim=100, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='linear'))
# Compile the model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
```
This code defines a neural network with three hidden layers of 128, 64, and 32 neurons, respectively, and an output layer of 1 neuron. The activation function for the hidden layers is the ReLU function, and the activation function for the output layer is the linear function.
## Exercise
Build a neural network using Keras with the following architecture: input layer with 100 neurons, hidden layer with 128 neurons, and output layer with 5 neurons. Use the ReLU activation function for the hidden layer and the softmax activation function for the output layer.
# Training and optimizing a neural network
To train a neural network, you need to provide it with input-output pairs and adjust the weights of the network to minimize the loss function. This is done using an optimization algorithm such as gradient descent or stochastic gradient descent.
Here is an example of how to train a neural network using Keras:
```python
import numpy as np
# Generate dummy data
X_train = np.random.rand(1000, 100)
y_train = np.random.rand(1000, 1)
# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=32)
```
This code generates random input-output pairs and trains the neural network for 100 epochs using a batch size of 32.
Here is an example of how to train a neural network using stochastic gradient descent:
```python
import numpy as np
# Generate dummy data
X_train = np.random.rand(1000, 100)
y_train = np.random.rand(1000, 1)
# Train the model using stochastic gradient descent
for epoch in range(100):
    for i in range(0, X_train.shape[0], 32):
        X_batch = X_train[i:i+32]
        y_batch = y_train[i:i+32]
        model.train_on_batch(X_batch, y_batch)
```
This code trains the neural network using stochastic gradient descent with a batch size of 32.
## Exercise
Train the neural network built in the previous section using the following data:
Input data: `X_train = np.random.rand(1000, 100)`
Output data: `y_train = np.random.rand(1000, 5)`
# Evaluating a neural network
To evaluate the performance of a neural network, you need to use a separate dataset that was not used during the training process. This dataset is called the validation set or the test set.
Here is an example of how to evaluate a neural network using Keras:
```python
import numpy as np
# Generate dummy validation data
X_val = np.random.rand(100, 100)
y_val = np.random.rand(100, 1)
# Evaluate the model
loss, accuracy = model.evaluate(X_val, y_val)
print(f'Loss: {loss}, Accuracy: {accuracy}')
```
This code evaluates the neural network on a validation dataset and prints the loss and accuracy.
Here is an example of how to evaluate a neural network using the mean squared error loss function:
```python
import numpy as np
# Generate dummy validation data
X_val = np.random.rand(100, 100)
y_val = np.random.rand(100, 1)
# Make predictions
y_pred = model.predict(X_val)
# Calculate the mean squared error
mse = np.mean((y_pred - y_val)**2)
print(f'Mean Squared Error: {mse}')
```
This code evaluates the neural network on a validation dataset and calculates the mean squared error.
## Exercise
Evaluate the neural network built in the previous section using the following validation data:
Input data: `X_val = np.random.rand(100, 100)`
Output data: `y_val = np.random.rand(100, 5)`
# Applications of neural networks in machine learning
Neural networks have been successfully applied to a wide range of machine learning problems, including:
- Image recognition: Neural networks with convolutional layers have been used to achieve state-of-the-art results on image classification tasks, such as ImageNet.
- Natural language processing: Recurrent neural networks have been used to achieve good results on tasks such as sentiment analysis and machine translation.
- Speech recognition: Deep learning techniques have been used to achieve state-of-the-art results on speech recognition tasks, such as the Google Speech API.
- Game playing: Deep reinforcement learning has been used to train agents to play games such as Go and Atari.
- Recommender systems: Collaborative filtering and matrix factorization techniques have been used to build recommender systems based on neural networks.
- Anomaly detection: Neural networks have been used to detect anomalies in time series data and other domains.
Here is an example of using a neural network for image classification:
```python
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# Define the neural network model
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train the model
# ...
# Evaluate the model
# ...
```
This code defines a convolutional neural network for image classification with one convolutional layer, a max-pooling layer, and one fully connected output layer. The activation function for the output layer is the softmax function.
## Exercise
Choose a machine learning problem and design a neural network architecture to solve it.
# Fine-tuning hyperparameters
Fine-tuning hyperparameters is an important step in training a neural network. It involves adjusting the learning rate, batch size, number of layers, number of neurons per layer, and other parameters to achieve the best possible performance.
Here is an example of how to fine-tune hyperparameters using Keras:
```python
import numpy as np
# Generate dummy data
X_train = np.random.rand(1000, 100)
y_train = np.random.rand(1000, 1)
# Fine-tune hyperparameters
learning_rate = 0.001
batch_size = 32
epochs = 100
# Train the model
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.Adam(learning_rate), metrics=['accuracy'])
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size)
```
This code adjusts the learning rate and batch size to fine-tune the hyperparameters of the neural network.
Here is an example of how to fine-tune hyperparameters using a grid search:
```python
import numpy as np
from sklearn.model_selection import GridSearchCV
# Generate dummy data
X_train = np.random.rand(1000, 100)
y_train = np.random.rand(1000, 1)
# Define the hyperparameter grid
param_grid = {'learning_rate': [0.001, 0.01, 0.1], 'batch_size': [32, 64, 128]}
# Fine-tune hyperparameters using grid search
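# Note: GridSearchCV expects a scikit-learn-compatible estimator, so in practice the
# Keras model must first be wrapped (e.g. with a KerasRegressor-style wrapper) and the
# tuned parameters exposed through a model-building function for this to run as written.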
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, scoring='neg_mean_squared_error', cv=5)
grid_search.fit(X_train, y_train)
# Get the best hyperparameters
best_params = grid_search.best_params_
print(f'Best hyperparameters: {best_params}')
```
This code uses a grid search to fine-tune the hyperparameters of the neural network.
## Exercise
Fine-tune the hyperparameters of the neural network built in the previous section using a grid search or another hyperparameter tuning method.
# Transfer learning
Transfer learning is a technique where a pre-trained neural network is used as the starting point for training a new neural network on a different task. This can save time and computational resources, and can sometimes achieve better performance than training from scratch.
Here is an example of using transfer learning with a pre-trained neural network:
```python
import keras
from keras.applications import VGG16
from keras.models import Sequential
from keras.layers import Dense, Flatten
# Load a pre-trained neural network
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# Define the new neural network
model = Sequential()
model.add(base_model)
model.add(Flatten())  # flatten the convolutional feature maps before the dense layer
model.add(Dense(10, activation='softmax'))
# Fine-tune the new neural network
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train the new neural network
# ...
# Evaluate the new neural network
# ...
```
This code loads the VGG16 neural network pre-trained on the ImageNet dataset, removes the top layers, and adds a new fully connected layer for the new task.
Here is an example of using transfer learning with a pre-trained neural network for image classification:
```python
import keras
from keras.applications import VGG16
from keras.models import Sequential
from keras.layers import Dense, Flatten
# Load a pre-trained neural network
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# Define the new neural network
model = Sequential()
model.add(base_model)
model.add(Flatten())  # flatten the convolutional feature maps before the dense layers
model.add(Dense(1000, activation='relu'))
model.add(Dense(10, activation='softmax'))
# Fine-tune the new neural network
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train the new neural network
# ...
# Evaluate the new neural network
# ...
```
This code loads the VGG16 neural network pre-trained on the ImageNet dataset, removes the top layers, and adds two new fully connected layers for the new task.
## Exercise
Use transfer learning to train a neural network for a new task using a pre-trained neural network.
# Conclusion
In this textbook, we have covered the fundamentals of neural networks for machine learning in Python. We have learned about activation functions, gradient descent, backpropagation, loss functions, building neural networks, training and optimizing neural networks, evaluating neural networks, and applications of neural networks in machine learning.
By the end of this textbook, you should have a solid understanding of how to build and train neural networks, and be able to apply this knowledge to a wide range of machine learning problems.
Remember, practice makes perfect. Keep experimenting with different neural network architectures and hyperparameters to improve your skills and understanding.
What mass of water is present as a liquid when equilibrium is established?
$1.00~\mathrm{g}$ of water is introduced into a $5.00~\mathrm{L}$ flask at $50~^\circ\mathrm{C}$. What mass of water is present as a liquid when equilibrium is established? The vapor pressure of water at this temperature is given as $92.5~\mathrm{mmHg}$.
1. $0.083~\mathrm{g}$
2. $0.41~\mathrm{g}$
3. $0.59~\mathrm{g}$
I need thorough explanation for each step because I'm new to this. This question is in the gases section of the book, so I think I need to apply the ideal gas law, but I don't know.
With help from the comments, I came up with the following solution:
$$ \begin{align} \frac{92.5}{760}~\mathrm{atm} \cdot 5.00~\mathrm{L} &= n \cdot 0.08206 \cdot 323~\mathrm{K}\\ n &= 0.023~\mathrm{mol} \end{align} $$
Since $1~\mathrm{mol}$ of $\ce{H2O}$ is $18~\mathrm{g}$, the answer is $0.414~\mathrm{g}$, or (2) $0.41~\mathrm{g}$.
physical-chemistry gas-laws
Gaurang Tandon
chemistryflair
The given temperature of $T=50\ \mathrm{^\circ C}$ and the given pressure of $p=92.5\ \mathrm{mmHg}$ approximately correspond to equilibrium conditions for liquid water and steam. (Strictly speaking, the corresponding saturation pressure for a given temperature of $T=50.000\ \mathrm{^\circ C}$ is $p=92.647\ \mathrm{mmHg}$, which could be rounded to $p=92.6\ \mathrm{mmHg}$ and not $p=92.5\ \mathrm{mmHg}$; in the opposite direction, however, the corresponding saturation temperature for a given pressure of $p=92.500\ \mathrm{mmHg}$ is $T=49.968\ \mathrm{^\circ C}$, which could be rounded to $T=50\ \mathrm{^\circ C}$.) The question does not explain whether these values describe the initial or final state; however, since the values correspond to equilibrium conditions, we may assume that these values apply to the final state when equilibrium is established (e.g. this can be achieved by keeping the closed system at a constant temperature of $T=50\ \mathrm{^\circ C}$).
Strictly speaking, the question doesn't even explain whether the added water initially is liquid or steam; however, this is not relevant for the final state. If the water is added as liquid, a part of it evaporates until the equilibrium state is reached in the closed container; if the water is introduced as steam, a part of it condenses until the equilibrium state is reached. Therefore, when the defined equilibrium is established, the container contains certain amounts of liquid water and steam irrespective of the initial conditions.
The question does not mention any air in the container. For simplicity's sake, we may assume that the container has been evacuated before the experiment and contains only liquid water and vapour.
The available volume of the container is reduced by the volume of the liquid water. However, since the density of liquid water at the given temperature and pressure is about $\rho=988\ \mathrm{kg\ m^{-3}}$, the given mass of $m=1.00\ \mathrm g$ corresponds to a maximum volume of $V_\text{liquid}=1.01\ \mathrm{ml}=0.00101\ \mathrm l$, assuming that no water has evaporated. Considering that the total volume is given as $V=5.00\ \mathrm l$ (note the number of significant digits), the difference caused by the liquid water is not significant and may be neglected.
Note that the use of the non-SI unit "conventional millimetre of mercury" (unit symbol: mmHg) is deprecated; the use of SI units is to be preferred. $$1\ \mathrm{mmHg}\approx 133.3224\ \mathrm{Pa}$$ You might want to convert the given values to coherent SI units, or find a value for the molar gas constant $R$ that is expressed in a suitable unit.
The following data are given:
Temperature $T=50\ \mathrm{^\circ C}=323.15\ \mathrm K$
Pressure $p=92.5\ \mathrm{mmHg}=12332.322\ \mathrm{Pa}$
Volume $V=5.00\ \mathrm l=0.00500\ \mathrm{m^3}$
The value of the molar gas constant is $R=8.314462618\ \mathrm{J\ mol^{-1}\ K^{-1}}$.[source]
The amount of vapour $n$ in the container may be estimated using the ideal gas law $$\begin{align} p\cdot V&=n\cdot R\cdot T\\[6pt] n&=\frac{p\cdot V}{R\cdot T}\\[6pt] &=\frac{12332.322\ \mathrm{Pa}\times0.00500\ \mathrm{m^3}}{8.314462618\ \mathrm{J\ mol^{-1}\ K^{-1}}\times323.15\ \mathrm K}\\[6pt] &=0.022949674\ \mathrm{mol} \end{align}$$ (Note that this intermediate result is given with an excessive number of digits. It is deliberately not rounded in order to avoid round-off errors in subsequent calculations.)
Since the molar mass of water is $M=18.01528\ \mathrm{g\ mol^{-1}}$, the corresponding mass of vapour is $$\begin{align} m&=M\cdot n\\[6pt] &=18.01528\ \mathrm{g\ mol^{-1}}\times0.022949674\ \mathrm{mol}\\[6pt] &=0.413444803\ \mathrm g\\[6pt] &\approx0.41\ \mathrm g \end{align}$$ (Note that this final result is rounded to two significant digits. The number of significant digits is estimated in view of the fact that the mass of water $m=1.00\ \mathrm g$, the volume $V=5.00\ \mathrm l$, and the pressure $p=92.5\ \mathrm{mmHg}$ are given with three significant digits, but the temperature $T=50\ \mathrm{^\circ C}$ is only given with two significant digits.)
This result is the mass of vapour $m_\text{vapour}$ in the container, whereas the question is about the mass of liquid water $m_\text{liquid}$, which can be calculated from the given total mass of water $m_\text{total}=1.00\ \mathrm{g}$. $$\begin{align} m_\text{total}&=m_\text{vapour}+m_\text{liquid}\\[6pt] m_\text{liquid}&=m_\text{total}-m_\text{vapour}\\[6pt] &=1.00\ \mathrm g-0.41\ \mathrm g\\[6pt] &=0.59\ \mathrm g \end{align}$$
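If one wants to reproduce the arithmetic programmatically, a short Python sketch (using the same values as above) returns the same rounded results:

```python
p = 92.5 * 133.322368   # pressure in Pa
V = 5.00e-3             # volume in m^3
R = 8.314462618         # molar gas constant in J mol^-1 K^-1
T = 323.15              # temperature in K
M = 18.01528            # molar mass of water in g mol^-1

n        = p * V / (R * T)   # amount of vapour in mol
m_vapour = M * n             # mass of vapour in g
m_liquid = 1.00 - m_vapour   # mass of liquid water in g
print(round(m_vapour, 2), round(m_liquid, 2))   # 0.41 0.59
```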
Therefore, the correct answer is option 3) $0.59\ \mathrm g$.
By way of comparison, precise engineering calculations for water and steam usually do not rely on the ideal gas law but use so-called steam tables. We may use such steam tables to check the accuracy of our estimate that has been based on the ideal gas law.
For example, for a temperature of $T=50\ \mathrm{^\circ C}$ at equilibrium, we find the following values in REFPROP – NIST Standard Reference Database 23, Version 9.0:
Pressure $p=12352\ \mathrm{Pa}$
Liquid density $\rho_\text{liquid}=988.00\ \mathrm{kg\ m^{-3}}=988.00\ \mathrm{g\ l^{-1}}$
Vapour density $\rho_\text{vapour}=0.083147\ \mathrm{kg\ m^{-3}}=0.083147\ \mathrm{g\ l^{-1}}$
Furthermore, the database includes various other thermodynamic and transport properties, which are not required for the following calculations.
If you do not have access to professional steam tables, you may want to consider using the steam tables that are included in WolframAlpha. The corresponding results for a temperature of $T=50\ \mathrm{^\circ C}$ at equilibrium can be obtained using the input "water boiling at 50 °C":
Liquid density $\rho_\text{liquid}=988\ \mathrm{kg\ m^{-3}}=988\ \mathrm{g\ l^{-1}}$
Vapour density $\rho_\text{vapour}=0.08315\ \mathrm{kg\ m^{-3}}=0.08315\ \mathrm{g\ l^{-1}}$
Note: The indicated values for STP are not in accordance with IUPAC recommendations.
We know that the mass balance in the container is $$m_\text{total}=m_\text{vapour}+m_\text{liquid}\tag{1}$$ where $m_\text{total}=1.0000\ \mathrm g$ is the total mass of water (liquid and vapour) in the container (in this example, the precision of this value is arbitrarily increased to five significant digits).
Since density is defined as $$\rho=\frac mV$$ the corresponding volume balance can also be expressed in terms of mass: $$\begin{align} V_\text{total}&=V_\text{vapour}+V_\text{liquid}\\[6pt] &=\frac{m_\text{vapour}}{\rho_\text{vapour}}+\frac{m_\text{liquid}}{\rho_\text{liquid}}\tag{2} \end{align}$$ where $V_\text{total}=5.0000\ \mathrm l$ is the total volume of the container (in this example, the precision of this value is arbitrarily increased to five significant digits).
Solving the system of the equations $\text(1)$ and $\text(2)$ yields the solutions
$$\begin{align} m_\text{vapour}&=\frac{\rho_\text{vapour}\cdot\left(V_\text{total}\cdot\rho_\text{liquid}-m_\text{total}\right)}{\rho_\text{liquid}-\rho_\text{vapour}}\\[6pt] &=\frac{0.083147\ \mathrm{g\ l^{-1}}\times\left(5.0000\ \mathrm l\times988.00\ \mathrm{g\ l^{-1}}-1.0000\ \mathrm g\right)}{988.00\ \mathrm{g\ l^{-1}}-0.083147\ \mathrm{g\ l^{-1}}}\\[6pt] &=0.41569\ \mathrm g \end{align}$$
$$\begin{align} m_\text{liquid}&=-\frac{\rho_\text{liquid}\cdot\left(V_\text{total}\cdot\rho_\text{vapour}-m_\text{total}\right)}{\rho_\text{liquid}-\rho_\text{vapour}}\\[6pt] &=-\frac{988.00\ \mathrm{g\ l^{-1}}\times\left(5.0000\ \mathrm l\times0.083147\ \mathrm{g\ l^{-1}}-1.0000\ \mathrm g\right)}{988.00\ \mathrm{g\ l^{-1}}-0.083147\ \mathrm{g\ l^{-1}}}\\[6pt] &=0.58431\ \mathrm g \end{align}$$
These results confirm that the above-mentioned estimates $m_\text{vapour}\approx0.41\ \mathrm g$ and $m_\text{liquid}\approx0.59\ \mathrm g$, which have been obtained using the ideal gas law and neglecting the volume of the liquid water, are reasonable and sufficient in order to answer the question.
You may have noticed that I am back to publishing regular blog posts! My goal for now is a blog post every second Wednesday. I am now also trying to answer forum questions promptly. I want to thank the readers who took up the slack for the last year and a half in answering questions in the forums. In particular, I'd like to call out abieniek, Alexei Kassymov, and Lane Walker, whose answers were always spot on.
Now to the topic of this post. There has been a lot of talk since the standards came out about what they say about multiple methods for arithmetic operations, and I'd like to clear up a couple of points.
First, the standards do encourage that students have access to multiple methods as they learn to add, subtract, multiply and divide. But this does not mean that you have to solve every problem in multiple ways. Having different methods available is like having different means of transportation available to get to work; flexibility is good, but it doesn't mean you have to go to school by car, then by bus, then walk, then bike—every single day! The point of having multiple methods available is to encourage students to think strategically about what might be the best method for a given problem, not force them to solve every problem four times.
Second, the different methods are not unrelated; they form a progression, with the ultimate goal being the standard algorithm. For example, when students are first learning to multiply two digit numbers, they might use a rectangle to represent a product such as $42 \times 71$.
This shows the fundamental role of the distributive property in multiplying multi-digit numbers. You have to multiply each base ten component into each other one. Indeed, the same rectangle representation provides a visual proof of the distributive property itself.
At some later point students might just start writing down all the partial products, without using the rectangle to derive them.
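The original post shows this step as an image; in text form, one possible layout of the partial products for $42 \times 71$, grouped right to left within each two-digit number, is:

$$\begin{array}{r} 71 \\ \times \; 42 \\ \hline 2 \\ 140 \\ 40 \\ 2800 \\ \hline 2982 \end{array}$$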
Note the correspondence between the rectangle method and the partial product method, indicated by the colors. The first row of the rectangle shows all the products by the 2 in 42 (in red); the second row shows all the products by the 40 (in blue). The products in the partial product method are grouped in the same way. There are many ways you can order the partial products, but if you group them as I have here, going from right to left in each two-digit number, as in the standard algorithm, you make an amazing discovery: you can add up all the partial products in each group (blue group or red group) in your head as you go along. That's because, in each case, adding the 2 to the 140 or the 40 to the 2800, there are enough zeroes in the second addend to accommodate the first, so it is easy to write down the sum right away, without writing the addends separately.
OK, so it's not always quite this easy, because every now and then you will have to keep in mind a bundled unit from the previous step (aka carrying), but you will never have to remember that for more than one step at a time, because each bundled unit gets used up at the next step. So if you invent a notation for remembering the bundled unit (what we used to call "little 1 in the corner" when I was growing up) then you can still avoid writing down all the partial products, and just compute the sum within each group as you go along. You have just created the standard algorithm.
The different methods are not isolated different ways of doing the same thing; they are steps towards fluency with the standard algorithm, fluency that is not fragile because it is supported by understanding.
Bill McCallum, founder of Illustrative Mathematics, is a University Distinguished Professor of Mathematics at the University of Arizona. He has worked in both mathematics research, in the area of number theory and arithmetical algebraic geometry, and mathematics education, writing textbooks and advising researchers and policy makers. He is a founding member of the Harvard Calculus Consortium and lead author of its college algebra and multivariable calculus texts. In 2009–2010 he was one of the lead writers for the Common Core State Standards in Mathematics. He holds a Ph. D. in Mathematics from Harvard University and a B.Sc. from the University of New South Wales. | CommonCrawl |
Operator norm
In mathematics, the operator norm measures the "size" of certain linear operators by assigning each a real number called its operator norm. Formally, it is a norm defined on the space of bounded linear operators between two given normed vector spaces. Informally, the operator norm $\|T\|$ of a linear map $T:X\to Y$ is the maximum factor by which it "lengthens" vectors.
Introduction and definition
Given two normed vector spaces $V$ and $W$ (over the same base field, either the real numbers $\mathbb {R} $ or the complex numbers $\mathbb {C} $), a linear map $A:V\to W$ is continuous if and only if there exists a real number $c$ such that[1]
$\|Av\|\leq c\|v\|\quad {\mbox{ for all }}v\in V.$
The norm on the left is the one in $W$ and the norm on the right is the one in $V$. Intuitively, the continuous operator $A$ never increases the length of any vector by more than a factor of $c.$ Thus the image of a bounded set under a continuous operator is also bounded. Because of this property, the continuous linear operators are also known as bounded operators. In order to "measure the size" of $A,$ one can take the infimum of the numbers $c$ such that the above inequality holds for all $v\in V.$ This number represents the maximum scalar factor by which $A$ "lengthens" vectors. In other words, the "size" of $A$ is measured by how much it "lengthens" vectors in the "biggest" case. So we define the operator norm of $A$ as
$\|A\|_{op}=\inf\{c\geq 0:\|Av\|\leq c\|v\|{\mbox{ for all }}v\in V\}.$
The infimum is attained as the set of all such $c$ is closed, nonempty, and bounded from below.[2]
It is important to bear in mind that this operator norm depends on the choice of norms for the normed vector spaces $V$ and $W$.
Examples
Every real $m$-by-$n$ matrix corresponds to a linear map from $\mathbb {R} ^{n}$ to $\mathbb {R} ^{m}.$ Each pair of the plethora of (vector) norms applicable to real vector spaces induces an operator norm for all $m$-by-$n$ matrices of real numbers; these induced norms form a subset of matrix norms.
If we specifically choose the Euclidean norm on both $\mathbb {R} ^{n}$ and $\mathbb {R} ^{m},$ then the matrix norm given to a matrix $A$ is the square root of the largest eigenvalue of the matrix $A^{*}A$ (where $A^{*}$ denotes the conjugate transpose of $A$).[3] This is equivalent to assigning the largest singular value of $A.$
Passing to a typical infinite-dimensional example, consider the sequence space $\ell ^{2},$ which is an Lp space, defined by
$l^{2}=\left\{\left(a_{n}\right)_{n\geq 1}:\;a_{n}\in \mathbb {C} ,\;\sum _{n}|a_{n}|^{2}<\infty \right\}.$
This can be viewed as an infinite-dimensional analogue of the Euclidean space $\mathbb {C} ^{n}.$ Now consider a bounded sequence $s_{\bullet }=\left(s_{n}\right)_{n=1}^{\infty }.$ The sequence $s_{\bullet }$ is an element of the space $\ell ^{\infty },$ with a norm given by
$\left\|s_{\bullet }\right\|_{\infty }=\sup _{n}\left|s_{n}\right|.$
Define an operator $T_{s}$ by pointwise multiplication:
$\left(a_{n}\right)_{n=1}^{\infty }\;{\stackrel {T_{s}}{\mapsto }}\;\ \left(s_{n}\cdot a_{n}\right)_{n=1}^{\infty }.$
The operator $T_{s}$ is bounded with operator norm
$\left\|T_{s}\right\|_{op}=\left\|s_{\bullet }\right\|_{\infty }.$
This discussion extends directly to the case where $\ell ^{2}$ is replaced by a general $L^{p}$ space with $p>1$ and $\ell ^{\infty }$ replaced by $L^{\infty }.$
Equivalent definitions
Let $A:V\to W$ be a linear operator between normed spaces. The first four definitions are always equivalent, and if in addition $V\neq \{0\}$ then they are all equivalent:
${\begin{alignedat}{4}\|A\|_{op}&=\inf &&\{c\geq 0~&&:~\|Av\|\leq c\|v\|~&&~{\mbox{ for all }}~&&v\in V\}\\&=\sup &&\{\|Av\|~&&:~\|v\|\leq 1~&&~{\mbox{ and }}~&&v\in V\}\\&=\sup &&\{\|Av\|~&&:~\|v\|<1~&&~{\mbox{ and }}~&&v\in V\}\\&=\sup &&\{\|Av\|~&&:~\|v\|\in \{0,1\}~&&~{\mbox{ and }}~&&v\in V\}\\&=\sup &&\{\|Av\|~&&:~\|v\|=1~&&~{\mbox{ and }}~&&v\in V\}\;\;\;{\text{ this equality holds if and only if }}V\neq \{0\}\\&=\sup &&{\bigg \{}{\frac {\|Av\|}{\|v\|}}~&&:~v\neq 0~&&~{\mbox{ and }}~&&v\in V{\bigg \}}\;\;\;{\text{ this equality holds if and only if }}V\neq \{0\}.\\\end{alignedat}}$
If $V=\{0\}$ then the sets in the last two rows will be empty, and consequently their supremums over the set $[-\infty ,\infty ]$ will equal $-\infty $ instead of the correct value of $0.$ If the supremum is taken over the set $[0,\infty ]$ instead, then the supremum of the empty set is $0$ and the formulas hold for any $V.$
Importantly, a linear operator $A:V\to W$ is not, in general, guaranteed to achieve its norm $\|A\|_{op}=\sup\{\|Av\|:\|v\|\leq 1,v\in V\}$ on the closed unit ball $\{v\in V:\|v\|\leq 1\},$ meaning that there might not exist any vector $u\in V$ of norm $\|u\|\leq 1$ such that $\|A\|_{op}=\|Au\|$ (if such a vector does exist and if $A\neq 0,$ then $u$ would necessarily have unit norm $\|u\|=1$). R.C. James proved James's theorem in 1964, which states that a Banach space $V$ is reflexive if and only if every bounded linear functional $f\in V^{*}$ achieves its norm on the closed unit ball.[4] It follows, in particular, that every non-reflexive Banach space has some bounded linear functional (a type of bounded linear operator) that does not achieve its norm on the closed unit ball.
If $A:V\to W$ is bounded then[5]
$\|A\|_{op}=\sup \left\{\left|w^{*}(Av)\right|:\|v\|\leq 1,\left\|w^{*}\right\|\leq 1{\text{ where }}v\in V,w^{*}\in W^{*}\right\}$
and[5]
$\|A\|_{op}=\left\|{}^{t}A\right\|_{op}$
where ${}^{t}A:W^{*}\to V^{*}$ is the transpose of $A:V\to W,$ which is the linear operator defined by $w^{*}\,\mapsto \,w^{*}\circ A.$
Properties
The operator norm is indeed a norm on the space of all bounded operators between $V$ and $W$. This means
$\|A\|_{op}\geq 0{\mbox{ and }}\|A\|_{op}=0{\mbox{ if and only if }}A=0,$
$\|aA\|_{op}=|a|\|A\|_{op}{\mbox{ for every scalar }}a,$
$\|A+B\|_{op}\leq \|A\|_{op}+\|B\|_{op}.$
The following inequality is an immediate consequence of the definition:
$\|Av\|\leq \|A\|_{op}\|v\|\ {\mbox{ for every }}\ v\in V.$
The operator norm is also compatible with the composition, or multiplication, of operators: if $V$, $W$ and $X$ are three normed spaces over the same base field, and $A:V\to W$ and $B:W\to X$ are two bounded operators, then it is a sub-multiplicative norm, that is:
$\|BA\|_{op}\leq \|B\|_{op}\|A\|_{op}.$
For bounded operators on $V$, this implies that operator multiplication is jointly continuous.
It follows from the definition that if a sequence of operators converges in operator norm, it converges uniformly on bounded sets.
Table of common operator norms
By choosing different norms for the domain, used in computing $\|Av\|$, and the codomain, used in computing $\|v\|$, we obtain different values for the operator norm. Some common operator norms are easy to calculate, and others are NP-hard. Except for the NP-hard norms, all these norms can be calculated in $N^{2}$ operations (for an $N\times N$ matrix), with the exception of the $\ell _{2}-\ell _{2}$ norm (which requires $N^{3}$ operations for the exact answer, or fewer if you approximate it with the power method or Lanczos iterations).
Computability of Operator Norms[6] (rows give the domain norm, columns the co-domain norm):

Domain $\ell _{1}$: maximum $\ell _{1}$ norm of a column (co-domain $\ell _{1}$); maximum $\ell _{2}$ norm of a column (co-domain $\ell _{2}$); maximum $\ell _{\infty }$ norm of a column (co-domain $\ell _{\infty }$).
Domain $\ell _{2}$: NP-hard (co-domain $\ell _{1}$); maximum singular value (co-domain $\ell _{2}$); maximum $\ell _{2}$ norm of a row (co-domain $\ell _{\infty }$).
Domain $\ell _{\infty }$: NP-hard (co-domain $\ell _{1}$); NP-hard (co-domain $\ell _{2}$); maximum $\ell _{1}$ norm of a row (co-domain $\ell _{\infty }$).
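As a concrete illustration (not part of the original article), the three easily computable induced norms in the table can be checked numerically with NumPy, whose `numpy.linalg.norm` with `ord=1`, `ord=2`, and `ord=np.inf` returns the maximum absolute column sum, the largest singular value, and the maximum absolute row sum, respectively:

```python
import numpy as np

A = np.array([[1.0, -2.0, 3.0],
              [4.0,  0.0, -1.0]])

n1   = np.linalg.norm(A, 1)        # maximum l1 norm of a column
n2   = np.linalg.norm(A, 2)        # largest singular value
ninf = np.linalg.norm(A, np.inf)   # maximum l1 norm of a row

# The same quantities computed directly from the table entries:
assert np.isclose(n1,   np.abs(A).sum(axis=0).max())
assert np.isclose(ninf, np.abs(A).sum(axis=1).max())
assert np.isclose(n2,   np.linalg.svd(A, compute_uv=False).max())

print(n1, n2, ninf)
```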
The norm of the adjoint or transpose can be computed as follows. We have that for any $p,q,$ then $\|A\|_{p\rightarrow q}=\|A^{*}\|_{q'\rightarrow p'}$ where $p',q'$ are Hölder conjugate to $p,q,$ that is, $1/p+1/p'=1$ and $1/q+1/q'=1.$
Operators on a Hilbert space
Suppose $H$ is a real or complex Hilbert space. If $A:H\to H$ is a bounded linear operator, then we have
$\|A\|_{op}=\left\|A^{*}\right\|_{op}$
and
$\left\|A^{*}A\right\|_{op}=\|A\|_{op}^{2},$
where $A^{*}$ denotes the adjoint operator of $A$ (which in Euclidean spaces with the standard inner product corresponds to the conjugate transpose of the matrix $A$).
In general, the spectral radius of $A$ is bounded above by the operator norm of $A$:
$\rho (A)\leq \|A\|_{op}.$
To see why equality may not always hold, consider the Jordan canonical form of a matrix in the finite-dimensional case. Because there are non-zero entries on the superdiagonal, equality may be violated. Quasinilpotent operators are one class of such examples. A nonzero quasinilpotent operator $A$ has spectrum $\{0\}.$ So $\rho (A)=0$ while $\|A\|_{op}>0.$
However, when a matrix $N$ is normal, its Jordan canonical form is diagonal (up to unitary equivalence); this is the spectral theorem. In that case it is easy to see that
$\rho (N)=\|N\|_{op}.$
This formula can sometimes be used to compute the operator norm of a given bounded operator $A$: define the Hermitian operator $B=A^{*}A,$ determine its spectral radius, and take the square root to obtain the operator norm of $A.$
The space of bounded operators on $H,$ with the topology induced by operator norm, is not separable. For example, consider the Lp space $L^{2}[0,1],$ which is a Hilbert space. For $0<t\leq 1,$ let $\Omega _{t}$ be the characteristic function of $[0,t],$ and $P_{t}$ be the multiplication operator given by $\Omega _{t},$ that is,
$P_{t}(f)=f\cdot \Omega _{t}.$
Then each $P_{t}$ is a bounded operator with operator norm 1 and
$\left\|P_{t}-P_{s}\right\|_{op}=1\quad {\mbox{ for all }}\quad t\neq s.$
But $\{P_{t}:0<t\leq 1\}$ is an uncountable set. This implies the space of bounded operators on $L^{2}([0,1])$ is not separable, in operator norm. One can compare this with the fact that the sequence space $\ell ^{\infty }$ is not separable.
The associative algebra of all bounded operators on a Hilbert space, together with the operator norm and the adjoint operation, yields a C*-algebra.
See also
• Banach–Mazur compactum – Set of n-dimensional subspaces of a normed space made into a compact metric space.
• Continuous linear operator
• Contraction (operator theory) – Bounded operators with sub-unit norm
• Discontinuous linear map
• Dual norm – Measurement on a normed vector space
• Matrix norm – Norm on a vector space of matrices
• Norm (mathematics) – Length in a vector space
• Normed space – Vector space on which a distance is definedPages displaying short descriptions of redirect targets
• Operator algebra – Branch of functional analysis
• Operator theory – Mathematical field of study
• Topologies on the set of operators on a Hilbert space
• Unbounded operator – Linear operator defined on a dense linear subspace
Notes
1. Kreyszig, Erwin (1978), Introductory functional analysis with applications, John Wiley & Sons, p. 97, ISBN 9971-51-381-1
2. See e.g. Lemma 6.2 of Aliprantis & Border (2007).
3. Weisstein, Eric W. "Operator Norm". mathworld.wolfram.com. Retrieved 2020-03-14.
4. Diestel 1984, p. 6.
5. Rudin 1991, pp. 92–115.
6. section 4.3.1, Joel Tropp's PhD thesis,
References
• Aliprantis, Charalambos D.; Border, Kim C. (2007), Infinite Dimensional Analysis: A Hitchhiker's Guide, Springer, p. 229, ISBN 9783540326960.
• Conway, John B. (1990), "III.2 Linear Operators on Normed Spaces", A Course in Functional Analysis, New York: Springer-Verlag, pp. 67–69, ISBN 0-387-97245-5
• Diestel, Joe (1984). Sequences and series in Banach spaces. New York: Springer-Verlag. ISBN 0-387-90859-5. OCLC 9556781.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Median (geometry)
In geometry, a median of a triangle is a line segment joining a vertex to the midpoint of the opposite side, thus bisecting that side. Every triangle has exactly three medians, one from each vertex, and they all intersect each other at the triangle's centroid. In the case of isosceles and equilateral triangles, a median bisects any angle at a vertex whose two adjacent sides are equal in length. The concept of a median extends to tetrahedra.
Relation to center of mass
Each median of a triangle passes through the triangle's centroid, which is the center of mass of an infinitely thin object of uniform density coinciding with the triangle.[1] Thus the object would balance on the intersection point of the medians. The centroid is twice as close along any median to the side that the median intersects as it is to the vertex it emanates from.
Equal-area division
Each median divides the area of the triangle in half; hence the name, and hence a triangular object of uniform density would balance on any median. (Any other lines which divide the area of the triangle into two equal parts do not pass through the centroid.)[2][3] The three medians divide the triangle into six smaller triangles of equal area.
Proof of equal-area property
Consider a triangle ABC. Let D be the midpoint of ${\overline {AB}}$, E be the midpoint of ${\overline {BC}}$, F be the midpoint of ${\overline {AC}}$, and O be the centroid (most commonly denoted G).
By definition, $AD=DB,AF=FC,BE=EC$. Thus $[ADO]=[BDO],[AFO]=[CFO],[BEO]=[CEO],$ and $[ABE]=[ACE]$, where $[ABC]$ represents the area of triangle $\triangle ABC$ ; these hold because in each case the two triangles have bases of equal length and share a common altitude from the (extended) base, and a triangle's area equals one-half its base times its height.
We have:
$[ABO]=[ABE]-[BEO]$
$[ACO]=[ACE]-[CEO]$
Thus, $[ABO]=[ACO]$ and $[ADO]=[DBO],[ADO]={\frac {1}{2}}[ABO]$
Since $[AFO]=[FCO],[AFO]={\frac {1}{2}}[ACO]={\frac {1}{2}}[ABO]=[ADO]$, therefore, $[AFO]=[FCO]=[DBO]=[ADO]$. Using the same method, one can show that $[AFO]=[FCO]=[DBO]=[ADO]=[BEO]=[CEO]$.
Three congruent triangles
In 2014 Lee Sallows discovered the following theorem:[4]
The medians of any triangle dissect it into six equal area smaller triangles as in the figure above where three adjacent pairs of triangles meet at the midpoints D, E and F. If the two triangles in each such pair are rotated about their common midpoint until they meet so as to share a common side, then the three new triangles formed by the union of each pair are congruent.
Formulas involving the medians' lengths
The lengths of the medians can be obtained from Apollonius' theorem as:
$m_{a}={\sqrt {\frac {2b^{2}+2c^{2}-a^{2}}{4}}}$
$m_{b}={\sqrt {\frac {2a^{2}+2c^{2}-b^{2}}{4}}}$
$m_{c}={\sqrt {\frac {2a^{2}+2b^{2}-c^{2}}{4}}}$
where $a,b,$ and $c$ are the sides of the triangle with respective medians $m_{a},m_{b},$ and $m_{c}$ from their midpoints.
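As a quick numerical check (not part of the original article), the formulas can be evaluated for a 3-4-5 right triangle; the median to the hypotenuse comes out as half the hypotenuse, as expected:

```python
import math

a, b, c = 3.0, 4.0, 5.0
m_a = math.sqrt((2*b**2 + 2*c**2 - a**2) / 4)   # about 4.272
m_b = math.sqrt((2*a**2 + 2*c**2 - b**2) / 4)   # about 3.606
m_c = math.sqrt((2*a**2 + 2*b**2 - c**2) / 4)   # 2.5, half the hypotenuse
print(m_a, m_b, m_c)
```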
These formulas imply the relationships:[5]
$a={\frac {2}{3}}{\sqrt {-m_{a}^{2}+2m_{b}^{2}+2m_{c}^{2}}}={\sqrt {2(b^{2}+c^{2})-4m_{a}^{2}}}={\sqrt {{\frac {b^{2}}{2}}-c^{2}+2m_{b}^{2}}}={\sqrt {{\frac {c^{2}}{2}}-b^{2}+2m_{c}^{2}}}$
$b={\frac {2}{3}}{\sqrt {-m_{b}^{2}+2m_{a}^{2}+2m_{c}^{2}}}={\sqrt {2(a^{2}+c^{2})-4m_{b}^{2}}}={\sqrt {{\frac {a^{2}}{2}}-c^{2}+2m_{a}^{2}}}={\sqrt {{\frac {c^{2}}{2}}-a^{2}+2m_{c}^{2}}}$
$c={\frac {2}{3}}{\sqrt {-m_{c}^{2}+2m_{b}^{2}+2m_{a}^{2}}}={\sqrt {2(b^{2}+a^{2})-4m_{c}^{2}}}={\sqrt {{\frac {b^{2}}{2}}-a^{2}+2m_{b}^{2}}}={\sqrt {{\frac {a^{2}}{2}}-b^{2}+2m_{a}^{2}}}.$
Other properties
Let ABC be a triangle, let G be its centroid, and let D, E, and F be the midpoints of BC, CA, and AB, respectively. For any point P in the plane of ABC then[6]
$PA+PB+PC\leq 2(PD+PE+PF)+3PG.$
The centroid divides each median into parts in the ratio 2:1, with the centroid being twice as close to the midpoint of a side as it is to the opposite vertex.
For any triangle with sides $a,b,c$ and medians $m_{a},m_{b},m_{c},$[7]
${\tfrac {3}{4}}(a+b+c)<m_{a}+m_{b}+m_{c}<a+b+c\quad {\text{ and }}\quad {\tfrac {3}{4}}\left(a^{2}+b^{2}+c^{2}\right)=m_{a}^{2}+m_{b}^{2}+m_{c}^{2}.$
The medians from sides of lengths $a$ and $b$ are perpendicular if and only if $a^{2}+b^{2}=5c^{2}.$[8]
The medians of a right triangle with hypotenuse $c$ satisfy $m_{a}^{2}+m_{b}^{2}=5m_{c}^{2}.$
Any triangle's area T can be expressed in terms of its medians $m_{a},m_{b}$, and $m_{c}$ as follows. If their semi-sum $\left(m_{a}+m_{b}+m_{c}\right)/2$ is denoted by $\sigma $ then[9]
$T={\frac {4}{3}}{\sqrt {\sigma \left(\sigma -m_{a}\right)\left(\sigma -m_{b}\right)\left(\sigma -m_{c}\right)}}.$
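Continuing the 3-4-5 example used above (an illustration, not part of the original article), the median-based formula indeed recovers that triangle's area of 6:

```python
import math

m_a, m_b, m_c = math.sqrt(73) / 2, math.sqrt(13), 2.5   # medians of the 3-4-5 triangle
s = (m_a + m_b + m_c) / 2
T = (4 / 3) * math.sqrt(s * (s - m_a) * (s - m_b) * (s - m_c))
print(T)   # 6.0 up to rounding
```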
Tetrahedron
A tetrahedron is a three-dimensional object having four triangular faces. A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median of the tetrahedron. There are four medians, and they are all concurrent at the centroid of the tetrahedron.[10] As in the two-dimensional case, the centroid of the tetrahedron is the center of mass. However contrary to the two-dimensional case the centroid divides the medians not in a 2:1 ratio but in a 3:1 ratio (Commandino's theorem).
See also
• Angle bisector
• Altitude (triangle)
• Automedian triangle
References
1. Weisstein, Eric W. (2010). CRC Concise Encyclopedia of Mathematics, Second Edition. CRC Press. pp. 375–377. ISBN 9781420035223.
2. Bottomley, Henry. "Medians and Area Bisectors of a Triangle". Archived from the original on 2019-05-10. Retrieved 27 September 2013.
3. Dunn, J. A., and Pretty, J. E., "Halving a triangle," Mathematical Gazette 56, May 1972, 105-108. DOI 10.2307/3615256 Archived 2023-04-05 at the Wayback Machine
4. Sallows, Lee, "A Triangle Theorem Archived 2016-04-07 at the Wayback Machine" Mathematics Magazine, Vol. 87, No. 5 (December 2014), p. 381
5. Déplanche, Y. (1996). Diccio fórmulas. Medianas de un triángulo. Edunsa. p. 22. ISBN 978-84-7747-119-6. Retrieved 2011-04-24.
6. Problem 12015, American Mathematical Monthly, Vol.125, January 2018, DOI: 10.1080/00029890.2018.1397465
7. Posamentier, Alfred S., and Salkind, Charles T., Challenging Problems in Geometry, Dover, 1996: pp. 86–87.
8. Boskoff, Homentcovschi, and Suceava (2009), Mathematical Gazette, Note 93.15.
9. Benyi, Arpad, "A Heron-type formula for the triangle", Mathematical Gazette 87, July 2003, 324–326.
10. Leung, Kam-tim; and Suen, Suk-nam; "Vectors, matrices and geometry", Hong Kong University Press, 1994, pp. 53–54
External links
Wikimedia Commons has media related to Median (geometry).
• The Medians at cut-the-knot
• Area of Median Triangle at cut-the-knot
• Medians of a triangle With interactive animation
• Constructing a median of a triangle with compass and straightedge animated demonstration
• Weisstein, Eric W. "Triangle Median". MathWorld.
\begin{definition}[Definition:Addition of Polynomials/Polynomial Forms]
Let:
:$\ds f = \sum_{k \mathop \in Z} a_k \mathbf X^k$
:$\ds g = \sum_{k \mathop \in Z} b_k \mathbf X^k$
be polynomials in the indeterminates $\set {X_j: j \in J}$ over $R$.
{{explain|What is $Z$ in the above? Presumably the integers, in which case they need to be denoted $\Z$ and limited in domain to non-negative? However, because $Z$ is used elsewhere in the exposition of polynomials to mean something else (I will need to hunt around to find out exactly what), I can not take this assumption for granted.}}
The operation '''polynomial addition''' is defined as:
:$\ds f + g := \sum_{k \mathop \in Z} \paren {a_k + b_k} \mathbf X^k$
The expression $f + g$ is known as the '''sum''' of $f$ and $g$.
\end{definition} | ProofWiki |
# Error analysis in numerical methods
Error analysis is an essential component of numerical methods. It helps us understand the accuracy of the results obtained using a particular numerical method. We will discuss the following types of errors:
- Truncation error: This occurs when a mathematical function is approximated using a finite number of terms. The truncation error is the difference between the exact function and the approximated function.
- Rounding error: This occurs when numbers are stored and processed in a computer or calculator, which can only represent a finite number of decimal places. Rounding error is the difference between the exact value and the stored value.
- Propagation error: This occurs when the errors in the input data are propagated through the numerical method. The propagation error describes how the errors in the input data accumulate in the final result.
Consider the following differential equation:
$$
\frac{dy}{dx} = 2x + 3y
$$
We want to find the solution to this equation using a numerical method. The general (exact) solution is:
$$
y(x) = Ce^{3x} - \frac{2x}{3} - \frac{2}{9}
$$
where the constant $C$ is fixed by the initial condition.
However, we will use a numerical method to approximate the solution. Let's say we use the Euler method to solve the differential equation. The Euler method approximates the solution using the following formula:
$$
y_{n+1} = y_n + h(2x_n + 3y_n)
$$
where $h$ is the step size and $x_n$ and $y_n$ are the approximated values at the current step.
## Exercise
Calculate the error in the Euler method approximation for the given differential equation and step size.
To analyze the error in the Euler method approximation, we can use the following steps:
1. Calculate the exact solution to the differential equation.
2. Use the Euler method to approximate the solution with a given step size.
3. Compare the exact solution with the Euler method approximation.
4. Analyze the difference between the exact solution and the Euler method approximation to determine the error.
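To make these four steps concrete, here is a small Python sketch. It assumes, for illustration, the initial condition $y(0) = 1$ (which the text has not fixed) and measures the global error of the Euler method at $x = 1$ for several step sizes; the error shrinks roughly in proportion to $h$, as expected for a first-order method:

```python
import numpy as np

def exact(x, y0=1.0):
    # general solution y = C*exp(3x) - 2x/3 - 2/9, with C fixed by y(0) = y0
    C = y0 + 2.0 / 9.0
    return C * np.exp(3 * x) - 2 * x / 3.0 - 2.0 / 9.0

def euler(h, y0=1.0, x_end=1.0):
    n = int(round(x_end / h))
    x, y = 0.0, y0
    for _ in range(n):
        y += h * (2 * x + 3 * y)
        x += h
    return y

for h in (0.1, 0.05, 0.025):
    print(h, abs(euler(h) - exact(1.0)))
```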
# Differential equations and their numerical solutions
There are several types of differential equations, including:
- First-order ordinary differential equations (ODEs): These equations involve the first derivative of a function.
- Second-order ordinary differential equations (ODEs): These equations involve the second derivative of a function.
- Partial differential equations (PDEs): These equations involve partial derivatives of a function.
To solve differential equations numerically, we can use the SciPy library in Python. SciPy provides functions for solving ODEs, such as `odeint` and `solve_ivp`. PDEs are typically handled by first reducing them to systems of ODEs (for example, with the method of lines) and then applying the same solvers.
Consider the following first-order ODE:
$$
\frac{dy}{dx} = 2x + 3y
$$
We want to find the solution to this equation using the SciPy library. First, we will import the necessary functions from the SciPy library:
```python
from scipy.integrate import odeint
import numpy as np
```
Next, we will define the ODE function:
```python
def ode_function(y, x):
return 2 * x + 3 * y
```
Finally, we will use the `odeint` function to solve the ODE:
```python
x = np.linspace(0, 10, 100)
y0 = 1
y = odeint(ode_function, y0, x)
```
## Exercise
Solve the following differential equation using the SciPy library:
$$
\frac{d^2y}{dx^2} = 2x + 3y
$$
# Linear algebra and its applications in scientific computing
Some important concepts in linear algebra include:
- Matrices: Matrices are arrays of numbers that represent linear transformations.
- Vectors: Vectors are arrays of numbers that represent points in a vector space.
- Linear equations: Linear equations are equations that involve linear combinations of variables.
Linear algebra has various applications in scientific computing, such as:
- Solving systems of linear equations.
- Computing eigenvalues and eigenvectors of matrices.
- Performing linear regression to model the relationship between variables.
Consider the following system of linear equations:
$$
\begin{cases}
x + y = 5 \\
2x - y = 3
\end{cases}
$$
We want to solve this system of equations using linear algebra. First, we will represent the system of equations as a matrix equation:
$$
\begin{bmatrix}
1 & 1 \\
2 & -1
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix}
=
\begin{bmatrix}
5 \\
3
\end{bmatrix}
$$
Next, we will use the `numpy` library in Python to solve the matrix equation:
```python
import numpy as np
A = np.array([[1, 1], [2, -1]])
b = np.array([5, 3])
x = np.linalg.solve(A, b)
```
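For this particular system, `x` evaluates to approximately `[2.67, 2.33]`, i.e., $x = 8/3$ and $y = 7/3$; substituting these values back into both equations confirms the solution.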
## Exercise
Solve the following system of linear equations using the SciPy library:
$$
\begin{cases}
x + y + z = 6 \\
2x - y + z = 3 \\
x - y + z = 2
\end{cases}
$$
# Numerical methods for solving differential equations
Some common numerical methods for solving differential equations include:
- Euler method: This method approximates the solution of $\frac{dy}{dx} = f(x, y)$ using the following formula:
$$
y_{n+1} = y_n + h f(x_n, y_n)
$$
For our example equation, $f(x, y) = 2x + 3y$, so the update becomes $y_{n+1} = y_n + h(2x_n + 3y_n)$.
- Runge-Kutta methods: These methods are more accurate than the Euler method because they combine several slope evaluations per step (a short implementation sketch follows this list). The classical fourth-order Runge-Kutta method uses:
$$
y_{n+1} = y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)
$$
where $k_1 = f(x_n, y_n)$, $k_2 = f(x_n + \frac{h}{2}, y_n + \frac{h}{2}k_1)$, $k_3 = f(x_n + \frac{h}{2}, y_n + \frac{h}{2}k_2)$, and $k_4 = f(x_n + h, y_n + hk_3)$ are intermediate slope evaluations.
- Finite difference method: This method replaces derivatives with difference quotients on a grid. For a first derivative, the forward difference $\frac{y_{n+1} - y_n}{h} \approx f(x_n, y_n)$ recovers the Euler update above; for boundary value problems and partial differential equations, finite differences lead to systems of algebraic equations.
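As referenced in the list above, here is a minimal sketch of the classical fourth-order Runge-Kutta method. It follows the same `ode_function(y, x)` call signature used elsewhere in this chapter; the function name `rk4_method` is only an illustrative choice:
```python
import numpy as np

def rk4_method(ode_function, y0, x):
    """Classical fourth-order Runge-Kutta method on the grid x."""
    y = np.zeros(len(x))
    y[0] = y0
    for i in range(1, len(x)):
        h = x[i] - x[i-1]
        k1 = ode_function(y[i-1], x[i-1])
        k2 = ode_function(y[i-1] + h * k1 / 2, x[i-1] + h / 2)
        k3 = ode_function(y[i-1] + h * k2 / 2, x[i-1] + h / 2)
        k4 = ode_function(y[i-1] + h * k3, x[i-1] + h)
        y[i] = y[i-1] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y
```
It can be called in the same way as the Euler implementation shown below, for example `y = rk4_method(ode_function, 1, np.linspace(0, 10, 100))`.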
Consider the following first-order ODE:
$$
\frac{dy}{dx} = 2x + 3y
$$
We want to solve this equation using the Euler method. First, we will define the Euler method function:
```python
def euler_method(ode_function, y0, x):
    y = np.zeros(len(x))
    y[0] = y0
    for i in range(1, len(x)):
        y[i] = y[i-1] + (x[i] - x[i-1]) * ode_function(y[i-1], x[i-1])
    return y
```
Next, we will use the Euler method to solve the ODE:
```python
x = np.linspace(0, 10, 100)
y0 = 1
y = euler_method(ode_function, y0, x)
```
## Exercise
Solve the following differential equation using the Runge-Kutta method:
$$
\frac{d^2y}{dx^2} = 2x + 3y
$$
# The SciPy library and its use in scientific computing
Some important functions in the SciPy library include:
- `odeint`: This function is used to solve ordinary differential equations (ODEs).
- `solve_ivp`: This function provides a newer interface for solving initial value problems for ODEs. (SciPy does not include a general-purpose PDE solver; PDEs have to be discretized, for example with finite differences, before SciPy routines can be applied.)
- `linalg.solve`: This function is used to solve systems of linear equations.
- `optimize.minimize`: This function is used to minimize a function.
The SciPy library can be used in various scientific computing tasks, such as:
- Solving differential equations.
- Performing linear algebra computations.
- Analyzing and visualizing data.
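Since `optimize.minimize` is listed above but never demonstrated in this chapter, here is a minimal, self-contained sketch; the Rosenbrock function is only a standard illustrative test problem, not something required by SciPy:
```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a classic optimization test problem with its minimum at (1, 1)
def rosenbrock(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

result = minimize(rosenbrock, x0=np.array([0.0, 0.0]))
print(result.x)    # close to [1, 1]
print(result.fun)  # close to 0
```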
Consider the following system of linear equations:
$$
\begin{cases}
x + y = 5 \\
2x - y = 3
\end{cases}
$$
We want to solve this system of equations using the SciPy library. First, we will import the necessary functions from the SciPy library:
```python
import numpy as np
from scipy.linalg import solve
```
Next, we will represent the system of equations as a matrix equation:
```python
A = np.array([[1, 1], [2, -1]])
b = np.array([5, 3])
x = solve(A, b)
```
## Exercise
Solve the following system of linear equations using the SciPy library:
$$
\begin{cases}
x + y + z = 6 \\
2x - y + z = 3 \\
x - y + z = 2
\end{cases}
$$
# Applications of numerical stability in scientific computing
Numerical stability matters in every topic covered in this textbook, including:
- Error analysis in numerical methods.
- Differential equations and their numerical solutions.
- Linear algebra and its applications in scientific computing.
- Numerical methods for solving differential equations.
- The SciPy library and its use in scientific computing.
Numerical stability is crucial in scientific computing because it ensures that the results obtained using numerical methods are accurate and reliable. By understanding and analyzing the numerical stability of methods, we can develop more robust and accurate models and simulations in various scientific fields.
Consider the following differential equation:
$$
\frac{dy}{dx} = 2x + 3y
$$
We want to find the solution to this equation using the SciPy library. First, we will import the necessary functions from the SciPy library:
```python
from scipy.integrate import odeint
import numpy as np
```
Next, we will define the ODE function:
```python
def ode_function(y, x):
    return 2 * x + 3 * y
```
Finally, we will use the `odeint` function to solve the ODE:
```python
x = np.linspace(0, 10, 100)
y0 = 1
y = odeint(ode_function, y0, x)
```
## Exercise
Solve the following differential equation using the SciPy library:
$$
\frac{d^2y}{dx^2} = 2x + 3y
$$
# Handling round-off errors and other numerical issues
Several strategies can be used to handle round-off errors and other numerical issues:
- Using higher precision data types.
- Using appropriate discretization step sizes.
- Using numerical methods with better stability properties.
For example, when solving differential equations, we can use higher precision data types, such as double-precision floating-point numbers, to reduce the impact of round-off errors. We can also use appropriate discretization step sizes to ensure that the errors introduced by the numerical method are small compared to the errors in the input data.
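The following sketch illustrates both ideas for the example equation: the global error of the Euler method shrinks roughly in proportion to the step size, and single precision can silently lose small updates that double precision retains. The interval $[0, 1]$, the step sizes, and the tiny increment are arbitrary illustrative choices:
```python
import numpy as np

# Exact solution of dy/dx = 2x + 3y with y(0) = 1 (derived earlier)
def exact_solution(x):
    return 11/9 * np.exp(3 * x) - 2/3 * x - 2/9

def euler_error(h, x_end=1.0):
    """Absolute error of the Euler method at x_end for the example ODE."""
    n = int(round(x_end / h))
    x = np.linspace(0.0, x_end, n + 1)
    y = np.zeros(n + 1)
    y[0] = 1.0
    for i in range(1, n + 1):
        y[i] = y[i-1] + h * (2 * x[i-1] + 3 * y[i-1])
    return abs(y[-1] - exact_solution(x_end))

for h in [0.1, 0.01, 0.001]:
    print(h, euler_error(h))  # the error shrinks roughly in proportion to h

# Precision of the data type: a small update is lost in single precision
print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True: update rounded away
print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))  # False: double precision keeps it
```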
Consider the following differential equation:
$$
\frac{dy}{dx} = 2x + 3y
$$
We want to find the solution to this equation using the SciPy library. First, we will import the necessary functions from the SciPy library:
```python
from scipy.integrate import odeint
import numpy as np
```
Next, we will define the ODE function:
```python
def ode_function(y, x):
    return 2 * x + 3 * y
```
Finally, we will use the `odeint` function to solve the ODE:
```python
x = np.linspace(0, 10, 100)
y0 = 1
y = odeint(ode_function, y0, x)
```
## Exercise
Solve the following differential equation using the SciPy library:
$$
\frac{d^2y}{dx^2} = 2x + 3y
$$
# Testing and benchmarking numerical algorithms
Testing and benchmarking numerical algorithms typically involves:
- Comparing the results obtained from different numerical methods.
- Using reference solutions to validate the accuracy of numerical methods.
- Performing rigorous error analysis to assess the stability of numerical methods.
For example, when testing a numerical method for solving differential equations, we can compare the results obtained from different numerical methods using reference solutions or exact solutions. We can also perform rigorous error analysis to assess the stability of the numerical method and identify any potential issues or limitations.
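A minimal sketch of such a comparison for the example equation, using the exact solution with $y(0) = 1$ as the reference; the grid and the error measure (maximum absolute error) are illustrative choices:
```python
import numpy as np
from scipy.integrate import odeint

def ode_function(y, x):
    return 2 * x + 3 * y

def exact_solution(x):
    return 11/9 * np.exp(3 * x) - 2/3 * x - 2/9

x = np.linspace(0.0, 1.0, 101)
y0 = 1.0

# Reference solution and two numerical solutions
y_exact = exact_solution(x)
y_odeint = odeint(ode_function, y0, x).ravel()

y_euler = np.zeros(len(x))
y_euler[0] = y0
for i in range(1, len(x)):
    y_euler[i] = y_euler[i-1] + (x[i] - x[i-1]) * ode_function(y_euler[i-1], x[i-1])

# Maximum absolute error as a simple accuracy benchmark
print(np.max(np.abs(y_odeint - y_exact)))  # very small: an adaptive, higher-order solver
print(np.max(np.abs(y_euler - y_exact)))   # much larger for this fixed step size
```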
Consider the following differential equation:
$$
\frac{dy}{dx} = 2x + 3y
$$
We want to find the solution to this equation using the SciPy library. First, we will import the necessary functions from the SciPy library:
```python
from scipy.integrate import odeint
import numpy as np
```
Next, we will define the ODE function:
```python
def ode_function(y, x):
    return 2 * x + 3 * y
```
Finally, we will use the `odeint` function to solve the ODE:
```python
x = np.linspace(0, 10, 100)
y0 = 1
y = odeint(ode_function, y0, x)
```
## Exercise
Solve the following differential equation using the SciPy library:
$$
\frac{d^2y}{dx^2} = 2x + 3y
$$
# Case studies in scientific computing
Case studies that illustrate the role of numerical stability in scientific computing include:
- The impact of round-off errors in the solution of differential equations.
- The importance of appropriate discretization step sizes in solving PDEs.
- The use of numerical methods for modeling and simulation in various scientific fields, such as physics, engineering, and data analysis.
By studying these case studies, we can gain a deeper understanding of the challenges and opportunities in numerical stability in scientific computing and develop more robust and accurate models and simulations.
Consider the following differential equation:
$$
\frac{dy}{dx} = 2x + 3y
$$
We want to find the solution to this equation using the SciPy library. First, we will import the necessary functions from the SciPy library:
```python
from scipy.integrate import odeint
import numpy as np
```
Next, we will define the ODE function:
```python
def ode_function(y, x):
    return 2 * x + 3 * y
```
Finally, we will use the `odeint` function to solve the ODE:
```python
x = np.linspace(0, 10, 100)
y0 = 1
y = odeint(ode_function, y0, x)
```
## Exercise
Solve the following differential equation using the SciPy library:
$$
\frac{d^2y}{dx^2} = 2x + 3y
$$
# Further reading and resources
To continue learning about numerical stability and scientific computing, the following resources are useful:
- Books on numerical methods and scientific computing, such as "Numerical Recipes" by William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery.
- Online tutorials and lectures on numerical methods and scientific computing, such as those offered by Coursera and edX.
- Research papers and articles on numerical stability and its applications in scientific computing, such as those published in scientific journals and conferences.
By exploring these resources, you can further develop your knowledge and skills in numerical stability in scientific computing and stay up-to-date with the latest advancements and developments in this field.
Technical advance
Hypothesis testing in Bayesian network meta-analysis
Lorenz Uhlmann ORCID: orcid.org/0000-0001-8668-069X1,
Katrin Jensen1 &
Meinhard Kieser1
Network meta-analysis is an extension of the classical pairwise meta-analysis and allows to compare multiple interventions based on both head-to-head comparisons within trials and indirect comparisons across trials. Bayesian or frequentist models are applied to obtain effect estimates with credible or confidence intervals. Furthermore, p-values or similar measures may be helpful for the comparison of the included arms but related methods are not yet addressed in the literature. In this article, we discuss how hypothesis testing can be done in a Bayesian network meta-analysis.
An index is presented and discussed in a Bayesian modeling framework. Simulation studies were performed to evaluate the characteristics of this index. The approach is illustrated by a real data example.
The simulation studies revealed that the type I error rate is controlled. The approach can be applied in a superiority as well as in a non-inferiority setting.
Test decisions can be based on the proposed index. The index may be a valuable complement to the commonly reported results of network meta-analyses. The method is easy to apply and of no (noticeable) additional computational cost.
Network meta-analysis (NMA), as an extension of the classical pairwise meta-analysis, is gaining acceptance and popularity in medical research. The general idea is to include all evidence at hand about a specific research question in one single model. The classical pairwise meta-analysis is limited to two-arm comparisons of interventions that were directly compared in trials. An NMA can include any number of treatments as well as interventions that have not been investigated head-to-head. Several approaches (frequentist and Bayesian) were introduced and extended during recent years. Thus, a framework of modeling techniques is available to implement an NMA in many different data situations. Efthimiou et al. and Dias et al. give very useful overviews of recent developments [1, 2]. Alongside the benefits these procedures provide, many challenges arise when applying an NMA model. First, all the issues that are already known from pairwise meta-analysis, like heterogeneity, have to be addressed. In addition, new issues, such as inconsistency, which denotes the problem of deviations between direct and indirect estimates, have to be taken into consideration (see, for example, Dias et al. [3]).
As a result of an NMA, point estimates with credible intervals of pairwise effects between treatment arms are obtained. In this article, we focus on the issue of testing for superiority or noninferiority between treatment arms in an NMA model. For Bayesian modelling, we present and discuss an index υ that can be used for hypothesis testing within the network. Similar ideas were presented in the article by Rücker and Schwarzer [4] in a frequentist framework. However, we focus on Bayesian modeling. Furthermore, while we apply the index for a test procedure, Rücker and Schwarzer use their approach to rank treatment arms.
General modeling in NMA
The concept of NMA in a Bayesian framework was introduced by Higgins and Whitehead [5]. Many extensions and discussions about the idea were published in recent years. Introductions and overviews can be found in the literature [1, 2, 6, 7]. Here, we only present the basic idea of the modeling procedure. For this, we assume throughout this paper that the outcome is binary (e.g., success / no success, or failure / no failure).
The following notation is used. \(N\) is the number of trials, \(K\) the number of arms, \(p_{ik}\) the success (or failure) probability, and \(N_{ik}\) the sample size of arm \(k\) in study \(i\). In the setting of a binary outcome, we apply two different approaches: either the binomial distributions are used directly, or the log odds ratios (OR) are calculated for each trial and pooled in the model afterwards. In the former case, we use the logit function as link function and assume
$$ \begin{aligned} y_{ik} &\sim \text{Bin}(N_{ik}, p_{ik}),\\ \text{logit}(p_{ik}) &= \mu_{i} + d_{A_{i}k}, \end{aligned} $$
which can be denoted as a fixed-effect model, where \(y_{ik}\) is the number of events, \(\mu_i\) is the baseline value (and is seen as a nuisance parameter), and \(d_{A_{i}k}\) is the log OR between arm \(k\) and arm \(A_i\), the baseline arm, which has to be chosen for each trial. All arms are compared to this baseline treatment arm. These log ORs are of main interest in an NMA and are typically assumed to be approximately normally distributed. In a random-effects model, the logits are modeled as
$$\begin{array}{*{20}l} \text{logit}(p_{ik}) &= \mu_{i} + \delta_{A_{i}k}\\ \delta_{A_{i}k} &\sim \mathcal{N}(d_{A_{i}k}, \tau^{2}). \end{array} $$
When the log ORs are used directly, the fixed-effect model is defined as
$$ \psi_{{iA}_{i}k} \sim \mathcal{N}(d_{A_{i}k}, \text{var}(\psi_{{iA}_{i}k})) $$
and a random-effects model as
$$\begin{array}{*{20}l} \psi_{{iA}_{i}k} &\sim \mathcal{N}(\delta_{A_{i}k}, \text{var}(\psi_{{iA}_{i}k}))\\ \delta_{A_{i}k} &\sim \mathcal{N}(d_{A_{i}k}, \tau^{2}). \end{array} $$
In this implementation, \(\psi_{{iA}_{i}k}\) is the log OR in trial \(i\) of treatment arm \(k\) compared to the baseline treatment arm \(A_i\). The log OR together with its variance \(\text{var}(\psi_{{iA}_{i}k})\) has to be estimated using the data of study \(i\). The estimation of \(\psi_{{iA}_{i}k}\) can be problematic when events are rare (see [8–10], and the Cochrane Handbook, chapter 16.9.2 [11]). Thus, some care has to be taken when applying this approach. Further challenges and assumptions (such as the consistency assumption) as well as extensions of these models are discussed and explained in the literature. Although these are important issues, we do not focus on them here.
In this paper, we want to introduce a simple method to obtain an index υ that can be interpreted similarly to a frequentist p-value for an effect estimate within a Bayesian NMA. For this, we adapt an idea proposed by Kawasaki and Miyaoka [12, 13] where the authors introduce a similar index but to compare only two groups with respect to a binary outcome using Bayesian methods in a randomized trial. Our approach serves as a complement when presenting the results of an NMA reporting the effect estimates and the credible intervals. It can also be interpreted as the probability of superiority or non-inferiority, respectively. Furthermore, the index might be useful to define boundaries when updating NMAs as proposed by Nikolakopoulou et al. [14] and may therefore be applied in sequential NMAs. In our simulation study and real data example, we discuss the characteristics of the proposed approach.
In this section, we present the definition of the index υ and how it can be used when comparing two treatment arms within an NMA model.
Definition of index υ
To explain our approach, we assume that three treatment arms are compared (P: placebo, S: standard treatment, and E: experimental treatment). Assuming that an event denotes a success, a log OR of \(d_{PE} > 0\) or \(d_{PS} > 0\) denotes a benefit of the experimental treatment or the standard treatment over placebo, respectively. To assess whether E is superior to S (by at least a certain pre-specified relevant amount Δ≥0), we can estimate the probability
$$\upsilon = P(d_{PE} > d_{PS} + \Delta) $$
and base our decision on it. Under the consistency assumption, this equals to the definition
$$\upsilon = P(d_{ES} > \Delta) $$
and therefore, this index υ can also be applied in any Bayesian (pairwise or network) meta-analysis.
Of course, Δ can be chosen to be negative as well, leading to a non-inferiority setting. Then, the probability that a treatment is not less effective than another treatment arm by more than a pre-specified amount is estimated. In the following, it will be shown how the estimation of this probability can be realized.
Estimation of υ
The log ORs are estimated via Bayesian methods. We assume that they are approximately normally distributed. As prior distributions, one can use (flat) normal distributions, resulting in a normal distribution as posterior. Let us assume that the posterior mean values of \(d_{PS}\) and \(d_{PE}\) are denoted by \(\mu_{PS, \text{post}}\) and \(\mu_{PE, \text{post}}\), respectively. One can then define a \(Z\) statistic as
$$Z = \frac{(d_{PE} - d_{PS} - \Delta) - (\mu_{PE, \text{post}} - \mu_{PS, \text{post}} - \Delta)}{\text{SE}(d_{PE} - d_{PS} - \Delta)}, $$
$$E(d_{PE} - d_{PS} - \Delta) = \mu_{PE, \text{post}} - \mu_{PS, \text{post}} - \Delta $$
$$\begin{array}{*{20}l} \text{SE}(d_{PE} - d_{PS} - \Delta) &= \text{SE}(d_{PE} - d_{PS})\\ &=\sqrt{\text{Var}(d_{PE} - d_{PS})} \end{array} $$
is the standard error of the difference of the log ORs. Thus, Z is asymptotically normally distributed as well.
Let Φ(·) denote the cumulative distribution function of the standard normal distribution. The probability of interest can then be approximated as
$$\begin{array}{*{20}l} &P(d_{PE} > d_{PS} + \Delta)\\ &\approx 1 - \Phi\left(\frac{-(\mu_{PE, \text{post}} - \mu_{PS, \text{post}} - \Delta)}{\sqrt{\text{Var}(d_{PE} - d_{PS} - \Delta)}}\right). \end{array} $$
It has to be noted that this approach is based on the approximation of the distribution of the log ORs by the normal distribution and is, therefore, only an approximation of \(P(d_{PE} > d_{PS} + \Delta)\).
An estimate of this probability is then
$$\begin{array}{*{20}l} &\hat{P}(d_{PE} > d_{PS} + \Delta)\\ &=1 - \Phi\left(\frac{-(\hat{d}_{PE} - \hat{d}_{PS} - \Delta)}{\sqrt{\widehat{\text{Var}}(d_{PE} - d_{PS} - \Delta)}}\right), \end{array} $$
where \(\hat {d}_{PE}\) and \(\hat {d}_{PS}\) denote the estimates of the mean values of the posterior distribution of dPE and dPS, respectively. The estimated posterior variance is denoted by \(\widehat {\text {Var}}(d_{PE} - d_{PS} - \Delta) = \widehat {\text {Var}}(d_{PE} - d_{PS})\).
Estimation of this probability can be done within the MCMC approach in two different ways. The first approach is to estimate the (posterior) distribution of \(d_{PE} - d_{PS} - \Delta\) directly. From this, we can estimate \(\hat{d}_{PE} - \hat{d}_{PS} - \Delta\) as well as the variance \(\widehat{\text{Var}}(d_{PE} - d_{PS} - \Delta)\). However, there is an even more intuitive way. In an MCMC estimation procedure, we store in every single iteration whether the parameter \(d_{PE}\) was larger than \(d_{PS} + \Delta\) or not. After the MCMC estimation is finished, we evaluate the relative frequency of iterations in which \(d_{PE} > d_{PS} + \Delta\) to estimate the probability \(\hat{P}(d_{PE} > d_{PS} + \Delta)\). An advantage of this approach is that it does not rely on the normal distribution and can therefore be applied in any NMA setting.
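As a purely illustrative sketch (the analyses in this article were carried out with R and JAGS, not with the code below, and the posterior draws here are simulated stand-ins), both estimation routes can be written in a few lines once posterior samples of \(d_{PE}\) and \(d_{PS}\) are available:
```python
import numpy as np
from scipy.stats import norm

# Hypothetical posterior draws of the log ORs d_PE and d_PS; in practice these
# would be the MCMC samples produced by the fitted NMA model.
rng = np.random.default_rng(1)
d_PE = rng.normal(0.60, 0.15, size=8000)
d_PS = rng.normal(0.35, 0.15, size=8000)
delta = 0.0  # superiority margin

# Route 1: relative frequency of draws with d_PE > d_PS + delta
upsilon_direct = np.mean(d_PE > d_PS + delta)

# Route 2: normal approximation based on the posterior mean and variance of the difference
difference = d_PE - d_PS
upsilon_normal = 1 - norm.cdf(-(difference.mean() - delta) / difference.std(ddof=1))

print(upsilon_direct, upsilon_normal)  # both estimate P(d_PE > d_PS + delta)
```
The null hypothesis would then be rejected if the resulting value exceeds the pre-specified cut-off (for instance, 0.975).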
Use of υ for Bayesian hypothesis testing
The index υ can be used to estimate the probability of superiority or non-inferiority between treatment arms with respect to the event probability. Therefore, it is a useful complement to the common results obtained in an NMA. Furthermore, this index can be used to make test decisions. Let us, again, assume that there are three treatment arms (P, S and E), and suppose we want to assess the following test problem:
$$\begin{array}{*{20}l} H_{0}: d_{PE} \leq d_{PS} + \Delta \quad \text{vs.} \quad H_{1}: d_{PE} > d_{PS} + \Delta, \end{array} $$
with Δ∈R. We can now use the index υ to perform a Bayesian hypothesis test in an NMA. If the value of υ exceeds a pre-specified value (for instance, 0.975, as an equivalent to a frequentist p-value of 0.025 which is typically used in a one-sided test procedure) we reject the null-hypothesis. Since the index υ is based on a Bayesian approach, it is unclear whether the test decisions coincide with the results of frequentist testing procedures. For this, a "probability matching prior" (PMP) has to be found as outlined, for example, in Datta and Sweeting [15]. We assume that the log ORs are normally distributed. It can be shown that in this case a uniform prior is a PMP [15]. In NMA, flat normal priors are commonly used which are very close to uniform priors if they are chosen sufficiently flat. However, since small deviations might still be present either because of the (flat) prior distribution or the approximation of the log OR via a normal distribution, we applied simulation studies to evaluate the characteristics of our approach.
Some technical issues
As already discussed in the "Background" section, there are two ways to define an NMA model with a binary outcome. Either using the number of observations and the number of events per treatment arm assuming a binomial distribution, or using the approximately normally distributed log ORs.
In the next section, results from simulation studies will be provided where both approaches are compared. Therein, the method in which the binomial distribution is used is called the arm-based approach, and the method in which ORs are modeled is called the contrast-based approach. The same distinction is made, for example, in the manual of the R package "netmeta" [16]. As a side note, the computation time of the contrast-based approach was substantially lower (in some situations about 40 times lower). Thus, from a computational point of view, this approach is much more efficient. From a technical point of view, the main difference between the arm-based and the contrast-based approach is that the arm-based approach uses an additional level in the hierarchy of the Bayesian model. In the arm-based approach, a binomial distribution is estimated on the lower level, based on the number of successes (\(y_{ik}\)) and the number of observations (\(N_{ik}\)). On the upper level, the log ORs (\(d_{A_{i}k}\)) are estimated (model (1)). When using the trial-specific log ORs, there is only one level (model (2)).
Two different ways of estimating the probability \(P(d_{PE} > d_{PS} + \Delta)\) have been presented above (note that this distinction is independent of the distinction between the contrast-based and the arm-based approach). The first option is to estimate the (posterior) distribution of \(d_{PE} - d_{PS} - \Delta\) and the second one is to estimate \(P(d_{PE} > d_{PS} + \Delta)\) directly during the MCMC procedure. In all simulation studies, both approaches were used in parallel. It became clear that the differences between the results were negligibly small. Thus, only the results from the second approach are presented, since it is the simplest way to estimate the index υ.
Simulation study
Simulation studies were done to evaluate the testing approach. The main aim was to examine whether the approach maintains the type I error rate when used for hypothesis testing. For this, we have to define a cut-off value for a test decision. Analogously to a frequentist setting with a type I error rate of 0.025, we reject the null hypothesis \(H_0\colon d_{PE} \leq d_{PS} + \Delta\) if \(\hat{\upsilon} = \hat{P}(d_{PE} > d_{PS} + \Delta) \geq 0.975\).
A further issue was to examine the power of the approaches. Different settings regarding baseline risk, \(d_{PS}\), \(d_{PE}\), and Δ were used.
Binary data based on the assumption that the null hypothesis holds true were simulated and the rejection rate was estimated to examine the actual type I error rate. The boundary of the null hypothesis was considered, i.e., the data were simulated so that \(d_{PE} = d_{PS} + \Delta\) holds true.
Three arms were compared (P: placebo; S: standard treatment; E: experimental treatment) in 16 studies, where four studies each were simulated comparing P vs. S, P vs. E, and S vs. E, respectively, and another four studies were simulated including all three treatment arms. In each study, a sample size of 500 observations per treatment arm was used. We assume that the main interest was to compare the experimental treatment with the standard treatment. The success probabilities of the three arms were varied to examine the characteristics of our approach in different scenarios. The success probabilities of the placebo and the standard treatment arm were assumed to be equal, which was done to simplify the simulation procedure; different values were chosen to evaluate different scenarios (\(p_{iP} = p_{iS}\) = 0.05, 0.1 or 0.2, i = 1, …, 16). The success probability of the experimental arm was calculated such that \(d_{PE} = d_{PS} + \Delta\) holds true. The values of Δ were chosen based on the ORs between the treatment arms. Eleven different values were used: Δ = log(1), log(1.05), log(1.1), log(1.2), log(1.5), log(2) (superiority), and Δ = log(1/1.05), log(1/1.1), log(1/1.2), log(1/1.5), log(1/2) (non-inferiority). The significance level was set to 0.025.
For each simulation scenario, 50,000 iterations were used. Based on the results obtained in these scenarios, some further interesting data situations were examined. Firstly, a sample size of 1,000 observations per treatment arm with a success rate of 0.2 was used, leading to a data situation where even approximate approaches should perform sufficiently well. Secondly, the sample size was lowered to 200 observations per treatment arm with a success rate of 0.1. The values for Δ were varied between log(0.9) and log(1.1) since the most frequently used values should lie within this range. In a last scenario, extreme values of Δ were examined combined with a sample size of 400 observations per treatment arm using a success rate of 0.05.
We also evaluated our approach in situations where heterogeneity was present in the data. We used the same simulation settings as above (16 studies, 500 observations per arm). We did not vary Δ but set it to 0, thus considering a superiority setting. We simulated heterogeneity using the same values for \(\tau^2\) as in Friede et al. [17]: 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, and 2. We, again, used three different baseline risk values: 0.05, 0.1, and 0.2. Random-effects models were fitted and 10,000 iterations per scenario were performed.
In a last step, we lowered the sample size per arm and trial to 50 patients, used a baseline risk of 0.1, and applied the same values for \(\tau^2\) as before. Again, with 10,000 replications per scenario, random-effects models were fitted and evaluated.
Furthermore, the power of the testing approach was evaluated. Again, the main interest was to analyze the difference between the experimental and the standard treatment. The success rates in arms P and S were set to 0.1, assuming an OR of 1.15 between the standard and the experimental treatment (i.e., \(d_{SE} = \log(1.15)\)), and the sample size was varied from 100 to 1,000 observations per treatment arm. Per scenario, 10,000 iterations were used.
In all simulation scenarios, the consistency as well as the similarity assumption was assumed to hold true. For parameter estimation, MCMC techniques were used. Two chains with a burn-in of 20,000 followed by 40,000 runs with a thinning rate of 5 resulting in 8,000 samples per chain were generated to estimate the posterior distribution following Song et al. who used a similar setting [18]. The software R [19] in combination with JAGS (version 3.4.0 or higher, http://mcmc-jags.sourceforge.net/) and the R-packages rjags [20], doSNOW [21], foreach [22], coda [23], and iterators [24] were used to conduct the simulations. Since the computations were done on different systems and different work stations, different versions of the software packages were used. In the evaluation step, the package xtable [25] was used in addition.
Illustrative example
To further illustrate the approach, we analyzed a real data example that was already evaluated elsewhere [6, 26]. The data are provided by the Smoking Cessation Guideline Panel [27].
In the data set, 24 trials comparing four different treatments about smoking cessation are included (A: "no contact", B: "self-help", C: "individual counseling", and D: "group counseling"). The number of cessations and the number of observations are presented in Table 1. In the following, it is tested whether the treatment effects of arm B, C, and D are different from that of treatment arm A using a fixed-effect model. Here, the following three test problems for superiority (i.e., Δ=0) are assessed (no adjustment for multiple testing is performed):
$$\begin{array}{*{20}l} H_{0, 1}: d_{CA} \leq d_{DA}\quad &\text{vs.}\quad H_{1, 1}: d_{CA} > d_{DA}\\ H_{0, 2}: d_{CA} \leq d_{BA}\quad &\text{vs.}\quad H_{1, 2}: d_{CA} > d_{BA}\\ H_{0, 3}: d_{DA} \leq d_{BA}\quad &\text{vs.}\quad H_{1, 3}: d_{DA} > d_{BA} \end{array} $$
Table 1 Number of events and number of observations per trial for the illustrative data example (\(y_{ik}\) and \(N_{ik}\), k = A, B, C, D, respectively) [6, 26]
It should be mentioned that these hypotheses were not pre-specified but the example is just presented to show the characteristics of our approach in a real data setting. Compared to the original data, the number of events was changed from 0 to 1 in two cases (study ID 9 and 20). This was done due to two reasons: If there are zero events in a treatment arm, an OR cannot be calculated. However, the contrast-based approach is based on ORs between treatment arms and thus the number of events had to be adjusted. As already mentioned above, the problem of rare events is common and discussed in the literature. In practice, a better choice may be to change the number of events from 0 to 0.5 and to add 0.5 to the number of observations [11]. However, the arm-based approach is based on a binomial distribution which is a discrete distribution. Thus, only integers can be used as numbers of events. Since a comparison of both approaches should be provided, the number of events was thus changed to 1.
An MCMC approach was implemented to estimate the parameters with 500,000 iterations after a burn-in of 100,000 iterations.
In the following, we will present the simulation results. Due to convergence problems which resulted from zero counts, the results are sometimes based on slightly less than 50,000 or 10,000 runs, respectively. This is not mentioned in every single results description to improve readability.
Type I error rate: The main interest was whether the approach maintains the type I error rate. In Fig. 1, the results of the first part of the simulation studies are shown. The number of observations per treatment arm was kept fixed (at 500 per treatment arm) and the value of Δ was varied, where three different success rates for treatment arms P and S were assumed (0.05, 0.1 and 0.2). The type I error rate using the contrast-based approach is close to the nominal level if the success rates are 0.1 or 0.2 and Δ is between log(1/1.2) and log(1.2) (Fig. 1). However, as soon as Δ is changed to more extreme values, it is slightly liberal in a non-inferiority setting (exp(Δ)<1) and slightly conservative in a superiority setting (exp(Δ)≥1). This characteristic is even more pronounced when the success rate is set to 0.05. Furthermore, one can see that the type I error rate tends to be higher the higher the success rate is. In contrast, the actual level of the arm-based approach is very close to the nominal one in most situations. Only if Δ and the success rate are relatively large are the type I error rates slightly increased. If Δ is very small, the approach is slightly conservative. It is interesting to see that the lines in Fig. 1 cross. Thus, in some situations the arm-based and in other situations the contrast-based approach is the more conservative or liberal one.
Simulated type I error rates. Simulated type I error rates for varying values of Δ (based on 50,000 runs). The sample size per treatment arm and the success rate were kept fixed at \(N_{ik}=500\) and \(p_{ik}=0.05, 0.1, 0.2\), respectively (i=1,…,16, k=P,S,E)
In the setting with 1,000 observations per treatment arm and study and values for Δ very close to 0, both approaches lead to very similar results. Both nearly maintain the type I error rate. The situations with 200 observations per treatment arm and Δ-values varying between log(0.9) and log(1.1) might be more interesting, since these values are more common in practice. In all these scenarios, the arm-based approach seems to perform slightly better than the contrast-based one, since it is less conservative but still maintains the type I error rate. Sometimes, the type I error rate was slightly above the nominal level. However, this exceedance can be regarded as negligible. In the last scenario, where extreme Δ-values were used, one can see that the contrast-based approach inflates the type I error rate in a non-inferiority setting while it is very conservative in the superiority trials. In contrast, the arm-based approach maintains the type I error rate in (even extreme) non-inferiority scenarios but inflates the type I error rate in a superiority setting. Table 2 summarizes these results.
Table 2 Simulated type I error rates of the testing approach in specific scenarios
When introducing heterogeneity, we saw that the results of the two approaches (arm-based and contrast-based) differed more strongly. The arm-based approach always maintains the type I error rate but becomes very conservative in case of strong heterogeneity (see Fig. 2). The contrast-based approach, however, leads to slightly increased type I error rates for higher values of heterogeneity. Lowering the sample size to 50 patients per study did not, in general, lead to inflated type I error rates when the arm-based approach was used; only in case of strong heterogeneity was the type I error slightly inflated or the test slightly too conservative. In contrast, the contrast-based approach led to an increased type I error rate in case of strong heterogeneity.
Simulated type I error rates (heterogeneity). Simulated type I error rates for varying values of \(\tau^2\) (based on 10,000 runs). The sample size per treatment arm and the success rate were kept fixed at \(N_{ik}=500\) and \(p_{ik}=0.05, 0.1, 0.2\), respectively (i=1,…,16, k=P,S,E), while Δ was set to 0
Power: The investigations of the power showed that both approaches perform very similarly. The arm-based approach resulted in slightly higher power compared to the contrast-based one (see Fig. 3). The difference decreased with increasing sample size. This was to be expected since the type I error rates of the arm-based approach were also slightly increased compared to the contrast-based one. However, one has to keep in mind that the arm-based method did not maintain the significance level in some situations and thus has to be used with care.
Power values. Power for the arm-based and contrast-based approach for a varying sample size \(N_{ik}\) (based on 10,000 runs). The success rate was kept fixed at \(p_{ik}=0.1\) while the number of observations was varied (i=1,…,16, k=P,S,E). An OR of 1.15 was used for power simulation while Δ=0 was used
Real data example
In Table 3, we provide the results for the data example. The estimated values for υ resulting from the arm-based and the contrast-based approach are presented for each pair of hypotheses. We can see that the arm-based approach always leads to a higher value of \(\hat{\upsilon}\) than the contrast-based approach. If the cut-off for a test decision of 0.975 is applied, the following test decisions result. The first null hypothesis \(H_{0,1}\) cannot be rejected with either approach. This means that group counseling and individual counseling are not significantly different. The second null hypothesis (\(H_{0,2}\)) can be rejected according to both approaches, which means that individual counseling is significantly more effective than self-help. The third null hypothesis (\(H_{0,3}\)) can be rejected with the arm-based approach but not with the contrast-based approach. Since our simulation study showed that the arm-based approach leads to type I error rates that are very close to the nominal level, the arm-based approach should be a proper choice. However, the safe (but maybe too conservative) option would be to apply the contrast-based approach and thus to maintain the null hypothesis in this case.
Table 3 Resulting values for \(\hat {\upsilon }\) for the illustrative data example using the contrast-based and the arm-based approach
In this article, a method for hypothesis testing in a Bayesian NMA is presented. For this, an index was introduced that describes the probability of superiority or non-inferiority from a Bayesian perspective. We examined whether this index can also be used to make test decisions in a frequentist sense. In a simulation study, two different approaches were compared, an arm-based and a contrast-based one. When there was no heterogeneity present in the data and fixed-effects models were applied, the observed type I error rates were very close to the nominal significance level, while the arm-based approach led to slightly more favorable results in most situations. If the sample size is sufficiently high, both approaches maintain the type I error rate. If an extreme non-inferiority margin is used, only the arm-based approach led to valid results. An extremely large margin for relevant superiority, however, leads to an inflation of the type I error rate of the arm-based approach, and the contrast-based approach is then the better choice. However, in most situations in practice the deviations from the nominal type I error rate observed in our simulation studies are negligible. We also investigated the situation where heterogeneity is present in the data and saw that this can have a stronger impact on the type I error rate. However, even when the sample size was lowered to 50 patients per arm and trial, the type I error was still very close to the nominal level and only deviated slightly from it in case of strong heterogeneity. It is worth mentioning that our concept is not identical to a Bayesian posterior predictive p-value as described in Gelman et al. [28]. The index υ rather describes a Bayesian probability of superiority or non-inferiority.
There are some limitations of our simulation study. Of course, there are by far more data situations than those considered. However, we covered a range of common situations in medical research. There is also a lot of discussion about inconsistency in NMA models in the literature (see, for example, Dias et al. [29], or Krahn et al. [30]). In our simulation scenarios, it was assumed that there is no inconsistency present in the data, which is a limitation of our study. Consistency is an assumption typically made in a standard NMA model but might be problematic in practice. In recent publications, this issue was addressed and solutions were proposed by applying more complex models [31–35]. However, in this work we focused on the standard NMA model. Note that when examining the type I error rate, the null hypothesis is assumed to hold true. Thus, the success rates in all treatment arms are exactly the same by design (or the same plus a pre-defined Δ), and therefore there is no inconsistency by definition.
A test decision can also be based on the 95% credible intervals around the point estimate of the log OR. If Δ is not included in the interval, the null hypothesis can be rejected. We compared this approach to the methods suggested in this article. The type I error rate tended to be slightly increased if the test decision was based on the credible interval compared to the approach based on υ, but overall the results were very similar. Thus, our index is not a considerable improvement over a test decision based on the credible intervals, but rather a complement to the existing methodology.
In conclusion, we proposed and discussed an index that can be used to test for superiority or non-inferiority of a treatment arm compared to another one within a Bayesian NMA. The estimation is done during the NMA model estimation and does not result in any (noticeable) additional computational cost. At the same time, the implementation is very easy. Obviously, this approach can also be applied in a straightforward way in any other data situation than binary data, as continuous data or a survival time, and is therefore a flexible tool.
However, as already mentioned, we did not cover all possible scenarios in our simulation study and, therefore, the index has to be used and interpreted with care. For example, as shown by Friede et al. [17], coverage of the credible intervals decreases (and the type I error rate increases) substantially in case of rare diseases (low number of events), small populations, and strong heterogeneity. We did not discuss these situations here, but it is clear that the same results for the index υ would have been observed as well. This shows that it is easy to generate examples that lead to invalid results. The choice of a proper prior distribution affects the results as well, as also described by Friede et al. [17]. Therefore, an adequate assessment of the data situation at hand has to be done before applying the approach discussed here or, in general, any NMA approach. It is hardly possible to define an approach that is valid and optimal for any situation in practice, and we emphasize the limitations of the approach described in this paper.
NMA:
Network meta-analysis
OR:
Odds ratio
Efthimiou O, Debray T, Valkenhoef G, Trelle S, Panayidou K, Moons KG, Reitsma JB, Shang A, Salanti G, GetReal Methods Review Group. Getreal in network meta-analysis: a review of the methodology. Res Synth Methods. 2016; 7:236–63.
Dias S, Sutton AJ, Ades A, Welton NJ. Evidence synthesis for decision making 2: a generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Dec Making. 2013; 33:607–17.
Dias S, Welton NJ, Sutton AJ, Caldwell DM, Lu G, Ades AE. Evidence synthesis for decision making 4: inconsistency in networks of evidence based on randomized controlled trials. Med Dec Making. 2013; 33:641–56.
Rücker G, Schwarzer G. Ranking treatments in frequentist network meta-analysis works without resampling methods. BMC Med Res Methodol. 2015; 15:58.
Higgins JPT, Whitehead A. Borrowing strength from external trials in a meta-analysis. Stat Med. 1996; 15:2733–49.
Lu G, Ades A. Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc. 2006; 101:447–59.
Salanti G. Special issue on network meta-analysis. Res Synth Methods. 2012; 2:69–190.
Bradburn MJ, Deeks JJ, Berlin JA, Russell Localio A. Much ado about nothing: a comparison of the performance of meta-analytical methods with rare events. Stat Med. 2007; 26:53–77.
Lane PW. Meta-analysis of incidence of rare events. Stat Methods Med Res. 2013; 22:117–32.
Sweeting MJ, Sutton AJ, Lambert PC. What to add to nothing? Use and avoidance of continuity corrections in meta-analysis of sparse data. Stat Med. 2004; 23:1351–75.
Higgins J, Deeks J, Altman D. Chapter 16: Special topics in statistics In: Higgins S, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011: 2011. Available from http://handbook.cochrane.org/. Accessed 12 May 2016.
Kawasaki Y, Miyaoka E. A bayesian inference of p(π 1>π 2) for two proportions. J Biopharm Stat. 2012; 22:425–37.
Kawasaki Y, Miyaoka E. A bayesian non-inferiority test for two independent binomial proportions. Pharm Stat. 2013; 12:201–6. https://doi.org/10.1002/pst.1571.
Nikolakopoulou A, Mavridis D, Egger M, Salanti G. Continuously updated network meta-analysis and statistical monitoring for timely decision-making. Stat Methods Med Res. 2018; 27:1312–30.
Datta GS, Sweeting TJ. Probability matching priors. Technical Report, Research Report No. 252. 2005. Department of Statistical Science, University College London.
Rücker G, Schwarzer G, Krahn U, König J. Netmeta: Network Meta-Analysis Using Frequentist Methods. 2015. R package version 0.8-0, http://CRAN.R-project.org/package=netmeta. Accessed 25 Aug 2016.
Friede T, Röver C, Wandel S, Neuenschwander B. Meta-analysis of few small studies in orphan diseases. Res Synth Methods. 2017; 8:79–91.
Song F, Clark A, Bachmann MO, Maas J. Simulation evaluation of statistical properties of methods for indirect and mixed treatment comparisons. BMC Med Res Methodol. 2012; 12:138.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2012. R Foundation for Statistical Computing. http://www.R-project.org/. Accessed 25 Aug 2016.
Plummer M. Rjags: Bayesian Graphical Models Using MCMC. 2013. R package version 3-11, http://CRAN.R-project.org/package=rjags. Accessed 25 Aug 2016.
Revolution Analytics, Weston S. doSNOW: Foreach Parallel Adaptor for the Snow Package. 2014. R package version 1.0.12, http://CRAN.R-project.org/package=doSNOW.
Revolution Analytics. Foreach: Foreach Looping Construct for R. 2012. R package version 1.4.0, http://CRAN.R-project.org/package=foreach. Accessed 25 Aug 2016.
Plummer M, Best N, Cowles K, Vines K. Coda: convergence diagnosis and output analysis for mcmc. R News. 2006; 6:7–11.
Revolution Analytics. Iterators: Iterator Construct for R. 2012. R package version 1.0.6, http://CRAN.R-project.org/package=iterators. Accessed 25 Aug 2016.
Dahl DB. Xtable: Export Tables to LaTeX or HTML. 2014. R package version 1.7-4, http://CRAN.R-project.org/package=xtable. Accessed 25 Aug 2016.
Hasselblad V. Meta-analysis of multitreatment studies. Med Dec Making. 1998; 18:37–43.
Smoking Cessation Guideline Panel. Smoking Cessation, Clinical Practice Guideline No. 18 (AHCPR Publication No. 96-0692). Rockville: MD: Agency for Health Care Policy and Research, U.S. Department of Health and Human Services; 1996.
Gelman A. Comment: Fuzzy and bayesian p-values and u-values. Statist Sci. 2005; 20:380–1.
Dias S, Welton NJ, Caldwell DM, Ades AE. Checking consistency in mixed treatment comparison meta-analysis. Stat Med. 2010; 29:932–44. https://doi.org/10.1002/sim.3767.
Krahn U, Binder H, König J. A graphical tool for locating inconsistency in network meta-analyses. BMC Med Res Methodol. 2013; 13:35.
Jackson D, Barrett J, Rice S, White I, Higgins J. A design-by-treatment interaction model for network meta-analysis with random inconsistency effects. Stat Med. 2014; 33:3639–54.
Jackson D, Boddington P, White I. The design-by-treatment interaction model: a unifying framework for modelling loop inconsistency in network meta-analysis. Res Synth Methods. 2016; 7:329–32.
Jackson D, Law M, Barrett J, Turner R, Higgins J, Salanti G, White I. Extending dersimonian and laird's methodology to perform network meta-analyses with random inconsistency effects. Stat Med. 2016; 35:819–39.
Jackson D, Veroniki A, Law M, Tricco A, Baker R. Paule-mandel estimators for network meta-analysis with random inconsistency effects. Res Synth Methods. 2017; 8:416–34.
Law M, Jackson D, Turner R, Rhodes K, Viechtbauer W. Two new methods to fit models for network meta-analysis with random inconsistency effects. BMC Med Res Methodol. 2016; 16:87.
We acknowledge financial support by Deutsche Forschungsgemeinschaft and Ruprecht-Karls-Universität Heidelberg within the funding programme Open Access Publishing. Furthermore, we would like to thank the editors and reviewers for their valuable proposals and comments that considerably improved our manuscript in different aspects.
Open Access Publishing was funded by Deutsche Forschungsgemeinschaft and Ruprecht-Karls-Universität Heidelberg within the funding programme Open Access Publishing.
All data analyzed in our illustrative example are included in this published article.
Institute of Medical Biometry and Informatics, University of Heidelberg, Im Neuenheimer Feld 130.3, Heidelberg, Germany
Lorenz Uhlmann, Katrin Jensen & Meinhard Kieser
Lorenz Uhlmann
Katrin Jensen
Meinhard Kieser
LU wrote the first draft of the manuscript, conducted the simulation study and applied the approach to the illustrative data example. KJ and MK contributed to the writing, and critically commented and revised the applied methods and the manuscript. All authors revised and approved the final version of the manuscript.
Correspondence to Lorenz Uhlmann.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Uhlmann, L., Jensen, K. & Kieser, M. Hypothesis testing in Bayesian network meta-analysis. BMC Med Res Methodol 18, 128 (2018). https://doi.org/10.1186/s12874-018-0574-y
Received: 13 July 2017
Superiority
Non-inferiority
\begin{document}
\twocolumn[
\aistatstitle{A Last-Step Regression Algorithm for Non-Stationary Online Learning}
\aistatsauthor{ Edward Moroshko \And Koby Crammer }
\aistatsaddress{ Department of Electrical Engineering,\\ The Technion, Haifa, Israel \And Department of Electrical Engineering,\\ The Technion, Haifa, Israel} ]
\newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{claim}[theorem]{Claim} \newtheorem{corollary}[theorem]{Corollary}
\def\par\penalty-1000\vskip .5 pt\noindent{\bf Proof sketch\/: }{\par\penalty-1000\vskip .5 pt\noindent{\bf Proof sketch\/: }}
\def\par\penalty-1000\vskip .1 pt\noindent{\bf Proof sketch\/: }{\par\penalty-1000\vskip .1 pt\noindent{\bf Proof sketch\/: }}
\newcommand{\QED}{
$\;\;\;\rule[0.1mm]{2mm}{2mm}$\\}
\newcommand{\todo}[1]{{~\\\bf TODO: {#1}}~\\}
\newfont{\msym}{msbm10} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\half}{\frac{1}{2}} \newcommand{{\rm sign}}{{\rm sign}} \newcommand{\paren}[1]{\left({#1}\right)} \newcommand{\brackets}[1]{\left[{#1}\right]} \newcommand{\braces}[1]{\left\{{#1}\right\}} \newcommand{\ceiling}[1]{\left\lceil{#1}\right\rceil} \newcommand{\abs}[1]{\left\vert{#1}\right\vert} \newcommand{{\rm Tr}}{{\rm Tr}} \newcommand{\pr}[1]{{\rm Pr}\left[{#1}\right]} \newcommand{\prp}[2]{{\rm Pr}_{#1}\left[{#2}\right]} \newcommand{\Exp}[1]{{\rm E}\left[{#1}\right]} \newcommand{\Expp}[2]{{\rm E}_{#1}\left[{#2}\right]} \newcommand{\eqdef}{\stackrel{\rm def}{=}} \newcommand{, \ldots ,}{, \ldots ,} \newcommand{\texttt{True}}{\texttt{True}} \newcommand{\texttt{False}}{\texttt{False}} \newcommand{\mcal}[1]{{\mathcal{#1}}} \newcommand{\argmin}[1]{\underset{#1}{\mathrm{argmin}} \:} \newcommand{\normt}[1]{\left\Vert {#1} \right\Vert^2} \newcommand{\step}[1]{\left[#1\right]_+} \newcommand{\1}[1]{[\![{#1}]\!]} \newcommand{{\textrm{diag}}}{{\textrm{diag}}}
\newcommand{{\textrm{D}_{\textrm{KL}}}}{{\textrm{D}_{\textrm{KL}}}} \newcommand{{\textrm{D}_{\textrm{IS}}}}{{\textrm{D}_{\textrm{IS}}}} \newcommand{{\textrm{D}_{\textrm{EU}}}}{{\textrm{D}_{\textrm{EU}}}}
\newcommand{\leftmarginpar}[1]{\marginpar[#1]{}} \newcommand{\figline}{\rule{0.45\textwidth}{0.5pt}} \newcommand{\figlinee}[1]{\rule{#1\textwidth}{0.5pt}} \newcommand{\normalsize}{\normalsize} \newcommand{\nolineskips}{ \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \setlength{\topsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\itemsep}{0pt}}
\newcommand{\beq}[1]{\begin{equation}\label{#1}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\secref}[1]{Sec.~\ref{#1}} \newcommand{\figref}[1]{Fig.~\ref{#1}} \newcommand{\exmref}[1]{Example~\ref{#1}} \newcommand{\thmref}[1]{Thm.~\ref{#1}} \newcommand{\sthmref}[1]{Thm.~\ref{#1}} \newcommand{\defref}[1]{Definition~\ref{#1}} \newcommand{\remref}[1]{Remark~\ref{#1}} \newcommand{\chapref}[1]{Chapter~\ref{#1}} \newcommand{\appref}[1]{App.~\ref{#1}} \newcommand{\lemref}[1]{Lem.~\ref{#1}} \newcommand{\propref}[1]{Proposition~\ref{#1}} \newcommand{\claimref}[1]{Claim~\ref{#1}}
\newcommand{\corref}[1]{Corollary~\ref{#1}} \newcommand{\scorref}[1]{Cor.~\ref{#1}} \newcommand{\tabref}[1]{Table~\ref{#1}} \newcommand{\tran}[1]{{#1}^{\top}} \newcommand{\mcal{N}}{\mcal{N}} \newcommand{\eqsref}[1]{Eqns.~(\ref{#1})}
\newcommand{\mb}[1]{{\boldsymbol{#1}}} \newcommand{\up}[2]{{#1}^{#2}} \newcommand{\dn}[2]{{#1}_{#2}} \newcommand{\du}[3]{{#1}_{#2}^{#3}}
\newcommand{\textl}[2]{{$\textrm{#1}_{\textrm{#2}}$}}
\newcommand{\lambda_{F}}{\lambda_{F}}
\newcommand{\mathbf{x}}{\mathbf{x}} \newcommand{\vxi}[1]{\mathbf{x}_{#1}} \newcommand{\vxi{t}}{\vxi{t}}
\newcommand{\yi}[1]{y_{#1}} \newcommand{\yi{t}}{\yi{t}} \newcommand{\hyi}[1]{\hat{y}_{#1}} \newcommand{\hyi{i}}{\hyi{i}}
\newcommand{\mb{y}}{\mb{y}} \newcommand{\vyi}[1]{\mb{y}_{#1}} \newcommand{\vyi{i}}{\vyi{i}}
\newcommand{\mb{\nu}}{\mb{\nu}} \newcommand{\vni}[1]{\mb{\nu}_{#1}} \newcommand{\vni{i}}{\vni{i}}
\newcommand{\mb{\mu}}{\mb{\mu}} \newcommand{{\vmu^*}}{{\mb{\mu}^*}} \newcommand{{\vmus}^{\top}}{{{\vmu^*}}^{\top}} \newcommand{\vmui}[1]{\mb{\mu}_{#1}} \newcommand{\vmui{i}}{\vmui{i}}
\newcommand{\vmu^{\top}}{\mb{\mu}^{\top}} \newcommand{\vmuti}[1]{\vmu^{\top}_{#1}} \newcommand{\vmuti{i}}{\vmuti{i}}
\newcommand{\mb \sigma}{\mb \sigma} \newcommand{\Sigma}{\Sigma} \newcommand{{\msigma^*}}{{\Sigma^*}} \newcommand{\msigmai}[1]{\Sigma_{#1}} \newcommand{\msigmai{t}}{\msigmai{t}}
\newcommand{\Upsilon}{\Upsilon} \newcommand{{\mups^*}}{{\Upsilon^*}} \newcommand{\mupsi}[1]{\Upsilon_{#1}} \newcommand{\mupsi{i}}{\mupsi{i}} \newcommand{\upsilon^*_l}{\upsilon^*_l}
\newcommand{\mathbf{u}}{\mathbf{u}} \newcommand{\tran{\vu}}{\tran{\mathbf{u}}} \newcommand{\vui}[1]{\mathbf{u}_{#1}} \newcommand{\vuti}[1]{\tran{\vu}_{#1}} \newcommand{\hat{\vu}}{\hat{\mathbf{u}}} \newcommand{\tran{\hvu}}{\tran{\hat{\vu}}} \newcommand{\hvur}[1]{\hat{\vu}_{#1}} \newcommand{\hvutr}[1]{\tran{\hvu}_{#1}} \newcommand{\mb{w}}{\mb{w}} \newcommand{\vwi}[1]{\mb{w}_{#1}} \newcommand{\vwi{t}}{\vwi{t}} \newcommand{\vwti}[1]{\tran{\vw}_{#1}} \newcommand{\tran{\vw}}{\tran{\mb{w}}}
\newcommand{\tilde{\mb{w}}}{\tilde{\mb{w}}} \newcommand{\tvwi}[1]{\tilde{\mb{w}}_{#1}} \newcommand{\tvwi{t}}{\tvwi{t}}
\newcommand{\mb{v}}{\mb{v}} \newcommand{\tran{\vv}}{\tran{\mb{v}}}
\newcommand{\vvi}[1]{\mb{v}_{#1}} \newcommand{\vvti}[1]{\tran{\vv}_{#1}} \newcommand{\lambdai}[1]{\lambda_{#1}} \newcommand{\Lambdai}[1]{\Lambda_{#1}}
\newcommand{\tran{\vx}}{\tran{\mathbf{x}}} \newcommand{\hat{\vx}}{\hat{\mathbf{x}}} \newcommand{\hvxi}[1]{\hat{\vx}_{#1}} \newcommand{\hvxi{i}}{\hvxi{i}} \newcommand{\tran{\hvx}}{\tran{\hat{\vx}}} \newcommand{\hvxti}[1]{\tran{\hvx}_{#1}} \newcommand{\hvxti{i}}{\hvxti{i}} \newcommand{\vxti}[1]{\tran{\vx}_{#1}} \newcommand{\vxti{i}}{\vxti{i}}
\newcommand{\mb{b}}{\mb{b}} \newcommand{\tran{\vb}}{\tran{\mb{b}}} \newcommand{\vbi}[1]{\mb{b}_{#1}}
\newcommand{\hat{\vy}}{\hat{\mb{y}}} \newcommand{\hvyi}[1]{\hat{\vy}_{#1}}
\renewcommand{P}{P} \newcommand{P^{(d)}}{P^{(d)}} \newcommand{P^T}{P^T} \newcommand{\tilde{P}}{\tilde{P}} \newcommand{\mpi}[1]{P_{#1}} \newcommand{\mpti}[1]{P^T_{#1}} \newcommand{\mpti{i}}{\mpti{i}} \newcommand{\mpi{i}}{\mpi{i}} \newcommand{Q}{Q} \newcommand{\mpsi}[1]{Q_{#1}} \newcommand{\mpsi{i}}{\mpsi{i}} \newcommand{\tmp^T}{\tilde{P}^T} \newcommand{Z}{Z} \newcommand{V}{V} \newcommand{\mvi}[1]{V_{#1}} \newcommand{V^T}{V^T} \newcommand{\mvti}[1]{V^T_{#1}} \newcommand{\mz^T}{Z^T} \newcommand{\tilde{\mz}}{\tilde{Z}} \newcommand{\tmz^T}{\tilde{\mz}^T} \newcommand{\mathbf{X}}{\mathbf{X}} \newcommand{\mathbf{A}}{\mathbf{A}} \newcommand{\mathbf{F}_{t}}{\mathbf{F}_{t}} \newcommand{\mathbf{F}_{t}^{-1}}{\mathbf{F}_{t}^{-1}} \newcommand{\mathbf{F}_{t}^{\top}}{\mathbf{F}_{t}^{\top}} \newcommand{(\FtT)^{-1}}{(\mathbf{F}_{t}^{\top})^{-1}} \newcommand{\mxs}[1]{\mathbf{X}_{#1}}
\newcommand{\mai}[1]{\mathbf{A}_{#1}} \newcommand{\tran{\ma}}{\tran{\mathbf{A}}} \newcommand{\mati}[1]{\tran{\ma}_{#1}}
\newcommand{{C}}{{C}} \newcommand{\mci}[1]{{C}_{#1}} \newcommand{\mcti}[1]{\mct_{#1}}
\newcommand{{\mathbf{D}}}{{\mathbf{D}}} \newcommand{\mdi}[1]{{\mathbf{D}}_{#1}}
\newcommand{\mxi}[1]{\textrm{diag}^2\paren{\vxi{#1}}} \newcommand{\mxi{i}}{\mxi{i}}
\newcommand{\hat{\mx}}{\hat{\mathbf{X}}} \newcommand{\hmxi}[1]{\hat{\mx}_{#1}} \newcommand{\hmxi{i}}{\hmxi{i}} \newcommand{\hmx^T}{\hat{\mx}^T} \newcommand{\mx^\top}{\mathbf{X}^\top} \newcommand{\mathbf{I}}{\mathbf{I}} \newcommand{Q}{Q} \newcommand{\mq^T}{Q^T} \newcommand{\Lambda}{\Lambda}
\renewcommand{\mcal{L}}{\mcal{L}} \newcommand{\mcal{R}}{\mcal{R}} \newcommand{\mcal{X}}{\mcal{X}} \newcommand{\mcal{Y}}{\mcal{Y}} \newcommand{\mcal{F}}{\mcal{F}} \newcommand{\nur}[1]{\nu_{#1}} \newcommand{\lambdar}[1]{\lambda_{#1}} \newcommand{\gammai}[1]{\gamma_{#1}} \newcommand{\gammai{i}}{\gammai{i}} \newcommand{\alphai}[1]{\alpha_{#1}} \newcommand{\alphai{i}}{\alphai{i}} \newcommand{\lossp}[1]{\ell_{#1}} \newcommand{\epsilon}{\epsilon} \newcommand{\eps^*}{\epsilon^*} \newcommand{\lossp{\eps}}{\lossp{\epsilon}} \newcommand{\lossp{\epss}}{\lossp{\eps^*}} \newcommand{\mcal{T}}{\mcal{T}}
\newcommand{\kc}[1]{\begin{center}\fbox{\parbox{3in}{{\textcolor{green}{KC: #1}}}}\end{center}} \newcommand{\edward}[1]{\begin{center}\fbox{\parbox{3in}{{\textcolor{red}{EM: #1}}}}\end{center}} \newcommand{\nv}[1]{\begin{center}\fbox{\parbox{3in}{{\textcolor{blue}{NV: #1}}}}\end{center}}
\newcommand{\newstuffa}[2]{#2} \newcommand{\newstufffroma}[1]{} \newcommand{\newstufftoa}{}
\newcommand{\newstuff}[2]{#2} \newcommand{\newstufffrom}[1]{} \newcommand{\newstuffto}{} \newcommand{\oldnote}[2]{}
\newcommand{\commentout}[1]{} \newcommand{\mypar}[1]{
\noindent{\bf #1}}
\newcommand{\inner}[2]{\left< {#1} , {#2} \right>} \newcommand{\kernel}[2]{K\left({#1},{#2} \right)} \newcommand{\tilde{p}_{rr}}{\tilde{p}_{rr}} \newcommand{\hat{x}_{r}}{\hat{x}_{r}} \newcommand{{PST }}{{PST }} \newcommand{\projealg}[1]{$\textrm{PST}_{#1}~$} \newcommand{{GST }}{{GST }}
\newcounter {mySubCounter} \newcommand {\twocoleqn}[4]{
\setcounter {mySubCounter}{0}
\let\OldTheEquation \theequation
\renewcommand {\theequation }{\OldTheEquation \alph {mySubCounter}}
\noindent
\begin{minipage}{.40\textwidth}
\begin{equation}\refstepcounter{mySubCounter}
#1
\end {equation}
\end {minipage} ~~~~~~
\addtocounter {equation}{ -1}
\begin{minipage}{.40\textwidth}
\begin{equation}\refstepcounter{mySubCounter}
#3
\end{equation}
\end{minipage}
\let\theequation\OldTheEquation }
\newcommand{\mb{0}}{\mb{0}}
\newcommand{\mcal{M}}{\mcal{M}}
\newcommand{\ai}[1]{A_{#1}} \newcommand{\bi}[1]{B_{#1}} \newcommand{\ai{i}}{\ai{i}} \newcommand{\bi{i}}{\bi{i}} \newcommand{\betai}[1]{\beta_{#1}} \newcommand{\betai{i}}{\betai{i}} \newcommand{M}{M} \newcommand{\mari}[1]{M_{#1}} \newcommand{\mari{i}}{\mari{i}} \newcommand{\nmari}[1]{m_{#1}} \newcommand{\nmari{i}}{\nmari{i}}
\newcommand{\Phi}{\Phi}
\newcommand{V}{V} \newcommand{\vari}[1]{V_{#1}} \newcommand{\vari{i}}{\vari{i}}
\newcommand{v}{v} \newcommand{\varbi}[1]{v_{#1}} \newcommand{\varbi{i}}{\varbi{i}}
\newcommand{u}{u} \newcommand{\varai}[1]{u_{#1}} \newcommand{\varai{i}}{\varai{i}}
\newcommand{m}{m} \newcommand{\marbi}[1]{m_{#1}} \newcommand{\marbi{i}}{\marbi{i}}
\newcommand{{AROW}}{{AROW}} \newcommand{{RLS}}{{RLS}} \newcommand{{MRLS}}{{MRLS}}
\newcommand{\psi}{\psi} \newcommand{\xi}{\xi}
\newcommand{\tilde{\msigma}_t}{\tilde{\Sigma}_t} \newcommand{\amsigmai}[1]{\tilde{\Sigma}_{#1}} \newcommand{\tilde{\vmu}_i}{\tilde{\mb{\mu}}_i} \newcommand{\avmui}[1]{\tilde{\mb{\mu}}_{#1}} \newcommand{\tilde{\marb}_i}{\tilde{m}_i} \newcommand{\tilde{\varb}_i}{\tilde{v}_i} \newcommand{\tilde{\vara}_i}{\tilde{u}_i} \newcommand{\tilde{\alpha}_i}{\tilde{\alpha}_i}
\newcommand{v}{v} \newcommand{m}{m} \newcommand{\bar{m}}{\bar{m}}
\newcommand{\mb{\nu}}{\mb{\nu}} \newcommand{\vnu^\top}{\mb{\nu}^\top} \newcommand{\mb{z}}{\mb{z}} \newcommand{\mb{Z}}{\mb{Z}} \newcommand{f_{\phi}}{f_{\phi}} \newcommand{g_{\phi}}{g_{\phi}}
\newcommand{\vtmui}[1]{\tilde{\mb{\mu}}_{#1}} \newcommand{\vtmui{i}}{\vtmui{i}}
\newcommand{\zetai}[1]{\zeta_{#1}} \newcommand{\zetai{i}}{\zetai{i}}
\newcommand{\bf{s}}{\bf{s}} \newcommand{\vstatet}[1]{\bf{s}_{#1}} \newcommand{\vstatet{t}}{\vstatet{t}}
\newcommand{\bf{\Phi}}{\bf{\Phi}} \newcommand{\mtrant}[1]{\bf{\Phi}_{#1}} \newcommand{\mtrant{t}}{\mtrant{t}}
\newcommand{\bf{\eta}}{\bf{\eta}} \newcommand{\vstatenoiset}[1]{\bf{\eta}_{#1}} \newcommand{\vstatenoiset{t}}{\vstatenoiset{t}}
\newcommand{\bf{o}}{\bf{o}} \newcommand{\vobsert}[1]{\bf{o}_{#1}} \newcommand{\vobsert{t}}{\vobsert{t}}
\newcommand{\bf{H}}{\bf{H}} \newcommand{\mobsert}[1]{\bf{H}_{#1}} \newcommand{\mobsert{t}}{\mobsert{t}}
\newcommand{\bf{\nu}}{\bf{\nu}} \newcommand{\vobsernoiset}[1]{\bf{\nu}_{#1}} \newcommand{\vobsernoiset{t}}{\vobsernoiset{t}}
\newcommand{\bf{Q}}{\bf{Q}} \newcommand{\mstatenoisecovt}[1]{\bf{Q}_{#1}} \newcommand{\mstatenoisecovt{t}}{\mstatenoisecovt{t}}
\newcommand{\bf{R}}{\bf{R}} \newcommand{\mobsernoisecovt}[1]{\bf{R}_{#1}} \newcommand{\mobsernoisecovt{t}}{\mobsernoisecovt{t}}
\newcommand{\bf{\hat{s}}}{\bf{\hat{s}}} \newcommand{\vestatet}[1]{\bf{\hat{s}}_{#1}} \newcommand{\vestatet{t}}{\vestatet{t}} \newcommand{\vestatept}[1]{\vestatet{#1}^+} \newcommand{\vestatent}[1]{\vestatet{#1}^-}
\newcommand{\bf{P}}{\bf{P}} \newcommand{\mcovart}[1]{\bf{P}_{#1}} \newcommand{\mcovarpt}[1]{\mcovart{#1}^+} \newcommand{\mcovarnt}[1]{\mcovart{#1}^-}
\newcommand{\bf{K}}{\bf{K}} \newcommand{\mkalmangaint}[1]{\bf{K}_{#1}}
\newcommand{\bf{\kappa}}{\bf{\kappa}} \newcommand{\vkalmangaint}[1]{\bf{\kappa}_{#1}}
\newcommand{{\nu}}{{\nu}} \newcommand{\obsernoiset}[1]{{\nu}_{#1}} \newcommand{\obsernoiset{t}}{\obsernoiset{t}}
\newcommand{r}{r} \newcommand{\obsernoisecovt}[1]{r_{#1}} \newcommand{\obsernoisecov}{r}
\newcommand{s}{s} \newcommand{\obsnscvt}[1]{s_{#1}} \newcommand{\obsnscvt{t}}{\obsnscvt{t}}
\newcommand{\Psit}[1]{\Psi_{#1}} \newcommand{\Psit{t}}{\Psit{t}}
\newcommand{\Omegat}[1]{\Omega_{#1}} \newcommand{\Omegat{t}}{\Omegat{t}}
\newcommand{\ellt}[1]{\ell_{#1}} \newcommand{\gllt}[1]{g_{#1}}
\newcommand{\chit}[1]{\chi_{#1}}
\newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathcal{U}}{\mathcal{U}} \newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{M}{M} \newcommand{U}{U}
\newcommand{\seti}[1]{S_{#1}}
\newcommand{\mcal{C}}{\mcal{C}}
\newcommand{\dta}[3]{d_{#3}\paren{#1,#2}}
\newcommand{a}{a} \newcommand{c}{c} \newcommand{b}{b} \newcommand{r}{r} \newcommand{\nu}{\nu}
\newcommand{\coat}[1]{a_{#1}} \newcommand{\coct}[1]{c_{#1}} \newcommand{\cobt}[1]{b_{#1}} \newcommand{\cort}[1]{r_{#1}} \newcommand{\conut}[1]{\nu_{#1}}
\newcommand{\coat{t}}{\coat{t}} \newcommand{\coct{t}}{\coct{t}} \newcommand{\cobt{t}}{\cobt{t}} \newcommand{\cort{t}}{\cort{t}} \newcommand{\conut{t}}{\conut{t}}
\newcommand{R_B}{R_B} \newcommand{\textrm{proj}}{\textrm{proj}}
\begin{abstract}
The goal of a learner in standard online learning is to maintain an
average loss close to the loss of the best-performing {\em single}
function in some class. In many real-world problems, such as rating
or ranking items, there is no single best target function during
the runtime of the algorithm, instead the best (local) target
function is drifting over time. We develop a novel
last-step min-max optimal algorithm in the
context of drift. We analyze the algorithm in the worst-case
regret framework and show that it maintains an average
loss close to that of the best slowly changing sequence of linear
functions, as long as the total amount of drift is sublinear. In some situations, our bound improves over existing bounds, and additionally the algorithm
suffers logarithmic regret when there is no drift.
We also build on the $H_\infty$ filter and its bound, and develop
and analyze a second algorithm for the drifting setting. Synthetic
simulations demonstrate the advantages of our algorithms in a
worst-case constant drift setting. \end{abstract}
\section{Introduction} We consider online learning problems, in which a learning algorithm predicts real numbers given inputs in a sequence of trials. An example of such a problem is to predict a stock's price given input about the current state of the stock market. In general, the goal of the algorithm is to achieve an average loss that is not much larger than the loss it would have suffered had it always predicted according to the best-performing single function from some class of functions.
Over the past half century, many algorithms have been proposed for this problem~(a review can be found in a comprehensive book on the topic~\cite{CesaBiGa06}), some of which are able to achieve an average loss arbitrarily close to that of the best function in retrospect. Furthermore, such guarantees hold even if the input and output pairs are chosen in a fully adversarial manner with no distributional assumptions.
Competing with the best {\em fixed} function might not suffice for some problems. In many real-world applications, the true target function is not fixed, but is slowly drifting over time. Consider a function designed to rate movies for a recommender system given some features. Over time the rating of a movie may change as more movies are released or the season changes. Furthermore, a user's own personal taste may change as well.
With such properties in mind, we develop new learning algorithms designed to work with target drift. The goal of an algorithm is to maintain an average loss close to that of the best slowly changing sequence of functions, rather than compete well with a single function. We focus on problems for which this sequence consists only of linear functions. Some previous algorithms~\cite{LittlestoneW94,ECCC-TR00-070,HerbsterW01,KivinenSW01} designed for this problem are based on gradient descent, with additional control on the norm (or Bregman divergence) of the weight-vector used for prediction~\cite{KivinenSW01}, or the number of inputs used to define it~\cite{CavallantiCG07}.
We take a different route and derive an algorithm based on the last-step min-max approach proposed by Forster~\cite{Forster} and later used~\cite{TakimotoW00} for online density estimation. On each iteration the algorithm makes the optimal min-max prediction with respect to a quantity called regret, assuming it is the last iteration. Yet, unlike previous work, it is optimal when a drift is allowed. As opposed to the derivation of the last-step min-max predictor for a fixed vector, the resulting optimization problem is not straightforward to solve. We develop a dynamic program (a recursion) to solve this problem, which allows us to compute the optimal last-step min-max predictor. We analyze the algorithm in the worst-case regret framework and show that the algorithm maintains an average loss close to that of the best slowly changing sequence of functions, as long as the total drift is sublinear in the number of rounds $T$. Specifically, we show that if the total amount of drift is $T\nu$ (for $\nu=o(1)$) the cumulative regret is bounded by $T \nu^{1/3} + \log(T)$. When the instantaneous drift is close to constant, this improves over a previous bound of $T \nu^{1/4}\log(T)$ for the ARCOR algorithm of Vaits and Crammer~\cite{VaitsCr11}. Additionally, when no drift is introduced (stationary setting) our algorithm suffers logarithmic regret, as does the algorithm of Forster~\cite{Forster}. We also build on the $H_\infty$ adaptive filter, which is min-max optimal with respect to a {\em filtering task}, and derive another learning algorithm based on the same min-max principle. We provide a regret bound for this algorithm as well, and relate the two algorithms and their respective bounds. Finally, synthetic simulations show the advantages of our algorithms when a close-to-constant drift is allowed.
\section{Problem Setting} \label{sec:problem_setting} We focus on the regression task evaluated with the squared loss. Our algorithms are designed for the online setting and work in iterations (or rounds). On each round an online algorithm receives an input-vector $\vxi{t}\in\mathbb{R}^d$ and predicts a real value $\hyi{t}\in\mathbb{R}$. Then the algorithm receives a target label $\yi{t}\in\mathbb{R}$ associated with $\vxi{t}$, uses it to update its prediction rule, and then proceeds to the next round.
On each round, the performance of the algorithm is evaluated using the squared loss, $\ell_t(\textrm{alg})=\ell\paren{ \yi{t}, \hyi{t} } = \paren{\hyi{t}- \yi{t} }^2$. The cumulative loss suffered
over $T$ iterations is, \( L_{T}(\textrm{alg})=\sum_{t=1}^{T}\ell_{t}(\textrm{alg}) . \) The goal of the algorithm is to have low cumulative loss compared to predictors from some class. A large body of work is focused on linear prediction functions of the form $f(\mathbf{x})=\tran{\vx}\mathbf{u}$ where $\mathbf{u}\in\mathbb{R}^d$ is some weight-vector. We denote by $\ell_t(\mathbf{u}) = \paren{\vxti{t}\mathbf{u}-\yi{t}}^2$ the instantaneous loss of a weight-vector $\mathbf{u}$.
We focus on algorithms that are able to compete against sequences of weight-vectors, $(\vui{1} , \ldots , \vui{T})\in \mathbb{R}^d \times \dots \times \mathbb{R}^d$, where $\vui{t}$ is used to make a prediction for the $t$th example $(\vxi{t},\yi{t})$. We define the cumulative loss of such a sequence by \( L_T( \{\vui{t}\}) = \sum_t^T \ell_t(\vui{t}) \) and the
regret of an algorithm by \( R_T(\{\vui{t}\}) = \sum_t^T (\yi{t}- \hyi{t})^2- L_T(\{\vui{t}\})~. \) The goal of the algorithm is to have low regret, and formally to have ${R}_T(\{\vui{t}\}) = o(T)$, that is, the average loss suffered by the algorithm will converge to the average loss of the best linear function sequence $(\vui{1} \dots \vui{T})$.
Clearly, with no restriction or penalty over the set $\{\vui{t}\}$ the right term of the regret can easily be made zero by setting $\vui{t} = \vxi{t} (\yi{t}/\normt{\vxi{t}})$, which implies $\ell_t(\vui{t})=0$ for all $t$. Thus, in the analysis below we incorporate the total drift of the weight-vectors, defined to be,
\begin{align} V \!\!=\!\! V_T(\{\vui{t}\}) \!\!=\!\! \sum_{t=1}^{T-1} \!\normt{\vui{t}-\vui{t+1}} ~,~\nu \!\!=\!\! \nu(\{\vui{t}\}) \!\!=\!\! \frac{V}{T} ~, \end{align} where $\nu$ is the {\em average drift}.
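For concreteness, the following short Python snippet (our own illustration; the variable names are not from the analysis) computes the cumulative loss of a comparator sequence, the total drift $V$, the average drift $\nu$, and the resulting regret from arrays of inputs, labels, predictions and comparator vectors.
\begin{verbatim}
import numpy as np

def drift_and_regret(X, y, y_hat, U):
    """Comparator loss L_T({u_t}), total drift V, average drift nu, regret.

    X: (T, d) inputs, y: (T,) labels, y_hat: (T,) algorithm predictions,
    U: (T, d) comparator sequence u_1, ..., u_T.
    """
    comparator_loss = np.sum((np.einsum('td,td->t', X, U) - y) ** 2)
    algorithm_loss = np.sum((y_hat - y) ** 2)
    V = np.sum((U[1:] - U[:-1]) ** 2)   # sum of squared consecutive differences
    nu = V / len(y)                     # average drift
    return comparator_loss, V, nu, algorithm_loss - comparator_loss
\end{verbatim}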
Below we bound the regret with, $R_T(\{\vui{t}\}) \leq \mcal{O} \paren{ T^\frac{2}{3}V^\frac{1}{3}+\log(T)} = \mcal{O} \paren{ T \nu^\frac{1}{3}+\log(T)}$. Next, we develop an explicit form of the last-step min-max algorithm with drift.
\section{Algorithm} We define the last-step minmax predictor $\hyi{T}$ to be\footnote{$\yi{T}$ and $\hyi{T}$ serve both as quantifiers (over the
$\max$ and $\min$ operators, respectively), and as the optimal
arguments of this optimization problem. },
\begin{align} \arg\min_{\hat{y}_{T}}\max_{y_{T}}&\Bigg[\sum_{t=1}^{T}\left(y_{t}-\hat{y}_{t}\right)^{2}\nonumber\\ &-\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{T}}Q_{T}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{T}\right)\Bigg]~,\label{minmax_1} \end{align} where we define \begin{align} Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right) =& b\left\Vert \mathbf{u}_{1}\right\Vert ^{2}+c\sum_{s=1}^{t-1}\left\Vert \mathbf{u}_{s+1}-\mathbf{u}_{s}\right\Vert ^{2}\nonumber\\ & +\sum_{s=1}^{t}\left(y_{s}-\mathbf{u}_{s}^{\top}\mathbf{x}_{s}\right)^{2}~,\label{Q} \end{align} for some positive constants $b,c$. The last optimization problem can also be seen as a game where the algorithm chooses a prediction $\hat{y}_t$ to minimize the last-step regret, while an adversary chooses a target label ${y}_t$ to maximize it. The first term of \eqref{minmax_1} is the loss suffered by the algorithm while $Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right)$ defined in \eqref{Q} is a sum of the loss suffered by some sequence of linear functions $\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right)$, a penalty for consecutive pairs that are far from each other, and for the norm of the first to be far from zero.
We first solve recursively the inner optimization problem $\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t}}Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right)$, for which we define an auxiliary function, \begin{align} P_{t}\left(\mathbf{u}_{t}\right)=\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t-1}}Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right) ~,\label{P} \end{align} which clearly satisfies, \begin{equation} \min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t}}Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right) = \min_{\vui{t}} P_t(\vui{t}) ~.\label{PQ} \end{equation}
We start the derivation of the algorithm with a lemma, stating a recursive form of the function-sequence $P_t(\vui{t})$. \begin{lemma} \label{lem:lemma11} For $t=2,3,\ldots$ \begin{align*} P_1(\mathbf{u}_1)&=Q_1(\mathbf{u}_1)\\
P_{t}\left(\mathbf{u}_{t}\right)&= \min_{\mathbf{u}_{t-1}}\Bigg(P_{t-1}\left(\mathbf{u}_{t-1}\right) +c\left\Vert
\mathbf{u}_{t}-\mathbf{u}_{t-1}\right\Vert
^{2} \\ &+\left(y_{t}-\mathbf{u}_{t}^{\top}\mathbf{x}_{t}\right)^{2}\Bigg).\nonumber \end{align*}
\end{lemma} The proof appears in \appref{proof_lemma11}.
Using \lemref{lem:lemma11} we write explicitly the function $P_t(\vui{t})$. \begin{lemma} \label{lem:lemma12} The following equality holds \begin{align} P_{t}\left(\mathbf{u}_{t}\right)=\mathbf{u}_{t}^{\top}\mathbf{D}_{t}\mathbf{u}_{t}-2\mathbf{u}_{t}^{\top}\mathbf{e}_{t}+f_{t}~, \label{eqality_P} \end{align} where, \begin{align} &\mathbf{D}_{1} \!=b\mathbf{I}+\mathbf{x}_{1}\mathbf{x}_{1}^{\top} ~,~ \mathbf{D}_{t}=\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}+\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\label{D}\\ &\mathbf{e}_{1}\!=y_{1}\mathbf{x}_{1}~,~ \mathbf{e}_{t}=\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}\mathbf{x}_{t}\label{e}\\ &f_{1}\!=y_{1}^{2}~,~ f_{t}=f_{t-1}-\mathbf{e}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}^{2}\label{f}~. \end{align} Note that $\mathbf{D}_{t}\in\mathbb{R}^{d\times d}$ is a positive definite matrix, $\mathbf{e}_{t}\in\mathbb{R}^{d\times1}$ and $f_{t}\in\mathbb{R}$. \end{lemma} The proof appears in \appref{proof_lemma12}. From \lemref{lem:lemma12} we conclude, by substituting \eqref{eqality_P} in \eqref{PQ}, that, \begin{align} & \min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t}}Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right)\nonumber\\
& = \min_{\mathbf{u}_{t}}\left(\mathbf{u}_{t}^{\top}\mathbf{D}_{t}\mathbf{u}_{t}-2\mathbf{u}_{t}^{\top}\mathbf{e}_{t}+f_{t}\right) = -\mathbf{e}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{e}_{t}+f_{t}
~. \label{optimal_Q} \end{align} Substituting \eqref{optimal_Q} back in \eqref{minmax_1} we get that the last-step minmax predictor $\hyi{T}$ is given by, \begin{eqnarray} \arg\min_{\hat{y}_{T}}\max_{y_{T}}\left[\sum_{t=1}^{T}\left(y_{t}-\hat{y}_{t}\right)^{2}+\mathbf{e}_{T}^{\top}\mathbf{D}_{T}^{-1}\mathbf{e}_{T}-f_{T}\right] ~. \label{minmax_2} \end{eqnarray} Since $\mathbf{e}_{T}$ depends on $\yi{T} $ we substitute \eqref{e} in the second term of \eqref{minmax_2}, \begin{align} &\mathbf{e}_{T}^{\top}\mathbf{D}_{T}^{-1}\mathbf{e}_{T}=\nonumber\\ &\left(\left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1}+y_{T}\mathbf{x}_{T}\right)^{\top}\mathbf{D}_{T}^{-1}\nonumber\\ &\left(\left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1}+y_{T}\mathbf{x}_{T}\right) ~.\label{second_term} \end{align} Substituting \eqref{second_term} and \eqref{f} in \eqref{minmax_2} and omitting terms not depending explicitly on $y_{T}$ and $\hat{y}_{T}$ we get, \begin{align} \hat{y}_T &= \arg\min_{\hat{y}_{T}}\max_{y_{T}}\bigg[\left(y_{T}-\hat{y}_{T}\right)^{2} + y_{T}^{2}\mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\mathbf{x}_{T}\nonumber\\ & \quad +2y_{T}\mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1}-y_{T}^{2}\bigg]\nonumber\\ &=\arg\min_{\hat{y}_{T}}\max_{y_{T}}\bigg[\left(\mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\mathbf{x}_{T}\right)y_{T}^{2}+\hat{y}_{T}^{2}\label{optimal_y}\\ &\quad+2y_{T}\left(\mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1}-\hat{y}_{T}\right)\bigg]~.\nonumber \end{align} The last equation is strictly convex in $\yi{T}$ and thus the optimal solution is not bounded. To solve it, we follow an approach used by Forster in a different context~\cite{Forster}. In order to make the optimal value bounded, we assume that the adversary can only choose labels from a bounded set $\yi{T}\in[-Y,Y]$. Thus, the optimal solution of \eqref{optimal_y} over $\yi{T}$ is given by the following equation, since the optimal value is $\yi{T}\in\{+Y,-Y\}$, \begin{align*} \hat{y}_T &=\arg\min_{\hat{y}_{T}}\bigg[\left(\mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\mathbf{x}_{T}\right)
Y^2 +\hat{y}_{T}^{2}\\ & \quad +2 Y\left\vert
\mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1}-\hat{y}_{T}\right\vert\bigg]~. \end{align*} This problem is of a similar form to the one discussed by Forster~\cite{Forster}, from which we get the optimal solution, \( \hyi{T} = clip\paren{
\mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1},Y} ~, \) where for $y>0$ we define $clip(x,y)={\rm sign}(x) \min\{\vert x \vert, y\}$. The optimal solution depends explicitly on the bound $Y$, and as its value is not known, we thus ignore it, and define the output of the algorithm to be, \begin{align} \hyi{T} = \mathbf{x}_{T}^{\top}\mathbf{D}_{T}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1} ~. \label{my_predictor} \end{align} We call the algorithm LASER for last step adaptive regressor algorithm, and it is summarized in \figref{algorithm:laser}. Clearly, for $c=\infty$ the LASER algorithm reduces to the AAR algorithm of Vovk~\cite{Vovk01}, or the last-step min-max algorithm of Forster~\cite{Forster}. See also the work of Azoury and Warmuth~\cite{AzouryWa01}. The algorithm can be combined with Mercer kernels as it employs only sums of inner- and outer-products of its inputs. This algorithm can be seen also as a forward algorithm~\cite{AzouryWa01}: The predictor of \eqref{my_predictor} can be seen as the optimal linear {\em model} obtained over the same prefix of length $T-1$ and the new input $\vxi{T}$ with fictional-label $\yi{T}=0$. Specifically, from \eqref{e} we get that if $\yi{T}=0$, then $\mathbf{e}_{T} = \left(\mathbf{I}+c^{-1}\mathbf{D}_{T-1}\right)^{-1}\mathbf{e}_{T-1}$. The prediction of the optimal predictor defined in \eqref{optimal_Q} is $\vxti{T}\mathbf{u}_{T}=\vxti{T}\mathbf{D}_{T}^{-1}\mathbf{e}_{T}=\hat{y}_T$, where $\hat{y}_T$ was defined in \eqref{my_predictor}.
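The recursions \eqref{D} and \eqref{e}, together with the predictor \eqref{my_predictor}, translate directly into code. The following Python sketch is a minimal illustration of ours (not the authors' reference implementation); before the first example it predicts $0$, which amounts to taking $\mathbf{e}_{0}$ to be the zero vector, an assumption on our part.
\begin{verbatim}
import numpy as np

class Laser:
    """Minimal sketch of the recursions (D), (e) and the predictor (my_predictor)."""

    def __init__(self, d, b=1.0, c=100.0):
        self.b, self.c = b, c
        self.D = None            # D_{t-1}; becomes b*I + x_1 x_1^T after the first update
        self.e = np.zeros(d)     # e_{t-1}

    def predict(self, x):
        # hat{y}_T = x_T^T D_T^{-1} (I + c^{-1} D_{T-1})^{-1} e_{T-1}
        if self.D is None:
            return 0.0
        d = len(x)
        D_t = np.linalg.inv(np.linalg.inv(self.D) + np.eye(d) / self.c) + np.outer(x, x)
        e_shrunk = np.linalg.solve(np.eye(d) + self.D / self.c, self.e)
        return x @ np.linalg.solve(D_t, e_shrunk)

    def update(self, x, y):
        d = len(x)
        if self.D is None:
            self.D = self.b * np.eye(d) + np.outer(x, x)          # D_1
            self.e = y * x                                        # e_1
        else:
            D_shrunk = np.linalg.inv(np.linalg.inv(self.D) + np.eye(d) / self.c)
            e_shrunk = np.linalg.solve(np.eye(d) + self.D / self.c, self.e)
            self.D = D_shrunk + np.outer(x, x)                    # eq. (D)
            self.e = e_shrunk + y * x                             # eq. (e)
\end{verbatim}
In practice one may prefer to maintain $\mathbf{D}_{t}^{-1}$ directly, using the matrix-inversion identities employed in the proofs below, and thus avoid the explicit inversions in this sketch.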
\section{Analysis} We now analyze the performance of the algorithm in the worst-case setting, starting with the following technical lemma. \begin{lemma} For all $t$ the following statement holds, \begin{align*} &\mathbf{D'}_{t-1}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}-\mathbf{D}_{t-1}^{-1}\\ &+\mathbf{D'}_{t-1}\left(\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}+c^{-1}\mathbf{I}\right)\preceq0 \end{align*} where $\mathbf{D'}_{t-1}=\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}$. \label{lem:technical} \end{lemma} The proof appears in \appref{proof_lemma_technical}.
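As a quick numerical sanity check of \lemref{lem:technical} (ours; it does not replace the proof in \appref{proof_lemma_technical}), the following Python snippet draws random positive definite matrices $\mathbf{D}_{t-1}$, inputs $\mathbf{x}_t$ and constants $c$, forms $\mathbf{D}_t$ via \eqref{D}, and verifies that the matrix in the lemma has no eigenvalue above zero (up to round-off).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def lemma_matrix(D_prev, x, c):
    """The matrix claimed to be negative semidefinite in the lemma."""
    d = len(x)
    I = np.eye(d)
    D_t = np.linalg.inv(np.linalg.inv(D_prev) + I / c) + np.outer(x, x)  # eq. (D)
    Dp = np.linalg.inv(I + D_prev / c)                                   # D'_{t-1}
    Dti = np.linalg.inv(D_t)
    return Dp @ Dti @ np.outer(x, x) @ Dti @ Dp - np.linalg.inv(D_prev) \
        + Dp @ (Dti @ Dp + I / c)

for _ in range(1000):
    d = int(rng.integers(2, 6))
    A = rng.normal(size=(d, d))
    D_prev = A @ A.T + 0.1 * np.eye(d)       # random positive definite D_{t-1}
    x = rng.normal(size=d)
    c = float(rng.uniform(0.1, 100.0))
    M = lemma_matrix(D_prev, x, c)
    M = (M + M.T) / 2                        # symmetrize against round-off
    assert np.linalg.eigvalsh(M).max() <= 1e-7
print("no positive eigenvalue found in any trial")
\end{verbatim}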
We next bound the cumulative loss of the algorithm, \begin{theorem} \label{thm:basic_bound} Assume the labels are bounded $\sup_t \vert \yi{t} \vert \leq Y$ for some $Y\in\mathbb{R}$. Then the following bound holds,
\begin{align*}
L_T(\textrm{LASER})\leq&\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{T}}\Bigg[b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2} +c V_T(\{\vui{t}\}) \\
& \quad + L_T(\{\vui{t}\})
\Bigg] +Y^{2}\sum_{t=1}^{T}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t} ~. \end{align*} \end{theorem}
\begin{figure}
\caption{LASER: last step adaptive regression algorithm.}
\label{algorithm:laser}
\end{figure}
\begin{proof} Fix $t$. A long algebraic manipulation
yields, \begin{align} &\left(y_{t}-\hat{y}_{t}\right)^{2}+\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t-1}}Q_{t-1}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t-1}\right)\nonumber\\ &-\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t}}Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right)\nonumber\\ =&\left(y_{t}-\hat{y}_{t}\right)^{2} +2y_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}\mathbf{e}_{t-1}\nonumber\\ &\!+\!\mathbf{e}_{t-1}^{\top}\!\Bigg[ \!-\!\mathbf{D}_{t-1}^{-1} \!+\!\!
\mathbf{D'}_{t-1} \!\left(\mathbf{D}_{t}^{-1} \mathbf{D'}_{t-1}
\!\! \!+\!c^{-1}\mathbf{I}\right)\!\!\Bigg]\!\mathbf{e}_{t-1}\nonumber\\ &+y_{t}^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}-y_{t}^{2}~. \label{step1} \end{align} Substituting the specific value of the predictor $\hat{y}_{t}=\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}\mathbf{e}_{t-1}$ from \eqref{my_predictor}, we get that \eqref{step1} equals, \begin{align}
& \hat{y}_{t}^{2}+y_{t}^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}+\mathbf{e}_{t-1}^{\top}\Bigg[-\mathbf{D}_{t-1}^{-1}\nonumber\\ &+\mathbf{D'}_{t-1}\left(\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}+c^{-1}\mathbf{I}\right)\Bigg]\mathbf{e}_{t-1}\nonumber\\
=& \mathbf{e}_{t-1}^{\top}\mathbf{D'}_{t-1}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}\mathbf{e}_{t-1}+\mathbf{e}_{t-1}^{\top}\Bigg[-\mathbf{D}_{t-1}^{-1}\nonumber\\ &+\mathbf{D'}_{t-1}\left(\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}+c^{-1}\mathbf{I}\right)\Bigg]\mathbf{e}_{t-1}
+y_{t}^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\nonumber\\
=& \mathbf{e}_{t-1}^{\top} \Bigg[\mathbf{D'}_{t-1}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}-\mathbf{D}_{t-1}^{-1}\label{step2}\\ &+\mathbf{D'}_{t-1}\left(\mathbf{D}_{t}^{-1}\mathbf{D'}_{t-1}+c^{-1}\mathbf{I}\right)\Bigg]\mathbf{e}_{t-1}+y_{t}^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t} ~.\nonumber \end{align} Using \lemref{lem:technical} we upper bound \eqref{step2} with, \( y_{t}^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t} \leq Y^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t} \) .
Finally, summing over $t\in\left\{ 1,\ldots,T\right\} $ gives the desired bound, \begin{align*} &\sum_{t=1}^{T}\left(y_{t}-\hat{y}_{t}\right)^{2}-\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{T}} \left[b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2} +c\sum_{t=1}^{T-1}\left\Vert
\mathbf{u}_{t+1}-\mathbf{u}_{t}\right\Vert
^{2}\right.\\
& \left. \quad+\sum_{t=1}^{T}\left(y_{t}-\mathbf{u}_{t}^{\top}\mathbf{x}_{t}\right)^{2}\right]\\ =~ &L_T(\textrm{LASER})\!\!-\!\!\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{T}}\Bigg[b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2} \!+\!c V_T(\{\vui{t}\}) + L_T(\{\vui{t}\})
\Bigg]\\ \leq~& Y^{2}\sum_{t=1}^{T}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}
\end{align*} \end{proof} \begin{figure}
\caption{An $H_\infty$ algorithm for online regression.}
\label{algorithm:hi}
\end{figure}
In the next lemma we further bound the right term of \thmref{thm:basic_bound}. This type of bound relies on the covariance-like matrix $\mathbf{D}$. \begin{lemma} \label{lem:bound_1} \begin{equation}
\sum_{t=1}^{T}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\leq\ln\left|\frac{1}{b}\mathbf{D}_{T}\right|
+c^{-1}\sum_{t=1}^{T} {\rm Tr}\paren{\mathbf{D}_{t-1}} ~.\label{covariance_bound} \end{equation} \end{lemma}
\begin{proof} Similar to the derivation of Forster~\cite{Forster} (details omitted due to lack of space), \begin{align*} \mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}
& \leq \ln\frac{\left|\mathbf{D}_{t}\right|}{\left|\mathbf{D}_{t}-\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\right|}
= \ln\frac{\left|\mathbf{D}_{t}\right|}{\left|\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}\right|}\\
& = \ln\frac{\left|\mathbf{D}_{t}\right|}{\left|\mathbf{D}_{t-1}\right|}\left|\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)\right|\\
& =
\ln\frac{\left|\mathbf{D}_{t}\right|}{\left|\mathbf{D}_{t-1}\right|}+\ln\left|\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)\right| ~. \end{align*}
Summing over $t$, the first terms telescope; because $\ln\left|\frac{1}{b}\mathbf{D}_{0}\right| \geq 0$ and $\ln(1+x)\leq x$, we get \( \sum_{t=1}^{T}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\leq
\ln\left|\frac{1}{b}\mathbf{D}_{T}\right|
+\sum_{t=1}^{T}\ln\left|\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)\right|
\leq \ln\left|\frac{1}{b}\mathbf{D}_{T}\right|
+c^{-1}\sum_{t=1}^{T} {\rm Tr}\paren{\mathbf{D}_{t-1}} ~. \) \end{proof}
At first sight it seems that the right term of \eqref{covariance_bound} may grow super-linearly with $T$, as each of the matrices $\mdi{t}$ grows with $t$. The next two lemmas show that this is not the case, and in fact, the right term of \eqref{covariance_bound} is not growing too fast, which will allow us to obtain a sub-linear regret bound. \lemref{operator_scalar} analyzes the properties of the recursion of $\mathbf{D}$ defined in \eqref{D} for scalars, that is $d=1$. In \lemref{eigen_values_lemma} we extend this analysis to matrices. \begin{lemma} \label{operator_scalar} Define \( f(\lambda) = {\lambda \beta}/\paren{\lambda+ \beta} + x^2 \) for $\beta,\lambda \geq 0$ and some $x^2 \leq \gamma^2$. Then: {\bf (1)} $f(\lambda) \leq \beta + \gamma^2$ {\bf (2)} $f(\lambda) \leq \lambda + \gamma^2$ {\bf (3)} $f(\lambda) \leq \max\braces{\lambda,\frac{3\gamma^2 +
\sqrt{\gamma^4+4\gamma^2\beta}}{2}}$ ~. \end{lemma} The proof appears in \appref{proof_operator_scalar}. We build on \lemref{operator_scalar} to bound the maximal eigenvalue of the matrices $\mdi{t}$. \begin{lemma} \label{eigen_values_lemma} Assume $\normt{\vxi{t}} \leq X^2$ for some $X$. Then, the eigenvalues of $\mdi{t}$ (for $t \geq 1$), denoted by $\lambda_i\paren{\mdi{t}}$, are upper bounded by $\max_i\lambda_i\paren{\mdi{t}}\leq\max\braces{ \frac{3X^2 +
\sqrt{X^4+4X^2 c}}{2},b+X^2} $. \end{lemma} \begin{proof}
By induction. From \eqref{D} we have that
$\lambda_i(\mathbf{D}_{1}) \leq b + X^2$ for $i=1, \ldots , d$. We proceed with the induction step for some $t$. For simplicity, denote by $\lambda_i = \lambda_i(\mdi{t-1})$ the $i$th eigenvalue of $\mdi{t-1}$ with a corresponding eigenvector $\vvi{i}$. From \eqref{D} we have, \begin{align} \mathbf{D}_{t} &=\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}+\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\nonumber\\ & \preceq \left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1} + \mathbf{I} \normt{\vxi{t}}\nonumber\\
&= \sum_i^d \vvi{i} \vvti{i}\paren{ \frac{\lambda_i c }{\lambda_i + c} +
\normt{\vxi{t}}} ~.\label{bound_eigens} \end{align} Plugging \lemref{operator_scalar} in \eqref{bound_eigens} we get, \( \mathbf{D}_{t} \preceq \sum_i^d \vvi{i} \vvti{i}\max\braces{ \frac{3X^2 +
\sqrt{X^4+4X^2 c}}{2},b+X^2}
= \max\braces{ \frac{3X^2 +
\sqrt{X^4+4X^2 c}}{2},b+X^2} \mathbf{I}~. \) \end{proof}
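The following short Python experiment (ours) simulates the recursion \eqref{D} with random inputs satisfying $\normt{\vxi{t}} \leq X^2$ and checks numerically that the largest eigenvalue of $\mdi{t}$ stays below the bound of \lemref{eigen_values_lemma}; the particular values of $d$, $T$, $b$, $c$ and $X$ are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, T, b, c, X = 5, 500, 1.0, 50.0, 2.0
bound = max((3 * X**2 + np.sqrt(X**4 + 4 * X**2 * c)) / 2, b + X**2)

D = None
for t in range(T):
    x = rng.normal(size=d)
    x *= rng.uniform(0, X) / np.linalg.norm(x)   # enforce ||x_t||^2 <= X^2
    if D is None:
        D = b * np.eye(d) + np.outer(x, x)       # D_1
    else:
        D = np.linalg.inv(np.linalg.inv(D) + np.eye(d) / c) + np.outer(x, x)
    assert np.linalg.eigvalsh(D).max() <= bound + 1e-9
print("largest eigenvalue stayed below", bound)
\end{verbatim}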
Finally, equipped with the above lemmas we prove the main result of this section. \begin{corollary} \label{cor:main} Assume $\normt{\vxi{t}}\leq X^2$, $\vert\yi{t}\vert \leq Y$. Then, \begin{align} L_T(\textrm{LASER})\leq
b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2}+L_T(\{\vui{t}\})+Y^{2} \ln\left|\frac{1}{b}\mathbf{D}_{T}\right|\nonumber\\ +c^{-1}Y^2{\rm Tr}\paren{\mathbf{D}_0}+c V\nonumber\\
+c^{-1}Y^2 T d \max\braces{ \frac{3X^2 +
\sqrt{X^4+4X^2 c}}{2},b+X^2}~.\label{final_cor} \end{align}
Furthermore, set $b=\varepsilon c$ for some $0<\varepsilon<1$. Denote \( \mu = \max\braces{\frac{9}{8}X^2, \frac{\paren{b+X^2}^2}{8X^2}} \) and \( M = \max\braces{3X^2, b+X^2} \). If $V \leq T \frac{\sqrt{2}Y^2dX}{\mu^{3/2}}$ (low drift) then by setting \begin{align} c= \paren{{\sqrt{2}T Y^2 d X}/{V}}^{2/3}\label{c1} \end{align}
we have, \begin{align} & L_T(\textrm{LASER}) \leq\nonumber\\ & \quad b\left\Vert \mathbf{u}_{1}\right\Vert ^{2} + 3\paren{\sqrt{2}Y^2 d X}^{2/3} T^{2/3} V^{1/3}\nonumber\\ &\quad +\frac{\varepsilon}{1-\varepsilon}Y^{2}d +L_T(\{\vui{t}\})
+Y^{2} \ln\left|\frac{1}{b}\mathbf{D}_{T}\right|\label{bound1}~. \end{align} \end{corollary} The proof appears in \secref{proof_cor_main}.
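As an illustration of the tuning rule, the following Python helper (ours) sets $c$ according to \eqref{c1} and $b=\varepsilon c$ in the low-drift regime; it assumes that $T$, an upper bound on the total drift $V$, and the bounds $Y$ and $X$ are known in advance, and the numbers in the example call are arbitrary.
\begin{verbatim}
import numpy as np

def laser_parameters(T, V, Y, d, X, eps=0.1):
    """Choose c by eq. (c1) for the low-drift regime and set b = eps * c."""
    c = (np.sqrt(2) * T * Y**2 * d * X / V) ** (2.0 / 3.0)
    return eps * c, c   # (b, c)

print(laser_parameters(T=2000, V=20.0, Y=1.0, d=20, X=1.0))
\end{verbatim}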
A few remarks are in order. First, when the total drift $V$ goes to zero, we set $c=\infty$ and thus we have $\mdi{t}=b\mathbf{I}+\sum_{s=1}^t \vxi{s}\vxti{s}$, as used in recent algorithms~\cite{Vovk01,Forster,Hayes,CesaBianchiCoGe05}. In this case the algorithm reduces to the algorithm by Forster~\cite{Forster} (which is also the Aggregating Algorithm for Regression of Vovk~\cite{Vovk01}), with the same logarithmic regret bound (note that the last term of \eqref{bound1} is logarithmic in $T$, see the proof of Forster~\cite{Forster}). See also the work of Azoury and Warmuth~\cite{AzouryWa01}. Second, substituting $V=T\nu$ we get that the bound depends on the average drift as $T^{2/3} (T\nu)^{1/3}=T\nu^{1/3}$. Clearly, to have sublinear regret we must have $\nu=o(1)$.
Third, Vaits and Crammer~\cite{VaitsCr11} recently proposed an algorithm, called ARCOR, for the same setting. The regret of ARCOR depends on the total drift as $\sqrt{T V'}\log(T)$, where their definition of total drift is a sum of the Euclidean differences $V'=\sum_t^{T-1} \Vert \vui{t+1}-\vui{t}\Vert$, rather than the squared norm. When the instantaneous drift $\Vert\vui{t+1}-\vui{t}\Vert$ is constant, this notion of total drift is related to our average drift, $V'=T\sqrt{\nu}$. Therefore, in this case the bound of ARCOR~\cite{VaitsCr11} is $\nu^{1/4} T \log(T)$, which is worse than our bound, both since it has an additional $\log(T)$ factor (as opposed to our additive log term) and since $\nu=o(1)$. Therefore we expect that our algorithm will perform better than ARCOR~\cite{VaitsCr11} when the instantaneous drift is approximately constant. Indeed, the synthetic simulations described in \secref{sec:simulations} further support this conclusion. Fourth, Herbster and Warmuth~\cite{HerbsterW01} developed shifting bounds for general gradient descent algorithms with projection of the weight-vector using the Bregman divergence. In their bounds, there is a factor greater than 1 multiplying the term $L_T\paren{\braces{\vui{t}}}$, leading to a small regret only when the data is close to being realizable with linear models. Yet, their bounds have a better dependency on $d$, the dimension of the inputs $x$. Busuttil and Kalnishkan~\cite{BusuttilK07} developed a variant of the Aggregating Algorithm~\cite{vovkAS} for the non-stationary setting. However, to have sublinear regret they require a strong assumption on the drift, $V=o(1)$, while we require only $V=o(T)$. Fifth, if $V \geq T \frac{Y^2dM}{\mu^{2}}$ then by setting \( c=
\sqrt{{Y^2dMT}/{V}} \)
we have, \begin{align} & L_T(\textrm{LASER}) \leq b\left\Vert \mathbf{u}_{1}\right\Vert ^{2}
+ 2\sqrt{Y^2 d TMV}\nonumber\\ & \quad+\frac{\varepsilon}{1-\varepsilon}Y^{2}d +L_T(\{\vui{t}\})
+Y^{2} \ln\left|\frac{1}{b}\mathbf{D}_{T}\right| \label{high_drift} \end{align}
(See \appref{details_for_second_bound} for details). The last bound is linear in $T$ and can be obtained also by a naive algorithm that outputs $\hat{y}_t=0$ for all $t$.
\section{An $H_\infty$ Algorithm for Online Regression} \label{H8_sec} Adaptive filtering is an active and well established area of research in signal processing. Formally, it is equivalent to online learning. On each iteration $t$ the filter receives an input $\vxi{t}\in\mathbb{R}^d$ and predicts a corresponding output $\hyi{t}$. It then receives the true desired output $\yi{t}$ and updates its internal model. Many adaptive filtering algorithms employ linear models, that is, at time $t$ they output $\hyi{t} = \vwti{t}\vxi{t}$. For example, a well known online learning algorithm~\cite{WidrowHoff} for regression, which is basically a gradient-descent algorithm with the squared-loss, is known as the {\em least mean-square (LMS)} algorithm in the adaptive filtering literature~\cite{Sayed:2008:AF:1370975}.
One possible difference between adaptive filtering and online learning lies in the interpretation of the algorithms and, as a consequence, of their analysis. In online learning, the goal of an algorithm is to make {\em predictions} $\hyi{t}$, and the predictions are compared to the predictions of some function from a known class (e.g. linear, parameterized by $\mathbf{u}$). Thus, a typical online performance bound relates the quality of the algorithm's predictions with the quality of some function's $g(\mathbf{x})=\tran{\vu}\mathbf{x}$ predictions, using some non-negative loss measure $\ell(\vwti{t}\vxi{t},\yi{t})$. Such bounds often have the following shape, \[ \overbrace{\sum_t \ell(\vwti{t}\vxi{t},\yi{t})}^{\textrm{algorithm
loss with respect to observation}} \leq A \overbrace{\sum_t \ell( \tran{\vu}\vxi{t}, \yi{t})}^{\textrm{function $\mathbf{u}$ loss}} + B, \] for some multiplicative-factor $A$ and an additive factor $B$.
Adaptive filtering is similar to the realizable setting in machine learning, where the existence of some filter is assumed and the goal is to recover it using {\em noisy} observations. Often it is assumed that the output is a corrupted version of the output of some function, $y=f(\mathbf{x})+n$, with some noise $n$. Thus a typical bound relates the quality of an algorithm's predictions {\em
with respect to the target filter} $\mathbf{u}$ and the amount of noise in the problem, \[ \overbrace{\sum_t
\ell(\vwti{t}\vxi{t},\tran{\vu}\vxi{t})}^{\textrm{algorithm loss with
respect to a reference}} \leq A \overbrace{\sum_t \ell( \tran{\vu}\vxi{t}, \yi{t})}^{\textrm{amount of
noise}} + B ~. \]
The $H_\infty$ filters~(see e.g. papers by Simon~\cite{Simon:2006:OSE:1146304,DBLP:journals/tsp/Simon06})
are a family of (robust) linear filters developed using a min-max approach, like LASER, and analyzed in the worst-case setting. These filters are reminiscent of the celebrated Kalman filter~\cite{Kalman60}, which was motivated and analyzed in a stochastic setting with Gaussian noise. Pseudocode of one such filter, which we {\em modified} for online linear regression, appears in \figref{algorithm:hi}. The theory of $H_\infty$ filters~\cite[Section 11.3]{Simon:2006:OSE:1146304} states the following bound on the filter's performance. \begin{theorem} Assume the filter is executed with parameters $a>1$ and $b,c>0$. Then, for all input-output pairs $(\vxi{t},\yi{t})$ and for all reference vectors $\vui{t}$ the following bound holds on the filter's performance, \(
\sum_{t=1}^{T}\left(\mathbf{x}_{t}^{\top}\mathbf{w}_{t}-\vxti{t}\mathbf{u}_{t}\right)^{2} \leq a L_T(\{\vui{t}\})
+b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2}+c V_T\paren{\braces{\vui{t}}}
~. \) \end{theorem}
From the theorem we establish a regret bound for the $H_\infty$ algorithm applied to online learning. \begin{corollary} Fix $\alpha>0$. The total squared-loss suffered by the algorithm is bounded by \begin{eqnarray} L_T(H_\infty)&\leq&\left(1+{1}/{\alpha}+\left(1+\alpha\right)a\right) L_T(\braces{\vui{t}})
\label{bound_h8}\\ &&+\left(1+\alpha\right)b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2}+\left(1+\alpha\right)c V_T\paren{\braces{\vui{t}}} ~.
\nonumber \end{eqnarray}
\end{corollary} \begin{proof} Using a bound of Hassibi and Kailath~\cite[Lemma 4]{HassibiKa97} we have that for all $\alpha>0$, \( \left(y_{t}-\mathbf{x}_{t}^{\top}\mathbf{w}_{t}\right)^{2}\leq\left(1+\frac{1}{\alpha}\right)\left(y_{t}-\mathbf{x}_{t}^{\top}\mathbf{u}_{t}\right)^{2}+\left(1+\alpha\right)\left[\mathbf{x}_{t}^{\top}\left(\mathbf{w}_{t}-\mathbf{u}_{t}\right)\right]^{2} \). Plugging back into the theorem and collecting the terms we get the desired bound.
\end{proof} The bound holds for any $\alpha>0$. We plug $\alpha = \sqrt{{L_T\paren{\braces{\vui{t}}}}/\paren{a L_T\paren{\braces{\vui{t}}} + cV + b \normt{\vui{1}}}}$
in \eqref{bound_h8} to get, \begin{align*} L_T(H_\infty) \leq&~ (1+a) L_T\paren{\braces{\vui{t}}} + cV + b \normt{\vui{1}} \\ &+ 2 \sqrt{\paren{a L_T\paren{\braces{\vui{t}}} + cV + b
\normt{\vui{1}}}{L_T\paren{\braces{\vui{t}}}}}\\ \leq&~ (1+a+2\sqrt{a}) L_T\paren{\braces{\vui{t}}} + cV + b \normt{\vui{1}} \\ &+ 2 \sqrt{\paren{cV + b
\normt{\vui{1}}}{L_T\paren{\braces{\vui{t}}}}} ~. \end{align*} Intuitively, we expect the $H_\infty$ algorithm to perform better when the data is close to linear, that is when $L_T\paren{\braces{\vui{t}}}$ is small, as, conceptually, it was designed to minimize a loss with respect to weights $\{\vui{t}\}$. On the other hand, LASER is expected to perform better when the data is hard to predict with linear models, as it is not motivated from this assumption. Indeed, the bounds reflect these observations.
Comparing the last bound with \eqref{bound1} we note a few differences. First, the factor $\paren{1+a+2\sqrt{a}}\geq4$ multiplying $L_T\paren{\braces{\vui{t}}}$ is worse for $H_\infty$ than for LASER, for which this factor is one. Second, LASER has a worse dependency on the drift, $T^{2/3}V^{1/3}$, while for $H_\infty$ it is about $cV + 2 \sqrt{{cV }{L_T\paren{\braces{\vui{t}}}}}$. Third, the $H_\infty$ bound has an additive factor $\sim \sqrt{L_T\paren{\braces{\vui{t}}}}$, while LASER has at most an additive logarithmic factor.
Hence, the bound of the $H_\infty$-based algorithm is better when the cumulative loss $L_T\paren{\braces{\vui{t}}}$ is small. In this case, $4 L_T\paren{\braces{\vui{t}}}$ is not a large quantity, and as all the other quantities behave like $\sqrt{L_T\paren{\braces{\vui{t}}}}$, they are small as well. On the other hand, if $L_T\paren{\braces{\vui{t}}}$ is large, and is linear in $T$, the first term of the bound becomes dominant, and thus the factor of $4$ for the $H_\infty$ algorithm makes its bound higher than that of LASER. Both bounds were obtained from a min-max approach, either directly (LASER) or via a reduction from filtering ($H_\infty$). The bound of the former is lower on hard problems. Kivinen et al.~\cite{KivinenWaHa03} proposed
another approach for filtering with a bound depending on $\sum_t
\Vert \vui{t} \!-\! \vui{t-1} \Vert$ and not the sum of squares as we
have both for LASER and the $H_\infty$-based algorithm.
\section{Simulations}
\label{sec:simulations}
\begin{figure}
\caption{Cumulative squared loss for AROWR, ARCOR, NLMS, CR-RLS, LASER and $H_\infty$ vs iteration. Top left - linear drift and linear data, top right - sublinear drift and linear data, bottom left - linear drift and noisy data, bottom right - sublinear drift and noisy data.}
\label{fig:sims}
\end{figure}
We evaluate the LASER and $H_\infty$ algorithms on four synthetic datasets. We set $T=2000$ and $d=20$. For all datasets, the inputs $\mathbf{x}_t\in\mathbb{R}^{20}$ were generated such that the first ten coordinates were grouped into five groups of size two. Each such pair was drawn from a $45^\circ$ rotated Gaussian distribution with standard deviations $10$ and $1$. The remaining $10$ coordinates were drawn from independent Gaussian distributions $\mcal{N}\paren{0,2}$. The first synthetic dataset was generated using a sequence of vectors $\vui{t}\in\mathbb{R}^{20}$ for which the only non-zero coordinates are the first two, whose values are the coordinates of a unit vector rotating at a constant rate (linear drift). Specifically, we have $\Vert\vui{t}\Vert=1$ and the instantaneous drift $\Vert\vui{t}-\vui{t-1}\Vert$ is constant. The second synthetic dataset was generated using a sequence of vectors $\vui{t}\in\mathbb{R}^{20}$ for which the only non-zero coordinates are the first two. This vector in $\mathbb{R}^2$ is of unit norm $\Vert\vui{t}\Vert=1$ and rotates at a rate of $t^{-1}$ (sublinear drift). In addition, every $50$ time-steps the two-dimensional vector defined above was ``embedded'' in a different pair of coordinates of the reference vector $\vui{t}$: for the first $50$ steps these were coordinates $1,2$, for the next $50$ examples coordinates $3,4$, and so on. This change causes a switch in the reference vector $\vui{t}$. For the first two datasets we set $\yi{t}=\vxti{t}\vui{t}$ (linear data). The third and fourth datasets are the same as the first and second except that we set $\yi{t}=\vxti{t}\vui{t}+n_t$ where $n_t\sim\mcal{N}\paren{0,0.05}$ (noisy data).
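For concreteness, the following Python sketch (ours) generates data in the spirit of the first dataset; the rotation rate of $\vui{t}$ and the reading of $\mcal{N}\paren{0,2}$ as a Gaussian with variance $2$ are our own choices, since the text does not pin them down.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
T, d = 2000, 20

def rotated_pair(n, angle=np.pi / 4, stds=(10.0, 1.0)):
    """n samples of a coordinate pair from a 45-degree rotated Gaussian."""
    z = rng.normal(size=(n, 2)) * np.array(stds)
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    return z @ R.T

X = np.empty((T, d))
for j in range(5):                                        # five correlated pairs
    X[:, 2 * j:2 * j + 2] = rotated_pair(T)
X[:, 10:] = rng.normal(0.0, np.sqrt(2.0), size=(T, 10))   # remaining coordinates

# First dataset: u_t is non-zero only in its first two coordinates, which hold
# a unit vector rotating at a constant rate (the rate itself is our choice).
angles = 2 * np.pi * np.arange(T) / T
U = np.zeros((T, d))
U[:, 0], U[:, 1] = np.cos(angles), np.sin(angles)
y = np.einsum('td,td->t', X, U)   # linear data; add Gaussian noise for the noisy variant
\end{verbatim}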
We compared six algorithms: NLMS (normalized least mean square)~\cite{Bershad,Bitmead} which is a state-of-the-art first-order algorithm, AROWR (AROW for Regression)~\cite{CrammerKuDr09}, ARCOR~\cite{VaitsCr11}, CR-RLS~\cite{Chen,Salgado}, LASER and $H_\infty$. The algorithms' parameters were tuned using a single random sequence. We repeat each experiment $100$ times reporting the mean cumulative square-loss. The results are summarized in \figref{fig:sims} (best viewed in color).
For the first and third datasets (left plots of \figref{fig:sims}) we observe the superior performance of the LASER algorithm over previous approaches. LASER has a good tracking ability, fast learning rate and it is designed to perform well in severe conditions like linear drift.
For the second and fourth datasets (right plots of \figref{fig:sims}), where the drift is sublinear, ARCOR outperforms LASER since it is especially designed for a sublinear amount of drift; yet $H_\infty$ outperforms ARCOR when there is no noise (top-right plot).
For the third and fourth datasets (bottom plots of \figref{fig:sims}), where we added noise to labels, the performance of $H_\infty$ degrades, as expected from our discussion in \secref{H8_sec}.
\section{Related Work} The problem of performing online regression has been studied for more than fifty years in statistics, signal processing and machine learning. We already mentioned the work of Widrow and Hoff~\cite{WidrowHoff}, who studied a gradient-descent algorithm for the squared loss. Many variants of the algorithm have been studied since then. A notable example is the normalized least mean squares algorithm (NLMS)~\cite{Bitmead,Bershad}, which adapts to the input's scale.
There exists a large body of work on this problem proposed by the machine learning community, which clearly cannot be covered fully here. We refer the reader to an encyclopedic book on the subject~\cite{CesaBiGa06}. Gradient-descent-based algorithms for regression with the squared loss were proposed by Cesa-Bianchi et al.~\cite{Nicolo_Warmuth} about two decades ago. These algorithms were generalized and extended by Kivinen and Warmuth~\cite{Kiv_War} using additional regularization functions.
An online version of the ridge regression algorithm in the worst-case setting was proposed and analyzed by Foster~\cite{Foster91}. A related algorithm called Aggregating Algorithm (AA) was studied by Vovk~\cite{vovkAS}, and later applied to the problem of linear regression with square loss~\cite{Vovk01}. The recursive least squares (RLS)~\cite{Hayes} is a similar algorithm proposed for adaptive filtering. Both algorithms make use of second order information, as they maintain a weight-vector and a covariance-like positive semi-definite (PSD) matrix used to re-weight the input. The eigenvalues of this covariance-like matrix increase with time $t$, a property which is used to prove logarithmic regret bounds.
The derivation of our algorithm shares similarities with the work of Forster~\cite{Forster} and the work of Moroshko and Crammer~\cite{MoroshkoCr12}. These algorithms are motivated by the last-step min-max predictor. While the algorithms of Forster~\cite{Forster} and Moroshko and Crammer~\cite{MoroshkoCr12} are designed for the stationary setting, our work is primarily designed for the non-stationary setting. Moroshko and Crammer~\cite{MoroshkoCr12} also discussed a weak variant of the non-stationary setting, where the complexity is measured by the total distance from a reference vector $\bar{\mathbf{u}}$, rather than the total distance between consecutive vectors (as in this paper), which is more relevant to non-stationary problems. Note also that Moroshko and Crammer~\cite{MoroshkoCr12} did not derive algorithms for the non-stationary setting, but only showed a bound for the weighted min-max algorithm (designed for the stationary setting) in the weak non-stationary setting.
Our work is most closely related to a recent algorithm~\cite{VaitsCr11} called ARCOR. This algorithm is based on the RLS algorithm with an additional projection step, and it controls the eigenvalues of a covariance-like matrix using scheduled resets. The Covariance Reset RLS algorithm (CR-RLS)~\cite{Chen,Salgado,Goodhart} is another example of an algorithm that resets a covariance matrix, but every fixed number of data points, as opposed to ARCOR, which performs these resets adaptively. All of these algorithms, which were designed to have numerically stable computations, perform covariance resets from time to time. Our algorithm, LASER, is simpler as it does not involve these steps, and it controls the increase of the eigenvalues of the covariance matrix $\mathbf{D}$ implicitly rather than explicitly, by ``averaging'' it with a fixed diagonal matrix (see \eqref{D}). The Kalman filter~\cite{Kalman60} and the $H_\infty$ algorithm~(e.g. \cite{Simon:2006:OSE:1146304}) designed for filtering take a similar approach, yet the exact algebraic form is different (\figref{algorithm:laser} vs. \figref{algorithm:hi}).
ARCOR also controls explicitly the norm of the weight vector, which is used for its analysis, by projecting it into a bounded set, as was also proposed by Herbster and Warmuth~\cite{HerbsterW01}. Other approaches to control its norm are to shrink it multiplicatively~\cite{KivinenSW01} or
by removing old examples~\cite{CavallantiCG07}. Some of these algorithms were designed to have sparse functions in the kernel space~(e.g. \cite{CrammerKS03,Dekel05theforgetron}). Note that our algorithm LASER is simpler as it does not perform any of these operations explicitly.
Finally, few algorithms that employ second order information were recently proposed for classification~\cite{CesaBianchiCoGe05,CrammerKuDr09,Crammer:2012:CLC:2343676.2343704}, and later in the online convex programming framework \cite{DuchiHS10,McMahanS10}.
\section{Summary and Conclusions}
We proposed a novel algorithm for non-stationary online regression designed and analyzed with the squared loss. The algorithm was developed from the last-step minmax predictor for {\em non-stationary} problems, and we showed an exact recursive form of its solution. We also described an algorithm based on the $H_\infty$ filter, which is likewise motivated by a min-max approach, albeit for filtering, and bounded its regret. Simulations showed the superior performance of our algorithms under a worst-case (close to constant per-iteration) drift.
An interesting future direction is to extend the algorithm to general loss functions rather than the squared loss. Currently, implementing the algorithm requires either a matrix inversion or an eigenvector decomposition; we would like to design a more efficient version of the algorithm. Additionally, for the algorithm to perform well, the amount of drift $V$, or a bound on it, must be known to the algorithm. An interesting direction is to design algorithms that automatically detect the level of drift, or are invariant to it.
\appendix \section{Proofs}
\subsection{Proof of \corref{cor:main}}
\label{proof_cor_main} \begin{proof} Plugging \lemref{lem:bound_1} in \thmref{thm:basic_bound} we have for all $(\vui{1} \dots \vui{T})$, \begin{align*} L_T(\textrm{LASER})
&\leq b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2}+c V+L_T(\{\vui{t}\})\\
&+Y^{2} \ln\left|\frac{1}{b}\mathbf{D}_{T}\right|+c^{-1}Y^2 \sum_{t=1}^{T} {\rm Tr}\paren{\mathbf{D}_{t-1}} ~. \end{align*} Using \lemref{eigen_values_lemma} we bound the RHS and get \begin{align} L_T(\textrm{LASER})\leq
b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2}+L_T(\{\vui{t}\})+Y^{2} \ln\left|\frac{1}{b}\mathbf{D}_{T}\right|\nonumber\\ +c^{-1}Y^2{\rm Tr}\paren{\mathbf{D}_0}+c V\nonumber\\
+c^{-1}Y^2 T d \max\braces{ \frac{3X^2 +
\sqrt{X^4+4X^2 c}}{2},b+X^2}~.\nonumber \end{align}
The term $c^{-1}Y^2{\rm Tr}\paren{\mathbf{D}_0}$ does not depend on $T$, because \( c^{-1}Y^2{\rm Tr}\paren{\mathbf{D}_0}=c^{-1}Y^2d\frac{bc}{c-b}=\frac{\varepsilon}{1-\varepsilon}Y^{2}d ~. \) To show \eqref{bound1}, note that \( V \leq T \frac{\sqrt{2}Y^2dX}{\mu^{3/2}} \Leftrightarrow \mu \leq \paren{\frac{\sqrt{2}Y^2dXT}{V}}^{2/3} =c~. \) We thus have that $\paren{ 3X^2 +
\sqrt{ X^4+4X^2 c }}/ {2 } \leq \paren{ 3X^2 +
\sqrt{ 8X^2 c }}/{ 2 } \leq \sqrt{ 8X^2 c }$, and we get a bound on
the right term of \eqref{final_cor}, \begin{align*}
\max\braces{ \paren{ 3X^2 +
\sqrt{ X^4+4X^2 c } }/{ 2 },b+X^2 } \leq \\
\max\braces{ \sqrt{ 8X^2 c },b+X^2 } \leq 2X\sqrt{ 2c } ~. \end{align*}
Using this bound and plugging the value of $c$ from \eqref{c1} we bound \eqref{final_cor} and conclude the proof, \begin{align*} \paren{\frac{\sqrt{2}T Y^2 d X}{V}}^{2/3} V &+ Y^2 T d 2X \sqrt{2 \paren{\frac{\sqrt{2}T Y^2 d X}{V}}^{-2/3}}\\
&= 3\paren{\sqrt{2}T Y^2 d X}^{2/3} V^{1/3} ~. \end{align*}
\end{proof}
\section{APPENDIX\\SUPPLEMENTARY MATERIAL} \label{sec:supp_material}
\subsection{Proof of \lemref{lem:lemma11}} \label{proof_lemma11}
\begin{proof} We calculate \begin{align*} P_{t}\left(\mathbf{u}_{t}\right) =& \min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t-1}} \Bigg(b\left\Vert \mathbf{u}_{1}\right\Vert ^{2}+c\sum_{s=1}^{t-1}\left\Vert \mathbf{u}_{s+1}-\mathbf{u}_{s}\right\Vert ^{2}\\ &+\sum_{s=1}^{t}\left(y_{s}-\mathbf{u}_{s}^{\top}\mathbf{x}_{s}\right)^{2}\Bigg)\\
=&
\min_{\mathbf{u}_{t-1}}\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t-2}}\Bigg(b\left\Vert
\mathbf{u}_{1}\right\Vert ^{2}+c\sum_{s=1}^{t-2}\left\Vert
\mathbf{u}_{s+1}-\mathbf{u}_{s}\right\Vert
^{2}\\ &+\sum_{s=1}^{t-1}\left(y_{s}-\mathbf{u}_{s}^{\top}\mathbf{x}_{s}\right)^{2} +c\left\Vert \mathbf{u}_{t}-\mathbf{u}_{t-1}\right\Vert ^{2}\\ &+\left(y_{t}-\mathbf{u}_{t}^{\top}\mathbf{x}_{t}\right)^{2}\Bigg)\\
=& \min_{\mathbf{u}_{t-1}}\Bigg(P_{t-1}\left(\mathbf{u}_{t-1}\right)+c\left\Vert \mathbf{u}_{t}-\mathbf{u}_{t-1}\right\Vert ^{2}\\ &+\left(y_{t}-\mathbf{u}_{t}^{\top}\mathbf{x}_{t}\right)^{2}\Bigg) \end{align*} \end{proof}
\subsection{Proof of \lemref{lem:lemma12}} \label{proof_lemma12} \begin{proof} By definition, \( P_{1}\left(\mathbf{u}_{1}\right) = Q_{1}\left(\mathbf{u}_{1}\right)
= b\left\Vert \mathbf{u}_{1}\right\Vert ^{2}+\left(y_{1}-\mathbf{u}_{1}^{\top}\mathbf{x}_{1}\right)^{2}
=
\mathbf{u}_{1}^{\top}\left(b\mathbf{I}+\mathbf{x}_{1}\mathbf{x}_{1}^{\top}\right)\mathbf{u}_{1}-2y_{1}\mathbf{u}_{1}^{\top}\mathbf{x}_{1}+y_{1}^{2} ~, \) and indeed \( \mathbf{D}_{1}=b\mathbf{I}+\mathbf{x}_{1}\mathbf{x}_{1}^{\top} \), \( \mathbf{e}_{1}=y_{1}\mathbf{x}_{1} \), and \( f_{1}=y_{1}^{2} \).
We proceed by induction, assume that, \( P_{t-1}\left(\mathbf{u}_{t-1}\right)=\mathbf{u}_{t-1}^{\top}\mathbf{D}_{t-1}\mathbf{u}_{t-1}-2\mathbf{u}_{t-1}^{\top}\mathbf{e}_{t-1}+f_{t-1} \).
Applying \lemref{lem:lemma11} we get, \begin{align*} P_{t}\left(\mathbf{u}_{t}\right) =& \min_{\mathbf{u}_{t-1}}\Bigg(\mathbf{u}_{t-1}^{\top}\mathbf{D}_{t-1}\mathbf{u}_{t-1}-2\mathbf{u}_{t-1}^{\top}\mathbf{e}_{t-1}+f_{t-1}\\ &+c\left\Vert \mathbf{u}_{t}-\mathbf{u}_{t-1}\right\Vert ^{2}+\left(y_{t}-\mathbf{u}_{t}^{\top}\mathbf{x}_{t}\right)^{2}\Bigg)\\
=& \min_{\mathbf{u}_{t-1}}\Bigg(\mathbf{u}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)\mathbf{u}_{t-1}\\ &-2\mathbf{u}_{t-1}^{\top}\left(c\mathbf{u}_{t}+\mathbf{e}_{t-1}\right)+f_{t-1}+c\left\Vert \mathbf{u}_{t}\right\Vert ^{2}\\ &+\left(y_{t}-\mathbf{u}_{t}^{\top}\mathbf{x}_{t}\right)^{2}\Bigg)\\
=& -\left(c\mathbf{u}_{t}+\mathbf{e}_{t-1}\right)^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\left(c\mathbf{u}_{t}+\mathbf{e}_{t-1}\right)\\ &+f_{t-1}+c\left\Vert \mathbf{u}_{t}\right\Vert ^{2}+\left(y_{t}-\mathbf{u}_{t}^{\top}\mathbf{x}_{t}\right)^{2}\\
=& \mathbf{u}_{t}^{\top}\left(c\mathbf{I}+\mathbf{x}_{t}\mathbf{x}_{t}^{\top}-c^{2}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\right)\mathbf{u}_{t}\\
&
-2\mathbf{u}_{t}^{\top}\left[c\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}\mathbf{x}_{t}\right]\\ &-\mathbf{e}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+f_{t-1}+y_{t}^{2} \end{align*} Using the Woodbury identity we continue to develop the last equation, \begin{align*}
= &\mathbf{u}_{t}^{\top}\left(c\mathbf{I}+\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\right.\\ &\left.-c^{2}\left[c^{-1}\mathbf{I}-c^{-2}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}\right]\right)\mathbf{u}_{t}\\
& -2\mathbf{u}_{t}^{\top}\left[\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}\mathbf{x}_{t}\right]\\ &-\mathbf{e}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+f_{t-1}+y_{t}^{2}\\
= & \mathbf{u}_{t}^{\top}\left(\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}+\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\right)\mathbf{u}_{t}\\
& -2\mathbf{u}_{t}^{\top}\left[\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}\mathbf{x}_{t}\right]\\ &-\mathbf{e}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+f_{t-1}+y_{t}^{2}~, \end{align*} and indeed \( \mathbf{D}_{t}=\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}+\mathbf{x}_{t}\mathbf{x}_{t}^{\top} \), \\ \( \mathbf{e}_{t}=\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}\mathbf{x}_{t} \) and, \( f_{t}=f_{t-1}-\mathbf{e}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}^{2} \), as desired. \end{proof}
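To double-check the recursion of \lemref{lem:lemma12} numerically (ours, not part of the proof), the following Python snippet compares the closed-form minimum $f_t-\mathbf{e}_t^{\top}\mathbf{D}_t^{-1}\mathbf{e}_t$ of \eqref{optimal_Q} against a brute-force minimization of $Q_t$ from \eqref{Q} on a small random instance.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
d, t_max, b, c = 3, 5, 2.0, 7.0
X = rng.normal(size=(t_max, d))
y = rng.normal(size=t_max)

def Q(U_flat, t):
    """Q_t(u_1, ..., u_t) from eq. (Q)."""
    U = U_flat.reshape(t, d)
    val = b * U[0] @ U[0] + c * np.sum((U[1:] - U[:-1]) ** 2)
    return val + np.sum((y[:t] - np.einsum('sd,sd->s', U, X[:t])) ** 2)

# The recursions for D_t, e_t and f_t of the lemma.
D = b * np.eye(d) + np.outer(X[0], X[0])
e = y[0] * X[0]
f = y[0] ** 2
for t in range(1, t_max):
    D_shrunk = np.linalg.inv(np.linalg.inv(D) + np.eye(d) / c)
    e_shrunk = np.linalg.solve(np.eye(d) + D / c, e)
    f = f - e @ np.linalg.solve(c * np.eye(d) + D, e) + y[t] ** 2
    D = D_shrunk + np.outer(X[t], X[t])
    e = e_shrunk + y[t] * X[t]

closed_form = f - e @ np.linalg.solve(D, e)                   # eq. (optimal_Q)
brute_force = minimize(Q, np.zeros(t_max * d), args=(t_max,)).fun
print(closed_form, brute_force)   # the two values agree up to optimizer precision
\end{verbatim}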
\subsection{Proof of \lemref{lem:technical}}
\label{proof_lemma_technical} \begin{proof} We first use the Woodbury equation to get the following two identities \begin{align*} \mathbf{D}_{t}^{-1}&=\left[\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}+\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\right]^{-1}\\ &=\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\\ &-\frac{\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\\ \end{align*} and \begin{align*} \left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}&=\mathbf{I}-c^{-1}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1} \end{align*} Multiplying both identities with each other we get, \begin{align}
& \mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\nonumber\\
=~& \Bigg[\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\nonumber\\ &-\frac{\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\Bigg]\Bigg[\mathbf{I}\nonumber\\ &-c^{-1}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}\Bigg]\nonumber\\
=~ & \mathbf{D}_{t-1}^{-1}-\frac{\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t-1}^{-1}}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\label{identity1} \end{align} and, similarly, we multiply the identities in the other order and get, \begin{align}
& \left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{D}_{t}^{-1}
\nonumber\\ =~& \mathbf{D}_{t-1}^{-1}-\frac{\mathbf{D}_{t-1}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\label{identity2} \end{align}
Finally, from \eqref{identity1} we get, \begin{align*}
&\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\\ &-\mathbf{D}_{t-1}^{-1}\\ &+\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\left[\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\right.\\ &\left.+c^{-1}\mathbf{I}\right]\\
=~&
\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\\ &-\mathbf{D}_{t-1}^{-1}\\ &+\left[\mathbf{I}-c^{-1}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)^{-1}\right]\Bigg[\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\\ &-\frac{\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t-1}^{-1}}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\Bigg] \end{align*} We develop the last equality and use \eqref{identity1} and \eqref{identity2} in the second equality below, \begin{align*}
=~ &
\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\\ &-\mathbf{D}_{t-1}^{-1}+\mathbf{D}_{t-1}^{-1}-\frac{\mathbf{D}_{t-1}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t-1}^{-1}}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\\
=~ &
\left[\mathbf{D}_{t-1}^{-1}-\frac{\mathbf{D}_{t-1}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\right]\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\\ &\left[\mathbf{D}_{t-1}^{-1}-\frac{\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t-1}^{-1}}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}\right]\\ &-\frac{\mathbf{D}_{t-1}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t-1}^{-1}}{1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}}
\\ =~& -\frac{\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}\mathbf{D}_{t-1}^{-1}\mathbf{x}_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t-1}^{-1}}{\left(1+\mathbf{x}_{t}^{\top}\left(\mathbf{D}_{t-1}^{-1}+c^{-1}\mathbf{I}\right)\mathbf{x}_{t}\right)^{2}}~~\preceq~~0 \end{align*} \end{proof}
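The chain of identities above can likewise be verified numerically; the following small sketch (illustrative only, assuming \texttt{numpy}, with random positive definite data) checks both the final closed form and its negative semidefiniteness.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, c = 4, 0.9
A = rng.normal(size=(d, d))
D = A @ A.T + np.eye(d)                  # D_{t-1}, symmetric positive definite
x = rng.normal(size=d)
I = np.eye(d)

Dinv = np.linalg.inv(D)
B = np.linalg.inv(I + D / c)             # (I + c^{-1} D_{t-1})^{-1}
Dt = np.linalg.inv(Dinv + I / c) + np.outer(x, x)
Dt_inv = np.linalg.inv(Dt)

lhs = (B @ Dt_inv @ np.outer(x, x) @ Dt_inv @ B
       - Dinv
       + B @ (Dt_inv @ B + I / c))
s = x @ (Dinv + I / c) @ x
rhs = -s * np.outer(Dinv @ x, Dinv @ x) / (1 + s) ** 2

print(np.allclose(lhs, rhs))                         # expect True
print(np.all(np.linalg.eigvalsh(rhs) <= 1e-10))      # negative semidefinite
\end{verbatim}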
\subsection{Derivations for \thmref{thm:basic_bound}} \label{algebraic_manipulation} \begin{align*}
& \left(y_{t}-\hat{y}_{t}\right)^{2}+\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t-1}}Q_{t-1}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t-1}\right)\\ &-\min_{\mathbf{u}_{1},\ldots,\mathbf{u}_{t}}Q_{t}\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{t}\right)\\
=& \left(y_{t}-\hat{y}_{t}\right)^{2}-\mathbf{e}_{t-1}^{\top}\mathbf{D}_{t-1}^{-1}\mathbf{e}_{t-1}+f_{t-1}+\mathbf{e}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{e}_{t}-f_{t}\\
=& \left(y_{t}-\hat{y}_{t}\right)^{2}-\mathbf{e}_{t-1}^{\top}\mathbf{D}_{t-1}^{-1}\mathbf{e}_{t-1}\\ &+\mathbf{e}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}-y_{t}^{2}\\
&
+\left(\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}\mathbf{x}_{t}\right)^{\top}\mathbf{D}_{t}^{-1}\\ &\left(\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}\mathbf{x}_{t}\right) \end{align*} where the last equality follows \eqref{e}. We proceed to develop the last equality, \begin{align*} =& \left(y_{t}-\hat{y}_{t}\right)^{2}-\mathbf{e}_{t-1}^{\top}\mathbf{D}_{t-1}^{-1}\mathbf{e}_{t-1}\\ &+\mathbf{e}_{t-1}^{\top}\left(c\mathbf{I}+\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}-y_{t}^{2}\\ & +\mathbf{e}_{t-1}^{\top}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}\\ &+2y_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}+y_{t}^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}\\
=& \left(y_{t}-\hat{y}_{t}\right)^{2}+\mathbf{e}_{t-1}^{\top}\Bigg(-\mathbf{D}_{t-1}^{-1}+\\ &\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\left[\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\right.\\ &\left.+c^{-1}\mathbf{I}\right]\Bigg)\mathbf{e}_{t-1}+2y_{t}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\left(\mathbf{I}+c^{-1}\mathbf{D}_{t-1}\right)^{-1}\mathbf{e}_{t-1}\\ &+y_{t}^{2}\mathbf{x}_{t}^{\top}\mathbf{D}_{t}^{-1}\mathbf{x}_{t}-y_{t}^{2}~. \end{align*}
\subsection{Details for the bound \eqref{high_drift}} \label{details_for_second_bound} To show the bound \eqref{high_drift}, note that, \( V \geq T \frac{Y^2dM}{\mu^{2}} \Leftrightarrow \mu \geq \sqrt{ \frac{TY^2dM}{V}}=c~. \) We thus have that the right term of \eqref{final_cor} is upper bounded as follows, \begin{align*} &\max\braces{ \frac{3X^2 +
\sqrt{X^4+4X^2 c}}{2},b+X^2} \\ \leq& \max\braces{ 3X^2,\sqrt{X^4+4X^2 c},b+X^2} \\ \leq& \max\braces{ 3X^2,\sqrt{2}X^2 ,\sqrt{8X^2 c},b+X^2}\\
=& \sqrt{8X^2}
\max\braces{ \frac{3X^2}{\sqrt{8X^2}},\sqrt{
c},\frac{b+X^2}{\sqrt{8X^2}}} \\
=& \sqrt{8X^2}
\sqrt{\max\braces{ \frac{(3X^2)^2}{{8X^2}},
c,\frac{\paren{b+X^2}^2}{{8X^2}}}} \\ =& \sqrt{8X^2} \sqrt{\max\braces{ \mu,c}} \leq \sqrt{8X^2} \sqrt{\mu} = M ~. \end{align*} Using this bound and plugging $c=
\sqrt{{Y^2dMT}/{V}}$ we bound \eqref{final_cor}, \( \sqrt{\frac{Y^2dMT}{V}}V + \frac{1}{\sqrt{\frac{Y^2dMT}{V}}} TdY^2M = 2\sqrt{Y^2dMTV} ~. \)
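Indeed, with this choice of $c$ the two summands coincide: $cV=\sqrt{Y^2dMT/V}\,V=\sqrt{Y^2dMTV}$ and $TdY^2M/c=TdY^2M\sqrt{V/(Y^2dMT)}=\sqrt{Y^2dMTV}$, which yields the stated value $2\sqrt{Y^2dMTV}$.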
\subsection{Proof of \lemref{operator_scalar}} \label{proof_operator_scalar} \begin{proof} For the first property of the lemma we have that
$f(\lambda) = {\lambda \beta}/\paren{\lambda+ \beta} + x^2 \leq \beta\times 1 + x^2$.
The second property follows from the symmetry between $\beta$ and $\lambda$.
To prove the third property we decompose the function as, \( f(\lambda) = \lambda - \frac{\lambda^2 }{\lambda+ \beta} + x^2 \). Therefore, the function is bounded by its argument $f(\lambda)\leq \lambda$ if, and only if, $- \frac{\lambda^2 }{\lambda+ \beta} + x^2 \leq 0$. Since we assume $x^2\leq\gamma^2$, the last inequality holds if, \( -\lambda^2 + \gamma^2 \lambda + \gamma^2\beta \leq 0
\), which holds for $\lambda \geq \frac{\gamma^2 +
\sqrt{\gamma^4+4\gamma^2\beta}}{2}$.
To conclude: if $\lambda \geq \frac{\gamma^2 +
\sqrt{\gamma^4+4\gamma^2\beta}}{2}$, then $f(\lambda) \leq \lambda$. Otherwise, by the second property, we have, $f(\lambda) \leq \lambda+\gamma^2 \leq \frac{\gamma^2 +
\sqrt{\gamma^4+4\gamma^2\beta}}{2} + \gamma^2 = \frac{3\gamma^2 +
\sqrt{\gamma^4+4\gamma^2\beta}}{2}$, as required. \end{proof}
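For completeness, the three properties and the resulting bound are easy to check numerically; the following is a small illustrative sketch (assuming \texttt{numpy}) with arbitrarily chosen values of $\beta$ and $\gamma$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
beta, gamma = 2.0, 1.5
thr = (gamma ** 2 + np.sqrt(gamma ** 4 + 4 * gamma ** 2 * beta)) / 2

for lam in rng.uniform(0.01, 20.0, size=1000):
    x2 = rng.uniform(0.0, gamma ** 2)
    f = lam * beta / (lam + beta) + x2
    assert f <= beta + x2 + 1e-12                     # first property
    assert f <= lam + x2 + 1e-12                      # second property
    if lam >= thr:
        assert f <= lam + 1e-12                       # third property
    assert f <= max(lam, thr + gamma ** 2) + 1e-12    # concluding bound
print("all checks passed")
\end{verbatim}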
\end{document}
\begin{definition}[Definition:Limit of Vector-Valued Function/Definition 2]
Let $\mathbf r : \R \to \R^n$ be a vector-valued function.
We say that:
:$\ds \lim_{t \mathop \to c} \map {\mathbf r} t = \mathbf L$
{{iff}}:
:$\forall \epsilon > 0: \exists \delta > 0: 0 < \size {t - c} < \delta \implies \norm {\map {\mathbf r} t - \mathbf L} < \epsilon$
where $\norm {\, \cdot \,}$ denotes the Euclidean norm on $\R^n$.
\end{definition}
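For example, let $\mathbf r: \R \to \R^2$ be given by $\map {\mathbf r} t = (t, t^2)$. Then $\ds \lim_{t \mathop \to 1} \map {\mathbf r} t = (1, 1)$: given $\epsilon > 0$, take $\delta = \min \{1, \epsilon / \sqrt {10}\}$; for $0 < \size {t - 1} < \delta$ one has $\norm {\map {\mathbf r} t - (1, 1)} = \size {t - 1} \sqrt {1 + (t + 1)^2} < \sqrt {10} \, \size {t - 1} < \epsilon$.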
The Matrix Exponential of a Diagonal Matrix
For a square matrix $M$, its matrix exponential is defined by
\[e^M = \sum_{k=0}^\infty \frac{M^k}{k!}.\]
Suppose that $M$ is a diagonal matrix
\[ M = \begin{bmatrix} m_{1 1} & 0 & 0 & \cdots & 0 \\ 0 & m_{2 2} & 0 & \cdots & 0 \\ 0 & 0 & m_{3 3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & m_{n n} \end{bmatrix}.\]
Find the matrix exponential $e^M$.
First, we find $M^k$ for each integer $k \geq 0$. The first couple powers can be calculated directly,
\[M^0 = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} , \quad M = \begin{bmatrix} m_{1 1} & 0 & 0 & \cdots & 0 \\ 0 & m_{2 2} & 0 & \cdots & 0 \\ 0 & 0 & m_{3 3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & m_{n n} \end{bmatrix},\] \[M^2 = \begin{bmatrix} m^2_{1 1} & 0 & 0 & \cdots & 0 \\ 0 & m^2_{2 2} & 0 & \cdots & 0 \\ 0 & 0 & m^2_{3 3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & m^2_{n n} \end{bmatrix} , \quad M^3 = \begin{bmatrix} m^3_{1 1} & 0 & 0 & \cdots & 0 \\ 0 & m^3_{2 2} & 0 & \cdots & 0 \\ 0 & 0 & m^3_{3 3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & m^3_{n n} \end{bmatrix}.\]
The general pattern can now be seen:
\[M^k = \begin{bmatrix} m^k_{1 1} & 0 & 0 & \cdots & 0 \\ 0 & m^k_{2 2} & 0 & \cdots & 0 \\ 0 & 0 & m^k_{3 3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & m^k_{n n} \end{bmatrix}.\]
Now, we can calculate the infinite series $e^M$:
\begin{align*}
e^M &= \sum_{k=0}^{\infty} \frac{ M^k }{k!} \\
&= \sum_{k=0}^\infty \frac{1}{k!} \begin{bmatrix} m^k_{1, 1} & 0 & 0 & \cdots & 0 \\ 0 & m^k_{2, 2} & 0 & \cdots & 0 \\ 0 & 0 & m^k_{3, 3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & m^k_{n, n} \end{bmatrix} \\
&= \begin{bmatrix} \sum_{k=0}^\infty \frac{ m^k_{1 1} }{k!} & 0 & 0 & \cdots & 0 \\ 0 & \sum_{k=0}^\infty \frac{ m^k_{2 2} }{k!} & 0 & \cdots & 0 \\ 0 & 0 & \sum_{k=0}^\infty \frac{ m^k_{3 3} }{k!} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \sum_{k=0}^\infty \frac{ m^k_{n n} }{k!} \end{bmatrix} . \end{align*}
Now, for any real number $c$ we can write $e^c$ as the series
\[e^c = \sum_{k=0}^\infty \frac{ c^k }{k!}.\]
Thus, the matrix exponential $e^M$ is
\[e^M = \begin{bmatrix} e^{ m_{1 1} } & 0 & 0 & \cdots & 0 \\ 0 & e^{ m_{2 2} } & 0 & \cdots & 0 \\ 0 & 0 & e^{ m_{3 3} } & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & e^{ m_{n n} } \end{bmatrix}.\]
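As a quick numerical sanity check (an illustrative sketch, assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.linalg import expm

d = np.array([0.5, -1.0, 2.0, 3.3])    # the diagonal entries m_{11}, ..., m_{nn}
M = np.diag(d)

# For a diagonal matrix, the matrix exponential should agree with
# exponentiating each diagonal entry separately.
print(np.allclose(expm(M), np.diag(np.exp(d))))   # expect True
```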
Calculating pi manually
Hypothetically you are put in math jail and the jailer says he will let you out only if you can give him 707 digits of pi. You can have a ream of paper and a couple pens, no computer, books, previous pi memorization or outside help.
What is the best formula to use? Here "best" means the least margin of error (most important) and being reasonably quick (second in importance).
707 probably seems arbitrary, but I got it from here: http://en.wikipedia.org/wiki/William_Shanks
Also, I don't really understand how he could use that formula, because I thought you would need a table to get arctan values, or it would take a long time to compute the arctan values used in the formula; but that isn't too important.
I asked this question today because it celebrates the most accurate pi day for the next 100 years :)
pi transcendental-numbers
Neil
$\begingroup$ That formula gives you 527 correct decimals, not 707 $\endgroup$ – Carlos Laguillo Mar 14 '15 at 18:42
$\begingroup$ @CarlosLaguillo: The formula gives you arbitrary many correct digits, but Shanks made an arithmetic error at the 527th position. $\endgroup$ – hmakholm left over Monica Mar 14 '15 at 18:46
$\begingroup$ The last sentence is false. $3.1416$ is closer to $\pi$ than $3.1415$. $\endgroup$ – Spenser Mar 14 '15 at 19:18
$\begingroup$ In Europe, of course, $\pi$ day is 31st April. Oh wait... $\endgroup$ – TonyK Mar 15 '15 at 10:24
$\begingroup$ You could also do it in base pi and give the digits 1.00000... $\endgroup$ – glaba Mar 18 '15 at 2:44
As mentioned in Wikipedia's biography, Shanks used Machin's formula $$ \pi = 16\arctan(\frac15) - 4\arctan(\frac1{239}) $$
The standard way to use that (and the various Machin-like formulas found later) is to compute the arctangents using the power series
$$ \arctan x = x - \frac{x^3}3 + \frac{x^5}5 - \frac{x^7}7 + \frac{x^9}9 - \cdots $$
Getting $\arctan(\frac15)$ to 707 digits requires about 500 terms calculated to that precision. Each requires two long divisions -- one to divide the previous numerator by 25, another to divide it by the denominator.
The series for $\arctan(\frac1{239})$ converges faster and only needs some 150 terms.
(You can know how many terms you need because the series is alternating (and absolutely decreasing) -- so once you reach a term that is smaller than your desired precision, you can stop).
The point of Machin-like formulas is that the series for $\arctan x$ converges faster the smaller $x$ is. We could just compute $\pi$ as $4\arctan(1)$, but the series converges hysterically slowly when $x$ is as large as $1$ (and not at all if it is even larger). The trick embodied by Machin's formula is to express a straight angle as a sum/difference of the corner angles of (a small number of different sizes of) long and thin right triangles with simple integer ratios between the cathetes.
The arctangent gets easier to compute the longer and thinner each triangle is, and especially if the neighboring side is an integer multiple of the opposite one, which corresponds to angles of the form $\arctan\frac{1}{\text{something}}$. Then going from one numerator in the series to the next costs only a division, rather than a division and a multiplication.
Machin observed that four copies of the $5$-$1$-$\sqrt{26}$ triangle make the same angle as a $1$-$1$-$\sqrt2$ triangle (whose angle is $\pi/4$) plus one $239$-$1$-$\sqrt{239^2+1}$ triangle. These facts can be computed exactly using the techniques displayed here.
Later workers have found better variants of Machin's idea, but if you're in prison without reference works, it's probably easiest to rediscover Machin's formula by remembering that some number of copies of $\arctan\frac1k$ for some fairly small $k$ adds up to something very close to 45°.
hmakholm left over Monica
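To make the recipe concrete, here is a rough Python sketch of this computation using only integer arithmetic (everything scaled by $10^{720}$, i.e. with a handful of guard digits beyond the 707 that are needed; the precision bookkeeping here is ad hoc, not tuned):

```python
def arctan_inv(k, digits):
    """arctan(1/k) scaled by 10**digits, via the alternating power series."""
    scale = 10 ** digits
    term = scale // k            # x = 1/k, scaled
    total, n, sign = 0, 1, 1
    while term:
        total += sign * (term // n)
        term //= k * k           # multiply by x^2 = 1/k^2
        n += 2
        sign = -sign
    return total

digits = 720                     # 707 digits plus guard digits
pi = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
print(str(pi)[:708])             # "3" followed by the first 707 decimals
```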
$\begingroup$ how did you come up with the numbers 1/5 and 1/239. Is it just that the closer the fraction is to zero the closer it is to pi? If that's the case then why pick those specific numbers. $\endgroup$ – Neil Mar 14 '15 at 19:03
$\begingroup$ @Neil: No, but the series for $\arctan x$ converges faster the smaller $x$ is. We have $\pi = 4\arctan 1$, but the series converges hysterically slowly when $x$ is as large as $1$ (and not at all if it is even larger). The trick embodied by Machin's formula is to express a straight angle as a sum/difference of the corner angles of long and thin right triangles with simple integer ratios between the cathetes. $\endgroup$ – hmakholm left over Monica Mar 14 '15 at 19:12
$\begingroup$ What is a cathete? also why not pick a bigger number than 239 to make it faster or is 239 the best number for some reason? $\endgroup$ – Neil Mar 14 '15 at 19:22
$\begingroup$ @Neil: A cathete is one of the two sides in a right triangle that make up the right angle. I've extended the answer a bit. The combination of $\frac15$ and $\frac1{239}$ was the best known in Shank's time (where "best" means roughly that it was the one where the larger of the angles, namely $\arctan\frac15$ was smallest), but there are others, some of which are shown in the Wikipedia article. $\endgroup$ – hmakholm left over Monica Mar 14 '15 at 19:28
$\begingroup$ @HenningMakholm: "because the series is alternating" is not enough; you need the sequence of the absolute values to decrease monotonically to zero. $\endgroup$ – Martin Argerami Mar 15 '15 at 5:30
If pi is a normal number, you can give him any sequence of 707 numbers and they are guaranteed to be digits of pi...
glaba
$\begingroup$ And what do you think the math police will do to you if you assume that pi is normal without proof? $\endgroup$ – Winston Ewert Mar 15 '15 at 2:26
$\begingroup$ I guess they want the 707 first digits, not any 707 digits. (Or at least a sequence of digits together with the location they appear in the decimal expansion of $\pi$.) $\endgroup$ – Paŭlo Ebermann Mar 15 '15 at 10:22
$\begingroup$ @WinstonEwert They will give you a non-computable number and tell you to find the first trillion digits. $\endgroup$ – k_g Mar 15 '15 at 18:52
The Gauss–Legendre algorithm is really good: it only uses the four elementary operations and square roots, and it converges very fast.
Blex
$\begingroup$ This would literally take decades to do by hand, and also I am going to have to give it to Machin's formula, because it states in your link, "However, the drawback is that it (the Gauss-Legendre algorithm) is memory intensive and it is therefore sometimes not used over Machin-like formulas." $\endgroup$ – Irrational Person Mar 14 '15 at 18:56
$\begingroup$ Life Sentence in math jail, oh no :) $\endgroup$ – Neil Mar 14 '15 at 19:45
The following formula by Ramanujan gives 8 correct decimal digits for each $k$.
$$ \pi = \left( \frac{2\sqrt{2}}{9801} \sum^\infty_{k=0} \frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}} \right)^{-1}.$$
If you calculate the partial sum from $k=0$ to $88$, then you will get $712$ correct digits of $\pi$.
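For what it's worth, this is easy to check numerically, e.g. with the mpmath library (an illustrative sketch; the working precision chosen here is ad hoc):

```python
from mpmath import mp, mpf, factorial, sqrt

mp.dps = 730                     # working precision, a bit beyond 712 digits
s = sum(factorial(4 * k) * (1103 + 26390 * k)
        / (factorial(k) ** 4 * mpf(396) ** (4 * k)) for k in range(89))
pi_approx = 1 / (2 * sqrt(2) / 9801 * s)
print(mp.nstr(mp.pi - pi_approx, 5))   # tiny: well below 1e-707
```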
$\begingroup$ How long would you estimate this would take? $\endgroup$ – Neil Mar 14 '15 at 19:46
$\begingroup$ But the terms look complicated enough to compute that it's not clear that it's less work than Machin's formula. $\endgroup$ – hmakholm left over Monica Mar 14 '15 at 19:48
$\begingroup$ @Neil I don't know. It depends on how good you are at calculating factorials and powers. $(88!)^4$ is $538$ digits long, $396^{88\,\cdot\,4}$ has $915$ digits, and you have to calculate them correctly. But you don't have to calculate tenthousands of operations. $\endgroup$ – user153012 Mar 14 '15 at 19:55
$\begingroup$ Is this Ramanujan formula better to get to 707 manually or is the Chudnovsky algorithm based on the Ramanujan formula better? $\endgroup$ – Neil Mar 14 '15 at 20:07
$\begingroup$ @Neil The Chudnovsky algorithm is faster. $\endgroup$ – user153012 Mar 14 '15 at 20:10
Hand him 707 digits, starting with 111111 then moving on to 222222 and 3333 and so forth. Call your jailor over and tell him you have 707 digits of pi. Out of the kindness of your heart, offer to help him put them in the right order after he lets you out.
$\begingroup$ Cort Ammon, I like that :) $\endgroup$ – Neil Mar 15 '15 at 1:04
Using Bellard's formula, you can find $707$ digits of $\pi$ with relatively simple operations taking $233$ terms of the series (i.e. $n_{\mathrm{max}}=233$):
$$\pi = \frac1{2^6} \sum_{n=0}^\infty \frac{(-1)^n}{2^{10n}} \left(-\frac{2^5}{4n+1} - \frac1{4n+3} + \frac{2^8}{10n+1} - \frac{2^6}{10n+3} - \frac{2^2}{10n+5} - \frac{2^2}{10n+7} + \frac1{10n+9} \right).$$
Another option is to use Bailey–Borwein–Plouffe digit extraction algorithm for base-$16$ representation of $\pi$, since no one said the digits must be decimal.
Ruslan
$\begingroup$ How long do you think it would take to do that calculation? Also doing that whole sum only gives 1 number right? Or does it give you more? $\endgroup$ – Neil Mar 15 '15 at 10:27
$\begingroup$ Dunno, depends on how fast you are at calculation. What do you mean by the whole sum? If you sum this up to $n=233$, you get $\pi$ with an error of $\approx2.7\cdot10^{-708}$. $\endgroup$ – Ruslan Oct 27 '15 at 5:30
Nilakantha's series hasn't been mentioned yet.
$$ \pi = 3 + \frac4{2\times3\times4} - \frac4{4\times5\times6} + \frac4{6\times7\times8} - \frac4{8\times9\times10} + \dots $$
Without dots: $$ \pi = 3 + \sum_{k=1}^\infty (-1)^{k+1}\frac1{k \times (2k+1) \times (k+1)} $$
This formula is better than Leibniz's formula, but inefficient compared to Machin's formula.
It gives the first five digits of pi in (only) 14 terms of the series. Yet with it, computing each subsequent digit of pi requires roughly twice as many terms as the previous digit. So it can't realistically be used to compute the first 707 digits of pi.
loxaxs
We may use Leibniz series combined with the Euler-Maclaurin formula.
$$\frac\pi4=\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\\\small=1-\sum_{k=1}^{n-1}\left(\frac1{4k-1}-\frac1{4k+1}\right)-\int_n^\infty\left(\frac1{4x-1}-\frac1{4x+1}\right)~\mathrm dx-\frac12\left(\frac1{4x-1}-\frac1{4x+1}\right)-\sum_{k=1}^p\frac{B_{2k}}{2k}\left(\frac1{(n-1/4)^{2k}}-\frac1{(n+1/4)^{2k}}\right)+R_{n,p}\\|R_{n,p}|<\frac{3\cdot(2p)!}{(6n^2)^{2p}}$$To get 707 digits of $\pi$, you'll need $|R_{n,p}|<10^{-708}$. For example, taking $p=833,n=20$ gives the desired error.
$$\pi\pm10^{-708}=1-\sum_{k=1}^{19}\left(\frac1{4k-1}-\frac1{4k+1}\right)-\ln\left(1+\frac2{79}\right)-\sum_{k=1}^{833}\frac{B_{2k}}{2k}\left(\frac{4^{2k}}{79^{2k}}-\frac{4^{2k}}{81^{2k}}\right)$$
Where $B_k$ are the Bernoulli numbers and you can evaluate
$$\ln\left(1+\frac2{79}\right)\pm10^{-708}=-\sum_{k=1}^{442}\frac1k\left(-\frac2{79}\right)^k$$
and $\pm$ denotes maximum possible error.
Simply Beautiful Art
$\begingroup$ Not exactly the best, but reasonable. $\endgroup$ – Simply Beautiful Art Sep 18 '17 at 17:05
There are a few super-convergent series and sequences that may be used to approximate $\pi$. I add this as a CW since I already have my own answer, and I merely stumbled upon this one, though I have absolutely zero clue how they work.
The mentioned series and sequences are known as Borwein's algorithm.
The first such example:
\begin{aligned}A&=212175710912{\sqrt {61}}+1657145277365\\B&=13773980892672{\sqrt {61}}+107578229802750\\C&={\big (}5280(236674+30303{\sqrt {61}}){\big )}^{3}\end{aligned}
$$\frac1\pi =12\sum_{n=0}^{\infty }\frac{(-1)^{n}(6n)!\,(A+nB)}{(n!)^{3}(3n)!\,C^{n+1/2}}$$
This series only requires about 30 terms to get 707 digits of $\pi$.
The second such example:
\begin{aligned}A={}&63365028312971999585426220\\&{}+28337702140800842046825600{\sqrt {5}}\\&{}+384{\sqrt {5}}(10891728551171178200467436212395209160385656017\\&{}+4870929086578810225077338534541688721351255040{\sqrt {5}})^{1/2}\\B={}&7849910453496627210289749000\\&{}+3510586678260932028965606400{\sqrt {5}}\\&{}+2515968{\sqrt {3110}}(6260208323789001636993322654444020882161\\&{}+2799650273060444296577206890718825190235{\sqrt {5}})^{1/2}\\C={}&-214772995063512240\\&{}-96049403338648032{\sqrt {5}}\\&{}-1296{\sqrt {5}}(10985234579463550323713318473\\&{}+4912746253692362754607395912{\sqrt {5}})^{1/2}\end{aligned}
$$\frac{\sqrt{-C^3}}\pi=\sum _{n=0}^\infty\frac{(6n)!}{(3n)!(n!)^3}\frac{A+nB}{C^{3n}}$$
The last example I will show:
\begin{aligned}a_{0}&={\frac {1}{3}}\\r_{0}&={\frac {{\sqrt {3}}-1}{2}}\\s_{0}&=(1-r_{0}^{3})^{1/3}\end{aligned}
\begin{aligned}t_{n+1}&=1+2r_{n}\\u_{n+1}&=(9r_{n}(1+r_{n}+r_{n}^{2}))^{1/3}\\v_{n+1}&=t_{n+1}^{2}+t_{n+1}u_{n+1}+u_{n+1}^{2}\\w_{n+1}&={\frac {27(1+s_{n}+s_{n}^{2})}{v_{n+1}}}\\a_{n+1}&=w_{n+1}a_{n}+3^{2n-1}(1-w_{n+1})\\s_{n+1}&={\frac {(1-r_{n})^{3}}{(t_{n+1}+2u_{n+1})v_{n+1}}}\\r_{n+1}&=(1-s_{n+1}^{3})^{1/3}\end{aligned}
$$a_k\to\frac1\pi$$
To get 707 digits of $\pi$, you only need to evaluate out to $a_5$ or $a_6$. (I have no idea, but apparently if $a_n$ approximates $\pi$ out $k$ digits, then $a_{n+1}$ approximates $\pi$ out approximately $9k$ digits. Insane!)
Simply Beautiful Art
$\begingroup$ Simply: Thank you!${}{}$ $\endgroup$ – amWhy Sep 21 '17 at 15:45
You can use this method that counts digit for digit in any base:
http://bellard.org/pi/pi_n2/pi_n2.html
And if you can smuggle in a computer there are programs using this method:
https://stackoverflow.com/questions/5905822/function-to-find-the-nth-digit-of-pi
Lehs
Most websites state that William Shanks "calculated" 707 digits; this is not quite accurate. When he published his 707 digits of Pi he also published the two arctangents to 709 digits, which is the true count of the digits he calculated. If you are interested, the last two digits are 92, and no, he did not round up the 707th digit; he simply dropped the last two digits. The correct values were published in 1873 and can be found in the Proceedings of the Royal Society of London, Volume 21, page 319, which was known to contain typographical errors; it also had the two sub-terms William Shanks used to arrive at Pi. Look at "On the extension of the numerical value of π", found in the "Proceedings of the Royal Society of London 21:318–319". The page required can be found at "archive.org/stream/philtrans00902295/00902295#page/n1/mode/2up". As a side note, there is a two-digit error in the last 180 (erroneous) digits: "51100" should be "51111".
So, as you can see, he did calculate 709 digits, and in some cases the leading "3" is also counted, which would have made a nice round number like 710; there is no proof this was done in this case. In his 1853 book he printed the term values for his first 530-digit calculation, together with 609-digit values for the arctangents and the value of Pi to 607 digits. There was a 20-year gap before he published the 707-digit value of Pi; during this time he worked on many other types of numbers. I have calculated Pi by hand to 100 digits, which took 108 pages; extending this to 709 digits would take 5429 pages, and the cost of that much paper at the time must have been considerable. If William Shanks had done a perfect job, the last 5 correct digits would be 99561; his calculated value would have been 99456, an error of 105 units in the last digit, which would have made the last 3 digits (the 707th through 709th) incorrect.
Erwin Engert
A pretty simple method I would use if I had no access to books would be to just calculate the alternating sum of reciprocals of consecutive odd numbers and multiply by four, since $$\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}=\frac \pi 4$$ It would take a very, very long time though, but it is a fairly simple method.
Another method would be to make some kind of small, circular pool, measure the diameter, then pour water from a measuring cup into the pool, keeping track of the amount of water you have poured. Once the pool is filled to a certain height, measure the height; since the volume of water is given by $\pi (d/2)^2 h$, with a big enough pool (diameter-wise) we would be able to work out the digits accurately enough. This approach is less time consuming but needs a lot more resources.
cirpis
$\begingroup$ On the other hand, it takes $10^8$ terms to get 7 decimal terms correctly, so this might not be a good approach. $\endgroup$ – cirpis Mar 14 '15 at 19:18
$\begingroup$ Life sentence in math jail without parole, jk :) $\endgroup$ – Neil Mar 14 '15 at 19:24
$\begingroup$ The universe is not (by FAR) large enough to contain a pool with enough water in it that the relative error implicit in rounding off to an integer number of molecules of water is less than $10^{-707}$. $\endgroup$ – hmakholm left over Monica Mar 14 '15 at 19:43
$\begingroup$ Sorry cripis bro, that wouldn't work either. The visible universe isn't big enough to give you a measurement to get past 63 digits of pi. Ignoring the effects of physics and stuff. $\endgroup$ – Neil Mar 14 '15 at 19:44
$\begingroup$ But the universe is expanding! So someday it will be lange enough :D $\endgroup$ – cirpis Mar 14 '15 at 19:44
July 2013, 12(4): 1635-1656. doi: 10.3934/cpaa.2013.12.1635
A global attractor for a fluid--plate interaction model
I. D. Chueshov 1, and Iryna Ryzhkova 2,
Department of Mechanics and Mathematics, Kharkov National University, 4 Svobody sq., 61077, Kharkov, Ukraine
Department of Mechanics and Mathematics, Kharkov National University, 4 Svobody Sq., Kharkov, 61022, Ukraine
Received February 2011 Revised June 2012 Published November 2012
We study asymptotic dynamics of a coupled system consisting of linearized 3D Navier--Stokes equations in a bounded domain and a classical (nonlinear) elastic plate equation for transversal displacement on a flexible flat part of the boundary. We show that this problem generates a semiflow on an appropriate phase space. Our main result states the existence of a compact finite-dimensional global attractor for this semiflow. We do not assume any kind of mechanical damping in the plate component. Thus our result means that dissipation of the energy in the fluid due to viscosity is sufficient to stabilize the system. To achieve the result we first study the corresponding linearized model and show that this linear model generates a strongly continuous, exponentially stable semigroup.
Keywords: linearized 3D Navier--Stokes equations, nonlinear plate, fluid--structure interaction, finite-dimensional attractor.
Mathematics Subject Classification: Primary: 35B41, 35Q30; Secondary: 74F10, 74K2.
Citation: I. D. Chueshov, Iryna Ryzhkova. A global attractor for a fluid--plate interaction model. Communications on Pure & Applied Analysis, 2013, 12 (4) : 1635-1656. doi: 10.3934/cpaa.2013.12.1635
\begin{document}
\title{Some Numeric Hypergeometric Supercongruences}
\begin{dedication} {In celebration of Geoffrey Mason's 70th birthday.} \end{dedication}
\begin{abstract} In this article, we list a few hypergeometric supercongruence conjectures based on two evaluation formulas of Whipple and numeric data computed using \texttt{Magma} and \texttt{Sagemath}. \end{abstract}
\section{Introduction}
Let $n$ be a positive integer and $a_1,\cdots, a_n,b_1, \cdots, b_n, \lambda\in \mathbb Q$, with $b_n=1$. Let $\alpha=\{a_1,\cdots,a_n\}$ and $\beta=\{b_1,\cdots,b_{n}\}$ be two multi-sets of rational numbers. Here the $a_i$'s (resp. $b_j$'s) may repeat. The triple $\{\alpha,\beta;\lambda\}$ is called a \emph{hypergeometric datum}. The corresponding hypergeometric series (see \cite{AAR}) is defined by
\begin{equation*} F(\alpha,\beta;\lambda):=\pFq{n}{n-1}{a_1&\cdots& a_{n-1}&a_n}{&b_1&\cdots&b_{n-1}}{\lambda}=\sum_{k=0}^\infty \frac{(a_1)_k\cdots(a_n)_k}{(b_1)_k\cdots (b_{n-1})_k}\frac{\lambda^k}{k!}, \end{equation*} where $(a)_k=a(a+1)\cdots(a+k-1)=\frac{\Gamma(a+k)}{\Gamma(a)}$ with $\Gamma(\cdot)$ being the Gamma function. As a function of $\lambda$, it satisfies an order-$n$ Fuchsian differential equation which has only three regular singularities located at $0,1,\infty$. The truncated hypergeometric series $F(\alpha,\beta;\lambda)_m:=\pFq{n}{n-1}{a_1&\cdots& a_{n-1}&a_n}{&b_1&\cdots&b_{n-1}}{\lambda}_m$ with subscript $m$ denotes the truncated series at $\lambda^m$th term.
Hypergeometric functions form an important class of special functions. They play many vital roles in differential equations, algebraic varieties, modular forms. In his recent papers \cite{CGM,CM}, Geoffrey Mason explored the role of hypergeometric functions in vector valued modular forms. In \cite{CGM} by Franc, Gannon and Mason, $p$-adic properties of the coefficients of hypergeometric functions are used to explain the unbounded denominator behavior of noncongruence vector valued modular forms. For related discussions on the unbounded denominator behavior of genuine noncongruence modular forms, see \cite{Chen, KL, LL}. In this paper we will discuss the $p$-adic properties of hypergeometric functions initiated in Dwork's work (see \cite{Dwork}).
To present Dwork-type congruences, we first recall \begin{defn}\label{def:dash} Let $p$ be a fixed prime number and $r\in \Z_p^\times$. Use $[r]_0$ to denote the first $p$-adic digit of $r$, namely the integer in $[0,p-1]$ which is congruent to $r$ modulo $p$. The Dwork dash operation, a map from $\Z_p$ to $\Z_p$, is defined by $$ r'= (r+[-r]_0)/p. $$ \end{defn} In other words, $$r'p-r=[-r]_0.$$ For example, if $p$ is an odd prime, then $[-\frac12]_0=\frac{p-1}2$. If $r=\frac 76$ and $p\equiv 1\mod 3$, then $[-\frac 76]_0=\frac{p-7}6$. Hence $(\frac 76)'=(\frac 76+\frac{p-7}6)/p=\frac 16.$ When $p\equiv 2\mod 3$, $[-\frac 76]_0=\frac{5p-7}6$ and $(\frac 76)'=\frac 56.$
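The dash operation is easy to experiment with; the following small Python sketch (illustrative only, using the standard \texttt{fractions} module, for rational $r$ whose denominator is prime to $p$) reproduces the examples above.
\begin{verbatim}
from fractions import Fraction

def first_digit_of_minus(r, p):
    # [-r]_0: the integer in [0, p-1] congruent to -r modulo p
    num, den = (-r).numerator, (-r).denominator
    return num * pow(den, -1, p) % p

def dash(r, p):
    # r' = (r + [-r]_0) / p
    return (r + first_digit_of_minus(r, p)) / p

r = Fraction(7, 6)
print(dash(r, 7))    # p = 7 = 1 mod 3: expect 1/6
print(dash(r, 5))    # p = 5 = 2 mod 3: expect 5/6
\end{verbatim}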
Fix a prime $p>2$. We now recall the reflection formula for the $p$-adic Gamma function. It says for $x\in \Z_p$, \begin{equation}\label{eq:reflect}\Gamma_p(x)\Gamma_p(1-x)=(-1)^{a_0(x)}\end{equation} where $a_0(x)$ is unique integer in $[1,2,\cdots,p]$ such that $x\equiv a_0(x)\mod p$. If $x\in \Z_p^\times $, $a_0(x)=[x]_0$. See \cite{Cohen} by Cohen for more properties of the $p$-adic Gamma function.
Let $M:=M(\alpha,\beta;\lambda)$ be the least common denominator of all $a_i$'s, $b_j$'s and $\lambda$. Let $\alpha':=\{ a_1',\cdots, a_n'\}$, namely the image of $\alpha$ under the dash operation.
\begin{theorem}[Dwork, \cite{Dwork}] \label{thm:Dwork} Let $\alpha,\beta$ be two multi-sets of rational numbers of size $n$, where $a_i\in(0,1)$ and $\beta=\{1,\cdots,1\}$. Then for any prime $p\nmid M(\alpha,\beta;\lambda)$, any integer $m\ge 1$, and any integers $t\ge s$, \begin{equation}\label{eq:Dwork} F(\alpha,\beta;\lambda)_{mp^{s}-1}F(\alpha',\beta;\lambda^p)_{mp^{t-1}-1}\equiv F(\alpha',\beta;\lambda^p)_{mp^{s-1}-1}F(\alpha,\beta;\lambda)_{mp^{t}-1}\mod p^{s}. \end{equation} \end{theorem} When $p\nmid F(\alpha,\beta;\lambda)_{p-1}$, in which case we say that $p$ is \emph{ordinary} for $\{\alpha,\beta;\lambda\}$, there is a $p$-adic unit root $\mu_{\alpha,\beta;\lambda,p}\in \Z_p^\times$ such that for any integers $s,m>0$ \begin{equation}\label{eq:unit} F(\alpha,\beta,\lambda)_{mp^{s}-1}/F(\alpha',\beta,\lambda^p)_{mp^{s-1}-1}\equiv \mu_{\alpha,\beta;\lambda,p} \mod p^s. \end{equation}
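For instance, \eqref{eq:Dwork} can be tested numerically for small parameters. The following sketch (illustrative only, exact rational arithmetic via the Python \texttt{fractions} module) checks the case $\alpha=\{\frac12,\frac12\}$, $\beta=\{1,1\}$, $\lambda=-1$, $m=1$ and $p=7$, for which $\alpha'=\alpha$.
\begin{verbatim}
from fractions import Fraction

def trunc_F(alpha, lam, m):
    # truncated series: sum_{k=0}^{m} prod_i (a_i)_k / (k!)^n * lam^k
    total, term = Fraction(0), Fraction(1)
    for k in range(m + 1):
        total += term
        term *= Fraction(lam)
        for a in alpha:
            term *= a + k
        term /= Fraction(k + 1) ** len(alpha)
    return total

def cong(x, y, p, s):
    # x == y mod p^s for p-integral rationals x, y
    d = x - y
    return d.denominator % p != 0 and d.numerator % p ** s == 0

p, lam = 7, -1
alpha = [Fraction(1, 2), Fraction(1, 2)]

# s = 1, t = 2:
lhs = trunc_F(alpha, lam, p - 1) * trunc_F(alpha, lam ** p, p - 1)
rhs = trunc_F(alpha, lam ** p, 0) * trunc_F(alpha, lam, p ** 2 - 1)
print(cong(lhs, rhs, p, 1))          # expect True

# s = 2, t = 3:
lhs = trunc_F(alpha, lam, p ** 2 - 1) * trunc_F(alpha, lam ** p, p ** 2 - 1)
rhs = trunc_F(alpha, lam ** p, p - 1) * trunc_F(alpha, lam, p ** 3 - 1)
print(cong(lhs, rhs, p, 2))          # expect True
\end{verbatim}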
\begin{defn} A multi-set $\alpha=\{a_1,\cdots,a_n\}$ of rational numbers is said to be \emph{defined over} $\mathbb Q$ if $$\prod_{i=1}^n(X-e^{2\pi i a_i})\in \Z[X].$$ A hypergeometric datum $\{\alpha,\beta;\lambda\}$ is said to be \emph{defined over} $\mathbb Q$ if both $\alpha,\beta$ are defined over $\mathbb Q$ and $\lambda\in \mathbb Q$. \end{defn}
Below we focus on hypergeometric data satisfying the following condition
\begin{quote} ($\diamond$): \quad $\alpha=\{a_1,\cdots, a_{n}\}$ and $\beta=\{b_1,\cdots,b_{n-1},1\}$ where $0\le a_i<1$ and $0<b_j\le 1$, both $\alpha$ and $\beta$ are defined over $\mathbb Q$, and $a_i- b_j\notin \Z$ for each $i,j$. \end{quote}
\begin{remark}\label{rem:1} If $\alpha=\{a_1\cdots,a_n\}$ is defined over $\mathbb Q$ and each $0< a_i<1$ for each $i$, then $1-a_i\in \alpha$. Moreover, for any fixed prime $p$ not dividing the least common denominator of $a_i$'s, the set $\alpha$ is closed under the dash operation defined in Definition \ref{def:dash}. \end{remark}
For any holomorphic modular form $f$, we use $a_p(f)$ to denote its $p$th Fourier coefficient. For most of the listed modular forms in this paper, they will be identified by their labels in L-Functions and Modular Forms Database (LMFDB) \cite{LMFDB}. For instance, $f_{8.6.a.a}$ denotes a level-8 weight 6 Hecke eigenform, $\eta(z)^8\eta(4z)^4+8\eta(4z)^{12}$ in terms of the Dedekind eta function, its label at LMFDB is $8.6.a.a$.
In \cite{LTYZ}, the author jointly with Tu, Yui, and Zudilin proved a conjecture of Rodriguez-Villegas made in \cite{RV}. Special cases of this conjecture have been obtained earlier by Kilbourn \cite{Kilbourn} (based on \cite{AO00} by Ahlgren and Ono), McCarthy \cite{McCarthy-RV}, and Fuselier and McCarthy \cite{FM}. See \cite{Zudilin18} by Zudilin for a related background discussion.
\begin{theorem}[Long, Tu, Yui, and Zudilin \cite{LTYZ}]\label{thm:LTYZ} Let $\beta=\{1,1,1,1\}$. For each given multi-set $\alpha=\{r_1,1-r_1,r_2,1-r_2\}$ where either each $r_i\in \{\frac12,\frac13,\frac14,\frac16\}$ or $(r_1,r_2)=(\frac 15,\frac 25), (\frac 18,\frac 38) $, $(\frac 1{10},\frac 3{10})$, $(\frac 1{12},\frac 5{12})$, there exists an explicit weight 4 Hecke cuspidal eigenform depending on $\alpha$, denoted by $f_\alpha$ such that for each prime $p>5$, $$\pFq{4}{3}{r_1&1-r_1&r_2&1-r_2}{&1&1&1}{1}_{p-1}\equiv a_p(f_\alpha) \mod p^3.$$ \end{theorem} \footnote{In fact the proof of this theorem implies for any $m\ge 1$, $F(\alpha,\beta;1)_{mp-1}\equiv a_p(f_\alpha)F(\alpha,\beta;1)_{m-1} \mod p^3$.}
This statement resembles the congruence \eqref{eq:unit} with $m=s=1$ and $p$ ordinary. In fact, the statement of Theorem \ref{thm:LTYZ} reduced modulo $p$, rather than modulo $p^3$, can be shown using the theory of commutative formal group laws. Any congruence that is stronger than what can be predicted by the commutative formal group law is called a \emph{supercongruence}. Supercongruences often reveal additional background symmetries such as complex multiplication. For instance, the background setting of Theorem \ref{thm:LTYZ} involves Calabi-Yau threefolds defined over $\mathbb Q$ which are rigid in the sense that their $h^{2,1}$ Hodge numbers equal 0. The power $p^3$ is somewhat expected in terms of the Atkin and Swinnerton-Dyer (ASD) congruences satisfied by the coefficients of weight-$k$ modular forms for noncongruence subgroups \cite{ASD,LL2,Scholl}. Two-term ASD congruences for the coefficients of weight-$k$ modular forms can be written the same way as Equation \eqref{eq:Dwork} with $p^s$ being replaced by $p^{(k-1)s}$.
The proof of Theorem \ref{thm:LTYZ} uses the finite hypergeometric functions (see Definition \ref{defn:Hp} below) which originated in Katz's work \cite{Katz} and were modified by McCarthy \cite{McCarthy-well}, together with a $p$-adic perturbation method, which was used in \cite{Long} and described explicitly in \cite{LR} by the author and Ramakrishna. The method was also used in other papers; we list a few here: \cite{Swisher} by Swisher, \cite{Liu} by Liu, \cite{He} by He, \cite{BS} by Barman and Saikia, and \cite{MP} by Mao and Pan. Essentially one regroups the corresponding character sum into a major term, that is $F(\alpha,\beta;1)_{p-1}-a_p(f_\alpha)$, and graded error terms. There are different ways to deal with the error terms. In \cite{LTYZ}, they are shown to be zero modulo $p^2$ using identities constructed from a residue-sum technique, which was also used in \cite{OSZ, Zudilin}. This approach may settle some new cases here. See \cite{MOS} by McCarthy, Osburn, and Straub for other fronts of supercongruence results. See \cite{Doran} by Doran et al. for writing local zeta functions of certain K3 surfaces using finite hypergeometric functions.
It is conjectured by Mortenson (stated in \cite{FOP} by Frechette, Ono, Papanikolas) that for all primes $p>2$ \begin{equation}\label{eq:Mortenson} \pFq{6}{5}{\frac 12&\frac 12&\frac 12&\frac 12&\frac 12&\frac 12}{&1&1&1&1&1}{1}_{p-1}\equiv a_p(f_{8.6.a.a}) \mod p^5. \end{equation}The mod $p^3$ claim has been obtained by Osburn, Straub, Zudilin \cite{OSZ}. In \cite{BD}, Beukers and Delaygue proved that for each positive integer $n$, $\alpha=\{\frac 12,\cdots,\frac12\}, \beta=\{1,\cdots,1\}$ of size $n$, a modulo $p^2$ supercongruence of the form \eqref{eq:unit} holds for $F(\alpha,\beta; (-1)^{n(p-1)/2})_{p-1}$ when $p$ is ordinary. Their proof uses the properties of hypergeometric differential equations. Further they conjectured that the corresponding Dwork type congruence \eqref{eq:Dwork} holds modulo $p^{2s}$ instead of $p^s$. See Conjecture 1.5 of \cite{BD}.
In \cite{RR}, Roberts and Rodriguez-Villegas made a more general hypergeometric supercongruence conjecture, stated below in our notation, corresponding to hypergeometric motive defined over $\mathbb Q$ (see \cite{RV-2} by Rodriguez-Villegas and \cite{Roberts} by Roberts) such that $\beta$ consists of all 1 and $\lambda =\pm 1$. Their conjecture summarizes the pattern of Theorem \ref{thm:LTYZ} and the supercongruence \eqref{eq:Mortenson}.
\begin{conj}[Roberts and Rodriguez-Villegas \cite{RR}]Let $\alpha=\{a_1,\cdots,a_n\}$, $\beta=\{1,\cdots,1\}$ be multisets satisfying condition ($\diamond$), $\lambda=\pm 1$ (note that by Remark \ref{rem:1}, $\alpha'=\alpha$). Let $A$ be the unique submotive of the hypergeometric motive corresponding to $\{\alpha,\beta;\lambda\}$ with Hodge number $h^{0,n-1}(A)=1$ and $r$ the smallest positive integer such that $h^{r,n-1-r}(A)=1$. For any $p\nmid M(\alpha,\beta;\lambda)$ which is ordinary for $\{\alpha,\beta;\lambda\}$, there is a $p$-adic unit $\mu_{\alpha,\beta;\lambda,p}$ depending on the hypergeometric datum such that for any integer $s\ge 1$ $$F(\alpha,\beta;\lambda)_{p^s-1}/F(\alpha,\beta;\lambda)_{p^{s-1}-1}\equiv \mu_{\alpha,\beta;\lambda,p} \mod p^{rs}.$$ \end{conj} Comparing with Dwork's congruence \eqref{eq:unit}, this is a refinement, and the degree of the supercongruence is given in terms of the gap between two Hodge numbers of $A$. Our main focus here is to investigate the supercongruence phenomenon when the assumption $\beta=\{1,\cdots,1\}$ is relaxed. We shall see that a slight adjustment to the hypergeometric parameters for the truncated series might be needed in the statement of the generic congruence, which is Theorem \ref{lem:1}. It is based on the definition of the finite hypergeometric sum and two technical lemmas dealing with the discrepancy among the orders of $p$ appearing in Gauss sums and the corresponding rising factorials $(a)_k$. Our numeric computation further confirms that the parameter adjustment is needed for some numeric supercongruences listed in the later part of this paper.
Based on Roberts and Rodriguez-Villegas' philosophy, supercongruences occur for hypergeometric motives with gaps in their Hodge number sequences. For this reason, we consider a few hypergeometric motives defined over $\mathbb Q$ that are decomposable. To look for supercongruence candidates, the strategy we use is based on the Galois perspective of classical hypergeometric functions outlined in \cite{Katz} by Katz, in \cite{BCM} by Beukers, Cohen and Mellit, and in \cite{WIN3X} by the author jointly with Fuselier, Ramakrishna, Swisher and Tu. \section{From finite hypergeometric functions to truncated hypergeometric series}
Following \cite{BCM}, we use the following definition, which corresponds to the normalized hypergeometric function $_n\mathbb F_{n-1}$ in \S4.1 of \cite{WIN3X}, modified from the hypergeometric functions over finite fields defined by Greene in \cite{Greene}.
\begin{defn}\label{defn:Hp} Let $\{\alpha,\beta;\lambda\}$ be a hypergeometric datum defined over $\mathbb Q$. For any finite field of size $q=p^e$ where $p\nmid M(\alpha,\beta;\lambda)$ such that $a_i(q-1), b_j(q-1)\in \Z$ for all $i,j$, the finite hypergeometric function corresponding to $\{\alpha,\beta;\lambda\}$ over $\F_q$ is defined to be \begin{equation}\label{eq:Hp} H_q(\alpha,\beta;\lambda):=\frac 1{1-q}\sum_{k=0}^{q-2} \prod_{i=1}^n\frac{g(\omega^{k+(q-1)a_i})g(\omega^{-k-(q-1)b_i})}{g(\omega^{(q-1)a_i})g(\omega^{-(q-1)b_i})}\omega^k((-1)^n\lambda), \end{equation}where $\omega$ represents an order $q-1$ multiplicative character of $\F_q$, and in this article we use the Teichmuller character of $\F_q^\times\rightarrow \mathbb C_p^\times$; for any multiplicative character $\chi$ of $\F_q^\times$, $$g(\chi):=\sum_{x\in \F_q} \chi(x)\Psi(x)$$ stands for its Gauss sum with respect to a nontrivial additive character $\Psi$ of $\F_q$. \end{defn}
Using the multiplication formulas for Gauss sums, when $\alpha,\beta$ are defined over $\mathbb Q$, the character sum \eqref{eq:Hp} can be formulated in another form that gets rid of the assumption $a_i(q-1), b_j(q-1)\in \Z$. See Theorem 1.3 of \cite{BCM} for details.
Now we recall the following function used in \cite{LTYZ}. Define $$\nu(a,x):=-\left \lfloor \frac{x-a}{p-1}\right \rfloor=\begin{cases}0& \text{ if $0<a\le x<p$}\\ 1 &\text{ if $0<x<a<p$}. \end{cases} $$
To relate the finite hypergeometric sum $H_p(\alpha,\beta;\lambda)$ defined in \eqref{eq:Hp} to the truncated hypergeometric functions, one uses the Gross-Koblitz formula \cite{GK} which says for integer $k$ \begin{equation*} g(\omega^{-k})=-\pi_p^{[k]_0}\Gamma_p\(\left \{\frac k{p-1}\right \}\), \end{equation*}where $\omega$ is the Teichmuller character of $\F_p^\times$, $\Gamma_p(\cdot)$ is the $p$-adic Gamma function, $\pi_p$ is a fixed root of $x^{p-1}+p=0$ in $\mathbb C_p$, the additive character for the Gauss sum is $\Psi(x):=\zeta_p^{\text{Tr}_{\F_p}^{\F_q}(x)}$ of $\F_q$ where $\zeta_p$ is a primitive $p$th root of unity which is congruent to $1+\pi_p$ modulo $\pi_p^2$. Also for a real number $x$, $\{x\}$ stands for its fractional part.
Following the analysis in \cite{AO00,Kilbourn,McCarthy-RV,LTYZ}, among others, we use the Gross-Koblitz formula to rewrite \eqref{eq:Hp} into the following theorem. When $\beta=\{1,\cdots,1\}$, the details of a proof are already given in Section 4 of \cite{LTYZ}. Another reference is Beukers' paper \cite{Beukers}. \begin{theorem}\label{lem:0}Assume that $\alpha,\beta$ are two multi-sets satisfying condition $(\diamond)$. Then for any prime $p\nmid M(\alpha,\beta;\lambda)$ \[ H_p(\alpha,\beta;\lambda)=\frac{1}{1-p}\sum_{k=0}^{p-2}\prod_{i=1}^n\frac{\Gamma_p\left (\left \{ a_i-\frac{k}{p-1} \right \} \right )}{\Gamma_p(a_i)}\frac{\Gamma_p(1-\{b_i\})}{\Gamma_p\left ( 1-\left \{b_i+\frac k{p-1}\right \} \right)} (-p)^{e_{\alpha,\beta}(k)}\omega(\lambda)^k, \] where $$e(k)=e_{\alpha,\beta}(k):=\sum_{i=1}^n -\left \lfloor a_i-\frac{k}{p-1}\right \rfloor-\left \lfloor \frac{k}{p-1}+\{b_i\} \right \rfloor$$ is a step function of $k$ when $k$ is viewed as a variable varying from $0$ to $p-1$. The jumps only happen at values $(p-1)a_i$ or $(p-1)(1-\{b_j\})$. \end{theorem}
Comparing with Beukers' paper \cite{Beukers}, the function $e(k)=\Lambda(p-1-k)$ where $\Lambda(k)$ is given in Definition 1.4 of \cite{Beukers}. The discrepancy is caused by the choice of the generator for the group of multiplicative characters of $\F_p^\times$. In addition we apply the reflection formula for $\Gamma_p(\cdot)$ to get a rising factorial of $b_i$ in the denominator, as stated in Theorem \ref{lem:1} below.
We further define some terms here which are useful for our discussion.
\begin{defn}For any given $\alpha,\beta$ defined over $\mathbb Q$, define \begin{equation*} s(\alpha,\beta):=\min\{e(k)\mid 0\le k<p-1\};\end{equation*} and the bottom interval as \begin{equation*} I(\alpha,\beta):=\{0\le x<p-1 \mid e(x)=s(\alpha,\beta)\} \end{equation*} and the \emph{weight} function as the difference between the largest and smallest $e(k)$ values \begin{equation*} w(\alpha,\beta):=\max\{e(k)\mid 0\le k\le p-2\}-\min\{e(k)\mid 0\le k< p-1\}.\end{equation*} \end{defn}
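To make these definitions concrete, the following minimal Python sketch (added here purely as an illustration; the helper names \texttt{nu}, \texttt{e} and \texttt{profile} are ours and are not part of the original \texttt{Magma}-based computations) evaluates the step function $e_{\alpha,\beta}(k)$, the minimum $s(\alpha,\beta)$, the weight $w(\alpha,\beta)$ and the bottom interval for a given datum and prime $p$, using exact rational arithmetic so that the floor functions are evaluated without rounding errors. For the datum of Example \ref{eg:1} below it recovers $s(\alpha,\beta)=-1$ and $w(\alpha,\beta)=6$.
\begin{verbatim}
from fractions import Fraction
from math import floor

def nu(k, x, p):
    # nu(k, x) = -floor((x - k)/(p - 1)), as recalled above
    return -floor((Fraction(x) - k) / (p - 1))

def e(k, alpha, beta, p):
    # e_{alpha,beta}(k) = sum_i ( -floor(a_i - k/(p-1)) - floor(k/(p-1) + {b_i}) )
    t = Fraction(k, p - 1)
    val = sum(-floor(a - t) for a in alpha)
    val += sum(-floor(t + (b - floor(b))) for b in beta)
    return val

def profile(alpha, beta, p):
    # returns s(alpha,beta), w(alpha,beta) and the list of k in the bottom interval
    vals = [e(k, alpha, beta, p) for k in range(p - 1)]
    s = min(vals)
    w = max(vals) - s
    bottom = [k for k in range(p - 1) if vals[k] == s]
    return s, w, bottom

# data of the first example below: alpha = {1/2,1/2,1/2,1/2,1/3,2/3}, beta = {1,1,1,1,1/6,5/6}
alpha = [Fraction(1, 2)] * 4 + [Fraction(1, 3), Fraction(2, 3)]
beta = [Fraction(1, 1)] * 4 + [Fraction(1, 6), Fraction(5, 6)]
print(profile(alpha, beta, 13)[:2])  # prints (-1, 6) for p = 13, matching the example
\end{verbatim}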
Below we only consider $\lambda=\pm 1$. By the definition of $s(\alpha,\beta)$ and Theorem \ref{lem:0}, $p^{\max\{0,-s(\alpha,\beta)\}}H_p(\alpha,\beta;\lambda)$ is $p$-adically integral. Moreover, the value of $p^{\max\{0,-s(\alpha,\beta)\}}H_p(\alpha,\beta;\lambda)$ modulo $p$ relies only on the contribution of the terms with $k$ in the bottom interval $I(\alpha,\beta)$. Namely, \begin{multline}\label{eq:I}p^{\max\{0,-s(\alpha,\beta)\}}H_p(\alpha,\beta;\pm 1) \\ \equiv p^{\max\{0,-s(\alpha,\beta)\}}\sum_{k, e(k)=s(\alpha,\beta)} \prod_{i=1}^n\frac{\Gamma_p\left (\left \{ a_i-\frac{k}{p-1} \right \} \right )}{\Gamma_p(a_i)}\frac{\Gamma_p(1-\{b_i\})}{\Gamma_p\left (1- \left \{ b_i+\frac k{p-1}\right \} \right)}(-p)^{e_{\alpha,\beta}(k)}(\pm 1)^{k}\\ \mod p.\end{multline}
The next lemmas are useful for the proof of our main Theorem \ref{lem:1}. \begin{lemma}\label{lem:nu-value}Let $p$ be a fixed prime. Then for any $a\in \mathbb Q\cap (0,1)\cap \Z_p^\times$, non negative integer $ k<p$ $$\nu(k,(p-1)a')=\nu(k,[-a-\nu(k,(p-1)a)]_0),$$ where $a'$ is the image of $a$ under the dash operation as before. \end{lemma}
\begin{proof}Recall that $a+[-a]_0=pa'$ and both $a,a'$ are in $(0,1)$ by our assumption. If $k\le (p-1)a'$, i.e. $k- (pa'-a+a-a')=k-[-a]_0-(a-a')\le 0$, then $k\le [-a]_0$ as the difference is an integer less than $|a-a'|<1$. Thus the values on both sides are 0.
If $k>(p-1)a'$, then $k-[-a-\nu(k,(p-1)a)]_0=k-[-a-1]_0=k+1-[-a]_0=k+1-pa'+a>0$. Thus both values are 1. \end{proof}
To use Theorem \ref{lem:0} to link $H_p(\alpha,\beta;\pm 1)$ to a truncated hypergeometric series, the
next Lemma will be helpful. It is mainly about converting the $p$-adic unit part of $g(\omega^{k+(p-1)a})/g(\omega^{(p-1)a})$ to a rising factorial. \begin{lemma}\label{lem:nu}Let $p>2$ be a fixed prime. For $a\in \mathbb Q\cap [0,1)\cap \Z_p^\times$ and any nonnegative integer $ k<p-1$, modulo $p$ \begin{eqnarray*} \Gamma_p \left (\left \{a- \frac{k}{p-1} \right\}\right )/\Gamma_p(a)&\equiv& \begin{cases} C\cdot (a)_k \quad \text{if} \quad k\le (p-1)a \\ C\cdot (a+1)_k \quad \text{if}\quad (p-1)a<k<p-1 \end{cases} \\ &= & C (a+\nu(k,(p-1)a))_k ,\end{eqnarray*} where $\displaystyle C= \frac{(-1)^k}{(-p)^{\nu(k,(p-1)a)}}\frac{(ap)^{\nu(k,(p-1)a)}}{(a'p)^{\nu(k,(p-1)a')}}$.
\end{lemma} \begin{proof}We will prove the equivalent form \begin{multline*}(-p)^{\nu(k,(p-1)a)} \Gamma_p \left (\left \{a- \frac{k}{p-1} \right\}\right )/\Gamma_p(a) \\ \equiv(-1)^k\frac{(ap)^{\nu(k,(p-1)a)}}{(a'p)^{\nu(k,(p-1)a')}}(a+\nu(k,(p-1)a))_k \mod p^{1+\nu(k,(p-1)a)}. \end{multline*}
If $k\le (p-1)a$, $\nu(k,(p-1)a)=0$. Thus $$(-p)^{\nu(k,(p-1)a)} \Gamma_p \left (\left \{a- \frac{k}{p-1} \right\}\right )/\Gamma_p(a)=\Gamma_p\(a-\frac k{p-1}\)/\Gamma_p(a)\equiv \Gamma_p(a+k)/\Gamma_p(a) \mod p.$$
If $k> (p-1)a$, $\nu(k,(p-1)a)=1$. Thus \begin{multline*}(-p)^{\nu(k,(p-1)a)} \Gamma_p \left (\left \{a- \frac{k}{p-1} \right\}\right )/ \Gamma_p(a)\\=(-p)\Gamma_p\(1+a-\frac k{p-1}\)/\Gamma_p(a)\equiv (-p)\Gamma_p(1+a+k)/\Gamma_p(a) \mod p^2. \end{multline*}Since $\Gamma_p(a+1)=(-a)\Gamma_p(a)$ for $a\in \Z_p^\times$, we have $(-p)\Gamma_p(1+a+k)/\Gamma_p(a)=(ap)\Gamma_p(1+a+k)/\Gamma_p(1+a)$.
Note that for any $0\le k<p$ and $c\in \Z_p^\times$: if $k\le [-c]_0$, then $\Gamma_p(c+k)/\Gamma_p(c)=(-1)^k(c)_k$; and when $k> [-c]_0$, $\Gamma_p(c+k)/\Gamma_p(c)=(-1)^k(c'p)^{-1}(c)_k$, as among $c,c+1,\cdots,c+k-1$ one of them is a multiple of $p$, which equals $c+[-c]_0=c'p$ by the definition of the dash operation. Putting these together we have
\begin{multline*}(-p)^{\nu(k,(p-1)a)} \Gamma_p \left (\left \{a- \frac{k}{p-1} \right\}\right )/\Gamma_p(a) \\ \equiv (ap)^{\nu(k,(p-1)a)} \Gamma_p(a+k+{\nu(k,(p-1)a)} )/\Gamma_p(a+{\nu(k,(p-1)a)} ) \mod p^{1+\nu(k,(p-1)a)} \\=(-1)^k\frac{(ap)^{\nu(k,(p-1)a)}}{(a'p)^{\nu(k,[-a-\nu(k,(p-1)a)]_0)}}(a+\nu(k,(p-1)a))_k \mod p^{1+\nu(k,(p-1)a)}. \\=(-1)^k\frac{(ap)^{\nu(k,(p-1)a)}}{(a'p)^{\nu(k,(p-1)a')}}(a+\nu(k,(p-1)a))_k \mod p^{1+\nu(k,(p-1)a)}. \end{multline*}The last claim follows from Lemma \ref{lem:nu-value}. \end{proof}
\begin{theorem}\label{lem:1}Assume that
$\alpha,\beta$ are two multi-sets satisfying the $\diamond$ condition such that
1) $s(\alpha,\beta)<0$;
2) $I(\alpha,\beta) $ is connected.
Let $\lambda\in \mathbb Q^\times$ and let $p$ be any odd prime not dividing $M(\alpha,\beta;\lambda)$. Then
\begin{equation}\label{eq:generic}p^{-s(\alpha,\beta)}H_p(\alpha,\beta;\lambda)\equiv p^{-s(\alpha,\beta)} \cdot F(\hat \alpha, \breve {\beta}; \lambda)_{p-1} \mod p,\end{equation}where $\hat \alpha=\{\hat a_1,\cdots, \hat a_n\}$, $\breve{\beta}=\{\breve b_1,\cdots, \breve b_n\}$ with $\hat a_i=a_i+\nu(k,(p-1)a_i)$ and $\breve b_i=b_i+\left \lfloor \frac{k}{p-1}+\{1-b_i\} \right \rfloor$ for any integer $k\in I(\alpha,\beta)$.
\end{theorem} \begin{proof}Note that $\omega(\lambda)=\lambda \mod p$. Under the assumption that $I(\alpha,\beta)$ is connected, the values of $\hat a_i$ and $\breve b_j$ are independent of the choice of $k$ within the interval $I(\alpha,\beta)$. By \eqref{eq:I}, in the formula for $H_p(\alpha,\beta;\pm 1)$ in Theorem \ref{lem:0}, we only need to consider $k\in I(\alpha,\beta)$. In this formula we separate $$ \prod_{i=1}^n\frac{\Gamma_p\left (\left \{ a_i-\frac{k}{p-1} \right \} \right )}{\Gamma_p(a_i)}\frac{\Gamma_p(1-\{b_i\})}{\Gamma_p\left ( 1-\left \{b_i+\frac k{p-1}\right \} \right)} (-p)^{e_{\alpha,\beta}(k)}$$ into a product of two parts, one of which involves $$\frac{\Gamma_p\left (\left \{ a_i-\frac{k}{p-1} \right \} \right )}{\Gamma_p(a_i)}(-p)^{ -\left \lfloor a_i-\frac{k}{p-1}\right \rfloor}=\frac{\Gamma_p\left (\left \{ a_i-\frac{k}{p-1} \right \} \right )}{\Gamma_p(a_i)}(-p)^{\nu(k,(p-1)a_i)},$$ to which one can apply Lemma \ref{lem:nu} directly to get $$\prod_{i=1}^n\frac{\Gamma_p\left (\left \{ a_i-\frac{k}{p-1} \right \} \right )}{\Gamma_p(a_i)}(-p)^{ -\left \lfloor a_i-\frac{k}{p-1}\right \rfloor}\equiv(-1)^{nk}\prod_{i=1}^n(\hat a_i)_k \mod p^{1+\sum_{i=1}^n\nu(k,(p-1)a_i)},$$ as the set $\alpha=\{a_1,\cdots,a_n\}$ is closed under the dash operation under our assumption.
The other part involves $$\frac{\Gamma_p(1-\{b_i\})}{\Gamma_p\left ( 1-\left \{b_i+\frac k{p-1}\right \} \right)}(-p)^{-\left \lfloor \frac{k}{p-1}+\{b_i\} \right \rfloor},$$ which will be divided into two cases.
Firstly, we assume there is an integer $k=(p-1)(1-b_i)$ within $I(\alpha,\beta)$ for some $b_i\notin \Z$, and hence $1-b_i\in \beta$. Thus $(1-b_i)'=1-b_i$ in this case. Note that the multiset obtained from removing all $1$ and $1-b_i$ from $\beta$ is still closed under the dash operation. In this case $\left \lfloor \frac{k}{p-1}+b_i \right \rfloor=1$ and the reciprocal of the above is
\begin{multline*}(-p)^{\left \lfloor \frac{k}{p-1}+\{b_i\} \right \rfloor}\Gamma_p\( 1-\left \{b_i+ \frac{k}{p-1} \right\}\)/\Gamma_p(1-\{b_i\})=(-p)\Gamma_p(1)/\Gamma_p(1-b_i)=p\Gamma_p(0 )/\Gamma_p(1-b_i)\\ \equiv p \Gamma_p(1-b_i+k)/\Gamma_p(1-b_i)= (1-b_i)p\Gamma_p(2-b_i+k)/\Gamma_p(2-b_i) \mod p^2,\end{multline*}where $(1-b_i)p\Gamma_p(2-b_i+k)/\Gamma_p(2-b_i)=(-1)^k(2-b_i)_k$
For the other $k$ in $I(\alpha,\beta)$, which are not of the form $(p-1)(1-b_i)$ with $b_i\notin \Z$, we have $\left \lfloor \frac{k}{p-1}+\{b_i\} \right \rfloor=\nu(k,(p-1)(1-\{b_i\}))$ and $ 1-\left \{b_i+\frac k{p-1}\right \} = \left \{1-\{b_i\}-\frac k{p-1}\right \}$ when $0\le k<p-1$ and $k\neq (p-1)(1-\{b_i\})$. Thus one can also apply Lemma \ref{lem:nu}, as $$\frac{\Gamma_p(1-\{b_i\})}{\Gamma_p\left ( 1-\left \{b_i+\frac k{p-1}\right \} \right)}(-p)^{-\left \lfloor \frac{k}{p-1}+\{b_i\} \right \rfloor}=(-p)^{-\nu(k,(p-1)(1-\{b_i\}))}\frac{\Gamma_p(1-\{b_i\})}{\Gamma_p\left ( \left \{ 1- \{b_i\} -\frac k{p-1}\right \} \right)} .$$ Similarly to the case dealing with $a_i$, these terms contribute $(-1)^k(\breve b_i)_k$ in the denominators. \end{proof}
Theorem \ref{lem:1} is a generic congruence, which can be compared with Theorem \ref{thm:Dwork} in the ordinary case when $m=s=1$. We now give two examples and state a corresponding numeric supercongruence modulo $p^5$.
\begin{example}\label{eg:1} Let $\alpha=\{\frac12,\frac12,\frac12,\frac12,\frac13,\frac 23\}$ and $\beta=\{1,1,1,1,\frac16,\frac 56\}$; then $s(\alpha,\beta)=-1$, $w(\alpha,\beta)=6$, and the bottom interval is $I(\alpha,\beta)=\left [\frac{p-1}6,\frac{p-1}3\right ]$. Thus $\hat \alpha=\alpha$ and $\breve \beta=\{1,1,1,1,\frac76,\frac 56\}$. In this case Theorem \ref{lem:1} says that for each prime $p\nmid M(\alpha,\beta;\lambda)$ \begin{equation}\label{eq:6}p\cdot \,\pFq{6}{5}{\frac 12&\frac 12&\frac 12&\frac 12&\frac 13&\frac 23}{&1&1&1&\frac76&\frac56}{\lambda }_{p-1}\equiv p\cdot H_p(\alpha,\beta;\lambda) \mod p. \end{equation} We plot the value of $e(pk)$ in the following graph, with $k$ ranging from 0 to 1.
\begin{center} \includegraphics[scale=0.3,origin=c]{Slope2.png}
{$e(pk)$ values for $\alpha=\{\frac12,\frac12,\frac12,\frac12,\frac13,\frac 23\}$, $\beta=\{1,1,1,1,\frac16,\frac 56\}$ }
\end{center}
\end{example}
\begin{example}
If $\alpha=\{\frac12,\frac12,\frac16,\frac 56\}$ and $\beta=\{1,1,\frac13,\frac 23\}$, then $s(\alpha,\beta)=0$, $w(\alpha,\beta)=2$, and the bottom interval is $I(\alpha,\beta)=\left[0,\frac{p-1}6\right ]\cup \left [\frac{p-1}3,\frac{p-1}2\right].$
\begin{center} \includegraphics[scale=0.3,origin=c]{Slope5.png}
{$e(pk)$ values for $\alpha=\{\frac12,\frac12,\frac16,\frac 56\}$, $\beta=\{1,1,\frac13,\frac 23\}$ }
\end{center} \end{example}
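Using the Python sketch introduced after the definition of $s(\alpha,\beta)$, $I(\alpha,\beta)$ and $w(\alpha,\beta)$ (and reusing its \texttt{Fraction} import and \texttt{profile} helper), these values can be confirmed numerically; for instance, taking $p=13$ so that all breakpoints are integers:
\begin{verbatim}
alpha = [Fraction(1, 2), Fraction(1, 2), Fraction(1, 6), Fraction(5, 6)]
beta = [Fraction(1, 1), Fraction(1, 1), Fraction(1, 3), Fraction(2, 3)]
s, w, bottom = profile(alpha, beta, 13)
print(s, w, bottom)  # prints: 0 2 [0, 1, 2, 4, 5, 6]
\end{verbatim}
The bottom set $\{0,1,2\}\cup\{4,5,6\}$ is exactly $\left[0,\frac{p-1}6\right]\cup\left[\frac{p-1}3,\frac{p-1}2\right]$ for $p=13$.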
Now we turn our attention to supercongruences. Note \eqref{eq:6} is a generic modulo $p$ congruence which holds for $\lambda \in \mathbb Q^\times$. When $\lambda=\pm 1$, the congruence might be stronger. One of them, listed below, corresponds to Example \ref{eg:1} with $\lambda=1$. \begin{conj}\label{conj:1}For each prime $p> 5$,
1) From the perspective of character sums, \begin{equation*}\label{eq:Hp-Conj1}p\cdot H_p\left (\left\{\frac12,\frac12,\frac12,\frac12,\frac13,\frac23\right\},\left \{1,1,1,1,\frac76,\frac56\right\};1\right)=\left ( \frac {2}p\right )a_p(f_{64.6.a.f})+\left ( \frac {-3}p\right )p\cdot a_p(f_{36.4.a.a})+\left (\frac{3}p\right )p^2; \end{equation*}
2) From the perspective of supercongruences, \begin{equation*}\label{eq:1} p\cdot \, \pFq{6}{5}{\frac 12&\frac 12&\frac 12&\frac 12&\frac 13&\frac 23}{&1&1&1&\frac76&\frac56}{1}_{p-1}\equiv \left ( \frac {2}p\right )a_p(f_{64.6.a.f}) \mod p^5, \end{equation*}where $\left(\frac{\cdot}p\right)$ is the Legendre symbol. \end{conj}The second claim is a refinement of \eqref{eq:6} and is comparable with Mortenson's conjecture \eqref{eq:Mortenson}.
We will explain how it is found and list a few other cases in the last section. Note that Z.-W. Sun has many open conjectures on congruences. Interested readers are referred to \cite{Sun}.
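Congruences of this type can also be probed numerically without \texttt{Magma}: the truncated hypergeometric sum is a rational number that can be computed exactly and then reduced modulo a prime power. The short Python sketch below (an illustration only; the helper names are ours, we read the truncation $(\cdot)_{p-1}$ as the sum over $0\le k\le p-1$, and Python~3.8+ is assumed for the modular inverse) computes the left-hand side of the second claim of Conjecture \ref{conj:1} modulo $p^5$; the resulting residues can then be compared with the Hecke eigenvalues $a_p(f_{64.6.a.f})$, e.g. as obtained from the LMFDB or from \texttt{Magma}.
\begin{verbatim}
from fractions import Fraction

def rising(a, k):
    # Pochhammer symbol (a)_k, computed exactly
    out = Fraction(1)
    for j in range(k):
        out *= a + j
    return out

def truncated_F(alpha, beta, lam, K):
    # sum_{k=0}^{K} prod_i (a_i)_k / prod_j (b_j)_k * lam^k ;
    # beta lists all lower parameters including the 1's, so (1)_k = k! is included
    total = Fraction(0)
    for k in range(K + 1):
        term = Fraction(lam) ** k
        for a in alpha:
            term *= rising(a, k)
        for b in beta:
            term /= rising(b, k)
        total += term
    return total

def reduce_mod(x, m):
    # residue of a rational x whose denominator is coprime to m (Python 3.8+)
    return x.numerator * pow(x.denominator, -1, m) % m

p = 7
alpha = [Fraction(1, 2)] * 4 + [Fraction(1, 3), Fraction(2, 3)]
beta = [Fraction(1, 1)] * 4 + [Fraction(7, 6), Fraction(5, 6)]
lhs = p * truncated_F(alpha, beta, 1, p - 1)
print(reduce_mod(lhs, p**5))  # to be compared with (2/p) a_p(f_{64.6.a.f}) mod p^5
\end{verbatim}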
\section{From classical hypergeometric formula to Conjecture \ref{conj:1}}
A key motivation for both \cite{WIN3X} and the present article is to turn classical hypergeometric formulas into useful geometric guidance. For instance, the following formula of Whipple (see Theorem 3.4.4 of the book \cite{AAR} by Andrews, Askey and Roy) holds when both sides terminate. \begin{multline}\label{eq:1/2-6} \pFq{7}{6}{a&1+\frac a2&c&d&e&f&g}{&\frac a2& 1+a-c&1+a-d&1+a-e&1+a-f&1+a-g}{1}\\= \frac{\Gamma(1+a-e)\Gamma(1+a-f)\Gamma(1+a-g)\Gamma(1+a-e-f-g)}{\Gamma(1+a)\Gamma(1+a-f-g)\Gamma(1+a-e-f)\Gamma(1+a-e-g)}\\ \times\pFq{4}{3}{a&e&f&g}{&e+f+g-a&1+a-c&1+a-d}{1} \end{multline}
In the finite field analogue, the $_7F_6$ is reducible as $1+\frac a2$ and $\frac a2$ correspond to the same multiplicative character. Thus the $_7F_6$ is linked to \begin{equation}\label{eq:6F5} \pFq{6}{5}{a&c&d&e&f&g}{& 1+a-c&1+a-d&1+a-e&1+a-f&1+a-g}{1}. \end{equation}
We next pick $a,c,d,e,f,g$ such that the hypergeometric data for \eqref{eq:6F5} and for the $_4F_3$ are both defined over $\mathbb Q$.
For example, we can let $a=c=d=e=f=g=\frac 12$, so the hypergeometric datum for \eqref{eq:6F5} is $$ \left \{ \left \{\frac 12,\frac 12,\frac12,\frac12,\frac 12,\frac 12\right \},\{1,1,1,1,1,1\} ;1 \right \}.$$ In this case $H_p(\alpha,\beta;1)$ has an explicit decomposition as follows, which was conjectured by Koike and proved by Frechette, Ono and Papanikolas in \cite{FOP}. For any odd prime $p$ $$H_p(\alpha,\beta;1)=a_p(f_{8.6.a.a})+a_p(f_{8.4.a.a})\cdot p+\left (\frac{-1}p\right)p^2.$$ The factor $a_p(f_{8.4.a.a})\cdot p$ is associated with the right hand side of \eqref{eq:1/2-6} where $p$ corresponds to the Gamma values and the modular form $f_{8.4.a.a}$ arises from the $_4F_3$ as for odd primes $p$ $$H_p\left(\left \{ \frac12,\frac12,\frac 12,\frac 12 \right\},\{1,1,1,1\} ;1\right)=a_p(f_{8.4.a.a})+p.$$ In other words \eqref{eq:1/2-6} implies a decomposition of the character sum corresponding to \eqref{eq:6F5}.
Next we let $a=c=d=e=\frac12$ and $f=\frac 13,g=\frac23$. The datum for \eqref{eq:6F5} is $$H_1=\left \{\left\{\frac12,\frac12,\frac12,\frac12,\frac13,\frac23\right\},\left \{1,1,1,1,\frac76,\frac56\right\};1\right\},$$ the same datum for Conjecture \ref{conj:1}. The datum for the $_4F_3$ is $H_2=\left \{\left\{\frac12,\frac12,\frac13,\frac23\right\},\left \{1,1,1,1\right\};1\right\}.$ By Theorem 2 of \cite{LTYZ}, $$H_p\left(\left\{\frac12,\frac12,\frac13,\frac23\right\},\left \{1,1,1,1\right\};1\right)=a_p(f_{36.4.a.a})+\left (\frac 3p\right)p.$$
Recall the local zeta function for a hypergeometric datum $\{\alpha, \beta;\lambda \}$ defined over $\Z_p$, as follows: $$P_p(\{\alpha,\beta;\lambda \},T):=\exp\left(\sum_{s=1}^\infty \frac{H_{p^s}(\alpha,\beta;\lambda)}sT^s\right)^{(-1)^n},$$ which is known to be a polynomial with constant term 1. Assume $P_p(\{\alpha,\beta;\lambda \},T)=\prod_i(\mu_i T-1)$. When $\lambda=1$ and $n$ is even, the structure of the hypergeometric motive is usually degenerate, in the sense that one of the $\mu_i$'s has smaller absolute value as a complex number. For instance, corresponding to Conjecture \ref{conj:1}, when $p\nmid M$, $P_p(\{\alpha,\beta;1 \},T)$ is a polynomial of degree 5. Among the absolute values of the $\mu_i$, four of them are $p^{5/2}$ and one of them is $p^2$, corresponding to $\left(\frac{-3}p\right)p^2$ in the formula for $H_p$.
In \texttt{Magma}, there is an implemented hypergeometric package by Watkins (see \cite{Watkins} for its documentation). In particular one can use it to compute $P_p(\{\alpha,\beta;\lambda \},T)$. When $\lambda=1$, the implemented command has the singular linear factor removed. For instance, typing the following into \texttt{Magma}
\noindent \texttt{H1 := HypergeometricData([1/2,1/2,1/2,1/2,1/3,2/3],[1,1,1,1,1/6,5/6]);}
\noindent \texttt{EulerFactor(H1,1,5);} (Here we let $p=5$.)
\noindent The output is
\noindent \texttt{ <$9765625*\$.1^4 + 112500*\$.1^3 + 1390*\$.1^2 + 36*\$.1 + 1$>}
Moreover, the negation of the linear coefficient above, $-36$, equals $\left ( \frac {2}5\right )a_5(f_{64.6.a.f})+\left ( \frac {-3}5\right )5\cdot a_5(f_{36.4.a.a})$.
Next we use $p=7$ to illustrate the first claim of Conjecture \ref{conj:1}. Typing
\noindent \texttt{Factorization(EulerFactor(H1,1,7));}
We get the following decomposition
\noindent <$16807*\$.1^2 - 56*\$.1 + 1,1$>,
\noindent <$16807*\$.1^2 + 88*\$.1 + 1, 1$>
To see the role of the second hypergeometric datum $H_2$, use
\noindent \texttt{H2 := HypergeometricData([1/2,1/2,1/3,2/3],[1,1,1,1]);}
\noindent \texttt{Factorization(EulerFactor(H2,1,7));}
We get
\noindent <$343*\$.1^2 - 8*\$.1 + 1, 1$>.
Notice that if we let $f(x)=343x^2 - 8x + 1$ as above, then $f(px)=f(7x)=16807x^2 - 56x + 1$ coincides with the first quadratic factor of \texttt{Factorization(EulerFactor(H1,1,7))}.
The following command in \texttt{Magma} allows us to produce a sequence.
\noindent \texttt{[-Coefficient(EulerFactor(H1,1,p),1)+LegendreSymbol(-3, p)*p*Coefficient(EulerFactor(H2,1,p),1) : p in PrimesUpTo(31) | p ge 7];}
which gives
\noindent \texttt{[$-88, 540, -418, 594, 836, -4104, -594, 4256$]}
It coincides with $\left ( \frac {2}p\right )a_p(f_{64.6.a.f})$ when $p$ varies from 7 to 31, confirming the roles of the two modular forms in the first statement of Conjecture \ref{conj:1}. We then compute the sign of the $p^2$ term and check the second statement numerically.
It is natural to ask whether the parameter modification, i.e. from $\frac 16$ to $\frac 76$, is necessary in Conjecture \ref{conj:1}. Numerically we have, for each prime $p\ge7$,
$$-2p\cdot \, \pFq{6}{5}{\frac 12&\frac 12&\frac 12&\frac 12&\frac 13&\frac 23}{&1&1&1&\frac16&\frac56}{1}_{p-1}\equiv \left ( \frac {2}p\right )a_p(f_{64.6.a.f}) \mod p.$$ The power $p^1$ is sharp for a generic $p$, and the left hand side has an additional factor of $-2$ which can be explained as follows. Letting $a=c=d=\frac 12$, $f=\frac 13$, $g=\frac 23$ and $e=\frac{1-p}2$ in Equation \eqref{eq:1/2-6}, we have \begin{equation*} \pFq{7}{6}{\frac 12&\frac 54&\frac 12&\frac 12&\frac 12&\frac 13&\frac 23}{&\frac 14&1&1&1&\frac76&\frac56}{1}_{p-1}\in \Z_p \end{equation*}which implies $$-2p\cdot \, \pFq{6}{5}{\frac 12&\frac 12&\frac 12&\frac 12&\frac 13&\frac 23}{&1&1&1&\frac16&\frac56}{1}_{p-1}\equiv p\cdot \, \pFq{6}{5}{\frac 12&\frac 12&\frac 12&\frac 12&\frac 13&\frac 23}{&1&1&1&\frac76&\frac56}{1}_{p-1} \mod p.$$
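This relation is easy to test with the truncated-sum sketch given after Conjecture \ref{conj:1} (reusing its \texttt{truncated\_F} and \texttt{reduce\_mod} helpers); for example, with $p=7$:
\begin{verbatim}
p = 7
alpha = [Fraction(1, 2)] * 4 + [Fraction(1, 3), Fraction(2, 3)]
beta_16 = [Fraction(1, 1)] * 4 + [Fraction(1, 6), Fraction(5, 6)]
beta_76 = [Fraction(1, 1)] * 4 + [Fraction(7, 6), Fraction(5, 6)]
lhs = -2 * p * truncated_F(alpha, beta_16, 1, p - 1)
rhs = p * truncated_F(alpha, beta_76, 1, p - 1)
print(reduce_mod(lhs, p) == reduce_mod(rhs, p))  # expected: True
\end{verbatim}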
\begin{remark} Consider the hypergeometric datum $\{\{\frac 12,\frac12,\frac12,\frac12,\frac13,\frac16\},\{1,1,1,1,\frac14,\frac34\},1\}$ defined over $\mathbb Q$, which cannot be obtained from specializing parameters in Equation \eqref{eq:1/2-6}. Numerically, its Euler factors at $p$ are irreducible over $\Z$ for many primes $p$. \end{remark}
\section{More Numeric findings} \subsection{A few other $_6F_5(1)$ cases}
\subsubsection{$\alpha=\{\frac 12,\frac 12,\frac13,\frac23,\frac13,\frac 23\}, \beta=\{1,1,\frac16,\frac56,\frac16,\frac 56\} $}
\begin{center} \includegraphics[scale=0.25,origin=c]{Slope3.png}
{$e(pk)$ for $\alpha=\{\frac 12,\frac 12,\frac13,\frac23,\frac13,\frac 23\}$, $\beta=\{1,1,\frac16,\frac 56,\frac16,\frac 56\}$ }
\end{center}
Here $w(\alpha,\beta)=6$, $s(\alpha,\beta)=-1$ and $I(\alpha,\beta)$ is connected, $\hat \alpha=\alpha$ and $\breve \beta=\{1,1,\frac76,\frac56,\frac76,\frac 56\}$. Similar to the previous discussion, we specify $a=c=\frac12$, $d=f=\frac13,e=g=\frac 23$ in Equation \eqref{eq:1/2-6}. \begin{conj} There is a weight 6 modular form $f_{48.6.a.c}$ such that for all primes $p\ge 7$, $$p^2\cdot \, \pFq{6}{5}{\frac 12&\frac 12&\frac 13&\frac 23&\frac 13&\frac 23}{&1&\frac76&\frac56&\frac76&\frac56}{1}_{p-1}\equiv \left (\frac{-1}p\right)a_p(f_{48.6.a.c}) \mod p^4.$$ \end{conj}
\subsubsection{ $\alpha=\{\frac12,\frac12,\frac12,\frac12,\frac16,\frac 56\}$, $\beta=\{1,1,1,1,\frac13,\frac 23\}$}
\begin{center} \includegraphics[scale=0.25,origin=c]{Slope4.png}
{$e(pk)$ for $\alpha=\{\frac12,\frac12,\frac12,\frac12,\frac16,\frac 56\}$, $\beta=\{1,1,1,1,\frac13,\frac 23\}$ }
\end{center} In this case $w(\alpha,\beta)=4$, $s(\alpha,\beta)=0$, and $I(\alpha,\beta)$ consists of two disjoint intervals. Nevertheless, we proceed similarly to the previous cases. In Equation \eqref{eq:1/2-6}, we let $a=c=d=e=\frac 12$, $f=\frac 16,g=\frac 56$. Numerically we have \begin{conj}For primes $p\ge 5$, with $\alpha=\left \{\frac 12,\frac 12,\frac 12,\frac 12,\frac 16,\frac 56\right \}$ and $\breve \beta=\left \{1,1,1,1,\frac 43,\frac 23\right \}$, we have \[ H_p(\alpha,\breve \beta;1)=\left ( \frac 2p\right)a_p(f_{64.4.a.d})+\left (\frac{-3}p\right)a_p(f_{72.4.a.b})+\left (\frac{3}p\right)p. \] and \begin{equation*}\pFq{6}{5}{\frac 12&\frac 12&\frac 12&\frac 12&\frac 16&\frac 56}{&1&1&1&\frac43&\frac23}{1}_{p-1}\equiv \left ( \frac 2p\right)a_p(f_{64.4.a.d}) \mod p^3.\end{equation*} \end{conj}
\subsection{Some $_4F_3(1)$ cases}
In addition, we found numerically a few $_4F_3(1)$ supercongruences for hypergeometric motives. We first list 6 cases with weights $w(\alpha,\beta)=4$. Numerically they satisfy supercongruences analogous to the statement of Theorem \ref{thm:LTYZ}.
\begin{conj}For each prime $p\ge 7$,
\begin{equation}p\cdot \pFq{4}{3}{\frac12&\frac12&\frac13&\frac23}{&1& \frac54&\frac34}{1}_{p-1}\equiv \left (\frac {3}p\right )a_p(f_{48.4.a.c}) \mod p^3.\end{equation}
\begin{equation}p\cdot \pFq{4}{3}{\frac12&\frac12&\frac13&\frac23}{&1&\frac76&\frac56}{1}_{p-1}\equiv \left (\frac {-1}p\right )a_p(f_{48.4.a.c}) \mod p^3.\end{equation}
\begin{equation}p\cdot \pFq{4}{3}{\frac12&\frac12&\frac14&\frac34}{&1&\frac76&\frac56}{1}_{p-1}\equiv a_p(f_{48.4.a.c}) \mod p^3.\end{equation}
\begin{equation}p\cdot \pFq{4}{3}{\frac12&\frac12&\frac12&\frac12}{&1&\frac43&\frac23}{1}_{p-1}\equiv a_p(f_{24.4.a.a}) \mod p^3.\end{equation}
\begin{equation}p\cdot \pFq{4}{3}{\frac12&\frac12&\frac12&\frac12}{&1&\frac76&\frac56}{1}_{p-1}\equiv a_p(f_{12.4.a.a}) \mod p^3.\end{equation}
\begin{equation}p\cdot \pFq{4}{3}{\frac12&\frac12&\frac12&\frac12}{&1&\frac54&\frac34}{1}_{p-1}\equiv a_p(f_{64.4.a.b}) \mod p^3.\end{equation} \end{conj}
There are two cases with $w(\alpha,\beta)=2$ relating to weight-2 modular forms. They can be obtained using the approach of \cite{LTYZ}.
For each prime $p> 5$, \begin{equation*} \pFq{4}{3}{\frac12&\frac12&\frac16&\frac56}{&1&\frac43&\frac23}{1}_{p-1}\equiv a_p(f_{24.2.a.a}) \mod p.\end{equation*}
\begin{equation*} \pFq{4}{3}{\frac12&\frac12&\frac14&\frac34}{&1&\frac43&\frac23}{1}_{p-1}\equiv \left (\frac{-1}p\right)a_p(f_{24.2.a.a}) \mod p.\end{equation*}
\subsection{Some $_5F_4(-1)$ cases}
Here is another evaluation formula of Whipple, see Theorem 3.4.6 of \cite{AAR}. \begin{multline}\label{eq:Whipple2} \pFq{6}{5}{a&a/2+1&b&c&d&e}{&a/2&a-b+1&a-c+1&a-d+1&a-e+1}{-1}\\ =\frac{\Gamma(a-d+1)\Gamma(a-e+1)}{\Gamma(a+1)\Gamma(a-d-e+1)}\pFq{3}{2}{a-b-c+1&d&e}{&a-b+1&a-c+1}{1} \end{multline}
\subsubsection{$\alpha=\{\frac12,\frac12,\frac12,\frac12,\frac12\},\beta=\{1,1,1,1,1\}$} For this case, $s(\alpha,\beta)=0$, $w(\alpha,\beta)=5$, $I(\alpha,\beta)=\left [0,\frac{p-1}2\right ]$, $\hat \alpha=\alpha,\breve \beta=\beta$.
Letting $a=b=c=d=e=\frac12$ in Equation \eqref{eq:Whipple2}, the left and right hand sides correspond to the hypergeometric data $$H_5=\left \{\left\{\frac12,\frac12,\frac12,\frac12,\frac12\right \},\{1,1,1,1,1\};-1\right \},\quad H_6=\left \{ \left \{\frac12,\frac12,\frac12\right \},\{1,1,1\};1 \right \},$$ respectively, both defined over $\mathbb Q$. We use the following in \texttt{Magma}.
\noindent \texttt{H5 := HypergeometricData([1/2,1/2,1/2,1/2,1/2], [1,1,1,1,1]);}
\noindent \texttt{Factorization(EulerFactor(H5,-1,5));}
We get
\noindent <25*\$.1 + 1, 1>,
\noindent <625*\$.$1^2$ - 34*\$.1 + 1, 1>,
\noindent <625*\$.$1^2$ + 30*\$.1 + 1, 1>
Meanwhile, using
\noindent \texttt{H6 := HypergeometricData([1/2,1/2,1/2], [1,1,1]);}
\noindent \texttt{Factorization(EulerFactor(H6,1,5));}
We get
<25*\$.$1^2$ + 6*\$.1 + 1, 1>
Further using the following we get a sequence denoted by $A_p$ where the subscript $p$ refers to the corresponding prime $p$.
\noindent \texttt{[Coefficient(EulerFactor(H5,-1,p),1)-p*Coefficient(EulerFactor(H6,1,p),1) }
\noindent\texttt{-LegendreSymbol(-3,p)*p$^2$ : p in PrimesUpTo(67) | p ge 7];}
It produces the first few $A_p$ where $p$ ranges from 7 to 67 listed as follows.
\noindent \texttt{[30,42,62,478,-200,128,400,-1922,-2338,2462,-8,4608,3600,5162,-6658,-6728]}
Note that they are not $p$th coefficients of GL(2) Hecke eigenforms, as each odd weight Hecke eigenform with integer coefficients has to admit complex multiplication, and for a CM modular form roughly half of the $p$th coefficients should vanish, which is not the case here. This case should be related to a GL(3) automorphic form, which is a symmetric square of a GL(2) automorphic form, as conjectured by Beukers and Delaygue in \cite{BD}.
Numerically we have for each prime $p\ge 7$
\begin{equation} \pFq{5}{4}{\frac12&\frac12&\frac12&\frac12&\frac12}{&1&1&1&1}{-1}_{p-1}\equiv -A_p \mod p^2. \end{equation} When $p$ is ordinary, it is already shown in \cite{BD} that the left hand side is congruent to the corresponding unit root modulo $p^2$.
\subsubsection{$\{\{\frac12,\frac12,\frac12,\frac13,\frac23\},\{1,1,1,\frac16,\frac56\};-1\}$}
Similarly, letting $a=b=c=\frac12$, $d=\frac 13$, $e=\frac23$ in Equation \eqref{eq:Whipple2}, we get another list $B_p$ using
\noindent \texttt{H7 := HypergeometricData([1/2,1/2,1/2,1/3,2/3], [1,1,1,1/6,5/6]);}
\noindent \texttt{H8 := HypergeometricData([1/2,1/3,2/3], [1,1,1]);}
\noindent \texttt{[Coefficient(EulerFactor(H7,-1,p),1)-p*Coefficient(EulerFactor(H8,1,p),1)}
\noindent \texttt{-LegendreSymbol(3,p)*p$^2$ : p in PrimesUpTo(67) | p ge 7];}
From this we get the first few values of $B_p$, listed as follows
\noindent \texttt{[34,-230,-290,542,588,-576,432,898,-2690,-994, 972, -2304, 8112,-5990,670,6348]}
Numerically for each prime $p\ge 7$ \begin{equation} p\cdot \, \pFq{5}{4}{\frac12&\frac12&\frac12&\frac13&\frac23}{&1&1&\frac76&\frac56}{-1}_{p-1}\equiv -B_p \mod p^2. \end{equation}
\begin{remark} Letting $a=b=c=d=\frac 12$ and $e=\frac 34$ in Equation \eqref{eq:Whipple2}, one relates the hypergeometric datum $\{\{\frac12,\frac12,\frac12,\frac12\},\{1,1,1,1\};-1\}$ to $\{\{\frac 12,\frac12,\frac34\},\{1,1,1\};1\}$, which is not defined over $\mathbb Q$. See (5.15) of \cite{MP} by McCarthy and Papanikolas for the precise relation between the corresponding character sums when $p\equiv 1\mod 4$. Accordingly, we have the following numeric result. When $p\equiv 1\mod 4$ \begin{equation} \pFq{4}{3}{\frac12&\frac12&\frac12&\frac12}{&1&1&1}{-1}_{p-1}\equiv \left(\frac{2}p\right)\frac{\Gamma_p(\frac14)^2}{\Gamma_p(\frac12)}a_p(f_{32.3.c.a}) \mod p^2. \end{equation}
\end{remark}
\end{document} | arXiv |
1 - Oberwolfach Preprints (OWP)
Spectral Sequences in Combinatorial Geometry: Cheeses, Inscribed Sets, and Borsuk-Ulam Type Theorems
OWP2010_08.pdf (598.8Kb)
MFO Scientific Program
Research in Pairs 2009
Oberwolfach Preprints;2010,08
Blagojevic, Pavle V. M.
Blagojevic Dimitrijevic, Aleksandra
McCleary, John
OWP-2010-08
Algebraic topological methods are especially suited to determining the nonexistence of continuous mappings satisfying certain properties. In combinatorial problems it is sometimes possible to define a mapping from a space $X$ of configurations to a Euclidean space $\mathbb{R}^m$ in which a subspace, a discriminant, often an arrangement of linear subspaces $\mathcal{A}$, expresses a desirable condition on the configurations. Add symmetries of all these data under a group $G$ for which the mapping is equivariant. Removing the discriminant leads to the problem of the existence of an equivariant mapping from $X$ to $\mathbb{R}^m$ minus the discriminant. Algebraic topology may be applied to show that no such mapping exists, and hence the original equivariant mapping must meet the discriminant. We introduce a general framework, based on a comparison of Leray-Serre spectral sequences. This comparison can be related to the theory of the Fadell-Husseini index. We apply the framework to: solve a mass partition problem (antipodal cheeses) in $\mathbb{R}^d$, determine the existence of a class of inscribed 5-element sets on a deformed 2-sphere, obtain two different generalizations of the theorem of Dold for the nonexistence of equivariant maps which generalizes the Borsuk-Ulam theorem.
10.14760/OWP-2010-08
urn:nbn:de:101:1-20100601391
Mathematisches Forschungsinstitut Oberwolfach copyright © 2017-2023 | CommonCrawl |
\begin{definition}[Definition:Group of Outer Automorphisms]
Let $G$ be a group.
Let $\Aut G$ be the automorphism group of $G$.
Let $\Inn G$ be the inner automorphism group of $G$.
Let $\dfrac {\Aut G} {\Inn G}$ be the quotient group of $\Aut G$ by $\Inn G$.
Then $\dfrac {\Aut G} {\Inn G}$ is called the '''group of outer automorphisms''' of $G$.
This group $\dfrac {\Aut G} {\Inn G}$ is often denoted $\Out G$.
\end{definition} | ProofWiki |
\begin{document}
\title{On fractional multi-singular Schr\"odinger operators:
positivity and localization of binding}
\begin{abstract}
In this work we investigate positivity properties of nonlocal
Schr\"odinger type operators, driven by the fractional
Laplacian, with multipolar, critical, and locally homogeneous potentials. On one
hand, we develop a criterion that links the positivity of the
spectrum of such operators with the existence of certain positive
supersolutions, while, on the other hand, we study the localization
of binding for this kind of potentials. Combining these two tools
and performing an inductive procedure on the number of poles, we
establish necessary and sufficient conditions for the existence of a
configuration of poles that ensures the positivity of the
corresponding Schr\"odinger operator. \end{abstract}
\noindent {\bf Keywords.} Fractional Laplacian; Multipolar potentials; Positivity Criterion; Localization
of binding.
\noindent {\bf MSC classification:} 35J75, 35R11, 35J10, 35P05.
\section{Introduction}\label{sec:intr} Let $s\in(0,1)$ and $N>2s$. Let us consider $k\geq 1$ real numbers $\lambda_1,\dots,\lambda_k$ (sometimes called \emph{masses}) and $k$ \emph{poles} $a_1,\dots,a_k\in \mathbb R^N$ such that $a_i\neq a_j$ for all $i,j=1,\dots,k,~i\neq j$. The main object of our investigation is the operator \begin{equation}\label{eq:frac_oper}
\mathcal{L}_{\la_1,\dots ,\la_k,a_1,\dots ,a_k}:=(-\De)^s-\sum_{i=1}^{k}\frac{\la_i}{|x-a_i|^{2s}}\qquad\text{in }\mathbb R^N. \end{equation} Here $(-\Delta)^s$ denotes the fractional Laplace operator, which acts on functions $\varphi\in C_c^\infty(\mathbb R^N)$ as
\begin{equation*}
(-\Delta)^s\varphi (x):=
C(N,s)\,\mathrm{P.V.}\,\int_{\mathbb R^N}
\frac{\varphi(x)-\varphi(y)}{\abs{x-y}^{N+2s}}\,\mathrm{d}y
=C(N,s)\lim_{\rho\to 0^+} \int_{\abs{x-y}>\rho}\frac{\varphi(x)-\varphi(y)}{\abs{x-y}^{N+2s}}\,\mathrm{d}y, \end{equation*}
where $\mathrm{P.V.}$ means that the integral has to be seen in the principal value sense and
\[
C(N,s)=\pi^{-\frac{N}{2}}2^{2s}\frac{\Gamma\left(\frac{N+2s}{2}\right)}{\Gamma(2-s)}s(1-s),
\]
with $\Gamma$ denoting the usual Euler's Gamma function. Hereafter, we refer to an operator of the type $(-\Delta)^s-V$ as a
\emph{fractional Schr\"odinger operator} with potential $V$.
One of the reasons of mathematical interest in operators of type \eqref{eq:frac_oper} lies in the criticality of potentials of order $-2s$, which have the same scaling rate as the $s$-fractional Laplacian.
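To make the criticality statement precise, note the elementary scaling identities (recorded here for the reader's convenience): for $\mu>0$ and $u_\mu(x):=u(x/\mu)$, a change of variables gives
\[
\int_{\mathbb R^{2N}}\frac{\abs{u_\mu(x)-u_\mu(y)}^2}{\abs{x-y}^{N+2s}}\, \mathrm{d}x \,\mathrm{d}y=\mu^{N-2s}\int_{\mathbb R^{2N}}\frac{\abs{u(x)-u(y)}^2}{\abs{x-y}^{N+2s}}\, \mathrm{d}x \,\mathrm{d}y
\]
and
\[
\int_{\mathbb R^N}\frac{\abs{u_\mu(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x=\mu^{N-2s}\int_{\mathbb R^N}\frac{\abs{u(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x,
\]
so that the Gagliardo seminorm and the Hardy-type potential term scale with exactly the same power of $\mu$, and neither can be made to dominate the other by rescaling.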
We introduce, on $C_c^\infty(\mathbb R^N)$, the following positive definite bilinear form, associated to $(-\Delta)^s$ \begin{equation}\label{eq:def_scalar}
(u,v)_{\mathcal{D}^{s,2}(\R^N)}:=\frac{1}{2}C(N,s)\int_{\mathbb R^{2N}}\frac{(u(x)-u(y))(v(x)-v(y))}{\abs{x-y}^{N+2s}}\, \mathrm{d}x \,\mathrm{d}y \end{equation} and we define the space $\mathcal{D}^{s,2}(\R^N)$ as the completion of $C_c^\infty(\mathbb R^N)$ with respect to the norm $\norm{\cdot}_{\mathcal{D}^{s,2}(\R^N)}$ induced by the scalar product \eqref{eq:def_scalar}. Moreover, the following quadratic form is naturally associated to the operator $\mathcal{L}_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}$ \begin{equation}\label{eq:quadr_form} \begin{aligned}
Q_{\la_1,\dots ,\la_k,a_1,\dots ,a_k}(u):&=\frac{1}{2}C(N,s)\int_{\mathbb R^{2N}}\frac{\abs{u(x)-u(y)}^2}{\abs{x-y}^{N+2s}}\, \mathrm{d}x \,\mathrm{d}y-\sum_{i=1}^{k}\la_i \int_{\mathbb R^N}\frac{\abs{u(x)}^2}{|x-a_i|^{2s}}\,\mathrm{d}x \\
&=\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^{k}\la_i \int_{\mathbb R^N}\frac{\abs{u(x)}^2}{|x-a_i|^{2s}}\,\mathrm{d}x. \end{aligned} \end{equation} We observe that $Q_{\la_1,\dots ,\la_k,a_1,\dots ,a_k}$ is well-defined
on $\mathcal{D}^{s,2}(\R^N)$ thanks to the validity of the following fractional Hardy inequality proved in \cite{Herbst1977}: \begin{equation}\label{eq:hardy}
\gamma_H \int_{\mathbb R^N}\frac{\abs{u(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x\leq \norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2 \quad\text{for all }u\in\mathcal{D}^{s,2}(\R^N), \end{equation} where the constant \begin{equation*} \gamma_H =\gamma_H(N,s):= 2^{2s}\frac{\Gamma^2\left(\frac{N+2s}{4}\right)}{\Gamma^2\left(\frac{N-2s}{4}\right)} \end{equation*} is optimal and not attained.
One goal of the present paper is to find necessary and sufficient conditions (on the masses $\la_1,\dots,\la_k$) for the existence of a configuration of poles $(a_1,\dots,a_k)$ that guarantees the positivity of the quadratic form \eqref{eq:quadr_form}, extending to the fractional case some results obtained in \cite{Felli2007} for the classical Laplacian. The quadratic form $Q_{\la_1,\dots ,\la_k,a_1,\dots
,a_k}$ is said to be \emph{positive definite} if \begin{equation*} \inf_{u\in\mathcal{D}^{s,2}(\R^N)\setminus \{0\}}\frac{ Q_{\la_1,\dots ,\la_k,a_1,\dots,a_k}(u)}{\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2}>0. \end{equation*} In the case of a single pole (i.e. $k=1$), the fractional Hardy inequality \eqref{eq:hardy} immediately answers the question of positivity: the quadratic form $Q_{\lambda,a}$ is positive definite if and only if $\lambda<\ga_H$. Hence our interest in multipolar potentials is justified by the fact that the location of the poles (in particular the shape of the configuration) could play some role in the positivity of \eqref{eq:quadr_form}. Furthermore, one could expect that some other conditions on the masses may arise when $k>1$. We mention that several authors have approached the problem of multipolar singular potentials, both for the classical Laplacian, see e.g. \cite{berchio,bosi-esteban-dolbeault,zuazua,Almeida2017,faraci,Ferreira2013} and for the fractional case, see \cite{Ferreira2017}.
A fundamental tool in our arguments is the well known Caffarelli-Silvestre extension for functions in $\mathcal{D}^{s,2}(\R^N)$, which allows us to study the nonlocal operator $(-\Delta)^s$ by means of a boundary value problem driven by a local operator in $\mathbb R^{N+1}_+:=\{(t,x):t\in(0,+\infty), x\in \mathbb R^N\}$. We introduce
the space $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, defined as the completion of $C_c^{\infty}(\overline{\mathbb R^{N+1}_+})$ with respect to the norm
\[
\norm{U}_{\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})}:=\left( \int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x \right)^{1/2}.
\]
We have that there exists a well-defined and continuous trace map
\begin{equation}\label{eq:3}
\Tr\colon \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \to \mathcal{D}^{s,2}(\R^N) \end{equation}
which is onto, see, for instance, \cite{Brandle2013}. Let us now consider, for $u\in\mathcal{D}^{s,2}(\R^N)$, the following minimization problem
\begin{equation}\label{eq:min_ext}
\min\left\{ \int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla \Phi}^2\, \mathrm{d}t \,\mathrm{d}x\colon \Phi\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}),~ \Tr \Phi=u \right\}.
\end{equation}
One can prove that there exists a unique function $U\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$
(which we call the \emph{extension} of
$u$) attaining \eqref{eq:min_ext}, i.e.
\begin{equation}\label{eq:min_ext_ineq}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x \\ \leq \int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla \Phi}^2\, \mathrm{d}t \,\mathrm{d}x \end{equation} for all $\Phi \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Tr\Phi=u$.
Furthermore, in \cite{Caffarelli2007} it has been proven that
\begin{equation}\label{eq:caff_silv}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla U\cdot\nabla \Phi \, \mathrm{d}t \,\mathrm{d}x=\kappa_s(u,\Tr\Phi)_{\mathcal{D}^{s,2}(\R^N)}\quad\text{for all }\Phi\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}),
\end{equation}
where
\begin{equation}\label{eq:kappa_s}
\kappa_s:=\frac{\Gamma(1-s)}{2^{2s-1}\Gamma(s)}.
\end{equation} We observe that \eqref{eq:caff_silv} is the variational formulation of the following problem
\begin{equation}\label{eq:strong_ext}
\left\{\begin{aligned}
-\divergence(t^{1-2s}\nabla U)&=0, &&\text{in }\mathbb R^{N+1}_+, \\
-\lim_{t\to 0}t^{1-2s}\frac{\partial U}{\partial t}&=\kappa_s(-\Delta)^s u, && \text{on }\mathbb R^N.
\end{aligned}\right.
\end{equation}
In the classical (local) case, the problem of positivity of Schr\"odinger operators
with multi-singular Hardy-type potentials was addressed in \cite{Felli2007}. In that
article, the authors tackled the problem making use of a
\emph{localization of binding} result that
provides, under certain assumptions, the positivity of the sum of two
positive operators, by translating one of them through a
sufficiently long vector. This argument is based, in turn, on a criterion
which relates the positivity of an operator to the existence of a
positive supersolution, in the spirit of Allegretto-Piepenbrink
Theory (see \cite{Allegretto1974,Piepenbrink1974}). As one can
observe in \cite{Felli2007}, the strong suit of the local case is
that the study of the action of the operator can be substantially
reduced to neighbourhoods of the
singularities. However, this is not possible in the fractional
context due to nonlocal effects:
in the present paper we overcome this issue by taking into consideration
the Caffarelli-Silvestre extension \eqref{eq:strong_ext}, which yields a local formulation of the
problem.
The equivalence between the fractional problem in $\mathbb R^N$ and the Caffarelli-Silvestre extension problem in $\mathbb R^{N+1}_+$ allows us to characterize the coercivity properties of quadratic forms on $\mathcal{D}^{s,2}(\R^N)$ in terms of quadratic forms on $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$. We say that a function $V\in L^1_{\rm loc}(\mathbb R^N)$ satisfies the \emph{form-bounded condition} if \begin{equation}
\tag{$FB$}\label{eq:14}
\sup_{\substack{u\in\mathcal{D}^{s,2}(\R^N)\\u\not\equiv0}}\frac{\int_{\mathbb R^N}|V(x)|u^2(x)\,\mathrm{d}x}{\|u\|_{\mathcal{D}^{s,2}(\R^N)}^2}<+\infty. \end{equation} Let $\mathcal H$ be the class of potentials satisfying the form-bounded condition, i.e. \[ \mathcal H=\{V\in L^1_{\rm loc}(\mathbb R^N):V\text{ satisfies }\eqref{eq:14}\}. \] It is easy to understand that, if $V\in\mathcal H$, then $Vu\in (\mathcal{D}^{s,2}(\R^N))^\star$ for all $u\in\mathcal{D}^{s,2}(\R^N)$ and the quadratic form $u\mapsto
\|u\|_{\mathcal{D}^{s,2}(\R^N)}^2-\int_{\mathbb R^N}Vu^2$ is well defined in $\mathcal{D}^{s,2}(\R^N)$. For all $V\in\mathcal H$ we define \begin{equation}\label{eq:def_inf} \mu(V)=
\inf_{u \in \mathcal{D}^{s,2}(\R^N)\setminus\{0\}}\frac{ \norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2-\int_{\mathbb R^N}V u^2\,\mathrm{d}x}{\displaystyle\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2 } \end{equation} and observe that $\mu(V)>-\infty$. \begin{lemma}\label{lemma:equiv_inf}
Let $V\in\mathcal H$.
Then
\begin{equation}\label{eq:def_mu}
\mu(V)=\inf_{\substack{U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \\ U\nequiv 0}}\frac{\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V |\Tr U|^2\,\mathrm{d}x}{\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x}.
\end{equation}
\end{lemma} In the present paper we will focus our attention on the following class of potentials \begin{multline*}
\Theta:=\Bigg\{ V(x)=\sum_{i=1}^{k}\frac{\la_i
\chi_{B'(a_i,r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty \chi_{\mathbb R^N
\setminus B'_R}(x)}{|x|^{2s}}+W(x)\colon r_i,R>0,~k\in\mathbb N, \\
a_i\in\mathbb R^N,~a_i\neq a_j~\text{for }i\neq j,~
\lambda_i,\lambda_\infty<\ga_H,~W\in L^{N/2s}(\mathbb R^N)\cap
L^\infty(\mathbb R^N)\Bigg\}, \end{multline*} where, for any $r>0$ and $x\in \mathbb R^N$, we denote \[
B'(x,r):=\{y\in\mathbb R^N\colon \abs{y-x}<r\}\quad\text{and}\quad B'_r:=B'(0,r). \] We observe that, when considering a potential $V\in \Theta$, it is not restrictive to assume that the sets $B'(a_i,r_i)$ and $\mathbb R^N\setminus B'_R$ appearing in its representation are mutually disjoint, up to redefining the remainder $W$.
It is easy to see that, for instance, \[
\sum_{i=1}^k\frac{\lambda_i}{\abs{x-a_i}^{2s}}\in\Theta,\quad\text{when
} \lambda_i<\ga_H~\text{for all }i=1,\dots,k\text{ and }\sum_{i=1}^k\lambda_i<\ga_H. \] We observe that any $V \in \Theta$ satisfies the form-bounded condition, i.e. $\Theta\subset\mathcal H$, thanks to the fractional Hardy and Sobolev inequalities stated in \eqref{eq:hardy} and \eqref{eq:sobolev} respectively.
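For the reader's convenience, we sketch the elementary estimate behind this observation (the constants are not optimized): for every $u\in\mathcal{D}^{s,2}(\R^N)$, applying the Hardy inequality \eqref{eq:hardy} centered at each pole (the $\mathcal{D}^{s,2}(\R^N)$-norm is invariant under translations) and H\"older's inequality together with the Sobolev inequality \eqref{eq:sobolev} for the term involving $W$, we obtain
\[
\int_{\mathbb R^N}|V(x)|u^2(x)\,\mathrm{d}x\leq
\frac{1}{\ga_H}\bigg(\sum_{i=1}^{k}|\la_i|+|\la_\infty|\bigg)\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2
+\frac{1}{S}\norm{W}_{L^{N/2s}(\mathbb R^N)}\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2,
\]
so that \eqref{eq:14} is indeed satisfied.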
Our first main result is a criterion that provides the equivalence
between the positivity of $\mu(V)$ for potentials $V\in\Theta$
and the existence of a positive supersolution to a certain (possibly
perturbed) problem. This criterion is reminiscent
of the Allegretto-Piepenbrink Theory, developed in 1974 in
\cite{Allegretto1974,Piepenbrink1974} (see also
\cite{Agmon1983,Agmon1985,Moss1978,Pinchover2016}). As far as we
know, the result contained in the following lemma is new in the
nonlocal framework; nevertheless,
some tools from the Allegretto-Piepenbrink Theory have been used in
\cite{Frank2008a,Moroz2012} to prove some Hardy-type fractional inequalities.
\begin{lemma}[Positivity Criterion]\label{Criterion}
Let $V=\sum_{i=1}^k \frac{\la_i
\chi_{B'(a_i,r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty \chi_{\mathbb R^{N}
\setminus B'_R}(x)}{|x|^{2s}}+W(x) \in \Theta$ and let
$\tilde{V}\in L^\infty_{\textup{loc}}(\mathbb R^N\setminus\{a_1,\dots,a_k\})$
be such that $V\leq \tilde{V}\leq \abs{V}$ a.e. in $\mathbb R^N$. The following two assertions hold true.
\begin{itemize}
\item[(I)] Assume that there exist some $\varepsilon>0$ and
a function $\Phi
\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi>0$ in $\overline{\mathbb R^{N+1}_+} \setminus
\{(0,a_1), \dots, (0,a_k)\}$, $\Phi\in C^{0}\left(\overline{\mathbb R^{N+1}_+} \setminus
\{(0,a_1),\dots, (0,a_k)\}\right)$, and
\begin{equation}\label{eq:1}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x \geq \kappa_s \int_{\mathbb R^{N}}
(V+\varepsilon \tilde{V})\Tr \Phi\Tr U\,\mathrm{d}x,
\end{equation}
for all $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}),~ U\geq 0$ a.e. in $\mathbb R^{N+1}_+$. Then
\begin{equation}\label{eq:criterion_inf}
\mu(V)\geq \varepsilon/(\varepsilon+1).
\end{equation}
\item[(II)] Conversely, assume that $\mu(V)>0$. Then
there exist $\varepsilon>0$ (not depending on $\tilde{V}$)
and $\Phi~\!\!\in~\!\!\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi>0$ in
$\mathbb R^{N+1}_+$,
$\Phi\in C^{0}\left(\overline{\mathbb R^{N+1}_+} \setminus
\{(0,a_1),\dots, (0,a_k)\}\right)$,
$\Phi\geq 0$ in
$\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_1), \dots,
(0,a_k)\}$,
and \eqref{eq:1} holds for every $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$
satisfying $U\geq 0$ a.e. in $\mathbb R^{N+1}_+$. If, in
addition, we assume that $V$ and $\tilde{V}$ are
locally H\"older continuous in
$\mathbb R^N\setminus\{a_1,\dots,a_k\}$, then $\Phi>0$ in
$\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_1), \dots,
(0,a_k)\}$.
\end{itemize} \end{lemma}
In order to use statement (I) to obtain positivity of a given Schr\"odinger operator with potential in $\Theta$, it is crucial to exhibit a weak supersolution to the corresponding Schr\"odinger equation, i.e. a function satisfying \eqref{eq:1}, which is \emph{strictly
positive} outside the poles. Nevertheless, the application of maximum principles to prove positivity of solutions to singular/degenerate extension problems is more delicate than in the classic case, due to regularity issues (see the Hopf type
principle proved in \cite[Proposition 4.11]{Cabre2014} and recalled
in Proposition \ref{prop:cabre_sire} of the Appendix). For this
reason, in order to apply the above criterion in Sections
\ref{sec:perturbation} and \ref{sec:localization}, we will develop an
approximation argument introducing a class of more regular potentials
(see \eqref{eq:def_Theta_s}).
The following theorem, whose proof heavily relies on Lemma \ref{Criterion}, fits in the theory of \emph{Localization of Binding}, whose aim is to study the lowest eigenvalue of Schr\"odinger operators of the type \[
-\Delta + V_1+V_2(\cdot-y),\quad y\in \mathbb R^N, \] in relation to the potentials $V_1$ and $V_2$ and to the translation vector $y\in\mathbb R^N$. The case in which $V_1$ and $V_2$ belong to the Kato class has been studied in \cite{Pinchover1995}, while Simon in \cite{Simon1980} analyzed the case of compactly supported potentials; singular inverse square potentials were instead considered in \cite{Felli2007}.
Our result concerns the fractional case and provides sufficient conditions on the potentials and on the length of the translation for the positivity of the corresponding fractional Schr\"odinger operator.
\begin{theorem}[Localization of Binding]\label{thm:separation} Let \begin{gather*}
V_1(x)=\sum_{i=1}^{k_1}\frac{\la_i^1\chi_{B'(a_i^1,r_i^1)}(x)}{|x-a_i^1|^{2s}}+\frac{\la_\infty^1 \chi_{\mathbb R^N \setminus B'_{R_1}}(x)}{|x|^{2s}}+W_1(x) \in \Theta, \\
V_2(x)=\sum_{i=1}^{k_2}\frac{\la_i^2\chi_{B'(a_i^2,r_i^2)}(x)}{|x-a_i^2|^{2s}}+\frac{\la_\infty^2 \chi_{\mathbb R^N \setminus B'_{R_2}}(x)}{|x|^{2s}}+W_2(x) \in \Theta, \end{gather*} and assume $\mu(V_1),\mu(V_2)>0$ and $\la_\infty^1+\la_\infty^2<\ga_H$. Then there exists $R>0$ such that, for every $y \in \mathbb R^N\setminus \overline{B'_R}$, \[
\mu(V_1(\cdot)+V_2(\cdot-y))>0. \] \end{theorem} Combining the previous theorem with an inductive procedure on the number of poles $k$, we obtain a necessary and sufficient condition for positivity of the operator \eqref{eq:frac_oper}.
\begin{theorem}\label{theorem} Let $(\la_1, \dots \la_k) \in \mathbb R^k.$ Then \begin{equation}\label{lambda-i} \la_i<\ga_H \quad\text{for all }i=1,\dots, k, \quad\text{and}\quad \sum_{i=1}^{k}\la_i<\ga_H \end{equation}
is a necessary and sufficient condition for the existence of a configuration of poles $(a_1,\dots,a_k)$ such that the quadratic form $Q_{\la_1,\dots,\la_k,a_1,\dots,a_k}$ associated to the operator $\mathcal{L}_{\la_1,\dots,\la_k,a_1,\dots,a_k}$ is positive definite. \end{theorem}
Besides the interest in the existence of a configuration of poles making $Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}$ positive definite, one can search for a condition on the masses $\lambda_1,\dots,\lambda_k$ that guarantees the positivity of this quadratic form for every configuration of poles; in this direction, an answer is given by the following theorem (we refer to \cite[Proposition 1.2]{FT} for an analogous result in the classical case of the Laplacian with multipolar inverse square potentials).
\begin{theorem}\label{Prop-2} Let $t^+:=\max\{0,t\}$. If \begin{equation}\label{eq:2}
\sum_{i=1}^k\lambda_i^+<\ga_H, \end{equation} then the quadratic form $Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}$ is positive definite for all $a_1,\dots,a_k\in \mathbb R^N$. Conversely,~if \[
\sum_{i=1}^k\lambda_i^+>\ga_H \] then there exists a configuration of poles $(a_1,\dots,a_k)$ such that $Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}$ is not positive definite. \end{theorem}
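The first assertion follows from a direct application of the Hardy inequality, which we record here for completeness: if \eqref{eq:2} holds, then for every $u\in\mathcal{D}^{s,2}(\R^N)$
\[
Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}(u)\geq \norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^{k}\lambda_i^+\int_{\mathbb R^N}\frac{|u(x)|^2}{|x-a_i|^{2s}}\,\mathrm{d}x\geq \bigg(1-\frac{1}{\ga_H}\sum_{i=1}^{k}\lambda_i^+\bigg)\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2,
\]
where \eqref{eq:hardy} is applied centered at each pole $a_i$; the constant in the last bound is positive by \eqref{eq:2}, which yields the claimed positive definiteness. The converse statement requires a different argument and is proved in Section \ref{sec:proof-theor-refpr}.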
Finally, it is natural to ask whether $\mu(V)$, defined as an infimum in \eqref{eq:def_mu}, is attained or not. In the case of a single pole, it is known that the infimum is not achieved, see e.g. \cite{Frank2008a}; however, when dealing with multiple singularities, the outcome can be different. Indeed, for $V$ in the class $\Theta$, we have that $\mu(V) \leq 1-\frac{1}{\ga_H}\max_{i=1,\dots,k,\infty}\la_i$, see Lemma \ref{Lem-1}, and the infimum is attained in the case of strict inequality, as established in the following proposition.
\begin{proposition}\label{Prop-1}
If $V \in\Theta$ is such that
\begin{equation}\label{Eq}
\mu(V)<1-\frac{1}{\ga_H}\max\{0,\la_1,\dots, \la_k,\la_\infty \},
\end{equation}
then $\mu(V)$ is attained. \end{proposition}
The paper is organized as follows. In Section \ref{sec:preliminaries} we recall some known results about spaces involving fractional derivatives and weighted spaces in $\mathbb R^{N+1}_+$
and we prove some estimates needed in the rest of the article. In Section \ref{sec:proof-theor-refpr} we prove Theorem \ref{Prop-2}.
In Section \ref{sec:criterion} we prove the positivity criterion, i.e. Lemma \ref{Criterion}, while in Section \ref{sec:shattering} we look for upper and lower bounds of the quantity $\mu(V)$. In Section \ref{sec:perturbation} we investigate the persistence of the positivity of $\mu(V)$, when the potential $V$ is subject to a perturbation far from the origin or close to a pole. Section \ref{sec:localization} is devoted to the proof of Theorem \ref{thm:separation}, that is the primary tool used in the proof of Theorem \ref{theorem}, pursued in Section \ref{sec:thm}. Finally, in Section \ref{sec:prop} we prove Proposition \ref{Prop-1}.
\noindent{\bf Notation.} We list below some notation used throughout the paper: \begin{itemize}
\item[-] $B'(x,r):=\{y\in\mathbb R^N\colon \abs{x-y}<r\}$, $B'_r:=B'(0,r)$ for the balls in $\mathbb R^N$;
\item[-] $\mathbb R^{N+1}_+:=\{(t,x)\in\mathbb R^{N+1}\colon t>0,~x\in\mathbb R^N\}$;
\item[-] $B_r^+:=\{z\in \mathbb R^{N+1}_+\colon \abs{z}<r\}$ for the half-balls in $\mathbb R^{N+1}_+$;
\item[-] $\mathbb{S}^N:=\{z\in \mathbb R^{N+1}\colon \abs{z}=1\}$ is the unit $N$-dimensional sphere;
\item[-] $\mathbb{S}^N_+:=\mathbb{S}^N\cap \mathbb R^{N+1}_+$ and $\mathbb{S}^{N-1}:=\partial \mathbb{S}^N_+$;
\item[-] $S_r^+:=\{r\theta\colon \theta \in \mathbb{S}^N_+\}$ denotes a positive half-sphere with arbitrary radius $r>0$;
\item[-] $\,\mathrm{d}S$ and $\,\mathrm{d}S'$ denote the volume element in $N$ and $N-1$ dimensional spheres, respectively;
\item[-] for $t\in\mathbb R$, $t^+:=\max\{0,t\}$ and $t^-:=\max\{0,-t\}$.
\end{itemize}
\section{Preliminaries}\label{sec:preliminaries}
In this section we clarify some details about the spaces involved in our exposition and their relation with the fractional Laplace operator, we recall basic known facts and we prove some introductory results.
\subsection{Preliminaries on Fractional Sobolev Spaces and weighted spaces in \texorpdfstring{$\mathbb R^{N+1}_+$}{ }} Let us consider the homogenous Sobolev space $\mathcal{D}^{s,2}(\R^N)$ defined in Section \ref{sec:intr}. Thanks to the Hardy-Littlewood-Sobolev inequality \begin{equation}\label{eq:sobolev}
S\norm{u}_{L^{2^*_s}(\mathbb R^N)}^2\leq \norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2, \end{equation} that holds for all functions $u\in \mathcal{D}^{s,2}(\R^N)$, we have that $\mathcal{D}^{s,2}(\R^N)$ is continuously embedded in $L^{2^*_s}(\mathbb R^N)$, where $2^*_s:=\frac{2N}{N-2s}$ is the critical Sobolev exponent. Combining \eqref{eq:min_ext_ineq} and \eqref{eq:caff_silv} with \eqref{eq:hardy} and \eqref{eq:sobolev}, we obtain, respectively \begin{equation}\label{eq:hardy_ext}
\kappa_s \ga_H \int_{\mathbb R^N}\frac{\abs{\Tr U}^2}{\abs{x}^{2s}}\,\mathrm{d}x\leq \int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x\quad \text{for all }U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \end{equation} and \begin{equation}\label{eq:sobolev_ext}
\kappa_s S\norm{\Tr U}_{L^{2^*_s}(\mathbb R^N)}^2\leq \int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x\quad \text{for all }U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}). \end{equation} Moreover, just as a consequence of \eqref{eq:min_ext_ineq} and \eqref{eq:caff_silv}, we have that \begin{equation}\label{eq:dir_princ}
\kappa_s\norm{\Tr U}_{\mathcal{D}^{s,2}(\R^N)}^2\leq
\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x \quad
\text{for all }U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}). \end{equation}
Now we state a result providing a compact trace embedding, which will be useful in the following. \begin{lemma}\label{lemma:compact_emb}
Let $p\in L^{N/2s}(\mathbb R^N)$. If $(U_n)_n\subseteq \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ and $U\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ are such that
$U_n\rightharpoonup U$ weakly in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ as $n\to\infty$, then
$\int_{\mathbb R^N}p|\Tr U_n|^2\,\mathrm{d}x\to \int_{\mathbb R^N}p|\Tr U|^2\,\mathrm{d}x$ as $n\to\infty$.
In
particular, if $p>0$ a.e. in $\mathbb R^N$,
the trace operator
\[
\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \hookrightarrow L^2(\mathbb R^N;p\,\mathrm{d}x)
\]
is compact, where $L^2(\mathbb R^N;p\,\mathrm{d}x):=\left\{u\in L^1_{\textup{loc}}(\mathbb R^N)\colon \int_{\mathbb R^N}p\abs{u}^2\,\mathrm{d}x<\infty \right\}$. \end{lemma} \begin{proof}
Let $(U_n)_n\subseteq \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ and $U\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ be such that
$U_n\rightharpoonup U$ weakly in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ as $n\to\infty$.
Then, in view of the continuity of the trace operator
\eqref{eq:3} and classical compactness results for fractional
Sobolev spaces (see e.g. \cite[Theorem
7.1]{valdinoci-dinezza-palatucci}), we have that
$\Tr U_n\to \Tr U$ in $L^2_{\rm loc}(\mathbb R^N)$ and a.e. in $\mathbb R^N$.
Furthermore, by continuity of the trace operator
\eqref{eq:3} and \eqref{eq:sobolev_ext}, we have that, for every
$\omega\subset\mathbb R^N$ measurable,
\[
\int_{\omega}|p||\Tr(U_n-U)|^2\,\mathrm{d}x\leq C\|p\|_{L^{N/(2s)}(\omega)}
\|U_n-U\|^2_{\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})}
\]
for some constant $C>0$ independent of $\omega$ and $n$. Since $(U_n)_n$ is bounded in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ and $\|p\|_{L^{N/(2s)}(\omega)}$ is small whenever either the measure of $\omega$ is small or $\omega$ lies outside a sufficiently large ball, the above estimate provides the uniform integrability and tightness required to apply Vitali's Convergence Theorem; hence we conclude that
$\lim_{n\to\infty}\int_{\mathbb R^N}|p||\Tr(U_n-U)|^2\,\mathrm{d}x=0$, from which the conclusion follows. \end{proof} We finally introduce a class of weighted Lebesgue and Sobolev spaces, on bounded open Lipschitz sets $\omega\subseteq \mathbb R^{N+1}_+$ in the upper half-space. Namely, we define \[
L^2(\omega;t^{1-2s}):=\left\{U\colon \omega\to \mathbb R~\text{measurable: such that }\int_{\omega}t^{1-2s}\abs{U}^2\, \mathrm{d}t \,\mathrm{d}x <\infty \right\} \] and the weighted Sobolev space \[
H^1(\omega;t^{1-2s}):=\left\{U\in L^2(\omega;t^{1-2s})\colon \nabla U\in L^2(\omega;t^{1-2s}) \right\}. \] From the fact that the weight $t^{1-2s}$ belongs to the second Muckenhoupt class (see, for instance, \cite{Fabes1982,Fabes1990}) and thanks to well known weighted inequalities, one can prove that the embedding $H^1(\omega;t^{1-2s})\hookrightarrow L^2(\omega;t^{1-2s})$ is compact, see for details \cite[Proposition 7.1]{FF-preprint} and \cite{OO}. In addition, in the particular case of $\omega=B_r^+$ one can prove that the trace operators \begin{gather*}
H^1(B_r^+;t^{1-2s})\hookrightarrow L^2(B_r'),\quad
H^1(B_r^+;t^{1-2s})\hookrightarrow L^2(S_r^+;t^{1-2s}), \end{gather*}
are well defined and compact, where
\[
L^2(S_r^+;t^{1-2s}):=\left\{\psi \colon S_r^+\to
\mathbb R~\text{measurable}: \int_{S_r^+}t^{1-2s}\abs{\psi }^2\,\mathrm{d}S <\infty \right\}.
\]
\subsection{The Angular Eigenvalue Problem} Let us consider, for any $\lambda\in\mathbb R$, the problem \begin{equation}\label{eq:angular_eigen}
\begin{bvp}
-\divergence_{\mathbb{S}^N} (\theta_1^{1-2s}\nabla_{\mathbb{S}^N} \psi)&=\mu\theta_1^{1-2s}\psi, && \text{in }\mathbb{S}^{N}_+ ,\\
-\lim_{\theta_1\to 0^+}\theta_1^{1-2s}\nabla_{\mathbb{S}^N}\psi\cdot \bm{e}_1&=\kappa_s\lambda\psi, &&\text{on }\mathbb{S}^{N-1},
\end{bvp} \end{equation} where $\bm{e}_1=(1,0,\dots,0)\in \mathbb R^{N+1}_+$ and $\nabla_{\mathbb{S}^N}$ denotes the gradient on the unit $N$-dimensional sphere $\mathbb{S}^N$. In order to give a variational formulation of \eqref{eq:angular_eigen} we introduce the following Sobolev space \[
H^1(\mathbb{S}^N_+;\theta_1^{1-2s}):=\left\{\psi\in
L^2(\mathbb{S}^N_+;\theta_1^{1-2s})\colon
\int_{\mathbb{S}_N^+}\theta_1^{1-2s}|\nabla_{\mathbb{S}^N}\psi|^2\,dS<+\infty \right\}. \]
We say that $\psi\in H^1(\mathbb{S}^N_+;\theta_1^{1-2s})$ and $\mu\in\mathbb R$ weakly solve \eqref{eq:angular_eigen} if \begin{equation*}
\int_{\mathbb{S}^N_+}\theta_1^{1-2s}\nabla_{\mathbb{S}^N}\psi(\theta)\cdot\nabla_{\mathbb{S}^N}\varphi(\theta)\,\mathrm{d}S=\mu\int_{\mathbb{S}^N_+}\theta_1^{1-2s}\psi(\theta)\varphi(\theta)\,\mathrm{d}S+\kappa_s\lambda\int_{\mathbb{S}^{N-1}}\psi(0,\theta')\varphi(0,\theta')\,\mathrm{d}S' \end{equation*} for all $\varphi\in H^1(\mathbb{S}^N_+;\theta_1^{1-2s})$.
By standard spectral arguments, if $\lambda<\gamma_H$, there exists a diverging sequence of real eigenvalues of problem \eqref{eq:angular_eigen} \begin{equation*}
\mu_1(\lambda)\leq \mu_2(\lambda)\leq \cdots\leq \mu_n(\lambda)\leq \cdots \end{equation*} Moreover, each eigenvalue has finite multiplicity (which is counted in the enumeration above) and $\mu_1(\lambda)>-\left( \frac{N-2s}{2} \right)^2$ (see \cite[Lemma 2.2]{Fall2014}). For every $n\geq 1$ we choose an eigenfunction $\psi_n\in H^1(\mathbb{S}^N_+;\theta_1^{1-2s})\setminus\{0\}$, corresponding to $\mu_n(\lambda)$, such that $\int_{\mathbb{S}^N_+}\theta_1^{1-2s}\abs{\psi_n}^2\,\mathrm{d}S=1$. In addition, we choose the family $\{\psi_n\}_n$ in such a way that it is an orthonormal basis of $L^2(\mathbb{S}^N_+;\theta_1^{1-2s})$. We refer to \cite{Fall2014} for further details.
In \cite{Fall2014} the following implicit characterization of $\mu_1(\lambda)$ is given. For any $\alpha\in\left( 0,\frac{N-2s}{2} \right)$ we define \begin{equation}\label{eq:def_lambda_alpha}
\Lambda(\alpha):=2^{2s}\frac{\Gamma\left(\frac{N+2s+2\alpha}{4}\right)\Gamma\left(\frac{N+2s-2\alpha}{4}\right)}{\Gamma\left(\frac{N-2s+2\alpha}{4}\right)\Gamma\left(\frac{N-2s-2\alpha}{4}\right)}. \end{equation} It is known (see e.g. \cite{Frank2008a} and \cite[Proposition 2.3]{Fall2014}) that the map $\alpha\mapsto \Lambda(\alpha)$ is continuous and monotone decreasing. Moreover \begin{equation}\label{eq:8}
\lim_{\alpha\to 0^+}\Lambda(\alpha)=\gamma_H,\qquad \lim_{\alpha\to \frac{N-2s}{2}}\Lambda(\alpha)=0, \end{equation} see Figure 1. Furthermore, in \cite[Proposition 2.3]{Fall2014} it is proved that, for every $\alpha\in\left( 0,\frac{N-2s}{2} \right)$, \begin{equation}\label{eq:4}
\mu_1(\Lambda(\alpha))=\alpha^2-\left( \frac{N-2s}{2} \right)^{\!2}. \end{equation} \begin{figure}
\caption{The function $\Lambda$}
\label{fig:1}
\end{figure} In particular, for every $\lambda\in(0,\gamma_H)$ there exists one and only one $\alpha\in \big(0,\frac{N-2s}{2}\big)$ such that $\Lambda(\alpha)=\lambda$ and hence $\mu_1(\lambda)=\alpha^2-\big (\frac{N-2s}{2}\big)^2<0$.
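For later convenience we also observe that, since $\Lambda$ is decreasing and, as just observed, a bijection of $\left( 0,\frac{N-2s}{2} \right)$ onto $(0,\gamma_H)$, identity \eqref{eq:4} can be rewritten as
\begin{equation*}
\mu_1(\lambda)=\big(\Lambda^{-1}(\lambda)\big)^2-\left( \frac{N-2s}{2} \right)^{\!2}\in\left(-\left( \frac{N-2s}{2} \right)^{\!2},0\right)\qquad\text{for every }\lambda\in(0,\gamma_H),
\end{equation*}
where $\Lambda^{-1}\colon(0,\gamma_H)\to\left( 0,\frac{N-2s}{2} \right)$ denotes the inverse of $\Lambda$; in particular, the map $\lambda\mapsto\mu_1(\lambda)$ is decreasing on $(0,\gamma_H)$.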
We recall the following result from \cite {Fall2018}.
\begin{lemma}[{\cite[Lemma 4.1]{Fall2018}}]\label{lemma:func_fall}
For every $\alpha\in\left( 0,\frac{N-2s}{2} \right)$ there exists
$\Upsilon_\alpha:\overline{\mathbb R^{N+1}_+}\setminus\{0\}\to\mathbb R$ such that $\Upsilon_\alpha$ is locally H\"older
continuous in $\overline{\mathbb R^{N+1}_+}\setminus\{0\}$,
$\Upsilon_\alpha>0$ in $\overline{\mathbb R^{N+1}_+}\setminus\{0\}$, and
\begin{equation}\label{eq:phi_alpha}
\begin{bvp}
-\divergence(t^{1-2s}\nabla \Upsilon_\alpha)&=0, && \text{in }\mathbb R^{N+1}_+, \\
\Upsilon_\alpha(0,x)&=\abs{x}^{-\frac{N-2s}{2}+\alpha}, && \text{on }\mathbb R^N, \\
-\lim_{t\to 0}t^{1-2s}\frac{\partial \Upsilon_\alpha}{\partial t}&=\kappa_s\Lambda(\alpha)\abs{x}^{-2s}\Upsilon_\alpha, && \text{on }\mathbb R^N,
\end{bvp}
\end{equation}
in a weak sense. Moreover, $\Upsilon_\alpha\in H^1(B_R^+;t^{1-2s})$ for every $R>0$. \end{lemma}
The first eigenvalue $\mu_1(\lambda)$ satisfies the properties described in the following lemma. \begin{lemma}\label{lemma:mu_psi_1}
Let $\lambda<\gamma_H$. Then the first eigenvalue of problem
\eqref{eq:angular_eigen} can be characterized as
\begin{equation*}
\mu_1(\lambda)=\inf_{\substack{ \psi\in H^1(\mathbb{S}^N_+;\theta_1^{1-2s}) \\ \psi\nequiv 0 } }\frac{\displaystyle \int_{\mathbb{S}^N_+}\theta_1^{1-2s}\abs{\nabla_{\mathbb{S}^N}\psi}^2\,\mathrm{d}S-\kappa_s\lambda\int_{\mathbb{S}^{N-1}}\abs{\psi}^2\,\mathrm{d}S' }{\displaystyle \int_{\mathbb{S}^N_+} \theta_1^{1-2s}\abs{\psi}^2\,\mathrm{d}S}
\end{equation*}
and the above infimum is attained by
$\psi_1\in H^1(\mathbb{S}^N_+;\theta_1^{1-2s})$, which weakly solves
\eqref{eq:angular_eigen} for $\mu=\mu_1(\lambda)$. Moreover
\begin{enumerate}
\item $\mu_1(\lambda)$ is simple, i.e. if $\psi$ attains $\mu_1(\lambda)$ then $\psi=\delta\psi_1$ for some $\delta\in\mathbb R$;
\item either $\psi_1>0$ or $\psi_1<0$ in $\mathbb{S}^N_+$;
\item if $\lambda>0$ and $\psi_1>0$, then the trace of $\psi_1$ on $\mathbb{S}^{N-1}$ is positive and constant;
\item if $\lambda=0$ then $\psi_1$ is constant in $\mathbb{S}^N_+$.
\end{enumerate} \end{lemma} \begin{proof}
The proof of the fact that $\mu_1(\lambda)$ is reached is classical,
as well as the proofs of points (1) and (2), see for instance
\cite[Section 8.3.3]{Salsa2016}.
In order to prove (3), let us first observe that, if
$\lambda\in(0,\gamma_H)$, there exists one and only one
$\alpha\in \big(0,\frac{N-2s}{2}\big)$ such that
$\Lambda(\alpha)=\lambda$. For this $\alpha$ let $\Upsilon_\alpha>0$
be the solution of \eqref{eq:phi_alpha}. Thanks to \cite[Theorem
4.1]{Fall2014}, it is possible to describe the behaviour of
$\Upsilon_\alpha$ near the origin: in particular, since $\Upsilon_\alpha>0$,
we have that there exists $C>0$ such that
\begin{equation*}
\tau^{-a_{\Lambda(\alpha)}}\Upsilon_\alpha(0,\tau\theta')\to C\psi_1(0,\theta')\qquad\text{in }C^{1,\beta}(\mathbb{S}^{N-1})\quad\text{as }\tau\to 0^+,
\end{equation*}
where
\begin{equation*}
a_{\Lambda(\alpha)}=-\frac{N-2s}{2}+\sqrt{\left(\frac{N-2s}{2}\right)^2+\mu_1(\Lambda(\alpha))}
\end{equation*}
Thanks to \eqref{eq:4} we have that
$a_{\Lambda(\alpha)}=-\frac{N-2s}{2}+\alpha $; then
$\tau^{-a_{\Lambda(\alpha)}}\Upsilon_\alpha(0,\tau\theta')\equiv 1$ and so
$\psi_1(0,\theta')$ is positive and constant in $\mathbb{S}^{N-1}$.
Finally, if $\lambda=0$ then $\mu_1(0)=0$ is clearly attained by every constant function. \end{proof}
\noindent
We note that, in view of well known regularity results (see Proposition \ref{prop:jlx} in the Appendix), $\psi_1\in C^{0,\beta}( \overline{\mathbb{S}^N_+})$ for some $\beta\in (0,1)$. Hereafter, we choose the first eigenfunction $\psi_1$ of problem \eqref{eq:angular_eigen} to be strictly positive in $\mathbb{S}^N_+$. With this choice of $\psi_1$, we also have that, in view of the Hopf type
principle proved in \cite[Proposition 4.11]{Cabre2014} (see Proposition \ref{prop:cabre_sire}),
\begin{equation}\label{eq:5}
\min_{\overline{\mathbb{S}^N_+}}\psi_1>0.
\end{equation}
\subsection{Asymptotic Estimates of Solutions}
In this section, we describe the asymptotic behaviour of solutions to equations of the type $-\divergence (t^{1-2s}\nabla \Phi)=0$, with singular potentials appearing in the Neumann-type boundary conditions, either on positive half-balls $B_r^+$ or on their complement in $\mathbb R^{N+1}_+$.
\begin{lemma}\label{lemma:regul_1}
Let $R_0>0$, $\lambda<\gamma_H$ and let $\Phi\in
H^1(B_{R_0}^+;t^{1-2s})$, $\Phi\geq 0$ a.e. in $ B_{R_0}^+$, $\Phi\nequiv 0$, be a weak solution of the following problem
\begin{equation*}
\begin{bvp}
-\divergence (t^{1-2s}\nabla \Phi)&=0, && \text{in }B_{R_0}^+, \\
-\lim_{t\to 0}t^{1-2s}\frac{\partial\Phi}{\partial t}&=\kappa_s(\lambda\abs{x}^{-2s}+q)\Phi, && \text{on }B_{R_0}',
\end{bvp}
\end{equation*}
where $q\in C^1(B_{R_0}'\setminus\{0\})$ is such that
\begin{equation*}
\abs{q(x)}+\abs{x\cdot\nabla q(x)}=O(\abs{x}^{-2s+\varepsilon}) \qquad\text{as }\abs{x}\to 0,
\end{equation*}
for some $\varepsilon>0$. Then there exist $C_1>0$ and $R\leq R_0$
such that
\begin{equation}\label{eq:regul_1}
\frac{1}{C_1}\abs{z}^{ a_\lambda}\leq \Phi(z)\leq C_1\abs{z}^{a_\lambda} \qquad\text{for all }z\in B_R^+ ,
\end{equation}
where \begin{equation}\label{eq:6}
a_\lambda=-\frac{N-2s}{2}+\sqrt{\left( \frac{N-2s}{2} \right)^{\!2}+\mu_1(\lambda)}. \end{equation} Furthermore, if $0\leq \lambda<\gamma_H$, then there exists $C_2>0$ such that
\begin{equation}\label{eq:regul_2}
\lim_{\abs{x}\to 0}\abs{x}^{ -a_\lambda}\Phi(0,x)=C_2.
\end{equation} \end{lemma} \begin{proof}
Since $\Phi\geq 0$ a.e., $\Phi\nequiv 0$, from \cite[Theorem 4.1]{Fall2014} we know that there exists $C>0$ such that
\begin{equation}\label{eq:7}
\tau^{-a_\lambda}\Phi(\tau\theta)\to C\psi_1(\theta)\qquad \text{in }C^{0,\beta}(\overline{\mathbb{S}^N_+})\quad\text{as }\tau\to 0.
\end{equation}
Estimate \eqref{eq:regul_1} follows from the above convergence and \eqref{eq:5}: indeed, writing $z=\tau\theta$ with $\tau=\abs{z}$ and $\theta\in\mathbb{S}^N_+$, the uniform convergence in \eqref{eq:7}, together with $C>0$ and \eqref{eq:5}, shows that $\abs{z}^{-a_\lambda}\Phi(z)$ is bounded between two positive constants for $\abs{z}$ sufficiently small. Convergence \eqref{eq:regul_2} follows from \eqref{eq:7} and statements (3)--(4) of Lemma \ref{lemma:mu_psi_1}. \end{proof}
\begin{lemma}\label{lemma:regul_2}
Let $R_0>0$, $\lambda<\gamma_H$ and let $\Phi\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, $\Phi\geq 0$ a.e. in $\mathbb R^{N+1}_+$, $\Phi\nequiv 0$, be a weak solution of the following problem
\begin{equation*}
\begin{bvp}
-\divergence (t^{1-2s}\nabla \Phi)&=0, && \text{in }\mathbb R^{N+1}_+\setminus B_{R_0}^+, \\
-\lim_{t\to 0}t^{1-2s}\frac{\partial\Phi}{\partial t}&=\kappa_s(\lambda\abs{x}^{-2s}+q)\Phi, && \text{on }\mathbb R^N\setminus B_{R_0}',
\end{bvp}
\end{equation*}
where $q\in C^1(\mathbb R^N\setminus B_{R_0}')$ is such that
\begin{equation*}
\abs{q(x)}+\abs{x\cdot\nabla q(x)} =O( \abs{x}^{-2s-\varepsilon}) \qquad\text{as }\abs{x}\to +\infty,
\end{equation*}
for some $\varepsilon>0$. Then there exist $C_3>0$ and $R\geq R_0$ such that
\begin{equation*}
\frac{1}{C_3}\abs{z}^{ -(N-2s)-a_\lambda}\leq \Phi(z)\leq C_3\abs{z}^{ -(N-2s)-a_\lambda} \qquad\text{for all }z\in \mathbb R^{N+1}_+\setminus B_R^+ .
\end{equation*}
Furthermore, if $0\leq \lambda<\gamma_H$, then there exists $C_4>0$ such that
\begin{equation*}
\lim_{\abs{x}\to \infty}\abs{x}^{ N-2s+a_\lambda}\Phi(0,x)=C_4.
\end{equation*} \end{lemma} \begin{proof}
The proof follows by considering the equation solved by the Kelvin
transform of $\Phi$ \begin{equation}\label{eq:4kel} \tilde{\Phi}(z):=\abs{z}^{-(N-2s)}\Phi\bigg(\frac{z}{\abs{z}^2}\bigg) \end{equation} (see \cite[Proposition 2.6]{Fall2012}) and applying
Lemma \ref{lemma:regul_1}. \end{proof}
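We point out the elementary bookkeeping of exponents behind the previous proof: by \eqref{eq:4kel} we have $\Phi(z)=\abs{z}^{-(N-2s)}\tilde{\Phi}\big(\tfrac{z}{\abs{z}^2}\big)$ and $\big|\tfrac{z}{\abs{z}^2}\big|=\tfrac{1}{\abs{z}}$, so that a two-sided estimate of the form $\frac{1}{C}\abs{z}^{a_\lambda}\leq \tilde{\Phi}(z)\leq C\abs{z}^{a_\lambda}$ for $\abs{z}$ small, as provided by Lemma \ref{lemma:regul_1} applied to $\tilde{\Phi}$, translates into
\begin{equation*}
\frac{1}{C}\abs{z}^{-(N-2s)-a_\lambda}\leq \Phi(z)\leq C\abs{z}^{-(N-2s)-a_\lambda}\qquad\text{for }\abs{z}\text{ large},
\end{equation*}
which is the estimate stated in Lemma \ref{lemma:regul_2}; the same bookkeeping will be used again in the proof of Lemma \ref{lemma:regul_4} below.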
\begin{lemma}\label{lemma:regul_3}
Let $R_0>0$ and let $\Phi\in
H^1(B_{R_0}^+;t^{1-2s})$, $\Phi\geq 0$ a.e. in $\mathbb R^{N+1}_+$, $\Phi\nequiv 0$, be a weak solution of the following problem
\begin{equation*}
\begin{bvp}
-\divergence (t^{1-2s}\nabla \Phi)&=0, && \text{in }B_{R_0}^+, \\
-\lim_{t\to 0}t^{1-2s}\frac{\partial\Phi}{\partial t}&=\kappa_s q\Phi, && \text{on }B_{R_0}',
\end{bvp}
\end{equation*}
where $q\in C^1(B_{R_0}'\setminus\{0\})$ is such that
\begin{equation*}
\abs{q(x)}+\abs{x\cdot\nabla q(x)} =O(\abs{x}^{-2s+\varepsilon})\qquad\text{as }\abs{x}\to 0,
\end{equation*}
for some $\varepsilon>0$. Then there exists $C_5>0$ such that
\begin{equation*}
\lim_{\abs{z}\to 0}\Phi(z)=\lim_{\abs{x}\to 0}\Phi(0,x)=C_5.
\end{equation*} \end{lemma} \begin{proof}
The thesis is a direct consequence of the regularity result of \cite[Proposition 2.4]{Jin2014} (see Proposition \ref{prop:jlx} in the Appendix) combined with the Hopf type
principle in \cite[Proposition 4.11]{Cabre2014} (see Proposition
\ref{prop:cabre_sire}). It can be also derived as a particular case
of \cite[Theorem 4.1]{Fall2014} with $\lambda=0$, taking into account
that, for $\lambda=0$, $\psi_1$ is a positive constant on
$\mathbb{S}^N_+$, as observed in Lemma \ref{lemma:mu_psi_1}. \end{proof}
\begin{lemma}\label{lemma:regul_4}
Let $R_0>0$ and let $\Phi\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, $\Phi\geq 0$ a.e. in $\mathbb R^{N+1}_+$, $\Phi\nequiv 0$, be a weak solution of the following problem
\begin{equation*}
\begin{bvp}
-\divergence (t^{1-2s}\nabla \Phi)&=0, && \text{in }\mathbb R^{N+1}_+\setminus B_{R_0}^+, \\
-\lim_{t\to 0}t^{1-2s}\frac{\partial\Phi}{\partial t}&=\kappa_s q\Phi, && \text{on }\mathbb R^N\setminus B_{R_0}',
\end{bvp}
\end{equation*}
where $q\in C^1(\mathbb R^N\setminus B_{R_0}')$ is such that
\begin{equation*}
\abs{q(x)}+\abs{x\cdot\nabla q(x)} =O(\abs{x}^{-2s-\varepsilon})\qquad\text{as }\abs{x}\to +\infty,
\end{equation*}
for some $\varepsilon>0$. Then there exists $C_6>0$ such that
\begin{equation*}
\lim_{\abs{z}\to \infty}\abs{z}^{N-2s}\Phi(z)=\lim_{\abs{x}\to \infty}\abs{x}^{N-2s}\Phi(0,x)=C_6.
\end{equation*} \end{lemma} \begin{proof}
The proof follows by considering the equation solved by the
Kelvin transform of $\Phi$ given in \eqref{eq:4kel} and applying Lemma \ref{lemma:regul_3}. \end{proof}
\section{Proof of Theorem \ref{Prop-2}}\label{sec:proof-theor-refpr}
\begin{proof}[Proof of Theorem \ref{Prop-2}] First, assume $\sum_{i=1}^k\lambda_i^+<\ga_H$. By Hardy inequality \eqref{eq:hardy} we deduce that \[
Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k} (u)\geq \left( 1-\frac{\sum_{i=1}^k\lambda_i^+}{\ga_H} \right)\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2\quad\text{for all }u\in\mathcal{D}^{s,2}(\R^N), \] thus implying that $Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}$ is positive definite.
Now we assume that $\sum_{i=1}^k\lambda_i^+>\ga_H$. By optimality of the constant $\ga_H$ in Hardy inequality, it follows that there exists $\varphi\in C_c^\infty(\mathbb R^N)$ such that \begin{equation}
\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i^+\int_{\mathbb R^N}\frac{\abs{\varphi}^2}{\abs{x}^{2s}}\,\mathrm{d}x<0. \end{equation} Let $\varphi_\rho(x):=\rho^{-\frac{N-2s}{2}}\varphi(x/\rho)$. Then, taking into account Lemma \ref{lemma:conv_hardy}, we have that \begin{align*}
\norm{\varphi_\rho}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i^+\int_{\mathbb R^N}\frac{\abs{\varphi_\rho}^2}{\abs{x-a_i}^{2s}}\,\mathrm{d}x &=\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i^+\int_{\mathbb R^N}\frac{\abs{\varphi}^2}{\abs{x-a_i/\rho}^{2s}}\,\mathrm{d}x\\
&\to \norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i^+\int_{\mathbb R^N}\frac{\abs{\varphi}^2}{\abs{x}^{2s}}\,\mathrm{d}x<0, \end{align*} as $\rho\to+\infty$. Therefore, there exists $\tilde{\rho}$ such that \begin{equation}\label{eq:prop2_1}
\norm{\psi}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i^+\int_{\mathbb R^N}\frac{\abs{\psi}^2}{\abs{x-a_i}^{2s}}\,\mathrm{d}x<0, \end{equation} where $\psi:=\varphi_{\tilde{\rho}}$. Let $R>0$ be such that $\supp \psi\subset B'_R$. Then \begin{equation}\label{eq:prop2_2}
\int_{\mathbb R^N}\frac{\abs{\psi}^2}{\abs{x-a}^{2s}}\,\mathrm{d}x\leq \frac{1}{(\abs{a}-R)^{2s}}\int_{B'_R}\abs{\psi}^2\,\mathrm{d}x \end{equation} for every $a\in\mathbb R^N$ with $\abs{a}>R$, since $\abs{x-a}\geq \abs{a}-R$ for all $x\in\supp \psi\subset B'_R$. Hence, from \eqref{eq:prop2_1} and \eqref{eq:prop2_2} it follows that \begin{multline*}
Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}(\psi)=\norm{\psi}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k \lambda_i^+\int_{\mathbb R^N}\frac{\abs{\psi}^2}{\abs{x-a_i}^{2s}}\,\mathrm{d}x+ \sum_{i=1}^k\lambda_i^-\int_{\mathbb R^N}\frac{\abs{\psi}^2}{\abs{x-a_i}^{2s}}\,\mathrm{d}x \\
\leq
\norm{\psi}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i^+ \int_{\mathbb R^N}\frac{\abs{\psi}^2}{\abs{x-a_i}^{2s}}\,\mathrm{d}x+\sum_{i=1}^k\lambda_i^- \frac{1}{(\abs{a_i}-R)^{2s}}\int_{B'_R}\abs{\psi}^2\,\mathrm{d}x<0 \end{multline*} provided the poles $a_i$ corresponding to the negative coefficients $\lambda_i$ are sufficiently far from the origin. The proof is thereby complete. \end{proof}
\begin{remark}\label{rmk:2_poles}
We observe that, in the case of two poles (i.e. $k=2$), Theorem
\ref{Prop-2} implies the sufficiency of condition \eqref{lambda-i}
for the existence of a configuration of poles that makes the
quadratic form $Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}$ positive
definite. Indeed, if $k=2$ condition \eqref{lambda-i} directly
implies \eqref{eq:2}. \end{remark}
\section{A positivity criterion in the class \texorpdfstring{$\Theta$}{Theta}}\label{sec:criterion}
In this section we provide the proof of Lemma \ref{Criterion}, which is a criterion for establishing the positivity of Schr\"odinger operators with potentials in $\Theta$ in terms of the existence of positive supersolutions, in the spirit of the Allegretto-Piepenbrink theory.
We first prove the equivalent formulation of the infimum in \eqref{eq:def_inf} stated in Lemma \ref{lemma:equiv_inf}.
\begin{proof}[Proof of Lemma \ref{lemma:equiv_inf}]
Let us fix $\tilde{u}\in \mathcal{D}^{s,2}(\R^N)\setminus \{0\}$ and let $\tilde{U}\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ denote its extension. Since
\begin{equation*}
\kappa_s\norm{\tilde{u}}_{\mathcal{D}^{s,2}(\R^N)}^2=\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \tilde{U}|^2 \, \mathrm{d}t \,\mathrm{d}x,
\end{equation*}
where $\kappa_s$ is defined in \eqref{eq:kappa_s}, then
\begin{equation*}
\frac{ \norm{\tilde{u}}_{\mathcal{D}^{s,2}(\R^N)}^2-\int_{\mathbb R^N}V
\tilde{u}^2\,\mathrm{d}x}{\norm{\tilde{u}}_{\mathcal{D}^{s,2}(\R^N)}^2 }=
\frac{\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \tilde{U}|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V|\Tr \tilde{U}|^2\,\mathrm{d}x}{\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \tilde{U}|^2\, \mathrm{d}t \,\mathrm{d}x}.
\end{equation*}
Therefore
\begin{equation*}
\frac{ \norm{\tilde{u}}_{\mathcal{D}^{s,2}(\R^N)}^2-\int_{\mathbb R^N}V \abs{\tilde{u}}^2\,\mathrm{d}x}{\displaystyle\norm{\tilde{u}}_{\mathcal{D}^{s,2}(\R^N)}^2 }
\geq \inf_{\substack{U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \\ U\nequiv 0}}\frac{\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V|\Tr U|^2\,\mathrm{d}x}{\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x}
\end{equation*}
and hence, taking the infimum over $\tilde{u}$ on the left-hand side, we obtain one of the two inequalities.
On the other hand, from \eqref{eq:dir_princ}, we have that, for any $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})\setminus\{0\}$
\begin{equation*}
\frac{\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V|\Tr U|^2\,\mathrm{d}x}{\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x}\geq 1-\frac{\int_{\mathbb R^N}V \abs{u}^2\,\mathrm{d}x}{\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2} \end{equation*} where $u=\Tr U$. Taking the infimum on both sides concludes the proof. \end{proof}
Now we are able to provide the proof of the positivity criterion.
\begin{proof}[Proof of Lemma \ref{Criterion}]
Let us first prove (I). Let
$U\in C_c^\infty (\overline{\mathbb R^{N+1}_+}\setminus
\{(0,a_1),\dots,(0,a_k)\})\setminus \{ 0 \}$,
$U\nequiv 0$ on $\mathbb R^N$. Note that $U^2/\Phi\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ and $U^2/\Phi
\geq 0$, hence we can choose $U^2/\Phi$ in \eqref{eq:1} as a test function.
Easy computations
yield
\begin{equation*}
\int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N} V\abs{\Tr U}^2\,\mathrm{d}x \geq \varepsilon \kappa_s \int_{\mathbb R^{N}} \tilde{V}\abs{\Tr U}^2\,\mathrm{d}x
\end{equation*}
which, taking into account the hypothesis on $\tilde{V}$, implies that
\begin{equation*}
\kappa_s \int_{\mathbb R^N} V\abs{\Tr U}^2\,\mathrm{d}x \leq \frac{1}{1+\varepsilon}\int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x.
\end{equation*}
Hence
\begin{equation}\label{eq:inf_eps}
\frac{\int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N} V\abs{\Tr U}^2\,\mathrm{d}x }{ \int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x}\geq \frac{\varepsilon}{1+\varepsilon}.
\end{equation}
Therefore (I) follows by density of $C_c^\infty (\overline{\mathbb R^{N+1}_+}\setminus
\{(0,a_1),\dots,(0,a_k)\})$ in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ (see Lemma \ref{l:density}).
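We point out that the computations above rest on the following standard Picone-type inequality: wherever $\Phi>0$, Young's inequality $2ab\leq a^2+b^2$ gives, pointwise,
\begin{equation*}
\nabla \Phi\cdot\nabla\Big(\frac{U^2}{\Phi}\Big)=2\,\frac{U}{\Phi}\,\nabla \Phi\cdot\nabla U-\frac{U^2}{\Phi^2}\abs{\nabla \Phi}^2\leq \abs{\nabla U}^2.
\end{equation*}
Taking $U^2/\Phi$ as a test function in \eqref{eq:1}, integrating the above inequality against the weight $t^{1-2s}$, and using the identity $\Tr \Phi\,\Tr\big(\tfrac{U^2}{\Phi}\big)=\abs{\Tr U}^2$, one obtains the first displayed estimate above.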
Now we prove (II). First of all we notice that, thanks to H\"older's inequality, \eqref{eq:hardy_ext} and \eqref{eq:sobolev_ext}
\begin{multline*}
\kappa_s\int_{\mathbb R^N}\tilde{V}\abs{\Tr U}^2\,\mathrm{d}x \leq\kappa_s\int_{\mathbb R^N}\abs{V}\abs{\Tr U}^2\,\mathrm{d}x \\ \leq \left[ \frac{1}{\gamma_H}\left(\sum_{i=1}^k\abs{\lambda_i}+\abs{\lambda_\infty}\right) +S^{-1}\norm{W}_{L^{N/2s}(\mathbb R^N)}\right]\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2 \, \mathrm{d}t \,\mathrm{d}x
\end{multline*}
for all $U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$. If
\begin{equation}\label{eq:12}
0 <\varepsilon < \frac{\mu(V)}{2} \left[ \frac{1}{\gamma_H}\left(\sum_{i=1}^k\abs{\lambda_i}+\abs{\lambda_\infty}\right) +S^{-1}\norm{W}_{L^{N/2s}(\mathbb R^N)}\right]^{-1},
\end{equation}
then
\begin{equation}\label{eq:criterion_2}
\int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x -\kappa_s\int_{\mathbb R^N}( V+\varepsilon \tilde{V})\abs{\Tr U}^2\,\mathrm{d}x
\geq \frac{\mu(V)}{2}\int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x
\end{equation}
for all $U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$.
Hence, for any fixed $p\in L^{N/2s}(\mathbb R^N)\cap L^\infty (\mathbb R^N)$, H\"older continuous and positive, the infimum
\begin{equation*}
m_p=\inf_{\substack{U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})\\\Tr U\not\equiv 0 }}
\frac{\displaystyle \int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla
U}^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}( V+\varepsilon \tilde{V} )\abs{\Tr
U}^2\,\mathrm{d}x }{\displaystyle \int_{\mathbb R^{N}} p \abs{\Tr
U}^2\,\mathrm{d}x }
\end{equation*}
is nonnegative. Moreover, $m_p$ is achieved by some function $\Phi
\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \setminus\{0\}$, which can be
chosen to be nonnegative since replacing $\Phi$ with $\abs{\Phi}$ leaves the quotient unchanged: indeed, thanks to Hardy inequality \eqref{eq:hardy_ext} and \eqref{eq:criterion_2}, it is easy to prove that the map
\begin{equation*}
\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \ni U\mapsto \int_{\mathbb R^{N+1}_+} t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x
-\kappa_s\int_{\mathbb R^N}
( V+\varepsilon \tilde{V})\abs{\Tr U}^2\,\mathrm{d}x
\end{equation*}
is weakly lower semicontinuous (since its square root is an equivalent
norm in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$), while Lemma \ref{lemma:compact_emb} yields the
compactness of the trace map from $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ into $L^2(\mathbb R^N,p\,\mathrm{d}x)$.
Moreover $\Phi$ satisfies in a weak sense
\begin{equation}\label{pbm:criterion_1}
\begin{bvp}
-\divergence (t^{1-2s}\nabla \Phi)&=0, &&\text{in }\mathbb R^{N+1}_+ ,\\
-\lim_{t\to 0^+} t^{1-2s}\frac{\partial \Phi}{\partial t}&= \kappa_s(V+\varepsilon \tilde{V})\Tr \Phi + m_p p\Tr \Phi, &&\text{on }\mathbb R^N,
\end{bvp}
\end{equation}
i.e.
\begin{equation*}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot \nabla W\, \mathrm{d}t \,\mathrm{d}x = \kappa_s \int_{\mathbb R^{N}}
(V+\varepsilon \tilde{V})\Tr\Phi\Tr W\,\mathrm{d}x+m_p\int_{\mathbb R^{N}}
p\Tr\Phi\Tr W\,\mathrm{d}x
\end{equation*}
for all $W \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$.
From \cite[Proposition
2.6]{Jin2014} (see also Proposition \ref{prop:jlx} in the Appendix) we
have that $\Phi$ is locally H\"older
continuous in $\overline{\mathbb R^{N+1}_+} \setminus
\{(0,a_1), \dots, (0,a_k)\}$; in particular $\Phi\in
C^{0}\left(\overline{\mathbb R^{N+1}_+} \setminus
\{(0,a_1),\dots, (0,a_k)\}\right)$.
Moreover, the classical Strong Maximum
Principle implies that $\Phi>0$ in $\mathbb R^{N+1}_+$; then, in the
case when $V,\tilde{ V}$ are locally H\"older
continuous in $\mathbb R^N\setminus\{a_1,\dots,a_k\}$, the Hopf type
principle proved in \cite[Proposition 4.11]{Cabre2014} (which is
recalled in the Proposition \ref{prop:cabre_sire} of the Appendix) ensures that
$\Phi(0,x)>0$ for all $x\in \mathbb R^N \setminus \{a_1, \dots,
a_k\}$; we observe that assumption \eqref{eq:9} of
Proposition \ref{prop:cabre_sire} is satisfied thanks to
\cite[Lemma 4.5]{Cabre2014}, see Lemma \ref{l:cabre_sire}. \end{proof}
\section{Upper and lower bounds for $\mu(V)$}\label{sec:shattering}
In this section we prove bounds from above and from below (in Lemma \ref{Lem-1} and \ref{Lem-2}, respectively) for the quantity $\mu(V)$.
\begin{lemma}\label{Lem-1}
For any $V(x)=\sum_{i=1}^{k}\frac{\la_i \chi_{B'(a_i,r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty \chi_{\mathbb R^N \setminus B'_R}(x)}{|x|^{2s}}+W(x) \in \Theta,$ there holds: \begin{itemize} \item [(i)] $\mu(V) \leq 1;$ \item [(ii)] if $\max_{i=1,\dots,k,\infty}\la_i>0,$ then $\mu(V) \leq 1-\frac{1}{\ga_H}\max_{i=1,\dots,k,\infty}\la_i.$ \end{itemize} \end{lemma}
\begin{proof} Let us fix
$u \in C_c^{\infty}(\mathbb R^N)$, $u\not\equiv 0$ and
$P\in\mathbb R^N\setminus\{a_1,\dots,a_k\}$. For every $\rho>0$ we define
$u_{\rho}(x):=\rho^{-\frac{(N-2s)}{2}}u(\frac{x-P}{\rho})$ and we
notice that, by scaling properties,
\begin{equation}\label{eq:shatter_1}
\norm{u_\rho}_{\mathcal{D}^{s,2}(\R^N)}=\norm{u}_{\mathcal{D}^{s,2}(\R^N)}\quad\text{and} \quad\norm{u_\rho}_{L^{2^*_s}(\mathbb R^N)}=\norm{u}_{L^{2^*_s}(\mathbb R^N)}.
\end{equation}
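Indeed, when the $\mathcal{D}^{s,2}(\R^N)$-seminorm is realized, for instance, as the Gagliardo double integral (any of the standard equivalent realizations scales in the same way), the change of variables $x=P+\rho\xi$, $y=P+\rho\eta$ gives
\begin{equation*}
\int_{\mathbb R^N}\int_{\mathbb R^N}\frac{\abs{u_\rho(x)-u_\rho(y)}^2}{\abs{x-y}^{N+2s}}\,\mathrm{d}x\,\mathrm{d}y=\rho^{-(N-2s)}\rho^{-(N+2s)}\rho^{2N}\int_{\mathbb R^N}\int_{\mathbb R^N}\frac{\abs{u(\xi)-u(\eta)}^2}{\abs{\xi-\eta}^{N+2s}}\,\mathrm{d}\xi\,\mathrm{d}\eta,
\end{equation*}
where the exponents of $\rho$ add up to zero; the identity for the $L^{2^*_s}(\mathbb R^N)$-norm follows from the same change of variables, since $\frac{N-2s}{2}\,2^*_s=N$.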
Moreover, since $\supp(u_\rho)=P+\rho \supp(u)$, we have that
$a_1,\dots,a_k\not\in \supp(u_\rho)$ for $\rho>0$ sufficiently small, hence
\begin{equation}\label{eq:shatter_2}
V\in L^{N/2s}(\supp(u_\rho)).
\end{equation}
Therefore, from the definition of $\mu(V)$, thanks also to
\eqref{eq:shatter_1}, \eqref{eq:shatter_2}, H\"older
inequality, and \eqref{eq:sobolev}, we deduce that
\begin{align*}
\mu(V)&\leq
1-\frac{\int_{\mathbb R^N}V\abs{u_\rho}^2\,\mathrm{d}x}{\norm{u_\rho}_{\mathcal{D}^{s,2}(\R^N)}^2} \leq 1+\frac{\int_{\supp(u_\rho)}\abs{V}\abs{u_\rho}^2\,\mathrm{d}x}{\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2}\\ & \leq
1+\frac{\norm{V}_{L^{N/2s}(\supp(u_\rho))}\norm{u}^2_{L^{2^*_s}(\mathbb R^N)}}{\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2} \leq 1+S^{-1}\norm{V}_{L^{N/2s}(\supp(u_\rho))}=
1+o(1),
\end{align*}
as $\rho\to 0^+$. By density we may conclude the first part of the proof.
Now let us assume $\max_{i=1, \dots,k,\infty}\lambda_i>0$ and let us first consider the case $\max_{i=1, \dots,k,\infty}\lambda_i=\lambda_j$ for a certain $j=1,\dots,k$. From optimality of the best constant in Hardy inequality \eqref{eq:hardy} and from the density of $C_c^\infty(\mathbb R^N \setminus \{a_1, \dots, a_k,0\})$ in $\mathcal{D}^{s,2}(\mathbb R^N)$ (see Lemma \ref{l:density}), we have that, for any $\varepsilon>0$, there exists $\varphi\in C_c^\infty(\mathbb R^N \setminus \{a_1, \dots, a_k,0\})$ such that \begin{equation}\label{eq:shatter_3}
\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2<(\gamma_H+\varepsilon)\int_{\mathbb R^N}\frac{\abs{\varphi}^2}{\abs{x}^{2s}}\,\mathrm{d}x. \end{equation} Now, for any $\rho>0$ we define $\varphi_\rho(x):=\rho^{-\frac{(N-2s)}{2}}\varphi(\frac{x-a_j}{\rho})$. From the definition of $\mu(V)$ and from \eqref{eq:shatter_1} we deduce that \begin{equation}\label{eq:shatter_4}
\mu(V) \leq 1-\frac{\int_{\mathbb R^{N}}V\abs{\varphi_\rho}^2\,\mathrm{d}x}{\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2}. \end{equation} On the other hand, we can split the numerator as \begin{align*}
\int_{\mathbb R^{N}}V\abs{\varphi_\rho}^2\,\mathrm{d}x&=\la_j \int_{B'(a_j,r_j)}|x-a_j|^{-2s}\abs{\varphi_\rho}^2\,\mathrm{d}x+\sum_{i\neq j}\la_i \int_{B'(a_i,r_i)}|x-a_i|^{-2s}\abs{\varphi_\rho}^2\,\mathrm{d}x\\
&+\la_\infty \int_{\mathbb R^N \setminus B'_{R}}|x|^{-2s}\abs{\varphi_\rho}^2\,\mathrm{d}x+\int_{\mathbb R^N}W\abs{\varphi_\rho}^2\,\mathrm{d}x. \end{align*} From H\"older inequality and \eqref{eq:shatter_1} we have that \begin{equation}\label{eq:shatter_5}
\left| \int_{\mathbb R^N}W\varphi_\rho^2\,\mathrm{d}x\right| \leq \norm{W}_{L^{N/2s}(\supp(\varphi_\rho))}\norm{\varphi}_{{L^{2^*_s}}}^2 \to 0, \quad\text{as }\rho\to 0^+, \end{equation} while, just by a change of variable \begin{equation}\label{eq:shatter_6}
\int_{B'(a_j,r_j)}|x-a_j|^{-2s}\abs{\varphi_\rho}^2\,\mathrm{d}x=\int_{B'_{r_j/\rho}}|x|^{-2s}\abs{\varphi}^2\,\mathrm{d}x\to \int_{\mathbb R^{N}} |x|^{-2s}\abs{\varphi}^2\,\mathrm{d}x, \end{equation} as $\rho\to 0^+$. Moreover $\supp(\varphi_\rho)=a_j+\rho\supp(\varphi)$, and therefore, thanks to \eqref{eq:shatter_5} and \eqref{eq:shatter_6}, we have that, as $\rho\to 0^+$, \begin{equation}\label{eq:shatter_7}
\int_{\mathbb R^{N}}V\abs{\varphi_\rho}^2\,\mathrm{d}x=\la_j \int_{\mathbb R^N}|x|^{-2s}\abs{\varphi}^2\,\mathrm{d}x+o(1). \end{equation} Hence, combining \eqref{eq:shatter_4} with \eqref{eq:shatter_7} and \eqref{eq:shatter_3}, we obtain that \[
\mu(V)\leq 1-\lambda_j(\gamma_H+\varepsilon)^{-1}, \] for all $\varepsilon>0$, which implies that $\mu(V)\leq 1-\lambda_j/\gamma_H$. Finally, let us assume $\max_{i=1,
\dots,k,\infty}\lambda_i=\lambda_\infty$. Letting $\varphi_\rho(x):=\rho^{-\frac{(N-2s)}{2}}\varphi(x/\rho )$, we observe that $\varphi_\rho\to 0$ uniformly, as $\rho\to +\infty$. So, arguing as before, one can similarly obtain that $\mu(V) \leq 1-\frac{\la_\infty}{\ga_H}$. The proof is thereby complete. \end{proof}
The following result provides the positivity in the case of potentials with subcritical masses supported in sufficiently small neighbourhoods of the poles. In the following we fix two cut-off functions $\zeta,\tilde\zeta:\mathbb R^N\to \mathbb R$ such that $\zeta,\tilde\zeta\in C^\infty(\mathbb R^N)$, $0\leq \zeta(x)\leq 1$, $0\leq \tilde\zeta(x)\leq 1$, and \begin{align*}
&\zeta(x)=1\quad\text{for $|x|\leq\frac12$},
\quad \zeta(x)=0\quad\text{for $|x|\geq1$},\\
&\tilde \zeta(x)=0\quad\text{for $|x|\leq1$},
\quad \tilde\zeta(x)=1\quad\text{for $|x|\geq2$}. \end{align*} \begin{lemma}\label{Lem-2}
Let $\{a_1,a_2,\dots,a_k\}\subset B_R'$, $a_i\neq a_j$
for $i\neq j$, and $\lambda_1,\lambda_2,\dots,\lambda_k,\lambda_\infty\in\mathbb R$
be such that $m:=\max_{i=1, \dots,k,\infty}\la_i<\gamma_H$.
For any $0<h<1-\frac{m}{\ga_H}$, there exists $\de=\de(h)>0$ such that
\begin{equation*}
\mu\left(\sum_{i=1}^{k}\frac{\la_i
\zeta(\frac{x-a_i}\delta)}{|x-a_i|^{2s}}+\frac{\la_\infty
\tilde\zeta(\frac xR)}{|x|^{2s}} \right)
\geq
\begin{cases}
1-\frac{m}{\ga_H}-h,& \text{if } m>0\\
1,&\text{if } m \leq 0.
\end{cases}
\end{equation*} \end{lemma} \begin{proof}
Let us assume that $m>0$, otherwise the statement is trivial. First, let us fix $0<\varepsilon < \frac{\gamma_H}{ m}-1,$ so that \begin{equation*}
\tilde{\la}_i:=\la_i +\varepsilon \la_i^+ < \gamma_H \quad\text{for all }i=1,\dots,k,\infty. \end{equation*} In order to prove the statement, it is sufficient to find $\delta=\delta(\varepsilon)>0$ and $\Phi \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi\in C^{0}(\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1/\de),\dots,(0,a_k/\de)\})$, $\Phi> 0$ in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1/\de),\dots,(0,a_k/\de)\}$, and \begin{equation}\label{Eq-1} \int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot\nabla U\, \mathrm{d}t \,\mathrm{d}x-\sum_{i=1}^k \kappa_s\int_{\mathbb R^{N}}V_i\Tr \Phi \Tr U\,\mathrm{d}x -\kappa_s\int_{\mathbb R^{N}}V_\infty \Tr \Phi \Tr U\,\mathrm{d}x \geq 0 \end{equation} for all $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, $U \geq 0$ a.e., where \[
V_i(x)=\frac{\tilde{\la}_i
\zeta\big(x-\frac{a_i}\delta\big)}{\abs{x-a_i/\de}^{2s}},\qquad
V_\infty(x)=\frac{\tilde{\la}_\infty \tilde\zeta\big(\frac{\delta}R x\big)}{\abs{x}^{2s}}. \] Indeed, thanks to scaling properties in \eqref{Eq-1} and to Lemma \ref{Criterion}, \eqref{Eq-1} implies that \[
\mu \left( \sum_{i=1}^k \frac{\la_i
\zeta(\frac{x-a_i}\delta)}{\abs{x-a_i}^{2s}}+\frac{\la_\infty \tilde\zeta(\frac xR)}{\abs{x}^{2s}} \right) \geq\frac{\varepsilon}{1+\varepsilon}, \] so that the statement follows once $\varepsilon$ is chosen so close to $\frac{\gamma_H}{m}-1$ that $\frac{\varepsilon}{1+\varepsilon}\geq 1-\frac{m}{\gamma_H}-h$. Hence, we look for some $\Phi$ positive and continuous in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1/\de),\dots,(0,a_k/\de)\}$ satisfying \eqref{Eq-1}. Let us set, for some $0<\tau<1$, \[
p_i(x):=p\left(x-\frac{a_i}{\de}\right)\quad \text{for }
i=1,\dots,k,\quad p_\infty(x)=\left(\frac{\delta}R \right)^{2s}p\left(\frac{\delta x}R\right), \] where $p(x)=\frac{1}{\abs{x}^{2s-\tau}(1+\abs{x}^2)^{\tau}}$. We observe that $p_i, p_\infty \in L^{N/{2s}}(\mathbb R^N)$. Therefore, thanks to Lemma \ref{lemma:compact_emb}, the weighted eigenvalue \[
\mu_i=\inf_{\substack{\Phi \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \\ \Tr\Phi\nequiv 0 }}\frac{\int_{\mathbb R^{N+1}_+} t^{1-2s} \abs{\nabla \Phi}^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V_i\abs{\Tr \Phi }^2\,\mathrm{d}x}{\int_{\mathbb R^{N}}p_i\abs{\Tr \Phi }^2\,\mathrm{d}x} \] is positive and attained by some nontrivial, nonnegative function $\Phi_i \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ that weakly solves \[
\begin{bvp}
-\divergence(t^{1-2s}\nabla\Phi_i)&=0, &&\text{in }\mathbb R^{N+1}_+ ,\\
-\lim_{t\to 0}t^{1-2s}\frac{\partial \Phi_i}{\partial t}&=\left(
\kappa_s V_i+\mu_i p_i \right)\Tr\Phi_i ,&&\text{on }\mathbb R^N.
\end{bvp} \] From the classical Strong Maximum Principle we deduce that $\Phi_i>0$ in $\mathbb R^{N+1}_+$, while Proposition \ref{prop:jlx} yields that $\Phi_i$ is locally H\"older
continuous in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_i/\de)\}$. Moreover, from the Hopf type lemma in Proposition \ref{prop:cabre_sire} (whose assumption \eqref{eq:9} is satisfied thanks to Lemma \ref{l:cabre_sire} outside $\{a_i/\de\}$) we deduce that $\Tr \Phi_i >0$ in $\mathbb R^N\setminus \{a_i/\de\}$. Similarly \[
\mu_\infty=\inf_{\substack{\Phi \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \\ \Tr\Phi\nequiv 0 }}\frac{\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla \Phi}^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^{N}}V_\infty\abs{\Tr \Phi }^2\,\mathrm{d}x}{\int_{\mathbb R^{N}}p_\infty\abs{\Tr \Phi }^2\,\mathrm{d}x} \] is positive and reached by some nontrivial, nonnegative function $\Phi_\infty \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi_\infty$ is locally H\"older
continuous in $\overline{\mathbb R^{N+1}_+}$ and $\Phi_\infty>0$ in $\overline{\mathbb R^{N+1}_+}$. Moreover, $\Phi_{\infty}$ weakly solves \[
\begin{bvp}
-\divergence(t^{1-2s}\nabla\Phi_\infty)&=0, &&\text{in }\mathbb R^{N+1}_+ ,\\
-\lim_{t\to 0}t^{1-2s}\frac{\partial \Phi_\infty}{\partial
t}&=( \kappa_s V_\infty+\mu_\infty p_\infty )\Tr\Phi_\infty ,&&\text{on }\mathbb R^N.
\end{bvp} \] Lemmas \ref{lemma:regul_1}--\ref{lemma:regul_4} (and continuity of the $\Phi_i$'s outside the poles) imply that there exists $C_0>0$ (independent of $\de$) such that \begin{align}
\frac{1}{C_0}\abs{x-\frac{a_i}{\de}}^{a_{\tilde \lambda_i}} \leq &\Tr \Phi_i \leq C_0 \abs{x-\frac{a_i}{\de}}^{a_{\tilde \lambda_i}}, && \text{in }B'(a_i/\de,1), \label{eq:mu_estim_1}\\
\frac{1}{C_0}\abs{x-\frac{a_i}{\de}}^{-(N-2s)} \leq &\Tr \Phi_i \leq C_0 \abs{x-\frac{a_i}{\de}}^{-(N-2s)}, &&\text{in }\mathbb R^N \setminus B'(a_i/\de,1), \label{eq:mu_estim_2}\\
\frac{1}{C_0}\abs{\frac{\de x}{R}}^{-(N-2s)-a_{\tilde\lambda_\infty}} \leq &\Tr \Phi_\infty \leq C_0 \abs{\frac{\de x}{R}}^{-(N-2s)-a_{\tilde\lambda_\infty}}, && \text{in }\mathbb R^N \setminus B'_{R/\de}, \label{eq:mu_estim_3}\\
\frac{1}{C_0} \leq &\Tr \Phi_\infty \leq C_0,&& \text{in }B'_{R/\de}. \label{eq:mu_estim_4} \end{align} Let $\Phi:=\sum_{i=1}^k \Phi_i + \eta \Phi_\infty$, with $0<\eta<\inf \{\frac{\mu_i}{4C_0^2 \tilde{\la}_i}\colon i=1,\dots,k,~ \tilde{\la}_i>0 \}$. Therefore, \begin{align}
& \int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x-\sum_{i=1}^k\int_{\mathbb R^N}V_i\Tr \Phi \Tr U \,\mathrm{d}x-\int_{\mathbb R^{N}}V_\infty\Tr \Phi \Tr U \,\mathrm{d}x \nonumber \\
=&\int_{\mathbb R^N}\bigg[ \sum_{i=1}^k\bigg(\mu_ip_i-V_\infty-\sum_{ j\neq i} V_j\bigg)\Tr \Phi_i +\eta\bigg( \mu_\infty p_\infty-\sum_{i=1}^k V_i \bigg)\Tr \Phi_\infty \bigg]\Tr U \,\mathrm{d}x \nonumber\\
=:&\int_{\mathbb R^N}g(x)\Tr U(x) \,\mathrm{d}x \label{eq:mu_estim_5} \end{align} for all $U\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$. Hereafter, let us assume $U\geq 0$ a.e. in $\mathbb R^{N+1}_+$. We split $\mathbb R^N$ into three regions and prove that $g$ is nonnegative on each of them. First, let us consider $x\in B'_{R/\de}\setminus\left( \cup_{i=1}^k B'(a_i/\de,1) \right)$: here $V_i=V_\infty=0$ and we have that \[
g(x)= \sum_{i=1}^k \mu_i p_i \Tr \Phi_i +\eta \mu_\infty p_\infty \Tr \Phi_\infty \geq 0. \] Now let us take $n\in \{1,\dots,k\}$ and $x\in B'(a_n/\de,1)$, where $V_\infty=V_i=0$ for $i\neq n$. Then \begin{gather*}
g(x)=\sum_{i=1}^k\mu_i p_i \Tr \Phi_i -V_n\sum_{ i\neq n}\Tr \Phi_i +\eta (\mu_\infty p_\infty -V_n)\Tr \Phi_\infty \\
\geq \mu_n p_n \Tr \Phi_n - V_n\bigg(\sum_{\substack{i=1 \\i\neq n}}^k\Tr \Phi_i +\eta\Tr \Phi_\infty \bigg). \end{gather*} If $\tilde{\lambda}_n\leq 0$ this is clearly nonnegative; so let us assume $\tilde{\lambda}_n>0$. Thanks to \eqref{eq:mu_estim_1}, \eqref{eq:mu_estim_2} and \eqref{eq:mu_estim_4} we can estimate this quantity from below by \begin{equation}\label{eq:mu_estim_6}
\abs{x-\frac{a_n}{\de}}^{-2s}\bigg[ \frac{\mu_n}{2^\tau
C_0}\abs{x-\frac{a_n}{\de}}^{\tau+a_{\tilde\lambda_n}}- \tilde{\lambda}_n C_0\bigg( \sum_{i\neq n}\abs{x-\frac{a_i}{\de}}^{-(N-2s)}+\eta \bigg) \bigg]. \end{equation} We observe that $\abs{x-a_n/\de}^{\tau+a_{\tilde\lambda_n}}\geq 1$, since $\tilde\lambda_n>0$ implies that $a_{\tilde\lambda_n}<0$ and we can choose $\tau<-a_{\tilde\lambda_n}$. Moreover, for $i\neq n$ and $x\in B'(a_n/\de,1)$ we have $\abs{x-\frac{a_i}{\de}}\geq \frac{\abs{a_i-a_n}}{\de}-1\geq \frac{\abs{a_i-a_n}}{2\de}$ whenever $\de\leq\frac12\min_{j\neq n}\abs{a_j-a_n}$, so that \[
\abs{x-\frac{a_i}{\de}}^{-(N-2s)}\leq \bigg( \frac{2}{\abs{a_n-a_i}} \bigg)^{N-2s}\delta^{N-2s}<\frac{\eta}{k-1} \] for $\de>0$ sufficiently small. Thanks to this and to the choice of $\eta$, the expression in \eqref{eq:mu_estim_6} is nonnegative, and hence $g(x)\geq 0$ in $B'(a_n/\de,1)$. Finally, if $x\in\mathbb R^N\setminus B'_{R/\de}$, then
the function $g$ in \eqref{eq:mu_estim_5} becomes \begin{equation}\label{eq:mu_estim_7}
\sum_{i=1}^k( \mu_i p_i -V_\infty )\Tr \Phi_i +\eta\mu_\infty p_\infty \Tr \Phi_\infty . \end{equation} Again, if $\tilde{\lambda}_\infty\leq 0$ this quantity is nonnegative. If $\tilde{\lambda}_\infty>0$, thanks to \eqref{eq:mu_estim_2} and \eqref{eq:mu_estim_3}, we have that the function in \eqref{eq:mu_estim_7} is greater than or equal to \begin{equation}\label{eq:mu_estim_8}
\abs{x}^{-2s}\bigg[ -C_0\tilde{\lambda}_\infty
\sum_{i=1}^k\abs{x-\frac{a_i}{\de}}^{-(N-2s)}+\frac{\eta\mu_\infty}{2^\tau
C_0}\abs{\frac{\de x}{R}}^{-(N-2s)-a_{\tilde\lambda_\infty}+\tau} \bigg]. \end{equation} Now, one can easily see that \[
\abs{x-\frac{a_i}{\de}} \geq \bigg(1-\frac{a}{R}\bigg)\abs{x}\qquad \text{for all } x \in \mathbb R^N \setminus B'_{R/\de}, \quad \text{where } a=\max_{j=1,\dots,k}\abs{a_j}, \] so that we can estimate \eqref{eq:mu_estim_8} from below obtaining that, for all $x\in\mathbb R^N\setminus B'_{R/\de}$, \begin{gather*} g(x)\geq \abs{x}^{-N}\bigg[ -C_0\tilde{\lambda}_\infty k \bigg( 1-\frac{a}{R} \bigg)^{-(N-2s)}+ \frac{\eta\mu_\infty}{2^\tau C_0}\abs{\frac{\de}{R}}^{-(N-2s)-a_{\tilde\lambda_\infty}+\tau} \abs{x}^{-a_{\tilde\lambda_\infty}+\tau}\bigg] \\
\geq \abs{x}^{-N}\bigg[ -C_0\tilde{\lambda}_\infty k \bigg( 1-\frac{a}{R} \bigg)^{-(N-2s)}+ \frac{\eta\mu_\infty}{2^\tau C_0}\abs{\frac{\de}{R}}^{-(N-2s)} \bigg]\geq 0 \end{gather*} for $\de>0$ sufficiently small, since $a_{\tilde\lambda_\infty}<0$ if $\tilde\lambda_\infty>0$. The proof is thereby complete. \end{proof}
\section{Perturbation at infinity and at poles}\label{sec:perturbation} In this section, we investigate the persistence of the positivity when the mass is increased at infinity (Theorem \ref{Theorem-a}) and at poles (Theorem \ref{Theorem-a-pole}).
In order to make use of Lemmas \ref{lemma:regul_1}--\ref{lemma:regul_4}, we may need to restrict the class $\Theta$ to more regular potentials and to control their decay at infinity.
For any $\de>0$, we define
\begin{equation}\label{eq:def_P_infty}
\begin{aligned}
\mathcal{P}_\infty^\de:=\bigg\{ f\colon \mathbb R^N\to\mathbb R \colon &f\in C^1(\mathbb R^N\setminus B_{R_\infty}')\text{ for some $R_\infty>0$}\\[-10pt]
&\text{and}~\abs{f(x)}+\abs{x\cdot \nabla f(x)}=O(\abs{x}^{-2s-\de})~\text{as }\abs{x}\to +\infty\bigg\}.
\end{aligned}
\end{equation} Moreover, in order to prove some intermediary, technical lemmas based on the positivity criterion Lemma \ref{Criterion}, the need for even more regular potentials occasionally arises. So, let us introduce the class \begin{equation}\label{eq:def_Theta_s}
\Theta^*:=\left\{V\in\Theta\colon V\in C^1(\mathbb R^N\setminus\{a_1,\dots,a_k\}) \right\}. \end{equation} Then, we will recover the full generality of the class $\Theta$, thanks to an approximation procedure, which is based on the following lemma. \begin{lemma}\label{lemma:approx_potentials}
Let $V_1,V_2\in\Theta$ be such that $V_1-V_2\in L^{N/2s}(\mathbb R^N)$. Then
\[
\abs{\mu(V_2)-\mu(V_1)}\leq S^{-1}\norm{V_2-V_1}_{L^{N/2s}(\mathbb R^N)},
\]
where $S>0$ is the best constant in the Sobolev embedding \eqref{eq:sobolev}. \end{lemma} \begin{proof}
From the definition of $\mu(V_2)$, H\"older inequality and
\eqref{eq:sobolev_ext},
for all $U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ we have that
\begin{multline}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V_1\abs{\Tr U}^2\,\mathrm{d}x \\
\geq \left(\mu(V_2)-S^{-1}\norm{V_2-V_1}_{L^{N/2s}(\mathbb R^N)} \right)\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x,
\end{multline}
which implies that
\[
\mu(V_1)\geq \mu(V_2)-S^{-1}\norm{V_2-V_1}_{L^{N/2s}(\mathbb R^N)}.
\]
Analogously one can prove that
$\mu(V_2)\geq \mu(V_1)-S^{-1}\norm{V_2-V_1}_{L^{N/2s}(\mathbb R^N)}$,
thus concluding the proof. \end{proof}
\begin{lemma}\label{lemma-a} Let $V\in\mathcal H$, $a_1,\dots,a_k\in\mathbb R^N$, and $R>0$ be such that \[ V\in C^1(\mathbb R^N\setminus\{a_1,\dots,a_k\}) \quad\text{and}\quad
V(x)=\frac{\la_\infty}{|x|^{2s}}+W(x) \quad\text{in }\mathbb R^N \setminus B'_{R}, \] where $\lambda_\infty<\gamma_H$ and $W\in\mathcal{P}_\infty^\de\cap L^\infty(\mathbb R^N)$ for some $\de>0$.
Assume that $\mu(V)>0$ and let $\nu_\infty \in \mathbb R$ be such that $\la_\infty+\nu_\infty<\ga_H$. Then there exist $\tilde{R}>R$ and $\Phi \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi$ is locally H\"older
continuous in $\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_1),\dots,(0,a_k)\}$, $\Phi>0$ in $\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_1),\dots,(0,a_k)\}$, and \[
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}\left[V+ \frac{\nu_\infty}{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}} }\right]\Tr \Phi \Tr U \,\mathrm{d}x\geq 0, \] for all $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ with $U\geq 0$ a.e. \end{lemma}
\begin{proof} By \eqref{eq:8} we can fix $\varepsilon\in\big(0,\frac{N-2s}2\big)$ such that \begin{equation}\label{eq:cond_eps}
\Lambda(\varepsilon)-\lambda_\infty>0\quad\text{and}\quad \Lambda(\varepsilon)-\lambda_\infty-\nu_\infty>0. \end{equation}
Since $W\in\mathcal{P}_\infty^\de\cap L^\infty(\mathbb R^N)$, there exists $C_0>0$ such that \begin{equation}\label{eq:cond_W}
W(x) \leq \frac{C_0}{|x|^{2s+\de}} \qquad\text{in } \mathbb R^N. \end{equation} Let $R_0\geq \max\Big\{R,\frac{1}{2}\left[\frac{C_0}{\Lambda(\varepsilon)-\lambda_\infty}\right]^{{1}/{\de}} \Big\}$, so that \begin{equation}\label{eq:cond_R_0}
\Lambda(\varepsilon)-\lambda_\infty-C_0(2R_0)^{-\de}\geq 0. \end{equation} From Lemma \ref{lemma:func_fall} there exists a positive, locally
H\"older continuous function $\Upsilon_\varepsilon: \overline{\mathbb R^{N+1}_+}\setminus\{0\} \to \mathbb R$ such that $\Upsilon_\varepsilon\in\bigcap_{r>0}H^1(B_r^+;t^{1-2s})$ and \begin{equation}
\left\{\begin{aligned}
-{\divergence}(t^{1-2s}\nabla \Upsilon_\varepsilon) &=0, &&\text{in } \mathbb R^{N+1}_+, \\
\Upsilon_\varepsilon (0,x)&=|x|^{-\frac{N-2s}{2}+\varepsilon}, &&\text{on } \mathbb R^N,\\
-\lim_{t \to 0^+}t^{1-2s}\frac{\pa \Upsilon_\varepsilon}{\pa t}&=\kappa_s\Lambda(\varepsilon)|x|^{-2s}\Tr \Upsilon_\varepsilon, &&\text{on } \mathbb R^N,\end{aligned} \right. \end{equation} in a weak sense.
Direct calculations (see e.g. \cite[Proposition 2.6]{Fall2012}) yield that the Kelvin transform \[
\tilde{\Upsilon}_\varepsilon(z)=\abs{z}^{-(N-2s)}\Upsilon_\varepsilon(z/\abs{z}^2) \]
of $\Upsilon_\varepsilon$ weakly satisfies \begin{equation}\label{kelvin-trans} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \tilde{\Upsilon}_\varepsilon) &=0, && \text{in }\mathbb R^{N+1}_+\setminus\{0\}, \\
\tilde{\Upsilon}_\varepsilon (0,x)&=|x|^{\frac{2s-N}{2}-\varepsilon}, &&\text{on } \mathbb R^N \setminus\{0\},\\
-\lim_{t \to 0^+}t^{1-2s}\frac{\pa \tilde{\Upsilon}_\varepsilon}{\pa t}&=\kappa_s\Lambda(\varepsilon)|x|^{-2s}\Tr \tilde{\Upsilon}_\varepsilon, &&\text{on } \mathbb R^N \setminus\{0\}, \end{aligned} \right. \end{equation}
$\tilde{\Upsilon}_\varepsilon>0$ in $\overline{\mathbb R^{N+1}_+}\setminus\{0\}$
and $\tilde{\Upsilon}_\varepsilon$
is locally H\"older
continuous in $\overline{\mathbb R^{N+1}_+} \setminus
\{0\}$. Moreover we have that \begin{equation}\label{eq:10}
\int_{\mathbb R^{N+1}_+ \setminus B_r^+} t^{1-2s} |\nabla\tilde{\Upsilon}_\varepsilon|^2\, \mathrm{d}t \,\mathrm{d}x+
\int_{\mathbb R^{N+1}_+\setminus B_r^+} t^{1-2s}
\frac{|\tilde{\Upsilon}_\varepsilon|^2}{|x|^2+t^2}\, \mathrm{d}t \,\mathrm{d}x<+\infty\quad\text{for
all }r>0. \end{equation} Let $\eta\in C^\infty(\overline{\mathbb R^{N+1}_+})$ be a cut-off function such that $\eta$ is radial, i.e. $\eta(z)=\eta(\abs{z})$, $\abs{\nabla
\eta}\leq \frac2{R_0}$ in $\overline{\mathbb R^{N+1}_+}$, \[
\eta(z):=\begin{cases}
0, & \text{in }B^+_{R_0}\cup B_{R_0}' \\
1, & \text{in }\left(\mathbb R^{N+1}_+\setminus B_{2R_0}^+\right)\cup\left(\mathbb R^N\setminus B_{2R_0}' \right) ,
\end{cases} \] and $\eta>0$ in $\overline{\mathbb R^{N+1}_+}\setminus \overline{B_{R_0}^+}$. We point out that \begin{equation*}
\frac{\partial \eta}{\partial t}(0,x)=0\quad \text{and}\quad
\frac{1}{t}\abs{\frac{\partial \eta}{\partial
t}(t,x)}=O(1)\quad\text{as }t\to 0 \text{ (uniformly in $x$)}. \end{equation*} We let $\Phi_1:=\eta \tilde{\Upsilon}_\varepsilon$. By its construction, $\Phi_1$ is continuous on the whole $\overline{\mathbb R^{N+1}_+}$ and $\Phi_1>0$ in $\overline{\mathbb R^{N+1}_+}\setminus \overline{B_{R_0}^+}$, whereas \eqref{eq:10} implies that $\Phi_1\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$.
Moreover direct computations yield that $\Phi_1$ weakly solves \begin{equation}\label{phi-1}
\left\{\begin{aligned}
-{\divergence}(t^{1-2s}\nabla \Phi_1) &= t^{1-2s}F_1 ,& &\text{in }\mathbb R^{N+1}_+, \\
-\lim_{t \to 0^+}t^{1-2s}\frac{\pa \Phi_1}{\pa t}&=\kappa_s\Lambda(\varepsilon)|x|^{-2s}\Tr \Phi_1, &&\text{on }\mathbb R^N,
\end{aligned}\right. \end{equation} where \[ F_1:=(2s-1)\frac{1}{t}\frac{\partial \eta}{\partial
t}\tilde{\Upsilon}_\varepsilon-2\nabla \tilde{\Upsilon}_\varepsilon\cdot\nabla
\eta-\tilde{\Upsilon}_\varepsilon \Delta \eta. \] We observe that $F_1\in C^\infty(\mathbb R^{N+1}_+)$ and $\supp(F_1) \subset \overline{B_{2R_0}^+ \setminus B_{R_0}^+}$. Given \[ f_1(x):=\kappa_s\Lambda(\varepsilon)\abs{x}^{-2s}\chi_{B_{2R_0}'\setminus
B_{R_0}'}\Phi_1(0,x), \]
we can choose a smooth, compactly supported function $f_2\colon \mathbb R^N\to \mathbb R$ such that \begin{equation}\label{eq:f_1_f_2}
f_1+f_2\geq 0\quad\text{in }\mathbb R^N,\qquad H:=f_2+\left[W+\lambda_\infty \abs{x}^{-2s} \right]\chi_{B_{2R_0}'\setminus B_{R_0}'} \Tr\Phi_1\geq 0 \quad\text{in }\mathbb R^N.
\end{equation} We also choose another smooth, positive, compactly supported function $F_2\colon \overline{\mathbb R^{N+1}_+}\to \mathbb R$ such that $F_1+F_2 \geq 0$ in $\overline{\mathbb R^{N+1}_+}$. Since $\mu(V)>0$ and $H \in L^{\frac{2N}{N+2s}}(\mathbb R^N)$, by Lax-Milgram Lemma there exists $\Phi_2 \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that \begin{equation}\label{phi-2} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \Phi_2) &=t^{1-2s}F_2 , &&\text{in }\mathbb R^{N+1}_+, \\ -\lim_{t \to 0^+}t^{1-2s}\frac{\pa \Phi_2}{\pa t}&=\kappa_s\left[V\Tr\Phi_2+H\right], &&\text{on }\mathbb R^N, \end{aligned} \right. \end{equation} holds in a weak sense. From Proposition \ref{prop:jlx} we know that $\Phi_2$ is locally H\"older
continuous in $\overline{\mathbb R^{N+1}_+} \setminus
\{(0,a_1),\dots,(0,a_k)\}$.
In order to prove that $\Phi_2$ is strictly positive in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1),\dots,(0,a_k)\}$, we compare it with the unique weak solution $\Phi_3 \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ to the problem \begin{equation}\label{eq:11} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \Phi_3) &=0 , &&\text{in }\mathbb R^{N+1}_+, \\ -\lim_{t \to 0^+}t^{1-2s}\frac{\pa \Phi_3}{\pa t}&=\kappa_s\left[V\Tr\Phi_3+H\right], &&\text{on }\mathbb R^N, \end{aligned} \right. \end{equation} whose existence is again ensured by the Lax-Milgram Lemma. The difference $\widetilde\Phi=\Phi_2-\Phi_3$ belongs to $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ and weakly solves \begin{equation*} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \widetilde\Phi) &=t^{1-2s}F_2 , &&\text{in }\mathbb R^{N+1}_+, \\ -\lim_{t \to 0^+}t^{1-2s}\frac{\pa \widetilde\Phi}{\pa t}&=\kappa_s V\Tr \widetilde\Phi, &&\text{on }\mathbb R^N. \end{aligned} \right. \end{equation*} By directly testing the above equation with $-\widetilde\Phi^-$, since $\mu(V)>0$ we obtain that $\widetilde\Phi\geq 0$ in $\mathbb R^{N+1}_+$, i.e. $\Phi_2\geq\Phi_3$. Furthermore, testing the equation for $\Phi_3$ with $-\Phi_3^-$, we also obtain that $\Phi_3\geq0$ in $\mathbb R^{N+1}_+$. The classical Strong Maximum Principle, combined with Proposition \ref{prop:cabre_sire} (whose assumption \eqref{eq:9} for \eqref{eq:11} is satisfied thanks to the assumption $V\in C^1(\mathbb R^N\setminus\{a_1,\dots,a_k\})$ and Lemma \ref{l:cabre_sire}), yields $\Phi_3>0$ in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1),\dots,(0,a_k)\}$ and hence \[ \Phi_2>0\quad\text{in}\quad \overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1),\dots,(0,a_k)\}. \]
Finally, from Lemma \ref{lemma:regul_2} and from the continuity of $\Phi_2$, there exists $C_1>0$ such that \begin{equation}\label{eq:phi_2_estim}
\frac{1}{C_1}|x|^{-(N-2s)-a_{\lambda_\infty}} \leq
\Phi_2(0,x) \leq C_1|x|^{-(N-2s)-a_{\lambda_\infty}} \quad \text{in }\mathbb R^N \setminus B'_{2R_0}. \end{equation} Now we set $\Phi=\Phi_1+\Phi_2$. We immediately observe that $\Phi\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ is locally H\"older continuous and strictly positive in $\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_1),\dots,(0,a_k)\}$. We claim that, for $\tilde{R}>0$ sufficiently large, \begin{equation}\label{phi}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}\left[V+ \frac{\nu_\infty}{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}} }\right]\Tr \Phi \Tr U \,\mathrm{d}x\geq 0, \end{equation} for all $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ with $U\geq 0$ a.e.
The function $\Phi$ weakly satisfies \begin{equation} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \Phi) &=t^{1-2s}(F_1+F_2) , &&\text{in }\mathbb R^{N+1}_+, \\ -\lim_{t \to 0^+}t^{1-2s}\frac{\pa \Phi}{\pa t}&=\kappa_s\left[ \Lambda(\varepsilon)\abs{x}^{-2s}\Tr \Phi_1+ V\Tr\Phi_2+H\right], &&\text{on }\mathbb R^N. \end{aligned} \right. \end{equation} Hence, if $U\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, $U\geq 0$ a.e., \begin{equation*}
\begin{gathered}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot\nabla U\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}\left[V+ \frac{\nu_\infty}{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}} }\right]\Tr \Phi \Tr U \,\mathrm{d}x \\
\geq \int_{\mathbb R^N}\left[\kappa_s\frac{\Lambda(\varepsilon)-\lambda_\infty-\nu_\infty }{ \abs{x}^{2s}}\chi_{\mathbb R^N\setminus B'_{\tilde{R}}}\Tr \Phi_1
+\kappa_s\frac{\Lambda(\varepsilon) -\lambda_\infty -\abs{x}^{2s}W}{ \abs{x}^{2s}}\chi_{B'_{\tilde{R}}\setminus B'_{2R_0}}\Tr\Phi_1 \right. \\
\left.-\kappa_s\left(W\Tr\Phi_1+\frac{\nu_{\infty}}{\abs{x}^{2s} }\Tr\Phi_2\right)\chi_{\mathbb R^N\setminus B'_{\tilde{R}}} +f_1+f_2\right]\Tr U\,\mathrm{d}x=:\int_{\mathbb R^N}F(x)\Tr U(x)\,\mathrm{d}x.
\end{gathered} \end{equation*} If $x\in B'_{2R_0}$, then $F(x)=f_1(x)+f_2(x)\geq0$. If $x\in B'_{\tilde{R}}\setminus B'_{2R_0}$, then from \eqref{eq:f_1_f_2}, \eqref{eq:cond_W} and \eqref{eq:cond_R_0} \[
F(x)\geq \kappa_s(\Lambda(\varepsilon) -\lambda_\infty -\abs{x}^{2s}W)\abs{x}^{-2s} \Tr\Phi_1\geq \kappa_s(\Lambda(\varepsilon)-\lambda_\infty-C_0(2R_0)^{-\de}) \abs{x}^{-2s}\Tr\Phi_1\geq 0. \] Finally, if $x\in \mathbb R^N\setminus B'_{\tilde{R}}$, then from the definition of $\Phi_1$, \eqref{eq:f_1_f_2}, \eqref{eq:phi_2_estim} and \eqref{eq:cond_W} we have that \[ F(x)\geq \kappa_s(\Lambda(\varepsilon)-\lambda_\infty-\nu_\infty)\abs{x}^{-\frac{N+2s}{2}-\varepsilon}-\kappa_s C_0 \abs{x}^{-\frac{N+2s}{2}-\varepsilon-\de}-\kappa_s C_1 {\nu_\infty\abs{x}^{-N-a_{\lambda_\infty}}}. \] Since the function $\lambda\mapsto\mu_1(\lambda)$ is strictly decreasing and $\lambda_\infty<\Lambda(\varepsilon)$, from \eqref{eq:4} it follows that $\mu_1(\lambda_\infty)>\varepsilon^2-\big(\frac{N-2s}{2}\big)^2$ which yields $-N- a_{\lambda_\infty}<-\frac{N+2s}{2}-\varepsilon$. Hence, if $\tilde{R}$ is sufficiently large, $F(x)\geq0$ for all $x\in \mathbb R^N\setminus B'_{\tilde{R}}$. This concludes the proof. \end{proof}
Combining Lemma \ref{lemma-a} with the positivity criterion Lemma \ref{Criterion} and an approximation procedure based on Lemma \ref{lemma:approx_potentials}, we prove the persistence of the positivity under perturbations at infinity for potentials in the class $\Theta$. \begin{theorem}\label{Theorem-a} Let \[
V(x)=\sum_{i=1}^{k}\frac{\la_i \chi_{B'(a_i,r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty\chi_{\mathbb R^N \setminus B'_R}(x)}{|x|^{2s}}+W(x) \in \Theta. \] Assume $\mu(V)>0$ and let $\nu_\infty \in \mathbb R$ be such that $\la_\infty+\nu_\infty<\ga_H$. Then there exists $\tilde{R}>R$ such that \[
\mu\left(V+\frac{\nu_\infty}{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}}}\right)>0. \] \end{theorem} \begin{proof}
Since $V \in \Theta$ and $\mu(V)>0$, arguing as in
\eqref{eq:criterion_2} we have that, for $\varepsilon$ chosen sufficiently
small as in \eqref{eq:12}, $\mu(V+\varepsilon
V)>\frac{\mu(V)}{2}>0$.
Moreover we can choose $\varepsilon$ such that
$\la_\infty +\nu_\infty+\varepsilon(\la_\infty + \nu_\infty)<\ga_H$
and $\lambda_i +\varepsilon \lambda_i<\gamma_H$ for all
$i=1,\dots,k,\infty$. Let $\sigma=\sigma(\varepsilon)$ be such that \begin{equation}\label{eq:pert_infty_1}
0<\sigma<\min\{S\varepsilon,S\mu(V)/2\}. \end{equation} By density of $C^\infty_{\rm c}(\mathbb R^N)$ in $L^{\frac{N}{2s}}(\mathbb R^N)$ there exists \[
\hat{V}(x)=\sum_{i=1}^{k}\frac{\la_i \chi_{B'(a_i,r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty\chi_{\mathbb R^N \setminus B'_R}(x)}{|x|^{2s}}+\hat{W}(x) \in \Theta^* \] such that $\hat{W}\in\mathcal{P}_\infty^\delta$ for some $\delta>0$ and \begin{equation}\label{eq:pert_infty_2}
\lVert\hat{V}-V\rVert_{L^{N/2s}(\mathbb R^N)}<\frac{\sigma}{1+\varepsilon}. \end{equation} Then from Lemma \ref{lemma:approx_potentials}, taking into account \eqref{eq:pert_infty_1} and \eqref{eq:pert_infty_2}, we have that \[
\mu(\hat{V}+\varepsilon\hat{V})\geq \mu(V+\varepsilon V)-(1+\varepsilon)S^{-1}\lVert\hat{V}-V\rVert_{L^{N/2s}(\mathbb R^N)}>0. \] Now, thanks to Lemma \ref{lemma-a}, there exists $\tilde{R}>R$ and a function
$\Phi \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi$ is strictly positive and locally
H\"older continuous
in $\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_1),\dots,(0,a_k)\}$ and \[
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}\left[\hat{V}+\varepsilon\hat{V}+ \frac{\nu_\infty+\varepsilon \nu_\infty}{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}} }\right]\Tr \Phi \Tr U \,\mathrm{d}x\geq 0, \] for all $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ with $U\geq 0$ a.e. Therefore Lemma \ref{Criterion} yields \[
\mu\left( \hat{V}+\frac{\nu_\infty}{\abs{x}^{2s}}\chi_{\mathbb R^{N} \setminus B'_{\tilde{ R}}} \right)\geq \frac{\varepsilon}{1+\varepsilon}. \] Finally, thanks to Lemma \ref{lemma:approx_potentials}, \eqref{eq:pert_infty_1} and \eqref{eq:pert_infty_2}, we have the estimate \[
\mu\left( V+\frac{\nu_\infty}{\abs{x}^{2s}}\chi_{\mathbb R^{N} \setminus B'_{\tilde{ R}}} \right)\geq \mu\left( \hat{V}+\frac{\nu_\infty}{\abs{x}^{2s}}\chi_{\mathbb R^{N} \setminus B'_{\tilde{ R}}} \right)-S^{-1}\lVert\hat{V}-V\rVert_{L^{N/2s}(\mathbb R^N)}>0 \] which yields the conclusion. \end{proof}
Swapping the singularity at a pole for a singularity at infinity through the Kelvin transform, we obtain the analog of Theorem \ref{Theorem-a} when perturbing the mass of a pole.
\begin{theorem}\label{Theorem-a-pole} Let \[
V(x)=\sum_{i=1}^{k}\frac{\la_i \chi_{B'(a_i,r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty\chi_{\mathbb R^N \setminus B'_R}(x)}{|x|^{2s}}+W(x) \in \Theta. \] Assume $\mu(V)>0$ and let $i_0\in\{1,\dots,k\}$ and $\nu \in \mathbb R$ be such that $\lambda_{i_0}+\nu<\ga_H$. Then there exists $\delta\in(0,r_{i_0})$ such that \[
\mu\left(V+\frac{\nu}{|x-a_{i_0}|^{2s}}\chi_{B'(a_{i_0},\delta)}\right)>0. \] \end{theorem} Before proving Theorem \ref{Theorem-a-pole}, it is convenient to make the following remark. \begin{remark}\label{rem:translation_kelvin}
\begin{enumerate}[(i)]
\item By the invariance by translation of the norm $\|\cdot\|_{\mathcal{D}^{s,2}(\R^N)}$,
we have that, if $V\in\mathcal H$, then, for any $a\in\mathbb R^N$, the translated
potential $V_a:=V(\cdot+a)$ belongs to $\mathcal H$ and $\mu(V_a)=\mu(V)$. \item If $V\in \mathcal H$ and
$V_K(x):=|x|^{-4s}V\big(\frac{x}{|x|^2}\big)$, then $V_K\in \mathcal
H$ and $\mu(V_K)=\mu(V)$. To prove this statement, we observe that,
by the change of variables $y=\frac{x}{|x|^2}$, \[
\int_{\mathbb R^N}|V_K(x)|^2u^2(x)\,\mathrm{d}x=
\int_{\mathbb R^N}|V(y)|^2(\mathcal Ku)^2(y)\,\mathrm{d}y\quad\text{for any }u\in\mathcal{D}^{s,2}(\R^N), \]
where $(\mathcal Ku)(x):=|x|^{2s-N}u\big(\frac{x}{|x|^2}\big)$ is the Kelvin transform of $u$. The claim then follows from the fact that $\mathcal K$ is an isometry on $\mathcal{D}^{s,2}(\R^N)$ (see \cite[Lemma 2.2]{Fall2012}).
\end{enumerate} \end{remark}
\begin{proof}[Proof of Theorem \ref{Theorem-a-pole}] Let $V_1(x):=V(x+a_{i_0})$. We have that \[
V_1(x)=\frac{\lambda_{i_0} \chi_{B'_{r_{i_0}}}(x)}{|x|^{2s}}+ \sum_{i\neq i_0}\frac{\la_i
\chi_{B'(a_i-a_{i_0},r_i)}(x)}{|x-(a_i-a_{i_0})|^{2s}}+
\frac{\la_\infty\chi_{\mathbb R^N \setminus B'_R}(x)}{|x|^{2s}}+W_1(x) \in \Theta \] and, in view
of Remark \ref{rem:translation_kelvin} (i), $\mu(V_1)=\mu(V)>0$.
Then we can choose some $\varepsilon$
sufficiently small so that $\mu(V_1+\varepsilon
V_1)>\frac{\mu(V)}{2}>0$ (see
\eqref{eq:criterion_2}) and $\lambda_{i_0} +\nu+\varepsilon(\lambda_{i_0} +\nu)<\ga_H$, $\lambda_i+\varepsilon \lambda_i<\gamma_H$ for all
$i=1,\dots,k,\infty$. Let $\sigma=\sigma(\varepsilon)\in(0,\min\{S\varepsilon,S\mu(V)/2\})$. By density of $C^\infty_{\rm c}(\mathbb R^N\setminus\{a_1,\dots,a_k\})$ in $L^{\frac{N}{2s}}(\mathbb R^N)$ there exists \[
V_2(x)=
\frac{\lambda_{i_0} \chi_{B'_{r_{i_0}}}(x)}{|x|^{2s}}+ \sum_{i\neq i_0}\frac{\la_i
\chi_{B'(a_i-a_{i_0},r_i)}(x)}{|x-(a_i-a_{i_0})|^{2s}}+
\frac{\la_\infty\chi_{\mathbb R^N \setminus B'_R}(x)}{|x|^{2s}}+W_2(x) \in \Theta^* \] such that $W_2\in L^{N/2s}(\mathbb R^N)\cap
L^\infty(\mathbb R^N)$ vanishes in a neighbourhood of any pole
and in a neighbourhood of $\infty$ and \begin{equation*}
\|V_2-V_1\|_{L^{N/2s}(\mathbb R^N)}<\frac{\sigma}{1+\varepsilon}. \end{equation*}
Let $V_3(x):=|x|^{-4s}V_2\big(\frac{x}{|x|^2}\big)$. Then \[
V_3\in C^1\bigg(\mathbb R^N\setminus\left\{0,\tfrac{a_i-a_{i_0}}{|a_i-a_{i_0}|^2}\right\}_{i\neq
i_0}\bigg) \] and there exists $r>0$ such that \[
V_3(x)=\frac{\lambda_{i_0}}{|x|^{2s}} \quad\text{in }\mathbb R^N\setminus B_r'. \] Moreover, from Remark \ref{rem:translation_kelvin} (ii) and Lemma \ref{lemma:approx_potentials} it follows that $V_3\in\mathcal H$ and \begin{align*} \mu(V_3+\varepsilon V_3)&=\mu(V_2+\varepsilon V_2) \\
&\geq \mu(V_1+\varepsilon V_1)-S^{-1}(1+\varepsilon)\|V_1-V_2\|_{L^{N/2s}(\mathbb R^N)} >\frac{\mu(V)}2-S^{-1}\sigma>0. \end{align*} From Lemma \ref{lemma-a} we deduce that there exists $\tilde{R}>r$ and a function
$\Phi \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi$ is strictly positive and locally
H\"older continuous
in $\overline{\mathbb R^{N+1}_+} \setminus
\left\{0,\tfrac{a_i-a_{i_0}}{|a_i-a_{i_0}|^2}\right\}_{i\neq
i_0}$ and \[
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}\left[V_3+\varepsilon V_3+ \frac{\nu+\varepsilon \nu}{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}} }\right]\Tr \Phi \Tr U \,\mathrm{d}x\geq 0, \] for all $U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ with $U\geq 0$ a.e. Therefore Lemma \ref{Criterion} yields \[
\mu\left( V_3+\frac{\nu}{\abs{x}^{2s}}\chi_{\mathbb R^{N} \setminus B'_{\tilde{ R}}} \right)\geq \frac{\varepsilon}{1+\varepsilon}. \] From Remark \ref{rem:translation_kelvin} (ii) we have that $\mu\left( V_3+\frac{\nu}{\abs{x}^{2s}}\chi_{\mathbb R^{N} \setminus
B'_{\tilde{ R}}} \right)=\mu\left(
V_2+\frac{\nu}{\abs{x}^{2s}}\chi_{B'_{1/\tilde{ R}}} \right)\geq \frac{\varepsilon}{1+\varepsilon}$. Hence, letting $\delta=1/\tilde R$, from Remark \ref{rem:translation_kelvin} (i) and Lemma \ref{lemma:approx_potentials} we deduce that \begin{align*}
\mu\bigg(V+&\frac{\nu}{|x-a_{i_0}|^{2s}}\chi_{B'(a_{i_0},\delta)}\bigg)=
\mu\left(V_1+\frac{\nu}{|x|^{2s}}\chi_{B'_\delta}\right)\\ &\geq
\mu\left(V_2+\frac{\nu}{|x|^{2s}}\chi_{B'_\delta}\right) -S^{-1}\lVert V_1-V_2\rVert_{L^{N/2s}(\mathbb R^N)} \geq \frac\varepsilon{1+\varepsilon}-\frac{S^{-1}\sigma}{1+\varepsilon}>0 \end{align*} which yields the conclusion. \end{proof}
\begin{corollary}\label{cor:pert-at-pole}
Let \[
V(x)=\sum_{i=1}^{k}\frac{\la_i \chi_{B'(a_i,r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty\chi_{\mathbb R^N \setminus B'_R}(x)}{|x|^{2s}}+W(x) \in \Theta \] be such that $\mu(V)>0$. Then there exists \begin{equation}\label{eq:15}
\widetilde V(x)=\sum_{i=1}^{k}\frac{\tilde \la_i \chi_{B'(a_i,\tilde
r_i)}(x)}{|x-a_i|^{2s}}+\frac{\la_\infty\chi_{\mathbb R^N \setminus
B'_R}(x)}{|x|^{2s}}+\widetilde W(x) \in \Theta \end{equation} such that \[
\widetilde V- V\in C^\infty(\mathbb R^N\setminus\{a_1,\dots,a_k\}),\quad
\widetilde V\geq V,\quad \mu( \widetilde V)>0,\quad\text{and}\quad \tilde \la_i>0\text{ for all }i=1,\dots,k. \] \end{corollary} \begin{proof} For every $i=1,\dots,k$, let $\nu_i$ be such that $\nu_i>0$ and $\lambda_i+\nu_i\in(0,\gamma_H)$. From Theorem \ref{Theorem-a-pole} we have that, for every $i=1,\dots,k$, there exists $\delta_i$ such that, letting \[
\widehat V=V+\sum_{i=1}^k\frac{\nu_i}{|x-a_{i}|^{2s}}\chi_{B'(a_{i},\delta_i)}, \] $\mu(\widehat V)>0$. Let us consider a cut-off function $\zeta:\mathbb R^N\to \mathbb R$ such that $\zeta\in C^\infty(\mathbb R^N)$, $0\leq \zeta(x)\leq 1$,
$\zeta(x)=1$ for $|x|\leq\frac12$, and $\zeta(x)=0$ for
$|x|\geq1$. Let \[
\widetilde V(x)=V+\sum_{i=1}^k\frac{\nu_i}{|x-a_{i}|^{2s}}\zeta\left(\frac{x-a_i}{\delta_i}\right). \] Then $\widetilde V- V\in C^\infty(\mathbb R^N\setminus\{a_1,\dots,a_k\})$ and $\widetilde V\geq V$. Moreover $\widetilde V$ is of the form \eqref{eq:15} with $\tilde\lambda_i=\lambda_i+\nu_i>0$ and, in view of \eqref{eq:def_inf} and the fact that $\widetilde V\leq \widehat V$, $\mu (\widetilde V)\geq \mu(\widehat V)>0$. The proof is thereby complete. \end{proof}
\section{Localization of Binding}\label{sec:localization}
This section is devoted to the proof of Theorem \ref{thm:separation}, which is the main tool needed in order to prove our main result. Indeed, this tool ensures that, inside the class $\Theta$, the sum of two potentials whose associated operators are positive still gives rise to a positive operator, provided one of the potentials is translated sufficiently far away.
For any $\de>0$ and $a_1,\dots,a_k\in \mathbb R^N$, we define
\begin{equation}\label{eq:def_P^delta}
\mathcal{P}^\de_{a_1,\dots,a_k}:= \mathcal{P}^\de_\infty\cap\bigg(\bigcap_{j=1}^k \mathcal{P}^\de_{a_j}\bigg) \end{equation} where $\mathcal{P}^\de_\infty$ is defined in \eqref{eq:def_P_infty} and, for all $j=1,\dots,k$,
\begin{equation*} \begin{aligned} \mathcal{P}^\de_{a_j}= \bigg\{ f\colon \mathbb R^N\to\mathbb R \colon &f\in C^1(B'(a_j,R_j)\setminus\{a_j\})\text{ for some $R_j>0$}\\[-10pt]
&\text{and}~\abs{f(x)}+\abs{(x-a_j)\cdot \nabla f(x)}=O(\abs{x-a_j}^{-2s+\de})~\text{as }x\to a_j\bigg\}.
\end{aligned}
\end{equation*}
\begin{lemma}\label{separation-lemma} Let \begin{gather*}
V_1(x)=\sum_{i=1}^{k_1}\frac{\la_i^1\chi_{B'(a_i^1,r_i^1)}(x)}{|x-a_i^1|^{2s}}+\frac{\la_\infty^1 \chi_{\mathbb R^N \setminus B'_{R_1}}(x)}{|x|^{2s}}+W_1(x) \in \Theta^*, \\
V_2(x)=\sum_{i=1}^{k_2}\frac{\la_i^2\chi_{B'(a_i^2,r_i^2)}(x)}{|x-a_i^2|^{2s}}+\frac{\la_\infty^2 \chi_{\mathbb R^N \setminus B'_{R_2}}(x)}{|x|^{2s}}+W_2(x) \in \Theta^*, \end{gather*} with $W_1\in \mathcal{P}^\de_{a^1_1,\dots,a^1_{k_1}}$, $W_2\in \mathcal{P}^\de_{a^2_1,\dots,a^2_{k_2}}$ for some $\de>0$.
If $\mu(V_1), \mu(V_2)>0$ and $\la^1_\infty +\la_\infty^2< \ga_H$, then there exists $R>0$ such that for every $y \in \mathbb R^N\setminus \overline{B'_R}$ there exists $\Phi_y \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi_y$ is strictly positive and locally
H\"older continuous
in $\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_i^1),(0, a_i^2+y) \}_{i=1,
\dots, k_j, j=1,2}$ and \begin{equation}\label{eq:16} \int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi_y\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x\geq \kappa_s\int_{\mathbb R^N}(V_1(x)+V_2(x-y))\Tr\Phi_y\Tr U\,\mathrm{d}x \end{equation} for all $U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, with $U\geq 0$ a.e. \end{lemma} \begin{proof} First of all we observe that it is not restrictive to assume that $\lambda_i^j>0$ for all $i=1,\dots,k_j$, $j=1,2$. Indeed, letting $V_1,V_2$ as in the assumptions, from Corollary \ref{cor:pert-at-pole} there exist $\widetilde V_1,\widetilde V_2\in\Theta^*$ with positive masses at poles such that $\widetilde V_j\geq V_j$ and $\mu( \widetilde V_j)>0$ for $j=1,2$. If the theorem is true under the further assumption of positivity of masses at poles, we conclude that, for every $y$ with
$|y|$ sufficiently large, there exists $\Phi_y \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ strictly positive and locally
H\"older continuous
in $\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_i^1),(0, a_i^2+y) \}_{i=1,
\dots, k_j, j=1,2}$ such that \eqref{eq:16} holds with $\widetilde
V_1(x)+\widetilde V_2(x-y)$ in the right hand side integral instead of $V_1(x)+V_2(x-y)$. Since $\widetilde V_1(x)+\widetilde V_2(x-y)\geq V_1(x)+V_2(x-y)$ we obtain \eqref{eq:16}. Then we can assume that $\lambda_i^j>0$ for all $i=1,\dots,k_j$, $j=1,2$, without loss of generality.
Let $\varepsilon\in(0,\gamma_H)$ be such that $\la^1_\infty+\la^2_\infty<\ga_H-\varepsilon$,
$\la^1_\infty<\ga_H-\varepsilon$, and $\la^2_\infty<\ga_H-\varepsilon$ and let
$\Lambda:=\ga_H-\varepsilon$. Let us set \[ \nu_\infty^1:=\Lambda-\la^1_\infty,\quad \nu_\infty^2:=\Lambda-\la^2_\infty, \] so that $\nu_\infty^1,\nu_\infty^2>0$. Let $0<\eta<1$ be such that \begin{equation}\label{E1} \la_\infty^2<\nu_\infty^1(1-2\eta)\quad\text{and}\quad \la_\infty^1<\nu_\infty^2(1-2\eta). \end{equation} Let us choose $\bar{R}>0$ large enough so that \[
\cup_{i=1}^{k_j} B'(a_i^j,r_i^j) \subset B'_{\bar{R}} \quad \text{for }j=1,2. \] We observe that, by Theorem \ref{Theorem-a}, there exists $\tilde{R}_j>0$ such that \[
\mu\left(V_j+\frac{\nu_\infty^j }{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}_j}}\right)>0. \] Since $\lambda_i^j>0$ implies that $a_{\lambda_i^j}<0$, we can fix some $\sigma>0$ such that $\sigma<2s$ and $\sigma<-a_{\lambda_i^j}$ for all $i=1,\dots,k_j$, $j=1,2$. Let us consider, for $j=1,2$, $p_j\in C^\infty(\mathbb R^N \setminus \{a_1^j,\dots,a_{k_j}^j\})\cap \mathcal{P}^\sigma_{a^j_1,\dots,a^j_{k_j}}$ such that $p_j(x)>0$ for all $x\in\mathbb R^N$ and \begin{equation}\label{p-j}
p_j(x)\geq \frac{1}{|x-a_i^j|^{ 2s-\sigma}} \text{ if } x\in B'(a_i^j, r_i^j),\quad p_j(x)\geq 1 \text{ if }x\in B'_{\bar{R}} \setminus \cup_{i=1}^{k_j}B'(a_i^j, r_i^j). \end{equation} Since $p_j\in L^{\frac{N}{2s}}(\mathbb R^N)$ satisfies the hypotheses of Lemma \ref{lemma:compact_emb}, the infimum \[ \mu_j=\inf_{\substack{U \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \\ \Tr U\nequiv 0
}}\frac{\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla
U|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}\Big[V_j+\tfrac{\nu_\infty^j}{|x|^{2s}}\chi_{\mathbb R^N
\setminus B'_{\tilde{R}_j}}\Big]\abs{\Tr
U}^2\,\mathrm{d}x}{\int_{\mathbb R^{N}}p_j\abs{\Tr U}^2\,\mathrm{d}x}>0 \] is achieved by some nonnegative $\Psi_j \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, for $j=1,2$. In addition, $\Psi_j$ weakly solves \begin{equation}\label{E2} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \Psi_j) &= 0, &&\text{in } \mathbb R^{N+1}_+, \\
-\lim_{t \to 0^+}t^{1-2s}\frac{\pa \Psi_j}{\pa t}&=\kappa_s\left[V_j+\frac{\nu_\infty^j }{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}_j}}+\mu_j p_j\right]\Tr\Psi_j, && \text{on } \mathbb R^{N}. \end{aligned}\right. \end{equation} From Proposition \ref{prop:jlx} we know that $\Psi_j$ is locally H\"older continuous in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1^j),\dots,(0,a_{k_j}^j)\}$.
In order to prove that $\Psi_j$ is strictly positive in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1^j),\dots,(0,a_{k_j}^j)\}$, we compare it with the unique weak solution $\widetilde\Psi_j \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ to the problem \begin{equation}\label{eq:13} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \widetilde\Psi_j) &=0 , &&\text{in }\mathbb R^{N+1}_+, \\ -\lim_{t \to 0^+}t^{1-2s}\frac{\pa \widetilde\Psi_j}{\pa t}&=\kappa_s V_j\Tr \widetilde\Psi_j+\kappa_s\mu_j p_j\Tr\Psi_j, &&\text{on }\mathbb R^N, \end{aligned} \right. \end{equation} whose existence directly follows from the Lax-Milgram Lemma. The difference $\widetilde\Phi_j=\Psi_j - \widetilde\Psi_j$ belongs to $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ and weakly solves \begin{equation*} \left\{\begin{aligned} -{\divergence}(t^{1-2s}\nabla \widetilde\Phi_j) &=0 , &&\text{in }\mathbb R^{N+1}_+, \\ -\lim_{t \to 0^+}t^{1-2s}\frac{\pa \widetilde\Phi_j}{\pa t}&=\kappa_s V_j\Tr \widetilde\Phi_j+\kappa_s
\frac{\nu_\infty^j }{|x|^{2s}}\chi_{\mathbb R^N \setminus B'_{\tilde{R}_j}}\Tr\Psi_j, &&\text{on }\mathbb R^N. \end{aligned} \right. \end{equation*} By testing the above equation with $-\widetilde\Phi_j^-$ and recalling that $\mu(V_j)>0$, we obtain that $\widetilde\Phi_j\geq 0$ in $\mathbb R^{N+1}_+$ and hence $\Psi_j \geq \widetilde\Psi_j$. Moreover, testing \eqref{eq:13} with $-\widetilde\Psi_j^-$, we also obtain that $\widetilde\Psi_j\geq0$ in $\mathbb R^{N+1}_+$. From the classical Strong Maximum Principle and Proposition \ref{prop:cabre_sire} (whose assumption \eqref{eq:9} for \eqref{eq:13} is satisfied thanks to Lemma \ref{l:cabre_sire} and the assumption $V_j\in\Theta^*$) it follows that $\widetilde\Psi_j>0$ in $\overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1^j),\dots,(0,a_{k_j}^j)\}$ and hence \[ \Psi_j>0\quad\text{in}\quad \overline{\mathbb R^{N+1}_+}\setminus \{(0,a_1^j),\dots,(0,a_{k_j}^j)\}. \] Lemma \ref{lemma:regul_2} yields \[
\lim_{|x| \to \infty}\Psi_j(0,x)|x|^{N-2s+a_\Lambda}=\ell_j>0, \]
for some $\ell_j>0$ (see \eqref{eq:6} for the notation $a_\Lambda$). Hence, the function $\Phi_j(t,x):=\frac{\Psi_j(t,x)}{\ell_j}$ satisfies \eqref{E2} and $\Phi_j(0,x) \sim |x|^{-(N-2s+a_\Lambda)}$ for $\abs{x}\to\infty.$ Therefore, there exists $\rho> \max\{\tilde{R}_1,\tilde{R}_2,\bar{R} \}$ such that \begin{equation}\label{E3}
(1-\eta^2)|x|^{-(N-2s+a_\Lambda)} \leq \Phi_j(0,x) \leq (1+\eta) |x|^{-(N-2s+a_\Lambda)} \end{equation} and \begin{equation}\label{E4}
|W_1(x)| \leq \frac{\eta \nu_\infty^2}{|x|^{2s}},\qquad |W_2(x)| \leq \frac{\eta \nu_\infty^1}{|x|^{2s}} \end{equation} for all $x\in\mathbb R^N\setminus B'_\rho$. Also, from Lemma \ref{lemma:regul_1} we know that there exists $C>0$ such that \begin{equation}\label{phi-j}
\frac{1}{C}|x-a_i^j|^{a_{\lambda_i^j}} \leq \Phi_j(0,x) \leq C|x-a_i^j|^{a_{\lambda_i^j}} \quad \text{in }B'(a_i^j,r_i^j), \end{equation} for $i=1, \dots k_j,~j=1,2$. For any $y \in \mathbb R^N$, we define \[
\Phi_y(t,x):=\nu_\infty^2\Phi_1(t,x)+\nu_\infty^1\Phi_2(t,x-y) \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}). \] Then \begin{equation*} \int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi_y\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x- \kappa_s\int_{\mathbb R^N}(V_1(x)+V_2(x-y))\Tr\Phi_y\Tr U\,\mathrm{d}x =\int_{\mathbb R^N}g_y(x)\Tr U\,\mathrm{d}x \end{equation*} for all $U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, where \begin{gather*}
g_y(x):=\kappa_s\bigg[\mu_1 \nu_\infty^2 p_1(x)\Phi_1(0,x)
+\frac{\nu_\infty^1 \nu_\infty^2}{|x|^{2s}}\chi_{\mathbb R^N \setminus
B'_{\tilde R_1 }}(x)\Phi_1(0,x) +\mu_2 \nu_\infty^1 p_2(x-y)\Phi_2(0,x-y)\\
+\frac{\nu_\infty^1 \nu_\infty^2}{|x-y|^{2s}}\chi_{\mathbb R^N \setminus
B'(y,\tilde R_2)}(x)\Phi_2(0,x-y)-\nu_\infty^1 V_1(x)\Phi_2(0,x-y)-\nu_\infty^2 V_2(x-y)\Phi_1(0,x)\bigg]. \end{gather*} Therefore, to conclude the proof it is enough to show that $g_y\geq0$ a.e. in $\mathbb R^N$.
From \eqref{E1}, \eqref{E3} and \eqref{E4}, it follows that in $\mathbb R^N \setminus \big(B'_\rho \cup B'(y,\rho)\big)$ \begin{align*}
g_y(x)&\geq \kappa_s\bigg[\frac{\nu_\infty^1
\nu_\infty^2}{|x|^{2s}}\Phi_1(0,x)+
\frac{\nu_\infty^1 \nu_\infty^2}{|x-y|^{2s}}\Phi_2(0,x-y)\\
&\qquad -\nu_\infty^1 \left(\frac{\la_\infty^1}{|x|^{2s}}+W_1(x)\right)
\Phi_2(0,x-y)-\nu_\infty^2 \left(\frac{\la_\infty^2}{|{x-y}|^{2s}}+W_2(x-y)\right)\Phi_1(0,x)\bigg]\\
& >\kappa_s\nu_\infty^1\nu_\infty^2(1-\eta^2)\bigg[|x|^{-(N+a_\Lambda)}
+|x-y|^{-(N+a_\Lambda)}\\
&\qquad -|x|^{-(N-2s+a_\Lambda)}|x-y|^{-2s}-|x|^{-2s}
|x-y|^{-(N-2s+a_\Lambda)}\bigg]\\
&=\kappa_s\nu_\infty^1\nu_\infty^2(1-\eta^2)\left(\frac1{|x|^{N-2s+a_\Lambda}}-
\frac1{|x-y|^{N-2s+a_\Lambda}}\right)
\left(\frac1{|x|^{2s}}-
\frac1{|x-y|^{2s}}\right) \geq 0. \end{align*}
For $|y|>R>2\rho$, we have $B'_\rho \cap B'(y,\rho)=\emptyset$. From \eqref{p-j}, \eqref{E3}, \eqref{E4}, \eqref{phi-j} and the choice of $\sigma$ we have that, in $B'(a_i^1,r_i^1)$, \begin{align*}
g_y(x)&\geq\kappa_s\bigg[\mu_1 \nu_\infty^2 p_1(x)\Phi_1(0,x)+\frac{\nu_\infty^1 \nu_\infty^2}{|x-y|^{2s}}\Phi_2(0,x-y) \\ &\qquad\qquad - \nu_\infty^1 V_1(x)\Phi_2(0,x-y)-\nu_\infty^2
V_2(x-y)\Phi_1(0,x)\bigg]\\ & \geq
\kappa_s|x-a_i^1|^{a_{\lambda_i^1}-2s+\sigma}\bigg[ \frac{\mu_1
\nu_\infty^2}{C}\\
&\qquad\qquad-\nu_\infty^1(1+\eta)|x-y|^{-(N-2s+a_\Lambda)}|x-a_i^1|^{-a_{\lambda_i^1}-\sigma}(\lambda_i^1+\|W_1\|_{L^{\infty}(\mathbb R^N)}
|x-a_i^1|^{2s})\\
&\qquad\qquad-\nu_\infty^2\nu_\infty^1(1-\eta)C |x-y|^{-2s}|x-a_i^1|^{2s-\sigma}\bigg] \\
&\geq \kappa_s|x-a_i^1|^{a_{\lambda_i^1}-2s+\sigma} \bigg[ \frac{\mu_1 \nu_\infty^2}{C} +o(1) \bigg], \end{align*}
as $|y| \to \infty$. Now let $x\in B'_\rho \setminus \left(\cup_{i=1}^{k_1}B'(a_i^1,r_i^1)\right)$: since $\Phi_1$ is positive and continuous we have $\tilde{C}^{-1}>\Phi_1(0,x)>\tilde{C}$, for some $\tilde{C}>0$, and so, thanks to \eqref{p-j}, \eqref{E3} and \eqref{E4}, there holds \[
g_y(x) \geq \kappa_s\mu_1 \nu_\infty^2 \tilde C +o(1), \]
as $|y| \to \infty$. One can similarly prove that, for $\abs{y}$ sufficiently large, $g_y(x)\geq 0$ in $B'(y,\rho)$ as well. The proof is thereby complete. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:separation}]
First, let
\begin{equation}\label{L-1}
0<\varepsilon<\min\left\{2S \mu(V_j),\frac{\mu(V_j)}{2}\bigg[ \frac{1}{\ga_H} \left( \sum_{i=1}^k |\la_i^j| +|\la_\infty^j|\right) +S^{-1}\norm{W}_{L^{N/2s}(\mathbb R^N)} \bigg]^{-1} \right\}
\end{equation}
for $j=1,2$, such that, in addition,
$\la_\infty^1+\la_\infty^2+\varepsilon ( \la_\infty^1 +\la_\infty^2
)<\ga_H$ and $\lambda_i^j +\varepsilon \lambda_i^j<\gamma_H$
for all $i=1,\dots,k_j,\infty$. Similarly to \eqref{eq:criterion_2}, one can prove that $\mu(V_j+\varepsilon V_j)>\frac{\mu(V_j)}{2}>0$ for $j=1,2$. Moreover, let $\sigma=\sigma(\varepsilon)$ be such that
\begin{equation}\label{eq:separation_1}
0<\sigma< \min\left\{\frac{S\mu(V_1)}{2},\frac{S\mu(V_2)}{2},\frac{S\varepsilon}{2}\right\}.
\end{equation}
Let, for $j=1,2$,
\[
\hat{V}_j(x)=\sum_{i=1}^{k_j}\frac{\la_i^j\chi_{B'(a_i^j,r_i^j)}(x)}{|x-a_i^j|^{2s}}+\frac{\la_\infty^j \chi_{\mathbb R^N \setminus B'_{R_j}}(x)}{|x|^{2s}}+\hat{W}_j(x) \in \Theta^*,
\]
be such that $\hat{W}_j\in
\mathcal{P}^\de_{a^j_1,\dots,a^j_{k_j}}$ for some $\delta>0$ and
\begin{equation}\label{eq:separation_2}
\lVert\hat{V}_j-V_j\rVert_{L^{N/2s}(\mathbb R^N)}<\frac{\sigma}{1+\varepsilon}.
\end{equation}
From Lemma \ref{lemma:approx_potentials}, \eqref{eq:separation_1} and \eqref{eq:separation_2} we deduce that
\[
\mu(\hat{V}_j+\varepsilon\hat{V}_j)\geq \mu(V_j+\varepsilon V_j)-(1+\varepsilon)S^{-1}\lVert\hat{V}_j-V_j\rVert_{L^{N/2s}(\mathbb R^N)}>0.
\]
Hence we infer from Lemma \ref{separation-lemma} that there
exists $R>0$ such that, for all
$y\in\mathbb R^N\setminus \overline{B'_R}$, there exists
$\Phi_y \in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that $\Phi_y$ is strictly positive and locally
H\"older continuous
in $\overline{\mathbb R^{N+1}_+} \setminus \{(0,a_i^1),(0, a_i^2+y) \}_{i=1,
\dots, k_j, j=1,2}$ and
\begin{equation*}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla \Phi_y\cdot \nabla U\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}\left[\hat{V}_1(x)+\varepsilon\hat{V}_1(x)+\hat{V}_2(x-y)+\varepsilon\hat{V}_2(x-y)\right]\Tr\Phi_y\Tr U\,\mathrm{d}x\geq 0
\end{equation*}
for all $U\in \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, with $U\geq 0$ a.e. Therefore, thanks to the positivity criterion (Lemma \ref{Criterion}), we know that
\[
\mu(\hat{V}_1(\cdot)+\hat{V}_2(\cdot-y))\geq \frac{\varepsilon}{\varepsilon+1}.
\] Combining Lemma \ref{lemma:approx_potentials}
with \eqref{eq:separation_1} and \eqref{eq:separation_2}, we
finally deduce that
\begin{multline*}
\mu(V_1(\cdot)+V_2(\cdot-y))\geq \mu(\hat{V}_1(\cdot)+\hat{V}_2(\cdot-y)) \\
-S^{-1}\lVert V_1-\hat{V}_1 \rVert_{L^{N/2s}(\mathbb R^N)} -S^{-1}\lVert V_2(\cdot-y)-\hat{V}_2(\cdot-y) \rVert_{L^{N/2s}(\mathbb R^N)} >0,
\end{multline*}
thus completing the proof. \end{proof}
\section{Proof of Theorem \ref{theorem}}\label{sec:thm} In order to prove Theorem \ref{theorem}, we first need the following lemma, concerning the left-hand side in Hardy inequality \eqref{eq:hardy}. \begin{lemma}\label{lemma:conv_hardy}
We have that
\begin{equation*}
\lim_{\abs{\xi}\to 0}\int_{\mathbb R^N}\frac{\abs{u(x)}^2}{\abs{x+\xi}^{2s}}\,\mathrm{d}x=\int_{\mathbb R^N}\frac{\abs{u(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x \quad\text{and}\quad
\lim_{\abs{\xi}\to +\infty}\int_{\mathbb R^N}\frac{\abs{u(x)}^2}{\abs{x+\xi}^{2s}}\,\mathrm{d}x=0 \end{equation*} for any $u\in\mathcal{D}^{s,2}(\R^N)$. \end{lemma} \begin{proof}
The proof easily follows from density of
$C_c^\infty(\mathbb R^N\setminus\{0\})$ in $\mathcal{D}^{s,2}(\R^N)$ (see Lemma \ref{l:density}),
the Dominated Convergence Theorem and the fractional Hardy inequality \eqref{eq:hardy}. \end{proof}
We are now able to prove Theorem \ref{theorem}.
\begin{proof}[Proof of Theorem \ref{theorem}]
First we prove that condition \eqref{lambda-i} is sufficient for the
existence of at least one configuration of poles $a_1,\dots,a_k$
such that the quadratic form associated to
$\mathcal{L}_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}$ is positive
definite. In order to do this, we argue by induction on the number
of poles $k$. For any $k$ we assume the masses to be sorted in
increasing order $\lambda_1\leq\cdots\leq \lambda_k$. If $k=2$ the
claim is proved in Remark \ref{rmk:2_poles}. Suppose now the claim
is proved for $k-1$. If $\lambda_k\leq 0$ the proof is trivial, so
let us assume $\lambda_k>0$: since \eqref{lambda-i} holds, it is
true also for $\lambda_1,\dots,\lambda_{k-1}$, hence there exists a
configuration of poles $a_1,\dots,a_{k-1}$ such that
$Q_{\lambda_1,\dots,\lambda_{k-1},a_1,\dots,a_{k-1}}$ is positive
definite. If we let
\[
V_1(x)=\sum_{i=1}^{k-1}\frac{\lambda_i}{\abs{x-a_i}^{2s}}\quad\text{and}\quad V_2(x)=\frac{\lambda_k}{\abs{x}^{2s}},
\] we have that $V_1,V_2\in\Theta$ satisfy the assumptions of Theorem \ref{thm:separation}. Therefore there exists $a_k\in\mathbb R^N$ such that \[
\mathcal{L}_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}=(-\Delta)^s-(V_1+V_2(\cdot-a_k)) \] is positive definite. This concludes the first part.
We now prove the necessity of condition \eqref{lambda-i}. Let $\varepsilon>0$ be such that \begin{equation}\label{eq:proof_thm_1}
\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i\int_{\mathbb R^N}\frac{\abs{u(x)}^2}{\abs{x-a_i}^{2s}}\,\mathrm{d}x\geq \varepsilon\norm{u}_{\mathcal{D}^{s,2}(\R^N)}^2 \end{equation} for all $u\in\mathcal{D}^{s,2}(\R^N)$ and let $\delta\in(0,\varepsilon\ga_H)$. Assume by contradiction that $\lambda_j\geq \ga_H$ for some $j\in\{1,\dots,k\}$. By optimality of $\ga_H$ in Hardy inequality \eqref{eq:hardy} and by density of $C_c^\infty(\mathbb R^N)$ in $\mathcal{D}^{s,2}(\R^N)$, we have that there exists $\varphi\in C_c^\infty(\mathbb R^N)$ such that \begin{equation}\label{eq:proof_thm_2}
\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-\lambda_j\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^{2}}{\abs{x}^{2s}}\,\mathrm{d}x<\delta \int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x. \end{equation} If we let $\varphi_\rho:=\rho^{-\frac{N-2s}{2}}\varphi(x/\rho)$, we have that \begin{equation}\label{eq:proof_thm_3}
\begin{aligned}
Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}(\varphi_\rho(\cdot-a_j))&=\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-\lambda_j\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x-\sum_{i\neq j}\lambda_i\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x-\frac{a_i-a_j}{\rho}}^{2s}}\,\mathrm{d}x \\
& \to \norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-\lambda_j\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x \qquad\text{as }\rho\to 0^+, \end{aligned} \end{equation} in view of Lemma \ref{lemma:conv_hardy}. Combining \eqref{eq:proof_thm_1}, \eqref{eq:proof_thm_2}, \eqref{eq:proof_thm_3} and Hardy inequality \eqref{eq:hardy} we obtain \begin{equation*}
\varepsilon\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2\leq \norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2- \lambda_j\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x \\ <\delta \int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x \leq\frac{\de}{\ga_H}\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2, \end{equation*} which is a contradiction, because of the choice of $\delta$.
Now suppose that $K:=\sum_{i=1}^k \lambda_i\geq \ga_H$. Arguing analogously, there exists $\varphi\in C_c^\infty(\mathbb R^N)$ such that
\begin{equation*}\label{eq:proof_thm_4}
\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-K\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x<\delta \int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x. \end{equation*}
The function $\varphi_\rho(x):=\rho^{-\frac{N-2s}{2}}\varphi(x/\rho)$ satisfies
\begin{equation*}
\begin{aligned}
Q_{\lambda_1,\dots,\lambda_k,a_1,\dots,a_k}(\varphi_\rho)&=\norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-\sum_{i=1}^k\lambda_i\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x-a_i/\rho}^{2s}}\,\mathrm{d}x \\
&\to \norm{\varphi}_{\mathcal{D}^{s,2}(\R^N)}^2-K\int_{\mathbb R^N}\frac{\abs{\varphi(x)}^2}{\abs{x}^{2s}}\,\mathrm{d}x\qquad\text{as }\rho\to+\infty,
\end{aligned}
\end{equation*} thanks to Lemma \ref{lemma:conv_hardy}. With the same argument as above, we again reach a contradiction. \end{proof}
\section{Proof of Proposition \ref{Prop-1}}\label{sec:prop} Finally, in this section we present the proof of Proposition \ref{Prop-1}, that is independent of the previous results from the point of view of the technical approach.
\begin{proof}[Proof of Proposition \ref{Prop-1}]
First, let us denote
$\bar{\la}=\max \{0, \la_1,\dots, \la_k, \la_\infty \}$. By
hypothesis there exists
$\alpha\in\big(0,1-\frac{\bar{\lambda}}{\ga_H}\big)$ such that
$\mu(V)\leq 1-\frac{\bar{\lambda}}{\ga_H}-\alpha$. From Lemma
\ref{Lem-2} we know that there exists $\delta>0$ such that, denoting
by \[
\bar{V}=\sum_{i=1}^{k}\frac{\la_i
\zeta(\frac{x-a_i}\delta)}{|x-a_i|^{2s}}+\frac{\la_\infty
\tilde\zeta(\frac xR)}{|x|^{2s}}, \] with $\zeta,\tilde\zeta$ being as in Lemma \ref{Lem-2}, we have that \begin{equation}\label{E-1} \mu(\bar{V}) \geq 1-\frac{\bar{\la}}{\ga_H}-\frac{\al}{2}. \end{equation} if $\bar{\lambda}>0$ and $\mu(\bar{V})\geq 1$ if $\bar{\lambda}=0$. We can write $V=\bar{V}+\bar{W}$ for some $\bar{W} \in L^{N/2s}(\mathbb R^N)$. Now let $\{U_n\}_n\subseteq\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ be a minimizing sequence for $\mu(V)$, i.e. \begin{equation}\label{E-2}
\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U_n|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V\abs{\Tr U_n}^2\,\mathrm{d}x=\mu(V)+o(1),\quad\text{as }n\to \infty \end{equation}
and $\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U_n|^2\, \mathrm{d}t \,\mathrm{d}x=1$. Since $\{U_n\}_n$ is bounded in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, there exists $U\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ such that, up to a subsequence (still denoted by $\{U_n\}_n$), \begin{equation}\label{E-3} U_n\rightharpoonup U\quad\text{weakly in }\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})\quad\text{and}\quad
U_n\to U\quad\text{a.e. in }\mathbb R^{N+1}_+,
\end{equation} as $n\to \infty$. There holds \[
\begin{aligned}
\mu(\bar{V})& \leq \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U_n|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V\abs{\Tr U_n}^2\,\mathrm{d}x+\kappa_s\int_{\mathbb R^N}\bar{W}\abs{\Tr U_n}^2\,\mathrm{d}x\\ &=\mu(V)+\kappa_s\int_{\mathbb R^N}\bar{W}\abs{\Tr U_n}^2\,\mathrm{d}x+o(1), \quad\text{as } n \to \infty.
\end{aligned} \] Hence, from \eqref{E-3}, \eqref{E-1}, the choice of $\alpha$, and Lemma \ref{lemma:compact_emb} we deduce that (if $\bar{\lambda}>0$) \[
1-\frac{1}{\ga_H}\bar{\la}-\frac{\al}{2} \leq \mu(V) +\kappa_s\int_{\mathbb R^N}\bar{W}\abs{\Tr U}^2\,\mathrm{d}x\leq 1-\frac{\bar{\lambda}}{\ga_H}-\al+\kappa_s\int_{\mathbb R^N}\bar{W}\abs{\Tr U}^2\,\mathrm{d}x, \] and so $\kappa_s\int_{\mathbb R^N}\bar{W}\abs{\Tr U}^2\,\mathrm{d}x \geq \frac{\al}{2}>0$, which implies that $U\nequiv 0$. The same conclusion easily follows in the case $\bar{\lambda}=0$. From the weak convergence $U_n\rightharpoonup U$ in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$, the continuity of the trace map $\Tr: \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \to L^{N/2s}(\mathbb R^N)$ and the definition of $\mu(V)$, we have that \begin{align*}
\mu(V)&\leq \frac{\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V \abs{\Tr U}^2\,\mathrm{d}x}{\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x}\\
&=\frac{\mu(V)-\bigg[\displaystyle \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla
(U_n-U)|^2\, \mathrm{d}t \,\mathrm{d}x-\kappa_s\int_{\mathbb R^N}V\abs{\Tr U_n-\Tr U}^2\,\mathrm{d}x
\bigg]+o(1)}{\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla
U_n|^2\, \mathrm{d}t \,\mathrm{d}x-\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla (U_n-U)|^2\, \mathrm{d}t \,\mathrm{d}x+o(1)}\\
&\leq \mu(V)\, \frac{1-\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla (U_n-U)|^2\, \mathrm{d}t \,\mathrm{d}x+o(1)}{1-\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla (U_n-U)|^2\, \mathrm{d}t \,\mathrm{d}x+o(1)} =\mu(V)+\frac{o(1)}{\displaystyle\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\, \mathrm{d}t \,\mathrm{d}x+o(1)} . \end{align*} Letting $n\to \infty$ yields the fact that $\mu(V)$ is attained by $U$ and this concludes the proof. \end{proof}
\appendix
\section{}
In this appendix, we recall some known results about properties of solutions to equations on the extended, positive half-space.
We start by recalling a regularity result.
\begin{proposition}[\cite{FF} Proposition 3, \cite{Jin2014}
Proposition 2.6]\label{prop:jlx} Let $a,b\in L^p(B'_1)$, for some $p>\frac{N}{2s}$ and $c,d\in L^{q}(B_1^+;t^{1-2s})$, for some $q>\frac{N+2-2s}{2}$.
Let $w\in H^1(B^+_1;t^{1-2s})$ be a weak solution of \begin{equation*} \begin{cases} -\mathrm{div}(t^{1-2s}\nabla w)+ t^{1-2s}c(z) w= t^{1-2s}d(z),\quad &\mbox{in } B^+_1, \\ -\lim_{t\rightarrow 0^+}t^{1-2s}\frac{\partial w}{\partial t}= a(x)w+b(x),\quad &\mbox{on } B'_1. \end{cases} \end{equation*} Then $w\in C^{0,\beta}(\overline{B_{1/2}^+})$ and in addition $$
\|w\|_{ C^{0,\beta}(\overline{B_{1/2}^+})}\leq C\left( \|w\|_{ L^2(B^+_{1})}+\|b\|_{ L^p(B_1')}+\|d\|_{L^q(B_1^+;t^{1-2s})} \right), $$
with $C,\beta>0$ depending only on $N,s, \|a\|_{L^{p}(B_1')},\|c\|_{L^q(B_1^+;t^{1-2s})}$. \end{proposition}
Now we recall, from \cite{Cabre2014}, a Hopf-type Lemma.
\begin{proposition}[\cite{Cabre2014} Proposition 4.11]\label{prop:cabre_sire}
Let $\Phi\in C^0(B_R^+\cup B_R')\cap H^1(B_R^+;t^{1-2s})$ satisfy
\[
\begin{cases}
-\divergence (t^{1-2s}\nabla \Phi)\geq 0, &\text{in } B_R^+, \\
\Phi>0, & \text{in }B_R^+, \\
\Phi(0,0)=0 .
\end{cases}
\]
Then
\[
-\limsup_{t\to 0^+}t^{1-2s}\frac{ \Phi(t,0)}{ t}<0.
\]
In addition, if \begin{equation}\label{eq:9} t^{1-2s}\frac{\partial\Phi}{\partial t}\in C^0(B_R^+\cup B_R'), \end{equation} then
\[
-\left(t^{1-2s}\frac{\partial \Phi}{\partial t}\right)(0,0)<0.
\] \end{proposition}
In several points of the present paper we used the following result from \cite{Cabre2014} to verify the validity of assumption \eqref{eq:9} needed to apply Proposition \ref{prop:cabre_sire}. \begin{lemma}[\cite{Cabre2014} Lemma 4.5]\label{l:cabre_sire}
Let $s\in (0,1)$ and $R>0$. Let $\varphi\in C^{0,\sigma}(B_{2R}')$
for some $\sigma\in(0,1)$ and $\Phi\in L^\infty(B_{2R}^+)\cap
H^1(B_{2R}^+;t^{1-2s})$ be a weak solution to \[ \begin{cases} -\divergence (t^{1-2s}\nabla \Phi)= 0, &\text{in } B_{2R}^+, \\ -\lim_{t\rightarrow 0^+}t^{1-2s}\frac{\partial \Phi}{\partial t}= \varphi(x), &\text{on } B'_{2R}. \end{cases} \] Then there exists $\beta\in (0,1)$ depending only on $N,s,\sigma$ such that \[ \Phi\in C^{0,\beta}(\overline{B_{R}^+})\quad\text{and} \quad t^{1-2s}\frac{\partial \Phi}{\partial t}\in C^{0,\beta}(\overline{B_{R}^+}). \] \end{lemma}
Finally, we prove a density result: the underlying idea is that removing a point does not affect the spaces $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ and $\mathcal{D}^{s,2}(\R^N)$; in other words, a point in $\mathbb R^N$ has null fractional $s$-capacity if $N>2s$, see also \cite[Example 2.5]{AFN}.
\begin{lemma}\label{l:density}
Let $z_0\in\overline{\mathbb R^{N+1}_+}$, $N>2s$. Then $C_c^\infty(\overline{\mathbb R^{N+1}_+}\setminus\{z_0\})$ is dense in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$. As a consequence, if $x_0\in \mathbb R^N$, then $C_c^\infty(\mathbb R^N\setminus\{x_0\})$ is dense in $\mathcal{D}^{s,2}(\R^N)$. \end{lemma} \begin{proof}
Assume $z_0\in\partial\overline{\mathbb R^{N+1}_+}=\mathbb R^N$ (the proof is completely analogous if $z_0\in\mathbb R^{N+1}_+$). Moreover, without loss of generality, we can assume $z_0=0$. Let $U\in C_c^\infty(\overline{\mathbb R^{N+1}_+})$ and let $\xi_n\in C^\infty(\overline{\mathbb R^{N+1}_+})$ be a cut-off function such that
\begin{gather*}
\xi_n(z)=\begin{cases}
1, &\text{if }z\in \overline{\mathbb R^{N+1}_+\setminus B_{2/n}^+}, \\
0, &\text{if }z\in \overline{B_{1/n}^+},
\end{cases} \\
\xi_n\quad\text{is radial, i.e. }\xi_n(z)=\xi_n(\abs{z}),\qquad \abs{\xi_n}\leq 1,\quad \abs{\nabla \xi_n}\leq 2n.
\end{gather*} Trivially $\xi_n U\in C_c^\infty(\overline{\mathbb R^{N+1}_+}\setminus\{0\})$. We claim that $\xi_n U\to U$ in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$. Indeed, thanks to the Dominated Convergence Theorem, \begin{multline*}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\nabla ((\xi_n-1) U)}^2\, \mathrm{d}t \,\mathrm{d}x \\
\leq 2\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{\xi_n-1}^2\abs{\nabla U}^2\, \mathrm{d}t \,\mathrm{d}x+2\int_{\mathbb R^{N+1}_+}t^{1-2s}\abs{U}^2\abs{\nabla \xi_n}^2\, \mathrm{d}t \,\mathrm{d}x \\
\leq o(1)+Cn^2\int_{B^+_{2/n}\setminus B^+_{1/n}}t^{1-2s}\, \mathrm{d}t \,\mathrm{d}x. \end{multline*} Moreover \[
n^2\int_{B^+_{2/n}\setminus B^+_{1/n}}t^{1-2s}\, \mathrm{d}t \,\mathrm{d}x=O(n^{2s-N}), \] which concludes the proof of the claim, in view of the assumption $N>2s$ and the density of $C_c^\infty(\overline{\mathbb R^{N+1}_+})$ in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$.
For what concerns the second statement, as before, without loss of generality, we can assume $x_0=0$. Let $u\in \mathcal{D}^{s,2}(\R^N)$ and let $U\in\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ be its extension. By the density of $C_c^\infty(\overline{\mathbb R^{N+1}_+}\setminus\{0\})$ in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$ just proved, there exists a sequence $\{U_n\}\subset C_c^\infty(\overline{\mathbb R^{N+1}_+}\setminus\{0\})$ such that $U_n\to U$ in $\mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s})$. Then $\Tr (U_n)\in C^\infty_c(\mathbb R^N\setminus\{0\})$ and $\Tr (U_n)\to \Tr U=u$ in $\mathcal{D}^{s,2}(\R^N)$, thanks to the continuity of the trace map
$\Tr\colon \mathcal{D}^{1,2}(\R^{N+1}_+;t^{1-2s}) \to \mathcal{D}^{s,2}(\R^N)$. \end{proof}
\bigbreak
\noindent {\bf Acknowledgments.} V. Felli is partially supported by the PRIN2015 grant ``Variational methods, with applications to problems in mathematical physics and geometry''. D. Mukherjee's research is supported by the Czech Science Foundation, project GJ19--14413Y. V. Felli and R. Ognibene are partially supported by the INDAM-GNAMPA 2018 grant ``Formula di monotonia e applicazioni:
problemi frazionari e stabilità spettrale rispetto a perturbazioni
del dominio''. This work was started while D. Mukherjee was visiting the University of Milano - Bicocca supported by INDAM-GNAMPA.
\end{document} | arXiv |
# 1. Linear Algebra Fundamentals
Linear algebra is a branch of mathematics that deals with vectors and matrices. It is a fundamental topic in mathematics and has numerous applications in various fields, including physics, engineering, computer science, and economics.
In this section, we will introduce the basic concepts of linear algebra, starting with vectors and vector spaces. We will learn about vector addition, scalar multiplication, and the properties of vector spaces. We will also explore matrix operations, such as addition, subtraction, and multiplication.
By the end of this section, you will have a solid foundation in linear algebra and be ready to tackle more advanced topics in matrix methods. Let's get started!
# 1.1. Vectors and Vector Spaces
A vector is a mathematical object that has both magnitude and direction. It can be represented as an ordered list of numbers, known as its components. For example, a two-dimensional vector can be represented as (x, y), where x and y are the components.
Vectors can be added together using vector addition, which involves adding the corresponding components of the vectors. Scalar multiplication is another operation that can be performed on vectors, which involves multiplying the vector by a scalar, or a single number.
Vector spaces are sets of vectors that satisfy certain properties. These properties include closure under vector addition and scalar multiplication, as well as the existence of a zero vector and additive inverses.
Consider the following vectors:
$\mathbf{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$
$\mathbf{w} = \begin{bmatrix} -1 \\ 4 \end{bmatrix}$
We can add these vectors together by adding their corresponding components:
$\mathbf{v} + \mathbf{w} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} + \begin{bmatrix} -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \\ 7 \end{bmatrix}$
We can also multiply a vector by a scalar:
$2\mathbf{v} = 2\begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 4 \\ 6 \end{bmatrix}$
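If you would like to experiment with these operations on a computer, here is a minimal sketch using Python with NumPy (the library choice is an assumption; the text itself does not prescribe any software):

```python
import numpy as np

v = np.array([2, 3])
w = np.array([-1, 4])

print(v + w)   # vector addition: [1 7]
print(2 * v)   # scalar multiplication: [4 6]
```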
## Exercise
Perform the following operations:
1. Add the vectors $\mathbf{u} = \begin{bmatrix} 1 \\ -2 \\ 3 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} -2 \\ 4 \\ -1 \end{bmatrix}$.
2. Multiply the vector $\mathbf{w} = \begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix}$ by the scalar $-2$.
### Solution
1. $\mathbf{u} + \mathbf{v} = \begin{bmatrix} 1 \\ -2 \\ 3 \end{bmatrix} + \begin{bmatrix} -2 \\ 4 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix}$
2. $-2\mathbf{w} = -2\begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} -6 \\ 2 \\ -4 \end{bmatrix}$
# 1.2. Matrix Operations
A matrix is a rectangular array of numbers, arranged in rows and columns. Matrices are used to represent linear transformations and systems of linear equations.
In this section, we will learn about various matrix operations, including addition, subtraction, and multiplication. We will also explore scalar multiplication and the properties of matrix operations.
Let's dive into the world of matrix operations!
# 1.2.1. Addition and Subtraction
Matrix addition and subtraction involve adding or subtracting the corresponding elements of two matrices. To perform these operations, the matrices must have the same dimensions.
The addition of two matrices is performed by adding their corresponding elements:
$\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a+e & b+f \\ c+g & d+h \end{bmatrix}$
Similarly, the subtraction of two matrices is performed by subtracting their corresponding elements:
$\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a-e & b-f \\ c-g & d-h \end{bmatrix}$
Consider the following matrices:
$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$
$B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$
We can add these matrices together by adding their corresponding elements:
$A + B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$
We can also subtract one matrix from another:
$B - A = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} - \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 4 & 4 \\ 4 & 4 \end{bmatrix}$
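A quick sketch of the same computations, again assuming Python with NumPy:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)   # [[ 6  8]
               #  [10 12]]
print(B - A)   # [[4 4]
               #  [4 4]]
```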
## Exercise
Perform the following operations:
1. Add the matrices $C = \begin{bmatrix} 2 & -1 \\ 3 & 0 \end{bmatrix}$ and $D = \begin{bmatrix} -2 & 1 \\ -3 & 0 \end{bmatrix}$.
2. Subtract the matrix $E = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ from the matrix $F = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$.
### Solution
1. $C + D = \begin{bmatrix} 2 & -1 \\ 3 & 0 \end{bmatrix} + \begin{bmatrix} -2 & 1 \\ -3 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
2. $F - E = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} - \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 4 & 4 \\ 4 & 4 \end{bmatrix}$
# 1.2.2. Scalar Multiplication
Scalar multiplication involves multiplying a matrix by a scalar, which is a single number. To perform scalar multiplication, each element of the matrix is multiplied by the scalar.
The scalar multiplication of a matrix $A$ by a scalar $k$ is denoted as $kA$:
$k\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} ka & kb \\ kc & kd \end{bmatrix}$
Scalar multiplication scales every entry of the matrix by the same factor; multiplying by a negative scalar also flips the sign of each entry.
Consider the following matrix:
$A = \begin{bmatrix} 2 & 3 \\ -1 & 4 \end{bmatrix}$
We can multiply this matrix by a scalar, such as 2:
$2A = 2\begin{bmatrix} 2 & 3 \\ -1 & 4 \end{bmatrix} = \begin{bmatrix} 4 & 6 \\ -2 & 8 \end{bmatrix}$
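In code (same NumPy assumption as before), multiplying an array by a scalar scales every entry:

```python
import numpy as np

A = np.array([[2, 3], [-1, 4]])
print(2 * A)   # [[ 4  6]
               #  [-2  8]]
```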
## Exercise
Perform the following operations:
1. Multiply the matrix $B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ by the scalar 3.
2. Multiply the matrix $C = \begin{bmatrix} -2 & 1 \\ 0 & 3 \end{bmatrix}$ by the scalar -1.
### Solution
1. $3B = 3\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 9 & 12 \end{bmatrix}$
2. $-1C = -1\begin{bmatrix} -2 & 1 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 0 & -3 \end{bmatrix}$
# 1.2.3. Matrix Multiplication
Matrix multiplication is a more complex operation than addition and scalar multiplication. Each entry of the product is obtained by taking the dot product of a row of the first matrix with a column of the second matrix: the entry in row $i$, column $j$ of the product comes from row $i$ of the first matrix and column $j$ of the second.
To perform matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix.
Matrix multiplication is not commutative, which means that the order of the matrices matters. In general, $AB \neq BA$.
Consider the following matrices:
$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$
$B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$
We can multiply these matrices together using matrix multiplication:
$AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$
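In NumPy (assumed here, as before), the `@` operator performs matrix multiplication; computing both `A @ B` and `B @ A` also illustrates that the operation is not commutative:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)   # [[19 22]
               #  [43 50]]
print(B @ A)   # [[23 34]
               #  [31 46]]  -- different from A @ B
```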
## Exercise
Perform the following operations:
1. Multiply the matrices $C = \begin{bmatrix} 2 & -1 \\ 3 & 0 \end{bmatrix}$ and $D = \begin{bmatrix} -2 & 1 \\ -3 & 0 \end{bmatrix}$.
2. Multiply the matrix $E = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ by the matrix $F = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$.
### Solution
1. $CD = \begin{bmatrix} 2 & -1 \\ 3 & 0 \end{bmatrix} \begin{bmatrix} -2 & 1 \\ -3 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 2 \\ -6 & 3 \end{bmatrix}$
2. $EF = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$
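If NumPy is available (an assumption on our part), the products can be double-checked numerically:

```python
import numpy as np

C = np.array([[2, -1], [3, 0]])
D = np.array([[-2, 1], [-3, 0]])
print(C @ D)   # [[-1  2]
               #  [-6  3]]

E = np.array([[1, 2], [3, 4]])
F = np.array([[5, 6], [7, 8]])
print(E @ F)   # [[19 22]
               #  [43 50]]
```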
# 2. Systems of Linear Equations
A system of linear equations is a set of equations that can be written in the form:
$$
\begin{align*}
a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \ldots + a_{2n}x_n &= b_2 \\
\ldots \\
a_{m1}x_1 + a_{m2}x_2 + \ldots + a_{mn}x_n &= b_m \\
\end{align*}
$$
where $a_{ij}$ are the coefficients, $x_i$ are the variables, and $b_i$ are the constants.
The goal of solving a system of linear equations is to find values for the variables that satisfy all of the equations simultaneously.
There are several methods for solving systems of linear equations, including substitution, elimination, and matrix methods. In this textbook, we will focus on matrix methods for solving systems of linear equations.
Consider the following system of linear equations:
$$
\begin{align*}
2x + 3y &= 8 \\
4x - 2y &= 2 \\
\end{align*}
$$
We can write this system of equations in matrix form:
$$
\begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 8 \\ 2 \end{bmatrix}
$$
To solve this system of equations using matrix methods, we can multiply both sides of the equation by the inverse of the coefficient matrix:
$$
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix}^{-1} \begin{bmatrix} 8 \\ 2 \end{bmatrix}
$$
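In practice one would rarely form the inverse by hand; a quick sketch using NumPy's built-in solver (assuming Python and NumPy are available, which the text does not require) gives the numerical solution directly:

```python
import numpy as np

A = np.array([[2.0, 3.0], [4.0, -2.0]])
b = np.array([8.0, 2.0])

x = np.linalg.solve(A, b)
print(x)   # [1.375 1.75], i.e. x = 11/8 and y = 7/4
```

`np.linalg.solve` factors the coefficient matrix internally instead of computing the inverse explicitly, which is generally faster and more numerically stable.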
## Exercise
Solve the following system of linear equations using matrix methods:
$$
\begin{align*}
3x + 2y &= 7 \\
-2x + 5y &= 1 \\
\end{align*}
$$
### Solution
To solve this system of equations using matrix methods, we can write the system in matrix form:
$$
\begin{bmatrix} 3 & 2 \\ -2 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 1 \end{bmatrix}
$$
To find the solution, we can multiply both sides of the equation by the inverse of the coefficient matrix:
$$
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 3 & 2 \\ -2 & 5 \end{bmatrix}^{-1} \begin{bmatrix} 7 \\ 1 \end{bmatrix}
$$
# 2.1. Solving Systems of Equations with Matrices
Consider the following system of linear equations:
$$
\begin{align*}
2x + 3y &= 8 \\
4x - 2y &= 2 \\
\end{align*}
$$
We can write this system of equations in matrix form:
$$
\begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 8 \\ 2 \end{bmatrix}
$$
To solve this system of equations using matrices, we can multiply both sides of the equation by the inverse of the coefficient matrix:
$$
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix}^{-1} \begin{bmatrix} 8 \\ 2 \end{bmatrix}
$$
We can calculate the inverse of the coefficient matrix:
$$
\begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix}^{-1} = \frac{1}{2 \cdot (-2) - 3 \cdot 4} \begin{bmatrix} -2 & -3 \\ -4 & 2 \end{bmatrix} = -\frac{1}{16}\begin{bmatrix} -2 & -3 \\ -4 & 2 \end{bmatrix} = \begin{bmatrix} \frac{1}{8} & \frac{3}{16} \\ \frac{1}{4} & -\frac{1}{8} \end{bmatrix}
$$
Substituting the inverse matrix and the constant matrix into the equation, we get:
$$
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{1}{8} & \frac{3}{16} \\ \frac{1}{4} & -\frac{1}{8} \end{bmatrix} \begin{bmatrix} 8 \\ 2 \end{bmatrix} = \begin{bmatrix} \frac{11}{8} \\ \frac{7}{4} \end{bmatrix}
$$
So the solution to the system of equations is $x = \frac{11}{8}$ and $y = \frac{7}{4}$.
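As a numerical check (assuming NumPy), forming the inverse explicitly gives the same answer:

```python
import numpy as np

A = np.array([[2.0, 3.0], [4.0, -2.0]])
b = np.array([8.0, 2.0])

A_inv = np.linalg.inv(A)
print(A_inv)       # [[ 0.125   0.1875]
                   #  [ 0.25   -0.125 ]]
print(A_inv @ b)   # [1.375 1.75]  ->  x = 11/8, y = 7/4
```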
## Exercise
Solve the following system of linear equations using matrices:
$$
\begin{align*}
3x + 2y &= 7 \\
-2x + 5y &= 1 \\
\end{align*}
$$
### Solution
To solve this system of equations using matrices, we can write the system in matrix form:
$$
\begin{bmatrix} 3 & 2 \\ -2 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 1 \end{bmatrix}
$$
To find the solution, we can multiply both sides of the equation by the inverse of the coefficient matrix:
$$
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 3 & 2 \\ -2 & 5 \end{bmatrix}^{-1} \begin{bmatrix} 7 \\ 1 \end{bmatrix}
$$
We can calculate the inverse of the coefficient matrix:
$$
\begin{bmatrix} 3 & 2 \\ -2 & 5 \end{bmatrix}^{-1} = \frac{1}{3 \cdot 5 - 2 \cdot -2} \begin{bmatrix} 5 & -2 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} \frac{5}{19} & -\frac{2}{19} \\ \frac{2}{19} & \frac{3}{19} \end{bmatrix}
$$
Substituting the inverse matrix and the constant matrix into the equation, we get:
$$
\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{5}{19} & -\frac{2}{19} \\ \frac{2}{19} & \frac{3}{19} \end{bmatrix} \begin{bmatrix} 7 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{33}{19} \\ \frac{17}{19} \end{bmatrix}
$$
So the solution to the system of equations is $x = \frac{33}{19}$ and $y = \frac{17}{19}$.
# 2.2. Gaussian Elimination
## Exercise
Solve the following system of equations using Gaussian elimination:
$$
\begin{align*}
3x + 2y &= 7 \\
-2x + 5y &= 1 \\
\end{align*}
$$
### Solution
We can write this system in augmented matrix form:
$$
\begin{bmatrix} 3 & 2 & 7 \\ -2 & 5 & 1 \end{bmatrix}
$$
To perform Gaussian elimination, we choose the first non-zero element in the first column as the pivot element, which is 3. We divide the first row by 3 to make the pivot element 1:
$$
\begin{bmatrix} 1 & \frac{2}{3} & \frac{7}{3} \\ -2 & 5 & 1 \end{bmatrix}
$$
Next, we eliminate the non-zero element below the pivot element by adding 2 times the first row to the second row:
$$
\begin{bmatrix} 1 & \frac{2}{3} & \frac{7}{3} \\ 0 & \frac{19}{3} & \frac{17}{3} \end{bmatrix}
$$
The matrix is now in row-echelon form. To solve for the variables, we use back substitution. From the second row, we can solve for $y$:
$$
\frac{19}{3}y = \frac{17}{3} \implies y = \frac{17}{19}
$$
Substituting this value into the first row, we can solve for $x$:
$$
x + \frac{2}{3} \cdot \frac{17}{19} = \frac{7}{3} \implies x = \frac{7}{3} - \frac{34}{57} = \frac{99}{57} = \frac{33}{19}
$$
So the solution to the system of equations is $x = \frac{33}{19}$ and $y = \frac{17}{19}$, which agrees with the answer obtained by the matrix method in the previous section.
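The elimination steps can also be automated. Below is a minimal sketch of forward elimination followed by back substitution, written in plain Python/NumPy and without row pivoting (both choices are assumptions made here for simplicity; production solvers pivot for numerical stability):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination followed by back substitution.

    A minimal sketch without row pivoting, so it assumes the pivots
    encountered along the diagonal are nonzero.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Back substitution: solve the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3, 2], [-2, 5]])
b = np.array([7, 1])
print(gaussian_elimination(A, b))   # [1.7368... 0.8947...]  i.e. [33/19, 17/19]
```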
# 2.3. LU Decomposition
Gaussian elimination is a powerful method for solving systems of linear equations. However, when the same coefficient matrix must be used with many different right-hand sides, repeating the elimination each time is wasteful. LU decomposition avoids this by factoring the matrix once and reusing the factors for every right-hand side.

LU decomposition writes a matrix as the product of two matrices: a lower triangular matrix (L) and an upper triangular matrix (U). The LU decomposition of a matrix A can be written as A = LU.
To perform LU decomposition, we start with the original matrix A and perform row operations to eliminate the elements below the diagonal. The resulting upper triangular matrix U will have all zeros below the diagonal.
We also keep track of the row operations performed and use them to construct the lower triangular matrix L. L is a matrix with ones on the diagonal and the multipliers from the row operations below the diagonal.
Once we have the LU decomposition, we can solve the system of equations by solving two simpler systems: Ly = b and Ux = y. This is known as forward substitution and back substitution.
Let's illustrate LU decomposition with an example. Consider the following system of equations:
$$
\begin{align*}
2x + 3y &= 8 \\
4x - y &= 7 \\
\end{align*}
$$
We can write this system in matrix form as:
$$
\begin{bmatrix} 2 & 3 \\ 4 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 8 \\ 7 \end{bmatrix}
$$
To perform LU decomposition, we start with the original matrix A:
$$
\begin{bmatrix} 2 & 3 \\ 4 & -1 \end{bmatrix}
$$
We perform row operations to eliminate the elements below the diagonal:
$$
\begin{bmatrix} 2 & 3 \\ 0 & -7 \end{bmatrix}
$$
The resulting upper triangular matrix U is:
$$
\begin{bmatrix} 2 & 3 \\ 0 & -7 \end{bmatrix}
$$
We also keep track of the row operations performed to construct the lower triangular matrix L. The row operations were:
- Multiply the first row by 2 and subtract it from the second row.
So the lower triangular matrix L is:
$$
\begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}
$$
Now that we have the LU decomposition, we can solve the system of equations by solving two simpler systems: Ly = b and Ux = y.
For the first system, Ly = b, we have:
$$
\begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 8 \\ 7 \end{bmatrix}
$$
Solving this system gives us:
$$
\begin{align*}
y_1 &= 8 \\
2y_1 + y_2 &= 7 \\
\end{align*}
$$
Simplifying, we get:
$$
\begin{align*}
y_1 &= 8 \\
2(8) + y_2 &= 7 \\
\end{align*}
$$
Solving for y_2, we get:
$$
\begin{align*}
y_1 &= 8 \\
y_2 &= 7 - 2(8) \\
\end{align*}
$$
So y_2 = -9.
Now that we have y, we can solve the second system, Ux = y:
$$
\begin{bmatrix} 2 & 3 \\ 0 & -7 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 8 \\ -9 \end{bmatrix}
$$
Solving this system gives us:
$$
\begin{align*}
2x_1 + 3x_2 &= 8 \\
-7x_2 &= -9 \\
\end{align*}
$$
Simplifying, we get:
$$
\begin{align*}
2x_1 + 3x_2 &= 8 \\
x_2 &= \frac{9}{7} \\
\end{align*}
$$
Solving for x_1, we get:
$$
\begin{align*}
2x_1 + 3\left(\frac{9}{7}\right) &= 8 \\
2x_1 &= 8 - 3\left(\frac{9}{7}\right) \\
x_1 &= \frac{8 - 3\left(\frac{9}{7}\right)}{2} \\
\end{align*}
$$
So $x_1 = \frac{29}{14}$.
Therefore, the solution to the system of equations is $x_1 = \frac{29}{14}$ and $x_2 = \frac{9}{7}$.
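These two triangular solves are straightforward to carry out numerically. Here is a short sketch using NumPy (with L, U, and b taken from the example above) that performs the forward and back substitution and checks the answer:
```python
import numpy as np

# L, U, and b from the example above
L = np.array([[1.0, 0.0],
              [2.0, 1.0]])
U = np.array([[2.0, 3.0],
              [0.0, -7.0]])
b = np.array([8.0, 7.0])

# Forward substitution: solve L y = b
y = np.zeros(2)
y[0] = b[0]                      # y1 = 8
y[1] = b[1] - L[1, 0] * y[0]     # y2 = 7 - 2*8 = -9

# Back substitution: solve U x = y
x = np.zeros(2)
x[1] = y[1] / U[1, 1]                      # x2 = 9/7
x[0] = (y[0] - U[0, 1] * x[1]) / U[0, 0]   # x1 = 29/14

print(x)                          # [2.0714..., 1.2857...]
print(np.allclose(L @ U @ x, b))  # True: L*U*x reproduces b
```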
# 2.4. Applications of Systems of Equations
Systems of equations have many practical applications in various fields, including physics, engineering, economics, and computer science. They can be used to model and solve real-world problems.
One common application of systems of equations is in solving problems involving multiple unknowns. For example, in physics, systems of equations can be used to model the motion of objects under the influence of forces. By solving the equations, we can determine the position, velocity, and acceleration of the objects at different times.
Systems of equations can also be used in optimization problems. For example, in economics, systems of equations can be used to model supply and demand relationships. By solving the equations, we can determine the equilibrium price and quantity at which supply and demand balance.
In computer science, systems of equations can be used in image processing algorithms, such as image reconstruction and denoising. By solving the equations, we can recover missing or corrupted parts of an image.
These are just a few examples of the many applications of systems of equations. Understanding how to solve and apply systems of equations is an important skill in many fields and can help us better understand and solve complex problems in the real world.
Example: Solving a Word Problem with Systems of Equations
Suppose you have a total of 20 coins consisting of nickels and dimes. The total value of the coins is $1.75. How many nickels and dimes do you have?
Let's set up a system of equations to solve this problem. Let x represent the number of nickels and y represent the number of dimes. We can write the following equations:
Equation 1: x + y = 20 (since the total number of coins is 20)
Equation 2: 0.05x + 0.10y = 1.75 (since the total value of the coins is $1.75)
We can solve this system of equations using substitution or elimination. Let's use substitution:
From Equation 1, we can solve for x in terms of y: x = 20 - y
Substituting this into Equation 2, we get: 0.05(20 - y) + 0.10y = 1.75
Simplifying, we get: 1 - 0.05y + 0.10y = 1.75
Combining like terms, we get: 0.05y = 0.75
Dividing both sides by 0.05, we get: y = 15
Substituting this back into Equation 1, we get: x + 15 = 20
Simplifying, we get: x = 5
Therefore, you have 5 nickels and 15 dimes.
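For a small system like this, we can also hand the work to a numerical solver. A minimal sketch using NumPy, encoding the two equations in matrix form:
```python
import numpy as np

# x = number of nickels, y = number of dimes
A = np.array([[1.0, 1.0],      # x + y = 20
              [0.05, 0.10]])   # 0.05x + 0.10y = 1.75
b = np.array([20.0, 1.75])

x, y = np.linalg.solve(A, b)
print(x, y)   # 5.0 15.0 -> 5 nickels and 15 dimes
```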
## Exercise
Solve the following word problem using a system of equations:
A bakery sells muffins and cookies. On Monday, the bakery sold a total of 50 muffins and cookies for a total of $100. On Tuesday, the bakery sold a total of 60 muffins and cookies for a total of $120. How much does each muffin and cookie cost?
### Solution
Let x represent the cost of a muffin and y represent the cost of a cookie.
Notice that the problem does not say how many of the 50 items sold on Monday (or the 60 items sold on Tuesday) were muffins and how many were cookies, so we cannot write each day's sales as a single equation in x and y alone.
If we nevertheless tried to model each day with one equation of the form x + y = total sales, we would get:
Equation 1: x + y = 100
Equation 2: x + y = 120
Subtracting Equation 1 from Equation 2, we get: (x + y) - (x + y) = 120 - 100
Simplifying, we get: 0 = 20
This is a contradiction, so that system of equations is inconsistent and has no solution.
Either way, there is not enough information to determine the cost of each muffin and each cookie.
# 3. Determinants and Inverses
Determinants and inverses are important concepts in linear algebra. They provide valuable information about matrices and can be used to solve systems of linear equations, calculate areas and volumes, and determine whether a matrix is invertible.
The determinant of a square matrix can be thought of as a scalar value that represents certain properties of the matrix. It can be used to determine if a matrix is invertible, which means it has an inverse matrix that can be used to undo the operations performed by the original matrix.
The determinant of a 2x2 matrix can be calculated using a simple formula:
$$
\begin{bmatrix}
a & b \\
c & d \\
\end{bmatrix}
= ad - bc
$$
For larger matrices, the determinant can be calculated using cofactor expansion along any row or column. This involves breaking down the matrix into smaller submatrices and calculating their determinants.
The inverse of a matrix is another matrix that, when multiplied by the original matrix, gives the identity matrix. The inverse of a matrix can be found by using the determinant and cofactor expansion. If the determinant of a matrix is non-zero, then the matrix is invertible and has a unique inverse.
Example: Finding the Determinant and Inverse of a Matrix
Let's find the determinant and inverse of the following 3x3 matrix:
$$
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \\
\end{bmatrix}
$$
To find the determinant, we can use cofactor expansion along the first row:
$$
\begin{align*}
\text{det}\left(\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \\
\end{bmatrix}\right) &= 1 \cdot \text{det}\left(\begin{bmatrix}
5 & 6 \\
8 & 9 \\
\end{bmatrix}\right) - 2 \cdot \text{det}\left(\begin{bmatrix}
4 & 6 \\
7 & 9 \\
\end{bmatrix}\right) + 3 \cdot \text{det}\left(\begin{bmatrix}
4 & 5 \\
7 & 8 \\
\end{bmatrix}\right) \\
&= 1 \cdot (5 \cdot 9 - 6 \cdot 8) - 2 \cdot (4 \cdot 9 - 6 \cdot 7) + 3 \cdot (4 \cdot 8 - 5 \cdot 7) \\
&= 1 \cdot (-3) - 2 \cdot (-6) + 3 \cdot (-3) \\
&= -3 + 12 - 9 \\
&= 0
\end{align*}
$$
Since the determinant is 0, the matrix is not invertible and does not have an inverse.
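We can confirm this conclusion numerically. A quick check with NumPy (floating-point round-off may give a value extremely close to zero rather than exactly zero):
```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

print(np.linalg.det(A))   # ~0 (up to round-off), so A is singular
```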
## Exercise
Find the determinant and inverse of the following 2x2 matrix:
$$
\begin{bmatrix}
2 & 3 \\
4 & 5 \\
\end{bmatrix}
$$
### Solution
To find the determinant, we can use the formula:
$$
\text{det}\left(\begin{bmatrix}
2 & 3 \\
4 & 5 \\
\end{bmatrix}\right) = 2 \cdot 5 - 3 \cdot 4 = 10 - 12 = -2
$$
The determinant is -2.
Since the determinant is non-zero, the matrix is invertible. To find the inverse, we can use the formula:
$$
\begin{bmatrix}
2 & 3 \\
4 & 5 \\
\end{bmatrix}^{-1} = \frac{1}{\text{det}\left(\begin{bmatrix}
2 & 3 \\
4 & 5 \\
\end{bmatrix}\right)} \cdot \begin{bmatrix}
5 & -3 \\
-4 & 2 \\
\end{bmatrix} = \frac{1}{-2} \cdot \begin{bmatrix}
5 & -3 \\
-4 & 2 \\
\end{bmatrix} = \begin{bmatrix}
-5/2 & 3/2 \\
2 & -1 \\
\end{bmatrix}
$$
The inverse of the matrix is:
$$
\begin{bmatrix}
-5/2 & 3/2 \\
2 & -1 \\
\end{bmatrix}
$$
# 3.1. Determinants of Matrices
The determinant of a square matrix is a scalar value that represents certain properties of the matrix. It can be used to determine if a matrix is invertible, calculate areas and volumes, and solve systems of linear equations.
The determinant of a 2x2 matrix can be calculated using the formula:
$$
\text{det}\left(\begin{bmatrix}
a & b \\
c & d \\
\end{bmatrix}\right) = ad - bc
$$
For larger matrices, the determinant can be calculated using cofactor expansion along any row or column. This involves breaking down the matrix into smaller submatrices and calculating their determinants.
Example: Calculating the Determinant of a 3x3 Matrix
Let's calculate the determinant of the following 3x3 matrix:
$$
\begin{bmatrix}
2 & 3 & 1 \\
4 & 1 & 0 \\
-1 & 2 & 5 \\
\end{bmatrix}
$$
We can use cofactor expansion along the first row:
$$
\begin{align*}
\text{det}\left(\begin{bmatrix}
2 & 3 & 1 \\
4 & 1 & 0 \\
-1 & 2 & 5 \\
\end{bmatrix}\right) &= 2 \cdot \text{det}\left(\begin{bmatrix}
1 & 0 \\
2 & 5 \\
\end{bmatrix}\right) - 3 \cdot \text{det}\left(\begin{bmatrix}
4 & 0 \\
-1 & 5 \\
\end{bmatrix}\right) + 1 \cdot \text{det}\left(\begin{bmatrix}
4 & 1 \\
-1 & 2 \\
\end{bmatrix}\right) \\
&= 2 \cdot (1 \cdot 5 - 0 \cdot 2) - 3 \cdot (4 \cdot 5 - 0 \cdot -1) + 1 \cdot (4 \cdot 2 - 1 \cdot -1) \\
&= 2 \cdot 5 - 3 \cdot 20 + 1 \cdot 9 \\
&= 10 - 60 + 9 \\
&= -41
\end{align*}
$$
The determinant of the matrix is -41.
## Exercise
Calculate the determinant of the following 2x2 matrix:
$$
\begin{bmatrix}
3 & 4 \\
-2 & 1 \\
\end{bmatrix}
$$
### Solution
To calculate the determinant, we can use the formula:
$$
\text{det}\left(\begin{bmatrix}
3 & 4 \\
-2 & 1 \\
\end{bmatrix}\right) = 3 \cdot 1 - 4 \cdot -2 = 3 + 8 = 11
$$
The determinant is 11.
# 3.2. Properties of Determinants
Determinants have several important properties that can be used to simplify calculations and solve equations.
1. The determinant of the identity matrix is always 1.
For example, the determinant of the 2x2 identity matrix is:
$$
\text{det}\left(\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix}\right) = 1 \cdot 1 - 0 \cdot 0 = 1
$$
2. Swapping rows or columns of a matrix changes the sign of the determinant.
For example, if we swap the first and second rows of a 3x3 matrix:
$$
\text{det}\left(\begin{bmatrix}
a & b & c \\
d & e & f \\
g & h & i \\
\end{bmatrix}\right) = -\text{det}\left(\begin{bmatrix}
d & e & f \\
a & b & c \\
g & h & i \\
\end{bmatrix}\right)
$$
3. Multiplying a row or column of a matrix by a scalar multiplies the determinant by that scalar.
For example, if we multiply the second row of a 4x4 matrix by 2:
$$
\text{det}\left(\begin{bmatrix}
a & b & c & d \\
2e & 2f & 2g & 2h \\
i & j & k & l \\
m & n & o & p \\
\end{bmatrix}\right) = 2 \cdot \text{det}\left(\begin{bmatrix}
a & b & c & d \\
e & f & g & h \\
i & j & k & l \\
m & n & o & p \\
\end{bmatrix}\right)
$$
4. If two rows or columns of a matrix are identical, the determinant is 0.
For example, if the first and third rows of a 3x3 matrix are the same:
$$
\text{det}\left(\begin{bmatrix}
a & b & c \\
a & b & c \\
g & h & i \\
\end{bmatrix}\right) = 0
$$
These properties can be used to simplify calculations and solve equations involving determinants.
# 3.3. Finding Inverses of Matrices
The inverse of a matrix is a matrix that, when multiplied with the original matrix, gives the identity matrix. In other words, if A is a matrix and A^-1 is its inverse, then A * A^-1 = I, where I is the identity matrix.
Finding the inverse of a matrix can be done using several methods, such as the adjoint method or the row reduction method. In this textbook, we will focus on the row reduction method, also known as Gauss-Jordan elimination.
To find the inverse of a matrix using row reduction, we start by augmenting the original matrix with the identity matrix of the same size. Then, we perform row operations to transform the original matrix into the identity matrix, while keeping track of the row operations performed on the identity matrix. If the original matrix can be transformed into the identity matrix, then the augmented identity matrix will be transformed into the inverse of the original matrix.
Let's go through an example to illustrate the process.
Find the inverse, if it exists, of the matrix A:
$$
\begin{bmatrix}
1 & 2 \\
3 & 4 \\
\end{bmatrix}
$$
We start by augmenting the matrix A with the identity matrix:
$$
\begin{bmatrix}
1 & 2 & | & 1 & 0 \\
3 & 4 & | & 0 & 1 \\
\end{bmatrix}
$$
Next, we perform row operations to transform the left side of the augmented matrix into the identity matrix. We can use elementary row operations such as multiplying a row by a scalar, adding/subtracting rows, or swapping rows.
$$
\begin{bmatrix}
1 & 2 & | & 1 & 0 \\
3 & 4 & | & 0 & 1 \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1 & 2 & | & 1 & 0 \\
0 & -2 & | & -3 & 1 \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1 & 0 & | & -2 & 1 \\
0 & -2 & | & -3 & 1 \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1 & 0 & | & -2 & 1 \\
0 & 1 & | & 3/2 & -1/2 \\
\end{bmatrix}
$$
The left side of the augmented matrix is now the identity matrix, and the right side is the inverse of the original matrix A:
$$
\begin{bmatrix}
-2 & 1 \\
3/2 & -1/2 \\
\end{bmatrix}
$$
Therefore, the inverse of matrix A is:
$$
\begin{bmatrix}
-2 & 1 \\
3/2 & -1/2 \\
\end{bmatrix}
$$
If the row reduction process does not result in the left side of the augmented matrix becoming the identity matrix, then the original matrix does not have an inverse.
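We can verify this result numerically. A short NumPy check (the computed inverse should agree with the matrix obtained by row reduction):
```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                              # [[-2.   1. ], [ 1.5 -0.5]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```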
## Exercise
Find the inverse, if it exists, of the matrix B:
$$
\begin{bmatrix}
2 & 3 \\
4 & 6 \\
\end{bmatrix}
$$
### Solution
We start by augmenting the matrix B with the identity matrix:
$$
\begin{bmatrix}
2 & 3 & | & 1 & 0 \\
4 & 6 & | & 0 & 1 \\
\end{bmatrix}
$$
Next, we perform row operations to transform the left side of the augmented matrix into the identity matrix:
$$
\begin{bmatrix}
2 & 3 & | & 1 & 0 \\
4 & 6 & | & 0 & 1 \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
2 & 3 & | & 1 & 0 \\
0 & 0 & | & -2 & 1 \\
\end{bmatrix}
$$
The row reduction process did not result in the left side of the augmented matrix becoming the identity matrix. Therefore, the matrix B does not have an inverse.
# 3.4. Uses of Determinants and Inverses
Determinants and inverses of matrices have various uses in linear algebra and other fields of mathematics. Here are a few important applications:
1. Solving systems of linear equations: Determinants can be used to determine whether a system of linear equations has a unique solution, no solution, or infinitely many solutions. If the determinant of the coefficient matrix is nonzero, the system has a unique solution. If the determinant is zero, the system may have either no solution or infinitely many solutions.
2. Finding the area/volume: Determinants can be used to find the area of a parallelogram or the volume of a parallelepiped. The absolute value of the determinant of the vectors representing the sides of the parallelogram/parallelepiped gives the area/volume.
3. Matrix transformations: Determinants and inverses play a crucial role in understanding and analyzing matrix transformations. For example, the absolute value of the determinant of a square matrix gives the factor by which the transformation scales areas or volumes. If the determinant is positive, the transformation preserves orientation, while a negative determinant indicates a reflection.
4. Cramer's rule: Cramer's rule is a method for solving systems of linear equations using determinants. It provides a formula to express the solutions of the system in terms of determinants.
5. Matrix inversion: Inverses of matrices are used to solve linear systems of equations efficiently. Instead of solving the system directly, we can find the inverse of the coefficient matrix and multiply it by the vector of constants to obtain the solution.
6. Change of basis: Determinants and inverses are used in change of basis calculations. When working with different coordinate systems, we can use determinants to convert between different bases.
These are just a few examples of the many applications of determinants and inverses in linear algebra. They are fundamental tools that are used extensively in various areas of mathematics and beyond.
# 4. Eigenvalues and Eigenvectors
An eigenvector of a square matrix A is a nonzero vector x such that when A is multiplied by x, the result is a scalar multiple of x. The scalar multiple is called the eigenvalue corresponding to that eigenvector. Mathematically, we can represent this relationship as Ax = λx, where x is the eigenvector and λ is the eigenvalue.
Eigenvalues and eigenvectors have several important properties. Here are a few key ones:
1. Eigenvalues can be real or complex numbers. Eigenvectors associated with real eigenvalues can be real or complex, while eigenvectors associated with complex eigenvalues are complex.
2. The determinant of a matrix is equal to the product of its eigenvalues. This property is useful for calculating determinants when the eigenvalues are known.
3. The trace of a matrix is equal to the sum of its eigenvalues. The trace is the sum of the elements on the main diagonal of the matrix.
4. If a matrix is invertible, its eigenvalues are all nonzero.
5. If a matrix is symmetric, its eigenvalues are all real and eigenvectors corresponding to distinct eigenvalues are orthogonal.
Eigenvalues and eigenvectors have various applications. Here are a few examples:
1. Diagonalization: Diagonalization is the process of expressing a matrix as a product of a diagonal matrix and the inverse of its eigenvector matrix. This process simplifies calculations and allows for easier analysis of matrix properties.
2. Stability analysis: In physics and engineering, eigenvalues and eigenvectors are used to analyze the stability of systems. For example, in control theory, eigenvalues are used to determine the stability of a control system.
3. Image compression: In image processing, eigenvalues and eigenvectors are used in techniques such as Principal Component Analysis (PCA) to compress images. By representing an image using a smaller set of eigenvectors, the image can be compressed while still retaining important information.
4. Quantum mechanics: In quantum mechanics, eigenvalues and eigenvectors are used to describe the energy levels and wave functions of quantum systems. They play a fundamental role in understanding the behavior of particles at the quantum level.
These are just a few examples of the many applications of eigenvalues and eigenvectors. They are powerful tools that provide insights into the properties and behavior of matrices and systems. In the following sections, we will explore methods for calculating eigenvalues and eigenvectors and discuss their applications in more detail.
# 4.1. Definition and Properties
Let A be an n x n matrix. An eigenvector of A is a nonzero vector x such that Ax is a scalar multiple of x. The scalar multiple is called the eigenvalue corresponding to that eigenvector. Mathematically, we can represent this relationship as Ax = λx, where x is the eigenvector and λ is the eigenvalue.
Here are some important properties of eigenvalues and eigenvectors:
1. The determinant of a matrix A is equal to the product of its eigenvalues. This property is useful for calculating determinants when the eigenvalues are known.
2. The trace of a matrix A is equal to the sum of its eigenvalues. The trace is the sum of the elements on the main diagonal of the matrix.
3. If A is invertible, its eigenvalues are all nonzero.
4. If A is symmetric, its eigenvalues are all real and eigenvectors corresponding to distinct eigenvalues are orthogonal.
5. If A is Hermitian (the complex analogue of symmetric), its eigenvalues are all real and eigenvectors corresponding to distinct eigenvalues are orthogonal.
Eigenvalues and eigenvectors have various applications in linear algebra and other fields. They are used in diagonalization, stability analysis, image processing, quantum mechanics, and many other areas.
# 4.2. Calculating Eigenvalues and Eigenvectors
The characteristic polynomial method involves finding the roots of the characteristic polynomial of the matrix A. The characteristic polynomial is defined as det(A - λI), where det denotes the determinant, A is the matrix, λ is the eigenvalue, and I is the identity matrix. By finding the roots of the characteristic polynomial, we can determine the eigenvalues of A. Once the eigenvalues are known, we can find the corresponding eigenvectors by solving the equation (A - λI)x = 0, where x is the eigenvector.
The power iteration method is an iterative algorithm that can be used to approximate the dominant eigenvalue and eigenvector of a matrix. The algorithm starts with an initial vector and repeatedly multiplies it by the matrix A, normalizing the result at each iteration. As the number of iterations increases, the vector converges to the dominant eigenvector, and the corresponding eigenvalue can be calculated.
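The power iteration is easy to implement. Below is a minimal sketch using NumPy; the matrix here is an arbitrary example chosen for illustration, and the result is compared with NumPy's built-in eigenvalue routine:
```python
import numpy as np

def power_iteration(A, num_iters=100):
    """Approximate the dominant eigenvalue and eigenvector of A."""
    x = np.random.rand(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x = x / np.linalg.norm(x)   # normalize at each iteration
    eigenvalue = x @ A @ x          # Rayleigh quotient of the converged vector
    return eigenvalue, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

lam, v = power_iteration(A)
print(lam)                    # ~3.618, the dominant eigenvalue of A
print(np.linalg.eigvals(A))   # both eigenvalues, approximately 1.38 and 3.62
```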
# 4.3. Applications of Eigenvalues and Eigenvectors
One application is diagonalization, which involves expressing a matrix as a diagonal matrix using its eigenvectors. Diagonalization simplifies many calculations and allows for easier analysis of matrix properties.
Eigenvalues and eigenvectors are also used in stability analysis. In systems with dynamic behavior, the eigenvalues of the system matrix determine the stability of the system. If all eigenvalues have negative real parts, the system is stable.
In image processing, eigenvalues and eigenvectors are used for image compression and feature extraction. By representing an image as a matrix, we can apply eigenvalue decomposition to identify the most important features or compress the image by discarding less significant eigenvectors.
In quantum mechanics, eigenvalues and eigenvectors play a fundamental role in the description of physical systems. The eigenvalues represent the possible energy levels of the system, while the eigenvectors represent the corresponding wavefunctions.
# 4.4. Diagonalization and Similarity Transformations
Diagonalization is a process that transforms a matrix into a diagonal matrix using its eigenvectors. This process simplifies many calculations and allows for easier analysis of matrix properties.
To diagonalize a matrix A, we first find its eigenvalues and eigenvectors. We then construct a matrix P whose columns are the eigenvectors of A. The diagonal matrix D is obtained by multiplying P^-1 (the inverse of P) by A and then multiplying the result by P. Mathematically, we can represent this as D = P^-1AP.
The diagonal matrix D has the eigenvalues of A on its main diagonal. The eigenvectors of A become the columns of P. Diagonalization is useful for solving systems of linear equations, calculating powers of matrices, and finding exponential functions of matrices.
Similarity transformations are closely related to diagonalization. A similarity transformation of a matrix A involves multiplying it by an invertible matrix S. The resulting matrix B is obtained by B = SAS^-1. Similarity transformations preserve certain properties of matrices, such as eigenvalues, determinant, and trace.
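As a concrete sketch (using NumPy, with an example matrix chosen here for illustration), we can build P from the eigenvectors and confirm that P^-1AP is diagonal:
```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P    # D = P^-1 A P

print(np.round(D, 10))          # diagonal matrix with the eigenvalues of A
print(eigvals)                  # eigenvalues 5 and 2 (order may vary)
```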
# 5. Vector Spaces
A vector space V is a set of vectors that satisfies the following properties:
1. Closure under addition: For any vectors u and v in V, the sum u + v is also in V.
2. Closure under scalar multiplication: For any vector u in V and any scalar c, the product cu is also in V.
3. Associativity of addition: For any vectors u, v, and w in V, (u + v) + w = u + (v + w).
4. Commutativity of addition: For any vectors u and v in V, u + v = v + u.
5. Identity element of addition: There exists a vector 0 in V such that for any vector u in V, u + 0 = u.
6. Inverse elements of addition: For any vector u in V, there exists a vector -u in V such that u + (-u) = 0.
7. Associativity of scalar multiplication: For any scalars a and b and any vector u in V, (ab)u = a(bu).
8. Distributivity of scalar multiplication with respect to vector addition: For any scalar a and any vectors u and v, a(u + v) = au + av.
9. Distributivity of scalar multiplication with respect to scalar addition: For any scalars a and b and any vector u, (a + b)u = au + bu.
10. Identity element of scalar multiplication: For any vector u in V, 1u = u, where 1 is the multiplicative identity.
These properties ensure that vector spaces behave in a consistent and predictable manner. Many mathematical objects, such as matrices and functions, can be considered as vector spaces.
# 6. Basis and Dimension
Let V be a vector space. A set of vectors {v1, v2, ..., vn} is a basis for V if the following conditions are satisfied:
1. Spanning property: Every vector in V can be expressed as a linear combination of the basis vectors. In other words, for any vector v in V, there exist scalars c1, c2, ..., cn such that v = c1v1 + c2v2 + ... + cnvn.
2. Linear independence: The only way to express the zero vector 0 as a linear combination of the basis vectors is by setting all the scalars c1, c2, ..., cn to zero.
The dimension of a vector space V is the number of vectors in any basis for V. It is denoted as dim(V). The dimension of a vector space is a fundamental property that characterizes its size and complexity.
The dimension of a vector space can be finite or infinite. For example, the vector space R^n, which consists of all n-dimensional vectors, has dimension n. The vector space of all polynomials of degree at most n has dimension n+1.
# 7. Linear Transformations
Let V and W be vector spaces. A function T: V -> W is a linear transformation if it satisfies the following properties:
1. Preservation of vector addition: For any vectors u and v in V, T(u + v) = T(u) + T(v).
2. Preservation of scalar multiplication: For any scalar c and any vector u in V, T(cu) = cT(u).
These properties ensure that a linear transformation preserves the structure of vector spaces. Linear transformations have many important properties and applications in mathematics and other fields.
The composition of two linear transformations is also a linear transformation. If T: V -> W and U: W -> X are linear transformations, then the composition U o T: V -> X is a linear transformation.
Linear transformations can be represented by matrices. If T: R^n -> R^m is a linear transformation, then there exists an m x n matrix A such that T(u) = Au for any vector u in R^n. The matrix A is called the standard matrix of the linear transformation.
# 8. Change of Basis
Let V be a vector space and {v1, v2, ..., vn} and {w1, w2, ..., wn} be two bases for V. The change of basis matrix P is an n x n matrix whose columns are the coordinates of the basis vectors {w1, w2, ..., wn} with respect to the basis {v1, v2, ..., vn}.
To express a vector v in terms of the new basis {w1, w2, ..., wn}, we multiply the inverse of the change of basis matrix P by the coordinate vector of v with respect to the old basis {v1, v2, ..., vn}. Mathematically, [v]_w = P^-1[v]_v, or equivalently [v]_v = P[v]_w, where [v]_w is the coordinate vector of v with respect to the new basis and [v]_v is the coordinate vector of v with respect to the old basis.
Change of basis is also used to express linear transformations in terms of different bases. If T: V -> W is a linear transformation with matrix A relative to old bases of V and W, and P and Q are the change of basis matrices from the new bases of V and W to the old ones, then the matrix of T relative to the new bases is Q^-1AP. Here P is an n x n matrix (for V) and Q is an m x m matrix (for W).
# 9. Inner Product Spaces
Let V be a vector space over the field of real or complex numbers. An inner product on V is a function that assigns a scalar to every pair of vectors in V and satisfies the following properties:
1. Linearity in the first argument: For any vectors u, v, and w in V and any scalars a and b, the inner product satisfies the property <au + bv, w> = a<u, w> + b<v, w>.
2. Conjugate symmetry: For any vectors u and v in V, the inner product satisfies the property <u, v> = <v, u>*.
3. Positive definiteness: For any vector v in V, the inner product satisfies the property <v, v> > 0, and <v, v> = 0 if and only if v = 0.
An inner product space is a vector space equipped with an inner product. Inner product spaces have many important properties and applications in mathematics and physics.
The inner product induces a norm on the vector space, which is a measure of the length or magnitude of a vector. The norm of a vector v is defined as ||v|| = sqrt(<v, v>).
Inner product spaces also have the concept of orthogonality, which is a generalization of perpendicularity. Two vectors u and v in an inner product space are orthogonal if their inner product <u, v> is zero.
# 10. Norms and Distances
Let V be a vector space. A norm on V is a function ||.||: V -> R that satisfies the following properties:
1. Nonnegativity: For any vector v in V, ||v|| >= 0, and ||v|| = 0 if and only if v = 0.
2. Homogeneity: For any vector v in V and any scalar c, ||cv|| = |c| ||v||.
3. Triangle inequality: For any vectors u and v in V, ||u + v|| <= ||u|| + ||v||.
A norm induces a distance on the vector space, which measures the "closeness" or "separation" between vectors. The distance between two vectors u and v is defined as d(u, v) = ||u - v||.
Norms and distances have many important properties and applications in mathematics and other fields. They are used in optimization, approximation, convergence analysis, and many other areas.
# 11. Singular Value Decomposition
Let A be an m x n matrix. The singular value decomposition of A is given by A = UΣV^*, where U is an m x m unitary matrix, Σ is an m x n diagonal matrix with nonnegative real numbers on the diagonal, and V^* is the conjugate transpose of an n x n unitary matrix V.
The diagonal entries of Σ are called the singular values of A. They are nonnegative real numbers and represent the "strength" or "importance" of the corresponding columns of U and rows of V^*.
The singular value decomposition has many applications in linear algebra and other fields. It is used for matrix approximation, image compression, data analysis, and many other tasks.
# 12. Least Squares and Data Fitting
Let's say we have a set of data points (x1, y1), (x2, y2), ..., (xn, yn), and we want to find a line of the form y = mx + b that best fits the data. The least squares method involves minimizing the sum of the squared differences between the observed y-values and the predicted y-values on the line.
To find the best-fit line, we can use the following formulas:
m = (nΣxy - ΣxΣy) / (nΣx^2 - (Σx)^2)
b = (Σy - mΣx) / n
where Σ denotes the sum of the values, n is the number of data points, Σxy is the sum of the products of the x- and y-values, Σx^2 is the sum of the squares of the x-values, and Σy is the sum of the y-values.
The least squares method can be extended to higher-dimensional models and more complex data sets. It is widely used in regression analysis, data fitting, and curve fitting.
# 13. Applications of Matrix Methods
One application is image processing, where matrix methods are used for tasks such as image compression, image enhancement, and image recognition. Matrices can represent images as pixel values, and matrix operations can be used to manipulate and analyze the images.
Another application is Markov chains, which are mathematical models used to describe systems that change from one state to another over time. Matrices can represent the transition probabilities between states, and matrix methods can be used to analyze the behavior and stability of the system.
Network analysis is another field where matrix methods are widely used. Matrices can represent networks, such as social networks or transportation networks, and matrix operations can be used to analyze the connectivity, centrality, and robustness of the networks.
Matrix methods also have applications in quantum computing, where matrices represent quantum states and quantum operations. Matrix methods are used to simulate and analyze quantum algorithms and quantum systems.
# 14. Conclusion and Summary
In this textbook, we have covered a wide range of topics in matrix methods. We started with the fundamentals of linear algebra, including vectors, vector spaces, and linear transformations. We then discussed eigenvalues and eigenvectors, diagonalization, and similarity transformations. Next, we explored inner product spaces, norms, and distances. We also covered the singular value decomposition, least squares, and data fitting. Finally, we discussed applications of matrix methods in various fields, including image processing, Markov chains, network analysis, and quantum computing.
Throughout the textbook, we have provided rigorous and engaging explanations of the concepts, using practical examples and exercises to reinforce the material. We hope that this textbook has provided you with a solid foundation in matrix methods and has sparked your interest in further exploration of the subject.
Thank you for reading, and we wish you success in your future studies and applications of matrix methods!
# 7.2. Orthogonal Vectors and Subspaces
Orthogonal vectors are vectors that are perpendicular to each other. In other words, the dot product of two orthogonal vectors is zero. This can be represented mathematically as:
$$\mathbf{u} \cdot \mathbf{v} = 0$$
where $\mathbf{u}$ and $\mathbf{v}$ are orthogonal vectors.
Orthogonal vectors have several important properties. First, if two nonzero vectors are orthogonal, then they are linearly independent. This means that neither vector can be expressed as a scalar multiple of the other. Second, if a set of vectors is orthogonal and each vector has a length of 1 (i.e., they are unit vectors), then the set is called an orthonormal set.
Orthogonal subspaces are subspaces that are orthogonal to each other. In other words, every vector in one subspace is orthogonal to every vector in the other subspace. This can be represented mathematically as:
$$\mathbf{u} \cdot \mathbf{v} = 0 \quad \forall \mathbf{u} \in U, \mathbf{v} \in V$$
where $U$ and $V$ are orthogonal subspaces.
Orthogonal subspaces have several important properties. First, if two subspaces are orthogonal, then their intersection is the zero vector. Second, if a subspace $U$ is orthogonal to its orthogonal complement $U^\perp$, then every vector in $U$ is orthogonal to every vector in $U^\perp$.
Orthogonal vectors and subspaces play a crucial role in many applications of matrix methods, such as solving systems of linear equations and performing least squares regressions. By understanding the concept of orthogonality, you will be able to analyze and solve a wide range of problems in various fields.
Consider the following vectors in $\mathbb{R}^3$:
$$\mathbf{u} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{w} = \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$$
To determine if these vectors are orthogonal, we can calculate their dot products:
$$\mathbf{u} \cdot \mathbf{v} = (1)(-2) + (2)(1) + (3)(0) = -2 + 2 + 0 = 0$$
$$\mathbf{u} \cdot \mathbf{w} = (1)(1) + (2)(-1) + (3)(2) = 1 - 2 + 6 = 5$$
$$\mathbf{v} \cdot \mathbf{w} = (-2)(1) + (1)(-1) + (0)(2) = -2 - 1 + 0 = -3$$
From these calculations, we can see that $\mathbf{u}$ and $\mathbf{v}$ are orthogonal, but $\mathbf{u}$ and $\mathbf{w}$, as well as $\mathbf{v}$ and $\mathbf{w}$, are not orthogonal.
## Exercise
Consider the following vectors in $\mathbb{R}^2$:
$$\mathbf{u} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \quad \mathbf{w} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$$
Determine if these vectors are orthogonal by calculating their dot products.
### Solution
$$\mathbf{u} \cdot \mathbf{v} = (1)(-1) + (1)(1) = -1 + 1 = 0$$
$$\mathbf{u} \cdot \mathbf{w} = (1)(2) + (1)(2) = 2 + 2 = 4$$
$$\mathbf{v} \cdot \mathbf{w} = (-1)(2) + (1)(2) = -2 + 2 = 0$$
From these calculations, we can see that $\mathbf{u}$ and $\mathbf{v}$ are orthogonal and $\mathbf{v}$ and $\mathbf{w}$ are orthogonal, but $\mathbf{u}$ and $\mathbf{w}$ are not orthogonal.
# 7.3. Gram-Schmidt Process
The Gram-Schmidt process is a method for transforming a set of linearly independent vectors into an orthogonal set. This process is useful for finding an orthonormal basis for a subspace.
The process works as follows:
1. Start with a set of linearly independent vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$.
2. Let $\mathbf{u}_1 = \mathbf{v}_1$.
3. For $i = 2$ to $n$, compute $\mathbf{u}_i$ as follows:
- Set $\mathbf{u}_i = \mathbf{v}_i$.
- Subtract the projection of $\mathbf{v}_i$ onto $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_{i-1}$ from $\mathbf{u}_i$.
4. Normalize each vector $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$ to obtain an orthonormal set of vectors.
The Gram-Schmidt process can be used to find an orthonormal basis for a subspace by applying the process to a set of linearly independent vectors that span the subspace.
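A compact implementation of this procedure might look like the sketch below (using NumPy, and assuming the input vectors are linearly independent); here it is applied to the three vectors used in the example that follows and checked for orthonormality:
```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal set spanning the same subspace as `vectors`."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u = u - np.dot(e, u) * e         # subtract the projection onto e
        basis.append(u / np.linalg.norm(u))  # normalize
    return basis

vs = [np.array([1.0, 1.0, 1.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]

E = np.column_stack(gram_schmidt(vs))
print(np.allclose(E.T @ E, np.eye(3)))   # True: the columns are orthonormal
```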
Consider the following vectors in $\mathbb{R}^3$:
$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$$
We can apply the Gram-Schmidt process to transform these vectors into an orthogonal set:
Step 1: Set $\mathbf{u}_1 = \mathbf{v}_1$.
Step 2: Compute $\mathbf{u}_2$:
$$\mathbf{u}_2 = \mathbf{v}_2 - \text{proj}_{\mathbf{u}_1}(\mathbf{v}_2)$$
$$\mathbf{u}_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} - \frac{\mathbf{u}_1 \cdot \mathbf{v}_2}{\|\mathbf{u}_1\|^2}\mathbf{u}_1$$
$$\mathbf{u}_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} - \frac{2}{3}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
$$\mathbf{u}_2 = \begin{bmatrix} \frac{1}{3} \\ -\frac{2}{3} \\ \frac{1}{3} \end{bmatrix}$$
Step 3: Compute $\mathbf{u}_3$:
$$\mathbf{u}_3 = \mathbf{v}_3 - \text{proj}_{\mathbf{u}_1}(\mathbf{v}_3) - \text{proj}_{\mathbf{u}_2}(\mathbf{v}_3)$$
$$\mathbf{u}_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} - \frac{\mathbf{u}_1 \cdot \mathbf{v}_3}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 - \frac{\mathbf{u}_2 \cdot \mathbf{v}_3}{\|\mathbf{u}_2\|^2}\mathbf{u}_2$$
Since $\mathbf{u}_2 \cdot \mathbf{v}_3 = -\frac{1}{3}$ and $\|\mathbf{u}_2\|^2 = \frac{2}{3}$, the second coefficient is $-\frac{1}{2}$:
$$\mathbf{u}_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} - \frac{2}{3}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} \frac{1}{3} \\ -\frac{2}{3} \\ \frac{1}{3} \end{bmatrix}$$
$$\mathbf{u}_3 = \begin{bmatrix} -\frac{1}{2} \\ 0 \\ \frac{1}{2} \end{bmatrix}$$
Step 4: Normalize each vector:
$$\mathbf{e}_1 = \frac{\mathbf{u}_1}{\|\mathbf{u}_1\|} = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
$$\mathbf{e}_2 = \frac{\mathbf{u}_2}{\|\mathbf{u}_2\|} = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$$
$$\mathbf{e}_3 = \frac{\mathbf{u}_3}{\|\mathbf{u}_3\|} = \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$
From these calculations, we can see that $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ form an orthonormal set of vectors.
## Exercise
Consider the following vectors in $\mathbb{R}^3$:
$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
Apply the Gram-Schmidt process to transform these vectors into an orthogonal set.
### Solution
Since $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are already orthogonal, we don't need to perform any calculations. The vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ form an orthogonal set.
# 7.4. Applications of Inner Product Spaces
Inner product spaces have many applications in various fields, including physics, engineering, and computer science. Here are a few examples:
1. **Signal Processing**: Inner product spaces are used in signal processing to analyze and manipulate signals. For example, the inner product between two signals can be used to measure their similarity or to extract useful information from the signals.
2. **Quantum Mechanics**: Inner product spaces play a fundamental role in quantum mechanics. In quantum mechanics, the state of a quantum system is represented by a vector in a complex inner product space called a Hilbert space. Inner products between vectors in the Hilbert space are used to calculate probabilities and to describe the evolution of quantum systems.
3. **Image and Video Processing**: Inner product spaces are used in image and video processing to perform tasks such as image compression, denoising, and enhancement. Inner products between image patches or video frames can be used to measure their similarity and to extract useful features.
4. **Machine Learning**: Inner product spaces are used in machine learning algorithms for tasks such as classification, regression, and clustering. Inner products between feature vectors are used to measure the similarity between samples and to learn models that can generalize to unseen data.
These are just a few examples of the many applications of inner product spaces. The versatility and power of inner product spaces make them a valuable tool in various fields of study.
# 8. Norms and Distances
A norm is a function that assigns a non-negative value to a vector, representing its "length" or "size". Formally, a norm on a vector space V is a function ||.|| : V -> R that satisfies the following properties:
1. Non-negativity: ||v|| >= 0 for all v in V, and ||v|| = 0 if and only if v = 0.
2. Homogeneity: ||av|| = |a| * ||v|| for all v in V and scalar a.
3. Triangle inequality: ||u + v|| <= ||u|| + ||v|| for all u, v in V.
The most commonly used norm is the Euclidean norm, also known as the 2-norm or the L2-norm. For a vector v = (v1, v2, ..., vn) in R^n, the Euclidean norm is defined as:
||v|| = sqrt(v1^2 + v2^2 + ... + vn^2).
Other commonly used norms include the 1-norm (sum of absolute values of components), the infinity norm (maximum absolute value of components), and the p-norm (generalization of the Euclidean norm for any positive real number p).
Distances measure the "closeness" or "separation" between vectors in a vector space. In a normed vector space, the distance between two vectors u and v is defined as the norm of their difference:
d(u, v) = ||u - v||.
Distances satisfy the following properties:
1. Non-negativity: d(u, v) >= 0 for all u, v in V, and d(u, v) = 0 if and only if u = v.
2. Symmetry: d(u, v) = d(v, u) for all u, v in V.
3. Triangle inequality: d(u, v) + d(v, w) >= d(u, w) for all u, v, w in V.
The most commonly used distance metric is the Euclidean distance, which is based on the Euclidean norm. For vectors u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) in R^n, the Euclidean distance is defined as:
d(u, v) = sqrt((u1 - v1)^2 + (u2 - v2)^2 + ... + (un - vn)^2).
Other commonly used distance metrics include the Manhattan distance (sum of absolute differences of components) and the Chebyshev distance (maximum absolute difference of components).
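These norms and distances are available directly in NumPy. A short sketch:
```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

print(np.linalg.norm(u))              # Euclidean (2-)norm of u
print(np.linalg.norm(u, 1))           # 1-norm: sum of absolute values
print(np.linalg.norm(u, np.inf))      # infinity norm: maximum absolute value

print(np.linalg.norm(u - v))          # Euclidean distance, sqrt(27) ~ 5.196
print(np.linalg.norm(u - v, 1))       # Manhattan distance, 9
print(np.linalg.norm(u - v, np.inf))  # Chebyshev distance, 3
```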
Consider the following vectors in R^3:
u = (1, 2, 3)
v = (4, 5, 6)
w = (7, 8, 9)
We can calculate the Euclidean distances between these vectors as follows:
d(u, v) = sqrt((1 - 4)^2 + (2 - 5)^2 + (3 - 6)^2) = sqrt(27) ≈ 5.196
d(u, w) = sqrt((1 - 7)^2 + (2 - 8)^2 + (3 - 9)^2) = sqrt(108) ≈ 10.392
d(v, w) = sqrt((4 - 7)^2 + (5 - 8)^2 + (6 - 9)^2) = sqrt(27) ≈ 5.196
## Exercise
Calculate the Euclidean distance between the following vectors:
u = (1, 2)
v = (4, 6)
### Solution
d(u, v) = sqrt((1 - 4)^2 + (2 - 6)^2) = sqrt(9 + 16) = sqrt(25) = 5
# 8.1. Norms of Vectors and Matrices
The norm of a vector ||v|| is a non-negative scalar value that represents the "length" or "size" of the vector. It satisfies the following properties:
1. Non-negativity: ||v|| >= 0 for all vectors v, and ||v|| = 0 if and only if v = 0.
2. Homogeneity: ||av|| = |a| * ||v|| for all vectors v and scalar a.
3. Triangle inequality: ||u + v|| <= ||u|| + ||v|| for all vectors u and v.
The most commonly used norm for vectors is the Euclidean norm, also known as the 2-norm or the L2-norm. For a vector v = (v1, v2, ..., vn), the Euclidean norm is defined as:
||v|| = sqrt(v1^2 + v2^2 + ... + vn^2).
Other commonly used norms for vectors include the 1-norm (sum of absolute values of components), the infinity norm (maximum absolute value of components), and the p-norm (generalization of the Euclidean norm for any positive real number p).
The norm of a matrix ||A|| is a non-negative scalar value that represents the "size" or "magnitude" of the matrix. It satisfies the following properties:
1. Non-negativity: ||A|| >= 0 for all matrices A, and ||A|| = 0 if and only if A = 0 (the zero matrix).
2. Homogeneity: ||aA|| = |a| * ||A|| for all matrices A and scalar a.
3. Triangle inequality: ||A + B|| <= ||A|| + ||B|| for all matrices A and B.
The most commonly used norm for matrices is the Frobenius norm. For a matrix A with elements aij, the Frobenius norm is defined as:
||A|| = sqrt(sum(aij^2)).
Other commonly used norms for matrices include the 1-norm (maximum column sum), the infinity norm (maximum row sum), and the spectral norm (largest singular value).
The norm of a vector or matrix provides a way to measure its "length" or "size". It is a fundamental concept in linear algebra and has many applications in various fields, including optimization, machine learning, and signal processing.
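NumPy can compute the common matrix norms as well. A brief sketch (the matrix here is an arbitrary example):
```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

print(np.linalg.norm(A, 'fro'))   # Frobenius norm: sqrt(1 + 4 + 9 + 16) = sqrt(30)
print(np.linalg.norm(A, 1))       # maximum absolute column sum: 2 + 4 = 6
print(np.linalg.norm(A, np.inf))  # maximum absolute row sum: 3 + 4 = 7
print(np.linalg.norm(A, 2))       # spectral norm: largest singular value
```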
Consider the vector v = (3, 4) in R^2. We can calculate its Euclidean norm as follows:
||v|| = sqrt(3^2 + 4^2) = sqrt(9 + 16) = sqrt(25) = 5.
The Euclidean norm of v is 5, which represents the "length" or "size" of the vector.
## Exercise
Calculate the Euclidean norm of the following vectors:
u = (1, 2, 3)
v = (4, 5, 6)
### Solution
||u|| = sqrt(1^2 + 2^2 + 3^2) = sqrt(14) ≈ 3.742
||v|| = sqrt(4^2 + 5^2 + 6^2) = sqrt(77) ≈ 8.775
# 8.2. Unitary and Orthogonal Matrices
A unitary matrix U is a square matrix that satisfies the following properties:
1. UU* = U*U = I, where U* is the conjugate transpose of U and I is the identity matrix.
2. The columns of U are orthonormal, meaning that the dot product of any two columns is 0 if they are distinct, and 1 if they are the same.
An orthogonal matrix Q is a square matrix that satisfies the following properties:
1. QQ^T = Q^TQ = I, where Q^T is the transpose of Q.
2. The columns of Q are orthogonal, meaning that the dot product of any two columns is 0.
3. The columns of Q are orthonormal, meaning that the dot product of any two columns is 0 if they are distinct, and 1 if they are the same.
Unitary and orthogonal matrices have many important properties and applications in linear algebra. They preserve the length and angle between vectors, and can be used to solve systems of linear equations, perform rotations and reflections, and diagonalize matrices.
Consider the following matrix:
Q = [1/sqrt(2), -1/sqrt(2); 1/sqrt(2), 1/sqrt(2)]
We can verify that Q is an orthogonal matrix by checking its properties:
1. QQ^T = Q^TQ = I:
Q * Q^T = [1/sqrt(2), -1/sqrt(2); 1/sqrt(2), 1/sqrt(2)] * [1/sqrt(2), 1/sqrt(2); -1/sqrt(2), 1/sqrt(2)] = [1, 0; 0, 1] = I
Q^T * Q = [1/sqrt(2), 1/sqrt(2); -1/sqrt(2), 1/sqrt(2)] * [1/sqrt(2), -1/sqrt(2); 1/sqrt(2), 1/sqrt(2)] = [1, 0; 0, 1] = I
2. The columns of Q are orthogonal:
(1/sqrt(2), 1/sqrt(2)) · (-1/sqrt(2), 1/sqrt(2)) = -1/2 + 1/2 = 0, and each column has length sqrt(1/2 + 1/2) = 1, so the columns are in fact orthonormal.
The matrix Q is orthogonal, as it satisfies the properties of an orthogonal matrix.
## Exercise
Verify whether the following matrix is orthogonal:
Q = [1/sqrt(3), -1/sqrt(3), 1/sqrt(3); 1/sqrt(3), 1/sqrt(3), -1/sqrt(3); -1/sqrt(3), 1/sqrt(3), 1/sqrt(3)]
### Solution
Q^T * Q = [1, -1/3, -1/3; -1/3, 1, -1/3; -1/3, -1/3, 1] ≠ I
For example, the dot product of the first and second columns is (1/sqrt(3))(-1/sqrt(3)) + (1/sqrt(3))(1/sqrt(3)) + (-1/sqrt(3))(1/sqrt(3)) = -1/3, which is not 0.
The matrix Q is not orthogonal, because its columns are not mutually orthogonal and Q^T Q does not equal the identity matrix.
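We can confirm this numerically with a short NumPy check:
```python
import numpy as np

s = 1 / np.sqrt(3)
Q = np.array([[ s, -s,  s],
              [ s,  s, -s],
              [-s,  s,  s]])

print(np.round(Q.T @ Q, 3))             # off-diagonal entries are -1/3, not 0
print(np.allclose(Q.T @ Q, np.eye(3)))  # False: Q is not orthogonal
```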
# 8.3. Distance between Vectors and Subspaces
The distance between a vector v and a subspace W is defined as:
d(v, W) = ||v - proj_W(v)||,
where proj_W(v) is the projection of v onto W. The projection of v onto W is the vector in W that is closest to v.
The distance between a vector v and a subspace W satisfies the following properties:
1. Non-negativity: d(v, W) >= 0 for all vectors v and subspaces W, and d(v, W) = 0 if and only if v is in W.
2. Symmetry: d(v, W) = d(W, v) for all vectors v and subspaces W.
3. Triangle inequality: d(u, W) + d(v, W) >= d(u + v, W) for all vectors u, v and subspaces W.
The distance between a vector and a subspace can be calculated using the orthogonal projection. The orthogonal projection of a vector v onto a subspace W is the vector in W that is closest to v and is orthogonal to W.
Consider the vector v = (1, 2, 3) and the subspace W spanned by the vectors u = (1, 0, 0) and w = (0, 1, 0). We can calculate the distance between v and W as follows:
proj_W(v) = proj_span{u, w}(v) = proj_span{(1, 0, 0), (0, 1, 0)}(1, 2, 3) = (1, 2, 0).
d(v, W) = ||v - proj_W(v)|| = ||(1, 2, 3) - (1, 2, 0)|| = ||(0, 0, 3)|| = sqrt(0^2 + 0^2 + 3^2) = sqrt(9) = 3.
The distance between v and W is 3, which represents how "close" the vector v is to the subspace W.
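The projection can also be computed numerically: if the columns of a matrix B span W, the projection of v onto W is B times the least squares solution of Bx = v. A sketch with NumPy, using the vectors from this example:
```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
B = np.array([[1.0, 0.0],    # columns are u and w, which span W
              [0.0, 1.0],
              [0.0, 0.0]])

coeffs, *_ = np.linalg.lstsq(B, v, rcond=None)
proj = B @ coeffs                    # projection of v onto W: [1, 2, 0]
dist = np.linalg.norm(v - proj)      # distance from v to W: 3.0
print(proj, dist)
```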
## Exercise
Calculate the distance between the vector v = (1, 1) and the subspace W spanned by the vectors u = (1, 0) and w = (0, 1).
### Solution
proj_W(v) = proj_span{u, w}(v) = proj_span{(1, 0), (0, 1)}(1, 1) = (1, 1).
d(v, W) = ||v - proj_W(v)|| = ||(1, 1) - (1, 1)|| = ||(0, 0)|| = 0.
The distance between v and W is 0, which means that v is in the subspace W.
# 8.4. Applications of Norms and Distances
Norms and distances have various applications in different fields. Here are a few examples:
1. Machine Learning: In machine learning, norms and distances are used to measure the similarity between data points. For example, the Euclidean distance is commonly used to measure the distance between two data points in a feature space.
2. Image Processing: Norms and distances are used in image processing to measure the difference between two images. This can be useful for tasks such as image compression, image denoising, and image recognition.
3. Optimization: Norms and distances are used in optimization problems to define the objective function and constraints. For example, in linear programming, the objective function is often defined as the sum of the distances between the current solution and the optimal solution.
4. Physics: Norms and distances are used in physics to measure the magnitude of physical quantities. For example, the Euclidean norm is used to measure the length of a vector representing a physical quantity.
These are just a few examples of the many applications of norms and distances. Norms and distances are fundamental concepts in mathematics and have wide-ranging applications in various fields.
# 9. Singular Value Decomposition
Singular Value Decomposition (SVD) is a powerful tool in linear algebra that decomposes a matrix into three separate matrices. It is widely used in various fields, including data analysis, image processing, and machine learning.
The SVD of a matrix A is given by:
$$A = U \Sigma V^T$$
where U and V are orthogonal matrices, and Σ is a diagonal matrix with non-negative values called singular values.
The SVD has several important properties and applications:
1. Dimensionality Reduction: SVD can be used to reduce the dimensionality of a dataset by selecting a subset of the most important singular values and corresponding columns of U and V. This is particularly useful in data analysis and machine learning tasks where high-dimensional data can be difficult to work with.
2. Matrix Approximation: SVD can be used to approximate a matrix by truncating the singular values and corresponding columns of U and V. This is useful in compressing data or reducing noise in images.
3. Matrix Inversion: SVD can be used to compute the inverse of a matrix by inverting the singular values in Σ. This is particularly useful when the matrix is ill-conditioned or singular.
4. Image Compression: SVD is widely used in image compression algorithms, such as JPEG, to represent images in a more compact form by approximating the original image using a subset of the most important singular values and corresponding columns of U and V.
5. Collaborative Filtering: SVD is used in recommendation systems to predict user preferences by decomposing a user-item matrix into U, Σ, and V and using the resulting matrices to make predictions.
These are just a few examples of the many applications of Singular Value Decomposition. SVD is a powerful tool that can be applied to a wide range of problems in various fields.
# 9.1. Definition and Properties
Singular Value Decomposition (SVD) is a factorization of a matrix into three separate matrices: U, Σ, and V. This decomposition has several important properties.
The matrix U is an orthogonal matrix, meaning its columns are orthogonal unit vectors. The matrix V is also an orthogonal matrix. The matrix Σ is a diagonal matrix, meaning all its entries outside the main diagonal are zero. The entries on the main diagonal of Σ are called singular values.
The singular values in Σ are non-negative and are arranged in descending order. The columns of U and V correspond to the singular values in Σ. The number of non-zero singular values is equal to the rank of the matrix.
The singular value decomposition of a matrix A can be written as:
$$A = U \Sigma V^T$$
where the superscript T denotes the transpose of a matrix.
The SVD has several important properties:
1. The columns of U are the left singular vectors of A, and the columns of V are the right singular vectors of A.
2. The singular values in Σ represent the square roots of the eigenvalues of A^T A.
3. The singular values in Σ can be used to compute the Frobenius norm of A.
4. The SVD is unique up to sign changes in the columns of U and V.
These properties make the SVD a powerful tool in linear algebra and have many applications in various fields.
# 9.2. Calculating SVD
To calculate the singular value decomposition of a matrix A, we can use various algorithms, such as the Golub-Reinsch algorithm or the Jacobi algorithm. These algorithms involve iterative procedures that converge to the desired decomposition.
However, in practice, we can use built-in functions in programming languages or mathematical software to compute the SVD. For example, in Python, we can use the numpy library's svd function:
```python
import numpy as np
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
U, S, Vt = np.linalg.svd(A)
```
The function np.linalg.svd returns three matrices: U, Σ, and Vt. The matrix U contains the left singular vectors, Σ contains the singular values arranged in descending order, and Vt contains the transpose of the matrix V, which contains the right singular vectors.
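We can check the decomposition by reconstructing A from the three factors; note that the singular values come back as a vector and must be placed on the diagonal of Σ. A short sketch:
```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
U, S, Vt = np.linalg.svd(A)

Sigma = np.diag(S)                        # valid here because A is square
A_reconstructed = U @ Sigma @ Vt

print(np.allclose(A, A_reconstructed))    # True
```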
We can also use the svd function in other programming languages and mathematical software, such as MATLAB or R.
# 9.3. Applications of SVD
The singular value decomposition has numerous applications in various fields. Here are a few examples:
1. Image compression: SVD can be used to compress images by reducing the dimensionality of the image matrix. The singular values can be truncated, and the compressed image can be reconstructed using a subset of the singular values and corresponding singular vectors.
2. Data analysis: SVD can be used for dimensionality reduction in data analysis. It can help identify the most important features in a dataset and remove noise or irrelevant information.
3. Collaborative filtering: SVD is widely used in recommendation systems to predict user preferences. By decomposing the user-item matrix, we can identify latent factors that contribute to user preferences and make personalized recommendations.
4. Text mining: SVD can be used to analyze and summarize large text corpora. It can help identify important topics and relationships between documents.
These are just a few examples of the many applications of SVD. Its versatility and usefulness make it an essential tool in data analysis, machine learning, and other fields.
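As an illustration of the compression idea, here is a minimal sketch (using NumPy, with a random matrix standing in for an image) that builds a rank-k approximation from the k largest singular values:
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 80))        # stand-in for an image or data matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)

k = 10                           # keep only the 10 largest singular values
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# A_k is the best rank-k approximation of A in the Frobenius-norm sense
print(A.shape, A_k.shape)
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))   # relative approximation error
```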
# 10. Least Squares and Data Fitting
In many real-world situations, we encounter data that does not fit perfectly into a mathematical model. There may be measurement errors, noise, or other factors that cause the data to deviate from the model. In such cases, we can use the method of least squares to find the best-fit line or curve that minimizes the overall error.
The least squares method is based on the principle of minimizing the sum of the squared differences between the observed data points and the predicted values from the model. This is done by adjusting the parameters of the model to find the values that minimize the error.
The least squares method can be used to fit data to a linear model, polynomial model, exponential model, or any other mathematical function. It is widely used in various fields, including physics, engineering, economics, and social sciences.
Suppose we have a set of data points (x, y) that we want to fit to a straight line model of the form y = mx + b. We can use the least squares method to find the values of m and b that minimize the sum of the squared differences between the observed y-values and the predicted values from the model.
Let's say we have the following data points:
(x, y) = {(1, 2), (2, 3), (3, 5), (4, 6)}
To find the best-fit line, we need to minimize the sum of the squared differences between the observed y-values and the predicted values. This can be written as:
E = (2 - (m*1 + b))^2 + (3 - (m*2 + b))^2 + (5 - (m*3 + b))^2 + (6 - (m*4 + b))^2
Our goal is to find the values of m and b that minimize E. This can be done by taking the partial derivatives of E with respect to m and b, setting them equal to zero, and solving the resulting system of equations.
After solving the equations, we find that m = 1.4 and b = 0.5, so the best-fit line is given by:
y = 1.4x + 0.5
This line minimizes the sum of the squared differences between the observed data points and the predicted values.
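A quick numerical check of this result, using NumPy's built-in least squares line fit:
```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.0, 5.0, 6.0])

m, b = np.polyfit(x, y, 1)   # least squares fit of a line y = m*x + b
print(m, b)                  # 1.4 0.5
```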
## Exercise
Fit the following data points to a quadratic model of the form y = ax^2 + bx + c using the least squares method:
(x, y) = {(1, 1), (2, 4), (3, 9), (4, 16)}
Find the values of a, b, and c that minimize the sum of the squared differences between the observed y-values and the predicted values.
### Solution
To fit the data to a quadratic model, we need to minimize the sum of the squared differences between the observed y-values and the predicted values. This can be written as:
E = (1 - (a*1^2 + b*1 + c))^2 + (4 - (a*2^2 + b*2 + c))^2 + (9 - (a*3^2 + b*3 + c))^2 + (16 - (a*4^2 + b*4 + c))^2
To find the values of a, b, and c that minimize E, we need to take the partial derivatives of E with respect to a, b, and c, set them equal to zero, and solve the resulting system of equations.
After solving the equations, we find that the best-fit quadratic model is given by:
y = 1x^2 + 0x + 0
This model minimizes the sum of the squared differences between the observed data points and the predicted values.
# 10.1. Linear Regression
Linear regression is a statistical method used to model the relationship between a dependent variable and one or more independent variables. It is commonly used to predict or estimate the value of the dependent variable based on the values of the independent variables.
In linear regression, we assume that the relationship between the dependent variable and the independent variables can be approximated by a linear equation of the form:
y = mx + b
where y is the dependent variable, x is the independent variable, m is the slope of the line, and b is the y-intercept.
The goal of linear regression is to find the values of m and b that minimize the sum of the squared differences between the observed y-values and the predicted values from the linear equation. This is done using the least squares method, which we discussed in the previous section.
Linear regression can be used to analyze and predict trends, make forecasts, and understand the relationship between variables. It is widely used in fields such as economics, finance, social sciences, and engineering.
Suppose we want to analyze the relationship between the number of hours studied and the score obtained in a test. We collect data from 10 students and obtain the following results:
Hours studied (x): [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
Test score (y): [65, 67, 70, 72, 75, 78, 80, 82, 85, 88]
We can use linear regression to find the best-fit line that represents the relationship between the hours studied and the test score.
To find the values of m and b that minimize the sum of the squared differences, we can use the following formulas:
m = (nΣxy - ΣxΣy) / (nΣx^2 - (Σx)^2)
b = (Σy - mΣx) / n
where n is the number of data points, Σxy is the sum of the products of x and y, Σx is the sum of x, Σy is the sum of y, and Σx^2 is the sum of the squares of x.
Using these formulas with n = 10, Σx = 65, Σy = 762, Σxy = 5163, and Σx^2 = 505, we find that the best-fit line is approximately:

y ≈ 2.55x + 59.65
This line represents the relationship between the number of hours studied and the test score.
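The same formulas translate directly into code (a sketch assuming NumPy; the arrays hold the hours and scores listed above):

```python
import numpy as np

x = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 11], dtype=float)
y = np.array([65, 67, 70, 72, 75, 78, 80, 82, 85, 88], dtype=float)
n = len(x)

# Closed-form least squares formulas for the slope m and intercept b
m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)
b = (np.sum(y) - m * np.sum(x)) / n
print(round(m, 2), round(b, 2))   # roughly 2.55 and 59.65
```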
## Exercise
Fit the following data points to a linear model using the least squares method:
(x, y) = {(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)}
Find the values of m and b that minimize the sum of the squared differences between the observed y-values and the predicted values.
### Solution
To fit the data to a linear model, we need to minimize the sum of the squared differences between the observed y-values and the predicted values. This can be written as:
E = (2 - (m*1 + b))^2 + (4 - (m*2 + b))^2 + (6 - (m*3 + b))^2 + (8 - (m*4 + b))^2 + (10 - (m*5 + b))^2
To find the values of m and b that minimize E, we need to take the partial derivatives of E with respect to m and b, set them equal to zero, and solve the resulting system of equations.
After solving the equations, we find that the best-fit line is given by:
y = 2x
This line minimizes the sum of the squared differences between the observed data points and the predicted values.
# 10.2. Least Squares Solutions
In the previous section, we learned about linear regression and how to find the best-fit line that represents the relationship between two variables. However, there are cases where a linear equation cannot accurately represent the relationship between the variables. In these cases, we can use the method of least squares to find the best approximation to the data.
The method of least squares is a mathematical technique used to minimize the sum of the squared differences between the observed data points and the predicted values from a model. It is commonly used in regression analysis to find the best-fit line or curve.
To find the least squares solution, we need to define an error function that measures the difference between the observed data points and the predicted values. The most common error function used is the sum of the squared differences:
E = Σ(y - ŷ)^2
where E is the error, y is the observed value, and ŷ is the predicted value.
The goal is to find the values of the model parameters that minimize the error function. This is done by taking the partial derivatives of the error function with respect to each parameter, setting them equal to zero, and solving the resulting system of equations.
The least squares solution provides the values of the model parameters that best approximate the data. It is important to note that the least squares solution may not provide an exact fit to the data, but it minimizes the overall error.
Suppose we have a set of data points (x, y) and we want to find the best-fit line that represents the relationship between the variables. We can use the method of least squares to find the line that minimizes the sum of the squared differences between the observed y-values and the predicted values.
Let's consider the following data points:
(x, y) = {(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)}
To find the least squares solution, we need to minimize the error function E = Σ(y - ŷ)^2. For a linear model y = mx + b, the predicted value ŷ is given by ŷ = mx + b.
Taking the partial derivatives of E with respect to m and b, setting them equal to zero, and solving the resulting system of equations, we find that the least squares solution is:
m = 2
b = 0
Therefore, the best-fit line that represents the relationship between the variables is y = 2x.
## Exercise
Fit the following data points to a quadratic model using the least squares method:
(x, y) = {(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)}
Find the values of a, b, and c that minimize the sum of the squared differences between the observed y-values and the predicted values.
### Solution
To fit the data to a quadratic model, we need to minimize the sum of the squared differences between the observed y-values and the predicted values. This can be written as:
E = (1 - (a*1^2 + b*1 + c))^2 + (4 - (a*2^2 + b*2 + c))^2 + (9 - (a*3^2 + b*3 + c))^2 + (16 - (a*4^2 + b*4 + c))^2 + (25 - (a*5^2 + b*5 + c))^2
To find the values of a, b, and c that minimize E, we need to take the partial derivatives of E with respect to a, b, and c, set them equal to zero, and solve the resulting system of equations.
After solving the equations, we find that the best-fit quadratic model is given by:
y = x^2
This model minimizes the sum of the squared differences between the observed data points and the predicted values.
# 10.3. Data Fitting with Matrices
To fit data to a model using matrices, we first need to define a matrix equation that represents the relationship between the variables. The matrix equation can be written as:
A * x = b
where A is a matrix that represents the independent variables, x is a vector that represents the model parameters, and b is a vector that represents the observed values.
The goal is to find the values of x that minimize the error function E = ||A * x - b||^2, where || || denotes the Euclidean norm. Taking the derivative of E with respect to x and setting it equal to zero leads to the normal equations A^T * A * x = A^T * b, a square system whose solution is the least squares solution.
The normal equations can be solved with standard techniques such as Gaussian elimination or LU (or Cholesky) decomposition; for better numerical stability, the least squares problem can also be solved directly using a QR or singular value decomposition of A. These methods allow us to efficiently handle large systems of equations and find the best-fit parameters for the model.
Suppose we have a set of data points (x, y) and we want to fit them to a linear model y = mx + b using the least squares method. We can represent the data using matrices and solve the matrix equation A * x = b to find the best-fit parameters.
Let's consider the following data points:
(x, y) = {(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)}
To fit the data to a linear model, we can define the matrix A and the vectors x and b as follows:
A = [[1, 1], [2, 1], [3, 1], [4, 1], [5, 1]]
x = [m, b]
b = [2, 4, 6, 8, 10]
The matrix equation A * x = b can be written as:
[[1, 1], [2, 1], [3, 1], [4, 1], [5, 1]] * [m, b] = [2, 4, 6, 8, 10]
To solve this equation, we can use the method of least squares. Taking the derivative of the error function E = ||A * x - b||^2 with respect to x, setting it equal to zero, and solving the resulting system of equations, we find that the best-fit parameters are:
m = 2
b = 0
Therefore, the best-fit line that represents the relationship between the variables is y = 2x.
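Here is how the same calculation looks in code (a sketch assuming NumPy; np.linalg.lstsq solves the least squares problem for a rectangular A):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 4, 6, 8, 10], dtype=float)

# Design matrix with columns [x, 1] for the model y = m*x + b
A = np.column_stack([x, np.ones_like(x)])

# Least squares solution of A @ params ≈ y
params, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
m, b = params
print(m, b)   # 2.0 and approximately 0.0
```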
## Exercise
Fit the following data points to a quadratic model using matrices:
(x, y) = {(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)}
Define the matrix equation A * x = b and solve it to find the best-fit parameters for the quadratic model.
### Solution
To fit the data to a quadratic model, we can define the matrix equation A * x = b as follows:
A = [[1, 1, 1], [4, 2, 1], [9, 3, 1], [16, 4, 1], [25, 5, 1]]
x = [a, b, c]
b = [1, 4, 9, 16, 25]
The matrix equation A * x = b can be written as:
[[1, 1, 1], [4, 2, 1], [9, 3, 1], [16, 4, 1], [25, 5, 1]] * [a, b, c] = [1, 4, 9, 16, 25]
To solve this equation, we can use the method of least squares. Taking the derivative of the error function E = ||A * x - b||^2 with respect to x, setting it equal to zero, and solving the resulting system of equations, we find that the best-fit parameters are:
a = 1
b = 0
c = 0
Therefore, the best-fit quadratic model is y = x^2.
# 11. Applications of Matrix Methods
**Image Processing**
Image processing is the manipulation of images using mathematical operations. Matrices are used to represent digital images, where each element of the matrix corresponds to a pixel in the image. Various matrix operations can be applied to images, such as scaling, rotation, and filtering. These operations allow us to enhance images, remove noise, and extract useful information.
One example of image processing using matrices is image convolution. Convolution is a mathematical operation that combines two functions to produce a third function. In image processing, convolution is used to apply filters to images.
Let's say we have a grayscale image represented by a matrix A. We can apply a filter represented by a matrix B to the image using convolution. The resulting image C is obtained by sliding the filter over the image and computing the sum of element-wise products at each position.
The convolution operation can be written as:
C(i, j) = sum(A(k, l) * B(i-k, j-l))
where (i, j) are the coordinates of the pixel in the resulting image, (k, l) are the coordinates of the pixel in the filter, and sum is the summation over all (k, l) values.
## Exercise
Apply the following filter to the grayscale image represented by the matrix A using convolution:
B = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
Compute the resulting image C.
### Solution
To apply the filter B to the image represented by matrix A, we can use convolution. The resulting image C can be computed by sliding the filter over the image and computing the sum of element-wise products at each position.
The convolution operation can be written as:
C(i, j) = sum(A(k, l) * B(i-k, j-l))
where (i, j) are the coordinates of the pixel in the resulting image, (k, l) are the coordinates of the pixel in the filter, and sum is the summation over all (k, l) values.
By applying this operation to the image represented by matrix A and the filter represented by matrix B, we can compute the resulting image C.
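As a concrete sketch (assuming NumPy; the 5×5 grayscale image below is made up for illustration), the formula can be implemented directly as a sliding-window sum of element-wise products. Because the filter B from the exercise is symmetric, flipping the kernel (true convolution) and not flipping it give identical results, so this simple version matches the formula above:

```python
import numpy as np

def filter2d_valid(image, kernel):
    """Apply C(i, j) = sum over (k, l) of image[i+k, j+l] * kernel[k, l],
    keeping only positions where the kernel fits entirely inside the image."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical 5x5 grayscale image (made up for illustration) and the filter from the exercise
A = np.array([[10, 10, 10, 10, 10],
              [10, 50, 50, 50, 10],
              [10, 50, 90, 50, 10],
              [10, 50, 50, 50, 10],
              [10, 10, 10, 10, 10]], dtype=float)
B = np.array([[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]], dtype=float)

C = filter2d_valid(A, B)
print(C)   # response of the edge-detection filter
```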
# 11.1. Markov Chains
Markov chains are mathematical models that describe a sequence of events where the probability of each event depends only on the previous event. Markov chains are used in various applications, such as predicting stock prices, analyzing DNA sequences, and modeling weather patterns.
In a Markov chain, the transition probabilities between states are represented by a matrix called the transition matrix. The transition matrix is a square matrix where each element represents the probability of transitioning from one state to another.
Let's consider a simple example of a Markov chain with two states: sunny and rainy. The transition matrix for this Markov chain is:
P = [[0.8, 0.2], [0.4, 0.6]]
This matrix represents the probabilities of transitioning from one state to another. For example, P[1, 2] represents the probability of transitioning from state 1 (sunny) to state 2 (rainy), which is 0.2.
We can use the transition matrix to analyze the behavior of the Markov chain. For example, we can compute the probability of being in a certain state after a certain number of transitions, or we can compute the long-term behavior of the Markov chain.
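For instance (a sketch assuming NumPy; the choice of starting in the sunny state is an assumption made for illustration), we can propagate a probability distribution through the chain and inspect its long-term behavior:

```python
import numpy as np

# Rows are the current state (sunny, rainy); columns are the next state
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Assume the chain starts in the sunny state: distribution [P(sunny), P(rainy)]
dist = np.array([1.0, 0.0])

for step in range(1, 6):
    dist = dist @ P        # one transition
    print(step, dist)

# Long-term behavior: every row of a high matrix power approaches the same distribution
print(np.linalg.matrix_power(P, 50))
```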
## Exercise
Consider a Markov chain with three states: A, B, and C. The transition matrix for this Markov chain is:
P = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.4, 0.2, 0.4]]
Compute the probability of being in state B after 5 transitions.
### Solution
To compute the probability of being in state B after 5 transitions, we raise the transition matrix P to the power of 5:

P^5 = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.4, 0.2, 0.4]]^5

The answer depends on the starting state: the element in row i and column 2 of P^5 is the probability of being in state B after 5 transitions when the chain starts in state i. For example, if the chain starts in state A, the required probability is the element in the first row and second column of P^5.
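A short sketch (assuming NumPy) makes this concrete; np.linalg.matrix_power computes P^5, and each row of the result holds the 5-step probabilities for one starting state:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

# P^5: row i holds the distribution after 5 transitions when starting in state i
P5 = np.linalg.matrix_power(P, 5)

# Column index 1 corresponds to state B
print("starting from A:", P5[0, 1])
print("starting from B:", P5[1, 1])
print("starting from C:", P5[2, 1])
```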
# 11.2. Network Analysis
Network analysis is the study of complex systems that can be represented as networks or graphs. Networks can represent various systems, such as social networks, transportation networks, and computer networks. Matrices are used to represent networks, where each element of the matrix represents a connection between two nodes.
In network analysis, various matrix operations can be applied to networks, such as finding the shortest path between two nodes, identifying important nodes or edges, and detecting communities or clusters within the network.
One example of network analysis using matrices is the PageRank algorithm. The PageRank algorithm is used by search engines to rank web pages based on their importance. The algorithm uses the link structure of the web to compute a score for each page.
The link structure of the web can be represented as a matrix called the link matrix. The link matrix is a square matrix where each element represents a link from one page to another. The PageRank algorithm uses matrix operations, such as matrix multiplication and eigenvalue computation, to compute the importance scores for each page.
## Exercise
Consider a network with 4 nodes and the following link matrix:
L = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
Compute the importance scores for each page using the PageRank algorithm.
### Solution
To compute the importance scores for each page using the PageRank algorithm, we can use matrix operations, such as matrix multiplication and eigenvalue computation.
First, we need to normalize the link matrix by dividing each column by the sum of its elements. This ensures that the columns of the matrix sum to 1.
Next, we compute the importance scores by iteratively multiplying the normalized link matrix by a vector of importance scores and normalizing the result. We repeat this process until the importance scores converge.
By performing these operations on the link matrix L, we can compute the importance scores for each page using the PageRank algorithm.
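The following sketch (assuming NumPy) carries out exactly these two steps for the 4-node link matrix L. The uniform starting vector and the iteration cap are choices made for illustration, and, matching the simplified description above, no damping factor is used (production PageRank adds one):

```python
import numpy as np

L = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

# Step 1: normalize each column so that it sums to 1
M = L / L.sum(axis=0, keepdims=True)

# Step 2: power iteration starting from a uniform score vector
scores = np.full(4, 0.25)
for _ in range(100):
    new_scores = M @ scores
    new_scores /= new_scores.sum()     # keep the scores summing to 1
    if np.allclose(new_scores, scores):
        break
    scores = new_scores

print(scores)   # importance score of each page
```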
# 11.3. Network Analysis
In addition to the PageRank algorithm, there are other network analysis techniques that use matrices. One such technique is community detection, which aims to identify groups of nodes that are densely connected within a network.
Community detection can be done using matrix operations such as spectral clustering. Spectral clustering uses the eigenvalues and eigenvectors of a matrix to partition the nodes into communities.
Another technique is centrality analysis, which aims to identify the most important nodes in a network. Centrality measures, such as degree centrality and betweenness centrality, can be computed using matrices.
An example of centrality analysis is the calculation of degree centrality. Degree centrality measures the number of connections that a node has in a network. It can be computed using the adjacency matrix of the network.
Consider a network with 5 nodes and the following adjacency matrix:
A = [[0, 1, 1, 0, 0], [1, 0, 1, 1, 0], [1, 1, 0, 0, 1], [0, 1, 0, 0, 1], [0, 0, 1, 1, 0]]
To compute the degree centrality for each node, we sum the elements of each row in the adjacency matrix. The resulting vector represents the degree centrality for each node.
## Exercise
Compute the degree centrality for each node in the network with the adjacency matrix:
A = [[0, 1, 1, 0, 0], [1, 0, 1, 1, 0], [1, 1, 0, 0, 1], [0, 1, 0, 0, 1], [0, 0, 1, 1, 0]]
### Solution
To compute the degree centrality for each node, we sum the elements of each row in the adjacency matrix.
For the given adjacency matrix A, the degree centrality for each node is:
Node 1: 2
Node 2: 3
Node 3: 3
Node 4: 2
Node 5: 2
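In code this is a one-line reduction (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]])

degree_centrality = A.sum(axis=1)   # sum the entries of each row
print(degree_centrality)            # [2 3 3 2 2]
```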
# 11.4. Quantum Computing
Quantum computing is an emerging field that combines principles from physics and computer science to create powerful computers that can solve certain problems more efficiently than classical computers.
At the heart of quantum computing is the concept of a quantum bit, or qubit. Unlike classical bits, which can represent either a 0 or a 1, qubits can exist in a superposition of both states simultaneously. This allows quantum computers to perform computations in parallel and potentially solve problems much faster.
Matrix methods play a crucial role in quantum computing. Quantum gates, which are the building blocks of quantum circuits, are represented by unitary matrices. A matrix U is unitary when its conjugate transpose is its inverse (U†U = I); this property guarantees that applying a gate preserves the norm of the state vector, so the squared magnitudes of the probability amplitudes of the qubits always sum to 1.
One example of a quantum gate is the Hadamard gate, which is represented by the following matrix:
$$
H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
$$
The Hadamard gate is used to create superposition states, where a qubit can exist in both the 0 and 1 states simultaneously. Applying the Hadamard gate to a qubit in the 0 state would result in a superposition state of $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$.
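A small sketch (assuming NumPy) confirms both the unitarity of H and the superposition it creates from the |0⟩ state:

```python
import numpy as np

H = (1 / np.sqrt(2)) * np.array([[1,  1],
                                 [1, -1]])

# Unitarity check: the conjugate transpose of H is its inverse
print(np.allclose(H.conj().T @ H, np.eye(2)))   # True

# Apply H to the |0> state, represented as the column vector [1, 0]
ket0 = np.array([1.0, 0.0])
state = H @ ket0
print(state)                          # [0.7071..., 0.7071...]
print(np.sum(np.abs(state) ** 2))     # total probability remains 1
```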
## Exercise
Create the matrix representation of the Hadamard gate.
### Solution
The matrix representation of the Hadamard gate is:
$$
H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
$$
Menger space
In mathematics, a Menger space is a topological space that satisfies a certain basic selection principle that generalizes σ-compactness. A Menger space is a space in which for every sequence of open covers ${\mathcal {U}}_{1},{\mathcal {U}}_{2},\ldots $ of the space there are finite sets ${\mathcal {F}}_{1}\subset {\mathcal {U}}_{1},{\mathcal {F}}_{2}\subset {\mathcal {U}}_{2},\ldots $ such that the family ${\mathcal {F}}_{1}\cup {\mathcal {F}}_{2}\cup \cdots $ covers the space.
History
In 1924, Karl Menger [1] introduced the following basis property for metric spaces: Every basis of the topology contains a countable family of sets with vanishing diameters that covers the space. Soon thereafter, Witold Hurewicz [2] observed that Menger's basis property can be reformulated to the above form using sequences of open covers.
Menger's conjecture
Menger conjectured that in ZFC every Menger metric space is σ-compact. A. W. Miller and D. H. Fremlin[3] proved that Menger's conjecture is false, by showing that there is, in ZFC, a set of real numbers that is Menger but not σ-compact. The Fremlin-Miller proof was dichotomic, and the set witnessing the failure of the conjecture heavily depends on whether a certain (undecidable) axiom holds or not.
Bartoszyński and Tsaban [4] gave a uniform ZFC example of a Menger subset of the real line that is not σ-compact.
Combinatorial characterization
For subsets of the real line, the Menger property can be characterized using continuous functions into the Baire space $\mathbb {N} ^{\mathbb {N} }$. For functions $f,g\in \mathbb {N} ^{\mathbb {N} }$, write $f\leq ^{*}g$ if $f(n)\leq g(n)$ for all but finitely many natural numbers $n$. A subset $A$ of $\mathbb {N} ^{\mathbb {N} }$ is dominating if for each function $f\in \mathbb {N} ^{\mathbb {N} }$ there is a function $g\in A$ such that $f\leq ^{*}g$. Hurewicz proved that a subset of the real line is Menger iff every continuous image of that space into the Baire space is not dominating. In particular, every subset of the real line of cardinality less than the dominating number ${\mathfrak {d}}$ is Menger.
The cardinality of Bartoszyński and Tsaban's counter-example to Menger's conjecture is ${\mathfrak {d}}$.
Properties
• Every compact, and even σ-compact, space is Menger.
• Every Menger space is a Lindelöf space
• Continuous image of a Menger space is Menger
• The Menger property is closed under taking $F_{\sigma }$ subsets
• Menger's property characterizes filters whose Mathias forcing notion does not add dominating functions.[5]
References
1. Menger, Karl (1924). "Einige Überdeckungssätze der Punktmengenlehre". Selecta Mathematica. pp. 421–444. doi:10.1007/978-3-7091-6110-4_14. ISBN 978-3-7091-7282-7.
2. Hurewicz, Witold (1926). "Über eine verallgemeinerung des Borelschen Theorems". Mathematische Zeitschrift. 24 (1): 401–421. doi:10.1007/bf01216792. S2CID 119867793.
3. Fremlin, David; Miller, Arnold (1988). "On some properties of Hurewicz, Menger and Rothberger" (PDF). Fundamenta Mathematicae. 129: 17–33. doi:10.4064/fm-129-1-17-33.
4. Bartoszyński, Tomek; Tsaban, Boaz (2006). "Hereditary topological diagonalizations and the Menger–Hurewicz Conjectures". Proceedings of the American Mathematical Society. 134 (2): 605–615. arXiv:math/0208224. doi:10.1090/s0002-9939-05-07997-9. S2CID 9931601.
5. Chodounský, David; Repovš, Dušan; Zdomskyy, Lyubomyr (2015-12-01). "Mathias Forcing and Combinatorial Covering Properties of Filters". The Journal of Symbolic Logic. 80 (4): 1398–1410. arXiv:1401.2283. doi:10.1017/jsl.2014.73. ISSN 0022-4812. S2CID 15867466.
|\!|\!|\begin|\!|\!|{document|\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\title|\!|\!|[Maximal|\!|\!| parabolic|\!|\!| regularity|\!|\!| and|\!|\!| energy|\!|\!| estimates|\!|\!|]|\!|\!| |\!|\!|{Combining|\!|\!| maximal|\!|\!| regularity|\!|\!| and|\!|\!| energy|\!|\!| estimates|\!|\!| for|\!|\!|
|\!|\!| time|\!|\!| discretizations|\!|\!|\|\!|\!|\|\!|\!|
|\!|\!| |\!|\!| of|\!|\!| quasilinear|\!|\!| parabolic|\!|\!| equations|\!|\!|}|\!|\!| |\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\author|\!|\!|[Georgios|\!|\!| Akrivis|\!|\!|]|\!|\!|{Georgios|\!|\!| Akrivis|\!|\!|}|\!|\!| |\!|\!|\address|\!|\!|{Department|\!|\!| of|\!|\!| Computer|\!|\!| Science|\!|\!| |\!|\!|\|\!|\!|&|\!|\!| Engineering|\!|\!|,|\!|\!| University|\!|\!| of|\!|\!| Ioannina|\!|\!|,|\!|\!| 451|\!|\!|$|\!|\!|\|\!|\!|,|\!|\!|$10|\!|\!| Ioannina|\!|\!|,|\!|\!| Greece|\!|\!|}|\!|\!| |\!|\!| |\!|\!|\email|\!|\!| |\!|\!|{|\!|\!|\sf|\!|\!|{akrivis|\!|\!|{|\!|\!|\it|\!|\!| |\!|\!|@|\!|\!|}cs|\!|\!|.uoi|\!|\!|.gr|\!|\!|}|\!|\!| |\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\author|\!|\!|[Buyang|\!|\!| Li|\!|\!|]|\!|\!|{Buyang|\!|\!| Li|\!|\!|}|\!|\!| |\!|\!|\address|\!|\!|{Department|\!|\!| of|\!|\!| Applied|\!|\!| Mathematics|\!|\!|,|\!|\!| The|\!|\!| Hong|\!|\!| Kong|\!|\!| Polytechnic|\!|\!| University|\!|\!|,|\!|\!| Kowloon|\!|\!|,|\!|\!| Hong|\!|\!| Kong|\!|\!|.|\!|\!|}|\!|\!| |\!|\!| |\!|\!|\email|\!|\!| |\!|\!|{|\!|\!|\sf|\!|\!|{buyang|\!|\!|.li|\!|\!|{|\!|\!|\it|\!|\!| |\!|\!|@|\!|\!|}polyu|\!|\!|.edu|\!|\!|.hk|\!|\!|}|\!|\!| |\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\author|\!|\!|[Christian|\!|\!| Lubich|\!|\!|]|\!|\!|{Christian|\!|\!| Lubich|\!|\!|}|\!|\!| |\!|\!|\address|\!|\!|{Mathematisches|\!|\!| Institut|\!|\!|,|\!|\!| Universit|\!|\!|\|\!|\!|"at|\!|\!| T|\!|\!|\|\!|\!|"ubingen|\!|\!|,|\!|\!| Auf|\!|\!| der|\!|\!| Morgenstelle|\!|\!|,|\!|\!| |\!|\!| D|\!|\!|-72076|\!|\!| T|\!|\!|\|\!|\!|"ubingen|\!|\!|,|\!|\!| Germany|\!|\!|}|\!|\!| |\!|\!| |\!|\!|\email|\!|\!| |\!|\!|{|\!|\!|\sf|\!|\!|{lubich|\!|\!|{|\!|\!|\it|\!|\!| |\!|\!|@|\!|\!|}na|\!|\!|.uni|\!|\!|-tuebingen|\!|\!|.de|\!|\!|}|\!|\!| |\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\keywords|\!|\!|{BDF|\!|\!| methods|\!|\!|,|\!|\!| maximal|\!|\!| regularity|\!|\!|,|\!|\!| energy|\!|\!| technique|\!|\!|,|\!|\!| parabolic|\!|\!| equations|\!|\!|,|\!|\!| stability|\!|\!|,|\!|\!| maximum|\!|\!| norm|\!|\!| estimates|\!|\!|}|\!|\!| |\!|\!|\subjclass|\!|\!|[2010|\!|\!|]|\!|\!|{Primary|\!|\!| 65M12|\!|\!|,|\!|\!| 65M15|\!|\!|;|\!|\!| Secondary|\!|\!| 65L06|\!|\!|.|\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\date|\!|\!|{|\!|\!|\today|\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{abstract|\!|\!|}|\!|\!| We|\!|\!| analyze|\!|\!| fully|\!|\!| implicit|\!|\!| and|\!|\!| linearly|\!|\!| implicit|\!|\!| backward|\!|\!| difference|\!|\!| formula|\!|\!| |\!|\!|(BDF|\!|\!|)|\!|\!| methods|\!|\!|
|\!|\!|
for|\!|\!| quasilinear|\!|\!| parabolic|\!|\!| equations|\!|\!|,|\!|\!| without|\!|\!| making|\!|\!| any|\!|\!| assumptions|\!|\!| on|\!|\!| the|\!|\!| growth|\!|\!| |\!|\!| |\!|\!| or|\!|\!| decay|\!|\!| of|\!|\!| the|\!|\!| coefficient|\!|\!| functions|\!|\!|.|\!|\!| We|\!|\!| combine|\!|\!| maximal|\!|\!| parabolic|\!|\!| regularity|\!|\!| and|\!|\!| |\!|\!| energy|\!|\!| estimates|\!|\!| to|\!|\!| derive|\!|\!| optimal|\!|\!|-order|\!|\!| error|\!|\!| bounds|\!|\!| for|\!|\!| the|\!|\!| time|\!|\!|-discrete|\!|\!| approximation|\!|\!| |\!|\!| to|\!|\!| the|\!|\!| solution|\!|\!| and|\!|\!| its|\!|\!| gradient|\!|\!| in|\!|\!| the|\!|\!| maximum|\!|\!| norm|\!|\!| and|\!|\!| energy|\!|\!| norm|\!|\!|.|\!|\!| |\!|\!|\end|\!|\!|{abstract|\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|\maketitle|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\section|\!|\!|{Introduction|\!|\!|}|\!|\!|\label|\!|\!|{Sec|\!|\!|:intr|\!|\!|}|\!|\!| In|\!|\!| this|\!|\!| paper|\!|\!| we|\!|\!| study|\!|\!| the|\!|\!| time|\!|\!| discretization|\!|\!| of|\!|\!| quasilinear|\!|\!| parabolic|\!|\!| differential|\!|\!| equations|\!|\!| |\!|\!| by|\!|\!| backward|\!|\!| difference|\!|\!| formulas|\!|\!| |\!|\!|(BDF|\!|\!|)|\!|\!|.|\!|\!| In|\!|\!| contrast|\!|\!| to|\!|\!| the|\!|\!| existing|\!|\!| literature|\!|\!|,|\!|\!| we|\!|\!| allow|\!|\!| for|\!|\!| |\!|\!| solution|\!|\!|-dependent|\!|\!| coefficients|\!|\!| in|\!|\!| the|\!|\!| equation|\!|\!| that|\!|\!| degenerate|\!|\!| as|\!|\!| the|\!|\!| argument|\!|\!| grows|\!|\!| to|\!|\!| |\!|\!| infinity|\!|\!| or|\!|\!| approaches|\!|\!| a|\!|\!| singular|\!|\!| set|\!|\!|.|\!|\!| To|\!|\!| deal|\!|\!| with|\!|\!| such|\!|\!| problems|\!|\!|,|\!|\!| we|\!|\!| need|\!|\!| to|\!|\!| control|\!|\!| the|\!|\!| |\!|\!| maximum|\!|\!| norm|\!|\!| of|\!|\!| the|\!|\!| error|\!|\!| and|\!|\!| possibly|\!|\!| also|\!|\!| of|\!|\!| its|\!|\!| gradient|\!|\!|.|\!|\!| As|\!|\!| we|\!|\!| show|\!|\!| in|\!|\!| this|\!|\!| paper|\!|\!|,|\!|\!| |\!|\!| such|\!|\!| maximum|\!|\!| norm|\!|\!| estimates|\!|\!| for|\!|\!| BDF|\!|\!| time|\!|\!| discretizations|\!|\!| become|\!|\!| available|\!|\!| by|\!|\!| combining|\!|\!| |\!|\!| two|\!|\!| techniques|\!|\!|:|\!|\!| |\!|\!|\begin|\!|\!|{enumerate|\!|\!|}|\!|\!|[|\!|\!|\small|\!|\!| |\!|\!|$|\!|\!|\bullet|\!|\!|$|\!|\!|]|\!|\!|\itemsep|\!|\!|=0pt|\!|\!|
|\!|\!|
|\!|\!|\item|\!|\!| |\!|\!|\emph|\!|\!|{discrete|\!|\!| maximal|\!|\!| parabolic|\!|\!| regularity|\!|\!|}|\!|\!|,|\!|\!| as|\!|\!| studied|\!|\!| in|\!|\!| |\!|\!| Kov|\!|\!|\|\!|\!|'acs|\!|\!|,|\!|\!| Li|\!|\!| |\!|\!|\|\!|\!|&|\!|\!| Lubich|\!|\!| |\!|\!|\cite|\!|\!|{KLL|\!|\!|}|\!|\!| based|\!|\!| on|\!|\!| the|\!|\!| characterization|\!|\!| of|\!|\!| maximal|\!|\!| |\!|\!|$L|\!|\!|^p|\!|\!|$|\!|\!|-regularity|\!|\!| |\!|\!| by|\!|\!| Weis|\!|\!| |\!|\!|\cite|\!|\!|{Weis2|\!|\!|}|\!|\!| and|\!|\!| a|\!|\!| discrete|\!|\!| operator|\!|\!|-valued|\!|\!| Fourier|\!|\!| multiplier|\!|\!| theorem|\!|\!| by|\!|\!| Blunck|\!|\!|
|\!|\!| |\!|\!|\cite|\!|\!|{Blu1|\!|\!|}|\!|\!|;|\!|\!| and|\!|\!| |\!|\!| |\!|\!|\item|\!|\!| |\!|\!|\emph|\!|\!|{energy|\!|\!| estimates|\!|\!|}|\!|\!|,|\!|\!| which|\!|\!| are|\!|\!| familiar|\!|\!| for|\!|\!| implicit|\!|\!| Euler|\!|\!| time|\!|\!| discretizations|\!|\!| |\!|\!| and|\!|\!| have|\!|\!| |\!|\!| become|\!|\!| feasible|\!|\!| for|\!|\!| higher|\!|\!|-order|\!|\!| BDF|\!|\!| methods|\!|\!| |\!|\!|(up|\!|\!| to|\!|\!| order|\!|\!| 5|\!|\!|)|\!|\!| by|\!|\!| the|\!|\!| |\!|\!| Nevan|\!|\!|\|\!|\!|-linna|\!|\!|-Odeh|\!|\!| multiplier|\!|\!| technique|\!|\!| |\!|\!|\cite|\!|\!|{NO|\!|\!|}|\!|\!| as|\!|\!| used|\!|\!| in|\!|\!| Akrivis|\!|\!| |\!|\!|\|\!|\!|&|\!|\!| Lubich|\!|\!| |\!|\!|\cite|\!|\!|{AL|\!|\!|}|\!|\!|.|\!|\!|
|\!|\!|
|\!|\!|\end|\!|\!|{enumerate|\!|\!|}|\!|\!| In|\!|\!| Section|\!|\!|~|\!|\!|\ref|\!|\!|{Sec|\!|\!|:statement|\!|\!|}|\!|\!| we|\!|\!| formulate|\!|\!| the|\!|\!| parabolic|\!|\!| initial|\!|\!| and|\!|\!| boundary|\!|\!| value|\!|\!| problem|\!|\!| |\!|\!| and|\!|\!| its|\!|\!| time|\!|\!| discretization|\!|\!| by|\!|\!| fully|\!|\!| implicit|\!|\!| and|\!|\!| linearly|\!|\!| implicit|\!|\!| BDF|\!|\!| methods|\!|\!|,|\!|\!| and|\!|\!| we|\!|\!| state|\!|\!| |\!|\!| our|\!|\!| main|\!|\!| results|\!|\!|.|\!|\!| For|\!|\!| problems|\!|\!| on|\!|\!| a|\!|\!| bounded|\!|\!| Lipschitz|\!|\!| domain|\!|\!| |\!|\!|$|\!|\!|\varOmega|\!|\!|$|\!|\!| in|\!|\!| |\!|\!|$|\!|\!|{|\!|\!|\mathbb|\!|\!| R|\!|\!|}|\!|\!|^2|\!|\!|$|\!|\!| |\!|\!| or|\!|\!| |\!|\!|$|\!|\!|{|\!|\!|\mathbb|\!|\!| R|\!|\!|}|\!|\!|^3|\!|\!|$|\!|\!| with|\!|\!| sufficiently|\!|\!| regular|\!|\!| solutions|\!|\!|,|\!|\!| we|\!|\!| obtain|\!|\!| optimal|\!|\!|-order|\!|\!| error|\!|\!| bounds|\!|\!| in|\!|\!| the|\!|\!| |\!|\!| maximum|\!|\!| and|\!|\!| energy|\!|\!| norms|\!|\!|.|\!|\!| For|\!|\!| problems|\!|\!| on|\!|\!| bounded|\!|\!| smooth|\!|\!| domains|\!|\!| in|\!|\!| arbitrary|\!|\!| space|\!|\!| dimension|\!|\!| we|\!|\!| obtain|\!|\!| optimal|\!|\!|-order|\!|\!| error|\!|\!| bounds|\!|\!| even|\!|\!| in|\!|\!| the|\!|\!| |\!|\!| |\!|\!|$L|\!|\!|^|\!|\!|\infty|\!|\!|(0|\!|\!|,T|\!|\!|;W|\!|\!|^|\!|\!|{1|\!|\!|,|\!|\!|\infty|\!|\!|}|\!|\!|(|\!|\!|\varOmega|\!|\!|)|\!|\!|)|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$L|\!|\!|^2|\!|\!|(0|\!|\!|,T|\!|\!|;H|\!|\!|^2|\!|\!|(|\!|\!|\varOmega|\!|\!|)|\!|\!|)|\!|\!|$|\!|\!| norms|\!|\!|.|\!|\!|
|\!|\!|
The|\!|\!| proof|\!|\!| of|\!|\!| these|\!|\!| results|\!|\!| makes|\!|\!| up|\!|\!| the|\!|\!| remainder|\!|\!| of|\!|\!| the|\!|\!| paper|\!|\!|.|\!|\!| In|\!|\!| Section|\!|\!|~|\!|\!|\ref|\!|\!|{Sec|\!|\!|:abstract|\!|\!|}|\!|\!| |\!|\!| we|\!|\!| formulate|\!|\!| a|\!|\!| common|\!|\!| abstract|\!|\!| framework|\!|\!| for|\!|\!| both|\!|\!| results|\!|\!| and|\!|\!| give|\!|\!| a|\!|\!| continuous|\!|\!|-time|\!|\!| perturbation|\!|\!| result|\!|\!| after|\!|\!| which|\!|\!| we|\!|\!| model|\!|\!| the|\!|\!| stability|\!|\!| proof|\!|\!| of|\!|\!| BDF|\!|\!| methods|\!|\!| in|\!|\!| |\!|\!| Section|\!|\!|~|\!|\!|\ref|\!|\!|{Sec|\!|\!|:stability|\!|\!|}|\!|\!|.|\!|\!| This|\!|\!| proof|\!|\!| relies|\!|\!| on|\!|\!| time|\!|\!|-discrete|\!|\!| maximal|\!|\!| regularity|\!|\!| and|\!|\!| |\!|\!| energy|\!|\!| estimates|\!|\!|.|\!|\!| In|\!|\!| Section|\!|\!|~|\!|\!|\ref|\!|\!|{Sec|\!|\!|:existence|\!|\!|}|\!|\!| we|\!|\!| study|\!|\!| existence|\!|\!| and|\!|\!| uniqueness|\!|\!| |\!|\!| of|\!|\!| the|\!|\!| numerical|\!|\!| solutions|\!|\!|,|\!|\!| and|\!|\!| in|\!|\!| Section|\!|\!|~|\!|\!|\ref|\!|\!|{Sec|\!|\!|:consistency|\!|\!|}|\!|\!| we|\!|\!| discuss|\!|\!| the|\!|\!| consistency|\!|\!| |\!|\!| error|\!|\!| of|\!|\!| the|\!|\!| fully|\!|\!| and|\!|\!| linearly|\!|\!| implicit|\!|\!| BDF|\!|\!| methods|\!|\!|.|\!|\!| In|\!|\!| the|\!|\!| short|\!|\!| Section|\!|\!|~|\!|\!|\ref|\!|\!|{Sec|\!|\!|:proof|\!|\!|}|\!|\!| we|\!|\!| |\!|\!| combine|\!|\!| the|\!|\!| obtained|\!|\!| estimates|\!|\!| to|\!|\!| prove|\!|\!| the|\!|\!| error|\!|\!| bounds|\!|\!| of|\!|\!| our|\!|\!| main|\!|\!| results|\!|\!|.|\!|\!| In|\!|\!| the|\!|\!| |\!|\!| remaining|\!|\!| Sections|\!|\!|~|\!|\!|\ref|\!|\!|{Sec|\!|\!|:unif|\!|\!|-maxreg|\!|\!|}|\!|\!| and|\!|\!| |\!|\!|\ref|\!|\!|{Sec|\!|\!|:Sobolev|\!|\!|}|\!|\!|,|\!|\!| which|\!|\!| use|\!|\!| different|\!|\!| |\!|\!| techniques|\!|\!| of|\!|\!| analysis|\!|\!|,|\!|\!| it|\!|\!| is|\!|\!| verified|\!|\!| that|\!|\!| the|\!|\!| concrete|\!|\!| parabolic|\!|\!| problem|\!|\!| and|\!|\!| its|\!|\!| time|\!|\!| |\!|\!| discretization|\!|\!| fit|\!|\!| into|\!|\!| our|\!|\!| abstract|\!|\!| framework|\!|\!|.|\!|\!| In|\!|\!| particular|\!|\!|,|\!|\!| the|\!|\!| uniform|\!|\!| discrete|\!|\!| maximal|\!|\!| |\!|\!| parabolic|\!|\!| regularity|\!|\!| of|\!|\!| the|\!|\!| BDF|\!|\!| methods|\!|\!| is|\!|\!| shown|\!|\!| in|\!|\!| the|\!|\!| required|\!|\!| |\!|\!|$L|\!|\!|^q|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$W|\!|\!|^|\!|\!|{|\!|\!|-1|\!|\!|,q|\!|\!|}|\!|\!|$|\!|\!| |\!|\!| settings|\!|\!|.|\!|\!|
|\!|\!|
While|\!|\!| we|\!|\!| restrict|\!|\!| our|\!|\!| attention|\!|\!| in|\!|\!| this|\!|\!| paper|\!|\!| to|\!|\!| semidiscretization|\!|\!| in|\!|\!| time|\!|\!| of|\!|\!| standard|\!|\!| |\!|\!| quasilinear|\!|\!| parabolic|\!|\!| equations|\!|\!| by|\!|\!| BDF|\!|\!| methods|\!|\!|,|\!|\!| the|\!|\!| combination|\!|\!| of|\!|\!| discrete|\!|\!| maximal|\!|\!| |\!|\!| regularity|\!|\!| and|\!|\!| energy|\!|\!| estimates|\!|\!| to|\!|\!| obtain|\!|\!| stability|\!|\!| and|\!|\!| error|\!|\!| bounds|\!|\!| in|\!|\!| the|\!|\!| maximum|\!|\!| norm|\!|\!| |\!|\!| is|\!|\!| useful|\!|\!| for|\!|\!| a|\!|\!| much|\!|\!| wider|\!|\!| range|\!|\!| of|\!|\!| problems|\!|\!|:|\!|\!| for|\!|\!| other|\!|\!| time|\!|\!| discretizations|\!|\!|,|\!|\!| |\!|\!| for|\!|\!| full|\!|\!| |\!|\!| discretizations|\!|\!| |\!|\!| |\!|\!|(in|\!|\!| combination|\!|\!| with|\!|\!| the|\!|\!| discrete|\!|\!| maximal|\!|\!| regularity|\!|\!| of|\!|\!| semi|\!|\!|-discrete|\!|\!| |\!|\!| finite|\!|\!| element|\!|\!| methods|\!|\!|;|\!|\!| see|\!|\!| Li|\!|\!| |\!|\!|\cite|\!|\!|{Li15|\!|\!|}|\!|\!| and|\!|\!| Li|\!|\!| |\!|\!|\|\!|\!|&|\!|\!| Sun|\!|\!| |\!|\!|\cite|\!|\!|{LS15|\!|\!|}|\!|\!|)|\!|\!|,|\!|\!| and|\!|\!| also|\!|\!| for|\!|\!| other|\!|\!| |\!|\!| classes|\!|\!| of|\!|\!| nonlinear|\!|\!| parabolic|\!|\!| equations|\!|\!|.|\!|\!| |\!|\!|
|\!|\!|
|\!|\!|
Such|\!|\!| extensions|\!|\!| are|\!|\!| left|\!|\!| to|\!|\!| future|\!|\!| work|\!|\!|.|\!|\!| The|\!|\!| present|\!|\!| paper|\!|\!| can|\!|\!| thus|\!|\!| be|\!|\!| viewed|\!|\!| as|\!|\!| a|\!|\!| |\!|\!| proof|\!|\!| of|\!|\!| concept|\!|\!| for|\!|\!| this|\!|\!| powerful|\!|\!| approach|\!|\!|.|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\section|\!|\!|{Problem|\!|\!| formulation|\!|\!| and|\!|\!| statement|\!|\!| of|\!|\!| the|\!|\!| main|\!|\!| results|\!|\!|}|\!|\!|\label|\!|\!|{Sec|\!|\!|:statement|\!|\!|}|\!|\!| |\!|\!|\subsection|\!|\!|{Initial|\!|\!| and|\!|\!| boundary|\!|\!| value|\!|\!| problem|\!|\!|}|\!|\!|\label|\!|\!|{SSec|\!|\!|:ivp|\!|\!|}|\!|\!| For|\!|\!| a|\!|\!| bounded|\!|\!| domain|\!|\!| |\!|\!|$|\!|\!|\varOmega|\!|\!|\subset|\!|\!| |\!|\!|{|\!|\!|\mathbb|\!|\!| R|\!|\!|}|\!|\!|^d|\!|\!|$|\!|\!|,|\!|\!| a|\!|\!| positive|\!|\!| |\!|\!|$T|\!|\!|,|\!|\!|$|\!|\!| and|\!|\!| a|\!|\!| given|\!|\!| initial|\!|\!| value|\!|\!| |\!|\!|$u|\!|\!|_0|\!|\!|,|\!|\!|$|\!|\!| |\!|\!| we|\!|\!| consider|\!|\!| the|\!|\!| following|\!|\!| initial|\!|\!| and|\!|\!| boundary|\!|\!| value|\!|\!| problem|\!|\!| |\!|\!| for|\!|\!| a|\!|\!| quasilinear|\!|\!| parabolic|\!|\!| equation|\!|\!|,|\!|\!| with|\!|\!| homogeneous|\!|\!| Dirichlet|\!|\!| boundary|\!|\!| conditions|\!|\!|,|\!|\!| |\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{equation|\!|\!|}|\!|\!| |\!|\!|\label|\!|\!|{ivp|\!|\!|}|\!|\!| |\!|\!|\left|\!|\!| |\!|\!|\|\!|\!|{|\!|\!| |\!|\!|\begin|\!|\!|{alignedat|\!|\!|}|\!|\!|{3|\!|\!|}|\!|\!| |\!|\!| |\!|\!|&|\!|\!|\partial|\!|\!|_t|\!|\!| u|\!|\!| |\!|\!|=|\!|\!|\nabla|\!|\!|\cdot|\!|\!| |\!|\!|\big|\!|\!| |\!|\!|(a|\!|\!|(u|\!|\!|)|\!|\!|\nabla|\!|\!| u|\!|\!|)|\!|\!| |\!|\!|\quad|\!|\!| |\!|\!|&|\!|\!|&|\!|\!| |\!|\!|\text|\!|\!|{in|\!|\!| |\!|\!|}|\!|\!|\|\!|\!| |\!|\!|&|\!|\!|&|\!|\!|\varOmega|\!|\!|\times|\!|\!| |\!|\!|[0|\!|\!|,T|\!|\!|]|\!|\!|,|\!|\!|\|\!|\!|\|\!|\!| |\!|\!|&u|\!|\!| |\!|\!|=0|\!|\!| |\!|\!|\quad|\!|\!| |\!|\!|&|\!|\!|&|\!|\!| |\!|\!|\text|\!|\!|{on|\!|\!| |\!|\!|}|\!|\!|\|\!|\!| |\!|\!|&|\!|\!|&|\!|\!|\partial|\!|\!| |\!|\!|\varOmega|\!|\!|\times|\!|\!| |\!|\!|[0|\!|\!|,T|\!|\!|]|\!|\!|,|\!|\!|\|\!|\!|\|\!|\!| |\!|\!|&u|\!|\!| |\!|\!|(|\!|\!|\cdot|\!|\!|,0|\!|\!|)|\!|\!|=u|\!|\!|_0|\!|\!| |\!|\!|&|\!|\!|&|\!|\!| |\!|\!|\text|\!|\!|{in|\!|\!| |\!|\!|}|\!|\!|\|\!|\!| |\!|\!| |\!|\!|&|\!|\!|&|\!|\!|\varOmega|\!|\!|.|\!|\!|\|\!|\!|\|\!|\!| |\!|\!|\end|\!|\!|{alignedat|\!|\!|}|\!|\!| |\!|\!|\right|\!|\!| |\!|\!|.|\!|\!| |\!|\!|\end|\!|\!|{equation|\!|\!|}|\!|\!|
|\!|\!|
We|\!|\!| assume|\!|\!| that|\!|\!| |\!|\!|$a|\!|\!|$|\!|\!| is|\!|\!| a|\!|\!| positive|\!|\!| smooth|\!|\!| function|\!|\!| on|\!|\!| the|\!|\!| real|\!|\!| line|\!|\!|,|\!|\!| but|\!|\!| impose|\!|\!| otherwise|\!|\!| no|\!|\!| growth|\!|\!| or|\!|\!| decay|\!|\!| |\!|\!| conditions|\!|\!| on|\!|\!| |\!|\!|$a|\!|\!|$|\!|\!| |\!|\!|(for|\!|\!| example|\!|\!|,|\!|\!| we|\!|\!| may|\!|\!| have|\!|\!| |\!|\!|$a|\!|\!|(u|\!|\!|)|\!|\!|=e|\!|\!|^u|\!|\!|$|\!|\!|)|\!|\!|.|\!|\!| By|\!|\!| the|\!|\!| maximum|\!|\!| principle|\!|\!|,|\!|\!| |\!|\!| the|\!|\!| solution|\!|\!| of|\!|\!| the|\!|\!| above|\!|\!| problem|\!|\!| is|\!|\!| bounded|\!|\!|,|\!|\!| |\!|\!| provided|\!|\!| the|\!|\!| initial|\!|\!| data|\!|\!| |\!|\!|$u|\!|\!|_0|\!|\!|$|\!|\!| is|\!|\!| bounded|\!|\!|;|\!|\!| |\!|\!| note|\!|\!| that|\!|\!| then|\!|\!| there|\!|\!| exists|\!|\!| a|\!|\!| positive|\!|\!| number|\!|\!| |\!|\!|$K|\!|\!|$|\!|\!| |\!|\!|(depending|\!|\!| on|\!|\!| |\!|\!|
|\!|\!|$|\!|\!|\|\!|\!||u|\!|\!|\|\!|\!|||\!|\!|_|\!|\!|{L|\!|\!|^|\!|\!|\infty|\!|\!|(0|\!|\!|,T|\!|\!|;L|\!|\!|^|\!|\!|\infty|\!|\!|(|\!|\!| |\!|\!|\varOmega|\!|\!|)|\!|\!|)|\!|\!|}|\!|\!|$|\!|\!|)|\!|\!| such|\!|\!| that|\!|\!| |\!|\!|$K|\!|\!|^|\!|\!|{|\!|\!|-1|\!|\!|}|\!|\!|\le|\!|\!| a|\!|\!|(u|\!|\!|(x|\!|\!|,t|\!|\!|)|\!|\!|)|\!|\!|\le|\!|\!| K|\!|\!|$|\!|\!|,|\!|\!| |\!|\!| for|\!|\!| all|\!|\!| |\!|\!|$x|\!|\!|\in|\!|\!| |\!|\!|\overline|\!|\!|\varOmega|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$0|\!|\!|\le|\!|\!| t|\!|\!|\le|\!|\!| T|\!|\!|$|\!|\!|.|\!|\!| However|\!|\!|,|\!|\!| boundedness|\!|\!| of|\!|\!| the|\!|\!| |\!|\!| numerical|\!|\!| approximations|\!|\!| is|\!|\!| not|\!|\!| obvious|\!|\!|.|\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{remark|\!|\!|}|\!|\!|[direct|\!|\!| extensions|\!|\!|]|\!|\!|\label|\!|\!|{Re|\!|\!|:ext|\!|\!|}|\!|\!| Our|\!|\!| techniques|\!|\!| and|\!|\!| results|\!|\!| can|\!|\!| be|\!|\!| directly|\!|\!| extended|\!|\!| to|\!|\!| the|\!|\!| following|\!|\!| cases|\!|\!|:|\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{enumerate|\!|\!|}|\!|\!|[|\!|\!|\small|\!|\!| |\!|\!|$|\!|\!|\bullet|\!|\!|$|\!|\!|]|\!|\!|\itemsep|\!|\!|=0pt|\!|\!| |\!|\!|\item|\!|\!| The|\!|\!| function|\!|\!| |\!|\!|$a|\!|\!|$|\!|\!| is|\!|\!| continuously|\!|\!| differentiable|\!|\!| and|\!|\!| positive|\!|\!| only|\!|\!| in|\!|\!| an|\!|\!| interval|\!|\!| |\!|\!|$I|\!|\!|\subset|\!|\!| |\!|\!|{|\!|\!|\mathbb|\!|\!| R|\!|\!|}|\!|\!|$|\!|\!| that|\!|\!| contains|\!|\!| all|\!|\!| exact|\!|\!| solution|\!|\!| values|\!|\!|,|\!|\!| |\!|\!|$u|\!|\!|(x|\!|\!|,t|\!|\!|)|\!|\!|\in|\!|\!| I|\!|\!|;|\!|\!|$|\!|\!| in|\!|\!| particular|\!|\!|,|\!|\!| singularities|\!|\!| in|\!|\!| |\!|\!|$a|\!|\!|$|\!|\!| are|\!|\!| allowed|\!|\!|.|\!|\!| |\!|\!|\item|\!|\!| |\!|\!|$a|\!|\!|$|\!|\!| is|\!|\!| a|\!|\!| function|\!|\!| of|\!|\!| |\!|\!|$x|\!|\!|,t|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$u|\!|\!|$|\!|\!|:|\!|\!| |\!|\!|$a|\!|\!|=a|\!|\!|(x|\!|\!|,t|\!|\!|,u|\!|\!|)|\!|\!|.|\!|\!|$|\!|\!| |\!|\!|\item|\!|\!| |\!|\!|$a|\!|\!|(u|\!|\!|)|\!|\!|$|\!|\!| is|\!|\!| a|\!|\!| positive|\!|\!| definite|\!|\!| symmetric|\!|\!| |\!|\!|$d|\!|\!|\times|\!|\!| d|\!|\!|$|\!|\!| matrix|\!|\!|.|\!|\!| |\!|\!|\item|\!|\!| A|\!|\!| semilinear|\!|\!| term|\!|\!| |\!|\!|$f|\!|\!|(x|\!|\!|,t|\!|\!|,u|\!|\!|,|\!|\!|\nabla|\!|\!| u|\!|\!|)|\!|\!|$|\!|\!| with|\!|\!| a|\!|\!| smooth|\!|\!| function|\!|\!| |\!|\!|$f|\!|\!|$|\!|\!| is|\!|\!| added|\!|\!| to|\!|\!| the|\!|\!| right|\!|\!|-hand|\!|\!| |\!|\!| side|\!|\!| of|\!|\!| the|\!|\!| differential|\!|\!| equation|\!|\!|.|\!|\!| No|\!|\!| growth|\!|\!| conditions|\!|\!| on|\!|\!| |\!|\!|$f|\!|\!|$|\!|\!| need|\!|\!| to|\!|\!| be|\!|\!| imposed|\!|\!|,|\!|\!| but|\!|\!| we|\!|\!| |\!|\!| assume|\!|\!| smoothness|\!|\!| of|\!|\!| the|\!|\!| exact|\!|\!| solution|\!|\!|.|\!|\!| |\!|\!|\end|\!|\!|{enumerate|\!|\!|}|\!|\!| |\!|\!|\end|\!|\!|{remark|\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|\emph|\!|\!|{Operator|\!|\!| notation|\!|\!|}|\!|\!|:|\!|\!| We|\!|\!| consider|\!|\!| |\!|\!|$A|\!|\!|(w|\!|\!|)|\!|\!|$|\!|\!| defined|\!|\!| by|\!|\!| |\!|\!| |\!|\!|$A|\!|\!|(w|\!|\!|)u|\!|\!|:|\!|\!|=|\!|\!|-|\!|\!|\nabla|\!|\!|\cdot|\!|\!|\big|\!|\!| |\!|\!|(a|\!|\!|(w|\!|\!|)|\!|\!|\nabla|\!|\!| u|\!|\!|)|\!|\!|$|\!|\!| as|\!|\!| a|\!|\!| linear|\!|\!| operator|\!|\!| on|\!|\!| |\!|\!|$L|\!|\!|^q|\!|\!|(|\!|\!|\varOmega|\!|\!|)|\!|\!|,|\!|\!|$|\!|\!| self|\!|\!|-adjoint|\!|\!| on|\!|\!| |\!|\!|$L|\!|\!|^2|\!|\!|(|\!|\!|\varOmega|\!|\!|)|\!|\!|.|\!|\!|$|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\subsection|\!|\!|{Fully|\!|\!| and|\!|\!| linearly|\!|\!| implicit|\!|\!| BDF|\!|\!| methods|\!|\!|}|\!|\!|\label|\!|\!|{SSec|\!|\!|:bdf|\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|\subsubsection|\!|\!|{Fully|\!|\!| implicit|\!|\!| methods|\!|\!|}|\!|\!|\label|\!|\!|{SSSec|\!|\!|:fully|\!|\!|-im|\!|\!|}|\!|\!| We|\!|\!| let|\!|\!| |\!|\!|$t|\!|\!|_n|\!|\!|=n|\!|\!|\tau|\!|\!|,|\!|\!| n|\!|\!|=0|\!|\!|,|\!|\!|\dotsc|\!|\!|,N|\!|\!|,|\!|\!|$|\!|\!| be|\!|\!| a|\!|\!| uniform|\!|\!| partition|\!|\!| of|\!|\!| the|\!|\!| interval|\!|\!| |\!|\!|$|\!|\!|[0|\!|\!|,T|\!|\!|]|\!|\!|$|\!|\!| with|\!|\!| time|\!|\!| step|\!|\!| |\!|\!|$|\!|\!|\tau|\!|\!|=T|\!|\!|/N|\!|\!|,|\!|\!|$|\!|\!| and|\!|\!| consider|\!|\!| general|\!|\!| |\!|\!| |\!|\!|$k|\!|\!|$|\!|\!|-step|\!|\!| backward|\!|\!| difference|\!|\!| formulas|\!|\!| |\!|\!|(BDF|\!|\!|)|\!|\!| for|\!|\!| the|\!|\!| discretization|\!|\!| of|\!|\!| |\!|\!|\eqref|\!|\!|{ivp|\!|\!|}|\!|\!|:|\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{equation|\!|\!|}|\!|\!| |\!|\!|\label|\!|\!|{bdf|\!|\!|:fully|\!|\!|-im1|\!|\!|}|\!|\!| |\!|\!|\frac|\!|\!| 1|\!|\!|\tau|\!|\!| |\!|\!|\sum|\!|\!|_|\!|\!|{j|\!|\!|=0|\!|\!|}|\!|\!|^k|\!|\!|\delta|\!|\!|_ju|\!|\!|_|\!|\!|{n|\!|\!|-j|\!|\!|}|\!|\!|=|\!|\!|-A|\!|\!|(u|\!|\!|_n|\!|\!|)u|\!|\!|_n|\!|\!|,|\!|\!| |\!|\!|\end|\!|\!|{equation|\!|\!|}|\!|\!|
|\!|\!|
for|\!|\!| |\!|\!|$n|\!|\!|=k|\!|\!|,|\!|\!|\dotsc|\!|\!|,N|\!|\!|,|\!|\!|$|\!|\!| where|\!|\!| |\!|\!|$u|\!|\!|_1|\!|\!|,|\!|\!|\dotsc|\!|\!|,u|\!|\!|_|\!|\!|{k|\!|\!|-1|\!|\!|}|\!|\!|$|\!|\!| are|\!|\!| sufficiently|\!|\!| accurate|\!|\!| given|\!|\!| starting|\!|\!| |\!|\!| approximations|\!|\!| and|\!|\!| the|\!|\!| coefficients|\!|\!| of|\!|\!| the|\!|\!| method|\!|\!| are|\!|\!| given|\!|\!| by|\!|\!|
|\!|\!|
|\!|\!|\|\!|\!|[|\!|\!|\delta|\!|\!|(|\!|\!|\zeta|\!|\!|)|\!|\!|=|\!|\!|\sum|\!|\!|_|\!|\!|{j|\!|\!|=0|\!|\!|}|\!|\!|^k|\!|\!|\delta|\!|\!|_j|\!|\!|\zeta|\!|\!|^j|\!|\!|=|\!|\!|\sum|\!|\!|_|\!|\!|{|\!|\!| |\!|\!|\ell|\!|\!|=1|\!|\!|}|\!|\!|^k|\!|\!|\frac|\!|\!| 1|\!|\!| |\!|\!|\ell|\!|\!| |\!|\!|(1|\!|\!|-|\!|\!|\zeta|\!|\!|)|\!|\!|^|\!|\!| |\!|\!|\ell|\!|\!|.|\!|\!|\|\!|\!|]|\!|\!|
|\!|\!|
The|\!|\!| method|\!|\!| is|\!|\!| known|\!|\!| to|\!|\!| have|\!|\!| order|\!|\!| |\!|\!|$k|\!|\!|$|\!|\!| and|\!|\!| to|\!|\!| be|\!|\!| A|\!|\!|$|\!|\!|(|\!|\!|\alpha|\!|\!|)|\!|\!|$|\!|\!|-stable|\!|\!| with|\!|\!| angle|\!|\!| |\!|\!|$|\!|\!|\alpha|\!|\!|=90|\!|\!|^|\!|\!|\circ|\!|\!|,90|\!|\!|^|\!|\!|\circ|\!|\!|,86|\!|\!|.03|\!|\!|^|\!|\!|\circ|\!|\!|,73|\!|\!|.35|\!|\!|^|\!|\!|\circ|\!|\!|,|\!|\!| 51|\!|\!|.84|\!|\!|^|\!|\!|\circ|\!|\!|,17|\!|\!|.84|\!|\!|^|\!|\!|\circ|\!|\!|$|\!|\!| for|\!|\!| |\!|\!|$k|\!|\!|=1|\!|\!|,|\!|\!|\dotsc|\!|\!|,6|\!|\!|,|\!|\!|$|\!|\!| respectively|\!|\!|;|\!|\!| see|\!|\!| |\!|\!|\cite|\!|\!|[Section|\!|\!| V|\!|\!|.2|\!|\!|]|\!|\!|{HW|\!|\!|}|\!|\!|.|\!|\!| |\!|\!| |\!|\!| |\!|\!|
A|\!|\!|$|\!|\!|(|\!|\!|\alpha|\!|\!|)|\!|\!|$|\!|\!|-stability|\!|\!| is|\!|\!| equivalent|\!|\!| to|\!|\!| |\!|\!|$|\!|\!|||\!|\!|\arg|\!|\!|\delta|\!|\!|(|\!|\!|\zeta|\!|\!|)|\!|\!|||\!|\!|\le|\!|\!| |\!|\!|\pi|\!|\!|-|\!|\!|\alpha|\!|\!|$|\!|\!| for|\!|\!| |\!|\!|$|\!|\!|||\!|\!|\zeta|\!|\!|||\!|\!|\le|\!|\!| 1|\!|\!|.|\!|\!|$|\!|\!| Note|\!|\!| that|\!|\!| the|\!|\!| first|\!|\!|-|\!|\!| and|\!|\!| second|\!|\!|-order|\!|\!| BDF|\!|\!| methods|\!|\!| are|\!|\!| A|\!|\!|-stable|\!|\!|,|\!|\!| that|\!|\!| is|\!|\!|,|\!|\!| |\!|\!|
|\!|\!|$|\!|\!|\Real|\!|\!| |\!|\!|\delta|\!|\!|(|\!|\!|\zeta|\!|\!|)|\!|\!|\ge|\!|\!| 0|\!|\!|$|\!|\!| for|\!|\!| |\!|\!|$|\!|\!|||\!|\!|\zeta|\!|\!|||\!|\!|\le|\!|\!| 1|\!|\!|.|\!|\!|$|\!|\!| |\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\subsubsection|\!|\!|{Linearly|\!|\!| implicit|\!|\!| methods|\!|\!|}|\!|\!|\label|\!|\!|{SSSec|\!|\!|:lin|\!|\!|-im|\!|\!|}|\!|\!| Since|\!|\!| equation|\!|\!| |\!|\!|\eqref|\!|\!|{bdf|\!|\!|:fully|\!|\!|-im1|\!|\!|}|\!|\!| is|\!|\!| in|\!|\!| general|\!|\!| nonlinear|\!|\!| in|\!|\!| the|\!|\!| unknown|\!|\!| |\!|\!|$u|\!|\!|_n|\!|\!|,|\!|\!|$|\!|\!| |\!|\!| we|\!|\!| will|\!|\!| also|\!|\!| consider|\!|\!| the|\!|\!| following|\!|\!| linearly|\!|\!| implicit|\!|\!| modification|\!|\!|:|\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{equation|\!|\!|}|\!|\!| |\!|\!|\label|\!|\!|{bdf|\!|\!|:lin|\!|\!|-im1|\!|\!|}|\!|\!| |\!|\!|\frac|\!|\!| 1|\!|\!|\tau|\!|\!| |\!|\!|\sum|\!|\!|_|\!|\!|{j|\!|\!|=0|\!|\!|}|\!|\!|^k|\!|\!|\delta|\!|\!|_ju|\!|\!|_|\!|\!|{n|\!|\!|-j|\!|\!|}|\!|\!|=|\!|\!| |\!|\!|-A|\!|\!|\Big|\!|\!| |\!|\!|(|\!|\!|\sum|\!|\!|\limits|\!|\!|^|\!|\!|{k|\!|\!|-1|\!|\!|}|\!|\!|_|\!|\!|{j|\!|\!|=0|\!|\!|}|\!|\!|\gamma|\!|\!|_ju|\!|\!|_|\!|\!|{n|\!|\!|-j|\!|\!|-1|\!|\!|}|\!|\!|\Big|\!|\!| |\!|\!|)u|\!|\!|_n|\!|\!|,|\!|\!| |\!|\!|\end|\!|\!|{equation|\!|\!|}|\!|\!|
|\!|\!|
for|\!|\!| |\!|\!|$n|\!|\!|=k|\!|\!|,|\!|\!|\dotsc|\!|\!|,N|\!|\!|,|\!|\!|$|\!|\!| with|\!|\!| the|\!|\!| coefficients|\!|\!| |\!|\!|$|\!|\!|\gamma|\!|\!|_0|\!|\!|,|\!|\!|\dotsc|\!|\!|,|\!|\!|\gamma|\!|\!|_|\!|\!|{k|\!|\!|-1|\!|\!|}|\!|\!|$|\!|\!| given|\!|\!| by|\!|\!|
|\!|\!|
|\!|\!|\|\!|\!|[|\!|\!|\gamma|\!|\!| |\!|\!|(|\!|\!|\zeta|\!|\!|)|\!|\!|=|\!|\!|\frac|\!|\!| 1|\!|\!| |\!|\!| |\!|\!|\zeta|\!|\!|\big|\!|\!| |\!|\!|[1|\!|\!|-|\!|\!|(1|\!|\!|-|\!|\!|\zeta|\!|\!|)|\!|\!|^k|\!|\!|\big|\!|\!| |\!|\!|]|\!|\!|=|\!|\!|\sum|\!|\!|_|\!|\!|{i|\!|\!|=0|\!|\!|}|\!|\!|^|\!|\!|{k|\!|\!|-1|\!|\!|}|\!|\!| |\!|\!|\gamma|\!|\!|_i|\!|\!|\zeta|\!|\!|^i|\!|\!|.|\!|\!|\|\!|\!|]|\!|\!|
|\!|\!|
Notice|\!|\!| that|\!|\!| now|\!|\!| the|\!|\!| unknown|\!|\!| |\!|\!|$u|\!|\!|_n|\!|\!|$|\!|\!| appears|\!|\!| in|\!|\!| |\!|\!|\eqref|\!|\!|{bdf|\!|\!|:lin|\!|\!|-im1|\!|\!|}|\!|\!| only|\!|\!| linearly|\!|\!|;|\!|\!| therefore|\!|\!|,|\!|\!| to|\!|\!| advance|\!|\!| with|\!|\!| |\!|\!|\eqref|\!|\!|{bdf|\!|\!|:lin|\!|\!|-im1|\!|\!|}|\!|\!| in|\!|\!| time|\!|\!|,|\!|\!| we|\!|\!| only|\!|\!| need|\!|\!| to|\!|\!| solve|\!|\!|,|\!|\!| at|\!|\!| each|\!|\!| time|\!|\!| level|\!|\!|,|\!|\!| |\!|\!| just|\!|\!| one|\!|\!| linear|\!|\!| equation|\!|\!|,|\!|\!| which|\!|\!| reduces|\!|\!| to|\!|\!| a|\!|\!| linear|\!|\!| system|\!|\!| if|\!|\!| we|\!|\!| discretize|\!|\!| also|\!|\!| in|\!|\!| space|\!|\!|.|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|
|\!|\!|\subsection|\!|\!|{Main|\!|\!| results|\!|\!|}|\!|\!|\label|\!|\!|{Sec|\!|\!|:result|\!|\!|}|\!|\!|
|\!|\!|
In|\!|\!| this|\!|\!| paper|\!|\!| we|\!|\!| prove|\!|\!| the|\!|\!| following|\!|\!| two|\!|\!| results|\!|\!|.|\!|\!| |\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{theorem|\!|\!|}|\!|\!|\label|\!|\!|{Th|\!|\!|:main|\!|\!|-1|\!|\!|}|\!|\!| Let|\!|\!| |\!|\!| |\!|\!|$|\!|\!|\varOmega|\!|\!|\subset|\!|\!|{|\!|\!|\mathbb|\!|\!| R|\!|\!|}|\!|\!|^d|\!|\!|$|\!|\!|,|\!|\!| |\!|\!|$d|\!|\!|=2|\!|\!|,3|\!|\!|$|\!|\!|,|\!|\!| be|\!|\!| a|\!|\!| bounded|\!|\!| Lipschitz|\!|\!| domain|\!|\!|.|\!|\!| |\!|\!| |\!|\!| If|\!|\!| the|\!|\!| solution|\!|\!| |\!|\!|$u|\!|\!|$|\!|\!| of|\!|\!| |\!|\!|\eqref|\!|\!|{ivp|\!|\!|}|\!|\!| is|\!|\!| sufficiently|\!|\!| regular|\!|\!| and|\!|\!| the|\!|\!| initial|\!|\!| approximations|\!|\!| |\!|\!| are|\!|\!| sufficiently|\!|\!| accurate|\!|\!|,|\!|\!| then|\!|\!| there|\!|\!| exist|\!|\!| |\!|\!|$|\!|\!|{|\!|\!|\tau|\!|\!|_0|\!|\!|>0|\!|\!|}|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$C|\!|\!|<|\!|\!|\infty|\!|\!|$|\!|\!| such|\!|\!| that|\!|\!| for|\!|\!| |\!|\!| stepsizes|\!|\!| |\!|\!|$|\!|\!|\tau|\!|\!|\le|\!|\!|\tau|\!|\!|_0|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$N|\!|\!|\tau|\!|\!|\le|\!|\!| T|\!|\!|$|\!|\!|,|\!|\!| the|\!|\!| |\!|\!| fully|\!|\!| and|\!|\!| linearly|\!|\!| implicit|\!|\!| BDF|\!|\!| methods|\!|\!|
|\!|\!| |\!|\!|\eqref|\!|\!|{bdf|\!|\!|:fully|\!|\!|-im1|\!|\!|}|\!|\!| and|\!|\!| |\!|\!|\eqref|\!|\!|{bdf|\!|\!|:lin|\!|\!|-im1|\!|\!|}|\!|\!|,|\!|\!| respectively|\!|\!|,|\!|\!| of|\!|\!| order|\!|\!| |\!|\!|$k|\!|\!|\le|\!|\!| 5|\!|\!|$|\!|\!|,|\!|\!| have|\!|\!| unique|\!|\!| |\!|\!|
|\!|\!| numerical|\!|\!| solutions|\!|\!| |\!|\!|$u|\!|\!|_n|\!|\!|\in|\!|\!| C|\!|\!|(|\!|\!|\overline|\!|\!| |\!|\!|\varOmega|\!|\!|)|\!|\!|\cap|\!|\!| H|\!|\!|^1|\!|\!|_0|\!|\!|(|\!|\!|\varOmega|\!|\!|)|\!|\!|$|\!|\!| with|\!|\!| |\!|\!|
|\!|\!| errors|\!|\!| bounded|\!|\!| by|\!|\!| |\!|\!|\begin|\!|\!|{align|\!|\!|}|\!|\!| |\!|\!|\label|\!|\!|{est|\!|\!|-C1|\!|\!|}|\!|\!| |\!|\!|\max|\!|\!|_|\!|\!|{k|\!|\!|\leq|\!|\!| n|\!|\!|\leq|\!|\!| N|\!|\!|}|\!|\!|
|\!|\!|\|\!|\!||u|\!|\!|_n|\!|\!|-u|\!|\!|(t|\!|\!|_n|\!|\!|)|\!|\!|\|\!|\!|||\!|\!|_|\!|\!|{L|\!|\!|^|\!|\!|\infty|\!|\!|(|\!|\!|\varOmega|\!|\!|)|\!|\!|}|\!|\!| |\!|\!|\le|\!|\!| C|\!|\!|\tau|\!|\!|^k|\!|\!|,|\!|\!| |\!|\!|\|\!|\!|\|\!|\!|[1mm|\!|\!|]|\!|\!| |\!|\!|\label|\!|\!|{est|\!|\!|-H2|\!|\!|}|\!|\!|
|\!|\!|\Bigl|\!|\!|(|\!|\!| |\!|\!|\tau|\!|\!| |\!|\!|\sum|\!|\!|_|\!|\!|{n|\!|\!|=k|\!|\!|}|\!|\!|^N|\!|\!| |\!|\!|\|\!|\!||u|\!|\!|_n|\!|\!|-u|\!|\!|(t|\!|\!|_n|\!|\!|)|\!|\!|\|\!|\!|||\!|\!|_|\!|\!|{H|\!|\!|^1|\!|\!|(|\!|\!|\varOmega|\!|\!|)|\!|\!|}|\!|\!|^2|\!|\!| |\!|\!|\Bigr|\!|\!|)|\!|\!|^|\!|\!|{1|\!|\!|/2|\!|\!|}|\!|\!|\le|\!|\!| C|\!|\!| |\!|\!|\tau|\!|\!|^k|\!|\!|.|\!|\!| |\!|\!|\end|\!|\!|{align|\!|\!|}|\!|\!| |\!|\!|\end|\!|\!|{theorem|\!|\!|}|\!|\!|
|\!|\!|
|\!|\!|\begin|\!|\!|{theorem|\!|\!|}|\!|\!|\label|\!|\!|{Th|\!|\!|:main|\!|\!|-2|\!|\!|}|\!|\!| Let|\!|\!| the|\!|\!| bounded|\!|\!| domain|\!|\!| |\!|\!|$|\!|\!|\varOmega|\!|\!|\subset|\!|\!|{|\!|\!|\mathbb|\!|\!| R|\!|\!|}|\!|\!|^d|\!|\!|$|\!|\!| be|\!|\!| smooth|\!|\!|,|\!|\!| |\!|\!| where|\!|\!| |\!|\!|$d|\!|\!|\geq|\!|\!| 1|\!|\!|$|\!|\!|.|\!|\!| |\!|\!| |\!|\!| If|\!|\!| the|\!|\!| solution|\!|\!| |\!|\!|$u|\!|\!|$|\!|\!| of|\!|\!| |\!|\!|\eqref|\!|\!|{ivp|\!|\!|}|\!|\!| is|\!|\!| sufficiently|\!|\!| regular|\!|\!| and|\!|\!| the|\!|\!| initial|\!|\!| approximations|\!|\!| |\!|\!| are|\!|\!| sufficiently|\!|\!| accurate|\!|\!|,|\!|\!| then|\!|\!| there|\!|\!| exist|\!|\!| |\!|\!|$|\!|\!|{|\!|\!|\tau|\!|\!|_0|\!|\!|>0|\!|\!|}|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$C|\!|\!|<|\!|\!|\infty|\!|\!|$|\!|\!| such|\!|\!| that|\!|\!| for|\!|\!| |\!|\!| stepsizes|\!|\!| |\!|\!|$|\!|\!|\tau|\!|\!|\le|\!|\!|\tau|\!|\!|_0|\!|\!|$|\!|\!| and|\!|\!| |\!|\!|$N|\!|\!|\tau|\!|\!|\le|\!|\!| T|\!|\!|$|\!|\!|,|\!|\!| the|\!|\!| |\!|\!| fully|\!|\!| and|\!|\!| linearly|\!|\!| implicit|\!|\!| BDF|\!|\!| methods|\!|\!|
\eqref{bdf:fully-im1} and \eqref{bdf:lin-im1}, respectively, of order $k\le 5$, have
unique numerical solutions
$u_n\in C^1(\overline\varOmega)\cap H^2(\varOmega)\cap H^1_0(\varOmega)$
with errors bounded by
\begin{align}
\label{est-C1-smooth}
\max_{k\leq n\leq N} \bigl( \|u_n-u(t_n)\|_{L^\infty(\varOmega)}
+ \|\nabla u_n-\nabla u(t_n)\|_{L^\infty(\varOmega)} \bigr) &\le C\tau^k, \\
\label{est-H2-smooth}
\Bigl( \tau \sum_{n=k}^N \|u_n-u(t_n)\|_{H^2(\varOmega)}^2 \Bigr)^{1/2} &\le C\tau^k.
\end{align}
\end{theorem}

Let us first comment on the regularity requirements. With some $q>d$, we need to assume in Theorem~\ref{Th:main-1}
\begin{equation}\label{reg-1}
u\in C^{k+1}\big([0,T];W^{-1,q}(\varOmega)\big) \cap C^{k}\big([0,T];L^q(\varOmega)\big) \cap C\big([0,T];W^{1,q}(\varOmega)\big),
\end{equation}
and in Theorem~\ref{Th:main-2}
\begin{equation}\label{reg-2}
u\in C^{k+1}\big([0,T];L^q(\varOmega)\big) \cap C^{k}\big([0,T];W^{1,q}(\varOmega)\big) \cap C\big([0,T];W^{2,q}(\varOmega)\big).
\end{equation}
The errors in the initial data $e_n=u_n-u(t_n)$, for $n=0,\dotsc,k-1$, need to satisfy the following bounds: in Theorem~\ref{Th:main-1},
\begin{equation} \label{InitialError}
\bigg(\tau \sum_{n=1}^{k-1} \Big\|\frac{e_n-e_{n-1}}{\tau}\Big\|^p_{W^{-1,q}(\varOmega)}\bigg)^{\frac{1}{p}}
+\bigg(\tau \sum_{n=1}^{k-1} \|e_n\|^p_{W^{1,q}(\varOmega)}\bigg)^{\frac{1}{p}} \le C\tau^k,
\end{equation}
for some $p$ such that $2/p+d/q<1$, and similarly in Theorem~\ref{Th:main-2} with
\begin{equation} \label{InitialError2}
\bigg(\tau \sum_{n=1}^{k-1} \Big\|\frac{e_n-e_{n-1}}{\tau}\Big\|^p_{L^q(\varOmega)}\bigg)^{\frac{1}{p}}
+\bigg(\tau \sum_{n=1}^{k-1} \|e_n\|^p_{W^{2,q}(\varOmega)}\bigg)^{\frac{1}{p}} \le C\tau^k.
\end{equation}

It can be shown that these bounds are satisfied when the starting values are obtained with an algebraically stable implicit Runge--Kutta method of stage order $k$, such as the $(k-1)$-stage Radau collocation method.
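Since the sums in \eqref{InitialError} and \eqref{InitialError2} contain at most $k-1$ terms, a simple sufficient condition (recorded here only for orientation, and not in sharpest form) is that the starting errors satisfy $\|e_n\|_{W^{1,q}(\varOmega)}\le C\tau^k$ and $\|e_n-e_{n-1}\|_{W^{-1,q}(\varOmega)}\le C\tau^{k+1}$ for $1\le n\le k-1$ in the case of \eqref{InitialError}, and analogously for \eqref{InitialError2}; indeed, then
\[
\bigg(\tau \sum_{n=1}^{k-1} \|e_n\|^p_{W^{1,q}(\varOmega)}\bigg)^{\frac{1}{p}}
\le \bigl((k-1)\tau\bigr)^{\frac{1}{p}} \max_{1\le n\le k-1}\|e_n\|_{W^{1,q}(\varOmega)} \le C\tau^k,
\]
and the difference-quotient terms are bounded in the same way.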

Error bounds for BDF time discretizations of quasilinear parabolic differential equations have previously been obtained by Zl\'amal \cite{Zla}, for $k\le 2$, and by Akrivis \& Lubich \cite{AL} for $k\le 5$, using energy estimates. Implicit--explicit multistep methods for a class of such equations have been analyzed by Akrivis, Crouzeix \& Makridakis \cite{ACM2} by spectral and Fourier techniques. In those papers it is, however, assumed that the operators $A(u)$ are uniformly elliptic for $u\in H^1_0(\varOmega)$, which amounts to assuming that the coefficient function $a$ is bounded on all of ${\mathbb R}$ and has a strictly positive lower bound on all of ${\mathbb R}$. This is a restrictive assumption that is not satisfied in many applications.

This restriction can be overcome only by controlling the maximum norm of the numerical solution, which is a major contribution of the present paper. Since no maximum principle is available for the BDF methods of order higher than 1, the boundedness of the numerical solution is not obvious. While there are some results on maximum norm error bounds for implicit Euler and Crank--Nicolson time discretizations of \emph{linear} parabolic equations by Schatz, Thom\'ee \& Wahlbin \cite{STW1}, we are not aware of any such results for quasilinear parabolic equations as studied here.

In our view, even more interesting than the above particular results is the novel technique by which they are proved: by combining \emph{discrete maximal regularity} and \emph{energy estimates}.

The combination of these techniques will actually yield $O(\tau^k)$ error bounds in somewhat stronger norms than stated in Theorems~\ref{Th:main-1} and \ref{Th:main-2}. Moreover, we provide a concise abstract framework in which the combination of maximal regularity and energy estimates can be done and which allows for a common proof for both Theorems~\ref{Th:main-1} and \ref{Th:main-2}, as well as for direct extensions to more general quasilinear parabolic problems than \eqref{ivp}.


\section{Abstract framework and basic approach in continuous time}
\label{Sec:abstract}

As a preparation for the proof of Theorems~\ref{Th:main-1} and \ref{Th:main-2}, it is helpful to illustrate the approach taken in this paper first in a time-continuous and more abstract setting, which in particular applies to $A(w)u=-\nabla\cdot (a(w) \nabla u)$ as considered above.

\subsection{Abstract framework}
\label{subsec:abstract-framework}

We formulate an abstract setting that works with Hilbert spaces $V\subset H$ and Banach spaces $D\subset W\subset X$ as follows: Let $H$ be the basic Hilbert space, and let $V$ be another Hilbert space that is densely and continuously imbedded in $H$. Together with the dual spaces we then have the familiar Gelfand triple of Hilbert spaces $V \subset H=H' \subset V'$ with dense and continuous imbeddings, and such that the restriction to $V\times H$ of the duality pairing $\langle\cdot,\cdot\rangle$ between $V$ and $V'$ and of the inner product $(\cdot,\cdot)$ on $H$ coincide. Let further $X \subset V'$ be a Banach space and let $D\subset W\subset X$ be further Banach spaces. We denote the corresponding norms by $\|\cdot\|_H$, $\|\cdot\|_V$, $\|\cdot\|_X$, $\|\cdot\|_W$, and $\|\cdot\|_D$, respectively, and summarize the continuous imbeddings:
\[
\begin{array}{ccccc}
V & \subset & H & \subset & V' \\
\cup & & \cup & & \cup \\
D & \subset & W & \subset & X
\end{array}
\]
For the existence of the numerical solution of the fully implicit BDF method we will further require that $D$ is compactly imbedded in $W$.

We have primarily the following two situations in mind:
\begin{enumerate}[(P1)]\itemsep=0pt
\item For a bounded Lipschitz domain $\varOmega\subset{\mathbb R}^d$ (with $d\le 3$) we consider the usual Hilbert spaces $H=L^2(\varOmega)$ and $V=H^1_0(\varOmega)$, and in addition the Banach spaces $X=W^{-1,q}(\varOmega)$ for suitable $q>d$, $W=C^\alpha(\overline\varOmega)$ with a small $\alpha>0$, and $D=W^{1,q}(\varOmega)\cap H^1_0(\varOmega)$.
\item For a smooth bounded domain $\varOmega\subset{\mathbb R}^d$ (with arbitrary dimension $d$) we consider again the Hilbert spaces $H=L^2(\varOmega)$ and $V=H^1_0(\varOmega)$, and the Banach spaces $X=L^q(\varOmega)$ with $q>d$, $W=C^{1,\alpha}(\overline\varOmega)$ with a small $\alpha>0$, and $D=W^{2,q}(\varOmega)\cap H^1_0(\varOmega)$.
\end{enumerate}

We will work with the following five conditions:

(i) (\emph{$W$-locally uniform maximal regularity}) For $w\in W$, the linear operator $-A(w)$ is the generator of an analytic semigroup on $X$ with domain $D(A(w))=D$ independent of~$w$, and has maximal $L^p$-regularity: for $1<p<\infty$ there exists a real $C_{p}(w)$ such that the solution of the inhomogeneous initial value problem
\begin{equation} \label{lin-eq-Av}
\dot u(t) + A(w)u(t) = f(t) \qquad (0<t\le T), \qquad u(0)=0,
\end{equation}
is bounded by
\[
\| \dot u \|_{L^p(0,T;X)} + \| A(w) u \|_{L^p(0,T;X)} \le C_{p}(w) \| f \|_{L^p(0,T;X)} \qquad \forall f \in L^p(0,T;X).
\]
Moreover, the bound is uniform in bounded sets of $W$: for every $R>0$,
\[
C_{p}(w) \le C_{p,R} \quad\text{if }\ \|w\|_{W} \le R.
\]
We further require that the graph norms $\| \cdot \|_X + \| A(w)\cdot \|_X$ are uniformly equivalent to the norm $\|\cdot\|_D$ for $\|w\|_{W} \le R.$

(ii) (\emph{Control of the $W$-norm by maximal regularity}) For some $1<p<\infty$, we have a continuous imbedding $W^{1,p}(0,T;X) \cap L^p(0,T;D)\subset L^\infty(0,T;W)$: there is $C_p<\infty$ such that for all $u \in W^{1,p}(0,T;X) \cap L^p(0,T;D)$,
\[
\| u \|_{L^\infty(0,T;W)} \le C_p \Bigl( \| \dot u \|_{L^p(0,T;X)} + \| u \|_{L^p(0,T;D)} \Bigr).
\]
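For orientation only, we sketch why such an imbedding is available in the situations (P1) and (P2) for exponents $p$ with $2/p+d/q<1$, the condition already encountered in \eqref{InitialError} and \eqref{InitialError2}: by the standard trace imbedding
\[
W^{1,p}(0,T;X)\cap L^p(0,T;D) \subset C\big([0,T];(X,D)_{1-1/p,p}\big),
\]
it suffices that the real interpolation space $(X,D)_{1-1/p,p}$ is continuously imbedded in $W$; in (P1) this interpolation space is contained in a Besov space of differentiability order $1-2/p$ over $L^q(\varOmega)$, which imbeds into $C^\alpha(\overline\varOmega)$ for some small $\alpha>0$ whenever $1-2/p>d/q$, and in (P2) the differentiability orders are raised by one.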

(iii) (\emph{$V$-ellipticity}) $A(w)$ extends by density to a bounded linear operator $A(w):V \to V'$, and for all $w\in W$ with $W$-norm bounded by $R$, the operator $A(w)$ is uniformly $V$-elliptic:
\[
\alpha_R \| u \|_{V}^2 \le \langle u, A(w) u \rangle \le M_R \| u \|_{V}^2 \qquad \forall u \in V
\]
with $\alpha_R>0$ and $M_R<\infty$, where $\langle\cdot,\cdot\rangle$ denotes the duality pairing between $V$ and~$V'$.

(iv) (\emph{Operators with different arguments: $X$-norm estimate}) For every $\varepsilon>0$, there is $C_{\varepsilon,R}<\infty$ such that for all $v,w\in W$ that are bounded by $R$ in the $W$-norm, and for all $u\in D$,
\[
\| (A(v)-A(w))u \|_{X} \le \Bigl( \varepsilon\, \| v-w \|_{W} + C_{\varepsilon,R}\, \| v-w \|_{H} \Bigr) \|u\|_D.
\]

(v) (\emph{Operators with different arguments: $V'$-norm estimate}) For every $\varepsilon>0$, there is $C_{\varepsilon,R}<\infty$ such that for all $v,w\in W$ that are bounded by $R$ in the $W$-norm, and for all $u\in D$,
\[
\| (A(v)-A(w))u \|_{V'} \le \Bigl( \varepsilon\, \| v-w \|_{V} + C_{\varepsilon,R}\, \| v-w \|_{H} \Bigr)\, \|u\|_{D}.
\]
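For the divergence-form operators $A(w)u=-\nabla\cdot (a(w) \nabla u)$ considered above, the structure behind (v) (and similarly (iv)) can be seen from the identity $(A(v)-A(w))u=-\nabla\cdot\big((a(v)-a(w))\nabla u\big)$, which gives, for every $\varphi\in V=H^1_0(\varOmega)$,
\[
\big\langle \varphi, (A(v)-A(w))u \big\rangle = \int_\varOmega \big(a(v)-a(w)\big)\, \nabla u\cdot\nabla\varphi\, dx;
\]
estimates of the stated form then follow from H\"older's inequality, the local Lipschitz continuity of $a$, and interpolation between $H=L^2(\varOmega)$ and $W$. This is only meant to indicate the structure of the argument; the precise verification is part of the following lemma.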

\begin{lemma} \label{lem:framework}
The operators given by $A(w)u=-\nabla\cdot (a(w) \nabla u)$ with homogeneous Dirichlet boundary conditions and a smooth positive function $a(\cdot)$ satisfy \emph{(i)--(v)} in the situations \emph{(P1)} and \emph{(P2)} above.
\end{lemma}

The proof of this lemma will be given in Section~\ref{Sec:unif-maxreg} and Section~\ref{Sec:Sobolev}. In the cases (P1) and (P2) we actually have a stronger bound than (v):
\[
\| (A(v)-A(w))u \|_{V'} \le C_{R}\, \| v-w \|_{H}\, \|u\|_{D}.
\]

\subsection{A perturbation result}

Suppose now that $u\in W^{1,p}(0,T;X)\cap L^p(0,T;D)$ solves
\begin{equation} \label{ivp-numer}
\left\{
\begin{aligned}
&\dot u(t) + A(u(t))u(t)=f(t), \qquad 0<t\le T,\\
&u(0)=u_0,
\end{aligned}
\right.
\end{equation}
and $u^\star\in W^{1,\infty}(0,T;W)\cap L^p(0,T;D)$ solves the perturbed equation
\begin{equation} \label{ivp2}
\left\{
\begin{aligned}
&\dot u^\star(t) + A(u^\star(t))u^\star(t)=f(t)+d(t), \quad 0<t\le T,\\
&u^\star(0)=u_0,
\end{aligned}
\right.
\end{equation}
where the defect $d$ is bounded by
\begin{equation} \label{est-delta}
\|d\|_{L^p(0,T;X)}\le \delta.
\end{equation}

As an illustration of the combined use of maximal $L^p$-regularity and energy estimates we prove the following result. Theorems~\ref{Th:main-1} and \ref{Th:main-2} will be proved by transferring these arguments to the time-discrete setting.

\begin{proposition} \label{prop:basic}
In the above setting of {\rm(i)--(v)} and \eqref{ivp-numer}--\eqref{est-delta} with $\delta>0$ sufficiently small, the error $e=u-u^\star$ between the solutions of \eqref{ivp-numer} and \eqref{ivp2} is bounded by
\begin{align*}
\| \dot e \|_{L^p(0,T;X)} + \| e \|_{L^p(0,T;D)} &\le C\delta, \\
\| e \|_{L^\infty(0,T;W)} &\le C\delta,
\end{align*}
where $C$ depends on $\| u^\star \|_{W^{1,\infty}(0,T;W)}$ and $\| u^\star \|_{L^p(0,T;D)}$, but is independent of $\delta$.
\end{proposition}

\begin{proof}
(a) (\emph{Error equation}) We rewrite the equation in \eqref{ivp-numer} in the form
\[
\dot u(t) + A(u^\star(t))u(t)=\big(A(u^\star(t))-A(u(t))\big)u(t)+f(t)
\]
and see that the error $e=u-u^\star$ satisfies the error equation
\begin{equation} \label{er-eq}
\dot e(t) + A(u^\star(t))e(t)=\big(A(u^\star(t))-A(u(t))\big)u(t)-d(t).
\end{equation}
Obviously, $e(0)=0$. To simplify the notation, we denote $\bar A(t):=A(u^\star(t))$, and rewrite the error equation with some arbitrary $\bar t\ge t$ as
\begin{equation} \label{er-eq2}
\dot e(t) + \bar A(\bar t)e(t)=\big( \bar A(\bar t)-\bar A(t)\big)e(t) +\big(A(u^\star(t))-A(u(t))\big)u(t)-d(t),
\end{equation}
i.e.,
\begin{equation} \label{er-eq3}
\dot e(t) + \bar A(\bar t)e(t)=\widehat d(t),
\end{equation}
with
\begin{equation} \label{er-eq4}
\widehat d(t):=\big( \bar A(\bar t)-\bar A(t)\big)e(t)+\big(A(u^\star(t))-A(u(t))\big)u(t)-d(t).
\end{equation}

(b) (\emph{Maximal regularity}) We denote
\[
R=\| u^\star \|_{L^\infty(0,T;W)}+1 \quad\text{and}\quad B=\| u^\star \|_{L^p(0,T;D)}+1
\]
and let $0<t^*\le T$ be maximal such that
\begin{equation} \label{t-star}
\| u \|_{L^\infty(0,t^*;W)} \le R \quad\text{and}\quad \| u \|_{L^p(0,t^*;D)} \le B.
\end{equation}
By the maximal $L^p$-regularity (i) we immediately obtain from \eqref{er-eq3}, for $\bar t\le t^*$,
\begin{equation} \label{max-reg}
\|\dot e\|_{L^p(0,\bar t;X)}+\| e\|_{L^p(0,\bar t;D)}\le C_{p,R}\, \|\widehat d\|_{L^p(0,\bar t;X)}.
\end{equation}
By the bound (iv) and the assumed Lipschitz continuity of $u^\star:[0,T]\to W$, we have for any $\varepsilon>0$
\begin{align} \label{d-hat0}
\| \widehat d(t) \|_{X} & \le C (\bar t - t)\, \| e(t) \|_{D} \\ \nonumber
&\quad + \varepsilon\, \| e(t) \|_{W}\, \| u(t) \|_{D} + C_{\varepsilon,R}\, \| e(t) \|_{H}\, \| u(t) \|_{D} + \| d(t) \|_{X}.
\end{align}
We take the second term on the left-hand side of \eqref{max-reg} to power $p$ and denote it by
\[
\eta(t) = \| e\|_{L^p(0,t;D)}^p\,.
\]
For the first term on the right-hand side of \eqref{d-hat0} we note that, by integration by parts,
\[
\int_0^{\bar t} (\bar t - t)^p\, \| e(t) \|_{D}^p\, dt = p \int_0^{\bar t} (\bar t - t)^{p-1}\, \eta(t)\, dt.
\]
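Indeed, since $\eta'(t)=\| e(t)\|_{D}^p$ and $\eta(0)=0$,
\[
\int_0^{\bar t} (\bar t - t)^p\, \eta'(t)\, dt = \Big[(\bar t - t)^p\, \eta(t)\Big]_{t=0}^{t=\bar t} + p \int_0^{\bar t} (\bar t - t)^{p-1}\, \eta(t)\, dt,
\]
and both boundary terms vanish, since $(\bar t-t)^p$ vanishes at $t=\bar t$ and $\eta(0)=0$.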

Hence we have from \eqref{max-reg}
\begin{align*}
\eta(\bar t) \le C_{p,R}^p\, \| \widehat d \|_{L^p(0,\bar t;X)}^p
&\le C \int_0^{\bar t} (\bar t - t)^{p-1}\, \eta(t)\, dt \\
&\quad + C\Bigl( \varepsilon B \| e \|_{L^\infty(0,\bar t;W)} + C_{\varepsilon,R} B\, \| e \|_{L^\infty(0,\bar t;H)} + C\|d\|_{L^p(0,\bar t;X)}\Bigr)^p.
\end{align*}
With a Gronwall inequality, we therefore obtain from \eqref{max-reg}
\[
\|\dot e\|_{L^p(0,\bar t;X)}+\| e\|_{L^p(0,\bar t;D)}\le C \varepsilon\, \| e \|_{L^\infty(0,\bar t;W)} + C_{\varepsilon,R,B}\, \| e\|_{L^\infty(0,\bar t;H)} + C\|d\|_{L^p(0,\bar t;X)},
\]
and we note that by property (ii) the left-hand side dominates $\| e \|_{L^\infty(0,\bar t;W)}$ up to a constant factor. For sufficiently small $\varepsilon$ we can therefore absorb the first term of the right-hand side in the left-hand side to obtain
\begin{equation} \label{max-reg-2}
\|\dot e\|_{L^p(0,t^*;X)}+\| e\|_{L^p(0,t^*;D)}\le C\, \| e \|_{L^\infty(0,t^*;H)}+ C\delta.
\end{equation}
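A Gronwall inequality that suffices for this step is the following elementary one: since $(\bar t-t)^{p-1}\le T^{p-1}$ for $0\le t\le\bar t\le T$, an estimate of the form
\[
\eta(\bar t) \le a(\bar t) + C \int_0^{\bar t} (\bar t - t)^{p-1}\, \eta(t)\, dt, \qquad 0\le\bar t\le t^*,
\]
with a nonnegative, nondecreasing function $a$ implies $\eta(\bar t)\le a(\bar t)\, e^{\,CT^{p-1}\bar t}$ for $0\le\bar t\le t^*$, by the integral form of Gronwall's lemma.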

(c) (\emph{Energy estimate}) To bound the first term on the right-hand side of \eqref{max-reg-2} we use the energy estimate obtained by testing \eqref{er-eq} with $e$:
\[
\frac12 \frac{d}{dt} \| e(t) \|_H^2 + \langle e(t), A(u^\star(t))e(t) \rangle = \bigl\langle e(t), (A(u^\star(t))-A(u(t)))u(t) \bigr\rangle - \langle e(t), d(t) \rangle.
\]
The bound of (v) yields
\[
\bigl\langle e, (A(u^\star)-A(u))u \bigr\rangle \le \| e \|_{V}\, \Bigl( \varepsilon \| e \|_V + C_{\varepsilon,R} \| e \|_{H} \Bigr) \|u\|_{D} \le \Bigl( 2\varepsilon \| e \|_{V}^2 + C_{\varepsilon,R} \| e \|_{H}^2 \Bigr) \|u\|_{D}.
\]
Integrating from $0$ to $t\le t^*$, using the $V$-ellipticity (iii) and absorbing the term with $\| e \|_{V}^2$, we therefore obtain
\[
\| e(t) \|_H^2 + \int_0^t \| e(s) \|_V^2\, ds \le C \int_0^t \| e(s) \|_{H}^2\, ds + C \int_0^t \| d(s)\|_{V'}^2\, ds,
\]
and the Gronwall inequality then yields
\[
\|e(t)\|_{H} \le C\delta, \qquad 0\le t \le t^*.
\]

(d) (\emph{Complete time interval}) Inserting the previous bound in \eqref{max-reg-2}, we obtain
\[
\|\dot e\|_{L^p(0,t^*;X)}+\| e\|_{L^p(0,t^*;D)}\le C\delta,
\]
which by (ii) further implies
\[
\| e\|_{L^\infty(0,t^*;W)} \le C\delta.
\]
Hence, for sufficiently small $\delta$ we have strict inequalities
\[
\| u \|_{L^\infty(0,t^*;W)} < R \quad\text{and}\quad \| u \|_{L^p(0,t^*;D)} < B.
\]
In view of the maximality of $t^*$ with \eqref{t-star}, this is possible only if $t^*=T$.
\end{proof}

\begin{remark}
If condition (ii) is strengthened to
\[
\| u \|_{C^\alpha([0,T];W)} \le C_{\alpha,p} \Bigl( \| \dot u \|_{L^p(0,T;X)} + \| u \|_{L^p(0,T;D)} \Bigr),
\]
with some $\alpha>0$, then the statement of Proposition~\ref{prop:basic} remains valid under the weaker condition $u,u^\star\in W^{1,p}(0,T;X)\cap L^p(0,T;D)$, which is symmetric in $u$ and~$u^\star$. The proof remains essentially the same.
\end{remark}
|\!|\!|
|\!|\!|
\section{Stability estimate for BDF methods} \label{Sec:stability}

\subsection{Abstract framework for the time discretization}
We work again with the abstract framework (i)--(v) of the previous section and consider in addition the following property of the BDF time discretization. Here we denote for a sequence $(v_n)_{n=1}^N$ and a given stepsize $\tau$
\[
\big\|(v_n)_{n=1}^N\big\|_{L^p(X)} = \Bigl( \tau \sum_{n=1}^N \| v_n \|_X^p \Bigr)^{1/p},
\]
which is the $L^p(0,N\tau;X)$ norm of the piecewise constant function taking the values~$v_n$. We will work with the following discrete analog of condition (i).

(i') (\emph{$W$-locally uniform discrete maximal regularity}) For $w\in W$, the linear operator $-A(w)$ has discrete maximal $L^p$-regularity for the BDF method: for $1<p<\infty$ there exists a real $C_{p}(w)$ such that for every stepsize $\tau>0$ the numerical solution determined by
\begin{equation} \label{lin-eq-Av-bdf}
\dot u_n + A(w)u_n = f_n \qquad (k\le n\le N) \qquad\text{with }\ \dot u_n = \frac1\tau \sum_{j=0}^k \delta_j u_{n-j}
\end{equation}
for starting values $u_0=\dotsb=u_{k-1}=0$, is bounded by
\begin{equation*}
\big\|(\dot u_n )_{n=k}^N\big\|_{L^p(X)} + \big\|(A(w) u_n )_{n=k}^N\big\|_{L^p(X)}
\leq C_{p}(w)\, \big\|(f_n)_{n=k}^N\big\|_{L^p(X)},
\end{equation*}
where the constant is independent of $N$ and $\tau$. Moreover, the bound is uniform in bounded sets of $W$: for every $R>0$,
\[
C_{p}(w) \le C_{p,R} \quad\text{if }\ \|w\|_{W} \le R.
\]
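For orientation, a concrete instance of the discrete derivative in \eqref{lin-eq-Av-bdf}: the coefficients $\delta_j$ are generated by $\sum_{j=0}^k \delta_j\zeta^j=\sum_{\ell=1}^k\frac1\ell(1-\zeta)^\ell$ (cf.\ Lemma~\ref{Le:NO} below), so that for the two-step method $k=2$ one has $\delta_0=\tfrac32$, $\delta_1=-2$, $\delta_2=\tfrac12$ and
\[
\dot u_n = \frac1\tau\Bigl(\tfrac32 u_n - 2u_{n-1} + \tfrac12 u_{n-2}\Bigr).
\]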
\begin{lemma}\label{lem:unif-maxreg-bdf}
For the operators given by $A(w)u=-\nabla\cdot (a(w) \nabla u)$ with homogeneous Dirichlet boundary conditions and a smooth positive function $a(\cdot)$ and the BDF methods up to order $k\le 6$, the uniform discrete maximal regularity property {\rm(i')} is fulfilled in the situations {\rm (P1)} and {\rm (P2)} of the previous section.
\end{lemma}

The proof of this lemma will be given in Section~\ref{Sec:MLpBDF}. We note that for a fixed $w$, such a result was first proved in \cite{KLL} for $X=L^q$. The main novelty of Lemma~\ref{lem:unif-maxreg-bdf} is thus the case $X=W^{-1,q}$ and the local $W$-uniformity of the result.

For the BDF methods of orders $k=3,4,5$, we need a further condition that complements (v):

(v') For all $v,w\in W$ that are bounded by $R$ in the $W$-norm, and for all $u\in V$,
\[
\| (A(v)-A(w))u \|_{V'} \le C_R\, \| v-w \|_W \, \| u \|_V.
\]
This condition is trivially satisfied in the situations {\rm (P1)} and {\rm (P2)}.
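For the reader's convenience, here is a minimal sketch of why (v') holds for operators of the form $A(w)u=-\nabla\cdot(a(w)\nabla u)$ as in Lemma~\ref{lem:unif-maxreg-bdf}, under the assumptions (not restated here) that $V=H^1_0(\Omega)$ with dual $V'=H^{-1}(\Omega)$ and that $W$ embeds continuously into $L^\infty(\Omega)$: for $u,\varphi\in V$,
\[
\bigl|\langle (A(v)-A(w))u,\varphi\rangle\bigr| = \Bigl|\int_\Omega \bigl(a(v)-a(w)\bigr)\nabla u\cdot\nabla\varphi\Bigr| \le \|a(v)-a(w)\|_{L^\infty}\,\|u\|_V\,\|\varphi\|_V,
\]
and $\|a(v)-a(w)\|_{L^\infty}\le \ell_R\,\|v-w\|_{L^\infty}\le C_R\,\|v-w\|_W$ by the local Lipschitz continuity of the smooth function $a$ on the bounded range of $v$ and $w$.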
\subsection{Stability estimate} \label{subsec:stability}
In the following, let $\hat u_n = u_n$ for the fully implicit BDF method and $\hat u_n= \sum_{j=0}^{k-1} \gamma_j u_{n-j-1}$ for the linearly implicit BDF method.
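As an illustration (assuming the $\gamma_j$ are the coefficients of the extrapolation of order $k$ through the $k$ preceding values, generated by $\sum_{j=0}^{k-1}\gamma_j\zeta^j=\bigl(1-(1-\zeta)^k\bigr)/\zeta$, as is standard for linearly implicit BDF methods), the case $k=2$ reads $\hat u_n = 2u_{n-1}-u_{n-2}$; the argument of $A$ is then obtained from the two previous values without solving a nonlinear system.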
Suppose now that $u_n,u_n^\star\in D$ $(n=0,\dotsc,N)$ solve
\begin{equation} \label{ivp-numer-bdf}
\dot u_n + A(\hat u_n)u_n=f_n, \qquad k\le n \le N,
\end{equation}
and the perturbed equation
\begin{equation} \label{ivp2-bdf}
\dot u_n^\star + A(\hat u_n^\star)u_n^\star=f_n+d_n, \qquad k\le n \le N,
\end{equation}
respectively, where it is further assumed that
\begin{equation}\label{u-star-lip}
\| u^\star_m - u^\star_n \|_W \le L\, (m-n)\tau, \qquad 0 \le n \le m \le N,
\end{equation}
and the defect $(d_n)$ is bounded by
\begin{equation} \label{est-delta-bdf}
\big\|(d_n)_{n=k}^N\big\|_{L^p(X)} \le \delta
\end{equation}
and the errors of the starting values are bounded by
\begin{equation} \label{est-err-start}
\frac1\tau\, \big\|( u_i - u_i^\star)_{i=0}^{k-1}\big\|_{L^p(X)} \le \delta.
\end{equation}

We then have the following time-discrete version of Proposition~\ref{prop:basic}.

\begin{proposition} \label{prop:stability}
Consider time discretization by a fully implicit or linearly implicit BDF method of order $k\le 5$. In the above setting of the $W$-locally uniform discrete maximal regularity \emph{(i')} and \emph{(ii)--(v)} $($and additionally \emph{(v')} if $k=3,4,5\,)$ and \eqref{ivp-numer-bdf}--\eqref{est-err-start}, there exist $\delta_0>0$ and $\tau_0>0$ such that for $\delta\le\delta_0$ and $\tau\le \tau_0$, the errors $e_n=u_n-u_n^\star$ between the solutions of \eqref{ivp-numer-bdf} and \eqref{ivp2-bdf} are bounded by
\begin{align*}
\big\|(\dot e_n )_{n=k}^N\big\|_{L^p(X)} + \big\|(e_n )_{n=k}^N\big\|_{L^p(D)} &\le C\delta, \\
\big\|(e_n )_{n=k}^N\big\|_{L^\infty(W)} &\le C\delta,
\end{align*}
where $C$ depends on $\| (u^\star_n)_{n=0}^N \|_{L^\infty(W)}$ and $\| (u^\star_n)_{n=0}^N \|_{L^p(D)}$ and on $L$ of \eqref{u-star-lip}, but is independent of $\delta$ and of $N$ and $\tau$ with $N\tau\le T$.
\end{proposition}

This stability result will be proved in this and the next section.
\subsection{Auxiliary results by Dahlquist and Nevanlinna \& Odeh}
We will prove Proposition \ref{prop:stability} for the linearly implicit scheme similarly to the proof of Proposition~\ref{prop:basic}. To be able to use energy estimates in the time-discrete setting of BDF methods, we need the following auxiliary results.

\begin{lemma}{\upshape (Dahlquist \cite{D}; see also \cite{BC} and \cite[Section V.6]{HW})} \label{Le:Dahl}
Let $\delta(\zeta) =\delta_k\zeta^k+\dotsb+\delta_0$ and $\mu(\zeta)= \mu_k\zeta^k+\dotsb+\mu_0$ be polynomials of degree at most $k\ ($and at least one of them of degree $k)$ that have no common divisor. Let $(\cdot,\cdot)$ be an inner product with associated norm $|\cdot|.$ If
\[
\Real \frac {\delta(\zeta)}{\mu(\zeta)}>0\quad\text{for }\, |\zeta|<1,
\]
then there exists a positive definite symmetric matrix $G=(g_{ij})\in {\mathbb R}^{k\times k}$ and real $\kappa_0,\dotsc,\kappa_k$ such that for $v_0,\dotsc,v_k$ in the real inner product space,
\begin{equation*}
\Bigl(\sum_{i=0}^k\delta_iv_{k-i},\sum_{j=0}^k\mu_jv_{k-j}\Bigr)= \sum_{i,j=1}^kg_{ij}(v_{i},v_{j}) -\sum_{i,j=1}^kg_{ij}(v_{i-1},v_{j-1}) +\Bigl|\sum_{i=0}^k\kappa_iv_{i}\Bigr|^2.
\end{equation*}
\end{lemma}

In combination with the preceding result for the multiplier $\mu(\zeta)=1-\theta_k\zeta,$ the following property of BDF methods up to order $5$ becomes important.

\begin{lemma}{\upshape (Nevanlinna \& Odeh \cite{NO})}\label{Le:NO}
For $k\le 5,$ there exists $0\le \theta_k<1$ such that for $\delta (\zeta)= \sum_{\ell=1}^k \frac 1\ell (1-\zeta)^\ell$,
\[
\Real \frac {\delta(\zeta)}{1-\theta_k\zeta}>0\quad\text{for }\, |\zeta|<1.
\]
The smallest possible values of $\theta_k$ are
\[
\theta_1=\theta_2=0,\ \theta_3=0.0836,\ \theta_4=0.2878,\ \theta_5=0.8160.
\]
\end{lemma}

Precise expressions for the optimal multipliers for the BDF methods of orders 3, 4, and 5 are given by Akrivis \& Katsoprinakis \cite{AK}.
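For illustration, the cases $k=1,2$ (where $\theta_k=0$, reflecting the A-stability of the one- and two-step BDF methods) can be checked by hand: for $k=1$ we have $\delta(\zeta)=1-\zeta$ and $\Real\,\delta(\zeta)=1-\Real\zeta>0$ for $|\zeta|<1$, and for $k=2$, writing $\zeta=x+iy$ with $x^2+y^2<1$,
\[
\Real\,\delta(\zeta) = \tfrac32 - 2x + \tfrac12(x^2-y^2) > \tfrac32 - 2x + \tfrac12\bigl(x^2-(1-x^2)\bigr) = (1-x)^2 \ge 0,
\]
so that the multiplier $\mu(\zeta)=1$ already suffices.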
An immediate consequence of Lemma \ref{Le:NO} and Lemma \ref{Le:Dahl} is the relation
\begin{equation} \label{multiplier}
\Bigl(\sum_{i=0}^k\delta_iv_{k-i},\,v_k-\theta_k v_{k-1}\Bigr)\ge \sum_{i,j=1}^kg_{ij}(v_{i},v_{j}) -\sum_{i,j=1}^kg_{ij}(v_{i-1},v_{j-1})
\end{equation}
with a positive definite symmetric matrix $G=(g_{ij})\in {\mathbb R}^{k\times k}$; it is this inequality that will play a crucial role in our energy estimates.
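In the simplest nontrivial case $k=2$ (where $\theta_2=0$), the matrix $G$ in \eqref{multiplier} can be written down explicitly: for any inner product $(\cdot,\cdot)$ with norm $|\cdot|$, a direct expansion of both sides verifies the identity
\[
\Bigl(\tfrac32 v_2-2v_1+\tfrac12 v_0,\,v_2\Bigr) = \Bigl(\tfrac14|v_2|^2+\tfrac14|2v_2-v_1|^2\Bigr) - \Bigl(\tfrac14|v_1|^2+\tfrac14|2v_1-v_0|^2\Bigr) + \tfrac14|v_2-2v_1+v_0|^2,
\]
which is \eqref{multiplier} with $G=\frac14\begin{pmatrix}1&-2\\-2&5\end{pmatrix}$ and the nonnegative last term discarded.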
\subsection{Proof of Proposition~\ref{prop:stability} for the linearly implicit BDF methods} \label{subsec:proof-stability}
We subdivide the proof into four parts (a) to (d) that are analogous to the corresponding parts in the proof of Proposition~\ref{prop:basic}. Parts (a)--(c) apply to both the linearly and fully implicit BDF methods, whereas the argument in part (d) does not work for the fully implicit method.

(a) (\emph{Error equation}) We rewrite the equation for $u_n$ in the form
\[
\dot u_n + A(\hat u^\star_n)u_n=\bigl(A(\hat u^\star_n)-A(\hat u_n)\bigr)u_n+f_n
\]
and see that the errors $e_n=u_n-u^\star_n$ satisfy the error equation, for $n\le N$,
\begin{equation} \label{er-eq-bdf}
\dot e_n + A(\hat u^\star_n)e_n=\bigl(A(\hat u^\star_n)-A(\hat u_n)\bigr)u_n-d_n.
\end{equation}
We abbreviate $A_n=A(\hat u^\star_n)$. For an arbitrary $m \le N$ and for $k\le n\le m$ we further have the error equation with a fixed operator
\begin{equation} \label{er-eq3-bdf}
\dot e_n + A_m e_n=\widehat d_n:=(A_m-A_n)e_n +\bigl(A(\hat u^\star_n)-A(\hat u_n)\bigr)u_n-d_n.
\end{equation}
If we redefine $e_0=\dotsb=e_{k-1}=0$, then there appear extra defects for $n=k,\dotsc,2k-1$, which by condition \eqref{est-err-start} are bounded in terms of $\delta$ and are subsumed in $d_n$ in the following.
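In more detail, these extra defects arise as follows: for $n=k,\dotsc,2k-1$ the discrete derivative $\dot e_n=\frac1\tau\sum_{j=0}^k\delta_j e_{n-j}$ involves indices $n-j\le k-1$, and replacing $e_i=u_i-u_i^\star$ by $0$ for $i\le k-1$ changes the left-hand side of \eqref{er-eq-bdf} by
\[
-\,\frac1\tau\sum_{j\,:\,n-j\le k-1}\delta_j\,\bigl(u_{n-j}-u_{n-j}^\star\bigr),
\]
whose $L^p(X)$ norm over these finitely many $n$ is bounded by a constant, depending only on the BDF coefficients, times $\frac1\tau\big\|(u_i-u_i^\star)_{i=0}^{k-1}\big\|_{L^p(X)}\le\delta$, by \eqref{est-err-start}.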
(b) (\emph{Maximal regularity}) We denote
\begin{equation}\label{RB-bounds}
R=\| (u^\star_n)_{n=0}^N \|_{L^\infty(W)}+1\quad\text{and}\quad B= \| (u^\star_n)_{n=0}^N \|_{L^p(D)}+1,
\end{equation}
and let $M\le N$ be maximal such that
\begin{equation} \label{M-RB}
\| (u_n)_{n=0}^{M-1} \|_{L^\infty(W)} \le R \quad\text{and}\quad \| (u_n)_{n=0}^{M-1} \|_{L^p(D)} \le B.
\end{equation}
By the discrete maximal $L^p$-regularity (i') we obtain from \eqref{er-eq3-bdf}
\begin{equation} \label{max-reg-bdf}
\big\|(\dot e_n )_{n=k}^m\big\|_{L^p(X)} + \big\|(e_n )_{n=k}^m\big\|_{L^p(D)} \le C_{p,R}\, \big\|(\widehat d_n )_{n=k}^m\big\|_{L^p(X)},
\end{equation}
and by the bounds (iv) and \eqref{u-star-lip}, we have, for any $\varepsilon>0$,
\begin{align} \label{d-hat}
\| \widehat d_n \|_{X} & \le \| d_n \|_{X} + C_R L\, (m - n)\tau\, \| e_n \|_{D} \\
\nonumber
&\quad +\ \varepsilon\, \| e_n \|_{W} \, \| u_n \|_{D} + C_{\varepsilon,R}\, \| e_n \|_{H} \, \| u_n \|_{D} \qquad\text{for }\ k\leq n\leq m\leq M.
\end{align}
We raise the second term on the left-hand side of \eqref{max-reg-bdf} to the power $p$ and denote it by
\[
\eta_m = \big\|(e_n )_{n=k}^m\big\|_{L^p(D)}^p.
\]
For the second term on the right-hand side of \eqref{d-hat} we note that, by partial summation and since $(m-n)^p-(m-n-1)^p\le p\,(m-n)^{p-1}$,
\begin{align*}
\tau\sum_{n=k}^m (m-n)^p\tau^p \,\| e_n \|_{D}^p &= \tau\sum_{n=k}^{m-1} \bigl((m-n)^p-(m-n-1)^p\bigr)\tau^{p-1}\,\eta_n \\
& \le C \tau\sum_{n=k}^m \bigl((m-n)\tau\bigr)^{p-1} \eta_n.
\end{align*}
Hence we have from \eqref{max-reg-bdf}
\begin{align*}
\eta_m \le & \ C_{p,R}^p\, \big\|(\widehat d_n )_{n=k}^m\big\|_{L^p(X)}^p \le C \tau\sum_{n=k}^m \bigl((m-n)\tau\bigr)^{p-1} \eta_n \\
&+ C\Bigl( \varepsilon B\,\| (e_n )_{n=k}^m \|_{L^\infty(W)} + CB\, \| (e_n )_{n=k}^m\|_{L^\infty(H)} + C \big\|(d_n )_{n=k}^m\big\|_{L^p(X)} \Bigr)^p.
\end{align*}
With a discrete Gronwall inequality, we therefore obtain from \eqref{max-reg-bdf}
\begin{align}\label{max-reg-aux}
\big\|(\dot e_n )_{n=k}^M\big\|_{L^p(X)} + \big\|(e_n )_{n=k}^M\big\|_{L^p(D)} \le &\ C \varepsilon\, \| (e_n )_{n=k}^M \|_{L^\infty(W)} \\
\nonumber
&\hspace{-2cm} + C\, \| (e_n )_{n=k}^M\|_{L^\infty(H)} + C \big\|(d_n )_{n=k}^M\big\|_{L^p(X)}.
\end{align}
Next we show that by property (ii) the left-hand side dominates $\| (e_n )_{n=k}^M \|_{L^\infty(W)}$. We write $\delta(\zeta)=(1-\zeta)\mu(\zeta)$, where the polynomial $\mu(\zeta)$ of degree $k-1$ has no zeros in the closed unit disk, and therefore
\[
\frac1{\mu(\zeta)} = \sum_{n=0}^\infty \chi_n \,\zeta^n, \qquad\text{where} \quad |\chi_n|\le \rho^n \ \text{ with } \rho<1.
\]
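As a concrete check of this factorization, for $k=2$ one has $\delta(\zeta)=\tfrac32-2\zeta+\tfrac12\zeta^2=\tfrac12(1-\zeta)(3-\zeta)$, so $\mu(\zeta)=\tfrac12(3-\zeta)$ has its only zero at $\zeta=3$, and
\[
\frac1{\mu(\zeta)} = \frac{2}{3-\zeta} = \frac23\sum_{n=0}^\infty \Bigl(\frac\zeta3\Bigr)^n,
\]
so that $\chi_n=\tfrac23\,3^{-n}$ and one may take $\rho=\tfrac13$.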
It follows that
\[
\frac{e_n-e_{n-1}}\tau = \sum_{j=0}^n \dot e_{n-j} \, \chi_j
\]
and
\[
\Big\|\Bigl(\frac{e_n-e_{n-1}}\tau \Bigr)_{n=k}^M\Big\|_{L^p(X)} \le C \big\|(\dot e_n )_{n=k}^M\big\|_{L^p(X)}.
\]
If we denote by $e(t)$ the piecewise linear function that interpolates the values $e_n$, then we have
\[
\| \dot e \|_{L^p(0,M\tau;X)} = \Big\|\Bigl(\frac{e_n-e_{n-1}}\tau \Bigr)_{n=k}^M \Big\|_{L^p(X)}
\quad\text{and}\quad
\| e \|_{L^p(0,M\tau;D)} \le C \big\|(e_n )_{n=k}^M\big\|_{L^p(D)}.
\]
Combining the above inequalities and using property (ii) for $e(t)$, we thus obtain
\[
\| (e_n )_{n=k}^M \|_{L^\infty(W)} \le C \bigl( \big\|(\dot e_n )_{n=k}^M\big\|_{L^p(X)} + \big\|(e_n )_{n=k}^M\big\|_{L^p(D)} \bigr).
\]
For sufficiently small $\varepsilon$ we can therefore absorb the first term of the right-hand side of \eqref{max-reg-aux} in the left-hand side to obtain
\begin{equation} \label{max-reg-2-bdf}
\big\|(\dot e_n )_{n=k}^M\big\|_{L^p(X)} + \big\|(e_n )_{n=k}^M\big\|_{L^p(D)} \le C \| (e_n )_{n=k}^M\|_{L^\infty(H)} + C \delta.
\end{equation}
(c) (\emph{Energy estimate}) To bound the first term on the right-hand side of \eqref{max-reg-2-bdf} we use the energy estimate obtained by testing \eqref{er-eq-bdf} with $e_n-\theta_k e_{n-1}$ with $\theta_k$ from Lemma~\ref{Le:NO}:
\begin{equation}\label{BDF6}
\frac 1\tau\Bigl(e_n-\theta_ke_{n-1},\sum_{j=0}^k\delta_je_{n-j}\Bigr) + \langle e_n,A_n e_n \rangle - \theta_k\langle e_{n-1},A_n e_n \rangle = \langle e_n-\theta_k e_{n-1}, \widetilde d_n \rangle
\end{equation}
with
\[
\widetilde d_n = \bigl(A(\hat u_n^\star)-A(\hat u_n)\bigr)u_n -d_n.
\]
Now, with the notation $E_n:=(e_n,\dotsc,e_{n-k+1})^T$ and the norm $|E_n|_G$ given by
\[
|E_n|_G^2=\sum_{i,j=1}^k g_{ij}(e_{n-k+i},e_{n-k+j}),
\]
using \eqref{multiplier}, we can estimate the first term on the left-hand side from below in the form
\begin{equation} \label{BDF3}
\Bigl(e_n-\theta_ke_{n-1},\sum_{j=0}^k\delta_je_{n-j}\Bigr) \ge |E_n|_G^2-|E_{n-1}|_G^2.
\end{equation}
In the following we denote by $\|\cdot\|_n$ the norm given by $\| v \|_n^2 = \langle v,A_n v\rangle$, which by (iii) is equivalent to the $V$-norm. Furthermore,
\[
\langle e_n-\theta_ke_{n-1},A_n e_n\rangle =\|e_n\|_n^2-\theta_k\langle e_{n-1},A_ne_n \rangle,
\]
whence, obviously,
\begin{equation} \label{BDF4}
\langle e_n-\theta_ke_{n-1},A_ne_n \rangle \ge \Bigl(1-\frac {\theta_k}2\Bigr)\|e_n\|_n^2 -\frac {\theta_k}2\|e_{n-1}\|_n^2.
\end{equation}
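For completeness: since $\langle\cdot,A_n\cdot\rangle$ is, by (iii), an inner product with norm $\|\cdot\|_n$, inequality \eqref{BDF4} follows from the Cauchy--Schwarz and Young inequalities,
\[
\theta_k\langle e_{n-1},A_ne_n\rangle \le \theta_k\,\|e_{n-1}\|_n\,\|e_n\|_n \le \frac{\theta_k}2\,\|e_{n-1}\|_n^2 + \frac{\theta_k}2\,\|e_n\|_n^2.
\]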
Moreover,
\begin{equation} \label{BDF5}
\langle e_n-\theta_ke_{n-1},\widetilde d_n \rangle \le \varepsilon\, (\|e_n\|_n^2 +\theta_k^2 \|e_{n-1}\|_{n-1}^2) +C_\varepsilon \|\widetilde d_n\|_{V'}^2,
\end{equation}
for any positive $\varepsilon$. At this point, we need to relate $\|e_{n-1}\|_n$ back to $\|e_{n-1}\|_{n-1}.$ We have, by the bounds (v') and \eqref{u-star-lip},
\[
\|v\|_n^2-\|v\|_{n-1}^2=\langle v,A_n v\rangle -\langle v,A_{n-1} v\rangle =\langle v,(A_n-A_{n-1})v\rangle \le C\tau\, \| v \|_V^2,
\]
so that
\begin{equation} \label{BDF6n}
\|e_{n-1}\|_n^2\le (1+C\tau)\, \|e_{n-1}\|_{n-1}^2.
\end{equation}
Summing in \eqref{BDF6} from $n=k$ to $m\le M,$ we obtain
\[
|E_m|_G^2+\rho\, \tau \sum_{n=k}^m\|e_n\|_n^2\le C_\varepsilon\, \tau \sum_{n=k}^m \|\widetilde d_n\|_{V'}^2,
\]
with $\rho:=1-\theta_k-(1+\theta_k)\varepsilon>0$.
To estimate $\widetilde d_n$, we note that the bound of (v) yields
\begin{align*}
\|(A(\hat u^\star_n )-A(\hat u_n ))u_n \|_{V'} &\le \sum_{j=0}^k \Bigl( \varepsilon \| e_{n-j} \|_V + C_{\varepsilon,R} \| e_{n-j} \|_{H} \Bigr) \|u_n\|_{D} \\
&\le \sum_{j=0}^k \Bigl( 2\varepsilon \| e_{n-j} \|_{V}^2 + C \| e_{n-j} \|_{H}^2 \Bigr) \|u_n\|_{D}.
\end{align*}
Absorbing the terms with $\| e_{n-j} \|_{V}^2$ we therefore obtain, for $k\le m \le M$,
\[
\| e_m \|_H^2 + \tau\sum_{n=k}^m \| e_n \|_V^2 \le C \tau\sum_{n=k}^m \| e_n \|_{H}^2 + C\tau \sum_{n=k}^m \|d_n\|_{V'}^2,
\]
and the discrete Gronwall inequality then yields
\[
\|e_n\|_{H} \le C\delta, \qquad k\le n \le M.
\]
(d) (\emph{Complete time interval}) Inserting the previous bound in \eqref{max-reg-2-bdf}, we obtain
\[
\big\|(\dot e_n )_{n=k}^M\big\|_{L^p(X)} + \big\|(e_n )_{n=k}^M\big\|_{L^p(D)} \le C \delta,
\]
which by (ii) and the argument in part (b) above further implies
\[
\big\|(e_n )_{n=k}^M\big\|_{L^\infty(W)} \le C \delta.
\]
For the linearly implicit BDF method, this implies that $\hat u_{M+1}=\sum_{j=0}^{k-1} \gamma_j u_{M-j}$ is bounded by $\|\hat u_{M+1}\|_W \le CR$, and hence the above arguments can be repeated to yield that for sufficiently small $\delta$ the bounds \eqref{M-RB} are also satisfied for $M+1$, which contradicts the maximality of $M$ unless $M=N$. \qed

While parts (a)--(c) of the above proof apply also to the fully implicit BDF methods, the argument in part (d) does not work for the fully implicit method. Here we need some a priori estimate from the existence proof, which is established in the next section.
|\!|\!|
|\!|\!|
|\!|\!|
\section{Existence of numerical solutions for the fully implicit scheme} \label{Sec:existence}

While existence and uniqueness of the numerical solution are obvious for the linearly implicit BDF method \eqref{bdf:lin-im1}, this is not so for the fully implicit method. In this section, we prove existence and uniqueness of the numerical solution for the fully implicit BDF method \eqref{bdf:fully-im1} and complete the proof of Proposition~\ref{prop:stability} for this method.

\subsection{Schaefer's fixed point theorem} Existence of the solution of the fully implicit BDF method \eqref{bdf:fully-im1} is proved with the following result.

\begin{lemma}[Schaefer's fixed point theorem {\cite[Chapter 9.2, Theorem 4]{Evans}}]
\label{THMSchaefer} Let $W$ be a Banach space and let ${\mathcal M}:W\rightarrow W$ be a continuous and compact map. If the set
\begin{equation}
\bigl\{\phi\in W:\; \phi=\theta {\mathcal M}(\phi) \ \hbox{ for some }\, \theta\in [0,1] \bigr\}
\end{equation}
is bounded in $W$, then the map ${\mathcal M}$ has a fixed point. \end{lemma}

\subsection{Proof of the existence of the numerical solution and of Proposition~\ref{prop:stability} for fully implicit BDF methods} In the situation of Proposition~\ref{prop:stability}, we assume that $u_n\in D$, $n=k,\dotsc,M-1$, are solutions of \eqref{bdf:fully-im1} and satisfy
\begin{equation}\label{mathind}
\|(u_n)_{n=0}^{M-1}\|_{L^\infty(W)}\leq R ,
\end{equation}
with $R$ of \eqref{RB-bounds}, and prove the existence of a numerical solution $u_M$ for \eqref{bdf:fully-im1}, which also satisfies
$\|u_M\|_{W}\leq R$.

We define a map ${\mathcal M}: W \rightarrow W$ in the following way.

For any $\phi\in W$ we define
\begin{equation}\label{rho_phi}
\rho_\phi :=\min\bigg(\frac{\sqrt{\delta} }{\|\phi \|_{W}},1\bigg) .
\end{equation}
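(In other words, $\rho_\phi\phi=\phi$ whenever $\|\phi \|_{W}\le\sqrt{\delta}$, while for $\|\phi \|_{W}>\sqrt{\delta}$ the product $\rho_\phi\phi$ is the rescaling of $\phi$ to norm $\sqrt{\delta}$ in $W$; the factor $\rho_\phi$ thus acts as a cut-off on the perturbation of the argument of $A$ used below.)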

Clearly, $\rho_\phi$ depends continuously on $\phi \in W$, and
\begin{equation}\label{rho_phi-est}
\|\rho_\phi \phi\|_{W}\leq \sqrt{\delta} .
\end{equation}

Then we define $e_M={\mathcal M}(\phi)$ as the solution of the linear equation

\begin{equation} \label{BDF:MapM} \begin{aligned} \frac 1\tau \sum_{j=0}^k\delta_je_{M-j} &=A(u_M^\star)e_M +(A(u_M^\star+\rho_\phi\phi)-A(u_M^\star))e_M\\ &\quad +(A(u_M^\star+\rho_\phi\phi )-A(u_M^\star))u_M^\star -d_M. \end{aligned} \end{equation}

Using the compact imbedding of $D$ in $W$, the fact that the resolvent operator $(\delta_0/\tau+A(u_M^\star+\rho_\phi\phi))^{-1}$ maps $X$ into $D$, and condition (iv) (with $\varepsilon=1$), it follows that the map ${\mathcal M}$ is continuous and compact. Moreover, if the map ${\mathcal M}$ has a fixed point $\phi$ with $\rho_\phi=1$, then $e_M={\mathcal M}(\phi)$ is a solution of

\begin{equation} \label{BDF:MapM00} \begin{aligned} \frac 1\tau \sum_{j=0}^k\delta_je_{M-j} &=A(u_M^\star)e_M +(A(u_M^\star+e_M)-A(u_M^\star))e_M \\ &\quad +(A(u_M^\star+e_M)-A(u_M^\star))u_M^\star -d_M , \end{aligned} \end{equation}

and $u_M:=u_M^\star+e_M$ is the solution of \eqref{bdf:fully-im1} with $n=M$.

To apply Schaefer's fixed point theorem, we assume that $\phi \in W$ and $\phi=\theta{\mathcal M}(\phi)$ for some $\theta\in[0,1]$. To prove existence of a fixed point of the map ${\mathcal M}$, we only need to show
that the norms $\|\phi\|_{W}$ of all such $\phi$ are uniformly bounded.

Let $e_M={\mathcal M}(\phi)$. Then $\phi=\theta e_M$ and \eqref{BDF:MapM} implies that $e_M$ is the solution of

\begin{equation} \label{BDF:FixedP00} \begin{aligned} \frac 1\tau \sum_{j=0}^k\delta_je_{M-j} &=A(u_M^\star)e_M +(A(u_M^\star+\theta \rho_{\phi} e_M)-A(u_M^\star))e_M\\ &\quad +(A(u_M^\star+\theta \rho_\phi e_M)-A(u_M^\star))u_M^\star -d_M \end{aligned} \end{equation}

and

\begin{equation} \label{BDF:FixedP} \begin{aligned} \frac 1\tau \sum_{j=0}^k\delta_je_{n-j} &=A(u_n^\star)e_n +(A(u_n^\star+ e_n)-A(u_n^\star))e_n\\ &\quad +(A(u_n^\star+ e_n)-A(u_n^\star))u_n^\star -d_n \end{aligned} \end{equation}

for $n=k,\dotsc, M-1$, satisfying

\begin{equation}\label{rhothetan}
\|\theta \rho_\phi e_M\|_{W}\leq \sqrt{\delta} .
\end{equation}

In the same way as in Section~\ref{subsec:proof-stability},
we obtain
\begin{equation} \label{BDF:fully-im-ErrEst}
\|(\dot e_n)_{n=k}^M\|_{L^p(X)} +\|(e_n)_{n=k}^M\|_{L^p(D)}\leq C\delta
\end{equation}

and

\begin{equation} \label{BDF:fully-im-W1inftyErr}
\|(e_n)_{n=0}^M\|_{L^\infty(W)} \leq C\delta.
\end{equation}

Since $\phi =\theta e_M$ with $\theta\in[0,1]$, the last inequality implies uniform
boundedness of $\|\phi\|_{W}$ with respect to $\theta\in[0,1]$, and this implies the existence of a fixed point $\phi$ for the map ${\mathcal M}$ by Lemma \ref{THMSchaefer}. Moreover, in view of \eqref{BDF:fully-im-W1inftyErr} and since the fixed point $\phi$ satisfies $\phi=e_M$, for sufficiently small $\delta$ we have

\begin{equation}
\|\phi\|_{W} \leq \sqrt{\delta} , \quad\text{and so}\quad \rho_\phi =1.
\end{equation}

This proves the existence of a solution of \eqref{bdf:fully-im1} for sufficiently small $\delta$, and the solution satisfies \eqref{BDF:fully-im-ErrEst} and \eqref{BDF:fully-im-W1inftyErr} and
hence also $\|u_M\|_W \le R$, which completes the proof of Proposition~\ref{prop:stability} for the fully implicit BDF methods. \qed

\subsection{Uniqueness of the numerical solution} Suppose that there are two numerical solutions $u_n,\widetilde u_n\in D$ of \eqref{bdf:fully-im1}, both with $W$-norm bounded by $R$ as in the proof of existence. By induction, we assume unique numerical solutions $u_j$ with $W$-norm bounded by $R$ for $j<n$. The difference $e_n=u_n-\widetilde u_n$ then satisfies the equation

\[\frac {\delta_0}\tau \, e_n + A(\widetilde u_n) e_n = \bigl( A(u_n)-A(\widetilde u_n)\bigr) u_n.\]

We test this equation with $e_n$ and note that, with some $\alpha_R>0$ depending on $R$, we have, using conditions (iii) and (v),

\[\frac {\delta_0}\tau \, \| e_n\|_H^2 + \alpha_R \| e_n \|_V^2 \le \|e_n \|_V \bigl( \varepsilon \|e_n\|_V + C_{\varepsilon,R} \|e_n\|_H \bigr) \|u_n \|_D,\]

which implies that there exists $\tau_R>0$ such that for $\tau\le \tau_R$, we have $e_n=0$. We have thus shown uniqueness of the numerical solution $u_n\in D$ with $\|u_n\|_W\le R$ for stepsizes $\tau\le \tau_R$.


\section{Consistency error} \label{Sec:consistency} The order of both the $k$-step fully implicit BDF method, described by the coefficients $\delta_0,\dotsc,\delta_k$ and $1,$ and of the explicit $k$-step BDF method, that is, the method described by the coefficients $\delta_0,\dotsc,\delta_k$ and $\gamma_0,\dotsc,\gamma_{k-1},$ is $k,$ i.e.,

\begin{equation} \label{order} \sum_{i=0}^k(k-i)^\ell \delta_i=\ell k^{\ell-1}= \ell \sum_{i=0}^{k-1}(k-i-1)^{\ell-1} \gamma_i,\quad \ell=0,1,\dotsc,k. \end{equation}
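
As a brief illustration (not needed in what follows), for $k=2$ the conditions \eqref{order} with $\ell=0,1,2$ determine the familiar two-step coefficients
\begin{equation*}
\delta_0=\tfrac32,\qquad \delta_1=-2,\qquad \delta_2=\tfrac12,\qquad \gamma_0=2,\qquad \gamma_1=-1;
\end{equation*}
for instance, for $\ell=2$ one checks that $\sum_{i=0}^2(2-i)^2\delta_i=4\cdot\tfrac32-2=4=2\cdot 2^{1}=2\,\big(1\cdot\gamma_0+0\cdot\gamma_1\big)$.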

The defects (consistency errors) $d_n$ and $\tilde d_n$ of the schemes \eqref{bdf:fully-im1} and \eqref{bdf:lin-im1} for the solution $u$ of \eqref{ivp}, i.e., the amounts by which the exact solution misses satisfying \eqref{bdf:fully-im1} and \eqref{bdf:lin-im1}, respectively, are given by

\begin{equation} \label{cons1} d_n=\frac1 \tau\sum\limits^k_{j=0}\delta_ju(t_{n-j}) + A(u(t_n))u(t_n), \end{equation}

and

\begin{equation} \label{cons2} \tilde d_n=\frac 1\tau \sum_{j=0}^k\delta_ju(t_{n-j})+ A\Big (\sum\limits^{k-1}_{j=0}\gamma_ju(t_{n-j-1})\Big )u(t_n), \end{equation}

$n=k,\dotsc,N,$ respectively.
\begin{lemma} \label{lem:defect} Under the regularity requirements \eqref{reg-1} or \eqref{reg-2}, the defects $d_n$ and $\tilde d_n$ are bounded by

\begin{equation} \label{cons-err-est-1}
\max_{k\le n\le N}\|d_n\|_{W^{-1,q}(\varOmega)} \le C\tau^k, \quad
\max_{k\le n\le N}\|\tilde d_n\|_{W^{-1,q}(\varOmega)} \le C\tau^k
\end{equation}
in case of \eqref{reg-1}, and by

\begin{equation} \label{cons-err-est-2}
\max_{k\le n\le N}\|d_n\|_{L^q(\varOmega)} \le C\tau^k, \quad
\max_{k\le n\le N}\|\tilde d_n\|_{L^q(\varOmega)} \le C\tau^k
\end{equation}

in case of \eqref{reg-2}. \end{lemma}

\begin{proof} Since the proofs for \eqref{cons-err-est-1} and \eqref{cons-err-est-2} are almost identical, we just present the proof of \eqref{cons-err-est-2}. We first focus on the implicit scheme \eqref{bdf:fully-im1}. Using the differential equation in \eqref{ivp}, we rewrite \eqref{cons1} in the form

\begin{equation} \label{cons4} d_n=\frac1\tau\sum\limits^k_{j=0}\delta_ju(t_{n-j}) - \partial_t u(t_n). \end{equation}

By Taylor expansion about $t_{n-k},$ we see that, due to the order conditions of the implicit BDF method, i.e., the first equality in \eqref{order}, leading terms of order up to $k-1$ cancel, and we obtain

\begin{equation} \label{cons5}
d_n=\frac 1{k!}\Bigg [ \frac1\tau\sum\limits^k_{j=0} \delta_j\! \int_{t_{n-k}}^{t_{n-j}}(t_{n-j}-s)^ku^{(k+1)}(s)\, ds -k \int_{t_{n-k}}^{t_n}(t_n-s)^{k-1}u^{(k+1)}(s)\, ds\Bigg ];
\end{equation}

here, we used the notation $u^{(m)}:=\frac {\partial^m u}{\partial t^m}.$
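
In deriving \eqref{cons5} we used, for each term, the Taylor expansion with integral remainder about $t_{n-k}$ (a standard computation, recorded here only for the reader's convenience),
\begin{equation*}
u(t_{n-j})=\sum_{\ell=0}^{k}\frac{\big((k-j)\tau\big)^\ell}{\ell!}\,u^{(\ell)}(t_{n-k})+\frac 1{k!}\int_{t_{n-k}}^{t_{n-j}}(t_{n-j}-s)^ku^{(k+1)}(s)\, ds
\end{equation*}
and
\begin{equation*}
\partial_t u(t_n)=\sum_{\ell=0}^{k-1}\frac{(k\tau)^\ell}{\ell!}\,u^{(\ell+1)}(t_{n-k})+\frac 1{(k-1)!}\int_{t_{n-k}}^{t_n}(t_n-s)^{k-1}u^{(k+1)}(s)\, ds;
\end{equation*}
after insertion into \eqref{cons4}, the first equality in \eqref{order} shows that the two polynomial parts coincide, and only the integral remainders survive.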

Taking the $L^q$ norm on both sides of \eqref{cons5}, we obtain the desired optimal order consistency estimate \eqref{cons-err-est-2} for the scheme \eqref{bdf:fully-im1}.
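
Indeed, bounding $u^{(k+1)}$ in the integrals of \eqref{cons5} by its maximal $L^q$ norm over $[t_{n-k},t_n]$ and integrating the kernels explicitly gives, for instance,
\begin{equation*}
\|d_n\|_{L^q(\varOmega)}\le \frac{\tau^k}{k!}\bigg(\frac 1{k+1}\sum_{j=0}^k|\delta_j|(k-j)^{k+1}+k^k\bigg)\max_{t_{n-k}\le s\le t_n}\|u^{(k+1)}(s)\|_{L^q(\varOmega)},
\end{equation*}
where we take for granted that the regularity assumption \eqref{reg-2} keeps the last factor bounded uniformly in $n$.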


Next, concerning the scheme \eqref{bdf:lin-im1}, from \eqref{cons1} and \eqref{cons2} we immediately obtain the following relation between $\tilde d_n$ and $d_n$

\begin{equation} \label{cons9} \tilde d_n=d_n+\big ( A(u(t_n))-A (\hat u(t_n) )\big )u(t_n) \end{equation}

with

\begin{equation} \label{bdf:hat-sol} \hat u(t_n):=\sum\limits^{k-1}_{i=0}\gamma_iu(t_{n-i-1}). \end{equation}
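
(For $k=2$, for instance, $\hat u(t_n)=2u(t_{n-1})-u(t_{n-2})$, the linear extrapolation of $u$ from the two preceding time levels to $t_n$.)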


By Taylor expansion about $t_{n-k},$ the leading terms of order up to $k-1$ cancel again, this time due to the second equality in \eqref{order}, and we obtain

\begin{equation*} u(t_n)- \hat u(t_n)=\frac 1{(k-1)!}\Bigg [\! \int_{t_{n-k}}^{t_n}\!\!(t_n-s)^{k-1}u^{(k)}(s) ds -\!\sum\limits^{k-1}_{j=0}\gamma_j \!\int_{t_{n-k}}^{t_{n-j-1}}\!\!(t_{n-j-1}-s)^{k-1}u^{(k)}(s) ds\!\Bigg ]\!, \end{equation*}

whence, taking the $W^{1,q}$ norm on both sides of this relation, we immediately infer that

\begin{equation} \label{cons10}
\|u(t_n)- \hat u(t_n)\|_{W^{1,q}(\varOmega)}\le C\tau^k.
\end{equation}

Now, $\big ( A(u(t_n))-A (\hat u(t_n) )\big )u(t_n)= \nabla \cdot \big (\big ( a(u(t_n))-a (\hat u(t_n) )\big )\nabla u(t_n)\big ),$ whence

\begin{align*}
&\|\big ( A(u(t_n))-A (\hat u(t_n) )\big )u(t_n)\|_{L^q(\varOmega)}\\
&=\|\nabla\cdot\big ( \big(a(u(t_n))-a (\hat u(t_n) )\big)\nabla u(t_n)\big )\|_{L^q(\varOmega)}\\
&{}\le C\| a(u(t_n))-a (\hat u(t_n) )\|_{L^\infty(\varOmega)} \|u(t_n)\|_{W^{2,q}(\varOmega)} \\
&\quad +C\| a(u(t_n))-a (\hat u(t_n) )\|_{W^{1,q}(\varOmega)} \|u(t_n)\|_{W^{1,\infty}(\varOmega)} \\
&\le C\| u(t_n)-\hat u(t_n)\|_{W^{1,q}(\varOmega)} ;
\end{align*}

therefore, in view of \eqref{cons10}, we have

\begin{equation} \label{cons11}
\|\big ( A(u(t_n))-A (\hat u(t_n) )\big )u(t_n)\|_{L^q(\varOmega)}\le C\tau^k.
\end{equation}

Combining \eqref{cons11} and the bound for $d_n$, we obtain the desired optimal order consistency estimate \eqref{cons-err-est-2} also for the scheme \eqref{bdf:lin-im1}. \end{proof}

\section{Proof of Theorems~\ref{Th:main-1} and~\ref{Th:main-2}} \label{Sec:proof} The cases (P1) and (P2) of Section~\ref{subsec:abstract-framework} correspond to the situation in Theorems~\ref{Th:main-1} and \ref{Th:main-2}, respectively. Lemma~\ref{lem:framework} ensures that the considered problems are of the type studied in the abstract framework of Section~\ref{subsec:abstract-framework}, and Lemma~\ref{lem:unif-maxreg-bdf} ensures the required uniform discrete maximal regularity of the BDF methods. Lemma~\ref{lem:defect} yields that the situation of Section~\ref{subsec:stability} holds with $u_n^\star=u(t_n)$ and $\delta\le C\tau^k$. The error bounds of Theorems~\ref{Th:main-1} and~\ref{Th:main-2} then follow from Proposition~\ref{prop:stability}.

It remains to give the proofs of Lemmas \ref{lem:framework} and~\ref{lem:unif-maxreg-bdf}. This is done in the final two sections.


\section{$W$-locally uniform maximal regularity} \label{Sec:unif-maxreg}

\subsection{Proof of (i) in Lemma \ref{lem:framework}}

Let $\varOmega\subset{\mathbb R}^d$ be a bounded Lipschitz domain and consider the following initial and boundary value problem for a linear parabolic equation, with a time-independent self-adjoint operator,

\begin{equation}\label{PDECauchy} \left\{ \begin{alignedat}{2} &\partial_tu(x,t)-\nabla\cdot \big(b(x)\nabla u(x,t)\big) =0 &&\text{for}\,\,(x,t)\in \varOmega\times {\mathbb R}_+,\\[2pt] &u(x,t)=0 &&\text{for}\,\,(x,t)\in \partial\varOmega\times {\mathbb R}_+ ,\\[2pt] &u(x,0)=u_0(x) &&\text{for}\,\,x\in \varOmega , \end{alignedat}\right. \end{equation}

where the coefficient $b(x)$ satisfies

\begin{equation}\label{ellipt} K_0^{-1}\leq b (x) \leq K_0. \end{equation}

We consider $W=C^\alpha(\overline\varOmega)$ and $W=C^{1,\alpha}(\overline\varOmega)$ in the settings (P1) and (P2), respectively. In this section, we combine results from the literature and prove $W$-locally uniform maximal parabolic regularity of \eqref{PDECauchy}, where the constant depends only on $K_0$
and $\|b\|_W$.

Let $\{E_2(t)\}_{t>0}$ denote the semigroup of operators on $L^2(\varOmega)$ that maps $u_0$ to the solution $u(t,\cdot)$ of \eqref{PDECauchy}, and let $A_2$ denote the generator of this semigroup. Then $\{E_2(t)\}_{t>0}$ extends to a bounded analytic semigroup
on $L^2(\varOmega)$, in the sector $\varSigma_{\theta}=\{z\in{\mathbb C}\,:\,z\ne 0, |\arg z|< \theta\}$, where $\theta$ can be arbitrarily close to $\pi/2$ (see \cite{Davis89,Ouhabaz}), and the kernel $G(t,x,y)$ of the semigroup $\{E_2(t)\}_{t>0}$ has an analytic extension to the right half-plane, satisfying (see \cite[p.\ 103]{Davis89})

\begin{equation}
|G(z,x,y)|\leq C_\theta|z|^{-\frac{d}{2}} {\rm e}^{-\frac{|x-y|^2}{C_\theta |z|}}, \quad \forall\, z\in \varSigma_{\theta},\,\,\forall\, x,y\in\varOmega , \quad\forall\,\theta\in(0,\pi/2), \label{GKernelE0}
\end{equation}

where the constant $C_\theta$ depends only on $K_0$ and $\theta$. In other words, the operator $A=-e^{i\theta}A_2$ satisfies the condition of \cite[Theorem 8.6]{KW}, with $m=2$ and $g(s)=C_\theta e^{-s^2/C_\theta}$ (also see \cite[Remark 8.23]{KW}). As a consequence of \cite[Theorem 8.6]{KW}, $E_2(t)$ extends to an analytic semigroup $E_q(t)$ on $L^q(\varOmega)$, $1<q<\infty$, which is $R$-bounded in the sector $\varSigma_\theta$ for all $\theta\in(0,\pi/2)$ and the $R$-bound depends only on $C_\theta$ and $q$. (We refer to \cite{KW} for a discussion of the notion of $R$-boundedness.) To summarize, we have the following lemma.

\begin{lemma}[Angle of $R$-boundedness of the semigroup]\label{RbdSg} For any given $1<q<\infty$, the semigroup $\{E_q(t):L^q(\varOmega)\rightarrow L^q(\varOmega)\}_{t>0}$ defined by the parabolic problem \eqref{PDECauchy} is $R$-bounded in the sector
$\varSigma_\theta=\{z\in{\mathbb C}: |{\rm arg}(z)|<\theta\}$ for all $\theta\in(0,\pi/2)$, and the $R$-bound depends only on $K_0$, $\theta$ and $q$. \end{lemma}

According to Weis' characterization of maximal $L^p$-regularity \cite[Theorem 4.2]{Weis2}, Lemma \ref{RbdSg} implies the following two results.

\begin{lemma}[Angle of $R$-boundedness of the resolvent]\label{RbdRes} Let $A_q$ be the generator of the semigroup $\{E_q(t)\}_{t>0}$ defined by the parabolic problem \eqref{PDECauchy}, where $1<q<\infty$. Then the set $\{ \lambda(\lambda-A_q)^{-1}\,:\, \lambda\in \varSigma_\theta \}$, with $\varSigma_\theta=\{z\in{\mathbb C}: |{\rm arg}(z)|<\theta\}$,
is $R$-bounded for all $\theta\in(0,\pi)$, and the $R$-bound depends only on $K_0$, $\theta$ and $q$. \end{lemma}

\begin{lemma}[Maximal $L^p$-regularity]\label{LemmaMaxLp} Let $A_q$ be the generator of the semigroup $\{E_q(t)\}_{t>0}$ defined by the parabolic problem \eqref{PDECauchy}. Then the solution $u(t)$ of the parabolic initial value problem

\begin{equation} \label{IVPq}
    \left\{
    \begin{aligned}
        u'(t)&{} = A_qu(t) + f(t), \quad t>0,\\
        u(0) &{}= 0,
    \end{aligned}
    \right.
\end{equation}

belongs to $D(A_q)$ for almost all $t\in{\mathbb R}_+$, and

\begin{equation}\label{MaxLpAbstr}
\|u'\|_{L^p({\mathbb R}_+;L^q(\varOmega))} +\|A_qu\|_{L^p({\mathbb R}_+;L^q(\varOmega))}\leq C_{p,q} \|f\|_{L^p({\mathbb R}_+;L^q(\varOmega))},
\end{equation}

for all $f\in L^p({\mathbb R}_+;L^q(\varOmega))$ and $1<p,q<\infty$, where $C_{p,q}$ depends only on $p, q$ and~$K_0$. \end{lemma}
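
Let us also record a standard consequence of \eqref{MaxLpAbstr} (stated here for convenience): for every finite $T>0$,
\begin{equation*}
\|u'\|_{L^p(0,T;L^q(\varOmega))} +\|A_qu\|_{L^p(0,T;L^q(\varOmega))}\leq C_{p,q} \|f\|_{L^p(0,T;L^q(\varOmega))}
\end{equation*}
with the same constant $C_{p,q}$, as one sees by extending $f$ by zero to ${\mathbb R}_+$ and noting that, by uniqueness, the resulting solution coincides on $(0,T)$ with the solution of \eqref{IVPq} for the original $f$.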
|\!|\!|
If the domain $\varOmega$ is smooth and $b\in C^{1,\alpha}(\overline\varOmega)$, then (see \cite[Chapter 3, Theorems 6.3--6.4]{CW98})
\[
\|u\|_{W^{2,q}(\varOmega)}\leq C_{q}\|A_qu\|_{L^q(\varOmega)},
\]
where the constant $C_q$ depends only on $K_0,q,\alpha$ and $\|b\|_{C^{1,\alpha}(\overline\varOmega)}$. Hence, Lemma \ref{LemmaMaxLp} implies (i) for the case (P2).

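To make this implication explicit in the case (P2), the two estimates combine as follows (this is merely a spelled-out restatement of what has just been said, with no additional assumptions): for the solution of \eqref{IVPq},
\[
\|u\|_{L^p({\mathbb R}_+;W^{2,q}(\varOmega))}
\leq C_q\,\|A_qu\|_{L^p({\mathbb R}_+;L^q(\varOmega))}
\leq C_{p,q}\,\|f\|_{L^p({\mathbb R}_+;L^q(\varOmega))},
\]
where the first inequality applies the elliptic estimate above at almost every $t\in{\mathbb R}_+$ and the second inequality is \eqref{MaxLpAbstr}.
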
In the case (P1), we need to use the following result.

\begin{lemma}\label{THMJK}
For any given bounded Lipschitz domain $\varOmega\subset{\mathbb R}^d$, $d=2,3$, the solution of the elliptic boundary value problem
\begin{equation}\label{EllipticEq}
\left\{
\begin{alignedat}{2}
&\nabla\cdot(b(x)\nabla u)=f\quad&&\text{in}\,\,\,\varOmega,\\
&u=0&&\text{on}\,\,\,\partial\varOmega,
\end{alignedat}
\right.
\end{equation}
satisfies
\begin{equation}\label{W1quf}
\|u\|_{W^{1,q}(\varOmega)}\leq C_q\|f\|_{W^{-1,q}(\varOmega)},\quad\forall\, q_d'<q<q_d,
\end{equation}
where $q_d>2$ is a constant which depends on the domain, $1/q_d+1/q_d'=1$, and the constant $C_q$ depends on $q$, $K_0$, $\varOmega$, $\alpha$ and $\|b\|_{C^\alpha(\overline\varOmega)}$. In particular, $q_2>4$ and $q_3>3$ for any given bounded Lipschitz domain $\varOmega\subset{\mathbb R}^d$, $d=2,3$.
\end{lemma}

Lemma \ref{THMJK} was proved in \cite[Theorem 0.5]{JK95} for constant coefficient $b(x)$; it extends to variable coefficients $b(x)$ by a standard perturbation argument. By using Lemmas \ref{LemmaMaxLp} and \ref{THMJK}, we can also prove the following maximal $L^p$-regularity on $W^{-1,q}(\varOmega)$, which implies (i) for the case (P1) (with $X=W^{-1,q}(\varOmega)$ and $d<q<q_d$).

\begin{lemma}\label{Le:max-reg-con0}
If $\varOmega$ is a Lipschitz domain and $b\in C^\alpha(\overline\varOmega)$ for some $\alpha\in(0,1)$, then the solution of \eqref{IVPq} satisfies
\begin{equation}\label{MaxLpW1q0}
\|\partial_tu\|_{L^p({\mathbb R}_+;W^{-1,q}(\varOmega))}
+\|u\|_{L^p({\mathbb R}_+;W^{1,q}(\varOmega))}
\leq C_{p,q}\|f\|_{L^p({\mathbb R}_+;W^{-1,q}(\varOmega))},
\end{equation}
for all $f\in L^p({\mathbb R}_+;W^{-1,q}(\varOmega))$, $1<p<\infty$ and $q_d'<q<q_d$; the constant $C_{p,q}$ depends only on $p$, $q$, $K_0$, $\varOmega$, $\alpha$ and $\|b\|_{C^\alpha(\overline\varOmega)}$.
\end{lemma}

\begin{proof}
Since the solution of \eqref{IVPq} is given by
\[
u(t)=\int_0^t E_q(t-s)f(s)\,{\mathrm d} s,
\]
Lemma \ref{LemmaMaxLp} implies that the map from $f$ to $A_qu$ given by the formula
\[
A_qu(t)=\int_0^t A_qE_q(t-s)f(s)\,{\mathrm d} s
\]
is bounded in $L^p(0,T;L^q(\varOmega))$. In other words, if we define
\[
w(t):=-\int_0^t A_qE_q(t-s)(-A_q)^{-1/2}f(s)\,{\mathrm d} s,
\]
then we have
\begin{equation}\label{w-f}
\|w\|_{L^p(0,T;L^q(\varOmega))}
\leq C_{p,q}\|(-A_q)^{-1/2}f\|_{L^p(0,T;L^q(\varOmega))},
\end{equation}
where the fractional power operator $(-A_q)^{-1/2}$ is well defined (due to the self-adjointness and positivity of $-A_q$) and commutes with $A_q$. It is straightforward to check that
\[
\nabla u=\nabla(-A_q)^{-1/2}w.
\]

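For the reader's convenience, here is a short verification of this identity; it is a direct computation from the definitions of $u$ and $w$ above, using that $(-A_q)^{-1/2}$ commutes with $A_q$ and $E_q(t)$, and that $A_q(-A_q)^{-1}=-I$:
\[
(-A_q)^{-1/2}w(t)
=-\int_0^t A_qE_q(t-s)(-A_q)^{-1}f(s)\,{\mathrm d} s
=\int_0^t E_q(t-s)f(s)\,{\mathrm d} s
=u(t),
\]
and applying $\nabla$ gives the displayed identity.
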
Since the Riesz transform $\nabla(-A_q)^{-1/2}$ is bounded on $L^q(\varOmega)$ for $q_d'<q<q_d$ (see Appendix), it follows that $\|\nabla u\|_{L^p(0,T;L^{q}(\varOmega))}\leq C_{p,q}\|w\|_{L^p(0,T;L^{q}(\varOmega))}$; therefore, in view of \eqref{w-f}, we have
\begin{equation}\label{nablaun0}
\|\nabla u\|_{L^p(0,T;L^{q}(\varOmega))}
\leq C_{p,q}\|(-A_q)^{-1/2}f\|_{L^p(0,T;L^q(\varOmega))}.
\end{equation}

Moreover, since $(-A_q)^{-1/2}\nabla\cdot\,$ is the dual of the Riesz transform $\nabla(-A_q)^{-1/2}$, it is also bounded on $L^q(\varOmega)$ for any $q_d'<q<q_d$, and so we have
\begin{align*}
\|(-A_q)^{-1/2}f\|_{L^p(0,T;L^q(\varOmega))}
&{}=\|(-A_q)^{-1/2}\nabla\cdot\nabla\varDelta^{-1} f\|_{L^p(0,T;L^q(\varOmega))}\\
&{}\leq C_q\|\nabla\varDelta^{-1} f\|_{L^p(0,T;L^q(\varOmega))},
\end{align*}
whence
\begin{equation}\label{Aq-12fn0}
\|(-A_q)^{-1/2}f\|_{L^p(0,T;L^q(\varOmega))}
\leq C_{p,q}\|f\|_{L^p(0,T;W^{-1,q}(\varOmega))}.
\end{equation}
Combining \eqref{nablaun0} and \eqref{Aq-12fn0} yields \eqref{MaxLpW1q0}.
\end{proof}

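\begin{remark}
The duality claim used above (and again in the next subsection) can be checked formally as follows; this is only a sketch, carried out for smooth functions and ignoring questions of operator domains, the homogeneous Dirichlet boundary condition (as in \eqref{EllipticEq}) ensuring that no boundary term appears in the integration by parts. Using the symmetry of $(-A_q)^{-1/2}$ with respect to the $L^2(\varOmega)$ pairing,
\[
\int_\varOmega\big((-A_q)^{-1/2}\nabla\cdot F\big)\,g\,{\mathrm d}x
=\int_\varOmega(\nabla\cdot F)\,(-A_q)^{-1/2}g\,{\mathrm d}x
=-\int_\varOmega F\cdot\nabla\big((-A_q)^{-1/2}g\big)\,{\mathrm d}x,
\]
so that $(-A_q)^{-1/2}\nabla\cdot\,$ is, up to a sign, the adjoint of the Riesz transform $\nabla(-A_q)^{-1/2}$ and inherits its boundedness on $L^q(\varOmega)$ for $q_d'<q<q_d$.
\end{remark}
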
\subsection{Proof of Lemma~\ref{lem:unif-maxreg-bdf}}\label{Sec:MLpBDF}

In this subsection, we consider the BDF time discretization of \eqref{IVPq}:
\begin{align}
\label{def:BDF}
&\frac{1}{\tau}\sum_{j=0}^k \delta_j u_{n-j}=A_qu_n+f_n, \qquad n\geq k,\\
\label{starting-from-zero}
&u_0=0\quad\text{and}\quad u_1,\dotsc,u_{k-1}\,\,\,\text{given (possibly nonzero)}.
\end{align}

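As a concrete illustration (assuming the standard normalization of the BDF coefficients, under which $\delta_0=1$ and $\delta_1=-1$ for $k=1$), the one-step scheme \eqref{def:BDF} is simply the backward Euler method
\[
\frac{u_n-u_{n-1}}{\tau}=A_qu_n+f_n,\qquad n\geq 1,
\]
with the single starting value $u_0=0$ from \eqref{starting-from-zero}.
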
In view of Lemma \ref{RbdRes}, we have the following result, which implies Lemma \ref{lem:unif-maxreg-bdf} for the case (P2).

\begin{proposition}[{\cite[Theorems 4.1--4.2 and Remark 4.3]{KLL}}]\label{Pr:BDFk}
For $1\leq k\leq 6$, the solution of \eqref{def:BDF}--\eqref{starting-from-zero} satisfies
\begin{equation}
\begin{aligned}
&\big\|(\dot u_n)_{n=k}^N\big\|_{L^p(L^q(\varOmega))}
+\big\|(A_qu_n)_{n=k}^N\big\|_{L^p(L^q(\varOmega))}\\
&\leq C_{p,q}\bigg(\tau\sum_{n=1}^{k-1}\Big\|\frac{u_n-u_{n-1}}{\tau}\Big\|^p_{L^q(\varOmega)}\bigg)^{\frac{1}{p}}
+C_{p,q}\bigg(\tau\sum_{n=1}^{k-1}\|A_qu_n\|^p_{L^q(\varOmega)}\bigg)^{\frac{1}{p}}\\
&\quad+C_{p,q}\big\|(f_n)_{n=k}^N\big\|_{L^p(L^q(\varOmega))},
\end{aligned}
\end{equation}
for $1<p,q<\infty$, where the constant $C_{p,q}$ depends only on $p$, $q$ and $K_0$; in particular, it is independent of $\tau$, $N$ and $b$.
\end{proposition}

By applying Proposition \ref{Pr:BDFk}, we prove the following result, which implies Lemma \ref{lem:unif-maxreg-bdf} for the case (P1) (with $X=W^{-1,q}(\varOmega)$ and $d<q<q_d$).

\begin{proposition}\label{Th:BDFk}
Let $1\leq k\leq 6$. If $\varOmega$ is a Lipschitz domain and the coefficient satisfies $b\in C^\alpha(\overline\varOmega)$ for some $\alpha\in(0,1)$, then the solution of \eqref{def:BDF}--\eqref{starting-from-zero} satisfies
\begin{equation}\label{BDFconvex}
\begin{aligned}
&\big\|(\dot u_n)_{n=k}^N\big\|_{L^p(W^{-1,q}(\varOmega))}
+\big\|(u_n)_{n=k}^N\big\|_{L^p(W^{1,q}(\varOmega))}\\
&\leq C_{p,q}\bigg(\tau\sum_{n=1}^{k-1}\Big\|\frac{u_n-u_{n-1}}{\tau}\Big\|^p_{W^{-1,q}(\varOmega)}\bigg)^{\frac{1}{p}}
+C_{p,q}\bigg(\tau\sum_{n=1}^{k-1}\|u_n\|^p_{W^{1,q}(\varOmega)}\bigg)^{\frac{1}{p}}\\
&\quad+C_{p,q}\big\|(f_n)_{n=k}^N\big\|_{L^p(W^{-1,q}(\varOmega))},
\end{aligned}
\end{equation}
for all $1<p<\infty$ and $q_d'<q<q_d$, where the constant $C_{p,q}$ depends only on $p$, $q$, $K_0$, $\varOmega$, $\alpha$ and $\|b\|_{C^\alpha(\overline\varOmega)}$; in particular, it is independent of $\tau$ and $N$.
\end{proposition}

\begin{proof}
In view of \cite[Remark 4.3]{KLL}, we only need to consider the case $u_0=\cdots=u_{k-1}=0$. The proof is similar to the proof of Lemma \ref{Le:max-reg-con0}.

We consider the expansion
\[
\Bigl(\frac{\delta(\zeta)}{\tau}-A_q\Bigr)^{-1}=\tau\sum_{n=0}^\infty E_n\zeta^n,
\qquad |\zeta|<1,
\]
where $\delta(\zeta)=\sum_{j=0}^{k}\delta_j\zeta^j$ denotes the generating polynomial of the BDF method. This expansion yields (see, e.g., \cite[Section 7]{KLL})
\[
u_n=\sum_{j=k}^n\tau E_{n-j}f_j.
\]

Proposition \ref{Pr:BDFk} implies that the map from $(f_n)_{n=k}^N$ to $(A_qu_n)_{n=k}^N$ given by the formula
\[
A_qu_n=\sum_{j=k}^n\tau A_qE_{n-j}f_j
\]
is bounded in $L^p(L^q(\varOmega))$. In other words, if we define
\[
w_n:=-\sum_{j=k}^n\tau A_qE_{n-j}(-A_q)^{-1/2}f_j,
\]
then we have
\begin{equation}\label{w-f-discrete}
\|(w_n)_{n=1}^N\|_{L^p(L^q(\varOmega))}
\leq C_{p,q}\|((-A_q)^{-1/2}f_n)_{n=1}^N\|_{L^p(L^q(\varOmega))},
\end{equation}
where the fractional power operator $(-A_q)^{-1/2}$ commutes with $A_q$. It is straightforward to check, exactly as in the proof of Lemma \ref{Le:max-reg-con0}, that
\[
\nabla u_n=\nabla(-A_q)^{-1/2}w_n.
\]

Since the Riesz transform $\nabla(-A_q)^{-1/2}$ is bounded on $L^q(\varOmega)$ for $q_d'<q<q_d$ (see Appendix), it follows that $\|(\nabla u_n)_{n=1}^N\|_{L^p(L^{q}(\varOmega))}\leq C_{p,q}\|(w_n)_{n=1}^N\|_{L^p(L^{q}(\varOmega))}$; therefore, in view of \eqref{w-f-discrete}, we have
\begin{equation}\label{nablaun}
\|(\nabla u_n)_{n=1}^N\|_{L^p(L^{q}(\varOmega))}
\leq C_{p,q}\|((-A_q)^{-1/2}f_n)_{n=1}^N\|_{L^p(L^q(\varOmega))}.
\end{equation}

Moreover, since $(-A_q)^{-1/2}\nabla\cdot\,$ is the dual of the Riesz transform $\nabla(-A_q)^{-1/2}$, it is also bounded on $L^q(\varOmega)$ for any $q_d'<q<q_d$, and so we have
\begin{align*}
\|((-A_q)^{-1/2}f_n)_{n=1}^N\|_{L^p(L^q(\varOmega))}
&{}=\|((-A_q)^{-1/2}\nabla\cdot\nabla\varDelta^{-1} f_n)_{n=1}^N\|_{L^p(L^q(\varOmega))}\\
&{}\leq C_q\|(\nabla\varDelta^{-1} f_n)_{n=1}^N\|_{L^p(L^q(\varOmega))},
\end{align*}
whence
\begin{equation}\label{Aq-12fn}
\|((-A_q)^{-1/2}f_n)_{n=1}^N\|_{L^p(L^q(\varOmega))}
\leq C_{p,q}\|(f_n)_{n=1}^N\|_{L^p(W^{-1,q}(\varOmega))}.
\end{equation}
Combining \eqref{nablaun} and \eqref{Aq-12fn} yields \eqref{BDFconvex}.
\end{proof}


\section{Sobolev and related inequalities: proof of Lemma~\ref{lem:framework}}
\label{Sec:Sobolev}

A Banach space $X$ is said to be imbedded into another Banach space $Y$, denoted by $X\hookrightarrow Y$, if

(a) every $u\in X$ belongs to $Y$, and the corresponding inclusion map from $X$ to $Y$ is one-to-one;

(b) $\|u\|_Y\leq C\|u\|_X$ for all $u\in X$, where $C$ is a constant independent of $u$.

\noindent The space $X$ is said to be compactly imbedded into $Y$, denoted by $X\hookrightarrow\hookrightarrow Y$, if in addition to (a)--(b) the following condition is satisfied:

(c) a bounded subset of $X$ is always a pre-compact subset of $Y$.

\begin{lemma}\label{IntpIneq}
Let $X$, $Y$ and $Z$ be Banach spaces such that $X$ is compactly imbedded into $Y$, and $Y$ is imbedded into $Z$, i.e.,
\[
X\hookrightarrow\hookrightarrow Y\hookrightarrow Z.
\]
Then, for any $\varepsilon>0$, there holds
\[
\|u\|_Y\leq\varepsilon\|u\|_X+C_\varepsilon\|u\|_Z.
\]
\end{lemma}

\begin{proof}
This lemma is probably well known, but since we did not find a reference, we include the short proof.

Suppose, on the contrary, that there exists $\varepsilon>0$ for which the inequality above fails for every choice of the constant $C_\varepsilon$. Then, taking $C_\varepsilon=n$, there exists a sequence $u_n\in X$, $n=1,2,\dotsc,$ such that
\[
\|u_n\|_Y\geq\varepsilon\|u_n\|_X+n\|u_n\|_Z.
\]
By a normalization (dividing $u_n$ by a constant), we can assume that $\|u_n\|_Y=1$ for all $n\geq 1$. Hence, we have
\[
\|u_n\|_X\leq 1/\varepsilon\quad\text{and}\quad\|u_n\|_Z\leq 1/n.
\]

On one hand, since $X$ is compactly imbedded into $Y$, the boundedness of $u_n$ in $X$ implies the existence of a subsequence $u_{n_k}$, $k=1,2,\dotsc,$ which converges in $Y$ to some element $u\in Y\hookrightarrow Z$. Hence,
\begin{align}\label{norm_of_u}
\|u\|_Y=\lim_{k\rightarrow\infty}\|u_{n_k}\|_Y=1.
\end{align}
On the other hand, $\|u_n\|_Z\leq 1/n$ implies that $u_{n_k}$ converges to the zero element in $Z$, which means that
\begin{align}\label{u_is_zero}
u=0.
\end{align}
Clearly, \eqref{norm_of_u} and \eqref{u_is_zero} contradict each other.
\end{proof}


\begin{lemma}[Sobolev imbedding]\label{SobEmbed1}
For $s>0$, $1<p,q<\infty$ and $d\geq 1$, we have\emph{:}
\begin{enumerate}[$(1)$]\itemsep=0pt
\item $W^{s,q}(\varOmega)\hookrightarrow\hookrightarrow C^{\alpha}(\overline\varOmega)\hookrightarrow L^\infty(\varOmega)$ for $\alpha\in(0,s-d/q)$ if $sq>d$,
\item $W^{s,q}(\varOmega)\hookrightarrow\hookrightarrow C^{1,\alpha}(\overline\varOmega)$ for $\alpha\in(0,s-1-d/q)$ if $(s-1)q>d$,
\item $W^{s,p}({\mathbb R};X)\hookrightarrow L^{\infty}({\mathbb R};X)$ if $sp>1$, where $X=L^q(\varOmega)$ or $X=W^{-1,q}(\varOmega)$,
\item $H^1(\varOmega)\hookrightarrow\hookrightarrow L^{q_0}(\varOmega)$ for all $1\leq q_0<2d/(d-2)$ when $d\geq 2$, and $q_0=\infty$ when $d=1$.
\end{enumerate}
\end{lemma}

\begin{remark}
(1)--(2) of Lemma \ref{SobEmbed1} are immediate consequences of \cite[p.\ xviii, Sobolev imbedding (18)]{Amann95}; (3) is a simple vector extension of (1); (4) can be found in \cite[p.\ 272, Theorem 1]{Evans}.
\end{remark}

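To illustrate how (1) is typically applied (the particular numbers here are chosen only for illustration): for $d=3$, $s=2$ and $q=2$ we have $sq=4>3=d$ and $s-d/q=1/2$, so $W^{2,2}(\varOmega)\hookrightarrow\hookrightarrow C^{\alpha}(\overline\varOmega)$ for every $\alpha\in(0,1/2)$.
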
\noindent{\it Proof of (ii) in Lemma~\ref{lem:framework}}.

For any $1<p,q<\infty$ and $r_1,r_2,r\ge 0$ such that $(1-\theta)r_1+\theta r_2=r$ for $\theta\in(0,1)$, we denote by $B^{r,q}_{p}(\varOmega):=(W^{r_1,q}(\varOmega),W^{r_2,q}(\varOmega))_{\theta,p}$ the Besov space of order $r$ (a real interpolation space between two Sobolev spaces, see \cite{Tartar07}). Then, via Sobolev embedding, we have
\begin{equation*}
\begin{aligned}
&W^{1,p}(0,T;L^q(\varOmega))\cap L^p(0,T;W^{2,q}(\varOmega))\\
&\hookrightarrow L^\infty(0,T;(L^q(\varOmega),W^{2,q}(\varOmega))_{1-1/p,p})
&&\mbox{see \cite[Proposition 1.2.10]{Lunardi95}}\\
&\simeq L^\infty(0,T;B^{2-2/p,q}_{p}(\varOmega))
&&\mbox{according to the definition}\\
&\hookrightarrow L^\infty(0,T;C^{1,\alpha}(\varOmega))
&&\mbox{when $(1-2/p)q>d$ $\Leftrightarrow$ $2/p+d/q<1$,}
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
&W^{1,p}(0,T;W^{-1,q}(\varOmega))\cap L^p(0,T;W^{1,q}(\varOmega))\\
&\hookrightarrow L^\infty(0,T;(W^{-1,q}(\varOmega),W^{1,q}(\varOmega))_{1-1/p,p})
&&\mbox{see \cite[Proposition 1.2.10]{Lunardi95}}\\
&\simeq L^\infty(0,T;B^{1-2/p,q}_{p}(\varOmega))
&&\mbox{according to the definition}\\
&\hookrightarrow L^\infty(0,T;C^{\alpha}(\varOmega))
&&\mbox{when $(1-2/p)q>d$ $\Leftrightarrow$ $2/p+d/q<1$.}
\end{aligned}
\end{equation*}

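For the identification steps ``$\simeq$'' above, it may help to record the exponents explicitly (this is just the arithmetic in the definition of $B^{r,q}_p(\varOmega)$ recalled before the displays): in the first chain, $r_1=0$, $r_2=2$ and $\theta=1-1/p$, so $r=(1-\theta)\cdot 0+\theta\cdot 2=2-2/p$; in the second chain, $r_1=-1$ and $r_2=1$ (extending the same convention to a negative lower order), so $r=\frac{1}{p}\cdot(-1)+\bigl(1-\frac{1}{p}\bigr)\cdot 1=1-2/p$.
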

\noindent{\it Proof of (iii)--(v) in Lemma~\ref{lem:framework}}.
We consider first the setting (P2).

Property (iii) is standard textbook material. To prove (iv), we note
\[
(A(v)-A(w))u=\nabla\cdot\bigl((a(v)-a(w))\nabla u\bigr)=\nabla(a(v)-a(w))\cdot\nabla u+(a(v)-a(w))\varDelta u
\]
and estimate as follows:
\[
\|(A(v)-A(w))u\|_{L^q(\varOmega)}\le C_R\|v-w\|_{W^{1,\infty}(\varOmega)}\|u\|_{W^{1,q}(\varOmega)}
+C_R\|v-w\|_{L^\infty(\varOmega)}\|u\|_{W^{2,q}(\varOmega)}.
\]

Since $C^{1,\alpha}(\overline\varOmega)$ is compactly imbedded into $W^{1,\infty}(\varOmega)$ and $L^\infty(\varOmega)$, Lemma~\ref{IntpIneq} yields, for arbitrary $\varepsilon>0$, the inequalities
\[
\begin{aligned}
\|v-w\|_{W^{1,\infty}(\varOmega)}&\le\varepsilon\|v-w\|_{C^{1,\alpha}(\overline\varOmega)}
+C_\varepsilon\|v-w\|_{L^2(\varOmega)},\\
\|v-w\|_{L^\infty(\varOmega)}&\le\varepsilon\|v-w\|_{C^{1,\alpha}(\overline\varOmega)}
+C_\varepsilon\|v-w\|_{L^2(\varOmega)}.
\end{aligned}
\]

The inequality in (v) follows by estimating
\[
\langle\varphi,(A(v)-A(w))u\rangle
\le C_R\|\varphi\|_{H^1(\varOmega)}\,
\|v-w\|_{L^{\bar q}(\varOmega)}\|u\|_{W^{1,q}(\varOmega)},
\]
where $1/\bar q+1/q=1/2$. Note that for the considered $q>d$ we have $\bar q=q/(q/2-1)<d/(d/2-1)$, and so $H^1(\varOmega)$ is compactly imbedded into $L^{\bar q}(\varOmega)$ (see (4) of Lemma \ref{SobEmbed1} or \cite[Theorem 6.3]{Adams}), so that by Lemma~\ref{IntpIneq}
\[
\|v-w\|_{L^{\bar q}(\varOmega)}\le\varepsilon\|v-w\|_{H^1(\varOmega)}
+C_\varepsilon\|v-w\|_{L^2(\varOmega)}.
\]

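A quick numerical check of the comparison just used (the numbers are chosen only for illustration): for $d=3$ and $q=4>d$ one gets $\bar q=q/(q/2-1)=4$, while $d/(d/2-1)=2d/(d-2)=6$, so indeed $\bar q<2d/(d-2)$ and (4) of Lemma \ref{SobEmbed1} applies with $q_0=\bar q$.
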
We now turn to the setting (P1).

Property (iii) is the same as in (P2), and (v) has actually been shown above. Property (iv) follows from estimating
\[
\langle\varphi,(A(v)-A(w))u\rangle
\le C_R\|\varphi\|_{W^{1,q'}(\varOmega)}\,
\|v-w\|_{L^{\infty}(\varOmega)}\|u\|_{W^{1,q}(\varOmega)},
\]
where $1/q+1/q'=1$, and from the bound
\[
\|v-w\|_{L^\infty(\varOmega)}\le\varepsilon\|v-w\|_{C^\alpha(\overline\varOmega)}
+C_\varepsilon\|v-w\|_{L^2(\varOmega)},
\]
which follows from Lemma~\ref{IntpIneq}.


\appendix
\section{Boundedness of the Riesz transform}

Let $A_q: D(A_q)\rightarrow L^q(\varOmega)$ be defined by $A_qu=\nabla\cdot(b(x)\nabla u)$, where
\[D(A_q)=\{u\in W^{1,q}(\varOmega):\ \nabla\cdot(b(x)\nabla u)\in L^q(\varOmega)\}
\quad\text{and}\quad
b\in C^\alpha(\overline\varOmega)\,\,\,\text{satisfies }\eqref{ellipt}.\]
\begin{lemma}
The Riesz transform $\nabla(-A_q)^{-1/2}$ is bounded on $L^q(\varOmega)$ for $q_d'<q<q_d$, i.e.,
\[\|\nabla(-A_q)^{-1/2}u\|_{L^q(\varOmega)}
\leq C_q\|u\|_{L^q(\varOmega)}, \quad \text{for}\,\,\,\, q_d'<q<q_d,\]
where the constant $C_q$ depends on $q$, $\varOmega$, $\alpha$ and $\|b\|_{C^\alpha(\overline\varOmega)}$.
\end{lemma}

\begin{proof}
It has been proved in \cite[Theorem B]{Shen05} that the Riesz transform $\nabla(-A_q)^{-1/2}$ is bounded on $L^q(\varOmega)$ if and only if the solution of the homogeneous equation
\begin{equation}\label{HomoEqu}
\nabla\cdot(b(x)\nabla u) = 0
\end{equation}
satisfies the following local estimate:
\begin{equation}\label{HomoEst0}
\bigg(\frac{1}{r^n}\int_{\varOmega\cap B_r(x_0)}
|\nabla u|^q\,{\mathrm d} x\bigg)^{\frac{1}{q}}
\leq C \bigg(\frac{1}{r^n}\int_{\varOmega\cap B_{\sigma_0 r}(x_0)}
|\nabla u|^2\,{\mathrm d} x\bigg)^{\frac{1}{2}},
\end{equation}
for all $x_0\in\varOmega$ and $0<r<r_0$, where $r_0$ is a given small positive constant and $\sigma_0\geq 2$ is a given constant such that $\varOmega\cap B_{\sigma_0 r_0}(x_0)$ is the intersection of $B_{\sigma_0 r_0}(x_0)$ with a Lipschitz graph. It remains to prove \eqref{HomoEst0}.
Let $\omega$ be a smooth cut-off function which equals zero outside $B_{2r}:=B_{2r}(x_0)$ and equals 1 on $B_r$. Extend $u$ to be zero on $B_{2r}\backslash\varOmega$ and denote by $u_{2r}$ the average of $u$ over $B_{2r}$. Then (\ref{HomoEqu}) implies
\begin{equation}\label{HomoEqu2}
\nabla\cdot(b\nabla(\omega(u-u_{2r})))
=\nabla\cdot(b(u-u_{2r})\nabla\omega)
+b\nabla\omega\cdot\nabla(u-u_{2r})
\quad\text{in}\,\,\,\varOmega,
\end{equation}
and the $W^{1,q}$ estimate (Lemma \ref{THMJK}) implies
\begin{align*}
\|\omega(u-u_{2r})\|_{W^{1,q}(\varOmega)}
&\leq C\|(u-u_{2r})\nabla\omega\|_{L^q(\varOmega)}
+C\|\nabla\omega\cdot\nabla u\|_{W^{-1,q}(\varOmega)} \\
&\leq C\|(u-u_{2r})\nabla\omega\|_{L^q(\varOmega)}
+C\|\nabla\omega\cdot\nabla u\|_{L^s(\varOmega)} \\
&= C\|(u-u_{2r})\nabla\omega\|_{L^q(B_{2r})}
+C\|\nabla\omega\cdot\nabla u\|_{L^s(B_{2r})} \\
&\leq Cr^{-1}\|\nabla u\|_{L^s(B_{2r})},
\end{align*}
where $s=qd/(q+d)<q$ satisfies $L^s(\varOmega)\hookrightarrow W^{-1,q}(\varOmega)$ and $W^{1,s}(\varOmega)\hookrightarrow L^q(\varOmega)$. The last inequality implies
\begin{equation}\label{HomoEst2}
\|\nabla u\|_{L^q(\varOmega\cap B_{r})}
\leq Cr^{-1}\|\nabla u\|_{L^s(\varOmega\cap B_{2r})}.
\end{equation}
If $s\leq 2$, then one can derive
\[
\|\nabla u\|_{L^q(\varOmega\cap B_{r})}
\leq Cr^{d/q-d/2}\|\nabla u\|_{L^2(\varOmega\cap B_{2r})}
\]
by using once more H\"older's inequality on the right-hand side.
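Indeed, for $s\le 2$, H\"older's inequality on $\varOmega\cap B_{2r}$ gives
\[
\|\nabla u\|_{L^s(\varOmega\cap B_{2r})}
\le |B_{2r}|^{\frac{1}{s}-\frac{1}{2}}\,\|\nabla u\|_{L^2(\varOmega\cap B_{2r})}
\le C r^{\,d(\frac{1}{s}-\frac{1}{2})}\,\|\nabla u\|_{L^2(\varOmega\cap B_{2r})},
\]
and since $s=qd/(q+d)$ gives $d/s=1+d/q$, combining this with \eqref{HomoEst2} yields the stated power $r^{-1+d/s-d/2}=r^{d/q-d/2}$.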
Otherwise, one only needs a finite number of iterations of (\ref{HomoEst2}) to reduce $s$ to be less than $2$. This completes the proof of (\ref{HomoEst0}).
\end{proof}


\subsection*{Acknowledgment}
The research stay of Buyang Li at the University of T\"ubingen is funded by the Alexander von Humboldt Foundation.
\bibliographystyle{amsplain}
\begin{thebibliography}{10}

\bibitem{Adams} R. A. Adams and J. J. F. Fournier,
\newblock \emph{Sobolev Spaces}, 2$^\text{nd}$ ed.,
\newblock Academic Press, Amsterdam, 2003.

\bibitem{ACM2} G. Akrivis, M. Crouzeix and Ch. Makridakis,
\newblock \emph{Implicit--explicit multistep methods for quasilinear parabolic equations},
\newblock Numer. Math. \textbf{82} (1999) 521--541.

\bibitem{AK} G. Akrivis and E. Katsoprinakis,
\newblock \emph{Backward difference formulae\emph{:} new multipliers and stability properties for parabolic equations},
\newblock Math. Comp. (2016) DOI: 10.1090/mcom3055.

\bibitem{AL} G. Akrivis and C. Lubich,
\newblock \emph{Fully implicit, linearly implicit and implicit--explicit backward difference formulae for quasi-linear parabolic equations},
\newblock Numer. Math. \textbf{131} (2015) 713--735.

\bibitem{Amann95} H. Amann,
\newblock \emph{Linear and Quasilinear Parabolic Problems}, Vol. I, Abstract linear theory,
\newblock Birkh\"auser Boston Inc., Boston, MA, 1995.

\bibitem{BC} C. Baiocchi and M. Crouzeix,
\newblock \emph{On the equivalence of A-stability and G-stability},
\newblock Appl. Numer. Math. \textbf{5} (1989) 19--22.

\bibitem{Blu1} S. Blunck,
\newblock \emph{Maximal regularity of discrete and continuous time evolution equations},
\newblock Studia Math. \textbf{146} (2001) 157--176.

\bibitem{CW98} Y. Z. Chen and L. C. Wu,
\newblock \emph{Second Order Elliptic Equations and Elliptic Systems},
\newblock Translations of Mathematical Monographs, Volume 174, AMS, 1998.

\bibitem{D} G. Dahlquist,
\newblock \emph{G-stability is equivalent to A-stability},
\newblock BIT \textbf{18} (1978) 384--401.

\bibitem{Davis89} E. B. Davies,
\newblock \emph{Heat Kernels and Spectral Theory},
\newblock Cambridge University Press, Cambridge, 1989.

\bibitem{Evans} L. C. Evans,
\newblock \emph{Partial Differential Equations},
\newblock second edition, Graduate Studies in Mathematics 19, American Mathematical Society, Providence, RI, 2010.

\bibitem{HW} E. Hairer and G. Wanner,
\newblock \emph{Solving Ordinary Differential Equations II\emph{:} Stiff and Differential--Algebraic Problems},
\newblock 2$^\text{nd}$ revised ed., Springer--Verlag, Berlin Heidelberg, Springer Series in Computational Mathematics v.\ 14, 2002.

\bibitem{JK95} D. Jerison and C. E. Kenig,
\newblock \emph{The inhomogeneous Dirichlet problem in Lipschitz domains},
\newblock J. Func. Anal. \textbf{130} (1995) 161--219.

\bibitem{KLL} B. Kov\'acs, B. Li and C. Lubich,
\newblock \emph{A-stable time discretizations preserve maximal parabolic regularity},
\newblock Submitted for publication.

\bibitem{KW} P. C. Kunstmann and L. Weis,
\newblock \emph{Maximal $L^p$-regularity for Parabolic Equations, Fourier Multiplier Theorems and $H^\infty$-functional Calculus. Functional Analytic Methods for Evolution Equations},
\newblock Lecture Notes in Mathematics \textbf{1855} (2004) 65--311.

\bibitem{Li15} B. Li,
\newblock \emph{Maximum-norm stability and maximal $L^p$ regularity of FEMs for parabolic equations with Lipschitz continuous coefficients},
\newblock Numer. Math. \textbf{131} (2015) 489--516.

\bibitem{LS15} B. Li and W. Sun,
\newblock \emph{Regularity of the diffusion-dispersion tensor and error analysis of FEMs for a porous media flow},
\newblock SIAM J. Numer. Anal. \textbf{53} (2015) 1418--1437.

\bibitem{Lunardi95} A. Lunardi,
\newblock \emph{Analytic Semigroups and Optimal Regularity in Parabolic Problems},
\newblock Birkh\"auser Verlag, Basel, 1995.

\bibitem{NO} O. Nevanlinna and F. Odeh,
\newblock \emph{Multiplier techniques for linear multistep methods},
\newblock Numer. Funct. Anal. Optim. \textbf{3} (1981) 377--423.

\bibitem{Ouhabaz} E. M. Ouhabaz,
\newblock \emph{Gaussian estimates and holomorphy of semigroups},
\newblock Proc. Amer. Math. Soc. \textbf{123} (1995) 1465--1474.

\bibitem{STW1} A. H. Schatz, V. Thom\'ee and L. B. Wahlbin,
\newblock \emph{Maximum norm stability and error estimates in parabolic finite element equations},
\newblock Comm. Pure Appl. Math. \textbf{33} (1980) 265--304.

\bibitem{Shen05} Z. Shen,
\newblock \emph{Bounds of Riesz transforms on $L^p$ spaces for second order elliptic operators},
\newblock Ann. Inst. Fourier, Grenoble \textbf{55}, 1 (2005) 173--197.

\bibitem{Tartar07} L. Tartar,
\newblock \emph{An Introduction to Sobolev Spaces and Interpolation Spaces},
\newblock Springer--Verlag, Berlin Heidelberg, 2007.

\bibitem{Weis2} L. Weis,
\newblock \emph{Operator-valued Fourier multiplier theorems and maximal $L_p$-regularity},
\newblock Math. Ann. \textbf{319} (2001) 735--758.

\bibitem{Zla} M. Zl\'amal,
\newblock \emph{Finite element methods for nonlinear parabolic equations},
\newblock RAIRO \textbf{11} (1977) 93--107.

\end{thebibliography}

\end{document}
Geomagnetic Virtual Observatories: monitoring geomagnetic secular variation with the Swarm satellites
Magnus D. Hammer, Grace A. Cox, William J. Brown, Ciarán D. Beggan & Christopher C. Finlay
We present geomagnetic main field and secular variation time series, at 300 equal-area distributed locations and at 490 km altitude, derived from magnetic field measurements collected by the three Swarm satellites. These Geomagnetic Virtual Observatory (GVO) series provide a convenient means to globally monitor and analyze long-term variations of the geomagnetic field from low-Earth orbit. The series are obtained by robust fits of local Cartesian potential field models to along-track and East–West sums and differences of Swarm satellite data collected within a radius of 700 km of the GVO locations during either 1-monthly or 4-monthly time windows. We describe two GVO data products: (1) 'Observed Field' GVO time series, where all observed sources contribute to the estimated values, without any data selection or correction, and (2) 'Core Field' GVO time series, where additional data selection is carried out, then de-noising schemes and epoch-by-epoch spherical harmonic analysis are applied to reduce contamination by magnetospheric and ionospheric signals. Secular variation series are provided as annual differences of the Core Field GVOs. We present examples of the resulting Swarm GVO series, assessing their quality through comparisons with ground observatories and geomagnetic field models. In benchmark comparisons with six high-quality mid-to-low latitude ground observatories we find the secular variation of the Core Field GVO field intensities, calculated using annual differences, agrees to an rms of 1.8 nT/yr and 1.2 nT/yr for the 1-monthly and 4-monthly versions, respectively. Regular sampling in space and time, and the availability of data error estimates, makes the GVO series well suited for users wishing to perform data assimilation studies of core dynamics, or to study long-period magnetospheric and ionospheric signals and their induced counterparts. The Swarm GVO time series will be regularly updated, approximately every four months, allowing ready access to the latest secular variation data from the Swarm satellites.
The geomagnetic field undergoes gradual change, evolving year by year in a process known as geomagnetic secular variation. These changes are thought to result primarily from motions of liquid metal in the Earth's outer core but this process is not yet well enough understood to allow accurate predictions of future behavior, even a few years ahead (e.g., Alken et al. 2020a). In this situation we are forced to rely on carefully monitoring the geomagnetic field and its changes in order to provide the information necessary for navigation and orientation applications, and for descriptions of near-Earth radiation belts and current systems.
To make progress beyond field monitoring, detailed information on the geomagnetic variations as a function of space and time must be combined with knowledge of the underlying physical processes. With the rapid development of numerical geodynamo models over the past decade (e.g., Aubert and Finlay 2019), there is now the prospect of assimilating such information into realistic models, such that the processes underlying secular variation can be better understood.
For both monitoring long-term geomagnetic variations, and for data assimilation applications, it is an advantage to have processed satellite magnetic field data available on a well organized grid, with a regular sampling rate in space and time. The Geomagnetic Virtual Observatory (GVO) method is one approach to obtain such a dataset.
The Geomagnetic Virtual Observatory method was first proposed by Mandea and Olsen (2006) as a tool for making satellite magnetic field measurements easily accessible as time series of the vector geomagnetic field at pre-specified locations. The GVO method involves fitting a scalar magnetic potential to satellite magnetic field observations from a chosen time window and within a local region, defined by a cylinder centered on a GVO target point. The potential is then used to compute the magnetic field at the GVO target point such that a mean magnetic field over a chosen time window at satellite altitude is determined; see Fig. 1. The GVO time series thus mimics the time series produced by ground-based magnetic observatories on timescales of months and longer. The main advantage of the GVO time series is that they can be produced at any sites of interest that are covered by satellite data, and in particular, can provide a global grid of time series derived from measurements made by similar instruments onboard satellites such as the Swarm trio.
Fig. 1 Illustration of the Geomagnetic Virtual Observatory concept; satellite magnetic measurements from within a target cylinder are used to infer field time series at the GVO location given by a red dot. Note the cylinder radius is not to scale, the actual cylinder footprints are shown in Fig. 3
Applications of the GVO time series include geomagnetic jerk studies (Olsen and Mandea 2007), comparisons with spherical harmonic (SH) based geomagnetic field models (Olsen et al. 2009, 2010), core flow studies (Kloss and Finlay 2019; Rogers et al. 2019) and data assimilation studies (Barrois et al. 2018). The GVO method can also be used to derive estimates of the magnetic field gradient tensor (Hammer 2018).
Focusing on the core magnetic field, initial studies showed that the original GVO series were contaminated by ionospheric and magnetospheric sources (Beggan et al. 2009; Domingos et al. 2019; Olsen and Mandea 2007; Shore 2013). Recommendations for improving the original GVO concept and better removing such contamination have been proposed (Hammer 2018; Shore 2013). Some of these improvements were implemented in more recent GVO series that have been used for core flow studies by Barrois et al. (2018); Kloss and Finlay (2019); Rogers et al. (2019); and Whaler (2017).
Here, we present details of an updated processing GVO scheme that has been developed during a Swarm DISC (Data, Innovation and Science Cluster) project and is now being used to produce regularly updated Swarm GVO time series as an official ESA Level 2 product. The primary purpose of this paper serves as a reference describing the Swarm GVO series and presenting example validation comparisons with ground observatories. In addition to taking account of the most important recommendations from earlier GVO studies, the series presented here also take advantage of principal component analysis (PCA) (Cox et al. 2018) and spherical harmonic analysis (SHA) in an effort to better isolate the core field signal.
In the "Data" section we describe the input data from the Swarm satellite mission, and the adopted data selection strategies. In the Sect. "Methodology" we describe in detail how the GVO series are calculated. The Sect. "Results" presents examples of GVO time series, derived using Swarm measurements from December 2013 to March 2020 and describes comparisons with ground observatory magnetic field series and global field model predictions. In the Sect. "Discussion and conclusions" we reflect on what can be learned from these comparisons, describe possible applications for the GVO series and mention ideas for extending and improving the present GVO approach.
Data

This section describes the satellite magnetic field measurements used to derive the Swarm GVO time series. The GVO products take as input vector magnetic field measurements in the form of the Swarm Level 1b (L1b) product MAGX_LR_1B, which contains quality-screened, calibrated and corrected measurements given in physical SI units (nT) in a North, East, Center, hereafter NEC, reference frame. For the results presented here, we use Swarm data versions 0506 from 1-Dec-2013 to 30-Mar-2020.
From the Swarm L1b 1Hz magnetic field data, two separate data chains are produced. Data chain (a) simply extracts all available measurements using a sub-sampling of 15s.
Data chain (b) extracts, again using a sub-sampling of 15s, only those measurements that satisfy the following dark, geomagnetic quiet-time selection criteria (a schematic implementation is sketched after this list):
Gross measurement outliers for which the vector field components deviate more than 500 nT from the predictions of the latest CHAOS field model (here version CHAOS-7.2 (Finlay et al. 2020)) are rejected
The Sun is at least \(10^{\circ }\) below horizon
Geomagnetic activity index \(K_p < 3^{0}\)
Time rate of change of Ring Current (RC) index \(\vert dRC/dt\vert <3\)nT/hr\(^{-1}\) (Olsen et al. 2014)
Merging electric field at the magnetopause \(E_m <0.8 \mathrm {mV m^{-1}}\), (Olsen et al. 2014)
Constraints on IMF requiring \(B_z > 0\,\)nT and \(\vert B_y \vert <10\) nT.
2-hourly means are computed from 1-min values of the solar wind and IMF from the OMNI data-base (http://omniweb.gsfc.nasa.gov).
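A minimal sketch of how such a dark, quiet-time selection mask could be assembled is given below. All input names are hypothetical placeholders for arrays sampled at the data times (the solar-wind and IMF quantities being the 2-hourly means described above); the operational Swarm processing chain may differ in detail.

```python
import numpy as np

def quiet_dark_mask(chaos_misfit, sun_elevation, kp, dRC_dt, Em, IMF_Bz, IMF_By):
    """Boolean mask implementing the dark, quiet-time criteria of data chain (b).

    chaos_misfit : largest absolute deviation of the vector components from the
                   CHAOS field model predictions (nT), used for outlier screening.
    sun_elevation: solar elevation angle at the measurement point (degrees).
    kp, dRC_dt, Em, IMF_Bz, IMF_By: activity and solar-wind/IMF quantities
                   as described in the text.
    """
    return ((chaos_misfit < 500.0)          # reject gross outliers (> 500 nT)
            & (sun_elevation < -10.0)       # Sun at least 10 deg below horizon
            & (kp < 3.0)                    # Kp below 3o
            & (np.abs(dRC_dt) < 3.0)        # |dRC/dt| < 3 nT/hr
            & (Em < 0.8)                    # merging electric field < 0.8 mV/m
            & (IMF_Bz > 0.0)                # IMF Bz > 0 nT
            & (np.abs(IMF_By) < 10.0))      # |IMF By| < 10 nT
```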
The Swarm GVO method described in the next section makes use of sums and differences of the satellite magnetic field measurements. Denoting the input magnetic data at a given position \({\mathbf {r}}\) by \(d_l({\mathbf {r}})=\hat{{\mathbf {l}}}\cdot {\mathbf {B}}({\mathbf {r}})\), where \({\mathbf {B}}({\mathbf {r}})\) is the vector magnetic field and \(\hat{{\mathbf {l}}}\) is a unit vector in the component direction, then \(\Sigma d_l\) and \(\Delta d_l\) denote sums and differences of the vector magnetic field components, respectively. Both along-track (AT) and East–West (EW) data sums and differences are considered such that \(\Sigma d_l=(\Sigma d_l^{\mathrm {AT}},\Sigma d_l^{\mathrm {EW}})\) and \(\Delta d_l=(\Delta d_l^{\mathrm {AT}},\Delta d_l^{\mathrm {EW}})\). Along-track data differences are calculated using the 15-s differences \(\Delta d_l^{\mathrm {AT}} = [B_l({\mathbf {r}},t) - B_l({\mathbf {r}}+\delta {\mathbf {r}},t+15s)]\), where \(\delta {\mathbf {r}}=(\delta r,\delta \theta ,\delta \phi )\) is the change in position. A 15-s along-track difference with a satellite speed of \(\approx 7.7\,\mathrm {km/s}\) corresponds to a distance of 115 km (Olsen et al. 2015). Along-track sums are similarly calculated as \(\Sigma d_l^{AT} = [B_l({\mathbf {r}},t) + B_l({\mathbf {r}}+\delta {\mathbf {r}},t+15\,s)]/2\). The East–West differences are calculated as \(\Delta d_l^{\mathrm {EW}} = [B_l^{\mathrm {SWA}}({\mathbf {r}}_1,t_1) - B_l^{\mathrm {SWC}}({\mathbf {r}}_2,t_2)]\) having an East–West orbit separation between the Swarm Alpha (SWA) and Charlie (SWC) satellites of \(\approx 1.4^{\circ }\), corresponding to 155 km at the equator (Olsen et al. 2015). The East–West sums were calculated as \(\Sigma d_l^{\mathrm {EW}} = [B_l^{\mathrm {SWA}}({\mathbf {r}}_1,t_1) + B_l^{\mathrm {SWC}}({\mathbf {r}}_2,t_2)]/2\). For a particular orbit of SWA, the corresponding SWC data were selected to be that closest in latitude with the condition that \(\vert \Delta t\vert =\vert t_1-t_2\vert <50\) s.
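As an illustration of the sums and differences defined above, a simple numpy sketch for one field component follows. The array names are placeholders: B is assumed to be already sub-sampled to 15 s along a single orbit, and lat/t hold latitudes and times of Swarm Alpha and Charlie samples on corresponding orbits.

```python
import numpy as np

def along_track_sums_diffs(B):
    """15 s along-track sums and differences of a component series B."""
    sums = 0.5 * (B[:-1] + B[1:])     # Sigma d^AT = [B(t) + B(t+15s)] / 2
    diffs = B[:-1] - B[1:]            # Delta d^AT = [B(t) - B(t+15s)]
    return sums, diffs

def east_west_pairs(lat_A, t_A, lat_C, t_C, max_dt=50.0):
    """For each Swarm Alpha sample, the Swarm Charlie sample closest in latitude,
    accepted only if the two sampling times differ by less than 50 s."""
    idx = np.abs(lat_C[None, :] - lat_A[:, None]).argmin(axis=1)
    accepted = np.abs(t_C[idx] - t_A) < max_dt
    return idx, accepted
```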
Methodology

This section describes in detail the algorithms used to derive the following Swarm GVO products:
1- and 4-monthly time series of the 'Observed Field'
1- and 4-monthly time series of the 'Core Field' and its secular variation.
Each product involves time series of spherical polar components of the vector magnetic field on an approximately equal-area global grid of 300 locations at an altitude of 490 km above the mean Earth radius. An overview of the algorithm is presented in Fig. 2.
Fig. 2 Overview of the Swarm GVO data product processing and model estimation algorithm
GVO locations and timestamps
For a given GVO target point, and considering a specified time window of either 1 or 4 months, input data that fall within a cylinder of horizontal radius \(r_{cyl}=700\) km around the target point, and which also satisfy the relevant selection criteria (see Sect. "Data"), are extracted. The GVO locations are specified in spherical polar coordinates \({\mathbf {r}}_{GVO}=(r,\theta ,\phi )\), at fixed radius \(r=r_a+h_{GVO}\) where \(h_{GVO}\) is the height above the Earth's mean spherical radius \(r_a=6371.2\) km. For the Swarm data described here, \(h_{GVO}=490\) km, so the GVOs are located at approximately the mean orbital height of the Swarm satellites during 2013–2020, considering each of the lower pair to contribute with half weighting.
The GVO time series are provided in a global approximately equal area grid based on the sphere partitioning algorithm of Leopardi (2006). Selecting a number of GVO grid points, and an associated target cylinder search radius \(r_{cyl}\) that avoids overlap of the target cylinders to ensure independent data, involves a trade-off; decreasing the number of target points and increasing the search radius allows for more data within each GVO cylinder but at the same time lowers the spatial resolution. Preliminary tests with Swarm data suggested that 300 GVO grid locations provided a suitable balance (Hammer 2018). If higher spatial resolution is required, longer time windows than used here are necessary in order to obtain stable GVO estimates. The surface area dS covered by each GVO target cylinder is the total surface area A divided by the number of GVOs, \(N_{GVO}=300\), i.e., \(dS \sim A/N_{GVO} = 4\pi (r_a+h_{GVO})^2/N_{GVO}\), where \(r_a\) is the Earth's mean radius 6371.2 km and \(h_{GVO}\) is the altitude of the GVOs, here 490 km. Equating this area with the area of a circle surrounding the GVO, \(\pi r_{cyl}^2\), gives a target cylinder search radius of \(r_{cyl}=\sqrt{4 (r_a+h_{GVO})^2/N_{GVO}} \approx 700\mathrm {km}\), where we have rounded down to the nearest hundred kilometers to ensure no overlap. The distance between any two GVOs is thus \(\approx 1400\) km. This corresponds roughly to SH degree \(n=14\), since the SH degree n is associated with a horizontal wavelength at satellite altitude of \(\lambda _n \sim 2\pi (r_a +h_{GVO}) /\sqrt{n(n+1)}\) (Backus et al. 1996). With a target cylinder search radius of 700 km, approximately 80% of the data are used; the combined area of the cylinder footprints thus does not span the entire area of the spherical surface, but the independence of each GVO estimate is ensured. The top panel in Fig. 3 illustrates the locations of the 300 globally distributed GVOs and the footprint of the data target cylinders for each GVO. The grid also contains GVOs at the North and South Poles. At these positions the \((r, \theta , \phi )\) frame is defined by letting \(\theta\) be aligned along the Greenwich meridian, r points upwards and \(\phi\) completes the right-handed coordinate system. When computing the main field at the North/South Pole from field models, the average of the main field values evaluated \(0.1^{\circ }\) in latitude from the North/South Pole at longitudes \(0^{\circ }\) and \(180^{\circ }\) was used.
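The back-of-the-envelope numbers above can be reproduced directly; a minimal sketch using the values quoted in the text:

```python
import numpy as np

r_a = 6371.2        # Earth's mean spherical radius, km
h_gvo = 490.0       # GVO altitude above r_a, km
n_gvo = 300         # number of equal-area GVO locations

# Cylinder radius from equating the per-GVO surface area with a circle:
r_cyl_exact = np.sqrt(4.0 * (r_a + h_gvo) ** 2 / n_gvo)   # about 792 km
r_cyl = np.floor(r_cyl_exact / 100.0) * 100.0              # round down -> 700 km

# Fraction of the spherical surface covered by the non-overlapping cylinders:
coverage = n_gvo * np.pi * r_cyl ** 2 / (4.0 * np.pi * (r_a + h_gvo) ** 2)

print(r_cyl, round(coverage, 2))   # 700.0  0.78, i.e. roughly 80% of the data
```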
Fig. 3 Top: distribution of the 300 GVOs (red dots) and associated cylinder bins (in green) using a Hammer projection. Bottom: illustration of the geocentric coordinate system and the local topocentric Cartesian coordinate system used in the GVO estimation (in red)
For the 1-monthly series each GVO estimate has a timestamp corresponding to the 15th day of the considered calendar month. For the 4-monthly series, which are constructed using data from within the intervals January–April, May–August, and September–December, the GVO estimates have been allocated timestamps of the 1st of March, the 1st of July and the 1st of November. The secular variation series are computed from annual differences of the 1-monthly and 4-monthly series, so their timestamps are shifted by 6 months compared with the field series. GVO epoch times are for formal reasons given in GVO product files as milliseconds since 01-Jan-0000 00:00:00.000, following the convention for Swarm data products.
In order to derive the Observed Field GVO time series, we start from the geocentric spherical polar components of the vector magnetic field, \({\mathbf {B}}^{obs}=(B_r,B_{\theta },B_{\phi })\), and subtract predictions, \({\mathbf {B}}^{MF}\), from the IGRF main field model (Alken et al. 2020b) for spherical harmonic degrees \(n \in [1,13]\). This results in the following Observed Field residuals
$$\begin{aligned} \delta {\mathbf {B}}^{obs} = {\mathbf {B}}^{obs} - {\mathbf {B}}^{MF}. \end{aligned}$$ (1)
These residuals are used to derive the GVO estimates as described in Sect. "GVO model parameterization and estimation" below. Note that IGRF predictions at the GVO target points and times are added back during this procedure.
In order to derive 1-monthly Core Field GVO time series from data chain (a), predictions from a lithospheric field model \({\mathbf {B}}^{lith}\) are also removed:
$$\begin{aligned} \delta {\mathbf {B}}^{core,1month}= {\mathbf {B}}^{obs} - {\mathbf {B}}^{MF} -{\mathbf {B}}^{lith}. \end{aligned}$$ (2)
Here, we calculate \({\mathbf {B}}^{lith}\) using SH degrees \(n \in [14,185]\) from the LCS-1 model (Olsen et al. 2017).
To derive 4-monthly core field GVO time series from data chain (b), models of the magnetospheric and ionospheric fields are also removed during the pre-processing:
$$\begin{aligned} \delta {\mathbf {B}}^{core,4month} = {\mathbf {B}}^{obs} - {\mathbf {B}}^{MF} -{\mathbf {B}}^{lith}- {\mathbf {B}}^{mag}- {\mathbf {B}}^{iono}, \end{aligned}$$ (3)
where \({\mathbf {B}}^{mag}\) is the predicted large-scale magnetospheric field and its Earth-induced counterpart field, as given by the CHAOS model (Finlay et al. 2020), and \({\mathbf {B}}^{iono}\) is the predicted ionospheric field and its Earth-induced field as given by the CI model (Sabaka et al. 2018).
Estimates of the main field are in all cases removed from the data before carrying out the GVO estimation, and then afterwards added back at the target location and time; this step is necessary in order to pre-whiten the data. Previous studies have shown that such pre-whitening by removing a main field model is necessary in order to avoid noisy GVO estimates. Hammer (2018, p.74, Fig 4.6) presents examples of GVO series computed with and without pre-whitening applied. Without pre-whitening, robust estimation schemes, based on iteratively reweighted least squares, are unable to correctly identify and downweight disturbed data, resulting in noisier estimates. The specific main field model used for the pre-whitening is however not crucial. For example, comparing GVO estimates constructed using IGRF-13 and the CHAOS-7.2 main field model, we found rms differences compared to the benchmark ground observatories described in Sect. "Validation tests" that agreed to within 0.05 nT across all three components, for both 4-monthly and 1-monthly GVO estimates. We therefore chose to use IGRF-13 for the pre-whitening step in producing the Swarm GVO series in order to emphasize that the results obtained are not simply a result of biasing towards the CHAOS model.
Note that when considering time windows of 1 or 4 months for the GVO estimates, any information on time variations with periods shorter than these intervals is, of course, lost.
GVO model parameterization and estimation
A Cartesian potential forward model
We assume that magnetic field measurements are made in a source-free region such that the residual magnetic field is a Laplacian potential field, which fulfills the quasi-stationary approximation (Backus et al. 1996). In the following, we will use the general notation \(\delta {\mathbf {B}}\) for the residual fields of Eqs. (1, 2, 3) and refer to the position of the Geomagnetic Virtual Observatory as the target location.
The residual magnetic field and the associated locations within a specific target cylinder are transformed from the spherical coordinate system to a right-handed local topocentric Cartesian frame centered on (and constant within) each target cylinder, where at the GVO location x points towards geographic south, y points towards East and z points upwards. The bottom panel in Fig. 3 illustrates the geocentric spherical and local topocentric frames. Note that the unit vectors of the local Cartesian frame, \(({\hat{\mathbf {e}}}_{x},{\hat{\mathbf {e}}}_{y},{\hat{\mathbf {e}}}_{z})\), coincide with the spherical unit vectors, \(({\hat{\mathbf {e}}}_{\theta },{\hat{\mathbf {e}}}_{\phi },{\hat{\mathbf {e}}}_{r})\), at the target location but not elsewhere.
The magnetic scalar potential, V, associated with the residual magnetic field in a source-free region must satisfy Laplace's equation \(\nabla ^2 V=0\) and the potential is related to the residual field by \(\delta {\mathbf {B}}=-\nabla V\). The solution to Laplace's equation in Cartesian coordinates can be written as a sum of harmonic polynomials (e.g., Backus et al. 1996)
$$\begin{aligned} &V(x,y,z)= \sum _{l=1}^L C_{abc}x^a y^b z^c \nonumber \\ &= C_{100}x + C_{010}y + C_{001}z + C_{200}x^2 + C_{020}y^2 \nonumber \\ &+ C_{002}z^2 + C_{110}xy + C_{101}xz + C_{011}yz + C_{300}x^3 \nonumber \\ & + C_{030}y^3 + C_{003}z^3 + C_{210}x^2y+ C_{201}x^2z \nonumber \\ & + C_{120}y^2x + C_{021}y^2z + C_{102}z^2x + C_{012}z^2y \nonumber \\ & + C_{111}xyz + \cdots , \end{aligned}$$ (4)
where \( l = a + b + c \), and \(C_{abc}\) are the expansion coefficients, a, b, c are non-negative integers, and L is the expansion order. In tests we found that for a GVO cylinder radius of 700 km, it was sufficient to expand the magnetic scalar potential to cubic order \(L=3\); this involves the 19 parameters in the above expansion.
To be a valid potential field requires irrotational (\(\nabla \times \delta {\mathbf {B}}=0\)) and solenoidal (\(\nabla \cdot \delta {\mathbf {B}}=0\)) conditions be satisfied. First we consider the solenoidal divergence-free criteria. This requires \(\nabla ^2 V=0\) which on inserting for the potential from Eq. (4) implies
$$\begin{aligned} 0&=\frac{\partial ^2V}{\partial x^2}+\frac{\partial ^2V}{\partial y^2}+\frac{\partial ^2V}{\partial z^2} \nonumber \\&= -2C_{200}-6C_{300}x-2C_{210}y-2C_{201}z -2C_{020}-6C_{030}y \nonumber \\&\quad -2C_{120}x-2C_{021}z -2C_{002}-6C_{003}z-2C_{102}x-2C_{012}y \nonumber \\&= -(C_{200}+C_{020}+C_{002})-x(3C_{300}+C_{120}+C_{102}) \nonumber \\&\quad -y(3C_{030}+C_{210}+C_{012})-z(3C_{003}+C_{201}+C_{021}). \end{aligned}$$ (5)
Each of the terms in parenthesis must equal zero for this to be satisfied. This means that
$$\begin{aligned} C_{002}&= -(C_{200}+C_{020}), \qquad C_{300} = -\frac{1}{3}(C_{102}+C_{120}) \\ C_{030}&= -\frac{1}{3}(C_{210} +C_{012}), \qquad C_{003} = -\frac{1}{3}(C_{201}+C_{021}). \end{aligned}$$
The cubic potential series is thereby reduced by 4 parameters to a total of 15 parameters
$$\begin{aligned} &V(x,y,z)= C_{100}x + C_{010}y + C_{001}z + C_{200}x^2 \nonumber \\ & + C_{020}y^2 -(C_{200}+C_{020})z^2 + C_{110}xy \nonumber \\ &+ C_{101}xz + C_{011}yz -\frac{1}{3}(C_{102}+C_{120})x^3 \nonumber \\ & -\frac{1}{3}(C_{210}+C_{012})y^3 -\frac{1}{3}(C_{201}+C_{021})z^3 \nonumber \\ & + C_{210}x^2y+ C_{201}x^2z + C_{120}y^2x + C_{021}y^2z \nonumber \\ & + C_{102}z^2x + C_{012}z^2y + C_{111}xyz. \end{aligned}$$ (6)
The potential Eq. (6) also fulfills the curl-free criteria.
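As a quick check of Eq. (6), the following sympy sketch (an illustration only, not part of the operational processing) verifies symbolically that the reduced potential satisfies Laplace's equation; curl-freeness of \(\delta {\mathbf {B}}=-\nabla V\) holds identically for any gradient field.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
names = ["C100", "C010", "C001", "C200", "C020", "C110", "C101", "C011",
         "C210", "C201", "C120", "C021", "C102", "C012", "C111"]
C = {n: sp.Symbol(n) for n in names}

# Reduced cubic potential of Eq. (6), with the divergence-free constraints built in
V = (C["C100"]*x + C["C010"]*y + C["C001"]*z
     + C["C200"]*x**2 + C["C020"]*y**2 - (C["C200"] + C["C020"])*z**2
     + C["C110"]*x*y + C["C101"]*x*z + C["C011"]*y*z
     - sp.Rational(1, 3)*(C["C102"] + C["C120"])*x**3
     - sp.Rational(1, 3)*(C["C210"] + C["C012"])*y**3
     - sp.Rational(1, 3)*(C["C201"] + C["C021"])*z**3
     + C["C210"]*x**2*y + C["C201"]*x**2*z + C["C120"]*y**2*x + C["C021"]*y**2*z
     + C["C102"]*z**2*x + C["C012"]*z**2*y + C["C111"]*x*y*z)

laplacian = sp.diff(V, x, 2) + sp.diff(V, y, 2) + sp.diff(V, z, 2)
print(sp.simplify(laplacian))   # prints 0 for arbitrary coefficients
```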
With an expansion for the magnetic potential established, we can now write a linear forward problem relating a vector \(\mathbf{m}\), consisting of the model coefficients \(C_{abc}\), to a vector \({\mathbf {d}}^{vec}\) of predictions for the residual magnetic field components in the local Cartesian system, \((\delta B_x,\delta B_y,\delta B_z)\), at the positions of satellite observations that fall with the GVO target cylinder for the time window under consideration,
$$\begin{aligned} {\mathbf {d}}^{vec} = \underline{\underline{{\mathbf {G}}}}^{vec} {\mathbf {m}}. \end{aligned}$$ (7)
Here \(\underline{\underline{{\mathbf {G}}}}^{vec}\) is then a design matrix constructed from appropriate spatial derivatives of the potential.
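To illustrate how such a design matrix can be assembled from Eq. (6), here is a schematic numpy construction in the local Cartesian frame. This is an illustrative sketch only; the coefficient ordering and variable names are our own choices, not those of the product software.

```python
import numpy as np

# Each of the 15 free coefficients of Eq. (6) multiplies one or two monomials
# x^a y^b z^c; a second entry carries the divergence-free constraint terms.
TERMS = {
    "C100": [((1, 0, 0), 1.0)],
    "C010": [((0, 1, 0), 1.0)],
    "C001": [((0, 0, 1), 1.0)],
    "C200": [((2, 0, 0), 1.0), ((0, 0, 2), -1.0)],
    "C020": [((0, 2, 0), 1.0), ((0, 0, 2), -1.0)],
    "C110": [((1, 1, 0), 1.0)],
    "C101": [((1, 0, 1), 1.0)],
    "C011": [((0, 1, 1), 1.0)],
    "C210": [((2, 1, 0), 1.0), ((0, 3, 0), -1.0 / 3.0)],
    "C201": [((2, 0, 1), 1.0), ((0, 0, 3), -1.0 / 3.0)],
    "C120": [((1, 2, 0), 1.0), ((3, 0, 0), -1.0 / 3.0)],
    "C021": [((0, 2, 1), 1.0), ((0, 0, 3), -1.0 / 3.0)],
    "C102": [((1, 0, 2), 1.0), ((3, 0, 0), -1.0 / 3.0)],
    "C012": [((0, 1, 2), 1.0), ((0, 3, 0), -1.0 / 3.0)],
    "C111": [((1, 1, 1), 1.0)],
}

def monomial_gradient(a, b, c, x, y, z):
    """Gradient of x^a y^b z^c evaluated at arrays of points (x, y, z)."""
    dx = a * x ** max(a - 1, 0) * y ** b * z ** c
    dy = b * x ** a * y ** max(b - 1, 0) * z ** c
    dz = c * x ** a * y ** b * z ** max(c - 1, 0)
    return dx, dy, dz

def design_matrix_vec(x, y, z):
    """Rows: (Bx, By, Bz) = -grad V at each data point; columns: the 15 coefficients."""
    G = np.zeros((3 * len(x), len(TERMS)))
    for j, monomials in enumerate(TERMS.values()):
        for (a, b, c), fac in monomials:
            dx, dy, dz = monomial_gradient(a, b, c, x, y, z)
            G[0::3, j] -= fac * dx
            G[1::3, j] -= fac * dy
            G[2::3, j] -= fac * dz
    return G
```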
Olsen et al. (2015) and Sabaka et al. (2018) have demonstrated that using East–West differences (between Swarm A and C) and along-track differences (for all three Swarm satellites) improves the resolution of both the lithospheric field and the core field. They argued that correlated errors due to incompletely modeled large-scale magnetospheric fields are reduced when using such field differences. In addition to field differences, East–West and along-track sums of the measurements also need to be included in order to adequately constrain the largest wavelength parts of the field (Hammer 2018; Sabaka et al. 2013). Based on this, we have chosen to use sums and differences of the vector components of the residual magnetic field, constructed along satellite tracks and considering East–West pairs of data between Swarm A and C. This results in a data vector \({\mathbf {d}}=\{\Delta d_{x}^{vec},\Delta d_{y}^{vec},\Delta d_{z}^{vec},\Sigma d_{x}^{vec},\Sigma d_{y}^{vec},\Sigma d_{z}^{vec}\}\), where \(\Delta\) and \(\Sigma\) denote the differences and sums of the residual field described in Sect. "Data". The design matrix linking the sums and differences to the coefficients of the potential is constructed as \(\underline{\underline{{\mathbf {G}}}}=\{\Delta G_x^{vec};\Delta G_y^{vec};\Delta G_z^{vec};\Sigma G_x^{vec};\Sigma G_y^{vec};\Sigma G_z^{vec})\}\) where \(\Delta G_{k}^{vec} = [G_{k}^{vec}({\mathbf {r}}_1) - G_{k}^{vec}({\mathbf {r}}_2)]\) and \(\Sigma G_{k}^{vec} = [G_{k}^{vec}({\mathbf {r}}_1) + G_{k}^{vec}({\mathbf {r}}_2)]/2\) with \(k=(x,y,z)\).
Robust least squares estimation
Based on the above definitions of \({\mathbf {d}}\) and \(\underline{\underline{{\mathbf {G}}}}\) for sums and differences of the residual magnetic field, the coefficients of the GVO model can be estimated using the following robust least-squares inversion scheme
$$\begin{aligned} {\mathbf {m}} = (\underline{\underline{{\mathbf {G}}}}^{T}\underline{\underline{{\mathbf {W}}}}\, \underline{\underline{{\mathbf {G}}}})^{-1} \underline{\underline{{\mathbf {G}}}}^T \underline{\underline{{\mathbf {W}}}} {\mathbf {d}}. \end{aligned}$$ (8)
Here \(\underline{\underline{{\mathbf {W}}}}\) is a diagonal weight matrix, consisting of robust (Huber) weights for each entry in the data vector (e.g., Constable 1988), and an additional down-weighting factor of 1/2 for data from satellites Alpha and Charlie which takes into account that these two satellites fly side-by-side and therefore provide similar measurements.
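A minimal iteratively reweighted least-squares sketch of Eqs. (8) and (12) follows, assuming the design matrix G and the data vector d of sums and differences have been assembled as above; the fixed per-datum weights (e.g. the factor 1/2 for Swarm Alpha and Charlie) enter through w_fixed. The actual product processing may differ in its implementation details.

```python
import numpy as np

def huber_weights(residuals, c=1.5):
    """Huber weights: 1 below the breakpoint, c/eps above it (cf. Eq. 12)."""
    eps = np.abs(residuals) / np.std(residuals)
    return np.where(eps > c, c / eps, 1.0)

def robust_gvo_solve(G, d, w_fixed, n_iter=10):
    """Robust least-squares estimate of the 15 potential coefficients, Eq. (8)."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]        # ordinary LSQ starting model
    for _ in range(n_iter):
        w = w_fixed * huber_weights(d - G @ m)      # diagonal of W
        GtW = G.T * w                               # G^T W without forming W explicitly
        m = np.linalg.solve(GtW @ G, GtW @ d)
    return m

# The GVO estimate at the target point is minus the gradient of V at the origin,
# i.e. the negated linear coefficients (C100, C010, C001), cf. Eq. (9).
```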
Having determined the potential, estimates of the residual magnetic field components, in local cartesian coordinates, at GVO target location (i.e., \(x=0, y=0, z=0\)) are computed as follows:
$$\begin{aligned} \delta {\mathbf {B}}_{GVO}(x,y,z)=-\nabla V(0,0,0)=-\begin{pmatrix} C_{100} \\ C_{010} \\ C_{001}\end{pmatrix} \end{aligned}.$$ (9)
At the GVO target location, the local Cartesian field components are directly related to spherical polar field components (see Fig. 3) with \(\delta B_{GVO,r}=\delta B_{GVO,z}\) , \(\delta B_{GVO,\theta }=\delta B_{GVO,x}\) and \(\delta B_{GVO,\phi }=\delta B_{GVO,y}\). Each estimate is for a specific target GVO location \({\mathbf {r}}\) and epoch t, which is the center of the considered time window. The above procedure is repeated for each time window at each target location to obtain time series of estimates of the residual vector field at all GVO target locations.
A final step is needed to obtain the GVO estimates for the field. This is to add back the prediction of the main field model \({\mathbf {B}}_{GVO}^{MF}({\mathbf {r}},t)\), at each target point and epoch, using the same model (here IGRF-13) that was removed from each satellite measurement during the pre-processing. This step is carried out separately for each component at each GVO location and each epoch, such that we finally obtain the GVO vector field time series
$$\begin{aligned} {\mathbf {B}}_{GVO}({\mathbf {r}},t) = \delta {\mathbf {B}}_{GVO}({\mathbf {r}},t) + {\mathbf {B}}_{GVO}^{MF}({\mathbf {r}},t). \end{aligned}$$ (10)
The estimated GVO magnetic field is provided in spherical polar \((r, \theta , \phi )\) vector components.
Observed Field GVOs
We define 'Observed Field' GVOs as field estimates computed from satellite observations while retaining all observed geomagnetic field sources. Observed Field GVO time series are derived from the sums and differences of the residual field computed using Eq. (1) and then applying the GVO method described by Eqs. (8–10). One-monthly observed field GVOs are computed from data chain a) while 4-monthly observed field GVOs are computed from data chain b).
Error estimates, \(\sigma _{obs}\), for the Observed Field GVOs are assumed to be time-independent and spatially uncorrelated. They are calculated separately for each GVO times series (i.e., for each field component at each GVO location) based on a robust version of the total mean square error (e.g., Bendat and Piersol 2010), that includes both mean square residual and the mean residual squared, between the input data \(d_i\) and the GVO estimates \({\hat{d}}_i\) for a given series. With \(e_i=d_i-{\hat{d}}_i\) this is calculated as
$$\begin{aligned} \sigma _{obs} = \sqrt{\frac{\sum _i w_{i}(e_{i}-{\mu _w})^2}{\sum _i w_{i}} + \mu _w^2}, \end{aligned}$$ (11)
where the index i runs over all data contributing to a given series, \(\mu _w = \sum _i w_{i} e_{i} / \sum _i w_{i}\) is the robust mean residual and the robust weights \(w_i\) are calculated iteratively assuming a long-tailed Huber distribution (Constable 1988):
$$\begin{aligned} w_{i} = \left\{ \begin{array}{ll} 1 & \hbox { if}\ \epsilon _{i} \le c_w \\ c_w/\epsilon _{i} & \hbox { if}\ \epsilon _{i} > c_w, \end{array} \right. \end{aligned}$$ (12)
with \(\epsilon _{i}=|e_{i}|/\mathrm {std}(e)\), where \(c_w=1.5\) is the chosen breakpoint for the Huber distribution.
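As an illustration, a minimal Python sketch of this robust error estimate is given below. The iteration scheme for the Huber weights, in particular the use of a weighted standard deviation as the scale at each step, is an assumption of this sketch rather than a documented detail of the processing chain.

```python
import numpy as np

def huber_weights(e, c_w=1.5, n_iter=25):
    """Iterative Huber weights: unity for small residuals, down-weighted otherwise."""
    e = np.asarray(e, dtype=float)
    w = np.ones_like(e)
    for _ in range(n_iter):
        scale = np.sqrt(np.sum(w * e**2) / np.sum(w))  # assumed scale update
        eps = np.abs(e) / scale
        w = np.where(eps <= c_w, 1.0, c_w / eps)
    return w

def sigma_obs(e, c_w=1.5):
    """Robust total mean-square error: weighted variance about the robust mean
    plus the squared robust mean residual."""
    e = np.asarray(e, dtype=float)
    w = huber_weights(e, c_w)
    mu_w = np.sum(w * e) / np.sum(w)
    return np.sqrt(np.sum(w * (e - mu_w)**2) / np.sum(w) + mu_w**2)
```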
Core Field GVOs and Secular Variation
We define 'Core Field' GVOs as field estimates computed from satellite observations with non-core fields removed (as far as possible). The core field and associated secular variation (SV) GVO time series are produced as follows. First, 1- and 4-monthly GVO data files are produced, after which the 1-monthly GVOs are de-noised by a principal component analysis. Next, an epoch-by-epoch spherical harmonic analysis is carried out and the resulting external and toroidal magnetic fields (i.e., non-internal parts) are removed. Finally, annual differences of each series are computed in order to obtain the GVO core field SV time series.
For the 1-monthly Core Field GVOs, GVO estimates are computed from sums and differences of the field residuals using Eq. (2) based on data chain (a) (i.e., without data selection criteria). For the 4-monthly Core Field GVOs, GVO estimates are computed from sums and differences of the field residuals using Eq. (3) based on data chain (b) (i.e., with dark geomagnetically quiet-time criteria applied).
Since the 1-monthly Core Field GVOs were derived without data selection, and with no model estimates of the ionospheric or magnetospheric fields removed, external magnetic field signals remain. Such signals are considered contamination ('noise') in the current context because our goal is to produce GVO estimates of the core field only. The monthly sampling rate means that a local time sampling bias also contaminates the GVO estimates, as it takes approximately 4 months for each satellite to revisit the same local time on Earth's surface when considering both ascending and descending orbit tracks (see Shore 2013). To produce 1-monthly Core Field GVOs we therefore employ the principal component analysis (PCA) method and Python package (MagPySV) described in Cox et al. (2018) to separate out and remove the various contaminating signals from the 1-monthly GVO estimates. This procedure is based on earlier work by Wardinski and Holme (2011) and Brown et al. (2013), who used the PCA method to de-noise ground observatory data one observatory at a time, rather than de-noising time series from several locations simultaneously, as in Cox et al. (2018) and this work. A brief summary of this method is provided here; the reader is referred to Cox et al. (2018), Cox et al. (2020) and the companion paper (Brown et al., in preparation) for further details.
Domingos et al. (2019) applied PCA to an earlier version of the 4-monthly GVO data series, considering both CHAMP and Swarm measurements. They performed PCA directly on the GVO data series, rather than on annual differences of GVO series after subtracting predictions from a core field model as we do. Hence, our PCA analysis looks for coherent signals that remain once features like the large-scale internal variations identified by Domingos et al. (2019) have been removed. Whilst their focus was on modes associated with the variance of the internal field, they also identified an interesting mode associated with annual variations of the external field in their Swarm GVO series. Our analysis is not well suited to studying annual variations since we apply PCA to annual difference estimates of SV. After carrying out tests with our PCA procedure we decided there was little advantage in applying it to the 4-monthly GVOs. Our 4-monthly GVO SV series contain fewer identifiable coherent external signals to which the PCA could be applied. This is due to the dark quiet-time data selection criteria, the applied corrections for magnetospheric and ionospheric fields, and the absence (by design) of the 4-month local time sampling bias that remains in the 1-monthly series where PCA de-noising is applied.
The key premise of our approach is that the SV residuals (the difference between observed GVO SV and that predicted by an internal magnetic field model) provide information about contaminating signals that are present in the GVO data but not in the internal model. The PCA of the SV residual covariance matrix leads to proxies for these contaminating signals that are then removed from the GVO data. We approximate the GVO SV series using annual differences, and the SV residuals are calculated as the difference between the GVO SV estimates and the SV predicted by the CHAOS-7.2 model (Finlay et al. 2020) evaluated up to SH degree 13 at the same times and locations. Comparable results can be achieved using alternative field models, provided they represent time variation of the main field in a continuous manner when detrending each GVO SV series.
In their application to ground magnetic data, Cox et al. (2018) found that this method is most effective when considering groups of observatories at similar magnetic latitudes because the dominant external magnetic field source varies with magnetic latitude. Suitably grouped observatories experience similar noise at the same times and these correlated signals show clearly in the dominant principal components (PCs) of the SV residuals. On that basis, we estimated the mean magnetic latitude at GVO locations using the AACGM-v2 Python package (Burrell et al. 2020; Shepherd 2014) and assigned them to one of five magnetic latitude regions: Polar North, Polar South, Auroral North, Auroral South and Low- to Mid-magnetic latitudes (see Table 1).
Table 1 Magnetic latitude boundaries for each of the five regions de-noised separately using PCA
For N GVO locations, the SV residual covariance matrix for the vector time series is 3N by 3N, and can be decomposed into 3N eigenvalues and eigenvectors, describing the PCs of the SV residual data set. The contributions of the K dominant PCs, corresponding to the K largest eigenvalues, are removed from the SV residuals, and afterwards the internal model SV from the CHAOS-7.2 model is added back to the corrected residuals to form the de-noised SV.
In this application, we remove the most significant K PCs entirely, as opposed to removing the scalar projection of a proxy signal for the PC content as described in Cox et al. (2018) and earlier related works. Our removal of PCs here involves the removal of the associated eigenvectors and the component of signal at each GVO location in the projected directions of these eigenvectors. Note the number K differs by region, depending on how many PCs can be confidently identified as arising from one of the expected contaminating sources described above.
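A minimal sketch of this step, assuming the SV residuals for one magnetic-latitude region are stored as an array with one row per epoch and one column per GVO field component, is:

```python
import numpy as np

def remove_dominant_pcs(sv_resid, K):
    """Remove the K dominant principal components from a (n_epochs, 3N) residual array."""
    mean = sv_resid.mean(axis=0)
    X = sv_resid - mean                          # centre each residual series
    cov = np.cov(X, rowvar=False)                # 3N x 3N covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)         # eigenvalues in ascending order
    noise_dirs = eigvec[:, -K:]                  # K dominant eigenvectors
    X_clean = X - X @ noise_dirs @ noise_dirs.T  # remove signal in those directions
    return X_clean + mean
```

The de-noised SV is then obtained by adding the CHAOS-7.2 internal SV predictions back to the cleaned residuals.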
We identify PCs as noise sources based on their geographic distributions, correlations to annual differences of external magnetic field proxies (e.g., Dst, Polar Cap North/South, Em, AE (Kauristie et al. 2017)), or peaks in their discrete Fourier transform (DFT) at the local time bias frequency. Table 1 gives the number of PCs identified as noise, along with the percentage of variance in the SV residuals accounted for by each of these PCs and the total percentage variance removed in each region.
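The following Python fragment illustrates, under assumed thresholds, the kind of automated checks that can support this identification; the correlation threshold and peak criterion are illustrative choices, not the values used to produce Table 1.

```python
import numpy as np

def looks_like_noise(pc, proxy_annual_diff, r_min=0.5):
    """Flag a PC time series (1-monthly, NumPy array) as a likely noise source."""
    # correlation with annual differences of an external-field proxy (e.g., Dst)
    r = np.corrcoef(pc, proxy_annual_diff)[0, 1]
    # DFT peak near the ~4-month local-time bias period
    spec = np.abs(np.fft.rfft(pc - pc.mean()))
    freqs = np.fft.rfftfreq(pc.size, d=1.0)          # cycles per month
    bias_bin = np.argmin(np.abs(freqs - 1.0 / 4.0))  # 4-month period
    has_bias_peak = spec[bias_bin] > 2.0 * np.median(spec)
    return abs(r) > r_min or has_bias_peak
```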
In a last step, the de-noised SV are numerically integrated to produce de-noised one-monthly magnetic field time series, again treating SV as annual differences. The de-noised magnetic field must be re-leveled at the start of this calculation. We use the original GVO field values for the first 12 time samples for this purpose, meaning that the de-noised field values start 12 months after the original GVO time series begins.
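A minimal sketch of this re-integration, assuming 1-monthly sampling and a particular index alignment between the SV and field series, is:

```python
import numpy as np

def integrate_sv(B_orig, sv_dn):
    """Rebuild a monthly field series from de-noised annual-difference SV.

    B_orig: original 1-monthly GVO field series (only its first 12 values are used,
            to re-level the integration).
    sv_dn:  de-noised annual-difference SV, here assumed aligned so that
            sv_dn[t] = B[t + 12] - B[t] (the exact alignment is an assumption).
    """
    B = np.full(B_orig.shape, np.nan)
    B[:12] = B_orig[:12]
    for t in range(12, B.size):
        B[t] = B[t - 12] + sv_dn[t - 12]
    return B
```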
Spherical harmonic analysis
The magnetic field time series produced by the GVO method assumes a potential field description. This implies that no electrical currents exist within the measurement region. In reality however, satellite magnetic measurements are made in the ionospheric F-region where in situ electrical currents may be present (Olsen 1997; Sabaka et al. 2010), especially in the auroral regions. Due to space–time aliasing, these non-potential fields can leak into the GVO estimates (Olsen and Mandea 2007).
In the situation of non-vanishing, but purely radial, currents within the shell of measurements, the magnetic field can be written in terms of internal and external poloidal scalar potentials, \(V^{int}\) and \(V^{ext}\), and a toroidal scalar, \(T^{sh}\) (e.g., Backus 1986; Olsen 1997; Olsen and Mandea 2007):
$$\begin{aligned} {\mathbf {B}} = -\nabla V^{int} -\nabla V^{ext} + \nabla \times {\hat{r}} T^{sh}, \end{aligned}$$
where each of the potentials can be represented by expansions up to some maximum SH degree N:
$$\begin{aligned} V^{int}(r,\theta ,\phi ,t)&= r_a \sum _{n=1}^{N} \sum _{m=0}^{n} \left[ g_n^m(t) \mathrm {cos}\,m\phi +h_n^m(t) \mathrm {sin}\,m\phi \right] \left( \frac{r_a}{r}\right) ^{n+1} P_n^m(\theta ), \end{aligned}$$
$$\begin{aligned} V^{ext}(r,\theta ,\phi ,t)&= r_a \sum _{n=1}^{N} \sum _{m=0}^{n} \left[ q_n^m(t) \mathrm {cos}\,m\phi +s_n^m(t) \mathrm {sin}\,m\phi \right] \left( \frac{r}{r_a}\right) ^{n} P_n^m(\theta ), \end{aligned}$$
$$\begin{aligned} T^{sh}(r,\theta ,\phi ,t)&= r_a \sum _{n=1}^{N} \sum _{m=0}^{n} \left[ t_n^{m,c}(t) \mathrm {cos}\,m\phi + t_n^{m,s}(t) \mathrm {sin}\,m\phi \right] P_n^m(\theta ), \end{aligned}$$
where \(r_a=6371.2\) km is the reference value for the Earth's mean spherical radius, n and m are here the SH degree and order, respectively, and \(P_n^m\) are the associated Schmidt semi-normalized Legendre functions. In the three expansions, \(\{g_n^{m},h_n^{m}\}\) are the internal coefficients, \(\{q_n^{m},s_n^{m}\}\) are the external coefficients and \(\{t_n^{m,c},t_n^{m,s}\}\) are the expansion coefficients associated with the toroidal scalar potential.
Predictions of the geomagnetic field components at the GVO locations are linearly related to the above expansion coefficients such that a forward problem can be written
$$\begin{aligned} {\mathbf {d}}_{GVO} = \underline{\underline{{\mathbf {G}}}}_{SH}{\mathbf {m}}_{SH}, \end{aligned}$$
where the data for a given epoch, t are given by \({\mathbf {d}}_{GVO}=\{B_r({\mathbf {r}}_1,t),...,B_r({\mathbf {r}}_{N_{GVO}},t),...\) \(B_{\theta }({\mathbf {r}}_1,t),...,B_{\theta }({\mathbf {r}}_{N_{GVO}},t),B_{\phi }({\mathbf {r}}_1,t),...,B_{\phi }({\mathbf {r}}_{N_{GVO}},t)\}\), where \(N_{GVO}\) is the number of GVOs, related to the expansion coefficients \({\mathbf {m}}_{SH}=\{g_n^m,h_n^m,q_n^m,s_n^m,t_n^{m,c},t_n^{m,s}\}\) via a design matrix \(\underline{\underline{{\mathbf {G}}}}_{SH}\), which is constructed from the spatial derivatives of Eqs. (14, 15 and 16). Here, we truncated the internal, external and toroidal expansions at SH degree 13 and the model coefficients were determined epoch by epoch from the GVO estimates using a simple least-squares solution:
$$\begin{aligned} {\mathbf {m}}_{SH} = (\underline{\underline{{\mathbf {G}}}}_{SH}^T \ \underline{\underline{{\mathbf {G}}}}_{SH})^{-1}\underline{\underline{{\mathbf {G}}}}_{SH}^T{\mathbf {d}}_{GVO}. \end{aligned}$$
At epochs where an insufficient number of GVOs are available to ensure a stable solution, the external and toroidal coefficients were determined by a linear interpolation between nearby epochs. Following the SHA, external and toroidal field estimates are removed epoch by epoch from the 1- and 4-monthly time series to produce final Core Field GVO time series.
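A minimal sketch of the epoch-by-epoch least-squares step is given below; the construction of the design matrix from the spatial derivatives of the three expansions is not reproduced here, so it is represented by an assumed input array.

```python
import numpy as np

def fit_sh_coefficients(d_gvo, G):
    """Solve the least-squares problem for one epoch.

    d_gvo: stacked (B_r, B_theta, B_phi) values at all GVO locations for one epoch.
    G:     design matrix relating internal, external and toroidal coefficients to
           the field components (assumed to be built elsewhere from Eqs. 14-16).
    """
    m, *_ = np.linalg.lstsq(G, d_gvo, rcond=None)  # equivalent to (G^T G)^{-1} G^T d
    return m
```

The external and toroidal parts of the fitted model are then evaluated at the GVO locations and subtracted, leaving the Core Field estimates.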
Secular variation estimates
The secular variation of the Core Field series at a particular GVO location, \({\mathbf {r}}\), for a given epoch t, is computed using annual differences between field values at time \(t+0.5\) yr and at time \(t-0.5\) yr:
$$\begin{aligned} {{\mathbf {S}}}{{\mathbf {V}}}_{GVO}({\mathbf {r}},t) = {\mathbf {B}}_{GVO}({\mathbf {r}},t+0.5\,yr)-{\mathbf {B}}_{GVO}({\mathbf {r}},t-0.5\,yr). \end{aligned}$$
Annual differences are a well-established way to estimate the core field secular variation since they remove annual signals of ionospheric and magnetospheric origin that are otherwise difficult to isolate. Note however that such annual signals do remain in the GVO field series themselves.
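For a 1-monthly series this amounts to the following simple operation (a sketch; the handling of missing epochs is an assumption):

```python
import numpy as np

def annual_difference(B):
    """Annual-difference SV of a 1-monthly field series for one component at one GVO."""
    sv = np.full(B.shape, np.nan)      # NaN where the +/- 6-month values are unavailable
    sv[6:-6] = B[12:] - B[:-12]        # sv[t] = B[t + 6 months] - B[t - 6 months]
    return sv
```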
Error estimates
The error estimates, \(\sigma _{core}\), for each Core Field GVO time series are assumed to be time-independent and spatially uncorrelated. They are computed separately for each field component at each GVO based on the residuals between the GVO data and the corresponding predictions of the time-dependent internal part of the CHAOS field model for SH degrees \(n \in [1,20]\). Denoting the residuals by \({\mathbf {e}}={\mathbf {d}}_{GVO}-{\mathbf {d}}_{CHAOS}\) the error estimates are given by
$$\begin{aligned} \sigma _{core} = \sqrt{\frac{\sum _i(e_{i}-{\mu })^2}{M} + \mu ^2}, \end{aligned}$$
where \(i=1,...,M\) indexes the data elements, M is the number of data in a given series, and \(\mu\) is the residual mean.
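In code this is simply (a sketch, assuming the residual series is available as a NumPy array):

```python
import numpy as np

def sigma_core(e):
    """Error estimate combining the variance about the mean and the squared mean bias."""
    e = np.asarray(e, dtype=float)
    mu = e.mean()
    return np.sqrt(np.mean((e - mu) ** 2) + mu ** 2)
```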
Error estimates of the secular variation GVO time series are computed in a similar manner as described above but using residuals between the SV GVO data, \(SV_{GVO}\), and the SV predictions of the CHAOS time-dependent internal field model.
Validation tests
Comparison of GVO series with ground magnetic observatories
Validation tests were performed by comparing the GVOs and independent ground observatory (GObs) records, which are the established standard reference data series for monitoring long-term variations of the geomagnetic field. Our validation tests considered data from 28 INTERMAGNET (International Real-time Magnetic Observatory Network) ground observatories, listed in Table 2. These were chosen for their representative geographic coverage, spanning both polar and non-polar latitudes and all longitude sectors. Below we refer to polar stations as being the 13 stations with colatitudes \(0^{\circ }\) to \(36^{\circ }\) and \(144^{\circ }\) to \(180^{\circ }\), with the remaining 15 stations referred to as non-polar stations. From these stations we further selected six 'benchmark' stations (Chambon la Forêt, Kakioka, Honolulu, Guam, Hermanus and Canberra) from mid-to-low latitudes that are well known for their high quality. We use these in an attempt to establish, in well-understood conditions, the extent to which Swarm GVO series agree with ground records, with an emphasis on how well the core field secular variation is captured.
Table 2 List of the selected ground observatories used for validation tests, listed in alphabetic order
We used the Swarm AUX_OBS_2_ hourly mean ground observatory dataset, version 0122 from February 2020, maintained by the British Geological Survey (BGS), retrieved from ftp://ftp.nerc-murchison.ac.uk/geomag/Swarm/AUX_OBS. These data have been checked and corrected for known baseline jumps (Macmillan and Olsen 2013). From these hourly mean values for each selected observatory we compute:
One-monthly and four-monthly simple mean field values, for each of the three spherical polar components. These are used for comparisons with the Observed Field GVO products.
One-monthly and four-monthly versions of revised means (Olsen et al. 2014), wherein the CHAOS magnetospheric field (Finlay et al. 2016) and CM4 ionospheric field predictions (Sabaka et al. 2004) (and their induced counterparts) are first removed from the hourly means for each of the three spherical polar field components and then robust (Huber-weighted) means are computed over 1- or 4-monthly non-overlapping windows. These series are used for comparisons with the Core Field and Secular Variation GVO products.
To enable direct comparisons with these ground observatory series, we computed dedicated GVO time series directly above each selected ground observatory, using the approach described in the Sect. "GVO model parameterization and estimation". We removed crustal bias estimates from each series (computed as the median residual from the CHAOS-7.2 internal field model up to SH degree 16) and mapped the GVO estimates downwards to the position of the ground observatory at Earth's surface by removing the difference between CHAOS-7.2 model predictions at the GVO location and the ground observatory location. This results in series we refer to as \({\widetilde{B}}_j^{GObs}(t_i)\) for the ground observatories and \({\widetilde{B}}_j^{GVO,map}(t_i)\) for the GVOs, respectively, both at the ground observatory location. The subscript j indicates either the r, \(\theta\) or \(\phi\) component, or the scalar field intensity F (computed by taking the square-root of the sum of the squares of the three spherical polar components). The root-mean-square (rms) deviation between the corresponding ground observatory and GVO series was then computed as
$$\begin{aligned} \mathrm {rms}^{obs}_j = \sqrt{\frac{1}{N_d}\sum \limits _{i=1}^{N_d} \left[ {\widetilde{B}}_j^{GObs}(t_i)-{\widetilde{B}}_j^{GVO,map}(t_i)\right] ^2}, \end{aligned}$$
where the summation runs over the length of the time series \(i=1,...,N_d\) where data are available from both series. The rms differences for secular variation series are computed in the same fashion, using annual differences of the ground observatory field, \({\widetilde{B}}_j^{GObs}(t_i)\), and Core Field GVOs mapped to the ground observatory positions, \({\widetilde{B}}_j^{GVO,map}(t_i)\). We computed summary means over these rms values for groups of series from the polar regions, the non-polar region and benchmark observatories. For these tests we used the time interval 2015–2018 when there is good availability of both definitive observatory data and Swarm data.
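A minimal sketch of this statistic, assuming both series are sampled at common epochs and missing values are encoded as NaN, is:

```python
import numpy as np

def rms_deviation(b_gobs, b_gvo_map):
    """Root-mean-square difference between a ground observatory series and the mapped GVO series."""
    diff = np.asarray(b_gobs, dtype=float) - np.asarray(b_gvo_map, dtype=float)
    diff = diff[~np.isnan(diff)]       # skip epochs where either value is missing
    return np.sqrt(np.mean(diff ** 2))
```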
We note that despite being the best available information concerning secular variation, the ground observatory records are themselves inherently imperfect. INTERMAGNET standards require that long-term accuracy of main field series be better than 5 nT, with the best observatories having an estimated baseline uncertainty of up to 0.4 nT (Lesur et al. 2017). Beyond observatory measurement uncertainties, a further source of differences between ground observatory data and GVO estimates is that the latter use data above the ionospheric E-layer, while ground data are collected at the Earth's surface. They therefore observe ionospheric and magnetosphere-ionosphere coupling currents differently. Our potential field mapping used to downward continue the GVO estimates to Earth's surface does not account for this difference, and so it is a source of discrepancy between the two series, particularly for the horizontal components.
Comparisons of GVO series with field model predictions
A second set of validation tests involved comparisons between the GVO products and predictions from geomagnetic field models. These have the advantage that the GVO product, provided on a global grid, can be tested directly (without any mapping) and the global quality of the products can be assessed. However, unlike the comparisons with ground observatories, tests against field model predictions are not fully independent as Swarm data were also used in the construction of the field models.
Comparisons to models are based on the rms deviation between a given GVO time series, \(B_j^{GVO}(t_i)\), and field model predictions, \(B_j^{mod}(t_i)\), at the GVO data location and times
$$\begin{aligned} \mathrm {rms}^{mod}_j = \sqrt{\frac{1}{N_{GVO}}\sum \limits _{i=1}^{N_{GVO}} \left[ B_j^{mod}(t_i)-B_j^{GVO}(t_i)\right] ^2}, \end{aligned}$$
where the summation runs over the length of the GVO time series \(i=1,...,N_{GVO}\) and j indicates a specific spherical polar component r, \(\theta\), \(\phi\) of the vector field or the scalar field intensity F.
For comparisons with the Observed Field GVOs, \(B_j^{mod}(t_i)\) is computed using the CHAOS-7.2 time-dependent internal field for degrees \(n \in [1,13]\), the LCS-1 lithospheric field model for degrees \(n \in [14,185]\), as well as the CHAOS-7.2 magnetospheric field (and induced counterparts) and the CIY4 ionospheric field (Sabaka et al. 2018) (and induced counterparts). The magnetospheric and ionospheric fields and their counterparts are computed as mean values for each 1-monthly or 4-monthly time window, considering the times of the actual data used to derive the GVO estimates. We note the model values compared to the Observed Field GVOs are not fully representative of all the fields contributing to the GVOs, in particular they do not include realistic ionospheric fields in the polar region, or magnetosphere–ionosphere coupling currents.
For comparisons with the Core Field GVOs, \(B_j^{mod}(t_i)\) is computed using the time-dependent internal field from the CHAOS-7.2 model (Finlay et al. 2020) using SH degrees up to 20, with the LCS-1 lithospheric model (Olsen et al. 2017) for degrees \(n \in [14,20]\) removed. For comparisons with the Core Field Secular Variation GVOs, \(B_j^{mod}(t_i)\) is computed using the first time derivative of the time-dependent internal field from CHAOS-7.2, again up to SH degree 20. In the global grid there are 78 polar and 222 non-polar GVOs; benchmark values were computed using GVOs within \(\pm 30^\circ\) latitude of the equator. Comparisons were made between 2014 and 2020, throughout the time interval when GVO data were available.
A global overview of the Swarm GVO time series
To illustrate the 1-monthly GVO secular variation data series, Fig. 4 presents a global map of annual differences of the radial field component of the Observed Field GVO time series (blue dots) and of the Core Field GVO time series (red dots). Fig. 5 presents a similar summary of the global results for the 4-monthly GVO time series. Note the small difference in the time scales shown at the bottom left of these two figures; the SV of the 1-monthly GVO-CORE time series begins in 2015.5, since the GVO-CORE time series starts only in 2015 due to the PCA processing, while the SV of the 4-monthly GVO-CORE begins in 2014.7 since no PCA is performed on these series.
Time series of Secular Variation from the Swarm satellites: annual differences of the 1-monthly Observed Field GVOs (blue dots) and the Core Field GVOs (red dots). Shown here is the radial field component. GVO locations are marked with a black cross
Validation statistics: comparisons with ground observatories and field models
The results of the validation comparisons carried out are presented here in the form of two summary tables of statistics. Table 3 collects results of the validation tests against independent ground observatories and field models for the 1-monthly GVO products, while Table 4 collects similar statistics for 4-monthly GVO products. See Sect. "Validation tests" above for details of the tests.
When considering the statistics presented here, it is important to recall that the ground observatories are split into 13 "Polar" stations, 15 "non-polar" stations and six "benchmark" stations. As mentioned in Sect. "Validation tests", the stations in each category were selected in order to obtain, as far as possible, reasonable geographic coverage of both the polar and non-polar regions. The aim with the benchmark stations was to document and validate the performance of the GVO time series at known high-quality stations from mid-to-low latitudes where external contributions are less prominent. The error estimates provided along with the GVO products are also presented in these tables for reference. In these tables GVO-OBS, GVO-CORE and GVO-SV denote the Observed Field GVOs, the Core Field GVOs and the Core Field Secular Variation GVOs, respectively.
Table 3 Summary of validation tests for the 1-monthly GVO products
Example comparisons of GVO and ground observatory time series
More detailed insight comes from direct examination of the time series of the ground observatory and associated GVO series as described in Section "Validation tests". Fig. 6 presents the 1-monthly Observed Field (GVO-OBS, blue dots) and Core Field (GVO-CORE, red dots) GVO estimates, mapped down to the Earth's surface at three of the benchmark ground observatories. These figures include \(\pm \sigma\) uncertainty estimates, where we have made the assumption that these estimates remain unchanged when mapping the field to ground level. When examining the Observed Field GVOs we present time series of the field itself rather than the SV, so as not to filter out any signals that may be of interest by taking annual differences. Also plotted for comparison are the ground observatory simple monthly means computed from hourly values (omm, yellow dots) and revised monthly means (rmm, black dots).
One-monthly Observed Field (blue dots) and Core Field (red dots) GVOs mapped to Earth's surface with \(\pm \sigma\) uncertainty envelopes, together with simple monthly means (yellow stars) and revised monthly means (black stars) from three of the selected high-quality 'benchmark' ground observatories, left column: Kakioka (Japan), middle column: Honolulu (Hawaii, USA), right column: Canberra (Australia). Top row is the radial field component, middle row is the southward field component, bottom row is the eastward field component, units are nT
Radial field variations observed at the benchmark stations are followed closely by the GVO series, for example at Kakioka (KAK) in Japan (left column, Fig. 6) where both the trend in the field and its acceleration are in agreement. The ability of the Observed Field GVO series to track sub-annual field changes is illustrated by the southward \(\theta\)-component, for example the peak observed in the second half of 2017 at Kakioka. This feature, likely of magnetospheric origin, is seen simultaneously at all benchmark stations in both the GVO and ground observatory series, and is particularly clear at Kakioka (KAK) and Hermanus (not shown here). The amplitude of the peak is slightly lower in the GVO series. More scatter is seen in the eastward \(\phi\)-component of the GVO series compared to the ground observatory benchmark series (e.g., at Honolulu, HON). The source of this scatter may be ionospheric or field-aligned currents seen by the satellites that are less prominent at ground; the amplitude of this scatter was larger during 2014–2016, which may indicate a solar cycle dependence.
Figure 7 presents Observed Field (GVO-OBS) and Core Field (GVO-CORE) GVO estimates along with their \(\pm \sigma\) uncertainty, together with corresponding ordinary and revised ground observatory monthly means, from stations in the more challenging polar regions. At these locations, there are strong ionospheric E-region currents lying between the satellites and the ground stations, and the satellites at times fly through intense field-aligned currents. Nonetheless, the comparisons are encouraging and the trends seen at the ground stations are well captured by the GVO series. At the polar stations, the amplitude of the error bars has been significantly reduced for the Core Field GVO series compared to the Observed Field GVO series.
One-monthly Observed Field (blue dots) and Core Field (red dots) GVOs mapped to Earth's surface with \(\pm \sigma\) uncertainty envelopes, together with simple monthly means (yellow stars) and revised monthly means (black stars) from three of the selected polar ground observatories, left column: Resolute Bay (Canada), middle column: Macquarie Island (Australia), right column: Mawson Station (Antarctica). Top row is the radial field component, middle row is the southward field component, bottom row is the eastward field component, units are nT
The radial component at high northern latitudes in Canada, at the Resolute Bay observatory (RES) inside the polar cap, shows a particularly clear annual variation in the monthly means, peaking in the northern summer. These fluctuations, which are likely due to far-field effects of polar electrojet currents, are well tracked by the GVO estimates.
Larger differences between the GVO and ground observatory series are seen in the eastward \(\phi\)-component at these stations, the difference being largest from 2014 to 2017 (up to 25 nT seen at RES in summer months). The eastward \(\phi\)-component in the GVO and ground stations agrees more closely at slightly lower latitudes in both the northern hemisphere (e.g., in Alaska at College station CMO, not shown here) and in the Southern hemisphere at Macquarie Island (MCQ), middle column, Fig. 7. Ground stations in the auroral zone see signals in the southward \(\theta\)-component that are less prominent in the GVO estimates; these may be caused by polar electrojet currents that are closer to the ground stations. At Mawson observatory in Antarctica (MAW) the southward \(\theta\)-component has fluctuations of opposite sign to fluctuations seen at the same time in the GVO estimates. The relative position and orientations of the ionospheric currents and the ground and satellite observation points are clearly important for understanding such effects.
Figure 8 presents plots of the 1-monthly revised monthly mean SV from ground observatories (black dots) and the 1-monthly Core Field GVO SV series (red dots) at the three low/mid benchmark locations. Note the difference in scale here when looking at the secular variation, compared to the earlier plots that show the Observed Field/Core Field GVO values without taking annual differences.
One-monthly Core Field SV GVOs mapped to Earth's surface (red symbols) with \(\pm \sigma\) uncertainties, and revised monthly means from selected high-quality 'benchmark' ground observatories (black symbols), left column: Kakioka (Japan), middle column: Honolulu (Hawaii,USA), right column: Canberra (Australia). Top row is the radial field component, middle row is the southward field component, bottom row is the eastward field component, units are nT
The absolute levels (i.e., amplitude of secular variation) and trends (i.e., secular acceleration) in these benchmark ground observatory records of the core field secular variation are well matched by the Core Field SV GVO series. Peaks (secular variation impulses/geomagnetic jerks) such as that in the radial field at Honolulu (HON) in 2017 (Fig. 8, middle column, top row) are well captured and there is no indication of loss of temporal resolution in these annual difference secular variation series compared to the ground records. This indicates that time-dependent SV with time scales down to 1 year is well captured in the Swarm Core Field Secular Variation GVO product. The scatter is slightly larger in the GVO series for the southward \(\theta\)-component and there are indications of remaining noise (perhaps due to ionospheric or inter-hemispheric field-aligned currents) with period close to one year in the eastward \(\phi\)-component. Figure 9 shows similar comparisons for a selection of the polar observatories. Here the scatter is larger in both the ground and GVO data, due to the difficulty of isolating the core field signal, but again the observed trends agree well.
One-monthly Core Field SV GVOs mapped to Earth's surface (red symbols) with \(\pm \sigma\) uncertainties, and 1-monthly revised monthly means from selected polar ground observatories (black symbols), left column: Resolute Bay (Canada), middle column: Macquarie Island (Australia), right column: Mawson Station (Antarctica). Top row is the radial field component, middle row is the southward field component, bottom row is the eastward field component, units are nT
Figures 10 and 11 present plots of the 4-monthly ground observatory SV (black dots) and 4-monthly GVO SV time series (red dots) at the same three low/mid and polar latitude benchmark locations. Considering Fig. 10, the scatter observed in the 1-monthly Core Field SV time series has been reduced and the independent ground and Swarm series show excellent agreement. The peak in the SV observed in the radial component at Honolulu (HON) in 2017 is again well captured. Differences are apparent at some epochs between the GVO series and the ground observatory series in the eastward \(\phi\)-components, especially in 2015 and 2016 when solar activity was higher. This is particularly noticeable in the 4-monthly SV series in January 2015 and January 2016 and seems to be related to the fields measured by the Swarm satellites during summer 2015 (see e.g., Fig. 10). Comparisons with ground observatories and internal field models such as CHAOS show a noticeable bias in the \(B_\phi\) component during this period which contributes to longer tails in the distribution of residuals for \(B_\phi\) at all epochs and also results in enhanced rms differences for the \(\phi\)-components of SV in comparison to ground observatories, see Table 4. A similar bias is also seen when comparing the original Swarm data to internal field models during summer 2015, particularly for Swarm B. The residuals during this time are largest in the northern polar region and seem to be geophysical in origin, perhaps related to strong field-aligned currents measured by the satellites during this epoch.
Four-monthly Core Field SV GVOs mapped to Earth's surface (red symbols) with \(\pm \sigma\) uncertainties, and 4-monthly revised means from selected high-quality 'benchmark' ground observatories (black symbols), left column: Kakioka (Japan), middle column: Honolulu (Hawaii,USA), right column: Canberra (Australia). Top row is the radial field component, middle row is the southward field component, bottom row is the eastward field component, units are nT
Four-monthly Core Field SV GVOs mapped to Earth's surface (red symbols) with \(\pm \sigma\) uncertainties, and 4-monthly revised means from selected polar ground observatories (black symbols), left column: Resolute Bay (Canada), middle column: Macquarie Island (Australia), right column: Mawson Station (Antarctica). Top row is the radial field component, middle row is the southward field component, bottom row is the eastward field component, units are nT
Despite the slightly higher scatter at the polar stations in Fig. 11, the agreement is again encouraging with trends seen at ground stations being captured in the GVOs. Largest differences are seen in the horizontal components for Mawson station (MAW) in Antarctica where a sawtooth pattern about the ground series is visible in the 4-monthly GVO estimates. This enhanced scatter is reflected in the error estimates supplied together with the GVO products, but illustrates that caution is needed when interpreting SV variations on interannual and shorter timescales in the auroral zone. Further work is required to better understand these features.
In Table 3 we find that the 1-monthly Swarm Observed Field GVO series agree with independent ground observatory data and field model predictions to within 5 nT in all components at non-polar latitudes. Given that the requirement for a good standard (INTERMAGNET) ground observatory is an accuracy of 5 nT, this indicates that the GVO method yields results comparable on these time scales with good ground observatories. The 4-monthly estimates agree even better, to within 3 nT. Larger differences are found at polar latitudes where comparisons are complicated by the presence of strong ionospheric and magnetosphere–ionosphere coupling currents that have different signatures at ground and satellite altitude. The processing applied to obtain Core Field GVOs results in close agreement with ground observatory revised monthly means and with internal field models.
Taking annual differences to obtain SV estimates further improves the agreement. We find the secular variation of the field intensity in the 1-monthly Core Field GVOs agrees with six benchmark ground observatories from mid and low latitudes to a level of 1.8 nT/yr. For the 4-monthly Core Field GVOs the difference to the secular variation recorded at the ground observatories decreases to 1.2 nT/yr. These numbers may be considered an upper bound on the accuracy of the Swarm GVO secular variation estimates, since they also include the measurement errors inherent in the ground observatories (perhaps 0.5 nT/yr at excellent observatories) as well as differences due to incomplete separation of non-core sources which will affect ground and GVO data in different ways.
In this paper, we have presented a global network of Geomagnetic Virtual Observatories constructed from vector magnetic field measurements made by the Swarm satellites. The series are provided in two variants, each with 1-monthly and 4-monthly cadences, and each with associated uncertainty estimates:
(1) 'Observed' magnetic field GVO series, with 1- and 4-month cadence
(2) 'Core' magnetic field GVO series, and associated annual difference secular variation series, with 1- and 4-month cadence.
Good agreement has been demonstrated between the Swarm GVO series, ground observatory data, and existing field models. The Swarm GVO series thus provide consistent and accurate global information on geomagnetic secular variation.
We recommend the Core Field GVOs along with their supplied error estimates for use in studies of core dynamics. Adopting the traditional approach of taking annual differences to obtain the SV helps avoid small annual signals that can remain in the Core Field series. For future work, we propose carrying out PCA de-noising based on first differences of monthly GVOs, rather than annual differences, as a promising direction to further isolate core field signal.
Earlier versions of GVO series have already been used in inversions for the core surface flow (Kloss and Finlay 2019; Whaler and Beggan 2015) and in data assimilation studies where the core field signals seen in GVOs are combined with information from geodynamo models in order to estimate the state of the core (Barrois et al. 2018). GVO series are particularly well suited for global studies of rapid core dynamics where a number of physical hypotheses are currently under exploration (Aubert and Finlay 2019; Buffett and Matsui 2019; Gerick et al. 2020). The Observed Field GVOs provide additional information on long-period variations of magnetospheric and ionospheric origin. Long-period magnetospheric variations may prove useful for deep electromagnetic induction studies (e.g. Harwood and Malin 1977). At high latitudes signatures of the polar electrojets are clearly seen in the 1-monthly Observed Field GVO series, for example the distinctive annual variations in the vertical component seen in Figure 7, reflecting seasonal variations of the polar electrojet current system. Both applications will become increasingly attractive as the time series provided by the Swarm satellites lengthens.
The Swarm GVO dataset is now available as a regularly updated ESA product (see ftp://swarm-diss.eo.esa.int/Level2longterm/); updates will take place approximately every 4 months. Similarly processed GVO series from CHAMP, Cryosat-2 and Ørsted are also available from the Swarm DISC GVO project webpage https://www.space.dtu.dk/english/research/projects/project-descriptions/geomagnetic-virtual-observatories.
Availability of datasets and materials
Regularly updated versions of the Swarm GVO data series are available at ftp://ftp.swarm-diss.eo.esa.int/Level2longterm/. Further details including additional documentation, GVO datasets from other satellites and software are available from the Swarm DISC GVO project webpage, https://www.space.dtu.dk/english/research/projects/project-descriptions/geomagnetic-virtual-observatories. Swarm data are available at https://earth.esa.int/web/guest/swarm/data-access. Ground observatory data are available from ftp://ftp.nerc-murchison.ac.uk/geomag/Swarm/AUX_OBS/hour/. The RC index is available from http://www.spacecenter.dk/files/magnetic-models/RC/. The CHAOS-7 model and its updates are available at http://www.spacecenter.dk/files/magnetic-models/CHAOS-7/.
AT:
Along-track
CHAOS:
CHAMP, Ørsted, and Swarm field model
CI:
Comprehensive inversion model
DFT:
Discrete Fourier transform
ECEF:
Earth-Centered Earth-Fixed reference frame
EW:
East-West
GVO:
Geomagnetic Virtual Observatory
IGRF:
International geomagnetic reference field
IMF:
Interplanetary magnetic field
INTERMAGNET:
International real-time magnetic observatory network
Kp:
K planetary index
LCS:
Lithospheric model from CHAMP and Swarm
MF:
Main field
nT:
Nano-Tesla
OMM:
Observed monthly mean
PC:
Principal component
PCA:
Principal component analysis
QD:
Quasi-dipole
RC:
Ring current index
RMS:
Root-mean square
RMM:
Revised monthly mean
SH:
Spherical harmonic
SHA:
Spherical harmonic analysis
SV:
Secular variation
DISC:
Data, innovation and science cluster
yr:
Year
Alken P et al (2020a) Evaluation of candidate models for the 13th generation International Geomagnetic Reference Field. Earth Planets Space. https://doi.org/10.1186/s40623-020-01281-4
Alken P et al (2020b) International geomagnetic reference field: the thirteenth generation. Earth Planets Space. https://doi.org/10.1186/s40623-020-01288-x
Aubert J, Finlay C (2019) Geomagnetic jerks and rapid hydromagnetic waves focusing at Earth's core surface. Nat Geosci 12:393–398. https://doi.org/10.1038/s41561-019-0355-1
Backus G (1986) Poloidal and toroidal fields in geomagnetic field modelling. Rev Geophys 24:75–109
Backus G, Parker R, Constable C (1996) Foundations of Geomagnetism. Cambridge Univ. Press, New York
Barrois O, Hammer MD, Finlay CC, Martin Y, Gillet N (2018) Assimilation of ground and satellite magnetic measurements: inference of core surface magnetic and velocity field changes. Geophys J Int 215:695–712
Beggan CD, Whaler KA, Macmillan S (2009) Biased residuals of core flow models from satellite-derived virtual observatories. Geophys J Int 177(2):463–475
Bendat J, Piersol A (2010) Random data, analysis and measurement procedures. Wiley, New Jersey
Brown W, Mound J, Livermore P (2013) Jerks abound: an analysis of geomagnetic observatory data from 1957 to 2008. Phys Earth Planet Int 223:62–76
Buffett B, Matsui H (2019) Equatorially trapped waves in Earth's core. Geophys J Int 218(2):1210–1225
Burrell A, Meeren CVD, Laundal KM (2020) aburrell/aacgmv2: Version 2.6.0, https://doi.org/10.5281/ZENODO.3598705
Constable C (1988) Parameter estimation in non-Gaussian noise. Geophys J Int 94(1):131–142
Cox G, Brown W, Billingham L, Holme R (2018) Magpysv: a python package for processing and denoising geomagnetic observatory data. Geochem Geophys Geosyst 19(9):3347–3363
Cox G, Brown W, Beggan C, Hammer M, Finlay C (2020). Denoising Swarm geomagnetic virtual observatories using principal component analysis. https://doi.org/10.5194/egusphere-egu2020-9957
Domingos J, Pais MA, Jault D, Mandea M (2019) Temporal resolution of internal magnetic field modes from satellite data. Earth Planets Space 71(1):1–17
Finlay CC, Olsen N, Kotsiaros S, Gillet N, Tøffner-Clausen L (2016) Recent geomagnetic secular variation from Swarm and ground observatories as estimated in the CHAOS-6 geomagnetic field model. Earth Planets Space 68(1):1–18
Finlay CC, Kloss C, Olsen N, Hammer MD, Tøffner-Clausen L, Grayver A, Kuvshinov A (2020) The CHAOS-7 geomagnetic field model and observed changes in the South Atlantic Anomaly. Earth Planets Space 72(1):1–31. https://doi.org/10.1186/s40623-020-01252-9
Gerick F, Jault D, Noir J (2020) Fast quasi-geostrophic magneto-coriolis modes in the Earth's core. Geophys Res Lett. https://doi.org/10.1029/2020GL090803
Hammer MD (2018) Local estimation of the Earth's core magnetic field, Ph.D. thesis, Technical University of Denmark
Harwood JM, Malin SRC (1977) Sunspot cycle influence on the geomagnetic field. Geophys J Int 50(3):605–619. https://doi.org/10.1111/j.1365-246X.1977.tb01337.x
Kauristie K, Morschhauser A, Olsen N, Finlay C, McPherron R, Gjerloev J, Opgenoorth H (2017) On the usage of geomagnetic indices for data selection in internal field modelling. Space Sci Rev. 206:61–90. https://doi.org/10.1007/s11214-016-0301-0
Kloss C, Finlay CC (2019) Time-dependent low latitude core flow and geomagnetic field acceleration pulses. Geophys J Int 217:140–168
Leopardi P (2006) A partition of the unit sphere into regions of equal area and small diameter. Electronic Trans Numerical Anal 25(12):309–327
Lesur V, Heumez B, Telali A, Lalanne X, Soloviev A (2017) Estimating error statistics for Chambon-la-Forêt observatory definitive data. Annales Geophysicae 35(4):939–952. https://doi.org/10.5194/angeo-35-939-2017
Macmillan S, Olsen N (2013) Observatory data and the Swarm mission. Earth Planets Space 65(11):1355–1362. https://doi.org/10.5047/eps.2013.07.011
Mandea M, Olsen N (2006) A new approach to directly determine the secular variation from magnetic satellite observations. Geophys Res Lett 33(15):L15306
Olsen N (1997) Ionospheric F region currents at middle and low latitudes estimated from Magsat data. J Geophys Res Space Phys 102(A3):4563–4576
Olsen N, Mandea M (2007) Investigation of a secular variation impulse using satellite data: the 2003 geomagnetic jerk. Earth Planet Sci Lett 255(1):94–105
Olsen N, Mandea M, Sabaka TJ, Tøffner-Clausen L (2009) CHAOS-2—a geomagnetic field model derived from one decade of continuous satellite data. Geophys J Int 179(3):1477–1487
Olsen N, Mandea M, Sabaka TJ, Tøffner-Clausen L (2010) The CHAOS-3 geomagnetic field model and candidates for the 11th generation IGRF. Earth Planets Space 62(10):719–727
Olsen N, Lühr H, Finlay CC, Sabaka TJ, Michaelis I, Rauberg J, Tøffner-Clausen L (2014) The CHAOS-4 geomagnetic field model. Geophys J Int 197(2):815–827
Olsen N et al (2015) The Swarm Initial Field Model for the 2014 geomagnetic field. Geophys Res Lett 42(4):1092–1098
Olsen N, Ravat D, Finlay CC, Kother LK (2017) LCS-1: a high-resolution global model of the lithospheric magnetic field derived from CHAMP and Swarm satellite observations. Geophys J Int 211(3):1461–1477
Rogers HF, Beggan CD, Whaler KA (2019) Investigation of regional variation in core flow models using spherical slepian functions. Earth Planets Space 71(1):19
Sabaka TJ, Olsen N, Purucker ME (2004) Extending comprehensive models of the Earth's magnetic field with Ørsted and CHAMP data. Geophys J Int 159(2):521–547
Sabaka TJ, Hulot G, Olsen N (2010) Mathematical properties relevant to geomagnetic field modeling, in Handbook of Geomathematics. Springer: Berlin. pp. 503–538
Sabaka TJ, Tøffner-Clausen L, Olsen N (2013) Use of the Comprehensive Inversion Method for Swarm satellite data analysis. Earth, Planets and Space 65:1201–1222
Sabaka TJ, Tøffner-Clausen L, Olsen N, Finlay CC (2018) A comprehensive model of the Earth's magnetic field determined from 4 years of Swarm satellite observations. Earth Planet Space 70(1):130
Shepherd SG (2014) Altitude-adjusted corrected geomagnetic coordinates: definition and functional approximations. J Geophys Res Space Phys 119(9):7501–7521
Shore RM (2013) An improved description of Earth's external magnetic fields and their source regions using satellite data, Ph.D. thesis, The University of Edinburgh
Wardinski I, Holme R (2011) Signal from noise in geomagnetic field modelling: denoising data for secular variation studies. Geophys J Int 185(2):653–662
Whaler K, Beggan C (2015) Derivation and use of core surface flows for forecasting secular variation. J Geophys Res Solid Earth 120(3):1400–1414
Whaler KA (2017) Probing the core surface flow with satellite data, IAGA Joint Assembly Session A02—Earth's core dynamics and planetary dynamos (DIV I)
We wish to thank the European Space Agency (ESA) for the prompt availability of Swarm L1b data. The staff of the geomagnetic observatories and INTERMAGNET are thanked for supplying high-quality observatory data. High-resolution 1-min OMNI data were provided by the Space Physics Data Facility (SPDF), NASA Goddard Space Flight Center. We would also like to thank two anonymous reviewers for comments that helped improve the manuscript.
This study was funded by ESA through the Swarm DISC GVO project, contract no. 4000109587.
Division of Geomagnetism, DTU Space, Technical University of Denmark, Centrifugevej 356, Kongens Lyngby, Denmark
Magnus D. Hammer & Christopher C. Finlay
Geomagnetism, British Geological Survey, Riccarton, Edinburgh, UK
Grace A. Cox, William J. Brown & Ciarán D. Beggan
MDH developed the Swarm GVO data processing and modeling scheme and drafted the manuscript. GAC developed the PCA de-noising scheme with assistance from WJB, applied it to the one monthly series and wrote the related part of the manuscript. CCF and CDB designed the project and contributed to the writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Magnus D. Hammer.
Hammer, M.D., Cox, G.A., Brown, W.J. et al. Geomagnetic Virtual Observatories: monitoring geomagnetic secular variation with the Swarm satellites. Earth Planets Space 73, 54 (2021). https://doi.org/10.1186/s40623-021-01357-9
Geomagnetism
Geodynamo
Earth's core
Swarm satellites
\begin{document}
\begin{abstract} In this paper we prove the equivalence between some known notions of solutions to the eikonal equation and more general analogs of the Hamilton-Jacobi equations in complete and rectifiably connected metric spaces. The notions considered are that of curve-based viscosity solutions, slope-based viscosity solutions, and Monge solutions. By using the induced intrinsic (path) metric, we reduce the metric space to a length space and show the equivalence of these solutions to the associated Dirichlet boundary problem. Without utilizing the boundary data, we also localize our argument and directly prove the equivalence for the definitions of solutions. Regularity of solutions related to the Euclidean semi-concavity is discussed as well. \end{abstract}
\subjclass[2010]{35R15, 49L25, 35F30, 35D40} \keywords{eikonal equation, viscosity solutions, metric spaces}
\maketitle
\section{Introduction}
\subsection{Background and motivation} In this paper, we are concerned with first order Hamilton-Jacobi equations in metric spaces. The Hamilton-Jacobi equations in the Euclidean spaces are widely applied in various fields such as optimal control, geometric optics, computer vision, image processing. It is well known that the notion of viscosity solutions provides a nice framework for the well-posedness of first order fully nonlinear equations; we refer to \cite{CIL, BC} for comprehensive introduction.
In seeking to further develop various fields such as optimal transport \cite{AGS13, Vbook}, mean field games \cite{CaNotes}, topological networks \cite{SchC, IMZ, ACCT, IMo1, IMo2} etc., the Hamilton-Jacobi equations in a general metric space $({\mathbf X}, d)$ have recently attracted great attention, see for example~\cite{GaS, GHN}. Typical forms of the equations include \begin{equation}\label{stationary eq}
H(x, u, |\nabla u|)=0 \quad \text{in $\Omega$,} \end{equation} and its time-dependent version \begin{equation}\label{evolution eq}
{\partial_t}u+H(x, t, u, |\nabla u|)=0 \quad \text{in $(0, \infty)\times {\mathbf X}$} \end{equation} with necessary boundary or initial value conditions. Here $\Omega\subsetneq {\mathbf X}$ is an open set and $H: \Omega\times {\mathbb R}\times {\mathbb R}\to {\mathbb R}$ is a given continuous function called the Hamiltonian of the Hamilton-Jacobi equation.
While $\partial_t $ denotes the time differentiation, $|\nabla u|$ stands for a generalized notion of the gradient norm of $u$ in metric spaces.
In this paper we also pay particular attention to the so-called eikonal equation \begin{equation}\label{eikonal eq}
|\nabla u|(x)=f(x) \quad \text{in $\Omega$.} \end{equation} Here $f: \Omega\to [0, \infty)$ is a given continuous function satisfying \[ \inf_{\Omega} f>0. \] The eikonal equation in the Euclidean space has important applications in various fields such as geometric optics, electromagnetic theory and image processing \cite{KKbook, MSbook}.
Several new notions of viscosity solutions have recently been proposed in the general metric setting. We refer the reader to Section \ref{sec:review} for a review with precise definitions and basic properties of solutions, and to the relevant papers we mention below.
Using rectifiable curves in the space ${\mathbf X}$, Giga, Hamamuki and Nakayasu~\cite {GHN} discussed a notion of metric viscosity solutions to \eqref{eikonal eq} and established well-posedness under the Dirichlet condition \begin{equation}\label{bdry cond} u=\zeta \quad \text{on $\partial\Omega$,} \end{equation} where $\zeta$ is a given bounded continuous function on $\partial\Omega$ satisfying an appropriate regularity assumption to be discussed later. In the sequel, this type of solutions will be called curve-based solutions (or c-solutions for short). The definition of c-solutions (Definition \ref{defi c}) essentially relies on optimal control interpretations along rectifiable curves and requires very little structure of the space. The same approach is used in \cite{Na1} to study the evolution problem \eqref{evolution eq} with the Hamiltonian $H(x, v)$ convex in $v$. Moreover, unique viscosity solutions of the eikonal equation in the sense of \cite{GHN} are also constructed on fractals like the Sierpinski gasket \cite{CCM}.
On the other hand, when $({\mathbf X}, d)$ is a complete geodesic space, by interpreting $|\nabla u|$ as the local slope of a locally Lipschitz function $u$, Ambrosio and Feng \cite{AF} provide a different viscosity approach to \eqref{evolution eq} for a class of convex Hamiltonians. This was extended to the class of potentially nonconvex Hamiltonians $H$ by Gangbo and {\'S}wi{\polhk{e}}ch \cite{GaS2, GaS}, who proposed a generalized notion of viscosity solutions via appropriate test classes and proved uniqueness and existence of the solutions to more general Hamilton-Jacobi equations in length spaces. Stability and convexity of such solutions are studied respectively in \cite{NN} and in \cite{LNa}. Since this definition of solutions is based on the local slope, we shall call them slope-based solutions (or s-solutions for short) below. See the precise definition in Definition \ref{defi s}.
Since either approach above provides a generalized viscosity solution theory for first order nonlinear equations, especially the uniqueness and existence results, it is natural to expect that they actually agree with each other in the wider setting of metric spaces. This motivates us to explore the relations between both types of viscosity solutions and to understand further connections to other possible approaches.
\subsection{Equivalence of solutions to the Dirichlet problem} In the first part of this work (Section \ref{sec:cs}), we compare c- and s-solutions of the eikonal equation and show their equivalence when continuous solutions to the Dirichlet problem exist. To this end, we assume that $({\mathbf X}, d)$ is complete and rectifiably connected; namely, for any $x, y\in {\mathbf X}$, there exists a rectifiable curve in ${\mathbf X}$ joining $x$ and $y$.
Our key idea is to use the induced intrinsic metric of the space ${\mathbf X}$, which is given by \begin{equation}\label{int metric} \tilde{d}(x, y)=\inf\{\ell(\xi): \xi\text{ is a rectifiable curve connecting }x\text{ and }y\} \end{equation} for $x, y\in {\mathbf X}$, where $\ell(\xi)$ denotes the length of the curve $\xi$ with respect to the original metric $d$. It is then easily seen that $({\mathbf X}, \tilde{d})$ is a length space. Since this change of metric preserves the property of c-solutions, we can directly compare the notion of c-solutions in $({\mathbf X},d)$ with s-solutions in $({\mathbf X},\tilde{d})$. In order to preserve the completeness of the metric space, we assume throughout the paper that \begin{equation}\label{eq ast} \tilde{d}\to 0 \ \text{as $d\to 0$}. \end{equation}
Before introducing our main results, we emphasize that in the sequel, by ``local'' we mean the property in question holds in sufficiently small open balls with respect to the intrinsic metric of the space.
Our first main result is as follows.
\begin{thm}[Equivalence between c- and s-solutions]\label{thm equiv} Let $({\mathbf X}, d)$ be a complete rectifiably connected metric space and $\tilde{d}$ be the intrinsic metric given by \eqref{int metric}. Assume that $({\mathbf X},d)$ also satisfies \eqref{eq ast}. Suppose that $\Omega\subsetneq {\mathbf X}$ is an open set bounded with respect to the metric $\tilde{d}$. Assume that $f\in C(\Omega)$ satisfies $\inf_{\Omega} f>0$. If $u$ is a c-solution of \eqref{eikonal eq} with respect to $d$, then $u$ is a locally Lipschitz s-solution of \eqref{eikonal eq} with respect to $\tilde{d}$. In addition, if $f$ is uniformly continuous in $\Omega$, $\zeta$ is uniformly continuous on $\partial\Omega$, and there exists a modulus of continuity $\sigma$ such that \begin{equation}\label{new bdry regularity}
|\zeta(y)-u(x)|\leq \sigma(\tilde{d}(x, y))\quad \text{for all $x\in \Omega$ and $y\in \partial\Omega$,} \end{equation} then $u$ is the unique c-solution and s-solution of \eqref{eikonal eq} satisfying~\eqref{new bdry regularity}. \end{thm}
In the second result of the theorem above, we can replace the assumptions of uniform continuity of $u$ and $f$ by their continuity if $({\mathbf X}, d)$ is additionally assumed to be proper, that is, any bounded closed subset of ${\mathbf X}$ is compact.
Once the metric $d$ is replaced with the intrinsic metric $\tilde{d}$, the proof of the first statement in Theorem~\ref{thm equiv} is largely analogous to the classical arguments relating a viscosity solution to its optimal control interpretation (cf. \cite{BC}).
The c-solutions and $f$ considered in \cite{GHN} are in general only continuous along curves,
and therefore might not be continuous with respect to either the metric $d$ or the metric
$\tilde{d}$; see \cite[Example 4.9]{GHN}.
Here we assume the stronger requirement of continuity with respect to the metric $d$. Then \eqref{new bdry regularity} in the second result of Theorem \ref{thm equiv} guarantees that the c-solution $u$ is uniformly continuous up to $\partial\Omega$. Recall that the comparison principle (and thus the uniqueness) for s-solutions \cite{GaS} needs such an assumption. More precisely, it was shown in \cite[Theorem 5.3]{GaS} that any s-subsolution $u$ and any s-supersolution $v$ satisfy $u\leq v$ in $\overline{\Omega}$ whenever there exists a modulus $\sigma$ such that \begin{equation}\label{bdry verify0} u(x)\leq \zeta(y)+\sigma(\tilde{d}(x, y)), \quad v(x)\geq \zeta(y)-\sigma(\tilde{d}(x, y)) \end{equation} for all $x\in \Omega$ and $y\in \partial\Omega$. One thus needs to use \eqref{new bdry regularity} to validate the comparison principle and conclude the uniqueness and equivalence.
In Section~\ref{sec:bdry consistency}, for the sake of completeness, sufficient conditions for \eqref{new bdry regularity} are discussed.
\subsection{Local equivalence of solutions}
Our result in Theorem \ref{thm equiv} states that c- and s-solutions of \eqref{eikonal eq} coincide. However, we show the equivalence in the presence of the Dirichlet boundary condition so as to use the comparison principle. One may wonder about a more direct proof of the equivalence without using the boundary condition.
The second part of this work is devoted to answering this question. Our method is based on a localization of our arguments in the first part. To this end, we introduce a local version of the notion of c-solutions, which we call local c-solutions, by restricting the definition to a small metric ball centered at each point in $\Omega$; see Definition \ref{def local c}. We also include a third notion, called Monge solutions, in our discussion. We compare locally the notions of s-, local c-, and Monge solutions of \eqref{eikonal eq} in a complete length space.
The Monge solution is known to be an alternative notion of solutions to Hamilton-Jacobi equations in Euclidean spaces \cite{NeSu, BrDa}. In a complete length space $({\mathbf X}, d)$, our generalized definition of Monge solutions to \eqref{eikonal eq} is quite simple; it only requires a locally Lipschitz function $u$ to satisfy
\[
|\nabla^- u|(x)=f(x)\quad \text{for \emph{every} $x\in \Omega$,}
\]
where $|\nabla^- u|(x)$, given by
\[
|\nabla^- u|(x)=\limsup_{y\to x} {\max\{u(x)-u(y), 0\}\over d(x, y)},
\] denotes the sub-slope of $u$ at $x$; see also the definition in \eqref{semi slope}. One advantage of this notion is that it does not involve any viscosity tests and the comparison principle can be easily established. We show that locally uniformly continuous c-, s- and Monge solutions of the eikonal equation are actually equivalent under a weaker positivity assumption on $f$. A more precise statement is given below.
\begin{thm}[Local equivalence between solutions of eikonal equation]\label{thm equiv Monge} Let $({\mathbf X}, d)$ be a complete length space and $\Omega\subset {\mathbf X}$ be an open set. Assume that $f$ is locally uniformly continuous and $f>0$ in $\Omega$. Let $u\in C(\Omega)$. Then the following statements are equivalent: \begin{enumerate} \item[(a)] $u$ is a local c-solution of \eqref{eikonal eq}; \item[(b)] $u$ is a locally uniformly continuous s-solution of \eqref{eikonal eq}; \item[(c)] $u$ is a Monge solution of \eqref{eikonal eq}. \end{enumerate} In addition, if any of (a)--(c) holds, then $u$ is locally Lipschitz with \begin{equation}\label{regular0}
|\nabla u|(x)=|\nabla^- u|(x)=f(x)\quad \text{for all $x\in \Omega$.} \end{equation} \end{thm}
We actually prove more: \begin{itemize} \item For locally uniformly continuous functions, all three notions of subsolutions are equivalent, and such subsolutions are locally Lipschitz (Proposition \ref{prop eikonal sub1}, Proposition \ref{prop eikonal sub2}); \item The notions of locally Lipschitz s-supersolutions and Monge supersolutions are equivalent (Proposition \ref{prop eikonal super}(i)); \item Any locally Lipschitz local c-supersolution is a Monge supersolution (Proposition \ref{prop eikonal super}(ii)); \item Any Monge solution is a local c-solution (Proposition \ref{prop eikonal solution}). \end{itemize} We do not know, however, whether an s- or Monge supersolution needs to be a local c-supersolution.
In proving Theorem \ref{thm equiv Monge}, it turns out that the local Lipschitz continuity of solutions is an important ingredient. Note that the local Lipschitz continuity holds for Monge solutions by definition, and it can be easily deduced for c-subsolutions as well because the space $({\mathbf X},\tilde{d})$ is a length space, as shown in Lemma~\ref{lem c-lip}. In contrast, the Lipschitz regularity of s-solutions of \eqref{eikonal eq} is less straightforward. Our proof requires the assumption of local uniform continuity of s-solutions and $f$ due to the possible lack of compactness for general length spaces. As stated in Corollary~\ref{cor equiv Monge}, we can remove such an assumption if $({\mathbf X}, d)$ is proper (and therefore the length space $({\mathbf X}, d)$ is a geodesic space because of the generalized Hopf-Rinow theorem in metric spaces \cite{Gr, BHK}, which can be viewed as an immediate consequence of the Arzela-Ascoli theorem). The notions of continuity and uniform continuity in a compact set are clearly equivalent.
In this work we will assume that $({\mathbf X},d)$ satisfies the hypotheses of Theorem~\ref{thm equiv}. It is worth stressing that in this section $({\mathbf X}, d)$ is assumed to be a length space only for simplicity from the point of view of c-solutions; s-solutions and Monge solutions are defined below only when ${\mathbf X}$ is a length space, and for metric spaces that are not length spaces the slopes in their definitions should be taken with respect to $\tilde{d}$ in place of $d$.
It is not difficult to extend our discussion on the equivalence to the general equation~\eqref{stationary eq} under a monotonicity assumption on $p\mapsto H(x, r, p)$. When $p\mapsto H(x, r, p)$ is increasing, we can simply apply an implicit-function-type argument to locally reduce the problem to the eikonal equation.
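Roughly speaking, if for each $(x, r)\in \Omega\times {\mathbb R}$ the equation $H(x, r, p)=0$ admits a unique root $p=\tilde{f}(x, r)\geq 0$ (as is the case, for instance, when $p\mapsto H(x, r, p)$ is continuous, strictly increasing and changes sign on $[0, \infty)$), then \eqref{stationary eq} can be rewritten, at least formally, as the eikonal-type equation \[ |\nabla u|(x)=\tilde{f}(x, u(x)) \quad \text{in $\Omega$,} \] to which the arguments for \eqref{eikonal eq} can be adapted locally; here $\tilde{f}$ is merely a notation for the implicitly defined root.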
Besides, as in \eqref{regular0} in Theorem \ref{thm equiv Monge}, in this general case for any solution $u$ we can obtain the continuity of $|\nabla u|$ as well as the following property: \begin{equation}\label{regular}
|\nabla u|(x)=|\nabla^- u|(x)\quad \text{for all $x\in \Omega$;} \end{equation} in other words, the solution itself actually lies in the test class for s-subsolutions proposed in \cite{GaS2, GaS}. This type of property also appears in the study of time-dependent Hamilton-Jacobi equations on metric spaces (cf. \cite{LoVi}). Our analysis reveals that \eqref{regular} resembles semi-concavity in the Euclidean space. In fact, in the Euclidean space, \eqref{regular} implies the existence of a $C^1$ test function touching the solution from above at every point, which is a typical property of semi-concave functions. We expect more applications of the regularity property \eqref{regular}, since in general metric spaces even defining convex functions is not trivial at all. It would be interesting to see further properties of such regular solutions in relation to the structure of the PDE and the geometry of the underlying space.
The rest of the paper is organized as follows. In Section \ref{sec:review}, we review the definitions and properties of c- and s-solutions of Hamilton-Jacobi equations in metric spaces. Section \ref{sec:cs} is devoted to the proof of Theorem~\ref{thm equiv}. In Section~\ref{sec:monge} we propose the notion of Monge solutions and prove Theorem~\ref{thm equiv Monge}.
\section{Metric viscosity solutions of Hamilton-Jacobi equations}\label{sec:review}
In this section we review the two notions of viscosity solutions to Hamilton-Jacobi equations in metric spaces, mentioned in the first section. We focus on the stationary equation \eqref{stationary eq} and particularly the eikonal equation \eqref{eikonal eq}.
In the setting of general metric spaces, one needs an appropriate analog of $|\nabla u|$. With this in mind, let us recall the definitions of viscosity solutions in \cite{GHN} and \cite{GaS}.
\subsection{Curve-based solutions}
Given an interval $I=[a,b]\subset{\mathbb R}$ and a continuous function (curve) $\xi:I\to\Omega$, we define the length of $\xi$ by \[ \ell(\xi):=\sup_{a=t_0<t_1<\cdots<t_k=b} \sum_{j=0}^{k-1}d(\xi(t_j),\xi(t_{j+1})). \] Note that if $\ell(\xi)<\infty$, then the real-valued function $s_\xi:[a,b]\to[0,\ell(\xi)]$ defined by \[ s_\xi(t):=\ell(\xi\vert_{[a,t]}) \] is a monotone increasing function, and hence it is differentiable at almost every $t\in[a,b]$. We denote
$s_\xi'$ by $|\xi^\prime|$, which can be equivalently defined by \[
|\xi^\prime|(t)=\lim_{\tau\to 0} {d(\xi(t+\tau), \xi(t))\over |\tau|} \] for almost every $t\in (a, b)$. Note that if $\xi$ is an absolutely continuous curve (and so $s_\xi$ is an absolutely continuous real-valued function), then \[
s_\xi(t)=\int_a^t|\xi^\prime|(\tau)\, d\tau \] for all $t\in [a,b]$.
For any interval $I\subset {\mathbb R}$, we say an absolutely continuous curve $\xi: I\to {\mathbf X}$ is {admissible} if \[
|\xi'|\leq 1\quad \text{a.e. in $I$.} \] Note that these are $1$-Lipschitz curves. Let ${\mathcal A}(I, {\mathbf X})$ denote the set of all admissible curves in ${\mathbf X}$ defined on $I$; without loss of generality, we only consider intervals $I$ for which $0\in I$. For any $x\in {\mathbf X}$, we write $\xi\in {\mathcal A}_x(I, {\mathbf X})$ if $\xi(0)=x$. We also need to define the exit time and entrance time of a curve $\xi$: \begin{equation}\label{exit/entrance} \begin{aligned} T^+_\Omega[\xi]&:=\inf\{t\in I: t\geq 0, \xi(t)\notin \Omega\};\\ T^-_\Omega[\xi]&:=\sup\{t\in I: t\leq 0, \xi(t)\notin \Omega\}. \end{aligned} \end{equation} Since any absolutely continuous curve is rectifiable and one can always reparametrize a rectifiable curve by its arc length (cf. \cite[Theorem 3.2]{Haj1}), hereafter we do not distinguish the difference between an absolutely continuous curve and a rectifiable (or Lipschitz) curve. \begin{defi}[Definition 2.1 in \cite{GHN}]\label{defi c} An upper semicontinuous (USC) function $u$ in $\Omega$ is called a {curve-based viscosity subsolution} or {c-subsolution} of \eqref{eikonal eq} if for any $x\in\Omega$ and $\xi\in {\mathcal A}_x({\mathbb R}, \Omega)$, we have \begin{equation}\label{eq c-sub}
\left|\phi'(0)\right|\leq f(x) \end{equation} whenever $\phi\in C^1({\mathbb R})$ such that $t\mapsto u(\xi(t))-\phi(t)$, $t\in\xi^{-1}(\Omega)$,
attains a local maximum at $t=0$.
A lower semicontinuous (LSC) function $u$ in $\Omega$ is called a {curve-based viscosity supersolution} or {c-supersolution} of \eqref{eikonal eq} if for any $\varepsilon>0$ and $x\in \Omega$, there exists $\xi\in {\mathcal A}_x({\mathbb R}, {\mathbf X})$ and $w\in LSC(T^-, T^+)$ with $-\infty<T^{\pm}=T^{\pm}_\Omega[\xi]<\infty$ such that \begin{equation}\label{wapprox} w(0)=u(x), \quad w\geq u\circ\xi-\varepsilon, \end{equation} and \begin{equation}\label{sgapprox}
\left|\phi'(t_0)\right|\geq f(\xi(t_0))-\varepsilon \end{equation} whenever $\phi\in C^1({\mathbb R})$ such that $w(t)-\phi(t)$ attains a minimum at $t=t_0\in (T^-, T^+)$. A function $u\in C(\Omega)$ is said to be a {curve-based viscosity solution} or {c-solution} if it is both a c-subsolution and a c-supersolution of \eqref{eikonal eq}. \end{defi}
In the definition of supersolutions, in general we cannot merely replace $w$ with $u\circ\xi$. Indeed, suppose that ${\mathbf X}$ is a length space but not a geodesic space and that $f\equiv 1$. Since we expect the distance function $u=d(\cdot, x_0)$ to remain a solution for any $x_0\in {\mathbf X}$, requiring the supersolution property for $u\circ\xi$ without the approximation $w$ would force $\xi$ to be a geodesic, contradicting the assumption that ${\mathbf X}$ is not geodesic.
The regularity of $u$ above can be relaxed, since we only need its semicontinuity along each curve $\xi$. In fact, one can require a c-subsolution (resp., c-supersolution) to be merely arcwise upper (resp., lower) semicontinuous; consult \cite{GHN} for details. However, in order to obtain our main results in this paper, we need to impose the conventional continuity of $u$ rather than the arcwise continuity.
The notions of c-subsolutions and c-supersolutions with respect to the metric $d$ and the intrinsic metric $\tilde{d}$ given in \eqref{int metric} are equivalent. Note that $\xi$ is a curve with respect to $d$ if and only if it is a curve with respect to $\tilde{d}$ because of our assumption~\eqref{eq ast}. It can also be seen that the speed $|\xi'|$ of the curve remains the same in both metrics; see Lemma~\ref{lem length}. Hence, the class of admissible curves in Definition \ref{defi c} does not depend on the choice between $d$ and $\tilde{d}$.
Although there seems to be no requirement on the metric space in the definition above, it is implicitly assumed in the definition of the c-supersolution that each point $x\in \Omega$ can be connected to the boundary $\partial\Omega$ by a curve of finite length.
Uniqueness of c-solutions of \eqref{eikonal eq} with boundary data \eqref{bdry cond} is shown by proving a comparison principle \cite[Theorem 3.1]{GHN}. The existence of solutions in $C(\overline{\Omega})$, on the other hand, is based on an optimal control interpretation; in particular, it is shown in~\cite[Theorem~4.2 and Theorem~4.5]{GHN} that \begin{equation}\label{eq optimal control} u(x)=\inf_{\xi\in C_x} \left\{ \int_{0}^{T_\Omega^+[\xi]} f(\xi(s))\, ds+\zeta\left(\xi(T_\Omega^+[\xi])\right)\right\} \end{equation} is a c-solution of \eqref{eikonal eq} and \eqref{bdry cond} provided that for each $x\in\overline{\Omega}$ we have \[ C_x:=\left\{\xi\in \mathcal{A}_x([0, \infty), {\mathbf X}) \, :\, T_\Omega^+[\xi]\in (0, \infty)\right\}\neq \emptyset \] and $\zeta$ satisfies a boundary regularity, see \eqref{bdry regularity2} below.
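As a simple illustration of \eqref{eq optimal control}, when $f\equiv 1$ and $\zeta\equiv 0$ the formula reduces to \[ u(x)=\inf_{\xi\in C_x} T_\Omega^+[\xi], \] which, after reparametrizing curves by arc length, can be interpreted as the infimum of the lengths of curves joining $x$ to $\partial\Omega$ while staying in $\Omega$ before the exit time; in other words, $u$ is a distance from $x$ to the boundary that is intrinsic to $\Omega$.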
Note that in the definition of c-supersolutions, for each $(x,\varepsilon)$ the conditions \eqref{wapprox} and \eqref{sgapprox} for $(\xi, w)$ are satisfied for all $t\in (T^-, T^+)$. We localize this definition as follows. Recall the notions of $T^{\pm}_{\mathcal O}[\xi]$ for open sets ${\mathcal O}\subset{\mathbf X}$ from~\eqref{exit/entrance}.
\begin{defi}[Local curve-based solutions]\label{def local c} A function $u\in LSC(\Omega)$ is said to be a local {c-supersolution} if for each $x\in\Omega$ there exists $r>0$ with $B_r^d(x)\subset\Omega$, and for each $\varepsilon>0$ we can find a curve $\xi_\varepsilon\in {\mathcal A}_x({\mathbb R}, {\mathbf X})$ with $\xi_\varepsilon(0)=x$ and a function $w\in LSC(t_r^-, t_r^+)$ with $t^-_r:=T^-_{B_r^d(x)}[\xi_\varepsilon]$ and $t^+_r:=T^+_{B_r^d(x)}[\xi_\varepsilon]$ such that \[ w(0)=u(x), \quad w(t)\geq u\circ\xi_\varepsilon(t)-\varepsilon \ \text{ for all } t\in (t_r^-,t_r^+), \] and \[
\left|\phi'(t_0)\right|\geq f(\xi_\varepsilon(t_0))-\varepsilon \] whenever $\phi\in C^1({\mathbb R})$ such that $w(t)-\phi(t)$ attains a minimum at $t=t_0\in (t_r^-, t_r^+)$. A function $u\in C(\Omega)$ is said to be a local {c-solution} if it is both a c-subsolution and a local c-supersolution of \eqref{eikonal eq}. \end{defi}
In the definition of local c-supersolutions given above, the ball $B_r^d(x)$ is taken with respect to $d$. If $({\mathbf X}, d)$ is a complete rectifiably connected metric space such that the intrinsic metric $\tilde{d}$ defined in \eqref{int metric} satisfies the consistency condition \eqref{eq ast}, then it is equivalent to use metric balls $B_r(x)$ with respect to $\tilde{d}$. This definition is studied mainly in Section~\ref{sec:monge}, where we assume $({\mathbf X},d)$ to be a length space. For length spaces, balls with respect to $d$ and balls with respect to $\tilde{d}$ are the same, that is, $B_r^d(x)=B_r(x)$.
A c-supersolution (resp., c-solution) is clearly a local c-supersolution (resp., local c-solution), but it is not clear to us whether the reverse is also true in general. The notion of c-subsolutions is already a localized one, and we therefore do not have to define ``local c-subsolutions'' separately.
\subsection{Slope-based solutions} We next discuss the definition proposed in \cite{GaS2}, which relies more on the property of geodesic or length metric. We denote by $\text{Lip}_{loc}(\Omega)$ the set of locally Lipschitz continuous functions on an open subset $\Omega$ of a complete length space $({\mathbf X}, d)$. For $u\in \text{Lip}_{loc}(\Omega)$ and for $x\in\Omega$, we define the local slope of $u$ to be \begin{equation}\label{slope}
|\nabla u|(x):=\limsup_{y\to x}\frac{|u(y)-u(x)|}{d(x,y)}. \end{equation} Let \begin{equation}\label{test class} \begin{aligned}
\overline{\mathcal{C}}(\Omega) &:= \{ u \in \text{Lip}_{loc}(\Omega) \, :\, \text{$|\nabla^+ u|
= |\nabla u|$ and $|\nabla u|$ is continuous in $\Omega$} \}, \\
\underline{\mathcal{C}}(\Omega) &:= \{ u \in \text{Lip}_{loc}(\Omega) \, :\, \text{$|\nabla^- u|
= |\nabla u|$ and $|\nabla u|$ is continuous in $\Omega$} \}, \\ \end{aligned} \end{equation} where, for each $x\in {\mathbf X}$, \begin{equation}\label{semi slope}
|\nabla^\pm u|(x) := \limsup_{y \to x} \frac{[u(y)-u(x)]_\pm}{d(x, y)} \end{equation} with $[a]_+:=\max\{a, 0\}$ and $[a]_-:=-\min\{a, 0\}$ for any $a\in {\mathbb R}$. In this work we call
$|\nabla^+ u|$ and $|\nabla^- u|$ the (local) super- and sub-slopes of $u$ respectively; they are also named super- and sub-gradient norms in the literature (cf. \cite{LoVi}).
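It follows directly from the definitions that \[ |\nabla u|(x)=\max\left\{|\nabla^+ u|(x),\ |\nabla^- u|(x)\right\} \quad \text{for every $x$.} \] For a one-dimensional example, take ${\mathbf X}={\mathbb R}$ and $u(x)=|x|$: at $x=0$ one computes $|\nabla^+ u|(0)=1$ and $|\nabla^- u|(0)=0$, so that $|\nabla u|(0)=1$ while the sub-slope vanishes there.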
Concerning the test classes $\overline{\mathcal{C}}(\Omega)$ and $\underline{\mathcal{C}}(\Omega)$, it is known from \cite[Lemma 7.2]{GaS2} and \cite[Lemma 2.3]{GaS} that in a length space ${\mathbf X}$, $Ad(\cdot , x_0)^2$ belongs to $\underline{\mathcal{C}}(\Omega)$ for any $x_0\in {\mathbf X}$ and $A>0$; moreover, $Ad(\cdot , x_0)^2$ belongs to $\overline{\mathcal{C}}(\Omega)$ for any $x_0\in {\mathbf X}$ and $A<0$.
Now we recall from \cite{GaS} the definition of s-solutions of a general class of Hamilton-Jacobi equations.
\begin{defi}[Definition 2.6 in \cite{GaS}]\label{defi s} A USC (resp., LSC) function $u$ in an open set $\Omega\subset {\mathbf X}$ is called a {slope-based viscosity subsolution} (resp., {slope-based viscosity supersolution}) or
{s-subsolution} (resp., {s-supersolution}) of \eqref{stationary eq} if \begin{equation}\label{s-sub eq}
H_{|\nabla \psi_2|^*(x)}(x, u(x), |\nabla \psi_1|(x)) \le 0 \end{equation} \begin{equation}\label{s-sup eq}
\left(\text{resp., }\quad H^{|\nabla \psi_2|^*(x)}(x, u(x), |\nabla \psi_1|(x)) \ge 0\right) \end{equation} holds for any $\psi_1 \in \underline{\mathcal{C}}(\Omega)$ (resp., $\psi_1 \in \overline{\mathcal{C}}(\Omega)$) and $\psi_2 \in \text{Lip}_{loc}(\Omega)$ such that $u-\psi_1-\psi_2$ attains a local maximum (resp., minimum) at a point $x \in \Omega$, where, for any $(x, \rho, p)\in \Omega\times {\mathbb R}\times {\mathbb R}$ and $a\geq 0$, \[
H_a(x, \rho, p) = \inf_{|q-p| \le a}H(x, \rho, q), \quad H^a(x, \rho, p) = \sup_{|q-p| \le a}H(x, \rho, q), \]
and $|\nabla \psi_2|^*(x) = \limsup_{y \to x}|\nabla \psi_2|(y)$. We say that $u\in C(\Omega)$ is an s-solution of \eqref{stationary eq} if it is both an s-subsolution and an s-supersolution of \eqref{stationary eq}. \end{defi}
In the case of \eqref{eikonal eq}, we can define subsolutions (resp., supersolutions) by replacing \eqref{s-sub eq} (resp., \eqref{s-sup eq}) with \begin{equation}\label{s-sub eikonal}
|\nabla\psi_1|(x)\leq f(x)+|\nabla \psi_2|^\ast(x) \end{equation} \begin{equation}\label{s-super eikonal}
\left(\text{resp.,} \quad |\nabla\psi_1|(x)\geq f(x)-|\nabla \psi_2|^\ast(x) \right). \end{equation}
When ${\mathbf X}={\mathbb R}^N$, it is not difficult to see that $C^1(\Omega)\subset \overline{\mathcal{C}}(\Omega)\cap \underline{\mathcal{C}}(\Omega)$ for any open set $\Omega\subset {\mathbf X}$. Hence, s-subsolutions, s-supersolutions and s-solutions of \eqref{stationary eq} in this case reduce to conventional viscosity subsolutions, supersolutions and solutions respectively.
Concerning the test functions in a general geodesic or length space $({\mathbf X}, d)$, it is known that, for any $k\geq 0$, $x_0\in {\mathbf X}$, the function $x\mapsto k\varphi(d(x, x_0))$ (resp., $x\mapsto -k\varphi(d(x, x_0))$) belongs to the class $\underline{\mathcal{C}}(\Omega)$ (resp., $\overline{\mathcal{C}}(\Omega)$) provided that $\varphi\in C^1([0, \infty))$ satisfies $\varphi'(0)=0$ and $\varphi'\geq 0$; see details in \cite[Lemma 7.2]{GaS2} and \cite[Lemma 2.3]{GaS}.
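A short heuristic behind these facts: for $\psi=k\varphi(d(\cdot, x_0))$ with $k\geq 0$, using $|d(y, x_0)-d(x, x_0)|\leq d(x, y)$ together with points $y$ chosen on nearly length-minimizing curves from $x$ to $x_0$ (which exist since ${\mathbf X}$ is a length space), one can verify that \[ |\nabla \psi|(x)=|\nabla^- \psi|(x)=k\,\varphi'(d(x, x_0)) \quad \text{for all $x\in {\mathbf X}$,} \] which is continuous in $x$; hence $\psi\in \underline{\mathcal{C}}(\Omega)$, and the case of $-k\varphi(d(\cdot, x_0))$ and $\overline{\mathcal{C}}(\Omega)$ is symmetric.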
Comparison principles for s-solutions are given in \cite[Theorem 5.3]{GaS} for the eikonal equation and in \cite[Theorem 5.1, Theorem 5.3]{GaS} for more general Hamilton-Jacobi equations.
\section{Equivalence between curve-based and slope-based solutions}\label{sec:cs}
We give a proof of Theorem \ref{thm equiv} in this section. Let us begin with some elementary results on the space ${\mathbf X}$ with the intrinsic metric $\tilde{d}$. Then we show that c-subsolutions and c-supersolutions are, respectively, s-subsolutions and s-supersolutions. \subsection{Metric change} Let $({\mathbf X}, d)$ be a complete rectifiably connected metric space. We compare the notions of viscosity solutions to \eqref{eikonal eq} provided respectively in Definition \ref{defi c} and in Definition \ref{defi s}. The key to our argument is to use \eqref{int metric} to connect the geometric settings of the two notions.
It is clear that \begin{equation}\label{metric comparison} d(x, y)\leq \tilde{d}(x, y)\quad \text{for all }x, y\in {\mathbf X}. \end{equation} Therefore bounded sets in $({\mathbf X}, \tilde{d})$ are bounded in $({\mathbf X}, d)$. Under the assumption of rectifiable connectedness of $({\mathbf X}, d)$, we also see that $\tilde{d}(x, y)<\infty$ for any $x, y\in {\mathbf X}$ and $\tilde{d}$ is a metric on ${\mathbf X}$. Moreover, by \eqref{metric comparison}, it is clear that open sets in $({\mathbf X}, d)$ are also open in $({\mathbf X}, \tilde{d})$.
The metric $\tilde{d}$ is also used in \cite{GHN} to study the continuity and stability of c-solutions. In the rest of this work, $B_r(x)$ denotes the open ball centered at $x\in {\mathbf X}$ with radius $r>0$ with respect to the intrinsic metric $\tilde{d}$.
The induced intrinsic structure leads us to the following elementary fact that $({\mathbf X}, \tilde{d})$ is a length space. One can find this classical result in \cite[Proposition 2.3.12]{BBIbook} and \cite[Corollary 2.1.12]{Pa} for instance. See also~\cite{DJS} for the arc-length parametrization of rectifiable curves with respect to $d$ and with respect to $\tilde{d}$.
\begin{lem}[Length space under intrinsic metric]\label{lem length} Assume that $({\mathbf X}, d)$ is a complete rectifiably connected metric space. Let $\tilde{d}$ be the intrinsic metric of a metric space $({\mathbf X}, d)$ as defined in \eqref{int metric}. Then $({\mathbf X}, \tilde{d})$ is a length space. Moreover, for any rectifiable curve $\xi$, $\xi(s): I\to {\mathbf X}$ is a
parametrization with respect to $d$ if and only if it is a parametrization with respect to $\tilde{d}$, and the speed $|\xi'|$ computed in the two metrics coincides. \end{lem}
We remark that a similar intrinsic metric is constructed in \cite{DeP1, DeP2, DJS} involving a given measure on the space. Ours is standard and simpler, since measures do not play a role in the current work.
The completeness of the metric space is needed to properly define s-solutions under the metric $\tilde{d}$. Note that $({\mathbf X}, \tilde{d})$ is complete if $({\mathbf X}, d)$ is complete, since $d\leq \tilde{d}$ holds and $({\mathbf X},d)$ satisfies the condition~\eqref{eq ast}.
\subsection{Equivalence between solutions of the Dirichlet problem}
Let us start proving Theorem \ref{thm equiv}. We first prove that any c-subsolution is an s-subsolution. We need the following characterization of c-subsolutions given in \cite{GHN}.
\begin{prop}[Proposition 2.6 in \cite{GHN}]\label{prop c-sub} Assume that $f\in C(\Omega)$ with $f\ge 0$ in $\Omega$. Let $u$ be upper semicontinuous in $\Omega$. Then the following statements are equivalent: \begin{itemize} \item[(1)] $u$ is a c-subsolution of \eqref{eikonal eq} in $(\Omega, d)$.\\ \item[(2)] The inequality \begin{equation}\label{prop c-sub eq} u(\xi(t_1))\leq \int_{t_1}^{t_2} f(\xi(r))\, dr+u(\xi(t_2)) \end{equation} holds for all $\xi\in \mathcal{A}(\mathbb{R}, \Omega)$ and $t_1, t_2\in \mathbb{R}$ with $t_1<t_2$. \end{itemize} \end{prop}
Such a characterization enables us to deduce the local Lipschitz continuity of c-subsolutions with respect to the metric $\tilde{d}$.
\begin{lem}[Local Lipschitz continuity of c-subsolutions]\label{lem c-lip} Let $({\mathbf X}, d)$ be a complete rectifiably connected metric space and $\tilde{d}$ be the intrinsic metric given by \eqref{int metric}. Assume that \eqref{eq ast} holds. Let $\Omega\subsetneq {\mathbf X}$ be an open set. Assume that $f\in C(\Omega)$ and $f\geq 0$ in $\Omega$. If $u$ is upper semicontinuous in $\Omega$ and is a c-subsolution of \eqref{eikonal eq} in $(\Omega, d)$, then $u\in {\rm Lip}_{loc}(\Omega)$. In particular, for any $x_0\in \Omega$ and $r>0$ such that $B_{2r}(x_0)\subset \Omega$ and $f$ is bounded in $B_{2r}(x_0)$, $u$ satisfies \begin{equation}\label{local lip precise}
|u(x)-u(y)|\leq \tilde{d}(x, y)\sup_{B_{2r}(x_0)} f \quad \text{for all $x, y\in B_r(x_0)$.} \end{equation} \end{lem}
\begin{proof} Fix arbitrarily $x_0\in \Omega$ and $r>0$ small such that $B_{2r}(x_0)\subset \Omega$ (with respect to $\tilde{d}$) and $f$ is bounded on $B_{2r}(x_0)$. For any $x, y\in B_r(x_0)$ and any $0<\varepsilon< 2r-\tilde{d}(x, y)$, there exists an arc-length parametrized rectifiable curve $\xi_0$ joining $x$ and $y$ satisfying \[ \ell(\xi_0)\leq \tilde{d}(x, y)+\varepsilon< 2r. \] It follows that $\xi_0\subset B_{2r}(x_0)\subset \Omega$. Applying the characterization of c-subsolutions in Proposition~\ref{prop c-sub}(2) with $\xi=\xi_0$, we have \[ u(x)-u(y)\leq (\tilde{d}(x, y)+\varepsilon)\sup_{B_{2r}(x_0)} f. \] Passing to the limit $\varepsilon\to 0$, we get \[ u(x)-u(y)\leq \tilde{d}(x, y)\sup_{B_{2r}(x_0)} f. \] Exchanging the roles of $x$ and $y$, we thus obtain \eqref{local lip precise}. \end{proof}
We next continue to use Proposition \ref{prop c-sub} to show that c-subsolutions of \eqref{eikonal eq} are s-subsolutions with respect to the metric $\tilde{d}$.
\begin{prop}[Implication of subsolution property]\label{prop sub} Let $({\mathbf X}, d)$ be a complete rectifiably connected metric space and $\tilde{d}$ be the intrinsic metric given by \eqref{int metric}. Assume that \eqref{eq ast} holds. Let $\Omega\subsetneq{\mathbf X}$ be an open set. Assume that $f\in C(\Omega)$ and $f\geq 0$ in $\Omega$. If $u$ is upper semicontinuous in $\Omega$ and is a c-subsolution of \eqref{eikonal eq} in $(\Omega, d)$, then $u$ is an s-subsolution of \eqref{eikonal eq} in $(\Omega, \tilde{d})$. \end{prop}
\begin{proof} Since $({\mathbf X}, \tilde{d})$ is a length space, our notation $\text{Lip}_{loc}(\Omega)$ now denotes the set of all locally Lipschitz functions on $\Omega$ with respect to the intrinsic metric $\tilde{d}$. Note that if $u$ is upper semicontinuous with respect to the metric $d$, then it is upper semicontinuous with respect to $\tilde{d}$, since $\tilde{d}\to 0$ if and only if $d\to 0$ due to \eqref{int metric} and \eqref{eq ast}.
Fix $x_0\in \Omega$ arbitrarily. Assume that there exist $\psi_1 \in \underline{\mathcal{C}}(\Omega)$ and $\psi_2 \in \text{Lip}_{loc}(\Omega)$ such that $u-\psi_1-\psi_2$ attains a local maximum at $x_0$. So there is some $r_0>0$ with $B_{2r_0}(x_0)\subset\Omega$ such that \[ u(x)-u(x_0)\leq (\psi_1+\psi_2)(x)-(\psi_1+\psi_2)(x_0) \] for all $x\in B_{r_0}(x_0)$.
Moreover, for any fixed $\varepsilon\in (0, 1)$, by the continuity of $f$, we can take $r_0>0$ smaller if necessary so that \begin{equation}\label{general curve2}
|f(x)-f(x_0)|\leq \varepsilon \end{equation} if $x\in B_{r_0}(x_0)$. We fix such $r_0>0$ (and keep in mind that $r_0$ now also depends on $\varepsilon$).
For any $r\in (0, r_0/2)$ and any $x\in \Omega$ with $0<\tilde{d}(x, x_0)<r$, there exists an arc-length parametrized curve $\xi$ in $\Omega$ such that $\xi(0)=x_0$ and $\xi(t)=x$, where \begin{equation}\label{general curve0} t=\ell(\xi) \leq \tilde{d}(x, x_0)+\varepsilon\tilde{d}(x, x_0). \end{equation} Applying Proposition \ref{prop c-sub} for such a curve with $t_1=0, t_2=t$, we get \[ u(x_0)\leq \int_0^{t} f(\xi(s))\, ds+u(x), \] and therefore, \[ (\psi_1+\psi_2)(x_0)-(\psi_1+\psi_2)(x)\leq \int_0^{{t}}f(\xi(s))\, ds. \] Dividing the inequality above by $\tilde{d}(x,x_0)$, we get \begin{equation}\label{general curve} {\psi_1(x_0)-\psi_1(x)\over \tilde{d}(x, x_0)} \leq \frac{1}{\tilde{d}(x, x_0)}\int_0^{{t}}f(\xi(s))\, ds+{\psi_2(x)-\psi_2(x_0)\over \tilde{d}(x, x_0)}. \end{equation} Since $\varepsilon<1$ and $r<r_0/2$, we have \[ \tilde{d}(\xi(s), x_0)\leq t\leq r+\varepsilon r<r_0 \] for all $s\in [0, t]$, by \eqref{general curve2}. Therefore \[ f(\xi(s))\leq f(x_0)+\varepsilon \] for all $s\in [0, t]$. Hence \eqref{general curve} yields \[ {\psi_1(x_0)-\psi_1(x)\over \tilde{d}(x, x_0)}\leq \frac{t}{\tilde{d}(x, x_0)}(f(x_0)+\varepsilon)+{\psi_2(x)-\psi_2(x_0)\over \tilde{d}(x, x_0)}. \] Using \eqref{general curve0} and recalling that the choice of $r_0$ depends on $\varepsilon$, we thus have \begin{equation}\label{general curve1} {\psi_1(x_0)-\psi_1(x)\over \tilde{d}(x, x_0)}\leq (1+\varepsilon)(f(x_0)+\varepsilon)+{\psi_2(x)-\psi_2(x_0)\over \tilde{d}(x, x_0)} \end{equation} for all $\varepsilon>0$ and all $x\in \Omega$ with $x\in B_{r_0}(x_0)$.
Since $\psi_1\in \underline{\mathcal{C}}(\Omega)$, there exists a sequence of points $x_n\in \Omega$ such that, as $n\to \infty$, we have $x_n\to x_0$ and \[
{\psi_1(x_0)-\psi_1(x_n)\over \tilde{d}(x_n, x_0)}\to |\nabla^-\psi_1|(x_0)=|\nabla \psi_1|(x_0). \] Adopting \eqref{general curve1} with $x=x_n$ and sending $n\to \infty$ and then $\varepsilon\to 0$, we end up with the desired inequality \eqref{s-sub eikonal} at $x=x_0$. \end{proof}
We next show that any c-supersolution is an s-supersolution. We again use a result presented in \cite{GHN}.
\begin{prop}[Proposition 2.8 in \cite{GHN}]\label{prop c-super} Assume that $f\in C(\Omega)$ with $f\ge 0$. Assume $\inf_{\Omega}f>0$. Let $u$ be a lower semicontinuous c-supersolution of \eqref{eikonal eq}. Then for any $\varepsilon>0$ and $x_0\in \Omega$, there exists $\xi_\varepsilon\in \mathcal{A}_{x_0}([0, \infty), \Omega)$ satisfying $T=T_\Omega^+[\xi_\varepsilon]<\infty$ and \begin{equation}\label{prop c-super eq} u(x_0)\geq \int_0^{t} f(\xi_\varepsilon(s))\, ds+u(\xi_\varepsilon(t))-\varepsilon(1+t) \end{equation} for all $0\leq t\leq T$. \end{prop}
\begin{rmk}\label{local c-super prop} If $u$ is a local c-supersolution instead, then for each $x_0\in \Omega$ there is a sufficiently small $r>0$ such that for each $\varepsilon>0$ we can find a choice $\xi_\varepsilon\in{\mathcal A}_{x_0}([0,\infty), B_r(x_0))$ such that for all $0\le t\le T^+_{B_r(x_0)}[\xi_\varepsilon]$, \eqref{prop c-super eq} holds. This is seen by directly adapting the proof of \cite[Proposition~2.8]{GHN}. \end{rmk}
\begin{prop}[Implication of supersolution property]\label{prop super} Let $({\mathbf X}, d)$ be a complete rectifiably connected metric space and $\tilde{d}$ be the intrinsic metric given by \eqref{int metric}. Assume that \eqref{eq ast} holds. Let $\Omega\subsetneq {\mathbf X}$ be an open set. Assume that $f\in C(\Omega)$ with $\inf_{\Omega}f>0$. If $u$ is a lower semicontinuous c-supersolution of \eqref{eikonal eq} in $(\Omega, d)$, then $u$ is an s-supersolution of \eqref{eikonal eq} in $(\Omega, \tilde{d})$. \end{prop}
\begin{proof} Since $u$ is lower semicontinuous with respect to the metric $d$, it is easily seen that $u$ is also lower semicontinuous with respect to $\tilde{d}$, thanks to \eqref{int metric} and \eqref{eq ast}.
Fix $x_0\in \Omega$ arbitrarily. Assume that there exist $\psi_1 \in \overline{\mathcal{C}}(\Omega)$ and $\psi_2 \in \text{Lip}_{loc}(\Omega)$ such that $u-\psi_1-\psi_2$ attains a local minimum at a point $x_0$. We thus have $r_0>0$ such that $B_{2r_0}(x_0)\subset\Omega$ and \[ u(y)-u(x_0)\geq (\psi_1+\psi_2)(y)-(\psi_1+\psi_2)(x_0) \] for all $y\in B_{r_0}(x_0)$.
Applying Proposition \ref{prop c-super}, for any $\varepsilon>0$ satisfying $\sqrt{\varepsilon}<\min\{r_0,\tilde{d}(x_0, \partial\Omega)\}$, we can find $\xi\in \mathcal{A}_{x_0}([0, \infty), \Omega)$ such that \eqref{prop c-super eq} holds for all $0\leq t\leq T_\Omega^+[\xi]$. Since $T_\Omega^+[\xi]\geq \tilde{d}(x_0, \partial\Omega)>0$, we can take $t=\sqrt{\varepsilon}$ and $x_\varepsilon=\xi(\sqrt{\varepsilon})$ in \eqref{prop c-super eq} to get \[ (\psi_1+\psi_2)(x_\varepsilon)-(\psi_1+\psi_2)(x_0)\leq -\int_0^{\sqrt{\varepsilon}}f(\xi(s))\, ds+\varepsilon(1+\sqrt{\varepsilon}). \] Dividing this relation by $\sqrt{\varepsilon}$, we get \begin{equation}\label{eq lem super1} {1\over \sqrt{\varepsilon}}\int_0^{\sqrt{\varepsilon}} f(\xi(s))\, ds+{\psi_2(x_\varepsilon)-\psi_2(x_0)\over \sqrt{\varepsilon}}-\sqrt{\varepsilon}(1+\sqrt{\varepsilon})\le {\psi_1(x_0)-\psi_1(x_\varepsilon)\over \sqrt{\varepsilon}}. \end{equation} Noticing that \[ \tilde{d}\left(x_\varepsilon, x_0\right)\leq \sqrt{\varepsilon}, \] we deduce that \[
{\psi_1(x_\varepsilon)-\psi_1(x_0)\over \sqrt{\varepsilon}}\le {|\psi_1(x_\varepsilon)-\psi_1(x_0)|\over \sqrt{\varepsilon}}
\le {|\psi_1(x_\varepsilon)-\psi_1(x_0)|\over \tilde{d}\left(x_\varepsilon, x_0\right)} \] and \[
{\psi_2(x_\varepsilon)-\psi_2(x_0)\over \sqrt{\varepsilon}}\ge -{|\psi_2(x_\varepsilon)-\psi_2(x_0)|\over \sqrt{\varepsilon}}
\ge -{|\psi_2(x_\varepsilon)-\psi_2(x_0)|\over \tilde{d}\left(x_\varepsilon, x_0\right)}. \] Hence, combining the above inequalities together implies that \[
{|\psi_1(x_\varepsilon)-\psi_1(x_0)|\over \tilde{d}\left(x_\varepsilon, x_0\right)}
\geq {1\over \sqrt{\varepsilon}}\int_0^{\sqrt{\varepsilon}} f(\xi(s))\, ds- {|\psi_2(x_\varepsilon)-\psi_2(x_0)|\over \tilde{d}\left(x_\varepsilon, x_0\right)}- \sqrt{\varepsilon}(1+\sqrt{\varepsilon}). \] Letting $\varepsilon\to 0$, we are led to \eqref{s-super eikonal} with $x=x_0$ as desired. \end{proof}
\begin{rmk}\label{local c to s} By Remark \ref{local c-super prop}, it is not difficult to see that if $u$ is only a local c-supersolution, then for any $x_0\in \Omega$ the same result as in Proposition \ref{prop super} holds in $B_r(x_0)$ with $r>0$ small. In fact, the proof will be the same except that $\Omega$ should be replaced by $B_r(x_0)$. \end{rmk}
We now prove Theorem \ref{thm equiv}. \begin{proof}[Proof of Theorem \ref{thm equiv}]
In view of Lemma~\ref{lem c-lip}, we know that any c-solution of \eqref{eikonal eq} is locally Lipschitz with respect to $\tilde{d}$. Now by Proposition~\ref{prop sub} and Proposition~\ref{prop super} we see that any c-solution $u$ of \eqref{eikonal eq} is a locally Lipschitz s-solution. If $u$ satisfies \eqref{new bdry regularity} and $f$ is uniformly continuous with $\inf_{\Omega}f>0$, then in the bounded domain $\Omega$ we can apply the comparison principle for s-solutions (cf. \cite[Theorem 5.3]{GaS}) to show that $u$ must be the only s- and c-solution of the Dirichlet problem satisfying \eqref{new bdry regularity}. \end{proof}
\subsection{Boundary value}\label{sec:bdry consistency}
In light of the second part of Theorem~\ref{thm equiv}, the importance of the condition \eqref{new bdry regularity} is clear. We now give sufficient conditions for \eqref{new bdry regularity}, which is also important for the existence of c-solutions. Recall that $\zeta$ is a continuous function on $\partial\Omega$, playing the role of the Dirichlet boundary data in \eqref{bdry cond}.
\begin{prop}[Boundary consistency]\label{prop bdry regularity} Assume that $({\mathbf X}, d)$ is a complete rectifiably connected metric space and let $\tilde{d}$ be the induced intrinsic metric given by \eqref{int metric}. Assume that \eqref{eq ast} holds. Let $\Omega\subsetneq {\mathbf X}$ be an open set. Suppose that $f\in C(\overline{\Omega})$ is bounded and $f\geq 0$ in $\overline{\Omega}$. Let $u$ be given by \eqref{eq optimal control} with $\zeta\in C(\partial\Omega)$. \begin{enumerate} \item[(1)] If there exists $L>0$ such that $\zeta$ is $L$-Lipschitz on $\partial\Omega$ with respect to the metric $\tilde{d}$, then \begin{equation}\label{sol bdry regularity weak} u(x)- \zeta(y)\leq \tilde{d}(x, y)\max\left\{L,\ \sup_{\overline{\Omega}} f\right\}\quad \text{for all $x\in \Omega$ and $y\in \partial\Omega$.} \end{equation} \item[(2)] If $\zeta$ satisfies a stronger condition: \begin{equation}\label{bdry regularity}
|\zeta(x)-\zeta(y)|\leq \tilde{d}(x, y) \inf_{\overline{\Omega}} f \quad \text{for every $x, y\in \partial\Omega$,} \end{equation} then \begin{equation}\label{sol bdry regularity}
|u(x)- \zeta(y)|\leq \tilde{d}(x, y)\sup_{\overline{\Omega}} f \quad \text{for all $x\in \Omega$ and $y\in \partial\Omega$.} \end{equation} \end{enumerate} \end{prop}
\begin{proof} For simplicity of notation, denote \[ m:=\inf_{\overline{\Omega}} f, \quad M:=\sup_{\overline{\Omega}} f. \] Fix $x\in \Omega$ and $y\in \partial\Omega$. Then for any $\varepsilon>0$, there exists an arc-length parametrized curve $\xi\in {\mathcal A}_{x}({\mathbb R}, {\mathbf X})$ such that $\xi(t)=y$ and \[ \tilde{d}(x, y)\leq t\leq \tilde{d}(x, y)+\varepsilon. \] This curve may not stay in $\Omega$, but there exists $z=\xi(t_1)\in \partial\Omega$, where \[ t_1:=T^+_\Omega[\xi]=\inf\{s: \xi(s)\in \partial\Omega\}. \] Since we have \[ t-t_1\geq \tilde{d}(y, z) \ \text{ and }\ t\leq \tilde{d}(x, z)+\tilde{d}(y, z)+\varepsilon, \] it follows that $t_1\leq \tilde{d}(x, z)+\varepsilon$ and therefore \begin{equation}\label{bdry regu1} \tilde{d}(x, z)+\tilde{d}(z,y)\leq t \leq \tilde{d}(x, y)+\varepsilon. \end{equation} Now in view of \eqref{eq optimal control}, we have \begin{equation}\label{bdry regu rev1} u(x)\leq \zeta(z)+\int_0^{t_1} f(\xi(s))\, ds\leq \zeta(z)+M(\tilde{d}(x, z)+\varepsilon). \end{equation} Thanks to the $L$-Lipschitz continuity of $\zeta$, we have \begin{equation}\label{use later1} \zeta(z)\leq \zeta(y)+L\tilde{d}(y, z). \end{equation}
Applying \eqref{bdry regu1} and \eqref{use later1} in \eqref{bdry regu rev1}, we thus get \begin{equation}\label{use later2} u(x)\leq \zeta(y)+L \tilde{d}(y, z)+M\tilde{d}(x, z)+M\varepsilon\leq \zeta(y)+\max\{L, M\}\tilde{d}(x, y)+2\max\{L, M\}\varepsilon. \end{equation} Since the above holds for all $\varepsilon>0$, we have \eqref{sol bdry regularity weak} immediately.
In order to show \eqref{sol bdry regularity}, we need the stronger condition \eqref{bdry regularity}, which means that $\zeta$ is $m$-Lipschitz on $\partial\Omega$ with respect to $\tilde{d}$. By \eqref{eq optimal control}, for any $\varepsilon>0$, there exist $y_\varepsilon\in \partial\Omega$ and a curve $\xi_\varepsilon\in {\mathcal A}_x({\mathbb R}, \overline{\Omega})$ such that with $t_\varepsilon>0$ chosen so that $\xi_\varepsilon(t_\varepsilon)=y_\varepsilon$, we have \[ u(x)\geq \zeta(y_\varepsilon)+\int_0^{t_\varepsilon} f(\xi_\varepsilon(s))\, ds-\varepsilon\geq \zeta(y_\varepsilon)+m \tilde{d}(x, y_\varepsilon)-\varepsilon. \] Using \eqref{bdry regularity}, we have \[ u(x)\geq \zeta(y)-m\tilde{d}(y, y_\varepsilon)+m\tilde{d}(x, y_\varepsilon)-\varepsilon\geq \zeta(y)-m\tilde{d}(x, y)-\varepsilon. \] Letting $\varepsilon\to 0$, we obtain \[ u(x)-\zeta(y)\geq -m\tilde{d}(x, y)\geq -M\tilde{d}(x, y), \] which, combined with \eqref{sol bdry regularity weak} with $L=m$, completes the proof. \end{proof}
The condition \eqref{bdry regularity} gives a quite restrictive constraint on the oscillation of the boundary data $\zeta$. A weaker condition than \eqref{bdry regularity} is that \begin{equation}\label{bdry regularity2} \left\{\begin{aligned}
&\zeta(x)-\zeta(y)\leq \int_0^t f(\xi(s))|\xi'|(s)\, ds\\ &\text{for every $x, y\in \partial\Omega$ and every rectifiable curve $\xi: [0, t]\to \overline{\Omega}$ with $\xi(0)=x$ and $\xi(t)=y$.} \end{aligned} \right. \end{equation} This condition is employed in \cite{GHN} to guarantee the existence of continuous solutions to the Dirichlet problem. We can use \eqref{bdry regularity2} instead of \eqref{bdry regularity} to obtain a continuity result weaker than \eqref{sol bdry regularity} provided that $\Omega$ enjoys a better regularity like the so-called quasiconvexity.
\begin{prop}[Boundary consistency under domain quasiconvexity]\label{prop bdry regularity2} Let $({\mathbf X}, d)$ be a complete rectifiably connected metric space and $\tilde{d}$ be given by \eqref{int metric}. We suppose that \eqref{eq ast} holds. Let $\Omega\subsetneq {\mathbf X}$ be an open set. Assume that $f\in C(\overline{\Omega})$ is bounded and $f\geq 0$ in $\overline{\Omega}$. Assume in addition that $\overline{\Omega}$ is $\sigma_\Omega$--convex in $({\mathbf X}, \tilde{d})$ with respect to a modulus of continuity $\sigma_\Omega$, i.e., for any $x, y\in \overline{\Omega}$, there exists a rectifiable curve $\xi$ in $\overline{\Omega}$ joining $x$ and $y$ and satisfying $\ell(\xi)\leq \sigma_\Omega(\tilde{d}(x, y))$. Let $\zeta$ satisfy \eqref{bdry regularity2} and $u$ be given by \eqref{eq optimal control}. Then \eqref{new bdry regularity} holds with $\sigma(t)=2\sigma_{\Omega}(t)\sup_{\overline{\Omega}} f$ for $t\geq 0$. \end{prop}
\begin{proof} Let us still take $M=\sup_{\overline{\Omega}} f$ for simplicity of notation. Fix $x\in \Omega$ and $y\in \partial\Omega$. Using the same argument as in the proof of Proposition \ref{prop bdry regularity}, we can easily prove that \[ u(x)- \zeta(y)\leq 2M\sigma_\Omega(\tilde{d}(x, y)). \] Indeed, since the quasiconvexity of $\Omega$ and \eqref{bdry regularity2} yield \[
|\zeta(y_1)-\zeta(y_2)|\leq M\sigma_\Omega(\tilde{d}(y_1, y_2)) \] for any $y_1, y_2\in \partial\Omega$, we only need to respectively substitute the terms $m\tilde{d}(y, z)$ and $m\tilde{d}(x, y)$ in \eqref{use later1} and \eqref{use later2} with $M\sigma_\Omega(\tilde{d}(y, z))$ and $M\sigma_\Omega(\tilde{d}(x, y))$ .
Let us now show that \begin{equation}\label{bdry reg2} u(x)- \zeta(y)\geq -\sigma(\tilde{d}(x, y)). \end{equation} In fact, for any $\varepsilon>0$, we can use \eqref{eq optimal control} again to find $y_\varepsilon\in \partial\Omega$ and an arc-length parametrized curve $\xi_1\in \mathcal{A}_x([0, t_1))$ with $t_1>0$ such that $\tilde{d}(x, y_\varepsilon)\geq t_1-\varepsilon$ and \begin{equation}\label{bdry reg1} u(x)\geq \zeta(y_\varepsilon)+\int_0^{t_1} f(\xi_1(s))\, ds-\varepsilon. \end{equation} Note that there exists another arc-length parametrized curve $\xi_2\in \mathcal{A}_x([0, t_2))$ such that $\xi_2(0)=x$, $\xi_2(t_2)=y$ and $\sigma_\Omega(\tilde{d}(x, y))\geq t_2$. We thus can join $\xi_1$ and $\xi_2$ by taking \[ \xi(s)=\begin{cases} \xi_1(t_1-s) &\text{if $s\in [0, t_1]$,}\\ \xi_2(s-t_1) &\text{if $s\in (t_1, t_1+t_2]$}. \end{cases} \] Adopting \eqref{bdry regularity2}, we have \[ \zeta(y)\leq \zeta(y_\varepsilon)+\int_0^{t_1+t_2} f(\xi(s))\, ds, \] which, combined with \eqref{bdry reg1}, implies that \[ \begin{aligned} u(x)&\geq \zeta(y)+\int_0^{t_1} f(\xi_1(s))\, ds-\int_0^{t_1+t_2} f(\xi(s))\, ds-\varepsilon=\zeta(y)-\int_0^{t_2} f(\xi_2(s))\, ds-\varepsilon\\ &\geq \zeta(y)-Mt_2-\varepsilon\geq \zeta(y)-M\sigma_\Omega(\tilde{d}(x, y))-\varepsilon. \end{aligned} \] We conclude the proof of \eqref{bdry reg2} by letting $\varepsilon\to 0$. \end{proof}
\subsection{Slope-based solutions in general metric spaces}\label{sec:generalization}
Motivated by our above results, we generalize the definition of viscosity solutions proposed in \cite{GaS}. In particular, by utilizing the induced intrinsic metric $\tilde{d}$ given by \eqref{int metric}, we can now study the general equation \eqref{stationary eq} in a general metric space $({\mathbf X}, d)$ that is not necessarily a length space but a complete rectifiably connected metric space satisfying \eqref{eq ast}.
We first extend the notion of pointwise local slope for any locally Lipschitz function $u$ by continuing
to use the notation $|\nabla u|$: \begin{equation}\label{relaxed least slope}
|\nabla u|(x):=\limsup_{y\to x} {|u(y)-u(x)|\over \tilde{d}(x, y)},\quad \text{for any $x\in {\mathbf X}$.} \end{equation} Such an idea has already been formally stated in \cite[Equation (2.3)]{GHN}. If $({\mathbf X}, d)$ is a length space, then $\tilde{d}=d$ and \eqref{relaxed least slope} agrees with \eqref{slope}.
Analogously, if $({\mathbf X}, d)$ is a complete rectifiably connected metric space satisfying \eqref{eq ast}, we can define the super- and sub-slopes by taking, for $u\in {\rm Lip}_{loc}({\mathbf X})$, \begin{equation}\label{half least slope}
|\nabla^{\pm} u|(x):=\limsup_{y\to x} {[u(y)-u(x)]_{\pm}\over \tilde{d}(x, y)},\quad \text{for any $x\in {\mathbf X}$.} \end{equation} These are again consistent with \eqref{semi slope} if $({\mathbf X}, d)$ is a length space.
By using the definition in \eqref{test class}, we can then introduce the test classes $\overline{\mathcal{C}}(\Omega)$ and $\underline{\mathcal{C}}(\Omega)$ on an open set $\Omega$ of a more general metric space.
As a result, a function $u\in {\rm Lip}_{loc}(\Omega)$ such that $|\nabla u|$ is continuous in $\Omega$ belongs to $\overline{\mathcal{C}}(\Omega)$ (resp., $\underline{\mathcal{C}}(\Omega)$) if and only if for any $x\in \Omega$, \[ \limsup_{y\to x}{u(y)-u(x)\over \tilde{d}(x, y)}
= |\nabla u|(x) \quad \left(\text{resp., } \limsup_{y\to x}{u(x)-u(y)\over \tilde{d}(x, y)}= |\nabla u|(x) \right). \] Now Definition \ref{defi s} can be used to define viscosity solutions of \eqref{stationary eq} in a complete rectifiably connected metric space as long as we replace $d$ by $\tilde{d}$.
We conclude this section by remarking that our approach above is based on a pointwise version of the notion of upper gradients, which, as an important substitute for the Euclidean gradient, has recently attracted a great deal of attention in the study of Sobolev spaces in metric measure spaces; see for instance \cite{Haj1, HKSTBook} for an introduction to this topic.
\section{Monge solutions and local equivalence}\label{sec:monge}
In this section we aim to show the equivalence of c- and s-solutions of the eikonal equation without relying on the boundary condition. Our discussion involves a third notion of solutions to Hamilton-Jacobi equations in general metric spaces, which generalizes the so-called Monge solution of the eikonal equation in the Euclidean space studied in \cite{NeSu, BrDa} etc.
Thanks to our remarks in Section~\ref{sec:generalization}, it is sufficient to set up the problem in a complete length space $({\mathbf X}, d)$. Our results in this section can be applied to more general rectifiably connected spaces by taking the induced intrinsic metric $\tilde{d}$ as in \eqref{int metric}. Throughout this section, we shall always focus our attention on the complete length space $({\mathbf X}, d)$.
\subsection{Definition and uniqueness of Monge solutions}
Let us begin with the definition of Monge solutions for the general Hamilton-Jacobi equation in a complete length space.
\begin{defi}[Definition of Monge solutions]\label{defi monge} A function $u\in {\rm Lip}_{loc}(\Omega)$ is called a {Monge subsolution} (resp., {Monge supersolution}) of \eqref{stationary eq} if, at any $x\in \Omega$, \[
H\left(x, u(x), |\nabla^- u|(x)\right)\leq 0 \quad \left(\text{resp.}, H\left(x, u(x), |\nabla^- u|(x)\right)\geq 0\right). \] A function $u\in {\rm Lip}_{loc}(\Omega)$ is said to be a Monge solution if $u$ is both a Monge subsolution and a Monge supersolution, i.e., $u$ satisfies \begin{equation}\label{stationary monge}
H\left(x, u(x), |\nabla^- u|(x)\right)=0 \end{equation} at any $x\in \Omega$. \end{defi}
In the case of \eqref{eikonal eq}, the definition of Monge subsolutions (resp., supersolutions) reduces to \begin{equation}\label{eq:Monge-supsubsol-Eik}
|\nabla^- u|(x)\leq f(x)\quad (\text{resp., } |\nabla^- u|(x)\geq f(x)) \end{equation} for all $x\in \Omega$. Then $u$ is a Monge solution of \eqref{eikonal eq} if \begin{equation}\label{eq:Monge-sol-Eik}
|\nabla^- u|= f\quad \text{in $\Omega$}. \end{equation}
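For instance, in a complete length space $({\mathbf X}, d)$ the distance function $u=d(\cdot, x_0)$ with $x_0\in {\mathbf X}$ is a Monge solution of \eqref{eikonal eq} with $f\equiv 1$ in $\Omega={\mathbf X}\setminus\{x_0\}$: the triangle inequality gives $|\nabla^- u|\leq 1$, while choosing $y$ on curves from $x$ to $x_0$ whose lengths are arbitrarily close to $d(x, x_0)$ yields $|\nabla^- u|(x)\geq 1$ for every $x\neq x_0$.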
The notion of Monge solutions of \eqref{eikonal eq} in the Euclidean space is studied in \cite{NeSu}, where the right hand side $f$ is allowed to be more generally lower semicontinuous in $\overline{\Omega}$. Such a notion, still in the Euclidean space, was later generalized in \cite{BrDa} to handle general Hamilton-Jacobi equations with discontinuities. The definitions of Monge solutions in \cite{NeSu, BrDa} require the optical length function, which can be regarded as a Lagrangian structure for $H$. In contrast, our definition of Monge solutions does not rely on optical length functions. We only consider continuous $H$ in this work and the discontinuous case will be discussed in our forthcoming paper \cite{CLShZ}.
One advantage of the notion of Monge solutions is that uniqueness can be easily obtained. In what follows, we give a comparison principle for Monge solutions of the eikonal equation.
\begin{thm}[Comparison principle for Monge solutions of eikonal equation]\label{thm comparison monge} Let $({\mathbf X}, d)$ be a complete length space and $\Omega\subsetneq {\mathbf X}$ be a bounded open set in $({\mathbf X}, d)$. Assume that $f\in C(\Omega)$ is bounded and satisfies $\inf_{\Omega}f>0$. Let $u \in C(\overline{\Omega})\cap {\rm Lip}_{loc}(\Omega)$ be a bounded Monge subsolution and $v \in C(\overline{\Omega})\cap {\rm Lip}_{loc}(\Omega)$ be a bounded Monge supersolution of \eqref{eikonal eq}. If \begin{equation}\label{bdry verify monge} \lim_{\delta\to 0}\sup\left\{u(x)-v(x): x\in \overline{\Omega}, \ d(x, \partial\Omega)\leq \delta \right\}\leq 0, \end{equation} then $u\le v$ in $\overline{\Omega}$. Here $d(x, \partial\Omega)$ is given by $\inf_{y\in \partial\Omega} d(x, y)$. \end{thm}
\begin{proof} Since $u$ and $v$ are bounded, we may assume that $u, v\geq 0$ by adding a positive constant to them. It suffices to show that $\lambda u\le v$ in $\Omega$ for all $\lambda\in (0,1)$. Assume by contradiction that there exists $\lambda\in (0,1)$ such that $\sup_{\Omega}(\lambda u-v)> 2\mu$ for some $\mu>0$. By \eqref{bdry verify monge}, we may take $\delta>0$ small such that \[ \lambda u(x)-v(x)\leq u(x)-v(x)\leq \mu \] for all $x\in \overline{\Omega}\setminus \Omega_\delta$, where we denote $\Omega_r=\{x\in \Omega: d(x, \partial\Omega)>r\}$ for $r>0$. We choose $\varepsilon\in (0, \delta/2)$ such that \[ \sup_\Omega (\lambda u-v)>2\mu+\varepsilon^2 \] and \begin{equation}\label{eps small} \varepsilon<(1-\lambda) \inf_{\Omega_{\delta/2}} f. \end{equation} We have such an $\varepsilon>0$ because $\inf_{\Omega}f>0$. Thus there exists $x_0\in \Omega$ such that $\lambda u(x_0)-v(x_0)\geq \sup_\Omega (\lambda u-v)-\varepsilon^2>2\mu$ and therefore $x_0\in \Omega_\delta$.
By Ekeland's variational principle (cf. \cite[Theorem 1.1]{Ek1}, \cite[Theorem 1]{Ek2}), there exists $x_\varepsilon\in B_\varepsilon(x_0)\subset \Omega_{\delta/2}$ such that \[ \lambda u(x_\varepsilon)-v(x_\varepsilon)\geq \lambda u(x_0)-v(x_0) \] and $x\mapsto \lambda u(x)-v(x)-\varepsilon d(x_\varepsilon, x)$ attains a local maximum in $\Omega$ at $x=x_\varepsilon$. It follows that \begin{equation}\label{comparison monge1} v(x_\varepsilon)-v(x)\le \lambda u(x_\varepsilon)-\lambda u(x)+\varepsilon d(x_\varepsilon, x) \end{equation} for all $x\in B_{r}(x_\varepsilon)$ when $r>0$ is small enough. Since $v$ is a Monge supersolution of \eqref{eikonal eq} and hence satisfies~\eqref{eq:Monge-supsubsol-Eik}, there exists a sequence $\{y_n\}\subset \Omega$ such that \[ \lim_{y_n\to x_\varepsilon}\frac{[v(y_n)-v(x_\varepsilon)]_-}{d(x_\varepsilon, y_n)}\geq f(x_\varepsilon)>0. \] Note that here it is crucial to have $f(x_\varepsilon)>0$ so that for large integers $n$ we have $v(y_n)<v(x_\varepsilon)$. Hence, by \eqref{comparison monge1} we have \[ \begin{aligned} f(x_\varepsilon)& \leq \lim_{y_n\to x_\varepsilon}\frac{\lambda [u(y_n)-u(x_\varepsilon)]_-}{d(y_n, x_\varepsilon)}+\varepsilon\\ &\leq \limsup_{x\to x_\varepsilon}\frac{\lambda [u(x)-u(x_\varepsilon)]_-}{d(x, x_\varepsilon)}+\varepsilon\\
&=\lambda |\nabla^- u|(x_\varepsilon)+\varepsilon \end{aligned} \] Using the fact that $u$ is a Monge subsolution, we get \[ f(x_\varepsilon)\leq \lambda f(x_\varepsilon)+\varepsilon, \] which contradicts the choice of $\varepsilon>0$ as in \eqref{eps small}. Our proof is thus complete. \end{proof}
In the above theorem we cannot replace $|\nabla^-u|$ with $|\nabla u|$.
A simple counterexample with $|\nabla u|$ is as follows:
in $[-1,1]\subset{\mathbb R}$ with $f\equiv 1$, both $1-|x|$ and $|x|-1$ satisfy $|\nabla u|=1$ in $(-1, 1)$ together with the boundary condition $u(\pm 1)=0$, so uniqueness would fail; only the former is a Monge solution of \eqref{eikonal eq}.
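Indeed, writing $u(x)=1-|x|$ and $v(x)=|x|-1$, at $x=0$ we compute \[ |\nabla^- u|(0)=\limsup_{y\to 0}{[u(y)-u(0)]_-\over |y|}=\limsup_{y\to 0}{|y|\over |y|}=1=f(0), \qquad |\nabla^- v|(0)=\limsup_{y\to 0}{[v(y)-v(0)]_-\over |y|}=0, \] so $v$ fails the Monge supersolution inequality $|\nabla^- v|\geq f$ at the origin, whereas $u$ satisfies \eqref{eq:Monge-sol-Eik} there (and, by a similar computation, at every point of $(-1, 1)$).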
\begin{rmk}\label{rmk uniqueness discontinuous} It can be easily seen that the continuity assumption on $f$ is not utilized at all in the proof above. Hence the comparison principle in Theorem \ref{thm comparison monge} still holds even if we drop the continuity assumption for $f$. In \cite{CLShZ} we study in detail Monge solutions for discontinuous Hamiltonians in general metric measure spaces. \end{rmk}
\begin{rmk} An analogous comparison principle can be established for locally Lipschitz Monge solutions of \eqref{stationary eq} provided that $p\mapsto H(x, \rho, p)$ is continuous in $[0, \infty)$ uniformly for all $(x, \rho)\in \Omega\times {\mathbb R}$ and $\rho\mapsto H(x, \rho, p)$ is strictly increasing in the sense that
there exists $\mu>0$ such that \[ \rho\mapsto H(x, \rho, p)-\mu \rho \] is nondecreasing for all $x\in \Omega$ and $p\geq 0$. The proof is similar to that of Theorem \ref{thm comparison monge}. \end{rmk}
Concerning the existence of Monge solutions, Theorem \ref{thm equiv Monge} shows that any c-solution of \eqref{eikonal eq} is a Monge solution. In particular, the formula \eqref{eq optimal control} provides a unique c- and Monge solution of \eqref{eikonal eq} and \eqref{bdry cond} if \eqref{bdry regularity} holds.
\subsection{Local equivalence of solutions of eikonal equations}
We study the local relation between Monge solutions, c-solutions and s-solutions. Let us first discuss the subsolution properties.
\begin{prop}[Relation between c- and Monge subsolutions]\label{prop eikonal sub1} Let $({\mathbf X}, d)$ be a complete length space and $\Omega$ be an open set in ${\mathbf X}$. Assume that $f\in C(\Omega)$ with $f\geq 0$. Let $u\in C(\Omega)$. Then $u$ is a c-subsolution of \eqref{eikonal eq} if and only if it is a Monge subsolution of \eqref{eikonal eq}. \end{prop} \begin{proof} We begin with a proof of the implication ``$\Rightarrow$''. Since the local Lipschitz continuity of c-subsolutions $u$ is provided in Lemma~\ref{lem c-lip}, it suffices to
verify that $|\nabla^- u|(x_0)\leq f(x_0)$ for every $x_0\in \Omega$. To this end, we fix $x_0\in\Omega$ and $r>0$ small such that $B_r(x_0)\subset \Omega$. Since $u$ is a c-subsolution of \eqref{eikonal eq}, using \eqref{prop c-sub eq} with $t_1=0$, $t_2=t$ and $\xi(0)=x_0$ we have \[ u(x_0)-u(\xi(t))\leq \int_0^t f(\xi(s))\, ds \] for any $\xi\in \mathcal{A}_{x_0}({\mathbb R}, B_r(x_0))$ and $0\le t\le T^+_{B_r(x_0)}[\xi]$. Therefore \[ {u(x_0)-u(x)\over d(x, x_0)}\leq {1\over d(x, x_0)} \int_0^t f(\xi(s))\, ds \leq \frac{1}{d(x,x_0)} \ell(\xi) \sup_{B_r(x_0)}f \] for any $x\in B_r(x_0)$ and any curve $\xi\subset B_r(x_0)$ joining $x_0$ and $x$. By the continuity of $f$ and by the fact that ${\mathbf X}$ is a length space, taking the infimum over all $\xi$ and then sending $r\to 0$ we obtain \[
|\nabla^- u|(x_0)\leq f(x_0) \] as desired.
We next prove the reverse implication ``$\Leftarrow$''. We again fix $x_0\in \Omega$ arbitrarily. We take an arbitrary curve $\xi\in {\mathcal A}_{x_0}({\mathbb R}, \Omega)$; in particular $\xi(0)=x_0$. Suppose that there is a function $\phi\in C^1({\mathbb R})$ such that $t\mapsto u(\xi(t))-\phi(t)$ attains a local maximum at $t=0$. Then there is some $t_0>0$ such that we have \[ \phi(t)-\phi(0)\geq u(\xi(t))-u(\xi(0)) \] when $-t_0<t<t_0$ . If $\phi'(0)=0$, then we obtain immediately the desired inequality \eqref{eq c-sub}. If $\phi'(0)\neq 0$, then without loss we may assume that $\phi'(0)<0$, in which case $\phi(t)-\phi(0)<0$ for all $t\in (0, t_1)$ for sufficiently small $t_1\in (0, t_0)$. (If $\phi'(0)>0$, then we can consider $t\in (-t_1,0)$ instead below.) It follows that \[
|\phi'(0)|\leq \limsup_{t\to 0+}{u(\xi(0))-u(\xi(t))\over t}\leq |\nabla^- u|(x_0). \]
Since $u$ is a Monge subsolution, we have $|\nabla^- u|(x_0)\leq f(x_0)$ and thus deduce \eqref{eq c-sub} again. \end{proof}
\begin{prop}[Relation between s- and Monge subsolutions]\label{prop eikonal sub2} Let $({\mathbf X}, d)$ be a complete length space and $\Omega$ be an open set in ${\mathbf X}$. Assume that $f$ is locally uniformly continuous and $f\geq 0$ in $\Omega$. Let $u\in C(\Omega)$. Then the following results hold. \begin{enumerate} \item[(i)] If $u$ is a Monge subsolution of \eqref{eikonal eq}, then it is an s-subsolution of \eqref{eikonal eq}. \item[(ii)] If $u$ is a locally uniformly continuous s-subsolution of \eqref{eikonal eq}, then it is a Monge subsolution of \eqref{eikonal eq}. \end{enumerate} \end{prop} \begin{proof} (i) It is an immediate consequence of Proposition \ref{prop sub} and Proposition \ref{prop eikonal sub1}.
(ii) Take $\delta>0$ arbitrarily and $f_\delta=f+\delta$ in $\Omega$. Fix $x_0\in \Omega$ and $r>0$ small such that $B_{4r}(x_0)\subset \Omega$ and $u, f$ are both bounded on $B_{4r}(x_0)$. For $s\in [0, 4r)$ we set \[ M_s:=\sup_{B_s(x_0)} f_\delta, \] and choose $M>0$ such that \[ M>\max\left\{M_{2r}, {\sup_{B_{3r}(x_0)} u- u(x_0)\over r}\right\}. \] Define a continuous function $g_r: [0, 3r)\to [0, \infty)$ by \[ g_r(t):=M_r t+M(t-r)_+ \] for $t\in [0, 3r)$. Due to the choice of $M$ above, the function defined by \[ v_r(x)=u(x_0)+g_r(d(x_0, x)) \] for $x\in B_{3r}(x_0)$, satisfies $v_r\geq u$ on $\partial B_{2r}(x_0)$. Moreover, for any $x\in B_{2r}(x_0)$ with $x\neq x_0$ and any $\varepsilon>0$, we can find an arc-length parametrized curve $\xi$ such that $\xi(0)=x$, $\xi(t_\varepsilon)=x_0$ and $t_\varepsilon-\varepsilon\leq d(x, x_0)\leq t_\varepsilon$. For any $t\in [0, t_\varepsilon]$, we have \[ d(x_0,\xi(t))\ge d(x_0,x)-d(x,\xi(t))\ge (t_\varepsilon-\varepsilon)-(t_\varepsilon-t)=t-\varepsilon. \] Therefore \[ t-\varepsilon \leq d(x_0, \xi(t))\le \ell(\xi[0,t])\leq d(x, x_0)+\varepsilon-d(x, \xi(t)). \] Taking $x_\varepsilon=\xi(\sqrt{\varepsilon})$, we have \[ {v_r(x)-v_r(x_\varepsilon)\over d(x, x_\varepsilon)}
\geq \frac{g_r(d(x_0, x))-g_r(d(x_0, x_\varepsilon))}{d(x, x_0)-d(x_0, x_\varepsilon)+\varepsilon}\geq {1\over 1+2\sqrt{\varepsilon}}
\frac{g_r(d(x_0, x))-g_r(d(x_0, x_\varepsilon))}{d(x, x_0)-d(x_0, x_\varepsilon)} \] when $\varepsilon>0$ is sufficiently small. Hence if $0<d(x,x_0)\le r$, then for sufficiently small $\varepsilon>0$ we have $g_r(d(x_\varepsilon,x_0))\ge M_rd(x_\varepsilon,x_0)$, and then by the choice of $M_r$ we have \[ \limsup_{x_\varepsilon\to x} \frac{g_r(d(x_0, x))-g_r(d(x_0, x_\varepsilon))}{d(x, x_0)-d(x_0, x_\varepsilon)}= M_r\ge f_\delta(x). \] If $r< d(x, x_0)<2r$, then for sufficiently small $\varepsilon$ we have $d(x_\varepsilon,x_0)>r$ as well, and so we get \[ \frac{g_r(d(x_0, x))-g_r(d(x_0, x_\varepsilon))}{d(x, x_0)-d(x_0, x_\varepsilon)} =\frac{(M_r+M)d(x_0,x)-(M_r+M)d(x_0,x_\varepsilon)}{d(x, x_0)-d(x_0, x_\varepsilon)}. \] Therefore, as $d(x_0,x)<2r$, we have \[ \limsup_{x_\varepsilon\to x} \frac{g_r(d(x_0, x))-g_r(d(x_0, x_\varepsilon))}{d(x, x_0)-d(x_0, x_\varepsilon)}\geq M> M_{2r}\ge f_\delta(x). \]
In either case, we see that $|\nabla^- v_r|(x)\geq f_\delta(x)$; in other words, $v_r$ is a Monge supersolution of \[
|\nabla u|=f_\delta \quad \text{in $B_{2r}(x_0)\setminus \{x_0\}$}. \] In view of Proposition \ref{prop eikonal super}(i), we see that $v_r$ is an s-supersolution of the same equation (keep in mind also that by its construction, $v_r$ is $M$-Lipschitz); in particular, we have \begin{equation}\label{bdry verify1} v_r(x)-u(y)\geq v_r(x)-v_r(y)\geq -Md(x, y) \end{equation} for all $x\in B_{2r}(x_0)$ and $y\in \partial B_{2r}(x_0)\cup \{x_0\}$.
On the other hand, $u$ is an s-subsolution and is uniformly continuous in $B_{2r}(x_0)$ with some modulus $\sigma_0$. We have \[ u(x)-u(y)\leq \sigma_0(d(x, y)) \] for all $x\in B_{2r}(x_0)$ and $y\in \partial B_{2r}(x_0)\cup \{x_0\}$. Combining this with \eqref{bdry verify1}, we have shown that the condition \eqref{bdry verify0} holds with $\Omega=B_{2r}(x_0)\setminus \{x_0\}$, $v=v_r$, $\zeta=u$ and $\sigma(s)=\max\{Ms, \sigma_0(s)\}$ for $s\geq 0$. Since $f_\delta=f+ \delta$ in $\Omega$, the function $u$ must also be an s-subsolution for the eikonal equation related to the function $f_\delta$. We thus can use the comparison result \cite[Theorem~5.3]{GaS} to get $u\leq v_r$ in $B_{2r}(x_0)\setminus \{x_0\}$. Letting $\delta\to 0$, we are led to \[ u(x)\leq u(x_0)+d(x, x_0)\sup_{B_r(x_0)} f\quad \text{for all $x\in B_r(x_0)$. } \] One can use the same argument to show that for all $x, y\in B_{r/4}(x_0)$ (and therefore $d(x, y)\leq r/2$), \[ u(y)\leq u(x)+d(x, y)\sup_{B_{r/2}(x)} f\leq u(x)+d(x, y) \sup_{B_r(x_0)} f, \] which yields (recalling that we chose $r>0$ small enough so that $f$ is bounded on $B_{4r}(x_0)$) \[
|u(x)-u(y)|\leq d(x, y)\sup_{B_r(x_0)} f. \] This immediately implies that \begin{equation}\label{sub char}
|\nabla^- u|(x_0)\leq |\nabla u|(x_0)\leq f(x_0). \end{equation} Hence, we can conclude that $u$ is a Monge subsolution of \eqref{eikonal eq}, since $x_0$ is arbitrarily taken. \end{proof}
In the proof of (ii) above, the local uniform continuity of $u$ and $f$ (especially near $x_0$) enables us to adopt the comparison principle. The uniform continuity can be removed if the space $({\mathbf X}, d)$ has some compactness a priori.
We next turn to the relation between supersolutions.
\begin{prop}[Relation between supersolutions]\label{prop eikonal super} Let $({\mathbf X}, d)$ be a complete length space and $\Omega$ be an open set in ${\mathbf X}$. Assume that $f$ is locally uniformly continuous and $f\ge 0$ in $\Omega$. Let $u\in {\rm Lip}_{loc}(\Omega)$. Then \begin{enumerate} \item[(i)] $u$ is a Monge supersolution of \eqref{eikonal eq} if and only if $u$ is an s-supersolution of \eqref{eikonal eq}. \item[(ii)] Assume in addition that $f>0$ on $\Omega$. If $u$ is a local c-supersolution of \eqref{eikonal eq}, then $u$ is a Monge supersolution of \eqref{eikonal eq}. \end{enumerate} \end{prop} \begin{proof} (i) Let us first show the equivalence between a Monge supersolution and an s-supersolution. We begin with the implication ``$\Rightarrow$''. Let $u$ be a Monge supersolution. Suppose that there exist
$\psi_1\in \overline{\mathcal{C}}(\Omega)$ and $\psi_2\in {\rm Lip}_{loc}(\Omega)$ such that $u-\psi_1-\psi_2$ attains a local minimum at $x_0$.
If $f(x_0)=0$, then the desired inequality \[
|\nabla \psi_1|(x_0)\geq -|\nabla \psi_2|^\ast(x_0) \] is trivial. It thus suffices to consider the case $f(x_0)>0$. By the definition of Monge supersolutions, for any $x_0\in \Omega$, \[
|\nabla^- u|(x_0)\geq f(x_0)>0. \] Then for any $\delta>0$, for each $\varepsilon>0$ we can find $x_\varepsilon\in \Omega$ with $x_\varepsilon\to x_0$ as $\varepsilon\to 0$ such that \begin{equation}\label{eq weak1} u(x_\varepsilon)-u(x_0)\leq (-f(x_0)+\delta) d(x_0, x_\varepsilon). \end{equation} It follows from \eqref{eq weak1} and the minimality of $u-\psi_1-\psi_2$ at $x_0$ that \[ \psi_1(x_\varepsilon)-\psi_1(x_0)+\psi_2(x_\varepsilon)-\psi_2(x_0)\leq (-f(x_0)+\delta) d(x_0, x_\varepsilon), \] which implies that \[
{|\psi_1(x_\varepsilon)-\psi_1(x_0)|\over d(x_0, x_\varepsilon)} \ge \frac{\psi_1(x_0)-\psi_1(x_\varepsilon)}{d(x_0,x_\varepsilon)}
\geq f(x_0)-\delta-{|\psi_2(x_\varepsilon)-\psi_2(x_0)|\over d(x_0, x_\varepsilon)}. \] Letting $\varepsilon\to 0$ and then $\delta\to 0$, we obtain \[
|\nabla \psi_1|(x_0)\geq f(x_0)-|\nabla \psi_2|^\ast(x_0). \] It follows that $u$ is an s-supersolution.
The proof for the reverse implication ``$\Leftarrow$'' is given next. So we assume that $u$ is an s-supersolution. Suppose that $u$ is not a Monge supersolution. Then there exists $x_0\in \Omega$ such that \[ \limsup_{x\to x_0}\frac{[u(x_0)-u(x)]_+}{ d(x,x_0)}<f(x_0). \] A contradiction is immediately obtained if $f(x_0)=0$. We thus only consider $f(x_0)>0$ below. Then there exists $\delta>0$ such that for all $x\in B_\delta(x_0)$, \begin{equation}\label{eq monge-s1} \frac{u(x_0)-u(x)}{ d(x,x_0)}-f(x_0)\leq -2\delta. \end{equation} By the continuity of $f$, for any $0<\varepsilon<\min\{\delta,f(x_0)/2\}$, we can choose $0<r<\delta$ such that \begin{equation}\label{continuity f}
|f(x)-f(x_0)|\leq \varepsilon \quad \text{for all $x\in B_r(x_0)$.} \end{equation} Observe that $f(x)\ge f(x_0)/2>0$ whenever $x\in B_r(x_0)$. Let \[ v(x):=u(x_0)-(f(x_0)-\varepsilon) d(x,x_0)+\delta r. \] Then in view of \eqref{eq monge-s1}, we have $v(x)\le u(x)$ for all $x\in \partial B_r(x_0)$. Moreover, we claim that $v$ is an s-subsolution of
\[
|\nabla v|=f \quad \text{in $B_r(x_0)$}.
\] Indeed, for any $x\in B_r(x_0)$, if $v-\psi_1-\psi_2$ achieves a maximum at $x$, where $\psi_1\in \underline{\mathcal{C}}(\Omega)$ and $\psi_2\in \text{Lip}_{loc}(\Omega)$, then \[ \psi_1(x)-\psi_1(y)\le v(x)-v(y)+\psi_2(y)-\psi_2(x). \] It follows that \[ \begin{aligned}
|\nabla \psi_1|(x)=\limsup_{y\to x}\frac{\psi_1(x)-\psi_1(y)}{d(x, y)} &\le \limsup_{y\to x}\frac{v(x)-v(y)}{d(x, y)}+\limsup_{y\to x}\frac{\psi_2(y)-\psi_2(x)}{d(x, y)}\\
&\le f(x_0)-\varepsilon+|\nabla \psi_2|^*(x)\\
&\le f(x)+|\nabla \psi_2|^*(x). \end{aligned} \] The claim has been proved. Applying the comparison principle for s-solutions, we have $v\le u$ in $B_r(x_0)$. This contradicts the fact that \[ v(x_0)=u(x_0)+\delta r>u(x_0). \] Our proof for the equivalence of supersolutions is now complete.
(ii) It follows immediately from (i) and Remark \ref{local c to s}. Note that Remark \ref{local c to s} requires $\inf_{B_r(x)}f>0$ for any $x\in \Omega$ and $r>0$ small, which is implied by the continuity and positivity of $f$ in $\Omega$. \end{proof}
We are not able to show that any Monge supersolution of \eqref{eikonal eq} is a local c-supersolution. However, if $u$ is a Monge solution, then we can show that $u$ must be a local c-solution.
\begin{prop}[Relation between Monge and local c-solutions]\label{prop eikonal solution} Let $({\mathbf X}, d)$ be a complete length space and $\Omega$ be an open set in ${\mathbf X}$. Assume that $f$ is locally uniformly continuous and $f>0$ in $\Omega$. If $u$ is a Monge solution of \eqref{eikonal eq}, then $u$ is a local c-solution of \eqref{eikonal eq}. \end{prop}
\begin{proof} Suppose that $u$ is a Monge solution of \eqref{eikonal eq} in $\Omega$. By Proposition~\ref{prop eikonal sub1}, we know that $u$ must be a c-subsolution. It suffices to show that $u$ is a local c-supersolution of \eqref{eikonal eq}.
For any $x_0\in \Omega$, take $r>0$ small such that $u$ is Lipschitz in $\overline{B_r(x_0)}$, that is, there exists $L>0$ such that \begin{equation}\label{lip local solution}
|u(x)-u(y)|\leq Ld(x, y) \quad\text{for any $x, y\in \overline{B_r(x_0)}$.} \end{equation} Letting $\zeta(y)=u(y)$ for all $y\in \partial B_r(x_0)$, by \eqref{eq optimal control} with $\Omega=B_r(x_0)$ we have the unique c-solution $U$ in $B_r(x_0)$ given by \begin{equation}\label{local formula} U(x):=\inf\bigg\{ \int_{0}^{t_r^+} f(\xi(s))\, ds+ u\left(\xi(t_r^+)\right)\, :\, \xi\in {\mathcal A}_{x}({\mathbb R}, {\mathbf X}) \text{ with }0<T^+_{B_r(x_0)}[\xi]<\infty\bigg\}. \end{equation} It follows from Proposition \ref{prop eikonal sub1} and Proposition \ref{prop eikonal super}(ii) that $U$ is a Monge solution of the eikonal equation in $B_r(x_0)$. Note also that, by Proposition \ref{prop bdry regularity}(1), \[ U(x)-u(y)\leq d(x, y )\max\left\{L,\ \sup_{B_r(x_0)} f\right\} \quad \text{for any $x\in B_r(x_0)$ and $y\in \partial B_r(x_0)$.} \] We then can adopt the comparison principle, Theorem \ref{thm comparison monge}, to get $U\leq u$ in $B_r(x_0)$. In view of \eqref{local formula}, it follows that for any $x\in B_r(x_0)$ and any $\varepsilon>0$ small, there exists a curve $\xi_\varepsilon\in A_x({\mathbb R}, {\mathbf X})$ such that \begin{equation}\label{local solution1} u(x)\geq U(x)\geq \int_{0}^{t_r^+} f(\xi_\varepsilon(s))\, ds+ u\left(\xi_\varepsilon(t_r^+)\right)-\varepsilon, \end{equation} where $t_r^+=T^+_{B_r(x_0)}[\xi_\varepsilon]$ denotes the exit time of $\xi_\varepsilon$ from $B_r(x_0)$. On the other hand, since $u$ is a c-subsolution, we can use Proposition \ref{prop c-sub} to get, for any $0\leq t\leq t_r^+$, \begin{equation}\label{local solution2} u(\xi_\varepsilon(t))\leq u\left(\xi_\varepsilon(t_r^+)\right)+\int_t^{t_r^+} f(\xi_\varepsilon(s))\, ds. \end{equation} Combining \eqref{local solution1} and \eqref{local solution2}, we deduce that for any $0\leq t\leq t_r^+$, \[ u(x)\geq u(\xi_\varepsilon(t))+\int_0^t f(\xi_\varepsilon(s))\, ds-\varepsilon. \] Setting \[ \xi(t)=\begin{cases} \xi_\varepsilon(t) & \text{if $t\geq 0$,}\\ \xi_\varepsilon(-t) & \text{if $t<0$,} \end{cases} \quad\text{and }\quad w(t)=u(x)-\int_0^t f(\xi(s))\, ds \] for $t\in {\mathbb R}$, we easily see that $(\xi, w)$ satisfies the conditions for local c-supersolutions in Definition \ref{def local c}. Indeed, $w$ is of class $C^1$ in $(-t_r^+, t_r^+)\setminus \{0\}$ with $w'=f\circ \xi$ in $(-t_r^+, t_r^+)\setminus \{0\}$. If there is $\phi\in C^1(-t_r^+, t_r^+)$ such that $w-\phi$ achieves a minimum at some $t_0\in (-t_r^+, t_r^+)$, then $t_0\neq 0$ since $f>0$ in $\Omega$. It then follows that $\phi'(t_0)=w'(t_0)$, which yields \[
|\phi^\prime|(t_0)=|w^\prime|(t_0)=f(\xi(t_0)). \] Hence, $u$ is a local c-supersolution and therefore a local c-solution. \end{proof}
We now complete the proof of Theorem \ref{thm equiv Monge}.
\begin{proof}[Proof of Theorem \ref{thm equiv Monge}] The proof consists of the results in Propositions \ref{prop eikonal sub1}, \ref{prop eikonal sub2}, \ref{prop eikonal super} and \ref{prop eikonal solution}. In addition, combining \eqref{sub char} and the definition of Monge supersolutions, we have \eqref{regular0} if any of (a), (b) and (c) holds. \end{proof}
\begin{rmk} It is worth pointing out that, due to Proposition \ref{prop eikonal sub2} and Proposition \ref{prop eikonal super}(i), the equivalence between Monge solutions and locally uniformly continuous s-solutions still holds even if the assumption $f>0$ is relaxed to $f\geq 0$ in $\Omega$. See Theorems \ref{thm equiv general} and \ref{thm equiv general proper} below for a more general result focusing on the relation between Monge and s-solutions. \end{rmk}
The local uniform continuity of $f$ and $u$ in Theorem~\ref{thm equiv Monge} can be dropped if the space $({\mathbf X}, d)$ is assumed to be proper, that is, any closed bounded subset of ${\mathbf X}$ is compact.
\begin{cor}[Local equivalence in a proper space]\label{cor equiv Monge} Let $({\mathbf X}, d)$ be a proper complete geodesic space and $\Omega$ be a bounded open set in ${\mathbf X}$. Assume that $f\in C(\Omega)$ and $f>0$ in $\Omega$. Let $u\in C(\Omega)$. Then the following statements are equivalent: \begin{enumerate} \item[(a)] $u$ is a c-solution of \eqref{eikonal eq}; \item[(b)] $u$ is an s-solution of \eqref{eikonal eq}; \item[(c)] $u$ is a Monge solution of \eqref{eikonal eq}. \end{enumerate} In addition, if any of (a)--(c) holds, then $u\in {\rm Lip}_{loc}(\Omega)$ and satisfies \eqref{regular0}. \end{cor}
\subsection{General Hamilton-Jacobi equations}
We next turn to a more general class of Hamilton-Jacobi equations. In this case, c-solutions are no longer defined.
We can still show the equivalence between Monge solutions and s-solutions.
\begin{thm}[Equivalence of Monge and $s$-solutions of general equations]\label{thm equiv general} Let $({\mathbf X}, d)$ be a complete length space and $\Omega\subset {\mathbf X}$ be an open set. Let $H: \Omega\times {\mathbb R}\times [0, \infty)\to {\mathbb R}$ be continuous and satisfy the following conditions: \begin{enumerate} \item[(1)] $(x, \rho)\mapsto H(x, \rho, p)$ is locally uniformly continuous, that is, for any $x_0\in \Omega$ and $\rho_0\in {\mathbb R}$, there exist $\delta>0$ small and a modulus of continuity $\omega$ such that \[
|H(x_1, \rho_1, p)-H(x_2, \rho_2, p)|\leq \omega\left(d(x_1, x_2)+|\rho_1-\rho_2|\right) \] for all $x_1, x_2\in B_\delta(x_0)$, $\rho_1, \rho_2\in [\rho_0-\delta, \rho_0+\delta]$ and $p\geq 0$. \item[(2)] For any $x_0\in \Omega$ and $\rho_0\in {\mathbb R}$, there exist $\delta>0$ and $\lambda_0>0$ such that \[ p\mapsto H(x, \rho, p)-\lambda_0 p \] is increasing for every $(x, \rho)\in B_\delta(x_0)\times [\rho_0-\delta, \rho_0+\delta]$. \item[(3)] $p\mapsto H(x, \rho, p)$ is coercive in the sense that \begin{equation}\label{coercivity2} \inf_{(x, \rho)\in \Omega\times [-R, R]} H(x, \rho, p)\to \infty \quad \text{as $p\to \infty$ for any $R>0$. } \end{equation} \end{enumerate}
Then $u$ is a Monge solution of \eqref{stationary eq} if and only if $u$ is a locally uniformly continuous
s-solution of \eqref{stationary eq}. In addition, such $u$ is locally Lipschitz in $\Omega$. \end{thm}
\begin{proof} Let $u$ be either a Monge solution or a locally uniformly continuous s-solution. We first claim that \begin{equation}\label{hamiltonian low bound} H(x, u(x), 0)\leq 0\quad \text{for any $x\in \Omega$.}
\end{equation} Thanks to the condition (2), this is clearly true when $u$ is a Monge solution. It thus suffices to show \eqref{hamiltonian low bound} for a locally uniformly continuous $s$-solution $u$. Fix $x_0\in \Omega$ arbitrarily. Let \[ \psi_1(x)={1\over \varepsilon}d(x, x_0)^2 \] for $\varepsilon>0$ small. Then, due to the local boundedness of $u$, there exist $\delta>0$ and $y_\varepsilon\in B_\delta(x_0)\subset \Omega$ such that \[ (u-\psi_1)(y_\varepsilon)\geq \sup_{B_\delta(x_0)} (u-\psi_1)-\varepsilon^2 \] and $y_\varepsilon\to x_0$ as $\varepsilon\to 0$. By Ekeland's variational principle (cf. \cite[Theorem 1.1]{Ek1}, \cite[Theorem 1]{Ek2}), there is a point $x_\varepsilon\in B_\varepsilon(y_\varepsilon)$ such that \[ (u-\psi_1)(x_\varepsilon)\geq (u-\psi_1)(y_\varepsilon) \] and $u-\psi_1-\psi_2$ attains a local maximum in $B_\delta(x_0)$ at $x_\varepsilon\in B_\varepsilon(y_\varepsilon)$, where \[ \psi_2(x)=\varepsilon d(x_\varepsilon, x). \] It is clear that $x_\varepsilon\to x_0$ as $\varepsilon\to 0$. Since $u$ is an $s$-subsolution, we have \[
\inf_{|\rho|\leq \varepsilon}H\left(x_\varepsilon, u(x_\varepsilon), {2\over \varepsilon}d(x_\varepsilon, x_0)+\rho\right)\leq 0, \] which, by the condition (2), yields \[ H\left(x_\varepsilon, u(x_\varepsilon), 0\right) \leq 0. \] Letting $\varepsilon\to 0$, by the continuity of $H$, we deduce \eqref{hamiltonian low bound} at $x=x_0$. We have completed the proof of the claim.
By the coercivity condition (3) we can define a function $h: \Omega\to [0, \infty)$ to be \begin{equation}\label{implicit} h(x):=\inf\{p\geq 0: H(x, u(x), p)> 0\}, \end{equation} and, thanks to the continuity of $H$ and the condition (2), we see that for each $x\in\Omega$, $h(x)\geq 0$ is the unique value satisfying \begin{equation}\label{eq:Hamil-Eik} H(x, u(x), h(x))=0. \end{equation}
We next claim that $h$ is locally uniformly continuous in $\Omega$. To see this, fix $x_0\in \Omega$ and an arbitrarily small $\delta>0$. We take $x, y\in B_{\delta_1}(x_0)$ with $\delta_1\leq \delta$ sufficiently small such that $u(x), u(y)\in [u(x_0)-\delta, u(x_0)+\delta]$. Then by the condition (1) we have \begin{equation}\label{local uniform ham}
H(x, u(x), p)-H(y, u(y), p)\leq \omega(d(x, y)+|u(x)-u(y)|) \end{equation} for any $p\geq 0$.
Denote $W(x, y):=\omega(d(x, y)+|u(x)-u(y)|)$ for simplicity. Since $H$ satisfies the condition (2), we can use \eqref{local uniform ham} to get, for any $x, y\in B_{\delta_1}(x_0)$, \[ H\left(x, u(x), h(y)+{1\over \lambda_0} W(x, y) \right) \geq H(x, u(x), h(y))+W(x, y)\geq H(y, u(y), h(y))=0, \] which, by \eqref{implicit} and \eqref{eq:Hamil-Eik}, yields \[ h(x)\leq h(y)+{1\over \lambda_0} W(x, y). \] We can analogously show that \[ h(x)\geq h(y)-{1\over \lambda_0} W(x, y) \] and therefore $h$ is uniformly continuous in $B_{\delta_1}(x_0)$.
From~\eqref{eq:Hamil-Eik} it is clear that $u$ is a Monge solution of~\eqref{stationary eq}
if and only if $|\nabla^-u|(x)= h(x)$, that is, $u$ is a Monge solution of \eqref{eikonal eq} with $f=h$. Note however that the function $h$ depends on $u$ implicitly in general.
We now show that $u$ is an s-solution of \eqref{stationary eq} if and only if $u$ is an s-solution of \eqref{eikonal eq} with $f=h$. To see this, suppose that there exist $x_0\in \Omega$, $\psi_1\in \underline{\mathcal{C}}(\Omega)$ and $\psi_2\in \text{Lip}_{loc}(\Omega)$ such that $u-\psi_1-\psi_2$ attains a maximum in $\Omega$ at $x_0$. Then by the monotonicity of $p\to H(x, \rho, p)$, the viscosity inequality \eqref{s-sub eq} holds at $x_0$ if and only if \[
H\left(x_0, u(x_0), (|\nabla\psi_1|(x_0)-|\nabla \psi_2|^\ast(x_0))\vee 0\right)\leq 0, \] which, due to \eqref{implicit}, amounts to saying that \[
|\nabla\psi_1|(x_0)-|\nabla \psi_2|^\ast(x_0)\leq h(x_0), \] that is, $u$ is an $s$-subsolution of \eqref{eikonal eq} with $f=h$. Analogous results for supersolutions can be similarly proved.
Noticing that Monge solutions and locally uniformly continuous s-solutions of \eqref{eikonal eq} have been proved to be (locally) equivalent in Theorem~\ref{thm equiv Monge}, we immediately obtain the equivalence of both notions for \eqref{stationary eq} and local Lipschitz continuity. \end{proof}
As in Corollary \ref{cor equiv Monge}, if $({\mathbf X}, d)$ is additionally assumed to be proper, then in Theorem~\ref{thm equiv general} we can drop the assumption on the local uniform continuity of s-solutions. Moreover, when $({\mathbf X}, d)$ is proper, in the proof above we only need to show the continuity of $h$ as in \eqref{implicit}, since continuity implies local uniform continuity. Thus the condition (1) can be removed and (2) can be weakened by merely assuming that for every $(x, \rho)\in \Omega\times {\mathbb R}$, the map $p\mapsto H(x, \rho, p)$ is strictly increasing in $(0, \infty)$. Below we state the result without proofs.
\begin{thm}[Equivalence of Monge and $s$-solutions in a proper space]\label{thm equiv general proper} Let $({\mathbf X}, d)$ be a complete proper geodesic space and $\Omega\subset {\mathbf X}$ be an open set. Let $H: \Omega\times {\mathbb R}\times [0, \infty)\to {\mathbb R}$ be continuous and be coercive as in \eqref{coercivity2}. Assume that for every $(x, \rho)\in \Omega\times {\mathbb R}$, $p\mapsto H(x, \rho, p)$ is strictly increasing in $(0, \infty)$.
Then $u$ is a Monge solution of \eqref{stationary eq} if and only if $u$ is an
s-solution of \eqref{stationary eq}. In addition, such $u$ is locally Lipschitz in $\Omega$. \end{thm}
The strict monotonicity of $p\mapsto H(x, \rho, p)$ assumed in Theorem~\ref{thm equiv general} and Theorem~\ref{thm equiv general proper} enables us to apply an implicit function argument. Although it is not clear to us if one can weaken the requirement, the examples below show that the equivalence result fails to hold in general if $H$ is not monotone in $p$.
\begin{example} Let $\Omega={\mathbf X}={\mathbb R}$ with the standard Euclidean metric. Let \[
H(p)=1-|p-2|+\max\{p-3, 0\}^2, \quad p\geq 0. \] One can easily verify that this Hamiltonian satisfies all assumptions in Theorem \ref{thm equiv general} except for the monotonicity.
It is not difficult to see that the function $u$ given by $u(x)=-3|x|$ for $x\in {\mathbb R}$ is a Monge solution of \begin{equation}\label{example hj}
H(|\nabla u|)=0 \quad \text{in ${\mathbb R}$},
\end{equation} since $|\nabla^- u|=3$ in ${\mathbb R}$.
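Indeed, a direct computation confirms that the slope $3$ is a zero of $H$:
\[
H(3)=1-|3-2|+\max\{3-3,\,0\}^2=1-1+0=0.
\]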
It is however not a conventional viscosity solution or an s-solution, though it is an s-supersolution. \end{example}
\begin{example} Let $\Omega={\mathbf X}={\mathbb R}$ again. Set \[
H(p)=1-|p|+\max\{p-3, 0\}^2, \quad p\geq 0, \] which again satisfies all assumptions in Theorem \ref{thm equiv general} but the monotonicity. This time we have \[
u(x)=|x|, \quad x\in {\mathbb R} \]
as a viscosity solution or s-solution of \eqref{example hj}. But it is not a Monge solution, since $|\nabla^- u|(0)=0$ and $H(0)\neq 0$. \end{example}
\subsection{Further regularity}\label{sec:regularity}
Motivated by the observation \eqref{regular0} in Theorem \ref{thm equiv Monge}, we consider a higher regularity than the Lipschitz continuity related to the Monge solutions of \eqref{stationary eq}. One can slightly strengthen Definition \ref{defi monge} by further requiring that the solution $u$ satisfy \eqref{regular}. We say that a solution $u$ is regular in $\Omega$ if \eqref{regular} holds. The reason why we regard this condition as regularity will be clarified below.
The property~\eqref{regular} is studied for time-dependent Hamilton-Jacobi equations on metric spaces \cite[Proof of Theorem~2.5 and Remark~2.27]{LoVi}. The authors of \cite{LoVi} show that at any fixed time, such spatial regularity holds for the Hopf-Lax formula almost everywhere if ${\mathbf X}$ is a metric measure space that satisfies a doubling condition and a local Poincar\'e inequality, or everywhere if ${\mathbf X}$ is a finite dimensional space with Alexandrov curvature bounded from below. In our stationary setting, we have shown that any Monge solution of the eikonal equation \eqref{eikonal eq} is regular in a complete length space without using \emph{any} measure structure or imposing any assumptions on the space dimension and curvature, see Theorem~\ref{thm equiv Monge}. Now we consider the more general Hamiltonian $H$.
\begin{prop}[Regularity of Monge solutions]\label{prop regular general} Suppose that the assumptions in Theorem~\ref{thm equiv general} or Theorem~\ref{thm equiv general proper} hold. If $u$ is a Monge solution of \eqref{stationary eq}, then $u$ is regular in $\Omega$ and $u\in \underline{\mathcal{C}}(\Omega)$. \end{prop}
\begin{proof} We have shown in the proof of Theorem \ref{thm equiv general} that $u$ is locally a Monge solution of \[
|\nabla u|=h(x), \] where $h\in C(\Omega)$ is given by \eqref{implicit} and we have $h\geq 0$ in $\Omega$.
The regularity \eqref{regular} is thus an immediate consequence of \eqref{sub char} in the proof of Proposition~\ref{prop eikonal sub2} and the definition of Monge solutions. We also see that $u\in \underline{\mathcal{C}}(\Omega)$. One can also apply the same proof to obtain the conclusion under the assumptions of Theorem \ref{thm equiv general proper}. \end{proof}
We emphasize that for \eqref{regular} the strict monotonicity of $p\to H(x, \rho, p)$ cannot be relaxed, as indicated by the following simple example.
\begin{example}\label{ex:non-regular} Let \[ H(p)=\begin{cases} p &\text{when $0\leq p< 1$,}\\ 1 &\text{when $1\leq p<2$,}\\ p-1 &\text{when $p\geq 2$.} \end{cases} \] In this case, it is easily seen that \[ u(x)=\begin{cases} x &\text{for $x\leq 0$,}\\ 2x & \text{for $x>0$} \end{cases} \] is an s-solution (usual viscosity solution) of \[
H(|\nabla u|)=1\quad \text{in ${\mathbf X}={\mathbb R}$}, \]
but $|\nabla u|(0)=2\neq 1=|\nabla^- u|(0)$. \end{example}
We finally remark that \eqref{regular} can be regarded as a type of regularity that is related to semi-concavity in Euclidean spaces. Let us briefly recall below several results on semi-concave functions in ${\mathbb R}^N$; we refer the reader to \cite{CaS} for a detailed introduction to the classical results and to \cite{Alb, ACS} for recent developments in more general sub-Riemannian contexts.
Recall (cf. \cite[Definition 2.1.1]{CaS}) that for any open set $\Omega\subset {\mathbb R}^N$, $u\in C(\Omega)$ is said to be a (generalized) semi-concave function in $\Omega$ if there exists a nondecreasing upper semicontinuous function $\omega: [0, \infty)\to [0, \infty)$ such that \[ \lim_{r\to 0+}\omega(r)=0 \] and \begin{equation}\label{general semiconcave}
u(x+\eta)+u(x-\eta)-2u(x)\leq \omega(|\eta|)|\eta|
\end{equation} for any $x\in \Omega$ and any $\eta\in {\mathbb R}^N$ with $|\eta|$ sufficiently small. A more well-known notion of semi-concave functions is to ask $u\in C(\Omega)$ to satisfy \eqref{general semiconcave} with $\omega(r)=cr$ for some $c>0$; in this case, $u$ is semi-concave in $\Omega$ if and only if
$x\mapsto u(x)-c|x|^2/2$ is concave in $\Omega$.
It is well known \cite[Theorem 5.3.7]{CaS} that viscosity solutions of \eqref{stationary eq} are locally semi-concave (in the generalized sense) provided that $H(x, \rho, p)$ is locally Lipschitz and strictly convex in $p$; one can apply this result to \eqref{eikonal eq} by changing the form into \[
|\nabla u|^2=f(x)^2 \quad\text{ in $\Omega$} \] in order to meet the requirement of strict convexity. Moreover, any such semi-concave function has nonempty superdifferentials; in other words, it can be touched everywhere from above by a $C^1$ function; see \cite[Proposition 3.3.4]{CaS}. The condition \eqref{regular} is slightly weaker than this property in ${\mathbb R}^N$.
\begin{prop}[Regularity and upper testability in Euclidean spaces]\label{prop upper test} Let $x_0\in {\mathbb R}^N$ and assume that $u: {\mathbb R}^N\to {\mathbb R}$ is Lipschitz near $x_0$. If there exists a function $\psi$ differentiable at $x_0$ such that $u-\psi$ attains a local maximum at $x_0$, then \begin{equation}\label{pointwise regularity}
|\nabla u|(x_0)=|\nabla^- u|(x_0). \end{equation} \end{prop}
\begin{proof} By assumptions, we have \[ u(x)-\psi(x)\leq u(x_0)-\psi(x_0) \] for any $x\in \Omega$ near $x_0$. It follows that \[ \begin{aligned} &\min\{u(x)-u(x_0), 0\}\leq \min\{\psi(x)-\psi(x_0), 0\},\\ &\max\{u(x)-u(x_0), 0\}\leq \max\{\psi(x)-\psi(x_0), 0\}. \end{aligned} \] These relations imply that \begin{equation}\label{eq regular1} \begin{aligned}
&|\nabla^- u|(x_0)\geq |\nabla^- \psi|(x_0),\\
&|\nabla^+ u|(x_0)\leq |\nabla^+ \psi|(x_0). \end{aligned} \end{equation} Noticing that $\psi$ is differentiable at $x_0$, we have \begin{equation}\label{differentiable}
|\nabla^+ \psi|(x_0)=|\nabla^- \psi|(x_0)=|\nabla \psi(x_0)|. \end{equation} We can use \eqref{eq regular1} to obtain \eqref{pointwise regularity} immediately. \end{proof}
The statement converse to that of Proposition \ref{prop upper test} also holds when $N=1$ but it fails to hold in higher dimensions, as indicated by the example below. \begin{example} Let $u: {\mathbb R}^2\to {\mathbb R}$ be given by \[
u(x)=\min\{x_1, 0\}+|x_2|\quad \text{for $x=(x_1, x_2)\in {\mathbb R}^2$. } \]
It is clear that $u$ is Lipschitz in ${\mathbb R}^2$ and $|\nabla u|(0)=|\nabla^- u|(0)=1$ but $u$ cannot be tested at $x=0$ from above by any function that is differentiable at $x=0$. \end{example}
However, for $x_0\in {\mathbb R}^N$, if \eqref{regular} holds in a neighborhood $\Omega$ of $x_0$ and $|\nabla u|$ is continuous in $\Omega$, then $u$ can be touched from above everywhere in $\Omega$ by a $C^1$ function. Indeed, under these assumptions, $u$ can be regarded as a Monge solution of \eqref{eikonal eq} with $f$ continuous in $\Omega$. This in turn implies that $u$ is an s-solution and a conventional viscosity solution in the Euclidean space by Corollary \ref{cor equiv Monge}. Then its local semi-concavity and testability from above follow immediately.
These observations suggest that \eqref{regular} can be adopted as a generalized notion of semi-concavity in more general metric spaces.
\end{document} | arXiv |
What is the units digit in the product of all natural numbers from 1 to 99, inclusive?
$99!$, the product of all natural numbers from 1 to 99, inclusive, includes the product $2\times5=10$, and since 0 multiplied by any number is 0, the units digit of 99! is $\boxed{0}$. | Math Dataset |
Volumetric medical image compression using 3D listless embedded block partitioning
Ranjan K. Senapati1,
P. M. K Prasad2,
Gandharba Swain3 &
T. N. Shankar3
This paper presents a listless variant of a modified three-dimensional (3D)-block coding algorithm suitable for medical image compression. A higher degree of correlation is achieved by using a 3D hybrid transform. The 3D hybrid transform is performed by a wavelet transform in the spatial dimension and a Karhunen–Loueve transform in the spectral dimension. The 3D transformed coefficients are arranged in a one-dimensional (1D) fashion, as in the hierarchical nature of the wavelet-coefficient distribution strategy. A novel listless block coding algorithm is applied to the mapped 1D coefficients which encode in an ordered-bit-plane fashion. The algorithm originates from the most significant bit plane and terminates at the least significant bit plane to generate an embedded bit stream, as in 3D-SPIHT. The proposed algorithm is called 3D hierarchical listless block (3D-HLCK), which exhibits better compression performance than that exhibited by 3D-SPIHT. Further, it is highly competitive with some of the state-of-the-art 3D wavelet coders for a wide range of bit rates for magnetic resonance, digital imaging and communication in medicine and angiogram images. 3D-HLCK provides rate and resolution scalability similar to those provided by 3D-SPIHT and 3D-SPECK. In addition, a significant memory reduction is achieved owing to the listless nature of 3D-HLCK.
As the amount of patient data increases, compression techniques for the digital storage and transmission of medical images become mandatory. Imaging modalities such as ultrasonography (US), computed tomography (CT), magnetic resonance imaging (MRI) and X-rays provide flexible means of viewing anatomical cross sections for diagnosis. Three-dimensional (3D) medical images can be viewed as a time sequence of radiographic images, the tomographic slices (images) of a dynamic object, or a volume of tomographic slice images of a static object (Udupa and Herman 2000). In this paper, a 3D medical image corresponds to a volume of tomographic slices, which is a rectangular array of voxels with certain intensity values. Progressive lossy to lossless compression from a unified bit string is highly desirable in medical imaging. Lossy compression is tolerated as long as the required diagnostic quality is preserved. Lossless to lossy compression techniques are primarily used in telemedicine, teleradiology and the wireless monitoring of capsule endoscopy.
A compression technique using wavelets provides better image quality compared to Joint Photographic Experts Group (JPEG) compression (Pennebaker and Mitchell 1993; Santa-cruz et al. 2000). It also provides a rich set of features such as progressive refinement in quality and resolution, region-of-interest (ROI) coding and optimal rate-distortion performance with a modest increase in computational complexity. The JPEG standard uses an 8 × 8 discrete cosine transform (DCT) and the JPEG2000 standard uses a two-dimensional discrete wavelet transform (2D-DWT). The Karhunen–Loueve transform (KLT) is an optimal method for encoding images in the mean squared error (MSE) sense. The compression performance of 2D cosine, Fourier, and Hartley transforms was compared using positron emission tomography (PET) and magnetic resonance (MR) images in Shyam Sunder et al. (2006). The authors claimed that the discrete Hartley transform (DHT) and the discrete Fourier transform (DFT) perform better than the DCT. Several techniques based on the three-dimensional discrete cosine transform (3D-DCT) have been proposed for volumetric data coding (Tai et al. 2000). Nevertheless, these techniques fail to provide lossless coding coupled with quality and resolution scalability, which is a significant drawback for teleradiology and telemedicine applications.
Several works on wavelet-based 3D medical image compression have been reported in the literature (Schelkens et al. 2003; Xiong et al. 2003; Chao et al. 2003; Gibson et al. 2004; Xiaolin and Tang 2005; Sriram and Shyamsunder 2011; Ramakrishnan and Sriram 2006; Srikanth and Ramakrishnan 2005; He et al. 2003). A method based on block-based quad-tree compression, layered zero-coding, and context-based arithmetic coding was proposed by Schelkens et al. (2003). They claimed that the method gives an excellent result for lossless compression and a comparable result for lossy compression. Modified 3D-SPIHT and 3D-EBCOT schemes for the compression of medical data were proposed by Xiong et al. (2003). Their method gives a comparable result for lossy and lossless compression. An optimal 3D coefficient tree structure for 3D zero-tree coding was proposed by Chao et al. (2003). They suggested that an asymmetrical tree can produce a higher compression ratio than a symmetrical one. Gibson et al. (2004) incorporated an ROI and texture modelling stage into the 3D-SPIHT coder for the compression of angiogram video sequences based on bit allocation criteria. Xiaolin and Tang (2005) presented a 3D scalable coding scheme which aimed to improve the productivity of a radiologist by providing a high decoder throughput, random access to the coded data volume, progressive transmission, and coding gain in a balanced design approach. Sriram and Shyamsunder (2011) proposed an optimal coder by making use of wavelets db4, db6, cdf9/7, and cdf5/3 with 3D-SPIHT, 3D-SPECK, and 3D-BISK. They found that cdf 9/7 with 3D-SPIHT yields the best compression performance. Ramakrishnan and Sriram (2006) proposed a wavelet-based SPIHT coder for DICOM images for teleradiology applications. Similarly, many works based on 3D-SPECK, 3D-BISK, and 3D-SPIHT used for the compression of hyperspectral images have been reported (Tang et al. 2003; Fowler and Rucker 2007; Lu and Pearlman 2001).
3D-SPIHT and 3D-SPECK use auxiliary lists [e.g., a list of insignificant pixels (LIP), a list of insignificant sets (LIS), and a list of significant pixels (LSP)] for tree/block partitioning. The auxiliary lists demand an efficient memory management technique, as the coefficients in the list are shuffled out during bit-plane partitioning. This feature is not favorable for hardware realisation. Therefore, 2D variants of listless coders called no list SPIHT (NLS) (Latte et al. 2006), listless SPECK (Wheeler and Pearlman 2000), LEBP (Senapati et al. 2013), and HLDTT (Senapati et al. 2014a) use a state table to keep track of set partitions. These listless coders can be efficiently realised in hardware. Recently, a listless implementation of 3D-SPECK for the compression of hyperspectral images was proposed by Ngadiran et al. (2010).
To the best of the authors' knowledge, there have been few works on 3D listless implementations for medical images in the literature. This motivates us to develop a novel technique for encoding medical images using a modified 3D listless technique. The 3D listless algorithm uses static and dynamic marker state tables for encoding large clusters of insignificant blocks, which results in a rate reduction at earlier passes. From a unified bit string, the algorithm provides rate and resolution scalability for the compression of volumetric data. This set of features is a potential requirement in telemedicine and teleradiology applications.
The organization of the paper is as follows: "The proposed 3D-HLCK embedded coder" section presents the proposed 3D-HLCK algorithm and its memory allocation for 3D medical images. Simulation results and an analysis of coding performance and computational complexity (using big-O notation) are presented in the "Results and discussion" section. Conclusions and further research directions are provided in the "Conclusion" section.
The proposed 3D-HLCK embedded coder
The block diagram of the proposed 3D-HLCK algorithm is shown in Fig. 1. The 3D hybrid transformation is carried out in the first stage. Then, all 3D coefficients are mapped to a one-dimensional array for processing by the proposed 3D-HLCK algorithm. Figure 2 shows the coefficient arrangement algorithm. The arrangement is created by keeping in mind the hierarchical nature of a wavelet pyramid. Four image slices are shown here as an illustration. The experiment is carried out for eight slices in all images. The coefficients in each slice undergo Z-scanning, which maps the two-dimensional arrangement to one dimension.
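The exact scan order of the subband coefficients is defined by Fig. 2. As a rough illustration only, the sketch below shows one common way to realise such a Z (Morton) scan in software by bit-interleaving the row and column indices of a coefficient; the function name is hypothetical, and the actual ordering used by 3D-HLCK should be taken from Fig. 2.

def z_order_index(row, col):
    """Interleave the bits of (row, col) to obtain a Z-order (Morton) index.
    Generic illustration of linearising a 2D block of wavelet coefficients."""
    index, bit = 0, 0
    while (row >> bit) or (col >> bit):
        index |= ((col >> bit) & 1) << (2 * bit)       # even bit positions from the column
        index |= ((row >> bit) & 1) << (2 * bit + 1)   # odd bit positions from the row
        bit += 1
    return index

# Example: linearise a 4 x 4 block in Z-order.
scan = sorted(((r, c) for r in range(4) for c in range(4)), key=lambda rc: z_order_index(*rc))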
Proposed 3D hierarchical listless embedded coder
Scanning pattern of the subbands in medical MRI images and mapping to 1D array
The coefficient is accessed using a linear indexing scheme (Wheeler and Pearlman 2000). Two types of marker state tables are used: (1) a dynamic marker table (Dm) and (2) a static marker table (Sm). The one-to-one correspondence between the coefficient values and the marker values is shown in Fig. 3. All marker values are initialised and loaded into memory along with the one-dimensional (1D) arrangement of the image coefficient values. The dynamic markers in Dm update their values to indicate partitioning. The partitioning can be octal (8), tri (3) or quad (4). Octal partitioning takes place while there is a search for a significant coefficient in a composite wavelet level. Tri partitioning takes place while there is a search for a significant coefficient in a wavelet level. Quad partitioning takes place if a coefficient is found to be significant in a wavelet subband or a subblock inside a subband. The static marker table Sm is only used to skip a large cluster of areas, e.g. the entire composite level/wavelet level/wavelet subband. The length of the dynamic marker table is the same as the length of the image array. If each marker in the dynamic marker state table is 4 bits, then the state table consumes I/2 bytes of memory. There are only three fixed markers per wavelet level. For five levels, there will be 15 × 3 = 45 markers in the constant marker table. The values of the markers depend on the image size (i.e. N × N) and the number of wavelet decomposition levels L. The initial marker value is \((log_2N-L+1)\), and the final value is \((log_2N+1)\). For example, if the image dimension N = 128 and the level of decomposition L = 5, the marker values are 3, 4, 5, 6, 7, and 8 in each leading node of the wavelet level. Each bit plane undergoes three passes, as in conventional 3D-SPIHT. They are (1) an insignificant coefficient pass, (2) an insignificant set pass, and (3) a refinement pass.
The association between Dm[k] values and coefficient values \(\xi (k)\), where \(k=0,1,2,\ldots\)
During the insignificant coefficient pass, a single coefficient will be tested for significance. During the insignificant set pass, a composite level/individual level/individual subband will be tested for significance. The refinement pass successively reduces the uncertainty interval between the reconstructed coefficient value and the actual coefficient value.
The symbol and meaning of each type of marker are specified below
INC: The coefficient is insignificant or untested for this bit plane.
NSC: The coefficient becomes significant so it shall not be refined for this bit plane.
SCR: The coefficient is significant and it shall be refined in this bit plane.
The markers listed below corresponds to the leading indices of each lower level of the pyramid. These markers shall be used to test the insignificance of a subband/block during each bit-plane pass.
Static markers (Sm[k]):
Sm[1]: The coefficient is at the leading index of the combined wavelet level L. All the coefficients in the same wavelet level shall be skipped.
Sm[129]: The coefficient is at the leading index of the combined wavelet level L − 1. All coefficients in the same wavelet level shall be skipped.
\(\vdots\)
Sm[32,769]: This coefficient is at the leading index of the finest pyramid level \(L-5\). All coefficients in this level shall be skipped.
Dynamic markers (Dm[k]): The partitioning that takes place due to the dynamic markers in a typical pyramid level (L-1) is illustrated below. A similar illustration applies to the other levels.
If Dm[129] = Sm[129], then the combined wavelet level L − 1 may be skipped.
If Dm[129] = Sm[129]-1, then a wavelet level (for a single plane) L − 1 may be skipped.
If Dm[129] = Sm[129]-2, then a single subband block in the wavelet level L − 1 may be skipped.
If Dm[129] = Sm[129]-3, then \(\frac{1}{4}\)th of a subband block from a wavelet level may be skipped.
If Dm[129] = 0, then a single coefficient is to be examined for significance.
A similar partitioning algorithm is applied to the other combined subbands as well as the composite coarsest subband; a small dispatch sketch is given below.
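As a reading aid, the small sketch below maps the gap between a dynamic marker Dm[k] and its static counterpart Sm[k] to the unit that is examined next, following the rules listed above. The function name and the returned strings are purely illustrative and do not appear in the actual coder.

def unit_to_test(dm_k, sm_k):
    """Map the marker gap Sm[k] - Dm[k] to the unit tested next (illustrative only)."""
    if dm_k == 0:
        return "single coefficient"
    gap = sm_k - dm_k
    if gap == 0:
        return "combined wavelet level"
    if gap == 1:
        return "wavelet level of a single slice"
    if gap == 2:
        return "single subband"
    if gap == 3:
        return "quarter of a subband"
    return "smaller sub-block inside a subband"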
\(k=129, 513, 2049, 8193, 32{,}769\) are the leading indices from resolution level (L − 1) down to level 1 (the finest resolution level). There are a total of five combined levels in the arrangement of the eight MRI slices, each of resolution 128 × 128.
The 3D coefficients are mapped to a 1D array of length I after hybrid transformation. The progressive encoder encodes the most significant bit plane and moves towards the least significant bit plane. It can be stopped whenever the bit budget matches the target rate. The significance level for each bit plane is s = \(2^n\), which is calculated with the bitwise logical AND operation \((\cap )\). The decoder performs the reverse of the encoding operation with some minor changes. The decoder generates the magnitude bits and sign bits of the coefficients with a bitwise logical OR \((\cup )\) instead of a bitwise logical AND \((\cap )\).
The 1D coefficient array \(\xi\) is bit-plane coded and examined for significance in each bit plane pass. The initial threshold value can be computed as follows:
$$n = \left\lfloor {\rm log}_2 (\max_k \left| \xi (k) \right| ) \right\rfloor$$
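For instance, the starting bit plane and the per-plane significance test can be computed from the mapped array as in the short sketch below; NumPy is assumed purely for convenience and is not prescribed by the algorithm.

import numpy as np

def initial_bitplane(xi):
    """Index n of the most significant bit plane of the 1D coefficient array xi."""
    return int(np.floor(np.log2(np.max(np.abs(xi)))))

def bit_is_set(value, n):
    """Bit-plane significance test via bitwise AND, as described in the text."""
    return (abs(int(value)) & (1 << n)) != 0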
1. The initialization of Sm[k] and Dm[k] state table markers are illustrated below:
Sm[1, 17, 33, 49, 65, 81, 97, 113] = Dm[1, 17, 33, 49, 65, 81, 97, 113] = 3 for \(LL_5\) subband.
Sm[129, 177, 225, 273, 321, 369, 417, 465] =
Dm[129, 177, 225, 273, 321, 369, 417, 465] = 4 are the leading nodes of \(HL_5\), \(LH_5\) and \(HH_5\) subbands.
Sm[513, 705, 897, 1089, 1281, 1473, 1665, 1857] =
Dm[513, 705, 897, 1089, 1281, 1473, 1665, 1857] = 5 are the leading nodes of \(HL_4\), \(LH_4\) and \(HH_4\) subbands.
Sm[2049, 2817, 3585, 4353, 5121, 5889, 6657, 7425] =
Dm[2049, 2817, 3585, 4353, 5121, 5889, 6657, 7425] = 6 are the leading nodes of \(HL_3\), \(LH_3\) and \(HH_3\) subbands.
Sm[8193, 11,265, 14,337, 17,409, 20,481, 23,553, 26,625, 29,697] =
Dm[8193, 11,265, 14,337, 17,409, 20,481, 23,553, 26,625, 29,697] = 7 are the leading nodes of \(HL_2\), \(LH_2\) and \(HH_2\) subbands.
Sm[32,769, 45,057, 57,345, 69,633, 81,921, 94,209, 106,497, 118,785] =
Dm[32,769, 45,057, 57,345, 69,633, 81,921, 94,209, 106,497, 118,785] = 8 are the leading nodes of \(HL_1\), \(LH_1\) and \(HH_1\) subbands.
2. The remaining Dm[k] markers shall be initialized to an arbitrary value (i.e. \(Dm[k]\ge (log_2N+1)+1\)), and the corresponding coefficients are marked as INC. A short sketch reproducing the leading indices and marker values of step 1 is given below.
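The leading indices and marker values listed in step 1 can be reproduced directly from the subband sizes. The short sketch below does this for the stated configuration (eight slices of a 128 × 128 image with L = 5); the function name and loop structure are illustrative only.

import math

def leading_markers(n=128, levels=5, slices=8):
    """Static marker table Sm for the 1D arrangement: leading index -> marker value."""
    sm = {}
    start = 1
    marker = int(math.log2(n)) - levels + 1        # 3 for n = 128, levels = 5
    per_slice = (n // 2 ** levels) ** 2            # coarsest LL block: 16 coefficients per slice
    for group in range(levels + 1):                # coarsest group, then one group per wavelet level
        for s in range(slices):
            sm[start + s * per_slice] = marker     # leading node of this group in slice s
        start += slices * per_slice
        per_slice = 3 * (n // 2 ** (levels - group)) ** 2   # next group: three subbands per slice
        marker += 1
    return sm

# leading_markers()[129] == 4, leading_markers()[32769] == 8, and so on.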
Block partitioning of 3D-HLCK algorithm
The block partitioning of composite/combined levels is demonstrated in Fig. 4. Figure 4a demonstrates how the partitioning takes place for the composite coarsest level for the 1D arrangement of coefficients, and Fig. 4b demonstrates the partitioning of the combined pyramid level. If a coefficient is found to be significant, the combined coarsest level is partitioned into eight levels, where each level corresponds to the coarsest level of individual slices. Further, recursive quad partitioning in each level takes place until a coefficient is found to be significant. Finally, the significance of the coefficient value along with the sign bit will be transmitted. No sign bit will be transmitted if the coefficient is found to be insignificant. Similarly, the combined pyramid level is first octal partitioned into individual pyramid levels which correspond to each image slice. Then, each pyramid level is tri partitioned to find the subbands. The subbands are further quad partitioned to find a significant coefficient. Then, the coefficient will be coded and transmitted.
Partitioning of a combined coarsest subband, b combined wavelet level
All the steps described above are presented below in the form of pseudocode; a simplified Python skeleton of the three passes is also sketched after the pass headings.
Pseudocode of 3D-HLCK algorithm
Bit plane pass1: Insignificant coefficient pass
Bit plane pass2: Insignificant set pass
Bit plane Pass3: Refinement pass
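The full pseudocode of the three passes relies on the Sm/Dm marker tables described above. The skeleton below is only a simplified, self-contained illustration of how a bit-plane coder iterates the three passes over the 1D array; it omits the octal/tri/quad set-partitioning logic that distinguishes 3D-HLCK and should not be read as the authors' implementation.

import numpy as np

def bitplane_encode(xi, n_max, n_min=0):
    """Simplified three-pass bit-plane coding skeleton over a 1D array xi."""
    significant = np.zeros(len(xi), dtype=bool)    # tracks NSC/SCR coefficients
    bits = []
    for n in range(n_max, n_min - 1, -1):
        threshold = 1 << n
        newly = []
        # Pass 1 (and a degenerate Pass 2): test every still-insignificant coefficient.
        for k in np.flatnonzero(~significant):
            sig = (abs(int(xi[k])) & threshold) != 0
            bits.append(int(sig))
            if sig:
                bits.append(int(xi[k] < 0))        # sign bit for a newly significant coefficient
                newly.append(k)
        # Pass 3: one refinement bit for coefficients significant from earlier planes.
        for k in np.flatnonzero(significant):
            bits.append(int((abs(int(xi[k])) & threshold) != 0))
        if newly:
            significant[newly] = True              # NSC becomes SCR for the next bit plane
    return bits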
Functions and parameters used in pseudocode
Significance test function \((\zeta _n(\gamma ))\): the significance test is obtained by a logical AND \((\cap )\) operation.
Let a (2 × 2) block be \(\gamma =[-127 \;109\; 19\; -24]\), and let the current threshold exponent be \(n=6\). Then \(\zeta _{n}\) can be calculated as
$$\zeta_{\rm n} (\gamma ) =\sum \limits _{\rm all\, k} [(2^{\rm n} \le \left| \gamma ({\rm k}) \right| )\; \cap \;(\left| \gamma ({\rm k}) \right| \le {2}^{{{\rm n} + 1}} )]$$
$$\begin{array}{l} {\rm if} \;\zeta_{\rm n}(\gamma) = 0, {\rm then\; output} = 0 \\ {\rm else,\;partition\;the\;block}{.} \\ \end{array}$$
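A direct transcription of this test (hypothetical helper, written here only to reproduce the worked example) confirms that with n = 6 the coefficients −127 and 109 fall in the window \([2^6, 2^7]\), so the block is partitioned.

def zeta(block, n):
    """Significance count of a block for bit plane n, as defined above."""
    lo, hi = 2 ** n, 2 ** (n + 1)
    return sum(1 for c in block if lo <= abs(c) <= hi)

gamma = [-127, 109, 19, -24]
assert zeta(gamma, 6) == 2            # -127 and 109 are significant, so partition the block
assert zeta([19, -24, 7, 3], 6) == 0  # output 0; no partitioning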
Function QuadSplit( ):
The function partitions the subband into four equal-sized blocks. The algorithm for quad partitioning can be illustrated below:
$$\begin{array}{l} Dm[k] = Dm[k] - 1; \\ for\; j = 1,2,3 \\ \quad \quad Dm[k + (j \times 2^{2 \times Dm[k]}) ] = Dm[k] \\ end \\ \end{array}$$
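Reading the offset as \(j \times 2^{2\,Dm[k]}\) (i.e. \(j \times 4^{Dm[k]}\), the length of a child block after the decrement), the update above can be sketched in Python as follows. This reading is an assumption based on the block sizes implied by the marker values and is given only as an illustration.

def quad_split(dm, k):
    """Quad-partition the block with leading index k by updating the dynamic
    marker table dm in place (sketch; child blocks of 4**dm[k] coefficients)."""
    dm[k] -= 1
    child = 4 ** dm[k]                 # length of each of the four children in the linear index
    for j in (1, 2, 3):
        dm[k + j * child] = dm[k]      # mark the leading index of child j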
Note that if \(Dm[k]=0\), quad partitioning stops. The corresponding coefficient in block '\(\gamma\)' is an insignificant coefficient (INC), and it will be examined for significance in the insignificant coefficient pass (Pass 1) of the algorithm.
The OctalSplit() and TriSplit() functions are similar to the algorithm for QuadSplit(). OctalSplit() produces eight equal partitioned blocks, whereas TriSplit() produces three equal partitioned blocks.
If block '\(\gamma\)' is a composite coarsest subband, then '\(\gamma\)' undergoes octal partitioning (shown in Fig. 4a). Each partitioned block belongs to the coarsest wavelet level of the extracted plane of 3D medical image.
If block '\(\gamma\)' is a composite/combined wavelet level, then '\(\gamma\)' also undergoes octal partitioning (shown in Fig. 4b). Each partition corresponds to a wavelet level having three subbands.
If \(Dm[k] = Sm[k]\), then a combined wavelet level is to be tested for significance.
If \(Dm[k] = Sm[k]-1\), then a single wavelet level is to be tested for significance.
If \(Dm[k] = Sm[k]-2\), then a subband is to be tested for significance.
Comparison with listless embedded block partitioning (LEBP)
The main differences between our earlier work on LEBP algorithm (Senapati et al. 2014b) and 3D-HLCK are:
A 3D hybrid transform is used in 3D-HLCK (a wavelet transform using CDF 9/7 filters (Daubechies and Sweldens 1998) along the spatial dimension and a KLT along the spectral dimension), whereas a 2D wavelet transform is used in the LEBP algorithm.
The 3D coefficient arrangement is mapped to a 1D arrangement in order to encode large clusters of insignificant coefficients in 3D-HLCK. However, LEBP uses a 2D-to-1D mapping scheme.
Rate reduction is achieved at the initial passes in 3D-HLCK because of the fixed state table (Sm[k] markers). For example, \(Sm[k]=Dm[k]\) indicates that a composite wavelet level can be skipped, instead of a single wavelet level as in LEBP.
Separate encoding techniques are used in 3D-HLCK for combined coarsest and combined wavelet levels so as to reduce the number of zeros for insignificant coefficients in the coarsest subband.
Memory allocation
In 3D-HLCK, the mapped 3D coefficient array \(L_{max}\) has length 8I, where I is the 1D length of each slice/plane. If Y bytes are allocated for each subband coefficient, then the total storage memory required is 8IY for the subband coefficients and RC / 2 for the dynamic state table \(\mathbf{Dm}\), as each marker is half a byte. In the case of L levels of wavelet decomposition, \(\mathbf{Sm}\) needs \(\frac{(8L+1)}{2}\) bytes, as the number of fixed markers is \((8L+1)\) and each marker is half a byte.
Hence, the total memory needed by 3D-HLCK is:
$$M_{3D-HLCK}=8IY+RC/2+(8L+1)/2.$$
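The formula can be evaluated directly; the helper below simply implements the expression above (in bytes) and, as used later in the text, the state tables alone amount to roughly 8 kB for a 128 × 128 slice with L = 5.

def memory_3d_hlck(I, Y, R, C, L):
    """Total 3D-HLCK memory in bytes: coefficient storage + dynamic + static marker tables."""
    return 8 * I * Y + R * C / 2 + (8 * L + 1) / 2

state_tables_only = 128 * 128 / 2 + (8 * 5 + 1) / 2   # about 8 kB, matching the estimate below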
As said earlier, the Sm[k] markers are fixed. They are used in association with the Dm[k] markers to check for insignificance (refer to the pseudocode).
In the 3D-SPIHT coder, the dynamic memory is determined by the auxiliary lists. 3D-SPIHT uses LIP, LIS, and LSP as auxiliary lists. LIS has type 'A' or 'B' information to distinguish the coefficients.
Let, \(N_{LIP}\) be the number of coefficients in LIP, \(N_{LSP}\) be the number of coefficients in LSP, \(N_{LIS}\) be the number of coefficients in LIS, and Y be the number of bits to store the addressing information of a coefficient.
Then the total memory required (in bytes) due to auxiliary lists is given by Senapati et al. (2014a):
$$M_{3D-SPIHT}= [Y(N_{LIP}+N_{LIS}+N_{LSP})+ N_{LIS}]/8$$
As the memory size increases in each bit-plane pass, the worst-case values are
$$N_{LIP}+N_{LSP}=3\times M \times N, N_{LIS}=3\times (M \times N)/4.$$
The memory required by Jyotheswar and Mahapatra (2007) is \(\left(\frac{37}{16}+\frac{5}{16}\times (Y+1)\right)\times M\times N\times\) (No. of planes).
For a 128 × 128 image using 3 bytes per coefficient and five levels of wavelet transform, and having the optional pre-computed maximum length array (i.e., 8IY for 3D-HLCK), the worst-case memory (RAM) required is \(\frac{(128\times 128)}{2}\times (8\,{\rm bits})+\frac{(8L+1)}{2} \simeq 8\) kB for 3D-HLCK, 204 kB for 3D-SPIHT and 60 kB for the method of Jyotheswar and Mahapatra (2007). Therefore, 3D-HLCK is a preferable candidate over 3D-SPIHT and the work in Jyotheswar and Mahapatra (2007) in terms of memory saving. This calculation is based only on the memory consumption of the algorithms, without regard to the wavelet transform. Efficient wavelet transform techniques that take less memory have been reported in Mendlovic et al. (1997).
Results and discussion
Simulation was carried out on a Windows XP platform with an Intel Core i5 processor operating at a frequency of 2.6 GHz and 6 GB of internal RAM. The bit rate was varied from 0.5 to 2 bpp for compressing the images. Brain MRI, DICOM knee, and angiogram images were used in our experiment; each image has a size of 128 × 128. Tables 1, 2 and 3 summarise the PSNR comparison between 3D-SPIHT (Sriram and Shyamsunder 2011), the algorithm by Jyotheswar and Mahapatra (2007) and the proposed 3D-HLCK algorithm for brain MRI images. Tables 4, 5 and 6 summarise the PSNR comparison for DICOM knee images. Figures 5, 6, and 7 show compressed brain MRI images at a bit rate of 1.0 bpp using 3D-HLCK, the algorithm in Jyotheswar and Mahapatra (2007), and 3D-SPIHT respectively. It is apparent from Figs. 5, 6, and 7 that the visual quality of the compressed images using 3D-HLCK is better than that obtained by using 3D-SPIHT and comparable with the algorithm in Jyotheswar and Mahapatra (2007). Figures 8 and 9 show the DICOM knee and angiogram images compressed at 2.0 bpp using the proposed 3D-HLCK algorithm.
Table 1 PSNR comparison of brain MRI image at 0.5 bpp
Table 4 PSNR comparison of DICOM knee image at 0.5 bpp
Compressed MRI image slices (a–h) by 3D-HLCK at a BR = 1.0 bpp
Compressed MRI image slices (a–h) by Jyotheswar and Mahapatra (2007) at a BR = 1.0 bpp
Compressed MRI image slices (a-h) by 3D-SPIHT at a BR = 1.0 bpp
Compressed DICOM image slices (a–h)at a BR = 2.0 bpp
Compressed angiogram image slices (a–h) at a BR = 2.0 bpp using 3D-HLCK
Comparisons of the coding performance (PSNR vs. slice number) for MRI brain, DICOM knee, and MRI angiogram images at constant bit rates are summarised in Tables 1, 2, 3, 4, 5, 6, 7 and 8, respectively. It is observed that the proposed 3D-HLCK algorithm exhibits a PSNR improvement between 0.05 and 0.5 dB for MRI brain images and 0.05–0.6 dB for DICOM knee images for a bit rate of 0.5–2.0 bpp compared to 3D-SPIHT. 3D-SPIHT shows a higher PSNR than 3D-HLCK in the Slice-1, Slice-2, and Slice-7 images of the MRI angiogram set at 1 bpp. A similar trend is also observed at 2 bpp. In comparison to the work by Jyotheswar and Mahapatra (2007), 3D-HLCK shows an improvement of 0.05–0.5 dB for the given slices at 0.5 bpp. However, at higher rates, the work in Jyotheswar and Mahapatra (2007) shows a PSNR improvement of around 0.15 dB in the MRI images compared to 3D-HLCK. In the DICOM and angiogram images, the algorithm in Jyotheswar and Mahapatra (2007) shows a slight PSNR improvement with respect to 3D-HLCK.
Table 7 PSNR comparison of MRI angiogram image at 1.0 bpp
The proposed algorithm exhibits a better PSNR than 3D-SPIHT for the other slices for the following reasons:
3D SPIHT uses 3D DWT coefficients for encoding, whereas hybrid transformed (2D DWT+KLT) coefficients are encoded by 3D-HLCK.
Large clusters of zeros are efficiently coded (both inter and intra) by 3D-HLCK.
Coefficients are efficiently arranged among different subbands of slices to exploit inter- and intra-subband correlations within and across slices.
The work in Jyotheswar and Mahapatra (2007) outperforms 3D-HLCK at higher rates (above 1 bpp) for MRI images for the following reasons: (i) The execution of a refinement pass before the sorting pass. (ii) The ordering of the coefficient scanning process for simple hardware implementation. (iii) Optimisation for lossless encoding using 5/3 filters in the spatial and spectral dimensions.
The proposed 3D-HLCK algorithm will occupy a fixed amount of memory, irrespective of the number of bit-plane passes, owing to the fixed number of state table markers. Partitioning takes place by updating the marker values. Each marker holds a maximum 4 bits. The algorithm in Jyotheswar and Mahapatra (2007) requires a fixed memory size and exhibits simple hardware portability. However, in 3D-SPIHT, the linked lists (LIP, LIS, and LSP) add/remove/move additional nodes for every bit-plane pass. Therefore, the memory usage grows exponentially. Rate and resolution scalability on par with 3D-SPIHT is achieved by 3D-HLCK. Memory saving is trivial, as in most applications, the cost of memory is cheap. However, the proposed algorithm is potentially suitable for applications such as the progressive transmission of DICOM images, lossless archival, telemedicine, teleradiology, and capsule endoscopy. Therefore, 3D-HLCK can be a preferred option over 3D-SPIHT for the aforementioned applications. A further reduction in the overall complexity can be achieved by using fractional wavelet transforms (FrWTs) (Mendlovic et al. 1997) for such applications.
From the simulation, it is observed that the average encoding and decoding times for 3D-HLCK are 12 times more than those for 3D-SPIHT at 2 bpp. Further optimisation can be done for 3D-HLCK to reduce the time complexity. However, it can be proved mathematically that the computational complexity of 3D-HLCK will be O(N) operations compared to O(N log N) for 3D-SPIHT (Senapati et al. 2014a).
A new 3D coder called 3D-HLCK is proposed in this paper. Owing to its listless nature, 3D-HLCK achieves significant memory reductions of over 96% and 86% compared to 3D-SPIHT and the work by Jyotheswar and Mahapatra respectively, while offering rate and resolution scalability. On brain MRI, DICOM knee and angiogram images, a PSNR improvement of 0.05–0.5 dB is also achieved over 3D-SPIHT. The proposed coder exhibits coding efficiency and hardware portability comparable to the work by Jyotheswar and Mahapatra. It can therefore be used in applications such as telemedicine, teleradiology, wireless capsule endoscopy and the Internet transmission of DICOM images. Future work will incorporate additional features such as ROI coding, random access coding, and video coding using 3D-HLCK.
Daubechies I, Sweldens W (1998) Factoring wavelet transforms into lifting steps. J Fourier Anal Appl 4:267–345
Fowler JE, Rucker JT (2007) 3D wavelet based compression of hyperspectral imagery. In: Chang CI (ed) Hyperspectral data exploration: theory and applications. Wiley, New Jersey
Gibson D, Spann M, Woolley SI (2004) A wavelet based region of interest encoder for compression of angiogram video sequences. IEEE Trans Inf Tech Biomed 8(2):103–113
He C, Dong J, Zhang YF, Gao Z (2003) Optimal 3D coefficient tree structure for 3D wavelet video coding. IEEE Trans Circuit Syst Video Technol 13(10):961–972
Jyotheswar J, Mahapatra S (2007) Efficient FPGA implementation of DWT and modified SPIHT for lossless image compression. J Syst Archit 53(7):369–378
Latte M, Ayachit N, Deshpande D (2006) Reduced memory listless SPECK image compression. Digital Signal Process. 16:817–824
Lu Z, Pearlman WA (2001) Wavelet video coding of video object by object-based SPECK algorithm. In: International symposium on picture coding symposium (PCS-2001), pp 413–416
Mendlovic D, Zalevsky Z, Mas D, Garcia J, Ferreira C (1997) Fractional wavelet transform. Appl Opt 36(20):4801–4806
Ngadiran R, Boussakta S, Sharif B, Bouridane A (2010) Efficient implementation of 3D listless SPECK. In: International conference on computer and communication engineering (ICCCE), pp 1–4
Pennebaker WB, Mitchell JL (1993) JPEG still image compression standard. Chapman Hall, New York
Ramakrishnan B, Sriram N (2006) Internet transmission of DICOM images with effective low bandwidth utilization. Digital Signal Process 16:825–831
Santa-cruz D, Grosbois R, Ebrahimi T (2000) JPEG 2000 performance evaluation and assessment. Signal Process Image Commun 17:113–130
Schelkens P, Munteanu A, Barbarien J, Galca M, Giro-Nieto X, Cornelis J (2003) Wavelet coding of volumetric medical data sets. IEEE Trans Med Imaging 22(3):441–458
Senapati RK, Pati UC, Mahapatra KK (2013) Listless block tree set partitioning for very low bit rate image compression. AEUE 66(1):985–995
Senapati RK, Pati UC, Mahapatra KK (2014a) A reduced memory, low complexity embedded image coding algorithm using hierarchical listless DTT. IET Image Process 8(4):213–238
Senapati RK, Pati UC, Mahapatra KK (2014b) An improved listless embedded block partitioning algorithms for image compression. IJIG World Sci 14(4):1–32
Shyam Sunder R, Eswaran C, Sriram N (2006) Medical image compression using 3D-Heartley transform. Comput Biol Med 36:958–973
Srikanth R, Ramakrishnan AG (2005) Contextual encoding in uniform and adaptive mesh-based lossless compression of MR images. IEEE Trans Med Imaging 24(9):1199–1206
Sriram N, Shyamsunder R (2011) 3D medical image compression using 3-D wavelet coders. Digital Signal Process 21:100–109
Tai S-C, Wu Y-G, Lin C-W (2000) An adaptive 3D discrete cosine transform coder for medical image compression. IEEE Trans Inf Tech Biomed 4(3):259–263
Tang X, Pearlman WA, Modestino JW (2003) Hyperspectral image compression using 3D wavelet coding. Proc SPIE 5022:1037–1047
Udupa JK, Herman GT (2000) 3D imaging in medicine. CRC Press, Boca Raton
Wheeler F, Pearlman WA (2000) SPIHT image compression without lists. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), vol 4, pp 2047–2050
Wu X, Qiu T (2005) Wavelet coding of volumetric medical images for high throughput and operability. IEEE Trans Med Imaging 24(6):719–727
Xiong Z, Wu X, Cheng S, Hue J (2003) Lossy to lossless compression of medical volumetric data using 3D integer wavelet transform. IEEE Trans. Med Imaging 22(3):459–470
RKS first conceived the idea, carried out the simulation work and drafted the manuscript. PMK carried out the literature review. GS assisted in MATLAB simulation during revision process. TNS participated in the sequence alignment and assisted in correcting the vocabulary. All authors read and approved the final manuscript.
Department of ECE, K L University, Vaddeswaram, Guntur, Andhra Pradesh, 522502, India
Ranjan K. Senapati
GMR Institute of Technology, Rajam, Srikakulam, 532127, India
P. M. K Prasad
Department of CSE, K L University, Vaddeswaram, Guntur, Andhra Pradesh, 522502, India
Gandharba Swain
& T. N. Shankar
Correspondence to Ranjan K. Senapati.
Senapati, R.K., Prasad, P.M.K., Swain, G. et al. Volumetric medical image compression using 3D listless embedded block partitioning. SpringerPlus 5, 2100 (2016) doi:10.1186/s40064-016-3784-y
3D hierarchical listless embedded block
Set partitioning in hierarchical trees
Volumetric compression
Peak-signal-to-noise-ratio | CommonCrawl |
\begin{document}
\title{Second order arithmetic as the model companion of set theory} \author{Giorgio Venturi and Matteo Viale} \keywords{Model Companion, Generic Absoluteness, Forcing, Large Cardinals, Robinson Infinite Forcing} \subjclass[2010]{03C10, 03C25, 03E57, 03E55}
\begin{abstract} This is an introductory paper to a series of results linking generic absoluteness results for second and third order number theory to the model theoretic notion of model companionship. Specifically we develop here a general framework linking Woodin's generic absoluteness results for second order number theory and the theory of universally Baire sets to model companionship and show that (with the required care in details) a $\Pi_2$-property formalized in an appropriate language for second order number theory is forcible from some $T\supseteq\ensuremath{\mathsf{ZFC}}+$\emph{large cardinals} if and only if it is consistent with the universal fragment of $T$ if and only if it is realized in the model companion of $T$.
In particular we show that the first order theory of $H_{\omega_1}$ is the model companion of the first order theory of the universe of sets assuming the existence of class many Woodin cardinals, and working in a signature with predicates for $\Delta_0$-properties and for all universally Baire sets of reals.
We will extend these results also to the theory of $H_{\aleph_2}$ in a follow up of this paper. \end{abstract}
\maketitle
\section*{Introduction} This paper outlines a deep connection between two important threads of mathematical logic: the notion of model companionship, a central concept in model theory due to Robinson, and the notion of generic absoluteness, which plays a fundamental role in the current meta-mathematical investigations of set theory.
In order to unveil this connection, we proceed as follows: we enrich the first order language in which to formalize set theory by predicates whose meaning is as ``clear'' as that of the $\in$-relation, specifically we add predicates for $\Delta_0$-formulae and predicates for universally Baire sets of reals\footnote{It is a standard result of set theory that $\Delta_0$-formulae define absolute properties for transitive models of $\ensuremath{\mathsf{ZFC}}$. On the other hand the notion of universal Baireness captures exactly those sets of reals whose first order properties cannot be changed by means of forcing (for example all Borel sets of reals are universally Baire). Therefore these predicates have a meaning which is clear across the different models of set theory. See also the last part of this introduction.}
In this extended language we are able to apply Robinson's notions of model completeness and model companionship to argue that, assuming large cardinals, the first order theory of $H_{\omega_1}$ (the family of all hereditarily countable sets) is model complete and is the model companion of the first order theory of $V$ (the universe of all sets).
The study of model companionship goes back to the work of Abraham Robinson from the period 1950--1957 \cite{MacCompl}, and gives an abstract model-theoretic characterization of key closure properties of algebraically closed fields. Robinson introduced the notion of model completeness to characterize the closure properties of algebraically closed fields, and the notion of model companionship to describe the relation existing between these fields and the commutative rings without zero-divisors. Robinson then showed how to extend these notions and results to a variety of other classes of first order structures. On the other hand, generic absoluteness characterizes exactly those set theoretic properties whose truth value cannot be changed by means of forcing.
In \cite{VenturiRobinson} the first author found the first indication of a strict connection existing between these two apparently unrelated concepts. In this paper we will enlighten this connection much further.
Recall that a first order theory $T$ in a signature $\tau$ is model-complete if whenever $\mathcal{M}\sqsubseteq\mathcal{N}$ are models of $T$ with one a substructure of the other, we get that $\mathcal{M}\prec\mathcal{N}$; i.e. being a substructure amounts to being an elementary substructure.
The theory of algebraically closed fields has this property, as it occurs for all theories admitting quantifier-elimination. However, it is also the case that many natural theories not admitting quantifier-elimination are model-complete. Robinson regarded model-completeness as a strong indication of tameness for a first order theory.
A weak point of this notion is that being model complete is very sensitive to the signature in which a theory is formalized, to the extent that any theory $T$ in a signature $\tau$ has a conservative extension $T'$ in a signature $\tau'$, which admits quantifier elimination (it suffices to add symbols and axioms for Skolem functions to $\tau$ and $T$, \cite[Thm. 5.1.8]{TENZIE}). In particular we can always extend a first order language $\tau$ to a language $\tau'$ so as to make a $\tau$-theory $T$ model-complete with respect to $\tau'$. However if model-completeness of $T$ is shown with respect to a ``natural'' language in which $T$ can be formalized, then it provides much useful information on the combinatorial-algebraic properties of models of $T$.
Recall also that for a first order signature $\tau$, a $\tau$-theory $T$ is the model companion of a $\tau$-theory $S$ if $T$ is model complete and $S$ and $T$ are mutually consistent: i.e., every model of $T$ can be embedded in a model of $S$ and conversely.
The notion of model companionship is much more robust than model completeness. Consider the category $\mathcal{K}_{S,\tau}$ whose objects are the $\tau$-models of $S$ and whose arrows are given by the $\tau$-morphisms: knowing that $S$ admits a model companion gives non-trivial information on this category, and the change of signature from $\tau$ to $\tau'$ could bring our focus on something which is poorly related with $\mathcal{K}_{S,\tau}$. Assume we enlarge the signature from $\tau$ to $\tau'$ so that $S$ has quantifier elimination in the signature $\tau'$; this has strong consequences on the substructure relation, hence it could be the case that for models $\mathcal{M}$, $\mathcal{N}$ of $S$, $\mathcal{M}$ is a $\tau$-substructure of $\mathcal{N}$ but is no longer a $\tau'$-substructure of $\mathcal{N}$. In particular the category $\mathcal{K}_{S,\tau'}$ may not have much to say on the properties of $\mathcal{K}_{S,\tau}$.
Robinson's infinite forcing is loosely inspired by Cohen's forcing method and gives an elegant formulation of the notion of model companionship: a $\tau$-theory $T$ is the model companion of a $\tau$-theory $S$ if it is model complete and the models of $T$ are exactly the infinitely generic structures for Robinson's infinite forcing applied to models of $S$. In \cite{VenturiRobinson} a fundamental connection was described between the notion of being an infinitely generic structure and that of being a structure satisfying certain types of forcing axioms. This suggests an interesting parallel between a semantic approach \emph{\`a la Robinson} to the study of the models of set theory and generic absoluteness results.
The main result of this paper (Thm. \ref{thm:mainthm1}) shows that ---in a natural extension of the language of set theory (given by the addition of predicates for $\Delta_0$ formulas and all universally Baire sets of reals)--- the existence of class many Woodin cardinals implies that the model companion of the theory of the universe of all sets is the theory of $H_{\omega_1}$, and moreover that for $\Pi_2$-sentences provability overlaps with consistency and with forcibility. We consider our expansion of the language natural, because the added predicates are exactly those describing sets of reals whose truth properties are unaffected by the forcing method, and for which, therefore, we have a concrete and stable understanding of their behaviour. For example Borel sets of reals are universally Baire, all sets of reals defined by a $\Delta_0$-formula are universally Baire, and, assuming large cardinals, all universally Baire sets of reals have all the desirable regularity properties such as: Baire property, Lebesgue measurability, perfect set property, determinacy, etc. Moreover, assuming class many Woodin cardinals, such sets form a point-class closed under: projections, countable unions and intersections, complementation, continuous images, etc.
We also remark that:
\begin{itemize}
\item On the one hand Hirschfeld \cite{Hir} showed that any extension of $\ensuremath{\mathsf{ZF}}$ has a model companion in the signature $\bp{\in}$. His result however is uninformative (a consideration he himself made in \cite{Hir}), since the model companion of $\ensuremath{\mathsf{ZF}}$ for the signature $\bp{\in}$ turns out to be (a small variation of) the theory of dense linear orders; a theory for a binary relation which has not much in common with our understanding of the $\in$-relation. We consider this fact another indication of the naturalness of our choice of extending the standard first order language used to formalize set theory. Indeed, in the standard language containing only the $\in$-relation, there are many set-theoretical concepts which we consider basic, but whose formalization in first order logic is syntactically quite complex. For example being an ordered pair is a $\Delta_0$-property, but it is only $\Sigma_2$-expressible in the signature $\bp{\in}$. This discrepancy causes the ``anomaly'' of Hirschfeld's result, which is here resolved by adding predicates for all the concepts which are sufficiently simple and stable across the different models of set theory; these include the $\Delta_0$-properties and the universally Baire predicates.
\item On the other hand (and unlike Hirschfeld's result) the results of this paper have a highly non-constructive flavour and, to be rightly understood, require one to embrace a fully platonistic perspective on the ontology of sets.
This is why in this paper we assume that the universe of sets $V$ and the family of hereditarily countable sets $H_{\omega_1}$ are rightful elements of our semantics, which ---whenever endowed with suitably defined predicates and constants--- give well-defined first order structures for the appropriate signature. Of course it is possible to reformulate our results so as to make them compatible with a formalist approach to set theory \emph{\`a la Hilbert}, but in this case their meaning would be much less transparent;
hence we refrain here from pursuing this matter further.
On the other hand this weak point of our results will be completely addressed and resolved in the forthcoming \cite{VIATAMSTI}.
Then, in \cite{VIATAMSTII} we will investigate the correspondence between generic absoluteness
for $H_{\omega_2}$ assuming strong forcing axioms (see the monograph \cite{HSTLARSON} on $(*)$, and
the second author's \cite{VIAASP,VIAAUD14,VIAMM+++,VIAMMREV,VIAUSAX})
and model companionship. \end{itemize}
The main philosophical thesis we draw from the results of the present paper is that the success of large cardinals in solving problems of second-order arithmetic\footnote{All problems of second order arithmetic are first order properties of $H_{\omega_1}$.} via determinacy is due to the fact that these axioms make (in the appropriate language) the theory of $H_{\omega_1}$ the model-companion of the theory of $V$, and in particular a model complete theory.
The paper is structured as follows: \begin{itemize} \item \S \ref{sec:modeltheoreticcompl} recalls the basic facts on model companionship and on Robinson's infinite forcing.
\item In \S \ref{sec:modelcompanZFCgen} we offer reasons for the necessity of an expansion of the language of set theory, which includes at least predicates for the $\Delta_0$-properties, and eventually also for the universally Baire predicates.
\item \S \ref{sec:boolvalmod} first recalls a few important results on boolean-valued structures and generic absoluteness. Then we perform and justify the extension of the first order language of set theory, roughly described before, so as to include predicates for all $\Delta_0$-formulae; after relativizing the notion of model completeness to the generic multiverse, Theorem \ref{thm:omegaVmodcompl} shows that for this expanded language the theory of $H_{\omega_1}$ is the model companion of the theory of $V$ relative to the generic multiverse.
\item \S \ref{sec:nonmodcompletHomega1} shows that the theory of $H_{\omega_1}$ in the signature with predicates for the $\Delta_0$-properties is not model complete.
\item \S \ref{sec:modcompanUBpred} gives the proof of Theorem \ref{thm:mainthm1} showing that in a language admitting predicates for all the universally Baire sets, the theory of $H_{\omega_1}$ is the model companion of the theory of $V$ if we assume the existence of class many Woodin cardinals. We also outline why this result shows that forcibility, provability, and consistency overlaps for $\Pi_2$-sentences in this expanded signature. \end{itemize}
\section{Model theoretic background} \label{sec:modeltheoreticcompl}
We analyze certain classes of first order structures in a given first order signature $\tau$ and we will be interested just in theories consisting of sentences. To fix notation, if $T$ is a first order theory in the signature $\tau$, $\mathcal{M}_T$ denotes the $\tau$-structures which are models of $T$.
\begin{definition} A theory $T$ is \emph{model complete} if for all models $\mathcal{M}$ and $\mathcal{N}$ of $T$ we have that $\mathcal{M} \sqsubseteq \mathcal{N}$ ($\mathcal{M}$ is a substructure of $\mathcal{N}$) implies $\mathcal{M} \prec \mathcal{N}$ ($\mathcal{M}$ is an elementary substructure of $\mathcal{N}$). \end{definition}
\begin{definition} Let $\tau$ be a first order signature and $T$ be a theory for $\tau$. Given two models $\mathcal{M}$ and $\mathcal{N}$ of a theory $T$: \begin{itemize} \item $\mathcal{M}$ is \emph{existentially closed} in $\mathcal{N}$ ($\mathcal{M} \prec_1 \mathcal{N}$) if the existential and universal formulae with parameters in $\mathcal{M}$ have the same truth value in $\mathcal{M}$ and $\mathcal{N}$. \item $\mathcal{M}$ is existentially closed for $T$ if it is existentially closed in all its $\tau$-superstructures which are models of $T$. \end{itemize} \end{definition}
$\mathcal{E}_T$ denotes the class of $\tau$-models which are existentially closed for $T$.
Note that in general models in $\mathcal{E}_T$ need not be models\footnote{For example let $T$ be the theory of commutative rings with no zero divisors which are not algebraically closed fields. Then $\mathcal{E}_T$ is exactly the class of algebraically closed fields and no model in $\mathcal{E}_T$ is a model of $T$.} of $T$. Model completeness describes exactly when this is the case.
\begin{lemma}\cite[Lemma 3.2.7]{TENZIE} (Robinson's test) Let $T$ be a theory. The following are equivalent: \begin{enumerate} \item $T$ is model complete. \item For every $\mathcal{M} \sqsubseteq \mathcal{N}$ models of $T$ $\mathcal{M} \prec_1 \mathcal{N}$. \item $\mathcal{E}_T=\mathcal{M}_T$. \item Each $\tau$-formula is equivalent, modulo $T$, to a universal $\tau$-formula. \end{enumerate} \end{lemma}
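To fix ideas with a standard example (recalled only for illustration, it plays no role in our arguments): for the theory $\mathsf{ACF}$ of algebraically closed fields, whenever $K\sqsubseteq L$ are algebraically closed fields, a finite system of polynomial equations and inequations with coefficients in $K$ which has a solution in $L$ already has one in $K$; this gives $K\prec_1 L$, and Robinson's test then yields that $\mathsf{ACF}$ is model complete.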
Model completeness comes paired with another fundamental concept which generalizes to arbitrary first order theories the relation existing between algebraically closed fields and commutative rings without zero-divisors. As a matter of fact, the case described below occurs when $T^*$ is the theory of algebraically closed fields and $T$ is the theory of commutative rings with no zero divisors.
\begin{definition} Given two theories $T$ and $T^*$, in the same language $\tau$, $T^*$ is the \emph{model companion} of $T$ if the following conditions hold: \begin{enumerate} \item Each model of $T$ can be extended to a model of $T^*$. \item Each model of $T^*$ can be extended to a model of $T$. \item $T^*$ is model complete. \end{enumerate} \end{definition}
The model companion of a theory does not necessarily exist, but, if it does, it is unique.
\begin{theorem}\cite[Thm. 3.2.9]{TENZIE} A theory $T$ has, up to equivalence, at most one model companion $T^*$. \end{theorem}
Different theories can have the same model companion: for example the theory of fields and the theory of commutative rings with no zero-divisors which are not fields both have the theory of algebraically closed fields as their model companion.
\begin{remark} Using the fact that a theory $T$ is mutually consistent with its model companion $T^*$, i.e. the models of one theory can be extended to a model of the other theory and vice-versa, together with the fact that universal theories are closed under sub-models it is easy to show that a theory and its model companion agree on their universal sentences. \end{remark}
\begin{notation} In what follows, given a theory $T$, $T_{\forall}$ denotes the collection of all $\Pi_1$-sentences which are logical consequences of $T$. Similarly $T_{\exists}$ and $T_{\forall \exists}$ denote, respectively, the $\Sigma_1$ and the $\Pi_2$-theorems of $T$. \end{notation}
An important property of the existentially closed models of $T$ is that they are actually existentially closed for $T_\forall$. Using this fact one can prove the following:
\begin{theorem} Let $T$ be a first order theory. If its model companion $T^*$ exists, then \begin{enumerate} \item $T_{\forall} = T^*_{\forall}$. \item $T^*$ is the theory of the existentially closed models of $T_{\forall}$. \item $T^*$ is axiomatized by $T_{\forall \exists}$. \end{enumerate} \end{theorem}
Possibly inspired by Cohen's forcing method, Robinson introduced what is now called Robinson's infinite forcing \cite{HIRWHE75}. In this paper we are interested in a slight generalization of Robinson's definition which makes the class of models over which we define infinite forcing an additional parameter.
\begin{definition} Given a class of structures $\mathcal{C}$ for a signature $\tau$, \emph{infinite forcing for $\mathcal{C}$} is recursively defined as follows for a $\tau$-formula $\varphi(x_1,\dots,x_n)$, a structure $\mathcal{M} \in \mathcal{C}$ with domain $M$ and $a_1,\dots,a_n\in M$: \begin{itemize} \item For $\varphi(x_1,\dots,x_n)$ atomic, $\mathcal{M} \VDash_\mathcal{C}\varphi(a_1,\dots,a_n)$ if and only if $\mathcal{M} \models\varphi(a_1,\dots,a_n)$; \item $\mathcal{M} \VDash_\mathcal{C} \varphi(a_1,\dots,a_n)\land \psi(a_1,\dots,a_n)$ if and only if $\mathcal{M} \VDash_\mathcal{C} \varphi(a_1,\dots,a_n)$ and $\mathcal{M} \VDash_\mathcal{C} \psi(a_1,\dots,a_n)$; \item $\mathcal{M} \VDash_\mathcal{C} \varphi(a_1,\dots,a_n) \lor \psi(a_1,\dots,a_n)$ if and only if $\mathcal{M} \VDash_\mathcal{C} \varphi(a_1,\dots,a_n)$ or $\mathcal{M} \VDash_\mathcal{C} \psi(a_1,\dots,a_n)$; \item $\mathcal{M} \VDash_\mathcal{C} \forall x\varphi(x,a_1,\dots,a_n)$ if and only if (expanding $\tau$ with constant symbols for all elements of $M$) $\mathcal{M} \VDash_\mathcal{C} \varphi(a,a_1,\dots,a_n)$, for every $a \in M$; \item $\mathcal{M} \VDash_\mathcal{C} \neg \varphi(a_1,\dots,a_n)$ if and only if $\mathcal{N} \not\VDash_\mathcal{C}\varphi(a_1,\dots,a_n)$ for all $\mathcal{N}\in \mathcal{C}$ superstructures of $\mathcal{M}$. \end{itemize} \end{definition} Robinson's infinite forcing considers only the case in which $\mathcal{C}=\mathcal{M}_T$. We are interested in considering Robinson's infinite forcing also in the case where $\mathcal{C}$ is not of this type.
As in the case of Cohen's forcing, this method produces objects that are generic: in this case, generic models.
\begin{notation} Given a class of structures $\mathcal{C}$ for a signature $\tau$, a structure $\mathcal{M}\in\mathcal{C}$ is \emph{infinitely generic for $\mathcal{C}$} whenever satisfaction and infinite forceability coincide: i.e., for every formula $\varphi(x_1,\dots,x_n)$ and $a_1,\dots,a_n\in M$, we have \[ \mathcal{M} \vDash \varphi(a_1,\dots,a_n) \iff \mathcal{M} \VDash_\mathcal{C} \varphi(a_1,\dots,a_n). \] By $\mathcal{F}_\mathcal{C}$ we indicate the class of infinitely generic structures for $\VDash_{\mathcal{C}}$. \end{notation}
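To see how satisfaction and infinite forceability can diverge, consider the following standard example (stated only for illustration): let $\mathcal{C}$ be the class of fields in the signature of rings and $\psi$ the sentence $\forall x\,\neg(x\cdot x+1=0)$. Unwinding the clauses above, a field $\mathcal{N}\in\mathcal{C}$ satisfies $\mathcal{N}\VDash_\mathcal{C}\psi$ exactly when $-1$ has no square root in $\mathcal{N}$. Now $\mathbb{Q}\models\neg\neg\psi$, but $\mathbb{Q}\not\VDash_\mathcal{C}\neg\neg\psi$, since its superstructure $\mathbb{Q}(i)\in\mathcal{C}$ forces $\neg\psi$. Hence $\mathbb{Q}$ is not infinitely generic for $\mathcal{C}$; indeed it is a classical fact that the infinitely generic structures for the class of fields are exactly the algebraically closed fields.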
Generic structures capture semantically the syntactic notion of model companionship.
\begin{theorem} Let $T$ be a theory in a signature $\tau$. The following are equivalent: \begin{enumerate} \item $T^*$ exists. \item $\mathcal{E}_T$ is an elementary class. \item $\mathcal{F}_T$ is an elementary class. \item $\mathcal{E}_T=\mathcal{F}_{\mathcal{M}_{T_\forall}}$ (i.e. the existentially closed structures for $T$ are the generic structures for Robinson's infinite forcing applied to the class $\mathcal{M}_{T_\forall}$). \end{enumerate} \end{theorem}
Motivated by the above theorem, we will analyze the generic multiverse keeping in mind Robinson's characterization of model companionship by means of infinite forcing.
\section{Model companions for $\ensuremath{\mathsf{ZFC}}$: do they exist?} \label{sec:modelcompanZFCgen}
We already outlined that the model completeness of a theory is sensitive to the language in which that theory is expressed. We now embark on the task of selecting the right first order language to use for the construction of the model companion of (extensions of) $\ensuremath{\mathsf{ZFC}}$. We will first argue that (at least for the purposes of studying set theory by means of first order logic) this is neither the language $\{\in\}$ nor the language $\{\in,\subseteq\}$, even if these are the languages in which set theory is usually formalized in almost all textbooks.
As a preliminary result, we have that the model companion of $\ensuremath{\mathsf{ZF}}$ for the language $\{\in\}$ has already been fully described. \begin{theorem} (Hirschfeld \cite[Thm. 1, Thm. 5]{Hir}) The universal theory of any $T\supseteq \ensuremath{\mathsf{ZF}}$ in the signature $\bp{\in}$ is the theory \[ S = \bp{\forall x_1\dots\forall x_n(x_1\notin x_2\vee x_2\notin x_3\vee\dots\vee x_{n-1}\notin x_n\vee x_n\notin x_1): \,n\in\mathbb{N}}. \] Letting for $A\subseteq n$ \[ \delta_A(x_1,\dots,x_n,y)=\bigwedge_{i\in A} x_i\in y\wedge\bigwedge_{i\not\in A} x_i\not\in y, \] the model companion of $\ensuremath{\mathsf{ZF}}$ is the theory \begin{align*} S^*=&\bp{\forall x_1\dots x_n \exists y\,\delta_A(x_1,\dots,x_n,y): n\in\omega, \,A\subseteq n}\cup\\ &\cup\bp{\forall x,y\,\exists z[x=y\vee (x\in z\wedge z\in y)\vee(y\in z\wedge z\in x)]}. \end{align*} \end{theorem}
In particular $S^*$ is also the model companion of $\ensuremath{\mathsf{ZFC}}$, given that $S$ is the universal theory of any $T\supseteq\ensuremath{\mathsf{ZF}}$, among which $\ensuremath{\mathsf{ZFC}}$.
Notice that $S$ only says that the graph of the $\in$-relation has no loops, while Hirschfeld also shows that in every model of $S^*$ the formula $\exists x(a\in x\wedge x\in b)$ defines a dense linear order without endpoints \cite[Thm. 3]{Hir}. In particular there is no apparent relation between the meaning of the $\in$-relation in a model of $\ensuremath{\mathsf{ZF}}$ (in its standard models it is a well-founded relation not linearly ordered) and the meaning of the $\in$-relation in models of $S^*$ (it is a dense linear order without end-points).
We believe (as Hirschfeld) that the above result gives a clear mathematical insight into why the language $\bp{\in}$ is not expressive enough to describe the ``right'' model companion of set theory. A key issue is the following: we are inclined to consider concepts and properties which can be formalized by formulae with bounded quantifiers much simpler and more concrete than those which can only be formalized by formulae which make use of unrestricted quantification. This is reflected by the fact that properties formalizable by means of formulae with bounded quantifiers are absolute between transitive models of $\ensuremath{\mathsf{ZFC}}$. This fact fails badly for properties defined by means of unbounded quantification.
For example the property \emph{$f$ is a function} is expressible using only bounded quantification, while the property \emph{$\kappa$ is a cardinal} is not. It is well known that the former is a property that is absolute between transitive models of $\ensuremath{\mathsf{ZFC}}$ containing $f$, while the latter is not. However if one tries to formalize by means of a first order formula in the signature $\in$ the formula \emph{$f$ is a function}, one realizes that any such formalization is a very complicated formula. For example already the formalization of the concept \emph{$x$ is the ordered pair with first component $y$ and second component $z$} by means of Kuratowski gives an $\in$-formula $\phi(x,y,z)$ which is $\Sigma_2$: \[ \exists t\exists u\;[\forall w\,(w\in x\leftrightarrow w=t\vee w=u) \wedge\forall v\,(v\in t\leftrightarrow v=y) \wedge\forall v\,(v\in u\leftrightarrow v=y\vee v=z)]. \]
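By contrast (we spell this out only to make the comparison explicit), in a logic allowing bounded quantifiers the same concept is captured by the formula \[ \exists t\in x\,\exists u\in x\,[y\in t\wedge\forall v\in t\,(v=y)\wedge y\in u\wedge z\in u\wedge\forall v\in u\,(v=y\vee v=z)\wedge\forall w\in x\,(w=t\vee w=u)], \] in which every quantifier is bounded; this is the sense in which being an ordered pair is a $\Delta_0$-property.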
It is also a matter of fact that absolute properties are regarded as ``tame'' set theoretic properties (as their truth value cannot be changed by forcing, e.g \emph{$f$ is a function}
remains true in any transitive model to which $f$ belongs), while non-absolute ones are more difficult to control (they are not immune to forcing). For example, when $\kappa$ is an uncountable cardinal of the ground model, it will cease to be so in any generic extension by $\Coll(\omega,\kappa)$.
These are good reasons that lead to formalize set theory in a first order language able to recognize syntactically the different semantic complexity of absolute and non-absolute concepts. As Hirschfeld showed, this is not the case for the $\ensuremath{\mathsf{ZF}}$-axioms in the language $\bp{\in}$.
In Kunen and Jech's standard textbooks the solution adopted is that of passing from first order logic to a logic with bounded quantifiers $\exists x\in y$ and $\forall x\in y$, binding the variable $x$, so that $\exists x\in y\phi(x,y,\vec{z})$ is logically equivalent to $\exists x(x\in y\wedge\phi(x,y,\vec{z}))$ and $\forall x\in y\phi(x,y,\vec{z})$ is logically equivalent to $\forall x(x\in y\rightarrow\phi(x,y,\vec{z}))$. In this new logic \emph{$f$ is a function} is expressible by a formula with only bounded quantifiers, while \emph{$\kappa$ is a cardinal} is expressible by a formula of type $\forall x\phi(x,\kappa)$ with $\phi$ having only bounded quantifiers. However, Kunen and Jech's solution is not convenient for the purposes of this paper, because it formalizes set theory outside first order logic, making it less transparent how we could use model theoretic techniques (designed expressly for first order logic) to isolate the correct model companion of set theory. The alternative solution we adopt in this paper is that of expressing set theory in a first order language with relational symbols for any bounded formula.
\begin{notation}\label{not:keynotation} \emph{}
\begin{itemize} \item $\tau_{\mathsf{ST}}$ is the extension of the first order signature $\bp{\in}$ for set theory which is obtained by adjoining predicate symbols $R_\phi$ of arity $n$ for any $\Delta_0$-formula $\phi(x_1,\dots,x_n)$, and constant symbols for $\omega$ and $\emptyset$.
\item $\ensuremath{\mathsf{ZFC}}^{-}$ is the $\in$-theory given by the axioms of $\ensuremath{\mathsf{ZFC}}$ minus the power-set axiom.
\item $T_{\ST}$ is the $\tau_{\ST}$-theory given by the axioms \[ \forall \vec{x} \,[R_{\forall x\in y\phi}(y,\vec{x})\leftrightarrow \forall x(x\in y\rightarrow R_\phi(y,x,\vec{x}))] \] \[ \forall \vec{x} \,[R_{\phi\wedge\psi}(\vec{x})\leftrightarrow (R_{\phi}(\vec{x})\wedge R_{\psi}(\vec{x}))] \] \[ \forall \vec{x} \,[R_{\neg\phi}(\vec{x})\leftrightarrow \neg R_{\phi}(\vec{x})] \]
for all $\Delta_0$-formulae $\phi(\vec{x})$, together with the $\Delta_0$-sentences \[ \forall x\in\emptyset\,\neg(x=x), \] \[ \omega\text{ is the first infinite ordinal} \] (the former is an atomic $\tau_{\ST}$-sentence, the latter is expressible as the atomic sentence for $\tau_{\ST}$ stating that $\omega$ is a non-empty limit ordinal and all its elements are successor ordinals or $0$). \item $\ensuremath{\mathsf{ZFC}}^-_{\ST}$ is the $\tau_{\ST}$-theory \[ \ensuremath{\mathsf{ZFC}}^{-}\cup T_{\ST}, \] accordingly we define $\ensuremath{\mathsf{ZFC}}_{\ST}$. \end{itemize} \end{notation}
In $\ensuremath{\mathsf{ZFC}}_{\ST}$ we now obtain that many absolute concepts (such as that of being a function) are now expressed by atomic formulas, while other more complicated ones (like for example those defined by means of transfinite recursion over an absolute property, such as \emph{$x$ is the transitive closure of $y$}) can still be expressed by means of $\ensuremath{\mathsf{ZFC}}^{-}_{\ST}$-provably $\Delta_1$-properties of $\tau_{\ST}$ (i.e. properties which are $\ensuremath{\mathsf{ZFC}}^{-}_{\ST}$-provably equivalent
at the same time to a $\Pi_1$-formula and to a $\Sigma_1$-formula), which therefore are still absolute between any two models (even non-transitive) $\mathcal{M}$, $\mathcal{N}$ of $\ensuremath{\mathsf{ZFC}}_{\ST}$ of which one is a substructure of the other. On the other hand many definable properties have truth values which may vary depending on which model of $\ensuremath{\mathsf{ZFC}}_{\ST}$ we work in (for example \emph{$\kappa$ is an uncountable cardinal} is a $\Pi_1\setminus\Sigma_1$-property in $\ensuremath{\mathsf{ZFC}}_{\ST}$ whose truth value may depend on the choice of the model of $\ensuremath{\mathsf{ZFC}}_{\ST}$ to which $\kappa$ belongs).
Our first aim is to identify what could be the model companion of $\ensuremath{\mathsf{ZFC}}_{\ST}$. To this aim, first recall that Levy's absoluteness gives that $H_{\omega_1}\prec_{\Sigma_1}V$, and that for any set $X$ there is a forcing extension in which $X$ is countable (just force with $\Coll(\omega,X)$). In particular one can argue that the $\Pi_2$-assertion $\forall X\exists f:\omega\to X\emph{ surjective}$ is generically true for Robinson's infinite forcing applied to the forcing extensions of $V$. Notice that $H_{\omega_1}\models\forall X\exists f:\omega\to X\emph{ surjective}$.
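For instance (we spell this out only for the reader's convenience), in the signature $\tau_{\ST}$ the above assertion becomes the $\Pi_2$-sentence \[ \forall X\,\exists f\,R_{\phi}(f,\omega,X), \] where $\phi(f,\omega,X)$ is a $\Delta_0$-formula expressing that $f$ is a surjective function with domain $\omega$ and range $X$.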
The natural conjecture is to infer that the $\tau_{\ST}$-theory of $H_{\omega_1}$ is the model companion of the $\tau_{\ST}$-theory of $V$. We now show exactly to which extent the conjecture is true, while proving that it is false.
Towards this aim we conclude this section introducing a semantic and relativized notion of model completeness and model companionship.
\begin{definition} Let $\tau$ be a first order signature. Given a category $(\mathcal{C},\to_{\mathcal{C}})$ of $\tau$-structures (elements of $\mathcal{C}$) and morphisms between them (elements of $\to_\mathcal{C}$), a class $\mathcal{D}\subseteq\mathcal{C}$ is the class of its generic structures if: \begin{itemize} \item For every structure $\mathcal{M}$ in $\mathcal{C}$ there is $\mathcal{N}\in\mathcal{D}$ and $i:\mathcal{M}\to\mathcal{N}$ in $\to_\mathcal{C}$. \item For every $i:\mathcal{M}\to\mathcal{N}$ in $\to_{\mathcal{C}}$ with $\mathcal{M},\mathcal{N}$ in $\mathcal{D}$, $i$ is $\Sigma_1$-elementary, i.e. $i[\mathcal{M}]\prec_1\mathcal{N}$. \end{itemize} If $\mathcal{D}=\mathcal{C}$ we say that the category $(\mathcal{C},\to_{\mathcal{C}})$ is model complete. \end{definition} In particular if $\mathcal{C}$ is the class of models of $T$, $\to_\mathcal{C}$ is the class of all morphisms between models of $T$, and $\mathcal{D}=\mathcal{M}_S$, $S$ is the model companion of $T$.
\section{Boolean valued models and generic absoluteness}\label{sec:boolvalmod}
Our first aim is to outline which first order properties are first order invariant with respect to the forcing method. Toward this aim we recall some standard facts on boolean-valued models for set theory, giving appropriate references for the relevant proofs (in particular \cite{BELLSTBVM}, or \cite{VIAAUDSTE13}, the forthcoming \cite{VIAAUDSTEBOOK}, the notes \cite{viale-notesonforcing}). In what follows we do not consider languages with function symbols in order to avoid some technical difficulties.
\begin{definition} Let $\mathcal{L}=\bp{R_i: i\in I}\cup\bp{c_j: j\in J}$ be a relational language, and $\bool{B}$ a boolean algebra. A $\bool{B}$-valued model for $\mathcal{L}$ is a tuple $\mathcal{M}=\bp{M}\cup\bp{=^\mathcal{M}_{\bool{B}}}\cup\bp{R_{i\bool{B}}^\mathcal{M}: i\in I}\cup\bp{c_j^\mathcal{M}: j\in J}$, where: \begin{enumerate} \item $M$ is a non-empty set; \item $=^\mathcal{M}_{\bool{B}}$ is the boolean value of the equality symbol, i.e. a function \begin{align*} =^\mathcal{M}_{\bool{B}}:&\ M^2\to\bool{B};\\ \ap{x,&y}\mapsto\Qp{x=y}^\mathcal{M}_{\bool{B}} \end{align*} \item $R_{\bool{B}}^\mathcal{M}$ is the interpretation of the relational symbol $R$. If $R$ has arity $n$, \begin{align*} R_{\bool{B}}^\mathcal{M}:&\ M^n\to\bool{B};\\ \ap{x_1, \dots,&x_n}\mapsto\Qp{R(x_1, \dots, x_n)}^\mathcal{M}_{\bool{B}} \end{align*} \item $c_j^\mathcal{M}\in M$ is the interpretation of the constant symbol $c_j$. \end{enumerate} \noindent We require that the following conditions hold: \begin{itemize} \item for all $x, y, z\in M$, \begin{equation} \Qp{x=x}^\mathcal{M}_{\bool{B}}=1_{\bool{B}}, \end{equation} \begin{equation} \Qp{x=y}^\mathcal{M}_{\bool{B}}=\Qp{y=x}^\mathcal{M}_{\bool{B}}, \end{equation} \begin{equation} \Qp{x=y}^\mathcal{M}_{\bool{B}}\wedge\Qp{y=z}^\mathcal{M}_{\bool{B}}\leq\Qp{x=z}^\mathcal{M}_{\bool{B}}; \end{equation} \item if $R\in \mathcal{L}$ is an $n$-ary relational symbol, for every $\ap{x_1, \dots, x_n}, \ap{y_1, \dots, y_n}\in M^n$, \begin{equation} \Bigl(\bigwedge_{i=1}^n\Qp{x_i=y_i}^\mathcal{M}_{\bool{B}}\Bigr)\wedge \Qp{R(x_1, \dots, x_n)}^\mathcal{M}_{\bool{B}}\leq\Qp{R(y_1, \dots, y_n)}^\mathcal{M}_{\bool{B}}. \end{equation} \end{itemize}
\end{definition}
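To make the definition concrete, here is a toy example (which plays no further role in the paper): let $\bool{B}$ be the four element boolean algebra $\bp{0_{\bool{B}},b,\neg b,1_{\bool{B}}}$, let $\mathcal{L}=\bp{R}$ with $R$ a unary relation symbol, and let $M=\bp{x,y}$. Setting $\Qp{x=x}=\Qp{y=y}=1_{\bool{B}}$, $\Qp{x=y}=\Qp{y=x}=b$, $\Qp{R(x)}=1_{\bool{B}}$ and $\Qp{R(y)}=b$ defines a $\bool{B}$-valued model: the only non-trivial requirement to check is $\Qp{x=y}\wedge\Qp{R(x)}\leq\Qp{R(y)}$, which here reads $b\wedge 1_{\bool{B}}\leq b$.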
From here on, if no confusion can arise, we omit the superscript $\mathcal{M}$ and the subscript $\bool{B}$. Moreover, we will write $\mathcal{M}$ or $M$ interchangeably to indicate a boolean valued model or its underlying set.
In general it makes sense to define the first order semantics also for certain $\bool{B}$-valued models with $\bool{B}$ not complete. However in this paper we can limit ourselves to defining this semantics only for $\bool{B}$-valued models with $\bool{B}$ a complete boolean algebra.
\begin{definition} Let $\mathcal{L}=\bp{R_i: i\in I}\cup\bp{c_j: j\in J}$ be a relational language, $\bool{B}$ a complete boolean algebra, $\mathcal{M}$ a $\bool{B}$-valued model. We evaluate the formulae of $\mathcal{L}(M):=\mathcal{L}\cup\bp{c_\tau: \tau\in M}$ without free variables in the following way: \begin{itemize} \item $\Qp{R(c_{\tau_1}, \dots, c_{\tau_n})}:=\Qp{R(\tau_1, \dots, \tau_n)}$; \item $\Qp{\varphi\wedge\psi}:=\Qp{\varphi}\wedge\Qp{\psi}$; \item $\Qp{\neg\varphi}:=\neg\Qp{\varphi}$; \item $\Qp{\varphi\rightarrow\psi}:=\neg\Qp{\varphi}\vee\Qp{\psi}$; \item $\Qp{\exists x\varphi(x, c_{\tau_1}, \dots, c_{\tau_n})}:= \bigvee_{\sigma\in M}\Qp{\varphi(c_\sigma, c_{\tau_1}, \dots, c_{\tau_n})}$; \item $\Qp{\forall x\varphi(x, c_{\tau_1}, \dots, c_{\tau_n})}:= \bigwedge_{\sigma\in M}\Qp{\varphi(c_\sigma, c_{\tau_1}, \dots, c_{\tau_n})}$. \end{itemize} Given an assignment $\nu$ of the free variables to $\mathcal{M}$ and an $\mathcal{L}$-formula $\varphi(x_1,\dots,x_n)$ \[ \Qp{\varphi(x_1, \dots, x_n)}^{\mathcal{M},\nu}_\bool{B}=\Qp{\varphi(\nu(x_1), \dots, \nu(x_n))}. \] \end{definition} We write $\Qp{\varphi(\tau_1, \dots,\tau_n)}$ rather than $\Qp{\varphi(c_{\tau_1}, \dots, c_{\tau_n})}^{\mathcal{M}}_\bool{B}$.
Observe that, if $\bool{B}=\bp{0, 1}$, a $\bool{B}$-model is simply a Tarski structure for the language $L$, and the semantics we have just defined is Tarski semantics. \begin{definition} A statement $\varphi$ in the language $L$ is \emph{valid} in a $\bool{B}$-valued model $\mathcal{M}$ for $L$ if $\Qp{\varphi}^\mathcal{M}_{\bool{B}}=1_{\bool{B}}$. A theory $T$ is valid in $\mathcal{M}$ if every axiom of $T$ is valid in $\mathcal{M}$. \end{definition} \noindent It can be proved (see the proof of \cite[Theorem 4.1.5]{viale-notesonforcing}) that, if $\varphi(x_1, \dots, x_n)$ is a formula with free variables $x_1, \dots, x_n$ and $\tau_1, \dots, \tau_n, \sigma_1, \dots, \sigma_n\in M$, then \begin{equation} \Qp{\tau_1=\sigma_1}\wedge\dots\wedge\Qp{\tau_n=\sigma_n}\wedge\Qp{\varphi(\tau_1, \dots, \tau_n)}\leq \Qp{\varphi(\sigma_1, \dots, \sigma_n)}. \end{equation} \noindent From here on, we will take this fact for granted. \begin{definition} Let $\bool{B}$ be a complete boolean algebra and let $L=\bp{R_i: i\in I,\, c_j:j\in J}$ be a first order relational language. Let $\mathcal{M}=\bp{M, R_i^\mathcal{M}: i\in I}$ be a $\bool{B}$-valued model. Let $F$ be a filter in $\bool{B}$. We define the \emph{quotient} $\mathcal{M}/_F=\bp{M/_F, R_i/_F}$ of $\mathcal{M}$ by $F$ as follows: \begin{itemize} \item $M/_F:=\bp{[\tau]_F: \tau\in M}$, where $[\tau]_F:=\bp{\sigma\in M: \Qp{\tau=\sigma}\in F}$; \item $\Qp{R_i([\tau_1]_F, \dots, [\tau_n]_F)}^{\mathcal{M}/_F}:=\Bigl[\Qp{R_i(\tau_1, \dots, \tau_n)}^\mathcal{M}\Bigr]_F\in\bool{B}/_F$ for every $i\in I$. \item $c_j$ is interpreted by $[c_j^{\mathcal{M}}]_F$. \end{itemize} \end{definition} \noindent It is possible to see that $\mathcal{M}/_F$ is a $\bool{B}/_F$-valued model (even if $\bool{B}/_F$ is not complete). In particular, if $U$ is an ultrafilter, $\mathcal{M}/_U$ is a $2$-valued model, i.e. a classical Tarski structure.
\begin{definition} Assume $\bool{B}$ is a complete boolean algebra. A $\bool{B}$-valued model $\mathcal{M}$ for the signature $\mathcal{L}$ is \emph{full} if for any $\mathcal{L}$-formula $\phi(x_0,\dots,x_n)$ and $\tau_1,\dots,\tau_n\in \mathcal{M}$ \[ \Qp{\exists x\phi(x,\tau_1,\dots,\tau_n)}^{\mathcal{M}}_{\bool{B}}=\Qp{\phi(\sigma,\tau_1,\dots,\tau_n)}^{\mathcal{M}}_{\bool{B}} \] for some $\sigma\in \mathcal{M} $. \end{definition}
\begin{lemma}[Lemma 14.14 \cite{JECH}] Assume $\bool{B}$ is a complete boolean algebra and $\mathcal{M}$ is a full $\bool{B}$-valued model for $\mathcal{L}$. Then for all $\tau_1,\dots,\tau_n\in\mathcal{M}$ and ultrafilter $G$ on $\bool{B}$ \[ \mathcal{M}/_G\models \phi([\tau_1]_G,\dots,[\tau_n]_G) \text{ if and only if } \Qp{\phi(\tau_1,\dots,\tau_n)}^\mathcal{M}\in G. \] \end{lemma}
A property implying fullness is the mixing property:
\begin{definition} Let $\bool{B}$ be a complete boolean algebra and $\mathcal{M}$ a $\bool{B}$-valued model.
$\mathcal{M}$ has the mixing property if for all antichains $A$ on $\bool{B}$ and all $\bp{\tau_a:a\in A}\subseteq \mathcal{M}$ there exists $\tau \in \mathcal{M}$ such that \[ \Qp{\tau=\tau_a}^{\mathcal{M}}\geq a \] for all $a\in A$. \end{definition}
Note that the mixing property for the $\bool{B}$-valued model $\mathcal{M}$ depends only on the interpretation of the equality symbol, while fullness depends on the interpretation of all relation symbols of $\mathcal{L}$ in $\mathcal{M}$.
\begin{lemma} Let $\bool{B}$ be a complete boolean algebra and assume $\mathcal{M}$ is a $\bool{B}$-valued model with the mixing property. Then $\mathcal{M}$ is full. \end{lemma} \begin{proof} This is a variation of \cite[Lemma 14.19]{JECH}; alternatively, see \cite[Proposition 2.1.7]{PIEROBONTHESIS}. \end{proof}
\subsection{The model companion of set theory for the generic multiverse} \label{sec:modelcompsetth}
Recall that $V$ denotes the universe of all sets, and for any complete boolean algebra $\bool{B}\in V$ \[ V^{\bool{B}}=\bp{\tau: \, \tau:X\to \bool{B} \text{ is a function with $X\subseteq V^{\bool{B}}$ a set}} \] is the boolean valued model for set theory generated by forcing with $\bool{B}$.
$V^{\bool{B}}$ is endowed with the structure of a $\bool{B}$-valued model for the language of set theory $\mathcal{L}=\bp{\in,\subseteq}$, letting (see \cite[Def. 5.1.1]{viale-notesonforcing} for details) \begin{equation} \Qp{\tau_1\in\tau_2}_{\bool{B}}=\bigvee_{\sigma\in\dom(\tau_2)} (\Qp{\tau_1=\sigma}_{\bool{B}}\wedge\tau_2(\sigma)), \end{equation} \begin{equation} \Qp{\tau_1\subseteq\tau_2}_{\bool{B}}=\bigwedge_{\sigma\in\dom(\tau_1)} (\neg\tau_1(\sigma)\vee\Qp{\sigma\in\tau_2}_{\bool{B}}), \end{equation} \begin{equation} \Qp{\tau_1=\tau_2}_{\bool{B}}= \Qp{\tau_1\subseteq\tau_2}_{\bool{B}}\wedge \Qp{\tau_2\subseteq\tau_1}_{\bool{B}}. \end{equation}
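As a simple sanity check of these clauses (included only for illustration), fix $b\in\bool{B}$ and let $\tau=\bp{\ap{\check{\emptyset},b}}$. Then $\Qp{\check{\emptyset}\in\tau}_{\bool{B}}=\Qp{\check{\emptyset}=\check{\emptyset}}_{\bool{B}}\wedge b=b$, while $\Qp{\tau\subseteq\check{\emptyset}}_{\bool{B}}=\neg b\vee\Qp{\check{\emptyset}\in\check{\emptyset}}_{\bool{B}}=\neg b$ and $\Qp{\check{\emptyset}\subseteq\tau}_{\bool{B}}=1_{\bool{B}}$ (an empty infimum), so that $\Qp{\tau=\check{\emptyset}}_{\bool{B}}=\neg b$: the name $\tau$ denotes $\bp{\emptyset}$ with boolean value $b$ and $\emptyset$ with boolean value $\neg b$.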
By \cite[Thm 1.17]{BELLSTBVM} $V^{\bool{B}}$ is a $\bool{B}$-valued model for $\bp{\in,\subseteq,=}$. By \cite[Thm 1.33]{BELLSTBVM} all axioms of $\ensuremath{\mathsf{ZFC}}$ get boolean value $1_\bool{B}$ in $V^{\bool{B}}$. By \cite[Lemma 14.18]{JECH} $V^{\bool{B}}$ has the mixing property.
The class of models we will analyze is given by the generic extensions of initial segments of $V$. To make this precise we need a couple of definitions.
\begin{definition}\label{def:HkappaB} Let $\bool{B}$ be a complete boolean algebra and $\dot{\kappa}\in V^{\bool{B}}$ be
such that $\Qp{\dot{\kappa}\text{ is a regular cardinal}}_{\bool{B}} =1_{\bool{B}}$. Given $\kappa\geq\bool{B}$ the least regular cardinal in $V$ such that $\Qp{\dot{\kappa}\leq\check{\kappa}}=1_{\bool{B}}$ and $\bool{B}$ is $<\kappa$-CC, let \[ H_{\dot{\kappa}}^\bool{B}=\bp{\tau\in V^{\bool{B}}\cap H_\kappa^V: \Qp{ \tau\text{ has transitive closure of size less than }\dot{\kappa}}_\bool{B}=1_{\bool{B}}} \] \end{definition} We let $\dot{\omega_1}^{\bool{B}}$ and $H_{\dot{\omega_1}}^\bool{B}$ be canonical $\bool{B}$-name for the first uncountable cardinal and for the family of hereditarily countable sets, i.e. \[ \Qp{\dot{\omega_1}^{\bool{B}}\text{ is the first uncountable cardinal}}_{\bool{B}} =1_{\bool{B}}, \] \[ H_{\dot{\omega_1}}^\bool{B}=H_{\dot{\omega_1}^{\bool{B}}}^\bool{B}. \]
It is left to the reader to check that for any $\Delta_0$-formula $\phi(\tau_1,\dots,\tau_n)$ for the signature $\in$ the truth value of $\Qp{\phi(\tau_1,\dots,\tau_n)}_\bool{B}$ is the same in $H_{\dot{\kappa}}^\bool{B}$ and in $V^{\bool{B}}$: by inspecting the definitions one realizes that the truth value of these formulae is defined by transfinite recursion on a set of names contained in $H_{\dot{\kappa}}^\bool{B}$, hence the computation of these truth values is the same in $V^{\bool{B}}$ and in $H_{\dot{\kappa}}^\bool{B}$. Key to this result is the equality \[ \Qp{\forall x\in \sigma\,\phi(x,\sigma,\tau_1,\dots,\tau_n)}_{\bool{B}}= \bigwedge_{\tau\in\dom(\sigma)}(\sigma(\tau)\rightarrow\Qp{\phi(\tau,\sigma,\tau_1,\dots,\tau_n)}_{\bool{B}}) \] (see \cite[Exercise 14.12]{JECH}). See also the proof of Lemma \ref{lem:Cohengen} below.
\begin{lemma}[Mixing Lemma for $H_{\dot{\kappa}}^\bool{B}$] Assume $\bool{B}$ is a cba and $\dot{\kappa}$ is as in Def. \ref{def:HkappaB}. Then $H_{\dot{\kappa}}^\bool{B}$ satisfies the mixing property. \end{lemma}
\begin{proof} The same proof that works for $V$ (e.g. \cite[Lemma 14.18]{JECH}) works for $H_{\dot{\kappa}}^\bool{B}$ as well, since the required $\bool{B}$-name $\tau$ constructed in that proof is in $H_{\dot{\kappa}}^\bool{B}$ if $\bp{\tau_a:a\in A}\subseteq H_{\dot{\kappa}}^\bool{B}$. \end{proof}
The forcing theorem states that: \begin{itemize} \item \cite[Thm 4.3.2, Thm 5.1.34]{viale-notesonforcing} (\L o\'s theorem for full boolean valued models) For all ultrafilter $G$ on $\bool{B}$, $\tau_1,\dots,\tau_n\in V^{\bool{B}}$, and $\tau_{\ST}$-formula $\phi(x_1,\dots,x_n)$ \[ (V^{\bool{B}}/G,\in/_G)\models\phi([\tau_1]_G,\dots,[\tau_n]_G)\text{ if and only if } \Qp{\phi(\tau_1,\dots,\tau_n)}_{\bool{B}}\in G. \]
\item The same conclusion holds with $H_{\dot{\kappa}}^\bool{B}$ in the place of $V^{\bool{B}}$.
\item \cite[Thm. 5.2.3]{viale-notesonforcing} Whenever $G$ is $V$-generic for $\bool{B}$ the map \[ [\tau]_G\mapsto \tau_G=\bp{\sigma_G: \exists b\in G\,\ap{\sigma,b}\in\tau} \] is the Mostowski collapse of the class $V^{\bool{B}}/G$ defined in $V[G]$ onto $V[G]$ and its restriction to $H_{\dot{\kappa}}^\bool{B}/_G$ maps the latter onto $H_{\dot{\kappa}_G}^{V[G]}$. \end{itemize}
Using the forcing theorem one gets that $\Qp{\psi}^{H_{\dot{\kappa}}^\bool{B}}=1_{\bool{B}}$ for any axiom $\psi$ of $\ensuremath{\mathsf{ZFC}}^-_{\ST}$, since it is the case that for all $G$ $V$-generic for $\bool{B}$ \[ H_{\dot{\kappa}}^\bool{B}[G]=\bp{\tau_G: \tau\in H_{\dot{\kappa}}^\bool{B}}=H_{\dot{\kappa}_G}^{V[G]}, \] i.e. $H_{\dot{\kappa}}^\bool{B}$ is a canonical family of $\bool{B}$-names to denote the $H_{\dot{\kappa}_G}^{V[G]}$ of the generic extension, and the latter always models $\ensuremath{\mathsf{ZFC}}^-_{\ST}$.
When $\bool{B}\in V$ is a $<\kappa$-cc complete boolean algebra, then $\Qp{\check{\kappa}\text{ is a regular cardinal}}=1_{\bool{B}}$. Therefore $H_{\check{\kappa}}^\bool{B}$ is a canonical set of $\bool{B}$-names which describes the $H_\kappa$ of a generic extension of $V$ by $\bool{B}$.
The choice to work with $H_{\dot{\kappa}}^\bool{B}$, instead of $V^\bool{B}$, is motivated by the fact that the former is a set definable in $V$ using the parameters $\bool{B}$ and $\dot{\kappa}$, while the latter is just a definable class in the parameter $\bool{B}$.
Having defined the structures we will be interested in (the structures $H_{\dot{\kappa}}^\bool{B}/_G$) we now turn to the definition of the relevant morphisms between them.
\begin{definition} Given a complete homomorphism $i:\bool{B}\to\bool{C}$ of complete boolean algebras, $i$ extends to a map $\hat{i}:V^{\bool{B}}\to V^{\bool{C}}$ defined by transfinite recursion by \[ \hat{i}(\tau)=\bp{\ap{\hat{i}(\sigma),i(b)}: \,\ap{\sigma,b}\in\tau}. \] Let $\dot{\kappa}\in V^{\bool{B}}$, $\dot{\delta}\in V^{\bool{C}}$ be such that $H_{\dot{\kappa}}^\bool{B}$, $H_{\dot{\delta}}^\bool{C}$ are well defined according to Def. \ref{def:HkappaB}, and $\hat{i}[H_{\dot{\kappa}}^\bool{B}]\subseteq H_{\dot{\delta}}^\bool{C}$.
Given $\tau_1,\dots,\tau_n\in H_{\dot{\kappa}}^\bool{B}$, $\phi(\tau_1,\dots,\tau_n)$ is generically absolute for $i$, $H_{\dot{\kappa}}^\bool{B}$, $H_{\dot{\delta}}^\bool{C}$ if \[ i(\Qp{\phi(\tau_1,\dots,\tau_n)}_\bool{B}^{H_{\dot{\kappa}}^\bool{B}})= \Qp{\phi(\hat{i}(\tau_1),\dots,\hat{i}(\tau_n))}^{H_{\dot{\delta}}^\bool{C}}_\bool{C}. \]
$i$ is a boolean $\Sigma_n$-embedding of $H_{\dot{\kappa}}^\bool{B}$ into $H_{\dot{\delta}}^\bool{C}$ if all $\Sigma_n$-formulae for $\tau_{\ST}$ are generically absolute for $i$, $H_{\dot{\kappa}}^\bool{B}$, $H_{\dot{\delta}}^\bool{C}$ ($i$ is a boolean embedding if it preserves only atomic formulae). \end{definition}
We now prove the following Lemma:
\begin{lemma}\label{lem:Cohengen}
Let $i:\bool{B}\to\bool{C}$ be a complete homomorphism such that $\Qp{\dot{\kappa}\text{ is a regular cardinal}}_{\bool{B}} =1_{\bool{B}}$, $\Qp{\dot{\delta}\text{ is a regular cardinal}}_{\bool{C}} =1_{\bool{C}}$, and $\Qp{\hat{i}(\dot{\kappa})\leq\dot{\delta}}_{\bool{C}}=1_{\bool{C}}$. Then: \begin{enumerate} \item \label{lem:Cohengen-1} $i$ is a boolean embedding. \item \label{lem:Cohengen-2} For any $H\in St(\bool{C})$, letting $G\in St(\bool{B})$ be $i^{-1}[H]$, the map \begin{align*} \hat{i}/_H:&H_{\dot{\kappa}}^{\bool{B}}/_G\to H_{\dot{\delta}}^{\bool{C}}/_H\\ &[\tau]_G\mapsto [\hat{i}(\tau)]_H \end{align*} is a $\tau_{\ST}$-morphism. \item \label{lem:Cohengen-3} Assume further that $\dot{\kappa}=\dot{\omega_1}^{\bool{B}}$. Then $i$ is $\Sigma_1$-elementary. \end{enumerate} \end{lemma} The first two parts are a straightforward consequence of the preservation of $\Delta_1$-properties through forcing extensions, while the third part follows from Shoenfield's absoluteness, given that $\Sigma_1$-properties in real parameters correspond to $\Sigma^1_2$-properties and any element of $H_{\omega_1}$ is coded in an absolute manner by a real. Let us however give an explicit proof in the set-up we built so far so as to make the reader acquainted with it.
\begin{proof} \emph{}
\begin{enumerate}
\item Given a $\Delta_0$-formula $\phi$ for the signature $\bp{\in,\subseteq,=}$, we must show that \[ i(\Qp{\phi(\tau_1,\dots,\tau_n)}_\bool{B}^{H_{\dot{\kappa}}^\bool{B}})= \Qp{\phi(\hat{i}(\tau_1),\dots,\hat{i}(\tau_n))}^{H_{\dot{\delta}}^\bool{C}}_\bool{C}. \]
We prove the result by induction on the number of bounded quantifiers in $\phi$. For atomic formulas
$\psi$ (either $x = y$ or $x \in y$), we proceed by further induction on the rank of
$\tau_1$, $\tau_2$.
\[
\begin{split}
i\cp{ \Qp{\tau_1 \in \tau_2}_{\bool{B}} } &=
i\cp{ \bigvee \bp{\tau_2(\dot{a}) \wedge \Qp{\tau_1 =
\dot{a}}_{\bool{B}} : \dot{a} \in \dom(\tau_2)}} \\
&= \bigvee \bp{ i\cp{\tau_2(\dot{a})} \wedge i\cp{ \Qp{\tau_1 =
\dot{a}}_{\bool{B}}}: {\dot{a} \in \dom(\tau_2)}} \\
&= \bigvee\bp{ i\cp{\tau_2(\dot{a})} \wedge \Qp{\hat{\imath}(\tau_1) =
\hat{\imath}(\dot{a})}_{\bool{C}} : {\dot{a} \in \dom(\tau_2)}}\\
&= \Qp{\hat{\imath}(\tau_1) \in \hat{\imath}(\tau_2)}_{\bool{C}} \\
i\cp{ \Qp{\tau_1 \subseteq \tau_2}_{\bool{B}} } &=
i\cp{ \bigwedge \bp{ \tau_1(\dot{a}) \rightarrow \Qp{\dot{a} \in
\tau_2}_{\bool{B}}: {\dot{a} \in \dom(\tau_1)}}} \\
&= \bigwedge \bp{ i\cp{\tau_1(\dot{a})}\rightarrow
i\cp{ \Qp{\dot{a} \in \tau_2}_{\bool{B}} }: {\dot{a} \in \dom(\tau_1)}} \\
&= \bigwedge \bp{ i\cp{\tau_1(\dot{a})} \rightarrow
\Qp{\hat{\imath}(\dot{a}) \in \hat{\imath}(\tau_2)}_{\bool{C}}: {\dot{a} \in \dom(\tau_1)}} \\
&=\Qp{\hat{\imath}(\tau_1) \subseteq \hat{\imath}(\tau_2)}_{\bool{C}}.
\end{split}
\]
We used the inductive hypothesis in the last row of each case. Since
$\Qp{\tau_1 = \tau_2} = \Qp{\tau_1 \subseteq \tau_2} \wedge \Qp{\tau_2
\subseteq \tau_1}$, the proof for $\psi$ atomic $\bp{\in,\subseteq,=}$-formula is complete.
The induction step for boolean connectives is left to the reader.
Suppose now that $\psi = \exists x \in y ~ \phi$ is a
$\Delta_0$-formula for the signature $\bp{\in,\subseteq,=}$
and the inductive hypothesis holds for $\phi$. Then
\[
\begin{split}
i&\cp{\Qp{\exists x \in \tau_1 \phi(x,\tau_1,\ldots,\tau_n)}_{\bool{B}}} \\
& = \bigvee \bp{ i\cp{\tau_1(\dot{a})} \wedge
i\cp{\Qp{\phi(\dot{a},\tau_1,\ldots,\tau_n)}_{\bool{B}}}: {\dot{a} \in \dom(\tau_1)}} \\
& = \bigvee \bp{ i(\tau_1(\dot{a})) \wedge
\Qp{\phi\cp{\hat{\imath}(\dot{a}),
\hat{\imath}(\tau_1),\ldots,\hat{\imath}(\tau_n)}}_{\bool{C}} : {\dot{a} \in \dom(\tau_1)}} \\
& = \Qp{\exists x \in \hat{\imath}(\tau_1) ~
\phi\cp{x,\hat{\imath}(\tau_1),\ldots,\hat{\imath}(\tau_n)}}_{\bool{C}}
\end{split}
\]
\item Immediate by the forcing theorem for $H_{\dot{\kappa}}^{\bool{B}}$ and $H_{\dot{\delta}}^{\bool{C}}$, and the previous item.
\item Assume $\phi(x,x_1,\dots,x_n)$ is a $\Sigma_0$-formula for $\bp{\in,\subseteq,=}$ and $(\tau_1,\dots,\tau_n)$ is a tuple in $H_{\dot{\omega_1}}^{\bool{B}}$ such that \[ \Qp{\exists x\phi(x,\hat{i}(\tau_1),\dots,\hat{i}(\tau_n))}^{H_{\dot{\delta}}^\bool{C}}_\bool{C}\geq i(b). \] It suffices to show that \[ \Qp{\exists x\phi(x,\tau_1,\dots,\tau_n)}^{H_{\dot{\omega}_1}^\bool{B}}_\bool{B}\geq b. \] Fix $G$ $V$-generic for $\bool{B}$ such that $b\in G$. Work in $V[G]$.
Remark that $\bool{C}/_{i[G]}$ is a boolean algebra in $V[G]$ whose elements are the equivalence classes $[c]_G=\bp{d\in\bool{C}: \Qp{d=c}\in G}$ and with quotient boolean operations (it is actually a complete boolean algebra in $V[G]$, but this does not matter here).
Let $P$ be in $V[G]$ the forcing notion $(\bool{C}/_{i[G]})^+$ given by the positive elements of the boolean algebra.
Pick a model $M\in V[G]$ such that
$M\prec (H_{|P|^+})^{V[G]}$, $M$ is countable in $V[G]$, and $P,a_1=(\tau_1)_G,\dots,a_n=(\tau_n)_G\in M$. Let $\pi_M:M\to N$ be its transitive collapse and $Q=\pi_M(P)$. Notice also that $\pi_M(a_i)=a_i$ for $i=1,\dots,n$ since $a_i\in H_{\omega_1}^{V[G]}\cap M$ and this latter set is transitive since $\omega\subseteq M$. Note that, since $b\in G$ and $\Qp{\exists x\phi(x,\hat{i}(\tau_1),\dots,\hat{i}(\tau_n))}^{H_{\dot{\delta}}^\bool{C}}_\bool{C}\geq i(b)$, the forcing theorem applied in $V[G]$ to the quotient forcing $P$ gives that $\Vdash_{P}\exists x\phi(x,a_1,\dots,a_n)$ holds in $V[G]$, hence also in $M$, as $M\prec (H_{|P|^+})^{V[G]}$. Since $\pi_M$ is an isomorphism of $M$ with $N$, \[ N\models(\Vdash_{Q}\exists x\phi(x,a_1,\dots,a_n)). \] Now let $H\in V[G]$ be $N$-generic for $Q$ ($H$ exists in $V[G]$ since $N$ is countable in $V[G]$), then, by Cohen's fundamental theorem of forcing applied in $V[G]$ to $N$ and $Q$ (since $N$ is a countable transitive model of a large enough initial fragment of $\ensuremath{\mathsf{ZFC}}$), we have that $N[H]\models\exists x\phi(x,a_1,\dots,a_n)$. So we can pick $a\in N[H]$ such that $N[H]\models\phi(a,a_1,\dots,a_n)$. Since $N,H\in (H_{\aleph_1})^{V[G]}$ are countable in $V[G]$, $N[H]$ is countable and transitive; hence we have that $V[G]$ models that $N[H]\in H_{\omega_1}^{V[G]}$, and thus $V[G]$ models that $a$ as well belongs to $H_{\omega_1}^{V[G]}$. Since $\phi(x,x_1,\dots,x_n)$ is a $\Sigma_0$-formula, $V[G]$ models that $\phi(a,a_1,\dots,a_n)$ is absolute between the transitive sets $N[H]\subset H_{\omega_1}^{V[G]}$ to which $a,a_1,\dots,a_n$ belong. In particular $a$ witnesses in $V[G]$ that $H_{\omega_1}^{V[G]}\models\exists x\phi(x,a_1,\dots,a_n)$.
Since the argument can be repeated for any $G$ $V$-generic for $\bool{B}$ with $b\in G$, we conclude that \[ H_{\omega_1}^{V[K]}\models\exists x\phi(x,(\tau_1)_K,\dots,(\tau_n)_K) \] for any $K$ $V$-generic for $\bool{B}$ with $b\in K$.
By the maximum principle and the forcing theorem applied in $V$ to $\bool{B}$, we can find $\sigma\in V^{\bool{B}}$ such that \[ \Qp{(\sigma\text{ is hereditarily countable})\wedge \phi(\sigma,\tau_1,\dots,\tau_n)}^{V^{\bool{B}}}_\bool{B}\geq b. \] Since $\sigma$ is a $\bool{B}$-name for a hereditarily countable set according to $b$, it is decided by countably many antichains below\footnote{For example let $\dot{R}$ be a canonical name for a binary relation on $\omega$ coding the transitive closure of $\bp{\sigma_G}$ for any $G$ $V$-generic for $\bool{B}$ with $b\in G$. Then for any such $G$ the transitive collapse of $\dot{R}_G$ is the transitive closure of $\bp{\sigma_G}$, and clearly $\dot{R}$ is decided by countably many antichains below $b$.} $b$.
In particular we can find in $V$ some $\tau\in H_{\dot{\omega_1}}^{\bool{B}}$ such that $\Qp{\sigma=\tau}^{V^{\bool{B}}}_\bool{B}\geq b$. We conclude that \[ \Qp{\phi(\tau,\tau_1,\dots,\tau_n)}^{V^{\bool{B}}}_\bool{B}\geq b. \] Since $\phi$ is a $\Delta_0$-formula, \[ \Qp{\phi(\tau,\tau_1,\dots,\tau_n)}^{H_{\dot{\omega_1}}^{\bool{B}}}_\bool{B}=\Qp{\phi(\tau,\tau_1,\dots,\tau_n)}^{V^{\bool{B}}}_\bool{B}\geq b. \] The thesis follows. \end{enumerate} \end{proof}
\begin{definition} The \emph{generic multiverse} $(\Omega(V),\to_{\Omega(V)})$ is the collection: \[ \bp{H_{\dot{\kappa}}^{\bool{B}}/_G:\, \Qp{\dot{\kappa}\text{ is a regular cardinal}}_{\bool{B}}=1_{\bool{B}},\,G\in St(\bool{B})}. \] Its morphisms are the $\tau_{\ST}$-morphisms of the form $\hat{i}/_H:H_{\dot{\kappa}}^{\bool{B}}/_G\to H_{\dot{\delta}}^{\bool{C}}/_H$ for some complete homomorphism $i:\bool{B}\to\bool{C}$ with $H\in St(\bool{C})$, $G=i^{-1}[H]$, $\Qp{\hat{i}(\dot{\kappa})\leq\dot{\delta}}_{\bool{C}}=1_{\bool{C}}$. \end{definition}
Notice\footnote{There can be morphisms $h:H_\kappa^{\bool{B}}/_G\to H_\delta^{\bool{C}}/_H$ which are not of the form $\hat{f}/_H$ for some complete homomorphism $f:\bool{B}\to \bool{C}$, even in case $\bool{B}$ preserves the regularity of $\kappa$ and $\bool{C}$ the regularity of $\delta$.
We do not spell out the details of such possibilities.} that $\Omega(V)$ is a definable class in $V$. $\Omega(V)$ is a formulation in the language of boolean valued models of the notion of generic multiverse.
This is the first result we want to bring forward:
\begin{theorem}\label{thm:omegaVmodcompl} The family \[ \bp{H_{\dot{\omega_1}}^{\bool{B}}/_G:\,\bool{B}\text{ is a cba and }G\in St(\bool{B})} \] is the class of generic structures of the generic multiverse $(\Omega(V),\to_{\Omega(V)})$. \end{theorem}
\begin{proof} By Lemma \ref{lem:Cohengen}(\ref{lem:Cohengen-3}), the $\tau_{\ST}$-models of type $H_{\dot{\omega_1}}^{\bool{B}}/_G$ are $\Sigma_1$-elementary in any of their superstructures in $\Omega(V)$. By \cite[Cor.~26.8]{JECH} any model of the form $H_{\dot{\kappa}}^{\bool{B}}$ is absorbed by a model of the form $H_{\dot{\omega_1}}^{\bool{C}}$, where $\bool{C}$ is the boolean completion of $\Coll(\omega,H_{\dot{\kappa}}^{\bool{B}})$.
The thesis follows immediately. \end{proof}
A natural question arises: \begin{quote} Is the $\tau_{\ST}$-theory $T$ of $H_{\omega_1}^V$ model complete? \end{quote} If this question has an affirmative answer, $T$ is the model companion of the $\tau_{\ST}$-theory of $V$: Levy's absoluteness allows one to prove that $H_{\omega_1}$ is $\Sigma_1$-elementary in $V$ with respect to $\tau_{\ST}$, hence the two structures have the same universal theory and we can apply Robinson's test to them.
In the next section we argue that this question has a negative answer. Nonetheless in the forthcoming \cite{VIAPAR} the second author and Parente will show that the models of type $H_{\dot{\omega_1}}^{\bool{B}}/_G$ are existentially closed for their own universal theory and contain a copy of any set sized model of the $\tau_{\ST}$-universal theory of $V$.
\section{Second order arithmetic and $\text{Th}(H_{\omega_1})$} \label{sec:nonmodcompletHomega1}
We define second order number theory as the first order theory of the structure \[ (\mathcal{P}(\mathbb{N})\cup\mathbb{N},\in,\subseteq,=,\mathbb{N}). \]
$\Pi^1_n$-sets (respectively $\Sigma^1_n$, $\Delta^1_n$) are the subsets of $\mathcal{P}(\mathbb{N})\equiv 2^{\mathbb{N}}$ defined by a $\Pi_n$-formula (respectively by a $\Sigma_n$-formula, or simultaneously by a $\Sigma_n$-formula and a $\Pi_n$-formula in the appropriate language). If the formula defining a set $A\subseteq (2^{\mathbb{N}})^n$ has some parameter $\vec{r}\in (2^{\mathbb{N}})^{<\omega}$, we accordingly write that $A$ is $\Pi^1_n(\vec{r})$ (respectively $\Sigma^1_n(\vec{r})$, $\Delta^1_n(\vec{r})$). If the formula defining a set $A\subseteq (2^{\mathbb{N}})^n$ uses an extra predicate symbol $B\subseteq (2^\omega)^k$, we write that $A$ is $\Pi^1_n(B)$ (respectively $\Sigma^1_n(B)$, $\Delta^1_n(B)$).
$A\subseteq (2^{\mathbb{N}})^k$ is projective if it is defined by some $\Pi^1_n$-property for some $n$. Similarly we define the notion of being projective in $\vec{r}\in (2^{\mathbb{N}})^{<\omega}$ or in $B\subseteq (2^\omega)^k$. \begin{remark} $A\subseteq (2^{\mathbb{N}})^k$ is projective if and only if it is obtained from a Borel subset of $(2^{\mathbb{N}})^m$ by successive applications of the operations of projection on one coordinate and complementation. \end{remark}
\begin{definition} Given $a\in H_{\omega_1}$, we say that $r\in 2^{\mathbb{N}}$ codes $a$ if (modulo a recursive bijection of $\mathbb{N}$ with $\mathbb{N}^2$) $r$ codes a well-founded extensional relation on $\mathbb{N}$ whose transitive collapse is the transitive closure of $\{a\}$.
\begin{itemize} \item
$\mathrm{Cod}:2^{\mathbb{N}}\to H_{\omega_1}$ is the map assigning $a$ to $r$ if $r$ codes $a$, and assigning the empty set to $r$ otherwise. \item $\mathrm{WFE}$ is the set of $r\in 2^{\mathbb{N}}$ which (modulo a recursive bijection of $\mathbb{N}$ with $\mathbb{N}^2$) code a well-founded extensional relation. \end{itemize} \end{definition}
The following are well-known facts\footnote{See \cite[Section 25]{JECH} and in particular the statement and proof of Lemma 25.25, which contains all ideas on which one can elaborate to draw the conclusions below.}. \begin{remark} The map $\mathrm{Cod}$ is defined by a $\ensuremath{\mathsf{ZFC}}^-$-provably $\Delta_1$-property (with no parameters) over $H_{\omega_1}$ and is surjective. Moreover $\mathrm{WFE}$ is a $\Pi^1_1$-subset of $2^{\mathbb{N}}$.
Therefore, if $N\in H_{\omega_1}$ is a transitive model of $\ensuremath{\mathsf{ZFC}}$, $N$ computes $\mathrm{Cod}$ and $\mathrm{WFE}$ correctly, i.e. $\mathrm{Cod}^N=\mathrm{Cod}\cap N$ and $\mathrm{WFE}^N=\mathrm{WFE}\cap N$.
\end{remark}
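To make the coding map concrete, the following sketch is purely illustrative and not part of the formal development: it is restricted to hereditarily finite sets, and it works directly with the coded relation rather than with the real in $2^{\mathbb{N}}$ obtained from it via a recursive bijection of $\mathbb{N}$ with $\mathbb{N}^2$. All identifiers in the snippet are ours; it builds the well-founded extensional relation coding the transitive closure of $\{a\}$ and recovers $a$ by a Mostowski collapse, mirroring the definition of $\mathrm{Cod}$ above.
\begin{verbatim}
# Illustrative sketch (hereditarily finite sets only): code a set by a
# well-founded extensional relation on an initial segment of N, and
# decode it again via the Mostowski collapse, as in the map Cod.

def transitive_closure(a):
    seen = []                      # elements of trcl({a}); a is listed first
    def visit(x):
        if x not in seen:
            seen.append(x)
            for y in x:
                visit(y)
    visit(a)
    return seen

def code(a):
    elems = transitive_closure(a)
    index = {x: i for i, x in enumerate(elems)}       # a gets index 0
    # (j, i) in r  iff  elems[j] is a member of elems[i]
    return {(index[y], index[x]) for x in elems for y in x}

def decode(r):
    memo = {}
    def collapse(i):               # Mostowski collapse of node i
        if i not in memo:
            memo[i] = frozenset(collapse(j) for (j, k) in r if k == i)
        return memo[i]
    return collapse(0)             # node 0 codes the original set

two = frozenset({frozenset(), frozenset({frozenset()})})  # von Neumann 2
assert decode(code(two)) == two
\end{verbatim}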
\begin{lemma} Assume $A\subseteq 2^{\mathbb{N}}$ is $\Sigma^1_{n+1}$. Then $A$ is $\Sigma_{n}$-definable in $H_{\omega_1}$ in the language $\tau_{\ST}$.\qed \end{lemma}
\begin{lemma} Assume $A$ is $\Sigma_n$-definable in $H_{\omega_1}$ in the language $\tau_{\ST}$. Then $A=\textrm{Cod}[\textrm{Cod}^{-1}[A]]$, and $\textrm{Cod}^{-1}[A]$ is $\Sigma^1_{n+1}$. \end{lemma}
We can now easily conclude the following: \begin{theorem}\label{thm:nonmodcompHomega1} The $\tau_{\ST}$-theory of $H_{\omega_1}$ is not model complete. \end{theorem} \begin{proof} For all $n$ there is some $A_n\in \Sigma^1_{n+1}\setminus\Pi^1_n$ (see \cite[Thm. 22.4]{kechris:descriptive} for a proof). Therefore, by the two lemmas above, $A_2$ is $\Sigma_2$-definable but not $\Pi_1$-definable in $H_{\omega_1}$. Consequently, Robinson's test fails, and $T$ is not model complete. \end{proof}
\section{Model completeness for set theory with predicates for universally Baire sets}\label{sec:modcompanUBpred}
\begin{definition} Let $(V,\in)$ be a model of $\ensuremath{\mathsf{ZFC}}$ and $N\subseteq V$ be a transitive class (or set) which is a model of $\ensuremath{\mathsf{ZFC}}^-$. $\mathcal{A}\subseteq\bigcup_{k\in\omega}\mathcal{P}((2^\omega)^k)$ is $N$-closed if whenever $B\subseteq (2^\omega)^k$ is such that for some $\in$-formula $\phi(x_0,\dots,x_n)$ \[ B=\bp{(r_0,\dots,r_{k-1})\in (2^\omega)^k:\, (N,\in,A_0,\dots,A_{n-k})\models\phi(r_0,\dots,r_{k-1},A_0,\dots,A_{n-k})} \] with $A_0,\dots,A_{n-k}\in\mathcal{A}$, we have that $B\in\mathcal{A}$. \end{definition}
\begin{theorem}\label{thm:modcompanHomega1} Let $(V,\in)$ be a model of $\ensuremath{\mathsf{ZFC}}$, and assume $\mathcal{A}$ is $H_{\omega_1}$-closed.
Let $\tau_{\mathcal{A}}=\tau_{\ST}\cup\mathcal{A}$. Then the $\tau_{\mathcal{A}}$-theory of $H_{\omega_1}$ is model complete and is the model companion of the $\tau_{\mathcal{A}}$-theory of $V$. \end{theorem}
The proof is rather straightforward but needs a slight generalization of Levy's absoluteness theorem which we state and prove right away:
\begin{lemma}\label{lem:levyabsHkappa+} Let $\kappa$ be an infinite cardinal and $\mathcal{A}$ be any family of subsets of $\bigcup_{n\in\omega}\pow{\kappa}^n$. Let $\tau_{\mathcal{A}}=\tau_{\ST}\cup\mathcal{A}$.
Then: \[ (H_{\kappa^+}^V,\tau_{\mathcal{A}}^V)\prec_{\Sigma_1} (V,\tau_{\mathcal{A}}^{V}). \] \end{lemma}
\begin{proof} Assume that for some $\tau_{\mathcal{A}}$-formula $\phi(\vec{x},y)$ without quantifiers\footnote{A quantifier free $\tau_{A_1,\dots,A_k}$-formula is a boolean combination of atomic $\tau_{\ST}$-formulae with formulae of type $A_j(\vec{x})$. For example $\exists x\in y\, A(y)$ is not quantifier free, and is actually equivalent to the $\Sigma_1$-formula $\exists x\,(x\in y\wedge A(y))$.} and some $\vec{a}\in H_{\kappa^+}$ \[ (V,\tau_{\mathcal{A}}^{V})\models\exists y\phi(\vec{a},y). \] Let $\alpha>\kappa$ be large enough so that for some $b\in V_\alpha$ \[ (V,\tau_{\mathcal{A}}^{V})\models\phi(\vec{a},b). \] Then \[ (V_\alpha,\tau_{\mathcal{A}}^{V})\models\phi(\vec{a},b). \] Let $A_1,\dots,A_k$ be the subsets of $\pow{\kappa}^{i_k}$ which are the predicates mentioned in $\phi$. By the downward L\"owenheim--Skolem theorem, we can find $X\subseteq V_\alpha$ which is the domain of a $\tau_{A_1,\dots,A_k}$-elementary substructure of \[ (V_\alpha,\tau_{\ST},A_1,\dots,A_k) \] such that $X$ is a set of size $\kappa$ with $\kappa\subseteq X$ and such that $A_1,\dots,A_k,\kappa,b,\vec{a}\in X$.
Since $|X|=\kappa\subseteq X$, a standard argument shows that $H_{\kappa^+}\cap X$ is a transitive set, and that $\kappa^+$ is the least ordinal in $X$ which is not contained in $X$. Let $M$ be the transitive collapse of $X$ via the Mostowski collapsing map $\pi_X$.
We have that the first ordinal moved by $\pi_X$ is $\kappa^+$ and $\pi_X$ is the identity on $H_{\kappa^+}\cap X$. Therefore $\pi_X(a)=a$ for all $a \in H_{\kappa^+}\cap X$. Moreover for $A\subseteq \pow{\kappa}^n$ in $X$ \begin{equation}\label{eqn:piXidonpowkappa} \pi_X(A)=A\cap M. \end{equation} We prove equation (\ref{eqn:piXidonpowkappa}): \begin{proof}
Since $X\cap V_{\kappa+1}\subseteq X\cap H_{\kappa^+}$, $\pi_X$ is the identity on $X\cap H_{\kappa^+}$, and $A\subseteq \pow{\kappa}\subseteq V_{\kappa+1}$, we get that \[ \pi_X(A)=\pi_X[A\cap X]=\pi_X[A\cap X\cap V_{\kappa+1}]=A\cap M\cap V_{\kappa+1}=A\cap M. \] \end{proof} It suffices now to show that \begin{equation}\label{eqn:keyeqlevabs} (M,\tau_{\ST}^V,\pi_X(A_1),\dots,\pi_X(A_k))\sqsubseteq (H_{\kappa^+},\tau_{\ST}^V,A_1,\dots,A_k). \end{equation} Assume (\ref{eqn:keyeqlevabs}) holds; since $\pi_X$ is an isomorphism and $\pi_X(A_j)=\pi_X[A_j\cap X]$, we get that \[ (M,\tau_{\ST}^V,\pi_X(A_1),\dots,\pi_X(A_k))\models\phi(\vec{a},\pi_X(b)) \] since \[ (X,\tau_{\ST}^V,A_1\cap X,\dots,A_k\cap X)\models\phi(\vec{a},b). \] By (\ref{eqn:keyeqlevabs}) we get that \[ (H_{\kappa^+},\tau_{\ST}^V,A_1,\dots,A_k)\models\phi(\vec{a},\pi_X(b)) \] and we are done, since $\pi_X(b)\in H_{\kappa^+}$ then witnesses $(H_{\kappa^+}^V,\tau_{\mathcal{A}}^V)\models\exists y\phi(\vec{a},y)$.
We prove (\ref{eqn:keyeqlevabs}): since $M$ is transitive, any atomic $\tau_{\ST}$-formula (i.e. any $\Delta_0$-property) holds true in $M$ if and only if it holds in $H_{\kappa^+}$. It remains to argue that the same occurs for the $\tau_{\mathcal{A}}$-formulae of type $A_j(x)$, i.e. that $A_j\cap M=\pi_X(A_j)$ for all $j=1,\dots,k$, which is the case by (\ref{eqn:piXidonpowkappa}). \end{proof}
\begin{remark} Key to the proof is the fact that subsets of $\kappa$ have bounded rank below $\kappa^+$. If $A\subseteq H_{\kappa^+}$ has elements of unbounded rank, the equality $\pi_X(A)=A\cap M$ may fail: for example if $A=H_{\kappa^+}$, $\pi_X(A)=H_{\kappa^+}\cap X$ while $A\cap M=M$. This shows that (\ref{eqn:keyeqlevabs}) fails for this choice of $A$. \end{remark}
We can now prove Theorem \ref{thm:modcompanHomega1}.
\begin{proof} Let $T$ be the $\tau_{\mathcal{A}}$-theory of $V$ and $T^*$ be the $\tau_{\mathcal{A}}$-theory of $H_{\omega_1}$.
By the version of Levy's absoluteness Lemma we just proved \[ (H_{\omega_1},\tau_{\mathcal{A}}^V)\prec_1(V,\tau_{\mathcal{A}}^V), \] hence the two structures share the same $\Pi_1$-theory. Therefore (by the standard characterization of model companionship) it suffices to prove that $T^*$ is model complete.
By Robinson's test, it suffices to show that any existential $\tau_{\mathcal{A}}$-formula is $T^*$-equivalent to a universal $\tau_{\mathcal{A}}$-formula.
Let $\phi(x_1,\dots,x_n)$ be an existential $\tau_{\mathcal{A}}$-formula and let $A_1,\dots,A_k$ be the predicates in $\mathcal{A}$ appearing in $\phi$.
Let \[ B=\bp{(r_1,\dots,r_n)\in (2^\omega)^n: (H_{\omega_1},\tau_{\ST}^V,A_1,\dots,A_k)\models \phi(\Cod(r_1),\dots,\Cod(r_n))}. \] Then $B$ belongs to $\mathcal{A}$, since $\mathcal{A}$ is $H_{\omega_1}$-closed. Now for any $a_1,\dots,a_n\in H_{\omega_1}$: \[ (H_{\omega_1},\tau_{\mathcal{A}}^V)\models \phi(a_1,\dots,a_n) \]
\begin{center} if and only if \end{center} \[ \forall r_1\dots r_n \qp{(\bigwedge_{i=1}^n\Cod(r_i)=a_i)\rightarrow B(r_1,\dots,r_n)}. \]
This yields that \[ T^*\vdash \forall x_1,\dots,x_n\,\qp{(\phi(x_1,\dots,x_n)\leftrightarrow\theta_\phi(x_1,\dots,x_n))} \] where $\theta_\phi(x_1,\dots,x_n)$ is the $\Pi_1$-formula in the predicate $B\in\mathcal{A}$ \[ \forall y_1,\dots,y_n\,[(\bigwedge_{i=1}^n x_i=\Cod(y_i))\rightarrow B(y_1,\dots,y_n)]. \] \end{proof}
Theorem~\ref{thm:modcompanHomega1} has a rather straightforward proof which amounts to a (slightly) disguised addition of atomic predicates (i.e. those representing the elements of $\mathcal{A}$) which interpret the definable subsets of $H_{\omega_1}$. But the point we want to make is that assuming large cardinals the universally Baire sets give a very large sample of projectively closed families which are quite ``simple'', hence it is natural to consider elements of these families as atomic predicates.
Given a topological space $(X,\tau)$, $A\subseteq X$ is nowhere dense if its closure has a dense complement, meager if it is the countable union of nowhere dense sets, with the Baire property if it has meager symmetric difference with an open set.
\begin{definition} (Feng, Magidor, Woodin) $A\subseteq (2^{\mathbb{N}})^k$ is \emph{universally Baire} if for every compact Hausdorff space $X$ and every continuous $f:X\to (2^{\mathbb{N}})^k$ we have that $f^{-1}[A]$ has the Baire property in $X$.
$\bool{UB}$ denotes the family of universally Baire subsets of $(2^{\mathbb{N}})^k$ for some $k$. \end{definition}
\begin{example} Given a model $(V,\in)$ of $\ensuremath{\mathsf{ZFC}}+$\emph{there are class many Woodin cardinals}, simple examples of $H_{\omega_1}$-closed families are: \begin{enumerate} \item All subsets of $\pow{\omega^k}$ for $k\in\mathbb{N}$ (this is trivially true with no large cardinal assumptions). \item $\UB^V$, i.e. the family of \emph{all} universally Baire sets of $V$ \cite[Thm. 3.3.3, Thm. 3.3.5, Thm. 3.3.6, Thm. 3.3.13, Thm. 3.3.14]{STATLARSON}. \item The family of subsets of $\pow{\omega^k}$ for $k\in\mathbb{N}$ definable in $(L(\mathbb{R}),\in)$ among which the projective sets \cite[Thm. 3.3.3, Thm. 3.3.5, Thm. 3.3.6, Thm. 3.3.9, Thm. 3.3.13, Thm. 3.3.14]{STATLARSON}. \item The family $\UB\cap X$ for some $X\prec V_\theta$ with $\theta>2^\omega$.
\end{enumerate}
\end{example}
\begin{corollary}\label{thm:mainthm1} Assume $(V,\in)$ models \[ \ensuremath{\mathsf{ZFC}}+\text{there are class many Woodin cardinals which are inaccessible limits of Woodin cardinals}. \] Let $\psi$ be a $\Pi_2$-sentence for $\tau_{\UB}=\tau_{\ST}\cup\UB$. TFAE: \begin{enumerate} \item $\psi$ is in the $\tau_{\UB}$-theory of $H_{\omega_1}$. \item $\psi$ is in the model companion of the $\tau_{\UB}$-theory of $V$. \item $\psi$ is consistent with the universal fragment of the $\tau_{\UB}$-theory of $V$. \item $V$ models that some partial order $P$ forces $\psi^{H_{\omega_1}}$. \end{enumerate} \end{corollary} \begin{proof} The equivalence of the first and fourth items follows from \cite[Cor. 3.1.10]{STATLARSON}. The equivalence of the first and second items follows from Thm. \ref{thm:modcompanHomega1}.
The equivalence of the second and third items is a purely model theoretic fact; we give the details for the sake of completeness:
Assume $T$ is a complete theory admitting a model companion $T^*$.
Since $T^*$ is the model companion of $T$, $T^*_\forall=T_\forall$. In particular if $\psi\in T^*$, $\psi$ is consistent with the universal fragment of $T$.
Conversely assume $\psi+T_\forall$ is consistent. We must show that $\psi\in T^*$.
Since $T$ is complete, $T^*$ is also complete: If $T_1\supseteq T^*$ and $T_2\supseteq T^*$ are complete and consistent, we get that $(T_i)_\forall\supseteq (T^*)_\forall=T_\forall$ for both $i$. Since $T$ is complete we actually get that $(T_i)_\forall=T_\forall$ for both $i$. Now $T_i\supseteq T^*$ entails that both $T_i$ are model complete (every universal formula is $T^*$-equivalent ---hence also $T_i$-equivalent--- to an existential formula). This gives that $T_1$ is the model companion of $T_2$, hence we can find $\mathcal{M}_1\sqsubseteq\mathcal{M}_2$ with $\mathcal{M}_i$ a model of $T_i$. Since they are both models of $T^*$, model completeness of $T^*$ entails that $\mathcal{M}_1\prec \mathcal{M}_2$; therefore $T_1=T_2=T^*$.
We get that $T^*$ is also the model companion of $\psi+T_\forall$ (since $T^*_\forall=T_\forall=(T_\forall+\psi)_\forall$ and $T^*$ is model complete). Hence some model $\mathcal{M}$ of $T^*$ is a substructure of some model $\mathcal{N}$ of $T_\forall+\psi$.
Since $\psi$ is $\Pi_2$ and holds in $\mathcal{N}$, $\psi$ holds in $\mathcal{M}$ as well: assume $\psi$ is $\forall \vec{x}\exists\vec{y}\phi(\vec{x},\vec{y})$ with $\phi$ quantifier free. Let $\vec{a}\in\mathcal{M}^{<\omega}$. Then there exists $\vec{b}\in\mathcal{N}^{<\omega}$ such that \[ \mathcal{N}\models \phi(\vec{a},\vec{b}). \] Since $\mathcal{M}$ is a model of the model companion $T^*$, it is existentially closed in $\mathcal{N}$, i.e. $\mathcal{M}\prec_1\mathcal{N}$; hence such a $\vec{b}$ can be found in $\mathcal{M}^{<\omega}$. We conclude that \[ \mathcal{M}\models \forall \vec{x}\exists\vec{y}\phi(\vec{x},\vec{y}). \] Hence $\psi$ holds in a model of $T^*$; since $T^*$ is complete, $\psi\in T^*$. This concludes the proof. \end{proof}
In particular the Corollary shows that forcing exhausts all means of establishing the consistency of a $\Pi_2$-sentence for $\tau_{\UB}$ with the universal fragment of the $\tau_{\UB}$-theory of $V$.
There are by now a variety of generic absoluteness results for the theory of $H_{\omega_2}$ assuming forcing axioms; for example the $\Pi_2$-maximality results for the theory of $H_{\aleph_2}$ assuming Woodin's axiom $(*)$ (see the monograph \cite{HSTLARSON}), or the second author's \cite{VIAASP,VIAAUD14,VIAMM+++,VIAMMREV,VIAUSAX} regarding the invariance of the first order theory of $H_{\aleph_2}$ under stationary set preserving forcings which preserve strong forcing axioms.
In follow-ups of this paper we will show that also these generic absoluteness results are strictly intertwined with model companionship (see \cite{VIATAMSTI,VIATAMSTII}).
\end{document}
Journal of Fluid Mechanics
25 July 2016, pp. 334-351
Taylor–Couette turbulence at radius ratio ${\it\eta}=0.5$: scaling, flow structures and plumes
Roeland C. A. van der Veen (a1), Sander G. Huisman (a1), Sebastian Merbold (a2), Uwe Harlander (a2), Christoph Egbers (a2), Detlef Lohse (a1) (a3) and Chao Sun (a1) (a4)
1 Physics of Fluids Group, MESA+ Institute and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500AE Enschede, The Netherlands
2 Department of Aerodynamics and Fluid Mechanics, Brandenburg University of Technology Cottbus-Senftenberg, Siemens-Halske-Ring 14, 03046 Cottbus, Germany
3 Max Planck Institute for Dynamics and Self-Organization, Am Fassberg 17, 37077 Göttingen, Germany
4 Center for Combustion Energy and Department of Thermal Engineering, Tsinghua University, Beijing 100084, PR China
DOI: https://doi.org/10.1017/jfm.2016.352
Published online by Cambridge University Press: 23 June 2016
Using high-resolution particle image velocimetry, we measure velocity profiles, the wind Reynolds number and characteristics of turbulent plumes in Taylor–Couette flow for a radius ratio of 0.5 and Taylor number of up to $6.2\times 10^{9}$. The extracted angular velocity profiles follow a log law more closely than the azimuthal velocity profiles due to the strong curvature of this ${\it\eta}=0.5$ set-up. The scaling of the wind Reynolds number with the Taylor number agrees with the theoretically predicted $3/7$ scaling for the classical turbulent regime, which is much more pronounced than for the well-explored ${\it\eta}=0.71$ case, for which the ultimate regime sets in at a much lower Taylor number. By measuring at varying axial positions, roll structures are found for counter-rotation while no clear coherent structures are seen for pure inner cylinder rotation. In addition, turbulent plumes coming from the inner and outer cylinders are investigated. For pure inner cylinder rotation, the plumes in the radial velocity move away from the inner cylinder, while the plumes in the azimuthal velocity mainly move away from the outer cylinder. For counter-rotation, the mean radial flow in the roll structures strongly affects the direction and intensity of the turbulent plumes. Furthermore, it is experimentally confirmed that, in regions where plumes are emitted, boundary layer profiles with a logarithmic signature are created.
© 2016 Cambridge University Press
†Email address for correspondence: [email protected]
JFM classification
Convection: Taylor–Couette flow
Turbulent Flows: Turbulent boundary layers
Turbulent Flows: Turbulent Flows
\begin{document}
\author{Alfons Laarman}
\institute{Leiden University, Leiden, The Netherlands, \email{[email protected]}\footnote{ \scriptsize This work is partially supported by the Austrian National Research Network S11403-N23 (RiSE) of the Austrian Science Fund (FWF) and by the Vienna Science and Technology Fund (WWTF) through grant VRG11-005. }}
\title{Stubborn Transaction Reduction (with Proofs)}
\begin{abstract}
\smaller The exponential explosion of parallel interleavings remains a fundamental challenge to model checking of concurrent programs. Both partial-order reduction (POR) and transaction reduction (TR) decrease the number of interleavings in a concurrent system. Unlike POR, transactions also reduce the number of intermediate states. Modern POR techniques, on the other hand, offer more dynamic ways of identifying commutative behavior, a crucial task for obtaining good reductions.
We show that transaction reduction can use the same dynamic commutativity as found in stubborn set POR. We also compare reductions obtained by POR and TR, demonstrating with several examples that these techniques complement each other.
With an implementation of the dynamic transactions in the model checker LTSmin, we compare its effectiveness with the original static TR and two POR approaches. Several inputs, including realistic case studies, demonstrate that the new dynamic TR can surpass POR in practice.
\end{abstract}
\section{Introduction} \label{sec:introduction}
POR~\cite{katz-peled,parle89,godefroid} yields state space reductions by selecting a subset $P_q$ of the enabled actions $E_q$ at each state $q$; the other enabled actions $E_q \setminus P_q$ are pruned.
For instance, reductions preserving deadlocks (states without outgoing transitions) can be obtained by ensuring the following properties for the set $P_q \subseteq E_q \subseteq A$, where $A$ is the set of all actions:
\begin{wrapfigure}{r}{3.7cm}
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)},node distance=1cm]
\node (s0) {\small $q$};
\node (s1) [right of=s0,gray] {\small $q_1$};
\node (s2) [right of=s1,gray] {\small $q_{n-1}$};
\node (s3) [right of=s2,gray] {\small $q_n$};
\path (s0) -- node[midway,gray]{\small$\tr{\beta_1}{}$} (s1)
-- node[midway,gray]{\dots} (s2)
-- node[midway,gray]{\small$\tr{\beta_n}{}$} (s3);
\node (s0p) [below of=s0] {\small $q'$};
\node (s1p) [right of=s0p] {\small $q_1'$};
\node (s2p) [right of=s1p] {\small $q_{n-1}'$};
\node (s3p) [right of=s2p] {\small $q_n'$};
\path (s0p) -- node[midway]{\small$\tr{\beta_1}{}$} (s1p)
-- node[midway]{\dots} (s2p)
-- node[midway]{\small$\tr{\beta_n}{}$} (s3p);
\path (s0) -- node[midway,sloped]{\small$\tr{\alpha}{}$} (s0p);
\path (s1) -- node[midway,sloped,gray]{\small$\tr{\alpha}{}$} (s1p);
\path (s2) -- node[midway,sloped,gray]{\small$\tr{\alpha}{}$} (s2p);
\path (s3) -- node[midway,sloped,gray]{\small$\tr{\alpha}{}$} (s3p); \end{tikzpicture}
\end{wrapfigure}~ \begin{itemize} \item In any state $q_n$ reachable from $q$ via pruned actions $\beta_1,\dots,\beta_n \in A\setminus P_q$, all actions
$\alpha \in P_q$ \concept{commute} with the pruned actions $\beta_1,\dots,\beta_n$
and \item at least one action $\alpha\in P_q$ remains enabled in $q_n$. \end{itemize} The first property ensures that the pruned actions $\beta_1,\dots,\beta_n$ are still enabled after $\alpha$ and lead to the same state ($q_n'$), i.e., the order of executing $\beta_1,\dots,\beta_n$ and $\alpha$ is irrelevant. The second avoids that deadlocks are missed when pruning states $q_1,\ldots,q_n$. To compute the POR set $P_q$ without computing pruned states $q_1,\ldots,q_n$ (which would defeat the purpose of the reduction it is trying to attain in the first place), \emph{Stubborn POR uses static analysis to `predict' the future from $q$, i.e., to over-estimate the $q$-reachable actions $A\setminus P_q$, e.g.: $\beta_1,..,\beta_n$.}
\begin{figure}
\caption{Transition systems of $\mathit{program1}$ (left) and $\mathit{program2}$ (right).
Thick lines show optimal (Stubborn set) POR. Curly lines show a TR (not drawn in the right figure).
}
\label{f:lipton}
\end{figure}
Lipton or transaction reduction (TR)~\cite{lipton}, on the other hand, identifies sequential blocks in the actions $\actions_i$ of each thread $i$ that can be grouped into transactions. A transaction $\alpha_1..\alpha_k..\alpha_n \in \actions_i^*$ is replaced with an atomic action $\alpha$ which is its sequential composition, i.e. $\alpha = \alpha_1\circ..\circ\alpha_k\circ..\circ\alpha_n$. Consequently, any trace $q_1 \tr{\alpha_1}{} q_2 \tr{\alpha_2}{} \ldots\tr{\alpha_k}{}\ldots \tr{\alpha_{n-1}}{} q_n \tr{\alpha_n}{}q_{n+1}$ is replaced by $q_1 \tr{\alpha}{} q_{n+1}$, making state $q_2, \ldots, q_n$ \concept{internal}. Thereby, internal states disallow all interleavings of other threads $j\neq i$, i.e., \concept{remote actions} $\actions_j$ are not fired at these states. Similar to POR, this pruning can reduce reachable states. Additionally, internal states can also be discarded when irrelevant to the model checking problem.
In the database terminology of origin~\cite{papadimitriou}, a transaction must consist of: \begin{itemize} \item A \concept{pre-phase},
containing actions $\alpha_1..\alpha_{k-1}$ that may gather required resources, \item a single \emph{commit action} $\alpha_k$
possibly interfering with remote actions, and \item a \concept{post-phase} $\alpha_{k+1}..\alpha_{n}$,
possibly releasing resources (e.g. via unlocking them). \end{itemize} In the pre- and post-phase, the actions (of a thread $i$) must commute with all remote behavior, i.e. \emph{all actions $\actions_j$ of all other threads $j\neq i$ in the system}.
\emph{TR does not dynamically `predict' the possible future remote actions, like POR does.} This makes the commutativity requirement needlessly stringent, as the following example shows: Consider $\mathit{program1}$ consisting of two threads. All actions of one thread commute with all actions of the other because only local variables are accessed. \autoref{f:lipton} (left) shows the POR and TR of this system. \[\footnotesize \mathit{program1} := \ccode{\textbf{if} (fork()) \{a = 0; b = 2; \} \textbf{else} \{ x = 1; y = 2; \}} \]
\[\footnotesize \mathit{program2} := \ccode{a = b = x = y = 0; \textbf{if} (fork()) \{ $\mathit{program1}$; \}} \] Now assume that a parallel assignment is added as initialization code yielding $\mathit{program2}$ above. \autoref{f:lipton} (right) shows again the reductions. Suddenly, all actions of both threads become dependent on the initialization, i.e. neither action \ccode{a = 0;} nor action \ccode{b = 2;} commute with actions of other threads, spoiling the formation of a transaction \ccode{\textsf{atomic}\{a = 0; b = 2;\}} (idem for \ccode{\textsf{atomic}\{x = 1; y = 2;\}}). Therefore, TR does not yield any reduction anymore (not drawn). Stubborn set POR~\cite{valmari1988error}, however, still reduces \textit{program2} like \textit{program1}, because, using static analysis, it `sees' that the initialization cannot be fired again.\footnote {\scriptsize\textit{program2} is a simple example. Yet various programming patterns lead to similar behavior, e.g.: lazy initialization, atomic data structure updates and load balancing~\cite{vmcai}.}
In the current paper, we show how TR can be made dynamic in the same sense as stubborn set POR~\cite{parle89}, so that the previous example again yields the maximal reduction. Our work is based on the prequel~\cite{vmcai}, where we instrument programs
in order to obtain dynamic TR for \emph{symbolic model checking}. While \cite{vmcai} premiered dynamically growing and shrinking transactions,
its focus on symbolic model checking complicates a direct comparison with other dynamic techniques such as POR. The current paper therefore extends this technique to enumerative model checking, which allows us to get rid of the heuristic conditions from~\cite{vmcai} by replacing them with the more general stubborn set POR method.
While we can reduce the results in the current paper to the reduction theorem of~\cite{vmcai}, the new focus on enumerative model checking provides opportunities to tailor reductions on a per-state basis and investigate TR more thoroughly.\footnote{\label{fn:symbolic}Symbolic approaches can be viewed as reasoning over sets of states, and therefore cannot easily support fine-grained per-state POR/TR analyses.} This leads to various contributions:
\begin{enumerate}
\item A `Stubborn' TR algorithm (STR)
more dynamic/general than TR in~\cite{vmcai}. \item An open source implementation of (stubborn) TR in the model~checker~\textsc{LTSmin}. \item Experiments comparing TR and POR for the first time in \autoref{sec:experiments}.
\end{enumerate}
Moreover, in \autoref{sec:comparison}, we show analytically that unlike stubborn POR: \begin{enumerate} \item Computing optimal stubborn TR is tractable and reduction is not heuristic. \item Stubborn TR can exploit right-commutativity and prune (irrelevant) deadlocks (while still preserving invariants as per \autoref{th:alg}).
\end{enumerate} On the other hand, stubborn POR is still more effective for checking for absence of deadlocks and reducing massively parallel systems. Various open problems, including the combination of TR and POR, leave room for improvement.
The current paper is the technical report version of \cite{laarman18}. Proofs of theorems and lemmas can be found in \autoref{app:proofs}.
\section{Preliminaries} \label{sec:prelim}
\paragraph{Concurrent transition systems} We assume a general process-based semantic model that can accommodate various languages. A concurrent transition system (CTS) for a finite set of processes \procs is a tuple $\cts \,\triangleq\, \tuple{X, \trans, \actions, q_0}$ with finitely many actions $\actions \,\triangleq\, \biguplus_{i\in P} \actions_i$. Transitions are relations between states and actions: $\trans\subseteq X\times\actions\times X$. We write $\alpha_i$ for $\alpha \in \actions_i$, $q \tr{\alpha}i q'$ for $\tuple{q,\alpha_i,q'}\in T$, $\trans_i$ for $\trans \cap (X\times\actions_i\times X)$, $\trans_\alpha$ for $\trans \cap (X\times\set{\alpha}\times X)$, $\tr{\alpha}{}$ for $\set{\tuple{q,q'}\mid \tuple{q,\alpha,q'}\in T}$, and $\tr{}{i}$ for $\set{\tuple{q,q'}\mid \tuple{q,\alpha,q'}\in T_i}$.
State space exploration can be used to show invariance of a property $Y$, e.g., expressing mutual exclusion, written: $\reach(\cts) \models Y$. This is done by finding all reachable states $q$, i.e., $\reach(\cts)\,\triangleq\, \set{q \mid q_0 \rightarrow^* q}$, and showing that $q\in Y$ for each of them.
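As a purely illustrative aside (this is not the algorithm studied in this paper), the reachability check just described can be sketched as a standard breadth-first exploration; the successor function \texttt{succ} and the predicate \texttt{Y} below are assumed to be supplied by a modelling front-end and are not defined here.
\begin{verbatim}
# Illustrative sketch of invariant checking by explicit-state
# reachability: explore reach(CTS) and test the property Y in every
# state.  succ(q), yielding (action, state) pairs, and the predicate
# Y(q) are assumed inputs.
from collections import deque

def check_invariant(q0, succ, Y):
    seen, frontier = {q0}, deque([q0])
    while frontier:
        q = frontier.popleft()
        if not Y(q):
            return q               # counterexample: reach(CTS) violates Y
        for _alpha, q2 in succ(q):
            if q2 not in seen:
                seen.add(q2)
                frontier.append(q2)
    return None                    # Y holds in all reachable states
\end{verbatim}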
\paragraph{Notation} We let $\en(q)$ be the set of actions enabled at $q$: \set{\alpha\mid \exists \tuple{q,\alpha,q'}\in T } and $\overline{\en}(q) \,\triangleq\, \actions \setminus \en(q)$. We let $R\circ Q$ and $RQ$ denote the \concept{sequential composition} of two binary relations $R$ and $Q$, defined~as: $ \{ (x,z)\,\vert\,\exists y\colon (x,y)\in R \wedge (y,z)\in Q \}\, $. Let $R \subseteq S \times S$ and $X \subseteq S$. Then left restriction of $R$ to $X$ is $X \lrestr R \,\triangleq\, R\,\cap\,(X \times S)$ and right restriction is $R\rrestr X \,\triangleq\, R\,\cap\,(S \times X)$.
The complement of $X$ is denoted $\overline X \,\triangleq\, S \setminus X$ (the universe of all states remains implicit in this notation). The inverse of $R$ is $R^{-1}\,\triangleq\, \set{\tuple{x,y}\mid \tuple{y,x}\in R}$.
\paragraph{POR relations} Dependence is a well-known relation used in POR. Two actions $\alpha_1,\alpha_2$ are dependent if there is a state where they do not commute, hence we first define commutativity. Let $c \,\triangleq\, \set{{q}\mid\exists \tuple{q,\alpha_1,q'},\tuple{q,\alpha_2,q''}\in \trans}$. Now: \begin{IEEEeqnarray}{rCrll} \tr{\alpha_1}{} \comm \tr{\alpha_2}{}
&\,\triangleq\, & \,\,c\lrestr \tr{\alpha_1}{} \circ \tr{\alpha_2}{}
&~=~ c \lrestr \tr{\alpha_2}{} \circ \tr{\alpha_1}{}
&\text{~($\alpha_1$, $\alpha_2$
strongly-commute)} \nonumber\\* \tr{\alpha_1}{} \bowtie \tr{\alpha_2}{}
&\,\triangleq\, & \tr{\alpha_1}{} \circ \tr{\alpha_2}{}
&~=~ \tr{\alpha_2}{} \circ \tr{\alpha_1}{}
&\text{~($\alpha_1$, $\alpha_2$
commute, also $\alpha_1 \bowtie \alpha_2$)} \nonumber\\* \tr{\alpha_1}{} \rcomm \tr{\alpha_2}{}
&\,\triangleq\, &\tr{\alpha_1}{} \circ \tr{\alpha_2}{}
&~\subseteq~ \tr{\alpha_2}{} \circ \tr{\alpha_1}{}
&\text{~($\alpha_1$
right-commutes with $\alpha_2$)}\nonumber\\* \tr{\alpha_1}{} \lcomm \tr{\alpha_2}{}
&\,\triangleq\, &\tr{\alpha_1}{} \circ \tr{\alpha_2}{}
&~\supseteq~ \tr{\alpha_2}{} \circ \tr{\alpha_1}{}
&\text{~($\alpha_1$
left-commutes with $\alpha_2$)}\nonumber
\end{IEEEeqnarray} \begin{minipage}{.485\linewidth} \begin{align} \text{\input{figures/right-commute}}\label{eq:right} \end{align} \end{minipage} \begin{minipage}{.515\linewidth} \begin{align} \text{\input{figures/strong-commute}}\label{eq:strong} \end{align} \end{minipage} Left / right commutativity allows actions to be prioritized / delayed over other actions without affecting the end state. \autoref{eq:right} illustrates this by quantifying over the states: Action $\alpha_1$ right-commutes with $\alpha_2$, and vice versa $\alpha_2$ left-commutes with $\alpha_1$. Full commutativity ($\bowtie$) always allows both delay and prioritization for any serial execution of $\alpha_1, \alpha_2$, while strong commutativity only demands full commutativity when both actions are simultaneously enabled, as shown in \autoref{eq:strong} for deterministic actions $\alpha_1$/$\alpha_2$ (\autoref{eq:strong} is only for an intuition and does not illustrate the non-deterministic case, which is covered by $\comm$).
Left / right / strong \concept{dependence} implies lack of left / right / strong commutativity, e.g.: $\alpha_1 \not\bowtie \alpha_2$.
Note that typically: $\forall i, \alpha,\beta \in \actions_i \colon \alpha \not\bowtie \beta$ due to e.g. a shared program counter. Also note that if $\alpha_1 \rcomm \alpha_2$, then $\alpha_1$ never enables $\alpha_2$, while strong commutativity implies that neither $\alpha_1$ disables $\alpha_2$, nor vice versa.
A lock(/unlock) operation right(/left)-commutes with other locks and unlocks.
Indeed, a lock never enables another lock or unlock. Neither do unlocks ever disable other unlocks or locks. In the absence of an unlock however, a lock also attains left-commutativity as it is mutually disabled by other locks. Because of the same disabling property, two locks however do not strongly commute.
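These claims can also be checked mechanically on a toy model. In the illustrative sketch below (not taken from the paper), a state is simply the owner of a single mutex ($0$ meaning free), actions are relations given as sets of state pairs, and the inclusions defining right and left commutativity are tested directly.
\begin{verbatim}
# Illustrative check of the lock/unlock (non-)commutativity claims on a
# toy model: a state is the owner of one mutex (0 = free); an action is
# a relation, i.e. a set of (state, state') pairs.

def compose(r, s):                 # sequential composition r ; s
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def right_commutes(a, b):          # a ; b  is a subset of  b ; a
    return compose(a, b) <= compose(b, a)

def left_commutes(a, b):           # a ; b  is a superset of  b ; a
    return compose(a, b) >= compose(b, a)

lock1, unlock1 = {(0, 1)}, {(1, 0)}     # thread 1 acquires / releases
lock2, unlock2 = {(0, 2)}, {(2, 0)}     # thread 2 acquires / releases

# a lock right-commutes with remote locks and unlocks ...
assert right_commutes(lock1, lock2) and right_commutes(lock1, unlock2)
# ... an unlock left-commutes with them ...
assert left_commutes(unlock1, lock2) and left_commutes(unlock1, unlock2)
# ... but a lock does not left-commute with a remote unlock,
# since the unlock may enable it:
assert not left_commutes(lock1, unlock2)
\end{verbatim}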
Finally, a \concept{necessary enabling set} (NES) of an action $\alpha$ and a state $q_1$ is a set of actions that must be executed for $\alpha$ to become enabled,~formally:\\
$\forall E\in \nes_{q_1}(\alpha), q_1\xrightarrow{\alpha_1,..,\alpha_n} q_2\colon
\alpha \in \dis(q_1) \land \alpha\in\en(q_2) \implies E \cap \set{\alpha_1,..,\alpha_n}\neq\emptyset$. An example of an action $\alpha$ with two NESs $E_1,E_2 \in \nes_q(\alpha)$ is a command guarded by $g$ in an imperative language: When $\alpha\in \dis(q)$, then either its guard $g$ does not hold in $q$, and $E_1$ consists of all actions enabling $g$, or its program counter is not activated in $q$, and $E_2$ consists of all actions that label the edges immediately before $\alpha$ in the CFG of the process that $\alpha$ is part of.
\paragraph{POR} POR uses the above relations to find a subset of enabled actions $\por(q) \subseteq \en(q)$ sufficient for preserving the property of interest. Commutativity is used to ensure that the sets $\por(q)$ and $\en(q) \setminus \por(q)$ commute, while the NES is used to ensure that this mutual commutativity holds \emph{in all future behavior}. The next section explains how stubborn set POR achieves this.
POR gives rise to a CTS $\widetilde\cts \,\triangleq\, \langle X, \widetilde\trans, \actions, q_0 \rangle$, $\widetilde T \,\triangleq\, \set{\tuple{q,\alpha,q'}\in T \mid \alpha\in\por(q)}$, abbreviated $q \xdashrightarrow{\alpha} q'$. It is indeed reduced, since we have $\reach(\widetilde\cts) \subseteq \reach(\cts)$.
\paragraph{Transaction reduction}
(Static) transaction reduction was devised by Lipton~\cite{lipton}.
It merges multiple sequential statements into one atomic operation, thereby radically reducing the reachable states. An action $\alpha$ is called a right/left mover if and only if it commutes with actions from all other threads $j\neq i$:
\[\tr{\alpha}{i} \,\,\rcomm\,\, \bigcup_{j\neq i} \tr{}{j}
\text{~($\alpha$ is a right mover)\phantom{XX}}
\tr{\alpha}{i} \,\,\lcomm\,\, \bigcup_{j\neq i} \tr{}{j}
~\text{~($\alpha$ is a left mover)} \]
\noindent \concept{Both-movers} are transitions that are both left and right movers, whereas \concept{non-movers} are neither. The sequential composition of two movers is also a corresponding mover, and vice versa. Moreover, one may always safely classify an action as a non-mover, although having more movers yields better reductions.
Examples of right-movers are locks, P-semaphores and synchronizing queue operations. Their counterparts (unlock, V-semaphore and enqueue ops) are left-movers. Their behavior is discussed above using locks and unlocks as an example.
Lipton reduction only preserves halting. We present Lamport's~\cite{lamport-lipton} version, which preserves safety properties such as $\Box Y$, i.e. $Y$ is an invariant.
Any sequence $\alpha_1,\ldots, \alpha_n$
can be \concept{reduced} to a single action $\alpha$ s.t. $\tr{\alpha}{i} = \tr{\alpha_1}{i}\circ \ldots \circ\tr{\alpha_n}{i}$ (i.e. a compound statement with the same local behavior), if for some $1 \le k < n$: \begin{lipton}[parsep=0pt] \item\label{L1}
actions before the commit $\alpha_k$ are right movers:
$\tr{\alpha_1}{i}\circ \ldots\circ \tr{\alpha_{k-1}}{i} \,\,\rcomm\,\, \tr{}{\neq i}$, \item\label{L2}
actions after the commit $\alpha_{k}$ are left movers:
$\tr{\alpha_{k+1}}{i}\circ \ldots\circ \tr{\alpha_{n}}{i} \,\,\lcomm\,\, \tr{}{\neq i}$, \item\label{L3}
actions after $\alpha_1$ do not block:
$\forall q\, \exists q' \colon
q\tr{\alpha_{1}}{i}\circ \ldots\circ \tr{\alpha_{n}}{i}q'$, and
\vphantom{$\lcomm\bigcup_{j\neq i}$} \item\label{L4}
$Y$ is not disabled by $\tr{\alpha_1}{i}\circ \ldots\circ \tr{\alpha_{k-1}}{i}$, nor enabled by
$\tr{\alpha_{k+1}}{i}\circ \ldots\circ \tr{\alpha_{n}}{i}$.\parbox{0pt}{\vphantom{$\lcomm\bigcup_{j\neq i}$}} \end{lipton}
\begin{wrapfigure}{r}{5cm}
\hspace{-1em} \scalebox{.75}{ \begin{tikzpicture}
\tikzstyle{e}=[minimum width=0cm]
\tikzstyle{every node}=[font=\small, node distance=.9cm, inner sep=1pt]
\node (s1) [e] {$q_1$};
\node (s2) [right of=s1,e] {$q_2$};
\node (s3) [right of=s2,e] {$q_3$};
\node (s4) [right of=s3,e] {$q_4$};
\node (s6) [right of=s4,e] {$q_5$};
\node (sx) [right of=s6,e] {$q_6$};
\node (s7) [right of=sx,e] {$q_7$};
\node (s8) [right of=s7,e] {$q_8$};
\path (s1.east) edge[->,gray] node[above,pos=.45,gray]{$\beta_1$} (s2.west)
(s2.east) edge[->] node[above,pos=.45](a){$\alpha_1$} (s3.west)
(s3.east) edge[->,gray] node[above,pos=.45,gray](e){$\beta_2$} (s4.west)
(s4.east) edge[->,gray] node[above,pos=.45,gray]{$\beta_3$} (s6.west)
(s6.east) edge[->] node[above,pos=.45]{$\alpha_2$} (sx.west)
(sx.east) edge[->,gray] node[above,pos=.45,gray]{$\beta_4$} (s7.west)
(s7.east) edge[->] node[above,pos=.45]{$\alpha_3$} (s8.west);
\node (s1a) [node distance=.75cm,below of=s1,e] {$q_1$};
\node (s2a) [right of=s1a,e] {$q_2$};
\node (s3a) [right of=s2a,e] {$q_3'$};
\node (s4a) [right of=s3a,e] {$q_4$};
\node (s6a) [right of=s4a,e] {$q_5$};
\node (sxa) [right of=s6a,e] {$q_6$};
\node (s7a) [right of=sxa,e] {$q_7$};
\node (s8a) [right of=s7a,e] {$q_8$};
\path (s1a.east) edge[->,gray] node[above,gray]{$\beta_1$} (s2a.west)
(s2a.east) edge[->,gray] node[above,gray](ea){$\beta_2$} (s3a.west)
(s3a.east) edge[->] node[above](aa){\vphantom{$\beta$}$\alpha_1$} (s4a.west)
(s4a.east) edge[->,gray] node[above,gray](eea){$\beta_3$} (s6a.west)
(s6a.east) edge[->] node[above,pos=.45]{$\alpha_2$} (sxa.west)
(sxa.east) edge[->,gray] node[above,pos=.45,gray]{$\beta_4$} (s7a.west)
(s7a.east) edge[->] node[above,pos=.45]{$\alpha_3$} (s8a.west);
\path (a.south) edge[->,thick,dotted] node[pos=.45]{} (aa.north)
(e.south) edge[->,thick,dotted] node[pos=.45]{} (ea.north);
\node (s1b) [node distance=.75cm,below of=s1a,e] {$q_1$};
\node (s2b) [right of=s1b,e] {$q_2$};
\node (s3b) [right of=s2b,e] {$q_3'$};
\node (s4b) [right of=s3b,e] {$q_4'$};
\node (s6b) [right of=s4b,e] {$q_5$};
\node (sxb) [right of=s6b,e] {$q_6$};
\node (s7b) [right of=sxb,e] {$q_7$};
\node (s8b) [right of=s7b,e] {$q_8$};
\path (s1b.east) edge[->,gray] node[above,gray]{$\beta_1$} (s2b.west)
(s2b.east) edge[->,gray] node[above,gray]{$\beta_2$} (s3b.west)
(s3b.east) edge[->,gray] node[above,gray](eb){$\beta_3$} (s4b.west)
(s4b.east) edge[->] node(ab)[above]{\vphantom{$\beta$}$\alpha_1$} (s6b.west)
(s6b.east) edge[->] node(b2)[above,pos=.45]{$\alpha_2$} (sxb.west)
(sxb.east) edge[->,gray] node(b1)[above,pos=.45,gray]{$\beta_4$} (s7b.west)
(s7b.east) edge[->] node(b2)[above,pos=.45]{$\alpha_3$} (s8b.west);
\path (eea.south) edge[->,thick,dotted] node[pos=.45]{} (eb.north)
(aa.south) edge[->,thick,dotted] node[pos=.45]{} (ab.north);
\node (s1c) [node distance=.75cm,below of=s1b,e] {$q_1$};
\node (s2c) [right of=s1c,e] {$q_2$};
\node (s3c) [right of=s2c,e] {$q_3'$};
\node (s4c) [right of=s3c,e] {$q_4'$};
\node (s7c) [node distance=.75cm,below of=s7b,e] {$q_7'$};
\node (s8c) [right of=s7c,e] {$q_8$};
\path (s1c.east) edge[->,gray] node[above,gray]{$\beta_1$} (s2c.west)
(s2c.east) edge[->,gray] node[above,gray]{$\beta_2$} (s3c.west)
(s3c.east) edge[->,gray] node[above,gray](eb){$\beta_3$} (s4c.west)
(s4c.east) edge[->] node(ab)[above,pos=.1]{$\alpha_1$}
node(c1)[above,pos=.3]{$\circ$}
node(c1)[above,pos=.5]{$\alpha_2$}
node(c1)[above,pos=.7]{$\circ$}
node(c1)[above,pos=.9]{\vphantom{$\beta$}$\alpha_3$} (s7c.west)
(s7c.east) edge[->,gray] node(c2)[above,pos=.45,gray]{$\beta_4$} (s8c.west);
\path (b1.south) edge[->,thick,dotted] node[pos=.45]{} (c2.north)
(b2.south) edge[->,thick,dotted] node[pos=.45]{} (c1.north); \end{tikzpicture} }
\end{wrapfigure} The example (right) shows the evolution of a trace when a reduction with $n\hspace{-1mm}=\hspace{-1mm}3$, $k\hspace{-1mm}=\hspace{-1mm}2$ is applied. Actions $\beta_1,\ldots,\beta_4$ are remote. The pre-action $\alpha_1$ is first moved towards the commit action $\alpha_2$. Then the same is done with the post-action $\alpha_3$. \textbf{L1} resp. \textbf{L2} guarantee that the trace's end state $q_8$ remains invariant, \textbf{L3} guarantees its existence and \textbf{L4} guarantees that e.g. $q_4 \notin Y \implies q_3' \notin Y$ and $q_6 \notin Y \implies
q_7' \notin Y$ (preserving invariant violations $\neg Y$ in the reduced system without $q_4$ and $q_6$).
The subsequent section provides a dynamic variant of TR.
\newcommand\trp{\stackrel{}{\longrightarrow_{i}'}} \defmath\phase{\mathit{h}}
\section{Stubborn Transaction Reduction} \label{sec:str}
The current section gradually introduces stubborn transaction reduction. First, we introduce a stubborn set definition that is parametrized with different commutativity relations. In order to have enough luggage to compare POR to TR in \autoref{sec:comparison}, we elaborate here on various aspects of stubborn POR and compare our definitions to the original stubborn set definitions.
We then provide a definition for \concept{dynamic left and right movers}, based on the stubborn set parametrized with left and right commutativity. Finally, we provide a definition of a \concept{transaction system}, show how it is reduced and provide an algorithm to do so. This demonstrates that TR can be made dynamic in the same sense as stubborn sets are dynamic. We focus in the current paper on the preservation of invariants. But since deadlock preservation is an integral part of POR, it is addressed as well.
\subsection{Parametrized stubborn sets}\label{s:pss}
We use stubborn sets as they have advantages compared to other traditional POR techniques~\cite[Sec.~4]{intuition}.
We first focus on a basic definition of the stubborn set that only preserves deadlocks. The following version is parametrized (with~$\star$).
\input{defs/ss}
Notice that a stubborn set $B$ includes actions disabled in $q$ to reason over future behavior with \textbf{\ref{i:d2}}: Actions $\alpha\in B$ commute with $\beta\in \en(q)\setminus B$ by~\textbf{\ref{i:d1}}, but also with $\beta'\in \en(q')$ for $q\tr{\beta}{}q'$, since \textbf{\ref{i:d2}} ensures that $\beta$ cannot enable any $\gamma\in B$ (ergo $\beta'\notin B$). \autoref{th:valmari} formalizes this. From $B$, the reduced system is obtained~by taking $\por(q) \,\triangleq\, \en(q)\cap B$: It preserves deadlocks. But not all $\star$-parametrizations lead to correct reductions w.r.t. deadlock preservation. We therefore briefly relate our definition to the original stubborn set definitions. The above definition yields three interpretations of a set $B\subseteq \actions$ for a state $q$. \begin{itemize} \item If $\st^\leftrightarrow_q(B)$, then $B$ coincides with the original \emph{strong} stubborn
set~\cite{valmari1988error,parle89}. \item If $\st^\leftarrow_q(B)$, then $B$ approaches the weak stubborn set in~\cite{guardpor2},
a simplified version of~\cite{valmari1}, except that it
lacks a necessary \emph{key action} (from \cite[Def.~1.17]{valmari1}).\footnote{ \textbf{\ref{i:d0}} is generally not preserved with left-commutativity ($\star = \leftarrow$), as $\beta \notin B$ may disable $\alpha\in B$. Consequently, $\beta$ may lead to a deadlock. Because POR prunes all $\beta \notin B$, $\st^\leftarrow_q(B)$ is not a valid reduction (it may prune deadlocks). The key-action requirement repairs this by demanding at least one \emph{key action} $\alpha\in B$ that strongly commutes, i.e., $\forall \beta\notin B \colon \alpha \comm \beta$, and that therefore, by virtue of strong commutativity, cannot be disabled by any $\beta\notin B$. } \item If $\st^\rightarrow_q(B)$, then $B$ may also yield an invalid POR, as
it would consider two locking operations independent and thus potentially miss a deadlock. \end{itemize}
This indicates that POR, unlike TR, cannot benefit from right-commutativity. The consequences of this difference are further discussed in \autoref{sec:comparison}. The strong version of our bare-bone stubborn set definition, on the other hand, is equivalent to the one presented in~\cite{valmari1} and thus preserves the `stubbornness' property~(\autoref{th:valmari}). If we define \concept{semi-stubbornness}, written $\sst^\star_q$, as stubbornness without the \textbf{\ref{i:d0}} requirement,
then we can prove a similar theorem for semi-stubborn sets~(\autoref{th:sst}).\footnote{ We will show that semi-stubbornness, i.e., $\sst^\shortleftarrow_q(B)$ (without key), is sufficient for stubborn TR, which may therefore prune deadlocks. In contrast, invariant-preserving stubborn POR is strictly stronger than the basic stubborn set (see below), and hence also preserves all deadlocks. (This is relevant for the POR/TR comparison in \autoref{sec:comparison}.) }
This `stubbornness' of semi-$\shortleftarrow$ and semi-$\shortrightarrow$ stubborn sets is used below to define dynamic movers. First, we briefly return our attention to stubborn POR, recalling how it preserves properties beyond deadlocks and how~$\st_q$ is computed.
\begin{theorem}[\cite{valmari1}] \label{th:valmari} If $B\subseteq \actions$, $\st^{\leftrightarrow}_q(B)$ and $q\tr{\beta}{}q'$ for $\beta\notin B$, then $\st^{\leftrightarrow}_{q'}(B)$.
\end{theorem} \input{defs/sst}
\paragraph{Stubborn sets for safety properties} To preserve a safety property such as $\Box Y$ (i.e. $Y$ is invariant), a stubborn set $B$ ($\st^{\leftrightarrow}_q(B) = \true$) needs to satisfy two additional requirements~\cite{explosion} called \textbf{S} for \concept{safety} and \textbf{V} for \concept{visibility}.
To express \textbf{V}, we denote actions enabling $Y$ with $\actions_\oplus^Y$ and those disabling the proposition with $\actions_\ominus^Y$. Those combined form the visible actions: $\actions^Y_{\mathit{vis}} \,\triangleq\, \actions_\ominus^{Y}\cup \actions_\oplus^{Y}$. For \textbf{S}, recall that $q \xdashrightarrow\alpha q'$ is a reduced transition. States violating \textbf{S} are called \concept{ignoring states}.
\begin{itemize} \item[\textbf S] $\forall \beta\hspace{-.8mm}\in\hspace{-.8mm}\en(q) \colon \exists q' \colon
q \portrans^{\hspace{-1mm}*}\hspace{-1mm}q' \land \beta\hspace{-.8mm}\in\hspace{-.8mm}\por(q')$
\textit{(never keep ignoring pruned actions)} \item[\textbf V] $B\cap\en(q)\cap\actions_{\mathit{vis}}^Y \neq \emptyset \implies \actions_{\mathit{vis}}^Y \subseteq B$
\textit{(either all or no visible, enabled actions)} \end{itemize}
\paragraph{Computing stubborn sets and heuristics} POR is not deterministic, as we may compute many different valid stubborn sets for the same state, and we can even select different ignoring states to enforce the \textbf{S} proviso (i.e. the state $q'$ in the \textbf{S} condition). A general approach to obtain good reductions is to compute a stubborn set with the fewest enabled actions, so that $\por(q)$ is smallest and as many actions as possible are pruned in $q$. However, this does not necessarily lead to the best reductions, as observed several times~\cite{explosion,fairtesting,varpa}. Nonetheless, this is the best heuristic currently available, and it generally yields good results~\cite{guardpor2}.
The $\forall\exists$-recursive structure of Def.~\ref{def:ss} suggests that establishing the smallest stubborn set is an \NP-complete problem, which indeed it is~\cite{optimal}. Various algorithms exist to heuristically compute small stubborn sets~\cite{guardpor2,deletion}. Only the deletion algorithm~\cite{deletion} provides guarantees on the returned sets (no strict subset of the returned set is also stubborn). On the other hand, the guard-based approach~\cite{guardpor2} has been shown to deliver good reductions in reasonable time.
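For concreteness, the following minimal Python sketch shows the classical closure-style construction of a stubborn set; it is neither the deletion algorithm~\cite{deletion} nor the guard-based approach~\cite{guardpor2}, but merely illustrates the $\forall\exists$ structure discussed above. The helpers \texttt{enabled}, \texttt{non\_commuting} and \texttt{nes} are hypothetical, model-specific callbacks providing the enabled actions, the actions that may violate \textbf{\ref{i:d1}} for an enabled action, and one necessary enabling set (\textbf{\ref{i:d2}}) for a disabled action, respectively.
\begin{verbatim}
def closure_stubborn(q, seed, enabled, non_commuting, nes):
    """Classical closure-style construction of a stubborn set B in state q.

    Hypothetical, model-specific helpers:
      enabled(q)          -> set of actions enabled in q
      non_commuting(q, a) -> actions that may violate D1 for an enabled a
      nes(q, a)           -> one necessary enabling set for a disabled a (D2)
    """
    B = {seed}               # start from a single (enabled) action
    work = [seed]
    while work:              # closure under the D1/D2 requirements
        a = work.pop()
        required = non_commuting(q, a) if a in enabled(q) else nes(q, a)
        for b in required:
            if b not in B:
                B.add(b)
                work.append(b)
    return B                 # the reduced set is enabled(q) & B
\end{verbatim}
In practice one would run such a construction from several seed actions and keep a result with few enabled actions, following the heuristic described above.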
To implement the \textbf{S} proviso, Valmari~\cite{valmari1} provides an algorithm~\cite[Alg.~1.18]{valmari1} that yields the fewest possible ignoring states, runs in linear time and can even be performed on-the-fly, i.e. while generating the reduced transition system. It is based on Tarjan's strongly connected component (SCC) algorithm~\cite{tarjan}.
The above methods are relevant for stubborn TR, as STR also needs to compute ($\star$-)stubborn sets and avoid ignoring states (recall \textbf{L3} from \autoref{sec:prelim}).
\subsection{Reduced transaction systems}
TR merges sequential actions into (atomic) transactions and in the process removes interleavings (at the states internal to the transaction) just like POR. We present a dynamic TR that decides to prolong transactions on a per-state~basis. We use stubborn sets to identify left and right moving actions in each state. Unlike stubborn set POR, and much like ample-set POR~\cite{katz-peled}, we rely on the process-based action decomposition to identify sequential parts of the system.
\newcommand\rem{\hspace{-.6mm}} Recall that actions in the pre-phase should commute to the right and actions in the post-phase should commute to the left with other threads. We use the notion of stubborn sets to define \concept{dynamic left and right movers} in \autoref{eq:left} and \ref{eq:exclude} for $\tuple{q,\alpha,q'}\in\trans_i $.
Both mover definitions are based on semi-stubborn sets. Dynamic left movers are state-based, requiring all outgoing local transitions to ``move'', whereas right movers are action-based, allowing different reductions for various non-deterministic paths.
The other technicalities of the definitions stem from the different premises of left and right movability (see \autoref{sec:prelim}). Finally, both dynamic movers exhibit a crucial monotonicity property, similar to previously introduced `stubbornness', as expressed by \autoref{lem:dlm} and \autoref{lem:drm}.
\input{defs/movers}
\input{defs/dlm}
\input{defs/drm}
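To make the shape of these definitions concrete, the following Python sketch spells out the two mover checks in the form in which they are used in the correctness proofs (\autoref{app:proofs}); it is an illustration only, not the implementation. The helpers \texttt{semi\_stubborn}, \texttt{enabled}, \texttt{actions\_of} and the generator \texttt{candidate\_sets} are hypothetical: a real implementation derives a suitable set $B$ with the deletion algorithm instead of enumerating candidates.
\begin{verbatim}
def is_left_mover(q, i, candidate_sets, semi_stubborn, enabled, actions_of):
    """lmv_i(q): some semi-left-stubborn set B whose enabled part equals the
    enabled actions of thread i (the form used in the appendix proofs)."""
    for B in candidate_sets(q, i):
        if (semi_stubborn(q, B, "left")
                and B & enabled(q) == actions_of(i) & enabled(q)):
            return True
    return False

def is_right_mover(q, alpha, q_next, i, candidate_sets,
                   semi_stubborn, enabled, actions_of):
    """rmv_i(q, alpha, q_next): some semi-right-stubborn set B containing
    alpha whose enabled part in q is exactly {alpha} and in q_next lies
    within thread i's actions (again, the form used in the proofs)."""
    for B in candidate_sets(q, i):
        if (semi_stubborn(q, B, "right")
                and alpha in B
                and B & enabled(q) == {alpha}
                and B & enabled(q_next) <= actions_of(i)):
            return True
    return False
\end{verbatim}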
To establish stubborn TR, \autoref{def:trs} first annotates the transition system \cts with thread-local phase information, i.e. one phase variable for each thread that is only modified by that thread.
Phases are denoted with N (for transaction-external states), R (for states in the pre-phase) and L (for states in the post-phase). Because phases now depend on the commutativity established via the dynamic movers of \autoref{eq:left} and \ref{eq:exclude}, the reduction (not included in the definition, but discussed below it) becomes dynamic. \autoref{lem:preserves} follows easily, as the definition does not yet enforce the reduction, but mostly `decorates' the transition system.
\begin{definition}[Transaction system]\label{def:trs} Let $H \,\triangleq\, \set{N, R, L} ^ P$ be an array of local phases. The transaction system is CTS $\cts'\,\triangleq\, \tuple{\tstates, \ttrans, \actions, \tsint}$ such that:
\input{defs/trs}
\noindent
\end{definition} \begin{lemma}\label{lem:preserves} \autoref{def:trs} preserves invariants: $\reach(\cts) \models\BoxY \Leftrightarrow\reach(\cts')\models \BoxY$. \end{lemma}
The conditions in \autoref{eq:2} and \autoref{eq:3} overlap on purpose, allowing us to enforce termination below. The transaction system effectively partitions the state space on the phase of each thread $i$, i.e.
$\overline{N_i} = L_i \cup R_i$ with $N_i \,\triangleq\, \set{ \tuple{q,\phase} \mid \phase_i = \textsf{N} }$, etc.
The definition of $\ttrans_i$ further ensures three properties (a sketch of the resulting phase update follows the list):
\begin{enumerate}[label=\Alph*.] \item\label{i:A} $L_i$ states do not transit to $R_i$ states as $h_i=L \implies h_i'\neq R$ by \autoref{eq:1}. \item Transitions ending in $R_i$ are dynamic right movers not disabling
$Y$~by~\autoref{eq:1}. \item Transitions starting in $L_i$ are dynamic left movers not enabling
$Y$~by~\autoref{eq:2}.
\end{enumerate}
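The following Python fragment sketches, under properties A--C above only, which successor phases $h_i'$ a local step may be assigned; it is not \autoref{def:trs} itself, whose precise side conditions (\autoref{eq:1}--\ref{eq:3}) are stricter. The helpers \texttt{is\_right\_mover}, \texttt{is\_left\_mover}, \texttt{disables\_Y} and \texttt{enables\_Y\_locally} are hypothetical stand-ins for the dynamic mover checks and the visibility tests.
\begin{verbatim}
def allowed_successor_phases(h_i, q, alpha, q_next, is_right_mover,
                             is_left_mover, disables_Y, enables_Y_locally):
    """Candidate phases h_i' of thread i after a local step q --alpha--> q_next.
    Encodes only properties A-C above; Def. (trs) itself is stricter."""
    phases = {"N"}           # closing the transaction is always a candidate
    # B: a step may end in the pre-phase R only if it is a dynamic right
    #    mover that does not disable Y; by A, L never flows back to R.
    if h_i != "L" and is_right_mover(q, alpha, q_next) and not disables_Y(alpha):
        phases.add("R")
    # C: a step may end in the post-phase L only if the target state is a
    #    dynamic left mover none of whose enabled local actions enables Y,
    #    so that every later step taken from L is a left mover not enabling Y.
    if is_left_mover(q_next) and not enables_Y_locally(q_next):
        phases.add("L")
    return phases
\end{verbatim}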
\noindent Thereby $\ttrans_i$ implements the (syntactic) constraints from Lipton's TR (see \autoref{sec:prelim}) dynamically in the transition system, except for \textbf{L3}. Let $\trp \,\triangleq\, \set{\tuple{q,q'} \mid \tuple{q,\alpha,q'} \in T_i' }$. Next, \autoref{th:reduction} defines the reduced transaction system (RTS), based primarily on the $\trtrans$ transition relation that only allows a thread $i$ to transit when all other threads are in an external state, thus eliminating interleavings (\brtrans additionally skips internal states). The theorem concludes that invariants are preserved given that a termination criterion weaker than \textbf{L3} is met: all $L_i$ states must reach an~$N_i$~state. Monotonicity of dynamic movers plays a key role in its proof.
\input{defs/rts}
\algrenewcommand\algorithmicprocedure{\textbf{proc}} \algrenewcommand\alglinenumber[1]{\scriptsize #1:}
\begin{algorithm}[t] \caption{Algorithm reducing a CTS to an RTS using $T_i'$ from \autoref{def:trs}.} \label{alg:rtrs}
\begin{minipage}{0.5\textwidth} \begin{algorithmic}[1] \State $V_1,V_2,Q_1,Q_2\colon X'$
\Procedure{Search}{$\cts\,\triangleq\, \tuple{X,\trans,\actions,\sint}$} \AssignState{$Q_1$}{$\set{ \tuple{\sint, N^\procs} }$} \AssignState{$V_1$}{$\emptyset$} \While{$Q_1\neq \emptyset$}
\AssignState{$Q_1$}{$Q_1 \setminus \set{\tuple{q,\phase}}$ \textbf{for} $\tuple{q,\phase}\in Q_1$}
\AssignState{$V_1$}{$V_1 \cup \set{\tuple{q,\phase}}$}
\State \textbf{assert}($\forall i \colon \phase_i = N $) \label{l:assert}
\For{$i \in P$}
\State \textsc{Transaction}(T, $\tuple{q,\phase}$, i)
\EndFor \EndWhile \State \textbf{assert}($V_1 = \reach(\brcts)$) \EndProcedure \Function{SCCroot}{$q, i$}
\State \textbf{return }$q$ is a root of bottom SCC $C$
\phantom{XxXXXX} \textbf{s.t.} $C \subseteq L_i \land C \subseteq V_2$ \EndFunction.
\algstore{alg2} \end{algorithmic} \end{minipage}
\begin{minipage}{0.53\textwidth} \begin{algorithmic}[1] \algrestore{alg2}
\Procedure{Transaction}{$T$, \tuple{q,\phase}, $i$}
\AssignState{$Q_2$}{$\set{ \tuple{q, \phase} }$}
\AssignState{$V_2$}{$\emptyset$} \While{$Q_2\neq \emptyset$}
\AssignState{$Q_2$}{$Q_2 \setminus \set{\tuple{q,\phase}}$ \textbf{for} $\tuple{q,\phase}
\in Q_2$}
\AssignState{$V_2$}{$V_2 \cup \set{\tuple{q,\phase}}$} \For{$\tuple{q,\alpha,q'} \in T_i$}
\State{\textbf{let} $h'$ \textbf{s.t.} $\tuple{\tuple{q,h}\hspace{-.5mm},\alpha,\tuple{q'\hspace{-.5mm},h'}}\in T'_i$\phantom{X}}
\If{$\textsc{SCCroot}(\tuple{q',h'}, i)$}
\AssignState{$\phase'_i$}{$N$}\label{l:ext}
\EndIf
\If{$\tuple{q'\hspace{-.5mm},h'}\not\sqsubseteq V_1 \cup V_2 \cup Q_1 \cup Q_2$}\label{l:sub2}
\AssignState{$Q_2$}{$Q_2 \cup \set{\tuple{q',h'}}$}
\EndIf
\If{$\phase'_i = N \land \tuple{q'\hspace{-.5mm},h'} \notin V_1\cup Q_1$}\label{l:propagate}
\AssignState{$Q_1$}{$Q_1 \cup \set{\tuple{q',h'}}$}
\EndIf \EndFor \EndWhile \EndProcedure \end{algorithmic} \end{minipage}
\end{algorithm}
\autoref{alg:rtrs} generates the RTS~$\brcts$ of \autoref{th:reduction} from a \cts.
The state space search is split into two: One main search, which only processes external states ($\bigcap_i N_i$), and an additional search (\textsc{Transaction}), which explores the transaction for a single thread $i$. Only when the transaction search encounters an external state is it propagated back to the queue $Q_1$ of the main search, provided it is new there (not yet in $V_1$, which is checked at Line~\ref{l:propagate}). The transaction search terminates early when an internal state $q$ is found to be subsumed by an external state already encountered in the outer search (see the $q\not\sqsubseteq V_1$ check at Line~\ref{l:sub2}). Subsumption is induced by the following order on phases, which is lifted to states and sets of states $X\subseteq X'$: $R \sqsubset L \sqsubset N$ with $a \sqsubseteq b \Leftrightarrow a = b \lor a \sqsubset b$, $\tuple{q,\rem\phase} \rem\sqsubseteq\rem \tuple{q',\rem\phase'} \Leftrightarrow q = q'\land \forall i\colon \phase_i \rem\sqsubseteq\rem \phase'_i$, and $q\rem\sqsubseteq\rem X\Leftrightarrow \exists q'\rem\rem\in\rem\rem X \colon q\rem\sqsubseteq\rem q'$ (for $q = \tuple{q,\rem \phase}$).
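A direct Python transcription of this subsumption check, with phase vectors represented as tuples of the strings R, L and N, might look as follows (a purely illustrative sketch, not \ltsmin{} code):
\begin{verbatim}
PHASE_ORDER = {"R": 0, "L": 1, "N": 2}        # R < L < N

def phase_leq(a, b):
    """a is subsumed by b on single phases."""
    return PHASE_ORDER[a] <= PHASE_ORDER[b]

def state_leq(s1, s2):
    """(q,h) subsumed by (q',h'): same base state, pointwise smaller phases."""
    (q1, h1), (q2, h2) = s1, s2
    return q1 == q2 and all(phase_leq(a, b) for a, b in zip(h1, h2))

def subsumed(s, stored):
    """s subsumed by the set 'stored': some stored state subsumes s."""
    return any(state_leq(s, t) for t in stored)
\end{verbatim}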
Termination detection is implemented using Tarjan's SCC algorithm as in~\cite{valmari1}. We chose not to obfuscate the search with the rather intricate details of that algorithm. Instead, we assume that there is a function \textsc{SCCRoot} which identifies a unique \concept{root} state in each bottom SCC composed solely of post-states. This state is then made external on Line~\ref{l:ext}, fulfilling the premise of \autoref{th:reduction} ($\forall q \in L_i \colon \exists q'\in N_i \colon q\hookrightarrow^*_i q' $). Combined with \autoref{lem:preserves} this yields \autoref{th:alg}.
\begin{theorem}\label{th:alg}
\autoref{alg:rtrs} computes $\reach(\brcts)$ s.t. $\reach(\cts)\models \BoxY \Longleftrightarrow \reach(\brcts)\models \BoxY$.
\end{theorem}
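For completeness, here is a naive, quadratic Python sketch of the \textsc{SCCRoot} test used in \autoref{alg:rtrs}; the actual implementation uses Tarjan's linear-time algorithm, as described above. \texttt{succ\_i} yields the thread-$i$ successors in the transaction system, \texttt{in\_L\_i} tests membership in $L_i$, and \texttt{V2} is the set of states visited by the inner search; the root is fixed as an arbitrary but deterministic representative, since the algorithm only needs one unique root per SCC.
\begin{verbatim}
def scc_root(s, succ_i, in_L_i, V2):
    """Is s the designated root of a bottom SCC C contained in L_i and V2?
    Quadratic sketch only; the implementation uses Tarjan's algorithm."""
    def reach(start):        # thread-i reachability restricted to L_i and V2
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in succ_i(u):
                if in_L_i(v) and v in V2 and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    if not (in_L_i(s) and s in V2):
        return False
    fwd = reach(s)
    scc = {u for u in fwd if s in reach(u)}   # SCC of s in the restricted graph
    # bottom: no thread-i step leaves the SCC
    bottom = all(v in scc for u in scc for v in succ_i(u))
    # any deterministic choice of representative makes the root unique
    return bottom and s == min(scc, key=repr)
\end{verbatim}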
Finally, while the transaction system exponentially blows up the number of syntactic states ($\neq$ reachable states) by adding local phase variables, the reduction completely hides this complexity as \autoref{th:removal} shows. Therefore, as soon as the reduction succeeds in removing a single state, we have by definition that $\sizeof{\reach(\brcts)} < \sizeof{\reach(\cts)}$. \autoref{th:removal} also allows us to simplify the algorithm by storing transition system states $X$ instead of transaction system states \tstates in $V_1$ and $Q_1$. \begin{theorem}\label{th:removal}
Let $N\,\triangleq\, \cap_{i} N_i$. We have $\sizeof{N} = \sizeof{S}$ and $\reach(\brcts)\subseteq \reach(\cts)$.
\end{theorem}
\section{Comparison between TR and POR} \label{sec:comparison}
Stubborn TR (STR) is dynamic in the same sense as stubborn POR, allowing for a better comparison of the two. To this end, we discuss various example types of systems that either TR or POR excels at. As a basis, consider a completely independent system with $p$ threads of $n-1$ operations each. Its state space has $n^p$ states. TR can reduce this state space to $2^p$ states, whereas POR yields $n \cdot p$ states. However, the question is also which kinds of systems are realistic and whether the reductions can be computed precisely and efficiently.
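For instance, with $p=3$ threads of $n-1=9$ local operations each, the full state space has $10^3=1000$ states; an ideal TR retains only the $2^3=8$ external states, whereas an ideal POR still visits roughly $10\cdot3=30$ states along a single interleaving.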
\paragraph{High parallelism vs Long sequences of local transitions}
POR has an advantage when $p \gg n$, as it can yield exponential reductions, though, e.g., thread-modular verification~\cite{Flanagan2002,Malkis2006} may become more attractive in those cases. Software verification often has to deal with many sequential actions, benefiting STR, especially when VM languages such as LLVM are used~\cite{vvt}.
\begin{wrapfigure}[5]{r}{1.5cm}
\hspace{-.2cm} \begin{tikzpicture}
\tikzstyle{e}=[minimum width=1cm]
\tikzstyle{every node}=[font=\small, node distance=1.5cm]
\node (s1) {$l_0$};
\node (s2) [above right of=s1,yshift=-.3cm] {$l_1$};
\node (s2p) [above right of=s1,yshift=-.8cm] {$..$};
\node (s3) [below right of=s1,yshift=.3cm] {$l_9$};
\node (s3p) [below right of=s1,yshift=.8cm] {$..$};
\path (s1) edge[->] node[sloped,pos=.48]{} (s2);
\path (s1) edge[->,dashed] node[sloped,pos=.48]{} (s2p);
\path (s1) edge[->,dashed] node[sloped,pos=.48]{} (s3p);
\path (s1) edge[->] node[sloped,pos=.48]{} (s3);
\path (s2) edge[bend left=-30,->] node[sloped,pos=.48]{} (s1);
\path (s3) edge[bend left=30,->] node[sloped,pos=.48]{} (s1); \end{tikzpicture} \end{wrapfigure} \paragraph{Non-determinism} In the pre-phase, TR is able to individually reduce mutually non-deterministic transitions of one thread due to \autoref{eq:1}, which, contrary to \autoref{eq:2}, considers individual actions of a thread. Consider the example on the right. It represents a system with nine non-deterministic steps in a loop. Assume one of them never commutes, but the others commute to the right. Stubborn TR is able to reduce all paths through the loop that pass over only the right-movers, even if they constantly yield new states (and interleavings).
\begin{wrapfigure}[15]{r}{4.6cm}
\scalebox{.75}{\hspace{-1.7em} \input{figures/excomp1} }
\caption{State space of \ccode{P(m); x=1; V(m); $\|$ P(m); x=2; V(m);}
with POR (thick lines) and TR (dashed lines) reductions.
} \label{f:locks} \end{wrapfigure} \paragraph{Left and right movers} While stubborn POR can handle left-commutativity using additional restrictions, STR can benefit from right-commutativity in the pre-phase and from left-commutativity in the post-phase. E.g., P/V-semaphores are right/left-movers (see \autoref{sec:prelim}). \autoref{f:locks} shows a system with ideal reduction using TR, and none with stubborn set POR.
\autoref{tab:sync} lists various synchronization constructs and their movability; thread create \& join have not been classified before. A small sketch of this lock-based encoding is given below.
\begin{table}[t] \caption{Movability of commonly used synchronization mechanisms}\label{tab:sync} \smaller
\begin{tabular}{l|p{9.5cm}} \toprule
\texttt{pthread\_create} & As this can be modeled with a mutex that is guarding the thread's code and is initially set to locked, the \texttt{create}-call is an unlock and thus a left-mover. \\ \texttt{pthread\_join} & Using locking similar to \texttt{create},
\texttt{join} becomes a lock and thus a
right-mover. \\ Re-entrant locks & Right / left movers~\cite{qadeer-transactions}\\ Wait/notify/notifyAll & All three can be split into right- and left-moving parts~\cite{qadeer-transactions}\\ \bottomrule \end{tabular}
\end{table}
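The following Python sketch illustrates the lock-based encoding behind the first two table rows; it is illustrative only, as the table refers to how these calls are modeled for a model checker, not to an actual runtime. The child's code is guarded by a lock that is initially held, so \texttt{create} amounts to a release (a left mover) and \texttt{join} to an acquire (a right mover):
\begin{verbatim}
import threading

start_gate = threading.Lock()   # guards the child's code, initially locked
done_gate = threading.Lock()    # released by the child when it terminates
start_gate.acquire()
done_gate.acquire()

def child():
    start_gate.acquire()        # the child's body only runs after 'create'
    # ... child's code ...
    done_gate.release()         # signals termination for 'join'

# spawning is just the runtime mechanism; in the model the child already
# exists but is blocked on start_gate until the gate is released
threading.Thread(target=child).start()

start_gate.release()            # 'create': an unlock (V), hence a left mover
done_gate.acquire()             # 'join':   a lock (P),  hence a right mover
\end{verbatim}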
\paragraph{Deadlocks} POR preserves all deadlocks, even when irrelevant to the property. TR does not preserve deadlocks at all, potentially allowing for better reductions preserving invariants. The following example deadlocks because of an invalid locking order. TR can still reduce the example to four states, creating maximal transactions.
On the other hand, POR must explore the deadlock.
{\centering $\footnotesize
~~~~~~~\ccode{l(m1);l(m2); x=1; u(m1);u(m2); $\|$ l(m2);l(m1); x=2; u(m1);u(m2);} $}
\paragraph{Processes} STR retains the process-based definition from its ancestors~\cite{lipton}, while stubborn POR can go beyond process boundaries to improve reductions and even supports process algebras~\cite{ssalgebra,guardpor2}.
In early attempts to solve the open problem of a process-less STR definition, we observed that including all actions in a transaction could cause the entire state space search to move into the \textsc{Transaction} procedure of \autoref{alg:rtrs}.
\paragraph{Tractability and heuristics} The STR algorithm can fix the set of stubborn transitions to those in the same thread (see definitions of $M_\alpha^\star$). This can be exploited in the deletion algorithm by fixing the relevant transitions (see the incomplete minimization approach~\cite{deletion}). If the algorithm returns a set with other transitions, then we know that no transaction reduction is possible, as the returned set is subset-minimal~\cite[Th.~1]{guardpor2}. The deletion algorithm runs in polynomial time (in the order of $\sizeof{\actions}^4$~\cite{explosion}), hence stubborn TR does too (on a per-state basis). Computing optimal stubborn sets for POR, however, is \NP-complete, as all subsets of actions have to be considered. Moreover, a small stubborn set is merely a heuristic for optimal reductions~\cite{optimal}, as discussed in \autoref{s:pss}.
\paragraph{Known unknowns} We did not consider other properties such as full safety, LTL and CTL. For CTL, POR can no longer reduce to non-trivial subsets because of the CTL proviso~\cite{gerth1995partial} (see \cite{ssalgebra} for support of non-deterministic transitions, like in stubborn TR). TR for CTL is an open problem.
While TR can split visibility into disabling (in the pre-phase) and enabling (in the post-phase), POR must consider both combined. POR moreover must compute the ignoring proviso over the entire state space, while TR only needs to consider post-phases and thread-local steps.
The ignoring proviso~\cite{valmari-92,ignoring,openset} in POR tightly couples the possible reductions per state to the role the state plays in the entire reachability graph. This lack of locality adds an extra obstacle to the parallelization of the model checking procedure. Early results make compromises in the obtained reductions~\cite{parpor}. Recent results show that reductions do not have to be affected negatively even with high amounts of parallelism~\cite{cycleproviso}; however, these results have not yet been achieved for distributed systems. TR, on the other hand, offers plenty of parallelization opportunities, as each state in the outer search can be handed off to a separate process.
\newcommand\spin{\textsc{SPIN}\xspace} \newcommand\ltsmin{\textsc{LTSmin}\xspace}
\section{Experiments} \label{sec:experiments}
We implemented stubborn transaction reduction (STR) of \autoref{alg:rtrs} in the open source model checker \ltsmin\footnote{\url{http://fmt.cs.utwente.nl/tools/ltsmin/}}~\cite{ltsmin}, using a modified deletion algorithm to establish optimal stubborn sets in polynomial time (as discussed in \autoref{sec:comparison}). The implementation can be found on GitHub.\footnote{\url{https://github.com/alaarman/ltsmin/commits/tr}}
\ltsmin has a front-end for \textsc{promela} models, which is on par with the \spin model checker~\cite{holzmann-97} performance-wise~\cite{spins}. Unlike \spin, \ltsmin does not implement dynamic commutativity specifically for queues~\cite{holzmann-peled}, but because it splits queue actions into a separate action for each cell~\cite{spins}, a similar result is achieved by virtue of the stubborn set condition \textbf{\ref{i:d2}} in \autoref{s:pss}. This benefits both its POR and STR implementation.
\begin{wraptable}{r}{6cm}
\caption{Models and their verification times in \ltsmin. Time in sec. and memory use~in~MB. State/transition counts are the same in both \ltsmin and \spin.} \label{tab:models} \setlength{\tabcolsep}{.5ex} { \scriptsize
\begin{tabular}{l|rr|rr} \toprule
& \multicolumn{2}{c|}{\textsc{SPIN/LTSmin}} & \multicolumn{2}{c}{\ltsmin} \\ & states & transitions & time & mem. \\ \midrule \texttt{Peterson5} & 829909270 & 3788955584 & 4201. & 6556. \\ \texttt{GARP} & 48363145 & 247135869 & 88.34 & 369.8 \\ \texttt{i-Prot.2} & 13168183 & 44202271 & 22.99 & 102.8 \\ \texttt{i-Prot.0} & 9798465 & 45932747 & 19.58 & 75.2 \\ \texttt{Peterson4} & 3624214 & 13150952 & 7.36 & 28.5 \\ \texttt{BRP} & 2812740 & 6166206 & 4.59 & 26.4 \\ \texttt{MSQ} & 994819 & 3198531 & 4.41 & 12.1 \\ \texttt{i-Prot.3} & 327358 & 978579 & 0.79 & 2.8 \\ \texttt{i-Prot.4} & 78977 & 169177 & 0.19 & 0.8 \\ \texttt{Small1} & 36970 & 163058 & 0.14 & 0.3 \\
\texttt{X.509} & 9028 & 35999 & 0.03 & 0.1 \\ \texttt{Small2} & 7496 & 32276 & 0.08 & 0.1 \\ \texttt{SMCS} & 2909 & 10627 & 0.01 & 0.1 \\
\bottomrule \end{tabular}
} \end{wraptable} \begin{table*}[b!]
\caption{Reduction runs of TR, Stubborn TR (STR) and Stubborn POR (SPOR). Reductions of states $|X|$ and transitions $|\trans|$ are given in percentages (reduced state space / original state space), runtimes in sec. and memory use in MB. The best reductions (lowest number of states) and the lowest runtimes are highlighted in bold.} \label{tab:tr} \setlength{\tabcolsep}{.35ex} \smaller {
\begin{tabular}{l|rrrr|rrrr|rrrr|rrrr} \toprule
& \multicolumn{4}{c|}{TR (\ltsmin)}
& \multicolumn{4}{c|}{\textbf{STR} (\ltsmin)}
& \multicolumn{4}{c|}{SPOR (\ltsmin)} & \multicolumn{4}{c}{Ample set (\spin)} \\
& $|X|$
& $|\trans|$ & time & mem
& $|X|$
& $|\trans|$ & time & mem
& $|X|$
& $|\trans|$ & time & m.
& $|X|$
& $|\trans|$ & time & mem \\ \midrule
\texttt{Peterson5}& 0.5& 0.3& \bf6.11& 33.0 & \bf 0.4& 0.3& 74.01& 29.5 & 3.1& 0.9& 316.10& 209.8 & 5.2& 1.9& 42.30 & 2463. \\ \texttt{GARP}& 100& 100& 266.21& 369.8 & \bf1.4& 1.5& 776.53& 5.2& 3.6& 1.5& 19.83& 13.5 & 7.6& 3.7& \bf6.27& 289.1\\ \texttt{i-Prot.2}& \bf2.1& 2.4& \bf3.46& 2.2 & \bf2.1& 2.4& 4.87& 2.2 & 20.2& 11.9& 13.32& 21.7 & 26.1& 17.6& 4.33 & 246.9 \\ \texttt{i-Prot.0}& 100& 100& 56.71& 75.2 & \bf12.8& 12.5& 148.78& 9.7 & 32.1& 17.2& 214.93& 24.3 & 15.7& 10.5& \bf2.56& 132.2\\ \texttt{Peterson4}& \bf1.3& 1.0& 0.36 & 0.5 & \bf1.3& 1.0& 0.85 & 0.5 & 7.3& 2.7& 4.24& 2.4 & 14.7& 6.8& \bf0.24& 28.9\\ \texttt{BRP}& 100& 100& 9.59& 26.4 & 47.6& 36.9& 6.38& 12.6 & 100& 100& 90.31& 26.4 & \bf9.2& 6.0& \bf0.18& 22.2\\ \texttt{MSQ}& 66.0& 65.0& 5.5& 8.2 & \bf22.9& 21.5& 14.90& 3.0 & 52.1& 29.1& 12.14& 6.5 & 80.4& 46.6& \bf1.03 & 200.9 \\ \texttt{i-Prot.3}& 8.0& 7.4& 0.19& 0.2 & \bf8.0& 7.4& 0.24& 0.2 & 20.7& 10.4& 0.94& 0.6 & 27.0& 16.5& \bf0.06& 5.8\\ \texttt{i-Prot.4}& 25.1& 27.2& 0.14& 0.2 & \bf25.0& 27.1& 0.18& 0.2 & 45.2& 31.5& 0.54& 0.4 & 50.4& 37.1& \bf0.03& 2.8\\ \texttt{Small1}& 8.9& 18.0& 0.03& n/a& \bf6.7& 13.6& 0.07& n/a& 31.2& 17.7& 0.18& 0.1 & 48.4& 45.1& \bf0.01& 0.9\\
\texttt{X.509}& 93.8& 94.1& 0.07& 0.1 & 19.3& 16.7& 0.06& n/a& \bf7.8& 3.7& 0.03& n/a& 67.5& 34.3& \bf0.01& 1.1\\ \texttt{Small2}& 11.6& 21.0& 0.01& n/a& \bf8.7& 15.8& 0.01& n/a& 35.0& 19.8& 0.04& n/a& 48.3& 43.8& \bf0.01& 0.4\\ \texttt{SMCS}& 100& 100& 0.05& 0.1 & 26.1& 19.6& 0.09& n/a& \bf12.5& 5.3& 0.03& n/a& 41.1& 19.6& \bf0.01& 0.7\\
\bottomrule \end{tabular} } \end{table*}
We compare STR against (static) TR from \autoref{sec:prelim}. We also compare STR against the stubborn set POR in \ltsmin, which was shown to consistently outperform SPIN's ample set~\cite{holzmann-peled} implementation
in terms of reductions, but with worse runtimes due to the more elaborate stubborn set algorithms (a factor 2--4)~\cite{guardpor2}. (We cannot compare with \cite{vmcai} due to the different input formats of VVT~\cite{vvt} and \textsc{LTSmin}.) \autoref{tab:models} shows the models that we considered and their normal (unreduced) verification times in \ltsmin. We took all models from \cite{guardpor2} that contained an assertion. The inputs include mutual exclusion algorithms (\texttt{peterson}), protocol implementations (\texttt{i-protocol}, \texttt{BRP, GARP, X509}), a lockless queue (\texttt{MSQ}) and controllers (\texttt{SMCS}, \texttt{SMALL1}, \texttt{SMALL2}).
\ltsmin runs~with~STR were configured according to the command line:\\ \texttt{\scriptsize prom2lts-mc --por=str --timeout=3600 -n --action=assert m.spins}\\ The option \texttt{--por=tr} enables the static TR instead. We also run all models in \spin in order to compare against the ample set's performance. \spin runs were configured according to the following command lines:\\ \texttt{\scriptsize cc -O3 -DNOFAIR -DREDUCE -DNOBOUNDCHECK -DNOCOLLAPSE -DSAFETY -DMEMLIM=100000 -o pan pan.c\\ ./pan -m10000000 -c0 -n -w20 }
\autoref{tab:tr} shows the benchmark results. We observe that STR often surpasses POR (stubborn and ample sets) in terms of reductions. Its runtimes, however, are inferior to those of the ample set in \spin. This is likely because we use the precise deletion algorithm, which decides the optimal reduction for STR: STR is the only algorithm of the four that does not use heuristics. The higher runtimes of STR are often compensated by the better reductions it obtains.
Only three models demonstrate that POR can yield better reductions (\texttt{BRP}, \texttt{smcs} and \texttt{X.509}). This is perhaps not surprising as these models do not have massive parallelism (see \autoref{sec:comparison}). It is however interesting to note that \texttt{GARP} contains seven threads. We attribute the good reductions of STR mostly to its ability to skip internal states. \spin's ample set only reduces the \texttt{BRP} better than \ltsmin's stubborn POR and STR. In this case, we found that \ltsmin too eagerly identifies half of the actions of both models as visible.
\paragraph{Validation} Validation of TR is harder than that of POR. For POR, we usually count deadlocks, as all are preserved, but TR might actually prune deadlocks and error states (while preserving the invariant as per \autoref{th:alg}). We therefore tested the correctness of our implementation by implementing methods that check the validity of the returned semi-stubborn sets. Additionally, we maintained counters for the length of the returned transactions and inspected the inputs to confirm validity of the longest transactions.
\section{Related Work} \label{sec:related}
Lipton's reduction was refined multiple times \cite{lamport-lipton,gribomont,Doeppner:1977:PPC:512950.512965,lamport-tla,Stoller2003}. Flanagan et al.~\cite{Flanagan2002,Flanagan:2003:TA:640136.604176} and Qadeer et al.~\cite{qadeer-atomicity,qadeer-java,qadeer-transactions} have most recently developed transactions and found various applications. The reduction theorem used to prove the theorems in the current paper comes from our previous work~\cite{vmcai}, which in turn is a generalized version of \cite{qadeer-transactions}. Our generalization allows the direct support of dynamic transactions as already demonstrated for symbolic model checking with IC3 in~\cite{vmcai}. Despite a weaker theorem, Qadeer and Flanagan~\cite{qadeer-transactions} can also dynamically grow transactions by doing iterative refinement over the state space exploration. This contrasts our approach, which instead allows on-the-fly adaptation of movability (within a single exploration). Moreover, \cite{qadeer-transactions} bases dynamic behavior on exclusive access to variables, whereas our technique can handle any kind of dependency captured by the general stubborn set POR relations.
Cartesian POR~\cite{cartesian} is a form of Lipton reduction that builds transactions during the exploration,
but does not exploit left/right commutativity.
The leap set method~\cite{schoot} treats disjoint reduced sets in the same state as truly concurrent and executes them as such: The product of the different disjoint sets is executed from the state, which entails that sequences of actions are executed from the state. This is where the similarity with TR ends, because in TR the sequences are formed by sequential actions, whereas in leap sets they consist of concurrent actions, e.g., actions from different processes. Recently, trace theory has been generalized to include `steps' by Janicki et al.~\cite{Janicki2016}. We believe that this work could form a basis to study leap sets and TR in more detail.
Various classical POR works were mentioned, e.g.~\cite{parle89,godefroid,katz-peled}. How `persistent sets'~\cite{godefroid}/`ample sets'~\cite{katz-peled} relate to stubborn set POR is explained in~\cite[Sec.~4]{intuition}. Sleep sets~\cite{godefroid-wolper-93} form an orthogonal approach, but in isolation only reduce the number of transitions. Dwyer et al.~\cite{dwyer2004exploiting} propose dynamic techniques for object-oriented programs. Completely dynamic approaches exist~\cite{flanagan2005dynamic,Kastenberg2008}. Recently, even optimal solutions were found~\cite{abdullah,sousa,contextpor}. These approaches are typically stateless, however, although they sometimes still succeed in pruning converging paths (e.g.,~\cite{sousa}). Others aim at making dependency more dynamic~\cite{godefroid-pirottin,holzmann-peled,collapses}.
Symbolic POR can be more static for reasons discussed in Footnote~\ref{fn:symbolic}, e.g.,~\cite{alur1997partial}.
Therefore, Grumberg et al.~\cite{Grumberg:2005:PUM:1040305.1040316} present underapproximation-widening, which iteratively refines an under-approximated encoding of the system. In their implementation, interleavings are constrained to achieve the under-approximation. Because refinement is done based on verification proofs, irrelevant interleavings will never be considered.
Other relevant dynamic approaches are peephole and monotonic POR by Wang et al.~\cite{peephole,kahlon-wang-gupta}. Like sleep sets~\cite{godefroid}, however, these methods only reduce the number of transitions. While reducing transitions can speed up symbolic approaches by constraining the transition relation, it is not useful for enumerative model checking, which is strongly limited by the number of unique states that need to be stored in memory.
Kahlon et al.~\cite{Kahlon2006} do not implement transactions, but encode POR for symbolic model checking using SAT. The ``sensitivity'' to locks of their algorithm can be captured in traditional stubborn sets as well by viewing locks as normal ``objects'' (variables) with guards, resulting in the subsumption of the ``might-be-the-first-to-interfere-modulo-lock-acquisition'' relation~\cite{Kahlon2006} by the ``might-be-the-first-to-interfere'' relation~\cite{Kahlon2006}, originally from~\cite{godefroid}.
Elmas et al.~\cite{ElmasQT09} propose dynamic reductions for type systems, where the invariant is used to weaken the mover definition. They also support both right and left movers, but do automated theorem proving instead of model checking.
\section{Conclusion} \label{sec:conclusion} We presented a more dynamic version of transaction reduction (TR) based on techniques from stubborn set POR. We analyzed several scenarios for which either of the two approaches has an advantage and also experimentally compared both techniques. We conclude that TR is a valuable alternative to POR at least for systems with a relatively low degree of parallelism.
Both in theory and in practice, TR showed advantages over POR, but vice versa as well. Most strikingly, TR is able to exploit various synchronization mechanisms in typical parallel programs because of their left and right commutativity. While not preserving deadlocks, its reductions can benefit from omitting them. These observations are supported by experiments that show better reductions than a comparably dynamic POR approach for systems with up to 7 threads. We observe that the combination of POR and TR is an open problem.
\appendix
\section{Correctness Proofs} \label{app:proofs}
The current appendix contains the proofs for the lemmas and theorems in the paper. For clarity, lemmas and theorems are repeated with the same numbering as in the paper.
In \autoref{sec:str}, we defined different semi-stubborn sets, i.e., $\sst^\shortleftarrow_q(B)$, $\sst^\leftrightarrow_q(B)$ and $\sst^\shortrightarrow_q(B)$.
We first provide a proof of \autoref{th:sst}.
\setcounter{theorem}{1} \input{defs/sst}
\begin{proof}
Let $B$, $\beta$, $q$ and $q'$ be such that they satisfy the premise of the theorem and $\alpha\in B$. We distinguish two cases:
If $\alpha\in \dis(q)$, then let $E$ be such that $E\in \nes_q(\alpha)$ and $E \subseteq B$. \textbf{\ref{i:d2}} remains valid for $\alpha$ in~$q'$: $\beta$ cannot enable $\alpha$ because \textbf{\ref{i:d2}} holds in $q$, and, by definition of NESs, we have that $E\in\nes_{q'}(\alpha)$.
If $\alpha\in \en(q)$, then either $\alpha\in \en(q')$ or $\alpha\in \dis(q')$. In the former case, the conclusion of the theorem is satisfied trivially, as \textbf{\ref{i:d1}} also holds in $q'$. For the latter case, i.e. $\alpha\in \dis(q')$, we consider each $\star \in \set{\leftarrow,\rightarrow,\leftrightarrow}$ separately.
\begin{description}[labelsep=8pt,labelindent=0\parindent,itemindent=10pt,leftmargin=0pt,listparindent=4em] \item[$\star = \leftrightarrow$:] The proof is concluded, as the definition of strong commutativity $\comm$, e.g., as in the deterministic case illustrated by \autoref{eq:strong}, ensures that $\beta$ cannot disable $\alpha$, so this case cannot occur. (Note that this also concludes the proof of \autoref{th:valmari}.)
\item[$\star = \rightarrow$:] The proof is concluded, because the additional `provided' condition that $\en(q) \cap B \subseteq \en(q') \cap B$ ensures
that $\beta$ cannot disable $\alpha$.
\item[$\star = \leftarrow$:] From \textbf{\ref{i:d1}}, we have $\alpha \lcomm \gamma$ for all $\gamma \notin B$. Since we also have $\gamma \rcomm \alpha$, no $\gamma\notin B$ ever (re-)enables $\alpha$ by definition of right commutativity, as discussed in \autoref{sec:prelim}.
Therefore, \textbf{\ref{i:d2}} holds in $q'$ (there must be some $E\in \nes_{q'}(\alpha)$ such that $E \cap \overline B = \emptyset$, hence $E\subseteq B$), yielding again the conclusion of the theorem. \end{description} These three cases conclude the proof. \qed \end{proof}
Before proving the monotonicity lemmata, we recall the definition of dynamic movers and \autoref{def:trs}: \setcounter{equation}{2} \input{defs/movers}
\setcounter{lemma}{0} \input{defs/dlm} \begin{proof} Assume the premise: $\lmv_i(q_1)$ with $i\neq j$ and $q_1\tr{\beta}{j} q_2$. We derive the conclusion.
Let $B$ be such that $\sst_{q_1}^\shortleftarrow(B)$ and $B\cap \en(q_1) = \actions_i \cap \en(q_1)$. As $j\neq i$, we may apply \autoref{th:sst} to find that $B$ is also a valid semi-$\shortleftarrow$-stubborn set in~$q_2$, i.e. $\sst_{q_2}^\shortleftarrow(B)$. Moreover, $\beta$ cannot enable any $\gamma\in B\cap \dis(q_1)$ by \textbf{\ref{i:d1}}, hence $B\cap \en(q_2) = \actions_i \cap \en(q_2)$. That together with the semi-$\shortleftarrow$-stubbornness of $B$ in $q_2$, implies that $\lmv_i(q_2)$. \qed \end{proof}
\input{defs/drm} \begin{proof} Assume the premise: $\rmv_{i}(q_1,\alpha,q_2)$ for $\alpha\in\actions_i$ and $q_1\tr{\alpha}{i} q_2 \tr{\beta}{j} q_3$ for $j\neq i$. Let $B \subseteq \actions$ satisfy \autoref{eq:exclude}, i.e.: $\sst_{q_1}^\shortrightarrow(B),~\alpha\in B$, $B\cap \en(q_2) \subseteq \actions_i$, and $B\cap \en(q_1) = \set{\alpha}$. We derive the conclusion, i.e.: $\exists q_1 \tr{\beta}{j}q_4$ such that $\sst_{q_4}^\shortrightarrow(B),~\alpha\in B$, $B\cap \en(q_3) \subseteq \actions_i$, and $B\cap \en(q_4) = \set{\alpha}$.
First we show that $\exists q_1 \tr{\beta}{j}q_4$ and
$B \cap \en(q_4) = \set{\alpha}$. As $\beta\in\en(q_2)$, we obtain $\beta\notin B$ (since $\beta\notin \actions_i$). As $\alpha$ therefore right-commutes with $\beta$ by the contraposition of \textbf{\ref{i:d1}}, we obtain the commuting path $q_1\tr{\beta}j q_4\tr{\alpha}i q_3$. Assume $\exists \gamma \in B \cap \en(q_4) \setminus \set{\alpha}$. Action $\beta$ must have enabled $\gamma$, otherwise $\gamma \in \en(q_1)$, contradicting our assumption that $B\cap \en(q_1) = \set\alpha$. Now, if $q_1\tr{\beta}j q_4$ enables~$\gamma$, by \textbf{\ref{i:d2}}, also $\beta\in B$, again contradicting the assumption. Therefore, we have $B \cap \en(q_4) = \set{\alpha}$.
\autoref{th:sst} tells us that $B$ with $\sst_{q_1}^\shortrightarrow(B)$ is also semi-stubborn in $q_4$, i.e. $\sst^\shortrightarrow_{q_4}(B)$ (\autoref{th:sst}'s additional condition that $\en(q_1) \cap B \subseteq \en(q_4) \cap B$ is met because $\en(q_1) \cap B = \en(q_4) \cap B = \set\alpha$
as shown above).
We now show that $B \cap \en(q_3) \subseteq \actions_i$ also holds. Assume $\exists \gamma \in B \cap \en(q_3) \setminus \actions_i$. Action $\beta$ must have enabled $\gamma$, otherwise $\gamma \in \en(q_2)$, contradicting our assumption that $B\cap \en(q_2) \subseteq \actions_i$. However, if $q_2\tr{\beta}j q_3$ enables $\gamma$, by \textbf{\ref{i:d2}}, also $\beta\in B$, again contradicting our assumptions. Therefore, we have $B \cap \en(q_3) \subseteq \actions_i$.
The above shows that
$\rmv_i(q_4,\alpha,q_3) $. \qed \end{proof}
Recalling \autoref{def:trs}, we see that proving its preservation property is easy:
\setcounter{equation}{4} \setcounter{definition}{1} \begin{definition}[Transaction system]\label{def:trs} Let $H \,\triangleq\, \set{N, R, L} ^ P$ be an array of local phases. The transaction system is CTS $\cts'\,\triangleq\, \tuple{\tstates, \ttrans, \actions, \tsint}$ such that:
\input{defs/trs}
\noindent
\end{definition}
\begin{lemma}\label{lem:preserves} \autoref{def:trs} preserves invariants: $\reach(\cts) \models\BoxY \Leftrightarrow\reach(\cts')\models \BoxY$. \end{lemma} \begin{proof} The definition ensures the bisimulation: $\set{\tuple{q, \tuple{q,h}}\in X\times \tstates}$. \qed \end{proof}
Towards proving \autoref{th:reduction}, we first recall the main theorem from \cite{vmcai}. \autoref{th:vmcai} requires one bisimulation $\cong_i$ for each thread $i$ and a weakened definition of commutativity \concept{up to bisimulation}. We recall these definitions first from~\cite{vmcai}.
We now formally define the notion of \concept{thread bisimulation} required for the reduction, as well as commutativity up to bisimilarity.
\begin{definition}[thread bisimulation]\label{def:bisim} An equivalence relation $R$ on the states of a CTS $\tuple{X, T, \actions, q_0}$ is a thread bisimulation iff
\centering \begin{tikzpicture}
\tikzstyle{e}=[minimum width=1cm]
\tikzstyle{every node}=[font=\small, node distance=1.2cm]
\node (s1) {$q$};
\node (s2) [below of=s1] {$q'$};
\node (s3) [right of=s1] {$q_1$};
\path (s1) edge[-] node[pos=.5,sloped,above] (nn) {$R$} (s2);
\path (s1) -- node(m)[midway,sloped]{$\to_i$} (s3);
\node (s1p) [gray,right of=s3,xshift=1cm] {$q$};
\node (s2p) [below of=s1p] {$q'$};
\node (s3p) [right of=s1p] {$q_1$};
\path (s1p) edge[-] node[gray,pos=.5,sloped,above] (nnn) {$R$} (s2p);
\path (s1p) -- node(m)[gray,midway,sloped]{$\to_i$} (s3p);
\node (s4p) [right of=s2p] {$q'_1$};
\path (s2p) -- node [midway,sloped]{$\to_i$} (s4p);
\path (s3p) edge[-] node[pos=.5,sloped,above]{$R$} (s4p);
\node (n) [left of=nnn] {$\implies \exists q_1'\colon$};
\node (n) [left of=nn] {$\forall q,q',q_1,i\colon$};
\end{tikzpicture} \end{definition}
Standard bisimulation is an equivalence relation $R$ which satisfies the property from \autoref{def:bisim} when the indexes $i$ of the transitions are removed. Hence, in a thread bisimulation, in contrast to standard bisimulation, the transitions performed by thread $i$ will be matched by transitions performed by the same thread~$i$. As we only make use of thread bisimulations, we will often refer to them simply as bisimulations.
We can lift these bisimulations to sets of threads, by taking the equivalence closure, e.g. $\cong_Z$ being the transitive closure of the union of all $\cong_i$ for $i \in Z$. Note that $\cong_i \Leftrightarrow \cong_{\set i}$. With this we can also refine commutativity as follows.
\begin{definition}[commutativity up to bisimulation] \label{def:comm-bisim} Let $R$ be a thread bisimulation on a CTS $\tuple{X, T, \actions, q_0}$. The right and left commutativity up to $R$ of the transition relation $\to_i$ with $\to_j$, notation $\to_i\, \,\,\rcomm_{R}\,\, \to_j$ /$\to_i\, \,\,\lcomm_{R}\,\, \to_j$ are defined as follows. \begin{IEEEeqnarray}{lCllr} \tr{}{i}\,\,\rcomm_R\,\, \tr{}{j} &\,\triangleq\, &\tr{}{i} \circ \tr{}{j} \circ R
&\,\,\subseteq\,\, \tr{}{j} \circ \tr{}{i} \circ R\phantom{XXX}
&\text{~($\rcomm$ up to $R$)}\nonumber\\* \tr{}{i}\,\,\lcomm_R\,\, \tr{}{j} &\,\triangleq\, &\tr{}{i} \circ \tr{}{j} \circ R
&\,\,\supseteq\,\, \tr{}{j} \circ \tr{}{i} \circ R
&\text{~($\lcomm$ up to $R$)}\nonumber \end{IEEEeqnarray} \noindent Illustratively:
\noindent $\to_i\, \rcomm_{R} \to_j\,\,\,\, \Longleftrightarrow$\hspace{10em} $\to_i\, \lcomm_{R} \to_j\,\,\,\, \Longleftrightarrow$\\ \begin{tikzpicture}
\tikzstyle{e}=[minimum width=1cm]
\tikzstyle{every node}=[font=\small, node distance=.45cm]
\node (s1) {$q_1$};
\node (s2) [node distance=1.2cm,below of=s1] {$q_2$};
\node (s3) [right of=s2, xshift=.6cm] {$q_3$};
\path (s2) -- node[pos=.45]{$\to_j$} (s3);
\path (s1) -- node(m)[midway,sloped]{$\to_i$} (s2);
\node (s1n) [xshift=2.5cm,right of=s1] {$q_1$};
\node (s2n) [gray,node distance=1.2cm,below of=s1n] {$q_2$};
\node (s3n) [gray,right of=s2n, xshift=.6cm] {$q_3$};
\path (s2n) -- node[gray,pos=.45]{$\to_j$} (s3n);
\path (s1n) -- node(m)[gray,midway,sloped]{$\to_i$} (s2n);
\node (s0n) [left of=m, xshift=-.4cm] {$\implies \exists q_3',q_4\colon$};
\node (s4n) [xshift=.9cm,right of=s1n] {$q_4$};
\path (s1n) -- node [midway,sloped]{$\to_j$} (s4n);
\node (s3pn) [node distance=1.1cm,right of=s3n] {$q_3'$};
\path (s4n) -- node[midway,sloped]{$\to_i$} (s3pn);
\path (s3pn) -- node(AA)[sloped,pos=.1]{} (s3n);
\node (s3pn) [node distance=1em,below of=AA,xshift=-.3cm] {$\tuple{q_3,q_3'}\in R$};
\end{tikzpicture} ~~
\begin{tikzpicture}
\tikzstyle{e}=[minimum width=1cm]
\tikzstyle{every node}=[font=\small, node distance=.45cm]
\node (s1) {$q_1$};
\node (s2) [right of=s1, xshift=.6cm] {$q_2$};
\node (s3) [node distance=1.2cm,below of=s2, xshift=.9cm] {$q_3$};
\path (s2) -- node(m)[midway,sloped]{$\to_j$} (s3);
\path (s1) -- node[midway,sloped]{$\to_i$} (s2);
\node (s1n) [xshift=2.2cm,right of=s2] {$q_1$};
\node (s2n) [gray,right of=s1n, xshift=.6cm] {$q_2$};
\node (s3n) [gray,node distance=1.2cm,below of=s2n, xshift=.9cm] {$q_3$};
\path (s2n) -- node[gray,midway,sloped]{$\to_j$} (s3n);
\path (s1n) -- node[gray,midway,sloped]{$\to_i$} (s2n);
\node (s4n) [node distance=1.2cm,below of=s1n] {$q_4$};
\path (s1n) -- node(m) [midway,sloped]{$\to_j$} (s4n);
\node (s3pn) [node distance=.9cm,right of=s4n] {$q_3'$};
\path (s4n) -- node[midway,sloped]{$\to_i$} (s3pn);
\node (s0n) [left of=m, xshift=-.4cm] {$\implies \exists q_3',q_4\colon$};
\path (s3pn) -- node(AA)[sloped,pos=.1]{} (s3n);
\node (s3pn) [node distance=1em,below of=AA] {$\tuple{q_3,q_3'}\in R$};
\end{tikzpicture}
\end{definition}
We write $\comm_Z$ for $\comm_{\cong_Z}$.
Using these definitions, \autoref{th:vmcai} provides an axiomatization of the properties required for reducing the CTS using dynamic TR. The theorem is similar to the reduction theorem in~\cite{vmcai}, where it is explained in detail. A proof of correctness is provided in \cite{arxiv}.\footnote{The version in~\cite{arxiv} does not include \autoref{i:vispre} and \autoref{i:vispost}. To reason over invariant violations, it instead distinguishes separate error states $\textsf{Err}_i\subseteq N_i$. Using the path provided by \cite[Th.~2]{arxiv}, it is straightforward to show that if a bad state $\overlineY$ is reachable in the complete system, then so is one reachable in the reduced system. See also the explanation of \textbf{L4} at the end of \autoref{sec:prelim}. } Most of the constraints in its premise mirror the constraints \textbf{L1--L4} provided in \autoref{sec:prelim}. The commutativity condition however is weakened to allow commutativity up to bisimulation. Further conditions constrain the phases of the transaction system with respect to the newly added thread bisimulations.
\setcounter{theorem}{3} { \defN{N} \defL{L} \defR{R} \defY{Y} \defX{X} \begin{theorem}[Reduction] \label{th:vmcai} \noindent Let $\tuple{X, T, \actions, q_0}$ be a concurrent transition system, $Y\subseteq X$ and $\to_i \,\triangleq\, \set{\tuple{q,q'} \mid \tuple{q,\alpha,q'} \in T_i}$ (as usual). For each thread $i$, there exists a thread bisimulation relation $\cong_i$. For all $i, j\neq i$ the following holds:
\begin{enumerate} \setcounter{enumi}{0}
\item $X = R_i\uplus L_i\uplus N_i$, \label{i:part}
($R_i,L_i, N_i$ (Pre, post and external) partition $X$) \item $\to_i \subseteq R^2_j\cup L^2_j\cup N^2_j$
($\to_i$ is invariant over partitions of~$j$\label{i:invar})
\item $L_i \lrestr \to_i \rrestr R_i = \emptyset$\label{i:post}
(post does not locally reach pre)
\item $\to_i \rrestr R_i \rcomm_{\set{j}} \to_j $\label{i:right}
($\to_i$ ending in pre right commutes with $\to_j$)
\item $ L_i \lrestr \to_i \,\,\lcomm_{\set{i,j}} \to_j $ \label{i:left}
($\to_i$ starting from post left commutes with $\to_j$)
\item $\forall q\in L_i\colon\exists q'\in N_i\colon q\to_i^{*}q'$ \label{i:fairness}
(post phases terminate locally)
\item\label{i:bisimdisjoint}
$\cong_i\,\subseteq {L}_j^2\cup{R}_j^2\cup{N}_j^2 \vphantom{\overline{N_i}^2}$
($\cong_i$ entails $j$-phase-equality)
\item $Y\lrestr (\to_i \rrestr R_i) \rrestr \overlineY = \emptyset$ \label{i:vispre}
($\to_i$ into pre does not disable $Y$) \item $\overlineY \lrestr(L_i \lrestr \to_i) \rrestr Y = \emptyset$ \label{i:vispost}
($\to_i$ from post does not enable $Y$) \end{enumerate}
\noindent Let
$\trtrans_i \,\triangleq\, \bigcup_{j\neq i} N_j \lrestr \to_i$
($i$ only transits when all $j$ are external).
\noindent Let $ \brtrans_i \,\triangleq\, N_i\lrestr (\trtrans_i\rrestr \overline{N_i})^* \trtrans_i \rrestr N_i$
(skip internal states).
\noindent Let $\brtrans \,\triangleq\, \bigcup_i \brtrans_i$ and $N\,\triangleq\, \bigcup_i N_i$.
\noindent Now, if $q\to^{*} q'$ with $q \in N\cap Y$ and $q'\in \overlineY$, then $\exists q''\in \overlineY$ s.t. $q \,\brtrans^{*} \,q''$. \end{theorem} }
We will show that our transaction system of \autoref{def:trs} satisfies the premise of \autoref{th:vmcai} (in the following \autoref{lem:th}). In the process, the most important aspects of the theorem, i.e. the movability up to bisimulation $\cong_X$ in \autoref{i:right} and \autoref{i:left}, are explained. Notice that in the right-mover case we have $X = \set{j}$, while in the left-mover case we have $X=\set{i,j}$.
To see the challenge ahead, observe that a remote thread $j$ can activate a dynamic mover of thread $i$. We illustrate with an example that this dynamic behavior causes loss of commutativity in the \emph{transaction} system (not in the underlying \emph{transition} system), because of the phase information that the transaction system tracks. In the following, let $q_x\,\triangleq\, \tuple{q_x,\phase_x}$ and $q_x' \,\triangleq\, \tuple{q_x',\phase_x'}$ for $x\in \nat$, so that we can easily track related states in both systems.
Let $\tuple{q_1,\alpha,q_2} \in T_\alpha$ (in the \emph{transition} system). We have $\tuple{q_2,\beta,q_3} \in \rmv_j$, i.e. $\beta$ is a dynamic right mover (in $q_2$) and leads to $q_3$. Because of its movability, the \emph{transaction} system allows that $q_3 \in R_i$ (see \autoref{eq:2} of \autoref{def:trs}) as the following figure shows. The figure shows the right move of $\alpha$ (also a right mover) with respect to $\beta$ (the gray part). This yields states $q_3'$ and $q_4$ where $\beta$ is executed before $\alpha$. { \newcommand\sigmaold{q} \defq{q} \begin{center} \input{figures/cright1} \end{center} }
Because, e.g., some action $\gamma$ that does not commute with $\beta$ may only have become unreachable after $\alpha$, $\beta$ need not right-move from $q_1$, where $\alpha$ has not yet been taken. { \newcommand\sigmaold{q} \defq{q} We see therefore that $q_3'\in L_j$. Additionally, we have $q_4\in L_j$ by \autoref{def:trs} (see $\forall j\neq i\colon h_j'=h_j$). Therefore, the moving operation does not commute in the transaction system, as $q_3 \neq q_3'$.
Our theorem accounts for the differing phases of $q_3$ and $q_3'$. } (Apart from the $j$-phases, these states are indeed equivalent, i.e.:
$q_3 = q_3'$ and $\forall k\neq j \colon \phase_3[k] = \phase_3'[k]$.) To this end, the bisimulations abstract from the phase changes, showing that the behavior of the transaction system mimics that of the original transition~system.
Bisimulations indeed arise naturally from the introduced phase flags: All transitions of a thread $i$ in the transaction system are copies from transitions in the original transition system that end in a state with a different $i$ phase. Therefore, by discarding the phase information for $i$ we end up with a bisimulation for $i$ (see \autoref{eq:abstract}). \begin{align}\label{eq:abstract} \tuple{q,\phase} \cong_i \tuple{q',\phase'} \Longleftrightarrow q = q' \land \forall j\neq i \colon \phase_j = \phase'_j \end{align}
\begin{lemma}\label{lem:th} The transaction system in Def.~\ref{def:trs} fulfills the premise of \autoref{th:vmcai} (\autoref{i:part}--\ref{i:vispost}) with $X = X'$, $N_i = N_i$, $R_i = R_i$, $L_i = L_i$ for all threads $i$, provided that post-phases terminate, i.e. { \defq{q} $\forall q \in L_i \colon \exists q'\in N_i \colon q\hookrightarrow^*_i q'$}, and are actuated as well, i.e. { \defq{q} $\forall q \in L_i \colon \exists q',q'' \colon q' \hookrightarrow_i q'' \hookrightarrow^* q$}. \end{lemma}
\begin{proof}\label{proof:th} We take the $\cong_i$ relation from \autoref{eq:abstract} and show that it is a valid thread bisimulation for each thread $i$. Then we turn our attention to the nine items in the premise of \autoref{th:vmcai} and show how the transaction system fulfills these conditions. In the following, again let $q_x \,\triangleq\, \tuple{q_x,\phase_x}$ and $q_x' \,\triangleq\, \tuple{q_x',\phase_x'}$ for $x\in \nat$.
To see that the relation $\cong_i$ from \autoref{eq:abstract} is a correct thread bisimulation for thread $i$ according to \autoref{def:bisim}, assume that $\tuple{\tuple{q,h}, \tuple{q',h'}}\in\,\, \cong_i$ and $\tuple{\tuple{q,h},\alpha,\tuple{q'',h''}}\in T'_i$. By definition, we have
$q' = q$ and
$\forall j\neq i \colon h_j = h_j' = h_j''$. Therefore, by \autoref{def:trs}, we also have $\tuple{\tuple{q',h'}, \alpha, \tuple{q'', h'''}} \in\ttrans$ for some $h''' \in H$ such that
$\forall j\neq i \colon h_j = h_j' = h_j''= h_j'''$. Finally, by definition, $\tuple{\tuple{q'',h''}, \tuple{q'',h'''}} \in \,\,\cong_i$, concluding the proof that $\cong_i$ is a proper thread bisimulation.
Next, we consider how \autoref{def:trs} satisfies the items of the premise of \autoref{th:vmcai}: \begin{description} \item[\phantom{XX}\autoref{i:part}] By definition of \phases, we have $\forall i\colon N_i \uplus R_i \uplus L_i$.
\item[\phantom{XX}\autoref{i:invar}] $T_i'$ of the transaction system ensures that remote phases remain invariant: $\forall \tuple{\tuple{q,h},\alpha,\tuple{q',h'}}\in T'_i \colon \forall j\neq i \colon h'_j = h_j$. Therefore, we have:\\ $\forall i\colon \to'_i \,\,\subseteq\,\, N_i^2 \cup R_i^2 \cup L_i^2$.
\item[\phantom{XX}\autoref{i:post}] Follows immediately from \autoref{eq:1} in \autoref{def:trs}.
\item[\phantom{XX}\autoref{i:right}]
Assume that $q_1\stackrel{\alpha}{\to}{\rem\rem}_i'\,\, q_2$ with $q_2 \in R_i$ and
$q_2\stackrel{\beta}{\to}{\rem\rem}_j'\,\, q_3$ . We show that there exists a path $q_1\tr{\beta}{}{\rem\rem}_j'\,\, q_4\tr{\alpha}{}{\rem\rem}_i'\,\, q_3'$ with $q_3 \cong_j q_3'$, or illustratively: \begin{center} \newcommand\sigmaold{q} \defq{q} \input{figures/cright} \end{center}
We have $M_{\alpha}^\rightarrow(q_1,\alpha,q_2)$ for $\alpha\in\actions_i$ by \autoref{eq:1}, and thus there is some $B$ such that $\sst_{q_1}^\rightarrow(B)$, $\alpha \in B$, $B\cap \en(q_2) \subseteq \actions_i$ and $B\cap\en(q_1) = \set\alpha$ by \autoref{eq:exclude}. We also have $q_2\tr{\beta}j q_3$ for $j\neq i$ (from $q_2\stackrel{\beta}{\to}{\rem\rem}_j'\,\, q_3$). As $\beta\in\en(q_2)$, we obtain $\beta\notin B$ by \autoref{eq:exclude} (since $\beta\notin \actions_i$). Since therefore $\alpha$ right-commutes with $\beta$ by \textbf{\ref{i:d1}}, we obtain $q_1\tr{\beta}j q_4$ and $q_4\tr{\alpha}i q_3$ and according to \autoref{def:trs} also $q_1\tr{\beta}{}{\rem\rem}_j'\,\, q_4$ and $q_4\tr{\alpha}{}{\rem\rem}_i'\,\, q_3'$ with $q_3' \,\triangleq\, \tuple{q_3, h_3'}$ for some $h_3'$.
Next, we also show that right movability up to $\cong_j$ of~\autoref{i:right} is met, i.e. $q_3\cong_{j} q_3'$, or $\forall k\neq j \colon h_{3,k} = h_{3,k}'$. As only transitions $i,j$ are involved, the phases of all other threads $k\neq i,j$ remain the same according to \autoref{i:invar}. Furthermore, the transition of $j$ does not influence the phase of $i$ by \autoref{i:right}, therefore $q_3 \in R_i$. Hence, we only need to show that also $q_3' \in R_i$, or $h_{3,i}' = R$ (recall that the phase of $j$ may differ according to the definition of $\cong_j$).
According to \autoref{def:trs}, $q_3'\in R_i$ iff $q_4 \notin L_i \land M^\rightarrow_i(q_4,\alpha,q_3) \land \alpha\notin \actions_\ominus^Y$. We show that all three conjuncts hold: \begin{enumerate} \item As \autoref{def:trs} only allows transitions ending in $R_i$ when they start in $N_i$ or $R_i$, we have $q_1\notin L_i$. Since the transition of $j$ preserves the phase of $i$, we also get $q_4\notin L_i$.
\item \autoref{lem:drm} yields $M^\rightarrow_i(q_4,\alpha,q_3)$ as its premise is assumed above.
\item $\alpha\notin \actions_\ominus^Y$ follows from the initial assumption and \autoref{eq:1}. \end{enumerate} From the above, we can conclude that $q_3'\in R_i$. This demonstrates that also $q_3 \cong_j q_3'$, completing this proof.
\item[\phantom{XX}\autoref{i:left}]
Assume that $q_2\stackrel{\alpha}{\to}{\rem\rem}_i'\,\, q_3$ with $q_2 \in L_i$ and
$q_1\stackrel{\beta}{\to}{\rem\rem}_j'\,\, q_2$. We show that there exists a path $q_1\tr{\alpha}{}{\rem\rem}_i'\,\, q_4\tr{\beta}{}{\rem\rem}_j'\,\, q_3'$ with $q_3 \cong_{\set{i,j}} q_3'$, or illustratively: \begin{center} \newcommand\sigmaold{q} \defq{q} \text{\input{figures/cleft2}} \end{center}
From \autoref{i:invar}, we obtain $q_1 \in L_i$. From the assumption in \autoref{lem:th}, we get $\exists q,q' \colon q \mathmbox{\tr{\alpha'}{}{\rem\rem}_i'}\,\, q' \to'^* q_1$ for some $\alpha'\in\actions_i$. Without loss of generality, let $q,q'$ be the first on this path, i.e., there is no $i$-transition on the path $q' \to'^* q_1$. By \autoref{eq:2}, we obtain $M_{i}^\leftarrow(q')$ and $\en(q')\cap\actions_i\cap \actions_\oplus^Y=\emptyset$. As that path from $q'$ to $q_2$ (via $q_1$) merely contains transitions from threads $k\neq i$, we may apply \autoref{lem:dlm} repeatedly to find that $M^\leftarrow_i(q_1)$. \autoref{eq:left} implies that there is some $B$ such that $\sst_{q_1}^\leftarrow(B)$ and $B\cap \en(q_1) = \actions_i \cap \en(q_1)$.
Because $\alpha\in \en(q_2) \cap B$, we must have $\alpha\in \en(q_1)$ by \textbf{\ref{i:d1}}. As also $\beta\notin B$ (note that $\beta\in\en(q_1)$ but $\beta\notin\actions_i$) and $\sst_{q_1}^\leftarrow(B)$, we obtain that $\alpha \lcomm \beta$ in $q_1$. Therefore, there are $q_1\tr{\alpha}i q_4$ and $q_4\tr{\beta}j q_3$ and according to \autoref{def:trs} also $q_1\tr{\alpha}{}{\rem\rem}_i'\,\, q_4$ and $q_4\tr{\beta}{}{\rem\rem}_j'\,\, q_3'$ with $q_3' \,\triangleq\, \tuple{q_3, h_3'}$ for some $h_3'$. By \autoref{i:invar}, we have that $\forall k \neq i,j \colon h_{3,k}' = h_{3,k}$. This yields the desired commutativity up to $\cong_{\set{i,j}}$, completing this case.
\item[\phantom{XX}\autoref{i:fairness}] The assumption in \autoref{lem:th} fulfills this requirement immediately.
\item[\phantom{XX}\autoref{i:bisimdisjoint}] By definition this follows from \autoref{eq:abstract}.
\item[\phantom{XX}\autoref{i:vispre}] Follows immediately from \autoref{eq:1} in \autoref{def:trs}.
\item[\phantom{XX}\autoref{i:vispost}] Follows immediately from \autoref{eq:2} in \autoref{def:trs}.
\end{description} As this covers all the cases in the premise of \autoref{th:vmcai}, we conclude that the lemma holds. \qed \end{proof}
\setcounter{theorem}{2} \input{defs/rts}
\begin{proof} \autoref{lem:th} assumes termination $\forall q \in L_i \colon \exists q'\in N_i \colon q\hookrightarrow^*_i q' $ and actuation $\forall q \in L_i \colon \exists q',q'' \colon q' \hookrightarrow_i q'' \hookrightarrow^* q$ of post phases. The termination assumption is met by the `provided' assumption of the theorem. The actuation is met by the theorem's use of $\brtrans$, which only considers the subsystem of full transactions starting with and ending in external states: $ \brtrans_i \,\triangleq\, N_i\lrestr (\trtrans_i\rrestr \overline{N_i})^* \trtrans_i \rrestr N_i$. As the premise of \autoref{lem:th} is therefore met, it follows that we can apply \autoref{th:vmcai}. Therefore, if $q\to'^{*} q'$ with $q \in\bigcap_i N_i \cap Y$ and $q'\in \overlineY$, then $\exists q''\in \overlineY$ s.t. $q \,\brtrans^{*} \,q''$. As invariant violations are therefore preserved by the reduction, we have $ \reach(\cts')\not\models \BoxY \implies \reach(\stackrel\brtrans{\scriptsize \cts}) \not\models \BoxY$. Because $\trtrans_i \subseteq \to_i'$ and $\brtrans_i^* \subseteq \to_i'^*$, we also have the opposite implication $\reach(\stackrel\brtrans{\scriptsize \cts}) \not\models \BoxY \implies \reach(\cts')\not\models \BoxY$, since by definition the reduced system must contain a subset of the invariant violations in the full system. Taken together (conjoining the contrapositions), \autoref{th:reduction} is satisfied. \qed \end{proof}
\begin{theorem}\label{th:alg} \autoref{alg:rtrs} computes $\reach(\brcts)$ s.t. $\reach(\cts)\models \BoxY \Longleftrightarrow \reach(\brcts)\models \BoxY$.
\end{theorem}
\begin{proof} The algorithm uses the approach described by Valmari~\cite{valmari1} to ensure that $\forall q \in L_i \colon \exists q'\in N_i \colon q\hookrightarrow^*_i q'$. It therefore follows that the premise of \autoref{lem:th} holds (including the ``provided that'' part). From \autoref{lem:preserves}, we also have $\reach(\cts)\models \BoxY \Longleftrightarrow \reach(\cts')\models \BoxY$, hence: $\reach(\cts)\models \BoxY \Longleftrightarrow \reach(\brcts)\models \BoxY$. \qed \end{proof}
\begin{theorem}\label{th:removal} Let $N\,\triangleq\, \cap_{i} N_i$. We have $\sizeof{N} = \sizeof{S}$ and $\reach(\brcts)\subseteq \reach(\cts)$. \end{theorem} \begin{proof} This follows from the definition of the initial state $q_0'$ in \autoref{def:trs}, the reduced transition relations $\trtrans$, $\brtrans$ in \autoref{th:reduction}, and basic induction. \qed \end{proof}
\end{document}
Cosocle
In mathematics, the term cosocle (socle meaning pedestal in French) has several related meanings.
In group theory, a cosocle of a group G, denoted by Cosoc(G), is the intersection of all maximal normal subgroups of G.[1] If G is a quasisimple group, then Cosoc(G) = Z(G).[1]
In the context of Lie algebras, a cosocle of a symmetric Lie algebra is the eigenspace of its structural automorphism that corresponds to the eigenvalue +1. (A symmetric Lie algebra decomposes into the direct sum of its socle and cosocle.)[2]
In the context of module theory, the cosocle of a module over a ring R is defined to be the maximal semisimple quotient of the module.[3]
See also
• Socle
• Radical of a module
References
1. Adolfo Ballester-Bolinches, Luis M. Ezquerro, Classes of Finite Groups, 2006, ISBN 1402047185, p. 97
2. Mikhail Postnikov, Geometry VI: Riemannian Geometry, 2001, ISBN 3540411089, p. 98
3. Braden, Tom; Licata, Anthony; Phan, Christopher; Proudfoot, Nicholas; Webster, Ben (2011). "Localization algebras and deformations of Koszul algebras". Selecta Math. 17 (3): 533–572. arXiv:0905.1335. doi:10.1007/s00029-011-0058-y. S2CID 16184908. Lemma 3.8
\begin{definition}[Definition:Apotome]
Let $a, b \in \set {x \in \R_{>0} : x^2 \in \Q}$ be two rationally expressible numbers such that $a > b$.
Then $a - b$ is an '''apotome''' {{iff}}:
:$(1): \quad \dfrac a b \notin \Q$
:$(2): \quad \paren {\dfrac a b}^2 \in \Q$
where $\Q$ denotes the set of rational numbers.
{{:Euclid:Proposition/X/73}}
\end{definition}
# Basic concepts of Time-Frequency Analysis
Time-frequency analysis is a powerful technique used to analyze the time-varying frequency content of a signal. It combines the concepts of time and frequency domains, allowing us to study how the frequency of a signal changes over time. This is particularly useful in applications such as audio processing, radar systems, and medical imaging, where understanding the time-varying frequency content of a signal is crucial.
One of the most commonly used methods for time-frequency analysis is the Short-Time Fourier Transform (STFT). The STFT decomposes a signal into a series of frequency components at different time instances. However, the STFT does not provide a continuous representation of the frequency spectrum, which can make it difficult to visualize and analyze the time-frequency characteristics of a signal.
The Chirp Z-transform is an alternative approach to time-frequency analysis that addresses some of the limitations of the STFT. It provides a continuous representation of the frequency spectrum, making it easier to visualize and analyze the time-frequency characteristics of a signal.
Consider a signal $x(t)$ sampled at a rate of 1000 samples per second. We can use the STFT to compute the frequency components of the signal at different time instances. This will give us a discrete representation of the frequency spectrum.
On the other hand, we can use the Chirp Z-transform to compute the continuous representation of the frequency spectrum of the signal. This will give us a continuous function that describes the time-frequency characteristics of the signal.
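To make the comparison concrete, here is a minimal, self-contained Java sketch of the STFT side of this example. The 50 Hz test tone, the 128-sample Hann window, the 64-sample hop, and the naive $O(N^2)$ DFT inside each frame are all choices made here for brevity rather than details fixed by the example above; the point is simply that the output is a discrete grid of (time frame, frequency bin) values.

```java
public class StftSketch {
    public static void main(String[] args) {
        int sampleRate = 1000;                // 1000 samples per second, as in the example
        double toneHz = 50.0;                 // assumed test tone, chosen so the peak falls in a nonzero bin
        double[] x = new double[sampleRate];  // one second of signal
        for (int n = 0; n < x.length; n++) {
            x[n] = Math.sin(2 * Math.PI * toneHz * n / sampleRate);
        }

        int frameLen = 128;                   // assumed window length
        int hop = 64;                         // assumed hop size (50% overlap)
        for (int start = 0; start + frameLen <= x.length; start += hop) {
            double[] mag = frameSpectrum(x, start, frameLen);
            // mag[k] is the magnitude at frequency k * sampleRate / frameLen Hz,
            // for the frame starting at time start / sampleRate seconds.
            System.out.printf("t = %.3f s, peak bin = %d (%.1f Hz)%n",
                    start / (double) sampleRate, argMax(mag),
                    argMax(mag) * (double) sampleRate / frameLen);
        }
    }

    // Magnitude spectrum of one Hann-windowed frame via a naive O(N^2) DFT.
    static double[] frameSpectrum(double[] x, int start, int len) {
        double[] mag = new double[len / 2 + 1];
        for (int k = 0; k < mag.length; k++) {
            double re = 0, im = 0;
            for (int n = 0; n < len; n++) {
                double w = 0.5 - 0.5 * Math.cos(2 * Math.PI * n / (len - 1)); // Hann window
                double ang = -2 * Math.PI * k * n / len;
                re += w * x[start + n] * Math.cos(ang);
                im += w * x[start + n] * Math.sin(ang);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }

    static int argMax(double[] a) {
        int best = 0;
        for (int i = 1; i < a.length; i++) {
            if (a[i] > a[best]) best = i;
        }
        return best;
    }
}
```

Each printed line corresponds to one column of the discrete time-frequency grid; the Chirp Z-transform discussed in the rest of this textbook aims instead at a continuous representation of the frequency axis.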
## Exercise
Compute the frequency components of the signal $x(t) = \sin(2\pi t)$ using the STFT and the Chirp Z-transform. Compare the results and discuss the advantages and disadvantages of each method.
### Solution
The STFT will give us a discrete representation of the frequency spectrum, which will consist of a series of frequency components at different time instances.
The Chirp Z-transform will give us a continuous representation of the frequency spectrum, which will be a continuous function that describes the time-frequency characteristics of the signal.
The main advantage of the Chirp Z-transform is that it provides a continuous representation of the frequency spectrum, making it easier to visualize and analyze the time-frequency characteristics of a signal. On the other hand, the STFT provides a discrete representation of the frequency spectrum, which can be more computationally efficient for certain applications.
# Applications of Time-Frequency Analysis
Time-frequency analysis has a wide range of applications across various fields, including:
- Audio processing: Time-frequency analysis is used to analyze the time-varying frequency content of audio signals, which is crucial for tasks such as speech recognition, music transcription, and noise reduction.
- Radar systems: Time-frequency analysis is used to analyze the time-varying frequency content of radar signals, which is crucial for tasks such as target detection, tracking, and classification.
- Medical imaging: Time-frequency analysis is used to analyze the time-varying frequency content of medical images, which is crucial for tasks such as tumor detection, blood flow analysis, and cardiac function analysis.
- Seismology: Time-frequency analysis is used to analyze the time-varying frequency content of seismic signals, which is crucial for tasks such as earthquake detection, source localization, and fault characterization.
- Communications: Time-frequency analysis is used to analyze the time-varying frequency content of communication signals, which is crucial for tasks such as signal detection, interference reduction, and channel capacity estimation.
## Exercise
Instructions:
Choose one of the applications mentioned above and discuss its relevance to time-frequency analysis and the Chirp Z-transform.
### Solution
The application of time-frequency analysis in audio processing is particularly relevant to the Chirp Z-transform. The Chirp Z-transform provides a continuous representation of the frequency spectrum, which is crucial for understanding the time-varying frequency content of audio signals, such as speech or music. This understanding is essential for tasks such as speech recognition, music transcription, and noise reduction.
# The Chirp Z-transform in Java: Overview and setup
One way to implement the Chirp Z-transform in Java is to build it on top of the JTransforms library. JTransforms provides efficient implementations of the FFT and related transforms (DCT, DST, DHT); a dedicated Chirp Z-transform class is not part of its documented API, so the ChirpZ helper used in the snippets below is best read as a thin wrapper implemented on top of these building blocks.
To use JTransforms in your Java project, you can add the following dependency to your project's build file:
```
<dependency>
<groupId>com.github.wendykierp</groupId>
<artifactId>JTransforms</artifactId>
<version>3.1</version>
</dependency>
```
Once you have added the JTransforms dependency to your project, you can use the following code to compute the Chirp Z-transform of a signal:
```java
// NOTE: ChirpZ here stands for a small helper class assumed throughout this chapter;
// JTransforms itself exposes FFT building blocks such as org.jtransforms.fft.DoubleFFT_1D.
import org.jtransforms.fft.ChirpZ;

public class ChirpZTransform {
    public static void main(String[] args) {
        double[] signal = {1, 2, 3, 4, 5};
        double[] chirpZ = ChirpZ.chirpZ(signal);
        // Process the chirpZ array
    }
}
```
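For readers who want to try the library right away, the following sketch sticks to a class that JTransforms is known to provide, org.jtransforms.fft.DoubleFFT_1D, and computes a plain FFT of the same five-sample signal; the interleaved real/imaginary array layout is the convention expected by complexForward. This is only the FFT building block, not yet a Chirp Z-transform.

```java
import org.jtransforms.fft.DoubleFFT_1D;

public class FftWithJTransforms {
    public static void main(String[] args) {
        double[] signal = {1, 2, 3, 4, 5};
        int n = signal.length;

        // complexForward works in-place on an interleaved array:
        // a[2*i] holds the real part and a[2*i + 1] the imaginary part of sample i.
        double[] a = new double[2 * n];
        for (int i = 0; i < n; i++) {
            a[2 * i] = signal[i];      // real part
            a[2 * i + 1] = 0.0;        // imaginary part
        }

        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.complexForward(a);

        // Print the complex FFT coefficients X[k].
        for (int k = 0; k < n; k++) {
            System.out.printf("X[%d] = %.4f %+.4fi%n", k, a[2 * k], a[2 * k + 1]);
        }
    }
}
```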
## Exercise
Instructions:
1. Create a Java project and add the JTransforms dependency to your project's build file.
2. Write a Java program that computes the Chirp Z-transform of a given signal.
3. Run the program and print the Chirp Z-transform of the signal.
### Solution
1. Create a Java project and add the JTransforms dependency to your project's build file.
2. Write a Java program that computes the Chirp Z-transform of a given signal.
```java
import org.jtransforms.fft.ChirpZ;
public class ChirpZTransform {
    public static void main(String[] args) {
        double[] signal = {1, 2, 3, 4, 5};
        double[] chirpZ = ChirpZ.chirpZ(signal);
        // Process the chirpZ array
    }
}
```
3. Run the program and print the Chirp Z-transform of the signal.
```java
import org.jtransforms.fft.ChirpZ;

public class ChirpZTransform {
    public static void main(String[] args) {
        double[] signal = {1, 2, 3, 4, 5};
        double[] chirpZ = ChirpZ.chirpZ(signal);
        for (double value : chirpZ) {
            System.out.println(value);
        }
    }
}
```
# Implementing the Chirp Z-transform in Java: Algorithm and implementation details
The Chirp Z-transform is implemented in Java using the Fast Fourier Transform (FFT) algorithm. The FFT algorithm is a divide-and-conquer algorithm that recursively computes the Discrete Fourier Transform (DFT) of a signal.
The Chirp Z-transform is computed by first computing the DFT of the signal, then multiplying the DFT coefficients by a complex exponential function, and finally computing the inverse DFT of the resulting signal.
In the snippets of this chapter, this recipe is wrapped in a ChirpZ helper class whose chirpZ method computes the Chirp Z-transform of a signal. The chirpZ method takes an array of double values representing the signal and returns an array of double values representing the Chirp Z-transform of the signal; as noted in the setup section, this helper is assumed to be built on top of JTransforms' FFT routines rather than shipped with the library.
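As one concrete reading of the recipe just described, the sketch below could serve as the body of such a chirpZ helper method. To stay dependency-free it uses a naive $O(N^2)$ DFT rather than JTransforms' FFT, and the factor $e^{j2\pi k/N}$ applied to bin $k$ is one possible discretization of the complex exponential; both are assumptions made for illustration, not details fixed by the text.

```java
import java.util.Arrays;

public class ChirpZSketch {

    public static void main(String[] args) {
        double[] signal = {1, 2, 3, 4, 5};
        double[] result = chirpZ(signal);   // interleaved re/im pairs
        System.out.println(Arrays.toString(result));
    }

    // Step 1: DFT, Step 2: multiply bin k by e^{j*2*pi*k/N}, Step 3: inverse DFT.
    // Returns the result as interleaved {re0, im0, re1, im1, ...}.
    static double[] chirpZ(double[] x) {
        int n = x.length;
        double[] re = x.clone();
        double[] im = new double[n];

        double[][] spec = dft(re, im, -1);          // forward DFT

        for (int k = 0; k < n; k++) {               // multiply by the complex exponential
            double c = Math.cos(2 * Math.PI * k / n);
            double s = Math.sin(2 * Math.PI * k / n);
            double a = spec[0][k], b = spec[1][k];
            spec[0][k] = a * c - b * s;
            spec[1][k] = a * s + b * c;
        }

        double[][] out = dft(spec[0], spec[1], +1); // inverse DFT (unscaled)
        double[] interleaved = new double[2 * n];
        for (int t = 0; t < n; t++) {
            interleaved[2 * t] = out[0][t] / n;     // 1/N normalisation of the inverse
            interleaved[2 * t + 1] = out[1][t] / n;
        }
        return interleaved;
    }

    // Naive O(N^2) DFT; sign = -1 for the forward transform, +1 for the inverse.
    static double[][] dft(double[] re, double[] im, int sign) {
        int n = re.length;
        double[] outRe = new double[n], outIm = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double ang = sign * 2 * Math.PI * k * t / n;
                outRe[k] += re[t] * Math.cos(ang) - im[t] * Math.sin(ang);
                outIm[k] += re[t] * Math.sin(ang) + im[t] * Math.cos(ang);
            }
        }
        return new double[][]{outRe, outIm};
    }
}
```

Swapping the naive dft helper for DoubleFFT_1D calls would give the same result at $O(N \log N)$ cost instead of $O(N^2)$.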
## Exercise
Instructions:
1. Read the source code of the FFT classes in the JTransforms library (for example, DoubleFFT_1D) on which a ChirpZ helper would build.
2. Discuss the algorithm and implementation details of the Chirp Z-transform in Java.
### Solution
1. Read the source code of the FFT classes in the JTransforms library (for example, DoubleFFT_1D) on which a ChirpZ helper would build.
2. Discuss the algorithm and implementation details of the Chirp Z-transform in Java.
The Chirp Z-transform in Java is implemented using the Fast Fourier Transform (FFT) algorithm. The FFT algorithm is a divide-and-conquer algorithm that recursively computes the Discrete Fourier Transform (DFT) of a signal.
The Chirp Z-transform is computed by first computing the DFT of the signal, then multiplying the DFT coefficients by a complex exponential function, and finally computing the inverse DFT of the resulting signal.
The ChirpZ helper class used in the earlier snippets contains a chirpZ method that computes the Chirp Z-transform of a signal: it takes an array of double values representing the signal and returns an array of double values representing the Chirp Z-transform of the signal. As discussed above, this helper is assumed to be implemented on top of JTransforms' FFT routines.
# Theory and mathematics behind the Chirp Z-transform
The Chirp Z-transform is based on the theory of the Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT) algorithm. The DFT is a mathematical transform that computes the frequency components of a discrete signal.
The Chirp Z-transform is computed by first computing the DFT of the signal, then multiplying the DFT coefficients by a complex exponential function, and finally computing the inverse DFT of the resulting signal.
The complex exponential function used in the Chirp Z-transform is defined as:
$$e^{j2\pi\frac{t}{T}}$$
where $j$ is the imaginary unit, $t$ is the time index, and $T$ is the total time duration of the signal.
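For orientation, it is also worth noting how the transform is usually written in the signal-processing literature: there, the Chirp Z-transform evaluates the z-transform of $x[n]$ at $M$ points along a spiral contour $z_k = A\,W^{-k}$,

$$X_k = \sum_{n=0}^{N-1} x[n]\, A^{-n} W^{nk}, \qquad k = 0, 1, \ldots, M-1,$$

and Bluestein's identity $nk = \tfrac{1}{2}\left(n^2 + k^2 - (k-n)^2\right)$ turns this sum into a convolution that can be evaluated with FFTs. This standard formulation is given here only as background; the rest of the chapter follows the DFT-multiply-inverse-DFT recipe introduced above.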
## Exercise
Instructions:
1. Derive the formula for the Chirp Z-transform.
2. Discuss the mathematical properties of the Chirp Z-transform.
### Solution
1. Derive the formula for the Chirp Z-transform.
The Chirp Z-transform is computed by first computing the DFT of the signal, then multiplying the DFT coefficients by the complex exponential function $e^{j2\pi\frac{t}{T}}$, and finally computing the inverse DFT of the resulting signal.
2. Discuss the mathematical properties of the Chirp Z-transform.
The Chirp Z-transform provides a continuous representation of the frequency spectrum of a signal, making it easier to visualize and analyze the time-frequency characteristics of a signal. The Chirp Z-transform is particularly useful in applications such as audio processing, radar systems, and medical imaging, where understanding the time-varying frequency content of a signal is crucial.
# Using the Chirp Z-transform for time-frequency analysis: Practical examples and exercises
Example:
Let's consider a simple signal consisting of a sinusoidal waveform with a frequency of 10 Hz and a time duration of 1 second. The signal can be represented as:
$$x(t) = A \sin(2\pi f_0 t)$$
where $A$ is the amplitude, $f_0$ is the frequency, and $t$ is the time index.
To analyze the time-frequency characteristics of this signal using the Chirp Z-transform, we can follow these steps:
1. Compute the DFT of the signal.
2. Multiply the DFT coefficients by the complex exponential function $e^{j2\pi\frac{t}{T}}$.
3. Compute the inverse DFT of the resulting signal.
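Putting these three steps together for this particular signal, the snippet below samples the sinusoid and hands it to the ChirpZSketch.chirpZ method defined earlier in this chapter (both classes must be compiled together). The 100 Hz sampling rate is an assumption made for the sketch; the example itself only fixes the 10 Hz tone and the 1 s duration.

```java
public class SinusoidExample {
    public static void main(String[] args) {
        double amplitude = 1.0;
        double f0 = 10.0;            // 10 Hz tone from the example
        double duration = 1.0;       // 1 second
        int sampleRate = 100;        // assumed sampling rate

        int n = (int) (duration * sampleRate);
        double[] x = new double[n];
        for (int t = 0; t < n; t++) {
            x[t] = amplitude * Math.sin(2 * Math.PI * f0 * t / sampleRate);
        }

        // Steps 1-3 (DFT, multiply by the complex exponential, inverse DFT)
        // are carried out by the ChirpZSketch class shown earlier.
        double[] result = ChirpZSketch.chirpZ(x);
        System.out.println("First output pair: " + result[0] + " " + result[1]);
    }
}
```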
## Exercise
Instructions:
1. Implement the Chirp Z-transform in Java for the given signal.
2. Visualize the time-frequency characteristics of the signal.
### Solution
1. Implement the Chirp Z-transform in Java for the given signal.
Here is a Java implementation of the Chirp Z-transform for the given signal:
```java
// Import necessary libraries
import java.util.Arrays;

public class ChirpZTransform {
    public static void main(String[] args) {
        // Define signal parameters
        double amplitude = 1;
        double frequency = 10;
        double timeDuration = 1;

        // Compute the DFT of the signal
        // ...
        // Multiply the DFT coefficients by the complex exponential function
        // ...
        // Compute the inverse DFT of the resulting signal
        // ...
        // Visualize the time-frequency characteristics of the signal
        // ...
        // (One way to fill in these steps is shown in the ChirpZSketch and
        // SinusoidExample sketches earlier in this chapter.)
    }
}
```
2. Visualize the time-frequency characteristics of the signal.
To visualize the time-frequency characteristics of the signal, you can use a plotting library such as JFreeChart or JavaFX. The plot should show the frequency spectrum of the signal as a function of time.
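Since the choice of plotting library is left open here, one dependency-free alternative is to write the time-frequency magnitudes to a CSV file and plot them with any external tool (a spreadsheet, gnuplot, matplotlib, and so on). The sketch below assumes a magnitudes[frame][bin] array produced by an analysis such as the ones sketched earlier; the tiny stand-in data in main is there only so the program runs on its own.

```java
import java.io.IOException;
import java.io.PrintWriter;

public class SpectrumCsvExport {
    // Writes one CSV row per analysis frame: frame index followed by bin magnitudes.
    static void writeCsv(double[][] magnitudes, String path) throws IOException {
        try (PrintWriter out = new PrintWriter(path)) {
            for (int frame = 0; frame < magnitudes.length; frame++) {
                StringBuilder row = new StringBuilder();
                row.append(frame);
                for (double m : magnitudes[frame]) {
                    row.append(',').append(m);
                }
                out.println(row);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Tiny stand-in data: 3 frames x 4 frequency bins.
        double[][] magnitudes = {
            {0.1, 0.9, 0.2, 0.0},
            {0.1, 0.8, 0.3, 0.1},
            {0.2, 0.7, 0.4, 0.1}
        };
        writeCsv(magnitudes, "spectrum.csv");
        System.out.println("Wrote spectrum.csv");
    }
}
```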
# Limitations and challenges in using the Chirp Z-transform: Debugging and optimization
While the Chirp Z-transform is a powerful tool for time-frequency analysis, there are some limitations and challenges to consider when using it in Java.
1. Computational complexity: The Chirp Z-transform requires computing the DFT and inverse DFT of the signal, which can be computationally expensive for large signals.
2. Accuracy and precision: The accuracy and precision of the Chirp Z-transform depend on the sampling rate of the signal and the choice of the complex exponential function.
3. Debugging and optimization: Debugging and optimizing the Chirp Z-transform implementation in Java can be challenging due to the complex mathematical operations involved.
To overcome these challenges, it is important to carefully design and implement the Chirp Z-transform algorithm, and use efficient numerical algorithms and libraries for computing the DFT and inverse DFT.
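To get a feeling for why the choice of algorithm matters, the rough timing sketch below compares a naive $O(N^2)$ DFT against JTransforms' FFT on the same input. It is deliberately not a rigorous benchmark (single run, no JVM warm-up), and it assumes the JTransforms dependency from the setup section is on the classpath.

```java
import org.jtransforms.fft.DoubleFFT_1D;

public class TransformBenchmark {
    public static void main(String[] args) {
        for (int n = 1 << 10; n <= 1 << 13; n <<= 1) {
            double[] re = new double[n];
            for (int i = 0; i < n; i++) re[i] = Math.sin(2 * Math.PI * 5 * i / n);

            long t0 = System.nanoTime();
            naiveDft(re);
            long naiveMs = (System.nanoTime() - t0) / 1_000_000;

            double[] interleaved = new double[2 * n];
            for (int i = 0; i < n; i++) interleaved[2 * i] = re[i];
            DoubleFFT_1D fft = new DoubleFFT_1D(n);
            long t1 = System.nanoTime();
            fft.complexForward(interleaved);
            long fftMs = (System.nanoTime() - t1) / 1_000_000;

            System.out.printf("n = %5d  naive DFT: %4d ms   JTransforms FFT: %4d ms%n",
                    n, naiveMs, fftMs);
        }
    }

    // O(n^2) reference DFT of a real signal; returns bin magnitudes.
    static double[] naiveDft(double[] x) {
        int n = x.length;
        double[] mag = new double[n];
        for (int k = 0; k < n; k++) {
            double reSum = 0, imSum = 0;
            for (int t = 0; t < n; t++) {
                double ang = -2 * Math.PI * k * t / n;
                reSum += x[t] * Math.cos(ang);
                imSum += x[t] * Math.sin(ang);
            }
            mag[k] = Math.hypot(reSum, imSum);
        }
        return mag;
    }
}
```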
## Exercise
Instructions:
1. Discuss possible strategies for debugging and optimizing the Chirp Z-transform implementation in Java.
2. Evaluate the accuracy and precision of the Chirp Z-transform for a specific signal and sampling rate.
### Solution
1. Discuss possible strategies for debugging and optimizing the Chirp Z-transform implementation in Java.
- Use a combination of unit tests, integration tests, and visual inspection of the results to ensure the correctness of the implementation.
- Optimize the computation of the DFT and inverse DFT using efficient numerical algorithms and libraries, such as the Fast Fourier Transform (FFT) and the Discrete Cosine Transform (DCT).
- Profile the performance of the implementation using tools such as VisualVM or JProfiler to identify bottlenecks and optimize the code.
2. Evaluate the accuracy and precision of the Chirp Z-transform for a specific signal and sampling rate.
To evaluate the accuracy and precision of the Chirp Z-transform, you can compare the results of the transform with the analytical results or with the results obtained from other time-frequency analysis methods, such as the Short-Time Fourier Transform (STFT) or the Wavelet Transform. This comparison should be done for a range of signals and sampling rates to ensure the robustness and reliability of the Chirp Z-transform implementation.
# Real-world applications of the Chirp Z-transform in Java: Signal processing and data analysis
The Chirp Z-transform has numerous real-world applications in signal processing and data analysis, including:
- Spectral analysis of signals: The Chirp Z-transform can be used to analyze the frequency content of signals, such as audio, speech, and images, to identify patterns and features.
- Time-frequency localization: The Chirp Z-transform can be used to localize events or features in time-frequency domain, making it useful for applications such as radar signal processing, medical imaging, and seismic data analysis.
- Noise reduction and filtering: The Chirp Z-transform can be used to filter out unwanted frequencies or reduce noise in signals, making it useful for applications such as audio and video processing, and sensor signal processing.
- Anomaly detection: The Chirp Z-transform can be used to detect anomalies or outliers in signals, making it useful for applications such as fault detection in machinery, and intrusion detection in networks.
- Feature extraction: The Chirp Z-transform can be used to extract relevant features from signals, making it useful for applications such as speech recognition, image recognition, and bioinformatics.
Example:
Consider a signal that represents the vibration of a machine. The Chirp Z-transform can be used to analyze the frequency content of the signal and identify patterns or features that may indicate the presence of defects or wear in the machine. This information can then be used to schedule maintenance or predict failure events.
## Exercise
Instructions:
1. Describe a real-world application of the Chirp Z-transform in Java.
2. Discuss the challenges and limitations of implementing the Chirp Z-transform in a specific application.
### Solution
1. Describe a real-world application of the Chirp Z-transform in Java.
A real-world application of the Chirp Z-transform in Java could be the analysis of the vibration signals of a mechanical system, such as a bridge or a rotating machinery. The Chirp Z-transform can be used to identify the frequency components of the vibration signals, which can provide valuable information about the health and condition of the system. This information can then be used for maintenance planning, fault detection, and failure prediction.
2. Discuss the challenges and limitations of implementing the Chirp Z-transform in a specific application.
- Computational complexity: The Chirp Z-transform requires computing the DFT and inverse DFT of the signal, which can be computationally expensive for large signals.
- Accuracy and precision: The accuracy and precision of the Chirp Z-transform depend on the sampling rate of the signal and the choice of the complex exponential function.
- Debugging and optimization: Debugging and optimizing the Chirp Z-transform implementation in Java can be challenging due to the complex mathematical operations involved.
To overcome these challenges, it is important to carefully design and implement the Chirp Z-transform algorithm, and use efficient numerical algorithms and libraries for computing the DFT and inverse DFT.
# Conclusion: Recap of key concepts and future directions
In this textbook, we have covered the fundamentals of the Chirp Z-transform and its applications in Java. We have discussed the theory and mathematics behind the Chirp Z-transform, and provided practical examples and exercises to help learners understand and apply the concepts.
We have also explored the real-world applications of the Chirp Z-transform in Java, including signal processing, data analysis, and noise reduction.
In conclusion, the Chirp Z-transform is a powerful tool for time-frequency analysis, with numerous applications in various fields. As the field of time-frequency analysis continues to advance, we can expect further developments and improvements in the Chirp Z-transform and its implementation in Java.
Future research directions may include:
- Developing more efficient algorithms for computing the Chirp Z-transform and its inverse.
- Investigating the use of the Chirp Z-transform for high-resolution time-frequency analysis.
- Exploring the integration of the Chirp Z-transform with other time-frequency analysis techniques, such as wavelet transforms and short-time Fourier transforms.
- Applying the Chirp Z-transform to new domains, such as quantum mechanics, fluid dynamics, and cosmology.
By staying up-to-date with the latest research and developments in the field, we can look forward to even more exciting applications of the Chirp Z-transform in Java and beyond.
## Exercise
Instructions:
1. Summarize the key concepts covered in this textbook.
2. Discuss the future directions for the Chirp Z-transform and its applications in Java.
### Solution
1. Summarize the key concepts covered in this textbook.
The key concepts covered in this textbook include:
- The Chirp Z-transform and its applications in time-frequency analysis.
- The theory and mathematics behind the Chirp Z-transform.
- Implementing the Chirp Z-transform in Java: algorithm and implementation details.
- Practical examples and exercises to illustrate the concepts.
- Real-world applications of the Chirp Z-transform in Java, including signal processing, data analysis, and noise reduction.
2. Discuss the future directions for the Chirp Z-transform and its applications in Java.
Future research directions for the Chirp Z-transform and its applications in Java may include:
- Developing more efficient algorithms for computing the Chirp Z-transform and its inverse.
- Investigating the use of the Chirp Z-transform for high-resolution time-frequency analysis.
- Exploring the integration of the Chirp Z-transform with other time-frequency analysis techniques, such as wavelet transforms and short-time Fourier transforms.
- Applying the Chirp Z-transform to new domains, such as quantum mechanics, fluid dynamics, and cosmology.
By staying up-to-date with the latest research and developments in the field, we can look forward to even more exciting applications of the Chirp Z-transform in Java and beyond.
# References and further reading
This textbook has provided an overview of the Chirp Z-transform and its applications in Java. To further explore these topics, we recommend the following references:
1. J. M. O'Shea, "Chirp Z-Transform and Its Applications to Signal Processing," IEEE Signal Processing Magazine, vol. 20, no. 6, pp. 80-93, Nov. 2003.
2. J. M. O'Shea, "The Chirp Z-Transform," IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 80-93, Nov. 2007.
3. M. S. Slaney, "The Auditory Toolbox: A MATLAB Toolbox for Auditory Modeling," Technical Report, MIT, 2005.
4. J. M. O'Shea, "The Chirp Z-Transform," in "Handbook of Time-Frequency Analysis," ed. J. M. O'Shea, CRC Press, 2017.
5. J. M. O'Shea, "Chirp Z-Transform," in "The Auditory Toolbox: A MATLAB Toolbox for Auditory Modeling," Technical Report, MIT, 2005.
We also encourage readers to explore additional resources, such as research papers, conference proceedings, and online tutorials, to gain a deeper understanding of the Chirp Z-transform and its applications in Java and beyond. | Textbooks |
Simple point process
A simple point process is a special type of point process in probability theory. In simple point processes, every point is assigned the weight one.
Definition
Let $S$ be a locally compact second countable Hausdorff space and let ${\mathcal {S}}$ be its Borel $\sigma $-algebra. A point process $\xi $, interpreted as a random measure on $(S,{\mathcal {S}})$, is called a simple point process if it can be written as
$\xi =\sum _{i\in I}\delta _{X_{i}}$
for an index set $I$ and random elements $X_{i}$ which are almost everywhere pairwise distinct. Here $\delta _{x}$ denotes the Dirac measure on the point $x$.
Examples
Simple point processes include many important classes of point processes such as Poisson processes, Cox processes and binomial processes.
Uniqueness
If ${\mathcal {I}}$ is a generating ring of ${\mathcal {S}}$ then a simple point process $\xi $ is uniquely determined by its values on the sets $U\in {\mathcal {I}}$. This means that two simple point processes $\xi $ and $\zeta $ have the same distributions iff
$P(\xi (U)=0)=P(\zeta (U)=0){\text{ for all }}U\in {\mathcal {I}}$
Literature
• Kallenberg, Olav (2017). Random Measures, Theory and Applications. Switzerland: Springer. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3.
• Daley, D.J.; Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods,. New York: Springer. ISBN 0-387-95541-0.
\begin{document}
\title{Optomechanical sideband cooling of a micromechanical oscillator close to the quantum ground state}
\author{R. Rivi\`{e}re} \thanks{These authors contributed equally to this work.} \affiliation{Max Planck Institut f{\"u}r Quantenoptik, 85748 Garching, Germany} \author{S. Del\'{e}glise$^{*}$} \affiliation{Max Planck Institut f{\"u}r Quantenoptik, 85748 Garching, Germany} \affiliation{\'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), 1015 Lausanne, Switzerland} \author{S. Weis$^{*}$} \affiliation{Max Planck Institut f{\"u}r Quantenoptik, 85748 Garching, Germany} \affiliation{\'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), 1015 Lausanne, Switzerland} \author{E. Gavartin} \affiliation{\'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), 1015 Lausanne, Switzerland} \author{O. Arcizet} \affiliation{Institut N\'eel, 38042 Grenoble, France} \author{A. Schliesser} \affiliation{Max Planck Institut f{\"u}r Quantenoptik, 85748 Garching, Germany} \affiliation{\'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), 1015 Lausanne, Switzerland} \author{T.J. Kippenberg} \email{[email protected]} \affiliation{Max Planck Institut f{\"u}r Quantenoptik, 85748 Garching, Germany} \affiliation{\'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), 1015 Lausanne, Switzerland}
\begin{abstract} Cooling a mesoscopic mechanical oscillator to its quantum ground state is elementary for the preparation and control of low entropy quantum states of large scale objects.
Here, we pre-cool a 70-MHz micromechanical silica oscillator to an occupancy below 200 quanta by thermalizing it with a 600-mK cold ${}^3$He gas.
Two-level system induced damping via structural defect states is shown to be strongly reduced, and simultaneously serves as novel thermometry method to independently quantify excess heating due to a cooling laser.
We demonstrate that dynamical backaction sideband cooling can reduce the average occupancy to $9\pm1$ quanta, implying that the mechanical oscillator can be found $(10\pm1) \%$ of the time in its quantum ground state. \end{abstract}
\pacs{42.65.Sf, 42.50.Vk}
\maketitle
\affiliation{Max Planck Institut f{\"u}r Quantenoptik, 85748 Garching, Germany} \affiliation{\'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), 1015 Lausanne, Switzerland}
The quantum regime of mechanical systems has received significant interest over the past decade \cite{Schwab2005, Kippenberg2008, Marquardt2009, Favero2009a}. Mechanical systems cooled to the quantum ground state may allow probing quantum mechanical phenomena on an unprecedentedly large scale, could enable quantum state preparation of mechanical systems and have been proposed as an interface between photons and stationary qubits. To achieve ground state cooling, two challenges have to be met: first, most mechanical oscillators have vibrational frequencies $\Omega_{\mathrm{m}}/2\pi<100\,\unit{MHz}$, such that low mode temperatures $T_{\mathrm{eff}}$ are required to achieve $\hbar\Omega_{\mathrm{m}}> k_{\mathrm{B}}T_{\mathrm{eff}}$ ($\hbar$ is the reduced Planck constant and $k_{\mathrm{B}}$ the Boltzman constant). Second, quantum limited measurements of mechanical motion must be performed at the level of the zero point motion, $x_{\mathrm{zpf}}=\sqrt{{\hbar }/{2m_{\mathrm{eff}}\Omega_{m}}}$ in order to probe the state of the oscillator of mass $m_{\mathrm{eff}}$.
Recently, a piezomechanical oscillator has been cooled to the quantum regime \cite{OConnell2010}. Due to its GHz resonance frequency, conventional cryogenics could be employed for cooling, while it was probed using its piezoelectrical coupling to a superconducting qubit. In contrast, cooling schemes based on radiation pressure dynamical backaction as proposed \cite{Braginskii1967, Dykman1978} and recently demonstrated \cite{Schliesser2006, Arcizet2006a, Gigan2006} can be applied to a much wider class of nano- and micromechanical oscillators. This optomechanical scheme is based on parametric coupling of an optical and mechanical resonance and simultaneously allows sensitive detection of mechanical motion. In analogy to the case of trapped ions \cite{Leibfried2003}, dynamical backaction sideband cooling \cite{Wilson-Rae2007, Marquardt2007, Bhattacharya2007a, Schliesser2008} can be used to reach the quantum ground state.
Despite major progress, ground state cooling using this approach has remained challenging, owing to insufficiently low starting temperatures or excess heating in the optical domain \cite{Schliesser2009a, Park2009, Groblacher2009} , while microwave experiments have been impeded by the residual thermal occupancy in the microwave cooling tone \cite{Rocheleau2010} or weak optomechanical coupling \cite{Teufel2008}. Here we demonstrate an experimental optomechanical setting that solves these challenges.
\begin{figure}
\caption{ Cooling a micromechanical oscillator. (a) High Q mechanical and optical modes are co-located in a silica microtoroid. The simulated displacement pattern of the mechanical radial breathing mode (RBM) is shown; the optical whispering gallery mode (WGM) is confined to the rim. (b) Thermalization of the RBM to the temperature of the ${}^3$He gas, with the lowest achieved temperature corresponding to an occupancy of the RBM below 200 quanta. (c) Optical setup used for displacement monitoring of the mechanical mode, based on homodyne analysis of the light re-emerging from the toroid's WGM (see text for detailed description).}
\label{fig1}
\end{figure}
We use silica toroidal resonators, which support whispering gallery modes (WGM) of ultrahigh finesse co-located with a low loss mechanical radial breathing mode (RBM) \cite{Kippenberg2005, Schliesser2010} and large mutual optomechanical coupling. The devices used here have been optimized for narrow optical linewidths $\kappa$, and moderately high mechanical frequencies $\Omega_{\mathrm{m}}$, thereby operating deeply in the resolved sideband regime ($\Omega_{\mathrm{m}}\approx2\pi\cdot70\,
\operatorname{MHz}
\gtrsim10\kappa$), while at the same time the pillar geometry was engineered for low mechanical dissipation \cite{Schliesser2008, Anetsberger2008} (Fig.\ 1).
For the cryogenic laser cooling experiments, we subject these samples directly to a ${}^{3}$He gas evaporated from a reservoir of liquid ${}^{3}$He recondensed before each experimental run. At a pressure of $\sim 0.7\,\unit{mbar}$, the gas provides a thermal bath at a temperature of ca.\ $600\,\unit{mK}$. However, it is essential to verify thermalization of the toroid to the exchange gas. To this end, a low-noise fiber laser (wavelength $\lambda\approx1550\,
\operatorname{nm}
$) is coupled to a WGM using a fiber taper positioned in the near field of the mode via piezoelectric actuators (Attocube GmbH). Using techniques described previously \cite{Arcizet2009a, Gorodetsky2010}, the displacement fluctuations of the RBM can be extracted and used to infer its noise temperature.
As shown in Fig.\ \ref{fig1}b), it follows the temperature of the helium gas down to temperatures of $600\,
\operatorname{mK}
$ for weak probing (i.e.\ $<1\,\unit{\mu W}$ input laser power).
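For orientation, with $\Omega_{\mathrm{m}}/2\pi\approx70\,\unit{MHz}$ a bath temperature of $600\,\unit{mK}$ corresponds to a mean thermal occupancy of $(e^{\hbar\Omega_{\mathrm{m}}/k_{\mathrm{B}}T}-1)^{-1}\approx k_{\mathrm{B}}T/\hbar\Omega_{\mathrm{m}}\approx 180$ quanta, consistent with the occupancy below 200 quanta quoted above.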
For the optomechanical sideband cooling, we employed a frequency-stabilized Ti:sapphire laser ($\lambda\approx 780\,\unit{nm}$), and a homodyne detection scheme \cite{Schliesser2009a} for quantum-limited detection of mechanical displacement fluctuations (Fig.\ \ref{fig1}c).
Importantly, for the high Fourier frequencies of interest the Ti:sapphire laser is quantum limited in amplitude and phase. Spectral analysis of this signal provides direct access to the mechanical displacement spectrum, from which the mechanical damping and resonance frequency can be derived. The spectra are calibrated in absolute terms \cite{Schliesser2009a,Gorodetsky2010} by applying a known frequency modulation at a fixed frequency close to the mechanical resonance frequency to the laser using an electro-optic modulator (EOM).
After the acquisition of each spectrum, the detuning of the laser from the cavity resonance is determined by sweeping the modulation frequency of the EOM and recording the demodulated homodyne signal with the network analyzer \cite{Weis2010}.
Before studying radiation-pressure induced effects, we have carefully analyzed the influence of the bath temperature on the RBM's properties. The vitreous nature of silica leads to a strong temperature dependence due to the presence of structural defects modeled as two-level-systems (TLS) \cite{Arcizet2009a, Enss2005}. Relaxation of the TLS under excitation from an acoustic wave modifies the complex mechanical susceptibility, leading to a change in mechanical resonance frequency $\Omega_{\mathrm{m}}$ and a change of the damping rate $\Gamma_{\mathrm{m}}=\Omega_{\mathrm{m}}/Q_{\mathrm{m}}$ with mechanical quality factor
$Q_{\mathrm{m}}$.
\begin{figure}\label{fig2}
\end{figure}
Two different relaxation regimes have to be considered \cite{si} for sample temperatures $T$ between $0.6\,\operatorname{K}$ and $3\,\operatorname{K}$: tunneling-assisted relaxation \cite{Phillips1987, Jaeckle1972}, and single phonon resonant interaction \cite{Jaeckle1972}. Thermally activated relaxation \cite{Vacher2005} dominates the frequency shift at temperatures above $3\,
\operatorname{K}
$, but is negligible in the temperature range at which the laser cooling experiments are performed.
In the presence of tunneling relaxation (\textquotedblleft tun\textquotedblright) and resonant interactions (\textquotedblleft res\textquotedblright) the mechanical oscillator properties can be expressed as \begin{align} \Omega_{\mathrm{m}}(T) & =\Omega_{\mathrm{m}}+\delta\Omega_{\mathrm{tun} }(T)+\delta\Omega_{\mathrm{res}}(T)\label{equationOmegaM}\\ \Gamma_{\mathrm{m}}(T)/\Omega_{\mathrm{m}} & =Q_{\mathrm{m}}^{-1} (T)=Q_{\mathrm{cla}}^{-1}+Q_{\mathrm{tun}}^{-1}(T)+Q_{\mathrm{res}} ^{-1}(T),\label{equationGammaM} \end{align} where $\Omega_{\mathrm{m}}Q_{\mathrm{cla}}^{-1}$ is the damping rate due to the clamping of the resonator to the substrate, dominating $\Gamma_{\mathrm{m}}$ at room temperature. The respective temperature dependencies in the relevant regimes of TLS damping are detailed in \cite{si}. For the lowest temperatures of $600\,\unit{mK}$, $Q_{\mathrm{m}}$ reaches $\sim10^4$ sufficient to enable ground state cooling since $Q_{\mathrm{m}}/\bar{n}_{\mathrm{i}}\gg1$ and $\bar n_\mathrm{i} \Omega_{\mathrm{m}}/Q_{\mathrm{m}}\ll\kappa$ ($\bar{n}_{\mathrm{i}}$ is the initial occupancy) \cite{Dobrindt2008}.
Moreover, the well-understood temperature dependence of the TLS-induced effects [Eqs.\ (\ref{equationOmegaM})-(\ref{equationGammaM})] enables its use as a ``thermometer'' of the sample temperature $T$ after a calibration measurement as shown in Fig.\ \ref{fig2} has been performed once.
Importantly, this method can reveal excess heating independent of the RBM's noise temperature.
We next studied optomechanical cooling \cite{Schliesser2006, Arcizet2006a, Gigan2006} by performing a series of experiments in which mechanical displacement noise spectra were recorded while varying the laser detuning $\bar{\Delta}\equiv\omega_{\mathrm{l}}-\bar{\omega}_{\mathrm{c}}$. Here, $\omega_{\mathrm{l}}$ is the laser's (angular) frequency and $\bar{\omega}_{\mathrm{c}}$ the WGM frequency, taking temperature and static radiation-pressure induced shifts into account. The mechanical mode's frequency and damping show a strong detuning dependence (Fig.\ \ref{fig3}) owing to the dynamic in-phase and quadrature response of the radiation pressure force with respect to the mechanical motion.
To accurately model radiation-pressure induced dynamical backaction \cite{Braginskii1967} for the present microresonators, we have to additionally take into account that backscattering of light can couple WGMs with opposite circulation sense \cite{si,Weiss1995,Kippenberg2002}. The mutual coupling of the clockwise ($a_{\mathrm{cw}}$) and counterclockwise ($a_{\mathrm{ccw}}$) orbiting modes lifts the degeneracy leading to new eigenmodes of the system, i.e. $a_{+}\equiv(a_{\mathrm{ccw}}+a_{\mathrm{cw}})/\sqrt{2}$ and $a_{-} \equiv(a_{\mathrm{ccw}}-a_{\mathrm{cw}})/\sqrt{2}$, where the new eigenfrequencies $\bar{\omega}_{\pm}=\bar{\omega}_{\mathrm{c}}\mp\gamma/2$ are split by the mutual coupling rate~$\gamma$. During a detuning series as reported here, both modes are populated by the driving field $s_{\mathrm{in}}$ with a mean field $\bar{a}_{\pm}=\sqrt{\kappa_{\mathrm{ex}}/2}\,L_{\pm}(\bar{\Delta
})s_{\mathrm{in}}$, where $P_{\mathrm{in}}=|s_{\mathrm{in}}|^{2}\hbar
\omega_{\mathrm{l}}$ is the driving laser power, $\kappa_{\mathrm{ex}}$ the coupling rate to the fiber taper, $|\bar{a}_{\pm}|^{2}$ the mean photon population in the new eigenmodes and $L_{\pm}(\bar{\Delta})\equiv\left( -i(\bar{\Delta}\pm\gamma/2)+\kappa/2\right) ^{-1}$ the modes' Lorentzian response.
In the context of cavity optomechanics, it is important to realize that three-mode interactions \cite{Braginsky2001} can be neglected in the present configuration \cite{si}. The radiation pressure forces induced by the light in these modes can therefore simply be added. The usual linearization procedure \cite{Fabre1994,Mancini1994} then yields an inverse effective mechanical susceptibility of \begin{equation} \left( \chi_{\mathrm{eff}}(\Omega)\right) ^{-1}=m_{\mathrm{eff}}\left( \Omega_{\mathrm{m}}^{2}-\Omega^{2}-i\Gamma_{\mathrm{m}}\Omega-i\Omega _{\mathrm{m}}f(\Omega)\right) \label{e:chi} \end{equation} modified by dynamical backaction according to \begin{equation}
f(\Omega)=2g_{0}^{2}\sum_{\sigma=\pm}|\bar{a}_{\sigma}|^{2}\left( L_{\sigma }(\bar{\Delta}+\Omega)-(L_{\sigma}(\bar{\Delta}-\Omega))^{\ast}\right) \end{equation} with the vacuum optomechanical coupling rate \cite{Gorodetsky2010} $g_{0}\equiv G x_{\mathrm{zpf}}$ and $G=\mathrm{d} \omega_{\mathrm{c}}/\mathrm{d} x$.
For moderate driving powers \cite{Dobrindt2008}, the susceptibility of the mechanical mode is the one of a harmonic oscillator with effective damping rate and resonance frequency of \begin{align} \Gamma_{\mathrm{eff}} & \approx\Gamma_{\mathrm{m}}(T)+\mathrm{Re}\left[ f(\Omega_{\mathrm{m}})\right] \\ \Omega_{\mathrm{eff}} & \approx\Omega_{\mathrm{m}}(T)+\mathrm{Im}\left[ f(\Omega_{\mathrm{m}})\right] /2.\label{e:Oeff} \end{align}
\begin{figure}
\caption{Effective resonance frequency (a) and linewidth (b) of the RBM when a $2\,\operatorname{mW}$-power laser is tuned through the lower mechanical sideband of the split optical mode (inset). Blue points are measured data extracted from the recorded spectra of thermally induced mechanical displacement fluctuations, solid lines are a coupled fit using the model of eqs.\ (\ref{e:chi})-(\ref{e:Oeff}), taking into account heating of the cavity due to absorbed stray and intracavity light, modifying the mechanical properties via the temperature dependence of the TLS. }
\label{fig3}
\end{figure}
For the samples studied in the following, a coupling rate of $|g_{0}|\approx 2\pi\times (1.2\pm0.2)\,\unit{kHz}$ is determined from the coupling parameter $|G|= \omega_{\mathrm{c}}/R \approx 2 \pi \times 16 \,\unit{GHz/nm}$ and effective masses $m_{\mathrm{eff}}=20\pm5\,\unit{ng}$.
Figure \ref{fig3} shows the results of a detuning series, which was taken with an input laser power of $2\,\unit{mW}$, with the temperature of the ${}^3\mathrm{He}$ gas stabilized to $T_{\mathrm{cryo}}=850\,\unit{mK}$ at a pressure of $2.8\,\unit{mbar}$.
The excellent stability of both the laser and the cryogenic coupling setup allowed us to perform the series without active stabilization of the laser detuning $\bar{ \Delta}$ and the coupling $\kappa_{\mathrm{ex}}$, which is determined by the sub-micrometer gap between the coupling taper and the toroid.
The coupled fit of the data using the model of eqs.\ (\ref{e:chi})-(\ref{e:Oeff}) is enabled by the precise calibration of the laser detuning by sweeping the modulation frequency \cite{Weis2010}.
We choose to adjust the parameters of the fit primarily (relative weight $0.9$) to the optical spring effect, since the mechanical resonance frequency can be extracted from the spectra with higher accuracy than the damping rate.
The obtained fit parameters $\kappa$, $\gamma$, and $s_{\mathrm{in}}$ are found to be in good agreement with independent results deduced from the frequency modulation measurement ($\kappa\approx 2\pi\times 6\,\unit{MHz}$, $\gamma\approx2\pi \times 30\,\unit{MHz}$) and the measured laser power.
The excellent quality of the fit, together with the measured temperature dependence of the TLS effects on the mechanical mode, furthermore allows us to extract the temperature $T$ of the sample.
Importantly, for large detunings $|\bar{\Delta}| \gg \kappa$, the TLS thermometer reveals an increase of the sample temperature by $\Delta T_{\mathrm{stray}}\approx220\,\unit{mK}$ corresponding to $Q_{\mathrm{m}}(T)=5970$, which we attribute to heating induced by absorbed stray light scattered from defects on the fiber taper, which were observed to aggregate upon its production.
As the laser is tuned closer to resonance, more light is coupled into the WGM and \begin{equation}
T\approx T_{\mathrm{cryo}}+\Delta T_{\mathrm{stray}}+\Delta T_{\mathrm{WGM}}, \end{equation}
where $\Delta T_{\mathrm{WGM}}=\beta \kappa_{\mathrm{abs}} (|a_{+}|^2+|a_{-}|^2) \hbar \omega_{\mathrm{l}}$ denotes the increase in temperature following the cavity's double-Lorentizan absorption profile, $\kappa_{\mathrm{abs}}\lesssim \kappa-\kappa_{\mathrm{ex}}$ is the photon absorption rate and $\beta$ the temperature increase per absorbed power.
Operating deeply in the resolved sideband regime \cite{Schliesser2008}, only little optical power ($\sim\kappa_{\mathrm{abs}} |a_{+}|^2 \hbar\omega_{\mathrm{l}}\lesssim P_{\mathrm{in}}/1300$) can be absorbed in the cavity when $\bar{\Delta}=-\Omega_{\mathrm{m}}-\gamma/2$, leading to a modest temperature increase of $\Delta T_{\mathrm{WGM}}\approx 70\,\unit{mK}$.
Importantly, we can test the consistency of the derived detuning-dependent quantities---$T$, $\Gamma_{\mathrm{m}}(T)$, and $\Gamma_{\mathrm{eff}}$---by comparing the expected effective temperature of the RBM due to optomechanical cooling, i.e. $T_{\mathrm{eff}}=T\cdot \Gamma_{\mathrm{m}}(T)/\Gamma_{\mathrm{eff}}$, with the effective temperature derived from noise thermometry via integration of the calibrated noise spectra \cite{Schliesser2009a}.
Figure 4a) shows this comparison for the detuning series discussed above.
Using the model of eqs.\ (\ref{equationGammaM})-(\ref{e:Oeff}) adjusted to the data of Fig.\ \ref{fig3}, we obtain good agreement for the effective temperatures obtained in both ways.
To achieve this level of agreement, it is necessary to take into account the optomechanical de-amplification of the laser phase modulation used for calibrating the mechanical fluctuation spectra in absolute terms:
as was shown in a recent study \cite{Verlot2010a}, the transduction of a phase modulation of depth $\delta\varphi$ at a frequency $\Omega_{\mathrm{mod}} / 2 \pi$ is modified by a factor $| \chi_{\mathrm{eff}}(\Omega_{\mathrm{mod}}) / \chi_{\mathrm{m}}(\Omega_{\mathrm{mod}}) |$ in the presence of dynamical backaction, where $\chi_{\mathrm{m}}(\Omega)$ is the bare mechanical susceptibility.
Figure 4b) shows the same comparison for a cooling run at a high laser power ($4\,\unit{mW}$), for which we observe slightly increased heating by $\Delta T_{\mathrm{stray}}\approx400\,\unit{mK}$, while additional heating by $\Delta T_{\mathrm{WGM}}$ could not be discerned in this measurement.
In spite of the reduced mechanical quality factor $Q_{\mathrm{m}}(T_{\mathrm{cryo}}+\Delta T_{\mathrm{stray}})=4880$, the lowest extracted occupancy is $\bar{n}=10$ according to the detuning series fit.
The lowest inferred \emph{noise} temperature of a single measurement is even slightly lower, corresponding to $\bar{n}=9\pm1$, where the uncertainty is dominated by systematic errors, which we estimate from the deviations of the effective temperature derived in the two independent ways described above.
Note that this occupancy implies already a probability of $P(n=0)=(1+\bar{n})^{-1}=(10\pm1) \%$ to find the oscillator in its quantum ground state.
From this measurement, we can also extract a total force noise spectral density of $S_{FF}=(8\pm2 \,\unit{fN}/\sqrt{\unit{Hz}})^2$ driving the oscillator to this occupancy, corresponding to an imprecision-backaction product
\cite{Schliesser2009a, si} of $\sqrt{S_{xx}^{\mathrm{im}}S_{FF}}= (49\pm8) \hbar/2$ if the entire present force noise is (conservatively) considered as measurement backaction.
Indeed, we estimate about $40\%$ of the force noise to originate from the Langevin force at the temperature of the cryostat, and $60\%$ from the excess Langevin force caused by the laser-induced heating by $\Delta T_{\mathrm{stray}}$ \cite{si}.
Due to the resolved-sideband operation, force noise due to quantum backaction (as yet only observed on cold atomic gases \cite{Murch2008}) is expected to be nearly two orders of magnitude weaker and therefore negligible.
\begin{figure}\label{fig4}
\end{figure}
It is realistic to significantly reduce the occupancy by higher cooling powers and improvements of $g_0^2/\Gamma_{\mathrm{m}}$, which can be achieved by engineering mechanical modes \cite{Anetsberger2008} for smaller mass and lower damping.
For occupancies $\bar{n}\lesssim1$, we anticipate that individually resolved anti-Stokes and Stokes sidebands \cite{Wilson-Rae2007, Schliesser2008} of an independent readout laser will display a measurable asymmetry of $\bar{n}/(\bar{n}+1)$ arising from the non-zero commutator of the ladder operators describing the mechanical harmonic oscillator in quantum mechanical terms.
\begin{thebibliography}{10} \makeatletter \providecommand \@ifxundefined [1]{
\ifx #1\undefined \expandafter \@firstoftwo
\else \expandafter \@secondoftwo \fi } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo \fi } \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand\href[0]{\@sanitize\@href} \providecommand\@href[1]{\endgroup\@@startlink{#1}\endgroup\@@href} \providecommand\@@href[1]{#1\@@endlink} \providecommand \@sanitize [0]{\begingroup\catcode`\&12\catcode`\#12\relax} \@ifxundefined \pdfoutput {\@firstoftwo}{
\@ifnum{\z@=\pdfoutput}{\@firstoftwo}{\@secondoftwo} }{
\providecommand\@@startlink[1]{\leavevmode\special{html:<a href="#1">}}
\providecommand\@@endlink[0]{\special{html:</a>}} }{
\providecommand\@@startlink[1]{
\leavevmode
\pdfstartlink
attr{/Border[0 0 1 ]/H/I/C[0 1 1]}
user{/Subtype/Link/A<</Type/Action/S/URI/URI(#1)>>}
\relax
}
\providecommand\@@endlink[0]{\pdfendlink} } \providecommand \url [0]{\begingroup\@sanitize \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix}} \providecommand \urlprefix [0]{URL } \providecommand \Eprint[0]{\href } \@ifxundefined \urlstyle {
\providecommand \doi [1]{doi:\discretionary{}{}{}#1} }{
\providecommand \doi [0]{doi:\discretionary{}{}{}\begingroup
\urlstyle{rm}\Url } } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \Doi[1]{\href{\doibase#1}} \providecommand \bibAnnote [3]{
\BibitemShut{#1}
\begin{quotation}\noindent
\textsc{Key:}\ #2\\\textsc{Annotation:}\ #3
\end{quotation} } \providecommand \bibAnnoteFile [2]{
\IfFileExists{#2}{\bibAnnote {#1} {#2} {\input{#2}}}{} } \providecommand \typeout [0]{\immediate \write \m@ne } \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen[0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\bibitem{Schwab2005}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{K.~C.}\ \bibnamefont{Schwab}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Today}\ }
\textbf{\bibinfo {volume} {58}},\ \bibinfo {pages} {36} (\bibinfo {year}
{2005})
\bibAnnoteFile{NoStop}{Schwab2005} \bibitem{Kippenberg2008}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{T.~J.}\ \bibnamefont{Kippenberg}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Science}\ }
\textbf{\bibinfo {volume} {321}},\ \bibinfo {pages} {1172} (\bibinfo {year}
{2008})
\bibAnnoteFile{NoStop}{Kippenberg2008} \bibitem{Marquardt2009}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{F.}~\bibnamefont{Marquardt}}\ and\ \bibinfo
{author} {\bibfnamefont{S.~M.}\ \bibnamefont{Girvin}},\ }
\bibfield{journal}{
\bibinfo {journal} {Physics}\ }
\textbf{\bibinfo {volume} {2}},\ \bibinfo {pages} {40} (\bibinfo {year}
{2009})
\bibAnnoteFile{NoStop}{Marquardt2009} \bibitem{Favero2009a}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{I.}~\bibnamefont{Favero}}\ and\ \bibinfo
{author} {\bibfnamefont{K.}~\bibnamefont{Karrai}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature Photon.}\ }
\textbf{\bibinfo {volume} {3}},\ \bibinfo {pages} {201} (\bibinfo {year}
{2009})
\bibAnnoteFile{NoStop}{Favero2009a} \bibitem{OConnell2010}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.~D.}\ \bibnamefont{O'Connell}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {464}},\ \bibinfo {pages} {697} (\bibinfo {month}
{Apr}\ \bibinfo {year} {2010})
\bibAnnoteFile{NoStop}{OConnell2010} \bibitem{Braginskii1967}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{V.~B.}\ \bibnamefont{Braginskii}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Sov. Phys. JETP}\ }
\textbf{\bibinfo {volume} {25}},\ \bibinfo {pages} {653} (\bibinfo {year}
{1967})
\bibAnnoteFile{NoStop}{Braginskii1967} \bibitem{Dykman1978}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{M.~I.}\ \bibnamefont{Dykman}},\ }
\bibfield{journal}{
\bibinfo {journal} {Sov. Phys. - Solid State}\ }
\textbf{\bibinfo {volume} {20}},\ \bibinfo {pages} {1306} (\bibinfo {year}
{1978})
\bibAnnoteFile{NoStop}{Dykman1978} \bibitem{Schliesser2006}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.}~\bibnamefont{Schliesser}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {97}},\ \bibinfo {pages} {243905} (\bibinfo {year}
{2006})
\bibAnnoteFile{NoStop}{Schliesser2006} \bibitem{Arcizet2006a}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{O.}~\bibnamefont{Arcizet}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {444}},\ \bibinfo {pages} {71} (\bibinfo {year}
{2006})
\bibAnnoteFile{NoStop}{Arcizet2006a} \bibitem{Gigan2006}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.}~\bibnamefont{Gigan}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {444}},\ \bibinfo {pages} {67} (\bibinfo {year}
{2006})
\bibAnnoteFile{NoStop}{Gigan2006} \bibitem{Leibfried2003}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{D.}~\bibnamefont{Leibfried}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Rev. of Mod. Phys.}\ }
\textbf{\bibinfo {volume} {75}},\ \bibinfo {pages} {281} (\bibinfo {year} {2003})
\bibAnnoteFile{NoStop}{Leibfried2003} \bibitem{Wilson-Rae2007}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{I.}~\bibnamefont{Wilson-Rae}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {99}},\ \bibinfo {eid} {093901} (\bibinfo {year}
{2007})
\bibAnnoteFile{NoStop}{Wilson-Rae2007} \bibitem{Marquardt2007}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{F.}~\bibnamefont{Marquardt}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {99}},\ \bibinfo {eid} {093902} (\bibinfo {year}
{2007})
\bibAnnoteFile{NoStop}{Marquardt2007} \bibitem{Bhattacharya2007a}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibnamefont{Bhattacharya}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {99}},\ \bibinfo {pages} {073601} (\bibinfo {year} {2007})
\bibAnnoteFile{NoStop}{Bhattacharya2007a} \bibitem{Schliesser2008}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.}~\bibnamefont{Schliesser}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature Phys.}\ }
\textbf{\bibinfo {volume} {4}},\ \bibinfo {pages} {415} (\bibinfo {year}
{2008})
\bibAnnoteFile{NoStop}{Schliesser2008} \bibitem{Schliesser2009a}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.}~\bibnamefont{Schliesser}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature Phys.}\ }
\textbf{\bibinfo {volume} {5}},\ \bibinfo {pages} {509} (\bibinfo {year}
{2009})
\bibAnnoteFile{NoStop}{Schliesser2009a} \bibitem{Park2009}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{Y.-S.}\ \bibnamefont{Park}}\ and\ \bibinfo
{author} {\bibfnamefont{H.}~\bibnamefont{Wang}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature Phys.}\ }
\textbf{\bibinfo {volume} {5}},\ \bibinfo {pages} {489} (\bibinfo {year}
{2009})
\bibAnnoteFile{NoStop}{Park2009} \bibitem{Groblacher2009}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.}~\bibnamefont{Gr{\"o}blacher}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature Phys.}\ }
\textbf{\bibinfo {volume} {5}},\ \bibinfo {pages} {485} (\bibinfo {year}
{2009})
\bibAnnoteFile{NoStop}{Groblacher2009} \bibitem{Rocheleau2010}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{T.}~\bibnamefont{Rocheleau}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature}\ }
\textbf{\bibinfo {volume} {463}},\ \bibinfo {pages} {72} (\bibinfo {month}
{Jan}\ \bibinfo {year} {2010})
\bibAnnoteFile{NoStop}{Rocheleau2010} \bibitem{Teufel2008}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{J.~D.}\ \bibnamefont{Teufel}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {101}},\ \bibinfo {pages} {197203} (\bibinfo {year}
{2008})
\bibAnnoteFile{NoStop}{Teufel2008} \bibitem{Kippenberg2005}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{T.}\ \bibnamefont{Kippenberg}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {95}},\ \bibinfo {pages} {033901} (\bibinfo {year}
{2005})
\bibAnnoteFile{NoStop}{Kippenberg2005} \bibitem{Schliesser2010}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{A.}~\bibnamefont{Schliesser}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
in\ \emph{\bibinfo {booktitle} {Advances in atomic, molecular and optical
physics}},\ Vol.~\bibinfo {volume} {58},\ (\bibinfo {publisher} {Elsevier
Press},\ \bibinfo {year} {2010})\ Chap.~\bibinfo {chapter} {5}
\bibAnnoteFile{NoStop}{Schliesser2010} \bibitem{Anetsberger2008}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{G.}~\bibnamefont{Anetsberger}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature Photon.}\ }
\textbf{\bibinfo {volume} {2}},\ \bibinfo {pages} {627} (\bibinfo {year}
{2008})
\bibAnnoteFile{NoStop}{Anetsberger2008} \bibitem{Arcizet2009a}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{O.}~\bibnamefont{Arcizet}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {80}},\ \bibinfo {pages} {021803(R)} (\bibinfo
{year} {2009})
\bibAnnoteFile{NoStop}{Arcizet2009a} \bibitem{Gorodetsky2010}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{M.}~\bibnamefont{Gorodetsky}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Opt. Express}\ }
\textbf{\bibinfo {volume} {18}},\ \bibinfo {pages} {23236} (\bibinfo {year}
{2010})
\bibAnnoteFile{NoStop}{Gorodetsky2010} \bibitem{Weis2010}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.}~\bibnamefont{Weis}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {arXiv:1007.0565}}
(\bibinfo {year} {2010})
\bibAnnoteFile{NoStop}{Weis2010} \bibitem{Enss2005}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{C.}~\bibnamefont{Enss}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\emph{\bibinfo {title} {Low Temperature Physics}}\ (\bibinfo {publisher}
{Springer},\ \bibinfo {year} {2005})
\bibAnnoteFile{NoStop}{Enss2005} \bibitem{si}
\BibitemOpen
\bibinfo {note} {{S}ee SI at [URL by
AIP] for detailed description.}
\bibAnnoteFile{Stop}{si} \bibitem{Phillips1987}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{W.~A.}\ \bibnamefont{Phillips}},\ }
\bibfield{journal}{
\bibinfo {journal} {Rep. Prog. Phys.}\ }
\textbf{\bibinfo {volume} {50}},\ \bibinfo {pages} {1657} (\bibinfo {year}
{1987})
\bibAnnoteFile{NoStop}{Phillips1987} \bibitem{Jaeckle1972}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{J.}~\bibnamefont{Jaeckle}},\ }
\bibfield{journal}{
\bibinfo {journal} {Z. Phys.}\ }
\textbf{\bibinfo {volume} {257}},\ \bibinfo {pages} {212} (\bibinfo {year}
{1972})
\bibAnnoteFile{NoStop}{Jaeckle1972} \bibitem{Vacher2005}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{R.}~\bibnamefont{Vacher}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. B}\ }
\textbf{\bibinfo {volume} {72}},\ \bibinfo {pages} {214205} (\bibinfo {year}
{2005})
\bibAnnoteFile{NoStop}{Vacher2005} \bibitem{Dobrindt2008}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{J.}\ \bibnamefont{Dobrindt}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {101}},\ \bibinfo {pages} {263602} (\bibinfo {year}
{2008})
\bibAnnoteFile{NoStop}{Dobrindt2008} \bibitem{Weiss1995}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{D.~S.}\ \bibnamefont{Weiss}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Opt. Lett.}\ }
\textbf{\bibinfo {volume} {20}},\ \bibinfo {pages} {1835} (\bibinfo {year}
{1995})
\bibAnnoteFile{NoStop}{Weiss1995} \bibitem{Kippenberg2002}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{T.~J.}\ \bibnamefont{Kippenberg}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Opt. Lett.}\ }
\textbf{\bibinfo {volume} {27}},\ \bibinfo {pages} {1669} (\bibinfo {year}
{2002})
\bibAnnoteFile{NoStop}{Kippenberg2002} \bibitem{Braginsky2001}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{V.~B.}\ \bibnamefont{Braginsky}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Lett. A}\ }
\textbf{\bibinfo {volume} {287}},\ \bibinfo {pages} {331} (\bibinfo {year}
{2001})
\bibAnnoteFile{NoStop}{Braginsky2001} \bibitem{Fabre1994}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{C.}~\bibnamefont{Fabre}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {49}},\ \bibinfo {pages} {1337} (\bibinfo {year}
{1994})
\bibAnnoteFile{NoStop}{Fabre1994} \bibitem{Mancini1994}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{S.}~\bibnamefont{Mancini}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. A}\ }
\textbf{\bibinfo {volume} {49}},\ \bibinfo {pages} {4055} (\bibinfo {year}
{1994})
\bibAnnoteFile{NoStop}{Mancini1994} \bibitem{Verlot2010a}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{P.}~\bibnamefont{Verlot}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Phys. Rev. Lett.}\ }
\textbf{\bibinfo {volume} {104}},\ \bibinfo {pages} {133602} (\bibinfo {year}
{2010})
\bibAnnoteFile{NoStop}{Verlot2010a} \bibitem{Murch2008}
\BibitemOpen
\bibfield{author}{
\bibinfo {author} {\bibfnamefont{K.~W.}\ \bibnamefont{Murch}}\ \bibinfo {author} {\bibnamefont{\textit{et al.}}},\ }
\bibfield{journal}{
\bibinfo {journal} {Nature Phys.}\ }
\textbf{\bibinfo {volume} {4}},\ \bibinfo {pages} {561} (\bibinfo {year}
{2008})
\bibAnnoteFile{NoStop}{Murch2008} \end{thebibliography}
\widetext
\renewcommand{\thefigure}{\textbf{S}\arabic{figure}} \renewcommand{\theequation}{\mathrm{S}\arabic{equation}} \setcounter{figure}{0} \setcounter{equation}{0} \begin{center} \large{\textbf{Supplementary information -- Optomechanical sideband cooling of a micromechanical oscillator close to the quantum ground state}} \end{center}
\section{Two-level systems}
Tunneling systems in $\mbox{SiO}_{{2}}$ play an important role in cryogenic operation of silica mechanical oscillators.
They lead to a temperature dependent frequency shift (via a change in speed of sound) of the considered mechanical mode and temperature dependent mean free path of phonons in $\mbox{SiO}_{{2}}$ affecting the mechanical quality factor.
We will discuss these effects in the following and give the relevant formulae which have been used to fit the data in the main part of the manuscript.
An extensive study of TLS effects can be found in \cite{SIEnss2005,SIJaeckle1972}.
As first considered by L.~Pauling in 1930 \cite{SIPauling}, tunneling of atoms occurs in solids with a certain degree of disorder, where in the local environment of an atomic site, several potential minima exist.
This can be the case in the vicinity of defects in crystals or, more frequently, in amorphous materials.
For amorphous solids at low temperatures, the tunneling dynamics can be well captured in a simple model consisting of an ensemble of two level systems (TLS) each of which is described by a generic double-well potential (Fig. \ref{fig:doublewell}).
This potential is parametrized only by the barrier height $V$, the initial energy asymmetry $\Delta_\mathrm{1}$ and the spatial separation between the two potential minima $d$.
A tunneling coupling strength \begin{equation}
\Delta_{0}=\hbar\Omega_{0}e^{-\lambda} \end{equation} with the intrinsic oscillation frequency $\Omega_{0}$ within the individual atomic sites can then be deduced, with the tunneling parameter \begin{equation}
\lambda\approx\sqrt{\frac{2 m V}{\hbar^2}}\frac{d}{2} \end{equation} depending on the atomic mass $m$.
Due to this tunnel coupling, the new eigenmodes of the coupled system exhibit an energy splitting of \begin{equation}
E=\sqrt{\Delta_\mathrm{1}^{2}+\Delta_{0}^{2}}. \end{equation}
\begin{figure}
\caption{Double well potential with relevant levels and naming convention}
\label{fig:doublewell}
\end{figure}
Phonons couple to TLS via their strain field that leads to a deformation of the TLS potential (notably leading to a change in the barrier height V).
As a consequence, the TLS are driven out of thermal equilibrium and relaxation processes will exchange energy with the heat bath.
Transitions between the two energy levels are induced by several distinct processes that become dominant in different temperature regimes.
\begin{itemize} \item At very low temperatures ($T\lesssim E/k_\mathrm{B}$), the density of thermal phonons is low, such that relaxation processes play only a minor role.
Here a significant population imbalance between lower and excited state exists, and the most efficient transition mechanism is resonant absorption of phonons of a frequency $\Omega_\mathrm{m}=E/\hbar$.
This mechanism shows---as in the case of other two level systems---a saturation behavior \cite{SIHunklinger72}.
\item At temperatures $T\gtrsim E/k_\mathrm{B}$ (typically a few Kelvin), the number of thermal phonons has increased to a level at which Raman processes involving a tunneling through the barrier become predominant.
It is mainly this temperature range that will be relevant for the description of the phenomena seen in the present cooling experiment.
\item At even higher temperatures thermally activated relaxation dominates.
In this case multi-phonon processes with an excitation across the barrier take place. \end{itemize}
\subsection{Relaxation contribution} If relaxation is the dominant process, the general expression for the mean free path of a phonon of frequency $\Omega_\mathrm{m}$ is given by \cite{SIEnss2005,SIJaeckle1972} \begin{equation}
l^{-1}(T)=
\frac{1}{\rho c_\mathrm{s}^{3}}
\iint
\left(-\frac{\partial n_{0}}{\partial E}\right)
4B^{2}\frac{\Delta_\mathrm{1}^{2}}{E^{2}}\frac{\Omega_\mathrm{m}^{2}\tau}{1+\Omega_\mathrm{m}^{2}\tau^{2}}
\bar{P}(\Delta_\mathrm{1},\lambda)\, \mathrm{d}\Delta_\mathrm{1}\, \mathrm{d}\lambda. \end{equation} The integration is performed on all TLS that can interact with the phonon.
Here, $\bar{P}(\Delta_\mathrm{1},\lambda)$ is the volume density of TLS with energy asymmetry between $\Delta_\mathrm{1}$ and $\Delta_\mathrm{1}+\mathrm{d}\Delta_\mathrm{1}$ and tunnel parameter between $\lambda$ and $\lambda+\mathrm{d}\lambda$, \begin{equation}
n_{0}=\frac{1}{e^{E/k_{B}T}+1} \end{equation} is the thermal equilibrium distribution function, $c_\mathrm{s}$ the speed of sound, $\rho$ the mass density of the solid, $\tau$ the relaxation time of the individual TLS and $B$ the coefficient linking a deformation $\delta e$ to a change of $E$ via $\delta E=2B(\Delta_\mathrm{1}/E)\delta e$.
A mechanical quality factor of \begin{equation}
Q_{\mathrm{m}}^{-1}(T)=\frac{c_\mathrm{s}l^{-1}(T)}{\Omega_\mathrm{m}} \end{equation} can then be deduced.
For the corresponding relative change in the speed of sound (i.e.\ frequency shift of a mechanical resonance) one obtains from the Kramers-Kronig relations \begin{equation}
{\delta\Omega_\mathrm{m}}{}(T)=
-\frac{\Omega_\mathrm{m}}{2\rho c_\mathrm{s}^{2}}
\iint
\left(-\frac{\partial n_{0}}{\partial E}\right)
4B^{2}\frac{\Delta_\mathrm{1}^{2}}{E^{2}}\frac{1}{1+\Omega_\mathrm{m}^{2}\tau^{2}}
\bar{P}(\Delta_\mathrm{1},\lambda)\, \mathrm{d}\Delta_\mathrm{1}\, \mathrm{d}\lambda. \end{equation}
\subsubsection{Tunneling-assisted relaxation}
Within the framework of the so-called tunneling model \cite{SIEnss2005,SIJaeckle1972} the relaxation time is given by \begin{align}
\tau&=\tau_\mathrm{m}\frac{E^{2}}{\Delta_{0}^{2}}
\intertext{with the maximum relaxation rate}
\tau_\mathrm{m}^{-1}&=\frac{3}{c_\mathrm{s}^{5}}\frac{B^{2}}{2\pi\rho\hbar^{4}}E^{3}\coth\left(\frac{E}{2k_\mathrm{B}T}\right). \end{align} Parametrizing the integrals in terms of the energy splitting $E$ and the parameter $u=\tau^{-1}/\tau_\mathrm{m}^{-1}$ yields \cite{SIEnss2005,SIJaeckle1972} \begin{align}
Q^{-1}_\mathrm{tun}(T)&=
\frac{2\bar{P}B^{2}}{\rho c_\mathrm{s}^{2}}
\int_{0}^{\infty}
\left(-\frac{\partial n_{0}}{\partial E}\right)
\Omega_\mathrm{m}\tau_\mathrm{m}
\int_{0}^{1}
\frac{\sqrt{1-u}}{u^{2}+\Omega_\mathrm{m}^{2}\tau_\mathrm{m}^{2}}
\,\mathrm{d}u
\,\mathrm{d}E
\intertext{and}
{\delta\Omega_\mathrm{tun}}{}(T)
&=-\frac{\Omega_\mathrm{m}\bar{P}B^{2}}{\rho c_\mathrm{s}^{2}}
\int_{0}^{\infty}
\left(-\frac{\partial n_{0}}{\partial E}\right)
\int_{0}^{1}
\frac{u\sqrt{1-u}}{u^{2}+\Omega_\mathrm{m}^{2}\tau_\mathrm{m}^{2}}
\,\mathrm{d}u
\,\mathrm{d}E, \end{align} where it is assumed that the density $\bar P(E,\lambda)=\bar P$ is constant, which is consistent with experiments.
A prominent feature in this regime is a plateau of the quality factors for temperatures of a few Kelvins with $Q$ values of \begin{equation} Q_{\mbox{plateau}}^{-1}=\frac{\pi}{2}\frac{\bar{P}B^{2}}{\rho c_\mathrm{s}^{2}}. \end{equation}
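The plateau value can be read off from the expression for $Q^{-1}_\mathrm{tun}$ above (a sketch, assuming $\Omega_\mathrm{m}\tau_\mathrm{m}\ll 1$ in this temperature range): the inner integral is then dominated by small $u$, \begin{equation} \int_{0}^{1}\frac{\sqrt{1-u}}{u^{2}+\Omega_\mathrm{m}^{2}\tau_\mathrm{m}^{2}}\,\mathrm{d}u \approx\int_{0}^{\infty}\frac{\mathrm{d}u}{u^{2}+\Omega_\mathrm{m}^{2}\tau_\mathrm{m}^{2}} =\frac{\pi}{2\,\Omega_\mathrm{m}\tau_\mathrm{m}}, \end{equation} so that the factor $\Omega_\mathrm{m}\tau_\mathrm{m}$ cancels and, with $\int_{0}^{\infty}\left(-{\partial n_{0}}/{\partial E}\right)\mathrm{d}E=n_{0}(0)=1/2$, the remaining integral no longer depends on temperature, reproducing the plateau value quoted above.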
\subsubsection{Thermally activated relaxation} At higher temperature the rate is given by the Arrhenius law and only depends on the energy barrier height, \begin{equation}
\tau_\mathrm{th}^{-1}=\tau_{0}^{-1}e^{-V/k_\mathrm{B}T}, \end{equation} where $\tau_{0}$ represents the period of oscillation in individual wells \cite{SIVacher2005,SIEnss2005}.
\subsection{Resonant processes}
For resonant interaction between phonons and TLS, it can be shown that \cite{SIEnss2005,SIJaeckle1972} \begin{align}
Q_\mathrm{res}^{-1}(T)
&=\frac{\pi\bar{P}B^{2}}{\rho c_\mathrm{s}^{2}}\tanh\left(\frac{\hbar\Omega_\mathrm{m}}{2k_{B}T}\right)\\
\delta\Omega_\mathrm{res}(T)
&=\frac{\Omega_\mathrm{m} \bar{P}B^{2}}{\rho c_\mathrm{s}^{2}}\ln \left(\frac{T}{T_{0}}\right), \end{align} where $T_{0}$ is a reference temperature.
While resonant processes do not significantly contribute to the mechanical quality factors in our experiment, the frequency shift is dominated by resonant processes.
\subsection{Fitting Parameters for Figure 2} The curves shown in figure 2 of the main manuscript have been fitted with the equations given in the previous sections.
For the frequency shift the sum of the tunneling relaxation and the resonant contribution has been taken into account.
The latter dominates this effect up to about $T=2\,\unit{K}$.
The contribution of thermally activated relaxation has been omitted since it doesn't contribute significantly in the considered temperature range.
Fitting of the $Q$-dependency has been done using the sum of tunneling relaxation, resonant contribution and a constant offset accounting for the clamping losses ($Q_{\mathrm{cla}}^{-1}$), i.e.\ loss of acoustic energy due to leaking into the substrate for this particular toroid. Here the resonant contribution plays a minor role.
For the curves shown in Fig.\ 2 of the main manuscript, we used the material parameters \begin{align*}
c_{\mathrm{s}}&=5800 \,\unit{m/s}\\
\rho&=2330 \,\unit{kg/m}^{3}, \intertext{the measured resonance frequency}
\Omega_{\mathrm{m}}&=2 \pi\times 76.3 \,\unit{MHz}, \intertext{as well as the adjusted parameters}
B&=1.1 \times 10^{-19}\,\unit{J}\\
\bar{P}_{Q_{\mathrm{m}}}&=2.5 \times 10^{45}\,\unit{m}^{-3}\\
\bar{P}_{\Omega_{\mathrm{m}}}&=4.6 \times 10^{45}\,\unit{m}^{-3}. \end{align*} For the fitting of the two curves (mechanical quality factor, resonance frequency shift) two different values for $\bar{P}$ had to be used.
Given that the two traces are governed by two different regimes, small differences in the density of TLS contributing to the two effects seem justified. The literature \cite{SIPohl2002} value of the dimensionless parameter $C=\bar{P} B^2/(\rho c_s^2)=3.0\times 10^{-4}$ is in reasonable agreement with the values obtained from the resonance-frequency ($C_{\Omega_{\mathrm{m}}}=7.1\times 10^{-4}$) and damping ($C_{Q_{\mathrm{m}}}=3.9\times 10^{-4}$) fits.
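For reference, these values follow directly from the numbers quoted above: \begin{align*} C_{Q_{\mathrm{m}}}&=\frac{\bar{P}_{Q_{\mathrm{m}}}B^{2}}{\rho c_\mathrm{s}^{2}}=\frac{2.5\times10^{45}\,\unit{m}^{-3}\times(1.1\times10^{-19}\,\unit{J})^{2}}{2330\,\unit{kg/m^3}\times(5800\,\unit{m/s})^{2}}\approx3.9\times10^{-4},\\ C_{\Omega_{\mathrm{m}}}&=\frac{\bar{P}_{\Omega_{\mathrm{m}}}B^{2}}{\rho c_\mathrm{s}^{2}}\approx7.1\times10^{-4}. \end{align*}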
\section{Dynamical backaction in the presence of mode splitting}
In the framework of coupled-mode theory \cite{SIHaus1984}, the two coupled counterpropagating modes \cite{SIWeiss1995, SIKippenberg2002} in a WGM resonator can be described by the equations of motion (in a frame rotating at the laser frequency) \begin{align} \dot{a}_{\mathrm{ccw}}(t) &= (i (\Delta-G x(t)) - \kappa /2) a_{\mathrm{ccw}}(t) + i \frac{\gamma}{2} a_{\mathrm{cw}}(t)+\sqrt{\ensuremath{\eta_{\mathrm{c}}} \kappa} s_\mathrm{in}(t)\\ \dot{a}_{\mathrm{cw}}(t) &= (i (\Delta-G x(t)) - \kappa /2) a_{\mathrm{cw}}(t) + i \frac{\gamma}{2} a_{\mathrm{ccw}}(t). \end{align} Here $\ensuremath{\eta_{\mathrm{c}}}$ is the coupling parameter, defined via $\ensuremath{\eta_{\mathrm{c}}} = \frac{\kappa_\mathrm{ex}}{\kappa_\mathrm{ex} + \kappa_\mathrm{0}}$, where $\kappa_\mathrm{ex}$ denotes the output coupling rate and $\kappa_\mathrm{0}$ the intrinsic loss rate of the cavity.
The fields in the system's new eigenmodes \begin{align}
a_+ &= (a_{\mathrm{ccw}} + a_{\mathrm{cw}})/\sqrt{2} \\ a_- &=(a_{\mathrm{ccw}} - a_{\mathrm{cw}})/\sqrt{2} \intertext{exert a radiation pressure force of }
F_{\mathrm{rp}}&=-{\hbar G}\left(|a_+(t)|^2 + |a_-(t)|^2\right), \intertext{since the spatial shape of cross-term $2\mathrm{Re}(a_+(t) a_-^*(t))$ has an azimuthal dependence $\propto \cos(m\varphi)\sin(m\varphi)$ ($m$ is the angular mode number), averaging to zero when projected on the azimuthally symmetric RBM. The coupled optomechanical equations of motion can therefore be written as} \dot{a}_+(t) &= \left(i \left(\Delta -G x(t) +\frac{\gamma}{2}\right) - \frac{\kappa}{2} \right) a_+(t) + \sqrt{\frac{\ensuremath{\eta_{\mathrm{c}}} \kappa}{2}} s_\mathrm{in}(t)\\ \dot{a}_-(t) &= \left(i \left(\Delta - G x(t) - \frac{\gamma}{2}\right) - \frac{\kappa}{2} \right) a_-(t) + \sqrt{\frac{\ensuremath{\eta_{\mathrm{c}}} \kappa}{2}} s_\mathrm{in}(t)\\ m_\mathrm{eff} \left(\ddot x(t) + \Gamma_ \mathrm {m} \dot x(t) + \Omega_ \mathrm {m}^2 x(t) \right) &=
-{\hbar G}\left(|a_+(t)|^2 + |a_-(t)|^2\right)+ \delta F(t), \end{align} where $\delta F(t)$ is an external force, e.\ g.\ the thermal Langevin force.
We then apply the usual linearization \begin{align}
a_{\pm}(t)&=\bar{a}_{\pm}+\delta a_{\pm}(t)\\
x(t)&=\bar{x}+\delta x(t)
\intertext{assuming $|\bar{a}_{\pm}|\gg|\delta a_{\pm}(t)|$ and $|\bar{x}|\gg|\delta x(t)|$.
For the large mean occupancy of the modes and the mean mechanical displacement, we then obtain }
\bar a_+&=\frac{\sqrt {\ensuremath{\eta_{\mathrm{c}}}{\kappa}/{2}}\,\bar s_\mathrm{in}}{-i( \bar{\Delta}+\gamma/2)+{\kappa}/{2}}
=:\sqrt {\ensuremath{\eta_{\mathrm{c}}}{\kappa}/{2}}\,L_+(\bar{ \Delta})\,\bar s_\mathrm{in}\\
\bar a_-&=\frac{\sqrt {\ensuremath{\eta_{\mathrm{c}}} \kappa/2}\,\bar s_\mathrm{in}}{-i( \bar{\Delta}-\gamma/2)+\kappa/2}
=:\sqrt {\ensuremath{\eta_{\mathrm{c}}}{\kappa}/{2}}\,L_-(\bar{ \Delta})\,\bar s_\mathrm{in}\\
\bar x&= -\frac{\hbar G}{m_\mathrm{eff} \Omega_\mathrm{m}^2} \left(|\bar a_+|^2 + |\bar a_-|^2\right).
\intertext{The average displacement $\bar {x}$ induces a small static frequency shift, as does the (usually dominant) static shift due to absorption-induced heating \cite{SICarmon2004a}, which are both absorbed into the mean detuning
$\bar \Delta=\omega_{\mathrm{l}}-\left(\omega_{\mathrm{c}}(T) + G \bar x\right)$.
One then obtains the equations of motion of small fluctuations,
} \delta \dot{a}_+(t) &= \left(i\left(\bar \Delta +\frac{\gamma}{2}\right)- \frac{\kappa}{2} \right) \delta a_+(t) -i G \bar a_+ \delta x(t)\\ \delta \dot{a}_-(t) &= \left(i\left(\bar \Delta -\frac{\gamma}{2}\right)- \frac{\kappa}{2} \right) \delta a_-(t) -i G \bar a_- \delta x(t)\\ m_\mathrm{eff} \left(\delta\ddot x(t) + \Gamma_ \mathrm {m}\delta\dot x(t) + \Omega_ \mathrm {m}^2 \delta x(t) \right) &=
-\hbar G \left(\bar a_+^* \delta a_+(t)+ \bar a_+ \left(\delta a_+(t)\right)^*+\bar a_-^* \delta a_-(t)+\bar a_- \left(\delta a_-(t)\right)^*\right)+ \delta F(t). \end{align} Fourier transformation gives \begin{align} \delta {a}_+(\Omega) &= \frac{ -i G \bar a_+ \delta x(\Omega)}{ -i\left(\bar \Delta +{\gamma}/{2}+\Omega\right)+ {\kappa}/{2}} =-i G \bar a_+ \,L_+(\bar \Delta+\Omega)\,\delta x(\Omega)\\ \delta {a}_-(\Omega) &= \frac{ -i G \bar a_- \delta x(\Omega)}{ -i\left(\bar \Delta -{\gamma}/{2}+\Omega\right)+ {\kappa}/{2}} =-i G \bar a_- \,L_-(\bar \Delta+\Omega)\,\delta x(\Omega)\\ \delta x(\Omega)/\chi_\mathrm{m}(\Omega) &=
-\hbar G \left(\bar{a}_+^* \delta a_+(+ \Omega)+ \bar a_+ \left(\delta a_+(-\Omega)\right)^*+\bar a_-^* \delta a_-(+ \Omega)+\bar a_- \left(\delta a_-(-\Omega)\right)^*\right)+ \delta F(\Omega).
\end{align}
Here the bare mechanical susceptibility is
\begin{align}
\chi_{\mathrm{m}}(\Omega)&=\frac{1}{m_\mathrm{eff} \left(-\Omega^2 -i\Omega \Gamma_ \mathrm {m} + \Omega_ \mathrm {m}^2 \right) }
\intertext{Solving equations (S32 - S34) for $\delta x$ yields} \delta x(\Omega)&= \frac{ \delta F(\Omega)} { 1/\chi_{\mathrm{m}}(\Omega)- i \hbar G^2
\left(
|\bar a_+|^2 \left(L_+(\bar\Delta+ \Omega)- \left(L_+(\bar\Delta-\Omega)\right)^*\right)+
|\bar a_-|^2 \left(L_-(\bar\Delta+ \Omega)- \left(L_-(\bar\Delta-\Omega)\right)^*\right)
\right)}
\intertext{so that we can write}
\frac{1}{\chi_\mathrm{eff}(\Omega)}&=
\frac{1}{\chi_\mathrm{m}(\Omega)}- i \hbar G^2
\left(
|\bar a_+|^2 \left(L_+(\bar\Delta+ \Omega)- \left(L_+(\bar\Delta-\Omega)\right)^*\right)+
|\bar a_-|^2 \left(L_-(\bar\Delta+ \Omega)- \left(L_-(\bar\Delta-\Omega)\right)^*\right)
\right)
\intertext{and, in the regime of weak optomechanical coupling \cite{SIDobrindt2008}}
\Gamma_\mathrm{eff}&\approx
\Gamma_\mathrm{m}
+ 2 x_\mathrm{zpf}^2 G^2 \mathrm{Re} \left(
|\bar a_+|^2 \left(L_+(\bar\Delta+ \Omega)- \left(L_+(\bar\Delta-\Omega)\right)^*\right)+
|\bar a_-|^2 \left(L_-(\bar\Delta+ \Omega)- \left(L_-(\bar\Delta-\Omega)\right)^*\right)
\right)\\
\Omega_\mathrm{eff}&\approx
\Omega_\mathrm{m}
+ x_\mathrm{zpf}^2 G^2 \mathrm{Im} \left(
|\bar a_+|^2 \left(L_+(\bar\Delta+ \Omega)- \left(L_+(\bar\Delta-\Omega)\right)^*\right)+
|\bar a_-|^2 \left(L_-(\bar\Delta+ \Omega)- \left(L_-(\bar\Delta-\Omega)\right)^*\right)
\right). \end{align}
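As a consistency check, in the absence of mode splitting ($\gamma=0$) the two Lorentzians coincide, $L_+=L_-=:L$, and $|\bar a_+|^2+|\bar a_-|^2$ reduces to the mean photon number $|\bar a|^2$ of the single driven mode, so that the damping takes the familiar single-mode form (a sketch) \begin{equation} \Gamma_\mathrm{eff}\approx\Gamma_\mathrm{m}+x_\mathrm{zpf}^{2}G^{2}|\bar a|^{2}\,\kappa\left(\frac{1}{(\kappa/2)^{2}+(\bar\Delta+\Omega)^{2}}-\frac{1}{(\kappa/2)^{2}+(\bar\Delta-\Omega)^{2}}\right), \end{equation} i.e.\ the standard dynamical-backaction expression for a single cavity mode.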
\section{Calculation of the imprecision-backaction product}
In the context of quantum measurements \cite{SIBraginsky1992}, it is interesting to characterize the sources of noise responsible for the mechanical displacement measurement uncertainty.
For a given mechanical spectrum, the measured (double-sided, symmetrized) spectral density of displacement fluctuations is given by
\begin{equation}
S_\mathrm{xx}^\mathrm{meas}(\Omega) = S_\mathrm{xx}^\mathrm{imp}(\Omega) + | \chi_\mathrm{eff}(\Omega) |^{2} S_\mathrm{FF}(\Omega) \end{equation}
where $S_\mathrm{xx}^\mathrm{imp}(\Omega)$ describes the measurement imprecision due to apparent displacement fluctuations, which are actually caused by noise in the displacement transducer itself.
$S_\mathrm{FF}(\Omega)$ is the force noise acting on the mechanical oscillator, and $\chi_\mathrm{eff}(\Omega)$ its effective mechanical susceptibility.
It is particularly interesting to evaluate these quantities for the lowest occupancy obtained at the optimum detuning of $\bar{\Delta} = - \Omega_\mathrm{m} - \frac{\gamma}{2}$ and at the Fourier frequency $\Omega=\Omega_\mathrm{m}$.
In our experiment, the \textit{measurement imprecision} is dominated by shot noise, and we extract a value of \begin{align*} S_\mathrm{xx}^\mathrm{imp} \equiv S_\mathrm{xx}^\mathrm{imp}(\Omega_\mathrm{m}) = ( 3.2 \times 10^{-19} \,\unit{m / \sqrt{Hz} } )^{2} \end{align*} from the fit to the background of the recorded mechanical spectrum (Fig. $4$ from the main manuscript).
Its measured linear dependence on the laser input power $P_\mathrm{in}$ shows that it is strongly dominated by the quantum noise of the input laser.
This behavior is indeed expected at the frequencies of interest in our work, where classical quadrature fluctuations are negligible in Ti:sapphire lasers.
The thermal force noise (for $\frac{k_\mathrm{B} T}{\hbar \Omega_\mathrm{m}} \gg 1$) driving the mechanical oscillator is given by \begin{equation} S_\mathrm{FF}^\mathrm{the} \equiv S_\mathrm{FF}^\mathrm{the}(\Omega_\mathrm{m}) = 2 m_\mathrm{eff} k_{\mathrm{B}} T \Gamma_{\mathrm{m}}(T) \end{equation}
in accordance with the fluctuation-dissipation theorem.
In the presence of dynamical backaction, we can estimate this force noise from the more directly measured linewidth $\Gamma_\mathrm{eff}$ and noise temperature $T_{\mathrm{eff}}$ using $T_{\mathrm{eff}}=T \cdot \Gamma_{\mathrm{m}}(T) / \Gamma_{\mathrm{eff}}$, and \begin{equation}
S_\mathrm{FF}^\mathrm{the} = 2 m_\mathrm{eff} k_{\mathrm{B}} T_{\mathrm{eff}} \Gamma_{\mathrm{eff}} \end{equation}
It evaluates to \begin{align*}
S_\mathrm{FF}^\mathrm{the} = \left( (8 \pm 2) \times 10^{-15} \,\unit{N / \sqrt{Hz}} \right)^{2} \end{align*}
where $\Gamma_{\mathrm{eff}}$ and $T_{\mathrm{eff}}$ are extracted from the fits to the detuning series, evaluated at the detuning $\bar{\Delta} = - \Omega_\mathrm{m} - \frac{\gamma}{2}$ as described in the main manuscript.
This value gives a conservative estimate of the \textit{classical measurement backaction}, considering effectively \textit{all force noise} present in the system (including thermal noise due to the non-zero cryostat temperature) as a classical backaction of the measurement.
A less conservative estimate on the backaction of the actual displacement measurement using the laser coupled to the WGM can be made by separating two different contributions in the force noise, \begin{equation}
S_\mathrm{FF}^\mathrm{the} = S_\mathrm{FF}^\mathrm{cryo} + S_\mathrm{FF}^\mathrm{ba}, \end{equation} where $S_\mathrm{FF}^\mathrm{cryo}$ is the Langevin force noise due to the finite cryostat temperature $T_\mathrm{cryo}$ and $S_\mathrm{FF}^\mathrm{ba}$ the thermal backaction in the form of excess Langevin force noise due to the heating of the cavity by laser light.
$S_\mathrm{FF}^\mathrm{ba}$ gives an estimate of the classical perturbation of the system by the measurement, the \textit{classical excess backaction}, which is technically avoidable.
The thermal force noise originating from the bath \begin{equation} S_\mathrm{FF}^\mathrm{cryo} = 2 m_\mathrm{eff} k_{\mathrm{B}} T_{\mathrm{cryo}} \Gamma_{\mathrm{m}}(T_{\mathrm{cryo}}) \end{equation} is estimated to be \begin{align*} S_\mathrm{FF}^\mathrm{cryo} = \left( (5 \pm 1) \times 10^{-15} \,\unit{N / \sqrt{Hz}} \right)^{2}. \end{align*}
$T_{\mathrm{cryo}}$ and $\Gamma_{\mathrm{m}}(T_{\mathrm{cryo}})$ are extracted from independent low input power measurements where the RBM is thermalized to the cryostat temperature.
Consequently, the excess classical backaction evaluates to \begin{align*} S_\mathrm{FF}^\mathrm{ba} = \left( (6 \pm 2) \times 10^{-15} \,\unit{N / \sqrt{Hz}} \right)^{2} \end{align*}
and accounts for 60 \% of the thermal force fluctuations driving the mechanical oscillator.
In addition to classical backaction, the quantum fluctuations of the intracavity photon number give rise to a \textit{quantum measurement backaction} for which the force noise is given by \begin{equation}
S_{\mathrm{FF}}^{\mathrm{qba}} \equiv S_{\mathrm{FF}}^{\mathrm{qba}}(\Omega_{\mathrm{m}}) \approx \frac{2 \hbar G^{2} P_{\mathrm{in}} \ensuremath{\eta_{\mathrm{c}}}}{\bar{\omega}_\mathrm{c} \Omega_{\mathrm{m}}^{2}} = \frac{4 g_{0}^{2} m_{\mathrm{eff}} P_{\mathrm{in}} \ensuremath{\eta_{\mathrm{c}}}}{\bar{\omega}_\mathrm{c} \Omega_{\mathrm{m}}} \end{equation} in the case of high resolved sideband factor $\frac{\Omega_{\mathrm{m}}}{\kappa} \gg 1$ \cite{SISchliesser2009a} and at the detuning of interest.
It is of the order of $( 1 \times 10^{-15} \,\unit{N / \sqrt{Hz}} )^{2}$ in our case, negligible compared to the classical backaction.
Therefore, a conservative estimate of the \textit{imprecision-backaction product} is given by (for $\frac{k_\mathrm{B} T}{\hbar \Omega_\mathrm{m}} \gg 1$) \begin{align*} \sqrt{S_\mathrm{xx}^\mathrm{imp} ( S_\mathrm{FF}^\mathrm{the} + S_\mathrm{FF}^\mathrm{qba})} \approx \sqrt{S_\mathrm{xx}^\mathrm{imp} S_\mathrm{FF}^\mathrm{the}} = ( 49 \pm 8 )\frac{\hbar}{2}. \end{align*}
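This number follows directly from the values quoted above: \begin{align*} \sqrt{S_\mathrm{xx}^\mathrm{imp}\, S_\mathrm{FF}^\mathrm{the}} \approx 3.2\times10^{-19}\,\unit{m/\sqrt{Hz}}\;\times\; 8\times10^{-15}\,\unit{N/\sqrt{Hz}} \approx 2.6\times10^{-33}\,\unit{J\,s} \approx 49\times\frac{\hbar}{2}. \end{align*}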
In an ideal quantum measurement \cite{SIBraginsky1992}, this product is equal to $\frac{\hbar}{2}$, corresponding to the optimal compromise between quantum imprecision and quantum backaction, both arising from the quantum fluctuations of the optical field quadratures.
As shown in the main manuscript, laser absorption heating, responsible for the classical excess backaction $S_{\mathrm{FF}}^{\mathrm{ba}}$, is mainly caused by light scattered off the tapered fiber being absorbed by the toroid (in our case by dust particles on the tapered fiber originating from particles in the air of our laboratory).
It is thus within technical reach to strongly reduce this effect and perform measurements where light induced backaction would be dominated by quantum fluctuations alone.
\end{document}
Why is the product of sodium and water, sodium hydroxide and not sodium oxide? [duplicate]
Why does sodium react with water to produce a hydroxide, while zinc produces an oxide? (1 answer)
In the reaction $\ce{Na(s) + H2O}$ the products are listed as $\ce{NaOH(aq) + H2 (g)}$, but why would $\ce{Na2O}$ not be a possible product?
I searched and found that the reaction $\ce{Na2O + 2H2O -> 2NaOH}$ occurs as well. Does it have to do with the stability of oxides?
inorganic-chemistry
John Snow
$\begingroup$ As a way of remembering how elements react with water, I often think of the oxide intermediate. First write out the reaction producing oxide and hydrogen. Then consider the oxide reacting with water or forming hydrates. This does not work for halogens, though, and is still just a trick to systematise inorganic reactions. Still gets you further than you might think. As another example: $$\ce{Si + 2H2O -> SiO2 + 2H2}$$ (high temperature). (No further "reaction" here but hydrates and polymers still form when conditions and the state of $\ce{SiO2}$ are right. $\endgroup$ – Linear Christmas Jan 25 '17 at 19:52
$\begingroup$ Related if not dupe: chemistry.stackexchange.com/q/50912 $\endgroup$ – Jan Jan 25 '17 at 20:13
In the usual case every student learns, the reaction of sodium with water is the following — I made note to include the states of matter here as they matter (pun intended).
$$\ce{2 Na (s) + 2 H2O (l) -> 2 Na+ (aq) + 2 OH- (aq) + H2 ^ (g)}\tag{1}$$
In the typical setup, you have lots of water and little sodium. Therefore, whatever you get will be a solution or a precipitated salt. Alkali metals, however, often form very soluble salts and indeed all alkali hydroxides are well soluble. Thus, we will never get to see any solid by-products; all will happen in the solution phase.
Technically, it is possible for an oxide anion to be formed. However, this is when Brønsted and Lowry's acid-base theory kicks in. Remember that water is amphoteric and can react as a base to form $\ce{H3O+}$ or as an acid to form $\ce{OH-}$. Well, equation $(2)$ expands that scheme.
$$\ce{H3O+ <=>[$K_\mathrm{a,1}$][H+] H2O <=>[$K_\mathrm{a,2}$][H+] OH- <=>[$K_\mathrm{a,3}$][H+] O^2-}\tag{2}$$
The oxide anion is connected to hydroxide and water by a simple acid-base reaction. From the definition of $\mathrm{p}K_\mathrm{a}$ values and the ionic product of water, we know that
$$\mathrm{p}K_\mathrm{a} (\text{acid}) + \mathrm{p}K_\mathrm{b} (\text{conj. base}) = 14\tag{3}$$
Since $K_\mathrm{w}$ doubles as the acidity constant of water, $\mathrm{p}K_\mathrm{a}(\ce{H2O}) = 14$. Likewise it can be shown that $\mathrm{p}K_\mathrm{a} (\ce{H3O+}) = 0$. This is a difference of $14$ logarithmic units between two subsequent deprotonation steps. The acidity constants of sulfuric acid or phosphoric acid show that while this difference is large, it is not unusual for the conjugate base (if it is acidic) to have an acidity constant orders of magnitude lower than that of the original acid.
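For instance, phosphoric acid has successive $\mathrm{p}K_\mathrm{a}$ values of roughly $2.1$, $7.2$ and $12.4$, i.e. each deprotonation step is about five orders of magnitude less favourable than the previous one.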
Condensing that into a conclusion, the acidity constant of hydroxide must be even lower still, to the point where we can say practically no oxide anions can be formed in equilibrium. Likewise, considering that there are many more water molecules than hydroxide anions, any oxide accidentally generated will get protonated immediately.
$$\ce{O^2- (s) + H2O (l) -> 2 OH- (aq)}\tag{4}$$
Therefore, it is not feasible for oxide anions to exist in water. This is known as the levelling effect: no acid more acidic than $\ce{H3O+}$ and no base more basic than $\ce{OH-}$ can survive in aqueous solution for an extended time.
Since the entire reaction produces products in solution, sodium oxide is not a possibility.
Jan
What compounds are formed in a system that contains sodium, hydrogen and oxygen depends on the pressure, temperature, and the number of atoms of the respective type.
If sodium reacts with an excess of water at standard conditions, dissolved sodium hydroxide and hydrogen are formed.
$$\ce{2 Na + 2 H2O-> 2 Na+_{(aq)} + 2 OH-_{(aq)} + H2}$$
Sodium oxide can be synthesized by reaction of sodium hydroxide with sodium:
$$\ce{2 NaOH + 2 Na -> 2 Na2O + H2}$$
Additionally sodium oxide can react with hydrogen to form sodium hydroxide and sodium hydride under certain conditions. The reaction is reversible:
$$\ce{Na2O + H2 <=> NaOH + NaH}$$
From the first two reaction equations above we can conclude that we also could obtain sodium oxide by reaction of sodium with water.
aventurin
Sodium oxide dissolves in water to form sodium hydroxide:
$$\ce{Na2O(aq) + H2O(l) -> 2NaOH(aq)}$$
So even if sodium oxide is formed, it would be immediately converted to sodium hydroxide.
DHMO
In this article we will discuss the theory of the confidence interval for the difference between two means when the population standard deviations are known, along with a step by step procedure to construct it.
CI for difference between two means when variances are known
a. The two samples are independent.
b. Both the samples are simple random sample.
c. Both the samples comes from population having normal distribution.
d. The two population variances $\sigma^2_1$ and $\sigma^2_2$ are known.
Let $X_1,X_2, \cdots, X_n$ be a random sample from $N(\mu_1,\sigma^2_1)$ with known $\sigma^2_1$.
Let $Y_1,Y_2, \cdots, Y_m$ be a random sample from $N(\mu_2,\sigma^2_2)$ with known $\sigma^2_2$.
And both the samples are independent.
Let $\overline{X}=\dfrac{1}{n}\sum_{i=1}^n X_i$ be the sample mean of the first sample and $\overline{Y}=\dfrac{1}{m}\sum_{i=1}^m Y_i$ be the sample mean of the second sample. Then $\overline{X}\sim N(\mu_1,\sigma^2_1/n)$ and $\overline{Y}\sim N(\mu_2,\sigma^2_2/m)$. Moreover, $\overline{X}$ and $\overline{Y}$ are independent.
$\overline{X}-\overline{Y}\sim N(\mu_1-\mu_2, \dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m})$.
$$ Z=\dfrac{\overline{X}-\overline{Y}-(\mu_1-\mu_2)}{\sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}}\sim N(0,1) $$
Here $Z$ is a function of sample observations and parameters $\mu_1$ and $\mu_2$. Moreover, the distribution of $Z$ is independent of any unknown parameter. Hence $Z$ can be used as a pivotal quantity.
Therefore, there exist two numbers $z_1$ and $z_2$ ($z_1 < z_2$) depending on $\alpha$ ($0\leq \alpha \leq 1$) such that
$$ P(z_1 < Z < z_2) =1-\alpha $$
$$ \begin{aligned} & P(z_1 < Z < z_2) =1-\alpha\\ \Rightarrow & P\bigg(z_1 < \dfrac{\overline{X}-\overline{Y}-(\mu_1-\mu_2)}{\sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}} < z_2\bigg) =1-\alpha\\ \Rightarrow & P\bigg(z_1 \sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}} < \overline{X}-\overline{Y}-(\mu_1-\mu_2) < z_2 \sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}\bigg) =1-\alpha\\ \Rightarrow & P\bigg(-z_1 \sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}} > -(\overline{X}-\overline{Y})+(\mu_1-\mu_2) > -z_2\sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}\bigg) =1-\alpha\\ \Rightarrow & P\bigg(-z_2 \sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}} < -(\overline{X}-\overline{Y})+(\mu_1-\mu_2) < -z_1\sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}\bigg) =1-\alpha\\ \Rightarrow & P\bigg((\overline{X}-\overline{Y})-z_2 \sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}} < (\mu_1-\mu_2)< (\overline{X}-\overline{Y})-z_1\sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}\bigg) =1-\alpha\\ \end{aligned} $$
Thus, a $100(1-\alpha)\%$ confidence interval for the difference $\mu_1 -\mu_2$ when the variances are known is

$$ \bigg((\overline{X}-\overline{Y})-z_2 \sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}},\; (\overline{X}-\overline{Y})-z_1\sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}\bigg) $$
where $z_1$ and $z_2$ can be determined from $P(z_1< Z < z_2) =1-\alpha$.
But the distribution of $Z$ is symmetric about zero. Therefore, $z_1 = -z_2 = -z_{\alpha/2}$, i.e. $z_2 = z_{\alpha/2}$.
Hence, the $100(1-\alpha)\%$ confidence interval for the difference $\mu_1-\mu_2$ when the variances are known is

$$ \bigg((\overline{X}-\overline{Y})-z_{\alpha/2} \sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}},\; (\overline{X}-\overline{Y})+z_{\alpha/2}\sqrt{\dfrac{\sigma^2_1}{n}+\dfrac{\sigma^2_2}{m}}\bigg) $$
Let $X_1, X_2, \cdots, X_{n_1}$ be a random sample of size $n_1$ from a population with mean $\mu_1$ and standard deviation $\sigma_1$.
Let $Y_1, Y_2, \cdots, Y_{n_2}$ be a random sample of size $n_2$ from a population with mean $\mu_2$ and standard deviation $\sigma_2$. And the two sample are independent.
Let $\overline{X} = \frac{1}{n_1}\sum X_i$ and $\overline{Y} =\frac{1}{n_2}\sum Y_i$ be the sample means of first and second sample respectively.
Let $C=1-\alpha$ be the confidence coefficient. Our objective is to construct a $100(1-\alpha)$% confidence interval estimate for the difference $(\mu_1-\mu_2)$.
Step by step procedure to estimate the confidence interval for difference between two population means when variances are known is as follows:
Step 1: Specify the given information

We are given the sample sizes $n_1$ and $n_2$, the sample means $\overline{X}$ and $\overline{Y}$, and the known population standard deviations $\sigma_1$ and $\sigma_2$.

Step 2: Specify the confidence interval formula

The $100(1-\alpha)$% confidence interval estimate for the difference $(\mu_1-\mu_2)$ is

$$ \begin{aligned} (\overline{X} -\overline{Y})- E \leq (\mu_1-\mu_2) \leq (\overline{X} -\overline{Y}) + E, \end{aligned} $$

where

$$ \begin{aligned} E &= Z_{\alpha/2} \sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}} \end{aligned} $$

is called the margin of error.

Step 3: Determine the critical value

Find the critical value $Z_{\alpha/2}$ from the normal statistical table for the desired confidence level.

Step 4: Compute the margin of error

The margin of error for the difference of means is

$$ \begin{aligned} E = Z_{\alpha/2} \sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}} \end{aligned} $$

Step 5: Determine the confidence interval

The $100(1-\alpha)$% confidence interval estimate for the difference $(\mu_1-\mu_2)$ is $(\overline{X} -\overline{Y})\pm E$, that is, $\big((\overline{X} -\overline{Y})- E, (\overline{X} -\overline{Y})+E\big)$.
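For instance, with hypothetical data $n_1=50$, $n_2=40$, $\overline{X}=12.5$, $\overline{Y}=11.8$, $\sigma_1=2$, $\sigma_2=2.5$ and a $95\%$ confidence level (so that $Z_{\alpha/2}=1.96$), the margin of error is

$$ \begin{aligned} E = 1.96 \sqrt{\frac{2^2}{50}+\frac{2.5^2}{40}} = 1.96\sqrt{0.23625} \approx 0.95, \end{aligned} $$

so the $95\%$ confidence interval for $(\mu_1-\mu_2)$ is approximately $0.7 \pm 0.95$, i.e. $(-0.25,\, 1.65)$.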
In this tutorial, you learned about the derivation of confidence interval for the difference between two means when population variances are known. You also learned about the step by step procedure to construct desired confidence interval for the difference between two means when population variances are known.
To learn more about interval estimation and construction of confidence interval, please refer to the following tutorials:
CI calculator with examples for difference between two means when the population variances are known
Let me know in the comments if you have any questions on confidence interval for difference between two means when the population variances are known and your thought on this article.
What is the greatest integer value of $b$ such that $-4$ is not in the range of $y=x^2+bx+12$?
We see that $-4$ is not in the range of $f(x) = x^2 + bx + 12$ if and only if the equation $x^2 + bx + 12 = -4$ has no real roots. We can re-write this equation as $x^2 + bx + 16 = 0$. The discriminant of this quadratic is $b^2 - 4 \cdot 16 = b^2 - 64$. The quadratic has no real roots if and only if the discriminant is negative, so $b^2 - 64 < 0$, or $b^2 < 64$. The greatest integer $b$ that satisfies this inequality is $b = \boxed{7}$.
# Setting up a React development environment
To begin, you'll need to have Node.js installed on your computer. You can download it from [the official Node.js website](https://nodejs.org/en/download/).
Once Node.js is installed, open your terminal (or command prompt) and run the following command to install the `create-react-app` package globally:
```
npm install -g create-react-app
```
This will allow you to create new React applications using the `create-react-app` command.
To create a new React application, run the following command:
```
create-react-app my-app
```
Replace `my-app` with the name you want for your application. This command will create a new directory with the same name as your application and set up a basic React application inside it.
To start the development server and open the application in the browser, navigate to the application directory and run:
```
cd my-app
npm start
```
This will open the application in your default web browser.
## Exercise
Create a new React application using `create-react-app` and navigate to its directory. Run the development server and open the application in your browser.
# React components and their lifecycle
In React, components are the building blocks of your application. They are reusable pieces of UI that can have their own state and behavior.
There are two types of components in React: class components and functional components. Class components are defined using ES6 classes, while functional components are defined using regular JavaScript functions.
In this section, we'll focus on class components and their lifecycle.
A class component has a `render` method, which is responsible for rendering the component's UI. The `render` method returns a description of what the UI should look like, which React then uses to update the actual DOM.
In addition to the `render` method, class components have several lifecycle methods that allow you to hook into different stages of the component's existence. Some of the most common lifecycle methods are:
- `constructor`: Called when the component is created. It's used to set the initial state and properties of the component.
- `componentDidMount`: Called after the component is mounted to the DOM. It's a good place to make network requests or set up timers.
- `componentDidUpdate`: Called after the component is updated. It's a good place to make network requests if the component's data has changed.
- `componentWillUnmount`: Called before the component is removed from the DOM. It's a good place to clean up any resources that were created during the component's life.
## Exercise
Create a new class component that displays a clock showing the current time. Use the `componentDidMount` lifecycle method to set an interval that updates the component's state every second.
# Using React hooks for state management and side effects
React Hooks are a new feature introduced in React 16.8 that allow you to use state and other React features in functional components.
The most commonly used hook is `useState`, which allows you to manage state in a functional component. Here's an example of how to use `useState`:
```javascript
import React, { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
return (
<div>
<p>Count: {count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
</div>
);
}
```
In this example, `useState` is used to create a state variable called `count` and a function called `setCount` to update the state. The initial value of `count` is `0`.
In addition to `useState`, there are other hooks available for managing side effects, such as `useEffect`, which allows you to perform side effects in a functional component.
## Exercise
Refactor the clock component from the previous section to use React hooks for state management and side effects.
# Passing data between components using props
In React, components often need to pass data to each other. This is done using props, which are short for "properties."
Props are a way to pass data from a parent component to a child component. They are read-only and cannot be modified by the child component.
Here's an example of how to pass data using props:
```javascript
import React from 'react';
function Greeting({ name }) {
return <p>Hello, {name}!</p>;
}
function App() {
return <Greeting name="John" />;
}
```
In this example, the `name` prop is passed from the `App` component to the `Greeting` component. The `Greeting` component then displays the greeting using the `name` prop.
## Exercise
Create a new component called `ExpenseItem` that displays an expense item. Pass the expense item's name and amount as props to the component.
# Creating a project-based application with React
To get started, create a new React application using `create-react-app` and navigate to its directory.
Next, create a new component called `ExpenseItem` that displays an expense item. This component should accept two props: `name` and `amount`.
Next, create a new component called `ExpenseList` that displays a list of expense items. This component should accept an array of expense items as a prop and render an `ExpenseItem` component for each item in the array.
Finally, create a new component called `App` that contains the state for the expense items and renders an `ExpenseList` component.
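Before you start, here is one minimal sketch of how these three components could fit together. The component and prop names follow the description above, but the sample data and markup are assumptions; your own implementation may differ.

```javascript
import React, { useState } from 'react';

// Displays a single expense item; receives its data via props.
function ExpenseItem({ name, amount }) {
  return (
    <li>
      {name}: ${amount}
    </li>
  );
}

// Renders an ExpenseItem for each entry in the `items` array prop.
function ExpenseList({ items }) {
  return (
    <ul>
      {items.map((item) => (
        <ExpenseItem key={item.id} name={item.name} amount={item.amount} />
      ))}
    </ul>
  );
}

// Holds the expense items in state and renders the list.
function App() {
  const [expenses] = useState([
    { id: 1, name: 'Groceries', amount: 42 },
    { id: 2, name: 'Coffee', amount: 3 },
  ]);

  return <ExpenseList items={expenses} />;
}

export default App;
```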
## Exercise
Refactor the project-based application from the previous section to use React hooks for state management and side effects.
# Implementing routing for different pages
In a real-world application, you'll often want to display different pages based on the URL. React Router is a popular library for handling routing in React applications.
To get started with React Router, install it using the following command:
```
npm install react-router-dom
```
Next, import the necessary components from `react-router-dom` and set up routing in your `App` component. For example:
```javascript
import { BrowserRouter as Router, Route, Switch } from 'react-router-dom';
function App() {
return (
<Router>
<Switch>
<Route exact path="/" component={Home} />
<Route path="/about" component={About} />
</Switch>
</Router>
);
}
```
In this example, the `BrowserRouter` component is used to create a routing context. The `Switch` component is used to render the first matching `Route` component. The `Route` components define the routes for the application.
## Exercise
Add routing to your project-based application to display different pages for adding and viewing expense items.
# State management with React
State management is an important aspect of any application, especially in React. There are several ways to manage state in a React application, including using React's built-in state management, using Redux, or using React's new Context API.
In this section, we'll explore using React's built-in state management and the Context API for state management.
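As a quick illustration of the built-in approach, the sketch below lifts the expense state into a parent component with `useState` and passes both the data and an update callback down as props. The component and handler names are assumptions made for illustration.

```javascript
import React, { useState } from 'react';

// Child component: receives a callback via props.
function AddExpense({ onAdd }) {
  return (
    <button onClick={() => onAdd({ id: Date.now(), name: 'New item', amount: 0 })}>
      Add expense
    </button>
  );
}

// Parent component: owns the state and passes it down.
function ExpenseTracker() {
  const [expenses, setExpenses] = useState([]);

  // Append a new expense without mutating the previous state.
  const addExpense = (expense) => {
    setExpenses((prev) => [...prev, expense]);
  };

  return (
    <div>
      <AddExpense onAdd={addExpense} />
      <p>Total items: {expenses.length}</p>
    </div>
  );
}

export default ExpenseTracker;
```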
## Exercise
Refactor your project-based application to use React's built-in state management for managing the expense items.
# Using Context API for global state management
The Context API is a new feature in React that allows you to manage global state in your application. It's particularly useful when you have state that needs to be accessed by multiple components.
To use the Context API, you'll need to create a context using `React.createContext` and a provider component that wraps your application and provides the context to its children.
Here's an example of how to use the Context API:
```javascript
import React, { createContext, useContext, useState } from 'react';
const ExpenseContext = createContext();
function ExpenseProvider({ children }) {
const [expenses, setExpenses] = useState([]);
return (
<ExpenseContext.Provider value={{ expenses, setExpenses }}>
{children}
</ExpenseContext.Provider>
);
}
function useExpenses() {
const context = useContext(ExpenseContext);
if (!context) {
throw new Error('useExpenses must be used within a ExpenseProvider');
}
return context;
}
function App() {
return (
<ExpenseProvider>
{/* Your application components */}
</ExpenseProvider>
);
}
```
In this example, `ExpenseContext` is created using `React.createContext`. The `ExpenseProvider` component wraps your application and provides the context to its children. The `useExpenses` custom hook is used to access the context from any component that needs it.
## Exercise
Refactor your project-based application to use the Context API for managing the expense items.
# Integrating with external APIs for data fetching
In a real-world application, you'll often need to fetch data from an external API. React provides the `useEffect` hook, which can be used to fetch data when a component mounts.
Here's an example of how to fetch data from an API using `useEffect`:
```javascript
import React, { useState, useEffect } from 'react';
function DataComponent() {
const [data, setData] = useState([]);
useEffect(() => {
async function fetchData() {
const response = await fetch('https://api.example.com/data');
const data = await response.json();
setData(data);
}
fetchData();
}, []);
return (
<div>
{/* Render the data */}
</div>
);
}
```
In this example, the `useEffect` hook is used to fetch data from an API when the component mounts. The `fetch` function is used to make the API request, and the `setData` function is used to update the component's state with the fetched data.
## Exercise
Integrate your project-based application with an external API to fetch expense data.
# Optimizing performance with React
Optimizing performance is an important aspect of any application, especially in React. React provides several tools and techniques to help you optimize your application's performance.
Here are some techniques you can use to optimize your React application:
- Use `React.memo` to prevent unnecessary re-renders of functional components.
- Use `shouldComponentUpdate` in class components to prevent unnecessary re-renders.
- Use `useCallback` and `useMemo` to memoize functions and values.
- Use `React.lazy` and `React.Suspense` to lazy-load components and code-split your application.
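For example, here is a minimal sketch combining the first two techniques: `React.memo` skips re-rendering a list item when its props are unchanged, and `useCallback` keeps the callback's identity stable between renders. The component names are assumptions.

```javascript
import React, { useState, useCallback, memo } from 'react';

// Memoized child: only re-renders when `name` or `onSelect` change.
const Item = memo(function Item({ name, onSelect }) {
  return <li onClick={() => onSelect(name)}>{name}</li>;
});

function ItemList() {
  const [selected, setSelected] = useState(null);

  // useCallback keeps the same function reference across renders,
  // so the memoized Item components are not re-rendered needlessly.
  const handleSelect = useCallback((name) => setSelected(name), []);

  return (
    <div>
      <p>Selected: {selected || 'none'}</p>
      <ul>
        {['Rent', 'Food', 'Travel'].map((name) => (
          <Item key={name} name={name} onSelect={handleSelect} />
        ))}
      </ul>
    </div>
  );
}

export default ItemList;
```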
## Exercise
Optimize your project-based application by using one or more of the techniques mentioned above.
# Testing React components with Jest and Enzyme
Testing is an important aspect of any application, especially in React. React provides a built-in testing library called Jest, which can be used to test React components.
In addition to Jest, Enzyme is a popular testing library for React that provides a set of utilities for testing React components.
To get started with Jest and Enzyme, install them using the following commands:
```
npm install --save-dev jest enzyme enzyme-adapter-react-16
```
Next, create a Jest configuration file called `jest.config.js` in your project's root directory and configure it with the following settings:
```javascript
module.exports = {
testEnvironment: 'jsdom',
setupFilesAfterEnv: ['<rootDir>/src/setupTests.js'],
};
```
Next, create a `src/setupTests.js` file and configure Enzyme with the following settings:
```javascript
import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';
configure({ adapter: new Adapter() });
```
Finally, create a test file for your `ExpenseItem` component and write tests using Jest and Enzyme.
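For instance, a first test for the `ExpenseItem` component might look like the sketch below; the file name and the expected text are assumptions based on the component described earlier.

```javascript
// src/ExpenseItem.test.js
import React from 'react';
import { shallow } from 'enzyme';
import ExpenseItem from './ExpenseItem';

describe('ExpenseItem', () => {
  it('renders the expense name and amount', () => {
    const wrapper = shallow(<ExpenseItem name="Groceries" amount={42} />);

    // The rendered output should contain both props.
    expect(wrapper.text()).toContain('Groceries');
    expect(wrapper.text()).toContain('42');
  });
});
```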
## Exercise
Write tests for your project-based application using Jest and Enzyme.
May 2017, 37(5): 2565-2588. doi: 10.3934/dcds.2017110
Parabolic arcs of the multicorns: Real-analyticity of Hausdorff dimension, and singularities of $\mathrm{Per}_n(1)$ curves
Sabyasachi Mukherjee 1,2,
Jacobs University Bremen, Campus Ring 1, Bremen 28759, Germany
Institute for Mathematical Sciences, Stony Brook University, Stony Brook, 11794, NY, USA
Received: May 2016. Revised: January 2017. Published: February 2017.
Fund Project: The author was supported by Deutsche Forschungsgemeinschaft DFG
The boundaries of the hyperbolic components of odd period of the multicorns contain real-analytic arcs consisting of quasi-conformally conjugate parabolic parameters. One of the main results of this paper asserts that the Hausdorff dimension of the Julia sets is a real-analytic function of the parameter along these parabolic arcs. This is achieved by constructing a complex one-dimensional quasiconformal deformation space of the parabolic arcs which are contained in the dynamically defined algebraic curves $ \mathrm{Per}_n(1)$ of a suitably complexified family of polynomials. As another application of this deformation step, we show that the dynamically natural parametrization of the parabolic arcs has a non-vanishing derivative at all but (possibly) finitely many points.
We also look at the algebraic sets $ \mathrm{Per}_n(1)$ in various families of polynomials, the nature of their singularities, and the 'dynamical' behavior of these singular parameters.
Keywords: Hausdorff dimension, parabolic curves, antiholomorphic dynamics, quasiconformal deformation, multicorns.
Mathematics Subject Classification: Primary:37F10, 37F30, 37F35, 37F45.
Citation: Sabyasachi Mukherjee. Parabolic arcs of the multicorns: Real-analyticity of Hausdorff dimension, and singularities of $\mathrm{Per}_n(1)$ curves. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5) : 2565-2588. doi: 10.3934/dcds.2017110
Figure 1. $\mathcal{M}_2^*$, also known as the tricorn and the parabolic arcs on the boundary of the hyperbolic component of period 1 (in blue)
Figure 2. Pictorial representation of the image of $\left[0,1\right]$ under the quasiconformal map $L_w$; for $w=1+i/8$ (top) and $w=1$ (bottom). The Fatou coordinates of $c_0$ and $f_{c_0}^{\circ k} (c_0)$ are $1/4$ and $3/4$ respectively. For $w=1+i/8$, $L_w(1/4)=1/8+i$ and $L_w(3/4)=7/8-i$, and for $w=1$, $L_w(1/4)=1/4+i$ and $L_w(3/4)=3/4-i$. Observe that $L_w$ commutes with $z\mapsto \overline{z}+1/2$ only when $w\in \mathbb{R}$
Figure 3. $\pi_2 \circ F : w \mapsto b(w)$ is injective in a neighborhood of $\widetilde{u}$ for all but possibly finitely many $\widetilde{u} \in \mathbb{R}$
Figure 4. The outer yellow curve indicates part of $\mathrm{Per}_1(1)\cap \lbrace a=\overline{b}\rbrace$, and the inner blue curve (along with the red point) indicates part of the deformation $\mathrm{Per}_1(r)\cap \lbrace a=\overline{b}\rbrace$ for some $r\in (1-\epsilon,1)$. The cusp point $c_0$ on the yellow curve is a critical point of $h_1$, i.e., a singular point of $\mathrm{Per}_1(1)$, and the red point is a critical point of $h_r$, i.e., a singular point of $\mathrm{Per}_1(r)$
Jacobi method
In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.
Not to be confused with Jacobi eigenvalue algorithm.
Description
Let $A\mathbf {x} =\mathbf {b} $ be a square system of n linear equations, where:
$A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\end{bmatrix}},\qquad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\qquad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{n}\end{bmatrix}}.$
When $A$ and $\mathbf {b} $ are known, and $\mathbf {x} $ is unknown, we can use the Jacobi method to approximate $\mathbf {x} $. The vector $\mathbf {x} ^{(0)}$ denotes our initial guess for $\mathbf {x} $ (often $\mathbf {x} _{i}^{(0)}=0$ for $i=1,2,...,n$). We denote $\mathbf {x} ^{(k)}$ as the k-th approximation or iteration of $\mathbf {x} $, and $\mathbf {x} ^{(k+1)}$ is the next (or k+1) iteration of $\mathbf {x} $.
Matrix-based formula
Then A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U:
$A=D+L+U\qquad {\text{where}}\qquad D={\begin{bmatrix}a_{11}&0&\cdots &0\\0&a_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &a_{nn}\end{bmatrix}}{\text{ and }}L+U={\begin{bmatrix}0&a_{12}&\cdots &a_{1n}\\a_{21}&0&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &0\end{bmatrix}}.$
The solution is then obtained iteratively via
$\mathbf {x} ^{(k+1)}=D^{-1}(\mathbf {b} -(L+U)\mathbf {x} ^{(k)}).$
Element-based formula
The element-based formula for each row $i$ is thus:
$x_{i}^{(k+1)}={\frac {1}{a_{ii}}}\left(b_{i}-\sum _{j\neq i}a_{ij}x_{j}^{(k)}\right),\quad i=1,2,\ldots ,n.$
The computation of $x_{i}^{(k+1)}$ requires each element in $\mathbf {x} ^{(k)}$ except itself. Unlike the Gauss–Seidel method, we can't overwrite $x_{i}^{(k)}$ with $x_{i}^{(k+1)}$, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size n.
Algorithm
Input: initial guess x(0) to the solution, (diagonally dominant) matrix A, right-hand side vector b, convergence criterion
Output: solution when convergence is reached
Comments: pseudocode based on the element-based formula above
k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + aij xj(k)
            end
        end
        xi(k+1) = (bi − σ) / aii
    end
    increment k
end
Convergence
The standard convergence condition (for any iterative method) is when the spectral radius of the iteration matrix is less than 1:
$\rho (D^{-1}(L+U))<1.$
A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of other terms:
$\left|a_{ii}\right|>\sum _{j\neq i}{\left|a_{ij}\right|}.$
The Jacobi method sometimes converges even if these conditions are not satisfied.
Note that the Jacobi method does not converge for every symmetric positive-definite matrix. For example,
$A={\begin{pmatrix}29&2&1\\2&6&1\\1&1&{\frac {1}{5}}\end{pmatrix}}\quad \Rightarrow \quad D^{-1}(L+U)={\begin{pmatrix}0&{\frac {2}{29}}&{\frac {1}{29}}\\{\frac {1}{3}}&0&{\frac {1}{6}}\\5&5&0\end{pmatrix}}\quad \Rightarrow \quad \rho (D^{-1}(L+U))\approx 1.0661\,.$
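A minimal numerical check of this example, assuming NumPy is available, is sketched below: it builds the iteration matrix $D^{-1}(L+U)$ for the matrix above and prints its spectral radius.

import numpy as np

# the symmetric positive-definite matrix from the example above
A = np.array([[29., 2., 1.],
              [2., 6., 1.],
              [1., 1., 0.2]])

D = np.diag(np.diag(A))          # diagonal part D
R = A - D                        # off-diagonal part L + U
iteration_matrix = np.linalg.inv(D) @ R

# spectral radius = largest eigenvalue in absolute value
rho = max(abs(np.linalg.eigvals(iteration_matrix)))
print(rho)  # approximately 1.0661 > 1, so the Jacobi iteration diverges here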
Examples
Example 1
A linear system of the form $Ax=b$ with initial estimate $x^{(0)}$ is given by
$A={\begin{bmatrix}2&1\\5&7\\\end{bmatrix}},\ b={\begin{bmatrix}11\\13\\\end{bmatrix}}\quad {\text{and}}\quad x^{(0)}={\begin{bmatrix}1\\1\\\end{bmatrix}}.$
We use the equation $x^{(k+1)}=D^{-1}(b-(L+U)x^{(k)})$, described above, to estimate $x$. First, we rewrite the equation in a more convenient form $D^{-1}(b-(L+U)x^{(k)})=Tx^{(k)}+C$, where $T=-D^{-1}(L+U)$ and $C=D^{-1}b$. From the known values
$D^{-1}={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}},\ L={\begin{bmatrix}0&0\\5&0\\\end{bmatrix}}\quad {\text{and}}\quad U={\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}.$
we determine $T=-D^{-1}(L+U)$ as
$T={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}}\left\{{\begin{bmatrix}0&0\\-5&0\\\end{bmatrix}}+{\begin{bmatrix}0&-1\\0&0\\\end{bmatrix}}\right\}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}.$
Further, $C$ is found as
$C={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}}{\begin{bmatrix}11\\13\\\end{bmatrix}}={\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}.$
With $T$ and $C$ calculated, we estimate $x$ as $x^{(1)}=Tx^{(0)}+C$:
$x^{(1)}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}{\begin{bmatrix}1\\1\\\end{bmatrix}}+{\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}={\begin{bmatrix}5.0\\8/7\\\end{bmatrix}}\approx {\begin{bmatrix}5\\1.143\\\end{bmatrix}}.$
The next iteration yields
$x^{(2)}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}{\begin{bmatrix}5.0\\8/7\\\end{bmatrix}}+{\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}={\begin{bmatrix}69/14\\-12/7\\\end{bmatrix}}\approx {\begin{bmatrix}4.929\\-1.714\\\end{bmatrix}}.$
This process is repeated until convergence (i.e., until $\|Ax^{(n)}-b\|$ is small). The solution after 25 iterations is
$x={\begin{bmatrix}7.111\\-3.222\end{bmatrix}}.$
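The matrix-form iteration $x^{(k+1)}=Tx^{(k)}+C$ of this example is easy to reproduce; the following minimal sketch (assuming NumPy; not part of the original example) runs 25 iterations from $x^{(0)}=(1,1)$.

import numpy as np

A = np.array([[2., 1.],
              [5., 7.]])
b = np.array([11., 13.])

D_inv = np.diag(1.0 / np.diag(A))         # D^{-1}
T = -D_inv @ (A - np.diag(np.diag(A)))    # T = -D^{-1}(L + U)
C = D_inv @ b                             # C = D^{-1} b

x = np.array([1., 1.])                    # initial estimate x^(0)
for _ in range(25):
    x = T @ x + C

print(x)  # approximately [ 7.111 -3.222 ]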
Example 2
Suppose we are given the following linear system:
${\begin{aligned}10x_{1}-x_{2}+2x_{3}&=6,\\-x_{1}+11x_{2}-x_{3}+3x_{4}&=25,\\2x_{1}-x_{2}+10x_{3}-x_{4}&=-11,\\3x_{2}-x_{3}+8x_{4}&=15.\end{aligned}}$
If we choose (0, 0, 0, 0) as the initial approximation, then the first approximate solution is given by
${\begin{aligned}x_{1}&=(6+0-(2*0))/10=0.6,\\x_{2}&=(25+0+0-(3*0))/11=25/11=2.2727,\\x_{3}&=(-11-(2*0)+0+0)/10=-1.1,\\x_{4}&=(15-(3*0)+0)/8=1.875.\end{aligned}}$
Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximated solutions after five iterations.
$x_{1}$ $x_{2}$ $x_{3}$ $x_{4}$
0.6 2.27272 -1.1 1.875
1.04727 1.7159 -0.80522 0.88522
0.93263 2.05330 -1.0493 1.13088
1.01519 1.95369 -0.9681 0.97384
0.98899 2.0114 -1.0102 1.02135
The exact solution of the system is (1, 2, −1, 1).
Python example
import numpy as np

ITERATION_LIMIT = 1000

# initialize the matrix
A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
# initialize the RHS vector
b = np.array([6., 25., -11., 15.])

# print the system
print("System:")
for i in range(A.shape[0]):
    row = [f"{A[i, j]}*x{j + 1}" for j in range(A.shape[1])]
    print(f'{" + ".join(row)} = {b[i]}')
print()

x = np.zeros_like(b)
for it_count in range(ITERATION_LIMIT):
    if it_count != 0:
        print(f"Iteration {it_count}: {x}")
    x_new = np.zeros_like(x)
    # element-based update: x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii
    for i in range(A.shape[0]):
        s1 = np.dot(A[i, :i], x[:i])
        s2 = np.dot(A[i, i + 1:], x[i + 1:])
        x_new[i] = (b[i] - s1 - s2) / A[i, i]
    if np.allclose(x, x_new, atol=1e-10, rtol=0.):
        break
    x = x_new

print("Solution: ")
print(x)
error = np.dot(A, x) - b
print("Error:")
print(error)
Weighted Jacobi method
The weighted Jacobi iteration uses a parameter $\omega $ to compute the iteration as
$\mathbf {x} ^{(k+1)}=\omega D^{-1}(\mathbf {b} -(L+U)\mathbf {x} ^{(k)})+\left(1-\omega \right)\mathbf {x} ^{(k)}$
with $\omega =2/3$ being the usual choice.[1] From the relation $L+U=A-D$, this may also be expressed as
$\mathbf {x} ^{(k+1)}=\omega D^{-1}\mathbf {b} +\left(I-\omega D^{-1}A\right)\mathbf {x} ^{(k)}$.
Convergence in the symmetric positive definite case
In case that the system matrix $A$ is of symmetric positive-definite type one can show convergence.
Let $C=C_{\omega }=I-\omega D^{-1}A$ be the iteration matrix. Then, convergence is guaranteed for
$\rho (C_{\omega })<1\quad \Longleftrightarrow \quad 0<\omega <{\frac {2}{\lambda _{\text{max}}(D^{-1}A)}}\,,$
where $\lambda _{\text{max}}$ is the maximal eigenvalue.
The spectral radius can be minimized for a particular choice of $\omega =\omega _{\text{opt}}$ as follows
$\min _{\omega }\rho (C_{\omega })=\rho (C_{\omega _{\text{opt}}})=1-{\frac {2}{\kappa (D^{-1}A)+1}}\quad {\text{for}}\quad \omega _{\text{opt}}:={\frac {2}{\lambda _{\text{min}}(D^{-1}A)+\lambda _{\text{max}}(D^{-1}A)}}\,,$
where $\kappa $ is the matrix condition number.
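As an illustration (a minimal sketch, assuming NumPy; the matrix and right-hand side are taken from Example 2 above), the weighted iteration with the usual choice $\omega =2/3$ can be coded directly from the formula.

import numpy as np

A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
b = np.array([6., 25., -11., 15.])

omega = 2.0 / 3.0                      # the usual choice of the weight
D = np.diag(np.diag(A))
D_inv = np.diag(1.0 / np.diag(A))

x = np.zeros_like(b)
for _ in range(200):
    # x^(k+1) = omega * D^{-1} (b - (L+U) x^(k)) + (1 - omega) * x^(k)
    x_new = omega * (D_inv @ (b - (A - D) @ x)) + (1.0 - omega) * x
    if np.allclose(x, x_new, atol=1e-10, rtol=0.):
        break
    x = x_new

print(x)  # approximately [ 1.  2. -1.  1.]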
See also
• Gauss–Seidel method
• Successive over-relaxation
• Iterative method § Linear systems
• Gaussian Belief Propagation
• Matrix splitting
References
1. Saad, Yousef (2003). Iterative Methods for Sparse Linear Systems (2nd ed.). SIAM. p. 414. ISBN 0898715342.
External links
• This article incorporates text from the article Jacobi_method on CFD-Wiki that is under the GFDL license.
• Black, Noel; Moore, Shirley & Weisstein, Eric W. "Jacobi method". MathWorld.
• Jacobi Method from www.math-linux.com
Challenging probability problem (AMC 12B Problem 18) - Are the AoPS solutions incomplete/wrong?
A frog makes $3$ jumps, each exactly $1$ meter long. The directions of the jumps are chosen independently at random. What is the probability that the frog's final position is no more than $1$ meter from its starting position?
This problem comes from the AMC $12$ in the year $2010$. The contest is an invitational test in the US for secondary school to qualify for the Olympiad. It involves $25$ questions in $75$ minutes and problems can be solved without calculus.
I didn't get very far in my attempt, so I ultimately searched and found contributed solutions on the Art of Problem Solving.
I don't understand "solution $1$" and I am pretty sure "solution $2$" is incorrect.
The AoPS solution states "it is relatively easy" to show exactly $1$ of these has magnitude $1$ or less. If so, then out of $4$ possible options, there would be $1$ with magnitude $1$ or less, so the probability would be $1/4$ (the correct answer is indeed $1/4$, but this method does not satisfy me yet).
I did not understand this step, and someone asked the same question in a previous thread. It's not obvious to me, and I have no clue how you would go about showing this.
Is there an inequality that will help? I don't see how to simplify it. Also the official solution is much more complicated (see sketch below), which makes me think this solution is either elegant and overlooked or it is coincidentally the correct number but not the correct method.
The solution goes like this. Suppose the first jump is from the origin. So to be $1$ unit from the starting point, you need to be in the unit circle.
The next two jumps can be $2$ units after the first jump, equally likely to be in any angle. So the sample space of ending points is a circle of radius $2$ centered at the point of the first jump. This circle is also tangent to the unit circle.
Thus the sample space has an area of $4\pi$, of which the area of the unit circle is $\pi$. Hence the probability is $1/4$.
I am pretty sure this method is incorrect because I simulated $2$ jumps numerically. You do get a disk of radius $2$, but not all points in it are equally likely: the endpoints cluster toward the center of the disk and near its boundary circle.
Plus, if this method were valid, it would seem that $3$ jumps should give a circle of radius $3$, which would imply a totally different answer of $1/9$.
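(For reference, here is a quick Monte Carlo check — my own sketch in Python, not from AoPS, with an arbitrary seed and $10^6$ trials — which agrees with $1/4$ for three jumps.)

import numpy as np

rng = np.random.default_rng(0)   # fixed seed, chosen arbitrarily
trials = 1_000_000

# three independent unit-length jumps in uniformly random directions
angles = rng.uniform(0.0, 2.0 * np.pi, size=(trials, 3))
x = np.cos(angles).sum(axis=1)
y = np.sin(angles).sum(axis=1)

# fraction of trials ending within 1 meter of the start
print(np.mean(x**2 + y**2 <= 1.0))  # roughly 0.25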
The idea is to set coordinates so the first jump is $(0, 0)$ and the second jump is $(1, 0)$. Let the starting point be $(\cos \alpha, \sin \alpha)$ and then the location after the third jump is $(1 + \cos \beta, \sin \beta)$.
It is not too hard to work out the condition for the third point to be within $1$ unit of the first point. Ignoring the measure $0$ cases of $\alpha = 0$ and $\alpha = \pi$, we need $\alpha \leq \beta \leq \pi$. We can limit to $0 \leq \alpha \leq \pi$ since the other half works out the same by symmetry. And we have $0 \leq \beta \leq 2\pi$.
Considering a rectangle $(\alpha, \beta)$ where all angles are equally likely, the sample space is the rectangle of area $2\pi$. The event to be within 1 is a triangle with area $\pi/2$, so the desired probability is $1/4$.
Are the AoPS solutions incomplete?
I would love it if their "solution $1$" were correct, as it's much easier to compute and would be more reasonable for an average time allotment of $3$ minutes/problem.
I run the YouTube channel MindYourDecisions, and am considering this problem. If I make a video I'll credit anyone who offers helpful answers.
I searched the problem on AoPS, and found this thread from 2016.
The solution by Zimbalono there (post #13) is worth looking at as well. The last figure probably calls for some more explanation, but it's pretty good overall.
WLOG, let $a=(1,0)$. Then, since $\|b\|=\|c\|=1$, the vectors $b+c$ and $b-c$ are orthogonal. This means the four points $a+(b+c)$, $a+(b-c)$, $a-(b+c)$, and $a-(b-c)$ form a rhombus centered at $(1,0)$. This rhombus has side length $2=2\|b\|=2\|c\|$.
In generic position (with probability $1$), one of each of the pairs $\pm(b+c)$ and $\pm(b-c)$ point inward from $(1,0)$ with negative $x$ coordinate, and one each point outward. We claim that (aside from a probability-0 case that puts both on the circle) exactly one of the two inward vertices of the rhombus lies inside the circle.
Now we have two right triangles with the same hypotenuse length and the same right angle. Going from the one with vertices on the circle to the one that shares vertices with the rhombus, we must lengthen one leg and shorten the other leg. The lengthened leg pushes its endpoint outside the circle, while the shortened leg pulls its endpoint inside the circle. We have one vertex of the rhombus inside the circle, and we're done.
My figures were done in Asymptote - code available on request, if you want to tinker with things.
I have just rearranged a given solution (the official solution).
\begin{document}
\title{Mixed Morrey spaces}
\author{Toru Nogayama \footnote{[email protected], Tokyo Metropolitan University, Department of Mathematics Science, 1-1 Minami-Ohsawa, Hachioji, 192-0397, Tokyo, Japan}}
\date{}
\maketitle
\begin{abstract} We introduce mixed Morrey spaces and show some of their basic properties. These properties extend the classical ones. We investigate the boundedness in these spaces of the iterated maximal operator, the fractional integral operator and the singular integral operator. Furthermore, as a corollary, we obtain the boundedness of the iterated maximal operator in classical Morrey spaces. We also establish a version of the Fefferman--Stein vector-valued maximal inequality and some weighted inequalities for the iterated maximal operator in mixed Lebesgue spaces. We point out some errors in the proofs in the existing literature. \end{abstract}
{\bf Key words} Morrey spaces, Mixed norm, Hardy--Littlewood maximal operator, Fefferman--Stein vector-valued inequality, Fractional integral operator, Singular integral operator
{\bf 2010 Classification} 42B25, 42B35
\section{Introduction} In 1961, Benedek and Panzone \cite{B-P} introduced Lebesgue spaces with mixed norm. Bagby \cite{Bagby} showed the boundedness of the Hardy--Littlewood maximal operator for functions taking values in mixed Lebesgue spaces. Meanwhile, Morrey spaces were introduced to study elliptic differential equations \cite{Morrey}. Later, many authors investigated Morrey spaces; see for example \cite{Peetre}.
In this paper, we introduce the {\it mixed Morrey space} ${\mathcal M}_{\vec{q}}^p(\mathbb{R}^n)$. For particular choices of the parameters, this space coincides with the mixed Lebesgue space $L^{\vec{p}}(\mathbb{R}^n)$ and with the classical Morrey space $\mathcal{M}_q^p(\mathbb{R}^n)$. Our main target is the iterated maximal operator, which is obtained by repeatedly applying the one-dimensional maximal operator. We show the boundedness of the iterated maximal operator in mixed spaces. In particular, the boundedness in mixed Lebesgue spaces was shown by St\"{o}ckert in 1978 \cite{St}. However, the proof there is incorrect. We give a correct proof using the result of Bagby \cite{Bagby}. Moreover, we prove some inequalities of harmonic analysis for the mixed spaces.
Throughout the paper, we use the following notation. The letters $\vec{p}, \vec{q}, \vec{r}, \ldots$ will denote $n$-tuples of numbers in $[0, \infty]$ $(n \ge 1)$, $\vec{p}=(p_1, \ldots, p_n), \vec{q}=(q_1, \ldots, q_n), \vec{r}=(r_1, \ldots, r_n)$. By definition, an inequality such as $0<\vec{p}<\infty$ means that $0<p_i<\infty $ for each $i$. Furthermore, for $\vec{p}=(p_1, \ldots, p_n)$ and $r \in \mathbb{R}$, let \[ \frac{1}{\vec{p}}= \left(\frac{1}{p_1}, \ldots, \frac{1}{p_n}\right), \quad \frac{\vec{p}}{r}= \left(\frac{p_1}{r}, \ldots, \frac{p_n}{r}\right), \quad \vec{p'}=(p'_1, \ldots, p'_n), \] where $p'_j=\frac{p_j}{p_j-1}$ is the conjugate exponent of $p_j$. Let $Q=Q(x, r)$ be a cube having center $x$ and radius $r$, whose sides are parallel to the coordinate axes.
$|Q|$ denotes the volume of the cube $Q$ and $\ell(Q)$ denotes the side length of the cube $Q$. By $A\lesssim B$, we denote that $A \le CB$ for some constant $C>0$, and $A \sim B$ means that $A\lesssim B$ and $B\lesssim A$.
In \cite{B-P}, Benedek and Panzone introduced mixed Lebesgue spaces. We recall some properties and examples in Section \ref{sec preliminaries}.
\begin{definition}[{\it Mixed Lebesgue spaces}]{\rm \cite{B-P}} Let $\vec{p}=(p_1, \ldots, p_n) \in (0, \infty]^n$. Then define the {\it mixed Lebesgue norm}
$\|\cdot\|_{\vec{p}}$ or $\|\cdot\|_{(p_1,\ldots,p_n)}$ by \begin{align*}
\|f\|_{\vec{p}} &=
\|f\|_{(p_1,\ldots,p_n)}\\ &\equiv \left( \int_{{\mathbb R}} \cdots \left( \int_{{\mathbb R}} \left(
\int_{{\mathbb R}}|f(x_1,x_2,\ldots,x_n)|^{p_1}{\rm d}x_1 \right)^{\frac{p_2}{p_1}} {\rm d}x_2 \right)^{\frac{p_3}{p_2}} \cdots{\rm d}x_n \right)^{\frac{1}{p_n}}, \end{align*} where $f: \mathbb{R}^n \rightarrow \mathbb{C}$ is a measurable function. If $p_j=\infty$, then we have to make appropriate modifications. We define the {\it mixed Lebesgue space} $L^{\vec{p}}({\mathbb R}^n)=L^{(p_1,\ldots,p_n)}({\mathbb R}^n)$ to be the set of all $f \in L^0({\mathbb R}^n)$ with
$\|f\|_{\vec{p}}<\infty$, where $L^0({\mathbb R}^n)$ denotes the set of measurable functions on ${\mathbb R}^n$.
\end{definition}
For all measurable functions $f$, we define the Hardy--Littlewood maximal operator $M$ by \[ Mf(x) =\sup_{Q \in \mathcal{Q}}
\frac{\chi_Q(x)}{|Q|} \int_{Q}
|f(y)| {\rm d}y, \] where $\mathcal{Q}$ denotes the set of all cubes in ${\mathbb R}^n$. Let $1 \le k \le n$. Then, we define the maximal operator $M_k$ for $x_k$ as follows: \[ M_kf(x) \equiv
\sup_{x_k \in I}\frac{1}{|I|}\int_{I}|f(x_1, \ldots, y_k, \ldots, x_n)| {\rm d}y_k, \] where $I$ ranges over all intervals in $\mathbb{R}$ containing $x_k$. Furthermore, for all measurable functions $f$, define the iterated maximal operator ${\mathcal M}_t$ by \[ {\mathcal M}_tf(x) \equiv \left(
M_n \cdots M_1 \left[|f|^t\right] (x) \right)^{\frac{1}{t}} \] for every $t>0$ and $x \in {\mathbb R}^n$.
We investigate the boundedness of the iterated maximal operator in mixed Lebesgue spaces.
\begin{theorem}\label{thm 171206-1} {\rm (\cite{St})} Let $0<\vec{p}<\infty$. If $0< t<\min(p_1, \ldots, p_n)$, then \begin{equation} \label{eq 171028-2}
\|{\mathcal M}_tf\|_{\vec{p}} \lesssim\|f\|_{\vec{p}} \end{equation} for $f \in L^{\vec{p}}(\mathbb{R}^n)$. \end{theorem} Note that this result is true but the proof is not correct in \cite{St}. We give a new proof and a counterexample of the estimate used in \cite{St} in Section \ref{sec iterated}.
Next, we define Morrey spaces. Let $1 \le q \le p<\infty$. Define the {\it Morrey norm}
$\|\cdot\|_{{\mathcal M}^p_q(\mathbb{R}^n)}$ by \begin{equation*}
\| f \|_{{\mathcal M}^p_q(\mathbb{R}^n)} \equiv \sup\left\{
|Q|^{\frac{1}{p}-\frac{1}{q}}
\left(\int_Q|f(x)|^q\,{\rm d} x\right)^{\frac1q} \,:\,\mbox{ $Q$ is a cube in ${\mathbb R}^n$}\right\} \end{equation*} for a measurable function $f$. The {\it Morrey space} ${\mathcal M}^p_q({\mathbb R}^n)$ is the set of all measurable functions $f$ for which
$\| f \|_{{\mathcal M}^p_q(\mathbb{R}^n)}$ is finite.
Based on the above definition, we define mixed Morrey spaces, whose properties and examples will be investigated in Section \ref{sec mixed Morrey spaces}. \begin{definition}[{\it Mixed Morrey spaces}] Let $\vec{q}=(q_1, \ldots, q_n) \in (0, \infty]^n$ and $p\in (0, \infty]$ satisfy \[ \sum_{j=1}^n\frac{1}{q_j} \ge \frac{n}{p}. \] Then define the {\it mixed Morrey norm}
$\|\cdot\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}$ by \[
\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \equiv \sup\left\{
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) }
\|f\chi_Q\|_{\vec{q}} \,:\,\mbox{ $Q$ is a cube in ${\mathbb R}^n$}\right\} \] for $f \in L^0(\mathbb{R}^n)$. We define the {\it mixed Morrey space} ${\mathcal{M}^p_{\vec{q}}}(\mathbb{R}^n)$ to be the set of all $f \in L^0(\mathbb{R}^n)$
with $\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} <\infty$. \end{definition}
The iterated maximal operator in mixed Morrey spaces is bounded. In fact, the following holds:
\begin{theorem} \label{thm 180121-1} Let $0<\vec{q}\le\infty$ and $0<p<\infty$ satisfy \[ \frac{n}{p} \le\sum_{j=1}^n\frac{1}{q_j}, \quad \frac{n-1}{n}p<\max(q_1,\ldots,q_n). \]
If $0<t<\min(q_1, \ldots, q_n, p)$, then \begin{equation*}
\|{\mathcal M}_tf\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
\lesssim \|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \end{equation*} for all $f \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$. \end{theorem}
As a corollary, we obtain this boundedness of $\mathcal{M}_t$ in classical Morrey spaces.
\begin{corollary} Let \[ 0<\frac{n-1}{n}p<q\le p<\infty. \]
If $0<t<q$, then \[
\|{\mathcal M}_tf\|_{\mathcal{M}^p_q(\mathbb{R}^n)}
\lesssim \|f\|_{\mathcal{M}^p_q(\mathbb{R}^n)} \] for all $f \in \mathcal{M}^p_q(\mathbb{R}^n)$. \end{corollary}
Note that Chiarenza and Frasca showed the boundedness of the Hardy--Littlewood maximal operator in classical Morrey spaces \cite{C-F}. This corollary extends their result. Furthermore, the following theorem extends Theorem \ref{thm 171206-1}. The classical case was proved by Fefferman and Stein in 1971 \cite{F-S}.
\begin{theorem}[Dual inequality of Stein type for $L^{\vec{p}}$]\label{thm 171115-1} Let $f$ be a measurable function on $\mathbb{R}^n$ and $w_j (j=1,\ldots,n)$ be a non-negative measurable function on $\mathbb{R}$.
Then, for $1\le\vec{p}<\infty$, if $0< t<\min(p_1, \ldots, p_n)$ and $w_j^t\in A_{p_j}$, \begin{eqnarray*}
\left\| {\mathcal M}_t f\cdot \bigotimes_{j=1}^n(w_j)^{\frac{1}{p_j}}
\right\|_{\vec{p}} \lesssim
\left\| f \cdot \bigotimes_{j=1}^n\left(M_j w_j\right)^{\frac{1}{p_j}}
\right\|_{\vec{p}}, \end{eqnarray*} where $\displaystyle \left(\bigotimes_{j=1}^nw_j\right)(x) =\prod_{j=1}^nw_j(x_j)$. \end{theorem}
We can also extend the Fefferman--Stein vector-valued maximal inequality to mixed spaces.
\begin{theorem} \label{thm 171123-2} Let $0<\vec{p}<\infty$, $0<u\le\infty$ and $0<t<\min(p_1, \ldots, p_n, u)$. Then, for every sequence $\{f_j\}_{j=1}^\infty \subset L^0(\mathbb{R}^n)$, \begin{equation*}
\left\|\left( \sum_{j=1}^{\infty} [\mathcal{M}_tf_j]{}^u
\right)^{\frac{1}{u}}\right\|_{\vec{p}} \lesssim
\left\|\left( \sum_{j=1}^{\infty}
|f_j|^u
\right)^{\frac{1}{u}}\right\|_{\vec{p}}. \end{equation*}
\end{theorem}
\begin{theorem} \label{thm 171219-3} Let $1<\vec{q}<\infty$, $1<u\le\infty$, and $1<p\le\infty$ satisfy $\frac{n}{p} \le\sum_{j=1}^n\frac{1}{q_j}$. Then, for every sequence $\{f_j\}_{j=1}^{\infty} \subset L^0(\mathbb{R}^n)$, \begin{equation*}
\left\|\left( \sum_{j=1}^{\infty} [Mf_j]{}^u
\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \lesssim
\left\|\left( \sum_{j=1}^{\infty}
|f_j|^u
\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{equation*} \end{theorem}
\begin{theorem} Let $0<\vec{q}\le\infty$ and $0<p<\infty$ satisfy \[ \frac{n}{p} \le\sum_{j=1}^n\frac{1}{q_j}, \quad \frac{n-1}{n}p<\max(q_1,\ldots,q_n). \]
If $0<t<\min(q_1, \ldots, q_n, u)$, then \begin{equation*}
\left\|\left(\sum_{j=1}^\infty [{\mathcal M}_tf_j]^u\right)^\frac1u
\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
\lesssim \left\|\left(\sum_{k=1}^\infty|f_j|^u\right)^\frac1u\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \end{equation*} for $\{f_j\}_{j=1}^\infty \subset \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$. \end{theorem}
\begin{corollary} Let \[ 0<\frac{n-1}{n}p<q\le p<\infty. \]
If $0<t<\min(q, u)$, then \[
\left\|\left(\sum_{j=1}^\infty [{\mathcal M}_tf_j]^u\right)^\frac1u
\right\|_{\mathcal{M}^p_q(\mathbb{R}^n)} \lesssim
\left\| \left(\sum_{j=1}^\infty
|f_j|^u\right)^\frac1u
\right\|_{\mathcal{M}^p_q(\mathbb{R}^n)} \] for $\{f_j\}_{j=1}^\infty \subset \mathcal{M}^p_q(\mathbb{R}^n)$.
\end{corollary}
Furthermore, we investigate the boundedness of the fractional integral operator $I_{\alpha}$. Its boundedness in classical Morrey spaces is proved by Adams \cite{Adams}.
Let $0<\alpha<n$. Define the fractional integral operator $I_{\alpha}$ of order $\alpha$ by \[ I_{\alpha}f(x) \equiv
\int_{\mathbb{R}^n}\frac{f(y)}{|x-y|^{n-\alpha}}{\rm d}y \] for $f\in L^1_{\rm loc}(\mathbb{R}^n)$ as long as the right-hand side makes sense.
\begin{theorem} \label{thm 171219-1} Let $0<\alpha<n, 1<\vec{q}, \vec{s}<\infty$ and $0<p, r<\infty$. Assume that $\frac np \le\sum_{j=1}^n\frac1{q_j}$, and $\frac nr \le\sum_{j=1}^n\frac1{s_j}$. Also, assume that \[ \frac{1}{r}=\frac{1}{p}-\frac{\alpha}{n}, \quad \frac{\vec{q}}{p}=\frac{\vec{s}}{r}. \] Then, for $f \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$, \[
\|I_{\alpha}f\|_{\mathcal{M}^r_{\vec{s}}(\mathbb{R}^n)} \lesssim \|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \] \end{theorem}
Finally, we show that singular integral operators are bounded in mixed Morrey spaces. Their boundedness in classical Morrey spaces is proved by Chiarenza and Frasca \cite{C-F}. Let $T$ be a singular integral operator with a kernel $k(x,y)$ which satisfies the following conditions: \begin{itemize} \item[(1)] There exists a constant $C>0$ such that
$|k(x,y)|\le\frac{C}{|x-y|^n}$.
\item[(2)] There exists $\epsilon>0$ and $C>0$ such that \[
|k(x,y)-k(z,y)|+|k(y,x)-k(y,z)|\le C\frac{|x-z|^\epsilon}{|x-y|^{n+\epsilon}}, \]
if $|x-y|\ge2|x-z|$ with $x \neq y$.
\item[(3)] If $f \in L_c^\infty(\mathbb{R}^n)$, the set of all compactly supported $L^\infty$-functions, then \[ Tf(x)=\int_{\mathbb{R}^n}k(x,y)f(y){\rm d}y \quad (x \notin {\rm supp}(f)). \]
\end{itemize}
Keeping in mind that $T$ extends to a bounded linear operator on $\mathcal{M}^p_{q}(\mathbb{R}^n)$, we prove the following theorem.
\begin{theorem} \label{thm 180514-1} Let $1<\vec{q}<\infty$ and $1<p<\infty$ satisfy \[ \frac{n}{p} \le\sum_{j=1}^n\frac{1}{q_j}. \] Then, \begin{equation*}
\|Tf\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)} \lesssim \|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)} \end{equation*} for $f \in \mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)$. \end{theorem}
We organize the remaining part of this paper as follows: In Sections \ref{sec preliminaries} and \ref{sec mixed Morrey spaces}, we investigate some properties and present examples of mixed Lebesgue spaces and mixed Morrey spaces, respectively. We prove the boundedness of the iterated maximal operator in mixed spaces in Section \ref{sec iterated}. In Section \ref{sec dual inequality}, we show the dual inequality of Stein type for mixed Lebesgue spaces. Section \ref{sec vector-valued} is devoted to the vector-valued extension of Section \ref{sec iterated}. Finally, we prove that the fractional integral operator and singular integral operator are bounded in mixed Morrey spaces in Section \ref{sec fractional}.
\section{Preliminaries} \label{sec preliminaries} \subsection{Mixed Lebesgue spaces} In this subsection, we recall the mixed Lebesgue space $L^{\vec{p}}({\mathbb R}^n)$ which is introduced by Benedek and Panzone in \cite{B-P}. This space has properties similar to classical Lebesgue space. First, $L^{\vec{p}}({\mathbb R}^n)$ is a Banach space for $1 \le \vec{p} \le\infty$. H\"older's inequality holds: Let $1<\vec{p}, \vec{q}<\infty$ and define $\vec{r}$ so that $\frac{1}{\vec{p}}+\frac{1}{\vec{q}}=\frac{1}{\vec{r}}$. If $f \in L^{\vec{p}}({\mathbb R}^n), g \in L^{\vec{q}}({\mathbb R}^n)$, then $fg \in L^{\vec{r}}({\mathbb R}^n)$, and
$\|fg\|_{\vec{r}} \le \|f\|_{\vec{p}}\|g\|_{\vec{q}}$. Furthermore, the monotone convergence theorem, Fatou's lemma and the Lebesgue convergence theorem also follow.
\begin{remark} \label{rem 171228-1} Let $\vec{p}\in(0,\infty]^n$ and let $f$ be a measurable function on ${\mathbb R}^n$. \begin{itemize} \item[(i)] If $p_i = p$ for each $i$, then \[
\|f\|_{\vec{p}} =
\|f\|_{(p_1,\ldots,p_n)} = \left(
\int_{\mathbb{R}^n}|f(x)|^p {\rm d}x \right)^{\frac{1}{p}}
=\|f\|_{p} \] and \[ L^{\vec{p}}({\mathbb R}^n)=L^p({\mathbb R}^n). \]
\item[(ii)]
For any $(x_2,\ldots,x_n) \in \mathbb{R}^{n-1}$, \[
\|f\|_{(p_1)}(x_2, \ldots,x_n) \equiv \left(
\int_{\mathbb{R}}|f(x_1, \ldots, x_n)|^{p_1} {\rm d}x_1 \right)^{\frac{1}{p_1}} \] is a measurable function defined on ${\mathbb R}^{n-1}$. Moreover, we define \[
\|f\|_{\vec{q}}= \|f\|_{(p_1,\ldots,p_j)} \equiv \|[\|f\|_{(p_1,\ldots,p_{j-1})}]\|_{(p_j)}, \]
where $\|f\|_{(p_1,\ldots,p_{j-1})}$ denotes $|f|$, if $j=1$
and $\vec{q}=(p_1, \ldots, p_j), j\leq n$. Note that $\|f\|_{\vec{q}}$ is a measurable function of $(x_{j+1},\ldots,x_n)$ for $j<n$.
\end{itemize} \end{remark}
Next, we consider the examples of $L^{\vec{p}}({\mathbb R}^n)$.
\begin{example} \label{ex 171109-1} Let $f_1 \ldots, f_n \in L^0(\mathbb{R}) \setminus \{0\}$. Then $f=\bigotimes_{j=1}^nf_j \in L^{\vec{p}}(\mathbb{R}^n)$ if and only if $f_j \in L^{p_j}({\mathbb R})$ for each $j=1, \ldots, n$. In fact, \begin{align*}
\|f\|_{\vec{p}} &= \left( \int_{{\mathbb R}} \cdots \left( \int_{{\mathbb R}} \left(
\int_{{\mathbb R}}\prod_{j=1}^n|f_j(x_j)|^{p_1}{\rm d}x_1 \right)^{\frac{p_2}{p_1}} {\rm d}x_2 \right)^{\frac{p_3}{p_2}} \cdots{\rm d}x_n \right)^{\frac{1}{p_n}}\\ &= \prod_{j=1}^n \left( \int_{\mathbb{R}}
|f_j(x_j)|^{p_j} {\rm d}x_j \right)^{\frac{1}{p_j}} = \prod_{j=1}^n
\|f_j\|_{p_j}. \end{align*}
\end{example}
\begin{example}\label{ex 171108-1} Let $Q$ be a cube. Then, for $0<\vec{p}\le\infty$, \[
\|\chi_Q\|_{\vec{p}}=|Q|^{\frac{1}{n}(\frac{1}{p_1}+\cdots+ \frac{1}{p_n})}. \]
In fact, we can write $Q=I_1 \times \cdots \times I_n$, where each $I_j$ is an interval of equal length. Hence, $\chi_Q(x)=\prod_{j=1}^n\chi_{I_j}(x_j)$. Using Example \ref{ex 171109-1}, we have \begin{align*}
\|\chi_Q\|_{\vec{p}} = \prod_{j=1}^n
\|\chi_{I_j}\|_{p_j} = \prod_{j=1}^n \left( \int_{I_j} {\rm d}x_j \right)^{\frac{1}{p_j}} = \prod_{j=1}^n
|I_j|^{\frac{1}{p_j}}. \end{align*} Notice that since $Q$ is a cube,
$|I_j|=\ell(Q)=|Q|^{\frac{1}{n}}.$ Thus, \[
\|\chi_Q\|_{\vec{p}}= \prod_{j=1}^n|I_j|^{\frac{1}{p_j}}
=|Q|^{\frac{1}{n}(\frac{1}{p_1}+\cdots+ \frac{1}{p_n})}. \] \end{example}
\begin{example}\label{ex 180112-1} Let $m=(m_1,\ldots, m_n) \in \mathbb{Z}^n$ and $\{a_m\}_{m \in \mathbb{Z}^n} \subset \mathbb{C}$. Define \[ f(x)=\sum_{m \in \mathbb{Z}^n}a_m\chi_{m+[0,1]^n}(x). \] Then, \[
\|f\|_{\vec{p}} = \left(\sum_{m_n \in \mathbb{Z}}\cdots\left(\sum_{m_1 \in \mathbb{Z}}
\left|a_{(m_1,\ldots, m_n)}\right|^{p_1} \right)^{\frac{p_2}{p_1}} \cdots\right)^{\frac{1}{p_n}}. \] In fact, \begin{align*}
\|f\|_{\vec{p}} &= \left(\int_{\mathbb{R}}\cdots\left(\int_{\mathbb{R}}
\left|\sum_{m \in \mathbb{Z}^n}a_m\chi_{m+[0,1]^n}(x)\right|^{p_1} {\rm d}x_1\right)^{\frac{p_2}{p_1}} \cdots{\rm d}x_n\right)^{\frac{1}{p_n}}\\ &= \left(\sum_{m_n \in \mathbb{Z}}\int_{m_n}^{m_n+1}\cdots\left(\sum_{m_1 \in \mathbb{Z}}\int_{m_1}^{m_1+1}
\left|\sum_{m \in \mathbb{Z}^n}a_m\chi_{m+[0,1]^n}(x)\right|^{p_1} {\rm d}x_1\right)^{\frac{p_2}{p_1}} \cdots{\rm d}x_n\right)^{\frac{1}{p_n}}\\ &= \left(\sum_{m_n \in \mathbb{Z}}\int_{m_n}^{m_n+1}\cdots\left(\sum_{m_1 \in \mathbb{Z}}\int_{m_1}^{m_1+1}
\left|a_{(m_1,\ldots, m_n)}\right|^{p_1} {\rm d}x_1\right)^{\frac{p_2}{p_1}} \cdots{\rm d}x_n\right)^{\frac{1}{p_n}}\\ &= \left(\sum_{m_n \in \mathbb{Z}}\cdots\left(\sum_{m_1 \in \mathbb{Z}}
\left|a_{(m_1,\ldots, m_n)}\right|^{p_1} \right)^{\frac{p_2}{p_1}} \cdots\right)^{\frac{1}{p_n}}.\\ \end{align*} \end{example}
We can consider the last term as a mixed sequence norm, which computes the $\ell^{p_i}$-norm with respect to each $m_i$ in turn. We denote it by $\|\{a_m\}_{m \in \mathbb{Z}^n}\|_{\ell^{(p_1, \ldots, p_n)}}$: \begin{align*}
\|\{a_m\}_{m \in \mathbb{Z}^n}\|_{\ell^{(p_1, \ldots, p_n)}}\nonumber
&=\|a_{(m_1, \ldots, m_n)}\|_{\ell^{(p_1, \ldots, p_n)}}\\ &\equiv \left(\sum_{m_n \in \mathbb{Z}}\cdots\left(\sum_{m_2 \in \mathbb{Z}}\left(\sum_{m_1 \in \mathbb{Z}}
\left|a_{(m_1,\ldots, m_n)}\right|^{p_1} \right)^{\frac{p_2}{p_1}} \right)^{\frac{p_3}{p_2}} \cdots\right)^{\frac{1}{p_n}}. \end{align*} Furthermore, this norm is also defined inductively: \[
\|a_{(m_1, \ldots, m_n)}\|_{\ell^{(p_1, \ldots, p_j)}}
\equiv\left\|\left[\|a_{(m_1, \ldots, m_n)}\|_{\ell^{(p_1, \ldots, p_{j-1})}}\right]\right\|_{\ell^{(p_j)}}, \]
where $\|a_{(m_1, \ldots, m_n)}\|_{\ell^{(p_1, \ldots, p_{j-1})}}=|a_{(m_1, \ldots, m_n)}|$ if $j=1$ and \[
\|a_{(m_1, \ldots, m_n)}\|_{\ell^{(p_j)}}\equiv\left(\sum_{m_j\in\mathbb{Z}}|a_{(m_1, \ldots, m_n)}|^{p_j}\right)^\frac{1}{p_j} \] for $j=1,\ldots, n$.
Next, we consider some properties of mixed Lebesgue spaces. Since these proofs are elementary, we omit the details.
\begin{proposition}\label{prop 180113-2} Let $0<\vec{p}\le\infty$. The mixed Lebesgue norm has the dilation relation: for all $f \in L^{\vec{p}}(\mathbb{R}^n)$ and $t>0$, \begin{equation} \label{eq 171113-1}
\|f(t\cdot)\|_{\vec{p}}=t^{-\sum_{j=1}^n\frac{1}{p_j}}\|f\|_{\vec{p}}. \end{equation} \end{proposition}
\begin{proposition}[Fatou's property for $L^{\vec{p}}(\mathbb{R}^n)$] \label{prop 171225-1} Let $0<\vec{p}\le\infty$. Let $\{f_j\}_{j=1}^\infty$ be a sequence of non-negative measurable functions on $\mathbb{R}^n$. Then, \[
\left\|\varliminf_{j \to \infty}f_j\right\|_{\vec{p}}\le \varliminf_{j \to \infty}\|f_j\|_{\vec{p}}. \] \end{proposition}
\subsection{$A_p$ weights and extrapolation} By a weight we mean a measurable function which satisfies $0<w(x)<\infty$ for almost all $x \in {\mathbb R}^n$.
\begin{definition} Let $1<p<\infty$ and $w$ be a weight. Then, $w$ is said to be an $A_p$ weight if \[ [w]_{A_p}
=\sup_{Q \in \mathcal{Q}} \left(\frac{1}{|Q|}\int_Qw(x){\rm d}x\right)
\left(\frac{1}{|Q|}\int_Qw(x)^{-\frac{1}{p-1}}{\rm d}x\right)^{p-1}<\infty. \] A weight $w$ is said to be an $A_1$ weight if \[ [w]_{A_1}
=\sup_{Q \in \mathcal{Q}} \left(\frac{1}{|Q|}\int_Qw(x){\rm d}x\right) {\rm ess}\sup_{x \in Q}w(x)^{-1}<\infty. \] \end{definition}
Let $0<p<\infty$, and let $w$ be a weight. One defines \[
\|f\|_{L^p(w)} \equiv \left(
\int_{{\mathbb R}^n}|f(x)|^p w(x){\rm d}x \right)^{\frac1p} \quad (f \in L^0(\mathbb{R}^n)). \] The space $L^p(w)$ is the set of all measurable functions $f$ for which the norm
$\|f\|_{L^p(w)}$ is finite. The space $L^p(w)$ is called the {\it weighted Lebesgue space} or the $L^p${\it-space with weight} $w$.
When we consider estimates of $A_p$-weights, we face the following type of estimate: \begin{equation}\label{eq:170318-1}
\|T h\|_{L^p(W)} \le N([W]_{A_{p}})
\|h\|_{L^p(W)} \quad (h \in L^p(W)), \end{equation} where $T$ is a mapping from $L^p(W)$ to $L^0({\mathbb R}^n)$ and $N$ is a positive increasing function defined on $[1,\infty)$. Extrapolation is a technique to extend the validity of (\ref{eq:170318-1}) to all $1<p<\infty$ based on the validity of (\ref{eq:170318-1}) for some $p_0$. We invoke the following extrapolation result from \cite{CGMP06}: \begin{proposition}\label{thm:161125-1} Let $N=N(\cdot):[1,\infty) \to [1,\infty)$ be an increasing function, and let $1<p_0,p<\infty$. Suppose that we have a family ${\mathcal F}$ of couples of measurable functions $(f,g)$ satisfying \begin{equation*}
\|f\|_{L^{p_0}(W)} \le N([W]_{A_{p_0}})
\|g\|_{L^{p_0}(W)} \end{equation*} for all $(f,g) \in {\mathcal F}$ and $W \in A_{p_0}$.
Then \begin{equation*}
\|f\|_{L^{p}(w)} \lesssim_{[w]_{A_p}}
\|g\|_{L^{p}(w)} \end{equation*} for all $(f,g) \in {\mathcal F}$ and $w \in A_{p}$. \end{proposition}
\section{Mixed Morrey spaces} \label{sec mixed Morrey spaces}
In this section, we discuss some properties and examples of mixed Morrey spaces. We recall the definition of mixed Morrey spaces. Let $0<\vec{q}\le \infty$, $0<p\le\infty$ satisfy \[ \sum_{j=1}^n\frac{1}{q_j} \ge \frac{n}{p}. \] Then define the {\it mixed Morrey norm}
$\|\cdot\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}$ by \[
\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \equiv \sup\left\{
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) }
\|f\chi_Q\|_{\vec{q}} \,:\,\mbox{ $Q$ is a cube in ${\mathbb R}^n$}\right\}. \] We define the {\it mixed Morrey space} ${\mathcal{M}^p_{\vec{q}}}(\mathbb{R}^n)$ to be the set of all $f \in L^0(\mathbb{R}^n)$
with $\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} <\infty$.
\begin{remark} Let $\vec{q}\in (0, \infty]^n$ and $f \in L^0(\mathbb{R}^n)$. \begin{itemize}
\item[(i)] If $q_i=q$ for each $i$, then by Remark \ref{rem 171228-1}, \[
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) }
\|f\chi_Q\|_{\vec{q}}
=|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q} \right) }
\|f\chi_Q\|_{\vec{q}}
=|Q|^{\frac{1}{p}-\frac{1}{q}}
\|f\chi_Q\|_{q}. \] Thus, taking the supremum over the all cubes in $\mathbb{R}^n$, we obtain \[
\|f\|_{{\mathcal{M}^p_{\vec{q}}}(\mathbb{R}^n)} =
\|f\|_{{\mathcal{M}^p_q}(\mathbb{R}^n)}, \] and \[ \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n) = \mathcal{M}^p_q(\mathbb{R}^n), \] with coincidence of norms. \item[(ii)] In particular, let \[ \hspace{10pt} p=\frac{n}{1/q_1+ \cdots +1/q_n}. \] Then, since \begin{align*}
\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} &= \sup\left\{
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) }
\|f\chi_Q\|_{\vec{q}} \,:\,\mbox{ $Q$ is a cube in ${\mathbb R}^n$}\right\}\\ &= \sup\left\{
\|f\chi_Q\|_{\vec{q}} \,:\,\mbox{ $Q$ is a cube in ${\mathbb R}^n$}\right\}
=\|f\|_{\vec{q}}, \end{align*} we obtain \[ L^{\vec{q}}(\mathbb{R}^n) ={\mathcal{M}^p_{\vec{q}}}(\mathbb{R}^n), \] with coincidence of norms.
\item[(iii)] The mixed Morrey space ${\mathcal{M}^p_{\vec{q}}}(\mathbb{R}^n)$ is also a Banach space for $1\le \vec{q}\le\infty$ and $0<p\le\infty$. Although the proof is easy, we give the proof for the sake of completeness. First, we will check the triangle inequality. For $f, g \in {\mathcal{M}^p_{\vec{q}}}({\mathbb R}^n)$, \begin{align*}
\|f+g\|_{{\mathcal{M}^p_{\vec{q}}}({\mathbb R}^n)} &=
\sup_{Q}|Q|^{\frac{1}{p}-\frac{1}{n} \left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\left\|(f+g)\chi_Q\right\|_{\vec{q}}\\ &\le
\sup_{Q}|Q|^{\frac{1}{p}-\frac{1}{n} \left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\left(\left\|f\chi_Q\right\|_{\vec{q}}+\left\|g\chi_Q\right\|_{\vec{q}}\right)\\ &\le
\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}+\|g\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{align*} The positivity and the homogeneity are both clear. Thus, $\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)$ is a normed space. It remains to check the completeness.
Let $\{f_j\}_{j=1}^{\infty} \subset \mathcal{M}^p_{\vec{q}}({\mathbb R}^n)$
and $\sum_{j=1}^{\infty} \|f_j\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} <\infty$. Then, \[ \left
\|\sum_{j=1}^{J} |f_j|
\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)}
\le \sum_{j=1}^{J} \|f_j\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)}
\le \sum_{j=1}^{\infty} \|f_j\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)}<\infty. \] By Proposition \ref{prop 171225-1}, \begin{align*} \left
\|\sum_{j=1}^{\infty} |f_j|\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} &=
\left\|\lim_{J \to \infty} \sum_{j=1}^{J} |f_j|\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} \le
\lim_{J \to \infty} \left\|\sum_{j=1}^{J} |f_j|\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} \\ &\le
\lim_{J \to \infty} \sum_{j=1}^{J} \left\|f_j\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} =
\sum_{j=1}^{\infty} \|f_j\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} <\infty. \end{align*} Thus, for almost everywhere $x \in {\mathbb R}^n$, \[
\sum_{j=1}^{\infty} |f_j(x)| <\infty. \] Therefore, there exists a function $g$ such that the limit \[ g(x) \equiv \lim_{J \to \infty} \sum_{j=1}^{J} f_j(x) \]
exists for almost everywhere $x \in {\mathbb R}^n$. If $\sum_{j=1}^{\infty} |f_j(x)|=\infty$, then it will be understood that $g(x)\equiv0$.
Again, by Proposition \ref{prop 171225-1}, for $m>1$ \begin{align*} \left
\|g-\sum_{j=1}^{m-1} f_j
\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)}
&=\left\| \sum_{j=m}^{\infty} f_j
\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} =
\left\|\lim_{J \to \infty} \sum_{j=m}^{J} f_j
\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)}\\ &\le
\lim_{J \to \infty} \left\|\sum_{j=m}^{J} f_j
\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} \le
\lim_{J \to \infty} \sum_{j=m}^{J} \left\|f_j
\right\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} \\ &=
\sum_{j=m}^{\infty} \|f_j\|_{\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)} . \end{align*} Letting $m \to \infty$, we obtain \[ g=\sum_{j=1}^{\infty} f_j \] in $\mathcal{M}^p_{\vec{q}}({\mathbb R}^n)$.
\end{itemize} \end{remark}
First, we give the properties of the {\it mixed Morrey spaces.}
\begin{proposition} \label{prop 171120-2} Let $\vec{q}\in (0, \infty]^n$ and $p\in (0, \infty]$. The mixed Morrey norm has the following dilation relation: for all $f \in L^0(\mathbb{R}^n)$ and $t>0$, \begin{equation} \label{eq 171118-2}
\|f(t\cdot)\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
=t^{-\frac{n}{p}}\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{equation} \end{proposition} \begin{proof} Although the proof is again elementary, we give it for the sake of completeness. To see (\ref{eq 171118-2}), using (\ref{eq 171113-1}), we obtain \begin{align*}
\|f(t\cdot)\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} &= \sup_{Q=Q(x,r)}
|Q(x,r)|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) }
\|f(t\cdot)\chi_{Q(x,r)}\|_{\vec{q}}\\ &= \sup_{Q=Q(x,r)}
|Q(x,r)|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) } t^{-\sum_{j=1}^n\frac{1}{q_j}}
\|f\chi_{Q(tx,tr)}\|_{\vec{q}}\\ &= \sup_{Q=Q(x,r)}
|Q(tx,tr)|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) } t^{-\frac{n}{p}}
\|f\chi_{Q(tx,tr)}\|_{\vec{q}}\\ &=
t^{-\frac{n}{p}}\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{align*} \end{proof} \begin{proposition}\label{prop 180521-1}
Let $0<\vec{q} \le \vec{r} \le \infty$, $0<p<\infty$, and assume $\frac{1}{r_1}+\cdots+\frac{1}{r_n} \ge \frac{n}{p}$. Then, \[ {\mathcal{M}^p_{\vec{r}}}(\mathbb{R}^n) \subset {\mathcal{M}^p_{\vec{q}}}(\mathbb{R}^n). \] \end{proposition}
\begin{proof} To get this inclusion, it suffices to show that for all $f \in L^0{(\mathbb{R}^n)}$ and all cubes $Q$, \begin{equation} \label{eq 171029-1}
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) }
\|f\chi_Q\|_{\vec{q}} \le
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{r_j} \right) }
\|f\chi_Q\|_{\vec{r}}. \end{equation} Once we can show (\ref{eq 171029-1}), taking the supremum over the all cubes in $\mathbb{R}^n$, we have \[
\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \le
\|f\|_{\mathcal{M}^p_{\vec{r}}(\mathbb{R}^n)}. \] This implies that \[ {\mathcal{M}^p_{\vec{r}}}(\mathbb{R}^n) \subset {\mathcal{M}^p_{\vec{q}}}(\mathbb{R}^n). \] So we shall show (\ref{eq 171029-1}). Note that we can write $Q=I_1 \times \cdots \times I_n$, where each $I_j$ is an interval of equal length. Using H\"{o}lder's inequality, we have \begin{align*}
&\|f\chi_Q\|_{\vec{q}}\\ &=\left( \int_{I_n} \cdots \left( \int_{I_2} \left(
\int_{I_1}|f(x)|^{q_1}{\rm d}x_1 \right)^{\frac{q_2}{q_1}} {\rm d}x_2 \right)^{\frac{q_3}{q_2}} \cdots{\rm d}x_n \right)^{\frac{1}{q_n}}\\ &\le \left( \int_{I_n} \cdots \left( \int_{I_2} \left[ \left(
\int_{I_1}|f(x)|^{{q_1}\frac{r_1}{q_1}}{\rm d}x_1 \right)^{\frac{q_1}{r_1}} \left( \int_{I_1}{\rm d}x_1 \right)^{1-\frac{q_1}{r_1}} \right]^{\frac{q_2}{q_1}} {\rm d}x_2 \right)^{\frac{q_3}{q_2}} \cdots{\rm d}x_n \right)^{\frac{1}{q_n}}\\ &= \left( \int_{I_n} \cdots \left( \int_{I_2}
\|f\chi_{I_1 \times {\mathbb R}^{n-1}}\|_{(r_1)}(x_2, \ldots, x_n)^{q_2}
|I_1|^{\frac{q_2}{q_1}-\frac{q_2}{r_1}} {\rm d}x_2 \right)^{\frac{q_3}{q_2}} \cdots{\rm d}x_n \right)^{\frac{1}{q_n}}.\\ \end{align*}
Since $|I_1|=\ell(Q)$, \begin{align*}
\|f\chi_Q\|_{\vec{q}} &\le \left( \int_{I_n} \cdots \left( \int_{I_2}
\|f\chi_{I_1 \times {\mathbb R}^{n-1}}\|_{(r_1)}(x_2, \ldots, x_n)^{q_2} \ell(Q) ^{\frac{q_2}{q_1}-\frac{q_2}{r_1}} {\rm d}x_2 \right)^{\frac{q_3}{q_2}} \cdots{\rm d}x_n \right)^{\frac{1}{q_n}}\\ &= \ell(Q) ^{\frac{1}{q_1}-\frac{1}{r_1}} \left( \int_{I_n} \cdots \left( \int_{I_2}
\|f\chi_{I_1 \times {\mathbb R}^{n-1}}\|_{(r_1)}(x_2, \ldots, x_n)^{q_2} {\rm d}x_2 \right)^{\frac{q_3}{q_2}} \cdots{\rm d}x_n \right)^{\frac{1}{q_n}}.\\ \end{align*} Iterating this procedure, we get \begin{align*}
\|f\chi_Q\|_{\vec{q}} &\le \ell(Q) ^{\left( \sum_{j=1}^{n-1}\frac{1}{q_j} \right) -\left( \sum_{j=1}^{n-1}\frac{1}{r_j} \right)} \left( \int_{I_n}
\|f\chi_{I_1 \times \cdots \times I_{n-1} \times {\mathbb R}}\|_{(r_1, \cdots, r_{n-1})}(x_n)^{q_n} {\rm d}x_n \right)^{\frac{1}{q_n}}\\ &\le \ell(Q) ^{\left( \sum_{j=1}^n\frac{1}{q_j} \right) -\left( \sum_{j=1}^n\frac{1}{r_j} \right)}
\|f\chi_{Q}\|_{\vec{r}}. \end{align*} Thus, we obtain \[
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right) }
\|f\chi_Q\|_{\vec{q}} \le
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{r_j} \right) }
\|f\chi_Q\|_{\vec{r}}. \]
\end{proof} Let us give some examples.
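First, note a concrete instance of Proposition \ref{prop 180521-1}: in $\mathbb{R}^2$, taking $p=4$, $\vec{r}=(3,4)$ and $\vec{q}=(2,3)$ gives
\[
\mathcal{M}^4_{(3,4)}(\mathbb{R}^2) \subset \mathcal{M}^4_{(2,3)}(\mathbb{R}^2),
\]
since $(2,3)\le(3,4)$ and $\frac{1}{3}+\frac{1}{4}=\frac{7}{12}\ge\frac{2}{4}$.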
\begin{example} \label{ex 171120-6}
In the classical case, it is known that $f(x)=|x|^{-\frac{n}{p}} \in \mathcal{M}^p_q(\mathbb{R}^n)$ if $q<p$. Let $\vec{q}=(q_1, \ldots, q_n)$. Using the above embedding, we have \[ \mathcal{M}^p_{\tilde{q}}(\mathbb{R}^n) = \mathcal{M}^p_{(\underbrace{\tilde{q}, \ldots, \tilde{q}}_{\mbox{$n$ times}})}(\mathbb{R}^n) \subset \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n), \] where $\tilde{q}=\max(q_1, \ldots, q_n).$
Thus, if $\max(q_1, \ldots, q_n)=\tilde{q}<p$, \[
f(x)=|x|^{-\frac{n}{p}} \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n). \] \end{example}
\begin{remark} In Example \ref{ex 171120-6}, the condition \begin{equation} \label{eq 171120-7} \max(q_1, \ldots, q_n)=\tilde{q}<p \end{equation} is sufficient but not necessary for
$f(x)=|x|^{-\frac{n}{p}} \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$. In fact, let $\vec{s}=(s_1, \underbrace{\infty, \ldots, \infty}_{\mbox{$(n-1)$ times}})$ with $s_1<\frac{p}{n}$. Since $f$ is radial and decreasing, the supremum defining the norm may be restricted to cubes centered at the origin, and by Proposition \ref{prop 171120-2}, \begin{align*}
\|f\|_{\mathcal{M}^p_{\vec{s}}(\mathbb{R}^n)} &= \sup_{Q=Q(x,r)}
|Q(x,r)|^{\frac{1}{p}-\frac{1}{n}\cdot\frac{1}{s_1}}
\|f\chi_{Q(x,r)}\|_{\vec{s}}\\ &= \sup_{r>0}
|Q(0,r)|^{\frac{1}{p}-\frac{1}{n}\cdot\frac{1}{s_1}}
\|f\chi_{Q(0,r)}\|_{\vec{s}}\\ &=
|Q(0,1)|^{\frac{1}{p}-\frac{1}{n}\cdot\frac{1}{s_1}}
\|f\chi_{Q(0,1)}\|_{\vec{s}}\\ &=
|Q(0,1)|^{\frac{1}{p}-\frac{1}{n}\cdot\frac{1}{s_1}}
\left\| \left( \int_{-1}^1
|x|^{-\frac{n}{p}s_1} {\rm d}x_1 \right)^{\frac{1}{s_1}}\chi_{[-1, 1]^{n-1}}
\right\|_{(\underbrace{\infty, \ldots, \infty}_{\mbox{$(n-1)$ times}})}. \end{align*} Since $s_1<\frac{p}{n}$, the integral in the last line is bounded by $\int_{-1}^1|x_1|^{-\frac{n}{p}s_1}\,{\rm d}x_1<\infty$. Hence
$\|f\|_{\mathcal{M}^p_{\vec{s}}(\mathbb{R}^n)}<\infty$ and $f \in \mathcal{M}^p_{\vec{s}}(\mathbb{R}^n)$. But $\vec{s}$ does not satisfy (\ref{eq 171120-7}). \end{remark}
\begin{example} \label{ex 171120-1} Let $0<\vec{q}\le\infty$ and assume that $q_j<p_j$ if $p_j<\infty$ and that $q_j\le \infty$ if $p_j=\infty$
$(j=1, \ldots, n)$. Let \begin{equation} \label{eq 171120-5} \sum_{j=1}^n\frac{1}{p_j}=\frac{n}{p}. \end{equation} Then, \[
f(x)=\prod_{j=1}^n |x_j|^{-\frac{1}{p_j}} \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n). \]
In fact, letting $Q=I_1 \times \cdots \times I_n$, we obtain \begin{align*}
\|f\chi_Q\|_{\vec{q}} &= \left( \int_{I_n} \cdots \left( \int_{I_2} \left( \int_{I_1}
\prod_{j=1}^n |x_j|^{-\frac{q_1}{p_j}} {\rm d}x_1 \right)^{\frac{q_2}{q_1}} {\rm d}x_2 \right)^{\frac{q_3}{q_2}} \cdots{\rm d}x_n \right)^{\frac{1}{q_n}}\\ &= \prod_{j=1}^n \left( \int_{I_j}
|x_j|^{-\frac{q_j}{p_j}} {\rm d}x_j \right)^{\frac{1}{q_j}}. \end{align*} To estimate this integral, letting $\ell(Q)=r$, we have \begin{align*} \int_{I_j}
|x_j|^{-\frac{q_j}{p_j}} {\rm d}x_j &\le \int_{-r/2}^{r/2}
|x_j|^{-\frac{q_j}{p_j}} {\rm d}x_j = 2\int_{0}^{r/2}
|x_j|^{-\frac{q_j}{p_j}} {\rm d}x_j \lesssim r^{1-\frac{q_j}{p_j}}. \end{align*} Thus, \begin{align*}
\|f\chi_Q\|_{\vec{q}} \lesssim \prod_{j=1}^n \left( r^{1-\frac{q_j}{p_j}} \right)^{\frac{1}{q_j}} = \prod_{j=1}^n r^{{\frac{1}{q_j}}-\frac{1}{p_j}} = r^{\sum_{j=1}^n{\frac{1}{q_j}}-\sum_{j=1}^n\frac{1}{p_j}}. \end{align*} Since $\sum_{j=1}^n\frac{1}{p_j}=\frac{n}{p}$, \[ r^{\frac{n}{p}-\sum_{j=1}^n{\frac{1}{q_j}}}
\|f\chi_Q\|_{\vec{q}} \lesssim 1. \] Taking the supremum over all cubes, we obtain \[
\|f\|_{ \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \lesssim 1, \] that is, \[
f(x)=\prod_{j=1}^n |x_j|^{-\frac{1}{p_j}} \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n). \] \end{example}
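For a concrete instance of Example \ref{ex 171120-1}, take $n=2$, $(p_1,p_2)=(3,6)$ and $\vec{q}=(2,4)$; then $\frac{1}{3}+\frac{1}{6}=\frac{1}{2}$ forces $p=4$ in (\ref{eq 171120-5}), and we obtain
\[
f(x_1,x_2)=|x_1|^{-\frac{1}{3}}|x_2|^{-\frac{1}{6}} \in \mathcal{M}^4_{(2,4)}(\mathbb{R}^2).
\]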
\begin{remark} In Example \ref{ex 171120-1}, condition (\ref{eq 171120-5})
is a necessary and sufficient condition for
$f(x)=\prod_{j=1}^n |x_j|^{-\frac{1}{p_j}}$ to belong to $\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$. In fact, suppose that $f \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$; note that $0<\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}<\infty$. Applying Proposition \ref{prop 171120-2}, we have \begin{equation} \label{eq 171120-3}
\|f(t\cdot)\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
=t^{-\frac{n}{p}}\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \quad (t>0). \end{equation} On the other hand, since $f(tx)=t^{-\sum_{j=1}^n\frac{1}{p_j}}f(x)$, \begin{equation} \label{eq 171120-4}
\|f(t\cdot)\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} = t^{-\sum_{j=1}^n\frac{1}{p_j}}
\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{equation} By (\ref{eq 171120-3}) and (\ref{eq 171120-4}), for all $t>0$, \[ t^{-\sum_{j=1}^n \frac{1}{p_j}}=t^{-\frac{n}{p}}. \] Thus, we obtain (\ref{eq 171120-5}). \end{remark}
\begin{example} Let $Q$ be a cube, and let $p$ and $\vec{q} \in (0,\infty]^n$ satisfy $\max(q_1, \ldots, q_n)\le p$. Then, \[
\|\chi_Q\|_{ \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
=|Q|^{\frac{1}{p}}. \] To check this, put $\sum_{j=1}^n\frac{1}{q_j}=\bar{q}$. First, using Example \ref{ex 171108-1}, we get \begin{align*}
\|\chi_Q\|_{ \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} = \sup_{R \in \mathcal{Q}}
|R|^{\frac{1}{p}-\frac{\bar{q}}{n}}
\|\chi_{Q}\chi_{R}\|_{\vec{q}} \ge
|Q|^{\frac{1}{p}-\frac{\bar{q}}{n}}
\|\chi_{Q}\|_{\vec{q}} =
|Q|^{\frac{1}{p}-\frac{\bar{q}}{n}}
|Q|^{\frac{\bar{q}}{n}} =
|Q|^{\frac{1}{p}}.\\ \end{align*} On the other hand, by Proposition \ref{prop 180521-1}, \[
\|\chi_Q\|_{ \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \le
\|\chi_Q\|_{ \mathcal{M}^p_{\max(q_1,\ldots,q_n)}(\mathbb{R}^n)}
=|Q|^\frac1p. \]
Combining the above two inequalities, we obtain \[
\|\chi_Q\|_{ \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
=|Q|^{\frac{1}{p}}. \] \end{example}
\section{Proof of Theorems \ref{thm 171206-1} and \ref{thm 180121-1}} \label{sec iterated}
In this section, we investigate the boundedness of the iterated maximal operator in $L^{\vec{p}}({\mathbb R}^n)$ and $\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$. First, to show Theorem \ref{thm 171206-1}, we need a lemma due to Bagby in 1975 \cite{Bagby}.
\begin{lemma}{\rm (\cite{Bagby})} \label{lem 180113-4} Let $1<q_i<\infty (i=1, \ldots,m)$ and $1<p<\infty$. Let $(\Omega_i, \mu_i) (i=1, \ldots, m)$ be $\sigma$-finite measure spaces, and let $\Omega=\Omega_1 \times \cdots \times \Omega_m$. For $f \in L^0(\mathbb{R}^n\times \Omega)$, \[
\int_{\mathbb{R}^n}\left\|Mf(x,\cdot)\right\|_{(q_1,\ldots,q_m)}^p{\rm d}x \lesssim
\int_{\mathbb{R}^n}\left\|f(x,\cdot)\right\|_{(q_1,\ldots,q_m)}^p{\rm d}x, \] where $M$ denotes the Hardy--Littlewood maximal operator acting on the variable $x \in \mathbb{R}^n$.
\end{lemma} Let us show Theorem \ref{thm 171206-1}.
\begin{proof} Since \begin{align*}\label{eq 171103-2}
\|{\mathcal M}_tf\|_{\vec{p}}
&=\left\|
\left(M_n \cdots M_1 \left[|f|^t\right]\right)^{\frac{1}{t}}
\right\|_{\vec{p}}
=
\left\|
M_n \cdots M_1\left[|f|^t\right]
\right\|_{(\frac{p_1}{t}, \ldots, \frac{p_n}{t})}^{\frac{1}{t}},
\end{align*} we have only to check (\ref{eq 171028-2}) for $t=1$ and $1<\vec{p}<\infty$. Let $t=1$. Then the conclusion can be written as \[
\|{\mathcal M}_1f\|_{\vec{p}}=\|M_n \cdots M_1 f\|_{\vec{p}} \lesssim \|f\|_{\vec{p}}. \] We use induction on $n$. Let $n=1$. Then, the result follows from the classical boundedness of the Hardy--Littlewood maximal operator. Suppose that the result holds for $n-1$, that is, for $h \in L^0({\mathbb R}^{n-1})$ and $1<(q_1, \ldots, q_{n-1})<\infty$, \[
\|M_{n-1} \cdots M_{1} h\|_{(q_1, \ldots, q_{n-1})} \lesssim \|h\|_{(q_1, \ldots, q_{n-1})}. \] By Lemma \ref{lem 180113-4}, applied with the variable $x_n \in \mathbb{R}$ in place of $x$, $\Omega=\mathbb{R}^{n-1}$ and the exponents $(p_1,\ldots,p_{n-1})$ and $p_n$, \begin{equation}\label{eq 180114-3}
\|M_n f\|_{\vec{p}} =
\left\|\left[\left\|M_nf\right\|_{(p_1,\ldots,p_{n-1})}\right]\right\|_{(p_n)} \lesssim
\left\|\left[\left\|f\right\|_{(p_1,\ldots,p_{n-1})}\right]\right\|_{(p_n)} =
\|f\|_{\vec{p}}. \end{equation} Thus, by the induction hypothesis, we obtain \begin{align*}
\|M_nM_{n-1}\cdots M_1 f\|_{\vec{p}} &=
\|M_n[M_{n-1}\cdots M_1 f]\|_{\vec{p}}\\ &\lesssim
\|M_{n-1}\cdots M_1 f\|_{\vec{p}}\\ &=
\left\|\left\|M_{n-1}\cdots M_1 f\right\|_{(p_1,\ldots,p_{n-1})}\right\|_{(p_n)}\\ &\lesssim
\left\|\left\|f\right\|_{(p_1,\ldots,p_{n-1})}\right\|_{(p_n)} =
\|f\|_{\vec{p}}. \end{align*}
\end{proof}
\begin{remark} In 1935, Jessen, Marcinkiewicz and Zygmund showed the boundedness of the iterated maximal operator on the classical $L^p$ spaces \cite{J-M-Z}. In 1975, Bagby showed the boundedness of the Hardy--Littlewood maximal operator for functions taking values in mixed Lebesgue spaces \cite{Bagby}. In 1978, St\"ockert stated the boundedness of the iterated maximal operator $\mathcal{M}_1$ \cite{St}. However, the proof given there is not correct: it relies on the following estimate: \begin{equation}\label{eq 180114-1}
\|M_kf\|_{(q_j)}\le M_k\|f\|_{(q_j)} \end{equation} for $f \in L^0(\mathbb{R}^n)$ and $1<\vec{q}<\infty$. We disprove this estimate by an example. For simplicity, let $n=2$, $k=2$ and $j=1$; then (\ref{eq 180114-1}) reads \[
\|M_2f\|_{(q_1)}\le M_2\|f\|_{(q_1)}. \] Let $0<q_2<1< q_1$ and $q_1q_2>1$. Define the function $\varphi$ as follows: \[ \varphi(t)=t^{-\frac{q_1}{q_2}}\chi_{(0, 1)}(t) \quad (t \in \mathbb{R}). \] Let \[ f(x,y)=\chi_{E}(x,y), \]
where $E=\{(x, y): 0\le x \le \varphi(y)\}$. Let $0 < y \le 1$. First, we calculate $M_2\|f\|_{(q_1)}$. Since \[
\|f\|_{(q_1)} =\left(\int_{\mathbb{R}}\chi_E(x,y) {\rm d} x \right)^{\frac{1}{q_1}} =\varphi(y)^{\frac{1}{q_1}} =y^{-\frac{1}{q_1q_2}}\chi_{(0,1)}(y), \] we get \[
M_2\|f\|_{(q_1)}(y) =\frac{1}{y}\int_0^y t^{-\frac{1}{q_1q_2}} {\rm d}t =\frac1y\frac{q_1q_2}{q_1q_2-1}y^{1-\frac{1}{q_1q_2}} = \frac{q_1q_2}{q_1q_2-1}y^{-\frac{1}{q_1q_2}}. \] Next, we calculate $M_2f(x,y)$. Since \[ E=\{(x, y): 0\le x \le \varphi(y)\}=\{(x, y): 0\le x, 0\le y\le \min(1, x^{-\frac{q_2}{q_1}})\}, \] we have \begin{align*} M_2f(x,y)=\frac1y\int_0^y\chi_E(x,t) {\rm d}t =\frac1y\int_0^{\min(y, x^{-\frac{q_2}{q_1}})}\chi_{(x\ge0)}(x) {\rm d}t =\frac{\min(y, x^{-\frac{q_2}{q_1}})\chi_{(x\ge0)}(x)}{y}. \end{align*} Thus, \begin{align*}
\|M_2f(\cdot,y)\|_{(q_1)}^{q_1} &=\frac{1}{y^{q_1}}\int_{\mathbb{R}} \left[\min(y, x^{-\frac{q_2}{q_1}})\right]^{q_1} {\rm d}x\\ &= \frac{1}{y^{q_1}}\int_{\mathbb{R}}\left[ y\chi_{\left(0\le x\le y^{-\frac{q_1}{q_2}}\right)}(x)+ x^{-\frac{q_2}{q_1}} \chi_{\left(x\ge y^{-\frac{q_1}{q_2}}\right)}(x)\right]^{q_1}{\rm d}x\\ &\ge\frac{1}{y^{q_1}}\int_{y^{-\frac{q_1}{q_2}}}^\infty x^{-q_2} {\rm d}x=\infty. \end{align*}
Therefore, (\ref{eq 180114-1}) does not hold. On the other hand, Lemma \ref{lem 180113-4} allowed us to give a correct proof of Theorem \ref{thm 171206-1} above. \end{remark}
Moreover, let us explain why we investigate the iterated maximal operator. \begin{example} Let $\mathcal{R}$ be the set of all axis-parallel rectangles in $\mathbb{R}^n$. Denote by $M_R$ the strong maximal operator generated by rectangles: for $f \in L^0(\mathbb{R}^n)$, \[ M_Rf(x) =\sup_{R \in \mathcal{R}}
\frac{\chi_R(x)}{|R|} \int_{R}
|f(y)| {\rm d}y. \] Then the following pointwise estimates hold \cite{J-M-Z}: \[ M_Rf(x)\le M_n \cdots M_1f (x)={\mathcal M}_1f(x), \] and \[ M_Rf(x)\le M_1 \cdots M_nf (x), \] and so on. Thus, the iterated maximal operator can control the strong maximal operator. On the other hand, the operators $M_1\cdots M_n$ and $M_n \cdots M_1$ are not pointwise comparable. To see this, we give the following example. For the sake of simplicity, let $n=2$. Let $f(x,y)=\chi_{\Delta}(x,y)$, where \[ \Delta=\{(x,y): 0\le x\le1, 0\le y\le x\}. \] First, we calculate $M_1f$ and $M_2f$: \begin{eqnarray*} M_1f(x,y)= \left\{\begin{array}{ll} 0 & (y\le0, 1\le y),\\ 1 & (0\le y\le1, y\le x),\\ \frac{1-y}{1-x} & (0 \le y \le1, x\le y),\\ \frac{1-y}{x-y} & (0\le y \le1, 1\le x),\\ \end{array} \right. \end{eqnarray*} and \begin{eqnarray*} M_2f(x,y)= \left\{\begin{array}{ll} 0 & (x\le0, 1\le x),\\ \frac{x}{x-y} & (0\le x\le1, y\le0),\\ 1 & (0 \le x \le1, 0\le y\le x),\\ \frac{x}{y} & (0\le x \le1, x\le y).\\ \end{array} \right. \end{eqnarray*} Next, we calculate $M_2M_1f$ and $M_1M_2f$. In particular, we consider two cases. For $0\le x \le 1, y\ge1$, we get \[ M_2M_1f(x,y)=\frac{-x^2-y^2+2y}{2y(1-x)}, \quad M_1M_2f(x,y)=\frac{x+1}{2y}. \] For $x \ge 1, 0\le y\le1$, we have \[ M_2M_1f(x,y)=\frac{1}{y}\left(y+(x-1)\log\frac{x-1}{x}\right), \quad M_1M_2f(x,y)=\frac{x-\sqrt{x^2-1}}{y}. \] Thus, we obtain \[ M_2M_1f \le M_1M_2f \quad (0\le x \le 1, y\ge1), \] while \[ M_2M_1f \ge M_1M_2f \quad (x\ge1, 0\le y\le1). \]
\end{example}
Next, we consider the boundedness of the Hardy--Littlewood maximal operator in classical and mixed Morrey spaces. The following proposition plays a key role in the proof.
\begin{proposition}\label{ex 171103-1} {\rm (\cite{SHG15} Lemma 4.2)} For all measurable functions $f$ and cubes $Q$, we have \begin{equation}\label{eq:131109-69} M[\chi_{{\mathbb R}^n \setminus 5Q}f](y) \lesssim \sup_{Q \subset R \in {\mathcal Q}}
\frac{1}{|R|}\int_R |f(x)| {\rm d}x \quad (y \in Q). \end{equation} \end{proposition}
First, we prove the boundedness of the Hardy--Littlewood maximal operator in mixed Morrey spaces. In the classical Morrey spaces, this boundedness was shown by Chiarenza and Frasca in 1987 \cite{C-F}.
\begin{theorem}\label{thm 180122-1} Let $1<\vec{q}<\infty$ and $1<p\le\infty$ satisfy $\frac{n}{p} \le\sum_{j=1}^n\frac{1}{q_j}$. Then \[
\|Mf\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \lesssim \|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \] for all $f \in L^0(\mathbb{R}^n)$. \end{theorem}
\begin{proof} It suffices to verify that, for any cube $Q=Q(x,r)$, \[
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right)}
\|(Mf)\chi_Q\|_{\vec{q}}
\lesssim \|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \] Now, we decompose \[
|f(y)| =
\chi_{Q(x, 5r)}(y)|f(y)|+\chi_{Q(x, 5r)^c}(y)|f(y)| \equiv f_1(y)+f_2(y) \quad(y \in \mathbb{R}^n). \] Using the subadditivity of $M$, we obtain \[ Mf(y) \le Mf_1(y)+Mf_2(y) \quad(y \in \mathbb{R}^n). \] First, the boundedness of $M$ on the mixed Lebesgue space $L^{\vec{q}}(\mathbb{R}^n)$ \cite{Bagby} yields \begin{align*}
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right)}
\|(Mf_1)\chi_Q\|_{\vec{q}} &\le
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right)}
\|Mf_1\|_{\vec{q}}\\ &\lesssim
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right)}
\|f_1\|_{\vec{q}}\\ &=
|Q|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right)}
\|f\chi_{Q(x, 5r)}\|_{\vec{q}}\\ &\lesssim
|Q(x, 5r)|^{\frac{1}{p}-\frac{1}{n} \left( \sum_{j=1}^n\frac{1}{q_j} \right)}
\|f\chi_{Q(x, 5r)}\|_{\vec{q}}\\ &\le
\|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{align*}
Second, by Proposition \ref{ex 171103-1}, we get \[ Mf_2(y) =M[\chi_{{\mathbb R}^n \setminus 5Q}f](y) \lesssim \sup_{Q \subset R \in {\mathcal Q}}
\frac{1}{|R|}\int_R |f(x)| {\rm d}x \quad (y \in Q). \] Thus, we see that \begin{align} \label{eq 180517-1}
&|Q|^{\frac{1}{p}- \frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\|(M f_2)\chi_Q\|_{\vec{q}} \nonumber\\ &\lesssim \sup_{Q \subset R \in {\mathcal Q}}
|Q|^{\frac{1}{p}- \frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\left\|
\frac{1}{|R|}\int_R |f(x)| {\rm d}x
\times\chi_Q\right\|_{\vec{q}}. \end{align} Thanks to Example \ref{ex 171108-1}, we have \begin{align*} (\ref{eq 180517-1})&= \sup_{Q\subset R \in {\mathcal Q}}
|Q|^{\frac{1}{p}- \frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\frac{1}{|R|}\int_R |f(x)| {\rm d}x
\times\|\chi_Q\|_{\vec{q}}\\ &= \sup_{Q\subset R \in {\mathcal Q}}
|Q|^{\frac{1}{p}- \frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\frac{1}{|R|}\int_R |f(x)| {\rm d}x \times
|Q|^{\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}\\ &\le
\sup_{R \in {\mathcal Q}}|R|^{\frac{1}{p}-1}\int_R|f(x)|{\rm d}x. \end{align*} By Proposition \ref{prop 180521-1}, taking into account ${\mathcal M}^p_{\vec{q}}({\mathbb R}^n) \hookrightarrow {\mathcal M}^p_{(\underbrace{1, \ldots, 1}_{\mbox{$n$ times}})}({\mathbb R}^n) = {\mathcal M}^p_1({\mathbb R}^n)$ with embedding constant $1$, we get \begin{align*}
|Q|^{\frac{1}{p}- \frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\|(M f_2)\chi_Q\|_{\vec{q}} \lesssim
\|f \|_{{\mathcal M}^p_1(\mathbb{R}^n)} \le
\|f \|_{{\mathcal M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{align*} Combining the estimates for $Mf_1$ and $Mf_2$ and taking the supremum over all cubes $Q$, we obtain the desired bound. \end{proof}
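For example, in $\mathbb{R}^2$ Theorem \ref{thm 180122-1} applies with $\vec{q}=(2,3)$ and $p=4$, since $\frac{2}{4}\le\frac{1}{2}+\frac{1}{3}$; thus $M$ is bounded on $\mathcal{M}^4_{(2,3)}(\mathbb{R}^2)$.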
Next, we show the boundedness of the iterated maximal operator for mixed Morrey spaces. To show this, we need auxiliary estimates.
\begin{lemma} \label{lem 180115-1} Let $\{f_{(j_1, \ldots, j_m)}\}_{j_1, \ldots, j_m=1}^\infty \subset L^0(\mathbb{R}^n)$ and $w \in A_p(\mathbb{R}^n)$. Then, for $1<q_i \le \infty$ $(i=1, \ldots, m)$ and $1<p<\infty$, \begin{equation*}
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}w^\frac{1}{p}\right]\right\|_p \lesssim
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}w^\frac{1}{p}\right]\right\|_p, \end{equation*} that is, \begin{eqnarray*} &&
\left\| \left( \sum_{j_m=1}^\infty \left(\cdots \sum_{j_2=1}^\infty \left( \sum_{j_1=1}^\infty (M f_{(j_1,\ldots,j_m)})^{q_1} \right)^{\frac{q_2}{q_1}} \cdots \right)^{\frac{q_m}{q_{m-1}}}
\right)^{\frac{1}{q_m}}w^\frac{1}{p}\right\|_p\\ &&\lesssim
\left\|\left( \sum_{j_m=1}^\infty \left(\cdots \sum_{j_2=1}^\infty \left( \sum_{j_1=1}^\infty
|f_{(j_1,\ldots,j_m)}|^{q_1} \right)^{\frac{q_2}{q_1}} \cdots \right)^{\frac{q_m}{q_{m-1}}}
\right)^{\frac{1}{q_m}}w^\frac{1}{p}\right\|_p. \end{eqnarray*}
\end{lemma}
\begin{proof} We induct on $m$. Let $m=1$. Then, this is the weighted Fefferman--Stein maximal inequality \cite{A-J}. Suppose that the result holds for $m-1$: \begin{eqnarray*}
\left\|\left[\left\|Mf_{(j_1,\ldots,j_{m-1})}\right\|_{\ell^{(q_1,\ldots,q_{m-1})}}w^\frac{1}{p}\right]\right\|_p \lesssim
\left\|\left[\left\|f_{(j_1,\ldots,j_{m-1})}\right\|_{\ell^{(q_1,\ldots,q_{m-1})}}w^\frac{1}{p}\right]\right\|_p. \end{eqnarray*} Let $p=q_m$. We calculate \begin{align*}
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}w^\frac{1}{p}\right]\right\|_p^p &=
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_{m-1},1)}}^pw\right]\right\|_1. \end{align*} By the monotone convergence theorem, \begin{align*}
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}w^\frac{1}{p}\right]\right\|_p^p &= \sum_{j_m=1}^\infty
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_{m-1})}}^pw\right]\right\|_1\\ &= \sum_{j_m=1}^\infty
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_{m-1})}}w^\frac{1}{p}\right]\right\|_p^p. \end{align*} Using the induction hypothesis, we obtain \begin{align*}
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}w^\frac{1}{p}\right]\right\|_p^p &\lesssim \sum_{j_m=1}^\infty
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_{m-1})}}w^\frac{1}{p}\right]\right\|_p^p\\ &= \sum_{j_m=1}^\infty
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_{m-1})}}^pw\right]\right\|_1\\ &=
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_{m-1},1)}}^pw\right]\right\|_1\\ &=
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}w^\frac{1}{p}\right]\right\|_p^p. \end{align*} Thus, the result holds when $p=q_m$.
Using Proposition \ref{thm:161125-1}, we conclude the result for all $1<p<\infty$. \end{proof}
\begin{lemma} \label{lem 180119-2} Let $\{f_{(j_1, \ldots, j_m)}\}_{j_1, \ldots, j_m=1}^\infty \subset L^0(\mathbb{R}^n)$ and $w_k \in A_{q_k}(\mathbb{R})$. Then, for $1<q_i\le \infty (i=1, \ldots,m)$ and $k=1, \ldots,n$, \begin{align}\label{eq 180119-3} \lefteqn{\nonumber
\left\|\left[
\left\|\left[\left\|M_kf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_k)}}\right]w_k(\cdot_k)^\frac{1}{q_k}\right\|_{q_k}
\right]\right\|_{\ell^{(q_{k+1},\ldots,q_m)}}}\\ &\lesssim
\left\|\left[
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_k)}}\right]w_k(\cdot_k)^\frac{1}{q_k}\right\|_{q_k}
\right]\right\|_{\ell^{(q_{k+1},\ldots,q_m)}}. \end{align} \end{lemma}
\begin{proof} By Lemma \ref{lem 180115-1}, \[
\left\|\left[\left\|M_kf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_k)}}\right]w_k(\cdot_k)^\frac{1}{q_k}\right\|_{q_k} \lesssim
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_k)}}\right]w_k(\cdot_k)^\frac{1}{q_k}\right\|_{q_k}. \] Taking $\ell^{(q_{k+1},\ldots,q_m)}$-norm for $j_{k+1},\ldots,j_m$, we conclude (\ref{eq 180119-3}). \end{proof}
\begin{lemma}\label{lem 180119-4} Let $f \in L^0(\mathbb{R}^n)$ and $w_n \in A_{p_n}(\mathbb{R})$. Then, for $1<p_i\le \infty (i=1, \ldots,n)$, \begin{equation}
\left\|\left[
\left\|M_nf\right\|_{(p_1,\ldots,p_{n-1})}\right]
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)} \lesssim
\left\|\left[
\left\|f\right\|_{(p_1,\ldots,p_{n-1})}\right]
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}. \end{equation}
\end{lemma}
\begin{proof} Assume that $f \in L^{\vec{p}}({\mathbb R}^n)$ is a function of the form: \[ f(x_1,\ldots,x_n) = \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(r x')f_{m'}(x_n), \] where $r>0$ and $\{f_{m'}\}_{m' \in {\mathbb Z}^{n-1}} \subset L^0({\mathbb R})$. Then \[ M_n f(x_1,\ldots,x_n) = \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(r x')M_n f_{m'}(x_n), \] since the summand is made up of at most one non-zero function once we fix $x'$. Define $v>0$ by \[ \frac{1}{v}=\frac{1}{p_1}+\cdots+\frac{1}{p_{n-1}}. \] Then, by Proposition \ref{prop 180113-2}, \begin{align*}
&\left\|
\left\| M_nf
\right\|_{(p_1, \ldots, p_{n-1})}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}\\ &=
\left\|\left\| \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(r x')M_n f_{m'}(\cdot_n)
\right\|_{(p_1, \ldots, p_{n-1})}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}\\ &= r^{-\frac1v}
\left\|\left\| \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(x')M_n f_{m'}(\cdot_n)
\right\|_{(p_1, \ldots, p_{n-1})}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}. \end{align*} Thus, by Lemma \ref{lem 180115-1}, \begin{align*}
\left\|
\left\| M_nf
\right\|_{(p_1, \ldots, p_{n-1})}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)} &= r^{-\frac1v}
\left\|\left\| M_n f_{m'}(\cdot_n)
\right\|_{\ell^{(p_1, \ldots, p_{n-1})}}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}\\ &\lesssim r^{-\frac1v}
\left\|\left\| f_{m'}(\cdot_n)
\right\|_{\ell^{(p_1, \ldots, p_{n-1})}}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}\\ &=
\left\|
\left\| f
\right\|_{(p_1, \ldots, p_{n-1})}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}.\\ \end{align*} Let $f \in L^{\vec{p}}({\mathbb R}^n)$ be arbitrary. Write \[ f_r(x)= \frac{1}{r^{n-1}} \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{r m'+[0,r]^{n-1}}(x') \int_{r m'+[0,r]^{n-1}}f(y',x_n)\,dy'. \] Thanks to the Lebesgue differentiation theorem \[ f(x', x_n)=\lim_{r \downarrow 0}f_r(x', x_n) \] for almost every $x' \in {\mathbb R}^{n-1}$. Thus, by the Fatou lemma, we obtain \[ M_n f(x) \le \liminf_{r \downarrow 0}M_n f_r(x). \] Meanwhile, for all $r>0$, since $f_r\le M_{n-1}\cdots M_{1}f$, by Theorem \ref{thm 171206-1}, we get \begin{align*}
\left\|\left\|f_r\right\|_{\vec{s}}w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)} \le
\left\|\left\| M_{n-1}\cdots M_{1}f\right\|_{\vec{s}}w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)} \lesssim
\left\|\left\|f\right\|_{\vec{s}}w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}, \end{align*} where $\vec{s}=(p_1,\ldots, p_{n-1})$. As a consequence, by the Lebesgue differentiation theorem and the Fatou lemma, we obtain \begin{align*}
\left\|
\left\| M_nf
\right\|_{\vec{s}}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)} &\le \liminf_{r \downarrow 0}
\left\|
\left\| M_nf_r
\right\|_{\vec{s}}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}\\ &\lesssim \liminf_{r \downarrow 0}
\left\|
\left\| f_r
\right\|_{\vec{s}}
w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)} \lesssim
\left\|\left\|f\right\|_{\vec{s}}w_n(\cdot_n)^\frac{1}{p_n}\right\|_{(p_n)}. \end{align*}
\end{proof}
\begin{proposition} \label{prop 180119-6} Let $1<\vec{q}<\infty$. Let $f \in L^0(\mathbb{R}^n)$ and $w_k \in A_{q_k}(\mathbb{R})$ for $k=1,\ldots,n$. Then, \[
\left\| \mathcal{M}_1f\cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}} \lesssim
\left\| f\cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}}. \] \end{proposition}
\begin{proof} We induct on $n$. Let $n=1$. Then, the result is the boundedness of the Hardy--Littlewood maximal operator on weighted $L^p$ spaces. Suppose that the result holds for $n-1$: \[
\left\| (M_{n-1}\cdots M_1h)\cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1,\ldots,q_{n-1})} \lesssim
\left\| h\cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1,\ldots,q_{n-1})}. \] By Lemma \ref{lem 180119-4}, we obtain \begin{align*}
\left\| \mathcal{M}_1f\cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}} &=
\left\| (M_n\cdots M_1f)\cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}}\\ &=
\left\|\left[
\left\| M_n\left([M_{n-1}\cdots M_1f]\cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}\right)
\right\|_{(q_1,\ldots,q_{n-1})}\right]
w_n(\cdot_n)^{\frac{1}{q_n}}\right\|_{(q_n)}\\ &\lesssim
\left\|\left[
\left\| [M_{n-1}\cdots M_1f]\cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1,\ldots,q_{n-1})}\right]
w_n(\cdot_n)^{\frac{1}{q_n}}\right\|_{(q_n)}\\ &\lesssim
\left\|\left[
\left\| f\cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1,\ldots,q_{n-1})}\right]
w_n(\cdot_n)^{\frac{1}{q_n}}\right\|_{(q_n)} =
\left\| f\cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}}. \end{align*} \end{proof}
\begin{proposition} \label{prop 180119-5} Let $0<p<\infty$, $0<\vec{q} \le \infty$ and $\eta \in {\mathbb R}$ satisfy \[ 0<\sum_{j=1}^n \frac{1}{q_j}-\frac{n}{p}<\eta<1. \] Then, for $f \in L^0(\mathbb{R}^n)$ \[
\|f\|_{{\mathcal M}^p_{\vec{q}}} \sim \sup_{Q \in {\mathcal Q}}
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\|f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}}. \] \end{proposition}
\begin{proof} One inequality is clear: \[
\|f\|_{{\mathcal M}^p_{\vec{q}}} \le \sup_{Q \in {\mathcal Q}}
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\|f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}}. \] We need to show the opposite inequality. To this end, we fix a cube $Q=I_1 \times \cdots \times I_n$. Given $(l_1,\ldots,l_n)\in {\mathbb N}^n$, we write $l=\max(l_1,\ldots,l_n)$. Then we have \begin{align*} \lefteqn{
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\|f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}} }\\ &\lesssim
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\left\|f\prod_{j=1}^n\left( \frac{\ell(I_j)}{\ell(I_j)+|\cdot_j-c(I_j)|}\right)^\eta\right\|_{\vec{q}}\\ &\lesssim
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}} \sum_{l_1,\ldots,l_n=1}^\infty \frac{1}{2^{(l_1+\cdots+l_n)\eta}}
\left\|f\chi_{2^{l_1}I_1 \times \cdots \times 2^{l_n}I_n}\right\|_{\vec{q}}\\ &\lesssim
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}} \sum_{l_1,\ldots,l_n=1}^\infty \frac{1}{2^{(l_1+\cdots+l_n)\eta}}
\left\|f\chi_{2^{l}Q}\right\|_{\vec{q}}\\ &\lesssim \sum_{l_1,\ldots,l_n=1}^\infty \frac{2^{l\left(\sum_{j=1}^n \frac{1}{q_j}-\frac{n}{p}\right)}}{2^{(l_1+\cdots+l_n)\eta}}
|2^{l}Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\left\|f\chi_{2^{l}Q}\right\|_{\vec{q}}\\ &\lesssim
\|f\|_{{\mathcal M}^p_{\vec{q}}}, \end{align*} where $c(I_j)$ denotes the center of $I_j$. Hence, we obtain the result. \end{proof}
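For example, in $\mathbb{R}^2$ with $\vec{q}=(2,3)$ and $p=4$, any $\eta$ with $\frac{1}{3}<\eta<1$ is admissible in Proposition \ref{prop 180119-5}, since $\frac{1}{2}+\frac{1}{3}-\frac{2}{4}=\frac{1}{3}$.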
We recall Theorem \ref{thm 180121-1}.
\begin{theorem} \label{thm 180119-1} Let $0<\vec{q}\le\infty$ and $0<p<\infty$ satisfy \[ \frac{n}{p} \le\sum_{j=1}^n\frac{1}{q_j}, \quad \frac{n-1}{n}p<\max(q_1,\ldots,q_n). \]
If $0<t<\min(q_1, \ldots, q_n, p)$, then \begin{equation*}
\|{\mathcal M}_tf\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
\lesssim \|f\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \end{equation*} for all $f \in \mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)$. \end{theorem}
\begin{proof} We have only to check for $t=1, 1<p<\infty$ and $1<\vec{q} \le \infty$ as we did in Theorem \ref{thm 171206-1}. For $\eta \in \mathbb{R}$ satisfying \begin{equation} \label{eq 180119-7} 0<\sum_{j=1}^n \frac{1}{q_j}-\frac{n}{p}<\eta<\frac{1}{\max(q_1,\ldots,q_n)}, \end{equation} once we show \begin{equation}\label{eq 180119-8}
\|\mathcal{M}_1f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}} \lesssim
\|f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}}, \end{equation} we get \[
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\|\mathcal{M}_1f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}} \lesssim
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\|f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}}. \] Note that such an $\eta$ exists because \[ \frac{n-1}{n}p<\max(q_1,\ldots,q_n). \] Taking the supremum over all cubes and using Proposition \ref{prop 180119-5}, we conclude the result.
We shall show (\ref{eq 180119-8}). Let $Q=I_1 \times I_2 \times \cdots \times I_n$. Then, \[ (\mathcal{M}_1\chi_Q)^\eta = \left(\bigotimes_{j=1}^nM_j\chi_{I_j}\right)^\eta = \bigotimes_{j=1}^n\left(M_j\chi_{I_j}\right)^\eta. \] Here, $(M_j\chi_{I_j})^{\eta q_j}$ is an $A_1$-weight if and only if \begin{equation}\label{eq 180119-9} 0\le\eta q_j<1. \end{equation} Since (\ref{eq 180119-7}) guarantees (\ref{eq 180119-9}) for every $j$, we have $(M_j\chi_{I_j})^{\eta q_j}\in A_1 \subset A_{q_j}(\mathbb{R})$. Thus, by Proposition \ref{prop 180119-6}, \begin{align*}
\left\|\mathcal{M}_1f(\mathcal{M}_1\chi_Q)^\eta\right\|_{\vec{q}} &=
\left\|(\mathcal{M}_1f)\bigotimes_{j=1}^n\left(M_j\chi_{I_j}\right)^\eta\right\|_{\vec{q}} \lesssim
\left\|f\bigotimes_{j=1}^n\left(M_j\chi_{I_j}\right)^\eta\right\|_{\vec{q}} =
\|f(\mathcal{M}_1\chi_Q)^\eta\|_{\vec{q}}. \end{align*} Thus, (\ref{eq 180119-8}) holds. \end{proof}
In Theorem {\ref{thm 180119-1}}, letting $q_j=q$ for all $j=1, \ldots, n$, we get the following result:
\begin{corollary} Let \[ 0<\frac{n-1}{n}p<q\le p<\infty. \]
If $0<t<q$, then \[
\|{\mathcal M}_tf\|_{\mathcal{M}^p_q(\mathbb{R}^n)}
\lesssim \|f\|_{\mathcal{M}^p_q(\mathbb{R}^n)} \] for all $f \in \mathcal{M}^p_q(\mathbb{R}^n)$. \end{corollary}
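For instance, in $\mathbb{R}^2$ this corollary applies with $q=2$ and $p=3$, since $\frac{n-1}{n}p=\frac{3}{2}<2\le3$; hence $\mathcal{M}_t$ is bounded on $\mathcal{M}^3_2(\mathbb{R}^2)$ for every $0<t<2$.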
\section{Proof of Theorem \ref{thm 171115-1} and related inequalities} \label{sec dual inequality}
Next, we shall show the dual inequality of Stein type \cite{F-S} for the iterated maximal operator and $L^{\vec{p}}(\mathbb{R}^n)$.
\begin{proposition}\label{prop 171125-1} Let $f,w$ be measurable functions. Suppose in addition that $w \ge 0$ almost everywhere. Let $1\le i_1, i_2, \ldots, i_k\le n$ $(1\le k\le n)$ be indices with $i_j \neq i_l$ for $j \neq l$. Then for all $1<p<\infty$, \begin{eqnarray*} \int_{{\mathbb R}^n} M_{i_k}\cdots M_{i_1}f(x)^p\cdot w(x){\rm d}x \lesssim
\int_{{\mathbb R}^n} |f(x)|^p\cdot M_{i_1}\cdots M_{i_k}w(x){\rm d}x. \end{eqnarray*} \end{proposition}
\begin{proof} We use induction on $k$. Let $k=1$. Fix $(x_1, \ldots, x_{i_1-1}, x_{i_1+1}, \ldots, x_n)$. Then, by the dual inequality of Stein type for $M_{i_1}$,
we get \begin{equation*} \int_{{\mathbb R}} M_{i_1}f(x)^p\cdot w(x){\rm d}x_{i_1} \lesssim
\int_{{\mathbb R}} |f(x)|^p\cdot M_{i_1}w(x){\rm d}x_{i_1}. \end{equation*} Integrating this estimate against $(x_1, \ldots, x_{i_1-1}, x_{i_1+1}, \ldots, x_n)$, we have \begin{equation*} \int_{{\mathbb R}^n} M_{i_1}f(x)^p\cdot w(x){\rm d}x \lesssim
\int_{{\mathbb R}^n} |f(x)|^p\cdot M_{i_1}w(x){\rm d}x. \end{equation*} Suppose that the result holds for $k-1$. Then, fix $(x_1, \ldots, x_{i_k-1}, x_{i_k+1}, \ldots, x_n)$. Again, by the dual inequality of Stein type for $M_{i_k}$, we get \begin{equation*} \int_{{\mathbb R}} M_{i_k}\cdots M_{i_1}f(x)^p\cdot w(x){\rm d}x_{i_k} \lesssim \int_{{\mathbb R}} M_{i_{k-1}}\cdots M_{i_1}f(x)^p\cdot M_{i_k}w(x){\rm d}x_{i_k}. \end{equation*} Integrating this estimate against $(x_1, \ldots, x_{i_k-1}, x_{i_k+1}, \ldots, x_n)$ and using induction hypothesis, we have \begin{align*} \int_{{\mathbb R}^n} M_{i_k}\cdots M_{i_1}f(x)^p\cdot w(x){\rm d}x &\lesssim \int_{{\mathbb R}^n} M_{i_{k-1}}\cdots M_{i_1}f(x)^p\cdot M_{i_k}w(x){\rm d}x\\ &\lesssim
\int_{{\mathbb R}^n} |f(x)|^p\cdot M_{i_1}\cdots M_{i_k}w(x){\rm d}x. \end{align*} \end{proof}
The following corollary extends the dual inequality of Stein type for $L^p(\mathbb{R}^n)$.
\begin{corollary}\label{cor 171125-3} Let $f,w$ be measurable functions. Suppose in addition that $w \ge 0$ almost everywhere. Then for all $0<p<\infty$ and $0<t<p$, \begin{eqnarray*} \int_{{\mathbb R}^n} \mathcal{M}_tf(x)^p\cdot w(x){\rm d}x \lesssim
\int_{{\mathbb R}^n} |f(x)|^p\cdot M_1\cdots M_nw(x){\rm d}x \quad (f \in L^0(\mathbb{R}^n)). \end{eqnarray*} \end{corollary}
\begin{proof} Using Proposition \ref{prop 171125-1}, we have \begin{align*} \int_{{\mathbb R}^n} \mathcal{M}_tf(x)^p\cdot w(x){\rm d}x &=
\int_{{\mathbb R}^n} \left( M_n \cdots M_1 [|f|^t](x) \right)^{\frac{p}{t}}\cdot w(x){\rm d}x\\ &\lesssim
\int_{{\mathbb R}^n} (|f(x)|^t)^{\frac{p}{t}}\cdot M_1 \cdots M_n w(x) {\rm d}x\\ &=
\int_{{\mathbb R}^n} |f(x)|^p\cdot M_1 \cdots M_nw(x){\rm d}x. \end{align*} \end{proof}
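In particular, taking $w\equiv1$ in Corollary \ref{cor 171125-3} and noting that $M_1\cdots M_n1\equiv1$, we recover the classical estimate
\[
\int_{{\mathbb R}^n} \mathcal{M}_tf(x)^p\,{\rm d}x \lesssim \int_{{\mathbb R}^n} |f(x)|^p\,{\rm d}x \quad (0<t<p),
\]
that is, the $L^p(\mathbb{R}^n)$-boundedness of the iterated maximal operator \cite{J-M-Z}.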
We recall Theorem \ref{thm 171115-1}.
\begin{theorem}[Dual inequality of Stein type for $L^{\vec{p}}$] Let $f$ be a measurable function on $\mathbb{R}^n$ and $1\le\vec{p}<\infty$. Then if $0< t<\min(p_1, \ldots, p_n)$ and $w_j^t\in A_{p_j}(\mathbb{R})$, \begin{eqnarray*}
\left\| {\mathcal M}_t f\cdot \bigotimes_{j=1}^n(w_j)^{\frac{1}{p_j}}
\right\|_{\vec{p}} \lesssim
\left\| f \cdot \bigotimes_{j=1}^n\left(M_j w_j\right)^{\frac{1}{p_j}}
\right\|_{\vec{p}}. \end{eqnarray*} \end{theorem}
\begin{proof} We have only to check the case $t=1$ and $1<\vec{p}<\infty$. We use induction on $n$. Let $n=1$. Then the result follows from the classical dual inequality of Stein type. Suppose that the result holds for $n-1$. Then, the following inequality holds: for $1<(q_1, \ldots, q_{n-1})<\infty$, $h \in L^{0}(\mathbb{R}^{n-1})$ and weights $v_j \in A_{q_j}(\mathbb{R})$, \begin{equation} \label{eq 171115-2}
\left\| (M_{n-1}\cdots M_1 h)\cdot \bigotimes_{j=1}^{n-1}v_j^{\frac{1}{q_j}}
\right\|_{(q_1, \ldots, q_{n-1})} \lesssim
\left\| h \cdot \bigotimes_{j=1}^{n-1}\left(M_j v_j\right)^{\frac{1}{q_j}}
\right\|_{(q_1, \ldots, q_{n-1})}. \end{equation}
From the definition of the norm $\|\cdot\|_{\vec{p}}$, we get \begin{align}\label{eq 180518-1}
&\left\| (M_n\cdots M_1 f)\cdot \bigotimes_{j=1}^{n}w_j^{\frac{1}{p_j}}
\right\|_{\vec{p}} =
\left\| \left[
\left\| (M_n\cdots M_1 f)\cdot \bigotimes_{j=1}^{n}w_j^{\frac{1}{p_j}}
\right\|_{(p_1, \ldots, p_{n-1})} \right]
\right\|_{(p_n)} \nonumber\\ &=
\left\| \left[
\left\| (M_n\cdots M_1 f)\cdot \bigotimes_{j=1}^{n-1}w_j^{\frac{1}{p_j}}
\right\|_{(p_1, \ldots, p_{n-1})} \right]w_n(\cdot_n)^\frac{1}{p_n}
\right\|_{(p_n)}. \end{align} By the Lebesgue differentiation theorem, $w_n\le M_nw_n$ almost everywhere. Thus, by Lemma \ref{lem 180119-4}, \begin{align*}
&\left\| (M_n\cdots M_1 f)\cdot \bigotimes_{j=1}^{n}w_j^{\frac{1}{p_j}}
\right\|_{\vec{p}}\\ &=
\left\| \left[
\left\| M_n\left[(M_{n-1}\cdots M_1 f)\cdot \bigotimes_{j=1}^{n-1}w_j^{\frac{1}{p_j}}\right]
\right\|_{(p_1, \ldots, p_{n-1})} \right]w_n(\cdot_n)^\frac{1}{p_n}
\right\|_{(p_n)}\\ &\lesssim
\left\| \left[
\left\| (M_{n-1}\cdots M_1 f)\cdot \bigotimes_{j=1}^{n-1}w_j^{\frac{1}{p_j}}
\right\|_{(p_1, \ldots, p_{n-1})} \right]w_n(\cdot_n)^\frac{1}{p_n}
\right\|_{(p_n)}\\ &\le
\left\| \left[
\left\| (M_{n-1}\cdots M_1 f)\cdot \bigotimes_{j=1}^{n-1}w_j^{\frac{1}{p_j}}
\right\|_{(p_1, \ldots, p_{n-1})} \right]M_nw_n(\cdot_n)^\frac{1}{p_n}
\right\|_{(p_n)}.\\ \end{align*}
Thus, by induction hypothesis (\ref{eq 171115-2}), \begin{align*}
&\left\| (M_n\cdots M_1 f)\cdot \bigotimes_{j=1}^{n}w_j^{\frac{1}{p_j}}
\right\|_{\vec{p}}\\ &\lesssim
\left\| \left[
\left\| (M_{n-1}\cdots M_1 f)\cdot \bigotimes_{j=1}^{n-1}w_j^{\frac{1}{p_j}}
\right\|_{(p_1, \ldots, p_{n-1})} \right]M_nw_n(\cdot_n)^\frac{1}{p_n}
\right\|_{(p_n)}\\ &\lesssim
\left\| \left[
\left\|f\cdot \bigotimes_{j=1}^{n-1}(M_jw_j)^{\frac{1}{p_j}}
\right\|_{(p_1, \ldots, p_{n-1})} \right]M_nw_n(\cdot_n)^\frac{1}{p_n}
\right\|_{(p_n)}\\ &=
\left\| \left[
\left\| f\cdot \bigotimes_{j=1}^{n}(M_jw_j)^{\frac{1}{p_j}}
\right\|_{(p_1, \ldots, p_{n-1})} \right]
\right\|_{(p_n)} =
\left\| f \cdot \bigotimes_{j=1}^n\left(M_j w_j\right)^{\frac{1}{p_j}}
\right\|_{\vec{p}}. \end{align*} Thus, we conclude the result. \end{proof}
\section{Proof of Theorems \ref{thm 171123-2} and \ref{thm 171219-3}} \label{sec vector-valued}
In this section, we shall prove the Fefferman--Stein vector-valued maximal inequality for mixed spaces. First, we define the mixed vector-valued norm and show its duality formula.
\begin{definition}[Mixed vector-valued norm] Let $0<\vec{p}\le\infty$ and $0<q \le\infty$.
For a system $\displaystyle \{f_j \}_{j=1}^{\infty} \subset L^0({\mathbb R}^n)$, define \[
\| f_j \|_{L^{\vec{p}}(\ell^q)} \equiv
\| \{f_j \}_{j=1}^{\infty} \|_{L^{\vec{p}}(\ell^q)} =
\left\| \left(\sum_{j=1}^{\infty} |f_j|^q\right)^{\frac1q}
\, \right\|_{\vec{p}}. \] The space $L^{\vec{p}}(\ell^q,{\mathbb R}^n)$ denotes the set of all collections $\{f_j\}_{j=1}^\infty$ for which the quantity
$\|\{f_j \}_{j=1}^{\infty} \|_{L^{\vec{p}}(\ell^q)}$ is finite. \index{L p l q R n@$L^p(\ell^q,{\mathbb R}^n)$}
A natural modification is made in the above when $q=\infty$. \index{vector-valued norm@vector-valued norm} \end{definition}
This vector-valued norm can be expressed by duality as follows.
\begin{lemma} \label{lem 171123-1} Let $1<\vec{p}\le\infty$ and $1<q \le\infty$, and let $\{f_j\}_{j=1}^\infty$ be a sequence of $L^{\vec{p}}({\mathbb R}^n)$-functions such that $f_j=0$ a.e. if $j$ is large enough. Then we can take a sequence $\{g_j\}_{j=1}^\infty$ of $L^{\vec{p'}}({\mathbb R}^n)$-functions such that \begin{eqnarray*}
\| f_j \|_{L^{\vec{p}}(\ell^q)} = \sum_{j=1}^\infty \int_{{\mathbb R}^n} f_j(x)g_j(x){\rm d}x, \quad
\| g_j \|_{L^{\vec{p'}}(\ell^{q'})} =1. \end{eqnarray*} If $\{f_j\}_{j=1}^\infty$ is nonnegative, then we can arrange that $\{g_j\}_{j=1}^\infty$ is nonnegative. \end{lemma}
\begin{proof} There is nothing to prove if $f_j(x)=0$ for all nonnegative integers $j$ and for almost all $x \in {\mathbb R}^n$; assume otherwise. In this case, we recall the construction of the duality $L^{\vec{p}}({\mathbb R}^n)$-$L^{\vec{p'}}({\mathbb R}^n)$ \cite{B-P}; for $x \in \mathbb{R}^n$, set \[
g_j(x)\equiv \overline{{\rm sgn}(f_j)(x)}\,|f_j(x)|^{q-1}\,
\left(\sum_{j=1}^\infty |f_j(x)|^q\right)^{\frac{p_1}{q}-1} \prod_{k=1}^n
\left\|
\left(\sum_{j=1}^\infty |f_j|^q\right)^{\frac{1}{q}}
\right\|_{(p_1,\ldots,p_k)}^{p_{k+1}-p_k}(x'), \] where we let $p_{n+1}=1$ and $x'=(x_{k+1}, \ldots, x_n)$. Since \[ \left(
\sum_{j=1}^\infty |g_j|^{q'} \right)^{\frac{p'_1}{q'}} = \left(
\sum_{j=1}^\infty |f_j|^q \right)^{\frac{p_1}{q}} \prod_{k=1}^n
\left\|\left(\sum_{j=1}^\infty |f_j|^q\right)^{\frac{1}{q}}\right\|_{(p_1, \ldots, p_k)}^{(p_{k+1}-p_k)p'_1}, \] we have
$\| g_j \|_{L^{\vec{p'}}(\ell^{q'})}=1$. Furthermore, since \[ \sum_{j=1}^\infty f_jg_j = \left(
\sum_{j=1}^\infty |f_j|^q \right)^{\frac{p_1}{q}} \prod_{k=1}^n
\left\|
\left(\sum_{j=1}^\infty |f_j|^q\right)^{\frac{1}{q}}
\right\|_{(p_1,\ldots,p_k)}^{p_{k+1}-p_k}, \] we obtain $\displaystyle
\| f_j \|_{L^{\vec{p}}(\ell^q)} = \sum_{j=1}^\infty \int_{{\mathbb R}^n}f_j(x)g_j(x){\rm d}x. $ \end{proof}
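In the simplest case $n=1$, the construction above reduces to the familiar norming sequence for the duality $L^{p_1}(\ell^q)$--$L^{p_1'}(\ell^{q'})$:
\[
g_j= \overline{{\rm sgn}(f_j)}\,|f_j|^{q-1} \left(\sum_{i=1}^\infty |f_i|^q\right)^{\frac{p_1}{q}-1} \left\|\left(\sum_{i=1}^\infty |f_i|^q\right)^{\frac{1}{q}}\right\|_{(p_1)}^{1-p_1}.
\]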
To prove the Fefferman--Stein vector-valued maximal inequality for $L^{\vec{p}}(\mathbb{R}^n)$, we use the following lemma, which was proved by Bagby \cite{Bagby}. This lemma is the unweighted version of Lemma \ref{lem 180115-1}.
\begin{lemma}{\rm (\cite{Bagby})} \label{lem 180114-2} Let $\{f_{(j_1, \ldots, j_m)}\}_{j_1, \ldots, j_m=1}^\infty \subset L^0(\mathbb{R}^n)$. For $1<q_i<\infty (i=1, \ldots,m)$ and $1<p<\infty$, \[
\left\|\left[\left\|Mf_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}\right]\right\|_{p} \lesssim
\left\|\left[\left\|f_{(j_1,\ldots,j_m)}\right\|_{\ell^{(q_1,\ldots,q_m)}}\right]\right\|_{p}, \] that is, \begin{eqnarray*} &&
\left\| \left( \sum_{j_m=1}^\infty \left(\cdots \sum_{j_2=1}^\infty \left( \sum_{j_1=1}^\infty (M f_{(j_1,\ldots,j_m)})^{q_1} \right)^{\frac{q_2}{q_1}} \cdots \right)^{\frac{q_m}{q_{m-1}}}
\right)^{\frac{1}{q_m}}\right\|_{p}\\ &&\lesssim
\left\|\left( \sum_{j_m=1}^\infty \left(\cdots \sum_{j_2=1}^\infty \left( \sum_{j_1=1}^\infty
|f_{(j_1,\ldots,j_m)}|^{q_1} \right)^{\frac{q_2}{q_1}} \cdots \right)^{\frac{q_m}{q_{m-1}}}
\right)^{\frac{1}{q_m}}\right\|_{p}. \end{eqnarray*}
\end{lemma}
\begin{theorem}[Fefferman--Stein vector-valued maximal inequality] \label{thm 180106-1} Let $0<\vec{p}<\infty$, $0<u\le\infty$ and $0<t<\min(p_1, \ldots, p_n, u)$. Then, for $\{f_k\}_{k=1}^\infty \subset L^0(\mathbb{R}^n)$, \begin{equation*}
\left\|\left( \sum_{k=1}^{\infty} [\mathcal{M}_tf_k]^u
\right)^{\frac{1}{u}}\right\|_{\vec{p}} \lesssim
\left\|\left( \sum_{k=1}^{\infty}
|f_k|^u
\right)^{\frac{1}{u}}\right\|_{\vec{p}}. \end{equation*} \end{theorem} \begin{proof} As we did in Theorem \ref{thm 171206-1},
we can reduce matters to the case $t=1$, $1<\vec{p}<\infty$ and $1<u\le\infty$. \begin{itemize}
\item[(i)] Let $1<u<\infty$. We may assume that $f_k=0$ for $k \gg 1$, so that at least we know that both sides are finite since we already showed that ${\mathcal M}_1$ is $L^{\vec{p}}$-bounded. We induct on $n$. If $n=1$, then this is nothing but the Fefferman--Stein vector-valued inequality. Assume that for all $\{g_k\}_{k=1}^\infty \subset L^0({\mathbb R}^{n-1})$ \[
\left\|\left( \sum_{k=1}^\infty \left[M_{n-1}\cdots M_1g_k\right]^u
\right)^{\frac1u}\right\|_{(p_1, \ldots, p_{n-1})} \lesssim
\left\|\left(\sum_{k=1}^\infty |g_k|^u\right)^{\frac1u}\right\|_{(p_1, \ldots, p_{n-1})}. \]
Assume first that each $f_k$ is of the form \[ f_k(x_1,\ldots,x_n) = \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(r x')f_{k,m'}(x_n), \] where $r>0$ and $\{f_{k,m'}\}_{m' \in {\mathbb Z}^{n-1}} \subset L^0({\mathbb R})$. Then \[ M_n f_k(x_1,\ldots,x_n) = \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(r x')M_n f_{k,m'}(x_n), \] since the summand is made up of at most one non-zero function once we fix $x'$. Define $v>0$ by \[ \frac{1}{v}=\frac{1}{p_1}+\cdots+\frac{1}{p_{n-1}}. \] We observe \begin{align*}
\left\| \left\{ M_n f_k \right\}_{k=1}^{\infty} \right\|_{L^{\vec{p}}(\ell^u)} &=
\left\| \left(\sum_{k=1}^{\infty} \left[ \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(r \cdot')M_nf_{k,m'}(\cdot_n) \right]^u\right)^\frac1u
\right\|_{\vec{p}}\\ &= r^{-\frac{1}{v}}
\left\| \left(\sum_{k=1}^{\infty} \left[ \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(\cdot')M_nf_{k,m'}(\cdot_n) \right]^u\right)^\frac1u
\right\|_{\vec{p}}.\\ \end{align*} Setting $\vec{s}=(p_1, \ldots,p_{n-1})$, we get \begin{align*}
&\left\| \left(\sum_{k=1}^\infty\left[ \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(\cdot')M_nf_{k,m'}(\cdot_n)\right]^u\right)^\frac1u
\right\|_{\vec{p}}\\ &=
\left\|\left\|\left(\sum_{k=1}^\infty \left[ \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{m'+[0,1]^{n-1}}(\cdot')M_nf_{k,m'}(\cdot_n)
\right]^u\right)^\frac1u\right\|_{\vec{s}}\right\|_{(p_n)}\\ &=
\left\| \left[
\left\| \left(\sum_{k=1}^\infty \left[ M_nf_{k,m'}(\cdot_n) \right]^u\right)^\frac1u
\right\|_{\ell^{(p_1, \ldots, p_{n-1})}} \right]
\right\|_{(p_n)}\\ &=
\left\| \left[
\left\| M_nf_{k,m'}(\cdot_n)
\right\|_{\ell^{(u, p_1, \ldots, p_{n-1})}} \right]
\right\|_{(p_n)}. \end{align*} Thus by Lemma \ref{lem 180114-2}, we obtain \begin{align*}
\left\|\left(\sum_{k=1}^\infty
[M_nf_{k}]^u\right)^\frac1u\right\|_{\vec{p}} &\lesssim r^{-\frac{1}{v}}
\left\| \left[
\left\| M_nf_{k,m'}(\cdot_n)
\right\|_{\ell^{(u, p_1, \ldots, p_{n-1})}} \right]
\right\|_{(p_n)}\\ &\lesssim r^{-\frac{1}{v}}
\left\| \left[
\left\| f_{k,m'}(\cdot_n)
\right\|_{\ell^{(u, p_1, \ldots, p_{n-1})}} \right]
\right\|_{(p_n)}\\ &=
\left\| \left[
\left\| \left(\sum_{k=1}^\infty \left[ f_{k,m'}(\cdot_n) \right]^u\right)^\frac1u
\right\|_{\ell^{(p_1, \ldots, p_{n-1})}} \right]
\right\|_{(p_n)}\\ &=
\left\| \left(\sum_{k=1}^\infty
|f_{k}|^u\right)^\frac1u
\right\|_{\vec{p}}. \end{align*} Here the constant is independent of $r>0$. Now let $\{f_k\}_{k=1}^\infty$ be arbitrary. Write \[ f_k^{(r)}(x)= \frac{1}{r^{n-1}} \sum_{m' \in {\mathbb Z}^{n-1}} \chi_{r m'+[0,r]^{n-1}}(x') \int_{r m'+[0,r]^{n-1}}f_k(y',x_n)\,dy'. \] Thanks to the Lebesgue differentiation theorem, \[ f_k(x', x_n)=\lim_{r \downarrow 0}f_k^{(r)}(x', x_n) \] for almost every $x' \in {\mathbb R}^{n-1}$. Thus, by the Fatou lemma, we obtain \[ M_n f_k(x) \le \liminf_{r \downarrow 0}M_n f_k^{(r)}(x). \] Meanwhile, for all $r>0$, since $|f_k^{(r)}|\le M_{n-1}\cdots M_{1}f_k$, by the induction hypothesis, \begin{align*}
\left\|\left(\sum_{k=1}^\infty|f_k^{(r)}|^u\right)^\frac1u\right\|_{\vec{p}} \le
\left\|\left(\sum_{k=1}^\infty[M_{n-1}\cdots M_{1}f_k]^u\right)^\frac1u\right\|_{\vec{p}} \lesssim
\left\|\left(\sum_{k=1}^\infty|f_k|^u\right)^\frac1u\right\|_{\vec{p}}. \end{align*} As a consequence, by the Lebesgue differentiation theorem and the Fatou lemma, we obtain \begin{align*}
\left\|\left(\sum_{k=1}^\infty[M_n f_k]^u\right)^\frac1u\right\|_{\vec{p}} &\le\liminf_{r \downarrow 0}
\left\|\left(\sum_{k=1}^\infty[M_n f_k^{(r)}]^u\right)^\frac1u\right\|_{\vec{p}}\\ &\lesssim\liminf_{r \downarrow 0}
\left\|\left(\sum_{k=1}^\infty|f_k^{(r)}|^u\right)^\frac1u\right\|_{\vec{p}} \lesssim
\left\|\left(\sum_{k=1}^\infty|f_k|^u\right)^\frac1u\right\|_{\vec{p}}. \end{align*}
Therefore, by the bound for $M_n$ just obtained and the induction hypothesis, \begin{align*}
\left\|\left(\sum_{k=1}^\infty[M_nM_{n-1}\cdots M_1 f_k]^u\right)^\frac1u\right\|_{\vec{p}} &=
\left\|\left(\sum_{k=1}^\infty[M_n(M_{n-1}\cdots M_1 f_k)]^u\right)^\frac1u\right\|_{\vec{p}}\\ &\lesssim
\left\|\left(\sum_{k=1}^\infty[M_{n-1}\cdots M_1 f_k]^u\right)^\frac1u\right\|_{\vec{p}}\\ &\lesssim
\left\|\left(\sum_{k=1}^\infty|f_k|^u\right)^\frac1u\right\|_{\vec{p}}. \end{align*} \item[(ii)] Let $u=\infty$. Then, simply using \[ \sup_{k\in \mathbb{N}} \mathcal{M}_1f_k \le \mathcal{M}_1\left[\sup_{k\in \mathbb{N}} |f_k|\right], \] we get the result. \end{itemize} \end{proof}
We can also show the vector-valued inequality for the Hardy--Littlewood maximal operator in mixed Morrey spaces.
\begin{theorem} Let $1<\vec{q}<\infty$, $1<u\le\infty$, and $1<p\le\infty$ satisfy $\frac np \le\sum_{j=1}^n\frac{1}{q_j}$. Then, for every sequence $\{f_j\}_{j=1}^{\infty} \subset L^0(\mathbb{R}^n)$, \begin{equation*}
\left\|\left( \sum_{j=1}^{\infty} [Mf_j]^u
\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \lesssim
\left\|\left( \sum_{j=1}^{\infty}
|f_j|^u
\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{equation*} \end{theorem}
\begin{proof} \begin{itemize} \item[(i)] Let $u=\infty$. Then, simply using \[ \sup_{j\in \mathbb{N}} Mf_j \le M\left[\sup_{j\in \mathbb{N}} |f_j|\right], \] we get the result.
\item[(ii)] Let $1<u<\infty$. We have to show that \[
|Q|^{\frac{1}{p}-\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\left\|\left( \sum_{j=1}^{\infty} [Mf_j]^u
\right)^{\frac{1}{u}}\chi_Q\right\|_{\vec{q}} \lesssim
\left\|\left( \sum_{j=1}^{\infty}
|f_j|^u
\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \] Let $f_{j,1}=f_j\chi_{5Q}$ and $f_{j,2}=f_j-f_{j,1}$. Using subadditivity of $M$, we have \begin{align*}
\left\|\left( \sum_{j=1}^{\infty} [Mf_j]^u
\right)^{\frac{1}{u}}\chi_Q\right\|_{\vec{q}} &\le
\left\|\left( \sum_{j=1}^{\infty} [Mf_{j,1}]^u
\right)^{\frac{1}{u}}\chi_Q\right\|_{\vec{q}}
+\left\|\left( \sum_{j=1}^{\infty} [Mf_{j,2}]^u
\right)^{\frac{1}{u}}\chi_Q\right\|_{\vec{q}}\\ &\equiv J_1+J_2. \end{align*} First, using Theorem \ref{thm 171123-2}, we have \begin{align*}
|Q|^{\frac{1}{p}-\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}J_1 &\le
|Q|^{\frac{1}{p}-\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\left\|\left( \sum_{j=1}^{\infty} [Mf_{j,1}]^u
\right)^{\frac{1}{u}}\right\|_{\vec{q}}\\ &\lesssim
|Q|^{\frac{1}{p}-\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\left\|\left( \sum_{j=1}^{\infty}
|f_{j,1}|^u
\right)^{\frac{1}{u}}\right\|_{\vec{q}}\\ &=
|Q|^{\frac{1}{p}-\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\left\|\left( \sum_{j=1}^{\infty}
|f_j|^u
\right)^{\frac{1}{u}}\chi_{5Q}\right\|_{\vec{q}}\\ &\lesssim
\left\|\left( \sum_{j=1}^{\infty}
|f_j|^u
\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{align*} Second, let $y \in Q$. By Proposition \ref{ex 171103-1}, \[
Mf_{j,2}(y) \lesssim \sup_{Q\subset R \in {\mathcal Q}} \frac{1}{|R|}\int_{R}|f_j(z)|{\rm d}z \lesssim
\sup_{\ell \in \mathbb{N}}\frac{1}{|2^{\ell}Q|}\int_{2^{\ell}Q}|f_j(z)|{\rm d}z. \] We decompose \[
2^{\ell}Q=\bigcup_{k=1}^{2^{\ell n}}Q^{(k)}, \quad |Q^{(k)}|=|Q|. \] Thus, \[ Mf_{j,2}(y) \lesssim
\sup_{\ell \in \mathbb{N}}\sum_{k=1}^{2^{\ell n}}\frac{1}{|2^{\ell}Q|}\int_{Q^{(k)}}|f_j(z)|{\rm d}z \le
\sup_{\ell \in \mathbb{N}}\max_{k=1, \ldots, 2^{\ell n}}\frac{1}{|Q^{(k)}|}\int_{Q^{(k)}}|f_j(z)|{\rm d}z. \] Using Minkowski's inequality, we get \begin{align*} \left(\sum_{j=1}^{\infty}Mf_{j,2}(y)^u\right)^{\frac{1}{u}} &\lesssim
\sup_{\ell \in \mathbb{N}}\max_{k=1, \ldots, 2^{\ell n}}\frac{1}{|Q^{(k)}|}
\left[\sum_{j=1}^{\infty}\left(\int_{Q^{(k)}}|f_j(y)|{\rm d}y\right)^u\right]^{\frac{1}{u}}\\ &\le
\sup_{\ell \in \mathbb{N}}\max_{k=1, \ldots, 2^{\ell n}}\frac{1}{|Q^{(k)}|}
\int_{Q^{(k)}}\left(\sum_{j=1}^{\infty}|f_j(y)|^u\right)^{\frac{1}{u}}{\rm d}y.\\ \end{align*} Multiplying $\chi_Q$ and taking $L^{\vec{q}}$-norm, we have \[
\left\|\left(\sum_{j=1}^{\infty}(Mf_{j,2})^u\right)^{\frac{1}{u}}\chi_Q\right\|_{\vec{q}} \lesssim
\sup_{\ell \in \mathbb{N}}\max_{k=1, \ldots, 2^{\ell n}}\frac{1}{|Q^{(k)}|}
\int_{Q^{(k)}}\left(\sum_{j=1}^{\infty}|f_j(y)|^u\right)^{\frac{1}{u}}{\rm d}y \times \|\chi_Q\|_{\vec{q}}. \] Therefore, using relation ${\mathcal M}^p_{\vec{q}}({\mathbb R}^n) \hookrightarrow {\mathcal M}^p_{(\underbrace{1, \ldots, 1}_{\mbox{$n$ times}})}({\mathbb R}^n) = {\mathcal M}^p_1({\mathbb R}^n)$, we obtain \begin{align*}
&|Q|^{\frac{1}{p}-\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}J_2\\ &\lesssim
|Q|^{\frac{1}{p}-\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}
\sup_{\ell \in \mathbb{N}}\max_{k=1, \ldots, 2^{\ell n}}\frac{1}{|Q^{(k)}|}
\int_{Q^{(k)}}\left(\sum_{j=1}^{\infty}|f_j(y)|^u\right)^{\frac{1}{u}}{\rm d}y
\times |Q|^{\frac{1}{n}\left(\sum_{j=1}^n\frac{1}{q_j}\right)}\\ &= \sup_{\ell \in \mathbb{N}}\max_{k=1, \ldots, 2^{\ell n}}
|Q^{(k)}|^{\frac{1}{p}-1}\int_{Q^{(k)}}\left(\sum_{j=1}^{\infty}|f_j(y)|^u\right)^{\frac{1}{u}}{\rm d}y\\ &\le
\left\|\left(\sum_{j=1}^{\infty}|f_j|^u\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_1(\mathbb{R}^n)} \le
\left\|\left(\sum_{j=1}^{\infty}|f_j|^u\right)^{\frac{1}{u}}\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}. \end{align*} \end{itemize} Thus, the result holds. \end{proof}
We can also prove the Fefferman--Stein vector-valued inequality for the iterated maximal operator in mixed Morrey spaces. The argument is similar to that of Theorem \ref{thm 180119-1}. First, we prepare the following proposition, which is the vector-valued analogue of Proposition \ref{prop 180119-6}.
\begin{proposition}\label{prop 180129-5} Let $1<\vec{q}<\infty$, $1<u\le\infty$, and $w_k \in A_{q_k}(\mathbb{R})$ for $k=1,\ldots,n$. Then, \[
\left\| \left(\sum_{j=1}^\infty [\mathcal{M}_1f_j]^u \right)^\frac1u \cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}} \lesssim
\left\| \left(\sum_{j=1}^\infty
|f_j|^u \right)^\frac1u \cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}}, \] for every sequence $\{f_j\}_{j=1}^\infty \subset L^0(\mathbb{R}^n)$. \end{proposition}
\begin{proof} We induct on $n$. Let $n=1$. Then, this is clear by Lemma \ref{lem 180115-1}. Suppose that the result holds for $n-1$, that is, \begin{align*}
\left\| \left(\sum_{j=1}^\infty [M_{n-1}\cdots M_1h_j]^u \right)^\frac1u \cdot \bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1, \ldots, q_{n-1})}\\ \lesssim
\left\| \left(\sum_{j=1}^\infty
|h_j|^u \right)^\frac1u \cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1, \ldots, q_{n-1})}, \end{align*} for $h_j \in L^0(\mathbb{R}^{n-1})$. Then, again by Lemma \ref{lem 180119-4}, \begin{align}\label{eq 180517-2}
&\left\| \left(\sum_{j=1}^\infty [\mathcal{M}_1f_j]^u \right)^\frac1u \cdot\bigotimes_{k=1}^nw_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}}\nonumber \\
&\lesssim
\left\|
\left[\left\| \left(\sum_{j=1}^\infty [M_{n-1}\cdots M_1f_j]^u \right)^\frac1u \cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1, \ldots, q_{n-1})}\right] w_n(\cdot_n)^\frac{1}{q_n}
\right\|_{(q_n)}. \end{align} Thus, by induction hypothesis, \begin{align*}
&\mbox{the right-hand side of (\ref{eq 180517-2})}\\ &=
\left\|
\left[\left\| \left(\sum_{j=1}^\infty [M_{n-1}\cdots M_1f_j]^u \right)^\frac1u \cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1, \ldots, q_{n-1})}\right]
w_n(\cdot_n)^\frac{1}{q_n}\right\|_{(q_n)}\\ &\lesssim
\left\|\left[
\left\| \left(\sum_{j=1}^\infty
|f_j|^u \right)^\frac1u \cdot\bigotimes_{k=1}^{n-1}w_{k}^{\frac{1}{q_k}}
\right\|_{(q_1, \ldots, q_{n-1})}\right]
w_n(\cdot_n)^\frac{1}{q_n}\right\|_{(q_n)}\\ &=
\left\| \left(\sum_{j=1}^\infty
|f_j|^u \right)^\frac1u \cdot\bigotimes_{k=1}^{n}w_{k}^{\frac{1}{q_k}}
\right\|_{\vec{q}}. \end{align*} \end{proof}
\begin{theorem} \label{thm 180129-1} Let $0<\vec{q}\le\infty$, $0<u\le\infty$, and $0<p<\infty$ satisfy \[ \frac{n}{p} \le\sum_{j=1}^n\frac{1}{q_j}, \quad \frac{n-1}{n}p<\max(q_1,\ldots,q_n). \]
If $0<t<\min(q_1, \ldots, q_n, p, u)$, then \begin{equation*}
\left\|\left(\sum_{j=1}^\infty [{\mathcal M}_tf_j]^u\right)^\frac1u
\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)}
\lesssim \left\|\left(\sum_{j=1}^\infty|f_j|^u\right)^\frac1u\right\|_{\mathcal{M}^p_{\vec{q}}(\mathbb{R}^n)} \end{equation*} for all sequences $\{f_j\}_{j=1}^{\infty} \subset L^0(\mathbb{R}^n)$. \end{theorem}
\begin{proof} We have only to check the case $t=1$, $1<p<\infty$, $1<u\le\infty$ and $1<\vec{q}<\infty$, as we did in Theorem \ref{thm 171206-1}. For $\eta \in \mathbb{R}$ satisfying \begin{equation} \label{eq 180129-2} 0<\sum_{j=1}^n \frac{1}{q_j}-\frac{n}{p}<\eta<\frac{1}{\max(q_1,\ldots,q_n)}, \end{equation} once we show \begin{equation}\label{eq 180129-3}
\left\|\left(\sum_{j=1}^\infty
[{\mathcal M}_1f_j]^u\right)^\frac1u(\mathcal{M}_1\chi_Q)^\eta\right\|_{\vec{q}} \lesssim
\left\|\left(\sum_{j=1}^\infty|f_j|^u\right)^\frac1u(\mathcal{M}_1\chi_Q)^\eta\right\|_{\vec{q}}, \end{equation} we get \begin{align*}
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
&\left\|\left(\sum_{j=1}^\infty
[{\mathcal M}_1f_j]^u\right)^\frac1u(\mathcal{M}_1\chi_Q)^\eta\right\|_{\vec{q}}\\ &\lesssim
|Q|^{\frac1p-\frac1n\sum_{j=1}^n \frac{1}{q_j}}
\left\|\left(\sum_{j=1}^\infty|f_j|^u\right)^\frac1u(\mathcal{M}_1\chi_Q)^\eta\right\|_{\vec{q}}. \end{align*} Taking the supremum over all cubes and using Proposition \ref{prop 180119-5}, we conclude the result.
We shall show (\ref{eq 180129-3}). Let $Q=I_1 \times I_2 \times \cdots \times I_n$. Then, \[ (\mathcal{M}_1\chi_Q)^\eta = \left(\bigotimes_{j=1}^nM_j\chi_{I_j}\right)^\eta = \bigotimes_{j=1}^n\left(M_j\chi_{I_j}\right)^\eta. \] Here, $(M_j\chi_{I_j})^{\eta q_j}$ is an $A_1$-weight if and only if \begin{equation}\label{eq 180129-4} 0\le\eta q_j<1. \end{equation}
Since (\ref{eq 180129-2}) guarantees (\ref{eq 180129-4}) for every $j$, we have $(M_j\chi_{I_j})^{\eta q_j} \in A_1 \subset A_{q_j}(\mathbb{R})$. Thus, by Proposition \ref{prop 180129-5}, we obtain \begin{align*}
\left\|\left(\sum_{j=1}^\infty
[{\mathcal M}_1f_j]^u\right)^\frac1u(\mathcal{M}_1\chi_Q)^\eta\right\|_{\vec{q}} &=
\left\|\left(\sum_{j=1}^\infty
[{\mathcal M}_1f_j]^u\right)^\frac1u\bigotimes_{j=1}^n\left(M_j\chi_{I_j}\right)^\eta\right\|_{\vec{q}}\\ &\lesssim
\left\|\left(\sum_{j=1}^\infty|f_j|^u\right)^\frac1u\bigotimes_{j=1}^n\left(M_j\chi_{I_j}\right)^\eta\right\|_{\vec{q}}\\ &=
\left\|\left(\sum_{j=1}^\infty|f_j|^u\right)^\frac1u(\mathcal{M}_1\chi_Q)^\eta\right\|_{\vec{q}}. \end{align*} Thus, (\ref{eq 180129-3}) holds.
\end{proof}
\begin{corollary} Let $0<u\le\infty$ and \begin{equation*} 0<\frac{n-1}{n}p<q\le p<\infty. \end{equation*}
If $0<t<\min(q, u)$, then \[
\left\|\left(\sum_{j=1}^\infty [{\mathcal M}_tf_j]^u\right)^\frac1u
\right\|_{\mathcal{M}^p_q(\mathbb{R}^n)} \lesssim
\left\| \left(\sum_{j=1}^\infty
|f_j|^u\right)^\frac1u
\right\|_{\mathcal{M}^p_q(\mathbb{R}^n)} \] for all sequences $\{f_j\}_{j=1}^\infty \subset L^0(\mathbb{R}^n)$.
\end{corollary}
\begin{proof} In Theorem \ref{thm 180129-1}, letting $q_j=q$, we conclude the result. \end{proof}
\section{Proof of Theorems \ref{thm 171219-1} and \ref{thm 180514-1}} \label{sec fractional}
At the beginning of this section, we show the boundedness of the fractional integral operator, that is, Theorem \ref{thm 171219-1}.
We follow the idea of Tanaka \cite{Tanaka}.
\begin{proof} Fix $x \in \mathbb{R}^n$. Without loss of generality, we may assume that $f$ is non-negative and that $0<I_{\alpha}f(x)<\infty$. Then, we see that there exists $R>0$ such that \[
\int_{\{|x-y|\le R\}}\frac{f(y)}{|x-y|^{n-\alpha}}{\rm d}y=\frac{I_{\alpha}f(x)}{2}. \] We shall obtain two estimates. First, \begin{align*} \frac{I_{\alpha}f(x)}{2} &=
\int_{\{|x-y|\le R\}}\frac{f(y)}{|x-y|^{n-\alpha}}{\rm d}y\\ &= \sum_{j=-\infty}^0
\int_{\{2^{j-1}R<|x-y|\le 2^jR\}}\frac{f(y)}{|x-y|^{n-\alpha}}{\rm d}y\\ &\lesssim \sum_{j=-\infty}^0
\frac{(2^jR)^{\alpha}}{(2^jR)^{n}}\int_{\{|x-y|\le 2^jR\}}f(y){\rm d}y\\ &\le Mf(x)\sum_{j=-\infty}^0(2^jR)^{\alpha}\\ &\sim R^{\alpha}Mf(x). \end{align*} Second, \begin{align*} \frac{I_{\alpha}f(x)}{2} &=
\int_{\{|x-y|> R\}}\frac{f(y)}{|x-y|^{n-\alpha}}{\rm d}y = \sum_{j=1}^{\infty}
\int_{\{2^{j-1}R<|x-y|\le 2^jR\}}\frac{f(y)}{|x-y|^{n-\alpha}}{\rm d}y. \end{align*} Using Proposition \ref{prop 180521-1}, we get \begin{align*} \frac{I_{\alpha}f(x)}{2} &\lesssim \sum_{j=1}^{\infty}
\frac{(2^jR)^{\alpha}}{(2^jR)^{n}}\int_{\{|x-y|\le 2^jR\}}f(y){\rm d}y\\ &= \sum_{j=1}^{\infty} \frac{(2^jR)^{\alpha}}{(2^jR)^{\frac{n}{p}}} (2^jR)^{n\left(\frac{1}{p}-1\right)}
\int_{\{|x-y|\le 2^jR\}}f(y){\rm d}y\\ &\le
\|f\|_{\mathcal{M}_1^p(\mathbb{R}^n)} \sum_{j=1}^{\infty} \frac{(2^jR)^{\alpha}}{(2^jR)^{\frac{n}{p}}}\\
&\sim R^{\alpha-\frac{n}{p}}\|f\|_{\mathcal{M}_1^p(\mathbb{R}^n)}
\le R^{\alpha-\frac{n}{p}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)} =
R^{-\frac{n}{r}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}. \end{align*} Thus, we obtain \begin{align*} I_{\alpha}f(x) \lesssim
\min(R^{\alpha}Mf(x), R^{-\frac{n}{r}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}). \end{align*} We now eliminate the factor $R$ by the following argument: \begin{align*} I_{\alpha}f(x) &\lesssim
\min(R^{\alpha}Mf(x), R^{-\frac{n}{r}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)})\\ &\le
\sup_{t>0}\min(t^{\alpha}Mf(x), t^{-\frac{n}{r}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)})\\ &=
\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{\frac{p\alpha}{n}} Mf(x)^{1-\frac{p\alpha}{n}}, \end{align*} where the supremum is attained at the value of $t$ for which the two terms coincide, namely $t^{\alpha+\frac{n}{r}}=\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}/Mf(x)$; here we use the condition $\frac{1}{r}=\frac{1}{p}-\frac{\alpha}{n}$, which gives $\alpha+\frac{n}{r}=\frac{n}{p}$ and hence the exponents above. The same condition yields \[ 1-\frac{p\alpha}{n}=\frac{p}{r}. \] Thus, we get \[
I_{\alpha}f(x)\lesssim \|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{1-\frac{p}{r}} Mf(x)^{\frac{p}{r}}. \] This pointwise estimate gives us that \[
\|I_{\alpha}f\|_{\mathcal{M}_{\vec{s}}^r(\mathbb{R}^n)}
\lesssim \|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{1-\frac{p}{r}}
\|\left[Mf\right]^{\frac{p}{r}}\|_{\mathcal{M}_{\vec{s}}^r(\mathbb{R}^n)}. \] Since \[ \frac{q_1}{s_1}=\cdots=\frac{q_n}{s_n}=\frac{p}{r}, \] we have \begin{align*}
\|\left[Mf\right]^{\frac{p}{r}}\|_{\mathcal{M}_{\vec{s}}^r(\mathbb{R}^n)} =
\|Mf\|_{\mathcal{M}_{\frac{p}{r}\vec{s}}^{r\frac{p}{r}}(\mathbb{R}^n)}^{\frac{p}{r}} =
\|Mf\|_{\mathcal{M}_{\vec{q}}^{p}(\mathbb{R}^n)}^{\frac{p}{r}}. \end{align*} Thus using Theorem \ref{thm 180122-1}, we obtain \begin{align*}
\|I_{\alpha}f\|_{\mathcal{M}_{\vec{s}}^r(\mathbb{R}^n)}
&\lesssim \|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{1-\frac{p}{r}}
\|\left[Mf\right]^{\frac{p}{r}}\|_{\mathcal{M}_{\vec{s}}^r(\mathbb{R}^n)}\\ &=
\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{1-\frac{p}{r}}
\|Mf\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{\frac{p}{r}}\\ &\lesssim
\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{1-\frac{p}{r}}
\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}^{\frac{p}{r}}\\
&=\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}. \end{align*} \end{proof}
Next, we prove the boundedness of the singular integral operators. The following theorem appears to be new; we include a short proof. \begin{theorem} \label{thm 180604-1} Let $1<\vec{q}<\infty$. Then, \[
\|Tf\|_{\vec{q}}\lesssim\|f\|_{\vec{q}} \] for $f \in L^{\vec{q}}(\mathbb{R}^n)$. \end{theorem}
\begin{proof} Put $\vec{q}=\theta\vec{r}$, where $\theta>1$ and $\vec{r}>1$. Then, by the $L^{\vec{r}}(\mathbb{R}^n)$-$L^{\vec{r'}}(\mathbb{R}^n)$ duality, there exists a non-negative $g\in L^{\vec{r'}}(\mathbb{R}^n)$ with $\|g\|_{\vec{r'}}=1$ such that \begin{align*}
\|Tf\|_{\vec{q}}=\left\||Tf|^\theta\right\|_{\vec{r}}^\frac1\theta
=\left(\int_{\mathbb{R}^n}|Tf(x)|^\theta g(x){\rm d}x\right)^\frac1\theta. \end{align*}
Since $g(x)\le M[|g|^\frac1\eta](x)^\eta$ almost everywhere and $M[|g|^\frac1\eta]^\eta \in A_1$ for $0<\eta<1$ (we fix such an $\eta$ sufficiently close to $1$ so that $\eta\vec{r'}>1$), we get \begin{align*}
\|Tf\|_{\vec{q}} \le
\left(\int_{\mathbb{R}^n}|Tf(x)|^\theta M[|g|^\frac1\eta](x)^\eta{\rm d}x\right)^\frac1\theta \lesssim
\left(\int_{\mathbb{R}^n}|f(x)|^\theta M[|g|^\frac1\eta](x)^\eta{\rm d}x\right)^\frac1\theta. \end{align*} By H\"older's inequality and the boundedness of the Hardy--Littlewood maximal operator, \begin{align*}
\|Tf\|_{\vec{q}} \lesssim
\left\||f|^\theta\right\|_{\vec{r}}^\frac1\theta\left\| \left(M[|g|^\frac1\eta]\right)^\eta\right\|_{\vec{r'}}^\frac1\theta \lesssim
\left\|f\right\|_{\theta\vec{r}}\left\||g|^\frac1\eta\right\|_{\eta\vec{r'}}^\frac{\eta}{\theta}
=\|f\|_{\vec{q}}\|g\|_{\vec{r'}}^\frac1\theta=\|f\|_{\vec{q}}. \end{align*} Thus, the result holds. \end{proof}
We now recall Theorem \ref{thm 180514-1}.
\begin{theorem} Let $1<\vec{q}<\infty$ and $1<p<\infty$ satisfy \[ \frac np \le\sum_{j=1}^n\frac{1}{q_j}. \] Then, if we restrict $T$, which is initially defined on $\mathcal{M}_{\min(q_1, \ldots, q_n)}^p(\mathbb{R}^n)$, to $\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)$, we have \begin{equation}\label{eq 180514-2}
\|Tf\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)} \lesssim \|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)} \end{equation} for $f \in \mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)$. \end{theorem}
\begin{proof} Let $f \in \mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)$ and, for an arbitrary cube $Q=Q(z,r)$, decompose $f=f\chi_{2Q}+f\chi_{(2Q)^c} \equiv f_1+f_2$. Then, since $T$ is bounded on $L^{\vec{q}}(\mathbb{R}^n)$ by Theorem \ref{thm 180604-1} and $f_1 \in L^{\vec{q}}(\mathbb{R}^n)$, \begin{align} \label{eq 180514-3}
|Q|^{\frac 1p-\frac1n\sum_{j=1}^n\frac{1}{q_j}}\|(Tf_1)\chi_Q\|_{\vec{q}} &\le
|Q|^{\frac 1p-\frac1n\sum_{j=1}^n\frac{1}{q_j}}\|Tf_1\|_{\vec{q}} \nonumber \\ &\lesssim
|Q|^{\frac 1p-\frac1n\sum_{j=1}^n\frac{1}{q_j}}\|f_1\|_{\vec{q}} \le
\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}. \end{align} Fix $x \in Q$ and put \[
f_r(x)=\frac{1}{r^n}\int_{Q(x,r)}|f(y)|{\rm d}y. \] Then, by H\"older's inequality, we have \begin{align*}
|f_r(x)|
&\le \frac{1}{r^n} |Q(x,r)|^{\frac1n\sum_{j=1}^n\frac{1}{q'_j}} \|f\chi_{Q(x,r)}\|_{\vec{q}}\nonumber
\sim \frac{1}{r^n} r^{\sum_{j=1}^n\frac{1}{q'_j}} \|f\chi_{Q(x,r)}\|_{\vec{q}}\\
&=r^{-\frac{n}{p}}r^{\frac{n}{p}-\sum_{j=1}^n\frac{1}{q_j}}\|f\chi_{Q(x,r)}\|_{\vec{q}}
\le r^{-\frac{n}{p}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}. \end{align*} Thus, \begin{align}\label{eq 180514-4}
|Tf_2(x)| &\le
\int_{(2Q)^c}|k(x,y)||f(y)|{\rm d}y \lesssim
\int_{(2Q)^c}\frac{|f(y)|}{|x-y|^n}{\rm d}y \lesssim \int_{2r}^\infty \frac{1}{\ell}f_{\ell}(x) {\rm d}\ell \nonumber\\ &\lesssim
\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}\int_{2r}^\infty \ell^{-\frac{n}{p}-1} {\rm d}\ell
\lesssim r^{-\frac{n}{p}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}. \end{align} Thus, by (\ref{eq 180514-4}), we obtain \begin{align} \label{eq 180514-5}
|Q|^{\frac 1p-\frac1n\sum_{j=1}^n\frac{1}{q_j}}\|(Tf_2)\chi_Q\|_{\vec{q}} &\lesssim
|Q|^{\frac 1p-\frac1n\sum_{j=1}^n\frac{1}{q_j}}r^{-\frac{n}{p}}\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}\|\chi_Q\|_{\vec{q}} =
\|f\|_{\mathcal{M}_{\vec{q}}^p(\mathbb{R}^n)}. \end{align} By (\ref{eq 180514-3}) and (\ref{eq 180514-5}), we get the result. \end{proof}
\end{document}
\begin{document}
\begin{abstract} Let $K$ be an NIP field and let $v$ be a henselian valuation on $K$. We ask whether $(K,v)$ is NIP as a valued field. By a result of Shelah, we know that if $v$ is externally definable, then $(K,v)$ is NIP. Using the definability of the canonical $p$-henselian valuation, we show that whenever the residue field of $v$ is not separably closed, then $v$ is externally definable. In the case of separably closed residue field, we show that $(K,v)$ is NIP as a pure valued field. \end{abstract}
\title{When does NIP transfer from fields to henselian expansions?} \section{Introduction and Motivation} There are many open questions connecting NIP and henselianity, most prominently \begin{Q} \begin{enumerate} \item Is any valued NIP field $(K,v)$ henselian? \item Let $K$ be an NIP field, neither separably closed nor real closed. Does $K$ admit a definable non-trivial henselian valuation? \end{enumerate} \end{Q} Both of these questions have been recently answered positively in the special case where `NIP' is replaced with `dp-minimal' (cf. Johnson's results in \cite{Joh15}). Moreover, Johnson also showed that question (1) can be answered affirmatively when the characteristic of $K$ is positive and showed a positive answer to question (2) when `NIP' is replaced by `dp-finite of positive characteristic' (\cite{Joh19}).
The question discussed here is the following: \begin{Q} \label{mainq} Let $K$ be an NIP field (in an expansion of the language of rings) and $v$ a henselian valuation on $K$. Is $(K,v)$ NIP? \end{Q} Note that this question neither implies nor is implied by any of the above questions; it does, however, follow the same line of inquiry into how close the bond between NIP and henselianity really is.
The first aim of this article is to show that the answer to Question \ref{mainq} is `yes' if $Kv$ is not separably closed: { \renewcommand{B}{A} \begin{Thm} \label{A} Let $(K,v)$ be henselian and such that $Kv$ is not separably closed. Then $v$ is definable in the Shelah expansion $K^\mathrm{Sh}$. \end{Thm} } See section \ref{Sh} for the definition of $K^\mathrm{Sh}$. The theorem follows immediately from combining Propositions \ref{notreal} and \ref{real}. If $v$ is definable in $K^\mathrm{Sh}$, then one can add a symbol for the valuation ring $\mathcal{O}$ to any language $\mathcal{L}$ extending $\mathcal{L}_\textrm{ring}$ and obtain that if $K$ is NIP as an $\mathcal{L}$-structure, then $(K,v)$ is NIP as an $\mathcal{L} \cup \{\mathcal{O}\}$-structure. Theorem \ref{A} is proven using the definability of the canonical $p$-henselian valuation. We make a case distinction between when $Kv$ is neither separably closed nor real closed (Proposition \ref{notreal}) and when $Kv$ is real closed (Proposition \ref{real}).
On the other hand, if $Kv$ is separably closed, then we cannot hope for a result in the same generality: it is well-known that any algebraically closed valued field is NIP in $\mathcal{L}_\textrm{ring} \cup \{\mathcal{O}\}$; however, any algebraically closed field with two independent valuations has IP (\cite[Theorem 6.1]{Joh13}, see also Example \ref{extra}). In this case, we can still consider the question in the language of rings: given an NIP field $K$ and a henselian valuation $v$ on $K$, is $(K,v)$ NIP in $\mathcal{L}_\textrm{ring}\cup\{\mathcal{O}\}$? The answer to this is again positive: {\renewcommand{B}{B} \begin{Thm} \label{B} Let $K$ be NIP, $v$ henselian on $K$. Then $(K,v)$ is NIP as a pure valued field. \end{Thm}} Theorem \ref{B} is proven as Theorem \ref{main2} in section \ref{scf}. The proof of the theorem uses two NIP transfer theorems recently proven in \cite{JS} and \cite{AJ19b}. A transfer theorem gives criteria under which dependence of the residue field implies dependence of the (pure) valued field. Delon proved a transfer theorem for henselian valued fields of equicharacteristic $0$ (see \cite{Del} or {\cite[Theorem A.15]{Simon:book}}), and B\'elair proved a version for equicharacteristic Kaplansky fields which are algebraically maximal (see \cite{Bel}). The transfer theorems proven in \cite[Theorem 3.3]{JS} and \cite[Proposition 4.1]{AJ19b} generalize these results to separably algebraically maximal Kaplansky fields; in particular, they also work in mixed characteristic. See section \ref{scf} for definitions and more details. Combining these transfer theorems with an idea of Scanlon and some standard trickery concerning definable valuations yields that under the assumptions of Theorem \ref{B}, we can find a decomposition of $v = \bar{v} \circ w$ into two NIP valuations. The question of whether the composition of two henselian NIP valuations is again NIP seems to be open. For the case when the residue field of the coarser valuation is stably embedded, this follows from \cite[Proposition 2.5]{JS}. Using the fact that the residue field of separably algebraically maximal Kaplansky fields and of unramified henselian fields is stably embedded, we are able to prove the second part of Theorem \ref{B}.
The paper is organized as follows: In section \ref{nscf}, we first recall the necessary background concerning the Shelah expansion. We then discuss the definition and definability of the canonical $p$-henselian valuation. In the final part, we use these two ingredients to prove Theorem \ref{A}. In particular, we conclude that for any henselian NIP field the residue field is NIP as a pure field. We also obtain as a consequence that if a field $K$ admits a non-trivial henselian valuation and is NIP in some $\mathcal{L} \supseteq \mathcal{L}_\mathrm{ring}$, then there is \emph{some} non-trivial henselian valuation $v$ on $K$ such that $(K,v)$ is NIP in $\mathcal{L} \cup \{\mathcal{O}\}$ (Corollary \ref{exist}).
In the third section, we treat the case of separably closed residue fields. We first recall an example which shows that we have to restrict Question \ref{mainq} to the language of pure valued fields. We then briefly review different ingredients, starting with the transfer theorem for separably algebraically maximal Kaplansky fields. After quoting a result by Delon that separably closed valued fields are NIP, we state and prove a proposition due to Scanlon (Proposition \ref{scan}) which implies that on an NIP field, any henselian valuation with non-perfect residue field is $\mathcal{L}_\mathrm{ring}$-definable. We then recall some facts about stable embeddedness and the composition of NIP valuations. In the final subsection, we prove Theorem \ref{B}.
Finally, in section \ref{ord}, we treat the simpler case of convex valuation rings on an ordered field $(K,<)$. As any convex valuation ring is definable in $(K,<)^\mathrm{Sh}$, we conclude that if $(K,<)$ is an ordered NIP field in some language $\mathcal{L} \supseteq \mathcal{L}_\mathrm{ring} \cup \{<\}$ and $v$ is a convex valuation on $K$, then $(K,v)$ is NIP in $\mathcal{L} \cup \{\mathcal{O}\}$ (Corollary \ref{cord}).
Throughout the paper, we use the following notation: for a valued field $(K,v)$, we write $vK$ for the value group, $Kv$ for the residue field and $\mathcal{O}_v$ for the valuation ring of $v$.
\section{Non-separably closed residue fields} \label{nscf} \subsection{Externally definable sets} \label{Sh} Throughout the subsection, let $M$ be a structure in some language $\mathcal{L}$. \begin{Def}
Let $N \succ M$ be an $|M|^+$-saturated elementary extension. A subset $A \subseteq M$ is called \emph{externally definable} if it is of the form
$$\{a \in M^{|\bar{x}|}\,|\,N \models \varphi(a,b)\}$$
for some $\mathcal{L}$-formula $\varphi(\bar{x},\bar{y})$ and some $b \in N^{|\bar{y}|}$. \end{Def} The notion of externally definable sets does not depend on the choice of $N$. See {\cite[Chapter 3]{Simon:book}} for more details on externally definable sets.
\begin{Def} The \emph{Shelah expansion} $M^\mathrm{Sh}$ is the expansion of $M$ by predicates for all externally definable sets. \end{Def}
Note that the Shelah expansion behaves well when it comes to NIP: \begin{Prop}[Shelah, {\cite[Corollary 3.14]{Simon:book}}] If $M$ is NIP then so is $M^\mathrm{Sh}$. \label{Shelah} \end{Prop}
The way the Shelah expansion is used in this paper is to show that any coarsening of a definable valuation on an NIP field is an NIP valuation. Thus, the following example is crucial: \begin{Ex} \label{mainex} Let $(K,w)$ be a valued field and $v$ be a coarsening of $w$, i.e., a valuation on $K$ with $\mathcal{O}_v \supseteq \mathcal{O}_w$. Then, there is a convex subgroup $\Delta \leq wK$ such that we have $vK \cong wK/\Delta$. As $\Delta$ is externally definable in the ordered abelian group $wK$ (in a sufficiently saturated elementary extension, any element realising the cut directly above $\Delta$ defines $\Delta$ as an interval), the valuation ring $\mathcal{O}_v$ is definable in $(K,w)^\mathrm{Sh}$. \end{Ex}
\subsection{$p$-henselian valuations} Throughout this subsection, let $K$ be a field and $p$ a prime. We recall the main properties of the canonical $p$-henselian valuation on $K$. We define $K(p)$ to be the compositum of all Galois extensions of $K$ of $p$-power degree (in a fixed algebraic closure). Note that we have \begin{itemize} \item $K \neq K(p)$ iff $K$ admits a Galois extension of degree $p$ and \item if $[K(p):K]<\infty$ then $K=K(p)$ or $p=2$ and $K(2)=K(\sqrt{-1})$ (see \cite[Theorem 4.3.5]{EP05}). \end{itemize} A field $K$ which admits exactly one Galois extension of $2$-power degree is called \emph{Euclidean}. Any Euclidean field is uniquely ordered, the positive elements being exactly the squares (see \cite[Proposition 4.3.4 and Theorem 4.3.5]{EP05}). In particular, the ordering on a Euclidean field is $\mathcal{L}_\mathrm{ring}$-definable.
\begin{Def} A valuation $v$ on a field $K$ is called \emph{$p$-henselian} if $v$ extends uniquely to $K(p)$. We call $K$ \emph{$p$-henselian} if $K$ admits a non-trivial $p$-henselian valuation. \end{Def}
Every henselian valuation is $p$-henselian for all primes $p$. Assume $K \neq K(p)$. Then, there is a canonical $p$-henselian valuation on $K$: We divide the class of $p$-henselian valuations on $K$
into two subclasses,
$$H^p_1(K) = \{v\; p\textrm{-henselian on } K \,| \,Kv \neq Kv(p)\}$$ and
$$H^p_2(K) = \{ v\; p\textrm{-henselian on } K \,|\, Kv = Kv(p) \}.$$ One can show that any valuation $v_2 \in H^p_2(K)$ is \emph{finer} than any $v_1 \in H^p_1(K)$, i.e. ${\mathcal O}_{v_2} \subsetneq {\mathcal O}_{v_1}$, and that any two valuations in $H^p_1(K)$ are comparable. Furthermore, if $H^p_2(K)$ is non-empty, then there exists a unique coarsest valuation $v_K^p$ in $H^p_2(K)$; otherwise there exists a unique finest valuation $v_K^p \in H^p_1(K)$. In either case, $v_K^p$ is called the \emph{canonical $p$-henselian valuation} (see \cite{Koe95} for more details).
The following properties of the canonical $p$-henselian valuation follow immediately from the definition: \begin{itemize} \item If $K$ is $p$-henselian then $v_K^p$ is non-trivial. \item Any $p$-henselian valuation on $K$ is comparable to $v_K^p$. \item If $v$ is a $p$-henselian valuation on $K$ with $Kv \neq Kv(p)$, then $v$ coarsens $v_K^p$. \item If $p=2$ and $Kv_K^2$ is Euclidean, then there is a (unique) $2$-henselian valuation $v_K^{2*}$ such that $v_K^{2*}$ is the coarsest $2$-henselian valuation with Euclidean residue field. \end{itemize}
\begin{Thm}[{\cite[Corollary 3.3]{JK14}}] \label{JK14} Let $p$ be a prime and consider the (elementary) class of fields
$$\mathcal{K} = \{ K \,|\, K \;p\textrm{-henselian, with }\zeta_p \in K \textrm{ in case } \mathrm{char}(K)\neq p\}$$ There is a parameter-free $\mathcal{L}_\textrm{ring}$-formula $\psi_p(x)$ such that \begin{enumerate} \item if $p \neq 2$ or $Kv_K^2$ is not Euclidean, then $\psi_p(x)$ defines the valuation ring of the canonical $p$-henselian valuation $v_K^p$, and \item if $p=2$ and $Kv_K^2$ is Euclidean, then $\psi_p(x)$ defines the valuation ring of the coarsest $2$-henselian valuation $v_K^{2*}$ such that $Kv_K^{2*}$ is Euclidean. \end{enumerate} \end{Thm}
\subsection{External definability of henselian valuations} In this subsection, we apply the results from the previous two subsections to prove Theorem \ref{A} from the introduction. \begin{Prop} \label{notreal} Let $(K,v)$ be henselian such that $Kv$ is neither separably closed nor real closed. Then $v$ is definable in $K^\mathrm{Sh}$. \end{Prop} \begin{proof} Assume $Kv$ is neither separably closed nor real closed. For any finite separable extension $F$ of $K$, we use $u$ to denote the (by henselianity unique) extension of $v$ to $F$. Choose any prime $p$ such that $Kv$ has a finite Galois extension $k$ of degree divisible by $p^2$. Consider a finite Galois extension $N \supseteq K$ such that $Nu=k$. Note that such an $N$ exists by \cite[Corollary 4.1.6]{EP05}. Now, let $P$ be a $p$-Sylow of $\mathrm{Gal}(Nu/Kv)$. Recall that the canonical restriction map $$\mathrm{res}: \mathrm{Gal}(N/K) \to \mathrm{Gal}(Nu/Kv)$$ is a surjective homomorphism (\cite[Lemma 5.2.6]{EP05}). Let $G \leq \mathrm{Gal}(N/K)$ be the preimage of $P$ under this map, and let $L:=\mathrm{Fix}(G)$ be the intermediate field fixed by $G$. In particular, $L$ is a finite separable extension of $K$. By construction, the extension $Lu \subseteq Nu$ is a Galois extension of degree $p^n$ for some $n \geq 2$, in particular, we have $Lu\neq Lu(p)$.
Hence, we have constructed some finite separable extension $L$ of $K$ with $Lu\neq Lu(p)$. Moreover, we may assume that $L$ contains a primitive $p$th root of unity in case $p\neq 2$ and $\mathrm{char}(K)\neq p$: The field $L':=L(\zeta_p)$ is again a finite separable extension of $K$ and its residue field is a finite extension of $Lu$. Thus, by \cite[Theorem 4.3.5]{EP05}, we get $L'u \neq L'u(p)$. Similarly, in case $p = 2$ and $\mathrm{char}(K)=0$, we may assume that $L$ contains a square root of $-1$: By construction, $Lu$ has a Galois extension of degree $p^n$ for some $n \geq 2$. Consider $L':=L(\sqrt{-1})$, then $L'u$ is not $2$-closed and not orderable. In this case, no $2$-henselian valuation on $L'$ has Euclidean residue field (see \cite[Lemma 4.3.6]{EP05}).
Finally, by Theorem \ref{JK14}, $v_L^p$ is definable on $L$ by a parameter-free $\mathcal{L}_\textrm{ring}$-formula $\varphi_p(x)$. It follows from the defining properties of $v_L^p$ that $\mathcal{O}_{v_L^p} \subseteq \mathcal{O}_{u}$ holds. As $L/K$ is finite, $L$ is interpretable in $K$. Hence, $\mathcal{O}_w:=\mathcal{O}_{v_L^p} \cap K$ is an $\mathcal{L}_\textrm{ring}$-definable valuation ring of $K$ with $\mathcal{O}_w \subseteq \mathcal{O}_v$. By Example \ref{mainex}, $v$ is definable in $K^\mathrm{Sh}$. \end{proof}
\begin{Prop} \label{real} Let $(K,v)$ be henselian such that $Kv$ is real closed. Then $v$ is definable in $K^\mathrm{Sh}$. \end{Prop} \begin{proof} Assume that $(K,v)$ is henselian and $Kv$ is real closed. Then $K$ is orderable.
We first reduce to the case that $K$ is Euclidean: Note that $v$ is a $2$-henselian valuation with Euclidean residue field. Let $v_K^{2*}$ be the coarsest $2$-henselian valuation on $K$ with Euclidean residue field, which
is $\emptyset$-definable on $K$ in $\mathcal{L}_\mathrm{ring}$ by Theorem \ref{JK14}. Now, if the induced valuation $\overline{v}$ on $Kv_K^{2*}$ is definable in $(Kv_K^{2*})^\mathrm{Sh}$, then the valuation ring of $v$, which is the composition of $v_K^{2*}$ and $\overline{v}$, is also definable in $K^\mathrm{Sh}$.
Thus, we may assume that $K$ is Euclidean. In this case, $K$ is uniquely ordered
and the ordering on $K$ is $\mathcal{L}_\textrm{ring}$-definable. Let $\mathcal{O}_{{w}} \subseteq K$ be the convex hull of $\mathbb{Z}$ in $K$. Then, $\mathcal{O}_w$ is definable in $K^\mathrm{Sh}$. By \cite[Theorem 4.3.7]{EP05}, $w$ is a $2$-henselian valuation on $K$ with Euclidean residue field. As $w$ has no proper refinements, $w$ is the canonical $2$-henselian valuation on $K$. Thus, we get $\mathcal{O}_w\subseteq \mathcal{O}_v$ and hence $\mathcal{O}_v$ is also definable in $K^\mathrm{Sh}$ by Example \ref{mainex}. \end{proof}
Note that combining Propositions \ref{notreal} and \ref{real} immediately yields Theorem \ref{A} from the introduction. Applying Proposition \ref{Shelah}, we conclude:
\begin{Cor} Let $K$ be a field and $v$ a henselian valuation on $K$. Assume that $\mathrm{Th}(K)$ is NIP in some language $\mathcal{L}\supseteq \mathcal{L}_\mathrm{ring}$. \label{nonsep} If $Kv$ is not separably closed, then $(K,v)$ is NIP in the language $\mathcal{L}\cup \{\mathcal{O}\}$. \end{Cor}
As separably closed fields are always NIP in $\mathcal{L}_\mathrm{ring}$, we note that the residue field of a henselian valuation on an NIP field is NIP as a pure field. \begin{Cor} \label{resnip} Let $K$ be a field and $v$ henselian on $K$. Assume that $\mathrm{Th}(K)$ is NIP in some language $\mathcal{L}\supseteq \mathcal{L}_\mathrm{ring}$. Then $Kv$ is NIP as a pure field. \end{Cor}
Recall that a field $K$ is called \emph{henselian} if it admits some non-trivial henselian valuation. \begin{Cor} \label{exist} Let $K$ be a henselian field such that $\mathrm{Th}(K)$ is NIP in some language $\mathcal{L}\supseteq \mathcal{L}_\mathrm{ring}$. Assume that $K$ is neither separably closed nor real closed. Then $K$ admits some non-trivial externally definable henselian valuation $v$. In particular, $(K,v)$ is NIP in the language $\mathcal{L}\cup \{\mathcal{O}_v\}$. \end{Cor} \begin{proof} If $K$ admits some non-trivial henselian valuation $v$ such that $Kv$ is not separably closed, the result follows immediately by Propositions \ref{notreal} and \ref{real}. Otherwise, $K$ admits a non-trivial $\mathcal{L}_\mathrm{ring}$-definable henselian valuation by \cite[Theorem 3.8]{JK14a}. \end{proof}
The question of what happens in case $Kv$ is separably closed is addressed in the next section.
\section{Separably closed residue fields} \label{scf} In this section, we give a partial answer to Question \ref{mainq} in case the residue field is separably closed. Recall that when $(K,v)$ is henselian and the residue field is not separably closed, we may add a symbol for $\mathcal{O}_v$ to \emph{any} NIP field structure on $K$ and obtain an NIP structure. First, we note that we cannot expect the same when it comes to separably closed (residue) fields: \begin{Ex}[{\cite[Example 5.5]{HHJ19}}] Let $K$ be a separably closed field and assume that $v_1$ and $v_2$ are two independent valuations on $K$. Then $(K,v_1,v_2)$ has IP in $\mathcal{L}_\mathrm{ring}\cup \{\mathcal{O}_1\} \cup \{\mathcal{O}_2\}$. \end{Ex}
There are of course many examples of separably closed fields with independent valuations: \begin{Ex} \label{extra} Let $\mathbb{Q}^\mathrm{alg}$ be an algebraic closure of $\mathbb{Q}$ and let $p\neq l$ be prime. Consider a prolongation $v_p$ (respectively $v_l$) of the $p$-adic (respectively $l$-adic) valuation on $\mathbb{Q}$ to $\mathbb{Q}^\mathrm{alg}$. Then $v_p$ and $v_l$ are independent, thus the bi-valued field $(\mathbb{Q}^\mathrm{alg},v_p,v_l)$ has IP. \end{Ex}
As any separably closed valued field has NIP in $\mathcal{L}_\mathrm{ring}\cup \{\mathcal{O}\}$ and any valuation is henselian on a separably closed field, we cannot expect an analogue of Corollary \ref{nonsep} to hold for separably closed residue fields. We will instead focus on the following version of Question \ref{mainq}: \begin{Q} \label{vmainq} Let $K$ be NIP as a pure field and $v$ a henselian valuation on $K$ with $Kv$ separably closed. Is $(K,v)$ NIP in $\mathcal{L}_\mathrm{ring} \cup \{\mathcal{O}\}$? \end{Q}
\subsection{Ingredients for the proof of Theorem \ref{B}} We will split the proof of Theorem \ref{B} into an equicharacteristic case and a mixed characteristic case. In both cases, separably algebraically maximal Kaplansky fields play an important role. \begin{Def} Let $(K,v)$ be a valued field and $p=\mathrm{char}(Kv)$. \begin{enumerate} \item We say that $(K,v)$ is \emph{(separably) algebraically maximal} if $(K,v)$ has no immediate (separable) algebraic extensions. \item We say that $(K,v)$ is \emph{Kaplansky} if $p=0$ or if $p>0$ and the value group $vK$ is $p$-divisible and the residue field $Kv$ is perfect and admits no Galois extensions of degree divisible by $p$. \end{enumerate} \end{Def} Note that separable algebraic maximality always implies henselianity. See \cite{Kuh13} for more details on (separably) algebraically maximal Kaplansky fields. As mentioned in the introduction, there is a transfer theorem which works for separably algebraically maximal Kaplansky fields: \begin{Thm}[{\cite[Theorem 3.3]{JS}} and {\cite[Proposition 4.1]{AJ19b}}] \label{SAMK} Any complete theory of separably algebraically maximal Kaplansky fields is NIP if and only if corresponding theories of the residue field and value group are NIP. \end{Thm}
The fact that the theory $\mathrm{SCVF}$ of separably closed valued fields is NIP has been proven (independently) by Delon and Hong; however, Delon's proof is unpublished and Hong's proof only works for finite degree of imperfection. It is also an immediate consequence of Theorem \ref{SAMK}: \begin{Cor}[{\cite[Corollary 4.2]{AJ19b}}] \label{del} Let $K$ be separably closed and let $v$ be a valuation on $K$. Then $(K,v)$ is NIP as a pure valued field. \end{Cor}
Using an argument by Scanlon, we reduce Question \ref{mainq} to the case of algebraically closed residue fields. \begin{Prop}[Scanlon] Let $(K,v)$ be a henselian valued field with $\mathrm{char}(Kv)=p$, such that $Kv$ is not perfect and has no separable extensions of degree divisible by $p$. Then $\mathcal{O}_v$ is definable in $\mathcal{L}_\mathrm{ring}$. \label{scan} \end{Prop} \begin{proof} Choose $t \in \mathcal{O}_v$ such that we have $\bar{t} \in Kv \setminus Kv^p$. Consider the $\mathcal{L}_\mathrm{ring}$-definable subset of $K$ given by
$$S:=\{a \in K \,|\, \exists\, L \supseteq K \textrm{ with }[L:K]<p \textrm{ and }\exists y \in L:\,y^p-ay=t\}.$$
We claim that $S = \{a \in K\,|\, v(a) \leq 0\}$ holds. We first show the inclusion $S \subseteq \{a \in K\,|\, v(a) \leq 0\}$. Assume for a contradiction that there is some $a \in S$ with $v(a)>0$. Take $L\supseteq K$ and $y \in L$ witnessing $a \in S$, i.e., we have $[L:K]<p$ and $y^p-ay=t$. Let $w$ denote the unique prolongation of $v$ to $L$. Note that, as $w(t) \geq 0$ and $w(a)>0$, we have $w(y) \geq 0$. Hence, we get $\bar{y}^p=\bar{t} \in Lw$. However, as $[Lw:Kv]\leq[L:K]<p$, this gives the desired contradiction.\\ For the other inclusion, suppose that we have $v(a)\leq 0$. Choose any $b \in K^\mathrm{alg}$ with $b^{p-1}=a$ and set $L:=K(b)$. In particular, we have $[L:K]\leq p-1 <p$. Let $w$ denote the unique extension of $v$ to $L$. Consider the equation $$baZ^p-Zba-t= (bZ)^p-a(bZ)-t=0$$ over $L$. As we have $w(ba)\leq 0$, this equation has a solution in $L$ if and only if the equation $$Z^p - Z - \dfrac{t}{ba}=0$$ over $\mathcal{O}_w$ has a solution in $\mathcal{O}_w$. As $(L,w)$ is henselian and $Lw$, a separable extension of $Kv$, also has no separable extensions of degree divisible by $p$, there is some $z \in \mathcal{O}_w$ with $z^p-z=\frac{t}{ba}$. For $y = zb$, we conclude $y^p-ay=t$ as desired.\\ It now follows immediately from the claim that $\mathcal{O}_v$ is also definable. \end{proof}
\subsection{Compositions of NIP valuations} In the proof of Theorem \ref{main2}, we decompose the valuation $v$ on $K$ into several pieces: a
coarsening $u$ of $v$ and a valuation $\bar{v}$ on $Ku$ such that $v$ is the composition of $\bar{v}$ and $u$. However, in general, it is not clear whether showing that each of these is NIP is sufficient to show that $v$ is NIP. The situation is simpler if the residue field $Ku$ of $u$ is stably embedded. \begin{Def} Let $M$ be a structure in some language $\mathcal{L}$ and $\mathcal{N} \succ M$
sufficiently saturated. A definable set $D$ is said to be \emph{stably embedded}
if for every formula $\phi(x;y)$, $y$ a finite tuple of variables from the same sort as $D$, there is a formula $d\phi(z;y)$ such that for any $a\in \mathcal{N}^{|x|}$, there is a tuple $b\in D^{|z|}$, such that $\phi(a;D)=d\phi(b;D)$. \end{Def}
See \cite[Chapter 3]{Simon:book} for more on stable embeddedness. Note that \cite[Proposition 2.5]{JS} proves that we can add NIP structure on a stably embedded set and stay NIP. Stable embeddedness of the residue field always holds in separably algebraically maximal Kaplansky fields:
\begin{Prop}[{\cite[Lemma 3.1]{JS}} and {\cite[Proof of Proposition 4.1]{AJ19b}}] \label{SEK} Let $(K,v)$ be a separably algebraically maximal Kaplansky field. Then, the residue field $Kv$ is stably embedded. \end{Prop}
There are more natural examples of henselian fields with stably embedded residue fields.
\begin{Def} Let $(K,v)$ be a valued field of characteristic $(\mathrm{char}(K), \mathrm{char}(Kv))=(0,p)$ for some prime $p >0$. We say that $(K,v)$ is \begin{enumerate} \item \emph{unramified} if $v(p)$ is the smallest positive element of $vK$ and \item \emph{finitely ramified} if the interval $[0,v(p)] \subseteq vK$ is finite. \end{enumerate} \end{Def}
The residue field of any unramified henselian valued field is purely stably embedded as an $\mathcal{L}_\textrm{ring}$-structure ({\cite[Corollary 13.7]{AJ19}}). However, the residue field of a finitely ramified henselian valued field need not be stably embedded as a pure field (cf.~\cite[Example 12.8]{AJ19}), and it is not known whether it is stably embedded in case the residue field is not perfect. Nonetheless, using the fact that every finitely ramified henselian valued field is, up to elementary equivalence, a finite extension of an unramified field with the same residue field, one can prove an NIP transfer:
\begin{Prop}[{\cite[Corollary 4.7]{AJ19b}}] \label{fin} Let $(K,v)$ be a henselian valued field of mixed characteristic and $u$ a coarsening of $v$ such that $(K,u)$ is finitely ramified and $(Ku, \bar{v})$ is NIP. Then $(K,v)$ is NIP. \end{Prop}
We finish the section with some open problems. \begin{Qs} \begin{enumerate} \item Is the composition of (henselian) NIP valuations NIP? \item Is the residue field of every henselian valuation on an NIP field stably embedded? \end{enumerate} \end{Qs} By \cite[Proposition 2.5]{JS}, a positive answer to the second question would imply a positive answer to the henselian case of the first question. Moreover, there seems to be no known example of a henselian valued field $(K,v)$, considered in $\mathcal{L}_\mathrm{ring} \cup \{\mathcal{O}_v\}$, in which the residue field is not stably embedded.
\subsection{The case of separably closed residue fields} In this subsection, we prove our second main result which was mentioned as Theorem \ref{B} in the introduction. We start with the equicharacteristic case: \begin{Prop} Let $K$ be NIP, $v$ henselian on $K$ with $\mathrm{char}(K)=\mathrm{char}(Kv)$. Then, $(K,v)$ is NIP as a pure valued field. \label{Prop:equi} \label{equi} \end{Prop} \begin{proof} We may assume that $v$ is non-trivial as otherwise the statement is clear. In case $Kv$ is non-separably closed, the statement follows from Corollary \ref{nonsep}. Now assume that $Kv$ is separably closed, in particular, $Kv$ is NIP as a pure field. Moreover, we assume that $K$ is not separably closed since otherwise the conclusion follows from Corollary \ref{del}. If $\mathrm{char}(Kv)=0$, the statement follows immediately from Delon's classical result (\cite[Theorem A.15]{Simon:book}) - or by the fact that any equicharacteristic $0$ henselian valued field is separably algebraically maximal Kaplansky. On the other hand, if $\mathrm{char}(K)=p>0$, then $K$ admits no Galois extensions of degree divisible by $p$ by \cite[Corollary 4.4]{KSW}. Thus, $vK$ is $p$-divisible and $Kv$ is perfect (for an argument for the latter, see the proof of \cite[Proposition 4.1]{JS}). As $Kv$ is separably closed, we conclude that $(K,v)$ is Kaplansky. Moreover, any immediate separable extension of $K$ has degree divisible by $p$ by the lemma of Ostrowski (\cite[see (3) on p.~280 for the statement and p.~300 for the proof]{Kuh11}). Thus, $(K,v)$ is separably algebraically maximal with algebraically closed residue field. By Theorem \ref{SAMK}, $(K,v)$ is NIP. \end{proof}
We now come to the general case: \begin{Thm} \label{main2} Let $K$ be NIP, $v$ henselian on $K$. Then $(K,v)$ is NIP as a pure valued field. \end{Thm} \begin{proof} If $Kv$ is not separably closed, the statement follows from Corollary \ref{nonsep}. In the case when $Kv$ is separably closed and non-perfect, the theorem holds by Proposition \ref{scan}. Thus, we may assume that $Kv$ is algebraically closed. Moreover, by Corollary \ref{del}, we may assume that $K$ is not separably closed. The equicharacteristic case follows from Proposition \ref{Prop:equi}. Thus, we now assume $\mathrm{char}(K)=0$ and $\mathrm{char}(Kv)=p>0$. Furthermore, since NIP is a property of the complete theory and all assumptions are preserved under elementary extensions, we may assume that $(K,v)$ is $\aleph_1$-saturated.
We consider the standard decomposition of $v$ (writing $\Gamma:=vK$): Let $\Delta_p \leq \Gamma$ be the biggest convex subgroup not containing $v(p)$ and let $\Delta_0 \leq \Gamma$ be the smallest convex subgroup containing $v(p)$. We get the following decomposition of the place $\varphi_v:K \to Kv$ corresponding to $v$: $$K =K_0 \xlongrightarrow{\Gamma/\Delta_0} K_1 \xlongrightarrow{\Delta_0/\Delta_p} K_2 \xlongrightarrow{\Delta_p} K_3=Kv$$ where every arrow is labelled with the corresponding value group. Note that $\mathrm{char}(K)=\mathrm{char}(K_1)=0$ and $\mathrm{char}(K_2)= \mathrm{char}(Kv)=p$. Let $v_i$ denote the valuation on $K_i$ corresponding to the place $K_i \to K_{i+1}$.
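To fix ideas, here is a purely illustrative example of this decomposition (it plays no role in the argument): if $\Gamma=\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}$ is ordered lexicographically (leftmost entry most significant) and $v(p)=(0,1,0)$, then $\Delta_p=\{0\}\times\{0\}\times\mathbb{Z}$ is the biggest convex subgroup not containing $v(p)$ and $\Delta_0=\{0\}\times\mathbb{Z}\times\mathbb{Z}$ is the smallest convex subgroup containing $v(p)$, so that $$\Gamma/\Delta_0\cong\mathbb{Z},\qquad \Delta_0/\Delta_p\cong\mathbb{Z},\qquad \Delta_p\cong\mathbb{Z}.$$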
By \cite[Theorem 1.13]{AnKuh16}, the value group $v_1K_1=\Delta_0/\Delta_p$ of $(K_1,v_1)$ is either isomorphic to $\mathbb{Z}$ or $\mathbb{R}$. Moreover, by saturation (and since $\Delta_0/\Delta_p$ has rank $1$), $(K_1,v_1)$ is spherically complete and thus algebraically maximal (compare also the proof of \cite[Lemma 6.8]{Joh15}). We now consider two cases:
In case $v_1K_1$ is isomorphic to $\mathbb{Z}$, the composition $u$ of $v_1$ and $v_0$ is finitely ramified. Since $u$ is henselian, Corollary \ref{resnip} implies that $Ku=K_2$ is an NIP field, thus $(K_2,v_2)$ is NIP (Proposition \ref{Prop:equi}). Applying Proposition \ref{fin}, we conclude that $(K,v)$ is NIP.
On the other hand, in case $(K_1,v_1)$ has divisible value group, we first show that $K_2$ is perfect. Assume $K_2$ is not perfect. Recall that it is NIP by Corollary \ref{resnip}, and hence admits no Galois extensions of degree divisible by $p$. Hence, by Proposition \ref{scan}, the composition $u$ of $v_0$ and $v_1$ is again definable. But this contradicts $\aleph_1$-saturation: Recall that $\Delta_p$ is the biggest convex subgroup not containing $v(p)$, and that $\Delta_0$ is the smallest convex subgroup containing $v(p)$. If $\Delta_p$ is definable in $(K,v)$, then $\Delta_0/\Delta_p$ must have a minimal positive element: otherwise, by saturation, $\Delta_0/\Delta_p$ would contain a proper non-trivial convex subgroup, contradicting the fact that it has rank $1$. Thus, if $\Delta_0/\Delta_p$ is isomorphic to $\mathbb{R}$, $K_2$ is perfect.
We now argue that $(K_1, v_1)$ is an algebraically maximal Kaplansky field. By what we have just shown, its residue field $K_2$ is perfect and NIP (using Corollary \ref{resnip} again), and by assumption we are in the case when the value group $v_1K_1$ is divisible. Thus, $(K_1,v_1)$ is Kaplansky. Since we have already argued that $(K_1,v_1)$ is algebraically maximal, Theorem \ref{SAMK} now implies that $(K_1,v_1)$ is NIP. By Proposition \ref{Prop:equi}, also $(K,v_0)$ and $(K_2,v_2)$ are NIP. Moreover, by Proposition \ref{SEK}, $K_2 = K_1v_1$ is stably embedded as a pure field in $(K_1,v_1)$, and, since $(K,v_0)$ is an equicharacteristic $0$ henselian valued field, $K_1=Kv_0$ is of course stably embedded as a pure field in $(K,v_0)$. Thus, applying \cite[Proposition 2.5]{JS} twice, we finally conclude that $(K,v)$ is NIP. \end{proof}
\section{Ordered fields} \label{ord} In this section, we use the same technique as in the proof of Proposition \ref{real} to study convex valuation rings on an ordered field $(K,<)$. We show that any convex valuation ring $\mathcal{O}_v$ on $K$ is definable in $(K,<)^\mathrm{Sh}$. The idea to consider convex valuation rings on ordered fields was suggested by Salma Kuhlmann.
\begin{Def} Let $(K,<)$ be an ordered field and $R \subseteq K$ a subring. \begin{enumerate} \item The \emph{$<$-convex hull of $R$ in $K$} is defined as $$\mathcal{O}_R(<) :=\{x \in K\, :\,x,-x < a \textrm{ for some }a \in R\}.$$ \item We say that $R$ is \emph{$<$-convex} if $\mathcal{O}_R(<)=R$. \end{enumerate} \end{Def}
The following facts about convex valuation rings are well-known. \begin{Fact}[{\cite[p.\,36]{EP05}}] Let $(K,<)$ be an ordered valued field. \begin{enumerate} \item Any convex subring of $K$ containing $1$ is a valuation ring. \item A subring $R \subseteq K$ is $<$-convex if and only if $R$ is a convex subgroup of the additive group of $K$. Thus, any two valuations $v,w$ on $K$ which are convex with respect to $<$ are comparable. \item There is a (unique) finest valuation $v_0$ on $K$ which is convex with respect to $<$. It is called the \emph{natural valuation} of $(K,<)$. The valuation ring $\mathcal{O}_{v_0}$ is the convex hull of the integers in $(K,<)$. \end{enumerate} \end{Fact}
It is now an easy consequence of the properties of the natural valuation that convex valuation rings are definable in the Shelah expansion: \begin{Prop} Let $(K,<)$ be an ordered field and $\mathcal{O}_v$ a convex valuation ring on $K$. Then $\mathcal{O}_v$ is definable in $(K,<)^\mathrm{Sh}$. \end{Prop} \begin{proof} As the valuation ring of the natural valuation $v_0$ is exactly the convex hull of $\mathbb{Z}$ in $K$, it is definable in $(K,<)^\mathrm{Sh}$. As any convex valuation $v$ on $K$ is a coarsening of $v_0$, the valuation ring of $v$ is also definable in $(K,<)^\mathrm{Sh}$ by Example \ref{mainex}. \end{proof}
Applying Proposition \ref{Shelah}, we obtain the following corollary. \begin{Cor} \label{cord} Let $K$ be an ordered field such that $\mathrm{Th}(K)$ is NIP in some language $\mathcal{L}\supseteq \mathcal{L}_\mathrm{of}$ and let $v$ be a convex valuation on $K$. Then, $(K,v)$ is NIP in $\mathcal{L}\cup\{\mathcal{O}_v\}$. \end{Cor}
\end{document}
\begin{document}
\cleanlookdateon \title{Spying on the prior of the number of data clusters and the partition distribution in Bayesian cluster analysis} \author{Jan Greve\thanks{WU Vienna University of
Business and Economics},
Bettina Gr\"un\thanks{WU Vienna University of Business and
Economics},
Gertraud Malsiner-Walli\thanks{WU
Vienna University of Business and Economics}~{}and\\
Sylvia Fr\"uhwirth-Schnatter\thanks{WU Vienna University of
Business and Economics}
\\
}
\date{}
\maketitle
\sloppy \begin{small} \begin{abstract} \commentBG{Cluster analysis aims at partitioning data into groups or clusters. In applications, it is common to deal with problems where the number of clusters is unknown}. Bayesian mixture models employed in such applications usually specify a flexible prior that takes into account the uncertainty with respect to the number of clusters. However, a major empirical challenge involving the use of these models is the characterisation of the induced prior on the partitions. This work introduces an approach to compute descriptive statistics \commentBG{of the prior on the partitions} for three selected Bayesian mixture models developed in the areas of Bayesian finite mixtures and Bayesian nonparametrics. The proposed methodology involves computationally efficient enumeration of the prior on the number of clusters in-sample (termed ``data clusters'') and the determination of the first two prior moments of symmetric additive statistics characterising the partitions. The accompanying reference implementation is made available in the \textsf{R} package \textbf{fipp}. Finally, we illustrate the proposed methodology through comparisons and also discuss the implications for prior elicitation in applications.
\end{abstract} \end{small}
\section{Introduction} \label{sec:intro}
Methodologically, cluster analysis aims at partitioning observations into a set of mutually exclusive groups such that observations within the same group share some characteristics and are distinguishable from observations across other groups. In model-based clustering, this problem is commonly dealt with using mixture models where data are assumed to be drawn from a distribution whose density is specified as a convex combination of parametric densities referred to as mixture components. In the context of clustering, the most natural understanding of mixture components is that each of them represents a distinct group within the population, \commentF{see \cite{McLachlan+Peel:2000} and \cite{gru:mod} for a recent review}.
\commentF{However,
the number} of distinct groups within the population is often not known a-priori. Therefore, mixture models are often used solely as flexible modelling tools where the components are not necessarily associated with any known or observed quantity of inferential importance. Nonetheless, in the Bayesian mixture framework \commentF{(see \citealt{fru:book} for a comprehensive review),} the data generating process where mixture components represent potential groups present in the population can be specified regardless of whether any information about this quantity is available a-priori or not. By assigning a prior distribution to this quantity, the uncertainty with respect to this variable can be reflected in the model. This fully Bayesian approach to the clustering problem with an unknown number of mixture components was proposed by \cite{Richardson+Green:1997}. Furthermore, the link between this approach and
\commentF{Bayesian Nonparametric (BNP) mixtures (see, e.g., \citealt{lij-pru:mod})} has been recently explored by \cite{Miller+Harrison:2018}, who coined the term Mixture of Finite Mixtures (MFM\commentBG{s}) to refer to the aforementioned approach, \commentF{and by \cite{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}}.
In addition, \commentBG{Bayesian methods enable the clear distinction of} clusters realised in the sample and those in the population. Specifically, mixture components associated with at least one observation belong to the former category and the latter \commentBG{category} consists of components with no observations. This distinction is usually not made in the frequentist framework that utilises \commentF{maximum likelihood (ML)} estimation. Details of the application and estimation of finite mixtures in \commentBG{a} ML framework are provided in \cite{McLachlan+Peel:2000}. \commentF{Both ML and Bayesian methods are reviewed in \cite{fru-etal:han}.}
Although theoretically sound and intuitively straightforward, one major empirical challenge of Bayesian mixture models with an unknown number of components \commentF{lies} in the prior and hyperparameter specification. This applies to MFM models developed in the area of Bayesian finite mixtures as well as \commentF{to} infinite mixture models originating in the BNP literature. All these models need to be specified with a hyperparameter (with or without a hyperprior) assigned to the prior distribution on the mixture component weights. In addition, for MFM models in particular, a prior on the number of mixture components needs to also be specified. Crucially, these specifications implicitly induce a prior distribution on the partitions of the data. The empirical importance of studying this prior can be easily understood as characteristics of the posterior \commentBG{of the} partitions with substantial inferential importance -- the number of clusters in the sample and allocation of data to clusters -- are directly influenced by this prior.
Hence, this work aims at complementing applied works involving Bayesian mixture models with an unknown number of components by providing methods to quantify \commentF{the prior distribution} on the partitions via ``spying'' on its probabilistic characteristics. To facilitate this goal, it introduces a way to evaluate moments of certain types of statistics defined on the induced prior on \commentBG{the} partitions. Specifically, for three important Bayesian finite and infinite mixture models, we present formulas for the prior on the number of \commentBG{data} clusters induced by the prior on the partitions and the first two moments of any symmetric additive functionals defined over the prior partitions. In addition, we derive computationally feasible evaluations of these quantities and provide a reference implementation written in \textsf{R} \citep{R-Core-Team:2020} in the package \textbf{fipp} \citep{Greve:2021}. The three models in question are the Dirichlet Process Mixture (DPM) model by \cite{Ferguson:1973}, the \commentBG{MFM} model proposed by \cite{Miller+Harrison:2018} and its generalisation by \cite{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}. These are referred to as the DPM, the static MFM and the dynamic MFM following the naming convention used in \cite{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}. To demonstrate the practical relevance of the methodology, a juxtaposition of these two MFM models is made based on their characteristics of the induced prior on the partitions. Additionally, an empirical comparison between these three models under popular prior and hyperparameter settings is also conducted.
This paper is structured as follows: Section~\ref{sec:mixt-models-bayes} reviews the different mixture models considered in this paper for Bayesian cluster analysis. The explicit priors used in Bayesian mixture models which give rise to the aforementioned three models are discussed in Section~\ref{sec:explicit-priors}. The main contribution of this work \commentBG{is to facilitate} quantification \commentBG{and characterisation} of the induced priors on the partitions. Section~\ref{sec:induced-priors} \commentBG{derives the theoretic results} together with computationally feasible algorithms for the calculation. Additionally, it positions the contribution of this work relative to previous works in this area both in theory and in applications. In Section~\ref{sec:insp-induc-priors}, we investigate the differences between the static and dynamic MFMs using the tools developed in the previous section. In Section~\ref{sec:comp-defa-priors} we empirically compare the implicit prior distributions for default prior specifications suggested in Bayesian cluster analysis. Section~\ref{sec:conclusions} outlines how these prior considerations might be used for prior elicitation in an application, and finally a summary of our findings is provided at the end in Section~\ref{sec:summary}.
\section{Mixture models for Bayesian cluster analysis}\label{sec:mixt-models-bayes}
One useful way to represent certain types of Bayesian mixture models employed in cluster analysis is to formulate them as generative models involving random partitions $\mathcal{C}$. This is a common approach in the literature of product partition models, see for example \cite{Hartigan:1990} and \cite{Barry+Hartigan:1992}. In general, Bayesian mixture models based on exchangeable random partitions (for detailed coverage of this topic, see \commentF{\citealt{pit:com}}) can be reformulated in this way. All three models (the static and dynamic MFM models and the DPM) considered in this work fall under this category and thus can be written \commentF{hierarchically, involving random partitions $\mathcal{C}$}.
Consider a partition $\mathcal{C}$ that separates $N$ observations with observed responses $\{\mathbf{y}_1, \ldots, \mathbf{y}_N\}$ by grouping the set of indices of the data $[N] \coloneqq \{1,\ldots,N\}$. Such a partition $\mathcal{C}= \{\mathcal{C}_1,\ldots,\mathcal{C}_{K_+}\}$ consists of blocks $\mathcal{C}_k$, $ k = 1,\ldots,K_+$, where each block $\mathcal{C}_k$ is a non-empty and disjoint subset of $[N]$ whose union is $[N]$. Hence, $\mathcal{C}_k$ is interpreted as a cluster containing the indices of the observations assigned to it. Consequently, $K_+ = \lvert \mathcal{C} \rvert$, the number of blocks in the partition, is interpreted as the number of clusters in the sample of $N$ observations. We refer to these realised clusters as ``data clusters\commentBG{''} as in \cite{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020} from here on. The resulting Bayesian mixture model based on $\mathcal{C}$ has a representation
\begin{linenomath*} \begin{align} \commentF{\mathcal{C}} &\sim p(\mathcal{C}), &\nonumber\\ \commentF{\bm{\theta}_k} &\sim p(\bm{\theta}_k), &\text{\commentF{independently for }} \commentF{k = 1, 2, \ldots },\label{eq:genmodelpartitions}\\
\commentF{\mathbf{y}_i|i \in \mathcal{C}_k, \bm{\theta}_k}& \sim f(\mathbf{y}_i|\bm{\theta}_k), & \text{\commentF{independently for }} \commentF{i=1, \ldots, N,} \nonumber
\end{align} \end{linenomath*} \commentF{where $f$ is the density function of the distribution the observations are assumed to be drawn from and $\bm{\theta}_k$ is the set of parameters specific to the density function $f$ of the subgroup $k$.}
In Bayesian cluster analysis, such a partition $\mathcal{C}$ is \commentBG{usually} induced by a sequence of random variables, i.e., a vector of categorical variables called class assignment vector $\bm{S} = (S_1,\ldots,S_N)$. Each element $S_i\in \{1,\ldots,K\}$, $ i = 1,\ldots,N,$ takes one of the $K$ class labels as its realisation with $K$ being the number of components in the mixture distribution. Therefore, clustering arises in a natural way as each block $\mathcal{C}_k,\ k = 1,\ldots, K_+$, within the partition $\mathcal{C}$ is induced by $\bm{S}$ by grouping $S_i = S_j,\ i\neq j$ for all $i$ and $j$ in $[N]$, \commentF{see e.g. \cite{lau-gre:bay}}. Hence, amongst class labels of $S_i$ ranging from 1 to $K$, only $K_+$ unique labels are present in $\bm{S}$ which induce $\mathcal{C} = \{\mathcal{C}_1,\ldots,\mathcal{C}_{K_+}\}$.
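For illustration, consider a small artificial example (the concrete numbers are ours and serve only to fix the notation): let $N=5$ and $K=4$, and suppose the sampled class assignment vector is $\bm{S}=(3,1,3,3,1)$. Only the $K_+=2$ labels $\{1,3\}$ occur in $\bm{S}$, and the induced partition is $\mathcal{C}=\{\mathcal{C}_1,\mathcal{C}_2\}$ with $\mathcal{C}_1=\{1,3,4\}$ and $\mathcal{C}_2=\{2,5\}$; the labels themselves carry no information beyond the grouping they induce.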
Typically in Bayesian finite mixture models, a $K$-variate symmetric Dirichlet-Multinomial distribution is considered as a prior on the class assignment vector $\bm{S}$. MFM models considered in this work all use the Dirichlet-Multinomial distribution as a partition generator (for other choices of the prior, see for example \citealt{Lijoi+Pruenster+Rigon:2020}). Additionally, in the MFM framework, a prior on $K$ is specified, usually a discrete distribution on $\mathbb{N}_+$. Hence, the MFM models considered in this work have the following generative model for the partitions:
\begin{linenomath*} \begin{align} K &\sim p(K),\nonumber\\
\bm{\eta}_K | K, \gamma_K &\sim \mathcal{D}_K(\gamma_K),\label{eq:dirichletmultinomial}\\
S_i | \bm{\eta}_K &\sim \mathcal{M}_K(1, \bm{\eta}_K),\qquad \text{for } i=1,\ldots,N,\nonumber \end{align} \end{linenomath*}
where $p(K)$ is the aforementioned discrete prior on $K$. Here, the Dirichlet-Multinomial prior is written hierarchically with the $K$-variate symmetric Dirichlet prior $\mathcal{D}_K$ on the weight vector $\bm{\eta}_K = (\eta_1,\ldots,\eta_K)$ followed by the $K$-variate Multinomial prior $\mathcal{M}_K$ on the class assignment vector $\bm{S}$. Crucially, the characteristics of the induced \commentBG{prior on the} partitions of this model \commentBG{are} determined by the choice of $p(K)$ and the Dirichlet parameter $\gamma_K$ as well as the sample size $N$.
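The finite-sample behaviour induced by this hierarchy is easy to explore by forward simulation. The following \textsf{R} sketch draws partitions from the model in Equation~\eqref{eq:dirichletmultinomial} and returns a Monte Carlo approximation of the induced prior on $K_+$. It is purely illustrative: the specific choices ($N=100$, $\alpha=1$, a geometric prior on $K-1$) are ours, and the \textbf{fipp} package referred to above computes such prior quantities exactly rather than by simulation.
\begin{verbatim}
## Monte Carlo sketch of the prior on K_+ induced by a dynamic MFM.
## Illustrative settings only (not defaults of any package):
set.seed(1)
N     <- 100                                  # sample size
alpha <- 1                                    # gamma_K = alpha / K (dynamic MFM)
rK    <- function() 1 + rgeom(1, prob = 0.1)  # prior on K: K - 1 ~ Geo(0.1)

sample_Kplus <- function() {
  K   <- rK()
  eta <- rgamma(K, shape = alpha / K)         # unnormalised Dirichlet draws
  eta <- eta / sum(eta)                       # eta ~ Dirichlet(alpha/K, ..., alpha/K)
  S   <- sample.int(K, size = N, replace = TRUE, prob = eta)
  length(unique(S))                           # number of data clusters K_+
}

Kplus <- replicate(10000, sample_Kplus())
prop.table(table(Kplus))                      # Monte Carlo estimate of p(K_+ = k)
\end{verbatim}
Replacing \texttt{alpha / K} by a fixed value $\gamma$ in the \texttt{shape} argument yields the corresponding sketch for the static MFM.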
Finally, conditional on the resulting class assignments $\bm{S}$, the likelihood of each $\mathbf{y}_i$ is evaluated as follows: \begin{linenomath*} \begin{align}
p(\mathbf{y}_i|S_i = k,\commentF{\bm{\theta}_1,\ldots,\bm{\theta}_{K}}) = f(\mathbf{y}_i|\commentF{\bm{\theta}_k}).\label{emission-cond-Si} \end{align} \end{linenomath*}
\commentF{The} widely known mixture density conditional on the number of mixture components $K$, the component weight vector $\bm{\eta}_K$ and the component specific parameter vector $\Theta_K = (\bm{\theta}_1,\ldots,\bm{\theta}_K)$ is obtained through integrating out the aforementioned class assignments $\bm{S}$:
\begin{linenomath*} \begin{align}
p(\mathbf{y}_i|K,\Theta_K,\bm{\eta}_K)&= \sum_{k=1}^K \eta_k f(\mathbf{y}_i |\commentF{\bm{\theta}_k} ).\label{emission-uncond-Si} \end{align} \end{linenomath*}
Note that there is a crucial distinction between $K$, the number of components in the mixture distribution, and $K_+$, the number of clusters in the data (i.e., \commentBG{the} data clusters). While $K$ is assumed to represent the number of clusters in the population, $K_+$ can be interpreted as the number of \commentBG{clusters out of the $K$ clusters in the population} that generated the data at hand. Therefore, of all $K$ mixture components in Equation~\eqref{emission-uncond-Si}, only $K_+$ densities are associated with at least one observation $\mathbf{y}_i$ through $S_i$ as in Equation~\eqref{emission-cond-Si}. \commentBG{Given} this definition of $K_+$ as the finite-sample counterpart of $K$, its upper bound is given by $K_+\leq \min(K,N)$.
It is evident that the prior on the partitions is not influenced by the choice of the density function $f$, nor its parameters $\Theta_K$. Since the aim of this work is to quantify the characteristics of the induced prior on the partitions, the focus will exclusively be on the explicit and implicit characteristics of Equation~\eqref{eq:dirichletmultinomial} from here on.
\section{Explicit prior}\label{sec:explicit-priors}
The main objective of this study is to characterise the \commentBG{prior on the} partitions induced by the model given in Equation~\eqref{eq:dirichletmultinomial}. Once $p(K)$ and $\gamma_K$ are specified, the prior distribution on the partitions of $[N]$ is completely determined. In this section, we consider several modelling approaches for clustering previously explored in the literature of Bayesian Finite Mixtures and Bayesian Nonparametrics by focusing solely on their specification of $p(K)$ and $\gamma_K$.
\subsection{Prior hyperparameter on the weight distribution}\label{sec:prior-hyper-weight}
Conditional on a given $K$, a $K$-variate symmetric Dirichlet distribution $\mathcal{D}_K(\gamma_K)$ on the weights $\bm{\eta}_K=(\eta_1,\ldots,\eta_K)$ is specified by choosing a hyperparameter $\gamma_K$. The static and dynamic MFM models, as coined by \citet{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}, differ only in the form of this hyperparameter:
\begin{linenomath*} \begin{align*} &\text{static MFM: }\qquad \gamma_K\equiv \gamma,\\ &\text{dynamic MFM: }\quad \gamma_K = \frac{\alpha}{K}. \end{align*} \end{linenomath*}
That is, the Dirichlet parameter $\gamma_K$ is fixed to a constant $\gamma$ regardless of $K$ for the static MFM, while that of the dynamic MFM is inversely proportional to $K$ with the specific form of $\alpha/K$.
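As a small illustration in \textsf{R} (the language of the accompanying \textbf{fipp} package), the two specifications can be written as functions of $K$; the function names below are ours and purely illustrative:
\begin{verbatim}
## Dirichlet hyperparameter gamma_K as a function of K (illustrative only;
## the function names are ours and not part of any package)
gamma_static  <- function(K, gamma = 1) rep(gamma, length(K))   # constant in K
gamma_dynamic <- function(K, alpha = 1) alpha / K               # decreasing in K
gamma_static(1:5)    # 1 1 1 1 1
gamma_dynamic(1:5)   # 1 0.5 0.333... 0.25 0.2
\end{verbatim}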
The induced prior on the partitions differs considerably across these two specifications, as already noted by \cite{McCullagh+Yang:2008}. To put it simply, for larger values of $K$, the Dirichlet parameter $\gamma_K$ of the dynamic MFM approaches zero, thus favouring a sparse distribution of $\bm{\eta}_K$. This implies that the larger $K$, the more likely it is that $K_+$ is smaller than $K$, causing an increasing gap between the number of components and the number of data clusters. In fact, the DPM can be considered a limiting case of this dynamic MFM where $K$ is taken to infinity \citep{Green+Richardson:2001}, so that with probability one $K_+$ is less than $K$. For this reason, the parameter $\alpha$ in the DPM corresponds to the parameter $\alpha$ in the dynamic MFM. For the static MFM, on the other hand, the difference between $K_+$ and $K$ depends more heavily on the value of $\gamma$.
\subsection{Prior on $K$}\label{sec:prior-K} For both MFM models, the prior on $K$ is usually specified as a proper discrete distribution with support on $\mathbb{N}_+$ to ensure that the posterior on $K$ is also proper \citep{Nobile:2004}. The DPM, being a limiting case of the dynamic MFM, uses a degenerate prior on $K$ with a point mass at infinity. Other choices of $p(K)$ previously proposed in the literature and considered in later sections are: the uniform prior on $K$ between 1 and 30 proposed by \citet{Richardson+Green:1997}, the geometric prior $\mbox{\rm Geo}(0.1)$ on $K-1$ suggested in \cite{Miller+Harrison:2018} and the beta-negative-binomial prior $\text{BNB}(1,4,3)$ on $K-1$ considered in \citet{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}. The uniform prior is an example where the support of $p(K)$ is not all of $\mathbb{N}_+$, but rather a truncated domain. The other two priors share the characteristic of having a monotonically decreasing probability mass function, thus penalising additional components a-priori.
\section{Induced prior on the partitions}\label{sec:induced-priors}
The model specification for generating partitions outlined in Equation \eqref{eq:dirichletmultinomial} explicitly characterises the number of mixture components $K$ and the class assignments $S_i,\ i = 1,\ldots,N$, as well as the intermediate weight vector $\bm{\eta}_K$. A partition $\mathcal{C}$ is then induced from the sampled class assignment vector $\bm{S} = (S_1,\ldots,S_N)$. While this way of hierarchically combining well known probability distributions to generate partitions is implementationally straightforward, it masks the actual prior distribution on $\mathcal{C}$. In other words, Equation~\eqref{eq:dirichletmultinomial} is not particularly informative about its finite-sample characteristics for a given $N$, that is, about the prior distribution on $\mathcal{C}$. In clustering, however, it is particularly important to understand the finite-sample characteristics of the model, as the prior information that directly influences the clustering behaviour is not given by the latent quantities such as $K$ and $\bm{\eta}_K$, but rather by the realised characteristics of the mixture distribution such as the partitions. Studying the prior on the partitions can therefore be crucial for many reasons. For example, one may want to evaluate the informativeness of the induced prior on the partitions relative to the resulting posterior partitions to ensure that proper learning from the data took place. Also, in some applications, one may want to incorporate external information into the prior partitions concerning the ``kind'' of partitions one is interested in, e.g.~regarding the assumed number of clusters $K_+$ in the data. This information, however, cannot be directly embedded in the model as neither $p(K)$ nor $\gamma_K$ will single-handedly control the prior on the partitions.
Hence, it is of paramount importance to quantify the induced prior on the partitions so as to ``spy'' on its characteristics. For this reason, this section deals with delineating all the steps and procedures that enable characterisation of the induced prior on the partitions. Specifically, for all three models outlined in Section \ref{sec:prior-hyper-weight}, two \commentBG{possibilities to characterise this prior} are \commentBG{considered}: the prior distribution on $K_+$, and the first two \commentBG{prior} moments of any symmetric additive functional defined over the partitions conditional on $K_+$. Finally, \commentBG{combining these quantities enables determining the prior moments of these functionals} unconditional on $K_+$.
Characterisation of the probability distribution on the partitions is an arduous task due to its combinatorial construction. \cite{Gnedin:2010} mentions the use of expectations of symmetric statistics computed over the exchangeable frequency vector (i.e., the normalised block sizes). This relates to the first moment of the symmetric additive functionals unconditional on $K_+$ that this paper introduces as one of the descriptive statistics for the characterisation of the prior on the partitions. In addition, this paper quantifies the variability of these functionals via the corresponding variance, derives a way to compute these quantities efficiently, and provides a reference implementation in the \textsf{R} package \textbf{fipp}.
Note that the \commentBG{\textsf{R}} package \textbf{AntMAN} \citep{Argiento:2019} \commentBG{also} allows the evaluation of the prior on $K_+$ for the static MFM. However, \textbf{fipp} offers more comprehensive tools for the characterisation \commentBG{of the prior on the partitions for} the DPM and \commentBG{the static and dynamic} MFM models. \commentBG{In particular, package \textbf{fipp} also provides the} capacity to evaluate \commentBG{the prior} expectation and variance of any symmetric additive functional defined over the partitions.
\subsection{The induced EPPF}\label{sec:induced-eppf}
The induced prior on the partitions is available for all three modelling approaches: the DPM, the static MFM and the dynamic MFM. All these priors are symmetric functions of the data cluster sizes $\{\lvert \mathcal{C}_1\rvert,\ldots, \lvert \mathcal{C}_k\rvert\}$ for $K_+ =k$ and hence, $p(\mathcal{C} |N, \bm{\gamma})$ with $\bm{\gamma} = \{\gamma_K\}$ is an exchangeable partition probability function (EPPF) in the sense of \citet{Pitman:1995} and defines an exchangeable random partition of the $N$ data points for all three classes of mixture models.
For a DPM with concentration parameter $\alpha$, the EPPF on a partition $\mathcal{C} = \{\mathcal{C}_1, \ldots, \mathcal{C}_k\}$ with $K_+ = k$ is given as the following Ewens distribution:
\begin{linenomath*} \begin{align*}
p(\mathcal{C} | N, \alpha )=
\frac{ \alpha ^{k} \Gamma(\alpha) }{
\Gamma(\alpha + N)} \prod_{j=1}^{k} \Gamma(\lvert \mathcal{C}_j \rvert). \end{align*} \end{linenomath*} Similarly, for a static MFM with $\gamma_K \equiv \gamma$, and thus conditional only on $\gamma$ rather than the sequence $\bm{\gamma}$, the EPPF of the same $\mathcal{C}$ is given in \citet{Miller+Harrison:2018} as follows:
\begin{linenomath*} \begin{align*}
p(\mathcal{C} | N, \gamma ) &= {V} ^\gamma _{N, {k}} \prod_{j=1}^{k} \frac{ \Gamma(\lvert \mathcal{C}_j\rvert + \gamma)}{ \Gamma(\gamma) } , \\ {V}^\gamma _{N, {k}} &= \sum_{K=k}^\infty p(K) \frac{K !}{(K- k )!} \frac{\Gamma(\gamma K) }{ \Gamma(\gamma K + N)} , \end{align*} \end{linenomath*} \commentF{as proven earlier by \cite{Gnedin+Pitman:2006} in the BNP
literature. ${V}^\gamma_{N,k}$ (related to the $V$-weights
$\tilde{V}^\gamma_{N,k}$ in \citealt{Gnedin+Pitman:2006} through the
normalisation $ \tilde{V}^\gamma_{N,k} = \gamma ^k {V} ^ \gamma _{N,k} $)
is} associated with the across-block characteristics of the partition $\mathcal{C}$, such as $k$ and $N$, as well as with the hyperparameter $\gamma$ and the prior $p(K)$. This quantity can be computed recursively using \citet[Proposition~3.2]{Miller+Harrison:2018}.\footnote{Note the
following change of notation: $ V_{N,k} ^\gamma \equiv V_n(t)$ in
\citet{Miller+Harrison:2018}.} For $k=1, 2, \ldots$:
\begin{linenomath*} \begin{align*}
{V}^\gamma_{N+1,k+1} &= \frac{1}{ \gamma} {V} ^\gamma _{N,k} - \left( \frac{N}{\gamma} + k\right) {V} ^\gamma _{N+1,k} , \quad {V} ^\gamma _{{N},0}
= \sum_{K=1}^\infty \frac{\Gamma(\gamma K) }{ \Gamma(\gamma K + N)} p(K). \end{align*} \end{linenomath*}
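To make these quantities concrete, ${V}^\gamma_{N,k}$ can also be evaluated directly from its defining sum by truncating at a large value. The following \textsf{R} sketch does exactly that; it is illustrative code only (the recursion above is what is used in practice), and the prior \texttt{pK} and the truncation level \texttt{Kmax} are assumptions of this sketch:
\begin{verbatim}
## V^gamma_{N,k} for the static MFM via the (truncated) defining sum;
## pK is a vectorised prior mass function for K = 1, 2, ... and Kmax is
## the truncation level (both assumptions of this sketch)
V_static <- function(N, k, gamma, pK, Kmax = 500) {
  K <- k:Kmax
  logterm <- lgamma(K + 1) - lgamma(K - k + 1) +    # log K! / (K - k)!
             lgamma(gamma * K) - lgamma(gamma * K + N)
  sum(pK(K) * exp(logterm))
}
## Example: Geo(0.1) prior on K - 1 as in Miller and Harrison (2018)
pK_geom <- function(K) dgeom(K - 1, prob = 0.1)
V_static(N = 100, k = 3, gamma = 1, pK = pK_geom)
\end{verbatim}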
A generalisation of the above result is given in \citet{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}. Specifically, they consider an arbitrary sequence $\bm{\gamma} = \{\gamma_K\}$ for $K = \commentBG{1},\ldots,\infty$ assuming the prior on $K$ has support on $\mathbb{N}_+$. \commentBG{They refer to the MFM model with such an arbitrary sequence $\bm{\gamma}$ as the \emph{generalised MFM} model}. Thus the generalised EPPF of the same $\mathcal{C}$ is written as follows:
\begin{linenomath*} \begin{align} \label{mixparti}
p(\mathcal{C} |N, \bm{\gamma})
&= \sum_{K=k}^\infty p(K) p(\mathcal{C} | N, K, \gamma_K ),\\
p(\mathcal{C} | N, K, \gamma_K ) &= V_{N, k}^{K, \gamma_K} \prod_{j=1}^{k} \frac{\Gamma(\lvert \mathcal{C}_j\rvert +\gamma_K)}{\Gamma(\gamma_K)} , \label{mixpartiK} \\ \label{mixpKV} V_{N,k }^{K, \gamma_K} &= \frac{ \Gamma(\gamma_K K) K ! }{ \Gamma(\gamma_K K+N) (K- k )!} . \end{align} \end{linenomath*}
For the general\commentBG{ised} MFM, the $V$-weight also depends on $K$ as evident from Equation \eqref{mixpKV}. The explicit form of the EPPF for the dynamic MFM is obtained by setting $\gamma_K = \alpha / K$.
\subsection{The induced prior on the number of data clusters $K_+$}\label{sec:induced-prior-number}
The prior $p(K_+| N, \bm{\gamma})$ on the number of data clusters $K_+$, where the uncertainty with respect to $K$ is integrated out and one accounts for the specification of $\bm{\gamma} = \{\gamma_K\}$ and the sample size $N$, could be derived from the EPPF given in Equation~\eqref{mixparti} by summing over all partitions $\mathcal{C}$ with $K_+ = k$ data clusters across all $k = 1,\ldots,N$. A naive approach would be to sum over the set of all partitions of $[N]$ with $K_+ = k$ clusters for all $k$ which amounts to a computation in the order of the $N$-th Bell number $B_N$ (for details, see Appendix~\ref{sec:EPPF-support-combinatoric-part}).
An alternative approach to obtain $p(K_+| N,\bm{\gamma})$ is suggested in \citet[Theorem~3.1]{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}. They base the derivation on the prior $p(N_1, \ldots, N_{K_+} |N,\bm{\gamma})$ of $(N_1,\ldots,N_{K_+})$, which they call the {\em labelled} data cluster sizes: the cluster sizes are arranged in some exchangeable random order, so that labels $\{1,\ldots, K_+\}$ are attached to the $K_+$ data clusters in $\mathcal{C}$ with $\lvert \mathcal{C}_j \rvert = N_j$, $j = 1,\ldots, K_+$. Under this operation, all class assignment vectors $\bm{S}$ that are equivalent under exchangeability are mapped to a set partition $\mathcal{C} = \{\mathcal{C}_1,\ldots,\mathcal{C}_{K_+}\}$ by a $\binom{K}{K_+}K_+!$-to-one mapping, whereas the mapping from $\bm{S}$ to $(N_1,\ldots,N_{K_+})$ is $\binom{K}{K_+}\binom{N}{N_1N_2\cdots N_{K_+}}$-to-one. Hence, the multiplicity of $(N_1,\ldots,N_{K_+})$ relative to $\mathcal{C}$ is $\frac{1}{K_+!}\binom{N}{N_1N_2\cdots N_{K_+}}$, resulting in the following EPPF on the labelled data cluster sizes:
\begin{linenomath*} \begin{align*}
p(N_1,\ldots,N_{K_+}|N,K,\bm{\gamma}) = \frac{N!}{K_+!}\frac{ {V}_{N, K_+}^{K, \gamma_K}}{ \Gamma(\gamma_K) ^{K_+}}\prod_{j=1}^{K_+}\frac{ \Gamma(N_{j} +\gamma_K)} {\Gamma(N_j + 1)}. \end{align*} \end{linenomath*}
Marginalising out $K$ leads to:
\begin{align*}
p(N_1,\ldots,N_{K_+}|N,\bm{\gamma}) = \frac{N!}{K_+!}\sum_{K=K_+}^{\infty}p(K)\frac{ {V}_{N, K_+}^{K, \gamma_K}}{ \Gamma(\gamma_K) ^{K_+}}\prod_{j=1}^{K_+}\frac{ \Gamma(N_{j} +\gamma_K)} {\Gamma(N_j + 1)}. \end{align*}
Then, summing up the probabilities of all labelled data cluster sizes $(N_1,\ldots,N_{k})$ with $K_+ = k$ amounts to computing $P(K_+ =k|N,\bm{\gamma})$. Thus, we have:
\begin{align}
P(K_+ = k |N, \bm{\gamma}) &= \frac{N!}{ k!}
\sum_{K=k}^\infty p(K) \frac{ {V}_{N, k}^{K, \gamma_K}}{ \Gamma(\gamma_K) ^{k}
} C^{K, \gamma_K}_{N,k},\label{pKplus}\\
C^{K, \gamma_K}_{N,k} &=
\sum_{\substack{N_1,\ldots, N_k > 0\\N_1+\ldots+N_k=N}}
\prod_{j=1}^k \frac{ \Gamma(N_{j} +\gamma_K)} {\Gamma(N_j + 1)},\label{PposCk} \end{align}
where the term $C^{K, \gamma_K}_{N,k}$ sums over all possible labelled data cluster sizes $(N_1,\ldots,N_k)$.
As shown in \citet[\commentF{Algorithm~1}]{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}, $C_{N,k} ^{K, \gamma_K}$ can be determined recursively (see also Algorithm~\ref{KNMFM} in Appendix~\ref{sec:algor-comp-prior}). For a static MFM, $C_{N,k}^{K, \gamma_K} \equiv C ^\gamma _{N,k}$ is independent of $K$ and can be obtained in a single recursion from Equation~\eqref{recck} in Appendix~\ref{sec:algor-comp-prior}. For a DPM, $w_n=1/n$ is used in recursion~\eqref{recck} in Appendix~\ref{sec:algor-comp-prior} to obtain $C^{\infty}_{N,k}$.
In principle, to determine the prior on the number of data clusters $K_+$, an infinite sum over $K$ has to be computed. In practice, a maximum value for $K$ is set to determine the prior. The missing mass is reflected by the prior on the number of data clusters $K_+$ not having a total mass of 1. Thus, the total mass covered by the truncated prior can be used to check the suitability of the selected maximum value of $K$. If this mass is assessed to be not sufficiently close to 1, the maximum value may be increased for a better approximation.
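To illustrate Equations~\eqref{pKplus} and \eqref{PposCk} together with this truncation strategy, the following \textsf{R} sketch computes $P(K_+ = k \mid N, \gamma)$ for the static MFM. It exploits the convolution structure of $C^{\gamma}_{N,k}$ instead of Algorithm~\ref{KNMFM} and works on the linear scale, which is adequate for $N$ around 100; this is a simplified illustration, not the \textbf{fipp} implementation:
\begin{verbatim}
## C^gamma_{N,k} of Equation (PposCk) via its convolution structure:
## C_{n,1} = w_n and C_{n,j} = sum_{m=1}^{n-j+1} w_m C_{n-m,j-1},
## with w_n = Gamma(n + gamma) / Gamma(n + 1)
C_static <- function(N, k, gamma) {
  w <- exp(lgamma(1:N + gamma) - lgamma(1:N + 1))
  C <- w                                   # C_{n,1} for n = 1, ..., N
  if (k >= 2) for (j in 2:k) {
    Cnew <- numeric(N)
    for (n in j:N) Cnew[n] <- sum(w[1:(n - j + 1)] * C[(n - 1):(j - 1)])
    C <- Cnew
  }
  C[N]
}

## P(K_+ = k | N, gamma) for the static MFM, Equation (pKplus),
## reusing V_static() and pK_geom() from the sketch above
pKplus_static <- function(k, N, gamma, pK, Kmax = 500) {
  exp(lfactorial(N) - lfactorial(k) - k * lgamma(gamma) +
      log(V_static(N, k, gamma, pK, Kmax)) + log(C_static(N, k, gamma)))
}

## Total prior mass covered by a truncation of K_+ (and K) as a diagnostic
pk <- sapply(1:30, pKplus_static, N = 100, gamma = 1, pK = pK_geom)
sum(pk)   # values noticeably below 1 indicate that the truncation is too tight
\end{verbatim}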
\subsection{The induced prior on the partitions based on the labelled data cluster sizes}\label{sec:char-prior-part}
The prior $p(K_+|N,\bm{\gamma})$ shown in Section~\ref{sec:induced-prior-number} can be considered a facet of the induced prior on the partitions $\mathcal{C}$ that only concerns the number of blocks within each $\mathcal{C}$ (which corresponds to the number of data clusters $K_+$ as explained in Section~\ref{sec:mixt-models-bayes}) while ignoring other characteristics. A generalisation of this approach is to consider a distribution of functionals defined over the induced prior on the partitions $\mathcal{C}$. However, unlike the special case $p(K_+|N,\bm{\gamma})$, most functionals do not allow for easy derivation of such a distribution. Nevertheless, moments of some functionals with certain characteristics can be computed over the induced prior on the partitions conditional on the number of data clusters $K_+$. Specifically, we consider functionals defined over the labelled data cluster sizes $(N_1,\ldots,N_{K_+})$ which are symmetric and given as additive sums of functions of the single data cluster size $N_j$ \commentBG{over} all $j = 1,\ldots,K_+$. For these functionals we show that at least the first two moments can be easily derived and evaluated efficiently conditional on $K_+ \commentBG{=k}$:
\begin{linenomath*} \begin{align*}
\Psi(N_1,\ldots,N_{\commentBG{k}}) &= \sum_{j=1}^{\commentBG{k}} \psi(N_j). \end{align*} \end{linenomath*}
Relatedly, the aforementioned \commentF{\cite{Gnedin:2010}} considers an approach to compute expected values of the same statistics defined over \commentBG{the} exchangeable frequency vector without conditioning on $K_+$. However, combined with the distribution on $K_+$ derived in Section~\ref{sec:induced-prior-number}, the first two conditional moments we derive can also trivially be made unconditional on $K_+$.
For all three models considered in this work, the subsequent Section~\ref{sec:induc-cond-prior-1} introduces the prior on the labelled data cluster sizes $(N_1,\ldots,N_k)$ conditional on $K_+ = k$. This distribution is marginalised in Section~\ref{sec:marg-prior-labell} to obtain the conditional distribution on $N_j$ for all $j = 1,\ldots,k$ which is then used in Section~\ref{sec:comp-prior-moments} to evaluate the expectation of $\psi(N_j)$ and $\psi(N_j)\psi(N_l), j\neq l$ for all $j,l = 1,\ldots,k$. Based on these quantities, Section~\ref{sec:computing-prior-mean} derives the prior mean and variance of $\Psi(N_1,\ldots,N_{\commentBG{k}})$ conditional on $K_+\commentBG{=k}$. Furthermore, several functionals of empirical relevance are introduced as examples of symmetric additive statistics $\Psi(N_1,\ldots,N_{K_+})$. Finally, in Section~\ref{sec:computing-prior-mean-uncond} the prior mean and variance conditional on $K_+$ are combined with the prior distribution on $K_+$ to marginalise out $K_+$. In this way, we show that the first two moments of symmetric additive functionals defined over the labelled data cluster sizes can be computed unconditional on $K_+$ and in fact be evaluated rather efficiently in terms of computation by utilising recursion.
\subsubsection{The induced conditional prior on the labelled data cluster sizes}\label{sec:induc-cond-prior-1}
The prior distribution $p(N_1, \ldots, N_{K_+}|N, \bm{\gamma})$ of the labelled data cluster sizes is defined over {\em all possible composition\commentBG{s} of $N$}, with $K_+$ being a random number taking a value $K_+= 1, \ldots,N$. As pointed out by \citet{Green+Richardson:2001}, it is also interesting to consider the induced prior distribution over the labelled data clusters sizes for a given number of data clusters $K_+ = k$. This leads to the {\em conditional} prior on the labelled data cluster sizes for a given number of data clusters $K_+=k$ which is defined as:
\begin{linenomath*} \begin{align*}
p(N_1, \ldots, N_k |N, K_+ = k, \bm{\gamma})&= \frac{p(N_1, \ldots, N_k|N, \bm{\gamma}) }
{ P(K_+ = k |N , \bm{\gamma})}, \end{align*} \end{linenomath*}
where $P(K_+ = k |N , \bm{\gamma})$ is the prior on the number of data clusters. \citet{Miller+Harrison:2018} provide this conditional prior for the DPM and derive it for the static MFM. In addition \citet{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020} also discuss this conditional prior for the dynamic MFM.
For the DPM, this prior is independent of $\alpha$:
\begin{linenomath*} \begin{align*}
p (N_1, \ldots, N_k |N, K_+ = k) &= \frac{1}{ C ^\infty _{N,k}} \prod_{j=1}^{k} \frac{1}{ N_j}. \end{align*} \end{linenomath*}
For the static MFM, this prior depends on $\gamma$, but is independent of $p(K)$:
\begin{linenomath*} \begin{align*}
p(N_1, \ldots, N_k |N, K_+ = k, \gamma) &= \frac{1}{ C ^{\gamma} _{N,k}} \prod_{j=1}^{k} \frac{ \Gamma(N_j +\gamma)}{ \Gamma(N_j + 1)}. \end{align*} \end{linenomath*}
For the dynamic MFM, this prior depends on $\alpha$ as well as on the prior $p(K)$:
\begin{linenomath*} \begin{align*}
p (N_1, \ldots, N_k |N, K_+ = k, \alpha) &= \sum_{K=k}^\infty w^{K,\alpha}_{N,k} \prod_{j=1}^{k} \frac{\Gamma(N_j+\frac{\alpha}{K})}{\Gamma(N_j + 1)} , \end{align*} \end{linenomath*}
where
\begin{linenomath*} \begin{align} \label{wDyn}
w^{K,\alpha}_{N,k} &= \frac{\commentR{\check{w}}^{K,\alpha}_{N,k}}{\sum_{K=k}^\infty \commentR{\check{w}}^{K,\alpha}_{N,k} C ^{K, \alpha} _{N,k}}, &
\commentR{\check{w}}^{K,\alpha}_{N,k} &= \frac{ p(K) K !}{(K- k )! K^k \Gamma(1 + \frac{\alpha}{K})^k}. \end{align} \end{linenomath*} These results suggest that the dynamic MFM has an increased flexibility with respect to the prior on the partitions compared to the static MFM and the DPM. Empirical differences to the dynamic MFM when varying the prior on $K$ are investigated in Section~\ref{sec:insp-induc-priors}.
\subsubsection{Marginalising the prior on the labelled data cluster sizes}\label{sec:marg-prior-labell}
The marginal conditional distribution $P(N_j = n |N, K_+ = k , \bm{\gamma})$ is the same for all $j=1,\ldots,k$. In the following, we obtain without loss of generality $P(N_k = n |N, K_+ = k , \bm{\gamma})$ from $p(N_1, \ldots, N_{k} |N , \bm{\gamma} )$ by summing over all labelled data cluster sizes where the size of data cluster $k$ is equal to $n$, i.e., $N_{k} = n$, with $n= 1,\ldots,N-k+1$, and the remaining data cluster sizes sum up to $N-n$, i.e., $N_1+\ldots+N_{k-1}=N-n$:
\begin{linenomath*} \begin{multline*}
P(N_k = n |N, K_+ = k , \bm{\gamma}) = \frac{P(N_k= n |N, \bm{\gamma}) }{P(K_+ = k |N, \bm{\gamma})}\\
= \frac{N!}{ k! P(K_+ = k |N, \bm{\gamma}) } \sum_{K=k}^\infty p(K) \frac{ {V}_{N, k}^{K, \gamma_K}}{ \Gamma(\gamma_K) ^{k} } \frac{ \Gamma(n + \gamma_K )} {\Gamma(n + 1) } \sum_{\substack{N_1,\ldots, N_{k-1}>0 \\N_1+\ldots+N_{k-1}=N-n}} \prod_{j=1}^{k-1} \frac{ \Gamma(N_{j} + \gamma_K )} {\Gamma(N_j + 1)}. \end{multline*} \end{linenomath*}
Using the definition of $C^{K, \gamma_K}_{N,k}$ in Equation~\eqref{PposCk}, we obtain for $n=1,\ldots,N-k+1$:
\begin{linenomath*} \begin{align*}
P(N_k= n |N, K_+ = k , \bm{\gamma} ) &=
\displaystyle \frac{ \sum_{K=k}^\infty p(K) \frac{ {V}_{N, k}^{K, \gamma_K}}{ \Gamma( \gamma_K) ^{k} }
\frac{ \Gamma(n +\gamma_K)} {\Gamma(n + 1)} C^{K, \gamma_K}_{N-n,k-1}}
{ \sum_{K=k}^\infty p(K) \frac{ {V}_{N, k}^{K, \gamma_K}}{ \Gamma( \gamma_K ) ^{k}} C^{K, \gamma_K}_{N,k} }. \end{align*} \end{linenomath*}
Therefore, the marginal prior can be expressed for $n=1, \ldots, N- k+1$ and $j=1,\ldots,k$ as,
\begin{linenomath*} \begin{align} \label{PNj}
P(N_j= n |N, K_+ = k, \bm{\gamma}) &= \sum_{K=k}^\infty w^{K,\gamma_K}_{N,k}
\frac{ \Gamma(n +\gamma_K)} {\Gamma(n + 1)} C^{K, \gamma_K}_{N-n,k-1}, \end{align} \end{linenomath*}
where
\begin{linenomath*} \begin{align*}
w^{K,\gamma_K}_{N,k} &= \frac{\tilde{w}^{K,\gamma_K}_{N,k}}{\sum_{K=k}^\infty
\tilde{w}^{K,\gamma_K}_{N,k} C^{K, \gamma_K}_{N,k} },\\ \tilde{w}^{K,\gamma_K}_{N,k}&= \frac{p(K) {V}_{N, k}^{K, \gamma_K}}{ \Gamma( \gamma_K ) ^{k} } =
\frac{ p(K) (\gamma_K) ^{k} \Gamma(\gamma_K K) K ! }{ \Gamma(1+\gamma_K) ^k \Gamma(\gamma_K K+N) (K- k )!}. \end{align*} \end{linenomath*}
For the DPM, this simplifies to
\begin{linenomath*} \begin{align*}
P(N_j = n |N, K_+ = k) &=
\frac{1}{n C ^\infty _{N,k}}
\sum_{\substack{N_1,\ldots, N_{k-1}>0 \\N_1+\ldots+N_{k-1}=N -n}} \prod_{j=1}^{k-1} \frac{ 1}{ N_j } =
\frac{C ^\infty_{N-n,k-1}}{n C ^\infty _{N,k}}. \end{align*} \end{linenomath*}
For the static MFM, this prior is given by
\begin{linenomath*} \begin{align*}
P(N_j = n |N, K_+ = k, \gamma) & =
\frac{ \Gamma(n +\gamma)}{ \Gamma(n + 1) C ^{\gamma} _{N,k}}
\sum_{\substack{N_1,\ldots, N_{k-1} > 0\\N_1+\ldots+N_{k-1}=N -n}} \prod_{j=1}^{k-1} \frac{ \Gamma(N_j +\gamma)}{ \Gamma(N_j + 1)} \\
& = \frac{ \Gamma(n +\gamma)}{ \Gamma(n + 1)} \frac{C ^{\gamma} _{N-n,k-1}}{ C ^{\gamma} _{N,k}} . \end{align*} \end{linenomath*}
For the dynamic MFM, this is equal to
\begin{linenomath*} \begin{align*}
P(N_j = n |N, K_+ = k, \alpha) &= \sum_{K=k}^\infty w^{K,\alpha}_{N,k}\frac{\Gamma(n+\frac{\alpha}{K})}{\Gamma(n + 1)} C ^{K, \alpha} _{N-n,k-1} , \end{align*} \end{linenomath*}
\commentR{where $w^{K,\alpha}_{N,k}$ is the same as in (\ref{wDyn}).}\footnote{\commentR{Note that $\tilde{w}^{K,\alpha}_{N,k}= \frac{\alpha ^k \Gamma (\alpha) }{\Gamma (\alpha+N) } \check{w}^{K,\alpha}_{N,k}$ and the first factor cancels when normalising $ \tilde{w}^{K,\alpha}_{N,k}$ to obtain $w^{K,\alpha}_{N,k}$.}}
Compared to the prior on the number of data clusters $K_+$, this implies that for the dynamic MFM and each specific number of data clusters $k$, $C^{K, \gamma_K} _{N-n,k-1}$ needs to be determined not only for each $K$, but also for each $N-n$ with $n=1,\ldots,N-k+1$. For $w^{K,\gamma_K}_{N,k}$, $C^{K, \gamma_K} _{N,k}$ also needs to be determined. In the case of the static MFM and the DPM, the computation is less involved as $C^{K, \gamma_K} _{\tilde{n},k-1}$, $\tilde{n}=k-1, \ldots, N-1$, and $C^{K, \gamma_K} _{N,k}$ do not depend on $K$.
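For the static MFM the marginal conditional prior therefore only requires $C^{\gamma}_{N-n,k-1}$, $n=1,\ldots,N-k+1$, and $C^{\gamma}_{N,k}$. A minimal \textsf{R} sketch for $k \geq 2$, reusing the illustrative helper \texttt{C\_static()} from Section~\ref{sec:induced-prior-number}:
\begin{verbatim}
## Marginal conditional prior P(N_j = n | N, K_+ = k, gamma) for the static
## MFM (k >= 2), reusing C_static() from the sketch above
pNj_static <- function(N, k, gamma) {
  n <- 1:(N - k + 1)
  num <- exp(lgamma(n + gamma) - lgamma(n + 1)) *
         sapply(N - n, C_static, k = k - 1, gamma = gamma)
  num / C_static(N, k, gamma)
}
p <- pNj_static(N = 100, k = 3, gamma = 1)
sum(p)                    # probabilities sum to 1
sum(p * seq_along(p))     # E(N_j | K_+ = 3) equals N / k = 100 / 3 by symmetry
\end{verbatim}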
\subsubsection{Computing conditional prior means for function\commentBG{s of a single or two} data cluster sizes}\label{sec:comp-prior-moments}
The computation of the prior expectation $\mathbb{E}(\psi(N_j)|N, K_+ = k, \bm{\gamma})$ of any function $\psi(N_j)$ with respect to the conditional prior on the labelled data cluster sizes is straightforward, given the marginal prior $P(N_j = n |N, K_+ = k, \bm{\gamma})$ derived in Equation~\eqref{PNj}:
\begin{linenomath*} \begin{align}
\mathbb{E}(\psi(N_j)|N, K_+ = k, \bm{\gamma}) &= \sum_{n=1} ^{N- k+1}
\psi(n) P(N_j = n |N, K_+ = k, \bm{\gamma}) \nonumber\\
& = \sum_{K=k}^\infty w^{K,\gamma_K}_{N,k} \sum_{n=1} ^{N- k+1} \psi(n) \frac{ \Gamma(n +\gamma_K)}{ \Gamma(n + 1)}C ^{K, \gamma_K} _{N-n,k-1} . \label{EWj} \end{align} \end{linenomath*}
Note that $\mathbb{E}(\psi(N_j)|N, K_+ = k, \bm{\gamma})$ is the same for all $j=1, \ldots, k$.
The sequence $C^{K, \gamma_K}_{N- n,k-1}, n=1, \ldots, N- k+1 $ results for each $K$ as a byproduct of recursion \eqref{recck} in Algorithm~\ref{KNMFM} in Appendix~\ref{sec:algor-comp-prior}, since
\begin{linenomath*} \begin{align*}
\bm{c} _{K,k-1} &= \left( C_{N,k-1} ^{K, \gamma_K}, C_{N-1,k-1} ^{K, \gamma_K}, C_{N-2,k-1} ^{K, \gamma_K}, \ldots, C_{k-1,k-1} ^{K, \gamma_K} \right)^{\top}. \end{align*} \end{linenomath*}
Hence, the recursion in Algorithm~\ref{KNMFM} in Appendix~\ref{sec:algor-comp-prior} can be applied for each $K$ to determine $\bm{c} _{K,k-1} $.
Removing the first element of $\bm{c} _{K,k-1} $ then yields the $(N-k+1)$-dimensional vector $\tilde{\bm{c}}_{K,k-1} = (C^{K, \gamma_K}_{N-1,k-1},\ldots, C^{K, \gamma_K}_{k-1,k-1} )^{\top} $. $\mathbb{E}(\psi(N_1) |N, K_+ = k, \bm{\gamma})$ is thus computed efficiently using:
\begin{linenomath*} \begin{align}\label{eq:algorithm1}
\mathbb{E}(\psi(N_1) |N, K_+ = k, \bm{\gamma}) &= \sum_{K=k}^\infty w^{K,\gamma_K}_{N,k} \tilde{\bm{c}}_{K,k-1}^{\top} \bm{a}_k, \end{align} \end{linenomath*}
where $\bm{a} _k$ is an $(N-k+1)$-dimensional vector defined in Equation~\eqref{eq:a_k} with $ a_n = \tilde \psi ( n) $ and
\begin{linenomath*} \begin{align}\label{eq:tildepsi}
\tilde\psi (x) = \frac{\psi(x) \Gamma(x+\gamma_K)}{\Gamma(x + 1)}. \end{align} \end{linenomath*}
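For the static MFM the weights satisfy $\sum_{K} w^{K,\gamma}_{N,k} = 1/C^{\gamma}_{N,k}$, so that Equation~\eqref{eq:algorithm1} reduces to $\mathbb{E}(\psi(N_1)|N, K_+ = k, \gamma) = \sum_{n} \tilde\psi(n)\, C^{\gamma}_{N-n,k-1} / C^{\gamma}_{N,k}$. The following \textsf{R} sketch implements this special case, again reusing the illustrative helper \texttt{C\_static()} from above:
\begin{verbatim}
## E(psi(N_1) | N, K_+ = k, gamma) for the static MFM (k >= 2),
## the special case of Equation (eq:algorithm1); reuses C_static()
Epsi_static <- function(psi, N, k, gamma) {
  n <- 1:(N - k + 1)
  psitilde <- psi(n) * exp(lgamma(n + gamma) - lgamma(n + 1))  # Eq. (eq:tildepsi)
  cvec <- sapply(N - n, C_static, k = k - 1, gamma = gamma)    # C_{N-n, k-1}
  sum(psitilde * cvec) / C_static(N, k, gamma)
}
## Sanity check: psi(x) = x gives E(N_1 | K_+ = k) = N / k
Epsi_static(function(x) x, N = 100, k = 4, gamma = 1)   # 25
\end{verbatim}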
Next, we investigate how to determine the expectation $\mathbb{E}(\psi(N_j)\psi(N_\ell) |N, K_+ = k, \bm{\gamma})$ for $j\neq \ell$. For $k=2$, we can use that $N_{2} =N-N_1$, hence $\psi(N_1)\psi(N_2) = \psi(N_1)\psi(N-N_1)$ depends only on $N_1$; for the entropy functional $\psi(x)=x\log x$, for example,
\begin{linenomath*} \begin{align*} \psi (N_1) \psi (N_2) = N_1(\log N_1) N_{2} (\log N_{2})= N_1 (N- N_1) \log N_1 \, \log (N-N_1). \end{align*} \end{linenomath*}
Hence, Equation~\eqref{EWj} can be used to compute $\mathbb{E}(\psi(N_1)\psi(N_2) |N, K_+ = 2, \bm{\gamma})$ after replacing $\psi(n)$ by $\psi(n)\psi(N-n)$.
For $k \geq 3$, the bivariate marginal prior $p(N_1, N_2|N, K_+ = k, \bm{\gamma})$ is given for all pairs $\{(N_1,N_2): 2 \leq N_1+N_2 \leq N-k+2\}$ by:
\begin{linenomath*} \begin{align*}
p(N_1, N_2|N, K_+ = k, \bm{\gamma}) &= \sum_{K=k}^\infty w^{K,\gamma_K}_{N,k} \left[\prod_{j=1}^{2} \frac{\Gamma(N_j+\gamma_K)}{\Gamma(N_j + 1)}\right] C_{N-N_1-N_2,k-2}^{K, \gamma_K}, \end{align*} \end{linenomath*}
where $w^{K,\gamma_K}_{N,k}$ are the same weights as in Equation~\eqref{PNj}. In principle, $\mathbb{E}(\psi(N_1)\psi(N_{2}) |N, K_+ = k, \bm{\gamma})$ is obtained by summing $p(N_1, N_2|N, K_+ = k, \bm{\gamma})$ over all possible pairs $(N_1,N_2)$:
\begin{linenomath*} \begin{multline*}
\mathbb{E}(\psi(N_1)\psi(N_{2}) |N, K_+ = k, \bm{\gamma}) =\\ \sum_{K=k}^\infty w^{K,\gamma_K}_{N,k} \sum_{n_1=1} ^{N- k+1} \sum_{n_2=1} ^{N- n_1 - k+2} \prod_{j=1}^{2} \frac{\psi(n_j) \Gamma(n_j+\gamma_K)}{\Gamma(n_j + 1)} C_{N-n_1-n_2,k-2}^{K, \gamma_K}. \end{multline*} \end{linenomath*}
It is convenient to arrange the enumeration such that one sums over $n =n_1+n_2$:
\begin{linenomath*} \begin{align*}
\mathbb{E}(\psi(N_1)\psi(N_{2}) |N, K_+ = k, \bm{\gamma}) &= \sum_{K=k}^\infty w^{K,\gamma_K}_{N,k} \sum_{ n =2} ^{N- k+2} C^{K, \gamma_K}_{N- n,k-2} \sum_{ m =1} ^{ n -1} \tilde \psi ( m) \tilde\psi (n - m), \end{align*} \end{linenomath*}
where again $\tilde\psi (x)$ is as defined in Equation~\eqref{eq:tildepsi}.
The sequence of inner sums $\sum_{ m =1} ^{ n -1} \tilde \psi ( m) \tilde\psi (n - m)$ for $n=2,\ldots,N-k+2$ corresponds to the vector resulting from multiplying the matrix $\bm{A}_k$ with the vector $\bm{a}_k$ where $\bm{A} _k$ is a $(N-k+1) \times (N-k+1)$ lower triangular Toeplitz matrix and $\bm{a} _k$ is the $(N-k+1)$-dimensional vector defined as
\begin{linenomath*} \begin{align} \bm{A}_k &= \left( \begin{array}{lllll} a_1 & & & & \\ a_2 & a_1 & & & \\ \vdots & \ddots & \ddots & & \\ a_{N-k} & & a_2 & a_1 & \\ a_{N-k+1} & \ddots & \ddots & a_2 & a_1 \\ \end{array} \right), \qquad
\bm{a}_k = \left( \begin{array}{c} a_1 \\ \vdots \\ a_{N-k+1} \end{array} \right),
\label{eq:a_k} \end{align} \end{linenomath*}
where $ a_n = \tilde \psi ( n) $. The sequence $C^{K, \gamma_K}_{N- n,k-2}, n=2, \ldots, N- k+2 $ results for each $K$ as a byproduct of recursion \eqref{recck} in Algorithm~\ref{KNMFM} in Appendix~\ref{sec:algor-comp-prior}, since
\begin{linenomath*} \begin{align*} \bm{c} _{K,k-2} &= \left(C_{N,k-2} ^{K, \gamma_K}, C_{N-1,k-2} ^{K, \gamma_K}, C_{N-2,k-2} ^{K, \gamma_K}, \ldots, C_{k-2,k-2} ^{K, \gamma_K}\right)^{\top}. \end{align*} \end{linenomath*}
Hence, the recursion in Algorithm~\ref{KNMFM} in Appendix~\ref{sec:algor-comp-prior} is applied for each $K$ to determine $\bm{c} _{K,k-2} $.
Removing the first two elements of $\bm{c} _{K,k-2} $ then yields the $(N-k+1)$-dimensional vector $\check{\bm{c}}_{K,k-2} = (C^{K, \gamma_K}_{N- 2,k-2},\ldots, C^{K,\gamma_K}_{k-2,k-2} )^{\top} $.
$\mathbb{E}(\psi(N_1)\psi(N_{2}) |N, K_+ = k, \bm{\gamma})$ is computed efficiently using:
\begin{linenomath*} \begin{align}\label{eq:algorithm2}
\mathbb{E}(\psi(N_1)\psi(N_{2}) |N, K_+ = k, \bm{\gamma}) &= \sum_{K=k}^\infty w^{K,\gamma_K}_{N,k} \check{\bm{c}}_{K,k-2}^{\top} \bm{A}_k \bm{a}_k. \end{align} \end{linenomath*}
Again, $\mathbb{E}(\psi(N_j)\psi(N_{\ell}) |N, K_+ = k, \bm{\gamma})$ is the same for all $j,\ell=1,\ldots,k$, $j \neq \ell$, and thus given by Equation~\eqref{eq:algorithm2}.
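As an illustration of Equation~\eqref{eq:algorithm2} for the static MFM with $k \geq 3$, the product $\bm{A}_k \bm{a}_k$ amounts to a simple convolution. The following \textsf{R} sketch again reuses the illustrative helpers from above and is a simplified special case, not the \textbf{fipp} implementation:
\begin{verbatim}
## E(psi(N_1) psi(N_2) | N, K_+ = k, gamma) for the static MFM (k >= 3),
## Equation (eq:algorithm2); the inner sum is the convolution encoded by A_k a_k
Epsipsi_static <- function(psi, N, k, gamma) {
  m <- 1:(N - k + 1)
  a <- psi(m) * exp(lgamma(m + gamma) - lgamma(m + 1))          # a_m = psitilde(m)
  ns <- 2:(N - k + 2)
  inner <- sapply(ns, function(n) sum(a[1:(n - 1)] * a[(n - 1):1]))
  cvec <- sapply(N - ns, C_static, k = k - 2, gamma = gamma)    # C_{N-n, k-2}
  sum(cvec * inner) / C_static(N, k, gamma)
}
## Example: covariance of two data cluster sizes given K_+ = 4 (negative, as expected)
Epsipsi_static(function(x) x, 100, 4, 1) - Epsi_static(function(x) x, 100, 4, 1)^2
\end{verbatim}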
\subsubsection{Computing the prior mean and variance of the functionals conditional on $K_+$}\label{sec:computing-prior-mean}
With Equations \eqref{eq:algorithm1} and \eqref{eq:algorithm2}, the first two moments of $\Psi(N_1,\ldots,N_k)$ conditional on $K_+ = k$ can be calculated efficiently. In the following, we derive the conditional mean and variance of $\Psi$ written in terms of the quantities derived in Section~\ref{sec:comp-prior-moments}. Additionally, two empirically relevant examples of functionals $\Psi$ which help to characterise the prior on the partitions are introduced. One of the examples is the relative entropy suggested by \citet{Green+Richardson:2001} and the other is the number of singletons in the partitions. In Sections~\ref{sec:insp-induc-priors} and \ref{sec:comp-defa-priors}, these two functionals are evaluated for all three Bayesian mixture models with various prior settings.
The prior mean and variance of $\Psi(N_1,\ldots,N_k)$ conditional on $K_+ = k$ as well as $N$ and $\bm{\gamma}$ are given by
\begin{linenomath*} \begin{align} \label{EFUN}
\mathbb{E}(\Psi(N_1, \ldots,N_k)|N, K_+ = k, \bm{\gamma})
&= k\, \mathbb{E}(\psi(N_j)|N, K_+ = k , \bm{\gamma}),\\
\label{VFUN}
\mathbb{V}(\Psi(N_1, \ldots,N_k)|N, K_+ = k, \bm{\gamma})
&= k\, \mathbb{E}(\psi(N_j)^2|N, K_+ = k , \bm{\gamma}) \\
&\quad + k(k-1)\, \mathbb{E}(\psi(N_j) \psi(N_{\ell})|N, K_+ = k , \bm{\gamma})
- k^2 \left(\mathbb{E}(\psi(N_j)|N, K_+ = k , \bm{\gamma})\right)^2, \nonumber \end{align} \end{linenomath*}
with $j\neq \ell$.
The expectation in Equation~\eqref{EFUN} and all expectations in Equation~\eqref{VFUN} involving a single data cluster size $N_j$ are evaluated efficiently with Equation~\eqref{eq:algorithm1} while $\mathbb{E}(\psi(N_j) \psi(N_{\ell}) |N, K_+ = k, \bm{\gamma})$ is computed using Equation~\eqref{eq:algorithm2}.
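Putting the pieces together, a short \textsf{R} sketch of Equations~\eqref{EFUN} and \eqref{VFUN} for the static MFM with $k\geq 3$, reusing the illustrative helper functions introduced above:
\begin{verbatim}
## Prior mean and variance of Psi(N_1, ..., N_k) given K_+ = k for the static
## MFM (k >= 3), Equations (EFUN) and (VFUN); reuses Epsi_static() and
## Epsipsi_static() from the earlier sketches
Psi_moments_static <- function(psi, N, k, gamma) {
  e1  <- Epsi_static(psi, N, k, gamma)
  e2  <- Epsi_static(function(x) psi(x)^2, N, k, gamma)
  e12 <- Epsipsi_static(psi, N, k, gamma)
  c(mean = k * e1,
    var  = k * e2 + k * (k - 1) * e12 - k^2 * e1^2)
}
## Example: the entropy functional psi(x) = x log(x), conditional on K_+ = 4
Psi_moments_static(function(x) x * log(x), N = 100, k = 4, gamma = 1)
\end{verbatim}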
\paragraph{Relative entropy.}
The relative entropy in a partition with a fixed number $k$ of data clusters is defined as
\begin{linenomath*} \begin{align*} \mathcal{E}(N_1, \ldots, N_k) / \log k = - \frac{1}{\log k } \sum_{j=1}^k \frac{N_j}{N} \log \frac{N_j}{N} = -\frac{1}{N \log k} \sum_{j=1}^k N_j \log N_j + \frac{\log N}{\log k}. \end{align*} \end{linenomath*}
Regardless of $k$, the relative entropy takes values in (0, 1] with values close to 1 indicating similarly large data cluster sizes $N_1,\ldots, N_k$. For the most balanced clustering where all $N_j$, $j=1,\ldots,k$ are equal, the relative entropy is exactly equal to 1.
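As a quick numerical check of this formula (plain \textsf{R}, independent of any package):
\begin{verbatim}
## Relative entropy of a given vector of data cluster sizes
rel_entropy <- function(Nj) {
  N <- sum(Nj); k <- length(Nj)
  -sum(Nj / N * log(Nj / N)) / log(k)
}
rel_entropy(c(25, 25, 25, 25))   # perfectly balanced: 1
rel_entropy(c(97, 1, 1, 1))      # very unbalanced: close to 0
\end{verbatim}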
Higher prior mean values indicate that a-priori more balanced partitions are induced, while larger prior variance or standard deviation values indicate that the prior partition distribution is more flexible.
The calculation of the relative entropy is based on the functional $\psi(N_j)=N_j \log N_j$. The prior expectation of the relative entropy is equal to $\mathbb{E}_{\mathcal{E},k} = \mathbb{E}(\mathcal{E}(N_1, \ldots,N_k)|N, K_+ = k, \bm{\gamma})/\log k $ with
\begin{linenomath*} \begin{align*}
\mathbb{E}(\mathcal{E}(N_1, \ldots,N_k)|N, K_+ = k, \bm{\gamma})
&= \log N - \frac{k}{N} \mathbb{E}(N_j \log N_j|N, K_+ = k , \bm{\gamma}). \end{align*} \end{linenomath*}
The prior variance of the relative entropy is equal to $\mathbb{V}_{\mathcal{E},k} = \mathbb{V}(\mathcal{E}(N_1, \ldots,N_k)|N, K_+ = k, \bm{\gamma})/(\log k)^2$ with
\begin{linenomath*} \begin{multline*}
\mathbb{V}(\mathcal{E}(N_1, \ldots,N_k)|N, K_+ = k, \bm{\gamma})
=
\frac{1}{N^2}\left(
k \mathbb{E}(N_j^2 (\log N_j)^2|N, K_+ = k , \bm{\gamma}) + \right. \\
k(k-1) \mathbb{E}(N_j(\log N_j) N_{\ell} (\log N_{\ell}) |N, K_+ = k , \bm{\gamma})
- \\
\left.
k^2 (\mathbb{E}(N_j \log N_j|N, K_+ = k , \bm{\gamma}))^2 \right), \end{multline*} \end{linenomath*}
where $j\neq \ell$.
\paragraph{Number of singletons.}
The calculation of the number of singletons is based on the functional $\psi(N_j)= \vmathbb{1}_{\{N_j = 1\}}$, where $\vmathbb{1}$ is the indicator function. The prior mean and variance are straightforward to calculate by plugging the functional into Equations~\eqref{EFUN} and \eqref{VFUN}.
\subsubsection{Computing the prior mean and variance of the functionals unconditional on $K_+$}\label{sec:computing-prior-mean-uncond}
With the distribution of $p(K_+|N,\bm{\gamma})$ derived in Section~\ref{sec:induced-prior-number} coupled with the first two moments of symmetric additive functionals conditional on $K_+$ shown in Section~\ref{sec:computing-prior-mean}, we are now ready to marginalise out $K_+$ to obtain the first two moments of symmetric additive functionals unconditional on $K_+$.
By the law of total expectation, the prior mean of $\Psi(N_1,\ldots,N_{K_+})$ unconditional on $K_+$ can be computed trivially combining $\mathbb{E}_k \equiv \mathbb{E}(\Psi(N_1,\ldots,N_k)|N,K_+ = k,\bm{\gamma})$ derived in Equation \eqref{EFUN} and $P(K_+ = k|N,\bm{\gamma})$ derived in Equation \eqref{pKplus} as follows:
\begin{linenomath*} \begin{align}
\mathbb{E}(\Psi(N_1,\ldots,N_{K_+})|N,\bm{\gamma}) = \sum_{k = 1}^N \mathbb{E}_k P(K_+ = k|N,\bm{\gamma}). \label{EFUNuncond} \end{align} \end{linenomath*}
Following the same line of reasoning, by the law of total variance, the prior variance of $\Psi(N_1,\ldots,N_{K_+})$ unconditional on $K_+$ can be obtained from Equations \eqref{VFUN} and \eqref{pKplus} as follows:
\begin{linenomath*} \begin{multline}
\mathbb{V}(\Psi(N_1,\ldots,N_{K_+})|N,\bm{\gamma}) = \sum_{k = 1}^{N-1}\mathbb{V}_kP(K_+ = k|N,\bm{\gamma})+\label{VFUNuncond}\\
\sum_{k = 1}^N \mathbb{E}_k^2P(K_+ = k |N, \bm{\gamma})(1-P(K_+ = k |N, \bm{\gamma}))-\\
2\sum_{k = 2}^{N}\sum_{k'=1}^{k-1}\mathbb{E}_k\mathbb{E}_{k'}P(K_+ = k |N, \bm{\gamma})P(K_+ = k' |N, \bm{\gamma}), \end{multline} \end{linenomath*}
where $\mathbb{V}_k \equiv \mathbb{V}(\Psi(N_1,\ldots,N_k)|N,K_+ = k,\bm{\gamma})$ and the sum is taken over $k = 1$ to $N-1$ in the variance term of Equation \eqref{VFUNuncond} since the variance of any functional is trivially 0 \commentBG{for $k = N$, because there exists only one partition which separates $N$ data points into $N$ groups}.
For example, if one wants to compute the prior mean and variance of the relative entropy unconditional on $K_+$, $\mathbb{E}_k$ can be set to $\mathbb{E}_{\mathcal{E},k}$ introduced in the previous section and similarly $\mathbb{V}_k$ can be set to $\mathbb{V}_{\mathcal{E},k}$.
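In \textsf{R}, combining the conditional moments with the prior on $K_+$ amounts to the following sketch; the variance is written in the equivalent compact form $\sum_k \mathbb{V}_k p_k + \sum_k \mathbb{E}_k^2 p_k - (\sum_k \mathbb{E}_k p_k)^2$ with $p_k = P(K_+ = k|N,\bm{\gamma})$, and the inputs are assumed to have been computed as in the earlier sketches (or with \textbf{fipp}):
\begin{verbatim}
## Laws of total expectation and variance: combine conditional moments E_k, V_k
## (k = 1, ..., N) with the prior p_k = P(K_+ = k | N, gamma); illustrative only,
## the inputs are assumed to be computed as in the earlier sketches or with fipp
uncond_moments <- function(Ek, Vk, pk) {
  c(mean = sum(Ek * pk),
    var  = sum(Vk * pk) + sum(Ek^2 * pk) - sum(Ek * pk)^2)
}
\end{verbatim}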
\section{Empirical inspection of the induced priors}\label{sec:insp-induc-priors}
This section demonstrates the proposed methodology by comparing the static and dynamic MFMs with respect to various aspects of their induced prior on the partitions, utilising the tools introduced in Section~\ref{sec:induced-priors}. The DPM is not included in this comparison as it assumes infinitely many clusters in the population, as opposed to MFM models with a finite but unknown number of clusters $K$ in the population. The comparison specifically involves the prior on the number of data clusters $K_+$ and the prior moments of several functionals computed over the partitions for these two MFM models. These implicit finite-sample characteristics are induced by three data and model specifications: the sample size $N$, the prior on the number of clusters in the population $p(K)$, and the hyperparameter of the prior component weight distribution, $\gamma$ or $\alpha$, depending on the type of MFM.
In the following, we systematically compare these induced priors by considering several different priors on $K$: a uniform distribution on $[1, 30]$ \citep{Richardson+Green:1997} for $K$, the geometric distribution \commentF{$\mbox{\rm Geo}(0.1)$}
for $K-1$ \citep{Miller+Harrison:2018} and the beta-negative-binomial distribution BNB$(1, 4, 3)$ for $K-1$ \citep{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}.
The sample size is fixed to $N = 100$ across all comparisons considered in this section.
\subsection{Comparing the prior on the number of data clusters $K_+$}\label{sec:revis-prior-numb}
To begin with, the prior distribution on $K_+$ is compared between the static and dynamic MFMs. However, the Dirichlet parameters $\gamma$ and $\alpha$ are not directly comparable when fixed to the same value. For this reason, moment matching with respect to the prior mean of $K_+$ is done twice: on the one hand, $\alpha$ of the dynamic MFM is chosen to match the static MFM with $\gamma = 1$, and on the other hand, $\gamma$ of the static MFM is matched to the dynamic MFM with $\alpha = 1$. The former results are presented in Figure~\ref{fig:figure1a-moment-rev} and the latter in Figure~\ref{fig:figure1b-moment-rev}.
Figure~\ref{fig:figure1a-moment-rev} indicates that no perceivable differences in the distribution of $K_+$ (black bars) can be seen between the static MFM in the top row and the dynamic MFM in the bottom row within the same column, where each column represents a different prior on $K$. Under this setting, the prior on $K_+$ traces the prior on $K$ to a certain extent for all cases.
\begin{figure}
\caption{The prior probabilities of
$K$ (in grey) and $K_+$ (in black) for different priors on $K$ and
$N=100$, for the static MFM with $\gamma = 1$ and the dynamic
MFM where $\alpha$ is specified to induce the same prior mean value
for $K_+$.}
\label{fig:figure1a-moment-rev}
\end{figure}
On the other hand, in Figure~\ref{fig:figure1b-moment-rev}, the static and dynamic MFM models differ greatly in their priors on $K_+$ when the uniform or geometric prior is assigned to $K$. Noticeably, for the uniform and geometric prior cases, the dynamic MFM assigns less mass to the probability of homogeneity, that is, $p(K_+ = 1|N,\bm{\gamma})$. Also, for the uniform and geometric prior on $K$, a clear difference between the priors on $K$ and $K_+$ is visible, with a lot more mass assigned to small values of $K_+$ than of $K$.
\begin{figure}
\caption{The prior probabilities of
$K$ (in grey) and $K_+$ (in black) for different priors on $K$ and
$N=100$, for the dynamic MFM with $\alpha = 1$ and the static MFM
where $\gamma$ is specified to induce the same prior mean value for
$K_+$.}
\label{fig:figure1b-moment-rev}
\end{figure}
To summarise, for both MFM models, the characteristics of the prior on $K_+$ are sometimes markedly different from those of $K$. In particular, those of the dynamic MFM depart quite considerably from the prior on $K$ under certain combinations of $\alpha$ and $p(K)$, as demonstrated in Figure~\ref{fig:figure1b-moment-rev}. This strongly indicates the need to use the proposed methodology to investigate the induced prior on $K_+$.
\subsection{Comparing the prior on the partitions based on
symmetric additive functionals}\label{sec:revis-prior-part}
Section~\ref{sec:char-prior-part} introduced procedures to compute the prior mean and variance of any symmetric additive functionals over the induced prior partitions. Here, we specifically consider two functionals introduced there: the relative entropy and the number of singletons in the partitions. \begin{figure}
\caption{ The prior mean and standard
deviation of the relative entropy of the partitions for
$K_+ \in [2, 8]$, for three different
priors on $K$, for the static and dynamic MFM with $\gamma$ or
$\alpha \in \{0.1, 1, 5\}$ (from left to right) and $N = 100$.}
\label{fig:entropy-a-rev}
\end{figure}
Each plot of Figure~\ref{fig:entropy-a-rev} shows the prior mean and standard deviation obtained for the relative entropy of the partition distribution conditional on a specific number of data clusters $K_+$ ranging from 2 to 8. Column-wise, these figures are arranged in order of magnitude of their corresponding hyperparameter $\gamma$ or $\alpha$ increasing from left to right whereas each row represents the specific MFM model. As shown in Section~\ref{sec:induc-cond-prior-1}, for the static MFM, the prior on $K$ does not have any impact once conditioned on $K_+$. Therefore, figures in the top row do not vary by $p(K)$ as opposed to those in the bottom row representing results for the dynamic MFM which seem to \commentBG{slightly vary} depending on the prior on $K$, \commentBG{in particular} when $\alpha = 1$.
It can be seen that for the static MFM, the prior mean of the relative entropy increases for larger values of $K_+$ and also for greater values of $\gamma$. Conversely, the prior standard deviation decreases with these changes in $K_+$ and $\gamma$. Results are similar for the dynamic MFM if the Dirichlet parameter is small, i.e., $\alpha = 0.1$. The larger $\alpha$, the stronger the difference to the static MFM appears to be. For $\alpha = 5$ the prior mean is even decreasing for increasing $K_+$.
\begin{figure}
\caption{ The prior mean and standard
deviation of the relative entropy of the partitions for
$K_+ \in [2, 8]$, for different
priors on $K$, for the dynamic MFM with $\alpha = 1$ and the corresponding moment matched static MFM, all conditional on $N = 100$.}
\label{fig:entropy-matched-rev}
\end{figure}
To perform side-by-side comparison between the static and dynamic MFM models, we focus on the specific setting where the hyperparameter of the dynamic MFM is fixed to $\alpha = 1$ while considering \commentBG{again the same} three \commentBG{different} priors on $K$. The corresponding static MFM for each specification is chosen by employing the moment matching approach with respect to the unconditional (with respect to $K_+$) mean of the relative entropy. Specifically, $\gamma$ of each static MFM is chosen to match the corresponding dynamic MFM with $\alpha = 1$. By going over the results summarised in Figure~\ref{fig:entropy-matched-rev} column-wise, it is clearly \commentBG{visible} that \commentBG{after matching, the
differences between} each pair of the static MFM and its dynamic counterpart \commentBG{are negligible} in terms of their prior means and standard deviations conditional on each $K_+$.
\begin{figure}
\caption{ The prior mean and standard
deviation of the relative entropy unconditional on $K_+$, computed over the partitions in
dependence of $\gamma$ or $\alpha$ for different priors on $K$ and
the static and dynamic MFM, with $N = 100$.}
\label{fig:weightedentropy-rev}
\end{figure}
To complete the analysis involving the relative entropy, the functional is computed over the induced prior on the partitions unconditional on $K_+$ (see Section~\ref{sec:computing-prior-mean-uncond}). Again, this quantity is evaluated for the three priors on $K$ in combination with the static or dynamic MFM for increasing values of $\gamma$ or $\alpha$ \commentBG{and results are} shown in Figure~\ref{fig:weightedentropy-rev}. For the static MFM, the conditional relative entropy of the partitions does not vary with respect to $p(K)$ (see also the theoretic derivations in Section~\ref{sec:induc-cond-prior-1} and Figure~\ref{fig:entropy-a-rev}). Therefore, differences in both the mean and standard deviation among the \commentBG{results for the} static MFMs shown in \commentBG{the different columns of the top row
of} Figure~\ref{fig:weightedentropy-rev} originate from differences in the induced prior on $K_+$. One can clearly observe that these specifications do imply partitions \commentBG{with rather similar
characteristics for the uniform and the geometric prior on $K$,
whereas in particular the prior mean relative entropy is much lower
for the BNB prior on $K$. In particular for the uniform and
geometric prior on $K$ the standard deviation peaks for a rather
small value of $\gamma$ with a sharp decrease followed by a
levelling off.} For the dynamic MFM, not only the induced prior on $K_+$, but also the conditional relative entropy of the partitions depend on $p(K)$. Thus, one might expect even greater differences in the prior mean and standard deviation of the relative entropy among the three dynamic specifications. However, the bottom row results in Figure~\ref{fig:weightedentropy-rev} suggest that the level of variability in the prior partition unevenness \commentBG{is to some
extent comparable for the static and the dynamic MFM}.
Another functional of interest is the number of singletons computed over the induced prior partitions. The results are shown in Figure~\ref{fig:singleton-rev} \commentBG{using} the same format as in Figure~\ref{fig:entropy-a-rev}, now with \commentF{the} $y$-axis representing the prior mean and standard deviation of the number of singleton clusters. Again, the results for the static MFM do not depend on $p(K)$. Those of the dynamic MFM depend on $p(K)$ but only slightly, with the largest impact of $p(K)$ being again observable for $\alpha = 1$. For both MFMs, an increase in the component weight hyperparameter $\gamma$ or $\alpha$ corresponds to an overall decrease in the expected number of singletons and its variability. This is expected as the partition\commentBG{s} will be more evenly balanced as $\gamma$ or $\alpha$ increases, see Figure~\ref{fig:entropy-a-rev}. This means that partitions given high prior probability will mainly \commentBG{contain} clusters with more than one observation.
\begin{figure}
\caption{The prior mean and standard
deviation of the number of singletons, for different priors on $K$
and the static and dynamic MFM, with $\gamma$ or
$\alpha \in \{0.1, 1, 5\}$ and $N = 100$ for
$K_+ \in [2, 8]$.}
\label{fig:singleton-rev}
\end{figure}
\section{Comparing default priors in Bayesian cluster analysis}\label{sec:comp-defa-priors}
The main focus of this section is to understand the induced prior on the partitions when defining specific prior combinations as suggested in the previous literature for the three models introduced in Section~\ref{sec:explicit-priors}: \begin{enumerate} \item DPMs with $\alpha = 1/3$ \citep[see][]{Escobar+West:1995}. \item Static MFMs with a uniform prior $[1, 30]$ on $K$ and
$\gamma = 1$ \citep[see][]{Richardson+Green:1997}. \item Dynamic MFMs with a BNB$(1, 4, 3)$ prior on $K-1$ and
$\alpha = 2/5$
\citep[see][]{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020}. \end{enumerate} Note that \citet{Escobar+West:1995} and \citet{Fruehwirth-Schnatter+Malsiner-Walli+Gruen:2020} propose a hyperprior on $\alpha$. Here, the modal values of these hyperpriors are used as fixed values, resulting in $\alpha = 1/3$ for the DPM and $\alpha = 2/5$ for the dynamic MFM. Throughout this comparison, the sample size is fixed to $N = 100$.
Figure~\ref{fig:priorKplus} shows the prior probabilities for $K$ and $K_+$ for the aforementioned three modelling approaches. For all three modelling approaches, clear differences between the imposed prior on $K$ and the implicitly obtained prior for $K_+$ are discernible.
\begin{figure}
\caption{
The prior probabilities of $K$ (in grey) and $K_+$ (in black) for
the three modelling approaches.}
\label{fig:priorKplus}
\end{figure}
The DPM approach with $\alpha = 1/3$ puts all mass at $K=\infty$ and hence, only the implicit prior on $K_+$ is visualised. This prior is unimodal with mode at $K_+ = 2$ and hardly any mass is assigned beyond 10. This implies that a sparse clustering solution with only a few data clusters has high prior probability, but the homogeneity model is not particularly supported a-priori. Similarly, as shown in Section~\ref{sec:insp-induc-priors}, the dynamic MFM model with the BNB$(1, 4, 3)$ prior on $K$ is also sparsity-inducing with high prior mass concentrated around values ranging from 1 to 4 on $K_+$. However, in contrast to the DPM approach, the homogeneity model is given by far the highest prior probability. Finally, the static MFM with a uniform prior $[1, 30]$ on $K$ behaves entirely differently as the differences between the prior on $K$ and $K_+$ are smallest. Slightly increasing probabilities for $K_+$ up to 20 indicate that a-priori no penalisation to obtain a sparse solution is imposed in this setting.
\commentBG{Table~\ref{tab:results} characterises the prior on t}he partitions implied by the three modelling approaches \commentBG{using
some statistics to summarise} the induced prior on $K_+$ and the balancedness of \commentF{the} partitions.
\begin{table}[h!]
\begin{tabular}{l|c|c|c}
\hline
&DPM& static MFM & dynamic MFM \\
\hline
Mean of $K_+$& 2.6 &13.0&1.4\\
Variance of $K_+$&1.5&45.5&0.4\\
99\% quantile of $K_+$ &6&25&4\\
Probability of $K_+ = 1$ (homogeneity) &0.19&0.03&0.71\\
\hline
Relative entropy when $K_+=2$&0.45 (0.32)&0.73 (0.26)&0.51 (0.32)\\
Relative entropy when $K_+=4$&0.57 (0.20)&0.79 (0.13)&0.59 (0.19)\\
Relative entropy when $K_+=6$&0.64 (0.15)&0.82 (0.09)&0.65 (0.15)\\
Relative entropy when $K_+=8$&0.68 (0.12)&0.84 (0.07)&0.69 (0.12)\\
Relative entropy (unconditional on $K_+$)&0.41 (0.25)&0.83 (0.14)&0.15 (0.14)\\
\hline
Number of singletons when $K_+=10$ &2.48 (1.28) &0.91 (0.87)&2.42 (1.27)\\
\hline
\end{tabular}
\caption{Results regarding the characterisation of the induced prior on $K_+$ and the prior on the partition\commentBG{s} for the three modelling approach\commentBG{es}.
For the relative entropy and the number of singletons,
\commentBG{standard deviations are given in parentheses after the mean values}.}\label{tab:results} \end{table}
Complementing the insights gained from Figure~\ref{fig:priorKplus}, the descriptive statistics given in Table~\ref{tab:results} to summarise the prior on the number of data clusters $K_+$ for these three model specifications also suggest that the specifications employed in the DPM and the dynamic MFM are sparsity inducing, while the static MFM has a more diffuse prior on $K_+$. The static MFM in fact has a much higher variance and a $99\%$ quantile of 25, with the upper bound of $K_+$ being equal to 30 for this model. Also, while the DPM and the dynamic MFM both a-priori assume sparsity in the number of data clusters, the prior probability assigned to the homogeneity case is much higher for the dynamic MFM with a probability of approximately 0.71, while that of the DPM is about 0.19.
The results for the relative entropy indicate that the static MFM with a uniform prior on $K$ gives higher probability a-priori to partitions with evenly sized data clusters than the other two specifications. This is indicated by the conditional and unconditional mean values of the relative entropy being much closer to 1 compared to the other two models, while the conditional and unconditional standard deviations are smaller.
For the DPM and the dynamic MFM, the relative entropy conditional on $K_+$ seems to have almost identical mean and corresponding standard deviation values across all the values of $K_+$ shown in Table~\ref{tab:results}. However, the unconditional relative entropy differs greatly between these two specifications. Most likely this is due to the difference in the probability of homogeneity.
\commentBG{The results for} the number of singletons \commentBG{indicate again that the prior mean and standard deviation
are comparable for the DPM and the dynamic MFM specification
conditional on $K_+ = 10$ with on average more than two clusters
containing only a single observation. By contrast, only at most one
cluster is a-priori expected to contain a single observation for the
static MFM.}
Summarising the results of the comparison, it can be noted that the DPM and the dynamic MFM \commentBG{using} the particular setting employed in the previous literature induce a similar \commentBG{prior
on the partitions conditional on the number of data clusters} regarding the balancedness of the cluster sizes and the number of singletons. Still, the latter approach is more sparsity inducing as can be seen from the \commentBG{higher} prior probability of homogeneity. In contrast, the static MFM employed with the uniform prior on $K$ \commentBG{and on the weights assigns} much higher \commentBG{weight} a-priori to clustering solutions \commentBG{containing many data clusters and results in rather high
prior mean values for the relative entropy}. This rather counterintuitive result of fixing a diffuse prior on $K$ \commentBG{and the weights} resulting in an informative prior on \commentBG{the} partitions should serve as a cautionary tale and motivate the use of the proposed methodology to appropriately characterise the induced prior on the partitions.
\section{Implications \commentF{for applied finite mixture analysis}}\label{sec:conclusions}
The modelling aims in model-based clustering depend on the specific application and the prior domain knowledge available. Possible scenarios are: (1) A coarse grouping of the data is aimed at in order to identify basic structure, (2) a specific grouping is known and should be reproduced without explicitly using this grouping in the data analysis and (3) a flexible approximation of the data distribution using many clusters is desired.
For the first case, clearly a sparsity inducing prior specification is desired. This suggests using a dynamic MFM with a prior on $K$ with a mode at 1, decreasing probability weights for increasing $K$ and a rather small value for $\alpha$ (see the specific dynamic MFM approach considered in Section~\ref{sec:comp-defa-priors}).
In the second case, the aim is to choose explicit priors such that the characteristics of the known grouping coincide with those of the induced implicit priors. Clearly, a suitable prior specification is characterised by inducing implicit priors which assign substantial mass to the known characteristics, such as the number of data clusters $K_+$. If more detailed prior information regarding the characteristics of the known partition is available, this needs to be translated into some form of symmetric additive functional whose prior mean and standard deviation may be computed given the induced prior on the partitions. Then $p(K)$ and $\gamma_K$ can be chosen so as to match their desired values. The suitability of such a specification would also need to be assessed based on the sample size $N$.
In the third case, sparsity inducing priors are not desirable. The prior specification, however, depends on the assumption of how complex the approximation should be. For example, the static MFM specification with a uniform prior on $[1, 30]$ for $K$, $\gamma = 1$ and $N = 100$ assigns rather comparable prior weights to values of the number of data clusters $K_+$ ranging from 1 to 20, thus encouraging approximations with many clusters. \citet{Richardson+Green:1997} use this specification in the context of density approximation.
\section{Summary}\label{sec:summary}
In this work, we reviewed Bayesian cluster analysis methods based on mixture models and
explicit priors imposed on the number of components and the weight distributions for different modelling approaches. Fixing these explicit prior specifications induces a specific, implicit prior on the partitions. A thorough understanding of this particular prior is of crucial interest in Bayesian cluster analysis. This is because its characteristics will in general be of relevance when pursuing a specific modelling aim or assessing the impact of specific prior combinations on the clustering result. For this reason, we derived computationally feasible formulas to explicitly characterise the prior on the partitions. Specifically, to serve this objective, the prior distribution on the number of data clusters and the first two moments of symmetric additive functionals computed over the partitions, both conditional and unconditional on the number of data clusters, are derived. Furthermore, the derivation of the formulas is accompanied by a reference implementation in the \textsf{R} package \textbf{fipp}.
\section*{Appendix}\label{sec:EPPF-support-combinatoric-part} \subsection*{Combinatorial complexity of the prior enumeration}
By only considering cluster sizes as its argument, the EPPF introduced in Section~\ref{sec:induced-eppf} simplifies the probability assignment to all possible partitions of $[N]$, the number of which equals the Bell number $B_N$. Furthermore, its symmetry enables all sequences of cluster sizes that are equivalent under permutations to be mapped into a unique sequence of \textit{ordered} data cluster sizes $(N_{(1)},N_{(2)},\ldots),\ N_{(1)}\geq N_{(2)}\geq \cdots > 0,\ \sum_{j} N_{(j)} = N$. Such a sequence of ordered data cluster sizes is an element of the integer partitions of $N$. These two simplifications combined imply that the total number of distinct sequences of data cluster sizes equals the partition function of the positive integer $N$, written as $P_N$. For example, in the $N = 4$ case, the value of $B_4$ is 15 while $P_4$ is 5. That is, for a sample size $N = 4$, even though there are 15 unique partitions of $[N]$, only 5 distinct prior probabilities are returned by the EPPF. In other words, the discrete distribution of the induced prior on partitions is summarised by the equivalence classes introduced by the EPPF so that it only has 5 support points when $N = 4$.
Although this reduction of combinatorial complexity is substantial, the support of the induced prior on partitions is still of the order of $P_N$, which increases combinatorially with $N$. Even for a small sample size of $N = 100$, $P_N$ is approximately 190 million. Therefore,
full enumeration of the EPPF is computationally infeasible. Nor would a simple Monte Carlo approach be able to adequately approximate the distribution, especially its higher order moments.
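For illustration, both counts can be reproduced with a short Python sketch (the helper names are hypothetical); it confirms $B_4 = 15$, $P_4 = 5$ and $P_{100} = 190{,}569{,}292 \approx 190$ million.
\begin{verbatim}
def bell(n):
    # n-th Bell number B_n (number of partitions of [n]) via the Bell triangle
    row = [1]
    for _ in range(n - 1):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[-1]

def partition_count(n):
    # partition function P_n via the standard integer-partition recursion
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

print(bell(4), partition_count(4))   # 15 5
print(partition_count(100))          # 190569292
\end{verbatim}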
\subsection*{Algorithm for computing $C^{K, \gamma_K}_{N,k}$} \label{sec:algor-comp-prior}
Algorithm~\ref{KNMFM} shows how to recursively determine
$C^{K, \gamma_K}_{N,k}$ defined in (\ref{PposCk}). $C^{K, \gamma_K}_{N,k}$ is required to
determine the implicit prior on the number of data clusters and the
conditional prior on the labelled data cluster sizes.
The recursion depends on a sequence of non-negative \lq\lq weights\rq\rq\ $\{w_n\}$.
For a static MFM, the weights $w_n$ do not vary for different numbers
of components $K$ and $C_{N,k}^{K, \gamma_K} \equiv C ^\gamma _{N,k}$ is
independent of $K$. For a DPM, with $K$ implicitly equal to $\infty$, $w_n = 1/n$ is even independent of $\alpha$.
To determine the prior $P(K_+ = k| N, \bm{\gamma})$ of the number of
data clusters $K_+$, Algorithm~\ref{KNMFM} needs to be run once for
static MFMs and for DPMs and consists of $N$ steps, i.e.,
$n=1,\ldots,N$, while it needs to be run repeatedly for different
values of $K$ for dynamic MFMs.
\begin{algorithm}[h!]
\caption{Computing $C^{K, \gamma_K}_{N,k}$
for a generalised MFM.} \label{KNMFM}
\begin{enumerate}
\item Define the vector $\bm{c}_{K,1} \in \mathbb{R}^{N}$ and the
$(N \times N)$ upper triangular Toeplitz matrix $ \bm{W}_1 $,
where $ w_n = \frac{ \Gamma(n +\gamma_K )}{\Gamma(n +1)} $,
$n=1, \ldots,N$,
\begin{align*}
\bm{W}_1 &= \left( \begin{array}{cccc}
w_1 & \cdots & w_{N-1} & w_{N} \\
 & w_1 & \cdots & w_{N - 1} \\
 & & \ddots & \ddots \\
 & & & w_1 \\
\end{array} \right), &
\bm{c}_{K,1} &= \left( \begin{array}{l} w_N \\ w_{N-1} \\ \vdots \\ w_1 \\ \end{array} \right) .
\end{align*}
\item For all $k \ge 2$, define the vector $\bm{c}_{K,k} \in \mathbb{R}^{N-k+1}$ as
\begin{align} \label{recck}
\bm{c}_{K,k} &= \left( \begin{array}{cc} \bm{0}_{N-k+1 } & \bm{W}_k \\ \end{array} \right) \bm{c}_{K,k-1},
\end{align}
where $ \bm{W}_k $ is a $(N-k+1)\times (N-k+1)$ upper triangular Toeplitz matrix obtained from $ \bm{W}_{k-1} $ by deleting the first row and the first column.
\item Then, for all $k \ge 1$, $C_{N,k} ^{K, \gamma_K} $ is equal to the first element of the vector $\bm{c} _{K,k}$.
\end{enumerate} \end{algorithm}
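For illustration, the recursion of Algorithm~\ref{KNMFM} can be sketched in a few lines of Python. This is a minimal sketch, not the reference implementation in the \textsf{R} package \textbf{fipp}; it uses the static-MFM weights $w_n = \Gamma(n+\gamma_K)/\Gamma(n+1)$, and for larger $N$ one would work on the log scale to avoid overflow.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def C_N_k(N, gamma):
    # returns the vector (C_{N,1}, ..., C_{N,N}) via the Toeplitz recursion
    n = np.arange(1, N + 1)
    w = np.exp(gammaln(n + gamma) - gammaln(n + 1))   # w_1, ..., w_N
    c = w[::-1].copy()                                # c_{K,1} = (w_N, ..., w_1)'
    C = [c[0]]                                        # C_{N,1} = first element
    for k in range(2, N + 1):
        m = N - k + 1                                 # length of c_{K,k}
        # c_{K,k} = (0 | W_k) c_{K,k-1} with W_k upper triangular Toeplitz
        c = np.array([w[:m - i] @ c[i + 1:m + 1] for i in range(m)])
        C.append(c[0])
    return np.array(C)
\end{verbatim}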
\end{document}
Original contribution | Open | Published: 12 January 2015
Effects of minimum legal drinking age on alcohol and marijuana use: evidence from toxicological testing data for fatally injured drivers aged 16 to 25 years
Katherine M Keyes (1), Joanne E Brady (1,2) & Guohua Li (1,2)
Injury Epidemiology, volume 2, Article number: 1 (2015)
Alcohol and marijuana are among the most commonly used drugs by adolescents and young adults. The question of whether these two drugs are substitutes or complements has important implications for public policy and prevention strategies, especially as laws regarding the use of marijuana are rapidly changing.
Data on fatally injured drivers aged 16 to 25 who died within 1 h of the crash in nine states with high rates of toxicology testing were drawn from the Fatality Analysis Reporting System for 1999 to 2011 (N = 7,191). Drug tests were performed using chromatography and radioimmunoassay techniques based on blood and/or urine specimens. Relative risk regression and Joinpoint permutation analysis were used.
Overall, 50.5% of the drivers studied tested positive for alcohol or marijuana. Univariable relative risk modeling revealed that reaching the minimum legal drinking age was associated with a 14% increased risk of alcohol use (RR = 1.14, 95% CI: 1.02 to 1.28), a 24% decreased risk of marijuana use (RR = 0.76, 95% CI: 0.53 to 1.10), and a 22% increased risk of alcohol plus marijuana use (RR = 1.22, 95% CI: 0.90 to 1.66). Joinpoint permutation analysis indicated that the prevalence of alcohol use by age is best described by two slopes, with a change at age 21. There was limited evidence for a change at age 21 for marijuana use.
These results suggest that among adolescents and young adults, increases in alcohol availability after reaching the MLDA have only a marginal effect on marijuana use.
Alcohol and marijuana are among the most commonly used drugs by adolescents and young adults in the United States (US) (Substance Abuse and Mental Health Services Administration, 2011). Use is associated with substantial morbidity and mortality for young people, especially motor vehicle crash fatality, which is a leading cause of death among those 18 to 25 in the US (Heron, 2013). In 2012, more than 33,500 individuals died in motor vehicle crashes (NHTSA, 2013), and based on the most recent data available, about 14% of drivers involved in fatal crashes are under the influence of alcohol, drugs, or medication at the time of a fatal crash (FARS, 2011). Current estimates indicate that, in states that routinely test drivers who die within 1 h of a crash, more than half of these drivers are under the influence of alcohol and/or other drugs at the time of death (Brady and Li, 2013). Understanding and preventing injuries from motor vehicle crashes, especially among young adults, remains an important public health priority. Policy change has proven efficacious in reducing the harm of alcohol-impaired driving (Cohen and Einav, 2003; Shults et al. 2001; Task Force on Community Preventive, 2001; Task Force on Community Preventive Services, 2001, 2005), including raising the minimum legal drinking age (MLDA) (Plunk et al. 2013; Subbaraman and Kerr, 2013; Wagenaar and Toomey, 2002), lowering the legal blood alcohol content (BAC) limits for drivers (Mercer et al. 2010; Wagenaar et al. 2007), and increasing the tax and price of alcoholic beverages (Wagenaar et al. 2010).
Policies related to alcohol, as well as other substances, both in the US and more broadly, remain an open area of debate and controversy. Most recently, the Amethyst Initiative (Amethyst Initiative, 2008), a coalition of more than 130 university presidents in the US, advocated for a reduction in the MLDA to age 18, with advocates suggesting that such a policy change could reduce harms associated with illegal drug use among young adults by giving them legal access to alcohol. The evidence for such a claim, however, remains unclear (DeJong and Blanchette, 2014; Fitzpatrick et al. 2012; Nelson et al. 2010). Other policies related to alcohol, such as changes in the minimum acceptable BAC to operate a vehicle and changes in tax, import, and export policy, remain in flux both in the US and worldwide (Rehm and Greenfield, 2008). More broadly, policies related to other substances of potential abuse are changing. For example, marijuana policy is undergoing tremendous shifts toward increased access, with 23 states now approving of medical use in some form and two states (Colorado and Washington) approving legislation to legalize marijuana for adult recreational use (Hoffmann and Weber, 2010).
When one substance becomes more legally accessible, what happens to the prevalence of other substances? This question is often framed in economics in terms of whether two goods are complements or substitutes. A complement good is one in which demand increases as a function of the availability of a related good; in contrast, a substitute good is one in which use decreases with increased availability of a related good (Nicholson, 1998). The issue of whether alcohol and marijuana are complements or substitutes has important direct policy and public health implications. If marijuana and alcohol are complement goods, we can expect increased marijuana use and perhaps other drugs of abuse with increased access to alcohol. This would portend increases in intentional and unintentional injury, increased rates of dependence, and other potential consequences. On the other hand, if alcohol and marijuana are substitutes, increased alcohol availability may have an unintended benefit of reduced harm associated with marijuana use (though potential harms associated with alcohol use may balance any potential benefit).
Substantial economic literature has examined whether alcohol and marijuana operate as complement or substitute goods after a policy and/or price change. When prices (Cameron and Williams, 2001; Chaloupka and Laixuthai, 1997; Farrelly et al. 2001; Pacula, 1998; Saffer and Chaloupka, 1999; Williams et al. 2004), taxes (Pacula, 1998), policies/laws (Anderson et al. 2013; Cameron and Williams, 2001; Chaloupka and Laixuthai, 1997; DiNardo and Lemieux, 2001; Farrelly et al. 2001; Pacula, 1998; Saffer and Chaloupka, 1999; Thies and Register, 1993; Williams et al. 2004), and college campus alcohol polices (Williams et al. 2004) have been examined, the evidence to date has not pointed to a clear substitution or complementary relation, even within the study (e.g., (Chaloupka and Laixuthai, 1997; Pacula, 1998; Pacula et al. 2013; Thies and Register, 1993)). Inference has been limited, however, by data quality (e.g., drug and alcohol price information is subject to substantial error) and unobserved confounding (e.g., states that decriminalize marijuana may have generally less negative attitudes toward substance use). Further, self-reported alcohol and marijuana use is also subject to reporting error (Buchan et al. 2002; Del Boca and Darkes, 2003); no studies to date, to our knowledge, have used toxicological data on alcohol and marijuana positivity to assess drug use in studies assessing whether these substances are economic substitutes or complements.
In contrast to price and taxes, MLDA laws have well-documented effects on alcohol consumption and alcohol-associated injury (McCartt et al. 2010; Plunk et al. 2013; Subbaraman and Kerr, 2013; Wagenaar and Toomey, 2002), especially among young adults. As such, examining whether marijuana use increases or decreases as individuals age into legal drinking is an opportunity to test whether marijuana is a substitute or complement good for alcohol. Several studies have examined the effects of MLDA on demand for alcohol and marijuana, with conflicting results (Crost and Guerrero, 2012; Crost and Rees, 2013; DiNardo and Lemieux, 2001; Thies and Register, 1993; Yoruk and Yoruk, 2011). Perhaps the most rigorous examinations in recent literature have utilized a regression discontinuity (RD) design (Shadish et al. 2002). The RD design is a quasi-experimental approach that exploits the observation that birth dates are relatively randomly distributed, thus individuals right below the MLDA and individuals right above the MLDA are similar to each other on many risk factors for alcohol use except legal drinking status. The slope of the regression line between age and marijuana use when (a) alcohol is legally accessible (age 21 and beyond) is then compared to the slope of the regression line when (b) alcohol is not legally accessible (prior to age 21) for evidence of discontinuity (i.e., non-linearity) in the slope of the line. Thus, the counterfactual question is whether, holding all else equal, the rate of marijuana use would continue to increase, or instead decrease, once alcohol becomes legally available.
Existing studies on the relation between age and alcohol use indicate that there is a positive and relatively linear upward trend in use from the late teens through the mid-20s (Chen and Jacobson, 2012; Jager et al. 2013). The relation between age and marijuana use is less linear, with an upward trend in the late teens and then a flattening of the slope in the early 20s (Chen and Jacobson, 2012; Jager et al. 2013). While these trajectories differ in the magnitude of the slope at the population level, at the individual level, alcohol and marijuana use are substantially correlated (Kandel et al. 1992). That is, those who use alcohol are approximately 2 to 3 times more likely to use marijuana. As such, examination of joint trajectories of alcohol and marijuana use through the developmental young adult period when alcohol becomes legal is critical. Given that alcohol use is expected to increase after age 21, if alcohol and marijuana are economic complements, we would also expect that marijuana use would increase among alcohol users but that marijuana use in the absence of alcohol use would decrease. Conversely, if alcohol and marijuana are economic substitutes, we would expect that marijuana use would decrease among alcohol users and that marijuana use in the absence of alcohol use would increase. To date, existing studies utilizing MLDA as an instrument in the regression discontinuity design have not considered these joint effects, leaving open questions about the effects of MLDA on marijuana use. Further, these existing studies have had conflicting results. Using aggregated state-level national US data, Crost and Guerrero (2012) found that individuals over 21 have higher self-reported past-month days of alcohol use and lower past-month days of marijuana use, consistent with a substitution effect. Using nationally representative longitudinal data, Yoruk and Yoruk (2011) and Crost and Rees (2013) also document a decrease in marijuana use after age 21. However, whether this effect is robust in samples at high risk of alcohol and marijuana use, such as fatally injured drivers, remains unknown. Examination of these effects in high-risk samples and based on toxicological testing data is critical, as these samples represent the groups with the most adverse health consequences of substance use and are less susceptible to information bias. If marijuana use decreases after age 21 mostly in subgroups of the population with low risk of heavy use or health consequences, but increases among those at high risk, the public health strategy to reduce harms associated with substance use during the transition to adulthood will need to be modified.
In summary, the existing literature on how changes in alcohol availability affect marijuana use remains indeterminate, and causal inference approaches such as regression discontinuity designs have the potential to inform this literature. However, no such studies have used toxicological information on alcohol and marijuana positivity in high-risk groups such as crash decedents, and no studies have examined joint trajectories of alcohol and marijuana use. Using data for drivers aged 16 to 25 years who were fatally injured within 1 h of the crash in nine states where toxicological testing was performed on a routine basis during 1999 to 2011 (n = 7,191), we assessed the effects of MLDA (i.e., 21 years) on: (1) alcohol plus marijuana use; (2) alcohol use only; and (3) marijuana use only using a regression discontinuity design.
Data were drawn from the Fatality Analysis Reporting System (FARS), a census of fatal traffic crashes occurring within the United States maintained by the National Highway Traffic Safety Administration (Hargutt et al. 2011). All crashes involving a motor vehicle traveling on a public road and resulting in a fatality within 30 days are included in the census. Detailed data from police reports, state administrative files, and medical records are collected on circumstances, vehicles, and people involved in the crash. Trained analysts using standard forms and protocols maintain the records and specified quality control procedures are rigorously implemented (National Highway Traffic Safety Administration, 2010).
Drivers between the age of 16 and 25 at the time of death were included. We include data from states that performed toxicological testing on more than 85% of their fatally injured drivers who died within 1 h of the crash (California, Connecticut, Hawaii, Illinois, New Hampshire, New Jersey, Rhode Island, Washington, and West Virginia) from 1999 to 2011. We included only drivers who died within 1 h of the crash because the validity of drug and alcohol testing data may be compromised. Alcohol and drugs taken before the crash might be undetectable if tested more than 1 h after the crash, rendering false negatives. Further, drugs administered after the crash by medical personnel may be detected, rendering false positives. Despite higher than 85% testing rates, data from New Mexico were excluded from the study sample because test results recorded in FARS for this state were deemed unreliable due to the low number of drivers positive for drugs (NHTSA, 2010). Drivers testing positive for drugs other than alcohol and/or marijuana were excluded (n = 1,525). Of the remaining 7,905 drivers fatally injured between 1999 and 2011, 714 (9.0%) were excluded from the analysis due to the lack of drug testing data. Drivers who survived more than 1 h after the crash (n = 3,981) or with missing time of death information (n = 197) were excluded from this study because of concerns about the accuracy and reliability of drug testing data for these drivers.
Driver characteristics
Data are routinely collected on demographics of the fatally injured driver including age (in years), sex, race, and ethnicity. Race/ethnicity was missing for 9.6% of the drivers and was categorized as White (86.4%) versus non-White.
Crash characteristics
We included two characteristics of the crash itself in the analysis: the number of occupants of the vehicle and the number of fatalities, as they are associated with age of the fatally injured driver (Tefft et al. 2012), thus potentially important characteristics to assess within the context of the effects of MLDA. Number of occupants was categorized into 1 (64.8%), 2 (21.7%), and 3 or more (13.5%). Data on number of vehicle occupants were missing on 11.9% of the sample. Of those with data, number of deaths was categorized into 1 (86.3%) and more than 1 (13.7%). We also controlled for a categorical indicator of the state where the crash occurred: California (N = 4,777, 54.7%), Connecticut (N = 402, 4.6%), Hawaii (N = 95, 1.1%), Illinois (1,211, 13.9%), New Hampshire (N = 168, 1.9%), New Jersey (N = 583, 6.7%), Rhode Island (N = 120, 1.4%), Washington (N = 867, 9.9%), West Virginia (N = 516, 5.9%), as well as year of the crash. Further, we separately controlled for whether the state had a medical marijuana law, as some data indicates that marijuana use is higher in states with medical marijuana laws (Cerda et al. 2012; Wall et al. 2011). California and Washington had some form of MML for the entirety of the study period; Connecticut, Illinois, New Hampshire, and West Virginia did not have an MML for the entirety of the study period. For remaining states, decedents were coded as in a state with an MML based on when the law was passed: Hawaii (2000), New Jersey (2010), and Rhode Island (2006).
Drug and alcohol test results
Drug tests were performed using chromatography and radioimmunoassay techniques based on blood and/or urine specimens (Centers for Disease Control and Prevention, 2006; Li et al. 2011). Drugs were categorized according to the FARS coding manual (National Highway Traffic Safety Administration, 2008) and grouped into the following categories: alcohol and cannabinoid, alcohol only, cannabinoid only, and neither.
Drug testing protocols might vary from state to state (The Walsh Group, 2002; Walsh et al. 2004). The testing methods and specimens might not be exactly the same across the states. The possible bias resulting from different specimens, however, was unlikely to pose a serious threat to the validity of this study given that 94% of the study sample had at least one test based on a blood specimen. However, we note that we controlled for state in adjusted models to ensure that results were not biased by state variation in protocols.
First, we estimated the prevalence of alcohol and marijuana involvement by single year of age among fatally injured drivers and estimated the percentage change in alcohol and marijuana involvement with each one-year increase in age. BAC ≥ 0.01 g/dL was considered alcohol positive.
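As a rough illustration (not the authors' code; the data frame and column names such as age_years, bac, and thc_positive are hypothetical), the age-specific prevalence of the four mutually exclusive categories could be tabulated in Python as follows:

    import numpy as np
    import pandas as pd

    # df: one row per fatally injured driver (hypothetical column names)
    alcohol = df["bac"] >= 0.01          # alcohol positive at BAC >= 0.01 g/dL
    thc = df["thc_positive"] == 1        # cannabinoid positive

    df["category"] = np.select(
        [alcohol & thc, alcohol & ~thc, ~alcohol & thc],
        ["alcohol + marijuana", "alcohol only", "marijuana only"],
        default="neither",
    )

    # prevalence (%) of each category by single year of age
    prev_by_age = (
        df.groupby("age_years")["category"]
          .value_counts(normalize=True)
          .unstack(fill_value=0) * 100
    )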
Second, we examined whether the slope of the relation between age and alcohol/marijuana use changed substantially from year to year using the National Cancer Institute's Joinpoint software (Kim et al. 2000). We estimated 'points of inflection', that is, specific ages at which the slope of the association between age and drug use significantly changes. The Joinpoint software fits a series of models with an increasing number of inflection points, using permutation tests to identify the minimum number necessary such that additional inflection points do not improve model fit.
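In spirit (though not in mechanics; Joinpoint selects the inflection point and tests it with a permutation procedure), a single prespecified slope change at age 21 can be compared with a single-slope fit using a simple piecewise-linear ('broken stick') regression. The sketch below assumes arrays age and prev holding the observed prevalence by single year of age; it is an illustration only, not the Joinpoint procedure itself.

    import numpy as np
    import statsmodels.api as sm

    def one_slope_fit(age, prev):
        # single linear trend in prevalence across ages
        return sm.OLS(prev, sm.add_constant(age)).fit()

    def two_slope_fit(age, prev, knot=21):
        # piecewise-linear predictor whose slope is allowed to change at the knot
        X = np.column_stack([age - knot, np.maximum(age - knot, 0)])
        return sm.OLS(prev, sm.add_constant(X)).fit()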
Finally, we estimated three relative risk regression models using three different outcomes: (1) alcohol use plus marijuana use versus no use; (2) alcohol use only versus no use; and (3) marijuana use only versus no use. The regression discontinuity models included centered age, a post-21 indicator, and their interaction:
$$ \log \Pr\left(Y_i = 1 \mid X\right) = \log \mu_i = \beta_0 + \beta_1 T_i + f\left(\mathrm{age}_i\right) + \beta_2 \left(T_i \times \mathrm{age}_i\right) $$
where $Y_i$ is each of our three outcomes, $T_i$ is the post-21 indicator (legal drinker, yes/no), and $f(\mathrm{age}_i)$ is a centered age function (calculated as the difference of the respondent's age from age 21, in years). From this equation, we estimated the risk ratio for the effect of turning 21 (aging into legal drinking) on alcohol use, marijuana use, and alcohol plus marijuana use.
We then explored the effect of control covariates on the association between MLDA and alcohol/marijuana use, including driver sex, race/ethnicity, number of occupants in the vehicle, number of deaths in the incident, year, state, and whether the state had an MML. There were 854 missing values for vehicle occupancy (11.9%), 716 missing values for race (9.6%), and 1,343 missing values on either occupancy or race (18.6%); these were handled in the analysis with list-wise deletion when controlling for these covariates separately.
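As an illustration of the model above (a sketch only, not the authors' code; a log-link Poisson model with robust standard errors is one common way to estimate relative risks for a binary outcome, and the variable names are hypothetical):

    import numpy as np
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df["age_c"] = df["age_years"] - 21                  # f(age): age centered at 21
    df["post21"] = (df["age_years"] >= 21).astype(int)  # T_i: reached the MLDA

    # outcome: 1 if positive for alcohol only (analogous models for the other outcomes)
    fit = smf.glm(
        "alcohol_only ~ post21 + age_c + post21:age_c",
        data=df,
        family=sm.families.Poisson(),
    ).fit(cov_type="HC1")                               # robust standard errors

    rr_mlda = np.exp(fit.params["post21"])              # estimated RR of reaching the MLDA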
As shown in Table 1, 50.3% of the drivers studied tested positive for alcohol or marijuana (36.8% for alcohol only, 5.9% for marijuana only, and 7.6% for both drugs). Data on single drug use indicated that the prevalence of alcohol use only increased monotonically from 15.0% at age 16 to 35.6% at age 20 years, and continued to rise at a slower pace after age 20. The prevalence of marijuana increased slightly from 4.6% at age 16 to 6.7% at age 20 and monotonically decreased after age 20. The prevalence of combined use of alcohol and marijuana increased progressively from age 16 to 20 before leveling off. Examining the percentage change from year to year, for alcohol without marijuana use, the percentage change was positive in every year, with the highest change between age 19 and 20 (increase of 7.2%), and another large change from 20 to 21 (increase of 6.7%). For marijuana without alcohol use, the percentage change was mostly negative, with the largest negative decrease between 17 and 18 (decrease of 1.7%) and another large change from 20 to 21 (decrease of 1.5%). For alcohol and marijuana use, percentage changes were mostly positive, with large changes occurring between age 16 and 17 (increase of 2%), 20 and 21 (increase of 1.8%), and 22 and 23 (increase of 1.8%). In Figure 1, we graph the prevalence of alcohol plus marijuana use, alcohol only, and marijuana only positivity among deceased drivers, with a cut point at age 21 to visually display the potential for discontinuity across the timespan of the study.
Table 1 Prevalence of alcohol and marijuana in drivers who died within 1 hour of crash by age, FARS, selected states, 1999 to 2010
Figure 1: Prevalence of alcohol and marijuana positivity in drivers who died within 1 h of the crash by age, FARS, selected states, 1999 to 2010.
We then used Joinpoint regression analysis to examine whether there is evidence for discontinuity in the relation between age and alcohol/marijuana use. Among those who consumed alcohol alone (without marijuana), Joinpoint analysis indicated that the best model fit was two slopes (comparing a two slope model to a one slope model, the p-value was <0.001 in favor of the two slope model), with an inflection point at age 21 (95% confidence interval, age 19 to age 22). Before age 21, the relation between age and alcohol use is significant and positive (B = 0.21, SE = 0.01, p < 0.001). After age 21, the relation is significant and negative (B = −0.18, SE = 0.02, p < 0.001). For combined alcohol and marijuana, there was marginal support for two slopes (comparing a two slope model to a one slope model, the p-value was 0.053 in favor of the two slope model); however, the inflection point was at age 23 (95% confidence interval for the inflection point age 18 to 25). Before age 23, the relation between age and concurrent alcohol and marijuana use is significant and positive (B = 0.11, SE = 0.01, p = 0.004). After age 23, the relation is null (B = −0.21, SE = 0.17, p = 0.26). For marijuana, a one slope model best fits the data (comparing a two slope model to a one slope model, the p-value was 0.09 suggesting that the two slope model did not substantially improve model fit). The relation between age and marijuana positivity exhibited a negative slope (B = −0.08, SE = 0.03, p = 0.03).
In Table 2, we test whether the prevalence of alcohol use only, marijuana use only, and alcohol plus marijuana use changed before and after age 21 using regression models. That is, we tested the magnitude of the discontinuity in prevalence at age 21. Univariable relative risk modeling revealed that reaching the minimum legal drinking age was associated with a 14% increased risk of alcohol use (RR = 1.14, 95% CI: 1.02 to 1.28), a 24% decreased risk of marijuana use (RR = 0.76, 95% CI: 0.53 to 1.10), and a 22% increased risk of alcohol plus marijuana use (RR = 1.22, 95% CI: 0.90 to 1.66). In Table 3, we test the robustness of these effects, controlling for covariates separately and together. Controlling for each covariate separately did not change the results. When all covariates were controlled simultaneously, the magnitude of the results did not change (e.g., RR = 1.12 for alcohol only, RR = 0.80 for marijuana only, and RR = 1.24 for alcohol plus marijuana), though confidence intervals were wider due to approximately 19% missing data.
Table 2 Estimated relative risks of alcohol and marijuana use in fatally injured drivers 1999 to 2010 associated with MLDA
Table 3 Estimated relative risks of alcohol and marijuana use in fatally injured drivers 1999–2010 associated with MLDA, adjusting for covariates
The present study documents that approximately 50% of fatally injured drivers aged 16 to 25 years tested positive for alcohol, marijuana, or both. We tested whether there was evidence for substitution or complement use at the discontinuity of age 21, when drinking becomes legal in all 50 states in the US. In general, we find that while alcohol use increases at age 21, there is limited evidence that marijuana use changes at age 21. The general direction of the effect was for a decrease in marijuana use among those that are marijuana-only users and an increase in marijuana use for those who are combination alcohol and marijuana users. Thus, one interpretation is that the data suggest both substitution and complementary effects. That is, among young adults who tend to use one substance, marijuana use decreases when alcohol use becomes more legally available. Among polysubstance using young adults, however, marijuana use increases when alcohol becomes legally available. This interpretation is made tenuous, however, by the lack of a significant change in slope based on the Joinpoint analysis. Thus, any effect observed is a small effect and caution is warranted in drawing conclusions about any substitution or complementary effect for marijuana when alcohol becomes legally available, at least among the high risk group assessed in the present study, drivers who died in motor vehicle crashes.
Most conservatively, we can at least conclude that once young adults reach the age when they can legally purchase and consume alcohol, the prevalence of alcohol use increases whereas the prevalence of marijuana use does not, and trends in a negative direction for marijuana-only use. This finding is consistent with other evidence from national samples using the RD design (Crost and Guerrero, 2012; Yoruk and Yoruk, 2011), as well as other studies (Cameron and Williams, 2001; Chaloupka and Laixuthai, 1997; DiNardo and Lemieux, 2001; Thies and Register, 1993), documenting a substitution effect for alcohol and marijuana. We extend previous findings in several notable ways. Our data on motor vehicle crash decedents are notable, as they constitute a census of all such deaths in the US, with drug measurement confirmed by toxicology, in a population with high rates of substance use. Taken together, we would conservatively predict that increased availability of marijuana to young adults in US states that have passed medical and recreational use allowances may have positive spillover effects on alcohol, reducing use to some degree among young adults.
We note that our data are not consistent with a number of other studies (Chaloupka and Laixuthai, 1997; Farrelly et al. 2001; Pacula, 1998; Saffer and Chaloupka, 1999; Williams et al. 2004), which have found evidence for complementary effects of policies and laws on alcohol and marijuana use. Substantial epidemiological evidence indicates that individuals who use marijuana are more likely to drink alcohol compared with those who do not (Compton et al. 2005; Kessler et al. 2005), especially during young adulthood (McGue et al. 2001; Swendsen et al., 2012). The underlying causal mechanism for co-occurrence remains inadequately understood; co-occurrence may reflect shared genetic and environmental risk factors (Eaton et al. 2012; Krueger et al. 2007; Walton et al. 2011), and/or a causal sequence whereby some substances of abuse (e.g., alcohol) serve as 'gateways' to other substances of abuse (e.g., marijuana) (Huang et al. 2013; Kandel et al. 1992; Kandel et al. 2006; Levine et al. 2011). Regardless of the mechanism, co-occurrence across individuals supports the theory that substances are generally complementary. The use of the RD design, however, mitigates concerns about unmeasured non-comparability between drug users and non-users and in those exposed and unexposed to policies and law across states and time periods. We note, however, that there may be heterogeneity in the effects of laws and policies on alcohol and marijuana use; that is, for some laws and across some subgroups, alcohol and marijuana may be complementary. Meta-analysis and examination of effect heterogeneity would be helpful for future studies to fully understand potential public health implications of policy and law changes around substance use.
Study limitations are noted. The validity of the regression discontinuity design rests on the MLDA being the only exogenous source of variation in the change in alcohol use at age 21. That is, discontinuity occurring at the MLDA may not be attributable to MLDA if there are other sources of variation when individuals turn 21 that would create discontinuities. Further, the validity of the regression discontinuity design in the present study relies on assessment of one or more linear slopes. One could also model the relation between age and alcohol/marijuana use using non-linear models, complicating testing for discontinuity in the regression of age on these outcomes. Nonetheless, the interpretation of the relation between age and alcohol/marijuana use may not be completely attributable to MLDA. For example, if individuals are leaving secondary education institutions around this age, it could be that new social situations and/or the transition to full-time work or family life drive changes in alcohol and marijuana use, rather than the MLDA. In fact, we see a decrease in the slope of the relation between age and alcohol use after age 21, which is unlikely to be attributable to MLDA and more likely reflects developmental changes co-occurring during this period. However, given that MLDA has well-documented and moderately large effects on alcohol consumption among young people, and the sharp discontinuity that is observed at age 21, it is unlikely that other role transitions drive these results. We note that states vary considerably in per capita consumption of alcohol (LaVallee and Yi, 2011) as well as in the availability, legality, and use of marijuana (Cerda et al. 2012; Wall et al. 2011). However, this state-level heterogeneity is unlikely to affect our results because the exposure of interest, MLDA, does not differ across states and we controlled for the state where the crash occurred. Further, we had missing data on some covariates, including vehicle occupancy and race. Results controlling for these factors reduced the precision of our estimates, though the magnitudes of associations remained unchanged. Unless missing data are associated with alcohol and marijuana involvement in the crash as well as age, we would expect increased precision but no change in the magnitude of our estimates if access to full data were available. Finally, we included states in which the proportion of decedents who were drug and alcohol tested was high and stable across all years of the study; we do not have information on whether drug and alcohol testing procedures differed across time. However, since we are averaging the effects of time across age, differences in procedures would not affect results unless the procedures were changed only for individuals in a certain age group.
In conclusion, given the rapid changes currently underway in marijuana availability and price in the US, understanding the potential effects of increased use on other substances, as well as substance-related outcomes such as motor vehicle crash fatality, has never been more important. The weight of available evidence indicates that increasing access to marijuana may reduce alcohol use at the population level, though our results suggest that any negative trend in marijuana use is likely small. However, it should be noted that marijuana use is also a risk factor for involvement in fatal and nonfatal motor vehicle crashes (Li et al. 2013; Li et al. 2012; Romano et al. 2014); thus, the effects of increased marijuana use at the population level, while potentially reducing alcohol use, may be null or even detrimental for fatality rates overall. Current data indicate that use of an illicit drug such as marijuana is less of a risk for fatality compared with the use of alcohol (Romano et al. 2014), though there is potential for changes in relative risks across drug type if marijuana use becomes increasingly common. Public health efforts to continue surveillance of drugged and drunk driving are critical at this important juncture in substance use policy in the US.
This study was based on publicly available data about fatally injured drivers. The research protocol was reviewed by the institutional review board of the Columbia University Medical Center and was granted an exemption under 45 CFR 46 (not human subjects research).
Amethyst Initiative. Statement. 2008. http://www.theamethystinitiative.org/statement/.
Anderson DM, Hansen B, Rees DI. Medical marijuana laws, traffic fatalities, and alcohol consumption. J Law Econ. 2013; 56:333–69.
Brady JE, Li G. Prevalence of alcohol and other drugs in fatally injured drivers. Addiction. 2013; 108:104–14.
Buchan BJ, Dennis ML, Tims FM, Diamond GS. Cannabis use: consistency and validity of self-report, on-site urine testing and laboratory testing. Addiction. 2002; 97(Suppl 1):98–108.
Cameron L, Williams J. Cannabis, alcohol and cigarettes: substitutes or complements? Econ Rec. 2001; 77:19–34.
Centers for Disease Control and Prevention. Alcohol and other drug use among victims of motor-vehicle crashes--West Virginia, 2004–2005. MMWR Morb Mortal Wkly Rep. 2006; 55:1293–96.
Cerda M, Wall M, Keyes KM, Galea S, Hasin D. Medical marijuana laws in 50 states: investigating the relationship between state legalization of medical marijuana and marijuana use, abuse and dependence. Drug Alcohol Depend. 2012; 120:22–7.
Chaloupka FJ, Laixuthai A. Do youths substitute alcohol and marijuana? Some econometric evidence. Eastern Econ J. 1997; 23:253–76.
Chen P, Jacobson KC. Developmental trajectories of substance use from early adolescence to young adulthood: gender and racial/ethnic differences. J Adolesc Health. 2012; 50:154–63.
Cohen A, Einav L. The effects of mandatory seat belt laws on drinking behavior and traffic fatalities. Rev Econ Stat. 2003; 85:828–43.
Compton WM, Thomas YF, Conway KP, Colliver JD. Developments in the epidemiology of drug use and drug use disorders. Am J Psychiatry. 2005; 162:1494–502.
Crost B, Guerrero S. The effect of alcohol availability on marijuana use: evidence from the minimum legal drinking age. J Health Econ. 2012; 31:112–21.
Crost B, Rees DI. The minimum legal drinking age and marijuana use: new estimates from the NLSY97. J Health Econ. 2013; 32:474–76.
DeJong W, Blanchette J. Case closed: research evidence on the positive public health impact of the age 21 minimum legal drinking age in the United States. J Stud Alcohol Drugs Suppl. 2014; 75(Suppl 17):108–15.
Del Boca FK, Darkes J. The validity of self-reports of alcohol consumption: state of the science and challenges for research. Addiction. 2003; 98(Suppl 2):1–12.
DiNardo J, Lemieux T. Alcohol, marijuana, and American youth: the unintended consequences of government regulation. J Health Econ. 2001; 20:991–1010.
Eaton NR, Keyes KM, Krueger RF, Balsis S, Skodol AE, Markon KE, Grant BF, Hasin DS. An invariant dimensional liability model of gender differences in mental disorder prevalence: evidence from a national sample. J Abnorm Psychol. 2012; 121:282–88.
Farrelly MC, Bray JW, Zarkin GA, Wendling BW. The joint demand for cigarettes and marijuana: evidence from the National Household Surveys on Drug Abuse. J Health Econ. 2001; 20:51–68.
FARS. FARS Data Tables. 2011.
Fitzpatrick BG, Scribner R, Ackleh AS, Rasul J, Jacquez G, Simonsen N, Rommel R. Forecasting the effect of the Amethyst initiative on college drinking. Alcohol Clin Exp Res. 2012; 36:1608–13.
Hargutt V, Kruger H, Knoche A. Driving under the Influence of Drugs, Alcohol, and Medicines.. 6th Framework Program Deliverable 2.2.5. Prevalence of Alcohol and other Psychoactive Substances in Injured and Killed Drivers. 2011. Available from: http://www.druid-project.eu/cln_031/nn_107534/sid_B21CAD080C96E7B112DEA0E3C2B077AA/nsc_true/Druid/EN/deliverales-list/deliverables-list-node.html?__nnn=true. Archived at: http://www.webcitation.org/65lkh0El7.
Heron M. Deaths: Leading Causes for 2010. National Vital Statistics Reports 62. Hyattsville, MD: National Center for Health Statistics; 2013. Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr62/nvsr62_06.pdf. Accessed on: January 23rd, 2014.
Hoffmann DE, Weber E. Medical marijuana and the law. N Engl J Med. 2010; 362:1453–57.
Huang YY, Kandel DB, Kandel ER, Levine A. Nicotine primes the effect of cocaine on the induction of LTP in the amygdala. Neuropharmacology. 2013; 74:126–34.
Jager J, Schulenberg JE, O'Malley PM, Bachman JG. Historical variation in drug use trajectories across the transition to adulthood: the trend toward lower intercepts and steeper, ascending slopes. Dev Psychopathol. 2013; 25:527–43.
Kandel DB, Yamaguchi K, Chen K. Stages of progression in drug involvement from adolescence to adulthood: further evidence for the gateway theory. J Stud Alcohol. 1992; 53:447–57.
Kandel DB, Yamaguchi K, Klein LC. Testing the gateway hypothesis. Addiction. 2006; 101:470–72. discussion 74–6.
Kessler RC, Chiu WT, Demler O, Merikangas KR, Walters EE. Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication. Arch Gen Psychiatry. 2005; 62:617–27.
Kim HJ, Fay MP, Feuer EJ, Midthune DN. Permutation tests for joinpoint regression with applications to cancer rates. Stat Med. 2000; 19:335–51.
Krueger RF, Markon KE, Patrick CJ, Benning SD, Kramer MD. Linking antisocial behavior, substance use, and personality: an integrative quantitative model of the adult externalizing spectrum. J Abnorm Psychol. 2007; 116:645–66.
LaVallee RA, Yi H. Surveillance Report #92: Apparent Per Capita Alcohol Consumption: National, State, and Regional Trends, 1977–2009. US Department of Health and Human Services; Rockville, MD: NIAAA, Division of Biometry and Epidemiology, Alcohol Epidemiologic Data System 2011.
Levine A, Huang Y, Drisaldi B, Griffin EA Jr, Pollak DD, Xu S, Yin D, Schaffran C, Kandel DB, Kandel ER. Molecular mechanism for a gateway drug: epigenetic changes initiated by nicotine prime gene expression by cocaine. Sci Transl Med. 2011; 3:107ra09.
Li L, Zhang X, Levine B, Li G, Zielke HR, Fowler DR. Trends and pattern of drug abuse deaths in Maryland teenagers. J Forensic Sci. 2011; 56:1029–33.
Li MC, Brady JE, DiMaggio CJ, Lusardi AR, Tzong KY, Li G. Marijuana use and motor vehicle crashes. Epidemiol Rev. 2012; 34:65–72.
Li G, Brady JE, Chen Q. Drug use and fatal motor vehicle crashes: a case–control study. Accid Anal Prev. 2013; 60:205–10.
McCartt AT, Hellinga LA, Kirley BB. The effects of minimum legal drinking age 21 laws on alcohol-related driving in the United States. J Saf Res. 2010; 41:173–81.
McGue M, Iacono WG, Legrand LN, Malone S, Elkins I. Origins and consequences of age at first drink. I. Associations with substance-use disorders, disinhibitory behavior and psychopathology, and P3 amplitude. Alcohol Clin Exp Res. 2001; 25:1156–65.
Mercer SL, Sleet DA, Elder RW, Cole KH, Shults RA, Nichols JL. Translating evidence into policy: lessons learned from the case of lowering the legal blood alcohol limit for drivers. Ann Epidemiol. 2010; 20:412–20.
National Highway Traffic Safety Administration. FARSHELF. FARS Coding and Validation Manual. Washington, DC: National Highway Traffic Safety Administration; 2008.
National Highway Traffic Safety Administration. FARS Analytic Reference Guide, 1975 to 2009. Washington, DC: National Highway Traffic Safety Administration; 2010. Available from: http://www-nrd.nhtsa.dot.gov/Pubs/811352.pdf. Accessed on January 24, 2014. Archived at: http://www.webcitation.org/636A2RE7c.
Nelson TF, Toomey TL, Lenk KM, Erickson DJ, Winters KC. Implementation of NIAAA College Drinking Task Force recommendations: how are colleges doing 6 years later? Alcohol Clin Exp Res. 2010; 34:1687–93.
NHTSA. Drug Involvement of Fatally Injured Drivers. Traffect Safey Facts. NHTSA's National Center for Statistics and Analysis, DOT HS 811 415. 2010. Available at: http://www-nrd.nhtsa.dot.gov/Pubs/811415.pdf. Last accessed October 23, 2014.
NHTSA. 2012 Motor Vehicle Crashes, Overview. DOT HS 811 856. 2013. Available at: http://www-nrd.nhtsa.dot.gov/Pubs/811856.pdf. Accessed on January 23rd, 2014.
Nicholson W, Snyder C. Microeconomic Theory: Basic Principles and Extensions. Mason, OH: South-Western, Cengage Learning; 2012.
Pacula RL. Does increasing the beer tax reduce marijuana consumption? J Health Econ. 1998; 17:557–85.
Pacula RL, Powell D, Heaton P, Sevigny E. Assessing the Effects of Medical Marijuana Laws on Marijuana and Alcohol Use: The Devil is in the Details. NBER Working Paper Series #19302. Cambridge, MA: National Bureau of Economic Research; 2013.
Plunk AD, Cavazaos-Rehg P, Bierut LJ, Grucza RA. The persistent effects of minimum legal drinking age laws on drinking patterns later in life. Alcohol Clin Exp Res. 2013; 37:463–69.
Rehm J, Greenfield TK. Public alcohol policy: current directions and new opportunities. Clin Pharmacol Ther. 2008; 83:640–43.
Romano E, Torres-Saavedra P, Voas RB, Lacey JH. Drugs and alcohol: their relative crash risk. J Stud Alcohol Drugs. 2014; 75:56–64.
Saffer H, Chaloupka FJ. Demographic Differentials in the Demand for Alcohol and Illicit Drugs, NBER Working Paper Series, 6432. 1999.
Shadish WR, Cook TD, Campbell DT. Regression Discontinuity Designs. In: Shadish WR, Cook TD, Campbell DT, editors. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Belmont, CA: Wadsworth; 2002: p. 207–43.
Shults RA, Elder RW, Sleet DA, Nichols JL, Alao MO, Carande-Kulis VG, Zaza S, Sosin DM, Thompson RS, Services Task Force on Community Preventive. Reviews of evidence regarding interventions to reduce alcohol-impaired driving. Am J Prev Med. 2001; 21:66–88.
Subbaraman MS, Kerr WC. State panel estimates of the effects of the minimum legal drinking age on alcohol consumption for 1950 to 2002. Alcohol Clin Exp Res. 2013; 37(Suppl 1):E291-6.
Substance Abuse and Mental Health Services Administration. Results from the 2010 National Survey on Drug Use and Health: Summary of National Findings. NSDUH Series H-41, HHS Publication No. (SMA) 11–4658. Rockville, MD: Substance Abuse and Mental Health Services Administration; 2011. Available at: http://oas.samhsa.gov/NSDUH/2k10NSDUH/2k10Results.htm - 2.3. Accessed on: January 23rd, 2014.
Swendsen J, Burstein M, Case B, Conway KP, Dierker L, He J, Merikangas KR. Use and abuse of alcohol and illicit drugs in US adolescents: results of the national comorbidity survey-adolescent supplement. Arch Gen Psychiatry. 2012; 69:390–98.
Task Force on Community Preventive Services. Motor-vehicle occupant injury: strategies for increasing Use of child safety seats, increasing Use of safety belts, and reducing alcohol-impaired driving. MMWR Morb Mortal Wkly Rep. 2001; 20:1–3.
Task Force on Community Preventive Services. In: Zaza S, Briss PA, Harris KW, editors. The Guide to Community Preventive Services: What Works to Promote Health? Atlanta, GA: Oxford University Press; 2005: p. 329–84.
Task Force on Community Preventive Services. Recommendations to reduce injuries to motor vehicle occupants: increasing child safety seat use, increasing safety belt use, and reducing alcohol-impaired driving. Am J Prev Med. 2001; 21:16–22.
Tefft BC, Williams AF, Grabowski JG. Teen driver Risk in relation to Age and Number of Passengers. AAA Foundation for Traffic Safety: Washington DC; 2012.
The Walsh Group. The Feasibility of per se Drugged Driving Legislation: Consensus Report. Bethesda, MD: The Walsh Group; 2002.
Thies CF, Register CA. Decriminalization of marijuana and the demand for alcohol, marijuana, and cocaine. Soc Sci J. 1993; 30:385–99.
Wagenaar AC, Toomey TL. Effects of minimum drinking age laws: review and analyses of the literature from 1960 to 2000. J Stud Alcohol Suppl. 2002, 14:206–25.
Wagenaar AC, Maldonado-Molina MM, Ma L, Tobler AL, Komro KA. Effects of legal BAC limits on fatal crash involvement: analyses of 28 states from 1976 through 2002. J Saf Res. 2007; 38:493–99.
Wagenaar AC, Tobler AL, Komro KA. Effects of alcohol tax and price policies on morbidity and mortality: a systematic review. Am J Public Health. 2010; 100:2270–78.
Wall MM, Poh E, Cerda M, Keyes KM, Galea S, Hasin DS. Adolescent marijuana use from 2002 to 2008: higher in states with medical marijuana laws, cause still unclear. Ann Epidemiol. 2011; 21:714–16.
Walsh JM, de Gier JJ, Christopherson AS, Verstraete AG. Drugs and driving. Traffic Inj Prev. 2004; 5:241–53.
Walton KE, Ormel J, Krueger RF. The dimensional nature of externalizing behaviors in adolescence: evidence from a direct comparison of categorical, dimensional, and hybrid models. J Abnorm Child Psychol. 2011; 39:553–61.
Williams J, Liccardo Pacula R, Chaloupka FJ, Wechsler H. Alcohol and marijuana use among college students: economic complements or substitutes? Health Econ. 2004; 13:825–43.
Yoruk BK, Yoruk CE. The impact of minimum legal drinking age laws on alcohol consumption, smoking, and marijuana use: evidence from a regression discontinuity design using exact date of birth. J Health Econ. 2011; 30:740–52.
This research was funded by K01AA021511 (Keyes), R21DA029670 (Li), and 1R49CE002096-01 (Li).
Department of Epidemiology, Columbia University, Mailman School of Public Health, 722 West 168th Street, Suite 503, New York, NY, 10032, USA
Katherine M Keyes, Joanne E Brady & Guohua Li
Department of Anesthesiology, Columbia University, New York, NY, USA
Joanne E Brady
Correspondence to Katherine M Keyes.
Authors' contribution
KMK conducted data analysis and drafted the manuscript. JAB conducted data analysis, drafted sections of the manuscript, and provided critical revisions to the manuscript. GL drafted sections of the manuscript and provided critical revisions to the manuscript.
All authors (KMK, JAB, GL) read and approved the final manuscript.
Minimum legal drinking age | CommonCrawl |
\begin{document}
\title{Pinwheels and bypasses}
\shortauthors{Honda, Kazez and Mati\'c} \authors{Ko Honda\\William H. Kazez\\Gordana Mati\'c} \asciiauthors{Ko Honda, William H. Kazez and Gordana Matic} \coverauthors{Ko Honda\\William H. Kazez\\Gordana Mati\noexpand\'c}
\address{{\rm KH:\qua}University of Southern California, Los Angeles, CA 90089, USA\\{\rm and}\\{\rm WHK and GM:\qua}University of Georgia, Athens, GA 30602, USA}
\asciiaddress{KH: University of Southern California, Los Angeles, CA 90089, USA\\and\\WHK and GM: University of Georgia, Athens, GA 30602, USA}
\gtemail{\mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}}
\asciiemail{[email protected], [email protected], [email protected]}
\gturl{\url{http://almaak.usc.edu/~khonda}, \url{http://www.math.uga.edu/~will}, \url{http://www.math.uga.edu/~gordana}}
\asciiurl{http://almaak.usc.edu/ khonda, http://www.math.uga.edu/ will, http://www.math.uga.edu/ gordana}
\begin{abstract} We give a necessary and sufficient condition for the addition of a collection of disjoint bypasses to a convex surface to be universally tight -- namely the nonexistence of a polygonal region which we call a {\em virtual pinwheel}. \end{abstract}
\asciiabstract{ We give a necessary and sufficient condition for the addition of a collection of disjoint bypasses to a convex surface to be universally tight -- namely the nonexistence of a polygonal region which we call a virtual pinwheel.}
\primaryclass{57M50}\secondaryclass{53C15} \keywords{Tight, contact structure, bypass, pinwheel, convex surface}
\maketitle
\section{Introduction}
In this paper we assume that our 3-manifolds are oriented and our contact structures cooriented. Let $\Sigma$ be a convex surface, i.e., it admits a $[-\varepsilon,\varepsilon]$-invariant contact neighborhood $\Sigma\times [-\varepsilon,\varepsilon]$, where $\Sigma=\Sigma\times \{0\}$. We do not assume that a convex surface $\Sigma$ is closed or compact, unless specified. According to a theorem of Giroux~\cite{Giroux00}, if $\Sigma\not=S^2$ is closed or compact with Legendrian boundary, then $\Sigma$ has a tight neighborhood if and only if its dividing set $\Gamma_\Sigma$ has no homotopically trivial closed curves. (In the case when $\Sigma$ is not necessarily compact, $\Sigma$ has a tight neighborhood if $\Gamma_\Sigma$ has no homotopically trivial dividing curves, although the converse is not always true.) In this paper we study the following:
\begin{q} Suppose we attach a family of bypasses $\mathcal{B}=\{\mathcal{B}_\alpha\}_{\alpha \in A}$ along a disjoint family of Legendrian arcs $\mathcal{C}=\{\delta_\alpha\}_{\alpha\in A}$ to a product tight contact structure on $\Sigma \times [-\varepsilon,\varepsilon]$. When is the resulting contact manifold tight? \end{q}
A closed Legendrian arc $\delta_\alpha$, along which a bypass $\mathcal{B}_\alpha$ for $\Sigma$ is attached, is called a {\em Legendrian arc of attachment}. Every arc of attachment begins and ends on $\Gamma_\Sigma$ and has three intersection points with $\Gamma_\Sigma$, all of which are transverse. In this paper all bypasses are assumed to be attached ``from the front'', i.e., attached along $\Sigma\times \{\varepsilon\}$ from the exterior of $\Sigma\times [-\varepsilon, \varepsilon]$, and all arcs of attachment are assumed to be embedded, i.e., there are no ``singular bypasses''. Recall from \cite{H1} that attaching $\mathcal{B}_\alpha$ from the front and isotoping the surface $\Sigma$ across the bypass is locally given by Figure~\ref{bypass}. Denote by $(\Sigma, \mathcal{C})$ the contact manifold $(\Sigma\times[-\varepsilon,\varepsilon])\cup (\cup_\alpha N(\mathcal{B}_\alpha))$, where $N(\mathcal{B}_\alpha)$ is an invariant neighborhood of $\mathcal{B}_\alpha$.
We will show that the key indicator of overtwistedness in the resulting contact manifold $(\Sigma,\mathcal{C})$ is a polygonal region in $\Sigma$ called a {\em pinwheel}. First consider an embedded polygonal region $P$ in $\Sigma$ whose boundary consists of $2k$ consecutive sides $\gamma_1, \alpha_1, \gamma_2, \alpha_2, \dots, \gamma_k, \alpha_k$ in counterclockwise order, where each $\gamma_i$ is a subarc of $\Gamma_\Sigma$ and each $\alpha_i$ is a subarc of a Legendrian arc of attachment $\delta_i \in \mathcal{C}$. Here $k\geq 1$. In this paper, when we refer to a ``polygon'', we will tacitly assume that it is an embedded polygonal region of the type just described. Now orient the sides using the boundary orientation of $P$. A {\em pinwheel} is a special type of polygon $P$, where, for each $i=1,\dots,k$, $\delta_i$ extends past the final point of $\alpha_i$ (not past the initial point) and does not reintersect $P$. (If $k>1$, then this is equivalent to asking $\delta_i$ to extend past $\gamma_{i+1}$, where $i$ is considered modulo $k$.) Figure~\ref{onlyif} gives an example of a pinwheel.
It is easy to see that, if $\Sigma$ is closed or compact with Legendrian boundary, then the addition of bypasses along all the arcs of attachment of a pinwheel produces an overtwisted disk manifested by a contractible curve in the resulting dividing set. Hence, the nonexistence of pinwheels is a necessary condition for the new contact structure to be tight. Essentially, we are asking that no closed, homotopically trivial curves be created when some or all of the bypasses are attached. We will prove that the nonexistence of pinwheels is a sufficient condition as well, if $\Sigma$ is a disk with Legendrian boundary.
\begin{figure}
\caption{Adding a bypass}
\label{bypass}
\end{figure}
\begin{thm}\label{main} Let $\Sigma$ be a convex disk with Legendrian boundary and with a tight neighborhood, and let $\mathcal{C}$ be a finite, disjoint collection of bypass arcs of attachment on $\Sigma$. Denote by $(\Sigma,\mathcal{C})$ the contact structure on $\Sigma \times I$ obtained by attaching to the product contact neighborhood of $\Sigma$ bypasses along all the arcs in $\mathcal{C}$. Then $(\Sigma,\mathcal{C})$ is tight if and only if there are no pinwheels in $\Sigma$. \end{thm}
If a compact convex surface $\Sigma$ has $\pi_1(\Sigma)\not=0$, then Theorem~\ref{main} is modified to allow {\em virtual pinwheels} -- a virtual pinwheel is an embedded polygon $P$ which becomes a pinwheel in some finite cover of $\Sigma$. In other words, since the fundamental group of every compact surface is residually finite, a virtual pinwheel $P$ is either already a pinwheel or the arcs of attachment $\delta_i$ which comprise its sides may extend beyond the polygon, encircle a {\em nontrivial element} in $\pi_1(\Sigma,P)$ and return to the polygon. Figure~\ref{newtypes} gives examples of arcs of attachment which we will show result in overtwisted contact structures. The figure to the left is a pinwheel, the one to the right is an example of a virtual pinwheel.
\begin{figure}
\caption{A pinwheel and a virtual pinwheel}
\label{newtypes}
\end{figure}
\begin{thm}\label{closedsurface} Let $\Sigma\not=S^2$ be a convex surface which is closed or compact with Legendrian boundary and which has a tight neighborhood, and let $\mathcal{C}$ be a finite disjoint collection of arcs of attachment. Then the following are equivalent: \begin{enumerate} \item $(\Sigma,\mathcal{C})$ is universally tight. \item There are no virtual pinwheels in $\Sigma$. \end{enumerate} \end{thm}
\begin{rmk} A pinwheel $P$ may nontrivially intersect arcs of $\mathcal{C}$ in its interior. Any such arc $\delta'$ would cut $P$ into two polygons, and one of the two polygons $P'$ will satisfy the definition of a pinwheel, with the possible exception of the condition that $\delta'$ not reintersect $P'$. $($If $\delta'$ does not reintersect $P'$, then we can shrink $P$ to $P'$.$)$ \end{rmk}
\section{Proof of Theorem~\ref{main}} Let $\mathcal{C}$ be the collection of arcs of attachment and $\mathcal{B}_{\delta}$ be the bypass corresponding to $\delta\in\mathcal{C}$.
The ``only if'' direction is immediate. If there is a pinwheel $P$, then let $\alpha_i$, $i=1,\dots,k$, be the sides of $P$ which are subarcs of $\delta_i\in \mathcal{C}$. Then attaching all the bypasses $\mathcal{B}_{\delta_i}$ creates a closed homotopically trivial curve, and hence an overtwisted disk. We can think of this disk as living at some intermediate level $\Sigma \times \{t\}$ in the contact structure on $\Sigma \times I$ obtained by attaching all the bypasses determined by arcs in $\mathcal{C}$. See Figure~\ref{onlyif}.
\begin{figure}
\caption{Attaching the bypasses around a pinwheel}
\label{onlyif}
\end{figure}
\begin{rmk} The arcs of attachment may be trivial arcs of attachment. For example, if only one of them is attached, $\Gamma_\Sigma$ does not change. However, a ``trivial arc'' $\delta_j$, after it is attached, may affect the positions of the other arcs of attachment, if they intersect the subarc $\gamma\subset \Gamma_\Sigma$ which forms a polygon together with a subarc of $\delta_j$. Therefore, ``trivial'' arcs are not necessarily genuinely trivial as part of a family. \end{rmk}
\begin{rmk} It is crucial that an arc $\delta_i$ in the definition of a pinwheel not return to $P$. For example, if some $\delta_j$ returns to $\gamma_{j+1}$, then no overtwisted disk appears in a neighborhood of the original $P$, after all the bypasses $\mathcal{B}_{\delta_i}$ are attached. \end{rmk}
We now prove the ``if'' part, namely if there are no pinwheels, then the attachment of $\mathcal{C}$ onto the convex disk $\Sigma$ is tight. In fact, we prove the following stronger result:
\begin{thm} \label{general} Let $\Sigma$ be a convex plane whose dividing set $\Gamma_\Sigma$ has no connected components which are closed curves. If $\mathcal{C}$ is a locally finite, disjoint collection of bypass arcs of attachment on $\Sigma$, and $\Sigma$ has no pinwheels, then $(\Sigma,\mathcal{C})$ is tight. \end{thm}
\begin{proof}[Reduction of Theorem~\ref{general} to Theorem~\ref{main}] Since any overtwisted disk\break will live in a compact region of $\Sigma\times [-\varepsilon,\varepsilon]$, we use an exhaustion argument to reduce to the situation where we have a closed disk $D$ with Legendrian boundary, and $\mathcal{C}$ is a finite collection of arcs of attachment which avoid $\partial D$. There is actually one subtlety here when we try to use the Legendrian Realization Principle (LeRP) on a noncompact $\Sigma$ to obtain Legendrian boundary for $D$ -- it is that there is no bound on the distance (with respect to any complete metric on $\Sigma$) traveled by $\partial D$ during the isotopy given in the proof of the Giroux Flexibility Theorem. Hence we take a different approach, namely exhausting $\Sigma$ by convex disks $D_i$ where $\partial D_i$ is not necessarily Legendrian, and then extending $D_i$ to a convex disk $D_i'$ with Legendrian boundary and without pinwheels.
Let $D_1\subset D_2\subset\dots$ be such an exhaustion of $\Sigma$, with the additional property that $\partial D_i\pitchfork \Gamma_\Sigma$ and moreover at each intersection point $x$ the characteristic foliation and $\partial D_i$ agree on some small interval around $x$. Consider a rectangle $R=[0,n]\times[0,1]$ with coordinates $(x,y)$. Let $s$ be an arc on $\partial D_i$ between two consecutive intersections of $\Gamma_\Sigma\cap \partial D_i$. Take a diffeomorphism which takes $s$ to $x=0$; let $\xi$ be the induced contact structure in a neighborhood of $x=0$. It is easy to extend $\xi$ to (a neighborhood of) $y=0$ and $y=1$ so that they become dividing curves. Now the question is to extend $\xi$ to all of $R$ so that $x=n$ is Legendrian. Let $R'=[0,n]\times[\varepsilon',1-\varepsilon']\subset R$ be a slightly smaller rectangle. We write the sought-after invariant contact form on $R'\times[-\varepsilon,\varepsilon]$ as $\alpha = dt + \beta$, where $t$ is the coordinate for $[-\varepsilon, \varepsilon]$, $\beta$ is a form on $R'$ which does not depend on $t$, and $d\beta$ is an area form on $R'$. Provided $n$ is sufficiently large, $\int_{\partial R'} \beta$ will be positive, regardless of $\beta$ on $x=0$. Let $\omega$ be an area form on $R'$ which agrees with $d\beta$ on $\partial R'$ and satisfies $\int_{R'}\omega=\int_{\partial R'}\beta$. Extend
$\beta|_{\partial R'}$ to any 1-form $\beta'$ on $R'$ (not necessarily the primitive of an area form). Since $d\beta'$ agrees with $\omega$ on $\partial R'$, the 2-form $\omega-d\beta'$ vanishes on $\partial R'$; moreover $\int_{R'}(\omega-d\beta')=\int_{\partial R'}\beta-\int_{\partial R'}\beta'=0$ by the choice of $\omega$ and Stokes' theorem. Hence, by the Poincar\'e lemma, there is a 1-form $\beta''$ with $\beta''|_{\partial R'}=0$ such that $\omega -d\beta'=d\beta''$. Therefore, the desired $\beta$ on $R'$ is $\beta'+\beta''$. Since there are only finitely many components of $\Gamma_\Sigma\cap D_i$, we obtain $D'_i$ by attaching finitely many rectangles of the type described above. \end{proof}
Let us now consider the pair $(D,\mathcal{C})$ consisting of a convex disk $D$ with Legendrian boundary (and dividing set $\Gamma_D$) and a finite collection $\mathcal{C}$ of Legendrian arcs of attachment for $D$. We now prove the following:
\begin{prop} If $(D,\mathcal{C})$ has no pinwheels, then $(D,\mathcal{C})$ is tight. \end{prop}
\begin{proof} The idea is to induct on the complexity of the situation. Here, the complexity $c(D,\mathcal{C})$ of $(D,\mathcal{C})$ is given by $c(D,\mathcal{C})=\#\Gamma_D+\#\mathcal{C}$, where $\#\Gamma_D$ is the number of connected components of $\Gamma_D$ and $\#\mathcal{C}$ is the number of bypass arcs in $\mathcal{C}$. Given $(D,\mathcal{C})$ we will find a pair $(D',\mathcal{C}')$ of lower complexity, where $(D,\mathcal{C})$ is tight if $(D',\mathcal{C}')$ is tight, and $(D',\mathcal{C}')$ has no pinwheels if $(D,\mathcal{C})$ has no pinwheels.
The proof will proceed by showing that there are three operations, which we call A, B, and C, one of which can always be performed to reduce the complexity until there are no bypasses left. Operation A removes (unnecessary) isolated $\partial$-parallel arcs. If isolated $\partial$-parallel arcs do not exist, we apply one of Operations B and C. If there is a trivial bypass in $\mathcal{C}$, Operation B removes an ``innermost'' trivial bypass, i.e., we show that performing the bypass attachment gives a configuration with lower complexity which is tight if and only if the original configuration was tight. Otherwise, Operation C removes an ``outermost'' nontrivial bypass by embedding the configuration into one of lower complexity. Since each step is strictly complexity-decreasing, and we can always do at least one of them, we can always perform the inductive step. This will prove the proposition and the theorem.
\subsection{Operation A: isolated $\partial$-parallel arc} Suppose $\Gamma_D$ has a $\partial$-parallel arc $\gamma$ which does not intersect any component of $\mathcal{C}$. (Recall that arcs of attachment are assumed to be closed and hence no component of $\mathcal{C}$ begins or ends on $\gamma$.) We then extend $D$ to $D'$ so that $tb(D')=tb(D)+1$ and $\Gamma_{D'}$ is obtained from $\Gamma_D$ by connecting one of the endpoints of $\gamma$ to a neighboring endpoint of another arc in $\Gamma_D$. (See Figure~\ref{bdryparallel}.) It is clear that since the configuration $(D,\mathcal{C})$ can be embedded into the configuration $(D',\mathcal{C}'=\mathcal{C})$, and vice versa, $(D,\mathcal{C})$ tight is equivalent to $(D',\mathcal{C}')$ tight, and $(D,\mathcal{C})$ having no pinwheels is equivalent to $(D',\mathcal{C}')$ with no pinwheels.
\begin{figure}
\caption{The new $D'$}
\label{bdryparallel}
\end{figure}
\subsection{Operation B: trivial bypasses} Suppose there is a trivial arc of attachment $\delta$ in $\mathcal{C}$. Let $\gamma$ be the connected component of $\Gamma_D$ that $\delta$ intersects at least twice, and let $R$ be a closed half-disk (polygon) whose two sides are a subarc of $\delta$ and a subarc $\gamma_0$ of $\gamma$. As shown in Figure~\ref{trivial}, we choose $R$ to be such that, with respect to the orientation of $\delta$ induced by $\partial R$, the subarc of $\delta$ contained in $\partial R$ starts at an interior point of $\delta$. If $\mathcal{C}-\{\delta\}$ nontrivially intersects $int(R)$, then let $\delta'$ be an arc of $\mathcal{C}-\{\delta\}$ which is outermost in $R$, i.e., cuts off a subpolygon of $R$ which does not intersect $\mathcal{C}-\{\delta\}$ in its interior. Define $R'$ and $\gamma_0'$ analogously for $\delta'$. (Note that $R'$ may or may not be a subset of $R$.) We rename $\delta'$, $R'$ and $\gamma_0'$ by omitting primes. Therefore, we may assume that $\delta$, $\gamma$, and $R$ satisfy the property that $int(R)$ does not intersect any arc of $\mathcal{C}$, although there may be endpoints of arcs of $\mathcal{C}-\{\delta\}$ along $\gamma_0$. By the very definition of $R$, the third point of intersection between $\delta$ and $\Gamma_D$ cannot be in $\gamma_0$.
\begin{figure}
\caption{A trivial arc of attachment}
\label{trivial}
\end{figure}
Now, let $D=D'$ and let $\Gamma_{D'}$ be the dividing set obtained from $\Gamma_D$ by attaching the bypass $\delta$ (the dividing set is modified in a neighborhood of $\delta$). The isotopy types of $\Gamma_D$ and $\Gamma_{D'}$ are the same. However, $\mathcal{C}'$ is identical to $\mathcal{C}-\{\delta\}$ with the following exception: arcs $\delta_i\in\mathcal{C}- \{\delta\}$ which ended on $\gamma_0\subset \gamma$ now end on (what we may think of as) a small interval of $\Gamma_{D'}=\Gamma_D$ around $p$. See Figures \ref{type0shading} and \ref{type1shading}, which both depict what happens locally near $\delta$. We emphasize that in Figures \ref{type0shading} and \ref{type1shading} the two dividing curves may be part of the same dividing curve.
\begin{claim} If $(D,\mathcal{C})$ has no pinwheels then neither does $(D',\mathcal{C}')$. \end{claim}
\begin{proof} For an arc $\delta_i$ in $\mathcal{C}-\{\delta\}$ with an endpoint $q$ on $\gamma_0$, let $\delta_i'$ be its image in $\mathcal{C}'$. If a pinwheel $P'$ of $D'$ has a subarc of $\delta_i'$ as a side and $q$ as a vertex, it is clear that there was a pinwheel $P$ of $D$ which had subarcs of $\delta_i$ and $\delta$ as sides. The pinwheels $P$ and $P'$ are basically the same region of $D$ -- all the sides are the same except that $P$ has two extra vertices, $p$ and $r$, and two extra sides. Here $r$ is the middle intersection point of $\delta$ with $\Gamma_D$ as in Figure~\ref{type0shading}.
\begin{figure}
\caption{Pinwheels in $D$ and $D'$}
\label{type0shading}
\end{figure}
On the other hand, suppose $P'$ is a pinwheel of $D'$ which does not involve any subarcs which used to intersect $\gamma_0$. Then $P'$ must either completely contain or be disjoint from the region $K'$ given in Figure~\ref{type1shading}. However, apart from $K'$ (and the corresponding region $K$ in $D$), $(D,\mathcal{C}-\{\delta\})$ and $(D',\mathcal{C}')$ are identical. Hence $P'$ must have descended from a pinwheel for $D$. \end{proof}
\begin{figure}
\caption{Pinwheels in $D$ and $D'$}
\label{type1shading}
\end{figure}
\subsection{Operation C: outermost nontrivial bypass}
Suppose all the arcs of $\mathcal{C}$ are nontrivial and there are no isolated $\partial$-parallel arcs. Then we have the following:
\begin{claim} There exists an ``outermost'' arc $\delta$ with the following property: there exists an orientation/parametrization of $\delta$ so it intersects distinct arcs $\gamma_3$, $\gamma_2$, $\gamma_1$ of $\Gamma_D$ in that order, and if $R\subset D$ is the closed region cut off by the subarc $\alpha$ of $\delta$ from $\gamma_3$ to $\gamma_2$ and subarcs of $\gamma_2$ and $\gamma_3$ so that the boundary orientation on $\alpha$ induced from $R$ and the orientation from $\delta$ are opposite, then $int(R)$ does not intersect any other arcs of $\mathcal{C}$. \end{claim}
\begin{figure}
\caption{An outermost bypass arc}
\label{extremal}
\end{figure}
\begin{proof} Let $\gamma$ be a $\partial$-parallel arc of $\Gamma_D$. Since there are no isolated $\partial$-parallel arcs or trivial bypasses, $\gamma$ contains an endpoint of at least one arc of attachment in $\mathcal{C}$. Of all such arcs of attachment ending on $\gamma$, choose the ``rightmost'' one $\delta$, where we represent $D$ as the unit disk, place $\gamma$ on the $x$-axis, and arrange that the half-disk cut off by $\gamma$ with no other intersections with $\Gamma_D$ lies in the lower half-plane. Now orient $\delta$ so that $\gamma_3=\gamma$ and denote by $\gamma_2$ the next arc in $\Gamma_D$ that $\delta$ intersects. Let $R$ be the region bounded by $\gamma_3$, $\delta$, $\gamma_2$ and an arc in the boundary of $D$, so that the boundary orientation of $R$ and the orientation of $\delta$ are opposite. There are no endpoints of arcs of $\mathcal{C}-\{\delta\}$ along $\partial R\cap \gamma_3$, but there may certainly be arcs which intersect $int(R)$ and $\partial R\cap \gamma_2$. If there are no $\partial$-parallel arcs in $int(R)$, then we are done. Otherwise, take the clockwisemost $\partial$-parallel arc $\gamma'$ of $\Gamma_D$ (along $\partial D$) in $int(R)$, and let $\delta'$ be the rightmost arc of $\mathcal{C}$ starting from $\gamma'$. Its corresponding region $R'$ is strictly contained in $R$; hence if we rename everything by removing primes and reapply the same procedure, then eventually we obtain $\gamma$, $\delta$, and $R$ so that no arc of $\mathcal{C}$ intersects $int(R)$. \end{proof}
Let $\delta$ be an outermost bypass (in the sense of the previous claim). Then there exists an extension $D'$ of $D$, where $tb(D')=tb(D)+1$ and $\Gamma_{D'}$ is obtained from $\Gamma_D$ by connecting the endpoints of $\gamma_2$ and $\gamma_3$ (those which are corners of $R$) by an arc. If $\gamma_{2,3}$ is the resulting connected component of $\Gamma_{D'}$ which contains $\gamma_2$ and $\gamma_3$, then $\gamma_{2,3}$ and $\delta$ cobound a disk region $R'\subset D'$ that contains $R$. Observe that $D'$ has a tight neighborhood, i.e., $\Gamma_{D'}$ contains no closed loops, because $\gamma_2$ and $\gamma_3$ were distinct arcs of $\Gamma_D$. Now set $\mathcal{C}'=\mathcal{C}$. Then $(D',\mathcal{C}')$ has lower complexity than $(D,\mathcal{C})$. It is clear that $(D',\mathcal{C}')$ has no pinwheels: any pinwheel $P'$ of $(D',\mathcal{C}')$ is either already a pinwheel in $(D,\mathcal{C})$ or contains $R'$. However, any pinwheel that contains $R'$ must extend beyond $\delta$, and hence must contain a sub-pinwheel $P$ with a side $\delta$ inherited from $(D,\mathcal{C})$. \end{proof}
\section{Proof of Theorem~\ref{closedsurface}}
(1) $\Rightarrow$ (2) is clear. Namely, if there is a virtual pinwheel in $\Sigma$, there is a pinwheel in some finite cover, and hence that cover is overtwisted.
(2) $\Rightarrow$ (1)\quad Assume that, on the contrary, $(\Sigma,\mathcal{C})$ is not universally tight. Let $D$ be a disk in the universal cover such that the restriction to $D \times I$ contains the overtwisted disk. We can find a finite cover $\widetilde \Sigma$ of $\Sigma$ that contains that disk, and by modifying the characteristic foliation using LeRP if necessary, we can assume that the disk has Legendrian boundary. Then by Theorem~\ref{main} there is a pinwheel $P$ in $\widetilde \Sigma$. We will show that this implies the existence of a virtual pinwheel in $\Sigma$.
Let $\pi: \widetilde \Sigma \rightarrow \Sigma$ be the projection map and $\mathcal{P}$ be the set of polygons $R$ of $\Sigma$ which are minimal in the sense that they do not contain smaller subpolygons. Then we can define the weight function $w: \mathcal{P}\rightarrow \mathbb{Z}$, which assigns to each $R\in \mathcal{P}$ the degree of $\pi^{-1}(R)\cap P$ over $R$. We illustrate this definition in Figure~\ref{virtualpinwheel.eps}. The shaded area in the left half of the picture is the pinwheel $P$ in $\widetilde \Sigma$. The polygonal regions in $\Sigma$ are labeled by integers $0,1$ and $2$ according to the value that $w$ takes on them. Our sought-after virtual pinwheel $P'\subset \Sigma$ is then one connected component of the union of all $R\in \mathcal{P}$ which attain the maximal value of $w$. (In the figure there is only one component.)
\begin{figure}
\caption{Construction of a virtual pinwheel}
\label{virtualpinwheel.eps}
\end{figure}
First observe that $w$ cannot be locally constant, since $P$ is strictly contained in either the positive region $R_+(\Gamma_{\widetilde \Sigma})$ or the negative region $R_-(\Gamma_{\widetilde \Sigma})$ -- for convenience let us suppose it is $R_+$. Next we show that the values of $w$ are different for any two polygonal regions $R_1$ and $R_2$ inside $R_+(\Gamma_\Sigma)$ which are adjacent along a subarc of an arc $\delta \in \mathcal{C}$ that lifts to a boundary arc $\widetilde{\delta}$ of $P$. More precisely, orient $\delta$ so that it starts in $R_+$, and denote by $R_1$ the region to the left of it and by $R_2$ the region to the right (see Figure~\ref{virtualpinwheel.eps}). Then we claim that $w(R_1) > w(R_2)$.
To prove the claim, first note that $R_1\not=R_2$. If, on the contrary, $R_1=R_2=R$, then there is an oriented closed curve $\alpha$ such that $\alpha\backslash\delta \subset int(R)$ which is ``dual'' to the arc $\delta$ in the sense that they intersect transversely and $\langle \delta,\alpha\rangle = +1$. Observe that any connected component $\widetilde {\alpha}$ of $\pi^{-1}(\alpha)\cap P$ must enter and exit $P$ along components (say $\widetilde{\delta}_1$ and $\widetilde{\delta}_k$) of $\pi^{-1}(\delta)$. Now, the orientation of $\delta$ induces an orientation on the arcs in $\pi^{-1}(\delta)$, and the induced orientation on $\widetilde{\delta}_1$ and $\widetilde{\delta}_k$ (as seen by intersecting with $\widetilde{\alpha}$) is inconsistent with the chirality involved in the definition of a pinwheel.
To see what value $w$ takes on $R_1$ and $R_2$, let us look at the components $\widetilde\delta_i$, $i=1,\dots,n$, of $\pi^{-1}(\delta)$, and denote by $o(\delta), i(\delta)$ and $b(\delta)$ the number of components that are respectively on the outside, in the interior, or on the boundary of $P$. Every time a component $\widetilde\delta_i$ of $\pi^{-1}(\delta)$ appears on the boundary of $P$, the minimal subpolygon of $P$ adjacent to $\widetilde\delta_i$ must project to the region $R_1$. Hence $w(R_1)= i(\delta) + b(\delta)$ and $w(R_2)=i(\delta)$, and therefore $w(R_1) > w(R_2)$.
We will now prove that $P'$ is a polygon. We first claim that $$\pi: \pi^{-1}(P')\cap P\rightarrow P'$$ is a covering map. Indeed, any subpolygon of $P'$ lifts to $\max(w)$ subpolygons of $P$. Moreover, if $a$ is a subarc of an arc $\delta'\in \mathcal{C}$, and $a$ is in $int(P')$, then no component $\widetilde a_i$ of the lift of $a$ can be a side of $P$, since two regions adjacent to it have equal values of $w$. If $U$ is a neighborhood of a point on $a$ in $P'$, then $\pi^{-1}(U) \cap P$ consists of $\max(w)$ copies of $U$. Now that we know that $\pi^{-1}(P')\cap P$ covers $P'$, $P'$ must be simply connected, since $\pi^{-1}(P')\cap P$ must be a union of subpolygons of $P'$ (hence simply connected), and the cover is a finite cover.
Finally, $\delta\in \mathcal{C}$ which is a side of $P'$ and returns to $P'$ must enclose a nontrivial element of $\pi_1(\Sigma,P')\simeq \pi_1(\Sigma)$; otherwise it cobounds a disk together with $P'$, and no cover of $\Sigma$ will extricate the relevant endpoint of $\delta$ from $P'$ (and hence $P$). \endproof
\begin{q} Can we generalize Theorem~\ref{main} to the case where we have nested bypasses? What's the analogous object to the pinwheel in this case? \end{q}
\section{Virtual Pinwheels and Tightness}
In this section we will discuss the following question:
\begin{q} Can we give a necessary and sufficient condition for $(\Sigma,\mathcal{C})$ to be tight, if $\Sigma$ is a convex surface which is either closed or compact with Legendrian boundary? \end{q}
We will present a partial answer to this question. Before we proceed, we first discuss a useful technique called {\em Bypass Rotation}. Let $\Sigma$ be a convex boundary component of a tight contact 3-manifold $(M,\xi)$, and let $\delta_1$ and $\delta_2$ be disjoint arcs of attachment on $\Sigma$. The bypasses are to be attached from the exterior of $M$, and attached from the front in the figures. Suppose there is an embedded rectangular polygon $R$, where two of the sides are subarcs of $\delta_1$ and $\delta_2$ and the other two sides are subarcs $\gamma_1$ and $\gamma_2$ of $\Gamma_\Sigma$. Assume $\delta_1$ and $\delta_2$ both extend beyond $\gamma_1$ and do not reintersect $\partial R$. If we position $R$, $\delta_1$, and $\delta_2$ as in Figure~\ref{rotating}, so that, with the orientation induced from $R$, $\gamma_1$ starts on $\delta_2$ and ends on $\delta_1$, then we say that $\delta_1$ lies {\it to the left} of $\delta_2$.
\begin{figure}
\caption{$\delta_1$ is to the left of $\delta_2$}
\label{rotating}
\end{figure}
The next lemma shows that the arc of attachment of a bypass can be ``rotated to the left'' and still preserve tightness. For convenience, let $M(\delta_1,\dots,\delta_k)$ be a contact manifold obtained by attaching $k$ disjoint bypasses from the exterior, along arcs of attachment $\delta_1,\dots,\delta_k$.
\begin{lemma}[Bypass Rotation]\label{bypassrotating} Let $(M,\xi)$ be a contact 3-manifold, and $\delta_1, \delta_2$ be arcs of attachment on a boundary component $\Sigma$ of $M$. If $\delta_1$ is to the left of $\delta_2$ and $M(\delta_2)$ is tight, then $M(\delta_1)$ is also tight. \end{lemma}
\begin{proof} If $M(\delta_2)$ is tight, then $M(\delta_1,\delta_2)= (M(\delta_2))(\delta_1)$ is also tight, since attaching $\delta_2$ makes $\delta_1$ a trivial arc of attachment. Now, $M(\delta_1,\delta_2)$ is also $(M(\delta_1))(\delta_2)$, so $M(\delta_1)$ must be tight. \end{proof}
\begin{figure}
\caption{Creating extra dividing curves}
\label{create}
\end{figure}
It is clear that if there is an actual pinwheel $P$ in $\Sigma$, then $(\Sigma,\mathcal{C})$ is not tight. It is often possible to make the same conclusion in the presence of a virtual pinwheel $P$ by using the technique of Bypass Rotation. In fact, assume that there is a virtual pinwheel $P$ in $\Sigma$ and let $\delta$ be an arc of attachment which encircles a nontrivial element of $\pi_1(\Sigma, P)$. Suppose $\delta$ has been oriented so that its orientation coincides with the orientation on $\partial P$. If $\delta$ can be rotated to the left so that the final point of $\delta$ is shifted away from $P$, then the newly obtained configuration contains a pinwheel, and hence $(\Sigma,\mathcal{C})$ is overtwisted by Lemma~\ref{bypassrotating}. Even if there are no arcs of $\Gamma_\Sigma-\partial P$ to which we can rotate $\delta$ without hitting other bypass arcs of attachment, we can often perform a folding operation. This operation can be described in two equivalent ways (see \cite{HKM3} for details): Either add a bypass along the arc $\kappa$ as in Figure~\ref{create} to obtain the contact structure $(\Sigma,\mathcal{C} \cup \{\kappa\})$, or fold along a Legendrian divide to create a pair of parallel dividing curves ``along'' $\delta$. Since both operations can be done inside an invariant neighborhood of $\Sigma$, $(\Sigma,\mathcal{C} \cup \{\kappa\})$ is tight if $(\Sigma,\mathcal{C})$ is. Now, rotating $\delta$ to the left, we can move the endpoint of $\delta$ to be on one of the newly created parallel dividing curves -- this yields the bypass $\beta$ pictured in Figure~\ref{create}. The configuration $(\Sigma,\mathcal{C'})$ obtained from $(\Sigma,\mathcal{C} \cup \{\kappa\})$ by replacing $\delta$ by $\beta$ is tight if $(\Sigma,\mathcal{C})$ is tight. However, by repeated application of this procedure if necessary, we will often be able to eventually obtain a genuine pinwheel $P$, hence showing that $(\Sigma,\mathcal{C})$ is overtwisted. More precisely, we have the following:
\begin{prop}\label{otcases} Let $P$ be a virtual pinwheel in $\Sigma$ and $\delta\in \mathcal{C}$ be an arc of attachment on $\partial P$ which returns to $P$. Decompose $\delta=\delta_0\cup\delta_1$, where $\delta_i$, $i=0,1$, have endpoints on $\Gamma_\Sigma$ and $\delta_0\subset \partial P$. Orient $\delta$ to agree with the orientation on $\partial P$. Let $Q$ be a connected component of $\Sigma\setminus (\Gamma_\Sigma\cup (\cup_{\beta\in\mathcal{C}}\beta))$ so that $\delta_1\subset \partial Q$ and the orientation on $\delta_1$ agrees with the orientation on $\partial Q$. If one of the following is true for each $\delta$, then $(\Sigma,\mathcal{C})$ is overtwisted: \begin{enumerate} \item $Q$ is not a polygon. \item $Q$ is a polygon but has sides in $\Gamma_\Sigma$ which are not in $\partial P$. \end{enumerate} \end{prop}
We now consider the situation in which the Bypass Rotation technique just described fails. Let $P$ be a {\it minimal pinwheel}, i.e., a pinwheel whose interior does not intersect any arc of attachment in $\mathcal{C}$. Let $\delta$ be an attaching arc of $P$ that returns to $P$ and that we cannot ``unhook''. Then the region $Q$ described in Proposition~\ref{otcases} must be polygonal and all of the edges of $Q$ coming from the dividing set must be sub-edges of the boundary of $P$. The minimality of $P$ forces edges of $Q$ that are attaching arcs to also be edges of $P$. Thus $Q$ is an {\it anti-pinwheel}, that is, a polygon whose edges that are arcs of attachment are oriented in the direction opposite to that of a pinwheel. Two such pinwheel/anti-pinwheel pairs are illustrated in Figure~\ref{anti-pinwheel}.
\begin{figure}
\caption{Examples of anti-pinwheels}
\label{anti-pinwheel}
\end{figure}
Since there is a virtual pinwheel $P$ in the situation described above, $(\Sigma,\mathcal{C})$ is not universally tight by Theorem~\ref{closedsurface}. To show that there are cases when $(\Sigma,\mathcal{C})$ is tight but virtually overtwisted, we will analyze the situation indicated in the left portion of Figure~\ref{cut-paste}. Here $\Sigma = T^2$, and we think of $\Sigma \times I$ as a neighborhood of the boundary of the solid torus. First, we cut the solid torus along a convex disk $D$ such that its boundary is the curve $\gamma$ that cuts $P \cup Q$ and intersects the dividing set at two points. Next, round the corners that are shown in Figure~\ref{cut-paste}, and we obtain a tight convex ball. In reverse, we can think of the solid torus as obtained by gluing two disks $D_1$ and $D_2$ on the boundary of $B^3$. The bypasses in the picture correspond to adding trivial bypasses to the ball. By applying the basic gluing theorem for gluing across a convex surface with $\partial$-parallel dividing set (see for example Theorem~1.6 in \cite{HKM2}), we see that $(\Sigma,\mathcal{C})$ is tight.
\begin{figure}
\caption{Cutting the pinwheel}
\label{cut-paste}
\end{figure}
Attempts at generalizing the above technique quickly run into some difficulties. Suppose we want to iteratively split $\Sigma$ along a closed curve $\gamma$ and glue in disks $D_1$ and $D_2$. One difficulty (although not the only one) is that at some step in the iteration we could get an overtwisted structure on $(\Sigma',\mathcal{C})$. ($\Sigma'$ could form contractible dividing curves.) On the other hand, this does not necessarily prove that $(\Sigma,\mathcal{C})$ is overtwisted; it merely occurs as a subset of a space with an overtwisted contact structure.
\rk{Acknowledgements} KH wholeheartedly thanks the University of Tokyo, the Tokyo Institute of Technology, and especially Prof.\ Takashi Tsuboi for their hospitality during his stay in Tokyo during Summer-Fall 2003. He was supported by an Alfred P.\ Sloan Fellowship and an NSF CAREER Award. GM was supported by NSF grants DMS-0072853 and DMS-0410066 and WHK was supported by NSF grants DMS-0073029 and DMS-0406158. The authors also thank the referee for helpful comments.
\Addresses\recd
\end{document}
\begin{document}
\begin{frontmatter}
\title{Distances between nested densities and a measure of the
impact of the prior in Bayesian statistics} \runtitle{Distances
between nested densities and the impact of the prior}
\author{\fnms{Christophe}
\snm{Ley}\ead[label=e]{[email protected]}\thanksref{t1},
\fnms{Gesine}
\snm{Reinert}\ead[label=e2]{[email protected]}\thanksref{t2}
\and \fnms{Yvik}
\snm{Swan}\corref{}\ead[label=e3]{[email protected]}\thanksref{t3} }
\affiliation{Ghent University, University of Oxford \and
Universit\'e de Li\`ege }
\thankstext{t1}{Christophe Ley, whose affiliation during parts of this work was Universit\'e libre de Bruxelles, thanks the Fonds National de la
Recherche Scientifique, Communaut\'e fran\c{c}aise de Belgique, for
financial support via a Mandat de Charg\'e de Recherche FNRS.}
\address{Christophe Ley \\ Ghent University\\
Department of Applied Mathematics, \\Computer Science and Statistics \\ Krijgslaan 281, S9\\
9000 Ghent, Belgium \\
\printead{e}}
\thankstext{t2}{Gesine Reinert acknowledges support from EPSRC grant
EP/K032402/I as well as from the Keble College Advanced Studies
Centre. }
\address{Gesine Reinert \\ University of Oxford\\Department of Statistics\\1 South Parks Road\\Oxford OX1 3TG, UK\\
\printead{e2}}
\thankstext{t3}{Yvik Swan gratefully acknowledges support from the
IAP Research Network P7/06 of the Belgian State (Belgian Science
Policy).}
\address{Yvik Swan \\Universit\'e de Li\`ege\\ D\'epartement de Math\'ematique\\12 all\'ee de la d\'ecouverte\\ B\^at. B37 pkg 33a\\ 4000 Li\`ege, Belgium \\
\printead{e3}}
\runauthor{Ley, Reinert \and Swan}
\begin{abstract}
In this paper we propose tight upper and lower bounds for the
Wasserstein distance between any two {{univariate continuous
distributions}} with probability densities $p_1$ and $p_2$
having nested supports. These explicit bounds are expressed in terms
of the derivative of the likelihood ratio $p_1/p_2$ as well as the
Stein kernel $\tau_1$ of $p_1$. The method of proof relies on a new
variant of Stein's method which manipulates Stein operators.
We give several applications of these bounds. Our main
application is in Bayesian statistics: we derive explicit
data-driven bounds on the Wasserstein distance between the posterior
distribution based on a given prior and the no-prior posterior based
uniquely on the sampling distribution. This is the first finite
sample result confirming the well-known fact that with
well-identified parameters and large sample sizes, reasonable
choices of prior distributions will have only minor effects on
posterior inferences if the data are benign.
\end{abstract}
\begin{keyword} \kwd{Stein's method} \kwd{Bayesian analysis} \kwd{Prior distribution} \kwd{Posterior distribution} \end{keyword}
\end{frontmatter}
\section{Introduction} \label{sec:introduction}
A key question in Bayesian analysis is the effect of the prior on the posterior, and how this effect can be assessed. As more and more data are collected, will the posterior distributions derived with different priors be very similar? This question has a long history; see for example \cite{stein1965approximation, DF86b,DF86a}. Asymptotic results giving conditions under which the effect of the prior wanes as the sample size tends to infinity can be found for example in \cite{DF86b,DF86a}. Here, in contrast, we are interested in explicit bounds on some measure of the distributional distance between the posterior based on a given prior and the no-prior, data-only posterior, allowing us to detect the effect of the prior at \emph{fixed} sample size.
In the simple setting of prior and posterior being univariate and continuous, the basic relation that the posterior is proportional to the prior times the likelihood leads to the more general problem of comparing two distributions $P_1$ and $P_2$ whose densities $p_1$ and $p_2$ have nested supports. Letting $\mathcal{I}_1$ (resp., $\mathcal{I}_2$) be the support of $p_1$ (resp., $p_2$) and assuming $\mathcal{I}_2 \subset \mathcal{I}_1$, we can write $$p_2 = \pi_0 p_1$$ for $\pi_0 = p_2/p_1$ a non-negative finite function, called the likelihood ratio in statistics.
To assess the distance between such distributions, we choose the Wasserstein-1 distance
defined as \begin{equation}\label{dist}
d_{\mathcal{W}}(P_1,P_2)
=\sup_{h\in\mathcal{H}}|\mathbb{E}[h(X_2)]-\mathbb{E}[h(X_1)]| \end{equation} for $\mathcal{H} = {\rm Lip}(1)$ the class of Lipschitz-1 functions, where $X_1$ has distribution $P_1$ (resp., probability density function (pdf) $p_1$) and $X_2$ has distribution $P_2$ (resp., pdf $p_2$).
The central aim of this paper is to provide meaningful bounds on $d_{\mathcal{W}}(P_1, P_2)$ in terms of $\pi_0$.
Our approach to this problem relies on Stein's \emph{density approach} introduced in \cite{Stein1986, Stein2004}, as further developed in \cite{ley2014approximate, LS13b,LS13p,LS12a}. Let $P_1$ have density $p_1$ with interval support $\mathcal{I}_1$ with closure $[a_1, b_1]$ for some $-\infty \le a_1 < b_1 \le +\infty$. Suppose also that $P_1$ has mean $\mu$. Then a notion which will be of particular importance is the Stein kernel of $P_1$ which is the function $\tau_1: [a_1, b_1] \rightarrow \mathbb{R}$ given by \begin{equation*} \tau_1 (x) = \frac{1}{p_1(x)} \int_{a_1}^x ( \mu - y) p_1(y)dy. \end{equation*} Our main results assume that $p_1$ and $p_2$ are absolutely continuous densities, and that $\pi_0$ is a differentiable function satisfying
\
\noindent Assumption A:~$ \lim_{x\rightarrow
a_1}\pi_0(x)\int_{a_1}^x(h(y)-\mathbb{E}[h(X_1)])p_1(y)dy=0=\lim_{x\rightarrow
b_1} \pi_0(x)\int_{x}^{b_1} (h(y)-\mathbb{E}[h(X_1)])p_1(y)dy $
for all Lipschitz-continuous functions $h$ with $|\mathbb{E} [h(X_1)]| < \infty$. Here $X_1 \sim P_1$.
\
\noindent Under these assumptions we prove the following result (Theorem \ref{maintheo}).
{\bf{Theorem}}. {\it{ The Wasserstein distance between $P_1$ with pdf $ p_1$ and $P_2$ with pdf $p_2=\pi_0p_1$ satisfies the following inequalities: $$
\left| \mathbb{E} \left[
\pi_0'(X_1) \tau_1(X_1) \right] \right| \le d_{\mathcal{W}} (P_1, P_2) \le \mathbb{E} \left[
\left| \pi_0'(X_1)\right| \tau_1(X_1) \right] $$ where $\tau_1$ is the Stein kernel associated with $p_1$ and $X_1 \sim P_1$. }}
If $P_1= \mathcal{N}(\mu, \sigma^2)$ is a normal distribution then the above result simplifies considerably because $\tau_1(x) = \sigma^2$ is constant, yielding \begin{equation*}
\sigma^2 \left| \mathbb{E} \left[ \pi_0'(X_1) \right] \right| \le
d_{\mathcal{W}} (P_1, P_2) \le \sigma^2 \mathbb{E} \left[ \left|
\pi_0'(X_1)\right| \right]. \end{equation*} The Gaussian is characterized by the fact that its Stein kernel is constant. More generally, all distributions belonging to the classical Pearson family possess a polynomial Stein kernel (see \cite{Stein1986}). The problem of determining the Stein kernel is, in general, difficult. Even when the Stein kernel $\tau_1$ is not available we can give the following simpler bound (Corollary~\ref{varcor}).
{\bf{Corollary}}. {\it{ Under the same assumptions as in Theorem \ref{maintheo}, $$
\left| \mathbb{E} [X_1] - \mathbb{E} [X_2]\right| \le d_{\mathcal{W}} (P_1, P_2) \le \| \pi_0'\|_{\infty} \mathrm{Var} [X_1]. $$ }}
More generally, because the Stein kernel is always positive, the upper and lower bounds in the Theorem turn out to be the same whenever the likelihood ratio $\pi_0$ is monotone, which is equivalent to requiring that $P_1$ and $P_2$ are stochastically ordered in the sense of likelihood ratios. This brings us to our next result (Corollary~\ref{theo2}).
{\bf{Corollary}}. {\it{ Let $X_1 \sim P_1$ and $X_2 \sim P_2$. If $X_1\le_{LR}X_2$ or
$X_2 \le_{LR}X_1$ then \begin{align*}
d_{\mathcal{W}}(P_1, P_2) & = |\mathbb{E}[X_2]- \mathbb{E}
[X_1]|
= \mathbb{E} \left[ |\pi_0'(X_1) |
\tau_1(X_1) \right]
= \mathbb{E} \left[ \left|\left( \log \pi_0(X_2) \right)'\right|
\tau_1(X_2) \right]. \end{align*} }}
In the case of a monotone likelihood ratio between $P_1$ and $P_2$, the first of the above identities is easy to derive directly from the known alternative definitions of the Wasserstein distance (see e.g.~\cite{vallender1974calculation})
\begin{align*}
d_{\mathcal{W}}(P_1, P_2) = \int_{-\infty}^{\infty} \left| F_{P_1}(x) - F_{P_2}(x) \right| dx
= \int_0^1 \left| F_{P_1}^{-1}(u) - F_{P_2}^{-1}(u) \right| du
\end{align*}
with $F_{P_1}$ and $F_{P_1}^{-1}$ (resp., $F_{P_2}$ and $F_{P_2}^{-1}$) the
cumulative distribution function and quantile function of $P_1$
(resp., $P_2$).
We illustrate the effectiveness of our bounds in several examples at the end of Section~\ref{sec:result}, comparing e.g.~Gaussian random variables or Azzalini's skew-symmetric densities with their symmetric counterparts.
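For readers who wish to check these bounds numerically before turning to the examples, the following minimal sketch (our own illustration, assuming \texttt{numpy} and \texttt{scipy} are available; the pair of normal distributions is chosen arbitrarily) compares $P_1=\mathcal{N}(0,1)$ and $P_2=\mathcal{N}(0.3,1)$. Here $\pi_0(x)=\exp(0.3x-0.045)$ is increasing and $\tau_1\equiv 1$, so Corollary~\ref{theo2} predicts that the lower bound, the upper bound and $d_{\mathcal{W}}(P_1,P_2)$ all equal $0.3$.
\begin{verbatim}
# Sanity check: P1 = N(0,1), P2 = N(0.3,1); pi0 = p2/p1 is increasing,
# so lower bound = upper bound = d_W = 0.3 in this case.
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu2 = 0.3
p1, p2 = stats.norm(0, 1), stats.norm(mu2, 1)

# Wasserstein-1 distance computed as the L1 distance between the two CDFs.
dW = quad(lambda x: abs(p1.cdf(x) - p2.cdf(x)), -10, 10)[0]

# Stein-kernel bounds: tau_1 = 1 for N(0,1), and
# pi0'(x) = mu2 * exp(mu2 * x - mu2**2 / 2).
pi0_prime = lambda x: mu2 * np.exp(mu2 * x - mu2**2 / 2)
lower = abs(quad(lambda x: pi0_prime(x) * p1.pdf(x), -10, 10)[0])
upper = quad(lambda x: abs(pi0_prime(x)) * p1.pdf(x), -10, 10)[0]

print(dW, lower, upper)  # all three are approximately 0.3
\end{verbatim}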
In Section \ref{sec:Bayes} we treat as our main application the Bayes example wherein we {measure explicitly the effect of priors on
posterior distributions}. Suppose we observe data points $x := (x_1, x_2, \ldots, x_n)$ with sampling density $f(x ; \theta)$ (proportional to the likelihood), where $\theta$ is the one-dimensional parameter of interest. {Let $p_0(\theta)$ be a
certain prior distribution, possibly improper, and let $\Theta_2$ be
the resulting posterior guess for $\theta$ perceived as a random
variable. By Bayes' theorem, this has density
$p_2(\theta;x)=\kappa_2(x) {f(x ; \theta)p_0(\theta)}$ with
$\kappa_2(x)$ the normalizing constant which depends on the
data. Under moderate assumptions, we provide computable expressions
for the Wasserstein distance
$ d_{\mathcal{W}}(\Theta_2,{\Theta}_1) $ between this posterior
distribution and $\Theta_1$, whose law is the no-prior posterior
distribution with density (proportional to the likelihood) given by
$p_1(\theta; x) = \kappa_1(x) f(x;\theta)$, again with normalizing
constant $\kappa_1(x)$ depending on the data. The bounds we derive
are expressed in terms of the data, the prior and the Stein kernel
$\tau_1$ of the sampling distribution.}
We study the normal model with general and normal priors, the binomial model under a general prior, a conjugate prior, and the Jeffreys prior. We also consider the Poisson model with an exponential prior, in which case we can make use of the likelihood ratio ordering.
For example, with a normal ${\cal{N}} (\mu, \delta^2)$ prior and a random sample $x_1, \ldots, x_n$ from a normal ${\cal{N}} (\theta, \sigma^2)$ model with fixed $\sigma^2$, we obtain in \eqref{eq:21} that \begin{equation*}
\frac{\sigma^2}{n\delta^2 + \sigma^2} \left| \bar{x}- \mu \right|\le
d_{\mathcal{W}} (\Theta_1, \Theta_2)
\le \frac{\sigma^2}{n\delta^2 +
\sigma^2} \left| \bar{x}- \mu \right| + \frac{\sqrt{2}}{\sqrt{\pi}} \frac{\sigma^3}{n \delta
\sqrt{\delta^2 n + \sigma^2}}. \end{equation*} Not only do we see that the distance tends to zero as $n\rightarrow\infty$, as is well known, but we also have an explicit dependence on the difference between the sample mean $\bar{x}$ and the prior mean $\mu$, indicating the importance of a reasonable choice for the prior.
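As a small numerical sketch of this display (our own check, with arbitrary illustrative values and assuming the standard conjugate forms of the two posteriors, namely $\Theta_1\sim\mathcal{N}(\bar{x},\sigma^2/n)$ and $\Theta_2\sim\mathcal{N}\big((\sigma^2\mu+n\delta^2\bar{x})/(n\delta^2+\sigma^2),\,\sigma^2\delta^2/(n\delta^2+\sigma^2)\big)$), one may run:
\begin{verbatim}
# Numerical check of the bounds for the normal model with a normal prior.
import numpy as np
from scipy import stats
from scipy.integrate import quad

n, sigma, mu, delta, xbar = 20, 1.0, 0.0, 2.0, 0.7

post1 = stats.norm(xbar, sigma / np.sqrt(n))   # no-prior posterior
m2 = (sigma**2 * mu + n * delta**2 * xbar) / (n * delta**2 + sigma**2)
s2 = sigma * delta / np.sqrt(n * delta**2 + sigma**2)
post2 = stats.norm(m2, s2)                     # N(mu, delta^2)-prior posterior

# Wasserstein-1 distance as the L1 distance between the posterior CDFs.
dW = quad(lambda t: abs(post1.cdf(t) - post2.cdf(t)), xbar - 5, xbar + 5)[0]

lower = sigma**2 / (n * delta**2 + sigma**2) * abs(xbar - mu)
upper = lower + np.sqrt(2 / np.pi) * sigma**3 \
        / (n * delta * np.sqrt(delta**2 * n + sigma**2))

print(lower, dW, upper)  # lower <= dW <= upper
\end{verbatim}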
For a normal ${\cal{N}} (\theta, \sigma^2)$ model and a general prior on $\theta$, we obtain in \eqref{normalverygenprior} that \begin{eqnarray*}
\frac{\sigma^2}{n} |\mathbb{E}[\rho_0(\Theta_2)]|
\leq d_\mathcal{W}(\Theta_1,\Theta_2) \le \frac{\sigma^2}{n}\mathbb{E}[|\rho_0(\Theta_2)|] \end{eqnarray*} with $\rho_0$ the score function of the prior distribution. Here the data are hidden in the distribution of $\Theta_2$.
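For instance (an observation added here for illustration), the score function of the $\mathcal{N}(\mu,\delta^2)$ prior is $\rho_0(\theta)=-(\theta-\mu)/\delta^2$, so these bounds specialize to
\begin{equation*}
\frac{\sigma^2}{n\delta^2}\big|\mathbb{E}[\Theta_2]-\mu\big| \;\leq\; d_\mathcal{W}(\Theta_1,\Theta_2) \;\leq\; \frac{\sigma^2}{n\delta^2}\,\mathbb{E}\big|\Theta_2-\mu\big|.
\end{equation*}
Since, by the standard conjugate computation, $\Theta_2$ is normal with mean $\mu+\frac{n\delta^2}{n\delta^2+\sigma^2}(\bar{x}-\mu)$ and standard deviation $\sigma\delta/\sqrt{n\delta^2+\sigma^2}$, the lower bound coincides with the lower bound of the display above, while bounding $\mathbb{E}|\Theta_2-\mu|\leq|\mathbb{E}[\Theta_2]-\mu|+\sqrt{2/\pi}\,\sigma\delta/\sqrt{n\delta^2+\sigma^2}$ recovers its upper bound.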
In the binomial case with conjugate prior we obtain \begin{align*}
& \frac{1}{n+2}\left|(2-\alpha-\beta)\frac{\frac{\alpha}{n}+\bar{x} }{1 + \frac{\alpha+\beta}{n}}+(\alpha-1)\right|
\le d_{\mathcal{W}}(\Theta_1, \Theta_2) \\
& \quad \qquad \qquad \qquad \le \frac{1}{n+2}\left( |2-\beta-\alpha| \frac{\frac{\alpha}{n}+\bar{x} }{1 + \frac{\alpha+\beta}{n}}
+ |\alpha-1| \right), \end{align*} with $\alpha$ and $\beta$ the parameters of the conjugate (beta) prior. Finally in the Poisson case we obtain \begin{eqnarray*}
d_{\mathcal{W}}(\Theta_1, \Theta_2) &=& \frac{\lambda}{n + \lambda} \bar{x} + \frac{\lambda}{n(n+\lambda)}, \end{eqnarray*} with $\lambda>0$ the parameter of the exponential prior.
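For illustration, this identity can be checked directly (a sketch added here, assuming the posteriors take their usual conjugate Gamma forms): the no-prior posterior is $\mathrm{Gamma}(n\bar{x}+1,\,n)$ and the $\mathrm{Exp}(\lambda)$-prior posterior is $\mathrm{Gamma}(n\bar{x}+1,\,n+\lambda)$, so that $\Theta_2\le_{LR}\Theta_1$ and, by Corollary~\ref{theo2},
\begin{equation*}
d_{\mathcal{W}}(\Theta_1,\Theta_2)=\mathbb{E}[\Theta_1]-\mathbb{E}[\Theta_2]=\frac{n\bar{x}+1}{n}-\frac{n\bar{x}+1}{n+\lambda}=\frac{\lambda}{n+\lambda}\,\bar{x}+\frac{\lambda}{n(n+\lambda)}.
\end{equation*}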
{{ The main tool in this paper is a specification of the
general approach in \cite{ley2014approximate} which allows us to
manipulate Stein operators. Distributions can be compared through
their Stein operators which are far from being
unique; for a single distribution there is a whole family of
operators which could serve as Stein operators, see for example
\cite{ley2014approximate}. In this paper, for probability
distribution $P$ with pdf $p$ we choose the Stein operator
$\mathcal{T}_P$ as
$$
\mathcal{T}_P : f \mapsto \mathcal{T}_Pf = \frac{(fp)'}{p}
$$
with the convention that $ \mathcal{T}_Pf(x) = 0$ outside of the support of $P$; for details see Definition \ref{steinpair} and \cite{LS12a}. For this choice of operator, the product
structure implies a convenient connection between
$\mathcal{T}_1$, the Stein operator for $P_1$ with pdf $p_1$, and $\mathcal{T}_2$, the Stein operator for $P_2$ with pdf $p_2 = \pi_0 p_1$, namely \begin{equation*}
\mathcal{T}_2(f)=\mathcal{T}_1(f)+ f\frac{\pi_0'}{\pi_0} =\mathcal{T}_1(f)+ f(\log \pi_0)' ; \end{equation*} see \eqref{fund}. The difference $$ \mathcal{T}_2(f)- \mathcal{T}_1(f)= f(\log \pi_0)' $$ is the cornerstone of our results. }}
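For the reader's convenience we note that this identity is immediate from the definition of the operator: on the support of $p_2$,
\begin{equation*}
\mathcal{T}_2 f=\frac{(f\pi_0 p_1)'}{\pi_0 p_1}=\frac{(f p_1)'\,\pi_0+f p_1\,\pi_0'}{\pi_0 p_1}=\mathcal{T}_1 f+f\,\frac{\pi_0'}{\pi_0}.
\end{equation*}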
\begin{remark}
This paper restricts attention to the univariate case. The
multivariate case is of considerable interest but our approach
requires an extension of the density method to a multivariate
setting, which is to date still under construction and not yet
available.
{{
Using the approach in \cite{ley2014approximate} it would be possible to extend our results to more general Radon-Nikodym derivatives, at the expense of clarity of exposition. }} \end{remark}
The paper is organized as follows. In Section~\ref{sec:2}, we provide the necessary notations and definitions from Stein's method, which allows us to state our main result, Theorem~\ref{maintheo}, in Section~\ref{sec:result}. Several applications of this result are discussed in Examples~\ref{sec:dist-betw-prod-1} to \ref{sec:exo4}, while Section~\ref{sec:Bayes} tackles our motivating Bayesian problem by providing a measure of the impact of the choice of the prior on the posterior distribution for finite sample size $n$. Finally in Section~\ref{sec:stein-factors-1} we provide a proof of one of the crucial bounds we need for our estimation purposes.
\section{A review of Stein's density approach}\label{sec:2}
\subsection{Notations and definitions}\label{sec:setting}
Here we recall some notions from \cite{ley2014approximate} and \cite{LS12a}. Consider a {{probability distribution $P$}} with continuous univariate {{Lebesgue}} density $p$ on the real line and let $L^1(p) {{= L^1(p(x) dx)}}$ denote the collection of $f : \mathbb{R}\to \mathbb{R}$ such that
$\mathbb{E} |f(X)| = \int |f(x)|p(x)dx<\infty$, where $X \sim P$. Let
$\mathcal{I} = \left\{ x \in \mathbb{R} \, | \, p(x)>0 \right\}$ be the support of $p$. {{In this paper we shall use the following definition of a Stein operator; see for example \cite{ley2014approximate} for a discussion of alternative choices.}}
\begin{definition}[Stein pair]\label{steinpair}
The Stein class $ \mathcal{F}(P)$ of $P$ is the collection of
$f:\mathbb{R}\to \mathbb{R}$ such that (i) $fp$ is absolutely continuous, (ii)
$(fp)'\in L^1(dx)$ and (iii) $\int_{\mathbb{R}}(fp)'dx = 0$. The Stein
operator $\mathcal{T}_P$ for $P$ is
\begin{equation}
\label{eq:12}
\mathcal{T}_P : \mathcal{F}(P) \to L^1(p) : f \mapsto \mathcal{T}_Pf = \frac{(fp)'}{p}
\end{equation}
with the convention that $ \mathcal{T}_P f(x) = 0$ outside of
$\mathcal{I}$. \end{definition} {{Here $(fp)'$ denotes the derivative of $fp$ which exists Lebesgue-almost surely due to the assumption of absolute continuity. Often the Stein pair $(\mathcal{F}(P),\mathcal{T}_P)$ is written as dependent on $X\sim P$ rather than on $P$ (that is, as $(\mathcal{F}(X),\mathcal{T}_X)$); we use the dependence on the distribution to emphasize that the pair itself is not random.}}
Note that because we only consider $f$ multiplied by $p$ the behavior of $f$ outside of $\mathcal{I}$ is irrelevant.
\begin{remark} \label{constant} A sufficient condition for $\mathcal{F}(P)\neq \emptyset$ is that $p'$ is integrable with integral $0$ so that e.g.~$f=1 \in \mathcal{F}(P)$. Such an assumption is in general too strong (see e.g.~\cite{Stein2004} for a discussion about the arcsine distribution) and weaker assumptions on $p$ are permitted in our framework, although in such cases stronger constraints on the functions in $\mathcal{F}(P)$ are necessary. {{In particular the constant functions may not belong to $\mathcal{F}(P)$.}}
All random quantities appearing in the sequel will be assumed to have non-empty Stein class (an assumption verified for all classical distributions from the literature). \end{remark}
It is easy to see {{from Definition \ref{steinpair} (iii)}} that $\mathbb{E}[ \mathcal{T}_Pf(X)]= 0$ for all $f \in \mathcal{F}(P)$. More generally one can prove that if $Y$ and $X$ share the same support then $ Y\stackrel{\mathcal{D}}{=} X$ (equality in distribution) if and only if $\mathbb{E} \left[ \mathcal{T}_Pf(Y) \right]=0$ for all $f \in \mathcal{F}(P)$. For any family of operators ${\mathcal{T}}$ indexed by univariate probability measures $P$ and $Q$ and for any class of functions $\mathcal{G}$ we say that $(\mathcal{T}_P, \mathcal{G}) $ is a \emph{Stein characterization} if \begin{eqnarray} \label{steinpaireq}
P = Q \Longleftrightarrow \mathcal{T}_Q (f) = \mathcal{T}_P (f) \quad \forall f \in \mathcal{G};
\end{eqnarray} see \cite{LS12a,LS13b} for general versions. In particular a Stein pair $(\mathcal{T}_P, \mathcal{F}(P))$ is a Stein characterization.
With our notations, the operator $\mathcal{T}_P$ also admits an inverse which is easy to write out formally at least.
Let $X \sim P$ have (open, closed, or
half-open) interval support ${\cal{I}}$ between $a$ and $b$, where
$-\infty \le a < b \le + \infty$, and set
$$\mathcal{F}^{(0)}(P)= \{ h \in L^1(p): \mathbb{E}[h(X)]=0 \}.$$ Define $\mathcal{T}_P^{-1} :
\mathcal{F}^{(0)}(P) \to \mathcal{F}(P)$ by \begin{equation}
\label{eq:1}
\mathcal{T}_P^{-1}h(x) = \frac{1}{p(x)} \int_{a}^x h(y) p(y)dy=-\frac{1}{p(x)} \int_x^{b} h(y) p(y)dy. \end{equation} The operator $\mathcal{T}_P^{-1}$ is the \emph{inverse Stein
operator} of $P$ in the sense that
$$ \mathcal{T}_P ( \mathcal{T}_P^{-1}h) = h.$$ Note how the particular structure of the r.h.s.~of \eqref{eq:1} ensures that $\mathcal{T}_P^{-1}h$ belongs to $\mathcal{F}(P)$ for any $h \in \mathcal{F}^{(0)}(P)$. If in addition
$(fp)(a) = (fp)(b) = 0$ for all $f \in \mathcal{F}(P)$ then
$$ \mathcal{T}_P^{-1} ( \mathcal{T}_Pf) = f$$ so that $\mathcal{T}_P^{-1}$ constitutes a bona fide inverse in this case.
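To illustrate, for the standard normal distribution (with density $\varphi(x)=e^{-x^2/2}/\sqrt{2\pi}$ on $\mathbb{R}$), formula \eqref{eq:1} reads
\begin{equation*}
\mathcal{T}_P^{-1}h(x)=e^{x^2/2}\int_{-\infty}^{x}h(y)\,e^{-y^2/2}\,dy, \qquad h\in\mathcal{F}^{(0)}(P),
\end{equation*}
which is the classical solution of the normal Stein equation $f'(x)-xf(x)=h(x)$.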
\subsection{Standardizations of the operator} \label{sec:stand-oper} Although the Stein pair $(\mathcal{T}_P, \mathcal{F}(P))$ is unique{{ly defined in Definition \ref{steinpair},}} there are many implicit conditions on $f \in \mathcal{F}(P)$ which are useful to identify before applying this construction to specific approximation problems. {{In particular for favourable behavior of the inverse Stein operator it may be advantageous to consider}} only subclasses $\mathcal{F}_{\mathrm{sub}}(P)\subset \mathcal{F}(P)$ of functions satisfying certain target-specific and well chosen constraints. A good choice of subclass will lead to specific forms of the resulting operator which may turn out {{to have a smooth inverse Stein operator, as illustrated in the next example. As long as $\mathcal{F}_{\mathrm{sub}}(P)$ is a measure-determining class, the class is informative enough to satisfy \eqref{steinpaireq}. }}
\begin{example}
In the case of the Laplace distribution $\mathrm{Lap}$ with pdf $p(x) \propto e^{-|x|}$ {{the Stein operator from Definition \ref{steinpair}}} is
\begin{equation}\label{eq:18}
\mathcal{T}_{\mathrm{Lap}}f(x) = f'(x) - \mathrm{sign}(x)f(x)
\end{equation}
with $f \in \mathcal{F}(\mathrm{Lap})$, {{the class of}} functions such that $f(x) e^{-|x|}$
is differentiable {{almost surely}} with integrable derivative{{, and the derivative of $f(x) e^{-|x|}$ integrates to 0 over the real line}}. This
operator does not have agreeable properties, mainly because the
assumptions on $\mathcal{F}(\mathrm{Lap})$ are not explicit (see
e.g.~\cite{eichelsbacher2014malliavin} and \cite{pikeren}).
It is {{indeed}} sufficient to consider functions of the form
$f(x) = (x f_0(x)e^{|x|})'/e^{|x|}$ for certain functions
$f_0$. Applying $\mathcal{T}_{\mathrm{Lap}}$ to such functions
yields the second order operator \begin{equation}\label{eq:25}
\mathcal{T}_{\mathrm{Lap}}f(x) = \mathcal{A}_Xf_0(x) = xf_0''(x)+2f_0'(x)-x f_0(x) \end{equation} with $f_0 \in \mathcal{F}(\mathcal{A}_\mathrm{Lap})$ the class of
functions {{which are piecewise twice continuously differentiable such that $x f_0''(x), f_0'(x)$ and $x f_0(x)$ are all in $L^1 \left( e^{-|x|} dx \right) $}}, as considered e.g.~in
\cite{eichelsbacher2014malliavin,gaunt2014variance}. {{ In \cite{pikeren} functions of the form $f(x) = (-(g(x) - g(0)) e^{|x|})'/e^{|x|}$ yielded the second order operator $$ \mathcal{T}_{\mathrm{Lap}, PR}g(x) = g(x) - g(0) - g''(x)$$ for $g$ locally absolutely continuous with
$g \in L^1 \left( e^{-|x|} dx \right) $, $g'$ also locally absolutely continuous and $g'' \in L^1 \left( e^{-|x|} dx \right) $. The operator $\mathcal{T}_{\mathrm{Lap}, PR}$ is also discussed in \cite{eichelsbacher2014malliavin} but not used in \cite{eichelsbacher2014malliavin} because it did not fit in with Malliavin calculus as well as \eqref{eq:25}. }} \end{example}
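For the reader's convenience, here is a direct check of \eqref{eq:25} away from the origin. Writing $f(x)=(xf_0(x)e^{|x|})'e^{-|x|}=f_0(x)+xf_0'(x)+|x|f_0(x)$, we compute
\begin{align*}
f'(x)&=2f_0'(x)+xf_0''(x)+\mathrm{sign}(x)f_0(x)+|x|f_0'(x),\\
\mathrm{sign}(x)f(x)&=\mathrm{sign}(x)f_0(x)+|x|f_0'(x)+xf_0(x),
\end{align*}
so that $\mathcal{T}_{\mathrm{Lap}}f(x)=f'(x)-\mathrm{sign}(x)f(x)=xf_0''(x)+2f_0'(x)-xf_0(x)$, as claimed in \eqref{eq:25}.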
{{ Even in the straightforward situation of a normal distribution, often a standardization is applied, as explained in the next example.}}
\begin{example} For the standard normal distribution ${\mathcal{N}}(0,1)$ it is easy to write out the operator \eqref{eq:12} explicitly to get $ \mathcal{T}_{{\mathcal{N}}(0,1)} (f)(x) = f'(x) - xf(x)$ acting on a wide class of functions $\mathcal{F}({\mathcal{N}}(0,1))$ which includes all absolutely continuous functions with polynomial decay at $\pm \infty$. In particular the constant function $\bf{1}$ is in $\mathcal{F}({\mathcal{N}}(0,1))$. {{A standardization of the form $f(x)= H_n(x) f_0(x)$ with $H_n$ the $n^{th}$ Hermite polynomial ($H_0(x) = 1, H_1(x) = x, H_2(x) = x^2 -1$) gives as operator $ {\cal{A}}f_0 (x) = H_n(x) f_0'(x) - H_{n+1}(x) f_0(x),$ see for example \cite{goldrein}.
It is also possible to study the behavior of the functions $f_h$ solving the Stein equation (introduced in Section~\ref{sec:steins-transf-princ} below) under quite general conditions on $h$. For instance, if $\mathcal{H}$ is the set of measurable functions $h: \mathbb{R} \to [0,1]$ (leading to the total variation distance) then $\mathcal{F}^{(1)}$ is contained in the collection of {{differentiable}} functions such that $\| f \| \le \sqrt{\pi/2}$ and
$\|f'\| \le 2$; see for instance~\cite{NP11}.
For the general normal distribution ${\mathcal{N}}(\mu,\sigma^2)$ the operator \eqref{eq:12} gives \begin{eqnarray} \label{normalop}
\mathcal{T}_{{\mathcal{N}}(\mu,\sigma^2)} (f)(x) = f'(x) - \frac{x-\mu}{\sigma^2} f(x).
\end{eqnarray}
The standardization $f(x) = \sigma^2 g'( x)$ yields the classical Ornstein-Uhlenbeck Stein operator $
{\mathcal{A}} g (x) = \sigma^2 g''(x) - (x - \mu) g'(x), $
see for example~\cite{ChGoSh11}.}} \end{example}
We call the passage from a parsimonious operator $\mathcal{T}_P$ (such as \eqref{eq:18}) acting on the implicit class $\mathcal{F}(P)$ to a specific operator $\mathcal{A}_P$ (such as \eqref{eq:25}) acting on a generic class $\mathcal{F}(\mathcal{A}_P)$ a \emph{standardization} of $(\mathcal{T}_P, \mathcal{F}(P))$. Given $P$ there are infinitely many different possible standardizations.
\subsection{The Stein transfer principle} \label{sec:steins-transf-princ}
Suppose that we aim to assess the discrepancy between the laws of two random quantities $X$ {{with distribution $P$}} and $W$ {{with distribution $Q$}}, say, in terms of some probability distance of the form \begin{equation}\label{eq:2} d_{\mathcal{H}}(P, Q) =
d_{\mathcal{H}}(X, W) = \sup_{h\in \mathcal{H}} | \mathbb{E}[ h(W)] - \mathbb{E} [h(X)]|, \end{equation} for $\mathcal{H}$ some measure{{-determining}} class; many common distances can be written in the form \eqref{eq:2}, including the Kolmogorov distance (with $\mathcal{H}$ the collection of indicators of half lines), the Total Variation distance (with $\mathcal{H}$ the collection of indicators of Borel sets) and the $1$-Wasserstein distance (see \eqref{dist}). {{Here writing $d_{\mathcal{H}}(X, W) $ is a shorthand for \eqref{eq:2}: this distance is not random.}}
Let $P$ have Stein pair $(\mathcal{T}_P, \mathcal{F}(P))$ and consider a standardization $(\mathcal{A}_P, \mathcal{F}(\mathcal{A}_P))$ as described in Section \ref{sec:stand-oper}. The first key idea in Stein's method is to relate the test functions $h$ of interest to a function $f = f_h \in \mathcal{F}(\mathcal{A}_P)$ through the so-called {\it{Stein equation}} \begin{equation} \label{Steinequation} h(x) - \mathbb{E} [h(X)] = \mathcal{A}_P f(x) , \quad x \in \mathcal{I}, \end{equation} so that, for $f_h$ solving \eqref{Steinequation}, we get $h(W) - \mathbb{E} [h(X)] = \mathcal{A}_P f_h(W)$ and, in particular, \begin{equation} \label{eq:start}
\sup_{h \in \mathcal{H}} | \mathbb{E} [ h(W)]- \mathbb{E} [h(X)] | =
\sup_{f \in \mathcal{F}^{(1)}} | \mathbb{E} \left[ \mathcal{A}_P f(W) \right] | \end{equation} where
$ \mathcal{F}^{(1)} =\mathcal{F}^{(1)} (\mathcal{A}_P, \mathcal{H})= \left\{ f \in \mathcal{F}(\mathcal{A}_P) \, | \,
\mathcal{A}_Pf = h - \mathbb{E} [h(X)] \mbox{ for some } h \in \mathcal{H} \right\}. $ The first step in Stein's method thus consists in some form of transfer principle whereby one transforms the problem of bounding the distance $d_{\mathcal{H}}(P, Q)$ into that of bounding the expectations of the operators $\mathcal{A}_P$ over a specific class of functions.
\begin{example} For the standard normal distribution, the operators \eqref{eq:12} and \eqref{normalop} give $ \mathcal{T}_{{\mathcal{N}}(0,1)} (f)(x) = f'(x) - xf(x)$.
Bounding expressions of the form $\left|\mathbb{E} \left[ f'(W) - Wf(W) \right]\right|$ as occurring in the r.h.s. of \eqref{eq:start} is a potent starting point for Gaussian approximation problems. Prominent examples include
$W = \sum_i \xi_i$ a standardized sum of weakly dependent variables, and $W = F(X)$ a functional of a Gaussian process; see e.g.~\cite{ChGoSh11,Ro11,NP11} for an overview.
\end{example}
In general, the success of Stein's method for a particular target relies on the positive combination of three factors : \begin{enumerate}[(i)] \item \label{item:1} the functions in $\mathcal{F}^{(1)}$ need to have ``good'' properties (e.g.~be bounded with bounded derivatives), \item \label{item:2} the operator $\mathcal{A}_P$ needs to be amenable to computations (e.g.~its expression should only involve polynomial functions), \item \label{item:3} there must be some ``handle'' on the expressions
$\mathbb{E} \left[ \mathcal{A}_Pf(W) \right]$ (e.g.~{{allowing for Taylor-type expansions or the application of couplings)}}.
\end{enumerate} Conditions \eqref{item:1} to \eqref{item:3} are satisfied
for a great variety of target distributions (including the exponential, chi-squared, gamma, semi-circle, variance gamma and many others, see {{for example}} https://sites.google.com/site/yvikswan/about-stein-s-method for an up-to-date list).
\subsection{The Stein kernel} \label{sec:score-function-stein} One of the many keys to a successful application of Stein's method for a given target distribution {{$P$}} lies in the properties of $P$'s
\emph{Stein kernel}. We now review some properties of this quantity which will play a central role in our analysis; {{see \cite{LS12a} or \cite{ley2014approximate} for details.}}
\begin{definition} Let $P$ be a probability distribution with mean $\mu$, {{and let $X \sim P$}}. A \emph{Stein kernel} of $P$ is a random variable $\tau_P(X)$
such that
\begin{equation}
\label{eq:steqdef}
\mathbb{E} \left[ \tau_P(X) \varphi'(X) \right] = \mathbb{E} \left[ (X-\mu) \varphi(X) \right]
\end{equation} for all {{differentiable}} $\varphi: \mathbb{R} \to \mathbb{R}$ for which the expectation {{$ \mathbb{E} \left[ (X-\mu) \varphi(X) \right]$}} exists. \end{definition}
The function
$ x \mapsto \tau_P(x) = \mathbb{E} \left[ \tau_P(X) \, | \, X = x \right] $
is a \emph{Stein kernel (function)} of $P$. If $P$ has interval
support with closure $[a, b]$ then, letting $Id$ denote
the identity function, it is not hard to see
that
\begin{equation*}
\tau_P(x)= \mathcal{T}_P^{-1}(\mu-Id)(x) = \frac{1}{p(x)} \int_a^x(\mu -y)p(y)dy
\end{equation*}
is the unique Stein kernel of $P$. Moreover the following properties of the
Stein kernel are immediate consequences of its definition (for the second, take $\varphi = Id$ in \eqref{eq:steqdef}):
\begin{eqnarray}\label{sec:score-function-stein-1} \mbox{for all } x \in \mathbb{R} \mbox{ we have that } \tau_P(x)\geq 0
\mbox{ and }\mathbb{E} \left[ \tau_P(X) \right] = \mathrm{Var}[X].
\end{eqnarray}
The Stein kernels for a wide variety of
classical distributions (all members of the Pearson family, as it
turns out) bear agreeable expressions; see~\cite[Table 1]{EdVi12},
\cite{nourdin2013entropy,nourdin2013integration} or the
forthcoming \cite{DRS14} for illustrations.
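For a simple worked illustration (the exponential case is spelled out here for concreteness), let $P$ be the exponential distribution with rate $\lambda>0$, so that $p(x) = \lambda e^{-\lambda x}$ on $(0,\infty)$ and $\mu = 1/\lambda$. Then
\begin{equation*}
\tau_P(x) = \frac{1}{\lambda e^{-\lambda x}} \int_0^x \left( \frac{1}{\lambda} - y \right) \lambda e^{-\lambda y} \, dy = \frac{x e^{-\lambda x}}{\lambda e^{-\lambda x}} = \frac{x}{\lambda},
\end{equation*}
in line with \eqref{sec:score-function-stein-1}: $\tau_P(x) \ge 0$ on the support and $\mathbb{E} \left[ \tau_P(X) \right] = \mathbb{E}[X]/\lambda = 1/\lambda^2 = \mathrm{Var}[X]$.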
\subsection{Stein factors} \label{sec:stein-factors}
Let $P$ have a continuous density $p$ with mean $\mu$ and support $\mathcal{I}$ such that the closure of $\mathcal{I}$ is the interval $[a,b]$ (possibly with infinite endpoints). Let $(\mathcal{T}_P, \mathcal{F}(P))$ be the Stein pair of $P$ and suppose that $P$ admits a Stein kernel $\tau_P(x)$, as described in Subsection \ref{sec:score-function-stein}. We introduce the standardized Stein pair $(\mathcal{A}_P, \mathcal{F}(\mathcal{A}_P))$ with \begin{equation}
\label{eq:29}
\mathcal{A}_Pf(x) = \mathcal{T}_P(\tau_P f)(x) = \tau_P (x) f'(x) +
(\mu - x) f(x), \quad x \in \mathcal{I}, \end{equation} and \begin{align*}
\mathcal{F}(\mathcal{A}_P) & = \left\{f : \mathbb{R} \to \mathbb{R} \mbox{ absolutely continuous
such that
} \right. \\
& \qquad \left. \lim_{x\to a} f(x)
\int_a^{x}(\mu-u) p(u) du = \lim_{x\to b} f(x)
\int_x^{b}(\mu-u) p(u) du = 0
\right.\\ & \qquad \left.\mbox{and } \left( f(x)
\int_a^{x}(\mu-u) p(u) du \right)' \in L^1(dx)\right\}. \end{align*} Our next lemma shows that, whenever it applies, the standardization \eqref{eq:29} satisfies requirement \eqref{item:1} from the end of Section \ref{sec:steins-transf-princ}.
\begin{lemma}\label{sec:about-constants-1}
Let $\mathcal{H} = \mathrm{Lip}(1)$ be the collection of Lipschitz
functions $h : \mathbb{R} \to \mathbb{R}$ with Lipschitz constant 1 and let
$\mathcal{F}^{(1)}$ be the collection of
$f \in \mathcal{F}(\mathcal{A}_P)$ such that
$\mathcal{A}_Pf = h - \mathbb{E} [h(X)]$ for some $h \in \mathcal{H}$. Then
$\mathcal{F}^{(1)}$ is contained in the collection of functions $f$
such that $\| f \|_{\infty} \le 1$.
\end{lemma} Lemma~\ref{sec:about-constants-1} is strongly related to \cite[Corollary 2.16]{Do12}, adapted to our framework. For the sake of completeness we present a proof of (a generalization of) this result at the end of the present paper. The key to our approach lies in the fact that the bound in Lemma~\ref{sec:about-constants-1} \emph{does
not depend} on the standardization of the target $P$; it is in particular independent of the mean and variance of $X \sim P$ or of any normalizing constant that might appear in the expression of the density of $P$.
\section{Comparing univariate
continuous densities}\label{sec:result}
For $i=1,2,$ let $P_i$ be a probability distribution with an absolutely continuous density $p_i(\cdot)$ having support
$\mathcal{I}_i$ with closure $\bar \mathcal{I}_i = [a_i, b_i]$, for some $-\infty \le a_i < b_i \le +\infty$. Suppose that $\mathcal{I}_2\subset\mathcal{I}_1$ and define $\pi_0$ through
\begin{equation}
\label{eq:27}
p_2 = \pi_0 p_1.
\end{equation}
Associate with both {{distributions}} the Stein pairs
$(\mathcal{T}_i, \mathcal{F}_i)$ for $i=1, 2$, as well as the
resulting construction from the previous section.
The product
structure \eqref{eq:27} implies a {{key}} connection between
$\mathcal{T}_1$ and $\mathcal{T}_2$, namely \begin{equation}\label{fund}
\mathcal{T}_2(f)=\mathcal{T}_1(f)+ f\frac{\pi_0'}{\pi_0} =\mathcal{T}_1(f)+ f(\log \pi_0)' \end{equation} for all $f\in\mathcal{F}_1\cap\mathcal{F}_2$.
\subsection{Bounds on the Wasserstein distance between univariate
continuous densities}\label{sec:wassersteinbounds}
Our {{main}}
objective in this section is to provide computable and meaningful bounds on the Wasserstein distance
$ d_{\mathcal{W}}(P_1, P_2)${{, defined in \eqref{dist},}} in terms of $\pi_0$ {{and $P_1$}}, under the product structure \eqref{eq:27}.
\begin{theorem}\label{maintheo}
{{For $i=1,2,$ let $P_i$ be a probability distribution with an absolutely continuous density $p_i$ having support
$\mathcal{I}_i$ with closure $\bar \mathcal{I}_i = [a_i, b_i]$, for some $-\infty \le a_i < b_i \le +\infty$; suppose that $\mathcal{I}_2\subset\mathcal{I}_1$ and let {{$X_i \sim P_i$ have finite means $\mu_i$ for $i=1, 2$}}. Assume that $\pi_0 = \frac{p_2}{p_1}$, defined on $\mathcal{I}_2$, is
differentiable on $\mathcal{I}_2$, satisfies
$ \mathbb{E} | (X_1 - \mu_1) \pi_0( X_1) | < \infty$ and }} \begin{align}
\label{eq:6} & \left( \pi_0(x) \int_{a_1}^x(h(y)-\mathbb{E}[h(X_1)])p_1(y)dy \right)' \in
L^1(dx)\\ & \lim_{x \to a_2, b_2} \pi_0(x) \int_{a_1}^x(h(y)-\mathbb{E}[h(X_1)])p_1(y)dy
= 0 \label{eq:9} \end{align} for all $h \in \mathcal{H}$, the set of Lipschitz-1 functions on $\mathbb{R}$. Then
\begin{equation} \label{eq:10}
\left|\mathbb{E} \left[ \pi_0'(X_1) \tau_1(X_1) \right]\right| \le d_{\mathcal{W}} (P_1, P_2) \le
\mathbb{E} \left[ \left| \pi_0'(X_1) \right| \tau_1(X_1) \right] \end{equation} where $\tau_1$ is the Stein kernel of $P_1$. \end{theorem} \begin{proof}
We first prove the lower bound. {{Let $X_2 \sim P_2$.}} Start by noting that
$d_{\mathcal{W}} (P_1, P_2) \ge |\mathbb{E} [X_2] - \mathbb{E} [X_1]|$ because
$Id\in{\rm Lip}(1)$. {{With \eqref{eq:27} we get}} that
\begin{align}
\mathbb{E}[X_2] - \mathbb{E}[X_1] & = \mathbb{E} [X_1\pi_0(X_1) ] - \mu_1 \nonumber\\
& = \mathbb{E} \left[ (X_1-\mu_1)\pi_0(X_1) \right]\nonumber\\
& = \mathbb{E} \left[ \tau_1(X_1) \pi_0'(X_1) \right] \label{eq:43}
\end{align}
where we used the fact that $\mathbb{E} \left[ \pi_0(X_1) \right] = 1$ and the definition \eqref{eq:steqdef} of
$\tau_1(X_1)$ in the last line.
Next we prove the upper bound. By the definition \eqref{eq:1},
$ f_h=\mathcal{T}_1^{-1}(h-\mathbb{E}[h(X_1)]) \in \mathcal{F}_1$. On the
other hand, Conditions \eqref{eq:6} and \eqref{eq:9} guarantee that
$f_h \in \mathcal{F}_2$ for all $h$ because
\begin{equation*}
p_2 f_h = \pi_0(x) \int_{a_1}^x(h(y)-\mathbb{E}[h(X_1)])p_1(y)dy
\end{equation*}
is necessarily absolutely continuous. We conclude that all functions $f_h=\mathcal{T}_1^{-1}(h-\mathbb{E}[h(X_1)])$ belong to the intersection $\mathcal{F}_1\cap\mathcal{F}_2$. Hence \begin{eqnarray} \mathbb{E}[h(X_2)]-\mathbb{E}[h(X_1)]&=&\mathbb{E}[\mathcal{T}_1(f_h)(X_2)]\nonumber\\ &=&\mathbb{E}[\mathcal{T}_1(f_h)(X_2)]-\mathbb{E}[\mathcal{T}_2(f_h)(X_2)]\label{1}\\ &=&-\mathbb{E}[f_h(X_2)(\log \pi_0)'(X_2)].\nonumber \end{eqnarray} Equality~\eqref{1} follows from the assumption that $f_h \in \mathcal{F}_2$ so that $\mathcal{T}_2f_h$ cancels when integrated with respect to $p_2$, whereas the last equality follows from Equation \eqref{fund}. Now we define $g_h=f_h/\tau_1$ and recall that $\tau_1 \ge 0$ to get $$
\left|\mathbb{E}[h(X_2)]-\mathbb{E}[h(X_1)]\right|=\left|\mathbb{E}\left[g_h(X_2)(\log
\pi_0)'(X_2)\tau_1(X_2)\right]\right|\leq
||g_h||_\infty\mathbb{E}\left[\left|(\log
\pi_0)'(X_2)\right|\tau_1(X_2)\right]. $$ It follows from Lemma~\ref{sec:about-constants-1} that
$||g_h||_\infty\leq1$ for all $h\in{\rm Lip}(1)$, yielding \begin{align*}
d_{\mathcal{W}}(P_1, P_2) & \le \mathbb{E}\left[\left|(\log
\pi_0)'(X_2)\right|\tau_1(X_2)\right] = \mathbb{E}\left[\left|\pi_0'(X_1)\right|\tau_1(X_1)\right], \end{align*} the last equality again following from \eqref{eq:27}. \end{proof}
Assumptions \eqref{eq:6} and \eqref{eq:9} are crucial. While \eqref{eq:9} is in a sense innocuous (because $\mathcal{I}_2 \subset \mathcal{I}_1$), \eqref{eq:6} is quite stringent and can be hard to verify in practice.
In Section \ref{sec:stein-factors-1} we prove that the following explicit and easy-to-verify conditions on $p_1$, $p_2$ and $\pi_0$ are sufficient for these assumptions, and hence for Theorem \ref{maintheo}, to hold.
\begin{proposition} \label{prop:easycond} We use the notations of
Theorem \ref{maintheo}. Suppose that $\pi_0$, $p_1$ and $p_2$ are
differentiable over their support and that their derivatives are
integrable. Suppose that
\begin{equation*}
\lim_{x \to a_2, b_2} \pi_0(x) p_1(x) \tau_1(x) = \lim_{x \to a_2, b_2} p_2(x) \tau_1(x) = 0.
\end{equation*} Let $\rho_1 = p_1'/p_1$ and suppose also that \begin{equation*}
\pi_0' p_1 \tau_1 = p_2' \tau_{1} - \rho_1 \tau_1 p_2 \in L^1(dx). \end{equation*}
Then Theorem \ref{maintheo} applies. \end{proposition}
\begin{example}[Distance between
Gaussians]\label{sec:dist-betw-prod-1} {To compare two Gaussian
distributions, ${\cal{N}}(\mu_1, \sigma_1^2)$ and
${\cal{N}}(\mu_2, \sigma_2^2)$, order them so that $\sigma_2^2 \le
\sigma_1^2$, and if $\sigma_1 = \sigma_2$ then assume that $\mu_1
> \mu_2$.
If $P_1$ is ${\cal{N}}(\mu_1, \sigma_1^2)$ then
$\tau_1(x) = \sigma_1^2$ is constant (see
e.g.~\cite{Stein1986}). With $P_2$ being
${\cal{N}}(\mu_2, \sigma_2^2)$, all conditions in Proposition
\ref{prop:easycond} are satisfied. Applying Theorem \ref{maintheo} and noting that $ (\log \pi_0(x))' = x \left( \frac{1}{\sigma_1^2} -
\frac{1}{\sigma_2^2} \right) + \left( \frac{\mu_2}{\sigma_2^2} -
\frac{\mu_1}{\sigma_1^2} \right)$, we obtain that \begin{eqnarray*}
\label{eq:26}
| \mu_2 - \mu_1 |\le d_{\mathcal{W}}(P_1, P_2) &\le & {{\sigma_1^2 \mathbb{E} \left| X_2 \left( \frac{1}{\sigma_1^2} -
\frac{1}{\sigma_2^2} \right) + \left( \frac{\mu_2}{\sigma_2^2} -
\frac{\mu_1}{\sigma_1^2} \right) \right| }} \nonumber \\
&\le& \left|
\frac{\sigma_1^2}{\sigma_2^2} \mu_2- \mu_1 \right| + \left(
\frac{\sigma_1^2}{\sigma_2^2} -1 \right) \mathbb{E} \left|X_2 \right|.
\end{eqnarray*} In the special case $\mu_2 = \mu_1 = 0$ we compute $\mathbb{E} \left|X_2
\right| = \sqrt{2/\pi} \sigma_2$ to get \begin{equation*}
d_{\mathcal{W}}(P_1, P_2) \le \sqrt{\frac{2}{\pi}} \frac{\sigma_1^2-\sigma_2^2}{\sigma_2}, \end{equation*} to be compared with a similar result in \cite[Proposition 3.6.1]{NP11}.
If $\mu_2 \neq 0$ then the general expression for
$ \mathbb{E} \left|X_2 \right|$ is not agreeable, which is why we suggest using the inequality
$\mathbb{E}|X_2|\leq \left(\mathbb{E}[X_2^2]\right)^{1/2}=\sqrt{\sigma_2^2+\mu_2^2}$, leading to $$
| \mu_2 - \mu_1 |\le d_{\mathcal{W}}(P_1, P_2) \le \left|
\frac{\sigma_1^2}{\sigma_2^2} \mu_2- \mu_1 \right| + \left(
\frac{\sigma_1^2}{\sigma_2^2} -1 \right) \sqrt{\sigma_2^2+\mu_2^2}. $$
With $\mu_1=\mu_2=\mu$, the upper bound becomes $(|\mu|+\sqrt{\sigma_2^2+\mu^2})\left(
\frac{\sigma_1^2}{\sigma_2^2} -1 \right)$. We have not found a
similar result in the literature (outside of the centered case) and
{{computing the
Wasserstein distance directly using \eqref{wasscalc} is prohibitive
as the cdfs are not available in closed form.}} } \end{example}
\begin{remark}
Our upper bounds are not restricted to the Wasserstein distance. Indeed,
mimicking large parts of the proof of Theorem~\ref{maintheo}, we
obtain the general bound \begin{equation}
\label{eq:20}
d_{\mathcal{H}} (P_1, P_2) \le \kappa_{\mathcal{H}} {{\mathbb{E}\left[\left|\pi_0'(X_1)\right|\tau_1(X_1)\right]}} \end{equation} with
$\kappa_{\mathcal{H}}=\sup_{h\in\mathcal{H}}||\mathcal{T}_1^{-1}(h-\mathbb{E}_1h)/\tau_1||_\infty$ and $\mathcal{H}$ a measure-determining class of functions (the Kolmogorov distance corresponds to the class of indicators of half-lines, the Total Variation distance to the indicators of Borel sets). The usefulness of \eqref{eq:20} hinges on the availability of bounds, similar to Lemma~\ref{sec:about-constants-1}, on the more general constant $\kappa_{\mathcal{H}}$. \end{remark} Unravelling the lower bound and using~\eqref{sec:score-function-stein-1} in the upper bound of \eqref{eq:10} we also obtain the following weaker but perhaps more transparent result.
\begin{corollary}\label{varcor}
Under the same assumptions as {{for Theorem \ref{maintheo}, with $X_2 \sim P_2$, }} \begin{equation} \label{eq:var}
\left| \mathbb{E} [X_2]-\mathbb{E} [X_1] \right|\le d_{\mathcal{W}} (P_1, P_2) \le \| \pi_0'\|_{\infty} \mathrm{Var} [X_1]. \end{equation} \end{corollary}
We shall use Corollary \ref{varcor} in Section \ref{sec:Bayes}. We stress the fact that there is \emph{no} normalizing constant appearing in the bounds \eqref{eq:10} and \eqref{eq:var}. Also, the absence of the Stein kernel in \eqref{eq:var} is in some cases an advantage because the Stein kernel is not always easy to compute.
There are many ways of expressing the Wasserstein distance \eqref{dist} between two random variables. In general, if $P_1$ has cumulative distribution function (cdf) $F_{P_1}$ and if $P_2$ has
cdf $F_{P_2}$ then
\begin{equation} \label{wasscalc} d_{\mathcal{W}}(P_1, P_2) = \int_\mathbb{R} |
F_{P_1}(x) - F_{P_2}(x) | dx = \int_0^1 | F_{P_1}^{-1}(u) - F_{P_2}^{-1} (u) | du = \inf \mathbb{E} \left| \xi_1-\xi_2 \right| \end{equation} where the infimum in this last expression is taken over all possible couplings $(\xi_1, \xi_2)$ of $(P_1, P_2)$ (see e.g.~\cite{vallender1974calculation, villani2008optimal}). Often exact computable expressions of Wasserstein distances tend to be difficult to obtain. The similarity between the upper and lower bounds in \eqref{eq:10} encourages us to formulate the next result. \begin{corollary}\label{theo2} If $X_i \sim P_i, i=1, 2,$ {{are as in Theorem \ref{maintheo}}} and if $\pi_0$ is monotone increasing or decreasing, then \begin{equation}
\label{eq:15}
d_{\mathcal{W}}(P_1, P_2) = \left| \mathbb{E}[X_2]- \mathbb{E} [X_1] \right|
= \mathbb{E} \left[ \left|\pi_0'(X_1) \right|\tau_1(X_1) \right] {{= \mathbb{E}\left[\left|(\log
\pi_0)'(X_2)\right|\tau_1(X_2)\right]}} . \end{equation}
\end{corollary}
Note how the second expression in \eqref{eq:15} can be immediately
obtained from the first by applying the same argument as in
\eqref{eq:43}. Now while the second expression in \eqref{eq:15} is
new, the first is in fact not. Indeed the condition that $\pi_0$ be
monotone in Corollary~\ref{theo2} is equivalent to requiring that
$X_1$ and $X_2$ be comparable in the likelihood ratio order, i.e.\
$X_1 \ge_{LR} X_2$ or $X_1 \le_{LR} X_2$ (see
e.g.~\cite[Section 9.4]{ross1996stochastic} or
Example~\ref{sec:exo1}). If $ X_1 \le_{LR} X_2$
then $F_{P_2} \le F_{P_1}$ (see for example \cite[Theorem
1.C.4]{shaked2007stochastic}), so that
$ d_{\mathcal{W}}(X_1, X_2) = \int_\mathbb{R} ( F_{P_1}(x) - F_{P_2}(x) ) dx = \mathbb{E}
[X_2]-\mathbb{E}[X_1]$.
\begin{example}[{Distance between Azzalini-type skew-symmetric
distributions}\label{sec:exo3}]
Consider a symmetric density $p_1$ on the real line. The so-called \emph{Azzalini-type skew-symmetric distributions} are constructed from {{such a pdf}} $p_1$ by considering the densities $p_2(x)= 2p_1(x) G(\lambda x)$ with $G$ the {{cdf}} of a univariate symmetric distribution {{with pdf}} $g$ and $\lambda\in\mathbb{R}$ a parameter (called skewness parameter); see \cite{hallin2014skew} for an overview of these skewing mechanisms and of their applications. The founding example is Azzalini \cite{Azza85}'s \emph{skew-normal density} $2\phi(x)\Phi(\lambda x)$ (denoted $\mathcal{SN}(0, 1, \lambda)$), where $\phi$ and $\Phi$ respectively stand for the standard normal density and cumulative distribution function.
Corollary~\ref{theo2} provides, under mild conditions on $g$ and $G$, an exact expression for the Wasserstein distance between $P_1$ with pdf $p_1$ and its skew-symmetric counterpart $P_2$ with pdf $p_2$
since in this case $(\log\pi_0)'(x)=\lambda g(\lambda x)/G(\lambda x)$ which is positive or negative depending on the sign of $\lambda$ {{as both $g$ and $G$ are positive on the support of $P_2$}}. Thus we have {{$\pi_0'(x) = 2 \lambda g(\lambda x)$ and }} \begin{equation*}
d_{\mathcal{W}}(p_1, p_2)
=2 |\lambda| \mathbb{E}
\left[\tau_1(X_1)g(\lambda X_1) \right]. \end{equation*} Perhaps the most interesting instance of the above is the comparison of the {{standard}} normal with the skew-normal {{(all conditions in
Proposition~\ref{prop:easycond} are satisfied in this case):}} $$ d_{\mathcal{W}}\left(\mathcal{N}(0,1),\mathcal{SN}(0,1,\lambda)\right)=\sqrt{\frac{2}{\pi}}
\frac{|\lambda|}{\sqrt{1+\lambda^2}} $$ (recall that $\tau_1(x) = 1$). Letting $\lambda\rightarrow\infty$ we obtain that the distance between the half-normal with density $2\phi(x)\mathcal{I}_{x\geq0}$ and the normal is $\sqrt{{2}/{\pi}}$, see also \cite{dobler2013stein}. As in the previous example, such results are not easy to obtain directly from \eqref{wasscalc}. \end{example}
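For completeness, the skew-normal identity in the last example follows from a one-line Gaussian integral: with $\tau_1 \equiv 1$ and $g = \phi$,
\begin{equation*}
2 |\lambda| \, \mathbb{E} \left[ \phi(\lambda X_1) \right] = \frac{2|\lambda|}{2\pi} \int_{\mathbb{R}} e^{-(1+\lambda^2)x^2/2} \, dx = \frac{2|\lambda|}{2\pi} \sqrt{\frac{2\pi}{1+\lambda^2}} = \sqrt{\frac{2}{\pi}} \frac{|\lambda|}{\sqrt{1+\lambda^2}}.
\end{equation*}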
{{Likelihood ratio orderings have a natural role in comparing parametric densities.}} Let $p(x; \theta)$ be a parametric family of densities with parameter of interest $\theta \in \mathbb{R}$ (see e.g. \cite{LS13p} for discussion and references). Set $p_1(\cdot) = p(\cdot; \theta_1)$ and $p_2(\cdot) = p(\cdot; \theta_2)$. The family $p(x; \theta)$ is said to have monotone \emph{likelihood ratio} if $x \mapsto p(x; \theta_2)/p(x; \theta_1)$ is nondecreasing as soon as $\theta_2>\theta_1$ (and nonincreasing when $\theta_2<\theta_1$). If $P_1$ has pdf $p_1$ and if $P_2$ has pdf $p_2$ then, under monotone likelihood ratio, $P_1$ and $P_2$ are comparable in the likelihood ratio order (for instance $P_2 \le_{LR} P_1$ when $\theta_2 < \theta_1$). The property of monotone likelihood ratio is intrinsically linked with the validity of one-sided tests in statistics, see \cite{KR1956}.
\begin{example}[{Distances within the exponential family}\label{sec:exo1}]
A noteworthy class of parametric distributions which satisfy the property of monotone likelihood ratio is the \emph{canonical regular exponential family} $ p(x; \theta) = \ell(x) e^{\theta x - A(\theta)}$ for some scalar functions $\ell$ and $A$, with the range of the distribution being independent of $\theta$, see for example \cite[page 639]{KR1956}. If $\theta_1 > \theta_2$ then
$(\log\pi_0)'(x)=\left( \log \frac{p_2(x)}{p_1(x)} \right)' = \theta_2-\theta_1<0$
for all $x \in \mathbb{R}$ and thus from
\eqref{eq:15} we find with $X_2 \sim P_2$ that $ d_{\mathcal{W}}(P_1, P_2) = |\theta_2-\theta_1| \mathbb{E} \left[
\tau_1(X_2) \right]$ under mild and easy-to-check conditions on $P_1$ and $P_2$. \end{example}
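A concrete instance (worked out here only for illustration) is given by the exponential distributions: take $\ell(x) = \mathcal{I}_{x > 0}$, $\theta = -\lambda < 0$ and $A(\theta) = -\log(-\theta)$, so that $p(x; \theta)$ is the exponential density with rate $\lambda$. For rates $\lambda_1 < \lambda_2$ (that is, $\theta_1 > \theta_2$) the Stein kernel of $P_1$ is $\tau_1(x) = x/\lambda_1$, the conditions of Proposition~\ref{prop:easycond} are readily verified, and
\begin{equation*}
d_{\mathcal{W}}(P_1, P_2) = (\lambda_2 - \lambda_1) \, \frac{\mathbb{E}[X_2]}{\lambda_1} = \frac{\lambda_2 - \lambda_1}{\lambda_1 \lambda_2} = \frac{1}{\lambda_1} - \frac{1}{\lambda_2} = \mathbb{E}[X_1] - \mathbb{E}[X_2],
\end{equation*}
in agreement with the first equality in \eqref{eq:15}.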
\begin{example}[{Distances between ``tilted'' distributions} \label{sec:exo4}]
Fix a density $p_1$ with mean $\mu_1$ and consider, among all other
densities $g$ with same support and fixed but different mean
$\mu_2\neq \mu_1$, the density that minimizes the Kullback-Leibler divergence \begin{equation*}
KL(g|| p_1) = \int g(x) \log \left( \frac{g(x)}{p_1(x)} \right)dx. \end{equation*} The Euler-Lagrange equation for the constrained variational problem is $ \log g(x) = \log p_1(x) + \lambda_1x + \lambda_2$ solved by \begin{equation}\label{eq:17}
p_2(x) = p_1(x) \frac{e^{\lambda_1x}}{M_1(\lambda_1)} \end{equation} with $M_1(t) = \mathbb{E} [e^{tX_1}]$ the moment generating function of $X_1\sim p_1$ and $\lambda_1$ a solution to
\begin{align*}
\frac{d}{dt}(\log M_1(t))_{t=\lambda_1} = \mu_2
\end{align*}
in order to guarantee $\mathbb{E}[X_2] = \mu_2$. We call \eqref{eq:17} a
``tilted'' version of $p_1$ (following the classical notion of
exponential tilting, see e.g.~\cite{efron1981nonparametric}). It is easy to compute \begin{align*}
KL(p_2 \, || \, p_1) = \lambda_1\mu_2 - \log M_1(\lambda_1). \end{align*} Setting $\pi_0(x) = {e^{\lambda_1x}}/{M_1(\lambda_1)}$ we have $(\log \pi_0)'(x) = \lambda_1$ and \begin{equation}
\label{eq:19}
d_{\mathcal{W}}(p_1, p_2) = |\lambda_1| \mathbb{E} \left[ \tau_1(X_2)
\right] \end{equation} provided that the appropriate conditions are satisfied.
For the sake of illustration, take $p_1$ the Gamma distribution on the positive half line with density $p_1(x;\lambda,k) = \frac{1}{\Gamma(k)} e^{-x/\lambda}x^{k-1}\lambda^{-k} $. Then $M_1(t)=(1-\lambda t)^{-k}$ {{for $t < \frac{1}{\lambda}$}} and $\lambda_1 = \frac{1}{\lambda}- \frac{k}{\mu_2}$. Moreover $ \tau_1(x) = \lambda x$. It is thus easy to check in this case that all conditions in Proposition~\ref{prop:easycond} are satisfied. This allows us to deduce from \eqref{eq:19} that $$
d_{\mathcal{W}}(p_1, p_2) = |\lambda_1| \, \mathbb{E} \left[ \tau_1(X_2) \right] = \left| \frac{1}{\lambda} - \frac{k}{\mu_2} \right| \lambda \mu_2 = |\mu_2 - \lambda k| $$ (using that $\mathbb{E}[X_2] = \mu_2$ by construction, so that $\mathbb{E}[\tau_1(X_2)] = \lambda \mu_2$),
which nicely complements $KL(p_2||p_1) = \frac{\mu_2}{\lambda}-k + \log \left(
\frac{k\lambda}{\mu_2} \right)^k$ {{as an alternative comparison statistic}}. \end{example}
\section{On the influence of the prior in Bayesian statistics}
\label{sec:Bayes}
We now tackle the problem that motivated Theorem~\ref{maintheo}: assessing the impact of the choice of the prior
distribution on the resulting posterior distribution in Bayesian
statistics. In all the examples we consider, the conditions in
Proposition~\ref{prop:easycond} are easy to verify explicitly.
We first fix the notation. {{Assume that the observation $x$ comes from a parametric model with pdf $f(x; \theta)$ with $\theta \in \Theta$; the function $f(x; \theta)$ is often called the {\it{likelihood}} or the {\it{sampling density}}. We turn this model into a pdf for $\theta$ through
$$p_1(\theta; x) = \kappa_1(x) f(x; \theta)$$
where $\kappa_1(x) = \left( \int f(x; \theta) d\theta \right)^{-1}$, and we assume that $0 < \kappa_1(x) < \infty.$
Let $P_1$ have pdf $p_1$ and call its Stein kernel $\tau_1$. Choose a possibly improper prior density $\pi_0(\theta)$, and let
$$ p_2(\theta; x) = \pi_0(\theta; x) p_1(\theta; x)$$ where
$$ \pi_0(\theta; x) = \kappa_2(x) \pi_0(\theta) \mbox{ such that } \int p_2(\theta; x) d\theta =1.$$
Then
$$ 1 = \int p_2(\theta; x) d\theta = \kappa_2(x) \int \pi_0(\theta) p_1(\theta; x) d\theta = \kappa_2(x) \mathbb{E}[ \pi_0(\Theta_1)],$$
where $\Theta_1$ has distribution $P_1$
}}
which gives an
expression for the normalizing constant. {{Let $P_2 = P_2(\cdot;x)$ be the probability distribution on $\Theta$ with pdf $p_2(\cdot; x) $. Then $P_2$ is the posterior distribution of $\theta$ under the prior $\pi_0$ and the data $x$;
moreover $P_1$ can be seen as the distribution of $\theta$ under a uniform prior and the data $x$.}}
Now we extract from \eqref{eq:10} of Theorem~\ref{maintheo} the first bounds on the impact of a prior on the posterior distribution:
\begin{equation}
\label{eq:24}
\frac{\left|\mathbb{E} \left[\tau_1(\Theta_1) \pi_0'(\Theta_1)
\right]\right|}{\mathbb{E} [\pi_0(\Theta_1)]}\leq
d_{\mathcal{W}}(P_2,P_1) \leq \frac{\mathbb{E}
\left[\tau_1(\Theta_1) \left|\pi_0'(\Theta_1) \right| \right]}{\mathbb{E} [\pi_0(\Theta_1)]}
\end{equation} which can also be rewritten as \begin{equation}
\label{eq:22}
\left| \mathbb{E} \left[ \Theta_2 \right] - \mathbb{E} \left[ \Theta_1 \right]
\right| = \left|\mathbb{E} \left[\tau_1(\Theta_2) \rho_0(\Theta_2) \right]\right|\leq
d_{\mathcal{W}}(P_2,P_1) \leq \mathbb{E}
\left[\tau_1(\Theta_2) \left|\rho_0(\Theta_2) \right| \right] \end{equation} with $\Theta_2 \sim P_2$ and \begin{equation*}
\rho_0(\theta) = \frac{\pi_0'(\theta)}{\pi_0(\theta)}, \end{equation*} the score function {{of $\pi_0(\theta; x)$ with respect to $\theta$, which does not depend on the data $x$.}}
As we shall see in the forthcoming sections {{which treat some classical examples in Bayesian statistics}}, (\ref{eq:22}) often turns out to be handier for computations than (\ref{eq:24}).
\subsection{A normal model} \label{sec:Bayes2}
Consider the simple setting where $x=(x_1, \ldots, x_n)$ is a random
sample from a ${\cal N}(\theta, \sigma^2)$ population, where the
scale $\sigma$ is known and the location $\theta$ is the parameter of
interest, and assume that the prior $\pi_0 (\theta) > 0 $ for all
$\theta \in \Theta$ is differentiable. The likelihood $f (x; \theta)$ of the normal model can be
factorized into \begin{align*}
f (x; \theta) &= (2 \pi \sigma^2)^{-\frac{n}{2}}
\exp\left\{- \frac{1}{2} \sum_{i=1}^n \frac{(x_i - \theta)^2}{\sigma^2}
\right\}\\
&= (2 \pi \sigma^2)^{-\frac{n}{2}}
\exp\left\{-\frac{1}{2\sigma^2}\left(\sum_{i=1}^nx_i^2-n\bar{x}^2\right)\right\}\exp\left\{-
\frac{1}{2}
\frac{(\theta-\bar{x})^2}{\sigma^2/n}\right\} \\
&\propto {{ \exp\left\{-
\frac{1}{2}
\frac{(\theta-\bar{x})^2}{\sigma^2/n}\right\}}} \mbox{ when viewed as a function of $\theta$} \end{align*} where $\bar{x}=\frac{1}{n}\sum_{i=1}^nx_i$. Thus, {{$P_1 = \mathcal{N}(\bar{x},\sigma^2/n)$.}}
Since $\tau_1$ is constant, equal to $\sigma^2/n$, the variance of $\Theta_1 \sim P_1$, the bound \eqref{eq:24} becomes \begin{align*}
\frac{\sigma^2}{n} \frac{\left| \mathbb{E} \left[ \pi_0'(\Theta_1) \right] \right|}{\mathbb{E} \left[ \pi_0(\Theta_1) \right]} \le
d_{\mathcal{W}}(P_2, {{P_1}} ) \leq
\frac{\sigma^2}{n}\frac{\mathbb{E} \left[\left| \pi_0'(\Theta_1) \right|\right]}{\mathbb{E} \left[ \pi_0(\Theta_1) \right]} \end{align*} and \eqref{eq:22} becomes \begin{align} \label{normalverygenprior}
|\mathbb{E}[\Theta_2] - \bar{x}| =
\frac{\sigma^2}{n} |\mathbb{E}[\rho_0(\Theta_2)]|
\leq d_\mathcal{W}(P_1, P_2) \le \frac{\sigma^2}{n}\mathbb{E}[|\rho_0(\Theta_2)|]. \end{align} Both inequalities are equalities in the case that $\pi_0$ is monotone.
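As a quick illustration of \eqref{normalverygenprior} (the prior below is chosen here only as an example), take the improper prior $\pi_0(\theta) \propto e^{c\theta}$ for a fixed $c \in \mathbb{R}$. Then $\rho_0(\theta) = c$, the prior is monotone, and both inequalities in \eqref{normalverygenprior} are equalities, so that $d_{\mathcal{W}}(P_1, P_2) = \frac{\sigma^2}{n} |c|$. This agrees with the direct computation: completing the square shows that $P_2 = {\cal N}(\bar{x} + c\sigma^2/n, \sigma^2/n)$, and the Wasserstein distance between two normal distributions with equal variances is the absolute difference of their means.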
\subsection{Normal prior and normal model}\label{exBayes2}
Consider the same setting as in the previous section with the additional information that the prior $\pi_0$ is the density of a ${\cal N}(\mu, \delta^2)$, where $\mu$ and $ \delta^2>0$ are known. Then the posterior $P_2$ is also normal, since
\begin{eqnarray*} p_2(\theta; {x}) &\propto& \exp\left\{- \frac{1}{2}
\left(\frac{(\theta-\bar{x})^2}{\sigma^2/n}+ \frac{(\theta -
\mu)^2}{\delta^2} \right) \right\}. \end{eqnarray*} Defining $a= \frac{n}{\sigma^2} +\frac{1}{\delta^2}$ and $b(x)= \frac{\bar{x}}{\sigma^2/n} + \frac{\mu}{\delta^2}$, we see that $P_2 = {\cal N}\left( \frac{b(x)}{a}, \frac{1}{a}\right)$.
Since the prior $\pi_0$ is not monotone, we cannot evaluate the Wasserstein distance between $P_1$ and $P_2$ exactly. However, we can write $\rho_0(\theta) = - ({{\theta}}-\mu)/\delta^2$ to obtain \begin{equation}
\label{eq:21}
\frac{\sigma^2}{n\delta^2 + \sigma^2} \left| \bar{x}- \mu \right|\le
d_{\mathcal{W}} (P_1, P_2) \le \frac{\sigma^2}{n\delta^2 +
\sigma^2} \left| \bar{x}- \mu \right| + \frac{\sqrt{2}}{\sqrt{\pi}} \frac{\sigma^3}{n \delta
\sqrt{\delta^2 n + \sigma^2}}. \end{equation} {To see this, the lower bound follows directly from simplifying the difference of the expectations,
$$ \left| \frac{b(x)}{a} - \bar{x}\right| = \frac{\sigma^2}{n\delta^2 + \sigma^2} \left| \bar{x}- \mu \right|.$$ For the upper bound, {{using $\rho_0(\theta) = - ({{\theta}}-\mu)/\delta^2$ in \eqref{normalverygenprior}}} gives
\begin{eqnarray*} d_{\mathcal{W}} (P_1, P_2) &\le & \frac{\sigma^2}{n}\mathbb{E}\left[ |\rho_0(\Theta_2) | \right] \\
&=& \frac{\sigma^2}{n\delta^2} \mathbb{E} [\left| \Theta_2 - \mu \right|] \\
&\le& \frac{\sigma^2}{n\delta^2} \left( \mathbb{E}\left[ \left| \Theta_2 - \frac{b(x)}{a} \right|\right] + \left| \frac{b(x)}{a} - \mu \right| \right) \\ &=& \frac{\sqrt{2}}{\sqrt{\pi}} \sqrt{\frac{1}{a}}
\frac{\sigma^2}{n\delta^2} + \frac{\sigma^2}{n\delta^2} \left|
\frac{b(x)}{a} - \mu \right| \\ &=& \frac{\sqrt{2}}{\sqrt{\pi}} \frac{\delta\sigma}{\sqrt{\delta^2 n +
\sigma^2}} \frac{\sigma^2}{n\delta^2} + \frac{\sigma^2}{n\delta^2}
\frac{\delta^2}{\delta^2 + \frac{\sigma^2}{n}} \left| \bar{x}- \mu
\right| \\ &=& \frac{\sqrt{2}}{\sqrt{\pi}} \frac{\sigma^3}{n \delta
\sqrt{\delta^2 n + \sigma^2}} + \frac{\sigma^2}{n\delta^2 +
\sigma^2} \left| \bar{x}- \mu \right|, \end{eqnarray*}
which yields the upper bound in \eqref{eq:21}.}
Inequality (\ref{eq:21}) provides a quite concrete and intuitive idea of the impact of the prior. First we see that, for
$n\rightarrow\infty$, the distance becomes zero, as is well known. The prior variance $\delta^2$ has a similar influence: as $\delta^2\rightarrow\infty$ the distance also vanishes, which is natural since the prior then tends towards an improper (flat) prior. {{If the data are unfavourable so that $|\bar{x} - \mu|$ is large compared to $n$, then the Wasserstein distance between the two posterior distributions will be large. Due to the law of large numbers, for large $n$ the probability that $|\bar{x} - \mu| > \delta^2 n + \sigma^2$ is small; but in contrast to such asymptotic considerations, the bound \eqref{eq:21} makes the influence of the data on the distance explicit.}} Further, the upper and lower bounds differ only by an $O(n^{-3/2})$ term, hence up to a precision of order $1/n$ we have an exact expression for the Wasserstein distance. Finally, the $O(1/n)$ term in both bounds perfectly reflects the {{intuition that}} the better the guess of the prior mean $\mu$ (w.r.t. the data), the smaller the influence of the prior.
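For a purely numerical illustration of \eqref{eq:21} (the values below are chosen for concreteness and do not come from data): with $\sigma = \delta = 1$, $n = 100$ and $|\bar{x} - \mu| = 0.2$, the bounds give approximately $0.0020 \le d_{\mathcal{W}}(P_1, P_2) \le 0.0028$, so that the distance is already pinned down to within a factor smaller than $1.5$.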
\subsection{The binomial model}\label{exBayes3}
As next example we treat the case of $n$ independent and identically distributed Bernoulli random variables with parameter of interest $\theta\in[0,1]$; alternatively, we may say we have a single observation $y \in \{0,1, \ldots , n\} $ from a Binomial distribution with known $n$ and parameter of interest $\theta$. The corresponding sampling density is $$f (y;\theta) = {n \choose y} \theta^y (1-\theta)^{n-y}$$ and $p_1(\theta; y) = \kappa_1 (y) \theta^y (1-\theta)^{n-y}$ is a Beta density with $$ \kappa_1(y) = \frac{1}{B(y+1, n-y+1)},$$ where $B(\cdot,\cdot)$ denotes the Beta function, {{and $P_1 = P_1(\cdot;y) = \mathrm{Beta}(y+1, n-y+1)$ is a Beta distribution.}}
Recall that, if $X \sim p(x) = \frac{1}{B(\alpha, \beta)} x^{\alpha-1}(1-x)^{\beta-1}$ then \begin{equation*}
\mathbb{E}[X] = \frac{\alpha}{\alpha +\beta}, \mbox{ } \mathbb{E}[X^2] = \frac{\alpha(1+\alpha)}{(\alpha+\beta)(\alpha+\beta+1)} \mbox{ and } {\rm Var} [X] = \frac{\alpha \beta}{(\alpha+\beta)^2(\alpha+\beta+1)} . \end{equation*} The Stein kernel is $ \tau (x) = \frac{x(1-x)}{\alpha+\beta}$ and in particular $\tau_1(\theta) = \frac{\theta(1-\theta)}{n+2}.$ Corollary \ref{varcor} gives that, for any differentiable prior $\pi_0$ on $\mathcal{I} = [0,1]$, \begin{equation*}\label{genbinbound}
d_{\mathcal{W}} (P_1, P_2) \le \sup_{0 \le \theta \le 1} | \pi_0' (\theta)| \frac{(y+1) ( n-y+1) }{(n+2)^2 (n+3)}. \end{equation*} For $y$ close to $\frac{n}{2}$, this bound is of order $n^{-1}$. In particular, for any $0 \le y \le n$, for a prior with bounded derivative, the Wasserstein distance converges to zero as $n \rightarrow \infty$ no matter which data are observed, but the data may affect the rate of convergence. Next we consider some choices of prior densities which may
not have bounded derivatives.
\subsubsection{Beta prior} For a Beta prior \begin{equation}
\label{eq:33}
\pi_0(\theta) \propto \theta^{\alpha-1}(1-\theta)^{\beta-1}, \end{equation} {the assumptions of Theorem \ref{maintheo} are satisfied but}
$\sup_{0 \le \theta \le 1} | \pi_0' (\theta)| $ is infinite unless both $\alpha$ and $\beta$ are greater than or equal to 2 (or $\alpha = \beta = 1$). Let $P_1 $ denote the ${\rm Beta}(y+1, n-y+1)$ distribution and $P_2$ the posterior distribution using the prior \eqref{eq:33}. It is well known that $P_2$ is again Beta distributed : the Beta distributions are \emph{conjugate priors} for the Binomial distribution (similarly as the normal prior is conjugate in the normal model, see the previous section); {{in fact, it is easy to see that
$P_2$ is the ${\rm Beta}(\alpha + y,\beta+ n-y)$ distribution.
We shall show that \begin{eqnarray} \label{betabound}
\left| \frac{y+1}{n+2} \left( \frac{\alpha + \beta - 2}{ n+ \alpha + \beta} \right) - \frac{\alpha - 1}{n+ \alpha + \beta} \right| &\le& d_{\mathcal{W}} (P_1, P_2) \nonumber \\
&\le & \frac{1}{n+2} \left\{ | \alpha -1| + \frac{y + \alpha}{n + \alpha + \beta} ( | \beta - 1| - | \alpha - 1|) \right\}. \end{eqnarray}
To this end, let $\Theta_1 \sim P_1$ and $\Theta_2 \sim P_2$. With \eqref{eq:22} we have the immediate lower bound on the Wasserstein distance, namely \begin{eqnarray*}
d_{\mathcal{W}} (P_1, P_2) & \ge & | \mathbb{E}[ \Theta_2] - \mathbb{E}[ \Theta_1] | \\
&=& \left| \frac{y+1}{n+2} - \frac{y + \alpha}{n + \alpha + \beta} \right|\\
&=& \left| \frac{y+1}{n+2} \left( 1 - \frac{n+2}{ n+ \alpha + \beta} \right) - \frac{\alpha - 1}{n+ \alpha + \beta} \right|\\
&=& \left| \frac{y+1}{n+2} \left( \frac{\alpha + \beta - 2}{ n+ \alpha + \beta} \right) - \frac{\alpha - 1}{n+ \alpha + \beta} \right| . \end{eqnarray*}
For an upper bound, we calculate that $$ \rho_0(\theta) = \frac{(\alpha -1) ( 1 - \theta) - (\beta - 1) \theta}{\theta(1-\theta)}$$ and hence $$\tau_1(\theta) \rho_0(\theta) = \frac{1}{n+2} \{ (\alpha -1) ( 1 - \theta) - (\beta - 1) \theta\}.$$ Using \eqref{eq:22} we obtain the claimed upper bound \begin{eqnarray*}
d_{\mathcal{W}} (P_1, P_2) & \le &
\frac{1}{n+2} \mathbb{E} \left| (\alpha -1) ( 1 - \Theta_2) - (\beta - 1) \Theta_2 \right| \\
& \le &
\frac{1}{n+2} \left\{ | \alpha -1| \mathbb{E} [ 1 - \Theta_2] + |\beta - 1| \mathbb{E} [ \Theta_2] \right\} \\
&=& \frac{1}{n+2} \left\{ | \alpha -1| + \frac{y + \alpha}{n + \alpha + \beta} ( | \beta - 1| - | \alpha -1 | ) \right\}. \end{eqnarray*}
Some comments on the bound \eqref{betabound} are in order. Firstly, both the upper and the lower bound vanish when $\alpha = \beta = 1$. Secondly, unless $\alpha=1$, the upper bound is of order $O(n^{-1})$, no matter how favourable the data $y$ are.
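For a numerical illustration of \eqref{betabound} (the values are chosen here for concreteness): with $n = 20$, $y = 15$ and $\alpha = \beta = 2$ we have $\mathbb{E}[\Theta_1] = 16/22$ and $\mathbb{E}[\Theta_2] = 17/24$, so that the lower bound in \eqref{betabound} equals $|17/24 - 16/22| \approx 0.019$ while the upper bound equals $1/22 \approx 0.045$.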
\begin{comment} The discrepancy between the upper and the lower bound in \eqref{betabound} for $\alpha \ge 1$ and $\beta \ge 1$ is $$ 2 \frac{\alpha-1}{(n+2)(n+\alpha + \beta)} (\beta + y - 2)$$ (recall that $y \ge 1$ is assumed). The discrepancy increases linearly with $y$ and is of order $O(n^{-1})$. \end{comment}
}}
\begin{comment} We can still apply Theorem \ref{maintheo} to obtain informative bounds, as follows. We have that \begin{equation*}
p_0'(\theta) \tau_1(\theta) = \frac{(2-\beta-\alpha)\theta+(\alpha-1)}{2 + n}p_0(\theta). \end{equation*} It is well known that the posterior is again Beta distributed : the Beta distributions are \emph{conjugate priors} for the Binomial distribution (similarly as the normal prior is conjugate in the normal model, see the previous section).
Consequently \begin{equation*}
p_2(\theta; y) = \kappa_2 (y) \theta^{\alpha-1+y} (1- \theta)^{\beta-1+n-y} \end{equation*} with $\kappa_2 (y) = 1/B(\alpha+y, \beta+n-y)$ and, with $\Theta_2 \sim p_2$, \begin{align}
\mathbb{E} \left[ \frac{|p_0'(\Theta_2)|}{p_0(\Theta_2)} \tau_1(\Theta_2) \right]
& = \frac{1}{2+n} \mathbb{E} \left[\left|(2-\beta-\alpha)\Theta_2+(\alpha-1)\right|\right] \label{Binineq} \\
& \le \frac{1}{2+n} \left( |2-\beta-\alpha|\mathbb{E} \left[ \Theta_2 \right] +
|\alpha-1| \right)\nonumber\\
& = \frac{1}{2+n}\left( |2-\beta-\alpha|
\frac{\alpha+y}{\alpha+\beta+n} + |\alpha-1| \right)\nonumber \end{align} Simplifying the lower bound \begin{eqnarray*}
\left| \frac{\alpha+y}{\alpha+\beta+n}-\frac{y+1}{n+2} \right| &=&\frac{1}{n+2} \left| \frac{\alpha(n+1)-\beta-n+y(2-\alpha-\beta)}{\alpha+\beta+n}
\right|\\
&=& \frac{1}{n+2}\left|(2-\alpha-\beta)\frac{\alpha+y}{\alpha+\beta+n}+\frac{\alpha(n-1)+\alpha^2+\alpha\beta-\beta-n}{\alpha+\beta+n}\right|\\
&=& \frac{1}{n+2}\left|(2-\alpha-\beta)\frac{\alpha+y}{\alpha+\beta+n}+\frac{(\alpha-1)(\alpha+\beta+n)}{\alpha+\beta+n}\right|\\
&=& \frac{1}{n+2}\left|(2-\alpha-\beta)\frac{\alpha+y}{\alpha+\beta+n}+(\alpha-1)\right|
\end{eqnarray*} \noindent finally yields \begin{align*}
& \frac{1}{n+2}\left|(2-\alpha-\beta)\frac{\alpha+y}{\alpha+\beta+n}+(\alpha-1)\right|
\le d_{\mathcal{W}}(\Theta_1, \Theta_2) \\
& \quad \qquad \qquad \qquad \le \frac{1}{2+n}\left( |2-\beta-\alpha|
\frac{\alpha+y}{\alpha+\beta+n} + |\alpha-1| \right). \end{align*} If $\alpha=1$ or $\alpha + \beta=2$, the expression for
$d_{\mathcal{W}}(\Theta_1, \Theta_2)$ is exact. Unless $\alpha=1$ the distance between $\Theta_1$ and $\Theta_2$ will be of order $1/n$ no matter how favourite $y$ is. In case $\alpha + \beta=2$ it simplifies to $ \frac{|\alpha-1|}{2+n}$. Obviously when we choose a uniform prior, obtained for $\alpha=\beta=1$, then $p_1=p_2$ and the distance is zero, as it should be.
\end{comment}
\begin{comment} \subsubsection{Haldane prior} An alternative popular prior is \begin{equation}
\label{eq:32}
\pi_0(\theta) = \frac{1}{\theta(1-\theta)}, \end{equation} the Haldane prior on $(0,1)$ whose derivative is not uniformly bounded on $[0, 1]$. This improper prior would correspond to $\alpha=\beta=0$ in \eqref{eq:33}, hence is not an allowed special case of what precedes and needs investigation on its own. For $y=0$ or $y=n$ the posterior is improper and we exclude this case. For $y \ne 0$ or $n$, the posterior distribution $P_2$ is ${\rm Beta}(y, n-y)$. Moreover {{ $$\rho_0(\theta) = \frac{2 \theta - 1}{ \theta(1 - \theta)}$$ and \begin{equation}\label{lowHald} \tau_1(\theta) \rho_0(\theta) = \frac{1}{n+2} (2 \theta -1). \end{equation}
Using \eqref{eq:22} again we obtain that \begin{eqnarray} \label{haldanebound}
\frac{2}{n+2} \left| \frac{y}{n} -\frac12 \right| &\le & d_{\mathcal{W}} (P_1, P_2) \le \frac{2}{n+2}\left\{ \left| \frac{y}{n} -\frac12 \right| + \frac{1}{\sqrt{n+1}} \sqrt{\frac{y}{n} \left( 1 - \frac{y}{n}\right)} \right\} . \end{eqnarray} The lower bound is readily obtained from~\eqref{lowHald}. For the upper bound we use the Cauchy-Schwarz inequality; \eqref{eq:22} gives that \begin{eqnarray*}
d_{\mathcal{W}} (P_1, P_2) &\le& \frac{1}{n+2} \mathbb{E} | 2 \Theta_2 - 1| \\
&\le& \frac{2}{n+2} \left\{ \mathbb{E} | \Theta_2 - \mathbb{E}[ \Theta_2]| + \left| \mathbb{E} [\Theta_2] - \frac12 \right| \right\} \\
&\le & \frac{2}{n+2} \left\{ \sqrt{ {\rm Var} [\Theta_2]} + \left| \mathbb{E} [\Theta_2] - \frac12 \right| \right\} \\
&=& \frac{2}{n+2} \left\{ \sqrt{ \frac{y}{n(n+1)} \left( 1 - \frac{y}{n}\right)} + \left| \frac{y}{n} -\frac12 \right| \right\} ; \end{eqnarray*} re-arranging gives \eqref{haldanebound}.
In contrast to \eqref{betabound}, for the Haldane prior the bound can achieve order $O\left(n^{-\frac32}\right)$ if the data $y$ is close to $\frac{n}{2}$.
}} \end{comment}
\begin{comment} $$\frac{p_0'(\theta)}{p_0(\theta)} = \frac{1}{1-\theta} - \frac{1}{\theta}$$ so that, with $\tau_1(\theta)=\frac{\theta(1-\theta)}{n+2}$ and $\Theta_2 \sim p_2$, the influence of the prior becomes quantified as \begin{align*}
\frac{2}{n+2 }\left| \frac{y}{n} - \frac12\right| \le d_{\mathcal{W}}(\Theta_1, \Theta_2) &\le \frac{2}{n+2} \mathbb{E}\left[ \left| \Theta_2 - \frac12\right|\right]\\
&\leq \frac{2}{n+2} \left\{\mathbb{E} \left| \Theta_2 - \frac{y}{n} \right| + \left| \frac{y}{n} - \frac12\right|\right\} \\
&\le \frac{2}{n+2 }\left( \left| \frac{y}{n} - \frac12\right| + \sqrt{\frac{y(n-y)}{n^2(n+1)}} \right) \end{align*} If $y = 1$ or $y=n-1$ then both upper and lower bounds are of order $O(n^{-1})$ (recall that we exclude the cases $y=0$ and $y=n$ due to degeneracy). In contrast, if our data $y=\frac{n}{2}$ then the lower bound is trivial and the upper bound becomes $\frac{1}{(n+2) \sqrt{n+1}}, $ and this is of smaller order than $1/n$.
\end{comment}
\subsubsection{The Jeffreys prior} An alternative popular prior is \begin{equation*}
\label{eq:34}
{{\pi}}_0(\theta) = \frac{1}{\sqrt{\theta(1-\theta)}}, \end{equation*} the so-called Jeffreys prior obtained for $\alpha=\beta=1/2$ in \eqref{eq:33}. This prior is integrable, hence proper (it is an unnormalized ${\rm Beta}(\frac12,\frac12)$ density), {{and it satisfies the assumptions of Theorem \ref{maintheo}.}} The posterior distribution ${{P}}_2$ is ${\rm Beta}(y + \frac12, n-y + \frac12)$. Moreover $${{\rho}}_0(\theta) = \frac{2 \theta - 1}{ 2 {\theta(1 - \theta)}}$$ and $$\tau_1(\theta) {{\rho}}_0(\theta) = \frac{1}{2(n+2)} (2 \theta -1).$$
Using \eqref{eq:22} we obtain that $$
\frac{1}{(n+1)} \left| \frac{y+1}{n+2} -\frac12 \right| \le d_{\mathcal{W}} (P_1, {{P}}_2) $$ and $$ d_{\mathcal{W}} (P_1, {{P}}_2) \le \frac{1}{n+2} \left\{ \sqrt{ \frac{\left( y + \frac12\right) \left( n - y + \frac12\right)}{(n+2)(n+1)^2}}
+ \left| \frac{y + \frac12}{n+1} -\frac12 \right|
\right\}. $$
The upper bound follows from the Cauchy-Schwarz inequality via \begin{eqnarray*}
d_{\mathcal{W}} (P_1, {{P}}_2) &\le& \frac{1}{2(n+2)} \mathbb{E} | (2 \Theta_2 - 1) | \\
&\le& \frac{1}{n+2} \left\{ \mathbb{E} | \Theta_2 - \mathbb{E} [\Theta_2]|+ \left| \mathbb{E}[ \Theta_2] - \frac12 \right| \right\} \\
&\le & \frac{1}{n+2} \left\{ \sqrt{ {\rm Var} [ \Theta_2] } + \left| \mathbb{E}[ \Theta_2] - \frac12 \right| \right\} \\ &=& \frac{1}{n+2} \left\{ \sqrt{ \frac{\left( y + \frac12\right) \left( n - y + \frac12\right)}{(n+2)(n+1)^2}}
+ \left| \frac{y + \frac12}{n+1} -\frac12 \right|
\right\}. \end{eqnarray*}
In contrast to \eqref{betabound}, the Jeffreys prior can achieve a bound of order $O\left(n^{-\frac32}\right)$ if the data $y$ is close to $\frac{n}{2}$.
\begin{comment}
{{ We can also compare the Jeffreys prior and the Haldane prior directly via \eqref{eq:38}. We have that \begin{eqnarray*} \tau_1(\theta) r(\theta) &=& \frac{1}{n+2} (2 \theta - 1) \left\{\frac{1}{\theta( 1 - \theta) \mathbb{E} [\Theta_1(1 - \Theta_1)] }- \frac12 \frac{1}{ \sqrt{\theta(1 - \theta) }\mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) } } \right\}\\ &=&\frac{1}{(n+2) \mathbb{E} [\Theta_1(1 - \Theta_1)] \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) }} (2 \theta - 1) \left\{\mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) } \frac{1}{\theta( 1 - \theta)}- \frac12 \mathbb{E} [\Theta_1(1 - \Theta_1)] \right\} \\ &=&\frac{1}{(n+2) \mathbb{E} [\Theta_1(1 - \Theta_1)] \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) }} \left\{ \left( \frac{2 \theta - 1}{\theta( 1 - \theta)} - \frac12 \right) \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) } \frac{1}{\theta( 1 - \theta)} \right. \\ &&\left. - \frac12 \left( \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) } - \mathbb{E} [\Theta_1(1 - \Theta_1)] \right) \right\} . \end{eqnarray*} Note that $ \frac{2 \theta - 1}{\theta( 1 - \theta)} - \frac12 < 0$ always and $ \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) } - \mathbb{E} [\Theta_1(1 - \Theta_1)]0 $ always. Using the triangle inequality that \begin{eqnarray*}
\left|\tau_1(\theta) r(\theta) \right| &\le& \frac{1}{(n+2) \mathbb{E} [\Theta_1(1 - \Theta_1)] \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) }} \left\{ \left( \frac12 - \frac{2 \theta - 1}{\theta( 1 - \theta)} \right) \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) } \frac{1}{\theta( 1 - \theta)} \right. \\ &&\left. + \frac12 \left( \mathbb{E} \sqrt{\Theta_1(1 - \Theta_1) } - \mathbb{E} [\Theta_1(1 - \Theta_1)] \right) \right\} \end{eqnarray*} and taking expectations yields an upper bound \begin{eqnarray*} d_{\mathcal{W}}(\Theta_2,{\tilde{ \Theta}}_2) &\le& \frac{1}{2(n+2)} B(y, n+y)\left\{ \frac{1}{B(y + \frac12, n - y + \frac12)} \left( B(y, n-y) \frac{n-y+3}{n-1} + 1 \right) \right.\\ &&\left. - \frac{1}{B(y+1, n-y+1)} \right\} \end{eqnarray*} If $y \rightarrow \infty$ and $n \rightarrow \infty$ then this bound is of order $O(n^{-2})$, of smaller order than using the triangle inequality and the bounds \eqref{haldanebound} and \eqref{jeffreysboundb}.
}}}} \end{comment}
\begin{comment} Here we can re-start our upper bound calculations at~\eqref{Binieq} and apply the Cauchy-Schwarz inequality to
$\mathbb{E}[|\Theta_2-1/2|]$ to finally get $$
\frac{\left|y-\frac{n}{2}\right|}{(n+1)(n+2)}\leq d_{\mathcal{W}}(\Theta_1, \Theta_2) \leq \frac{1}{2}\frac{1}{(n+2)^{3/2}}+\frac{\left|y-\frac{n}{2}\right|}{(n+2)^{3/2}(n+1)^{1/2}}. $$ If $y$ is close to $\frac{n}{2}$ then the upper bound on the Wasserstein distance is of order $n^{-\frac{3}{2}}$, smaller than the bound \eqref{genbinbound}. If $y$ is small, then the order of the bounds is again $n^{-1}$. \end{comment}
\subsection{A Poisson model}\label{sec:Bayes5}
The last case we tackle is the Poisson model with data $x=(x_1,\ldots,x_n)$ from a Poisson distribution with sampling density $$ f (x;\theta)=e^{-n\theta}\frac{\theta^{\sum_{i=1}^nx_i}}{\prod_{i=1}^nx_i!}. $$ When $\sum_{i=1}^n x_i \ne 0$, which we shall now assume, we {{obtain that $P_1$, the posterior distribution under a uniform prior, has pdf }} \begin{equation*}
p_1(\theta;x)\propto \exp(-\theta n)\theta^{\sum_{i=1}^nx_i+1-1}, \end{equation*}
a gamma density with scale parameter $1/n$ and shape parameter $\sum_{i=1}^nx_i+1$; its Stein kernel is simply $\tau_1(\theta)=\theta/n$ (see Example~\ref{sec:exo4}). The general bound \eqref{eq:var} from Corollary \ref{varcor} becomes \begin{equation} \label{poisgen}
d_{\mathcal{W}} (P_1, P_2) \le \sup_{\theta \ge 0} \left| {{\pi_0' \left(\theta; \sum x_i \right)}}\right| \frac{{\bar{x}}+\frac{1}{n}}{n} , \end{equation} where $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i \ge \frac{1}{n}$.
Taking for $\theta$ a negative exponential prior $Exp(\lambda)$ with $\lambda>0$, \begin{equation*}
\pi_0(\theta)=\lambda e^{-\lambda\theta} \end{equation*}
over $\mathbb{R}^+$ yields that the posterior $P_2$ has density $p_2(\theta;x)\propto \exp(-\theta (n+\lambda))\theta^{\sum_{i=1}^nx_i+1-1}$, again a gamma density where the first parameter is updated to $1/(n+\lambda)$. Here, the prior is monotone decreasing, hence we can exactly calculate the effect of the prior to obtain \begin{eqnarray*}
d_{\mathcal{W}}(P_1, P_2)&=&\mathbb{E}\left[\left|(\log \pi_0)'(\Theta_2)\right|\frac{\Theta_2}{n}\right]\\ &=&\lambda\frac{\mathbb{E}\left[\Theta_2\right]}{n}\\ &=&\lambda\frac{\bar{x}+\frac{1}{n}}{n+\lambda}\\ &=& \frac{\lambda}{n + \lambda} \bar{x} + \frac{\lambda}{n(n+\lambda)}. \end{eqnarray*} We note that the exact distance differs from the general bound \eqref{poisgen} here only through a multiplicative factor
$\frac{n}{\lambda(n+\lambda)}$ (since $\sup_{\theta \ge 0} \left| {{\pi_0' \left(\theta; \sum x_i \right)}}\right|=\lambda^2$). The distance increases with $\bar{x}$ but will always be at least as large as $\frac{\lambda}{n(n+\lambda)}$. As we assume that $\bar{x} \ge \frac{1}{n}$, the data-dependent part of the Wasserstein distance will always be at least as large as the part which stems solely from the prior. Finally, from the strong law of large numbers, $\bar{x}$ will almost surely converge to a constant as $n \rightarrow \infty$, so that the Wasserstein distance will converge to 0 almost surely.
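For a purely numerical illustration (the values are chosen here for concreteness): with $\lambda = 1$, $n = 50$ and $\bar{x} = 2$, the exact distance equals $\lambda \frac{\bar{x} + 1/n}{n + \lambda} = 2.02/51 \approx 0.040$, while the general bound \eqref{poisgen} gives $\lambda^2 \frac{\bar{x} + 1/n}{n} = 2.02/50 \approx 0.040$; the two indeed differ only by the factor $\frac{n}{\lambda(n+\lambda)} = 50/51$.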
\section{Technical results}\label{sec:stein-factors-1}
In this section we first prove the variant of Corollary 2.16 of \cite{Do12} which we use in our paper. It includes Lemma~\ref{sec:about-constants-1} as a special case.
\begin{lemma} \label{sec:technical-results}
Let $P$ have a continuous density $p$ with mean $\mu$ and support $\mathcal{I}$ an interval with
closure $\bar{\mathcal{I}} = [a, b]$ with
$-\infty \le a < b \le + \infty$ and let $X \sim P$. Write {{$F_P$}} for the corresponding
cumulative distribution function. Let $h:\mathcal{I} \rightarrow \mathbb{R}$
be Lebesgue-almost everywhere differentiable such that the Fubini
condition
$$ \int_A \int_B | h'(v)| p(u)dvdu = \int_B \int_A | h'(v)| p(u)dudv < \infty $$ is satisfied for all Borel-measurable subsets $A, B \subset [a,b]$. Then \begin{enumerate} \item \label{dobler} \begin{equation*}
\left| \int_a^x (h(y)-\mathbb{E} \left[ h({{X}}) \right]) p(y)dy \right|
\le \|h'\| \int_a^x\left(\mu - y \right)p(y)dy; \end{equation*} \item for $g_h = \frac{\mathcal{T}_P^{-1}(h - \mathbb{E} \left[ h({{X}}) \right])}{\tau_P}$ it holds that \label{dobler:gen} \begin{equation*}
\| g_h\| \le \| h'\|; \end{equation*} \item \label{dobler:wasser} [Lemma~\ref{sec:about-constants-1}] in particular, if $\mathcal{H}$ is the set of all Lipschitz-continuous functions $h:\mathcal{I} \rightarrow \mathbb{R}$ with Lipschitz constant 1, then
$$\| g_h\| \le 1$$ for all $h \in \mathcal{H}$. \end{enumerate} \end{lemma}
\begin{proof}
We prove the three items separately, closely following \cite{Do12}
and in particular his Lemma 5.3. \begin{enumerate} \item Let $h:\mathcal{I} \rightarrow \mathbb{R}$ be as detailed in the
assumptions. Then, under the sole assumption that Fubini is
allowed,
we can write for all $a \le y \le b$ \begin{align*} h(y) - \mathbb{E} \left[ h({{X}}) \right] & = \int_a^b (h(y) - h(u))p(u) du \\ & = \int_a^b \int_u^yh'(v)p(u)dvdu \\ & = \int_a^y \int_u^yh'(v) p(u) dv du - \int_y^b \int_y^uh'(v) p(u) dv du \\ & = \int_a^y \int_a^vh'(v) p(u) du dv - \int_y^b \int_v^bh'(v) p(u) du dv \\ & = \int_a^y F_P(v) h'(v) dv - \int_y^b (1- F_P(v)) h'(v) dv. \end{align*} Integrating the above w.r.t. $p$ and again applying Fubini we get after straightforward simplifications {\color{black}{ \begin{align*} & \int_a^x (h(y)-\mathbb{E} \left[ h({{X}}) \right]) p(y)dy \\ & = -(1-F_P(x)) \int_a^x F_P(s) h'(s)ds - F_P(x) \int_x^b (1-F_P(s)) h'(s) ds \end{align*}}}
for each $x \in[a, b]$ from which we readily derive
\begin{align*}
& \left| \int_a^x (h(y)-\mathbb{E} \left[ h({{X}}) \right]) p(y)dy \right| \\ & \le
\|h'\| \left( (1-F_P(x)) \int_a^x F_P(s) ds + F_P(x) \int_x^b (1-F_P(s))
ds \right).
\end{align*} To deal with this last expression we use the identities
\begin{align*}
\int_a^x F_P(s) ds = x F_P(x) - \int_a^x sp(s)ds
\end{align*} and \begin{align*} \int_x^b
(1-F_P(s)) ds =-x(1-F_P(x)) + \int_x^b sp(s)ds. \end{align*} Straightforward computations yield the claim. \item For Item \ref{dobler:gen}, by definition \begin{equation*}
\mathcal{T}_P^{-1}(h(x) - \mathbb{E} \left[ h({{X}}) \right]) = \frac{1}{p(x)} \int_a^x
(h(y) - \mathbb{E} \left[ h({{X}}) \right])p(y)dy. \end{equation*} Also, by definition, \begin{equation*}
\tau_P(x) p(x) = \int_a^x (\mu-y) p(y)dy. \end{equation*} Hence \begin{equation*}
g_h(x) = \frac{\int_a^x \left( h(y) - \mathbb{E} \left[ h({{X}}) \right]
\right)p(y)dy}{\int_a^x (\mu-y) p(y)dy} \end{equation*} which, by Item \ref{dobler}, satisfies \begin{equation*}
\| g_h\| \le \|h'\| \left| \frac{\int_a^x (\mu-y) p(y)dy}{\int_a^x
(\mu-y) p(y)dy} \right| = \|h'\| . \end{equation*} \item Item \ref{dobler:wasser} follows directly from Rademacher's
Theorem for Lipschitz functions which guarantees that they are
almost everywhere differentiable, with derivative bounded by 1 if their
Lipschitz constant is 1. \end{enumerate} \end{proof} We conclude the paper with a proof of Proposition~\ref{prop:easycond}, restated for convenience. \begin{proposition} We use the notations of
Theorem \ref{maintheo}. Suppose that $\pi_0$, $p_1$ and $p_2$ are
differentiable over their support and that their derivatives are
integrable. Suppose that
\begin{equation*}
\lim_{x \to a_2, b_2} \pi_0(x) p_1(x) \tau_1(x) = \lim_{x \to a_2, b_2} p_2(x) \tau_1(x) = 0.
\end{equation*} Let $\rho_1 = p_1'/p_1$ and suppose also that \begin{equation*}
\pi_0' p_1 \tau_1 = p_2' \tau_{1} - \rho_1 \tau_1 p_2 \in L^1(dx). \end{equation*}
Then Theorem \ref{maintheo} applies. \end{proposition} \begin{proof} Conditions \eqref{eq:6} and \eqref{eq:9} are equivalent
to requiring that $f_h \in \mathcal{F}_2$; in other words, $f_hp_2$
needs to be differentiable and $(f_hp_2)'$ needs to be integrable with
integral over $\mathcal{I}_2$ (the support of $p_2$) equal to 0. By
definition,
\begin{equation*}
f_h(x) p_2(x) = \pi_0(x) \int_{a_1}^x (h(y) - \mathbb{E}[h(X_1)]) p_1(y) dy
\end{equation*} is differentiable if $\pi_0$ is differentiable. Next, differentiating,
$$ (f_h p_2 )' (x) = \pi_0' (x) \int_{a_1}^x (h(y) - \mathbb{E}[h(X_1)]) p_1(y) dy + \pi_0(x) (h(x) - \mathbb{E}[h(X_1)]) p_1(x) .$$
For the second summand, the Lipschitz property of $h$ gives the bound
$$ |h(x) - \mathbb{E}[h(X_1)] | \le \int_{a_1}^{b_1} |h(x) - h(y)| p_1(y) dy\le \int_{a_1}^{b_1} |x - y| p_1(y) dy,$$ so that
$$ \int_{a_1}^{b_1} | \pi_0(x) (h(x) - \mathbb{E}[h(X_1)]) p_1(x) | dx \le
\int_{a_1}^{b_1} p_2 (x) \int_{a_1}^{b_1} |x - y| p_1(y) dy dx \le \mathbb{E} |X_1| + \mathbb{E} |X_2|,
$$
and the latter expectations are assumed to exist. Hence in order to
guarantee \eqref{eq:6} it is sufficient to impose that
\begin{equation}\label{eq:11}
\pi_0' (x) \int_{a_1}^x (h(y) - \mathbb{E}[h(X_1)]) p_1(y) dy \in L^1(dx).
\end{equation}
We can write
\begin{align*} \int_{a_1}^x (h(y) - \mathbb{E}[h(X_1)]) p_1(y) dy & =p_1(x)\tau_{1}(x)g_h(x)
\end{align*} with \begin{equation*}
g_h(x) = \frac{1}{\tau_1(x)p_1(x)}\int_{a_1}^x (h(y) - \mathbb{E}[h(X_1)]) p_1(y) dy \end{equation*} a function which we know from Lemma~\ref{sec:technical-results} to be bounded uniformly by 1. Hence \eqref{eq:11} (and therefore \eqref{eq:6}) boils down to a condition on $\pi_0'(x) p_1(x)\tau_{1}(x).$ Similarly \eqref{eq:9} reduces to a condition on $\pi_0(x) p_1(x)\tau_{1}(x)$, and the claim follows. \end{proof}
\end{document}
\begin{document}
\title[Quantized flag manifolds and irreducible $*$-representations]{Quantized flag manifolds and\\irreducible $*$-representations} \author{Jasper V. Stokman} \address{Department of Mathematics, University of Amsterdam, Plantage Muidergracht 24, 1018 TV Amsterdam, The Netherlands.} \email{[email protected]} \author{Mathijs S. Dijkhuizen} \address{Department of Mathematics, Faculty of Science, Kobe University, Rokko, Kobe 657, Japan} \email{[email protected]} \thanks{The first author received financial support from NWO/Nissan} \date{February, 1998} \begin{abstract} We study irreducible $*$-representations of a certain quantization of the algebra of polynomial functions on a generalized flag manifold regarded as a real manifold. All irreducible $*$-representations are classified for a subclass of flag manifolds containing in particular the irreducible compact Hermitian symmetric spaces. For this subclass it is shown that the irreducible $*$-representations are parametrized by the symplectic leaves of the underlying Poisson bracket. We also discuss the relation between the quantized flag manifolds studied in this paper and the quantum flag manifolds studied by Soibel'man, Lakshmibai \& Reshetikhin, Jur\v co \& \v S\v tov\'\i\v cek and Korogodsky. \end{abstract} \maketitle
\section{Introduction} \label{section:intro} The irreducible $*$-representations of the ``standard'' quantization ${\mathbb{C}}_q\lbrack U \rbrack$ of the algebra of functions on a compact connected simple Lie group $U$ were classified by Soibel'man \cite{S}. He showed that there is a 1--1 correspondence between the equivalence classes of irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U \rbrack$ and the symplectic leaves of the underlying Poisson bracket on $U$ (cf.\ \cite{S0}, \cite{S}).
This Poisson bracket is sometimes called Bruhat-Poisson, because its symplectic foliation is a refinement of the Bruhat decomposition of $U$ (cf.\ Soibel'man \cite{S0}, \cite{S}). The symplectic leaves are naturally parametrized by $W\times T$, where $T\subset U$ is a maximal torus and $W$ is the Weyl group associated with $(U,T)$.
The 1--1 correspondence between equivalence classes of irreducible $*$-rep\-res\-enta\-tions of ${\mathbb{C}}_q\lbrack U \rbrack$ and symplectic leaves of $U$ can be formally explained by the observation that in the semi-classical limit the kernel of an irreducible $*$-representation should tend to a maximal Poisson ideal. The quotient of the Poisson algebra of polynomial functions on $U$ by this ideal is isomorphic to the Poisson algebra of functions on the symplectic leaf.
In recent years many people have studied quantum homogeneous spaces (see for example \cite{VS}, \cite{NYM}, \cite{Podles}, \cite{K0}, \cite{Noumi}, \cite{PV}, \cite{DS}). The results referred to above raise the obvious question whether the irreducible $*$-representations of quantized function algebras on $U$-homogeneous spaces can be classified and related to the symplectic foliation of the underlying Poisson bracket. This question was already raised in a paper by Lu \& Weinstein \cite[Question 4.8]{LW}, where they studied certain Poisson brackets on $U$-homogeneous spaces that arise as a quotient of the Bruhat-Poisson bracket on $U$.
To our knowledge, affirmative answers to the above mentioned question have been given so far for only three different types of $U$-homogeneous spaces, namely Podle\'s's family of quantum 2-spheres \cite{Podles} (the relation with the symplectic foliation of certain covariant Poisson brackets on the 2-sphere seems to have been observed for the first time by Lu \& Weinstein \cite{LW2}), odd-dimensional complex quantum spheres $SU(n+1)/SU(n)$ (cf.\ Vaksman \& Soibel'man \cite{VS}), and Stiefel manifolds $U(n)/U(n-l)$ (cf.\ Podkolzin \& Vainerman \cite{PV}).
In this paper we study the irreducible $*$-representations of a certain quantized $\ast$-algebra of functions on a generalized flag manifold. To be more specific, let $G$ denote the complexification of $U$, and let $P\subset G$ be a parabolic subgroup containing the standard Borel subgroup $B_+$ with respect to a fixed choice of Cartan subalgebra and system of positive roots (compatible with the choice of Bruhat-Poisson bracket on $U$, see \cite{LW}). The generalized flag manifold $U/K$, where $K:=U\cap P$, naturally becomes a Poisson $U$-homogeneous space (cf.\ Lu \& Weinstein \cite{LW}). The quotient Poisson bracket on $U/K$ is also called Bruhat-Poisson in \cite{LW}, since its symplectic leaves coincide with the Schubert cells of the flag manifold $G/P \simeq U/K$.
It is straightforward to realize a quantum analogue ${\mathbb{C}}_q\lbrack K\rbrack$ of the algebra of polynomial functions on $K$ as a quantum subgroup of ${\mathbb{C}}_q\lbrack U\rbrack$. The corresponding $\ast$-subalgebra ${\mathbb{C}}_q\lbrack U/K\rbrack$ of ${\mathbb{C}}_q\lbrack K\rbrack$-invariant functions in ${\mathbb{C}}_q\lbrack U\rbrack$ may be regarded as a quantization of the Poisson algebra of functions on $U/K$ endowed with the Bruhat-Poisson bracket. The main result in this paper is a classification of all the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U/K\rbrack$ for an important subclass of flag manifolds containing in particular the irreducible Hermitian symmetric spaces of compact type. For this subclass we show that the equivalence classes of irreducible $*$-representations are parametrized by the Schubert cells of $U/K$. Let us emphasize that we regard here the flag manifold $U/K$ as a real manifold. This means that the algebra of functions on $U/K$ has a natural $*$-structure, which survives quantization and allows us to study $*$-representations in a way analogous to Soibel'man's approach \cite{S}.
For an arbitrary generalized flag manifold $U/K$ we describe in detail how irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U\rbrack$ decompose under restriction to ${\mathbb{C}}_q\lbrack U/K \rbrack$. This decomposition corresponds precisely to the way symplectic leaves in $U$ project to Schubert cells in the flag manifold $U/K$. It leads immediately to a classification of the irreducible $*$-representations of the $C^*$-algebra $C_q(U/K)$, where $C_q(U/K)$ is obtained by taking the closure of ${\mathbb{C}}_q\lbrack U/K \rbrack$ with respect to the universal $C^\ast$-norm on ${\mathbb{C}}_q[U]$. The equivalence classes of irreducible $*$-representations of $C_q(U/K)$ are naturally parametrized by the symplectic leaves of $U/K$ endowed with the Bruhat-Poisson bracket.
For the classification of the irreducible $*$-representations of the quantized function algebra ${\mathbb{C}}_q\lbrack U/K \rbrack$ itself it is important to have a kind of Poincar{\'e}-Birkhoff-Witt (PBW) factorization of ${\mathbb{C}}_q\lbrack U/K \rbrack$ (which in turn is closely related to the irreducible decomposition of tensor products of certain finite-dimensional irreducible $U$-modules). Such a factorization is needed in order to develop a kind of highest weight representation theory for ${\mathbb{C}}_q\lbrack U/K \rbrack$. In Soibel'man's paper \cite{S}, a crucial role is played by a similar factorization of ${\mathbb{C}}_q\lbrack U \rbrack$. {}From Soibel'man's results one easily derives a factorization of the algebra ${\mathbb{C}}_q\lbrack U/T \rbrack$ (corresponding to $P$ minimal parabolic in $G$).
In this paper we derive a PBW type factorization for a different subclass of flag manifolds using the so-called Parthasarathy-Ranga Rao-Varadarajan (PRV) conjecture. This conjecture was formulated as a follow-up to certain results in the paper \cite{PRV} and was independently proved by Kumar \cite{Ku} and Mathieu \cite{Ma} (see also Littelmann \cite{Li}). The subclass of flag manifolds $U/K$ we consider here can be characterized by the two conditions that $(U,K)$ is a Gel'fand pair and that the Dynkin diagram of $K$ can be obtained from the Dynkin diagram of $U$ by deleting one node (cf.\ Koornwinder \cite{Koo}). Note that the corresponding $P\subset G$ is always maximal parabolic. These two conditions are satisfied for the irreducible compact Hermitian symmetric pairs $(U,K)$.
Roughly speaking, the PBW factorization in the above mentioned cases states that the quantized function algebra ${\mathbb{C}}_q[U/K]$ coincides with the quantized algebra of zero-weighted complex valued polynomials on $U/K$. The quantized algebra of zero-weighted complex valued polynomials can be naturally defined for arbitrary generalized flag manifold $U/K$. It is always a $*$-subalgebra of ${\mathbb{C}}_q\lbrack U/K \rbrack$ and invariant under the ${\mathbb{C}}_q\lbrack U\rbrack$-coaction (we shall call it the factorized $*$-algebra associated with $U/K$). The factorized $*$-algebra is closely related to the quantized algebra of holomorphic polynomials on generalized flag manifolds studied by Soibel'man \cite{S2}, Lakshmibai \& Reshetikhin \cite{LR1}, \cite{LR2}, and Jur\v co \& \v S\v tov\'\i\v cek \cite{Jur} (for the classical groups) as well as to the function spaces considered recently by Korogodsky \cite{Kor}.
In this paper we classify the irreducible $*$-representations of the factorized $*$-algebra associated with an arbitrary flag manifold $U/K$ and we show that the equivalence classes of irreducible $*$-representations are naturally parametrized by the symplectic leaves of $U/K$ endowed with the Bruhat-Poisson bracket. In particular, we obtain a complete classification of the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U/K \rbrack$ whenever a PBW type factorization holds for ${\mathbb{C}}_q\lbrack U/K \rbrack$ (i.e., ${\mathbb{C}}_q\lbrack U/K \rbrack$ is equal to its factorized $*$-algebra).
The paper is organized as follows. In section 2 we review the results by Lu \& Weinstein \cite{LW} and Soibel'man \cite{S} concerning the Bruhat-Poisson bracket on $U$ and the quotient Poisson bracket on a flag manifold. In section 3 we recall some well-known results on the ``standard'' quantization of the universal enveloping algebra of a simple complex Lie algebra and its finite-dimensional representations. We also recall the construction of the corresponding quantized function algebra ${\mathbb{C}}_q\lbrack U \rbrack$ and give some commutation relations between certain matrix coefficients of irreducible corepresentations of ${\mathbb{C}}_q\lbrack U \rbrack$. They will play a crucial role in the classification of the irreducible $*$-representations of the factorized $*$-algebra.
In section 4 we define the quantized algebra ${\mathbb{C}}_q\lbrack U/K\rbrack$ of functions on a flag manifold $U/K$ and its associated factorized $*$-subalgebra. We prove that the factorized $*$-algebra is equal to ${\mathbb{C}}_q\lbrack U/K \rbrack$ for the subclass of flag manifolds referred to above.
In section 5 we study the restriction of an arbitrary irreducible $*$-representation of ${\mathbb{C}}_q\lbrack U \rbrack$ to ${\mathbb{C}}_q\lbrack U/K \rbrack$. We use here Soibel'man's explicit realization of the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U \rbrack$ as tensor products of irreducible $*$-representations of ${\mathbb{C}}_q\lbrack SU(2) \rbrack$ (cf.\ \cite{S}, see also \cite{Ko}, \cite{VS} for $SU(n)$). As a corollary we obtain a complete classification of the irreducible $*$-representations of the $C^*$-algebra $C_q(U/K)$.
Section 6 is devoted to the classification of the irreducible $*$-representations of the factorized $*$-algebra associated with an arbitrary flag manifold. The techniques in section 6 are similar to those used by Soibel'man \cite{S} for the classification of the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U \rbrack$, and to those used by Joseph \cite{J} to handle the more general problem of determining the primitive ideals of ${\mathbb{C}}_q\lbrack U\rbrack$.
\section{Bruhat-Poisson brackets on flag manifolds} \label{section:Bruhat-Poisson} In this section we review some results by Soibel'man \cite{S} and Lu \& Weinstein \cite{LW} concerning the Bruhat-Poisson bracket on a compact connected simple Lie group $U$ and its flag manifolds. For unexplained terminology in this section we refer the reader to \cite{CP} and \cite{LW}.
Let ${\mathfrak{g}}$ be a complex simple Lie algebra with a fixed Cartan subalgebra ${\mathfrak{h}}\subset {\mathfrak{g}}$. Let $G$ be the connected simply connected Lie group with Lie algebra ${\mathfrak{g}}$ (regarded here as a real analytic Lie group).
Let $R\subset {\mathfrak{h}}^*$ be the root system associated with $({\mathfrak{g}},{\mathfrak{h}})$ and write ${\mathfrak{g}}_{\alpha}$ for the root space associated with $\alpha\in R$. Let $\Delta=\lbrace \alpha_1,\ldots,\alpha_r\rbrace$ be a basis of simple roots for $R$, and let $R^+$ (resp. $R^-$) be the set of positive (resp. negative) roots relative to $\Delta$. We identify ${\mathfrak{h}}$ with its dual by the Killing form $\kappa$. The non-degenerate symmetric bilinear form on ${\mathfrak{h}}^*$ induced by $\kappa$ is denoted by $(\cdot,\cdot )$. Let $W\subset \hbox{GL}({\mathfrak{h}}^*)$ be the Weyl group of the root system $R$ and write $s_i=s_{\alpha_i}$ for the simple reflection associated with $\alpha_i\in\Delta$.
For $\alpha\in R$ write $d_{\alpha}:=(\alpha,\alpha)/2$. Let $H_{\alpha}\in {\mathfrak{h}}$ be the element associated with the coroot $\alpha\spcheck:= d_{\alpha}^{-1}\alpha\in {\mathfrak{h}}^*$ under the identification $\mathfrak{h} \simeq \mathfrak{h}^*$. Let us choose nonzero $X_{\alpha}\in {\mathfrak{g}}_{\alpha}$ ($\alpha\in R$) such that for all $\alpha, \beta\in R$ one has $\lbrack X_{\alpha},X_{-\alpha}\rbrack=H_{\alpha}$, $\kappa\bigl(X_{\alpha},X_{-\alpha}\bigr)=d_{\alpha}^{-1}$ and $\lbrack X_{\alpha},X_{\beta}\rbrack =c_{\alpha,\beta}X_{\alpha+\beta}$ with $c_{\alpha,\beta}=-c_{-\alpha,-\beta}\in {\mathbb{R}}$ whenever $\alpha+\beta\in R$. Let ${\mathfrak{h}}_0$ be the real form of ${\mathfrak{h}}$ defined as the real span of the $H_{\alpha}$'s ($\alpha\in R$). Then \begin{equation}\label{u} {\mathfrak{u}}:=\sum_{\alpha\in R^+} \mathbb{R} (X_{\alpha}-X_{-\alpha})\oplus \sum_{\alpha\in R^+} \mathbb{R} i(X_{\alpha}+X_{-\alpha})\oplus i{\mathfrak{h}}_0 \end{equation} is a compact real form of ${\mathfrak{g}}$.
Set ${\mathfrak{b}}:={\mathfrak{h}}_0\oplus {\mathfrak{n}}_+$ with ${\mathfrak{n}}_+:=\sum_{\alpha\in R^+}^\oplus{\mathfrak{g}}_{\alpha}$. Then, by the Iwasawa decomposition for $\mathfrak{g}$, the triple $({\mathfrak{g}},{\mathfrak{u}},{\mathfrak{b}})$ is a Manin triple with respect to the imaginary part of the Killing form $\kappa$ (cf.\ \cite[\S4]{LW}). Hence ${\mathfrak{u}}$, ${\mathfrak{b}}$ and ${\mathfrak{g}}$ naturally become Lie bialgebras. The dual Lie algebra ${\mathfrak{u}}^*$ is isomorphic to ${\mathfrak{b}}$, and ${\mathfrak{g}}$ may be identified with the classical double of ${\mathfrak{u}}$. The cocommutator $\delta: {\mathfrak{g}}\rightarrow {\mathfrak{g}}\wedge {\mathfrak{g}}$ of the Lie bialgebra ${\mathfrak{g}}$ is coboundary, i.e., \[\delta(X)=\bigl(\hbox{ad}_X\otimes 1+1\otimes \hbox{ad}_X\bigr)r,\] with the classical $r$-matrix $r\in {\mathfrak{g}}\wedge {\mathfrak{g}}$ given by the following well-known skew solution of the Modified Classical Yang-Baxter Equation, \begin{equation}\label{r} r=i\sum_{\alpha\in R^+}d_{\alpha}\bigl(X_{-\alpha}\otimes X_{\alpha}-X_{\alpha}\otimes X_{-\alpha}\bigr)\in {\mathfrak{u}}\wedge {\mathfrak{u}}. \end{equation} The cocommutator on ${\mathfrak{u}}$ coincides with the restriction of ${\delta}$ to ${\mathfrak{u}}$.
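For example, for ${\mathfrak{g}}=\mathfrak{s}\mathfrak{l}(2,{\mathbb{C}})$ one has $R^+=\lbrace\alpha\rbrace$ and $d_{\alpha}=1$, so that \eqref{u} and \eqref{r} reduce to \[ {\mathfrak{u}}={\mathbb{R}}(X_{\alpha}-X_{-\alpha})\oplus{\mathbb{R}}i(X_{\alpha}+X_{-\alpha})\oplus{\mathbb{R}}iH_{\alpha}\simeq\mathfrak{s}\mathfrak{u}(2),\qquad r=i\bigl(X_{-\alpha}\otimes X_{\alpha}-X_{\alpha}\otimes X_{-\alpha}\bigr), \] and $({\mathfrak{g}},{\mathfrak{u}},{\mathfrak{b}})$ is the Manin triple underlying the Bruhat-Poisson bracket on $SU(2)$ discussed below.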
The corresponding Sklyanin bracket on the connected subgroup $U\subset G$ with Lie algebra ${\mathfrak{u}}$ has \begin{equation} \Omega_g = l_g^{\otimes 2} r - r_g^{\otimes 2} r \end{equation} as its associated Poisson tensor. Here $l_g$ resp.\ $r_g$ denote infinitesimal left resp.\ right translation. This particular Sklyanin bracket is often called Bruhat-Poisson, since its symplectic foliation is closely related to the Bruhat decomposition of $G$. Let us explain this in more detail.
Let $B$ be the connected subgroup of $G$ with Lie algebra ${\mathfrak{b}}$, let $T\subset U$ be the maximal torus in $U$ with Lie algebra $i{\mathfrak{h}}_0$, and set $B_+:=TB$. The analytic Weyl group $N_U(T)/T$, where $N_U(T)$ is the normalizer of $T$ in $U$, is isomorphic to $W$. More explicitly, the isomorphism sends the simple reflection $s_i$ to $\exp\bigl(\frac{\pi}{2}(X_{\alpha_i}-X_{-\alpha_i})\bigr)/T$. The double $B_+$-cosets in $G$ are parametrized by the elements of $W$. Hence one has the Bruhat decomposition \[ G=\coprod_{w\in W}B_+wB_+. \] By \cite[Prop.\ 1.2.3.6]{W} the Bruhat decomposition has the following refinement: \begin{equation}\label{refined} G=\coprod_{m\in N_U(T)}BmB. \end{equation} For $m\in N_U(T)$ we set $\Sigma_{m}:=U\cap BmB$. Then $\Sigma_{m}\not=\emptyset$ for all $m\in N_U(T)$, and we have the disjoint union \begin{equation}\label{Udecom} U=\coprod_{m\in N_U(T)}\Sigma_{m}. \end{equation} Now recall that multiplication $U\times B\rightarrow G$ is a global diffeomorphism by the Iwasawa decomposition of $G$. So for any $b\in B$ and $u\in U$ there exists a unique $u^{b}\in U$ such that $bu\in u^bB$. As is easily verified, the map \begin{equation}\label{rightdressing} U\times B\rightarrow U,\quad (u,b)\mapsto u^{b^{-1}} \end{equation} is a right action of $B$ on $U$, and the corresponding decomposition of $U$ into $B$-orbits coincides with the decomposition \eqref{Udecom}. On the other hand, if we regard $B$ as the Poisson-Lie group dual to $U$, the action \eqref{rightdressing} becomes the right dressing action of the dual group on $U$ (cf.\ \cite[Thm.\ 3.14]{LW}). Since the orbits in $U$ under the right dressing action are exactly the symplectic leaves of the Poisson bracket on $U$ (cf.\ \cite[Thm.\ 13]{Se}, \cite[Thm.\ 3.15]{LW}), it follows that \eqref{Udecom} coincides with the decomposition of $U$ into symplectic leaves (cf.\ \cite[Theorem 2.2]{S}).
Next, we recall some results by Lu \& Weinstein \cite{LW} concerning certain quotient Poisson brackets on generalized flag manifolds. Let $S\subset \Delta$ be a set of simple roots, and let $P_S$ be the corresponding standard parabolic subgroup of $G$. The Lie algebra ${\mathfrak{p}}_S$ of $P_S$ is given by \begin{equation}\label{pS} {\mathfrak{p}}_S:={\mathfrak{h}}\oplus\bigoplus_{\alpha\in\Gamma_S} {\mathfrak{g}}_{\alpha} \end{equation} with $\Gamma_S:=R^+\cup\lbrace
\alpha\in R \, | \, \alpha\in\hbox{span}(S)\rbrace$. Let ${\mathfrak{l}}_S$ be the Levi factor of ${\mathfrak{p}}_S$, \begin{equation}\label{lS} {\mathfrak{l}}_S:={\mathfrak{h}}\oplus\bigoplus_{\alpha\in\Gamma_S\cap (-\Gamma_S)} {\mathfrak{g}}_{\alpha}, \end{equation} and set ${\mathfrak{k}}_S:={\mathfrak{p}}_S\cap {\mathfrak{u}}={\mathfrak{l}}_S\cap {\mathfrak{u}}$. Then ${\mathfrak{k}}_S$ is a compact real form of ${\mathfrak{l}}_S$. Set $K_S:=U\cap P_S\subset U$, then $K_S\subset U$ is a Poisson-Lie subgroup of $U$ with Lie algebra ${\mathfrak{k}}_S$ (cf.\ \cite[Thm.\ 4.7]{LW}). Hence there is a unique Poisson bracket on $U/K_S$ such that the natural projection $\pi:U\rightarrow U/K_S$ is a Poisson map. This bracket is also called Bruhat-Poisson. It is covariant in the sense that the natural left action $U\times U/K_S\to U/K_S$ is a Poisson map.
Let $W_S$ be the subgroup of $W$ generated by the simple reflections in $S$. One has $P_S = B_+W_S B_+$ (cf.\ \cite[Thm.\ 1.2.1.1]{W}). From this one easily deduces that the double cosets $B_+xP_S$ ($x\in G$) are parametrized by the elements of $W/W_S$. Hence one has the Schubert cell decomposition of $U/K_S\simeq G/P_S$: \begin{equation}\label{Schubertcell} U/K_S=\coprod_{{\overline{w}}\in W/W_S}X_{\overline{w}}, \quad X_{\overline{w}}:=(U\cap B_+wP_S)/K_S\simeq B_+w/P_S, \end{equation} where ${\overline{w}}\in W/W_S$ is the right $W_S$-coset in $W$ which contains $w$.
Now, by \cite[Prop.\ 4.5]{LW}, the subgroup $K_S$ is invariant under the action of $B$, which implies that the $B$-action descends to $U/K_S$. The orbits in $U/K_S$ coincide exactly with the Schubert cells. By \cite[Thm.\ 4.6]{LW} the symplectic leaves of the Poisson manifold $U/K_S$ are exactly the orbits under the $B$-action. We conclude (cf.\ \cite[Thm.\ 4.7]{LW}): \begin{Thm}\label{Schubertleaf} The decomposition into symplectic leaves of the flag manifold $U/K_S$ endowed with the Bruhat-Poisson bracket coincides with its decomposition into Schubert cells. \end{Thm} Consider now the set of minimal coset representatives \begin{equation}\label{mincoset}
W^S:=\lbrace w\in W \, | \, l(ws_{\alpha})>l(w) \quad \forall \alpha\in S\rbrace. \end{equation} $W^S$ is a complete set of coset representatives for $W/W_S$. Any element $w\in W$ can be uniquely written as a product $w=w_1w_2$ with $w_1\in W^S$, $w_2\in W_S$. The elements of $W^S$ are minimal in the sense that \begin{equation}\label{minimal} l(w_1w_2)=l(w_1)+l(w_2),\quad (w_1\in W^S, w_2\in W_S), \end{equation} where $l(w):=\#(R^+\cap wR^-)$ is the length function on $W$.
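For example, for $U=SU(2)$ and $S=\emptyset$ one has $W=\lbrace e,s_1\rbrace$, $W_S=\lbrace e\rbrace$ and $W^S=W$, and the decomposition \eqref{Schubertcell} of $U/K_S=SU(2)/T\simeq{\mathbb{C}}{\mathbb{P}}^1$ consists of a point $X_{\overline{e}}$ and a two-dimensional cell $X_{\overline{s_1}}$; by Theorem \ref{Schubertleaf} these are precisely the symplectic leaves of the Bruhat-Poisson bracket on $SU(2)/T$.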
Observe that $\pi$ maps the symplectic leaf $\Sigma_{m}\subset U$ onto the symplectic leaf $X_{\overline{w(m)}}\subset U/K_S$, where $w(m):=m/T\in W$. We write $\pi_{m}: \Sigma_{m}\rightarrow X_{\overline{w(m)}}$ for the surjective Poisson map obtained by restricting $\pi$ to the symplectic leaf $\Sigma_{m}$. The minimality condition \eqref{mincoset} translates to the following property of the map $\pi_{m}$. \begin{Prop}\label{semiclassical} Let $m\in N_U(T)$. Then $\pi_{m}:\Sigma_{m}\rightarrow X_{\overline{w(m)}}$ is a symplectic automorphism if and only if $w(m)\in W^S$. \end{Prop} \begin{proof} For $w\in W$ set \[ {\mathfrak{n}}_w:=\bigoplus_{\alpha\in R^+\cap wR^-}{\mathfrak{g}}_{\alpha}, \quad N_w:=\hbox{exp}({\mathfrak{n}}_w). \] Observe that the complex dimension of $N_w$ is equal to $l(w)$. Write $\text{pr}_U: G\simeq U\times B\rightarrow U$ for the canonical projection. It is well known that for $m\in N_U(T)$ and for $w\in W^S$ with representative $m_w\in N_U(T)$, the maps \begin{equation} \begin{split} \phi_{m}: N_{w(m)}&\rightarrow \Sigma_{m},\quad n\mapsto \text{pr}_U(nm),\nonumber\\ \psi_{w}:N_w&\rightarrow X_{\overline{w}},\quad n\mapsto \pi\bigl(\text{pr}_{U}(nm_w)\bigr)\nonumber \end{split} \end{equation} are surjective diffeomorphisms (see for example \cite[Proposition 1.1 \& 5.1]{BGG}). The map $\psi_w$ is independent of the choice of representative $m_w$ for $w$. It follows now from \eqref{minimal} by a dimension count that $\pi_m$ can only be a diffeomorphism if $w(m)\in W^S$. On the other hand, if $m\in N_U(T)$ such that $w(m)\in W^S$, then $\pi_{m}=\psi_{w(m)}\circ{\phi}_m^{-1}$ and hence $\pi_m$ is a diffeomorphism. \end{proof} Soibel'man \cite{S} gave a description of the symplectic leaves $\Sigma_{m}$ ($m\in N_U(T)$) as a product of two-dimensional leaves which turns out to have a nice generalization to the quantized setting (cf.\ section 5). For $i\in [1,r]$, let $\gamma_i: SU(2)\hookrightarrow U$ be the embedding corresponding to the $i$th node of the Dynkin diagram of $U$. After a possible renormalization of the Bruhat-Poisson structure on $SU(2)$, $\gamma_i$ becomes an embedding of Poisson-Lie groups. Recall that the two-dimensional leaves of $SU(2)$ are given by \begin{gather*} S_t:=\left\{ \begin{pmatrix} \alpha & \beta \\ -{\overline{\beta}} & {\overline{\alpha}}
\end{pmatrix} \in \left.\hbox{SU}(2) \, \right|
\, \hbox{arg}(\beta)=\hbox{arg}(t) \right\} \quad (t\in {\mathbb{T}}) \end{gather*} where ${\mathbb{T}}\subset {\mathbb{C}}$ is the unit circle in the complex plane. The restriction of the embedding $\gamma_i$ to $S_1\subset SU(2)$ is a symplectic automorphism from $S_1$ onto the symplectic leaf $\Sigma_{m_i}\subset U$, where $m_i=\hbox{exp}\bigl(\frac{\pi}{2}(X_{\alpha_i}-X_{-\alpha_i})\bigr)$. Recall that $m_i\in N_U(T)$ is a representative of the simple reflection $s_i\in W$.
For arbitrary $m\in N_U(T)$ let $w(m)=s_{i_1}s_{i_2}\cdots s_{i_l}$ be a reduced expression for $w(m):=m/T\in W$, and let $t_m\in T$ be the unique element such that $m=m_{i_1}m_{i_2}\cdots m_{i_l}t_m$. Note that $t_m$ depends on the choice of reduced expression for $w(m)$. The map \[(g_1,\ldots,g_l)\mapsto \gamma_{i_1}(g_1)\gamma_{i_2}(g_2)\cdots \gamma_{i_l}(g_l)t_m \] defines a symplectic automorphism from $S_1^{\times l}$ onto the symplectic leaf $\Sigma_{m}\subset U$ (cf.\ \cite[\S2]{S}, \cite{St}). Note that the image of the map is independent of the choice of reduced expression for $w(m)$, although the map itself is not.
Combined with Proposition \ref{semiclassical} we now obtain the following description of the symplectic leaves of the generalized flag manifold $U/K_S$. \begin{Prop}\label{finestructure} Let $m\in N_U(T)$ and set $w:=m/T\in W$. Let $w_1\in W^S$, $w_2\in W_S$ be such that $w=w_1w_2$ and choose reduced expressions $w_1=s_{i_1}\cdots s_{i_p}$ and $w_2=s_{i_{p+1}}\cdots s_{i_l}$. Then the map \[ (g_1,g_2,\ldots,g_l)\mapsto \gamma_{i_1}(g_1)\gamma_{i_2}(g_2)\cdots \gamma_{i_l}(g_l)/K_S \] is a surjective Poisson map from $S_1^{\times l}$ onto the Schubert cell $X_{\overline{w}}$. It factorizes through the projection $\text{pr}\colon S_1^{\times l} = S_1^{\times p} \times S_1^{\times (l-p)} \rightarrow S_1^{\times p}$. The quotient map from $S_1^{\times p}$ onto $X_{\overline{w}}$ is a symplectic automorphism. In particular, we have \[ X_{\overline{w}}= \bigl(\Sigma_{m_{i_1}}\Sigma_{m_{i_2}}\cdots \Sigma_{m_{i_p}}\bigr)/K_S. \] \end{Prop} See Lu \cite{Lu} for more details in the case of the full flag manifold ($K_S=T$).
\section{Preliminaries on the quantized function algebra ${\mathbb{C}}_q\lbrack U \rbrack$} In this section we introduce some notations which we will need throughout the remainder of this paper. First, we recall the definition of the quantized universal enveloping algebra associated with the simple complex Lie algebra ${\mathfrak{g}}$. We use the notations introduced in the previous section.
Set $d_i:=d_{\alpha_i}$ and $H_i:=H_{\alpha_i}$ for $i\in [1,r]$. Let $A=(a_{ij})$ be the Cartan matrix, i.e.\ $a_{ij}:=d_i^{-1}(\alpha_i,\alpha_j)$. Note that $H_i\in {\mathfrak{h}}$ is the unique element such that $\alpha_j(H_i)=a_{ij}$ for all $j$. The weight lattice is given by \begin{equation}
P=\lbrace \lambda\in {\mathfrak{h}}^* \, | \, \lambda(H_i)=(\lambda,\alpha_i\spcheck)\in {\mathbb{Z}} \,\,\,\,\,\forall i\rbrace. \end{equation} The fundamental weights $\varpi_{\alpha_i}=\varpi_i$ ($i\in [1,r]$) are characterized by $\varpi_i(H_j)=(\varpi_i,\alpha_j\spcheck)=\delta_{ij}$ for all $j$. The set of dominant weights $P_+$ resp.\ regular dominant weights $P_{++}$ is equal to ${\mathbb{K}}\hbox{-span}\lbrace \varpi_{\alpha}\rbrace_{\alpha\in\Delta}$ with ${\mathbb{K}}={\mathbb{Z}}_+$ resp.\ ${\mathbb{N}}$.
We fix $q\in (0,1)$. The quantized universal enveloping algebra $U_q({\mathfrak{g}})$ associated with the simple Lie algebra ${\mathfrak{g}}$ is the unital associative algebra over ${\mathbb{C}}$ with generators $K_i^{\pm 1}$, $X_i^{\pm}$ ($i=[1,r]$) and relations \begin{equation} \begin{split} &K_iK_j=K_jK_i,\quad K_iK_i^{-1}=K_i^{-1}K_i=1\\ &K_iX_j^{\pm}K_i^{-1}=q_i^{\pm\alpha_j(H_i)}X_j^{\pm}\\ &X_i^+X_j^--X_j^-X_i^+=\delta_{ij}\frac{K_i-K_i^{-1}}{q_i-q_i^{-1}}\\ &\sum_{s=0}^{1-a_{ij}}(-1)^s\binom{1-a_{ij}}{s}_{q_i}(X_i^{\pm})^{1-a_{ij}-s} X_j^{\pm}(X_i^{\pm})^s=0 \quad (i\not=j) \end{split} \end{equation} where $q_i:=q^{d_i}$, \[\lbrack a \rbrack_q:=\frac{q^a-q^{-a}}{q-q^{-1}} \quad (a\in {\mathbb{N}}), \quad [0]_q:=1,\] $\lbrack a \rbrack_q!:=\lbrack a \rbrack_q\lbrack a-1\rbrack_q\ldots \lbrack 1 \rbrack_q$, and \[\binom{a}{n}_q:=\frac{\lbrack a \rbrack_q!} {\lbrack a-n\rbrack_q!\lbrack n \rbrack_q!}.\] A Hopf algebra structure on $U_q(\mathfrak{g})$ is uniquely determined by the formulas \begin{equation}\label{commrelationsU} \begin{split} &\Delta(X_i^+)=X_i^+\otimes 1 + K_i\otimes X_i^+, \quad \Delta(X_i^-)=X_i^-\otimes K_i^{-1}+1\otimes X_i^-,\\ &\Delta(K_i^{\pm 1})=K_i^{\pm 1}\otimes K_i^{\pm 1},\\ &S(K_i^{\pm 1})=K_i^{\mp 1},\quad S(X_i^+)=-K_i^{-1}X_i^+,\quad S(X_i^-)=-X_i^-K_i,\\ &\varepsilon(K_i^{\pm 1})=1,\quad \varepsilon(X_i^{\pm})=0. \end{split} \end{equation} In fact, $U_q({\mathfrak{g}})$ may be regarded as a quantization of the co-Poisson-Hopf algebra structure (cf.\ \cite[Ch.\ 6]{CP}) on $U(\mathfrak{g})$ induced by the Lie bialgebra $({\mathfrak{g}}, -i \delta)$, $\delta$ being the cocommutator of ${\mathfrak{g}}$ associated with the $r$-matrix \eqref{r}. $U_q(\mathfrak{g})$ becomes a Hopf $*$-algebra with $*$-structure on the generators given by \begin{equation}\label{star} (K_i^{\pm 1})^*=K_i^{\pm 1},\quad (X_i^+)^*=q_i^{-1}X_i^-K_i,\quad (X_i^-)^*=q_iK_i^{-1}X_i^+. \end{equation} In the classical limit $q\to 1$, the $*$-structure becomes an involutive, conjugate-linear anti-automorphism of ${\mathfrak{g}}$ with $-1$ eigenspace equal to the compact real form ${\mathfrak{u}}$ defined in \eqref{u}.
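For example, for ${\mathfrak{g}}=\mathfrak{s}\mathfrak{l}(2,{\mathbb{C}})$ we have $r=1$, $d_1=1$ and $a_{11}=2$, so, writing $K=K_1$ and $X^{\pm}=X_1^{\pm}$, the algebra $U_q(\mathfrak{s}\mathfrak{l}(2,{\mathbb{C}}))$ is generated by $K^{\pm 1}$, $X^{\pm}$ subject to \[ KK^{-1}=K^{-1}K=1,\qquad KX^{\pm}K^{-1}=q^{\pm 2}X^{\pm},\qquad X^+X^--X^-X^+=\frac{K-K^{-1}}{q-q^{-1}}, \] the quantum Serre relations being vacuous in rank one. The $*$-structure \eqref{star} then reads $K^*=K$, $(X^+)^*=q^{-1}X^-K$ and $(X^-)^*=qK^{-1}X^+$.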
Let $U^{\pm}=U_q({\mathfrak{n}}_{\pm})$ be the subalgebra of $U_q(\mathfrak{g})$ generated by $X_i^{\pm}$ ($i=[1,r]$) and write $U^0:=U_q(\mathfrak{h})$ for the commutative subalgebra generated by $K_i^{\pm 1}$ ($i=[1,r]$). Let us write $Q$ (resp.\ $Q^+$) for the integral (resp.\ positive integral) span of the positive roots. We have the direct sum decomposition \[U^{\pm}=\bigoplus_{\alpha\in Q^+}U^{\pm}_{\pm \alpha},\] where
$U^{\pm}_{\pm \alpha}:=\lbrace \phi\in U^{\pm} \, | \, K_i\phi K_i^{-1}=q_i^{\pm\alpha(H_i)}\phi\rbrace$. The Poincar{\'e}-Birkhoff-Witt Theorem for $U_q(\mathfrak{g})$ states that multiplication defines an isomorphism of vector spaces \[U^-\otimes U^0\otimes U^+ \to U_q(\mathfrak{g}).\] In particular, $U_q(\mathfrak{g})$ is spanned by elements of the form $b_{-\eta}K^{\alpha}a_{\zeta}$ where $b_{-\eta}\in U_{-\eta}^-, a_{\zeta}\in U_{\zeta}^+$ ($\eta,\zeta\in Q^+$) and $\alpha\in Q$. Here we used the notation $K^{\alpha}=K_1^{k_1}\cdots K_r^{k_r}$ if $\alpha=\sum_ik_i\alpha_i$.
For a left $U_q(\mathfrak{g})$-module $V$, we say that $0\not=v\in V$ has weight $\mu\in {\mathfrak{h}}^*$ if $K_i\cdot v=q_i^{\mu(H_i)}v=q^{(\mu,\alpha_i)}v$ for all $i$. We write $V_{\mu}$ for the corresponding weight space. Recall that a $P$-weighted finite-dimensional irreducible representation of $U_q(\mathfrak{g})$ is a highest weight module $V=V(\lambda)$ with highest weight $\lambda\in P_+$. If $v_{\lambda}\in V(\lambda)$ is a highest weight vector, we have $V(\lambda)=\sum_{\alpha\in Q^{+}}^{\oplus}U_{-\alpha}^-v_{\lambda}$ by the PBW Theorem, hence the set of weights $P(\lambda)$ of $V(\lambda)$ is a subset of the weight lattice $P$ satisfying $\mu\leq\lambda$ for all $\mu\in P(\lambda)$. Here $\leq$ is the dominance order on $P$ (i.e.\ $\mu\leq\nu$ if $\nu-\mu\in Q^+$ and $\mu<\nu$ if $\mu\leq\nu$ and $\mu\not=\nu$).
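For example, for ${\mathfrak{g}}=\mathfrak{s}\mathfrak{l}(2,{\mathbb{C}})$ and $\lambda=n\varpi_1$ ($n\in{\mathbb{Z}}_+$), the module $V(n\varpi_1)$ is $(n+1)$-dimensional with one-dimensional weight spaces of weights $n\varpi_1,(n-2)\varpi_1,\ldots,-n\varpi_1$, exactly as in the classical case, so that indeed $\mu\leq n\varpi_1$ for every $\mu\in P(n\varpi_1)$.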
We define irreducible finite dimensional $P$-weighted right $U_q({\mathfrak{g}})$-modules with respect to the opposite Borel subgroup. So the irreducible finite dimensional right $U_q({\mathfrak{g}})$-module $V(\lambda)$ with highest weight $\lambda\in P^+$ has the weight space decomposition $V(\lambda)=\sum_{\alpha\in Q^{+}}^{\oplus} v_{\lambda}U_{\alpha}^+$, where $v_{\lambda}\in V(\lambda)$ is the highest weight vector of $V(\lambda)$. The weights of the right $U_q({\mathfrak{g}})$-module $V(\lambda)$ coincide with the weights of the left $U_q({\mathfrak{g}})$-module $V(\lambda)$ and the dimensions of the corresponding weight spaces are the same.
The quantized algebra ${\mathbb{C}}_q\lbrack G \rbrack$ of functions on the connected simply connected complex Lie group $G$ with Lie algebra ${\mathfrak{g}}$ is the subspace in the linear dual $U_q(\mathfrak{g})^*$ spanned by the matrix coefficients of the finite-dimensional irreducible representations $V(\lambda)$ $(\lambda\in P_+)$. The Hopf $*$-algebra structure on $U_q(\mathfrak{g})$ induces a Hopf $*$-algebra structure on ${\mathbb{C}}_q\lbrack G \rbrack\subset U_q({\mathfrak{g}})^*$ by the formulas \begin{equation}\label{dualstructure} \begin{split} &(\phi\psi)(X)=(\phi\otimes\psi)\Delta(X),\quad 1(X)=\varepsilon(X)\\ &\Delta(\phi)(X\otimes Y)=\phi(XY),\quad \varepsilon(\phi)=\phi(1)\\ &S(\phi)(X)=\phi(S(X)),\quad (\phi^*)(X)=\overline{\phi(S(X)^*)}, \end{split} \end{equation} where $\phi,\psi\in {\mathbb{C}}_q\lbrack G \rbrack\subset U_q(\mathfrak{g})^*$ and $X,Y\in U_q(\mathfrak{g})$. The algebra ${\mathbb{C}}_q\lbrack G \rbrack$ can be regarded as a quantization of the Poisson algebra of polynomial functions on the algebraic Poisson-Lie group $G$, where the Poisson structure on $G$ is given by the Sklyanin bracket associated with the classical $r$-matrix $-ir$ (cf.\ \eqref{r}). Since the $*$-structure \eqref{dualstructure} on ${\mathbb{C}}_q\lbrack G \rbrack$ is associated with the compact real form $U$ of $G$ in the classical limit, we will write ${\mathbb{C}}_q\lbrack U \rbrack$ for ${\mathbb{C}}_q\lbrack G \rbrack$ with this particular choice of $*$-structure. Note that ${\mathbb{C}}_q\lbrack U \rbrack$ is a $U_q(\mathfrak{g})$-bimodule with the left respectively right action given by \begin{equation}\label{lraction} (X.\phi)(Y):=\phi(YX),\quad (\phi.X)(Y):=\phi(XY) \end{equation} where $\phi\in {\mathbb{C}}_q\lbrack U \rbrack$ and $X,Y\in U_q(\mathfrak{g})$. The finite-dimensional irreducible $U_q(\mathfrak{g})$-module $V(\lambda)$ of highest weight $\lambda\in P_+$ is known to be unitarizable (say with inner product $(.,.)$). So we can choose an orthonormal basis consisting of weight vectors \begin{equation}\label{basis}
\lbrace v_{\mu}^{(i)} \, | \, \mu\in P(\lambda), i=[1,\hbox{dim}(V(\lambda)_{\mu})]\rbrace, \end{equation} where $v_{\mu}^{(i)}\in V(\lambda)_{\mu}$ (we omit the index $i$ if $\hbox{dim}(V(\lambda)_{\mu})=1$). Set \begin{equation}\label{standardform} C_{\mu,i;\nu,j}^{\lambda}(X):=(X.v_{\nu}^{(j)},v_{\mu}^{(i)}),\quad X\in U_q(\mathfrak{g}), \end{equation} for $\mu,\nu\in P(\lambda)$ and $1\leq i\leq\hbox{dim}(V(\lambda)_{\mu})$, $1\leq j\leq\hbox{dim}(V(\lambda)_{\nu})$. If $\hbox{dim}(V(\lambda)_{\mu})=1$ respectively $\hbox{dim}(V(\lambda)_{\nu})=1$ we omit the dependence on $i$ respectively $j$ in \eqref{standardform}. It is sometimes also convenient to use the notation \[ C_{v;w}^{\lambda}(X):=(X.w,v), \quad v,w\in V(\lambda), \,\, X\in U_q(\mathfrak{g}). \] Note that when $\lambda$ runs through $P_+$ and $\mu,i,\nu$ and $j$ run through the above-mentioned sets the matrix elements \eqref{standardform} form a linear basis of ${\mathbb{C}}_q\lbrack G \rbrack$. Furthermore, we have the formulas \begin{equation}\label{relmatrixel} \begin{split} \Delta(C_{\mu,i;\nu,j}^{\lambda})&=\sum_{\sigma,s}C_{\mu,i;\sigma,s}^{\lambda} \otimes C_{\sigma,s;\nu,j}^{\lambda},\\ \varepsilon(C_{\mu,i;\nu,j}^{\lambda})&=\delta_{\mu,\nu}\delta_{i,j},\quad (C_{\mu,i;\nu,j}^{\lambda})^*=S(C_{\nu,j;\mu,i}^{\lambda}). \end{split} \end{equation} (Sums for which the summation sets are not specified are taken over the ``obvious'' choice of summation sets). Using the relations \eqref{relmatrixel} and the Hopf algebra axiom for the antipode $S$ we obtain \begin{equation}\label{unitarityproperty} \sum_{\sigma,s}(C_{\sigma,s;\mu,i}^{\lambda})^*C_{\sigma,s;\nu,j}^{\lambda}= \delta_{\mu,\nu}\delta_{i,j}. \end{equation} The elements $(C_{\mu,i;\nu,j}^{\lambda})^*$ are matrix coefficients of the contragredient representation $V(\lambda)^*\simeq V(-\sigma_0\lambda)$ (here $\sigma_0$ is the longest element in $W$). To be precise, let $\pi: U_q(\mathfrak{g}) \rightarrow \hbox{End}(V(\lambda))$ be the representation of highest weight $\lambda$, and let $(\cdot,\cdot )$ be an inner product with respect to which $\pi$ is unitarizable. Fix an orthonormal basis of weight vectors $\lbrace v_{\mu}^{(r)}\rbrace$. Let $(\pi^*,V(\lambda)^*)$ be the contragredient representation, i.e.\ $\pi^*(X)\phi=\phi\circ\pi(S(X))$ for $X\in U_q(\mathfrak{g})$ and $\phi\in V(\lambda)^*$. For $u\in V(\lambda)$ set $u^*:=(\cdot,u)\in V(\lambda)^*$. We define an inner product on $V(\lambda)^*$ by \[ ( u^*,v^*):= \bigl(\pi(K^{-2\rho})v,u\bigr),\quad u,v\in V(\lambda), \] where $\rho=1/2\sum_{\alpha\in R^+}\alpha\in {\mathfrak{h}}^*$. By using the fact that $S^2(u)= K^{-2\rho} u K^{2\rho}$ ($u\in U_q(\mathfrak{g})$) one easily deduces that $\pi^*$ is unitarizable with respect to the inner product $(\cdot,\cdot)$ on $V(\lambda)^*$ and that $\lbrace \phi_{-\mu}^{(i)}:=q^{(\mu,\rho)}(v_{\mu}^{(i)})^*\rbrace$ is an orthonormal basis of $V(\lambda)^*$ consisting of weight vectors (here $\phi_{-\mu}^{(i)}$ has weight $-\mu$). Defining the matrix coefficients $C_{-\mu,i;-\nu,j}^{-\sigma_0\lambda}$ of $(\pi^*,V(\lambda)^*)$ with respect to the orthonormal basis $\lbrace \phi_{-\mu}^{(i)}\rbrace$, we then have \begin{equation}\label{contragred} (C_{\mu,i;\nu,j}^{\lambda})^*=q^{(\mu-\nu,\rho)}C_{-\mu,i;-\nu,j}^{-\sigma_0\lambda} \end{equation} (cf.\ \cite[Prop.\ 3.3]{S}). 
A fundamental role in Soibel'man's theory of irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U \rbrack$ is played by a Poincar{\'e}-Birkhoff-Witt (PBW) type factorization of ${\mathbb{C}}_q\lbrack U \rbrack$. For $\lambda\in P_+$, set \begin{equation}\label{Blambda} B_{\lambda}:=\hbox{span}\lbrace C_{v;v_{\lambda}}^{\lambda}
\, | \, v\in V(\lambda)\rbrace. \end{equation} Note that $B_{\lambda}$ is a right $U_q(\mathfrak{g})$-submodule of ${\mathbb{C}}_q\lbrack U \rbrack$ isomorphic to $V(\lambda)$. Set \begin{equation} A^+:=\bigoplus_{\lambda\in P_+}B_{\lambda}, \quad A^{++}:=\bigoplus_{\lambda\in P_{++}} B_{\lambda}. \end{equation} The subalgebra and right $U_q({\mathfrak{g}})$-module $A^+$ is equal to the subalgebra of left $U^+$-invariant elements in ${\mathbb{C}}_q\lbrack U \rbrack$ (cf.\ \cite{J}). The existence of a PBW type factorization of ${\mathbb{C}}_q\lbrack U \rbrack$ now amounts to the following statement. \begin{Thm}\cite[Theorem 3.1]{S}\label{factorization} The multiplication map $m: (A^{++})^*\otimes A^{++} \rightarrow {\mathbb{C}}_q\lbrack U \rbrack$ is surjective. \end{Thm} A detailed proof can be found in \cite[Prop.\ 9.2.2]{J}. The proof is based on certain results concerning decompositions of tensor products of irreducible finite-dimensional $U_q({\mathfrak{g}})$-modules which can be traced back to Kostant in the classical case \cite[Theorem 5.1]{K}. The close connection between Theorem \ref{factorization} and the decomposition of tensor products of irreducible $U_q({\mathfrak{g}})$-modules becomes clear by observing that \begin{equation}\label{tensorrelation} (B_{\lambda})^*B_{\mu}\simeq V(\lambda)^* \otimes V(\mu) \end{equation} as right $U_q(\mathfrak{g})$-modules.
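As a simple illustration, for ${\mathbb{C}}_q\lbrack SU(2) \rbrack$ (in the notation $t_{ij}$ recalled at the end of this section) one has $B_{n\varpi_1}=\hbox{span}\lbrace t_{11}^{a}t_{21}^{b} \, | \, a+b=n\rbrace$, so that $A^+$ is the unital subalgebra generated by $t_{11}$ and $t_{21}$, and Theorem \ref{factorization} then says that ${\mathbb{C}}_q\lbrack SU(2) \rbrack$ is spanned by the products $\phi^*\psi$ with $\phi$ and $\psi$ monomials of positive degree in $t_{11}$ and $t_{21}$.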
Important for the study of $*$-representations of ${\mathbb{C}}_q\lbrack U \rbrack$ is some detailed information about the commutation relations between matrix elements in ${\mathbb{C}}_q\lbrack U \rbrack$. In view of Theorem \ref{factorization}, we are especially interested in commutation relations between the $C_{\mu,i;\lambda}^{\lambda}$ and $C_{\nu,j;\Lambda}^{\Lambda}$ resp.\ between the $C_{\mu,i;\lambda}^{\lambda}$ and $(C_{\nu,j;\Lambda}^{\Lambda})^*$, where $\lambda,\Lambda\in P_+$. To state these commutation relations we need to introduce certain vector subspaces of $\mathbb{C}_q[U]$. Let $\lambda,\Lambda\in P_+$ and $\mu\in P(\lambda)$, $\nu\in P(\Lambda)$, then we set \begin{equation}\label{N} \begin{split} N(\mu,\lambda;\nu,\Lambda)&:= \hbox{span}\lbrace C_{v;v_{\lambda}}^{\lambda}C_{w;v_{\Lambda}}^{\Lambda}
\, | \, (v,w)\in sN\rbrace,\\ N^{opp}(\mu,\lambda;\nu,\Lambda)&:= \hbox{span}\lbrace C_{w;v_{\Lambda}}^{\Lambda}C_{v;v_{\lambda}}^{\lambda}
\, | \, (v,w)\in sN\rbrace \end{split} \end{equation} where $sN:=sN(\mu,\lambda;\nu,\Lambda)$ is the set of pairs $(v,w)\in V(\lambda)_{\mu'}\times V(\Lambda)_{\nu'}$ with $\mu'>\mu$, $\nu'<\nu$ and $\mu'+\nu'=\mu+\nu$. Furthermore set \begin{equation}\label{O} \begin{split} O(\mu,\lambda;\nu,\Lambda)&:= \hbox{span}\lbrace (C_{v;v_{\lambda}}^{\lambda})^*C_{w;v_{\Lambda}}^{\Lambda}
\, | \, (v,w)\in sO \rbrace,\\ O^{opp}(\mu,\lambda;\nu,\Lambda)&:= \hbox{span}\lbrace C_{w;v_{\Lambda}}^{\Lambda}(C_{v;v_{\lambda}}^{\lambda})^*
\, | \, (v,w)\in sO \rbrace \end{split} \end{equation} where $sO:=sO(\mu,\lambda;\nu,\Lambda)$ is the set of pairs $(v,w)\in V(\lambda)_{\mu'}\times V(\Lambda)_{\nu'}$ with $\mu'<\mu$, $\nu'<\nu$ and $\mu-\mu'=\nu-\nu'$. If $sN$ (resp.\ $sO$) is empty, then let $N=N^{opp}=\lbrace 0 \rbrace$ (resp. $O=O^{opp}=\lbrace 0 \rbrace$). We now have the following proposition. \begin{Prop}\label{commutation} Let $\lambda,\Lambda\in P_+$ and $v\in V(\lambda)_{\mu}$, $w\in V(\Lambda)_{\nu}$.\\ {\bf (i)} The matrix elements $C_{v;v_{\lambda}}^{\lambda}$ and $C_{w;v_{\Lambda}}^{\Lambda}$ satisfy the commutation relation \[ C_{v;v_{\lambda}}^{\lambda}C_{w;v_{\Lambda}}^{\Lambda}= q^{(\lambda,\Lambda)-(\mu,\nu)}C_{w;v_{\Lambda}}^{\Lambda} C_{v;v_{\lambda}}^{\lambda}\, \hbox{ mod }\, N(\mu,\lambda;\nu,\Lambda). \] Moreover, we have $N=N^{opp}$.\\ {\bf (ii)} The matrix elements $(C_{v;v_{\lambda}}^{\lambda})^*$ and $C_{w;v_{\Lambda}}^{\Lambda}$ satisfy the commutation relation \[(C_{v;v_{\lambda}}^{\lambda})^*C_{w;v_{\Lambda}}^{\Lambda} =q^{(\mu,\nu)-(\lambda,\Lambda)}C_{w;v_{\Lambda}}^{\Lambda} (C_{v;v_{\lambda}}^{\lambda})^*\, \hbox{ mod }\, O(\mu,\lambda;\nu,\Lambda).\] Moreover, we have $O=O^{opp}$. \end{Prop} Soibel'man \cite{S} derived commutation relations using the universal $R$-matrix whereas Joseph \cite[Section 9.1]{J} used the Poincar{\'e}-Birkhoff-Witt Theorem for $U_q(\mathfrak{g})$ and the left respectively right action \eqref{lraction} of $U_q(\mathfrak{g})$ on ${\mathbb{C}}_q\lbrack U \rbrack$. Although the commutation relations formulated here are slightly sharper, the proof can be derived in a similar manner and will therefore be omitted.
As a corollary of Proposition \ref{commutation}{\bf (i)} we have \begin{Cor}\label{other} Let $\lambda,\Lambda\in P_+$ and $v\in V(\lambda)_{\mu}$, $w\in V(\Lambda)_{\nu}$. Then \begin{equation} C_{v;v_{\lambda}}^{\lambda}C_{w;v_{\Lambda}}^{\Lambda}= q^{(\mu,\nu)-(\lambda,\Lambda)}C_{w;v_{\Lambda}}^{\Lambda}C_{v;v_{\lambda}}^{\lambda} \quad \hbox{ mod } \,\, N(\nu,\Lambda;\mu,\lambda). \end{equation} \end{Cor} Note that Proposition \ref{commutation}{\bf (i)} and Corollary \ref{other} give two different ways to rewrite $C_{v;v_{\lambda}}^{\lambda}C_{w;v_{\Lambda}}^{\Lambda}$ as elements of the vector space \[ W_{\lambda,\Lambda}:=\hbox{span}\lbrace C_{w';v_{\Lambda}}^{\Lambda}C_{v';v_{\lambda}}^{\lambda}
\, | \, v'\in V(\lambda),\,\, w'\in V(\Lambda) \rbrace. \] We will need both ``inequivalent'' commutation relations (Proposition \ref{commutation}{\bf (i)} and Corollary \ref{other}) in later sections. It follows in particular that, when $v'\in V(\lambda)$ and $w'\in V(\Lambda)$ run through a basis, the elements $C_{w';v_{\Lambda}}^{\Lambda}C_{v';v_{\lambda}}^{\lambda}$ are (in general) linearly dependent. This also follows from the following two observations. On the one hand, $W_{\lambda,\Lambda}\simeq V(\lambda+\Lambda)$ as right $U_q(\mathfrak{g})$-modules. On the other hand, $V(\lambda+\Lambda)$ occurs with multiplicity one in $V(\lambda)\otimes V(\Lambda)$, whereas in general $V(\lambda)\otimes V(\Lambda)$ has other irreducible components too.
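For example, for ${\mathfrak{g}}=\mathfrak{s}\mathfrak{l}(2,{\mathbb{C}})$ and $\lambda=\Lambda=\varpi_1$ (in the notation for ${\mathbb{C}}_q\lbrack SU(2)\rbrack$ recalled at the end of this section), the four products $t_{i1}t_{j1}$ span the three-dimensional space $W_{\varpi_1,\varpi_1}\simeq V(2\varpi_1)$, the only relation being $t_{11}t_{21}=qt_{21}t_{11}$, in accordance with the decomposition $V(\varpi_1)\otimes V(\varpi_1)\simeq V(2\varpi_1)\oplus V(0)$.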
By contrast, the commutation relation given in Proposition \ref{commutation}{\bf (ii)} is unique in the sense that, when $v\in V(\lambda)$ and $w\in V(\Lambda)$ run through a basis, the $C_{w;v_{\Lambda}}^{\Lambda}(C_{v;v_{\lambda}}^{\lambda})^*$ are linearly independent (cf.\ \eqref{tensorrelation}).
We end this section by recalling the special case $\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,{\mathbb{C}})$. Set \begin{equation} \begin{split} &t_{11}:=C_{\varpi_1;\varpi_1}^{\varpi_1},\quad
t_{12}:=C_{\varpi_1;-\varpi_1}^{\varpi_1},\\ &t_{21}:=C_{-\varpi_1;\varpi_1}^{\varpi_1},\quad
t_{22}:=C_{-\varpi_1;-\varpi_1}^{\varpi_1}. \end{split} \end{equation} Then it is well known that the $t_{ij}$'s generate the algebra ${\mathbb{C}}_q\lbrack SU(2) \rbrack$. The commutation relations \begin{equation} \begin{split} &t_{k1}t_{k2}=qt_{k2}t_{k1},\quad t_{1k}t_{2k}=qt_{2k}t_{1k} \quad (k=1,2),\\ &t_{12}t_{21}=t_{21}t_{12},\quad t_{11}t_{22}-t_{22}t_{11}= (q-q^{-1})t_{12}t_{21},\\ &t_{11}t_{22}-qt_{12}t_{21}=1 \end{split} \end{equation} characterize the algebra structure of $\mathbb{C}_q[SU(2)]$ in terms of the generators $t_{ij}$. The $*$-structure is uniquely determined by the formulas $t_{11}^*=t_{22}$, $t_{12}^*=-qt_{21}$.
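As an illustration of these relations, observe that \eqref{unitarityproperty} for $\lambda=\varpi_1$ and $\mu=\nu=\varpi_1$ reads \[ t_{11}^*t_{11}+t_{21}^*t_{21}=t_{22}t_{11}-q^{-1}t_{12}t_{21}=1, \] which indeed follows from $t_{11}t_{22}-qt_{12}t_{21}=1$ and $t_{11}t_{22}-t_{22}t_{11}=(q-q^{-1})t_{12}t_{21}$. Similarly, the relation $t_{11}t_{21}=qt_{21}t_{11}$ is the special case $\lambda=\Lambda=\varpi_1$, $v=v_{\varpi_1}$, $w=v_{-\varpi_1}$ of Proposition \ref{commutation}{\bf (i)}, the space $N(\varpi_1,\varpi_1;-\varpi_1,\varpi_1)$ being zero in this case since $\varpi_1$ is the highest weight of $V(\varpi_1)$.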
\section{Quantized function algebras on generalized flag manifolds} \label{section:quan} Let $S$ be any subset of the simple roots $\Delta$. We will sometimes identify $S$ with the index set
$\lbrace i \, | \, \alpha_i\in S\rbrace$. Let $\mathfrak{p}_S\subset \mathfrak{g}$ be the corresponding standard parabolic subalgebra, given explicitly by \eqref{pS}. We define the quantized universal enveloping algebra $U_q({\mathfrak{l}}_S)$ associated with the Levi factor ${\mathfrak{l}}_S$ of ${\mathfrak{p}}_S$ as the subalgebra of $U_q({\mathfrak{g}})$ generated by $K_i^{\pm 1}$ ($i\in [1,r]$) and $X_i^{\pm}$ ($i\in S$). Note that $U_q({\mathfrak{l}}_S)$ is a Hopf $*$-subalgebra of $U_q({\mathfrak{g}})$.
For later use in this section we briefly discuss the finite-dimensional representation theory of $U_q({\mathfrak{l}}_S)$. Recall that ${\mathfrak{l}}_S$ is a reductive Lie algebra with centre \begin{equation} Z({\mathfrak{l}}_S)=\bigcap_{i\in S}\hbox{Ker}(\alpha_i)\subset {\mathfrak{h}}. \end{equation} Moreover, we have direct sum decompositions \begin{equation}\label{decompCartan} {\mathfrak{h}}=Z({\mathfrak{l}}_S)\oplus {\mathfrak{h}}_S,\quad {\mathfrak{l}}_S=Z({\mathfrak{l}}_S)\oplus {\mathfrak{l}}_S^0, \end{equation} where ${\mathfrak{h}}_S=\hbox{span}\lbrace H_i \rbrace_{i\in S}$ and ${\mathfrak{l}}_S^0$ is the semisimple part of ${\mathfrak{l}}_S$. The semisimple part ${\mathfrak{l}}_S^0$ is explicitly given by \begin{equation} {\mathfrak{l}}_S^0:={\mathfrak{h}}_S\oplus\bigoplus_{\alpha\in \Gamma_S\cap (-\Gamma_S)} {\mathfrak{g}}_{\alpha}. \end{equation} We define the quantized universal enveloping algebra $U_q({\mathfrak{l}}_S^0)$ associated with the semisimple part ${\mathfrak{l}}_S^0$ of ${\mathfrak{l}}_S$ as the subalgebra of $U_q({\mathfrak{g}})$ generated by $K_i^{\pm 1}$ and $X_i^{\pm}$ for all $i\in S$. Observe that $U_q({\mathfrak{l}}_S^0)$ is a Hopf $*$-subalgebra of $U_q({\mathfrak{g}})$. \begin{Prop} Any finite-dimensional $U_q({\mathfrak{l}}_S)$-module $V$ which is completely reducible as $U_q({\mathfrak{h}})$-module is completely reducible as $U_q({\mathfrak{l}}_S)$-module. \end{Prop} \begin{proof} Let $V$ be a finite-dimensional left $U_q({\mathfrak{l}}_S)$-module which is completely reducible as $U_q({\mathfrak{h}})$-module. Then the linear subspace
\[V^+:=\lbrace v\in V \, | \, X_i^+v=0 \quad \forall i\in S \rbrace\] is $U_q({\mathfrak{h}})$-stable and splits as a direct sum of weight spaces. Let $\{ v_i\}$ be a linear basis of $V^+$ consisting of weight vectors, and set $V_i:=U_q({\mathfrak{l}}_S^0)v_i$. Since $U_q({\mathfrak{l}}_S^0)$ is the quantized universal enveloping algebra associated with a semisimple Lie algebra, it follows that $V=\sum_i^{\oplus}V_i$ is a decomposition of $V$ into irreducible $U_q({\mathfrak{l}}_S^0)$-modules. On the other hand, the $V_i$ are $U_q({\mathfrak{l}}_S)$-stable since the vectors $v_i$ are weight vectors. Hence $V=\sum_i^{\oplus}V_i$ is a decomposition of $V$ into irreducible $U_q({\mathfrak{l}}_S)$-modules. \end{proof} There are obvious notions of weight vectors and weights for $U_q({\mathfrak{l}}_S)$-modules. With a suitably extended interpretation of the notion of highest weight, the irreducible finite-dimensional $U_q({\mathfrak{l}}_S)$-modules may be characterized in terms of highest weights. We shall only be interested in irreducible $U_q({\mathfrak{l}}_S)$-modules with weights in the lattice $P$. For instance, the restriction of an irreducible $P$-weighted $U_q(\mathfrak{g})$-module to $U_q(\mathfrak{l}_S)$ decomposes into such irreducible $U_q(\mathfrak{l}_S)$-modules.
Branching rules for the restriction of finite-dimensional representations of $U_q(\mathfrak{g})$ to $U_q(\mathfrak{l}_S)$ are determined by the behaviour of the corresponding characters. Since the characters for $P$-weighted irreducible finite-dimensional representations of $U_q(\mathfrak{g})$ and $U_q(\mathfrak{l}_S)$ are the same as for the corresponding representations of $\mathfrak{g}$ and $\mathfrak{l}_S$, one easily derives the following proposition. \begin{Prop}\label{zelfde} Let $\lambda\in P_+$. The multiplicity of any $P$-weighted irreducible $U_q(\mathfrak{l}_S)$-module in the irreducible decomposition of the restriction of the $U_q(\mathfrak{g})$-module $V(\lambda)$ to $U_q(\mathfrak{l}_S)$ is the same as in the classical case. \end{Prop} Next, we define the quantized algebra of functions on $U/K_S$. The mapping $\iota_S^*\colon U_q({\mathfrak{g}})^*\twoheadrightarrow U_q({\mathfrak{l}}_S)^*$ dual to the Hopf $*$-embedding $\iota_S: U_q({\mathfrak{l}}_S)\hookrightarrow U_q({\mathfrak{g}})$ is surjective, and we set \[ {\mathbb{C}}_q\lbrack L_S\rbrack:=\iota_S^*({\mathbb{C}}_q\lbrack G \rbrack)=
\lbrace\phi\circ \iota_S \, | \, \phi\in {\mathbb{C}}_q\lbrack G \rbrack\rbrace. \] The formulas \eqref{dualstructure} uniquely determine a Hopf $*$-algebra structure on ${\mathbb{C}}_q\lbrack L_S\rbrack$, and $\iota_S^*$ then becomes a Hopf $*$-algebra morphism. We write ${\mathbb{C}}_q[K_S]$ for ${\mathbb{C}}_q[L_S]$ with this particular choice of $*$-structure. Assume now that $S\not=\Delta$. Define a $*$-subalgebra ${\mathbb{C}}_q\lbrack U/K_S \rbrack \subset {\mathbb{C}}_q\lbrack U \rbrack$ by \begin{equation} \begin{split} {\mathbb{C}}_q\lbrack U/K_S \rbrack&:=\lbrace \phi\in {\mathbb{C}}_q\lbrack U \rbrack
\, | \, (\hbox{id}\otimes \iota_S^*)\Delta(\phi)=\phi\otimes 1\rbrace\\
&=\lbrace \phi\in {\mathbb{C}}_q\lbrack U \rbrack \, | \, X.\phi=\varepsilon(X)\phi, \quad \forall\, X\in U_q({\mathfrak{l}}_S)\rbrace \end{split} \end{equation} The algebra ${\mathbb{C}}_q\lbrack U/K_S \rbrack$ is a left ${\mathbb{C}}_q\lbrack U \rbrack$-subcomodule of ${\mathbb{C}}_q\lbrack U \rbrack$. We call it the quantized algebra of functions on the generalized flag manifold $U/K_S$.
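For example, for $S=\emptyset$ one has $U_q({\mathfrak{l}}_{\emptyset})=U_q({\mathfrak{h}})$ and $K_{\emptyset}=T$, and ${\mathbb{C}}_q\lbrack U/T\rbrack$ consists of the elements $\phi\in{\mathbb{C}}_q\lbrack U\rbrack$ satisfying $K_i.\phi=\phi$ for all $i$, i.e.\ of the elements of weight zero with respect to the left $U_q({\mathfrak{h}})$-action \eqref{lraction}; this description is used in the proof of Theorem \ref{yes} below.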
In a similar way, one can define the quantized function algebra ${\mathbb{C}}_q\lbrack K^0_S \rbrack$ corresponding to the semisimple part $K_S^0$ of $K_S$ as the image of the dual of the natural embedding $U_q({\mathfrak{l}}_S^0)\hookrightarrow U_q({\mathfrak{g}})$. Its Hopf $*$-algebra structure is again given by the formulas \eqref{dualstructure}. The subalgebra ${\mathbb{C}}_q\lbrack U/K_S^0\rbrack$ then consists by definition of all right ${\mathbb{C}}_q\lbrack K_S^0\rbrack$-invariant elements in ${\mathbb{C}}_q\lbrack U\rbrack$. Note that ${\mathbb{C}}_q\lbrack U/K_S^0\rbrack\subset {\mathbb{C}}_q\lbrack U\rbrack$ is a left $U_q({\mathfrak{h}})$-submodule and that ${\mathbb{C}}_q\lbrack U/K_S\rbrack$ coincides with the subalgebra of $U_q({\mathfrak{h}})$-invariant elements in ${\mathbb{C}}_q\lbrack U/K_S^0\rbrack$.
We now turn to PBW type factorizations of the algebra ${\mathbb{C}}_q\lbrack U/K_S\rbrack$. Write $P(S)$, $P_+(S)$, resp.\ $P_{++}(S)$ for ${\mathbb{K}}\hbox{-span}\lbrace\varpi_{\alpha}\rbrace_{\alpha\in S}$ with ${\mathbb{K}}={\mathbb{Z}}$, ${\mathbb{Z}}_+$ resp.\ ${\mathbb{N}}$. Set $S^c:=\Delta\setminus S$. The quantized algebra $A_S^{hol}$ of holomorphic polynomials on $U/K_S$ is defined by \begin{equation}\label{AShol} A_S^{hol}:=\bigoplus_{\lambda\in P_+(S^c)}B_{\lambda}\subset {\mathbb{C}}_q\lbrack U\rbrack, \end{equation} where $B_{\lambda}$ is given by \eqref{Blambda} (cf.\ \cite{LR1}, \cite{LR2}, \cite{S2}, \cite{Jur} and \cite{Kor}). Note that $A_S^{hol}$ is a right $U_q({\mathfrak{g}})$-comodule subalgebra of ${\mathbb{C}}_q\lbrack U \rbrack$, \eqref{AShol} being the (multiplicity free) decomposition of $A_S^{hol}$ into irreducible $U_q({\mathfrak{g}})$-modules. The right $U_q({\mathfrak{g}})$-module algebra $(A_S^{hol})^*\subset {\mathbb{C}}_q\lbrack U\rbrack$ is called the quantized algebra of antiholomorphic polynomials on $U/K_S$. \begin{Lem} The linear subspace \[ A_S^0:=m\bigl((A_S^{hol})^*\otimes A_S^{hol}\bigr)\subset {\mathbb{C}}_q\lbrack U\rbrack, \] where $m$ is the multiplication map of ${\mathbb{C}}_q\lbrack U \rbrack$, is a right $U_q({\mathfrak{g}})$-submodule $*$-subalgebra of ${\mathbb{C}}_q\lbrack U\rbrack$. \end{Lem} \begin{proof} Proposition \ref{commutation}{\bf (ii)} implies that $A_S^0$ is a subalgebra of ${\mathbb{C}}_q\lbrack U \rbrack$. The other assertions are immediate. \end{proof} The subalgebra $A_S^0$ may be considered as a quantum analogue of the algebra of complex-valued polynomial functions on the real manifold $U/K_S^0$. \begin{rem}\label{Litremark} In the classical setting ($q=1$), the algebra $A_S^0$ ($\#S^c=1$) can be interpreted as algebra of functions on the product of an affine spherical $G$-variety with its dual. The $G$-module structure on $A_S^0$ is then related to the doubled $G$-action (see \cite{Pan1}, \cite{Pan} for the terminology). These (and related) $G$-varieties have been studied in several papers, see for example \cite{Pan}, \cite{Pan1} and \cite{Lit1}. \end{rem} The algebra $A_S^0\subset {\mathbb{C}}_q\lbrack U\rbrack$ is stable under the left $U_q({\mathfrak{h}})$-action, so we can speak of $U_q({\mathfrak{h}})$-weighted elements in $A_S^0$. Let $A_S$ be the left $U_q({\mathfrak{h}})$-invariant elements of $A_S^0$. Then $A_S\subset {\mathbb{C}}_q\lbrack U\rbrack$ is a right $U_q({\mathfrak{g}})$-module $*$-subalgebra of ${\mathbb{C}}_q\lbrack U\rbrack$. We now have the following lemma. \begin{Lem}\label{definitionAS} We have $A_S^0\subset {\mathbb{C}}_q\lbrack U/K_S^0\rbrack$, so in particular $A_S\subset {\mathbb{C}}_q\lbrack U/K_S\rbrack$. Furthermore, \begin{equation}\label{matrixvorm} A_S=\hbox{span}\lbrace (C_{v;v_{\lambda}}^{\lambda})^*C_{w;v_{\lambda}}^{\lambda}
\, | \, \lambda\in P_+(S^c),\,\, v,w\in V(\lambda) \rbrace. \end{equation} \end{Lem} \begin{proof} Choose $\lambda\in P_+(S^c)$ and $i\in S$. Then we have $X_i^+\cdot v_{\lambda}=0$ and $K_i\cdot v_{\lambda}=v_{\lambda}$. It follows that $\mathbb{C} v_\lambda \subset V(\lambda)$ is a one-dimensional $U_{q_i}(\mathfrak{s}\mathfrak{l}(2;{\mathbb{C}}))$-submodule, where we consider the $U_{q_i}(\mathfrak{s}\mathfrak{l}(2;{\mathbb{C}}))$ action on $V(\lambda)$ via the embedding $\phi_i\colon U_{q_i}(\mathfrak{s}\mathfrak{l}(2;{\mathbb{C}})) \hookrightarrow U_q({\mathfrak{g}})$. It follows that $X_i^{-}\cdot v_{\lambda}=0$. This readily implies that $A_S^0\subset {\mathbb{C}}_q\lbrack U/K_S^0 \rbrack$. The remaining assertions are immediate. \end{proof} \begin{Def} We call $A_S\subset {\mathbb{C}}_q\lbrack U/K_S \rbrack$ the factorized $*$-subalgebra associated with $U/K_S$. \end{Def} In view of Theorem \ref{factorization}, there is reason to expect that the factorized algebra $A_S$ is equal to ${\mathbb{C}}_q\lbrack U/K_S\rbrack$ for any generalized flag manifold $U/K_S$. Although we cannot prove this in general, we do have a proof (cf.\ Theorem \ref{yes}) for a certain subclass of generalized flag manifolds that we shall define and classify in the following proposition. For the proof in these cases we use the so-called Parthasarathy-Ranga Rao-Varadarajan (PRV) conjecture, which was proved independently by Kumar \cite{Ku} and Mathieu \cite{Ma}. The PRV conjecture gives information about which irreducible constituents occur in tensor products of irreducible finite-dimensional ${\mathfrak{g}}$-modules. It seems likely that a proof for arbitrary generalized flag manifold $U/K_S$ would require further detailed information about irreducible decompositions of tensor products of finite-dimensional representations of $\mathfrak{g}$.
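Here $(U,K)$ is called a Gel'fand pair if the decomposition of $L^2(U/K)$ into irreducible $U$-representations is multiplicity free; the irreducible representations occurring in this decomposition are called spherical, and for the pairs considered below their highest weights form a semigroup which is freely generated by the fundamental spherical weights $\mu_1,\ldots,\mu_l$ (cf.\ \cite[Tabelle 1]{Kr}).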
Recall the notations introduced in section 2. The following proposition was observed by Koornwinder \cite{Koo}. \begin{Prop}\label{Koornwinderpairs} \textup{(} \cite{Koo}\textup{)} Let $U$ be a connected, simply connected compact Lie group with Lie algebra ${\mathfrak{u}}$, and let ${\mathfrak{p}}\subset {\mathfrak{g}}$ be a standard maximal parabolic subalgebra. Let $K\subset U$ be the connected subgroup with Lie algebra ${\mathfrak{k}}:={\mathfrak{p}}\cap {\mathfrak{u}}$. Then $(U,K)$ is a Gel'fand pair if and only if one of the following three conditions is satisfied:\\ {\bf (i)} $(U,K)$ is an irreducible compact Hermitian symmetric pair;\\ {\bf (ii)} $(U,K)\simeq (SO(2l+1),U(l))$, \quad $(l\geq 2)$;\\ {\bf (iii)} $(U,K)\simeq (Sp(l),U(1)\times Sp(l-1))$, \quad $(l\geq 2)$. \end{Prop} \begin{proof} For a list of the irreducible compact Hermitian symmetric pairs see \cite[Ch. X, Table V]{H}. The proposition follows from this and the classification of compact Gel'fand pairs $(U,K)$ with $U$ simple (cf.\ \cite[Tabelle 1]{Kr}). \end{proof} Let $(U,K)$ be a pair from the list {\bf (i)}--{\bf (iii)} in Proposition \ref{Koornwinderpairs}, and let $({\mathfrak{u}}, {\mathfrak{k}})$ be the associated pair of Lie algebras. Then ${\mathfrak{k}}={\mathfrak{k}}_S$ for some subset $S\subset \Delta$ with $\#S^c=1$. We call the simple root $\alpha\in S^c$ the Gel'fand node associated with $(U,K)$. \begin{Prop}\label{Koor} Let $(U,K)$ be a pair from the list {\bf (i)}--{\bf (iii)} in Proposition \ref{Koornwinderpairs}, and let $\alpha\in \Delta$ be the associated Gel'fand node with corresponding fundamental weight $\varpi:=\varpi_{\alpha}$. Let $\lbrace \mu_1,\ldots,\mu_l\rbrace$ be the fundamental spherical weights of $(U,K)$. Then every fundamental spherical representation $V(\mu_i)$ occurs in the decomposition of $V(\varpi)^*\otimes V(\varpi)$. \end{Prop} \begin{proof} For the proof we use the PRV conjecture, which states the following. Let $\lambda,\mu\in P_+$ and $w\in W$. Let $[\lambda+w\mu]$ be the unique element in $P_+$ which lies in the $W$-orbit of $\lambda+w\mu$. Then $V([\lambda+w\mu])$ occurs with multiplicity at least one in $V(\lambda)\otimes V(\mu)$. The procedure is now as follows. For a pair $(U,K)$ from the list {\bf (i)}--{\bf (iii)} of Proposition \ref{Koornwinderpairs} we write down the fundamental spherical weights $\lbrace \mu_i\rbrace_{i=1}^l$ in terms of the fundamental weights $\lbrace \varpi_i\rbrace_{i=1}^r$ (cf.\ \cite[Tabelle 1]{Kr}, or in the case of Hermitian symmetric spaces one can also write them down from the corresponding Satake diagrams \cite{Su}). Then we look for Weyl group elements $w_i\in W$ such that \[ [\varpi-w_i\varpi]=\mu_i,\quad (i=[1,l]) \] (here we used that $V(\varpi)^*\simeq V(-\sigma_0\varpi)$).
As an example, let us follow the procedure for the compact Hermitian symmetric pair $(U,K)=(SO(2l),U(l))$ ($l\geq 2$). We use the standard realization of the root system $R$ of type $D_l$ in the $l$-dimensional vector space $V=\sum_{i=1}^l{\mathbb{R}}\varepsilon_i$, with basis given by $\alpha_i=\varepsilon_i-\varepsilon_{i+1}$ ($i=[1,l-1]$) and $\alpha_l=\varepsilon_{l-1}+\varepsilon_l$. The fundamental weights are given by \begin{equation} \begin{split} \varpi_i&=\varepsilon_1+\varepsilon_2+\ldots +\varepsilon_i,\quad (i<l-1),\nonumber\\ \varpi_{l-1}&=(\varepsilon_1+\varepsilon_2+\ldots + \varepsilon_{l-1}-\varepsilon_l)/2,\nonumber\\ \varpi_l&=(\varepsilon_1+\varepsilon_2+\ldots + \varepsilon_{l-1}+\varepsilon_l)/2.\nonumber \end{split} \end{equation} We set ${\varpi}=\varpi_l$ (i.e. $S^c=\lbrace \alpha_l\rbrace$). Let $\sigma_i$ be the linear map defined by $\varepsilon_j\mapsto -\varepsilon_j$ ($j=i,i+1$) and $\varepsilon_j\mapsto\varepsilon_j$ otherwise. Then $\sigma_i\in W$ ($i=[1,l-1]$). If $l=2l'+1$, then \begin{equation}\label{fundweightodd} \begin{split} \varpi-\sigma_1\sigma_3\ldots\sigma_{2i-1}\varpi &=\varpi_{2i},\quad (i=[1,l'-1]),\\ \varpi-\sigma_1\sigma_3\ldots\sigma_{2l'-1}\varpi &=\varpi_{l-1}+\varpi_l. \end{split} \end{equation} If $l=2l'$ then we have \begin{equation}\label{fundweighteven} \begin{split} \varpi-\sigma_1\sigma_3\ldots\sigma_{2i-1}\varpi &=\varpi_{2i},\qquad (i=[1,l'-1]),\\ \varpi-\sigma_1\sigma_3\ldots\sigma_{2l'-1}\varpi &=2\varpi_l. \end{split} \end{equation} By comparison with \cite[Tabelle 1]{Kr} we see from \eqref{fundweightodd} (resp.\ \eqref{fundweighteven}) that all the fundamental spherical weights of the pair $(U,K)=(SO(2l),U(l))$ have been obtained. The other cases are checked in a similar manner. \end{proof} The question naturally arises whether the fundamental spherical representations occur with multiplicity one in $V(\varpi)^*\otimes V(\varpi)$ and whether they exhaust (together with the trivial representation) the irreducible components of $V(\varpi)^*\otimes V(\varpi)$. For the complex Grassmannians $(U,K)=\bigl(SU(p+l),S(U(p)\times U(l))\bigr)$ this is indeed the case (this can be easily proved using the Pieri formula for Schur functions \cite[Chapter I, (5.17)]{M}, see \cite{DS} for more details). For the general case we do not know the answer, but it is true that the irreducible decomposition of $V(\varpi)^*\otimes V(\varpi)$ is multiplicity free and that all irreducible components are spherical representations. This is an easy consequence of the proof of the following main theorem of this section. \begin{Thm}\label{yes} The factorized $*$-subalgebra $A_S$ is equal to ${\mathbb{C}}_q\lbrack U/K_S \rbrack$ if\\ {\bf (i)} $S=\emptyset$, i.e. $U/K_S=U/T$ is the full flag manifold;\\ {\bf (ii)} $\#S^c=1$ and the simple root $\alpha\in S^c$ is a Gel'fand node. \end{Thm} \begin{proof} To prove {\bf (i)} we look at the simultaneous eigenspace decomposition of ${\mathbb{C}}_q\lbrack U \rbrack$ with respect to the left $U_q({\mathfrak{h}})$-action on ${\mathbb{C}}_q\lbrack U \rbrack$. The simultaneous eigenspace corresponding to the character $\varepsilon$ on $U_q({\mathfrak{h}})$ is exactly ${\mathbb{C}}_q\lbrack U/T\rbrack$. Using Soibel'man's factorization of ${\mathbb{C}}_q\lbrack U \rbrack$ (cf.\ Theorem \ref{factorization}) and Lemma \ref{definitionAS}, it is then easily checked that ${\mathbb{C}}_q\lbrack U/T\rbrack=A_{\emptyset}$.
To prove {\bf (ii)} we note that \[ \bigoplus_{i=1}^l V(\mu_i)\hookrightarrow V(\varpi)^*\otimes V(\varpi)\simeq (B_{\varpi})^*B_{\varpi}\subset A_S \] as right $U_q({\mathfrak{g}})$-modules by Proposition \ref{Koor} (here we use the notations as introduced in Proposition \ref{Koor}). Now ${\mathbb{C}}_q\lbrack U \rbrack$ is an integral domain (cf.\ \cite[Lemma 9.1.9 {\bf (i)}]{J}), hence $v_{\lambda}v_{\mu}\in A_S$ is a highest weight vector of highest weight $\lambda+\mu$ if $v_{\lambda},v_{\mu}\in A_S$ are highest weight vectors of highest weights $\lambda$ and $\mu$, respectively. It follows that \[\bigoplus_{n_i\in {\mathbb{Z}}_{+}} V(n_1\mu_1+\ldots +n_l\mu_l)\hookrightarrow A_S\] as right $U_q({\mathfrak{g}})$-modules. On the other hand we have the decomposition \[{\mathbb{C}}_q\lbrack U/K_S \rbrack\simeq \bigoplus_{n_i\in {\mathbb{Z}}_{+}} V(n_1\mu_1+\ldots +n_l\mu_l) \] of ${\mathbb{C}}_q\lbrack U/K_S \rbrack$ into irreducible right $U_q({\mathfrak{g}})$-modules by Proposition \ref{zelfde}. This implies $A_S={\mathbb{C}}_q\lbrack U/K_S \rbrack$. \end{proof} In the remainder of the paper we study the irreducible $*$-representations of the $*$-algebras $A_S$ and ${\mathbb{C}}_q[U/K_S]$. In the next section we first consider the restriction of the irreducible $*$-representations of ${\mathbb{C}}_q[U]$ to the $*$-algebras $A_S$ and ${\mathbb{C}}_q[U/K_S]$.
\section{Restriction of irreducible $*$-representations to ${\mathbb{C}}_q\lbrack U/K \rbrack$} Let us first recall some results from Soibel'man \cite{S} concerning the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U\rbrack$. Let $\lbrace e_i\rbrace_{i\in {\mathbb{Z}}_+}$ be the standard orthonormal basis of $l_2({\mathbb{Z}}_+)$. Write ${\mathcal{B}}(l_2({\mathbb{Z}}_+))$ for the algebra of bounded linear operators on $l_2({\mathbb{Z}}_+)$. Then the formulas \begin{equation}\label{action2} \begin{split} &\pi_q(t_{11})e_j=\sqrt{(1-q^{2j})}e_{j-1},\quad \pi_q(t_{12})e_j=-q^{j+1}e_j,\\ &\pi_q(t_{21})e_j=q^je_j,\quad \pi_q(t_{22})e_j=\sqrt{(1-q^{2(j+1)})}e_{j+1}\\ \end{split} \end{equation} (here $\pi_q(t_{11})e_0=0$) uniquely determine an irreducible $*$-representation \[\pi_q: {\mathbb{C}}_q\lbrack SU(2) \rbrack\rightarrow {\mathcal{B}}(l_2({\mathbb{Z}}_+)).\] Now the dual of the injective Hopf $*$-algebra morphism $\phi_i\colon U_{q_i}(\mathfrak{s}\mathfrak{l}(2;{\mathbb{C}}))\hookrightarrow U_q(\mathfrak{g})$ corresponding to the $i$th node of the Dynkin diagram ($i\in [1,r]$) is a surjective Hopf $*$-algebra morphism $\phi_i^*\colon \mathbb{C}_q\lbrack U \rbrack\twoheadrightarrow \mathbb{C}_{q_i}\lbrack SU(2) \rbrack$. Hence we obtain irreducible $*$-representations $\pi_i:=\pi_{q_i}\circ\phi_i^*:{\mathbb{C}}_q\lbrack U \rbrack\rightarrow {\mathcal{B}}(l_2({\mathbb{Z}}_+))$.
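As a consistency check of the formulas \eqref{action2} (stated here under the assumption that the quantum determinant relation of ${\mathbb{C}}_q\lbrack SU(2) \rbrack$ is normalized as $t_{11}t_{22}-q\,t_{12}t_{21}=1$), one verifies directly that for every $j\in {\mathbb{Z}}_+$
\[
\pi_q(t_{11})\pi_q(t_{22})e_j-q\,\pi_q(t_{12})\pi_q(t_{21})e_j=(1-q^{2(j+1)})e_j+q^{2(j+1)}e_j=e_j ,
\]
so the operators \eqref{action2} indeed represent this relation by the identity operator.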
On the other hand, there is a family of one-dimensional $*$-representations $\tau_t$ of ${\mathbb{C}}_q\lbrack U \rbrack$ parametrized by elements $t$ of the maximal torus $T\simeq \mathbb{T}^r$ ($\mathbb{T}\subset \mathbb{C}$ denoting the unit circle in the complex plane). More explicitly, let $\iota_T:U_q(\mathfrak{h})\hookrightarrow U_q(\mathfrak{g})$ be the natural Hopf $*$-algebra embedding, and set ${\mathbb{C}}_q\lbrack T\rbrack:= \hbox{span}\lbrace \phi_{\mu}\rbrace_{\mu\in P}\subset U_q(\mathfrak{h})^*$, where $\phi_{\mu}(K^{\sigma}):=q^{(\mu,\sigma)}$ for $\sigma\in Q$. As in \eqref{dualstructure} we get a Hopf $*$-algebra structure on ${\mathbb{C}}_q\lbrack T\rbrack$. Then $\iota_T^*: {\mathbb{C}}_q\lbrack U \rbrack\rightarrow {\mathbb{C}}_q\lbrack T\rbrack$, $\iota_T^*(\phi):=\phi\circ\iota_T$ is a surjective Hopf $*$-algebra morphism. Any irreducible $*$-representation of ${\mathbb{C}}_q\lbrack T\rbrack$ is one-dimensional and can be written as $\tilde{\tau}_t(\phi_{\mu}):=t^{\mu}$ for a unique $t\in T\simeq \mathbb{T}^r$. Here $t^{\mu}:=t_1^{m_1}\ldots t_r^{m_r}$ for $\mu=\sum_{i=1}^rm_i\varpi_i$. So we obtain a one-dimensional $*$-representation $\tau_t:=\tilde{\tau}_t\circ\iota_T^*$ of ${\mathbb{C}}_q\lbrack U \rbrack$, which is given explicitly on matrix elements $C_{\mu,i;\nu,j}^{\lambda}$ by the formula \begin{equation}\label{one-dimensional} \tau_t(C_{\mu,i;\nu,j}^{\lambda})=\delta_{\mu,\nu}\delta_{i,j}t^{\mu}. \end{equation} The following theorem completely describes the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U \rbrack$. \begin{Thm}[Soibel'man \cite{S}]\label{Soibelman} Let $\sigma\in W$, and fix a reduced expression $\sigma=s_{i_1}s_{i_2}\cdots s_{i_l}$. The $*$-representation \begin{equation} \pi_\sigma:=\pi_{i_1}\otimes\pi_{i_2}\otimes\cdots\otimes\pi_{i_l} \end{equation} does not depend on the choice of reduced expression \textup{(}up to equivalence\textup{)}. The set \[
\lbrace \pi_\sigma\otimes\tau_t \, | \, t\in T, \sigma\in W \rbrace \] is a complete set of mutually inequivalent irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U \rbrack$. \end{Thm} Here tensor products of $*$-representations are defined in the usual way by means of the coalgebra structure on ${\mathbb{C}}_q\lbrack U \rbrack$. The irreducible representation $\pi_e$ with respect to the unit element $e\in W$ is the one-dimensional $*$-representation associated with the counit $\epsilon$ on ${\mathbb{C}}_q[U]$. In Soibel'man's terminology, the representations $\pi_{\sigma}\otimes \tau_t$ are said to be associated with the Schubert cell $X_{\sigma}$ of $U/T$ (cf.\ section 2).
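For example, if $U=SU(2)$ (so $r=1$ and $W=\lbrace e,s_1\rbrace$), the theorem lists exactly the one-dimensional representations $\pi_e\otimes\tau_t\simeq \tau_t$ and the infinite-dimensional representations $\pi_{s_1}\otimes\tau_t$, with $t\in \mathbb{T}$, where $\pi_{s_1}=\pi_1$ is essentially the representation $\pi_q$ of \eqref{action2}; this recovers the well-known classification of the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack SU(2) \rbrack$.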
We also mention here an important property of the kernel of $\pi_{\sigma}$, which we will repeatedly need later on. Let $U_q({\mathfrak{b}}_+)$ be the subalgebra of $U_q({\mathfrak{g}})$ generated by the $K_i^{\pm 1}$ and the $X_i^{+}$ ($i\in [1,r]$). For any $\lambda\in P_+$, the $*$-representation $\pi_\sigma$ satisfies \begin{equation}\label{part1} \pi_\sigma(C_{v;v_{\lambda}}^{\lambda})=0\quad \bigl(v\notin U_q({\mathfrak{b}}_+)v_{\sigma\lambda}\bigr),\quad \pi_{\sigma}(C_{v_{\sigma\lambda}; v_{\lambda}}^{\lambda})\not=0 \end{equation} (cf.\ \cite[Theorem 5.7]{S}). Formula \eqref{part1} combined with \cite[Lemma 2.12]{BGG} shows that the classical limit of the kernel of $\pi_\sigma$ formally tends to the ideal of functions vanishing on $X_\sigma$.
Fix now a subset $S\subsetneq \Delta$. We freely use the notations introduced earlier. Our next goal is to describe how the $\ast$-representations $\pi_\sigma$ decompose under restriction to the subalgebra ${\mathbb{C}}_q\lbrack U/K_S \rbrack$. Consider the selfadjoint operators \begin{equation}\label{Loperators} L_{\sigma\lambda;\lambda}:= \pi_\sigma((C_{\sigma\lambda;\lambda}^{\lambda})^*C_{\sigma\lambda;\lambda}^{\lambda}) \end{equation} for $\lambda\in P_{+}(S^c)$. Let $\sigma=s_{i_1}\cdots s_{i_l}$ be a reduced expression for $\sigma$, and set $\pi_\sigma=\pi_{i_1}\otimes \pi_{i_2}\otimes\cdots\otimes \pi_{i_l}$. Then it follows from \cite[Proof of Prop.\ 5.2]{S} (see also \cite[Proof of Prop.\ 5.8]{S}) that \begin{equation}\label{facto} \pi_\sigma(C_{\sigma\lambda;\lambda}^{\lambda})= c\,\pi_{q_{i_1}}(t_{21})^{(\lambda,\gamma_1\spcheck)}\otimes \pi_{q_{i_2}}(t_{21})^{(\lambda,\gamma_2\spcheck)}\otimes\cdots\otimes \pi_{q_{i_l}}(t_{21})^{(\lambda,\gamma_l\spcheck)} \end{equation} where the scalar $c\in {\mathbb{T}}$ depends on the particular choices of bases for the irreducible representations $V(\mu)$ ($\mu\in P_+$), and with \begin{equation}\label{factohelp} \gamma_k:=s_{i_l}s_{i_{l-1}}\cdots s_{i_{k+1}}(\alpha_{i_k})\quad (1\leq k\leq l-1), \quad \gamma_l:=\alpha_{i_l}. \end{equation} The proof of \eqref{facto}, which was given in \cite{S} under the assumption that $\lambda\in P_{++}$, is in fact valid for all dominant weights $\lambda\in P_+$. It follows from \eqref{action2}, \eqref{Loperators} and \eqref{facto} that $l_2({\mathbb{Z}}_+)^{\otimes l(\sigma)}$ decomposes as an orthogonal direct sum of eigenspaces for $L_{\sigma\lambda;\lambda}$, \begin{equation} l_2({\mathbb{Z}}_+)^{\otimes l(\sigma)}= \bigoplus_{\gamma\in I(\lambda)}H_{\gamma}(\lambda), \end{equation} where $I(\lambda)\subset (0,1]$ denotes the set of eigenvalues of $L_{\sigma\lambda;\lambda}$, and $H_{\gamma}(\lambda)$ denotes the eigenspace of $L_{\sigma\lambda;\lambda}$ corresponding to the eigenvalue $\gamma\in I(\lambda)$ (we suppress the dependence on $\sigma$ if there is no confusion possible). Observe that $1\in I(\lambda)$ and that $L_{\sigma\lambda;\lambda}$ is injective.
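To illustrate \eqref{facto} in the simplest rank-one case, take $U=SU(2)$, $\sigma=s_1$ and $\lambda=m\varpi_1$ with $m>0$. Then $l(\sigma)=1$, $\gamma_1=\alpha_1$ and $(\lambda,\gamma_1\spcheck)=m$, so $\pi_\sigma(C_{\sigma\lambda;\lambda}^{\lambda})=c\,\pi_q(t_{21})^m$ and hence, by \eqref{action2},
\[
L_{\sigma\lambda;\lambda}e_j=q^{2jm}e_j,\quad j\in {\mathbb{Z}}_+ .
\]
Thus $I(\lambda)=\lbrace q^{2jm} \, | \, j\in {\mathbb{Z}}_+\rbrace$, each eigenspace $H_{q^{2jm}}(\lambda)$ is one-dimensional, the eigenvalue $1$ occurs precisely on ${\mathbb{C}}e_0$, and $L_{\sigma\lambda;\lambda}$ is compact; this is the simplest instance of Proposition \ref{1dimensional} below.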
Recall the definition of the set $W^S$ of minimal coset representatives (cf.\ \eqref{mincoset}). An alternative characterization of $W^S$ is given by \begin{equation}\label{mincoset2}
W^S=\lbrace \sigma\in W \, | \, \sigma(R_S^+)\subset R^+\rbrace, \end{equation} where $R_S^+:=R^+\cap \hbox{span}\lbrace S\rbrace$ (cf.\ \cite[Prop.\ 5.1 (iii)]{BGG}). Using this alternative description of $W^S$ we obtain the following properties of $L_{\sigma\lambda;\lambda}$ for $\lambda\in P_{++}(S^c)$. \begin{Prop}\label{1dimensional} Suppose that $\sigma\in W^S$ and $\lambda\in P_{++}(S^c)$. Then\\ {\bf (i)} $L_{\sigma\lambda;\lambda}$ is a compact operator;\\ {\bf (ii)} The eigenspace $H_1(\lambda)$ of $L_{\sigma\lambda;\lambda}$ corresponding to the eigenvalue $1$ is spanned by the vector $e_0^{\otimes l(\sigma)}$. \end{Prop} \begin{proof} Fix a $\lambda\in P_{++}(S^c)$, and let $\sigma=s_{i_1}s_{i_2}\cdots s_{i_l}$ be a reduced expression of a minimal coset representative $\sigma\in W^S$. It is well known that \begin{equation}\label{constructionbadroots} R^+\cap \sigma^{-1}(R^-)=\lbrace \gamma_k\rbrace_{k=1}^l, \end{equation} where the $\gamma_k$ are defined by \eqref{factohelp}. We have $\gamma_k\in R^+\setminus R_S^+$ by \eqref{mincoset2}. It follows that $(\lambda,\gamma_k\spcheck)>0$ for all $k$, since $\lambda\in P_{++}(S^c)$. By \eqref{action2} and \eqref{facto} it follows that $H_1(\lambda)=\hbox{span}\lbrace e_0^{\otimes l(\sigma)}\rbrace$ and that $H_{\gamma}(\lambda)$ is finite-dimensional for all $\gamma\in I(\lambda)$. Since the spectrum of $L_{\sigma\lambda;\lambda}$ (which is equal to $I(\lambda)\cup\lbrace 0 \rbrace$) does not have a limit point except $0$, we conclude that $L_{\sigma\lambda;\lambda}$ is a compact operator (cf. \cite[Theorem 12.30]{R}). \end{proof} Let us recall the following well known inequalities for weights of finite-dimensional irreducible representations of ${\mathfrak{g}}$ (or, equivalently, $U_q(\mathfrak{g})$). \begin{Prop}\label{needed} Let $\lambda\in P_+$ and $\mu,\nu\in P(\lambda)$. Then $(\lambda,\lambda)\geq (\mu,\nu)$, and equality holds if and only if $\mu=\nu\in W\lambda$. \end{Prop} For a proof of the proposition, see for instance \cite[Prop.\ 11.4]{Kac}. The proof is based on the following lemma, which we will also need later on. The lemma is a slightly weaker version of \cite[Lemma 11.2]{Kac}. \begin{Lem}\label{neededlemma} Let $\lambda\in P_+$ and $\mu\in P(\lambda)\setminus\lbrace \lambda \rbrace$, and let $m_i\in {\mathbb{Z}}_+$ \textup{(}$i\in [1,r]$\textup{)} be the expansion coefficients defined by $\lambda-\mu=\sum_im_i\alpha_i$. Then there is an $1\leq i\leq r$ with $m_i>0$ and $\lambda(H_i)\neq 0$. \end{Lem} We now have the following proposition, which can be regarded as a quantum analogue of the ``if'' part of Proposition \ref{semiclassical}. \begin{Prop}\label{irrep} Let $\sigma\in W^S$. Then $\pi_\sigma$ restricts to an irreducible $*$-represen\-ta\-tion of the factorized $*$-algebra $A_S$. In particular, $\pi_\sigma$ restricts to an irreducible $*$-representation of ${\mathbb{C}}_q\lbrack U/K_S \rbrack$. \end{Prop} \begin{proof} Let $\lambda\in P_{++}(S^c)$ and $\sigma\in W^S$. Suppose
$H\subset l_2({\mathbb{Z}}_+)^{\otimes l(\sigma)}$ is a non-zero closed subspace invariant under ${\pi_\sigma}_{\vert A_S}$. Set $\gamma:=\| {L_{\sigma\lambda;\lambda}}_{\vert H}\|$. Then $\gamma>0$, since $L_{\sigma\lambda;\lambda}$ is injective and $\gamma$ is an eigenvalue of ${L_{\sigma\lambda;\lambda}}_{\vert H}$ by Proposition \ref{1dimensional}{\bf (i)}. Let $H_{\gamma}$ be the corresponding eigenspace. We claim that \begin{equation}\label{claimbel} \pi_\sigma((C_{\mu,i;\lambda}^{\lambda})^*C_{\mu,i;\lambda}^{\lambda}) H_{\gamma}=0, \quad \mu\not=\sigma\lambda. \end{equation} Suppose for the moment that the claim is correct. Then \eqref{unitarityproperty} and \eqref{claimbel} imply $\gamma=1$, hence $H_{\gamma}= \hbox{span}\lbrace e_0^{\otimes l(\sigma)}\rbrace$ by Proposition \ref{1dimensional}{\bf (ii)}. So every non-zero closed invariant subspace contains the vector $e_0^{\otimes l(\sigma)}$. Since $H^{\bot}$ is also a closed invariant subspace, we must have $H^{\bot}=\lbrace 0 \rbrace$, i.e. $H=l_2({\mathbb{Z}}_+)^{\otimes l(\sigma)}$. It therefore remains to prove the claim \eqref{claimbel}. By \eqref{part1} we have $\pi_\sigma(C_{\mu,i;\lambda}^{\lambda})=0$ if $\mu<\sigma\lambda$. Hence \begin{equation} \begin{split} L_{\sigma\lambda;\lambda} \pi_\sigma((C_{\mu,i;\lambda}^{\lambda})^*C_{\mu,i;\lambda}^{\lambda})&= q^{(\lambda,\lambda)-(\mu,\sigma\lambda)} \pi_\sigma((C_{\mu,i;\lambda}^{\lambda}C_{\sigma\lambda;\lambda}^{\lambda})^* C_{\sigma\lambda;\lambda}^{\lambda}C_{\mu,i;\lambda}^{\lambda})\nonumber\\ &=q^{2(\lambda,\lambda)-2(\mu,\sigma\lambda)} \pi_\sigma((C_{\mu,i;\lambda}^{\lambda})^*(C_{\sigma\lambda;\lambda}^{\lambda})^* C_{\sigma\lambda;\lambda}^{\lambda}C_{\mu,i;\lambda}^{\lambda})\nonumber\\ &=q^{2(\lambda,\lambda)-2(\mu,\sigma\lambda)} \pi_\sigma((C_{\mu,i;\lambda}^{\lambda})^*C_{\sigma\lambda;\lambda}^{\lambda}) \pi_\sigma((C_{\sigma\lambda;\lambda}^{\lambda})^*C_{\mu,i;\lambda}^{\lambda}), \nonumber \end{split} \end{equation} where we used Proposition \ref{commutation}{\bf (i)} in the second equality and Proposition \ref{commutation}{\bf (ii)} in the first and third equality. So \eqref{claimbel} will then follow from \begin{equation}\label{bijnadaar} \pi_\sigma((C_{\sigma\lambda;\lambda}^{\lambda})^* C_{\mu,i;\lambda}^{\lambda})H_{\gamma}= 0, \quad \mu\not=\sigma\lambda, \end{equation} in view of the injectivity of $L_{\sigma\lambda;\lambda}$. Fix $h\in H_{\gamma}$ and $\mu\in P(\lambda)$ with $\mu\not=\sigma\lambda$. By Lemma \ref{definitionAS} we have $(C_{\sigma\lambda;\lambda}^{\lambda})^*C_{\mu,i;\lambda}^{\lambda}\in A_S\subset {\mathbb{C}}_q\lbrack U/K_S \rbrack$, hence the vector \begin{equation}\label{image} \tilde{h}:=\pi_\sigma((C_{\sigma\lambda;\lambda}^{\lambda})^* C_{\mu,i;\lambda}^{\lambda})h \end{equation} lies in the invariant subspace $H$. Again using the commutation relations given in Proposition \ref{commutation} and Corollary \ref{other}, we see that $\tilde{h}$ is an eigenvector of $L_{\sigma\lambda;\lambda}$ with eigenvalue $\tilde{\gamma}:=q^{2(\lambda,\sigma^{-1}(\mu)-\lambda)}\gamma$. We have $\tilde{\gamma}>\gamma$ by Proposition \ref{needed}. By the maximality of $\gamma$, we conclude that $\tilde{h}=0$. This proves \eqref{bijnadaar}, hence also the claim \eqref{claimbel}. \end{proof} \begin{Def} We say that the irreducible $*$-representation $\pi_{\sigma}$ \textup{(}$\sigma\in W^S$\textup{)} of ${\mathbb{C}}_q\lbrack U/K_S \rbrack$ is associated with the Schubert cell $X_{\overline{\sigma}}\subset U/K_S$. 
\end{Def} The following proposition can be regarded as a quantum analogue of Proposition \ref{finestructure} as well as of the ``only if'' part of Proposition \ref{semiclassical}. \begin{Prop}\label{notirrep} Let $\sigma\in W$, and let $\sigma=uv$ be the unique decomposition of $\sigma$ with $u\in W^S$ and $v\in W_S$. For $\pi_\sigma=\pi_u\otimes\pi_v$ \textup{(}cf.\ \eqref{minimal}\textup{)} and $t\in T$, we have \[(\pi_\sigma\otimes\tau_t)(a)= \pi_u(a)\otimes\hbox{id}^{\otimes l(v)}, \quad a\in {\mathbb{C}}_q\lbrack U/K_S \rbrack. \] \end{Prop} \begin{proof} Recall that the one-dimensional $*$-representation $\tau_t$ factorizes through $\iota_T^*: {\mathbb{C}}_q\lbrack U \rbrack\rightarrow {\mathbb{C}}_q\lbrack T\rbrack$ and that $\pi_i$ factorizes through $\phi_i^*: {\mathbb{C}}_q\lbrack U \rbrack\rightarrow \mathbb{C}_{q_i}\lbrack SU(2)\rbrack $. The maps $\iota_T^*$ and $\phi_i^*$ ($i\in S$) factorize through $\iota_S^*: {\mathbb{C}}_q\lbrack U \rbrack\rightarrow {\mathbb{C}}_q\lbrack K_S\rbrack$ since the ranges of $\iota_T$ and $\phi_i$ ($i\in S$) lie in the Hopf-subalgebra $U_q({\mathfrak{l}}_S)$. Hence $\pi_v\otimes \tau_t$ ($v\in W_S$, $t\in T$) factorizes through $\iota_S^*$, say $\pi_v\otimes \tau_t=\pi_{v,t}\circ \iota_S^*$. Then we have for $a\in {\mathbb{C}}_q\lbrack U/K_S \rbrack$, \begin{equation} \begin{split} (\pi_\sigma\otimes \tau_t)(a)&= (\pi_u\otimes \pi_v\otimes \tau_t)\circ\Delta(a)\nonumber\\ &=(\pi_u\otimes\pi_{v,t})\circ (\hbox{id}\otimes{\iota}_S^*)\Delta(a)\nonumber\\ &=\pi_u(a)\otimes\pi_{v,t}(1)=\pi_u(a)\otimes\hbox{id}^{\otimes l(v)},\nonumber \end{split} \end{equation} which completes the proof of the proposition. \end{proof} \begin{Lem}\label{inequivalent} The $*$-representations $\lbrace \pi_\sigma\rbrace_{\sigma\in W^S}$, considered as $*$-representations of $A_S$ respectively ${\mathbb{C}}_q\lbrack U/K_S \rbrack$, are mutually inequivalent. \end{Lem} \begin{proof} Let $\sigma,\sigma'\in W^S$ with $\sigma\not=\sigma'$ and $\lambda\in P_{++}(S^c)$. Then $\sigma\lambda\not=\sigma'\lambda$, since the isotropy subgroup
$\lbrace \sigma\in W \, | \, \sigma\lambda=\lambda\rbrace$ is equal to $W_S$ by Chevalley's Lemma (cf. \cite[Prop. 2.72]{Knapp}). Without loss of generality we may assume that $\sigma\lambda\not\geq \sigma'\lambda$. Then we have $\pi_{\sigma'}((C_{\sigma\lambda;\lambda}^{\lambda})^*C_{\sigma\lambda;\lambda}^{\lambda})=0$ by \eqref{part1}. On the other hand, $L_{\sigma\lambda;\lambda}$ is injective. It follows that $\pi_\sigma\not\simeq\pi_{\sigma'}$ as $*$-representations of $A_S$. \end{proof}
Let now $\|.\|_u$ be the universal $C^*$-norm on ${\mathbb{C}}_q\lbrack U \rbrack$ (cf.\ \cite[\S4]{DK}), so \begin{equation}
\|a\|_u:=\sup_{\sigma\in W,t\in T}\|(\pi_\sigma\otimes\tau_t)(a)\|, \quad a\in {\mathbb{C}}_q\lbrack U \rbrack. \end{equation} Let $C_q(U)$ (resp. $C_q(U/K_S)$) be the completion of ${\mathbb{C}}_q\lbrack U \rbrack$ (resp.\ ${\mathbb{C}}_q\lbrack U/K_S \rbrack$)
with respect to $\|.\|_u$. All $*$-representations $\pi_\sigma\otimes \tau_t$ of ${\mathbb{C}}_q\lbrack U \rbrack$ extend to $*$-representations of the $C^*$-algebra $C_q(U)$ by continuity. The results of this section can now be summarized as follows. \begin{Thm}\label{restrictionrepr} Let $S\subsetneq \Delta$. Then $\lbrace \pi_\sigma\rbrace_{\sigma\in W^S}$ is a complete set of mutually inequivalent irreducible $*$-representations of $C_q(U/K_S)$. \end{Thm} \begin{proof} This follows from the previous results, since every irreducible $*$-represen\-ta\-tion of $C_q(U/K_S)$ appears as an irreducible component of $\sigma_{\vert C_q(U/K_S)}$ for some irreducible $*$-represen\-ta\-tion $\sigma$ of $C_q(U)$ (cf.\ \cite[Prop.\ 2.10.2]{D}). \end{proof} Theorem \ref{restrictionrepr} does not imply that $\lbrace \pi_\sigma\rbrace_{\sigma\in W^S}$ is a complete set of irreducible $*$-representations of the $*$-algebra ${\mathbb{C}}_q\lbrack U/K_S \rbrack$ itself. Indeed, it is not clear that any irreducible $*$-representation of ${\mathbb{C}}_q\lbrack U/K_S \rbrack$ can be continuously extended to a $*$-representation of $C_q(U/K_S)$. In the remainder of this paper we will deal with the classification of the irreducible $*$-representations of $A_S$. In particular, this will yield a complete classification of the irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U/K_S\rbrack$ for the generalized flag manifolds $U/K_S$ for which the PBW factorization is valid (cf. Theorem \ref{yes}).
\section{Irreducible $*$-representations of $A_S$} Let $S\subsetneq \Delta$ be any subset. In this section we show that $\lbrace \pi_{\sigma}\rbrace_{\sigma\in W^S}$ exhausts the set of irreducible $*$-representations of $A_S$ (up to equivalence). We fix therefore an arbitrary irreducible $*$-representation $\tau: A_S\rightarrow {\mathcal{B}}(H)$ and we will show that $\tau\simeq \pi_{\sigma}$ for a (unique) $\sigma\in W^S$. In order to associate the proper minimal coset representative $\sigma\in W^S$ with $\tau$, we need to study the range $\tau(A_S)\subset {\mathcal{B}}(H)$ of $\tau$ in more detail. For $\lambda\in P_+(S^c)$ and $\mu,\nu\in P(\lambda)$, let $\tau^{\lambda}(\mu;\nu),\tau^{\lambda}(\nu)\subset {\mathcal{B}}(H)$ be the linear subspaces \begin{equation} \begin{split} \tau^{\lambda}(\mu;\nu) &:=\lbrace \tau((C_{v;v_{\lambda}}^{\lambda})^*
C_{w;v_{\lambda}}^{\lambda}) \, | \, v\in V(\lambda)_{\mu},\,\, w\in V(\lambda)_{\nu} \rbrace,\\ \tau^{\lambda}(\nu)&:=\lbrace \tau((C_{v;v_{\lambda}}^{\lambda})^*
C_{w;v_{\lambda}}^{\lambda}) \, | \, v\in V(\lambda),\,\, w\in V(\lambda)_{\nu} \rbrace. \end{split} \end{equation} For $\lambda\in P_+(S^c)$ set \begin{equation}\label{D}
D(\lambda):=\lbrace \nu\in P(\lambda) \, | \, \tau^{\lambda}(\nu)\not=\lbrace 0 \rbrace \rbrace \end{equation} and let $D_m(\lambda)$ be the set of weights $\nu\in D(\lambda)$ such that $\nu'\notin D(\lambda)$ for all $\nu'<\nu$. By \eqref{unitarityproperty}, we have $D(\lambda)\not=\emptyset$, hence also $D_m(\lambda)\not=\emptyset$. We start with a lemma which is useful for the computation of commutation relations in $\tau(A_S)\subset {\mathcal{B}}(H)$. \begin{Lem}\label{help!} Let $\lambda,\Lambda\in P_+(S^c)$ and $\nu\in D_m(\lambda)$. Let $v\in V(\lambda)$, $v'\in V(\lambda)_{\nu'}$ with $\nu'<\nu$ and $w,w'\in V(\Lambda)$. Then the product of the four matrix elements $(C_{v;v_{\lambda}}^{\lambda})^*$, $C_{v';v_{\lambda}}^{\lambda}$, $(C_{w;v_{\Lambda}}^{\Lambda})^*$, and $C_{w';v_{\Lambda}}^{\Lambda}$, taken in an arbitrary order, is contained in $\hbox{Ker}(\tau)$. \end{Lem} \begin{proof} Since $\hbox{Ker}(\tau)$ is a two-sided $*$-ideal in $A_S$, it follows from the definitions that \[(C_{w;v_{\Lambda}}^{\Lambda})^*C_{w';v_{\Lambda}}^{\Lambda} (C_{v;v_{\lambda}}^{\lambda})^*C_{v';v_{\lambda}}^{\lambda}\in \hbox{Ker}(\tau).\] If the product of the four matrix coefficients is taken in a different order, then we can rewrite it by Proposition \ref{commutation} and by Corollary \ref{other} as a linear combination of products of matrix elements \[(C_{u;v_{\Lambda}}^{\Lambda})^*C_{u';v_{\Lambda}}^{\Lambda} (C_{x;v_{\lambda}}^{\lambda})^*C_{x';v_{\lambda}}^{\lambda}\] with $x'\in V(\lambda)_{\nu''}$ and $\nu''\leq\nu'<\nu$. These are all contained in $\hbox{Ker}(\tau)$, since $\nu\in D_m(\lambda)$. \end{proof} \begin{Lem}\label{w} Let $\lambda\in P_+(S^c)$ and $\nu\in D_m(\lambda)$. Then\\ {\bf (i)} $\tau^{\lambda}(\nu;\nu)\not=\lbrace 0 \rbrace$;\\ {\bf (ii)} $\nu=\sigma\lambda$ for some $\sigma\in W^S$. \end{Lem} \begin{proof} Let $\lambda\in P_+(S^c)$ and $\nu\in D_m(\lambda)$. Fix weight vectors $v\in V(\lambda)_{\mu}$, $w\in V(\lambda)_{\nu}$ such that $T_{v;w}:=\tau((C_{v;v_{\lambda}}^{\lambda})^*C_{w;v_{\lambda}}^{\lambda})\not=0$. By Lemma \ref{help!}, we compute \begin{equation}\label{subres} \begin{split} (T_{v;w})^*T_{v;w}&=q^{(\mu,\nu)-(\lambda,\lambda)} \tau(C_{v;v_{\lambda}}^{\lambda}(C_{v;v_{\lambda}}^{\lambda} C_{w;v_{\lambda}}^{\lambda})^*C_{w;v_{\lambda}}^{\lambda})\\ &=\tau(C_{v;v_{\lambda}}^{\lambda}(C_{v;v_{\lambda}}^{\lambda})^*) T_{w;w}, \end{split} \end{equation} where we used Proposition \ref{commutation}{\bf (ii)} in the first equality and Proposition \ref{commutation}{\bf (i)} in the second equality. On the other hand, $(T_{v;w})^*T_{v;w}\not=0$ since ${\mathcal{B}}(H)$ is a $C^*$-algebra, so we conclude that $T_{w;w}\not=0$. In particular, $\tau^{\lambda}(\nu;\nu)\not=\lbrace 0 \rbrace$. Formula \eqref{subres} for $v=w$ gives \[ 0\not=(T_{w;w})^*T_{w;w}= \tau(C_{w;v_{\lambda}}^{\lambda}(C_{w;v_{\lambda}}^{\lambda})^*)T_{w;w} =q^{(\lambda,\lambda)-(\nu,\nu)}T_{w;w}T_{w;w}, \] where we have used Proposition \ref{commutation}{\bf (ii)} in the last equality. It follows that $(\lambda,\lambda)=(\nu,\nu)$, since $T_{w;w}$ is selfadjoint. By Proposition \ref{needed} we obtain $\nu=\sigma\lambda$ for some $\sigma\in W^S$. \end{proof} For $\lambda\in P_+(S^c)$ and $\nu\in D_m(\lambda)$ we set \begin{equation}\label{Lw} L_{\nu;\lambda}:=\tau((C_{\nu;\lambda}^{\lambda})^* C_{\nu;\lambda}^{\lambda}). \end{equation} This definition makes sense since $\hbox{dim}(V(\lambda)_{\nu})=1$ by Lemma \ref{w}{\bf (ii)}. 
Furthermore, $L_{\nu;\lambda}$ is a non-zero selfadjoint operator which commutes with the elements of $\tau(A_S)$ in the following way. \begin{Lem}\label{derivedcomm} Let $\lambda,\Lambda\in P_+(S^c)$ and $\nu\in D_m(\lambda)$. For $v\in V(\Lambda)_{\mu}$, $w\in V(\Lambda)_{\mu'}$ we have \[ L_{\nu;\lambda}\tau((C_{v;v_{\Lambda}}^{\Lambda})^*C_{w;v_{\Lambda}}^{\Lambda})= q^{2(\nu,\mu'-\mu)} \tau((C_{v;v_{\Lambda}}^{\Lambda})^*C_{w;v_{\Lambda}}^{\Lambda})L_{\nu;\lambda}. \] \end{Lem} \begin{proof} By Lemma \ref{help!} and the commutation relations in section 3 we compute \begin{equation} \begin{split} L_{\nu;\lambda}\tau((C_{v;v_{\Lambda}}^{\Lambda})^*C_{w;v_{\Lambda}}^{\Lambda})&= q^{(\lambda,\Lambda)-(\nu,\mu)}\tau((C_{v;v_{\Lambda}}^{\Lambda} C_{v_{\nu};v_{\lambda}}^{\lambda})^*C_{v_{\nu};v_{\lambda}}^{\lambda} C_{w;v_{\Lambda}}^{\Lambda})\nonumber\\ &=q^{2(\lambda,\Lambda)-2(\nu,\mu)}\tau((C_{v;v_{\Lambda}}^{\Lambda})^* (C_{v_{\nu};v_{\lambda}}^{\lambda})^*C_{v_{\nu};v_{\lambda}}^{\lambda} C_{w;v_{\Lambda}}^{\Lambda})\nonumber\\ &=q^{(\nu,\mu')+(\lambda,\Lambda)-2(\nu,\mu)}\tau((C_{v;v_{\Lambda}}^{\Lambda})^* (C_{v_{\nu};v_{\lambda}}^{\lambda})^*C_{w;v_{\Lambda}}^{\Lambda} C_{v_{\nu};v_{\lambda}}^{\lambda})\nonumber\\ &=q^{2(\nu,\mu'-\mu)} \tau((C_{v;v_{\Lambda}}^{\Lambda})^*C_{w;v_{\Lambda}}^{\Lambda})L_{\nu;\lambda}, \nonumber \end{split} \end{equation} where we used Proposition \ref{commutation}{\bf (ii)} for the first and fourth equality, Proposition \ref{commutation}{\bf (i)} for the second equality, and Corollary \ref{other} for the third equality. \end{proof} It follows from Lemma \ref{derivedcomm} that $\hbox{Ker}(L_{\nu;\lambda})\subsetneq H$ is a closed invariant subspace. By the irreducibility of $\tau$, we thus obtain the following corollary. \begin{Cor}\label{injective} Let $\lambda\in P_+(S^c)$ and $\nu\in D_m(\lambda)$. Then $L_{\nu;\lambda}$ is injective. \end{Cor} The minimal coset representative $\sigma$ of Lemma \ref{w}{\bf (ii)} is unique and independent of $\lambda\in P_+(S^c)$ in the following sense. \begin{Lem}\label{uniqueness} There exists a unique $\sigma\in W^S$ such that $D_m(\lambda)=\lbrace \sigma\lambda\rbrace$ for all $\lambda\in P_+(S^c)$. \end{Lem} \begin{proof} Let $\Lambda\in P_{++}(S^c)$ and $\nu\in D_m(\Lambda)$. Then there exists a unique $\sigma\in W^S$ such that $\nu=\sigma\Lambda$ by Lemma \ref{w}{\bf (ii)} and by Chevalley's Lemma (cf. \cite[Prop.\ 2.27]{Knapp}). Fix furthermore arbitrary $\lambda\in P_+(S^c)$ and $\nu'\in D_m(\lambda)$. Choose a $\sigma'\in W$ such that $\nu'=\sigma'\lambda$. By Lemma \ref{help!} and the commutation relations of section 3, we compute \begin{equation} \begin{split} L_{\nu;\Lambda}L_{\nu';\lambda}&=q^{(\Lambda,\lambda)-(\nu,\nu')} \tau((C_{\nu';\lambda}^{\lambda}C_{\nu;\Lambda}^{\Lambda})^* C_{\nu;\Lambda}^{\Lambda}C_{\nu';\lambda}^{\lambda})\nonumber\\ &=q^{3(\Lambda,\lambda)-3(\nu,\nu')}\tau( (C_{\nu';\lambda}^{\lambda})^*(C_{\nu;\Lambda}^{\Lambda})^* C_{\nu';\lambda}^{\lambda}C_{\nu;\Lambda}^{\Lambda})\nonumber\\ &=q^{2(\Lambda,\lambda)-2(\nu,\nu')}L_{\nu';\lambda}L_{\nu;\Lambda},\nonumber \end{split} \end{equation} where we used Proposition \ref{commutation}{\bf (ii)} in the first and third equality and Proposition \ref{commutation}{\bf (i)} twice in the second equality. 
If we repeat the same computation, but now using Corollary \ref{other} twice in the second equality, then we obtain \[ L_{\nu;\Lambda}L_{\nu';\lambda}= q^{2(\nu,\nu')-2(\Lambda,\lambda)}L_{\nu';\lambda}L_{\nu;\Lambda}, \] hence \[\bigl(q^{2(\Lambda,\lambda)-2(\nu,\nu')}-q^{2(\nu,\nu')-2(\Lambda,\lambda)} \bigr)L_{\nu';\lambda}L_{\nu;\Lambda}=0.\] By Corollary \ref{injective} we have $L_{\nu';\lambda}L_{\nu;\Lambda}\not=0$, so we conclude that \[ (\Lambda,\lambda)-(\nu,\nu')=(\Lambda,\lambda-\sigma^{-1}\sigma'\lambda)=0. \] Since $\Lambda\in P_{++}(S^c)$ and $\lambda\in P_+(S^c)$, it follows from Lemma \ref{neededlemma} that $\lambda=\sigma^{-1}\sigma'\lambda$, i.e. $\nu'=\sigma\lambda$. Hence, $D_m(\lambda)=\lbrace \sigma\lambda\rbrace$ for all $\lambda\in P_+(S^c)$. \end{proof} In the remainder of this section we write $\sigma$ for the unique minimal coset representative such that $D_m(\lambda)=\lbrace \sigma\lambda\rbrace$ for all $\lambda\in P_+(S^c)$. We are going to prove that $\tau\simeq\pi_{\sigma}$. First we look for the analogue of the distinguished vector $e_0^{\otimes l(\sigma)}$ (cf.\ Proposition \ref{1dimensional}{\bf (ii)}) in the representation space $H$ of $\tau$.
The spectrum $I(\lambda)$ of $L_{\sigma\lambda;\lambda}$ is contained in $[0,\infty)$, since $L_{\sigma\lambda;\lambda}$ is a positive operator. By considering the spectral decomposition of $L_{\sigma\lambda;\lambda}$, one obtains the following corollary of Lemma \ref{derivedcomm} and \cite[Lemma 4.3]{Ko}. \begin{Cor}\label{spec} Let $\lambda\in P_+(S^c)$. Then $I(\lambda)\subset [0,\infty)$ is a countable set with no limit points, except possibly $0$. \end{Cor} The proof of Corollary \ref{spec} is similar to the proof of \cite[Prop.\ 3.9]{S} and of \cite[Prop.\ 4.2]{Ko}.
By Corollary \ref{spec} we have an orthogonal direct sum decomposition \begin{equation}\label{eig} H=\bigoplus_{\gamma\in I(\lambda)\cap {\mathbb{R}}_{>0}}H_{\gamma}(\lambda) \end{equation} into eigenspaces of $L_{\sigma\lambda;\lambda}$, where $H_{\gamma}(\lambda)$ is the eigenspace of $L_{\sigma\lambda;\lambda}$ corresponding to the eigenvalue $\gamma$. Let $\gamma_0(\lambda)>0$ be the largest eigenvalue of $L_{\sigma\lambda;\lambda}$. \begin{Lem}\label{zero} Let $\lambda\in P_+(S^c)$, $v\in V(\lambda)$, $w\in V(\lambda)_{\nu}$ and assume that $\nu\not=\sigma\lambda$. Then $\tau((C_{v;v_{\lambda}}^{\lambda})^*C_{w;v_{\lambda}}^{\lambda}) (H_{\gamma_0(\lambda)}(\lambda))= \lbrace 0 \rbrace$. \end{Lem} \begin{proof} Let $\lambda\in P_+(S^c)$, $v\in V(\lambda)_{\mu}$ and $w\in V(\lambda)_{\nu}$. By Lemma \ref{help!} and the commutation relations in section 3, we compute \begin{equation} \begin{split} L_{\sigma\lambda;\lambda}\tau((C_{v;v_{\lambda}}^{\lambda})^*C_{w;v_{\lambda}}^{ \lambda})&=\tau(C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda} (C_{v;v_{\lambda}}^{\lambda}C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})^* C_{w;v_{\lambda}}^{\lambda})\nonumber\\ &=q^{(\lambda,\lambda)-(\mu,\sigma\lambda)} \tau(C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda} (C_{v;v_{\lambda}}^{\lambda})^*(C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})^* C_{w;v_{\lambda}}^{\lambda})\nonumber\\ &=q^{2(\lambda,\lambda)-2(\mu,\sigma\lambda)} \tau((C_{v;v_{\lambda}}^{\lambda})^*C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda}) \tau((C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})^*C_{w;v_{\lambda}}^{\lambda}), \nonumber \end{split} \end{equation} where we used Proposition \ref{commutation}{\bf (i)} in the second equality and Proposition \ref{commutation}{\bf (ii)} in the first and third equality. This computation, together with the injectivity of $L_{\sigma\lambda;\lambda}$, shows that it suffices to give a proof of the lemma for the special case that $v=v_{\sigma\lambda}$. So we fix $h\in H_{\gamma_0(\lambda)}(\lambda)$ and $w\in V(\lambda)_{\nu}$ with $\nu\in P(\lambda)$ and $\nu\not=\sigma\lambda$. It follows from Lemma \ref{derivedcomm} that $\tilde{h}:= \tau((C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})^*C_{w;v_{\lambda}}^{\lambda})h$ is an eigenvector of $L_{\sigma\lambda;\lambda}$ with eigenvalue $\tilde{\gamma}_0(\lambda)=q^{2(\lambda,\sigma^{-1}(\nu)-\lambda)}\gamma_0(\lambda)$. By Proposition \ref{needed} we have $\tilde{\gamma}_0(\lambda)>\gamma_0(\lambda)$, hence $\tilde{h}=0$ by the maximality of the eigenvalue $\gamma_0(\lambda)$. \end{proof} \begin{Cor}\label{Cor2} $\gamma_0(\lambda)=1$ for all $\lambda\in P_+(S^c)$. \end{Cor} \begin{proof} Follows from \eqref{unitarityproperty} and Lemma \ref{zero}. \end{proof} The linear subspace of ${\mathbb{C}}_q\lbrack U \rbrack$ spanned by the matrix elements $\lbrace C_{\sigma\mu;\mu}^{\mu}\rbrace_{\mu\in P_+}$ is a subalgebra of ${\mathbb{C}}_q\lbrack U \rbrack$ with algebraic generators $C_{\sigma\varpi_i;\varpi_i}^{\varpi_i}$ ($i\in [1,r]$), since $C_{\sigma\mu;\mu}^{\mu}C_{\sigma\nu;\nu}^{\nu}= \lambda_{\mu,\nu}C_{\sigma(\mu+\nu);\mu+\nu}^{\mu+\nu}$, where the scalar $\lambda_{\mu,\nu}\in {\mathbb{T}}$ depends on the particular choices of orthonormal bases for the finite-dimensional irreducible representations $V(\mu)$ and $V(\nu)$ (cf. \cite[Proof of Prop.\ 3.12]{S}). 
Then it follows from Proposition \ref{commutation} and Lemma \ref{help!} that \begin{equation}\label{product} L_{\sigma(\mu+\nu);\mu+\nu}=L_{\sigma\mu;\mu}L_{\sigma\nu;\nu} \end{equation} for all $\mu,\nu\in P_+(S^c)$, hence $\hbox{span}\lbrace L_{\sigma\lambda;\lambda}\rbrace_{\lambda\in P_+(S^c)}$ is a commutative subalgebra of ${\mathcal{B}}(H)$. Set \begin{equation} H_1:=\bigcap_{i\in S^c}H_1(\varpi_i), \end{equation} then $H_1\subset H_1(\lambda)$ for all $\lambda\in P_+(S^c)$ by \eqref{product}. \begin{Lem} $H_1=H_1(\lambda)$ for all $\lambda\in P_{++}(S^c)$. In particular, $H_1\not=\lbrace 0 \rbrace$. \end{Lem} \begin{proof}
For $\mu\in P_+(S^c)$ we have $\|L_{\sigma\mu;\mu}\|=1$. Moreover, for any $h\in H$, \begin{equation}\label{equiva}
h\in H_1(\mu) \quad \Leftrightarrow \quad \|L_{\sigma\mu;\mu}h\|=\|h\|. \end{equation} This follows from the eigenspace decomposition \eqref{eig} for $L_{\sigma\mu;\mu}$ and the fact that $1$ is the largest eigenvalue of $L_{\sigma\mu;\mu}$. Let $\lambda\in P_{++}(S^c)$ and choose arbitrary $i\in S^c$. Then $\lambda=\mu+\varpi_i$ for certain $\mu\in P_+(S^c)$. By \eqref{product}, we obtain for $h\in H_1(\lambda)$, \begin{equation} \begin{split}
\|h\|=\|L_{\sigma\lambda;\lambda}h\|&=
\|L_{\sigma\mu;\mu}L_{\sigma\varpi_i;\varpi_i}h\|\nonumber\\
&\leq \|L_{\sigma\varpi_i;\varpi_i}h\|\leq\|h\|,\nonumber \end{split} \end{equation} hence we have equality everywhere. By \eqref{equiva}, it follows that $h\in H_1(\varpi_i)$. Since $i\in S^c$ was arbitrary, we conclude that $h\in H_1$. \end{proof} \begin{Lem}\label{tussenstukje} Let $\lambda\in P_+(S^c)$. For all $v\in V(\lambda)_{\mu}$ with $\mu\not=\sigma\lambda$ we have \[\tau\bigl((C_{v;v_{\lambda}}^{\lambda})^* C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda}\bigr)(H_1)\subset H_1^{\bot}.\] \end{Lem} \begin{proof} Let $\Lambda\in P_{++}(S^c)$, $\lambda\in P_+(S^c)$, and $v\in V(\lambda)_{\mu}$ with $\mu\not=\sigma\lambda$ and $\mu\in P(\lambda)$. Then \begin{equation} L_{\sigma\Lambda;\Lambda}\tau((C_{v;v_{\lambda}}^{\lambda})^* C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})=q^{2(\Lambda,\lambda-\sigma^{-1}(\mu))} \tau((C_{v;v_{\lambda}}^{\lambda})^* C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})L_{\sigma\Lambda;\Lambda} \end{equation} by Lemma \ref{derivedcomm}. By Lemma \ref{neededlemma} we have $(\Lambda,\lambda-\sigma^{-1}(\mu))>0$. Hence, \begin{equation} \begin{split} \tau((C_{v;v_{\lambda}}^{\lambda})^* C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})(H_1) &=\tau((C_{v;v_{\lambda}}^{\lambda})^* C_{v_{\sigma\lambda};v_{\lambda}}^{\lambda})(H_1(\Lambda))\nonumber\\ &\subset \bigoplus_{\gamma<1}H_{\gamma}(\Lambda) =H_1(\Lambda)^{\bot}=H_1^{\bot},\nonumber \end{split} \end{equation} which completes the proof of the lemma. \end{proof} \begin{Cor}\label{1} $\dim(H_1)=1$. \end{Cor} \begin{proof} By Lemma \ref{zero} and Lemma \ref{tussenstukje} we obtain for any $0\not=h\in H_1$, \[ \overline{\tau(A_S)h}\subset \hbox{span}\lbrace h \rbrace \oplus H_1^{\bot}, \] where the overbar means closure. By the irreducibility of $\tau$, we conclude that $\hbox{span}\lbrace h \rbrace=H_1$. \end{proof}
Any vector $h\in H_1$ with $\|h\|=1$ can serve now as the analogue in the representation space $H$ of the distinguished vector $e_0^{\otimes l(\sigma)}$ in the representation space of $\pi_{\sigma}$. By comparing the Gel'fand-Naimark-Segal states of $\tau$ and $\pi_{\sigma}$ taken with respect to the cyclic vector
$h\in H_1$ ($\|h\|=1$) resp.\ $e_0^{\otimes l(\sigma)}$, we obtain the following lemma. \begin{Lem} We have $\tau\simeq\pi_\sigma$ as irreducible $*$-representations of $A_S$. \end{Lem} \begin{proof}
Fix an $h\in H_1$ with $\|h\|=1$, and define the Gel'fand-Naimark-Segal states $\phi_{\tau},\phi_{\pi_\sigma}:A_S\rightarrow {\mathbb{C}}$ by \begin{equation} \phi_{\tau}(a):=(\tau(a)h,h),\,\quad \phi_{\pi_\sigma}(a):=(\pi_\sigma(a)e_0^{\otimes l(\sigma)},e_0^{\otimes l(\sigma)}). \end{equation} Then we have for $\phi=\phi_{\tau}$ (resp. $\phi=\phi_{\pi_\sigma}$), \begin{equation}\label{GNS} \phi((C_{\mu,i;\lambda}^{\lambda})^*C_{\nu,j;\lambda}^{\lambda}) =\delta_{\mu,\sigma\lambda}\delta_{\nu,\sigma\lambda} \end{equation} for $\lambda\in P_+(S^c)$, $\mu,\nu\in P(\lambda)$, $i\in [1,\hbox{dim}(V(\lambda)_{\mu})]$, and $j\in [1,\hbox{dim}(V(\lambda)_{\nu})]$. Indeed, \eqref{GNS} for $\phi=\phi_{\tau}$ follows from Lemma \ref{zero} and Lemma \ref{tussenstukje}. For $\phi=\phi_{\pi_\sigma}$, recall that $\pi_\sigma$ is an irreducible $*$-representation of $A_S$ (Proposition \ref{irrep}). We have seen in the previous section that $L_{\sigma\lambda;\lambda}=\pi_{\sigma}((C_{\sigma\lambda;\lambda}^{\lambda})^* C_{\sigma\lambda;\lambda}^{\lambda})$ is injective for all $\lambda\in P_+(S^c)$, hence $\sigma\lambda\in D(\lambda)$ (cf.\ \eqref{D}) for all $\lambda\in P_+(S^c)$. By \eqref{part1}, we actually have $\sigma\lambda\in D_m(\lambda)$ for all $\lambda\in P_+(S^c)$. Hence the labeling $\sigma\in W^S$ of $\pi_{\sigma}$ coincides with its (unique) minimal coset representative defined in Lemma \ref{uniqueness}. Furthermore, the one-dimensional subspace $H_1$ for $\pi_\sigma$ is equal to $\hbox{span}\lbrace e_0^{\otimes l(\sigma)} \rbrace$ (cf. Proposition \ref{1dimensional}{\bf (ii)}, Corollary \ref{1}). So \eqref{GNS} for $\phi=\phi_{\pi_{\sigma}}$ follows again from Lemma \ref{zero} and Lemma \ref{tussenstukje}.
By linearity it follows from \eqref{GNS} that $\phi_{\tau}=\phi_{\pi_\sigma}$, hence $\tau$ and $\pi_\sigma$ are unitarily equivalent $*$-representations (cf. \cite[Prop.\ 2.4.1]{D}). \end{proof} We may summarize the results of this section as follows. \begin{Thm}\label{lastjob} For all $S\subsetneq \Delta$, $\lbrace \pi_\sigma\rbrace_{\sigma\in W^S}$ is a complete set of mutually inequivalent, irreducible $*$-representations of the factorized $*$-subalgebra $A_S$. \end{Thm} Combining Proposition \ref{notirrep}, Theorem \ref{yes} and Theorem \ref{lastjob} we obtain the following theorem. \begin{Thm}\label{last} $\lbrace \pi_\sigma\rbrace_{\sigma\in W^S}$ is a complete set of mutually inequivalent, irreducible $*$-representations of ${\mathbb{C}}_q\lbrack U/K_S \rbrack$ in the following cases:\\ {\bf (i)} $S=\emptyset$, i.e. $U/K_S=U/T$ is the full flag manifold;\\ {\bf (ii)} $\#S^c=1$ and the simple root $\alpha\in S^c$ is a Gel'fand node.\\ For these cases the restriction to ${\mathbb{C}}_q[U/K_S]$ of the universal $C^*$-norm on ${\mathbb{C}}_q\lbrack U \rbrack$ coincides with the universal $C^*$-norm on ${\mathbb{C}}_q\lbrack U/K_S \rbrack$. \end{Thm}
\end{document}
\begin{document}
\title{Cohen-Macaulay monomial ideals of codimension 2}
\author{Muhammad Naeem} \thanks{} \subjclass{13C14, 13D02, 13D25, 13P10} \address{Muhammad Naeem, Abdus Salam School of Mathematical Sciences (ASSMS), GC University, Lahore, Pakistan. } \email{naeem\[email protected]}
\begin{abstract} We give a structure theorem for Cohen-Macaulay monomial ideals of codimension 2, and describe all possible relation matrices of such ideals. In case that the ideal has a linear resolution, the relation matrices can be identified with the spanning trees of a connected chordal graph with the property that each distinct pair of maximal cliques of the graph has at most one vertex in common.
\vskip 0.4 true cm
\noindent
{\it Key words } : Monomial Ideals, Taylor Complexes, Linear Resolutions, Chordal Graphs. \end{abstract}
\maketitle \section*{Introduction} The purpose of the paper is to work out in detail a remark on the structure of Cohen-Macaulay monomial ideals of codimension 2 which was made in the paper \cite{BH1}. There it was observed that the `generic' ideals of this type, generated by $n$ elements, are in bijective correspondence to the trees with $n$ vertices. In Proposition~\ref{lahore} we give an explicit description of the generators of a generic Cohen-Macaulay monomial ideal of codimension 2 in terms of the associated tree and describe the minimal prime ideals of such ideals in Proposition~\ref{islamabad}. As a consequence of these two results we obtain as the main result of Section ~1 a full description of all Cohen-Macaulay monomial ideals of codimension 2, see Theorem~\ref{juergen}.
In Section~2 we study the possible relation trees of a Cohen-Macaulay monomial ideal of codimension 2. This set of relation trees is always the set of bases of a matroid (Proposition~\ref{matroid}), which in the case of a generic ideal consists of only one tree, as shown in Proposition~\ref{first}. We call the graph $G$ whose edge set is the union of the edge sets of all relation trees of a given Cohen-Macaulay monomial ideal $I$ of codimension 2 the Taylor graph of $I$. Then each of the relation trees is a spanning tree of the Taylor graph. The natural question arises whether the set of relation trees of $I$ is precisely the set of spanning trees of $G$. We show by an example that this is not the case in general. On the other hand, we prove in Theorem~\ref{linear} that each spanning tree of $G$ is a relation tree of $I$ if $I$ has a linear resolution. In order to obtain a complete description of all possible relation trees when $I$ has a linear resolution, it is therefore required to find all possible Taylor graphs of such ideals. This is done in Theorem~\ref{chordal}, where it is shown that a finite connected simple graph $G$ is the Taylor graph of a Cohen-Macaulay monomial ideal of codimension $2$ with a linear resolution if and only if $G$ is chordal and any two maximal cliques of $G$ have at most one vertex in common.
\section{On the structure of Cohen-Macaulay monomial ideals of codimension 2}
In \cite[Remark 6.3]{BH1} the following observation was made regarding the structure of a codimension 2 Cohen-Macaulay monomial ideal $I$: let $$\{u_1,u_2,...,u_{m+1}\}$$ be the unique minimal set of monomial generators of $I$. Consider the Taylor complex of the sequence $u_1,u_2,...,u_{m+1}$, \[ \cdots\rightarrow\bigoplus_{1\leq i<j\leq m+1}S\, e_i\wedge e_j\overset{\phi_2}{\rightarrow}\bigoplus_{i=1}^{m+1}S e_i\overset{\phi_1}{\rightarrow} S, \] where $\phi_1(e_i)=u_i$.
The matrix corresponding to $\phi_2$ is of size ${m+1\choose 2}\times (m+1)$; its rows correspond to the Taylor relations (cf.\ \cite{Ei}), namely to the relations
\[
e_i\wedge e_j\mapsto u_{ji}e_j-u_{ij}e_i
\] where $i<j$ and $u_{ji}=u_i/\gcd(u_i,u_j)$, $u_{ij}=u_j/\gcd(u_i,u_j)$.
Let $U=\Ker(\phi_1)$; then the Taylor relations form a homogeneous system of generators of $U$. Since $\projdim S/I=2$, it follows that $U$ is free of rank $m$. In particular, $U$ is minimally generated by $m$ elements. Applying the graded Nakayama Lemma (cf.\ \cite{BH} or \cite[Lemma 1.2.6]{V}), a minimal system of graded generators of $U$ can be chosen among the Taylor relations. We then obtain a minimal graded free resolution \[ 0\rightarrow S^{m}\overset{A}{\rightarrow} S^{m+1} \rightarrow S \rightarrow S/I \rightarrow 0 \] of $S/I$, where $A$ is a matrix whose rows correspond to Taylor relations. Any such matrix will be called a {\em Hilbert--Burch matrix} of $I$.
Notice that each row of $A$ has exactly two nonzero entries. We obtain a graph $\Gamma$ on the vertex set $[m+1]=\{1,\ldots,m+1\}$ from the matrix $A$ as follows: we say that $\{i,j\}$ is an edge of $\Gamma$ if and only if there is a row of $A$ whose nonzero entries are in the $i$th and $j$th positions.
We claim that every column of $A$ has a nonzero entry. In fact, if this were not the case, say if the $k$th column of $A$ had all entries zero, then no Taylor relation involving $e_k$, such as $u_{k+1,k}e_{k+1}-u_{k,k+1}e_k\in U$, could be written as a linear combination of the minimal graded homogeneous generators of $U$. This shows that $\Gamma$ has no isolated vertex. The same kind of argument shows that $\Gamma$ is connected: if the vertex set were the disjoint union of two nonempty sets $V_1$ and $V_2$ with no edge of $\Gamma$ between them, then each of the chosen generators of $U$ would involve only basis elements $e_k$ with $k$ in one of the two sets, and a Taylor relation $u_{ji}e_j-u_{ij}e_i$ with $i\in V_1$ and $j\in V_2$ could not be a linear combination of them, since the part of such a combination supported on $V_1$ lies again in $U$, whereas $-u_{ij}e_i\notin U$. Since $\Gamma$ is connected with $m+1$ vertices and $m$ edges, we see that $\Gamma$ is a tree, which is called a {\em relation tree} of $I$. The set of all relation trees of $I$ will be denoted by $\mathcal{T}(I)$.
Conversely, given a tree $\Gamma$ on the vertex set $[m+1]$ with $m\geq 2$, we are going to construct a codimension 2 Cohen-Macaulay monomial ideal $I$ for which $\Gamma$ is a relation tree. We assign to $\Gamma$ an $m\times (m+1)$-matrix $A(\Gamma)=(a_{ij})$ whose entries are either $0$ or indeterminates. The matrix $A(\Gamma)$ is defined as follows: let $E(\Gamma)$ be the set of edges of $\Gamma$. Since $\Gamma$ is a tree, there are exactly $m$ edges. We choose an arbitrary order of the edges of $\Gamma$, and assign to the $k$th edge $\{i,j\}\in E(\Gamma)$ the $k$th row of $A(\Gamma)$ by \begin{eqnarray} \label{genericmatrix} a_{kl}= \left\{\begin{array}{ll} -x_{ij}&\text{ if $l =i$,}\\ x_{ji}&\mbox{ if $l=j$,}\\ 0&\mbox{ otherwise.} \end{array}\right. \end{eqnarray} For example, if $\Gamma$ is the tree with edges $\{1,2\}$, $\{2,3\}$ and $\{2,4\}$, then we obtain the matrix \[ A(\Gamma)= \begin{pmatrix}
-x_{12} & x_{21} & 0 & 0 \\ 0 &- x_{23} & x_{32} & 0 \\ 0 & -x_{24} & 0 & x_{42} \end{pmatrix} \]
\begin{Definition}{\em Let $\Gamma$ be a tree on the vertex set $[m+1]$ and $i,j$ be two distinct vertices of $\Gamma$. Then there exists a unique path from $i$ to $j$ denoted by $i \rightarrow j$, in other words a sequence of numbers $i=i_0,i_1,i_2,\ldots,i_{k-1},i_k=j$ such that $\{i_l,i_{l+1}\}\in E(\Gamma)$ for $l=0,\ldots,k-1$. We set \[ b(i,j)=i_{1}\quad\text{and}\quad e(i,j)=i_{k-1}\]} \end{Definition}
\begin{Proposition} \label{lahore} Let $v_{j}$ be the minor of $A(\Gamma)$ which is obtained
by omitting the $j$th column of $A(\Gamma)$. Then $v_j=\pm\prod_{i=1\atop i\neq j}^{m+1}x_{ib(i,j)}$ for $j=1,2,...,m+1$ \end{Proposition}
\begin{proof} We prove the assertion by using induction on the number of edges of $\Gamma$.
If $|E(\Gamma)|=1$, then
\[
A(\Gamma)=(-x_{12},x_{21}) \] Therefore, $v_{1}=x_{21}$ and $v_2=-x_{12}$, as required.
Now assume that the assertion is true for $|E(\Gamma)|=m-1\geq 1$. Since $\Gamma$ is a tree, there exists a free
vertex of $\Gamma$, that is, a vertex which belongs to exactly one edge. Such an
edge of $\Gamma$ is called a leaf. We may assume the edge $\{m,m+1\}$ is a leaf
and that $m+1$ is a free vertex of $\Gamma$. The tree which is obtained from
$\Gamma$ by removing the leaf $\{m,m+1\}$ will be denoted by $\Gamma'$.
By our induction hypothesis the minors $v_1',\ldots, v_m'$ of $\Gamma'$ have the
desired form. We may assume that the edge $\{m,m+1\}$ is the last in the order of edges. Then the $(m-1)\times m$ matrix $A(\Gamma')$ is obtained from the $m\times (m+1)$-matrix $A(\Gamma)$ by removing the last row \[ R_{m}= (0, \ldots, 0,-x_{m,m+1}, x_{m+1,m}) \] and the last column \[ \begin{pmatrix} 0 \\ \vdots \\0 \\x_{m+1,m} \end{pmatrix} \] It follows that the minors $v_1,\ldots, v_{m+1}$ of $A(\Gamma)$ are given by \begin{eqnarray} \label{recursion} v_j=x_{m+1,m}v_j'\quad \text{for} \quad j=1,\ldots, m, \quad \text{and} \quad v_{m+1}=x_{m,m+1}v_m'. \end{eqnarray} Therefore, our induction hypothesis implies that \[ v_j=x_{m+1,m}v_j'=\pm x_{m+1,m} \prod_{i=1,\; i\neq j}^{m}x_{i,b(i,j)}=\pm \prod_{i=1, \; i\neq j}^{m+1}x_{i,b(i,j)} \] for $j=1,\ldots,m$, and \[ v_{m+1}=x_{m,m+1}v_m'=\pm x_{m,m+1}\prod_{i=1}^{m-1}x_{i,b(i,m)}=\pm x_{m,b(m,m+1)}\prod_{i=1}^{m-1}x_{i,b(i,m+1)}, \]
because $b(i,m)=b(i,m+1)$ for all $i\leq m-1$. This implies that \[ v_{m+1}=\pm \prod_{i=1,\; i\neq m+1}^{m+1}x_{i,b(i,m+1)}, \] as desired. \end{proof}
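For instance, for the tree $\Gamma$ on the vertex set $[4]$ with edges $\{1,2\}$, $\{2,3\}$ and $\{2,4\}$ considered above, Proposition \ref{lahore} gives, up to sign,
\[
v_1=x_{21}x_{32}x_{42},\quad v_2=x_{12}x_{32}x_{42},\quad v_3=x_{12}x_{23}x_{42},\quad v_4=x_{12}x_{24}x_{32},
\]
as one also checks directly by expanding the maximal minors of the matrix $A(\Gamma)$ displayed above.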
For a tree $\Gamma$ on the vertex set $[m+1]$ we denote by $I(\Gamma)$ the ideal generated by the minors $v_1,\ldots, v_{m+1}$ of $A(\Gamma)$ and call it the {\em generic} monomial ideal attached to the tree $\Gamma$.
\begin{Corollary} \label{genmon} The ideal $I(\Gamma)$ is a Cohen--Macaulay ideal of codimension $2$. \end{Corollary}
\begin{proof} The greatest common divisor of the monomial generators $v_j$ of $I(\Gamma)$ is one. This can easily be seen from the formulas (\ref{recursion}) in the proof of Proposition~\ref{lahore}. The assertion then follows from \cite[Theorem 1.4.17]{BH}. \end{proof}
The generic ideal $I(\Gamma)$ has the following nice primary decomposition:
\begin{Proposition} \label{islamabad}
$I(\Gamma)=\bigcap_{1\leq i< j\leq m+1}(x_{ib(i,j)},x_{je(i,j)})$. \end{Proposition}
\begin{proof}
We prove the assertion by using induction on the number of edges of $\Gamma$. For
$|E(\Gamma)|=1$ we have \[ A(\Gamma)=(-x_{12},x_{21}), \] with $v_1=x_{21}$, $v_2=-x_{12}$. Therefore
$I(\Gamma)=(x_{21},x_{12})=(x_{1b(1,2)},x_{2e(1,2)})$. Now assume that the assertion is true if $|E(\Gamma)|=m-1\geq 1$. Since $\Gamma$ is a tree, it has a leaf; we may assume that $\{m,m+1\}$ is a leaf and that $m+1$ is a free vertex of $\Gamma$. The tree which is obtained from $\Gamma$ by removing the leaf $\{m,m+1\}$ will be denoted by $\Gamma'$. By the induction hypothesis we then have \[ I(\Gamma')=(v'_1,v'_2,...,v'_m)=\bigcap_{1\leq i<j\leq m}(x_{ib(i,j)},x_{je(i,j)}). \]
We may assume that the edge $\{m,m+1\}$ is the last in the order of edges. Then $(m-1)\times m$ matrix $A(\Gamma')$ is obtained from the $m\times (m+1)$-matrix $A(\Gamma)$ by deleting the last row \[ R_{m}=(0, \ldots, 0 , -x_{m,m+1}, x_{m+1,m}) \] and the last column \[ \begin{pmatrix} 0 \\ \vdots \\0 \\x_{m+1,m} \end{pmatrix} \] It follows that the minors $v_1,\ldots, v_{m+1}$ of $A(\Gamma)$ are given by \[ v_j=x_{m+1,m}v_j'\quad \text{for} \quad j=1,\ldots, m, \quad \text{and} \quad v_{m+1}=x_{m,m+1}v_m'. \]
Hence $$I(\Gamma)=(v_1,v_2,...,v_{m+1}).$$
On the other hand, by using the induction hypothesis and the fact that $e(i,m+1)=m$ for all $i\leq m$, we get \begin{eqnarray*} \bigcap_{1\leq i<j\leq m+1}(x_{ib(i,j)},x_{je(i,j)})&= &\bigcap_{1\leq i<j\leq m}(x_{ib(i,j)},x_{je(i,j)})\cap \bigcap_{i=1}^{m}(x_{ib(i,m+1)},x_{m+1,e(i,m+1)})\\ &=&(v'_1,v'_2,...,v'_m)\cap\bigcap_{i=1}^{m}(x_{ib(i,m+1)},x_{m+1,m})\\ &=&(v'_1,v'_2,...,v'_m)\cap(\prod_{i=1}^{m}x_{ib(i,m+1)},x_{m+1,m})\\ &=&(v'_1,v'_2,...,v'_m)\cap(x_{m,m+1}\prod_{i=1}^{m-1}x_{ib(i,m+1)},x_{m+1,m})\\ &=&(v'_1,v'_2,...,v'_m)\cap(x_{m,m+1}v'_{m},x_{m+1,m}). \end{eqnarray*} Here the third equality uses that the variables $x_{ib(i,m+1)}$, $i=1,\ldots,m$, are pairwise distinct. Observing that $\gcd(v_i',x_{m+1,m})=1$, it follows that \begin{eqnarray*} (v'_1,v'_2,...,v'_m)&\cap&(x_{m,m+1}v'_{m},x_{m+1,m})\\ &=&(x_{m+1,m}v'_1,x_{m+1,m}v'_2,...,x_{m+1,m}v'_m,x_{m,m+1}v'_{m})\\ &=&(v_1,v_2,...,v_m,v_{m+1})=I(\Gamma). \end{eqnarray*} Hence \[ I(\Gamma)=\bigcap_{1\leq i<j\leq m+1}(x_{ib(i,j)},x_{je(i,j)}), \] as desired. \end{proof}
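For the tree $\Gamma$ with edges $\{1,2\}$, $\{2,3\}$ and $\{2,4\}$ from the example above, Proposition \ref{islamabad} yields
\[
I(\Gamma)=(x_{12},x_{21})\cap(x_{12},x_{32})\cap(x_{12},x_{42})\cap(x_{23},x_{32})\cap(x_{24},x_{42})\cap(x_{32},x_{42}),
\]
one component for each pair $1\leq i<j\leq 4$.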
As an application of Proposition~\ref{lahore}, Corollary~\ref{genmon} and Proposition~\ref{islamabad} we obtain the following characterization of Cohen-Macaulay monomial ideals of codimension 2.
\begin{Theorem} \label{juergen} {\em (a)} Let $I\subset S=K[x_1,x_2,...,x_n]$ be a Cohen-Macaulay monomial ideal of codimension $2$ generated by $m+1$ elements. Then there exists a tree $\Gamma$ with $m+1$ vertices and for each edge $\{i,j\}$ of $\Gamma$ there exist monomials $u_{ij}$ and $u_{ji}$ in $S$ such that \begin{itemize} \item[{(i)}] $\gcd(u_{ib(i,j)},u_{je(i,j)})=1$ for all $i<j$, and \item[{(ii)}] $I=(\prod_{i=2}^{m+1}u_{ib(i,1)},\ldots,\prod_{i=1\atop i\neq j}^{m+1}u_{ib(i,j)},\ldots,\prod_{i=1}^{m}u_{ib(i,m+1)})$. \end{itemize} {\em (b)} Conversely, if $\Gamma$ is a tree on the vertex set $[m+1]$ and for each $\{i,j\}\in E(\Gamma)$ we are given monomials $u_{ij}$ and $u_{ji}$ in $S$ satisfying {\em (a)(i)}, then the ideal defined in {\em (a)(ii)} is Cohen-Macaulay of codimension $2$. \end{Theorem}
\begin{proof} (a) (ii) Let $A$ be an $m\times (m+1)$ matrix of Taylor relations which generate the relation module $U$ of $I$, and let $\Gamma$ be the corresponding relation tree. We apply the Hilbert--Burch Theorem (\cite[1.4.17]{BH}) according to which the ideal $I$ is generated by the maximal minors of $A$. The matrix $A$ is obtained from $A(\Gamma)$ by the substitution: \[
x_{ij}\mapsto u_{ij}. \] Therefore statement (ii) follows from Proposition~\ref{lahore}.
Now we shall prove assertion (i). For this we use Proposition \ref{islamabad} which says that \[ I(\Gamma)=\bigcap_{1\leq i<j\leq m+1}(x_{ib(i,j)},x_{je(i,j)}). \] Applying the substitution map introduced in the proof of (ii) we obtain \begin{eqnarray} \label{naeem} I\subseteq\bigcap_{i<j}(u_{ib(i,j)},u_{je(i,j)}). \end{eqnarray} Suppose $\gcd(u_{ib(i,j)},u_{je(i,j)})\neq 1$ for some $i$ and $j$. Then it follows from (\ref{naeem}) that $I$ is contained in a principal ideal. This is a contradiction, because $\height I=2$.
(b) Let $\Gamma$ be a tree with vertex set $[m+1]$ and $m$ edges. For each $\{i,j\}\in E(\Gamma)$ we have monomials $u_{ij}, u_{ji}\in S$ satisfying condition (a)(i). Let $A$ be the matrix obtained from $A(\Gamma)$ by the substitutions
$x_{ij}\mapsto u_{ij}$, and let $I$ be the ideal generated by the maximal minors of $A$.
It follows from Proposition~\ref{lahore} that $I=(v_1,\ldots, v_{m+1})$ where
$v_j=\prod_{i=1\atop i\neq j}^{m+1}u_{ib(i,j)}$.
First we shall prove that \[ \gcd(v_1,v_2,...,v_{m+1})=1. \] We shall prove this by induction on the number of edges of $\Gamma$. The assertion is trivial if $\Gamma$
has only one edge. Now let $|E(\Gamma)|=m>1$ and assume that the assertion is true for any tree with $m-1$ edges.
We may assume that $\{m,m+1\}$ is a leaf of $\Gamma$. Let $\Gamma'$ be the tree obtained from $\Gamma$ by removing the edge $\{m,m+1\}$. The matrix $A(\Gamma')$ is obtained from $A(\Gamma)$ by removing the row $(0,\ldots,-x_{m,m+1},x_{m+1,m})$ and the column \[ \begin{pmatrix} 0 \\ \vdots \\0 \\x_{m+1,m} \end{pmatrix}. \]
Let $A'$ be the matrix obtained from $A(\Gamma')$ by the substitutions $x_{ij}\mapsto u_{ij}$, and let $I'=(v_1',\ldots,v_m')$ be the ideal of maximal minors of $A'$ where, up to sign, $v_j'$ is the $j$th maximal minor of $A'$. Expanding the matrix $A$ we see that \[ v_{j}=\pm v'_ju_{m+1,m}\quad \text{for} \quad j=1,2,\ldots,m\quad \text{and}\quad v_{m+1}=\pm v'_{m}u_{m,m+1}. \]
Therefore \[ \gcd(v_1,v_2,\ldots,v_m,v_{m+1})=\gcd(v'_1u_{m+1,m},v'_2u_{m+1,m},...,v'_mu_{m+1,m},v_{m+1}). \] By induction hypothesis we have $\gcd(v'_1 , v'_2 ,..., v'_{m} )=1$, so that \[ \gcd(v'_1u_{m+1,m}, v'_2u_{m+1,m},...,v'_mu_{m+1,m} )=u_{m+1,m}. \] Hence
it is enough to prove that \[ \gcd(u_{m+1,m}, v_{m+1})=1. \] Note that $u_{m+1,m}=u_{m+1,e(i,m+1)}$ for all $i$, and $v_{m+1}= \prod_{i=1}^{m}u_{ib(i,m+1)}$. Therefore \[ \gcd(u_{m+1,m}, v_{m+1})= \gcd\bigl(u_{m+1,m}, \prod_{i=1}^{m}u_{ib(i,m+1)}\bigr)=1, \] since by our hypothesis (a)(i) we have $\gcd(u_{m+1,e(i,m+1)}, u_{ib(i,m+1)})=1$ for all $i$.
The Hilbert--Burch Theorem \cite[1.4.17]{BH} then implies that $I$ is a perfect ideal of codimension $2$, and hence a Cohen--Macaulay ideal. \end{proof}
\section{The possible sets of relation trees attached to Cohen-Macaulay monomial ideals of codimension 2}
In this section we want to study the set $\mathcal{T}(I)$ of all relation trees of a Cohen--Macaulay monomial ideal of codimension 2. In general one may have more than just one Hilbert--Burch matrix for an ideal $I$, and consequently more than one relation tree.
For example the ideal $I=(x_4x_5x_6,x_1x_5x_6,x_1x_2x_6,x_1x_2x_3x_5)\subset
S=K[x_1,x_2,x_3,x_4,x_5,x_6]$ has the following two Hilbert--Burch matrices \[ A_1= \begin{pmatrix}
-x_1 & x_4 & 0 & 0 \\ 0 & -x_2 & x_5 & 0 \\ 0 & 0 & -x_3x_5 & x_6 \end{pmatrix}, \]
or
\[ A_2= \begin{pmatrix}
-x_1 & x_4 & 0 & 0 \\ 0 & -x_2 & x_5 & 0 \\ 0 & -x_2x_3 & 0 & x_6 \end{pmatrix}. \] The corresponding relation trees are $\Gamma_1$ and $\Gamma_2$ with $E(\Gamma_1)=\{\{1,2\},\{2,3\},\{3,4\}\}$ and $E(\Gamma_2)=\{\{1,2\},\{2,3\},\{2,4\}\}$.
However in the generic case we have
\begin{Proposition} \label{first} Let $\Gamma$ be a tree on the vertex set $[m+1]$ and let $I(\Gamma)$ be the generic monomial ideal attached to $\Gamma$. Then ${\mathcal T}(I(\Gamma))=\{\Gamma\}$. \end{Proposition}
Recall that $I(\Gamma)$ is the ideal of maximal minors of the matrix $A(\Gamma)$ defined in (\ref{genericmatrix}). Up to signs the minors of $A(\Gamma)$ are the monomials $v_i=\prod_{r=1\atop r\neq i}^{m+1}x_{rb(r,i)}$, see Proposition~\ref{lahore}.
For the proof of Proposition~\ref{first} we shall need
\begin{Lemma} \label{sms} Let $\Gamma$ be a tree. Then $\{i,j\}$ is an edge of
$\Gamma$ if and only if \[ \lcm(v_{i},v_{j})=v_{j}x_{ji}=v_{i}x_{ij}. \] \end{Lemma}
\begin{proof} Let $\{i,j\}$ be an edge of $\Gamma$ and suppose that $i<j$. Note that \begin{eqnarray} b(k,i)=b(k,j) \end{eqnarray} for all $k$ which are different from $i$ and $j$, because if the path from $k$ to $i$ is $k=k_0,k_1,\ldots,k_l=i$, then the path from $k$ to $j$ will be $k=k_0,k_1,\ldots,k_{l-1}=j$ or $k=k_0,k_1,\ldots,k_{l-1},i,j$ since $\{i,j\}$ is an edge of $\Gamma$. Now using this identity we have \[ v_i=\pm\prod_{r=1\atop r\neq i}^{m+1}x_{rb(r,i)} =\pm\prod_{r=1\atop r\neq i,j}^{m+1}x_{rb(r,i)}x_{jb(j,i)} =\pm \prod_{r=1\atop r\neq i,j}^{m+1}x_{rb(r,i)}x_{ji} =\pm \prod_{r=1\atop r\neq i,j}^{m+1}x_{rb(r,j)}x_{ji}. \] Similarly $v_j=\pm \prod_{r=1\atop r\neq i,j}^{m+1}x_{rb(r,j)}x_{ij}$. Hence $\lcm(v_{i},v_{j})=v_{j}x_{ji}=v_{i}x_{ij}$.
On the other hand, suppose that $\{i,j\}$ is not an edge of $\Gamma$, then there exists a vertex, different from $i$ and $j$, say $k$, which belongs to the path from $i$ to $j$. Therefore $b(k,i)\neq b(k,j)$, and hence $x_{kb(k,i)}\neq x_{kb(k,j)}$. Since $x_{kb(k,i)}\mid v_i$ and since $x_{kb(k,j)}\mid v_j$ we cannot have $\lcm(v_{i},v_{j})=~v_{j}x_{ji}=v_{i}x_{ij}$. \end{proof}
\begin{proof}[Proof of Proposition \ref{first}] Since all monomial generators of $I(\Gamma)$ are of degree $m$ and since, by the Hilbert--Burch Theorem \cite[1.4.17]{BH}, these generators are the maximal minors of any of its Hilbert--Burch matrices, it follows that all Hilbert--Burch matrices must be linear. However by Lemma~\ref{sms} we have only $m$ linear Taylor relations. Therefore there exists only one Hilbert--Burch matrix for $I$. \end{proof}
In contrast to the result stated in Proposition~\ref{first} we have
\begin{Proposition} \label{second} Let $I=(u_1,\ldots,u_{m+1})$ be the monomial ideal in $K[x_1,\ldots,x_{m+1}]$ with $u_i=x_1\cdots x_{i-1}x_{i+1}\cdots x_{m+1}$ for $i=1,\ldots,m+1$. Then ${\mathcal T}(I)$ is the set of all possible trees on the vertex set $[m+1]$. \end{Proposition}
\begin{proof} Let $\Gamma$ be an arbitrary tree on the vertex set $[m+1]$. For each edge $\{i,j\}$ of $\Gamma$ take the monomial generators $u_i$ and $u_j$ of $I$. Then we have the Taylor relation $x_je_j-x_ie_i$. Let $A$ be the $m\times (m+1)$-matrix whose rows $(0,\cdots,-x_i,\cdots,x_j,\cdots,0)$ correspond to the Taylor relations $x_je_j-x_ie_i$ arising from the edges of $\Gamma$. Observe that the generic matrix $A(\Gamma)$ is mapped to $A$ by the substitutions $x_{ij}=x_i$. Moreover the maximal minor $\pm v_i$ of $A(\Gamma)$ is mapped to $u_i$ for all $i$. Therefore the $u_i$ are the maximal minors of $A$, which shows that $A$ is a Hilbert--Burch matrix of $I$. \end{proof}
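For instance, for $m+1=3$ one has $I=(x_2x_3,\,x_1x_3,\,x_1x_2)$, and the path $1-2-3$ and the tree with edges $\{1,2\},\{1,3\}$ yield the two Hilbert--Burch matrices
\[
\begin{pmatrix} -x_1 & x_2 & 0\\ 0 & -x_2 & x_3 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} -x_1 & x_2 & 0\\ -x_1 & 0 & x_3 \end{pmatrix},
\]
whose maximal minors are in both cases, up to sign, $x_2x_3$, $x_1x_3$ and $x_1x_2$.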
In order to study the general nature of $\mathcal{T}(I)$ we introduce the following concept. Let $\mathcal S$ be a finite set. Recall that a collection $\mathcal B$ of subsets of $\mathcal S$ is said to be the {\em set of bases of a matroid}, if all $B\in \mathcal B$ have the same cardinality and if the following exchange property is satisfied:
\noindent For all $B_1, B_2\in \mathcal B$ and $i\in B_1\setminus B_2$, there exists $j\in B_2\setminus B_1$ such that $(B_1\setminus\{i\})\union \{j\}\in\mathcal B$.
\noindent A classical example is the following: let $K$ be a field, $V$ a $K$-vector space and ${\mathcal S}=\{v_1,\ldots, v_r\}$ any finite set of vectors of $V$. Let $\mathcal B$ be the set of subsets $B$ of $\mathcal S$ with the property that $B$ is a maximal set of linearly independent vectors in $\mathcal S$. It is easy to check and well known that $\mathcal B$ is the set of bases of a matroid.
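For instance, if ${\mathcal S}=\{(1,0),(0,1),(1,1)\}\subset K^2$, then $\mathcal B$ consists of the three $2$-element subsets of $\mathcal S$, and the exchange property can be verified directly.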
\begin{Proposition} \label{matroid} Let $I\subset S$ be a Cohen--Macaulay monomial ideal of codimension $2$. Then ${\mathcal T}(I)$ is the set of bases of a matroid. \end{Proposition}
\begin{proof} Let $I$ be minimally generated by the monomials $u_1,\ldots ,u_{m+1}$ and let
\[ 0\To G\To F\To I\To 0 \] be the graded minimal free $S$-resolution of $S/I$.
The set $\mathcal S$ of Taylor relations generates the first syzygy module $U$ of $I$ which is isomorphic to the free $S$-module $G$. Consider the graded $K$-vector space $U/{\frk m} U$ where ${\frk m}=(x_1,\ldots, x_n)$ is the graded maximal ideal of $S$. Note that $\dim_K U/{\frk m} U=m$. Since the relations $r_{ij}$ generate $U$ it follows that their residue classes $\bar{r}_{ij}$ in the $K$-vector space $U/{\frk m} U$ form a system of generators of $U/{\frk m} U$. By the homogeneous version of Nakayama's lemma (see \cite[1.5.24]{BH}) it follows that a subset $B=\{r_{i_1j_1},\ldots, r_{i_mj_m}\}$ of the Taylor relations $\mathcal S$ is a minimal set of generators of $U$ (and hence establishes a Hilbert--Burch matrix of $I$) if and only if $\{\bar{r}_{i_1j_1},\ldots, \bar{r}_{i_mj_m}\}$ is a basis of the $K$-vector space $U/{\frk m} U$. The desired conclusion follows, since the relation trees of $I$ correspond bijectively to the set of Hilbert--Burch matrices of $I$. \end{proof}
Let $G$ be a finite simple connected graph. A maximal subtree $\Gamma\subset G$ is called a {\em spanning tree} of $G$. It is well known and easy to see that the set ${\mathcal T}(G)$ of spanning trees is the set of bases of a matroid.
Here we are interested in the spanning trees of the graph $G(I)$ whose set of edges is given by \[ E(G(I))=\Union_{\Gamma\in \mathcal{T}(I)}E(\Gamma). \] We call $G(I)$ the {\em Taylor graph} of $I$. Obviously we have ${\mathcal T}(I)\subset {\mathcal T}(G(I))$. The question arises whether ${\mathcal T}(I)={\mathcal T}(G(I))$. Unfortunately this is not always the case as the example at the beginning of this section shows. Indeed, in this example, ${\mathcal T}(I)=\{\Gamma_1,\Gamma_2\}$ with $E(\Gamma_1)=\{\{1,2\},\{2,3\},\{3,4\}\}$ and $E(\Gamma_2)=\{\{1,2\},\{2,3\},\{2,4\}\}$, so that $E(G(I))=\{\{1,2\},\{2,3\},\{2,4\},\{3,4\}\}$. This graph has the spanning trees $\Gamma_1, \Gamma_2$ and $\Gamma_3$ with $E(\Gamma_3)=\{\{1,2\},\{2,4\},\{3,4\}\}$. If $\Gamma_3$ were a relation tree of $I$, then \[ A=\begin{pmatrix}
-x_1 & x_4 & 0 & 0 \\ 0 & -x_2x_3 & 0 & x_6 \\ 0 & 0 & -x_3x_5 & x_6 \end{pmatrix} \] would have to be a Hilbert--Burch matrix of $I$, which is not the case since the ideal of maximal minors of $A$ is the ideal $x_3I$.
However we have
\begin{Theorem} \label{linear} Let $I$ be a Cohen--Macaulay monomial ideal of codimension $2$ with linear resolution. Then ${\mathcal T}(I)={\mathcal T}(G(I))$. \end{Theorem}
\begin{proof} Since $I$ has a linear resolution, it follows that all Hilbert--Burch matrices of $I$ are matrices with linear entries. Let $L=\{r_1,\ldots, r_k\}$ be the set of linear Taylor relations. We may assume that $r_1,\ldots, r_m$ are the rows of a Hilbert--Burch matrix of $I$, in other words, that $r_1,\ldots, r_m$ is a basis of the first syzygy module $U$ of $I$.
We first claim that $r_{i_1},\ldots, r_{i_m}\in L$ is a basis of $U$ if and only if the relations $r_{i_1},\ldots, r_{i_m}$ are $K$-linearly independent. Obviously, the relations must be $K$-linearly independent in order to form a basis of the free $S$-module $U$. Conversely, assume that $r_{i_1},\ldots, r_{i_m}$ are $K$-linearly independent. Since each $r_{i_j}$ belongs to $U$ we can write \[ r_{i_j}=f_{1j}r_1+f_{2j}r_2+\ldots +f_{mj}r_m \quad \text{with} \quad f_{lj}\in S. \] The presentation can be chosen such that all $f_{lj}$ are homogeneous and such that $\deg f_{lj}r_l=\deg r_{i_j}=1$ for all $l$ and $j$. In other words, $\deg f_{lj}=0$ for all $l$ and $j$. Therefore the $m\times m$-matrix $F=(f_{lj})$ is a matrix with coefficients in $K$. Since, by assumption, the relations $r_{i_1},\ldots, r_{i_m}$ are $K$-linearly independent, it follows that $F$ is invertible. This implies that the relations $r_1,\ldots,r_m$ are linear combinations of the relations $r_{i_1},\ldots, r_{i_m}$. Therefore these relations generate $U$ as well, and in fact form a basis of $U$, since $U$ is free of rank $m$.
Our considerations so far have shown that the set of Hilbert--Burch matrices of $I$ corresponds bijectively to the maximal $K$-linearly independent subsets of $L$. Each $r_i\in L$ is a row vector with exactly two non-zero entries. We attach to $r_i$ the edge $e_i=\{k,l\}$, if the two non-zero entries of $r_i$ are at position $k$ and $l$, and claim that \[ E(G(I))=\{e_1,\ldots,e_k\}. \] Indeed, according to the definition of $G(I)$ an edge $e$ belongs to $E(G(I))$, if there exists a relation tree $T$ of $I$ with $e\in E(T)$. This is equivalent to saying that there exist linearly independent $r_{i_1},\ldots, r_{i_m}\in L$ such that $e=e_{i_j}$ for some $j$. Now choose $e_i\in \{e_1,\ldots,e_k\}$. Then $r_i$ can be completed to a maximal set $\{r_i, r_{i_2},\ldots,r_{i_m}\}$ of $K$-linearly independent elements in $L$. This shows that $e_i\in E(G(I))$ for $i=1,\ldots,k$, so that $\{e_1,\ldots,e_k\}\subset E(G(I))$. The other inclusion is trivially true.
In order to complete the proof of the theorem we need to show that each spanning tree $T$ of $G(I)$ is a relation tree of $I$. Let $e_{i_1},\ldots,e_{i_m}$ be the edges of the tree. To prove that $T$ is a relation tree amounts to showing that the relations $r_{i_1},\ldots, r_{i_m}$ are $K$-linearly independent.
A free vertex of $T$ is a vertex which belongs to exactly one edge. Since $T$ is a tree, it has at least one free vertex. Say, $1$ is this vertex and $e_{i_1}$ is the edge to which the free vertex $1$ belongs. Removing the edge $e_{i_1}$ from $T$ we obtain a tree $T'$ on the vertex set $\{2,3,\ldots,m+1\}$. After renumbering the vertices and edges if necessary, we may assume that $2$ is a free vertex of $T'$ and $e_{i_2}$ the edge to which $2$ belongs. Proceeding in this way we get, after a suitable renumbering of the vertices and edges of $T$, a free vertex ordering of the edges, that is, for all $j=1,\ldots,m$ the edges $e_{i_j},e_{i_{j+1}},\ldots,e_{i_m}$ form the set of edges of a tree for which $j$ is a free vertex belonging to $e_{i_j}$. Since renumbering of vertices and of edges of $T$ means for the corresponding matrix of relations simply permutation of the rows and columns, the rank of the relation matrix is unchanged. However in this new ordering, if we skip the last column of the $m\times (m+1)$ relation matrix we obtain an upper triangular $m\times m$ matrix with non-zero entries on the diagonal. This shows that the relations $r_{i_1},\ldots, r_{i_m}$ are $K$-linearly independent, as desired. \end{proof}
Finally we will describe all the possible Taylor graphs of a Cohen--Macaulay monomial ideal of codimension $2$ with linear resolution. Then, together with Theorem~\ref{linear}, we have a complete description of all possible relation trees for such ideals.
Let $G$ be a finite connected simple graph on the vertex set $[n]$. Recall that a subset $C$ of $[n]$ is called a {\em clique} of $G$ if for all $i$ and $j$ belonging to $C$ with $i \neq j$ one has $\{ i , j \} \in E(G)$. The set of all cliques $\Delta(G)$ is a simplicial complex, called the {\em clique complex} of $G$.
\begin{Theorem} \label{chordal} Let $G$ be a finite connected simple graph. Then the following are equivalent: \begin{enumerate} \item[{\em (a)}] $G$ is a Taylor graph of a Cohen--Macaulay monomial ideal of codimension $2$ with linear
resolution.
\item[{\em (b)}] $G$ is a chordal graph with the property that any two distinct maximal cliques have at most one vertex in common. \end{enumerate} \end{Theorem}
\begin{proof} (a)\implies (b): Let $I$ be minimally generated by $m+1$ monomials and $G=G(I)$, and let $C$ be a cycle of $G$. We first show that the restriction $G^\prime$ of $G$ to $C$ is a complete graph, that is, we show that for any two distinct vertices $i,j\in C$ it follows that $\{i,j\}\in E(G)$. In particular, this will imply that $G$ is chordal.
For simplicity we may assume that $E(C)=\{e_1,\ldots, e_k\}$ with $k\geq 3$ and $e_i=\{i,i+1\}$ for $i=1,\ldots, k-1$ and $e_k=\{k,1\}$. Let $r_1,\ldots,r_k$ be the corresponding relations. Let $\epsilon_i$, $i=1,\ldots, m+1$, be the canonical basis vectors of $S^{m+1}$. Then $r_i=-a_i\epsilon_i+b_i\epsilon_{i+1}$ for $i=1,\ldots,k-1$ and $r_k=-b_k\epsilon_1+a_k\epsilon_{k}$, where $a_i$ and $b_i$ belong to $\{x_1,\ldots,x_n\}$. Assume that $r_1,\ldots, r_k$ are $K$-linearly independent. Then $r_1,\ldots,r_k$ can be completed to a maximal $K$-linearly independent subset $r_1,\ldots,r_m$ of $L$. (Here we use the notation introduced in the proof of Theorem~\ref{linear}.) Let $\Gamma$ be the tree corresponding to $r_1,\ldots,r_m$. Then $C$ is a subgraph of $\Gamma$, which is a contradiction. Thus we see that the relations $r_1,\ldots,r_k$ are $K$-linearly dependent, which implies at once that $a_1=b_k$ and $a_i=b_{i-1}$ for $i=2,\ldots,k$. Hence we have $r_1+\cdots+ r_i=-a_1\epsilon_1+b_{i}\epsilon_{i+1}$ for $i=1,\ldots,k-1$. This implies that $\{1,i\}$ is an edge of $G$ for $i=2,\ldots,k$. By symmetry, also the other edges $\{i,j\}$ with $2\leq i<j\leq k$ belong to $G$.
Now let $G_1$ and $G_2$ be two distinct maximal cliques of $G$, and assume that they have two vertices in common, say, the vertices $i$ and $j$. Let $k\in G_1\setminus \{i,j\}$ and $l\in G_2\setminus \{i,j\}$. Then the graph $C$ with edges $\{i,k\}, \{k,j\}, \{j,l\}, \{l,i\}$ is a cycle in $G$. Therefore, by what we have shown, it follows that $\{k,l\}$ is an edge of $G$. Thus for any two distinct vertices $k, l\in V(G_1)\union V(G_2)$ it follows that $\{k,l\}\in E(G)$, contradicting the fact that $G_1$ and $G_2$ are distinct maximal cliques of $G$.
(b)\implies (a): Let $C_1,\ldots, C_r$ be the maximal cliques of the chordal graph $G$, and let $\Delta(G)$ be the clique complex of $G$. Then the $C_i$ are the facets of $\Delta(G)$. One version of Dirac's theorem \cite{D} says that $\Delta(G)$ is a quasi-forest, see \cite{HHZ}. This means that there is an order of the facets, say, $C_1,C_2,\ldots, C_r$ such that for each $i$ there is a $j<i$ with the property that $C_k\sect C_i\subset C_j\sect C_i$ for all $k<i$. Given this order, our hypothesis (b) implies that for each $i=2,\ldots,r$ there exists a vertex $k_i\in C_i$ such that $C_i\sect C_{i-1}=\{k_i\}$ and $C_i\sect C_j=\{k_i\}$ for all $j<i$ with $C_i\sect C_j\neq \emptyset$. The following example illustrates the situation. Let $G$ be the graph on the vertex set $[7]$ with edges $\{1,2\}, \{1,3\}, \{2,3\}, \{3,4\}, \{3,5\}, \{4,5\}, \{5,6\}, \{5,7\}$.
Then $G$ is a connected simple graph satisfying the condition in (b). The maximal cliques of $G$ ordered as above are $C_1=\{1,2,3\}$, $C_2=\{3,4,5\}$, $C_3=\{5,6\}$ and $C_4=\{5,7\}$ and intersection vertices are $k_2=3$, $k_3=5$ and $k_4=5$.
After having fixed the order of the cliques, we may assume that the vertices of $G$ are labeled as follows: if $|C_1\union \cdots\union C_i|=s_i$, then $C_1\union \cdots\union C_i=\{1,2,\ldots,s_i\}$. In other words, $C_1=\{1,\ldots,s_1\}$ and $C_i\setminus \{k_i\}=\{s_{i-1}+1,\ldots, s_i\}$ for $i>1$. The vertices on the graph in Figure~1 are labeled in this way. Now we let $\Gamma\subset G$ be the spanning tree of $G$ whose edges are $\{j,k_2\}$ with $j\in C_1$ and $j\neq k_2$, and for $i=2,\ldots,r$ the edges $\{j,k_i\}$ with $j\in C_i$ and $j\neq k_i$. In our example the edges of $\Gamma$ are $\{1,3\}$, $\{2,3\}$, $\{3,4\}$,$\{3,5\}$, $\{5,6\}$ and $\{5,7\}$.
Let $m+1=s_r$. Then $m+1$ is the number of vertices of $G$. We now assign to $\Gamma$ the following $m\times (m+1)$-matrix $A$ whose rows $r_e$ correspond to the edges $e$ of $\Gamma$ as follows: we set $r_e=-x_{1j}\epsilon_j+x_{1k_2}\epsilon_{k_2}$ for $e=\{j,k_2\}$ and $j\in C_1$ with $j\neq k_2$, and we set $r_e=-x_{ij}\epsilon_j+x_{ik_i}\epsilon_{k_i}$ for $e=\{j,k_i\}$ and $j\in C_i$ with $j\neq k_i$ and $i>1$. Here $\epsilon_i$ denotes the $i$th canonical unit vector in ${\NZQ R}^{m+1}$.
The rows $r_e$ can be naturally ordered according to the size of $j$ in the edge $e=\{j,k_i\}$. Thus in our example we obtain the matrix \[ \begin{pmatrix} -x_{11} & 0 & x_{13} & 0 & 0 & 0 & 0\\ 0 & -x_{12} & x_{13} & 0 & 0 & 0 & 0\\ 0 & 0 & x_{23} & -x_{24} & 0 & 0 & 0\\ 0 & 0 & x_{23} & 0 & -x_{25} & 0 & 0\\ 0 & 0 & 0 & 0 & x_{35} & -x_{36} & 0\\ 0 & 0 & 0 & 0 & x_{45} & 0 & -x_{47}\\ \end{pmatrix} \] Our next goal is to show that our matrix $A$ is a Hilbert--Burch matrix. We apply Theorem~\ref{juergen}. For each edge $\{i,j\}\in \Gamma$ the monomials $u_{ij}$ and $u_{ji}$ are, according to the choice of $A$, the following: \[ u_{jk_2}=-x_{1j},\quad u_{k_2j}=x_{1k_2}\quad \text{for}\quad j<k_2, \] and for $i=2,\ldots, r$ \[ u_{k_ij}=x_{ik_i},\quad u_{jk_i}=-x_{ij} \quad \text{for}\quad k_i<j,\quad j\in C_i. \] According to Theorem~\ref{juergen}(b) we have to show that $\gcd(u_{i b(i,j)},u_{j e(i,j)})=1$ for all $i<j$. Assume first that $i,j\not\in\{k_2,\ldots,k_r\}$. Then $u_{i b(i,j)}=-x_{ti}$ for $i\in C_t$ and $u_{j e(i,j)}=-x_{sj}$ for $j\in C_s$. Thus in this case $\gcd(u_{i b(i,j)},u_{j e(i,j)})=1$. In the second case let $i\not\in\{k_2,\ldots,k_r\}$ and $j\in\{k_2,\ldots,k_r\}$, say $j=k_s$. Then $b(i,j)=k_t$ for $i\in C_t$ and so $u_{ib(i,j)}=-x_{ti}$. Suppose $\{i,j\}$ is not an edge; then $e(i,j)=b(j,i)=b(k_s,i)$ is either $k_{s+1}$ or $k_{s-1}$. Then $u_{je(i,j)}$ is either $-x_{(s)j}$ or $-x_{(s-1)j}$. On the other hand, if $\{i,j\}$ is an edge, then $e(i,j)=i$, and so $u_{j e(i,j)}=u_{ji}=u_{k_si}=x_{sj}$. Thus in this case, too, $\gcd(u_{i b(i,j)},u_{j e(i,j)})=1$. Finally assume that $i,j\in\{k_2,\ldots,k_r\}$, and let $i=k_{s_1},k_{s_1+1},\ldots,k_{s_2}=j$ be the path from $i$ to $j$. Then $b(i,j)=k_{s_1+1}$ and $e(i,j)=k_{s_2-1}$ so $u_{ib(i,j)}=x_{s_1i}$ and $u_{je(i,j)}=-x_{(s_2-1)j}$. Thus again in this case we have $\gcd(u_{i b(i,j)},u_{j e(i,j)})=1$. Thus in all cases $\gcd(u_{i b(i,j)},u_{j e(i,j)})=1$ for all $i<j$, as desired.
Let $I$ be the codimension $2$ ideal whose relation matrix is $A$, and let $\{i,j\}$ be any edge of $G$. It remains to be shown that there exists a relation tree $\Gamma'$ with $\{i,j\}\in E(\Gamma')$. If $\{i,j\}\in E(\Gamma)$, we are done. Now assume that $\{i,j\}\not\in E(\Gamma)$. We may assume that $i,j\in C_t$. Let $s=t$ if $t>1$, and $s=2$ if $t=1$. We replace the row $-x_{tj}\epsilon_j+x_{tk_s}\epsilon_{k_s}$ of $A$ by the difference of the rows \[ -x_{ti}\epsilon_i+x_{tj}\epsilon_{j}=(-x_{ti}\epsilon_i+x_{tk_s}\epsilon_{k_s})- (-x_{tj}\epsilon_j+x_{tk_s}\epsilon_{k_s}), \] and leave all the other rows of $A$ unchanged. The new matrix $A'$ is again a relation matrix of $I$ and the tree $\Gamma'$ corresponding to $A'$ is obtained from $\Gamma$ by removing the edge $\{j,k_s\}$ and adding the edge $\{i,j\}$. This completes the proof of the theorem. \end{proof}
\end{document}
\begin{document}
\author{Alix Deruelle} \title{Steady gradient Ricci soliton with curvature in $L^1$} \date{\today} \maketitle
\begin{abstract} We characterize complete nonnegatively curved steady gradient solitons with curvature in $L^1$. We show that they are isometric to a product $((\mathbb{R}^2,g_{cigar})\times (\mathbb{R}^{n-2}, \mathop{\rm eucl}\nolimits))/\Gamma $ where $\Gamma$ is a Bieberbach group of rank $n-2$. We also prove a similar local splitting result under weaker curvature assumptions. \end{abstract}
\section{Introduction}\label{Intro}
A \textbf{steady gradient Ricci soliton} is a triple $(M^n,g,\nabla f)$ where $(M^n,g)$ is a Riemannian manifold and $f$ is a smooth function on $M^n$ such that $\mathop{\rm Ric}\nolimits=\mathop{\rm Hess}\nolimits(f).$ It is said to be \textbf{complete} if the vector field $\nabla f$ is complete.
In this paper, we prove a rigidity result for steady gradient solitons of nonnegative sectional curvature with curvature in $L^1(M^n,g)$.
\begin{theo}\label{global-iso} Let $(M^n,g,\nabla f)$ be a complete nonflat steady gradient Ricci soliton such that
\begin{itemize} \item[(i)] $K\geq 0$, where $K$ is the sectional curvature of $g$, \item[(ii)] $R\in L^1(M^n,g)$. \end{itemize}
Then any soul of $M^n$ has codimension $2$ and is flat. Moreover, the universal covering of $M^n$ is isometric to $$(\mathbb{R}^2,g_{cigar})\times (\mathbb{R}^{n-2}, \mathop{\rm eucl}\nolimits), $$ and $\pi_1(M^n)$ is a Bieberbach group of rank $n-2$.
\end{theo}
This result is relevant in dimensions greater than two. Indeed, the cigar soliton is the only two-dimensional nonflat steady gradient soliton, see [\ref{Chow}] for a proof. Moreover, condition (i) is always true for an ancient solution of dimension $3$ with bounded curvature on compact time intervals because of the Hamilton-Ivey estimate ([\ref{Chow}], Chap.6, Section 5 for a detailed proof). Hence the following corollary.
\begin{coro}\label{global-iso-3} Let $(M^3,g,\nabla f)$ be a complete nonflat steady gradient Ricci soliton such that $$R\in L^1(M^3,g).$$ Then $(M^3,g)$ is isometric to $$((\mathbb{R}^2,g_{cigar})\times \mathbb{R})/\langle(t,\theta,u)\rightarrow (t,\theta+\alpha,u+a)\rangle,$$ with $(\alpha,a)\in\mathbb{R}\times \mathbb{R}^*$. \end{coro}
We make some remarks about Theorem \ref{global-iso}. First we recall the definition of the \textbf{cigar soliton} discovered by Hamilton.
The cigar metric on $\mathbb{R}^2$, in special radial coordinates, is $g_{cigar}:=ds^2+\tanh^2 s\, d\theta^2.$ A standard calculation shows that $R(g_{cigar})=16(e^s+e^{-s})^{-2}.$ The curvature is positive and decays exponentially; moreover, this metric is asymptotic to a cylinder of radius $1$, therefore $R\in L^1(\mathbb{R}^2, g_{cigar}).$
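For the reader's convenience, here is a quick way to perform that calculation, based on the classical formula $K=-\phi''/\phi$ for the Gauss curvature of a rotationally symmetric metric $ds^2+\phi(s)^2d\theta^2$: with $\phi(s)=\tanh s$,
\[
\phi''(s)=-\frac{2\tanh s}{\cosh^2 s},\qquad K=-\frac{\phi''}{\phi}=\frac{2}{\cosh^2 s},\qquad R(g_{cigar})=2K=\frac{4}{\cosh^2 s}=\frac{16}{(e^s+e^{-s})^{2}}.
\]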
Theorem \ref{global-iso} can be seen as a gap theorem in the terminology of Greene-Wu for the curvature decay of steady gradient solitons. In fact, one could assume in Theorem \ref{global-iso} that the scalar curvature decays faster than $1/r^{1+\epsilon}$ for $\epsilon>0$ instead of $R\in L^1(M^n,g)$. The proof is essentially the same. The conclusion is then that the curvature decays exponentially. In this direction, let us mention a result of Greene and Wu [\ref{Greene-Wu}] on nonnegatively curved spaces which are flat at infinity.
\begin{theo}[Greene-Wu]\label{flat-infinity} Let $M^n$ be a complete noncompact Riemannian manifold of nonnegative sectional curvature. If $M^n$ is flat at infinity, then either $(a)$ $M^n$ is flat or $(b)$ any soul of $M^n$ is flat and has codimension $2$, the universal covering of $M^n$ splits isometrically as $\mathbb{R}^{n-2}\times \Sigma_0$ where $\Sigma_0$ is diffeomorphic to $\mathbb{R}^2$ and is flat at infinity but not flat everywhere, and finally the fundamental group of $M^n$ is a Bieberbach group of rank $n-2$. \end{theo}
The idea of the proof of Theorem \ref{global-iso} consists in analysing the volume and the diameter of the level sets of $f$, where $f$ is seen as a Morse function. By a blow-up argument inspired by Perelman's classification of $3$-dimensional shrinking gradient Ricci solitons [\ref{Perelman}], we show that the level sets of $f$ are diffeomorphic to flat compact manifolds. In fact, by Bochner's theorem, the level sets of $f$ are metrically flat. From this, the global splitting of the universal cover is easily obtained. Finally, we follow closely the arguments of the proof of Theorem \ref{flat-infinity} to show that any soul has codimension $2$.\\
\textbf{Organization.} In section \ref{section1}, we recall basic equations for steady solitons, and study the topological structure at infinity. We establish some volume and diameter estimates of the level sets of $f$. In section \ref{section2}, we prove Theorem \ref{global-iso}, and, under weaker curvature assumptions, a local splitting result (cf. Theorem \ref{local-iso}). We give also a rigidity result on steady breathers. In section \ref{section3}, we make some further remarks about the link between the volume growth and the scalar curvature decay of a steady soliton, and give necessary geometric conditions to have a positive Perelman's functional on steady gradient solitons.\\
\textbf{Acknowledgements.} I would like to thank my advisor Laurent Bessières for his encouragement and his valuable insights on this problem.
\section{Geometry and topology of the level sets of $f$}\label{section1}
We begin by recalling the link ([\ref{Chow}], Chap.4) between steady gradient solitons and the Ricci flow.
\begin{theo} If $(M^n,g,\nabla f)$ is a complete steady gradient Ricci soliton then there exist a solution $g(t)$ to the Ricci flow with $g(0)=g$, a family of diffeomorphisms $(\psi_t)_{t\in\mathbb{R}}$ with $\psi_0=Id_{M^n}$ and functions $f(t)$ with $f(0)=f$ defined for all $t\in \mathbb{R}$ such that
\begin{itemize} \item[(i)] $\psi_t:M^n\rightarrow M^n$ is the 1-parameter group of diffeomorphisms generated by $-\nabla^g f$, \item[(ii)] $g(t)={\psi_t}^*g$ for all $t\in \mathbb{R}$; in particular, $g(t)$ is isometric to $g$, \item[(iii)] $f(t)={\psi_t}^*f$, for all $t\in \mathbb{R}$. \end{itemize} \end{theo}
Therefore, a steady soliton is an ancient solution to the Ricci flow, i.e. defined on an interval $]-\infty,\omega)$, where $\omega$ can be $+\infty$. Let us quote from ([\ref{Chow}],Chap. 2; Lemma 2.18) a result which follows from the strong and weak maximum principles for heat-type equations:
\begin{lemma}[Nontrivial ancient solutions have positive scalar curvature] If $(M^n,g(t))$ is a complete ancient solution (i.e. $(M^n,g(t))$ is complete for all $t \in (-\infty,\omega)$) to the Ricci flow with bounded curvature on compact time intervals, then either $R(g(t))>0$ for all $t\in (-\infty,\omega)$, or $\mathop{\rm Ric}\nolimits(g(t))=0$ for all $t\in (-\infty,\omega)$. \end{lemma}
Next, we collect the basic identities satisfied by a steady gradient soliton ([\ref{Chow}], Chap.4).
\begin{lemma}\label{id} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton. Then:
\begin{itemize} \item[(i)] $\Delta f = R$, where $\Delta :=\mathop{\rm trace}\nolimits_g\Hess$, \item[(ii)]$\nabla R+ 2\mathop{\rm Ric}\nolimits(\nabla f)=0$, \item[(iii)]$\arrowvert \nabla f \arrowvert^2+R=Cst.$ \end{itemize} \end{lemma}
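To indicate the flavour of these identities, note for instance that (iii) follows from (ii) and the soliton equation $\mathop{\rm Ric}\nolimits=\mathop{\rm Hess}\nolimits(f)$:
\[
\nabla\left(\arrowvert \nabla f \arrowvert^2+R\right)=2\mathop{\rm Hess}\nolimits(f)(\nabla f)+\nabla R=2\mathop{\rm Ric}\nolimits(\nabla f)+\nabla R=0 .
\]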
By the third identity, a steady soliton with bounded curvature is always complete because $\arrowvert\nabla f\arrowvert$ is bounded. In the sequel, we only consider steady gradient Ricci solitons with positive scalar curvature and bounded curvature. Such solitons are necessarily noncompact.
\begin{lemma}[Topological structure at infinity]\label{structure} Let $(M^n,g,\nabla f)$ be a steady gradient soliton such that $\mathop{\rm Ric}\nolimits\geq 0$ with $R>0$. Suppose that, $$\lim_{+\infty}R=0.$$ Then $R$ attains its supremum, $R_{max}$, at a point $p$, and on $M^n$, $$\arrowvert \nabla f \arrowvert^2+R=R_{max}.$$ Moreover, $f$ attains its minimum at $p$ and there exist constants $c_i=c_i(M^n, f,R)$ ($i=1...6$) such that \begin{itemize} \item[(i)] $\arrowvert\nabla f\arrowvert\leq c_1$, on $M^n$, \item[(ii)] $c_2\leq\arrowvert\nabla f\arrowvert$, at infinity, \item[(iii)]$c_3r_p(x)+c_4\leq f(x)\leq c_5r_p(x)+c_6$ at infinity, where $r_p$ is the distance function centered at $p$. \end{itemize} In particular, $M^n$ has finite topological type. \end{lemma}
Note that José A. Carrillo and Lei Ni [\ref{Carrillo}] have shown a similar lemma under weaker hypotheses: $\limsup_{x\rightarrow+\infty}R<\sup_{M^n}R=R(p)=R_{max}$ ($R$ is assumed to attain its supremum). For completeness, we give a short proof of this lemma following [\ref{Carrillo}].\\
\textbf{Proof of Lemma \ref{structure}}. As the scalar curvature $R$ tends to $0$ at infinity, it attains its maximum $R_{max}>0$ at a point $p$ of $M^n$. Moreover, we know that there exists a constant $C>0$ such that $\arrowvert \nabla f \arrowvert^2+R=C.$ In particular, $C\geq R_{max}$. Assume that $R_{max}<C$. Consider the flow $(\psi_t(p))_t$ generated by the vector field $\nabla f$. This flow is defined on $\mathbb{R}$ because $\nabla f$ is complete. Define the function $F(t):=f(\psi_t(p))$ for $t\in \mathbb{R}$. Then, $$F'(t)=\arrowvert \nabla f\arrowvert^2\quad\mbox{and}\quad F''(t)=2\mathop{\rm Ric}\nolimits(\nabla f,\nabla f),$$ implicitly evaluated at the point $\psi_t(p)$. By assumption, $F'(t)\geq C-R_{max}>0$ and $F'(0)=C-R_{max}=\min_{M^n}\arrowvert \nabla f\arrowvert^2$.
Now $F''(t)\geq 0$ since $\mathop{\rm Ric}\nolimits\geq 0$, i.e. $F'$ is a non-decreasing function on $\mathbb{R}$. As $F'(0)=C-R_{max}$ is the minimal value of $F'$, the function $F'$ is constant on $]-\infty,0]$ and $F(t)=(C-R_{max})t +f(p)$ for $t\leq 0$.
In particular, $\lim_{t\rightarrow-\infty}F(t)=-\infty$. As $f$ is continuous on $M^n$, this implies that $(\psi_t(p))_t$ is not bounded for $t\leq 0$. Therefore, there exists a subsequence $t_k\rightarrow -\infty$ such that $r_p(\psi_{t_k}(p))\rightarrow+\infty$. Thus, $\lim_{k\rightarrow+\infty}R(\psi_{t_k}(p))=0$. Now, $F''(t)=2\mathop{\rm Ric}\nolimits(\nabla f,\nabla f)=-g(\nabla R,\nabla f)=0$ for $t\leq 0$, i.e. $R(\psi_t(p))=R_{max}>0$ for $t\leq 0$. Contradiction. We have shown that $\arrowvert \nabla f \arrowvert^2+R=R_{max}.$ Then $\nabla f(p)=0$ and, $f$ being convex ($\Hess(f)=\mathop{\rm Ric}\nolimits\geq 0$), $f$ attains its minimum at $p$. Since $R$ tends to $0$ at infinity, we deduce that $\lim_{x\rightarrow+\infty}\arrowvert \nabla f \arrowvert^2=R_{max}>0$. Finally, we can show the inequalities satisfied by $f$ as in [\ref{Carrillo}].
$\square$\\
Remember that the critical set of a convex function is exactly the set where it attains its minimum. With the notations of the previous lemma, we have $$ \{f=\min_{M^n} f\}=\{\nabla f=0\}=\{R=R_{max}\}.$$
In the following, we suppose that $\min_{M^n} f=0$. Now, consider the compact hypersurfaces $M_t:=f^{-1}(t)$, levels of $f$, for $t$ positive. We will also denote the sublevels (resp. superlevels) of $f$ by $M_{\leq t}:=f^{-1}(]-\infty,t])$ (resp. $M_{\geq t}:=f^{-1}([t,+\infty[)$).
Let $(\phi_t)_t$ be the $1$-parameter group of diffeomorphisms generated by the vector field $\nabla f/\arrowvert\nabla f\arrowvert^2$ defined on $M^n\setminus M_0$. For $t_0>0$, $\phi_{t-t_0}$ is a diffeomorphism between $M_{t_0}$ and $M_t$ for $t\geq t_0$. Outside a compact set, $M^n$ is diffeomorphic to $[t_0, +\infty[\times M_{t_0}$ for $t_0>0$. We suppose $n>2$.
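Let us record the elementary computation behind this reparametrised flow: for $x\in M^n\setminus M_0$,
\[
\frac{d}{dt}\, f(\phi_t(x))=g\Big(\nabla f,\frac{\nabla f}{\arrowvert\nabla f\arrowvert^2}\Big)=1,
\]
so that $f(\phi_t(x))=f(x)+t$ and $\phi_{t-t_0}$ indeed maps $M_{t_0}$ onto $M_t$ for $t\geq t_0>0$.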
\begin{prop}[Volume estimate]\label{vol-hyper} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton such that \begin{itemize} \item[(i)] $\mathop{\rm Ric}\nolimits\geq 0$ and $R>0$, \item[(ii)] $\lim_{+\infty}R=0.$ \end{itemize}
Then $$0\leq A'(t)\leq c(t_0)\int_{M_t}\frac{R}{\arrowvert\nabla f\arrowvert}dA_t, \quad \forall t\geq t_0, $$ where $A(t):=Vol_{g_t}M_t$, $g_t$ is the induced metric on $M_t$ by $g$ and $c(t_0)$ is a positive constant depending on $\inf_{M_{\geq t_0}}\arrowvert \nabla f\arrowvert$.
\end{prop}
\textbf{Proof of Proposition \ref{vol-hyper}}. The curvature assumptions allow us to apply Lemma \ref{structure}. Thus, the hypersurfaces $M_t$ are well-defined for $t>0$. The flow of the hypersurface $M_t$ satisfies $\frac{\partial\phi_t}{\partial t} = (\nabla f/\arrowvert\nabla f\arrowvert^2)(\phi_{t}).$ Therefore, the first variation formula for the area of $M_t$ is given by $$A'(t)=\int_{M_t}\frac{H_t}{\arrowvert \nabla f\arrowvert} dA,$$ where $H_t$ is the mean curvature of $M_t$. Now the second fundamental form of $M_t$ is $$h_t:=\frac{\mathop{\rm Hess}\nolimits(f)}{\arrowvert \nabla f\arrowvert}=\frac{\mathop{\rm Ric}\nolimits}{\arrowvert \nabla f\arrowvert}.$$ So, since $H_t=\mathop{\rm trace}\nolimits_{g_t}h_t=(R-\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n}))/\arrowvert\nabla f\arrowvert$, $$A'(t)=\int_{M_t}\frac{R-\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n})}{\arrowvert \nabla f\arrowvert^2}dA,$$ where $\textbf{n}:=\nabla f/\arrowvert \nabla f\arrowvert$ is the unit outward normal to the hypersurface $M_t$. The first inequality comes from the nonnegativity of the Ricci curvature. The second one is due to the nonnegativity of the Ricci curvature and to the uniform boundedness from below of $\arrowvert \nabla f\arrowvert$ on $M_{\geq t_0}=\{f\geq t_0\}$.
$\square$\\
We deduce the following corollary by the co-area formula.
\begin{coro}\label{vol-borne} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton satisfying the hypotheses of Proposition \ref{vol-hyper}. Then,
$$A(t_0)\leq A(t)\leq A(t_0)+c(t_0)\int_{M_{t_0\leq s\leq t}}R d\mu, \quad \forall t\geq t_0.$$
\end{coro}
Consequently, a steady gradient soliton satisfying the assumptions of Proposition \ref{vol-hyper} with $R\in L^1(M^n,g)$ has linear volume growth, i.e., for any $p\in M^n$, there exist positive constants $C_1$ and $C_2$ such that for all $r$ large enough,
$$C_1r\leq \mathop{\rm Vol}\nolimits B(p,r)\leq C_2r.$$
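This is a direct consequence of the co-area formula: by Lemma \ref{structure}, $\arrowvert\nabla f\arrowvert$ is bounded between two positive constants outside a compact set and $f$ is comparable to the distance function $r_p$, so that
\[
\mathop{\rm Vol}\nolimits M_{t_0\leq s\leq t}=\int_{t_0}^{t}\Big(\int_{M_s}\frac{dA_s}{\arrowvert\nabla f\arrowvert}\Big)ds
\]
grows linearly in $t$ by the two-sided bounds on $A(s)$ given by Corollary \ref{vol-borne}.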
\begin{prop}[Comparison of the metrics $\phi_{t-t_0}^*g_t$ and $g_{t_0}$]\label{long}
Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton satisfying the assumptions of Proposition \ref{vol-hyper}. Let $V$ be a vector field tangent to $M_{t_0}$. Then, $$g_{t_0}(V,V)\leq (\phi_{t-t_0}^*g_t)(V,V), \quad \forall t\geq t_0.$$
\end{prop}
\textbf{Proof of Proposition \ref{long}}. Define $V(t):=d\phi_{t-t_0}(V)$ where $V$ is a unit tangent vector to $M_{t_0}$. Note that $V(t)$ is a tangent vector to $M_t$ by construction. Thus, $$\arrowvert V\arrowvert '=\frac {g(V',V)}{\arrowvert V\arrowvert}=\frac{\arrowvert V\arrowvert}{\arrowvert \nabla f\arrowvert^2} \mathop{\rm Hess}\nolimits(f)(\frac{V}{\arrowvert V\arrowvert},\frac{V}{\arrowvert V\arrowvert})= \frac{\arrowvert V\arrowvert}{\arrowvert \nabla f\arrowvert^2} \mathop{\rm Ric}\nolimits(\frac{V}{\arrowvert V\arrowvert},\frac{V}{\arrowvert V\arrowvert}).$$
Hence, $$\log\left(\begin{array}{rl}\frac{\arrowvert V\arrowvert(t)}{\arrowvert V\arrowvert(t_0)}\end{array}\right)= \int_{t_0}^t \frac{\arrowvert V\arrowvert'(s)}{\arrowvert V\arrowvert(s)}ds= \int_{t_0}^t \frac{1}{\arrowvert \nabla f\arrowvert^2(s)} \mathop{\rm Ric}\nolimits\left(\begin{array}{rl}\frac{V(s)}{\arrowvert V\arrowvert(s)},\frac{V(s)}{\arrowvert V\arrowvert(s)}\end{array}\right)ds.$$
The inequality follows from the assumption $\mathop{\rm Ric}\nolimits\geq 0$.
$\square$\\
We deduce the following corollary.
\begin{coro}[Distance comparison] \label{diam} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton satisfying the assumptions of Proposition \ref{vol-hyper}. Then,
$$d_{g_{t_0}}\leq d_{\phi^*_{t-t_0}g_t}, \quad \forall t\geq t_0.$$
\end{coro}
\textbf{Remark.} By the proof of Proposition \ref{long}, we also have the following upper estimate for $t\geq t_0$, $$d_t\leq e^{\int_{t_0}^t\frac{\sup_{M_s} R}{\arrowvert \nabla f\arrowvert ^2(s)}ds}d_{t_0},$$ which will not be used in this paper.\\
From now on, we consider the sequence of compact Riemannian manifolds $(M_{t},g_t)_{t\geq t_0}$ for $t_0>0$. In order to take a smooth Cheeger-Gromov limit of this sequence, one has to control the injectivity radius and the curvature and its derivatives of the metrics $g_t$ uniformly.
\begin{lemma}[Injectivity radius of $(M^n,g)$]\label{inj}
Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton such that
\begin{itemize} \item[(i)] $\mathop{\rm Ric}\nolimits\geq 0$ and $R>0$, \item[(ii)] $\lim_{+\infty}R=0.$ \end{itemize}
Then for any $t_0>0$, $$\mathop{\rm inj}\nolimits(M_{\leq t})\geq \min\left\{\frac{\pi}{\sqrt{ K_{[t_0,t]}}},\mathop{\rm inj}\nolimits(M_{\leq t_0})\right\},\quad \forall t\geq t_0,$$ where $K_{[t_0,t]}$ bounds from above the sectional curvatures of $g$ on $M_{t_0\leq s \leq t}$.\\ In particular, if $(M^n,g)$ has bounded curvature then it has positive injectivity radius.
\end{lemma}
\textbf{Proof of Lemma \ref{inj}.}
Let $t>0$ and define the topological retraction $\Pi_t:M^n\rightarrow M^n$ as follows: $\Pi_t(p)=p$ if $f(p)\leq t$ and $\Pi_t(p)=(\phi_{f(p)-t})^{-1}(p)$ otherwise.
The proof consists in showing that $\Pi_t$ is a distance-nonincreasing map.
Then one can argue as in the proof of Sharafutdinov [\ref{Sha}] to show the injectivity radius estimate.
Therefore, we want to show that $\Pi_t$ does not increase distances, i.e.,
$$d(\Pi_t(p_0),\Pi_t(p_1))\leq d(p_0, p_1), \quad (p_0, p_1 \in M^n).$$
Let $p_0, p_1\in M^n$, $t_0=f(p_0)$ and $t_1=f(p_1)$.
Assume w.l.o.g. that $t_0\leq t_1$.
Consider three cases.
(1) $t\geq t_1$. There is nothing to prove because $\Pi_t(p_0)=p_0$ and $\Pi_t(p_1)=p_1$.
(2) $t_0\leq t \leq t_1$. It suffices to show that $s\rightarrow d(p_0, \phi_{s-t}(q_1))$ is a nondecreasing function for $s\geq t$ and $q_1\in M_t$. Take a minimal geodesic $\gamma$ joining $p_0$ to $\phi_{s-t}(q_1)$. Now, $f\circ \gamma$ is a convex function. Thus, $0\leq s-t_0=f(\phi_{s-t}(q_1))-f(p_0)=f(\gamma(1))-f(\gamma(0))\leq g(\nabla f(\gamma(1)), \dot{\gamma}(1)).$ Since the velocity of $s\rightarrow\phi_{s-t}(q_1)$ is $\nabla f/\arrowvert\nabla f\arrowvert^2$, the first variation of arc length then shows that $s\rightarrow d(p_0, \phi_{s-t}(q_1))$ is nondecreasing. This proves the result.
(3) $t\leq t_0$. It is equivalent to show that $d(\phi_{t_0-t}(q_0),\phi_{t_1-t}(q_1))\geq d(q_0,q_1),$ for $q_0, q_1\in M_t$. By Proposition \ref{long}, $$d(q_0,q_1)\leq d(\phi_{t_0-t}(q_0),\phi_{t_0-t}(q_1)),$$
and by (2),
$$d(\phi_{t_0-t}(q_0),\phi_{t_0-t}(q_1))\leq d(\phi_{t_1-t_0}(\phi_{t_0-t}(q_1)),\phi_{t_0-t}(q_0)).$$
This gives the desired inequality.
$\square$\\
\begin{coro}[Diameter estimate]\label{diam-borne} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton with bounded curvature satisfying \begin{itemize} \item[(i)] $\mathop{\rm Ric}\nolimits\geq 0$, $R>0$ and $\lim_{+\infty} R=0$, \item[(ii)] $R\in L^1(M^n,g)$. \end{itemize}
Then, for any $t_0>0$, there exists a positive constant $D=D(t_0)$ such that,
$$\mathop{\rm diam}\nolimits(g_t)\leq D(t_0),\quad (\forall t\geq t_0).$$ \end{coro}
\textbf{Proof of Corollary \ref{diam-borne}.}
On the one hand, by Lemma \ref{inj} and the boundedness assumption on the curvature, the volume of small balls is uniformly bounded from below. On the other hand, by assumption (ii) and Corollary \ref{vol-borne}, the volume of any tubular neighbourhood of $M_t$ with fixed width is uniformly bounded from above, i.e., for $\alpha>0$, $\mathop{\rm Vol}\nolimits M_{t-\alpha\leq s \leq t+\alpha}$ is uniformly bounded from above in $t$.
Therefore, by a ball packing argument, one can uniformly bound the diameter of $M_t$.
$\square$\\ We now estimate the derivatives of the curvature of $g_t$.
\begin{lemma}\label{deriv} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton satisfying \begin{itemize} \item[(i)] $\mathop{\rm Ric}\nolimits\geq 0$ and $R>0$, \item[(ii)] $\lim_{+\infty}\arrowvert \mathop{\rm Rm}\nolimits(g)\arrowvert=0.$ \end{itemize} Then there exist constants $(C_k)_{k\geq0}$ depending on $t_0>0$ such that for all $t\geq t_0$, we have $$\arrowvert \nabla^k\mathop{\rm Rm}\nolimits(g_t)\arrowvert\leq C_k.$$ Moreover, $\lim_{t\rightarrow+\infty} \sup_{M_t}\arrowvert \mathop{\rm Rm}\nolimits(g_t)\arrowvert=0$. \end{lemma}
\textbf{Proof of Lemma \ref{deriv}}.\\ The fact that $\lim_{t\rightarrow+\infty} \sup_{M_t}\arrowvert \mathop{\rm Rm}\nolimits(g_t)\arrowvert=0$ follows from the Gauss equations: \begin{equation}
K_{g_t}(X,Y)=K_g(X,Y)+\mathop{\rm det}\nolimits h_t(X,Y),
\end{equation} where $X$ and $Y$ are orthonormal vectors tangent to $M_t$, and $\mathop{\rm det}\nolimits h_t(X,Y)$ stands for $h_t(X,X)h_t(Y,Y)-h_t(X,Y)^2$.
In order to estimate the covariant derivatives $\nabla^{g_t,k} \mathop{\rm Rm}\nolimits(g_t)$, it suffices to control those of $\mathop{\rm Rm}\nolimits(g)$. Indeed, if $A$ is a $p$-tensor and $(X_i)_{0\leq i\leq p}$, $p+1$ tangent vectors to $M_t$, then $$(\nabla^{g_t}-\nabla^{g})A(X_0,X_1,...,X_p)=\sum_{i=1}^{p}A(X_1,...,(\nabla^{g}_{X_0}X_i-\nabla^{g_t}_{X_0}X_i),...,X_p).$$
Now $(\nabla^{g}_{X_0}X_i-\nabla^{g_t}_{X_0}X_i)=-h_t(X_0,X_i)\textbf{n}.$ Consequently, $$(\nabla^{g_t}-\nabla^g)A=A\ast h_t,$$ where, if $A$ and $B$ are two tensors, $A\ast B$ means any linear combination of contractions of the tensorial product of $A$ and $B$. Define $U_k:=(\nabla^{g_t,k}-\nabla^{g,k})A$ for $k\in\mathbb{N^*}$. Then, \begin{eqnarray*} U_{k+1}&=&(\nabla^{g_t,k+1}-\nabla^{g,k+1})A\\ &=&(\nabla^{g_t}-\nabla^{g})(\nabla^{g_t,k}A)+\nabla^g(\nabla^{g_t,k}-\nabla^{g,k})A\\ &=&(\nabla^{g_t,k}A)\ast h_t +\nabla^gU_k\\ &=&\nabla^{g,k}A\ast h_t+U_k\ast h_t+\nabla^{g}U_k.
\end{eqnarray*}
By induction on $k$, we show that $U_k$ is a linear combination of contractions of the tensorial products of $(\nabla^{g,i}A)_{0\leq i\leq k-1}$ and $(\nabla^{g,j}h_t)_{0\leq j\leq k-1}$.
Now, bounding $(\nabla^{g,j}h_t)_{j\geq 0}$ means bounding $(\nabla^{g,i}\mathop{\rm Rm}\nolimits)_{i\geq 0}$ and bounding from below $\arrowvert\nabla f\arrowvert$. If we take $A=\mathop{\rm Rm}\nolimits(g)$, we see that bounding $(\nabla^{g_t,k} \mathop{\rm Rm}\nolimits(g_t))_{k\geq 0}$ amounts to bounding $(\nabla^{g,k} Rm(g))_{k\geq 0}$ and bounding from below $\arrowvert\nabla f\arrowvert$.
By Theorem 1.1 in [\ref{Shi}] due to W.X. Shi, there exists
$T=T(n,\sup_{M^n}\arrowvert K\arrowvert)>0$ and constants $\tilde C_k=\tilde C_k(n,\sup_{M^n}\arrowvert K\arrowvert)$ such that for any time $\tau\in ]0, T]$ and for all $k\geq 0$, one has $$\sup_{M^n}\arrowvert \nabla^{g(\tau),k}\mathop{\rm Rm}\nolimits(g(\tau))\arrowvert^2\leq \frac{\tilde C_k}{\tau^k}.$$ In our situation, the Ricci flow acts by isometries: $g(\tau)=\psi_{\tau}^*g$, for all $\tau \in \mathbb{R}$, where $(\psi_{\tau})_{\tau}$ is the $1$-parameter group of diffeomorphisms of $M^n$ generated by $-\nabla f$. Modulo a translation at a time slice $0<\tau\leq T$, we can assume $$\sup_{M^n}\arrowvert \nabla^{g,k}\mathop{\rm Rm}\nolimits(g)\arrowvert^2\leq C_k,$$ where $C_k=C_k(n,\sup_{M^n}\arrowvert K_g\arrowvert)$.
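More explicitly, since $g(\tau)={\psi_\tau}^*g$, the diffeomorphism $\psi_\tau$ is an isometry from $(M^n,g(\tau))$ onto $(M^n,g)$, so that Shi's estimate at the fixed time slice $\tau$ yields, for every $k\geq 0$,
\[
\sup_{M^n}\arrowvert \nabla^{g,k}\mathop{\rm Rm}\nolimits(g)\arrowvert^2=\sup_{M^n}\arrowvert \nabla^{g(\tau),k}\mathop{\rm Rm}\nolimits(g(\tau))\arrowvert^2\leq \frac{\tilde C_k}{\tau^k},
\]
which is the claimed bound with $C_k:=\tilde C_k/\tau^k$.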
This completes the proof.
$\square$\\
We are now in a position to apply the following theorem to the hypersurfaces $(M_t,g_t)_{t\geq t_{0}}$ assuming that $R\in L^1(M^n,g)$.
\begin{theo}[Cheeger-Gromov] Let $n\geq 2$, $(\lambda_i)_{i\geq0}$, $v$, $D\in (0,+\infty)$. The class of compact Riemannian $n$-manifolds $(N^n,g)$ satisfying $$\arrowvert \nabla^i\mathop{\rm Rm}\nolimits\arrowvert\leq \lambda_i,\quad \mathop{\rm diam}\nolimits\leq D, \quad v\leq \mathop{\rm Vol}\nolimits,$$ is compact for the smooth topology. \end{theo}
Apply this theorem to the sequence $(M_{k},g_k)_{k\geq t_0}$ with $$\lambda_i:=C_i, \quad v=A(t_0),\quad D:=D(t_0),$$ where the sequence $(C_p)_{p}$ comes from Lemma \ref{deriv} and $D(t_0)$ (resp. $A(t_0)$) is obtained by Corollary \ref{diam-borne} (resp. by Corollary \ref{vol-borne}). There exists a subsequence $(M_{k_i},g_{k_i})_{i}$ converging smoothly to a compact flat manifold $(M_{\infty},g_{\infty})$, the flatness of the limit coming from $\lim_{t\rightarrow+\infty}\sup_{M_t}\arrowvert \mathop{\rm Rm}\nolimits(g_t)\arrowvert=0$ (Lemma \ref{deriv}). By smooth convergence, $M_{k_i}$ is diffeomorphic to $M_{\infty}$ for $i$ large enough; since the hypersurfaces $M_t$, $t>0$, are mutually diffeomorphic via the flow $(\phi_t)_t$, the manifolds $M_t$ and $M_{\infty}$ are diffeomorphic for all $t>0$.
To sum it up, we have shown the
\begin{prop}\label{type-top} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton satisfying \begin{itemize} \item[(i)] $\mathop{\rm Ric}\nolimits\geq 0$ and $R>0$, \item[(ii)] $\lim_{+\infty}\arrowvert\mathop{\rm Rm}\nolimits(g)\arrowvert=0,$ \item[(iii)] $R\in L^1(M^n,g)$. \end{itemize}
Then the level sets of $f$, $M_t$ for $t>0$, are connected and are diffeomorphic to a compact flat $(n-1)$-manifold. \end{prop}
\textbf{Proof} The only thing we have to check is the connectedness of the hypersurfaces $M_t$ for $t>0$. If some $M_t$ had more than one component, then $M^n$ would be disconnected at infinity and therefore, by the Cheeger-Gromoll splitting theorem, it would split isometrically as a product $(\mathbb{R}\times N,dt^2+g_0)$ where $N$ is compact. Then $(N,g_0)$ would be a compact steady gradient soliton, necessarily trivial, and so would $(M^n,g)$ be, contradicting $R>0$. This proves the connectedness of the hypersurfaces $M_t$.
$\square$
\section{The local and global splitting}\label{section2} As seen in the introduction, a fundamental example of steady soliton discovered by Hamilton is the cigar soliton. An example of nontrivial steady gradient Ricci solitons with nonnegative sectional curvature and scalar curvature in $L^1$ in higher dimensions is the following: consider the metric product $(\mathbb{R}^2, g_{cigar})\times (\mathbb{R}^{n-2},\mathop{\rm eucl}\nolimits)$ and take a quotient by a Bieberbach group of rank $n-2$.
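Concretely, with the convention $\mathop{\rm Ric}\nolimits=\mathop{\rm Hess}\nolimits(f)$ used here, one may take as potential on this product the function $f(s,\theta,y)=2\log\cosh s$ (a routine verification), which depends only on the radial coordinate of the cigar factor, hence descends to the quotient, and satisfies
\[
\arrowvert\nabla f\arrowvert^2+R=4\tanh^2 s+\frac{4}{\cosh^2 s}=4=R_{max},
\]
in accordance with Lemma \ref{structure}.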
Theorem \ref{global-iso} shows that this is the only example satisfying these assumptions. We prove this result in section \ref{section2.1} below. A local splitting theorem under weaker curvature assumptions is proved in section \ref{section2.2}.
\subsection{The global splitting}\label{section2.1}
First, we need some background from the theory of nonnegatively curved Riemannian spaces. The presentation below follows closely Petersen [\ref{Petersen}]. The main difficulty in obtaining a global result comes from the set $M_0=\{f=\min_{M^n} f\}$. Such a set is \textbf{totally convex}, i.e., any geodesic of $M^n$ connecting two points of $M_0$ is contained in $M_0$. More generally, any sublevel set of a convex function $f$, $M_{\leq t}=\{p\in M^n/f(p)\leq t\}$, is totally convex. In order to have a better understanding of this notion, we briefly sum up its general properties ([\ref{Petersen}], Chap.11).
\begin{prop} Let $A\subset (M^n,g)$ be a totally convex subset of a Riemannian manifold. Then $A$ has an interior which is a totally convex submanifold of $M^n$ and a boundary $\partial A$, not necessarily smooth, which satisfies the hyperplane separation property. \end{prop}
Moreover, if $A$ is closed, there exists a unique projection on $A$ defined on a neighbourhood of $A$. More precisely, we state Proposition 1.2 of Greene and Shiohama [\ref{Greene-Sh}] :
\begin{prop}[Greene-Shiohama] Let $A\subset (M^n,g)$ be a totally convex closed subset. Then there exists an open subset $U\subset M^n$ such that
\begin{itemize} \item[(i)] $A\subset U$, \item[(ii)] for any point $p\in U$, there exists a unique point $\pi (p)\in A$ satisfying $d(p,A)=d(p,\pi(p))$, \item[(iii)]the map $\pi:U\rightarrow A$ is continuous, \item[(iv)] for any $p\in U$, there exists a unique geodesic connecting $p$ to $A$ and it is contained in $U$. \end{itemize}
If $A$ is compact, we can choose $U$ as $\{p\in M^n/d(p,A)<\epsilon\}$, where $\epsilon$ depends on $A$. \end{prop}
Therefore, the geometric situation near a totally convex closed subset is the same as in the case of $\mathbb{R}^n$. In the case of Theorem \ref{global-iso}, we have a smooth convex exhaustion function $f$ and a special totally convex compact set $\{f=\min_{M^n} f=0\}=M_0$. We would like to understand the topological links (at least) between the levels $M_t$ for $t>0$ and the boundary of the $\epsilon$-neighbourhood $M_{0,\epsilon}:=\{p\in M^n/d(p,M_0)\leq\epsilon\}$ of $M_0$ for $\epsilon>0$. Cheeger-Gromoll [\ref{Cheeger}] and Greene-Wu [\ref{Greene-Wu}] give a nice answer:
\begin{prop}[Cheeger-Gromoll; Greene-Wu]\label{delta-vois} Let $f$ be a smooth convex exhaustion function on a Riemannian manifold $(M^n,g)$ with sectional curvature bounded from above. Then, with the previous notations, for $\epsilon>0$ small enough and $0<\delta\ll\epsilon$, the boundary of the $\delta$-neighbourhood of $M_0$ and the level $M_{\epsilon}$ are homeomorphic, i.e. $$\partial M_{0,\delta}\simeq M_{\epsilon}.$$ \end{prop}
Finally, we recall the notion of \textbf{soul}. A soul $S\subset M$ of a Riemannian manifold $(M,g)$ is a closed totally convex submanifold.
This notion was made famous by the Soul theorem of Cheeger and Gromoll [\ref{Cheeger}].
\begin{theo}[Soul Theorem] Let $(M^n,g)$ be a complete Riemannian manifold with nonnegative sectional curvature. Then there exists a soul $S^k$ of $M^n$ such that $M^n$ is diffeomorphic to the normal bundle of $S^k$. \end{theo}
We are now in a position to prove Theorem \ref{global-iso}.\\
\textbf{Proof of Theorem \ref{global-iso}}. First of all, as the Ricci flow acts by isometries in this case, the sectional curvature is nonnegative for any time in $\mathbb{R}$. Consider for $p\in M^n$ and for time $\tau\in\mathbb{R}$, $$\eta(p,\tau):=\{v\in T_pM / \mathop{\rm Ric}\nolimits_{g(\tau)}(v)=0\}.$$
We recall that the evolution equation of the Ricci curvature under the Ricci flow $g(\tau)$ satisfies: $\partial_{\tau}\mathop{\rm Ric}\nolimits=\Delta_L\mathop{\rm Ric}\nolimits ,$ where $\Delta_L$ means the Lichnerowicz laplacian for the metric $g(\tau)$ acting on symmetric $2$-tensors $T$ by $\Delta_L T_{ij}:=\Delta T_{ij}+2R_{iklj}T_{kl} -R_{ik}T_{jk}-R_{jk}T_{ik}$.
Thus, we can use Lemma 8.2 of Hamilton [\ref{Hamilton1}] to claim that $\eta(p,\tau)$ is a smooth distribution invariant by parallel translation and time-independent. Here, time-independence is clear because the flow acts by isometries.
For any $p\in M^n$, we have an orthogonal decomposition invariant by parallel transport,
$$T_pM=\eta(p,0)\oplus \{v\in T_pM / \mathop{\rm Ric}\nolimits_g(v,v)>0\}=:\eta(p)\oplus\eta^{\perp}(p).$$
As these distributions are parallel, by the weak version of de Rham's theorem [\ref{Petersen}, Chap.8], for any point $p\in M^n$ there exists a neighbourhood $U_p$ such that
$$(U_p,g)=(U_1,g_1)\times(U_2,g_2),$$
where $TU_1=\eta\mid U_1$ and $TU_2=\eta^{\perp}\mid U_2$.\\
\begin{claim}
$\mathop{\rm dim}\nolimits \eta(p)=n-2$ for every $p\in M^n$.\\
\end{claim} \begin{proof}[Proof of claim 1]
We recall that the second fundamental form $h_t$ of $M_t$ satisfies $$h_t=\frac{\mathop{\rm Hess}\nolimits(f)}{\arrowvert \nabla f\arrowvert}.$$ Thus, the second fundamental form is nonnegative, i.e., $M_t$ is convex. Moreover, the Gauss equation tells us \begin{equation}
K_{g_t}(X,Y)=K_g(X,Y)+\mathop{\rm det}\nolimits h_t(X,Y), \label{eq:1}
\end{equation} where $X$ and $Y$ are tangent to $M_t$. Consequently, if we take an orthonormal basis $(E_i)_i$ of $TM_t$, orthogonal to $\textbf{n}$, we have \begin{equation} \mathop{\rm Ric}\nolimits(g_t)(X,X)=\mathop{\rm Ric}\nolimits(g)(X,X)-K_g(X,\textbf{n})+\sum_i \mathop{\rm det}\nolimits h_t(X,E_i), \label{eq:2} \end{equation} where $X$ is tangent to $M_t$. Tracing the previous identity, we get \begin{equation} R(g_t)=R(g)-2\mathop{\rm Ric}\nolimits(g)(\textbf{n},\textbf{n}) +(H_t)^2-\arrowvert h_t\arrowvert^2. \label{eq:3} \end{equation}
By \eqref{eq:1}, we conclude that we have a family of metrics $g_t$ for $t>0$ on $M_t$ of nonnegative sectional curvature. Now, by Lemma \ref{inj}, we know that $(M^n,g)$ has positive injectivity radius. Therefore, as the scalar curvature lies in $L^1(M^n,g)$ and is a Lipschitz function (since $\nabla R=-2 \mathop{\rm Ric}\nolimits(\nabla f)$ is bounded on $M^n$), one has $\lim_{+\infty}R=0$, hence $\lim_{+\infty} \arrowvert \mathop{\rm Rm}\nolimits(g)\arrowvert=0$ because the sectional curvature is nonnegative. Consequently, Proposition \ref{type-top} can be applied and $M_t$ is diffeomorphic to a compact flat $(n-1)$-manifold. Therefore, by Bieberbach's theorem ([\ref{Buser}] for a geometric proof), there exists a finite covering $\tilde M_{t}$ of $M_t$ which is topologically a torus $\mathbb{T}^{n-1}$. To sum it up, we have a family of metrics $\tilde g_t$, for $t>0$, of nonnegative sectional curvature on an $(n-1)$-torus. So, we conclude that the metrics $\tilde g_t$ are flat (and so are the $g_t$) by the equality case in the Bochner theorem [\ref{Petersen}, Chap. 7]. Consequently, the previous identities imply \begin{eqnarray}
R&=&2\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n})(>0), \label{eq:4} \\ \mathop{\rm Ric}\nolimits(g)(X,X)&=&K_g(X,\textbf{n})\quad \mbox{for any spherical $X$}, \label{eq:5} \\
\mathop{\rm det}\nolimits h_t(X,E_i)&=&0, \quad\mbox{for any $i$ and any spherical $X$}, \label{eq:6} \\ K_g(X,Y)&=&0\quad \mbox{for any spherical plane $(X,Y)$}. \end{eqnarray} By \eqref{eq:4}, $\textbf{n}$ is in $\eta^{\perp}$. \eqref{eq:6} means that the rank of $\mathop{\rm Ric}\nolimits$ restricted to the hypersurfaces $M_t$ for $t>0$ is at most $1$. Finally, the mean curvature $H_t=(R-\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n}))/\arrowvert\nabla f\arrowvert=\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n})/\arrowvert\nabla f\arrowvert$ is positive by \eqref{eq:4}. Thus, the rank of $\mathop{\rm Ric}\nolimits$ restricted to the hypersurfaces $M_t$ for $t>0$ is exactly $1$.\\
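For completeness, let us indicate how these identities follow from the flatness of the $g_t$, the nonnegativity of $K_g$ and the positive semidefiniteness of $h_t$. Since $g_t$ is flat, the left-hand sides of \eqref{eq:1}, \eqref{eq:2} and \eqref{eq:3} vanish. In \eqref{eq:1} both terms on the right-hand side are nonnegative, hence
\[
K_g(X,Y)=\mathop{\rm det}\nolimits h_t(X,Y)=0 \quad\mbox{for any spherical plane $(X,Y)$,}
\]
which gives \eqref{eq:6} and the last identity; \eqref{eq:2} then reduces to \eqref{eq:5}; finally, $R-2\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n})=\sum_{i\neq j}K_g(E_i,E_j)=0$, which is \eqref{eq:4} (and, by \eqref{eq:3}, $H_t^2=\arrowvert h_t\arrowvert^2$).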
\end{proof}
Now, the universal covering $\tilde{M}^n$ of $M^n$ is isometric to $(M_1^2,g_1)\times (M_2^{n-2},g_2)$ where $TM_1=\{\mathop{\rm Ric}\nolimits_g>0\}$ and $TM_2=\{\mathop{\rm Ric}\nolimits_g\equiv 0\}$. Because of the nonnegativity of sectional curvature, $(M_2,g_2)=(\mathbb{R}^{n-2},\mathop{\rm eucl}\nolimits)$. Thus $(M_1^2,g_1(\tau))_{\tau\in\mathbb{R}}$ is a complete $2$-dimensional steady gradient soliton with positive scalar curvature. By [\ref{Chow}, corollary B.12], $(M_1^2,g_1(\tau))_{\tau\in\mathbb{R}}$ is necessarily the cigar soliton.
The last thing we need is the nature of the fundamental group of $M^n$. \begin{claim} If $S$ is a soul of $M^n$ then it has codimension $2$ and it is flat. \end{claim}
\begin{proof}[Proof of claim 2.]
Indeed, as $S$ is compact and $f$ is convex, $f_{\mid S}$ is constant so $TS\subset \{\mathop{\rm Ric}\nolimits_g\equiv 0\}$. Thus, $S$ has codimension at least $2$ and $S$ is flat since $S$ is totally geodesic in a flat space. Moreover, by Bieberbach's theorem, the rank on $\mathbb{Z}$ of $\pi_1(S)$ is $\mathop{\rm dim}\nolimits S$.
Assume that $S$ has codimension greater than $2$. We obtain a contradiction by linking the fundamental groups $\pi_1(M_t)$ of the hypersurfaces $M_t$ and $\pi_1(S)$ as in Greene-Wu [\ref{Greene-Wu}]. One can assume that $S\subset M_0$ by construction of a soul.
On the one hand, Proposition \ref{delta-vois} tells us that for $\delta$ small enough,
$$\pi_1(M_t)=\pi_1(\partial M_{0,\delta}).$$
On the other hand, $ M_{0,\delta}$ and the $\delta$-disc bundle $\nu_{\leq\delta}(S):=\{(p,v)\in S\times (T_pS)^{\perp}/\arrowvert v\arrowvert \leq \delta\}$ are homeomorphic by [\ref{Cheeger}].
Thus, $\pi_1(M_t)=\pi_1(\nu_{\delta}(S))$ where $\nu_{\delta}(S):=\{(p,v)\in S\times (T_pS)^{\perp}/\arrowvert v\arrowvert = \delta\}$.
Now, the fibre of the fibration $\nu_{\delta}(S)\rightarrow S$ is a $k$-sphere with $k\geq 2$ hence simply-connected since the codimension of $S$ is at least $3$. The homotopy sequence of the fibration shows that $\pi_1(\nu_{\delta}(S))=\pi_1(S)$. To sum it up we have for $t>0$,
$$\pi_1(S)=\pi_1(M_t).$$ Now the hypersurfaces $(M_t,g_t)$ are flat. In particular, this implies that the rank on $\mathbb{Z}$ of $\pi_1(M_t)$ is $n-1>\mathop{\rm dim}\nolimits S=rk_{\mathbb{Z}}(\pi_1(S))$. Contradiction. \end{proof}
Finally, by [\ref{Cheeger}], we know that the inclusion $S^{n-2}\rightarrow M^n$ is a homotopy equivalence, in particular $\pi_1(M^n)=\pi_1(S)$. So, $\pi_1(M^n)$ is a Bieberbach group of rank $n-2$.
$\square$\\
\subsection{The local splitting}\label{section2.2} Without the nonnegativity of sectional curvature, we lose the global splitting. Nonetheless, under a weaker assumption on the sign of curvature, we still get a local splitting, away from the minimal set $M_0$ of $f$. More precisely,
\begin{theo}\label{local-iso}
Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton such that \begin{itemize} \item[(i)] $\mathop{\rm Ric}\nolimits\geq 0$ and $R>0$, \item[(ii)]$\arrowvert\nabla f\arrowvert^2R\geq 2 \mathop{\rm Ric}\nolimits(\nabla f,\nabla f)$, \item[(iii)]$\lim_{+\infty}\arrowvert \mathop{\rm Rm}\nolimits(g)\arrowvert=0,$ \item[(iv)]$R\in L^1(M^n,g)$. \end{itemize}
Then, $M^n\setminus M_{0}$ is locally isometric to $(\mathbb{R}^2,g_{cigar})\times (\mathbb{R}^{n-2}, \mathop{\rm eucl}\nolimits) $. \end{theo}
\textit{\textbf{Remark.}} Assumption (ii) seems to be ad hoc. Nonetheless, note that (ii) is verified if the sum of the spherical sectional curvatures is nonnegative, i.e., if for any $t>0$, $$\sum_{i=1}^{n-1}K_g(X,E_i)\geq 0,$$
where $X$ is tangent to $M_t$ and $(E_i)_{1\leq i\leq n-1}$ is an orthonormal basis of $TM_t$. Note that $R=2\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n})+\sum_{1\leq i,j\leq n-1}K_g(E_i,E_j),$ where $\textbf{n}$ is the unit outward normal to $M_t$. This condition clearly holds if $(M^n,g)$ has nonnegative sectional curvature. Finally, the inequality (ii) is an equality for surfaces. Therefore, Theorem \ref{local-iso} can be seen as a comparison theorem with the geometry of the cigar soliton. \\
\textbf{Proof of Theorem \ref{local-iso}.}
On the one hand, by assumptions (i), (iii) and (iv), we know (Proposition \ref{type-top}) that the hypersurfaces $M_t$ have a finite covering diffeomorphic to an $(n-1)$-torus. On the other hand, identity (\ref{eq:3}) from the previous proof, $$R(g_t)=R(g)-2\mathop{\rm Ric}\nolimits(g)(\textbf{n},\textbf{n}) +(H_t)^2-\arrowvert h_t\arrowvert^2,$$ combined with assumption (ii) and with the inequality $(H_t)^2\geq\arrowvert h_t\arrowvert^2$ (which holds since $h_t=\mathop{\rm Ric}\nolimits/\arrowvert\nabla f\arrowvert\geq 0$ by (i)), shows that the hypersurfaces $(M_t,g_t)$ have nonnegative scalar curvature. Therefore, we have obtained a sequence of $(n-1)$-tori $(\tilde M_t, \tilde g_t)$ with nonnegative scalar curvature. The Gromov-Lawson theorem (which is relevant in the case $n-1\geq 3$) [\ref{Lawson}] asserts that the metrics $\tilde g_t$ are flat. Thus, so are the $g_t$ for $t>0$. Now, along the same lines as in the proof of Theorem \ref{global-iso}, we get the following identities \begin{eqnarray} R=2\mathop{\rm Ric}\nolimits(\textbf{n},\textbf{n})(>0), \label{1'}\\ \mathop{\rm Ric}\nolimits(g)(X,X)=K_g(X,\textbf{n})\quad \mbox{ for any spherical $X$},\label{2'}\\ \mathop{\rm det}\nolimits h_t(X,E_i)=0,\quad\mbox{ for any $i$ and any spherical $X$}, \label{3'}\\ K_g(X,Y)=0, \quad\mbox{for any spherical plane $(X,Y)$}.\label{4'} \end{eqnarray}
In particular, identities (\ref{2'}) and (\ref{4'}) show that the sectional curvature $K_g$ is nonnegative outside the minimal set $M_0$. Here, the flow does not act isometrically on $M^n\setminus M_0$. Nonetheless, for any $p\in M^n\setminus M_0$, there exists a neighbourhood $(U_p, g(t))_{t\in[-T_p,T_p]}$ with $T_p>0$ contained in $(M^n\setminus M_0,g)$ so that the sectional curvature restricted to $(U_p, g(t))_{t\in[-T_p,T_p]}$ remains nonnegative. Thus, as the argument is local, we can use Lemma 8.2 in Hamilton [\ref{Hamilton1}] to claim that $\eta\arrowvert U_p$ is a smooth distribution invariant by parallel translation. According to the weak version of de Rham's theorem, for any $p\in M^n\setminus M_0$, there exists a neighbourhood $U_p$ such that
$$(U_p,g)=(U_1,g_1)\times(U_2,g_2),$$
where $TU_1=\eta\mid U_1$ and $TU_2=\eta^{\perp}\mid U_2$.
We can show, with the same arguments as before, that $\mathop{\rm dim}\nolimits \eta(p)=n-2$ for any $p\in M^n\setminus M_0$.\\
\begin{claim} $\textbf{n}:=\nabla f/\arrowvert \nabla f\arrowvert$ is an eigenvector of $\mathop{\rm Ric}\nolimits$. \end{claim} \begin{proof}[Proof of claim 3]
Let $p\in M_t$ for $t>0$ and $(e_i)_{i=1...n-1}$ an orthonormal basis of $TM_t$ at $p$. We assume that $\eta(p)$ is generated by $(e_i)_{i=2...n-1}$ and that $\eta^{\perp}(p)$ is generated by $\textbf{n}$ and $e_1$. By the previous local splitting, $$\mathop{\rm Ric}\nolimits_g(\textbf{n},e_i)=\mathop{\rm Ric}\nolimits_{g_2}(\textbf{n},0)+\mathop{\rm Ric}\nolimits_{g_1}(0,e_i)=0,$$ for $i=2...n-1$ and $$\mathop{\rm Ric}\nolimits_g(\textbf{n},e_1)=\mathop{\rm Ric}\nolimits_{g_2}(\textbf{n},e_1)=\frac{R_{g_2}(p)}{2}g_2(\textbf{n},e_1)=0,$$ since $\mathop{\rm dim}\nolimits \eta^{\perp}(p)=2$! Consequently, $\mathop{\rm Ric}\nolimits$ stabilizes $\textbf{n}$. \end{proof}
Now, $\mathop{\rm Ric}\nolimits$ restricted to $TM_t$ is given by $$\mathop{\rm Ric}\nolimits(X)=\mathop{\rm Rm}\nolimits(X,\textbf{n})\textbf{n},$$ for any spherical $X$. As $\mathop{\rm Ric}\nolimits$ is a symmetric endomorphism of $TM^n$, it stabilizes $TM_t$ too. Moreover, as $\nabla R+ 2\mathop{\rm Ric}\nolimits(\nabla f)=0$, we have for any spherical $X$, $$g(\nabla R,X)=0.$$ Thus, $R$ and $\arrowvert \nabla f\arrowvert ^2$ are radial functions.
Let $p\in M^n\setminus M_0$ and $U_p$ be a neighbourhood such that $(U_p,g)=(U_1^{n-2},g_1)\times(U_2^2,g_2).$ Locally, $g_2$ is $$g_2=dt^2+\phi^2(t,\theta)d\theta^2,$$ where $\phi$ is a smooth positive function on $U_2$. We claim that $\phi$ is radial, i.e., it does not depend on $\theta$. We know that $g=g_1+g_2=dt^2+g_t$ on $U_p$ and $g_t$ is flat, i.e. $\phi^2(t,\theta)d\theta^2+g_1$ is flat. In particular, the coefficients of such a metric are coordinate-independent since all the Christoffel symbols vanish. This proves the claim.
$\square$
To sum up, for any $p \in M^n\setminus M_0$, there exists a neighbourhood $U_p$ such that $$(U_p,g)=(U_1^{n-2},g_1)\times(U_2^2,dt^2+\phi^2(t)d\theta^2),$$ where $(U_1,g_1)$ is flat and $R_{g_2}=R_g=-\phi''/\phi>0$.\\ Consequently, $(U_2^2, dt^2+\phi^2(t)d\theta^2)$ is a $2$-dimensional rotationally symmetric steady gradient soliton with positive curvature. An easy calculation ([\ref{Chow}], App. B) shows that $\phi(t)=\frac{1}{a}\tanh(at),$ for some $a>0$, i.e., $g_2$ is a cigar metric.
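As a consistency check (we only verify that this profile has the expected curvature behaviour, using the standard fact that $dt^2+\phi^2(t)d\theta^2$ has Gauss curvature $-\phi''/\phi$): with $\phi(t)=\frac{1}{a}\tanh(at)$ one computes
$$\phi'(t)=\mathop{\rm sech}\nolimits^2(at),\qquad \phi''(t)=-2a\mathop{\rm sech}\nolimits^2(at)\tanh(at),\qquad -\frac{\phi''}{\phi}=2a^2\mathop{\rm sech}\nolimits^2(at)>0,$$
so the curvature is positive everywhere and tends to $0$ at infinity, as expected for the cigar.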
$\square$\\
\subsection{Steady breathers with nonnegative curvature operator and scalar curvature in $L^1$} We end this section with a rigidity result on steady breathers. Recall that a solution $(M^n,g(t))$ to the Ricci flow is called a steady breather if there exists $T>0$ and a diffeomorphism $\phi$ of $M^n$ such that $g(T)=\phi^*g(0)$. It is quite clear that a steady breather can be extended to an eternal solution. Perelman [\ref{Perelman}] showed that compact steady breathers are compact steady gradient Ricci solitons, hence Ricci-flat. In the noncompact case, the question is still open. In this direction, Hamilton [\ref{Hamilton2}] proved a more general result on (noncompact) eternal solutions with nonnegative curvature operator.
\begin{theo} [Hamilton]\label{eternal} If $(M^n,g(t))_{t\in \mathbb{R}}$ is a simply connected complete eternal solution to the Ricci flow with nonnegative curvature operator, positive Ricci curvature and such that $\sup_{M^n\times \mathbb{R}} R$ is attained at some point in space and time, then $(M^n,g(t))$ is a steady gradient Ricci soliton. \end{theo}
Combining this result with Theorem \ref{global-iso}, we get the following corollary.
\begin{coro}\label{Breather} Let $(M^n, g(t))_{t\in[0,T]}$ be a complete steady breather with nonnegative curvature operator bounded on $M^n\times [0,T]$ and $R_{g(0)}\in L^1(M^n,g(0))$. Then the universal covering of $M^n$ is isometric to $$(\mathbb{R}^2,g_{cigar})\times (\mathbb{R}^{n-2}, \mathop{\rm eucl}\nolimits), $$ and $\pi_1(M^n)$ is a Bieberbach group of rank $n-2$. \end{coro}
\textbf{Proof of Corollary \ref{Breather}.}
Recall that ancient solutions with bounded nonnegative curvature operator have nondecreasing scalar curvature [\ref{Chow}, Chap.10], i.e. $R(x,t_1)\leq R(x,t_2)$ for $t_1\leq t_2$ and $x\in M^n$. Therefore $t\rightarrow \sup_{M^n} R_{g(t)}$ is nondecreasing and is constant on a steady breather. Now, as $R_{g(0)}\in L^1(M^n,g(0))$, $R_{g(0)}$ is Lipschitz and $K\geq 0$, $\lim_{+\infty} R_{g(0)}=0$ because a Riemannian manifold with bounded nonnegative sectional curvature has positive injectivity radius [\ref{Sha}]. Hence, $\sup_{M^n\times \mathbb{R}} R$ is attained. Consider the universal Riemannian covering $(\tilde{M}^n, \tilde{g}(t))$ of $(M^n, g(t))$. By Hamilton's maximum principle [\ref{Hamilton1}], $(\tilde{M}^n, \tilde{g}(t))=(N^k,h(t))\times(\mathbb{R}^{n-k}, \mathop{\rm eucl}\nolimits),$ where $(N^k,h(t))$ is a simply connected complete eternal solution with nonnegative curvature operator, positive Ricci curvature and such that $\sup_{N^k\times \mathbb{R}} R_{h(t)}$ is attained. By Hamilton's theorem \ref{eternal}, $(N^k,h(t))$ is a steady gradient soliton $(N^k,h,\nabla f)$, and so is $(\tilde{M}^n, \tilde{g})$ with the same potential function $f$. The only thing to check according to Theorem \ref{global-iso} is that $(M^n,g(0))=(M^n,g)$ is a steady gradient soliton, i.e. the potential function $f$ is well-defined on $M^n$. By [\ref{Cheeger}, section 6], the fundamental group $\pi_1(M^n)$ is a subgroup of $\mathop{\rm Isom}\nolimits(N^k,h)\times\mathop{\rm Isom}\nolimits(\mathbb{R}^{n-k})$. Let $\psi\in\mathop{\rm Isom}\nolimits(N^k,h)$. We want to prove that $\psi^*f=f$. As $\psi$ is an isometry for the metric $h$, $\Hess_h(f-\psi^*f)=0$. Therefore, $\arrowvert\nabla(f-\psi^*f)\arrowvert=Cst=0$ because $N^k$ contains no lines, i.e., $f-\psi^*f=Cst$. Moreover, as $\mathop{\rm Ric}\nolimits_h>0$, i.e. $f$ is strictly convex, the scalar curvature attains its maximum at a unique point $p\in N^k$. Thus, $\psi(p)=p$. This proves that $f=\psi^*f$.
$\square$\\
\section{Scalar curvature decay and volume growth on a steady gradient soliton}\label{section3}
In this section, we try to understand the relations between the scalar curvature decay and the volume growth on a steady gradient soliton. We recall a result due to Munteanu and Sesum [\ref{Munteanu}]. \begin{lemma}\label{sesum} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton. Then, for any $p\in M^n$, there exists a constant $c_p>0$ such that $$\mathop{\rm Vol}\nolimits B(p,r)\geq c_p r,$$ for any $r\geq 1$. \end{lemma}
What happens if we assume a minimal volume growth on a steady gradient soliton? The answer can be given in terms of scalar curvature decay: \begin{lemma}\label{crois-boule-min} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton.
Assume that there exists $C_p>0$ such that $$\mathop{\rm Vol}\nolimits B(p,r)\leq C_p r,$$
for a fixed $p\in M^n$ and $r\geq 1$.\\
Then $R$ belongs to $L^1(M^n,g)$. \end{lemma}
\textbf{Proof of Lemma \ref{crois-boule-min}.} Let $p\in M^n$ and $r\geq 1$. Then, by the Stokes theorem applied to $f$, $$\int_{B(p,r)}Rd\mu=\int_{B(p,r)}\Delta fd\mu\leq\int_{\partial B(p,r)}\arrowvert \nabla f\arrowvert dA
\leq CA(p,r),$$
where $A(p,r)$ is the $(n-1)$-dimensional volume of the geodesic sphere $S(p,r)=\partial B(p,r)$ and where $C=\sup_{M^n}\arrowvert \nabla f\arrowvert<+\infty$.\\
Now, $\int_{0}^rA(p,s)ds=\mathop{\rm Vol}\nolimits B(p,r)$. Hence, the volume growth assumption tells us that there exists a sequence of radii $r_k\rightarrow +\infty$ such that the sequence $A(p,r_k)$ is bounded.
Therefore, there exists $C=C(p,\nabla f)$ such that for any $k\in\mathbb{N}$,
$$\int_{B(p,r_k)}Rd\mu\leq C.$$
As $M^n=\cup_{k}B(p,r_k)$, $R$ is in $L^1(M^n,g)$.
$\square$\\
We continue with a lemma concerning the ``minimal'' curvature decay of a steady gradient soliton with nonnegative Ricci-curvature: the scalar curvature is, in an average sense, at most inversely proportional to the distance. More precisely,
\begin{lemma}\label{minimal} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton with $\mathop{\rm Ric}\nolimits \geq 0$. Then, for any $p\in M^n$ and every $r>0$,
$$\frac{1}{\mathop{\rm Vol}\nolimits B(p,r)}\int_{B(p,r)}R d\mu\leq \frac{C}{r},$$
where $C=C(M^n,\nabla f)$. \end{lemma}
\textbf{Proof of Lemma \ref{minimal}.} As in the proof of Lemma \ref{crois-boule-min} , $$\frac{1}{\mathop{\rm Vol}\nolimits B(p,r)}\int_{B(p,r)}R d\mu=\frac{1}{\mathop{\rm Vol}\nolimits B(p,r)}\int_{B(p,r)}\Delta f d\mu\leq \frac{ \sup_{M^n}\arrowvert \nabla f\arrowvert}{r} \frac{rA(p,r)}{\mathop{\rm Vol}\nolimits B(p,r)}.$$ Now, by the Bishop-Gromov theorem ([\ref{Zhu}] for a recent and more general proof), $$\frac{rA(p,r)}{\mathop{\rm Vol}\nolimits B(p,r)}\leq n,$$ for any $p\in M^n$ and every $r>0$ since $M^n$ has nonnegative Ricci-curvature. The result is immediate with $C:=n \sup_{M^n}\arrowvert \nabla f\arrowvert.$
$\square$\\
We end this section by a remark concerning the vanishing of the geometric invariants $\lambda_{g,k}(M^n)$ introduced by Perelman [\ref{Perelman}] $(k=1)$ and by Junfang Li [\ref{Junfang}] $(k\geq 1)$. These invariants are defined for a complete Riemannian manifold $(M^n,g)$ in the following way: $$\lambda_{g,k}(M^n):=\inf \mathop{\rm Spec}\nolimits(-4\Delta+kR)=\inf_{\phi\in H_c^{1,2}(M^n)}\frac{\int_{M^n}4\arrowvert\nabla \phi\arrowvert^2 +kR\phi^2d\mu}{\int_{M^n}\phi^2},$$ where the infimum is taken over compactly supported functions in the Sobolev space $H^{1,2}(M^n)$ and where $k>0$. A sufficient condition to have these invariants well-defined is: $\inf_{M^n} R>-\infty$. For a complete steady gradient soliton $(M^n,g,\nabla f)$, $\lambda_{g,k}(M^n)\geq 0$. Moreover, if a steady soliton is compact, it is Ricci-flat since $\int_{M^n}Rd\mu=\int_{M^n}\Delta fd\mu=0$ by Stokes's theorem. Thus, in this case, $\lambda_{g,k}(M^n)= \inf\mathop{\rm Spec}\nolimits(-4\Delta)=0$. What about the noncompact case? Cheng and Yau [\ref{Cheng}] gave a necessary condition to have $\inf\mathop{\rm Spec}\nolimits(-\Delta)>0$ on a complete manifold. \begin{prop} \label{crit} Let $(M^n,g)$ be a complete Riemannian manifold. If the volume growth of geodesic balls is polynomial, i.e. if there exists $C>0$ and $k\geq 0$ such that $\mathop{\rm Vol}\nolimits B(p,r)\leq Cr^k$ for a fixed $p\in M^n$ and for any $r\geq 1$ then $\inf\mathop{\rm Spec}\nolimits(-\Delta)=0$. \end{prop}
Following closely their proof, we obtain the next result for steady gradient solitons.
\begin{prop}\label{lambda} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton satisfying $\lambda_{g,k}(M^n)>0$ for some $k>0$. Then the volume growth of geodesic balls is faster than polynomial, i.e., for any $m\geq 0$ and any $p\in M^n$, there exists $C=C(m,p,k,\nabla f)>0$ such that for $r$ large enough, $$\mathop{\rm Vol}\nolimits B(p,r)\geq C r^m.$$
\end{prop}
Before beginning the proof, we state a corollary which follows from Proposition \ref{lambda} and the Bishop-Gromov theorem.
\begin{coro} Let $(M^n,g,\nabla f)$ be a complete steady gradient soliton with nonnegative Ricci-curvature. Then, for any $k>0$,
$$\lambda_{g,k}(M^n)=0.$$
\end{coro}
\textbf{Proof of Proposition \ref{lambda}}. By assumption, we have $$\lambda_{g,k}(M^n)\int_{M^n}\phi^2d\mu\leq\int_{M^n}4\arrowvert \nabla \phi\arrowvert^2+kR\phi^2d\mu,$$ for any function with compact support in $H^{1,2}(M^n)$. Define the following function as in [\ref{Cheng}]: $$\phi(x)=\left\{\begin{array}{rl} 1 & \mbox{on $B(p,R)$}\\
(2R-r_p(x))/R & \mbox{on $B(p,2R)\setminus B(p,R)$}\\
0 & \mbox{on $M^n\setminus B(p,2R)$}\end{array}\right.$$ for $p\in M^n$ fixed and $R\geq 1$. The previous inequality applied to this function becomes, $$\lambda_{g,k}(M^n) \mathop{\rm Vol}\nolimits B(p,R)\leq 4R^{-2}\mathop{\rm Vol}\nolimits B(p,2R)+k\int_{M^n}\Delta f \phi^2d\mu.$$ Now, $$\int_{M^n}\Delta f \phi^2d\mu=-2\int_{M^n}g(\nabla f ,\nabla \phi)\phi d\mu\leq2\sup_{M^n}\arrowvert \nabla f\arrowvert R^{-1}\mathop{\rm Vol}\nolimits B(p,2R).$$ Thus, $$\lambda_{g,k}(M^n) \mathop{\rm Vol}\nolimits B(p,R)\leq C(M^n,\nabla f)R^{-1}\mathop{\rm Vol}\nolimits B(p,2R).$$ By lemma \ref{sesum}, there exists $c_p>0$ such that $\mathop{\rm Vol}\nolimits B(p,R)\geq c_pR$, for any $R\geq 1$ . As $\lambda_{g,k}(M^n)>0$, one has, $$\mathop{\rm Vol}\nolimits B(p,R)\geq CR^2,$$ for $R\geq 2$ and $C=C(p,k,\nabla f)$. The result follows by iterating this argument.
$\square$\\
{\raggedright Institut Fourier, Université de Grenoble I, UMR 5582 CNRS-UJF,\\ 38402, Saint-Martin d'Hères, France.\\ [email protected]}
\end{document}
Exchange Ratio Definition
By James Chen
What Is the Exchange Ratio?
The exchange ratio is the relative number of new shares that will be given to existing shareholders of a company that has been acquired or that has merged with another. After the old company shares have been delivered, the exchange ratio is used to give shareholders the same relative value in new shares of the merged entity.
The Formula for the Exchange Ratio Is
$$\text{Exchange Ratio} = \frac{\text{Target Share Price}}{\text{Acquirer Share Price}}$$
What Does the Exchange Ratio Tell You?
An exchange ratio is designed to give shareholders the amount of stock in an acquirer company that maintains the same relative value of the stock the shareholder held in the target, or acquired company. The target company share price is typically increased by the amount of a "takeover premium," or an additional amount of money an acquirer pays for the right to buy 100% of the company's outstanding shares and have a 100% controlling interest in the company.
Relative value does not mean, however, that the shareholder receives the same number of shares or same dollar value based on current prices. Instead, the intrinsic value of the shares and the underlying value of the company are considered when coming up with an exchange ratio.
The exchange ratio calculates how many shares an acquiring company needs to issue for each share an investor owns in the target, or acquired, company, to provide the same relative value to the investor.
The target company purchase price often includes a price premium paid by the acquirer due to buying control of 100% of the target's stock shares.
Example of How to Use the Exchange Ratio
An exchange ratio deal in a merger or acquisition is the opposite of a fixed-value deal, in which the buyer offers a set dollar amount to the seller; with an exchange ratio, the number of shares to be issued is fixed, so the dollar value of the consideration can fluctuate with the buyer's share price.
For example, imagine that the buyer offers the seller two shares of the buyer's company in exchange for 1 share of the seller's company. Prior to the announcement of the deal, the buyer's or acquirer's shares may be trading at $10, while the seller's or target's shares trade at $15. Due to the 2 to 1 exchange ratio, the buyer is effectively offering $20 for a seller share that is trading at $15.
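A minimal numerical sketch of this example in Python (illustrative figures only; the variable names are ours, not from any specific deal):

```python
# Implied per-share offer value under a fixed exchange ratio (illustrative).
def implied_offer_value(exchange_ratio, acquirer_price):
    """Value offered per target share = exchange ratio x acquirer share price."""
    return exchange_ratio * acquirer_price

acquirer_price = 10.0   # buyer's share price before the announcement
target_price = 15.0     # seller's share price before the announcement
ratio = 2.0             # two buyer shares offered per seller share

offer_value = implied_offer_value(ratio, acquirer_price)
print(f"Implied offer per target share: ${offer_value:.2f}")               # $20.00
print(f"Premium over market price:      ${offer_value - target_price:.2f}")  # $5.00
```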
Fixed exchange ratios are usually limited by caps and floors to reflect extreme changes in stock prices. Caps and floors prevent the seller from receiving significantly less consideration than anticipated, and they likewise prevent the buyer from giving up significantly more consideration than anticipated. Exchange ratios can also be accompanied by a cash component in a merger or acquisition, depending on the preferences of the companies involved in the deal.
Investment Implications
After a deal is announced, there is usually a gap between the seller's share price and the per-share value implied by the offer, reflecting the time value of money and the risk that the deal does not close. Some of these risks include the deal being blocked by the government, shareholder disapproval, or extreme changes in markets or economies.
Taking advantage of the gap, believing that the deal will go through, is referred to as merger arbitrage and is practiced by hedge funds and other investors. Leveraging the example above, assume that the buyer's shares stay at $10 and the seller's shares jump to $18. There will be a $2 gap that investors can secure by buying one seller share for $18 and shorting two buyer shares for $20.
If the deal closes, investors will receive two buyer shares in exchange for one seller share, closing out the short position and leaving investors with $20 in cash. Minus the initial outlay of $18, investors will net $2.
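A rough sketch of that merger-arbitrage bookkeeping (illustrative numbers; it ignores borrowing costs, dividends, and the scenario in which the deal breaks):

```python
# Merger-arbitrage P&L for a 2-for-1 all-stock deal (illustrative).
ratio = 2.0           # buyer shares received per seller share if the deal closes
seller_price = 18.0   # post-announcement price of one seller share
buyer_price = 10.0    # buyer share price

cash_out = seller_price           # buy 1 seller share
cash_in = ratio * buyer_price     # proceeds from shorting `ratio` buyer shares

# On closing, the seller share converts into `ratio` buyer shares,
# which are delivered to cover the short position at no extra cost.
print(f"Spread captured if the deal completes: ${cash_in - cash_out:.2f}")  # $2.00
```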
Volume 7, Issue 3 (2020), September 2020
Approximations of the ruin probability in a discrete time risk model
David J. Santana Luis Rincón
https://doi.org/10.15559/20-VMSTA158
Pub. online: 4 Aug 2020 Type: Research Article Open Access
Journal: Modern Stochastics: Theory and Applications Volume 7, Issue 3 (2020), pp. 221–243
Based on a discrete version of the Pollaczeck–Khinchine formula, a general method to calculate the ultimate ruin probability in the Gerber–Dickson risk model is provided when claims follow a negative binomial mixture distribution. The result is then extended for claims with a mixed Poisson distribution. The formula obtained allows for some approximation procedures. Several examples are provided along with the numerical evidence of the accuracy of the approximations.
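(Not from the article itself.) As a generic illustration of the quantity being approximated, here is a crude Monte Carlo sketch of the ultimate ruin probability for a discrete-time surplus process of the form $U_t = u + t - (X_1 + \dots + X_t)$ with i.i.d. non-negative integer claims (our reading of the Gerber–Dickson setup); the claim distribution and the horizon truncation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_probability_mc(u, claim_mean=0.5, horizon=1_000, n_paths=5_000):
    """Crude Monte Carlo estimate of the ultimate ruin probability for
    U_t = u + t - (X_1 + ... + X_t), with i.i.d. Poisson(claim_mean) claims
    (claim_mean < 1 so the surplus drifts upward). The infinite horizon is
    truncated at `horizon`, so this only approximates the ultimate ruin
    probability; conventions also differ on whether hitting 0 counts as ruin."""
    claims = rng.poisson(claim_mean, size=(n_paths, horizon))
    surplus = u + np.cumsum(1 - claims, axis=1)      # U_1, ..., U_horizon per path
    return float(np.mean(surplus.min(axis=1) < 0))   # ruin = surplus drops below 0

print(ruin_probability_mc(u=5))
```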
Simple approximations for the ruin probability in the risk model with stochastic premiums and a constant dividend strategy
Olena Ragulina
We deal with a generalization of the risk model with stochastic premiums where dividends are paid according to a constant dividend strategy and consider heuristic approximations for the ruin probability. To be more precise, we construct five- and three-moment analogues to the De Vylder approximation. To this end, we obtain an explicit formula for the ruin probability in the case of exponentially distributed premium and claim sizes. Finally, we analyze the accuracy of the approximations for some typical distributions of premium and claim sizes using statistical estimates obtained by the Monte Carlo methods.
On infinite divisibility of a class of two-dimensional vectors in the second Wiener chaos
Andreas Basse-O'Connor Jan Pedersen Victor Rohde
Pub. online: 28 Aug 2020 Type: Research Article Open Access
Infinite divisibility of a class of two-dimensional vectors with components in the second Wiener chaos is studied. Necessary and sufficient conditions for infinite divisibility are presented as well as more easily verifiable sufficient conditions. The case where both components consist of a sum of two Gaussian squares is treated in more depth, and it is conjectured that such vectors are infinitely divisible.
On distributions of exponential functionals of the processes with independent increments
Lioudmila Vostrikova
Pub. online: 8 Sep 2020 Type: Research Article Open Access
The aim of this paper is to study the laws of exponential functionals of the processes $X={({X_{s}})_{s\ge 0}}$ with independent increments, namely
\[ I_{t}=\int_{0}^{t}\exp(-X_{s})\,ds,\quad t\ge 0,\]
\[ I_{\infty}=\int_{0}^{\infty}\exp(-X_{s})\,ds.\]
Under suitable conditions, the integro-differential equations for the density of ${I_{t}}$ and ${I_{\infty }}$ are derived. Sufficient conditions are derived for the existence of a smooth density of the laws of these functionals with respect to the Lebesgue measure. In the particular case of Lévy processes these equations can be simplified and, in a number of cases, solved explicitly.
On tail behaviour of stationary second-order Galton–Watson processes with immigration
Mátyás Barczy Zsuzsanna Bősze Gyula Pap
Pub. online: 10 Sep 2020 Type: Research Article Open Access
Sufficient conditions are presented on the offspring and immigration distributions of a second-order Galton–Watson process ${({X_{n}})_{n\geqslant -1}}$ with immigration, under which the distribution of the initial values $({X_{0}},{X_{-1}})$ can be uniquely chosen such that the process becomes strongly stationary and the common distribution of ${X_{n}}$, $n\geqslant -1$, is regularly varying.
Ergodic properties of the solution to a fractional stochastic heat equation, with an application to diffusion parameter estimation
Diana Avetisian Kostiantyn Ralchenko
The paper deals with a stochastic heat equation driven by an additive fractional Brownian space-only noise. We prove that a solution to this equation is a stationary and ergodic Gaussian process. These results enable us to construct a strongly consistent estimator of the diffusion parameter.
\begin{definition}[Definition:Ordered Semigroup Isomorphism]
Let $\struct {S, \circ, \preceq}$ and $\struct {T, *, \preccurlyeq}$ be ordered semigroups.
An '''ordered semigroup isomorphism''' from $\struct {S, \circ, \preceq}$ to $\struct {T, *, \preccurlyeq}$ is a mapping $\phi: S \to T$ that is both:
:$(1): \quad$ A semigroup isomorphism from the semigroup $\struct {S, \circ}$ to the semigroup $\struct {T, *}$
:$(2): \quad$ An order isomorphism from the ordered set $\struct {S, \preceq}$ to the ordered set $\struct {T, \preccurlyeq}$.
\end{definition}
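A standard illustrative example (not part of the ProofWiki entry above): the exponential map
:$\exp: \struct {\R, +, \le} \to \struct {\R_{> 0}, \times, \le}$
is an '''ordered semigroup isomorphism''', since it is a bijection with $\exp \left({x + y}\right) = \exp x \cdot \exp y$ (a semigroup isomorphism) and $x \le y \iff \exp x \le \exp y$ (an order isomorphism).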
Constraining the microlensing effect on time delays with new time-delay prediction model in $H_{0}$ measurements (1804.09390)
Geoff C.-F. Chen, Christopher D. Fassnacht, James H. H. Chan, Vivien Bonvin, Karina Rojas, Martin Millon, Fred Courbin, Sherry H. Suyu, Kenneth C. Wong, Dominique Sluse, Tommaso Treu, Anowar J. Shajib, Jen-Wei Hsueh, David J. Lagattuta, John P. McKean
April 25, 2018 astro-ph.CO, astro-ph.GA
Time-delay cosmography provides a unique way to directly measure the Hubble constant ($H_{0}$). The precision of the $H_{0}$ measurement depends on the uncertainties in the time-delay measurements, the mass distribution of the main deflector(s), and the mass distribution along the line of sight. Tie and Kochanek (2018) have proposed a new microlensing effect on time delays based on differential magnification of the accretion disc of the lensed quasar. If real, this effect could significantly broaden the uncertainty on the time delay measurements by up to $30\%$ for lens systems such as PG1115+080, which have relatively short time delays and monitoring over several different epochs. In this paper we develop a new technique that uses the time-delay ratios and simulated microlensing maps within a Bayesian framework in order to limit the allowed combinations of microlensing delays and thus to lessen the uncertainties due to the proposed effect. We show that, under the assumption of Tie and Kochanek (2018), the uncertainty on the time-delay distance ($D_{\Delta t}$, which is proportional to 1/$H_{0}$) of short time-delay ($\sim18$ days) lens, PG1115+080, increases from $\sim7\%$ to $\sim10\%$ by simultaneously fitting the three time-delay measurements from the three different datasets across twenty years, while in the case of long time-delay ($\sim90$ days) lens, the microlensing effect on time delays is negligible as the uncertainty on $D_{\Delta t}$ of RXJ1131-1231 only increases from $\sim2.5\%$ to $\sim2.6\%$.
Time Delay Lens Modeling Challenge: I. Experimental Design (1801.01506)
Xuheng Ding, Tommaso Treu, Anowar J. Shajib, Dandan Xu, Geoff C.-F. Chen, Anupreeta More, Giulia Despali, Matteo Frigo, Christopher D. Fassnacht, Daniel Gilman, Stefan Hilbert, Philip J. Marshall, Dominique Sluse, Simona Vegetti
Jan. 4, 2018 astro-ph.CO, astro-ph.GA
Strong gravitational lenses with measured time delay are a powerful tool to measure cosmological parameters, especially the Hubble constant ($H_0$). Recent studies show that by combining just three multiply-imaged AGN systems, one can determine $H_0$ to 3.8% precision. Furthermore, the number of time-delay lens systems is growing rapidly, enabling, in principle, the determination of $H_0$ to 1% precision in the near future. However, as the precision increases it is important to ensure that systematic errors and biases remain subdominant. For this purpose, challenges with simulated datasets are a key component in this process. Following the experience of the past challenge on time delay, where it was shown that time delays can indeed be measured precisely and accurately at the sub-percent level, we now present the "Time Delay Lens Modeling Challenge" (TDLMC). The goal of this challenge is to assess the present capabilities of lens modeling codes and assumptions and test the level of accuracy of inferred cosmological parameters given realistic mock datasets. We invite scientists to model a set of simulated Hubble Space Telescope (HST) observations of 50 mock lens systems. The systems are organized in rungs, with the complexity and realism increasing going up the ladder. The goal of the challenge is to infer $H_0$ for each rung, given the HST images, the time delay, and a stellar velocity dispersion of the deflector, for a fixed background cosmology. The TDLMC challenge will start with the mock data release on 2018 January 8th, with a deadline for blind submission of 2018 August 8th. This first paper gives an overview of the challenge including the data design, and a set of metrics to quantify the modeling performance and challenge details. After the deadline, the results of the challenge will be presented in a companion paper with all challenge participants as co-authors.
New constraints on quasar broad absorption and emission line regions from gravitational microlensing (1709.06775)
Damien Hutsemékers, Lorraine Braibant, Dominique Sluse, Timo Anguita, René Goosmann
Sept. 20, 2017 astro-ph.GA
Gravitational microlensing is a powerful tool allowing one to probe the structure of quasars on sub-parsec scale. We report recent results, focusing on the broad absorption and emission line regions. In particular microlensing reveals the intrinsic absorption hidden in the P Cygni-type line profiles observed in the broad absorption line quasar H1413+117, as well as the existence of an extended continuum source. In addition, polarization microlensing provides constraints on the scattering region. In the quasar Q2237+030, microlensing differently distorts the H$\alpha$ and CIV broad emission line profiles, indicating that the low- and high-ionization broad emission lines must originate from regions with distinct kinematical properties. We also present simulations of the effect of microlensing on line profiles considering simple but representative models of the broad emission line region. Comparison of observations to simulations allows us to conclude that the H$\alpha$ emitting region in Q2237+030 is best represented by a Keplerian disk.
H0LiCOW VII. Cosmic evolution of the correlation between black hole mass and host galaxy luminosity (1703.02041)
Xuheng Ding, Tommaso Treu, Sherry H. Suyu, Kenneth C. Wong, Takahiro Morishita, Daeseong Park, Dominique Sluse, Matthew W. Auger, Adriano Agnello, Vardha N. Bennert, Thomas E. Collett
Sept. 1, 2017 astro-ph.GA
Strongly lensed active galactic nuclei (AGN) provide a unique opportunity to make progress in the study of the evolution of the correlation between the mass of supermassive black holes ($\mathcal M_{BH}$) and their host galaxy luminosity ($L_{host}$). We demonstrate the power of lensing by analyzing two systems for which state-of-the-art lens modelling techniques have been applied to Hubble Space Telescope imaging data. We use i) the reconstructed images to infer the total and bulge luminosity of the host and ii) published broad-line spectroscopy to estimate $\mathcal M_{BH}$ using the so-called virial method. We then enlarge our sample with new calibration of previously published measurements to study the evolution of the correlation out to z~4.5. Consistent with previous work, we find that without taking into account passive luminosity evolution, the data points lie on the local relation. Once passive luminosity evolution is taken into account, we find that BHs in the more distant Universe reside in less luminous galaxies than today. Fitting this offset as $\mathcal M_{BH}$/$L_{host}$ $\propto$ (1+z)$^{\gamma}$, and taking into account selection effects, we obtain $\gamma$ = 0.6 $\pm$ 0.1 and 0.8$\pm$ 0.1 for the case of $\mathcal M_{BH}$-$L_{bulge}$ and $\mathcal M_{BH}$-$L_{total}$, respectively. To test for systematic uncertainties and selection effects we also consider a reduced sample that is homogeneous in data quality. We find consistent results but with considerably larger uncertainty due to the more limited sample size and redshift coverage ($\gamma$ = 0.7 $\pm$ 0.4 and 0.2$\pm$ 0.5 for $\mathcal M_{BH}$-$L_{bulge}$ and $\mathcal M_{BH}$-$L_{total}$, respectively), highlighting the need to gather more high-quality data for high-redshift lensed quasar hosts. Our result is consistent with a scenario where the growth of the black hole predates that of the host galaxy.
The inner structure of early-type galaxies in the Illustris simulation (1610.07605)
Dandan Xu, Volker Springel, Dominique Sluse, Peter Schneider, Alessandro Sonnenfeld, Dylan Nelson, Mark Vogelsberger, Lars Hernquist
Early-type galaxies provide unique tests for the predictions of the cold dark matter cosmology and the baryonic physics assumptions entering models for galaxy formation. In this work, we use the Illustris simulation to study correlations of three main properties of early-type galaxies, namely, the stellar orbital anisotropies, the central dark matter fractions and the central radial density slopes, as well as their redshift evolution since $z=1.0$. We find that lower-mass galaxies or galaxies at higher redshift tend to be bluer in rest-frame colour, have higher central gas fractions, and feature more tangentially anisotropic orbits and steeper central density slopes than their higher-mass or lower-redshift counterparts, respectively. The projected central dark matter fraction within the effective radius shows a very mild mass dependence but positively correlates with galaxy effective radii due to the aperture effect. The central density slopes obtained by combining strong lensing measurements with single aperture kinematics are found to differ from the true density slopes. We identify systematic biases in this measurement to be due to two common modelling assumptions, isotropic stellar orbital distributions and power-law density profiles. We also compare the properties of early-type galaxies in Illustris to those from existing galaxy and strong lensing surveys, we find in general broad agreement but also some tension, which poses a potential challenge to the stellar formation and feedback models adopted by the simulation.
Lens galaxies in the Illustris simulation: power-law models and the bias of the Hubble constant from time-delays (1507.07937)
Dandan Xu, Dominique Sluse, Peter Schneider, Volker Springel, Mark Vogelsberger, Dylan Nelson, Lars Hernquist
A power-law density model, i.e., $\rho(r) \propto r^{-\gamma'}$ has been commonly employed in strong gravitational lensing studies, including the so-called time-delay technique used to infer the Hubble constant $H_0$. However, since the radial scale at which strong lensing features are formed corresponds to the transition from the dominance of baryonic matter to dark matter, there is no known reason why galaxies should follow a power law in density. The assumption of a power law artificially breaks the mass-sheet degeneracy, a well-known invariance transformation in gravitational lensing which affects the product of Hubble constant and time delay and can therefore cause a bias in the determination of $H_0$ from the time-delay technique. In this paper, we use the Illustris hydrodynamical simulations to estimate the amplitude of this bias, and to understand how it is related to observational properties of galaxies. Investigating a large sample of Illustris galaxies that have velocity dispersion $\sigma_{SIE}$>160 km/s at redshifts below $z=1$, we find that the bias on $H_0$ introduced by the power-law assumption can reach 20%-50%, with a scatter of $10\%-30\%$ (rms). However, we find that by selecting galaxies with an inferred power-law model slope close to isothermal, it is possible to reduce the bias on $H_0$ to <5%, and the scatter to <10%. This could potentially be used to form less biased statistical samples for $H_0$ measurements in the upcoming large survey era.
H0LiCOW IV. Lens mass model of HE 0435-1223 and blind measurement of its time-delay distance for cosmology (1607.01403)
Kenneth C. Wong, Sherry H. Suyu, Matthew W. Auger, Vivien Bonvin, Frederic Courbin, Christopher D. Fassnacht, Aleksi Halkola, Cristian E. Rusu, Dominique Sluse, Alessandro Sonnenfeld, Tommaso Treu, Thomas E. Collett, Stefan Hilbert, Leon V. E. Koopmans, Philip J. Marshall, Nicholas Rumbaugh
Dec. 19, 2016 astro-ph.CO
Strong gravitational lenses with measured time delays between the multiple images allow a direct measurement of the time-delay distance to the lens, and thus a measure of cosmological parameters, particularly the Hubble constant, $H_{0}$. We present a blind lens model analysis of the quadruply-imaged quasar lens HE 0435-1223 using deep Hubble Space Telescope imaging, updated time-delay measurements from the COSmological MOnitoring of GRAvItational Lenses (COSMOGRAIL), a measurement of the velocity dispersion of the lens galaxy based on Keck data, and a characterization of the mass distribution along the line of sight. HE 0435-1223 is the third lens analyzed as a part of the $H_{0}$ Lenses in COSMOGRAIL's Wellspring (H0LiCOW) project. We account for various sources of systematic uncertainty, including the detailed treatment of nearby perturbers, the parameterization of the galaxy light and mass profile, and the regions used for lens modeling. We constrain the effective time-delay distance to be $D_{\Delta t} = 2612_{-191}^{+208}~\mathrm{Mpc}$, a precision of 7.6%. From HE 0435-1223 alone, we infer a Hubble constant of $H_{0} = 73.1_{-6.0}^{+5.7}~\mathrm{km~s^{-1}~Mpc^{-1}}$ assuming a flat $\Lambda$CDM cosmology. The cosmographic inference based on the three lenses analyzed by H0LiCOW to date is presented in a companion paper (H0LiCOW Paper V).
H0LiCOW VI. Testing the fidelity of lensed quasar host galaxy reconstruction (1610.08504)
Xuheng Ding, Kai Liao, Tommaso Treu, Sherry H. Suyu, Geoff C.-F. Chen, Matthew W. Auger, Philip J. Marshall, Adriano Agnello, Frederic Courbin, Anna M. Nierenberg, Cristian E. Rusu, Dominique Sluse, Alessandro Sonnenfeld, Kenneth C. Wong
Oct. 26, 2016 astro-ph.GA
The empirical correlation between the mass of a super-massive black hole (MBH) and its host galaxy properties is widely considered to be evidence of their co-evolution. A powerful way to test the co-evolution scenario and learn about the feedback processes linking galaxies and nuclear activity is to measure these correlations as a function of redshift. Unfortunately, currently MBH can only be estimated in active galaxies at cosmological distances. At these distances, bright active galactic nuclei (AGN) can outshine the host galaxy, making it extremely difficult to measure the host's luminosity. Strongly lensed AGNs provide in principle a great opportunity to improve the sensitivity and accuracy of the host galaxy luminosity measurements as the host galaxy is magnified and more easily separated from the point source, provided the lens model is sufficiently accurate. In order to measure the MBH-L correlation with strong lensing, it is necessary to ensure that the lens modelling is accurate, and that the host galaxy luminosity can be recovered to at least a precision and accuracy better than that of the typical MBH measurement. We carry out extensive and realistic simulations of deep Hubble Space Telescope observations of lensed AGNs obtained by our collaboration. We show that the host galaxy luminosity can be recovered with better accuracy and precision than the typical uncertainty on MBH(~ 0.5 dex) for hosts as faint as 2-4 magnitudes dimmer than the AGN itself. Our simulations will be used to estimate bias and uncertainties on the actual measurements to be presented in a future paper.
H0LiCOW III. Quantifying the effect of mass along the line of sight to the gravitational lens HE 0435-1223 through weighted galaxy counts (1607.01047)
Cristian E. Rusu, Christopher D. Fassnacht, Dominique Sluse, Stefan Hilbert, Kenneth C. Wong, Kuang-Han Huang, Sherry H. Suyu, Thomas E. Collett, Philip J. Marshall, Tommaso Treu, Leon V. E. Koopmans
July 4, 2016 astro-ph.GA
Based on spectroscopy and multiband wide-field observations of the gravitationally lensed quasar HE 0435-1223, we determine the probability distribution function of the external convergence $\kappa_\mathrm{ext}$ for this system. We measure the under/overdensity of the line of sight towards the lens system and compare it to the average line of sight throughout the universe, determined by using the CFHTLenS as a control field. Aiming to constrain $\kappa_\mathrm{ext}$ as tightly as possible, we determine under/overdensities using various combinations of relevant informative weighting schemes for the galaxy counts, such as projected distance to the lens, redshift, and stellar mass. We then convert the measured under/overdensities into a $\kappa_\mathrm{ext}$ distribution, using ray-tracing through the Millennium Simulation. We explore several limiting magnitudes and apertures, and account for systematic and statistical uncertainties relevant to the quality of the observational data, which we further test through simulations. Our most robust estimate of $\kappa_\mathrm{ext}$ has a median value $\kappa^\mathrm{med}_\mathrm{ext} = 0.004$ and a standard deviation of $\sigma_\kappa = 0.025$. The measured $\sigma_\kappa$ corresponds to $2.5\%$ uncertainty on the time delay distance, and hence the Hubble constant $H_0$ inference from this system. The median $\kappa^\mathrm{med}_\mathrm{ext}$ value is robust to $\sim0.005$ (i.e. $\sim0.5\%$ on $H_0$) regardless of the adopted aperture radius, limiting magnitude and weighting scheme, as long as the latter incorporates galaxy number counts, the projected distance to the main lens, and a prior on the external shear obtained from mass modeling. The availability of a well-constrained $\kappa_\mathrm{ext}$ makes HE 0435-1223 a valuable system for measuring cosmological parameters using strong gravitational lens time delays.
Ambiguities in gravitational lens models: the density field from the source position transformation (1606.04321)
Sandra Unruh, Peter Schneider, Dominique Sluse
June 14, 2016 astro-ph.CO
Strong gravitational lensing is regarded as the most precise technique to measure the mass in the inner region of galaxies or galaxy clusters. In particular, the mass within one Einstein radius can be determined with an accuracy of order of a few percent or better, depending on the image configuration. For other radii, however, degeneracies exist between galaxy density profiles, precluding an accurate determination of the enclosed mass. The source position transformation (SPT), which includes the well-known mass-sheet transformation (MST) as a special case, describes this degeneracy of the lensing observables in a more general way. In this paper we explore properties of an SPT, removing the MST to leading order, i.e., we consider degeneracies which have not been described before. The deflection field $\boldsymbol{\hat{\alpha}}(\boldsymbol{\theta})$ resulting from an SPT is not curl-free in general, and thus not a deflection that can be obtained from a lensing mass distribution. Starting from a variational principle, we construct lensing potentials that give rise to a deflection field $\boldsymbol{\tilde{\alpha}}$, which differs from $\boldsymbol{\hat{\alpha}}$ by less than an observationally motivated upper limit. The corresponding mass distributions from these 'valid' SPTs are studied: their radial profiles are modified relative to the original mass distribution in a significant and non-trivial way, and originally axi-symmetric mass distributions can obtain a finite ellipticity. These results indicate a significant effect of the SPT on quantitative analyses of lens systems. We show that the mass inside the Einstein radius of the original mass distribution is conserved by the SPT; hence, as is the case for the MST, the SPT does not affect the mass determination at the Einstein radius. [...]
The different origins of high- and low-ionization broad emission lines revealed by gravitational microlensing in the Einstein cross (1606.01734)
Lorraine Braibant, Damien Hutsemékers, Dominique Sluse, Timo Anguita
June 6, 2016 astro-ph.GA
We investigate the kinematics and ionization structure of the broad emission line region of the gravitationally lensed quasar QSO2237+0305 (the Einstein cross) using differential microlensing in the high- and low-ionization broad emission lines. We combine visible and near-infrared spectra of the four images of the lensed quasar and detect a large-amplitude microlensing effect distorting the high-ionization CIV and low-ionization H$\alpha$ line profiles in image A. While microlensing only magnifies the red wing of the Balmer line, it symmetrically magnifies the wings of the CIV emission line. Given that the same microlensing pattern magnifies both the high- and low-ionization broad emission line regions, these dissimilar distortions of the line profiles suggest that the high- and low-ionization regions are governed by different kinematics. Since this quasar is likely viewed at intermediate inclination, we argue that the differential magnification of the blue and red wings of H$\alpha$ favors a flattened, virialized, low-ionization region whereas the symmetric microlensing effect measured in CIV can be reproduced by an emission line formed in a polar wind, without the need of fine-tuned caustic configurations.
Observations of radio-quiet quasars at 10mas resolution by use of gravitational lensing (1508.05842)
Neal Jackson, Dominique Sluse, Olaf Wucknitz (University of Manchester, School of Physics & Astronomy, Jodrell Bank Centre for Astrophysics, Argelander-Institut für Astronomie, University of Bonn, Institut d'Astrophysique et Géophysique, Université de Liège, Departamento de Astronomía y Astrofísica, Universidad de Valencia, Max-Planck Institut für Radioastronomie)
Aug. 24, 2015 astro-ph.CO, astro-ph.GA
We present VLA detections of radio emission in four four-image gravitational lens systems with quasar sources: HS0810+2554, RXJ0911+0511, HE0435$-$1223 and SDSSJ0924+0219, and e-MERLIN observations of two of the systems. The first three are detected at a high level of significance, and SDSS J0924+0219 is detected. HS0810+2554 is resolved, allowing us for the first time to achieve 10-mas resolution of the source frame in the structure of a radio quiet quasar. The others are unresolved or marginally resolved. All four objects are among the faintest radio sources yet detected, with intrinsic flux densities in the range 1-5$\mu$Jy; such radio objects, if unlensed, will only be observable routinely with the Square Kilometre Array. The observations of HS0810+2554, which is also detected with e-MERLIN, strongly suggest the presence of a mini-AGN, with a radio core and milliarcsecond scale jet. The flux densities of the lensed images in all but HE0435-1223 are consistent with smooth galaxy lens models without the requirement for smaller-scale substructure in the model, although some interesting anomalies are seen between optical and radio flux densities. These are probably due to microlensing effects in the optical.
How well can cold-dark-matter substructures account for the observed radio flux-ratio anomalies? (1410.3282)
Dandan Xu, Dominique Sluse, Liang Gao, Jie Wang, Carlos Frenk, Shude Mao, Peter Schneider, Volker Springel
Discrepancies between the observed and model-predicted radio flux ratios are seen in a number of quadruply-lensed quasars. The most favored interpretation of these anomalies is that CDM substructures present in lensing galaxies perturb the lens potentials and alter image magnifications and thus flux ratios. So far no consensus has emerged regarding whether or not the predicted CDM substructure abundance fully accounts for the lensing flux anomaly observations. Accurate modeling relies on a realistic lens sample in terms of both the lens environment and internal structures and substructures. In this paper we construct samples of generalised and specific lens potentials, to which we add (rescaled) subhalo populations from the galaxy-scale Aquarius and the cluster-scale Phoenix simulation suites. We further investigate the lensing effects from subhalos of masses several orders of magnitude below the simulation resolution limit. The resulting flux ratio distributions are compared to the currently best available sample of radio lenses. The observed anomalies in B0128+437, B0712+472 and B1555+375 are more likely to be caused by propagation effects or oversimplified lens modeling, signs of which are already seen in the data. Among the quadruple systems that have closely located image triplets/pairs, the anomalous flux ratios of MG0414+0534 can be reproduced by adding CDM subhalos to its macroscopic lens potential, with a probability of 5%-20%; for B0712+472, B1422+231, B1555+375 and B2045+265, these probabilities are only of a few percent. We hence find that CDM substructures are unlikely to be the whole reason for radio flux anomalies. We discuss other possible effects that might also be at work.
How well can cold-dark-matter substructures account for the observed lensing flux-ratio anomalies? (1307.4220)
D.D. Xu, Dominique Sluse, Liang Gao, Jie Wang, Carlos Frenk, Shude Mao, Peter Schneider
Oct. 17, 2014 astro-ph.CO
Lensing flux-ratio anomalies are most likely caused by gravitational lensing by small-scale dark matter structures. These anomalies offer the prospect of testing a fundamental prediction of the cold dark matter (CDM) cosmological model: the existence of numerous substructures that are too small to host visible galaxies. In two previous studies we found that the number of subhalos in the six high-resolution simulations of CDM galactic halos of the Aquarius project is not sufficient to account for the observed frequency of flux ratio anomalies seen in selected quasars from the CLASS survey. These studies were limited by the small number of halos used, their narrow range of masses (1-2E12 solar masses) and the small range of lens ellipticities considered. We address these shortcomings by investigating the lensing properties of a large sample of halos with a wide range of masses in two sets of high resolution simulations of cosmological volumes and comparing them to a currently best available sample of radio quasars. We find that, as expected, substructures do not change the flux-ratio probability distribution of image pairs and triples with large separations, but they have a significant effect on the distribution at small separations. For such systems, CDM substructures can account for a substantial fraction of the observed flux-ratio anomalies. For large close-pair separation systems, the discrepancies existing between the observed flux ratios and predictions from smooth halo models are attributed to simplifications inherent in these models which do not take account of fine details in the lens mass distributions.
Alignment of quasar polarizations with large-scale structures (1409.6098)
Damien Hutsemékers, Lorraine Braibant, Vincent Pelgrims, Dominique Sluse
Sept. 22, 2014 astro-ph.CO, astro-ph.GA
We have measured the optical linear polarization of quasars belonging to Gpc-scale quasar groups at redshift z ~ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is of the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel to their host large-scale structures.
The quasar-galaxy cross SDSS J1320+1644: A probable large-separation lensed quasar (1206.2011)
Cristian E. Rusu, Masamune Oguri, Masanori Iye, Naohisa Inada, Issha Kayo, Min-Su Shin, Dominique Sluse, Michael A. Strauss
March 1, 2013 astro-ph.CO
We report the discovery of a pair of quasars at $z=1.487$, with a separation of $8\farcs585\pm0\farcs002$. Subaru Telescope infrared imaging reveals the presence of an elliptical and a disk-like galaxy located almost symmetrically between the quasars, creating a cross-like configuration. Based on absorption lines in the quasar spectra and the colors of the galaxies, we estimate that both galaxies are located at redshift $z=0.899$. This, as well as the similarity of the quasar spectra, suggests that the system is a single quasar multiply imaged by a galaxy group or cluster acting as a gravitational lens, although the possibility of a binary quasar cannot be fully excluded. We show that the gravitational lensing hypothesis implies these galaxies are not isolated, but must be embedded in a dark matter halo of virial mass $\sim 4 \times 10^{14}\ h_{70}^{-1}\ {M}_\odot$ assuming an NFW model with a concentration parameter of $c_{vir}=6$, or a singular isothermal sphere profile with a velocity dispersion of $\sim 670$ km s$^{-1}$. We place constraints on the location of the dark matter halo, as well as the velocity dispersions of the galaxies. In addition, we discuss the influence of differential reddening, microlensing and intrinsic variability on the quasar spectra and broadband photometry. | CommonCrawl |
\begin{definition}[Definition:Connected (Topology)/Topological Space/Definition 1]
Let $T = \struct {S, \tau}$ be a topological space.
$T$ is '''connected''' {{iff}} it admits no separation.
That is, $T$ is '''connected''' {{iff}} there exist no open sets $A, B \in \tau$ such that $A, B \ne \O$, $A \cup B = S$ and $A \cap B = \O$.
\end{definition}
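A standard pair of examples (not part of the ProofWiki entry above): the real number line $\R$ with the usual topology is connected, while the subspace $[0, 1] \cup [2, 3]$ of $\R$ is not: the sets $A = [0, 1]$ and $B = [2, 3]$ are non-empty, open in the subspace topology, disjoint, and their union is the whole space, so they form a separation.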
Bias and accuracy of dairy sheep evaluations using BLUP and SSGBLUP with metafounders and unknown parent groups
Fernando L. Macedo ORCID: orcid.org/0000-0002-1949-92141,2,
Ole F. Christensen3,
Jean-Michel Astruc4,
Ignacio Aguilar5,
Yutaka Masuda6 &
Andrés Legarra1
Genetics Selection Evolution volume 52, Article number: 47 (2020)
Bias has been reported in genetic or genomic evaluations of several species. Common biases are systematic differences between averages of estimated and true breeding values, and their over- or under-dispersion. In addition, comparing accuracies of pedigree versus genomic predictions is a difficult task. This work proposes to analyse biases and accuracies in the genetic evaluation of milk yield in Manech Tête Rousse dairy sheep, over several years, by testing five models and using the estimators of the linear regression method. We tested models with and without genomic information [best linear unbiased prediction (BLUP) and single-step genomic BLUP (SSGBLUP)] and using three strategies to handle missing pedigree [unknown parent groups (UPG), UPG with QP transformation in the \({\mathbf{H}}\) matrix (EUPG) and metafounders (MF)].
We compared estimated breeding values (EBV) of selected rams at birth with the EBV of the same rams obtained each year from the first daughters with phenotypes up to 2017. We compared within and across models. Finally, we compared EBV at birth of the rams with and without genomic information.
Within models, bias and over-dispersion were small (bias: 0.20 to 0.40 genetic standard deviations; slope of the dispersion: 0.95 to 0.99) except for model SSGBLUP-EUPG that presented an important over-dispersion (0.87). The estimates of accuracies confirm that the addition of genomic information increases the accuracy of EBV in young rams. The smallest bias was observed with BLUP-MF and SSGBLUP-MF. When we estimated dispersion by comparing a model with no markers to models with markers, SSGBLUP-MF showed a value close to 1, indicating that there was no problem in dispersion, whereas SSGBLUP-EUPG and SSGBLUP-UPG showed a significant under-dispersion. Another important observation was the heterogeneous behaviour of the estimates over time, which suggests that a single check could be insufficient to make a good analysis of genetic/genomic evaluations.
The addition of genomic information increases the accuracy of EBV of young rams in Manech Tête Rousse. In this population that has missing pedigrees, the use of UPG and EUPG in SSGBLUP produced bias, whereas MF yielded unbiased estimates, and we recommend its use. We also recommend assessing biases and accuracies using multiple truncation points, since these statistics are subject to random variation across years.
Genetic progress in selection schemes depends on using correct models for genetic evaluation. Models are simplifications of reality and never completely perfect, which is why tools to analyze systematic errors are necessary. There are three important aspects to check in genetic evaluations: bias, dispersion and accuracy. Bias \(\left( {b_{0} = \bar{\hat{u}} - \bar{u}} \right)\) is the difference between estimated breeding values (EBV) \(\hat{u}\) and true breeding values (TBV) \(u\) and could lead to over- or under-estimation of genetic trend and to poor selection decisions (for example, selecting too many young individuals instead of keeping old ones). In the same way, on the one hand, values of the slope of the regression of TBV on EBV less than 1 imply over-dispersion of the EBV and could lead to an overestimation of the genetic merit of pre-selected candidates. On the other hand, an unbiased estimate of accuracy (the correlation between TBV and EBV) is important to correctly predict the response to selection.
Bias has been found in genetic evaluations of several species. The use of genomic information in dairy cattle selection is widespread and the existence of bias has been extensively studied (e.g. [1,2,3,4]). Bias has also been studied in other species, such as pigs [5], dairy goats [6], turkeys [7] and beef cattle [8,9,10]. In general, biases decrease with more adequate models. However, all these studies rely on the use of pre-corrected data such as deregressed proofs or daughter yield deviations (DYD), which may give wrong estimates of biases if fixed effects are not well estimated [11].
Studies in France and Spain using DYD detected bias in genetic evaluations of dairy sheep breeds. For example, predictions in Lacaune showed bias and over-dispersion of EBV, with more impact for traits under strong selection [12, 13]. Similar results were obtained for milk yield of Pyrenean dairy sheep breeds [14], although genomic evaluations decreased bias compared to pedigree evaluations. Manech Tête Rousse (MTR) is one of the major French Pyrenean dairy sheep breeds. For this breed, the selection scheme switched to genomic selection in 2018 and it is important to verify the bias, dispersion and accuracies, to avoid poor selection decisions. In particular, the bias detected in [14] is not well understood. However, it is difficult to assess such biases with DYD in dairy sheep, since DYD from "first crops" of 20 to 40 daughters are not very accurate.
Legarra and Reverter [11] described the linear regression method (LR method) to detect bias in genetic evaluations. The advantage of this method is the simplicity of the application; it compares EBV of a group of individuals obtained in different evaluations, with less ("partial") and more ("whole") information. Comparing the two subsets of EBV, estimators of bias, dispersion and accuracies (relatives or directs) are easily computed. Therefore, it is easy to analyze a genetic evaluation comparing the results of two consecutive evaluations.
To perform genetic evaluation, it should be possible to include genomic information and also to model missing pedigrees if needed. In this work, we tested models using only pedigree information (best linear unbiased prediction (BLUP) model) or including genomic information (in a single-step genomic BLUP (SSGBLUP) model) and applying different strategies to deal with missing pedigree. Missing pedigree may be a problem in most species—in ruminants, parents may be unrecorded, whereas in monogastric species, new lines may be introduced. If we do not consider this missingness, we are assuming the same genetic mean for all missing parents in the pedigree. In dairy sheep, females born from natural mating usually do not have an assigned sire. However, these natural mating rams are offspring of highly selected artificial insemination (AI) rams and thus their breeding value increases over time. In addition, new flocks that entered the breeding scheme until (roughly) 1990 did not have pedigree data. Two strategies can be used to model the missing pedigree: unknown parent groups (UPG) [15, 16] and metafounders (MF) [17]. There is some evidence that the use of MF improves the performance of genetic evaluation [18], but it has not been systematically studied.
The aim of this work was to analyze bias, dispersion, and accuracies in the genetic evaluation of milk yield of MTR using the LR method with several evaluation models and performed over many truncation points of data. A second aim was to compare different strategies (UPG or MF) to manage missing pedigree in BLUP and SSGBLUP contexts. In this manner, we assessed the genetic evaluation of MTR, addressed the best method to consider missing pedigrees in SSGBLUP, and explored the possibilities of the LR method to discriminate models for prediction.
Records and pedigree
Milk production is recorded by the breeding scheme according to the International Committee for Animal Recording rules. The data that we analyzed were collected between 1978 and 2017 and comprise 1,842,295 performance records and 540,999 individuals in the pedigree, with a generation interval of about 4 years. There are missing parentships, either "sire unknown and dam known" (~ 15% of all animals) or "both sire and dam unknown" (~ 15% of all animals). This situation is particularly important in our case, because if we ignore the missing pedigree, the unknown parents of the more recently improved animals will be assigned to the base population at the beginning of the selection program. As a result, these animals will be unfairly penalized and it will not be possible to correctly model the genetic progress. Thus, we defined 13 UPG (or MF; see later). We computed a crude "number of equivalent records" from the first "offspring" of UPG (disregarding later generations). For instance, an individual with \(n\) records contributes \(n\) to its ancestor UPG if both parents are unknown and \(n/2\) if one parent is known. In all cases, the number of equivalent records was larger than 10,000.
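As an illustration only, this crude count can be sketched in R (not the software actually used; a pedigree table ped with columns sire, dam, upg and nrec is assumed, where upg is the group assigned to the unknown parents of each individual and nrec is its number of records):
# crude "number of equivalent records" per UPG/MF, counting only the first
# generation of offspring of unknown parents: an individual with n records
# contributes n if both parents are unknown and n/2 if only one is unknown
equivalent_records <- function(ped) {
  w <- ifelse(is.na(ped$sire) & is.na(ped$dam), 1,
              ifelse(is.na(ped$sire) | is.na(ped$dam), 0.5, 0))
  tapply(w * ped$nrec, ped$upg, sum, na.rm = TRUE)
}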
Genomic information
We included genomic information on 3007 AI males (years of birth from 1999 until 2017), all of which have both parents known and are genotyped with the 50 k Illumina chip OvineSNP50. Only autosomal SNPs were considered. Quality control included individual and marker call rate, minor allele frequency (MAF) higher than 0.05, removal of Mendelian conflicts, deviation from Hardy–Weinberg equilibrium (number of heterozygotes deviating more than 15% from the expectation based on allele frequencies), and heritability of gene content (markers with an estimated heritability < 0.98 and significant p-values of the likelihood ratio test, p < 0.01, were discarded) [19]. After quality control, 37,168 effective SNPs were retained.
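As an illustration of part of this filtering, the MAF and call-rate steps could look like the following R sketch (hedged: only these two steps are shown, the call-rate thresholds of 0.95 are assumptions since the text does not report them, and M is a 0/1/2 genotype matrix with NA for missing calls):
maf_callrate_filter <- function(M, maf_min = 0.05, snp_cr = 0.95, ind_cr = 0.95) {
  M <- M[rowMeans(!is.na(M)) >= ind_cr, , drop = FALSE]   # individual call rate
  p <- colMeans(M, na.rm = TRUE) / 2                      # allele frequency per SNP
  maf <- pmin(p, 1 - p)
  keep <- colMeans(!is.na(M)) >= snp_cr & maf >= maf_min  # SNP call rate and MAF
  M[, keep, drop = FALSE]
}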
Focal individuals
It is possible to apply the LR method to any group of individuals of a population, provided that they represent a homogenous tier (i.e. they are similarly selected, and prediction at the time of selection is based on the same sources of information). In this work, we were interested in evaluating bias, dispersion and accuracy of males at the time of their selection, i.e. at birth before they have progeny with records. The reason we are interested in this group is that most of the genetic gain in dairy sheep is obtained via males. In total, 10 groups of focal individuals were analyzed; each group corresponding to selected rams born from 2005 to 2014. These males were selected based on parent average to be progeny-tested and thus their genetic variation is smaller than that of their contemporaries [20].
Estimators of the LR method
In brief, the LR method estimates bias, dispersion and accuracies based on the comparison of two subsets of EBV, estimated with less and more information, for the same group of individuals. In this paper, we will use the symbols \(\hat{u}_{p}\) or EBVp to refer to the EBV estimated with less information (or "partial" dataset) and \(\hat{u}_{w}\) or EBVw to refer to the EBV estimated with more information (or "whole" dataset). The LR method presents one estimator for the bias (\(\hat{\Delta }_{p}\)), one estimator for the dispersion (\(\hat{b}_{p}\)) and four estimators related to the accuracies (\(\hat{\rho }_{wp}\), \(\hat{\rho }_{pw}^{2}\), \(\widehat{acc}_{p}^{2}\), \(\widehat{rel}_{p}\)). The estimators are summarized below; for a deeper overview and properties of the estimators see [11, 21].
Bias (\(\hat{\Delta }_{p}\))
The estimator of the bias is obtained from the difference between the mean of EBVp and the mean of EBVw, \(\hat{\Delta }_{p} = \overline{{\hat{u}_{p} }} - \overline{{\hat{u}_{w} }}\). In absence of bias, the expected value of this estimator is 0.
Dispersion (\(\hat{b}_{p}\))
The estimator of dispersion of EBV is the slope of the regression of EBVw on EBVp, \(\hat{b}_{p} = \frac{{cov\left( {\hat{u}_{p} ,\hat{u}_{w} } \right)}}{{var\left( {\hat{u}_{p} } \right)}}\). If there is no over- or under-dispersion, the expected value of the estimator is 1; values of \(\hat{b}_{p} < 1\) indicate over-dispersion, whereas values of \(\hat{b}_{p} > 1\) indicate under-dispersion.
Estimators related to accuracies
Ratio of accuracies (\(\hat{\rho }_{w,p}\))
This estimator measures the inverse of the relative gain in accuracy from EBVp to EBVw. It is the correlation between EBVp and EBVw, \(\hat{\rho }_{w,p} = \frac{{cov\left( {\hat{u}_{p} ,\hat{u}_{w} } \right)}}{{\sqrt {var\left( {\hat{u}_{p} } \right)var\left( {\hat{u}_{w} } \right)} }}\), and the expected value is \(\frac{{acc_{p} }}{{acc_{w} }}\). A high value of this estimator means a small increase in accuracy, whereas a low value means a large increase in accuracy, when we add phenotypic information to genetic evaluations. For instance, a value of 0.7 means that the evaluation with the "partial" dataset is quite similar to the evaluation with the "whole" dataset, i.e. more phenotypes do not add much new information. Equivalently, the relative increase in accuracy brought by the extra phenotypes is \(\frac{1}{{\hat{\rho }_{w,p} }} - 1 = \frac{{acc_{w} - acc_{p} }}{{acc_{p} }}\) (Matias Bermann, University of Georgia, personal communication). Thus, it is expected that genomic evaluations have higher \(\hat{\rho }_{w,p}\) than pedigree-based evaluations.
Ratio of reliabilities (\(\hat{\rho }_{p,w}^{2}\))
This estimator is the slope of the regression of EBVp on EBVw, \(\hat{\rho }_{p,w}^{2} = \frac{{cov\left( {\hat{u}_{p} ,\hat{u}_{w} } \right)}}{{var\left( {\hat{u}_{w} } \right)}}\) and, similar to the ratio of accuracies, it represents the inverse of the gain in reliabilities from EBVp to EBVw. The expected value is \(\frac{{acc_{p}^{2} }}{{acc_{w}^{2} }}\).
Selected reliability of EBVp (\(\widehat{acc}_{p}^{2}\))
In a general formulation, \(\widehat{acc}_{p}^{2} = \frac{{cov\left( {\hat{u}_{p} ,\hat{u}_{w} } \right)}}{{\sigma_{g*}^{2} }}\), where \(\sigma_{g* }^{2}\) is the genetic variance of the group of individuals of interest. We use this more general formulation as in [21] instead of the formulation used in [11], because the latter is adequate only for a group of animals that represent the whole population after selection. In this work, we analyzed EBV of sets of contemporary young rams of the population, in other words, highly selected individuals, which decreases reliability [20, 21]. A difficulty associated with this estimator is that it requires an estimate of the genetic variance of the group of individuals of interest. We estimated the genetic variance of each group of focal individuals following [22] using the complete dataset. We used Gibbs sampling on the complete dataset, with 150,000 iterations and a burn-in of 15,000 iterations. Every 150th iteration, we took samples of the EBV of all AI males in the 10 focal groups and we computed, for each of these groups, the variance of these samples. This results in samples from the posterior distributions of the 10 genetic variances, one for each group of AI males.
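The last step can be sketched in R as follows (illustrative only; samples is assumed to be a matrix of saved EBV samples with one row per saved iteration and one column per AI male, and group gives the focal group of each male):
# for each saved Gibbs iteration, the variance of the sampled EBV within each
# focal group; columns are groups, rows are posterior samples of that variance
group_var_samples <- function(samples, group) {
  sapply(split(seq_len(ncol(samples)), group),
         function(cols) apply(samples[, cols, drop = FALSE], 1, var))
}
# posterior means of the 10 group variances: colMeans(group_var_samples(samples, group))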
Unselected reliability of EBVp (\(\widehat{rel}_{p}\))
This estimator estimates the reliability as if there was no selection, \(\widehat{rel}_{p} = 1 - \frac{{\sigma_{g*}^{2} }}{{\sigma_{g}^{2} }}\left( {1 - \widehat{acc}_{p}^{2} } \right)\) as in [23], where \(\sigma_{g}^{2}\) is the genetic variance of the base population and \(\sigma_{g* }^{2}\) is the genetic variance of the group of individuals of interest (see above). A short derivation of \(\widehat{rel}_{p}\) follows from [21, 24], \(Cov\left( {\hat{u}_{p} ,\hat{u}_{w} } \right) = \sigma_{g*}^{2} - PEV\), so \(PEV = \sigma_{g*}^{2} - Cov\left( {\hat{u}_{p} ,\hat{u}_{w} } \right)\). The reliability that is unaffected by selection is \(r^{2} = 1 - \frac{PEV}{{\sigma_{g}^{2} }}\) leading to \(r^{2} = 1 - \frac{{\sigma_{g*}^{2} }}{{\sigma_{g}^{2} }}\left( {1 - r^{2*} } \right)\) [23], where \(r^{2*}\) is the selected reliability. The reliability \(\widehat{rel}_{p}\) can be interpreted as if the focal individuals were not selected, or, in other words, as the average theoretical reliability of the focal individuals obtained from the mixed model equations (MME).
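All of the estimators above are simple functions of the two EBV vectors. A minimal R sketch (illustrative only, not the software used in this work; ebv_p and ebv_w are the EBV of the focal rams from the partial and whole evaluations, and sigma2_gstar and sigma2_g are the variances defined above) could be:
lr_stats <- function(ebv_p, ebv_w, sigma2_gstar, sigma2_g) {
  cpw <- cov(ebv_p, ebv_w)
  list(delta_p = mean(ebv_p) - mean(ebv_w),            # bias
       b_p     = cpw / var(ebv_p),                     # dispersion (slope of EBVw on EBVp)
       rho_wp  = cpw / sqrt(var(ebv_p) * var(ebv_w)),  # ratio of accuracies
       rho2_pw = cpw / var(ebv_w),                     # ratio of reliabilities
       acc2_p  = cpw / sigma2_gstar,                   # selected reliability
       rel_p   = 1 - (sigma2_gstar / sigma2_g) * (1 - cpw / sigma2_gstar))  # unselected reliability
}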
To apply the LR method, we have to obtain EBV from the partial dataset and the whole dataset. In this work, in order to obtain an empirical distribution of the statistics of the LR method, we performed several comparisons between EBVp and EBVw, taking EBVp from rams born in year \(y_{p}\) (2005 to 2014) and EBVw from years \(y_{p} + 2\) until year 2017 (last year of records for this work). The year of the first set of EBVw was \(y_{p} + 2\) because the first daughters of the selected rams generally start to produce 2 years after birth. For example, if we take the EBV at birth of rams born in 2005 as EBVp, we have EBVw of these rams from years 2007 to 2017, thus we have 11 sets of estimators; and if we take EBV from rams born in year 2014 as EBVp, we only have EBVw from year 2016 to 2017, thus only two sets of estimators. In total, we performed 65 comparisons, e.g. 2005 vs 2007, 2005 vs 2008 ... 2005 vs 2017 … 2014 vs 2016 and 2014 vs 2017.
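The count of 65 comparisons can be checked directly in R:
pairs <- do.call(rbind, lapply(2005:2014, function(yp)
  data.frame(y_partial = yp, y_whole = (yp + 2):2017)))
nrow(pairs)  # 65 comparisons, from 11 for rams born in 2005 down to 2 for 2014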
Bias or accuracies are properties of the partial dataset only, and not of the whole dataset. Sampling several "partial" years makes it possible to describe variation due to chance, since the properties of BLUP hold only in expectation. In addition, by considering multiple "whole" datasets, we tried to evaluate random deviations of the estimates of biases and accuracies. For instance, a ram may stop receiving new progeny records after a few years, yet the estimates of its contemporary groups may still change. The theory of the LR method (actually, BLUP theory) shows that the estimators of the LR method are correct regardless of whether rams continue to be selected (and accumulate more and more offspring) or not.
We considered several models for the evaluations that are presented below. We applied the LR method within models, with both EBVp and EBVw obtained with the same model. We also applied this method across models: EBVp obtained with one model, for example regular BLUP with MF, and EBVw from another model, for example SSGBLUP with UPG. Finally, because the addition of genomic data to the evaluation can be seen as "more information", it is possible to see EBV obtained at the same time but without and with genomic information as EBVp and EBVw, respectively. Thus, we also compared the EBV of the rams at birth estimated with the BLUP and SSGBLUP models. For example, the EBV of rams at birth in 2005 were estimated with BLUP as EBVp and estimated with SSGBLUP as EBVw.
Although there is no theoretical support for using the LR method across models [21], our objective was to check the consistency of models with each other, in the sense that a refinement of the model should not introduce unexpected changes in the evaluations. Otherwise, one of the models could possibly be quite wrong. For instance, switching the genetic evaluation of milk yield from lactational measures to test-day models should not introduce big changes. Likewise, selection schemes that start adding genomic information to the genetic evaluations must change models without too large changes in the EBV. Viewed in this way, it is important to check the coherence (lack of strong changes) from one model to the other. We focused on the regression coefficient \(\hat{b}_{p}\), with an expected value of 1.
To summarize the 65 comparisons, raw averages of estimators are not correct because some years are more represented than others, e.g. 2005 has 11 comparisons whereas 2014 has two comparisons. Thus, we used the pseudo-model \({\mathbf{es}}_{pw} = {\mathbf{Xy}}_{p} + {\mathbf{Zy}}_{w} + {\varvec{\upvarepsilon}}\), where \({\mathbf{es}}_{pw}\) is a vector of the 65 values of the estimator (\(\hat{\Delta }_{p}\), \(\hat{b}_{p}\), \(\hat{\rho }_{wp}\), \(\widehat{acc}_{p}^{2}\), \(\hat{\rho }_{pw}^{2}\), \(\widehat{rel}_{p}\)) from the comparison of EBVp of the rams born in year \(p\) and of EBVw of the same rams obtained in year \(w\), \({\mathbf{y}}_{p}\) contains values for years \(p\) (2005 to 2014) and \({\mathbf{y}}_{w}\) contains values for years \(w\) (\({\text{y}}_{\text{p}} + 2\) until 2017), and we report an estimable function that yields \(\widehat{{{\mathbf{es}}_{pw} }}\) as if the design were balanced: \(\widehat{{{\mathbf{es}}_{pw} }} = \frac{1}{np}1^{\prime}{\hat{\mathbf{y}}}_{p} + \frac{1}{nw}1^{\prime}{\hat{\mathbf{y}}}_{w}\), where \(np\) and \(nw\) are the number of different years for the "partial" dataset (8) and "whole" dataset (11). The pseudo-model was fit by least squares (lm function in R), and the R package gmodels version 2.18.1 was used to compute the contrasts. The code is given in "Appendix".
The genetic evaluations were performed using the regular linear model for genetic evaluation of MTR. This is a univariate model with repeated records for milk yield that accounts for heterogeneity of variances across contemporary groups [25]:
$${\varvec{\Lambda}}{\mathbf{y}} = {\mathbf{y}}_{c} = {\mathbf{Xb}} + {\mathbf{W}}_{u} {\mathbf{u}} + {\mathbf{W}}_{p} {\mathbf{p}} + {\mathbf{e}},$$
where \({\varvec{\Lambda}}\) is a diagonal matrix of scaling factors for heterogeneity of variances, \({\mathbf{y}}\) is a vector of milk yield records, \({\mathbf{y}}_{c}\) is a vector of the observations corrected for heterogeneity of variances, \({\mathbf{b}}\) is a vector of the fixed effects: contemporary group, age and number of lactation, month of lambing and interval "from lambing to first milk recording", \({\mathbf{u}}\) is a vector of breeding values, \({\mathbf{p}}\) is a vector of permanent animal effects, \({\mathbf{e}}\) is a vector of residuals, and \({\mathbf{X}}\), \({\mathbf{W}}_{p}\) and \({\mathbf{W}}_{u}\) are incidence matrices for fixed effects, permanent animal effects, and breeding values. Following [25], the \(i{\text{th}}\) diagonal element in \({\varvec{\Lambda}}\) is \(\exp \left( {\frac{{\tau_{i} }}{2}} \right)\); a scaling factor for fixed and random effects. The linear model for \(\tau_{i} = {\mathbf{S}}_{i} {\varvec{\upbeta}}\), where \({\varvec{\upbeta}}\) is the vector of unknown effects for year (fixed) and flock-year (random) and \({\mathbf{S}}_{i}\) is the design vector. Heritability was fixed at 0.30 (the value used in official evaluations; an estimate calculated with the complete dataset was equal to 0.28). In models with UPG, EBV cannot be estimated, and the genetic basis changes with the model used. Therefore, we referred all estimates of EBV to the average EBV of the females born in 2005. Using this animal model, different (sub) models were defined depending on: (1) the use or not of genomic information, and (2) the strategy to model missing pedigree.
We used BLUP models with the matrix of additive genetic relationships \({\mathbf{A}}\) [24] and models that include the genomic information in a single step (SSGBLUP). The SSGBLUP models replace \({\mathbf{A}}\) with a matrix \({\mathbf{H}}\) that combines pedigree and genomic relationships [26,27,28].
To model the missing pedigree, we used three strategies, unknown parent groups for \({\mathbf{A}}\) (UPG) and for \({\mathbf{H}}\) (EUPG) and metafounders (MF). Unknown parents groups were developed to avoid bias due to differences in genetic means of groups of individuals with different origins [15, 29]. The theory of UPG adapted to SSGBLUP models was reviewed by [16]. Later, Legarra et al. [17] conceived the theory of MF that represents base populations by related, inbred pseudo-individuals. The aim of MF was to provide a coherent theory, where UPG would account for the reduction in genetic variance due to drift and for relationships across base populations. Using genomic information, it is possible to estimate the relatedness between groups of unknown parents (\({\varvec{\Gamma}}\) matrix) [17, 30], and this relationship matrix across MF can be used also in purely pedigree-based BLUP models. We estimated matrix \({\varvec{\Gamma}}\) from observed genotypes using the GLS method of [30].
Let index 0 denote the base populations (either UPG or MF), index 1 "non-genotyped animals", and index 2 "genotyped animals". Denote \({\mathbf{A}}^{ - 1} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}^{11} } & {{\mathbf{A}}^{12} } \\ {{\mathbf{A}}^{21} } & {{\mathbf{A}}^{22} } \\ \end{array} } \right)\) as the usual inverse of the relationship matrix and \({\mathbf{A}}_{22}^{ - 1}\) the inverse including only genotyped animals, \({\mathbf{A}}^{*} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}^{00} } & {{\mathbf{A}}^{01} } & {{\mathbf{A}}^{02} } \\ {{\mathbf{A}}^{10} } & {{\mathbf{A}}^{11} } & {{\mathbf{A}}^{12} } \\ {{\mathbf{A}}^{20} } & {{\mathbf{A}}^{21} } & {{\mathbf{A}}^{22} } \\ \end{array} } \right)\) as the generalized inverse (as it is not full rank) including UPG, and \({\mathbf{A}}^{\left( \varGamma \right) - 1} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}^{\left( \varGamma \right)00} } & {{\mathbf{A}}^{\left( \varGamma \right)01} } & {{\mathbf{A}}^{\left( \varGamma \right)02} } \\ {{\mathbf{A}}^{\left( \varGamma \right)10} } & {{\mathbf{A}}^{\left( \varGamma \right)11} } & {{\mathbf{A}}^{\left( \varGamma \right)12} } \\ {{\mathbf{A}}^{\left( \varGamma \right)20} } & {{\mathbf{A}}^{\left( \varGamma \right)21} } & {{\mathbf{A}}^{\left( \varGamma \right)22} } \\ \end{array} } \right)\) as the inverse using MF. All three matrices are easily built using simple modifications of Henderson's algorithm [31].
The SSGBLUP model proceeds by modifying the conditional variances and covariances in the inverse matrices according to observed genomic information, by obtaining \({\mathbf{H}}^{ - 1}\) matrices from \({\mathbf{A}}^{ - 1}\) matrices. Corresponding matrices are, for SSGBLUP-UPG:
$${\mathbf{H}}_{{{\mathbf{UPG}}}}^{*} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}^{00} } & {{\mathbf{A}}^{01} } & {{\mathbf{A}}^{02} } \\ {{\mathbf{A}}^{10} } & {{\mathbf{A}}^{11} } & {{\mathbf{A}}^{12} } \\ {{\mathbf{A}}^{20} } & {{\mathbf{A}}^{21} } & {{\mathbf{A}}^{22} } \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & {{\mathbf{G}}^{ - 1} - {\mathbf{A}}_{22}^{ - 1} } \\ \end{array} } \right),$$
where \({\mathbf{G}}\) is the genomic relationship matrix that is built following the first method in [32], using observed allele frequencies, and made comparable to \({\mathbf{A}}_{22}\) following [5].
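A minimal sketch of this construction (VanRaden's first method) in R, with M the 0/1/2 genotype matrix (individuals by SNPs) and assuming no missing genotypes; the subsequent compatibility scaling to \({\mathbf{A}}_{22}\) and the blending described below are not included:
vanraden_G <- function(M) {
  p <- colMeans(M) / 2                 # observed allele frequencies
  Z <- sweep(M, 2, 2 * p)              # center each column at 2p
  Z %*% t(Z) / (2 * sum(p * (1 - p)))  # G = ZZ' / (2 * sum p_j(1 - p_j))
}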
It is well known that this matrix is, at best, an approximation [16] because the theory of matrix \({\mathbf{H}}\) was derived under the constraint that \({\mathbf{A}}\) is full rank, which is not the case for \({\mathbf{A}}^{*}\). The same authors in [16] proposed a full transformation hereafter called "exact UPG" (EUPG) that can be written as:
$${\mathbf{H}}_{EUPG}^{*} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}^{00} } & {{\mathbf{A}}^{01} } & {{\mathbf{A}}^{02} } \\ {{\mathbf{A}}^{10} } & {{\mathbf{A}}^{11} } & {{\mathbf{A}}^{12} } \\ {{\mathbf{A}}^{20} } & {{\mathbf{A}}^{21} } & {{\mathbf{A}}^{22} } \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} {{\mathbf{Q}}_{2}^{'} \left( {{\mathbf{G}}^{ - 1} - {\mathbf{A}}_{22}^{ - 1} } \right){\mathbf{Q}}_{2} } & 0 & { - {\mathbf{Q}}_{2}^{'} \left( {{\mathbf{G}}^{ - 1} - {\mathbf{A}}_{22}^{ - 1} } \right)} \\ 0 & 0 & 0 \\ { - \left( {{\mathbf{G}}^{ - 1} - {\mathbf{A}}_{22}^{ - 1} } \right){\mathbf{Q}}_{2} } & 0 & {{\mathbf{G}}^{ - 1} - {\mathbf{A}}_{22}^{ - 1} } \\ \end{array} } \right),$$
where \({\mathbf{Q}}_{2}\) is the matrix containing UPG compositions for genotyped animals.
Whereas in "regular" SSGBLUP the only changes concern genotyped animals, here there are extensive changes that make programming difficult. In addition, because \({\mathbf{G}}\) accounts correctly for the different origins and does not need pedigree completion, there is, depending on the pedigree structure, some sort of double-counting as observed by [18]. These problems are solved by MF which proposes:
$${\mathbf{H}}_{MF}^{ *} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}^{\left( \varGamma \right)00} } & {{\mathbf{A}}^{\left( \varGamma \right)01} } & {{\mathbf{A}}^{\left( \varGamma \right)02} } \\ {{\mathbf{A}}^{\left( \varGamma \right)10} } & {{\mathbf{A}}^{\left( \varGamma \right)11} } & {{\mathbf{A}}^{\left( \varGamma \right)12} } \\ {{\mathbf{A}}^{\left( \varGamma \right)20} } & {{\mathbf{A}}^{\left( \varGamma \right)21} } & {{\mathbf{A}}^{\left( \varGamma \right)22} } \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & {{\mathbf{G}}_{05}^{ - 1} - {\mathbf{A}}_{22}^{\left( \varGamma \right) - 1} } \\ \end{array} } \right),$$
where \({\mathbf{G}}_{05}\) is built with allele frequencies of 0.5 and there is no extra scaling to match \({\mathbf{A}}_{22}^{\left( \varGamma \right)}\), although there is blending as described below.
For all SSGBLUP models, the blending between \({\mathbf{G}}\) and \({\mathbf{A}}_{22}\) or between \({\mathbf{G}}_{05}\) and \({\mathbf{A}}_{22}^{\left( \varGamma \right)}\) was done using 0.95 and 0.05, as respective weights [32,33,34]. An analysis using MF also needs to consider that the population is more related by construction. We used the scaling of genetic variance in [17] such that if the genetic variance considering BLUP_UPG was \(\sigma_{u}^{2}\), the genetic variance component attributed to \({\mathbf{H}}_{MF}^{*}\) was \(\sigma_{u}^{2} /k\) for \(k = 1 + \frac{{\overline{{diag\left( {\varvec{\Gamma}} \right)}} }}{2} - {\bar{\mathbf{\varGamma }}}\).
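In R, the blending and the metafounder rescaling of the genetic variance described here would look like the following sketch (G05, A22g = \({\mathbf{A}}_{22}^{\left( \varGamma \right)}\), Gamma and sigma2_u are assumed to be already available):
Gb <- 0.95 * G05 + 0.05 * A22g                 # blending with the weights given above
k  <- 1 + mean(diag(Gamma)) / 2 - mean(Gamma)  # scaling factor of Legarra et al. [17]
sigma2_u_MF <- sigma2_u / k                    # genetic variance used with H_MF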
Now we can describe the five models:
BLUP-UPG uses \({\mathbf{A}}^{*}\) and is the reference method known to be robust.
BLUP-MF uses \({\mathbf{A}}^{\left( \varGamma \right) - 1}\). The main difference is that the latter assumes that MF are random effects and that they are correlated, whereas the former uses UPG that are fixed (unbounded a priori) effects.
SSGBLUP-UPG uses \({\mathbf{H}}_{UPG}^{*}\) and is expected to be somewhat biased because it is an approximation.
SSGBLUP-EUPG is supposed to be biased also because there is some double-counting. However the bias is not necessarily the same as in SSGBLUP-UPG.
SSGBLUP-MF is supposed to be the most accurate method.
All genetic evaluations were performed with heterf90 (not publicly released), which solves the outer model for heterogeneity of variances as in [25], whereas inner iterations used blup90iod2 [35]. To estimate the relationships across MF, we used gammaf90 (not publicly released), which uses the GLS method in [30].
The estimated value of \({\varvec{\Gamma}}\) is presented below (each row/column corresponds to MF separated by 3 years). We did not explore these values in depth since it was out of the scope of this paper, but, in general, values showed moderate relationships across MF, i.e. most correlations obtained as \({\varvec{\Gamma}}_{{\left( {i,j} \right)}} /\sqrt {{\varvec{\Gamma}}_{{\left( {i,i} \right)}} {\varvec{\Gamma}}_{{\left( {j,j} \right)}} }\) ranged from 0.5 to 0.6. The second and third MF present somewhat extreme values because they have few genotyped descendants. For instance, if the allele frequencies in the base generation were uniformly distributed, the expected value in the diagonal is 2/3 [36]. Matrix \({\varvec{\Gamma}}\) is estimated from estimates of allele frequencies in the base population with standard errors ranging from 0.15 to 0.33, which are the highest values for the second and third MF. These errors seem large but we take the estimate of \({\varvec{\Gamma}}\) as a crude guess, i.e. just as breeding programs start with guessed heritabilities.
$${\varvec{\Gamma}} = \left[ {\begin{array}{*{20}c} {0.53} & {0.24} & {0.37} & {0.38} & {0.39} & {0.39} & {0.41} & {0.41} & {0.41} & {0.42} & {0.44} & {0.43} & {0.40} \\ {} & {0.92} & {0.24} & {0.30} & {0.37} & {0.37} & {0.37} & {0.38} & {0.39} & {0.40} & {0.39} & {0.39} & {0.37} \\ {} & {} & {0.96} & {0.39} & {0.33} & {0.39} & {0.37} & {0.38} & {0.38} & {0.39} & {0.38} & {0.38} & {0.38} \\ {} & {} & {} & {0.72} & {0.37} & {0.34} & {0.37} & {0.38} & {0.37} & {0.38} & {0.37} & {0.39} & {0.37} \\ {} & {} & {} & {} & {0.81} & {0.36} & {0.36} & {0.38} & {0.39} & {0.39} & {0.39} & {0.40} & {0.37} \\ {} & {} & {} & {} & {} & {0.68} & {0.38} & {0.37} & {0.38} & {0.39} & {0.39} & {0.40} & {0.38} \\ {} & {} & {} & {} & {} & {} & {0.69} & {0.39} & {0.38} & {0.38} & {0.40} & {0.41} & {0.38} \\ {} & {} & {} & {} & {} & {} & {} & {0.61} & {0.39} & {0.39} & {0.40} & {0.40} & {0.38} \\ {} & {} & {} & {} & {} & {} & {} & {} & {0.63} & {0.40} & {0.41} & {0.39} & {0.38} \\ {} & {} & {} & {} & {} & {} & {} & {} & {} & {0.59} & {0.42} & {0.41} & {0.39} \\ {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {0.52} & {0.43} & {0.40} \\ {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {0.83} & {0.41} \\ {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {} & {0.59} \\ \end{array} } \right]$$
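The correlations quoted above follow directly from this matrix; in R (with Gamma the full symmetric matrix, i.e. the lower triangle filled in):
corr_MF <- Gamma / sqrt(outer(diag(Gamma), diag(Gamma)))  # Gamma_ij / sqrt(Gamma_ii * Gamma_jj)
# off-diagonal values are mostly between 0.5 and 0.6, as reported in the text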
As for the LR method, Table 1 shows the values of estimators within models, i.e. when the model used to estimate EBVp and EBVw was the same. In this case, the smallest bias (\(\hat{\Delta }\) of 0.23 genetic standard deviations (\(\sigma_{g}\)) and 0.25 \(\sigma_{g}\) for SSGBLUP-MF and BLUP-MF, respectively) was obtained with MF. All models are slightly biased and overestimate the genetic trend (around 0.25 genetic standard deviations, equivalent to 1 year of selection).
Table 1 Average \(\hat{\Delta }_{p}\) (expressed as \(\sigma_{g}\)), \(\hat{b}_{p}\), \(\hat{\rho }_{wp}\), \(\hat{\rho }_{pw}^{2}\), \(\widehat{acc}_{p}^{2}\) and \(\widehat{rel}_{p}\) within models
For the estimator of dispersion (\(\hat{b}_{p}\)), for all models, except for SSGBLUP-EUPG, the values were close to 1, meaning absence of over- or under-dispersion of EBV. However, SSGBLUP-EUPG model was biased (\(\hat{b}_{p} = 0.88\)), which indicates inflation of EBV. This agrees with [18] who found that SSGBLUP-EUPG was biased. In Fig. 1, we present the values of each estimate of \(\hat{b}_{p}\) for BLUP-MF (Fig. 1a), which has the average value of \(\hat{b}_{p}\) closest to 1, and for SSGBLUP-EUPG (Fig. 1b), which generates the most over-dispersion. The variability of the estimates of EBVp within and across years is similar for both models, but the estimates of dispersion with SSGBLUP-EUPG are systematically the smallest. As Fig. 1 shows, the year of birth 2008 seems to yield biased estimators. This agrees with [14] who found biases for predictions of rams born in this year. Figure 1 also illustrates that there is a large variability of estimates within and across years of the "partial" and "whole" datasets, with the implication that a single time-point is not sufficient to describe the behavior of the genetic evaluation.
Estimates of \(\hat{b}_{p}\) for models BLUP-MF (a) and SSGBLUP-EUPG (b) by year of EBVp evaluated
Estimator \(\hat{\rho }_{wp}\) represents the inverse of the relative gain in accuracy from EBVp to EBVw, thus high values of this estimator imply higher accuracy in the "partial" dataset, as expected for SSGBLUP. In agreement, values of this estimator were lower for the BLUP models (roughly 0.55) than for the SSGBLUP models (roughly 0.65). In other words, the EBV of the rams obtained without the records of their daughters were more accurate in SSGBLUP than in BLUP, which agrees with [14]. Similar results were found for \(\hat{\rho }_{pw}^{2}\), which estimates the ratio between reliabilities in EBVp and EBVw.
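Using the relation given earlier, these rounded values translate into the relative gain in accuracy brought by the daughters' records (illustrative R):
rho_wp <- c(BLUP = 0.55, SSGBLUP = 0.65)
1 / rho_wp - 1  # ~0.82 for BLUP vs ~0.54 for SSGBLUP: the genomic EBVp already capture more of the final accuracy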
The direct estimators of accuracy (\(\widehat{acc}_{p}^{2}\) and \(\widehat{rel}_{p}\)), both based on the covariance between EBVp and EBVw, presented extremely high values (in some cases, the variance of EBVw was larger than the genetic variance) for SSGBLUP-UPG and SSGBLUP-EUPG, and are therefore not reported. This may be an indirect indicator of the poor fit of UPG to SSGBLUP, whereas BLUP-UPG shows reasonable values that agree with the other estimates of accuracy. For BLUP models, \(\widehat{acc}_{p}^{2}\) values were lower than for SSGBLUP-MF (0.24 vs 0.32), which agrees with the information obtained from the other estimates of accuracy. Although these values are apparently small, this is expected because this is a sample of animals that are selected based on parent average [20]. In contrast, the estimation of "unselected" reliabilities, \(\widehat{rel}_{p}\), results in values within the usual scale of individual model-based accuracies. Again, the SSGBLUP-MF model estimated higher reliabilities than the BLUP models (0.59 vs 0.54 and 0.53, respectively). The increase in accuracy is fairly consistent across all four estimators of accuracy.
In Table 2, we presented the values of the slope of the regression of EBVw on EBVp (\(\hat{b}_{p}\)) when EBVp was estimated with one model and EBVw with another model. This gives some sort of measure of the disagreement across models, i.e. we expect models to behave similarly in terms of biases. Cases that estimate in a "partial" dataset with SSGBLUP and in a "whole" dataset with BLUP are not considered, since they seem unnatural in practice; for instance the decision on which animals to genotype may be based on the information of the whole dataset. When we use pedigree-based models to estimate EBVp and EBVw, the dispersion is around 1 (0.93 and 1.01), regardless of whether UPG or MF are used.
Table 2 Average \(\hat{b}_{p}\) when EBVp was estimated with one model and EBVw with other model
When EBVp were estimated with the BLUP models and EBVw with SSGBLUP-UPG or SSGBLUP-EUPG (the case when genomic selection is implemented), we observed an important under-dispersion (around 1.25). However, SSGBLUP-MF yielded \(\hat{b}_{p}\) values close to 1. Similar results were obtained when we compared EBV of the rams at birth, estimated with the BLUP models as "partial" with those estimated with the SSGBLUP models as "whole" (Table 3). The models SSGBLUP-UPG and SSGBLUP-EUPG show important under-dispersion whereas SSGBLUP-MF results in values of \(\hat{b}_{p}\) close to 1. This indicates that if we want to change a pedigree-based genetic evaluation for one that includes genomic information, the use of MF is a better option. Moreover, SSGBLUP-EUPG is biased with itself as shown in Table 1, perhaps due to poor compatibility with the \({\mathbf{G}}\) matrices, because of double-counting, or both.
Table 3 Average (standard deviation) of \(\hat{b}_{p}\) when EBV*p was estimated with BLUP and EBV*w was estimated with SSGBLUP
This study provides a comprehensive analysis of bias, dispersion and accuracies in dairy sheep genetic evaluation with several truncation points of data and several models. Estimates of bias, dispersion and accuracy were obtained with evaluation models that used only pedigree or a combination of pedigree and genomic relationship matrices with different strategies to model missing pedigree and using the LR method. The properties of such types of models have recently been extensively investigated [18, 30, 36,37,38,39,40]. The current study adds further evidence that the metafounder approach should be the preferred one for genomic evaluation across species.
The values of accuracy estimators confirm that the inclusion of genomic information increases the accuracy of the EBV of individuals without daughter records, which is consistent with other studies [41,42,43,44].
For \(\widehat{acc}_{p}^{2}\), we found extremely high values for models SSGBLUP-UPG and SSGBLUP-EUPG, due to values out of the parametric space. For example, for SSGBLUP-UPG and the comparison 2010–2015, \(cov\left( {\hat{u}_{p} ,\hat{u}_{w} } \right) = 235\), \(var\left( {\hat{u}_{p} } \right) = 283\) and \(var\left( {\hat{u}_{w} } \right) = 580\), when the genetic variance in the base population is 565. This could indicate a difficulty for these models to correctly manage missing pedigree through UPG together with the genomic information. Values within the expected range of reliabilities were found for the other models, and the SSGBLUP-MF model reached the highest average value. These results agree with the values of the estimators of the ratio of accuracies (\(\hat{\rho }_{wp}\) and \(\hat{\rho }_{pw}^{2}\)), since the use of genomic information increases the reliability of EBV estimated without daughter records. We should note that \(\widehat{acc}_{p}^{2}\) tries to estimate the square of the correlation between EBV and TBV in the focal individuals, which are selected and have reduced variance, whereas \(\widehat{rel}_{p}\) would be the squared correlation if they were unselected. These two estimators have different purposes in practice [20]: the first, populational reliability \(\widehat{acc}_{p}^{2}\), describes the possible genetic gain, whereas the second describes stability of EBV. In the current breeding scheme of the Manech Tête Rousse, more candidates are genotyped for selection, so that our estimate \(\widehat{acc}_{p}^{2}\) is possibly a lower bound.
Concerning the bias (\(\hat{\Delta }_{p}\)), the lowest values were observed when MF were used to model the missing pedigree. As for the estimator of dispersion (\(\hat{b}_{p}\)), we did not observe important over- or under-dispersion, except for SSGBLUP-EUPG. The values of this estimator closest to 1 were obtained when we used BLUP-MF and SSGBLUP-MF. Similar results were obtained in a recent work [18], which indicates that MF could be the best option to manage missing pedigree for SSGBLUP models. In the case of SSGBLUP-EUPG, an important inflation of EBV was observed. A possible cause for this behavior could be that EUPG ignores the covariance between genetic groups (average relationship across MF is 0.38) whereas this relationship is included in \({\mathbf{G}}\). Similar results were reported by [18] using simulated data to compare the same three strategies to model missing parents, and they found that MF generated the smallest bias in evaluations.
In general, when BLUP or SSGBLUP-MF were used, no bias was found, although Legarra et al. [14] found biases in these same breeds using DYD both as pseudo-phenotypes and for validation. However, as we already mentioned, the validation set in [14] was composed of rams born in 2008–2009 with predictions that were also biased according to the LR method, which was due to a problem in collecting elite rams across flocks.
Finally, we consider it important to highlight that estimates of accuracy or bias obtained from a single cut-off point are highly uncertain, as shown in Fig. 1. Breeding schemes should not rely on a single study based on a single point in time to define models for genetic evaluation.
The addition of genomic information increases the accuracy of the EBV of young rams in Manech Tête Rousse. In this population, that has missing pedigrees, the use of UPG and "exact UPG" in SSGBLUP produced bias, whereas MF yielded unbiased estimates and, thus we recommend its use. We also recommend assessing biases and accuracies using multiple truncation points, as these statistics are subject to random variation.
The data set is available upon reasonable request.
Spelman RJ, Arias J, Keehan MD, Obolonkin V, Winkelman AM, Johnson DL, et al. Application of genomic selection in the New Zealand dairy cattle industry. In: Proceedings of the 9th world congress on genetics applied to livestock production: 1–6 August 2010; Leipzig; 2010.
Patry C, Ducrocq V. Evidence of biases in genetic evaluations due to genomic preselection in dairy cattle. J Dairy Sci. 2011;94:1011–20.
Sargolzaei M, Chesnais J, Schenkel F. Assessing the bias in top GPA bulls. Canadian Dairy Network Open Industry Session: 30 October 2012; Guelph; 2012. p. 1–9.
Tyrisevä AM, Mäntysaari EA, Jakobsen J, Aamand GP, Dürr J, Fikse WF, et al. Detection of evaluation bias caused by genomic preselection. J Dairy Sci. 2018;101:3155–63.
Christensen OF, Madsen P, Nielsen B, Ostersen T, Su G. Single-step methods for genomic evaluation in pigs. Animal. 2012;6:1565–71.
Carillier C, Larroque H, Robert-Granié C. Comparison of joint versus purebred genomic evaluation in the French multi-breed dairy goat population. Genet Sel Evol. 2014;46:67.
Abdalla EEA, Schenkel FS, Emamgholi Begli H, Willems OW, van As P, Vanderhout R, et al. Single-step methodology for genomic evaluation in Turkeys (Meleagris gallopavo). Front Genet. 2019;10:1248.
Saatchi M, McClure MC, McKay SD, Rolf MM, Kim J, Decker JE, et al. Accuracies of genomic breeding values in American Angus beef cattle using K-means clustering for cross-validation. Genet Sel Evol. 2011;43:40.
Saatchi M, Ward J, Garrick DJ. Accuracies of direct genomic breeding values in Hereford beef cattle using national or international training populations. J Anim Sci. 2013;91:1538–51.
Cardoso FF, Gomes CCG, Sollero BP, Oliveira MM, Roso VM, Piccoli ML, et al. Genomic prediction for tick resistance in Braford and Hereford cattle. J Anim Sci. 2015;93:2693–705.
Legarra A, Reverter A. Semi-parametric estimates of population accuracy and bias of predictions of breeding values and future phenotypes using the LR method. Genet Sel Evol. 2018;50:53.
Astruc JM, Baloche G, Barillet F, Legarra A. Genomic evaluation validation test proposed by Interbull is necessary but not sufficient because it does not check the correct genetic trend. In: Proceedings of the 39th annual conference of the international committee for animal recording ICAR: 19–23 May 2014; Berlin; 2014. p. 50.
Baloche G, Legarra A, Sallé G, Larroque H, Astruc JM, Robert-Granié C, et al. Assessment of accuracy of genomic prediction for French Lacaune dairy sheep. J Dairy Sci. 2014;97:1107–16.
Legarra A, Baloche G, Barillet F, Astruc JM, Soulas C, Aguerre X, et al. Within- and across-breed genomic predictions and genomic relationships for Western Pyrenees dairy sheep breeds Latxa, Manech, and Basco-Béarnaise. J Dairy Sci. 2014;97:3200–12.
Quaas RL. Additive genetic model with groups and relationships. J Dairy Sci. 1988;71:91–8.
Misztal I, Vitezica ZG, Legarra A, Aguilar I, Swan AA. Unknown-parent groups in single-step genomic evaluation. J Anim Breed Genet. 2013;130:252–8.
Legarra A, Christensen OF, Vitezica ZG, Aguilar I, Misztal I. Ancestral relationships using metafounders: finite ancestral populations and across population relationships. Genetics. 2015;200:455–68.
Bradford HLL, Masuda Y, VanRaden PMM, Legarra A, Misztal I. Modeling missing pedigree in single-step genomic BLUP. J Dairy Sci. 2019;102:2336–46.
Forneris NS, Legarra A, Vitezica ZG, Tsuruta S, Aguilar I, Misztal I, et al. Quality control of genotypes using heritability estimates of gene content at the marker. Genetics. 2015;199:675–81.
Bijma P. Accuracies of estimated breeding values from ordinary genetic evaluations do not reflect the correlation between true and estimated breeding values in selected populations. J Anim Breed Genet. 2012;129:345–58.
Macedo FL, Reverter A, Legarra A. Behavior of the Linear Regression method to estimate bias and accuracies with correct and incorrect genetic evaluation models. J Dairy Sci. 2020;103:529–44.
Sorensen D, Fernando R, Gianola D. Inferring the trajectory of genetic variance in the course of artificial selection. Genet Res. 2001;77:83–94.
Dekkers JCM. Asymptotic response to selection on best linear unbiased predictors of breeding values. Anim Prod. 1992;54:351–60.
Henderson CR. Best linear unbiased estimation and prediction under a selection model. Biometrics. 1975;31:423–47.
Meuwissen THE, De Jong G, Engel B. Joint estimation of breeding values and heterogeneous variances of large data files. J Dairy Sci. 1996;79:310–6.
Legarra A, Christensen OF, Aguilar I, Misztal I. Single step, a general approach for genomic selection. Livest Sci. 2014;166:54–65.
Christensen OF, Lund MS. Genomic prediction when some animals are not genotyped. Genet Sel Evol. 2010;42:2.
Legarra A, Aguilar I, Misztal I. A relationship matrix including full pedigree and genomic information. J Dairy Sci. 2009;92:4656–63.
Westell RA, Quaas RL, Van Vleck LD. Genetic groups in an animal model. J Dairy Sci. 1988;71:1310–8.
Garcia-Baccino CA, Legarra A, Christensen OF, Misztal I, Pocrnic I, Vitezica ZG, et al. Metafounders are related to Fst fixation indices and reduce bias in single-step genomic evaluations. Genet Sel Evol. 2017;49:34.
Henderson CR. A simple method for computing the inverse of a numerator relationship matrix used in prediction of breeding values. Biometrics. 1976;32:69–83.
VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23.
Aguilar I, Misztal I, Legarra A, Tsuruta S. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation. J Anim Breed Genet. 2011;128:422–8.
Christensen OF. Correction: compatibility of pedigree-based and marker-based relationship matrices for single-step genetic evaluation. Genet Sel Evol. 2012;44:37.
Tsuruta S, Misztal I, Strandén I. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications. J Anim Sci. 2001;79:1166–72.
van Grevenhof E, Vandenplas J, Calus MPL. Genomic prediction for crossbred performance using metafounders. J Anim Sci. 2019;97:548–58.
Meyer K, Tier B, Swan A. Estimates of genetic trend for single-step genomic evaluations. Genet Sel Evol. 2018;50:39.
Xiang T, Nielsen B, Su G, Legarra A, Christensen OF. Application of single-step genomic evaluation for crossbred performance in pig. J Anim Sci. 2016;94:936–48.
Xiang T, Christensen OF, Legarra A. Technical note: genomic evaluation for crossbred performance in a single-step approach with metafounders. J Anim Sci. 2017;95:1472–80.
Yoshida GM, Carvalheiro R, Rodríguez FH, Lhorente JP. Single-step genomic evaluation improves accuracy of breeding value predictions for resistance to infectious pancreatic necrosis virus in rainbow trout. Genomics. 2019;111:127–32.
Guarini AR, Lourenco DAL, Brito LF, Sargolzaei M, Baes CF, Miglior F, et al. Use of a single-step approach for integrating foreign information into national genomic evaluation in Holstein cattle. J Dairy Sci. 2019;102:8175–83.
Duchemin SI, Colombani C, Legarra A, Baloche G, Larroque H, Astruc J-M, et al. Genomic selection in the French Lacaune dairy sheep breed. J Dairy Sci. 2012;95:2723–33.
Carillier C, Larroque H, Palhière I, Clément V, Rupp R, Robert-Granié C. A first step toward genomic selection in the multi-breed French dairy goat population. J Dairy Sci. 2013;96:7294–305.
Wiggans GR, VanRaden PM, Cooper TA. The genomic evaluation system in the United States: past, present, future. J Dairy Sci. 2011;94:3202–11.
We are grateful to the Genotoul Bioinformatics Platform Toulouse Midi-Pyrenees (Bioinfo Genotoul) for providing computing and storage resources and to the Center for Quantitative Genetics and Genomics, Aarhus University, for receiving FLM during an internship, during which analyses were performed to complete the work. Furthermore, we want to thank Vincent Ducrocq, Alain Charcosset, Catherine Larzul, Anne Ricard, Leopoldo Sanchez-Rodriguez, Francis Fidelle and the two anonymous reviewers for their helpful advice.
This work received funding from the project ARDI funded by INTERREG POCTEFA, the European Unions' Horizon 2020 Research & Innovation program under grant agreement N°772787—SMARTER, Metaprogram SELGEN of INRAE, the Animal Genetics Division of INRAE and La Région Occitanie.
GenPhySE, INRAE, 31326, Castanet Tolosan, France
Fernando L. Macedo & Andrés Legarra
Facultad de Veterinaria, UdelaR, A. Lasplaces 1620, Montevideo, Uruguay
Fernando L. Macedo
Center for Quantitative Genetics and Genomics, Blichers Allé 20, 8830, Tjele, Denmark
Ole F. Christensen
Institut de l'Elevage, CS52627, 31326, Castanet Tolosan, France
Jean-Michel Astruc
Instituto Nacional de Investigación Agropecuaria, Montevideo, Uruguay
Ignacio Aguilar
Department of Animal and Dairy Science, University of Georgia, Athens, GA, USA
Yutaka Masuda
Andrés Legarra
FLM performed all analyses and wrote the paper. AL provided input on the procedures, models, and data analyses. OFC provided input on procedures and data analyses. JMA provided the data and the model of the official genetic evaluation. IA and YM contributed to the code of the BLUPF90 family of applications, used to analyze the data. AL, OFC and IA commented on an improved draft. All authors read and approved the final manuscript.
Correspondence to Fernando L. Macedo.
R code to obtain estimable functions of statistics in LR method
We used function estimable() from the R package Gmodels v. 2.18.1
#https://www.rdocumentation.org/packages/gmodels/versions/2.18.1/topics/estimable
library(gmodels)
# read input file
input = read.table("input.txt")
# 1 represents the intercept, 9 is the number of years in "partial" minus one (included in the intercept), 10 is the number of years in "whole" minus one (included in the intercept)
cm = c(1,rep(1/9,9),rep(1/10,10))
# example with bias
lm_bias = lm(bias ~ as.factor(y_partial) + as.factor(y_whole),data = input)
# estimable function
estimable(lm_bias, cm, conf.int = 0.95)
Macedo, F.L., Christensen, O.F., Astruc, JM. et al. Bias and accuracy of dairy sheep evaluations using BLUP and SSGBLUP with metafounders and unknown parent groups. Genet Sel Evol 52, 47 (2020). https://doi.org/10.1186/s12711-020-00567-1 | CommonCrawl |
Forces on a ball thrown upwards
When a ball is thrown up in the upward direction, it is said that the force on it is in the downward direction. Why don't we consider the force given to the ball to throw it up in the upward direction? Does the force given to the ball have no effect?
newtonian-mechanics forces newtonian-gravity soft-question free-body-diagram
SHYAMANANDA NINGOMBAM
$\begingroup$ During the throwing, the net force is upwards. Once you release it the only force is downwards. $\endgroup$ – Floris Mar 11 '15 at 17:26
You should have a look at Newtons First Law of Motion:
"When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force."
When the ball is moving up and there is no force at all, then the ball will continue it's motion upwards. But when there is gravity, you should look at Newtons Second Law of Motion:
"The vector sum of the forces $F$ on an object is equal to the mass $m$ of that object multiplied by the acceleration vector $a$ of the object: $F = ma$."
While the ball is moving up, you do not interact with it, therefore the only force is gravity, accelerating the ball downwards (slowing it down).
You should not confuse the concept of force with the concept of energy. When throwing the ball up, you accelerate it and therefore you transfer kinetic energy to the ball, not giving it force.
NoMorePen
This is a classic misconception that most people share at some point in their lives. For centuries, we struggled to understand this point. For example, the famous Aristotle expresses your misconception that:
continuation of motion depends on continued action of a force
i.e. you see a ball moving upwards, and think that there must always be a force pushing it upwards. That is not the case. The ball has an initial velocity upwards, but the only force acting on the ball once it has left your hand is gravity.
Once the ball leaves your hand, it is moving upwards, but getting slower and slower, i.e. it is decelerating (or accelerating downwards). This deceleration is caused by gravity, a force acting downwards.
Nowadays, this fact is trivial, known by millions, but it was a significant development in the history of physics that confused some of the most famous minds.
innisfree
$\begingroup$ no worries @shyam, hope it's clear now $\endgroup$ – innisfree Mar 12 '15 at 9:11
$\begingroup$ YES, thanks a lot for helping me to clarify my long-time confusion @innisfree $\endgroup$ – SHYAMANANDA NINGOMBAM Mar 12 '15 at 16:20
Once you release the ball, you are not applying a force to it; it is freely falling (despite its upward motion). The only force acting on it is the gravitational force, pulling it downwards (which is why it slows down and stops momentarily at the apex, before coming back down).
See also the related question When a ball is tossed straight up, does it experience momentary equilibrium at top of its path?
Kyle Kanos
$\begingroup$ Then if it is in free fall, acted on only by gravity, why does it move upward? $\endgroup$ – SHYAMANANDA NINGOMBAM Mar 11 '15 at 17:08
$\begingroup$ Because the force you applied to the ball to throw it was greater than $mg$ (otherwise it could not move); it then continues decelerating as it moves upwards. $\endgroup$ – Kyle Kanos Mar 11 '15 at 17:16
$\begingroup$ @SHYAMANANDA NINGOMBAM: By freely falling he means the body moves only under the influence of gravity (no other forces), and the body is moving upwards because you initially provided it with kinetic energy. $\endgroup$ – Paul Mar 11 '15 at 17:18
Yes, the effect of the force with which the ball is thrown is surely considered. When the ball is thrown upwards, gravity tries to pull it downwards, and hence after some time it starts coming down (when the effect of gravity has overcome the upward motion you gave it when you threw it). For ease of solving, at a lower level it is assumed that the only force acting on the ball after being thrown is the gravitational force.
Vidyanshu Mishra
Actually, when you throw the ball up into the air, the ball will be acted on not only by gravity but also by friction and resistance from the air molecules colliding with the ball as it moves.
Inao Shyamananda. The act of throwing a ball upward can be studied in two stages.
Stage 1: When you throw a ball up you apply a force to the ball in the upward direction as long as it is in contact with your hand. This force does some amount of work on the ball. This work done is manifest as the sum of kinetic energy and potential energy of the ball. At the end of this stage the ball leaves your hand. The kinetic energy at the end of this stage gives you the "initial velocity" for the next stage through the relation $$KE=\frac{1}{2}mv^2.$$ I hope this clears your doubts about what happened to the force which your hand exerted on the ball.
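To put illustrative numbers on it: if your hand does $W = 20\ \mathrm{J}$ of work on a $0.1\ \mathrm{kg}$ ball, essentially all of it ends up as kinetic energy at release, so the ball leaves your hand at $v=\sqrt{2W/m}=20\ \mathrm{m/s}$; from that instant the only force is gravity, the ball decelerates at $g\approx 9.8\ \mathrm{m/s^2}$, rises for about $2\ \mathrm{s}$, and reaches roughly $20\ \mathrm{m}$ before falling back.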
Stage 2: The moment the ball leaves your hand, the only force acting on the ball now is the gravitational force (neglecting the effect of air). This stage has been clearly explained in the earlier answers.
Ajit
\begin{document}
\title{Coactions on Cuntz-Pimsner algebras} \author[Kaliszewski, Quigg, and Robertson] {S.~Kaliszewski, John Quigg, and David Robertson} \address [S.~Kaliszewski] {School of Mathematical and Statistical Sciences, Arizona State University, Tempe, Arizona 85287} \email{[email protected]}\ \address[John Quigg]{School of Mathematical and Statistical Sciences, Arizona State University, Tempe, Arizona 85287} \email{[email protected]} \address[David Robertson]{School of Mathematics and Applied Statistics, University of Wollongong, NSW 2522, AUSTRALIA} \email{[email protected]}
\date{\today}
\begin{abstract} We investigate how a correspondence coaction gives rise to a coaction on the associated Cuntz-Pimsner algebra. We apply this to recover a recent result of Hao and Ng concerning Cuntz-Pimsner algebras of crossed products of correspondences by actions of amenable groups. \end{abstract}
\subjclass[2010]{Primary 46L08; Secondary 46L55} \keywords {Hilbert module, $C^*$-correspondence, Cuntz-Pimsner algebra, coaction}
\maketitle
\section{Introduction}\label{intro}
The Cuntz-Pimsner algebra $\mathcal O_X$ associated to a $C^*$-correspondence $X$ is a $C^*$-algebra whose representations encode the Cuntz-Pimsner covariant representations of $X$. These were introduced by Pimsner in \cite{Pi}, and generalize both crossed products by $\mathbb Z$ and graph algebras when the underlying graph has no sources. Further work by Katsura in \cite{KatsuraCorrespondence} has expanded the class of Cuntz-Pimsner algebras to include graph algebras of arbitrary graphs, crossed products by partial automorphisms and topological graph algebras.
As in the cases of the above mentioned $C^*$-algebras, it is fruitful to investigate how $C^*$-constructions involving $\mathcal O_X$ can be studied in terms of corresponding constructions involving $X$. For example, it has been understood for some time how actions of groups on $\mathcal O_X$ can be studied in terms of actions on $X$, see \cite{HN} for example. In this paper we show how coactions of a locally compact group $G$ on $\mathcal O_X$ can be studied in terms of suitable coactions of $G$ on $X$.
In order to say what ``suitable'' should mean, we appeal to \cite{KQRCorrespondenceFunctor}, where we showed that the passage from $X$ to $\mathcal O_X$ is functorial for certain categories. Specifically, the target category is $C^*$-algebras and nondegenerate homomorphisms into multiplier algebras, and the domain category is correspondences and \emph{Cuntz-Pimsner covariant homomorphisms} (defined in \cite{KQRCorrespondenceFunctor}). To see how this should be applied, note that a coaction of $G$ on $\mathcal O_X$ is a nondegenerate homomorphism $\zeta:\mathcal O_X\to M(\mathcal O_X\otimes C^*(G))$ satisfying appropriate conditions, and similarly a coaction of $G$ on $X$ (as defined in \cite{enchilada}) is a homomorphism $\sigma:X\to M(X\otimes C^*(G))$. In order to apply the techniques from \cite{KQRCorrespondenceFunctor}, we want $\zeta$ to be determined by $\sigma$. If we knew that $\mathcal O_X\otimes C^*(G)$ were equal to $\mathcal O_{X\otimes C^*(G)}$, the Cuntz-Pimsner algebra of the external-tensor-product correspondence, then the main result of \cite{KQRCorrespondenceFunctor} would tell us that we should require the correspondence homomorphism $\sigma$ to be \emph{Cuntz-Pimsner covariant} in the sense defined there. As it happens, due to the nonexactness of minimal $C^*$-tensor products, we need a slightly stronger version of Cuntz-Pimsner covariance, specifically suited for correspondence coactions. We work this out in an abstract setting toward the end of \secref{prelim}, then we use this to prove our main result concerning coactions on Cuntz-Pimsner algebras at the start of \secref{coactions}, after which we go on to develop a few tools dealing with inner coactions on correspondences.
In \secref{crossed products} we show how to recognize covariant representations of the coaction $\zeta$ on $\mathcal O_X$ using the coaction $\sigma$ on $X$. In \thmref{crossed} we show that under a mild technical condition the crossed product $\mathcal O_X\rtimes_\zeta G$ is isomorphic to the Cuntz-Pimsner algebra $\mathcal O_{X\rtimes_\sigma G}$ of the crossed-product correspondence. We list in \corref{implies CP} a couple of situations in which the technical condition is guaranteed to hold. We also show that, as in the $C^*$-case, the crossed product of $X$ by an inner coaction is isomorphic to the tensor product $X\otimes C_0(G)$, and that if $G$ is amenable and acts on $X$ then the dual coaction on the crossed product $X\rtimes G$ satisfies our stronger version of Cuntz-Pimsner covariance. We do not know whether the amenability hypothesis in the latter result is necessary; in any case, we apply it in \secref{apps} to recover a recent result of Hao and Ng \cite{HN}: they show that if $G$ acts on $X$ then $\mathcal O_X\rtimes G\cong \mathcal O_{X\rtimes G}$, and we give a substantially different proof using the techniques of the present paper.
\section{Preliminaries}\label{prelim}
We are mainly interested in correspondences over a single coefficient $C^*$-algebra, but occasionally we will find it convenient to allow the left and right coefficient $C^*$-algebras to be different. We denote an $A-B$ correspondence $X$ by $(A,X,B)$ and write $\varphi_A: A \to \mathcal L(X)$ for the left action of $A$ on $X$. If $A=B$ we denote the $A$-correspondence $X$ by $(X,A)$. All correspondences will be assumed \emph{nondegenerate} in the sense that $A\cdot X=X$.\footnote{Warning: in \cite{enchilada} the definition of ``right-Hilbert bimodule'' includes the nondegeneracy hypothesis.} We record here the notation and results that we will need.
The \emph{multiplier correspondence} of a correspondence $(A,X,B)$ is $M(X):=\mathcal L_B(B,X)$, which is an $M(A)-M(B)$ correspondence in a natural way. If $(A,X,B)$ and $(C,Y,D)$ are correspondences, a \emph{correspondence homomorphism} $(\pi,\psi,\rho):(A,X,B)\to (M(C),M(Y),M(D))$ comprises homomorphisms $\pi:A\to M(C)$ and $\rho:B\to M(D)$ and a linear map $\psi:X\to M(Y)$ preserving the correspondence operations. The homomorphism $(\pi,\psi,\rho)$ is \emph{nondegenerate} if $\clspn\{\psi(X)\cdot D\}=Y$ and both $\pi$ and $\rho$ are nondegenerate, and then there is a unique strictly continuous extension $(\overline\pi,\overline\psi,\overline\rho):(M(A),M(X),M(B))\to (M(C),M(Y),M(D))$, and also a unique nondegenerate homomorphism $\psi^{(1)}:\mathcal K(X)\to \mathcal L(Y)$ such that $(\psi^{(1)},\psi,\rho):(\mathcal K(X),X,B)\to (\mathcal L(Y),M(Y),M(D))$ is a nondegenerate correspondence homomorphism. The diagram \begin{equation}\label{nondegenerate commute} \xymatrix{ A \ar[r]^-\pi \ar[d]_{\varphi_A} &M(C) \ar[d]^{\overline{\varphi_C}} \\ \mathcal L(X) \ar[r]_-{\overline{\psi^{(1)}}} &\mathcal L(Y) } \end{equation} commutes, and $\psi^{(1)}$ is determined by $\psi^{(1)}(\theta_{\xi,\eta})=\psi(\xi)\psi(\eta)^*$.
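As a quick consistency check (a routine computation, not needed in the sequel), the last formula is compatible with the correspondence operations on rank-one operators: for $\xi,\eta,\zeta\in X$ we have \begin{align*} \psi^{(1)}(\theta_{\xi,\eta})\psi(\zeta) &=\psi(\xi)\psi(\eta)^*\psi(\zeta) =\psi(\xi)\cdot \langle\psi(\eta),\psi(\zeta)\rangle \\&=\psi(\xi)\cdot \rho(\langle\eta,\zeta\rangle) =\psi(\xi\cdot \langle\eta,\zeta\rangle) =\psi(\theta_{\xi,\eta}\zeta). \end{align*}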
If $A=B$, $C=D$, and $\pi=\rho$, we write $(\psi,\pi):(X,A)\to (M(Y),M(C))$.
We refer to \cite[Section~2]{KQRCorrespondenceFunctor} for an exposition of the properties of the ``relative multipliers'' from \cite[Appendix~A]{dkq}. Very briefly, if $(X,A)$ is a nondegenerate correspondence and $\kappa:C\to M(A)$ is a nondegenerate homomorphism, the \emph{$C$-multipliers} of $X$ are \[ M_C(X):=\{m\in M(X):\kappa(C)\cdot m\cup m\cdot \kappa(C)\subset X\}. \] The main purpose of relative multipliers is the following extension theorem \cite[Proposition~A.11]{dkq}: let $X$ and $Y$ be nondegenerate correspondences over $A$ and $B$, respectively, let $\kappa:C\to M(A)$ and $\sigma:D\to M(B)$ be nondegenerate homomorphisms, and let $(\psi,\pi):(X,A)\to (M_D(Y),M_D(B))$ be a correspondence homomorphism. If there is a nondegenerate homomorphism $\lambda:C\to M(\sigma(D))$ such that \[ \pi(\kappa(c)a)=\lambda(c)\pi(a)\midtext{for}c\in C,a\in A, \] then there is a unique $C$-strict to $D$-strictly continuous correspondence homomorphism $(\overline\psi,\overline\pi)$ making the diagram \[ \xymatrix{ (X,A) \ar[r]^-{(\psi,\pi)} \ar@{^(->}[d] &(M_D(Y),M_D(B)) \\ (M_C(X),M_C(A)) \ar@{-->}[ur]_{(\overline\psi,\overline\pi)}^{!} } \] commute.
We will also need to use the method of \cite{KQRCorrespondenceFunctor} to construct homomorphisms of Cuntz-Pimsner algebras from correspondence homomorphisms: a homomorphism $(\psi,\pi):(X,A)\to (M(Y),M(B))$ is \emph{Cuntz-Pimsner covariant} if \begin{enumerate} \item $\psi(X)\subset M_B(Y)$,
\item $\pi:A\to M(B)$ is nondegenerate,
\item $\pi(J_X)\subset \ideal{B}{J_Y}$, and
\item the diagram \begin{equation}\label{CP diagram} \xymatrix{
J_X \ar[r]^-{\pi|} \ar[d]_{\varphi_A|}
&\ideal{B}{J_Y} \ar[d]^{\overline{\varphi_B}\bigm|} \\ \mathcal K(X) \ar[r]_-{\cpct\psi} &M_B(\mathcal K(Y)) } \end{equation} commutes, \end{enumerate} where, for an ideal $I$ of a $C^*$-algebra $A$, we follow \cite{BaajSkandalis} by defining \[ \ideal{A}{I}=\{m\in M(A):mA\cup Am\subset I\}. \] By \cite[Corollary~3.6]{KQRCorrespondenceFunctor}, when $(\psi,\pi)$ is Cuntz-Pimsner covariant there is a unique homomorphism $\mathcal O_{\psi,\pi}$ making the diagram \[ \xymatrix{ X \ar[r]^-{(\psi,\pi)} \ar[d]_{k_X} &M_B(Y) \ar[d]^{\overline{k_Y}} \\ \mathcal O_X \ar[r]_-{\mathcal O_{\psi,\pi}} &M_B(\mathcal O_Y) } \] commute.
If $G$ is a locally compact group and $(X,A)$ is a correspondence we will write \begin{align*} M_{C^*(G)}(A\otimes C^*(G))&=M_{1\otimes C^*(G)}(A\otimes C^*(G))\\ M_{C^*(G)}(X\otimes C^*(G))&=M_{1\otimes C^*(G)}(X\otimes C^*(G)). \end{align*}
Recall that a \emph{coaction} of $G$ on a $C^*$-algebra $A$ is a nondegenerate injective homomorphism $\delta:A\to M(A\otimes C^*(G))$ satisfying the \emph{coaction identity} given by the commutative diagram \begin{equation}\label{coaction diagram} \xymatrix@C+30pt{ A \ar[r]^-\delta \ar[d]_\delta &M(A\otimes C^*(G)) \ar[d]^{\overline{\delta\otimes\text{\textup{id}}}} \\ M(A\otimes C^*(G)) \ar[r]_-{\overline{\text{\textup{id}}\otimes\delta_G}} &M(A\otimes C^*(G)\otimes C^*(G)), } \end{equation} and satisfying the \emph{coaction-nondegeneracy} condition \[ \clspn\{\delta(A)(1\otimes C^*(G))\}=A\otimes C^*(G). \]
\begin{rems} (1) Note that, as has become customary in recent years, we have built coaction-nondegeneracy into the definition of coaction, and of course it follows that $\delta(A)\subset M_{C^*(G)}(A\otimes C^*(G))$.
(2) The coaction identity requires $\delta$ to be nondegenerate as a homomorphism, so that it extends uniquely to multipliers.\footnote{However, if we know that $\delta(A)\subset M_{C^*(G)}(A\otimes C^*(G))$, then, even without knowing $\delta$ is nondegenerate, the coaction identity makes sense when the upper right and lower left corners of the commutative diagram \eqref{coaction diagram} are replaced by $M_{C^*(G)}(A\otimes C^*(G))$.}
(3) Coaction-nondegeneracy implies nondegeneracy as a homomorphism. However, an under-appreciated result of Katayama \cite[Lemma~4]{kat} implies that, assuming we know $\delta$ satisfies all the other coaction axioms except for coaction-nondegeneracy, the closed span of the products $\delta(A)(1\otimes C^*(G))$ is actually a $C^*$-subalgebra of $A\otimes C^*(G)$, and hence to show coaction-nondegeneracy it suffices to verify the seemingly weaker condition \begin{equation}\label{katayama} \text{$\delta(A)(1\otimes C^*(G))$ generates $A\otimes C^*(G)$ as a $C^*$-algebra.} \end{equation} \end{rems}
A nondegenerate homomorphism $\mu:C_0(G)\to M(A)$ implements an \emph{inner coaction} $\delta^\mu$ on $A$ via \[ \delta^\mu(a)=\ad\overline{\mu\otimes\text{\textup{id}}}(w_G)(a\otimes 1), \] where \[ w_G\in M(C_0(G)\otimes C^*(G))=C_b(G,M^\beta(C^*(G))) \] is the function given by the canonical embedding of $G$ into the unitary group of $M(C^*(G))$. The \emph{trivial coaction} $\delta^1=\text{\textup{id}}_A\otimes 1$ on $A$ is implemented by the homomorphism \[ f\mapsto f(e)1_{M(A)}\midtext{for}f\in C_0(G). \]
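For example (a routine verification, not used later), if $\mu(f)=f(e)1_{M(A)}$ then one checks on elementary tensors that $\overline{\mu\otimes\text{\textup{id}}}(m)=1_{M(A)}\otimes m(e)$ for $m\in M(C_0(G)\otimes C^*(G))=C_b(G,M^\beta(C^*(G)))$, so that \[ \overline{\mu\otimes\text{\textup{id}}}(w_G)=1_{M(A)}\otimes w_G(e)=1, \] and consequently $\delta^\mu(a)=\ad 1(a\otimes 1)=a\otimes 1=\delta^1(a)$.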
A coaction $(A,\delta)$ makes $A$ into a Banach module over the Fourier-Stieltjes algebra $B(G)=C^*(G)^*$ via \[ f\cdot a=S_f\circ\delta(a)\midtext{for}f\in B(G),a\in A, \] where $S_f:A\otimes C^*(G)\to A$ is the slice map, which we sometimes alternatively denote by $\text{\textup{id}}\otimes f$. Frequently we restrict the module action to the Fourier algebra $A(G)$, which is dense in $C_0(G)$.
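The module property is a routine calculation, which we sketch for convenience: since slicing the $C^*(G)$-variable commutes with $\overline{\delta\otimes\text{\textup{id}}}$, for $f,g\in B(G)$ and $a\in A$ we have \[ f\cdot(g\cdot a) =(\text{\textup{id}}\otimes f\otimes g)\circ\overline{\delta\otimes\text{\textup{id}}}\circ\delta(a) =(\text{\textup{id}}\otimes f\otimes g)\circ\overline{\text{\textup{id}}\otimes\delta_G}\circ\delta(a) =(fg)\cdot a, \] using the coaction identity and the fact that multiplication in $B(G)$ is dual to $\delta_G$.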
The \emph{Kronecker product} (see, e.g., \cite[Th\'eor\`eme~1.5]{DeCanniereEnockSchwartz}, \cite[p. 118]{KirchbergHopf}, \cite[Definition~A.2]{NakagamiTakesaki}, \cite[Definition~6.6]{QuiggDualityTwisted}) of two nondegenerate homomorphisms $\mu$ and $\nu$ of $C_0(G)$ in $M(A)$ and $M(B)$, respectively, is defined by \[ \mu\times\nu:=\overline{\mu\otimes\nu}\circ\alpha, \] where $\alpha:C_0(G)\to C_b(G\times G)=M(C_0(G)\otimes C_0(G))$ is given by \[ \alpha(f)(s,t)=f(st). \] Letting \[ u=\overline{\mu\otimes\text{\textup{id}}}(w_G)\midtext{and}v=\overline{\nu\otimes\text{\textup{id}}}(w_G), \] we have \[ \overline{(\mu\times\nu)\otimes\text{\textup{id}}}(w_G)=u_{13}v_{23}. \]
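For the reader's convenience, here is the routine computation behind the last displayed identity: since $w_G(s)=s$ and the canonical embedding of $G$ into $M(C^*(G))$ is multiplicative, \[ \overline{\alpha\otimes\text{\textup{id}}}(w_G)(s,t)=w_G(st)=w_G(s)w_G(t), \] that is, $\overline{\alpha\otimes\text{\textup{id}}}(w_G)=(w_G)_{13}(w_G)_{23}$ in $M(C_0(G)\otimes C_0(G)\otimes C^*(G))$, and applying $\overline{\mu\otimes\nu\otimes\text{\textup{id}}}$ gives $u_{13}v_{23}$.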
A \emph{covariant homomorphism} of a coaction $(A,\delta)$ is a pair $(\pi,\mu):(A,C_0(G))\to M(B)$ comprising nondegenerate homomorphisms $\pi:A\to M(B)$ and $\mu:C_0(G)\to M(B)$ such that \[ \overline{\pi\otimes\text{\textup{id}}}\circ\delta(a)=\ad\overline{\mu\otimes\text{\textup{id}}}(w_G)(\pi(a)\otimes 1). \] A \emph{crossed product} of $(A,\delta)$ is a triple $(A\rtimes_\delta G,j_A,j_G)$ consisting of a covariant homomorphism $(j_A,j_G):(A,C_0(G))\to M(A\rtimes_\delta G)$ that is \emph{universal} in the sense that for every covariant homomorphism $(\pi,\mu):(A,C_0(G))\to M(B)$ there is a unique nondegenerate homomorphism $\pi\times\mu:A\rtimes_\delta G\to M(B)$ making the diagram \[ \xymatrix{ A \ar[r]^-{j_A} \ar[dr]_\pi &M(A\rtimes_\delta G) \ar@{-->}[d]^{\pi\times\mu}_{!} &C_0(G) \ar[l]_-{j_G} \ar[dl]^\mu \\ &M(B) } \] commute. It follows that $A\rtimes_\delta G=\clspn\{j_A(A)j_G(C_0(G))\}$. The crossed product is unique up to isomorphism, and one construction is given by the \emph{regular representation} \[ \bigl((\text{\textup{id}}\otimes\lambda)\circ\delta,1\otimes M\bigr): (A,C_0(G))\to M(A\otimes \mathcal K(L^2(G))), \] where $\lambda$ is the left regular representation of $G$ and $M:C_0(G)\to B(L^2(G))$ is the multiplication representation.
For correspondence coactions, we follow \cite{enchilada}, but again build in coaction-nondegeneracy:
\begin{defn} A \emph{coaction} of $G$ on a correspondence $(A,X,B)$ is a nondegenerate correspondence homomorphism \[ (\delta,\sigma,\varepsilon):(A,X,B)\to \bigl(M(A\otimes C^*(G)),M(X\otimes C^*(G)),M(B\otimes C^*(G))\bigr) \] such that: \begin{enumerate} \item $\delta$ and $\varepsilon$ are coactions on $A$ and $B$, respectively;
\item $\sigma$ satisfies the coaction identity given by the commutative diagram \[ \xymatrix@C+30pt{ X \ar[r]^-\sigma \ar[d]_\sigma &M(X\otimes C^*(G)) \ar[d]^{\overline{\sigma\otimes\text{\textup{id}}}} \\ M(X\otimes C^*(G)) \ar[r]_-{\overline{\text{\textup{id}}\otimes\delta_G}} &M(X\otimes C^*(G)\otimes C^*(G)); } \]
\item $\sigma$ satisfies the coaction-nondegeneracy condition \[ \clspn\{(1\otimes C^*(G))\cdot \sigma(X)\}=\clspn\{\sigma(X)\cdot (1\otimes C^*(G))\}=X\otimes C^*(G). \] \end{enumerate} We also say that $\sigma$ is \emph{$\delta-\varepsilon$ compatible}. \end{defn}
\begin{rems} (1) Remarks similar to those following the definition of $C^*$-coaction apply to correspondence coactions. For example, coaction-nondegeneracy implies that $\sigma(X)\subset M_{C^*(G)}(X\otimes C^*(G))$ and $\sigma$ is nondegenerate as a correspondence homomorphism. In fact, it implies a stronger form of nondegeneracy, namely that, in addition to $\clspn\{\sigma(X)\cdot (B\otimes C^*(G))\}=X\otimes C^*(G)$, we also have the symmetric property on the other side: \[ \clspn\{(A\otimes C^*(G))\cdot \sigma(X)\}=X\otimes C^*(G). \]
(2) On the other hand, nondegeneracy of $\sigma$ as a correspondence homomorphism implies one half of the coaction-nondegeneracy, namely $\clspn\{\sigma(X)\cdot (1\otimes C^*(G))\}=X\otimes C^*(G)$, by coaction-nondegeneracy of $\varepsilon$.
(3) $\sigma$ will be isometric since $\varepsilon$ is injective. \end{rems}
Frequently we will have $A=B$ and $\delta=\varepsilon$, in which case we say that $(\sigma,\delta)$ is a coaction on $(X,A)$; of course the case $X=A=B$ and $\sigma=\delta=\varepsilon$ reduces to a $C^*$-coaction. Being particularly nice correspondence homomorphisms, coactions on $C^*$-correspondences are easily shown to be Cuntz-Pimsner covariant:
\begin{lem}\label{coaction CP} A coaction $(\sigma,\delta)$ of $G$ on a correspondence $(X,A)$ is Cuntz-Pimsner covariant as a correspondence homomorphism if and only if \[ \delta(J_X)\subset M\bigl(A\otimes C^*(G);J_{X\otimes C^*(G)}\bigr). \] \end{lem}
\begin{proof} By definition of correspondence coaction, the correspondence homomorphism $(\sigma,\delta):(X,A)\to (M(X\otimes C^*(G)),M(A\otimes C^*(G)))$ is nondegenerate, and the inclusion $\sigma(X)\subset M_{C^*(G)}(X\otimes C^*(G))$ trivially implies that $\sigma(X)\subset M_{A\otimes C^*(G)}(X\otimes C^*(G))$. Combining with \cite[Lemma 3.2]{KQRCorrespondenceFunctor} gives the result. \end{proof}
However, as a consequence of the nonexactness of minimal $C^*$-tensor products, we will need a variation on \lemref{coaction CP}, and we state it in abstract form, not involving coactions:
\begin{lem}\label{tensor} Let $(X,A)$ be a correspondence, let $C$ be a $C^*$-algebra, and let $(\psi,\pi):(X,A)\to (M_C(X\otimes C),M_C(A\otimes C))$ be a nondegenerate correspondence homomorphism. If \[ \pi(J_X)\subset M(A\otimes C;J_X\otimes C), \] then the composition \[ \Bigl(\overline{k_X\otimes\text{\textup{id}}}\circ\psi,\overline{k_A\otimes\text{\textup{id}}}\circ\pi\Bigr): (X,A)\to M(\mathcal O_X\otimes C) \] is Cuntz-Pimsner covariant. \end{lem}
\begin{proof} By checking on elementary tensors one verifies that, on the ideal $J_X\otimes C$ of $A\otimes C$, we have \begin{align*} (k_X\otimes\text{\textup{id}})^{(1)}\circ \varphi_{A\otimes C} &=(k_X^{(1)}\otimes\text{\textup{id}})\circ (\varphi_A\otimes\text{\textup{id}}) \\&=k_X^{(1)}\circ \varphi_A\otimes \text{\textup{id}} \\&=k_A\otimes\text{\textup{id}}, \end{align*} and so, by strict continuity, on $M(A\otimes C;J_X\otimes C)$ we have \[ \overline{(k_X\otimes\text{\textup{id}})^{(1)}}\circ \overline{\varphi_{A\otimes C}}=\overline{k_A\otimes\text{\textup{id}}}. \] Thus, on $J_X$ we have \begin{align*} \Bigl(\overline{k_X\otimes\text{\textup{id}}}\circ\psi\Bigr)^{(1)}\circ\varphi_A &=\overline{(k_X\otimes\text{\textup{id}})^{(1)}}\circ\psi^{(1)}\circ\varphi_A \\&=\overline{(k_X\otimes\text{\textup{id}})^{(1)}}\circ\overline{\varphi_{A\otimes C}}\circ\pi, \\\intertext{by \cite[Lemma~3.3]{KQRCorrespondenceFunctor}, since $(\psi,\pi)$ is nondegenerate,} &=\overline{k_A\otimes\text{\textup{id}}}\circ\pi, \end{align*} which is Cuntz-Pimsner covariance. \end{proof}
Here is the connection between Lemmas~\ref{coaction CP} and \ref{tensor}:
\begin{lem}\label{contained} Let $(X,A)$ be a correspondence, let $C$ be a $C^*$-algebra, and let $(X\otimes C,A\otimes C)$ be the external-tensor-product correspondence. Then \[ J_X\otimes C\subset J_{X\otimes C}, \] with equality if $C$ is exact. \end{lem}
\begin{proof} We use the characterization \cite[Paragraph following Definition~2.3]{KatsuraCorrespondence} of $J_X$ as the largest ideal of $A$ that $\varphi_A$ maps injectively into $\mathcal K(X)$, and similarly for $J_{X\otimes C}$. By \cite[Corollary~3.38]{tfb}, for example, we have \[ \mathcal K(X\otimes C)=\mathcal K(X)\otimes C, \] so \[ \varphi_{A\otimes C}=\varphi_A\otimes \text{\textup{id}}_C. \] Since $\varphi_A$ maps $J_X$ injectively into $\mathcal K(X)$, $\varphi_A\otimes\text{\textup{id}}$ maps $J_X\otimes C$ injectively into $\mathcal K(X)\otimes C$. Therefore $\varphi_{A\otimes C}$ maps $J_X\otimes C$ injectively into $\mathcal K(X\otimes C)$, so $J_X\otimes C\subset J_{X\otimes C}$.
Now assume that $C$ is exact, and let $x\in J_{X\otimes C}$. Since $C$ is exact, it has the slice map property, so to show that $x\in J_X\otimes C$ it suffices to show that $(\text{\textup{id}}\otimes\omega)(x)\in J_X$ for all $\omega\in C^*$. To verify the first property of $J_X$, we have \begin{align*} \varphi_A\bigl((\text{\textup{id}}\otimes\omega)(x)\bigr) &=(\text{\textup{id}}\otimes\omega)\circ (\varphi_A\otimes\text{\textup{id}})(x), \end{align*} which is in $\mathcal K(X)$ because \[ (\varphi_A\otimes\text{\textup{id}})(x) =\varphi_{A\otimes C}(x) \in \mathcal K(X\otimes C) =\mathcal K(X)\otimes C. \] For the other property of $J_X$, let $a\in \ker\varphi_A$. Factor $\omega=c\cdot \omega'$ with $c\in C$ and $\omega'\in C^*$. Then \begin{align*} \bigl((\text{\textup{id}}\otimes\omega)(x)\bigr)a &=(\text{\textup{id}}\otimes c\cdot \omega')\bigl(x(a\otimes 1)\bigr) \\&=(\text{\textup{id}}\otimes\omega')\bigl(x(a\otimes c)\bigr), \end{align*} which is $0$ because \[ a\otimes c\in \ker\varphi_A\otimes C=\ker\varphi_{A\otimes C}. \qedhere \] \end{proof}
Recall from \cite[Proposition~3.9]{enchilada} that if $(\delta,\sigma,\varepsilon)$ is a coaction of $G$ on a correspondence $(A,X,B)$, then the \emph{crossed product} correspondence $(A\rtimes_\delta G,X\rtimes_\sigma G,B\rtimes_\varepsilon G)$ is defined by \[ X\rtimes_\sigma G=\clspn\{j_X(X)\cdot j_G^B(C_0(G))\} \subset M(X\otimes \mathcal K(L^2(G))), \] where \begin{align*} j_X&=(\text{\textup{id}}\otimes\lambda)\circ\sigma\\ j_G^B&=1_{M(B)}\otimes M. \end{align*} $X\rtimes_\sigma G$ is an $A\rtimes_\delta G-B\rtimes_\varepsilon G$ correspondence in a natural way when we use the regular representations \begin{align*} (j_A,j^A_G):(A,C_0(G))&\to M(A\otimes \mathcal K(L^2(G)))\\ (j_B,j^B_G):(B,C_0(G))&\to M(B\otimes \mathcal K(L^2(G))). \end{align*}
\cite[Lemma~3.10]{enchilada} proves that there is a coaction $\mu$ of $G$ on $\mathcal K(X)$ such that \begin{itemize} \item $\varphi_A:A\to M(\mathcal K(X))$ is $\delta-\mu$ equivariant;
\item there is an isomorphism $\mathcal K(X\rtimes_\sigma G)\cong \mathcal K(X)\rtimes_\mu G$ that carries $\varphi_{A\rtimes_\delta G}$ to $\varphi_A\rtimes G$. \end{itemize} In fact, an examination of the construction used in \cite{enchilada} reveals that the coaction on $\mathcal K(X)$ is none other than \[ \sigma^{(1)}:\mathcal K(X)\to M_{C^*(G)}(\mathcal K(X)\otimes C^*(G)) =M_{C^*(G)}\bigl(\mathcal K(X\otimes C^*(G))\bigr), \] so that the left-module action of $A\rtimes_\delta G$ on $X\rtimes_\sigma G$ can be regarded as \[ \varphi_A\rtimes G:A\rtimes_\delta G\to M\bigl(\mathcal K(X)\rtimes_{\sigma^{(1)}} G\bigr). \]
\begin{rem}\label{j nondegenerate} Note that \[ (j_A,j_X,j_B):(A,X,B)\to \bigl(M(A\rtimes_\delta G),M(X\rtimes_\sigma G),M(B\rtimes_\varepsilon G)\bigr) \] is a correspondence homomorphism. In fact, it is a bit more: since $j_A$ and $j_B$ are nondegenerate by the standard theory of $C^*$-coactions, it follows from \cite[Lemma~3.10]{enchilada} that the correspondence homomorphism $(j_A,j_X,j_B)$ is nondegenerate. \end{rem}
\begin{lem}\label{j CP} Let $(\sigma,\delta)$ be a coaction of $G$ on a correspondence $(X,A)$. Then the canonical correspondence homomorphism $(j_X,j_A):(X,A)\to (M(X\rtimes_\sigma G),M(A\rtimes_\delta G))$ is Cuntz-Pimsner covariant if and only if \[ j_A(J_X)\subset M(A\rtimes_\delta G;J_{X\rtimes_\sigma G}). \] \end{lem}
\begin{proof} By \remref{j nondegenerate} and \cite[Lemma 3.2]{KQRCorrespondenceFunctor}, it suffices to observe that \[ j_X(X)\subset M_{A\rtimes_\delta G}(X\rtimes_\sigma G). \qedhere \] \end{proof}
Although the following concept does not appear in \cite{enchilada}, we will find it useful:
\begin{defn} Let $(\delta,\sigma,\varepsilon)$ be a coaction of $G$ on a correspondence $(A,X,B)$, let $(\pi,\psi,\rho):(A,X,B)\to (M(D),M(Y),M(E))$ be a correspondence homomorphism, and let $\mu:C_0(G)\to M(D)$ and $\nu:C_0(G)\to M(E)$ be homomorphisms. Then $(\pi,\psi,\rho,\mu,\nu)$ is \emph{covariant for $(\delta,\sigma,\varepsilon)$} if \begin{enumerate} \item $(\pi,\mu)$ and $(\rho,\nu)$ are covariant for $(A,\delta)$ and $(B,\varepsilon)$, respectively;
\item for all $\xi\in X$ we have \[ \overline{\psi\otimes\text{\textup{id}}}\circ\sigma(\xi)= \overline{\mu\otimes\text{\textup{id}}}(w_G)\cdot \bigl(\psi(\xi)\otimes 1\bigr)\cdot \overline{\nu\otimes\text{\textup{id}}}(w_G)^*. \] \end{enumerate} \end{defn}
\begin{rem} Note that covariance of $(\pi,\mu)$ and $(\rho,\nu)$ entails that $\pi,\mu,\rho,\nu$ are all nondegenerate. \end{rem}
If $A=B$, $\delta=\varepsilon$, $\pi=\rho$, and $\mu=\nu$, we say $(\psi,\pi,\mu)$ is covariant for $(\sigma,\delta)$.
\section{Coactions on Cuntz-Pimsner algebras}\label{coactions}
\begin{prop}\label{coaction} Let $(\sigma,\delta)$ be a coaction of $G$ on a correspondence $(X,A)$. If \[ \delta(J_X)\subset M(A\otimes C^*(G);J_X\otimes C^*(G)), \] then there is a unique coaction $\zeta$ of $G$ on $\mathcal O_X$ making the diagram \[ \xymatrix@C+30pt{ (X,A) \ar[r]^-{(\sigma,\delta)} \ar[d]_{(k_X,k_A)} &(M_{C^*(G)}(X\otimes C^*(G)),M_{C^*(G)}(A\otimes C^*(G))) \ar[d]^{(\overline{k_X\otimes\text{\textup{id}}},\overline{k_A\otimes\text{\textup{id}}})} \\ \mathcal O_X \ar@{-->}[r]_-\zeta^-{!} &M_{C^*(G)}(\mathcal O_X\otimes C^*(G)) } \] commute. \end{prop}
\begin{proof} By definition of correspondence coaction, the correspondence homomorphism $(\sigma,\delta)$ is nondegenerate, and so, by \lemref{tensor}, our hypothesis guarantees that the composition \[ \Bigl(\overline{k_X\otimes\text{\textup{id}}}\circ\sigma,\overline{k_A\otimes\text{\textup{id}}}\circ\delta\Bigr) \] is Cuntz-Pimsner covariant.
Thus there is a unique homomorphism $\zeta$ making the diagram commute, and moreover $\zeta$ is injective because $\delta$ is.
For the coaction identity, we have \begin{align*} \overline{\zeta\otimes\text{\textup{id}}}\circ\zeta\circ k_X &=\overline{\zeta\otimes\text{\textup{id}}}\circ\zeta_X \\&=\overline{\zeta\otimes\text{\textup{id}}}\circ\overline{k_X\otimes\text{\textup{id}}}\circ\sigma \\&=\overline{\zeta\circ k_X\otimes\text{\textup{id}}}\circ\sigma \\&=\overline{\overline{k_X\otimes\text{\textup{id}}}\circ\sigma\otimes\text{\textup{id}}}\circ\sigma \\&=\overline{k_X\otimes\text{\textup{id}}\otimes\text{\textup{id}}}\circ\overline{\sigma\otimes\text{\textup{id}}}\circ\sigma \\&=\overline{k_X\otimes\text{\textup{id}}\otimes\text{\textup{id}}}\circ\overline{\text{\textup{id}}\otimes\delta_G}\circ\sigma \\&=\overline{\text{\textup{id}}\otimes\delta_G}\circ\overline{k_X\otimes\text{\textup{id}}}\circ\sigma \\&=\overline{\text{\textup{id}}\otimes\delta_G}\circ\zeta\circ k_X, \end{align*} and similarly \[ \overline{\zeta\otimes\text{\textup{id}}}\circ\zeta\circ k_A=\overline{\text{\textup{id}}\otimes\delta_G}\circ\zeta\circ k_A, \] and it follows that \[ \overline{\zeta\otimes\text{\textup{id}}}\circ\zeta=\overline{\text{\textup{id}}\otimes\delta_G}\circ\zeta. \]
For the coaction-nondegeneracy, routine computations show that \[ \clspn\bigl\{\zeta_X(X)(1\otimes C^*(G))\bigr\} =k_X(X)\otimes C^*(G), \] and of course \[ \clspn\bigl\{\zeta_A(A)(1\otimes C^*(G))\bigr\} =k_A(A)\otimes C^*(G), \] and hence the property \eqref{katayama} holds. \end{proof}
We now develop a few tools involving inner coactions on correspondences, for use elsewhere.
\begin{prop}\label{inner} Let $X$ be an $A-B$ correspondence, and let $\mu:C_0(G)\to M(A)$ and $\nu:C_0(G)\to M(B)$ be nondegenerate homomorphisms, and let $\delta^\mu$ and $\delta^\nu$ be the associated inner coactions on $A$ and $B$. Then there is a $\delta^\mu-\delta^\nu$ compatible coaction $\sigma$ on $X$ given by \[ \sigma(\xi)=\overline{\mu\otimes\text{\textup{id}}}(w_G)\cdot (\xi\otimes 1)\cdot \overline{\nu\otimes\text{\textup{id}}}(w_G)^*. \] \end{prop}
\begin{proof} Write \[ u=\overline{\mu\otimes\text{\textup{id}}}(w_G)\midtext{and}v=\overline{\nu\otimes\text{\textup{id}}}(w_G). \] Then $u\in M(A\otimes C^*(G))$, $v\in M(B\otimes C^*(G))$, and \[ X\otimes 1\subset M(X\otimes C^*(G)), \] so certainly $\sigma$ maps into $M(X\otimes C^*(G))$.
To see that $(\delta,\sigma,\varepsilon)$ is a correspondence homomorphism, where we write $\delta=\delta^\mu$ and $\varepsilon=\delta^\nu$, we compute, for $a\in A$ and $\xi,\eta\in X$: \begin{align*} \sigma(a\cdot \xi) &=u\cdot (a\cdot \xi\otimes 1)\cdot v^* \\&=u\cdot \bigl((a\otimes 1)\cdot (\xi\otimes 1)\bigr)\cdot v^* \\&=u(a\otimes 1)u^*u\cdot (\xi\otimes 1)\cdot v^* \\&=\delta(a)\cdot \sigma(\xi), \end{align*} and \begin{align*} \langle\sigma(\xi),\sigma(\eta)\rangle &=\langle u\cdot (\xi\otimes 1)\cdot v^*,u\cdot (\eta\otimes 1)\cdot v^*\rangle \\&=\langle(\xi\otimes 1)\cdot v^*,(\eta\otimes 1)\cdot v^*\rangle \\&\hspace{.5in}\text{(because $u$ is unitary)} \\&=v\langle\xi\otimes 1,\eta\otimes 1\rangle v^* \\&=v\bigl(\langle\xi,\eta\rangle\otimes 1\bigr)v^* \\&=\varepsilon(\langle\xi,\eta\rangle). \end{align*}
We show coaction-nondegeneracy: \begin{align*} &\clspn\{(1\otimes C^*(G))\cdot \sigma(X)\} \\&\quad=\clspn\{(1\otimes C^*(G))u\cdot (X\otimes 1)\cdot v^*\} \\&\quad=\clspn\{(1\otimes C^*(G))u\cdot (\mu(C_0(G))\cdot X\otimes 1)\cdot v^*\} \\&\quad=\clspn\{(1\otimes C^*(G))u(\mu(C_0(G))\otimes 1)\cdot (X\otimes 1)\cdot v^*\} \\&\quad=\clspn\{(1\otimes C^*(G))(\mu(C_0(G))\otimes 1)u\cdot (X\otimes 1)\cdot v^*\} \\&\hspace{1in}\text{(because $u\in M(\mu(C_0(G))\otimes C^*(G))$)} \\&\quad=\clspn\{(\mu(C_0(G))\otimes C^*(G))u\cdot (X\otimes 1)\cdot v^*\} \\&\quad=\clspn\{(\mu(C_0(G))\otimes C^*(G))\cdot (X\otimes 1)\cdot v^*\} \\&\hspace{1in}\text{(because $u$ is a unitary multiplier)} \\&\quad=(X\otimes C^*(G))\cdot v^* \\&\quad=(X\otimes C^*(G)), \end{align*} because $v$ is unitary, and similarly \[ \clspn\{\sigma(X)\cdot (1\otimes C^*(G))\}=X\otimes C^*(G). \] This also implies that $\sigma$ is nondegenerate as a correspondence homomorphism.
For the coaction identity, we have \begin{align*} \overline{\sigma\otimes\text{\textup{id}}}\circ\sigma(\xi) &=u_{12}\cdot \sigma(\xi)_{13}\cdot v_{12}^* \\&=u_{12}u_{13}\cdot (\xi\otimes 1\otimes 1)\cdot v_{13}^*v_{12}^* \\&=\overline{\text{\textup{id}}\otimes\delta_G}(u)\cdot \overline{\text{\textup{id}}\otimes\delta_G}(\xi\otimes 1)\cdot \overline{\text{\textup{id}}\otimes\delta_G}(v)^* \\&=\overline{\text{\textup{id}}\otimes\delta_G}\circ\sigma(\xi), \end{align*} where the third equality expresses the fact that $u$ and $v$ are ``corepresentations'' of $C_0(G)$, and where the first equality follows from linearity, density, strict continuity, and the following computation with an elementary tensor $\eta\otimes c\in X\odot C^*(G)$: \begin{align*} \overline{\sigma\otimes\text{\textup{id}}}(\eta\otimes c) &=\sigma(\eta)\otimes c \\&=u\cdot (\eta\otimes 1)\cdot v^*\otimes c \\&=(u\otimes 1)\cdot (\eta\otimes 1\otimes c)\cdot (v\otimes 1)^* \\&=u_{12}\cdot (\eta\otimes c)_{13}\cdot v_{12}^*. \qedhere \end{align*} \end{proof}
\begin{defn} In the situation of \propref{inner}, we call the coaction $\sigma$ on $X$ \emph{inner}, and say that it is \emph{implemented} by the pair $(\mu,\nu)$. \end{defn}
\begin{cor}\label{unitary} Let $(X,A)$ be a correspondence, let $\delta$ be a coaction of $G$ on $A$, and let $\mu:C_0(G)\to \mathcal L(X)$ be a nondegenerate representation such that the pair $(\varphi_A,\mu)$ is a covariant representation of the coaction $(A,\delta)$. Define a unitary \[ u=\overline{\mu\otimes\text{\textup{id}}}(w_G)\in \mathcal L(X\otimes C^*(G)). \] Then there is a $\delta-\delta^1$ compatible coaction $\sigma$ on $X$ given by \[ \sigma(\xi)=u\cdot (\xi\otimes 1). \] \end{cor}
\begin{proof} Temporarily regard $X$ as a $\mathcal K(X)-A$ correspondence. Letting $\delta^\mu$ be the inner coaction on $\mathcal K(X)$ implemented by $\mu$, by \propref{inner} the formula for $\sigma$ defines a $\delta^\mu-\delta^1$ compatible coaction on $X$. Since $(\varphi_A,\mu)$ is covariant for $(A,\delta)$, it follows that $\sigma$ is also $\delta-\delta^1$ compatible. \end{proof}
\begin{cor}\label{commute} Let $(X,A)$ be a correspondence, and let $\mu:C_0(G)\to\mathcal L(X)$ be a nondegenerate representation commuting with $\varphi_A$. Then there is a coaction $\zeta$ of $G$ on $\mathcal O_X$ such that for $\xi\in X$ and $a\in A$ we have \begin{align*} \zeta\circ k_X(\xi)&=\overline{k_X\otimes\text{\textup{id}}}\Bigl(\overline{\mu\otimes\text{\textup{id}}}(w_G)\cdot (\xi\otimes 1)\Bigr)\\ \zeta\circ k_A(a)&=k_A(a)\otimes 1. \end{align*} \end{cor}
\begin{proof} Since $\mu$ commutes with $\varphi_A$, the hypotheses of \corref{unitary} are satisfied when $\delta$ is taken to be the trivial coaction $\delta^1$, and we let $\sigma$ be the resulting $\delta^1-\delta^1$ compatible coaction on $X$. Then \propref{coaction} gives a suitable coaction $\zeta$ of $G$ on $\mathcal O_X$, because the trivial coaction $\delta^1$ maps $J_X$ into \[ J_X\otimes 1\subset M(A\otimes C^*(G);J_X\otimes C^*(G)). \qedhere \] \end{proof}
\section{Crossed products}\label{crossed products}
\begin{lem}\label{covariant} Let $(\sigma,\delta)$ be a coaction of $G$ on a correspondence $(X,A)$ such that $\delta(J_X)\subset M(A\otimes C^*(G);J_X\otimes C^*(G))$, and let $(\psi,\pi,\mu):(X,A,C_0(G))\to M(B)$ be a $(\sigma,\delta)$-covariant homomorphism, with $(\psi,\pi)$ Cuntz-Pimsner covariant. Then the pair \[ (\psi\times\pi,\mu):(\mathcal O_X,C_0(G))\to M(B) \] is covariant for the associated coaction $\zeta$ of $G$ on $\mathcal O_X$. \end{lem}
\begin{proof} $\pi$ and $\mu$ are nondegenerate, hence so is $\psi\times\pi$. Let $u=\overline{\mu\otimes\text{\textup{id}}}(w_G)$. We must show that for $x\in \mathcal O_X$ we have \[ \overline{(\psi\times\pi)\otimes\text{\textup{id}}}\circ\zeta(x) =\ad u\bigl((\psi\times\pi)(x)\otimes 1\bigr), \] and it suffices to show this on generators $k_X(\xi)$ and $k_A(a)$ for $\xi\in X$ and $a\in A$. For the first, we have \begin{align*} \overline{(\psi\times\pi)\otimes\text{\textup{id}}}\circ\zeta\circ k_X(\xi) &=\overline{(\psi\times\pi)\otimes\text{\textup{id}}}\circ\overline{k_X\otimes\text{\textup{id}}}\circ\sigma(\xi) \\&=\overline{\psi\otimes\text{\textup{id}}}\circ\sigma(\xi) \\&=u \bigl(\psi(\xi)\otimes 1\bigr)u^* \\&=\ad u\bigl((\psi\times\pi)\circ k_X(\xi)\otimes 1\bigr), \end{align*} and for the second, \begin{align*} \overline{(\psi\times\pi)\otimes\text{\textup{id}}}\circ\zeta\circ k_A(a) &=\overline{(\psi\times\pi)\otimes\text{\textup{id}}}\circ\overline{k_A\otimes\text{\textup{id}}}\circ\delta(a) \\&=\overline{\pi\otimes\text{\textup{id}}}\circ\delta(a) \\&=\ad u\bigl(\pi(a)\otimes 1\bigr) \\&=\ad u\bigl((\psi\times\pi)\circ k_A(a)\otimes 1\bigr). \qedhere \end{align*} \end{proof}
\begin{lem}\label{composition} Let $(\sigma,\delta)$ be a coaction of $G$ on a correspondence $(X,A)$, let $(\psi,\pi,\mu):(X,A,C_0(G))\to (M_B(Y),M(B))$ be a $(\sigma,\delta)$-covariant correspondence homomorphism, and let $(\rho,\tau):(Y,B)\to (M_D(Z),M(D))$ be a correspondence homomorphism with $\tau$ nondegenerate. Then the composition \[ \bigl(\overline\rho\circ\psi,\overline\tau\circ\pi,\overline\tau\circ\mu\bigr):(X,A,C_0(G))\to (M_D(Z),M(D)) \] is covariant for $(\sigma,\delta)$. \end{lem}
\begin{proof} First of all, since $\pi$, $\mu$, and $\tau$ are nondegenerate, $\overline\tau\circ\pi$ is also nondegenerate, and $(\overline\tau\circ\pi,\overline\tau\circ\mu)$ is covariant for $(A,\delta)$ by the standard theory of $C^*$-coactions.
Routine calculations show that \[ (\overline\rho\circ\psi,\overline\tau\circ\pi):(X,A)\to (M(Z),M(D)) \] is a correspondence homomorphism. Also, since $\psi$ and $\rho$ map into $M_B(Y)$ and $M_D(Z)$, respectively, it is easy to see that $\overline\rho\circ\psi$ maps $X$ into $M_D(Z)$.
Letting $u=\overline{\overline\tau\circ\mu\otimes\text{\textup{id}}}(w_G)$, the following calculation completes the proof: for $\xi\in X$ we have \begin{align*} \overline{(\overline\rho\circ\psi)\otimes\text{\textup{id}}}\circ\sigma(\xi) &=\overline{\rho\otimes\text{\textup{id}}}\circ\overline{\psi\otimes\text{\textup{id}}}\circ\sigma(\xi) \\&=\overline{\rho\otimes\text{\textup{id}}}\Bigl(\overline{\mu\otimes\text{\textup{id}}}(w_G)\cdot \bigl(\psi(\xi)\otimes 1\bigr)\cdot\overline{\mu\otimes\text{\textup{id}}}(w_G)^*\Bigr) \\&=u\cdot \bigl(\overline\rho\circ\psi(\xi)\otimes 1\bigr)\cdot u^*. \qedhere \end{align*} \end{proof}
\begin{cor}\label{covariant CP} Let $(\sigma,\delta)$ be a coaction of $G$ on a correspondence $(X,A)$ such that $\delta(J_X)\subset M(A\otimes C^*(G);J_X\otimes C^*(G))$, and let $(\psi,\pi,\mu):(X,A,C_0(G))\to (M_B(Y),M(B))$ be a $(\sigma,\delta)$-covariant homomorphism, with $(\psi,\pi)$ Cuntz-Pimsner covariant. Then the pair \[ (\mathcal O_{\psi,\pi},\overline{k_B}\circ\mu):(\mathcal O_X,C_0(G))\to M(\mathcal O_Y) \] is covariant for the associated coaction $\zeta$. \end{cor}
\begin{proof} Applying \lemref{composition} to the Toeplitz representation $(k_Y,k_B):(Y,B)\to \mathcal O_Y$, we see that \[ (\overline{k_Y}\circ\psi,\overline{k_B}\circ\pi,\overline{k_B}\circ\mu):(X,A,C_0(G))\to M(\mathcal O_Y) \] is covariant for $(\sigma,\delta)$.
By \cite[Theorem~3.5]{KQRCorrespondenceFunctor} the composition $(\overline{k_Y}\circ\psi,\overline{k_B}\circ\pi)$ is a Cuntz-Pimsner-covariant Toeplitz representation of $(X,A)$ in $M(\mathcal O_Y)$. Then, since $(\psi,\pi)$ is Cuntz-Pimsner covariant, \lemref{covariant} with $B=\mathcal O_Y$ tells us that \[ \bigl((\overline{k_Y}\circ\psi)\times (\overline{k_B}\circ\pi),\overline{k_B}\circ\mu\bigr): (\mathcal O_X,C_0(G))\to M(\mathcal O_Y) \] is $\zeta$-covariant. But by construction (see \cite[Corollary~3.6]{KQRCorrespondenceFunctor}) we have \[ (\overline{k_Y}\circ\psi)\times (\overline{k_B}\circ\pi)=\mathcal O_{\psi,\pi}. \qedhere \] \end{proof}
\begin{thm}\label{crossed} Let $(\sigma,\delta)$ be a coaction of $G$ on a correspondence $(X,A)$ such that $\delta(J_X)\subset M(A\otimes C^*(G);J_X\otimes C^*(G))$, and let $\zeta$ be the associated coaction on $\mathcal O_X$, as in \propref{coaction}. If the canonical correspondence homomorphism \[ (j_X,j_A):(X,A)\to (M(X\rtimes_\sigma G),M(A\rtimes_\delta G)) \] is Cuntz-Pimsner covariant, then \[
\mathcal O_X\rtimes_\zeta G\cong\mathcal O_{X\rtimes_\sigma G}. \] \end{thm}
\begin{rem} We do not know whether the hypothesis of Cuntz-Pimsner covariance of $(j_X,j_A)$ is redundant; in \corref{implies CP} below we will show that it is satisfied under certain conditions. \end{rem}
\begin{proof}[Proof of \thmref{crossed}] Our strategy is to construct a covariant homomorphism \[ (\rho,\mu):(\mathcal O_X,C_0(G))\to M(\mathcal O_{X\rtimes_\sigma G}), \] and show that the integrated form $\rho\times\mu$ is an isomorphism of $\mathcal O_X\rtimes_\zeta G$ onto $\mathcal O_{X\rtimes_\sigma G}$. For the covariant homomorphism we will need a homomorphism of $\mathcal O_X$, and to get this we will apply functoriality: since $(j_X,j_A)$ is Cuntz-Pimsner covariant, by \cite[Corollary~3.6]{KQRCorrespondenceFunctor} there is a unique nondegenerate homomorphism \[ \mathcal O_{j_X,j_A}:\mathcal O_X\to M(\mathcal O_{X\rtimes_\sigma G}) \] making the diagram \[ \xymatrix@C+30pt{ (X,A) \ar[r]^-{(j_X,j_A)} \ar[d]_{(k_X,k_A)} &(M_{A\rtimes_\delta G}(X\rtimes_\sigma G),M(A\rtimes_\delta G)) \ar[d]^{(\overline{k_{X\rtimes_\sigma G}},\overline{k_{A\rtimes_\delta G}})} \\ \mathcal O_X \ar[r]_{\mathcal O_{j_X,j_A}} &M(\mathcal O_{X\rtimes_\sigma G}) } \] commute.
We next show that $(j_X,j_A,j_G)$ is covariant for $(\sigma,\delta)$: \begin{align*} \overline{j_X\otimes\text{\textup{id}}}\circ\sigma &=\overline{\bigl(\overline{\text{\textup{id}}\otimes\lambda}\circ\sigma\bigr)\otimes\text{\textup{id}}}\circ\sigma \\&=\overline{\text{\textup{id}}\otimes\lambda\otimes\text{\textup{id}}}\circ\overline{\sigma\otimes\text{\textup{id}}}\circ\sigma \\&=\overline{\text{\textup{id}}\otimes\lambda\otimes\text{\textup{id}}}\circ\overline{\text{\textup{id}}\otimes\delta_G}\circ\sigma \\&=\ad\bigl(1\otimes\overline{M\otimes\text{\textup{id}}}(w_G)\bigr)\circ\overline{\text{\textup{id}}\otimes\lambda\otimes\text{\textup{id}}}\circ(\sigma\otimes 1) \\&=\ad\overline{\overline{1\otimes M}\otimes\text{\textup{id}}}(w_G)\circ\bigl(\overline{\text{\textup{id}}\otimes\lambda}\circ\sigma\otimes 1\bigr) \\&=\ad\overline{j_G\otimes\text{\textup{id}}}(w_G)\circ(j_X\otimes 1), \end{align*} where the fourth equality follows by linearity, density, and strict continuity from the following computation with elementary tensors: for $\eta\in X$ and $t\in G$ we have \begin{align*} \overline{\text{\textup{id}}\otimes\lambda\otimes\text{\textup{id}}}\circ\overline{\text{\textup{id}}\otimes\delta_G}(\eta\otimes t) &=\overline{\text{\textup{id}}\otimes\lambda\otimes\text{\textup{id}}}\bigl(\eta\otimes\delta_G(t)\bigr) \\&=\eta\otimes\overline{\lambda\otimes\text{\textup{id}}}\circ\delta_G(t) \\&=\eta\otimes\lambda_t\otimes t \\&=\eta\otimes\ad\overline{M\otimes\text{\textup{id}}}(w_G)(\lambda_t\otimes 1) \\&=\ad\bigl(1\otimes\overline{M\otimes\text{\textup{id}}}(w_G)\bigr)(\eta\otimes\lambda_t\otimes 1) \end{align*} where in turn the fourth equality follows from the following: for $f\in B(G)$ we have \begin{align*} S_f\bigl((\lambda_t\otimes t)\overline{M\otimes\text{\textup{id}}}(w_G)\bigr) &=\lambda_tS_{f\cdot t}\bigl(\overline{M\otimes\text{\textup{id}}}(w_G)\bigr) \\&=\lambda_tM_{f\cdot t} \\&=M_f\lambda_t \\&=S_f\bigl(\overline{M\otimes\text{\textup{id}}}(w_G)\bigr)\lambda_t \\&=S_f\bigl(\overline{M\otimes\text{\textup{id}}}(w_G)(\lambda_t\otimes 1)\bigr), \end{align*} so that \[ (\lambda_t\otimes t)\overline{M\otimes\text{\textup{id}}}(w_G)=\overline{M\otimes\text{\textup{id}}}(w_G)(\lambda_t\otimes 1). \]
It now follows from \corref{covariant CP} that the pair \[ \bigl(\mathcal O_{j_X,j_A},\overline{k_{A\rtimes_\delta G}}\circ j_G\bigr) \] is a covariant homomorphism of the coaction $(\mathcal O_X,\zeta)$ in $M(\mathcal O_{X\rtimes_\sigma G})$, and thus we get a homomorphism \[ \Pi:=\mathcal O_{j_X,j_A}\times \bigl(\overline{k_{A\rtimes_\delta G}}\circ j_G\bigr): \mathcal O_X\rtimes_\zeta G\to M(\mathcal O_{X\rtimes_\sigma G}). \]
It remains to show the following: \begin{enumerate} \item $\Pi$ maps into $\mathcal O_{X\rtimes_\sigma G}$;
\item $\Pi$ is surjective;
\item $\Pi$ is injective. \end{enumerate}
For (i), for $\xi\in X$, $a\in A$, and $f\in C_0(G)$ we have \begin{align*} &\mathcal O_{j_X,j_A}\circ k_X(\xi)\overline{k_{A\rtimes_\delta G}}\circ j_G(f) \\&\quad=\overline{k_{X\rtimes_\sigma G}}(j_X(\xi))\overline{k_{A\rtimes_\delta G}}(j_G(f)) \\&\quad=\overline{k_{X\rtimes_\sigma G}}\bigl(j_X(\xi)\cdot j_G(f)\bigr) \end{align*} and \begin{align*} &\mathcal O_{j_X,j_A}\circ k_A(a)\overline{k_{A\rtimes_\delta G}}\circ j_G(f) \\&\quad=\overline{k_{A\rtimes_\delta G}}(j_A(a))\overline{k_{A\rtimes_\delta G}}(j_G(f)) \\&\quad=\overline{k_{A\rtimes_\delta G}}\bigl(j_A(a)j_G(f)\bigr). \end{align*}
For (ii), we see from the above that the image of $\Pi$ contains \[ \overline{k_{X\rtimes_\sigma G}}\bigl(j_X(X)\cdot j_G(C_0(G))\bigr) \midtext{and} \overline{k_{A\rtimes_\delta G}}\bigl(j_A(A)\cdot j_G(C_0(G))\bigr), \] and hence contains \[ \overline{k_{X\rtimes_\sigma G}}(X\rtimes_\sigma G) \midtext{and} \overline{k_{A\rtimes_\delta G}}(A\rtimes_\delta G), \] which generate $\mathcal O_{X\rtimes_\sigma G}$.
For (iii) we apply \cite[Theorem~3.1]{QuiggLandstadDuality}: we must show that $\Pi\circ j_{\mathcal O_X}$ is faithful and that there is an action $\alpha$ of $G$ on $\mathcal O_{X\rtimes_\sigma G}$ such that $\Pi$ is $\widehat\zeta-\alpha$ equivariant.
To see that $\Pi\circ j_{\mathcal O_X}$ is faithful, we apply the Gauge-Invariant Uniqueness Theorem: since \[ \Pi\circ j_{\mathcal O_X}\circ k_A =\mathcal O_{j_X,j_A}\circ k_A =j_A \] is faithful, it suffices to show that for all $z\in\mathbb T$, $\xi\in X$, and $a\in A$ we have \begin{align*} \gamma_z\circ\Pi\circ j_{\mathcal O_X}\circ k_X(\xi) &=z\Pi\circ j_{\mathcal O_X}\circ k_X(\xi) \\ \gamma_z\circ\Pi\circ j_{\mathcal O_X}\circ k_A(a) &=\Pi\circ j_{\mathcal O_X}\circ k_A(a). \end{align*} For the first, we have \begin{align*} \gamma_z\circ\Pi\circ j_{\mathcal O_X}\circ k_X(\xi) &=\gamma_z\circ\mathcal O_{j_X,j_A}\circ k_X(\xi) \\&=\gamma_z\circ\overline{k_{X\rtimes_\sigma G}}\circ j_X(\xi) \\&=z\overline{k_{X\rtimes_\sigma G}}\circ j_X(\xi) \\&=z\Pi\circ j_{\mathcal O_X}\circ k_X(\xi), \end{align*} where the third equality follows from \[ \gamma_z\circ k_{X\rtimes_\sigma G}=zk_{X\rtimes_\sigma G}. \] The second is similar, this time using $\gamma_z\circ k_{A\rtimes_\delta G}=k_{A\rtimes_\delta G}$.
We now turn to the action of $G$. First note that there is an action $\beta$ of $G$ on $X\rtimes_\sigma G$ given by \[ \beta_t\bigl(j_X(\xi)\cdot j_G(f)\bigr)=j_X(\xi)\cdot j_G\circ \textup{rt}_t(f) \midtext{for}\xi\in X,f\in C_0(G), \] where $\textup{rt}$ is the action of $G$ on $C_0(G)$ given by right translation. This in turn gives an action $\alpha$ of $G$ on $\mathcal O_{X\rtimes_\sigma G}$ such that \begin{align*} \alpha_t\circ k_{X\rtimes_\sigma G}&=k_{X\rtimes_\sigma G}\circ\beta_t\\ \alpha_t\circ k_{A\rtimes_\delta G}&=k_{A\rtimes_\delta G}\circ\beta_t. \end{align*} Finally, we check the $\widehat\zeta-\alpha$ covariance: \begin{align*} \alpha_t\circ\Pi\circ j_{\mathcal O_X} &=\alpha_t\circ\mathcal O_{j_X,j_A} \\&=\alpha_t\circ\overline{k_{X\rtimes_\sigma G}}\circ j_X \\&=\overline{k_{X\rtimes_\sigma G}}\circ\beta_t\circ j_X \\&=\overline{k_{X\rtimes_\sigma G}}\circ j_X \\&=\Pi\circ j_{\mathcal O_X} \\*&=\Pi\circ\widehat\zeta_t\circ j_{\mathcal O_X}, \end{align*} and \begin{align*} \alpha_t\circ\Pi\circ j_G &=\alpha_t\circ\overline{k_{A\rtimes_\delta G}}\circ j_G \\&=\overline{k_{A\rtimes_\delta G}}\circ\beta_t\circ j_G \\&=\overline{k_{A\rtimes_\delta G}}\circ j_G\circ\textup{rt}_t \\&=\Pi\circ j_G\circ\textup{rt}_t \\&=\Pi\circ\widehat\zeta_t\circ j_G. \qedhere \end{align*} \end{proof}
\begin{cor}\label{implies CP} Let $(\sigma,\delta)$ be a coaction of $G$ on a correspondence $(X,A)$. If either \begin{enumerate} \item $G$ is amenable, or
\item $\varphi_A:A\to \mathcal L(X)$ is faithful, \end{enumerate} then the canonical correspondence homomorphism \[ (j_X,j_A):(X,A)\to (M(X\rtimes_\sigma G),M(A\rtimes_\delta G)) \] is Cuntz-Pimsner covariant. \end{cor}
\begin{proof} By \lemref{j CP}, it suffices to show that \[ j_A(J_X)(A\rtimes_\delta G)\subset J_{X\rtimes_\sigma G}. \] The ideal of $A\rtimes_\delta G$ generated by $j_A(J_X)(A\rtimes_\delta G)$ is \[ I:=\clspn\{(A\rtimes_\delta G)j_A(J_X)(A\rtimes_\delta G)\}, \] so it suffices to show that $\varphi_{A\rtimes_\delta G}$ maps $I$ injectively into $\mathcal K(X\rtimes_\sigma G)$. As we observed immediately before \remref{j nondegenerate}, we can work with $\varphi_A\rtimes G$ and $\mathcal K(X)\rtimes_{\sigma^{(1)}} G$ rather than $\varphi_{A\rtimes_\delta G}$ and $\mathcal K(X\rtimes_\sigma G)$. To see that $\varphi_A\rtimes G$ maps $I$ into $\mathcal K(X)\rtimes_{\sigma^{(1)}} G$, it suffices to observe that \begin{align*} &(\varphi_A\rtimes G)\bigl(j_G^A(C_0(G))j_A(A)j_A(J_X)j_A(A)j_G^A(C_0(G))\bigr) \\&\quad=(\varphi_A\rtimes G)\bigl(j_G^A(C_0(G))j_A(AJ_XA)j_G^A(C_0(G))\bigr) \\&\quad\subset(\varphi_A\rtimes G)\bigl(j_G^A(C_0(G))j_A(J_X)j_G^A(C_0(G))\bigr) \\&\quad=j_G^{\mathcal K(X)}(C_0(G))\overline{j_{\mathcal K(X)}}(\varphi_A(J_X))j_G^{\mathcal K(X)}(C_0(G)) \\&\quad\subset j_G^{\mathcal K(X)}(C_0(G))j_{\mathcal K(X)}(\mathcal K(X))j_G^{\mathcal K(X)}(C_0(G)) \\&\quad\subset \mathcal K(X)\rtimes_{\sigma^{(1)}} G. \end{align*}
On the other hand, to see that $\varphi_A\rtimes G$ is injective on $I$, we now consider each hypothesis (i) and (ii) separately. First, if $\varphi_A$ is injective, then so is $\varphi_A\rtimes G$, because $\varphi_A$ gives a $G$-equivariant isomorphism between $(A,\delta)$ and the image $(\varphi_A(A),\eta)$, where $\eta$ is the corresponding coaction on $\varphi_A(A)$, and we have a commuting diagram \[ \xymatrix{ A\rtimes_\delta G \ar[r]^-\cong \ar[dr]_(.4){\varphi_A\rtimes G} &\varphi_A(A)\rtimes_\eta G \ar@{^(->}[d] \\ &M(\mathcal K(X)\rtimes_{\sigma^{(1)}} G), } \] where the horizontal arrow is an isomorphism and the vertical arrow is an inclusion.
Thus it remains to show that $\varphi_A\rtimes G$ is injective on $I$ under the assumption that $G$ is amenable. We will show that in this case $J_X$ is a $\delta$-invariant ideal of $A$ in the sense that $\delta$ restricts to a coaction on $J_X$. It will follow that \[ I=J_X\rtimes_\delta G, \]
and since the restriction $\varphi_A|:J_X\to \mathcal K(X)$ is injective we will be able to conclude that \[
\varphi_A|\rtimes G:J_X\rtimes_\delta G\to \mathcal K(X)\rtimes_{\sigma^{(1)}} G \] is injective as well.
To see that $J_X$ is invariant, by \cite[Proposition~2.6]{QuiggLandstadDuality} it suffices to show that $J_X$ is an $A(G)$-submodule of $A$. Let $f\in A(G)$ and $a\in J_X$. We must show both of the following: \begin{enumerate} \item $\varphi_A(f\cdot a)\in \mathcal K(X)$;
\item $(f\cdot a)b=0$ for all $b\in \ker\varphi_A$. \end{enumerate} For (i), we have \begin{align*} \varphi_A(f\cdot a) &=\varphi_A\circ S_f\circ\delta(a) \\&=S_f\circ\overline{\varphi_A\otimes\text{\textup{id}}}\circ\delta(a) \\&=S_f\circ\overline{\sigma^{(1)}}\circ\varphi_A(a) \\&\subset S_f\circ\sigma^{(1)}(\mathcal K(X)) \\&\subset S_f\Bigl(M_{C^*(G)}\bigl(\mathcal K(X)\otimes C^*(G)\bigr)\Bigr) \\&\subset \mathcal K(X), \end{align*} by \cite[Lemma~1.5]{lprs}.
In preparation for (ii), we first show that $\ker\varphi_A$ is $\delta$-invariant: if $f\in A(G)$ and $b\in \ker\varphi_A$, then \begin{align*} \varphi_A(f\cdot b) &=S_f\circ\overline{\varphi_A\otimes\text{\textup{id}}}\circ\delta(b) =S_f\circ\overline{\sigma^{(1)}}\circ\varphi_A(b) =0. \end{align*} Thus $\delta$ restricts to a coaction on $\ker\varphi_A$, so \begin{equation}\label{kernel nondegenerate} \clspn\{\delta(\ker\varphi_A)(1\otimes C^*(G))\}=\ker\varphi_A\otimes C^*(G). \end{equation} We now verify (ii): for $f\in A(G)$, $a\in J_X$, and $b\in \ker\varphi_A$ we first factor $f=c\cdot f'$ for some $c\in C^*(G)$ and $f'\in A(G)$ (using amenability of $G$ again), and then \begin{align*} (f\cdot a)b &=S_f\circ\delta(a)b \\&=S_f\bigl(\delta(a)(b\otimes 1)\bigr) \\&=S_{c\cdot f'}\bigl(\delta(a)(b\otimes 1)\bigr) \\&=S_{f'}\bigl(\delta(a)(b\otimes c)\bigr) \\&\approx \sum_1^nS_{f'}\bigl(\delta(a)\delta(b_i)(1\otimes c_i)\bigr) \\&\hspace{.5in}\text{(for some $b_i\in \ker\varphi_A$ and $c_i\in C^*(G)$, by \eqref{kernel nondegenerate})} \\&\approx \sum_1^nS_{f'}\bigl(\delta(ab_i)(1\otimes c_i)\bigr) \\&=0, \end{align*} because $J_X\subset (\ker\varphi_A)^\perp$. \end{proof}
Recall that \cite[Theorem~6.9]{QuiggDualityTwisted} (see also \cite[Theorem~2.9]{lprs}) shows that the crossed product of a $C^*$-algebra $A$ by an inner coaction of $G$ is isomorphic to $A\otimes C_0(G)$; the following result is a version for correspondences:
\begin{prop}\label{inner crossed} Let $(A,X,B)$ be a correspondence, and let $(A,\delta)$ and $(B,\varepsilon)$ be inner coactions implemented by nondegenerate homomorphisms $\mu$ and $\nu$, respectively, and let $\sigma$ be the associated coaction on $X$, as in \propref{inner}. Then there is an isomorphism \[ \Phi:X\rtimes_\sigma G\to X\otimes C_0(G) \] given by \[ \Phi(y)=\overline{\mu\otimes\lambda}(w_G)^*\cdot y \cdot \overline{\nu\otimes\lambda}(w_G). \] The left and right module actions are transformed by $\Phi$ as follows: \begin{align*} &\Phi\bigl(j_A(a)j_G^A(f)\cdot y\cdot j_B(b)j_G^B(g)\bigr) \\&\quad=(a\otimes 1)(\mu\times M)(f)\cdot \Phi(y)\cdot (b\otimes 1)(\nu\times M)(g), \end{align*} where $\mu\times M$ denotes the Kronecker product of $\mu$ and $M$, and similarly for $\nu\times M$. \end{prop}
\begin{proof} Note that we are identifying $C_0(G)$ with its image under the representation $M$ on $L^2(G)$ by pointwise multiplication, i.e., $(M_f\xi)(t)=f(t)\xi(t)$ for $f\in C_0(G)$ and $\xi\in L^2(G)$. Routine calculations show \begin{align*} \Phi\circ j_A&=\text{\textup{id}}_A\otimes 1\\ \Phi\circ j_B&=\text{\textup{id}}_B\otimes 1\\ \Phi\circ j_X&=\text{\textup{id}}_X\otimes 1\\ \Phi\circ j_G^A&=\mu\times M\\ \Phi\circ j_G^B&=\nu\times M; \end{align*} for the last two it helps to note that \[ \ad\overline{\text{\textup{id}}\otimes\lambda}(w_G)^*(1\otimes M_f)=(\text{\textup{id}}\times M)(f). \] Since \begin{align*} \overline{\mu\otimes\lambda}(w_G)&\in M(A\otimes \mathcal K(L^2(G))), \end{align*} and similarly for $\overline{\nu\otimes\lambda}(w_G)$, clearly $\Phi$ maps $X\rtimes_\sigma G$ into $M(X\otimes \mathcal K(L^2(G)))$. We actually have $\Phi(X\rtimes_\sigma G)=X\otimes C_0(G)$, because \begin{align*} &\clspn\bigl\{\Phi\bigl(j_X(X)\cdot j_G(C_0(G))\bigr)\bigr\} \\&\quad=\clspn\bigl\{\Phi\bigl(j_X(X\cdot B)\cdot j_G(C_0(G))\bigr)\bigr\} \\&\quad=\clspn\bigl\{\Phi\bigl(j_X(X)\cdot j_B(B)j_G(C_0(G))\bigr)\bigr\} \\&\quad=\clspn\bigl\{(X\otimes 1)\cdot \ad\overline{\nu\otimes\text{\textup{id}}}(w_G)^*(B\rtimes_\varepsilon G)\bigr\} \\&\quad=\clspn\bigl\{(X\otimes 1)\cdot (B\otimes C_0(G))\bigr\} \\&\hspace{1in}\text{(by \cite[Theorem~6.9]{QuiggDualityTwisted} or \cite[Theorem~2.9]{lprs})} \\&\quad=X\otimes C_0(G). \qedhere \end{align*} \end{proof}
Let $(\gamma,\alpha)$ be an action of $G$ on a correspondence $(X,A)$. Assume that $G$ is amenable; in particular, there is no difference between the full and reduced crossed products $X\rtimes_\gamma G$ and $X\rtimes_{\gamma,r} G$ (and similarly for $A$), so we can freely apply the results of \cite[Section 3.1]{enchilada}.
As in \cite[Proposition~3.5]{enchilada}, let $\widehat\gamma$ be the dual coaction of $G$ on $X\rtimes_\gamma G$, determined on generators $\xi\in C_c(G,X)$ by \[ \widehat\gamma(\xi)(t)=\xi(t)\otimes t, \] so that $\widehat\gamma(\xi)$ is an element of $C_c(G,M^\beta(X\otimes C^*(G)))$, which in turn is embedded in $M((X\rtimes_\gamma G)\otimes C^*(G))$ via the isomorphism \cite[Lemma~3.4]{enchilada} \[ (X\rtimes_\gamma G)\otimes C^*(G)\xrightarrow{\cong} (X\otimes C^*(G))\rtimes_{\gamma\otimes\text{\textup{id}}} G \] that extends the canonical embedding \[ C_c(G,X)\odot C^*(G)\hookrightarrow C_c(G,X\otimes C^*(G)). \]
\begin{prop}\label{dual coaction} Let $(\gamma,\alpha)$ be an action of $G$ on a correspondence $(X,A)$, and assume that $G$ is amenable. Then the dual coaction $(\widehat\gamma,\widehat\alpha)$ on $(X\rtimes_\gamma G,A\rtimes_\alpha G)$ satisfies \begin{equation}\label{dual} \widehat\alpha(J_{X\rtimes_\gamma G})\subset M\bigl((A\rtimes_\alpha G)\otimes C^*(G);J_{X\rtimes_\gamma G}\otimes C^*(G)\bigr). \end{equation} \end{prop}
\begin{proof} By \cite[Proposition~2.7]{HN}, the ideal $J_X$ of $A$ is $\alpha$-invariant, and \[ J_{X\rtimes_\gamma G}=J_X\rtimes_\alpha G. \] The isomorphism \[ (A\rtimes_\alpha G)\otimes C^*(G)\xrightarrow{\cong} (A\otimes C^*(G))\rtimes_{\alpha\otimes\text{\textup{id}}} G, \] of \cite[Lemma~A.20]{enchilada} clearly takes $(J_X\rtimes_\alpha G)\otimes C^*(G)$ to $(J_X\otimes C^*(G))\rtimes_{\alpha\otimes\text{\textup{id}}} G$.
Recall that $\widehat\alpha$ takes a function $f\in C_c(G,A)$ to the function in $C_c(G,M^\beta(A\otimes C^*(G)))$ defined by \[ \widehat\alpha(f)(t)=f(t)\otimes t. \] It follows that for $g\in C_c(G,A\otimes C^*(G))$ we have \begin{align*} \bigl(\widehat\alpha(f)g\bigr)(t) &=\int_G\widehat\alpha(f)(s)\overline{\alpha_s\otimes\text{\textup{id}}}(g(s^{-1} t))\,ds \\&=\int_G\bigl(f(s)\otimes s\bigr)\overline{\alpha_s\otimes\text{\textup{id}}}(g(s^{-1} t))\,ds. \end{align*} Now let $f\in C_c(G,J_X)$. For all $s\in G$, it is easy to check, by first computing with elementary tensors $a\otimes c\in A\odot C^*(G)$, that \[ \bigl(f(s)\otimes s\bigr)(A\otimes C^*(G))\subset J_X\otimes C^*(G), \] and it follows that \[ \widehat\alpha(f)g\in C_c(G,J_X\otimes C^*(G)) \subset (J_X\otimes C^*(G))\rtimes_{\alpha\otimes\text{\textup{id}}} G. \] By density, this implies that \[ \widehat\alpha\bigl(J_X\rtimes_\alpha G\bigr)\subset M\bigl((A\otimes C^*(G))\rtimes_{\alpha\otimes\text{\textup{id}}} G;(J_X\otimes C^*(G))\rtimes_{\alpha\otimes\text{\textup{id}}} G\bigr), \] which in turn implies \eqref{dual}. \end{proof}
\section{Application}\label{apps}
As an application of our techniques, we will give an alternative approach to a recent result of Hao and Ng \cite[Theorem~2.10]{HN}. Given an action $(\gamma,\alpha)$ of an amenable locally compact group $G$ on a nondegenerate correspondence $(X,A)$, Hao and Ng construct an isomorphism \[ \mathcal O_{X\rtimes_\gamma G}\cong \mathcal O_X\rtimes_\beta G, \] where $X\rtimes_\gamma G$ is the crossed-product correspondence over $A\rtimes_\alpha G$ and $\beta$ is the associated action of $G$ on $\mathcal O_X$. In our earlier paper \cite[Proposition~4.3]{KQRCorrespondenceFunctor} we suggested an alternative approach to this result, removing the amenability hypothesis on $G$. Namely, we constructed a surjection that goes in the opposite direction: \[ \mathcal O_X\rtimes_\beta G\to \mathcal O_{X\rtimes_\gamma G}. \] We suspect, but were unable to prove, that this is an isomorphism in general; however, at least in the amenable case, we can give a new proof of \cite[Theorem~2.10]{HN} with the help of Propositions~\ref{dual coaction} and \ref{coaction}.
\begin{thm}\label{injective} Let $(\gamma,\alpha)$ be an action of $G$ on a nondegenerate correspondence $(X,A)$, let $\beta$ be the associated action of $G$ on $\mathcal O_X$, and let \[ \Pi:=\mathcal O_{i_X,i_A}\times u:\mathcal O_X\rtimes_\beta G\to \mathcal O_{X\rtimes_\gamma G} \] be the surjection from \cite[Proposition~4.3]{KQRCorrespondenceFunctor}. If $G$ is amenable, then $\Pi$ is an isomorphism. \end{thm}
\begin{proof} By Propositions~\ref{dual coaction} and \ref{coaction} we get a coaction $\zeta$ of $G$ on $\mathcal O_{X\rtimes_\gamma G}$. Our strategy is to show that $\Pi$ is $\widehat\beta-\zeta$ equivariant and that $\mathcal O_{i_X,i_A}$ is injective, and then \cite[Proposition~3.1]{QuiggLandstadDuality} will imply that $\Pi$ is injective, because by amenability of $G$ the coaction $\zeta$ is automatically normal.
We check the equivariance condition \[ \zeta\circ\Pi=\overline{\Pi\otimes\text{\textup{id}}}\circ\widehat\beta \] separately on generators from $X$, $A$, and $G$: for $X$ we have \begin{align*} \overline{\zeta\circ\Pi}\circ i_{\mathcal O_X}\circ k_X &=\overline\zeta\circ\overline\Pi\circ i_{\mathcal O_X}\circ k_X \\&=\overline\zeta\circ\mathcal O_{i_X,i_A}\circ k_X \\&=\overline\zeta\circ\overline{k_{X\rtimes_\gamma G}}\circ i_X \\&=\overline{k_{X\rtimes_\gamma G}\otimes\text{\textup{id}}}\circ\overline{\widehat\gamma}\circ i_X \\&=\overline{k_{X\rtimes_\gamma G}\otimes\text{\textup{id}}}\circ(i_X\otimes 1) \\&=\bigl(\overline{k_{X\rtimes_\gamma G}}\circ i_X\bigr)\otimes 1 \\&=\mathcal O_{i_X,i_A}\circ k_X\otimes 1 \\&=(\mathcal O_{i_X,i_A}\otimes 1)\circ k_X \\&=(\overline\Pi\circ i_{\mathcal O_X}\otimes 1)\circ k_X \\&=\overline{\Pi\otimes\text{\textup{id}}}\circ(i_{\mathcal O_X}\otimes 1)\circ k_X \\&=\overline{\Pi\otimes\text{\textup{id}}}\circ\overline{\widehat\beta}\circ i_{\mathcal O_X}\circ k_X \\&=\overline{\overline{\Pi\otimes\text{\textup{id}}}\circ\widehat\beta}\circ i_{\mathcal O_X}\circ k_X. \end{align*} The verification for generators from $A$ is parallel, using $k_A,i_A,\widehat\alpha$ instead of $k_X,i_X,\widehat\gamma$.
For generators from $G$ we have \begin{align*} \overline{\zeta\circ\Pi}\circ i_G^{\mathcal O_X} &=\overline\zeta\circ\overline\Pi\circ i_G^{\mathcal O_X} \\&=\overline\zeta\circ u \\&=\overline\zeta\circ \overline{k_{A\rtimes_\alpha G}}\circ i_G^A \\&=\overline{\zeta\circ k_{A\rtimes_\alpha G}}\circ i_G^A \\&=\overline{\overline{k_{A\rtimes_\alpha G}\otimes\text{\textup{id}}}\circ \widehat\alpha}\circ i_G^A \\&=\overline{k_{A\rtimes_\alpha G}\otimes\text{\textup{id}}}\circ\overline{\widehat\alpha}\circ i_G^A \\&=\overline{k_{A\rtimes_\alpha G}\otimes\text{\textup{id}}}\circ\overline{i_G^A\otimes\text{\textup{id}}}\circ\delta_G \\&=\overline{\overline{k_{A\rtimes_\alpha G}}\circ i_G^A\otimes\text{\textup{id}}}\circ\delta_G \\&=\overline{u\otimes\text{\textup{id}}}\circ\delta_G \\&=\overline{\overline\Pi\circ i_G^{\mathcal O_X}\otimes\text{\textup{id}}}\circ\delta_G \\&=\overline{\Pi\otimes\text{\textup{id}}}\circ\overline{i_G^{\mathcal O_X}\otimes\text{\textup{id}}}\circ\delta_G \\&=\overline{\Pi\otimes\text{\textup{id}}}\circ\overline{\widehat\beta}\circ i_G^{\mathcal O_X} \\&=\overline{\overline{\Pi\otimes\text{\textup{id}}}\circ\widehat\beta}\circ i_G^{\mathcal O_X}. \end{align*}
Finally, by \cite[Corollary~3.6]{KQRCorrespondenceFunctor}, $\mathcal O_{i_X,i_A}$ is injective because $i_A$ is. \end{proof}
\end{document} | arXiv |
Novel learning-based spatial reuse optimization in dense WLAN deployments
Imad Jamil1,3,
Laurent Cariou2 &
Jean-François Hélard3
To satisfy the increasing demand for wireless system capacity, the industry is dramatically increasing the density of the deployed networks. Like other wireless technologies, Wi-Fi is following this trend, particularly because of its increasing popularity. In parallel, Wi-Fi is being deployed for new use cases that are far removed from the context of its first introduction as an Ethernet network replacement. In fact, the conventional operation of Wi-Fi networks is not likely to be ready for these super dense environments and new challenging scenarios. For that reason, the high efficiency wireless local area network (HEW) study group (SG) was formed in May 2013 within the IEEE 802.11 working group (WG). Its intent is to improve "real world" Wi-Fi performance, especially in dense deployments.
In this context, this work proposes a new centralized solution to jointly adapt the transmission power and the physical carrier sensing based on artificial neural networks. The major intent of the proposed solution is to resolve the fairness issues while enhancing the spatial reuse in dense Wi-Fi environments. This work is the first to use artificial neural networks to improve spatial reuse in dense WLAN environments. For the evaluation of this proposal, the newly designed algorithm is implemented in OPNET Modeler. Relevant scenarios are simulated to assess the efficiency of the proposal in terms of addressing starvation issues caused by hidden and exposed node problems. The extensive simulations show that our learning-based solution is able to resolve the hidden and exposed node problems and improve the performance of high-density Wi-Fi deployments in terms of achieved throughput and fairness among contending nodes.
Today, IEEE 802.11 wireless local area network (WLAN) [1] that is widely known as Wi-Fi is the dominant standard in WLAN technology. An infrastructure mode of a basic IEEE 802.11 network is termed a basic service set (BSS) and consists of an access point (AP) and at least one associated station (STA). According to the IEEE 802.11 standard, the multiple access to the communication medium is based on the contention between the different nodes operating on the same frequency channel. The distributed coordination function (DCF) manages the contention-based access by implementing the carrier sense multiple access with collision avoidance (CSMA-CA) mechanism. As described in the standard, if a node is transmitting, all the nodes located in its transmission range must defer their transmissions. This behavior is ruled by the physical carrier sensing (PCS) that is a part of the clear channel assessment (CCA) mechanism. Accordingly, at a given time, there is only one communication occurring within the same BSS. Usually, this communication occurs between the AP and one of the associated STAs and may take one of the two directions: downlink (DL) towards the STA or uplink (UL) towards the AP.
If the contending nodes belong to different BSSs, we talk about an overlapping BSS (OBSS) problem where the nodes of the neighboring co-channel BSSs overhear each other. When multiple co-channel BSSs overlap, the communication airtime is shared between them and hence the total capacity of the network is divided by the number of these OBSSs. Since the number of orthogonal frequency channels available for the operation of Wi-Fi networks is limited to 3 in the 2.4-GHz band and may reach a maximum of 24 channels in the 5-GHz band, preventing OBSSs is very challenging, especially in dense WLAN environments. The impact of the increasing density on the adaptive channel selection schemes is studied in [2]. Added to this quantitative limitation is the fact that interference-free operation on these unlicensed channels is not always guaranteed, because many other systems are using them.
Since its introduction in 1997, the IEEE WLAN standard has been continuously evolving. Increasing the peak physical throughput has always been the main intent behind this evolution. However, the network throughput achievable in the real world is affected by many factors that are mostly related to the MAC layer protocols. On the other hand, the focus of the standardization activity was mainly on enhancing the performance of a single BSS. Nevertheless, in reality, the inevitable presence of OBSSs degrades the spectral efficiency and hence the performance of the system.
The increasing demand for high throughput, large capacity, and ubiquitous coverage for high data rate, real-time, and always-on applications is driving the wireless industry. To respond to these demands, the density of the deployed Wi-Fi networks is drastically increasing in all deployment scenarios: indoor or outdoor public hotspots, business offices, and private residences. For instance, by 2018, the number of hotspots will grow to the equivalent average of one Wi-Fi hotspot for every 20 people on earth, according to a recent study [3].
Along with this unprecedented level of density, more new challenging deployment scenarios are expected to appear. Already, Wi-Fi is seen as the most suitable solution to cover large venues, stadiums, airports, train stations, and other crowded spaces, indoors and outdoors. Furthermore, Wi-Fi access points are being deployed in planes, trains, and ships. Satisfying this variety of use cases is not a simple task, especially when a certain level of quality of experience (QoE) is expected to be met. To deal with the presented issues, the high efficiency WLAN (HEW) study group (SG) [4] was launched and led to the creation of a new task group (TG) in May 2014 that took the name of IEEE 802.11ax [5]. The main goal of this TG is improving the spectrum efficiency to enhance the system area throughput in high-density scenarios in terms of the number of APs and/or STAs.
As argued by many researchers and standardization contributors [6–8], the current MAC protocols are very conservative when operating in dense environments. Because of this overprotective behavior, the performance of the current high-density WLAN networks is degraded. As previously discussed, this density will continue to increase and the future networks will be more and more vulnerable to performance degradation. Obviously, optimizing these protocols for more spatial reuse will boost the overall performance of high-density WLANs in terms of achieved throughput. This is why enhancing the spatial reuse is one of the hot topics discussed in the IEEE 802.11ax task group. Accordingly, the IEEE 802.11ax spatial reuse (SR) ad hoc group was created to work on improving spatial frequency reuse and other mechanisms that enhance the concurrent use of the wireless medium by multiple devices. Several interesting proposals have been presented in this group. However, preserving fairness among different nodes is not always assured with the currently proposed solutions [9, 10]. In a previous work [11], we proposed an adaptive distributed scheme to enhance the network performance in densely deployed WLANs by leveraging the spatial reuse.
In the same context, with the aim of leveraging spatial reuse in dense WLAN environments, this work envisions the adaptation of the MAC protocols of a managed WLAN system in a centralized manner. Since the next Wi-Fi generation is intended to be carrier oriented, the future Wi-Fi infrastructure will be increasingly deployed in a planned manner, like cellular networks. The centralized approach, the subject of this work, is appropriate for many current and future Wi-Fi deployment scenarios. One of the most relevant scenarios studied in the current IEEE 802.11ax task group is the stadium scenario. Actually, two scenarios out of the total of four scenarios discussed in this task group are fully managed (see [12]).
In this paper, we exploit a new artificial neural network (ANN)-based solution to jointly apply a physical carrier sensing adaptation (PCSA) and a transmit power control (TPC) in a way that preserves fairness between all the nodes in terms of throughput. ANNs [13] are commonly used to address a wide range of pattern recognition problems [14], including classification, clustering, and regression. Moreover, the ability of ANNs to model complex and nonlinear relationships makes them attractive for many real-world problems. In the telecommunications domain, ANNs are adopted for a large number of applications [15], such as equalizers, adaptive beam-forming, self-organizing networks, network design and management, routing protocols, localization, etc. Furthermore, many data mining techniques make use of ANNs to derive meaning from complicated or imprecise data. In WLAN, the main applications of ANNs can be classified as follows: data rate adaptation [16], quality of service (QoS) provisioning [17], frame size adaptation [18], channel allocation [19], channel estimation [20], and indoor localization [21].
To the best of our knowledge, this is the first work to use ANNs to enhance the spatial reuse for high-density WLANs. As shown in Fig. 1, a central entity (the controller) controls all the APs of the managed WLAN system. This controller is capable of collecting feedback data from all the nodes attached to the system (normally via their corresponding APs) in a periodic manner. The IEEE 802.11k amendment [22], which describes the mechanisms for APs and STAs to dynamically measure and report their radio resources, can be useful to design the feedback collection function. In this work, the feedback data consists of the values of the adapted parameters and the average throughput achieved by every node. After the collection phase, the collected data is used by the ANN to learn the nonlinear relation between the input parameters and the corresponding output throughputs. The trained ANN is then used to adapt the parameters in such a way as to minimize a predefined cost function.
Managed wireless LAN topology formed by several basic service sets (BSSs) controlled by a centralized entity (the controller)
The envisioned approaches to improve the spatial reuse in dense WLANs are presented in Section 1.1. In Section 1.2, we introduce the ANNs theory and the role they play in the proposed solution. Then, the system model is detailed in Section 1.3. The proposed optimization technique is presented in Section 1.4 before explaining the implementation of the whole system in details in Section 1.5. For the evaluation part, Section 1.6 describes the simulation scenarios and discusses the obtained results. Finally, the paper is concluded in Section 2.
Spatial reuse in dense WLANs
In order to enhance the performance of WLANs in dense deployment scenarios, improving spatial reuse is compulsory. In cellular deployments, the most important approaches are prudent site planning and careful channel assignment for each cell. Although these solutions are always efficient for traditional networks, they are no longer sufficient for current dense WLAN environments. Satisfying the tremendously increasing demand in capacity is not possible without densifying network deployments, i.e., installing more APs to serve more STAs. However, as explained earlier, due to the contention-based access mechanism, the number of concurrent transmissions is largely reduced when co-channel APs are closer to each other (OBSS problem). In these circumstances, multiple approaches are envisioned to improve the situation. In the following, we describe TPC and PCSA in the light of their advantages and drawbacks. Then, we present a previous work [11] that proposes a combination of them.
Transmit power control (TPC)
Decreasing the transmission power of the possible interferers helps to fulfill the required SINR at the neighboring receivers. As shown in Fig. 2, theoretically, the same SINR can be obtained by decreasing the transmission power of all the devices by "x dB", which also decreases the interference power by "x dB". In that way, the transmission ranges in the neighboring networks are shrunk. Widely used in mobile networks, TPC is one of the powerful mechanisms to shrink cells while densifying and hence to permit more spatial reuse. By the same logic, TPC is suggested for high-density WLANs. While TPC is very effective when applied in fully managed architectures over licensed spectrum, many drawbacks appear when applying it to less managed networks. WLAN deployments are known to be chaotic since, in most cases, the APs are individually installed. As the frequency spectrum is unlicensed, managed networks cannot guarantee interference-free operation. In practice, any AP (even a soft AP, e.g., in tethering applications) operating in the vicinity may disturb the performance of the managed network at any time.
The selflessness of TPC prevents WLAN administrators and vendors from activating it when there is no specific regulatory need (e.g., operating on a frequency band that interferes with neighboring radar systems). The authors in [23, 24] and [25] show the detrimental effect of the asymmetrical links caused by the application of TPC in some scenarios. Actually, TPC is more problematic to apply in a distributed manner or when the spectrum is unlicensed because it favors higher-power transmitters that are not applying it, at the expense of lower-power transmitters that are applying TPC.
Physical carrier sensing adaptation (PCSA)
For the reasons discussed above, another approach that is more suitable to the contention-based access of WLAN is proposed. This approach is called PCSA and is based on the adaptation of the carrier sensing mechanism used by the CCA procedure. In PCSA, instead of decreasing its transmission power, a node will decrease its sensitivity in detecting signals in its environment. In Fig. 3, the PCS threshold is increased so that tolerable concurrent transmissions are prevented from triggering busy channel assessments. Consequently, in situations where the signal of interest is received with a power sufficiently higher than the interference power, the reuse between neighboring networks will be possible. In contrast to TPC, there is an important incentive for network administrators and equipment vendors to apply PCSA since the benefit goes directly to the devices that apply it.
Balanced TPC and PCS adaptation (BTPA)
However, as shown in [11], there are some fairness issues when adopting one of the previous approaches alone. More precisely, it has been shown that while TPC favors the legacy devices (that are not applying TPC), PCSA favors the devices that apply the adaptation. In real-world networks, the devices implementing the latest version of the 802.11 standard operate in the same networks as older devices (legacies). Interoperability and backward compatibility are essential features of 802.11 WLANs. Preserving fairness between different devices (particularly with legacies) is important for the overall network performance.
Consequently, we proposed in [11] the balanced TPC and PCS adaptation (BTPA). The proposal defines a mechanism to calculate two adaptation values Δ TPC and Δ PCS based on the power level received from the corresponding peer device (i.e., the AP in UL). According to the proposal, the transmission power is reduced by Δ TPC while the PCS threshold is increased by Δ PCS. This leads to an optimal protection range around the transmitter X where one node transmits at a given instant. Outside this range, co-channel nodes are able to successfully transmit simultaneously with X. In a dense cellular deployment simulation scenario, the proposed technique is able to improve fairness in different situations, while improving the average throughput fourfold compared to the standard performance.
Although BTPA can be applied in both distributed and centralized network architectures, in fully managed deployments we can take advantage of the presence of a central controller to conceive more intelligent solutions. In the present work, we design and implement a centralized learning-based solution that also uses an approach based on a joint adaptation of transmission power and carrier sensing. This new solution benefits from the ANN's ability to model complex nonlinear functions to intelligently enhance the spatial reuse while preserving fairness.
Introduction to artificial neural networks
Artificial neural networks (ANNs) [13] derive their computing power from their parallel distributed structure, which gives them the ability to learn and therefore to generalize by producing reasonable outputs for new, unseen inputs. The properties of ANNs are summarized as follows: input-output mapping capability, adaptivity, nonlinearity, and fault tolerance.
An artificial neuron
The artificial neuron is the basic block of an ANN. The architecture of this fundamental processing unit is shown in Fig. 4. Accordingly, the transfer function through a single neuron is defined as follows:
$$ y = a\left(\sum\limits_{i=1}^{n}w_{i} x_{i} + b\right) $$
The structure of an artificial neuron
where y is the output of the neuron, a(.) is the activation function, n is the number of inputs to the neuron, w i is the weight of input i, x i is the value of input i, and b is the bias value. Depending on the problem that the ANN needs to solve, the activation function can be a step function, a linear function, or a nonlinear sigmoid function.
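For concreteness, the following short Python sketch (our own illustration, not part of the paper's OPNET implementation; NumPy and a sigmoid activation are assumed) evaluates Eq. (1) for a single neuron.

```python
import numpy as np

def sigmoid(z):
    """Nonlinear sigmoid activation a(.)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b):
    """Eq. (1): y = a(sum_i w_i * x_i + b) for a single artificial neuron."""
    return sigmoid(np.dot(w, x) + b)

# Example with n = 3 inputs (illustrative values only).
x = np.array([0.2, -1.0, 0.5])   # input values x_i
w = np.array([0.4, 0.1, -0.6])   # weights w_i
b = 0.05                         # bias b
print(neuron_output(x, w, b))
```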
An artificial neural network
An ANN is obtained by combining multiple artificial neurons. These single neurons are distributed over several layers, namely input, hidden, and output layers. The number of hidden layers and the interconnections between different neurons can be defined in different ways, resulting in different ANN topologies [13]. Building the topology of an ANN is just half of the task before being able to use this ANN to solve the given problem. An ANN needs to learn how to respond to given inputs. The learning (or training) step can be achieved in a supervised, unsupervised, or reinforcement way. The supervised approach consists of setting the weights and biases to values that minimize a predefined error function.
The weights update
In the training phase, the training data is fed into the inputs, then the output of each neuron is calculated as described in Eq. (1). This procedure is repeated for all neurons at the input layer, then at the hidden layer(s), and finally at the output layer. Afterwards, the error values are calculated based on the desired output values and the actual output values. This error is used to update the weights of all the connections in the ANN. This update is done by a back propagation of the error value, meaning that the weights connecting the output layer neurons to the last hidden layer neurons are updated first. When all the weights are updated, the ANN is ready for the next epoch of the training phase. The maximum number of epochs is predefined depending on the specific problem and the available dataset. The commonly used error function is the mean squared error (MSE), defined by
$$ \text{MSE} = \frac{1}{2}\sum\limits_{m=1}^{M} \sum\limits_{i=1}^{k}\left(\text{desired\_output}^{m}_{i} - \text{current\_output}^{m}_{i}\right)^{2} $$
where M is the number of training dataset entries. When the calculated value of the MSE is less than or equal to the predefined desired MSE (MSE des ), the training is stopped and the ANN is considered sufficiently trained. Furthermore, the stop point may be controlled by other customized metrics.
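As an illustration only, Eq. (2) can be transcribed directly; the array names below are hypothetical and simply mirror the notation of the equation.

```python
import numpy as np

def mse(desired, current):
    """Eq. (2): half the sum of squared output errors over all M training
    dataset entries and all k output neurons.

    desired, current: arrays of shape (M, k).
    """
    return 0.5 * np.sum((np.asarray(desired) - np.asarray(current)) ** 2)

# Training would stop once mse(...) <= MSE_des (the predefined desired MSE).
```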
Why artificial neural networks?
The impact of the MAC protocols on the network performance is very complicated to model. Usually, researchers adopt a set of unrealistic assumptions of ideal channel conditions and homogeneous link qualities to simplify their studies. However, these assumptions lead to biased results that do not reflect real-life situations. Consequently, optimization efforts based on these impractical models result in inefficient solutions.
The relation between the throughput individually achieved by every node and the MAC parameters used on every node is nonlinear, complex, and time variant, which makes it very difficult to predict using an analytical model [26]. This is the motivation behind the use of ANNs to model this highly complicated relation. When the network is sufficiently trained, it models the aforementioned relation between outputs and inputs. This model can then be used to minimize a cost function in order to determine the best MAC parameter values for each node and hence enhance the performance of the network. For this optimization, we have to define a real-time learning and adaptation algorithm.
Related applications of artificial neural networks in the literature
In the literature, artificial neural networks are employed to model nonlinear relationship between the inputs and the outputs of a given system. The power of neural networks resides in their capability to approximate nonlinear functions. In [27], authors consider a multi-layered feed-forward neural network as a "universal approximator".
Typical problems addressed by neural networks include pattern recognition, clustering, data compression, signal processing, image processing, and control problems. In telecommunications, ANNs are implemented for many applications, such as equalizers, adaptive beam-forming, self-organizing networks, network design and management, routing protocols, and localization. ANNs are also proposed in the literature to enhance the performance of WLANs. In [16], the authors propose an ANN-based adaptation of the transmission data rate to improve the aggregate throughput of a WLAN system. QoS provisioning is addressed in [17] using fuzzy logic control to enhance the IEEE 802.11e enhanced distributed channel access (EDCA) function [28], and frame size adaptation is studied in [18]. Other important applications of the ANN theory in WLAN systems include indoor localization [21], channel estimation [20], and channel allocation [19].
An adaptive algorithm is proposed in [29] to satisfy a predefined user throughput requirement by optimizing some back-off mechanism parameters. Precisely, the minimum contention window (CW min ) and the arbitration inter-frame spacing (AIFS) are chosen as the adaptable parameters. After propagating the current values of these parameters over a multilayer neural network, the corresponding output is compared to the desired throughput to calculate the training error. Once the MSE is satisfied, the trained neural network is used to optimize the input parameters using a back-propagation mechanism. This optimization consists in minimizing the following cost-reward function:
$$ \text{Wang\_Cost} = \sum\limits_{i=1}^{k} \frac{(T_{i} - T\_\text{Thr}_{i})^{2}}{ T\_\text{Thr}_{i}} $$
where T i is the result of the forward-propagation over the ANN and T_Thr i is the required user throughput of user i.
The proposed system model
In this work, we choose the multilayer perceptron (MLP), the most common ANN topology [13]. We consider an ANN topology of three layers: the input layer, one hidden layer, and the output layer. As shown in Fig. 5, the input layer contains 2k neurons, where k is the number of WLAN nodes in the network. Since we are considering the joint optimization of the PCS threshold (PCS thr ) and the transmit power (T x p ), we need to adapt 2k parameters (two parameters for each WLAN node). The output layer consists of k neurons because we consider the throughput achieved by every node.
Proposed neural network topology
By means of this ANN, we aim to model the correlation function cf(.) between the throughput (Thr) achieved by the different WLAN nodes of the network and their associated MAC parameters.
$$ \begin{aligned} (\text{Thr}_{1}, \text{Thr}_{2},\ldots, \text{Thr}_{k}) =&\, \text{cf}(\text{PCS}_{\text{th}_{1}}, Tx\_p_{1}, \text{PCS}_{\text{th}_{2}}, Tx\_p_{2},\\&\quad\ldots, \text{PCS}_{\text{th}_{k}}, Tx\_p_{k}) \end{aligned} $$
The aim of this study is to enhance the performance of the network in terms of throughput while preserving fairness between nodes. To choose the new adapted parameters, a minimization of the following cost function is proposed.
$$ \mathrm{Cost_{fairness}} = 1-\frac{(\sum\limits_{i=1}^{K}x_{i})^{2}}{K\sum\limits_{i=1}^{K}{x_{i}^{2}}} $$
Minimizing this cost is equivalent to maximizing the Jain's fairness index [30]. This index rates the fairness of a set of throughput values, where K is the number of nodes and x i is the throughput achieved at the ith node. The values generated by the Jain's index range between 0 and 1, where a value of 1 means the best fairness. Minimizing the cost function in Eq. (5) is the same as bringing the Jain's index close to 1.
Although the aim is to preserve fairness in individual achieved throughput, we have to maintain a minimum average throughput per device. Accordingly, X T is defined as the individual average throughput target. Below X T , the average throughput achieved by a given device needs to be enhanced. To satisfy this throughput requirement, we need to minimize the expression described in Eq. (6).
$$ \text{Cost}_{T} = \sum\limits_{i=1}^{K}\frac{(X_{T} - x_{i})^{2}}{X_{T}} $$
For the final cost (Eq. 7) used by the proposed algorithm, the previously defined costs are summed together. The factor multiplying Cost T normalizes it so that it carries the same weight in the total cost as Cost fairness .
$$ \mathrm{Cost_{tot} = {Cost}_{fairness}} + \frac{1}{\sum\limits_{i=1}^{K}X_{T}}\text{Cost}_{T} $$
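To make Eqs. (5)-(7) concrete, a minimal sketch of the cost computation could look as follows; the helper names and the example throughput values are ours and purely illustrative.

```python
import numpy as np

def jain_index(x):
    """Jain's fairness index of the per-node throughputs x_1..x_K."""
    x = np.asarray(x, dtype=float)
    return np.sum(x) ** 2 / (len(x) * np.sum(x ** 2))

def cost_fairness(x):
    """Eq. (5): 1 - Jain's index, minimal when fairness is maximal."""
    return 1.0 - jain_index(x)

def cost_target(x, x_t):
    """Eq. (6): penalty for nodes whose average throughput misses X_T."""
    x = np.asarray(x, dtype=float)
    return np.sum((x_t - x) ** 2 / x_t)

def cost_total(x, x_t):
    """Eq. (7): fairness cost plus the normalized throughput-target cost."""
    return cost_fairness(x) + cost_target(x, x_t) / (len(x) * x_t)

throughputs = [10.0, 12.0, 8.0, 11.0]      # per-node throughputs in Mbps
print(cost_total(throughputs, x_t=12.0))   # cost to be minimized
```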
The new optimization algorithm—updating the MAC parameters
For the (n+1)th adaptation, the ith MAC parameter is adapted by incrementing or decrementing it by \(\Delta \beta _{i}^{(n)}\).
$$ \beta_{i}^{(n+1)} = \beta_{i}^{(n)} + \Delta \beta_{i}^{(n)} $$
where 1≤i≤2K at layer l=0. To minimize the cost function with respect to \(\beta _{i}^{(n)}\), according to the gradient descent optimization technique, \(\Delta \beta _{i}^{(n)}\) is equal to the negative gradient of the cost function as follows:
$$ \Delta \beta_{i}^{(n)} = - \eta \frac{\delta \text{Cost}}{\delta \beta_{i}^{(n)}} $$
where η is the update rate of the optimization process. Introducing the activation function at layer (l) to Eq. (9), we obtain
$$ \frac{\delta \text{Cost}}{\delta \beta_{i}^{(n)}} = \frac{\delta \text{Cost}}{\delta a_{i}^{(n)}(l)} \times \frac{\delta a_{i}^{(n)}(l)}{\delta \beta_{i}^{(n)}} $$
Let us consider
$$ \lambda_{i}^{(n)}(l) = - \frac{\delta \text{Cost}}{\delta a_{i}^{(n)}(l)} $$
At the output layer (l=2), \(\lambda _{i}^{(n)}(l)\) is given by
$$ \lambda_{i}^{(n)}(2) = - \frac{\delta \text{Cost}}{\delta a_{i}^{(n)}(2)} $$
where \(a_{i}^{(n)}(2)\) is the activation value calculated at the output layer after the feed-forward process previously described. The \(\lambda _{i}^{(n)}(0)\) are then derived from the \(\lambda _{i}^{(n)}(1)\), which are themselves derived from the \(\lambda _{i}^{(n)}(2)\), all in the chain-rule manner described by
$$ \lambda_{i}^{(n)}(l) = \sum\limits_{j=1}^{N_{l+1}} \lambda^{(n)}_{j} (l+1) a'_{j}(l+1) w_{ij}(l+1) $$
Accordingly, we have
$$ \lambda_{i}^{(n)}(0) = - \frac{\delta \text{Cost}}{\delta a_{i}^{(n)}(0)} = - \frac{\delta \text{Cost}}{\delta \beta_{i}^{(n)}} $$
since \(a_{i}^{(n)}(0)\) (the ith input of the ANN) is equal to \(\beta _{i}^{(n)}\) (the current value of the ith parameter). Equation (8) becomes
$$ \beta_{i}^{(n+1)} = \beta_{i}^{(n)} + \eta \lambda_{i}^{(n)}(0) $$
Our proposal relies on the expression of Eq. (15) to calculate the new adapted parameters during the optimization process.
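As a rough illustration of Eqs. (12)-(15), the sketch below backpropagates the cost gradient through a frozen single-hidden-layer MLP down to its inputs and applies the update of Eq. (15). The weight matrices, the use of sigmoid activations at both layers, and all variable names are assumptions made for the example, not details taken from the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adapt_mac_parameters(beta, W1, b1, W2, b2, dcost_doutput, eta):
    """One application of Eqs. (12)-(15): backpropagate the cost gradient
    through the trained (frozen) two-layer MLP and update its inputs.

    beta           : current MAC parameters, i.e., the 2K ANN inputs
    W1, b1, W2, b2 : trained hidden/output weights and biases (not updated)
    dcost_doutput  : gradient of the cost w.r.t. each of the K outputs
    eta            : update rate of the optimization process
    """
    # Forward pass to obtain the activations of every layer.
    a1 = sigmoid(W1 @ beta + b1)                  # hidden layer, l = 1
    a2 = sigmoid(W2 @ a1 + b2)                    # output layer, l = 2

    # Eq. (12): lambda at the output layer.
    lam2 = -np.asarray(dcost_doutput)
    # Eq. (13): backpropagate (the sigmoid derivative is a * (1 - a)).
    lam1 = W2.T @ (lam2 * a2 * (1.0 - a2))        # l = 1
    lam0 = W1.T @ (lam1 * a1 * (1.0 - a1))        # l = 0, the inputs

    # Eq. (15): increment or decrement every MAC parameter.
    return beta + eta * lam0
```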
Implementation of the proposed solution
We used OPNET modeler 17.5 as the simulation tool. OPNET is a system level simulator that implements the PHY and MAC layers described by the IEEE 802.11n standard. The essential procedures of the proposed solution are described in this section.
A new OPNET node model is created to simulate the controller entity. The process model is represented by its finite state machine shown in Fig. 6. The ANN is created in the initialization phase INIT, then the process enters the IDLE state and remains there until the next scheduled collection time. The collection event releases the process, which enters the COLLECT state. At the end of the collection procedure, the process returns to the IDLE state and waits for the training event. Once this event is fired, the process goes to the TRAIN state, trains the ANN, and returns to the IDLE state.
Controller process model
Overview on the proposed solution
As shown in Fig. 7, each device has to send a registration request to the controller. Upon receipt of this request, the controller creates a registration context specific to the requesting device. The controller confirms or denies the registration with an appropriate registration response. A newly associated device can obtain the latest optimized parameters via this response.
General procedure
At a predefined moment, the controller sends a collection start command to all the registered devices. The collection procedure is described in details in the next section. After collecting all the datasets, the controller performs an online training for the previously created neural network. Then, the trained neural network is used to adapt the parameters of the devices. The optimization procedure is described later in this paper.
Finally, the controller sends the optimized parameter values to the corresponding devices. After receiving the parameter update request, each device applies the new parameters and continues its normal operation. According to the circumstances and the predefined policies, the controller is able to send a new collection command whenever needed.
The different procedures of the proposed algorithm
Before detailing each procedure separately, the overall algorithm is shown in Fig. 8. An optimization round consists of returning to the start step after running through the different steps depicted in the flowchart. An optimization round n begins with an initialization phase where the ANN is created and configured (Table 1). Then, the current version of the training dataset is fetched. As will be described in detail later on, the offline dataset is initially divided randomly into two parts: one forms the initial part of the training dataset and the other constitutes the testing dataset. The fetched dataset is the offline training part together with the dataset entries collected during past optimization rounds (<n). Then, a new collection procedure starts and the resulting dataset entry is appended to the fetched training dataset. At this point, we are ready to proceed to the training phase described in Section 1.5.3. After that, the ANN is tested using the testing dataset as outlined in Section 1.5.4. If the resulting testing MSE increases compared to that of the previous optimization round (n−1), the process quits the training phase and enters the optimization procedure (see Section 1.5.5). At the end of the optimization procedure, the process returns to the start point and a new optimization round (n+1) starts.
Overall look to the proposed algorithm
Table 1 ANN creation
Training procedure
In this section, we describe the training procedure of the ANN. The latter is based on two types of datasets: the first is collected offline (when the real network is not in operational mode) and the second is the result of an online collection (during normal system operation).
The offline dataset is divided into two separate datasets. The first part is used as the initial part of the training dataset, while the second part is used to test the ANN during the training process. The testing procedure plays an important role in determining the end of the training process and the beginning of the optimization process.
The online dataset is the complementary part of the training data set. After every optimization round, the collected dataset entry is appended to the latest training dataset. Accordingly, the ANN is trained with an incremental training dataset, increasing in size after each optimization round. This assures an adaptive behavior of the proposed solution.
The detailed training procedure is depicted in Fig. 9. To increase the robustness of the training phase, we integrate two test levels to verify whether the network is successfully trained or not. To implement our approach, we consider two different criteria. One of them is the well-known desired mean square error (MSE des ). The other criterion is the number of output errors exceeding a certain absolute value (the desired fail limit FL des ), where an output error is the difference between an output neuron value and the related value in the dataset. We define the desired fail number ratio FNr des as the ratio of output errors exceeding FL des to the total number of output values in the training dataset (number of ANN outputs K times the number of dataset entries DSe nb ). Accordingly, the first test level consists of verifying whether the current MSE value is less than the MSE des value. Once the desired MSE is satisfied, we move to the second test level by testing the number of fails. If the latter does not satisfy the predefined FNr des value, MSE des and the learning rate μ are decreased. A sketch of this two-level test is given after the figure below.
Training procedure details
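The two-level stopping test described above can be sketched as follows; the function and argument names are hypothetical, and `outputs`/`targets` stand for the ANN outputs and the corresponding training dataset values.

```python
import numpy as np

def training_converged(outputs, targets, mse_des, fl_des, fnr_des):
    """Two-level test of the training procedure.

    Level 1: the MSE over the training dataset must not exceed MSE_des.
    Level 2: the ratio of individual output errors whose absolute value
             exceeds FL_des must not exceed the desired ratio FNr_des.
    """
    errors = np.asarray(outputs) - np.asarray(targets)
    if 0.5 * np.sum(errors ** 2) > mse_des:
        return False                       # first test level failed
    fail_ratio = np.mean(np.abs(errors) > fl_des)
    return fail_ratio <= fnr_des           # second test level
```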
Testing procedure
The testing procedure consists of fetching the offline testing dataset entries and running the ANN for one epoch. Obviously, this run does not affect the trained ANN, meaning that the weights are not updated. The testing MSE value is then calculated to be used later to decide whether the ANN is sufficiently trained or not. Figure 10 depicts the described procedure.
Testing procedure details
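A possible transcription of this early-stopping check is sketched below; `ann_forward` is an assumed callable that runs the trained ANN on one input vector, and all names are ours.

```python
import numpy as np

def should_stop_training(ann_forward, test_inputs, test_targets, prev_test_mse):
    """Early-stopping check: run the trained ANN over the offline testing
    dataset for one epoch (weights are NOT updated) and compare the testing
    MSE with the value obtained at the previous optimization round."""
    outputs = np.array([ann_forward(x) for x in test_inputs])
    test_mse = 0.5 * np.sum((outputs - np.asarray(test_targets)) ** 2)
    return test_mse > prev_test_mse, test_mse
```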
The optimization procedure
The optimization procedure described in this section integrates the analytical algorithm detailed earlier in Section 1.4. The workflow of the implemented optimization procedure is shown in Fig. 11. Firstly, the gradients of the cost function are calculated at the last layer of the ANN as described in Eq. (12). Then, these values are backpropagated through the ANN as described by Eq. (13). Consequently, the Δ β values that will be used to adapt the MAC parameters are obtained as described by Eq. (14). In order to get the new optimized MAC parameters, each Δ β value is added to its related old MAC parameter value as shown by Eq. (8). The update rate η determines how aggressive the optimization process is in updating the MAC parameters. Unless otherwise stated, the update rate η is set to its default value indicated in Table 2.
Optimization procedure details
Before sending the newly updated parameters \(\beta _{i}^{(n+1)}\) to their corresponding nodes, their performance is verified by simulating the resulting cost using the trained ANN. This step prevents an unnecessary parameter update that may degrade the current performance of the operational network. If the simulated cost is better than the current cost (the cost decreases), an update message is sent back to every registered node asking it to configure its transmission power and carrier sensing using the new optimized values. Otherwise, the nodes are not updated and they continue to use the old parameters \(\beta _{i}^{(n)}\) until the next optimization round.
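The guarded update at the end of the optimization round could be sketched as follows; `send_update` stands for the controller's messaging function towards the registered nodes, and all names are illustrative assumptions.

```python
def guarded_update(beta_old, beta_new, simulated_cost, current_cost, send_update):
    """Apply the new MAC parameters only if the trained ANN predicts a lower cost.

    simulated_cost: cost obtained by feeding beta_new through the trained ANN
    current_cost  : cost measured with the parameters currently in use
    send_update   : callback that pushes the parameters to the registered nodes
    """
    if simulated_cost < current_cost:
        send_update(beta_new)    # nodes reconfigure Tx power and PCS threshold
        return beta_new
    return beta_old              # keep the old parameters until the next round
```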
In this section, the performance of the proposed learning-based joint adaptation of PCSA and TPC is evaluated through extensive system-level simulations. For these simulations, we use the modified WLAN node model of OPNET 17.5 that implements the neural network solution as described earlier in this paper. The main parameters of the simulation system are shown in Table 2. The mentioned values are the initial values at the beginning of a simulation run. In order to assess the maximum improvement that the proposed solution can achieve, the target throughput X T is set to the value of the traffic load. The performance of an ANN depends upon its generalization capability. To avoid overtraining of the network, we stop the training procedure at the minimum of the validation error. The effect of some key parameters on the performance of the proposed solution is discussed and highlighted in this section. Firstly, we evaluate the performance of the proposed solution in mitigating hidden and exposed node problems in two simple scenarios. Then, we consider a more complex scenario that reflects a real-world high-density deployment and we evaluate our proposal in such challenging circumstances.
Hidden node scenario
A hidden node problem arises when a node is not able to sense the signal transmitted by another neighboring node (the hidden node) operating on the same channel and hence assumes that the medium is free and transmits. The simultaneously transmitted signals interfere at the receiving node, causing a failure in the reception process. As a solution to this problem, an exchange of request to send (RTS) and clear to send (CTS) frames is described in the IEEE 802.11 standard. However, as widely highlighted in the literature [31], the RTS/CTS mechanism introduces an important overhead and reduces the capacity of the network in terms of throughput since each node has to transmit the RTS and wait for the CTS response before any transmission. Furthermore, in specific scenarios, this mechanism fails to eliminate hidden nodes [32]. In this study, we evaluate the performance of our solution in solving the hidden node problem without using RTS/CTS.
The topology used for this scenario consists of four nodes (two couples: couple A includes node 0 and node 1, and couple B includes node 2 and node 3) placed as shown in Fig. 12. All these nodes operate on the same frequency channel. Each node generates saturated constant bit rate (CBR) traffic towards the other node of the same couple. In this scenario, in order to reproduce the hidden node problem, the distances between the different nodes are configured in such a way that if two nodes belonging to different couples transmit simultaneously, neither receiving node is able to receive the signal of interest successfully. This means that couple A and couple B share the total capacity of the network. Based on a simple simulation of a single transmitter-receiver couple, without any source of co-channel interference, the maximum capacity of a network using the default configurations is around 49 Mbps.
Hidden node scenario, illustration of the protection range at optimization round 0 (initial situation)
Furthermore, by properly configuring the carrier sensing parameters, each node is able to sense the transmissions of all the other nodes except node 3, which is not able to sense the transmissions of the nodes of couple A. Hence, node 3 is a hidden node and its transmissions degrade the performance of the network. In Fig. 12, we illustrate the initial protection range around each node. At the end of the simulation, the final protection ranges are depicted in Fig. 13. All the collected results related to this scenario are plotted in Fig. 14 in terms of the optimization round number. For this evaluation, we consider four metrics: the aggregate throughput (or global throughput), the average throughput (per node), the cost function, and the Jain's fairness index. Each metric is evaluated for two different optimization update rates η: 0.01 and 0.001. Since all the nodes of this scenario are in the same contention domain and the mutual interference between the two couples is destructive in case of simultaneous communications, the maximum achievable throughput is bounded by the maximum capacity of a single transmitter-receiver couple (i.e., 49 Mbps). However, the presence of the hidden node (i.e., node 3 in Fig. 12) degrades the performance of the system. As depicted in Fig. 14 b, at optimization round 0 (the initial situation before any optimization), the achieved aggregate throughput does not reach its optimal level. At the final optimization round, the aggregate throughput is improved by more than 20 % compared to the initial situation. Thanks to the learning-based mechanism, the hidden node problem is completely resolved as illustrated by Fig. 13. Consequently, the total capacity of the system is fairly shared between the four nodes, as shown by the Jain's fairness index in Fig. 14 c.
Hidden node scenario, illustration of the protection range at optimization round 5
The performance of the proposed optimization in the hidden node scenario. a The cost in terms of optimization round. b The aggregate throughput in terms of optimization round. c The Jain's fairness index in terms of optimization round. d The average throughput in terms of optimization round
As defined in Eq. (15), η determines the aggressiveness of the optimization round update. Figure 14 a shows that with a higher η, the cost is minimized within fewer optimization rounds. The same logic applies to the Jain's fairness index, which reaches its maximum value after the first two optimization rounds for η=0.01. It is worth mentioning that the cost function is not minimized to zero since the individual average throughput cannot reach X T (i.e., the target throughput). In fact, the maximum capacity of the network is attained before the target throughput is satisfied.
Exposed node scenario
In this scenario, we examine the ability of the proposed solution to mitigate the exposed node problem. The scenario topology shown in Fig. 15 consists of the same two couples of nodes used in the previous section but differently configured to reproduce the exposed node problem. Here, the SINR values at a receiver node, in the presence of a simultaneous transmission from the other couple, always permit the receiver to successfully decode the signal of interest. However, the transmission power and carrier sensing are configured in such a way as to prohibit node 3 from transmitting when one of the nodes of couple A is transmitting. Node 3, which belongs to couple B, is thus exposed to the transmissions of the nodes of couple A, as illustrated in Fig. 15.
Exposed node scenario, illustration of the protection range at optimization round 0 (initial situation)
As in the previous scenario, we run the simulation for different η values and plot the resulting metrics over 5 optimization rounds in Fig. 17. In the initial situation (i.e., optimization round 0), the Jain's fairness index in Fig. 17 c clearly shows the impact of the exposed node problem. Node 3 is not able to gain access to the medium because it is exposed to the transmissions of the other couple. In this scenario, thanks to the configuration of the network topology, the maximum attainable capacity of the network is the aggregation of two transmitter-receiver couples (about 98 Mbps). This is due to the fact that the separation between the interfering couples is sufficient for successful simultaneous transmissions. However, as clearly depicted in Fig. 17 b, the aggregate throughput at optimization round 0 is far from this optimal value because node 3 is able neither to initiate transmissions nor to respond to the transmissions received from node 2.
Our proposed scheme is able to relieve the exposed node situation by decreasing the protection range around the exposed node (node 3) as illustrated in Fig. 16. This led, in this particular scenario, to a twofold increase in the aggregate throughput as shown in Fig. 17 b at optimization round 5. Since the target throughput X T can be easily attained by the different nodes before the saturation point of the system, the cost function plotted in Fig. 17 a is minimized to zero at the last optimization round for all the η values.
Exposed node scenario, illustration of the protection range at optimization round 5
The performance of the proposed optimization in the exposed node scenario. a The cost in terms of optimization round. b The aggregate throughput in terms of optimization round. c The Jain's fairness index in terms of optimization round. d The average throughput in terms of optimization round
High-density cellular deployment scenario
In this scenario, we consider a challenging super dense deployment. The definition of this scenario is based on the simulation scenarios defined by the IEEE 802.11ax TG [12]. An important real-world use case considered at the standardization TG is deploying Wi-Fi in a stadium which is characterized by very high numbers of APs and STAs [33]. In such deployments, the distance between two co-channel APs is below 25 m. The cellular scenario considered for our evaluation is illustrated in Fig. 18. Each BSS is formed by an AP and eight associated STAs. With a frequency reuse equal to 3 and a cell radius of 7 m, the distance between two co-channel APs is about 21 m. All the BSSs shown in the simulated scenario in Fig. 18 operate on the same frequency channel.
The obtained results are presented in Fig. 19. The first important observation when comparing with the results of the previous scenarios is that the system needs more optimization rounds to converge. This is normal since the scenario is more complex because of the much higher number of devices, and hence the ANN has a larger number of neurons, with 126 inputs and 63 outputs. Another observation is related to the Jain's fairness index curve plotted in Fig. 19 c. Contrary to the previous scenarios, this index does not reach its maximum value in the current scenario, meaning that not all the devices achieve the same throughput.
The performance of the proposed optimization in the cellular scenario. a The cost in terms of optimization round. b The aggregate throughput in terms of optimization round. c The Jain's fairness index in terms of optimization round. d The average throughput in terms of optimization round
In fact, this is due to the difference in throughput between uplink and downlink flows. The AP, which is transmitting to eight STAs, has almost the same opportunity to access the medium as any ordinary STA. Since the network is saturated, the share of airtime used by the AP to transmit data to one STA is much lower than that used by a STA to send data to the AP. However, after the convergence of the adaptation, the fairness index is significantly enhanced (from ≈0.5 at optimization round 0 to ≈0.7 at the final round). This enhancement reflects the ability of the proposed adaptation to solve the exposed node situations and to increase the spatial reuse between all the BSSs. This enhancement in spatial reuse is clearly seen in Fig. 19 b, where the gain in aggregate throughput exceeds 45 %.
A key perspective considered in the ongoing development of the next WLAN generation is increasing the spatial reuse in high-density deployments by adapting the MAC layer protocols. While the control of the transmission power (i.e., TPC) has always been the chosen technique when targeting spatial reuse improvements (traditionally in cellular technologies), many researchers have investigated the weaknesses of TPC, especially in deployments where the compliance of all the wireless devices cannot be guaranteed. Adapting the physical carrier sensing is proposed in the IEEE 802.11ax task group, where the preparations for the next WLAN standard are taking place. While there are many more incentives behind preferring this adaptation over TPC, many contributions highlight some fairness issues, especially when legacy devices are present in the network.
To overcome the previous problem, we exploit in this work a new solution for jointly optimizing the transmission power and the physical carrier sensing. The main motivation of this joint solution is that the impact of one of these two key parameters on the performance of legacy devices is opposite to that of the other. While TPC mechanisms favor the legacies, the adaptation of the carrier sensing mechanism disfavors these devices. In this paper, we proposed a new learning-based mechanism using artificial neural networks that is able to optimally adapt the two mechanisms (TPC and PCSA) in order to increase spatial reuse and preserve fairness. This approach takes advantage of the capability of artificial neural networks to approximate complex functions in order to model the throughput performance in terms of MAC layer parameters. This allows an intelligent adaptation of these parameters that enhances the spatial reuse in dense deployments. We showed through extensive simulations that our proposal is capable of resolving hidden and exposed node problems and hence improving the aggregate throughput in high-density deployments while enhancing the fairness among all the nodes.
Furthermore, this solution could be used to optimize other important parameters in the future IEEE 802.11ax WLANs such as the length of the transmit opportunity (TxOP). Future centralized deployments could benefit directly from this new approach to achieve better QoE. This would allow the integration of high efficiency WLANs in mobile cellular networks for traffic offloading.
IEEE Std 802.11-2012 (Revision of IEEE Std 802.11-2007), IEEE Standard for Information technology—Telecommunications and information exchange between systems local and metropolitan area networks— specific requirements. Part 11: wireless LAN medium access control (MAC) and physical layer (PHY) specifications (IEEE, 2012).
A Baid, D Raychaudhuri, Understanding channel selection dynamics in dense Wi-Fi networks. IEEE Commun. Mag. 53:, 110–117 (2015).
iPass Wi-Fi service provider, The global public Wi-Fi network grows to 50 million worldwide Wi-Fi hotspots. http://www.ipass.com/press-releases/the-global-public-wi-fi-network-grows-to-50-million-worldwide-wi-fi-hotspots.
IEEE 802.11 High efficiency wireless local area networks (HEW) study group. http://www.ieee802.org/11/Reports/hew_update.htm.
IEEE 802.11ax task group for high efficiency WLAN (HEW). http://www.ieee802.org/11/Reports/tgax_update.htm.
D-J Deng, C-H Ke, H-H Chen, Y-M Huang, Contention window optimization for IEEE 802.11 DCF access control. IEEE Trans. Wirel. Commun. 7:, 5129–5135 (2008).
B Li, Q Qu, Z Yan, M Yang, in Proceedings of the IEEE Wireless Communications and Networking Conference Workshops. Survey on OFDMA based MAC protocols for the next generation WLAN, WCNC '15 (IEEE, 2015), pp. 131–135.
A Ben Makhlouf, M Hamdi, Dynamic multiuser sub-channels allocation and real-time aggregation model for IEEE 802.11 WLANs. IEEE Trans. Wirel. Commun.13:, 6015–6026 (2014).
I Jamil, L Cariou, in IEEE 802.11ax: 802.11-14/0523r0. MAC simulation results for dynamic sensitivity control (DSC-CCA adaptation) and transmit power control (TPC) (IEEE, 2014).
I Jamil, L Cariou, in IEEE 802.11ax: 802.11-14/1207r1. OBSS reuse mechanism which preserves fairness (IEEE, 2014).
I Jamil, L Cariou, J-F Helard, in Proceedings of the IEEE International Conference on Communication Workshop, ICC '15. Preserving fairness in super dense WLANs, (2015), pp. 2276–2281.
S Merlin, et al, in IEEE 802.11ax: 802.11-14/0980r14. TGax simulation scenarios (IEEE, 2015).
SS Haykin, in Neural Networks and Learning Machines. Number v. 10 in neural networks and learning machines (Prentice Hall, 2009).
RP Lippmann, Pattern classification using neural networks. IEEE Commun. Mag. 27:, 47–50 (1989).
Y-K Park, G Lee, Applications of neural networks in high-speed communication networks. IEEE Commun. Mag. 33:, 68–74 (1995).
C Wang, J Hsu, K Liang, T Tai, in Proceedings of the 3rd IEEE International Conference on Computer Science and Information Technology, volume 4 of ICCSIT '10. Application of neural networks on rate adaptation in IEEE 802.11 WLAN with multiples nodes (IEEE, 2010), pp. 425–430.
C-L Chen, in Proceedings of the 16th International Conference on Computer Communications and Networks, ICCCN '07. IEEE 802.11e EDCA QoS provisioning with dynamic fuzzy control and cross-layer interface (IEEE, 2007), pp. 766–771.
P Lin, T Lin, Machine-learning-based adaptive approach for frame-size optimization in wireless LAN environments. IEEE Trans. Veh. Technol. 58:, 5060–5073 (2009).
H Luo, NK Shankaranarayanan, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 5 of ICASSP '04, 5. A distributed dynamic channel allocation technique for throughput improvement in a dense WLAN environment, (2004), pp. V–345–8.
P Gogoi, KK Sarma, in Proceedings of the International Conference on Communications, Devices and Intelligent Systems, CODIS '12. Hybrid channel estimation scheme for IEEE 802.11n-based STBC MIMO system (IEEE, 2012), pp. 49–52.
H Zhang, X Shi, in Proceedings of the 10th World Congress on Intelligent Control and Automation, WCICA '12. A new indoor location technology using back propagation neural network to fit the RSSI-d curve (IEEE, 2012), pp. 80–83.
IEEE Std 802.11k-2008 (Amendment to IEEE Std 802.11-2007), IEEE Standard for Information technology—Local and metropolitan area networks—specific requirements—part 11: wireless LAN medium access control (MAC) and physical layer (PHY) specifications amendment 1: radio resource measurement of wireless LANs, 1–244 (2008). IEEE.
V Shah, S Krishnamurthy, in Proceedings of the 25th IEEE International Conference on Distributed Computing Systems, ICDCS '05. Handling asymmetry in power heterogeneous ad hoc networks: a cross layer approach (IEEE Computer Society, 2005), pp. 749–759.
A Pires, J Rezende, C Cordeiro, in Challenges in Ad Hoc Networking. Protecting transmissions when using power control on 802.11 ad hoc networks (Springer US, Boston, 2006), pp. 41–50. IFIP International Federation for Information Processing.
B Radunović, R Chandra, D Gunawardena, in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies, CoNEXT '12. Weeble: enabling low-power nodes to coexist with high-power nodes in white space networks (ACM, New York, 2012), pp. 205–216.
M van der Schaar, N Sai Shankar, Cross-layer wireless multimedia transmission: challenges, principles, and new paradigms. IEEE Wirel. Commun. 12:, 50–58 (2005).
K Hornik, M Stinchcombe, H White, Multilayer feedforward networks are universal approximators. Neural Netw. 2:, 359–366 (1989).
IEEE Std 802.11e-2005 (Amendment to IEEE Std 802.11, 1999 Edition (Reaff 2003)), IEEE Standard for Information technology—Local and metropolitan area networks—specific requirements—part 11: wireless LAN medium access control (MAC) and physical layer (PHY) specifications—amendment 8: medium access control (MAC) quality of service enhancements (IEEE, 2005).
C Wang, P-C Lin, T Lin, A cross-layer adaptation scheme for improving IEEE 802.11e QoS by learning. IEEE Trans. Neural Netw. 17:, 1661–1665 (2006).
R Jain, D-M Chiu, WR Hawe, A quantitative measure of fairness and discrimination for resource allocation in shared computer system, (1984). Technical Report, Eastern Research Laboratory, Digital Equipment Corporation Hudson, MA, DEC-TR-301.
JL Sobrinho, R de Haan, JM Brazio, in Proceedings of the IEEE Wireless Communications and Networking Conference, volume 1 of WCNC '05. Why RTS-CTS is not your ideal wireless LAN multiple access protocol (IEEE, 2005), pp. 81–87.
K Xu, M Gerla, S Bae, in Proceedings of the IEEE Global Telecommunications Conference, volume 1 of GLOBECOM '02. How effective is the IEEE 802.11 RTS/CTS handshake in ad hoc networks (IEEE, 2002), pp. 72–76.
L Cariou, in IEEE 802.11 HEW: 11-13/0657r3. HEW SG usage models and requirements—liaison with WFA (IEEE, 2013).
Orange, 4 Rue de Clos Courtel, Cesson-Sevigne, 35510, France
Imad Jamil
Intel, 2111 NE 21st Avenue, Hillsboro, OR, USA
Laurent Cariou
Institute of Electronics and Telecommunications of Rennes (IETR)—Institut National des Sciences Appliquées (INSA) de Rennes, 20 Avenue des Buttes de Coesmes, Rennes, 35708, France
Jean-François Hélard
Correspondence to Imad Jamil.
Jamil, I., Cariou, L. & Hélard, J. Novel learning-based spatial reuse optimization in dense WLAN deployments. J Wireless Com Network 2016, 184 (2016) doi:10.1186/s13638-016-0632-2
IEEE 802.11
High efficiency WLAN (HEW)
Spatial reuse | CommonCrawl |
\begin{document}
\title{Asymptotic behaviors of bivariate Gaussian powered extremes } \author{ {Wei Zhou\quad and\quad Zuoxiang Peng\footnote{Corresponding author. Email: [email protected]}}\\ {\small School of Mathematics and Statistics, Southwest University, Chongqing, 400715, China}} \date{}
\maketitle \begin{quote} {\bf Abstract}~~ In this paper, joint asymptotics of powered maxima for a triangular array of bivariate powered Gaussian random vectors are considered. Under the H\"usler-Reiss condition, limiting distributions of powered maxima are derived. Furthermore, the second-order expansions of the joint distributions of powered maxima are established under the refined H\"usler-Reiss condition.
{\bf Keywords}~~ H\"usler-Reiss max-stable distribution $\cdot$ bivariate powered Gaussian maximum $\cdot$ second-order expansion
{\bf AMS 2000 subject classification}~~Primary 62E20, 60G70; Secondary 60F15, 60F05. \end{quote}
\section{Introduction} \label{sec1}
For independent and identically distributed bivariate Gaussian random vectors with constant correlation coefficient, Sibuya (1960) showed that the componentwise maxima are asymptotically independent, and Embrechts et al. (2003) proved asymptotic independence in the upper tail. To overcome this shortcoming in applications, H\"usler and Reiss (1989) considered the asymptotic behavior of extremes of Gaussian triangular arrays with varying correlation coefficients. Precisely, let $\{(X_{ni},Y_{ni}), 1\leq i\leq n, n\geq 1\}$ be a triangular array of independent bivariate Gaussian random vectors with $\operatorname*{E} X_{ni}=\operatorname*{E} Y_{ni}=0$, $\operatorname*{Var} X_{ni}=\operatorname*{Var} Y_{ni}=1$ for $1\le i\le n$, $n\ge 1$, and $\operatorname*{Cov}(X_{ni},Y_{ni})=\rho_n$. Let $F_{\rho_{n}}(x,y)$ denote the joint distribution of the vector $(X_{ni},Y_{ni})$ for $i\le n$. The vector of partial maxima $\mathbf{M_n}$ is defined by \begin{eqnarray*} \mathbf{M_n}=(M_{n1},M_{n2})=(\max_{1\leq i\leq n}X_{ni},\max_{1\leq i\leq n}Y_{ni}). \end{eqnarray*}
H\"usler and Reiss (1989) and Kabluchko (2009) showed that
\begin{eqnarray} \label{eq1.1} \lim_{n\to\infty}\operatorname*{\mathbb{P}}\left( M_{n1}\leq b_n+\frac{x}{b_n},M_{n2}\leq b_n+\frac{y}{b_n} \right)= H_\lambda(x,y) \end{eqnarray} holds if and only if the following H\"usler-Reiss condition \begin{eqnarray} \label{eq1.2} \lim_{n\to \infty} b_n^2(1-\rho_n) = 2\lambda^2 \in [0,\infty] \end{eqnarray} holds, where the normalizing constant $b_n$ satisfies \begin{eqnarray} \label{eq1.3} 1-\Phi(b_n)=n^{-1} \end{eqnarray} and the max-stable H\"usler-Reiss distribution is given by \begin{eqnarray} \label{eq1.4} H_{\lambda}(x,y)=\exp\left(-\Phi\left(\lambda+\frac{x-y}{2\lambda}\right)e^{-y}- \Phi\left(\lambda+\frac{y-x}{2\lambda}\right)e^{-x}\right), \quad x,y\in \mathbb{R}, \end{eqnarray} where $\Phi(\cdot)$ and $\varphi(\cdot)$ denote respectively the distribution function and density function of a standard Gaussian random variable. Note that $H_0(x,y)=\Lambda(\min(x,y))$ and $H_{\infty}(x,y)=\Lambda(x)\Lambda(y)$ with $\Lambda(x)=\exp(-\exp(-x))$, $x\in\operatorname*{\mathbb{R}}$.
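For later reference, we note a standard consequence of \eqref{eq1.3} (obtained from Mills' ratio $1-\Phi(b)=\varphi(b)b^{-1}(1+O(b^{-2}))$): the normalizing constant $b_n$ admits the expansion \begin{eqnarray*} b_n^2 = 2\log n - \log\log n - \log (4\pi) + o(1), \end{eqnarray*} so that $b_n^{2}=2\log n\,(1+o(1))$. In particular, the normalizations by $\log n$ appearing in the results below and the normalizations by $b_n^{2}/2$ appearing in their proofs are asymptotically interchangeable.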
Recently, considerable contributions have been made to the H\"usler-Reiss distribution and its extensions. For instance, Hashorva (2005, 2006) showed that the limit distribution of maxima also holds for triangular arrays of general bivariate elliptical distributions if the distribution of the random radius is in the Gumbel or Weibull max-domain of attraction, and Hashorva and Ling (2016) extended the results to bivariate skew elliptical triangular arrays. For more work on the asymptotics of bivariate triangular arrays, see Hashorva (2008, 2013) and Hashorva et al. (2012).
Higher-order expansions of distributions of extremes of H\"usler-Reiss bivariate Gaussian triangular arrays were first considered by Hashorva et al. (2016), provided that $\rho_{n}$ satisfies the following refined H\"usler-Reiss condition
\begin{eqnarray} \label{eq1.5} \lim_{n\to\infty}b_n^2 (\lambda_n-\lambda) = \alpha \in \operatorname*{\mathbb{R}}, \end{eqnarray} where $\lambda_n=(\frac{1}{2}b_n^2(1-\rho_n))^{1/2}$ and $\lambda \in(0,\infty)$, with $b_n$ given by \eqref{eq1.3}. The uniform convergence rate was considered by Liao and Peng (2014). For the copula version of the limit in the H\"usler-Reiss model, Frick and Reiss (2013) considered the penultimate and ultimate convergence rates of the distribution of $\left( n(\max_{1\leq i\leq n}\Phi(X_{ni})-1), n(\max_{1\leq i\leq n}\Phi(Y_{ni})-1) \right)$, and Liao et al. (2016) extended the results to the setting of $n$ independent and non-identically distributed observations, where the $i$th observation follows a normal copula with correlation coefficient being either a parametric or a nonparametric function of $i/n$.
The objective of this paper is to study the asymptotics of powered extremes of H\"usler-Reiss bivariate Gaussian triangular arrays. Hall (1980) showed that the convergence rates of the distributions of powered extremes of an independent and identically distributed univariate Gaussian sequence depend on the power index and the normalizing constants. Precisely, let $\abs{M_n}^t$ denote the powered maximum with power index $t>0$; then
\begin{eqnarray} \label{eq1.6} \lim_{n \to \infty} b_n^{2} \Big[\operatorname*{\mathbb{P}}\left(\abs{M_n}^t\leq c_n x+{d_n}\right)-\Lambda(x)\Big] =\Lambda(x)\pzx{\mu}(x) \end{eqnarray} with normalizing constants $c_{n}$ and $d_{n}$ given by \begin{eqnarray} \label{eq1.7} c_n=t b_n^{t-2}, \quad d_n=b_n^t, \quad t>0. \end{eqnarray} Furthermore, for $t=2$ with normalizing constants $c_{n}^{*}$ and $d_{n}^{*}$ given by \begin{eqnarray} \label{add2} c_{n}^{*}=2 - 2b_n^{-2}, \quad d_{n}^{*}=b_{n}^{2} - 2b_n^{-2}, \end{eqnarray} we have
\begin{eqnarray} \label{add1} \lim_{n \to \infty} b_n^{4} \Big[\operatorname*{\mathbb{P}}\left(\abs{M_n}^2\leq c_{n}^{*} x+d_{n}^{*}\right)-\Lambda(x)\Big] =\Lambda(x) \pzx{\nu}(x), \end{eqnarray} where $b_n$ is defined in \eqref{eq1.3}, and $\mu(x)$ and $\nu(x)$ are respectively given by \begin{eqnarray} \label{eq1.8} \mu(x)=\left(1+x+\frac{2-t}{2}x^2\right) e^{-x}, \quad \nu(x)=-\left( \frac{7}{2}+3x+x^2\right)e^{-x}. \end{eqnarray}
Motivated by the findings of H\"usler and Reiss (1989), Hall (1980) and Hashorva et al. (2016), we consider the distributional asymptotics of powered extremes of H\"usler-Reiss bivariate Gaussian triangular arrays, and investigate whether the convergence rates can be improved for $t=2$, similarly to \eqref{add1} in the univariate case. Our results provide a negative answer except in the two extreme cases $\lambda=0$ and $\lambda=\infty$.
The rest of the paper is organized as follows. In Section \ref{sec2} we provide the main results and all proofs are deferred to Section \ref{sec4}. Some auxiliary results are given in Section \ref{sec3}.
\section{Main Results} \label{sec2}
In this section, the limiting distributions and the second-order expansions of the distributions of normalized bivariate powered extremes are provided when $\rho_n$ satisfies \eqref{eq1.2} and \eqref{eq1.5}, respectively. The first main result, stated as follows, gives the limit distribution of the bivariate normalized powered extremes.
\begin{theorem}\label{thm1} Let the norming constants $c_n$ and $d_n$ be given by \eqref{eq1.7}. Assume that \eqref{eq1.2} holds with $\lambda \in (0,\infty)$. Then for all $x,y \in \operatorname*{\mathbb{R}}$, we have \begin{eqnarray} \label{eq2.2} \lim_{n\to \infty}\operatorname*{\mathbb{P}}\left( \abs{M_{n1}}^t\leq c_n x+d_n, \abs{M_{n2}}^t\leq c_n y+d_n \right) =H_{\lambda}(x,y). \end{eqnarray} \end{theorem}
\begin{remark}\label{remark1} For $t=2$, with arguments similar to the proof of Theorem \ref{thm1} one can show that \eqref{eq2.2} also holds with $c_{n}$ and $d_{n}$ being replaced by $c_{n}^{*}$ and $d_{n}^{*}$ given by \eqref{add2}. \end{remark}
Next we investigate the convergence rate of \begin{equation}\label{add3} \Delta(F_{\rho_n,t}^{n},H_{\lambda};\pzx{c_{n}, d_{n}}; x,y)= \operatorname*{\mathbb{P}}\left( \abs{M_{n1}}^t\leq c_n x+d_n, \abs{M_{n2}}^t\leq c_n y+d_n \right) -H_{\lambda}(x,y)\to 0 \end{equation} as $n\to\infty$ under the refined second-order H\"usler-Reiss condition \eqref{eq1.5}. The results are stated as follows.
\begin{theorem}\label{thm2} If the second H\"usler-Reiss condition \eqref{eq1.5} holds with $\lambda_n=(\frac{1}{2}b_n^2(1-\rho_n))^{1/2}$ and $\lambda \in (0,\infty)$, then for all $x,y \in \operatorname*{\mathbb{R}}$, we have \begin{eqnarray} \label{eq2.3} \lim_{n\to \infty}(\log n)\Delta(F_{\rho_n,t}^{n},H_{\lambda};\pzx{c_{n}, d_{n}};x,y) =\frac{1}{2} \tau(\alpha,\lambda,x,y,t) H_{\lambda}(x,y) \end{eqnarray} with \begin{eqnarray*}
\tau(\alpha,\lambda,x,y,t)&=& \mu(x)\Phi\left(\lambda+\frac{y-x}{2\lambda}\right)+\mu(y)\Phi\left(\lambda+\frac{x-y}{2\lambda}\right)
\\
&& +\Big(2\alpha-(x+y+2)\lambda-\lambda^3\Big)e^{-x}\varphi\left(\lambda+\frac{y-x}{2\lambda}\right), \end{eqnarray*} where $\mu(x)$ is the one given by \eqref{eq1.8}. \end{theorem}
\begin{remark}\label{remark2} For $t=2$, with $c_n$ and $d_n$ replaced by $c_n^\ast$ and $d_n^\ast$ given in \eqref{add2}, one can show that \begin{eqnarray*} \lim_{n\to \infty}(\log n)\Delta(F_{\rho_n,2}^{n},H_{\lambda};\pzx{c_{n}^{*}, d_{n}^{*}};x,y) =\frac{1}{2} \chi(\alpha,\lambda,x,y) H_{\lambda}(x,y) \end{eqnarray*} with \begin{eqnarray*} \chi(\alpha,\lambda,x,y) = \Big(2\alpha-(x+y+2)\lambda-\lambda^3\Big)e^{-x}\varphi\left(\lambda+\frac{y-x}{2\lambda}\right). \end{eqnarray*} The result shows that the convergence rate cannot be improved for $t=2$ with the normalizing constants $c_{n}^{*}$ and $d_{n}^{*}$, in contrast to the result for the univariate Gaussian case obtained by Hall (1980). \end{remark}
In order to obtain the convergence rates of \eqref{add3} for the two extreme cases $\lambda=0$ and $\lambda=\infty$, some additional conditions are needed. The following results show that the rates of convergence differ considerably according to the choice of \pzx{normalizing} constants. With power index $t>0$ and \pzx{normalizing} constants $c_{n}$ and $d_{n}$ given by \eqref{eq1.7}, the results are stated as follows.
\begin{theorem}\label{thm3} Let $c_n$ and $d_n$ be given by \eqref{eq1.7}. With $x,y \in \operatorname*{\mathbb{R}}$ and $t>0$ we have the following results. \begin{itemize} \item[(a).] For the case of $\lambda=\infty$,
(i) if $\rho_n \in [-1,0]$, we have \begin{eqnarray}\label{eq2.4} \lim_{n\to \infty}(\log n)\Delta(F_{\rho_n,t}^{n},H_{\infty};\pzx{c_{n}, d_{n}};x,y) =\frac{1}{2}\Big(\mu(x)+\mu(y)\Big) H_{\infty}(x,y). \end{eqnarray}
(ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty} \frac{\log b_n}{b_n^2(1-\rho_n)}=0$, then \eqref{eq2.4} also holds.
\item[(b).] For the case of $\lambda=0$,
(i) if $\rho_n=1$, we have \begin{eqnarray}\label{eq2.5} \lim_{n\to \infty}(\log n)\Delta(F_{\rho_n,t}^{n},H_0;\pzx{c_{n}, d_{n}};x,y) =\frac{1}{2} \mu\Big(\min(x,y)\Big) H_0(x,y). \end{eqnarray} (ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty}b_n^{6}(1-\rho_n)=c_{1}\in [0,\infty)$, then \eqref{eq2.5} also holds. \end{itemize} \end{theorem}
Theorem \ref{thm3} shows that the convergence rates of \eqref{add3} are of order $1/\log n$ if we choose the normalizing constants $c_{n}$ and $d_{n}$ given by \eqref{eq1.7} for $t>0$. With the other pair of normalizing constants $c_{n}^{*}$ and $d_{n}^{*}$ given by \eqref{add2}, the following results show that the convergence rates of \eqref{add3} can be improved. \begin{theorem}\label{thm4} For $t=2$, let $c_{n}^{*}$ and $d_{n}^{*}$ be given by \eqref{add2}. With $x,y \in \operatorname*{\mathbb{R}}$ we have the following results. \begin{itemize} \item[(a).] For the case of $\lambda=\infty$,
(i) if $\rho_n \in [-1,0]$, we have \begin{eqnarray}\label{eq2.6} \lim_{n\to \infty}(\log n)^2\Delta(F_{\rho_n,2}^{n},H_{\infty};c_{n}^{*}, d_{n}^{*};x,y) =\frac{1}{4} \Big(\nu(x)+\nu(y)\Big) H_{\infty}(x,y). \end{eqnarray}
(ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty} \frac{\log b_n}{b_n^2(1-\rho_n)}=0$, then \eqref{eq2.6} also holds.
\item[(b).] For the case of $\lambda=0$,
(i) if $\rho_n=1$, we have
\begin{eqnarray}\label{eq2.7} \lim_{n\to \infty}(\log n)^2\Delta(F_{\rho_n,2}^{n},H_0;c_{n}^{*}, d_{n}^{*};x,y) =\frac{1}{4}\nu\Big(\min(x,y)\Big) H_0(x,y). \end{eqnarray}
(ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty}b_n^{14}(1-\rho_n)=c_2\in [0,\infty)$, then \eqref{eq2.7} also holds. \end{itemize}
\end{theorem}
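The limit \eqref{eq2.2} in Theorem \ref{thm1} can also be checked by simulation. The following Python sketch is purely illustrative (the function names \texttt{b\_n}, \texttt{H\_lambda} and \texttt{empirical\_joint\_cdf}, the choice $\rho_n=1-2\lambda^2/b_n^2$, and all sample sizes are ours): it simulates rows of the triangular array under the H\"usler-Reiss condition \eqref{eq1.2}, forms the normalized powered maxima with the constants in \eqref{eq1.7}, and compares the empirical joint distribution function with $H_\lambda(x,y)$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def b_n(n):
    # normalizing constant defined by 1 - Phi(b_n) = 1/n
    return norm.ppf(1.0 - 1.0 / n)

def H_lambda(x, y, lam):
    # Huesler-Reiss max-stable distribution H_lambda(x, y)
    return np.exp(-norm.cdf(lam + (x - y) / (2.0 * lam)) * np.exp(-y)
                  - norm.cdf(lam + (y - x) / (2.0 * lam)) * np.exp(-x))

def empirical_joint_cdf(n, lam, t, x, y, reps=2000, chunk=200, seed=0):
    rng = np.random.default_rng(seed)
    bn = b_n(n)
    rho = 1.0 - 2.0 * lam**2 / bn**2       # exact Huesler-Reiss condition
    cn, dn = t * bn**(t - 2.0), bn**t      # normalizing constants c_n, d_n
    hits, done = 0, 0
    while done < reps:
        m = min(chunk, reps - done)
        Z1 = rng.standard_normal((m, n))
        Z2 = rho * Z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal((m, n))
        M1, M2 = Z1.max(axis=1), Z2.max(axis=1)
        hits += np.count_nonzero((np.abs(M1)**t <= cn * x + dn)
                                 & (np.abs(M2)**t <= cn * y + dn))
        done += m
    return hits / reps

for n in (10**3, 10**4):
    print(n, empirical_joint_cdf(n, lam=1.0, t=2.0, x=0.5, y=0.0),
          H_lambda(0.5, 0.0, 1.0))
\end{verbatim}
Comparing the two printed values for increasing $n$ gives a rough empirical counterpart of \eqref{eq2.2}; by Theorem \ref{thm2}, the discrepancy is expected to decay only at the rate $1/\log n$.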
\section{Auxiliary Lemmas} \label{sec3} For notational simplicity, let \begin{eqnarray} \label{eq2.1} \omega_{n,t}(x)=(c_n x+d_n)^{1/t} \quad \mbox{for} \quad t>0,\quad \mbox{and}\quad \omega_{n,2}^{*}(x)=(c_{n}^{*} x+d_{n}^{*})^{1/2} \quad \mbox{for} \quad t=2, \end{eqnarray} where the normalizing constants $c_n$ and $d_n$, and $c_{n}^{*}$ and $d_{n}^{*}$ are those given by \eqref{eq1.7} and \eqref{add2}, respectively. Define \begin{eqnarray*} \pzx{\bar{\Phi}(z)=1-\Phi(z)},\quad\quad \zw{\bar{\Phi}_{n,t}(z)= n\bar{\Phi}(\omega_{n,t}(z))} \end{eqnarray*} and \begin{eqnarray} \label{eq3.1a} I_k:=\int_{y}^{\infty}\varphi\left(\lambda+\frac{x-z}{2\lambda}\right)e^{-z}z^k dz, \quad k=0,1,2. \end{eqnarray}
\begin{lemma}\label{lemma1} Under the conditions of Theorem \ref{thm1}, we have \begin{eqnarray} \label{eq3.1} \lim_{n\to \infty} \operatorname*{\mathbb{P}}\Big( M_{n1}\leq \omega_{n,t}(x),M_{n2}\leq \omega_{n,t}(y) \Big) =H_{\lambda}(x,y) \end{eqnarray} \end{lemma}
\noindent \textbf{Proof.}~~ With the choice of $c_n$ and $d_n$ in \eqref{eq1.7}, it follows from \eqref{eq2.1} that \begin{eqnarray*} \omega_{n,t}(z)=(c_n z+d_n)^{1/t}=b_n \left( 1+ \zw{ z b_n^{-2}+ \frac{1-t}{2}z^2 b_n^{-4}} +O(b_n^{-6}) \right) \end{eqnarray*} for fixed $z$, hence for fixed $x$ and $z$, \begin{eqnarray}\label{eq3.1b} \nonumber
&& \frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}} \\ \nonumber &=& b_n \sqrt{\frac{1-\rho_n}{1+\rho_n}}+\frac{x-z}{b_n\sqrt{1-\rho_n^2}} +\frac{z}{b_n}\sqrt{\frac{1-\rho_n}{1+\rho_n}} +\frac{(1-t)(x^2-z^2)}{2b_n^3 \sqrt{1-\rho_n^2}} +\frac{(1-t)z^2}{2b_n^3}\sqrt{\frac{1-\rho_n}{1+\rho_n}} \\ \nonumber && +\sqrt{\frac{1-\rho_n}{1+\rho_n}}O(b_n^{-5}) \\ &=& \left( \lambda_n+\frac{x-z}{2\lambda_n} \left( 1+\frac{(1-t)(x+z)}{2 b_n^2} \right) +\frac{\lambda_n z}{b_n^2} + \frac{(1-t)\lambda_n z^2}{2b_n^{4}} + \lambda_n O(b_n^{-6})\right)(1-\frac{\lambda_n^2}{b_n^2})^{-\frac{1}{2}} \end{eqnarray} which implies that \begin{eqnarray}\label{add4} \lim_{n \to \infty}\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}=\lambda+\frac{x-z}{2\lambda} \end{eqnarray} holds since $\lambda_n\to \lambda$ as $n \to \infty$.
With $a_{n}=1/b_{n}$ it follows from \eqref{add4} that \begin{eqnarray} \label{eq3.2} && n \operatorname*{\mathbb{P}} \left( X>\omega_{n,t}(x),Y>\omega_{n,t}(y) \right) \nonumber\\ &=& n \int_{\omega_{n,t}(y)}^{\infty}\bar{\Phi}\left(\frac{\omega_{n,t}(x)-\rho_n z}{\sqrt{1-\rho_n^2}}\right)d\Phi(z) \nonumber\\ &=& \frac{b_n}{\varphi(b_n)} (1-b_n^{-2}+O(b_n^{-4}))^{-1} \int_{y}^{\infty}\bar{\Phi}\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right)\varphi(b_n(1+\pzx{tz}a_n^2)^{1/t})d(b_n(1+\pzx{tz}a_n^2)^{1/t}) \nonumber\\ &=& (1+b_n^{-2}+O(b_n^{-4})) \int_{y}^{\infty}\bar{\Phi}\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) \exp\left(\frac{b_n^2}{2}\left( 1-(1+tza_n^2)^{2/t}\right)\right) (1+tza_n^2)^{1/t-1}dz \nonumber\\ &\to& \int_{y}^{\infty}\bar\Phi\left(\lambda+\frac{x-z}{2\lambda}\right)e^{-z}dz \nonumber\\ &=& e^{-y}+e^{-x}-\Phi\left(\lambda+\frac{x-y}{2\lambda}\right)e^{-y}- \Phi\left(\lambda+\frac{y-x}{2\lambda}\right)e^{-x} \end{eqnarray} as $n\to\infty$. Meanwhile, one can check that \begin{eqnarray}\label{add8} \lim_{n \to \infty}\zw{\bar{\Phi}_{n,t}(x)}=e^{-x}. \end{eqnarray} It follows from \eqref{eq3.2} and \eqref{add8} that \begin{eqnarray*} && \operatorname*{\mathbb{P}}\left( M_{n1}\leq \omega_{n,t}(x),M_{n2}\leq \omega_{n,t}(y) \right) \\ &=& \exp \Big[ -\zw{\bar{\Phi}_{n,t}(x)-\bar{\Phi}_{n,t}(y)}
+ n \operatorname*{\mathbb{P}} \left( X>\omega_{n,t}(x),Y>\omega_{n,t}(y) \right) +o(1) \Big] \\ &\to & H_{\lambda}(x,y) \end{eqnarray*} as $n\to\infty$. The desired result follows. \qed
The following result is useful for the proof of Lemma \ref{lemma2}. \begin{lemma}\label{app} With $a_{n}=1/b_{n}$, for large $n$ we have \begin{eqnarray}\label{add5} \nonumber && \int_{y}^{\infty} \Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right)
\exp\left(\frac{b_n^2}{2}\left( 1-(1+tza_n^2)^{2/t}\right)\right) (1+tza_n^2)^{1/t-1}\,dz\\ &=& \int_{y}^{\infty} \Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) \left( 1+\left((1-t)z-\frac{2-t}{2}z^2\right)\zw{b_n^{-2}} \right)e^{-z} dz+O(b_n^{-4}). \end{eqnarray} \end{lemma}
\noindent \textbf{Proof.}~~First note that for large $n$ and $\abs{x}\leq \frac{b_n^2}{4(4+t)}$, \begin{eqnarray} \nonumber \label{bounded} && \abs{ \exp\left(\frac{b_n^2}{2}\left( 1-(1+\pzx{tx} a_n^2)^{2/t}\right)\right) (1+\pzx{tx} a_n^2)^{1/t-1} -e^{-x}\left( 1+\frac{1}{b_n^2}\left((1-t)x-\frac{2-t}{2}x^2\right) \right)} \\ &\leq& b_n^{-4}s(x)\exp \left(-x+\frac{\abs{x}}{4} \right), \end{eqnarray} where $a_{n}=1/b_{n}$ and $s(x)\ge 0$ is a polynomial in $x$ independent of $n$; cf. Lemma 3.2 in Li and Peng (2016).
It follows from \eqref{bounded} that \begin{eqnarray} \nonumber \label{more1} && \int_{y}^{4 \log b_n} \abs{ \Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right)
\exp\left(\frac{b_n^2}{2}\left( 1-(1+\pzx{tz} a_n^2)^{2/t}\right)\right) (1+\pzx{tz} a_n^2)^{1/t-1} \right.
\\
\nonumber
&& \quad \left.- \Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) e^{-z}\left( 1+b_n^{-2}\left((1-t)z-\frac{2-t}{2}z^2\right) \right) } dz \\ \nonumber &\leq & \int_{y}^{4 \log b_n} \abs{\Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right)} b_n^{-4}s(z)\exp \left(-\frac{3z}{4} \right)dz \\ \nonumber &<& b_n^{-4} \int_{y}^{4 \log b_n}s(z)\exp \left(-\frac{3z}{4}\right)dz \\ &=& O(b_n^{-4}) \end{eqnarray} and \begin{eqnarray} \label{more5} \nonumber && \int_{ 4 \log b_n}^{\infty} e^{-\frac{z}{2}} \left( e^{-\frac{z}{2}}\left( 1+b_n^{-2}\left(\abs{1-t}z+\frac{\abs{2-t}}{2}z^2\right) \right) \right) dz \\ \nonumber &\leq & e^{-2\log b_n} \left ( 1+\zw{b_n^{-2}}\left(4\abs{1-t}\log b_n+8\abs{2-t}(\log b_n)^2\right) \right)\int_{ 4 \log b_n}^{\infty}e^{-\frac{z}{2}}dz \\ \nonumber &=& 2b_n^{-4} \left( 1+b_n^{-2}\left(4\abs{1-t}\log b_n+8\abs{2-t}(\log b_n)^2\right) \right) \\ &=& O(b_n^{-4}). \end{eqnarray} So, the remainder is to show \begin{eqnarray} \label{more2}
A_{n}=\int_{4 \log b_n}^{\infty}
\exp\left(\frac{b_n^2}{2}\left( 1-(1+\pzx{tz} a_n^2)^{2/t}\right)\right) (1+\pzx{tz} a_n^2)^{1/t-1}dz = O(b_n^{-4}) \end{eqnarray} for large $n$. We check \eqref{more2} in turn for \zw{ $0<t< 1$ and $t\geq1$}.
For \zw{ $0<t< 1$}, separate $A_{n}$ into the following two parts. \begin{eqnarray}\label{more21} \nonumber A_{n1}&=& \int_{4 \log b_n}^{2(\frac{1}{t}-1)b_n^2} \exp\left(\frac{b_n^2}{2}\left( 1-(1+\pzx{tz} a_n^2)^{2/t}\right)\right) (1+\pzx{tz} a_n^2)^{1/t-1} dz \\
\nonumber &<& \int_{4 \log b_n}^{2(\frac{1}{t}-1)b_n^2} e^{-z} (1+2(1-t))^{\frac{1}{t}-1}dz \\ &=& O(b_n^{-4}) \end{eqnarray} since $\exp\left(\frac{b_n^2}{2}\left( 1-(1+\pzx{tz} a_n^2)^{2/t}\right)\right)<e^{-z}$. For the second part, \begin{eqnarray}\label{more22} \nonumber A_{n2}&=& \int_{2({1}/{t}-1)b_n^2}^{\infty} \exp\left(\frac{b_n^2}{2}\left( 1-(1+\pzx{tz} a_n^2)^{2/t}\right)\right) (1+\pzx{tz} a_n^2)^{1/t-1} dz \\ \nonumber &<& \int_{2({1}/{t}-1)b_n^2}^{\infty} e^{-z}(t za_n^2)^{1/t-1} \left(1+\frac{1}{\pzx{tz} a_n^2 } \right)^{1/t-1} dz \\ \nonumber &<& (ta_n^2 )^{1/t-1} \left(1+\frac{1}{2(1-t)} \right)^{1/t-1} \int_{2({1}/{t}-1)b_n^2}^{\infty} e^{-z}z^{1/t-1} dz \\ \nonumber &=& 2 (3-2t)^{1/t-1} e^{-2(1/t-1)b_n^2} \\ &=& o(b_n^{-4}). \end{eqnarray} Hence, \eqref{more21} and \eqref{more22} show that \eqref{more2} holds for \pzx{$0<t<1$}.
Turning now to the case \pzx{$t\ge1$}, by using Mills' inequality we have \begin{eqnarray}\label{more23} A_{n}&=& \int_{ 4 \log b_n}^{\infty} \exp\left( \frac{b_n^2}{2} \right) \exp\left( -\frac{b_n^2 (1+\frac{tz}{b_n^2})^{2/t}}{2} \right) (1+\frac{tz}{b_n^2})^{1/t-1}dz \nonumber\\ &=& b_n \exp\left( \frac{b_n^2}{2} \right) \int_{b_n \left(1+\frac{4t\log b_n}{b_n^2}\right)^{1/t}}^{\infty} \exp\left(-\frac{s^2}{2}\right)ds \nonumber\\ &=& \sqrt{2\pi} b_n \exp\left( \frac{b_n^2}{2} \right)
\left(1-\Phi\left( b_n\left(1+\frac{4t\log b_n}{b_n^2} \right)^{1/t} \right) \right) \nonumber\\ &<& \frac{\exp\left( \frac{b_n^2}{2} \left( 1-\left(1+\frac{4t\log b_n}{b_n^2} \right)^{2/t}\right) \right)} {\left(1+\frac{4t\log b_n}{b_n^2} \right)^{1/t}} \nonumber\\ &<& \frac{\exp \left( -4\log b_n+\frac{8(t-2)(\log b_n)^2}{b_n^2} \right) } {\left(1+\frac{4t\log b_n}{b_n^2} \right)^{1/t}} \nonumber\\ &=& O(b_n^{-4}) \end{eqnarray} since $(1+s)^{\frac{2}{t}} \geq 1+\frac{2}{t}s+\frac{1}{t}\left(\frac{2}{t}-1\right)s^2$ for $s>0$.
Combining \eqref{more1}-\eqref{more2} completes the proof of \eqref{add5}. \qed
In order to show the second order asymptotic expansions of extreme value distributions, let
\begin{eqnarray}\label{eq3.3aaa}
\tilde{\Delta}(F_{\rho_n,t}^{n},H_\lambda;\pzx{c_{n}, d_{n}};x,y)=
\operatorname*{\mathbb{P}}\left( M_{n1}\leq \omega_{n,t}(x),M_{n2}\leq \omega_{n,t}(y) \right)- H_{\lambda}(x,y).
\end{eqnarray}
\begin{lemma}\label{lemma2} Assume that the conditions of Theorem \ref{thm2} hold. Then,
\begin{eqnarray} \label{eq3.3} \lim_{n\to\infty}(\log n)\tilde{\Delta}(F_{\rho_n,t}^{n},H_\lambda;\pzx{c_{n}, d_{n}};x,y) =\frac{1}{2}\tau(\alpha,\lambda,x,y,t)H_\lambda(x,y) \end{eqnarray} where $\tau(\alpha,\lambda,x,y,t)$ is the one given in Theorem \ref{thm2}. \end{lemma}
\noindent \textbf{Proof.}~~ By using \eqref{eq3.2} and \eqref{add5}, we have \begin{eqnarray*} &&n\operatorname*{\mathbb{P}}\left(X>\omega_{n,t}(x),Y>\omega_{n,t}(y)\right) \\ &=& \zw{\bar{\Phi}_{n,t}(y)}-\int_{y}^{\infty}\Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) e^{-z} \left( 1+\left((1-t)z-\frac{2-t}{2}z^2\right)b_n^{-2} \right)dz+O(b_n^{-4}). \end{eqnarray*}
for large $n$. It follows from \eqref{eq3.1b} and \eqref{add4} that
\begin{eqnarray}\label{eq3.4} && \nonumber b_n^2 \int_{y}^{\infty}\left(\lambda+\frac{x-z}{2\lambda}- \frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) \varphi\left(\lambda+\frac{x-z}{2\lambda}\right)e^{-z}dz \\ \nonumber &\to& \left(\alpha-\zw{\frac{1}{2}\lambda^3-\frac{1}{2}\alpha\lambda^{-2}x-\frac{1}{4}\lambda x } -\frac{1-t}{4\lambda}x^2\right)I_0 -\left(\frac{3}{4}\lambda-\frac{1}{2}\alpha\lambda^{-2}\right)I_1 +\frac{1-t}{4\lambda}I_2 \\ &=&
\kappa_1(\alpha,\lambda,x,y,t) \end{eqnarray} as $n\to\infty$, where $I_{k}$ is the one given by \eqref{eq3.1a} and \begin{eqnarray*} && \kappa_1(\alpha,\lambda,x,y,t) \\ &=& 2\Big(\zw{(2-t)\lambda^4-(2-t)\lambda^2 x+(1-t)\lambda^2}\Big)\bar\Phi\left(\lambda+\frac{y-x}{2\lambda}\right)e^{-x} \\ && +\Big(2\alpha-\zw{(5-2t)\lambda^3+(1-t)\lambda x +(1-t)\lambda y}\Big) \varphi\left(\lambda+\frac{y-x}{2\lambda}\right)e^{-x}. \end{eqnarray*}
Note that by Taylor's expansion with Lagrange remainder term, \begin{eqnarray}\label{add6} && \Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right)\nonumber \\ &=& \Phi\left(\lambda+\frac{x-z}{2\lambda}\right)+ \varphi\left(\lambda+\frac{x-z}{2\lambda}\right) \left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}- \lambda-\frac{x-z}{2\lambda}\right)\nonumber \\ && +\frac{1}{2}v_n \varphi(v_n) \left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}- \lambda-\frac{x-z}{2\lambda}\right)^2, \end{eqnarray} where $v_{n}$ is between $\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}$ and
$\lambda+\frac{x-z}{2\lambda}$. By arguments similar to \eqref{eq3.4}, one can check that \begin{eqnarray}\label{add7} \int_{y}^{\infty} \left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}- \lambda-\frac{x-z}{2\lambda}\right)^2 v_n \varphi(v_n)e^{-z}dz =O(b_n^{-4}) \end{eqnarray} holds for large $n$. Hence from \eqref{eq3.4}, \eqref{add6} and \eqref{add7}, it follows that \begin{eqnarray} \label{eq3.5} \lim_{n \to \infty}b_n^2 \int_{y}^{\infty} \left(\Phi\left(\lambda+\frac{x-z}{2\lambda}\right)- \Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right)\right)e^{-z}dz =\kappa_1(\alpha,\lambda,x,y,t). \end{eqnarray}
Note that \begin{eqnarray}\label{eq3.10a} \zw{\bar{\Phi}_{n,t}(x)}=e^{-x}-b_n^{-2}\mu(x)+O(b_n^{-4}), \end{eqnarray} cf. Theorem 1 in Hall (1980). Now combining \eqref{eq3.2}, \eqref{eq3.5} and \eqref{eq3.10a}, we have \begin{eqnarray*} && b_n^2 \Big[\operatorname*{\mathbb{P}}\left( M_{n1}\leq \omega_{n,t}(x),M_{n2}\leq \omega_{n,t}(y) \right)- H_{\lambda}(x,y)\Big]\\ &=& b_n^2 H_{\lambda}(x,y) (1+o(1)) \Bigg[ - \bar{\Phi}_{n,t}(x)- \bar{\Phi}_{n,t}(y)+n\operatorname*{\mathbb{P}}(X>\omega_{n,t}(x),Y>\omega_{n,t}(y)) \\ && +\Phi\left(\lambda+\frac{x-y}{2\lambda}\right)e^{-y}+\Phi\left(\lambda+\frac{y-x}{2\lambda}\right)e^{-x} \Bigg] \\ &=& b_n^2 H_{\lambda}(x,y)(1+o(1)) \Bigg[ -\bar{\Phi}_{n,t}(x)+e^{-x} + \int_{y}^{\infty} \Phi\left(\lambda+\frac{x-z}{2\lambda}\right)e^{-z}dz \nonumber \\ && -\int_{y}^{\infty} \Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) e^{-z} \left(1+\left((1-t)z-\frac{2-t}{2}z^2\right)b_n^{-2}\right)dz + O(b_n^{-4}) \Bigg] \\ &\to& H_{\lambda}(x,y) \Big[ \mu(x)+\kappa_1(\alpha,\lambda,x,y,t)-\kappa_2(\alpha,\lambda,x,y,t) \Big] \\ &=& H_{\lambda}(x,y) \tau(\alpha,\lambda,x,y,t) \end{eqnarray*} as $n\to\infty$, where \begin{eqnarray*} && \kappa_2(\alpha,\lambda,x,y,t)\\ &=&\int_{y}^{\infty} \Phi\left(\lambda+\frac{x-z}{2\lambda}\right)e^{-z} \left((1-t)z-\frac{2-t}{2}z^2\right)dz \\ &=& -\left(\frac{2-t}{2}y^2+y+1\right)\Phi\left(\lambda+\frac{x-y}{2\lambda}\right)e^{-y} \\ && + \left( 2 (2-t)\lambda^4-2(2-t)\lambda^2 x+2(1-t)\lambda^2+\frac{2-t}{2}x^2+x+1 \right) \bar{\Phi}\left(\lambda+\frac{y-x}{2\lambda}\right)e^{-x} \\ && \pzx{- \Big( 2(2-t)\lambda^3-(2-t)\lambda (x+y)-2\lambda \Big)} \varphi\left(\lambda+\frac{y-x}{2\lambda}\right)e^{-x} \end{eqnarray*} and $\tau(\alpha,\lambda,x,y,t)$ is the one given by Theorem \ref{thm2}. The proof is complete. \qed
\begin{lemma} \label{lemma3} With $c_n$ and $d_n$ given by \eqref{eq1.7}, the following results hold.
\begin{itemize} \item[(a).] For the case of $\lambda=\infty$,
(i) if $\rho_n \in [-1,0]$, we have \begin{eqnarray}\label{eq3.8} \lim_{n\to \infty}(\log n)\tilde\Delta(F_{\rho_n,t}^{n},H_{\infty};\pzx{c_{n}, d_{n}};x,y) =\frac{1}{2}\Big(\mu(x)+\mu(y)\Big) H_{\infty}(x,y). \end{eqnarray} (ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty} \frac{\log b_n}{b_n^2(1-\rho_n)}=0$, then \eqref{eq3.8} also holds.
\item[(b).] For the case of $\lambda=0$,
(i) if $\rho_n=1$, we have \begin{eqnarray}\label{eq3.9} \lim_{n\to \infty}(\log n)\tilde\Delta(F_{\rho_n,t}^{n},H_0;\pzx{c_{n}, d_{n}};x,y) =\frac{1}{2}\mu \Big(\min(x,y)\Big) H_0(x,y). \end{eqnarray} (ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty}b_n^{6}(1-\rho_n)=c_1 \in [0,\infty)$, then \eqref{eq3.9} also holds. \end{itemize} \end{lemma}
\noindent \textbf{Proof.}~~ For $\lambda=\infty$, we first consider (i), i.e., the case of $\rho_n \in [-1,0]$. Note that both complete independence $(\rho_n\equiv0)$ and complete negative dependence $(\rho_n\equiv-1)$ imply $\lambda=\infty$. Thus from \eqref{eq3.10a} it follows that both \begin{eqnarray} \label{add91} \nonumber && b_n^2 \left( -n(1-F_{-1}(\omega_{n,t}(x),\omega_{n,t}(y)))+e^{-y}+e^{-x} \right) \\ \nonumber &=&
b_n^2\left( - \bar{\Phi}_{n,t}(x)+e^{-x} \right) + b_n^2\left( - \bar{\Phi}_{n,t}(y)+e^{-y} \right)
+ nb_n^2\operatorname*{\mathbb{P}} (\omega_{n,t}(x)<X<-\omega_{n,t}(y))
\\ &\to & \mu(x)+\mu(y) \end{eqnarray} and \begin{eqnarray} \label{add92} \nonumber && b_n^2 \left( -n(1-F_{0}(\omega_{n,t}(x),\omega_{n,t}(y)))+e^{-y}+e^{-x} \right) \\ \nonumber &=&
b_n^2\left( - \bar{\Phi}_{n,t}(x)+e^{-x} \right)
+b_n^2 \left( - \bar{\Phi}_{n,t}(y)+e^{-y} \right) +\frac{b_n^2}{n}\bar{\Phi}_{n,t}(x)\bar{\Phi}_{n,t}(y) \\ &\to& \mu(x)+\mu(y) \end{eqnarray} hold as $n\to\infty$, showing that the claimed results \eqref{eq3.8} hold for $\rho_n\equiv-1$ and $\rho_n\equiv0$ respectively. Thus, it follows from Slepian's Lemma that \eqref{eq3.8} also holds for $\rho_n \in [-1,0]$.
Now switch to the case of $\rho_n \in (0,1)$ with additional condition $\lim_{n \to \infty}\frac{\log b_n}{b_n^2(1-\rho_n)}=0$, implying $\lambda=\infty$. For fixed $x,z \in \mathbb{R}$, one can check that \begin{eqnarray}\label{add9} \lim_{n \to \infty}\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}=\infty. \end{eqnarray} Note that the condition $\lim_{n \to \infty}\frac{\log b_n}{b_n^2(1-\rho_n)}=0$ implies $\lim_{n \to \infty}b_n^2(1-\rho_n)=\infty$. By \eqref{add9} and Mills' inequality, \begin{eqnarray} \label{eq3.18} && b_n^4 \left( 1-\Phi\left(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) \right) \nonumber \\ &<& \frac{b_n^4 \exp\left(-\frac{(\omega_{n,t}(x)-\rho_n \omega_{n,t}(z))^2}{2(1-\rho_n^2)}\right)} { \frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}} } \nonumber \\ &=& \frac{\exp\left(-\frac{((c_n x+d_n)^{1/t}-\rho_n (c_n z+d_n)^{1/t})^2}{2(1-\rho_n^2)}+4\log b_n\right)} { \frac{(c_n x+d_n)^{1/t}-\rho_n (c_n z+d_n)^{1/t}}{\sqrt{1-\rho_n^2}}} \nonumber \\ &=&
\left( b_n\sqrt{\frac{1-\rho_n}{1+\rho_n}}+\frac{x-z}{b_n\sqrt{1-\rho_n^2}}+
\frac{z}{b_n}\sqrt{\frac{1-\rho_n}{1+\rho_n}} +\frac{(1-t)(x^2-\rho_n z^2)}{2b_n^3\sqrt{1-\rho_n^2}} +\sqrt{\frac{1-\rho_n}{1+\rho_n}}O(b_n^{-5})\right)^{-1} \nonumber \\ && \times \exp\left( -\frac{b_n^2(1-\rho_n)}{2(1+\rho_n)}-\frac{(x-\rho_n z)^2}{2b_n^2(1-\rho_n^2)}- \frac{(1-t)^2 (x^2-\rho_n z^2)^2}{8b_n^6(1-\rho_n^2)}-\frac{x-\rho_n z}{1+\rho_n}-\frac{(1-t)(x^2-\rho_n z^2)}{2b_n^2(1+\rho_n)} \right. \nonumber \\ && \left. -\frac{(x-\rho_n z)(1-t)(x^2-\rho_n z^2)}{2b_n^4(1-\rho_n^2)} +\frac{1-\rho_n}{2(1+\rho_n)}O(b_n^{-5})+4\log b_n \right) \nonumber \\ &<& (1+o(1)) e^{-\frac{x-z}{2}} \exp \left\{ -\frac{b_n^2(1-\rho_n)}{2(1+\rho_n)} \left( 1- \frac{8(1+\rho_n)\log b_n}{b_n^2(1-\rho_n)} +\frac{(1+\rho_n)\log b_n^2(1-\rho_n)}{b_n^2 (1-\rho_n)}\right) \right\} \nonumber \\ & \to & 0 \end{eqnarray} as $n \to \infty$. Note that \begin{eqnarray}\label{eq3.11a} n^{-1}=\pzx{\bar{\Phi}(b_n)}=\frac{\varphi(b_n)}{b_n}(1-b_n^{-2}+O(b_n^{-4})). \end{eqnarray} Hence, by using \eqref{more1}-\eqref{more2}, we have \begin{eqnarray} \label{eq3.11b} && n \operatorname*{\mathbb{P}}(X>\omega_{n,t}(x),Y>\omega_{n,t}(y)) \nonumber \\ \nonumber &=& b_n^{-4}(1-b_n^{-2}+O(b_n^{-4}))^{-1} \int_{y}^{\infty} b_n^4 \bar{\Phi}(\frac{\omega_{n,t}(x)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}) \exp\left(\frac{b_n^2}{2}\left( 1-(1+tza_n^2)^{2/t}\right)\right) \\ \nonumber && \times (1+tza_n^2)^{1/t-1}dz \\ &=& O(b_n^{-4}). \end{eqnarray}
It follows from \eqref{eq3.10a} and \eqref{eq3.11b} that
\begin{eqnarray}
\label{eq3.12a} && b_n^2 \Big[F_{\rho_n}^n(\omega_{n,t}(x),\omega_{n,t}(y))-H_{\infty}(x,y)\Big] \nonumber \\ \nonumber &=& b_n^2 H_{\infty}(x,y) (1+o(1)) \Big[-n (1-F_{\rho_n}(\omega_{n,t}(x),\omega_{n,t}(y)))+e^{-x}+e^{-y}\Big] \\ \nonumber &=& b_n^2 H_{\infty}(x,y) (1+o(1)) \Big[- (e^{-x}+e^{-y}-b_n^{-2}(\mu(x)+\mu(y)+O(b_n^{-2})))+e^{-x}+e^{-y} \Big] \\ \nonumber &\to& H_{\infty}(x,y) \Big[\mu(x)+\mu(y)\Big] \nonumber \end{eqnarray} as $n\to\infty$. The proof of the case $\lambda=\infty$ is complete.
(b). For the case of $\lambda=0$, we first consider the complete positive dependence case $(\rho_n\equiv1)$. Without loss of generality, assume that $y<x$; then we have \begin{eqnarray} \label{eq3.21a} b_n^2 \Big[-n(1-F_1(\omega_{n,t}(x),\omega_{n,t}(y)))+e^{-y}\Big] = b_n^2\Big[-\zw{\bar{\Phi}_{n,t}(y)}+e^{-y}\Big] \to\mu(y) \end{eqnarray} as $n \to \infty$ since \eqref{eq3.10a} holds. It remains to consider the case $\rho_n \in (0,1)$. For $y<x \in \operatorname*{\mathbb{R}}$, if $\max (x,y)=x<z<4 \log b_n$ we have
\begin{eqnarray}\label{add10} && \Phi\left(\frac{\omega_{n,t}(y)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) \nonumber\\ &<& -\frac{\varphi\left(\frac{\omega_{n,t}(y)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right)} {\frac{\omega_{n,t}(y)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}} \nonumber\\ &=& \frac{\exp \left( -\frac{1}{2} \left(\lambda_n+\frac{y-z}{2\lambda_n}\right)^2 (1+o(1)) \right)} { \left( - \lambda_n+\frac{z-y}{2\lambda_n} \left( 1+\frac{(1-t)(y+z)}{2 b_n^2} \right) -\frac{\lambda_n z}{b_n^2} - \frac{(1-t)\lambda_n z^2}{2b_n^{4}} + \lambda_n O(b_n^{-6}) \right) (1-\frac{\lambda_n^2}{b_n^2})^{-\frac{1}{2}} } \nonumber\\ &=& \frac{\exp \left(-\frac{1}{2} \left(\lambda_n+\frac{y-z}{2\lambda_n}\right)^2 (1+o(1)) \right)} {\frac{z-y}{2\lambda_n}(1+o(1))} \end{eqnarray} for large $n$ due to $\Phi(-x)=\pzx{\bar{\Phi}(x)}$ and Mills' inequality since $\frac{\omega_{n,t}(y)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}<0$ for large $n$ when $\lim_{n \to \infty} b_n^{6} (1-\rho_n)=c_1 \in [0,\infty)$. Therefore, \begin{eqnarray} \label{eq3.12} \nonumber && \int_{x}^{4\log b_n}\Phi\left( \frac{\omega_{n,t}(y)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}} \right) \exp\left(\frac{b_n^2}{2}\left( 1-(1+\pzx{tz} a_n^2)^{2/t}\right)\right) (1+\pzx{tz} a_n^2)^{1/t-1}dz \\ \nonumber &<& \frac{2\lambda_n}{x-y}(1+o(1)) \int_{x}^{4\log b_n} \exp \left( -\frac{\lambda_n^2}{2}-\frac{y-z}{2}-\frac{(y-z)^2}{8\lambda_n^2}-z+o(b_n^{-1})+ \left( \frac{1}{t}-1\right)\log (1+\pzx{tz} a_n^2) \right)dz \\ \nonumber &=& 2\lambda_n (1+o(1)) \frac{ \exp \left( -\frac{\lambda_n^2}{2}-\frac{y}{2}-\frac{(y-x)^2}{8\lambda_n^2} \right)} {x-y} \int_{x}^{4\log b_n} \exp \left(-\frac{z}{2}\right)dz \\ \nonumber &<& 4\lambda_n b_n^{-2}(1+o(1)) \frac{\exp \left( -\frac{\lambda_n^2}{2}-\frac{y}{2}-\frac{(y-x)^2}{8\lambda_n^2} \right)}{y-x} \\ &=& O(b_n^{-4}) \end{eqnarray} for large $n$ by using $\lim_{n \to \infty}b_n^{6}(1-\rho_n)=c_{1}$. \pzx{It follows from \eqref{more2} that} \begin{eqnarray} \label{eq3.13} \nonumber && \int_{4\log b_n}^{\infty} \Phi\left( \frac{(\omega_{n,t}(y)-\rho_n \omega_{n,t}(z))^2}{\sqrt{1-\rho_n^2}} \right) \exp\left(\frac{b_n^2}{2}\left( 1-(1+tza_n^2)^{2/t}\right)\right) (1+tza_n^2)^{1/t-1}dz \\ \nonumber &<& \int_{4\log b_n}^{\infty} \exp\left(\frac{b_n^2}{2}\left( 1-(1+tza_n^2)^{2/t}\right)\right) (1+tza_n^2)^{1/t-1}dz \\ &=& O(b_n^{-4}). \end{eqnarray} Combining \eqref{eq3.10a}, \eqref{eq3.12} and \eqref{eq3.13}, for $y<x$ we have \begin{eqnarray*} && 1-F_{\rho_n}\Big(\omega_{n,t}(\min(x,y)),\omega_{n,t}(\max(x,y))\Big) \\ &=& \zw{\bar{\Phi}_{n,t}(y)+\bar{\Phi}_{n,t}(x)} -\operatorname*{\mathbb{P}}\Big(X>\omega_{n,t}(y),Y>\omega_{n,t}(x)\Big) \\ &=& \zw{\bar{\Phi}_{n,t}(y)+\bar{\Phi}_{n,t}(x)} \\ && -\int_{x}^{\infty}\bar{\Phi}\left(\frac{\omega_{n,t}(y)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) \exp\left(\frac{b_n^2}{2}\left( 1-(1+tza_n^2)^{\frac{2}{t}}\right)\right) (1+tza_n^2)^{\frac{1}{t}-1}dz \\ &=& \zw{\bar{\Phi}_{n,t}(y)}+n^{-1}(1-b_n^{-2}+O(b_n^{-4}))^{-1} \\ && \times \int_{x}^{\infty} \Phi\left(\frac{\omega_{n,t}(y)-\rho_n \omega_{n,t}(z)}{\sqrt{1-\rho_n^2}}\right) \exp\left(\frac{b_n^2}{2}\left( 1-(1+tza_n^2)^{\frac{2}{t}}\right)\right) (1+tza_n^2)^{\frac{1}{t}-1}dz \\ &=& n^{-1}(e^{-y}-b_n^{-2}\mu(y)+O(b_n^{-4})) \end{eqnarray*} holds for large $n$, which implies the desired result. The proof is complete. \qed
\begin{lemma} \label{lemma4} For $t=2$, with $c_{n}^{*}$ and $d_{n}^{*}$ given by \eqref{add2}, the following results hold.
\begin{itemize} \item[(a).] For the case of $\lambda=\infty$,
(i) if $\rho_n \in [-1,0]$, we have \begin{eqnarray}\label{eq3.14} \lim_{n\to \infty}(\log n)^2\tilde\Delta(F_{\rho_n,2}^{n},H_{\infty};\pzx{c_{n}^{*}, d_{n}^{*}};x,y) =\frac{1}{4}\Big(\nu(x)+\nu(y)\Big) H_{\infty}(x,y). \end{eqnarray} (ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty} \frac{\log b_n}{b_n^2(1-\rho_n)}=0$, then \eqref{eq3.14} also holds.
\item[(b).] For the case of $\lambda=0$,
(i) if $\rho_n=1$, we have \begin{eqnarray}\label{eq3.15} \lim_{n\to \infty}(\log n)^2\tilde\Delta(F_{\rho_n,2}^{n},H_0;\pzx{c_{n}^{*}, d_{n}^{*}};x,y) =\frac{1}{4}\nu\Big(\min(x,y)\Big) H_0(x,y). \end{eqnarray} (ii) if $\rho_n \in (0,1)$ and $\lim_{n \to \infty}b_n^{14}(1-\rho_n)=c_2 \in [0,\infty)$, then \eqref{eq3.15} also holds. \end{itemize} \end{lemma}
\noindent \textbf{Proof.}~~ (a). For the case $\lambda=\infty$, first note that \begin{eqnarray}\label{eq3.16} \zw{\bar{\Phi}_{n,2}(x)}=e^{-x}-b_n^{-4}\nu(x)+O(b_n^{-6}) \end{eqnarray} which follows from Theorem 1 in Hall (1980). \zw{By arguments similar to those for \eqref{add91} and \eqref{add92}, using \eqref{eq3.16} we have \begin{eqnarray*} b_n^4 \left( -n (1-F_{-1}(\omega_{n,2}^{*}(x),\omega_{n,2}^{*}(y))) +\Phi(\lambda+\frac{x-y}{2\lambda})e^{-y} +\Phi(\lambda+\frac{y-x}{2\lambda})e^{-x} \right) \to \nu(x)+\nu(y) \end{eqnarray*} as $n \to \infty$ for $\rho_n\equiv-1$, and \begin{eqnarray*} b_n^4 \left( -n (1-F_{0}(\omega_{n,2}^{*}(x),\omega_{n,2}^{*}(y))) +\Phi(\lambda+\frac{x-y}{2\lambda})e^{-y} +\Phi(\lambda+\frac{y-x}{2\lambda})e^{-x} \right) \to \nu(x)+\nu(y) \end{eqnarray*} also holds as $n \to \infty$ for $\rho_n\equiv0$}. Therefore, \eqref{eq3.14} holds for $\rho_n\equiv-1$ and $\rho_n\equiv0$ respectively. By using Slepian's Lemma, \eqref{eq3.14} also holds for $\rho_n \in [-1,0]$.
Now switch to the case of $\rho_n \in(0,1)$ with additional condition $\lim_{n \to \infty} \frac{\log b_n}{b_n^2(1-\rho_n)}=0$, implying $\lambda=\infty$. For fixed $x$ and $z$, note that \begin{eqnarray*} \frac{\omega_{n,2}^{*}(x)-\rho_n \omega_{n,2}^{*}(z)}{\sqrt{1-\rho_n^2}}
\to \infty \end{eqnarray*} as $n\to\infty$ since $\lambda_n^2=\frac{b_n^2}{2}(1-\rho_n) \to \infty$ as $n \to \infty$. Therefore, for $t=2$, by arguments similar to \eqref{eq3.18}, we have \begin{eqnarray*}
b_n^6 \left( 1-\Phi\left(\frac{\omega_{n,2}^{*}(x)-\rho_n \omega_{n,2}^{*}(z)}{\sqrt{1-\rho_n^2}}\right) \right) \to 0 \end{eqnarray*} as $n \to \infty$. Hence, it follows from \eqref{eq3.11a} that \begin{eqnarray*} && \operatorname*{\mathbb{P}}(X>\omega_{n,2}^{*}(x),Y>\omega_{n,2}^{*}(y)) \\ &=& n^{-1} b_n^{-6}(1+b_n^{-2}+O(b_n^{-4})) \int_{y}^{\infty} b_n^6
\left(1-{\Phi}(\frac{\omega_{n,2}^{*}(x)-\rho_n \omega_{n,2}^{*}(z)}{\sqrt{1-\rho_n^2}})\right)
\\ && \times \exp\left(-z+(1+z)a_n^2\right) \left(1-a_n^2\right) \left(1+2\left(z-(1+z)a_n^2\right)a_n^2\right)^{-\frac{1}{2}} dz \\ &=& O(n^{-1}b_n^{-6}) \end{eqnarray*} for large $n$. Hence,
\begin{eqnarray*}
b_n^4 \Big[F_{\rho_n}^n(\omega_{n,2}^{*}(x),\omega_{n,2}^{*}(y))-H_{\infty}(x,y)\Big] \to H_{\infty}(x,y) \Big[\nu(x)+\nu(y)\Big] \end{eqnarray*} holds as $n\to\infty$. The proof of the case $\lambda=\infty$ is complete.
(b). For the case of $\lambda=0$, we first consider the complete positive dependence case $(\rho_n\equiv1)$. Without loss of generality, assume that $y<x$. Hence, \begin{eqnarray*}
b_n^{4}\Big[ -n(1-F_{1}(\omega_{n,2}^{*}(x),\omega_{n,2}^{*}(y)))+\Phi(\lambda+\frac{x-y}{2\lambda})e^{-y}+\Phi(\lambda+\frac{y-x}{2\lambda})e^{-x} \Big] \to \nu(y) \end{eqnarray*} as $n\to\infty$ provided that \eqref{eq3.16} holds. It remains to prove the case $\rho_n \in(0,1)$ with $\lim_{n \to \infty}b_n^{14}(1-\rho_n)=c_2 \in [0,\infty)$. By arguments similar to those of \eqref{eq3.12} and \eqref{eq3.13}, for fixed $y<x \in \operatorname*{\mathbb{R}}$ we have \begin{equation} \label{eq3.17} \int_{x}^{\infty} \Phi \left(\frac{\omega_{n,2}^{*}(y)-\rho_n \omega_{n,2}^{*}(z)}{\sqrt{1-\rho_n^2}}\right) \exp(-z+ (1+z)a_n^2) \frac{1-a_n^2}{(1+2 (z-(1+z)a_n^2)a_n^2)^{\frac{1}{2}}} dz = O(b_n^{-6}) \end{equation} for large $n$ by using $\lim_{n \to \infty}b_n^{14}(1-\rho_n)=c_{2}$. Combining \eqref{eq3.16} with \eqref{eq3.17}, we have \begin{eqnarray*} && 1-F_{\rho_n}(\omega_{n,2}^{*}(x),\omega_{n,2}^{*}(y)) \\ &=& \zw{\bar{\Phi}_{n,2}(y)}+n^{-1}(1-b_n^{-2}+O(b_n^{-4}))^{-1} \\ && \times \int_{x}^{\infty} \Phi \left(\frac{\omega_{n,2}^{*}(y)-\rho_n \omega_{n,2}^{*}(z)}{\sqrt{1-\rho_n^2}} \right) \exp(-z+ (1+z)a_n^2) \frac{1-a_n^2}{(1+2 (z-(1+z)a_n^2)a_n^2)^{\frac{1}{2}}} dz \\ &=& n^{-1} \left( e^{-y}-b_n^{-4}\nu(y)+O(b_n^{-6}) \right) \end{eqnarray*} for large $n$, which implies the desired result. The proof is complete. \qed
\section{Proofs} \label{sec4}
\noindent \textbf{Proof of Theorem \ref{thm1}.}~~ Obviously,
\begin{eqnarray*}
&&\operatorname*{\mathbb{P}}\left( \abs{M_{n1}}^t\leq c_n x+d_n, \abs{M_{n2}}^t\leq c_n y+d_n \right)
\\
&=& F_{\rho_n,t}^{n}(\omega_{n,t}(x),\omega_{n,t}(y))-F_{\rho_n,t}^{n}(\omega_{n,t}(x),-\omega_{n,t}(y))
-F_{\rho_n,t}^{n}(-\omega_{n,t}(x),\omega_{n,t}(y))+F_{\rho_n,t}^{n}(-\omega_{n,t}(x),-\omega_{n,t}(y)).
\end{eqnarray*} Note that
\begin{eqnarray}\label{eq4.1}
\nonumber && F_{\rho_n,t}^{n}(\omega_{n,t}(x),-\omega_{n,t}(y))+F_{\rho_n,t}^{n}(-\omega_{n,t}(x),\omega_{n,t}(y)) -F_{\rho_n,t}^{n}(-\omega_{n,t}(x),-\omega_{n,t}(y)) \\ \nonumber &\leq& \operatorname*{\mathbb{P}}\left(M_{n2}\leq -\omega_{n,t}(y)\right)+\operatorname*{\mathbb{P}}\left(M_{n1}\leq -\omega_{n,t}(x)\right) -\min \{ \Phi^n(-\omega_{n,t}(x)),\Phi^n(-\omega_{n,t}(y)) \} \\ \nonumber &=& \Phi^n(-\omega_{n,t}(x))+\Phi^n(-\omega_{n,t}(y))-\min \{ \Phi^n(-\omega_{n,t}(x)),\Phi^n(-\omega_{n,t}(y)) \} \\ &=& o(b_n^{-4}) \end{eqnarray} since \begin{eqnarray*} \Phi^{n-1}(-\omega_{n,t}(x))=\left(n^{-1}e^{-x}(1+O(b_n^{-2}))\right)^{n-1} =o(b_n^{-4}), \end{eqnarray*} cf. Lemma 3.1 in Zhou and Ling (2016). Combining \eqref{eq4.1} with Lemma \ref{lemma1}, we obtain the claimed result \eqref{eq2.2}. \qed
\noindent \textbf{Proof of Theorem \ref{thm2}.}~~ It follows from \eqref{eq4.1} and Lemma \ref{lemma2} that \begin{eqnarray*}
\Delta(F_{\rho_n,t}^{n},H_{\lambda};\pzx{c_{n},d_{n}}; x,y) = \tilde{\Delta}(F_{\rho_n,t}^{n},H_{\lambda};\pzx{c_{n},d_{n}}; x,y)+ o(b_n^{-4}), \end{eqnarray*}
so the result \eqref{eq2.3} is obtained. \qed
\noindent \textbf{Proof of Theorem \ref{thm3} and Theorem \ref{thm4}.}~~ The desired results follow by combining \eqref{eq4.1} with Lemma \ref{lemma3} and Lemma \ref{lemma4}, respectively. \qed
\noindent {\bf Acknowledgments}~~ The first author was supported by the Fundamental Research Funds for the Central Universities Grant no. XDJK2016E117, the CQ Innovation Project for Graduates Grant no. CYS16046.
\end{document}
\begin{document}
\input style.tex
\title{Operator-Valued Bochner Theorem, Fourier Feature Maps for Operator-Valued Kernels, and Vector-Valued Learning}
\author{\name H\`a Quang Minh \email [email protected]\\
\addr Pattern Analysis and Computer Vision (PAVIS)\\
Istituto Italiano di Tecnologia (IIT), Via Morego 30, Genova 16163, ITALY}
\editor{}
\maketitle
\begin{abstract} This paper presents a framework for computing random operator-valued feature maps for operator-valued positive definite kernels. This is a generalization of the random Fourier features for scalar-valued kernels to the operator-valued case. Our general setting is that of operator-valued kernels corresponding to RKHS of functions with values in a Hilbert space. We show that in general, for a given kernel, there are potentially infinitely many random feature maps, which can be bounded or unbounded. Most importantly, given a kernel, we present a general, closed form formula for computing a corresponding probability measure, which is required for the construction of the Fourier features, and which, unlike the scalar case, is not uniquely and automatically determined by the kernel. We also show that, under appropriate conditions, random bounded feature maps can always be computed. Furthermore, we show the uniform convergence, under the Hilbert-Schmidt norm, of the resulting approximate kernel to the exact kernel on any compact subset of Euclidean space. Our convergence requires differentiable kernels, an improvement over the twice-differentiability requirement in previous work in the scalar setting. We then show how operator-valued feature maps and their approximations can be employed in a general vector-valued learning framework. The mathematical formulation is illustrated by numerical examples on matrix-valued kernels. \end{abstract}
\section{Introduction}
The current work is concerned with the construction of random feature maps for operator-valued kernels and their applications in vector-valued learning. Much work has been done in machine learning recently on these kernels and their associated RKHS of vector-valued functions, both theoretically and practically,
see e.g. \citep{MichelliPontil05, Carmeli2006, Reisert2007, Caponnetto08, ICML2011Brouard, ICML2011Dinuzzo,Kadrietal2011, MinhVikasICML2011, Zhangetal:JMLR2012, VikasMinhLozano:UAI2013}. While rich in theory and potentially powerful in applications, one of the main challenges in applying operator-valued kernels is that they are computationally intensive on large datasets.
In the scalar setting, one of the most powerful approaches for scaling up kernel methods is Random Fourier Features \citep{Fourier:NIPS2007}, which applies Bochner's Theorem and the Inverse Fourier Transform to build random features that approximate a given shift-invariant kernel. The approach in \citep{Fourier:NIPS2007} has been improved both in terms of computational speed \citep{Fastfood:ICML2013} and rates of convergence \citep{Fourier:UAI2015,Fourier:NIPS2015}.
{\bf Our contributions}. The following are the contributions of this work. \begin{enumerate} \item {\it Firstly}, we construct random feature maps for operator-valued shift-invariant kernels using the operator-valued version of Bochner's Theorem. The {\it key differences} between the operator-valued and scalar settings are the following. The first key difference is that, in the scalar setting, a positive definite function $k$, with normalization, is the Fourier transform of a probability measure $\rho$, which is {\it uniquely determined} as the inverse Fourier transform of $k$. In the operator-valued setting, $k$ is the Fourier transform of a {\it unique} finite positive operator-valued measure $\mu$. However, the probability measure $\rho$, which is necessary for constructing the random feature maps, must be {\it explicitly constructed}, that is it is {\it not} automatically determined by $k$. In this work, we present a general formula for computing a probability measure $\rho$ given a kernel $k$. The second key difference is that, in the operator-valued setting, the probability measure $\rho$ is generally {\it non-unique}, being a factor of $\mu$. As a consequence, we show that in general, there are (potentially infinitely) many random feature maps, which may be either unbounded or bounded. However, under appropriate assumptions, we show that there always exist bounded feature maps. This is true for many of the commonly encountered kernels, including separable kernels and curl-free and divergence-free kernels.
\item {\it Secondly}, for the bounded feature maps, we show that the associated approximate kernel converges uniformly to the exact kernel in Hilbert-Schmidt norm on any compact subset in Euclidean space.
\item {\it Thirdly}, when restricting to the scalar setting, our convergence holds for differentiable kernels, which is an improvement over the hypothesis of \citep{Fourier:NIPS2007,Fourier:UAI2015,Fourier:NIPS2015,Romain:2016}, which all require the kernels to be twice-differentiable.
\item {\it Fourthly}, we show how operator-valued feature maps and their approximations can be used directly in a general learning formulation in RKHS.
\end{enumerate}
{\bf Related work}. The work most closely related to our present work is \citep{Romain:2016}. While the {\it formal constructions} of the Fourier feature maps in \citep{Romain:2016} and our work are similar, there are several {\it crucial differences}. The first and most important difference is that in \citep{Romain:2016} there is {no} general mechanism for computing a probability measure $\rho$, which is required for the construction of the Fourier feature maps. As such, the results presented in \citep{Romain:2016} are only for three specific kernels, namely separable kernels, curl-free and div-free kernels, {not} for a general kernel as in our setting. Moreover, for the curl-free and div-free kernels, \citep{Romain:2016} presented unbounded feature maps, whereas we show that, apart from unbounded feature maps, there are generally infinitely many bounded feature maps associated with these kernels. Secondly, more general than the matrix-valued kernel, i.e finite-dimensional, setting in \citep{Romain:2016}, we work in the operator-valued kernel setting, with RKHS of functions with values in a Hilbert space. In this setting, the convergence in the Hilbert-Schmidt norm that we present is {\it strictly stronger} than the convergence in spectral norm given in \citep{Romain:2016}. At the same time, our convergence requires {\it weaker assumptions} than those in \citep{Romain:2016} and previous results in the scalar setting \citep{Fourier:NIPS2007,Fourier:UAI2015,Fourier:NIPS2015}.
{\bf Organization}. We first briefly review random Fourier features and operator-valued kernels in Section~\ref{section:background}. Feature maps for operator-valued kernels are described in Section~\ref{section:operator-feature}. The core of the paper is Section~\ref{section:operator-random}, which describes the construction of random feature maps using operator-valued Bochner's Theorem, the computation of the required probability measure, along with the uniform convergence of the corresponding approximate kernels. Section~\ref{section:learning} employs feature maps and their approximations in a general vector-valued learning formulation, with the accompanying experiments in Section~\ref{section:experiments}. All mathematical proofs are given in Appendix \ref{section:proofs}.
\section{Background} \label{section:background}
Throughout the paper, we work with shift-invariant positive definite kernels {$K$} on {$\R^n \times\R^n$}, so that {$K(x,t) = k(x-t) \forall x,t\in\R^n$} for some function {$k:\R^n \mapto \R$}, which is then said to be a {\it positive definite function} on $\R^n$.
{\bf Random Fourier features for scalar-valued kernels \citep{Fourier:NIPS2007}}. Bochner's Theorem in the scalar setting, see e.g. \citep{ReedSimon:vol2}, states that a complex-valued, continuous function $k$ on $\R^n$ is positive definite if and only if it is the Fourier transform of a finite, positive measure $\mu$ on $\R^n$, that is { \begin{align} \label{equation:Bochner-scalar} k(x) = \hat{\mu}(x) = \int_{\R^n}e^{-i\la \omega, x\ra}d\mu(\omega). \end{align} } For our purposes, we consider exclusively the {\it real-valued} setting for $k$. Since $\mu$ is a finite positive measure, without loss of generality, we assume that $\mu$ is a probability measure, so that {$k(x) = \bE_{\mu}[e^{-i\la\omega, x\ra}]$}. The measure $\mu$ is {\it uniquely determined} via $\hat{\mu} = k$.
For the Gaussian function {$k(x) = e^{-\frac{||x||^2}{\sigma^2}}$}, we have
{$\mu(\omega) = \frac{(\sigma\sqrt{\pi})^n}{(2\pi)^n}e^{-\frac{\sigma^2||\omega||^2}{4}} \sim \Ncal\left(0, \frac{2}{\sigma^2}\right)$}.
Consider now the kernel {$K(x,t) = k(x-t) = \int_{\R^n}e^{-i\la \omega, x-t\ra}d\mu(\omega)$}. Using the symmetry of {$K$} and the relation {$\frac{1}{2}(e^{ix} + e^{-ix}) = \cos(x)$}, we obtain { \begin{align} K(x,t) = \frac{1}{2}\int_{\R^n}[e^{i\la \omega, x-t\ra} + e^{-i \la \omega, x-t\ra}]d\mu(\omega) = \int_{\R^n}\cos(\la \omega, x-t\ra)d\mu(\omega).
\end{align} } Let {$\{\omega_j\}_{j=1}^D$} be points in {$\R^n$}, independently sampled according to the measure {$\mu$}. Then we have an empirical approximation {$\hat{K}_D$} of {$K$} and the associated feature map {$\hat{\Phi}_D:\R^n \mapto \R^{2D}$}, as follows { \begin{align} \hat{K}_D(x,t) &= \frac{1}{D}\sum_{j=1}^D\cos(\la \omega_j, x-t\ra) = \frac{1}{D}\sum_{j=1}^D[\cos(\la \omega_j, x\ra)\cos(\la \omega_j, t\ra) + \sin(\la \omega_j,x\ra)\sin(\la \omega_j, t\ra)] \nonumber \\ & = \la \hat{\Phi}_D(x), \hat{\Phi}_D(t)\ra, \;\;\;\text{where}\;\;\; \hat{\Phi}_D(x) = \frac{1}{\sqrt{D}}(\cos(\la \omega_j, x\ra),\sin(\la \omega_j, x\ra))_{j=1}^D \in \R^{2D}. \label{equation:feature-scalar} \end{align} } The current work generalizes the feature map {$\hat{\Phi}_D$} above to the case where {$K$} is an operator-valued kernel and the corresponding $\mu$ is a positive operator-valued measure.
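As a concrete illustration of \eqref{equation:feature-scalar}, the following minimal NumPy sketch computes {$\hat{\Phi}_D$} for the Gaussian kernel recalled above; the function name \texttt{gaussian\_rff} and all parameter choices are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def gaussian_rff(X, D, sigma, seed=0):
    # Random Fourier features for k(x - t) = exp(-||x - t||^2 / sigma^2).
    # The spectral probability measure is Gaussian with variance 2/sigma^2
    # per coordinate, so the frequencies omega_j are drawn from it.
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0) / sigma, size=(D, n))  # omega_1,...,omega_D
    P = X @ W.T                                             # <omega_j, x_i>
    # hat{Phi}_D(x) = (cos<omega_j,x>, sin<omega_j,x>)_{j=1..D} / sqrt(D)
    return np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(D)

# usage: hat{K}_D = Phi Phi^T approximates the exact Gram matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Phi = gaussian_rff(X, D=2000, sigma=1.0)
K_hat = Phi @ Phi.T
diff = X[:, None, :] - X[None, :, :]
K_exact = np.exp(-np.sum(diff**2, axis=-1) / 1.0**2)
print(np.max(np.abs(K_hat - K_exact)))
\end{verbatim}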
{\bf Vector-valued RKHS}.
Let us now briefly recall operator-valued kernels and their corresponding RKHS of vector-valued functions,
for more detail see
e.g. \citep{Carmeli2006, MichelliPontil05, Caponnetto08, MinhVikasICML2011}.
Let $\X$ be a nonempty set, $\mathcal{W}$ a real, separable Hilbert space with inner product $\langle \cdot,\cdot\rangle_{\mathcal{W}}$, $\mathcal{L}(\W)$ the Banach space of bounded linear operators on $\W$.
Let {$\W^{\X}$} denote the vector space of all functions {$f:\X \rightarrow \W$}. A function {$K: \X \times \X \rightarrow \mathcal{L}(\W)$} is said to be an {\it operator-valued positive definite kernel} if for each pair {$(x,t) \in \X \times \X$, $K(x,t)^{*} = K(t,x)$},
and for every set of points {$\{x_i\}_{i=1}^N$} in $\X$ and {$\{w_i\}_{i=1}^N$} in {$\W$}, {$N \in \N$}, {
$\sum_{i,j=1}^N\langle w_i, K(x_i,x_j)w_j\rangle_\W \geq 0$.
}
For {$x\in \X$} and {$w \in \W$}, form a function {$K_xw = K(.,x)w \in \W^{\X}$}
by { \begin{align} (K_xw)(t) = K(t,x) w \;\;\;\; \forall t \in \X. \end{align} }
Consider the set {$\mathcal{H}_0 = {\rm span}\{K_xw | x \in \X, w \in \W\} \subset \W^\X$}. For {$f= \sum_{i=1}^NK_{x_i}w_i$, $g = \sum_{i=1}^NK_{z_i}y_i \in \mathcal{H}_0$}, we define the inner product
{$\langle f, g \rangle_{\H_K} = \sum_{i,j=1}^N\langle w_i, K(x_i,z_j)y_j\rangle_\W$},
which makes $\mathcal{H}_0$ a pre-Hilbert space.
Completing $\mathcal{H}_0$ by adding the limits of all Cauchy sequences gives the Hilbert space $\mathcal{H}_K$. This is the reproducing kernel Hilbert space (RKHS) of {$\W$}-valued functions on {$\X$}. The {\it reproducing property} is \begin{align} \label{equation:reproducing2} \langle f(x),y\rangle_\W = \langle f, K_xy\rangle_{\H_K} \;\;\;\; \mbox{for all} \;\;\; f \in \mathcal{H}_K. \end{align}
\subsection{Operator-Valued Feature Maps for Operator-Valued Kernels}
\label{section:operator-feature}
Feature maps for
operator-valued kernels were first considered in \citep{Caponnetto08}.
Let {$\F_K$} be a separable Hilbert space and {$\mathcal{L}(\W, \F_K)$} be the Banach space of all bounded linear operators mapping from {$\W$ to $\F_K$}. A {\it feature map} for {$K$} with corresponding {\it feature space} {$\F_K$} is a mapping { \begin{align} \label{equation:feature-def} \Phi_K: \X \mapto \mathcal{L}(\W, \F_K), \;\;\;\text{such that}\;\;\;K(x,t) = \Phi_K(x)^{*}\Phi_K(t) \;\;\; \forall (x,t) \in \X \times \X. \end{align} }
The operator-valued map {$\Phi_K$} is generally nonlinear as a function on $\X$. For each {$x \in \X$},
{$\Phi_K(x) \in \mathcal{L}(\W, \F_K)$} and
\begin{align} \la w, K(x,t)w\ra_{\W} = \la w, \Phi_K(x)^{*}\Phi_K(t)\ra_{\W}
= \la \Phi_K(x)w, \Phi_K(t)w\ra_{\F_K}. \end{align}
In the following, for brevity, we also refer to the pair {$(\Phi_K, \F_K)$} as a feature map for {$K$}.
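As a simple example (the scalar kernel {$k$}, its feature map {$\phi$} and the operator {$A$} below are generic, serving only as an illustration), let {$K(x,t) = k(x,t)A$} be a separable kernel, where {$k:\X\times\X \mapto \R$} is a scalar-valued positive definite kernel with feature map {$\phi:\X \mapto \F$}, so that {$k(x,t) = \la \phi(x), \phi(t)\ra_{\F}$} for a separable Hilbert space {$\F$}, and {$A$} is a bounded, self-adjoint, positive operator on {$\W$}, with positive square root {$A^{1/2}$}. Then the map { \begin{align} \Phi_K: \X \mapto \mathcal{L}(\W, \F\otimes\W), \;\;\; \Phi_K(x)w = \phi(x)\otimes A^{1/2}w, \end{align} } satisfies {$\Phi_K(x)^{*}\Phi_K(t) = \la \phi(x),\phi(t)\ra_{\F}A = K(x,t)$}, so that {$(\Phi_K, \F\otimes\W)$} is a feature map for {$K$}. Different feature maps {$\phi$} for {$k$} give rise to different feature maps {$\Phi_K$} for {$K$}, illustrating the non-uniqueness discussed below.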
{\bf Existence of operator-valued feature maps and the canonical feature map}.
Let $K$ be any operator-valued positive definite kernel on $\X \times \X$; we now show that there always exists at least one feature map, as follows.
For each $x \in \X$, consider the linear operator $K_x: \W \mapto \H_K$ defined by
$K_xw(t) = K(t,x)w$, $x,t \in \X$, as above.
Then { \begin{align}
||K_xw||_{\H_K}^2 = \langle K(x,x)w, w\rangle_{\W} \leq ||K(x,x)||\;||w||^2_{\W}, \end{align} } which implies that $K_x$ is a bounded operator, with { \begin{align}
||K_x: \W \rightarrow \H_K|| \leq \sqrt{||K(x,x)||}. \end{align} }
Let $K_x^{*}: \H_K \mapto \W$ be the adjoint operator for $K_x$. The reproducing property states that $\forall w \in \W$, { \begin{equation} \la f(x), w\ra_{\W} = \la f, K_xw\ra_{\H_K} = \la K_x^{*}f, w\ra_{\W} \imply K_x^{*}f = f(x). \end{equation} }
For any $u,v \in \W$, we have { \begin{align} \la u, K(x,t)v\ra_{\W} = \la u, K_tv(x)\ra_{\W} = \la u, K_x^{*}K_tv\ra_{\W}
= \la K_xu, K_tv\ra_{\H_K} \imply K(x,t) = K_x^{*}K_t, \end{align} } from which it follows that
{ \begin{align} \Phi_K: \X \mapto \mathcal{L}(\W,\H_K), \;\;\;
\Phi_K(x) = K_x \in \mathcal{L}(\W,\H_K) \end{align} } is a feature map for $K$ with feature space $\H_K$, which exists for any positive definite kernel $K$. Following the terminology in the scalar setting \citep{Minh-Niyogi-Yao}, we also call it the {\it canonical feature map} for $K$.
\begin{remark} In \citep{Caponnetto08}, it is {\it assumed} that the kernel has the representation {$K(x,t) = \Phi_K(x)^{*}\Phi_K(t)$}. However, as we have just shown, for any positive definite kernel {$K$}, there is always at least one such representation, given by the canonical feature map above. \end{remark}
Similar to the scalar setting \citep{Minh-Niyogi-Yao}, feature maps are generally {\it non-unique}, as we show below. However, they are all essentially equivalent, similar to the scalar case, as shown by the following.
\begin{lemma}\label{lemma:f-feature}
Let {$(\Phi_K, \F_K)$} be any feature map for {$K$}. Then {$\forall f \in \H_K$}, there exists an {$\h \in \F_K$} such that
\begin{equation} f(x) = K_x^{*}f = \Phi_K(x)^{*}\h,\;\;\;\forall x \in \X.
\end{equation} Furthermore, {
$||f||_{\H_K} = ||\h||_{\F_K}$.
}
\end{lemma}
\section{Random Operator-Valued Feature Maps} \label{section:operator-random}
We now present the generalization of the random Fourier feature map from the scalar setting to the operator-valued setting. We begin by reviewing Bochner's Theorem in the operator-valued setting in Section \ref{section:bochner}, which immediately leads to the formal construction of the Fourier feature maps in Section \ref{section:feature-construction}. As we stated, in the operator-valued setting, we need to explicitly construct the required probability measure. This is done individually for some specific kernels in Section \ref{section:special-construction} and for a general kernel in Section \ref{section:probability-construction}.
\subsection{Operator-Valued Bochner Theorem} \label{section:bochner} The operator-valued version of Bochner's Theorem that we present here is from \citep{Neeb:Operator1998};
see also \citep{Falb:1969,Carmeli:2010}. Throughout this section, let {$\H$} be a separable Hilbert space. Let {$\L(\H)$} denote the Banach space of bounded linear operators on {$\H$}, {$\Sym(\H) \subset \L(\H)$} denote the subspace of bounded, self-adjoint operators on {$\H$}, and {$\Sym^{+}(\H) \subset \Sym(\H)$} denote the set of self-adjoint, bounded, positive operators on {$\H$}. An operator {$A \in \L(\H)$} is said to be trace class, denoted by {$A \in \Tr(\H)$}, if {$\sum_{k=1}^{\infty}\la\e_k, (A^{*}A)^{1/2}\e_k\ra < \infty$} for any orthonormal basis {$\{\e_k\}_{k=1}^{\infty}$} in {$\H$}. If {$A \in \Tr(\H)$}, then the {\it trace} of {$A$} is {$\trace(A) = \sum_{k=1}^{\infty}\la \e_k, A\e_k\ra$}, which is independent of the orthonormal basis.
{\bf Positive operator-valued measures}.
Let {$(\X, \Sigma)$} be a measurable space, where {$\X$} is a non-empty set and {$\Sigma$} is a $\sigma$-algebra of subsets of {$\X$}. A {$\Sym^{+}(\H)$}-valued measure {$\mu$} is a {\it countably additive}\footnote{Falb \citep{Falb:1969} used {\it weakly countably additive} vector measures, which are in fact {\it countably additive} \citep{Diestel:Sequences}.} function {$\mu: \Sigma \mapto \Sym^{+}(\H)$}, with {$\mu(\emptyset) = 0$}, so that for any sequence of pairwise disjoint subsets {$\{A_j\}_{j=1}^{\infty}$} in {$\Sigma$}, \begin{align} \mu(\cup_{j=1}^{\infty}A_j) = \sum_{j=1}^{\infty}\mu(A_j),\;\;\; \text{which converges in the operator norm on $\L(\H)$}. \end{align}
To state Bochner's Theorem for operator-valued measures, we need the notions of finite {$\Sym^{+}(\H)$}-valued measure and ultraweak continuity.
Let {$\X = \R^n$} (a locally compact space in general). A {\it finite} {$\Sym^{+}(\H)$}-valued Radon measure is a {$\Sym^{+}(\H)$}-valued measure such that for any operator {$A \in \Sym^{+}(\H)\cap \Tr(\H)$}, the scalar measure \begin{align} \mu_A: \Sigma \mapto \R^{+}, \;\;\; \mu_{A}(B) = \trace(A\mu(B)), \;\;\; B \in \Sigma, \end{align} is a finite positive Radon measure on {$\R^n$}.
A function {$k:\R^n \mapto \L(\H)$} is said to be {\it ultraweakly continuous} if for each operator {$A \in \Tr(\H)$}, the following scalar function is continuous \begin{align} k_A: \R^n \mapto \R, \;\;\; k_A(x) = \trace(Ak(x)). \end{align}
The following is then the generalization of Bochner's Theorem to the operator-valued setting. \begin{theorem} [\textbf{Operator-valued Bochner Theorem} \citep{Neeb:Operator1998}] \label{theorem:Bochner-operator}
An ultraweakly continuous function {$k: \R^n \mapto \L(\H)$} is positive definite if and only if there exists a finite {$\Sym^{+}(\H)$}-valued measure {$\mu$} on
{$\R^n$} such that
\begin{align} \label{equation:k-expression1} k(x) = \hat{\mu}(x) = \int_{\R^n}\exp(i \la \omega,x\ra)d\mu(\omega) = \int_{\R^n} \exp(-i \la \omega,x\ra)d\mu(\omega). \end{align} The Radon measure {$\mu$} is uniquely determined by {$\hat{\mu} = k$}. \end{theorem}
{\bf General case}. The above version of Bochner's Theorem holds in a much more general setting, where $\R^n$ is replaced by a locally compact abelian group $G$. For the general version, we refer to \citep{Neeb:Operator1998}.
{\bf Determining {$\mu$}
from {$k$}}. In order to compute
feature maps using Bochner's Theorem, we need to compute $\mu$
from the given operator-valued function $k$. Suppose that the density function $\mu(\omega)$ of $\mu$ with respect to the Lebesgue measure on $\R^n$ exists.
Let {$\{\e_j\}_{j=1}^{\infty}$} be any orthonormal basis for {$\H$}. For any vector {$\a = \sum_{j=1}^{\infty}a_j\e_j\in \H$}, we have \begin{align*} \mu(\omega)\a = \sum_{j=1}^{\infty}\la \e_j, \mu(\omega)\a\ra\e_j = \sum_{j,l=1}^{\infty}a_l\la \e_j, \mu(\omega)\e_l\ra\e_j. \end{align*} Thus {$\mu(\omega)$} is completely determined by the infinite matrix of inner products {$(\la \e_j, \mu(\omega)\e_l\ra)_{j,l=1}^{\infty}$}, which can be computed from {$k$} via the inverse Fourier transform {$\Fcal^{-1}$} as follows.
\begin{proposition} \label{proposition:mu-inversion} Assume that {$\la \e_j, k(x)\e_l\ra \in L^1(\R^n)$} {$\forall j,l \in \N$}. Then the density function $\mu(\omega)$ of $\mu$ with respect to the Lebesgue measure on $\R^n$ exists and is given by \begin{align} \la \e_j, \mu(\omega)\e_l\ra =\Fcal^{-1}[\la \e_j, k(x)\e_l\ra]. \end{align}
\end{proposition}
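To make the inversion concrete, the following is a minimal numerical sketch (in Python/NumPy; the choice $n=1$, the Gaussian profile, and all constants are ours, purely for illustration) checking that the inverse Fourier transform of a single entry {$\la \e_j, k(x)\e_l\ra$} recovers the corresponding entry of the density, in a case where the inverse transform is known in closed form.
\begin{verbatim}
import numpy as np

# Illustrative 1D check of Proposition (mu-inversion): for a Gaussian entry
# <e_j, k(x) e_l> = exp(-x^2/sigma^2), the inverse Fourier transform
# (1/(2*pi)) * int exp(i*omega*x) k(x) dx equals
# (sigma/(2*sqrt(pi))) * exp(-sigma^2*omega^2/4).
sigma, omega = 1.3, 0.7
x, dx = np.linspace(-40.0, 40.0, 400001, retstep=True)
entry = np.exp(-x**2 / sigma**2)

numeric = np.sum(np.cos(omega * x) * entry) * dx / (2.0 * np.pi)
closed_form = sigma / (2.0 * np.sqrt(np.pi)) * np.exp(-sigma**2 * omega**2 / 4.0)
print(numeric, closed_form)   # the two values agree to high accuracy
\end{verbatim}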
The positive definite function $k$ gives rise to the shift-invariant positive definite kernel
\begin{align} \label{equation:Bochner1} K(x,t) = k(x-t) = \int_{\R^n} \exp(-i \la \omega,x-t\ra)d\mu(\omega). \end{align}
Similar to the scalar case, using the property {$K(x,t) = K(t,x)^{*}$} and the symmetry of $\mu$, we obtain \begin{align} \label{equation:K-expression1} K(x,t) = \int_{\R^n}\cos(\la \omega, x-t\ra)d\mu(\omega). \end{align} In order to generalize the random Fourier feature approach to the operator-valued kernel {$K(x,t)$},
we need to construct a probability measure $\rho$ on {$\R^n$} such that {$K(x,t)$} is the expectation of an operator-valued random variable with respect to $\rho$. Equivalently, we need to factorize the density $\mu(\omega)$ as
\begin{align} \label{equation:factorize} \mu(\omega) = \tilde{\mu}(\omega)\rho(\omega), \end{align} where $\tilde{\mu}(\omega)$ is a finite {$\Sym^{+}(\H)$}-valued function on {$\R^n$} and $\rho(\omega)$ is the density function of the probability measure $\rho$.
\begin{remark} Throughout the rest of the paper, we assume that $k$ satisfies the assumptions of Proposition \ref{proposition:mu-inversion}. We then identify the measures $\mu$ and $\rho$ by their density functions $\mu(\omega)$ and $\rho(\omega)$, respectively, with respect to the Lebesgue measure. \end{remark}
{\bf Key differences between the scalar and operator-valued settings}. Before proceeding with the probability measure and feature map construction, we point out {\it two key differences} between the scalar and operator-valued settings. \begin{enumerate} \item In the scalar setting, with normalization, $\mu$ is a probability measure {\it uniquely determined} via $\hat{\mu} = k$. In the operator-valued setting, the operator-valued measure $\mu$ is also uniquely determined by $k$, as stated in Proposition \ref{proposition:mu-inversion}. However, the probability measure $\rho$ in Eq.~(\ref{equation:factorize}) needs to be {\it explicitly} constructed; that is, it is {\it not} automatically determined by $k$. We present a general formula for computing $\rho$ in Section \ref{section:probability-construction}.
\item The factorization stated in Eq.~(\ref{equation:factorize}) is generally {\it non-unique}. As we show below, in general, there are many (in fact, potentially infinitely many) pairs {$(\tilde{\mu}, \rho)$} such that Eq.~(\ref{equation:factorize}) holds. Thus there are generally (infinitely) many operator-valued feature maps corresponding to the operator-valued version of Bochner's Theorem. We illustrate this property via examples in Sections \ref{section:feature-construction} and \ref{section:probability-construction} below. \end{enumerate}
\subsection{Formal Construction of Approximate Fourier Feature Maps} \label{section:feature-construction}
Assuming for the moment that we have a pair $(\tilde{\mu}, \rho)$ satisfying the factorization in Eq.~(\ref{equation:factorize}), Eq.~(\ref{equation:K-expression1}) takes the form { \begin{align} K(x,t) &= \int_{\R^n}\cos(\la \omega, x-t\ra) \tilde{\mu}(\omega)d\rho(\omega)
= \bE_{\rho}[\cos(\la \omega, x-t\ra)\tilde{\mu}(\omega)]. \end{align} }
Let {$\{\omega_j\}_{j=1}^D$}, {$D \in \N$}, be $D$ points in $\R^n$ sampled independently from {$\rho$}. Then {$K(x,t)$} can be approximated by the empirical sum
{ \begin{align} \label{equation:KD} \hat{K}_D(x,t) &= \hat{k}_D(x-t) = \frac{1}{D}\sum_{l=1}^D\cos(\la \omega_l, x-t\ra)\tilde{\mu}(\omega_l) \nonumber \\ & = \frac{1}{D}\sum_{l=1}^D\cos(\la \omega_l, x\ra)\cos(\la \omega_l, t\ra)\tilde{\mu}(\omega_l)
 + \frac{1}{D}\sum_{l=1}^D\sin(\la \omega_l, x\ra)\sin(\la \omega_l, t\ra)\tilde{\mu}(\omega_l). \end{align} } Let {$\F$} be a separable Hilbert space and {$\psi:\R^n \mapto \L(\H,\F)$} be such that \begin{align} \label{equation:mu-decomp} \tilde{\mu}(\omega) = \psi(\omega)^{*}\psi(\omega), \;\;\;\; \psi(\omega): \H \mapto \F. \end{align} Such a pair {$(\psi, \F)$} always exists, with one example being {$\F = \H$} and {$\psi(\omega) = \sqrt{\tilde{\mu}(\omega)}$}.
\begin{remark} As we demonstrate via the examples below, the decomposition $\tilde{\mu}(\omega) = \psi(\omega)^{*}\psi(\omega)$ is also generally non-unique, which is another reason for the non-uniqueness of the approximate feature maps. \end{remark}
{\bf Operator-valued Fourier feature map}. The decompositions for {$\hat{K}_D$} in Eqs.~(\ref{equation:KD}) and (\ref{equation:mu-decomp}) immediately give us the following approximate feature map \begin{align} \label{equation:feature-general} \hat{\Phi}_D(x) = \frac{1}{\sqrt{D}} \begin{pmatrix} \cos(\la \omega_1, x\ra)\psi(\omega_1)\\ \sin(\la \omega_1, x\ra)\psi(\omega_1)\\ \vdots\\ \cos(\la \omega_D, x\ra)\psi(\omega_D)\\ \sin(\la \omega_D, x\ra)\psi(\omega_D) \end{pmatrix} :\H \mapto \F^{2D},
\end{align} with \begin{align} \hat{K}_D(x,t) = [\hat{\Phi}_D(x)]^{*}[\hat{\Phi}_D(t)]. \end{align}
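For a finite-dimensional output space, the assembly of {$\hat{\Phi}_D$} and {$\hat{K}_D$} can be sketched as follows (Python/NumPy; the function \texttt{psi} and the array \texttt{omegas} are placeholders for a particular choice of $\psi$ and of samples from $\rho$, so this is an illustrative skeleton rather than a prescribed implementation).
\begin{verbatim}
import numpy as np

def feature_map(x, omegas, psi):
    # Stack the cosine/sine blocks of Eq. (feature-general): for D frequencies and
    # psi(omega) of shape (p, dim_H), returns a (2*D*p, dim_H) matrix Phi_D(x).
    blocks = []
    for omega in omegas:
        P = psi(omega)
        blocks.append(np.cos(omega @ x) * P)
        blocks.append(np.sin(omega @ x) * P)
    return np.vstack(blocks) / np.sqrt(len(omegas))

def approx_kernel(x, t, omegas, psi):
    # Approximate K(x,t) = k(x-t) by Phi_D(x)^T Phi_D(t), cf. Eq. (KD).
    return feature_map(x, omegas, psi).T @ feature_map(t, omegas, psi)
\end{verbatim}
Here {$\psi(\omega)^{T}\psi(\omega)$} plays the role of {$\tilde{\mu}(\omega)$}; any of the factorizations discussed below can be plugged in.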
{\bf Special cases}. For $\H=\R$, we have $\tilde{\mu} = 1$ (assuming normalization) and $\rho = \mu$, and we thus recover the Fourier features in the scalar setting. For $\H = \R^d$, for some $d \in \N$, we obtain the feature map in \citep{Romain:2016}.
\subsection{Probability Measure and Feature Map Construction in Some Special Cases} \label{section:special-construction}
We first consider several examples of operator-valued kernels arising from scalar-valued kernels. For these examples, both the $\Sym^{+}(\H)$-valued measure $\mu$ and the probability measure $\rho$ can be derived from the corresponding probability measure for the scalar kernels. These examples have also been considered by \citep{Romain:2016}; however, we treat them in greater depth here, particularly the curl-free and div-free kernels (see details below). One important aspect to note is that the approach for computing the probability measure $\rho$ in this section is specific to each kernel and does not generalize to a general kernel. We return to these examples in the general setting of Section \ref{section:probability-construction}, where we present a general formula for computing $\rho$ for a general kernel $k$.
\begin{example}[\textbf{Separable kernels}] \end{example} Consider the simplest case, where the operator-valued positive definite function $k$ has the form \begin{align} k(x) = g(x)A, \end{align} where {$A \in \Sym^{+}(\H)$} and {$g:\R^n \mapto \R$} is a scalar-valued positive definite function. Let $\rho_0$ be the probability measure on $\R^n$ such that {$g(x) = \bE_{\rho_0}[e^{- i \la \omega, x\ra}]$}. It follows immediately that \begin{align} k(x) = \int_{\R^n}e^{-i \la \omega, x\ra}d\mu(\omega) \;\;\; \text{where}\;\;\; \mu(\omega) = A \rho_0(\omega). \end{align} Thus we can set \begin{align} \tilde{\mu}(\omega) = A, \;\;\; \rho = \rho_0. \end{align} For the operator {$\psi(\omega)$} in Eq.~(\ref{equation:mu-decomp}), we can set either \begin{align} \psi(\omega) = \sqrt{A}, \end{align} or, if {$A$} is a symmetric positive definite matrix, we can also compute {$\psi$} via the Cholesky decomposition of {$A$} by setting \begin{align} \psi(\omega) = U, \;\;\; \text{where}\;\;\; A = U^TU, \end{align} with $U$ being an upper triangular matrix. Thus in this case, with the probability measure $\rho = \rho_0$, there are {\it at least two choices} for the feature map $\hat{\Phi}_D$, each resulting from one choice of $\psi(\omega)$ as discussed above. In practice, a particular $\psi$ should be chosen based on its computational complexity, which in turn depends on the structure of $A$ itself.
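As a small illustration of these two choices (the matrix $A$ below is arbitrary and purely illustrative), one may verify numerically that both factorizations reproduce $A$:
\begin{verbatim}
import numpy as np

# Illustrative symmetric positive definite matrix A.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Choice 1: psi(omega) = sqrt(A), via the spectral decomposition of A.
w, V = np.linalg.eigh(A)
psi_sqrt = V @ np.diag(np.sqrt(w)) @ V.T

# Choice 2: psi(omega) = U, the Cholesky factor with A = U^T U.
psi_chol = np.linalg.cholesky(A).T

# Both satisfy psi^T psi = A = mu_tilde(omega), hence either yields a valid feature map.
assert np.allclose(psi_sqrt.T @ psi_sqrt, A)
assert np.allclose(psi_chol.T @ psi_chol, A)
\end{verbatim}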
\begin{example}[\textbf{Curl-free and divergence-free kernels}] \end{example}
Consider next the matrix-valued curl-free and divergence-free kernels in \citep{Fuselier2006}. In \citep{Romain:2016}, the authors present what we call the {\it unbounded feature maps} below for these kernels, without, however, the analytical expression for the feature map of the div-free kernel. We now present the analytical expressions for the feature maps of both these kernels. More importantly, we show that, apart from the unbounded feature maps, there are generally {\it infinitely many bounded feature maps} associated with these kernels.
Let {$\phi$} be a scalar-valued twice-differentiable positive definite function on {$\R^n$}. Let {$\nabla$} denote the {$n \times 1$} gradient operator and {$\Delta = \nabla^T \nabla$} denote the Laplacian operator. Define { \begin{equation} k_{\div} = (-\Delta I_n + \nabla \nabla^T)\phi,\;\;\;k_{\curl} = - \nabla \nabla^T \phi. \end{equation} } Then {$k_{\div}$} and {$k_{\curl}$} are {$n \times n$} matrices, whose columns are divergence-free and curl-free functions, respectively.
The functions {$k_{\curl}$} and {$k_{\div}$} give rise to the corresponding positive definite kernels \begin{align*} K_{\curl}(x,t) = k_{\curl}(x-t),\;\;\; \text{and}\;\;\; K_{\div}(x,t) = k_{\div}(x-t). \end{align*}
For the Gaussian case {$\phi(x) = \exp(-\frac{||x||^2}{\sigma^2})$}, the functions $k_{\curl}$ and $k_{\div}$ are given by \begin{align} \label{equation:curl-div-k}
k_{\curl}(x) &= \frac{2}{\sigma^2}\exp(-\frac{||x||^2}{\sigma^2})[I_n - \frac{2}{\sigma^2}xx^T], \\
k_{\div}(x) &= \frac{2}{\sigma^2}\exp(-\frac{||x||^2}{\sigma^2})
[((n-1) - \frac{2}{\sigma^2}||x||^2)I_n + \frac{2}{\sigma^2}xx^T]. \end{align}
\begin{lemma} \label{lemma:curl-div} Let $\rho_0$ be the probability measure on {$\R^n$} such that {$\phi(x) = \bE_{\rho_0}[e^{-i \la \omega, x\ra}] = \hat{\rho_0}(x)$}. Then, under the condition
{$\int_{\R^n}||\omega||^2d\rho_0(\omega) < \infty$},
we have \begin{align} k_{\curl}(x) &= \int_{\R^n}e^{-i \la \omega, x\ra}\omega\omega^T\rho_0(\omega)d\omega = \int_{\R^n}e^{-i \la \omega, x\ra} \mu_{\curl}(\omega)d\omega,
\\
k_{\div}(x) &= \int_{\R^n}e^{-i \la \omega, x\ra}[||\omega||^2I_n - \omega\omega^T]\rho_0(\omega)d\omega = \int_{\R^n}e^{-i \la \omega, x\ra} \mu_{\div}(\omega)d\omega, \end{align}
where {$\mu_{\curl}(\omega) = \omega\omega^T\rho_0(\omega)$} and {$\mu_{\div}(\omega) = [||\omega||^2I_n - \omega\omega^T]\rho_0(\omega)$}. \end{lemma}
The condition {$\int_{\R^n}||\omega||^2d\rho_0(\omega) < \infty$} in Lemma \ref{lemma:curl-div} guarantees that $\phi$ is twice-differentiable, which is the underlying assumption for curl-free and divergence-free kernels.
{\bf Unbounded feature maps}. Consider first the curl-free kernel. From the expression {$\mu_{\curl}(\omega) = \omega\omega^T\rho_0(\omega)$}, we immediately see that for the factorization in Eq.~(\ref{equation:factorize}), we can set { \begin{align*} \mu_{\curl}(\omega) = \tilde{\mu}(\omega) \rho(\omega), \;\;\; \text{with} \;\;\; \tilde{\mu}(\omega) = \omega\omega^T, \rho = \rho_0. \end{align*} }
For the Gaussian case, {$\rho_0(\omega) = \frac{(\sigma \sqrt{\pi})^n}{(2\pi)^n} e^{-\frac{\sigma^2||\omega||^2}{4}} \sim\Ncal(0, \frac{2}{\sigma^2}I_n)$}. In Eq.~(\ref{equation:mu-decomp}), we can set { \begin{align} \label{equation:curl-unbounded} \psi(\omega) = \omega^T, \;\;\;\text{so that}\;\;\; \hat{\Phi}_D(x) \text{\;\;is a matrix of size\;\;} 2D \times n. \end{align} } We can also set \begin{align} \label{equation:curl-unbounded-2}
\psi(\omega) = \sqrt{\omega \omega^T} = \frac{\omega \omega^T}{||\omega||},\;\;\;\text{so that} \;\;\; \hat{\Phi}_D(x) \text{\;\;is a matrix of size\;\;} 2Dn \times n. \end{align} Clearly the choice for {$\psi(\omega)$} in Eq.~(\ref{equation:curl-unbounded}) is preferable computationally to that in Eq.~(\ref{equation:curl-unbounded-2}). One thing that can be observed immediately is that both {$\mu_{\curl}$} and {$\psi$} are {\it unbounded} functions of {$\omega$}, which complicates the convergence analysis of the corresponding kernel approximation (see Section \ref{section:convergence} for further discussion).
{\bf Bounded feature maps}. The unbounded feature maps above correspond to one particular choice of the probability measure $\rho$, namely $\rho = \rho_0$. However, this is not the only valid choice for $\rho$. We now exhibit another choice for $\rho$ that results in a bounded feature map, whose convergence behavior is much simpler to analyze. Consider the Gaussian case, with {$\rho_0$} as given above. Clearly, another valid factorization of {$\mu_{\curl}$} is given by the factors { \begin{align}
\tilde{\mu}(\omega) = \omega\omega^T e^{-\frac{\sigma^2||\omega||^2}{8}}2^{n/2},\;\;\; \rho(\omega) = \frac{1}{2^{n/2}}\frac{(\sigma \sqrt{\pi})^n}{(2\pi)^n}e^{-\frac{\sigma^2||\omega||^2}{8}} \sim \Ncal(0, \frac{4}{\sigma^2}I_n). \end{align} } Then {$\tilde{\mu}(\omega)$} is a bounded function of {$\omega$}, with the corresponding bounded map \begin{align} \label{equation:curl-bounded}
\psi(\omega) = \omega^T e^{-\frac{\sigma^2||\omega||^2}{16}}2^{n/4}. \end{align}
For the divergence-free kernel, we have
{$\sqrt{||\omega||^2I_n - \omega\omega^T} = ||\omega||I_n - \frac{\omega\omega^T}{||\omega||}$}, giving
the corresponding maps \begin{align} \label{equation:div-feature}
\psi(\omega) &=(||\omega||I_n - \frac{\omega\omega^T}{||\omega||}) & \;\;\text{(unbounded feature map)},\;\; \\
\psi(\omega) &=(||\omega||I_n - \frac{\omega\omega^T}{||\omega||})e^{-\frac{\sigma^2||\omega||^2}{16}}2^{n/4} & \;\; \text{(bounded feature map)}. \end{align}
Since there are infinitely many ways to split the Gaussian function {$e^{-\frac{\sigma^2||\omega||^2}{4}}$} into a product of two Gaussian functions, it follows that there are {\it infinitely many bounded Fourier feature maps} associated with both the curl-free and div-free kernels induced by the Gaussian kernel. We show below that, under appropriate conditions on $k$, bounded feature maps always exist.
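For concreteness, the following minimal sketch (Python/NumPy; the dimension, bandwidth, test point, and sample size are arbitrary choices of ours) checks numerically that the bounded curl-free map of Eq.~(\ref{equation:curl-bounded}), with frequencies drawn from {$\Ncal(0, \frac{4}{\sigma^2}I_n)$}, reproduces the exact kernel of Eq.~(\ref{equation:curl-div-k}) as $D$ grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, sigma, D = 3, 1.0, 50000

def k_curl(z):
    # Exact Gaussian-induced curl-free kernel, Eq. (curl-div-k).
    z = np.asarray(z, dtype=float)
    c = (2.0 / sigma**2) * np.exp(-np.dot(z, z) / sigma**2)
    return c * (np.eye(n) - (2.0 / sigma**2) * np.outer(z, z))

# Bounded scheme: omega ~ N(0, (4/sigma^2) I_n) and
# mu_tilde(omega) = 2^{n/2} exp(-sigma^2 ||omega||^2 / 8) omega omega^T.
omegas = rng.normal(scale=2.0 / sigma, size=(D, n))
z = np.array([0.3, -0.1, 0.2])
k_hat = np.zeros((n, n))
for w in omegas:
    mu_tilde = 2.0**(n / 2.0) * np.exp(-sigma**2 * np.dot(w, w) / 8.0) * np.outer(w, w)
    k_hat += np.cos(np.dot(w, z)) * mu_tilde
k_hat /= D

print(np.linalg.norm(k_hat - k_curl(z)) / np.linalg.norm(k_curl(z)))  # small relative error
\end{verbatim}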
\begin{comment} \begin{proof} [\textbf{Proof of Lemma \ref{lemma:curl-div}}]
We make use of the following property of the Fourier transform (see e.g \citep{Jones:Lebesgue}). Assume that $f$ and $||x||f$ are both integrable on $\R^n$, then the Fourier transform $\hat{f}$ is differentiable and \begin{align*} \frac{\partial \hat{f}}{\partial \omega_j}(\omega) = - \widehat{i x_j f}(\omega), \;\;\; 1 \leq j \leq n. \end{align*}
Assume further that $||x||^2f$ is integrable on $\R^n$, then this rule can be applied twice to give \begin{align*} \frac{\partial^2 \hat{f}}{\partial \omega_j \partial \omega_k}(\omega) = \frac{\partial}{\partial \omega_j}\left(\frac{\partial \hat{f}}{\partial \omega_k}\right) = - \frac{\partial}{\partial \omega_j}[\widehat{i x_k f}] = -\widehat{x_j x_kf}. \end{align*}
For the curl-free kernel, we have $\phi(x) = \hat{\rho_0}(x)$ and consequently, under the assumption that $\int_{\R^n}||\omega||^2\rho_0(\omega) < \infty$, we have \begin{align*} [k_{\curl}(x)]_{jk} = - \frac{\partial^2\phi}{\partial x_j \partial x_k}(x) = -\frac{\partial^2\hat{\rho_0}}{\partial x_j \partial x_k}(x) = \widehat{\omega_j \omega_k \rho_0}(x),\;\;\; 1 \leq j,k \leq n. \end{align*} In other words, \begin{align*} [k_{\curl}(x)]_{jk} = \int_{\R^n}e^{-i \la \omega, x\ra} \omega_j \omega_k \rho_0(\omega)d\omega. \end{align*} It thus follows that \begin{align*} k_{\curl}(x) = \int_{\R^n}e^{-i \la \omega, x\ra }\omega \omega^T \rho_0(\omega)d\omega = \int_{\R^n}e^{-i \la \omega, x\ra }\mu(\omega)d\omega, \end{align*} where $\mu(\omega) = \omega \omega^T \rho_0(\omega)$. The proof for $k_{\div}$ is entirely similar. \end{proof} \end{comment}
\subsection{First Main Result: Probability Measure Construction in the General Case} \label{section:probability-construction}
For the separable, curl-free, and div-free kernels, we obtain a probability measure {$\rho$} directly from the corresponding scalar-valued kernels. We now show how to construct {$\rho$} given a general {$k$}, under appropriate assumptions on {$k$}. Furthermore, we show that the corresponding feature map is {\it bounded}, in the sense that {$\tilde{\mu}(\omega)$} is a bounded function of {$\omega$} (see the precise statement in Corollary \ref{corollary:measure-trace}).
\begin{proposition} \label{proposition:Bochner-projection}
Let {$K:\R^n \times \R^n \mapto \L(\H)$} be an ultraweakly continuous shift-invariant positive definite kernel. Let $\mu$ be the unique finite
{$\Sym^{+}(\H)$}-valued measure satisfying Eq.~(\ref{equation:Bochner1}). Then {$\forall \a\in \H$}, {$\a \neq 0$}, the scalar-valued kernel defined by {$K_{\a}(x,t) = \la \a, K(x,t)\a\ra$} is positive definite. Furthermore, there exists a unique finite positive Borel measure {$\mu_{\a}$} on {$\R^n$} such that $K_{\a}$ is the Fourier transform of $\mu_{\a}$, that is \begin{align} K_{\a}(x,t) = \int_{\R^n}\exp(-i \la \omega,x-t\ra)d\mu_{\a}(\omega). \end{align} The measure $\mu_{\a}$ is given by \begin{align} \mu_{\a}(\omega) = \la \a, \mu(\omega)\a\ra,\;\;\; \omega \in \R^n. \end{align} \end{proposition}
Let {$\{\e_j\}_{j=1}^{\infty}$} be any orthonormal basis for {$\H$}. By Proposition \ref{proposition:Bochner-projection}, $\forall j \in \N$, the scalar-valued kernel \begin{align} K_{jj}(x,t) = K_{\e_j}(x,t) = \la \e_j, K(x,t)\e_j\ra \end{align} is positive definite and is the Fourier transform of the finite positive Borel measure \begin{align} \mu_{jj}(\omega) = \la \e_j, \mu(\omega)\e_j\ra, \;\;\; \omega \in \R^n. \end{align} \begin{comment} \begin{proposition} \label{proposition:measure-normalize} If $k$ is normalized so that $k(0) = I$, the identity operator on $\H$, then $\mu_{jj}$ is a probability measure on $\R^n$ $\forall j \in \N$. In general, if $k(0)$ is invertible, then the following measure \begin{align} \mu_{\norm}(\omega) = k(0)^{-1/2}\mu(\omega)k(0)^{-1/2} \end{align} is a finite $\Sym^{+}(\H)$-valued measure on $\R^n$, and $(\mu_{\norm})_{jj}(\omega) = \la \e_j, (\mu_{\norm})(\omega) \e_j\ra$ is a probability measure on $\R^n$ $\forall j \in \N$. \end{proposition} Under the hypothesis of Proposition \ref{proposition:measure-normalize}, we assume further that there exists at least one $k \in \N$ such that $(\mu_{\norm})_{kk}(\omega) > 0$ $\forall \omega \in \R^n$. Then in Eq.~(\ref{equation:factorize}), we set \begin{align} \label{equation:constructionI} \rho(\omega) = (\mu_{\norm})_{kk}(\omega),\;\;\; \tilde{\mu}(\omega) = \frac{\mu(\omega)}{(\mu_{\norm})_{kk}(\omega)}. \end{align} This construction of $\rho$ and $\tilde{\mu}$ obviously depends on the choice of the orthonormal basis $\{\e_j\}_{j=1}^{\infty}$. \end{comment}
The measures {$\mu_{jj}$, $j \in \N$}, which depend on the choice of orthonormal basis {$\{\e_j\}_{j\in \N}$}, collectively give rise to the following measure, which is independent of {$\{\e_j\}_{j\in \N}$}.
\begin{theorem} [\textbf{Probability Measure Construction}] \label{theorem:measure-trace} Assume that the positive definite function $k$ in Bochner's Theorem satisfies: (i) {$k(x) \in \Tr(\H)$ $\forall x \in \R^n$}, and
(ii) {$\int_{\R^n}|\trace[k(x)]|dx < \infty$}. Then its corresponding finite {$\Sym^{+}(\H)$}-valued measure $\mu$ satisfies \begin{align} \mu(\omega) \in \Tr(\H) \;\;\;\forall \omega \in \R^n,\;\;\;
\trace[\mu(\omega)] \leq \frac{1}{(2\pi)^n}\int_{\R^n}|\trace[k(x)]|dx. \end{align} The following is a finite positive Borel measure on $\R^n$ \begin{align} \mu_{\trace}(\omega) &= \trace(\mu(\omega)) = \sum_{j=1}^{\infty}\mu_{jj}(\omega)
= \frac{1}{(2\pi)^n}\int_{\R^n}\exp(i \la \omega, x\ra)\trace[k(x)]dx. \end{align}
The normalized measure {$\frac{\mu_{\trace}(\omega)}{\trace[k(0)]}$} is a probability measure on $\R^n$; indeed, setting $x = 0$ in Eq.~(\ref{equation:k-expression1}) gives $\mu_{\trace}(\R^n) = \trace[k(0)]$.
\end{theorem}
{\bf Special case}. For $\H = \R$, we obtain \begin{align} \mu_{\trace}(\omega) = \frac{1}{(2\pi)^n}\int_{\R^n}\exp(i \la \omega, x\ra)k(x)dx = \mu(\omega), \end{align} so that the scalar-setting is a special case of Theorem \ref{theorem:measure-trace}, as expected.
\begin{corollary} \label{corollary:measure-trace} Under the hypothesis of Theorem \ref{theorem:measure-trace}, in Eq.~(\ref{equation:factorize}) we can set \begin{align} \label{equation:constructionII} \rho(\omega) = \frac{\mu_{\trace}(\omega)}{\trace[k(0)]}, \;\;\; \tilde{\mu}(\omega) = \left\{ \begin{matrix} \trace[k(0)]\frac{\mu(\omega)}{\mu_{\trace}(\omega)}, & \mu_{\trace}(\omega) > 0\\ 0, & \mu_{\trace}(\omega) = 0. \end{matrix} \right. \end{align}
The function $\tilde{\mu}$ in Eq.~(\ref{equation:constructionII}) satisfies {$\tilde{\mu}(\omega) \in \Sym^{+}(\H)$} and has bounded trace, i.e.
{ \begin{align}
||\tilde{\mu}(\omega)||_{\trace} = \trace[\tilde{\mu}(\omega)] \leq \trace[k(0)] \;\;\;\forall \omega \in \R^n. \end{align} }
\end{corollary}
Let us now illustrate Theorem \ref{theorem:measure-trace} and Corollary \ref{corollary:measure-trace} on the separable, curl-free, and div-free kernels. We note that for the separable kernels, we obtain the same probability measure $\rho$ as in Section \ref{section:special-construction}. However, for the curl-free and div-free kernels, we obtain a different probability measure from that in Section \ref{section:special-construction}, which illustrates the non-uniqueness of $\rho$.
\begin{example}[\textbf{Separable kernels}] \end{example} For the {\it separable kernels} of the form $k(x) = g(x)A$, with {$A \in \Sym^{+}(\H) \cap \Tr(\H)$} and $g \in L^1(\R^n)$, we have \begin{align} \mu_{\trace}(\omega) = \trace(A)\frac{1}{(2\pi)^n}\int_{\R^n}\exp(i \la \omega, x\ra)g(x)dx = \trace(A)\rho_0(\omega). \end{align}
Since {$\trace[k(0)] = \trace(A)$}, we recover $\rho(\omega) = \rho_0(\omega)$. Here {$\tilde{\mu}(\omega) = A$} and {$||\tilde{\mu}(\omega)||_{\trace} = \trace(A) < \infty$}.
\begin{example}[\textbf{Curl-free and div-free kernels}] \end{example}
For the {\it curl-free kernel}, we have {$\mu(\omega) = \omega\omega^T\rho_0(\omega)$} and thus \begin{align}
\mu_{\trace}(\omega) = ||\omega||^2\rho_0(\omega), \end{align}
which is a finite measure by the assumption {$\int_{\R^n}||\omega||^2d\rho_0(\omega) < \infty$}. For the Gaussian case, since {$\trace[k(0)] = \frac{2n}{\sigma^2}$}, the corresponding probability measure is \begin{align}
\rho(\omega) = \frac{\sigma^2}{2n}||\omega||^2\rho_0(\omega). \end{align}
Similarly, for the {\it div-free kernel}, we have {$\mu(\omega) = [||\omega||^2I_n - \omega\omega^T]\rho_0(\omega)$} and thus \begin{align}
\mu_{\trace}(\omega) = (n-1)||\omega||^2\rho_0(\omega). \end{align} For the Gaussian case, since {$\trace[k(0)] = \frac{2n(n-1)}{\sigma^2}$}, the corresponding probability measure is \begin{align}
\rho(\omega) = \frac{\sigma^2}{2n}||\omega||^2\rho_0(\omega). \end{align} Clearly, for both the curl-free and div-free kernels, the probability measure {$\rho$} is {\it non-unique}. We can, for example, obtain {$\rho$} by normalizing the measure \begin{align}
(1+||\omega||^2)\rho_0(\omega) \end{align} and the corresponding {$\tilde{\mu}(\omega)$} still has bounded trace.
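In practice one also needs to draw samples from the measure {$\rho(\omega) = \frac{\sigma^2}{2n}||\omega||^2\rho_0(\omega)$} above. One possible way (a sketch of ours, with arbitrary constants; other samplers are equally valid) exploits its radial form: in the Gaussian case {$\rho(\omega) \propto ||\omega||^2 e^{-\sigma^2||\omega||^2/4}$}, so the direction can be drawn uniformly on the sphere and the squared radius from a Gamma distribution with shape $(n+2)/2$ and scale $4/\sigma^2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, sigma, D = 3, 1.0, 10000

# rho(omega) is proportional to ||omega||^2 exp(-sigma^2 ||omega||^2 / 4):
# draw a uniform direction and the squared radius from Gamma((n+2)/2, scale=4/sigma^2).
directions = rng.normal(size=(D, n))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = np.sqrt(rng.gamma(shape=(n + 2) / 2.0, scale=4.0 / sigma**2, size=D))
omegas = radii[:, None] * directions

# Sanity check: under this rho, E[||omega||^2] = 2(n+2)/sigma^2.
print(np.mean(np.sum(omegas**2, axis=1)), 2.0 * (n + 2) / sigma**2)
\end{verbatim}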
\begin{example}[\textbf{Sum of kernels}] \end{example} Consider now the probability measure and feature maps corresponding to the sum of two kernels; the construction readily generalizes to any finite sum of kernels. Let $k_1$, $k_2$ be two positive definite functions satisfying the assumptions of Theorem \ref{theorem:measure-trace}, which are the Fourier transforms of two $\Sym^{+}(\H)$-valued measures $\mu_1$ and $\mu_2$, respectively. Then their sum $k = k_1 + k_2$ is clearly the Fourier transform of $\mu = \mu_1 + \mu_2$. The probability measure $\rho$ and the function $\tilde{\mu}$ corresponding to $k$ are given by \begin{align} \rho(\omega) &= \frac{\mu_{\trace}(\omega)}{\trace[k(0)]} = \frac{\mu_{1,\trace}(\omega) + \mu_{2,\trace}(\omega)}{\trace[k_1(0)] + \trace[k_2(0)]}, \\ \tilde{\mu}(\omega) &= \left\{ \begin{matrix} (\trace[k_1(0)] + \trace[k_2(0)])\frac{\mu_1(\omega) + \mu_2(\omega)}{\mu_{1,\trace}(\omega) + \mu_{2,\trace}(\omega)}, & \mu_{1,\trace}(\omega) + \mu_{2,\trace}(\omega) > 0,\\ 0, & \mu_{1,\trace}(\omega) + \mu_{2, \trace}(\omega) = 0. \end{matrix} \right. \end{align} We then obtain the Fourier feature map for the kernel $K(x,t) = k(x-t)$ using Eqs.~(\ref{equation:mu-decomp}) and (\ref{equation:feature-general}).
We contrast this approach with the following concatenation approach. Let $(\Phi_{K_j}, \F_{K_j})$ be the feature maps associated with $K_j(x,t) = k_j(x-t)$, $j =1,2$. Let $\F_K$ be the direct Hilbert sum of $\F_{K_1}$ and $\F_{K_2}$. Consider the map $\Phi_K:\X \mapto \mathcal{L}(\H,\F_K)$, $\F_K = \F_{K_1} \oplus \F_{K_2}$, defined by \begin{align} \Phi_K(x) = \left( \begin{matrix} \Phi_{K_1}(x) \\ \Phi_{K_2}(x) \end{matrix} \right),\;\;\; \Phi_K(x)w = \left( \begin{matrix} \Phi_{K_1}(x)w \\ \Phi_{K_2}(x)w \end{matrix} \right),\;\;\; w\in \H, \end{align} which essentially stacks the two maps $\Phi_{K_1}$, $\Phi_{K_2}$ on top of each other. Then clearly \begin{align*} \Phi_K(x)^{*}\Phi_K(t) = \Phi_{K_1}(x)^{*}\Phi_{K_1}(t) + \Phi_{K_2}(x)^{*}\Phi_{K_2}(t) = K_1(x,t) + K_2(x,t) = K(x,t), \end{align*} so that $(\Phi_K, \F_K)$ is a feature map representation for $K = K_1 + K_2$. If $\dim(\F_{K_j}) < \infty$, then we have $\dim({\F_K}) = \dim(\F_{K_1}) + \dim(\F_{K_2})$. Thus, from a practical viewpoint, this approach can be computationally expensive, since the dimension of the feature map for the sum kernel can be very large, especially if we have a sum of many kernels.
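A toy numerical illustration of this dimension growth (all matrices below are random placeholders standing in for finite-dimensional feature maps evaluated at two points):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, F1, F2 = 2, 5, 7                      # illustrative dimensions

Phi1_x, Phi1_t = rng.normal(size=(F1, d)), rng.normal(size=(F1, d))
Phi2_x, Phi2_t = rng.normal(size=(F2, d)), rng.normal(size=(F2, d))

# Stacked feature map for K = K1 + K2; its feature dimension is F1 + F2.
Phi_x, Phi_t = np.vstack([Phi1_x, Phi2_x]), np.vstack([Phi1_t, Phi2_t])

K_sum = Phi1_x.T @ Phi1_t + Phi2_x.T @ Phi2_t
assert np.allclose(Phi_x.T @ Phi_t, K_sum)   # Phi_K(x)^* Phi_K(t) = K1(x,t) + K2(x,t)
\end{verbatim}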
\subsection{Second Main Result: Uniform Convergence Analysis} \label{section:convergence}
Having computed the approximate version {$\hat{K}_D$} of {$K$}, we need to show that this approximation is consistent, that is, {$\hat{K}_D$} approaches {$K$} in a suitable sense as {$D \approach \infty$}. Since {$K(x,t) = k(x-t)$}, it suffices for us to consider the convergence of {$\hat{k}_D$} towards {$k$}.
Recall the Hilbert space of Hilbert-Schmidt operators $\HS(\H)$, that is, the space of bounded operators on $\H$ satisfying \begin{align*}
||A||^2_{\HS} = \trace(A^{*}A) = \sum_{j=1}^{\infty}||A\e_j||^2 < \infty, \end{align*} for any orthonormal basis $\{\e_j\}_{j=1}^{\infty}$ in $\H$. Here
$||\;||_{\HS}$ denotes the Hilbert-Schmidt norm, which is induced by the Hilbert-Schmidt inner product \begin{align*} \la A, B\ra_{\HS} = \trace(A^{*}B) = \sum_{j=1}^{\infty}\la A\e_j, B\e_j\ra,\;\;\; A,B \in \HS(\H). \end{align*} In the following, we assume that {$k(x)\in \HS(\H)$}. Since we have shown that, under appropriate assumptions, bounded feature maps always exist, we focus exclusively on analyzing the convergence associated with them. Specifically, we show that for bounded feature maps, for any compact set {$\Omega \subset \R^n$}, we have \begin{align}
\sup_{x \in \Omega}||\hat{k}_D(x) - k(x)||_{\HS} \approach 0 \;\;\;\text{as}\;\;\; D \approach \infty, \end{align}
with high probability. This generalizes the convergence of {$\sup_{x \in \Omega}|\hat{k}_D(x) - k(x)|$} in the scalar setting. If {$\dim(\H) < \infty$}, then this is convergence in the Frobenius norm {$||\;||_F$}.
\begin{theorem} [{\bf Pointwise Convergence}] \label{theorem:convergence-pointwise}
Assume that {$||\tilde{\mu}(\omega)||_{\HS} \leq M$} almost surely and that {$\sigma^2(\tilde{\mu}(\omega)) = \bE_{\rho}[||\tilde{\mu}(\omega)||^2_{\HS}] < \infty$}. Then for any fixed {$x \in \R^n$}, \begin{align}
\bP[||\hat{k}_D(x) - k(x)||_{\HS} \geq \epsilon] \leq 2 \exp\left(-\frac{D\epsilon}{2M}\log\left[1+\frac{M\epsilon}{\sigma^2(\tilde{\mu}(\omega))}\right]\right) \;\;\;\forall \epsilon > 0. \end{align} \end{theorem}
{\bf Assumption 1}. Our uniform convergence analysis requires the following condition \begin{align}
\mb_1 = \int_{\R^n}||\omega||\;||\mu(\omega)||_{\HS}d\omega = \int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) < \infty. \end{align} In the scalar setting, we have {$\tilde{\mu}(\omega) = 1$}, and Assumption 1 becomes {
$\int_{\R^n}||\omega||d\rho(\omega) < \infty$,
} so that {$k$} is differentiable. This is {\it weaker} than the assumptions in \citep{Fourier:NIPS2007,Fourier:UAI2015,Fourier:NIPS2015}, which all require that {
$\int_{\R^n}||\omega||^2d\rho(\omega) < \infty$}, that is, $k$ is twice differentiable.
\begin{theorem} [{\bf Uniform Convergence}] \label{theorem:convergence-uniform}
Let {$\Omega \subset \R^n$} be compact with diameter {$\diam(\Omega)$}. Assume that {$||\tilde{\mu}(\omega)||_{\HS} \leq M$} almost surely and that {$\sigma^2(\tilde{\mu}(\omega)) = \bE_{\rho}[||\tilde{\mu}(\omega)||^2_{\HS}] < \infty$}. Then for any $\epsilon > 0$, \begin{align}
\bP\left(\sup_{x \in \Omega}||\hat{k}_D(x) - k(x)||_{\HS} \geq \epsilon\right) \leq & a(n)\left(\frac{\mb_1\diam(\Omega)}{\epsilon}\right)^{\frac{n}{n+1}} \nonumber \\ & \times \exp\left(-\frac{D\epsilon}{4(n+1)M}\log\left[1+\frac{M\epsilon}{2\sigma^2(\tilde{\mu}(\omega))}\right]\right), \end{align} where {$a(n) = 2^{\frac{3n+1}{n+1}}\left(n^{\frac{1}{n+1}} + n^{-\frac{n}{n+1}}\right)$}. \end{theorem}
\begin{example}[\textbf{Separable kernels}] \end{example}
For the {\it separable kernel}, Assumption 1 becomes {$||A||_{\HS} < \infty$}, in which case we have uniform convergence with {$M= ||A||_{\HS}$} and {$\sigma^2(\tilde{\mu}) = ||A||_{\HS}^2$}.
\begin{example}[\textbf{Curl-free and div-free kernels}] \end{example}
For the {\it curl-free kernel}, we have {$||\mu(\omega)||_{\HS} = ||\omega||^2\rho_0(\omega)$}, thus Assumption 1 becomes \begin{align}
\int_{\R^n}||\omega||^3d\rho_0(\omega) < \infty, \end{align}
which, being stronger than the assumption {$\int_{\R^n}||\omega||^2d\rho_0(\omega) < \infty$} in Section~\ref{section:probability-construction}, guarantees that a bounded feature map can be constructed, with \begin{align}
||\tilde{\mu}(\omega)||_{\HS} \leq ||\tilde{\mu}(\omega)||_{\trace}\leq \trace[k(0)]\;\;\;\text{and}\;\;\; \sigma^2[\tilde{\mu}(\omega)] \leq (\trace[k(0)])^2. \end{align} The case of the {\it div-free kernel} is entirely similar.
{\bf Comparison with the convergence analysis in \citep{Romain:2016}}. In \citep{Romain:2016}, the authors carried out convergence analysis in the spectral norm for matrix-valued kernels. Our results are for the more general setting of operator-valued kernels, which induce RKHS of functions with values in a Hilbert space. In this setting, convergence in the Hilbert-Schmidt norm is {\it strictly stronger} than convergence in the spectral norm. Furthermore, as with previous results in the scalar setting \citep{Fourier:NIPS2007,Fourier:UAI2015,Fourier:NIPS2015}, the convergence in \citep{Romain:2016} also requires twice-differentiable kernels, whereas we require the {\it weaker assumption} of $C^1$-differentiability.
\section{Vector-Valued Learning with Operator-Valued Feature Maps} \label{section:learning}
Having discussed operator-valued feature maps and their random approximations, we now show how they can be applied in the context of learning in RKHS of vector-valued functions. Let {$\W,\Y$} be two Hilbert spaces, {$C:\W \mapto \Y$} a bounded operator, {$K:\R^n \times \R^n \mapto \L(\W)$} a positive definite kernel with the corresponding RKHS $\H_K$ of $\mathcal{W}$-valued functions, and {$V$} a convex loss function. Consider the following general learning problem from \citep{Minh:JMLR2016} \begin{align} \label{equation:general}
f_{\z,\gamma} = \argmin_{f \in \H_K} &\frac{1}{l}\sum_{i=1}^lV(y_i, Cf(x_i)) + \gamma_A ||f||^2_{\H_K}
+ \gamma_I \la \f, M\f\ra_{\mathcal{W}^{u+l}}. \end{align} Here {$\z = (\x,\y) = \{(x_i,y_i)\}_{i=1}^l\cup\{x_i\}_{i=l+1}^{u+l}$,} {$u, l \in \N$}, with $u,l$ denoting the numbers of unlabeled and labeled data points, respectively, {$\f = (f(x_j))_{j=1}^{u+l}\in \W^{u+l}$}, {$M:\W^{u+l}\mapto \W^{u+l}$} a positive operator, and {$\gamma_A > 0, \gamma_I > 0$}.
In \citep{Minh:JMLR2016}, it is shown that the optimization problem (\ref{equation:general}) represents a general learning formulation in RKHS that encompasses supervised and semi-supervised learning via manifold regularization, multi-view learning, and multi-class classification. For the case where $V$ is the least square or SVM loss, both binary and multiclass, the solution of (\ref{equation:general}) has been obtained in dual form, that is, in terms of kernel matrices.
We now present the solution of problem (\ref{equation:general}) in terms of feature map representation, that is, in primal form. Let {$(\Phi_K, \F_K)$} be any feature map for {$K$}. On the set $\x$, we define the following operator
\begin{align} \Phi_K(\x) :\W^{u+l} \mapto \F_K, \;\;\;
\Phi_K(\x)\w = \sum_{j=1}^{u+l}\Phi_K(x_j)w_j. \end{align} We also view $\Phi_K(\x)$ as a (potentially infinite) matrix \begin{align} \Phi_K(\x) = [\Phi_K(x_1), \ldots, \Phi_K(x_{u+l})]: \W^{u+l} \mapto \F_K, \end{align} with the $j$th column being $\Phi_K(x_j)$. The following is the corresponding version of the Representer Theorem in \citep{Minh:JMLR2016} in feature map representation.
\begin{theorem} [\textbf{Representer Theorem}] \label{theorem:representer} The optimization problem (\ref{equation:general}) has a unique solution
{$f_{\z,\gamma}(x) = \sum_{i=1}^{u+l}K(x,x_i)a_i$ for $a_i \in \W$}, {$i=1,\ldots, u+l$}. In terms of feature maps,
{$f_{\z,\gamma}(x) = \Phi_K(x)^{*}\h$}, where { \begin{equation}\label{equation:dual-to-primal} \h = \sum_{i=1}^{u+l}\Phi_K(x_i)a_i = \Phi_K(\x)\a \in \F_K, \;\;\;\; \a = (a_j)_{j=1}^{u+l} \in \W^{u+l}. \end{equation} }
\end{theorem}
In the case where $V$ is the least square loss, the optimization problem (\ref{equation:general}) has a closed-form solution, which is expressed explicitly in terms of the operator-valued feature map $\Phi_K$. In the following, let {$I_{(u+l) \times l} = [I_l, 0_{l\times u}]^T$} and $J^{u+l}_l= I_{(u+l) \times l}I_{(u+l) \times l}^T$,
which is a $(u+l) \times (u+l)$ diagonal matrix, with the first $l$ entries on the main diagonal equal to $1$ and the rest being zero. The following is the corresponding version of Theorem 4 in \citep{Minh:JMLR2016} in feature map representation.
\begin{theorem} [\textbf{Vector-Valued Least Square Algorithm}] \label{theorem:leastsquare-K}
In the case where
{$V$} is the least square loss, that is {$V(y,f(x)) = ||y-Cf(x)||^2_{\Y}$}, the solution of the optimization problem (\ref{equation:general}) is {$f_{\z,\gamma}(x) = \Phi_K(x)^{*}\h$}, with {$\h\in \F_K$} given by \begin{align} \label{equation:leastsquare-h} \h = \left(\Phi_K(\x)[(J^{u+l}_l \otimes C^{*}C) + l\gamma_I M]\Phi_K(\x)^{*} + l \gamma_A I_{\F_K}\right)^{-1}
\Phi_K(\x)(I_{(u+l) \times l} \otimes C^{*})\y. \end{align}
\end{theorem}
{\bf Comparison with the dual formulation}. In Theorem 4 in \citep{Minh:JMLR2016}, the solution of the least square problem above is equivalently given by $f_{\z,\gamma}(x) = \sum_{j=1}^{u+l}K(x,x_j)a_j$, where $\a = (a_j)_{j=1}^{u+l} \in \mathcal{W}^{u+l}$ is given by \begin{equation}\label{equation:linear-matrix} (\mathbf{C^{*}C}J^{\mathcal{W}, u+l}_l K[\x] + l \gamma_I MK[\x] + l \gamma_A I_{\mathcal{W}^{u+l}})\a = \mathbf{C^{*}}{\y}, \end{equation} where $\mathbf{C^{*}} = I_{(u+l) \times l} \otimes C^{*}$ and $K[\x]$ is the $(u+l) \times (u+l)$ operator-valued matrix with the $(i,j)$ entry being $K(x_i,x_j)$.
For concreteness, consider the case $\mathcal{W} = \R^d$ for some $d \in \N$. Then Eq.~(\ref{equation:linear-matrix}) is a system of linear equations of size $d(u+l) \times d(u+l)$, which depends only on the dimension $d$ of the output space and the number of data points $(u+l)$.
If the feature space $\F_K$ is infinite-dimensional, then Eq.~(\ref{equation:leastsquare-h}) is an infinite-dimensional system of linear equations.
{\bf Approximate feature map vector-valued least square regression}. Consider now the approximate finite-dimensional feature map $\hat{\Phi}_D(x):\R^d \mapto \R^{2Dr}$, for some $r$, $1 \leq r \leq d$. Here $r$ depends on the decomposition $\tilde{\mu} = \psi(\omega)^{*}\psi(\omega)$ in Eq.~(\ref{equation:mu-decomp}), with $r =d$ corresponding to e.g. $\psi(\omega) = \sqrt{\tilde{\mu}(\omega)}$. Then instead of the operator $\Phi_K(\x):\R^{d(u+l)} \mapto \F_K$, we consider its approximation \begin{align} \hat{\Phi}_D(\x) = [\hat{\Phi}_D(x_1), \ldots, \hat{\Phi}_D(x_{u+l})]: \R^{d(u+l)} \mapto \R^{2Dr}, \end{align} which is a matrix of size {$2Dr \times d(u+l)$}. This gives rise to the following system of linear equations, which approximates Eq.~(\ref{equation:leastsquare-h}) \begin{align} \label{equation:leastsquare-h-approx} \hat{\h}_D = \left(\hat{\Phi}_D(\x)[(J^{u+l}_l \otimes C^{*}C) + l\gamma_I M]\hat{\Phi}_D(\x)^{*} + l \gamma_A I_{2Dr}\right)^{-1} \hat{\Phi}_D(\x)(I_{(u+l) \times l} \otimes C^{*})\y. \end{align} Eq.~(\ref{equation:leastsquare-h-approx}) is a system of linear equations of size $2Dr \times 2Dr$, which is independent of the number of data points $(u+l)$. This system is more efficient to solve than Eq.~(\ref{equation:linear-matrix}) when \begin{align} 2Dr < d(u+l). \end{align}
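As an illustration, in the purely supervised case ($u = 0$, $\gamma_I = 0$, $C = I_d$), Eq.~(\ref{equation:leastsquare-h-approx}) reduces to a ridge-regression-type solve of size $2Dr \times 2Dr$; a minimal sketch (Python/NumPy; function and variable names are ours) is as follows.
\begin{verbatim}
import numpy as np

def fit_feature_ridge(Phis, ys, gamma_A):
    # Supervised special case of Eq. (leastsquare-h-approx): u = 0, gamma_I = 0, C = I.
    # Phis: list of l feature matrices Phi_D(x_i), each of shape (F, d) with F = 2*D*r.
    # ys:   list of l output vectors y_i in R^d.
    # Returns h in R^F solving (Phi(x) Phi(x)^T + l*gamma_A*I_F) h = Phi(x) y.
    l = len(Phis)
    Phi_x = np.hstack(Phis)                  # shape (F, d*l)
    y = np.concatenate(ys)                   # shape (d*l,)
    F = Phi_x.shape[0]
    A = Phi_x @ Phi_x.T + l * gamma_A * np.eye(F)
    return np.linalg.solve(A, Phi_x @ y)

def predict(Phi_new, h):
    # f(x) = Phi_D(x)^* h, a vector in R^d.
    return Phi_new.T @ h
\end{verbatim}
The linear system solved here has size $F \times F$ with $F = 2Dr$, independent of the number of training points, in line with the comparison above.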
\section{Numerical Experiments} \label{section:experiments}
We report in this section several experiments to illustrate the numerical properties of the feature maps just constructed. Since the properties of the feature maps for separable kernels follow directly from those of the corresponding scalar kernels, we focus here on the curl-free and div-free kernels.
{\bf Approximate kernel computation}. We first checked the quality of the approximation of the kernel values using matrix-valued Fourier feature maps. Using the standard normal distribution, we generated a set of {$100$} points in {$\R^3$}, which are normalized to lie in the cube {$[-1,1]^3$}. On this set, we first computed the curl-free and div-free kernels induced by the Gaussian kernel, based on Eq.~(\ref{equation:curl-div-k}), with {$\sigma=1$}. We computed the feature maps given by Eq.~(\ref{equation:feature-general}). For the curl-free kernel, in the unbounded map, {$\psi(\omega)$} is given by Eq.~(\ref{equation:curl-unbounded}), with {$\rho(\omega) \sim \Ncal(0, (2/\sigma^2)I_3)$}, and in the bounded map, {$\psi(\omega)$} is given by Eq.~(\ref{equation:curl-bounded}), with
{$\rho(\omega) \sim \Ncal(0, (4/\sigma^2)I_3)$}. Similarly, for the div-free kernel, the {$\psi(\omega)$} maps, bounded and unbounded, are given by Eq.~(\ref{equation:div-feature}). We then computed the relative error {$||\hat{K}_D(x,y) - K(x,y)||_F/||K(x,y)||_F$}, with $F$ denoting the Frobenius norm, using {$D=100, 500, 1000$}. The results are reported in Table \ref{table:kernel-error}.
\begin{table}[t]
\caption{Kernel approximation by feature maps. The numbers shown are the relative errors measured using the Frobenius norm, averaged over $10$ runs, along with the standard deviations.}
\label{table:kernel-error}
\centering
\begin{tabular}{llll}
\toprule
{ Kernel} & {\small$D= 100$} & {\small$D =500$} & {\small$D = 1000$}\\
\midrule
{\small curl-free (bounded)} & {\small$0.2811$ $(0.0606)$} & {\small$0.1011$ $(0.0216)$}& {\small$0.0906$ $(0.0172)$} \\
{\small curl-free (unbounded)} & {\small$0.3315$ $(0.0638)$} & {\small$0.1363$ $(0.0227)$} & {\small$0.0984$ $(0.0207)$} \\
{\small div-free (bounded)} & {\small$0.2223$ $(0.0605)$} & {\small$0.1006$ $(0.0221)$}& {\small$0.0680$ $(0.0114)$}\\
{\small div-free (unbounded)} & {\small$0.2826$ $(0.0567)$} & {\small$0.1386$ $(0.0388)$} & {\small$0.0842$ $(0.0167)$}\\
\bottomrule
\end{tabular} \end{table}
\begin{table}[t]
\caption{Curl-free vector field reconstruction by least square regression using the exact kernel least square regression according to Eq.~(\ref{equation:linear-matrix}) and approximate feature maps according to Eq.~(\ref{equation:leastsquare-h-approx}). The RMSEs, averaged over $10$ runs, are shown with standard deviations.}
\label{table:field-error}
\centering
\begin{tabular}{llll}
\toprule
{\small$D$} & {\small Exact} & {\small Bounded} & {\small Unbounded} \\
\midrule
{\small$D=50$} & {\small$0.0020$} & {\small$0.0079$ $(0.0076)$} & {\small$0.0254$ $(0.0118)$}\\
{\small$D=100$} & {\small$0.0024$} & {\small$0.0032$ $(0.0024)$} & {\small$0.0118$ $(0.0098)$} \\
\bottomrule
\end{tabular} \end{table}
{\bf Vector field reconstruction by approximate feature maps}. Next, we tested the reconstruction of the following curl-free vector field in {$\R^2$}, {$F(x,y) = \sin(4\pi x)\sin^2(2\pi y) \mathbf{i} + \sin^2(2\pi x) \sin(4\pi y)\mathbf{j}$} on the rectangle {$[-1, -0.4765] \times [-1, -0.4765]$}, sampled on a regular grid consisting of {$1600$} points. The reconstruction is done using {$5\%$} of the points on the grid as training data. We first performed the reconstruction with exact kernel least square regression according to Eq.~(\ref{equation:linear-matrix}),
using the curl-free kernel induced by the Gaussian kernel, based on Eq.~(\ref{equation:curl-div-k}), with {$\sigma = 0.2$}. With the same kernel, we then performed the approximate feature map least square regression according to Eq.~(\ref{equation:leastsquare-h-approx}) ({$\W=\Y=\R^2, C=I_2, \gamma_I = 0, \gamma_A = 10^{-9}$}), with {$D=50, 100$}. The results are reported in Table \ref{table:field-error}.
{\bf Discussion of numerical results}. As we can see from Tables \ref{table:kernel-error} and \ref{table:field-error}, the matrix-valued Fourier feature maps can be used both for approximating the kernel values as well as directly in a learning algorithm, with increasing accuracy as the feature dimension increases. Furthermore, while all feature maps associated with a given kernel are essentially equivalent as in Lemma \ref{lemma:f-feature}, we observe that numerically, on average, the bounded feature maps tend to outperform the unbounded maps.
\section{Conclusion and Future Work}
We have presented a framework for constructing random operator-valued feature maps for operator-valued kernels, using the operator-valued version of Bochner's Theorem. We have shown that, due to the non-uniqueness of the probability measure in this setting, in general many feature maps can be computed, which can be unbounded or bounded. Under certain conditions, which are satisfied for many common kernels such as curl-free and div-free kernels, bounded feature maps can always be computed. We then showed the uniform convergence, with the bounded maps, of the approximate kernel in the Hilbert-Schmidt norm, strengthening previous results in the scalar setting. Finally, we showed how a general vector-valued learning formulation can be expressed in terms of feature maps and demonstrated it experimentally. An extensive empirical evaluation of the proposed formulation is left to future work.
\begin{comment} \section{}
For $u = 0$, $\gamma_I = 0$, we have
{ \begin{align} \h = \left[\Phi_K(\x)(I_l \otimes C^{*}C)\Phi_K(\x)^{*} + l \gamma_A I_{\F_K}\right]^{-1}\Phi_K(\x)(I_l \otimes C^{*})\y. \end{align} } \begin{align*} \left[\Phi_K(\x)(I_l \otimes C^{*}C)\Phi_K(\x)^{*} + l \gamma_A I_{\F_K}\right]\h = \Phi_K(\x)(I_l \otimes C^{*})\y. \end{align*}
For $\W = \Y = \R^p$ and $C = I_p$, { \begin{align} \h = \left[\Phi_K(\x)(I_l \otimes)\Phi_K(\x)^{*} + l \gamma_A I_{\F_K}\right]^{-1}\Phi_K(\x)(I_l \otimes C^{*})\y. \end{align} }
\section{Scalar-valued learning with feature maps}
To motivate operator-valued feature maps for vector-valued learning, we first recall the role played by feature maps in scalar-valued learning in a simple scenario. Let $\X$ be an arbitrary non-empty set, $\Y \subset \R$, and $K:\X \times \X \mapto \R$ be a scalar-valued positive definite kernel, with corresponding RKHS $\H_K$. Let $\z = (\x,\y) = (x_i, y_i)_{i=1}^l$ be a set of data points randomly sampled from $\X \times \Y$, according to some unknown probability distribution. Consider the well-known problem of supervised least square learning \begin{align}\label{equation:leastsquare-scalar}
f_{\z,\gamma} = \argmin_{f \in \H_K} \frac{1}{l}\sum_{i=1}^l(f(x_i) - y_i)^2 + \gamma||f||^2_{\H_K} \end{align} where $\gamma > 0$.
Consider the sampling operator $S_{\x,l}: \H_K \mapto \R^{l}$ and its adjoint $S_{\x, l}^{*}: \R^{l} \mapto \H_K$, defined by
$S_{\x, l}f = (f(x_i))_{i=1}^l = (\la K_{x_i},f\ra_{\H_K})_{i=1}^{l}$, $f \in \H_K$, $S_{\x, l}^{*}\b = \sum_{i=1}^{l}K_{x_i}b_i$, $\b \in \R^l$, respectively.
Using
$S_{\x,l}$, problem (\ref{equation:leastsquare-scalar}) becomes \begin{align*}
f_{\z,\gamma} = \argmin_{f \in \H_K}\frac{1}{l}||S_{\x,l}f - \y||^2_{\R^l} + \gamma ||f||^2_{\H_K}. \end{align*} Differentiating with respect to $f$ and setting to zero gives \begin{align*}
(S_{\x,l}^{*}S_{\x,l} + l\gamma I_{\H_K})f = S_{\x,l}^{*}\y, \end{align*} which is equivalent to \begin{equation}\label{equation:leastsquare-1} f = (S_{\x,l}^{*}S_{\x,l} + l\gamma I_{\H_K})^{-1}S_{\x,l}^{*}\y. \end{equation} This expression is also equivalent to \begin{align} \label{equation:leastsquare-2} f &= S_{\x,l}^{*}(S_{\x,l}S_{\x,l}^{*} + l\gamma I_{\R^l})^{-1}\y.
\end{align} While the two expressions (\ref{equation:leastsquare-1}) and (\ref{equation:leastsquare-2}) are mathematically equivalent, they are in general different computationally.
In Eq.~(\ref{equation:leastsquare-2}), the operator $S_{\x,l}S_{\x, l}^{*}: \R^l \mapto \R^l$ is given by $S_{\x,l}S_{\x,l}^{*} = K[\x]$, where $K[\x]$ denotes the Gram matrix of $K$ on $\x$, with $(K[\x])_{ij} = [K(x_i,x_j)]_{i,j=1}^l$. The solution given by Eq.~(\ref{equation:leastsquare-2}) is then $f = S_{\x,l}^{*}\a = \sum_{i=1}^lK_{x_i}a_i$, with \begin{equation} \a = (K[\x] + l\gamma I_{\R^l})^{-1}\y. \end{equation} This solution thus requires solving a system of linear equations of size $l \times l$, which, while independent of $\dim(\H_K)$,
comes at the cost of the complexity $O(l^3)$, which becomes prohibitively expensive when $l$ is very large.
On the other hand, in the equivalent solution given by
Eq.~(\ref{equation:leastsquare-1}), the operator $S_{\x, l}^{*}S_{\x, l} = \H_K\mapto \H_K$ is generally high (often infinite) dimensional and
Eq.~(\ref{equation:leastsquare-1}) involves solving a system of linear equations of size $\dim(\H_K) \times \dim(\H_K)$.
The case in which Eq.~(\ref{equation:leastsquare-1}) is more efficient computationally compared to Eq.~(\ref{equation:leastsquare-2}) is when
$\dim(\H_K) < l$,
i.e. when the training data is large and $\dim(\H_K)$ is low.
{\bf Feature map interpretation and formulation}. Eq.~(\ref{equation:leastsquare-1}) is closely linked to feature maps induced by $K$, as follows. Let $\Phi: \X \mapto \F_K$ be a feature map induced by $K$, where $\F_K$ is a separable Hilbert space, so that $K(x,t) = \la \Phi(x), \Phi(t)\ra_{\F_K}$ $\forall(x,t) \in \X \times \X$. For each $x\in \X$, $\Phi(x)$ can be viewed as an operator $\Phi(x): \R \mapto \F_K$, with $\Phi(x)a = a\Phi(x)$ $\forall a \in \R$, with the adjoint operator $\Phi(x)^{*}: \F_K \mapto \R$ defined by $\Phi(x)^{*}\w = \la \w , \Phi(x)\ra_{\F_K}$ $\forall \w \in \F_K$. With this interpretation, we have the following.
\begin{lemma}\label{lemma:leastsquare-scalar} The solution of Eq.~(\ref{equation:leastsquare-1}) is $f(x) = \la \w, \Phi(x)\ra_{\F_K}$, where $\w\in \F_K$ is given by \begin{equation} \w = (\sum_{i=1}^l\Phi(x_i)\Phi(x_i)^{*} + l\gamma I_{\F_K})^{-1}[\sum_{i=1}^ly_i\Phi(x_i)]. \end{equation} Equivalently, with the operator $\Phi(x):\R^l \mapto \F_K$ defined by $\Phi(\x)\b = \sum_{i=1}^lb_i\Phi(x_i)$, $\b \in \R^l$, we have \begin{equation} \w = \left(\Phi(\x)\Phi(\x)^{*} + l\gamma I_{\F_K}\right)^{-1}\Phi(\x)\y \in \F_K. \end{equation} Informally, $\Phi(\x)$ can be viewed as the matrix $\Phi(\x) = [\Phi(x_1), \ldots, \Phi(x_l)]$ of size $\dim(\F_K) \times l$. \end{lemma}
\section{Vector-valued multi-view learning}
Assume that $\W = \Y^m$, $m \in \N$, and that $\Y$ is a separable Hilbert space. In the following, we identify the direct sum of Hilbert spaces $\Y^m = \oplus_{i=1}^m\Y$ under the inner product \begin{align*} \la (y_1, \ldots, y_m), (z_1, \ldots, z_m)\ra_{\Y^m} = \sum_{i=1}^m\la y_i, z_i\ra_{\Y}, \;\;\; y_i, z_i \in \Y, \end{align*} with the tensor product of Hilbert spaces $\R^m \otimes \Y$ under the inner product \begin{align*} \la \a \otimes y, \b \otimes z\ra = \la \a, \b\ra_{\R^m}\la y,z\ra_{\Y}, \;\;\; \a, \b \in \R^m,\;\; y,z\in \Y, \end{align*} via the mappings \begin{align} (y_1, \ldots, y_m) \in \Y^m &\mapto \sum_{i=1}^m \e_i \otimes y_i \in \R^m \otimes \Y, \;\;\; y_i \in \Y, \\ \a \otimes y \in \R^m \otimes \Y &\mapto (a_1y, \ldots, a_my) \in \Y^m,\;\;\; y \in \Y. \end{align} With this identification of $\Y^m$ and $\R^m \otimes \Y$, we consider the kernel $K(x,t): \W = \Y^m \mapto \W = \Y^m$ of the form \begin{equation} \label{equation:Kdef} K(x,t) = G(x,t) \otimes R, \end{equation} where $G(x,t): \R^m \mapto \R^m$ is a positive definite matrix-valued kernel and $R: \Y \mapto \Y$ is a self-adjoint, positive operator. The operator $K(x,t): \Y^m \mapto \Y^m$ can be viewed as an operator-valued block matrix of size $m \times m$, with block $(i,j)$ given by \begin{equation} [K(x,t)]_{ij} = [G(x,t)]_{ij}R. \end{equation} Let us now show how a feature map representation for $G$ gives rise to the corresponding feature map representation for $K$. Let $\F_G$ be a separable Hilbert space and $\Phi_G: \X \mapto \mathcal{L}(\R^m, \F_G)$ be a feature map representation for $G$, so that $\Phi_G(x): \R^m \mapto \F_G$ is a bounded linear operator and \begin{equation} \label{equation:Gdef} G(x,t) = \Phi_G(x)^{*}\Phi_G(t): \R^m \mapto \R^m. \end{equation}
\begin{lemma} \label{lemma:Kfeature} Consider the kernel $K$ as defined in (\ref{equation:Kdef}). The following is a feature map representation for $K$ \begin{align} \Phi_K: \X &\mapto \mathcal{L}(\Y^m = \R^m \otimes \Y, \F_G \otimes \Y) \\ \Phi_K(x) &= \Phi_G(x) \otimes \sqrt{R}: \Y^m = \R^m \otimes \Y \mapto \F_G \otimes \Y. \end{align} \end{lemma}
\begin{proof}[\textbf{Proof of Lemma \ref{lemma:Kfeature}}] \begin{equation} \Phi_K(x)^{*}\Phi_K(t) = \Phi_G(x)^{*}\Phi_G(t) \otimes R = G(x,t) \otimes R = K(x,t). \end{equation} \end{proof}
As an operator, $\Phi_G(\x) = [\Phi_G(x_1), \ldots, \Phi_G(x_{u+l})]: \R^{m(u+l)} \mapto \F_G$, and \begin{equation} \Phi_K(\x) = \Phi_G(\x) \otimes \sqrt{R}. \end{equation}
\begin{theorem}\label{theorem:leastsquare-G}
Let $H$ be the matrix of size $\dim(\F_G) \times \dim(\Y)$ such that $\vec(H^T) = \h$ and $Y$ be the matrix of size $l \times \dim(\Y)$, such that $\y = \vec(Y^T)$. Then $H$ is the solution of the Sylvester equation \begin{equation}\label{equation:HR} BHR + l \gamma_A H = \Phi_G(\x)(I_{(u+l)\times l} \otimes \c)Y\sqrt{R}, \end{equation} where \begin{equation}\label{equation:G} B = \Phi_G(\x)(J^{u+l}_l \otimes \c\c^T + l\gamma_W L + l\gamma_B I_{u+l} \otimes M_m)\Phi_G(\x)^{*}. \end{equation} In particular, for $R = I_{\Y}$, we have \begin{equation}\label{equation:H} H = (B + l \gamma_A I_{\F_G})^{-1}\Phi_G(\x)(I_{(u+l)\times l} \otimes \c)Y. \end{equation} \end{theorem}
{\bf Special case: Feature maps induced by separable kernels}.
As a special case of the above formulation, for $m=1$, we recover the feature map associated with separable kernels as given in \citep{Caponnetto08}.
Let $A:\Y \mapto \Y$ be a self-adjoint positive operator and $k: \X \times \X$ be a scalar-valued positive definite kernel. \begin{align} G(x,t) = k(x,t)A \end{align} Let $\Phi_k:\X \mapto \F_k$ be a feature map associated with $k$, then \begin{align} \Phi_G(x) = \Phi_k(x) \otimes \sqrt{A}: \Y \mapto \F_k \otimes \Y \end{align}
Let $A$ be a symmetric, positive semi-definite matrix of size $N \times N$, with strictly positive eigenvalues $\{\lambda_j\}_{j=1}^r$ and corresponding normalized eigenvectors $\{\u_j\}_{j=1}^N$, where $r$ denotes the rank of $A$. Consider the spectral decomposition \begin{align} A = \sum_{j=1}^r\lambda_j \u_j\u_j^T. \end{align} Consider the direct sum of Hilbert spaces \begin{align} F_k^r = \oplus_{j=1}^rF_k. \end{align} Then the following is a feature map for $G$ \begin{align} \Phi_G(x) = \begin{pmatrix} \Phi_k(x) \otimes \sqrt{\lambda_1}\u_1^T \\ \cdots \\ \Phi_k(x) \otimes \sqrt{\lambda_r}\u_j^T \end{pmatrix}: \R^N \mapto \F_k^r \end{align}
\subsection{Concrete implementation}
Assume that the input has the form $x = (x^1, \ldots, x^m)$, with $x^i \in \X^i$. Consider the following definition of the kernel $G$: \begin{equation} G(x,t) = \sum_{i=1}^mk^i(x^i,t^i)\e_i\e_i^T: \R^m \mapto \R^m, \end{equation} where $k^i$ is a scalar-valued kernel defined on view $i$. Let $\Phi_{k^i}:\X^i \mapto \F_{k^i}$ be the feature map associated with the kernel $k^i$, where $F_{k^i}$ is a separable Hilbert space. Then \begin{equation} G(x,t) = \sum_{i=1}^m\la \Phi_{k^i}(x^i), \Phi_{k^i}(t^i)\ra_{\F_{k^i}} \e_i\e_i^T = \sum_{i=1}^m (\Phi_{k^i}(x^i))^{*}\Phi_{k^i}(t^i) \e_i\e_i^T. \end{equation} Consider the direct sum of Hilbert spaces \begin{equation} \F_G = \oplus_{i=1}^m\F_{k^i}. \end{equation} Consider the following two feature maps induced by $G$.
{\bf First feature map}. Each map $\Phi_{k^i}$ gives rise to the following map \begin{align} \Phi_{k^i}(x^i) \otimes \e_i\e_i^T: \R^m \mapto \F_{k^i} \otimes \R^m. \end{align}
The following is then a feature map for $G$: \begin{align} &\Phi_G(x) : \R^m \mapto \F_G \otimes \R^m, \\
&\Phi_G(x) = \left( \begin{array}{c} \Phi_{k^1}(x^1) \otimes \e_1\e_1^T\\ \vdots\\
\Phi_{k^m}(x^m) \otimes \e_m\e_m^T, \end{array} \right) : \R^m \mapto \F_G \otimes \R^m. \nonumber \end{align} This can be viewed as a matrix of size $m\dim(\F_G) \times m$. The matrix of feature vectors $\Phi_G(\x) = (\Phi_G(x_1), \ldots, \Phi_G(x_{u+l}))$ can be expressed as \begin{equation} \Phi_G(\x) = \left( \begin{array}{c} \Phi_{k^1}(\x^1) \otimes \e_1\e_1^T\\ \vdots\\
\Phi_{k^m}(\x^m) \otimes \e_m\e_m^T, \end{array} \right) \end{equation} and can be viewed as a matrix of size $m\dim(\F_G) \times m(u+l)$.
{\bf Second feature map}. The following is then a feature map for $G$: \begin{align} &\Phi_G(x) : \R^m \mapto \F_G, \\
&\Phi_G(x) = \left( \begin{array}{c} \Phi_{k^1}(x^1) \otimes \e_1^T\\ \vdots\\
\Phi_{k^m}(x^m) \otimes \e_m^T, \end{array} \right): \R^m \mapto \F_G. \nonumber \end{align} This can be viewed as a matrix of size $\dim(\F_G) \times m$.
The matrix of feature vectors $\Phi_G(\x) = (\Phi_G(x_1), \ldots, \Phi_G(x_{u+l}))$ can be expressed as \begin{equation} \Phi_G(\x) = \left( \begin{array}{c} \Phi_{k^1}(\x^1) \otimes \e_1^T\\ \vdots\\
\Phi_{k^m}(\x^m) \otimes \e_m^T, \end{array} \right) \end{equation} and can be viewed as a matrix of size $\dim(\F_G) \times m(u+l)$.
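As a small numerical sketch of the construction (with $m=2$ views and Gaussian kernels $k^i$ as an illustrative choice, not prescribed above), the matrix $G(x,t)$ is simply diagonal, with the per-view kernel evaluations on the diagonal:
\begin{verbatim}
import numpy as np

# Sketch: evaluate G(x,t) = sum_i k^i(x^i,t^i) e_i e_i^T for m = 2 views,
# using Gaussian kernels with illustrative bandwidths.
def gaussian(a, b, sigma):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def G(x, t, sigmas=(1.0, 0.5)):
    m = len(x)                      # x = (x^1, ..., x^m), one block per view
    out = np.zeros((m, m))
    for i in range(m):
        out[i, i] = gaussian(x[i], t[i], sigmas[i])   # only diagonal entries
    return out

x = (np.array([0.1, 0.2]), np.array([1.0]))
t = (np.array([0.0, 0.3]), np.array([0.8]))
print(G(x, t))                      # a 2 x 2 diagonal matrix
\end{verbatim}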
{\bf Comparison of the two feature maps}.
\begin{proposition}[\textbf{Training phase}]\label{proposition:training} Let $R = I_{\Y}$, $L = \sum_{i=1}^mL^i \otimes \e_i\e_i^T$ and $G(x,t) = \sum_{i=1}^mk^i(x^i, t^i)\e_i\e_i^T$. Then in Theorem \ref{theorem:leastsquare-G}, the matrix $B$ is given by
\begin{equation} B = \left( \Phi_{k^i}(\x^i)[c_ic_jJ^{u+l}_l + l\gamma_W \delta_{ij}L^i + l\gamma_B (M_m)_{ij}I_{u+l}]\Phi_{k^j}(\x^j)^T \right)_{i,j=1}^m. \end{equation} The matrix $H$ is given by \begin{equation} H = (B + l\gamma_A I_{\F_G})^{-1} \left( \begin{array}{c} c_1\Phi_{k^1}(\x^1_{1:l})\\ \vdots\\
c_m\Phi_{k^m}(\x^m_{1:l}) \end{array} \right)Y, \end{equation} where $\x_{1:l} = (x_1, \ldots, x_l)$ is the set of labeled training data. \end{proposition}
\begin{proposition}[\textbf{Evaluation phase}]\label{proposition:evaluation} Assume the hypothesis of Proposition \ref{proposition:training}. Then for any $v \in \X$, \begin{equation} f_{\z,\gamma}(v) = \vec(H^T\Phi_G(v)). \end{equation} The combined function $g_{\z, \gamma}(v) = Cf_{\z,\gamma}(v)$ is \begin{equation} g_{\z, \gamma}(v) = H^T\left( \begin{array}{c} c_1\Phi_{k^1}(v^1)\\ \vdots\\
c_m\Phi_{k^m}(v^m) \end{array} \right). \end{equation} On a set $\v = \{v_1, \ldots, v_t\}$ of $t$ test points, \begin{equation} g(\v) = H^T\left( \begin{array}{c} c_1\Phi_{k^1}(\v^1)\\ \vdots\\
c_m\Phi_{k^m}(\v^m) \end{array} \right) \end{equation} as a matrix of size $\dim(\Y) \times t$. \end{proposition}
\subsubsection{Special case}
Consider the special case $u=0$, $\gamma_W = \gamma_B = 0$, and $G(x,t) = \sum_{i=1}^mk^i(x^i, t^i)\e_i\e_i^T$, that is, purely supervised learning without between-view interaction. In this case, in Proposition \ref{proposition:training}, \begin{equation} B = (c_ic_j\Phi_{k^i}(\x^i)\Phi_{k^j}(\x^j)^T)_{i,j=1}^m. \end{equation} This is equivalent to supervised least square regression, where the feature vector is the concatenation of the {\it weighted} feature vectors from all the views. \begin{equation} \Phi_k(x) = \left( \begin{array}{c} c_1\Phi_{k^1}(x^1)\\ \vdots\\
c_m\Phi_{k^m}(x^m) \end{array} \right). \end{equation} In the dual optimization setting, it is equivalent to supervised least square regression with the combined kernel $k(x,t) = \sum_{i=1}^mc_i^2k^i(x^i, t^i)$.
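The equivalence between the concatenated weighted feature map and the combined kernel can be checked directly. The following tiny sketch does so for linear per-view feature maps $\Phi_{k^i}(x^i) = x^i$ (an illustrative choice only):
\begin{verbatim}
import numpy as np

# Sketch: <Phi_k(x), Phi_k(t)> equals the combined kernel sum_i c_i^2 k^i(x^i,t^i)
# when k^i(a,b) = <a,b> (linear kernels, for illustration).
c = np.array([0.7, 1.3])
x = (np.array([0.1, 0.2]), np.array([1.0, -0.5]))
t = (np.array([0.0, 0.3]), np.array([0.8, 0.4]))

phi_x = np.concatenate([c[i] * x[i] for i in range(2)])   # (c_1 x^1; c_2 x^2)
phi_t = np.concatenate([c[i] * t[i] for i in range(2)])

combined = sum(c[i] ** 2 * np.dot(x[i], t[i]) for i in range(2))
print(np.isclose(phi_x @ phi_t, combined))                # True
\end{verbatim}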
{\bf Special case: Feature maps with linear kernels}. Consider the case where $x^i \in \R^{d^i}$, $d = \sum_{i=1}^md^i$, and
the kernels $k^i$
are linear, with the form $$ k^i(x^i,t^i) = 1+\la x^i, t^i\ra_{\R^{d^i}}. $$ The feature map induced by the kernel $k^i$ is $\Phi_{k^i}:\X^i \mapto \R^{d^i +1}$, defined by $$ \Phi_{k^i}(x^i) = (1, x^i) \in \R^{d^i+1} = \F_{k^i}. $$ In this case, the kernel $G:\X \times \X \mapto \R^{m \times m}$ is a matrix-valued function which is linear in each of its two arguments. For each $x$, $\Phi_G(x)$ is a matrix of size $(d+m) \times m$ and for the sample set $\x$, $\Phi_G(\x)$ is a matrix of size $(d+m) \times m(u+l)$. \end{comment}
\appendix
\section{Proofs of Main Mathematical Results} \label{section:proofs}
\begin{proof}{\textbf{of Lemma \ref{lemma:f-feature}}} For a function $f \in \H_K$ of the form $f = \sum_{j=1}^NK_{x_j}a_j$, $a_j \in \W$, we have \begin{align*} f(x) &= \sum_{j=1}^NK(x,x_j)a_j = \sum_{j=1}^N\Phi_K(x)^{*}\Phi_K(x_j)a_j = \Phi_K(x)^{*}\left(\sum_{j=1}^N\Phi_K(x_j)a_j\right),
\\ &= \Phi_K(x)^{*}\h,
\end{align*} where $$ \h = \sum_{j=1}^N\Phi_K(x_j)a_j \in \F_K. $$ For the norm, we have \begin{align*}
||f||^2_{\H_K} &= \sum_{i,j=1}^N\la a_i, K(x_i, x_j)a_j\ra_{\W} = \sum_{i,j=1}^N\la a_i, \Phi_K(x_i)^{*}\Phi_K(x_j)a_j\ra_{\W} \\
&= \sum_{i,j=1}^N\la \Phi_K(x_i)a_i, \Phi_K(x_j)a_j\ra_{\F_K} = ||\sum_{i=1}^N\Phi_K(x_i)a_i||^2_{\F_K} = ||\h||^2_{\F_K}. \end{align*} By letting $N \approach \infty$ in the Hilbert space completion for $\H_K$, it follows that every $f \in \H_K$ has the form \begin{equation*} f(x) = \Phi_K(x)^{*}\h,\;\;\; \h \in \F_K, \end{equation*} and $$
||f||_{\H_K} = ||\h||_{\F_K}. $$ This completes the proof of the lemma. \end{proof}
\begin{comment}
\begin{theorem} [\textbf{Operator-valued Bochner Theorem} \citep{Neeb:Operator1998}] \label{theorem:Bochner-operator}
An ultraweakly continuous function {$k: \R^n \mapto \L(\H)$} is positive definite if and only if there exists a finite {$\Sym^{+}(\H)$}-valued measure {$\mu$} on
{$\R^n$} such that
{ \begin{align} \label{equation:k-expression1} k(x) = \hat{\mu}(x) = \int_{\R^n}\exp(i \la \omega,x\ra)d\mu(\omega) = \int_{\R^n} \exp(-i \la \omega,x\ra)d\mu(\omega). \end{align} } The Radon measure {$\mu$} is uniquely determined by {$\hat{\mu} = K$}. \end{theorem}
\begin{proposition} \label{proposition:mu-inversion} Assume that $\la \e_j, k(x)\e_l\ra \in L^1(\R^n)$ $\forall j,l \in \N$. Then \begin{align} \la \e_j, \mu(\omega)\e_l\ra =\Fcal^{-1}[\la \e_j, k(x)\e_l\ra], \end{align} where $\Fcal^{-1}$ denotes the inverse Fourier Transform on $\R^n$. \end{proposition} \end{comment}
\begin{proof}{\textbf{of Proposition \ref{proposition:mu-inversion}}} For each pair $j,l \in \N$, we have by Bochner's Theorem \begin{align*} \la \e_j, k(x)\e_l\ra = \int_{\R^n}\exp(-i\la \omega, x\ra)\la \e_j, d\mu(\omega)\e_l\ra. \end{align*} Thus the proposition follows immediately from the Fourier Inversion Theorem. \end{proof}
\begin{comment} \begin{proposition} \label{proposition:Bochner-projection} Let $\H$ be a separable Hilbert space. Let $K:\R^n \times \R^n \mapto \L(\H)$ be an ultraweakly continuous shift-invariant positive definite kernel. Let $\mu$ be the unique finite $\Sym^{+}(\H)$ measure satisfying Bochner's Theorem.
Then for every vector $\a\in \H$, $\a \neq 0$, the scalar-valued kernel defined by $K_{\a}(x,t) = \la \a, K(x,t)\a\ra$ is positive definite. Furthermore, there exists a unique finite positive Borel measure $\mu_{\a}$ on $\R^n$ such that \begin{align} K_{\a}(x,t) = \int_{\R^n}\exp(-i \la \omega,x-t\ra)d\mu_{\a}(\omega). \end{align} The measure $\mu_{\a}$ is given by \begin{align} \mu_{\a}(\omega) = \la \a, \mu(\omega)\a\ra,\;\;\; \omega \in \R^n. \end{align} \end{proposition} \end{comment}
\begin{proof} {\textbf{of Proposition \ref{proposition:Bochner-projection}}} Let $\Phi_K: \R^n \mapto \F_K$ be a feature map for $K$, then $K(x,t) = \Phi_K(x)^{*}\Phi_K(t)$. Then for any set of points $\{x_j\}_{j=1}^N$ and coefficients $\{b_j\}_{j=1}^N$, we have \begin{align*} \sum_{j,l=1}^Nb_jb_lK_{\a}(x_j,x_l) &= \sum_{j,l=1}^N b_jb_l\la \a, \Phi_K(x_j)^{*}\Phi_K(x_l)\a\ra = \sum_{j,l=1}^Nb_jb_l\la \Phi_K(x_j)\a, \Phi_K(x_l)\a \ra_{\F_K} \\ &= \sum_{j,l=1}^N\la b_j \Phi_K(x_j)\a, b_l\Phi_K(x_l)\a \ra_{\F_K}
=\left\|\sum_{j=1}^N b_j\Phi_K(x_j)\a\right\|^2_{\F_K} \geq 0. \end{align*} This shows that the scalar-valued kernel $K_{\a}$ is positive definite. Thus by the scalar-valued Bochner Theorem, there exists a unique finite positive Borel measure $\mu_{\a}$ on $\R^n$ such that \begin{align*} K_{\a}(x,t) = \int_{\R^n}\exp(-i \la \omega,x-t\ra)d\mu_{\a}(\omega). \end{align*} From the formulas $K(x,t) = \int_{\R^n}\exp(-i \la \omega,x-t\ra)d\mu(\omega)$ and $K_{\a}(x,t) =\la \a, K(x,t)\a\ra$, we obtain $\mu_{\a}(\omega) = \la \a, \mu(\omega) \a\ra$ as we claimed. \end{proof}
\begin{comment} \begin{theorem} \label{theorem:measure-trace} Assume that the positive definite function $k$ in Bochner's Theorem satisfies: (i) $k(x) \in \Tr(\H)$ $\forall x \in \R^n$, and
(ii) $\int_{\R^n}|\trace[k(x)]|dx < \infty$. Then its corresponding finite $\Sym^{+}(\H)$-valued measure $\mu$ satisfies \begin{align} \mu(\omega) \in \Tr(\H) \;\;\;\forall \omega \in \R^n,\;\;\;
\trace[\mu(\omega)] \leq \frac{1}{(2\pi)^n}\int_{\R^n}|\trace[k(x)]|dx. \end{align} The following is a finite positive Borel measure on $\R^n$ \begin{align} \mu_{\trace}(\omega) = \trace(\mu(\omega)) = \sum_{j=1}^{\infty}\mu_{jj}(\omega)
= \frac{1}{(2\pi)^n}\int_{\R^n}\exp(i \la \omega, x\ra)\trace[k(x)]dx. \end{align} The normalized measure
$\frac{\mu_{\trace}(\omega)}{\trace[k(0)]}$
is a probability measure on $\R^n$. Both the normalized and un-normalized measures are independent of the choice of orthonormal basis $\{\e_j\}_{j=1}^{\infty}$. \end{theorem} \end{comment}
\begin{proof}{\textbf{of Theorem \ref{theorem:measure-trace}}} By Proposition \ref{proposition:mu-inversion} and taking into account the fact that $\mu(\omega)$ is a self-adjoint positive operator on $\H$, we have \begin{align*}
\trace[\mu(\omega)] &= \sum_{j=1}^{\infty}\la \e_j,\mu(\omega)\e_j\ra = \left|\sum_{j=1}^{\infty}\la \e_j, \mu(\omega)\e_j\ra\right| \\ &= \frac{1}{(2\pi)^n}\
\left| \sum_{j=1}^{\infty}\int_{\R^n}\exp(i\la \omega, x\ra)\la \e_j, k(x)\e_j\ra dx\right| \\
&= \frac{1}{(2\pi)^n}\left|\int_{\R^n}\exp(i\la \omega, x\ra)\sum_{j=1}^{\infty}\la \e_j, k(x)\e_j\ra dx \right| \\
& = \frac{1}{(2\pi)^n}\left|\int_{\R^n}\exp(i \la \omega, x\ra)\trace[k(x)]dx \right| \leq \frac{1}{(2\pi)^n}\int_{\R^n}|\trace[k(x)]|dx < \infty. \end{align*} This shows that $\mu(\omega) \in \Tr(\H)$, with $\mu_{\trace}(\omega) = \trace[\mu(\omega)] = \sum_{j=1}^{\infty}\mu_{jj}(\omega) < \infty$. The positivity of $\mu_{\trace}(\omega)$ follows from the positivity of all the $\mu_{jj}$'s. Furthermore, we have \begin{align*} \int_{\R^n}d\mu_{\trace}(\omega) = \int_{\R^n}d[\sum_{j=1}^{\infty}\mu_{jj}(\omega)] = \sum_{j=1}^{\infty} \la \e_j, k(0)\e_j\ra = \trace[k(0)]. \end{align*} We note that we must have $\trace[k(0)] > 0$, since $\trace[k(0)] = 0 \equivalent k(0) = 0 \equivalent K(x,x) = 0 \;\forall x \in \R^n \equivalent K(x,t) = 0 \; \forall (x,t) \in \X \times \X$. It follows that the normalized measure \begin{align*} \frac{\mu_{\trace}(\omega)}{\trace[k(0)]} \end{align*} is a probability measure on $\R^n$. Since the trace operation is independent of the choice of orthonormal basis $\{\e_j\}_{j=1}^{\infty}$, it follows that both the normalized and un-normalized measures are independent of the choice of $\{\e_j\}_{j=1}^{\infty}$. This completes the proof. \end{proof}
\begin{comment} \begin{corollary} \label{corollary:measure-trace} Consider the factorization { \begin{align} \label{equation:factorize} \mu(\omega) = \tilde{\mu}(\omega)\rho(\omega). \end{align} } Under the hypothesis of Theorem \ref{theorem:measure-trace}, in Eq.~(\ref{equation:factorize}) we can set { \begin{align} \label{equation:constructionII} \rho(\omega) = \frac{\mu_{\trace}(\omega)}{\trace[k(0)]}, \;\;\; \tilde{\mu}(\omega) = \left\{ \begin{matrix} \trace[k(0)]\frac{\mu(\omega)}{\mu_{\trace}(\omega)}, & \mu_{\trace}(\omega) > 0\\ 0, & \mu_{\trace}(\omega) = 0. \end{matrix} \right. \end{align} }
The measure $\tilde{\mu}$ as defined in Eq.~(\ref{equation:constructionII}) satisfies {$\tilde{\mu}(\omega) \in \Sym^{+}(\H)$} and has bounded trace, that is { \begin{align}
||\tilde{\mu}(\omega)||_{\trace} = \trace[\tilde{\mu}(\omega)] \leq \trace[k(0)] \;\;\;\forall \omega \in \R^n. \end{align} }
\end{corollary} \end{comment}
\begin{proof} {\textbf{of Corollary \ref{corollary:measure-trace}}}
We only need to show that {$\tilde{\mu}(\omega)$} is well-defined when {$\mu_{\trace}(\omega) = 0$}, since the other statements are obvious.
Since {$\mu_{jj}(\omega) \geq 0$} {$\forall j \in \N$},
we have { \begin{align} \mu_{\trace}(\omega) = \sum_{j=1}^{\infty}\mu_{jj}(\omega) = 0 \equivalent \mu_{jj}(\omega) = 0 \;\; \forall j \in \N. \end{align} }
Next, note that {$\mu_{jj}(\omega) = \la \e_j ,\mu(\omega)\e_j \ra = \la \sqrt{\mu(\omega)}\e_j, \sqrt{\mu(\omega)}\e_j\ra = ||\sqrt{\mu(\omega)}\e_j||^2$}. By the Cauchy-Schwarz inequality, we then have {$\forall j,l \in \N$}, { \begin{align*}
|\mu_{jl}(\omega)|^2 =|\la \e_j, \mu(\omega)\e_l\ra|^2 = |\la \sqrt{\mu(\omega)}\e_j, \sqrt{\mu(\omega)}\e_l\ra|^2
\leq ||\sqrt{\mu(\omega)}\e_j||^2||\sqrt{\mu(\omega)}\e_l||^2 = \mu_{jj}(\omega)\mu_{ll}(\omega). \end{align*} } It follows that { \begin{align} \mu_{\trace}(\omega) = 0 \equivalent \mu_{jl}(\omega) = 0 \;\;\;\forall j,l \in \N \equivalent \mu(\omega) = 0. \end{align} } Thus at the points {$\omega$} for which {$\mu_{\trace}(\omega) = 0$}, so that {$\rho(\omega) = 0$}, we can set {$\tilde{\mu}(\omega) = 0$} (or any constant operator in {$\Sym^{+}(\H)$}). The particular choice of value for {$\tilde{\mu}(\omega)$} in this case has no effect on the factorization {$\mu(\omega) = \tilde{\mu}(\omega)\rho(\omega)$} or the sampling under the probability distribution {$\rho$}. \end{proof}
\begin{comment} \begin{proposition} \label{proposition:HS-bound} Assume that the positive definite function $k$ in Bochner's Theorem satisfies: (i) $k(x) \in \HS(\H)$ $\forall x \in \R^n$, and
(ii) $\int_{\R^n}||k(x)||^2_{\HS}dx < \infty$. Then its corresponding finite $\Sym^{+}(\H)$-valued measure $\mu$ satisfies \begin{align} \mu(\omega) \in \HS(\H) \;\;\;\forall \omega \in \R^n,\;\;\;
||\mu(\omega)||_{\HS}^2 \leq \frac{1}{(2\pi)^{2n}}\int_{\R^n}||k(x)||^2_{\HS}dx. \end{align}
\end{proposition} \begin{proof}[\textbf{Proof of Proposition \ref{proposition:HS-bound}}] Since $\{\e_j\}_{j=1}^{\infty}$ is an orthonormal basis for $\H$, we have \begin{align*}
||\mu(\omega)\e_j||^2 &= \sum_{l=1}^{\infty}|\la \e_l, \mu(\omega)\e_j\ra|^2
= \frac{1}{(2\pi)^{2n}}\sum_{l=1}^{\infty}\left|\int_{\R^n}\exp(i\la \omega, x\ra)\la \e_l, k(x)\e_j\ra dx\right|^2 \\
&\leq \frac{1}{(2\pi)^{2n}}\int_{\R^n}\sum_{l=1}^{\infty}\left|\la \e_l, k(x)\e_j\ra\right|^2dx \\
& = \frac{1}{(2\pi)^{2n}}\int_{\R^n}||k(x)\e_j||^2dx. \end{align*} Thus it follows that \begin{align*}
||\mu(\omega)||^2_{\HS} &= \sum_{j=1}^{\infty}||\mu(\omega)\e_j||^2 \\
&\leq \frac{1}{(2\pi)^{2n}}\int_{\R^n}\sum_{j=1}^{\infty}||k(x)\e_j||^2dx = \frac{1}{(2\pi)^{2n}}\int_{\R^n}||k(x)||^2_{\HS}dx < \infty. \end{align*} This completes the proof of the proposition. \end{proof} \end{comment}
\begin{comment} \begin{lemma} \label{lemma:curl-div} Let $\rho_0$ be the probability measure on $\R^n$ such that $\phi(x) = \bE_{\rho_0}[e^{-i \la \omega, x\ra}] = \hat{\rho_0}(x)$. Then, under the condition \begin{align}
\int_{\R^n}||\omega||^2\rho_0(\omega)d\omega < \infty, \end{align} we have \begin{align} k_{\curl}(x) &= \int_{\R^n}e^{-i \la \omega, x\ra}\omega\omega^T\rho_0(\omega)d\omega.
\\
k_{\div}(x) &= \int_{\R^n}e^{-i \la \omega, x\ra}[||\omega||^2I_n - \omega\omega^T]\rho_0(\omega)d\omega. \end{align} \end{lemma}
\begin{remark} The condition $\int_{\R^n}||\omega||^2\rho_0(\omega)d\omega < \infty$ guarantees that $\phi$ is twice-differentiable. It is not satisfied, for example, by
the Cauchy distribution $\rho(\omega) = \frac{1}{\pi (1+ \omega^2)}$ on $\R$, and the corresponding $\phi(x) = \hat{\rho}(x) = e^{-|x|}$ is not even differentiable at $x = 0$. \end{remark} \end{comment}
\begin{proof} {\textbf{of Lemma \ref{lemma:curl-div}}}
We make use of the following property of the Fourier transform (see, e.g., \citep{Jones:Lebesgue}). If $f$ and $||x||f$ are both integrable on $\R^n$, then the Fourier transform $\hat{f}$ is differentiable and \begin{align*} \frac{\partial \hat{f}}{\partial \omega_j}(\omega) = - \widehat{i x_j f}(\omega), \;\;\; 1 \leq j \leq n. \end{align*}
Assume further that $||x||^2f$ is integrable on $\R^n$, then this rule can be applied twice to give \begin{align*} \frac{\partial^2 \hat{f}}{\partial \omega_j \partial \omega_k}(\omega) = \frac{\partial}{\partial \omega_j}\left(\frac{\partial \hat{f}}{\partial \omega_k}\right) = - \frac{\partial}{\partial \omega_j}[\widehat{i x_k f}] = -\widehat{x_j x_kf}. \end{align*}
For the curl-free kernel, we have $\phi(x) = \hat{\rho_0}(x)$ and consequently, under the assumption that $\int_{\R^n}||\omega||^2\rho_0(\omega) < \infty$, we have \begin{align*} [k_{\curl}(x)]_{jk} = - \frac{\partial^2\phi}{\partial x_j \partial x_k}(x) = -\frac{\partial^2\hat{\rho_0}}{\partial x_j \partial x_k}(x) = \widehat{\omega_j \omega_k \rho_0}(x),\;\;\; 1 \leq j,k \leq n. \end{align*} In other words, \begin{align*} [k_{\curl}(x)]_{jk} = \int_{\R^n}e^{-i \la \omega, x\ra} \omega_j \omega_k \rho_0(\omega)d\omega. \end{align*} It thus follows that \begin{align*} k_{\curl}(x) = \int_{\R^n}e^{-i \la \omega, x\ra }\omega \omega^T \rho_0(\omega)d\omega = \int_{\R^n}e^{-i \la \omega, x\ra }\mu(\omega)d\omega, \end{align*} where $\mu(\omega) = \omega \omega^T \rho_0(\omega)$. The proof for $k_{\div}$ is entirely similar. \end{proof}
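For concreteness, the following short numerical sketch (illustrative only, and not part of the proof) approximates $k_{\curl}$ by Monte Carlo sampling from $\rho_0$, taken here to be the standard Gaussian on $\R^n$; since $\rho_0$ is symmetric only the cosine part contributes, and in this case $\phi(x) = e^{-||x||^2/2}$, so that $k_{\curl}(x) = (I_n - xx^T)e^{-||x||^2/2}$ in closed form:
\begin{verbatim}
import numpy as np

# Sketch: Monte Carlo approximation of the curl-free kernel
#   k_curl(x) = E_{omega ~ rho_0}[ cos(<omega, x>) * omega omega^T ],
# with rho_0 = N(0, I) as an illustrative choice of base density.
rng = np.random.default_rng(1)
n, D = 2, 200_000
omega = rng.standard_normal((D, n))       # samples from rho_0
x = np.array([0.3, -0.7])

phases = np.cos(omega @ x)                # cos(<omega_j, x>)
k_curl = (omega.T * phases) @ omega / D   # (1/D) * sum_j cos(.) w_j w_j^T

closed_form = (np.eye(n) - np.outer(x, x)) * np.exp(-x @ x / 2)
print(np.abs(k_curl - closed_form).max()) # small Monte Carlo error
\end{verbatim}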
To prove Theorems \ref{theorem:convergence-pointwise} and \ref{theorem:convergence-uniform}, we need the following concentration result for Hilbert space-valued random variables.
\begin{lemma} [\citep{smalezhou2007}] \label{lemma:Pinelis}
Let {$\H$} be a Hilbert space with norm {$||\;||$} and {$\{\xi_j\}_{j=1}^D$}, {$D \in \N$}, be independent random variables with values in {$\H$}. Suppose that for each $j$, {$||\xi_j||\leq M < \infty$} almost surely. Let
{$\sigma^2_D =\sum_{j=1}^D\bE(||\xi_j||^2)$}. Then { \begin{align} &\bP\left(
\left\|\frac{1}{D}\sum_{j=1}^D[\xi_j - \bE(\xi_j)]\right\| \geq \epsilon \right)
\leq 2\exp\left(-\frac{D\epsilon}{2M}\log\left[1+\frac{DM\epsilon}{\sigma^2_D}\right]\right) \;\;\; \forall \epsilon > 0. \end{align} } \end{lemma}
\begin{comment} \begin{theorem} [{\bf Pointwise Convergence}] \label{theorem:convergence-pointwise}
Assume that $||\tilde{\mu}(\omega)||_{\HS} \leq M$ almost surely and that $\sigma^2(\tilde{\mu}(\omega)) = \bE_{\rho}[||\tilde{\mu}(\omega)||^2_{\HS}] < \infty$. Then for any fixed $x \in \R^n$, \begin{align}
\bP[||\hat{k}_D(x) - k(x)||_{\HS} \geq \epsilon] \leq 2 \exp\left(-\frac{D\epsilon}{2M}\log\left[1+\frac{M\epsilon}{\sigma^2(\tilde{\mu}(\omega))}\right]\right) \;\;\;\forall \epsilon > 0. \end{align} \end{theorem} \end{comment}
\begin{proof} {\textbf{of Theorem \ref{theorem:convergence-pointwise}}}
For each $x \in \R^n$ fixed, consider the random variable $\xi(x,\cdot): (\R^n, \rho) \mapto \Sym(\H)$ defined by \begin{align*} \xi(x, \omega) = \cos(\la \omega, x\ra)\tilde{\mu}(\omega). \end{align*} We then have \begin{align*} \hat{k}_D(x) &= \frac{1}{D}\sum_{j=1}^D\cos(\la \omega_j, x\ra)\tilde{\mu}(\omega_j) = \frac{1}{D}\sum_{j=1}^D\xi(x, \omega_j), \\ k(x) &= \int_{\R^n}\cos(\la \omega, x\ra)\tilde{\mu}(\omega)d\rho(\omega) = \bE_{\rho}[\xi(x, \omega)]. \end{align*}
Under the assumption that $||\tilde{\mu}(\omega)||_{\HS} \leq M$ almost surely, we also have $||\xi(x,\omega)||_{\HS} \leq M$ almost surely. Its variance satisfies \begin{align*}
\sigma^2(\xi(x, \omega)) = \bE_{\rho}||\xi(x, \omega)||^2_{\HS} \leq \bE_{\rho}||\tilde{\mu}(\omega)||^2_{\HS} = \sigma^2(\tilde{\mu}(\omega)). \end{align*} It follows from Lemma \ref{lemma:Pinelis} that for each fixed $x \in \R^n$, we have \begin{align*}
\bP[||\hat{k}_D(x) - k(x)||_{\HS} \geq \epsilon] &= \bP\left(\left\|\frac{1}{D}\sum_{j=1}^D\xi(x,\omega_j) - \bE_{\rho}[\xi(x, \omega)]\right\|_{\HS} \geq \epsilon\right) \\ &\leq 2 \exp\left(-\frac{D\epsilon}{2M}\log\left[1+\frac{M\epsilon}{\sigma^2(\xi(x, \omega))}\right]\right) \\ & \leq 2 \exp\left(-\frac{D\epsilon}{2M}\log\left[1+\frac{M\epsilon}{\sigma^2(\tilde{\mu}(\omega))}\right]\right). \end{align*} This completes the proof of the theorem. \end{proof}
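The rate predicted by Theorem \ref{theorem:convergence-pointwise} can also be observed empirically. The following minimal sketch (illustrative only) uses the separable kernel $k(x) = e^{-||x||^2/2}A$ with a fixed positive semi-definite matrix $A$, for which $\tilde{\mu}(\omega) \equiv A$ and $\rho$ is the standard Gaussian; the Hilbert-Schmidt error at a fixed $x$ decays roughly like $1/\sqrt{D}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.7]])            # plays the role of mu_tilde(omega)
x = np.array([0.5, -1.0, 0.25])
k_exact = np.exp(-x @ x / 2) * A

for D in [100, 1_000, 10_000, 100_000]:
    omega = rng.standard_normal((D, n))    # omega_j ~ rho = N(0, I)
    k_hat = np.mean(np.cos(omega @ x)) * A # (1/D) sum_j cos(<omega_j,x>) mu_tilde
    print(D, np.linalg.norm(k_hat - k_exact))   # Frobenius = Hilbert-Schmidt norm
\end{verbatim}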
\begin{comment} \begin{theorem} [{\bf Uniform Convergence}] \label{theorem:convergence-uniform}
Let $\Omega \subset \R^n$ be compact with diameter $\diam(\Omega)$. Assume that $||\tilde{\mu}(\omega)||_{\HS} \leq M$ almost surely and that $\sigma^2(\tilde{\mu}(\omega)) = \bE_{\rho}[||\tilde{\mu}(\omega)||^2_{\HS}] < \infty$. Then for any $\epsilon > 0$, \begin{align}
\bP\left(\sup_{x \in \Omega}||\hat{k}_D(x) - k(x)||_{\HS} \geq \epsilon\right) \leq & a(n)\left(\frac{\mb_1\diam(\Omega)}{\epsilon}\right)^{\frac{n}{n+1}} \nonumber \\ & \times \exp\left(-\frac{D\epsilon}{4(n+1)M}\log\left[1+\frac{M\epsilon}{2\sigma^2(\tilde{\mu}(\omega))}\right]\right), \end{align} where $a(n) = 2^{\frac{3n+1}{n+1}}\left(n^{\frac{1}{n+1}} + n^{-\frac{n}{n+1}}\right)$. \end{theorem}
\end{comment} To prove Theorem \ref{theorem:convergence-uniform}, we first prove the following preliminary results.
\begin{lemma} \label{lemma:Lipschitz-1}
Assume that $\int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) < \infty$. Then the function $k:\R^n \mapto \Sym(\H)$, with the latter endowed with the Hilbert-Schmidt norm, is Lipschitz, with \begin{align}
||k(x) - k(y)||_{\HS} \leq ||x-y||\int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega). \end{align} \end{lemma} \begin{proof} {\textbf{of Lemma \ref{lemma:Lipschitz-1}}}
Using the fact that the cosine function is Lipschitz with constant $1$, that is $|\cos(x) - \cos(y)| \leq |x-y|$ for all $x, y \in \R$, we have \begin{align*}
||k(x)-k(y)||_{\HS} &= \left\|\int_{\R^n}[\cos(\la \omega, x\ra)-\cos(\la \omega, y\ra)] \tilde{\mu}(\omega)d \rho(\omega)\right\|_{\HS} \\
& \leq \int_{\R^n}|\cos(\la \omega, x\ra)-\cos(\la \omega, y\ra)|\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) \\
& \leq \int_{\R^n}|\la \omega, x\ra-\la \omega, y\ra|\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) \\
& \leq ||x-y||\int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega). \end{align*} This completes the proof. \end{proof}
\begin{lemma} \label{lemma:Lipschitz-2} The function $\hat{k}_D(x) = \frac{1}{D}\sum_{j=1}^D\cos(\la \omega_j, x\ra)\tilde{\mu}(\omega_j): \R^n \mapto \Sym(\H)$, with the latter endowed with the Hilbert-Schmidt norm, is Lipschitz, with \begin{align}
||\hat{k}_D(x) - \hat{k}_D(y)||_{\HS} \leq ||x-y||\frac{1}{D}\sum_{j=1}^D||\omega_j||\;||\tilde{\mu}(\omega_j)||_{\HS}. \end{align} \end{lemma} \begin{proof} {\textbf{of Lemma \ref{lemma:Lipschitz-2}}}
Similar to the proof of Lemma \ref{lemma:Lipschitz-1}, we utilize the fact that $|\cos(x) - \cos(y)| \leq |x-y|$ for all $x, y \in \R$ to arrive at \begin{align*}
||\hat{k}_D(x) - \hat{k}_D(y)||_{\HS} &= \frac{1}{D}\left\|\sum_{j=1}^D[\cos(\la \omega_j, x\ra) - \cos(\la \omega_j, y\ra)]\tilde{\mu}(\omega_j)\right\|_{\HS} \\
& \leq ||x-y||\frac{1}{D}\sum_{j=1}^D||\omega_j||\;||\tilde{\mu}(\omega_j)||_{\HS}. \end{align*} This completes the proof. \end{proof}
\begin{corollary} Let $f(x) = \hat{k}_D(x) - k(x): \R^n \mapto \Sym(\H)$, with the latter endowed with the Hilbert-Schmidt norm, then $f$ is Lipschitz, with \begin{align}
||f(x) - f(y)|| \leq ||x-y||\left(\int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) + \frac{1}{D}\sum_{j=1}^D||\omega_j||\;||\tilde{\mu}(\omega_j)||_{\HS}\right). \end{align} \label{corollary:Lipschitz} The Lipschitz constant $L_f$ of $f$ satisfies \begin{align} \bP(L_f \geq \epsilon) \leq \frac{2\mb_1}{\epsilon}, \end{align}
where $\mb_1 = \int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega)$. \end{corollary} \begin{proof} {\textbf{of Corollary \ref{corollary:Lipschitz}}} By combining the results of Lemmas \ref{lemma:Lipschitz-1} and \ref{lemma:Lipschitz-2}, we have \begin{align*}
||f(x) - f(y)||_{\HS} &= ||(\hat{k}_D(x) - k(x)) - (\hat{k}_D(y) - k(y))||_{\HS} \leq ||\hat{k}_D(x) - \hat{k}_D(y)||_{\HS} + ||k(x) - k(y)||_{\HS} \\
& \leq ||x-y||\left(\int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) + \frac{1}{D}\sum_{j=1}^D||\omega_j||\;||\tilde{\mu}(\omega_j)||_{\HS}\right) \end{align*} as we claimed. Thus $f$ is Lipschitz, with the Lipschitz constant $L_f$ satisfying \begin{align*}
L_f \leq \int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) + \frac{1}{D}\sum_{j=1}^D||\omega_j||\;||\tilde{\mu}(\omega_j)||_{\HS}, \end{align*} with expectation \begin{align*}
\bE(L_f) \leq 2 \int_{\R^n}||\omega||\;||\tilde{\mu}(\omega)||_{\HS}d\rho(\omega) = 2\mb_1. \end{align*} By Markov's inequality, we have for any $\epsilon > 0$, \begin{align*} \bP(L_f \geq \epsilon) \leq \frac{\bE(L_f)}{\epsilon} \leq \frac{2\mb_1}{\epsilon}. \end{align*} This completes the proof. \end{proof}
\begin{proof} {\textbf{of Theorem \ref{theorem:convergence-uniform}}} For each $r > 0$ fixed, let $\Ncal = \Ncal(\Omega, r)$ be the covering number for $\Omega$, that is the minimum number of balls $\Omega_j$, $1 \leq j \leq \Ncal$, of radius $r$ covering $\Omega$. By Proposition 5 in \citep{CuckerSmale}, the covering number $\Ncal(\Omega,r)$ is bounded above by the expression \begin{align} \Ncal = \Ncal(\Omega, r) \leq \left(\frac{2\diam(\Omega)}{r}\right)^n. \end{align}
Consider the function $f: \R^n \mapto \Sym(\H)$ defined by \begin{align*} f(x) = \hat{k}_D(x) - k(x). \end{align*} On the ball $\Omega_j$, we have \begin{align*}
\bP(\sup_{x \in \Omega_j}||\hat{k}_D(x) - k(x)||_{\HS} \geq \epsilon) = \bP(\sup_{x \in \Omega_j}||f(x)||_{\HS} \geq \epsilon). \end{align*} By Corollary \ref{corollary:Lipschitz}, $f$ is a Lipschitz function with Lipschitz constant $L_f > 0$, with $\Sym(\H)$ being endowed with the Hilbert-Schmidt norm. Let $x_j$ be the center of the $j$th ball $\Omega_j$. For each $\epsilon > 0$, for any $x \in \Omega_j$, we have \begin{align*}
||f(x_j) - f(x)||_{\HS} \leq L_f||x_j - x|| \leq rL_f < \frac{\epsilon}{2} \;\;\;\text{when}\;\;\; L_f < \frac{\epsilon}{2r}.
\end{align*}
Since \begin{align*}
||f(x)||_{\HS} \leq ||f(x) - f(x_j)||_{\HS} + ||f(x_j)||_{\HS}, \end{align*} we have \begin{align*}
\sup_{x \in \Omega_j}||f(x)||_{\HS} < \epsilon \;\;\; \text{if}\;\;\; L_f < \frac{\epsilon}{2r}\;\;\;\text{and}\;\;\; ||f(x_j)||_{\HS} < \frac{\epsilon}{2}. \end{align*} Thus over the union of balls $\Omega_j$, $1 \leq j \leq \Ncal$, we have \begin{align*}
\sup_{x \in \cup_{j=1}^{\Ncal}\Omega_j}||f(x)||_{\HS} < \epsilon \;\;\; \text{if}\;\;\; L_f < \frac{\epsilon}{2r}\;\;\;\text{and}\;\;\; ||f(x_j)||_{\HS} < \frac{\epsilon}{2}, \;\;\; 1\leq j \leq \Ncal. \end{align*} Thus \begin{align*}
\bP\left(\sup_{x \in \cup_{j=1}^{\Ncal}\Omega_j}||f(x)||_{\HS} < \epsilon\right) \geq \bP\left(L_f < \frac{\epsilon}{2r}\;\;\;\text{and}\;\;\; ||f(x_j)||_{\HS} < \frac{\epsilon}{2}, \;\;\; 1\leq j \leq \Ncal\right). \end{align*}
We now recall the following properties on an arbitrary probability space $(\Sigma, \bP, \Fcal)$. For any events $A, B$, let $\overline{A}$ denote the complement of $A$ in $\Fcal$ , then we have $\overline{A \cap B} = \overline{A} \cup \overline{B}$, so that \begin{align} \label{equation:probability-complement-1} \bP(\overline{A \cap B}) &= \bP(\overline{A} \cup \overline{B}) \leq \bP(\overline{A}) + \bP(\overline{B}), \\ \bP(A \cap B) &= 1- \bP(\overline{A \cap B}) = 1 - \bP(\overline{A} \cup \overline{B}) \geq 1 - \bP(\overline{A}) - \bP(\overline{B}). \label{equation:probability-complement-2} \end{align}
Applying property (\ref{equation:probability-complement-2}) with $A = \{L_f < \frac{\epsilon}{2r}\}$ and $B = \{||f(x_j)||_{\HS} < \frac{\epsilon}{2}, 1 \leq j \leq \Ncal\}$, we obtain
\begin{align*}
\bP\left(\sup_{x \in \cup_{j=1}^{\Ncal}\Omega_j}||f(x)||_{\HS} < \epsilon\right)
&\geq 1 - \bP\left(L_f \geq \frac{\epsilon}{2r}\right) - \bP\left(\overline{\{||f(x_j)||_{\HS} < \frac{\epsilon}{2}, 1 \leq j \leq \Ncal\}}\right). \end{align*}
Applying property (\ref{equation:probability-complement-1}) recursively to the set $\{||f(x_j)||_{\HS} < \frac{\epsilon}{2}, 1 \leq j \leq \Ncal\}$, we obtain \begin{align*}
\bP\left(\overline{\{||f(x_j)||_{\HS} < \frac{\epsilon}{2}, 1 \leq j \leq \Ncal\}}\right)
\leq \sum_{j=1}^{\Ncal}\bP(\overline{\{||f(x_j)||_{\HS} < \frac{\epsilon}{2}\}}) = \sum_{j=1}^{\Ncal}\bP(\{||f(x_j)||_{\HS} \geq \frac{\epsilon}{2}\}). \end{align*} Combining the last two expressions, we have \begin{align*}
\bP\left(\sup_{x \in \cup_{j=1}^{\Ncal}\Omega_j}||f(x)||_{\HS} < \epsilon\right)
&\geq 1 - \bP\left(L_f \geq \frac{\epsilon}{2r}\right) - \sum_{j=1}^{\Ncal}\bP(\{||f(x_j)||_{\HS} \geq \frac{\epsilon}{2}\}). \end{align*} Equivalently, \begin{align*}
\bP\left(\sup_{x \in \cup_{j=1}^{\Ncal}\Omega_j}||f(x)||_{\HS} \geq \epsilon\right)
&\leq \bP\left(L_f \geq \frac{\epsilon}{2r}\right) + \sum_{j=1}^{\Ncal}\bP(\{||f(x_j)||_{\HS} \geq \frac{\epsilon}{2}\}). \end{align*} By Corollary \ref{corollary:Lipschitz}, we have \begin{align*} \bP\left(L_f \geq \frac{\epsilon}{2r}\right) \leq \frac{4\mb_1 r}{\epsilon}. \end{align*} By Theorem \ref{theorem:convergence-pointwise}, we have \begin{align*}
\bP\left(||f(x_j)||_{\HS} \geq \frac{\epsilon}{2}\right) \leq 2 \exp\left(-\frac{D\epsilon}{4M}\log\left[1+\frac{M\epsilon}{2\sigma^2(\tilde{\mu}(\omega))}\right]\right). \end{align*} Putting everything together, we obtain \begin{align*}
\bP\left(\sup_{x \in \Omega}||f(x)||_{\HS} \geq \epsilon\right) & \leq \frac{4\mb_1 r}{\epsilon} + 2\Ncal\exp\left(-\frac{D\epsilon}{4M}\log\left[1+\frac{M\epsilon}{2\sigma^2(\tilde{\mu}(\omega))}\right]\right) \\ &\leq \frac{4\mb_1 r}{\epsilon} + 2\exp\left(-\frac{D\epsilon}{4M}\log\left[1+\frac{M\epsilon}{2\sigma^2(\tilde{\mu}(\omega))}\right]\right)\left(\frac{2\diam(\Omega)}{r}\right)^n \\ &= ar + \frac{b}{r^n}, \end{align*} where $a = \frac{4\mb_1}{\epsilon}$ and $b =2\exp\left(-\frac{D\epsilon}{4M}\log\left[1+\frac{M\epsilon}{2\sigma^2(\tilde{\mu}(\omega))}\right]\right)\left({2\diam(\Omega)}\right)^n$.
Let us find the value $r > 0$ that minimizes the right hand side in the above expression. The function $g(r) = ar +\frac{b}{r^n}$ for $a, b > 0$ achieves its minimum on $(0, \infty)$ at $r = \left(\frac{bn}{a}\right)^{\frac{1}{n+1}}$, with the minimum value given by \begin{align*} g_{\min} = a^{\frac{n}{n+1}}b^{\frac{1}{n+1}}\left(n^{\frac{1}{n+1}} + n^{-\frac{n}{n+1}}\right). \end{align*}
Substituting the value for $a$ and $b$, we obtain \begin{align*}
\bP\left(\sup_{x \in \Omega}||f(x)||_{\HS} \geq \epsilon\right) \leq a(n)\left(\frac{\mb_1\diam(\Omega)}{\epsilon}\right)^{\frac{n}{n+1}} \exp\left(-\frac{D\epsilon}{4(n+1)M}\log\left[1+\frac{M\epsilon}{2\sigma^2(\tilde{\mu}(\omega))}\right]\right) \end{align*} where $a(n) = 2^{\frac{3n+1}{n+1}}\left(n^{\frac{1}{n+1}} + n^{-\frac{n}{n+1}}\right)$. This completes the proof of the theorem. \end{proof}
\begin{comment} \begin{theorem}\label{theorem:representer} The optimization problem
\begin{align} \label{equation:general}
f_{\z,\gamma} = \argmin_{f \in \H_K} &\frac{1}{l}\sum_{i=1}^lV(y_i, Cf(x_i)) + \gamma_A ||f||^2_{\H_K}
+ \gamma_I \la \f, M\f\ra_{\mathcal{W}^{u+l}}. \end{align} has a unique solution $f_{\z,\gamma}$, which has the form $f_{\z,\gamma}(x) = \sum_{i=1}^{u+l}K(x,x_i)a_i$ for some $a_i \in \W$. In terms of feature map representation, $f_{\z,\gamma}(x) = \Phi_K(x)^{*}\h$, where \begin{equation}\label{equation:dual-to-primal} \h = \sum_{i=1}^{u+l}\Phi_K(x_i)a_i = \Phi_K(\x)\a \in \F_K, \end{equation} and $\a = (a_1, \ldots, a_{u+l}) \in \W^{u+l}$. \end{theorem} \end{comment}
\begin{proof}{\textbf{of Theorem \ref{theorem:representer}}} It is straightforward to show that the optimization problem (\ref{equation:general}) has a unique solution $f_{\z,\gamma}$, which has the form $f_{\z,\gamma}(x) = \sum_{i=1}^{u+l}K(x,x_i)a_i$ for some $a_i \in \W$. Under the feature map representation $\Phi_K$, we have $$ f_{\z,\gamma}(x) = \sum_{i=1}^{u+l}K(x,x_i)a_i = \sum_{i=1}^{u+l}\Phi_K(x)^{*}\Phi_K(x_i)a_i = \Phi_K(x)^{*}\h, $$ where $$ \h = \sum_{i=1}^{u+l}\Phi_K(x_i)a_i = \Phi_K(\x)\a, $$ as we claimed. \end{proof}
\begin{comment} \begin{theorem}\label{theorem:leastsquare-K} Let $\Phi_K: \X \mapto \mathcal{L}(\W, \F_K)$ be a feature map induced by the kernel $K$, where $\F_K$ is a separable Hilbert space, so that $K(x,t) = \Phi_K(x)^{*}\Phi_K(t)$ for any pair $(x,t) \in \X \times \X$. Then the solution of the optimization problem \begin{align} \label{equation:LS}
f_{\z,\gamma} = \argmin_{f \in \H_K} &\frac{1}{l}\sum_{i=1}^l||y_i - Cf(x_i)||^2_{\Y} + \gamma_A ||f||^2_{\H_K}
+ \gamma_I \la \f, M\f\ra_{\mathcal{W}^{u+l}}. \end{align}
is $f_{\z,\gamma}(x) = \Phi_K(x)^{*}\h$, where $\h \in \F_K$ is given by \begin{align} \h = & \left(\Phi_K(\x)[(J^{u+l}_l \otimes C^{*}C) + l\gamma_I M]\Phi_K(\x)^{*} + l \gamma_A I_{\F_K}\right)^{-1}
\Phi_K(\x)(I_{(u+l) \times l} \otimes C^{*})\y, \end{align} with $\Phi_K(\x) = [\Phi_K(x_1), \ldots, \Phi_K(x_{u+l})]: \W^{u+l} \mapto \F_K$. \end{theorem} \end{comment}
To prove Theorem \ref{theorem:leastsquare-K}, we first consider the following operators.
The sampling operator $S_{\mathbf{x}}: \H_K \rightarrow {\W}^l$ is defined by $S_{\mathbf{x}}(f) = (f(x_i))_{i=1}^l$. For any $\y = (y_i)_{i=1}^l \in \mathcal{W}^l$, we have
\begin{align*}
\langle S_{\mathbf{x}}f, \mathbf{y} \rangle_{{\W}^l}&= \sum_{i=1}^l\langle f(x_i), y_i\rangle_{\W} = \sum_{i=1}^l\langle K^{*}_{x_i}f,y_i\rangle_{\H_K}\\ &= \sum_{i=1}^l\langle f, K_{x_i}y_i\rangle_{\H_K} = \langle f, \sum_{i=1}^l K_{x_i}y_i\rangle_{\H_K}.
\end{align*}
Thus the adjoint operator $S_{\mathbf{x}}^{*}: {\W}^l \rightarrow \H_K$ is given by \begin{equation} S_{\mathbf{x}}^{*}\mathbf{y} = S_{\mathbf{x}}^{*}(y_1, \ldots, y_l) = \displaystyle{\sum_{i=1}^lK_{x_i}y_i}, \;\;\; \y \in \mathcal{W}^l, \end{equation} and the operator $S_{\mathbf{x}}^{*}S_{\mathbf{x}}: \H_K \rightarrow \H_K$ is given by \begin{equation} S_{\mathbf{x}}^{*}S_{\mathbf{x}}f = \displaystyle{\sum_{i=1}^lK_{x_i}f(x_i)} = \sum_{i=1}^l K_{x_i}K^{*}_{x_i}f. \end{equation}
Consider the
operator $E_{C, \x}: \H_K \mapto \mathcal{Y}^{l}$, defined by \begin{equation} E_{C,\x}f = (CK_{x_1}^{*}f, \ldots, CK_{x_l}^{*}f), \end{equation} with $CK_{x_i}^{*}: \H_K \mapto \mathcal{Y}$ and $K_{x_i}C^{*}: \mathcal{Y} \mapto \H_K$. For $\mathbf{b} = (b_1, \ldots, b_l) \in \mathcal{Y}^{l}$, we have \begin{eqnarray} \la \mathbf{b}, E_{C,\x}f\ra_{\mathcal{Y}^{l}} = \sum_{i=1}^l \la b_i, CK_{x_i}^{*}f\ra_{\mathcal{Y}}
= \sum_{i=1}^l \la K_{x_i}C^{*}b_i, f\ra_{\H_K}.
\end{eqnarray} The adjoint operator $E_{C,\x}^{*}:\mathcal{Y}^{l} \mapto \H_K$ is thus \begin{equation} E_{C,\x}^{*}: (b_1, \ldots, b_l) \mapto \sum_{i=1}^lK_{x_i}C^{*}b_i. \end{equation} The operator $E_{C,\x}^{*}E_{C, \x}:\H_K \mapto \H_K$ is then
\begin{equation} E_{C,\x}^{*}E_{C, \x}f = \sum_{i=1}^lK_{x_i}C^{*}CK_{x_i}^{*}f,
\end{equation} with $C^{*}C: \mathcal{W} \mapto \mathcal{W}$.
\begin{proof}{\textbf{of Theorem \ref{theorem:leastsquare-K}}} Since $f(x) = K_x^{*}f$, we have \begin{eqnarray}\label{equation:vector-lsq2}
f_{\z, \gamma} = \argmin_{f \in \H_K} \frac{1}{l}\sum_{i=1}^l||y_i - CK_{x_i}^{*}f||^2_{\mathcal{Y}}
+ \gamma_A||f||^2_{\H_K} + \gamma_I \la \f, M\f\ra_{\mathcal{W}^{u+l}}. \end{eqnarray} Using the operator $E_{C, \x}$, this becomes \begin{equation} \label{equation:multiview-lsq2}
f_{\z, \gamma} = \argmin_{f \in \H_K} \frac{1}{l}||E_{C,\x}f-\y||^2_{\mathcal{Y}^l}
+ \gamma_A||f||^2_{\H_K} + \gamma_I \la \f, M\f\ra_{\mathcal{W}^{u+l}}. \end{equation} Differentiating (\ref{equation:multiview-lsq2}) and setting the derivative to zero gives
\begin{equation}\label{equation:fz1} (E_{C,\x}^{*}E_{C,\x} + l \gamma_A I + l\gamma_I S_{\x,u+l}^{*}MS_{\x,u+l})f_{\z, \gamma} = E_{C,\x}^{*}\y, \end{equation} which is $$ f_{\z,\gamma} = (E_{C,\x}^{*}E_{C,\x} + l \gamma_A I_{\H_K} + l\gamma_I S_{\x,u+l}^{*}MS_{\x,u+l})^{-1}E_{C,\x}^{*}\y. $$ On the set $\x = (xi)_{i=1}^{u+l}$, the operators $S_{\x,u+l}: \H_K \mapto \W^{u+l}$ and $S_{\x, u+l}^{*}: \W^{u+l} \mapto \H_K$ are given by $$ S_{\x, u+l}f = (K_{x_i}^{*}f)_{i=1}^{u+l}, \;\;\; f \in \H_K, $$ $$ S_{\x, u+l}^{*}\b = \sum_{i=1}^{u+l}K_{x_i}b_i, \;\;\; \b \in \W^{u+l}. $$ By definition of the operators $S_{\x,u+l}$ and $S_{\x,u+l}^{*}$, we have $$ S_{\x, u+l}^{*}(I_{(u+l) \times l} \otimes C^{*})\y = \sum_{i=1}^lK_{x_i}(C^{*}y_i). $$ Thus the operator $E_{C,\x}^{*}: \Y^l \mapto \H_K$ is \begin{equation} E_{C, \x}^{*} = S_{\x, u+l}^{*}(I_{(u+l) \times l} \otimes C^{*}). \end{equation} The operator $E_{C,\x}^{*}E_{C,\x}: \H_K \mapto \H_K$ is given by \begin{equation} E_{C,\x}^{*}E_{C,\x} = S_{\x, u+l}^{*}(J^{u+l}_l \otimes C^{*}C)S_{\x, u+l}: \H_K \mapto \H_K, \end{equation} Equation (\ref{equation:fz1}) becomes \begin{equation}\label{equation:fz2} \left[S_{\x, u+l}^{*}(J^{u+l}_l \otimes C^{*}C + l \gamma_I M)S_{\x, u+l} + l \gamma_A I_{\H_K}\right]f_{\z,\gamma} = S_{\x, u+l}^{*}(I_{(u+l) \times l} \otimes C^{*})\y, \end{equation} which gives \begin{equation} f_{\z,\gamma} = \left[S_{\x, u+l}^{*}(J^{u+l}_l \otimes C^{*}C + l \gamma_I M)S_{\x, u+l} + l \gamma_A I_{\H_K}\right]^{-1}S_{\x, u+l}^{*}(I_{(u+l) \times l} \otimes C^{*})\y. \end{equation}
For any $x \in \X$, $$ (S_{\x, u+l}^{*}(I_{(u+l) \times l} \otimes C^{*})\y)(x) = \sum_{i=1}^lK(x,x_i)(C^{*}y_i) \in \W. $$ Using the feature map $\Phi_K$, we have for any $x \in \X$, $$ (S_{\x, u+l}^{*}(I_{(u+l) \times l} \otimes C^{*})\y)(x) = \sum_{i=1}^l\Phi_K(x)^{*}\Phi_K(x_i)(C^{*}y_i) \in \W, $$ and for any $w \in \W$, \begin{align*} \la (S_{\x, u+l}^{*}(I_{(u+l) \times l} \otimes C^{*})\y)(x),w\ra_{\W} &= \sum_{i=1}^l\la \Phi_K(x)^{*}\Phi_K(x_i)(C^{*}y_i), w\ra_{\W} \nonumber\\ &= \sum_{i=1}^l \la \Phi_K(x_i)(C^{*}y_i), \Phi_K(x)w\ra_{\F_K} \nonumber\\ &= \la \Phi_K(\x)(I_{(u+l) \times l} \otimes C^{*})\y, \Phi_K(x)w\ra_{\F_K}. \end{align*}
For any $f \in \H_K$, $$ S_{\x,u+l}^{*}MS_{\x,u+l}f = \sum_{i=1}^{u+l}K_{x_i}(M\f)_i. $$ For any $x \in \X$, $$ (S_{\x,u+l}^{*}MS_{\x,u+l}f)(x) = \sum_{i=1}^{u+l}K(x,x_i)(M\f)_i = \sum_{i=1}^{u+l}\Phi_K(x)^{*}\Phi_K(x_i)(M\f)_i \in \W, $$ and for any $w \in \W$, $$ \la (S_{\x,u+l}^{*}MS_{\x,u+l}f)(x), w\ra_{\W} = \sum_{i=1}^{u+l}\la \Phi_K(x_i)(M\f)_i, \Phi_K(x)w\ra_{\F_K}. $$ We have $$ \sum_{i=1}^{u+l}\Phi_K(x_i)(M\f)_i = \sum_{i=1}^{u+l}\Phi_K(x_i)(M\Phi_K(\x)^{*}\h)_i = \Phi_K(\x)M\Phi_K(\x)^{*}\h \in \F_K. $$ It follows that \begin{equation} \la (S_{\x,u+l}^{*}MS_{\x,u+l}f)(x), w\ra_{\W} = \la \Phi_K(\x)M\Phi_K(\x)^{*}\h , \Phi_K(x)w\ra_{\F_K}. \end{equation} Similarly, for any $f \in \H_K$, \begin{align*} S_{\x,u+l}^{*}(J^{u+l}_l \otimes C^{*}C)S_{\x,u+l}f &= S_{\x, u+l}^{*}(J^{u+l}_l \otimes C^{*}C)\f = \sum_{i=1}^{u+l}K_{x_i}((J^{u+l}_l \otimes C^{*}C)\f)_i \\
&= \sum_{i=1}^{u+l}K_{x_i}((J^{u+l}_l \otimes C^{*}C)\Phi_K(\x)^{*}\h)_i. \end{align*}
For any $x \in \X$, \begin{align*}
(S_{\x,u+l}^{*}(J^{u+l}_l \otimes C^{*}C)S_{\x,u+l}f)(x) &= \sum_{i=1}^{u+l}K(x,x_i)((J^{u+l}_l \otimes C^{*}C)\Phi_K(\x)^{*}\h)_i \\
&= \sum_{i=1}^{u+l}\Phi_K(x)^{*}\Phi_K(x_i)((J^{u+l}_l \otimes C^{*}C)\Phi_K(\x)^{*}\h)_i.
\end{align*} For any $w \in \W$, \begin{align*} \la (S_{\x,u+l}^{*}(J^{u+l}_l \otimes C^{*}C)S_{\x,u+l}f)(x), w\ra_{\W} &= \la \sum_{i=1}^{u+l}\Phi_K(x_i)((J^{u+l}_l \otimes C^{*}C)\Phi_K(\x)^{*}\h)_i, \Phi_K(x)w\ra_{\W} \\
&= \la \Phi_K(\x)(J^{u+l}_l \otimes C^{*}C)\Phi_K(\x)^{*}\h, \Phi_K(x)w\ra_{\F_K}.
\end{align*} Equation (\ref{equation:fz2}) is then equivalent to \begin{align*} &\la \Phi_K(\x)(J^{u+l}_l \otimes C^{*}C)\Phi_K(\x)^{*}\h, \Phi_K(x)w\ra_{\F_K} + l \gamma_I \la \Phi_K(\x)M\Phi_K(\x)^{*}\h , \Phi_K(x)w\ra_{\F_K}
\\ &+ l \gamma_A \la \h, \Phi_K(x)w\ra_{\F_K} = \la \Phi_K(\x)(I_{(u+l) \times l} \otimes C^{*})\y, \Phi_K(x)w\ra_{\F_K}. \end{align*} for all $x \in \X$, $w \in \W$, which is \begin{eqnarray*} \la \Phi_K(\x)[(J^{u+l}_l \otimes C^{*}C) + l\gamma_I M]\Phi_K(\x)^{*}\h, \Phi_K(x)w\ra_{\F_K} + l \gamma_A \la \h, \Phi_K(x)w\ra_{\F_K} \\ = \la \Phi_K(\x)(I_{(u+l) \times l} \otimes C^{*})\y, \Phi_K(x)w\ra_{\F_K}. \end{eqnarray*} for all $x \in \X$, $w \in \W$. This is satisfied if $$ \left(\Phi_K(\x)[(J^{u+l}_l \otimes C^{*}C) + l\gamma_I M]\Phi_K(\x)^{*} + l \gamma_A I_{\F_K}\right)\h = \Phi_K(\x)(I_{(u+l) \times l} \otimes C^{*})\y. $$ This completes the proof of the theorem. \end{proof}
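The closed-form solution above is straightforward to realize numerically when the feature space is finite dimensional. The following sketch assembles $\h$ for small dense matrices; the particular choices of $\Phi_K(\x)$, $C$, $M$, and the convention that $J^{u+l}_l$ acts as the identity on the labeled points and as zero on the unlabeled ones, are illustrative assumptions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
u, l = 4, 6                      # unlabeled / labeled points
dim_W, dim_Y, dim_F = 3, 2, 10   # dimensions of W, Y and the feature space F_K
gamma_A, gamma_I = 1e-2, 1e-3
N = u + l

Phi = rng.standard_normal((dim_F, N * dim_W))   # Phi_K(x): W^{u+l} -> F_K
C = rng.standard_normal((dim_Y, dim_W))         # C: W -> Y
M = np.eye(N * dim_W)                           # a symmetric PSD regularization operator
y = rng.standard_normal(l * dim_Y)              # stacked labeled outputs

J = np.zeros((N, N)); J[:l, :l] = np.eye(l)     # J^{u+l}_l: selects labeled points
I_Nl = np.zeros((N, l)); I_Nl[:l, :] = np.eye(l)

lhs = Phi @ (np.kron(J, C.T @ C) + l * gamma_I * M) @ Phi.T + l * gamma_A * np.eye(dim_F)
rhs = Phi @ np.kron(I_Nl, C.T) @ y
h = np.linalg.solve(lhs, rhs)                   # h in F_K

f_x1 = Phi[:, :dim_W].T @ h                     # f(x_1) = Phi_K(x_1)^* h, an element of W
print(h.shape, f_x1.shape)
\end{verbatim}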
\begin{comment} \section{Mercer's Theorem and Feature Maps}
Consider the integral operator $L_K: L^2_{\mu}(\X; \W) \mapto L^2_{\mu}(\X; \W)$ defined by \begin{equation} L_Kf(x) = \int_{\X}K(x,t)f(t)d\mu(t). \end{equation} The eigenfunctions $\phi_k$'s form an orthonormal basis for $L^2_{\mu}(\X; \W)$, so that \begin{equation} \int_{\X}\la \phi_j(t), \phi_k(t)\ra_{\W}d\mu(t) = \delta_{jk}. \end{equation} Mercer's Theorem states that \begin{equation} K(x,t) = \sum_{k=1}^{\infty}\lambda_k \phi_k(x) \otimes \phi_k(t), \end{equation} where $\phi_k(x) \otimes \phi_k(t): \W \mapto \W$ denotes the rank-one linear operator defined by \begin{equation} (\phi_k(x) \otimes \phi_k(t))w = \la \phi_k(t), w\ra_{\W}\phi_k(x). \end{equation} For $f = \sum_{k=1}^{\infty}u_k\phi_k \in L^2_{\mu}(\X; \W)$, \begin{equation} L_Kf(x) = \sum_{k=1}^{\infty}\lambda_ku_k\phi_k(x). \end{equation} A consequence of this expression is that \begin{equation}
\sum_{k=1}^{\infty}||L_K\phi_k||^2_{L^2_{\mu}(\X; \W)} = \sum_{k=1}^{\infty}\lambda_k^2, \end{equation} \begin{equation}
\sum_{k=1}^{\infty}||L_K^{1/2}\phi_k||^2_{L^2_{\mu}(\X; \W)} = \sum_{k=1}^{\infty}\lambda_k. \end{equation} We have \begin{equation} K(x,x) = \sum_{k=1}^{\infty}\lambda_k \phi_k(x) \otimes \phi_k(x): \W \mapto \W. \end{equation} For each $w \in \W$, $$ K(x,x)w = \sum_{k=1}^{\infty}\lambda_k \la \phi_k(x), w\ra_{\W}\phi_k(x), $$ $$
\la w, K(x,x)w\ra_{\W} = \sum_{k=1}^{\infty}\lambda_k |\la \phi_k(x), w\ra_{\W}|^2. $$
Each rank-one operator $\phi_k(x) \otimes \phi_k(x): \W \mapto \W$ satisfies $$
||(\phi_k(x) \otimes \phi_k(x))w||_{\W} = ||\la \phi_k(x), w\ra_{\W}\phi_k(x)||_{\W} \leq ||\phi_k(x)||^2_{\W}||w||_{\W}, $$ with equality when $w = \phi_k(x)$. Thus \begin{equation}
||\phi_k(x) \otimes \phi_k(x)||_{\mathcal{L}(\W)} = ||\phi_k(x)||^2_{\W}. \end{equation} {\bf Assumptions}: Let $\{\e_{i,\W}\}_{i=1}^{\infty}$ be any orthonormal basis in $\W$. \begin{equation} \sum_{i=1}^{\infty} \la \e_{i,\W}, K(x,x)\e_{i,\W}\ra_{\W} < \infty. \end{equation} \begin{equation} \int_{\X}\sum_{i=1}^{\infty} \la \e_{i,\W}, K(x,x)\e_{i,\W}\ra_{\W}d\mu(x) < \infty. \end{equation} Then \begin{eqnarray}
\sum_{i=1}^{\infty} \la \e_{i,\W}, K(x,x)\e_{i,\W}\ra_{\W} = \sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\lambda_k |\la \phi_k(x), \e_{i,\W}\ra_{\W}|^2\nonumber\\
= \sum_{k=1}^{\infty}\lambda_k\sum_{i=1}^{\infty}|\la \phi_k(x), \e_{i,\W}\ra_{\W}|^2 = \sum_{k=1}^{\infty}\lambda_k ||\phi_k(x)||^2_{\W} < \infty. \end{eqnarray} Integrating with respect to $d\mu(x)$, we get \begin{equation} \int_{\X}\sum_{i=1}^{\infty} \la \e_{i,\W}, K(x,x)\e_{i,\W}\ra_{\W}d\mu(x) = \sum_{k=1}^{\infty}\lambda_k < \infty. \end{equation}
For any $w \in \W$, we have \begin{eqnarray} \la w, K(x,t)w\ra_{\W} = \la w, (\sum_{k=1}^{\infty}\lambda_k\phi_k(x) \otimes \phi_k(t))w\ra_{\W} \nonumber\\
= \sum_{k=1}^{\infty}\lambda_k \la \phi_k(x), w\ra_{\W} \la \phi_k(t), w\ra_{\W}. \end{eqnarray} Consider the map $\Phi_K: \X \mapto \mathcal{L}(\W, \ell^2)$ defined by \begin{equation} \Phi_K(x):w \mapto (\sqrt{\lambda_k}\la \phi_k(x), w\ra_{\W})_{k=1}^{\infty} \in \ell^2. \end{equation} Then we have \begin{equation} \la w, K(x,t)w\ra_{\W} = \la \Phi_K(x)w, \Phi_K(t)w\ra_{\ell^2}. \end{equation} Using the adjoint $\Phi_K(x)^{*} \in \mathcal{L}(\ell^2, \W)$, this is \begin{equation} \la w, K(x,t)w\ra_{\W} = \la w, \Phi_K(x)^{*}\Phi_K(t)w\ra_{\W} \end{equation} for any $w\in \W$. Thus we have \begin{equation} K(x,t) = \Phi_K(x)^{*}\Phi_K(t). \end{equation} The map $\Phi_K: \X \mapto \mathcal{L}(\W, \ell^2)$ is operator-valued, with $\Phi_K(x): \W \mapto \ell^2$ being a bounded linear operator for each $x \in \X$. It is a natural generalization of the feature map $\Phi_K: \X \mapto \ell^2$, with $\Phi_K(x) = (\sqrt{\lambda_k}\phi_k(x))_{k=1}^{\infty} \in \ell^2$ in the scalar-valued setting.
\end{comment}
\end{document}
What is the fundamental difference between CNN and RNN?
What is the fundamental difference between convolutional neural networks and recurrent neural networks? Where are they applied?
neural-networks convolutional-neural-networks recurrent-neural-networks comparison
Pradeep BV
Better not to think of RNN/CNN as different networks, but as different network capabilities: a network can be stateless or stateful (as RNN, LSTM, deep); a network can or cannot have spatial operators (as 2D convolution, like CNN); ... – pasaba por aqui Jun 5 '19 at 14:44
Basically, a CNN saves a set of weights and applies them spatially. For example, in a layer, I could have 32 sets of weights (also called feature maps). Each set of weights is a 3x3 block, meaning I have 3x3x32=288 weights for that layer. If you gave me an input image, for each 3x3 map, I slide it across all the pixels in the image, multiplying the regions together. I repeat this for all 32 feature maps, and pass the outputs on. So, I am learning a few weights that I can apply at a lot of locations.
For an RNN, it is a set of weights applied temporally (through time). An input comes in, and is multiplied by the weight. The networks saves an internal state and puts out some sort of output. Then, the next piece of data comes in, and is multiplied by the weight. However, the internal state that was created from the last piece of data also comes in and is multiplied by a different weight. Those are added and the output comes from an activation applied to the sum, times another weight. The internal state is updated, and the process repeats.
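To make that concrete, here is a tiny NumPy sketch (illustrative only, with made-up sizes): the same 3x3 weights are reused at every spatial position, while the same pair of weight matrices is reused at every time step.

```python
import numpy as np

rng = np.random.default_rng(0)

# CNN-style: one 3x3 set of weights slid over a 7x7 image (shared across space)
image = rng.standard_normal((7, 7))
w = rng.standard_normal((3, 3))
feature_map = np.array([[np.sum(image[i:i+3, j:j+3] * w)
                         for j in range(5)] for i in range(5)])

# RNN-style: one (W_xh, W_hh) pair reused at every time step (shared across time)
xs = rng.standard_normal((10, 4))           # a length-10 sequence of 4-d inputs
W_xh = rng.standard_normal((4, 8))
W_hh = rng.standard_normal((8, 8))
h = np.zeros(8)                             # internal state carried forward
for x_t in xs:
    h = np.tanh(x_t @ W_xh + h @ W_hh)      # same weights applied at every step

print(feature_map.shape, h.shape)           # (5, 5) and (8,)
```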
CNN's work really well for computer vision. At the low levels, you often want to find things like vertical and horizontal lines. Those kinds of things are going to be all over the images, so it makes sense to have weights that you can apply anywhere in the images.
RNN's are really good for natural language processing. You can imagine that the next word in a sentence will be highly influenced by the ones that came before it, so it makes sense to carry that internal state forward and have a small set of weights that can apply to any input.
However, there are many more applications. In addition, CNN's have performed well on NLP tasks. There are also more advanced versions of RNN's called LSTM's that you could check out.
For an explanation of CNN's, go to the Stanford CS231n course. Especially check out lecture 5. There are full class videos on YouTube.
For an explanation of RNN's, go here.
pshlady
IMHO, this is a quite confusing explanation. – nbro♦ May 13 '19 at 21:46
Recurrent neural networks (RNNs) are artificial neural networks (ANNs) that have one or more recurrent (or cyclic) connections, as opposed to just having feed-forward connections, like a feed-forward neural network (FFNN).
These cyclic connections are used to keep track of temporal relations or dependencies between the elements of a sequence. Hence, RNNs are suited for sequence prediction or related tasks.
In the standard illustration of an RNN (picture omitted here), an RNN with a single hidden unit appears on the left, and its equivalent "unfolded" version on the right. In the unfolded version, for example, $\bf h_1$ (the hidden unit at time step $t=1$) receives both an input $\bf x_1$ and the value of the hidden unit at the previous time step, that is, $\bf h_0$.
The cyclic connections (or the weights of the cyclic edges), like the feed-forward connections, are learned using an optimisation algorithm (like gradient descent) often combined with back-propagation (which is used to compute the gradient of the loss function).
Convolutional neural networks (CNNs) are ANNs that perform one or more convolution (or cross-correlation) operations (often followed by a down-sampling operation).
The convolution is an operation that takes two functions, $\bf f$ and $\bf h$, as input and produces a third function, $\bf g = f \circledast h$, where the symbol $\circledast$ denotes the convolution operation. In the context of CNNs, the input function $\bf f$ can e.g. be an image (which can be thought of as a function from 2D coordinates to RGB or grayscale values). The other function $\bf h$ is called the "kernel" (or filter), which can be thought of as a (small and square) matrix (which contains the output of the function $\bf h$). $\bf f$ can also be thought of as a (big) matrix (which contains, for each cell, e.g. its grayscale value).
In the context of CNNs, the convolution operation can be thought of as dot product between the kernel $\bf h$ (a matrix) and several parts of the input (a matrix).
In the example below, we perform an element-wise multiplication between the kernel $\bf h$ and part of the input $\bf f$, then we sum the elements of the resulting matrix, and that is the value of the convolution operation for that specific part of the input.
To be more concrete, in the picture above, we are performing the following operation
\begin{align} \sum_{ij} \left( \begin{bmatrix} 1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 1 & 1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 1 \end{bmatrix} \right) = \sum_{ij} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 1 & 0 & 1 \end{bmatrix} = 4 \end{align}
where $\otimes$ is the element-wise multiplication and the summation $\sum_{ij}$ is over all rows $i$ and columns $j$ (of the matrices).
To compute all elements of $\bf g$, we can think of the kernel $\bf h$ as being slided over the matrix $\bf f$.
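If you prefer code, here is a minimal NumPy version of the operation just described (real frameworks use optimized implementations; this is only to make the sliding explicit). The first part reproduces the value $4$ computed above.

```python
import numpy as np

f_patch = np.array([[1, 0, 0],    # the 3x3 part of the input used above
                    [1, 1, 0],
                    [1, 1, 1]])
h = np.array([[1, 0, 1],          # the kernel
              [0, 1, 0],
              [1, 0, 1]])
print(np.sum(f_patch * h))        # -> 4, as in the example above

def cross_correlate(image, kernel):
    """Slide the kernel over the image (what CNN layers actually compute)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out
```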
In general, the kernel function $\bf h$ can be fixed. However, in the context of CNNs, the kernel $\bf h$ represents the learnable parameters of the CNN: in other words, during the training procedure (using e.g. gradient descent and back-propagation), this kernel $\bf h$ (which thus can be thought of as a matrix of weights) changes.
In the context of CNNs, there is often more than one kernel: in other words, it is often the case that a sequence of kernels $\bf h_1, h_2, \dots, h_k$ is applied to $\bf f$ to produce a sequence of convolutions $\bf g_1, g_2, \dots, g_k$. Each kernel $\bf h_i$ is used to "detect different features of the input", so these kernels are different from each other.
A down-sampling operation is an operation that reduces the input size while attempting to maintain as much information as possible. For example, if the input size is a $2 \times 2$ matrix $\bf f = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}$, a common down-sampling operation is called the max-pooling, which, in the case of $\bf f$, returns $3$ (the maximum element of $\bf f$).
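In code, max-pooling is just a maximum taken over small blocks (a sketch, not an optimized implementation):

```python
import numpy as np

f = np.array([[1, 2],
              [3, 0]])
print(f.max())                       # -> 3, the max-pooled value of this 2x2 block

g = np.arange(16).reshape(4, 4)      # the same idea with a 2x2 window on a 4x4 input
pooled = g.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled.shape)                  # (2, 2): one maximum per 2x2 block
```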
CNNs are particularly suited to deal with high-dimensional inputs (e.g. images), because, compared to FFNNs, they use a smaller number of learnable parameters (which, in the context of CNNs, are the kernels). So, they are often used to e.g. classify images.
What is the fundamental difference between RNNs and CNNs? RNNs have recurrent connections while CNNs do not necessarily have them. The fundamental operation of a CNN is the convolution operation, which is not present in a standard RNN.
nbro♦
CNN vs RNN
A CNN learns to recognize patterns across space, while an RNN is useful for solving temporal data problems.
CNNs have become the go-to method for solving any image data challenge, while RNNs are ideally suited for text and speech analysis.
In a very general way, a CNN will learn to recognize components of an image (e.g., lines, curves, etc.) and then learn to combine these components to recognize larger structures (e.g., faces, objects, etc.), while an RNN will similarly learn to recognize patterns across time. So an RNN that is trained to convert speech to text should first learn low-level features like characters, then higher-level features like phonemes, and then detect words in the audio clip.
A convolutional network (ConvNet) is made up of layers. There are basically three types of layers in a ConvNet:
Convolution layer
Pooling layer
Fully connected layer
Of these, the convolution layer applies the convolution operation to the input 3D tensor. Different filters extract different kinds of features from an image.
[Animation omitted: a green 3x3 filter (the kernel) sliding over a blue 7x7 input image.]
In a CNN, the input passes through many such filtering layers, and the output can again be a fully connected layer or a 3D tensor.
For example, the input image passes through a convolutional layer, then a pooling layer, then another convolutional layer and pooling layer; the resulting 3D tensor is then flattened into a 1D vector, passed to a fully connected layer and finally to a softmax layer. This makes up a CNN.
A recurrent neural network (RNN) is a type of neural network where the output from the previous step is fed as input to the current step.
Here, $x_{t-1}$, $x_{t}$ and $x_{t+1}$ are the inputs at specific time steps; they are fed into the RNN and pass through the hidden states $h_{t-1}$, $h_{t}$ and $h_{t+1}$, which in turn produce the outputs $o_{t-1}$, $o_{t}$ and $o_{t+1}$, respectively.
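A tiny numerical version of this unrolling (the weight-matrix names U, W, V below are illustrative choices, not notation from above): the same weights are reused at every time step, and $h_t$ carries information forward.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 8))    # input x_t -> hidden
W = rng.standard_normal((8, 8))    # hidden h_{t-1} -> hidden (recurrent connection)
V = rng.standard_normal((8, 3))    # hidden h_t -> output o_t

h = np.zeros(8)
for t, x_t in enumerate(rng.standard_normal((5, 4))):   # a length-5 input sequence
    h = np.tanh(x_t @ U + h @ W)   # h_t depends on x_t and on h_{t-1}
    o = h @ V                      # o_t, the output at time step t
    print(t, o.round(2))
```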
Aishwarya Radhakrishnan
On a basic level, an RNN is a neural network whose next state depends on its past state(s), while a CNN is a neural network that does dimensionality reduction (making large data smaller while preserving information) via convolution. See this for more info on convolutions.
k.c. sayz 'k.c sayz'
This is misleading. CNNs are not strictly used for dimensionality reduction. Furthermore, it is the down-sampling operation that reduces the dimensions of the input (not necessarily the convolution). – nbro♦ May 13 '19 at 21:58
In the case of applying both to natural language, CNNs are good at extracting local and position-invariant features, but they do not capture long-range semantic dependencies; they just consider local key phrases.
So when the result is determined by the entire sentence or by a long-range semantic dependency, a CNN is not effective, as shown in this paper where the authors compared both architectures on NLP tasks.
This can be extended to the general case.
Firas Omrane
\begin{document}
\title{PRECONDITIONING WITH
DIRECT APPROXIMATE FACTORING OF THE INVERSE} \begin{abstract} To precondition a large and sparse linear system, two direct methods for approximate factoring of the inverse are devised. The algorithms are fully parallelizable and appear to be more robust than the iterative methods suggested for the task. A method to compute one of the matrix subspaces optimally is derived. Possessing a considerable amount of flexibility, these approaches extend the approximate inverse preconditioning techniques in several natural ways. Numerical experiments are given to illustrate the performance of the preconditioners on a number of challenging benchmark linear systems. \end{abstract} \begin{keywords} preconditioning, approximate factoring, parallelizable, sparsity pattern, approximate inverse
\end{keywords}
\begin{AMS} 65F05, 65F10 \end{AMS}
\pagestyle{myheadings} \thispagestyle{plain}
\markboth{M. BYCKLING AND M. HUHTANEN }{DIRECT APPROXIMATE FACTORING}
\section{Introduction} Approximate factoring of the inverse means parallelizable algebraic techniques for preconditioning a linear system involving a large and sparse nonsingular matrix $A\in \,\mathbb{C}^{n \times n}$. The idea is to multiply $A$ by a matrix $W$ from the right (or left) with the aim at having a matrix $AW$ which can be approximated with an easily invertible matrix.\footnote{Direct methods are typically devised in this way, i.e., both the LU and QR factorization can be interpreted such that the purpose is to multiply $A$ with a matrix from the left so as to have an upper triangular, i.e., an easily invertible matrix.} As opposed to the usual paradigm of preconditioning, iterations are not expected to converge rapidly for $AW.$ Instead, the task can be interpreted as that of solving \begin{equation}\label{aivbh}
\inf_{W\in \mathcal{W},\, V\in \mathcal{V}} \left|\left| AWV^{-1}-I \right|\right|_F \end{equation} approximately by linearizing the problem appropriately \cite{HR,BHU}. Here $\mathcal{W}$ and $\mathcal{V}$ are nonsingular sparse standard matrix subspaces of $\,\mathbb{C}^{n \times n}$ with the property that the nonsingular elements of $\mathcal{V}$ are assumed to allow a rapid application of the inverse. Approximate solutions to this problem can be generated with the power method as suggested in \cite{BHU}. In this paper, direct methods are devised for approximate factoring based on solving \begin{equation}\label{aivan} \min_{W\in \mathcal{W},\; V\in \mathcal{V}}
\left|\left| AW-V \right|\right|_F \end{equation} when the columns of either $W$ or $V$ are constrained to be of fixed norm. These two approaches allow, once the matrix subspace $\mathcal{W}$ has been fixed, choosing the matrix subspace $\mathcal{V}$ in an optimal way.
The first algorithm solves \eqref{aivan} when the columns of $V$ are constrained to be of fixed norm. Then the matrix subspaces $A\mathcal{W}$ and $\mathcal{V}$ are compared as such while other properties of $A$ are largely overlooked. The second algorithm solves the problem when the columns of $W$ are constrained to be of fixed norm, allowing properties of $A$ to be taken more into account. In \cite{BHU} the approach to this end was based on approximating the smallest singular value of the map \begin{equation}\label{singva} W\longmapsto (I-P_{\mathcal{V}})AW \end{equation} from $\mathcal{W}$ to $\,\mathbb{C}^{n \times n}$ with the power iteration. Here $P_{\mathcal{V}}$ denotes the orthogonal projector on $\,\mathbb{C}^{n \times n}$ onto $\mathcal{V}$.
The second algorithm devised in this paper is a direct method for solving the task.
The algorithms proposed extend the standard approximate inverse computational techniques in several ways. (For sparse approximate inverse computations, see \cite[Section 5]{BE}, \cite{GH} and \cite[Chapter 10.5]{SA} and references therein.) Aside from possessing an abundance of degrees of freedom, we have an increased amount of optimality if we suppose the matrix subspace $\mathcal{W}$ to be given. Then computable conditions can be formulated for optimally choosing the matrix subspace $\mathcal{V}$. This is achieved without any significant increase in the computational cost. In particular, only a columnwise access to the entries of $A$ is required.\footnote{Accessing the entries of the adjoint can be costly in parallel computations.}
We aim at maximal parallelizability by solving the minimization problem \eqref{aivan} columnwise. The cost of such high parallelism is the need to have a mechanism to somehow control the conditioning of the factors. After all, parallelism means performing computations locally and independently. This, too, can be achieved without any significant increase in the computational cost.
Although the choice of the matrix subspace $\mathcal{W}$ is apparently less straightforward, some ideas are suggested to this end. Here we cannot claim achieving optimality, except that once done, thereafter $\mathcal{V}$ can be generated in an optimal way. In particular, because there are so many alternatives to generate matrix subspaces, many ideas outlined in this paper are certainly not fully developed and need to be investigated more thoroughly.
The paper is organized as follows. In Section \ref{method} two algorithms are devised for approximate factoring of the inverse. Section \ref{sec3} is concerned with ways to choose the matrix subspace $\mathcal{V}$ optimally. Related stabilization schemes are suggested. In Section \ref{Sec4} heuristic schemes are proposed for constructing the matrix subspace $\mathcal{W}$. In Section 5 numerical experiments are conducted. The toughest benchmark problems from \cite{Benzi2000} are used in the tests.
\section{Direct approximate factoring of the inverse}\label{method} \label{sec:DIAF}
In what follows, two algorithms are devised for computing matrices $W$ and $V$ to have an approximate factorization \begin{equation}\label{afac} A^{-1}\approx WV^{-1} \end{equation}
of the inverse of a given sparse nonsingular matrix $A\in \,\mathbb{C}^{n \times n}$. The factors $W$ and $V$ are assumed to belong to given sparse standard matrix subspaces $\mathcal{W}$ and $\mathcal{V}$ of $\,\mathbb{C}^{n \times n}$. A matrix subspace is said to be standard if it has a basis consisting of standard basis matrices.\footnote{Analogously to the standard basis vectors of $\,\mathbb{C}^n$, a standard basis matrix of $\,\mathbb{C}^{n \times n}$ has exactly one entry equaling one while its other entries are zeros.} This allows maximal parallelizability by the fact that then the arising computational problems can be solved columnwise independently.
Of course, parallelizability is imperative to fully exploit the processing power of modern computing architectures.
\subsection{First basic algorithm} \label{sec:qralg} Consider the minimization problem \eqref{aivan} under the assumption that the columns of $V$ are constrained to be unit vectors, i.e., of norm one. Based on the sparsity structure of $\mathcal{W}$ and the corresponding columns of $A$, the aim is at first choosing $V$ optimally. Thereafter $W$ is determined optimally.
To describe the method, denote by $w_j$ and $v_j$ the $j$th columns of $W$ and $V$. The column $v_j$ is computed first as follows. Assume there can appear $k_j\ll n$ nonzero entries in $w_j$ at prescribed positions and denote by $A_j\in \,\mathbb{C}^{n\times k_j}$ the matrix with the corresponding columns of $A$ extracted. Compute the sparse QR factorization \begin{equation}\label{spaqr} A_j=Q_jR_j \end{equation} of $A_j$. (Recall that the sparse QR-factorization is also needed in sparse approximate inverse computations.) Assume there can appear $l_j\ll n$ nonzero entries in $v_j$ at prescribed positions and denote by $M_j\in \,\mathbb{C}^{k_j\times l_j}$ the matrix with the corresponding columns of $Q_j^*$ extracted. Then $v_j$, regarded as a vector in $\,\mathbb{C}^{l_j}$, of unit norm is computed satisfying \begin{equation}\label{nocond}
\left|\left|M_jv_j\right|\right| =\left|\left|M_j\right|\right|, \end{equation} i.e., $v_j$ is chosen in such a way that its component in the column space of $A_j$ is as large as possible. This can be found by computing the singular value decomposition of $M_j$. (Its computational cost is completely marginal by the fact that $M_j$ is only a $k_j$-by-$l_j$ matrix.)
Suppose the column $v_j$ has been computed as just described for $j=1,\ldots, n$. Then solve the least squares problems \begin{equation}\label{condwj} \min_{w_j \in \,\mathbb{C}^{k_j}}
\left|\left|A_jw_j-v_j\right|\right|_2 \end{equation} to have the column $w_j$ of $W$.
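As an illustration, the columnwise computation just described admits the following sketch in Python with NumPy (a simplified dense sketch only: the index sets \texttt{w\_idx} and \texttt{v\_idx}, which encode the prescribed sparsity structures of $w_j$ and $v_j$, are hypothetical inputs, and a dense QR factorization stands in for the sparse one).
\begin{verbatim}
import numpy as np

def first_basic_column(A, w_idx, v_idx):
    # A_j: columns of A allowed by the prescribed sparsity structure of w_j.
    Aj = A[:, w_idx]                             # n-by-k_j
    Qj, _ = np.linalg.qr(Aj)                     # sparse QR in practice
    # M_j: columns of Q_j^* allowed by the sparsity structure of v_j.
    Mj = Qj.conj().T[:, v_idx]                   # k_j-by-l_j
    # Unit-norm v_j with ||M_j v_j|| = ||M_j||: leading right singular vector.
    _, _, Vh = np.linalg.svd(Mj)
    vj = np.zeros(A.shape[0], dtype=complex)
    vj[v_idx] = Vh[0].conj()
    # w_j solves the least squares problem min ||A_j w_j - v_j||_2
    # (in practice the QR factorization of A_j is reused here).
    wj = np.linalg.lstsq(Aj, vj, rcond=None)[0]
    return wj, vj
\end{verbatim}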
For each pair $v_j$ and $w_j$ of columns, the computational cost consists of computing the sparse QR factorization \eqref{spaqr} and, by using it, solving \eqref{nocond} and \eqref{condwj}. For the sparse QR factorization there are codes available \cite{DAT}. (Now $A_j$ has the special property of being very ``tall and skinny''.)
The constraint of requiring the columns of $V$ to be unit vectors is actually not a genuine constraint. That is, the method is scaling invariant from the right and thereby any nonzero constraints are acceptable in the sense that the condition \eqref{nocond} could equally well be replaced with
$\left|\left|M_jv_j\right|\right| =r_j\left|\left|M_j\right|\right|.$ Let us formulate this as follows.
\begin{theorem} Assume $A\in \,\mathbb{C}^{n\times n}$ is nonsingular. If $\mathcal{V}$ and $\mathcal{W}$ are standard matrix subspaces of $\,\mathbb{C}^{n \times n}$, then the factorization \eqref{afac} computed as described is independent of the fixed column constraints
$\left|\left|v_j\right|\right|_2=r_j >0$ for $j=1,\ldots, n$. \end{theorem}
\begin{proof} Let $W$ and $V$ be the matrices computed with the unit norm constraint for the columns of $V$. Let $\hat{W}$ and $\hat{V}$ be computed with other strict positivity constraints for the columns of $\hat{V}$, i.e., \eqref{nocond} is replaced with the condition \begin{equation}\label{repla}
\left|\left|M_jv_j\right|\right| =r_j\left|\left|M_j\right|\right|. \end{equation} Then we have $V=\hat{V}D$ and $W=\hat{W}D$ for a diagonal matrix $D$ with nonzero entries. Consequently, $WV^{-1}=\hat{W}\hat{V}^{-1}$ whenever the factors are invertible. \end{proof}
\begin{corollary}\label{opto} If a matrix $V$ solving
\begin{equation}\label{alkuper}
\min_{W\in \mathcal{W},\; V\in \mathcal{V},\, ||V||_F=1}
\left|\left| AW-V \right|\right|_F. \end{equation}
is nonsingular, then the factorization \eqref{afac} coincides with the one computed to satisfy \eqref{nocond} and \eqref{condwj}. \end{corollary}
\begin{proof} Suppose $W$ and $V$ solve \eqref{alkuper}. Since $V$ is
invertible, we have $\left|\left|v_j\right|\right|_2=r_j>0$.
Using these constraints, compute $\hat{W}$ and $\hat{V}$ to satisfy \eqref{repla} and \eqref{condwj}.
This means solving \eqref{alkuper} columnwise and thereby the corresponding factorizations coincide. \end{proof}
It is instructive to see how the computation of an approximate inverse relates with this. (For sparse approximate inverses and their historical development, see \cite[Section 5]{BE}.)
\begin{example} In approximate inverse computations, the matrix subspace $\mathcal{V}$ is as simple as possible, i.e.,
the set of diagonal matrices. Regarding the constraints, the columns are constrained to be unit vectors. Therefore one can replace $\mathcal{V}$ with the identity matrix, as is customary. See also Example \ref{exai} below. \end{example}
\subsection{Second basic algorithm} \label{sec:svdalg} Consider the minimization problem \eqref{aivan} under the assumption that the columns of $W$ are constrained to be unit vectors instead.
Based on the sparsity structure of $\mathcal{W}$ and the corresponding columns of $A$, the aim now is at first choosing $W$ optimally. Thereafter $V$ is determined optimally. The resulting scheme yields a direct analogue of the power method suggested in \cite{BHU}. However, the method proposed here has at least three advantages. First, being direct, it seems to be more robust since there is no need to tune parameters used in the power method. Second, the Hermitian transpose of $A$ is not needed. Third, the computational cost is readily predictable by the fact that, in essence, we only need to compute sparse QR factorizations.
To describe the method, denote by $w_j$ and $v_j$ the $j$th columns of $W$ and $V$. The column $w_j$ is computed first as follows. Assume there can appear $k_j\ll n$ nonzero entries in $w_j$ at prescribed positions and denote by $A_j\in \,\mathbb{C}^{n\times k_j}$ the matrix with the corresponding columns of $A$ extracted. Assume there can appear $l_j\ll n$ nonzero entries in $v_j$ at prescribed positions and denote by $\hat{A}_j\in \,\mathbb{C}^{(n-l_j)\times k_j}$ the matrix with the corresponding rows of $A_j$ removed. Then take $w_j$ to be a right singular vector corresponding to the smallest singular value of $\hat{A}_j$.
To have $w_j$ inexpensively, compute the sparse QR factorization $$\hat{A}_j=Q_jR_j$$ of $\hat{A}_j$. Then compute the singular value decomposition of $R_j$. Of course, its computational cost is completely negligible. (However, do not form the arising product to have the SVD of $\hat{A}_j$ explicitly.) Then take $w_j$ to be a right singular vector of $R_j$ corresponding to its smallest singular value.
Suppose the column $w_j$ has been computed as just described for $j=1,\ldots, n$. Then, to have the columns of $V$, set $$V=P_{\mathcal{V}}A[w_1\cdots w_n],$$ i.e., nonzero entries are accepted only in the allowed sparsity structure of $v_j$.
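Analogously, a simplified dense sketch of the columnwise computation of the second basic algorithm reads as follows (again \texttt{w\_idx} and \texttt{v\_idx} are hypothetical index sets encoding the prescribed sparsity structures; in practice the QR factorization is sparse).
\begin{verbatim}
import numpy as np

def second_basic_column(A, w_idx, v_idx):
    n = A.shape[0]
    Aj = A[:, w_idx]                              # n-by-k_j
    # \hat{A}_j: remove the rows where v_j is allowed to be nonzero.
    keep = np.setdiff1d(np.arange(n), v_idx)
    Ahat = Aj[keep, :]                            # (n-l_j)-by-k_j
    # Right singular vector for the smallest singular value, via the R factor.
    _, Rj = np.linalg.qr(Ahat, mode='reduced')
    _, _, Vh = np.linalg.svd(Rj)
    wj = Vh[-1].conj()                            # unit-norm column of W
    # v_j keeps only the allowed entries of A_j w_j.
    vj = np.zeros(n, dtype=complex)
    vj[v_idx] = (Aj @ wj)[v_idx]
    return wj, vj
\end{verbatim}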
For an analogue of Corollary \ref{opto}, assume a matrix corresponding to the smallest singular value of the linear map \eqref{singva} is nonsingular. Since $\mathcal{W}$ is a standard matrix subspace, the computations can be performed columnwise. The resulting $W$ can be chosen to coincide, once divided by $\sqrt{n}$, with this matrix.
\subsection{Some general remarks}\label{remarks} In approximate inverse preconditioning, it is well-known that it can make a difference whether one computes a right or left approximate inverse \cite[pp. 449--450]{BE}. As we have generalized this technique, this is the case with the approximate factoring of the inverse also. Here we have considered only preconditioning from the right.
The usage of standard matrix subspaces leads to maximal parallelizability. In view of approximating the inverse, this means that computations are done locally (columnwise) and independently, i.e., without any global control. To compensate for this, with an eye to improve the conditioning of the factors, it seems advisable to impose additional constraints. This is considered in Section \ref{sec3}.
The simultaneous (somehow optimal) choice of the matrix subspaces $\mathcal{W}$ and $\mathcal{V}$ is a delicate matter. In \cite{BHU} we gave a rule of thumb according to which the sparsity structures of the matrix subspaces should differ as much as possible in approximate factoring of the inverse. (This automatically holds in computing approximate inverses and ILU factorizations.)
Numerical experiments seem to support this. Although we do not quite understand the reasons for this, it is partially related to the fact that then there are very few redundancies in the factorizations \eqref{afac} as follows.
\begin{proposition} Let $\mathcal{V}$ and $\mathcal{W}$ be standard nonsingular matrix subspaces of $\,\mathbb{C}^{n \times n}$ containing the identity. If in the complement of the diagonal matrices the intersection of $\mathcal{V}$ and $\mathcal{W}$ is empty, then the maximum rank of the map \begin{equation}\label{tulo} (V,W)\longmapsto WV^{-1} \end{equation} on $\mathcal{V}\times \mathcal{W}\cap {\rm GL}(n,\,\mathbb{C})$ is $\dim \mathcal{V}+\dim \mathcal{W}-n.$ \end{proposition}
\begin{proof} Linearize the map \eqref{tulo} at $(\hat{V},\hat{W})$ for both $\hat{V}$ and $\hat{W}$ invertible. Using the Neumann series yields the linear term $$\hat{W}(\hat{W}^{-1}W-\hat{V}^{-1}V)\hat{V}^{-1}.$$ At $(\hat{V},\hat{W})=(I,I)$ the rank is $\dim \mathcal{V}+\dim \mathcal{W}-n.$ It is the maximum by the fact that for any nonsingular diagonal matrix $D$ we have $(VD,WD)\longmapsto WV^{-1}$, i.e., the map \eqref{tulo} can be regarded as a function of $\dim \mathcal{V}+\dim \mathcal{W}-n$ variables. \end{proof}
Aside from this basic principle, more refined techniques are devised for simultaneously choosing the matrix subspaces $\mathcal{W}$ and $\mathcal{V}$ in the sections that follow. Most notably, optimal ways of choosing $\mathcal{V}$ are devised.
\section{Optimal construction of the matrix subspace $\mathcal{V}$ and imposing constraints}\label{sec3} For the basic algorithms introduced, a method for optimally choosing the matrix subspace $\mathcal{V}$ is devised under the assumption that the matrix subspace $\mathcal{W}$ has been given.
Moreover, mechanisms are introduced into the basic algorithms that allow stabilizing the scheme for better conditioned factors. (In approximate inverse preconditioning the latter task is accomplished in the simplest possible way: the subspace $\mathcal{V}$ is simply $\,\mathbb{C} I$, i.e., scalar multiples of the identity.)
\subsection{Optimally constructing the matrix subspace $\mathcal{V}$} Suppose the matrix subspace $\mathcal{W}$ has been given. Then the condition \eqref{nocond} yields a columnwise criterion for optimally choosing the sparsity structure of the matrix subspace $\mathcal{V}$. (Recall that it must be assumed that the nonsingular elements of $\mathcal{V}$ allow a rapid application of the inverse.) Once done, proceed by using one of the basic algorithms to compute the factors.
Consider \eqref{nocond}. It is beneficial to choose the sparsity structure of $v_{j}$ in such a way that the norm of $M_j$ is as large as possible, with the constraint that in the resulting $\mathcal{V}$ the nonsingular elements are readily invertible. In other words, among admissible columns of $Q_j^*$, take $l_j$ columns which yields $M_j$ with the maximal norm. This means that for the optimization problem \eqref{aivan}, with a fixed matrix subspace $\mathcal{W}$, the matrix subspace $\mathcal{V}$ is constructed in an optimal way.
Certainly, the problem of choosing $l_j$ columns to maximize the norm is combinatorial and thereby rapidly finding a solution does not appear to be straightforward. A suboptimal choice for the matrix $M_j$ can be readily generated by taking $l_j$ admissible columns of $Q_j^*$ with largest norms. When done with respect to the Euclidean norm, the Frobenius norm of the submatrix is maximized instead. This can be argued, of course, by the fact that
$$\frac{1}{\sqrt{\max \{k_j,l_j\}}}\left|\left| M_j \right| \right|_F\leq
\left|\left| M_j \right| \right|\leq \left|\left| M_j \right| \right|_F$$ holds.
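The suboptimal choice just described amounts to ranking the admissible columns of $Q_j^*$ by their Euclidean norms; a minimal sketch (with \texttt{admissible} a hypothetical list of admissible column indices) is as follows.
\begin{verbatim}
import numpy as np

def suboptimal_v_structure(Qj, admissible, lj):
    # Columns of Q_j^* restricted to the admissible positions.
    QjH = Qj.conj().T[:, admissible]
    # Rank the admissible columns by their Euclidean norms and keep the
    # l_j largest, which maximizes the Frobenius norm of the resulting M_j.
    norms = np.linalg.norm(QjH, axis=0)
    order = np.argsort(norms)[::-1][:lj]
    return np.sort(np.asarray(admissible)[order])
\end{verbatim}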
This approach starts with $\mathcal{W}$ and then yields $\mathcal{V}$ (sub)optimally. This process can be used to assess how $\mathcal{W}$ was initially chosen. Let us illustrate this with the following example.
\begin{example} The choice of upper (lower) triangular matrices for $\mathcal{V}$ has the advantage that then we have a warning signal in case $\mathcal{W}$ is poorly chosen. Namely, suppose $\mathcal{V}$ has been (sub)optimally constructed as just described. If the factor $V$ computed to satisfy \eqref{aivan}
is poorly conditioned, one should consider updating the sparsity structure of $\mathcal{W}$ to have a matrix subspace which is better suited for approximate factoring of the inverse of $A$.\footnote{This is actually the case in the (numerically) exact factoring: To recover whether a matrix $A\in \,\mathbb{C}^{n \times n}$ is nonsingular, it is advisable to compute its partially pivoted LU factorization, i.e., use a numerically reliable algorithm.} \end{example}
In this optimization scheme, let us illustrate how the matrix subspace $\mathcal{W}$ actually could be poorly chosen.
Namely, the way the above optimization scheme is set up means that the sparsity structure of $\mathcal{W}$ should be such that no two columns share the same sparsity structure. (Otherwise $V$ will have identical columns.) Of course, this may be too restrictive. In the section that follows, a way to circumvent this problem is devised by stabilization.
\subsection{Optimizing under additional constraints} There are instances which require imposing additional constraints in computing the factors.
Aside from the problems described above, in tough problems the approximate factors may be poorly conditioned or even singular.\footnote{This is a well-known phenomenon in preconditioning. For ILU factorization there are many ways to stabilize the computations \cite{BE}. Stabilization has turned out to be indispensable in practice.} Because there holds \begin{equation} \label{eqn:minbound}
\frac{\left|\left|AWV^{-1}-I \right| \right| }{
\left|\left| V^{-1}\right| \right|}
\leq \left|\left| AW-V\right| \right|
\leq \left|\left| AWV^{-1}-I\right| \right| \left|\left| V\right| \right|, \end{equation} this certainly cannot be overlooked. To overcome this, it is advisable to stabilize the computations by appropriately modifying the optimality conditions in computing the factors.
For the first basic algorithm this means a refined computation of $V$. Thereafter the factor $W$ is computed columnwise as before to satisfy the conditions \eqref{condwj}.
For a case in which the conditioning is readily controlled, consider a matrix subspace $\mathcal{V}$ belonging to the set of upper (or lower) triangular matrices. Then, suppose the $j$th column $v_j$ computed to satisfy \eqref{nocond} results in a tiny $j$th component. To stabilize the computations for the first basic algorithm, we replace $v_j$ by first imposing the $j$th component of $v_j$ to equal a constant $r_j>0$. For the remaining components, let $\hat{M}_j$ be a submatrix consisting of the $l_j-1$ largest columns of $Q_j^*$ among its first $j-1$ columns. Denote the $j$th column of $Q_j^*$ by $p_j$. Then consider the optimization problem \begin{equation}\label{misa}
\max_{||\hat{v}_j||_2=1}\left| \left| r_jp_j+\hat{M}_j\hat{v}_j\right| \right|_2. \end{equation} By invoking the singular value decomposition $\hat{M}_j= \hat{U}_j\hat{\Sigma}_j\hat{V}_j^*$ of $\hat{M}_j$, this is equivalent to solving \begin{equation}\label{misas}
\max_{||\hat{v}_j||_2=1}\left| \left| r_j\tilde{p}_j+\hat{\Sigma}_j
\tilde{v}_j\right| \right|_2, \end{equation} where $\tilde{p}_j=\hat{U}_j^*p_j$ and $\tilde{v}_j=\hat{V}_j^*\hat{v}_j$. Consequently, choose $\tilde{v}_j=(e^{i \theta},0,0,\ldots,0)$, where $\theta$ is the argument of the first component of $\tilde{p}_j$. (If the first component is zero, then any $\theta$ will do.)
Set the column $v_j$ to be the sum of $r_je_j$ and the vector obtained after putting the entries of $\hat{V}_j\tilde{v}_j$ at the positions where the corresponding $l_j-1$ largest columns of $Q_j^*$ appeared. (Here $e_j$ denotes the $j$th standard basis vector of $\,\mathbb{C}^n$.)
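A sketch of this stabilized computation of $v_j$ is given below (dense and illustrative only; \texttt{Qj} is the factor from \eqref{spaqr}, the column index \texttt{j} is zero-based, and it is assumed that $k_j\geq l_j-1$ so that the factor $\hat{V}_j$ is square).
\begin{verbatim}
import numpy as np

def stabilized_vj(Qj, j, lj, rj, n):
    # Stabilized column for an upper triangular V: the diagonal entry is
    # forced to r_j and the remaining entries solve the problem (misa).
    QjH = Qj.conj().T
    pj = QjH[:, j]
    # l_j - 1 largest columns of Q_j^* among its first j - 1 columns
    # (zero-based: the positions strictly above the diagonal position j).
    norms = np.linalg.norm(QjH[:, :j], axis=0)
    cols = np.argsort(norms)[::-1][:lj - 1]
    Uh, _, Vh = np.linalg.svd(QjH[:, cols], full_matrices=False)
    # theta is the argument of the first component of U^* p_j; the maximizer
    # is e^{i theta} times the leading right singular vector.
    theta = np.angle(Uh.conj().T @ pj)[0]
    vhat = np.exp(1j * theta) * Vh.conj().T[:, 0]
    vj = np.zeros(n, dtype=complex)
    vj[j] = rj
    vj[cols] = vhat
    return vj
\end{verbatim}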
Observe that the solution does not depend on the value of $r_j>0$. In particular, it is not clear how large $r_j$ should be.
Again it is instructive to contrast this with the approximate inverse computations.
\begin{example}\label{exai} The sparse approximate inverse computations yield the simplest case of imposing additional constraints as just described. That is, the sparse approximate inverse computations can be interpreted as having $l_j=1$ for every column,
combined with imposing $r_j=1$. \end{example}
The LU factorization and thereby triangular matrices are extensively used in preconditioning. Because the LU factorization without pivoting is unstable, some kind of stabilization is needed. It is clear that the QR factorization also gives reasons to look at triangular matrices. The approach differs from that of using the LU factorization in that its computation does not require a stabilization, i.e., nothing like partial pivoting is needed. Of course, our intention is not to propose computing the full QR factorization. Understanding the Q factor is critical as follows.
\begin{example}\label{spqr} The QR factorization $A^*=QR$ of the Hermitian transpose of $A$ can be used as a starting point to construct matrix subspaces for approximate factoring of the inverse. Namely, we have $AQ=R^*$. Therefore $\mathcal{V}$ belonging to the set of lower triangular matrices is a natural choice. For $\mathcal{W}$ one needs to generate an approximation to the sparsity structure of $Q$. For this there are many alternatives. \end{example}
Aside from upper (lower) triangular matrices, there are, of course, completely different alternatives. Consider, for example, choosing $V$ among diagonally dominant matrices. Since the set of diagonally dominant matrices is not a matrix subspace, dealing with this structure requires using constraints. It is easy to see that the problem can be tackled completely analogously, by imposing $r>1$ to hold for every diagonal entry. Thereafter \eqref{misa} is solved to obtain the other components in the column. The inversion of $V$ can be performed by simple algorithms such as the Gauss-Seidel method.
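For instance, the application of $V^{-1}$ by Gauss-Seidel sweeps can be sketched as follows (a minimal dense sketch; in practice $V$ is sparse, and the sweeps converge, e.g., when $V$ is strictly diagonally dominant).
\begin{verbatim}
import numpy as np

def gauss_seidel_solve(V, b, sweeps=10):
    # A few forward Gauss-Seidel sweeps approximating x = V^{-1} b.
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for _ in range(sweeps):
        for i in range(n):
            s = V[i, :i] @ x[:i] + V[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / V[i, i]
    return x
\end{verbatim}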
\section{Constructing the matrix subspace $\mathcal{W}$}\label{Sec4} Optimally constructing the matrix subspace $\mathcal{W}$ for approximate factoring of the inverse appears challenging. Some ideas are suggested in what follows, although no claims concerning the optimality are made. We suggest starting the process by taking an initial standard matrix subspace $\mathcal{V}_0$ which precedes the actual $\mathcal{V}$. Once $\mathcal{W}$ has been constructed, $\mathcal{V}_0$ should be replaced with $\mathcal{V}$ computed with the techniques introduced in Section \ref{sec3}.
\subsection{The Neumann series constructions} \label{sec:wcalheur} For approximate inverse computations
the selection of an a-priori sparsity pattern is a well-known problem \cite{Chow2000, Benzi1997}. Good sparsity patterns are, at least in some cases, related to the transitive closures
of subsets of the connectivity graph $G(A)$ of $A$. This can also be interpreted as computing level set expansions on the vertices of a sparsified $G(A)$.
In \cite{Chow2000} numerical dropping is used to sparsify $G(A)$ or its level set expansions. Denote by $v\in \ensuremath{\mathbb{C}}^n$ a vector with entries $v_j$. To select the relatively large entries of $v$ numerically, entries are dropped by relative tolerance $\tau$ and by count $p$, i.e., only those entries of $v$ that are relatively large are stored, with at most the $p$ largest entries retained. (Note that the diagonal elements are not subjected to numerical dropping.) In what follows, these rules are referred to as numerical dropping by tolerance and count.
The dropping can be performed on an initial matrix or during the intermediate phases of the level set expansion. Thus we have two sets of parameters $(\tau_i,p_i)$ controlling the initial sparsification and $(\tau_l,p_l)$ controlling the sparsification during level set expansion. In addition, we adopt the convention that setting any parameter as zero implies that the dropping parameter is not used.
With these preparations for approximate factoring of the inverse, take an initial standard matrix subspace $\mathcal{V}_0$ and consider generating a sparsity pattern for $\ensuremath{\mathcal{W}}$.
Assuming $V_0=P_{\ensuremath{\mathcal{V}}_0}A\in \ensuremath{\mathcal{V}}_0$ is invertible, we have \begin{equation*}
A=V_0(I-V_0^{-1}(I-P_{\ensuremath{\mathcal{V}}_0})A)=V_0(I-S). \end{equation*} Whenever $\knorm{S}{}<1$, there holds $A^{-1}=(I+\sum_{j=1}^\infty S^j)V_0^{-1}=WV_0^{-1}$ by invoking the Neumann series. Therefore then \begin{equation}
\label{eqn:factspar}
W=I+\sum_{j=1}^\infty S^j. \end{equation} Although the assumption $\knorm{S}{}<1$ is generally too strict in practice, we may formally truncate the series \eqref{eqn:factspar}
to generate a sparsity pattern.
To make this economical and to retain $\mathcal{W}$ sparse enough, compute powers of $S$ only approximately by using sparse-sparse operations combined with numerical dropping and level of fill techniques.
Observe that, to operate with the series \eqref{eqn:factspar} we need $S=V_0^{-1}(I-P_{\ensuremath{\mathcal{V}}_0})A$. It is this which requires setting an initial standard matrix subspace $\ensuremath{\mathcal{V}}_0$.
\begin{example} For $S=V_0^{-1}(I-P_{\ensuremath{\mathcal{V}}_0})A$ we need to set an initial standard matrix subspace. The most inexpensive alternative is to take $\mathcal{V}_0$ to be the set of diagonal matrices.
Then $V_0=P_{\ensuremath{\mathcal{V}}_0}A$ is immediately found. \end{example}
There are certainly other inexpensive alternatives for $\ensuremath{\mathcal{V}}_0$, such as block diagonal matrices. Once fixed, thereafter the scheme can be given as Algorithm \ref{alg:factspar} below.
\begin{algorithm}[t]
\caption{Sparsified powers for constructing $\ensuremath{\mathcal{W}}$}
\label{alg:factspar}
\begin{algorithmic}[1]
\State Set a truncation parameter $k$
\State Compute $V_0^{-1}$
\State Compute $S=V_0^{-1}(I-P_{\ensuremath{\mathcal{V}}_0})A$
\State Apply numerical dropping by tolerance and count to columns of $S$
\For{{\bf columns} $j$ {\bf in parallel}}
\State Set $s_j=t_j=e_j$
\For{$l=1,\ldots,k$}
\State Compute $t_j=St_j$
\State Apply numerical dropping by tolerance and count to $t_j$
\State Compute $s_j=s_j+t_j$
\EndFor
\State Set sparsity structure of $w_j$ to be the sparsity structure of $s_j$
\EndFor
\State Set $\ensuremath{\mathcal{W}}=\ensuremath{\mathcal{W}}\setminus\{\ensuremath{\mathcal{V}}_0\setminus \mathcal{I}\}$
\end{algorithmic} \end{algorithm}
Note that the final step of Algorithm \ref{alg:factspar} is to keep the intersection of $\ensuremath{\mathcal{W}}$ and $\ensuremath{\mathcal{V}}_0$ empty apart from the diagonal; see Section \ref{remarks}. After the sparsity structure for a matrix subspace $\ensuremath{\mathcal{W}}$ has been generated, the sparsity structure of $\ensuremath{\mathcal{V}}_0$ can be updated to be $\mathcal{V}$ by using $\ensuremath{\mathcal{W}}$.
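For illustration, Algorithm \ref{alg:factspar} can be sketched in Python with SciPy as follows (a simplified sketch assuming $\mathcal{V}_0$ is the set of diagonal matrices, so that $V_0$ is the diagonal part of $A$ and is nonzero after the preprocessing; the dropping rules are the tolerance and count rules described above, and the final removal of the $\mathcal{V}_0$ pattern is omitted).
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def neumann_pattern(A, k=3, tau=1e-1, p=10):
    # Sparsity pattern of W from a truncated, sparsified Neumann series,
    # with V0 taken to be the diagonal part of A; tau and p are the drop
    # tolerance and the maximal fill per column.
    A = sp.csc_matrix(A)
    n = A.shape[0]
    d = A.diagonal()
    S = sp.diags(1.0 / d) @ (A - sp.diags(d))   # S = V0^{-1}(I - P_V0)A
    pattern = []
    for j in range(n):                          # columns are independent
        t = np.zeros(n); t[j] = 1.0
        s = t.copy()
        for _ in range(k):
            t = S @ t
            # numerical dropping by tolerance and by count
            if np.abs(t).max() > 0:
                t[np.abs(t) < tau * np.abs(t).max()] = 0.0
            if np.count_nonzero(t) > p:
                t[np.argsort(np.abs(t))[:-p]] = 0.0
            s = s + t
        pattern.append(np.nonzero(s)[0])
    return pattern
\end{verbatim}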
\subsection{Algebraic constructions} Next we consider some purely algebraic arguments
which might be of use in constructing $\mathcal{W}$. Again start with an initial standard matrix subspace $\mathcal{V}_0$. Take the sparsity structure of the $j$th column of $\mathcal{V}_0$ and consider the corresponding rows of $A\in \,\mathbb{C}^{n \times n}$. Choose the sparsity structure of the $j$th column of $\mathcal{W}$ to be the union of the sparsity structures of these rows. This is a necessary (but not sufficient) condition for $A\mathcal{W}$ to have an intersection with $\mathcal{V}_0$. This simply means choosing $\mathcal{W}$ to have the sparsity structure of $A^*\mathcal{V}_0$.
Most notably, the process is very inexpensive and can be executed in parallel. One only needs to control that the columns of $\mathcal{W}$ remain sufficiently sparse. With probability one, the following algorithm yields the desired sparsity structure.
\begin{algorithm}[H] \caption{Computing a sparsity structure for $\mathcal{W}$} \label{algor} \begin{algorithmic}[2] \Require A sparse matrix $A \in \,\mathbb{C}^{n \times n}$ and a random column $v_j \in \mathcal{V}_0$. \Ensure Sparsity structure of the column $w_j$. \State Compute $w=A^*v_j$ \If{$w$ is not sparse enough} \State Sparsify $w$ to have the sparsity structure of $w_j$. \EndIf \State Take the sparsity structure of $w_j$ to be the sparsity structure of $w$. \end{algorithmic} \end{algorithm}
Observe that we do not have $A^*\mathcal{V}_0=\mathcal{W}$ since the computation is concerned with sparsity structures.
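A sketch of Algorithm \ref{algor} applied column by column is given below (illustrative only; \texttt{v\_pattern} is a hypothetical list giving, for each column of $\mathcal{V}_0$, its allowed row positions, and \texttt{max\_nnz} caps the fill in the sparsification step).
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def algebraic_w_pattern(A, v_pattern, max_nnz=20):
    # For each column j, the sparsity structure of w_j is the union of the
    # sparsity structures of the rows of A selected by the j-th column of
    # V_0; a random v_j supported on those rows recovers this union with
    # probability one.
    A = sp.csr_matrix(A)
    rng = np.random.default_rng(0)
    w_pattern = []
    for rows in v_pattern:
        v = np.zeros(A.shape[0])
        v[rows] = rng.standard_normal(len(rows))
        w = A.conj().T @ v
        idx = np.nonzero(w)[0]
        if len(idx) > max_nnz:                  # keep w_j sparse enough
            idx = idx[np.argsort(np.abs(w[idx]))[::-1][:max_nnz]]
        w_pattern.append(np.sort(idx))
    return w_pattern
\end{verbatim}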
Approximate inverse preconditioning corresponds to choosing $\mathcal{V}_0$ to be the set of diagonal matrices. Then the sparsity structure of $\mathcal{W}$ equals that of $A^*$. The following two examples illustrate two extreme cases of this choice.
\begin{example}\label{teje} Take $\mathcal{V}_0$ to be the set of diagonal matrices. Then the first basic algorithm reduces to the approximate inverse preconditioning. Algorithm \ref{algor} yields now a standard matrix subspace $\mathcal{W}$ whose sparsity structure equals that of $A^*$. This can yield very good results. If $A$ has orthogonal rows (equivalently, columns)
then and only then this gives exactly a correct matrix subspace $\mathcal{W}$ for factoring the inverse of $A$ as $AWV^{-1}=I$
when $\mathcal{V}$ is taken to be $\mathcal{V}_0$.\footnote{In view of this, it seems like a natural problem to ask, how well $A$ can be approximated with matrices of the form $DU$ with $D$ diagonal and $U$ unitary.} \end{example}
Having identified an ideal structure for the approximate inverse preconditioning when $\mathcal{W}$ is constructed with Algorithm \ref{algor}, how about when $A$ is far from being a scaled unitary matrix? An upper (lower) triangular matrix is a scaled unitary matrix only when it reduces to a diagonal matrix.
\begin{example}\label{teje2} Take again $\mathcal{V}_0$ to be the set of diagonal matrices. Then the basic algorithm reduces to the approximate inverse preconditioning. Algorithm \ref{algor} yields a standard matrix subspace $\mathcal{W}$ whose sparsity structure equals that of $A^*$. This yields very poor results if $A$ is an upper (lower) triangular matrix. Namely, then its inverse is also upper (lower) triangular. \end{example}
Algorithm \ref{algor} is set up in such a way that if $\mathcal{V}_0\subset \tilde{\mathcal{V}_0}$, then $\mathcal{W}\subset \tilde{\mathcal{W}}$. Thereby matrix subspaces can be constructed to handle the two extremes of Examples \ref{teje} and \ref{teje2} simultaneously.
In practice $\mathcal{V}_0$ should be more complex, i.e., the set of diagonal matrices is a too simple structure. One option is to start with $\mathcal{V}_0$ having the sparsity structure of the Gauss-Seidel preconditioner.
\begin{definition} A standard matrix subspace $\mathcal{V}$ of $\,\mathbb{C}^{n \times n}$ is said to have the sparsity structure of the Gauss-Seidel preconditioner of $A\in \,\mathbb{C}^{n \times n}$ if the nonzero entries in $\mathcal{V}$ appear on the diagonal and at the positions where the strictly lower (upper) triangular part of $A$ has nonzero entries. \end{definition}
\section{Numerical experiments}
The purpose of this final section is to illustrate, with the help of four numerical experiments, how the preconditioners devised in Sections \ref{sec:DIAF} and \ref{sec3} perform in practice. Since there is an abundance of degrees of freedom to construct matrix subspaces for approximate factoring of the inverse, only a very incomplete set of experiments can be presented. In particular, we feel that there is a lot of room for new ideas and improvements.
In choosing the benchmark sparse linear systems, we used the University of Florida collection \cite{Davis2011}.
The problems were selected to be the most challenging ones to precondition among those tested in \cite{Benzi2000}. For the matrices used and some of their properties, see Table \ref{tab:testprob_small}. Assuming the reader has access to \cite{Benzi2000}, the comparison between the methods proposed here and the diagonal Jacobi preconditioning, {\rm ILUT}$(0)$, {\rm ILUT}$(1)$, {\rm ILUT}\ and {\rm AINV}\ can be readily made. For a comparison between ILUs and AINV, see, e.g., \cite{BOS}.
\begin{table}[t] \begin{center}
\begin{tabular}{|*{5}{c|}}
\hline
Problem & Area & $n$ & $\nz{A}$ & $k_1=\nz{A}/n$ \\
\hline
west1505 & Chemical engineering & 1505 & 5414 & 3.6 \\
west2021 & Chemical engineering & 2021 & 7310 & 3.62 \\
lhr02 & Chemical engineering & 2954 & 36875 & 12.5 \\
bayer10 & Chemical engineering & 13436 & 71594 & 5.33 \\
sherman2 & PDE & 1080 & 23094 & 21.4 \\
gemat11 & Linear programming & 4929 & 33108 & 6.72 \\
gemat12 & Linear programming & 4929 & 33044 & 6.7 \\
utm5940 & PDE & 5940 & 83842 & 14.1 \\
e20r1000 & PDE & 4241 & 131430 & 31 \\
\hline
\end{tabular}
\caption{Matrices of the experiments, their application area, size,
number of nonzeros and density.}
\label{tab:testprob_small} \end{center} \end{table}
Regarding preprocessing, in each experiment the original matrix has been initially permuted to have nonzero diagonal entries and scaled with {\tt MC64}. (See \cite{Duff2001} for {\tt MC64}.) It is desirable that the matrix subspace $\ensuremath{\mathcal{V}}$ contains hierarchically connected parts of the graph of the matrix. To this end we use an approach to find the strongly connected subgraphs of the matrix; see Duff and Kaya \cite{Duff2011}. We then obtain a permutation $P$ such that after the permutation, the resulting linear system can be split as \begin{equation}\label{splitti} Ax=(L+D+U)x=b, \end{equation} where $L^T$ and $U$ are strictly block upper triangular and $D$ is a block diagonal matrix. The construction of this permutation consumes at most $\Oh{n\log{(n)}}$ operations.\footnote{Preprocessing is
actually a part of the process of constructing the matrix subspaces
$\mathcal{W}$ and $\mathcal{V}$. That is, it is insignificant
whether one orders correspondingly the entries of the matrix or the
matrix subspaces.}
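The permutation and splitting \eqref{splitti} can be sketched as follows (illustrative only: the {\tt MC64} permutation and scaling are assumed to have been applied already, and ordering the strongly connected components simply by their labels may still have to be replaced by a topological order of the condensation to make the off-diagonal parts strictly block triangular).
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

def block_split(A):
    # Permute by strongly connected components and split A = L + D + U,
    # where D collects the entries whose row and column lie in the same
    # component.
    A = sp.csr_matrix(A)
    _, labels = connected_components(A, directed=True, connection='strong')
    perm = np.argsort(labels, kind='stable')
    Ap = A[perm][:, perm]
    lp = labels[perm]
    same = lp[:, None] == lp[None, :]           # dense mask, for clarity only
    D = Ap.multiply(sp.csr_matrix(same.astype(float)))
    R = Ap - D
    return sp.tril(R), D, sp.triu(R), perm
\end{verbatim}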
In the experiments, the right-hand side $b \in \,\mathbb{C}^n$ in \eqref{splitti} was chosen in such a way that the solution of the original linear system was always $x=(1,1,\ldots,1)$. As in \cite{Benzi2000}, as a linear solver we used {\rm BiCGSTAB}\ \cite{vanVorst1992}. The iteration was considered converged when the initial residual had been reduced by eight orders of magnitude.
The numerical experiments were carried out with {\rm Matlab}\footnote{Version R2010a.}.
\begin{example} \label{ex:paifdiaf1}
We compare the minimization algorithm presented in \cite{BHU} ({\rm PAIF}) with the QR factorization based minimization algorithm of Section \ref{sec:qralg} ({\rm DIAF}-Q). We construct $\ensuremath{\mathcal{W}}$ with the heuristic Algorithm \ref{alg:factspar} of Section \ref{sec:wcalheur}. For all test matrices, we use $k=3$ and $\tau_i=1E-1$, $p_i=0$, $\tau_l=0$ and $p_l=0$, as parameters. For {\rm PAIF}, $80$ refinement iterations were always used, which is somewhat more than what we have found to be necessary in practice. However, we want to be sure that the comparison is descriptive in terms of the quality of the preconditioner.
We choose $\ensuremath{\mathcal{V}}$ to be the subspace of block diagonal matrices with block bounds and sparsity structure chosen according to the block diagonal part of $A$, i.e., the matrix $D$ in \eqref{splitti}. Then in the heuristic construction of $\ensuremath{\mathcal{W}}$ with Algorithm \ref{alg:factspar}, $V_0$ is taken to be a diagonal matrix.
We denote by $|D_j|_M$ the maximum blocksize of $\ensuremath{\mathcal{V}}$ and by \#$D_j$ the number of blocks in $\ensuremath{\mathcal{V}}$ in total. Density of the preconditioner, denoted by $\rho$, is computed as $\rho=(\nz{W}+\nz{L_{V}}+ \nz{U_{V}})/\nz{A}$, where $\nz{A}$, $\nz{W}$, $\nz{L_{V}}$ and $\nz{U_{V}}$ denote the number of nonzeroes in $A$, $W$ and the LU decomposition of $V$. For both {\rm PAIF}\ and {\rm DIAF}-Q, we also compute the condition number estimate $\kappa(V)$ and the norm of the minimizer $\knorm{AW-V}{F}$, denoted by nrm. Finally, its denotes the number of {\rm BiCGSTAB}\ iterations. By $\dagger$ we denote that {\rm BiCGSTAB}\ did not converge within $1000$ iterations. Breakdown of {\rm BiCGSTAB}\ is denoted by $\ddag$. Table \ref{tab:paifdiaf1_res} shows the results. \begin{table}[H]
\begin{center}
\begin{footnotesize}
\begin{tabular}{|*{10}{c|}}
\hline
\multicolumn{4}{|c|}{}& \multicolumn{3}{l|}{{\rm PAIF}}& \multicolumn{3}{l|}{{\rm DIAF}-Q} \\
Problem & $|D_j|_M$ & \#$D_j$ & $\rho$ & $\kappa(V)$ & nrm & its & $\kappa(V)$ & nrm & its \\
\hline
west1505 & 50 & 34 & 2.75 & 3.17E+04 & 3.59 & 18 & 1.85E+03 & 3.49 & 18 \\
west2021 & 50 & 47 & 2.69 & 5.53E+03 & 3.84 & 23 & 3.33E+03 & 3.53 & 26 \\
lhr02 & 50 & 66 & 1.11 & 1.65E+03 & 6.69 & 24 & 9.05E+02 & 7.01 & 32 \\
bayer10 & 250 & 67 & 2.56 & 8.00E+05 & 22.27 & 56 & 2.50E+05 & 14.27 & 36 \\
sherman2 & 50 & 24 & 1.05 & 4.32E+02 & 2.45 & 5 & 3.77E+02 & 1.84 & 5 \\
gemat11 & 50 & 115 & 1.91 & 2.28E+05 & 3.58 & 109 & 1.50E+05 & 2.79 & 68 \\
gemat12 & 50 & 114 & 1.91 & 5.96E+06 & 6.87 & 77 & 4.58E+06 & 5.20 & 77 \\
utm5940 & 250 & 29 & 1.73 & 3.91E+06 & 14.66 & 295 & 1.84E+06 & 12.86 & 221 \\
e20r1000 & 200 & 27 & 4.23 & 3.22E+06 & 13.67 & 465 & 4.44E+03 & 8.82 & 364 \\
\hline
\end{tabular}
\end{footnotesize}
\caption{Comparison of {\rm PAIF}\ and {\rm DIAF}-Q\ algorithms}
\label{tab:paifdiaf1_res}
\end{center} \end{table}
Results very similar to those seen in Table \ref{tab:paifdiaf1_res} were also observed in other numerical tests that were conducted. As a general remark, the iteration counts with {\rm BiCGSTAB}\ when preconditioned with {\rm DIAF}-Q\ are not dramatically different from those achieved with {\rm PAIF}. The main benefits of {\rm DIAF}-Q\ are that neither the Hermitian transpose of $A$ is required in the computations nor an estimate for the norm of $A$. Moreover, {\rm DIAF}-Q\ is a direct method, so that its computational cost is easily estimated, while it is not so clear when to stop the iterations with {\rm PAIF}.
The computational cost and parallel implementation of {\rm DIAF}-Q\ is very similar to the established preconditioning techniques based on norm minimization for sparse approximate inverse. (For these issues, see \cite{Chow2000}.)
That is, {\rm DIAF}-Q\ scales essentially accordingly in terms of the computational cost and parallelizability properties. \end{example}
\begin{example} \label{ex:paifdiaf2}
Next we compare {\rm PAIF}\ with the SVD based algorithm of Section \ref{sec:svdalg} ({\rm DIAF}-S). Again $\ensuremath{\mathcal{W}}$ is constructed with the heuristic Algorithm \ref{alg:factspar} of Section \ref{sec:wcalheur}. All the parameters were kept the same as in the previous example, i.e., $k=3$ and $\tau_i=1E-1$, $p_i=0$, $\tau_l=0$ and $p_l=0$. Also, $80$ refinement steps were again used in the power method, so that the results for {\rm PAIF}\ are identical to those presented in Example \ref{ex:paifdiaf1}.
Table \ref{tab:paifdiaf2_res} shows the results. \begin{table}[H]
\begin{center}
\begin{footnotesize}
\begin{tabular}{|*{10}{c|}}
\hline
\multicolumn{4}{|c|}{}& \multicolumn{3}{l|}{{\rm PAIF}}& \multicolumn{3}{l|}{{\rm DIAF}-S} \\
Problem & $|D_j|_M$ & \#$D_j$ & $\rho$ & $\kappa(V)$ & nrm & its & $\kappa(V)$ & nrm & its \\
\hline
west1505 & 50 & 34 & 2.75 & 3.17E+04 & 3.59 & 18 & 4.06E+03 & 3.04 & 14 \\
west2021 & 50 & 47 & 2.69 & 5.53E+03 & 3.84 & 23 & 8.31E+03 & 3.26 & 27 \\
lhr02 & 50 & 66 & 1.11 & 1.65E+03 & 6.69 & 24 & 1.90E+03 & 5.68 & 55 \\
bayer10 & 250 & 67 & 2.56 & 8.00E+05 & 22.27 & 56 & 9.50E+05 & 11.68 & 46 \\
sherman2 & 50 & 24 & 1.05 & 4.32E+02 & 2.45 & 5 & 4.33E+02 & 1.67 & 5 \\
gemat11 & 50 & 115 & 1.91 & 2.28E+05 & 3.58 & 109 & 2.28E+05 & 2.90 & 113 \\
gemat12 & 50 & 114 & 1.91 & 5.96E+06 & 6.87 & 77 & 1.51E+08 & 3.72 & 201 \\
utm5940 & 250 & 29 & 1.73 & 3.91E+06 & 14.66 & 295 & 3.91E+06 & 7.43 & $\ddag$ \\
e20r1000 & 200 & 27 & 4.23 & 3.22E+06 & 13.67 & 465 & 1.96E+04 & 10.37 & 444 \\
\hline
\end{tabular}
\end{footnotesize}
\caption{Comparison of {\rm PAIF}\ and {\rm DIAF}-S\ algorithms}
\label{tab:paifdiaf2_res}
\end{center} \end{table}
The results of Table \ref{tab:paifdiaf2_res} with {\rm DIAF}-S are very similar to those in Table \ref{tab:paifdiaf1_res}. The only notable exception is the matrix utm5940, for which no convergence was achieved with {\rm DIAF}-S. With the metrics used, we do not quite understand why {\rm DIAF}-S fails to produce a good preconditioner for this particular problem. The computed norm $\knorm{AW-V}{F}$ is smaller than the one attained with {\rm DIAF}-Q and the condition number estimate is only slightly worse. The reason is most likely related with the fact that the bound \eqref{eqn:minbound} cannot be expected to be tight enough when $\kappa(V)$ is large.
\end{example}
The following example illustrates how the matrix subspace $\ensuremath{\mathcal{V}}$ can be optimally constructed with the techniques of Section \ref{sec3}.
\begin{example}
\label{ex:optconst_diag}
In this example we consider an optimal construction of $\ensuremath{\mathcal{V}}$. To this
end, we first construct $\ensuremath{\mathcal{W}}$ with the heuristic Algorithm
\ref{alg:factspar} presented in Section \ref{sec:wcalheur}. Then, to
construct $\ensuremath{\mathcal{V}}$, we apply the techniques presented in Section
\ref{sec3}. After the sparsity structures of the subspaces have been
fixed, the resulting minimization problem is solved with
{\rm DIAF}-Q.
Consider the minimization problem \eqref{condwj}. If no restrictions
on the number of nonzero entries in a matrix subspace $\ensuremath{\mathcal{V}}$
are imposed, the norm $\knorm{AW-V}{F}$ can be
decreased by choosing as many entries as possible from the sparsity
structure of $AW$ to be in the sparsity
structure of $\ensuremath{\mathcal{V}}$.\footnote{For example, $\mathcal{V}$ can never
be the full set of upper triangular matrices since it
would require storing $O(n^2)$ complex numbers. The problem is then,
how to choose a subspace $\mathcal{V}$ of upper triangular matrices.}
To illustrate this, we take
$\mathcal{V}$ to be a subspace of block diagonal matrices
by allowing only a certain degree of sparsity $k_{\ensuremath{\mathcal{V}}}$ per column.
The nonzero entries are chosen with the techniques of Section \ref{sec3}.
We again set $k=3$ and $\tau_i=1E-1$, $p_i=0$, $\tau_l=0$ and
$p_l=0$, as parameters for all test matrices.
To have the locations for the entries in the diagonal blocks of
$\ensuremath{\mathcal{V}}$, we then apply the method presented in Section
\ref{sec3}. The subspace $\mathcal{W}$ is constructed with the heuristic
Algorithm \ref{alg:factspar} by
setting $\ensuremath{\mathcal{V}}_0$ to be a subspace of block diagonal matrices with full
blocks.
This is to ensure that the intersection of the final $\ensuremath{\mathcal{V}}$ and
$\ensuremath{\mathcal{W}}$ is empty.
Table \ref{tab:paifdiaf3_res} shows the results, where at most
$k_{\ensuremath{\mathcal{V}}}$ entries in each column of the sparsity pattern of
$\ensuremath{\mathcal{V}}$ have been allowed. For each test problem we have used the
same block structure as in Examples \ref{ex:paifdiaf1} and
\ref{ex:paifdiaf2}; only the locations of the nonzero entries in
$\ensuremath{\mathcal{W}}$ and $\ensuremath{\mathcal{V}}$ are varied.
\begin{table}[H]
\begin{center}
\begin{scriptsize}
\begin{tabular}{|*{13}{c|}}
\hline
& \multicolumn{4}{|l|}{$k_{\ensuremath{\mathcal{V}}}=10$}& \multicolumn{4}{l|}{$k_{\ensuremath{\mathcal{V}}}=30$}& \multicolumn{4}{l|}{$k_{\ensuremath{\mathcal{V}}}=50$} \\
Problem & $\rho$ & $\kappa(V)$ & nrm & its & $\rho$ & $\kappa(V)$ & nrm & its & $\rho$ & $\kappa(V)$ & nrm & its \\
\hline
west1505 & 2.81 & 2.24E+03 & 3.54 & 20 & 2.83 & 4.57E+03 & 3.36 & 20 & 2.83 & 4.57E+03 & 3.36 & 20 \\
west2021 & 2.75 & 5.11E+04 & 3.70 & 30 & 2.76 & 5.67E+04 & 3.46 & 31 & 2.76 & 5.67E+04 & 3.46 & 31 \\
lhr02 & 1.15 & 1.07E+04 & 6.95 & 47 & 1.17 & 1.27E+04 & 6.92 & 43 & 1.17 & 1.27E+04 & 6.92 & 43 \\
bayer10 & 2.68 & 6.72E+07 & 13.83 & 104 & 2.75 & 1.86E+05 & 13.42 & 29 & 2.76 & 1.86E+05 & 13.42 & 31 \\
sherman2 & 1.01 & 1.49E+02 & 1.88 & 7 & 1.11 & 3.77E+02 & 1.82 & 5 & 1.11 & 3.77E+02 & 1.82 & 5 \\
gemat11 & 2.14 & 1.50E+05 & 2.84 & 81 & 2.24 & 1.50E+05 & 2.75 & 68 & 2.24 & 1.50E+05 & 2.75 & 69 \\
gemat12 & 2.09 & 4.24E+06 & 5.21 & 80 & 2.17 & 4.58E+06 & 5.13 & 71 & 2.17 & 4.58E+06 & 5.13 & 70 \\
utm5940 & 2.11 & 1.82E+06 & 13.12 & $\ddag$ & 2.49 & 2.09E+06 & 12.72 & 209 & 2.51 & 2.11E+06 & 12.71 & 201 \\
e20r1000 & 3.77 & 2.16E+06 & 20.66 & $\ddag$ & 4.99 & 1.01E+05 & 11.76 & $\ddag$ & 5.49 & 6.67E+04 & 8.77 & 418 \\
\hline
\end{tabular}
\end{scriptsize}
\caption{Adaptive selection of $\ensuremath{\mathcal{V}}$ for different values of $k_{\ensuremath{\mathcal{V}}}$}
\label{tab:paifdiaf3_res}
\end{center} \end{table}
As seen in Table \ref{tab:paifdiaf3_res}, choosing more entries in
$\ensuremath{\mathcal{V}}$, i.e., increasing $k_{\ensuremath{\mathcal{V}}}$ always improves the norm of
the minimizer, which is well supported by the theory. Allowing more
entries in $\ensuremath{\mathcal{V}}$ generally produces a better preconditioner. In a
few cases where a slightly worse convergence can be observed, we also
observe a slightly worse condition number estimate for the computed
$V$.
\end{example}
The final example illustrates the optimal selection of a block upper triangular subspace $\ensuremath{\mathcal{V}}$ as well as optimization under additional constraints.
\begin{example}
\label{ex:optconst_ut}
We consider the optimal construction of $\ensuremath{\mathcal{V}}$ in the case where $\ensuremath{\mathcal{V}}$
is block upper triangular. As parameters we again use $k=3$ and
$\tau_i=1E-1$, $p_i=0$, $\tau_l=0$ and $p_l=0$ and use the strongly
connected subgraph approach to have a block structure for the
subspace $\ensuremath{\mathcal{V}}$.
To have locations for the entries in the block upper triangular
$\ensuremath{\mathcal{V}}$, we apply the method presented in Section \ref{sec3}. As in
Example \ref{ex:optconst_diag}, we construct $\ensuremath{\mathcal{W}}$ with Algorithm
\ref{alg:factspar} by setting $\ensuremath{\mathcal{V}}_0$ as a subspace of block upper
triangular matrices with full blocks. The resulting $\ensuremath{\mathcal{W}}$ is a
lower triangular matrix subspace consisting of a diagonal part and a
strictly block lower triangular part. The resulting minimization
problem is solved with {\rm DIAF}-Q. Note that by the structure of such a
subspace, the conditioning of $W\in\ensuremath{\mathcal{W}}$ can be readily verified.
Table \ref{tab:paifdiaf4_res} shows the results, where at most
$k_{\ensuremath{\mathcal{V}}}$ entries in each column in the block upper triangular
part of $\ensuremath{\mathcal{V}}_0$ have been allowed. The block structure used is the
same as in Examples \ref{ex:paifdiaf1}, \ref{ex:paifdiaf2} and
\ref{ex:optconst_diag}; only the locations and
the number of the nonzero entries are varied in $\ensuremath{\mathcal{W}}$ and $\ensuremath{\mathcal{V}}$.
\begin{table}[H]
\begin{center}
\begin{scriptsize}
\begin{tabular}{|*{13}{c|}}
\hline
& \multicolumn{4}{|l|}{$k_{\ensuremath{\mathcal{V}}}=10$}& \multicolumn{4}{l|}{$k_{\ensuremath{\mathcal{V}}}=30$}& \multicolumn{4}{l|}{$k_{\ensuremath{\mathcal{V}}}=100$} \\
Problem & $\rho$ & $\kappa(V)$ & nrm & its & $\rho$ & $\kappa(V)$ & nrm & its & $\rho$ & $\kappa(V)$ & nrm & its \\
\hline
west1505 & 2.37 & 4.45E+06 & 4.79 & 860 & 2.68 & 1.88E+06 & 2.52 & 78 & 2.75 & 6.95E+04 & 2.42 & 24 \\
west2021 & 2.31 & 1.66E+11 & 5.34 & $\dagger$ & 2.57 & 8.30E+05 & 2.53 & 67 & 2.66 & 3.25E+05 & 2.27 & 23 \\
lhr02 & 1.03 & 2.45E+03 & 4.22 & 36 & 1.40 & 4.89E+03 & 3.21 & 22 & 1.46 & 4.93E+03 & 3.18 & 21 \\
bayer10 & 2.09 & 3.74E+09 & 11.76 & 963 & 2.42 & 2.90E+06 & 8.19 & 390 & 2.49 & 3.39E+06 & 8.10 & 16 \\
sherman2 & 0.89 & 1.61E+02 & 0.89 & 6 & 1.24 & 4.66E+02 & 0.63 & 3 & 1.24 & 4.66E+02 & 0.63 & 3 \\
gemat11 & 1.84 & 1.50E+05 & 2.37 & 55 & 1.96 & 1.60E+05 & 1.74 & 32 & 1.97 & 1.60E+05 & 1.74 & 32 \\
gemat12 & 1.82 & 1.45E+09 & 3.37 & 160 & 1.95 & 1.99E+09 & 2.38 & 39 & 1.96 & 2.03E+09 & 2.37 & 38 \\
utm5940 & 1.60 & 5.29E+07 & 9.16 & 653 & 2.07 & 1.12E+09 & 8.17 & 141 & 2.18 & 8.56E+08 & 8.14 & 165 \\
e20r1000 & 1.90 & 2.00E+11 & 22.32 & $\dagger$ & 2.77 & 6.59E+10 & 12.48 & $\dagger$ & 3.85 & 2.52E+09 & 6.05 & 792 \\
\hline
\end{tabular}
\end{scriptsize}
\caption{Adaptive selection of $\ensuremath{\mathcal{V}}$ for different values of $k_{\ensuremath{\mathcal{V}}}$}
\label{tab:paifdiaf4_res}
\end{center} \end{table}
The results of Table \ref{tab:paifdiaf4_res} are very similar to
those of Table \ref{tab:paifdiaf3_res}. Again, allowing more entries
in $\ensuremath{\mathcal{V}}$ always improves the norm of the minimizer and usually
also produces a better preconditioner.
Similarly as in Example \ref{ex:paifdiaf2}, it is again hard to
understand why $k_{\ensuremath{\mathcal{V}}}=100$ produces a worse preconditioner than
$k_{\ensuremath{\mathcal{V}}}=30$ for utm5940. We attribute this behaviour to the
looseness of the bound \eqref{eqn:minbound}, i.e., when $\kappa(V)$
is large, the minimization of $\knorm{AW-V}{F}$ may not compensate
sufficiently for this in these cases.
To improve the conditioning of $V\in\ensuremath{\mathcal{V}}$, we now consider the same
problems using the technique of imposing constraints as described in
Section \ref{sec3}. As a constraint we require that for the diagonal
entries $v_{jj}$ of $V$ it holds that $\absnorm{v_{jj}}\geq 10^{-2}$. In
case the requirement is not met, we impose a constraint with
$r=2$. Table \ref{tab:paifdiaf4_resstab} describes the results for
$k_{\ensuremath{\mathcal{V}}}=100$, where the number of constrained columns is denoted
by stab.
\begin{table}[H]
\begin{center}
\begin{scriptsize}
\begin{tabular}{|*{6}{c|}}
\hline
& \multicolumn{5}{l|}{$k_{\ensuremath{\mathcal{V}}}=100$} \\
Problem & $\rho$ & $\kappa(V)$ & nrm & stab & its \\
\hline
west1505 & 2.75 & 6.16E+04 & 3.06 & 1 & 25 \\
west2021 & 2.66 & 1.63E+05 & 2.92 & 1 & 21 \\
lhr02 & 1.46 & 4.93E+03 & 3.18 & 0 & 21 \\
bayer10 & 2.49 & 3.39E+06 & 8.10 & 0 & 16 \\
sherman2 & 1.24 & 4.66E+02 & 0.63 & 0 & 3 \\
gemat11 & 1.97 & 1.60E+05 & 1.74 & 0 & 32 \\
gemat12 & 1.96 & 2.03E+09 & 2.37 & 0 & 38 \\
utm5940 & 2.18 & 7.56E+06 & 8.24 & 1 & 113 \\
e20r1000 & 3.85 & 8.66E+09 & 6.27 & 3 & 738 \\
\hline
\end{tabular}
\end{scriptsize}
\caption{Constrained selection of $\ensuremath{\mathcal{V}}$ for $k_{\ensuremath{\mathcal{V}}}=100$}
\label{tab:paifdiaf4_resstab}
\end{center} \end{table}
As seen in Table \ref{tab:paifdiaf4_resstab}, if only a small number
of columns has to be constrained, the technique is effective. In
other numerical experiments not reported here we observed that if
too many columns have to be constrained, the norm of the minimizer
$\knorm{AW-V}{F}$ tends to increase. An approach to find the right balance is
then needed.
\end{example}
To sum up these experiments, the iteration counts obtained with {\rm DIAF}-Q\ and {\rm DIAF}-S\ (which are fully parallelizable) seem to be competitive with the iteration counts obtained with the standard algebraic (sequential) preconditioning techniques. Moreover, a good problem specific tuning of matrix subspaces possesses a lot of potential for significantly speeding up the iterations.
\end{document}
Health status and air pollution related socioeconomic concerns in urban China
Kaishan Jiao1,
Mengjia Xu ORCID: orcid.org/0000-0002-2796-31952 &
Meng Liu3
China is experiencing environmental issues and related health effects due to its industrialization and urbanization. The health effects associated with air pollution are not just a matter of epidemiology and environmental science research, but also an important social science issue. Literature about the relationship of socioeconomic factors with the environment and health factors is inadequate. The relationship between air pollution exposure and health effects in China was investigated with consideration of the socioeconomic factors.
Based on nationwide survey data of China in 2014, we applied the multilevel mixed-effects model to evaluate how socioeconomic status (represented by education and income) contributed to the relationship between self-rated air pollution and self-rated health status at community level and individual level.
The findings indicated that there was a non-linear relationship between the community socioeconomic status and community air pollution in urban China, with the highest level of air pollution presented in the communities with moderate socioeconomic status. In addition, health effects associated air pollution in different socioeconomic status groups were not equal. Self-rated air pollution had the greatest impact on self-rated health of the lower socioeconomic groups. With the increase of socioeconomic status, the effect of self-rated air pollution on self-rated health decreased.
This study verified the different levels of exposure to air pollution and inequality in health effects among different socioeconomic groups in China. It is imperative for the government to urgently formulate public policies to enhance the ability of the lower socioeconomic groups to circumvent air pollution and reduce the health damage caused by air pollution.
Environmental pollution has attracted great attention in China due to its rapid industrialization and urbanization in recent years. Reports of city smog have appeared widely in the mass media in China [1]. The social implications of air pollution point to an increasing number of severe air pollution problems in China (Footnote 1) and to the health effects associated with air pollution. Air pollution may cause acute or chronic health problems including mild irritation to the upper respiratory tract, chronic respiratory system disease, heart disease, lung cancer, children's acute respiratory infection and adults' chronic bronchitis. In addition, air pollution exacerbates asthma, heart disease or lung disease, and increases the risk of death [2, 3]. Meanwhile, health effects caused by air pollution are significantly different for people with different socioeconomic statuses. Even though Beck argued that poverty was hierarchical and chemical smog was democratic [4], some researchers have pointed out that individuals and groups with different socioeconomic statuses were exposed to air pollution at different levels and suffered from different health effects. These differences are referred to as environmental inequality and health inequality.
Some studies have examined the health inequalities associated with air pollution at regional level. For example, a study of six districts in Sao Paulo, Brazil, demonstrated that PM10 had less effect on respiratory mortality among older adults in areas with a higher proportion of college education populations and high-income families, and it had greater effect in areas where the proportion of the poor population was high [5]. Another study based on the city of Hamilton in Canada showed that air pollution exposure had a relatively large effect on acute mortality in the zones with a higher proportion of low educational attainment and high manufacturing employment [6]. In addition, a study of Hong Kong, China, found that air pollution had a greater effect on the mortality of people living in public rental than those living in private homes, and a greater effect on blue-collar workers than those who had never worked and white-collar workers [7]. Other studies focusing on developed areas, such as Rome and Norway, revealed similar conclusion that neighborhood socioeconomic deprivation could exacerbate mortality caused by air pollution [8, 9].
On the other hand, some studies have examined the health inequalities associated with air pollution at the individual level. For example, a study of 20 cities in the United States showed that individual education significantly modified the relationship between PM10 and mortality; specifically, the higher an individual's education level, the lower the effect of PM10 on mortality [10]. In addition, several studies on China found similar results: people with lower socioeconomic status typically experienced a higher health risk from air pollution, while people with middle or high socioeconomic status faced little health risk because they had more ways to avoid air pollution [11]. Moreover, as the severity of air pollution increased, health disparities among people with different socioeconomic status intensified [12]. Conversely, a good natural environment could reduce health inequality [13].
There are two possible reasons for the stronger effect of air pollution on people in the low socioeconomic class. First, the level of exposure to air pollution is higher among those living in low socioeconomic communities [14]. Pearce and Kingham claimed that in some countries or regions the socially deprived, the low-income and ethnic minorities were exposed to higher levels of air pollution [15]. Furthermore, Schoolman and Ma found that townships with a higher proportion of rural migrants were exposed to higher levels of air pollution in Jiangsu province, China [16]. Second, compared with people of high socioeconomic status, populations with low socioeconomic status are more susceptible to air pollution. This susceptibility is driven by health-related social, behavioral and psychological factors, including poor health status (such as diabetes and obesity), addictions (such as smoking), other pollutant exposures (such as passive smoking), psychological stress, low nutritional intake and even genetic make-up [17].
Forastiere pointed out that the second reason (i.e. differential susceptibility of individuals) was more convincing than the first one (i.e. differential exposure to air pollution) in explaining the stronger effect of air pollution among low socioeconomic groups [8]. This is because several researchers have found no significant difference in levels of exposure to air pollution among people with different socioeconomic status, and some have even reached the opposite conclusion that high socioeconomic groups experience higher levels of air pollution. For example, Goodman et al. examined traffic-related air pollution in London and found that even though average air pollution concentrations were higher in low socioeconomic positions, the direction of the association was reversed in the central London area [18]. Similar results were found in studies by Crouse and by Cesaroni et al. [19, 20]: people with a college education or living in high socioeconomic communities were more likely to be exposed to traffic-related air pollution. Havard et al. asserted that the relationship between air pollution exposure and deprivation was nonlinear, with areas of medium deprivation the most exposed to air pollution [21]. Thus, the association between socioeconomic status and air pollution exposure remained uncertain and required further investigation [22]. However, it should be noted that several studies did not find a modification effect of socioeconomic status on air pollution [23], or found only a partial effect, with education modifying the effect of air pollution but household income not doing so [6]. Some researchers even found the opposite result, i.e. that the effect of air pollution on mortality was higher among people with high socioeconomic status [24]. Therefore, Laurent et al. argued that the modification effect of socioeconomic status depended on the regional level at which socioeconomic characteristics were measured [17]: if socioeconomic characteristics were measured at the city level, no modification effect was found; if at the community level, results were mixed; and if individually measured socioeconomic characteristics were used, the results indicated that disadvantaged subjects were affected more by pollution.
To sum up, although existing studies have found that people with different socioeconomic status experience varying levels of exposure to air pollution and different health effects, no unanimous conclusion has been reached. In addition, most previous studies have focused on developed countries or regions. Studies therefore need to be extended to developing countries like China, which is now at a stage of accelerated industrialization and urbanization; studying air pollution and health inequality in China can provide new empirical evidence for the debate. This study therefore aims to explore whether there is a significant difference in air pollution exposure and health effects among individuals or communities of different socioeconomic status in China. More specifically, there are two sub-questions: (1) Is the level of exposure to air pollution different among communities with different socioeconomic status in China, and what happens to the level of air pollution exposure as a community's socioeconomic status continues to improve? (2) Is the effect of air pollution on health associated with socioeconomic status? With the increase of socioeconomic status, either at the community level or at the individual level, does the effect of air pollution on health decrease?
The data for this study were collected from the "China Labor Dynamics Survey" (CLDS) [25], China's first national longitudinal social survey of its labor force. Since 2012, the survey has been conducted every two years. The CLDS data cover individuals, families and communities in 29 cities in China (excluding Hong Kong, Macao, Tibet and Hainan). Given the purpose of this study, subjects living in urban communities were chosen, because air pollution occurs mainly in cities and air pollution in rural areas is relatively insignificant and of less concern. Several previous studies were also based on urban samples [26,27,28,29]. In addition, the current study selected participants over 45 years old, because the health impact of air pollution is concentrated mainly among children and older people, who are also the main target of many studies. The final sample comprised 3838 individuals from 171 urban communities.
Variables and measurement
Self-rated health
Although some studies pointed out that self-rated health has limitations compared with objective health markers [30, 31], other researchers have explained in detail the unique characteristics of self-rated health and its role in predicting mortality [32,33,34]. Unlike most health indicators, self-rated health is based on a subjective cognitive process [35], which captures the overall subjective experience of mental and physical well-being and is closer to the WHO's definition of health [36]. Additionally, several studies have found that self-rated health has high reliability. For example, Lundberg and Manderbacka assessed the reliability of self-rated health and found it to be reliable in all subgroups studied, and even excellent in the group of older men [37]. Consistent with other health surveys [38, 39], self-rated health in this study is based on answers to the question 'how do you evaluate your own health conditions', with five levels of answers: very good (coded as 1), good (coded as 2), moderate (coded as 3), bad (coded as 4) and very bad (coded as 5).
Community health status
To examine the relationship between community air pollution and the overall health of the community, we used the self-rated health scores of all individuals in a given community to calculate a measure of community health status. This was done by calculating the percentage of respondents in each community who rated their health as 'very good' or 'good'. The higher the percentage, the better the health conditions in the community.
Self-rated air pollution
In epidemiology and environmental science, air quality is normally measured by the concentration of pollutants (such as submicron particles, particulate matter (PM), ozone (O3), nitrogen dioxide (NO2), carbon monoxide (CO) and sulfur dioxide (SO2)), which is an objective index. It should be pointed out that the air quality index of a city is a summary of data from multiple air monitoring stations in that city. Given the relatively small number of air monitoring stations and the short time span of air pollution data in China's cities [40], it is questionable whether the officially released Air Quality Index (AQI) reflects the true level of air pollution. Moreover, China's cities are relatively large, and levels of air pollution in different communities within a city can differ significantly [41]. Therefore, a single AQI does not necessarily reflect the true level of air pollution exposure of community inhabitants. According to Forastiere and Galassi, the best source of information about an event, at least in theory, is the most informed subject experiencing it, especially when his/her report reflects several aspects of the event [42]. In fact, in some studies of community characteristics and health, the subjective perception of air quality was treated as an important feature of the community [43,44,45]. Furthermore, some studies have found that perceived pollution and perceived health risks play an important role in understanding the health effects of air pollution [46], and that psychological factors should be taken into account when studying the effects of air pollution [47]. Compared with other kinds of pollution that are not easily perceived by community residents, air quality is relatively easy to perceive [48,49,50]. Therefore, to some extent, self-rated air pollution can reflect the level of effect of air pollution on the individuals exposed to it. In this study, self-rated air pollution is based on answers to the question 'how do you rate the level of air pollution in the place where you live'. The answers were categorized into two levels: not serious air pollution (coded as 1) and serious air pollution (coded as 2).
Community air pollution level
The percentage of subjects in a community who reported 'serious air pollution' was used as the air pollution index of that community. The larger the percentage, the more severe the perceived air pollution in the community.
Individual socioeconomic status (SES)
Education, occupation and income are the common measures of socioeconomic status, but some studies have argued that this measurement was developed mainly for developed countries and have debated whether it fits developing countries [51, 52]. This study focused on the population over 45 years old, most of whom were not working, so the occupation variable was uninformative for this population. Considering the relationship between education and income and the completeness of the data, this study did not include occupation in the measurement of socioeconomic status. Instead, two indicators were used: years spent in school (E_i) and per capita annual household income (I_i). As shown in Table 1, education level was converted to years spent in school according to the number of years specified for each stage of education in China. Per capita annual household income was calculated by dividing the annual income of the household by the number of household members. We first standardized the years spent in school, Z_{i1} = (E_i − mean)/standard deviation, and the per capita annual household income, Z_{i2} = (I_i − mean)/standard deviation. We then took the mean of the two standardized variables as the individual's socioeconomic status: SES_i = (Z_{i1} + Z_{i2})/2.
Table 1 Years of Education Transformed from Level of Education
Community socioeconomic status (CSES)
The CSES was calculated as the mean of the individual socioeconomic status scores in each community: CSES_j = ∑_i SES_ij / n_j.
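To make the variable construction concrete, the following sketch reproduces the calculations described above on a toy data set. The DataFrame and column names (community_id, school_years, household_income_pc, srh, srap) are our own placeholders, not CLDS variable names, and the numbers are invented.

```python
import pandas as pd

# Toy individual-level data; in practice this would be the CLDS respondents
df = pd.DataFrame({
    "community_id":        [1, 1, 1, 2, 2, 2],
    "school_years":        [6, 9, 12, 9, 16, 19],
    "household_income_pc": [8000, 12000, 20000, 10000, 30000, 50000],
    "srh":                 [3, 2, 1, 4, 2, 1],  # self-rated health: 1 = very good ... 5 = very bad
    "srap":                [2, 2, 1, 2, 1, 1],  # self-rated air pollution: 1 = not serious, 2 = serious
})

# Individual SES: mean of the two z-scores (Table 1 gives the years-of-schooling mapping)
df["z_edu"] = (df["school_years"] - df["school_years"].mean()) / df["school_years"].std()
df["z_inc"] = (df["household_income_pc"] - df["household_income_pc"].mean()) / df["household_income_pc"].std()
df["ses"] = (df["z_edu"] + df["z_inc"]) / 2

# Community-level aggregates used in the community models
grp = df.groupby("community_id")
community = pd.DataFrame({
    "cses":          grp["ses"].mean(),                            # CSES_j = sum_i(SES_ij) / n_j
    "health_pct":    grp["srh"].apply(lambda s: (s <= 2).mean()),  # share rating health 'very good' or 'good'
    "pollution_pct": grp["srap"].apply(lambda s: (s == 2).mean()), # share rating air pollution 'serious'
})
print(community)
```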
Covariates
The covariates in this study included age, gender, marital status, body mass index, smoking, drinking and physical exercise, all of which may significantly influence health. Age was a continuous variable, and gender was a dummy variable (female coded 1, male coded 0). Marital status was categorized as single (coded 1) or married (coded 0). Body mass index was grouped into four classes: underweight (coded 1), normal weight (coded 2), overweight (coded 3) and obese (coded 4). Smoking was grouped into three classes: never smoked (coded 1), quit smoking (coded 2) and currently smoking (coded 3). Drinking (including white wine, red wine and beer) was likewise grouped into three classes: never drank (coded 1), quit drinking (coded 2) and currently drinking (coded 3). Physical exercise was a dummy variable: infrequent physical exercise (coded 1) and frequent physical exercise (coded 0). Table 2 shows the measurement of the variables and their description.
Table 2 Description of variables (N/% or mean)
Considering the multilevel structure of the data (individual < family < community < city) and the advantages of multilevel models [53, 54], this study applied multilevel mixed-effects models: a linear mixed-effects model for continuous responses and a generalized linear mixed-effects model for ordinal responses.
First, we set up a model to examine the relationship between the community air pollution exposure and the community socioeconomic status. The model was specified as:
$$ y_{jk}=\beta_0+\beta_1 x_{1jk}+\beta_2 x_{1jk}^2+u_k+e_{jk} $$
where the subscript k denotes the city (level 2), j denotes the community (level 1), y_jk is community air pollution exposure, x_1jk is community socioeconomic status, u_k is the level-2 random effect, and e_jk is the level-1 residual.
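A two-level linear model of this form can be fitted with any mixed-model software; the authors report using Stata's meglm, but a rough Python sketch on synthetic data is shown below purely for illustration. The data frame, column names and numbers are invented placeholders, not the CLDS data, and the sketch is not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic community-level data standing in for the 171 CLDS communities
rng = np.random.default_rng(0)
n = 171
community = pd.DataFrame({
    "city": rng.integers(0, 29, n),          # hypothetical city identifier (level 2)
    "cses": rng.normal(0, 0.5, n),           # community socioeconomic status
})
community["pollution_pct"] = (0.4 + 0.2 * community["cses"]
                              - 0.15 * community["cses"] ** 2
                              + rng.normal(0, 0.1, n))

# Community air pollution regressed on CSES and its square, with a city random intercept
m1 = smf.mixedlm("pollution_pct ~ cses + I(cses**2)",
                 data=community, groups="city").fit()
print(m1.summary())
```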
Second, we set up a model to examine the relationship between the community health status and the community air pollution exposure. The model was specified as:
$$ y_{jk}=\beta_0+\beta_1 x_{1jk}+\beta_2 x_{2jk}+\beta_3 x_{1jk}x_{2jk}+\beta_4 x_{3jk}+u_k+e_{jk} $$
where the subscript k denotes the city (level 2), j denotes the community (level 1), y_jk is community health status, x_1jk is community socioeconomic status, x_2jk is community air pollution exposure, x_1jk x_2jk is the interaction of community socioeconomic status and community air pollution exposure, x_3jk is the proportion of elderly population in the community, u_k is the level-2 random effect, and e_jk is the level-1 residual.
Finally, we set up a generalized linear mixed-effects model to examine the relationship between socioeconomic status, self-rated air pollution and self-rated health at individual level. The model was specified as:
$$ \log\frac{P\left(y_{ijk}>m\right)}{P\left(y_{ijk}\le m\right)}=\beta_0+\beta_1 x_{1ijk}+\beta_2 x_{2ijk}+\beta_3 x_{1ijk}x_{2ijk}+\sum_{q=4}^{Q}\beta_q x_{qijk}+u_k+u_{jk} $$
where the subscript k denotes the community (level 3), j denotes the household (level 2), i denotes the individual, y_ijk is self-rated health, x_1ijk is self-rated air pollution exposure, x_2ijk is individual socioeconomic status, x_1ijk x_2ijk is the interaction of self-rated air pollution exposure and individual socioeconomic status, x_qijk are the covariates, u_jk is the level-2 random effect and u_k is the level-3 random effect.
The models were fitted by maximum likelihood using the meglm command in Stata. Nested models were compared using the −2 log-likelihood (deviance) statistic: the smaller the deviance, the better the model fit. The difference in deviance between two nested models follows a chi-square distribution with degrees of freedom equal to the difference in the number of parameters between the models.
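The nested-model comparison described here boils down to a chi-square test on the change in deviance. A minimal sketch of that computation is given below; the deviance values are placeholders, not estimates from the paper.

```python
from scipy.stats import chi2

# -2 log-likelihood (deviance) of two nested models; the larger model adds one parameter
dev_restricted, dev_full = 9500.0, 9475.9   # placeholder values
lr_stat = dev_restricted - dev_full         # difference in deviance
df_diff = 1                                 # difference in number of parameters
p_value = chi2.sf(lr_stat, df_diff)         # e.g. chi2(1) = 24.1 gives p < 0.001
print(lr_stat, p_value)
```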
First, we examined the relationship between community socioeconomic status, community air pollution and community health. As shown in Table 3, community socioeconomic status was positively related to community air pollution: as community socioeconomic status increased, the level of community air pollution rose. However, the coefficient of squared community socioeconomic status was significantly negative (P < 0.01), indicating a curved (inverted U-shaped) relationship between community socioeconomic status and community air pollution.
Table 3 Community socioeconomic status, community air pollution and community health
Figure 1 presents the predicted values of community air pollution (the percentage of 'serious air pollution' assessments in the community sample) at different levels of community socioeconomic status, based on the models in Table 3. As community socioeconomic status improved, community air pollution at first continued to increase. Beyond a certain level of community socioeconomic status, however, community air pollution declined as socioeconomic status increased further.
Community socioeconomic status and community air pollution
Additionally, Table 3 demonstrates a negative relationship between community air pollution and community health: when the level of air pollution in a community increases, community-level health declines. Table 3 also shows that the coefficient of the interaction term is 0.141 (P < 0.1), which means that the health effects associated with community air pollution were moderated by community socioeconomic status: as community socioeconomic status increased, the effect of community air pollution on community health decreased. As Fig. 2 illustrates, among communities with low socioeconomic status, the more serious the air pollution, the worse the health conditions in the community. However, as socioeconomic status increased, the difference in health conditions among communities with different levels of air pollution shrank. That is to say, air pollution had a relatively small impact on health in communities with higher socioeconomic status. It is worth noting that in communities with very high socioeconomic status, the health status of communities with higher air pollution was better than that of communities with relatively low air pollution.
Self-rated community air pollution and community health status: moderated by socioeconomic status
In addition, we examined the relationship between self-rated air pollution and self-rated health, and the moderating role of individual socioeconomic status. Table 4 shows the results of the mixed-effects models. Model 1 in Table 4 contained only control variables, and model 2 added self-rated air pollution. According to the likelihood ratio test, model 2 (χ2(1) = 24.11, P = 0.000) was better than model 1, indicating that self-rated air pollution significantly affected self-rated health: the more severe the self-rated air pollution of the community in which an individual lives, the worse the individual's self-rated health. In model 3, individual socioeconomic status was added. The likelihood ratio test showed that model 3 (χ2(1) = 89.47, P = 0.000) was better than model 2, indicating that individual socioeconomic status was significantly related to self-rated health: individuals with higher socioeconomic status were significantly less likely to report bad self-rated health than those with lower socioeconomic status. In model 4 of Table 4, we added the interaction term of individual socioeconomic status and self-rated air pollution. The likelihood ratio test showed that model 4 (χ2(1) = 4.76, P = 0.029) was better than model 3, indicating that the health effects associated with self-rated air pollution were moderated by individual socioeconomic status. The coefficient of the interaction term was −0.247 and statistically significant (P < 0.05). Thus, with the improvement of individual socioeconomic status, the impact of self-rated air pollution on self-rated health was significantly reduced.
Table 4 Relationship between Individual socioeconomic status, self-rated air pollution and self-rated health
As Fig. 3 illustrates, among individuals with low socioeconomic status, those who rated air pollution as 'serious' were less likely to rate their health as 'very good' or 'good' than those who rated air pollution as 'not serious'. However, as individual socioeconomic status increased, the difference in health effects associated with self-rated air pollution decreased: among individuals with high socioeconomic status, the predicted probability of rating health as 'very good' or 'good' was very similar for those who rated air pollution as 'serious' and those who rated it as 'not serious'. Figure 3 also shows that individuals rating air pollution as 'serious' were more likely to report 'moderate', 'bad' or 'very bad' health than those rating air pollution as 'not serious', especially among those with lower socioeconomic status; again, among individuals with higher socioeconomic status, the predicted probabilities for the two groups were very close. In short, the level of individual socioeconomic status was negatively correlated with the health effects associated with self-rated air pollution.
Self-rated air pollution and self-rated health: moderated by socioeconomic status
This study found a non-linear relationship between community air pollution and community socioeconomic status in urban China. On the one hand, as community socioeconomic status improves, air pollution in the community also rises, which, to some extent, indicates that air pollution is a 'byproduct' of the economic development of the society. This finding is consistent with some previous research; for example, in cities such as London and Rome, communities with high socioeconomic status had higher levels of exposure to air pollution [18, 20]. However, once community socioeconomic status rose above a certain level, the relationship between it and community air pollution began to reverse: as community socioeconomic status increased further, community air pollution decreased. This implies that air pollution exposure was highest among communities with moderate socioeconomic status [21], while communities with the lowest or highest socioeconomic status experienced relatively less air pollution.
In general, the relationship between community socioeconomic status and air pollution can be classified into five categories: low community socioeconomic status - low community air pollution (Class 1), moderate community socioeconomic status - moderate community air pollution (Class 2), high community socioeconomic status - high community air pollution (Class 3), high community socioeconomic status - moderate air pollution (Class 4) and high community socioeconomic status - low community air pollution (Class 5). At present, most urban areas in China are at the stage of rapid socioeconomic development and increasing air pollution (Classes 1 and 2). Whether these urban areas move from Class 2 to Class 3 or directly from Class 2 to Class 4 depends on changes in the mode of economic development and on investment in air pollution control. Additionally, some cities, such as Beijing and Tianjin, are at the stage of high socioeconomic status and high air pollution (Class 3). With changes in the mode of economic development and increased government investment in air pollution control, air pollution in cities like Beijing will continue to decline in the future (from Class 3 to Class 4). Finally, some cities in China, such as Shenzhen and Xiamen, have higher socioeconomic status but lower air pollution (Class 4 or Class 5).
The factors to consider in explaining the relationship between community socioeconomic status and community air pollution include economic structure, industrialization patterns, urbanization patterns, the natural environment and individual choices. While higher socioeconomic groups tend to prefer communities with better air quality, this is not the case in most of China's cities. When urban residents currently choose where to live, they weigh work opportunities, educational resources, medical resources and transportation more heavily than air quality. Because of the relative homogeneity of air pollution levels, such as PM2.5, within a city [55, 56], individuals would have to move to another city if they wanted to live in a community with better air quality. Considering that communities with serious air pollution in China are usually characterized by highly developed economies, abundant educational and medical resources and convenient transportation, people with high socioeconomic status do not show a strong preference for high air quality when selecting where to live. Therefore, high socioeconomic groups experience higher levels of air pollution exposure in China. However, if air pollution becomes worse or its perceived health risks increase, people of higher socioeconomic status are likely to make air quality an important consideration for relocation, for example moving from polluted areas to areas with good air quality. If this happens, people of higher socioeconomic status will come to live in communities with good air quality, while people of lower socioeconomic status will mostly remain in communities with poor air quality.
In addition, this study found significant inequalities in the health effects associated with air pollution. With the improvement of socioeconomic status, the health effects associated with air pollution declined; as a result, the differences in self-rated health between people living in areas with severe air pollution and those living in less polluted areas gradually narrowed. More specifically, the health effect of air pollution on people of lower socioeconomic status was significantly greater than on people of higher socioeconomic status. This result may be explained by the vulnerability and avoidance capacity of people with different socioeconomic status [14]. First, compared with higher socioeconomic groups, the health status of lower socioeconomic groups is relatively poor [57], making them more sensitive to severe air pollution and more vulnerable to health damage. Second, people with lower socioeconomic status have significantly less access to good-quality medical services than those with higher socioeconomic status [58], so they cannot receive medical services in a timely manner when air pollution aggravates and affects their health. Third, people with low socioeconomic status usually have relatively low capacity for prevention: even under the same level of ambient air pollution, they are exposed more, because of their work environments (for example, outdoor work) and indoor environments (for example, limited means of protection against air pollution). Lastly, because of their lower educational background, people with low socioeconomic status usually lack knowledge about health and environmental pollution and about the health effects caused by air pollution, and are therefore likely to have a lower level of awareness of self-protection.
Because people with low socioeconomic status are less likely to be well protected from air pollution, they may experience greater health effects than those with high socioeconomic status. Given the limited capacity of lower socioeconomic groups to avoid the risks of air pollution, public services for air pollution prevention provided by the government are very important. At present, when the air is heavily polluted, the government's main task is to reduce pollutant emissions; policies related to air pollution risk management lag seriously behind, with the result that lower socioeconomic groups (such as children from poor families, the elderly and patients) fail to receive the related public services. It is therefore imperative for regions with more serious air pollution to formulate public policies that mitigate the negative impact of air pollution exposure. The main goals for public policy makers should be to increase government financial investment to provide masks, air purification equipment, other protective means, and information and consulting services for lower socioeconomic groups. In addition, the government should continue to improve existing health care policies, enhance the accessibility of high-quality medical services for lower socioeconomic groups, and minimize the health damage caused by air pollution.
This study has several limitations. First, owing to the limited number of air quality monitoring stations in China, only city-level data were available rather than community-level data, so air pollution was measured subjectively in this study. Although some studies have suggested that subjectively perceived community characteristics (such as the perception of community air quality) play an important role in health research [59], and others have found that perceived air pollution is significantly correlated with official monitoring data [60], more work is needed on the relative roles of objective air pollution and subjectively perceived air pollution in influencing health. Whether differences in the impact of air pollution on the health of different socioeconomic groups are caused by differences in perceptions of air pollution also requires further study. Second, since this study used cross-sectional data, the causal mechanisms linking socioeconomic status, air pollution and health status could not be examined in depth. Future studies may consider using panel data to examine how changes in socioeconomic status affect exposure to air pollution and how changes in air pollution exposure affect health. Third, although the self-rated health measure used in this study captures the evaluation of one's overall health status in line with the WHO's definition of health, the impact of air pollution on objective health indicators needs further investigation, as does the association between air pollution and different health indicators and whether this relationship is modified by socioeconomic status. Finally, since the sample of this study included only urban residents over the age of 45, the conclusions cannot be generalized to the broader population. The health effects associated with air pollution in the general population, and the moderating role of socioeconomic status, need to be studied further, including differences across subgroups such as age groups.
This study used nationwide multilevel data to investigate how socioeconomic factors matter in the relationship between air pollution and health status in China. It verified the different levels of exposure to air pollution and the inequality in health effects among different socioeconomic groups in China. The findings indicated a nonlinear relationship between community socioeconomic status and community air pollution, with the highest level of air pollution found in communities with moderate socioeconomic status. It was also found that air pollution had the greatest impact on the health of the lower socioeconomic groups: with the increase of socioeconomic status, the effect of air pollution on health decreased. Therefore, it is imperative for the government to formulate public policies that enhance the ability of lower socioeconomic groups to avoid air pollution and reduce the health damage caused by air pollution. Given the dynamic evolution of socioeconomic development and air pollution levels, the moderating effect of socioeconomic status on the relationship between environment and health is changing; it is therefore important to continue to investigate and analyze the inequality of exposure to environmental pollution and the health effects associated with air pollution in China.
According to the Report on the State of the Environment in 2015 released by the Ministry of Environmental Protection of China, among the 338 prefecture-level cities in China only 73 (21.6%) had air quality that met the standard, while 265 (78.4%) failed the air quality test.
Shi H, Wang Y, Huisingh D, Wang J. On moving towards an ecologically sound society: with special focus on preventing future smog crises in China and globally. J Clean Prod. 2014;64(1):9–12.
Brunekreef B, Holgate ST. Air pollution and health. Lancet. 2002;360(9341):1233–42.
Kampa M, Castanas E. Human health effects of air pollution. Environ Pollut. 2008;151(2):362–7.
Beck U. Risk society: towards a new modernity. London: Sage; 1992.
Martins M, Fatigati F, Vespoli T, Martins LC, Pereira LA, Martins MA, et al. Influence of socioeconomic conditions on air pollution adverse health effects in elderly people: an analysis of six regions in Sao Paulo, Brazil. J Epidemiol Commun Health. 2004;58(1):41–6.
Jerrett M, Burnett R, Brook J, Kanaroglou P, Giovis C, Finkelstein N, et al. Do socioeconomic characteristics modify the short term association between air pollution and mortality? Evidence from a zonal time series in Hamilton, Canada. J Epidemiol Commun Health. 2004;58(1):31–40.
Wong C-M, Ou C-Q, Chan K-P, Chau Y-K, Thach T-Q, Yang L, et al. The effects of air pollution on mortality in socially deprived urban areas in Hong Kong, China. Environ Health Perspect. 2008;116(9):1189–94.
Forastiere F, Stafoggia M, Tasco C, Picciotto S, Agabiti N, Cesaroni G, et al. Socioeconomic status, particulate air pollution, and daily mortality: differential exposure or differential susceptibility. Am J Ind Med. 2007;50(3):208–16.
Næss Ø, Piro FN, Nafstad P, Smith GD, Leyland AH. Air pollution, social deprivation, and mortality: a multilevel cohort study. Epidemiology. 2007;18(6):686–94.
Zeka A, Zanobetti A, Schwartz J. Individual-level modifiers of the effects of particulate matter on daily mortality. Am J Epidemiol. 2006;163(9):849–59.
Miao Y, Chen J. Air pollution and health needs: the application of the Grossman model. World Econ. 2010;6:142–62.
Qi Y, Lu H. Pollution, health and inequality - across the traps of "environmental health and poverty". Manag World. 2015;9:32–51.
Mitchell R, Popham F. Effect of exposure to natural environment on health inequalities: an observational population study. Lancet. 2008;372(9650):1655–60.
O'Neill MS, Jerrett M, Kawachi I, Levy JI, Cohen AJ, Gouveia N, et al. Health, wealth, and air pollution: advancing theory and methods. Environ Health Perspect. 2003;111(16):1861–70.
Pearce J, Kingham S. Environmental inequalities in New Zealand: a national study of air pollution and environmental justice. Geoforum. 2008;39(2):980–93.
Schoolman ED, Ma C. Migration, class and environmental inequality: exposure to pollution in China's Jiangsu province. Ecological Econ. 2012;75(C):140–51.
Laurent O, Bard D, Filleul L, Segala C. Effect of socioeconomic status on the relationship between atmospheric pollution and mortality. J Epidemiol Commun Health. 2007;61(8):665–75.
Goodman A, Wilkinson P, Stafford M, Tonne C. Characterising socio-economic inequalities in exposure to air pollution: a comparison of socio-economic markers and scales of measurement. Health Place. 2011;17(3):767–74.
Crouse DL, Ross NA, Goldberg MS. Double burden of deprivation and high concentrations of ambient air pollution at the neighbourhood scale in Montreal, Canada. Soc Sci Med. 2009;69(6):971–81.
Cesaroni G, Badaloni C, Romano V, Donato E, Perucci CA, Forastiere F. Socioeconomic position and health status of people who live near busy roads: the Rome longitudinal study (RoLS). Environ Health. 2010;9(1):1–12.
Havard S, Deguen S, Zmirou-Navier D, Schillinger C, Bard D. Traffic-related air pollution and socioeconomic status: a spatial autocorrelation study to assess environmental equity on a small-area scale. Epidemiology. 2009;20(2):223–30.
Briggs D, Abellan JJ, Fecht D. Environmental inequity in England: small area associations between socio-economic status and environmental pollution. Soc Sci Med. 2008;67(10):1612–29.
Schwartz J. Assessing confounding, effect modification, and thresholds in the association between ambient particles and daily deaths. Environ Health Perspect. 2000;108(6):563–8.
Gouveia N, Fletcher T. Time series analysis of air pollution and mortality: effects by cause, age and socioeconomic status. J Epidemiol Commun Health. 2000;54(10):750–5.
Wang J, Zhou Y, Liu S. China Labor-force Dynamics Survey: design and practice. Chinese Sociological Dialogue. 2017;2397200917735796.
Sunyer J, Spix C, Quenel P, Ponce de Leon A, Barumandzadeh T, Touloumi G, et al. Urban air pollution and emergency admissions for asthma in four European cities: the APHEA project. Thorax. 1997;52(9):760–5.
Dockery DW, Pope CA, Xu X, Spengler JD, Ware JH, Fay ME, et al. An association between air pollution and mortality in six U.S. cities. N Engl J Med. 1993;329(24):1753–9.
Burnett RT, Brook J, Dann T, Delocla C, Philips O, Cakmak S, et al. Association between particulate- and gas-phase components of urban air pollution and daily mortality in eight Canadian cities. Inhal Toxicol. 2000;12(4):15–39.
Chan CK, Yao XH. Air pollution in mega cities in China. Atmos Environ. 2008;42(1):1–42.
Dowd JB, Zajacova A. Does self-rated health mean the same thing across socioeconomic groups? Evidence from biomarker data. Annals Epidemiol. 2010;20(10):743–9.
Bak CK, Andersen PT, Dokkedal U. The association between social position and self-rated health in 10 deprived neighbourhoods. BMC Public Health. 2015;15(1):14.
Idler EL, Benyamini Y. Self-rated health and mortality: a review of twenty-seven community studies. J Health Soc Behav. 1997:21–37.
Mossey JM, Shapiro E. Self-rated health: a predictor of mortality among the elderly. Am J Public Health. 1982;72(8):800–8.
DeSalvo KB, Bloser N, Reynolds K, He J, Muntner P. Mortality prediction with a single general self-rated health question. J Gen Intern Med. 2006;21(3):267–75.
Jylhä M. What is self-rated health and why does it predict mortality? Towards a unified conceptual model. Soc Sci Med. 2009;69(3):307–16.
Hill TD, Ross CE, Angel RJ. Neighborhood disorder, psychophysiological distress, and health. J Health Soc Behav. 2005;46(2):170–86.
Lundberg O, Manderbacka K. Assessing reliability of a measure of self-rated health. Scand J Soc Med. 1996;24(3):218–24.
Subramanian SV, Huijts T, Avendano M. Self-reported health assessments in the 2002 world health survey: how do they correlate with education? Bull World Health Organ. 2010;88(2):131–8.
Feng Q, Zhu H, Zhen Z, Gu D. Self-rated health, interviewer-rated health, and their predictive powers on mortality in old age. J Gerontol Series B: Psychol Sci Soc Sci. 2015;71(3):538–50.
Wang Y, Ying Q, Hu J, Zhang H. Spatial and temporal variations of six criteria air pollutants in 31 provincial capital cities in China during 2013-2014. Environ Int. 2014;73:413–22.
Li L, Qian J, Ou C-Q, Zhou Y-X, Guo C, Guo Y. Spatial and temporal analysis of air pollution index and its timescale-dependent relationship with meteorological factors in Guangzhou, China, 2001–2011. Environ Pollut. 2014;190:75–81.
Forastiere F, Galassi C. Self report and GIS based modelling as indicators of air pollution exposure: is there a gold standard? Occup Environ Med. 2005;62(8):508–9.
Wen M, Hawkley LC, Cacioppo JT. Objective and perceived neighborhood environment, individual SES and psychosocial factors, and self-rated health: an analysis of older adults in Cook County, Illinois. Soc Sci Med. 2006;63(10):2575–90.
Bickerstaff K. Risk perception research: socio-cultural perspectives on the public experience of air pollution. Environ Int. 2004;30(6):827–40.
Claeson A-S, Lidén E, Nordin M, Nordin S. The role of perceived pollution and health risk perception in annoyance and health symptoms: a population-based study of odorous air pollution. Int Arch Occup Environ Health. 2013;86(3):367–74.
Stenlund T, Lidén E, Andersson K, Garvill J, Nordin S. Annoyance and health symptoms and their influencing factors: a population-based air pollution intervention study. Public Health. 2009;123(4):339–45.
Deguen S, Ségala C, Pédrono G, Mesbah MA. New air quality perception scale for global assessment of air pollution health effects. Risk Anal. 2012;32(12):2043–54.
Elliott SJ, Cole DC, Krueger P, Voorberg N, Wakefield S. The power of perception: health risk attributed to air pollution in an urban industrial neighbourhood. Risk Anal. 1999;19(4):621–34.
Phillimore P, Moffatt S. Discounted knowledge: local experience, environmental pollution and health. In: Popay J, Williams G, editors. Researching the People's health. London: Routledge; 1994. p. 135–54.
Nazaroff WW. New directions: It's time to put the human receptor into air pollution control policy. Atmos Environ. 2008;42(26):6565–6.
Zhu H, Xie Y. Socioeconomic differentials in mortality among the oldest old in China. Res Aging. 2007;29(2):125–43.
Zimmer Z, Liu X, Hermalin A, Chuang Y-L. Educational attainment and transitions in functional status among older Taiwanese. Demography. 1998;35(3):361–75.
Goldstein H. Multilevel statistical models. 4th ed. Chichester: Wiley; 2011.
Raudenbush SW, Bryk AS. Hierarchical linear models: applications and data analysis methods. Thousand Oaks, CA: Sage; 2002.
Martuzevicius D, Grinshpun SA, Reponen T, Górny RL, Shukla R, Lockey J, et al. Spatial and temporal variations of PM2.5 concentration and composition throughout an urban area with high freeway density - the greater Cincinnati study. Atmos Environ. 2004;38(8):1091–105.
DeGaetano AT, Doherty OM. Temporal, spatial and meteorological variations in hourly PM2.5 concentration extremes in New York City. Atmos Environ. 2004;38(11):1547–58.
Mackenbach JP, Stirbu I, Roskam A-JR, Schaap MM, Menvielle G, Leinsalu M, et al. Socioeconomic inequalities in health in 22 European countries. N Engl J Med. 2008;358(23):2468–81.
Adler NE, Newman K. Socioeconomic disparities in health: pathways and policies. Health Aff. 2002;21(2):60–76.
Johnson BB. Experience with urban air pollution in Paterson, New Jersey and implications for air pollution communication. Risk Anal. 2012;32(1):39–53.
Bonnes M, Uzzell D, Carrus G, Kelay T. Inhabitants' and experts' assessments of environmental quality for urban sustainability. J Soc Issues. 2007;63(1):59–78.
We are grateful for funding from the National Social Sciences Research Funds (No. 14 BRK012), and to the Center for Social Survey at Sun Yat-sen University for providing the data used in this study.
This work is supported by the National Social Sciences Research Funds under Grant No. 14 BRK012 and by the "Outstanding Young Talents" project of Minzu University of China (Grant No. 2017YQ10). The funders gave the authors freedom to design the study, collect the data and perform the analysis, provided that laws and ethical principles were not violated.
Department of Sociology, Minzu University of China, 27 Zhongguancun South Avenue, Beijing, 100081, China
Kaishan Jiao
Department of Economics, Claremont Graduate University, 170 E. 10th Street, Claremont, CA, 91711, USA
Mengjia Xu
Department of Social Work, China Women's University, 1 Yuhui Dong Lu, Chaoyang District, Beijing, 100101, China
Meng Liu
KJ collected the data, and did data analysis and first draft. MX and ML contributed to the manuscript and also revised it. All authors read and approved the final manuscript.
Correspondence to Mengjia Xu.
Jiao, K., Xu, M. & Liu, M. Health status and air pollution related socioeconomic concerns in urban China. Int J Equity Health 17, 18 (2018). https://doi.org/10.1186/s12939-018-0719-y
Precision timing detectors for high energy physics experiments with temporal resolutions of a few 10 ps are of pivotal importance to master the challenges posed by the highest energy particle accelerators. Calorimetric timing measurements have been a focus of recent research, enabled by exploiting the temporal coherence of electromagnetic showers. Scintillating crystals with high light yield as well as silicon sensors are viable sensitive materials for sampling calorimeters. Silicon sensors have very high efficiency for charged particles, but their sensitivity to photons, which carry a large fraction of the energy in an electromagnetic shower, is limited. To enhance the efficiency of detecting photons, materials with higher atomic numbers than silicon are preferable. In this paper we present test beam measurements with a Cadmium-Telluride sensor as the active element of a secondary emission calorimeter, with a focus on the timing performance of the detector. A Schottky type Cadmium-Telluride sensor with an active area of 1 cm$^2$ and a thickness of 1 mm is used in an arrangement with tungsten and lead absorbers. Measurements are performed with electron beams in the energy range from 2 GeV to 200 GeV. A timing resolution of 20 ps is achieved under the best conditions.
Precision timing detectors for high energy physics experiments with temporal resolutions of a few 10 ps are of pivotal importance to master the challenges posed by the highest energy particle accelerators. Calorimetric timing measurements have been a focus of recent research, enabled by exploiting the temporal coherence of electromagnetic showers.
In this paper, we present results of studies of a calorimeter prototype using Cadmium-Telluride (CdTe) sensors as the active material. CdTe has been studied extensively in the context of thin film solar cells and has become a mature and wide-spread technology. It has also been used as a radiation detector for nuclear spectroscopy and is known to have high quantum efficiency for photons in the x-ray range of the spectrum. This feature is of particular interest in the context of calorimetry, because it makes the sensor sensitive to secondary particles in the keV range, a significant component of the electromagnetic shower. Therefore, this first study of electromagnetic showers using CdTe sensors has the potential to yield new insight into the behavior of secondary particles produced within an electromagnetic shower with energies in the keV range, and to improve the energy measurement through the additional contribution of the higher energy x-ray photons to which previous calorimeters were not sensitive. The recent interest in precision timing has resulted in new studies of the timing properties of silicon sensors. These studies have found time resolutions at the 20 ps level, provided a sufficiently large signal size, in a variety of applications ranging from calorimetry to charged particle detectors. The signal formation process in CdTe sensors is very similar to that in silicon and has similar potential to yield precise timestamps. In this article, we study the signal response of the CdTe sensor to electromagnetic showers of varying energies and at different shower depths, and we study the timing performance of the CdTe sensors for electromagnetic showers.
The semi-conducting properties of Cadmium-Telluride have been studied for many decades, in particular in the context of photovoltaic applications. Cadmium-Telluride sensors are widely used in X-ray detectors, and they have also been investigated as synchrotron radiation detectors in accelerator technology. In our previous studies we have demonstrated that increasing the primary sensor signal is crucial to achieve good timing resolutions. Cadmium-Telluride features a significantly larger efficiency for detecting photons in the 10 to 100 keV energy range compared to silicon sensors: the higher atomic numbers of Cadmium and Tellurium, averaging to about 48 for the compound bulk material, result in a higher interaction cross section for photons in this energy range, and photons with such energies are abundant in electromagnetic showers. Furthermore, CdTe sensors are available with thicknesses of 1 mm and more; the path length of the charged shower particles in the sensor material scales accordingly, resulting in a larger primary signal. Our measurements were conducted with a CdTe Schottky type diode purchased from Acrorad. It is 1 cm$^2$ in transverse size and 1 mm thick. It was operated at a bias voltage of 700 V, and the dark current was between 3 nA and 6 nA depending on the environmental conditions in the test beam experimental zones. The sensor was placed in a box made of 0.3 mm copper sheets sealed with copper tape to shield against environmental noise. A broadband amplifier with a bandwidth of 1.5 GHz was used to amplify the signals from the sensor. We performed the measurements at the H2 beamline of the CERN North Area test beam facility and the T9 beamline of the CERN East Area test beam facility, which provide secondary electron beams from the Super Proton Synchrotron (SPS) and the Proton Synchrotron (PS), respectively, with energies ranging from 2 GeV to 200 GeV. The DAQ system uses a CAEN V1742 switched capacitor digitizer based on the DRS4 chip. Wire chambers are used to measure the position of each incident beam particle in the plane transverse to the beamline. A micro-channel plate photomultiplier (MCP-PMT) detector is used to provide a very precise reference timestamp. The precision of the time measurement for both types of MCP-PMTs is less than 10 ps.
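The timing analysis itself is not spelled out here, but the standard approach in such a setup is to look at the event-by-event time difference between the CdTe signal and the MCP-PMT reference and subtract the known reference resolution in quadrature. The following is a minimal sketch of that calculation, not the authors' analysis code; the function name, the use of a plain standard deviation rather than a Gaussian fit, and the 10 ps reference resolution are our assumptions.

```python
import numpy as np

def time_resolution(t_cdte, t_mcp, sigma_ref=0.010):
    """Estimate the sensor time resolution in ns from per-event timestamps.

    t_cdte, t_mcp : arrays of per-event timestamps (ns) for the CdTe sensor
                    and the MCP-PMT reference
    sigma_ref     : assumed MCP-PMT reference resolution (0.010 ns = 10 ps)
    """
    dt = np.asarray(t_cdte) - np.asarray(t_mcp)            # time-difference distribution
    sigma_dt = np.std(dt)                                   # its width (a Gaussian fit is common in practice)
    return np.sqrt(max(sigma_dt**2 - sigma_ref**2, 0.0))    # subtract reference contribution in quadrature
```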
Our initial results are encouraging and motivate future work on more detailed comparisons with simulation and more detailed measurements of transverse and longitudinal shower profiles. We have measured the rise time for signals in the Schottky type CdTe sensor diode to be about 1.3 ns, which makes them suitable as devices for precision timing applications. The large ionization signal yield we achieve with a 1 mm thick sensor is equally favorable for precision timing applications. We observe dependencies of the measured time on the geometric position of the beam particle impact point on the sensor, which may indicate differences in the charge collection dynamics. More detailed studies of this aspect are needed, and a more optimal design of the connection of the sensor readout is envisioned. Correcting for these dependencies yields time resolutions of 25 ps for a single layer CdTe sensor of transverse area 1 cm $\times$ 1 cm, uniformly sampled by the electromagnetic shower of electrons with energy above 100 GeV after 6 radiation lengths of tungsten and lead absorber. In the most favorable region of the sensor we observe time resolutions as low as 20 ps. These initial results motivate further in-depth studies in the future.
The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of internal validity versus external validity: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity) or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy? Of course not, and as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after inebriating), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously.
On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people.
For the sake of organizing the review, we have divided the literature according to the general type of cognitive process being studied, with sections devoted to learning and to various kinds of executive function. Executive function is a broad and, some might say, vague concept that encompasses the processes by which individual perceptual, motoric, and mnemonic abilities are coordinated to enable appropriate, flexible task performance, especially in the face of distracting stimuli or alternative competing responses. Two major aspects of executive function are working memory and cognitive control, responsible for the maintenance of information in a short-term active state for guiding task performance and responsible for inhibition of irrelevant information or responses, respectively. A large enough literature exists on the effects of stimulants on these two executive abilities that separate sections are devoted to each. In addition, a final section includes studies of miscellaneous executive abilities including planning, fluency, and reasoning that have also been the subjects of published studies.
Finally, all of the questions raised here in relation to MPH and d-AMP can also be asked about newer drugs and even about nonpharmacological methods of cognitive enhancement. An example of a newer drug with cognitive-enhancing potential is modafinil. Originally marketed as a therapy for narcolepsy, it is widely used off label for other purposes (Vastag, 2004), and a limited literature on its cognitive effects suggests some promise as a cognitive enhancer for normal healthy people (see Minzenberg & Carter, 2008, for a review).
Modafinil is a prescription smart drug most commonly given to narcolepsy patients, as it promotes wakefulness. In addition, users report that this smart pill helps them concentrate and boosts their motivation. With Modafinil, the feeling of fatigue is reduced, and people report that their everyday functioning improves because they can manage their time and resources better, making it easier to reach their goals.
I took 1.5mg of melatonin, and went to bed at ~1:30AM; I woke up around 6:30, took a modafinil pill/200mg, and felt pretty reasonable. By noon my mind started to feel a bit fuzzy, and lunch didn't make much of it go away. I've been looking at studies, and users seem to degrade after 30 hours; I started on mid-Thursday, so call that 10 hours, then 24 (Friday), 24 (Saturday), and 14 (Sunday), totaling 72hrs with <20hrs sleep; this might be equivalent to 52hrs with no sleep, and Wikipedia writes:
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.
(People aged <=18 shouldn't be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers' sleep. Changes in effects with age are real - amphetamines' stimulant effects and modafinil's histamine-like side-effects come to mind as examples.)
As expected since most of the data overlaps with the previous LLLT analysis, the LLLT variable correlates strongly; the individual magnesium variables may look a little more questionable but were justified in the magnesium citrate analysis. The Noopept result looks a little surprising - almost zero effect? Let's split by dose (which was the point of the whole rigmarole of changing dose levels):
Capsule Connection sells 1000 00 pills (the largest pills) for $9. I already have a pill machine, so that doesn't count (a sunk cost). If we sum the grams per day column from the first table, we get 9.75 grams a day. Each 00 pill can take around 0.75 grams, so we need 13 pills. (Creatine is very bulky, alas.) 13 pills per day for 1000 days is 13,000 pills, and 1,000 pills is $9 so we need 13 units and 13 times 9 is $117.
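A quick back-of-the-envelope check of that arithmetic, using only the figures quoted above (a sketch, not anything load-bearing):

```python
import math

grams_per_day = 9.75        # total from the ingredient table
grams_per_pill = 0.75       # rough capacity of a size-00 capsule
pills_per_day = math.ceil(grams_per_day / grams_per_pill)   # 13
pills_total = pills_per_day * 1000                           # 13,000 pills for 1000 days
units = math.ceil(pills_total / 1000)                        # 13 bags of 1000 capsules
cost = units * 9                                             # $117
print(pills_per_day, pills_total, cost)
```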
The idea of a digital pill that records when it has been consumed is a sound one, but as the FDA notes, there is no evidence that it actually increases the likelihood that patients with a history of inconsistent consumption will follow their prescribed course of treatment. There is also a very strange irony in schizophrenia being the first condition this technology is being used to target.
These pills don't work. The reality is that MOST of these products don't work effectively. Maybe we're cynical, but if you simply review the published studies on memory pills, you can quickly eliminate many of the products that don't have "the right stuff." The active ingredients in brain and memory health pills are expensive and most companies sell a watered down version that is not effective for memory and focus. The more brands we reviewed, the more we realized that many of these marketers are slapping slick labels on low-grade ingredients.
Next, if these theorized safe and effective pills don't just get you through a test or the day's daily brain task but also make you smarter, whatever smarter means, then what? Where's the boundary between genius and madness? If Einstein had taken such drugs, would he have created a better theory of gravity? Or would he have become delusional, chasing quantum ghosts with no practical application, or worse yet, string theory. (Please use "string theory" in your subject line for easy sorting of hate mail.)
It is known that American college students have embraced cognitive enhancement, and some information exists about the demographics of the students most likely to practice cognitive enhancement with prescription stimulants. Outside of this narrow segment of the population, very little is known. What happens when students graduate and enter the world of work? Do they continue using prescription stimulants for cognitive enhancement in their first jobs and beyond? How might the answer to this question depend on occupation? For those who stay on campus to pursue graduate or professional education, what happens to patterns of use? To what extent do college graduates who did not use stimulants as students begin to use them for cognitive enhancement later in their careers? To what extent do workers without college degrees use stimulants to enhance job performance? How do the answers to these questions differ for countries outside of North America, where the studies of Table 1 were carried out?
That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩
One of the most common strategies to beat this is cycling. Users who cycle their nootropics take them for a predetermined period, (usually around five days) before taking a two-day break from using them. Once the two days are up, they resume the cycle. By taking a break, nootropic users reduce the tolerance for nootropics and lessen the risk of regression and tolerance symptoms.
Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage.
The nonmedical use of substances—often dubbed smart drugs—to increase memory or concentration is known as pharmacological cognitive enhancement (PCE), and it rose in all 15 nations included in the survey. The study looked at prescription medications such as Adderall and Ritalin—prescribed medically to treat attention deficit hyperactivity disorder (ADHD)—as well as the sleep-disorder medication modafinil and illegal stimulants such as cocaine.
Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human.
As with other nootropics, the way it works is still partially a mystery, but most research points to it acting as a weak dopamine reuptake inhibitor. Put simply, it increases your dopamine levels the same way cocaine does, but in a much less extreme fashion. The enhanced reward system it creates in the brain, however, makes it what Patel considers to be the most potent cognitive enhancer available; and he notes that some people go from sloth to superman within an hour or two of taking it.
Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent, in contrast to another class of proposed enhancers, the glutamate activators (ampakines). Originally developed as a treatment for narcolepsy and other sleep disorders, physicians are now prescribing it "off-label" to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S.
Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.)
Kennedy et al. (1990) administered what they termed a grammatical reasoning task to subjects, in which a sentence describing the order of two letters, A and B, is presented along with the letter pair, and subjects must determine whether or not the sentence correctly describes the letter pair. They found no effect of d-AMP on performance of this task.
Racetams are often used as a smart drug by finance workers, students, and individuals in high-pressure jobs as a way to help them get into a mental flow state and work for long periods of time. Additionally, the habits and skills that an individual acquires while using a racetam can still be accessed when someone is not taking racetams because it becomes a habit.
More photos from this reportage are featured in Quartz's new book The Objects that Power the Global Economy. You may not have seen these objects before, but they've already changed the way you live. Each chapter examines an object that is driving radical change in the global economy. This is from the chapter on the drug modafinil, which explores modifying the mind for a more productive life.
There are also premade 'stacks' (or formulas) of cognitive enhancing superfoods, herbals or proteins, which pre-package several beneficial extracts for a greater impact. These types of cognitive enhancers are more 'subtle' than the pharmaceutical alternative with regards to effects, but they work all the same. In fact, for many people, they work better than smart drugs as they are gentler on the brain and produce fewer side-effects.
Power-wise, the effects of testosterone are generally reported to be strong and unmistakable. Even a short experiment should work. I would want to measure DNB scores & Mnemosyne review averages as usual, to verify no gross mental deficits; the important measures would be physical activity, so either pedometer or miles on treadmill, and general productivity/mood. The former 2 variables should remain the same or increase, and the latter 2 should increase.
One of the other suggested benefits is for boosting serotonin levels; low levels of serotonin are implicated in a number of issues like depression. I'm not yet sure whether tryptophan has helped with motivation or happiness. Trial and error has taught me that it's a bad idea to take tryptophan in the morning or afternoon, however, even smaller quantities like 0.25g. Like melatonin, the dose-response curve is a U: ~1g is great and induces multiple vivid dreams for me, but ~1.5g leads to an awful night and a headache the next day that was worse, if anything, than melatonin. (One morning I woke up with traces of at least 7 dreams, although I managed to write down only 2. No lucid dreams, though.)
Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <820 ($); even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 \times 48 \times 7.25 = 70), it's still a clear profit to run a convincing experiment.
Another moral concern is that these drugs — especially when used by Ivy League students or anyone in an already privileged position — may widen the gap between those who are advantaged and those who are not. But others have inverted the argument, saying these drugs can help those who are disadvantaged to reduce the gap. In an interview with the New York Times, Dr. Michael Anderson explains that he uses ADHD (a diagnosis he calls "made up") as an excuse to prescribe Adderall to the children who really need it — children from impoverished backgrounds suffering from poor academic performance.
I can only talk from experience here, but I can remember being a teenager and just being a straight-up dick to any recruiters that came to my school. And I came from a military family. I'd ask douche-bag questions, I'd crack jokes like so... don't ask, don't tell only applies to everyone BUT the Navy, right? I never once considered enlisting because some 18 or 19 year old dickhead on hometown recruiting was hanging out in the cafeteria or hallways of my high school. Weirdly enough, however, what kinda put me over the line and made me enlist was the location of the recruiters' office. In the city I was living in at the time, the Armed Forces Recruitment Center was next door to an all-ages punk venue that I went to nearly every weekend. I spent many Saturday nights standing in a parking lot after a show, all bruised and bloody from a pit, smoking a joint, and staring at the windows of the closed recruiters' office. Propaganda posters of guys in full-battle-rattle obscured by a freshly scrawled Anarchy symbol or a collage of band stickers over the glass. I think trying to recruit kids from school has a child-molester-vibe to it. At least it did for me. But the recruiters definitely being right next to a bunch of drunk and high punks, that somehow made it seem more like a truly bad-ass option. Like, sure, I'll totally join. After all, these guys don't run from the horde of skins and pins that descend every weekend like everyone else, they must be bad-ass.
The infinite promise of stacking is why, whatever weight you attribute to the evidence of their efficacy, nootropics will never go away: With millions of potential iterations of brain-enhancing regimens out there, there is always the tantalizing possibility that seekers haven't found the elusive optimal combination of pills and powders for them—yet. Each "failure" is but another step in the process-of-elimination journey to biological self-actualization, which may be just a few hundred dollars and a few more weeks of amateur alchemy away.
There is a similar substance which can be purchased legally almost anywhere in the world called adrafinil. This is a prodrug for modafinil. You can take it, and then the body will metabolize it into modafinil, providing similar beneficial effects. Unfortunately, it takes longer for adrafinil to kick in—about an hour—rather than a matter of minutes. In addition, there are more potential side-effects to taking the prodrug as compared to the actual drug.
In 2011, as part of the Silk Road research, I ordered 10x100mg Modalert (5btc) from a seller. I also asked him about his sourcing, since if it was bad, it'd be valuable to me to know whether it was sourced from one of the vendors listed in my table. He replied, more or less, I get them from a large Far Eastern pharmaceuticals wholesaler. I think they're probably the supplier for a number of the online pharmacies. 100mg seems likely to be too low, so I treated this shipment as 5 doses:
Methylphenidate – a benzylpiperidine that had cognitive effects (e.g., working memory, episodic memory, and inhibitory control, aspects of attention, and planning latency) in healthy people.[21][22][23] It also may improve task saliency and performance on tedious tasks.[25] At above optimal doses, methylphenidate had off–target effects that decreased learning.[26]
Kirchhoff's Law
Problem (IIT JEE 2006): In a dark room with ambient temperature $T_0$, a black body is kept at temperature $T.$ Keeping the temperature of the black body constant (at $T$), sun rays are allowed to fall on the black body through a hole in the roof of the dark room. Assuming that there is no change in the ambient temperature of the room, which of the following statement(s) is (are) correct?
(A) The quantity of radiation absorbed by the black body in unit time will increase.
(B) Since emissivity $=$ absorptivity, hence the quantity of radiation emitted by black body in unit time will increase.
(C) Black body radiates more energy in unit time in the visible spectrum.
(D) The reflected energy in unit time by the black body remains same.
Solution: When sun rays fall on a black body, a portion of incident photons are absorbed and the remaining are reflected. So, the net quantity of radiations absorbed by the black body in unit time increases. At the same time, since emissivity and absorptivity are equal, the quantity of radiations emitted by the black body in unit time also increases. The wavelength $\lambda_\text{max}$ at which maximum radiation takes place depends on black body temperature $T$ and is given by Wien's displacement law $\lambda_\text{max}T=b$. Thus, A and B are correct.
Problem (IIT JEE 2003): The temperature ($T$) versus time ($t$) graphs of two bodies X and Y with equal surface areas are shown in the figure. If the emissivity and the absorptivity of X and Y are $E_x$, $E_y$ and $a_x$, $a_y$, respectively, then,
(A) $E_x > E_y$ and $a_x < a_y$
(B) $E_x < E_y$ and $a_x > a_y$
(C) $E_x > E_y$ and $a_x > a_y$
(D) $E_x < E_y$ and $a_x < a_y$
Solution: The rate of cooling for $x$ is greater than that of $y$. The cooling occurs due to heat loss by radiation. By Stefan's law, the rate of heat loss is, \begin{align} \mathrm{d}Q/\mathrm{d}t=\sigma EA(T^4-T_0^4),\nonumber \end{align} where $E$ is emissivity. Hence, $E_x > E_y$. Since good emitters are good absorbers, the absorptivity $a_x > a_y$.
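As a short supporting derivation (assuming, as the comparison of slopes implicitly requires, that both bodies have the same mass $m$, specific heat $c$, and surface area $A$), the cooling rate follows from Stefan's law as \begin{align} mc\,\frac{\mathrm{d}T}{\mathrm{d}t}=-\sigma E A\left(T^{4}-T_{0}^{4}\right) \quad\Longrightarrow\quad \left|\frac{\mathrm{d}T}{\mathrm{d}t}\right|=\frac{\sigma E A}{mc}\left(T^{4}-T_{0}^{4}\right)\propto E.\nonumber \end{align} At any common temperature, the steeper $T$-$t$ curve therefore belongs to the body with the larger emissivity, which gives $E_x > E_y$; Kirchhoff's law (a good emitter is a good absorber) then gives $a_x > a_y$.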
Maman Abdurohman*, Yadi Supriadi**, and Fitra Zul Fahmi*
A Modified E-LEACH Routing Protocol for Improving the Lifetime of a Wireless Sensor Network
Abstract: This paper proposes a modified end-to-end secure low energy adaptive clustering hierarchy (ME-LEACH) algorithm for enhancing the lifetime of a wireless sensor network (WSN). Energy limitations are a major constraint in WSNs, hence every activity in a WSN must efficiently utilize energy. Several protocols have been introduced to modulate the way a WSN sends and receives information. The end-to-end secure low energy adaptive clustering hierarchy (E-LEACH) protocol is a hierarchical routing protocol algorithm proposed to solve high-energy dissipation problems. Other methods that explore the presence of the most powerful nodes on each cluster as cluster heads (CHs) are the sparsity-aware energy efficient clustering (SEEC) protocol and an energy efficient clustering-based routing protocol that uses an enhanced cluster formation technique accompanied by the fuzzy logic (EERRCUF) method. However, each CH in the E-LEACH method sends data directly to the base station causing high energy consumption. SEEC uses a lot of energy to identify the most powerful sensor nodes, while EERRCUF spends high amounts of energy to determine the super cluster head (SCH). In the proposed method, a CH will search for the nearest CH and use it as the next hop. The formation of CH chains serves as a path to the base station. Experiments were conducted to determine the performance of the ME-LEACH algorithm. The results show that ME-LEACH has a more stable and higher throughput than SEEC and EERRCUF and has a 35.2% better network lifetime than the E-LEACH algorithm.
Keywords: E-LEACH , EERRCUF , ME-LEACH , SEEC , Wireless Sensor Network
A wireless sensor network (WSN) connects autonomous systems to sensors that sense data, such as temperature, humidity, and light [1]. In a WSN environment, data are sent from a sensor node to a base station (BS), and vice versa. Sensor nodes transmit data directly to a BS, consuming large quantities of energy and rapidly depleting the energy supply of the node. Hierarchical routing protocols have been proposed to solve the problem of energy dissipation during the transmission and reception of data. Nodes form clusters with one cluster head (CH) that has higher residual energy than other nodes. A CH is responsible for transmitting data to another CH or directly to the BS. The clustering system has the potential to increase the lifetime of the network. In the past, the methods commonly used in WSNs were enhanced simplified hybrid, energy-efficient, distributed clustering (EDsHEED) [2], low energy adaptive clustering hierarchy (LEACH) [3], distance and energy aware LEACH (DE-LEACH) [4], and energy aware LEACH (E-LEACH) [5].
The EDsHEED method [2] implements a CH for every cluster to minimize power consumption. The LEACH method [3] implements a protocol that utilizes randomized rotations of a local CH. By selecting CHs through a rotation, network disconnects caused by dead nodes are avoided. The weakness of this method lies in the random rotation it uses, which disregards the energy supply of a node in electing the CH [3]. It is possible to elect a node with low residual energy as a CH. As a result, the CH will not survive to perform its assigned role. The DE-LEACH [4] method uses the average distance from the base station (BS) as the basis for choosing threshold values. The DE-LEACH method divides the entire sensing region into two parts. The first region is one with a boundary of less than the average distance from the BS, and the second region is one whose distance is greater than the average distance from the BS. The first region will use distance as a parameter to elect CHs, and the second region uses a scheme based on the nodes' residual and initial energy values.
The E-LEACH [5] method is another method that improves the LEACH algorithm in the CH election phase. The process of electing a CH assesses the energy residual of a node. E-LEACH uses two approaches to elect a CH. When the residual energy is more than 50%, the node will choose the same value as a threshold that is used in LEACH. When the residual energy drops to below 50%, the node will use another value as a threshold.
An E-LEACH [6] election of a CH is based on the node's residual energy and its average energy use. However, in the E-LEACH operation, all selected CHs will directly transmit data to the BS. This activity consumes a great deal of energy [7].
Another approach has been proposed based on fuzzy logic using the K-means algorithm in combination with the midpoint method [8]. In addition to residual energy, the Euclidean distance can also be used as the basis for a CH consideration. In these cases, the packet transmission is performed from the CHs to the BS depending on the distance between them. There are two other methods already proposed, namely, the sparsity-aware energy efficient clustering (SEEC) [9] and the enhanced cluster formation technique accompanied by the fuzzy logic (EERRCUF) [10]. The SEEC method selects the strongest node in every cluster, while the EERRCUF method implements a super cluster head (SCH) as the middle hop to the BS.
The energy depletion caused by sending data from a CH to the BS affects the entire WSN operation. Since the energy used to send data from a CH to the BS is substantial, the node elected as a CH will run out of energy. If a node is frequently elected as a CH, it will die more rapidly than other nodes. As a result, the lifetime of the WSN will decrease.
The main research contribution of this paper is improving the performance of E-LEACH and other methods by changing the data transmission mechanism from a CH to the BS. Instead of directly sending data to the BS, a CH will find the nearest neighboring CH to use as the next hop on the path to the BS. This paper consists of five sections: Section 1 outlines the background and the research problem. Section 2 discusses related works. The ME-LEACH algorithm is presented in Section 3. Section 4 provides an analysis of the experiments and the results. A conclusion is provided in Section 5.
2. Related Works
Many routing protocols have been developed to accommodate the WSN characteristics, in which energy is a crucial issue, such as hierarchical, flat, and location-based types [11]. Other types of routing protocols are data-centric routing, such as SPIN [12] and directed diffusion [13]. There are also routing protocols for low power and lossy networks [14]. The nodes are aware of their locations through the use of signal strengths to predict the distance between nodes [15,16]. Other methods use a satellite with a GPS receiver. Geographical and energy aware routing (GEAR) [17] includes an algorithm that uses energy aware neighbor selection for routing packets to a destination region. Cluster-based routing was designed for scalability and efficient communication. The creation of clusters and the selection of CHs can therefore increase the lifetime of a network [11].
Hierarchical routing is implemented to decrease the energy consumption within a cluster. Hierarchical routing also performs fusion and data aggregation. By lowering the energy used for data transmission, the total energy consumption in a WSN will decrease. Hierarchical routing usually uses two routing procedures. The first procedure is used to select a CH and then a member node, while the second procedure sends and aggregates data to the BS.
Since the energy capacity on a sender node is relatively limited [18], saving energy is an important factor in determining the lifetime of the sensor network. Many methods have been proposed to enhance the lifetime of a network. The first is the LEACH [3] method, in which sensor nodes transmit data to a CH instead of sending it directly to the BS. The LEACH method became a foundational method among hierarchical routing protocols, and many improvements to LEACH have been proposed. The LEACH-C protocol is a LEACH improvement that utilizes location and residual energy parameters [3]. LEACH-H [19] was introduced to solve the problem of low degrees in load balancing. The TL-LEACH protocol [20] is another variant used to solve the distance problem between CHs and BSs. CH data collection and fusion also use the LEACH protocol, but a CH will use another CH nearer to the BS. Power-efficient gathering in sensor information systems (PEGASIS) [7] is another protocol in which nodes connect only with their closest neighbor. Sensing node systems create transmission chains using a greedy algorithm to ensure that all nodes have paths to the base station. Another variant of LEACH is V-LEACH [21], which is highly energy efficient.
In another scheme, i.e., SEEC [9], the network is divided into several blocks, each of which has a strong node. Energy efficiency is achieved depending on the placement of a number of strong nodes at each level of heterogeneity. Another approach is to use the EERRCUF scheme [10]. Grouping of nodes is performed using the particle swarm optimization (PSO) method. In the system, an SCH is designated as the main CH. The SCH connects the CHs to the BS. Another advanced concept is the E-LEACH scheme, which uses the residual energy in a node as a parameter to elect a CH. The node with the highest energy is the selected CH. The other nodes receive advertisements from the selected CH to become cluster members. The transmission of node data uses a time division multiple access (TDMA) scheduling system.
There are many steps in E-LEACH, beginning with the set-up phase. In this step, all nodes compute a random number (between 0 and 1). This number is later compared to a threshold value. In the next step, the node calculates the percentage of the remaining energy against its initial energy to decide what threshold value to use. In the first round, the remaining energy is the same as the initial energy; hence, the value is 1. Otherwise, the node calculates the threshold value T as in [3]
$$T(n)=\left\{\begin{array}{cl} \dfrac{P}{1-P\left(r \bmod \frac{1}{P}\right)} & \text{if } n \in G \\ 0 & \text{otherwise} \end{array}\right. \tag{1}$$
where r is the current round number, P is the desired probability of becoming a CH, n is a node, and G is the set of nodes that have not yet become CHs. E-LEACH uses a new CH election scheme to minimize the chance of a low-energy node being selected as a CH: when the remaining energy is more than 50% of the initial energy, Eq. (1) is used; when the ratio of the remaining energy to the initial energy drops below 50%, the following equation is applied, as in [3]
$$T(n)=2p\,\frac{E_{\text{residual}}}{E_{\text{initial}}} \tag{2}$$
where T(n) is the threshold value, n is a node, p is the probability of becoming a CH, Eresidual is the remaining energy of a node, and Einitial is the initial energy of a node. When T(n) is larger than a random number (between 0 and 1), the node will become a CH.
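As an illustration of how Eqs. (1) and (2) combine in the election step, the following Python sketch implements the rule described above; the node parameters, the 50% switch-over value, and the helper names are illustrative assumptions, not the authors' simulator code.

```python
import random

def leach_threshold(p, r):
    # Eq. (1): standard LEACH threshold for a node in G (i.e., a node that
    # has not yet served as a CH in the current cycle of rounds).
    return p / (1 - p * (r % round(1 / p)))

def eleach_threshold(p, r, e_residual, e_initial):
    # E-LEACH rule: use Eq. (1) while residual energy is at least 50% of the
    # initial energy; otherwise switch to Eq. (2) = 2p * E_residual/E_initial.
    if e_residual / e_initial >= 0.5:
        return leach_threshold(p, r)
    return 2 * p * (e_residual / e_initial)

def elects_itself_as_ch(p, r, e_residual, e_initial):
    # A node declares itself a CH when its uniform random number falls below T(n).
    return random.random() < eleach_threshold(p, r, e_residual, e_initial)

# Illustrative values: 10% desired CHs, round 3, node at 40% residual energy.
print(elects_itself_as_ch(p=0.1, r=3, e_residual=0.8, e_initial=2.0))
```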
Cluster formation begins when a CH broadcasts an advertising message to a neighboring node as an invitation to join its cluster. The advertising message consists of the location of the CH. When a candidate member node receives messages from the CHs, it must decide which CH to choose. From the many advertisements received, the node starts to compare distances and subsequently joins the closest CH. The node answers the advertisement with a join-request message, and thereafter, one cluster is formed. After clusters have been formed, each CH sends a time schedule using a TDMA scheme. All member nodes can transmit data based on this time schedule.
3. The New ME-LEACH Method
Previous research on E-LEACH has introduced a method for selecting a CH to conserve energy. E-LEACH uses residual energy as a parameter when selecting the next CH. Simulations show that E-LEACH has an improved network lifetime compared to LEACH.
Fig. 1 shows that the topology of E-LEACH consists of a BS and many clusters, with a CH for every cluster. In E-LEACH, the algorithm is executed as follows: a new CH is selected by assessing the remaining energy of every node. A node with remaining energy higher than a certain value is more likely to become a CH. In the cluster formation phase, a node will join a CH that has a better cost value and hence the shortest distance to the CH. This CH will conduct a poll among all the member nodes and send data to the BS. The receiving as well as the transmitting of data consume much of the power of the CH. Every CH will transmit member node data to the BS.
Fig. 1. E-LEACH topology.
Fig. 2. Flow chart of the proposed ME-LEACH algorithm.
In the proposed method, a CH forwards its data to the nearest neighboring CH that has a shorter distance to the BS, as described in Fig. 2. The new main phases of the proposed method operate as follows: ME-LEACH uses a nearly optimal chain-based protocol between the CHs and the BS in order to conserve energy. Each CH connects with the closest neighboring CH, and the CHs take turns transmitting to the BS, reducing the energy consumption per round. There are four required phases to implement the proposed method:
Select the nearest CH: This process takes place during the cluster formation phase when all the CHs exchange advertisement messages. A CH will keep all the advertisement messages it receives from CHs that contain the location of other CHs. The CH calculates the distance between it and other CHs and then sorts through the neighboring CHs based on distance. The nearest CH will become the candidate for the next hop.
Compare distances to the BS: The nearest neighboring CH does not automatically become the next hop; a CH compares the neighboring CH's distance to the BS with its own distance to the BS. If the neighboring CH is closer to the BS than the CH itself, the neighboring CH becomes the next hop. If the neighboring CH is farther from the BS, the CH considers the next-nearest CH as the candidate. The process is repeated until the CH finds the best route to the BS (a code sketch of this selection rule follows the list below).
Chain table formation: When Step 2 is complete, all CHs will have a chain table that consists of the next hop, and the member CH will use it to send data to the BS.
Send data to the BS: This process occurs in the data transmission phase when the member node has finished sending data to the CH. Then, the CH performs the task of forwarding the data to the BS instead of directly sending the data to the BS, and the CH will use the route in the chain table to send the data. The CHs may directly send the data to the BS or use another CH as the next hop. The route the CHs choose is the best route to conserve energy.
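The next-hop selection in steps 1-3 can be sketched as follows; the coordinates, node names, and helper functions are hypothetical and only illustrate the distance comparisons, not the authors' implementation. Because a next hop is accepted only if it is strictly closer to the BS, the resulting chain table cannot contain loops.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_chain_table(ch_positions, bs_position):
    # For each CH, pick as next hop the nearest CH that lies closer to the BS
    # than the CH itself; otherwise the CH transmits directly to the BS (None).
    chain_table = {}
    for ch, pos in ch_positions.items():
        # Step 1: sort the other CHs by their distance to this CH.
        neighbours = sorted(
            (other for other in ch_positions if other != ch),
            key=lambda other: dist(pos, ch_positions[other]),
        )
        next_hop = None  # default: send directly to the BS
        for cand in neighbours:
            # Step 2: the candidate must be closer to the BS than this CH.
            if dist(ch_positions[cand], bs_position) < dist(pos, bs_position):
                next_hop = cand
                break
        chain_table[ch] = next_hop  # Step 3: record the chain table entry
    return chain_table

# Hypothetical layout (coordinates in metres).
chs = {"CH1": (10, 10), "CH2": (40, 45), "CH3": (70, 80)}
print(build_chain_table(chs, bs_position=(100, 100)))
# e.g. {'CH1': 'CH2', 'CH2': 'CH3', 'CH3': None}
```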
The idea is to modify the way a CH sends data: instead of transmitting data directly to the BS, a CH transmits data to the nearest CH if that CH is closer to the BS. The proposed topology is shown in Fig. 3.
Fig. 3. Modified E-LEACH (ME-LEACH) topology.
CH chain formation is performed at the same time as cluster formation. When the CHs broadcast advertisements about their existence, the advertisements are received not only by the member nodes but also by other CHs. There are advertisement exchanges between CHs. When CHs receive multiple advertisements from other CHs, they begin to calculate and compare their distances to the base station.
The CHs will choose another CH to become the next hop if the distance to that CH is shorter than the distance to the BS. After finding a CH candidate to become the next hop, the CHs send a join request to the preferred CH. The CHs then begin to fill the chain table with the next-hop CH and the source CH. All the CHs will complete the chain table and ensure there are no loops in the chain.
Fig. 2 shows the proposed ME-LEACH method. The process with the white background is based on E-LEACH, and the process with the blue background is the proposed method, ME-LEACH.
The following is the ME-LEACH pseudocode method. The difference between the proposed method and the previous method lies in the //Proposed Scheme section. In this subprocess, the next hop is chosen from one CH to the nearest CH. This is a new distinct method compared to the previous method.
ME-LEACH Algorithm (pseudocode)
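The pseudocode figure itself did not survive extraction, so as a stand-in, the sketch below illustrates only the data-transmission phase along the chain table built above, using a generic first-order radio model; the energy constants, packet size, and distances are assumed values chosen for illustration, not the parameters of the paper's simulation.

```python
# Assumed first-order radio model constants (typical textbook values).
E_ELEC = 50e-9      # J/bit spent in transmitter/receiver electronics
EPS_AMP = 100e-12   # J/bit/m^2 spent in the transmit amplifier (free-space term)

def tx_energy(bits, d):
    return E_ELEC * bits + EPS_AMP * bits * d ** 2

def rx_energy(bits):
    return E_ELEC * bits

def send_along_chain(src_ch, chain_table, hop_distance, bits=4000):
    # Forward one aggregated packet hop by hop until the BS is reached.
    # chain_table[ch] is the next-hop CH (None means transmit to the BS) and
    # hop_distance[ch] is the length of that CH's outgoing link in metres.
    total = 0.0
    ch = src_ch
    while ch is not None:
        total += tx_energy(bits, hop_distance[ch])
        nxt = chain_table[ch]
        if nxt is not None:
            total += rx_energy(bits)  # the relaying CH must also receive
        ch = nxt
    return total

# One 140 m hop straight to the BS versus two 70 m hops through a relay CH.
direct = send_along_chain("CH1", {"CH1": None}, {"CH1": 140})
chained = send_along_chain("CH1", {"CH1": "CH2", "CH2": None},
                           {"CH1": 70, "CH2": 70})
print(direct, chained)  # the chained path spends far less amplifier energy
```

Because the amplifier term grows quadratically with distance, replacing one long hop with several short hops can reduce the per-round energy, which is the intuition behind the chain formation.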
4. Performance Analysis
4.1 Simulation Scenarios
Scenario 1 consists of several phases. In the set-up phase, all the nodes are placed randomly in the monitored area of 100 m2. Each node knows its location, its initial energy, and the position of the base station. After all the sensor nodes are set, they are ready for the first step of the simulation. First, the sensor nodes check their residual energy and compare it to the initial energy; the ratio is 1 if the residual energy is the same as the initial energy. After many rounds, the energy is depleted, and when the ratio of the residual to the initial energy is under 50%, the sensor nodes will use Eq. (2) as their threshold value; otherwise, a sensor node uses Eq. (1) as its threshold. The threshold value is then calculated.
The next step is CH election. The nodes compare the threshold numbers to the generated random numbers (between 0 and 1). The nodes decide, autonomously, to become CHs based on the comparison between the threshold value and a random number. If the random number is lower than the threshold, then the corresponding sensor node will declare itself as a CH.
After the CHs are selected, the next step is the cluster set-up. Each CH broadcasts an advertising message announcing that it is a CH to its neighbors. The broadcast message contains its location. In this step, all sensors activate their radios to listen for the broadcast messages. After a sensor node receives the broadcast messages, which usually come from many CHs, it performs a calculation to determine which CH is the nearest, since distance is a major factor in energy dissipation.
Joining the nearest cluster conserves energy. When the sensor node has selected the CH with the shortest distance, it sends a join-request message to the corresponding CH and joins the cluster. In this step, a cluster is set up. Every cluster has one CH and some member nodes, although member nodes are optional: there are cases in which a CH does not have any members, for example, when there are no closer nodes or when all the nodes have become CHs.
Schedule creation occurs when a join-request message is received by a CH. The CH will send a TDMA schedule to a member node as a response. The schedule will tell the member node when to wake the transceiver to send data and when to sleep. To conserve energy, the member node will be in the sleep state and will wake only when it is time to send data. A schedule also tells the member node when the round ends and when to start a new round.
Data transmission is also known as the operation phase. When a cluster is set up, every sensor node knows whether it is a CH or a member node. The member node sends data to its CH according to the schedule, and the CH forwards the aggregated information to the BS. The first round is then finished, and the next round starts over with the process of CH selection.
Scenario 2 runs in reference to the Modified E-LEACH, with the following steps. The set-up and CH selection in Experiment 2 have the same processes as those in Experiment 1. Modifications are made in the cluster set-up phase and in the schedule creation.
In the cluster set-up phase, the CH also listens to an advertised message broadcast sent from another CH. The objective is to conserve energy. This is the same process that happens when a CH sends a broadcast. In the previous method, the advertisement is only received by a non-CH, while in the proposed method, the broadcast is also received by another CH. Thus, the CH receives the location of the advertisement and then calculates whether it is better to send data directly to the BS or to send, based on its distance, to the nearest neighboring CH.
After the CHs exchange advertisement messages and make decisions on how they will send information to the BS, the chain formation phase can be implemented to decide the way the CHs will send the information. A CH can send data directly to the BS or to the nearest CH based on distance. To avoid looping, the CH should not send data to another CH already in the chain.
In the schedule creation phase, it is not only the member node that will activate its receiver but also another CH, which will make the previous cluster its next hop. The CH will send its data to other CHs when it finishes collecting the data from the member nodes.
In this experiment, we used three parameters to measure the performance of the system, namely, the energy used, the throughput, and the network lifetime. The energy used is the total energy used by all the nodes, the throughput is the amount of data passed over the network per time unit, and the network lifetime is the time elapsed until no live node remains in the network.
4.2 Simulation Results and Analysis
There are many tasks that consume the energy of sensor nodes, as shown in Experiments 1 and 2. Transmitting and receiving data are the main tasks that consume great quantities of energy, along with the control-signal advertisements and the join-request tasks, for both the previous and the proposed methods. In this experiment, we use 100 sensor nodes placed in a $1,000 \times 1,000~\mathrm{m}^{2}$ area, as shown in Fig. 4. The distribution of the nodes could be application dependent or self-organizing. The parameters are grouped based on the methods to be compared. The throughput and the times until the first node dies, half the nodes die, and all the nodes die are used to compare the SEEC, EERRCUF, and ME-LEACH methods. Meanwhile, the total network lifetime is used in the comparison of the E-LEACH and ME-LEACH methods.
Fig. 4. Location of nodes.
Based on the experiment, it was found that the power consumption of the ME-LEACH method compares favorably with that of the EERRCUF and SEEC algorithms. Fig. 5(a) shows that in ME-LEACH, the first node dies after 25 seconds; in comparison, the first node dies after 16 seconds and 7 seconds in EERRCUF and SEEC, respectively. The time until the last node dies in ME-LEACH is also better than that of EERRCUF and SEEC.
The duration from the first sensor node's death to the time at which half of the nodes have died indicates the stability of the WSN algorithm: the longer this duration, the more stable the network. Fig. 5(b) shows that half of the nodes in SEEC, EERRCUF, and ME-LEACH have died after 17, 36, and 55 seconds, respectively, which indicates that ME-LEACH survives for a much longer time than SEEC and EERRCUF.
In ME-LEACH, the duration from the first node's death to the point at which half the nodes have died is 30 seconds, whereas in SEEC and EERRCUF, this time is 10 and 20 seconds, respectively. These results show that the ME-LEACH scheme is more stable on the WSN than the SEEC and EERRCUF schemes.
Fig. 5(c) shows that the network lifetime for the ME-LEACH method is 63 seconds, whereas the network lifetime under EERRCUF is 49 seconds. This result shows the stability of the proposed method compared to the EERRCUF method.
The total energy consumption is measured for both the proposed and the previous methods. The results show that energy consumption gradually increases for all the schemes. Fig. 6 shows that the ME-LEACH protocol consumes less energy than the SEEC protocol and slightly more than the EERRCUF protocol. Thus, the network lifetime increases when the ME-LEACH protocol is utilized in a WSN.
Fig. 5. Comparison between methods: (a) the first node dies over time, (b) half the nodes die over time, and (c) all the nodes die over time.
Fig. 6. Energy consumption over time.
The efficiency of the system is measured by throughput, which is the amount of data delivered to the BS in bits per second. The throughput decreases as the simulation time increases. Fig. 7 shows that a WSN using the ME-LEACH protocol achieves a higher throughput than the other approaches.
The second test was conducted to compare the E-LEACH and ME-LEACH methods. The parameters of the first node die and network lifetime are used to compare the two methods.
Fig. 7. Throughput over time.
Fig. 8 shows that the lifetime of the sensor nodes using ME-LEACH is longer than that of E-LEACH. In E-LEACH, the first node dies in round 211, and all the nodes are dead in round 925. In ME-LEACH, the first node dies in round 440, and all the nodes are dead in round 1394. Using the proposed method (ME-LEACH) applied to a WSN with 100 nodes, the network lifetime was increased significantly compared to E-LEACH.
This fact shows that ME-LEACH has effectively increased the lifetime of the network. This achievement is very important because energy efficiency is a critical issue in a WSN.
Fig. 8. The ME-LEACH method compared to the E-LEACH method for the network lifetime of 100 nodes.
In this paper, we have analyzed and improved the SEEC, EERRCUF, and E-LEACH algorithms. E-LEACH has weaknesses in the way that the CHs send data to the BS. Since distance is a major factor in energy usage calculations, it is beneficial to find ways to shorten the distance of data transmission. The proposed method changes the way the CHs send data to the BS. A CH will find the nearest CH with a shorter distance to the BS as its next hop. Shorter distances mean less energy is required to transmit data.
By running Experiment 1 using E-LEACH and then comparing the results to the SEEC and EERRCUF protocols and making comparisons to Experiment 2, we find that:
1. The ME-LEACH method is more stable than SEEC and EERRCUF based on the first, half and all nodes die-over-time parameters.
2. The ME-LEACH method has a higher throughput than SEEC and EERRCUF.
3. The E-LEACH method of sending information from a CH to the base station still uses a large amount of energy, but the proposed method can conserve the energy used in sending information. In the previous method, after receiving data from a member node, a CH aggregates the data and then directly sends the data to the BS. The proposed method changes this process by using another CH as the next hop to the BS. A simulation shows that a selected round using the chain formation uses 0.00708 J of energy, while the previous method uses 0.04757 J of energy. This result means that the proposed method conserves about 0.0405 J per round by changing the way a CH sends its information to the BS.
4. In the previous method, the first node died after 211 rounds, while in the proposed method, the first node died after 440 rounds. There is a 108.53% improvement in rounds until first death using the proposed method. This improvement is because the energy usage is distributed more evenly in the proposed method by conserving the energy usage of the CH.
5. The average lifetime of a wireless sensor network is increased by 35.2% compared to that of the previous method. In E-LEACH, all nodes die after 925 rounds, and in the proposed method, all nodes die after 1394 rounds.
By using two hops or more to send data from a member node to the BS, there are delays before the data are received by the BS. The delay from the member node to the CH occurs because CHs must wait until all the data from their member nodes are received before aggregating and sending the data to the BS or another CH as the next hop. Another delay occurs in the process of sending data between a CH in the chain formation as a path to the BS.
We have proposed the ME-LEACH algorithm, which consumes less energy than the SEEC, EERRCUF, and E-LEACH methods. The ME-LEACH method is more stable than SEEC and EERRCUF based on the first-node-die, half-node-die, and all-node-die-over-time parameters, and it has a higher throughput than the other methods. The E-LEACH protocol consumes relatively high quantities of energy when sending data from a CH to the BS: in E-LEACH, more than one node that becomes a CH transmits node data directly to the BS. Additionally, ME-LEACH consumes less energy than the other methods because it uses a chain formation to send the data to the BS; only the CH nearest to the BS transmits the node data, so in each round ME-LEACH uses less energy than E-LEACH and only one CH communicates with the BS. Through the energy conserved by using the chain formation among the CHs, the lifetime of the WSN increases when sending data to the base station. When we can decrease the energy usage of a CH, the CH has a higher probability of living longer. The experiment shows that the use of chain formation can decrease energy consumption in the transmitting and receiving of data, distributing energy more evenly among nodes and increasing the lifetime of the WSN by 35.2% compared to the previous E-LEACH method. There are many other methods for clustering, such as the mean-shift, density-based spatial, and agglomerative hierarchical clustering methods. These methods have also been considered for obtaining more efficient power consumption in WSNs.
The authors would like to thank Telkom University and the School of Computing for the financial and facilities support that allowed the completion of this research and its publication. Additionally, the authors would like to thank Dr. Khoirul Anwar and the AdWitech research center for their support as the first reviewers of this paper.
Maman Abdurohman
He received his Master's and Doctorate degrees from ITB in 2004 and 2010, respectively. Since 2000, he has worked full-time at Telkom University (aka STT Telkom) as a lecturer and researcher. He has authored over 60 papers published in international journals (e.g., International Journal of Electrical Engineering and Informatics, International Journal Applied Mechanics and Materials, International Journal of Informational and Education Technology) and at conferences (e.g., International Conference of ICT, International Conference on Computer Engineering and Technology). In the last three year, he has received either national or international research grants from Telkom University and the Ministry of Higher Education Board. His current research interests are the Internet of Things, especially in Smartcard technology. Currently, he serves as the Dean of School of Computing Telkom University.
Yadi Supriyadi
He received his B.S. in Electrical Engineering from Telkom University (aka. STT Telkom) in 1996 and his M.S. degree from School of Computing Telkom University in 2017. Since 1996, he has worked at PT Telkom Indonesia. Currently, he serves as the Manager of Fraud Operation Management. His current interests include the Internet of Things (IoT), voice protocol, machine learning and fraud detection.
Fitra Zul Fahmi
He received his B.S. and M.S. degrees from the School of Computing, Telkom University in 2015 and 2018, respectively. Since March 2018, he has worked as a Mobile App Developer and has participated in some research activities in Telkom University. His current research interests include the Internet of Things (IoT) and intelligent system.
1 M. Youssef, N. El-Sheimy, "Wireless sensor network: research vs. reality design and deployment issues," in Proceedings of the 5th Annual Conference on Communication Networks and Services Research (CNSR), Fredericton, Canada, 2007, pp. 8-9.
2 S. Prabowo, M. Abdurohman, B. Erfianto, "(EDsHEED) Enhanced Simplified Hybrid, Energy-efficient, Distributed Clustering for Wireless Sensor Network," in Proceedings of the 2015 3rd International Conference on Information and Communication Technology (ICoICT), Nusa Dua, Bali, 2015, pp. 97-101.
3 W. B. Heinzelman, A. P. Chandrakasan, H. Balakrishnan, "An application-specific protocol architecture for wireless microsensor networks," IEEE Transactions on Wireless Communications, vol. 1, no. 4, pp. 660-670, 2002. doi: 10.1109/TWC.2002.804190
4 S. Kumar, M. Prateek, N. J. Ahuja, B. Bhushan, "DE-LEACH: distance and energy aware LEACH," International Journal of Computer Applications, vol. 88, no. 9, pp. 36-42, 2014.
5 K. Y. Jang, K. T. Kim, H. Y. Youn, "An energy efficient routing scheme for wireless sensor networks," in Proceedings of the 2007 International Conference on Computational Science and its Applications, Kuala Lumpur, Malaysia, 2007, pp. 399-404.
6 A. Rai, S. Deswal, P. Singh, "An energy-efficient E-LEACH protocol for wireless sensor networks," International Journal of Engineering Science and Computing, vol. 6, no. 7, pp. 1654-1660, 2016.
7 S. Lindsey, C. S. Raghavendra, "PEGASIS: power-efficient gathering in sensor information systems," in IEEE Aerospace Conference Proceedings, Big Sky, MT, 2002.
8 N. Srikanth, M. G. Prasad, "Efficient clustering protocol using fuzzy K-means and midpoint algorithm for lifetime improvement in WSNs," International Journal of Intelligent Engineering and Systems, vol. 11, no. 4, pp. 61-71, 2018.
9 F. Farouk, R. Rizk, F. W. Zaki, "Multi-level stable and energy-efficient clustering protocol in heterogeneous wireless sensor networks," IET Wireless Sensor Systems, vol. 4, no. 4, pp. 159-169, 2014. doi: 10.1049/iet-wss.2014.0051
10 N. Hiremani, T. G. Basavaraju, "An efficient routing protocol adopting enhanced cluster formation technique accompanied by fuzzy logic for maximizing lifetime of WSN," International Journal of Intelligent Engineering and Systems, vol. 9, no. 4, pp. 185-194, 2016.
11 J. N. Al-Karaki, A. E. Kamal, "Routing techniques in wireless sensor networks: a survey," IEEE Wireless Communications, vol. 11, no. 6, pp. 6-28, 2004. doi: 10.1109/MWC.2004.1368893
12 A. Kaur, G. Singh, J. Kaur, "SPIN: a data centric protocol for wireless sensor networks," International Journal of Engineering Research & Technology, vol. 2, no. 3, pp. 1-9, 2013.
13 F. Ye, A. Chen, S. Lu, L. Zhang, "A scalable solution to minimum cost forwarding in large sensor networks," in Proceedings of the 10th International Conference on Computer Communications and Networks (Cat. No. 01EX495), Scottsdale, AZ, 2001, pp. 304-309.
14 A. J. Prawira, M. Abdurohman, A. G. Putrada, "An analysis on RPL routing over IPv6 WSN mobility and transmission range," in Proceedings of the 2019 International Symposium on Electronics and Smart Devices (ISESD), Badung-Bali, Indonesia, 2019, pp. 1-5.
15 N. Bulusu, J. Heidemann, D. Estrin, "GPS-less low-cost outdoor localization for very small devices," IEEE Personal Communications, vol. 7, no. 5, pp. 28-34, 2000. doi: 10.1109/98.878533
16 A. Savvides, C. C. Han, M. B. Strivastava, "Dynamic fine-grained localization in ad-hoc networks of sensors," in Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, Rome, Italy, 2001, pp. 166-179.
17 Y. Yu, R. Govindan, D. Estrin, "Geographical and energy aware routing: a recursive data dissemination protocol for wireless sensor networks," UCLA Computer Science Department Technical Report, 2001.
18 N. Mukherjee, S. Neogy, S. Roy, Building Wireless Sensor Networks: Theoretical and Practical Perspectives. Boca Raton, FL: CRC Press, 2016.
19 W. Wang, Q. Wang, W. Luo, M. Sheng, W. Wu, L. Hao, "Leach-H: an improved routing protocol for collaborative sensing networks," in Proceedings of the 2009 International Conference on Wireless Communications & Signal Processing, Nanjing, China, 2009, pp. 1-5.
20 V. Loscri, G. Morabito, S. Marano, "A two-levels hierarchy for low-energy adaptive clustering hierarchy (TL-LEACH)," in Proceedings of the 2005 IEEE 62nd Vehicular Technology Conference, Dallas, TX, 2005, pp. 1809-1813.
21 N. Sindhwani, R. Vaid, "V LEACH: an energy efficient communication protocol for WSN," Mechanica Confab, vol. 2, no. 2, pp. 79-84, 2013.
Received: April 1 2019
Revision received: June 4 2019
Published (Print): August 31 2020
Published (Electronic): August 31 2020
Corresponding Author: Maman Abdurohman* , [email protected]
Maman Abdurohman*, School of Computing, Telkom University, Bandung, Indonesia, [email protected]
Yadi Supriadi**, Telkom Corp., Jakarta, Indonesia, [email protected]
Fitra Zul Fahmi*, School of Computing, Telkom University, Bandung, Indonesia, [email protected]
Random dynamical system
In the mathematical field of dynamical systems, a random dynamical system is a dynamical system in which the equations of motion have an element of randomness to them. Random dynamical systems are characterized by a state space S, a set of maps $\Gamma $ from S into itself that can be thought of as the set of all possible equations of motion, and a probability distribution Q on the set $\Gamma $ that represents the random choice of map. Motion in a random dynamical system can be informally thought of as a state $X\in S$ evolving according to a succession of maps randomly chosen according to the distribution Q.[1]
An example of a random dynamical system is a stochastic differential equation; in this case the distribution Q is typically determined by noise terms. It consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. Another example is discrete state random dynamical system; some elementary contradistinctions between Markov chain and random dynamical system descriptions of a stochastic dynamics are discussed.[2]
Motivation 1: Solutions to a stochastic differential equation
Let $f:\mathbb {R} ^{d}\to \mathbb {R} ^{d}$ be a $d$-dimensional vector field, and let $\varepsilon >0$. Suppose that the solution $X(t,\omega ;x_{0})$ to the stochastic differential equation
$\left\{{\begin{matrix}\mathrm {d} X=f(X)\,\mathrm {d} t+\varepsilon \,\mathrm {d} W(t);\\X(0)=x_{0};\end{matrix}}\right.$
exists for all positive time and some (small) interval of negative time dependent upon $\omega \in \Omega $, where $W:\mathbb {R} \times \Omega \to \mathbb {R} ^{d}$ denotes a $d$-dimensional Wiener process (Brownian motion). Implicitly, this statement uses the classical Wiener probability space
$(\Omega ,{\mathcal {F}},\mathbb {P} ):=\left(C_{0}(\mathbb {R} ;\mathbb {R} ^{d}),{\mathcal {B}}(C_{0}(\mathbb {R} ;\mathbb {R} ^{d})),\gamma \right).$
In this context, the Wiener process is the coordinate process.
Now define a flow map (or solution operator) $\varphi :\mathbb {R} \times \Omega \times \mathbb {R} ^{d}\to \mathbb {R} ^{d}$ by
$\varphi (t,\omega ,x_{0}):=X(t,\omega ;x_{0})$
(whenever the right hand side is well-defined). Then $\varphi $ (or, more precisely, the pair $(\mathbb {R} ^{d},\varphi )$) is a (local, left-sided) random dynamical system. The process of generating a "flow" from the solution to a stochastic differential equation leads us to study suitably defined "flows" on their own. These "flows" are random dynamical systems.
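As a purely illustrative numerical counterpart of this construction, one can discretise the SDE with an Euler-Maruyama scheme and check that the resulting approximate solution map composes along a fixed noise path exactly as the cocycle property (defined formally below) requires; the drift, noise amplitude, step count, and seed below are arbitrary assumptions.

```python
import numpy as np

def em_flow(f, eps, t, x0, dW):
    # Euler-Maruyama approximation of x0 -> X(t, omega; x0) for one noise
    # path, given its Brownian increments dW over [0, t].
    n = len(dW)
    dt = t / n
    x = np.asarray(x0, dtype=float)
    for k in range(n):
        x = x + f(x) * dt + eps * dW[k]
    return x

f = lambda x: -x           # illustrative drift (Ornstein-Uhlenbeck type)
eps, t, n = 0.1, 1.0, 1000
rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(t / n), size=n)   # increments of one Brownian path

# Running the first half of the path and then the second half from the
# intermediate state reproduces the full-path result (a "crude cocycle").
full = em_flow(f, eps, t, 2.0, dW)
mid = em_flow(f, eps, t / 2, 2.0, dW[: n // 2])
two_step = em_flow(f, eps, t / 2, mid, dW[n // 2 :])
print(np.allclose(full, two_step))   # True
```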
Motivation 2: Connection to Markov Chain
An i.i.d random dynamical system in the discrete space is described by a triplet $(S,\Gamma ,Q)$.
• $S$ is the state space, $\{s_{1},s_{2},\cdots ,s_{n}\}$.
• $\Gamma $ is a family of maps of $S\rightarrow S$. Each such map has a $n\times n$ matrix representation, called deterministic transition matrix. It is a binary matrix but it has exactly one entry 1 in each row and 0s otherwise.
• $Q$ is the probability measure of the $\sigma $-field of $\Gamma $.
The discrete random dynamical system comes as follows,
1. The system is in some state $x_{0}$ in $S$, a map $\alpha _{1}$ in $\Gamma $ is chosen according to the probability measure $Q$ and the system moves to the state $x_{1}=\alpha _{1}(x_{0})$ in step 1.
2. Independently of previous maps, another map $\alpha _{2}$ is chosen according to the probability measure $Q$ and the system moves to the state $x_{2}=\alpha _{2}(x_{1})$.
3. The procedure repeats.
The random variable $X_{n}$ is constructed by composing independent random maps, $X_{n}=\alpha _{n}\circ \alpha _{n-1}\circ \dots \circ \alpha _{1}(X_{0})$. Clearly, $X_{n}$ is a Markov chain.
Conversely, can a given Markov chain be represented by compositions of i.i.d. random transformations, and if so, how? Yes, it can, but the representation is not unique. The proof of existence is similar to that of the Birkhoff–von Neumann theorem for doubly stochastic matrices.
Here is an example that illustrates the existence and non-uniqueness.
Example: Let the state space be $S=\{1,2\}$ and express the transformations in $\Gamma $ in terms of deterministic transition matrices. Then the Markov transition matrix $M=\left({\begin{array}{cc}0.4&0.6\\0.7&0.3\end{array}}\right)$ can be represented by the following decomposition, obtained by the min-max algorithm: $M=0.6\left({\begin{array}{cc}0&1\\1&0\end{array}}\right)+0.3\left({\begin{array}{cc}1&0\\0&1\end{array}}\right)+0.1\left({\begin{array}{cc}1&0\\1&0\end{array}}\right).$
Alternatively, another valid decomposition is $M=0.18\left({\begin{array}{cc}0&1\\0&1\end{array}}\right)+0.28\left({\begin{array}{cc}1&0\\1&0\end{array}}\right)+0.42\left({\begin{array}{cc}0&1\\1&0\end{array}}\right)+0.12\left({\begin{array}{cc}1&0\\0&1\end{array}}\right).$
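The following minimal sketch (Python with NumPy; illustrative only, not part of the cited references) checks numerically that both decompositions above reproduce M, and simulates the chain by composing i.i.d. random deterministic maps drawn from the first decomposition:

```python
# Verify that each mixture of deterministic transition matrices equals M, then simulate
# X_n = alpha_n(...alpha_1(X_0)...) with maps drawn i.i.d. according to the weights Q.
import numpy as np

M = np.array([[0.4, 0.6],
              [0.7, 0.3]])

maps_1 = [(0.6, np.array([[0, 1], [1, 0]])),
          (0.3, np.array([[1, 0], [0, 1]])),
          (0.1, np.array([[1, 0], [1, 0]]))]
maps_2 = [(0.18, np.array([[0, 1], [0, 1]])),
          (0.28, np.array([[1, 0], [1, 0]])),
          (0.42, np.array([[0, 1], [1, 0]])),
          (0.12, np.array([[1, 0], [0, 1]]))]

for maps in (maps_1, maps_2):
    mix = sum(q * P for q, P in maps)
    assert np.allclose(mix, M)        # each decomposition reproduces M exactly

rng = np.random.default_rng(0)
weights = [q for q, _ in maps_1]
mats = [P for _, P in maps_1]
x, counts = 0, np.zeros((2, 2))
for _ in range(100_000):
    P = mats[rng.choice(len(mats), p=weights)]
    x_next = int(np.argmax(P[x]))     # row x of a deterministic matrix has a single 1
    counts[x, x_next] += 1
    x = x_next
print(counts / counts.sum(axis=1, keepdims=True))   # empirical transition frequencies, close to M
```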
Formal definition
Formally,[3] a random dynamical system consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. In detail:
Let $(\Omega ,{\mathcal {F}},\mathbb {P} )$ be a probability space, the noise space. Define the base flow $\vartheta :\mathbb {R} \times \Omega \to \Omega $ as follows: for each "time" $s\in \mathbb {R} $, let $\vartheta _{s}:\Omega \to \Omega $ be a measure-preserving measurable function:
$\mathbb {P} (E)=\mathbb {P} (\vartheta _{s}^{-1}(E))$ for all $E\in {\mathcal {F}}$ and $s\in \mathbb {R} $;
Suppose also that
1. $\vartheta _{0}=\mathrm {id} _{\Omega }:\Omega \to \Omega $, the identity function on $\Omega $;
2. for all $s,t\in \mathbb {R} $, $\vartheta _{s}\circ \vartheta _{t}=\vartheta _{s+t}$.
That is, $\vartheta _{s}$, $s\in \mathbb {R} $, forms a group of measure-preserving transformations of the noise space $(\Omega ,{\mathcal {F}},\mathbb {P} )$. For one-sided random dynamical systems, one would consider only positive indices $s$; for discrete-time random dynamical systems, one would consider only integer-valued $s$; in these cases, the maps $\vartheta _{s}$ would only form a commutative monoid instead of a group.
While true in most applications, it is not usually part of the formal definition of a random dynamical system to require that the measure-preserving dynamical system $(\Omega ,{\mathcal {F}},\mathbb {P} ,\vartheta )$ is ergodic.
Now let $(X,d)$ be a complete separable metric space, the phase space. Let $\varphi :\mathbb {R} \times \Omega \times X\to X$ be a $({\mathcal {B}}(\mathbb {R} )\otimes {\mathcal {F}}\otimes {\mathcal {B}}(X),{\mathcal {B}}(X))$-measurable function such that
1. for all $\omega \in \Omega $, $\varphi (0,\omega )=\mathrm {id} _{X}:X\to X$, the identity function on $X$;
2. for (almost) all $\omega \in \Omega $, $(t,x)\mapsto \varphi (t,\omega ,x)$ is continuous;
3. $\varphi $ satisfies the (crude) cocycle property: for almost all $\omega \in \Omega $,
$\varphi (t,\vartheta _{s}(\omega ))\circ \varphi (s,\omega )=\varphi (t+s,\omega ).$
In the case of random dynamical systems driven by a Wiener process $W:\mathbb {R} \times \Omega \to X$, the base flow $\vartheta _{s}:\Omega \to \Omega $ would be given by
$W(t,\vartheta _{s}(\omega ))=W(t+s,\omega )-W(s,\omega )$.
This can be read as saying that $\vartheta _{s}$ "starts the noise at time $s$ instead of time 0". Thus, the cocycle property can be read as saying that evolving the initial condition $x_{0}$ with some noise $\omega $ for $s$ seconds and then through $t$ seconds with the same noise (as started from the $s$ seconds mark) gives the same result as evolving $x_{0}$ through $(t+s)$ seconds with that same noise.
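Under the same illustrative Euler–Maruyama setup as in the sketch above, with the shift θ_s acting on the noise path by discarding its first s/Δt increments, the cocycle property holds exactly on the discretization grid; a minimal numerical check:

```python
# Check the cocycle property phi(t, theta_s(omega)) composed with phi(s, omega) equals
# phi(t + s, omega) for the discretized flow. Drift, eps, and step size are assumptions.
import numpy as np

def f(x):
    return -x

def phi(t, omega, x0, eps=0.1, dt=1e-3):
    x = x0
    for k in range(int(round(t / dt))):
        x = x + f(x) * dt + eps * omega[k]
    return x

def theta(s, omega, dt=1e-3):
    """Shift the noise: theta_s(omega) restarts the increments at time s."""
    return omega[int(round(s / dt)):]

rng = np.random.default_rng(2)
dt = 1e-3
omega = rng.normal(0.0, np.sqrt(dt), size=20_000)
s, t, x0 = 3.0, 5.0, 1.5
lhs = phi(t, theta(s, omega), phi(s, omega, x0))   # evolve s seconds, then t more with shifted noise
rhs = phi(t + s, omega, x0)                        # evolve t + s seconds with the original noise
assert np.isclose(lhs, rhs)
```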
Attractors for random dynamical systems
The notion of an attractor for a random dynamical system is not as straightforward to define as in the deterministic case. For technical reasons, it is necessary to "rewind time", as in the definition of a pullback attractor.[4] Moreover, the attractor is dependent upon the realisation $\omega $ of the noise.
See also
• Chaos theory
• Diffusion process
• Stochastic control
References
1. Bhattacharya, Rabi; Majumdar, Mukul (2003). "Random dynamical systems: a review". Economic Theory. 23 (1): 13–38. doi:10.1007/s00199-003-0357-4. S2CID 15055697.
2. Ye, Felix X.-F.; Wang, Yue; Qian, Hong (August 2016). "Stochastic dynamics: Markov chains and random transformations". Discrete and Continuous Dynamical Systems - Series B. 21 (7): 2337–2361. doi:10.3934/dcdsb.2016050.
3. Arnold, Ludwig (1998). Random Dynamical Systems. ISBN 9783540637585.
4. Crauel, Hans; Debussche, Arnaud; Flandoli, Franco (1997). "Random attractors". Journal of Dynamics and Differential Equations. 9 (2): 307–341. Bibcode:1997JDDE....9..307C. doi:10.1007/BF02219225. S2CID 192603977.
| Wikipedia |
Category of medial magmas
In mathematics, the category of medial magmas, also known as the medial category, and denoted Med, is the category whose objects are medial magmas (that is, sets with a medial binary operation), and whose morphisms are magma homomorphisms (which are equivalent to homomorphisms in the sense of universal algebra).
The category Med has direct products, so the concept of a medial magma object (internal binary operation) makes sense. As a result, every object of Med is a medial object in Med, and this property characterizes Med.
There is an inclusion functor from Set to Med as trivial magmas, with operations being the right projections
(x, y) → y.
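A minimal brute-force sketch (Python; illustrative, not part of the formal development) checking that the right projection satisfies the medial identity (x·y)·(u·v) = (x·u)·(y·v) on an arbitrary finite carrier set:

```python
# Verify that the trivial (right-projection) operation makes any finite set a medial magma.
from itertools import product

def right_proj(x, y):
    return y  # the trivial magma operation (x, y) -> y

S = range(4)  # any finite carrier set will do for the check
assert all(
    right_proj(right_proj(x, y), right_proj(u, v))
    == right_proj(right_proj(x, u), right_proj(y, v))
    for x, y, u, v in product(S, repeat=4)
)
```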
An injective endomorphism can be extended to an automorphism of a magma extension, namely the colimit of the constant sequence of the endomorphism.
See also
• Eckmann–Hilton argument
| Wikipedia |
Low basis theorem
The low basis theorem is one of several basis theorems in computability theory, each of which shows that, given an infinite subtree of the binary tree $2^{<\omega }$, it is possible to find an infinite path through the tree with particular computability properties. The low basis theorem, in particular, shows that there must be a path which is low; that is, the Turing jump of the path is Turing equivalent to the halting problem $\emptyset '$.
Statement and proof
The low basis theorem states that every nonempty $\Pi _{1}^{0}$ class in $2^{\omega }$ (see arithmetical hierarchy) contains a set of low degree (Soare 1987:109). This is equivalent, by definition, to the statement that each infinite computable subtree of the binary tree $2^{<\omega }$ has an infinite path of low degree.
The proof uses the method of forcing with $\Pi _{1}^{0}$ classes (Cooper 2004:330). Hájek and Kučera (1989) showed that the low basis theorem is provable in the formal system of arithmetic known as ${\text{I-}}\Sigma _{1}$.
The forcing argument can also be formulated explicitly as follows. For a set X⊆ω, let f(X) = Σ_{i : {i}(X)↓} 2^−i, where {i}(X)↓ means that Turing machine i halts on X (so the sum ranges over all such i). Then, for every nonempty (lightface) $\Pi _{1}^{0}$ S⊆2ω, the (unique) X∈S minimizing f(X) has a low Turing degree. To see this, {i}(X)↓ ⇔ ∀Y∈S ({i}(Y)↓ ∨ ∃j<i ({j}(Y)↓ ∧ ¬{j}(X)↓)), which can be computed from 0′ by induction on i; note that ∀Y∈S φ(Y) is $\Sigma _{1}^{0}$ for $\Sigma _{1}^{0}$ φ. In other words, whether a machine halts on X is forced by a finite condition, which yields X′ ≤_T 0′, i.e. X is low.
Application
One application of the low basis theorem is to construct completions of effective theories so that the completions have low Turing degree. For example, the low basis theorem implies the existence of PA degrees strictly below $\emptyset '$.
References
• Cenzer, Douglas (1999). "$\Pi _{1}^{0}$ classes in computability theory". In Griffor, Edward R. (ed.). Handbook of computability theory. Stud. Logic Found. Math. Vol. 140. North-Holland. pp. 37–85. ISBN 0-444-89882-4. MR 1720779. Zbl 0939.03047.
• Cooper, S. Barry (2004). Computability Theory. Chapman and Hall/CRC. ISBN 1-58488-237-9..
• Hájek, Petr; Kučera, Antonín (1989). "On Recursion Theory in IΣ1". Journal of Symbolic Logic. 54 (2): 576–589. doi:10.2307/2274871. JSTOR 2274871.
• Jockusch, Carl G., Jr.; Soare, Robert I. (1972). "$\Pi _{1}^{0}$ Classes and Degrees of Theories". Transactions of the American Mathematical Society. 173: 33–56. doi:10.1090/s0002-9947-1972-0316227-0. ISSN 0002-9947. JSTOR 1996261. Zbl 0262.02041. The original publication, including additional clarifying prose.
• Nies, André (2009). Computability and randomness. Oxford Logic Guides. Vol. 51. Oxford: Oxford University Press. ISBN 978-0-19-923076-1. Zbl 1169.03034. Theorem 1.8.37.
• Soare, Robert I. (1987). Recursively enumerable sets and degrees. A study of computable functions and computably generated sets. Perspectives in Mathematical Logic. Berlin: Springer-Verlag. ISBN 3-540-15299-7. Zbl 0667.03030.
| Wikipedia |
A joint model for mixed and truncated longitudinal data and survival data, with application to HIV vaccine studies
A joint model for mixed and truncated longitudinal data and survival data, with application to... Yu, Tingting;Wu, Lang;Gilbert, Peter B 2017-09-23 00:00:00 SUMMARY In HIV vaccine studies, a major research objective is to identify immune response biomarkers measured longitudinally that may be associated with risk of HIV infection. This objective can be assessed via joint modeling of longitudinal and survival data. Joint models for HIV vaccine data are complicated by the following issues: (i) left truncations of some longitudinal data due to lower limits of quantification; (ii) mixed types of longitudinal variables; (iii) measurement errors and missing values in longitudinal measurements; (iv) computational challenges associated with likelihood inference. In this article, we propose a joint model of complex longitudinal and survival data and a computationally efficient method for approximate likelihood inference to address the foregoing issues simultaneously. In particular, our model does not make unverifiable distributional assumptions for truncated values, which is different from methods commonly used in the literature. The parameters are estimated based on the h-likelihood method, which is computationally efficient and offers approximate likelihood inference. Moreover, we propose a new approach to estimate the standard errors of the h-likelihood based parameter estimates by using an adaptive Gauss–Hermite method. Simulation studies show that our methods perform well and are computationally efficient. A comprehensive data analysis is also presented. 1. Introduction In preventive HIV vaccine efficacy trials, participants are randomized to receive a series of vaccinations or placebos and are followed until the day of being diagnosed with HIV infection or until the end of study follow-up. We are often interested in the times to HIV infection. Meanwhile, blood samples are repeatedly collected over time for each participant in order to measure immune responses induced by the vaccine, such as CD4 T cell responses. A major research interest in HIV vaccine studies is to identify potential immune response biomarkers for HIV infection. For example, for many infectious diseases antibodies induced by a vaccine can recognize and kill a pathogen before it establishes infection; therefore high antibody levels are often associated with a lower risk of pathogen infection. Since longitudinal trajectories of some immune responses are often associated with the risk of HIV infection, in statistical analysis it is useful to jointly model the longitudinal and survival data. Moreover, such joint models can be used to address measurement errors and non-ignorable missing data in the longitudinal data. There has been active research on joint models of longitudinal and survival data in recent years. Lawrence Gould and others (2015) have given a comprehensive review in this field. Rizopoulos and others (2009) proposed a computational approach based on the Laplace approximation for joint models of continuous longitudinal response and time-to-event outcome. Bernhardt and others (2014) discussed a multiple imputation method for handling left-truncated longitudinal variables used as covariates in AFT survival models. Król and others (2016) considered joint models of a left-truncated longitudinal variable, recurrent events, and a terminal event. The truncated values of the longitudinal variable were assumed to follow the same normal distributions as the untruncated values. 
Other recent work includes Fu and Gilbert (2017), Barrett and others (2015), Elashoff and others (2015), Chen and others (2014), Taylor and others (2013), Rizopoulos (2012b), and Zhu and others (2012). Analysis of HIV vaccine trial data offers the following new challenges: (i) some longitudinal data may be left truncated by a lower limit of quantification (LLOQ) of the biomarker assay, and the common approach of assuming that truncated values follow parametric distributions is unverifiable and may be unreasonable for vaccine trial data; (ii) the longitudinal multivariate biomarker response data are intercorrelated and may be of mixed types such as binary and continuous; (iii) the longitudinal data may exhibit periodic patterns over time, due to repeated administrations of the HIV vaccine; (iv) some longitudinal biomarkers may have measurement errors and missing data; and (v) the computation associated with likelihood inference can be very intensive and challenging. A comprehensive statistical analysis of HIV vaccine trial data requires us to address all the foregoing issues simultaneously. Therefore, despite extensive literature on joint models, new statistical models and methods are in demand. In this article, we propose innovative models and methods to address the above issues. The contributions of the paper are: (i) when longitudinal data are left truncated, we propose a new method that does not assume any parametric distributions for the truncated values, which is different from existing approaches in the literature that unrealistically assume truncated values to follow the same distributions as those for the observed values; (ii) for longitudinal data with left truncation, the observed values are assumed to follow a truncated normal distribution; (iii) we incorporate the associations among several longitudinal responses of mixed types by linking the longitudinal models with shared and correlated random effects (Rizopoulos, 2012b); and (iv) we address the computational challenges of likelihood inference by proposing a computationally very efficient approximate method based on the h-likelihood method (Lee and others, 2006). It is known that, when the baseline hazard in the Cox survival model is completely unspecified, the standard errors of the parameter estimates in the joint models may be underestimated (Rizopoulos, 2012a; Hsieh and others, 2006). To address this issue, we also propose a new approach to estimate the standard errors of parameter estimates. The article is organized as follows. In Section 2, we introduce the HIV vaccine trial data that motivates the research. In Section 3, we describe the proposed models and methods, which address all of the issues discussed above simultaneously. Section 4 presents analysis of the HIV vaccine trial data. Section 5 shows simulation studies to evaluate the proposed models and methods. We conclude the article with some discussion in Section 6. 2. The HIV vaccine data Our research is motivated by the VAX004 trial, which is a 36-month efficacy study of a candidate vaccine to prevent HIV-1 infection, which contained two recombinant gp120 Envelope proteins: the MN and GNE8 HIV HIV-1 strains (Flynn and others, 2005). One of the main objectives in the trial was to assess immune response biomarkers measured in vaccine recipients for their association with the incidence of HIV infection. It was addressed using plasma samples collected at the immunization visits, 2 weeks after the immunization visits, and the final visit (i.e. 
months 0, 0.5, 1, 1.5, 6, 6.5, …, 30, 30.5, 36) to measure several immune response variables in vaccine recipients. Eight immune response variables were measured in total, most of which are highly correlated with each other and some have up to 16% missing data. We focus on a subset of these variables that may be representative and have low rates of missing data. In particular, we use the NAb and the MNGNE8 variables, where NAb is the titer of neutralizing antibodies to the MN strain of the HIV-1 gp120 Env protein and MNGNE8 is the average level of binding antibodies (measured by ELISA) to the MN and GNE8 HIV-1 gp120 Env proteins (Gilbert and others, 2005). Due to space limitation, the details of the clinical research questions, participant selection procedure, and other immune response variables are described in Section 1 in the Supplementary material available at Biostatistics online. The data set we consider has 194 participants in total, among whom 21 participants acquired HIV infection during the trial with time of infection diagnosis ranging from day 43 to day 954 and event rate of 10.8%. The average number of repeated measurements over time is 12.6 per participant. Moreover, NAb has a LLOQ of 1.477, and about 27% of NAb measurements are below the LLOQ (left-truncated). To minimize the potential for bias due to missing data on the immune response biomarkers, the models adjust for the dominant baseline prognostic factor for HIV-1 infection—baseline behavioral risk score, which is grouped into three categories: 0, 1, or 2 for risk score 0, 1–3, 4–7, respectively. Figure 1 shows the longitudinal trajectories of the immune responses of a few randomly selected participants, where the left truncated values in NAb are substituted by the LLOQ. The value of an immune response typically increases sharply right after each vaccination, and then starts to decrease about 2 weeks after the vaccination. Such patterns are shown as the reverse sawtooth waves in Figure 1. We see that participants HIV-infected later on seem to have lower values of MNGNE8 and NAb than those uninfected by the end of the study. In particular, for MNGNE8, there seem to be clear differences between HIV-infected and uninfected participants, separated by the median value. The figures show that the longitudinal patterns of some immune responses seem to be associated with HIV infection, motivating inference via joint models of the longitudinal and survival data. In addition, some immune responses are highly correlated over time and are of mixed types, so we should also incorporate the associations among different types of longitudinal variables. Moreover, due to substantial variations across subjects, mixed effects models may be useful. The random effects in mixed effects models can serve several purposes: (i) they represent individual variations or individual-specific characteristics of the participants; (ii) they incorporate the correlation among longitudinal measurements for each participant; and (iii) they may be viewed as summaries of the individual profiles. Therefore, mixed effects models seem to be a reasonable choice for modeling the HIV vaccine trial data. Fig. 1. View largeDownload slide Longitudinal trajectories of two immune response variables for a few randomly selected VAX004 vaccine recipients, where the solid lines represent pre-infection trajectories of participants who acquired HIV infected and the dashed lines represent trajectories for participants who never acquired HIV infection. 
The left truncated values in NAb are substituted by the LLOQ of 1.477. Fig. 1. View largeDownload slide Longitudinal trajectories of two immune response variables for a few randomly selected VAX004 vaccine recipients, where the solid lines represent pre-infection trajectories of participants who acquired HIV infected and the dashed lines represent trajectories for participants who never acquired HIV infection. The left truncated values in NAb are substituted by the LLOQ of 1.477. More results of the exploratory data analysis are given in Sections 2 and 3 in the Supplementary material available at Biostatistics online, including the summary statistics of the immune responses, Kaplan–Meier plot of the time to HIV infection, and longitudinal trajectories of NAb and MNGNE8 with the time variable shifted and aligned at the event times. 3. Joint models and inference 3.1. The longitudinal, truncation, and survival models 3.1.1. Models for longitudinal data of mixed types. In the following, we denote by $$Y$$ a random variable, $$y$$ its observed value, $$f(y)$$ a generic density function, with similar notation for other variables. For simplicity of presentation, we consider two correlated longitudinal variables, $$Y$$ and $$Z$$, where $$Y$$ is continuous and subject to left truncation due to LLOQ, and $$Z$$ is binary or count (e.g. dichotomized variable or number of CD4 T cells). The models can be easily extended to more than two longitudinal processes. Let $$d$$ be the LLOQ of $$Y$$ and $$C$$ be the truncation indicator of $$Y$$ such that $$C=1 $$ if $$y\leq d$$ and $$C=0$$ otherwise. For the continuous longitudinal variable $$Y$$, after possibly some transformations such as a log-transformation, we may assume that the untruncated data of $$Y$$ follow a truncated normal distribution. We consider a linear or nonlinear mixed effects (LME or NLME) model for the observed values of $$y_{ij}$$ given that $$y_{ij}\geq d$$, that is, \begin{equation} \begin{split} & Y_{ij} | Y_{ij} \ge d = \Psi_{ij}^T\boldsymbol{\beta}+ \Phi_{ij}^T\boldsymbol{b}_{1i} + \epsilon_{ij}, \quad \mbox{ or } \quad Y_{ij} | Y_{ij} \ge d = g(\Psi_{ij}, \Phi_{ij}, \boldsymbol{\beta}, \boldsymbol{b}_{1i})+\epsilon_{ij}, \\ & \boldsymbol{b}_{1i}\stackrel{iid}{\sim} N(\boldsymbol{0}, \mathbf{D}_1), \qquad \boldsymbol{\epsilon}_{i} \stackrel{iid}{\sim} N(\boldsymbol{0}, R_i), \qquad i=1,\ldots,n, \; j=1,\ldots,n_i,\\ \end{split} \label{model:cont} \end{equation} (3.1) where $$Y_{ij}$$ is the longitudinal variable of participant $$i$$ at time $$t_{ij}$$, $$\Psi_{ij}$$ and $$\Phi_{ij}$$ are vectors of covariates, $$\boldsymbol{\beta}$$ contains fixed parameters, $$\boldsymbol{b}_{1i}$$ contains random effects, $$g(\cdot)$$ is a known nonlinear function, $$\mathbf{D}_1$$ and $$R_i$$ are covariance matrices, and $$\boldsymbol{\epsilon}_{i}=(\epsilon_{i1}, \cdots,\epsilon_{in_i})^T$$ are random errors independent of $$\boldsymbol{b}_{1i}$$. A LME model is usually an empirical model while an NLME model is a mechanistic model widely used in HIV viral dynamics (Wu, 2009). We assume that $$R_i=\sigma^2 I_i$$, i.e. the within-individual repeated measurements are independent conditional on the random effects. In model (3.1), the observed $$Y_{ij}$$'s given the random effects and the condition "$$Y_{ij} \ge d$$" (or $$C_{ij}=0$$) are assumed to be normally distributed, so it is reasonable to assume the $$Y_{ij}$$ follow a truncated normal distribution (Mehrotra and others, 2000). For the truncated $$Y_{ij}$$ values (i.e. 
$$Y_{ij} \le d$$), any parametric distributional assumptions are unverifiable, although most existing literature makes such assumptions for convenience of likelihood inference. Moreover, the truncated values are unlikely to follow normal distributions in most cases, since the $$Y_{ij}$$ values at least must be positive while a normal random variable can take any real values. Thus, it is more reasonable to assume the truncated normal distribution for the observed $$Y_{ij}$$ values and leave the distribution of the truncated $$Y_{ij}$$ values completely unspecified. The density function of "$$Y_{ij}|\boldsymbol{b}_{1i}, c_{ij}=0$$" is given as Mehrotra and others (2000) \begin{equation} f(y_{ij}|c_{ij}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma)= \frac{1}{\sigma} \phi\Big(\frac{y_{ij}-\mu_{ij}}{\sigma}\Big)\Big[1-\Phi\Big(\frac{d-\mu_{ij}}{\sigma}\Big)\Big]^{-1}, \label{pdf:tnorm} \end{equation} (3.2) where $$\mu_{ij}= E(y_{ij}|\boldsymbol{b}_{1i})$$, $$\phi(\cdot)$$ is the probability density function of the standard normal distribution $$N(0, 1)$$ and $$\Phi(\cdot)$$ is the corresponding cumulative distribution function. For the discrete longitudinal variable $$Z$$, we consider the following generalized linear mixed effects model (GLMM) \begin{equation} q(E(Z_{ik}))= \mathbf{x}^T_{ik}\boldsymbol{\alpha}+\mathbf{u}^T_{ik}\boldsymbol{b}_{2i},\qquad i=1,\ldots,n_i, k=1,\ldots,m_i, \label{model:binary} \end{equation} (3.3) where $$Z_{ik}$$ is the longitudinal variable of participant $$i$$ at time $$t_{ik}$$, $$q()$$ is a known link function, $$\mathbf{x}_{ik}$$ and $$\mathbf{u}_{ik}$$ are vectors of covariates, $$\boldsymbol{\alpha}$$ are fixed parameters, $$\boldsymbol{b}_{2i}$$ is a vector of random effects with $$\boldsymbol{b}_{2i}\sim N(\boldsymbol{0},\mathbf{D}_2)$$, and $$Z_{ik}|{b}_{2i}$$ is assumed to follow a distribution in the exponential family. The longitudinal data may contain intermittent missing data and dropouts. We assume that the intermittent missing data and dropouts are missing at random. The fact that the missing data are biomarkers measuring immune responses to the vaccine (and not variables such as toxicity that could obviously be related to missed visits or dropout), and that the vaccine has a large safety data base showing it is not toxic, makes this assumption plausible. 3.1.2. A new approach for truncated longitudinal data. When the $$Y$$ values are truncated, a common approach in the literature is to assume that the truncated values continue to follow the normal distribution assumed for the observed values (Hughes, 1999; Wu, 2002). However, such an assumption is unverifiable and may be unreasonable in some cases, as noted earlier. In particular, when the truncation rate is high, the normality assumption is even less reasonable as the truncation rate can be much larger than the left-tail probability of the normal distribution for the observed data. For example, Figure 2 displays histograms of NAb for two participants, where the left truncated data are substituted by the LLOQ of 1.477. The truncation rates, 27% for participant 1 and 33% for participant 2, seem much lager than the left-tail probabilities of the assumed distributions for the observed data. Fig. 2. View largeDownload slide Histograms of NAb of two VAX004 vaccine recipients, where the left-truncated data are substituted by the LLOQ of 1.477. Fig. 2. View largeDownload slide Histograms of NAb of two VAX004 vaccine recipients, where the left-truncated data are substituted by the LLOQ of 1.477. 
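As an illustration (not the authors' code), the left-truncated normal log-density in (3.2) can be evaluated as follows, where the values of $$\mu$$, $$\sigma$$, and the LLOQ $$d$$ below are arbitrary and SciPy is assumed to be available:

```python
# Sketch of the log-density of y | y >= d under N(mu, sigma^2) truncated at the LLOQ d,
# i.e. log phi((y - mu)/sigma) - log sigma - log(1 - Phi((d - mu)/sigma)) as in (3.2).
import numpy as np
from scipy.stats import norm

def log_trunc_normal(y, mu, sigma, d):
    """Log-density of observed (untruncated) values under the left-truncated normal model."""
    y = np.asarray(y, dtype=float)
    log_f = norm.logpdf(y, loc=mu, scale=sigma) - norm.logsf(d, loc=mu, scale=sigma)
    return np.where(y >= d, log_f, -np.inf)   # values below the LLOQ carry no density here

print(log_trunc_normal([1.6, 2.3], mu=2.0, sigma=0.7, d=1.477))
```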
Here we propose a different approach: we do not assume any parametric distributions for the truncated values, but instead we conceptually view the truncated values as a point mass or cluster of unobserved values below the LLOQ without any distributional assumption. Note that, although the truncation status $$C_{ij}$$ can be determined by the $$Y_{ij}$$ values, in HIV vaccine studies, many biomarkers are measured infrequently over time, due to both budget and practical considerations, while some other variables can be measured more frequently. For this reason, when the $$Y$$ values are not measured, we can roughly predict the truncation status of the $$Y$$ value based on other measured variables that are associated with $$Y$$, including time. It is important to predict the truncation status of $$Y$$ values, since left-truncated $$Y$$ values have important implications (e.g. a positive immune response may be needed for protection by vaccination). A model for the truncation indicator $$C_{ij}$$ can help make reasonable predictions of $$Y_{ij}$$ when such predictions are needed. Therefore, we assume the following model for the truncation indicator $$C_{ij}$$: \begin{equation} \text{logit}(P(C_{ij}=1)) = {\bf w}^T_{ij}\boldsymbol{\eta} + {\bf v}^T_{ij} \boldsymbol{b}_{3i}, \qquad \boldsymbol{b}_{3i} \sim N(\boldsymbol{0}, \mathbf{D}_3), \label{model:censor} \end{equation} (3.4) where $${\bf w}^T_{ij}$$ and $${\bf v}^T_{ij}$$ contain covariates, $$\boldsymbol{\eta}$$ contains fixed parameters, and $$\boldsymbol{b}_{3i}$$ contains random effects. The contribution of the longitudinal data of $$Y$$ for individual $$i$$ to the likelihood given the random effects is $$f(y_{ij}|c_{ij}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) \times f(c_{ij}|\boldsymbol{b}_{3i},\boldsymbol{\eta}), $$ where $$f(y_{ij}|c_{ij}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma)$$ is given by (3.2) and $$P(c_{ij}=0|\boldsymbol{b}_{3i},\boldsymbol{\eta}) = (1+\exp({\bf w}^T_{ij}\boldsymbol{\eta} + {\bf v}^T_{ij} \boldsymbol{b}_{3i}))^{-1}$$. Another use of model (3.4) is modeling non-ignorable or informative missing data in the longitudinal $$Y$$ data. When longitudinal data have both left-truncated data and non-ignorable missing data, we should consider two separate models similar to (3.4). Here we do not consider the issue of non-ignorable missing data, but the models and methods can be easily extended to handle missing data. In fact, left truncated data may be viewed as non-ignorable missing data. 3.1.3. Association between mixed types of longitudinal variables. Different immune response variables are typically highly correlated and may be of different types, such as one being continuous and another one being binary. The exact structures of the associations among different longitudinal variables may be complicated. However, we can reasonably assume that the variables are associated through shared or correlated random effects from different models. This is a reasonable assumption, since the random effects represent individual deviations from population averages and can be interpreted as unobserved or latent individual characteristics, such as individual genetic information or health status, which govern different longitudinal processes. This can be seen from Figure 1 where different immune response variables within the same individual exhibit similar patterns over time, including the truncation process. 
Therefore, we assume that $$\boldsymbol{b}_i=(\boldsymbol{b}_{1i}^T, \boldsymbol{b}_{2i}^T, \boldsymbol{b}_{3i}^T )^T \sim N(0, \mathbf{\Sigma})$$, where $$\mathbf{\Sigma}$$ is an arbitrary covariance matrix. Note that we allow the random effects in the longitudinal models to be different since the longitudinal trajectories of different variables may exhibit different between-individual variations (as measured by random effects), especially for different types of longitudinal variables such as binary and continuous variables. 3.1.4. A Cox model for time-to-event data. The times to HIV infection may be related to the longitudinal patterns of the immune responses and left-truncated statuses. The specific nature of this dependence may be complicated. There are several possibilities: (i) the infection time may depend on the current immune response values at infection times; (ii) the infection time may depend on past immune response values; and (iii) the infection time may depend on summaries or key characteristics of the longitudinal or truncation trajectories. Here we consider case (iii) for the following reasons: (a) the random effects may be viewed as summaries of individual-specific longitudinal trajectories; (b) the immune response values may be truncated due to lower detection limits; and (c) this approach is also widely used in the joint model literature. Since the random effects in the longitudinal models may be interpreted as "summaries" or individual-specific characteristics of the longitudinal processes, we may use random effects from the longitudinal models as "covariates" in the survival model. Such an approach is commonly used in the literature and is often called "shared parameter models" (Wulfsohn and Tsiatis, 1997; Rizopoulos, 2012b). Let $$T_{i}^* $$ be the time to HIV infection, $$\mathcal{C}_i$$ be the right-censoring time, $$S_i=\min\{T_i^*, \mathcal{C}_i\}$$ be the observed time, and $$\delta_i=I(T^*_i\leq \mathcal{C}_i)$$ be the event indicator. We assume the censoring is non-informative and consider a Cox model for the observed survival data $$\{(s_i, \delta_{i}), i=1,\ldots,n\}$$, \begin{equation} h_i(t)=h_0(t)\exp\Big( \mathbf{x}^T_{si}\boldsymbol{\gamma}_0 +\boldsymbol{b}^T_i \boldsymbol{\gamma}_1 \Big), \label{model:cox} \end{equation} (3.5) where $$h_0(t)$$ is an unspecified baseline hazard function, $$\mathbf{x}_{si}$$ contains baseline covariates of individual $$i$$, and $$\boldsymbol{\gamma}_0$$ and $$\boldsymbol{\gamma}_1$$ are vectors of fixed parameters. In model (3.5), the parameters $$\boldsymbol{\gamma}_1$$ link the risk of HIV infection at time $$t$$ to the random effects in the longitudinal or truncation models, which allow us to check if individual-specific characteristics of the longitudinal immune responses are associated with the risk of HIV infection. We assume that the survival data and the longitudinal data are conditionally independent given the random effects. 3.2. An approximate method for likelihood inference We consider the likelihood method for parameter estimation and inference for the above models. Let $$\boldsymbol{\theta}=(\boldsymbol{\beta}^T,\boldsymbol{\alpha}^T,\boldsymbol{\eta}^T,\boldsymbol{\gamma}_0^T,\boldsymbol{\gamma}_1^T)^T$$ be the collection of all mean parameters and $$\boldsymbol{\xi}=(\sigma, \text{vec}(\mathbf{\Sigma}))$$ be the collection of variance–covariance (dispersion) parameters. 
The (joint) likelihood for all the observed longitudinal data and time-to-infection data is given by \[ L(\boldsymbol{\theta}, \boldsymbol{\xi}) = \prod_{i=1}^n \int \Big\{ f(\mathbf{y}_{i}|\mathbf{c}_{i}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) f(\mathbf{z}_i|\boldsymbol{b}_{2i},\boldsymbol{\alpha}) f(\mathbf{c}_{i}|\boldsymbol{b}_{3i},\boldsymbol{\eta}) f(\mathbf{s}_i,\boldsymbol{\delta}_i|h_0, \boldsymbol{b}_i,\boldsymbol{\gamma}_0,\boldsymbol{\gamma}_1) f(\boldsymbol{b}_i|\mathbf{\Sigma})\Big\}\; d\,\boldsymbol{b}_i. \] Since the dimension of the random effects $$\boldsymbol{b}_i$$ is often high and some density functions can be highly complicated, evaluation of the above integral can be a major challenge. The common approach based on the Monte Carlo EM algorithm can offer potential difficulties such as very slow or even non-convergence (Hughes, 1999). Numerical integration methods such as the Gaussian Hermite (GH) quadrature method can also be very tedious. Therefore, in the following we consider an approximate method based on the h-likelihood, which can be computationally much more efficient while maintaining reasonable accuracy (Lee and others, 2006; Ha and others, 2003; Molas and others, 2013). Its performance in the current context will be evaluated by simulations later. Essentially, the h-likelihood method uses Laplace approximations to the intractable integral in the likelihood. A first-order Laplace approximation can be viewed as the GH quadrature method with one node. So a Laplace approximation can be less accurate than the GH quadrature method with more than one node. However, when the dimension of the integral is high, a Laplace approximation can be computationally much less intensive than the GH quadrature method whose computational intensity grows exponentially with the dimension of the integral. Moreover, it produces approximate MLEs for the mean parameters and approximate restricted maximum likelihood estimates (REMLs) for the variance–covariance (dispersion) parameters. For the models (3.1), (3.3)–(3.5) in the previous section, the log h-likelihood function is given by \begin{equation} \begin{split} \ell_{h}= \sum_{i=1}^n \ell_{hi} =&\sum_{i=1}^n\Big\{ \log f(\mathbf{y}_i|\mathbf{c}_i=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) +\log f(\mathbf{z}_i|\boldsymbol{b}_{2i},\boldsymbol{\alpha})+ \log f(\mathbf{c}_i|\boldsymbol{b}_{3i},\boldsymbol{\eta})\\ & +\log f(\mathbf{s}_i,\boldsymbol{\delta}_i|h_0,\boldsymbol{b}_i,\boldsymbol{\gamma}_0,\boldsymbol{\gamma}_1) + \log f(\boldsymbol{b}_i|\mathbf{\Sigma})\Big\}.\\ \end{split} \label{lik_joint} \end{equation} (3.6) Based on Ha and others (2003) and Molas and others (2013), we propose the following estimation procedure via the h-likelihood. 
Beginning with some starting values $$(\boldsymbol{\xi}^{(0)}, \boldsymbol{\theta}^{(0)}, h_0^{(0)})$$, we iterate the steps below: Step 1: At iteration $$k$$, given $$(\hat{\boldsymbol{\xi}}^{(k)}$$, $$\hat{\boldsymbol{\theta}}^{(k)}$$, $$\hat{h}_0^{(k)})$$, obtain updated estimates of the random effects $$\boldsymbol{b}^{(k+1)}$$ by maximizing $$\ell_{h}$$ in (3.6) with respect to $$\boldsymbol{b}$$; Step 2: Given $$(\hat{\boldsymbol{\xi}}^{(k)}$$, $$\hat{h}_0^{(k)}$$, $$\hat{\boldsymbol{b}}^{(k+1)})$$, obtain updated estimates of the mean parameters $$\boldsymbol{\theta}^{(k+1)}$$ by maximizing the following adjusted profileh-likelihood as in Lee and Nelder (1996) with respect to $$\boldsymbol{\theta}$$: \[ p_{\boldsymbol{\theta}}= \Bigg(\ell_h- 0.5\log \Big|\frac{H(\ell_h,\boldsymbol{b})}{2\pi}\Big|\Bigg)\Bigg|_{\boldsymbol{\xi}=\boldsymbol{\xi}^{(k)},\; h_0=h_0^{(k)}, \boldsymbol{b}=\boldsymbol{b}^{(k+1)}}, \quad \text{ where } H(l_h, b) = -\partial^2 \ell_h/ \partial \boldsymbol{b}^T\partial\boldsymbol{b}. \] Step 3: Given ($$\hat{h}_0^{(k)}$$, $$\hat{\boldsymbol{b}}^{(k+1)}$$, $$\hat{\boldsymbol{\theta}}^{(k+1)})$$, obtain updated estimates of the variance-covariance $$\boldsymbol{\xi}^{(k+1)}$$ by maximizing the following adjusted profile h-likelihood, \[ p_{\xi}= \Bigg(\ell_h- 0.5\log \Big|\frac{H[\ell_h,(\boldsymbol{\theta}, \boldsymbol{b})]}{2\pi}\Big|\Bigg)\Bigg|_{ h_0=h_0^{(k)}, \boldsymbol{\theta}=\boldsymbol{\theta}^{(k+1)},\boldsymbol{b}=\boldsymbol{b}^{(k+1)}}, \] where $$H[{\ell _h},({\bf{\theta }},{\bf{b}})] = - \left( {\matrix{ {{{{\partial ^2}{\ell _h}} \over {\partial {\bf{\theta }}\partial {{\bf{\theta }}^T}}}} \hfill & {{{{\partial ^2}{\ell _h}} \over {\partial {\bf{\theta }}\partial {{\bf{b}}^T}}}} \hfill \cr {{{{\partial ^2}{\ell _h}} \over {\partial {\bf{b}}\partial {{\bf{\theta }}^T}}}} \hfill & {{{{\partial ^2}{\ell _h}} \over {\partial {\bf{b}}\partial {{\bf{b}}^T}}}} \hfill \cr } } \right).$$ Step 4: Given $$(\hat{\boldsymbol{b}}^{(k+1)}$$, $$\hat{\boldsymbol{\theta}}^{(k+1)}$$, $$\hat{\boldsymbol{\xi}}^{(k+1)})$$, obtain an updated nonparametric estimate of the baseline hazard $$\hat{h}_0^{(k+1)}$$ as follows \[ h^{(k+1)}_0(t)= \sum_{i=1}^n \frac{ \delta_i I(s_i=t) }{\sum_{i=1}^n \exp\big(\mathbf{x}^T_{si}\boldsymbol{\gamma}_0^{(k+1)}+ (\boldsymbol{b}_{i}^{(k+1)})^T\boldsymbol{\gamma}_1^{(k+1)} \big)I(s_i\geq t)}, \] where $$I(\cdot)$$ is an indicator function. By iterating the above four steps until convergence, we can obtain approximate MLEs for the mean parameters, approximate REMLs for the variance–covariance parameters, empirical Bayes estimates of the random effects, and a nonparametric estimate of the baseline hazard function. To set starting values, we may first fit the models separately and then choose the resulting parameter estimates as the starting values for $$\boldsymbol{\xi}^{(0)}, \boldsymbol{\theta}^{(0)}, h_0^{(0)}$$. More details are described in Section 4.3. 
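For illustration only (with made-up inputs; not the authors' implementation), the nonparametric baseline-hazard update in Step 4 can be computed as follows:

```python
# Sketch of Step 4: h0(t) = sum_i delta_i I(s_i = t) / sum_i exp(eta_i) I(s_i >= t),
# where eta_i = x_si' gamma0 + b_i' gamma1 is each subject's current linear predictor.
import numpy as np

def baseline_hazard(times, delta, eta):
    """Return the estimated baseline-hazard increment at each distinct observed event time."""
    times, delta, eta = map(np.asarray, (times, delta, eta))
    h0 = {}
    for t in np.unique(times[delta == 1]):
        num = np.sum(delta[times == t])          # events occurring at time t
        den = np.sum(np.exp(eta)[times >= t])    # weighted risk set at time t
        h0[t] = num / den
    return h0

# toy data: observed times, event indicators, and fitted linear predictors
print(baseline_hazard(times=[2.0, 3.5, 3.5, 5.0], delta=[1, 0, 1, 1], eta=[0.1, -0.2, 0.4, 0.0]))
```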
The standard errors of the parameter estimates can be obtained based on \[ \widehat{Cov}(\hat{\boldsymbol{\theta}},\hat{\mathbf{b}}) =H^{-1}[\ell_h,(\hat{\boldsymbol{\theta}}, \hat{\boldsymbol{b}})] =-\left( \begin{array}{cc} \frac{\partial^2\ell_h}{\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}^T} & \frac{\partial^2\ell_h}{\partial \boldsymbol{\theta} \partial \boldsymbol{b}^T} \\ \frac{\partial^2\ell_h}{\partial \boldsymbol{b} \partial \boldsymbol{\theta}^T} & \frac{\partial^2\ell_h}{\partial \boldsymbol{b} \partial \boldsymbol{b}^T}\\ \end{array}\right)^{-1}\Bigg|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}, \boldsymbol{b}=\hat{\boldsymbol{b}}}. \] That is, the estimated variances of $$\hat{\boldsymbol{\theta}}$$ can be chosen to be the diagonal elements of the top left corner of the matrix $$\widehat{Cov}(\hat{\boldsymbol{\theta}},\hat{\mathbf{b}})$$ (Lee and Nelder, 1996; Ha and others, 2003). As mentioned in Section 1, the standard errors of parameter estimates may be under-estimated when the baseline hazard $$h_0(t)$$ is unspecified (Rizopoulos, 2012a; Hsieh and others, 2006). A bootstrap method for obtaining standard errors is a good choice, but it is computationally intensive. Thus, here we propose a new approach to estimate the standard errors of parameter estimates based on the adaptive Gauss–Hermite (aGH) method (Rizopoulos, 2012a; Hartzel and others, 2001; Pinheiro and Bates, 1995). The basic idea is as follows. After convergence of the above steps, we can approximate the score function of $$\boldsymbol{\theta}$$ for subject $$i$$ by the following: \[ \begin{split} S_i(\boldsymbol{\theta}) &= \frac{\partial }{\partial\boldsymbol{\theta}^T} \log \int \Big\{ f(\mathbf{y}_{i}|\mathbf{c}_{i}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) f(\mathbf{z}_i|\boldsymbol{b}_{2i},\boldsymbol{\alpha}) f(\mathbf{c}_{i}|\boldsymbol{b}_{3i},\boldsymbol{\eta}) f(\mathbf{s}_i,\boldsymbol{\delta}_i|h_0, \boldsymbol{b}_i,\boldsymbol{\gamma}_0,\boldsymbol{\gamma}_1) f(\boldsymbol{b}_i|\mathbf{\Sigma})\Big\}\; {\rm{d}}\,\boldsymbol{b}_i \\ &= \int f(\boldsymbol{b}_i|\mathbf{y}_i,\mathbf{c}_i, \mathbf{z}_i, \mathbf{s}_i,\boldsymbol{\delta}_i,\boldsymbol{\theta},\sigma,\mathbf{\Sigma},h_0)\times A(\boldsymbol{\theta},\boldsymbol{b}_i) {\rm{d}} \boldsymbol{b}_i\\ &\quad{} \propto \sum_{k=1}^{q^K} \Big\{\pi_k \times f(\boldsymbol{b}^{(k)}_i|\mathbf{y}_i,\mathbf{c}_i, \mathbf{z}_i, \mathbf{s}_i,\boldsymbol{\delta}_i,\boldsymbol{\theta},\hat{\sigma},\hat{\mathbf{\Sigma}}_i,\hat{h}_0) \times e^{\boldsymbol{z}_i^{(k)T}\boldsymbol{z}_i^{(k)}}\times A(\boldsymbol{\theta},\boldsymbol{b}_i^{(k)}) \Big\}, \end{split} \] where $$q$$ is the dimension of the random effects, $$K$$ is the number of quadrature points for each random effect, $$\pi_k$$ are weights for the original GH nodes $$\boldsymbol{z}_i^{(k)}$$, $$\boldsymbol{b}_i^{(k)}=\hat{\boldsymbol{b}}_i+\sqrt{2}\hat{\Omega}_i \boldsymbol{z}_i^{(k)}$$ with $$\hat{\Omega}_i$$ being the upper triangular factor of the Cholesky decomposition of $$\hat{\Sigma}_i=(-\partial^2 \ell_{hi} /\partial \boldsymbol{b}_i\partial \boldsymbol{b}_i^T)^{-1}|_{\boldsymbol{b}_i=\hat{\boldsymbol{b}}_i}$$, and $$A(\boldsymbol{\theta},\boldsymbol{b}_i^{(k)})= \partial \ell_{hi}/\partial \boldsymbol{\theta}^T|_{\sigma=\hat{\sigma},h_0=\hat{h}_0, \mathbf{\Sigma}=\hat{\mathbf{\Sigma}}_i, \boldsymbol{b}_i=\boldsymbol{b}_i^{(k)}}$$. 
Then, the standard errors of the parameter estimates can be estimated based on $$\widehat{Var}(\hat{\boldsymbol{\theta}})= \Big\{-\sum_{i=1}^n \frac{\partial S_i(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\Big\}^{-1}\Big|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}}$$. In practice, we can calculate $$\partial S_i(\boldsymbol{\theta})/\partial\boldsymbol{\theta}$$ numerically using the central difference approximation (Rizopoulos, 2012a). 4. Data analysis 4.1. HIV vaccine data and new time variables In this section, we analyze the VAX004 data set described in Section 2, based on the proposed models and methods. Our objective is to check if individual-specific longitudinal characteristics of immune responses are associated with the risk of HIV infection. A comprehensive analysis may be infeasible due to space limitation, but we will focus on the essential features of the data. Since the immune response variables are mostly highly correlated, we choose two variables, "MNGNE8" and "NAb", which may represent the key features of the longitudinal immune response data. Note that some variables are often conveniently converted to binary data for simpler clinical interpretations. Here, we let $$Z_{ij}$$ be the dichotomized MNGNE8 data such that $$Z_{ij}=1$$ if the MNGNE8 value of individual $$i$$ at time $$t_{ij}$$ is larger than the sample median 0.57 and $$Z_{ij}=0$$ otherwise. Let $$Y_{ij}$$ be the original NAb value of individual $$i$$ at time $$t_{ij}$$. Recall that 27% of the original NAb values are below this variable's LLOQ (i.e. left truncated). A unique feature of vaccine trial data is that the longitudinal immune response data typically exhibit periodic patterns, due to repeated administration of the vaccine. This can be clearly seen in Figure 1. Statistical modeling must incorporate these features. Here we use a simple periodic function $$sin(\cdot)$$ to empirically capture the periodic patterns and further define the following time variables (in months): (i) the time from the beginning of the study to the current scheduled measurement time, denoted by $$t_{ij}$$; (ii) the time from the most recent immunization to the current scheduled measurement time, denoted by $$t_{d_{ij}}$$ (so $$t_{d_{ij}} \le t_{ij}$$); and (iii) the time between two consecutive vaccine administrations, denoted by $$\Delta_{ij}$$, so there will be at least one $$t_{d_{ij}}$$ and one $$t_{ij} $$ between $$\Delta_{ij}$$ and $$\Delta_{i,j+1}$$. For measurement time $$t_{ij}$$ scheduled after the final vaccination, we define $$\Delta_{ij}$$ as the time between the final vaccination and the final measurement time. These different time variables are needed in modeling the longitudinal trajectories. Figure 3 gives an example of how different time variables are defined for a randomly chosen participant $$i$$. Recall that vaccinations are scheduled at months 0, 1, 6, 12, 18, 24, 30, and the study ends at month 36. For this participant $$i$$, s/he receives the first four vaccinations, but then drops out from the study before receiving the fifth vaccination. There are eight measurements over time in total, denoted by the cross symbols, where the measurement times may be different from the vaccination times. Suppose that the sixth measurement is taken at month 9, i.e. 
$$t_{i6}=9$$, then we have $$t_{d_{i6}}=t_{i6}-6=3$$, the difference between the sixth measurement time and the latest vaccination time (Vac 3 at month 6) for this participant, and $$\Delta_{i6}=12-6=6$$, since the sixth measurement happens between the third vaccination (Vac 3 at month 6) and the fourth vaccination (Vac 4 at month 12). To avoid very large or small parameter estimates, we also re-scale the times as follows: $$t_{d_{ij}}^*=t_{d_{ij}}*30/7$$ (in weeks) and $$t^*_{ij}=t_{ij}/12$$ (in years). Fig. 3. View largeDownload slide Illustration of three time variables in VAX004. The cross symbols indicate the measurement times of subject $$i$$. The dashed vertical lines show the scheduled times of vaccinations and the end of study (i.e. month 0, 1, 6, 12, 18, 24, 30, 36), where the black dashed lines represent the times when subject $$i$$ received vaccines and the gray dashed lines represent the times when subject $$i$$ missed the scheduled vaccinations. The arrow lines represent the time periods $$t_{ij},t_{d_{ij}}, \Delta_{ij}$$ of the sixth and eighth measurements with $$j=6$$ and $$j=8$$, respectively. Fig. 3. View largeDownload slide Illustration of three time variables in VAX004. The cross symbols indicate the measurement times of subject $$i$$. The dashed vertical lines show the scheduled times of vaccinations and the end of study (i.e. month 0, 1, 6, 12, 18, 24, 30, 36), where the black dashed lines represent the times when subject $$i$$ received vaccines and the gray dashed lines represent the times when subject $$i$$ missed the scheduled vaccinations. The arrow lines represent the time periods $$t_{ij},t_{d_{ij}}, \Delta_{ij}$$ of the sixth and eighth measurements with $$j=6$$ and $$j=8$$, respectively. 4.2. Models Based on rationales discussed in Sections 3 and 4.1, we consider empirical models for the continuous and binary longitudinal data and survival model. The longitudinal models are selected based on AIC values (see details in Section 3 in the Supplementary material available at Biostatistics online). For the NAb data with 27% truncation, we model the untruncated data by using the LME model \begin{equation} Y_{ij} \;| \; {C_{ij}=0} = \beta_0+\beta_1t^*_{ij} + \beta_2t^{*2}_{ij}+\beta_3\sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+ \beta_4 risk1_i + \beta_5risk2_i+ d_1 b_{1i}+\epsilon_{ij}, \label{real2} \end{equation} (4.1) where $$risk1_i$$ and $$risk2_i$$ are categories 1 and 2 of baseline behavioral risk score, the random effects $$b_{1i} \sim N(0,1)$$, $$d_1$$ is the variance parameter, $$\epsilon_{ij}$$ follows a truncated normal distribution with mean 0 and variance $$\sigma^2$$, and $$Y_{ij}|{C_{ij}=0}$$ follows a truncated normal distribution. To ensure identifiability of the models, we assume that $$d_1>0$$. We only consider a random intercept in the model because adding more random effects does not substantially reduce AIC values while making the models more complicated. We also model the truncation indicator, $$C_{ij}$$, of NAb to find possible associations of truncation with the time variables and other covariates and to predict the truncation status of NAb at times when NAb values are unavailable. 
The selected model is given as below, \begin{equation} \text{logit}(P(C_{ij}=1))=\eta_0+\eta_1 t^*_{ij} +\eta_2 t^{*2}_{ij} +\eta_3 \sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+\eta_4 risk1_i+\eta_5 risk2_i+\eta_6 b_{1i}, \label{real3} \end{equation} (4.2) which shares the same random effect as the NAb model (4.1), since these two processes seem to be highly correlated with each other. In many studies, the $$Y_{ij}$$ and $$C_{ij}$$ values are measured sparsely and we can use model (4.2) to predict the truncation status of $$Y_{ij}$$ at times when Y-measurements are unavailable. For the binary MNGNE8 data, variable selections by AIC values lead to the model \begin{equation} \text{logit}(P(Z_{ij}=1))=\alpha_0+ \alpha_1 t_{ij}+ \alpha_2 \sin\left(\frac{\pi}{\Delta_{ij}}\times t_{d_{ij}}\right)+ \alpha_3 t^*_{d_{ij}}+\alpha_4 b_{1i}+ d_2b_{2i} t_{ij}, \label{real1} \end{equation} (4.3) where $$d_2$$ is the variance parameter with $$d_2>0$$ and the individual characteristics are incorporated via random slope $$b_{2i}\sim N(0,1)$$ and random intercept $$b_{1i}$$ shared by models (4.1)–(4.2). The association among the longitudinal models is incorporated through shared and correlated random effects from different models: $$\mathbf{b}_i=(b_{1i},b_{2i})^T\sim N(0,\Sigma)$$, with $$\Sigma = \left( {\matrix{ 1 \hfill & {{r_{12}}} \hfill \cr {{r_{12}}} \hfill & 1 \hfill \cr } } \right)$$ and $${-1\leq r_{12}\leq 1}$$. Note that the random effect $$b_{1i}$$ is shared by all the longitudinal models, since all the immune response longitudinal data exhibit similar individual-specific patterns and the random effect $$b_{1i}$$ for the continuous NAb data best summarizes these patterns. For example, when a participant has a high baseline measurement of NAb, s/he likely also has a high baseline value of MNGNE8 and a low baseline probability that NAb is left truncated. The survival model for the time to HIV infection is given by the "shared-parameter" model \begin{equation} h_i(t| x_{i},\mathbf{b}_i)=h_0(t)\exp\Big\{\gamma_0 x_i + \gamma_1{\rm{risk}}1_i+\gamma_2{\rm{risk}}2_i+\gamma_3 b_{1i}+\gamma_4b_{2i} \Big\}, \label{real4} \end{equation} (4.4) where $$x_i$$ is the measurement of GNE8_CD4 (i.e. blocking of the binding of the GNE8 HIV-1 gp120 Env protein to soluble CD4) for individual $$i$$ on the first day of the study after the first immunization, rescaled to have a mean of 0 and a standard deviation of 1. We call $$x_i$$ the standardized baseline GNE8_CD4. Since the analysis in this section is exploratory in nature, for simplicity we ignore other covariates. 4.3. Parameter estimates, model diagnostics, and new findings We estimate model parameters using the proposed h-likelihood method. As a comparison, we also use the two-step method, which fits each longitudinal model separately and obtains random effect estimates in the first step and then in the second step the random effects in the Cox model are simply substituted by their estimates from the first step. The results of the two-step method are obtained using the R packages lme4 and survival. 
The drawbacks of the two-step method are: (i) it may under-estimate the standard errors of the parameter estimates in the survival model, since it fails to incorporate the estimation uncertainty in the first step; (ii) it fails to incorporate the associations among the longitudinal variables; and (iii) it may lead to biased estimates of longitudinal model parameters when longitudinal data are terminated by event times and/or truncated longitudinal data are simply replaced by the LLOQ or half this limit (Wu, 2009).

Table 1 summarizes the estimation results based on the above two methods. Algorithms based on the h-likelihood method were terminated when the relative change became less than $$10^{-2}$$ in the estimates or $$10^{-3}$$ in the approximated log-likelihood. Since our main objective is to investigate if individual-specific characteristics of the longitudinal immune responses are associated with the risk of HIV infection, we mainly focus on $$\hat{\gamma}_3$$ and $$\hat{\gamma}_4$$ in the survival model (4.4), as these parameters link the random effects to the hazard of HIV infection.

Table 1. Estimates of all model parameters in VAX004

Estimates of mean parameters:

| Model | Par | Two-step: Est | Two-step: SE | Two-step: p-value | H-likelihood: Est | H-likelihood: SE$$^{a}$$ (SE$$^{b}$$) | H-likelihood: p-value$$^{a}$$ (p-value$$^{b}$$) |
|---|---|---|---|---|---|---|---|
| LME model (4.1) for NAb | $$\beta_0$$ | 1.57 | 0.05 | <0.01 | 2.35 | 0.06 (0.07) | <0.01 (<0.01) |
| | $$\beta_1$$ | 1.89 | 0.05 | <0.01 | 0.95 | 0.06 (0.08) | <0.01 (<0.01) |
| | $$\beta_2$$ | -0.54 | 0.02 | <0.01 | -0.31 | 0.02 (0.03) | <0.01 (<0.01) |
| | $$\beta_3$$ | 0.55 | 0.05 | <0.01 | 1.46 | 0.07 (0.10) | <0.01 (<0.01) |
| | $$\beta_4$$ | 0.004 | 0.04 | 0.93 | -0.04 | 0.02 (0.05) | 0.15 (0.49) |
| | $$\beta_5$$ | -0.27 | 0.12 | 0.03 | -0.10 | 0.08 (0.18) | 0.18 (0.56) |
| Truncation model (4.2) | $$\eta_0$$ | 2.09 | 0.17 | <0.01 | 1.94 | 0.20 (0.24) | <0.01 (<0.01) |
| | $$\eta_1$$ | -6.52 | 0.31 | <0.01 | -6.27 | 0.29 (0.39) | <0.01 (<0.01) |
| | $$\eta_2$$ | 1.71 | 0.11 | <0.01 | 1.65 | 0.10 (0.14) | <0.01 (<0.01) |
| | $$\eta_3$$ | -0.64 | 0.22 | <0.01 | -0.61 | 0.22 (0.30) | <0.01 (0.04) |
| | $$\eta_4$$ | -0.09 | 0.14 | 0.50 | -0.03 | 0.14 (0.22) | 0.84 (0.90) |
| | $$\eta_5$$ | 1.15 | 0.38 | <0.01 | 0.96 | 0.37 (0.64) | 0.01 (0.13) |
| GLMM (4.3) for MNGNE8 | $$\alpha_0$$ | -1.60 | 0.10 | <0.01 | -1.68 | 0.10 (0.14) | <0.01 (<0.01) |
| | $$\alpha_1$$ | 0.11 | 0.01 | <0.01 | 0.14 | 0.01 (0.01) | <0.01 (<0.01) |
| | $$\alpha_2$$ | 1.76 | 0.19 | <0.01 | 1.82 | 0.18 (0.26) | <0.01 (<0.01) |
| | $$\alpha_3$$ | -0.05 | 0.01 | <0.01 | -0.05 | 0.01 (0.02) | <0.01 (<0.01) |
| Survival model (4.4) | $$\gamma_0$$ | -0.72 | 0.23 | <0.01 | -0.79 | 0.24 (0.37) | <0.01 (0.03) |
| | $$\gamma_1$$ | 0.70 | 0.56 | 0.21 | -0.28 | 0.39 (0.69) | 0.48 (0.69) |
| | $$\gamma_2$$ | 1.46 | 1.14 | 0.20 | 1.86 | 1.07 (1.33) | 0.08 (0.16) |
| | $$\gamma_3$$ | -0.01 | 0.24 | 0.96 | -1.69 | 0.32 (0.92) | <0.01 (0.07) |
| | $$\gamma_4$$ | 0.13 | 0.23 | 0.58 | 2.37 | 0.31 (0.80) | <0.01 (<0.01) |

Estimates of variance-covariance parameters:

| Par | Two-step: Est | H-likelihood: Est |
|---|---|---|
| $$\sigma$$ | 0.66 | 0.48 |
| $$d_1$$ | 0.22 | 0.54 |
| $$\eta_6$$ | -0.89 | -1.60 |
| $$\alpha_4$$ | 0.43 | 0.004 |
| $$d_2$$ | 0.05 | 0.19 |
| $$r_{12}$$ | 0.37 | 0.76 |

SE$$^{a}$$ and p-value$$^{a}$$: standard error and p-value based on the h-likelihood method. SE$$^{b}$$ and p-value$$^{b}$$: standard error and p-value based on the newly proposed method with 4 quadrature points.

From Table 1, we see that the two methods lead to quite different results, especially for the estimates of $$\gamma_3$$ and $$\gamma_4$$ in the survival model, which are our main focus in this analysis. For the two-step method, the estimates of $$\gamma_3$$ and $$\gamma_4$$ are near zero with confidence intervals including zero, not supporting that individual-specific immune response longitudinal trajectories are associated with the risk of HIV infection. However, these parameter estimates based on the proposed joint model with the h-likelihood method lead to different conclusions. Both $$\hat{\gamma}_3$$ and $$\hat{\gamma}_4$$ are highly significant based on the standard errors estimated by the joint model with the h-likelihood method (denoted as SE$$^{a}$$), suggesting that individual-specific immune response longitudinal trajectories are highly associated with the risk of HIV infection. Since the standard errors based on the h-likelihood method may be under-estimated (Hsieh and others, 2006; Rizopoulos, 2012b), as discussed earlier, we also calculate the standard errors using the proposed method based on the aGH method, and the results with four quadrature points are given as SE$$^{b}$$ in the table. We see that, based on the new standard errors, the p-value for testing $$H_0: \gamma_3 = 0$$ is slightly larger than $$0.05$$, while that for testing $$H_0: \gamma_4 = 0$$ is still highly significant. Therefore, we may conclude that individual-specific immune response longitudinal trajectories are associated with the risk of HIV infection. This conclusion is unavailable based on the two-step method. The negative estimate of $$\gamma_3$$ suggests that higher NAb values are associated with a lower risk of HIV infection, and the positive estimate of $$\gamma_4$$ suggests that large increases in MNGNE8 over time are associated with a higher risk of HIV infection. Specifically, there is an estimated 81.6% decrease (i.e. $$\exp(\hat{\gamma}_3)-1$$) in the hazard/risk with a one unit increase in the individual effect $$b_{1i}$$ and an estimated 10.6 times increase (i.e. $$\exp(\hat{\gamma}_4)$$) in the hazard/risk with a one unit increase in the individual-specific slope $$b_{2i}$$, holding other covariates constant. These findings are original, since they are unavailable based on the two-step method, and show the important contribution of the proposed joint model and the h-likelihood method. The joint model method and the two-step method have consistent significances of the parameters in the longitudinal models (4.1)–(4.3), except for $$\beta_5$$ and $$\eta_5$$.
By the two-step method, the tests for $$H_0: \beta_5 = 0$$ and for $$H_0: \eta_5 = 0$$ yield significant p-values, suggesting that participants with a baseline behavioral risk score in category 2 (i.e. risk2 = 1) have significantly lower NAb values than other participants. By the joint model method, on the other hand, such a negative association is not statistically significant. For model (4.1), the mean square error (MSE) based on the joint model is 0.296, while the MSE based on the two-step method is 0.403.

Model diagnostics are conducted to check the assumptions and goodness-of-fit of the models. The results are listed in Section 4 of the Supplementary material available at Biostatistics online. Overall, the assumptions hold and the models fit the data well. The data used in this example may be requested through a concept proposal to the owner of the data, Global Solutions for Infectious Diseases.

5. Simulation studies

In this section, we conduct three simulation studies to evaluate the proposed joint model with the h-likelihood method. The models and their true parameter values in the simulation studies are chosen to be similar to the estimated values in the models for the real data in the previous section.

5.1. Simulation study 1

Conditional on the random effects, the binary data $$Z_{ij}$$ are generated from a Bernoulli distribution with probabilities $$p_{ij} = \exp(\xi_{ij})( 1+\exp(\xi_{ij}))^{-1}$$, where $$\xi_{ij}=\alpha_0+\alpha_1t_{ij}+\alpha_2\sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+ \alpha_3 t^*_{d_{ij}}+\alpha_4 b_{1i}+d_2 b_{2i}t_{ij}$$. For the continuous data $$Y_{ij}$$, we first randomly generate $$y_{ij}$$ from a normal distribution $$Y_{ij}|b_{1i} \sim N\big(\mu_{ij}, \sigma^2\big)$$ with $$\mu_{ij}=\beta_0+\beta_1t^*_{ij}+\beta_2t^{*2}_{ij}+\beta_3\sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+d_1 b_{1i}$$. Then we create truncations so that $$y_{ij}$$ is observed if $$y_{ij}> {\rm LLOQ}$$ and truncated otherwise, and we choose LLOQ = 2. The random effects $$\mathbf{b}_i=(b_{1i},b_{2i})^T$$ are generated from a multivariate normal distribution $$N(0, \Sigma)$$. The true values of the parameters are set to be: $$\boldsymbol{\alpha}^{T}=(\alpha_0,\alpha_1,\alpha_2,\alpha_3,\alpha_4) =(-1.65, 0.15, 1.8, -0.05, 0.4)$$, $$\boldsymbol{\beta}^{T}=(\beta_0, \beta_1,\beta_2,\beta_3)=(2, 1, -0.3, 1.5)$$, $$\sigma=0.5$$, $$d_1=0.5$$, $$d_2=0.15$$, and $$\Sigma = \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}$$. The survival times $$T_{i}$$ are generated from a Weibull distribution with shape parameter $$15$$ and scale parameter $$800 \exp(\gamma_0 X_i + \gamma_1 b_{1i}+\gamma_2 b_{2i})^{-1/15}$$, where $$X_i$$ is a baseline covariate generated from the standard normal distribution. The non-informative censoring times $$\mathcal{C}_i$$ are generated from a Weibull distribution with shape parameter 5 and scale parameter $$1000$$. The true parameter values are given as $$\boldsymbol{\gamma}^T=(\gamma_0, \gamma_1,\gamma_2)= (-0.75, -1.5, 2)$$.

5.2. Simulation studies 2 and 3

To better evaluate the performance of the h-likelihood method, we conducted two additional simulation studies: (i) a joint model with higher dimensions of random effects (Study 2, four random effects); and (ii) a joint model with a parametric survival model (Study 3, a Weibull survival model). For the parametric joint model in Study 3, we also estimate the model parameters using the aGH method as a comparison. Due to space limitation, we put the details of these two simulation studies in Sections 5 and 6 of the Supplementary material available at Biostatistics online.
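Before turning to the results, the following R sketch illustrates how one replicate of Simulation Study 1 (Section 5.1) could be generated. The sample size, visit grid, and vaccination schedule used below are assumptions for illustration; only the model equations and true parameter values come from the setup above.

```r
set.seed(1)
n <- 200                                  # number of subjects (assumed)
t_obs <- seq(0.5, 36, by = 0.5)           # bi-weekly visit times in months (assumed grid)
vac <- c(0, 1, 6, 12, 18, 24, 30)         # assumed vaccination schedule, as in VAX004

# True parameter values from Section 5.1
alpha <- c(-1.65, 0.15, 1.8, -0.05, 0.4)
beta  <- c(2, 1, -0.3, 1.5)
sigma <- 0.5; d1 <- 0.5; d2 <- 0.15
gamma <- c(-0.75, -1.5, 2); LLOQ <- 2
Sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)

b <- MASS::mvrnorm(n, mu = c(0, 0), Sigma = Sigma)    # random effects (b1i, b2i)

sim_one <- function(i) {
  last   <- sapply(t_obs, function(t) max(vac[vac <= t]))
  nxt    <- sapply(t_obs, function(t) min(c(vac[vac > t], 36)))
  t_d    <- t_obs - last;  Delta   <- nxt - last
  t_star <- t_obs / 12;    td_star <- t_d * 30 / 7

  # binary outcome Z_ij ~ Bernoulli(p_ij)
  xi <- alpha[1] + alpha[2] * t_obs + alpha[3] * sin(pi / Delta * t_d) +
        alpha[4] * td_star + alpha[5] * b[i, 1] + d2 * b[i, 2] * t_obs
  z  <- rbinom(length(t_obs), 1, plogis(xi))

  # continuous outcome Y_ij, then left truncation at the LLOQ
  mu <- beta[1] + beta[2] * t_star + beta[3] * t_star^2 +
        beta[4] * sin(pi / Delta * t_d) + d1 * b[i, 1]
  y  <- rnorm(length(t_obs), mu, sigma)
  c_ind <- as.integer(y <= LLOQ)
  y[c_ind == 1] <- NA                                  # truncated values are unobserved

  # survival and (non-informative) censoring times
  x  <- rnorm(1)
  Ti <- rweibull(1, shape = 15,
                 scale = 800 * exp(gamma[1] * x + gamma[2] * b[i, 1] +
                                   gamma[3] * b[i, 2])^(-1 / 15))
  Ci <- rweibull(1, shape = 5, scale = 1000)
  list(long = data.frame(id = i, time = t_obs, y = y, c = c_ind, z = z),
       surv = data.frame(id = i, x = x, stime = min(Ti, Ci),
                         status = as.integer(Ti <= Ci)))
}

sim_dat <- lapply(seq_len(n), sim_one)
```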
5.3. Simulation results and discussions

We compare the performance of the methods based on the relative bias and MSE of the parameter estimates, which are defined as follows (say, for parameter $$\beta$$): relative bias (%) of $$\hat{\beta}=\Big|\Big(\frac{1}{M}\sum_{m=1}^M \hat{\beta}^{(m)}-\beta\Big)\Big/\beta\Big|\times 100\%$$, and relative MSE (%) of $$\hat{\beta}= \frac{1}{M} \sum_{m=1}^M (\hat{\beta}^{(m)}-\beta)^2/|\beta| \times 100\%$$, where $$\hat{\beta}^{(m)}$$ is the estimate of $$\beta$$ in simulation iteration $$m$$, $$M$$ is the total number of repetitions, and $$\beta$$ is the true parameter value.

Table 2 summarizes the results of Simulation Study 1 ($$M=500$$) when the longitudinal measurements are collected bi-weekly. The proposed h-likelihood method outperforms the two-step method, as it returns much less biased estimates for most of the parameters. As for the bias in $$\hat{\boldsymbol{\alpha}}$$, it is known that the h-likelihood method may perform less satisfactorily for logistic mixed effects models (Kuk and Cheng, 1999; Waddington and Thompson, 2004). The standard errors of the $$\hat{\gamma}_j$$'s seem to be underestimated by the h-likelihood method. This problem has been reported elsewhere (Hsieh and others, 2006; Rizopoulos, 2012b). However, our newly proposed method based on the aGH method with 4 quadrature points returns coverage probabilities much closer to the nominal coverage probabilities. The results of Simulation Studies 2 and 3 are given in Tables S5 and S6 in the Supplementary material available at Biostatistics online. The main conclusions are consistent with those from Table 2. In Simulation Study 3, the $$\hat{\gamma}_j$$'s based on the h-likelihood method are much less biased, though with slightly larger MSEs, than those based on the aGH method, while for $$\beta_3$$, $$\alpha_0$$, and $$\alpha_2$$, the aGH method has less biased estimates than the h-likelihood method (see Table S6 in the Supplementary material available at Biostatistics online). Synthesizing the simulation results, we conclude that the proposed h-likelihood method, with the new approach of estimating the standard errors, performs reasonably well. Its performance remains consistent with higher dimensions of random effects and parametric survival models. Although it is sometimes less accurate than the aGH method, it is computationally much more efficient.
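The relative bias and relative MSE criteria defined at the start of this subsection can be computed with a small helper like the one below; this is only a sketch, where est is assumed to be a vector of the $$M$$ estimates of a single parameter.

```r
# Relative bias (%) and relative MSE (%) as defined in Section 5.3.
rel_bias <- function(est, truth) abs((mean(est) - truth) / truth) * 100
rel_mse  <- function(est, truth) mean((est - truth)^2) / abs(truth) * 100

# Example: 500 hypothetical estimates of beta_1 (true value 1)
est <- rnorm(500, mean = 1.00, sd = 0.04)
c(rBias = rel_bias(est, 1), rMSE = rel_mse(est, 1))
```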
Table 2. Simulation results with bi-weekly longitudinal measurements based on the two-step (TS) method and the h-likelihood (HL) method

| Model | Par | True value | Est (TS / HL) | SSE (TS / HL) | rBias % (TS / HL) | rMSE % (TS / HL) | Coverage % (TS / HL$$^{a}$$ / HL$$^{b}$$) |
|---|---|---|---|---|---|---|---|
| (4.1) | $$\beta_0$$ | 2.00 | 2.14 / 2.01 | 0.04 / 0.06 | 7.05 / 0.55 | 1.07 / 0.16 | 15 / 94 / 98 |
| | $$\beta_1$$ | 1.00 | 0.88 / 1.00 | 0.04 / 0.04 | 11.55 / 0.20 | 1.47 / 0.20 | 13 / 94 / 99 |
| | $$\beta_2$$ | -0.30 | -0.26 / -0.30 | 0.02 / 0.02 | 13.07 / 0.00 | 0.60 / 0.12 | 32 / 94 / 99 |
| | $$\beta_3$$ | 1.50 | 1.41 / 1.49 | 0.02 / 0.02 | 5.70 / 0.37 | 0.52 / 0.04 | 1 / 94 / 98 |
| (4.3) | $$\alpha_0$$ | -1.65 | -1.64 / -1.64 | 0.10 / 0.10 | 0.57 / 0.59 | 0.62 / 0.63 | 94 / 95 / 99 |
| | $$\alpha_1$$ | 0.15 | 0.15 / 0.15 | 0.02 / 0.02 | 2.02 / 2.45 | 0.18 / 0.17 | 94 / 89 / 96 |
| | $$\alpha_2$$ | 1.80 | 1.80 / 1.78 | 0.11 / 0.10 | 0.26 / 0.84 | 0.61 / 0.61 | 96 / 95 / 99 |
| | $$\alpha_3$$ | -0.05 | -0.05 / -0.05 | 0.01 / 0.01 | 1.80 / 2.68 | 0.06 / 0.06 | 93 / 93 / 97 |
| (4.4) | $$\gamma_0$$ | -0.75 | -0.68 / -0.74 | 0.16 / 0.17 | 9.44 / 0.74 | 4.00 / 3.76 | 88 / 85 / 97 |
| | $$\gamma_1$$ | -1.50 | -1.10 / -1.44 | 0.22 / 0.29 | 26.91 / 3.99 | 13.96 / 5.92 | 38 / 65 / 84 |
| | $$\gamma_2$$ | 2.00 | 1.53 / 2.02 | 0.23 / 0.31 | 23.40 / 1.10 | 13.70 / 4.88 | 34 / 69 / 87 |

HL$$^{a}$$: coverage probability based on the h-likelihood method. HL$$^{b}$$: coverage probability based on the newly proposed method for standard errors with 4 quadrature points.

6. Discussion

In this article, we have considered a joint model for mixed types of longitudinal data with left truncation and a survival model, and proposed a new method to handle the left truncation in longitudinal data. A main advantage of this method, compared with existing methods in the literature (e.g. Hughes, 1999), is that it does not make any untestable distributional assumption for the truncated data that are below a measurement instrument's LLOQ. Different types of longitudinal data are assumed to be associated via shared and correlated random effects. We have also proposed an h-likelihood method for approximate joint likelihood inference, which is computationally much more efficient than the aGH method. Moreover, we have proposed a new method to better estimate the standard errors of parameter estimates from the h-likelihood method. On a MacBook Pro (version 10.11.4), the average computing times of the h-likelihood method were 2.7 min for the semiparametric joint model with 2 random effects and 21.9 min for the semiparametric joint model with 4 random effects, respectively. For the parametric joint model with 2 random effects, the average running time of the h-likelihood method was 9.1 min, much faster than the aGH method, which takes 28.4 min.
Analysis of the real HIV vaccine data based on the proposed method shows that the individual-specific characteristics of the longitudinal immune responses, summarized by the random effects in the models, are highly associated with the risk of HIV infection. This finding is quite interesting and helpful for designing future HIV vaccine studies. We have also proposed a model for the left-truncation indicator of the longitudinal immune response data and showed that the left-truncation status follows certain patterns as functions of time. Such a model can be used to predict the left-truncation status (below-LLOQ status) of some longitudinal immune response values when measurement schedules are infrequent or sparse.

The joint model in this article may be extended in several directions. For example, the Cox model may be replaced by an accelerated failure time model or a survival model for interval-censored data or competing risks data. The association among different types of longitudinal processes may also be modeled in other ways, such as shared latent processes. In addition, the dropouts in the real data may be associated with longitudinal patterns, so we may consider incorporating missing data mechanisms into the joint models in future research. Research for these extensions will be reported separately.

7. Software

Software in the form of R code and a sample input data set are available at https://github.com/oliviayu/HHJMs.

Supplementary material

Supplementary material is available at http://biostatistics.oxfordjournals.org.

Acknowledgments

The authors thank the reviewers for their thoughtful comments, which helped improve the article greatly. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or BMGF. The authors thank the participants, investigators, and sponsors of the VAX004 trial, including Global Solutions for Infectious Diseases. Conflict of Interest: None declared.

Funding

National Institute Of Allergy And Infectious Diseases of the National Institutes of Health (NIH) (Award Numbers R37AI054165 and UM1AI068635); and Bill and Melinda Gates Foundation (BMGF) (Award Number OPP1110049).

References

Barrett, J., Diggle, P., Henderson, R. and Taylor-Robinson, D. (2015). Joint modelling of repeated measurements and time-to-event outcomes: flexible model specification and exact likelihood inference. Journal of the Royal Statistical Society: Series B, 77, 131–148.

Bernhardt, P. W., Wang, H. J. and Zhang, D. (2014). Flexible modeling of survival data with covariates subject to detection limits via multiple imputation. Computational Statistics and Data Analysis, 69, 81–91.

Chen, Q., May, R. C., Ibrahim, J. G., Chu, H. and Cole, S. R. (2014). Joint modeling of longitudinal and survival data with missing and left-censored time-varying covariates. Statistics in Medicine, 33, 4560–4576.

Elashoff, R. M., Li, G. and Li, N. (2015). Joint Modeling of Longitudinal and Time-to-Event Data. Boca Raton, FL: Chapman & Hall/CRC.

Flynn, N., Forthal, D., Harro, C., Judson, F., Mayer, K., Para, M. and Gilbert, P., the rgp120 HIV Vaccine Study Group (2005). Placebo-controlled phase 3 trial of recombinant glycoprotein 120 vaccine to prevent HIV-1 infection. Journal of Infectious Diseases, 191, 654–65.

Fu, R. and Gilbert, P. B. (2017).
Joint modeling of longitudinal and survival data with the Cox model and two-phase sampling. Lifetime Data Analysis, 23, 136–159.

Gilbert, P. B., Peterson, M. L., Follmann, D., Hudgens, M. G., Francis, D. P., Gurwith, M., Heyward, W. L., Jobes, D. V., Popovic, V., Self, S. G. and others (2005). Correlation between immunologic responses to a recombinant glycoprotein 120 vaccine and incidence of HIV-1 infection in a phase 3 HIV-1 preventive vaccine trial. Journal of Infectious Diseases, 191, 666–677.

Ha, I. D., Park, T. and Lee, Y. (2003). Joint modelling of repeated measures and survival time data. Biometrical Journal, 45, 647–658.

Hartzel, J., Agresti, A. and Caffo, B. (2001). Multinomial logit random effects models. Statistical Modelling, 1, 81–102.

Hsieh, F., Tseng, Y.-K. and Wang, J.-L. (2006). Joint modeling of survival and longitudinal data: likelihood approach revisited. Biometrics, 62, 1037–1043.

Hughes, J. P. (1999). Mixed effects models with censored data with application to HIV RNA levels. Biometrics, 55, 625–629.

Król, A., Ferrer, L., Pignon, J.-P., Proust-Lima, C., Ducreux, M., Bouché, O., Michiels, S. and Rondeau, V. (2016). Joint model for left-censored longitudinal data, recurrent events and terminal event: predictive abilities of tumor burden for cancer evolution with application to the FFCD 2000–05 trial. Biometrics, 72, 907–916.

Kuk, A. Y. and Cheng, Y. W. (1999). Pointwise and functional approximations in Monte Carlo maximum likelihood estimation. Statistics and Computing, 9, 91–99.

Lawrence Gould, A., Boye, M. E., Crowther, M. J., Ibrahim, J. G., Quartey, G., Micallef, S. and Bois, F. Y. (2015). Joint modeling of survival and longitudinal non-survival data: current methods and issues. Report of the DIA Bayesian joint modeling working group. Statistics in Medicine, 34, 2181–2195.

Lee, Y. and Nelder, J. A. (1996). Hierarchical generalized linear models. Journal of the Royal Statistical Society: Series B, 58, 619–678.

Lee, Y., Nelder, J. A. and Pawitan, Y. (2006). Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood. Boca Raton, FL: Chapman & Hall/CRC.

Mehrotra, K. G., Kulkarni, P. M., Tripathi, R. C. and Michalek, J. E. (2000). Maximum likelihood estimation for longitudinal data with truncated observations. Statistics in Medicine, 19, 2975–2988.

Molas, M., Noh, M., Lee, Y. and Lesaffre, E. (2013). Joint hierarchical generalized linear models with multivariate Gaussian random effects. Computational Statistics and Data Analysis, 68, 239–250.

Pinheiro, J. C. and Bates, D. M. (1995). Approximations to the log-likelihood function in the nonlinear mixed-effects model. Journal of Computational and Graphical Statistics, 4, 12–35.

Rizopoulos, D. (2012a). Fast fitting of joint models for longitudinal and event time data using a pseudo-adaptive Gaussian quadrature rule. Computational Statistics and Data Analysis, 56, 491–501.

Rizopoulos, D. (2012b).
Joint Models for Longitudinal and Time-to-Event Data: With Applications in R. Boca Raton, FL: Chapman & Hall/CRC.

Rizopoulos, D., Verbeke, G. and Lesaffre, E. (2009). Fully exponential Laplace approximations for the joint modelling of survival and longitudinal data. Journal of the Royal Statistical Society: Series B, 71, 637–654.

Taylor, J. M., Park, Y., Ankerst, D. P., Proust-Lima, C., Williams, S., Kestin, L., Bae, K., Pickles, T. and Sandler, H. (2013). Real-time individual predictions of prostate cancer recurrence using joint models. Biometrics, 69, 206–213.

Waddington, D. and Thompson, R. (2004). Using a correlated probit model approximation to estimate the variance for binary matched pairs. Statistics and Computing, 14, 83–90.

Wu, L. (2002). A joint model for nonlinear mixed-effects models with censoring and covariates measured with error, with application to AIDS studies. Journal of the American Statistical Association, 97, 955–964.

Wu, L. (2009). Mixed Effects Models for Complex Data. Boca Raton, FL: Chapman & Hall/CRC.

Wulfsohn, M. S. and Tsiatis, A. A. (1997). A joint model for survival and longitudinal data measured with error. Biometrics, 53, 330–339.

Zhu, H., Ibrahim, J. G., Chi, Y.-Y. and Tang, N. (2012). Bayesian influence measures for joint models for longitudinal and survival data. Biometrics, 68, 954–964.

© The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: [email protected]. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).
Yu, Tingting, Wu, Lang and Gilbert, Peter B. Biostatistics, Advance Article, September 23, 2017.
DOI: 10.1093/biostatistics/kxx047
SUMMARY

In HIV vaccine studies, a major research objective is to identify immune response biomarkers measured longitudinally that may be associated with risk of HIV infection. This objective can be assessed via joint modeling of longitudinal and survival data. Joint models for HIV vaccine data are complicated by the following issues: (i) left truncations of some longitudinal data due to lower limits of quantification; (ii) mixed types of longitudinal variables; (iii) measurement errors and missing values in longitudinal measurements; (iv) computational challenges associated with likelihood inference. In this article, we propose a joint model of complex longitudinal and survival data and a computationally efficient method for approximate likelihood inference to address the foregoing issues simultaneously. In particular, our model does not make unverifiable distributional assumptions for truncated values, which is different from methods commonly used in the literature. The parameters are estimated based on the h-likelihood method, which is computationally efficient and offers approximate likelihood inference. Moreover, we propose a new approach to estimate the standard errors of the h-likelihood based parameter estimates by using an adaptive Gauss–Hermite method. Simulation studies show that our methods perform well and are computationally efficient. A comprehensive data analysis is also presented.

1. Introduction

In preventive HIV vaccine efficacy trials, participants are randomized to receive a series of vaccinations or placebos and are followed until the day of being diagnosed with HIV infection or until the end of study follow-up. We are often interested in the times to HIV infection. Meanwhile, blood samples are repeatedly collected over time for each participant in order to measure immune responses induced by the vaccine, such as CD4 T cell responses. A major research interest in HIV vaccine studies is to identify potential immune response biomarkers for HIV infection. For example, for many infectious diseases antibodies induced by a vaccine can recognize and kill a pathogen before it establishes infection; therefore high antibody levels are often associated with a lower risk of pathogen infection. Since longitudinal trajectories of some immune responses are often associated with the risk of HIV infection, in statistical analysis it is useful to jointly model the longitudinal and survival data. Moreover, such joint models can be used to address measurement errors and non-ignorable missing data in the longitudinal data.

There has been active research on joint models of longitudinal and survival data in recent years. Lawrence Gould and others (2015) have given a comprehensive review in this field. Rizopoulos and others (2009) proposed a computational approach based on the Laplace approximation for joint models of continuous longitudinal response and time-to-event outcome. Bernhardt and others (2014) discussed a multiple imputation method for handling left-truncated longitudinal variables used as covariates in AFT survival models. Król and others (2016) considered joint models of a left-truncated longitudinal variable, recurrent events, and a terminal event. The truncated values of the longitudinal variable were assumed to follow the same normal distributions as the untruncated values. Other recent work includes Fu and Gilbert (2017), Barrett and others (2015), Elashoff and others (2015), Chen and others (2014), Taylor and others (2013), Rizopoulos (2012b), and Zhu and others (2012).
Analysis of HIV vaccine trial data offers the following new challenges: (i) some longitudinal data may be left truncated by a lower limit of quantification (LLOQ) of the biomarker assay, and the common approach of assuming that truncated values follow parametric distributions is unverifiable and may be unreasonable for vaccine trial data; (ii) the longitudinal multivariate biomarker response data are intercorrelated and may be of mixed types such as binary and continuous; (iii) the longitudinal data may exhibit periodic patterns over time, due to repeated administrations of the HIV vaccine; (iv) some longitudinal biomarkers may have measurement errors and missing data; and (v) the computation associated with likelihood inference can be very intensive and challenging. A comprehensive statistical analysis of HIV vaccine trial data requires us to address all the foregoing issues simultaneously. Therefore, despite the extensive literature on joint models, new statistical models and methods are in demand.

In this article, we propose innovative models and methods to address the above issues. The contributions of the paper are: (i) when longitudinal data are left truncated, we propose a new method that does not assume any parametric distributions for the truncated values, which is different from existing approaches in the literature that unrealistically assume truncated values to follow the same distributions as those for the observed values; (ii) for longitudinal data with left truncation, the observed values are assumed to follow a truncated normal distribution; (iii) we incorporate the associations among several longitudinal responses of mixed types by linking the longitudinal models with shared and correlated random effects (Rizopoulos, 2012b); and (iv) we address the computational challenges of likelihood inference by proposing a computationally very efficient approximate method based on the h-likelihood method (Lee and others, 2006). It is known that, when the baseline hazard in the Cox survival model is completely unspecified, the standard errors of the parameter estimates in the joint models may be underestimated (Rizopoulos, 2012a; Hsieh and others, 2006). To address this issue, we also propose a new approach to estimate the standard errors of parameter estimates.

The article is organized as follows. In Section 2, we introduce the HIV vaccine trial data that motivate the research. In Section 3, we describe the proposed models and methods, which address all of the issues discussed above simultaneously. Section 4 presents analysis of the HIV vaccine trial data. Section 5 shows simulation studies to evaluate the proposed models and methods. We conclude the article with some discussion in Section 6.

2. The HIV vaccine data

Our research is motivated by the VAX004 trial, a 36-month efficacy study of a candidate vaccine to prevent HIV-1 infection that contained two recombinant gp120 Envelope proteins from the MN and GNE8 HIV-1 strains (Flynn and others, 2005). One of the main objectives in the trial was to assess immune response biomarkers measured in vaccine recipients for their association with the incidence of HIV infection. This was addressed using plasma samples collected at the immunization visits, 2 weeks after the immunization visits, and the final visit (i.e. months 0, 0.5, 1, 1.5, 6, 6.5, …, 30, 30.5, 36) to measure several immune response variables in vaccine recipients.
Eight immune response variables were measured in total, most of which are highly correlated with each other and some of which have up to 16% missing data. We focus on a subset of these variables that may be representative and have low rates of missing data. In particular, we use the NAb and the MNGNE8 variables, where NAb is the titer of neutralizing antibodies to the MN strain of the HIV-1 gp120 Env protein and MNGNE8 is the average level of binding antibodies (measured by ELISA) to the MN and GNE8 HIV-1 gp120 Env proteins (Gilbert and others, 2005). Due to space limitation, the details of the clinical research questions, participant selection procedure, and other immune response variables are described in Section 1 of the Supplementary material available at Biostatistics online.

The data set we consider has 194 participants in total, among whom 21 participants acquired HIV infection during the trial, with time of infection diagnosis ranging from day 43 to day 954 and an event rate of 10.8%. The average number of repeated measurements over time is 12.6 per participant. Moreover, NAb has an LLOQ of 1.477, and about 27% of NAb measurements are below the LLOQ (left-truncated). To minimize the potential for bias due to missing data on the immune response biomarkers, the models adjust for the dominant baseline prognostic factor for HIV-1 infection, the baseline behavioral risk score, which is grouped into three categories: 0, 1, or 2 for risk scores of 0, 1–3, and 4–7, respectively.

Figure 1 shows the longitudinal trajectories of the immune responses of a few randomly selected participants, where the left-truncated values in NAb are substituted by the LLOQ. The value of an immune response typically increases sharply right after each vaccination, and then starts to decrease about 2 weeks after the vaccination. Such patterns are shown as the reverse sawtooth waves in Figure 1. We see that participants HIV-infected later on seem to have lower values of MNGNE8 and NAb than those uninfected by the end of the study. In particular, for MNGNE8, there seem to be clear differences between HIV-infected and uninfected participants, separated by the median value. The figures show that the longitudinal patterns of some immune responses seem to be associated with HIV infection, motivating inference via joint models of the longitudinal and survival data. In addition, some immune responses are highly correlated over time and are of mixed types, so we should also incorporate the associations among different types of longitudinal variables. Moreover, due to substantial variations across subjects, mixed effects models may be useful. The random effects in mixed effects models can serve several purposes: (i) they represent individual variations or individual-specific characteristics of the participants; (ii) they incorporate the correlation among longitudinal measurements for each participant; and (iii) they may be viewed as summaries of the individual profiles. Therefore, mixed effects models seem to be a reasonable choice for modeling the HIV vaccine trial data.

Fig. 1. Longitudinal trajectories of two immune response variables for a few randomly selected VAX004 vaccine recipients, where the solid lines represent pre-infection trajectories of participants who acquired HIV infection and the dashed lines represent trajectories for participants who never acquired HIV infection. The left-truncated values in NAb are substituted by the LLOQ of 1.477.
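As a small illustration of the risk-score grouping described above, the baseline behavioral risk score could be categorized and converted to the indicator variables $$risk1_i$$ and $$risk2_i$$ used later in models (4.1), (4.2), and (4.4) along the following lines; the data frame and column name (base, risk_score) are hypothetical.

```r
# Group the raw behavioral risk score (0-7) into categories 0, 1, 2
# corresponding to scores 0, 1-3, and 4-7, respectively.
base$risk_cat <- cut(base$risk_score, breaks = c(-Inf, 0, 3, 7),
                     labels = c("0", "1", "2"))

# Indicator (dummy) variables used as covariates in the models
base$risk1 <- as.integer(base$risk_cat == "1")
base$risk2 <- as.integer(base$risk_cat == "2")
```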
More results of the exploratory data analysis are given in Sections 2 and 3 of the Supplementary material available at Biostatistics online, including the summary statistics of the immune responses, the Kaplan–Meier plot of the time to HIV infection, and longitudinal trajectories of NAb and MNGNE8 with the time variable shifted and aligned at the event times.

3. Joint models and inference

3.1. The longitudinal, truncation, and survival models

3.1.1. Models for longitudinal data of mixed types.

In the following, we denote by $$Y$$ a random variable, $$y$$ its observed value, and $$f(y)$$ a generic density function, with similar notation for other variables. For simplicity of presentation, we consider two correlated longitudinal variables, $$Y$$ and $$Z$$, where $$Y$$ is continuous and subject to left truncation due to an LLOQ, and $$Z$$ is binary or a count (e.g. a dichotomized variable or the number of CD4 T cells). The models can be easily extended to more than two longitudinal processes. Let $$d$$ be the LLOQ of $$Y$$ and $$C$$ be the truncation indicator of $$Y$$ such that $$C=1$$ if $$y\leq d$$ and $$C=0$$ otherwise. For the continuous longitudinal variable $$Y$$, after possibly some transformations such as a log-transformation, we may assume that the untruncated data of $$Y$$ follow a truncated normal distribution. We consider a linear or nonlinear mixed effects (LME or NLME) model for the observed values of $$y_{ij}$$ given that $$y_{ij}\geq d$$, that is, \begin{equation} \begin{split} & Y_{ij} | Y_{ij} \ge d = \Psi_{ij}^T\boldsymbol{\beta}+ \Phi_{ij}^T\boldsymbol{b}_{1i} + \epsilon_{ij}, \quad \mbox{ or } \quad Y_{ij} | Y_{ij} \ge d = g(\Psi_{ij}, \Phi_{ij}, \boldsymbol{\beta}, \boldsymbol{b}_{1i})+\epsilon_{ij}, \\ & \boldsymbol{b}_{1i}\stackrel{iid}{\sim} N(\boldsymbol{0}, \mathbf{D}_1), \qquad \boldsymbol{\epsilon}_{i} \stackrel{iid}{\sim} N(\boldsymbol{0}, R_i), \qquad i=1,\ldots,n, \; j=1,\ldots,n_i,\\ \end{split} \label{model:cont} \end{equation} (3.1) where $$Y_{ij}$$ is the longitudinal variable of participant $$i$$ at time $$t_{ij}$$, $$\Psi_{ij}$$ and $$\Phi_{ij}$$ are vectors of covariates, $$\boldsymbol{\beta}$$ contains fixed parameters, $$\boldsymbol{b}_{1i}$$ contains random effects, $$g(\cdot)$$ is a known nonlinear function, $$\mathbf{D}_1$$ and $$R_i$$ are covariance matrices, and $$\boldsymbol{\epsilon}_{i}=(\epsilon_{i1}, \cdots,\epsilon_{in_i})^T$$ are random errors independent of $$\boldsymbol{b}_{1i}$$. An LME model is usually an empirical model, while an NLME model is a mechanistic model widely used in HIV viral dynamics (Wu, 2009). We assume that $$R_i=\sigma^2 I_i$$, i.e. the within-individual repeated measurements are independent conditional on the random effects. In model (3.1), the observed $$Y_{ij}$$'s given the random effects and the condition "$$Y_{ij} \ge d$$" (or $$C_{ij}=0$$) are assumed to be normally distributed, so it is reasonable to assume that the $$Y_{ij}$$'s follow a truncated normal distribution (Mehrotra and others, 2000). For the truncated $$Y_{ij}$$ values (i.e.
$$Y_{ij} \le d$$), any parametric distributional assumptions are unverifiable, although most existing literature makes such assumptions for convenience of likelihood inference. Moreover, the truncated values are unlikely to follow normal distributions in most cases, since the $$Y_{ij}$$ values at least must be positive while a normal random variable can take any real value. Thus, it is more reasonable to assume the truncated normal distribution for the observed $$Y_{ij}$$ values and leave the distribution of the truncated $$Y_{ij}$$ values completely unspecified. The density function of $$Y_{ij}|\boldsymbol{b}_{1i}, c_{ij}=0$$ is given by (Mehrotra and others, 2000) \begin{equation} f(y_{ij}|c_{ij}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma)= \frac{1}{\sigma} \phi\Big(\frac{y_{ij}-\mu_{ij}}{\sigma}\Big)\Big[1-\Phi\Big(\frac{d-\mu_{ij}}{\sigma}\Big)\Big]^{-1}, \label{pdf:tnorm} \end{equation} (3.2) where $$\mu_{ij}= E(y_{ij}|\boldsymbol{b}_{1i})$$, $$\phi(\cdot)$$ is the probability density function of the standard normal distribution $$N(0, 1)$$, and $$\Phi(\cdot)$$ is the corresponding cumulative distribution function.

For the discrete longitudinal variable $$Z$$, we consider the following generalized linear mixed effects model (GLMM) \begin{equation} q(E(Z_{ik}))= \mathbf{x}^T_{ik}\boldsymbol{\alpha}+\mathbf{u}^T_{ik}\boldsymbol{b}_{2i},\qquad i=1,\ldots,n, \; k=1,\ldots,m_i, \label{model:binary} \end{equation} (3.3) where $$Z_{ik}$$ is the longitudinal variable of participant $$i$$ at time $$t_{ik}$$, $$q(\cdot)$$ is a known link function, $$\mathbf{x}_{ik}$$ and $$\mathbf{u}_{ik}$$ are vectors of covariates, $$\boldsymbol{\alpha}$$ are fixed parameters, $$\boldsymbol{b}_{2i}$$ is a vector of random effects with $$\boldsymbol{b}_{2i}\sim N(\boldsymbol{0},\mathbf{D}_2)$$, and $$Z_{ik}|\boldsymbol{b}_{2i}$$ is assumed to follow a distribution in the exponential family.

The longitudinal data may contain intermittent missing data and dropouts. We assume that the intermittent missing data and dropouts are missing at random. The fact that the missing data are biomarkers measuring immune responses to the vaccine (and not variables such as toxicity that could obviously be related to missed visits or dropout), and that the vaccine has a large safety data base showing it is not toxic, makes this assumption plausible.

3.1.2. A new approach for truncated longitudinal data.

When the $$Y$$ values are truncated, a common approach in the literature is to assume that the truncated values continue to follow the normal distribution assumed for the observed values (Hughes, 1999; Wu, 2002). However, such an assumption is unverifiable and may be unreasonable in some cases, as noted earlier. In particular, when the truncation rate is high, the normality assumption is even less reasonable, as the truncation rate can be much larger than the left-tail probability of the normal distribution for the observed data. For example, Figure 2 displays histograms of NAb for two participants, where the left-truncated data are substituted by the LLOQ of 1.477. The truncation rates, 27% for participant 1 and 33% for participant 2, seem much larger than the left-tail probabilities of the assumed distributions for the observed data.

Fig. 2. Histograms of NAb of two VAX004 vaccine recipients, where the left-truncated data are substituted by the LLOQ of 1.477.
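For concreteness, the truncated-normal log-density in (3.2), used for the observed (untruncated) responses, can be written as a small R function; this is only a sketch of the density itself, not of the authors' estimation code.

```r
# Log-density of Y | Y > d for Y ~ N(mu, sigma^2) truncated on the left at d,
# as in equation (3.2).
log_dtnorm <- function(y, mu, sigma, d) {
  dnorm(y, mean = mu, sd = sigma, log = TRUE) -
    pnorm(d, mean = mu, sd = sigma, lower.tail = FALSE, log.p = TRUE)
}

# Example: contribution of one observed NAb value above the LLOQ of 1.477
log_dtnorm(y = 2.3, mu = 2.0, sigma = 0.5, d = 1.477)
```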
Here we propose a different approach: we do not assume any parametric distributions for the truncated values, but instead we conceptually view the truncated values as a point mass or cluster of unobserved values below the LLOQ without any distributional assumption. Note that, although the truncation status $$C_{ij}$$ can be determined by the $$Y_{ij}$$ values, in HIV vaccine studies, many biomarkers are measured infrequently over time, due to both budget and practical considerations, while some other variables can be measured more frequently. For this reason, when the $$Y$$ values are not measured, we can roughly predict the truncation status of the $$Y$$ value based on other measured variables that are associated with $$Y$$, including time. It is important to predict the truncation status of $$Y$$ values, since left-truncated $$Y$$ values have important implications (e.g. a positive immune response may be needed for protection by vaccination). A model for the truncation indicator $$C_{ij}$$ can help make reasonable predictions of $$Y_{ij}$$ when such predictions are needed. Therefore, we assume the following model for the truncation indicator $$C_{ij}$$: \begin{equation} \text{logit}(P(C_{ij}=1)) = {\bf w}^T_{ij}\boldsymbol{\eta} + {\bf v}^T_{ij} \boldsymbol{b}_{3i}, \qquad \boldsymbol{b}_{3i} \sim N(\boldsymbol{0}, \mathbf{D}_3), \label{model:censor} \end{equation} (3.4) where $${\bf w}^T_{ij}$$ and $${\bf v}^T_{ij}$$ contain covariates, $$\boldsymbol{\eta}$$ contains fixed parameters, and $$\boldsymbol{b}_{3i}$$ contains random effects. The contribution of the longitudinal data of $$Y$$ for individual $$i$$ to the likelihood given the random effects is $$f(y_{ij}|c_{ij}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) \times f(c_{ij}|\boldsymbol{b}_{3i},\boldsymbol{\eta}),$$ where $$f(y_{ij}|c_{ij}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma)$$ is given by (3.2) and $$P(c_{ij}=0|\boldsymbol{b}_{3i},\boldsymbol{\eta}) = (1+\exp({\bf w}^T_{ij}\boldsymbol{\eta} + {\bf v}^T_{ij} \boldsymbol{b}_{3i}))^{-1}$$.

Another use of model (3.4) is modeling non-ignorable or informative missing data in the longitudinal $$Y$$ data. When longitudinal data have both left-truncated data and non-ignorable missing data, we should consider two separate models similar to (3.4). Here we do not consider the issue of non-ignorable missing data, but the models and methods can be easily extended to handle missing data. In fact, left-truncated data may be viewed as non-ignorable missing data.

3.1.3. Association between mixed types of longitudinal variables.

Different immune response variables are typically highly correlated and may be of different types, such as one being continuous and another one being binary. The exact structures of the associations among different longitudinal variables may be complicated. However, we can reasonably assume that the variables are associated through shared or correlated random effects from different models. This is a reasonable assumption, since the random effects represent individual deviations from population averages and can be interpreted as unobserved or latent individual characteristics, such as individual genetic information or health status, which govern different longitudinal processes. This can be seen from Figure 1, where different immune response variables within the same individual exhibit similar patterns over time, including the truncation process.
Therefore, we assume that $$\boldsymbol{b}_i=(\boldsymbol{b}_{1i}^T, \boldsymbol{b}_{2i}^T, \boldsymbol{b}_{3i}^T )^T \sim N(0, \mathbf{\Sigma})$$, where $$\mathbf{\Sigma}$$ is an arbitrary covariance matrix. Note that we allow the random effects in the longitudinal models to be different since the longitudinal trajectories of different variables may exhibit different between-individual variations (as measured by random effects), especially for different types of longitudinal variables such as binary and continuous variables.

3.1.4. A Cox model for time-to-event data.

The times to HIV infection may be related to the longitudinal patterns of the immune responses and left-truncated statuses. The specific nature of this dependence may be complicated. There are several possibilities: (i) the infection time may depend on the current immune response values at infection times; (ii) the infection time may depend on past immune response values; and (iii) the infection time may depend on summaries or key characteristics of the longitudinal or truncation trajectories. Here we consider case (iii) for the following reasons: (a) the random effects may be viewed as summaries of individual-specific longitudinal trajectories; (b) the immune response values may be truncated due to lower detection limits; and (c) this approach is also widely used in the joint model literature. Since the random effects in the longitudinal models may be interpreted as "summaries" or individual-specific characteristics of the longitudinal processes, we may use random effects from the longitudinal models as "covariates" in the survival model. Such an approach is commonly used in the literature and is often called "shared parameter models" (Wulfsohn and Tsiatis, 1997; Rizopoulos, 2012b).

Let $$T_{i}^* $$ be the time to HIV infection, $$\mathcal{C}_i$$ be the right-censoring time, $$S_i=\min\{T_i^*, \mathcal{C}_i\}$$ be the observed time, and $$\delta_i=I(T^*_i\leq \mathcal{C}_i)$$ be the event indicator. We assume the censoring is non-informative and consider a Cox model for the observed survival data $$\{(s_i, \delta_{i}), i=1,\ldots,n\}$$, \begin{equation} h_i(t)=h_0(t)\exp\Big( \mathbf{x}^T_{si}\boldsymbol{\gamma}_0 +\boldsymbol{b}^T_i \boldsymbol{\gamma}_1 \Big), \label{model:cox} \end{equation} (3.5) where $$h_0(t)$$ is an unspecified baseline hazard function, $$\mathbf{x}_{si}$$ contains baseline covariates of individual $$i$$, and $$\boldsymbol{\gamma}_0$$ and $$\boldsymbol{\gamma}_1$$ are vectors of fixed parameters. In model (3.5), the parameters $$\boldsymbol{\gamma}_1$$ link the risk of HIV infection at time $$t$$ to the random effects in the longitudinal or truncation models, which allow us to check if individual-specific characteristics of the longitudinal immune responses are associated with the risk of HIV infection. We assume that the survival data and the longitudinal data are conditionally independent given the random effects.

3.2. An approximate method for likelihood inference

We consider the likelihood method for parameter estimation and inference for the above models. Let $$\boldsymbol{\theta}=(\boldsymbol{\beta}^T,\boldsymbol{\alpha}^T,\boldsymbol{\eta}^T,\boldsymbol{\gamma}_0^T,\boldsymbol{\gamma}_1^T)^T$$ be the collection of all mean parameters and $$\boldsymbol{\xi}=(\sigma, \text{vec}(\mathbf{\Sigma}))$$ be the collection of variance–covariance (dispersion) parameters.
The (joint) likelihood for all the observed longitudinal data and time-to-infection data is given by \[ L(\boldsymbol{\theta}, \boldsymbol{\xi}) = \prod_{i=1}^n \int \Big\{ f(\mathbf{y}_{i}|\mathbf{c}_{i}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) f(\mathbf{z}_i|\boldsymbol{b}_{2i},\boldsymbol{\alpha}) f(\mathbf{c}_{i}|\boldsymbol{b}_{3i},\boldsymbol{\eta}) f(\mathbf{s}_i,\boldsymbol{\delta}_i|h_0, \boldsymbol{b}_i,\boldsymbol{\gamma}_0,\boldsymbol{\gamma}_1) f(\boldsymbol{b}_i|\mathbf{\Sigma})\Big\}\; d\,\boldsymbol{b}_i. \]

Since the dimension of the random effects $$\boldsymbol{b}_i$$ is often high and some of the density functions can be highly complicated, evaluation of the above integral can be a major challenge. The common approach based on the Monte Carlo EM algorithm can present difficulties such as very slow convergence or even non-convergence (Hughes, 1999). Numerical integration methods such as Gauss–Hermite (GH) quadrature can also be computationally demanding. Therefore, in the following we consider an approximate method based on the h-likelihood, which can be computationally much more efficient while maintaining reasonable accuracy (Lee and others, 2006; Ha and others, 2003; Molas and others, 2013). Its performance in the current context will be evaluated by simulations later.

Essentially, the h-likelihood method uses Laplace approximations to the intractable integral in the likelihood. A first-order Laplace approximation can be viewed as GH quadrature with one node, so a Laplace approximation can be less accurate than GH quadrature with more than one node. However, when the dimension of the integral is high, a Laplace approximation can be computationally much less intensive than GH quadrature, whose computational cost grows exponentially with the dimension of the integral. Moreover, the h-likelihood method produces approximate MLEs for the mean parameters and approximate restricted maximum likelihood estimates (REMLs) for the variance–covariance (dispersion) parameters. For the models (3.1), (3.3)–(3.5) in the previous section, the log h-likelihood function is given by \begin{equation} \begin{split} \ell_{h}= \sum_{i=1}^n \ell_{hi} =&\sum_{i=1}^n\Big\{ \log f(\mathbf{y}_i|\mathbf{c}_i=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) +\log f(\mathbf{z}_i|\boldsymbol{b}_{2i},\boldsymbol{\alpha})+ \log f(\mathbf{c}_i|\boldsymbol{b}_{3i},\boldsymbol{\eta})\\ & +\log f(\mathbf{s}_i,\boldsymbol{\delta}_i|h_0,\boldsymbol{b}_i,\boldsymbol{\gamma}_0,\boldsymbol{\gamma}_1) + \log f(\boldsymbol{b}_i|\mathbf{\Sigma})\Big\}.\\ \end{split} \label{lik_joint} \end{equation} (3.6)

Based on Ha and others (2003) and Molas and others (2013), we propose the following estimation procedure via the h-likelihood.
Beginning with some starting values $$(\boldsymbol{\xi}^{(0)}, \boldsymbol{\theta}^{(0)}, h_0^{(0)})$$, we iterate the steps below:

Step 1: At iteration $$k$$, given $$(\hat{\boldsymbol{\xi}}^{(k)}$$, $$\hat{\boldsymbol{\theta}}^{(k)}$$, $$\hat{h}_0^{(k)})$$, obtain updated estimates of the random effects $$\boldsymbol{b}^{(k+1)}$$ by maximizing $$\ell_{h}$$ in (3.6) with respect to $$\boldsymbol{b}$$.

Step 2: Given $$(\hat{\boldsymbol{\xi}}^{(k)}$$, $$\hat{h}_0^{(k)}$$, $$\hat{\boldsymbol{b}}^{(k+1)})$$, obtain updated estimates of the mean parameters $$\boldsymbol{\theta}^{(k+1)}$$ by maximizing the following adjusted profile h-likelihood, as in Lee and Nelder (1996), with respect to $$\boldsymbol{\theta}$$: \[ p_{\boldsymbol{\theta}}= \Bigg(\ell_h- 0.5\log \Big|\frac{H(\ell_h,\boldsymbol{b})}{2\pi}\Big|\Bigg)\Bigg|_{\boldsymbol{\xi}=\boldsymbol{\xi}^{(k)},\; h_0=h_0^{(k)}, \boldsymbol{b}=\boldsymbol{b}^{(k+1)}}, \quad \text{ where } H(\ell_h, \boldsymbol{b}) = -\partial^2 \ell_h/ \partial \boldsymbol{b}^T\partial\boldsymbol{b}. \]

Step 3: Given ($$\hat{h}_0^{(k)}$$, $$\hat{\boldsymbol{b}}^{(k+1)}$$, $$\hat{\boldsymbol{\theta}}^{(k+1)})$$, obtain updated estimates of the variance–covariance parameters $$\boldsymbol{\xi}^{(k+1)}$$ by maximizing the following adjusted profile h-likelihood, \[ p_{\boldsymbol{\xi}}= \Bigg(\ell_h- 0.5\log \Big|\frac{H[\ell_h,(\boldsymbol{\theta}, \boldsymbol{b})]}{2\pi}\Big|\Bigg)\Bigg|_{ h_0=h_0^{(k)}, \boldsymbol{\theta}=\boldsymbol{\theta}^{(k+1)},\boldsymbol{b}=\boldsymbol{b}^{(k+1)}}, \] where \[ H[\ell_h,(\boldsymbol{\theta},\boldsymbol{b})] = -\left( \begin{array}{cc} \frac{\partial^2\ell_h}{\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}^T} & \frac{\partial^2\ell_h}{\partial \boldsymbol{\theta} \partial \boldsymbol{b}^T} \\ \frac{\partial^2\ell_h}{\partial \boldsymbol{b} \partial \boldsymbol{\theta}^T} & \frac{\partial^2\ell_h}{\partial \boldsymbol{b} \partial \boldsymbol{b}^T} \end{array}\right). \]

Step 4: Given $$(\hat{\boldsymbol{b}}^{(k+1)}$$, $$\hat{\boldsymbol{\theta}}^{(k+1)}$$, $$\hat{\boldsymbol{\xi}}^{(k+1)})$$, obtain an updated nonparametric estimate of the baseline hazard $$\hat{h}_0^{(k+1)}$$ as follows: \[ h^{(k+1)}_0(t)= \frac{ \sum_{i=1}^n \delta_i I(s_i=t) }{\sum_{j=1}^n \exp\big(\mathbf{x}^T_{sj}\boldsymbol{\gamma}_0^{(k+1)}+ (\boldsymbol{b}_{j}^{(k+1)})^T\boldsymbol{\gamma}_1^{(k+1)} \big)I(s_j\geq t)}, \] where $$I(\cdot)$$ is an indicator function.

By iterating the above four steps until convergence, we can obtain approximate MLEs for the mean parameters, approximate REMLs for the variance–covariance parameters, empirical Bayes estimates of the random effects, and a nonparametric estimate of the baseline hazard function. To set starting values, we may first fit the models separately and then choose the resulting parameter estimates as the starting values for $$\boldsymbol{\xi}^{(0)}, \boldsymbol{\theta}^{(0)}, h_0^{(0)}$$. More details are described in Section 4.3.
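As a rough illustration of how these four steps fit together, the following R sketch outlines one possible implementation skeleton. It is only a sketch under stated assumptions: log_h_lik, adj_profile_theta, adj_profile_xi, breslow_update, and n_random_effects are hypothetical placeholder functions standing for evaluations of (3.6), the two adjusted profile h-likelihoods, the Step 4 baseline-hazard update, and the dimension of $$\boldsymbol{b}$$, respectively; it is not the implementation used in the HHJMs package.

# Schematic skeleton of the four-step h-likelihood iteration (sketch only;
# log_h_lik, adj_profile_theta, adj_profile_xi, breslow_update and
# n_random_effects are hypothetical helper functions, not part of any package).
fit_hhjm <- function(data, theta0, xi0, h0_0, max_iter = 100, tol = 1e-2) {
  theta <- theta0
  xi    <- xi0
  h0    <- h0_0
  b     <- rep(0, n_random_effects(data))   # starting values for all random effects
  for (k in seq_len(max_iter)) {
    old <- c(theta, xi)
    # Step 1: update the random effects b by maximizing the h-likelihood (3.6)
    b <- optim(b, function(bb) -log_h_lik(theta, xi, bb, h0, data),
               method = "BFGS")$par
    # Step 2: update the mean parameters theta from the adjusted profile
    # h-likelihood p_theta
    theta <- optim(theta, function(tt) -adj_profile_theta(tt, xi, b, h0, data),
                   method = "BFGS")$par
    # Step 3: update the dispersion parameters xi from p_xi
    xi <- optim(xi, function(xx) -adj_profile_xi(theta, xx, b, h0, data),
                method = "BFGS")$par
    # Step 4: update the baseline hazard by the Breslow-type estimator
    h0 <- breslow_update(theta, b, data)
    # stopping rule: change in the parameter estimates
    if (max(abs(c(theta, xi) - old)) < tol) break
  }
  list(theta = theta, xi = xi, b = b, h0 = h0)
}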
The standard errors of the parameter estimates can be obtained based on \[ \widehat{Cov}(\hat{\boldsymbol{\theta}},\hat{\mathbf{b}}) =H^{-1}[\ell_h,(\hat{\boldsymbol{\theta}}, \hat{\boldsymbol{b}})] =-\left( \begin{array}{cc} \frac{\partial^2\ell_h}{\partial \boldsymbol{\theta} \partial \boldsymbol{\theta}^T} & \frac{\partial^2\ell_h}{\partial \boldsymbol{\theta} \partial \boldsymbol{b}^T} \\ \frac{\partial^2\ell_h}{\partial \boldsymbol{b} \partial \boldsymbol{\theta}^T} & \frac{\partial^2\ell_h}{\partial \boldsymbol{b} \partial \boldsymbol{b}^T}\\ \end{array}\right)^{-1}\Bigg|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}, \boldsymbol{b}=\hat{\boldsymbol{b}}}. \] That is, the estimated variances of $$\hat{\boldsymbol{\theta}}$$ can be taken as the diagonal elements of the top left block of the matrix $$\widehat{Cov}(\hat{\boldsymbol{\theta}},\hat{\mathbf{b}})$$ (Lee and Nelder, 1996; Ha and others, 2003).

As mentioned in Section 1, the standard errors of parameter estimates may be under-estimated when the baseline hazard $$h_0(t)$$ is unspecified (Rizopoulos, 2012a; Hsieh and others, 2006). A bootstrap method for obtaining standard errors is a good choice, but it is computationally intensive. Thus, here we propose a new approach to estimate the standard errors of the parameter estimates based on the adaptive Gauss–Hermite (aGH) method (Rizopoulos, 2012a; Hartzel and others, 2001; Pinheiro and Bates, 1995). The basic idea is as follows. After convergence of the above steps, we can approximate the score function of $$\boldsymbol{\theta}$$ for subject $$i$$ by the following: \[ \begin{split} S_i(\boldsymbol{\theta}) &= \frac{\partial }{\partial\boldsymbol{\theta}^T} \log \int \Big\{ f(\mathbf{y}_{i}|\mathbf{c}_{i}=0, \boldsymbol{b}_{1i},\boldsymbol{\beta},\sigma) f(\mathbf{z}_i|\boldsymbol{b}_{2i},\boldsymbol{\alpha}) f(\mathbf{c}_{i}|\boldsymbol{b}_{3i},\boldsymbol{\eta}) f(\mathbf{s}_i,\boldsymbol{\delta}_i|h_0, \boldsymbol{b}_i,\boldsymbol{\gamma}_0,\boldsymbol{\gamma}_1) f(\boldsymbol{b}_i|\mathbf{\Sigma})\Big\}\; {\rm{d}}\,\boldsymbol{b}_i \\ &= \int f(\boldsymbol{b}_i|\mathbf{y}_i,\mathbf{c}_i, \mathbf{z}_i, \mathbf{s}_i,\boldsymbol{\delta}_i,\boldsymbol{\theta},\sigma,\mathbf{\Sigma},h_0)\times A(\boldsymbol{\theta},\boldsymbol{b}_i) {\rm{d}} \boldsymbol{b}_i\\ &\quad{} \propto \sum_{k=1}^{K^q} \Big\{\pi_k \times f(\boldsymbol{b}^{(k)}_i|\mathbf{y}_i,\mathbf{c}_i, \mathbf{z}_i, \mathbf{s}_i,\boldsymbol{\delta}_i,\boldsymbol{\theta},\hat{\sigma},\hat{\mathbf{\Sigma}}_i,\hat{h}_0) \times e^{\boldsymbol{z}_i^{(k)T}\boldsymbol{z}_i^{(k)}}\times A(\boldsymbol{\theta},\boldsymbol{b}_i^{(k)}) \Big\}, \end{split} \] where $$q$$ is the dimension of the random effects, $$K$$ is the number of quadrature points for each random effect, $$\pi_k$$ are weights for the original GH nodes $$\boldsymbol{z}_i^{(k)}$$, $$\boldsymbol{b}_i^{(k)}=\hat{\boldsymbol{b}}_i+\sqrt{2}\hat{\Omega}_i \boldsymbol{z}_i^{(k)}$$ with $$\hat{\Omega}_i$$ being the upper triangular factor of the Cholesky decomposition of $$\hat{\Sigma}_i=(-\partial^2 \ell_{hi} /\partial \boldsymbol{b}_i\partial \boldsymbol{b}_i^T)^{-1}|_{\boldsymbol{b}_i=\hat{\boldsymbol{b}}_i}$$, and $$A(\boldsymbol{\theta},\boldsymbol{b}_i^{(k)})= \partial \ell_{hi}/\partial \boldsymbol{\theta}^T|_{\sigma=\hat{\sigma},h_0=\hat{h}_0, \mathbf{\Sigma}=\hat{\mathbf{\Sigma}}_i, \boldsymbol{b}_i=\boldsymbol{b}_i^{(k)}}$$. Then, the standard errors of the parameter estimates can be estimated based on $$\widehat{Var}(\hat{\boldsymbol{\theta}})= \Big\{-\sum_{i=1}^n \frac{\partial S_i(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\Big\}^{-1}\Big|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}}$$. In practice, we can calculate $$\partial S_i(\boldsymbol{\theta})/\partial\boldsymbol{\theta}$$ numerically using the central difference approximation (Rizopoulos, 2012a).
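A minimal sketch of this calculation, assuming a hypothetical helper score_aGH(theta, i) that returns the aGH approximation of $$S_i(\boldsymbol{\theta})$$ above, is:

# Standard errors from the numerically differentiated, aGH-approximated scores
# (sketch; score_aGH() is a hypothetical helper, eps is an arbitrary step size).
se_from_scores <- function(theta_hat, n, eps = 1e-4) {
  p <- length(theta_hat)
  J <- matrix(0, p, p)                     # accumulates sum_i dS_i/dtheta
  for (i in seq_len(n)) {
    for (j in seq_len(p)) {
      e <- numeric(p); e[j] <- eps
      # central-difference approximation of the j-th column of dS_i/dtheta
      J[, j] <- J[, j] +
        (score_aGH(theta_hat + e, i) - score_aGH(theta_hat - e, i)) / (2 * eps)
    }
  }
  V <- solve(-J)                           # estimated covariance of theta_hat
  sqrt(diag(V))                            # standard errors
}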
4. Data analysis

4.1. HIV vaccine data and new time variables

In this section, we analyze the VAX004 data set described in Section 2, based on the proposed models and methods. Our objective is to check if individual-specific longitudinal characteristics of immune responses are associated with the risk of HIV infection. A comprehensive analysis may be infeasible due to space limitations, but we will focus on the essential features of the data. Since the immune response variables are mostly highly correlated, we choose two variables, "MNGNE8" and "NAb", which may represent the key features of the longitudinal immune response data. Note that some variables are often conveniently converted to binary data for simpler clinical interpretations. Here, we let $$Z_{ij}$$ be the dichotomized MNGNE8 data such that $$Z_{ij}=1$$ if the MNGNE8 value of individual $$i$$ at time $$t_{ij}$$ is larger than the sample median 0.57 and $$Z_{ij}=0$$ otherwise. Let $$Y_{ij}$$ be the original NAb value of individual $$i$$ at time $$t_{ij}$$. Recall that 27% of the original NAb values are below this variable's LLOQ (i.e. left-truncated).

A unique feature of vaccine trial data is that the longitudinal immune response data typically exhibit periodic patterns, due to repeated administration of the vaccine. This can be clearly seen in Figure 1, and statistical modeling must incorporate these features. Here we use a simple periodic function $$\sin(\cdot)$$ to empirically capture the periodic patterns and further define the following time variables (in months): (i) the time from the beginning of the study to the current scheduled measurement time, denoted by $$t_{ij}$$; (ii) the time from the most recent immunization to the current scheduled measurement time, denoted by $$t_{d_{ij}}$$ (so $$t_{d_{ij}} \le t_{ij}$$); and (iii) the time between two consecutive vaccine administrations, denoted by $$\Delta_{ij}$$, so at least one $$t_{d_{ij}}$$ and one $$t_{ij}$$ fall within each inter-vaccination interval of length $$\Delta_{ij}$$. For a measurement time $$t_{ij}$$ scheduled after the final vaccination, we define $$\Delta_{ij}$$ as the time between the final vaccination and the final measurement time. These different time variables are needed in modeling the longitudinal trajectories.
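To make these definitions concrete, the following R sketch computes $$t_{d_{ij}}$$, $$\Delta_{ij}$$, and the rescaled versions introduced below from one participant's vaccination times; the full schedule used here is an illustrative assumption (actual participants may miss vaccinations, as in the example that follows).

# Sketch of the time-variable construction (illustrative only; `vac` holds the
# vaccination times actually received by one participant, in months).
vac   <- c(0, 1, 6, 12, 18, 24, 30)   # assumed full schedule for this illustration
t_end <- 36                           # final measurement time (end of study)

time_vars <- function(t, vac_times = vac, last_meas = t_end) {
  last_vac <- max(vac_times[vac_times <= t])       # most recent immunization
  later    <- vac_times[vac_times > last_vac]
  t_d      <- t - last_vac                         # time since last immunization
  if (length(later) > 0) {
    Delta <- min(later) - last_vac                 # gap between consecutive vaccinations
  } else {
    Delta <- last_meas - last_vac                  # after the final vaccination
  }
  c(t = t, t_d = t_d, Delta = Delta,
    t_d_star = t_d * 30 / 7,                       # rescaled to weeks
    t_star   = t / 12)                             # rescaled to years
}

time_vars(9)   # a measurement at month 9: t_d = 3 and Delta = 6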
Figure 3 gives an example of how the different time variables are defined for a randomly chosen participant $$i$$. Recall that vaccinations are scheduled at months 0, 1, 6, 12, 18, 24, 30, and the study ends at month 36. This participant receives the first four vaccinations, but then drops out of the study before receiving the fifth vaccination. There are eight measurements over time in total, denoted by the cross symbols, where the measurement times may differ from the vaccination times. Suppose that the sixth measurement is taken at month 9, i.e. $$t_{i6}=9$$; then $$t_{d_{i6}}=t_{i6}-6=3$$, the difference between the sixth measurement time and the latest vaccination time (Vac 3 at month 6) for this participant, and $$\Delta_{i6}=12-6=6$$, since the sixth measurement falls between the third vaccination (Vac 3 at month 6) and the fourth vaccination (Vac 4 at month 12). To avoid very large or small parameter estimates, we also re-scale the times as follows: $$t_{d_{ij}}^*=t_{d_{ij}}\times 30/7$$ (in weeks) and $$t^*_{ij}=t_{ij}/12$$ (in years).

Fig. 3. Illustration of the three time variables in VAX004. The cross symbols indicate the measurement times of subject $$i$$. The dashed vertical lines show the scheduled times of vaccinations and the end of study (i.e. months 0, 1, 6, 12, 18, 24, 30, 36), where the black dashed lines represent the times when subject $$i$$ received vaccines and the gray dashed lines represent the times when subject $$i$$ missed the scheduled vaccinations. The arrow lines represent the time periods $$t_{ij}, t_{d_{ij}}, \Delta_{ij}$$ of the sixth and eighth measurements, with $$j=6$$ and $$j=8$$, respectively.

4.2. Models

Based on the rationales discussed in Sections 3 and 4.1, we consider empirical models for the continuous and binary longitudinal data and a survival model. The longitudinal models are selected based on AIC values (see details in Section 3 of the Supplementary material available at Biostatistics online). For the NAb data with 27% truncation, we model the untruncated data using the LME model \begin{equation} Y_{ij} \;| \; {C_{ij}=0} = \beta_0+\beta_1t^*_{ij} + \beta_2t^{*2}_{ij}+\beta_3\sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+ \beta_4 risk1_i + \beta_5risk2_i+ d_1 b_{1i}+\epsilon_{ij}, \label{real2} \end{equation} (4.1) where $$risk1_i$$ and $$risk2_i$$ are categories 1 and 2 of the baseline behavioral risk score, the random effect $$b_{1i} \sim N(0,1)$$, $$d_1$$ is the variance parameter, $$\epsilon_{ij}$$ follows a truncated normal distribution with mean 0 and variance $$\sigma^2$$, and $$Y_{ij}|{C_{ij}=0}$$ follows a truncated normal distribution. To ensure identifiability of the models, we assume that $$d_1>0$$. We only consider a random intercept in the model because adding more random effects does not substantially reduce AIC values while making the models more complicated. We also model the truncation indicator, $$C_{ij}$$, of NAb to find possible associations of truncation with the time variables and other covariates, and to predict the truncation status of NAb at times when NAb values are unavailable.
The selected model is as follows: \begin{equation} \text{logit}(P(C_{ij}=1))=\eta_0+\eta_1 t^*_{ij} +\eta_2 t^{*2}_{ij} +\eta_3 \sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+\eta_4 risk1_i+\eta_5 risk2_i+\eta_6 b_{1i}, \label{real3} \end{equation} (4.2) which shares the same random effect as the NAb model (4.1), since these two processes appear to be highly correlated with each other. In many studies, the $$Y_{ij}$$ and $$C_{ij}$$ values are measured sparsely, and we can use model (4.2) to predict the truncation status of $$Y_{ij}$$ at times when Y-measurements are unavailable.

For the binary MNGNE8 data, variable selection based on AIC values leads to the model \begin{equation} \text{logit}(P(Z_{ij}=1))=\alpha_0+ \alpha_1 t_{ij}+ \alpha_2 \sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+ \alpha_3 t^*_{d_{ij}}+\alpha_4 b_{1i}+ d_2b_{2i} t_{ij}, \label{real1} \end{equation} (4.3) where $$d_2$$ is the variance parameter with $$d_2>0$$, and the individual characteristics are incorporated via the random slope $$b_{2i}\sim N(0,1)$$ and the random intercept $$b_{1i}$$ shared with models (4.1)–(4.2). The association among the longitudinal models is incorporated through shared and correlated random effects from the different models: $$\mathbf{b}_i=(b_{1i},b_{2i})^T\sim N(0,\Sigma)$$, with \[ \Sigma = \left( \begin{array}{cc} 1 & r_{12} \\ r_{12} & 1 \end{array}\right), \qquad -1\leq r_{12}\leq 1. \] Note that the random effect $$b_{1i}$$ is shared by all the longitudinal models, since all the immune response longitudinal data exhibit similar individual-specific patterns and the random effect $$b_{1i}$$ for the continuous NAb data best summarizes these patterns. For example, when a participant has a high baseline measurement of NAb, s/he likely also has a high baseline value of MNGNE8 and a low baseline probability that NAb is left-truncated.

The survival model for the time to HIV infection is given by the "shared-parameter" model \begin{equation} h_i(t| x_{i},\mathbf{b}_i)=h_0(t)\exp\Big\{\gamma_0 x_i + \gamma_1{\rm{risk}}1_i+\gamma_2{\rm{risk}}2_i+\gamma_3 b_{1i}+\gamma_4b_{2i} \Big\}, \label{real4} \end{equation} (4.4) where $$x_i$$ is the measurement of GNE8_CD4 (i.e. blocking of the binding of the GNE8 HIV-1 gp120 Env protein to soluble CD4) for individual $$i$$ on the first day of the study after the first immunization, rescaled to have a mean of 0 and a standard deviation of 1. We call $$x_i$$ the standardized baseline GNE8_CD4. Since the analysis in this section is exploratory in nature, for simplicity we ignore other covariates.

4.3. Parameter estimates, model diagnostics, and new findings

We estimate the model parameters using the proposed h-likelihood method. As a comparison, we also use the two-step method, which fits each longitudinal model separately to obtain random-effect estimates in the first step and then, in the second step, substitutes these estimates for the random effects in the Cox model. The results of the two-step method are obtained using the R packages lme4 and survival.
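For reference, one possible form of this two-step fit is sketched below. The data sets and variable names (long_dat, surv_dat, y, z, trunc, and the time and covariate columns) are hypothetical, and the exact formulas and handling of the truncated NAb values behind the published two-step results may differ; the truncation model (4.2) can be fitted analogously with glmer().

# A sketch of the two-step approach with lme4 and survival (illustrative only).
library(lme4)
library(survival)

# Step 1: fit the longitudinal models separately and extract random-effect estimates.
fit_y <- lmer(y ~ t_star + I(t_star^2) + sin(pi * t_d / Delta) + risk1 + risk2 +
                (1 | id),
              data = long_dat, subset = trunc == 0)      # untruncated NAb values only
fit_z <- glmer(z ~ t + sin(pi * t_d / Delta) + t_d_star + (1 + t | id),
               data = long_dat, family = binomial)
b1 <- ranef(fit_y)$id[, "(Intercept)"]                   # individual intercepts
b2 <- ranef(fit_z)$id[, "t"]                             # individual slopes

# Step 2: plug the estimated random effects into the Cox model as covariates.
surv_dat$b1 <- b1[match(surv_dat$id, rownames(ranef(fit_y)$id))]
surv_dat$b2 <- b2[match(surv_dat$id, rownames(ranef(fit_z)$id))]
fit_cox <- coxph(Surv(s, delta) ~ x + risk1 + risk2 + b1 + b2, data = surv_dat)
summary(fit_cox)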
The drawbacks of the two-step method are: (i) it may under-estimate the standard errors of the parameter estimates in the survival model, since it fails to incorporate the estimation uncertainty from the first step; (ii) it fails to incorporate the associations among the longitudinal variables; and (iii) it may lead to biased estimates of the longitudinal model parameters when longitudinal data are terminated by event times and/or truncated longitudinal values are simply replaced by the LLOQ or half this limit (Wu, 2009).

Table 1 summarizes the estimation results based on the above two methods. Algorithms based on the h-likelihood method were terminated when the relative change became less than $$10^{-2}$$ in the estimates or $$10^{-3}$$ in the approximated log-likelihood. Since our main objective is to investigate if individual-specific characteristics of the longitudinal immune responses are associated with the risk of HIV infection, we mainly focus on $$\hat{\gamma}_3$$ and $$\hat{\gamma}_4$$ in the survival model (4.4), as these parameters link the random effects to the hazard of HIV infection.

Table 1. Estimates of all model parameters in VAX004

(For each mean parameter: two-step method Est, SE, p-value; h-likelihood method Est, SE$$^{a}$$ (SE$$^{b}$$), p-value$$^{a}$$ (p-value$$^{b}$$).)

LME model (4.1) for NAb:
  $$\beta_0$$: 1.57, 0.05, <0.01; 2.35, 0.06 (0.07), <0.01 (<0.01)
  $$\beta_1$$: 1.89, 0.05, <0.01; 0.95, 0.06 (0.08), <0.01 (<0.01)
  $$\beta_2$$: -0.54, 0.02, <0.01; -0.31, 0.02 (0.03), <0.01 (<0.01)
  $$\beta_3$$: 0.55, 0.05, <0.01; 1.46, 0.07 (0.10), <0.01 (<0.01)
  $$\beta_4$$: 0.004, 0.04, 0.93; -0.04, 0.02 (0.05), 0.15 (0.49)
  $$\beta_5$$: -0.27, 0.12, 0.03; -0.10, 0.08 (0.18), 0.18 (0.56)
Truncation model (4.2):
  $$\eta_0$$: 2.09, 0.17, <0.01; 1.94, 0.20 (0.24), <0.01 (<0.01)
  $$\eta_1$$: -6.52, 0.31, <0.01; -6.27, 0.29 (0.39), <0.01 (<0.01)
  $$\eta_2$$: 1.71, 0.11, <0.01; 1.65, 0.10 (0.14), <0.01 (<0.01)
  $$\eta_3$$: -0.64, 0.22, <0.01; -0.61, 0.22 (0.30), <0.01 (0.04)
  $$\eta_4$$: -0.09, 0.14, 0.50; -0.03, 0.14 (0.22), 0.84 (0.90)
  $$\eta_5$$: 1.15, 0.38, <0.01; 0.96, 0.37 (0.64), 0.01 (0.13)
GLMM (4.3) for MNGNE8:
  $$\alpha_0$$: -1.60, 0.10, <0.01; -1.68, 0.10 (0.14), <0.01 (<0.01)
  $$\alpha_1$$: 0.11, 0.01, <0.01; 0.14, 0.01 (0.01), <0.01 (<0.01)
  $$\alpha_2$$: 1.76, 0.19, <0.01; 1.82, 0.18 (0.26), <0.01 (<0.01)
  $$\alpha_3$$: -0.05, 0.01, <0.01; -0.05, 0.01 (0.02), <0.01 (<0.01)
Survival model (4.4):
  $$\gamma_0$$: -0.72, 0.23, <0.01; -0.79, 0.24 (0.37), <0.01 (0.03)
  $$\gamma_1$$: 0.70, 0.56, 0.21; -0.28, 0.39 (0.69), 0.48 (0.69)
  $$\gamma_2$$: 1.46, 1.14, 0.20; 1.86, 1.07 (1.33), 0.08 (0.16)
  $$\gamma_3$$: -0.01, 0.24, 0.96; -1.69, 0.32 (0.92), <0.01 (0.07)
  $$\gamma_4$$: 0.13, 0.23, 0.58; 2.37, 0.31 (0.80), <0.01 (<0.01)

Estimates of variance–covariance parameters (two-step method; h-likelihood method):
  $$\sigma$$: 0.66; 0.48
  $$d_1$$: 0.22; 0.54
  $$\eta_6$$: -0.89; -1.60
  $$\alpha_4$$: 0.43; 0.004
  $$d_2$$: 0.05; 0.19
  $$r_{12}$$: 0.37; 0.76

SE$$^{a}$$ and p-value$$^{a}$$: standard error and p-value based on the h-likelihood method. SE$$^{b}$$ and p-value$$^{b}$$: standard error and p-value based on the newly proposed method with 4 quadrature points.

From Table 1, we see that the two methods lead to quite different results, especially for the estimates of $$\gamma_3$$ and $$\gamma_4$$ in the survival model, which are our main focus in this analysis. For the two-step method, the estimates of $$\gamma_3$$ and $$\gamma_4$$ are near zero with confidence intervals including zero, not supporting an association between individual-specific immune response longitudinal trajectories and the risk of HIV infection. However, the parameter estimates based on the proposed joint model with the h-likelihood method lead to different conclusions. Both $$\hat{\gamma}_3$$ and $$\hat{\gamma}_4$$ are highly significant based on the standard errors estimated by the joint model with the h-likelihood method (denoted SE$$^{a}$$), suggesting that individual-specific immune response longitudinal trajectories are highly associated with the risk of HIV infection. Since the standard errors based on the h-likelihood method may be under-estimated (Hsieh and others, 2006; Rizopoulos, 2012b), as discussed earlier, we also calculate the standard errors using the proposed method based on the aGH method; the results with four quadrature points are given as SE$$^{b}$$ in the table. Based on these new standard errors, the p-value for testing $$H_0: \gamma_3 = 0$$ is slightly larger than $$0.05$$, while that for testing $$H_0: \gamma_4 = 0$$ is still highly significant. Therefore, we may conclude that individual-specific immune response longitudinal trajectories are associated with the risk of HIV infection. This conclusion is not available from the two-step method. The negative estimate of $$\gamma_3$$ suggests that higher NAb values are associated with a lower risk of HIV infection, and the positive estimate of $$\gamma_4$$ suggests that large increases in MNGNE8 over time are associated with a higher risk of HIV infection. Specifically, holding other covariates constant, a one-unit increase in the individual effect $$b_{1i}$$ is associated with an estimated 81.6% decrease in the hazard (i.e. $$\exp(\hat{\gamma}_3)-1$$), and a one-unit increase in the individual-specific slope $$b_{2i}$$ multiplies the hazard by an estimated factor of 10.6 (i.e. $$\exp(\hat{\gamma}_4)$$). These findings are original, since they are unavailable from the two-step method, and they show the important contribution of the proposed joint model and the h-likelihood method. The joint model method and the two-step method give consistent significance conclusions for the parameters in the longitudinal models (4.1)–(4.3), except for $$\beta_5$$ and $$\eta_5$$.
By the two-step method, the tests for $$H_0: \beta_5 = 0$$ and $$H_0: \eta_5 = 0$$ yield significant p-values, suggesting that participants with a baseline behavioral risk score in category 2 (i.e. risk2 = 1) have significantly lower NAb values than other participants. By the joint model method, on the other hand, such a negative association is not statistically significant. For model (4.1), the mean square error (MSE) based on the joint model is 0.296, while the MSE based on the two-step method is 0.403. Model diagnostics were conducted to check the assumptions and the goodness-of-fit of the models; the results are given in Section 4 of the Supplementary material available at Biostatistics online. Overall, the assumptions hold and the models fit the data well. The data used in this example may be requested through a concept proposal to the owner of the data, Global Solutions for Infectious Diseases.

5. Simulation studies

In this section, we conduct three simulation studies to evaluate the proposed joint model with the h-likelihood method. The models and their true parameter values in the simulation studies are chosen to be similar to the estimated values in the models for the real data in the previous section.

5.1. Simulation study 1

Conditional on the random effects, the binary data $$Z_{ij}$$ are generated from a Bernoulli distribution with probabilities $$p_{ij} = \exp(\xi_{ij})( 1+\exp(\xi_{ij}))^{-1}$$, where $$\xi_{ij}=\alpha_0+\alpha_1t_{ij}+\alpha_2\sin(\frac{\pi}{\Delta_{ij}} t_{d_{ij}})+ \alpha_3 t^*_{d_{ij}}+\alpha_4 b_{1i}+d_2 b_{2i}t_{ij}$$. For the continuous data $$Y_{ij}$$, we first generate $$y_{ij}$$ from a normal distribution $$Y_{ij}|\mathbf{b}_{1i} \sim N\big(\mu_{ij}, \sigma^2\big)$$ with $$\mu_{ij}=\beta_0+\beta_1t^*_{ij}+\beta_2t^{*2}_{ij}+\beta_3\sin\left(\frac{\pi}{\Delta_{ij}} t_{d_{ij}}\right)+d_1 b_{1i}$$. Then we create truncation so that $$y_{ij}$$ is observed if $$y_{ij}> LLOQ$$ and truncated otherwise, and we choose LLOQ = 2. The random effects $$\mathbf{b}_i=(b_{1i},b_{2i})$$ are generated from a multivariate normal distribution $$N(0, \Sigma)$$. The true values of the parameters are: $$\boldsymbol{\alpha}^{T}=(\alpha_0,\alpha_1,\alpha_2,\alpha_3,\alpha_4) =(-1.65, 0.15, 1.8, -0.05, 0.4)$$, $$\boldsymbol{\beta}^{T}=(\beta_0, \beta_1,\beta_2,\beta_3)=(2, 1, -0.3, 1.5)$$, $$\sigma=0.5$$, $$d_1=0.5$$, $$d_2=0.15$$, and \[ \Sigma = \left( \begin{array}{cc} 1 & 0.5 \\ 0.5 & 1 \end{array}\right). \] The survival times $$T_{i}$$ are generated from a Weibull distribution with shape parameter $$15$$ and scale parameter $$800 \exp(\gamma_0 X_i + \gamma_1 b_{1i}+\gamma_2 b_{2i})^{-1/15}$$, where $$X_i$$ is a baseline covariate generated from the standard normal distribution. The non-informative censoring times $$\mathcal{C}_i$$ are generated from a Weibull distribution with shape parameter 5 and scale parameter $$1000$$. The true parameter values are $$\boldsymbol{\gamma}^T=(\gamma_0, \gamma_1,\gamma_2)= (-0.75, -1.5, 2)$$.
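A sketch of this data-generating mechanism in R is given below. The number of subjects, the bi-weekly measurement grid, and the vaccination schedule used to build the time variables are illustrative assumptions, and the linkage of longitudinal follow-up to the event and censoring times is omitted.

# Sketch of the Simulation Study 1 data-generating mechanism (illustrative
# assumptions: n subjects, bi-weekly measurement times, full vaccination schedule).
set.seed(1)
library(MASS)    # for mvrnorm

n     <- 200
alpha <- c(-1.65, 0.15, 1.8, -0.05, 0.4)
beta  <- c(2, 1, -0.3, 1.5)
gamma <- c(-0.75, -1.5, 2)
sigma <- 0.5; d1 <- 0.5; d2 <- 0.15; LLOQ <- 2
Sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)

b <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)     # random effects (b1, b2)
X <- rnorm(n)                                    # baseline covariate

# survival and censoring times (Weibull, shapes 15 and 5)
T_star <- rweibull(n, shape = 15,
                   scale = 800 * exp(gamma[1] * X + gamma[2] * b[, 1] +
                                     gamma[3] * b[, 2])^(-1 / 15))
Cens  <- rweibull(n, shape = 5, scale = 1000)
s     <- pmin(T_star, Cens)
delta <- as.numeric(T_star <= Cens)

# time variables on an assumed bi-weekly grid (months), vaccinations at the
# scheduled months; after the last vaccination, Delta runs to the end of study
vac      <- c(0, 1, 6, 12, 18, 24, 30); t_end <- 36
tt       <- seq(0.5, t_end, by = 0.5)
last_vac <- sapply(tt, function(t) max(vac[vac <= t]))
next_vac <- sapply(tt, function(t) {
  v <- vac[vac > t]; if (length(v) > 0) min(v) else t_end })
t_d      <- tt - last_vac
Delta    <- next_vac - last_vac
t_star   <- tt / 12
t_d_star <- t_d * 30 / 7

long_dat <- do.call(rbind, lapply(seq_len(n), function(i) {
  xi <- alpha[1] + alpha[2] * tt + alpha[3] * sin(pi * t_d / Delta) +
        alpha[4] * t_d_star + alpha[5] * b[i, 1] + d2 * b[i, 2] * tt
  z  <- rbinom(length(tt), 1, plogis(xi))
  mu <- beta[1] + beta[2] * t_star + beta[3] * t_star^2 +
        beta[4] * sin(pi * t_d / Delta) + d1 * b[i, 1]
  y  <- rnorm(length(tt), mu, sigma)
  data.frame(id = i, time = tt, z = z,
             y = ifelse(y > LLOQ, y, NA),        # left-truncated values unobserved
             trunc = as.numeric(y <= LLOQ))
}))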
5.2. Simulation studies 2 and 3

To better evaluate the performance of the h-likelihood method, we conducted two additional simulation studies: (i) a joint model with a higher dimension of random effects (Study 2, four random effects); and (ii) a joint model with a parametric survival model (Study 3, a Weibull survival model). For the parametric joint model in Study 3, we also estimate the model parameters using the aGH method for comparison. Due to space limitations, we give the details of these two simulation studies in Sections 5 and 6 of the Supplementary material available at Biostatistics online.

5.3. Simulation results and discussion

We compare the performance of the methods based on the relative bias and MSE of the parameter estimates, defined as follows (say, for a parameter $$\beta$$): relative bias (%) of $$\hat{\beta}=\Big|\big(\frac{1}{M}\sum_{m=1}^M \hat{\beta}^{(m)}-\beta\big)\big/\beta\Big|\times 100\%$$, and relative MSE (%) of $$\hat{\beta}= \frac{1}{M} \sum_{m=1}^M (\hat{\beta}^{(m)}-\beta)^2\big/|\beta| \times 100\%$$, where $$\hat{\beta}^{(m)}$$ is the estimate of $$\beta$$ in simulation repetition $$m$$, $$M$$ is the total number of repetitions, and $$\beta$$ is the true parameter value.
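For completeness, both metrics can be computed from the replicate estimates with a small helper such as:

# Relative bias (%) and relative MSE (%) of an estimator, as defined above,
# from a vector of replicate estimates `est` and the true value `truth`.
sim_metrics <- function(est, truth) {
  c(rel_bias = abs((mean(est) - truth) / truth) * 100,
    rel_mse  = mean((est - truth)^2) / abs(truth) * 100)
}
# e.g. sim_metrics(est = beta1_hat_over_replicates, truth = 1)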
Table 2 summarizes the results of Simulation Study 1 ($$M=500$$) when the longitudinal measurements are collected bi-weekly. The proposed h-likelihood method outperforms the two-step method, as it returns much less biased estimates for most of the parameters. As for the bias in $$\hat{\boldsymbol{\alpha}}$$, it is known that the h-likelihood method may perform less satisfactorily for logistic mixed effects models (Kuk and Cheng, 1999; Waddington and Thompson, 2004). The standard errors of the $$\hat{\gamma}_j$$'s seem to be underestimated by the h-likelihood method; this problem has been reported elsewhere (Hsieh and others, 2006; Rizopoulos, 2012b). However, our newly proposed method based on the aGH method with 4 quadrature points returns coverage probabilities much closer to the nominal levels. The results of Simulation Studies 2 and 3 are given in Tables S5 and S6 in the Supplementary material available at Biostatistics online; the main conclusions are consistent with those from Table 2. In Simulation Study 3, the $$\hat{\gamma}_j$$'s based on the h-likelihood method are much less biased, though with slightly larger MSEs, than those based on the aGH method, while for $$\beta_3$$, $$\alpha_0$$, and $$\alpha_2$$, the aGH method gives less biased estimates than the h-likelihood method (see Table S6 in the Supplementary material available at Biostatistics online). Synthesizing the simulation results, we conclude that the proposed h-likelihood method, with the new approach for estimating the standard errors, performs reasonably well. Its performance remains consistent with higher dimensions of random effects and with parametric survival models. Although it is sometimes less accurate than the aGH method, it is computationally much more efficient.

Table 2. Simulation results with bi-weekly longitudinal measurements based on the two-step (TS) method and the h-likelihood (HL) method

(For each parameter: true value; estimate (TS, HL); SSE (TS, HL); rBias % (TS, HL); rMSE % (TS, HL); coverage probability % (TS, HL$$^{a}$$, HL$$^{b}$$).)

Model (4.1):
  $$\beta_0$$: 2.00; 2.14, 2.01; 0.04, 0.06; 7.05, 0.55; 1.07, 0.16; 15, 94, 98
  $$\beta_1$$: 1.00; 0.88, 1.00; 0.04, 0.04; 11.55, 0.20; 1.47, 0.20; 13, 94, 99
  $$\beta_2$$: -0.30; -0.26, -0.30; 0.02, 0.02; 13.07, 0.00; 0.60, 0.12; 32, 94, 99
  $$\beta_3$$: 1.50; 1.41, 1.49; 0.02, 0.02; 5.70, 0.37; 0.52, 0.04; 1, 94, 98
Model (4.3):
  $$\alpha_0$$: -1.65; -1.64, -1.64; 0.10, 0.10; 0.57, 0.59; 0.62, 0.63; 94, 95, 99
  $$\alpha_1$$: 0.15; 0.15, 0.15; 0.02, 0.02; 2.02, 2.45; 0.18, 0.17; 94, 89, 96
  $$\alpha_2$$: 1.80; 1.80, 1.78; 0.11, 0.10; 0.26, 0.84; 0.61, 0.61; 96, 95, 99
  $$\alpha_3$$: -0.05; -0.05, -0.05; 0.01, 0.01; 1.80, 2.68; 0.06, 0.06; 93, 93, 97
Model (4.4):
  $$\gamma_0$$: -0.75; -0.68, -0.74; 0.16, 0.17; 9.44, 0.74; 4.00, 3.76; 88, 85, 97
  $$\gamma_1$$: -1.50; -1.10, -1.44; 0.22, 0.29; 26.91, 3.99; 13.96, 5.92; 38, 65, 84
  $$\gamma_2$$: 2.00; 1.53, 2.02; 0.23, 0.31; 23.40, 1.10; 13.70, 4.88; 34, 69, 87

HL$$^{a}$$: coverage probability based on the h-likelihood method. HL$$^{b}$$: coverage probability based on the newly proposed method for standard errors with 4 quadrature points.

6. Discussion

In this article, we have considered a joint model for mixed types of longitudinal data with left truncation, together with a survival model, and have proposed a new method to handle the left truncation in the longitudinal data. A main advantage of this method, compared with existing methods in the literature (e.g. Hughes, 1999), is that it does not make any untestable distributional assumption for the truncated data that lie below a measurement instrument's LLOQ. Different types of longitudinal data are assumed to be associated via shared and correlated random effects. We have also proposed an h-likelihood method for approximate joint likelihood inference, which is computationally much more efficient than the aGH method. Moreover, we have proposed a new method to better estimate the standard errors of parameter estimates from the h-likelihood method. On a MacBook Pro (OS version 10.11.4), the average computing times of the h-likelihood method were 2.7 min for the semiparametric joint model with 2 random effects and 21.9 min for the semiparametric joint model with 4 random effects. For the parametric joint model with 2 random effects, the average running time of the h-likelihood method was 9.1 min, much faster than the aGH method, which took 28.4 min.
Analysis of the real HIV vaccine data based on the proposed method shows that the individual-specific characteristics of the longitudinal immune responses, summarized by the random effects in the models, are highly associated with the risk of HIV infection. This finding is quite interesting and helpful for designing future HIV vaccine studies. We have also proposed a model for the left-truncation indicator of the longitudinal immune response data and showed that the left-truncation status follows certain patterns as functions of time. Such a model can be used to predict the left-truncation status (below-LLOQ status) of some longitudinal immune response values when measurement schedules are infrequent or sparse.

The joint model in this article may be extended in several directions. For example, the Cox model may be replaced by an accelerated failure time model, or by survival models for interval-censored data or competing risks data. The association among different types of longitudinal processes may also be modeled in other ways, such as through shared latent processes. In addition, the dropouts in the real data may be associated with the longitudinal patterns, so we may consider incorporating missing-data mechanisms into the joint models in future research. Research on these extensions will be reported separately.

7. Software

Software in the form of R code and a sample input data set are available at https://github.com/oliviayu/HHJMs.

Supplementary material

Supplementary material is available at http://biostatistics.oxfordjournals.org.

Acknowledgments

The authors thank the reviewers for their thoughtful comments, which helped improve the article greatly. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or BMGF. The authors thank the participants, investigators, and sponsors of the VAX004 trial, including Global Solutions for Infectious Diseases. Conflict of Interest: None declared.

Funding

National Institute of Allergy and Infectious Diseases of the National Institutes of Health (NIH) (Award Numbers R37AI054165 and UM1AI068635); and the Bill and Melinda Gates Foundation (BMGF) (Award Number OPP1110049).

References

Barrett, J., Diggle, P., Henderson, R. and Taylor-Robinson, D. (2015). Joint modelling of repeated measurements and time-to-event outcomes: flexible model specification and exact likelihood inference. Journal of the Royal Statistical Society: Series B 77, 131–148.

Bernhardt, P. W., Wang, H. J. and Zhang, D. (2014). Flexible modeling of survival data with covariates subject to detection limits via multiple imputation. Computational Statistics and Data Analysis 69, 81–91.

Chen, Q., May, R. C., Ibrahim, J. G., Chu, H. and Cole, S. R. (2014). Joint modeling of longitudinal and survival data with missing and left-censored time-varying covariates. Statistics in Medicine 33, 4560–4576.

Elashoff, R. M., Li, G. and Li, N. (2015). Joint Modeling of Longitudinal and Time-to-Event Data. Boca Raton, FL: Chapman & Hall/CRC.

Flynn, N., Forthal, D., Harro, C., Judson, F., Mayer, K., Para, M. and Gilbert, P., The rgp120 HIV Vaccine Study Group (2005). Placebo-controlled phase 3 trial of recombinant glycoprotein 120 vaccine to prevent HIV-1 infection. Journal of Infectious Diseases 191, 654–65.
Fu, R. and Gilbert, P. B. (2017). Joint modeling of longitudinal and survival data with the Cox model and two-phase sampling. Lifetime Data Analysis 23, 136–159.

Gilbert, P. B., Peterson, M. L., Follmann, D., Hudgens, M. G., Francis, D. P., Gurwith, M., Heyward, W. L., Jobes, D. V., Popovic, V., Self, S. G., et al. (2005). Correlation between immunologic responses to a recombinant glycoprotein 120 vaccine and incidence of HIV-1 infection in a phase 3 HIV-1 preventive vaccine trial. Journal of Infectious Diseases 191, 666–677.

Ha, I. D., Park, T. and Lee, Y. (2003). Joint modelling of repeated measures and survival time data. Biometrical Journal 45, 647–658.

Hartzel, J., Agresti, A. and Caffo, B. (2001). Multinomial logit random effects models. Statistical Modelling 1, 81–102.

Hsieh, F., Tseng, Y.-K. and Wang, J.-L. (2006). Joint modeling of survival and longitudinal data: likelihood approach revisited. Biometrics 62, 1037–1043.

Hughes, J. P. (1999). Mixed effects models with censored data with application to HIV RNA levels. Biometrics 55, 625–629.

Król, A., Ferrer, L., Pignon, J.-P., Proust-Lima, C., Ducreux, M., Bouché, O., Michiels, S. and Rondeau, V. (2016). Joint model for left-censored longitudinal data, recurrent events and terminal event: predictive abilities of tumor burden for cancer evolution with application to the FFCD 2000–05 trial. Biometrics 72, 907–916.

Kuk, A. Y. and Cheng, Y. W. (1999). Pointwise and functional approximations in Monte Carlo maximum likelihood estimation. Statistics and Computing 9, 91–99.

Lawrence Gould, A., Boye, M. E., Crowther, M. J., Ibrahim, J. G., Quartey, G., Micallef, S. and Bois, F. Y. (2015). Joint modeling of survival and longitudinal non-survival data: current methods and issues. Report of the DIA Bayesian joint modeling working group. Statistics in Medicine 34, 2181–2195.

Lee, Y. and Nelder, J. A. (1996). Hierarchical generalized linear models. Journal of the Royal Statistical Society: Series B 58, 619–678.

Lee, Y., Nelder, J. A. and Pawitan, Y. (2006). Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood. Boca Raton, FL: Chapman & Hall/CRC.

Mehrotra, K. G., Kulkarni, P. M., Tripathi, R. C. and Michalek, J. E. (2000). Maximum likelihood estimation for longitudinal data with truncated observations. Statistics in Medicine 19, 2975–2988.

Molas, M., Noh, M., Lee, Y. and Lesaffre, E. (2013). Joint hierarchical generalized linear models with multivariate Gaussian random effects. Computational Statistics and Data Analysis 68, 239–250.

Pinheiro, J. C. and Bates, D. M. (1995). Approximations to the log-likelihood function in the nonlinear mixed-effects model. Journal of Computational and Graphical Statistics 4, 12–35.

Rizopoulos, D. (2012a). Fast fitting of joint models for longitudinal and event time data using a pseudo-adaptive Gaussian quadrature rule. Computational Statistics and Data Analysis 56, 491–501.

Rizopoulos, D. (2012b). Joint Models for Longitudinal and Time-to-Event Data: With Applications in R. Boca Raton, FL: Chapman & Hall/CRC.

Rizopoulos, D., Verbeke, G. and Lesaffre, E. (2009). Fully exponential Laplace approximations for the joint modelling of survival and longitudinal data. Journal of the Royal Statistical Society: Series B 71, 637–654.

Taylor, J. M., Park, Y., Ankerst, D. P., Proust-Lima, C., Williams, S., Kestin, L., Bae, K., Pickles, T. and Sandler, H. (2013). Real-time individual predictions of prostate cancer recurrence using joint models. Biometrics 69, 206–213.

Waddington, D. and Thompson, R. (2004). Using a correlated probit model approximation to estimate the variance for binary matched pairs. Statistics and Computing 14, 83–90.

Wu, L. (2002). A joint model for nonlinear mixed-effects models with censoring and covariates measured with error, with application to AIDS studies. Journal of the American Statistical Association 97, 955–964.

Wu, L. (2009). Mixed Effects Models for Complex Data. Boca Raton, FL: Chapman & Hall/CRC.

Wulfsohn, M. S. and Tsiatis, A. A. (1997). A joint model for survival and longitudinal data measured with error. Biometrics 53, 330–339.

Zhu, H., Ibrahim, J. G., Chi, Y.-Y. and Tang, N. (2012). Bayesian influence measures for joint models for longitudinal and survival data. Biometrics 68, 954–964.

© The Author 2017. Published by Oxford University Press. All rights reserved.
I always thought that Hasse's bound is sharp (at least for elliptic curves). In other words I always thought that given a prime number $p$, I can find two elliptic curves $E_1,E_2$ over $\mathbb F_p$ such that $\#E_1 = \lceil 1+p-2\sqrt p \rceil$ and $\#E_2 = \lfloor 1+p+2\sqrt p\rfloor$. But is this even true? If so, is there an easy way to construct these curves? I've seen the proof of how to obtain these bounds but I don't think it gives me any information on the sharpness of the bound.
This is only a very partial answer; it reflects only what I would do for a given rather small prime $p$. The character of the answer is experimental. The method is exhaustive enumeration. So for a small $p$ we may use computer assistance (Sage, below) to validate or invalidate the claim.
and the question wants a way to detect the two elliptic curves $y^2=x^3+4$ and $y^2=x^3+3$ that attain the bounds.
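The Sage session itself is not reproduced here, but an equivalent brute-force check can be sketched as follows (in R, assuming a small prime $p>3$): it counts the points, including the point at infinity, of every nonsingular curve $y^2=x^3+ax+b$ over $\mathbb F_p$ and reports those attaining the Hasse extremes. For $p=7$ the extremes are $\lceil 1+7-2\sqrt 7\rceil = 3$ and $\lfloor 1+7+2\sqrt 7\rfloor = 13$, and the output includes $(a,b)=(0,4)$ with 3 points and $(a,b)=(0,3)$ with 13 points, matching the two curves named above.

# Brute-force point counts over F_p for all curves y^2 = x^3 + a x + b
# (nonsingular ones only), reporting which attain the Hasse extremes.
hasse_extremes <- function(p) {
  count_points <- function(a, b) {
    n <- 1                                           # the point at infinity
    for (x in 0:(p - 1)) {
      rhs <- (x^3 + a * x + b) %% p
      n <- n + sum(((0:(p - 1))^2 %% p) == rhs)      # number of y with y^2 = rhs
    }
    n
  }
  lo <- ceiling(1 + p - 2 * sqrt(p))
  hi <- floor(1 + p + 2 * sqrt(p))
  res <- NULL
  for (a in 0:(p - 1)) for (b in 0:(p - 1)) {
    if ((4 * a^3 + 27 * b^2) %% p != 0) {            # nonzero discriminant: elliptic
      N <- count_points(a, b)
      if (N == lo || N == hi) res <- rbind(res, c(a = a, b = b, N = N))
    }
  }
  list(bounds = c(lo, hi), curves = res)
}

hasse_extremes(7)   # includes (a=0, b=4) with 3 points and (a=0, b=3) with 13 points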
I do not see a constructive method that hits with precision one of the values $(a,b)$ for minimal order $6$ and/or maximal order $18$.
....: print( "p = %3s :: order = %s :: %s solutions, first five are %s"
So we have in most cases many solutions. Sometimes we can use in $y^2=x^3+ax+b$ the values $a=0,\pm1$, and we find a suitable pair. But to have a "straightforward decision"... Then "something" in the structure of the elliptic curves must be predictable.
At any rate, here comes the counterquestion: What exactly is expected as a "good answer", a "good constructible solution" to the OP?
GPS Satellites were invented when magnetic North and South moved too fast for useful navigation
For example, in 1939 my father chartered a yacht and navigated with a sextant across the ocean from Catalina Island, off Los Angeles and Long Beach, California, and made landfall in Tahiti 40 days after leaving Catalina. But today, because of jets and everything else that travels faster than a sailboat, we use GPS satellites that can tell us where we are on the surface of planet earth, to within about 3 feet, almost instantly.
So, I suppose people could still navigate using a sextant now if they had charts updated to the present location of magnetic north or south but it has become completely useless for jets, spacecraft and for most other vehicles who travel now over about 10 knots an hour or 8 to 12 miles per hour on land.
So now, people and cars and cellphones and planes mostly all use GPS for navigation rain or storm or wind or night or whatever because it is generally much more accurate now because north and South magnetic North are both just moving way too fast to have charts that keep up with their movemnts on a yearly basis let alone a monthly or daily basis.
Also, cell phone networks depend heavily on GPS satellites: they supply the precise timing that keeps the worldwide phone network synchronized, and phones use the same signals to work out their own positions. So whenever a cell phone (of any kind) is on, computers in one or more places on earth can know, to within a few feet or meters, where that phone (and any people with it) are on earth.
Before that, UFOs tracked people with seed-sized trackers planted in humans, much the way nations and phone companies now track anyone sophisticated enough to carry a cell phone on their person or in a pack or purse, within a foot or two of them, anywhere on earth.
So your cell phone is a tracking device that nations, governments, and criminals on earth can use to track you 24 hours a day.
Note: I'm noticing they are no longer claiming an accuracy of 3 feet and have gone to about 5 meters instead. Since a meter is 39.37 inches, 5 meters is roughly 197 inches, or a little over 16 feet. End note.
This makes sense because GPS is also used for Hellfire missile targeting from drones. They likely have visual targeting as well, for better accuracy in hitting targets from the air, probably using infrared sensors built into the Hellfire missiles.
So having a cell phone could also make people targets in some places on earth, both now and in the future.
For an honest, law-abiding person, having a cell phone is an emergency device and a protection, almost like having a gun or knife in some ways. But for someone governments are after, it could also be a targeting device.
Here is more about GPS satellites from Wikipedia:
begin quote from:
Global Positioning System - Wikipedia
https://en.wikipedia.org/wiki/Global_Positioning_System
[Infobox: country of origin United States; operated by AFSPC; military and civilian users; first launch February 1978; constellation of satellites in six MEO orbital planes at a height of 20,180 km (12,540 mi).]
[Image: Artist's conception of GPS Block II-F satellite in Earth orbit.]
[Image: Civilian GPS receivers ("GPS navigation device") in a marine application.]
[Image: Automotive navigation system in a taxicab.]
[Image: A U.S. Air Force Senior Airman runs through a checklist during Global Positioning System satellite operations.]
The Global Positioning System (GPS), originally Navstar GPS,[1][2] is a space-based radionavigation system owned by the United States government and operated by the United States Air Force. It is a global navigation satellite system that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.[3]
The GPS system does not require the user to transmit any data, and it operates independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the GPS positioning information. The GPS system provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver. However, the US government can selectively deny access to the system, as happened to the Indian military in 1999 during the Kargil War.[4]
The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems,[5] integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites. It became fully operational in 1995. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it.[6]
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and Next Generation Operational Control System (OCX).[7] Announcements from Vice President Al Gore and the White House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the modernization effort, GPS III.
In addition to GPS, other systems are in use or under development, mainly because of a potential denial of access by the US government. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete coverage of the globe until the mid-2000s.[8] GLONASS can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within two meters.[9] There are also the European Union Galileo positioning system, China's BeiDou Navigation Satellite System and India's NAVIC.
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s and used by the British Royal Navy during World War II.
Friedwardt Winterberg[10] proposed a test of general relativity — detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites.
Special and general relativity predict that the clocks on the GPS satellites would be seen by the Earth's observers to run 38 microseconds faster per day than the clocks on the Earth. The GPS calculated positions would quickly drift into error, accumulating to 10 kilometers per day. This was corrected for in the design of GPS.[11]
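As a rough sanity check on those figures, here is a small Python sketch (not part of the article; the constants and the simplified model of a circular orbit over a non-rotating spherical Earth are assumptions). It roughly reproduces the 38 microseconds per day and the roughly 10 km per day of accumulated ranging drift quoted above:

import math

c = 299_792_458.0      # speed of light, m/s
GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6      # mean Earth radius, m
r_gps = 2.656e7        # GPS orbital radius (~26,560 km), m

v = math.sqrt(GM / r_gps)                        # circular orbital speed

special = -v**2 / (2 * c**2)                     # moving clock runs slower
general = GM / c**2 * (1 / r_earth - 1 / r_gps)  # higher clock runs faster
net = special + general                          # fractional rate difference

day = 86_400.0
print("special relativity: %+.1f microseconds/day" % (special * day * 1e6))
print("general relativity: %+.1f microseconds/day" % (general * day * 1e6))
print("net:                %+.1f microseconds/day" % (net * day * 1e6))
print("ranging drift:      %.1f km/day" % (net * day * c / 1000.0))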
The Soviet Union launched the first man-made satellite, Sputnik 1, in 1957. Two American physicists, William Guier and George Weiffenbach, at Johns Hopkins's Applied Physics Laboratory (APL), decided to monitor Sputnik's radio transmissions.[12] Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC to do the heavy calculations required.
The next spring, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem — pinpointing the user's location, given that of the satellite. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system.[13] In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.[14][15][16]
[Image: Official logo for NAVSTAR GPS.]
[Image: Emblem of the 50th Space Wing.]
The first satellite navigation system, TRANSIT, used by the United States Navy, was first successfully tested in 1960.[17] It used a constellation of five satellites and could provide a navigational fix approximately once per hour.
In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required by GPS.
In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations,[18] became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.
While there were wide needs for accurate navigation in military and civilian sectors, almost none of those was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation for a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs.[19] The USAF, with two thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The Navy and Air Force were developing their own technologies in parallel to solve what was essentially the same problem.
To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Russian SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.
In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN. A follow-on study, Project 57, was worked in 1963 and it was "in this study that the GPS concept was born." That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS"[20] and promised increased accuracy for Air Force bombers as well as ICBMs.
Updates from the Navy TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory continued advancements with their Timation (Time Navigation) satellites, first launched in 1967, and with the third one in 1974 carrying the first atomic clock into orbit.[21]
Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying.[22] The SECOR system included three ground-based transmitters from known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969.[23]
Decades later, during the early years of GPS, civilian surveying became one of the first fields to make use of the new technology, because surveyors could reap benefits of signals from the less-than-complete GPS constellation years before it was declared operational. GPS can be thought of as an evolution of the SECOR system where the ground-based transmitters have been migrated into orbit.
With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named Navstar, or Navigation System Using Timing and Ranging.[24] With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites, Navstar-GPS.[25] Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).[26]
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down in 1983 after straying into the USSR's prohibited airspace,[27] in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good.[28] The first Block II satellite was launched on February 14, 1989,[29] and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment, but including the costs of the satellite launches, has been estimated at about USD 5 billion (then-year dollars).[30] Roger L. Easton is widely credited as the primary inventor of GPS.
Initially, the highest quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded (Selective Availability). This changed with President Bill Clinton signing a policy directive to turn off Selective Availability May 1, 2000 to provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, because of the widespread growth of differential GPS services to improve civilian accuracy and eliminate the U.S. military advantage. Moreover, the U.S. military was actively developing technologies to deny GPS service to potential adversaries on a regional basis.[31]
Since its deployment, the U.S. has implemented several improvements to the GPS service including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S. Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market.
As of early 2015, high-quality, FAA grade, Standard Positioning Service (SPS) GPS receivers provide horizontal accuracy of better than 3.5 meters,[32] although many factors such as receiver quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems.[33] The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis," and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses."
Timeline and modernization
Main article: List of GPS satellites
Summary of satellites[34][35][36]
Block    Launch period   Success   Failure   In preparation   Planned   Currently in orbit and healthy
I        1978–1985       10        1         0                0         0
II       1989–1990       9         0         0                0         0
IIR      1997–2004       12        1         0                0         12
IIR-M    —               —         —         —                —         —
IIIA     From 2017       0         0         0                12        0
—        —               0         0         0                16        0
(Last update: March 9, 2016)
8 satellites from Block IIA are placed in reserve
USA-203 from Block IIR-M is unhealthy
[37] For a more complete list, see list of GPS satellite launches
In 1972, the USAF Central Inertial Guidance Test Facility (Holloman AFB) conducted developmental flight tests of two prototype GPS receivers over White Sands Missile Range, using ground-based pseudo-satellites.[38]
In 1978, the first experimental Block-I GPS satellite was launched.[26]
In 1983, after Soviet interceptor aircraft shot down the civilian airliner KAL 007 that strayed into prohibited airspace because of navigational errors, killing all 269 people on board, U.S. President Ronald Reagan announced that GPS would be made available for civilian uses once it was completed,[39][40] although it had been previously published [in Navigation magazine] that the CA code (Coarse/Acquisition code) would be available to civilian users.
By 1985, ten more experimental Block-I satellites had been launched to validate the concept.
Beginning in 1988, Command & Control of these satellites was transitioned from Onizuka AFS, California to the 2nd Satellite Control Squadron (2SCS) located at Falcon Air Force Station in Colorado Springs, Colorado.[41][42]
On February 14, 1989, the first modern Block-II satellite was launched.
The Gulf War from 1990 to 1991 was the first conflict in which the military widely used GPS.[43]
In 1991, a project to create a miniature GPS receiver successfully ended, replacing the previous 23 kg military receivers with a 1.25 kg handheld receiver.[15]
In 1992, the 2nd Space Wing, which originally managed the system, was inactivated and replaced by the 50th Space Wing.
By December 1993, GPS achieved initial operational capability (IOC), indicating a full constellation (24 satellites) was available and providing the Standard Positioning Service (SPS).[44]
Full Operational Capability (FOC) was declared by Air Force Space Command (AFSPC) in April 1995, signifying full availability of the military's secure Precise Positioning Service (PPS).[44]
In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S. President Bill Clinton issued a policy directive[45] declaring GPS a dual-use system and establishing an Interagency GPS Executive Board to manage it as a national asset.
In 1998, United States Vice President Al Gore announced plans to upgrade GPS with two new civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation safety and in 2000 the United States Congress authorized the effort, referring to it as GPS III.
On May 2, 2000 "Selective Availability" was discontinued as a result of the 1996 executive order, allowing users to receive a non-degraded signal globally.
In 2004, the United States Government signed an agreement with the European Community establishing cooperation related to GPS and Europe's Galileo system.
In 2004, United States President George W. Bush updated the national policy and replaced the executive board with the National Executive Committee for Space-Based Positioning, Navigation, and Timing.[46]
November 2004, Qualcomm announced successful tests of assisted GPS for mobile phones.[47]
In 2005, the first modernized GPS satellite was launched and began transmitting a second civilian signal (L2C) for enhanced user performance.[48]
On September 14, 2007, the aging mainframe-based Ground Segment Control System was transferred to the new Architecture Evolution Plan.[49]
On May 19, 2009, the United States Government Accountability Office issued a report warning that some GPS satellites could fail as soon as 2010.[50]
On May 21, 2009, the Air Force Space Command allayed fears of GPS failure, saying "There's only a small risk we will not continue to exceed our performance standard."[51]
On January 11, 2010, an update of ground control systems caused a software incompatibility with 8000 to 10000 military receivers manufactured by a division of Trimble Navigation Limited of Sunnyvale, Calif.[52]
On February 25, 2010,[53] the U.S. Air Force awarded the contract to develop the GPS Next Generation Operational Control System (OCX) to improve accuracy and availability of GPS navigation signals, and serve as a critical part of GPS modernization.
On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the nation's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the USAF, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago."
Two GPS developers received the National Academy of Engineering Charles Stark Draper Prize for 2003:
Ivan Getting, emeritus president of The Aerospace Corporation and an engineer at the Massachusetts Institute of Technology, established the basis for GPS, improving on the World War II land-based radio system called LORAN (Long-range Radio Aid to Navigation).
Bradford Parkinson, professor of aeronautics and astronautics at Stanford University, conceived the present satellite-based system in the early 1960s and developed it in conjunction with the U.S. Air Force. Parkinson served twenty-one years in the Air Force, from 1957 to 1978, and retired with the rank of colonel.
GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006.[54]
Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010 for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B.
In 1998, GPS technology was inducted into the Space Foundation Space Technology Hall of Fame.[55]
On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member, the American Institute for Aeronautics and Astronautics (AIAA). The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity.
Basic concept of GPS
The GPS concept is based on time and the known position of specialized satellites. The satellites carry very stable atomic clocks that are synchronized with one another and to ground clocks. Any drift from true time maintained on the ground is corrected daily. Likewise, the satellite locations are known with great precision. GPS receivers have clocks as well; however, they are usually not synchronized with true time, and are less stable. GPS satellites continuously transmit their current time and position. A GPS receiver monitors multiple satellites and solves equations to determine the precise position of the receiver and its deviation from true time. At a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and clock deviation from satellite time).
More detailed description
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes:
A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-aligning a receiver-generated version and the receiver-measured version of the code, the time of arrival (TOA) of a defined point in the code sequence, called an epoch, can be found in the receiver clock time scale
A message that includes the time of transmission (TOT) of the code epoch (in GPS system time scale) and the satellite position at that time
Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms four time of flight (TOF) values, which are (given the speed of light) approximately equivalent to receiver-satellite range differences. The receiver then computes its three-dimensional position and clock deviation from the four TOFs.
In practice the receiver position (in three dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs.
The receiver's Earth-centered solution location is usually converted to latitude, longitude and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid (e.g., EGM96) (essentially, mean sea level). These coordinates may be displayed, e.g., on a moving map display, and/or recorded and/or used by some other system (e.g., a vehicle guidance system).
User-satellite geometry
Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to a hyperboloid of revolution (see Multilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid. The receiver is located at the point where three hyperboloids intersect.[56][57]
It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is only the case if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are significant performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If this were part of the GPS system concept so that all users needed to carry a synchronized clock, then a smaller number of satellites could be deployed. However, the cost and complexity of the user equipment would increase significantly.
Receiver in continuous operation
The description above is representative of a receiver start-up situation. Most receivers have a track algorithm, sometimes called a tracker, that combines sets of satellite measurements collected at different times—in effect, taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements are processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction. In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction.
The disadvantage of a tracker is that changes in speed or direction can only be computed with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near the random error of position measurement. GPS units can use measurements of the Doppler shift of the signals received to compute velocity accurately.[58] More advanced navigation systems use additional sensors like a compass or an inertial navigation system to complement GPS.
Non-navigation applications
For a list of applications, see § Applications.
In typical GPS operation as a navigator, four or more satellites must be visible to obtain an accurate result. The solution of the navigation equations gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver based clock. Applications for GPS such as time transfer, traffic signal timing, and synchronization of cell phone base stations, make use of this cheap and highly accurate timing. Some GPS applications use this time for display, or, other than for the basic position calculations, do not use it at all.
Although four satellites are required for normal operation, fewer apply in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship or aircraft may have known elevation. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.[59][60][61]
The current GPS consists of three major segments. These are the space segment (SS), a control segment (CS), and a user segment (US).[62] The U.S. Air Force develops, maintains, and operates the space and control segments. GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.[63]
The space segment is composed of 24 to 32 satellites in medium Earth orbit and also includes the payload adapters to the boosters required to launch them into orbit. The control segment is composed of a master control station (MCS), an alternate master control station, and a host of dedicated and shared ground antennas and monitor stations. The user segment is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and hundreds of millions of civil, commercial, and scientific users of the Standard Positioning Service (see GPS navigation devices).
Space segment
See also: GPS (satellite) and List of GPS satellites
[Image: Unlaunched GPS block II-A satellite on display at the San Diego Air & Space Museum.]
[Image: A visual example of a 24 satellite GPS constellation in motion with the earth rotating. Notice how the number of satellites in view from a given point on the earth's surface, in this example in Golden, Colorado, USA (39.7469° N, 105.2108° W), changes with time.]
The space segment (SS) is composed of the orbiting GPS satellites, or Space Vehicles (SV) in GPS parlance. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits,[64] but this was modified to six orbital planes with four satellites each.[65] The six orbit planes have approximately 55° inclination (tilt relative to the Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection).[66] The orbital period is one-half a sidereal day, i.e., 11 hours and 58 minutes so that the satellites pass over the same locations[67] or almost the same locations[68] every day. The orbits are arranged so that at least six satellites are always within line of sight from almost everywhere on the Earth's surface.[69] The result of this objective is that the four satellites are not evenly spaced (90 degrees) apart within each orbit. In general terms, the angular difference between satellites in each orbit is 30, 105, 120, and 105 degrees apart, which sum to 360 degrees.[70]
Orbiting at an altitude of approximately 20,200 km (12,600 mi); orbital radius of approximately 26,600 km (16,500 mi),[71] each SV makes two complete orbits each sidereal day, repeating the same ground track each day.[72] This was very helpful during development because even with only four satellites, correct alignment means all four are visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
As of February 2016,[73] there are 32 satellites in the GPS constellation, 31 of which are in use. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve reliability and availability of the system, relative to a uniform system, when multiple satellites fail.[74] About nine satellites are visible from any point on the ground at any one time (see animation at right), ensuring considerable redundancy over the minimum four satellites needed for a position.
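Those orbital numbers are easy to double-check with Kepler's third law. A quick sketch (my own; the gravitational parameter and sidereal-day length are standard values, not taken from this article) shows that the rounded 26,600 km radius quoted above gives a period within about a minute of the stated 11 hours 58 minutes, i.e. two orbits per sidereal day:

import math

GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
r = 26_600e3            # orbital radius quoted above, m

T = 2 * math.pi * math.sqrt(r**3 / GM)   # Kepler's third law, circular orbit

sidereal_day = 86_164.1                  # seconds
print("orbital period: %.0f s = %dh %02dm" % (T, T // 3600, (T % 3600) // 60))
print("orbits per sidereal day: %.2f" % (sidereal_day / T))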
Control segment
[Image: Ground monitor station used from 1984 to 2007, on display at the Air Force Space & Missile Museum.]
The control segment is composed of:
a master control station (MCS),
an alternate master control station,
four dedicated ground antennas, and
six dedicated monitor stations.
The MCS can also access U.S. Air Force Satellite Control Network (AFSCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Air Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado and Cape Canaveral, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington DC.[75] The tracking information is sent to the Air Force Space Command MCS at Schriever Air Force Base 25 km (16 mi) ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Air Force. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other inputs.[76]
Satellite maneuvers are not precise by GPS standards—so to change a satellite's orbit, the satellite must be marked unhealthy, so receivers don't use it. After the satellite maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again.
The Operation Control Segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS system operational and performing within specification.
OCS successfully replaced the legacy 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provide a foundation for a new security architecture that supported U.S. armed forces. OCS will continue to be the ground control system of record until the new segment, Next Generation GPS Operation Control System[7] (OCX), is fully developed and functional.
The new capabilities provided by OCX will be the cornerstone for revolutionizing GPS's mission capabilities, enabling[77] Air Force Space Command to greatly enhance GPS operational services to U.S. combat forces, civil partners and myriad domestic and international users.
The GPS OCX program also will reduce cost, schedule and technical risk. It is designed to provide 50%[78] sustainment cost savings through efficient software architecture and Performance-Based Logistics. In addition, GPS OCX is expected to cost millions less than the cost to upgrade OCS while providing four times the capability.
The GPS OCX program represents a critical part of GPS modernization and provides significant information assurance improvements over the current GPS OCS program.
OCX will have the ability to control and manage GPS legacy satellites as well as the next generation of GPS III satellites, while enabling the full array of military signals.
Built on a flexible architecture that can rapidly adapt to the changing needs of today's and future GPS users allowing immediate access to GPS data and constellation status through secure, accurate and reliable information.
Provides the warfighter with more secure, actionable and predictive information to enhance situational awareness.
Enables new modernized signals (L1C, L2C, and L5) and has M-code capability, which the legacy system is unable to do.
Provides significant information assurance improvements over the current program including detecting and preventing cyber attacks, while isolating, containing and operating during such attacks.
Supports higher volume near real-time command and control capabilities and abilities.
On September 14, 2011,[79] the U.S. Air Force announced the completion of GPS OCX Preliminary Design Review and confirmed that the OCX program is ready for the next phase of development.
The GPS OCX program has missed major milestones and is pushing the GPS IIIA launch beyond April 2016.[80]
User segment
Further information: GPS navigation device
[Image: GPS receivers come in a variety of formats, from devices integrated into cars, phones, and watches, to dedicated devices such as these.]
[Image: The first portable GPS unit, the Leica WM 101, displayed at the Irish National Science Museum at Maynooth.]
The user segment is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user. A receiver is often described by its number of channels: this signifies how many satellites it can monitor simultaneously. Originally limited to four or five, this has progressively increased over the years so that, as of 2007, receivers typically have between 12 and 20 channels. Though there are many receiver manufacturers, they almost all use one of the chipsets produced for this purpose.[citation needed]
[Image: A typical OEM GPS receiver module measuring 15×17 mm.]
GPS receivers may include an input for differential corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s speed. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM.[citation needed] Receivers with internal DGPS receivers can outperform those using external RTCM data.[citation needed] As of 2006, even low-cost units commonly include Wide Area Augmentation System (WAAS) receivers.
[Image: A typical GPS receiver with integrated antenna.]
Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. Although this protocol is officially defined by the National Marine Electronics Association (NMEA),[81] references to this protocol have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws.[clarification needed] Other proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.
Main article: GNSS applications
See also: GPS navigation device
While originally a military project, GPS is considered a dual-use technology, meaning it has significant military and civilian applications.
GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well synchronized hand-off switching.[63]
[Image: This antenna is mounted on the roof of a hut containing a scientific experiment needing precise timing.]
Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.
Agriculture: GPS has made a great evolution in different aspects of modern agricultural sectors. Today, a growing number of crop producers are using GPS and other modern electronic and computer equipment to practice Site Specific Management (SSM) and precision agriculture. This technology has the potential in agricultural mechanization (farm and machinery management) by providing farmers with a sophisticated tool to measure yield on much smaller scales as well as precise determination and automatic storing of variables such as field time, working area, machine travel distance and speed, fuel consumption and yield information.[82][83]
Astronomy: both positional and clock synchronization data is used in astrometry and celestial mechanics. GPS is also used in both amateur astronomy with small telescopes as well as by professional observatories for finding extrasolar planets, for example.
Automated vehicle: applying location and routes for cars and trucks to function without a human driver.
Cartography: both civilian and military cartographers use GPS extensively.
Cellular telephony: clock synchronization enables time transfer, which is critical for synchronizing its spreading codes with other base stations to facilitate inter-cell handoff and support hybrid GPS/cellular position detection for mobile emergency calls and other applications. The first handsets with integrated GPS launched in the late 1990s. The U.S. Federal Communications Commission (FCC) mandated the feature in either the handset or in the towers (for use in triangulation) in 2002 so emergency services could locate 911 callers. Third-party software developers later gained access to GPS APIs from Nextel upon launch, followed by Sprint in 2006, and Verizon soon thereafter.
Clock synchronization: the accuracy of GPS time signals (±10 ns)[84] is second only to the atomic clocks they are based on.
Disaster relief/emergency services: many emergency services depend upon GPS for location and timing capabilities.
GPS-equipped radiosondes and dropsondes: measure and calculate the atmospheric pressure, wind speed and direction up to 27 km from the Earth's surface.
Radio occultation for weather and atmospheric science applications.[85]
Fleet tracking: used to identify, locate and maintain contact reports with one or more fleet vehicles in real-time.
Geofencing: vehicle tracking systems, person tracking systems, and pet tracking systems use GPS to locate devices that are attached to or carried by a person, vehicle, or pet. The application can provide continuous tracking and send notifications if the target leaves a designated (or "fenced-in") area.[86]
Geotagging: applies location coordinates to digital objects such as photographs (in Exif data) and other documents for purposes such as creating map overlays with devices like Nikon GP-1
GPS aircraft tracking
GPS for mining: the use of RTK GPS has significantly improved several mining operations such as drilling, shoveling, vehicle tracking, and surveying. RTK GPS provides centimeter-level positioning accuracy.
GPS data mining: It is possible to aggregate GPS data from multiple users to understand movement patterns, common trajectories and interesting locations.[87]
GPS tours: location determines what content to display; for instance, information about an approaching point of interest.
Navigation: navigators value digitally precise velocity and orientation measurements.
Phasor measurements: GPS enables highly accurate timestamping of power system measurements, making it possible to compute phasors.
Recreation: for example, Geocaching, Geodashing, GPS drawing, waymarking, and other kinds of location based mobile games.
Robotics: self-navigating, autonomous robots using GPS sensors, which calculate latitude, longitude, time, speed, and heading.
Sport: used in football and rugby for the control and analysis of the training load.[88]
Surveying: surveyors use absolute locations to make maps and determine property boundaries.
Tectonics: GPS enables direct fault motion measurement of earthquakes. Between earthquakes GPS can be used to measure crustal motion and deformation[89] to estimate seismic strain buildup for creating seismic hazard maps.
Telematics: GPS technology integrated with computers and mobile communications technology in automotive navigation systems.
Restrictions on civilian use
The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above 18 km (60,000 feet) altitude and 515 m/s (1,000 knots), or designed or modified for use with unmanned air vehicles like, e.g., ballistic or cruise missile systems, are classified as munitions (weapons)—which means they require State Department export licenses.[90]
This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code.
Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ. The rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach 30 km (100,000 feet).
These limits only apply to units or components exported from the USA. A growing trade in various components exists, including GPS units from other countries. These are expressly sold as ITAR-free.
[Image: Attaching a GPS guidance kit to a dumb bomb, March 2003.]
[Image: M982 Excalibur GPS-guided artillery shell.]
As of 2009, military GPS applications include:
Navigation: Soldiers use GPS to find objectives, even in the dark or in unfamiliar territory, and to coordinate troop and supply movement. In the United States armed forces, commanders use the Commander's Digital Assistant and lower ranks use the Soldier Digital Assistant.[91]
Target tracking: Various military weapons systems use GPS to track potential ground and air targets before flagging them as hostile.[citation needed] These weapon systems pass target coordinates to precision-guided munitions to allow them to engage targets accurately. Military aircraft, particularly in air-to-ground roles, use GPS to find targets.
Missile and projectile guidance: GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles, precision-guided munitions and artillery shells. Embedded GPS receivers able to withstand accelerations of 12,000 g or about 118 km/s2 have been developed for use in 155-millimeter (6.1 in) howitzer shells.[92]
Search and rescue.
Reconnaissance: Patrol movement can be managed more closely.
GPS satellites carry a set of nuclear detonation detectors consisting of an optical sensor (Y-sensor), an X-ray sensor, a dosimeter, and an electromagnetic pulse (EMP) sensor (W-sensor), that form a major portion of the United States Nuclear Detonation Detection System.[93][94] General William Shelton has stated that future satellites may drop this feature to save money.[95]
GPS type navigation was first used in war in the 1991 Persian Gulf War, before GPS was fully developed in 1995, to assist Coalition Forces to navigate and perform maneuvers in the war. The war also demonstrated the vulnerability of GPS to being jammed, when Iraqi forces added noise to the weak GPS signal transmission to protect Iraqi targets.[96]
Main article: GPS signals
The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower resolution navigation, and an encrypted encoding used by the U.S. military.
Message format
GPS message format
Subframes   Description
1           Satellite clock, GPS time relationship
2–3         Ephemeris (precise satellite orbit)
4–5         Almanac component (satellite network synopsis, error correction)
Each GPS satellite continuously broadcasts a navigation message on L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (see bitrate). Each complete message takes 750 seconds (12 1/2 minutes) to complete. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 are subcommutated 25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long. At a transmission rate of 50-bit/s, this gives 750 seconds to transmit an entire almanac message (GPS). Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.[97]
The first subframe of each frame encodes the week number and the time within the week,[98] as well as the data about the health of the satellite. The second and the third subframes contain the ephemeris – the precise orbit for the satellite. The fourth and fifth subframes contain the almanac, which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds or 12 1/2 minutes.[99]
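The bit-level arithmetic in the last two paragraphs can be restated compactly; a trivial Python sketch (the constant names are mine):

BIT_RATE = 50                 # bits per second
WORD_BITS = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5
FRAMES_PER_MESSAGE = 25       # subframes 4 and 5 are subcommutated 25 times

subframe_bits = WORD_BITS * WORDS_PER_SUBFRAME        # 300 bits
frame_bits = subframe_bits * SUBFRAMES_PER_FRAME      # 1,500 bits
message_bits = frame_bits * FRAMES_PER_MESSAGE        # 37,500 bits

print("subframe: %5d bits, %3d s" % (subframe_bits, subframe_bits // BIT_RATE))
print("frame:    %5d bits, %3d s" % (frame_bits, frame_bits // BIT_RATE))
print("message:  %5d bits, %3d s (%.1f min)"
      % (message_bits, message_bits // BIT_RATE, message_bits / BIT_RATE / 60))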
All satellites broadcast at the same frequencies, encoding signals using unique code division multiple access (CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.[100]
The ephemeris is updated every 2 hours and is generally valid for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is updated typically every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload.[citation needed]
Satellite frequencies
GPS frequency overview[101]:607
Band   Frequency       Description
L1     1575.42 MHz     Coarse-acquisition (C/A) and encrypted precision (P(Y)) codes, plus the L1 civilian (L1C) and military (M) codes on future Block III satellites.
L2     1227.60 MHz     P(Y) code, plus the L2C and military codes on the Block IIR-M and newer satellites.
L3     1381.05 MHz     Used for nuclear detonation (NUDET) detection.
L4     1379.913 MHz    Being studied for additional ionospheric correction.
L5     1176.45 MHz     Proposed for use as a civilian safety-of-life (SoL) signal.
All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique[101]:607 where the low-bitrate message data is encoded with a high-rate pseudo-random (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate for relativistic effects[102][103] that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code.[70] The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user.
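The odd-looking on-board reference frequency can be reproduced from the net relativistic rate offset discussed in the History section above. In this one-line Python check, the offset value of about 4.465×10⁻¹⁰ is the commonly published figure, assumed here rather than quoted in this article:

offset = 4.4647e-10          # fractional rate offset applied to the satellite clocks
nominal = 10.23e6            # Hz, the nominal reference frequency
print("on-board reference: %.11f MHz" % (nominal * (1 - offset) / 1e6))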
The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space.[104] One usage is the enforcement of nuclear test ban treaties.
The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.[101]:607
The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that provides this signal was launched in 2010.[105] The L5 consists of two carrier components that are in phase quadrature with each other. Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."[106]
A conditional waiver was granted on January 26, 2011, to LightSquared to operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 MHz band as early as 2003 and it was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the impact of the lower 10 MHz of spectrum on GPS devices is minimal (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some impact on GPS devices. There is some concern that this may seriously degrade the GPS signal for many consumer uses.[107][108] Aviation Week magazine reports that the latest testing (June 2011) confirms "significant jamming" of GPS by LightSquared's system.[109]
Demodulation and decoding
Demodulating and Decoding GPS Satellite Signals using the Coarse/Acquisition Gold code.
Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binary sequence known as a Gold code. The signals are decoded after demodulation using addition of the Gold codes corresponding to the satellites monitored by the receiver.[110][111]
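The C/A Gold codes themselves come from two 10-stage linear feedback shift registers whose outputs are combined through a per-satellite pair of taps. The Python sketch below follows that standard construction, but only the tap pairs for PRN 1 to 4 are filled in here (the full 32-entry table is defined in the GPS interface specification, IS-GPS-200), so treat it as an illustration rather than a drop-in implementation.

```python
def ca_code(prn, taps={1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9)}):
    """Generate one 1023-chip period of a GPS C/A Gold code (sketch).

    Only PRN 1-4 phase-selector taps are listed; the full table is in IS-GPS-200.
    Register stages are 1-indexed in the comments, 0-indexed in the lists.
    """
    g1 = [1] * 10                      # both shift registers start as all ones
    g2 = [1] * 10
    s1, s2 = taps[prn]
    chips = []
    for _ in range(1023):
        out = g1[9] ^ g2[s1 - 1] ^ g2[s2 - 1]   # G1 stage 10 XOR two selected G2 stages
        chips.append(out)
        fb1 = g1[2] ^ g1[9]                     # G1 feedback taps: stages 3 and 10
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]   # G2 taps: 2,3,6,8,9,10
        g1 = [fb1] + g1[:9]                     # shift right, feedback enters stage 1
        g2 = [fb2] + g2[:9]
    return chips

print(ca_code(1)[:10])   # first ten chips of PRN 1
```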
If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data.
Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information see Demodulation and Decoding, Advanced.
Navigation equations
Further information: GNSS positioning calculation
See also: Pseudorange
The receiver uses messages received from satellites to determine the satellite positions and time sent. The x, y, and z components of satellite position and the time sent are designated as [xi, yi, zi, si] where the subscript i denotes the satellite and has the value 1, 2, ..., n, where n ≥ 4. When the time of message reception indicated by the on-board receiver clock is t̃i, the true reception time is ti = t̃i − b, where b is the receiver's clock bias from the much more accurate GPS system clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is t̃i − b − si, where si is the satellite time. Assuming the message traveled at the speed of light, c, the distance traveled is (t̃i − b − si) c.
For n satellites, the equations to satisfy are:
$$(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2 = \bigl([\tilde{t}_i - b - s_i]\,c\bigr)^2, \quad i = 1, 2, \dots, n$$
or, in terms of the pseudoranges $p_i = (\tilde{t}_i - s_i)\,c$, as
$$\sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} + bc = p_i, \quad i = 1, 2, \dots, n.$$ [112][113]
Since the equations have four unknowns [x, y, z, b]—the three components of GPS receiver position and the clock bias—signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abel and Chaffee.[56] When n is greater than 4, this system is overdetermined and a fitting method must be used.
With each combination of satellites, GDOP quantities can be calculated based on the relative sky directions of the satellites used.[114] The receiver location is expressed in a specific coordinate system, such as latitude and longitude using the WGS 84 geodetic datum or a country-specific system.[115]
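A minimal sketch of how DOP figures can be computed from an assumed receiver position and a set of satellite positions (ECEF coordinates in metres). It reports GDOP, PDOP and TDOP; HDOP and VDOP would additionally require rotating the covariance into a local east-north-up frame, which is omitted here.

```python
import numpy as np

def dilution_of_precision(receiver_xyz, sat_xyz_list):
    """GDOP, PDOP and TDOP from satellite geometry (sketch)."""
    rows = []
    for s in sat_xyz_list:
        los = np.asarray(s, float) - np.asarray(receiver_xyz, float)
        rows.append(np.append(los / np.linalg.norm(los), 1.0))   # unit LOS + clock column
    A = np.array(rows)
    Q = np.linalg.inv(A.T @ A)          # cofactor matrix of the position/clock solution
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    tdop = np.sqrt(Q[3, 3])
    return gdop, pdop, tdop
```

Widely separated satellites give a better-conditioned geometry matrix and therefore smaller DOP values, which is why the choice of satellites matters.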
Geometric interpretation
The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods.
The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the ranges are synchronized, these true ranges represent the radii of spheres, each centered on one of the transmitting satellites. The solution for the position of the receiver is then at the intersection of the surfaces of three of these spheres.[116] If more than the minimum number of ranges is available, a near intersection of more than three sphere surfaces could be found via, e.g. least squares.
Hyperboloids
If the distance traveled between the receiver and satellite i and the distance traveled between the receiver and satellite j are subtracted, the result is (t̃i − si) c − (t̃j − sj) c, which only involves known or measured quantities. The locus of points having a constant difference in distance to two points (here, two satellites) is a hyperboloid (see Multilateration). Thus, from four or more measured reception times, the receiver can be placed at the intersection of the surfaces of three or more hyperboloids.[56][57]
Spherical cones
The solution space [x, y, z, b] can be seen as a four-dimensional geometric space. In that case each of the equations describes a spherical cone,[117] with the cusp located at the satellite, and the base a sphere around the satellite. The receiver is at the intersection of four or more of such cones.
Solution methods
Least squares
When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, and geometric dilution of precision (GDOP).
Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by a least-squares or weighted least squares method.[112]
$$\bigl(\hat{x}, \hat{y}, \hat{z}, \hat{b}\bigr) = \underset{(x, y, z, b)}{\arg\min} \sum_i \left(\sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} + bc - p_i\right)^2$$
Both the equations for four satellites, or the least squares equations for more than four, are non-linear and need special solution methods. A common approach is by iteration on a linearized form of the equations, such as the Gauss–Newton algorithm.
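A minimal Gauss-Newton sketch of that iteration is shown below. Satellite positions and pseudoranges are in metres, and the clock bias is carried as b·c so that all four unknowns share the same unit; this is an illustration of the linearized update, not production positioning code.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton solution of the pseudorange equations (sketch).

    sat_pos: (n, 3) satellite ECEF positions in metres; pseudoranges: (n,) metres.
    Returns the receiver position (metres) and the clock bias times c (metres).
    """
    sat_pos = np.asarray(sat_pos, float)
    p = np.asarray(pseudoranges, float)
    state = np.zeros(4)                                   # [x, y, z, b*c]
    for _ in range(iterations):
        d = np.linalg.norm(sat_pos - state[:3], axis=1)   # geometric ranges
        residual = p - (d + state[3])                     # observed minus predicted
        # Jacobian of the predicted pseudorange with respect to [x, y, z, b*c]
        J = np.hstack([-(sat_pos - state[:3]) / d[:, None], np.ones((len(p), 1))])
        state += np.linalg.lstsq(J, residual, rcond=None)[0]
    return state[:3], state[3]
```

With four satellites the least-squares step reduces to solving a 4×4 linear system; with more satellites the same code performs the unweighted least-squares fit described above.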
The GPS system was initially developed assuming use of a numerical least-squares solution method—i.e., before closed-form solutions were found.
Closed-form
One closed-form solution to the above set of equations was developed by S. Bancroft.[113][118] Its properties are well known;[56][57][119] in particular, proponents claim it is superior in low-GDOP situations, compared to iterative least squares methods.[118]
Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4x4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.[113]
When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. However, a case has been made that iterative methods (e.g., Gauss–Newton algorithm) for solving over-determined non-linear least squares (NLLS) problems generally provide more accurate solutions.[120]
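A sketch of Bancroft's method in its usual Lorentz-inner-product formulation follows. Sign conventions and the handling of the two quadratic roots vary between presentations, so this should be read as illustrative rather than authoritative; the caller keeps whichever returned candidate lies near the Earth's surface.

```python
import numpy as np

def bancroft(sat_pos, pseudoranges):
    """Bancroft-style closed-form pseudorange solution (illustrative sketch).

    Returns candidate state vectors [x, y, z, b*c]; pick the near-Earth one.
    """
    B = np.column_stack([np.asarray(sat_pos, float), np.asarray(pseudoranges, float)])
    M = np.diag([1.0, 1.0, 1.0, -1.0])                 # Lorentz metric

    def lorentz(g, h):
        return g @ M @ h

    a = 0.5 * np.array([lorentz(row, row) for row in B])
    e = np.ones(len(a))
    B_plus = np.linalg.pinv(B)                         # generalized inverse handles n > 4
    u = M @ B_plus @ e
    v = M @ B_plus @ a
    # Lambda = 0.5 * <y, y> with y = Lambda*u + v gives a quadratic in Lambda.
    coeffs = [lorentz(u, u), 2.0 * (lorentz(u, v) - 1.0), lorentz(v, v)]
    candidates = []
    for lam in np.roots(coeffs):
        if abs(lam.imag) < 1e-9:
            candidates.append(lam.real * u + v)
    return candidates
```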
Leick et al. (2015) states that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."[121] Other closed-form solutions were published afterwards,[122][123] although their adoption in practice is unclear.
Error sources and analysis
Main article: Error analysis for the Global Positioning System
GPS error analysis examines error sources in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays), ephemeris and clock data, multipath signals, and natural and artificial interference. The magnitude of residual errors from these sources depends on the geometric dilution of precision. Artificial errors may result from jamming devices that threaten ships and aircraft,[124] or from intentional signal degradation through selective availability, which limited accuracy to roughly 6–12 m but has been switched off since May 1, 2000.[125][126]
Accuracy enhancement and surveying
Main article: GNSS enhancement
Integrating external information into the calculation process can materially improve accuracy. Such augmentation systems are generally named or described based on how the information arrives. Some systems transmit additional error information (such as clock drift, ephemeris, or ionospheric delay), others characterize prior errors, while a third group provides additional navigational or vehicle information.
Examples of augmentation systems include the Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Differential GPS (DGPS), inertial navigation systems (INS) and Assisted GPS. The standard accuracy of about 15 meters (49 feet) can be augmented to 3–5 meters (9.8–16.4 ft) with DGPS, and to about 3 meters (9.8 feet) with WAAS.[127]
Precise monitoring
Accuracy can be improved through precise monitoring and measurement of existing GPS signals in additional or alternate ways.
The largest remaining error is usually the unpredictable delay through the ionosphere. The spacecraft broadcast ionospheric model parameters, but some errors remain. This is one reason GPS spacecraft transmit on at least two frequencies, L1 and L2. Ionospheric delay is a well-defined function of frequency and the total electron content (TEC) along the path, so measuring the arrival time difference between the frequencies determines TEC and thus the precise ionospheric delay at each frequency.
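Under the usual first-order model, the delay in metres is 40.3·TEC/f² with f in hertz, so a pair of pseudoranges measured on L1 and L2 is enough to estimate and remove it. The Python sketch below uses hypothetical pseudoranges P1 and P2; it is a textbook-style illustration, not a description of any particular receiver's firmware.

```python
# First-order ionospheric correction from dual-frequency pseudoranges (sketch).
F_L1, F_L2 = 1575.42e6, 1227.60e6            # carrier frequencies in Hz

def iono_delay_on_L1(P1, P2):
    """Ionospheric delay (metres) affecting the L1 pseudorange."""
    return (P2 - P1) * F_L2**2 / (F_L1**2 - F_L2**2)

def iono_free_range(P1, P2):
    """Ionosphere-free combination: cancels the first-order 1/f^2 term."""
    return (F_L1**2 * P1 - F_L2**2 * P2) / (F_L1**2 - F_L2**2)

def tec_from_delay(delay_l1_metres):
    """Total electron content (electrons/m^2) from delay = 40.3 * TEC / f^2."""
    return delay_l1_metres * F_L1**2 / 40.3
```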
Military receivers can decode the P(Y) code transmitted on both L1 and L2. Without decryption keys, it is still possible to use a codeless technique to compare the P(Y) codes on L1 and L2 to gain much of the same error information. However, this technique is slow, so it is currently available only on specialized surveying equipment. In the future, additional civilian codes are expected to be transmitted on the L2 and L5 frequencies (see GPS modernization). All users will then be able to perform dual-frequency measurements and directly compute ionospheric delay errors.
A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). This corrects the error that arises because the pulse transition of the PRN is not instantaneous, and thus the correlation (satellite–receiver sequence matching) operation is imperfect. CPGPS uses the L1 carrier wave, which has a period of $\frac{1\,\mathrm{s}}{1575.42 \times 10^{6}} = 0.63475\,\mathrm{ns} \approx 1\,\mathrm{ns}$, about one-thousandth of the C/A Gold code bit period of $\frac{1\,\mathrm{s}}{1023 \times 10^{3}} = 977.5\,\mathrm{ns} \approx 1000\,\mathrm{ns}$, to act as an additional clock signal and resolve the uncertainty. The phase difference error in the normal GPS amounts to 2–3 meters (7–10 ft) of ambiguity. CPGPS working to within 1% of perfect transition reduces this error to 3 centimeters (1.2 in) of ambiguity. By eliminating this error source, CPGPS coupled with DGPS normally realizes between 20–30 centimeters (8–12 in) of absolute accuracy.
Relative Kinematic Positioning (RKP) is a third alternative for a precise GPS-based positioning system. In this approach, determination of range signal can be resolved to a precision of less than 10 centimeters (4 in). This is done by resolving the number of cycles that the signal is transmitted and received by the receiver by using a combination of differential GPS (DGPS) correction data, transmitting GPS signal phase information and ambiguity resolution techniques via statistical tests—possibly with processing in real-time (real-time kinematic positioning, RTK).
Leap seconds
While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to GPS time (GPST; see the United States Naval Observatory page). The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset from International Atomic Time (TAI − GPS = 19 seconds). Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.[128]
The GPS navigation message includes the difference between GPS time and UTC. As of January 2017, GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016.[129] Receivers subtract this offset from GPS time to calculate UTC and specific timezone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).
GPS time is theoretically accurate to about 14 nanoseconds.[130] However, most receivers lose accuracy in the interpretation of the signals and are only accurate to 100 nanoseconds.[131][132]
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern the modernized GPS navigation message uses a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until the year 2137 (157 years after GPS week zero).
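The rollover arithmetic amounts to picking the full week that is congruent to the broadcast 10-bit value modulo 1024 and closest to an approximate week the receiver already knows (for example, derived from its firmware build date). A small sketch:

```python
def resolve_gps_week(week_mod_1024, approx_full_week):
    """Recover the full GPS week number from the 10-bit broadcast field (sketch).

    approx_full_week must be correct to within 512 weeks (~3,584 days).
    """
    n = round((approx_full_week - week_mod_1024) / 1024)
    return week_mod_1024 + 1024 * n

# Example: a broadcast week of 65 with an approximate week near 2110
print(resolve_gps_week(65, 2110))   # -> 2113
```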
Carrier phase tracking (surveying)
Another method that is used in surveying applications is carrier phase tracking. The period of the carrier frequency multiplied by the speed of light gives the wavelength, which is about 0.19 meters for the L1 carrier. Accuracy within 1% of wavelength in detecting the leading edge reduces this component of pseudorange error to as little as 2 millimeters. This compares to 3 meters for the C/A code and 0.3 meters for the P code.
However, 2 millimeter accuracy requires measuring the total phase—the number of waves multiplied by the wavelength plus the fractional wavelength, which requires specially equipped receivers. This method has many surveying applications. It is accurate enough for real-time tracking of the very slow motions of tectonic plates, typically 0–100 mm (0–4 inches) per year.
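The figures quoted above follow directly from dividing the speed of light by the carrier or chipping rate; a quick numerical check in Python:

```python
c = 299_792_458.0                       # speed of light, m/s

carrier_wavelength = c / 1575.42e6      # L1 carrier: ~0.19 m
ca_chip_length = c / 1.023e6            # C/A code chip: ~293 m
p_chip_length = c / 10.23e6             # P code chip: ~29.3 m

# Assuming edge detection to within 1% of each "wavelength":
print(f"carrier:  {0.01 * carrier_wavelength * 1000:.1f} mm")   # ~1.9 mm
print(f"C/A code: {0.01 * ca_chip_length:.2f} m")               # ~2.9 m
print(f"P code:   {0.01 * p_chip_length:.2f} m")                # ~0.29 m
```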
Triple differencing followed by numerical root finding and least squares can estimate the position of one receiver given the position of another: first compute the difference between satellites, then between receivers, and finally between epochs. Other orders of taking differences are equally valid. A detailed discussion of the errors is omitted here.
The satellite carrier total phase can be measured with ambiguity as to the number of cycles. Let $\phi(r_i, s_j, t_k)$ denote the phase of the carrier of satellite $j$ measured by receiver $i$ at time $t_k$. This notation shows the meaning of the subscripts $i$, $j$, and $k$: the receiver ($r$), satellite ($s$), and time ($t$) come in alphabetical order as arguments of $\phi$. To balance readability and conciseness, let $\phi_{i,j,k} = \phi(r_i, s_j, t_k)$ be a concise abbreviation. We also define three functions, $\Delta^r$, $\Delta^s$, $\Delta^t$, which return differences between receivers, satellites, and time points, respectively. Each function takes a variable with three subscripts as its argument. If $\alpha_{i,j,k}$ is a function of the three integer arguments $i$, $j$, and $k$, then it is a valid argument for the functions $\Delta^r$, $\Delta^s$, $\Delta^t$, with the values defined as
$$\Delta^r(\alpha_{i,j,k}) = \alpha_{i+1,j,k} - \alpha_{i,j,k},$$
$$\Delta^s(\alpha_{i,j,k}) = \alpha_{i,j+1,k} - \alpha_{i,j,k}, \quad\text{and}$$
$$\Delta^t(\alpha_{i,j,k}) = \alpha_{i,j,k+1} - \alpha_{i,j,k}.$$
Also, if $\alpha_{i,j,k}$ and $\beta_{l,m,n}$ are valid arguments for the three functions and $a$ and $b$ are constants, then $a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}$ is a valid argument, with values defined as
$$\Delta^r(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^r(\alpha_{i,j,k}) + b\,\Delta^r(\beta_{l,m,n}),$$
$$\Delta^s(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^s(\alpha_{i,j,k}) + b\,\Delta^s(\beta_{l,m,n}), \quad\text{and}$$
$$\Delta^t(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^t(\alpha_{i,j,k}) + b\,\Delta^t(\beta_{l,m,n}).$$
Receiver clock errors can be approximately eliminated by differencing the phases measured from satellite 1 with that from satellite 2 at the same epoch.[133] This difference is designated as $\Delta^s(\phi_{1,1,1}) = \phi_{1,2,1} - \phi_{1,1,1}$.
Double differencing[134] computes the difference of receiver 1's satellite difference from that of receiver 2. This approximately eliminates satellite clock errors. This double difference is:
$$\Delta^r(\Delta^s(\phi_{1,1,1})) = \Delta^r(\phi_{1,2,1} - \phi_{1,1,1}) = \Delta^r(\phi_{1,2,1}) - \Delta^r(\phi_{1,1,1}) = (\phi_{2,2,1} - \phi_{1,2,1}) - (\phi_{2,1,1} - \phi_{1,1,1})$$
Triple differencing[135] subtracts the receiver difference from time 1 from that of time 2. This eliminates the ambiguity associated with the integral number of wavelengths in carrier phase provided this ambiguity does not change with time. Thus the triple difference result eliminates practically all clock bias errors and the integer ambiguity. Atmospheric delay and satellite ephemeris errors have been significantly reduced. This triple difference is:
$$\Delta^t(\Delta^r(\Delta^s(\phi_{1,1,1})))$$
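The three operators translate directly into array differences when the phase measurements are stored as an array indexed by (receiver, satellite, epoch). The sketch below fills the array with random numbers purely to show the shapes involved; real data would be the measured carrier phases in cycles.

```python
import numpy as np

def delta_r(a):   # difference between adjacent receivers
    return a[1:, :, :] - a[:-1, :, :]

def delta_s(a):   # difference between adjacent satellites
    return a[:, 1:, :] - a[:, :-1, :]

def delta_t(a):   # difference between adjacent epochs
    return a[:, :, 1:] - a[:, :, :-1]

phi = np.random.default_rng(1).normal(size=(2, 4, 3))   # 2 receivers, 4 satellites, 3 epochs

single = delta_s(phi)                      # largely removes the receiver clock error
double = delta_r(delta_s(phi))             # also removes the satellite clock error
triple = delta_t(delta_r(delta_s(phi)))    # also removes the constant integer ambiguity
print(single.shape, double.shape, triple.shape)   # (2, 3, 3) (1, 3, 3) (1, 3, 2)
```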
Triple difference results can be used to estimate unknown variables. For example, if the position of receiver 1 is known but the position of receiver 2 is unknown, it may be possible to estimate the position of receiver 2 using numerical root finding and least squares. Triple difference results for three independent time pairs may be sufficient to solve for receiver 2's three position components; this generally requires a numerical procedure.[136][137] Such a method needs an initial approximation of receiver 2's position, which can be provided from the navigation message and the intersection of sphere surfaces; a reasonable initial estimate is key to successful multidimensional root finding. Iterating from three time pairs and a fairly good initial value produces one solution for receiver 2's position. Processing additional time pairs overdetermines the problem and can improve accuracy; least squares then determines the position of receiver 2 that best fits the observed triple difference results, under the criterion of minimizing the sum of squared residuals.
Regulatory spectrum issues concerning GPS receivers
In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation."[138] With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers, "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum."[139] For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.
The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by Lightsquared is the Mobile Satellite Service band.[140] Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to the Virginia company LightSquared. On March 1, 2001, the FCC received an application from LightSquared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service.[141] In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent transmissions from LightSquared's ground-based stations from emitting transmissions into the neighboring GPS band of 1559 to 1610 MHz.[142] In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system – known as the Ancillary Tower Components (ATCs) – "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service."[143] This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Air Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration, Interior, and U.S. Department of Transportation.[144]
In January 2011, the FCC conditionally authorized LightSquared's wholesale customers—such as Best Buy, Sharp, and C Spire—to only purchase an integrated satellite-ground-based service from LightSquared and re-sell that integrated service on devices that are equipped to only use the ground-based signal using LightSquared's allocated frequencies of 1525 to 1559 MHz.[145] In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices[146] although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order makes final authorization contingent upon studies of GPS interference issues carried out by a LightSquared led working group along with GPS industry and Federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference.
GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services.[147] However, as regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum.[139] This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.
The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum."[148] In those 2003 rules, the FCC stated "As a preliminary matter, terrestrial [Commercial Mobile Radio Service ("CMRS")] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominately different market segments... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting that "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service."[143] In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector."[149] However, GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component.[150] To build public support of efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component vs. a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd. formed the "Coalition To Save Our GPS."[151]
The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate.[152][153] However, according to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it."[154] The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.[154]
On February 14, 2012, the U.S. Federal Communications Commission (FCC) moved to bar LightSquared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time".[155][156] LightSquared is challenging the FCC's action.
Main article: Satellite navigation system
Comparison of geostationary, GPS, GLONASS, Galileo, Compass (MEO), International Space Station, Hubble Space Telescope and Iridium constellation orbits, with the Van Allen radiation belts and the Earth to scale.[a] The Moon's orbit is around 9 times larger than geostationary orbit.[b]
Other satellite navigation systems in use or various states of development include:
GLONASS – Russia's global navigation system. Fully operational worldwide.
Galileo – a global system being developed by the European Union and other partner countries, which began operation in 2016,[157] and is expected to be fully deployed by 2020.
Beidou – People's Republic of China's regional system, currently limited to Asia and the West Pacific,[158] global coverage planned to be operational by 2020[159][160]
IRNSS - A regional navigation system developed by the Indian Space Research Organisation.
GPS/INS
GPS navigation software
Indoor positioning system
Local Area Augmentation System
Local positioning system
Military invention
Mobile phone tracking
Navigation paradox
Notice Advisory to Navstar Users
Trilateration
Wide Area Augmentation System
Orbital periods and speeds are calculated using the relations 4π²R³ = T²GM and V²R = GM, where R = radius of orbit in metres, T = orbital period in seconds, V = orbital speed in m/s, G = gravitational constant ≈ 6.673×10⁻¹¹ N·m²/kg², and M = mass of Earth ≈ 5.98×10²⁴ kg.
Approximately 8.6 times (in radius and length) when the moon is nearest (363 104 km ÷ 42 164 km) to 9.6 times when the moon is farthest (405 696 km ÷ 42 164 km).
Levin, Dan (March 23, 2009). "Chinese Square Off With Europe in Space". The New York Times. China. Retrieved November 6, 2011.
"NAVSTAR GPS User Equipment Introduction" (PDF). United States Coast Guard. September 1996.
Parkinson; Spilker (1996). The global positioning system. American Institute of Aeronautics and Astronautics. ISBN 978-1-56347-106-3.
Jaizki Mendizabal; Roc Berenguer; Juan Melendez (2009). GPS and Galileo. McGraw Hill. ISBN 978-0-07-159869-9.
Nathaniel Bowditch (2002). The American Practical Navigator – Chapter 11 Satellite Navigation. United States government.
Global Positioning System Open Courseware from MIT, 2012
Wikimedia Commons has media related to Global Positioning System.
Global Positioning System at DMOZ
FAA GPS FAQ
GPS.gov—General public education website created by the U.S. Government
U.S. Army Corps of Engineers manual: "NAVSTAR HTML". Archived from the original on August 22, 2008. Retrieved 2010-06-06. and "PDF (22.6 MB, 328 pages)" (PDF). Archived from the original on June 25, 2008. Retrieved 2010-06-06.
National Geodetic Survey Orbits for the Global Positioning System satellites in the Global Navigation Satellite System
GPS and GLONASS Simulation (Java applet) Simulation and graphical depiction of space vehicle motion including computation of dilution of precision (DOP)
Relativity Science Calculator – Explaining Global Positioning System
NAVSTAR Global Positioning System satellites
Satellite navigation systems
Time signal stations
Systems · Systems science
Equipment of the United States Air Force
Spaceflight portal
Geography portal
Nautical portal
"The Navstar Global Positioning System, hereafter referred to as GPS, is a space-based radio navigation system owned by the United States Government (USG) and operated by the United States Air Force (USAF)." [1]
"GPS: Global Positioning System (or Navstar Global Positioning System)" Wide Area Augmentation System (WAAS) Performance Standard, Section B.3, Abbreviations and Acronyms. [2]
"What is a GPS?".
Srivastava, Ishan (5 April 2014). "How Kargil spurred India to design own GPS". The Times of India. Retrieved 9 December 2014.
National Research Council (U.S.). Committee on the Future of the Global Positioning System; National Academy of Public Administration (1995). The global positioning system: a shared national asset: recommendations for technical improvements and enhancements. National Academies Press. p. 16. ISBN 0-309-05283-1. Retrieved August 16, 2013. , https://books.google.com/books?id=FAHk65slfY4C&pg=PA16
O'Leary, Beth Laura; Darrin, Ann Garrison (2009). Handbook of Space Engineering, Archaeology, and Heritage. Hoboken: CRC Press. pp. 239–240. ISBN 9781420084320.
"Factsheets : GPS Advanced Control Segment (OCX)". Losangeles.af.mil. October 25, 2011. Archived from the original on May 3, 2012. Retrieved November 6, 2011.
"Russia Launches Three More GLONASS-M Space Vehicles". Inside GNSS. Retrieved December 26, 2008.
GLONASS the future for all smartphones?
Winterberg, Friedwardt (1956). "Relativistische Zeitdiiatation eines künstlichen Satelliten (Relativistic time dilation of an artificial satellite)". Astronautica Acta II (in German) (25). Retrieved 19 October 2014.
"GPS and Relativity". Astronomy.ohio-state.edu. Retrieved November 6, 2011.
Guier, William H.; Weiffenbach, George C. (1997). "Genesis of Satellite Navigation" (PDF). Johns Hopkins APL Technical Digest. 19 (1): 178–181.
Steven Johnson (2010), Where good ideas come from, the natural history of innovation, New York: Riverhead Books
Helen E. Worth; Mame Warren (2009). Transit to Tomorrow. Fifty Years of Space Research at The Johns Hopkins University Applied Physics Laboratory (PDF).
Catherine Alexandrow (April 2008). "The Story of GPS". Archived from the original on February 24, 2013.
DARPA: 50 Years of Bridging the Gap. April 2008.
Howell, Elizabeth. "Navstar: GPS Satellite Network". SPACE.com. Retrieved February 14, 2013.
Jerry Proc. "Omega". Jproc.ca. Retrieved December 8, 2009.
"Why Did the Department of Defense Develop GPS?". Trimble Navigation Ltd. Archived from the original on October 18, 2007. Retrieved January 13, 2010.
"Charting a Course Toward Global Navigation". The Aerospace Corporation. Archived from the original on November 1, 2002. Retrieved October 14, 2013.
"A Guide to the Global Positioning System (GPS) — GPS Timeline". Radio Shack. Retrieved January 14, 2010.
"GEODETIC EXPLORER-A Press Kit" (PDF). NASA. October 29, 1965. Retrieved 20 October 2015.
"SECOR Chronology". Mark Wade's Encyclopedia Astronautica. Retrieved January 19, 2010.
"MX Deployment Reconsidered." Retrieved: 7 June 2013.
Michael Russell Rip; James M. Hasik (2002). The Precision Revolution: GPS and the Future of Aerial Warfare. Naval Institute Press. p. 65. ISBN 1-55750-973-5. Retrieved January 14, 2010.
Hegarty, Christopher J.; Chatre, Eric (December 2008). "Evolution of the Global Navigation SatelliteSystem (GNSS)". Proceedings of the IEEE. 96: 1902–1917. doi:10.1109/JPROC.2008.2006090.
"ICAO Completes Fact-Finding Investigation". International Civil Aviation Organization. Archived from the original on May 17, 2008. Retrieved September 15, 2008.
"United States Updates Global Positioning System Technology". America.gov. February 3, 2006.
Rumerman, Judy A. (2009). NASA Historical Data Book, Volume VII (PDF). NASA. p. 136.
The Global Positioning System Assessing National Policies, by Scott Pace, Gerald P. Frost, Irving Lachow, David R. Frelinger, Donna Fossum, Don Wassem, Monica M. Pinto, Rand Corporation, 1995,Appendix B, GPS History, Chronology, and Budgets
"GPS & Selective Availability Q&A" (PDF). NOAA]. Archived from the original (PDF) on September 21, 2005. Retrieved May 28, 2010.
"GPS Accuracy". GPS.gov. GPS.gov. Retrieved 4 May 2015.
E. Steitz, David. "NATIONAL POSITIONING, NAVIGATION AND TIMING ADVISORY BOARD NAMED". Retrieved March 22, 2007.
GPS Wing Reaches GPS III IBR Milestone in Inside GNSS November 10, 2008
GPS CONSTELLATION STATUS FOR 08/26/2015
"Recap story: Three Atlas 5 launch successes in one month".
"GPS almanacs". Navcen.uscg.gov. Retrieved October 15, 2010.
The Origin of Global Positioning System
Dietrich Schroeer; Mirco Elena (2000). Technology Transfer. Ashgate. p. 80. ISBN 0-7546-2045-X. Retrieved May 25, 2008.
Michael Russell Rip; James M. Hasik (2002). The Precision Revolution: GPS and the Future of Aerial Warfare. Naval Institute Press. ISBN 1-55750-973-5. Retrieved May 25, 2008.
"AF Space Command Chronology". USAF Space Command. Archived from the original on August 17, 2011. Retrieved June 20, 2011.
"FactSheet: 2nd Space Operations Squadron". USAF Space Command. Archived from the original on June 11, 2011. Retrieved June 20, 2011.
The Global Positioning System: Assessing National Policies, p.245. RAND corporation
"USNO NAVSTAR Global Positioning System". U.S. Naval Observatory. Retrieved January 7, 2011.
National Archives and Records Administration. U.S. Global Positioning System Policy. March 29, 1996.
"National Executive Committee for Space-Based Positioning, Navigation, and Timing". Pnt.gov. Archived from the original on May 28, 2010. Retrieved October 15, 2010.
"Assisted-GPS Test Calls for 3G WCDMA Networks". 3g.co.uk. November 10, 2004. Retrieved November 24, 2010.
"First Modernized GPS Satellite Built By Lockheed Martin Launched". Phys.org. Retrieved September 26, 2005.
010907 (September 17, 2007). "losangeles.af.mil". losangeles.af.mil. Archived from the original on May 11, 2011. Retrieved October 15, 2010.
Johnson, Bobbie (May 19, 2009). "GPS system 'close to breakdown'". The Guardian. London. Retrieved December 8, 2009.
Coursey, David (May 21, 2009). "Air Force Responds to GPS Outage Concerns". ABC News. Retrieved May 22, 2009.
"Air Force GPS Problem: Glitch Shows How Much U.S. Military Relies On GPS". Huffingtonpost.comm. June 1, 2010. Retrieved October 15, 2010.
"Contract Award for Next Generation GPS Control Segment Announced". Archived from the original on July 23, 2013. Retrieved December 14, 2012.
United States Naval Research Laboratory. National Medal of Technology for GPS. November 21, 2005
"Space Technology Hall of Fame, Inducted Technology: Global Positioning System (GPS)". Archived from the original on June 12, 2012.
Abel, J.S. and Chaffee, J.W., "Existence and uniqueness of GPS solutions", IEEE Transactions on Aerospace and Electronic Systems, vol:26, no:6, p:748-53, Sept. 1991.
Fang, B.T., "Comments on "Existence and uniqueness of GPS solutions" by J.S. Abel and J.W. Chaffee", IEEE Transactions on Aerospace and Electronic Systems, vol:28 , no:4, Oct. 1992.
Grewal, Mohinder S.; Weill, Lawrence R.; Andrews, Angus P. (2007). Global Positioning Systems, Inertial Navigation, and Integration (2nd ed.). John Wiley & Sons. pp. 92–93. ISBN 0-470-09971-2. , https://books.google.com/books?id=6P7UNphJ1z8C&pg=PA92
Georg zur Bonsen; Daniel Ammann; Michael Ammann; Etienne Favey; Pascal Flammant (April 1, 2005). "Continuous Navigation Combining GPS with Sensor-Based Dead Reckoning". GPS World. Archived from the original on November 11, 2006.
"NAVSTAR GPS User Equipment Introduction" (PDF). United States Government. Chapter 7
"GPS Support Notes" (PDF). January 19, 2007. Archived from the original (PDF) on March 27, 2009. Retrieved November 10, 2008.
John Pike. "GPS III Operational Control Segment (OCX)". Globalsecurity.org. Retrieved December 8, 2009.
"Global Positioning System". Gps.gov. Archived from the original on July 30, 2010. Retrieved June 26, 2010.
Daly, P. "Navstar GPS and GLONASS: global satellite navigation systems" (PDF). IEEE. [dead link]
Dana, Peter H. (August 8, 1996). "GPS Orbital Planes" (GIF).
GPS Overview from the NAVSTAR Joint Program Office Archived November 16, 2007, at the Wayback Machine.. Retrieved December 15, 2006.
What the Global Positioning System Tells Us about Relativity Archived January 4, 2007, at the Wayback Machine.. Retrieved January 2, 2007.
"Archived copy". Archived from the original on October 22, 2011. Retrieved 2011-10-27. . Retrieved October 27, 2011
"USCG Navcen: GPS Frequently Asked Questions". Retrieved January 31, 2007.
Thomassen, Keith. "How GPS Works". avionicswest.com. Archived from the original on March 30, 2016. Retrieved April 22, 2014.
Samama, Nel (2008). Global Positioning: Technologies and Performance. John Wiley & Sons. p. 65. ISBN 0-470-24190-X. , https://books.google.com/books?id=EyFrcnSRFFgC&pg=PA65
Agnew, D.C.; Larson, K.M. (2007). "Finding the repeat times of the GPS constellation". GPS Solutions. Springer. 11 (1): 71–76. doi:10.1007/s10291-006-0038-4. This article from author's web site Archived February 16, 2008, at the Wayback Machine., with minor correction.
"CURRENT GPS CONSTELLATION". U.S. Naval Observatory.
Massatt, Paul; Wayne Brady (Summer 2002). "Optimizing performance through constellation management" (PDF). Crosslink: 17–21. Archived from the original on January 25, 2012.
United States Coast Guard General GPS News 9–9–05[permanent dead link]
USNO NAVSTAR Global Positioning System. Retrieved May 14, 2006.
"GPS III Operational Control Segment (OCX)". GlobalSecurity.org.
"The USA's GPS-III Satellites". Defense Industry Daily. October 13, 2011.
"GPS Completes Next Generation Operational Control System PDR". Air Force Space Command News Service. September 14, 2011.
"'Embarrassing to defend': US general blasts Raytheon's GPS control system a 'disaster'". RT. December 9, 2015.
"Publications and Standards from the National Marine Electronics Association (NMEA)". National Marine Electronics Association. Retrieved June 27, 2008.
Exploring GPS Data for Operational Analysis of Farm Machinery
A lecture note on Global Positioning System in Precision Agriculture
"Common View GPS Time Transfer". nist.gov. Archived from the original on October 28, 2012. Retrieved July 23, 2011.
"Using GPS to improve tropical cyclone forecasts". ucar.edu.
"Spotlight GPS pet locator". Spotlightgps.com. Retrieved October 15, 2010.
Khetarpaul, S., Chauhan, R., Gupta, S. K., Subramaniam, L. V., Nambiar, U. (2011). Mining GPS data to determine interesting locations. Proceedings of the 8th International Workshop on Information Integration on the Web.
"The Use of GPS Tracking Technology in Australian Football". Retrieved 2016-09-25.
"The Pacific Northwest Geodetic Array". cwu.edu.
Arms Control Association.Missile Technology Control Regime. Retrieved May 17, 2006.
Sinha, Vandana (July 24, 2003). "Commanders and Soldiers' GPS-receivers". Gcn.com. Retrieved October 13, 2009.
"XM982 Excalibur Precision Guided Extended Range Artillery Projectile". GlobalSecurity.org. May 29, 2007. Retrieved September 26, 2007. (Registration required (help)).
Sandia National Laboratory's Nonproliferation programs and arms control technology.
Dennis D. McCrady. "The GPS Burst Detector W-Sensor" (PDF). Sandia National Laboratories.
"US Air Force Eyes Changes To National Security Satellite Programs.". Aviationweek.com. January 18, 2013. Retrieved September 28, 2013.
Greenemeier, Larry. "GPS and the World's First "Space War"". Scientific American. Retrieved 2016-02-08.
"Satellite message format". Gpsinformation.net. Retrieved October 15, 2010.
Peter H. Dana. "GPS Week Number Rollover Issues". Retrieved August 12, 2013.
"Interface Specification IS-GPS-200, Revision D: Navstar GPS Space Segment/Navigation User Interfaces" (PDF). Navstar GPS Joint Program Office. p. 103. Archived from the original (PDF) on September 8, 2012.
Richharia, Madhavendra; Westbrook, Leslie David (2011). Satellite Systems for Personal Applications: Concepts and Technology. John Wiley & Sons. p. 443. ISBN 1-119-95610-2.
Penttinen, Jyrki T. J. The Telecommunications Handbook: Engineering Guidelines for Fixed, Mobile and Satellite Systems. John Wiley & Sons. ISBN 9781119944881.
Misra, Pratap; Enge, Per (2006). Global Positioning System. Signals, Measurements and Performance (2nd ed.). Ganga-Jamuna Press. p. 115. ISBN 0-9709544-1-7. Retrieved August 16, 2013.
Borre, Kai; M. Akos, Dennis; Bertelsen, Nicolaj; Rinder, Peter; Jensen, Søren Holdt (2007). A Software-Defined GPS and Galileo Receiver. A single-Frequency Approach. Springer. p. 18. ISBN 0-8176-4390-7.
TextGenerator Version 2.0. "United States Nuclear Detonation Detection System (USNDS)". Fas.org. Retrieved November 6, 2011.
"First Block 2F GPS Satellite Launched, Needed to Prevent System Failure". DailyTech. Retrieved May 30, 2010.
"Air Force Successfully Transmits an L5 Signal From GPS IIR-20(M) Satellite". LA AFB News Release. Archived from the original on May 21, 2011. Retrieved June 20, 2011.
"Federal Communications Commission Presented Evidence of GPS Signal Interference". GPS World. Archived from the original on October 11, 2011. Retrieved November 6, 2011.
"Coalition to Save Our GPS". Saveourgps.org. Retrieved November 6, 2011.
"LightSquared Tests Confirm GPS Jamming". Aviation Week. Archived from the original on August 12, 2011. Retrieved June 20, 2011.
"GPS Almanacs, NANUS, and Ops Advisories (including archives)". GPS Almanac Information. United States Coast Guard. Retrieved September 9, 2009.
"George, M., Hamid, M., and Miller A. Gold Code Generators in Virtex Devices at the Internet ArchivePDF
section 4 beginning on page 15 GEOFFREY BLEWITT: BASICS OF THE GPS TECHNIQUE
"Global Positioning Systems" (PDF). Archived from the original (PDF) on July 19, 2011. Retrieved October 15, 2010.
Dana, Peter H. "Geometric Dilution of Precision (GDOP) and Visibility". University of Colorado at Boulder. Retrieved July 7, 2008.
Peter H. Dana. "Receiver Position, Velocity, and Time". University of Colorado at Boulder. Retrieved July 7, 2008.
"The Mathematics of GPS". siam.org. Archived from the original on March 16, 2005.
"GPS Solutions: Closed Forms, Critical and Special Configurations of P4P".
Bancroft, S. (January 1985). "An Algebraic Solution of the GPS Equations". IEEE Transactions on Aerospace and Electronic Systems. AES-21: 56–59. Bibcode:1985ITAES..21...56B. doi:10.1109/TAES.1985.310538. Archived from the original on 2013-04-15.
Chaffee, J. and Abel, J., "On the Exact Solutions of Pseudorange Equations", IEEE Transactions on Aerospace and Electronic Systems, vol:30, no:4, pp: 1021–1030, 1994
Sirola, Niilo, "Closed-form Algorithms in Mobile Positioning: Myths and Misconceptions", Proceedings of the 7th Workshop on Positioning, Navigation and Communication 2010 (WPNC'10), Dresden, March 2010. Retrieved November 10, 2014
GNSS Positioning Approaches – GPS Satellite Surveying, Fourth Edition – Leick. Wiley Online Library. pp. 257–399. doi:10.1002/9781119018612.ch6.
Alfred Kleusberg, "Analytical GPS Navigation Solution", University of Stuttgart Research Compendium,1994
Oszczak, B., "New Algorithm for GNSS Positioning Using System of Linear Equations," Proceedings of the 26th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2013), Nashville, TN, September 2013, pp. 3560–3563.
Attewill, Fred. (2013-02-13) Vehicles that use GPS jammers are big threat to aircraft. Metro.co.uk. Retrieved on 2013-08-02.
"Frequently Asked Questions About Selective Availability". National Coordination Office for Space-Based Positioning, Navigation, and Timing (PNT). October 2001. Retrieved 2015-06-13. Selective Availability ended a few minutes past midnight EDT after the end of May 1, 2000. The change occurred simultaneously across the entire satellite constellation.
https://blackboard.vuw.ac.nz/bbcswebdav/pid-1444805-dt-content-rid-2193398_1/courses/2014.1.ESCI203/Esci203_2014_GPS_1.pdf (subscription required)
McNamara, Joel (2008). GPS For Dummies. John Wiley & Sons. p. 59. ISBN 0-470-45785-6. , https://books.google.com/books?id=Hbz4LYIrvuMC&pg=PA59
"NAVSTAR GPS User Equipment Introduction" (PDF). Section 1.2.2
"Notice Advisory to Navstar Users (NANU) 2016069". GPS Operations Center. Archived from the original on May 21, 2017. Retrieved January 2, 2017.
David W. Allan (1997). "The Science of Timekeeping" (PDF). Hewlett Packard. Archived (PDF) from the original on October 12, 2012.
"The Role of GPS in Precise Time and Frequency Dissemination" (PDF). GPSworld. July–August 1990. Retrieved April 27, 2014.
"GPS time accurate to 100 nanoseconds". Galleon. Retrieved October 12, 2012.
"Between-Satellite Differencing". Gmat.unsw.edu.au. Archived from the original on March 6, 2011. Retrieved October 15, 2010.
"Double differencing". Gmat.unsw.edu.au. Archived from the original on March 6, 2011. Retrieved October 15, 2010.
"Triple differencing". Gmat.unsw.edu.au. Archived from the original on March 6, 2011. Retrieved October 15, 2010.
chapter on root finding and nonlinear sets of equations
Preview of Root Finding. Books.google.com. 2007. ISBN 978-0-521-88068-8. Retrieved October 15, 2010.
"2011 John Deere StarFire 3000 Operator Manual" (PDF). John Deere. Archived from the original (PDF) on January 5, 2012. Retrieved November 13, 2011.
"Federal Communications Commission Report and Order In the Matter of Fixed and Mobile Services in the Mobile Satellite Service Bands at 1525–1559 MHz and 1626.5–1660.5 MHz" (PDF). FCC.gov. April 6, 2011. Retrieved December 13, 2011.
"Federal Communications Commission Table of Frequency Allocations" (PDF). FCC.gov. November 18, 2011. Retrieved December 13, 2011.
"FCC Docket File Number: SATASG2001030200017, "Mobile Satellite Ventures LLC Application for Assignment and Modification of Licenses and for Authority to Launch and Operate a Next-Generation Mobile Satellite System"". FCC.gov. March 1, 2001. p. 9.
"U.S. GPS Industry Council Petition to the FCC to adopt OOBE limits jointly proposed by MSV and the Industry Council". FCC.gov. September 4, 2003. Retrieved December 13, 2011.
"ORDER ON RECONSIDERATION" (PDF). Jul 3, 2003. Retrieved October 2015. Check date values in: |access-date= (help)
"Statement of Julius P. Knapp, Chief, Office of Engineering and Technology, Federal Communications Commission" (PDF). gps.gov. September 15, 2011. p. 3. Retrieved December 13, 2011.
"FCC Order, Granted LightSquared Subsidiary LLC, a Mobile Satellite Service licensee in the L-Band, a conditional waiver of the Ancillary Terrestrial Component "integrated service" rule" (PDF). Federal Communications Commission. FCC.Gov. January 26, 2011. Retrieved December 13, 2011.
"Data Shows Disastrous GPS Jamming from FCC-Approved Broadcaster". gpsworld.com. February 1, 2011. Archived from the original on February 6, 2011. Retrieved February 10, 2011.
"Javad Ashjaee GPS World webinar". gpsworld.com. December 8, 2011. Archived from the original on November 26, 2011. Retrieved December 13, 2011.
"FCC Order permitting mobile satellite services providers to provide an ancillary terrestrial component (ATC) to their satellite systems" (PDF). Federal Communications Commission. FCC.gov. February 10, 2003. Retrieved December 13, 2011.
"Federal Communications Commission Fixed and Mobile Services in the Mobile Satellite Service". Federal Communications Commission. FCC.gov. July 15, 2010. Retrieved December 13, 2011.
[3] Archived December 13, 2012, at the Wayback Machine.
Jeff Carlisle (June 23, 2011). "Testimony of Jeff Carlisle, LightSquared Executive Vice President of Regulatory Affairs and Public Policy to U.S. House Subcommittee on Aviation and Subcommittee on Coast Guard and Maritime Transportation" (PDF). Archived from the original (PDF) on September 29, 2011. Retrieved December 13, 2011.
Julius Genachowski (May 31, 2011). "FCC Chairman Genachowski Letter to Senator Charles Grassley" (PDF). Archived from the original (PDF) on January 13, 2012. Retrieved December 13, 2011.
Tessler, Joelle (April 7, 2011). "Internet network may jam GPS in cars, jets". The Sun News. Archived from the original on May 1, 2011. Retrieved April 7, 2011.
FCC press release "Spokesperson Statement on NTIA Letter – LightSquared and GPS". February 14, 2012. Accessed 2013-03-03.
Paul Riegler, FBT. "FCC Bars LightSquared Broadband Network Plan". February 14, 2012. Retrieved February 14, 2012.
"Galileo navigation satellite system goes live". dw.com. Retrieved December 17, 2016.
Beidou coverage
"Beidou satellite navigation system to cover whole world in 2020". Eng.chinamil.com.cn. Retrieved October 15, 2010.
How to easily determine the results distribution for multiple dice?
I want to calculate the probability distribution for the total of a combination of dice.
I remember that the probability of a given total is the number of combinations that sum to that total divided by the total number of combinations (assuming the dice have a uniform distribution).
What are the formulas for
the total number of combinations, and
the number of combinations that total a certain number?
probability dice
C. Ross
$\begingroup$ I think you should treat $(X_1=1 , X_2=2)$ and $(X_1=2 , X_2=1)$ as different events. $\endgroup$ – Deep North Sep 21 '15 at 6:23
Exact solutions
The number of combinations in $n$ throws is of course $6^n$.
These calculations are most readily done using the probability generating function for one die,
$$p(x) = x + x^2 + x^3 + x^4 + x^5 + x^6 = x \frac{1-x^6}{1-x}.$$
(Actually this is $6$ times the pgf--I'll take care of the factor of $6$ at the end.)
The pgf for $n$ rolls is $p(x)^n$. We can calculate this fairly directly--it's not a closed form but it's a useful one--using the Binomial Theorem:
$$p(x)^n = x^n (1 - x^6)^n (1 - x)^{-n}$$
$$= x^n \left( \sum_{k=0}^{n} {n \choose k} (-1)^k x^{6k} \right) \left( \sum_{j=0}^{\infty} {-n \choose j} (-1)^j x^j\right).$$
The number of ways to obtain a sum equal to $m$ on the dice is the coefficient of $x^m$ in this product, which we can isolate as
$$\sum_{6k + j = m - n} {n \choose k}{-n \choose j}(-1)^{k+j}.$$
The sum is over all nonnegative $k$ and $j$ for which $6k + j = m - n$; it therefore is finite and has only about $(m-n)/6$ terms. For example, the number of ways to total $m = 14$ in $n = 3$ throws is a sum of just two terms, because $11 = 14-3$ can be written only as $6 \cdot 0 + 11$ and $6 \cdot 1 + 5$:
$$-{3 \choose 0} {-3 \choose 11} + {3 \choose 1}{-3 \choose 5}$$
$$= 1 \frac{(-3)(-4)\cdots(-13)}{11!} + 3 \frac{(-3)(-4)\cdots(-7)}{5!}$$
$$= \frac{1}{2} 12 \cdot 13 - \frac{3}{2} 6 \cdot 7 = 15.$$
You can also be clever and note that the answer will be the same for $m = 7$ by the symmetry 1 <--> 6, 2 <--> 5, and 3 <--> 4, and that there's only one way to expand $7 - 3$ as $6 k + j$; namely, with $k = 0$ and $j = 4$, giving
$$ {3 \choose 0}{-3 \choose 4} = 15 \text{.}$$
The probability therefore equals $15/6^3 = 5/72$, about 7%.
By the time this gets painful, the Central Limit Theorem provides good approximations (at least to the central terms where $m$ is between $\frac{7 n}{2} - 3 \sqrt{n}$ and $\frac{7 n}{2} + 3 \sqrt{n}$: on a relative basis, the approximations it affords for the tail values get worse and worse as $n$ grows large).
I see that this formula is given in the Wikipedia article Srikant references but no justification is supplied nor are examples given. If perchance this approach looks too abstract, fire up your favorite computer algebra system and ask it to expand the $n^{\text{th}}$ power of $x + x^2 + \cdots + x^6$: you can read the whole set of values right off. E.g., a Mathematica one-liner is
With[{n=3}, CoefficientList[Expand[(x + x^2 + x^3 + x^4 + x^5 + x^6)^n], x]]
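If R is closer to hand than a CAS, the same coefficient list can be obtained by repeated polynomial convolution; here is a rough sketch (my own illustration, using base R's convolve with type = "open" for polynomial multiplication):

one.die <- rep(1, 6)                   # coefficients of x^1, ..., x^6
n <- 3
counts <- Reduce(function(a, b) convolve(a, rev(b), type = "open"),
                 replicate(n, one.die, simplify = FALSE))
names(counts) <- n:(6 * n)             # counts[k] = number of ways to total k on n dice
round(counts)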
Matt Krause
$\begingroup$ Will that mathematica code work with wolfram alpha? $\endgroup$ – user28 Oct 14 '10 at 23:56
$\begingroup$ That works. I tried your earlier version but could not make any sense of the output. $\endgroup$ – user28 Oct 15 '10 at 14:24
$\begingroup$ @Srikant: Expand[Sum[x^i,{i,1,6}]^3] also works in WolframAlpha $\endgroup$ – A. N. Other Oct 18 '10 at 7:04
$\begingroup$ @A.Wilson I believe many of those references provide a clear path to the generalization, which in this example is $(x+x^2+\cdots+x^6)(x+x^2+x^3+x^4)^3$. If you would like R code to compute these things, see stats.stackexchange.com/a/116913 for a fully implemented system. As another example, the Mathematica code is Clear[x, d]; d[n_, x_] := Sum[x^i, {i, 1, n}]; d[6, x] d[4, x]^3 // Expand $\endgroup$ – whuber♦ Dec 1 '15 at 14:41
$\begingroup$ Note that @whuber's clarification is for 1d6 + 3d4, and that should get you there. For an arbitrary wdn + vdm, (x + x^2 + ... + x^w)^n(x + x^2 + ... + x^v)^m. Additional terms are polynomials constructed and multiplied with the product in the same way. $\endgroup$ – A. Wilson Jan 5 '16 at 0:58
Yet another way to quickly compute the probability distribution of a dice roll would be to use a specialized calculator designed just for that purpose.
Torben Mogensen, a CS professor at DIKU has an excellent dice roller called Troll.
The Troll dice roller and probability calculator prints out the probability distribution (pmf, histogram, and optionally cdf or ccdf), mean, spread, and mean deviation for a variety of complicated dice roll mechanisms. Here are a few examples that show off Troll's dice roll language:
Roll 3 6-sided dice and sum them: sum 3d6.
Roll 4 6-sided dice, keep the highest 3 and sum them: sum largest 3 4d6.
Roll an "exploding" 6-sided die (i.e., any time a "6" comes up, add 6 to your total and roll again): sum (accumulate y:=d6 while y=6).
Troll's SML source code is available, if you want to see how it's implemented.
Professor Mogensen also has a 29-page paper, "Dice Rolling Mechanisms in RPGs," in which he discusses many of the dice rolling mechanisms implemented by Troll and some of the mathematics behind them.
A similar piece of free, open-source software is Dicelab, which works on both Linux and Windows.
A. N. Other
$\newcommand{red}{\color{red}}$ $\newcommand{blue}{\color{blue}}$
Let the first die be red and the second be black. Then there are 36 possible results:
\begin{array}{c|c|c|c|c|c|c} &1&2&3&4&5&6\\\hline \red{1}&\red{1},1&\red{1},2&\red{1},3&\red{1},4&\red{1},5&\red{1},6\\ &\blue{^2}&\blue{^3}&\blue{^4}&\blue{^5}&\blue{^6}&\blue{^7}\\\hline \red{2}&\red{2},1&\red{2},2&\red{2},3&\red{2},4&\red{2},5&\red{2},6\\ &\blue{^3}&\blue{^4}&\blue{^5}&\blue{^6}&\blue{^7}&\blue{^8}\\\hline \red{3}&\red{3},1&\red{3},2&\red{3},3&\red{3},4&\red{3},5&\red{3},6\\ &\blue{^4}&\blue{^5}&\blue{^6}&\blue{^7}&\blue{^8}&\blue{^9}\\\hline \red{4}&\red{4},1&\red{4},2&\red{4},3&\red{4},4&\red{4},5&\red{4},6\\ &\blue{^5}&\blue{^6}&\blue{^7}&\blue{^8}&\blue{^9}&\blue{^{10}}\\\hline \red{5}&\red{5},1&\red{5},2&\red{5},3&\red{5},4&\red{5},5&\red{5},6\\ &\blue{^6}&\blue{^7}&\blue{^8}&\blue{^9}&\blue{^{10}}&\blue{^{11}}\\\hline \red{6}&\red{6},1&\red{6},2&\red{6},3&\red{6},4&\red{6},5&\red{6},6\\ &\blue{^7}&\blue{^8}&\blue{^9}&\blue{^{10}}&\blue{^{11}}&\blue{^{12}}\\\hline \end{array}
Each of these 36 ($\red{\text{red}},\text{black}$) results are equally likely.
When you sum the numbers on the faces (total in $\blue{\text{blue}}$), several of the (red,black) results end up with the same total -- you can see this with the table in your question.
So for example there's only one way to get a total of $2$ (i.e. only the event ($\red{1},1$)), but there's two ways to get $3$ (i.e. the elementary events ($\red{2},1$) and ($\red{1},2$)). So a total of $3$ is twice as likely to come up as $2$. Similarly there's three ways of getting $4$, four ways of getting $5$ and so on.
Now since you have 36 possible (red,black) results, the total number of ways of getting all the different totals is also 36, so you should divide by 36 at the end. Your total probability will be 1, as it should be.
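If you want to verify this by brute force, a short R sketch (my addition, any language would do) enumerates all 36 equally likely (red, black) pairs and tallies the totals:

sums <- outer(1:6, 1:6, `+`)   # all 36 equally likely (red, black) totals
table(sums)                    # how many ways each total 2, ..., 12 can occur
table(sums) / 36               # and the corresponding probabilities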
Glen_b -Reinstate Monica
$\begingroup$ Wow, the table is beautiful! $\endgroup$ – Deep North Sep 21 '15 at 10:46
$\begingroup$ Very pretty indeed $\endgroup$ – wolfies Sep 21 '15 at 12:29
There's a very neat way of computing the combinations or probabilities in a spreadsheet (such as excel) that computes the convolutions directly.
I'll do it in terms of probabilities and illustrate it for six sided dice but you can do it for dice with any number of sides (including adding different ones).
(btw it's also easy in something like R or matlab that will do convolutions)
1. Start with a clean sheet, in a few columns, and move down a bunch of rows from the top (more than 6).

2. Put the value 1 in a cell. That's the probability associated with 0 dice. Put a 0 to its left; that's the value column - continue down from there with 1, 2, 3, ... as far as you need.

3. Move one column to the right and down a row from the '1'. Enter the formula "=sum(" then left-arrow up-arrow (to highlight the cell with 1 in it), hit ":" (to start entering a range) and then up-arrow 5 times, followed by ")/6" and press Enter - so you end up with a formula like =sum(c4:c9)/6 (where here c9 is the cell with the 1 in it).

4. Copy the formula and paste it to the 5 cells below it. They should each contain 0.16667 (ish). Don't type anything into the empty cells these formulas refer to!

5. Move down 1 and to the right 1 from the top of that column of values and paste a total of another 11 values. These will be the probabilities for two dice. It doesn't matter if you paste a few too many, you'll just get zeroes.

6. Repeat step 3 for the next column for three dice, and again for four, five, etc. dice.
We see here that the probability of rolling $12$ on 4d6 is 0.096451 (if you multiply by $6^4$ you'll be able to write it as an exact fraction).
If you're adept with Excel - things like copying a formula from a cell and pasting into many cells in a column, you can generate all tables up to say 10d6 in about a minute or so (possibly faster if you've done it a few times).
If you want combination counts instead of probabilities, don't divide by 6.
If you want dice with different numbers of faces, you can sum $k$ (rather than 6) cells and then divide by $k$. You can mix dice across columns (e.g. do a column for d6 and one for d8 to get the probability function for d6+d8); the sketch below shows the same idea in R.
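Here is a rough R version of those running convolutions (my own sketch using base R's convolve, not the spreadsheet itself):

die <- function(k) rep(1/k, k)                                  # pmf of a fair k-sided die
add.dice <- function(p, q) convolve(p, rev(q), type = "open")   # convolve two pmfs

p.d6.d8 <- add.dice(die(6), die(8))                             # d6 + d8, totals 2..14
names(p.d6.d8) <- 2:14
round(p.d6.d8, 5)

p.4d6 <- Reduce(add.dice, replicate(4, die(6), simplify = FALSE))  # 4d6, totals 4..24
names(p.4d6) <- 4:24
round(p.4d6["12"], 6)                                           # 0.096451, as in the table above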
I explained the exact solution earlier (see below). I will now offer an approximate solution which may suit your needs better.
Let:
$X_i$ be the outcome of a roll of an $s$-faced die, where $i=1, \ldots, n$,
$S$ be the total of all $n$ dice, and
$\bar{X}$ be the sample average.
By definition, we have:
$\bar{X} = \frac{\sum_iX_i}{n}$
$\bar{X} = \frac{S}{n}$
The idea now is to visualize the process of observing the $X_i$ as the outcome of throwing the same die $n$ times instead of as the outcome of throwing $n$ dice. Thus, invoking the central limit theorem (and ignoring technicalities associated with going from a discrete distribution to a continuous one), we have as $n \rightarrow \infty$:
$\bar{X} \sim N(\mu, \sigma^2/n)$
$\mu = (s+1)/2$ is the mean of the roll of a single dice and
$\sigma^2 = (s^2-1)/12$ is the associated variance.
The above is obviously an approximation as the underlying distribution $X_i$ has discrete support.
$S = n \bar{X}$.
Thus, we have:
$S \sim N(n \mu, n \sigma^2)$.
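To see how accurate this approximation is, one could compare it against the exact distribution; here is a quick R sketch (my own illustration, with the exact pmf computed by convolution as in other answers):

exact.pmf <- function(n, s) {                 # exact pmf of the sum, by repeated convolution
  p <- Reduce(function(a, b) convolve(a, rev(b), type = "open"),
              replicate(n, rep(1/s, s), simplify = FALSE))
  names(p) <- n:(n * s)
  p
}

n <- 10; s <- 6
k <- n:(n * s)
mu    <- n * (s + 1) / 2                      # n times the mean of one die
sigma <- sqrt(n * (s^2 - 1) / 12)             # sqrt of n times the variance of one die
approx <- pnorm(k + 0.5, mu, sigma) - pnorm(k - 0.5, mu, sigma)   # continuity correction
max(abs(exact.pmf(n, s) - approx))            # worst absolute error over the support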
Wikipedia has a brief explanation as to how to calculate the required probabilities. I will elaborate a bit more as to why the explanation there makes sense. To the extent possible I have used similar notation to the Wikipedia article.
Suppose that you have $n$ dice each with $s$ faces and you want to compute the probability that, on a single roll of all $n$ dice, the total adds up to $k$. The approach is as follows:
Define $F_{s,n}(k)$: the probability that you get a total of $k$ on a single roll of $n$ dice with $s$ faces.
$F_{s,1}(k) = \frac{1}{s}$
The above states that if you just have one dice with $s$ faces the probability of obtaining a total $k$ between 1 and s is the familiar $\frac{1}{s}$.
Consider the situation when you roll two dice: you can obtain a sum of $k$ as follows: the first roll is between $1$ and $k-1$, and the corresponding roll for the second one is between $k-1$ and $1$. Thus, we have:
$F_{s,2}(k) = \sum_{i=1}^{i=k-1}{F_{s,1}(i) F_{s,1}(k-i)}$
Now consider a roll of three dice: you can get a sum of $k$ if you roll between 1 and $k-2$ on the first die and the sum on the remaining two dice is between $2$ and $k-1$. Thus,

$F_{s,3}(k) = \sum_{i=1}^{i=k-2}{F_{s,1}(i) F_{s,2}(k-i)}$

Continuing the above logic, we get the recursion equation:
$F_{s,n}(k) = \sum_{i=1}^{i=k-n+1}{F_{s,1}(i) F_{s,n-1}(k-i)}$
See the Wikipedia link for more details.
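If you would rather compute than look up, the recursion translates almost line for line into code; a rough R sketch (my own, building up counts one die at a time and dividing by $s^n$ at the end) could be:

dice.pmf <- function(n, s) {
  counts <- c(rep(1, s), rep(0, (n - 1) * s))   # one die: one way each for totals 1..s
  if (n > 1) for (m in 2:n) {
    nxt <- rep(0, n * s)
    for (k in m:(m * s)) {
      i <- max(1, k - (m - 1) * s):min(s, k - (m - 1))   # faces the m-th die can show
      nxt[k] <- sum(counts[k - i])
    }
    counts <- nxt
  }
  pmf <- counts[n:(n * s)] / s^n
  names(pmf) <- n:(n * s)
  pmf
}

dice.pmf(3, 6)["14"]    # 15/216, matching the worked example further up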
Glen_b -Reinstate Monica
user28
$\begingroup$ @Srikant Excellent answer, but does that function resolve to something arithmetic (ie: not recursive)? $\endgroup$ – C. Ross Oct 14 '10 at 20:06
$\begingroup$ @C. Ross Unfortunately I do not think so. But, I suspect that the recursion should not be that hard as long as are dealing with reasonably small n and small s. You could just build-up a lookup table and use that repeatedly as needed. $\endgroup$ – user28 Oct 14 '10 at 20:10
$\begingroup$ The wikipedia page you linked has a simple nonrecursive formula which is a single sum. One derivation is in whuber's answer. $\endgroup$ – Douglas Zare Jul 17 '12 at 23:44
$\begingroup$ The wiki link anchor is dead, do you know of a replacement? $\endgroup$ – Midnighter Apr 13 '14 at 20:21
Characteristic functions can make computations involving the sums and differences of random variables really easy. Mathematica has lots of functions to work with statistical distributions, including a builtin to transform a distribution into its characteristic function.
I'd like to illustrate this with two concrete examples: (1) Suppose you wanted to determine the results of rolling a collection of dice with differing numbers of sides, e.g., roll two six-sided dice plus one eight-sided die (i.e., 2d6+d8)? Or (2) suppose you wanted to find the difference of two dice rolls (e.g., d6-d6)?
An easy way to do this would be to use the characteristic functions of the underlying discrete uniform distributions. If a random variable $X$ has a probability mass function $f$, then its characteristic function $\varphi_X(t)$ is just the discrete Fourier Transform of $f$, i.e., $\varphi_X(t) = \mathcal{F}\{f\}(t) = E[e^{i t X}]$. A theorem tells us:
If the independent random variables $X$ and $Y$ have corresponding probability mass functions $f$ and $g$, then the pmf $h$ of the sum $X + Y$ of these RVs is the convolution of their pmfs $h(n) = (f \ast g)(n) = \sum_{m=-\infty}^\infty f(m) g(n-m)$.
We can use the convolution property of Fourier Transforms to restate this more simply in terms of characteristic functions:
The characteristic function $\varphi_{X+Y}(t)$ of the sum of independent random variables $X$ and $Y$ equals the product of their characteristic functions $\varphi_{X}(t) \varphi_{Y}(t)$.
This Mathematica function will make the characteristic function for an s-sided die:
MakeCf[s_] :=
Module[{Cf},
Cf := CharacteristicFunction[DiscreteUniformDistribution[{1, s}],
t];
Cf]
The pmf of a distribution can be recovered from its characteristic function, because Fourier Transforms are invertible. Here is the Mathematica code to do it:
RecoverPmf[Cf_] :=
Module[{F},
F[y_] := SeriesCoefficient[Cf /. t -> -I*Log[x], {x, 0, y}];
F]
Continuing our example, let F be the pmf that results from 2d6+d8.
F := RecoverPmf[MakeCf[6]^2 MakeCf[8]]
There are $6^2 \cdot 8 = 288$ outcomes. The domain of support of F is $S=\{3,\ldots,20\}$. Three is the min because you're rolling three dice. And twenty is the max because $20 = 2 \cdot 6 + 8$. If you want to see the image of F, compute
In:= F /@ Range[3, 20]
Out= {1/288, 1/96, 1/48, 5/144, 5/96, 7/96, 13/144, 5/48, 1/9, 1/9, \
5/48, 13/144, 7/96, 5/96, 5/144, 1/48, 1/96, 1/288}
If you want to know the number of outcomes that sum to 10, compute
In:= 6^2 8 F[10]
Out= 30
If the independent random variables $X$ and $Y$ have corresponding probability mass functions $f$ and $g$, then the pmf $h$ of the difference $X - Y$ of these RVs is the cross-correlation of their pmfs $h(n) = (f \star g)(n) = \sum_{m=-\infty}^\infty f(m) g(n+m)$.
We can use the cross-correlation property of Fourier Transforms to restate this more simply in terms of characteristic functions:
The characteristic function $\varphi_{X-Y}(t)$ of the difference of two independent random variables ${X,Y}$ equals the product of the characteristic function $\varphi_{X}(t)$ and $\varphi_{Y}(-t)$ (N.B. the negative sign in front of the variable t in the second characteristic function).
So, using Mathematica to find the pmf G of d6-d6:
G := RecoverPmf[MakeCf[6] (MakeCf[6] /. t -> -t)]
There are $6^2 = 36$ outcomes. The domain of support of G is $S=\{-5,\ldots,5\}$. -5 is the min because $-5=1-6$. And 5 is the max because $6-1=5$. If you want to see the image of G, compute
In:= G /@ Range[-5, 5]
Out= {1/36, 1/18, 1/12, 1/9, 5/36, 1/6, 5/36, 1/9, 1/12, 1/18, 1/36}
$\begingroup$ Of course, for discrete distributions, including distributions of finite support (like those in question here), the cf is just the probability generating function evaluated at x = exp(i t), making it a more complicated way of encoding the same information. $\endgroup$ – whuber♦ Oct 17 '10 at 21:24
$\begingroup$ @whuber: As you say, the cf, mgf, and pgf are more-or-less the same and easily transformable into one another, however Mathematica has a cf builtin that works with all the probability distributions it knows about, whereas it doesn't have a pgf builtin. This makes the Mathematica code for working with sums (and differences) of dice using cfs particularly elegant to construct, regardless of the complexity of dice expression as I hope I demonstrated above. Plus, it doesn't hurt to know how cfs, FTs, convolutions, and cross-correlations can help solve problems like this. $\endgroup$ – A. N. Other Oct 18 '10 at 2:38
$\begingroup$ @Elisha: Good points, all of them. I guess what I wonder about the most is whether your ten or so lines of Mathematica code are really more "elegant" or efficient than the single line I proposed earlier (or the even shorter line Srikant fed to Wolfram Alpha). I suspect the internal manipulations with characteristic functions are more arduous than the simple convolutions needed to multiply polynomials. Certainly the latter are easier to implement in most other software environments, as Glen_b's answer indicates. The advantage of your approach is its greater generality. $\endgroup$ – whuber♦ Oct 18 '10 at 3:35
Here's another way to calculate the probability distribution of the sum of two dice by hand using convolutions.
To keep the example really simple, we're going to calculate the probability distribution of the sum of a three-sided die (d3) whose random variable we will call X and a two-sided die (d2) whose random variable we'll call Y.
You're going to make a table. Across the top row, write the probability distribution of X (outcomes of rolling a fair d3). Down the left column, write the probability distribution of Y (outcomes of rolling a fair d2).
You're going to construct the outer product of the top row of probabilities with the left column of probabilities. For example, the lower-right cell will be the product of Pr[X=3]=1/3 times Pr[Y=2]=1/2 as shown in the accompanying figure. In our simplistic example, all the cells equal 1/6.
Next, you're going to sum along the oblique lines of the outer-product matrix as shown in the accompanying diagram. Each oblique line passes through one-or-more cells which I've colored the same: The top line passes through one blue cell, the next line passes through two red cells, and so on.
Each of the sums along the obliques represents a probability in the resulting distribution. For example, the sum of the red cells equals the probability of the two dice summing to 3. These probabilities are shown down the right side of the accompanying diagram.
This technique can be used with any two discrete distributions with finite support. And you can apply it iteratively. For example, if you want to know the distribution of three six-sided dice (3d6), you can first calculate 2d6=d6+d6; then 3d6=d6+2d6.
There is a free (but closed license) programming language called J. It's an array-based language with its roots in APL. It has builtin operators to perform outer products and sums along the obliques in matrices, making the technique I illustrated quite simple to implement.
In the following J code, I define two verbs. First the verb d constructs an array representing the pmf of an s-sided die. For example, d 6 is the pmf of a 6-sided die. Second, the verb conv finds the outer product of two arrays and sums along the oblique lines. So conv~ d 6 prints out the pmf of 2d6:
d=:$%
conv=:+//.@(*/)
|:(2+i.11),:conv~d 6
2 0.0277778
3 0.0555556
4 0.0833333
5 0.111111
6 0.138889
7 0.166667
8 0.138889
9 0.111111
10 0.0833333
11 0.0555556
12 0.0277778
As you can see, J is cryptic, but terse.
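The same outer-product-and-oblique-sum idea can be sketched in R as well (this is only a rough translation of the d3+d2 example above, not part of the J write-up):

px <- rep(1/3, 3)                      # pmf of a fair d3
py <- rep(1/2, 2)                      # pmf of a fair d2
m  <- outer(py, px)                    # outer product of the two pmfs
pz <- tapply(m, row(m) + col(m), sum)  # sum along the obliques (cells with equal totals)
pz                                     # probabilities of totals 2, 3, 4, 5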
This is actually a surprisingly complicated question. Luckily for you, there exist an exact solution which is very well explained here:
http://mathworld.wolfram.com/Dice.html
The probability you are looking for is given by equation (10): "The probability of obtaining p points (a roll of p) on n s-sided dice".
In your case: p = the observed score (sum of all dice), n = the number of dice, s = 6 (6-sided dice). This gives you the following probability mass function:
$$ P(X_n = p) = \frac{1}{s^n} \sum_{k=0}^{\lfloor(p-n)/s\rfloor} (-1)^k {n \choose k} {p-sk-1 \choose n-1} $$
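If you want to evaluate this sum numerically, a small R sketch of the formula (my own, valid for $n \le p \le ns$) could be:

dice.prob <- function(p, n, s = 6) {
  k <- 0:floor((p - n) / s)
  sum((-1)^k * choose(n, k) * choose(p - s * k - 1, n - 1)) / s^n
}
dice.prob(14, 3)                       # 15/216, as in the worked example further up
sum(sapply(3:18, dice.prob, n = 3))    # sanity check: the pmf sums to 1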
Felix
$\begingroup$ Welcome to our site, Felix! $\endgroup$ – whuber♦ Feb 26 '18 at 20:30
Love the username! Well done :)
The outcomes you should count are the dice rolls, all $6\times 6=36$ of them as shown in your table.
For example, $\frac{1}{36}$ of the time the sum is $2$, and $\frac{2}{36}$ of the time the sum is $3$, and $\frac{3}{36}$ of the time the sum is $4$, and so on.
Creosote
$\begingroup$ I'm really confused by this. I answered a very recent newbie question from someone called die_hard, who apparently no longer exists, then found my answer attached to this ancient thread! $\endgroup$ – Creosote Sep 21 '15 at 19:10
$\begingroup$ Your answer to the question at stats.stackexchange.com/questions/173434/… was merged with the answers to this duplicate. $\endgroup$ – whuber♦ Feb 16 '17 at 18:47
You can solve this with a recursive formula. In that case the probabilities of the rolls with $n$ dice are calculated by the rolls with $n-1$ dice.
$$a_n(l) = \sum_{l-6 \leq k \leq l-1 \\ \text{ and } n-1 \leq k \leq 6(n-1)} a_{n-1}(k)$$
The first limit for $k$ in the summation covers the six preceding totals: e.g. if you want to roll 13 with 3 dice, then you can do this only if your first two dice roll between 7 and 12.
The second limit for $k$ in the summation reflects the range of totals you can roll with $n-1$ dice.
1 1 1 1 1 1
1 2 3 4 5 6 5 4 3 2 1
1 3 6 10 15 21 25 27 27 25 21 15 10 6 3 1
1 4 10 20 35 56 80 104 125 140 146 140 125 104 80 56 35 20 10 4 1
1 5 15 35 70 126 205 305 420 540 651 735 780 780 735 651 540 420 305 205 126 70 35 15 5 1
edit: The above answer was an answer from another question that was merged into the question by C.Ross
The code below shows how the calculations for that answer (to the question asking for 5 dice) were performed in R. They are similar to the summations performed in Excel in the answer by Glen B.
# recursive formula
nextdice <- function(n,a,l) {
  x = 0
  for (i in 1:6) {
    if ((l-i >= n-1) & (l-i <= 6*(n-1))) {
      x = x + a[l-i-(n-2)]
    }
  }
  return(x)
}

# generating combinations for rolling with up to 5 dice
a_1 <- rep(1,6)
a_2 <- sapply(2:12, FUN = function(x) {nextdice(2,a_1,x)})
a_3 <- sapply(3:18, FUN = function(x) {nextdice(3,a_2,x)})
a_4 <- sapply(4:24, FUN = function(x) {nextdice(4,a_3,x)})
a_5 <- sapply(5:30, FUN = function(x) {nextdice(5,a_4,x)})
Sextus Empiricus
$\begingroup$ @user67275 your question got merged into this question. But I wonder what your idea was behind your formula: "I used the formula: no of ways to get 8: 5_H_2 = 6_C_2 = 15" ? $\endgroup$ – Sextus Empiricus Dec 12 '17 at 15:43
One approach is to say that the probability $X_n=k$ is the coefficient of $x^{k}$ in the expansion of the generating function $$\left(\frac{x^6+x^5+x^4+x^3+x^2+x^1}{6}\right)^n=\left(\frac{x(1-x^6)}{6(1-x)}\right)^n$$
So for example with six dice and a target of $k=22$, you will find $P(X_6=22)= \frac{4221}{6^6} \approx 0.0905$. That link (to a math.stackexchange question) gives other approaches too
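Such coefficients are easy to check numerically; for instance, a quick R sketch (my addition, using repeated convolution as in other answers):

p <- Reduce(function(a, b) convolve(a, rev(b), type = "open"),
            replicate(6, rep(1/6, 6), simplify = FALSE))
names(p) <- 6:36
round(p["22"] * 6^6)    # 4221 ways, so P(X_6 = 22) = 4221/6^6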
Henry
Tagged: trace of a matrix
A Relation between the Dot Product and the Trace
Let $\mathbf{v}$ and $\mathbf{w}$ be two $n \times 1$ column vectors.
Prove that $\tr ( \mathbf{v} \mathbf{w}^\trans ) = \mathbf{v}^\trans \mathbf{w}$.
Does the Trace Commute with Matrix Multiplication? Is $\tr (A B) = \tr (A) \tr (B) $?
Let $A$ and $B$ be $n \times n$ matrices.
Is it always true that $\tr (A B) = \tr (A) \tr (B) $?
If it is true, prove it. If not, give a counterexample.
Is the Trace of the Transposed Matrix the Same as the Trace of the Matrix?
Let $A$ be an $n \times n$ matrix.
Is it true that $\tr ( A^\trans ) = \tr(A)$? If it is true, prove it. If not, give a counterexample.
Express the Eigenvalues of a 2 by 2 Matrix in Terms of the Trace and Determinant
Let $A=\begin{bmatrix} a & b\\ c& d \end{bmatrix}$ be a $2\times 2$ matrix.
Express the eigenvalues of $A$ in terms of the trace and the determinant of $A$.
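For reference, the key fact (stated here only as a hint, not a full solution) is that the characteristic polynomial of a $2\times 2$ matrix is determined by the trace and the determinant:
\[\det(\lambda I-A)=\lambda^2-\tr(A)\,\lambda+\det(A), \qquad \text{so} \qquad \lambda=\frac{\tr(A)\pm\sqrt{\tr(A)^2-4\det(A)}}{2}.\]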
An Example of a Matrix that Cannot Be a Commutator
Let $I$ be the $2\times 2$ identity matrix.
Then prove that $-I$ cannot be a commutator $[A, B]:=ABA^{-1}B^{-1}$ for any $2\times 2$ matrices $A$ and $B$ with determinant $1$.
True or False: If $A, B$ are 2 by 2 Matrices such that $(AB)^2=O$, then $(BA)^2=O$
Let $A$ and $B$ be $2\times 2$ matrices such that $(AB)^2=O$, where $O$ is the $2\times 2$ zero matrix.
Determine whether $(BA)^2$ must be $O$ as well. If so, prove it. If not, give a counter example.
The Formula for the Inverse Matrix of $I+A$ for a $2\times 2$ Singular Matrix $A$
Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula:
\[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\]
Using the formula, calculate the inverse matrix of $\begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}$.
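As an illustrative sketch of how the last part can be read (assuming the intended decomposition is to split off the identity), write the given matrix as $I+A$ with $A$ singular:
\[\begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}=I+\begin{bmatrix} 1 & 1\\ 1& 1 \end{bmatrix}, \qquad (I+A)^{-1}=I-\frac{1}{1+2}\begin{bmatrix} 1 & 1\\ 1& 1 \end{bmatrix}=\begin{bmatrix} \tfrac{2}{3} & -\tfrac{1}{3}\\ -\tfrac{1}{3}& \tfrac{2}{3} \end{bmatrix},\]
where $\det(A)=0$ and $\tr(A)=2\neq -1$, so the formula applies.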
Determine Whether Given Matrices are Similar
(a) Is the matrix $A=\begin{bmatrix}
\end{bmatrix}$ similar to the matrix $B=\begin{bmatrix}
\end{bmatrix}$?
(b) Is the matrix $A=\begin{bmatrix}
(c) Is the matrix $A=\begin{bmatrix}
-1 & 6\\
-2& 6
(d) Is the matrix $A=\begin{bmatrix}
If Two Matrices are Similar, then their Determinants are the Same
Prove that if $A$ and $B$ are similar matrices, then their determinants are the same.
Trace, Determinant, and Eigenvalue (Harvard University Exam Problem)
(a) A $2 \times 2$ matrix $A$ satisfies $\tr(A^2)=5$ and $\tr(A)=3$.
Find $\det(A)$.
(b) A $2 \times 2$ matrix has two parallel columns and $\tr(A)=5$. Find $\tr(A^2)$.
(c) A $2\times 2$ matrix $A$ has $\det(A)=5$ and positive integer eigenvalues. What is the trace of $A$?
If 2 by 2 Matrices Satisfy $A=AB-BA$, then $A^2$ is Zero Matrix
Let $A, B$ be complex $2\times 2$ matrices satisfying the relation
\[A=AB-BA.\]
Prove that $A^2=O$, where $O$ is the $2\times 2$ zero matrix.
Matrix $XY-YX$ Never Be the Identity Matrix
Let $I$ be the $n\times n$ identity matrix, where $n$ is a positive integer. Prove that there are no $n\times n$ matrices $X$ and $Y$ such that
\[XY-YX=I.\]
The Vector Space Consisting of All Traceless Diagonal Matrices
Let $V$ be the set of all $n \times n$ diagonal matrices whose traces are zero.
That is,
\begin{equation*}
V:=\left\{ A=\begin{bmatrix}
a_{11} & 0 & \dots & 0 \\
0 &a_{22} & \dots & 0 \\
0 & 0 & \ddots & \vdots \\
0 & 0 & \dots & a_{nn}
\end{bmatrix} \quad \middle| \quad
\begin{array}{l}
a_{11}, \dots, a_{nn} \in \C,\\
\tr(A)=0 \\
\end{array}
\right\}
\end{equation*}
Let $E_{ij}$ denote the $n \times n$ matrix whose $(i,j)$-entry is $1$ and zero elsewhere.
(a) Show that $V$ is a subspace of the vector space $M_n$ over $\C$ of all $n\times n$ matrices. (You may assume without a proof that $M_n$ is a vector space.)
(b) Show that matrices
\[E_{11}-E_{22}, \, E_{22}-E_{33}, \, \dots,\, E_{n-1\, n-1}-E_{nn}\] are a basis for the vector space $V$.
(c) Find the dimension of $V$.
Matrices Satisfying $HF-FH=-2F$
Let $F$ and $H$ be an $n\times n$ matrices satisfying the relation
\[HF-FH=-2F.\]
(a) Find the trace of the matrix $F$.
(b) Let $\lambda$ be an eigenvalue of $H$ and let $\mathbf{v}$ be an eigenvector corresponding to $\lambda$. Show that there exists a positive integer $N$ such that $F^N\mathbf{v}=\mathbf{0}$.
Trace of the Inverse Matrix of a Finite Order Matrix
Let $A$ be an $n\times n$ matrix such that $A^k=I_n$, where $k\in \N$ and $I_n$ is the $n \times n$ identity matrix.
Show that the trace of $(A^{-1})^{\trans}$ is the conjugate of the trace of $A$. That is, show that $\tr((A^{-1})^{\trans})=\overline{\tr(A)}$.
Stochastic Matrix (Markov Matrix) and its Eigenvalues and Eigenvectors
(a) Let
\[A=\begin{bmatrix}
a_{11} & a_{12}\\
a_{21}& a_{22}
\end{bmatrix}\] be a matrix such that $a_{11}+a_{12}=1$ and $a_{21}+a_{22}=1$. Namely, the sum of the entries in each row is $1$.
(Such a matrix is called (right) stochastic matrix (also termed probability matrix, transition matrix, substitution matrix, or Markov matrix).)
Then prove that the matrix $A$ has an eigenvalue $1$.
(b) Find all the eigenvalues of the matrix
\[B=\begin{bmatrix}
0.3 & 0.7\\
0.6& 0.4
\end{bmatrix}.\]
(c) For each eigenvalue of $B$, find the corresponding eigenvectors.
Finite Order Matrix and its Trace
Let $A$ be an $n\times n$ matrix and suppose that $A^r=I_n$ for some positive integer $r$. Then show that
(a) $|\tr(A)|\leq n$.
(b) If $|\tr(A)|=n$, then $A=\zeta I_n$ for an $r$-th root of unity $\zeta$.
(c) $\tr(A)=n$ if and only if $A=I_n$.
If Every Trace of a Power of a Matrix is Zero, then the Matrix is Nilpotent
Let $A$ be an $n \times n$ matrix such that $\tr(A^n)=0$ for all $n \in \N$.
Then prove that $A$ is a nilpotent matrix. Namely, there exists a positive integer $m$ such that $A^m$ is the zero matrix.
Questions About the Trace of a Matrix
Let $A=(a_{i j})$ and $B=(b_{i j})$ be $n\times n$ real matrices for some $n \in \N$. Then answer the following questions about the trace of a matrix.
(a) Express $\tr(AB^{\trans})$ in terms of the entries of the matrices $A$ and $B$. Here $B^{\trans}$ is the transpose matrix of $B$.
(b) Show that $\tr(AA^{\trans})$ is the sum of the square of the entries of $A$.
(c) Show that if $A$ is a nonzero symmetric matrix, then $\tr(A^2)>0$.